diff --git a/data_all_eng_slimpj/shuffled/split2/finalzfzg b/data_all_eng_slimpj/shuffled/split2/finalzfzg new file mode 100644 index 0000000000000000000000000000000000000000..c9da2afaa6e597c3449a4bda932cc625875530bf --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzfzg @@ -0,0 +1,5 @@ +{"text":"\\section{Lower Bound}\n\nTo prove the lower bound on the round complexity of approximating (weighted) diameter in the quantum CONGEST model, we combine the reduction in~\\cite{Elkin06,SarmaHKKNPPW12,ElkinKNP14} and the graph gadget in~\\cite{AbboudCK16}.\n\n\n\n\\subsection{Reduction from Server Model}\n\\label{sec:reduction}\n\nWe briefly outline the reduction introduced by Elkin et al.~\\cite{Elkin06,SarmaHKKNPPW12,ElkinKNP14} from the Server model to prove the hardness of certain graph problems such as diameter and radius.\nWe will introduce a distributed network $G=(V,E)$ and embed a certain two-argument function $F:\\{0,1\\}^k\\times\\{0,1\\}^k\\to\\{0,1\\}$ into the network by showing that if the instance on the network $G$ has a low round-complexity protocol in the quantum CONGEST model, then there exists a low communication-complexity protocol for $F$ in the quantum Server model.\nThus, proving the hardness of diameter and radius in the quantum CONGEST model is reduced to proving lower bounds on the communication complexity in the quantum Server model.\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.5\\linewidth]{figures\/example.pdf}\n\\caption{An example of the constructed graph $G$.}\n\\label{fig:example}\n\\end{figure}\n\nThe network $G=(V,E)$ is depicted in Figure~\\ref{fig:example}, where $V=V_S\\uplus V_A\\uplus V_B$ and $E=E_S\\uplus E_A\\uplus E_B\\uplus E'$.\nWe use $G[U]$ to denote the subgraph induced by vertex set $U\\subseteq V$; then $E_S,E_A,E_B$ are the edges in $G[V_S],G[V_A],G[V_B]$, respectively, and $E'$ denotes the edges between $V_S$ and $V_A\\uplus V_B$.\n\n$G[V_S]$ consists of a full binary tree of height $h$ and $m$ disjoint paths of length $2^h-1$.\nEach of the $2^h$ leaves of the binary tree is connected to the nodes on the paths as depicted in Figure~\\ref{fig:example}.\nSuppose the nodes at depth $i$ of the tree are $t_{i,1},\\cdots,t_{i,2^i}$ and the nodes on the $i$-th path are $p_{i,1},\\ldots,p_{i,2^h}$ from left to right.\nThen $t_{h,1},\\ldots, t_{h,2^h}$ are the leaves of the binary tree in $G[V_S]$.\nFor each $i\\in[1,m]$ and $j\\in[1,2^h]$, there is an edge between $t_{h,j}$ and $p_{i,j}$.\nThus,\n$$\n\\begin{aligned}\nV_S & =\\left\\{t_{i,j}:i\\in[0,h],j\\in[1,2^i]\\right\\} \\\\\n& \\uplus\\left\\{p_{i,j}:i\\in[1,m],j\\in[1,2^h]\\right\\}, \\\\\nE_S & =\\left\\{\\{t_{i,j},t_{i-1,\\lceil j\/2\\rceil}\\}:i\\in[1,h],j\\in[1,2^i]\\right\\} \\\\\n& \\uplus\\left\\{\\{p_{i,j},p_{i,j-1}\\}:i\\in[1,m],j\\in[2,2^h]\\right\\} \\\\\n& \\uplus\\left\\{\\{t_{h,j},p_{i,j}\\}:i\\in[1,m],j\\in[1,2^h]\\right\\}.\n\\end{aligned}\n$$\n$V_A$ contains at least $m$ nodes; for each $1\\leq i\\leq m$, one of them is connected to $p_{i,1}$.\n$V_B$ contains at least $m$ nodes; for each $1\\leq i\\leq m$, one of them is connected to $p_{i,2^h}$.\nThese $2m$ edges are contained in $E'$.\nThe subgraphs $G[V_A]$ and $G[V_B]$ are determined by Alice's input and Bob's input, respectively.\n\n\\bigskip\n\nThe following lemma gives an efficient simulation of algorithms on the network $G$ by protocols in the quantum Server model.\n\n\\begin{lemma}[Quantum Simulation Lemma]\nSuppose Alice and Bob are given $(V_A,E_A)$ and $(V_B,E_B)$, respectively.\nFor any $T$-round ($T<2^h\/2$) 
distributed algorithm on the network $G$ described above, there exists a communication protocol for Alice and Bob in the quantum Server model to simulate the algorithm with communication complexity $O(T\\cdot h\\cdot B)$, where $B$ denotes the bandwidth in the CONGEST model.\n\\label{lem:simulation}\n\\end{lemma}\n\n\\begin{proof}\nThe proof of Lemma~\\ref{lem:simulation} closely follows the proof in \\cite[Proof of Theorem 3.5]{ElkinKNP14}.\nThe protocol we will construct simulates the distributed algorithm round by round.\nThus, it also has $T<2^h\/2$ rounds of communication.\nIn the beginning, the server simulates all the nodes in $V_S$, which are independent of Alice's and Bob's inputs.\nAt the end of the $r$-th round, the server simulates $p_{i,1+r},\\cdots,p_{i,2^h-r}$ on the $i$-th path and nodes $t_{h,1+r},\\cdots,t_{h,2^h-r}$ along with their ancestors on the binary tree, while Alice simulates the nodes on the left side and Bob simulates those on the right side.\nMore formally, at the end of the $r$-th round, the server simulates\n$$\\left\\{p_{i,j}:i\\in[1,m],j\\in[1+r,2^h-r]\\right\\}\\cup\\left\\{t_{i,j}:i\\in[0,h],j\\in\\left[\\left\\lceil(1+r)\/2^{h-i}\\right\\rceil,\\left\\lceil(2^h-r)\/2^{h-i}\\right\\rceil\\right]\\right\\};$$\nAlice simulates\n$$V_A\\cup\\left\\{p_{i,j}:i\\in[1,m],j\\in[1,1+r)\\right\\}\\cup\\left\\{t_{i,j}:i\\in[0,h],j\\in\\left[1,\\left\\lceil(1+r)\/2^{h-i}\\right\\rceil\\right)\\right\\};$$\nBob simulates\n$$V_B\\cup\\left\\{p_{i,j}:i\\in[1,m],j\\in(2^h-r,2^h]\\right\\}\\cup\\left\\{t_{i,j}:i\\in[0,h],j\\in\\left(\\left\\lceil(2^h-r)\/2^{h-i}\\right\\rceil,2^i\\right]\\right\\}.$$\n\nWe describe the simulation of the computation and communication of a processor $v$ in the $r$-th round, and count the total communication complexity.\n\\begin{itemize}\n\\item If $v$ is owned by Alice or the server in the $(r-1)$-th round and will be owned by Alice in the $r$-th round, Alice needs the local information of $v$ in the $(r-1)$-th round and the messages from $\\Gamma(v)$ (the neighbours of $v$) to $v$ in the $r$-th round, which can be obtained by local computation and communication from the server to Alice since $T<2^h\/2$, which implies that each of $v$ and the nodes in $\\Gamma(v)$ is owned by either Alice or the server in the $(r-1)$-th round for $r\\leq T$.\nSo in this case, we only need communication from the server to Alice in the Server model.\nThis part is not counted towards the complexity, by definition.\n\\item If $v$ is owned by Bob or the server in the $(r-1)$-th round and will be owned by Bob in the $r$-th round, no communication is counted towards the complexity, by the same argument as above.\n\\item If $v$ is owned by the server in both the $(r-1)$-th round and the $r$-th round,\nthe server needs the messages from $\\Gamma(v)$ to $v$.\nFor each node $u\\in\\Gamma(v)$ owned by Alice or Bob in the $(r-1)$-th round, Alice or Bob simulates the local computation of $u$ in the $r$-th round and sends the message to the server.\n\\begin{itemize}\n\\item If $v$ is on one of the paths in $V_S$, none of $\\Gamma(v)$ is owned by Alice or Bob in the $(r-1)$-th round.\n\\item If $v$ is on the binary tree, a node $u\\in\\Gamma(v)$ is owned by Alice in the $(r-1)$-th round only if all nodes at the same depth as $v$, and on the left side of $v$, are not owned by the server in the $r$-th round, and $u$ is the left child of $v$.\nSimilarly, a node $u\\in\\Gamma(v)$ is owned by Bob in the $(r-1)$-th round only if all nodes at the same depth as $v$, and on the right 
side of $v$, are not owned by the server in the $r$-th round, and $u$ is the right child of $v$.\nIn the $r$-th round, there are at most $2h$ such pairs $(u,v)$ in total.\n\\end{itemize}\n\\end{itemize}\nHence, a total of $O(T\\cdot h)$ messages, each of size $O(B)$, are sent from Alice or Bob to the server.\n\\end{proof}\n\n\n\n\\subsection{Hardness of Approximating Diameter}\n\nWe will use the graph $G$ constructed above as a gadget to prove a lower bound on the round complexity of approximating the weighted diameter in the quantum CONGEST model.\nThe specific graph depicted in Figure~\\ref{fig:diameter} contains $n=\\left(2^{h+1}-1\\right)+\\left(2s+\\ell\\right)\\left(2^h+2\\right)+2\\cdot2^s$ nodes, where the parameters $h,s,\\ell$ are chosen as follows throughout this section.\n\\begin{equation}\n\\label{eqn:paramters}\nh\\text{ is some even number},s=3h\/2,\\ell=2^{s-h}.\n\\end{equation}\nThis choice makes $2^h=\\widetilde\\Theta(n^{2\/3}),2^s=\\widetilde\\Theta(n)$ and $\\ell=\\widetilde\\Theta(n^{1\/3})$.\n\n\\begin{figure*}[ht]\n\\centering\n\\includegraphics[width=\\linewidth]{figures\/Diameter.pdf}\n\\caption{\nGraph $G$ for approximating diameter.\nThe black edges are of weight $1$; the blue edges are of weight $\\alpha$; and the weights of the red edges are determined by the inputs $x,y$, i.e., $w(\\{a_i,a^\\star_j\\})=\\alpha$ if $x_{i,j}=1$ and $w(\\{a_i,a^\\star_j\\})=\\beta$ if $x_{i,j}=0$, and $w(\\{b_i,b^\\star_j\\})=\\alpha$ if $y_{i,j}=1$ and $w(\\{b_i,b^\\star_j\\})=\\beta$ if $y_{i,j}=0$, for $i\\in[1,2^s]$ and $j\\in[1,\\ell]$.\n}\n\\label{fig:diameter}\n\\end{figure*}\n\n\\begin{theorem}[Restated]\nFor any constant $\\varepsilon\\in(0,\\frac12]$, any algorithm computing, with probability at least $\\frac{11}{12}$, a $(\\frac32-\\varepsilon)$-approximation of the weighted diameter in the quantum CONGEST model requires $\\Omega\\left(\\frac{n^{2\/3}}{\\log^2n}\\right)$ rounds, even when the unweighted diameter is $\\Theta(\\log n)$, where $n$ denotes the number of nodes.\n\\label{thm:diameter}\n\\end{theorem}\n\nOn the network $G=(V,E)$ described in Section~\\ref{sec:reduction}, we specify $G[V_A]$ and $G[V_B]$.\nLet\n\\begin{equation*}\n\\begin{aligned}\nV_A & =\\left\\{a_1,\\cdots,a_{2^s}\\right\\}\\uplus\\left\\{a^0_1,a^1_1,\\cdots,a^0_s,a^1_s\\right\\}\\uplus\\left\\{a^\\star_1,\\cdots,a^\\star_\\ell\\right\\}, \\\\%\\text{ and} \\\\\nV_B & =\\left\\{b_1,\\cdots,b_{2^s}\\right\\}\\uplus\\left\\{b^0_1,b^1_1,\\cdots,b^0_s,b^1_s\\right\\}\\uplus\\left\\{b^\\star_1,\\cdots,b^\\star_\\ell\\right\\}.\n\\end{aligned}\n\\end{equation*}\nThe edges $E_A, E_B$ and $E'$ are specified as follows.\n$$\n\\begin{aligned}\nE_A & =\\left\\{\\{a_i,a^{\\text{bin}(i,j)}_j\\}:i\\in[1,2^s],j\\in[1,s]\\right\\} \\\\\n& \\uplus\\left\\{\\{a_i,a^\\star_j\\}:i\\in[1,2^s],j\\in[1,\\ell]\\right\\} \\\\\n& \\uplus\\left\\{\\{a_i,a_j\\}:i,j\\in[1,2^s],i\\ne j\\right\\}, \\\\\nE_B & =\\left\\{\\{b_i,b^{\\text{bin}(i,j)}_j\\}:i\\in[1,2^s],j\\in[1,s]\\right\\} \\\\\n& \\uplus\\left\\{\\{b_i,b^\\star_j\\}:i\\in[1,2^s],j\\in[1,\\ell]\\right\\} \\\\\n& \\uplus\\left\\{\\{b_i,b_j\\}:i,j\\in[1,2^s],i\\ne j\\right\\}, \\\\\nE' & =\\left\\{\\{a^0_i,p_{2i-1,1}\\},\\{b^1_i,p_{2i-1,2^h}\\}:i\\in[1,s]\\right\\} \\\\\n& \\uplus\\left\\{\\{a^1_i,p_{2i,1}\\},\\{b^0_i,p_{2i,2^h}\\}:i\\in[1,s]\\right\\} \\\\\n& \\uplus\\left\\{\\{a^\\star_i,p_{2s+i,1}\\},\\{b^\\star_i,p_{2s+i,2^h}\\}:i\\in[1,\\ell]\\right\\},\n\\end{aligned}\n$$\nwhere $\\text{bin}(i,j)$ denotes the $j$-th bit in the binary expression of the integer $i-1$.\n\nThe node pairs $(a^0_i, 
p_{2i-1,1})$, $(a^1_i,p_{2i,1})$, $(b^0_i,p_{2i,2^h})$, $(b^1_i,p_{2i-1,2^h})$ for $1\\le i\\le s$, and $(a^\\star_j,p_{2s+j,1})$, $(b^\\star_j,p_{2s+j,2^h})$ for $1\\le j\\le\\ell$ are connected.\nFor each $i\\in[1,2^s]$, $a_i$ is connected to $a^{\\text{bin}(i,j)}_j$ for each $j\\in[1,s]$, and $a_i$ is connected to $a^\\star_j$ for each $j\\in[1,\\ell]$.\nMoreover, $G[\\{a_1,\\cdots,a_{2^s}\\}]$ is a clique.\nThe edges in $G[V_B]$ are linked in the same way as the edges in $G[V_A]$.\n\nThe weights of the edges are specified as follows, which are also depicted in Figure~\\ref{fig:diameter}.\n\\begin{itemize}\n\\item The edges on the binary tree and the edges on the $2s+\\ell$ paths (including the endpoints in $V_A$ and $V_B$) are of weight $1$ (the black edges in Figure~\\ref{fig:diameter}).\n\\item Recall that Alice and Bob receive inputs $x,y\\in\\{0,1\\}^{2^s\\cdot\\ell}$ respectively.\n$x$ and $y$ are indexed by $x_{i,j}$ and $y_{i,j}$ for $i\\in[1,2^s],j\\in[1,\\ell]$ where $s$ and $\\ell$ are given in Eq.~\\eqref{eqn:paramters}.\nFor each $i\\in[1,2^s],j\\in[1,\\ell]$, $w(\\{a_i,a^\\star_j\\})=\\alpha$ if $x_{i,j}=1$ and $w(\\{a_i,a^\\star_j\\})=\\beta$ if $x_{i,j}=0$ ($\\alpha<\\beta$); weights of edges between $\\{b_1,\\cdots,b_{2^s}\\}$ and $\\{b^\\star_1,\\cdots,b^\\star_\\ell\\}$ are assigned according to $y$ in the same way (the red edges in Figure~\\ref{fig:diameter}).\n\\item The edges between the binary tree and the $2s+\\ell$ paths, those between $\\{a_1,\\cdots,a_{2^s}\\}$ and $\\{a^0_1,a^1_1,\\cdots,a^0_s,a^1_s\\}$, and those between $\\{b_1,\\cdots,b_{2^s}\\}$ and $\\{b^0_1,b^1_1,\\cdots,b^0_s,b^1_s\\}$ are of weight $\\alpha$; weights of edges inside $G[\\{a_1,\\cdots,a_{2^s}\\}]$ and $G[\\{b_1,\\cdots,b_{2^s}\\}]$ are also $\\alpha$ (the blue edges in Figure~\\ref{fig:diameter}).\n\\end{itemize}\n\nIt is sufficient to analyze the diameter of graph after contracting all edges of weight $1$ due to the following lemma.\nAn edge is contracted if the two endpoints are merged to one node, and the adjacent edges of the two endpoints are incident to it.\nIf there are parallel edges after contraction, we only keep the one with the lowest weight.\n\n\\begin{lemma}\nGiven a weighted graph $(G,w)$ where $G=(V,E)$ and $w:E\\to\\mathbb N^+$.\nLet $G'$ be the graph after contracting all edges of weight $1$.\nWe have $D_{G',w}\\le D_{G,w}\\le D_{G',w}+n$ and $R_{G',w}\\le R_{G,w}\\le R_{G',w}+n$, where $n=|V|$.\n\\label{lem:contraction}\n\\end{lemma}\n\\begin{proof}\nFor any path $P$ in $G$, let $P'$ be the path in $G'$ obtained from $P$ after contraction.\nThen\n$$\\text{length}(P')\\leq\\text{length}(P)\\leq\\text{length}(P')+n$$\nas there are at most $n-1$ $1$-weight edges.\nThus we conclude the result.\n\\end{proof}\n\nFor inputs $x,y\\in\\{0,1\\}^{2^s\\cdot\\ell}$ received by Alice and Bob, define\n$$F(x,y)=\\bigwedge_{i\\in[1,2^s]}\\left(\\bigvee_{j\\in[1,\\ell]}\\left(x_{i,j}\\wedge y_{i,j}\\right)\\right),$$\ni.e., $F=\\text{AND}_{2^s}\\circ(\\text{OR}_\\ell\\circ\\text{AND}^\\ell_2)^{2^s}$.\nWe have the following lemma.\n\n\\begin{lemma}\n$D_{G,w}\\le\\max\\{2\\alpha,\\beta\\}+n$ if $F(x,y)=1$, and $D_{G,w}\\ge\\min\\{\\alpha+\\beta,3\\alpha\\}$ otherwise.\n\\label{lem:reduction}\n\\end{lemma}\n\n\\begin{proof}\nThe graph $G'$ after contraction is given in Figure~\\ref{fig:contraction}.\nThe binary tree is contracted to node $t$.\nThe $2s+\\ell$ paths are contracted to nodes $a^0_1,a^1_1,\\cdots,a^0_s,a^1_s$ and $a^\\star_1,\\cdots,a^\\star_\\ell$ respectively.\nNote that 
$b_i$ is connected to $a^{\\text{bin}(i,j)\\oplus1}_j$ for $i\\in[1,2^s],j\\in[1,s]$.\nwe list upper bounds of the distances between any two nodes $u$ and $v$ in $G'$ on Table~\\ref{tab:distance} with the corresponding paths, except for the distance between $a_i$ and $b_i$ with $i\\in[1,2^s]$.\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.5\\linewidth]{figures\/Contraction.pdf}\n\\caption{\nGraph $G'$ after contraction.\nThe distance between any pair of nodes, except $a_i$ and $b_i$ for $i\\in[1,2^s]$, is at most $\\max\\{2\\alpha,\\beta\\}$; and the distance between $a_i$ and $b_i$ is at most $2\\alpha$ if there exists $j\\in[1,\\ell]$ such that $x_{i,j}=y_{i,j}=1$, otherwise it is at least $\\min\\{\\alpha+\\beta,3\\alpha\\}$.\nTherefore, the diameter is at most $\\max\\{2\\alpha,\\beta\\}$ if, for any $i\\in[1,2^s]$, there exists $j\\in[1,\\ell]$ such that $x_{i,j}=y_{i,j}=1$, otherwise it is at least $\\min\\{\\alpha+\\beta,3\\alpha\\}$.\n}\n\\label{fig:contraction}\n\\end{figure}\n\n\\begin{table*}[t]\n\\centering\n\\caption{\nDistance between nodes in $G'$.\nLet {\\rm router} be any node in $\\{a^0_1,a^1_1,\\cdots,a^0_s,a^1_s,a^\\star_1,\\cdots,a^\\star_\\ell\\}$.\n${\\rm adj}(i,j)$ denotes the integer after changing the $j$-th bit in binary expression of integer $i-1$, and ${\\rm ind}(i,j)$ is the smallest $z\\in[1,s]$ satisfying ${\\rm bin}(i,z)\\ne{\\rm bin}(j,z)$.\n}\n\\label{tab:distance}\n\\begin{tabular}{cccc}\n\\toprule[1.5pt]\n$u$ & $v$ & $d_{G',w}(u,v)$ & Path \\\\\n\\midrule[1.5pt]\n\\multirow{3.6}{*}{$t$} \t\t\t\t\t\t& router & $\\le\\alpha$ & $\\left(t\\to v\\right)$ \\\\\n\t\t\t\t\t\t\t\t\t\t\t& $a_i$ ($i\\in[1,2^s]$) & $\\le2\\alpha$ & $\\left(t\\to a^{\\text{bin}(i,0)}_0\\to a_i\\right)$ \\\\\n\t\t\t\t\t\t\t\t\t\t\t& $b_i$ ($i\\in[1,2^s]$) & $\\le2\\alpha$ & $\\left(t\\to a^{\\text{bin}(i,0)\\oplus1}_0\\to b_i\\right)$ \\\\\n\\midrule\n\\multirow{6.4}{*}{$a_i$ ($i\\in[1,2^s]$)} & $a_j$ ($j\\ne i,j\\in[1,2^s]$) & $\\le\\alpha$ & $\\left(a_i\\to a_j\\right)$ \\\\\n\t\t\t\t\t\t\t\t\t\t\t& $a^{\\text{bin}(i,j)}_j$ ($j\\in[1,s]$) & $\\le\\alpha$ & $\\left(a_i\\to a^{\\text{bin}(i,j)}_j\\right)$ \\\\\n\t\t\t\t\t\t\t\t\t\t\t& $a^{\\text{bin}(i,j)\\oplus1}_j$ ($j\\in[1,s]$) & $\\le2\\alpha$ & $\\left(a_i\\to a_{\\text{adj}(i,j)}\\to a^{\\text{bin}(i,j)\\oplus1}_j\\right)$ \\\\\n\t\t\t\t\t\t\t\t\t\t\t& $b_j$ ($j\\ne i,j\\in[1,2^s]$) & $\\le2\\alpha$ & $\\left(a_i\\to a^{\\text{bin}(i,\\text{ind}(i,j))}_{\\text{ind}(i,j)}\\to b_j\\right)$ \\\\\n\t\t\t\t\t\t\t\t\t\t\t& $a^\\star_j$ ($j\\in[1,\\ell]$) & $\\le\\beta$ & $\\left(a_i\\to a^\\star_j\\right)$ \\\\\n\\midrule\n\\multirow{5.2}{*}{$b_i$ ($i\\in[1,2^s]$)} & $b_j$ ($j\\ne i,j\\in[1,2^s]$) & $\\le\\alpha$ & $\\left(b_i\\to b_j\\right)$ \\\\\n\t\t\t\t\t\t\t\t\t\t\t& $a^{\\text{bin}(i,j)\\oplus1}_j$ ($j\\in[1,s]$) & $\\le\\alpha$ & $\\left(b_i\\to a^{\\text{bin}(i,j)\\oplus1}_j\\right)$ \\\\\n\t\t\t\t\t\t\t\t\t\t\t& $a^{\\text{bin}(i,j)}_j$ ($j\\in[1,s]$) & $\\le2\\alpha$ & $\\left(b_i\\to b_{\\text{adj}(i,j)}\\to a^{\\text{bin}(i,j)}_j\\right)$ \\\\\n\t\t\t\t\t\t\t\t\t\t\t& $a^\\star_j$ ($j\\in[1,\\ell]$) & $\\le\\beta$ & $\\left(b_i\\to a^\\star_j\\right)$ \\\\\n\\midrule\nrouter \t\t\t\t\t\t\t\t\t\t& router & $\\le2\\alpha$ & $\\left(u\\to t\\to v\\right)$ \\\\\n\\bottomrule[1.5pt]\n\\end{tabular}\n\\end{table*}\n\nRegarding the distance between $a_i$ and $b_i$ for $i\\in[1,2^s]$, if there exists $j\\in[1,\\ell]$ such that $x_{i,j}=y_{i,j}=1$, then $w(\\{a_i,a^\\star_j\\})=w(\\{b_i,a^\\star_j\\})=\\alpha$ 
and $d_{G',w}(a_i,b_i)\\le2\\alpha$ because of the path $(a_i\\to a^\\star_j\\to b_i)$ in $G'$.\nIf there is no $j\\in[1,\\ell]$ such that $x_{i,j}=y_{i,j}=1$, we claim that $d_{G',w}(a_i,b_i)\\geq\\min\\{\\alpha+\\beta, 3\\alpha\\}$.\nFor any path between $a_i$ and $b_i$, if it contains exactly two edges, it is of the form $(a_i\\to a^\\star_j\\to b_i)$ for some $j\\in[1,\\ell]$ by the construction of $G'$, and it is of length at least $\\alpha+\\beta$ by the assumption.\nIf it contains at least three edges, it is of length at least $3\\alpha$.\n\nIf $F(x,y)=1$, then for any $i\\in[1,2^s]$, there exists $j\\in[1,\\ell]$ such that $x_{i,j}=y_{i,j}=1$.\nHence,\n$$\n\\begin{aligned}\n& d_{G',w}(a_i,b_i)\\le2\\alpha,\\forall i\\in[1,2^s], \\\\\n& D_{G',w}=\\max_{u,v}d_{G',w}(u,v)\\le\\max\\{2\\alpha,\\beta\\}.\n\\end{aligned}\n$$\nTherefore, $D_{G,w}\\le D_{G',w}+n\\le\\max\\{2\\alpha,\\beta\\}+n$ by Lemma~\\ref{lem:contraction}.\n\nIf $F(x,y)=0$, then there exists $i\\in[1,2^s]$ such that $x_{i,j}=0$ or $y_{i,j}=0$ for any $j\\in[1,\\ell]$.\nHence,\n$$\n\\begin{aligned}\n& d_{G',w}(a_i,b_i)=\\min_{\\text{path }P\\text{ from }a_i\\text{ to }b_i}\\text{length}(P)\\ge\\min\\{\\alpha+\\beta,3\\alpha\\}, \\\\\n& D_{G',w}=\\max_{u,v}d_{G',w}(u,v)\\ge d_{G',w}(a_i,b_i)\\ge\\min\\{\\alpha+\\beta,3\\alpha\\}.\n\\end{aligned}\n$$\nTherefore, $D_{G,w}\\ge D_{G',w}\\ge\\min\\{\\alpha+\\beta,3\\alpha\\}$ by Lemma~\\ref{lem:contraction}.\n\\end{proof}\n\nCombining Lemma~\\ref{lem:simulation} and Lemma~\\ref{lem:reduction}, we have a reduction from computing $F$ in the Server model to approximating diameter in the quantum CONGEST model.\nTo prove the communication complexity of $F$ in the Server model, we adopt the following lemma.\n\n\\begin{lemma}[Lemma B.4 in \\cite{ElkinKNP14}, arXiv version]\nFunction ${\\rm VER}:\\{0,1,2,3\\}\\times\\{0,1,2,3\\}\\to\\{0,1\\}$ is defined by ${\\rm VER}(x,y)=1$ if and only if $x+y$ is equivalent to $0$ or $1$ modulo $4$, where $x,y\\in\\{0,1,2,3\\}$.\nLet $f:\\{0,1\\}^k\\to\\{0,1\\}$ be an arbitrary function.\nThen\n$$Q^{sv}_\\varepsilon(f\\circ{\\rm VER}^k)\\ge\\frac12\\text{deg}_{4\\varepsilon}(f)-O(1)$$\nfor any $0<\\varepsilon<1\/4$.\n\\label{lem:lifting}\n\\end{lemma}\n\nA read-once formula, which consists of AND gates, OR gates, and NOT gates, is a formula in which each variable appears exactly once.\nWe will need the following conclusion for approximate degree of read-once formulas.\n\n\\begin{lemma}[Theorem 6 in \\cite{AaronsonBKRT21}]\nFor any read-once formula $f:\\{0,1\\}^k\\to\\{0,1\\}$, $\\text{deg}_{1\/3}(f)=\\Theta\\left(\\sqrt k\\right)$.\n\\label{lem:read_once}\n\\end{lemma}\n\n\\begin{lemma}\nGiven $s,\\ell$ defined in Eq.~\\eqref{eqn:paramters} where $\\ell$ is a multiple of $4$, $F={\\rm AND}_{2^s}\\circ({\\rm OR}_\\ell\\circ{\\rm AND}^\\ell_2)^{2^s}$ with inputs $x,y\\in\\{0,1\\}^{2^s\\cdot\\ell}$, set\n\\[F(x,y)=\\bigwedge_{i\\in[1,2^s]}\\left(\\bigvee_{j\\in[1,\\ell]}\\left(x_{i,j}\\wedge y_{i,j}\\right)\\right).\\]\nIt holds that\n$$Q^{sv}_{1\/12}(F)=\\Omega\\left(\\sqrt{2^s\\cdot\\ell}\\right).$$\n\\label{lem:and_or_and}\n\\end{lemma}\n\\begin{proof}\nThe function $F$ can be rewritten as $F=f\\circ \\text{GDT}^{2^s\\cdot\\ell\/4}$, where\n$f=\\text{AND}_{2^s}\\circ\\text{OR}^{2^s}_{\\ell\/4}$ and $\\text{GDT}=\\text{OR}_4\\circ\\text{AND}^4_2$.\nObviously the function $f$ is a read-once formula.\nIt can be seen that the function VER is actually a {\\it promise version} of the function GDT where inputs $x,y\\in\\{0,1\\}^4$ 
satisfy\n$$x\\in\\{0011,1001,1100,0110\\},y\\in\\{0001,0010,0100,1000\\}.$$\nThus, the lower bound for $f\\circ\\text{VER}^{2^s\\cdot\\ell\/4}$ clearly implies the lower bound for $f\\circ\\text{GDT}^{2^s\\cdot\\ell\/4}$.\nTherefore,\n$$Q^{sv}_{1\/12}(f\\circ\\text{GDT}^{2^s\\cdot\\ell\/4})\\ge Q^{sv}_{1\/12}(f\\circ\\text{VER}^{2^s\\cdot\\ell\/4})\\ge\\frac12\\text{deg}_{1\/3}(f)-O(1)=\\Omega\\left(\\sqrt{2^s\\cdot\\ell}\\right).$$\nThe second inequality is due to Lemma~\\ref{lem:lifting} and the last inequality is due to Lemma~\\ref{lem:read_once}.\n\\end{proof}\n\n\\begin{proof}[Proof of Theorem~\\ref{thm:diameter}]\nLet $\\mathcal A$ be a $T$-round algorithm ($T<2^h\/2$) in the quantum CONGEST model which, for any weighted graph $(G,w)$, computes a $(\\frac32-\\varepsilon)$-approximation of $D_{G,w}$ (constant $\\varepsilon\\in(0,1\/2]$) with probability at least $11\/12$.\nAlice and Bob, who receive $x,y\\in\\{0,1\\}^{2^s\\cdot\\ell}$, respectively, construct the network $G$ as described above with parameters $h,s,\\ell$ given in Eq.~\\eqref{eqn:paramters}.\nThe number of nodes is\n$$n=\\left(2^{h+1}-1\\right)+\\left(2s+\\ell\\right)\\left(2^h+2\\right)+2\\cdot2^s=\\Theta\\left(2^{3h\/2}\\right).$$\nAnd the unweighted diameter is $D_G=\\Theta(h)=\\Theta(\\log n)$.\nLet $w$ be the weight function.\nDue to Lemma~\\ref{lem:simulation}, they can simulate $\\mathcal A$ on $(G,w)$ in the quantum Server model with communication complexity $O(T\\cdot h\\cdot B)$ where $B$ denotes the bandwidth.\nWith probability at least $\\frac{11}{12}$, Alice and Bob output an approximation $\\widetilde D_{G,w}$ satisfying $D_{G,w}\\le\\widetilde D_{G,w}\\le(\\frac32-\\varepsilon)D_{G,w}$.\nWe set $\\alpha=n^2$ and $\\beta=2n^2$.\nBy Lemma~\\ref{lem:reduction},\n$$\n\\begin{aligned}\n\\text{if }F(x,y)=1,\\widetilde D_{G,w} & \\le\\left(\\frac32-\\varepsilon\\right)D_{G,w}\\le\\left(\\frac32-\\varepsilon\\right)\\left(\\max\\{2\\alpha,\\beta\\}+n\\right) \\\\\n& =3n^2-\\left(2\\varepsilon n^2-\\left(\\frac32-\\varepsilon\\right)n\\right); \\\\\n\\text{if }F(x,y)=0,\\widetilde D_{G,w} & \\ge D_{G,w}\\ge\\min\\{\\alpha+\\beta,3\\alpha\\}=3n^2. 
\\\n\\end{aligned}\n$$\nFor large enough $n$, Alice and Bob can distinguish whether $F(x,y)=1$ or not with probability at least $\\frac{11}{12}$ in the Server model, and thus $Q^{sv}_{1\/12}(F)=O(T\\cdot h\\cdot B)$.\nDue to Lemma~\\ref{lem:and_or_and},\n$$T=\\Omega\\left(\\frac{\\sqrt{2^s\\cdot\\ell}}{h\\cdot B}\\right)=\\Omega\\left(\\frac{2^h}{h\\cdot B}\\right)=\\Omega\\left(\\frac{n^{2\/3}}{ \\log^2 n}\\right),$$\nwhere the last equality is by the choice of $h$ and the bandwidth $B=\\Theta(\\log n)$.\nTherefore, the round complexity of approximating the diameter is $\\Omega\\left(\\min\\left\\{2^h\/2,\\frac{n^{2\/3}}{\\log^2n}\\right\\}\\right)=\\Omega\\left(\\frac{n^{2\/3}}{\\log^2n}\\right)$.\n\\end{proof}\n\n\n\n\\subsection{Hardness of Approximating Radius}\n\nWe choose the same set of parameters $h,s,\\ell$ given in Eq.~\\eqref{eqn:paramters}.\nThe argument is very close to the one for the diameter.\n\n\\begin{theorem}[Restated]\nFor any constant $\\varepsilon\\in(0,\\frac12]$, any algorithm computing, with probability at least $\\frac{11}{12}$, a $(\\frac32-\\varepsilon)$-approximation of the radius in the quantum CONGEST model requires $\\Omega\\left(\\frac{n^{2\/3}}{\\log^2n}\\right)$ rounds, even when the unweighted diameter is $\\Theta(\\log n)$, where $n$ denotes the number of nodes.\n\\label{thm:radius}\n\\end{theorem}\n\nThe weighted graph $(G,w)$ that we construct for showing the hardness of approximating the radius is almost the same, except that we add a node $a_0$ to $V_A$ along with edges $\\{a_0,a_1\\},\\cdots,\\{a_0,a_{2^s}\\}$ of weight $2\\alpha$.\nHere we only show in Figure~\\ref{fig:radius} the graph $G'$ after contracting all edges of weight $1$ (the green edges are the newly added edges).\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.6\\linewidth]{figures\/Radius.pdf}\n\\caption{\nGraph $G'$ (after contraction) for approximating radius.\nThe additional green edges are of weight $2\\alpha$.\nThe eccentricity of any node, except $a_i$ for $i\\in[1,2^s]$, is at least $3\\alpha$; and the eccentricity of $a_i$ is at most $\\max\\{2\\alpha,\\beta\\}$ if there exists $j\\in[1,\\ell]$ such that $x_{i,j}=y_{i,j}=1$, otherwise it is at least $\\min\\{\\alpha+\\beta,3\\alpha\\}$.\nTherefore, the radius is at most $\\max\\{2\\alpha,\\beta\\}$ if there exist $i\\in[1,2^s],j\\in[1,\\ell]$ such that $x_{i,j}=y_{i,j}=1$, otherwise it is at least $\\min\\{\\alpha+\\beta,3\\alpha\\}$.\n}\n\\label{fig:radius}\n\\end{figure}\n\nFor inputs $x,y\\in\\{0,1\\}^{2^s\\cdot\\ell}$, define\n$$F'(x,y)=\\bigvee_{i\\in[1,2^s],j\\in[1,\\ell]}(x_{i,j}\\wedge y_{i,j}).$$\nWe have the following lemma.\n\n\\begin{lemma}\n$R_{G,w}\\le\\max\\{2\\alpha,\\beta\\}+n$ if $F'(x,y)=1$, and $R_{G,w}\\ge\\min\\{\\alpha+\\beta,3\\alpha\\}$ otherwise.\n\\label{lem:reduction_radius}\n\\end{lemma}\n\\begin{proof}\nIt suffices to estimate the radius of $(G',w)$ by Lemma~\\ref{lem:contraction}.\nFor any node $v\\notin\\{a_0,a_1,\\cdots,a_{2^s}\\}$, $d_{G',w}(a_0,v)\\ge3\\alpha$.\nThis is because any path from $a_0$ to $v$ is of the form $(a_0\\to a_i\\leadsto v)$ for some $i\\in[1,2^s]$, where $w(\\{a_0,a_i\\})=2\\alpha$, and the remaining edges on the path have total weight at least $\\alpha$.\nTherefore, $e_{G',w}(v)\\ge3\\alpha$ for any $v\\notin\\{a_1,\\cdots,a_{2^s}\\}$.\nTo estimate the eccentricity of $a_i$ for $i\\in[1,2^s]$, we have $d_{G',w}(a_i,v)\\le\\max\\{2\\alpha,\\beta\\}$ for any $v\\ne b_i$ as shown in Table~\\ref{tab:distance}, and $d_{G',w}(a_i,b_i)\\le2\\alpha$ if there exists 
$j\\in[1,\\ell]$ such that $x_{i,j}=y_{i,j}=1$, and $d_{G',w}(a_i,b_i)\\ge\\min\\{\\alpha+\\beta,3\\alpha\\}$ otherwise.\n\nIf $F'(x,y)=1$, then there are $i\\in[1,2^s]$ and $j\\in[1,\\ell]$ such that $x_{i,j}=y_{i,j}=1$, and thus\n$$\n\\begin{aligned}\n& d_{G',w}(a_i,b_i)\\le2\\alpha,\\\\\n& e_{G',w}(a_i)=\\max_v d_{G',w}(a_i,v)\\le\\max\\{2\\alpha,\\beta\\}, \\\\\n& R_{G',w}=\\min_u e_{G',w}(u)\\le e_{G',w}(a_i)\\le\\max\\{2\\alpha,\\beta\\}.\n\\end{aligned}\n$$\nTherefore, $R_{G,w}\\le R_{G',w}+n\\le\\max\\{2\\alpha,\\beta\\}+n$ by Lemma~\\ref{lem:contraction}.\n\nIf $F'(x,y)=0$, then for any $i\\in[1,2^s]$ and $j\\in[1,\\ell]$, $x_{i,j}=0$ or $y_{i,j}=0$, and thus\n$$\n\\begin{aligned}\n& d_{G',w}(a_i,b_i)\\ge\\min\\{\\alpha+\\beta,3\\alpha\\},\\forall i\\in[1,2^s], \\\\\n& e_{G',w}(a_i)=\\max_v d_{G',w}(a_i,v)\\ge d_{G',w}(a_i,b_i) \\\\\n& \\qquad\\ge\\min\\{\\alpha+\\beta,3\\alpha\\},\\forall i\\in[1,2^s], \\\\\n& R_{G',w}=\\min_u e_{G',w}(u)\\ge\\min\\{\\alpha+\\beta,3\\alpha\\}.\n\\end{aligned}\n$$\nTherefore, $R_{G,w}\\ge R_{G',w}\\ge\\min\\{\\alpha+\\beta,3\\alpha\\}$ by Lemma~\\ref{lem:contraction}.\n\\end{proof}\n\nSimilar to Lemma~\\ref{lem:and_or_and}, one can prove a lower bound on communication complexity of $F'$ in the quantum Server model.\n\n\\begin{lemma}\nGiven $s,\\ell$ defined in Eq.~\\eqref{eqn:paramters} where $2^s\\cdot\\ell$ is a multiple of $4$,\n$F'={\\rm OR}_{2^s\\cdot\\ell}\\circ{\\rm AND}^{2^s\\cdot\\ell}_2$ with inputs $x,y\\in\\{0,1\\}^{2^s\\cdot\\ell}$,\nset\n$$F'(x,y)=\\bigvee_{i\\in[1,2^s],j\\in[1,\\ell]}(x_{i,j}\\wedge y_{i,j}).$$\nIt holds that\n$$Q^{sv}_{1\/12}(F')=\\Omega\\left(\\sqrt{2^s\\cdot\\ell}\\right).$$\n\\label{lem:or_and}\n\\end{lemma}\n\\begin{proof}\nThe function $F'$ can be rewritten as $F'=f'\\circ \\text{GDT}^{2^s\\cdot\\ell\/4}$,\nwhere $f'=\\text{OR}_{2^s\\cdot\\ell\/4}$.\nNote that $f'$ is still a read-once formula.\nThus the rest of proof is the same as the one in Lemma~\\ref{lem:and_or_and}.\n\\end{proof}\n\n\\begin{proof}[Proof of Theorem~\\ref{thm:radius}]\nLet $\\mathcal A$ be a $T$-round algorithm ($T<2^h\/2$) in the quantum CONGEST model which, for any weighted graph $(G,w)$, computes a $(\\frac32-\\varepsilon)$-approximation of $R_{G,w}$ (constant $\\varepsilon\\in(0,\\frac12]$) with probability at least $\\frac{11}{12}$.\nAlice and Bob, who receive $x,y\\in\\{0,1\\}^{2^s\\cdot\\ell}$ as input, construct the weighted graph $(G,w)$ described above with the number of node $n=\\Theta(2^{3h\/2})$. The unweighted diameter $D_G=\\Theta(\\log n)$.\nDue to Lemma~\\ref{lem:simulation}, Alice and Bob can simulate $\\mathcal A$ on $(G,w)$ in the quantum Server model with communication complexity $O(T\\cdot h\\cdot B)$.\nThen with probability at least $\\frac{11}{12}$, Alice and Bob compute $\\widetilde R_{G,w}$ satisfying $R_{G,w}\\le\\widetilde R_{G,w}\\le(\\frac32-\\varepsilon)R_{G,w}$.\nWe set $\\alpha=n^2$ and $\\beta=2n^2$.\nDue to Lemma~\\ref{lem:reduction_radius},\n$$\n\\begin{aligned}\n& \\text{if }F'(x,y)=1,\\widetilde R_{G,w}\\le3n^2-\\left(2\\varepsilon n^2-\\left(\\frac32-\\varepsilon\\right)n\\right); \\\\\n& \\text{if }F'(x,y)=0,\\widetilde R_{G,w}\\ge3n^2. 
\\\\\n\\end{aligned}\n$$\nFor large enough $n$, Alice and Bob can compute $F'$ with probability at least $\\frac{11}{12}$ in the Server model, and thus $Q^{sv}_{1\/12}(F')=O(T\\cdot h\\cdot B)$.\nDue to Lemma~\\ref{lem:or_and}, $T=\\Omega\\left(\\frac{n^{2\/3}}{\\log^2n}\\right)$.\nTherefore, the round complexity of approximating radius is $\\Omega\\left(\\min\\left\\{2^h\/2,\\frac{n^{2\/3}}{\\log^2n}\\right\\}\\right)=\\Omega\\left(\\frac{n^{2\/3}}{\\log^2n}\\right)$.\n\\end{proof}\n\\section{Introduction}\n\nQuantum distributed computing has received great attention in the past decade~\\cite{Ben-OrH05,TaniKM12,ElkinKNP14,GallM18,GallNR19,IzumiG19,IzumiGM20,Censor-HillelFG22,MagniezN20,GavoilleKM09}.\nA large body of work has been devoted to investigating the quantum advantages in distributed computing.\nIn this paper, we are concerned with the CONGEST networks, which are one of the most fundamental models in distributed computing.\nIn a classical CONGEST network, the nodes synchronously exchange classical messages, and each channel has $O(\\log n)$-bit bandwidth, where $n$ is the number of nodes in the network.\nQuantum CONGEST networks were first introduced by Elkin, Klauck, Nanongkai,\nand Pandurangan~\\cite{ElkinKNP14}, where the only difference is that the nodes exchange quantum messages and the bandwidth of each channel is $O(\\log n)$ qubits. The round complexity of diameter and radius of unweighted graphs in classical CONGEST networks has been extensively studied~\\cite{AbboudCK16,AnconaCDEW20,HolzerW12,PelegRT12,FrischknechtHW12,HolzerPRW14}.\nLe Gall and Magniez~\\cite{GallM18} proved that quantum communication may save the round complexity in CONGEST networks if the graph has a low diameter.\n\nIn this paper, we further investigate the round complexity of computing the diameters and radius of weighted graphs in quantum CONGEST networks. 
We prove that quantum communication may also save the round complexity for both problems.\n\n\n\n\\subsection{Our Results}\n\\label{sec:results}\n\nThe following is one of our main results which asserts that quantum communication may save the round complexity for computing the weighted diameter and radius of a graph given that the graph has a low unweighted diameter.\n\n\\begin{theorem}\nThere exists a $\\widetilde O\\left(\\min\\left\\{n^{9\/10}D^{3\/10},n\\right\\}\\right)$-round distributed algorithm computing a $(1+o(1))$-approximation of the weighted diameter\/radius with probability at least $1-1\/\\text{poly}(n)$, in the quantum CONGEST model, where $D$ denotes the unweighted diameter.\n\\label{thm:upper_bound}\n\\end{theorem}\n\nHolzer and Pinsker in~\\cite{HolzerP15} and Abboud, Censor-Hillel and Khoury in~\\cite{AbboudCK16} proved that $(2-o(1))$-approximating the diameter and $(3\/2-\\varepsilon)$-approximating the (even unweighted) radius in the classical CONGEST network require $\\widetilde\\Omega(n)$ rounds, even when $D$ is constant.\nTherefore, Theorem~\\ref{thm:upper_bound} exhibits the advantages of quantum communication over classical communication in approximating the weighted diameter\/radius when $D=o(n^{1\/3})$.\n\nWe prove Theorem~\\ref{thm:upper_bound} by applying the framework of distributed quantum optimization introduced by Le Gall and Magniez in~\\cite{GallM18}.\nNote that the diameter and radius are the maximum and minimum of eccentricities respectively.\nIt will not give a sublinear-time algorithm if we simply apply a quantum search algorithm, because evaluating the eccentricity of one node takes $\\widetilde\\Theta\\left(\\sqrt n\\right)$ rounds (the lower bound is due to \\cite{ElkinKNP14}), and the searching process should require another $\\widetilde\\Theta\\left(\\sqrt n\\right)$ times of evaluation (the number of nodes with maximum\/minimum eccentricity maybe $O(1)$).\nThus the number of rounds in total will be $\\widetilde{\\Theta}(n)$.\\footnote{\n$\\widetilde O(\\cdot)$, $\\widetilde\\Omega(\\cdot)$, and $\\widetilde\\Theta(\\cdot)$ hide polylogarithmic factors.}\nOur algorithm is inspired by Nanongkai's algorithm~\\cite{Nanongkai14STOC} for approximating the weighted shortest paths in a classical network.\nThe algorithm constructs several small vertex sets and searches the node achieving the maximum\/minimum eccentricity within those sets, which turns out to be a good approximation of diameter\/radius.\nOur algorithm quantizes Nanongkai's algorithm using the standard technique \\cite{Bennett89} and further combines with the framework of distributed quantum optimization in~\\cite{GallM18}.\n\nWe also prove lower bounds for approximating weighted diameter and radius.\n\n\\begin{theorem}\nAny algorithm computing a $(3\/2-o(1))$-approximation of the weighted diameter\/radius requires $\\widetilde\\Omega(n^{2\/3})$ rounds, in the quantum CONGEST model, even when $D=\\Theta(\\log n)$.\n\\label{thm:lower_bound_raw}\n\\end{theorem}\n\nThe hardness of both problems is proved via the communication complexity of quantum Server models.\nThe Server model is a variant of two-party communication complexity models introduced in \\cite{ElkinKNP14}.\nCombining with the graph gadget in \\cite{AbboudCK16}, we get a reduction from the communication complexity of certain read-once functions to the round complexity of approximating the weighted diameter and radius.\nWe further apply a lifting theorem of quantum communication complexity~\\cite{ElkinKNP14} to obtain the 
desired lower bounds.\n\nCompared with Le Gall and Magniez's algorithm~\\cite{GallM18} for the unweighted diameter\/radius with $\\widetilde O\\left(\\sqrt{nD}\\right)$ rounds, Theorem~\\ref{thm:lower_bound_raw} says that computing the weighted diameter\/radius is strictly harder than the unweighted diameter\/radius when $D$ is small.\nIn contrast, in the classical setting, computing the weighted and unweighted diameter\/radius has the same round complexity $\\Theta(n)$~\\cite{AbboudCK16,BernsteinN19}\\footnote{The lower bound is proved in~\\cite{AbboudCK16}. The upper bound follows from the $\\widetilde O(n)$-round algorithm for the exact weighted All-Pairs Shortest Paths (APSP) in~\\cite{BernsteinN19}.}.\n\n\n\n\\subsection{Related Works}\n\nA series of works has studied distance computation in the classical CONGEST network.\nEarlier, Frischknecht, Holzer, and Wattenhofer~\\cite{FrischknechtHW12} showed that computing the diameter of an unweighted graph with constant diameter requires $\\widetilde\\Omega(n)$ rounds, which is tight up to logarithmic factors since even computing All-Pairs Shortest Paths (APSP) on an unweighted graph can be solved in $O(n)$ rounds \\cite{HolzerW12,PelegRT12}.\nAbboud, Censor-Hillel, and Khoury~\\cite{AbboudCK16} later gave the same lower bound of $\\widetilde\\Omega(n)$ for $(3\/2-\\varepsilon)$-approximating the diameter\/radius in sparse networks.\nBernstein and Nanongkai~\\cite{BernsteinN19} provided a $\\widetilde O(n)$-round algorithm computing the exact APSP on any weighted graph.\nAs a result, computing the unweighted diameter\/radius and the weighted diameter\/radius (exactly or with a small approximation ratio) have an almost tight complexity of $\\widetilde\\Theta(n)$ in the classical CONGEST network.\nIf a larger approximation ratio is allowed, there are $\\widetilde O(\\sqrt n+D)$-round algorithms for $3\/2$-approximating the diameter\/radius on any unweighted graph \\cite{HolzerPRW14,AnconaCDEW20}.\nBesides, Chechik and Mukhtar~\\cite{ChechikM20} showed a $\\widetilde O(\\sqrt nD^{1\/4}+D)$-round algorithm computing Single-Source Shortest Paths (SSSP) exactly on any weighted graph, which also gives a $2$-approximation of the diameter\/radius.\n\nAs for the quantum setting, while quantum computation offers advantages over classical computation in various settings such as query complexity and two-party communication complexity, the power of quantum computation in distributed computing has not been fully explored.\nIn the quantum CONGEST network, Elkin et al.~\\cite{ElkinKNP14} gave negative results for several problems such as minimum spanning tree, minimum cut, and SSSP, i.e., quantum communication does not speed up distributed algorithms for these problems.\nLe~Gall and Magniez~\\cite{GallM18} presented a $\\widetilde O\\left(\\sqrt{nD}\\right)$-round algorithm computing the diameter\/radius on any unweighted graph, along with a $\\widetilde O\\left(\\sqrt[3]{nD}+D\\right)$-round algorithm $3\/2$-approximating the diameter.\nThey also proved a $\\widetilde\\Omega(\\sqrt n+D)$ lower bound for computing the unweighted diameter, which was later improved to $\\widetilde\\Omega\\left(\\sqrt[3]{nD^2}+\\sqrt n\\right)$ by Magniez and Nayak~\\cite{MagniezN20}.\nThe above results are listed in Table~\\ref{tab:complexity}.\n\n\\begin{table}[t]\n\\centering\n\\scriptsize\n\\renewcommand\\arraystretch{1.5}\n\\caption{Complexity of computing diameter and radius in the CONGEST 
model.}\n\\label{tab:complexity}\n\\begin{tabular}{cccccccc}\n\\toprule[1.5pt]\n\\multirow{2}{*}{Problem} & \\multirow{2}{*}{Variant} & \\multirow{2}{*}{Approx.} & \\multicolumn{2}{c}{Upper bound $\\widetilde O(\\cdot)$} & ~ & \\multicolumn{2}{c}{Lower bound $\\widetilde\\Omega(\\cdot)$} \\\\\n\\cline{4-5} \\cline{7-8}\n~ & ~ & ~ & Classical & Quantum & ~ & Classical & Quantum \\\\\n\\midrule[1.5pt]\n\\multirow{7}{*}{diameter} & ~ & exact & $n$~\\cite{HolzerW12,PelegRT12} & $\\sqrt{nD}$~\\cite{GallM18} & ~ & $n$~\\cite{FrischknechtHW12} & $\\sqrt[3]{nD^2}+\\sqrt n$~\\cite{MagniezN20} \\\\\n~ & ~ & $3\/2-\\varepsilon$ & $n$ & $\\sqrt{nD}$ & ~ & $n$~\\cite{AbboudCK16} & $\\sqrt n+D$~\\cite{GallM18} \\\\\n~ & ~ & $3\/2$ & $\\sqrt n+D$~\\cite{HolzerPRW14,AnconaCDEW20} & $\\sqrt[3]{nD}+D$~\\cite{GallM18} & ~ & open & open \\\\\n~ & weighted & exact & $n$~\\cite{BernsteinN19} & $n$ & ~ & $n$ & $n^{2\/3}$ \\\\\n~ & weighted & $(1,3\/2)$ & $n$ & \\textcolor{red}{$\\min\\left\\{n^{9\/10}D^{3\/10},n\\right\\}$ (This work)} & ~ & $n$ & \\textcolor{red}{$n^{2\/3}$ (This work)} \\\\\n~ & weighted & $2-\\varepsilon$ & $n$ & $\\min\\left\\{n^{9\/10}D^{3\/10},n\\right\\}$ & ~ & $n$~\\cite{HolzerP15} & $\\sqrt n+D$ \\\\\n~ & weighted & $2$ & $\\sqrt nD^{1\/4}+D$~\\cite{ChechikM20} & $\\sqrt nD^{1\/4}+D$ & ~ & open & open \\\\\n\\midrule\n\\multirow{6}{*}{radius} & ~ & exact & $n$~\\cite{HolzerW12,PelegRT12} & $\\sqrt{nD}$ & ~ & $n$ & $\\sqrt[3]{nD^2}+\\sqrt n$ \\\\\n~ & ~ & $3\/2-\\varepsilon$ & $n$ & $\\sqrt{nD}$ & ~ & $n$~\\cite{AbboudCK16} & $\\sqrt n+D$ \\\\\n~ & ~ & $3\/2$ & $\\sqrt n+D$~\\cite{AnconaCDEW20} & $\\sqrt n+D$ & ~ & open & open \\\\\n~ & weighted & exact & $n$~\\cite{BernsteinN19} & $n$ & ~ & $n$ & $n^{2\/3}$ \\\\\n~ & weighted & $(1,3\/2)$ & $n$ & \\textcolor{red}{$\\min\\left\\{n^{9\/10}D^{3\/10},n\\right\\}$ (This work)} & ~ & $n$ & \\textcolor{red}{$n^{2\/3}$ (This work)} \\\\\n~ & weighted & $2$ & $\\sqrt nD^{1\/4}+D$~\\cite{ChechikM20} & $\\sqrt nD^{1\/4}+D$ & ~ & open & open \\\\\n\\bottomrule[1.5pt]\n\\end{tabular}\n\\end{table} \n\\section{Preliminaries}\n\n\\subsection{Graph Notations}\n\\label{sec:graph_notations}\n\nGiven a weighted graph $(G,w)$ where $G=(V,E)$ and $w:E\\to\\mathbb N^+$.\nThe length of a path is defined to be the sum of weights of edges on it, and the distance between nodes $u$ and $v$, denoted by $d_{G,w}(u,v)$, is the least length over all paths between them.\nThe eccentricity of a node $u$ is denoted by $e_{G,w}(u)=\\max_{v\\in V}d_{G,w}(u,v)$.\nThe radius of weighted graph $(G,w)$, denoted by $R_{G,w}$, is the minimum eccentricity over all nodes, i.e., $R_{G,w}=\\min_{u\\in V}e_{G,w}(u)$, while the diameter of $(G,w)$, denoted by $D_{G,w}$, is the maximum eccentricity of nodes, or equally, the maximum distance between any two nodes, i.e., $D_{G,w}=\\max_{u\\in V}e_{G,w}(u)=\\max_{u,v\\in V}d_{G,w}(u,v)$.\nThe unweighted diameter of graph $G$ is denoted by $D_G=D_{G,w^\\star}$ where $w^\\star(e)=1$ for all $e\\in E$, which is an essential parameter when $G$ represents the underlying graph of a distributed network.\n\n\\subsection{CONGEST Model}\n\nIn the classical CONGEST model, the communication network is a graph $G=(V,E)$ with $n$ nodes, and every node is assigned with a unique identifier.\nEach node represents a processor with unlimited computational power, i.e., the consumption of any local computation in a single processor is ignored.\nEach edge connecting two nodes represents a communication channel with $B=O(\\log n)$ bits of bandwidth.\nIn this 
article, we further consider a weighted graph $(G,w)$ as the underlying network, where the weight of each edge is initially known to both of its endpoints.\n\nFor the quantum version of the CONGEST model defined in \\cite{ElkinKNP14}, adjacent nodes are allowed to exchange \\textit{qubits} (quantum bits), i.e., the classical channels are now quantum channels with the same bandwidth $B=O(\\log n)$.\nEach node can locally do some quantum computation, and distinct nodes may hold entangled qubits.\nIn this paper, we assume that initially the nodes do not share any entanglement, but a node can, for example, locally create a pair of entangled qubits and send one of them to another node.\n\nFor both the classical and quantum CONGEST models, the algorithm is implemented round by round in a synchronous manner.\nIn each round, each node sends\/receives a message of $O(\\log n)$ (qu)bits to\/from each neighbor, and then does local computation according to its local knowledge.\nThe algorithm halts when all nodes halt, and at the end of the algorithm, each node has its own output. We say an algorithm computes the diameter\/radius if all nodes output the correct answer.\nThe round complexity of an algorithm in this model is defined to be the number of communication rounds needed.\nThe round complexity of a distributed problem is the least round complexity of any algorithm solving it.\nOur focus here is on distance problems, mainly the computation of the diameter and radius mentioned in Section~\\ref{sec:graph_notations}.\n\n\\subsection{Server Model}\n\nThe Server model is a variant of the two-party communication model, which was introduced by Elkin et al.~\\cite{ElkinKNP14} to prove lower bounds in the CONGEST model.\nThere are three players in the Server model: Alice, Bob, and the server.\nAlice and Bob receive the inputs $x$ and $y$, respectively, and want to compute $F(x,y)$ for some function $F$.\nThe server receives no input.\nAlice and Bob can exchange messages with the server.\nThe catch here is that the server can send messages for free.\nThus, the communication complexity counts only the messages sent by Alice and Bob.\nNote that Alice and Bob can talk to each other by considering the server as a communication channel, so any protocol in the traditional two-party communication model can be implemented in the Server model with the same complexity.\n\nFor a two-argument function $F$ and $0\\le\\varepsilon<1$, we let $Q^{sv}_\\varepsilon(F)$ denote the communication complexity (in the quantum setting) of computing $F$, where for any inputs $x,y$, the algorithm must output $F(x,y)$ with probability at least $1-\\varepsilon$.\nFor a Boolean function $f:\\{0,1\\}^n\\rightarrow\\{0,1\\}$ and $0\\le\\varepsilon<1$, the $\\varepsilon$-approximate degree of $f$, denoted by $\\text{deg}_\\varepsilon(f)$, is the smallest degree of any polynomial $p$ that $\\varepsilon$-represents $f$, i.e., $|p(x)-f(x)|\\le\\varepsilon$ for any input $x\\in\\{0,1\\}^n$.\n\\section{Algorithm}\n\nWe first introduce the framework of distributed quantum optimization in \\cite{GallM18}.\nGiven a function $f:X\\to\\mathbb Z$, where $X$ is a finite set, let $G=(V,E)$ be a network with a pre-defined node $\\text{leader}\\in V$.\nWe write $\\ket\\phi_v$ to denote a state in the memory space of node $v$.\nA specific register $\\ket\\cdot_I$, called the \\textit{internal} register, and the control of the algorithm are centralized at the node leader.\nAssume that the following three quantum procedures are given as black boxes.\n\\begin{itemize}\n\\item {\\bf 
Initialization:} Prepare an initial state $\\ket0_I\\ket{\\text{init}}$ with some pre-computed information $\\ket{\\text{init}}$.\n\\item {\\bf Setup:} Produce a superposition from the initial state:\n$$\\ket0_I\\ket{\\text{init}}\\mapsto\\sum_{x\\in X}\\alpha_x\\ket x_I\\ket{\\text{data}(x)}\\ket{\\text{init}},$$\nwhere the $\\alpha_x$'s are arbitrary amplitudes and $\\text{data}(x)$ are information depending on $x$.\n\\item {\\bf Evaluation:} Perform the transformation\n$$\\ket{x,0}_I\\ket{\\text{data}(x)}\\ket{\\text{init}}\\mapsto\\ket{x,f(x)}_I\\ket{\\text{data}(x)}\\ket{\\text{init}}.$$\n\\end{itemize}\n\nThe following lemma provides an algorithm to search $x\\in X$ with high value $f(x)$ given the three procedures above.\n\n\\begin{lemma}[Theorem 2.4 in \\cite{GallM18}\\footnote{Although Le~Gall and Magniez write a slightly weaker statement, the lemma we claim here can be proven by the same argument in \\cite{GallM18}.}]\nAssume that {\\rm Initialization} can be implemented within $T_0$ rounds in the quantum CONGEST model, and that unitary operators {\\rm Setup} and {\\rm Evaluation} and their inverses can be implemented within $T$ rounds.\nLet $\\rho>0$ be such that $\\sum_{x\\in X:f(x)\\ge M}|\\alpha_x|^2\\ge\\rho$ where $M$ is unknown to all nodes.\nThen, for any $\\delta>0$, the node {\\rm leader} can find, with probability at least $1-\\delta$, some element $x$ such that $f(x)\\ge M$, in $T_0+O\\left(\\sqrt{\\log(1\/\\delta)\/\\rho}\\right)\\times T$ rounds.\n\\label{lem:optimization}\n\\end{lemma}\n\nThe three procedures will be described as deterministic or randomized procedures that combine the subroutines provided by Nanongkai~\\cite{Nanongkai14STOC} (also presented in Appendix~\\ref{sec:toolkits}).\nThey can be quantized using the standard technique \\cite{Bennett89}, with potentially additional garbage whose size is of the same order as the initial memory space.\n\nGiven a weighted graph $(G,w)$ where $G=(V,E)$ is a network and $w:E\\to\\mathbb N^+$, we show a quantum algorithm approximating $D_{G,w}$ and $R_{G,w}$ by proving Theorem~\\ref{thm:upper_bound}.\nWe only show the algorithm approximating the diameter.\nThe proof for radius is basically the same except that it finds the minimum (approximate) eccentricity instead of the maximum one.\n\nWe choose the parameters throughout this section.\n\\begin{equation}\n\\label{eqn:alg_parameters}\n\\varepsilon=1\/\\log n,r=n^{2\/5}D_G^{-1\/5},\\ell=n\\log n\/r,k=\\sqrt{D_G}.\n\\end{equation}\nAs mentioned in Section~\\ref{sec:results}, finding a node with maximum eccentricity among all nodes by directly applying a quantum search algorithm can hardly be done in $o(n)$ rounds.\nWe instead try to find a vertex set containing a node with maximum approximate eccentricity among $n$ vertex sets $S_1,\\cdots,S_n$, and then search such a node in this vertex set.\nEach set $S_i$ for $i\\in[1,n]$ is sampled by having each node $v\\in V$ join it independely with probability $r\/n$.\nFor such a random set and a node $s$ in it, Nanongkai showed in \\cite{Nanongkai14STOC} an efficient classical procedure to approximate its eccentricity (actually every node $v\\in V$ can know an approximation of the distance from $s$ to $v$).\n\n\n\n\\subsection{Computation of Approximate Eccentricity}\n\\label{sec:approx_ecc}\n\nFor convenience, we need to introduce several graph notations.\nGiven a weighted graph $(G,w)$, the hop distance between nodes $u$ and $v$, denoted by $h_{G,w}(u,v)$, is the minimum number of edges over all shortest paths between 
them.\nThe hop diameter of the weighted graph, denoted by $H_{G,w}$, is the maximum hop distance between any two nodes.\nFor $\\ell>0$, the $\\ell$-hop distance between $u$ and $v$, denoted by $d^\\ell_{G,w}(u,v)$, is the least length over all paths between them containing at most $\\ell$ edges.\nNote that $d^\\ell_{G,w}(u,v)=d_{G,w}(u,v)$ when $h_{G,w}(u,v)\\le\\ell$.\n\nIn general, Nanongkai~\\cite{Nanongkai14STOC} would approximate the bounded-hop distance, and sample a random set of key nodes as skeleton.\nThen it could approximate the distance from any key node $s$ to any node $v$ since, with high probability, any shortest path from $s$ to $v$ can be partitioned into bounded-hop shortest paths between key nodes, along with a tail path from some key node to $v$, as long as the number of key nodes is sufficiently large.\n\nHere we only list the necessary definitions of approximate bounded-hop distance, approximate distance, and approximate eccentricity.\nWe claim that these are good approximations.\nThe algorithms evaluating these quantities are presented in Appendix~\\ref{sec:toolkits}, and the detailed proof should be found in \\cite[arXiv version]{Nanongkai14STOC}.\nNote that we are given a weighted graph $(G,w)$ where $G=(V,E)$ and $w:E\\to\\mathbb N^+$.\n\n\\begin{lemma}[Theorem 3.3 in \\cite{Nanongkai14STOC}]\nGiven an integer $\\ell>0$.\nFor integer $i\\ge0$, define $w_i:E\\to\\mathbb N^+$ where $w_i(e)=\\left\\lceil\\frac{2\\ell w(e)}{\\varepsilon\\cdot2^i}\\right\\rceil$ for $e\\in E$.\nFor any $u,v\\in V$, the approximate bounded-hop distance is defined as\n$$\\widetilde d^\\ell_{G,w}(u,v)=\\min_i\\left\\{d_{G,w_i}(u,v)\\cdot\\frac{\\varepsilon\\cdot2^i}{2\\ell}:d_{G,w_i}(u,v)\\le\\left(1+\\frac2\\varepsilon\\right)\\ell\\right\\}.$$\nThen $d_{G,w}(u,v)\\le\\widetilde d^\\ell_{G,w}(u,v)\\le(1+\\varepsilon)d^\\ell_{G,w}(u,v)$.\n\\label{lem:bounded_hop_distance}\n\\end{lemma}\n\n\\begin{lemma}[Theorem 4.2 in \\cite{Nanongkai14STOC}]\nGiven a vertex set $S\\subseteq V$.\nLet the weighted complete graph $\\left(G'_S,w'_S\\right)$ be such that\n$$\n\\begin{aligned}\n& G'_S=\\left(S,\\tbinom{S}{2}\\right),w'_S:\\tbinom{S}{2}\\to\\mathbb N^+, \\\\\n& w'_S(\\{u,v\\})=\\widetilde d^\\ell_{G,w}(u,v),\\forall\\{u,v\\}\\in \\tbinom{S}{2}.\n\\end{aligned}\n$$\nFor node $v\\in S$, let $N^k_S(v)$ be the set of the $k$ nodes with the least distance from $v$ on $(G'_S,w'_S)$.\nAnd let the weighted complete graph $\\left(G''_S,w''_S\\right)$ be such that\n$$\n\\begin{aligned}\n& G''_S=\\left(S,\\tbinom{S}{2}\\right),w''_S:\\tbinom{S}{2}\\to\\mathbb N^+, \\\\\n& w''_S(\\{u,v\\})=\n\\begin{cases}\nd_{G'_S,w'_S}(u,v),\t& u\\in N^k_S(v)\\text{ or }v\\in N^k_S(u) \\\\\nw'_S(\\{u,v\\}), \t& \\text{otherwise}\n\\end{cases},\\forall\\{u,v\\}\\in \\tbinom{S}{2}.\n\\end{aligned}\n$$\nFor any $s\\in S$ and $v\\in V$, the approximate distance is defined as\n$$\\widetilde d_{G,w,S}(s,v)=\\min_{u\\in S}\\left\\{\\widetilde d^{4|S|\/k}_{G''_S,w''_S}(s,u)+\\widetilde d^\\ell_{G,w}(u,v)\\right\\}.$$\nIf $\\ell=n\\log n\/r$ and $S$ is sampled by having each node $v\\in V$ join it independetly with probability $r\/n$, then $d_{G,w}(s,v)\\le\\widetilde d_{G,w,S}(s,v)\\le(1+\\varepsilon)^2d_{G,w}(s,v)$ for all $s\\in S$ and $v\\in V$, with probability at least $1-2^{-cn}$, for some constant $c>0$ and sufficiently large $n$.\n\\label{lem:approx_distance}\n\\end{lemma}\n\n\\noindent\n{\\bf Remark.} We briefly explain why $\\widetilde d_{G,w,S}(\\cdot)$ is a good approximation.\nBy the choice of $\\ell$ and $S$, Lemma 
4.3 in \\cite{Nanongkai14STOC} says that, with high probability, any $s\\in S,v\\in V$ and shortest path $\\left(s\\leadsto v\\right)$ on $(G,w)$ is of the form $\\left(s=s_1\\leadsto\\cdots\\leadsto s_m=u\\leadsto v\\right)$ such that $s_i\\in S$ for $i\\in[1,m]$, $h_{G,w}(s_{i-1},s_i)\\le\\ell$ for $i\\in[2,m]$, and $h_{G,w}(u,v)\\le\\ell$.\nApparently $\\widetilde d_{G,w,S}\\ge d_{G,w}(s,v)$.\nOn the other side,\n\\begin{equation*}\n\\begin{split}\n\\widetilde d_{G,w,S}(s,v) & =\\widetilde d^{4|S|\/k}_{G''_S,w''_S}(s,u)+\\widetilde d^\\ell_{G,w}(u,v) \\\\\n& \\le(1+\\varepsilon)d^{4|S|\/k}_{G''_S,w''_S}(s,u)+\\widetilde d^\\ell_{G,w}(u,v) \\\\\n& =(1+\\varepsilon)d_{G''_S,w''_S}(s,u)+\\widetilde d^\\ell_{G,w}(u,v) \\\\\n& \\le(1+\\varepsilon)\\sum_{i=2}^mw'_S(\\{s_{i-1},s_i\\})+\\widetilde d^\\ell_{G,w}(u,v) \\\\\n& \\le(1+\\varepsilon)\\sum_{i=2}^m\\widetilde d^\\ell_{G,w}(s_{i-1},s_i)+\\widetilde d^\\ell_{G,w}(u,v) \\\\\n& \\le(1+\\varepsilon)^2\\left(\\sum_{i=2}^md^\\ell_{G,w}(s_{i-1},s_i)+d^\\ell_{G,w}(u,v)\\right) \\\\\n& =(1+\\varepsilon)^2\\left(\\sum_{i=2}^md_{G,w}(s_{i-1},s_i)+d_{G,w}(u,v)\\right) \\\\\n& =(1+\\varepsilon)^2d_{G,w}(s,v).\n\\end{split}\n\\end{equation*}\nThe second and sixth lines are due to Lemma~\\ref{lem:bounded_hop_distance}.\nThe third line is due to Theorem 3.10 in \\cite{Nanongkai14STOC}, which says that $H_{G''_S,w''_S}<4|S|\/k$ since $(G''_S,w''_S)$ is the {\\it $k$-shortcut graph} of $(G'_S,w'_S)$.\n\n\\bigskip\n\nFor $i\\in[1,n]$, we rewrite $G''_{S_i},w''_{S_i},\\widetilde d_{G,w,S_i}(\\cdot)$ as $G''_i,w''_i,\\widetilde d_{G,w,i}(\\cdot)$ for short.\nFor any $s\\in S_i$, the approximate eccentricity is defined as $\\widetilde e_{G,w,i}(s)=\\max_{v\\in V}\\widetilde d_{G,w,i}(s,v)$.\nDefine two good events:\n\\begin{itemize}\n\\item {\\bf Good-Scale:} For all $i\\in[1,n]$, $|S_i|=\\Theta(r)$.\nBesides, let $v^\\star\\in V$ be a node with maximum eccentricity, i.e., $e_{G,w}(v^\\star)=D_{G,w}$, then $v^\\star$ joins $\\beta=\\Theta(r)$ sets $S_{i_1},\\cdots,S_{i_\\beta}$.\n\\item {\\bf Good-Approximation:} For all $i\\in[1,n]$ and $s\\in S_i,v\\in V$, $d_{G,w}(s,v)\\le\\widetilde d_{G,w,i}(s,v)\\le(1+\\varepsilon)^2d_{G,w}(s,v)$, thus $e_{G,w}(s)\\le\\widetilde e_{G,w,i}(s)\\le(1+\\varepsilon)^2e_{G,w}(s)$.\n\\end{itemize}\nBy Chernoff inequality and a union bound, the event Good-Scale occurs with probability at least $1-1\/\\text{poly}(n)$.\nBy Lemma~\\ref{lem:approx_distance}\nand a union bound, the event Good-Approximation occurs with probability at least $1-1\/\\text{poly}(n)$.\nTherefore, we can assume that the two events all happen in the following context.\n\n\n\n\\section{Quantization}\n\nFor each $i\\in[1,n]$, we define $f_i:S_i\\to\\mathbb Z$ where $f_i(s)=\\widetilde e_{G,w,i}(s)$ for $s\\in S_i$, and $f:[1,n]\\to\\mathbb Z$ where $f(i)=\\max_{s\\in S_i}f_i(s)$ for $i\\in[1,n]$.\n\n\\begin{lemma}\nThe number of $i\\in[1,n]$ satisfying $f(i)\\ge D_{G,w}$ is $\\Theta(r)$.\nMoreover, $f(i)\\le(1+\\varepsilon)^2D_{G,w}$ for all $i\\in[1,n]$.\n\\label{lem:approximation}\n\\end{lemma}\n\n\\begin{proof}\n$$\n\\begin{aligned}\n& f(i_j)=\\max_{s\\in S_{i_j}}\\widetilde e_{G,w,i_j}(s)\\ge\\max_{s\\in S_{i_j}}e_{G,w}(s)\\ge e_{G,w}(v^\\star)=D_{G,w}, & \\forall j\\in[1,\\beta]; \\\\\n& f(i)=\\max_{s\\in S_i}\\widetilde e_{G,w,i}(s)\\le\\max_{s\\in S_i}(1+\\varepsilon)^2e_{G,w}(s)\\le(1+\\varepsilon)^2D_{G,w}, & \\forall i\\in[1,n].\n\\end{aligned}\n$$\n\\end{proof}\n\n\\begin{lemma}\nGiven $i\\in[1,n]$, there exists a quantum procedure performing the 
transformation\n$$\\bigotimes_{v\\in V}\\ket i_v\\ket0_{{\\rm leader}}\\mapsto\\bigotimes_{v\\in V}\\ket i_v\\ket{f(i)}_{{\\rm leader}}$$\nin the quantum CONGEST model, and taking $\\widetilde O\\left(D_G+\\frac n{\\varepsilon\\cdot r}+rk+\\sqrt r\\left(\\frac r{\\varepsilon\\cdot k}\\cdot D_G+r\\right)\\right)$ rounds, with probability at least $1-1\/{\\rm poly}(n)$.\n\\label{lem:evaluation}\n\\end{lemma}\n\n\\begin{proof}\nWe give the quantum procedure maximizing $f_i$ (thus evaluating $f(i)$) by following the framework of distributed quantum optimization:\n\\begin{itemize}\n\\item {\\bf $\\text{Initialization}_i$:} Perform the transformation\n$$\n\\bigotimes_{v\\in V}\\ket i_v\\mapsto\\bigotimes_{v\\in V}\\ket i_v\\ket0_I\\ket{\\text{init}_i},$$\nwhere\n$$\\ket{\\text{init}_i}=\\bigotimes_{v\\in V,u\\in S_i}\\left|\\widetilde d^\\ell_{G,w}(u,v)\\right\\rangle_v\\ket{G''_i,w''_i},$$\nand $d^\\ell_{G,w}(u,v)$ is given in Lemma~\\ref{lem:bounded_hop_distance}, $G''_i$ and $w''_i$ are given in lemma~\\ref{lem:approx_distance}.\n\n\\item {\\bf $\\text{Setup}_i$:} Perform the transformation\n$$\\bigotimes_{v\\in V}\\ket i_v\\ket0_I\\ket{\\text{init}_i}\\mapsto\\bigotimes_{v\\in V}\\ket i_v\\left(\\sum_{s\\in S_i}\\frac1{|S_i|}\\ket s_I\\ket{\\text{data}_i(s)}\\right)\\ket{\\text{init}_i},$$\nwhere\n$\\ket{\\text{data}_i(s)}=\\bigotimes_{v\\in V}\\ket s_v\\bigotimes_{v\\in V,u\\in S_i}\\left|\\widetilde d_{G''_i,w''_i}(s,u)\\right\\rangle_v$.\n\n\\item {\\bf $\\text{Evaluation}_i$:} Perform the transformation\n$$\\bigotimes_{v\\in V}\\ket i_v\\left(\\ket{s,0}_I\\ket{\\text{data}_i(s)}\\right)\\ket{\\text{init}_i}\\mapsto\\bigotimes_{v\\in V}\\ket i_v\\left(\\ket{s,f_i(s)}_I\\ket{\\text{data}_i(s)}\\right)\\ket{\\text{init}_i}.$$\n\\end{itemize}\n\nWe now analyze the round complexity:\n\\begin{itemize}\n\\item In $\\widetilde O\\left(D_G+\\frac n{\\varepsilon\\cdot r}+r\\right)$ rounds, each $v\\in V$ can know $\\widetilde d^\\ell_{G,w}(u,v)$ for each $u\\in S_i$, with high probability, due to Lemma~\\ref{lem:bounded_hop_mssp}.\nAfter that, the overlay network $\\left(G''_i,w''_i\\right)$ can be embedded in $O(D_G+rk)$ rounds due to Lemma~\\ref{lem:embedding} (we say that the network $G=(V,E)$ embeds an overlay network $G'=(V',E')$ with a weight function $w':E'\\to\\mathbb N^+$ if $V'\\subseteq V$ and for each $v\\in V'$, it stores each $e\\in E'$ incident to $v$ along with $w'(e)$ in the local memory).\nTherefore, the procedure $\\text{Initialization}_i$ can be implemented in $T_0=\\widetilde O\\left(D_G+\\frac n{\\varepsilon\\cdot r}+rk\\right)$ rounds.\n\\item The node leader can collect $S_i$ in $O(D_G+r)$ rounds.\nIt then prepares the quantum state $\\sum_{s\\in S_i}\\frac1{|S_i|}\\ket s_I$ and broadcasts to all nodes using CNOT copies, in $O(D_G)$ rounds.\nThus, the transformation\n$$\\bigotimes_{v\\in V}\\ket i_v\\ket0_I\\ket{\\text{init}_i}\\mapsto\\bigotimes_{v\\in V}\\ket i_v\\left(\\sum_{s\\in S_i}\\frac1{|S_i|}\\ket s_I\\bigotimes_{v\\in V}\\ket s_v\\right)\\ket{\\text{init}_i}$$\ncan be implemented in $O(D_G+r)$ rounds.\nBesides, the transformation\n$$\\bigotimes_{v\\in V}\\ket i_v\\bigotimes_{v\\in V}\\ket s_v\\ket{\\text{init}_i}\\mapsto\\bigotimes_{v\\in V}\\ket i_v\\bigotimes_{v\\in V}\\ket s_v\\bigotimes_{v\\in V,u\\in S_i}\\left|\\widetilde d^{4|S_i|\/k}_{G''_i,w''_i}(s,u)\\right\\rangle_v\\ket{\\text{init}_i}$$\ncan be implemented in $T_1=\\widetilde O\\left(\\frac r{\\varepsilon\\cdot k}\\cdot D_G+r\\right)$ rounds since Lemma~\\ref{lem:sssp_on_overlay} implies that, after 
the overlay network $\\left(G''_i,w''_i\\right)$ is embedded, each $v\\in V$ can know $\\widetilde d^{4|S_i|\/k}_{G''_i,w''_i}(s,u)$ for each $u\\in S_i$ within $T_1$ rounds.\nTherefore, the procedure $\\text{Setup}_i$ can be implemented in $T_1=\\widetilde O\\left(\\frac r{\\varepsilon\\cdot k}\\cdot D_G+r\\right)$ rounds.\n\\item For the procedure $\\text{Evaluation}_i$, recall that $f_i(s)=\\max_{v\\in V}\\widetilde d_{G,w,i}(s,v)$ where\n$$\\widetilde d_{G,w,i}(s,v)=\\min_{u\\in S_i}\\left\\{\\widetilde d^{4|S_i|\/k}_{G''_i,w''_i}(s,u)+\\widetilde d^\\ell_{G,w}(u,v)\\right\\}.$$\nBy definition, for any $v\\in V$ and $u\\in S_i$, $\\widetilde d^{4|S_i|\/k}_{G''_i,w''_i}(s,u)$ and $\\widetilde d^\\ell_{G,w}(u,v)$ have been stored in the local memory of $v$, i.e., $\\ket\\cdot_v$.\nThus, each $v\\in V$ can locally compute $\\widetilde d_{G,w,S_i}(s,v)$, and the node leader can compute the maximum by converge-casting in $O(D_G)$ rounds.\nSo the procedure $\\text{Evaluation}_i$ can be implemented in $T_2=O(D_G)$ rounds.\n\\end{itemize}\nBy Lemma~\\ref{lem:optimization}, there exists a quantum procedure maximizing $f_i$ in $\\widetilde O(T_0+\\sqrt r(T_1+T_2))$ rounds with high probability.\n\\end{proof}\n\n\\begin{proof}[Proof of Theorem~\\ref{thm:upper_bound}]\nWe give a quantum procedure maximizing $f$ also by following the framework of distributed quantum optimization:\n\\begin{itemize}\n\\item \\textbf{Initialization} is a classical procedure which samples vertex sets $S_1,\\cdots,S_n$, and $\\ket{\\text{init}}$ represents the corresponding classical information.\n\\item \\textbf{Setup:} Perform the transformation\n$$\\ket0_I\\ket{\\text{init}}\\mapsto\\sum_{i=1}^n\\frac1n\\ket i_I\\ket{\\text{data}(i)}\\ket{\\text{init}},$$\nwhere $\\ket{\\text{data}(i)}=\\bigotimes_{v\\in V}\\ket i_v$.\n\\item \\textbf{Evaluation:} Perform the transformation\n$$\\ket{i,0}_I\\ket{\\text{data}(i)}\\ket{\\text{init}}\\mapsto\\ket{i,f(i)}_I\\ket{\\text{data}(i)}\\ket{\\text{init}}.$$\n\\end{itemize}\n\nWe now analyze the round complexity:\n\\begin{itemize}\n\\item $S_1,\\cdots,S_n$ are sampled locally in parallel, and the procedure Initialization is free, i.e., $T_0=0$.\n\\item The node leader prepares the quantum state $\\sum_{i=1}^n\\frac1n\\ket i_I$ and broadcast using CNOT copies to all nodes.\nTherefore, the procedure Setup can be implemented in $T_1=O(D_G)$ rounds.\n\\item The procedure Evaluation can be of $T_2=\\widetilde O\\left(D_G+\\frac n{\\varepsilon\\cdot r}+rk+\\sqrt r\\left(\\frac r{\\varepsilon\\cdot k}\\cdot D_G+r\\right)\\right)$ rounds by Lemma~\\ref{lem:evaluation}.\n\\end{itemize}\n\nBy Lemma~\\ref{lem:optimization} and Lemma~\\ref{lem:approximation}, there exists a quantum procedure that find, with high probability, some $i\\in[1,n]$ such that $D_{G,w}\\le f(i)\\le(1+\\varepsilon)^2D_{G,w}$, in\n$$\\widetilde O(T_0+\\sqrt{n\/r}(T_1+T_2))=\\widetilde O\\left(\\sqrt{n\/r}\\left(D_G+\\frac n{\\varepsilon\\cdot r}+rk+\\sqrt r\\left(\\frac r{\\varepsilon\\cdot k}\\cdot D_G+r\\right)\\right)\\right)$$\nrounds.\nBy the choice of the parameters in Eq.~\\eqref{eqn:alg_parameters}, Theorem~\\ref{thm:upper_bound} follows.\n\\end{proof}\n\\section{Toolkits in Nanongkai's Algorithm}\n\\label{sec:toolkits}\n\nLet $G=(V,E)$ be a distributed network with a weight function $w:E\\to\\mathbb N^+$ and a pre-defined node $\\text{leader}\\in V$.\nWe assume that each node initially knows $n=|V|$ and $W=\\max_{e\\in E}w(e)$.\nThe parameters $\\varepsilon,r,\\ell,k$ are chosen the same as in 
Eq.~\\eqref{eqn:alg_parameters}.\nWe follow the background of Section~\\ref{sec:approx_ecc}.\nGiven a vertex set $S\\subseteq V$, let $\\widetilde d^\\ell_{G,w}(\\cdot)$, $(G'_S,w'_S)$, $N^k_S(\\cdot)$, $(G''_S,w''_S)$, $\\widetilde d_{G,w,S}(\\cdot)$ be as defined in Lemma~\\ref{lem:bounded_hop_distance} and Lemma~\\ref{lem:approx_distance}.\nThe following lemmas and algorithms are summarized from \\cite[arXiv version]{Nanongkai14STOC}.\n\n\\begin{lemma}[Theorem 3.2 in \\cite{Nanongkai14STOC}]\nFor $s\\in V$ known to all nodes, there exists an algorithm (Algorithm~\\ref{alg:bounded_hop_sssp}) such that in $\\widetilde O(\\ell\/\\varepsilon)$ rounds, each $v\\in V$ knows $\\widetilde d^\\ell_{G,w}(s,v)$, and during the whole computation, each node broadcasts $O(\\log n)$ messages of size $O(\\log n)$ to its neighbors.\n\\label{lem:bounded_hop_sssp}\n\\end{lemma}\n\n\\begin{algorithm}\n\\caption{Bounded-Hop SSSP $(G,w,s,\\ell,\\varepsilon)$}\n\\label{alg:bounded_hop_sssp}\n\\begin{algorithmic}[1]\n\\Require Network $(G,w)$, source node $s$ and parameters $\\ell,\\varepsilon>0$.\n\\Ensure Each node $v$ knows $\\widetilde d^\\ell_{G,w}(s,v)$.\n\\State Initially, $\\widetilde d^\\ell_{G,w}(s,v)\\leftarrow\\infty$ for each $v\\in V$.\n\\For{$i=0$ to $\\log\\frac{2nW}\\varepsilon$}\n\\State Run bounded-distance SSSP with parameters $(G,w_i,s,(1+2\/\\varepsilon)\\ell)$ using Algorithm~\\ref{alg:bounded_distance_sssp}.\n\\For{each $v\\in V$} in parallel\n\\If{$d_{G,w_i}(s,v)\\le(1+2\/\\varepsilon)\\ell$}\n\\State $\\widetilde d^\\ell_{G,w}(s,v)\\leftarrow\\min\\left\\{\\widetilde d^\\ell_{G,w}(s,v),d_{G,w_i}(s,v)\\right\\}$.\n\\EndIf\n\\EndFor\n\\EndFor\n\\end{algorithmic}\n\\end{algorithm}\n\n\\begin{algorithm}\n\\caption{Bounded-Distance SSSP $(G,w,s,L)$}\n\\label{alg:bounded_distance_sssp}\n\\begin{algorithmic}[1]\n\\Require Network $(G,w)$, source node $s$ and parameter $L>0$.\n\\Ensure Each node $v$ knows whether $d_{G,w}(s,v)\\le L$, and if so, it further knows $d_{G,w}(s,v)$.\n\\State Initially, $d_{G,w}(s,s)\\leftarrow0$ and $d_{G,w}(s,v)\\leftarrow\\infty$ for each $v\\ne s$.\n\\State Let $t$ be the time this algorithm starts.\n\\For{round $r=t$ to $t+L$}\n\\For{each $v\\in V$} in parallel\n\\For{each message $(u,d_{G,w}(s,u))$ received in the previous round}\n\\If{$d_{G,w}(s,u)+w(\\{u,v\\})\\le L$}\n\\State $d_{G,w}(s,v)\\leftarrow\\min\\left\\{d_{G,w}(s,v),d_{G,w}(s,u)+w(\\{u,v\\})\\right\\}$.\n\\EndIf\n\\EndFor\n\\If{$d_{G,w}(s,v)=r-t$}\n\\State $v$ broadcasts message $(v,d_{G,w}(s,v))$ to all neighbors.\n\\EndIf\n\\EndFor\n\\EndFor\n\\end{algorithmic}\n\\end{algorithm}\n\n\\begin{lemma}[Theorem 3.6 and Lemma 3.7 in \\cite{Nanongkai14STOC}]\nThere exist an algorithm (Algorithm~\\ref{alg:bounded_hop_mssp}) such that in $\\widetilde O(D_G+\\ell\/\\varepsilon+|S|)$ rounds, each node $v\\in V$ knows $\\widetilde d^\\ell_{G,w}(s,v)$ for each $s\\in S$, with probability of failure at most $n^{-c}$, for any constant $c>0$ and sufficiently large $n$.\n\\label{lem:bounded_hop_mssp}\n\\end{lemma}\n\n\\begin{algorithm}[!h]\n\\caption{Bounded-Hop Multi-Source Shortest Paths $(G,w,S,\\ell,\\varepsilon)$}\n\\label{alg:bounded_hop_mssp}\n\\begin{algorithmic}[1]\n\\Require Network $(G,w)$, set of source nodes $S$ and parameters $\\ell,\\varepsilon>0$.\n\\Ensure With high probability, each node $v$ knows $\\widetilde d^\\ell_{G,w}(s,v)$ for each $s\\in S$.\n\\State Assume that $S=\\{s_1,\\cdots,s_b\\}$.\nLet $\\mathcal A_i$ be the Algorithm~\\ref{alg:bounded_hop_sssp} with parameters 
$(G,w,s_i,\\ell,\\varepsilon)$ for each $i\\in[1,k]$ (each $\\mathcal A_i$ is of $T=\\widetilde O(\\ell\/\\varepsilon)$ rounds, and during the whole computation of $\\mathcal A_i$, each node broadcasts $O(\\log n)$ messages to its neighbors due to Lemma~\\ref{lem:bounded_hop_sssp}).\n\\State The node leader samples $\\Delta_1,\\cdots,\\Delta_b\\in[0,b\\log n]$ independently and uniformly at random for delaying algorithms $\\mathcal A_1,\\cdots,\\mathcal A_k$, and broadcasts them by pipelining in $O(D_G+b)$ rounds.\n\\For{$r=1$ to $T+b\\log n$}\n\\For{each $v\\in V$} in parallel\n\\State Let $a=|\\ \\{i\\in[1,b]:v\\text{ broadcasts a message in the }(r-\\Delta_i)\\text{-th round of }\\mathcal A_i\\}\\ |$.\n\\If{$a\\le\\lceil\\log n\\rceil$}\n\\State $v$ broadcasts these $a$ messages in the next $\\lceil\\log n\\rceil$ rounds.\n\\Else\n\\State The algorithm fails.\n\\EndIf\n\\EndFor\n\\EndFor\n\\end{algorithmic}\n\\end{algorithm}\n\n\\newpage\n\n\\begin{lemma}[Theorem 4.5 in \\cite{Nanongkai14STOC}]\nAfter the overlay network $(G'_S,w'_S)$ is embedded, there exists an algorithm (Algorithm~\\ref{alg:embedding}) which further embeds the overlay network $(G''_S,w''_S)$ in $\\widetilde O(D_G+|S|k)$\nrounds.\n\\label{lem:embedding}\n\\end{lemma}\n\n\\begin{algorithm}\n\\caption{Embedding Overlay Network $(G,w,S,G'_S,w'_S,k)$}\n\\label{alg:embedding}\n\\begin{algorithmic}[1]\n\\Require Network $(G,w)$, set of source nodes $S$, overlay network $(G'_S,w'_S)$ and parameter $k>0$.\n\\Ensure It embeds the overlay network $(G''_S,w''_S)$.\n\\State Each node $s\\in S$ broadcasts the $k$ shortest edges incident to it on $(G'_S,w'_S)$ (this can be done in $O(D_G+|S|k)$ rounds).\n\\For{each $s\\in S$} locally\n\\State $s$ computes $N^k_S(s)$, along with the weight $w''_S(\\{s,v\\})=d_{G'_S,w'_S}(s,v)$ for each $v\\in N^k_S(s)$ (this can be done due to Observation 3.12 in \\cite{Nanongkai14STOC}).\n\\EndFor\n\\end{algorithmic}\n\\end{algorithm}\n\n\\begin{lemma}[Lemma 4.6 in \\cite{Nanongkai14STOC}]\nFor node $s\\in S$ known to all nodes, after the overlay network $(G''_S,w''_S)$ is embedded, there exists an algorithm (Algorithm~\\ref{alg:sssp_on_overlay}) such that in $\\widetilde O\\left(\\frac{|S|}{\\varepsilon\\cdot k}\\cdot D_G+|S|\\right)$ rounds, each node $v\\in V$ knows for each $u\\in S$ the value of $\\widetilde d^{4|S|\/k}_{G''_S,w''_S}(s,u)$.\n\\label{lem:sssp_on_overlay}\n\\end{lemma}\n\n\\begin{algorithm}\n\\caption{SSSP on Overlay Network $(G,w,S,\\varepsilon,k,G''_S,w''_S,s)$}\n\\label{alg:sssp_on_overlay}\n\\begin{algorithmic}[1]\n\\Require Network $(G,w)$, set of source nodes $S$, parameters $\\varepsilon,k>0$, overlay network $(G''_S,w''_S)$ and source node $s\\in S$.\n\\Ensure Each node $v$ knows $\\widetilde d^{4|S|\/k}_{G''_S,w''_S}(s,u)$ for each $u\\in S$.\n\\State Let $\\mathcal A$ be the Algorithm~\\ref{alg:bounded_hop_sssp} with parameters $(G''_S,w''_S,s,4|S|\/k,\\varepsilon)$ ($\\mathcal A$ is of $T=\\widetilde O\\left(\\frac{|S|}{\\varepsilon\\cdot k}\\right)$ rounds, and during the whole computation of $\\mathcal A$, each node broadcasts $O(\\log n)$ messages to its neighbors due to Lemma~\\ref{lem:bounded_hop_sssp}).\n\\For{$r=1$ to $T$}\n\\State Let $a$ be the number of nodes in $G''_S$ that want to broadcast a message to its neighbors in $G''_S$ in the $r$-th round of $\\mathcal A$.\nCount $a$ and make every nodes in $G$ knows $a$ in $O(D_G)$ rounds.\n\\State Each node in $G''_S$, which wants to send a message to each of its neighbors in $G''_S$, broadcasts such message to 
all nodes in $G$ (this takes $O(D_G+a)$ rounds).\n\\EndFor\n\\end{algorithmic}\n\\end{algorithm}\n\\section*{Acknowledgements}\nWe thank the anonymous reviewers' feedback.\nThis work was supported in part by, the National Key R\\&D Program of China 2018YFB1003202, National Natural Science Foundation of China (Grant No. 61972191), the Program for Innovative Talents and Entrepreneur in Jiangsu, and Anhui Initiative in Quantum Information Technologies (Grant No. AHY150100).\n\n\\bibliographystyle{acm}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction and main results}\nThe study of real-valued topological cocycles and real skew product extensions has been initiated by Besicovitch, Gottschalk, and Hedlund.\nBesicovitch \\cite{Be} proved the existence of point transitive real skew product extensions of an irrational rotation on the one-dimensional torus.\nFurthermore, he proved that none of them is minimal, i.e. there are always non-transitive points for a point transitive real skew product extension.\nThe main result in Chapter 14 of \\cite{G-H} can be rephrased to the dichotomy that a topologically conservative real skew product extension of a minimal rotation on a torus (finite or infinite dimensional) is either point transitive or it is defined by a topological coboundary and almost periodic.\nThis result and a generalisation to skew product extensions of a Kronecker transformation (cf. \\cite{LM}) exploit the isometric behaviour of a minimal rotation.\nA corresponding result apart from isometries is based on homotopy conditions for the class of distal minimal homeomorphisms usually called Furstenberg transformations (cf. \\cite{Gr}).\nHowever, in general this dichotomy is not valid, and counterexamples can be provided by the Rokhlin skew products of the so-called topological type $III_0$.\nThis motivates the study of topologically conservative real skew product extensions of compact flows apart from isometries and toral extensions, which is carried out in this note for \\emph{distal} minimal flows with Abelian compactly generated acting groups.\n\nThroughout this note we shall denote by $T$ a \\emph{compactly generated Abelian} Hausdorff topological group acting continuously on a compact metric phase space $(X,d)$ so that $(X,T)$ is a \\emph{compact metric flow}.\nIn the monograph \\cite{G-H} such an acting group $T$ is called \\emph{generative}, and notions of recurrence are provided for such Abelian acting groups apart from $\\mathbb Z$ and $\\mathbb R$.\nFor a $\\mathbb Z$-action on $X$ we let $T$ be the self-homeomorphism of $X$ generating the action by $(n,x)\\mapsto T^n x$, while in the case of a real flow we shall use the notation $\\{\\phi^t:t\\in\\mathbb R\\}$ for the acting group.\nWe call a flow \\emph{minimal} if the whole phase space is the only non-empty invariant closed subset, and then for every $x\\in X$ the \\emph{orbit closure} $\\bar{\\mathcal O}_T(x)=\\overline{\\{\\tau x:\\tau\\in T\\}}$ is all of $X$.\nA flow $(X,T)$ is \\emph{topologically transitive} if for arbitrary open neighbourhoods $\\mathcal U,\\mathcal V\\subset X$ there exists some $\\tau\\in T$ with $\\tau\\mathcal U\\cap\\mathcal V\\neq\\emptyset$, and it is \\emph{weakly mixing} if the flow $(X\\times X,T)$ with the diagonal action is topologically transitive.\nFor a topologically transitive flow $(X,T)$ with complete separable metric phase space there exists a dense $G_\\delta$-set of \\emph{transitive points} $x$ with $\\bar{\\mathcal O}_T(x)=X$, and a flow with 
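\n\n(For instance, the rotation of the circle by an irrational angle is minimal and hence topologically transitive, but it is not weakly mixing: for the diagonal action on the product, the orbit closure of a pair of points is the corresponding coset of the diagonal and is never dense.)\n\n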
transitive points is \\emph{point transitive}.\nIf $(X,T)$ and $(Y,T)$ are flows with the same acting group $T$ and $\\pi:X\\longrightarrow Y$ is a continuous \\emph{onto} mapping with $\\pi(\\tau x)=\\tau\\pi(x)$ for every $\\tau\\in T$ and $x\\in X$, then $(Y,T)=\\pi(X,T)$ is called a factor of $(X,T)$ and $(X,T)$ is called an extension of $(Y,T)$.\nSuch a mapping $\\pi$ is called a \\emph{homomorphism} of the flows $(X,T)$ and $(Y,T)$.\nThe set of bicontinuous bijective homomorphisms of a flow $(X,T)$ onto itself is the topological group $\\textup{Aut}(X,T)$ of \\emph{automorphisms} of $(X,T)$ with the topology of uniform convergence.\nTwo points $x,y\\in X$ are called \\emph{distal} if\n\\begin{equation*}\n\\inf_{\\tau\\in T}d(\\tau x,\\tau y)> 0 ,\n\\end{equation*}\notherwise they are called \\emph{proximal}.\nFor a general compact Hausdorff flow $(X,T)$ distality of two points $x,y\\in X$ is defined by the absence of any nets $\\{\\tau_n\\}_{n\\in I}\\subset T$ with $\\lim \\tau_n x=\\lim \\tau_n y$.\nA flow is called distal if any two distinct points are distal, and an extension of flows is called distal if any two distinct points in the same fibre are distal.\nAn important property of distal compact flows is the \\emph{partitioning} of the phase space into invariant closed minimal subsets, even if the flow is not minimal.\n\nSuppose that $\\mathbb A$ is an Abelian locally compact second countable (Abelian l.c.s.) group with zero element $\\mathbf 0_\\mathbb A$, and let $\\mathbb A_\\infty$ denote its one point compactification with the convention that $g+\\infty=\\infty+g=\\infty$ for every $g\\in \\mathbb A$.\nA cocycle of a compact metric flow $(X,T)$ is a continuous mapping $f:T\\times X\\longrightarrow \\mathbb A$ with the identity\n\\begin{equation*}\nf(\\tau,\\tau' x)+f(\\tau',x)=f(\\tau\\tau',x)\n\\end{equation*}\nfor all $\\tau,\\tau'\\in T$ and $x\\in X$.\nGiven a compact metric $\\mathbb Z$-flow $(X,T)$ and a continuous function $f:X\\longrightarrow \\mathbb A$, we can define a cocycle $f:\\mathbb{Z}\\times X \\longrightarrow \\mathbb A$ with $f(1,\\cdot)\\equiv f$ by\n\\begin{equation*}\nf(n,x)=\n\\begin{cases}\n\\sum_{k=0}^{n-1}f(T^k x) & \\textup{if}\\enspace n\\geq 1 ,\n\\\\\n\\mathbf 0_\\mathbb A & \\textup{if}\\enspace n=0 ,\n\\\\\n-f(-n,T^n x) & \\textup{if}\\enspace n < 0.\n\\end{cases}\n\\end{equation*}\nMoreover, there is a natural occurrence of cocycles of $\\mathbb R$-flows as solutions to ODE's.\nSuppose that $(M,\\{\\phi^t:t\\in\\mathbb R\\})$ is a smooth flow on a compact manifold $M$ and $A:M\\longrightarrow\\mathbb R$ is a smooth function.\nThen a continuous real valued cocycle $f(t,m)$ of the flow $(M,\\{\\phi^t:t\\in\\mathbb R\\})$ is given by the fundamental solution to the ODE\n\\begin{equation*}\n\\frac{d\\, f(t,m)}{d\\, t}=A(\\phi^t(m))\n\\end{equation*}\nwith the initial condition $f(0,m)=0$.\nThe \\emph{skew product extension} of the flow $(X,T)$ by a cocycle $f:T\\times X\\longrightarrow \\mathbb A$ is defined by the homeomorphisms\n\\begin{equation*}\n\\widetilde\\tau_f(x,a)=(\\tau x, f(\\tau,x)+ a)\n\\end{equation*}\nof $X\\times \\mathbb A$ for all $\\tau\\in T$, which provide a continuous action $(\\tau,x,a)\\mapsto\\widetilde\\tau_f(x,a)$ of $T$ on $X\\times \\mathbb A$ by the cocycle identity.\nFor a $\\mathbb Z$-flow $(X,T)$ this action is generated by\n\\begin{equation*}\n\\widetilde T_f(x,a)=(T x, f(x)+ a) .\n\\end{equation*}\nThe essential property of a skew product is that the $T$-action on $X\\times\\mathbb A$ commutes with the right 
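\n\nAs a concrete (purely illustrative) instance of these definitions, the cocycle generated by a single continuous function over an irrational rotation of the circle can be tabulated numerically.\nIn the following sketch the rotation number, the generating function, and the base point are arbitrary choices, not tied to any construction in this note.\n\\begin{verbatim}\nimport math\n\nALPHA = math.sqrt(2) - 1.0        # an irrational rotation number (assumed)\n\ndef f1(x):                        # the generator f = f(1, .), illustrative only\n    return math.cos(2.0 * math.pi * x)\n\ndef cocycle(n, x):\n    # f(n, x) as defined by the sums above (empty sum for n = 0)\n    if n == 0:\n        return 0.0\n    if n > 0:\n        return sum(f1((x + k * ALPHA) % 1.0) for k in range(n))\n    return -cocycle(-n, (x + n * ALPHA) % 1.0)   # f(n, x) = -f(-n, T^n x) for n < 0\n\ndef skew_product(n, x, a):\n    # the n-fold application of (x, a) -> (T x, f(x) + a)\n    return ((x + n * ALPHA) % 1.0, a + cocycle(n, x))\n\nx0 = 0.123\nprint(skew_product(7, x0, 0.0))   # a point on the skew product orbit of (x0, 0)\n\n# the cocycle identity f(m, T^n x) + f(n, x) = f(m + n, x), up to rounding:\nfor m, n in [(3, 5), (4, -2), (-3, -6)]:\n    lhs = cocycle(m, (x0 + n * ALPHA) % 1.0) + cocycle(n, x0)\n    assert abs(lhs - cocycle(m + n, x0)) < 1e-9\n\\end{verbatim}\n\n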
translation action of the group $\\mathbb A$ on $X\\times\\mathbb A$, which is defined by $$R_b(x,a)=(x,a-b)$$\n for every $b\\in\\mathbb A$.\nThe orbit closure of $(x,a)\\in X\\times\\mathbb A$ under $\\widetilde\\tau_f$ will be denoted by $\\bar{\\mathcal O}_{T,f}(x,a)=\\overline{\\{\\widetilde\\tau_f(x,a):\\tau\\in T\\}}$.\n\nThe \\emph{prolongation} $\\mathcal D_T(x)$ of $x\\in X$ under the group action of $T$ is defined by\n\\begin{equation*}\n\\mathcal D_T(x)=\\bigcap\\{\\bar{\\mathcal O}_T(\\mathcal U):\\mathcal U\\enspace\\textup{is an open neighbourhood of}\\enspace x\\},\n\\end{equation*}\nand we shall use the notation $\\mathcal D_{T,f}(x,a)$ for the prolongation of a point $(x,a)\\in X\\times \\mathbb A$ under the skew product action $\\widetilde\\tau_f$.\n\nWhile the inclusion of the orbit closure in the prolongation is obvious, the coincidence of these sets is generic by a result from the paper \\cite{Gl3}.\nThis result, one of our main tools, is usually referred to as ``topological ergodic decomposition''.\n\n\\begin{fact}\\label{fact:o_p}\nFor every compact \\emph{metric} flow $(X,T)$ there exists a $T$-invariant dense $G_\\delta$ set $\\mathcal F\\subset X$ so that for every $x\\in\\mathcal F$ holds\n\\begin{equation*}\n\\bar{\\mathcal O}_{T}(x)=\\mathcal D_{T}(x) .\n\\end{equation*}\nFor a skew product extension $\\widetilde\\tau_f$ of $(X,T)$ by a cocycle $f:T\\times X\\longrightarrow \\mathbb A$ there exists a $T$-invariant dense $G_\\delta$ set $\\mathcal F\\subset X$ so that for every $x\\in\\mathcal F$ and \\emph{every} $a\\in \\mathbb A$ holds\n\\begin{equation*}\n\\bar{\\mathcal O}_{T,f}(x,a)=\\mathcal D_{T,f}(x,a) .\n\\end{equation*}\nThis assertion holds as well for the extension of $\\widetilde\\tau_f$ to $X\\times \\mathbb A_\\infty$ which is defined by $(x,\\infty)\\mapsto (\\tau x,\\infty)$ for every $x\\in X$, and given an $\\mathbb R^2$-valued topological cocycle $g=(g_1,g_2):T\\times X\\longrightarrow\\mathbb R^2$ for the extension of $\\widetilde\\tau_g$ to $X\\times(\\mathbb R_\\infty)^2$ which is defined by $(x,s,\\infty)\\mapsto (\\tau x,s+g_1(x),\\infty)$, $(x,\\infty,t)\\mapsto (\\tau x,\\infty,t+g_2(x))$, and $(x,\\infty,\\infty)\\mapsto (\\tau x,\\infty,\\infty)$, for every $x\\in X$ and $s,t\\in\\mathbb R$.\n\\end{fact}\n\n\\begin{proof}\nThe statement for a compact metric phase space and a general acting group is according to Theorem 1 of \\cite{AkGl}.\nThe other statements can be verified by means of the extension of $\\widetilde\\tau_f$ onto the compactification of $X\\times\\mathbb A$.\nThe coincidence of $\\bar{\\mathcal O}_{T,f}(x,a)$ and $\\mathcal D_{T,f}(x,a)$ for some $(x,a)\\in X\\times \\mathbb A$ implies this coincidence for all $(x,a')\\in\\{x\\}\\times \\mathbb A_\\infty$, since the extension of $\\widetilde\\tau_f$ to $X\\times\\mathbb A_\\infty$ commutes with the right translation on $X\\times \\mathbb A_\\infty$.\n\\end{proof}\n\n\\begin{remark}\\label{rem:o_p}\nIf $y\\in\\bar{\\mathcal O}_{T}(x)$ and $z\\in\\bar{\\mathcal O}_{T}(y)$, then $z\\in\\bar{\\mathcal O}_{T}(x)$ follows by a diagonalisation argument.\nA corresponding statement for prolongations is not valid, however follows from $x\\in\\bar{\\mathcal O}_{T}(y)$ and $z\\in\\mathcal D_{T}(y)$ that $z\\in\\mathcal D_{T}(x)$.\n\\end{remark}\n\nWe shall consider more general Abelian acting groups than $\\mathbb Z$ and $\\mathbb R$, hence the definition of recurrence requires the notions of a replete semigroup and an extensive subset of the Abelian compactly generated group 
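\n\n(As an illustration of the difference between orbit closures and prolongations: for the non-minimal $\\mathbb R$-flow on $[0,1]$ generated by $\\dot x=x(1-x)$, the orbit closure of the fixed point $0$ is the singleton $\\{0\\}$, whereas the full orbit of every neighbourhood of $0$ covers $(0,1)$, so that the prolongation of $0$ is all of $[0,1]$.)\n\n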
$T$ (cf. \\cite{G-H}).\nWe recall that a semigroup $P\\subset T$ is replete if for every compact subset $K\\subset T$ there exists a $\\tau\\in T$ with $\\tau K\\subset P$, and a subset $E\\subset T$ is extensive if it intersects every replete semigroup.\nTherefore, a subset $E$ of $T=\\mathbb Z$ or $T=\\mathbb R$ is extensive if and only if $E$ contains arbitrarily large positive \\emph{and} arbitrarily large negative elements.\n\n\\begin{defi}\nWe call a cocycle $f(\\tau,x)$ of a minimal compact metric flow $(X,T)$ \\emph{topologically recurrent} if for arbitrary neighbourhoods $\\mathcal U\\subset X$ and $U(\\mathbf 0_\\mathbb A)\\subset \\mathbb A$ of $\\mathbf 0_\\mathbb A$ there exists an extensive set of elements $\\tau\\in T$ with\n\\begin{equation*}\n\\mathcal U\\cap\\tau^{-1}(\\mathcal U)\\cap\\{x\\in X: f(\\tau,x)\\in U(\\mathbf 0_\\mathbb A)\\}\\neq\\emptyset .\n\\end{equation*}\nSince $\\widetilde\\tau_f$ and the right translation on $X\\times\\mathbb A$ commute, this is equivalent to the \\emph{regional recurrence} of the skew product action $\\widetilde\\tau_f$ on $X\\times\\mathbb A$, i.e. for every open neighbourhood $U\\subset X\\times \\mathbb A$ there exists an extensive set of elements $\\tau\\in T$ with $\\widetilde\\tau_f(U)\\cap U\\neq\\emptyset$.\nA non-recurrent cocycle is called \\emph{transient}.\n\nA point $(x,a)\\in X\\times \\mathbb A$ is $\\widetilde\\tau_f$-recurrent if for every neighbourhood $U\\subset X\\times \\mathbb A$ of $(x,a)$ the set of $\\tau\\in T$ with $\\widetilde\\tau_f(x,a)\\in U$ is extensive.\nMoreover, a point $(x,a)\\in X\\times\\mathbb A$ is \\emph{regionally} $\\widetilde\\tau_f$-recurrent if for every neighbourhood $U$ of $(x,a)$ the set of $\\tau\\in T$ with $\\widetilde\\tau_f(U)\\cap U\\neq\\emptyset$ is extensive.\n\\end{defi}\n\n\\begin{remarks}\\label{rems:rec}\nIf $f(\\tau,x)$ is recurrent, then by Theorems 7.15 and 7.16 in \\cite{G-H} there exists a dense $G_\\delta$ set of $\\widetilde\\tau_f$-recurrent points in $X\\times\\mathbb R$.\n\nGiven a regionally $\\widetilde\\tau_f$-recurrent point $(x,a)\\in X\\times\\mathbb A$, every point in $\\{x\\}\\times\\mathbb A$ is regionally $\\widetilde\\tau_f$-recurrent.\nThe minimality of $(X,T)$ and Theorem 7.13 in \\cite{G-H} imply that every point in $X\\times \\mathbb A$ is regionally $\\widetilde\\tau_f$-recurrent, hence $f(\\tau,x)$ is recurrent.\n\nA cocycle $f(n,x)$ of a $\\mathbb Z$-flow is topologically recurrent if and only if $\\widetilde T_f$ is topologically conservative, i.e. 
for every open neighbourhood $U\\subset X\\times\\mathbb A$ there exists an integer $n\\neq 0$ so that $\\widetilde T_f^n(U)\\cap U\\neq\\emptyset$.\n\\end{remarks}\n\nOne of the most important concepts in the study of cocycles is the essential range, originally introduced in the measure theoretic category by Schmidt \\cite{Sch}.\n\n\\begin{defi}\\label{def:er}\nLet $f(\\tau,x)$ be a cocycle of a minimal compact metric flow $(X,T)$.\nAn element $a\\in \\mathbb A$ is in the set $E(f)$ of \\emph{topological essential values} if for arbitrary neighbourhoods $\\mathcal U\\subset X$ and $U(a)\\subset \\mathbb A$ of $a$ there exists an element $\\tau\\in T$ so that\n\\begin{equation*}\n\\mathcal U\\cap\\tau^{-1}(\\mathcal U)\\cap\\{x\\in X: f(\\tau,x)\\in U(a)\\}\n\\end{equation*}\nis non-empty.\nThe set $E(f)$ is also called the \\emph{topological essential range}.\nThe cocycle identity implies that $f(\\mathbf 1_T,x)=\\mathbf 0_\\mathbb A$ for all $x\\in X$ and hence $\\mathbf 0_\\mathbb A\\in E(f)$.\nMoreover, the essential range is always a closed \\emph{subgroup} of $\\mathbb A$ (cf. \\cite{LM}, Proposition 3.1, which carries over from the case of a minimal $\\mathbb Z$-action to a general Abelian group acting minimally).\n\\end{defi}\n\n\\begin{fact}\\label{fact:er}\nIf $f(\\tau,x)$ is a cocycle with full topological essential range $E(f)=\\mathbb A$, then $\\mathcal D_{T,f}(x,a)\\subset\\{x\\}\\times\\mathbb A$ holds for every $(x,a)\\in X\\times\\mathbb A$.\nBy Fact \\ref{fact:o_p} there exists a $T$-invariant dense $G_\\delta$ set $\\mathcal F\\subset X$ with $\\{x\\}\\times \\mathbb A\\subset\\bar{\\mathcal O}_{T,f}(x,a)$ for every $(x,a)\\in\\mathcal F\\times \\mathbb A$.\nFor every $\\tau\\in T$ follows that $\\{\\tau x\\}\\times \\mathbb A\\subset\\bar{\\mathcal O}_{T,f}(x,g)$, and by the minimality of the flow $(X,T)$ every $(x,a)\\in\\mathcal F\\times \\mathbb A$ is a transitive point for $\\widetilde\\tau_f$.\n\\end{fact}\n\nThroughout this note we shall use a notion of ``relative'' triviality of cocycles.\n\n\\begin{defi}\\label{def:rnt}\nLet $f_1(\\tau,x)$ and $f_2(\\tau,x)$ be $\\mathbb R$-valued cocycles of a minimal compact metric flow $(X,T)$.\nWe shall call the cocycle $f_2(\\tau,x)$ \\emph{relatively trivial} with respect to $f_1(\\tau,x)$, if for every sequence $\\{(\\tau_k,x_k)\\}_{k\\geq 1}\\subset T\\times X$ with $d(x_k, \\tau_k x_k)\\to 0$ and $f_1(\\tau_k,x_k)\\to 0$ it holds also that $f_2(\\tau_k,x_k)\\to 0$ as $k\\to\\infty$.\nFor a sequence $\\{\\tau_k\\}_{k\\geq 1}\\subset T$ and a point $\\bar x\\in X$ so that $\\tau_k\\bar x$ and $f_1(\\tau_k,\\bar x)$ are convergent, this implies that also $f_2(\\tau_k,\\bar x)$ is convergent.\n\\end{defi}\n\nBy the following lemma it suffices to verify an essential value condition ``locally''.\n\n\\begin{lemma}\\label{lem:trans}\nLet $(X,T)$ be a minimal compact metric flow with an \\emph{Abelian} group $T$ acting, and let $f(\\tau,x)$ be a cocycle of $(X,T)$ with values in an Abelian l.c.s. 
group $\\mathbb A$.\nIf there exists a sequence $\\{(\\tau_k,x_k)\\}_{k\\geq 1}\\subset T\\times X$ with $d(x_k, \\tau_k x_k)\\to 0$ and $f(\\tau_k,x_k)\\to a\\in \\mathbb A_\\infty$ ($\\mathbb R_\\infty\\times\\mathbb R_\\infty$ for $\\mathbb A=\\mathbb R^2$, respectively) as $k\\to\\infty$, then for every $x\\in X$ it holds that $(x,a)\\in\\mathcal D_{T,f}(x,\\mathbf 0_\\mathbb A)$.\nHence if $a\\in \\mathbb A$ is finite, then $a\\in E(f)$.\n\nNow let $g=(g_1,g_2):T\\times X\\longrightarrow\\mathbb R^2$ be a cocycle of the flow $(X,T)$ with a sequence $\\{(\\tau_k,x_k)\\}_{k\\geq 1}\\subset T\\times X$ so that $d(x_k,\\tau_k x_k)\\to 0$, $g_1(\\tau_k,x_k)\\to 0$, and $g_2(\\tau_k,x_k)\\nrightarrow 0$ as $k\\to\\infty$.\nThen there exist a point $\\bar x\\in X$ and a sequence $\\{\\bar\\tau_k\\}_{k\\geq 1}\\subset T$ so that $d(\\bar x,\\bar\\tau_k \\bar x)\\to 0$ and $g(\\bar\\tau_k,\\bar x)\\to(0,\\infty)$ as $k\\to\\infty$.\nMoreover, for an extension $(Y,T)$ of $(X,T)=\\pi(Y,T)$ there exists a sequence $\\{(\\tilde\\tau_k,y_k)\\}_{k\\geq 1}\\subset T\\times Y$ with $d_Y(y_k,\\tilde\\tau_k y_k)\\to 0$ and $(g\\circ\\pi)(\\tilde\\tau_k,y_k)\\to(0,\\infty)$ as $k\\to\\infty$.\n\\end{lemma}\n\n\\begin{proof}\nWe let $\\{(\\tau_k,x_k)\\}_{k\\geq 1}\\subset T\\times X$ be a sequence with the properties above, and we may assume that $x_k\\to x'\\in X$ as $k\\to\\infty$.\nFor arbitrary neighbourhoods $\\mathcal U\\subset X$ and $U(a)$ of $a\\in \\mathbb A_\\infty$ we can fix an element $\\tau\\in T$ with $\\tau x'\\in\\mathcal U$, and since the group $T$ is Abelian it holds that $\\tau x_k\\to\\tau x'$ and $\\tau_k\\tau x_k=\\tau\\tau_k x_k\\to \\tau x'$ as $k\\to\\infty$.\nFrom the cocycle identity and the continuity of $f(\\tau,\\cdot)$ follows\n\\begin{eqnarray*}\nf(\\tau_k,\\tau x_k) & = & f(\\tau,\\tau_k x_k) + f(\\tau_k,x_k)+f(\\tau^{-1},\\tau x_k)\\\\\n& = & f(\\tau,\\tau_k x_k)+ f(\\tau_k,x_k)-f(\\tau, x_k) \\to a\n\\end{eqnarray*}\nas $k\\to\\infty$, and for all $k$ large enough it holds that $\\tau x_k$, $\\tau_k\\tau x_k\\in\\mathcal U$ and $f(\\tau_k,\\tau x_k)\\in U(a)$.\nSince the neighbourhoods $\\mathcal U$ and $U(a)$ were arbitrary, we have $(x,a)\\in\\mathcal D_{T,f}(x,\\mathbf 0_\\mathbb A)$ for every $x\\in X$ and $a\\in E(f)$ if $a\\neq\\infty$.\n\nIf $g(\\tau_k,x_k)\\nrightarrow(0,\\infty)$ as $k\\to\\infty$, then $E(g)$ has an element $(0,c)$ with $c\\in\\mathbb R\\setminus\\{0\\}$.\nSince $E(g)$ is a closed subspace of $\\mathbb R^2$, we can start over with a sequence $\\{(\\tau_k,x_k)\\}_{k\\geq 1}\\subset T\\times X$ so that $d(x_k,\\tau_k x_k)\\to 0$, $g_1(\\tau_k,x_k)\\to 0$, and $|g_2(\\tau_k,x_k)|\\to\\infty$ as $k\\to\\infty$.\nThe statement above implies that $(x,0,\\infty)\\in\\mathcal D_{T,g}(x,0,0)$ for every $x\\in X$, and by Fact \\ref{fact:o_p} we can select $\\bar x\\in X$ and a sequence $\\{\\bar\\tau_k\\}_{k\\geq 1}\\subset T$ so that $\\bar\\tau_k\\bar x\\to\\bar x$ and $g(\\bar\\tau_k,\\bar x)\\to(0,\\infty)$.\nFor an arbitrary point $\\bar y\\in\\pi^{-1}(\\bar x)$ we can select an increasing sequence of positive integers $\\{k_l\\}_{l\\geq 1}$ with $d_Y(\\bar\\tau_{k_{l+1}} \\bar y,\\bar\\tau_{k_l}\\bar y)\\to 0$ and $(g\\circ\\sigma)(\\bar\\tau_{k_{l+1}}(\\bar\\tau_{k_l})^{-1},\\bar\\tau_{k_l}\\bar y)\\to(0,\\infty)$ and put $\\{(\\tilde\\tau_l,y_l)=(\\bar\\tau_{k_{l+1}}(\\bar\\tau_{k_l})^{-1},\\bar\\tau_{k_l}\\bar y)\\}_{l\\geq 1}$.\n\\end{proof}\n\n\\begin{defi}\nLet $f(\\tau,x)$ be a cocycle of a minimal compact metric flow $(X,T)$ with values 
in an Abelian l.c.s. group $\\mathbb A$, and let $b:X\\longrightarrow \\mathbb A$ be a continuous function.\nAnother cocycle of the flow $(X,T)$ can be defined by the $\\mathbb A$-valued function\n\\begin{equation*}\ng(\\tau,x)=f(\\tau,x)+b(\\tau x)-b(x).\n\\end{equation*}\nThe cocycle $g(\\tau,x)$ is called \\emph{topologically cohomologous} to the cocycle $f(\\tau,x)$ with the \\emph{transfer function} $b(x)$.\nA cocycle $g(\\tau,x)=b(\\tau x)-b(x)$ topologically cohomologous to zero is bounded on $T\\times X$ and called a \\emph{topological coboundary}.\n\\end{defi}\n\nThe Gottschalk-Hedlund theorem (\\cite{G-H}, Theorem 14.11) characterises topological coboundaries of a minimal $\\mathbb Z$-action as cocycles bounded on at least one semi-orbit.\nThe generalisation to an Abelian group $T$ acting minimally is natural.\n\n\\begin{fact}\\label{fact:GH}\nA real valued topological cocycle $f(\\tau,x)$ of a minimal compact metric flow $(X,T)$ with an Abelian acting group $T$ is a coboundary if and only if there exists a point $\\bar x\\in X$ so that the function $\\tau\\mapsto f(\\tau,\\bar x)$ is bounded on $T$.\nFor the groups $T=\\mathbb Z$ and $T=\\mathbb R$ acting, the boundedness on a semi-orbit is sufficient.\n\nA real valued cocycle $f(\\tau,x)$ is also a topological coboundary if for every sequence $\\{(\\tau_k,x_k)\\}_{k\\geq 1}\\subset T\\times X$ with $d(x_k,\\tau_k x_k)\\to 0$ the set $\\{f(\\tau_k,x_k)\\}_{k\\geq 1}\\subset\\mathbb R$ is bounded.\n\\end{fact}\n\n\\begin{proof}\nSuppose that $\\tau\\mapsto f(\\tau,\\bar x)$ is bounded on $T$.\nBy the cocycle identity holds\n\\begin{equation*}\nf(\\tau,\\tau'\\bar x)=f(\\tau\\tau',\\bar x)-f(\\tau',\\bar x)\n\\end{equation*}\nfor all $\\tau,\\tau'\\in T$, and by the density of the $T$-orbit of $\\bar x$ follows the boundedness of $f(\\tau,x)$ on $T\\times X$ and thus the triviality of the subgroup $E(f)=\\{0\\}$.\nBy the density of the $T$-orbit of $\\bar x$ and the boundedness of $\\tau\\mapsto f(\\tau,\\bar x)$, the intersection $\\{x\\}\\times\\mathbb R\\cap\\bar{\\mathcal O}_{T,f}(\\bar x,0)$ is non-empty for every $x\\in X$.\nFor every $x\\in X$ this intersection is a singleton, since otherwise Lemma \\ref{lem:trans} proves a non-zero element in $E(f)$.\nHence the compact set $\\bar{\\mathcal O}_{T,f}(\\bar x,0)$ is the graph of a continuous function $b:X\\longrightarrow\\mathbb R$ with $f(\\tau,\\bar x)=b(\\tau\\bar x)$, and thus $f(\\tau,x)=b(\\tau x)-b(x)$ holds for every $(\\tau,x)\\in T\\times X$.\nFor $T=\\mathbb Z$ and $T=\\mathbb R$ the set of \\emph{limit points} of a semi-orbit is a $T$-invariant closed subset of $X$, which is non-empty by compactness and equal to $X$ by minimality.\nWe can conclude the proof as above, but using the semi-orbit.\n\nNow suppose that $f(\\tau,x)$ is not a topological coboundary and let $\\bar x\\in X$ be arbitrary.\nThen there exists a sequence $\\{\\tau'_l\\}_{l\\geq 1}\\subset T$ with $|f(\\tau'_l,\\bar x)|\\to\\infty$, and we may assume that $\\tau'_l \\bar x\\to x'$ as $l\\to\\infty$.\nSince $(X,T)$ is minimal, there exists sequence $\\{\\tau''_k\\}_{k\\geq 1}\\subset T$ with $\\tau''_k x'\\to\\bar x$ as $k\\to\\infty$.\nA diagonalisation with a sufficiently increasing sequence of positive integers $\\{l_k\\}_{l\\geq 1}$ yields for $\\tau_k=\\tau_k''\\tau_{l_k}'$ that $\\tau_k\\bar x\\to\\bar x$ and $|f(\\tau_k,\\bar x)|=|f(\\tau_k'',\\tau_{l_k}'\\bar x)+f(\\tau_{l_k}',\\bar x)|\\to\\infty$ as $k\\to\\infty$.\n\\end{proof}\n\nThe following lemma appeared originally in the paper 
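\n\nThe boundedness criterion in Fact~\\ref{fact:GH} can at least be probed numerically along a single orbit.\nIn the following sketch the rotation number, the two test functions, and the orbit length are illustrative assumptions, and a finite computation can of course only suggest, not prove, boundedness.\n\\begin{verbatim}\nimport math\n\nALPHA = (math.sqrt(5) - 1.0) / 2.0   # an irrational rotation number (assumed)\n\ndef partial_sums(f, x, n_max):\n    # the values f(n, x), 1 <= n <= n_max, of the cocycle generated by f\n    out, s = [], 0.0\n    for k in range(n_max):\n        s += f((x + k * ALPHA) % 1.0)\n        out.append(s)\n    return out\n\n# cos(2 pi x) is a coboundary for this rotation: the transfer function\n# b(x) = Re(exp(2 pi i x) / (exp(2 pi i ALPHA) - 1)) satisfies\n# b(x + ALPHA) - b(x) = cos(2 pi x), so these sums stay bounded on every orbit.\ncoboundary = lambda x: math.cos(2.0 * math.pi * x)\n\n# a non-zero mean destroys boundedness: these sums drift roughly like 0.1 * n.\ndrifting = lambda x: math.cos(2.0 * math.pi * x) + 0.1\n\nx0 = 0.123\nprint(max(abs(s) for s in partial_sums(coboundary, x0, 100000)))   # of order 1\nprint(max(abs(s) for s in partial_sums(drifting, x0, 100000)))     # about 1e4\n\\end{verbatim}\n\n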
\\cite{A} in a setting for $\\mathbb R^d$-valued cocycles of a minimal rotation on a torus.\n\n\\begin{lemma}\\label{lem:at}\nLet $f(\\tau,x)$ be a real valued topological cocycle of a minimal compact metric flow $(X,T)$ with an Abelian acting group $T$.\nIf the skew product action $\\widetilde\\tau_f$ is \\emph{not} point transitive on $X\\times\\mathbb R$, then for every neighbourhood $U\\subset\\mathbb R$ of $0$ there exist a compact symmetric neighbourhood $K\\subset U$ of $0$ and an $\\varepsilon>0$ so that for every $\\tau\\in T$ holds\n\\begin{equation}\\label{eq:s_l}\n\\{x\\in X:d(x,\\tau x)<\\varepsilon\\enspace\\textup{and}\\enspace f(\\tau,x)\\in 2K\\setminus K^0\\}=\\emptyset .\n\\end{equation}\n\\end{lemma}\n\n\\begin{proof}\nSuppose that $f(\\tau,x)$ is real valued and $\\widetilde\\tau_f$ is not point transitive.\nBy Fact \\ref{fact:er} the essential range $E(f)$ is a proper closed subgroup of $\\mathbb R$, and thus there exists a compact symmetric neighbourhood $K\\subset U$ of $0$ with $(2K\\setminus K^0)\\cap E(f)=\\emptyset$.\nIf the assertion is false for the neighbourhood $K$, then there exists a sequence $\\{(\\tau_k,x_k)\\}_{k\\geq 1}\\subset T\\times X$ with $d(x_k, \\tau_k x_k)\\to 0$ and $f(\\tau_k,x_k)\\to t\\in 2K\\setminus K^0$.\nNow Lemma \\ref{lem:trans} implies $t\\in E(f)\\cap 2K\\setminus K^0$, in contradiction to the choice of $K$.\n\\end{proof}\n\nWe shall commence the study of cocycles of distal minimal flows by the generalisation of the results for minimal rotations in \\cite{G-H} and \\cite{LM}.\n\n\\begin{proposition}\\label{prop:isom}\nLet $(X,T)$ be a minimal compact \\emph{isometric} flow with a compactly generated Abelian acting group $T$, and let $f(\\tau,x)$ be a topologically recurrent real valued cocycle of $(X,T)$.\nThen the cocycle $f(\\tau,x)$ is either a coboundary or its skew product extension $\\widetilde \\tau_f$ is point transitive on $X\\times\\mathbb R$.\n\\end{proposition}\n\n\\begin{proof}\nSuppose that the cocycle $f(\\tau,x)$ is not a coboundary and $\\widetilde \\tau_f$ is not point transitive.\nThen by Lemma \\ref{lem:at} there exist a compact symmetric neighbourhood $K$ of $0$ and an $\\varepsilon>0$ so that equality (\\ref{eq:s_l}) holds for every $\\tau\\in T$.\nFurthermore, if $L\\subset T$ is a compact generative subset, then $\\varepsilon>0$ can be chosen small enough so that for all $\\tau'\\in L$ and $x,x'\\in X$ with $d(x,x')<\\varepsilon$ it holds that\n\\begin{equation*}\nf(\\tau',x)-f(\\tau',x')\\in K^0 .\n\\end{equation*}\nBy Fact \\ref{fact:GH} we can fix a pair $(\\bar\\tau,\\bar x)\\in T\\times X$ with $d(\\bar x,\\bar\\tau\\bar x)<\\varepsilon$ and $f(\\bar\\tau,\\bar x)\\notin 2K$, since $f(\\tau,x)$ is not a coboundary.\nThe Abelian group $T$ acts on $X$ isometrically, and thus $d(\\bar x,\\bar\\tau\\bar x)<\\varepsilon$ implies that $d(\\tau'\\bar x,\\bar\\tau\\tau' \\bar x)=d(\\tau'\\bar x,\\tau'\\bar\\tau\\bar x)<\\varepsilon$.\nTogether with equality (\\ref{eq:s_l}) we can conclude for every $\\tau'\\in L$ that\n\\begin{equation*}\nf(\\bar\\tau,\\tau'\\bar x)=f(\\bar\\tau,\\bar x)-f(\\tau',\\bar x)+f(\\tau',\\bar\\tau\\bar x)\\notin 2K ,\n\\end{equation*}\nand hence both of the real numbers $f(\\bar\\tau,\\bar x)$ and $f(\\bar\\tau,\\tau'\\bar x)$ are elements of the one and the same of the disjoint sets $\\mathbb R^+\\setminus 2K$ and $\\mathbb R^-\\setminus 2K$.\nSince the set $L$ is generative in the Abelian group $T$ acting minimally on $X$, it follows by induction that $f(\\bar\\tau, x)$ is in the 
closure of one of the sets $\\mathbb R^+\\setminus 2K$ and $\\mathbb R^-\\setminus 2K$ for every $x\\in X$.\nThus we have a constant $c>0$ with $|f(\\bar\\tau^k,x)|> |k| c$ for every integer $k$, and we define a subset $P\\subset T$ by\n\\begin{equation*}\nP=\\cup_{k\\geq 1}\\bar\\tau^k\\cdot\\{\\tau\\in T:f(\\tau,\\cdot)<|k| c\/2\\} .\n\\end{equation*}\nGiven two integers $k,k'\\geq 1$ and $\\bar\\tau^k\\tau$, $\\bar\\tau^{k'}\\tau'\\in P$ with $f(\\tau,\\cdot)<|k| c\/2$ and $f(\\tau',\\cdot)<|k'| c\/2$, we can conclude that $\\bar\\tau^k\\tau\\bar\\tau^{k'}\\tau'=\\bar\\tau^{k+k'}(\\tau\\tau')$ with $f(\\tau\\tau',\\cdot)<|k+k'| c\/2$, hence $P$ is a semigroup.\nMoreover, the semigroup $P$ contains a translate of every compact set $L\\subset T$, since for large enough $k\\geq 1$ the inequality $f(\\tau,x)<|k| c\/2$ holds for every $\\tau\\in L$ and every $x\\in X$.\nTherefore $P$ is a replete semigroup in $T$ so that $|f(\\tau,x)|>c\/2$ holds for every $(\\tau,x)\\in P\\times X$, which contradicts the existence of a dense $G_\\delta$ set of $\\widetilde\\tau_f$-recurrent points (cf. Remarks \\ref{rems:rec}).\n\\end{proof}\n\nThe \\emph{Rokhlin extensions} and the \\emph{Rokhlin skew products} have been studied in the measure theoretic setting in \\cite{LL} and \\cite{LP}.\nWe shall introduce the notion of a \\emph{perturbed Rokhlin skew product}, which will be inevitable in our main result.\n\n\\begin{defi}\nSuppose that $(X,T)$ is a distal minimal compact metric flow and $(M,\\{\\phi^t:t\\in\\mathbb R\\})$ is a distal minimal compact metric $\\mathbb R$-flow.\nLet $f:T\\times X\\longrightarrow\\mathbb R$ be a cocycle of $(X,T)$ with a point transitive skew product $\\widetilde\\tau_f$ on $X\\times\\mathbb R$.\nWe define the \\emph{Rokhlin extension} $\\tau_{\\phi,f}$ on $X\\times M$ by\n\\begin{equation*}\n\\tau_{\\phi,f}(x,m)=(\\tau x,\\phi^{f(\\tau,x)}(m)) ,\n\\end{equation*}\nwhich is an action of the group $T$ on $X\\times M$ due to the cocycle identity for $f(\\tau,x)$.\nIf $(\\bar x,0)$ is a transitive point for $\\widetilde\\tau_f$, then by the minimality of $(M,\\{\\phi^t:t\\in\\mathbb R\\})$ every point $(\\bar x,m)$ with $m\\in M$ is a transitive point for $(X\\times M,T)$.\nSince the flow $(X\\times M,T)$ is distal by the distality of its components, it is even minimal.\n\nThe skew product extension of $(X\\times M,T)$ by the cocycle $(\\tau,x,m)\\mapsto f(\\tau,x)$ is the \\emph{Rokhlin skew product} $\\widetilde\\tau_{\\phi,f}$ on $X\\times M\\times \\mathbb R$ with\n\\begin{equation*}\n\\widetilde\\tau_{\\phi,f}(x,m,t)=(\\tau x,\\phi^{f(\\tau,x)}(m),t+f(\\tau,x)) .\n\\end{equation*}\nLet $g:\\mathbb R\\times M\\longrightarrow\\mathbb R$ be a cocycle of the flow $(M,\\{\\phi^t:t\\in\\mathbb R\\})$.\nThe $\\mathbb R$-valued map\n\\begin{equation*}\n(\\tau,x,m)\\mapsto f(\\tau,x)+g(f(\\tau,x),m)\n\\end{equation*}\ndefined on $T\\times X\\times M$ turns out to be a cocycle of the flow $(X\\times M,T)$ due to the cocycle identity for $g(t,m)$.\nThe skew product extension of this cocycle with\n\\begin{equation*}\n\\widetilde\\tau_{\\phi,f,g}(x,m,t)=(\\tau x,\\phi^{f(\\tau,x)}(m),t+f(\\tau,x)+g(f(\\tau,x),m)) .\n\\end{equation*}\nis called a \\emph{perturbed Rokhlin skew product} $\\widetilde\\tau_{\\phi,f,g}$ on $X\\times M\\times\\mathbb R$.\n\\end{defi}\n\nWe shall present at first the basic example of a topological Rokhlin skew product of topological type $III_0$, i.e. 
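\n\nFor completeness: that the map $(\\tau,x,m)\\mapsto f(\\tau,x)+g(f(\\tau,x),m)$ indeed satisfies the cocycle identity over the action $\\tau_{\\phi,f}$ follows from the cocycle identities for $f$ and $g$, since\n$$f(\\tau\\tau',x)+g(f(\\tau\\tau',x),m)=\\Bigl(f(\\tau,\\tau'x)+g\\bigl(f(\\tau,\\tau'x),\\phi^{f(\\tau',x)}(m)\\bigr)\\Bigr)+\\Bigl(f(\\tau',x)+g\\bigl(f(\\tau',x),m\\bigr)\\Bigr)$$\nfor all $\\tau,\\tau'\\in T$ and $(x,m)\\in X\\times M$.\n\n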
recurrent with a trivial topological essential range but not a topological coboundary.\n\n\\begin{example}\\label{ex:zi}\nLet $f:\\mathbb T\\longrightarrow\\mathbb R$ be a continuous function with a point transitive skew product extension $\\widetilde T_f$ of the irrational rotation $T$ by $\\alpha$ on the torus, and let $\\beta\\in(0,1)$ be irrational so that the $\\mathbb R$-flow $(\\mathbb T^2,\\{\\phi^t:t\\in\\mathbb R\\})$ defined by $\\phi^t(y,z)=(y+t,z+\\beta t)$ is minimal and distal.\nThe minimal and distal Rokhlin extension $T_{\\phi,f}$ on $\\mathbb T^3$ is\n\\begin{equation*}\nT_{\\phi,f}(x,y,z)=(x+\\alpha,y+f(x),z+\\beta f(x)),\n\\end{equation*}\nand putting $h(x,y,z)=f(x)$ for all $(x,y,z)\\in\\mathbb T^3$ gives a topological type $III_0$ cocycle $h(n,(x,y,z))$ of the homeomorphism $T_{\\phi,f}$ with the skew product extension $\\widetilde T_{\\phi,f}$.\nIndeed, since $\\widetilde T_f$ is point transitive, the cocycle $h(n,(x,y,z))$ is recurrent, but it is not bounded and therefore no topological coboundary.\nFurthermore, a sequence $\\{t_n\\}_{n\\geq 1}\\subset\\mathbb R$ with $t_n\\mod 1\\to 0$ and $(\\beta t_n)\\mod 1\\to 0$ cannot have a finite cluster point apart from zero, and hence $E(h)=\\{0\\}$.\nFor a point $\\bar x\\in\\mathbb T$ so that $(\\bar x,0)\\in\\mathbb T\\times\\mathbb R$ is transitive under $\\widetilde T_f$ and arbitrary $y,z\\in\\mathbb T$ the orbit closure of $(\\bar x,y,z,0)$ under the skew product extension of $T_{\\phi,f}$ by $h$ is of the form\n\\begin{equation*}\n\\bar{\\mathcal O}_{\\widetilde T_{\\phi,f}}((\\bar x,y,z),0)=\\bar{\\mathcal O}_{T_{\\phi,f},h}((\\bar x,y,z),0)=\\mathbb T\\times\\{(\\phi^t(y,z),t)\\in\\mathbb T^2\\times\\mathbb R:t\\in\\mathbb R\\} .\n\\end{equation*}\nThe collection of these sets is a partition of $\\mathbb T^3\\times\\mathbb R$ into $\\widetilde T_{\\phi,f}$-orbit closures.\n\\end{example}\n\nThe next example makes clear that the perturbation of a Rokhlin skew product by a cocycle is an essential component, which in general cannot be eliminated by continuous cohomology.\n\n\\begin{example}\\label{ex:pe}\nLet $T$, $f$, $h$, and $\\{\\phi^t:t\\in\\mathbb R\\}$ be defined as in Example \\ref{ex:zi}, and suppose that $g(t,(y,z))$ is a point transitive $\\mathbb R$-valued cocycle of the flow $\\{\\phi^t:t\\in\\mathbb R\\}$.\nFrom the unique ergodicity of the flow $\\{\\phi^t:t\\in\\mathbb R\\}$ follows $\\int_{\\mathbb T^2} g(t,(y,z)) d\\lambda(y,z)=0$ for every $t\\in\\mathbb R$, and after rescaling $g$ we can assume that $|g(t,(y,z))|<|t|\/2$ for every $t\\in\\mathbb R$ and $(y,z)\\in\\mathbb T^2$.\nWe define a function\n\\begin{equation*}\n\\bar h(x,y,z)= f(x)+g(f(x),(y,z))\n\\end{equation*}\nso that the cocycle of $T_{\\phi,f}$ is $\\bar h(n,(x,y,z))= f(n,x)+g(f(n,x),(y,z))$ for every $n$.\nSince the perturbation $g(f(n,x),(y,z))$ is unbounded, there cannot be a continuous transfer function defined on $\\mathbb T^3$ so that $\\bar h$ and $h$ are cohomologous.\nHowever, due to the condition $|g(t,(y,z))|<|t|\/2$ the set\n\\begin{equation*}\n\\mathbb T\\times\\{(\\phi^t(y,z),t+g(t,(y,z)))\\in\\mathbb T^2\\times\\mathbb R:t\\in\\mathbb R\\}\n\\end{equation*}\nis closed, and it coincides with $\\bar{\\mathcal O}_{\\widetilde T_{\\phi,f,g}}((\\bar x,y,z),0)$ if the point $(\\bar x,0)\\in\\mathbb T\\times\\mathbb R$ is transitive under $\\widetilde T_f$.\nThus the structure of the orbit closures is preserved as well as these sets provide a partition of $\\mathbb T^3\\times\\mathbb 
R$.\n\\end{example}\n\n\\begin{remark}\nThe structure of Example \\ref{ex:zi} can be revealed from the toral extensions of $T_{\\phi,f}$ by the function $(\\gamma h) \\mod 1$ for all $\\gamma\\in\\mathbb R$.\nThis distal homeomorphism of $\\mathbb T^4$ is transitive and hence minimal for rationally independent $1$, $\\beta$, and $\\gamma$.\nHowever, for $\\gamma=1$ and $\\gamma=\\beta$ the orbit closures collapse to graphs representing the dependence of $h$ and the action on the coordinates of the torus.\nThe same approach will not be successful with respect to Example \\ref{ex:pe}.\nIt can be verified that for every $\\gamma\\in\\mathbb R$ the toral extension of $T_{\\phi,f}$ by the function $(\\gamma\\bar h) \\mod 1$ is minimal on $\\mathbb T^4$.\n\\end{remark}\n\nThe main result of this note puts these examples into a structure theorem.\n\n\\begin{strth*}\nSuppose that $(X,T)$ is a distal minimal compact metric flow with a compactly generated Abelian acting group $T$ and that $f:T\\times X\\longrightarrow\\mathbb R$ is a topologically recurrent cocycle which is not a coboundary.\nThen there exist a factor $(X_\\alpha,T)=\\pi_\\alpha(X,T)$, a topological cocycle $f_\\alpha:T\\times X_\\alpha\\longrightarrow\\mathbb R$ of $(X_\\alpha,T)$, and a distal minimal compact metric $\\mathbb R$-flow $(M,\\{\\phi^t:t\\in\\mathbb R\\})$, so that the Rokhlin extension $(X_\\alpha\\times M,T)$ with the action $\\tau_{\\phi,f_\\alpha}$ is a factor $(Y,T)=\\pi_Y(X,T)$ of $(X,T)$.\nThe cocycle $f(\\tau,x)$ is topologically cohomologous to $(f_Y\\circ\\pi_Y)(\\tau,x)=f(\\tau,x)+b(\\tau x)-b(x)$ with a topological cocycle $f_Y:T\\times Y\\longrightarrow\\mathbb R$ of the flow $(Y,T)$ so that\n\\begin{equation}\\label{eq:fpy}\n\\mathcal D_{T, f_Y\\circ\\pi_Y}(x,0)\\cap(\\pi_\\alpha^{-1}(\\pi_\\alpha(x))\\times\\{0\\})=\\pi_Y^{-1}(\\pi_Y(x))\\times\\{0\\}\n\\end{equation}\nholds for all $x\\in X$.\nMoreover, there exists a cocycle $g:\\mathbb R\\times M\\longrightarrow\\mathbb R$ of the $\\mathbb R$-flow $(M,\\{\\phi^t:t\\in\\mathbb R\\})$ so that the cocycle $(\\mathbbm 1+g)(t,m)= t+g(t,m)$ is topologically \\emph{transient} and\n\\begin{equation}\\label{eq:f_Y2}\nf_Y(\\tau,(x,m))=f_\\alpha(\\tau,x)+g(f_\\alpha(\\tau,x),m)=(\\mathbbm 1+g)(f_\\alpha(\\tau,x),m)\n\\end{equation}\nholds for every $\\tau\\in T$ and $(x,m)\\in Y=X_\\alpha\\times M$.\nThus the skew product $\\widetilde\\tau_{f_Y}$ on $Y\\times\\mathbb R$ is the perturbed Rokhlin skew product $\\widetilde \\tau_{\\phi,f_\\alpha, g}$.\n\\end{strth*}\n\n\\noindent We shall conclude the proof of this theorem in the next section of this note.\n\\medskip\n\nThe application of the structure theorem for a topological ergodic decomposition requires a suitable topology on the hyperspace of the non-compact space $X\\times\\mathbb R$.\nWe shall use the Fell topology on the hyperspace of \\emph{non-empty} closed subsets of a locally compact separable metric space.\nGiven finitely many open neighbourhoods $\\mathcal U_1,\\dots,\\mathcal U_k$ and a compact set $K$, an element of the Fell topology base consists of all non-empty closed subsets which intersect each of the open neighbourhoods $\\mathcal U_1,\\dots,\\mathcal U_k$ while being disjoint from $K$.\nThis topology is separable, metrisable, and $\\sigma$-compact (cf. 
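\n\nIn the notation of the structure theorem, Examples \\ref{ex:zi} and \\ref{ex:pe} can be read as instances in which $X_\\alpha=\\mathbb T$ carries the rotation by $\\alpha$, $f_\\alpha=f$, $M=\\mathbb T^2$ carries the linear flow $\\phi^t$, the transfer function $b$ vanishes, and the perturbation cocycle is $g\\equiv0$ in Example \\ref{ex:zi} and the given $g$ in Example \\ref{ex:pe}.\n\n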
\\cite{HLP}).\nThe Fell topology was introduced in \\cite{Fe} as a compact topology on the hyperspace of all closed subsets, with the empty set as infinity.\n\n\\begin{decth*}\nSuppose that $f:T\\times X\\longrightarrow\\mathbb R$ is a topologically recurrent cocycle of a distal minimal compact metric flow $(X,T)$ with a compactly generated Abelian acting group $T$.\nThe prolongations $\\mathcal D_{T,f}(x,s)\\subset X\\times\\mathbb R$ of the skew product action $\\widetilde\\tau_{f}$ with $(x,s)\\in X\\times\\mathbb R$ define a \\emph{partition} of $X\\times\\mathbb R$.\nThe mapping $(x,s)\\mapsto\\mathcal D_{T,f}(x,s)$ is continuous with respect to the Fell topology on the hyperspace of non-empty closed subsets of $X\\times\\mathbb R$, and the right translation on $X\\times\\mathbb R$ is a minimal continuous $\\mathbb R$-action on the set of prolongations.\nIf the cocycle $f(\\tau,x)$ is not a topological coboundary, then the set of all prolongations in the skew product is Fell compact.\n\\end{decth*}\n\n\\begin{tmath*}\nA recurrent cocycle $f(\\tau,x)$ apart from a coboundary has a minimal compact metric flow as a topological version of the \\emph{Mackey action}.\nIts phase space is the set of prolongations in the skew product with the Fell topology, with the right translation of $\\mathbb R$ acting on the prolongations.\nThis flow is a distal extension (possibly the trivial extension) of a weakly mixing compact metric flow (possibly the trivial flow).\nThe Mackey action is distal if and only if the perturbation cocycle $g(t,m)$ in the structure theorem is a topological coboundary.\n\\end{tmath*}\n\nWhile most of the properties of the topological Mackey action are part of the decomposition theorem, its structure as a distal extension of a weakly mixing flow will be verified in the next section of this note.\nThe proof of the decomposition theorem depends on the following general lemma on transient cocycles of minimal $\\mathbb R$-flows, which might be of independent interest.\n\n\\begin{lemma}\\label{lem:tr_coc}\nLet $(M,\\{\\phi^t:t\\in\\mathbb R\\})$ be a minimal compact metric $\\mathbb R$-flow and let $h(t,m):\\mathbb R\\times M\\longrightarrow\\mathbb R$ be a \\emph{transient} cocycle of $(M,\\{\\phi^t:t\\in\\mathbb R\\})$.\nThen for every point $(m,s)\\in M\\times\\mathbb R$ the orbit $\\mathcal O_{\\phi,h}(m,s)$, the orbit closure $\\bar{\\mathcal O}_{\\phi,h}(m,s)$, and the prolongation $\\mathcal D_{\\phi,h}(m,s)$ under the skew product extension $\\widetilde{\\phi^t}_h$ coincide.\nThe mapping from points to their orbits in $M\\times\\mathbb R$ is continuous with a compact range with respect to the Fell topology, and the right translation on $M\\times\\mathbb R$ provides a minimal continuous $\\mathbb R$-action on the set of orbits.\nMoreover, for every $m\\in M$ the mapping $t\\mapsto h(t,m)$ maps $\\mathbb R$ \\emph{onto} $\\mathbb R$.\n\\end{lemma}\n\n\\begin{proof}\nSince prolongations are closed sets, it suffices to verify that for every $(m,s)\\in M\\times\\mathbb R$ the orbit and the prolongation coincide.\nOtherwise, there exist two points $(m,s), (m',s')\\in M\\times\\mathbb R$ so that $(m',s')$ is not in the $\\widetilde{\\phi^t}_h$-orbit of $(m,s)$, however there exists a sequence $\\{(t_k,m_k)\\}_{k\\geq 1}\\subset\\mathbb R\\times M$ so that $(t_k,m_k)\\to (+\\infty,m)$ and\n\\begin{equation*}\n\\widetilde{\\phi^{t_k}}_h(m_k,s)=(\\phi^{t_k}(m_k),s+h(t_k,m_k))\\to (m',s') .\n\\end{equation*}\nIf there exists a compact set $L\\subset\\mathbb R$ with 
$h([0,t_k],m_k)\\subset L$ for all $k\\geq 1$, then $h([0,\\infty),m)\\subset L$ since $m_k\\to m$, and by Fact \\ref{fact:GH} the cocycle $h(t,m)$ is a coboundary in contradiction to its transience.\nTherefore we have an increasing sequence of integers $\\{k_l\\}_{l\\geq 1}$, a sequence $\\{t'_l\\}_{l\\geq 1}\\subset\\mathbb R$ with $t'_l\\in[0,t_{k_l}]$, and $S\\in\\{+1,-1\\}$ so that\n\\begin{equation*}\nS\\cdot h(t'_l,m_{k_l})=\\max_{t\\in[0,t_{k_l}]}S\\cdot h(t,m_{k_l})\\to +\\infty\n\\end{equation*}\nas $l\\to\\infty$.\nFor every limit point $\\bar m$ of the sequence $\\{\\phi^{t'_l}(m_{k_l})\\}_{l\\geq 1}$ it holds that $S\\cdot h(t,\\bar m)\\leq 0$ for all $t\\in\\mathbb R$, and the mapping $t\\mapsto h(t,\\bar m)$ maps each of the sets $\\mathbb R^+$ and $\\mathbb R^-$ onto $S\\cdot \\mathbb R^-$.\nHence for every $t\\in\\mathbb R^+$ there exists a $t'\\in\\mathbb R^-$ with $h(t,\\bar m)=h(t',\\bar m)$, and by the density of the semi-orbit $\\{\\phi^t(\\bar m):t\\in\\mathbb R^+\\}$ (cf. the proof of Fact \\ref{fact:GH}) and the cocycle identity the open set\n\\begin{equation*}\nM_k=\\{m\\in M:|h(t,m)|<2^{-k}\\enspace\\textup{for some}\\enspace t<-k\\}\n\\end{equation*}\nis dense for every integer $k\\geq 1$.\nFor a point $m_k$ in the dense $G_\\delta$ set $\\bigcap_{t\\in\\mathbb Q}\\phi^t(M_k)$, we can find rational numbers $t_1,\\dots,t_k<-k$ so that $\\phi^{t_1+\\cdots+t_l}(m_k)\\in M_k$ and $|h(t_1+\\dots+t_l,m_k)|0$ so that for all $z\\in Z$ and $\\tau\\in T$ with $d_Z(z,\\tau z)<\\varepsilon$ holds $(g\\circ\\sigma+h)(\\tau,z)\\notin 2K\\setminus K^0$.\nSuppose that there exists a $\\delta>0$ so that for all $\\tau\\in T$ and $z\\in Z$ with $d_Z(z,\\tau z)<\\delta$ holds $d_Z(z',\\tau z')<\\varepsilon$ for every $z'\\in\\sigma^{-1}(\\sigma(z))$.\nGiven $\\bar z\\in Z$ and a sequence $\\{\\bar\\tau_k\\}\\subset T$ so that $\\bar\\tau_k \\bar z$ converges and $(g\\circ\\sigma)(\\bar\\tau_k,\\bar z)\\to 0$ as $k\\to\\infty$, the sequence $\\{(g\\circ\\sigma+h)(\\bar\\tau_k,\\bar z)\\}_{k\\geq 1}$ is bounded.\nSimilarly, for a sequence $\\{\\bar\\tau_k\\}\\subset T$ so that $\\bar\\tau_k\\bar z$ converges and $(g\\circ\\sigma+h)(\\bar\\tau_k,\\bar z)\\to 0$, the sequence $\\{(g\\circ\\sigma)(\\bar\\tau_k,\\bar z)\\}_{k\\geq 1}$ is bounded.\n\\end{lemma}\n\n\\begin{proof}\nThere exists a $k_0\\geq 1$ so that for all $k,k'\\geq k_0$ holds $d_Z(\\bar\\tau_k\\bar z,\\bar\\tau_{k'}\\bar z)<\\delta$ and\n\\begin{equation*}\n(g\\circ\\sigma)(\\bar\\tau_{k'},\\bar z)-(g\\circ\\sigma)(\\bar\\tau_k,\\bar z)=(g\\circ\\sigma)(\\bar\\tau_{k'}\\bar\\tau_{k}^{-1},\\bar\\tau_k \\bar z)\\in K^0 .\n\\end{equation*}\nBy the choice of $K$, $\\varepsilon$, and $\\delta$ follows that $(g\\circ\\sigma+h)(\\bar\\tau_{k'}\\bar\\tau_k^{-1},z)\\notin 2K\\setminus K^0$ for all $z\\in\\sigma^{-1}(\\sigma(\\bar\\tau_k\\bar z))$.\nSince the range of $(g\\circ\\sigma+h)(\\bar\\tau_{k'}\\bar\\tau_k^{-1},z)$ on the fibre $\\sigma^{-1}(\\sigma(\\bar\\tau_k\\bar z))$ is connected and intersects $K^0$, we can conclude that $(g\\circ\\sigma+h)(\\bar\\tau_{k'}\\bar\\tau_k^{-1},\\bar\\tau_k \\bar z)\\in K^0$ for all $k,k'\\geq k_0$.\nTherefore the sequence $\\{(g\\circ\\sigma+h)(\\bar\\tau_k,\\bar z)\\}_{k\\geq 1}$ is bounded.\n\nProvided a sequence $\\{\\bar\\tau_k\\}\\subset T$ with convergent $\\bar\\tau_k\\bar z$ and $(g\\circ\\sigma+h)(\\bar\\tau_k,\\bar z)\\to 0$, there exists an integer $k_0\\geq 1$ so that for all $k,k'\\geq k_0$ holds $d_Z(\\bar\\tau_k\\bar z,\\bar\\tau_{k'}\\bar z)<\\delta$ and 
$(g\\circ\\sigma+h)(\\bar\\tau_{k'}\\bar\\tau_{k}^{-1},\\bar\\tau_k \\bar z)\\in K^0$.\nWe conclude as above that $(g\\circ\\sigma+h)(\\bar\\tau_{k'}\\bar\\tau_k^{-1},z)\\in K^0$ for all $k,k'\\geq k_0$ and $z\\in\\sigma^{-1}(\\sigma(\\bar\\tau_k\\bar z))$.\nSince $h(\\bar\\tau_{k'}\\bar\\tau_k^{-1},z)=0$ for some $z\\in\\sigma^{-1}(\\sigma(\\bar\\tau_k\\bar z))$, the sequence $\\{(g\\circ\\sigma)(\\bar\\tau_k,\\bar z)\\}_{k\\geq 1}$ is bounded.\n\\end{proof}\n\nAt first the step from an ordinal to its successor by an isometric extension shall be considered.\nThe ``local'' behaviour within the fibres of a compact group extension is similar to a skew product extension by a compact metric group, even if the global structure might be different since it does not necessarily split into a product.\n\n\\begin{lemma}\\label{lem:c_t}\nLet $\\gamma$ be an ordinal with $1\\leq\\gamma<\\eta$.\nIf there exists a sequence $\\{(\\tau_k,x_k)\\}_{k\\geq 1}\\subset T\\times X_{\\gamma+1}$ with $d_{\\gamma+1} (x_k,\\tau_k x_k)\\to 0$ so that $(f_\\gamma\\circ\\pi_\\gamma^{\\gamma+1})(\\tau_k,x_k)\\to 0$ and $f_{\\gamma+1}(\\tau_k,x_k)\\nrightarrow 0$ as $k\\to\\infty$ (or equivalently $(f_{\\gamma+1}\\circ\\pi_\\gamma^{\\gamma+1})(\\tau_k,x_k)\\nrightarrow 0$ and $f_\\gamma(\\tau_k,x_k)\\to 0$), then the skew product $\\widetilde\\tau_{f_{\\gamma+1}}$ is necessarily point transitive.\nTherefore, if $f_\\gamma(\\tau,x_\\gamma)$ is transient, then $f_{\\gamma+1}(\\tau,x_{\\gamma+1})$ is either transient or the skew product $\\widetilde\\tau_{f_{\\gamma+1}}$ is point transitive.\n\\end{lemma}\n\n\\begin{proof}\nSuppose that $\\widetilde\\tau_{f_{\\gamma+1}}$ is not point transitive and let $G\\subset\\textup{Aut}(Z,T)$ define a compact metric group extension of $(X_\\gamma, T)$ with $(X_{\\gamma+1},T)=\\pi(Z,T)$.\nThen the skew product extension $\\widetilde\\tau_{f_{\\gamma+1}\\circ\\pi}$ of the flow $(Z,T)$ is also not point transitive, and Lemma \\ref{lem:at} provides $K\\subset\\mathbb R$ and $\\varepsilon>0$.\nSince $G$ acts uniformly equicontinuous, there exists a $\\delta>0$ so that for all $z\\in Z$ and $\\tau\\in T$ with $d_Z (z,\\tau z)<\\delta$ follows $d_Z(k(z),k(\\tau z))=d_Z(k(z),\\tau k(z))<\\varepsilon$ for all $k\\in K$.\nFor every $z\\in Z$ the $G$-orbit of $z$ is all of the fibre $(\\pi_\\gamma^{\\gamma+1}\\circ\\pi)^{-1}((\\pi_\\gamma^{\\gamma+1}\\circ\\pi)(z))$.\nSince the $\\pi^{\\gamma+1}_\\gamma$-fibres are connected, for every $\\tau\\in T$ and $z'\\in Z$ the range of $(f_{\\gamma+1}-f_\\gamma\\circ\\pi^{\\gamma+1}_\\gamma)(\\tau,\\pi(z))$ on the fibre $(\\pi_\\gamma^{\\gamma+1}\\circ\\pi)^{-1}((\\pi_\\gamma^{\\gamma+1}\\circ\\pi)(z'))$ is connected and contains zero.\nHence Lemma \\ref{lem:sub} applies with $(Y,T)=(X_\\gamma,T)$, $\\sigma=\\pi_\\gamma^{\\gamma+1}\\circ\\pi$, $g=f_\\gamma$, and $h(\\tau,z)=(f_{\\gamma+1}-f_\\gamma\\circ\\pi^{\\gamma+1}_\\gamma)(\\tau,\\pi(z))$.\n\nHowever, given the sequence $\\{(\\tau_k,x_k)\\}_{k\\geq 1}\\subset T\\times X_{\\gamma+1}$ in the hypothesis, Lemma \\ref{lem:trans} provides a point $\\bar x\\in X_{\\gamma+1}$ and a sequence $\\{\\bar\\tau_k\\}\\subset T$ so that $(f_\\gamma\\circ\\pi_\\gamma^{\\gamma+1})(\\bar\\tau_k,\\bar x)\\to 0$ and $f_{\\gamma+1}(\\bar\\tau_k,\\bar x)\\to\\infty$ (or $(f_\\gamma\\circ\\pi_\\gamma^{\\gamma+1})(\\bar\\tau_k,\\bar x)\\to\\infty$ and $f_{\\gamma+1}(\\bar\\tau_k,\\bar x)\\to 0$).\nBy choosing a point $\\bar z\\in Z$ with $\\pi(\\bar z)=\\bar x$ and changing to a subsequence of $\\{\\bar\\tau_k\\}\\subset 
T$ with $\\bar\\tau_k\\bar z$ convergent, this contradicts Lemma \\ref{lem:sub}.\n\nNow suppose that $f_\\gamma(\\tau,x_\\gamma)$ is transient and $f_{\\gamma+1}(\\tau,x_{\\gamma+1})$ is recurrent.\nLet $x'\\in X_{\\gamma+1}$ be so that $(x',0)$ is $\\widetilde\\tau_{f_{\\gamma+1}}$-recurrent (cf. Remarks \\ref{rems:rec}).\nSince $(x',0)$ cannot be $\\widetilde\\tau_{f_\\gamma\\circ\\pi_\\gamma^{\\gamma+1}}$-recurrent, there exist a neighbourhood $V\\subset X_{\\gamma+1}\\times\\mathbb R$ of $(x',0)$ and a replete semigroup $P\\subset T$ so that $\\widetilde\\tau_{f_\\gamma\\circ\\pi_\\gamma^{\\gamma+1}}(x',0)\\notin V$ for every $\\tau\\in P$.\nGiven an arbitrary compact set $C\\subset T$, by Theorem 6.32 in \\cite{G-H} there exists a replete semigroup $Q\\subset P\\setminus C$.\nSince $(x',0)$ is $\\widetilde\\tau_{f_{\\gamma+1}}$-recurrent, we can inductively construct a sequence $\\{\\tau_k\\}_{k\\geq 1}\\subset P$ with $\\tau_k x'\\to x'$, $f_{\\gamma+1}(\\tau_k,x')\\to0$, and $(f_\\gamma\\circ\\pi^{\\gamma+1}_\\gamma)(\\tau_k,x')\\nrightarrow 0$.\nThe point transitivity of $\\widetilde\\tau_{f_{\\gamma+1}}$ follows by the preceding statement.\n\\end{proof}\n\nFurthermore, we shall study the case of transfinite induction to a limit ordinal.\nThe arguments are quite similar, however with an approximation of a limit ordinal instead of an isometric group extension.\n\n\\begin{lemma}\\label{lem:lim}\nSuppose that $\\gamma$ is a limit ordinal with $1<\\gamma\\leq\\eta$.\n\\begin{enumerate}\n\\item If for every ordinal $1<\\alpha<\\gamma$ there exists an ordinal $\\alpha\\leq\\xi<\\gamma$ so that $f_\\xi(\\tau,x_\\xi)$ has a point transitive skew product extension, then $f_\\gamma(\\tau,x_\\gamma)$ has a point transitive skew product extension.\n\\item If there exist an ordinal $1\\leq\\alpha<\\gamma$ and a sequence $\\{(\\tau_k,x_k)\\}_{k\\geq 1}\\subset T\\times X_\\gamma$ with $d_\\gamma (x_k,\\tau_k x_k)\\to 0$ so that $(f_\\xi\\circ\\pi_\\xi^\\gamma)(\\tau_k,x_k)\\to 0$ for every $\\alpha\\leq\\xi<\\gamma$ and $f_\\gamma(\\tau_k,x_k)\\nrightarrow 0$ as $k\\to\\infty$ (or equivalently $(f_\\xi\\circ\\pi_\\xi^\\gamma)(\\tau_k,x_k)\\nrightarrow 0$ for every $\\alpha\\leq\\xi<\\gamma$ and $f_\\gamma(\\tau_k,x_k)\\to 0$), then $\\widetilde\\tau_{f_\\gamma}$ is necessarily point transitive.\n\\item If there exists an ordinal $1\\leq\\alpha<\\gamma$ so that for all $\\alpha\\leq\\xi<\\gamma$ the cocycle $f_\\xi(\\tau,x_\\xi)$ is transient, then $f_\\gamma(\\tau,x_\\gamma)$ is either transient or its skew product extension is point transitive.\n\\end{enumerate}\n\\end{lemma}\n\n\\begin{proof}\nSuppose that the skew product of $\\tau_{f_\\gamma}$ on $X_\\gamma\\times\\mathbb R$ is not point transitive, and let $K\\subset\\mathbb R$ and $\\varepsilon>0$ be provided by Lemma \\ref{lem:at}.\nSince $\\gamma$ is a limit ordinal and $(X_\\gamma,T)$ is the inverse limit of the flows $\\{(X_\\xi,T):0\\leq\\xi<\\gamma\\}$, we can choose an ordinal $\\zeta<\\gamma$ so that for all $x,x'\\in X_\\gamma$ with $\\pi_\\zeta^\\gamma (x)=\\pi_\\zeta^\\gamma (x')$ it holds that $d_\\gamma (x,x')<\\varepsilon\/3$.\nIf we put $\\delta=\\varepsilon\/3$, then $d_\\gamma(x',\\tau x')<\\delta$ for $x'\\in X_\\gamma$ and $\\tau\\in T$ implies $d_\\gamma (x,\\tau x)<\\varepsilon$ for all $x\\in(\\pi^\\gamma_\\zeta)^{-1}(\\pi^\\gamma_\\zeta(x'))$.\nThese conditions remain valid even if the ordinal $\\zeta$ is increased later.\nSince the $\\pi_\\zeta^{\\gamma}$-fibres are connected, for every $\\tau\\in T$ and 
$x'_\\gamma\\in X_\\gamma$ the range of $(f_{\\gamma}-f_\\zeta\\circ\\pi^{\\gamma}_\\zeta)(\\tau,x_\\gamma)$ on the $\\pi_\\zeta^{\\gamma}$-fibre of $x'_\\gamma$ is connected and contains $0$.\n\nUnder the hypothesis (i), we can choose $(\\tau,x_\\zeta)\\in T\\times X_\\zeta$ so that $f_\\zeta(\\tau,x_\\zeta)\\in 2K\\setminus K^0$ and $d_\\gamma(x'_\\gamma,\\tau x'_\\gamma)<\\delta$ for some $x_\\gamma'\\in(\\pi_\\zeta^{\\gamma})^{-1}(x_\\zeta)$.\nThus $d_\\gamma(x_\\gamma,\\tau x_\\gamma)<\\varepsilon$ holds for all $x_\\gamma\\in(\\pi_\\zeta^{\\gamma})^{-1}(x_\\zeta)$, and for $x_\\gamma$ with $(f_{\\gamma}-f_\\zeta\\circ\\pi^{\\gamma}_\\zeta)(\\tau,x_\\gamma)=0$ this contradicts $f_\\gamma(\\tau,x_\\gamma)\\notin 2K\\setminus K^0$.\nThus assertion (i) is verified.\n\nWe then apply Lemma \\ref{lem:sub} with $(Z,T)=(X_\\gamma,T)$, $(Y,T)=(X_\\zeta,T)$, $\\sigma=\\pi_\\zeta^{\\gamma}$, $h=(f_{\\gamma}-f_\\zeta\\circ\\pi^{\\gamma}_\\zeta)$, and $g=f_\\zeta$.\nHowever, given the sequence $\\{(\\tau_k,x_k)\\}_{k\\geq 1}\\subset T\\times X_\\gamma$ in hypothesis (ii), Lemma \\ref{lem:trans} provides a point $\\bar x\\in X_\\gamma$ and a sequence $\\{\\bar\\tau_k\\}\\subset T$ so that $(f_\\zeta\\circ\\pi_\\zeta^{\\gamma},f_{\\gamma}-f_\\zeta\\circ\\pi^{\\gamma}_\\zeta)(\\bar\\tau_k,\\bar x)=(g\\circ\\sigma,h)(\\bar\\tau_k,\\bar x)\\to(0,\\infty)$ (or $(\\infty,0)$) and $\\bar\\tau_k\\bar x\\to\\bar z$ as $k\\to\\infty$.\nThis is a contradiction to Lemma \\ref{lem:sub} and verifies (ii).\n\nNow suppose that $f_\\zeta(\\tau,x_\\zeta)$ is transient and $f_{\\gamma}(\\tau,x_{\\gamma})$ is recurrent, and choose $x'\\in X_\\gamma$ so that $(x',0)$ is $\\widetilde\\tau_{f_{\\gamma}}$-recurrent.\nSince $(x',0)$ cannot be $\\widetilde\\tau_{f_\\zeta\\circ\\pi_\\zeta^{\\gamma}}$-recurrent, there exist a neighbourhood $V\\subset X_\\gamma\\times\\mathbb R$ of $(x',0)$ and a replete semigroup $P\\subset T$ with $\\widetilde\\tau_{f_\\zeta\\circ\\pi_\\zeta^{\\gamma}}(x',0)\\notin V$ for every $\\tau\\in P$.\nBy induction there exists a sequence $\\{\\tau_k\\}_{k\\geq 1}\\subset P$ with $\\widetilde{(\\tau_k)}_{f_{\\gamma}}(x',0)\\to(x',0)$ as $k\\to\\infty$, and by Lemma \\ref{lem:trans} there exist a point $\\bar x\\in X_\\gamma$ and a sequence $\\{\\bar\\tau_k\\}\\subset T$ so that $(f_\\gamma,f_\\zeta\\circ\\pi^{\\gamma}_\\zeta)(\\bar\\tau_k,\\bar x)=(g\\circ\\sigma+h,g\\circ\\sigma)(\\bar\\tau_k,\\bar x)\\to(0,\\infty)$ and $\\bar\\tau_k\\bar x\\to\\bar x$.\nThis contradiction to Lemma \\ref{lem:sub} verifies the statement (iii).\n\\end{proof}\n\n\\begin{proposition}\\label{prop:max}\nIf the real-valued cocycle $f(\\tau,x)$ is topologically recurrent apart from a coboundary, then there exists a maximal ordinal $1\\leq\\alpha\\leq\\eta$ so that the skew product extension $\\widetilde\\tau_{f_\\alpha}$ is point transitive on $X_\\alpha\\times\\mathbb R$.\nThe cocycle $(f-f_\\alpha\\circ\\pi_\\alpha) (\\tau,x)$ is relatively trivial with respect to $(f_\\alpha\\circ\\pi_\\alpha)(\\tau,x)$.\n\\end{proposition}\n\n\\begin{proof}\nLet us first suppose that the cocycle $f_\\xi(\\tau,x_\\xi)$ is recurrent for every ordinal $1\\leq\\xi<\\eta$, and let $\\mathcal O=\\{1\\leq\\xi\\leq\\eta:{f_\\xi}(\\tau,x_\\xi)\\enspace\\textup{is \\emph{not} a coboundary}\\}$.\nThis set is non-empty since $f_\\eta(\\tau,x)$ is not a coboundary; let $\\beta$ be its minimal element.\nIf $\\beta=1$, then by Proposition \\ref{prop:isom} the recurrent skew product $\\widetilde{\\tau}_{f_1}$ of the isometric flow $(X_1,T)$ is point 
transitive.\nIf $\\beta>1$, then Fact \\ref{fact:GH} provides a sequence $\\{(\\tau_k,x_k)\\}_{k\\geq 1}\\subset T\\times X_\\beta$ with $d_\\beta(x_k,\\tau_k x_k)\\to 0$ and $f_\\beta(\\tau_k, x_k)\\to\\infty$.\nFor all $1\\leq\\zeta<\\beta$ holds $(f_\\beta\\circ\\pi_\\zeta^\\beta)(\\tau_k, x_k)\\to 0$, and by the Lemmas \\ref{lem:c_t} and \\ref{lem:lim} (ii) $\\widetilde{\\tau}_{f_\\beta}$ is point transitive.\n\nIf $f_\\xi(\\tau,x_\\xi)$ is transient for an ordinal $1\\leq\\xi<\\eta$, then let $\\beta$ be the minimal element of the set $\\mathcal O=\\{\\xi<\\zeta\\leq\\eta:f_\\zeta(\\tau,x_\\zeta)\\enspace\\textup{is topologically recurrent}\\}$.\nThis set is non-empty since $f_\\eta(\\tau,x_\\eta)$ is topologically recurrent, and it follows from the Lemmas \\ref{lem:c_t} and \\ref{lem:lim} (iii) that $\\widetilde\\tau_{f_\\beta}$ is even point transitive.\n\nNow let $\\mathcal O=\\{1\\leq\\xi\\leq\\eta:\\widetilde\\tau_{f_\\zeta}\\enspace\\textup{is \\emph{not} point transitive for all}\\enspace\\xi\\leq\\zeta\\leq\\eta\\}$.\nIf $\\mathcal O$ is empty, then $\\widetilde\\tau_{f_\\eta}$ is point transitive and $\\alpha=\\eta$.\nOtherwise, the set $\\mathcal O$ has a minimal element $\\gamma>1$ since $\\widetilde\\tau_{f_\\beta}$ is point transitive for some $1\\leq\\beta\\leq\\eta$.\nSince $\\gamma$ cannot be a limit ordinal by Lemma \\ref{lem:lim} (i), there exists a maximal ordinal $\\alpha\\geq 1$ with point transitive $\\widetilde\\tau_{f_\\alpha}$.\nIf $\\{(\\tau_k,x_k)\\}_{k\\geq 1}\\subset T\\times X$ is a sequence with $d (x_k,\\tau_k x_k)\\to 0$ and $(f_\\alpha\\circ\\pi_\\alpha)(\\tau_k,x_k)\\to 0$, then transfinite induction using the maximality of $\\alpha$ and Lemmas \\ref{lem:c_t}, \\ref{lem:lim} (ii) verifies that $(f_\\xi\\circ\\pi_\\xi)(\\tau_k,x_k)\\to 0$ for every $\\alpha\\leq\\xi\\leq\\eta$.\n\\end{proof}\n\nAfter the flow $(X_\\alpha,T)$ with a point transitive skew product extension $\\widetilde\\tau_{f_\\alpha}$ has been identified, we shall study the extension from $(X_\\alpha,T)$ to $(X,T)$.\nThere might be infinitely many isometric extensions in between, and therefore this extension is in general a distal extension.\nSince our construction will use the regulariser of this extension, it is necessary to leave the category of compact metric flows for the category of compact Hausdorff flows during the following construction (cf. 
Remark \\ref{rem:d_n_m}).\nHowever, the flow which will be constructed by means of the regulariser will be metric as a factor of the compact metric flow $(X,T)$.\n\n\\begin{proposition}\\label{prop:flow}\nThere exists a factor $(Y,T)=(X_\\alpha\\times M,T)=\\pi_Y(X,T)$ which is a Rokhlin extension of $(X_\\alpha,T)=\\rho_\\alpha(Y,T)$ by a distal minimal compact metric $\\mathbb R$-flow $(M,\\{\\phi^t:t\\in\\mathbb R\\})$ and the cocycle $f_\\alpha(\\tau,x_\\alpha)$ so that for every $x\\in X$ holds\n\\begin{equation}\\label{eq:p_X}\n\\pi_Y^{-1}(\\pi_Y(x))\\times\\{0\\}=\\mathcal D_{T, f_\\alpha\\circ\\pi_\\alpha}(x,0)\\cap(\\pi_\\alpha^{-1}(\\pi_\\alpha(x))\\times\\{0\\}) .\n\\end{equation}\nThe $\\mathbb R$-flow $\\{\\psi^t:t\\in\\mathbb R\\}\\subset\\textup{Aut}(Y,T)$ defined by $\\psi^t(x_\\alpha,m)=(x_\\alpha,\\phi^t(m))$ for $(x_\\alpha,m)\\in Y=X_\\alpha\\times M$ fulfils for every $y\\in Y$ and every $t\\in\\mathbb R$ that\n\\begin{equation}\\label{eq:o_flow}\n\\bar{\\mathcal O}_{T, f_\\alpha\\circ\\rho_\\alpha}(y,0)\\cap(\\rho_\\alpha^{-1}(\\rho_\\alpha(y))\\times\\{t\\})\\subset\\{(\\psi^t(y),t)\\} ,\n\\end{equation}\nwith coincidence of these sets if $(\\rho_\\alpha(y),0)\\in X_\\alpha\\times\\mathbb R$ is transitive for $\\widetilde\\tau_{f_\\alpha}$.\n\\end{proposition}\n\n\\begin{proof}\nWe shall construct a factor $(Y,T)$ of $(X,T)$ and a flow $\\{\\varphi^t:t\\in\\mathbb R\\}\\subset\\textup{Aut}(Y,T)$, and then we shall represent $(Y,T)$ as a Rokhlin extension of $(X_\\alpha,T)$.\nLet $(\\tilde X,T)$ be a distal minimal compact Hausdorff flow with $(X,T)=\\pi(\\tilde X,T)$ and a Hausdorff topological group $G\\subset\\textup{Aut}(\\tilde X,T)$ acting freely on the fibres of $\\pi_\\alpha\\circ\\pi$ so that $(X,T)$ is the $H$-orbit space of a subgroup $H\\subset G$ (cf. 
Fact \\ref{fact:reg_ex}).\nFor an arbitrary point $\\tilde z\\in \\tilde X$ and $t\\in\\mathbb R$ we define a closed subset of $G$ by\n\\begin{equation}\\label{eq:G}\nG_{\\tilde z,t}=\\{g\\in G: (\\pi(g(\\tilde z)),t)\\in\\mathcal D_{T,f_\\alpha\\circ\\pi_\\alpha}(\\pi(\\tilde z),0)\\} .\n\\end{equation}\nThe mapping $\\pi$ is open as a homomorphism of distal minimal compact flows, and hence for every $g\\in G_{\\tilde z,t}$ there exist nets $\\{\\tilde z_i\\}_{i\\in I}\\subset \\tilde X$ and $\\{\\tau_i\\}_{i\\in I}\\subset T$ with $\\tilde z_i\\to \\tilde z$, $\\tau_i\\pi(\\tilde z_i)\\to\\pi(g(\\tilde z))$, and $f_\\alpha(\\tau_i,\\pi_\\alpha\\circ\\pi(\\tilde z_i))\\to t$.\nSince the cocycle $(f_\\alpha\\circ\\pi_\\alpha)(\\tau,x_\\alpha)$ is constant on the fibres of $\\pi_\\alpha$ and $T$ is Abelian, it follows for every fixed $\\tau\\in T$ that\n\\begin{equation*}\n\\tau_i\\pi(\\tau \\tilde z_i)=\\tau_i\\tau\\pi(\\tilde z_i)=\\tau\\tau_i\\pi(\\tilde z_i)\\to\\tau\\pi(g(\\tilde z))=\\pi(\\tau g(\\tilde z))=\\pi(g(\\tau \\tilde z))\n\\end{equation*}\nand by the cocycle identity\n\\begin{eqnarray*}\nf_\\alpha(\\tau_i,\\pi_\\alpha\\circ\\pi(\\tau \\tilde z_i)) & = & f_\\alpha(\\tau_i,\\pi_\\alpha\\circ\\pi(\\tilde z_i))-f_\\alpha(\\tau,\\pi_\\alpha\\circ\\pi(\\tilde z_i))\\\\\n& & +f_\\alpha(\\tau,\\pi_\\alpha\\circ\\pi(\\tau_i\\tilde z_i))\\to t .\n\\end{eqnarray*}\nBy the density of the $T$-orbit of $\\tilde z$ and a diagonalisation of nets there exist for every $\\tilde x\\in \\tilde X$ nets $\\{\\tilde x_i\\}_{i\\in I}\\subset \\tilde X$ and $\\{\\tau'_i\\}_{i\\in I}\\subset T$ with $\\tilde x_i\\to \\tilde x$, $\\tau'_i\\pi(\\tilde x_i)\\to\\pi(g(\\tilde x))$, and $f_\\alpha(\\tau_i,\\pi_\\alpha\\circ\\pi(\\tilde x_i))\\to t$.\nTherefore\n\\begin{equation*}\n(\\pi(g(\\tilde x)),t)\\in\\mathcal D_{T,f_\\alpha\\circ\\pi_\\alpha}(\\pi(\\tilde x),0)\n\\end{equation*}\nso that $g\\in G_{\\tilde x,t}=G_{\\tilde z,t}=G_t$.\nBy symmetry follows now that $G_{-t}=(G_t)^{-1}$.\n\nThen we fix a point $x'\\in X$ with $\\bar{\\mathcal O}_{T,f_\\alpha}(\\pi_\\alpha(x'),0)=X_\\alpha\\times\\mathbb R$ and $\\mathcal D_{T,f_\\alpha\\circ\\pi_\\alpha}(x',0)=\\bar{\\mathcal O}_{T,f_\\alpha\\circ\\pi_\\alpha}(x',0)$ (cf. 
Fact \\ref{fact:o_p}).\nThe set $G_t$ is non-empty for every $t\\in\\mathbb R$, since $\\bar{\\mathcal O}_{T, f_\\alpha}(\\pi_\\alpha(x'),0)=X_\\alpha\\times\\mathbb R$ and the compactness of $X$ ensure that\n\\begin{equation*}\n\\bar{\\mathcal O}_{T,f_\\alpha\\circ\\pi_\\alpha}(x',0)\\cap\\pi_\\alpha^{-1}(\\pi_\\alpha(x'))\\times\\{t\\}\\neq\\emptyset .\n\\end{equation*}\nFor arbitrary $t,t'\\in\\mathbb R$ and $g\\in G_t$, $g'\\in G_{t'}$, we select $\\tilde x, \\tilde z\\in \\tilde X$ so that $\\pi(\\tilde x)=x'$ and $\\tilde x=g'(\\tilde z)$.\nThen we have\n\\begin{equation*}\n(x',t')=(\\pi(g'(\\tilde z)),t')\\in\\mathcal D_{T,f_\\alpha\\circ\\pi_\\alpha}(\\pi(\\tilde z),0) ,\n\\end{equation*}\nand for $\\tilde y=g(\\tilde x)=gg'(\\tilde z)$ it holds that $(\\pi(\\tilde y),t)\\in\\bar{\\mathcal O}_{T,f_\\alpha\\circ\\pi_\\alpha}(x',0)=\\mathcal D_{T,f_\\alpha\\circ\\pi_\\alpha}(x',0)$.\nBy Remark \\ref{rem:o_p} follows $(\\pi(\\tilde y),t+t')\\in\\mathcal D_{T,f_\\alpha\\circ\\pi_\\alpha}(\\pi(\\tilde z),0)$ so that $gg'\\in G_{t+t'}$.\nHence $G_t G_{t'}\\subset G_{t+t'}$ holds for all $t,t'\\in\\mathbb R$, and from $G_{-t}=(G_{t})^{-1}$ follows $(G_{t})^{-1} G_{t+t'}=G_{-t} G_{t+t'}\\subset G_{t'}$ so that $G_{t} G_{t'}= G_{t+t'}$.\nThus the Hausdorff topological group\n\\begin{equation*}\n\\tilde G=\\cup_{t\\in\\mathbb R}G_t\n\\end{equation*}\nhas the closed set $G_0\\supset H$ as a normal subgroup so that $G_t$ is a $G_0$-coset in $\\tilde G$ for every $t\\in\\mathbb R$.\nMoreover, the mapping $t\\mapsto G_t$ is a group homomorphism from $\\mathbb R$ into $\\tilde G\/G_0$.\nThe group $G_0$ is not necessarily compact, however its orbit space on $\\tilde X$ defines a partition into sets invariant under $H\\subset G_0$.\nHence this is also a partition of $X$, and the equivalence relation $R_Y$ of this partition of $X$ is $T$-invariant since $G_0\\subset\\textup{Aut}(\\tilde X,T)$.\nMoreover, $R_Y$ is closed in $X^2$, since definition (\\ref{eq:G}) implies that $(x,x')\\in R_Y$ if and only if $(x',0)\\in\\mathcal D_{T, f_\\alpha\\circ\\pi_\\alpha}(x,0)\\cap(\\pi_\\alpha^{-1}(\\pi_\\alpha(x))\\times\\{0\\})$.\nThe factor $(Y,T)=\\pi_Y(X,T)$ defined by the $T$-invariant closed equivalence relation $R_Y$ is an extension of $(X_\\alpha,T)=\\rho_\\alpha(Y,T)$, and equality (\\ref{eq:p_X}) follows.\nThe $\\mathbb R$-action $\\{\\varphi^t:t\\in\\mathbb R\\}\\subset\\textup{Aut}(Y,T)$ is well defined for every $y\\in Y$ and $t\\in\\mathbb R$ by\n\\begin{equation*}\n\\varphi^t(y)=G_t ((\\pi_Y\\circ\\pi)^{-1}(y))=G_t (\\{\\tilde x\\in \\tilde X:G_0(\\tilde x)=y\\}) .\n\\end{equation*}\nLet $\\{(t_k,y_k)\\}_{k\\geq 1}\\subset\\mathbb R\\times Y$ be a sequence with $(t_k,y_k)\\to (t,y)$, then $\\varphi^{t_k}(y_k)=G_0 g_k(\\tilde x_k)$ for a sequence $\\{\\tilde x_k\\}_{k\\geq 1}\\subset\\tilde X$ with $\\pi_Y\\circ\\pi(\\tilde x_k)=y_k$ and $g_k\\in G_{t_k}$.\nWe can assume that $\\tilde x_k\\to\\tilde x$ and $g_k(\\tilde x_k)\\to\\tilde z$ so that $(\\pi(\\tilde z),t)\\in\\mathcal D_{T,f_\\alpha\\circ\\pi_\\alpha}(\\pi(\\tilde x),0)$ and $\\tilde z=g_t(\\tilde x)$ for some $g_t\\in G_t$.\nFrom $\\pi_Y\\circ\\pi(\\tilde x)=y$ and $\\varphi^{t_k}(y_k)=\\pi_Y\\circ\\pi(g_k(\\tilde x_k))\\to\\pi_Y\\circ\\pi(\\tilde z)=\\varphi^t(y)$ follows the continuity of the action $\\{\\varphi^t:t\\in\\mathbb R\\}$ on $Y$.\n\nWe turn to the inclusion (\\ref{eq:o_flow}).\nSuppose that $(y_i,t)\\in\\bar{\\mathcal O}_{T, f_\\alpha\\circ\\rho_\\alpha}(y,0)\\cap\\rho_\\alpha^{-1}(x_\\alpha)\\times\\{t\\}$ for some 
$x_\\alpha\\in X_\\alpha$ and $i\\in\\{1,2\\}$, and select $x\\in\\pi_Y^{-1}(y)$.\nBy the compactness of $X$ there exist points $x_i\\in\\pi_Y^{-1}(y_i)\\subset\\pi_\\alpha^{-1}(x_\\alpha)$ so that $(x_i,t)\\in\\bar{\\mathcal O}_{T, f_\\alpha\\circ\\pi_\\alpha}(x,0)$, and therefore $(x_2,0)\\in\\mathcal D_{T,f_\\alpha\\circ\\pi_\\alpha}(x_1,0)$.\nThe equality (\\ref{eq:p_X}) implies that $y_1=\\pi_Y(x_1)=\\pi_Y(x_2)=y_2$, and thus for every $y\\in Y$ and $t\\in\\mathbb R$ holds\n\\begin{equation}\\label{eq:card_1}\n\\textup{card}\\{\\bar{\\mathcal O}_{T, f_\\alpha\\circ\\rho_\\alpha}(y,0)\\cap\\rho_\\alpha^{-1}(\\rho_\\alpha(y))\\times\\{t\\}\\}\\leq 1 .\n\\end{equation}\nMoreover, for $x_\\alpha=\\rho_\\alpha(y)$ follows $x_1=\\pi(g_t(\\tilde x))$ with $g_t\\in G_t$ and $\\tilde x\\in\\pi^{-1}(x)\\subset\\tilde X$.\nHence $y_1=\\pi_Y(x_1)=\\varphi^t(y)$ and inclusion (\\ref{eq:o_flow}) is verified.\nFor $\\widetilde\\tau_{f_\\alpha}$-transitive $(\\rho_\\alpha(y),0)$ the cardinality in (\\ref{eq:card_1}) is equal to $1$ for every $y\\in Y$ and $t\\in\\mathbb R$, and for $y'\\in\\rho_\\alpha^{-1}(\\rho_\\alpha(y))$ and $x_\\alpha\\in X_\\alpha$ holds $\\bar{\\mathcal O}_{T,f_\\alpha\\circ\\rho_\\alpha}(y',0)\\cap\\rho_\\alpha^{-1}(x_\\alpha)\\times\\{0\\}=\\{(y_1,0)\\}$.\nWe fix a point $\\bar x\\in X_\\alpha$ with $\\widetilde\\tau_{f_\\alpha}$-transitive $(\\bar x,0)$.\nIf $(y_2,0)\\in\\mathcal D_{T,f_\\alpha\\circ\\rho_\\alpha}(y',0)\\cap\\rho_\\alpha^{-1}(x_\\alpha)\\times\\{0\\}$, then Remark \\ref{rem:o_p} implies that $(y_2,0)\\in\\mathcal D_{T,f_\\alpha\\circ\\rho_\\alpha}(y_1,0)$, and as above follows $y_1=y_2$.\nHence\n\\begin{equation}\\label{eq:homeo_Y}\n\\bar{\\mathcal O}_{T,f_\\alpha\\circ\\rho_\\alpha}(y',0)\\cap\\rho_\\alpha^{-1}(x_\\alpha)\\times\\{0\\}=\\mathcal D_{T,f_\\alpha\\circ\\rho_\\alpha}(y',0)\\cap\\rho_\\alpha^{-1}(x_\\alpha)\\times\\{0\\}\n\\end{equation}\nholds for every $y'\\in\\rho_\\alpha^{-1}(\\bar x)$ and $x_\\alpha\\in X_\\alpha$.\nFor distinct $y',y''\\in\\rho_\\alpha^{-1}(\\bar x)$ follows\n\\begin{equation*}\n\\bar{\\mathcal O}_{T,f_\\alpha\\circ\\rho_\\alpha}(y',0)\\cap\\bar{\\mathcal O}_{T,f_\\alpha\\circ\\rho_\\alpha}(y'',0)\\cap\\rho_\\alpha^{-1}(x_\\alpha)\\times\\{0\\}=\\emptyset .\n\\end{equation*}\nIndeed, given a point $\\bar y$ in this intersection, for every sequence $\\{\\tau_k\\}_{k\\geq 1}\\subset T$ with $\\tau_k\\bar x\\to\\rho_\\alpha(\\bar y)$ and $f_\\alpha(\\tau_k,\\bar x)\\to 0$ follows by equality (\\ref{eq:card_1}) that $d_Y(\\tau_k y',\\tau_k y'')\\to 0$, in contradiction to the distality of $(Y,T)$.\nHence the mapping $\\iota:X_\\alpha\\times\\rho_\\alpha^{-1}(\\bar x)\\longrightarrow Y$\n\\begin{equation*}\n(x_\\alpha,y')\\mapsto\\rho_\\alpha^{-1}(x_\\alpha)\\cap\\{y\\in Y:(y,0)\\in\\bar{\\mathcal O}_{T,f_\\alpha\\circ\\rho_\\alpha}(y',0)\\}\n\\end{equation*}\nis well-defined, one-to-one, and by equality (\\ref{eq:homeo_Y}) also continuous.\nFor a dense set of points $\\bar y\\in Y$ holds the $\\widetilde\\tau_{f_\\alpha}$-transitivity of $(\\rho_\\alpha(\\bar y),0)$, since $\\rho_\\alpha$ is open.\nWe can conclude for every $y\\in Y$ that $\\mathcal D_{T,f_\\alpha\\circ\\rho_\\alpha}(y,0)\\cap\\rho_\\alpha^{-1}(\\bar x)\\times\\{0\\}\\neq\\emptyset$, and thus $\\mathcal D_{T,f_\\alpha\\circ\\rho_\\alpha}(y',0)\\cap\\{y\\}\\times\\{0\\}=\\bar{\\mathcal O}_{T,f_\\alpha\\circ\\rho_\\alpha}(y',0)\\cap\\{y\\}\\times\\{0\\}\\neq\\emptyset$ for some $y'\\in\\rho_\\alpha^{-1}(\\bar x)$.\nHence $\\iota$ is onto and by compactness $Y$ 
and $X_\\alpha\\times\\rho_\\alpha^{-1}(\\bar x)$ are homeomorphic.\n\nLet $\\{\\phi^t:t\\in\\mathbb R\\}$ be the restriction of $\\{\\varphi^t:t\\in\\mathbb R\\}$ to the $\\{\\varphi^t:t\\in\\mathbb R\\}$-invariant compact metric space $M=\\rho_\\alpha^{-1}(\\bar x)$.\nFor every $y'\\in M$ and $\\tau\\in T$ it holds that $\\bar{\\mathcal O}_{T, f_\\alpha\\circ\\rho_\\alpha}(y',0)\\cap\\rho_\\alpha^{-1}(\\bar x)\\times\\{-f_\\alpha(\\tau,\\bar x) \\}=\\{(\\phi^{-f_\\alpha(\\tau,\\bar x)}(y'),-f_\\alpha(\\tau,\\bar x))\\}$ and $\\widetilde{\\tau}_{f_\\alpha\\circ\\pi_Y}(\\phi^{-f_\\alpha(\\tau,\\bar x)}(y'),-f_\\alpha(\\tau,\\bar x))\\in\\rho_\\alpha^{-1}(\\tau\\bar x)\\cap\\{y\\in Y:(y,0)\\in\\bar{\\mathcal O}_{T,f_\\alpha\\circ\\rho_\\alpha}(y',0)\\}$.\nTherefore $\\tau\\phi^{-f_\\alpha(\\tau,\\bar x)}(y')=\\iota(\\tau\\bar x,y')$ and $\\tau y=\\iota(\\tau_{\\phi,f_\\alpha}(\\bar x,y))$ for every $y\\in M$ and $\\tau\\in T$.\nThe minimality of $(Y,T)$ implies that $(X_\\alpha\\times M,T)$ and $(Y,T)$ are topologically isomorphic via $\\iota$.\nMoreover, for the mapping $\\psi^t(x_\\alpha,m)=(x_\\alpha,\\phi^t(m))$ with $\\psi\\in\\textup{Aut}(X_\\alpha\\times M,T)$ and every $m\\in M=\\rho_\\alpha^{-1}(\\bar x)$ and $t\\in\\mathbb R$ it holds that $\\varphi^t(\\iota(\\tau_{\\phi,f_\\alpha}(\\bar x,m)))=\\varphi^t(\\tau m)=\\tau\\varphi^t(m)=\\iota(\\tau_{\\phi,f_\\alpha}(\\bar x,\\phi^t(m)))=\\iota(\\psi^t(\\tau_{\\phi,f_\\alpha}(\\bar x,m)))$.\nBy the minimality of $(X_\\alpha\\times M,T)$ it follows that $\\psi^t=\\iota^{-1}\\circ\\varphi^t\\circ\\iota$ for every $t\\in\\mathbb R$.\nThe flow $(M,\\{\\phi^t:t\\in\\mathbb R\\})$ is minimal and distal, since a non-transitive point $m'\\in M$ and a proximal pair $(m',m'')\\in M^2$, respectively, give rise to a non-transitive point $(x_\\alpha,m')\\in Y$ and a proximal pair $((x_\\alpha,m'),(x_\\alpha,m''))\\in Y^2$, respectively.\n\\end{proof}\n\nIt should be mentioned that an ordinal $\\xi\\leq\\eta$ with $(Y,T)=(X_\\xi,T)$ does not necessarily exist.\nTherefore we shall define a cocycle $f_Y:T\\times Y\\longrightarrow\\mathbb R$ independently of the cocycles $f_\\xi(\\tau,x_\\xi)$, and it will turn out that $(f_Y\\circ\\pi_Y)(\\tau,x)$ can be chosen topologically cohomologous to $f$.\n\n\\begin{proposition}\\label{prop:res}\nThere exists a topological cocycle $f_Y(\\tau,y)$ of the flow $(Y,T)$ so that $(f_Y\\circ\\pi_Y)(\\tau,x)$ is topologically cohomologous to $f(\\tau,x)$ and $f_Y(\\tau,y)$ is relatively trivial with respect to $(f_\\alpha\\circ\\rho_\\alpha)(\\tau,y)$.\n\\end{proposition}\n\nWe shall prove another technical lemma first.\n\n\\begin{lemma}\\label{lem:tilde_coc}\nLet $(Z,T)$ be a distal minimal compact metric flow which extends $(X_\\alpha,T)=\\sigma_\\alpha(Z,T)$, and let $G\\subset\\textup{Aut}(Z,T)$ be a Hausdorff topological group preserving the fibres of $\\sigma_\\alpha$.\nSuppose that there exists a continuous group homomorphism $\\varphi:G\\longrightarrow\\mathbb R$ so that for every $g\\in G$ and every $z\\in Z$ it holds that\n\\begin{equation*}\n(g(z),\\varphi(g))\\in\\mathcal D_{T,f_\\alpha\\circ\\sigma_\\alpha}(z,0) .\n\\end{equation*}\nFurthermore, suppose that $h(\\tau,z)$ is a real-valued cocycle of $(Z,T)$ which is relatively trivial with respect to $(f_\\alpha\\circ\\sigma_\\alpha)(\\tau,z)$.\nThen there exists a continuous cocycle $\\bar h((\\tau,g),z)$ of the flow $(Z,T\\times G)$ with the action $\\{g\\circ\\tau:(\\tau,g)\\in T\\times G\\}$ so that $h(\\tau,z)=\\bar h((\\tau,\\mathbf 1_G),z)$ holds for every $(\\tau,z)\\in 
T\\times Z$ and the mapping $h\\mapsto\\bar h$ is linear.\nFor $z\\in Z$, $g\\in G$, and a sequence $\\{(\\tau_k,z_k)\\}_{k\\geq 1}\\subset T\\times Z$ with $z_k\\to z$, $\\tau_k z_k\\to g(z)$, and $(f_\\alpha\\circ\\rho_\\alpha)(\\tau_k,z_k)\\to\\varphi(g)$ holds\n\\begin{equation}\\label{eq:h_uni}\nh(\\tau_k,z_k)\\to\\bar h((\\mathbf 1_T,g),z)\\quad\\textup{as}\\enspace k\\to\\infty.\n\\end{equation}\n\\end{lemma}\n\n\\begin{proof}\nWe put $F=(f_\\alpha\\circ\\sigma_\\alpha,h):T\\times Z\\longrightarrow\\mathbb R^2$ and fix a point $\\bar z\\in Z$ so that $\\bar{\\mathcal O}_{T,F} (\\bar z,0,0)$ and $\\mathcal D_{T,F} (\\bar z,0,0)$ coincide in $Z\\times(\\mathbb R_\\infty)^2$ (cf. Fact \\ref{fact:o_p}).\nFor every $g\\in G$ we fix a sequence $\\{\\tau_k^g\\}_{k\\geq 1}\\subset T$ with $\\widetilde{(\\tau_k^g)}_{f_\\alpha\\circ\\sigma_\\alpha}(\\bar z,0)\\to(g(\\bar z),\\varphi(g))$ as $k\\to\\infty$, with $\\{\\tau_k^{\\mathbf 1_G}=\\mathbf 1_T\\}_{k\\geq 1}$.\nSince $g\\in\\textup{Aut}(Z,T)$ and $f_\\alpha\\circ\\sigma_\\alpha\\circ g=f_\\alpha\\circ\\sigma_\\alpha$, we can conclude for every $t\\in T$ that $\\tau_k^gt\\bar z=t\\tau_k^g\\bar z\\to t g(\\bar z)=g(t\\bar z)$ as $k\\to\\infty$ as well as\n\\begin{equation}\\label{eq:transf}\n(f_\\alpha\\circ\\sigma_\\alpha)(\\tau_k^g,t\\bar z)=(f_\\alpha\\circ\\sigma_\\alpha)(t, \\tau_k^g\\bar z)+(f_\\alpha\\circ\\sigma_\\alpha)(\\tau_k^g,\\bar z)-(f_\\alpha\\circ\\sigma_\\alpha)(t,\\bar z)\\to\\varphi(g).\n\\end{equation}\nBy the relative triviality of $h(\\tau,z)$ with respect to $(f_\\alpha\\circ\\sigma_\\alpha)(\\tau,z)$, the sequence $\\{h(\\tau\\tau_k^g,t\\bar z)\\}_{k\\geq 1}$ converges for all $\\tau,t\\in T$.\nThus we can put\n\\begin{equation}\\label{eq:def_g}\n\\bar h((\\tau,g),t\\bar z)=\\lim_{k\\to\\infty} h(\\tau\\tau_k^g,t\\bar z)=h(\\tau,g(t\\bar z))+\\lim_{k\\to\\infty} h(\\tau_k^g,t\\bar z)\n\\end{equation}\nfor every $(\\tau, g,t\\bar z)\\in T\\times G\\times Z$.\nSuppose that there exist sequences $\\{(\\tau_k^{i},z_k^i)\\}_{k\\geq 1}\\subset T\\times Z$ for $i=1,2$ so that $z_k^i\\to z$, $\\tau_k^i z_k^i\\to g(z)=\\lim_{k\\to\\infty}g(z_k^i)$, and\n\\begin{equation*}\n(f_\\alpha\\circ\\sigma_\\alpha)(\\tau_k^{i},z_k^i)\\to\\varphi(g) \\quad\\textup{as}\\enspace k\\to\\infty,\n\\end{equation*}\nwhile for $i=1,2$ the limit points $\\bar h_i=\\lim_{k\\to\\infty} h(\\tau_k^{i},z_k^i)\\in\\mathbb R_\\infty$ are either distinct or both equal to $\\infty$.\nThen $(g(z),\\varphi(g),\\bar h_i)\\in\\mathcal D_{T,F} (z,0,0)$ for $i=1,2$, and for every $\\tau'\\in T$ follows from $g\\in\\textup{Aut}(Z,T)$ and the cocycle identity that\n\\begin{eqnarray*}\n(g(\\tau' z),\\varphi(g)+(f_\\alpha\\circ\\sigma_\\alpha)(\\tau',g(z))-(f_\\alpha\\circ\\sigma_\\alpha)(\\tau',z),h(\\tau',g(z))+\\bar h_i-h(\\tau',z))=\\hspace{-5mm}\\\\\n(g(\\tau' z),\\varphi(g),h(\\tau',g(z))+\\bar h_i-h(\\tau',z))\\in\\mathcal D_{T,F}(\\tau' z,0,0).\n\\end{eqnarray*}\nSince $\\bar{\\mathcal O}_T(z)=Z$, either there are distinct points $a_1,a_2\\in\\mathbb R_\\infty$ with $(g(\\bar z),\\varphi(g),a_i)\\in\\mathcal D_{T,F} (\\bar z,0,0)$ or it holds that $(g(\\bar z),\\varphi(g),\\infty)\\in\\mathcal D_{T,F} (\\bar z,0,0)$.\nIn either case, since $\\bar{\\mathcal O}_{T,F} (\\bar z,0,0)=\\mathcal D_{T,F} (\\bar z,0,0)$ in $Z\\times(\\mathbb R_\\infty)^2$, this contradicts to the relative triviality of $h(\\tau,z)$ with respect to $(f_\\alpha\\circ\\sigma_\\alpha)(\\tau,z)$.\nTherefore equality (\\ref{eq:h_uni}) holds true, and the definition (\\ref{eq:def_g}) 
extends uniquely from the $T$-orbit of $\\bar z$ to a continuous mapping $\\bar h:T\\times G\\times Z\\longrightarrow\\mathbb R$ since the action of $T\\times G$ on $X$ and $\\varphi$ are continuous.\n\nFor the cocycle identity let $(\\tau_1,g_1),(\\tau_2,g_2)\\in T\\times G$ be arbitrary with sequences $\\{\\tau_k^{g_1}\\}_{k\\geq 1},\\{\\tau_k^{g_2}\\}_{k\\geq 1}\\subset T$.\nBy equality (\\ref{eq:transf}) we select a sequence $\\{k_l\\}_{l\\geq 1}\\subset\\mathbb N$ with\n\\begin{equation*}\n\\tau_{k_l}^{g_2}\\tau_l^{g_1}\\bar z=\\tau_l^{g_1}\\tau_{k_l}^{g_2}\\bar z\\to g_2(g_1(\\bar z))\\enspace\\textup{and}\\enspace (f_\\alpha\\circ\\sigma_\\alpha)(\\tau_l^{g_1}\\tau_{k_l}^{g_2},\\bar z)\\to\\varphi(g_2)+\\varphi(g_1)=\\varphi(g_2 g_1)\n\\end{equation*}\nas $l\\to\\infty$.\nThus we can put $\\{\\tau_l^{g_2 g_1}\\}_{l\\geq 1}=\\{\\tau_l^{g_1}\\tau_{k_l}^{g_2}\\}_{l\\geq 1}$, and for every $t\\in T$ the equality (\\ref{eq:transf}) implies that $\\tau_{k_l}^{g_2}\\tau_2 t\\bar z\\to g_2(\\tau_2 t\\bar z)$, $\\tau_l^{g_1}\\tau_{k_l}^{g_2}\\tau_2 t\\bar z\\to(g_2 g_1)(\\tau_2 t\\bar z)$, and\n\\begin{eqnarray*}\n(f_\\alpha\\circ\\sigma_\\alpha)(\\tau_1\\tau_l^{g_1},\\tau_{k_l}^{g_2}\\tau_2 t\\bar z)=\\hspace{7.5cm}\\\\\n(f_\\alpha\\circ\\sigma_\\alpha)(\\tau_1,\\tau_l^{g_1}\\tau_{k_l}^{g_2}\\tau_2 t\\bar z)+(f_\\alpha\\circ\\sigma_\\alpha)(\\tau_l^{g_1}\\tau_{k_l}^{g_2},\\tau_2 t\\bar z)-(f_\\alpha\\circ\\sigma_\\alpha)(\\tau_{k_l}^{g_2},\\tau_2 t\\bar z)\\\\\n\\to (f_\\alpha\\circ\\sigma_\\alpha)(\\tau_1,g_2(\\tau_2 t\\bar z))+\\varphi(g_2g_1)-\\varphi(g_2) \\quad\\textup{as}\\enspace l\\to\\infty.\n\\end{eqnarray*}\nThe uniqueness according to equality (\\ref{eq:h_uni}) verifies that $\\bar h((\\tau_1,g_2 g_1 g_2^{-1}),g_2(\\tau_2 t\\bar z))=\\lim_{l\\to\\infty}h(\\tau_1\\tau_l^{g_1},\\tau_2\\tau_{k_l}^{g_2}t\\bar z)$, and therefore\n\\begin{eqnarray*}\n\\bar h((\\tau_1,g_2 g_1 g_2^{-1}),g_2(\\tau_2t\\bar z))+\\bar h((\\tau_2,g_2),t\\bar z)=\\hspace{4.5cm}\\\\\n=\\lim_{l\\to\\infty}h(\\tau_1\\tau_l^{g_1},\\tau_2\\tau_{k_l}^{g_2}t\\bar z)+\\lim_{l\\to\\infty}h(\\tau_2\\tau_{k_l}^{g_2},t\\bar z)=\\\\\n=\\lim_{l\\to\\infty}h(\\tau_1\\tau_l^{g_1}\\tau_2\\tau_{k_l}^{g_2},t\\bar z)=\\bar h((\\tau_1\\tau_2,g_2g_1),t\\bar z) .\n\\end{eqnarray*}\nWe substitute $g_2^{-1} g_1 g_2$ for $g_1$ and obtain from $\\bar{\\mathcal O}_T(\\bar z)=Z$ and the continuity of $\\bar h$ that cocycle identity is valid.\n\\end{proof}\n\n\\begin{proof}[Proof of Proposition \\ref{prop:res}]\nLet $(Y_c,T)=\\pi_c(X,T)$ be the flow defined by the connected components of the fibres of $\\pi_Y$ (cf. 
\\cite{MMWu}, Definition 2.2), and let $\\rho$ be the homomorphism from $(Y_c,T)$ onto $(Y,T)=\\rho(Y_c,T)$.\nWith a \\emph{RIM} $\\{\\mu_{c,y}:y\\in Y_c\\}$ for the distal extension $(Y_c,T)=\\pi_c(X,T)$ we define a cocycle $f_c(\\tau,y)=\\mu_{c,y}(f(\\tau,\\cdot))$ for every $(\\tau,y)\\in T\\times Y_c$.\nWe fix a point $\\bar x\\in X$ with $\\mathcal D_{T,f_\\alpha\\circ\\pi_\\alpha}(\\tau\\bar x,0)=\\bar{\\mathcal O}_{T,f_\\alpha\\circ\\pi_\\alpha}(\\tau\\bar x,0)$ for all $\\tau\\in T$.\nBy equality (\\ref{eq:p_X}) and $\\pi_c^{-1}(\\pi_c(\\tau\\bar x))\\subset\\pi_Y^{-1}(\\pi_Y(\\tau\\bar x))$ holds for all $\\tau\\in T$\n\\begin{equation*}\n\\bar{\\mathcal O}_{T,f_\\alpha\\circ\\pi_\\alpha}(\\bar x,0)\\cap(\\pi_c^{-1}(\\pi_c(\\tau\\bar x))\\times\\mathbb R)=\\pi_c^{-1}(\\pi_c(\\tau\\bar x))\\times\\{(f_\\alpha\\circ\\pi_\\alpha)(\\tau,\\bar x)\\} .\n\\end{equation*}\nLet $F(\\tau,x)$ be the $\\mathbb R^2$-valued cocycle $(f_\\alpha\\circ\\pi_\\alpha,f)$.\nWe shall verify that\n\\begin{eqnarray}\\label{eq:f_c}\n\\bar{\\mathcal O}_{T,F} (\\bar x,0,0)\\cap(\\pi_c^{-1}(\\pi_c(\\tau\\bar x))\\times\\{(f_\\alpha\\circ\\pi_\\alpha)(\\tau,\\bar x)\\}\\times\\mathbb R)=\\nonumber\\\\\n\\{(x,(f_\\alpha\\circ\\pi_\\alpha)(\\tau,\\bar x),b_\\tau(x)):x\\in\\pi_c^{-1}(\\pi_c(\\tau\\bar x))\\}\n\\end{eqnarray}\nfor every $\\tau\\in T$, in which $b_\\tau:\\pi_c^{-1}(\\pi_c(\\tau\\bar x))\\longrightarrow\\mathbb R$ is a continuous function.\nIndeed, for a sequence $\\{\\tau_k\\}_{k\\geq 1}\\subset T$ with $\\tau_k\\tau\\bar x\\to x\\in\\pi_c^{-1}(\\pi_c(\\tau\\bar x))$ and $(f_\\alpha\\circ\\pi_\\alpha)(\\tau_k,\\tau\\bar x)\\to 0$ follows by the relative triviality of $(f-f_\\alpha\\circ\\pi_\\alpha)(\\tau,x)$ the existence and uniqueness of the limit $b_\\tau(x)$ of $f(\\tau_k,\\tau\\bar x)$.\nIt also follows that for every $\\varepsilon>0$ there exists a $\\delta>0$ so that for all $\\tau\\in T$ and $x,x'\\in\\pi_c^{-1}(\\pi_c(\\tau\\bar x))$ with $d(x,x')<\\delta$ holds $|b_\\tau(x)-b_\\tau(x')|<\\varepsilon$.\nSince the fibres of $\\pi_c$ are connected, a covering of $X$ by $\\delta$-neighbourhoods provides a constant $D>0$ with $|b_\\tau(x)-b_\\tau(x')|0$ there exists a $\\delta>0$ so that for every $(\\tau',y')\\in T\\times Y$ with $d_Y(y',\\tau' y')<\\delta$ and $|(f_\\alpha\\circ\\rho_\\alpha)(\\tau',y')|<\\delta$ holds $|(f_Y-f_\\alpha\\circ\\rho_\\alpha)(\\tau',y')|<\\varepsilon$.\nFrom $\\tau^{-1} y_k\\to y$, $\\tau_k y_k\\to L_\\tau y$, and $(f_\\alpha\\circ\\rho_\\alpha)(\\tau_k\\tau,\\tau^{-1} y_k)\\to 0$ follows for every $(\\tau,y)\\in T\\times Y$ with $d_Y(y,L_\\tau(y))<\\delta$ that $f'((\\tau,0),y)<\\varepsilon$.\nFact \\ref{fact:GH} implies that the cocycle $(\\tau,y)\\mapsto f'((\\tau,0),y)$ of the distal flow $(Y,\\{L_\\tau:\\tau\\in T\\})$ is a coboundary on the $\\{L_\\tau:\\tau\\in T\\}$-orbit closure $X_\\alpha\\times\\{m\\}$ with transfer function $b_m:X_\\alpha\\longrightarrow\\mathbb R$ for every $m\\in M$.\nSince $\\delta>0$ is valid for all $\\{L_\\tau:\\tau\\in T\\}$-orbit closures, the transfer functions $\\{b_m:m\\in M\\}$ are uniformly equicontinuous.\nWe fix a point $\\bar x\\in X_\\alpha$ and obtain from the cocycle identity for all $(\\tau,t)\\in T\\times\\mathbb R$ and $(x_\\alpha,m)\\in Y$ that\n\\begin{eqnarray*}\nf_{\\bar x}((\\tau,t),(x_\\alpha,m))=f'((\\tau,t),(x_\\alpha,m))-f'((\\mathbf 1_T,t),(\\bar x,m))=\\\\\n=b_{\\phi^t(m)}(\\tau x_\\alpha)-b_{\\phi^t(m)}(\\bar x)-b_{m}(\\tau x_\\alpha)+b_m(\\bar x) .\n\\end{eqnarray*}\nThe function $f_{\\bar 
x}((\\tau,t),(x_\\alpha,m))$ is also a cocycle of $(Y,\\{\\psi^t\\circ L_\\tau:(\\tau,t)\\in T\\times\\mathbb R\\})$ and bounded on $T\\times\\mathbb R\\times Y$, hence a coboundary with a transfer function $\\bar b:Y\\longrightarrow\\mathbb R$ so that $\\bar b(\\rho_\\alpha^{-1}(\\bar x))=\\{0\\}$.\nNow equality (\\ref{eq:tf}) follows, and equality (\\ref{eq:tF}) follows from $f'((\\mathbf 1_T,t),(x_\\alpha,m))=\\bar f((\\mathbf 1_T,t),(x_\\alpha,m))$ for all $t\\in\\mathbb R$ and $(x_\\alpha,m)\\in Y$.\n\\end{proof}\n\nWith these prerequisites we can conclude the proof of our main result.\n\n\\begin{proof}[Proof of the structure theorem]\nWe let all elements of the theorem and the flow $\\{\\psi^t:t\\in\\mathbb R\\}\\subset\\textup{Aut}(Y,\\{L_\\tau:\\tau\\in T\\})\\cap\\textup{Aut}(Y,T)$ be defined according to the Propositions \\ref{prop:max}, \\ref{prop:flow}, \\ref{prop:res}, and \\ref{prop:inc}.\nWe fix a point $\\bar x\\in X_\\alpha$ so that $(\\bar x,0)$ is transitive for $\\widetilde\\tau_{f_\\alpha}$ and $\\bar b(\\rho_\\alpha^{-1}(\\bar x))=\\{0\\}$.\nThen we define a cocycle $g(t,m)$ of the distal minimal flow $(M,\\{\\phi^t:t\\in\\mathbb R\\})$ by\n\\begin{equation*}\ng(t,m)=\\bar f((\\mathbf 1_T,t),(\\bar x,m)) \\quad\\textup{for all}\\enspace (t,m)\\in\\mathbb R\\times M.\n\\end{equation*}\nFrom equalities (\\ref{eq:tf}) and (\\ref{eq:tF}) follows for all $\\tau\\in T$ and $(x_\\alpha,m)\\in Y$ that\n\\begin{eqnarray*}\n(f_Y-f_\\alpha\\circ\\rho_\\alpha)(\\tau,(x_\\alpha,m))=\\bar f((\\tau,0),(x_\\alpha,m))=\\hspace{3cm}\\\\\n=\\bar f((\\mathbf 1_T,f_\\alpha(\\tau,x_\\alpha)),L_\\tau (x_\\alpha,m))+\\bar b(L_\\tau (x_\\alpha,m))-\\bar b(x_\\alpha,m)=\\\\\n=g(f_\\alpha(\\tau,x_\\alpha),m)+\\bar b\\circ \\tau(x_\\alpha,m)-\\bar b(x_\\alpha,m) .\n\\end{eqnarray*}\nHence equality (\\ref{eq:f_Y2}) holds for the cocycle $f_Y(\\tau,y)-\\bar b(\\tau y)+\\bar b(y)$ cohomologous to $f_Y(\\tau,y)$, and this cocycle will be substituted for $f_Y(\\tau,y)$ henceforth.\nFor every sequence $\\{(\\tau_k,x_k)\\}_{k\\geq 1}\\subset T\\times X$ with $(f_\\alpha\\circ\\pi_\\alpha)(\\tau_k,x_k)\\to 0$ it holds also that $(f_Y\\circ\\pi_Y)(\\tau_k,x_k)\\to 0$ as $k\\to\\infty$, and thus identity (\\ref{eq:p_X}) implies identity (\\ref{eq:fpy}).\n\nFor every ordinal $\\xi$ with $\\alpha\\leq\\xi\\leq\\eta$ we can apply the Propositions \\ref{prop:flow}, \\ref{prop:res}, and \\ref{prop:inc} to the distal minimal flow $(X_\\xi,T)$ and the cocycle $f_\\xi(\\tau,x_\\xi)$.\nWe obtain a factor $(Y_\\xi,T)=\\pi_{Y_\\xi}(X_\\xi,T)$ with $(X_\\alpha,T)=\\rho_\\alpha^\\xi(Y_\\xi,T)$, an $\\mathbb R$-flow $\\{\\psi_\\xi^t:t\\in\\mathbb R\\}\\subset\\textup{Aut}(Y_\\xi,T)$, a cocycle $f_{Y_\\xi}(\\tau,y_\\xi)$ of $(X_\\xi,T)$, and a cocycle $\\bar f_\\xi((\\tau,t),y_\\xi)$ of the flow $(Y_\\xi,T\\times\\mathbb R)$ extending the cocycle $(f_{Y_\\xi}-f_\\alpha\\circ\\rho^\\xi_\\alpha)(\\tau,y_\\xi)$.\nStriving for a contradiction to the maximality of the ordinal $\\alpha$ (cf. 
Proposition \\ref{prop:max}), we assume that the cocycle $(\\mathbbm 1+g)(t,m)$ of the minimal flow $(M,\\{\\phi^t:t\\in\\mathbb R\\})$ is recurrent so that the cocycle $\\bar f_\\eta((\\mathbf 1_T,t),y_\\eta)+t$ of the minimal flow $((\\rho^\\eta_\\alpha)^{-1}(\\bar x),\\{\\psi_\\eta^t:t\\in\\mathbb R\\})$ is also recurrent.\nWe let $\\beta$ be the minimal element of the non-empty set of ordinals\n\\begin{equation*}\n\\{\\alpha\\leq\\xi\\leq\\eta: \\bar f_\\xi((\\mathbf 1_T,t),y_\\xi)+t\\enspace\\textup{is a recurrent cocycle of}\\enspace ((\\rho_\\alpha^\\xi)^{-1}(\\bar x),\\{\\psi_\\xi^t:t\\in\\mathbb R\\})\\} ,\n\\end{equation*}\nwith $\\beta>\\alpha$ since $f_{Y_\\alpha}=f_\\alpha$ and $\\bar f_\\alpha((\\mathbf 1_T,t),y_\\alpha)\\equiv 0$.\nWe fix a point $\\bar x_\\beta\\in(\\pi_\\alpha^\\beta)^{-1}(\\bar x)$ so that $(\\pi_{Y_\\beta}(\\bar x_\\beta),0)$ is a recurrent point for the skew product extension of the flow $((\\rho_\\alpha^\\beta)^{-1}(\\bar x),\\{\\psi_\\beta^t:t\\in\\mathbb R\\})$ by the cocycle $\\bar f_\\beta((\\mathbf 1_T,t),y_\\beta)+t$.\nThen there exists a sequence $\\{\\bar\\tau_k\\}_{k\\geq 1}\\subset T$ with $f_\\alpha(\\bar\\tau_k,\\pi_\\alpha^\\beta(\\bar x_\\beta))\\to\\infty$ so that\n\\begin{equation*}\n\\bar f_\\beta((\\mathbf 1_T,f_\\alpha(\\bar\\tau_k,\\pi_\\alpha^\\beta(\\bar x_\\beta))),\\pi_{Y_\\beta}(\\bar x_\\beta))+f_\\alpha(\\bar\\tau_k,\\pi_\\alpha^\\beta(\\bar x_\\beta))=f_{Y_\\beta}(\\bar\\tau_k,\\pi_{Y_\\beta}(\\bar x_\\beta))\\to 0\n\\end{equation*}\nfor $k\\to\\infty$, and by the cohomology of the cocycles $f_\\beta(\\tau,x_\\beta)$ and $(f_{Y_\\beta}\\circ\\pi_{Y_\\beta})(\\tau,x_\\beta)$ the sequence $f_\\beta(\\bar\\tau_k,\\bar x_\\beta)$ is bounded.\nHence there exists a sequence $\\{k_l\\}_{l\\geq 1}\\subset\\mathbb N$ so that $f_\\alpha(\\bar\\tau_{k_{l+1}},\\pi_\\alpha^\\beta(\\bar\\tau_{k_l}\\bar x_\\beta))\\to\\infty$, $d_\\beta(\\bar\\tau_{k_{l+1}} \\bar x_\\beta,\\bar\\tau_{k_l}\\bar x_\\beta)\\to 0$, and $f_\\beta(\\bar\\tau_{k_l},\\bar x_\\beta)$ is convergent.\nThen the sequence $\\{(\\tau_l,x_l)=(\\bar\\tau_{k_{l+1}}(\\bar\\tau_{k_l})^{-1},\\bar\\tau_{k_l}\\bar x_\\beta)\\}_{l\\geq 1}\\subset T\\times X_\\beta$ fulfils $f_\\alpha(\\tau_l,\\pi_\\alpha^\\beta(x_l))\\to\\infty$, $d_\\beta(x_l,\\tau_l x_l)\\to 0$, and $f_\\beta(\\tau_l,x_l)\\to 0$ for $l\\to\\infty$.\nHowever, for every $\\alpha\\leq\\xi<\\beta$ holds\n\\begin{equation*}\n\\bar f_\\xi((\\mathbf 1_T,f_\\alpha(\\tau_l,\\pi_\\alpha^\\beta(x_l))),\\pi_{Y_\\xi}\\circ\\pi_\\xi^\\beta(x_l))+f_\\alpha(\\tau_l,\\pi_\\alpha^\\beta(x_l))=f_{Y_\\xi}(\\tau_l,\\pi_{Y_\\xi}\\circ\\pi_\\xi^\\beta(x_l))\\to\\infty\n\\end{equation*}\nfor $l\\to\\infty$.\nOtherwise, since $|\\bar f_\\xi((\\mathbf 1_T,t),(x_\\alpha,m_\\xi))-\\bar f_\\xi((\\mathbf 1_T,t),(\\bar x,m_\\xi))|$ is uniformly bounded for all $t\\in\\mathbb R$, $x_\\alpha\\in X_\\alpha$, and $m_\\xi\\in M_\\xi$ (cf. identity (\\ref{eq:tF})), there exists a non-trivial prolongation in the skew product of the minimal flow $((\\rho_\\alpha^\\xi)^{-1}(\\bar x),\\{\\psi_\\xi^t:t\\in\\mathbb R\\})$ and its cocycle $\\bar f_\\xi((\\mathbf 1_T,t),y_\\xi)+t$, which sufficient for its recurrence (cf. 
Lemma \\ref{lem:tr_coc}).\nTherefore also $f_\\xi(\\tau_l,\\pi_\\xi^\\beta(x_l))\\to\\infty$ as $l\\to\\infty$, and depending on the type of the ordinal $\\beta$ it follows either from Lemma \\ref{lem:c_t} or Lemma \\ref{lem:lim} that $\\widetilde\\tau_{f_\\beta}$ is point transitive, in contradiction to the maximality of $\\alpha$.\n\\end{proof}\n\n\\begin{proof}[Proof of the structure of the topological Mackey action]\nIn the proof of the decomposition theorem it is verified that the topological Mackey actions for the cocycle $f(\\tau,x)$ of $(X,T)$ and the transient cocycle $(\\mathbbm 1+g)(t,m)$ of $(M,\\{\\phi^t:t\\in\\mathbb R\\})$ are topologically isomorphic.\nLet $\\{(M_\\xi,\\{\\phi_\\xi^t:t\\in\\mathbb R\\}):0\\leq\\xi\\leq\\theta\\}$ be the normal \\emph{I}-system for the distal minimal compact metric flow $(M,\\{\\phi^t:t\\in\\mathbb R\\})$ with the homomorphisms $\\sigma_\\xi:M\\longrightarrow M_\\xi$.\nFor every ordinal $0\\leq\\xi\\leq\\theta$ a cocycle $g_\\xi(t,m_\\xi)$ of $(M_\\xi,\\{\\phi_\\xi^t:t\\in\\mathbb R\\})$ is defined by a \\emph{RIM}.\nLet $\\beta$ be the minimal element of the set\n\\begin{equation*}\n\\{0\\leq\\xi\\leq\\theta:(g-g_\\xi\\circ\\sigma_\\xi)(t,m)\\enspace\\textup{is a coboundary of}\\enspace(M,\\{\\phi^t:t\\in\\mathbb R\\})\\} .\n\\end{equation*}\nThe cocycle $(\\mathbbm 1+g_\\beta)(t,m_\\beta)$ is transient, since the cocycle $(\\mathbbm 1+g)(t,m)$ cohomologous to $(\\mathbbm 1+g_\\beta\\circ\\sigma_\\beta)(t,m)$ is transient.\nBy Lemma \\ref{lem:tr_coc} the right translation action $\\{R_b:b\\in\\mathbb R\\}$ acts minimally on the Fell compact space $D$ of orbits in $M_\\beta\\times\\mathbb R$.\nThe mapping $\\chi:M_\\beta\\longrightarrow D$ defined by $m_\\beta\\mapsto\\mathcal O_{\\phi_\\beta,(\\mathbbm 1+g_\\beta)}(m_\\beta,0)$ is Fell continuous, and for every $t\\in\\mathbb R$ it holds that $\\chi\\circ\\phi_\\beta^t(m_\\beta)=R_{(\\mathbbm 1+g_\\beta)(t,m_\\beta)}\\circ\\chi(m_\\beta)$.\nFor $\\beta=0$ the flow $(D,\\{R_b:b\\in\\mathbb R\\})$ is trivial and thus weakly mixing.\nIf $\\beta\\geq 1$, then $(D,\\{R_b:b\\in\\mathbb R\\})$ is a non-trivial minimal compact metric flow.\nIf it is not weakly mixing, then there exists a non-trivial equicontinuous factor $(D_1,\\{\\varphi^t:t\\in\\mathbb R\\})=\\nu(D,\\{R_b:b\\in\\mathbb R\\})$ with homomorphism $\\nu$ (cf. 
\\cite{KeRo}).\nWe shall use a generalised and relativised version of Theorem 1 in \\cite{Eg} to obtain a contradiction to the minimality of $\\beta$.\nSince $(D_1,\\{\\varphi^t:t\\in\\mathbb R\\})$ is a minimal and non-trivial flow, for each small enough $\\varepsilon>0$ holds $\\varphi^\\varepsilon(d_1)\\neq d_1$ for all $d_1\\in D_1$.\nWe shall verify as a sub-lemma that there are no sequences $\\{t_k\\}_{k\\geq 1}\\subset\\mathbb R$, $\\{m_k\\}_{k\\geq 1},\\{m'_k\\}_{k\\geq 1}\\subset M_\\beta$ so that $m_k\\to\\bar m$, $m'_k\\to\\bar m$, $\\phi_\\beta^{t_k}(m_k)\\to\\bar m'$, $\\phi_\\beta^{t_k}(m'_k)\\to\\bar m'$, and $g_\\beta(t_k,m'_k)-g_\\beta(t_k,m_k)\\to\\varepsilon$.\nIndeed,\n\\begin{gather*}\n\\nu\\circ\\chi\\circ\\phi_\\beta^{t_k}(m'_k)=\\varphi^{(\\mathbbm 1+g_\\beta)(t_k,m'_k)}\\circ\\nu\\circ\\chi(m'_k)\\to\\varphi^{\\varepsilon}(\\lim \\varphi^{(\\mathbbm 1+g_\\beta)(t_k,m_k)}\\circ\\nu\\circ\\chi(m'_k))\n\\end{gather*}\nand $\\lim\\varphi^{(\\mathbbm 1+g_\\beta)(t_k,m_k)}\\circ\\nu\\circ\\chi(m'_k)=\\lim\\varphi^{(\\mathbbm 1+g_\\beta)(t_k,m_k)}\\circ\\nu\\circ\\chi(m_k)=\\nu\\circ\\chi(\\bar m')$, by the equicontinuity of $(D_1,\\{\\varphi^t:t\\in\\mathbb R\\})$, imply that $\\nu\\circ\\chi(\\bar m')=\\varphi^{\\varepsilon}\\circ\\nu\\circ\\chi(\\bar m')$, which contradicts to the choice of $\\varepsilon$.\n\nIf $\\beta=\\gamma+1$ for some ordinal $0\\leq\\gamma<\\theta$, then the sub-lemma implies the uniform equicontinuity of the mapping $m_\\beta\\mapsto g_\\beta(t,m_\\beta)$ restricted on the $\\sigma_\\gamma^\\beta$-fibres and for all $t\\in\\mathbb R$.\nIndeed, otherwise we can find sequences as above with $\\sigma_\\gamma^\\beta(m_k)=\\sigma_\\gamma^\\beta(m'_k)$ for all $k\\geq 1$, and the condition on sufficiently small $\\varepsilon$ can be fulfilled by the connectedness of the $\\sigma_\\gamma^\\beta$-fibres.\nThus for all $t\\in\\mathbb R$ the cocycle $(g_\\beta-g_\\gamma\\circ\\sigma_\\gamma^\\beta)(t,m_\\beta)$ is uniformly equicontinuous and assumes zero on every connected $\\sigma_\\gamma^\\beta$-fibre.\nBy Fact \\ref{fact:GH} this is then a coboundary, in contradiction to the minimality of $\\beta$.\n\nIf $\\beta$ is a limit ordinal, then the sub-lemma applies to sequences so that for every ordinal $0\\leq\\xi<\\beta$ there exists an integer $k_\\xi\\geq 1$ with $\\sigma_\\xi^\\beta(m_k)=\\sigma_\\xi^\\beta(m'_k)$ for all $k\\geq k_\\xi$.\nHence there exists an ordinal $0\\leq\\zeta<\\beta$ so that $|g_\\beta(t,m_\\beta)-g_\\beta(t,m'_\\beta)|<\\varepsilon$ for all $m_\\beta,m'_\\beta\\in M_\\beta$ with $\\sigma_\\zeta^\\beta(m_\\beta)=\\sigma_\\zeta^\\beta(m'_\\beta)$ and for all $t\\in\\mathbb R$.\nIt follows that $(g_\\beta-g_\\zeta\\circ\\sigma_\\zeta^\\beta)(t,m_\\beta)$ is a coboundary, in contradiction to the minimality of $\\beta$.\n\nThe topological Mackey action of the transient cocycle $(\\mathbbm 1+g)(t,m)$ is topologically isomorphic to the topological Mackey action of the cohomologous cocycle $(\\mathbbm 1+g_\\beta\\circ\\sigma_\\beta)(t,m)$ (cf. 
the proof of the decomposition theorem).\nThe weakly mixing flow $(D,\\{R_b:b\\in\\mathbb R\\})$ is a factor of the topological Mackey action of the cocycle $(\\mathbbm 1+g_\\beta\\circ\\sigma_\\beta)(t,m)$, since for every $(m,s)\\in M\\times\\mathbb R$ the mapping $\\sigma_\\beta\\times\\textup{id}_\\mathbb R$ maps the orbit $\\mathcal O_{\\phi,(\\mathbbm 1+g_\\beta\\circ\\sigma_\\beta)}(m,s)$ in $M\\times\\mathbb R$ to the orbit $\\mathcal O_{\\phi_\\beta,(\\mathbbm 1+g_\\beta)}(\\sigma_\\beta(m),s)$ in $M_\\beta\\times\\mathbb R$ continuously with respect to the Fell topologies.\nSuppose that there exist two distinct orbits $\\mathcal O, \\mathcal O'$ in $M\\times\\mathbb R$ within the same $\\sigma_\\beta\\times\\textup{id}_\\mathbb R$-fibre and $\\{t_k\\}_{k\\geq 1}\\subset\\mathbb R$ is a sequence with $R_{t_k}\\mathcal O\\to \\mathcal O''$ and $R_{t_k}\\mathcal O'\\to \\mathcal O''$.\nSince the mapping $t\\mapsto (\\mathbbm 1+g_\\beta)(t,m_\\beta)$ is onto $\\mathbb R$ for every $m_\\beta\\in M_\\beta$ (cf. Lemma \\ref{lem:tr_coc}), there exist a point $\\bar m\\in M_\\beta$ and distinct $m,m'\\in\\sigma_\\beta^{-1}(\\bar m)$ so that $(m,0)\\in\\mathcal O$ and $(m',0)\\in\\mathcal O'$.\nMoreover, for every integer $k\\geq 1$ we can select a real number $t'_k$ so that $(\\mathbbm 1+g_\\beta)(t'_k,\\bar m)=t_k$, and therefore $(\\phi^{t'_k}(m),0)\\in R_{t_k}\\mathcal O$ as well as $(\\phi^{t'_k}(m'),0)\\in R_{t_k}\\mathcal O'$.\nBy changing to a subsequence we can suppose that $\\phi^{t'_k}(m)\\to m_1$ and $\\phi^{t'_k}(m')\\to m_2$ as $k\\to\\infty$ with $m_1\\neq m_2$, by the distality of the flow $(M,\\{\\phi^t:t\\in\\mathbb R\\})$.\nHowever, since $(m_1,0),(m_2,0)\\in\\mathcal O''$, the point $(\\bar m,0)$ is a periodic point in $(D,\\{R_b:b\\in\\mathbb R\\})$ in contradiction to the transience of the cocycle $(\\mathbbm 1+g_\\beta)(t,m_\\beta)$.\nWe can conclude that $\\sigma_\\beta\\times\\textup{id}_\\mathbb R$ is a distal homomorphism of the topological Mackey action of the cocycle $(\\mathbbm 1+g_\\beta\\circ\\sigma_\\beta)(t,m)$ onto the weakly mixing flow $(D,\\{R_b:b\\in\\mathbb R\\})$.\n\\end{proof}\n\n\\textbf{Acknowledgement}: The author would like to thank Professor Jon Aaronson and Professor Eli Glasner for useful discussions and encouragement.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{intro}\n\nGender classification plays an important role in modern society for surveillance or smart adaptation systems. It would be advantageous if a computer system or a machine could correctly classify an individual's gender. For example, a surveillance camera system monitoring mall shoppers could benefit from knowing the gender of the customers to create a proper marketing strategy, or a salesman robot \\cite{ref0} could use an appropriate and smart approach to communicate with customers based on their gender.\n\n\nHuman gender classification, an active and promising area of research, can be performed using either a recorded voice \\cite{ref1}\\cite{ref2}\\cite{ref3} or face image \\cite{ref4}\\cite{ref5}\\cite{ref6}\\cite{ref7}\\cite{ref8}\\cite{ref9}\\cite{ref10}\\cite{ref10a}. SexNet \\cite{ref4} is an early system for gender classification using face images. The system uses the back-propagation algorithm of a neural network to train the gender classifier and obtains an error rate of 8.1\\%. This encouraging result demonstrates that automatic gender classification by computers is feasible. 
However, the use of voice and face features for gender classification has limitations when the objects are distant from the sensor because it is difficult to obtain a high-quality recorded voice or face image from a distance.\n\nMany psychological and medical experiments \\cite{ref11}\\cite{ref12} have indicated that humans and their gender can be recognized using their gait features. Therefore, gait features appear as alternative cues for resolving the recognition problem that occurs at long distances. Compared with other biometric features, gait information has particular advantages:\n\\begin{enumerate}\n\\item Easily obtainable from public areas and from a distance: Even when the subject is distant from the camera, we remain able to capture their gait information with an acceptable level of quality for specific tasks such as gait recognition and gender classification.\n\\item Uses simple instruments: Capturing the gait features requires only a simple conventional camera that can be placed anywhere in public areas such as banks, parking lots, and airports.\n\\item Does not require collaboration with the subjects: Gait features can be captured easily, even without the subject's permission. Although this is an advantage, it raises the issue of the right to privacy.\n\\item It is difficult to forge or falsify gait features: Gait features indicate the walking manner of a human, characterizing their physical capability. Mimicking the gait of other people is difficult.\n\\end{enumerate}\n\nHowever, gait-based systems such as gender classification and gait recognition share the same challenges as indicated in Fig.~\\ref{fig:1}. These challenges arise from the environment including the viewpoint of camera changes or the subject's physical characteristics such as carrying a backpack, wearing a heavy coat, or displaying signs of an injury. These factors change the subject's appearance leading to a significant effect on their gait information as they move \\cite{ref13}\\cite{ref14}.\n\n\n\\begin{figure}[!b]\n \\centering\n \\includegraphics[scale=0.35]{Fig1}\n\\caption{Challenges in gait analysis of humans: change in viewpoint (top row), carrying an item (middle row), wearing a coat (bottom row)}\n\\label{fig:1} \n\\end{figure}\n\n\nThis paper proposes a novel gender classification method for use with an arbitrary viewpoint. To improve the performance, we present a method to remove areas with an attachment, such as a heavy coat or backpack. A general flowchart of the proposed method is presented in Fig. ~\\ref{fig:2}, which includes two major phases: training and testing. During the training phase, after the preprocessing step, the distance signal (DS) model, viewpoint (VP) model, and view-dependent gender classifier are built. Before building the VP model, the average gait image (AGI) and the lower portion of the average gait image (LAGI) are generated. During the testing phase, after the human detection and preprocessing step, the viewpoint of the current object is estimated using the current silhouette image and the VP model from the training phase. The attachment-area removal module is then used to eliminate unwanted areas such as backpacks or bags to obtain an attachment-free silhouette. Based on the estimated viewpoint of the current object, the corresponding classifier of that viewpoint (built during the training phase) is applied to the attachment-free silhouette to classify the object gender. 
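To make the interplay of these modules concrete, a compact sketch of the testing phase is given below. It anticipates the AGI, LAGI, and nearest-template definitions of Section~\\ref{sec:3}, assumes that the view-dependent classifiers are trained on flattened AGIs, and uses illustrative names throughout; it is a simplified sketch rather than the implementation evaluated in this paper.
\\begin{verbatim}
import numpy as np

def classify_gender(silhouettes, lagi_templates, svm_by_view):
    # silhouettes: normalized binary silhouettes (h x w) of one walking sequence
    # lagi_templates: {viewpoint: LAGI template}, built in the training phase
    # svm_by_view: {viewpoint: trained SVM}, built in the training phase
    agi = np.mean(silhouettes, axis=0)               # average gait image
    h = agi.shape[0]
    lagi = agi[int(0.715 * h):, :]                   # lower part of the AGI
    view = min(lagi_templates,                       # nearest viewpoint template
               key=lambda v: np.linalg.norm(lagi - lagi_templates[v]))
    # the attachment-area removal of the DS model would be applied here
    return svm_by_view[view].predict([agi.reshape(-1)])[0]
\\end{verbatim}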
The contributions of this paper are as follows:\n\\begin{itemize}\n\\item Building a VP model for viewpoint estimation, allowing the proposed method to estimate the viewing direction automatically.\n\\item Building a DS model for attachment-area removal to eliminate the noise generated from carried objects, which significantly degrades the performance of the system.\n\\item Building a viewpoint-dependent gender classifier using an SVM \\cite{ref15} that allows the algorithm to function from any viewpoint.\n\\end{itemize}\n\n\\begin{figure}[!b]\n\\centering\n\\includegraphics[scale=0.25]{Fig2}\n\\caption{Flowchart of proposed gender-classification process}\n\\label{fig:2}\n\\end{figure}\n\nThis paper is organized as follows. Section 2 discusses related works. Section 3 introduces the proposed method including the training and testing phases. The details of the key modules, such as VP modeling and estimation, DS modeling, viewpoint-dependent classifier building, and attachment-area removal are discussed in this section. A pseudo-code of the overall flowchart is also presented in this section. Section 4 presents the experimental results on public datasets. Finally, Section 5 concludes this paper and provides some areas for future work.\n\n\\section{Related works}\nGait-based recognition techniques can be divided into two categories, marker-based and markerless methods. Using markers, in an early work, Kozlowski and Cutting \\cite{ref16}\\cite{ref17} attempted to attach a point-light display (marker-based method) to a human body to extract the gait information. With this system, a human observer can determine a subject's gender based on the signals obtained with an acceptable level of accuracy (63\\%). However, to capture the gait information, the subject is required to wear a swimsuit and special devices, which is inconvenient, unfriendly, and impractical in real circumstances.\n\nToday, owing to technical innovations in camera and sensor development, the human gait can be easily obtained without a point-light display, leading to the development of markerless methods. The markerless-based methods for gait recognition can be classified into the model and appearance-based approaches. Such categorization can also be used for gender classification.\n\nIn the model-based methods \\cite{ref18}\\cite{ref19}\\cite{ref20}, the human body is divided into various parts, the structures of which are then fitted using primitive shapes such as ellipses, rectangles, and cylinders. Then, the gait feature is encoded using the parameters of the primitive shapes to measure the time-varying motion of the subject. In \\cite{ref18}, L. Lee et al. divide a human silhouette into seven different parts corresponding to the head and shoulder region, the front of the torso, back of the torso, front thigh, back thigh, front calf and foot, and back calf and foot. They then use ellipses to fit the model and capture the parameters of the ellipses such as the mean, standard deviation, orientation, and magnitude of the major components as feature vectors for classification. Although such methods are robust to noise and occlusions, they typically require a relatively high computational cost.\n\nAppearance-based methods \\cite{ref21}\\cite{ref22}\\cite{ref23}\\cite{ref24}\\cite{ref25}\\cite{ref26} analyze the spatio-temporal shape and dynamic motion characteristics of the silhouette in a gait sequence without using a human body model. 
A gait energy image (GEI) \\cite{ref21} is frequently used to encode the gait features because it includes both static (body shape) and dynamic information (arm swings and leg movements). The GEI feature is defined as the average frame of the subject in the gait cycle. Compared to the model-based methods, the appearance-based methods are considerably faster. In \\cite{ref23}, instead of modeling the silhouette, Shiqi Yu et al. calculate the GEI and use it to create the seven-part model defined in \\cite{ref17}. Because the contribution of each part to the gender classification varies, the authors assign different weights to the parts based on their experiments. Such methods obtain highly accurate classification rates (approximately 95\\%). However, they were developed to function only on a side view, making them inappropriate to apply in real applications.\n\nIn \\cite{ref13}\\cite{ref14}\\cite{ref27}\\cite{ref28}\\cite{ref29}\\cite{ref30}\\cite{ref31}\\cite{ref32}\\cite{ref47}, the researchers attempted to resolve gender classification from multiple viewpoints. In \\cite{ref31}\\cite{ref32}, De Zhang et al. build an invariant classifier by combining the GEIs of different viewpoints into a single third-order tensor. They then use multilinear principal component analysis (PCA) to reduce the dimensions and apply a support vector machine (SVM) to create a discriminative gender classifier. From another perspective, Kale et al. \\cite{ref33} use complicated equations from structure from motion \\cite{ref34} to eliminate the viewpoint effect by synthesizing the side view from other viewpoints. A final recognition task is conducted on the synthesized data. Issac et al. \\cite{ref47} propose a method to delineate the gait instance as a sequence of poses or frames based on the fact that humans tend to assume certain poses at each part of a gait cycle. The gender of each frame is predicted, and the gender decision of a sequence is then made using majority voting. However, none of the previous works considers solving the problem of a subject carrying an item or wearing a heavy coat, which are common situations in real applications that can significantly degrade the classification rate.\n\n\\section{Proposed method}\n\\label{sec:3}\n\n\\subsection{Dataset and preprocessing step}\n\\label{sec:31}\nThis paper proposes a method for gender classification from an arbitrary viewpoint, and therefore, a dataset with multiple camera views is required. For this purpose, the CASIA gait Dataset B \\cite{ref13}\\cite{ref14} was utilized throughout this study for illustrative and experimental purposes.\n\nA person is first detected using a histogram of oriented gradients (HOG) \\cite{ref35}. During the preprocessing step, a classic background subtraction \\cite{ref36} is then applied to obtain a person's silhouette. Because a person's size changes from frame to frame, it is necessary to normalize the human bounding box before the training process. Assume that in frame $I_t$ at time $t$ a human is detected with a bounding box $B_t$; denote their silhouette obtained from the background subtraction as $S_t$. 
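As an illustration of this step, the following minimal sketch uses OpenCV's default HOG people detector and MOG2 background subtractor merely as stand-ins for the detectors cited above \\cite{ref35}\\cite{ref36}; keeping only the first detection per frame and the binarization threshold are illustrative simplifications, not choices made in this paper.
\\begin{verbatim}
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
backsub = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

def detect_silhouette(frame):
    fg = backsub.apply(frame)                # foreground mask of the frame
    boxes, _ = hog.detectMultiScale(frame)   # candidate bounding boxes B_t
    if len(boxes) == 0:
        return None, None
    x, y, w, h = boxes[0]                    # first detected person
    sil = (fg[y:y + h, x:x + w] > 0).astype('uint8')   # binary silhouette S_t
    return (x, y, w, h), sil
\\end{verbatim}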
To register the silhouette image, the center of the silhouette $P_t$ (reference point) at time t is computed as:\n\\begin{equation}\n\\label{eq1}\nP_t(x_0, y_0) = \\left(\\frac{M_{10}}{M_{00}}, \\frac{M_{01}}{M_{00}}\\right)\n\\end{equation}\nwhere $M_{ij}$ are the raw moments of the binary silhouette image defined by $M_{ij}=\\sum_{x}\\sum_{y}x^iy^jS_{t}(x,y)$.\n\nThe silhouette image $S$ is resized to the fixed height $h$ to maintain the human ratio scale. The resized image is then zero padded or cropped on both sides (left and right sides) to ensure that the silhouette image has the predefined width $w$.\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=1\\textwidth]{Fig3}\n\\caption{Example AGIs: models from different viewpoints}\n\\label{fig:3}\n\\end{figure*}\n\n\n\\subsection{Viewpoint modeling and estimation}\n\\label{sec:32}\n\\subsubsection{Viewpoint modeling}\n\\label{sec: 321}\nA person can move in an arbitrary direction under real circumstances. Therefore, in this paper, rather than using GEI \\cite{ref21} as a representation of a gait feature for classification, AGI is defined, as shown in Fig.~\\ref{fig:3}. The main difference between GEI and AGI is that the gait cycle information, which must be calculated into a GEI, is not required in an AGI. Furthermore, applying the gait cycle during the feature extraction step makes the entire algorithm inflexible because the gait cycle can be calculated accurately only when the person is captured from a side view, which is impractical under real circumstances. With the proposed method, all models are trained based on each viewpoint independently. Subsequently, we describe the details required for training for viewpoint $\\alpha$ using the data on that viewpoint. The AGI is defined as:\n\\begin{equation}\n\\label{eq2}\nAGI^{\\alpha}(x, y)=\\frac{1}{T}\\sum_{t=1}^{T}S_t^\\alpha(x,y)\n\\end{equation}\nwhere $T$, gait period, is defined adaptively using the video frame rate $f$ and approximate gait cycle time $\\mu$ as $T = \\mu*f$. According to \\cite{ref37}\\cite{ref38}, when the frame rate $f$ is 25 frames\/s, the value of the gait cycle time $\\mu$ must be 0.6 seconds to capture the most informative gait features; thus, $T = 0.6*f$ is used in the proposed method; $S_t^\\alpha$ is the silhouette image at time $t$ with the viewpoint $\\alpha$.\n\nDefining $\\gamma_k^\\alpha$ as $\\gamma_k^\\alpha=\\left\\{{AGI}_1^\\alpha, {AGI}_2^\\alpha, ..., {AGI}_n^\\alpha\\right\\}_k$, is the feature vector of subject $k$ in a viewpoint $\\alpha$, $\\alpha=\\overline{1, \\nu}$ and $k=\\overline{1,N}$, where $\\nu$ and $N$ are number of viewpoints and number of subjects (training samples), respectively. In fact, for the training step, a greater number of training samples $N$ is preferable. The corresponding label of $\\gamma_k^\\alpha$, denoted as set $L_k^\\alpha=\\{y_1^\\alpha, y_2^\\alpha, ..., y_n^\\alpha\\}_k, k=\\overline{1, N}; \\alpha=\\overline{1,\\nu}; y_i^\\alpha\\in\\{-1,1\\}$, is used to indicate the gender (\"-1\" for female and \"1\" for male).\n\nTo estimate the viewpoint for the input during the testing phase, we construct a viewpoint model $D$. This viewpoint model includes the viewpoint templates of an individual view. The viewpoint template is calculated as the average silhouette of all sequences from the $\\alpha$-th viewpoint. 
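
For illustration, the reference-point computation of Eq.~\\ref{eq1}, the silhouette normalization and the averaging of Eq.~\\ref{eq2} could be sketched as follows (a minimal NumPy sketch under our own assumptions: nearest-neighbour rescaling, a 144$\\times$144 output frame as used in Section 4.1, and illustrative function names, not the authors' actual implementation):

\\begin{verbatim}
import numpy as np

def reference_point(S):
    # Eq. (1): centroid of the binary silhouette S (values 0/1)
    ys, xs = np.nonzero(S)            # pixels belonging to the silhouette
    return xs.mean(), ys.mean()       # (M10/M00, M01/M00) for a binary mask

def normalize(S, h=144, w=144):
    # Crop to the bounding box, rescale to height h (nearest neighbour),
    # then zero-pad or crop symmetrically to the predefined width w.
    ys, xs = np.nonzero(S)
    box = S[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    scale = h / box.shape[0]
    rows = (np.arange(h) / scale).astype(int)
    cols = (np.arange(int(round(box.shape[1] * scale))) / scale).astype(int)
    resized = box[rows][:, cols]
    if resized.shape[1] > w:                      # crop centrally
        excess = (resized.shape[1] - w) // 2
        resized = resized[:, excess:excess + w]
    out = np.zeros((h, w), dtype=S.dtype)
    left = (w - resized.shape[1]) // 2            # zero padding on both sides
    out[:, left:left + resized.shape[1]] = resized
    return out

def average_gait_image(silhouettes):
    # Eq. (2): AGI as the pixel-wise mean of T normalized silhouettes,
    # with T = mu * f (e.g. 0.6 s x 25 fps = 15 frames).
    return np.mean([normalize(S) for S in silhouettes], axis=0)
\\end{verbatim}

The same averaging, restricted to the lower part of the silhouettes, yields the viewpoint templates discussed next.
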
As observed, the viewpoint is clearly distinguished in the lower part of the silhouette and therefore, the $\\alpha$-th viewpoint template, denoted as ${LAGI}^\\alpha$, is extracted as the lower part of the average silhouette denoted as $LPS^\\alpha$ with a height of 0.715$h$ to $h$, as suggested by \\cite{ref39}, where $h$ is the height of the silhouette:\n\n\\begin{equation}\n\\label{eq3}\n{LPS}^\\alpha(x,y)=S^\\alpha(x,y), x=\\overline{0.715h, h}, y=\\overline{1, w}\n\\end{equation}\n\\begin{equation}\n\\label{eq4}\n{LAGI}^\\alpha(x,y) = \\frac{1}{N}\\sum_{t=1}^N{LPS_t^\\alpha(x,y)}\n\\end{equation}\n\nThe viewpoint model is then denoted as $D_{Low}=D=\\{{LAGI}^0,{LAGI}^1,\\ldots,{LAGI}^\\nu\\}$, where $\\nu$ is the number of viewpoints. This viewpoint model $D_{Low}$ is used to estimate the viewpoint of a person walking during the testing phase.\n\n\\subsubsection{Viewpoint estimation}\nWith this method, the attachment-area removal module and gender classifier are dependent on the viewpoint; thus, the viewpoint is first estimated. During the testing phase, given the sequences of the silhouettes, the average gait image of the current walking subject, ${AGI}^c$, is calculated using Eq.~\\ref{eq2}. Then, ${LAGI}^c$ is extracted from the lower part of ${AGI}^c$ based on the size given in Eq.~\\ref{eq3}, rather than recalculating $LAGI$, as during the training phase.\n\nTo obtain viewpoint $\\alpha$ of the current walking subject, ${LAGI}^c$ is matched with each viewpoint template in viewpoint model $D_{Low}$ using the Euclidean distance. The template with the smallest distance determines the estimated viewpoint, as indicated in Fig.~\\ref{fig:4}:\n\\begin{equation}\n\\label{eq5}\n\\alpha=\\arg\\min_j||{LAGI}^c-{LAGI}^j||_2, \\quad j=\\overline{0,\\nu}\n\\end{equation}\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=1\\textwidth]{Fig4}\n\\caption{Viewpoint model of 11 viewpoint templates where ${LAGI}^c$ is matched with the first template}\n\\label{fig:4}\n\\end{figure*}\n\n\n\n\\subsection{Distance signal modeling and attachment removal}\n\\subsubsection{Distance modeling}\nIn real applications, it is common to see people moving with attached objects such as bags or backpacks; similarly, their appearance can be significantly changed when wearing a heavy coat. The added area resulting from a held item or worn coat contributes nothing to the result of the gender classification. In actuality, these factors negatively influence the results of the classification. In this section, a distance signal (DS) model of humans under normal walking conditions (not holding anything and wearing thin clothes) from different viewpoints is proposed for removing these redundant attachments.\n\nGiven a set of silhouettes in movement direction $\\alpha$, for each silhouette, a distance signal is built. Considering the mass reference point calculated by Eq.~\\ref{eq1}, each point $P_i$ on the silhouette boundary is represented in polar coordinates by two parameters, $d_i$ and $\\theta_i$, which indicate the distance from the point $P_i$ to the reference point $P$, and the angle formed by the line connecting the point to the reference point ${PP}_i$ with the horizon ${PP}_h$, respectively.
\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=1\\textwidth]{Fig5}\n\\caption{Example of distance signal (a) and distance signal model for different viewpoints (b)}\n\\label{fig:5}\n\\end{figure*}\n\n\n\\begin{equation}\n\\label{eq6}\nd_i=||P-P_i||_2\n\\end{equation}\n\n\\begin{equation}\n\\label{eq7}\n\\theta_i=\\arccos\\left(\\frac{\\overrightarrow{PP_i}*\\overrightarrow{PP_h}}{|\\overrightarrow{PP_i}||\\overrightarrow{PP_h}|}\\right)\n\\end{equation}\nwhere * is the dot product between two vectors and the value of angle $\\theta$ varies from $0^o$ to $360^o$ computed counterclockwise; $P_i$ and $P_h$ are depicted in Fig. \\ref{fig:5}a (left). The DS signal is then constructed by continuously concatenating these parameters from $P_h$ counterclockwise to define the signal presented in Fig. \\ref{fig:5}a (right). After building these DS signals for viewpoint $\\alpha$, denoted by ${DS}^\\alpha=\\{{DS}_1^\\alpha,\u2026,{DS}_n^\\alpha\\}$, the DS model for viewpoint $\\alpha$ is constructed using two curves, $MaDS^\\alpha$ and $MiDS^\\alpha$, which are defined as:\n\n\\begin{equation}\n\\label{eq8}\nMaDS^\\alpha=\\{d_i^{max}, \\theta_i\\}, \\text{where } d_i^{max} = \\max\\limits_{k=\\overline{1,N}}\\{d_k|\\theta_k\\}\n\\end{equation}\n\n\\begin{equation}\n\\label{eq9}\nMiDS^\\alpha=\\{d_i^{min}, \\theta_i\\}, \\text{where } d_i^{min}=\\min\\limits_{k=\\overline{1,N}}\\{d_k|\\theta_k\\}\n\\end{equation}\n\nThe distance signals are smoothed using the moving average technique with the number $n_{avg}=3$ before calculating the DS model. Fig. \\ref{fig:5}b illustrates the DS model for 11 viewpoints in our experiments.\n\n\n\\subsubsection{Attachment-area removal}\nDuring the testing phase, the DS signal of the silhouette of the current subject $DS^c$ is calculated in the manner described in the section 3.3.1. Given the viewpoint estimated using the viewpoint estimation module, the current $DS^c$ is projected to the corresponding viewpoint DS model. The current $DS^c$ is modified using the following rule to eliminate any attachments, if they exist:\n\n\\begin{equation}\n\\label{eq10}\nDS^c=\\{d_i^c, \\theta_i\\} \\text{where } d_i^c= \\begin{cases} d_i^c & \\text{if } d_i^c \\leq d_i^{max} \\\\\nd_i^{min} & \\text{ otherwise}\n\\end{cases} \n\\end{equation}\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=1\\textwidth]{Fig6}\n\\caption{Example of original silhouette, DS of the silhouette, corrected DS of the silhouette, and silhouette reconstruction of the current subject from a side view}\n\\label{fig:6}\n\\end{figure*}\n\n\nFig. \\ref{fig:6} illustrates this process. In the figure, an example of a side-view silhouette input with an attachment is provided because in this view the attachment can be seen most clearly. Our goal is to remove the attachment from the human silhouette, therefore, the human silhouette is divided into three parts, as shown in Fig. \\ref{fig:6}a. The first part consists of head and shoulder with a height of 0 to 0.17$h$, as suggested in \\cite{ref45}\\cite{ref46}, where $h$ is the height of human silhouette. The second part includes the human torso and thigh with a height of 0.17$h$ to 0.715$h$. Finally, the last part is human calf with a height of 0.715$h$ to $h$, as suggested in \\cite{ref39}. The red vertical lines (Fig. 6b, c, d, e ) are drawn to separate these three parts from the human silhouette. Firstly, the input silhouette is converted into a distance signal $DS^c$, presented as violet curve in Fig. \\ref{fig:6}b. 
The white curves are the maximum distance signal $MaDS^\\alpha$ (upper) and minimum distance signal $MiDS^\\alpha$ (lower) from the DS model for a specific viewpoint ($\\alpha=90^o$).\n\nWhen a human is carrying a backpack or bag, the appearance of the torso\/thigh part is changed due to the attachment; thus, only this part is taken into account for correction. As we observed from the experiments, a $DS^c$ with a value less than $MiDS^\\alpha$ is regarded as noise from an imperfect background subtraction. A $DS^c$ with a value greater than $MaDS^\\alpha$ is considered to be the attachment area from a subject carrying an item while walking. Using the DS model from Section 3.3.1, the attachment area can therefore be removed. The corrected version of $DS^c$ is obtained by replacing the violating part of the signal with the corresponding values of the $MiDS^\\alpha$ curve at point A, as shown in Fig. \\ref{fig:6}c. To avoid an abrupt change in the resulting signal, we continue to search from point A to point B in Fig. \\ref{fig:6}c to find the point that has the smallest vertical distance between $MiDS^\\alpha$ and the resulting signal (point C). The segment of the resulting signal from A to C is again replaced by $MiDS^\\alpha$, as presented in Fig. \\ref{fig:6}d. The signal in Fig. \\ref{fig:6}e is obtained by smoothing using the average filter $\\frac{1}{5}[1 1 1 1 1]$. The same process is applied to all segments of the torso\/thigh part of the $DS^c$ curve. Finally, the corrected version of the $DS^c$ is used to reconstruct the attachment-free silhouette, as shown in Fig. \\ref{fig:6}f. The updated version of the silhouette is then used to recalculate the AGI for gender classification.\n\n\n\\subsection{Gender classifier building}\nAn SVM \\cite{ref15} is an effective tool for binary classification problems because it minimizes the classification error while maximizing the margin between the two classes. Because gender classification is a binary classification task, a standard SVM with a linear kernel was selected to train the view-dependent classifiers. For solving the constrained quadratic optimization problem, we set the maximum number of iterations to 100.\n\nTo create the viewpoint-dependent classifier, the feature sets $\\gamma^\\alpha=\\{\\gamma_1^\\alpha,\\gamma_2^\\alpha,\\ldots,\\gamma_N^\\alpha\\}$ and their corresponding labels $L^\\alpha=\\{L_1^\\alpha,L_2^\\alpha,\\ldots,L_N^\\alpha\\}$ are used as inputs for the linear SVM. The $\\alpha$-th viewpoint classifier, obtained by using the SVM, is denoted as $C_{gen}^\\alpha$. The multiple-view classifier is a collection of different viewpoint-dependent classifiers, which is denoted as $C_{gen}=\\{C_{gen}^\\alpha\\}$, $\\alpha=\\overline{1, \\nu}$.\n\nIn the testing phase, the viewpoint is estimated as discussed in Section 3.2.2.
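
The following minimal sketch summarizes the per-view training described above and the selection step described in the next paragraph (for illustration only; it assumes scikit-learn's LinearSVC as a stand-in for the SVM implementation actually used, and the function and variable names are hypothetical):

\\begin{verbatim}
import numpy as np
from sklearn.svm import LinearSVC

def train_view_classifiers(features_by_view, labels_by_view):
    # features_by_view[a]: (N, d) array of flattened AGIs for viewpoint a
    # labels_by_view[a]:   (N,) array of labels in {-1 (female), +1 (male)}
    return {a: LinearSVC(max_iter=100).fit(X, labels_by_view[a])
            for a, X in features_by_view.items()}

def estimate_viewpoint(lagi_c, templates):
    # Eq. (5): viewpoint of the nearest LAGI template (Euclidean distance)
    return min(templates, key=lambda a: np.linalg.norm(lagi_c - templates[a]))

def predict_gender(agi_c, lagi_c, classifiers, templates):
    alpha = estimate_viewpoint(lagi_c, templates)        # VP model
    return classifiers[alpha].predict(agi_c.reshape(1, -1))[0]
\\end{verbatim}
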
Based on the estimated viewpoint $\\alpha$, the corresponding classifier $C_{gen}^\\alpha$ is automatically selected from $C_{gen}$ to predict the person's gender in a current frame.\n\nThe algorithms 1 and 2 give a more detail description of the proposed method using pseudo-code in which all notations described above are used.\n\n\\begin{algorithm*}\n\\caption{The training phase}\n\\begin{algorithmic}[1]\n\\STATE\\textbf{Input}: $video\\texttt{\\_}sequences$ in the $training\\texttt{\\_}samples$ of male and female in all viewpoints \n\\STATE \\textbf{Output}: VP model, DS model, and $C_{gen}$ classifier\n\n\\FORALL {$\\alpha$ \\textbf{in} $views$}\n\t\\FORALL {$video\\texttt{\\_}sequence\\; k$ \\textbf{in} $training\\texttt{\\_}samples$ of viewpoint $\\alpha$ }\n\t\t\\FORALL {frame $t$ \\textbf{in} $video\\texttt{\\_}sequence$}\n\t\t\t\\STATE Human detection\n\t\t\t\\IF {no human detected}\n\t\t\t\t\\STATE Skip to the next frame\n\t\t\t\\ENDIF\n\t\t\t\\STATE Preprocessing to get the normalized silhouette $S^\\alpha$\n\t\t\t\\STATE Extract low part of the silhouette $S^\\alpha$ by Eq.~\\ref{eq3} and accumulate to $LPS_{k,t}^\\alpha$\n\t\t\t\\STATE Detect contour of a normalized silhouette $S^\\alpha$\n\t\t\t\\STATE Calculate $d_i$ and $\\theta_i$ by Eq.~\\ref{eq6}, Eq.~\\ref{eq7}\n\t\t\t\\STATE Calculate $AGI^\\alpha$ and assign its label $y^\\alpha$ by Eq.~\\ref{eq2}\n\t\t\t\\STATE Append $AGI^\\alpha$ to vector $\\gamma_k^\\alpha$\n\t\t\t\\STATE Append $y^\\alpha$ to vector $L_k^\\alpha$\n\t\t\\ENDFOR\n\t\\ENDFOR\n\t\\STATE Calculate the $LAGI^\\alpha$ using $LPS_{t,f}^\\alpha$ by Eq.~\\ref{eq4} for VP model\n\t\\STATE Calculate $MaDS^\\alpha$ and $MiDS^\\alpha$ using $d_i$ and $\\theta_i$ by Eq.~\\ref{eq8}, Eq.~\\ref{eq9} for DS model\n\t\\STATE Train view-dependent classifier $C_{gen}^\\alpha$ using $\\gamma^\\alpha$ and $L^\\alpha$ as inputs of SVM\n\\ENDFOR\n\\end{algorithmic}\n\\end{algorithm*}\n\n\n\\begin{algorithm*}\n\\caption{The testing phase}\n\\begin{algorithmic}[1]\n\\STATE \\textbf{Input}: $video\\texttt{\\_}sequence$ of a person in an unknown viewing angle\n\\STATE \\textbf{Output}: Gender information\n\\STATE Initialize $counter = 0$\n\\STATE Initialize empty vector $v$\n\\FORALL {frame $t$ \\textbf{in} $video\\texttt{\\_}sequence$}\n\t\\STATE Human detection\n\t\\IF {human detected}\n\t\t\\STATE Increase $counter$ by 1\n\t\\ELSE\n\t\t\\STATE $counter=0$\n\t\t\\STATE Skip to the next frame\n\t\\ENDIF\n\t\\STATE Preprocessing to get the normalized silhouette $S$\n\t\\STATE Append $S$ to the end of vector $v$\n\t\\IF {$counter\\geq15$}\n\t\t\\STATE Calculate the $AGI$ using a vector of silhouette $v$\n\t\t\\STATE Extract low part of average gait image of the current frame $LAGI^c$ from $AGI$\n\t\t\\STATE Estimate the viewpoint $\\alpha$ using $LAGI^c$ and VP model by Eq.~\\ref{eq5}\n\t\t\\STATE Remove the attachment area using estimated viewpoint $\\alpha$ and DS model by Eq.~\\ref{eq10}\n\t\t\\STATE Reconstruct silhouette $S\\rightarrow S$' and update $AGI\\rightarrow AGI$'\n\t\t\\STATE Predict the gender in current frame using the updated $AGI$' and the estimated viewpoint $\\alpha$\n\t\t\\STATE Remove the first element of $v$\n\t\\ENDIF\n\\ENDFOR\n\\end{algorithmic}\n\\end{algorithm*}\n\n\\section{Experimental results}\n\\subsection{Experimental dataset}\nThe CASIA Dataset B \\cite{ref13}\\cite{ref14} addresses our requirements of multiple camera views because it includes sequences of various people from 11 viewpoints (from $0^o$ to $180^o$) under different walking 
conditions such as walking normally, carrying a backpack, and wearing a coat. The CASIA Dataset B captures sequences of 124 individual people (31 females and 93 males). Each person is captured ten times to create ten different sequences including six sequences under normal walking conditions, two backpack-carrying sequences, and two coat-wearing sequences. Table~\\ref{tab:1} summarizes the information of the CASIA Dataset B.\n\n\n\n\n\\begin{table}[!b]\n\\centering\n\\caption{Summary of CASIA Dataset B}\n\\label{tab:1} \n\\begin{tabular}{lll}\n\\hline\\noalign{\\smallskip}\n\\text{Walking condition} & \\text{\\#subjects} & \\text{\\#sequences} \\\\\n\\noalign{\\smallskip}\\hline\\noalign{\\smallskip}\n\\text{Normal walking} & 6 & $6\\times124\\times11$ \\\\\n\\text{Carrying a bag} & 2 & $2\\times124\\times11$ \\\\\n\\text{Wearing a coat} & 2 & $2\\times124\\times11$ \\\\\n\\noalign{\\smallskip}\\hline\n\\end{tabular}\n\\end{table}\n\nThe CASIA Dataset B includes background subtraction and thus, in the proposed system we are only required to resize and center the silhouette to the same size (144$\\times$144). For the AGI calculation, we must accumulate fifteen frames; it requires approximately 0.6 seconds to obtain the first gender-classification result when the frame rate is 25 fps, which can be considered a system delay.\n\nWe used the CASIA Dataset B for both training and testing using the same protocol as in \\cite{ref32}, which uses n-fold cross-validation. With this protocol, all 31 females were selected; 31 males were selected randomly from the CASIA Dataset B owing to a bias in the number of males in the dataset. The 31 females and 31 males were then grouped into 31 disjoint sets consisting of one female and one male. To create viewpoint-dependent classifiers, we use 30 sets for training. The remaining sets were used to test the system accuracy. The training and testing phases were repeated 31 times; the averages of the correct classification rate are listed for all experiments.\n\\subsection{Viewpoint-dependent classifiers test}\nThis test was used to validate the performance of only viewpoint-dependent classifiers under the assumption that the viewpoint was given. We conducted the test for both correct and incorrect viewpoint classifiers with respect to a specific viewpoint to observe the effect of viewpoint changes on the gender classification. Table \\ref{tab:2} displays the correct classification rates ($CCRs$) when using the corresponding classifier and a non-corresponding classifier (the viewpoint is given). The $CCR$ is defined as:\n\\begin{equation}\n\\label{eq11}\nCCR=\\frac{TP+TN}{N}\n\\end{equation}\nwhere $TP$ is the true positive referring to the cases in which the system correctly classifies positive samples (male to male), $TN$ is the true negative referring to the cases in which the system correctly classifies negative samples (female to female), and $N$ is the total number of samples. In these experiments, the male samples are labeled as 1 (positive samples) and the female samples are labeled as -1 (negative samples). 
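
For clarity, with this $\\pm1$ labelling the $CCR$ of Eq.~\\ref{eq11} is simply the overall fraction of correctly classified samples; a minimal sketch (illustrative function name only) is:

\\begin{verbatim}
import numpy as np

def ccr(y_true, y_pred):
    # Eq. (11): CCR = (TP + TN) / N, labels in {-1 (female), +1 (male)}
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == -1) & (y_pred == -1))
    return (tp + tn) / y_true.size

# Example: ccr([1, 1, -1, -1], [1, -1, -1, -1]) returns 0.75
\\end{verbatim}
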
As indicated in Table \\ref{tab:2}, applying a proper classifier for a specific viewpoint provides higher $CCRs$ (97.6\\% $\\pm$ 0.881) (for further description see Table \\ref{tab:2}).\n\n\\begin{table*}\n\\centering\n\\caption{CCRs (\\%) of viewpoint-dependent classifiers for specific viewing angle under normal walking conditions}\n\\label{tab:2} \n\\begin{tabular}{llllllllllll}\n\\hline\\noalign{\\smallskip}\n\\multirow{2}{*}{\\text{Ground-truth viewpoint}} & \\multicolumn{11}{c}{Classifier} \\\\\n& $0^o$ & $18^o$ & $36^o$ & $54^o$ & $72^o$ & $90^o$ & $108^o$ & $126^o$ & $144^o$ & $162^o$ & $180^o$ \\\\\n\\noalign{\\smallskip}\\hline\\noalign{\\smallskip}\n$0^o$ &97.6&96.5&94.1&89.6&89.9&82.3&87.2&93.8&94.6&94.5&95.8 \\\\\n$18^o$&97.5&98.6&96.8&93.4&89.1&85.4&88.6&92.4&93.7&95.2&96.4 \\\\\n$36^o$&95.3&96.6&97.4&95.8&94.2&93.4&93.9&94.5&95.7&95.7&94.2 \\\\\n$54^o$&92.2&95.3&96.1&96.6&95.8&94.6&93.4&95.0&95.5&94.6&93.1 \\\\\n$72^o$&92.3&93.2&94.1&95.4&96.1&94.8&94.5&95.2&93.0&92.8&92.4 \\\\\n$90^o$&90.0&93.1&95.3&95.5&95.7&98.8&96.9&95.7&94.1&92.5&91.1 \\\\\n$108^o$&91.4&92.7&95.4&96.1&96.5&97.0&97.3&96.2&95.5&92.4&93.2 \\\\\n$126^o$&92.5&94.3&96.7&95.7&95.3&94.1&93.4&96.8&94.5&94.7&94.5 \\\\\n$144^o$&95.7&96.8&97.8&95.8&93.2&93.4&93.9&94.8&97.5&95.5&94.5 \\\\\n$162^o$&95.5&96.3&95.8&93.4&91.3&90.4&92.6&94.4&95.4&98.3&96.6 \\\\\n$180^o$&96.8&97.6&95.8&92.8&91.5&91.4&92.6&92.3&93.5&95.3&98.5 \\\\\n\\noalign{\\smallskip}\\hline\n\\end{tabular}\n\\end{table*}\n\n\n\\begin{table*}\n\\centering\n\\caption{CCRs (\\%) of viewpoint-dependent classifiers of specific viewing angle when carrying a bag}\n\\label{tab:3} \n\\begin{tabular}{llllllllllll}\n\\hline\\noalign{\\smallskip}\n\\multirow{2}{*}{\\text{Ground-truth viewpoint}} & \\multicolumn{11}{c}{Classifier} \\\\\n & $0^o$ & $18^o$ & $36^o$ & $54^o$ & $72^o$ & $90^o$ & $108^o$ & $126^o$ & $144^o$ & $162^o$ & $180^o$ \\\\\n\\noalign{\\smallskip}\\hline\\noalign{\\smallskip}\n$0^o$&94.3&91.6&89.7&84.8&81.5&79.6&80.3&83.9&85.2&89.9&92.6 \\\\\n$18^o$&92.8&94.4&93.5&89.9&83.1&80.1&82.8&86.4&85.4&92.2&91.7 \\\\\n$36^o$&89.8&92.1&93.7&90.7&85.2&82.0&88.8&90.2&86.4&90.5&89.1 \\\\\n$54^o$&88.0&88.3&90.1&91.2&89.1&84.5&89.1&90.6&87.9&88.3&86.4 \\\\\n$72^o$&88.1&89.4&89.5&90.4&90.6&86.2&89.4&89.1&89.3&88.8&84.4 \\\\\n$90^o$&82.3&82.8&83.1&85.5&86.4&87.4&87.1&85.3&85.1&83.6&82.1 \\\\\n$108^o$&82.2&82.4&83.9&84.2&85.5&86.7&89.8&87.4&87.0&85.4&83.7 \\\\\n$126^o$&87.4&87.3&88.3&90.7&83.3&85.5&88.1&91.2&90.5&89.7&88.5 \\\\\n$144^o$&88.7&89.3&90.6&90.1&82.2&83.4&87.3&90.2&91.4&89.8&88.4 \\\\\n$162^o$&91.4&91.2&90.8&89.4&81.1&82.1&85.6&89.6&90.7&93.1&92.2 \\\\\n$180^o$&93.2&92.5&91.0&90.7&80.5&80.6&84.1&87.1&89.1&91.4&94.8 \\\\\n\\noalign{\\smallskip}\\hline\n\\end{tabular}\n\\end{table*}\n\n\\begin{table*}\n\\centering\n\\caption{CCRs (\\%) of viewpoint-dependent classifiers of specific viewing angle when wearing a coat }\n\\label{tab:4} \n\\begin{tabular}{llllllllllll}\n\\hline\\noalign{\\smallskip}\n\\multirow{2}{*}{\\text{Ground-truth viewpoint}}& \\multicolumn{11}{c}{Classifier} \\\\\n& $0^o$ & $18^o$ & $36^o$ & $54^o$ & $72^o$ & $90^o$ & $108^o$ & $126^o$ & $144^o$ & $162^o$ & $180^o$ \\\\\n\\noalign{\\smallskip}\\hline\\noalign{\\smallskip}\n$0^o$&92.2&91.5&89.2&87.4&86.1&85.5&87.2&88.8&89.3&90.6&91.7 \\\\\n$18^o$&91.3&93.5&90.9&89.5&88.2&84.6&86.2&87.1&89.8&89.4&91.8 \\\\\n$36^o$&88.5&90.4&94.1&91.7&89.1&88.4&91.0&92.5&93.8&90.1&89.4 \\\\\n$54^o$&87.5&88.3&89.7&91.8&90.3&87.6&90.6&91.1&90.4&89.2&87.1 
\\\\\n$72^o$&85.6&88.5&90.4&91.6&92.4&91.3&89.1&87.3&87.0&86.3&84.4 \\\\\n$90^o$&83.0&84.7&85.3&88.6&90.8&93.7&91.7&89.6&87.4&86.2&85.1 \\\\\n$108^o$&82.4&85.6&85.4&88.1&89.5&89.4&90.0&89.8&87.2&85.3&83.2 \\\\\n$126^o$&84.6&86.1&88.6&90.3&89.3&88.0&89.8&91.9&87.5&85.7&83.4 \\\\\n$144^o$&84.7&85.0&89.2&89.8&88.2&84.4&86.9&87.8&89.9&86.5&84.5 \\\\\n$162^o$&90.5&91.2&88.8&87.2&84.6&83.4&85.1&87.4&89.7&91.5&90.2 \\\\\n$180^o$&92.0&91.6&87.8&86.8&83.5&82.4&82.9&85.1&86.5&90.9&92.4 \\\\\n\\noalign{\\smallskip}\\hline\n\\end{tabular}\n\\end{table*}\n\nWe also conducted experiments under more challenging conditions such as a person\ncarrying an item or wearing a coat because the CASIA Dataset B includes sequences\nof such conditions, which were not used in previous studies \\cite{ref22}\\cite{ref23}\\cite{ref30}\\cite{ref31}\\cite{ref40}\\cite{ref41}. As indicated in Tables \\ref{tab:3} and \\ref{tab:4}, the $CCR$ of the gender prediction was\nsignificantly decreased under the challenging conditions of a side view or\nnearside view, even when the proper classifier was applied for the specific \nviewpoint. This problem is understandable because our viewpoint-dependent \nclassifiers are built upon sequences under normal walking conditions.\n\n \nMoreover, for a side view or nearside view, the appearance of the person is clearly changed, both when carrying an item and when wearing a coat. The mean $\\pm$ std of both $CCRs$ while carrying a bag and wearing a coat were 92.0\\% $\\pm$ 2.3 and 92.1\\% $\\pm$ 1.4, respectively. \n\nSome examples of silhouette from the same persons in front view (top row) and side view (bottom row) are shown in Fig. \\ref{fig:7}, respectively. It is interesting to notice from the figure that even in different views, the head part of the silhouette also contains the classifiable gender information (head and hair style). This is to explain that even the $90^o$ classifier is used to test the silhouette in $0^o$, the accuracy is not too low as seen in Tables 2-3-4. 
However, when the correct view classifier is applied, many traits for gender classification such as head and hair style, chest and back, waist and buttocks, legs \\cite{ref23} are taken into account to increase the performance.\n\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.5]{Fig7}\n\\caption{Silhouette example of human in front view (first row) and side view (second row)}\n\\label{fig:7}\n\\end{figure}\n\n\n\n\\begin{table*}\n\\centering\n\\caption{Results of viewpoint estimation in terms of percentage for arbitrary viewpoint under normal walking conditions }\n\\label{tab:5} \n\\begin{tabular}{llllllllllll}\n\\hline\\noalign{\\smallskip}\n\\multirow{2}{*}{\\text{Ground-truth viewpoint}}& \\multicolumn{11}{c}{Estimated viewpoint} \\\\\n& $0^o$ & $18^o$ & $36^o$ & $54^o$ & $72^o$ & $90^o$ & $108^o$ & $126^o$ & $144^o$ & $162^o$ & $180^o$ \\\\\n\\noalign{\\smallskip}\\hline\\noalign{\\smallskip}\n$0^o$&90.8&8.1&0&0&0&0&0&0&0&0&1.1 \\\\\n$18^o$&3.4&89.5&0&0&0&0&0&0&0&8.1&0 \\\\\n$36^o$&0&5.3&88.3&6.4&0&0&0&0&0&0&0 \\\\\n$54^o$&0&0&4.3&82.1&11.6&0&0&0&0&0&0 \\\\\n$72^o$&0&0&0&3.0&93.7&3.3&0&0&0&0&0 \\\\\n$90^o$&0&0&0&0&4.4&93.2&2.4&0&0&0&0 \\\\\n$108^o$&0&0&0&0&0&7.4&87.1&5.5&0&0&0 \\\\\n$126^o$&0&0&0&0&0&0&1.1&88.2&10.7&0&0 \\\\\n$144^o$&0&0&0&0&0&0&0&6.2&84.3&9.5&0 \\\\\n$162^o$&0&0&0&0&0&0&0&0&6.4&84.5&9.1 \\\\\n$180^o$&2.6&0&0&0&0&0&0&0&0&9.3&88.1 \\\\\n\\noalign{\\smallskip}\\hline\n\\end{tabular}\n\\end{table*}\n\n\n\\begin{table*}\n\\centering\n\\caption{CCRs (\\%) of the viewpoint-dependent classifier of unknown viewing angle under different walking conditions without attachment-area removal module }\n\\label{tab:6} \n\\begin{tabular}{llllllllllllll}\n\\hline\\noalign{\\smallskip}\n\\multirow{2}{*}{\\text{Walking condition}}& \\multicolumn{11}{c}{Classifier} \\\\\n& $0^o$ & $18^o$ & $36^o$ & $54^o$ & $72^o$ & $90^o$ & $108^o$ & $126^o$ & $144^o$ & $162^o$ & $180^o$ & \\text{Avg} & \\text{Unified} \\\\\n\\noalign{\\smallskip}\\hline\\noalign{\\smallskip}\n\\text{Normal walking}&98.5&99.1&98.7&97.8&99.3&99.8&98.7&98.4&98.3&98.9&99.2&98.8 &90.3\\\\\n\\text{Carrying backpack}&94.6&94.4&93.7&92.6&91.3&87.5&90.1&91.7&92.5&93.7&94.9&92.5 &85.7\\\\\n\\text{Wearing coat}&92.1&93.7&94.4&92.6&93.2&94.1&91.3&92.6&90.1&92.3&93.4&92.7 &86.1\\\\\n\\noalign{\\smallskip}\\hline\n\\end{tabular}\n\\end{table*}\n\n\\begin{table*}\n\\centering\n\\caption{CCRs (\\%) of the viewpoint-dependent classifier of unknown viewing angle when including attachment-area removal module }\n\\label{tab:7} \n\\begin{tabular}{lllllllllllll}\n\\hline\\noalign{\\smallskip}\n\\multirow{2}{*}{\\text{Walking condition}}& \\multicolumn{11}{c}{Classifier} \\\\\n& $0^o$ & $18^o$ & $36^o$ & $54^o$ & $72^o$ & $90^o$ & $108^o$ & $126^o$ & $144^o$ & $162^o$ & $180^o$ & \\text{Avg} \\\\\n\\noalign{\\smallskip}\\hline\\noalign{\\smallskip}\n\\text{Normal walking}&98.5&99.1&98.7&97.8&99.3&99.8&98.7&98.4&98.3&98.9&99.2&98.8 \\\\\n\\text{Carrying backpack}&94.8&94.6&94.5&94.8&93.5&95.1&94.4&94.3&93.4&93.9&94.9&94.4 \\\\\n\\text{Wearing coat}&93.1&93.8&94.6&93.4&93.7&94.5&93.1&93.5&92.3&92.6&93.7&93.5 \\\\\n\\noalign{\\smallskip}\\hline\n\\end{tabular}\n\\end{table*}\n\n\n\\subsection{Viewpoint estimation test}\nViewpoint estimation is an important step in this work because it determines the DS model and classifier to be used for gender prediction. To test the accuracy of the viewpoint estimation module, we randomly selected ten sequences from a specific viewpoint under normal walking conditions from the CASIA Dataset B. 
This procedure was conducted as discussed in Section 3.2.2. The average percentages (for the ten sequences) of the viewpoint estimation are displayed in Table \\ref{tab:5}. As can be seen, given a sequence with a specific viewpoint from the CASIA Dataset B, the viewpoint estimated by the program did not always match the given viewpoint (for the given $0^o$ sequence, the estimated viewpoints are $0^o$, $18^o$, and $180^o$ with percentages of 90.8\\%, 8.1\\%, and 1.1\\%, respectively). This is understandable because people change their gait features while walking. Moreover, the person's appearance from the front and rear views is similar. As a consequence, in the classification step, the $0^o$, $18^o$, and $180^o$ classifiers were used to obtain the gender of the individual for 90.8\\%, 8.1\\%, and 1.1\\% of the sequences, respectively.\n\nAfter obtaining the viewing angle, the corresponding classifier was selected to conduct the gender prediction. Table \\ref{tab:6} displays the $CCRs$ for an unknown viewpoint under different walking conditions, i.e., normal walking, carrying an item, and wearing a coat. The CCRs under normal walking conditions (Table \\ref{tab:6}, row 1) are improved because the viewpoints were automatically calculated and the proper classifier was selected for the gender prediction.\n\n\nFor the given unknown $0^o$ viewpoint, $90.8\\%$ of the image sequences are estimated at the $0^o$ viewing angle, $8.1\\%$ at $18^o$, and $1.1\\%$ at $180^o$, respectively, as shown in Table \\ref{tab:5}. Those image sequences are then sent to the $0^o$, $18^o$, and $180^o$ classifiers, respectively, to calculate the $CCR$ at the $0^o$ viewpoint. The performance using the corresponding classifiers under normal walking conditions increases the $CCRs$ from ($97.6\\% \\pm 0.881$) to ($98.8\\% \\pm 0.550$). The $CCRs$ while carrying a bag or wearing a coat (Table \\ref{tab:6}, rows 2 and 3) were not significantly improved because we used classifiers trained under normal walking conditions to predict the gender.\n\n\n\nTo demonstrate the superiority of the view-dependent design, a unified classifier over all viewpoints is trained without considering the viewing angle, using the same configuration of the SVM. The experimental result is reported in the last column of Table \\ref{tab:6}. As seen in the table, the average performance of the view-dependent design (penultimate column) is significantly higher than that of the unified classifier in each experimental scenario.\n\n\n\\begin{table*}\n\\centering\n\\caption{Comparison with related methods}\n\\label{tab:8} \n\\begin{tabular}{llllll}\n\\hline\\noalign{\\smallskip}\n\\text{Compared methods} & \\text{\\#subjects} & \\text{Viewpoints} & \\text{Walking condition} & \\text{Reported} & \\text{Proposed method} \\\\\n\\noalign{\\smallskip}\\hline\\noalign{\\smallskip}\n\n\\text{Lee et al. \\cite{ref42}} & \\text{14 males, 10 females} & $90^o$ & \\text{Normal} & $84.5\\%$ & $100\\%$ \\\\\n\n\\text{Li et al. \\cite{ref44}} & \\text{31 males, 31 females} & $90^o$ & \\text{Normal} & $93.2\\%$ & $99.8\\%$\\\\\n\n\\text{Yu et al. \\cite{ref23}} & \\text{31 males, 31 females} & $90^o$ & \\text{Normal} & $95.9\\%$ & $99.8\\%$\\\\\n\n\\text{Huang et al. 
\\cite{ref43}} & \\text{30 males, 30 females} & $0^o, 90^o, 180^o$ & \\text{Normal} & $89.5\\%$ & $99.2\\%$\\\\\n\n\\text{Zhang De \\cite{ref32}} & \\text{31 males, 31 females} & $0^o, 18^o,..., 180^o$ & \\text{Normal} & $98.1\\%$ & $98.8\\%$\\\\\n\n\\text{NA}& \\text{31 males, 31 females} & $0^o, 18^o,..., 180^o$ & \\text{Bag-carrying} & \\text{NA} & $94.4\\%$\\\\\n\n\\text{NA}& \\text{31 males, 31 females} & $0^o, 18^o,..., 180^o$ & \\text{Coat-wearing} & \\text{NA} & $93.5\\%$\\\\\n\n\\noalign{\\smallskip}\\hline\n\\end{tabular}\n\\end{table*}\n\n\\subsection{Attachment removal test}\nThe attachment area and noise can be removed using the procedure discussed in \nSection 3.3.2. During the testing phase, the silhouette was corrected and updated\nusing the attachment-area removal module. The updated version of the AGI was \ncalculated based on the new version of the silhouette. As indicated in Table \\ref{tab:6} and \nTable \\ref{tab:7}, the $CCRs$ of a human carrying a backpack in the side view ($90^o$) is \nsignificantly improved with the attachment removal module (95.1\\%), without the \nattachment removal module (87.5\\%). More improvement in the $CCRs$ of 94.4\\% $\\pm$\n0.564 and 93.5\\% $\\pm$ 0.704, compared to the cases of no attachment removal, is indicated in Table \\ref{tab:7} because of the attachment-area removal module. Moreover, a significant improvement for the side view or nearside view is presented in Table \\ref{tab:7}, row 2 (carrying a bag). The $CCR$ values in Table \\ref{tab:7}, row 1 were not changed because in this case, the person was walking with a thin coat and not carrying any objects. The attachment-area removal module did not remove anything in this case since the distance signal was within the range of $MaDS$ and $MiDS$.\n\\subsection{Comparisons}\nThe dataset used in \\cite{ref42} consists of twenty-four subjects, 14 males and 10 females, walking in normal speed and stride. The camera was placed perpendicular to their walking path. In \\cite{ref23}\\cite{ref44}experiments, only side-view sequences of 31 males and 31 females were collected from CASIA Dataset B for gender classification evaluation. Huang et al. in \\cite{ref43} extracted only 30 males and 30 females from CASIA Dataset B in three viewing angles including $0^o$,$90^o$, and $180^o$.\nTable \\ref{tab:8} presents a $CCR$ comparison of the proposed method with other related works. In the side-view dataset, the proposed method attained $CCRs$ of 100\\% and 99.8\\% compared with 84.5\\% reported in \\cite{ref42} and 95.9\\% reported in \\cite{ref23} on a small dataset and the CASIA Dataset B under the same conditions, respectively. The proposed method was also tested on normal walking conditions in three viewing angles ($0^o,90^o$, and $180^o$) and achieved greater accuracy (99.2\\% on average) compared with 89.5\\% as in \\cite{ref43}.\n\nTo demonstrate the effectiveness of the proposed method for gender classification with multiple-viewing angles, we conducted a test on multiple-views of the CASIA Dataset B (11 viewing angles) and obtained CCRs of 98.8\\%, which is also greater than the state-of-the-art method, 98.1\\% reported in \\cite{ref32}.\n\nAs described in Section 4.1, this CASIA Dataset B included three categories. The first category contained videos of humans walking in a normal condition without any attachments. The two remaining categories were more challenging, containing videos of humans carrying a backpack and humans wearing a coat. 
Because of the attachments, the silhouettes were highly deformed, leading to significant degradation of the classification results (Table \\ref{tab:6} and Table \\ref{tab:7}). To the best of our knowledge, there are no experimental results reported for these two remaining datasets.\n\nApplying the proposed module to remove the attachments, we performed experiments on these two datasets in multiple viewpoints (11 viewing angles) in the same scenario as the first category. The CCRs on the challenging dataset images indicated promising results of 94.4\\% and 93.5\\% for the bag-carrying and coat-wearing images, respectively, as indicated in Table \\ref{tab:8}.\n\nFurther, because the proposed method uses simple operations for gender classification, such as two-dimensional signals (distance signals) and a linear SVM, it requires only 48 ms (20.8 frames per second) to process a frame after skipping the first 15 frames for the AGI calculation. This means that the algorithm can be applied to a surveillance application in real time.\n\n\n\n\\section{Conclusions and future works}\nGender information can be effectively obtained from a video surveillance system based on the gait feature of the subject. Instead of using a GEI, this paper employed an AGI, which is easier to calculate for a real application. To accurately predict the human gender in real applications, we created viewpoint-dependent classifiers together with a VP model and a DS model. The VP model is used to estimate the viewing angle during the testing phase; any attachment area is then removed using the DS model. Finally, the gender information is provided through the use of the viewpoint-dependent classifier. A comparison with other state-of-the-art methods \\cite{ref23} confirmed that the proposed method achieved a high accuracy of 98.8\\% and can be applied to a real-world system. However, the results of this method depend mainly on the quality of the silhouette obtained during the background subtraction step, as shown in Fig. \\ref{fig:8}. In the figure, the first row shows samples for which the method correctly classifies the gender, whereas the second row shows samples that are wrongly classified owing to poor silhouette quality. As future work, we will attempt to use the raw RGB image of a person rather than a silhouette image with deep-learning techniques, because color information is an important factor for accurate gender classification.\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.5]{Fig8}\n\\caption{Good and bad silhouettes obtained from the background subtraction process.}\n\\label{fig:8} \n\\end{figure}\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n{"text":"\\section{Introduction}\n\n\n\n{\\bf Introduction.}\nThe most general form of the neutrino electromagnetic vertex function \\cite{Giunti:2014ixa} is given by $\\Lambda_{\\mu}^{ij}(q) = \\left( \\gamma_{\\mu} - {q}_{\\mu} \\slashed{q}\/q^{2} \\right) \\left[ f_{Q}^{ij}(q^{2}) + f_{A}^{ij}(q^{2}) q^{2} \\gamma_{5} \\right] - i \\sigma_{\\mu\\nu} q^{\\nu} \\left[ f_{M}^{ij}(q^{2}) + i f_{E}^{ij}(q^{2}) \\gamma_{5} \\right]$, where $\\Lambda_{\\mu}(q)$ and the form factors $f_{Q,A,M,E}(q^2)$ are $3\\times 3$ matrices in the space of massive neutrinos. 
In the case of coupling with a real photon ($q^2=0$) form factors provide four sets of neutrino electromagnetic characteristics: 1) the dipole magnetic moments $\\mu_{ij}=f_{M}^{ij}(0)$,\n2) the dipole electric moments $\\epsilon_{ij}=f_{E}^{ij}(0)$, 3) the millicharges $q_{ij}=f_{Q}^{ij}(0)$ and\n4) the anapole moments $a_{ij}=f_{A}^{ij}(0)$.\n\n\n\n\n{\\bf Neutrino dipole magnetic moments.} The most well understood and studied among neutrino electromagnetic characteristics are the neutrino magnetic moments.\nIn the Standard Model with massless neutrinos magnetic moments of neutrinos are zero. Therefore, it is believed that the studies of neutrino electromagnetic properties open a window to {\\it new physics} \\cite{Giunti:2014ixa,Aprile:2020tmw,Studenikin:2008bd,Studenikin:2018vnp}. In a minimal extension of the Standard Model the diagonal magnetic moment of a Dirac neutrino is given \\cite{Fujikawa:1980yx} by\n$\\mu^{D}_{ii}\n = \\frac{3e G_F m_{i}}{8\\sqrt {2} \\pi ^2}\\approx 3.2\\times 10^{-19}\n \\Big(\\frac{m_i}{1 \\ \\mathrm{eV} }\\Big) \\mu_{B}$,\n$\\mu_B$ is the Bohr magneton. The Majorana neutrinos can have only transition\n(off-diagonal) magnetic\nmoments $\\mu^{M}_{i\\neq j}$. The same is valid also for the flavour neutrinos in the case of the Majorana mass states.\n\nThe most stringent constraints on the neutrino\nmagnetic moments are obtained with the reactor antineutrinos\n(GEMMA Collaboration \\cite{GEMMA:2012}):\n$\\mu_{\\nu_e} < 2.9 \\times 10^{-11} \\mu_{B}$,\nand solar neutrinos (Borexino Collaboration \\cite{Borexino:2017fbd}):\n${\\mu}_\\nu ^{eff}< 2.8 \\times\n10^{-11} \\mu _B$. The last limit can be translated to the upper limits for flavour neutrinos: $(\\mu_{\\nu_e}, \\mu_{\\nu_{\\mu, \\tau}}) \\sim (4, 6 ) \\times 10^{-11} \\mu _B$.\n\nNote that in general in the scattering experiments the neutrino\nis created at some distance from the detector as a flavor neutrino, which is a\nsuperposition of massive neutrinos. Therefore, the magnetic\nand electric moments that are measured in these experiments are not that of a\nmassive neutrino, but there are effective moments that take into account the neutrino mixing and oscillations\nduring the propagation between the\nsource and detector \\cite{Grimus:1997aa, Beacom:1999wx}.\nFor the recent and detailed study of the neutrino electromagnetic characteristics\ndependence on neutrino mixing see \\cite{Kouzakov:2017hbc}.\n\nA new phase of the GEMMA project for measuring the neutrino magnetic moment is now underway at the Kalinin Power Plant in Russia. The discussed next experiment \\cite{Belov:2015ufh} called GEMMA-3$ \/ \\nu $GEN is aimed at the further increase in sensitivity to the neutrino magnetic moment and will reach the level of\n$\\mu_{\\nu_e} \\sim (5{-}9) \\times 10^{-12}\\mu _B $.\nTo reach the claimed limit on the neutrino magnetic moment the $\\nu$GEN experiment setup reasonably improves characteristics in respect to those of the previous editions of the GEMMA project. The most important are the following \\cite{Lubashevskiy:2020}: 1) a factor of 2 increase in the total neutrino flux at the detector because of much closer location of the detector to the reactor core, 2) a factor of 3.7 increase in the total mass of the detector, 3) the energy threshold would be improved from $2.8 \\ keV$ to $ 200 \\ eV$. Furthermore, the $\\nu$GEN experimental setup is located in the new room at the Kalinin Power Plat with much better (by an order of magnitude) gamma-background conditions and on a moveable platform. 
The later gives an opportunity to vary online the neutrino flux and thus suppress systematic errors.\n\n\n\n\nThe observation of coherent elastic neutrino-nucleus scattering reported for the first time \\cite{Akimov:2017ade} by the\nCOHERENT experiment at the Spallation Neutron Source can be also used for constraining neutrino electromagnetic properties. For the case of neutrino magnetic moments, however, as it was shown in \\cite{Kosmas:2017tsq}\nand then confirmed in recent studies (see, for instance, \\cite{Miranda:2020tif} ) the bounds for the flavour neutrino magnetic moments are of the order $\\mu_e , \\mu_\\mu \\sim 10^{-8} \\mu _{B}$.\n\n\n\n\nIn the recent studies \\cite{Miranda:2020kwy} it is shown that the puzzling results of the XENON1T collaboration \\cite{Aprile:2020tmw} at few keV electronic recoils could be due to the scattering of solar neutrinos endowed with finite Majorana transition magnetic moments of the strengths lie within the limits set by the Borexino experiment with solar neutrinos \\cite{Borexino:2017fbd}. The comprehensive analysis of the existing and new extended mechanisms for enhancing neutrino transition magnetic moments to the level appropriate for the interpretation of the XENON1T data and leaving neutrino masses within acceptable values is provided in \\cite{Babu:2020ivd}.\n\nIn the most recent paper \\cite{Cadeddu:2019qmv} we have proposed an experimental setup to observe coherent elastic neutrino-atom scattering using electron antineutrinos from tritium decay and a liquid helium target. In this scattering process with the whole atom, that has not beeen observed so far, the electrons tend to screen the weak charge of the nucleus as seen by the electron antineutrino probe.\n Finally, we study the sensitivity of this apparatus to a possible electron\n neutrino magnetic moment and we find that it is possible\n to set an upper limit of about\n$\\mu_{\\nu} < 7 \\times 10^{-13} \\mu_{B}$,\nthat is more than one order of magnitude smaller than\nthe current experimental limits from GEMMA and Borexino.\n\n\n\nAn astrophysical bound on an effective neutrino magnetic moment (valid for both cases of\nDirac and Majorana neutrinos) is provided\n\\cite{Raffelt-Clusters:90, Viaux-clusterM5:2013, Arceo-Diaz-clust-omega:2015}\nby observations of the properties of globular cluster stars:\n$\\Big( \\sum _{i,j}\\left| \\mu_{ij}\\right| ^2\\Big) ^{1\/2}\\leq (2.2{-}2.6) \\times\n10^{-12} \\mu _B$. There is also a statement \\cite{deGouvea:2012hg}, that\nobservations of supernova fluxes in the future largevoluem experiments like JUNO, DUNE and Hyper-Kamiokande ( see for instance\n\\cite{An:2015jdp,Giunti:2015gga,Lu:2016ipr}) may reveal the effect of collective spin-flavour oscillations due to the Majorana neutrino transition moment $\\mu^{M}_\\nu \\sim 10^{-21}\\mu_B$. Other new possibilities for neutrino\nmagnetic moment visualization in extreme astrophysical environments are\nconsidered recently in \\cite{Grigoriev:2017wff,Kurashvili:2017zab}.\n\n\n\nA general and termed model-independent upper bound on the Dirac neutrino\nmagnetic moment, that can be generated by an effective theory beyond\na minimal extension of the Standard Model, has been derived in\n\\cite{Bell:2005kz}: $\\mu_{\\nu}\\leq\n10^{-14}\\mu_B$. The corresponding limit for transition moments of Majorana neutrinos is much weaker \\cite{Bell:2006wi}.\n\n\n{\\bf Neutrino dipole electric moments.} In the theoretical framework with $CP$ violation a neutrino\ncan have nonzero electric moments $\\epsilon_{ij}$. 
In the laboratory neutrino\nscattering experiments for searching $\\mu_{\\nu}$ (for instance, in the GEMMA experiment)\nthe electric moment $\\epsilon_{ij}$ contributions interfere with\nthose due to $\\mu_{ij}$. Thus, these kind of experiments also provide constraints\non $\\epsilon_{ij}$. The astrophysical bounds on $\\mu_{ij}$\nare also applicable for constraining $\\epsilon_{ij}$ (see \\cite{Raffelt-Clusters:90, Viaux-clusterM5:2013, Arceo-Diaz-clust-omega:2015} and \\cite{Raffelt:2000kp}).\n\n{\\bf Neutrino electric millicharge.} There are extensions of the Standard Model that allow for nonzero\nneutrino electric millicharges. This option can be provided by\nnot excluded experimentally possibilities for hypercharhge dequantization or\nanother {\\it new physics} related with an additional $U(1)$ symmetry\npeculiar for extended theoretical frameworks. Note that neutrino millicharges\nare strongly constrained on the level $q_{\\nu}\\sim 10^{-21} e_0$\n($e_0$ is the value of an electron charge) from neutrality of the hydrogen atom.\n\n A nonzero neutrino millicharge $q_{\\nu}$ would contribute to the neutrino electron scattering in the terrestrial experiments. Therefore, it is possible to get bounds on $q_{\\nu}$ in the reactor antineutrino\n experiments. The most stringent reactor antineutrino constraint\n\n $q_{\\nu}< 1.5 \\times 10^{-12} e_0 $\n\n is obtained in \\cite{Studenikin:2013my} within the free-electron approximation using the GEMMA experimental data \\cite{GEMMA:2012}. This limit is cited by the Particle Data Group since 2016 (see also \\cite{Zyla:2020zbs}).\nA certain increase in the cross section is expected in the case when instead of the free-electron approximation one accounts for\nthe so called atomic ionization effect \\cite{Chen:2014dsa}, and the obtained corresponding limit on the neutrino millicharge is\n $q_{\\nu} < 1 \\times 10^{-12} e_0$.\n\n\n The expected increasing sensitivity to the neutrino-electron scattering of the future $\\nu$GEN experiment that is aimed to reach a new limit for the magnetic moment would provide a possibility \\cite{Studenikin:2013my} to check the neutrino millicharge at the scale of $ q_{\\nu} \\sim 10^{-13} e_0$.\n\nAs it has been already mentioned above, the coherent elastic neutrino-nucleus scattering \\cite{Akimov:2017ade} is a new\npowerful tool to probe the electromagnetic neutrino properties \\cite{Kosmas:2017tsq}. In the flavour basis neutrinos can have diagonal $q_{lf}$ ($l=f$, $l,f = e, \\mu, \\tau$) and transition $q_{lf}$ ($l\\neq f $) electric charges (see, for instance, \\cite{Giunti:2014ixa} and \\cite{Kouzakov:2017hbc}). Such possibilities are not excluded by theories beyond the Standard Model. Recently \\cite{Cadeddu:2019eta} from the analysis of the COHERENT data new constraints for all neutrino charges on the level of $\\sim 10^{-7} e_0$ are obtained. It follows, that the bounds for involving the electron neutrino flavour charges $q_{ee}, q_{e\\mu}$ and $q_{e\\tau}$ are not competitive with respect to constraints $\\sim 10^{-12} e_0$ obtained for the effective electron neutrino charge $q_{eff}= \\sqrt{q_{ee}^2 +q_{e\\mu}^2 +q_{e\\tau}^2}$ from the reactor antineutrino scattering experiments \\cite{Studenikin:2013my, Chen:2014dsa}. 
Note, that the bounds for $q_{\\mu \\mu}$ and $q_{\\mu \\tau}$ from a laboratory data are obtained in \\cite{Cadeddu:2019eta} for the first time.\n\n\nThe most recent and one of the most detailed statistical studies \\cite{Parada:2019gvy} of experimental data from the elastic neutrino-electron and coherent neutrino-nucleus scattering show that the combined inclusion of different experimental data can lead to stronger\nconstraints on $q_\\nu$ than those based on individual analysis of different experiments.\n\n\nA neutrino millicharge would have specific phenomenological consequences\nin astrophysics because of new electromagnetic processes are opened\ndue to a nonzero charge (see \\cite{Giunti:2014ixa,Raffelt:1996wa,Studenikin:2012vi}). Following this line, the most stringent astrophysical constraint on neutrino millicharges\n$q_{\\nu}< 1.3 \\times 10^{-19} e_0 $\n was obtained in \\cite{Studenikin:2012vi}. This bound\nfollows from the impact of the {\\it neutrino star turning} mechanism ($\\nu ST$) \\cite{Studenikin:2012vi} that can be considered as a {\\it new physics} phenomenon end up with a pulsar rotation frequency\nshift engendered by the motion of escaping from the\nstar neutrinos along curved trajectories due to millicharge interaction with a constant\nmagnetic field of the star. The existed other astrophysical constraints on the neutrino millicharge, however less restrictive than that of \\cite{Studenikin:2012vi}, are discussed in \\cite{Giunti:2014ixa,Parada:2019gvy}.\n\n{\\bf Neutrino cherge radius and anapole moment.} Even if a neutrino millicharge is vanishing, the electric form factor\n$f^{ij}_{Q}(q^{2})$ can still contain nontrivial information about\nneutrino electromagnetic properties. The corresponding electromagnetic characteristics is\ndetermined by the derivative of $f^{ij}_{Q}(q^{2})$ over $q^{2}$ at\n$q^{2}=0$ and is termed neutrino charge radius,\n$\\langle{r}_{ij}^{2}\\rangle\n=-\n6\n\\frac{df^{ij}_{Q}(q^{2})}{dq^{2}} \\\n_{\\mid _ {q^{2}=0}}\n$ (see \\cite{Giunti:2014ixa} for the detailed discussions).\nNote that for a massless neutrino the neutrino charge radius is the only\nelectromagnetic characteristic that can have nonzero value. In the Standard Model\nthe neutrino charge radius and the anapole moment are not defined separately,\nand there is a relation between these two values: $a = - \\frac{\\langle{r}^{2}\\rangle}{6}$.\n\nA neutrino charge radius contributes to the neutrino scattering cross section on electrons and thus\ncan be constrained by the corresponding laboratory experiments \\cite{Bernabeu:2004jr}.\nIn all but one previous studies it was claimed\n that the effect of the neutrino\ncharge radius can be included just as a shift of the vector coupling constant $g_V$\nin the weak\ncontribution to the cross section.\nHowever, as it has been recently demonstrated in \\cite{Kouzakov:2017hbc} within the direct calculations of\nthe elastic neutrino-electron scattering cross section accounting for all possible neutrino electromagnetic characteristics\nand neutrino mixing, this is not the fact. 
The neutrino charge radius dependence of the cross section\nis more complicated and there are, in particular, the dependence on the interference terms of the type\n$g_{V}\\langle{r}_{ij}^{2}\\rangle$ and also on the neutrino mixing.\nThe current constraints on the flavour neutrino charge radius $\\langle{r}_{e,\\mu,\\tau}^{2}\\rangle\\leq 10^{-32} - 10^{-31} \\ cm ^2$\nfrom the scattering experiments differ only by 1 to 2\norders of magnitude from the values $\\langle{r}_{e,\\mu,\\tau}^{2}\\rangle\\leq 10^{-33} \\ cm ^2$ calculated within the minimally extended Standard Model with right-handed neutrinos\n\\cite{Bernabeu:2004jr}. This indicates that the minimally extended Standard Model neutrino charge radii could be experimentally tested in the near future.\n\nNote that there is a need to re-estimate experimental constraints on\n$\\langle{r}_{e,\\mu,\\tau}^{2}\\rangle$ from the scattering experiments following\nnew derivation of the cross section \\cite{Kouzakov:2017hbc} that properly accounts for the interference of the weak and charge radius electromagnetic interactions and also for the neutrino mixing.\n\nRecently constraints on charged radii have been obtained\n\\cite{Caddedu:2018prd} from the analysis of the data on coherent\nelastic neutrino-nucleus scattering obtained in the COHERENT experiment\n\\cite{Akimov:2017ade,Akimov:2018vzs}. In addition to the customary diagonal\ncharge radii $\\langle{r}_{e,\\mu,\\tau}^{2}\\rangle$\nalso the neutrino transition (off-diagonal) charge radii have been constrained\nin \\cite{Caddedu:2018prd} for the first time:\n$\\left(|\\langle r_{\\nu_{e\\mu}}^2\\rangle|,|\\langle r_{\\nu_{e\\tau}}^2\\rangle|,|\\langle r_{\\nu_{\\mu\\tau}}^2\\rangle|\\right)\n< (22,38,27)\\times10^{-32}~{\\rm cm}^2$. Since 2018 these limits are included by the Particle Data Group to Review of Particle Properties (see also \\cite{Zyla:2020zbs}) and also were noted by the Editors' Suggestion as the most important results (PRD Highlights 2018) published in the journal.\n\n\n\nThe work is supported by the Russian Foundation for Basic Research under grant No. 20-52-53022-GFEN-a.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Introduction}\n\nResearch on highly accurate atomic frequency standards and their applications is making fast and steady progress. The quest for ever increased accuracy is advancing hand in hand with advances in quantum physics, with better understanding and manipulation of atomic systems, with exploration of fundamental laws of nature and with the development of important services and infrastructures for science and society. The quest for increased accuracy is also a powerful incentive to innovation in such areas as lasers, laser stabilization, low noise electronics, stable oscillators, low noise detection of optical signals, fiber devices and cold-atom based instrumentation for ground or space applications. Finally, in addition to enhancing existing applications, improved accuracy leads to new applications.\n\nThis article focuses on key achievements and trends of the last 10 years. Over this period of time, the first generation of laser-cooled standards, using the atomic fountain geometry, reached maturity and had large impact on international timekeeping. The typical accuracy of these frequency standards, 2 parts in $10^{16}$, is now permanently accessible both locally and globally. 
At the same time, a new generation of optical clocks showed tremendous and steady improvement, gaining more than two orders of magnitude in a decade. To date, an accuracy of $6.4\\times 10^{-18}$ \\cite{bloom2014optical} was reported. Similarly, major improvements occurred in many other aspects of optical frequency metrology, notably in optical frequency combs and optical fiber links.\n\nIn this article, we will report on developments of the LNE-SYRTE atomic clock ensemble since our 2004 report in the Special Issue of the Comptes Rendus de l'Acad\\'emie des sciences on Fundamental Metrology \\cite{Bize2004}. This work exemplifies many of the above mentioned features of research on highly accurate atomic frequency standards. We will focus on frequency standards and their impact on timescales and timekeeping, on clock comparisons, including optical-to-microwave comparisons with combs, and their applications. Several other aspects of our research are covered by other articles of the Special Issue of the Comptes Rendus de l'Acad\\'emie des Sciences (Volume 16, Issue 5), i.e. the space mission PHARAO\/ACES \\cite{cras2014onACES}, fundamental tests with clocks \\cite{cras2014onFundamentalTests}, development of technologies for space optical clocks \\cite{cras2014onSpaceOpticalClocks} and optical fibers links \\cite{cras2014onFiberLinks1,cras2014onFiberLinks2}.\n\n\n\\section{Atomic fountains} \\label{fountains}\n\n\\begin{figure*}[t]\n\t\\centering\n \\includegraphics[width=1\\textwidth, angle=0]{Fig1_ClockEnsemble1.pdf}\n\t\\caption{Overview of the LNE-SYRTE atomic clock ensemble at the Observatoire de Paris.}\n\t\\label{fig:ClockEnsemble}\n\\end{figure*}\n\nAtomic fountains are the first generation of the laser-cooled atomic frequency standards. They use the fountain geometry where spectroscopy of the clock transition is performed onto a free-falling sample of laser-cooled atoms which is beforehand launched upwards vertically (see, for instance, \\cite{guena2012} and references therein). To date, atomic fountains using cesium provide the most accurate realization of the SI second. One important aspect of the last 10 years was to better understand systematic shifts limiting the accuracy of these devices.\n\n\\textit{Distributed cavity phase shift}-- In atomic fountains, the Ramsey interrogation is realized by the up-going and down-going passages of the atomic sample through the microwave Ramsey cavity. The spatial phase variations of the field inside the cavity, when sampled by the moving atoms, induce a frequency shift, which can be described as a residual Doppler shift. For a long time, there was a lack of both a complete and agreed model for this effect and of experiments to test it. Consequently, this effect was one of the main sources of uncertainty in atomic fountains. In \\cite{li2004,li2010}, a new approach was proposed to compute the cavity phase distribution. We performed measurements of these shifts in FO2-Cs which enabled the first quantitative comparison between theory and experiment \\cite{guena2011}. This study validated the theoretical model and lowered the distributed cavity phase uncertainty for FO2-Cs to $10^{-16}$. 
It also defined a method to determine this uncertainty, which was then adopted in \\cite{li2011,weyers2012} and for other SYRTE fountains.\n\n\\textit{Microwave lensing shift}-- The microwave field inside the Ramsey cavity not only excites the transition between the two internal clock states but also modifies the motion of atomic wave packets, leading to a frequency shift \\cite{Borde2002}\\cite{Wolf2004}. In \\cite{Gibble2006}, a new approach to compute the shift was proposed. It was then used in a complete model of the effect, taking into account all features of the interaction such as atomic velocity and space distributions and detection non-uniformities \\cite{li2011}\\cite{weyers2012}. This same method was applied to LNE-SYRTE fountains, for which shifts are reported in table 2 of \\cite{guena2012}.\n\n\\textit{Blackbody radiation shift}-- In 2004, conflicting measurements and calculations of this shift induced by thermal radiation bathing the atoms were reported. This led us to revisit our early accurate measurements of the Stark coefficient \\cite{simon1998}. Our new measurements at lower electric fields have been found to be in excellent agreement \\cite{rosenbusch2007}. The theory of the Stark shift developed in the 60's turned out to have a sign error for the tensor part. This led to a small change of the blackbody radiation shift correction of $7\\times 10^{-17}$ \\cite{guena2012}. Two independent high accuracy ab initio calculations further agreed with the blackbody radiation shift correction derived from our Stark measurements.\n\n\\textit{Microwave leakage and synchronous phase perturbations}-- Interaction of atoms with unintended residual microwave field and synchronous perturbation of phase of the probing field can produce shifts. We developed microwave synthesizers that can be switched without introducing phase transients and a phase transient analyzer with 1~$\\mu$rad.s$^{-1}$ resolution \\cite{santarelli2009}. Using these tools, we lowered the uncertainty related to these putative frequency shifts to less than $10^{-16}$.\n\nOur approach to deal with other systematic shifts remained as described in our last report in the Comptes Rendus \\cite{Bize2004}. Table \\ref{tab_accuracy} gives, as an example, the accuracy budget of FO2-Cs as of 2014.\n\n\\begin{table}\n\\footnotesize\n\\caption{Typical uncertainty budget of the FO2-Cs primary frequency standard (top). Uncertainty budget of the Sr1 optical lattice clock as of July 2011 (bottom). Tables give the fractional frequency correction and its Type~B uncertainty for each systematic shift, in units of $10^{-16}$. 
The total uncertainty is the quadratic sum of all uncertainties.}\\label{tab_accuracy}\n\\begin{center}\n\\begin{tabular}{lcc}\n\\multicolumn{3}{c}{FO2-Cs}\\\\\n\\hlin\nPhysical origin of the shift & Correction & Uncertainty \\\\\n\\hline\n{\\small Quadratic Zeeman} & $-1919.9$&$0.3$ \\\\\n{\\small Blackbody radiation} & $168.4$&$0.6$\\\\\n{\\small Collisions and cavity pulling} & $201.2$&$1.5$\\\\\n{\\small Distributed cavity phase} & $-0.9$& $1.2$ \\\\\n{\\small Microwave lensing} & $-0.7$&$0.7$ \\\\\n{\\small Spectral purity \\& leakage} & 0 &$0.5$ \\\\\n{\\small Ramsey \\& Rabi pulling} & 0 &$0.1$ \\\\\n{\\small Relativistic effects} & 0 & $0.05$ \\\\\n{\\small Background collisions} & 0 &$1.0$ \\\\\n\\hline\n{\\small Total} & $-1551.9$&$2.5$ \\\\\n\\hline\n\\end{tabular}\n\\hspace{1cm}\n\\begin{tabular}{lcc}\n\\multicolumn{3}{c}{Sr1}\\\\\n\\hlin\nPhysical origin of the shift & Correction & Uncertainty \\\\\n\\hline\n{\\small Quadratic Zeeman} & $19.7$&$0.2$ \\\\\n{\\small Blackbody radiation} & $ 53.8 $&$0.8$\\\\\n{\\small Collisions} & $-0.2 $&$0.5 $\\\\\n{\\small AC Stark shift lattice 1st order} & $-0.5 $& $0.1 $ \\\\\n{\\small AC Stark shift Lattice 2nd order} & $0 $&$ 0.1$ \\\\\n{\\small DC Stark shift} & 0 &$0.01 $ \\\\\n{\\small Line pulling} & 0 &$0.5$ \\\\\n\\hline\n{\\small Total} & $72.8$&$1.0$ \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\vspace{-6mm}\n\\end{table}\n\n\\normalsize\n\\vspace{1mm}\n\nAnother important achievement was the simultaneous operation with $^{87}$Rb and $^{133}$Cs of the dual fountain FO2. This was done by implementing dichroic collimators overlapping 780~nm (for Rb) and 852~nm (for Cs) radiations for all laser beams, and by adopting a time sequence enabling time resolved selective detection of the two atomic species \\cite{guena2010}. Further notable improvement relates to reliability and capability for long term unattended operation. Using 2D magneto-optic traps to load the optical molasses suppressed residual background vapor and enhanced the lifetime of the alkali sources. Also, we developed an automatic data processing system that monitors the status of all fountains, oscillators and internal links in quasi real time. This system allows rapid detection of failures. It also performs automated fountain data processing, taking account of all systematic corrections, and it continuously generates frequency measurements at the nominal uncertainty of the fountains. This capability has had major impact on timekeeping and other applications of atomic fountains (see sect.~\\ref{sec:fundphystests} and \\ref{sec:timekeeping} ).\n\n\\section{Optical lattice clocks}\n\n\\label{sec:OLC}\n\nIn optical lattice clocks (OLCs), a set of neutral cold atoms, dipole-trapped in an optical lattice, are interrogated by an ultra-stable ``clock'' laser. Because they involve probing an optical transition of a large (typically $10^4$) number of tightly confined atoms, they combine an excellent ultimate frequency stability -- only limited by the Quantum Projection Noise (QPN) -- and a high accuracy. Proposed in 2001 \\cite{Katori2001}\\cite{Katori2003a}, OLCs made tremendous progress in the last decade. OLCs have demonstrated unprecedented frequency stabilities of a few $10^{-16}\/\\sqrt{\\tau}$ and a record accuracy below $10^{-17}$ \\cite{bloom2014optical}, overcoming the best ion clocks~\\cite{letargat2013,hinkley_atomic_2013,falke2014,bloom2014optical,ushijima2014cryogenic}. 
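To give a sense of the averaging times such stabilities imply (the numbers below are assumed, purely for illustration), a clock limited by white frequency noise with $\\sigma_y(\\tau)=\\sigma_1\/\\sqrt{\\tau}$ needs $\\tau=(\\sigma_1\/\\sigma_{\\rm target})^2$ seconds of averaging to resolve a fractional frequency difference $\\sigma_{\\rm target}$:
\\begin{verbatim}
def averaging_time(sigma_1s, target):
    # Seconds for a sigma_1s/sqrt(tau) clock to average down to `target`.
    return (sigma_1s / target) ** 2

# Illustrative values only (not measured results from this work):
for target in (1e-17, 1e-18):
    tau = averaging_time(2e-16, target)
    print(f"{target:.0e} reached after {tau:.0f} s ({tau/3600:.1f} h)")
\\end{verbatim}
This quadratic scaling is why reducing the short-term instability of the clock laser directly shortens the time needed to characterize systematic shifts.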
With current improvement in laser stabilization, OLCs are expected to reach a QPN limited stability on the order of $10^{-17}\/\\sqrt{\\tau}$ within a few years, thus enabling even better characterization of systematic effects. OLCs with Sr, Yb, Hg and more prospectively Mg have been demonstrated. Among these atomic species, Sr is currently the most popular choice because of the accessibility of the required laser wavelengths, the possibility to cool Sr down to sub-$\\mu$K temperature using the narrow $^1S_0 \\rightarrow{}^3P_1$ inter-combination line, and the possibility it offers on the control of systematic effects, most notably concerning the high order perturbation by the trapping light and cold collisions.\n\n\\textit{Strontium optical lattice clocks}-- LNE-SYRTE developed two OLCs using strontium atoms. The design of these clocks uses an optical cavity to enhance the optical lattice light, giving access to large trap depths. It enabled us to explore systematic effects induced by the trapping laser light. These effects are specific to OLCs, and we have demonstrated that they can be controlled to better than $10^{-17}$, even with a significant trapping depth~\\cite{Brusch2006a,westergaard2011lattice}, thus validating the concept of OLCs. In particular, we have determined the precise value for the ``magic wavelength'' for which the impact of the trapping light is canceled to first order, and resolved or upper-bounded a number of higher order effects. Because OLCs use a large number of atoms in a tightly confined space, they are subject to a significant density-dependent systematic frequency shift. Some groups have resolved a density shift on the order of $10^{-16}$ with both Sr and Yb. However, the loading technique chosen at LNE-SYRTE leads to a lower atomic density, thus dramatically reducing this effect below $10^{-17}$. The blackbody radiation shift has remained the dominant contribution to the accuracy budget, with an uncertainty around $5\\times 10^{-17}$ for both Sr and Yb, assuming a 1~K uncertainty on the temperature of the environment. Recently, precise measurements of the static polarizability of Yb and Sr, together with carefully crafted environments for the atoms has enabled a few groups to drastically reduce this uncertainty, down to the $10^{-18}$ range.\n\nComparisons between clocks are necessary to confirm their accuracy budget. The first comparisons between remote Sr OLCs were achieved by comparing to cesium clocks (see sect.~\\ref{sec:fundphystests}), but they were soon limited by the cesium accuracy. LNE-SYRTE published the first comparison between two local OLCs that confirm the accuracy budget of the clocks better than the accuracy of the cesium fountains, involving two Sr clocks with a frequency difference smaller than their combined accuracy budgets of $1.5\\times 10^{-16}$~\\cite{letargat2013}. This resolution is obtained after less than one hour of integration. Table~\\ref{tab_accuracy} gives the accuracy budget of one of the Sr OLCs at the time of this comparison.\n\n\n\\textit{Mercury optical lattice clock}-- SYRTE also started the development of a mercury OLC. Hg has the advantage of low sensitivity to blackbody radiation and to electric field (30 times lower than Sr, 15 times lower than Yb). For the $^{199}$Hg isotope, clock levels have a spin 1\/2 for which the tensor light shift sensitivity is absent. Also, because of its high vapor pressure, it does not require an oven and enables the use of a 2D magneto-optic trap as the initial source of atoms. 
Hg is also interesting for fundamental physics and atomic physics, because of a quite high sensitivity to a variation of $\\alpha$ (see sect.~\\ref{sec:fundphystests}) and its 7 natural isotopes. The main challenge of using Hg lies in the need for deep UV laser sources. When the potential of Hg for a highly accurate optical lattice clock arose, Hg had never been laser cooled.\n\nIn the last few years, we made all the steps leading to the demonstration, for the first time, of a Hg lattice clock. Laser cooling on the 254 nm $^1S_0 \\rightarrow{} ^3P_1$ intercombination transition was demonstrated and studied \\cite{Petersen2008b}\\cite{Hachisu2008}\\cite{McFerran2010}. A clock laser system with thermal noise limited instability of $4\\times 10^{-16}$ was developed \\cite{Millo2009b}\\cite{Dawkins2010}. We performed the first direct laser spectroscopy of the clock transition, firstly on atoms free-falling from a magneto-optic trap \\cite{Petersen2008a} and secondly on lattice-bound atoms, with linewidth down to 11~Hz (at 265.6~nm or 1128~THz). We performed the first experimental determination of the ``magic wavelength'' \\cite{Yi2011}, for which our best value is $362.5697 \\pm 0.0011$~nm. We performed initial measurements of the absolute frequency of the $^{199}$Hg clock transition down to an uncertainty of 5.7 parts in $10^{15}$ \\cite{McFerran2012}.\n\nSo far, the advancement of the Hg optical lattice clock was mainly hindered by the poor reliability of the 254~nm laser-cooling and by the modest lattice trap depth. Recent work enabled large improvements to overcome these two limitations, opening the way to in-depth systematic studies and higher accuracies.\n\n\n\n\n\\section{Optical frequency combs}\n{}\nThe development of optical frequency standards of the kind presented in the previous section is aimed at producing a laser electromagnetic field of extremely stable and accurate frequency. To realize a complete metrological chain that allows comparison with other standards operating at different wavelengths in the optical domain or in the microwave domain (such as primary frequency standards), it is necessary to use a specific device. The method of choice nowadays is the optical frequency comb based on a mode-locked femtosecond laser, which provides a phase-coherent link spanning across the optical and microwave domains. Recently, SYRTE focused on a comb technology based on erbium-doped fiber lasers, whose foremost asset is the capability to function for months with very limited maintenance while performing state-of-the-art measurements. Tight phase locking (bandwidth $\\sim 1\\,$MHz) of such a comb onto a 1.5~$\\mu$m ultra-stable laser sets it in the narrow-linewidth regime where beatnotes with other ultra-stable light at other wavelengths are typically $\\sim1\\,$Hz linewidth. This provides high-performance simultaneous measurements of the various ultra-stable optical frequency references at SYRTE in the visible and near infrared domain.\n\nThe comb also behaves like a frequency divider: the repetition rate of the comb, $f_{\\rm rep}$, results from the coherent division of the 1.5$\\,\\mu$m laser frequency $\\nu_{\\rm L}$. By photo-detecting the train of pulses and filtering out one specific harmonic of the repetition rate, one can generate a microwave signal whose phase noise is that of the cw optical reference, divided by the large frequency ratio (typically 20000) between a 1.5$\\,\\mu$m wavelength cw laser and a $10\\,$GHz microwave signal.
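As a back-of-the-envelope sketch of this division (all numbers assumed for illustration), ideal division of the optical carrier by a factor $N$ reduces the phase noise power spectral density by $20\\log_{10}N$:
\\begin{verbatim}
from math import log10

nu_opt = 200e12   # assumed optical carrier near 1.5 um, in Hz
f_uw   = 10e9     # 10 GHz microwave output
N = nu_opt / f_uw # division ratio, roughly 2 x 10^4
print(f"N = {N:.0f}, ideal phase-noise reduction = {20*log10(N):.0f} dB")
# An optical reference at about -15 dBc/Hz (1 Hz offset) would thus ideally
# yield roughly -100 dBc/Hz on the 10 GHz carrier.
\\end{verbatim}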
$10\\,$GHz signals with phase noise of -100\\,dBc\/Hz at 1\\,Hz Fourier frequency and -140\\,dBc\/Hz white noise plateau are now straightforward to produce. This low phase noise level is comparable with that of cryogenic sapphire oscillators and is thus sufficient to operate state-of-the-art microwave atomic fountains (sect.~\\ref{fountains}) with a short term stability limited only by atomic quantum projection noise. We have realized proof-of-principle experiments of such a scheme \\cite{MilloAPL2009} and are now progressing toward implementing it in an operational system. We further demonstrated several advanced techniques \\cite{ZhangAPB2012, HabouchaOL2011, ZhangOL2014} to reduce the imperfection of the frequency division and photo-detection processes. We have shown that it is now becoming possible to generate microwave signals with phase noise as low or lower than any other technology for a large range of Fourier frequencies. Applications of such extremely low noise microwave signals can be found in RADAR (civil and military) as well as very long baseline interferometry.\n\nFinally, the comb-based transfer of spectral purity between different wavelengths recently led to exciting results. This application is crucial for the future development of optical lattice clocks, whose short term stability is currently limited by the spectral purity of the ultra-stable clock laser probing the atomic transition. Several competing technologies are being explored, notably at SYRTE, to improve the performance of these lasers, all of them very challenging, some of them wavelength-specific. Being able to utilize the performance of an extremely stable laser at a given wavelength and to distribute its performance to any wavelength within reach of the frequency comb is an important milestone for the future developments of optical lattice clocks. We demonstrated such transfer from a master to a slave laser with an added instability of no more than a few $10^{-18}$ at $1\\,$s (see fig. \\ref{fig:TSP}, right), well within the requirements expected for the next several years \\cite{NicolodiNP2014}.\n\n\\begin{figure*}[ht]\n\t\\centering\n \\includegraphics[width=0.9\\textwidth, angle=0]{Fig2_FrequencyComb.pdf}\n\t\\caption{Principle of the transfer of spectral purity (left): the optical beatnotes of the comb with a master laser on the one hand, and with a slave laser on the other hand, are rescaled and mixed before being compared to a stable synthesizer. The feedback on the offset-locked slave transfers the spectral purity of the master to the slave laser. The modified Allan deviation (right) of the noise added by the transfer itself has a level of only $2\\times10^{-18}$ at $1\\,$s, and averages down to $2 \\times 10^{-20}$ after $1000\\,$s.}\n\t\\label{fig:TSP}\n\\end{figure*}\n\n\\section{Fundamental physics tests}\\label{sec:fundphystests}\n\nOne exciting scientific application of atomic clocks with extreme uncertainties is to contribute to testing fundamental physical laws and searching for physics beyond the Standard Model of particle physics. The frequency of an atomic transition relates to parameters of fundamental interactions (strong interaction, electro-weak interaction), such as the fine-structure constant $\\alpha$, and to fundamental properties of particles like for instance the electron mass, $m_e$. 
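In practice, a measured fractional drift of a frequency ratio constrains the constants through sensitivity coefficients, $d\\ln(\\nu_A\/\\nu_B)\/dt=\\sum_i k_i\\, d\\ln g_i\/dt$, and combining several ratios amounts to a weighted least-squares inversion. A minimal sketch follows; the coefficients, drifts and uncertainties are placeholders, not the values used in this work:
\\begin{verbatim}
import numpy as np

# Rows: sensitivity of each measured ratio to (alpha, mu, m_q/Lambda_QCD).
# All numbers are illustrative placeholders.
K = np.array([[0.49, -1.0, -0.02],
              [2.0,   0.0,  0.0 ],
              [2.8,  -1.0, -0.05]])
d     = np.array([-1.2e-16, 0.5e-17, -2.0e-16])  # measured drifts, per year
sigma = np.array([ 6.0e-17, 1.0e-17,  2.0e-16])  # 1-sigma uncertainties

W     = np.diag(1.0 / sigma**2)                  # inverse-variance weights
cov   = np.linalg.inv(K.T @ W @ K)
g_dot = cov @ K.T @ W @ d                        # constrained d ln(g_i)/dt
print(g_dot, np.sqrt(np.diag(cov)))
\\end{verbatim}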
Repeated highly accurate atomic clock comparisons can be used to look for a putative variation with time or with gravitational potential of atomic frequency ratios, and, via suitable atomic structure calculations, of natural constants. Clocks provide laboratory tests, independent of any cosmological model, that constrain alternative theories of gravity and quantum mechanics, thereby contributing to the quest for a unified theory of the three fundamental interactions.\n\n\n\\label{RbCsFitLin}\n\\textit{$^{87}$Rb vs $^{133}$Cs comparisons}-- Improvements of atomic fountains described in sect.~\\ref{fountains} enabled major enhancement in the number and in the quality of Rb\/Cs hyperfine frequency ratio measurements since our last report in the Comptes Rendus \\cite{Bize2004}. Measurements have been performed almost continuously since 2009. Fig.\\ref{Graphdrift}, top shows the temporal record of the variations of this ratio. Measurements extending over 14~years give stringent measurements of a putative variation with time and gravity\nof the Rb\/Cs ratio, as reported in \\cite{guena2012b}. Taking into account out most recent data, we get $d \\ln(\\nu_{\\mathrm{Rb}}\/\\nu_{\\mathrm{Cs}})\/dt=(-11.6\\pm 6.1 )\\times 10^{-17}$~yr$^{-1}$ for the time variation. For the variation scaled to the annual change of the Sun gravitational potential on Earth $U$, we get $c^2 d \\ln(\\nu_{\\mathrm{Rb}}\/\\nu_{\\mathrm{Cs}})\/dU=(7.4 \\pm 6.5)\\times10^{-7}$, which provides a differential redshift test between Rb and Cs twice more stringent than \\cite{peil2013}.\n\n\\begin{figure}\n\t\\begin{center}\n\n\t \\includegraphics[width=0.45\\textwidth]{Fig3_RbCs.pdf}\n\n\t \\vspace{3mm}\n\n \\includegraphics[width=0.48\\textwidth]{Fig3_SrCs.pdf}\n \n\t\\end{center}\n\n\t \\vspace{1mm}\n\n\\footnotesize\n\\begin{center}\n\\begin{tabular}{p{28mm} ccc}\n\\hline \\hline\n & $\\ln(\\alpha)$ &$\\ln(\\mu$)&$\\ln(m_{q}\/\\Lambda_{\\mathrm{QCD}})$ \\\\\n\\hline\n$d\/dt~ (\\times 10^{-16}\\mathrm{yr}^{-1})$& $-0.26\\pm 0.24$ &$1.1\\pm 1.4$& $59\\pm 30$\\\\\n$c^{2}d\/dU ~(\\times 10^{-6})$ & $0.27 \\pm 0.46$ & $-0.2 \\pm 2.1$ & $-2.9\\pm 5.6$ \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\normalsize\n\n\\caption{Top: Temporal record of fractional variations of the $\\nu_{\\mathrm{Rb}}\/\\nu_{\\mathrm{Cs}}$ hyperfine frequency ratio. The error bars are the total 1~$\\sigma$ uncertainties, dominated by the systematic uncertainties. The solid red line is the weighted fit to a line with inverse quadratic weighting. The origin of the vertical axis corresponds to the $^{87}$Rb secondary representation of the SI second recommended in 2012, with a recommended uncertainty $1.2 \\times 10^{-15}$ (grey area) \\cite{CCTF2012}. Middle: International comparisons of the $\\nu_{\\mathrm{Sr}}\/\\nu_{\\mathrm{Cs}}$ frequency ratio. A fit shows an upper bound on a drift of this ratio, as well as on a variation synchronized with the Earth's orbit around the Sun. Bottom: Results of the global analysis of accurate experimental determinations of variations of atomic frequency ratios, available as of October 2014. The table gives constraints on temporal variations and on couplings to gravitational potential for the three fundamental constants: $\\alpha$, $\\mu=m_{e}\/m_{p}$ and $m_{q}\/\\Lambda_{\\mathrm{QCD}}$.} \\label{Graphdrift}\n\\end{figure}\n\n\n\\textit{$^{87}$Sr vs $^{133}$Cs comparisons}-- Frequency ratios between optical and microwave clocks offer a different sensitivity to natural constants than hyperfine frequency ratios. 
International absolute frequency measurements of strontium optical lattice clocks against Cs fountain primary frequency standards over a decade (fig.\\ref{Graphdrift}, Middle) give the linear drift with time of the $\\nu_{\\mathrm{Sr}}\/\\nu_{\\mathrm{Cs}}$ ratio: $d \\ln(\\nu_{\\mathrm{Sr}}\/\\nu_{\\mathrm{Cs}})\/dt=(-2.3 \\pm 1.8)\\times 10^{-16}$~yr$^{-1}$, and of the variation with the gravitational potential: $c^2 d \\ln(\\nu_{\\mathrm{Sr}}\/\\nu_{\\mathrm{Cs}})\/dU=(-1.3 \\pm 1.5)\\times 10^{-6}$. Because the accuracy of these measurements has improved considerably over time, these bounds will be significantly improved by future measurements.\n\n\\textit{Combining with other comparisons}-- Each pair of atoms has a different sensitivity to variations of three fundamental constants $\\alpha$, $\\mu=m_e\/m_p$ and $m_{q}\/\\Lambda_{\\mathrm{QCD}}$. To set independent limits to variations of these three constants with time, one can perform weighted least-squares fit to all accurate experimental determinations of variation with time of atomic frequency ratios, available as of October 2014. This includes the above Rb\/Cs result (see sect.~\\ref{RbCsFitLin}), optical frequency measurements of H(1S-2S) \\cite{fischer2004}, Yb$^{+}$ \\cite{tamm2014}, Hg$^{+}$ \\cite{fortier2007}, Dy \\cite{leefer2013} and Sr (see above, and \\cite{letargat2013} and references therein) against Cs, and the optical-to-optical ion clock frequency ratio $\\mathrm{Al}^{+}\/\\mathrm{Hg}^{+}$ \\cite{rosenband2008}.\nThe fit yields independent constraints for the three constants given in the first row of the Table in fig.~\\ref{Graphdrift}. The constraint relative to $\\alpha$ is mainly determined by the $\\mathrm{Al}^{+}\/\\mathrm{Hg}^{+}$ comparison. In this fit, only the Rb\/Cs comparison disentangles variations of $\\mu$ and of $m_q\/\\Lambda_{\\mathrm{QCD}}$. It is therefore essential to constrain $m_q\/\\Lambda_{\\mathrm{QCD}}$. This stems from the fact that optical frequency measurements are all performed against the Cs hyperfine frequency, except $\\mathrm{Al}^{+}\/\\mathrm{Hg}^{+}$.\n\nSimilarly, we perform a global analysis for the variation with the gravitational potential exploiting all available comparisons as of October 2014 \\cite{peil2013, fortier2007, leefer2013} and the above Rb\/Cs and Sr\/Cs results.\nThe least-squares fit to these results yields independent constraints for the three couplings to gravity given in the second row of the Table in fig.~ \\ref{Graphdrift}.\n\nThe number of atomic systems contributing to improve these tests will continue to grow, e.g. with $^{88}$Sr$^+$ \\cite{Madej2012}\\cite{Barwood2014} and $^{171}$Yb \\cite{Lemke2009a}, thanks to the steady efforts of many laboratories worldwide in the field of optical frequency metrology.\n\n\n\n\\section{Advanced timekeeping}\\label{sec:timekeeping}\n\n\\textit{TAI calibration with atomic fountains}-- The International Atomic Time (TAI) which is based on approximately 400 atomic clocks, now gets its accuracy from some ten atomic fountain clocks worldwide (see e.g. \\cite{cras2014Petit}). In the last decade, the number of calibrations of TAI with atomic fountains has grown from approximately 10 per year to 4 to 6 per month in 2014, while simultaneously the accuracy improved from several $10^{-15}$ to a few $10^{-16}$, improving TAI a lot. Combining a tremendous number of monthly calibrations and a high accuracy, LNE-SYRTE atomic fountains are providing the largest contribution to the accuracy of TAI. 
Between 2007 and August 2014, they provided 197 calibrations out of a total of 407 calibrations worldwide, a weight of nearly 50\\%. fig.~\\ref{TAI_UTC}, Top shows these calibrations as published in \\emph{Circular T}, and the SI second (red line) which is the average over all primary calibrations computed by the BIPM on a monthly basis. This illustrates how research on laser-cooled atomic fountain started 25 years ago led to improving an important service and infrastructure for science and society.\n\n\n\n\n\\textit{UTC(OP): timescales using atomic fountain clocks}-- The UTC(OP) timescale, elaborated at SYRTE, in Observatoire de Paris, is the real time realization of UTC for France. It is a continuously operated time reference used for multiple purposes:\ndefinition of legal time disseminated in France, reference provided to French laboratories for\nsynchronization applications, pivot for French contributions to TAI, test of advanced time transfer\nmethods, link to UTC of the EGNOS system, contribution to the development of GALILEO, time reference for the ground-segment of the PHARAO\/ACES space mission \\cite{laurent2006, cacciapuoti2007,cacciapuoti2009}.\n\nProgress in the accuracy and most importantly in the reliability of SYRTE atomic fountains (see sect.~\\ref{fountains}) enabled a new implementation of UTC(OP). A new UTC(OP) algorithm based on a hydrogen maser steered by the atomic fountains was developed and\nimplemented in October 2012. The maser is predictable enough to reach a stability of $\\sim 10^{-15}$. The\natomic fountains allow the maser frequency to be calibrated with an uncertainty in the $10^{-16}$ range \\cite{guena2012, guena2014}. These features are sufficient to maintain a phase deviation of a few ns over $1-\n2$ months, which corresponds to the delay of publication of the BIPM \\emph{Circular T}.\nThis timescale is as autonomous and independent as possible, except a small long term steering to remain\nclose to UTC, and does not rely on any other timescale available in real time such as GPS time or\nother UTC(k). A timescale with these characteristics provides a powerful tool to understand current limits and eventually improve international timekeeping.\n\nPractically, UTC(OP) is realized using a microphase stepper fed by the reference maser. A frequency\ncorrection is updated every day to compensate the maser frequency and\nmaintain UTC(OP) close to UTC. This correction is the sum of two terms. The main term corresponds to the current frequency of the maser as measured\nby the fountains. The value is estimated with a linear extrapolation of the data covering the past 20~days\nto remain robust against possible interruptions of data provision or of the automatic data processing. The\nsecond term is a fine steering to maintain UTC(OP) close to UTC, compensating the\nfrequency and phase offset between UTC(OP) and UTC. It is updated monthly at the BIPM \\emph{Circular~T}\npublication. The steering correction is usually of the order of $10^{-15}$ or below.\n\nFigure \\ref{TAI_UTC}, bottom presents the comparison of three UTC(k) to UTC as published in \\emph{Circular~T}\nsince the implementation of the new UTC(OP). Over this period, UTC(OP) is one of the 3 best real time\nrealizations of UTC \\cite{rovera2013, abgrall2014}, with UTC(PTB), the pivot of time\ntransfers for international contributions to TAI, and UTC(USNO), the laboratory providing the largest number of clock data included in EAL computation. 
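For concreteness, a minimal sketch of the daily steering computation described above is given here; the data layout, variable names and sign convention are assumptions for illustration, not the operational implementation:
\\begin{verbatim}
import numpy as np

def daily_correction(mjd, y_maser, fine_steering, today):
    # mjd, y_maser  : past fountain measurements of the maser fractional
    #                 frequency offset, tagged by Modified Julian Date
    # fine_steering : monthly term keeping UTC(OP) close to UTC (Circular T)
    recent = mjd >= today - 20                   # use the past 20 days of data
    slope, intercept = np.polyfit(mjd[recent], y_maser[recent], 1)
    y_today = slope * today + intercept          # linear extrapolation to today
    return -y_today + fine_steering              # cancel maser offset, steer
\\end{verbatim}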
Departure between UTC(OP) and UTC remains well below 10~ns, with a rms value less than 3~ns. This is an improvement of about a factor of 5 compared to the previous realization method of the timescale. On-going instrumental upgrades shall further improve the short term stability of the timescale and the robustness of the system.\n\n\n\\begin{figure}[htb]\n\n\\includegraphics[width=0.45\\textwidth]{Fig4_GTAI4.pdf}\n\\includegraphics[width=0.45\\textwidth]{Fig4_GUTCk5.pdf}\n\\caption{Top: Calibrations of TAI by the atomic fountain PFSs. Filled symbols: contributions of SYRTE fountains. Solid red line: the SI. Bottom: Comparisons of 3 UTC(k) to UTC: UTC(OP), UTC(PTB) and UTC(USNO). The inset shows the significant improvement achieved with the new method for generating UTC(OP) implemented at MJD 56218, compared to the previous one using a commercial Cs clock manually steered towards UTC.}\n\\label{TAI_UTC}\n\\end{figure}\n\n\\section{Toward a redefinition of the SI second}\n{}\n\nSeveral optical frequency standards are now largely surpassing Cs atomic fountains which realize the second of the international system of units (SI) and define the accuracy of TAI. This opens the inviting prospect of a redefinition the SI second. In 2001, anticipating this situation, the Consultative Committee for Time and Frequency (CCTF) of the Comit\\'e International des Poids et Mesures (CIPM) recommended that a list of Secondary Representations of the Second be established. Secondary Representations of the SI Second (SRS) are transitions which are used to realize frequency standards with excellent uncertainties, and which are measured in the SI system with accuracies close to the limit of Cs fountains. They are part of the broader list of recommended values of standard frequencies produced and maintained by the CCL-CCTF Working Group and adopted by the CIPM. Producing and maintaining these lists of recommended values is a vehicle to keep track of measurements providing the most stringent connections between the optical domain and the SI second, and to verify the level of consistency between these measurements. This is an important task to prepare for a possible redefinition of the SI second.\n\n\n\\textit{Contributions to the list of recommended values}--\nLNE-SYRTE provided several high accuracy absolute frequency measurements which contributed to the list of recommended values. The $^{87}$Rb hyperfine transition was measured repeatedly against Cs fountains, as already shown in fig.~\\ref{Graphdrift}. This transition became the first Secondary Representation of the Second proposed by the CCTF in 2004, based our early measurements, and adopted by the CIPM in 2006. After further significant progress visible in fig.~\\ref{Graphdrift}, the recommended value was revised by the 2012 CCTF and adopted in 2013. The origin of the vertical scale of the graph is the recommended value of 2012 and the gray area represents the recommended uncertainty. The weighted average of all data points (green line) gives $(2.15\\pm 1.48)\\times 10^{-16}$ is consistent with zero within the smallest overall uncertainty of the measurements ($4.4\\times 10^{-16}$). LNE-SYRTE also provided absolute frequency measurements that contributed to the establishment of the recommended values for the $^1S_0\\rightarrow{}^3P_0$ of $^{87}$Sr \\cite{Baillard2007b}, $^{88}$Sr \\cite{Baillard2007} and $^{199}$Hg \\cite{McFerran2012}. 
Using the transportable Cs fountain primary standard FOM, LNE-SYRTE also contributed to the establishment of the recommended value for H(1S-2S) (measured at the Max Planck Institut f\\\"ur Quantenoptik, Garching, Germany) \\cite{parthey2011} and $^{40}$Ca$^{+}$ (at the University of Innsbruck, Austria) \\cite{chwalla2009}.\n\n$^{87}$Sr is currently the most widespread optical frequency standard. For this reason, a large number of groups measured the absolute frequency of $^{87}$Sr against Cs with a remarkable degree of consistency, as can be seen in fig.~\\ref{Graphdrift}, middle. These measurements led the 2006 CCTF to recommend $^{87}$Sr as a Secondary Representation of the SI second. The recommended value was updated by the 2012 CCTF based on measurements from 5 institutes. It has a recommended uncertainty of $1\\times 10^{-15}$. More recently, LNE-SYRTE reported a measurement of the $^{87}$Sr clock transition with an uncertainty of $3.1\\times 10^{-16}$ limited by the accuracy of atomic fountains \\cite{letargat2013}. This is the most accurate absolute measurement to date of any atomic frequency. One of the key factors for the measurement is the record stability between an optical and a microwave clock: $4.1\\times 10^{-14}\/\\sqrt{\\tau}$ against Cs (and $2.8\\times 10^{-14}\/\\sqrt{\\tau}$ against Rb). In 2014, PTB reported another absolute frequency measurement with an uncertainty of $3.9\\times 10^{-16}$ \\cite{falke2014}. These last two measurements are in excellent agreement.\n\n\n\n\n\\textit{Using a Secondary Representation of the Second to calibrate TAI}-- One major application of primary frequency standards is to calibrate and steer the scale interval of the widely used International Atomic Time TAI. It is important to anticipate how a possible redefinition of the second would impact the elaboration of TAI. We used the FO2-Rb fountain to investigate how a Secondary Representation of the Second could participate in TAI. Calibrations of the frequency of our reference hydrogen maser were produced with FO2-Rb, in the same way that absolute calibrations are done with primary frequency standards. These data were then submitted to the BIPM and to the Working Group on Primary Frequency Standards. Following this submission, the BIPM and the Working Group defined how frequency standards based on Secondary Representations will be handled by the BIPM and how they will be included in the \\textit{Circular~T}. The Working Group was renamed Working Group on Primary and Secondary Frequency Standards and it was decided that calibrations produced by LNE-SYRTE with FO2-Rb could be included in \\textit{Circular~T} and, since July 2013, contribute to steering TAI. This was the first time that a transition other than the Cs hyperfine transition was used to steer TAI \\cite{guena2014}.\n\n\\textit{Absolute frequency measurement against the TAI ensemble}-- More than 40 formal calibrations of TAI with FO2-Rb have been sent, processed by the BIPM and published in \\textit{Circular~T}. These data can be used to measure the frequency of the Rb hyperfine transition directly against the second as realized by the TAI ensemble. This can be done with a statistical uncertainty of 1 part in $10^{16}$, and therefore at the accuracy limit of primary frequency standards defining the scale interval of TAI \\cite{guena2014}. This illustrates how TAI provides worldwide access to the accuracy of Cs fountains.
This also shows how recommended values of Secondary Representation of the Second based on optical transitions could be checked against the SI second as realized by the TAI ensemble.\n\n\n\n\\section{Prospects}\n\nIn the future, highly accurate atomic clocks and their applications will keep improving at a high pace. An important milestone in the field will be the simultaneous availability of advanced timescales, of the new generation of optical clocks and of the means to compared them remotely at unprecedented levels of uncertainty. In the coming decade, the ACES mission will allow ground-to-space comparisons to the $10^{-16}$ level and ground-to-ground comparisons to the mid $10^{-17}$ level \\cite{cras2014onACES}. Optical fiber links will allow comparisons of the most accurate optical clocks at their limit: $10^{-18}$ or better \\cite{cras2014onFiberLinks1,cras2014onFiberLinks2}. We can confidently predict major improvements in all applications of highly accurate atomic clocks. Availability of clocks and clock comparisons at the $10^{-18}$ level can further enable new applications. Clock comparisons determine Einstein's gravitational redshift between the 2 remote clock locations. For a clock at the surface of the Earth, $10^{-18}$ corresponds to an uncertainty of 1~cm in height-above-geoid, making the idea of clock-based geodesy realistic and potentially useful. Highly accurate clocks could become a new type of sensors for applications in Earth science, illustrating once again the fertilizing power of the quest for ever increased accuracy.\n\n\n\n\\section*{Acknowledgements}\n\nSYRTE is UMR CNRS 8630 between Centre National de la Recherche Scientifique (CNRS), Universit\\'e Pierre et Marie Curie (UPMC) and Observatoire de Paris. LNE, Laboratoire National de M\\'etrologie et d'Essais, is the French National Metrology Institute.\nSYRTE is a member of IFRAF, of the nanoK network of the R\\'egion \\^Ile de France and of the FIRST-TF LabeX. We acknowledge the large number of contributions of SYRTE technical services. This work is supported by LNE, CNRS, UPMC, Observatoire de Paris, IFRAF, nanoK, Ville de Paris, CNES, DGA, ERC AdOC, EMRP JRP SIB55 ITOC, EMRP JRP EXL01 QESOCAS. We are grateful to the University of Western Australia and to M.E.~Tobar for the long-lasting collaboration which gives us access to the cryogenic sapphire oscillator used in the LNE-SYRTE ultra-stable reference.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalztti b/data_all_eng_slimpj/shuffled/split2/finalztti new file mode 100644 index 0000000000000000000000000000000000000000..1a2c66339371df1a62b293c9e21675699e10ea7b --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalztti @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\label{sec-introduction}\n\nRobot autonomy offers great promise as a tool by which we can enhance, or restore, the natural abilities of a human partner. For example, in the fields of assistive and rehabilitative medicine, devices such as exoskeletons and powered wheelchairs can be used to assist a human who has severely diminished motor capabilities. However, many assistive devices can be difficult to control. This can be due to the inherent complexity of the system, the required fidelity in the control signal, or the physical limitations of the human partner. We can, therefore, further improve the efficacy of these devices by offloading challenging aspects of the control problem to an autonomous partner. 
In doing so, the human operator is freed to focus their mental and physical capacities on important high-level tasks like path planning and interaction with the environment. This idea forms the basis of \\textit{shared control} (see Figure~\\ref{fig-shared-control}), a paradigm that aims to produce joint human-machine systems that are more capable than either the human or machine on their own. \n\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[width=0.95\\hsize]{shared_control_new.png}\n\t\\caption{Pictorial representation of a shared control paradigm. Both the human and autonomy are capable of controlling the mechanical system, and a dynamic control allocation algorithm selects which agent is in control at any given moment.}\n\t\\label{fig-shared-control}\n\\end{figure}\n\nA primary challenge that researchers and engineers face when developing shared control paradigms for generic human-machine systems is a lack of \\textit{a priori} knowledge of the human and robot partners. This issue is compounded by the fact that, in the real world, many users may operate the same mechanical device. It is therefore necessary to consider solutions that generalize to a variety of potential human and machine partners. In this work, we propose a data-driven methodology that learns all relevant information about how a given human and machine pair interact directly from observation. We then integrate the learned model of the joint system into a single shared control paradigm. We refer to this idea as \\textit{model-based shared control}.\n\nIn this work, we learn a model of the joint human-machine system through an approximation to the Koopman operator (\\cite{koopman1931hamiltonian}), though any machine learning approach could be used. However, the Koopman operator is chosen specifically for this work as it has previously proven useful in human-in-the-loop systems~(\\cite{broad2017learning}) and can be computed efficiently~(\\cite{williams2015kernel}). This model is trained on observation data collected during demonstration of the human and machine interacting and therefore describes both the human's input to the system, and the robot's response to the human input and system state. We can then integrate the portion of the learned model that specifically describes the system and control dynamics of the mechanical device into an optimal control algorithm to produce autonomous policies. Finally, the input provided by the human and autonomous partners are integrated via a geometric signal filter to provide real-time, dynamic shared control of unknown systems. \n\nWe validate our thesis that modeling the joint human-machine system is sufficient for the purpose of automating assistance with two human subjects studies consisting of 32 total participants. The first study imposes a linear constraint on the modeling and control algorithms, while the second study relaxes these constraints to evaluate the more general, nonlinear case. The linear variant of our proposed algorithm is used to validate the efficacy of our shared control paradigm and was first presented in~\\cite{broad2017learning}. The nonlinear variant extends these results to a wider class of human-machine systems. The results of the two studies demonstrate that the nonlinear variant has a greater impact on overall task performance than the linear methods. 
We also find that our modeling technique is generalizable across users with results that suggest that individualizing the model offline, based on a user's own data, does not affect the ability to learn a useful representation of the dynamical system. Finally, we evaluate the efficacy of our shared control paradigm in an online learning scenario, demonstrating the sample efficiency of the model-based shared control paradigm.\n\nWe provide background and related work in Section~\\ref{sec-background-and-related-work}. We then define model-based shared control in Section~\\ref{sec-model-based-shared-control}. In Section~\\ref{sec-experimental-validation} we describe the human subjects study we perform and detail the results in Section~\\ref{sec-results}. We describe important takeaways in Section~\\ref{sec-discussion} and conclude in Section~\\ref{sec-conclusion}.\n\n\\section{Background and Related Work}\n\\label{sec-background-and-related-work}\n\nThis section presents background and related work in the shared control literature for human-machine systems. We also identify alternative methods of autonomous policy generation for shared control, and provide a detailed background on the Koopman operator (\\cite{koopman1931hamiltonian}) with a particular focus on its use in learning system dynamics.\n\n\\subsection{Shared Control}\n\\label{sec-background-shared-control}\n\nIn this work, we explore the question of how automation can be used to adjust to, and account for, the specific capabilities of a human partner. In particular, we aim to develop a methodology that allows us to \\textit{dynamically adjust} the amount of control authority given to the robot and human partners~(\\cite{hoeniger1998dynamically, hoffman2004collaboration}). If done intelligently, and with appropriate knowledge of the individual capabilities of each team member, we can improve the overall efficiency, stability and safety of the joint system~(\\cite{lasota2017survey}). Approaches to shared control range from pre-defined, discretely adjustable methods~(\\cite{kortenkamp2000adjustable}) to probabilistic models~(\\cite{javdani2015shared}) to policy blending~(\\cite{dragan2013policy}). In addition to blending in the original control signal space, shared control has been researched through haptic control~(\\cite{nudehi2005shared}) and compliant control~(\\cite{kim1992force}).\n\nIn this work, we allocate control using a filter~(\\cite{tzorakoleftherakis2015controllers}) described more thoroughly in Section~\\ref{sub-sec-control-allocation}. Our control allocation strategy is similar in practice to \\textit{virtual fixtures} and \\textit{virtual guides}, techniques that are common in the haptics literature~(\\cite{forsyth2005predictive, griffiths2005sharing}). In particular, virtual fixtures and guides are techniques by which autonomously generated forces are \\textit{added to the control of a system} to limit movement into undesriable areas and\/or influence motion towards an optimal strategy~(\\cite{abbink2012haptic}). These ideas have been explored most commonly in association with robotic telemanipulation~(\\cite{abbott2007haptic}), including applications like robotic surgery~(\\cite{marayong2004speed}) and robot-assisted therapy~(\\cite{noohi2016model}). A key difference between these approaches and our own is that our control allocation method does not incorporate additional information from the autonomous partner into the control loop. 
Instead, the autonomous partner simply rejects input from the operator that does not meet the proposed criteria. Our approach therefore requires no \\textit{a priori} information about (or ability to sense) the environment, and no information about the system dynamics. In contrast, virtual fixtures\/guides require information about (or the ability to detect) hard constraints in the environment, and knowledge of the system dynamics. This information is then used to compute forces---the virtual fixtures---that counteract user-generated forces that are defined by as dangerous. The approach in this paper does not have similar \\textit{a priori} information requirements, suggesting our approach can more easily be incorporated into novel human-machine systems. An important benefit of the methods proposed in the virtual fixtures\/guide literature is that the techniques often provide an explicit guarantee of safety for the joint human-machine system. Our approach can be extended to provide the same guarantees by incorporating information about (or the ability to sense) the environment and using control barrier functions to implement safety requirements~(\\cite{broad2018operation}).\n\nThe effects of shared control (SC) have been explored in numerous fields in which the addition of a robot partner could benefit a human operator. For example, in assistive and rehabilitation robotics, researchers have explored the effects of shared control on teleoperation of smart wheelchairs~(\\cite{erdogan2017effect, trieu2008shared}) and robotic manipulators~(\\cite{kim2006continuous}). Similarly, researchers have explored shared control as it applies to the teleoperation of larger mobile robots and human-machine systems, such as cars~(\\cite{dewinter11smc}) and aircraft~(\\cite{matni08acc}). When dealing with systems of this size, safety is often a primary concern. \n\nThe above works are conceptually similar to our own as they use automation to facilitate control of a robot by a human partner. However, in this work, we do not augment the user's control based on an explicit model of the user. Instead, we use observations of the user demonstrations to build a \\textit{model of the joint human-robot system}. The effect of the human partner on the shared control system is implicitly encoded in the model learned from their interactions.\n\n\\subsection{Model-Based Reinforcement Learning}\n\\label{sec-background-model-based}\n\nModel-based shared control (MbSC) is a paradigm that generalizes shared control to generic human-machine partners~(\\cite{broad2017learning}). That is, MbSC assumes no \\textit{a priori} knowledge of either partner and instead uses data-driven techniques to learn models of the human and\/or robot partner(s) from observation. In addition to providing a quantitative understanding of each partner, these models can be used to generate autonomous control policies by integrating the learned system and control dynamics into an optimal control framework. \n\nModel-based shared control is therefore highly related to model-based reinforcement learning (MbRL), a paradigm that explicitly learns a model of the system dynamics in addition to learning an effective control policy. MbSC extends MbRL to systems that integrate control from various sources. Early work in model-based reinforcement learning includes~(\\cite{barto1995learning}) and~(\\cite{kaelbling1996reinforcement}). 
More recently, researchers have considered integrating learned system models with optimal control algorithms to produce control trajectories in a more data-efficient manner~(\\cite{mitrovic2010adaptive}). These algorithms compute control through an online optimization process, instead of through further data collection~(\\cite{barto1995learning}). There are of course, many viable model learning techniques that can be used to describe the system and control dynamics. For example, Neural Networks~(\\cite{williams2017information}), Gaussian Processes~(\\cite{nguyen2009local}), and Gaussian Mixture Models~(\\cite{khansari2011learning}) have all shown great promise in this area. Often the best choice of modeling algorithm is related specifically to the application domain. For example, Gaussian Processes perform well in low-data regimes, but scale poorly with the size of the dataset where Neural Networks fit naturally. In this work we explore a modeling technique that easily integrates with derivative-based optimal control algorithms. A survey of learning for control can be found in~(\\cite{schaal2010learning}).\n\nFrom a motivational standpoint, related work also includes methods that model not only the dynamics of a robotic system, but combined human-machine systems from data. For example, researchers have explored learning control policies from user demonstrations, thereby incorporating both system dynamics and the user's desires~(\\cite{argall2009survey, celemin2019fast}). Building on these ideas, researchers have proposed learning shared control policies directly from demonstration data using deep reinforcement learning~(\\cite{reddy2018shared}). To improve the human partner's intuition for the interaction paradigm, researchers have also proposed learning latent spaces to allow users to control complex robots with low dimensional input devices~(\\cite{losey2019controlling}). Relatedly, people have also proposed techniques for modeling both the dynamics of a system, and a policy for deciding when a human or autonomous partner should be in control. One such method is to learn local approximations to the system's dynamics and only provide autonomous assistance when the system is nearby a state it has previously observed~(\\cite{peternel2016shared}). Our approach utilizes a linear representation of the nonlinear human-robot dynamics which avoids the use of local models in exchange for a higher capacity linear model which globally represents the complex system. This is also distinct from the virtual fixtures\/guide literature where system models are known \\textit{a priori}, and frequently nonlinear.\n\nFrom a methodological standpoint, the most closely related research is recent work that computes control trajectories by integrating learned dynamics models with model predictive control (MPC) algorithms~(\\cite{williams2017information, drews2017aggressive}). These algorithms are defined by an iterative, receding horizon optimization process instead of using an infinite-horizon. Similar to our own work, these researchers first collect observations from live demonstrations of the mechanical device to learn a model of the system dynamics. They then integrate the model with an MPC algorithm to develop control policies. Beyond methodological differences (e.g., choice of machine learning and optimal control algorithms), the key theoretical distinction between these works and our own is our focus on shared control of joint human-machine systems, instead of developing fully autonomous systems. 
In particular, we learn a model of the joint system that is integrated into a shared control system to improve a human operator's control of a dynamic system. We therefore consider the influence of the human operator both during the data-collection process and at run-time in the control of the dynamic system.\n\nIn this work, we learn a model of the system and control dynamics through an approximation to the Koopman operator~(\\cite{koopman1931hamiltonian}). As the Koopman operator is a relatively new concept in robot learning for control, we now provide additional information on its description in the following section.\n\n\\subsection{The Koopman Operator}\n\\label{sec-background-koopman}\n\n\\begin{figure*}[!th]\n\t\\centering\n\t\\includegraphics[width=0.8\\hsize]{pipeline.png}\n\t\\caption{Pictorial depiction of the our model-based shared control paradigm. (a) Collect observations from user interaction and learn a model of the joint human-machine system through an approximation to the Koopman operator. This can be computed offline or online. (b) Compute control policy of autonomous agent by solving optimal control problem using the learned model. (c) Allocate control to integrate autonomy (gray) and user input (green\/red).}\n\t\\label{fig-pipeline}\n\\end{figure*}\n\nThe Koopman operator is an infinite-dimensional linear operator that can capture all information about the evolution of nonlinear dynamical systems. This is possible because the operator describes a linear mapping between sequential \\textit{functions of states} instead of the state itself. In particular, the Koopman operator acts on an infinite dimensional Hilbert space representation of the state. To define the Koopman operator, let us consider a discrete time dynamic system $(\\mathcal{X}, t, F)$: \n\\begin{equation}\nx_{t+1} = F(x_t)\n\\label{eq-gen-dynamics}\n\\end{equation}\n\n\\noindent where $\\mathcal{X} \\subseteq \\mathbb{R}^N$ is the state space, $t \\in \\mathbb{R}$ is time and $F : \\mathcal{X} \\rightarrow \\mathcal{X}$ is the state evolution operator. We also define $\\phi$, a nominally infinite dimensional observation function \n\\begin{equation}\ny_t = \\phi(x_t)\n\\label{fn-obs}\n\\end{equation}\n\n\\noindent where $\\phi : \\mathcal{X} \\rightarrow \\mathbb{C}$ defines the transformation from the original state space into the Hilbert space representation that the Koopman operator acts on. The Koopman operator $\\mathcal{K}$ is defined as the composition of $\\phi$ with $F$, such that \n\n\\begin{equation}\n\\mathcal{K} \\phi = \\phi \\circ F.\n\\label{fn-koopman-eq}\n\\end{equation}\n\n\\noindent By acting on the Hilbert state representation, the \\textit{linear} Koopman operator is able to capture the complex, nonlinear dynamics described by the state evolution operator. \n\nWhile the Koopman operator is nominally infinite dimensional, recent work has demonstrated the ability to approximate a finite dimensional representation using data-driven techniques~(\\cite{rowley2009spectral, budivsic2012applied}). In the limit of collected observation data, the approximation to the Koopman becomes exact~(\\cite{williams2015data}). These data-driven methods have renewed an interest in using the Koopman operator in applied engineering fields. In contemporary work, the Koopman operator has been successfully used to learn the dynamics of numerous challenging systems. 
This includes demonstrations that show the Koopman operator can differentiate between cyclic and non-cyclic stochastic signals in stock market data~(\\cite{hua2016using}) and that it can detect specific signals in neural data that signify non-rapid eye movement (NREM) sleep~(\\cite{brunton2016extracting}). More recently these systems have included physical robotics systems~(\\cite{abraham2019active, bruder2019modeling}).\n\n\\section{Model-based Shared Control}\n\\label{sec-model-based-shared-control}\n\nOur primary goal is to develop a shared control methodology that improves the skill of human-machine systems without relying on \\textit{a priori} knowledge of the relationship between the human and the machine. To define our model-based shared control algorithm we now describe the (1) model learning process, (2) method for computing the policy of the autonomous agent (\\textit{autonomy input} in Figure~\\ref{fig-shared-control}) and (3) control allocation method (the \\textit{green box} in Figure~\\ref{fig-shared-control}). A pictorial depiction of our model-based shared control paradigm can be found in Figure~\\ref{fig-pipeline}. Our learning-based approach develops a model of the joint human-machine system solely from observation, and this model can be used by the policy generation method to develop autonomous control trajectories. The control allocation method then describes how we integrate the input provided by the human partner and the autonomous agent into a single command that is sent to the dynamic system. \n\n\\subsection{Model Learning via the Koopman Operator}\n\nWhen designing assistive shared control systems, it is important to consider both the human and autonomous partners. To ensure that our paradigm is valid for generic human-machine systems, we learn both the \\textit{system dynamics} and information about the \\textit{user interaction} directly from data. In this work, we develop a model of the joint human-machine through an approximation to the Koopman operator, which can be computed offline or online (discussed further in Section~\\ref{sec-study-two-results-online}). The model learning process is depicted in Figure~\\ref{fig-pipeline}\\textcolor{red}{(a)}. As mentioned previously, there are of course a variety of other machine learning algorithms and representions one could choose to learn the system dynamics. In this work, we use the Koopman operator, which is particularly well suited to model-based shared control of human-machine systems for two main reasons. First, it is possible to approximate the Koopman operator in low-data regimes (see Section~\\ref{sec-study-two-results-online}) which allows us to quickly expand the set of human-machine systems we can control under the general MbSC paradigm. Second, there are a variety of highly efficient learning algorithms~(\\cite{williams2015data, klus2015numerical, rowley2009spectral}) that make the Koopman operator well suited to an online learning paradigm, an important feature in shared control where it is unlikely that we have \\textit{a priori} knowledge of the joint human-machine system.\n\nWe use Extended Dynamic Mode Decomposition (EDMD) to approximate the Koopman operator~(\\cite{williams2015data}). EDMD belongs to a class of data-driven techniques known as Dynamic Mode Decomposition (DMD)~(\\cite{rowley2009spectral, schmid2010dynamic, tu2013dynamic}). These algorithms use snapshots of observation data to approximate the Koopman modes that describe the dynamics of the observed quantities. 
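Before the formal treatment below, a minimal numerical sketch of this snapshot-based fit may help; the basis, data shapes and pseudo-inverse regularization are illustrative assumptions rather than the exact implementation used in this work:
\\begin{verbatim}
import numpy as np

def basis(x, u):
    # Illustrative lifting: constant, state, control, state-control cross
    # terms, and a pair of heading-dependent trigonometric terms.
    return np.concatenate(([1.0], x, u, np.outer(u, x).ravel(),
                           [u[0]*np.cos(x[2]), u[0]*np.sin(x[2])]))

def fit_koopman(X, U, Xnext):
    # X, Xnext: (T, n) arrays of states; U: (T, m) array of user controls.
    Phi  = np.array([basis(x,  u) for x,  u in zip(X,     U)])
    Phi1 = np.array([basis(xn, u) for xn, u in zip(Xnext, U)])
    G = Phi.T @ Phi / len(X)
    A = Phi.T @ Phi1 / len(X)
    return np.linalg.pinv(G) @ A        # K such that phi(x_{t+1}) ~ phi(x_t) K

def predict_state(K, x, u, n):
    return (basis(x, u) @ K)[1:n+1]     # recover the state block of the lifting
\\end{verbatim}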
We now provide a mathematical treatment of the EDMD algorithm. We start by redefining the observation function $\\phi$ from Equation~\\ref{fn-obs} as a vector-valued set of basis functions chosen to compute a finite approximation to the Hilbert space representation. We can then define the following approximation to the Koopman operator\n\n\\begin{equation}\n\\phi(x_{t+1}) = \\mathcal{K}^T\\phi(x_t) + r(x_t)\n\\end{equation}\n\n\\noindent where $r(x_t)$ is a residual term that represents the error in the model. The Koopman operator is therefore the solution to the optimization problem that minimizes this residual error term \n\\begin{align}\n\\begin{split}\nJ & = \\frac{1}{2} \\sum_{t=1}^{T} \\|r(x_t)\\|^2\\\\\n& = \\frac{1}{2} \\sum_{t=1}^{T} \\|\\phi(x_{t+1}) - \\mathcal{K}^T\\phi(x_t)\\|^2\n\\end{split}\n\\label{eq-ls}\n\\end{align}\n\n\\noindent where $T$ is the time horizon of the optimization procedure, and $\\|\\cdot\\|$ is the Euclidean norm. The solution to the least squares problem presented in Equation~\\eqref{eq-ls} is\n\\begin{equation*}\n\\mathcal{K} = G^\\dagger A\n\\end{equation*}\n\n\\noindent where $\\dagger$ denotes the Moore-Penrose pseudoinverse and\n\\begin{align*}\nG & = \\frac{1}{T} \\sum_{t=1}^{T} \\phi(x_t)^T\\phi(x_t) \\\\\nA & = \\frac{1}{T} \\sum_{t=1}^{T} \\phi(x_t)^T\\phi(x_{t+1})\n\\end{align*}\n\n\\subsubsection{Basis} In this work, we require that the finite basis $\\phi$ \\textit{includes both the state and control variables}~(\\cite{proctor2016generalizing}). This ensures that the Koopman operator models both the natural dynamics of the mechanical system and the control dynamics as provided by the user demonstration. In this work we empirically select a fixed set of basis functions to ensure that all models (across the different users in our validation study) are learned using the same basis. Here we choose $\\phi$ such that\n\\begin{align}\n\\begin{split}\n\\phi = &[1, x_1, x_2, x_3, x_4, x_5, x_6, u_1, u_2, u_1*x_1, u_1*x_2, \\\\\n& u_1*x_3, u_1*x_4, u_1*x_5, u_1*x_6, u_2*x_1, u_2*x_2, \\\\\n& u_2*x_3, u_2*x_4, u_2*x_5, u_2*x_6, u_1*\\cos(x_3), \\\\\n& u_1*\\sin(x_3), u_2*\\cos(x_3), u_2*\\sin(x_3)].\n\\end{split}\n\\label{eq-basis}\n\\end{align}\nThese 25 basis functions were chosen to combine information about the geometry of the task (e.g., the trigonometric functions capture specific nonlinearities present in the system dynamics, see Section~\\ref{sec-experimental-environment}) with information related to how the user responds to system state. For this reason, we include terms that mix state information with control information. To evaluate the accuracy of the learned approximation to the Koopman operator we compute the H-step prediction accuracy (see Figure~\\ref{fig-h-step-accuracy}).\n\nThere are, of course, a variety of methods that one could use to select an appropriate basis for a given dynamical system. This step is particularly important as selecting a poor basis will quickly degrade the validity of the learned model~(\\cite{berrueta2018dynamical}). One such method is to integrate known information about the system dynamics into the chosen basis functions, such as the relationship between the heading of the lander and the motion generated by the main thruster. This approach works well when the system dynamics are easy to understand; however, it can prove challenging when the dynamics are more complex. For this reason, one could also choose the set of basis functions through purely data-driven techniques. Sparsity Promoting DMD~(\\cite{jovanovic2014sparsity}) is one such algorithm.
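Before discussing data-driven basis selection further, we note that the least-squares construction above is straightforward to implement. The following minimal sketch shows how $\mathcal{K} = G^\dagger A$ can be computed from logged state-control snapshots using the basis in Equation~\eqref{eq-basis}; it is illustrative only, and the function and variable names are our own rather than those of the released codebase.
\begin{verbatim}
import numpy as np

def basis(x, u):
    # 25-term basis from Eq. (eq-basis); x = (x, y, theta, xdot, ydot,
    # thetadot), u = (main thruster, rotational thruster); x[2] is theta
    mixed = [ui * xi for ui in u for xi in x]            # u_i * x_j terms (12)
    trig = [u[0]*np.cos(x[2]), u[0]*np.sin(x[2]),
            u[1]*np.cos(x[2]), u[1]*np.sin(x[2])]
    return np.hstack([[1.0], x, u, mixed, trig])         # 1 + 6 + 2 + 12 + 4 = 25

def fit_koopman(X, U):
    # X: (T+1, 6) state snapshots, U: (T+1, 2) recorded user controls
    Phi = np.array([basis(x, u) for x, u in zip(X[:-1], U[:-1])])
    Phi1 = np.array([basis(x, u) for x, u in zip(X[1:], U[1:])])
    G = Phi.T @ Phi / len(Phi)     # sum of phi(x_t) phi(x_t)^T
    A = Phi.T @ Phi1 / len(Phi)    # sum of phi(x_t) phi(x_{t+1})^T
    return np.linalg.pinv(G) @ A   # K, so that phi(x_{t+1}) ~= K^T phi(x_t)
\end{verbatim}
Any equivalent regularized least-squares solver could be substituted for the explicit pseudoinverse.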
Sparsity Promoting DMD (SP-DMD) takes a large initial set of randomly generated basis functions and imposes an $\\ell_1$ penalty during the learning process to algorithmically decide which basis functions are the most relevant to the observable dynamics~(\\cite{tibshirani1996regression}). An example of this purely data-driven approach being applied to human-machine systems can be found in~\\cite{broad2019highly}.\n\n\\subsection{Autonomous Policy Generation}\n\nTo generate an autonomous control policy, we can integrate the portion of the learned model that relates to the system and control dynamics into a model predictive control (MPC) algorithm. In particular, we use Koopman operator model-based control~(\\cite{broad2017learning, abraham2017model}), which we detail now in full. To compute the optimal control sequence, $u$, we must solve the following Model Predictive Control (MPC) problem\n\\begin{equation}\n\\begin{aligned}\n& \\underset{u}{\\text{minimize}}\n& & J = \\sum_{t=0}^{T-1} l(x_t,u_t) + l_T(x_T) \\\\\n& \\text{subject to}\n& & x_{t+1} = f(x_t, u_t), \\\\\n& & & u_t \\in U, x_t \\in X, \\forall t\n\\end{aligned}\n\\label{eqn-mpc}\n\\end{equation}\n\n\\noindent where $f(x_t,u_t)$ is the system dynamics, $l$ and $l_T$ are the running and terminal cost, and $U$ and $X$ are the set of valid control and state values, respectively.\n\nIn this work, we define\n\\begin{align*}\nl(x_t, u_t) &= \\frac{1}{2} (x_t-x_d)^T Q_t (x_t-x_d) + \\frac{1}{2} u_t^T R_t u_t \\\\\n\\textnormal{where }Q_t &= \\textnormal{Diag}[6.0, 10.0, 20.0, 2.0, 2.0, 3.0]\n\\end{align*}\n$x_t$ is the current state and $x_d$ is the desired goal state. Additionally,\n\\begin{align*}\nl_T(x_T) &= \\frac{1}{2} (x_T-x_d)^T Q_T (x_T-x_d) \\\\\n\\textnormal{where }Q_T &= \\textnormal{Diag}[3.0, 3.0, 5.0, 1.0, 1.0, 1.0]\n\\end{align*}\nThese values were chosen empirically based on results observed from the system operating fully autonomously.\n\nTo integrate our learned system model, we re-write the system dynamics as\n\\begin{equation}\n\\phi(x_{t+1}) = f_\\mathcal{K}(x_t, u_t)\n\\label{eqn-koopman-dyn}\n\\end{equation}\n\n\\noindent where $f_\\mathcal{K} = \\mathcal{K}^T\\phi(x_t, u_t)$ is the learned system dynamics parameterized by a Koopman operator $\\mathcal{K}$. This equation demonstrates the fact that the Koopman operator does not map directly from state to state, but rather operates on functions of state. We can then evaluate the evolved state by recovering the portion of the basis that represents the system's state\n\\begin{equation}\nx_{t+1} = \\phi(x_{t+1})_{1:N}\n\\label{eqn-koopman-recover}\n\\end{equation}\n\\noindent where values $1:N$ are the state variables, as per our definition in Equation~\\eqref{eq-basis}, and $N$ is the dimension of the state space. The policy generation process is depicted in Figure~\\ref{fig-pipeline}\\textcolor{red}{(b)}.\n\n\\subsubsection{Nonlinear Model Predictive Control Algorithm}\n\nWe solve Equation~\\eqref{eqn-mpc} with Sequential Action Control (SAC)~(\\cite{ansari2016sequential}). SAC is a real-time, model-based non-linear optimal control algorithm that is designed to iteratively find a single control value, a time to act, and a duration that together maximally improve performance. Other viable nonlinear optimal control algorithms include iLQR~(\\cite{li2004iterative}) and DDP~(\\cite{mayne1966second, tassa2014control}).
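The prediction and state-recovery steps in Equations~\eqref{eqn-koopman-dyn} and \eqref{eqn-koopman-recover} can be sketched as follows (again with illustrative names only, reusing the hypothetical \texttt{basis} function and learned operator from the earlier sketch). Note that with the basis written as in Equation~\eqref{eq-basis}, the state terms sit immediately after the leading constant.
\begin{verbatim}
def predict_next_state(K, x, u):
    # lifted one-step prediction: phi(x_{t+1}) ~= K^T phi(x_t, u_t)
    phi_next = K.T @ basis(x, u)
    # recover the state block of the basis vector (indices 1..6 here,
    # since index 0 holds the constant term)
    return phi_next[1:7]

def rollout(K, x0, controls):
    # open-loop rollout used to evaluate a candidate control sequence
    traj, x = [x0], x0
    for u in controls:
        x = predict_next_state(K, x, u)
        traj.append(x)
    return np.array(traj)
\end{verbatim}
A receding-horizon scheme such as SAC then re-plans repeatedly over rollouts of this form, starting from the most recent measured state.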
SAC is particularly well suited for our shared control algorithm because it searches for single, short-burst actions, which align well with our control allocation algorithm (described in detail in Section~\\ref{sub-sec-control-allocation}). Additionally, SAC can compute closed-loop trajectories very quickly (1 kHz), an important feature for interactive human-machine systems such as the one presented in this work. \n\n\\subsubsection{Integrating the Koopman model and SAC}\nSequential Action Control is a gradient-based optimization technique, and it is therefore necessary to compute derivatives of the system dynamics during the optimization process. The linearization of the discrete time system is defined by the following equation\n\\begin{equation*}\nx_{t+1} = A x_t + B u_t.\n\\end{equation*}\n\n\\noindent By selecting a differentiable $\\phi$, one can compute $A$ and $B$ \n\\begin{equation}\n\\begin{aligned}\nA &= \\mathcal{K}_{1:N}^T \\frac{\\partial \\phi}{\\partial x}\\\\\nB &= \\mathcal{K}_{N:N+P}^T \\frac{\\partial \\phi}{\\partial u}\n\\label{eqn-linear-model-a_and-b}\n\\end{aligned}\n\\end{equation}\n\n\\noindent where $N$ is again the dimension of the state space, and $P$ is the dimension of the control space. \n\n\\subsection{Control Allocation Method}\n\\label{sub-sec-control-allocation}\n\nTo close the loop on our shared control paradigm, we define a control allocation method that uses the solution from the optimal control algorithm to provide outer-loop stabilization. We use a geometric signal filter that is capable of dynamically shifting which partner is in control at any given instant based on optimality criteria. This technique is known as Maxwell's Demon Algorithm (MDA)~(\\cite{tzorakoleftherakis2015controllers}). Our specific implementation of MDA is detailed in Algorithm~\\ref{mda-algorithm} where $u_h$ is the control input from the human operator, $u_a$ is the control produced by the autonomy, and $u$ is applied to the dynamic system. \n\\begin{algorithm}[!h]\n\\caption{Maxwell's Demon Algorithm (MDA)}\n\\begin{algorithmic}\n \\If {$\\langle u_h, u_a \\rangle \\geq 0$} \n \\State $u = u_h$;\n \\Else\n \\State $u = 0$;\n \\EndIf\n\\end{algorithmic}\n\\label{mda-algorithm}\n\\end{algorithm}\n\nWe also provide a pictorial representation of the algorithm in Figure~\\ref{fig-mda}.\n\n\\begin{figure}[!h]\n\t\\centering\n\t\\includegraphics[width=\\hsize]{mda.png}\n\t\\caption{Pictorial representation of Maxwell's Demon Algorithm (MDA).}\n\t\\label{fig-mda}\n\\end{figure}\n\nThis control allocation method restricts the user's input to the system to be in the same half-plane as the optimal control solution, and places no other limitations on the human-machine interaction. If the user's input is in the opposite half-plane, no input is provided to the system. This control allocation method is lenient to the human partner, as notably, \\textit{the autonomous agent does not add any information into the system} and instead only blocks particularly bad input from the user. Therefore, \\textit{any signal sent to the system originates from the human partner}. We use this filter because we are motivated by assistive robotics, in which prior research has shown that there is no consensus across users on desired assistance level~(\\cite{erdogan2017effect}). By allowing the user a high level of control freedom, the system encourages input from the human operator and restricts how much authority is granted to the autonomous partner. This method is depicted in Figure~\\ref{fig-pipeline}\\textcolor{red}{(c)}.
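A minimal implementation of this allocation step is given below (illustrative only). It simply checks the sign of the inner product between the human and autonomous commands, as in Algorithm~\ref{mda-algorithm}.
\begin{verbatim}
import numpy as np

def mda_filter(u_human, u_autonomy):
    # Maxwell's Demon Algorithm: pass the human command through only when
    # it lies in the same half-plane as the autonomous solution
    if np.dot(u_human, u_autonomy) >= 0:
        return u_human
    return np.zeros_like(u_human)
\end{verbatim}
Depending on the system, the same test can also be applied independently to each control axis; the vector form shown here follows the pseudocode in Algorithm~\ref{mda-algorithm}.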
We use MDA in this work primarily because it has been experimentally validated in prior studies on human-machine systems for assistive robotics~(\\cite{fitzsimons2016optimal}). Notably, this method does not guarantee optimal (or even \"good\") performance, as a human operator could theoretically always provide input orthogonal to the autonomous solution, resulting in no control ever being applied to the system. However, this technique does allocate a large amount of control authority to the human-in-the-loop, a desirable feature in our motivating application domains. There are also alternative methods that can be used for similar purposes, including extensions to MDA that incorporate additional information from the autonomous partner, which can be used to improve performance or safety~(\\cite{broad2018operation}). A review paper of alternative control allocation techniques can be found in~\\cite{losey2018review}.\n\n\\section{Human Subjects Study}\n\\label{sec-experimental-validation}\n\nHere, we detail the experimental setup that we use to study three main aspects of the described system. \n\\begin{itemize}\n\\item First, our aim is to evaluate the efficacy of model-based shared control as it relates to task success and control skill. Concurrently, we aim to evaluate the generalizability of the learned system models with respect to a wide range of human operators.\n\\item Second, we aim to evaluate the efficacy of model-based shared control under an online learning paradigm---specifically, the sample-efficiency of the Koopman operator representation.\n\\item Finally, we aim to evaluate the impact of nonlinear modeling and policy generation techniques through a comparison to a second human-subjects study that enforces linear constraints on our model-based shared control algorithm.\n\\end{itemize}\n\n\\subsection{Experimental Environment}\n\\label{sec-experimental-environment}\n\nThe proposed shared control framework is evaluated using a simulated lunar lander (see Figure~\\ref{fig-ll-env}). \n\n\\begin{figure}[!h]\n\t\\centering\n\t\\fbox{\\includegraphics[width=0.85\\hsize]{environment.png}}\n\t\\caption{Simulated lunar lander system. The green circle is the goal location. The red dots represent an engine firing.}\n\t\\label{fig-ll-env}\n\\end{figure}\n\n\\noindent We use a simulated lunar lander (rocket) as our experimental environment for a number of reasons. This environment is challenging for a novice user, but performance can be improved (and sometimes mastered) given enough time and experience. Similar to a real rocket, one of the main control challenges is the stability of the system. As the rocket rotates along its yaw axis, firing the main thruster can produce nonintuitive dynamics for a novice. Furthermore, once the rocket has begun to rotate, momentum can easily overwhelm a user who is unfamiliar with such systems. Therefore, it is often imperative---particularly for non-expert users---to maintain a high degree of stability at all times in order to successfully complete the task. In addition to the control challenges, we choose this environment because the simulator abstracts the system dynamics through calls to the Box2D physics engine; therefore, we do not have an exact model and thus have an \\textit{explicit need to learn one}.\n\n\\subsection{System Description}\n\\label{sub-sec-system-description}\n\nThe dynamic system is a modified version of an open-source environment implemented in the Box2D physics engine and released by OpenAI~(\\cite{brockman2016gym}). 
Our modifications (1) allow for continuous-valued multi-dimensional user control via a joystick, and (2) incorporate the codebase into the open-source ROS framework. We have made our code available online at \\url{https:\/\/github.com\/asbroad\/model_based_shared_control}.\n\nThe lunar lander is defined by a 6D state space made up of the position ($x,y$), heading ($\\theta$), and their rates of change ($\\dot{x}, \\dot{y}, \\dot{\\theta}$). The control input to the system is a continuous two dimensional vector ($u_1, u_2$) which represents the throttle of the main and rotational thrusters. The main engine can only apply positive force. The left engine fires when the second input is negative, while the right engine fires when the second input is positive. The main engine applies an impulse that acts on the center of mass of the lunar lander, while the left and right engines apply impulses that act on either side of the rocket. We remind the reader that our goal is to learn both the system dynamics and user interaction. For this reason, we must collect data both on the system state and also the control input. Together, this defines an eight dimensional system: \n\\begin{equation*}\n\\mathcal{X} = [x, y, \\theta, \\dot{x}, \\dot{y}, \\dot{\\theta}, u_1, u_2]\n\\end{equation*}\n\n\\noindent where the first six terms define the lunar lander state and $u_1, u_2$ are the main and rotational thruster values, through which the user interacts with the system.\n\n\\subsection{Trial Description}\n\\label{sec-experiment-trial-description}\n\nThe task in this environment requires the user to navigate the lander from its initial location to the goal location (represented by the green circle in Figure~\\ref{fig-ll-env}) and to arrive with a heading nearly perpendicular to the ground plane and with linear and rotational velocities near zero. A trial is considered complete either (1) when the center of an upright lunar lander is fully contained within the goal circle (i.e., the Euclidean distance between the center of the lander and the center of the goal is less than $0.9$ m) and the linear and angular velocities are near zero (i.e., the linear velocities must be less than $1.0$ m\/s and the angular velocity must be less than $0.3$ rad\/sec), or (2) when the lander moves outside the bounded environment (i.e., when the lander moves off the screen to the left or right) or crashes into the ground.\n\n\\begin{figure}[!h]\n\t\\centering\n\t\\includegraphics[width=0.85\\hsize]{control_interface.jpeg}\n\\end{figure}\nIn each trial, the lunar lander is initialized to the same $x, y$ position ($10.0$ m, $13.3$ m), to which we added a small amount of Gaussian noise ($\\mu = 0.2$ m). Additionally, a random force is applied at the start of each trial (uniform($-1000$ N,$1000$ N)). The goal location ($10.0$ m, $6.0$ m) is constant throughout all trials and is displayed to the user as a green circle (see Figure~\\ref{fig-ll-env}). \n\nThe operator uses a PS3 controller to interact with the system. The joystick controlled by the participant's dominant hand fires the main thruster, and the opposing joystick fires the side thrusters. As the user moves through the environment, we keep track of the full state space at each timestep (10 Hz). 
We provide a video of the system, task and user interaction under shared control as part of the supplementary material.\n\n\\subsection{Analysis I : Efficacy and Generalizability of Model-based Shared Control}\n\\label{sub-sec-analysis-i}\n\n\\subsubsection{Control Conditions}\n\nTo study the efficacy and generalizability of our shared control system and the generalizability of the learned system dynamics, we compare four distinct control conditions.\n\n\\begin{itemize}\n\\item In the first condition, the user is in full control of the lander and is not assisted by the autonomy in any way; we call this approach \\textit{User Only} control. As each user undergoes repeated trials with the same goal, this can also be considered a natural learning paradigm. \n\\end{itemize}\n\nIn the remaining three conditions an autonomous agent provides outer-loop stabilization on the user's input as described in Section~\\ref{sec-model-based-shared-control}. The main distinction between these three control conditions is the source of the data used to compute the model of the joint system. \n\\begin{itemize}\n\\item In the second condition, the model is defined by a Koopman operator learned on data captured from earlier observations of the current user; we call this approach \\textit{Individual Koopman}. \n\\item In the third condition, the model is defined by a Koopman operator learned on data captured from observations of three novice participants prior to the experiment (who were not included in our analysis); we call this approach \\textit{General Koopman}.\n\\item In the fourth condition, the model is defined by a Koopman operator learned on data captured from observations of an expert user (the first author of the paper, who has significant practice controlling the simulated system); we call this approach \\textit{Expert Koopman}.\n\\end{itemize}\n\nWe analyze the viability of model-based shared control by comparing the \\textit{User Only} condition to each of the shared control conditions. We analyze the generalizability of the learned models by comparing the results from the \\textit{Individual Koopman}, \\textit{General Koopman} and \\textit{Expert Koopman} conditions. All of the data we analyze to evaluate the ideas presented in this section comes from the nonlinear MbSC study.\n\n\\subsubsection{Protocol and Participants}\n\nEach experiment begins with a training period for the user to become accustomed to the dynamics of the system and the interface. This training period continues until the user is able to successfully achieve the task three times in a row or 15 minutes elapses. During the next phase of the experiment, we collect data from 10 user-controlled trials, which we then use to develop a model. Finally, each user performs the task under the four conditions detailed above (10 trials each). The order in which the control paradigms are presented to the user is randomized and counter-balanced to reduce the effect of experience.\n\nThe study consisted of 16 participants (11 female, 5 male). All subjects gave their informed consent and the experiment was approved by Northwestern University's Institutional Review Board. \n\n\\subsection{Analysis II : Online Model-based Shared Control}\n\\label{sub-sec-analysis-ii}\n\nTo study the efficacy of our model-based shared control algorithm in an online learning paradigm, we collect data from a fifth experimental condition, which we call \\textit{Online Koopman}. 
\n\n\\subsubsection{Control Condition}\n\n\\begin{itemize}\n\\item The main difference between the \\textit{Online Koopman} paradigm and the three previously described shared control conditions is that the model of the joint human-machine system is learned online in real-time. In all other control conditions, all models were trained offline from observations gathered during a data collection phase. In the online paradigm, the model is updated continuously starting with the first collected set of observations.\n\\end{itemize}\n\nIn addition to the lack of a separate data collection phase, the online learning paradigm is distinct from the other shared control conditions because of the data that we use to learn the model. In the shared control conditions that use a model learned offline, we use all of the observations collected from the user demonstrations to learn the model. In the online learning paradigm, we only update the model when the user input is admitted by the MDA controller. We choose this learning paradigm because it fits well conceptually with our long-term goal of using the outer-loop algorithm to provide stability and safety constraints on the shared control system. It is important to note that at the beginning of the online learning paradigm, the MDA controller relies on randomly initialized control and system dynamics models. As a result, the control let through to the system will be very noisy during the first few moments of the experiment, making the system difficult to control successfully for any human-in-the-loop. For this reason, it is important that the system dynamics and control models can be learned quickly, something we evaluate in Section~\\ref{sec-study-two-results-online}. As soon as the learning process produces a model of any kind, the policy is computed using MPC techniques.\n\n\\subsubsection{Protocol and Participants}\n\\label{sec-sub-sub-protocol}\n\nThe online learning paradigm consists of 15 trials per user to allow us to evaluate possible learning curves. The model is updated at the same rate as the simulator (10 Hz) and is initialized naively (i.e., all values are sampled from a uniform distribution [0,1)). This paradigm is presented as the final experimental condition to all subjects. The subjects are the same 16 participants as in Section~\\ref{sub-sec-analysis-i}. All of the data we analyze to evaluate the ideas presented in this section comes from the nonlinear MbSC study.\n\n\\subsection{Analysis III : Comparison of Linear and Nonlinear Model-based Shared Control}\n\\label{sub-sec-analysis-iii}\n\nTo study the impact of nonlinear modeling and policy generation techniques on our model-based shared control paradigm, we compare results from the above study to a second study (consisting of a separate group of 16 participants) that enforces linear constraints on these parts of the system. \n\n\\subsubsection{Control Conditions}\n\nThe same four control conditions from Analysis I are evaluated. The differences lie in (1) the choice of basis function used to approximate the Koopman operator and (2) the choice of optimal control algorithm used to generate the autonomous policy. In this study, we approximate the Koopman operator using a linear basis, consisting of the first nine terms in Equation~\\eqref{eq-basis}, instead of the full nonlinear basis. We furthermore use a Linear Quadratic Regulator (LQR) to generate the autonomous policy, instead of the nonlinear model predictive control algorithm (Sequential Action Control) described above.
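For completeness, the linear variant can be sketched as follows (illustrative only; the cost matrices and the handling of the affine offset introduced by the constant basis term are simplifications, not the exact implementation used in the linear study). With the nine-term basis $[1, x_1, \ldots, x_6, u_1, u_2]$, the state rows of $\mathcal{K}^T$ define an affine linear model to which a standard discrete-time LQR can be applied.
\begin{verbatim}
import numpy as np
import scipy.linalg

def linear_lqr_gain(K, Q, R):
    # with phi = [1, x (6 terms), u (2 terms)], the state rows of K^T give
    # x_{t+1} ~= c + A x_t + B u_t; the offset c is ignored in this sketch
    Kt = K.T
    A, B = Kt[1:7, 1:7], Kt[1:7, 7:9]
    # infinite-horizon discrete-time LQR gain for the (A, B) pair
    P = scipy.linalg.solve_discrete_are(A, B, Q, R)
    F = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    return F   # autonomous command, e.g., u_a = -F (x - x_d)
\end{verbatim}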
\n\n\\subsubsection{Protocol and Participants}\n\nThe same experimental protocol described in Section~\\ref{sub-sec-analysis-i} was used, allowing us to perform a direct comparison between the two studies. Unlike the prior sections, the data we analyze to evaluate the ideas presented in this section comes from both the linear and nonlinear MbSC studies. The data from the linear MbSC study was previously analyzed in~(\\cite{broad2017learning}) and was collected from a different set of 16 subjects, resulting in 32 total participants.\n\n\\subsection{Statistical Analysis} \n\\label{sec-statistical-analysis}\n\nWe analyze the results of the human-subjects studies using statistical tests to compare the performance of participants along a set of pertinent metrics under the control conditions described in Section~\\ref{sub-sec-analysis-i}. Our analysis consists of one-way ANOVA tests conducted to evaluate the effect of the shared control paradigm on each of the dependent variables in the study. These tests allow us to statistically analyze the effect of each condition while controlling for overinflated type I errors that are common with repeated t-tests. Each test is computed at a significance value of 0.05. When the omnibus F-test produces significant results, we conduct post-hoc pair-wise Student's t-tests using Holm-Bonferroni adjusted alpha values~(\\cite{wright1992adjusted}). The post-hoc t-tests allow us to further evaluate the cause of the significance demonstrated by the ANOVA by comparing each pair of control paradigms separately. Similar to the ANOVA test, the Holm-Bonferroni correction is used to reduce the likelihood of type I errors in the post-hoc t-tests. \n\nIn addition to reporting the results of the statistical tests, we also use box-and-whisker diagrams to display specific metrics. In these plots, the box represents the \\textit{interquartile range (IQR)} which refers to the data that lies between the first and third quartiles. This area contains 50$\\%$ of the data. The line inside the box represents the median value and the whiskers above and below the box are the minimum and maximum values inside 1.5 times the interquartile range. The small circles are outliers. The plots also depict the results of the reported statistical tests. That is, if a post-hoc t-test finds statistically significant differences between two conditions, we depict these results on the box-and-whisker diagrams using asterisks to represent the significance level ($*: p < 0.05, **: p < 0.01, ***: p < 0.005$). \n\nWe note that this analysis is used for \\textit{all reported results}. Therefore, if we present the results of a t-test, it signifies that we have previously run an ANOVA and found a statistically significant difference. The reader can also assume that any unreported post-hoc t-tests failed to reject the null hypothesis. \n\n\\section{Results}\n\\label{sec-results}\n\nWe now present the results of the desired analyses described in Sections~\\ref{sub-sec-analysis-i}, ~\\ref{sub-sec-analysis-ii}, and ~\\ref{sub-sec-analysis-iii}. Our analyses support the premise that model-based shared control is a valid and effective data-driven method for improving a human operator's control of an \\textit{a priori} unknown dynamic system. We also find the learned system models are generalizable across a population of users. 
Finally, we find that these models can be learned online in a fast, data-efficient manner.\n\n\\subsection{Efficacy of Model-based Shared Control}\n\\label{sec-results-performance-metrics}\n\nTo evaluate the efficacy of our model-based shared control algorithm, we compute the average success rate under each control paradigm and examine the distribution of executed trajectories. Our analysis compares the User Only control condition to each of the shared control conditions (Individual Koopman, General Koopman and Expert Koopman). All of the data we analyze to evaluate the ideas presented in this section comes from the nonlinear MbSC study.\n\n\\subsubsection{Task Success and User Skill}\n\nA trial is considered a success when the user is able to meet the conditions defined in Section~\\ref{sec-experiment-trial-description}. We can interpret the success rate of a user, or shared control system, on a set of trials as a measure of skill. The greater the skill, the higher the success rate. By comparing the average success rate under the User Only control paradigm with the average success rate under the shared control paradigms, we can analyze the impact of the assistance provided by the autonomous partner. \n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=0.8\\hsize]{results_success_nonlinear.png} \n \\caption{Number of successful trials under each control condition.}\n \\label{fig-results-success-nonlinear}\n\\end{figure} \n\n\\begin{figure*}[t]\n\t\\centering\n\n\t\\includegraphics[width=0.8\\hsize]{trajectories.png}\n\t\\caption{Trajectory plots which visualize the most frequently visited parts of the state space. The data is broken down by control condition (columns) and whether the trial was successful (rows). The plots are generated by overlaying each trajectory with a low opacity and the intensity of the plots therefore represents more frequently visited portions of the state space.}\n\t\\label{fig-heatmaps}\n\\end{figure*}\n\nThe average number of successful trials produced in each control condition is displayed in Figure~\\ref{fig-results-success-nonlinear}. An analysis of variance shows that the choice of control paradigm has a significant effect on the success rate ($F(3, 59) = 4.58, p < 0.01$). Post-hoc t-tests find that users under the shared control conditions show statistically significant improvements in the success rate when compared to their performance under the User Only control condition ($p < 0.01$, for all cases). No other pairings are found to be statistically distinct. This result demonstrates that the assistance provided by the autonomous agent significantly improves the skill of the joint human-machine system, thereby validating the efficacy of model-based shared control. This result is in line with related work that aims to provide similar assistance, such as that found in the virtual fixtures\/guides literature~(\\cite{forsyth2005predictive, griffiths2005sharing, abbink2012haptic}). Importantly, however, unlike these prior methods, model-based shared control does not require \\textit{a priori} knowledge of the system dynamics or the human operator.\n\nThe fact that there are no observed differences in task performance between the Individual, General and Expert cases suggests that the source of the data used to learn the model may not be important in developing helpful autonomous assistance in the shared control of dynamic systems (discussed further in Section~\\ref{sec-results-generalizability}).
An alternative interpretation of this data could be that the discrepancy of skill demonstrated by the participants in the individual, general and expert cases was not large enough to produce any potential difference in performance. This, however, is not likely, as the expert demonstrator (the lead author) was able to easily achieve the desired goal state during every demonstration (i.e. 10 out of 10 trials). In contrast, the average subject who provided data in the individual and general cases performed similarly to how participants performed under the User Only cases in Figure~\\ref{fig-results-success-nonlinear} (i.e. about 1 in 10 successful demonstrations).\n\n\\subsubsection{Distribution of Trajectories---Qualitative}\n\nWe further analyze the different control conditions through a comparison of the distribution of trajectories we observe in each condition. Unlike the success metric, this analysis is not based on task performance, and is instead performed to evaluate the control skill exhibited by either the human operator alone or the joint human-machine system. Figure~\\ref{fig-heatmaps} depicts trajectory plots which represent the most frequently occupied sections of the state space. The plots are generated using data separated based on the control condition (columns) and whether the user was able to complete the task on a given trial (rows). \n\nThe first distinction we draw is between the User Only control condition and the three shared control conditions. In particular, the distribution of trajectories in the User Only condition depicts both larger excursions away from the target and lower levels of similarity between individual executions. When we focus specifically on which parts of the state space users spend the most time in (as represented by the intensity of the plots), we see two main clusters of high intensity (around the start and goal locations) in the shared control conditions, whereas we see a wider spread of high-intensity values in the User Only control condition. This suggests more purposeful motions under the shared control conditions.\n\nThe second distinction we draw focuses on a comparison between the successful and unsuccessful trials. Specifically, we note that trajectory plots computed from the failed trials under the shared control conditions demonstrate similar properties (e.g., the extent of the excursions away from the target, as well as two main clusters of intensity) to the trajectory plots computed from successful trials under the shared control conditions. This suggests that users may have been closer to succeeding in these tasks than the binary success metric gives them credit for. By comparison, the trajectory plot computed from the failed trials under the User Only control condition depicts a significantly different distribution of trajectories with less structure. Specifically, we observe numerous clusters of intensity that represent time spent far away from the start and goal locations. This suggests that users were particularly struggling to control the system in these cases.\n\n\\subsubsection{Distribution of Trajectories---Quantitative}\n\nThese observations are supported by an evaluation of the ergodicity~(\\cite{mathew2011metrics}) of the distributions of trajectories described above. 
We find that users under the shared control paradigm are able to produce trajectories that are more ergodic with respect to the goal location than users under User Only control, which means that they spend a significantly larger proportion of their time navigating near the goal location under shared control.\n\nTo perform this comparison, we compute the ergodicity of each trajectory with respect to a probability distribution defined by a Gaussian centered at the goal location (which represents highly desirable states). This metric can be calculated as the weighted Euclidean distance between the Fourier coefficients of the spatial distribution and those of the trajectory~(\\cite{miller2016ergodic}).\n\nSimilar to our qualitative analysis of the trajectory plots in Figure~\\ref{fig-heatmaps}, we first compare ergodicity between the different control conditions by analyzing \\textit{all} the trajectories observed under each condition. An analysis of variance showed that the effect of the shared control paradigm on trajectory ergodicity is significant ($F(3, 640) = 12.97, p < 0.00001$). Post-hoc t-tests find statistically significant differences between the performance of the users in the User Only control condition and users in the shared control conditions based on the individual, general and expert datasets ($p < 0.0005, p < 0.001, p < 0.0005$, respectively). No other pairings demonstrate statistically distinct results. We interpret this result as additional evidence that model-based shared control improves the skill of the human partner in controlling the dynamic system.\n\nWe further analyze the ergodicity results by separating the trajectories based on whether they come from an unsuccessful or successful trial. An analysis of variance computed over all control conditions showed that the effect of the shared control paradigm on trajectory ergodicity is significant for both unsuccessful ($F(3, 310) = 6.60, p < 0.0005$) and successful ($F(3, 325) = 7.20, p < 0.0005$) trials. Post-hoc t-tests find statistically significant differences between the performance of the users in the User Only control condition and users in the shared control conditions ($p < 0.005$ in all unsuccessful cases, $p < 0.05$ in all successful cases). No other pairings reject the null hypothesis. These results suggest that the shared control paradigm is helpful in improving the user's skills even when they provide input that is ultimately unsuccessful in achieving the task. Furthermore, our shared control paradigm is helpful even when users are performing at their best. Thus, for both failed and successful trials, users exhibit a greater amount of control skill than when there is no assistance. \n\n\\subsection{Generalizability of Shared Control Paradigm}\n\\label{sec-results-generalizability}\n\nWe continue the evaluation of our human subjects study with an analysis of the generalizability of the learned system models and our model-based shared control algorithm. All of the data we analyze to evaluate the ideas presented in this section comes from the nonlinear MbSC study. As reported in Section~\\ref{sec-results-performance-metrics}, we find no statistical evidence that the source of the data used to train the model impacts the efficacy of the shared control paradigm. This test was again conducted using an ANOVA, which can be used to evaluate differences between groups by comparing the mean and variance computed from the data collected during the experimental trials.
When we compare the success rate of users in each shared control condition, we find no statistically significant difference. However, we do find a significant difference between users' performance under each shared control condition and the User Only condition. The same result holds when we compare each control condition along the ergodic metric described in Section~\\ref{sec-results-performance-metrics} and visualized by trajectory plots in Figure~\\ref{fig-heatmaps}. Taken together, these results suggest that the efficacy of the assistance provided by the autonomous agent is \\textit{independent of the source of the data used to learn a model of the joint system}. That is, models trained on data collected from an individual user generalize to a larger population of human partners. \n\n\n\n\\begin{figure}[h]\n \\centering\n \\captionsetup[subfigure]{justification=centering}\n \\begin{subfigure}[t]{0.49\\hsize}\n \t\\centering\n \t\\includegraphics[width=\\hsize]{results_agreement_2_main_by_user.png} \n \t\\caption{}\n \t\\label{fig-results-agreement-main}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.49\\hsize}\n \\centering\n \\includegraphics[width=\\hsize]{results_agreement_2_side_by_user.png} \n \\caption{}\n \\label{fig-results-agreement-side}\n \\end{subfigure}\n \\caption{Average agreement between user and optimal control algorithm as defined by the Maxwell's Demon Algorithm (Algorithm~\\ref{mda-algorithm}) along the (\\subref{fig-results-agreement-main}) main and (\\subref{fig-results-agreement-side}) side thrusters.}\n \\label{fig-results-agreement}\n\\end{figure} \n\nTo further analyze the generalizability of the model-based shared control paradigm, we evaluate the participants' interactions with the outer-loop autonomous control. We are interested in whether or not users agree more often with the autonomy when control signals are produced based on models learned from their personal demonstration data. To evaluate this idea, we look at the percentage of user inputs that are let through as control to the dynamic system based on our control allocation method (MDA). The average agreement metric is broken down by control condition and presented in Figure~\\ref{fig-results-agreement}. \n\nAn analysis of variance shows that the effect of the source of the model data on the average agreement is not significant in either the main thruster ($F(2, 44) = 0.87, p = 0.43$) or the side thruster ($F(2, 44) = 0.38, p = 0.69$). These results show a uniformity in the response to system state across users and suggest that the system is able to adapt to the person, instead of requiring a personalized notion of the user and system. \n\nWe interpret this finding as further evidence of the generalizability of our model-based shared control paradigm. In particular, we find that it is not necessary to incorporate demonstration data from individual users when developing model-based shared control. This result replicates findings from our analysis of data collected under a shared control paradigm that enforced a linear constraint on the model learning and policy generation techniques~(\\cite{broad2017learning}).\n\n\\subsection{Online Learning Shared Control}\n\\label{sec-study-two-results-online}\n\nWe next evaluate our model-based shared control algorithm in an online learning paradigm. Our evaluation considers the sample complexity of our model-based learning algorithm through a comparison of the \\textit{impact each shared control paradigm has on the skill of the joint system over time}.
All of the data we analyze to evaluate the ideas presented in this section comes from the nonlinear MbSC study. Our statistical analysis is a comparison of the percent of participants who succeed under each paradigm \\textit{by trial number}, shown in Figure~\\ref{fig-res-success-by-trial}. We remind the reader that users participate in 15 trials of the Online Koopman condition while they participate in 10 trials of the four other experimental conditions. For comparison we only plot the first 10 trials of the Online Koopman data, though we note that the improved success rate is sustained over the final five trials. From this plot, we can see that users in the Online Koopman shared control condition start off performing poorly, but by around trial 7 start performing on par with the other shared control conditions. \n\nHere, we also note the number of trials used to train the model of the system and control dynamics in each condition. In the Individual and Expert conditions, data is collected from 10 trials to train the model. In the General condition, data is collected from three different users who each control the system for 10 trials each, which means the model is trained from a total of 30 trials. Finally, as discussed above, in the Online condition, the model is learned continuously over the course of 15 trials.\n\n\\begin{figure}[!h]\n \\centering\n \\includegraphics[width=\\hsize]{success_by_trial_by_paradigm_dark.png} \n \\caption{Average percentage success by trial for the first 10 trials by control condition. Users under all shared control conditions using models learned offline (Individual, General, Expert) outperform the User Only control condition across all trials. Users under the shared control condition using models learned online (Online) start off performing poorly, but quickly begin to outperform the User Only control condition and, in the end, achieve the same level of success as those under the offline shared control conditions.}\n \\label{fig-res-success-by-trial}\n\\end{figure}\n\nTo provide quantitative evidence of this visual trend, we perform the same types of statistical analyses as in previous sections, but now include data from the \\textit{Online Koopman} as a fifth experimental condition. For ease of discussion we refer to the \\textit{Individual}, \\textit{General} and \\textit{Expert Koopman} model-based shared control conditions as the offline learning conditions, and the \\textit{Online Koopman} model-based shared control as the online learning condition. As users provide more data in the Online Koopman condition than in all other conditions, we perform two sets of analyses. First, we compare the data from the first ten trials from the Online Koopman condition to all other control conditions. We then re-perform the same tests, but use the final ten trials from the Online Koopman condition. By comparing these results, we can evaluate the efficacy of the online learning paradigm, and also analyze the effect of the amount of data used during the learning process. \n\n\\subsubsection{Statistical Analysis of the First Ten Trials}\n\\label{sec-study-two-results-online-first-ten}\n\nAn analysis of variance finds a statistically significant difference between the various control conditions along the primary success metric ($F(4, 74) = 5.35, p < 0.001$). Post-hoc t-tests find that all offline learning conditions significantly outperform the Online Koopman and User Only control conditions ($p < 0.05$ for all cases). 
We do not find the same statistically significant difference between the User Only and Online Koopman conditions. These results suggest that users under the online learning paradigm initially perform on par with how they perform under the User Only control paradigm, but worse than under the offline control conditions. This analysis is consistent with our expectations since, in the online condition, the model of the joint system is initialized randomly and therefore does a poor job of assisting the user. However, it is also important that this online shared control does not degrade performance in comparison to the User Only paradigm, suggesting that there is little downside to employing the online learning paradigm while the model is still being learned. \n\n\\subsubsection{Statistical Analysis of the Final Ten Trials}\n\\label{sec-study-two-results-online-last-ten}\n\nAs a point of comparison, we now re-run the same statistical tests using the final ten trials from the Online Koopman condition. An analysis of variance finds a statistically significant difference between the various control conditions along the primary success metric ($F(4, 74) = 3.55, p < 0.05$) (see Figure~\\ref{fig-avg-metrics-second-online-study}). Post-hoc t-tests find that all shared control conditions (using models learned offline and online) significantly outperform the User Only control paradigm ($p < 0.01$ for all conditions). This result is different from our analysis of the first ten trials and suggests that the learned model improves significantly with more data and is now on par with the models learned in the offline conditions. No other pairings show statistically significant differences. \n\n\\begin{figure}[!h]\n \\centering\n \\includegraphics[width=0.8\\hsize]{results_success_online.png} \n \\caption{Number of successful trials under each control condition (including an online learning paradigm). We find statistically significant differences between the User Only condition and each shared control condition ($p < 0.01$).}\n \\label{fig-avg-metrics-second-online-study}\n\\end{figure}\n\nThe visual trend present in Figure~\\ref{fig-res-success-by-trial} and the statistical analysis demonstrated in Figure~\\ref{fig-avg-metrics-second-online-study} suggest that the Koopman operator is able to quickly learn an actionable representation of the joint human-machine system. These results also demonstrate the efficacy of our model-based shared control algorithm in an online learning scenario and in limited data regimes. Here we note that follow-up studies are required to tease apart the impact of the model learning process and the user's experience controlling the dynamic system when comparing the offline paradigms to the online paradigm. Notably, in the offline learning paradigm, users undergo 10 trials of training at the start of the experiment. In contrast, and as stated in Section~\\ref{sec-sub-sub-protocol}, all users operate the system under the online learning paradigm as the final condition. For this reason, we do not account for user experience in this condition and therefore highlight the data-efficiency of the model learning process instead of the overall task performance in this condition.
The main takeaway from this portion of the analysis is therefore that an actionable Koopman operator can be learned \\textit{quickly}, from significantly less data than alternative approaches commonly found in the literature, like neural networks~(\\cite{nagabandi2018neural}).\n\n\\subsection{Linear and Nonlinear Model-based Shared Control}\n\nThe final piece of analysis we perform in this work is related to the impact that nonlinear modeling and policy generation techniques have on our model-based shared control paradigm. For this analysis we compare the User Only control condition to the three offline shared control conditions. Unlike the prior sections, the data we analyze to evaluate the ideas presented in this section comes from both the linear and nonlinear MbSC studies.\n\n\\begin{figure}[h]\n \\centering\n \\captionsetup[subfigure]{justification=centering}\n \\begin{subfigure}[t]{0.49\\hsize}\n \t\\centering\n \t\\includegraphics[width=\\hsize]{results_success_linear.png} \n \t\\caption{Linear MbSC.}\n \t\\label{fig-results-comparison-success-linear}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.49\\hsize}\n \\centering\n \\includegraphics[width=\\hsize]{results_success_nonlinear.png} \n \\caption{Nonlinear MbSC.}\n \\label{fig-results-comparison-success-nonlinear}\n \\end{subfigure}\n \\caption{A comparison of the average success rate under (a) linear and (b) nonlinear model-based shared control to User Only control.}\n \\label{fig-results-comparison-success}\n\\end{figure} \n\n\nThe average success rate of users under each control paradigm for both studies is presented in Figure~\\ref{fig-results-comparison-success}. In the linear study~(\\cite{broad2017learning}) we observe a trend (see Figure~\\ref{fig-results-comparison-success-linear}) that suggests users perform better under the shared control paradigm, but we do not find statistically significant evidence of this observation. In contrast, we find that model-based shared control using nonlinear modeling and policy generation techniques does statistically improve the success rate when compared to a User Only control paradigm.\n\n\nOne potential explanation for the difference we find in the results of the two studies is that the nonlinear basis produces more accurate models of the system dynamics than the linear basis. To explore this explanation, we compare the predictive capabilities of a Koopman operator learned with a linear basis to those of one learned with a nonlinear basis. This analysis is performed by comparing the predicted system states with ground truth data. We evaluate the error (mean and variance) as a function of prediction horizon (a.k.a. the H-step error). Figure~\\ref{fig-h-step-accuracy} depicts the raw error (in meters) of Koopman operators trained using linear and nonlinear bases. \n\n\\begin{figure}[!h]\n\t\\centering\n\t\\includegraphics[width=0.9\\hsize]{h_step_model_prediction_accuracy_joint.png}\n\t\\caption{H-step prediction accuracy of Koopman operator models based on linear and nonlinear bases. Error is computed as the Euclidean distance between the predicted $(x, y)$ values and the ground truth $(x, y)$ values.}\n\t\\label{fig-h-step-accuracy}\n\\end{figure}\n\nOur analysis of the predictive capabilities of the Koopman operator models demonstrates that each is highly accurate. The nonlinear model does slightly outperform the linear model as the prediction horizon grows; however, we find that both models are able to produce single-step predictions with error on the scale of $10^{-3}$ meters.
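In sketch form, the H-step error reported in Figure~\ref{fig-h-step-accuracy} can be computed by rolling the learned model forward $H$ steps from each point of a held-out trajectory, using the recorded controls, and accumulating the Euclidean distance between the predicted and recorded positions. The snippet below is illustrative only and reuses the hypothetical prediction function from the earlier sketch.
\begin{verbatim}
def h_step_error(K, X, U, H):
    # X: (T, 6) recorded states, U: (T, 2) recorded controls
    errors = []
    for t in range(len(X) - H):
        x = X[t]
        for k in range(H):
            x = predict_next_state(K, x, U[t + k])
        errors.append(np.linalg.norm(x[:2] - X[t + H][:2]))  # (x, y) error
    return np.mean(errors), np.var(errors)
\end{verbatim}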
As a reminder to the reader, the state space is bounded with $X \\in (-10, 10), Y \\in (0, 16)$. This analysis suggests that the choice of basis function does not cause the observed difference in average success rate between the two studies. Instead, the important design decision may be the choice of model predictive control algorithm. In the linear study we use an infinite horizon LQR to produce autonomous control policies, whereas in the nonlinear study we use a receding-horizon Model Predictive Control (MPC) to produce autonomous control. Our interpretation of these results is that the receding horizon nature of MPC is better suited to the visual planning approach that human operators use when solving the lunar lander task.\n\n\\section{Discussion}\n\\label{sec-discussion}\n\nIn this section, we highlight a number of main takeaways that stem from our analysis. To begin, the results of our human-subjects studies demonstrate that our model-based shared control paradigm is able to (1) successfully learn a model of the joint human-machine system from observation data, and (2) use the learned system model to generate autonomous policies that can help assist a human partner achieve a desired goal. We evaluate the predictive capabilities of the learned system models through a comparison to ground truth trajectory data (see Figure~\\ref{fig-h-step-accuracy}) and evaluate the impact of the assistive shared control system through a comparison of performance (success rate, see Figure~\\ref{fig-results-success-nonlinear}) with a User Only (or natural learning) control paradigm. All analyses support the idea that MbSC can help improve the control skill of a human operator both when they are able to achieve a task on their own and when they are not.\n\nAdditional evaluations demonstrate that the learned system and control dynamics generalize across users, and suggests that, unlike in other human-machine interaction paradigms~(\\cite{sadigh2016information, macindoe2012pomcop, javdani2015shared}), personalization is not required for defining shared control paradigms of generic human-machine systems. Specifically, we find that the demonstration data used to learn the system and control models does not need to come from an optimal, or expert, controller, and can instead come from \\textit{any} human operator. Therefore, at a base level, the controller does not need to be personalized to each individual user as the learned model captures all necessary information. This idea is important for application in real-world scenarios where personalization of control paradigms can be time-consuming, costly, and challenging to appropriately define, often due to the variety in preferences described by human operators~(\\cite{gopinath2016human, erdogan2017effect}).\n\nWe also demonstrate that our approach can be used in an online learning setting. Importantly, we find that the model is able to learn very quickly, from limited amounts of data. In the Online Koopman condition, each trial took an average of 18 seconds, and therefore provided 180 data points. From our analysis in Section~\\ref{sec-study-two-results-online-last-ten}, we find that we are able to learn an effective model of the joint system after only 5 trials (or about 900 data points). Our model learning technique is also well suited for an online learning paradigm as it is not computationally intensive and can easily run at 50Hz on a Core i7 laptop with 8 GB of RAM. 
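One reason the approach is computationally light is that the EDMD solution can be maintained incrementally: the matrices $G$ and $A$ are simple sums over snapshots, so each newly admitted sample requires only a rank-one update followed by the pseudoinverse of a $25 \times 25$ matrix. A minimal sketch of such an online update is given below (illustrative only; the random initialization is one possible reading of the naive initialization described in the study protocol, and the basis function is the hypothetical one from the earlier sketch).
\begin{verbatim}
class OnlineKoopman:
    def __init__(self, dim=25):
        # naive random initialization; quickly dominated once data arrives
        self.G = np.random.uniform(0, 1, (dim, dim))
        self.A = np.random.uniform(0, 1, (dim, dim))

    def update(self, x, u, x_next, u_next):
        # rank-one update of the running sums; called on admitted samples
        p, p_next = basis(x, u), basis(x_next, u_next)
        self.G += np.outer(p, p)
        self.A += np.outer(p, p_next)
        return np.linalg.pinv(self.G) @ self.A   # current estimate of K
\end{verbatim}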
Additionally, we find that, even during the learning process, the application of the online model-based shared control algorithm does not significantly degrade the performance of the human operator.\n\nFinally, we also evaluate the impact that nonlinear modeling and policy generation techniques have on our model-based shared control algorithm~(\\cite{broad2017learning}). In particular, we compare the full nonlinear approach, which uses a nonlinear basis when computing the approximation to the Koopman operator and nonlinear model predictive control (SAC) to generate the autonomous policy, against a variant that replaces both with linear counterparts (a linear basis and LQR), and we evaluate how each impacts the ability of a human operator to achieve a desired task. We find that the nonlinear model-based shared control paradigm produces a joint human-machine system that is significantly better along the primary performance metric (task success) than users under a User Only control paradigm. The same result is not found from the data collected under a shared control paradigm that enforced linear constraints (see Figure~\\ref{fig-results-comparison-success}).\n\n\\section{Conclusion}\n\\label{sec-conclusion}\n\nIn this work, we introduce model-based shared control (MbSC). A particularly important aspect of this work is that \\textit{we do not rely on a priori knowledge, or a high-fidelity model, of the system dynamics}. Instead, we learn the system dynamics \\textit{and} information about the user interaction with the system directly from data. We learn this model through an approximation to the Koopman operator, an infinite dimensional linear operator that can exactly model non-linear dynamics. By learning the joint system dynamics through user interaction, the robot's understanding of the human is implicit to the system definition.\n\nResults from two human subjects studies (consisting of 32 total participants) demonstrate that incorporating the learned models into our shared control framework statistically improves the performance of the operator along a number of pertinent metrics. Furthermore, an analysis of trajectory ergodicity demonstrates that our shared control framework is able to encourage the human-machine system to spend a significantly greater percentage of time in desirable states. We also find that the learned system models can be used in shared control systems that generalize across a population of users. Finally, we find that, using this approach, models can be efficiently learned online. In conclusion, we believe that our approach is an effective step towards shared control of human-machine systems with unknown dynamics. This framework is sufficiently general that it could be applied to any robotic system with a human in the loop. Additionally, we have made our code available online at \\url{https:\/\/github.com\/asbroad\/model_based_shared_control}, and include a video depicting a user's control of the dynamic system and the impact of model-based shared control in the supplementary material.\n\n\\begin{acks}\nThis material is based upon work supported by the National Science Foundation under Grant CNS 1329891. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the aforementioned institutions.\n\\end{acks}\n\n\\bibliographystyle{SageH}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\nAerial radiometric survey is a mature field.
Successful prospecting for\neconomically viable ore deposits using the radiation signal from naturally occurring\nrocks stretches back decades~\\cite{Grasty_1975}.\nSurvey systems composed of large volumes of NaI(Tl) scintillator gamma-ray detectors, as much as 20~L,\ncoupled with georeferenced position sensors (now making use of the global\nnavigation satellite system (GNSS)), record gamma energy spectra versus\nposition. This information is later processed to produce maps of natural\npotassium, uranium and thorium concentrations. Standards exist to guide\npractitioners in this area~\\cite{IAEA_1991,IAEA_1995} and vast regions of the\nearth have been covered~\\cite{IAEA_2010}.\nPractitioners have also developed methods to correct for terrain variation in\naerial survey~\\cite{Schwarz1992,ISHIZAKI201782}.\n\nThe emphasis in aerial radiometric survey methods until recently has been on\ndevelopment of techniques suitable for geologic sources, for which the\nsimplification of the source as an infinite and uniform sheet is reasonable in\ncomparison with the distance scales of the survey parameters (altitude, line\nspacing). The higher an aircraft flies, the more that far-away locations\ncontribute to the detected signal, relative to locations directly underneath\nthe aircraft~\\cite{King_1912}. This can have the\nadvantage of allowing for complete coverage in a more economical survey with\nwider line spacing. However, detection systems at higher altitude see a signal\nwhich is effectively averaged over a larger area of the ground. Anthropogenic\nsignals, such as those resulting from a reactor accident or a malicious radiological\ndispersal, could produce hot spots whose concentration would be underestimated\nif averaged over a larger area.\n\nIn this paper, we present a method to deconvolve an aerial radiometric survey\nfor spatial smearing. This kind of problem, requiring inversion of a spatial\ndistribution, is encountered frequently in geophysical surveying. \nGeophysical spatial\ninversion problems are typically underdetermined, and one way of dealing\nwith this has been to select only those solutions which are close to some\npreconceived model~\\cite{Parker_2015}. An approach to spatial\ndeconvolution of airborne spectrometric data which relies on an analytical\nmodel for the response function, and allows underdetermined problems, has been\npublished previously~\\cite{Minty_2016}. A method for spatial\ninversion related to the approach presented here, but using an iterative inversion and\nneglecting uncertainties, was published recently~\\cite{Penny_2015}. Other\ngroups are taking a similar approach to that advocated here, but applied to\nthe inversion of spectra rather than spatial\nmaps~\\cite{Mattingly_2010, Hussein_2012}.\n\nThe method which will be presented here was applied to data obtained using\ndetectors aboard a manned helicopter. Nevertheless, it is prudent to mention\nthe proliferation of work ongoing currently in aerial radiation detection\nfrom unmanned aerial vehicle (UAV) systems. The use of UAV platforms has\nfacilitated the advance of aerial survey methodology.\nA good review of recent publications can be found in~\\cite{Connor_2016}.\n\nThe method which will be presented here involves simply a) determining the\ninfluence of each independent region of the earth's surface on the measured spatial\ndistribution and then b) optimizing the weight coefficients of each region of\nthe surface to obtain the best reproduction of the measured map. 
The number of\npixels of the solution can be chosen such that the problem is not underdetermined. No\npotentially biasing prior assumption about the underlying distribution is\nnecessary. The method can handle complicated detector geometries as the\nresponse functions are determined in simulation. The method could easily be\nextended to apply when there is significant\nterrain variation in the source, such as would be the case in an urban\nenvironment. The method is stable under different starting\nconditions, and naturally allows for propagation of uncertainties from the\nmeasurement to the unfolded result. \n\nWe demonstrate the application of the unfolding method by applying it first to\na synthetic data set. This is compared with the known underlying distribution. \nWe proceed to apply\nthe unfolding method to a real aerial survey measurement acquired following\ndetonation of a radiological dispersal device~\\cite{Sinclair_RDD_2015}.\n\nThis spatial deconvolution method has been\npresented previously at conferences by this\ngroup~\\cite{RDD_CTBT_2015,NSSMIC_RDD_2015}; however, this is the first full write-up.\n\n\\section{Methods}\n\\subsection{Unfolding method}\n\\label{sec:unfolding_method}\nA measurement of surface activity concentration under the uniform infinite\nplane assumption may be denoted $g^{\\mbox{\\scriptsize MEAS}}(x,y)$ where $x$\nand $y$ represent easting and northing in geographic Cartesian coordinates.\nWe seek to determine the true underlying surface radioactivity concentration, $f(x,y)$. $f(x,y)$ is related to $g^{\\mbox{\\scriptsize MEAS}}(x,y)$ through\n\\begin{equation} \ng^{\\mbox{\\scriptsize MEAS}}(x,y) = S [f(x,y)],\n\\end{equation}\nwhere $S$ represents the effect of the measurement system on $f(x,y)$.\n\nWe divide space into $N^{\\mbox{\\scriptsize PAR}}$ pixels $i$, and using Monte Carlo simulation, generate uniform radioactivity distributions in each pixel, $f_i(x,y)$.\n\nThe measurement system $S$ consists of the detection system as well as the air\nand all other absorbing and scattering materials between the source and the\ndetectors, together with the products of scattering. It is represented in the Monte\nCarlo simulation, and the emissions from the radioactive sources $f_i(x,y)$ are transported\nthrough the system $S$ to obtain the template responses\n$g_i(x,y)$, where\n\\begin{equation}\ng_i(x,y) = S[f_i(x,y)].\n\\end{equation}\n\nWe let \n\\begin{equation}\ng(x,y) = \\sum_{i=1}^{N^{\\mbox{\\tiny PAR}}} w_i g_i(x,y)\n\\end{equation}\nand fit $g(x,y)$ to $g^{\\mbox{\\scriptsize MEAS}}(x,y)$ using a $\\chi^2$ minimization~\\cite{Minuit} to extract the weighting coefficients $w_i$.\n\nTo examine this $\\chi^2$ function, let $g_j^{\\mbox{\\scriptsize MEAS}}(x,y)$ represent the $j$th measurement of the activity concentration $g^{\\mbox{\\scriptsize MEAS}}(x,y)$. Then\n\\begin{equation}\n\\chi^2 = \\sum_{j=1}^{N^{\\mbox{\\tiny MEAS}}}\\frac{\\left(g_j^{\\mbox{\\scriptsize MEAS}}(x,y) - \\sum_{i=1}^{N^{\\mbox{\\tiny PAR}}} w_i g_i(x,y)\\right)^2}{e_j^2}\n\\end{equation}\nwhere there are $N^{\\mbox{\\scriptsize MEAS}}$ measurements $g_j^{\\mbox{\\scriptsize MEAS}}(x,y)$ in the problem each with uncertainty $e_j$.\n\nThe estimator of $f(x,y)$ is then the reconstructed radioactivity\nconcentration distribution $f^{\\mbox{\\scriptsize REC}}(x,y)$, where\n\\begin{equation}\nf^{\\mbox{\\scriptsize REC}}(x,y) = \\sum_{i=1}^{N^{\\mbox{\\tiny PAR}}} w_i f_i(x,y).\n\\end{equation}\n\nWe choose to require the problem to be oversampled. 
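As an aside for readers who wish to experiment with this template-fitting step, the following minimal sketch solves the analogous weighted least-squares problem with a non-negativity constraint on the $w_i$. It is illustrative only: the analysis in this work uses the MINUIT $\\chi^2$ minimization and the MINOS error propagation described below, and the array names are assumptions.
\\begin{verbatim}
import numpy as np
from scipy.optimize import nnls

def fit_weights(g_meas, e, templates):
    """Weighted, non-negative least-squares fit of template responses.

    g_meas    : (M,) measured concentrations g_j^MEAS
    e         : (M,) measurement uncertainties e_j
    templates : (M, N) matrix whose column i samples the template
                response g_i(x, y) at the M measurement locations
    Returns the weights w_i (all >= 0) and the chi^2 of the fit.
    """
    A = templates / e[:, None]   # divide each row by its uncertainty
    b = g_meas / e
    w, _ = nnls(A, b)            # argmin ||A w - b||  subject to  w >= 0
    chi2 = float(np.sum((b - A @ w) ** 2))
    return w, chi2
\\end{verbatim}
For an oversampled problem the returned $\\chi^2$ is expected to be comparable to the number of degrees of freedom, as discussed next.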
By oversampled we mean that there is\neverywhere a greater spatial density of measurements than of fit parameters,\ni.e., $N^{\\mbox{\\scriptsize MEAS}} > N^{\\mbox{\\scriptsize PAR}}$.\nThen, provided that the uncertainties $e_j$ in the denominator of the\n$\\chi^{2}$ function encompass all of the uncertainties of the problem, the\nminimum $\\chi^2$ value will be approximately equal to the number of degrees\nof freedom of the problem. \nThe MINOS algorithm~\\cite{Minuit} can then be used to propagate the\n$N^{\\mbox{\\scriptsize MEAS}}$ measurement uncertainties $e_j$ through the fit procedure\nto calculate\nthe $N^{\\mbox{\\scriptsize PAR}}$ uncertainties \n$\\delta w_i$ on the weighting parameters $w_i$.\nIn practice, there are irreconcilable nonstochastic uncertainties affecting the problem which\nmust be included in the\n$e_j$ by application of a constant scaling factor\nto bring $\\chi^{2}$ per degree of freedom to one before the fit uncertainties\ncan be utilized. These are due to the statistical uncertainties\nin the template responses $g_i(x,y)$, and the finite pixellization of the problem.\n\nUncertainties for spatial deconvolution of fallout surveys can be expected to be asymmetric owing to the\nboundary condition that the amount of deposition cannot physically be less\nthan zero. MINOS works by setting the positive and negative uncertainty for\neach parameter to the amount the parameter has to vary in each direction such\nthat $\\chi^{2}$ increases by one. Thus MINOS naturally allows for asymmetric\nuncertainties and is particularly suited to uncertainty analysis in the\nmeasurement of the amount of radioactivity.\n\n\\subsection{Experimental method -- aerial survey}\n\\label{sec:exp_method}\nThe experimental methods to obtain the data to which we will apply the unfolding method have been described previously~\\cite{Sinclair_RDD_2015}. We will repeat only the most essential points here.\nThree RDD detonations were conducted during the trial. In the first, $\\sim$31~GBq of \\mbox{La-140} was dispersed explosively, with radioactive debris subsequently carried by wind as far as $\\sim$2~km from the site of the detonation.\nAerial gamma-ray spectrometric surveys were conducted using two\n10.2~x~10.2~x~40.6~cm$^{3}$ NaI(Tl) crystals mounted exterior to a helicopter\nin a skid-mounted cargo expansion basket. GNSS antennae and inertial navigation and altimetry systems\nwere also installed in the basket, to determine location.\nThe system recorded a linearized 1024-channel gamma-ray energy spectrum over the domain 0~MeV to 3~MeV, tagged with the location information, once per second.\nPost-acquisition, counts were selected from the spectra in an energy window of approximately four sigma in width around the 1.6~MeV \\mbox{La-140} photopeak. \nThese count rates were corrected for lag and dead time. Count rates were\nalso corrected for variations in\nflight altitude to the nominal flying height, making use of the Shuttle Radar\nTopography Mission~\\cite{g.2007shuttle} digital terrain model\nadjusted to the local elevation using a GNSS base station.\nBackgrounds due to the radioactivity of the earth, the\natmosphere, the aircraft and its occupants, and cosmic rays, were all\nsubtracted. The count rates were all corrected for radioactive decay, back to\nthe time of the blast. A coefficient to convert the measurements from counts\nper second to kBq\/m$^2$, assuming an infinite and uniform source, was obtained\nfrom experimentally validated Monte Carlo simulation. 
\nFinally, measurements of radioactivity concentration in kBq\/m$^2$ for four\naerial surveys, two conducted after the first blast and one conducted after\neach of the subsequent two blasts, were presented.\n\nIn this paper, we will discuss only the data recorded during the first aerial\nsurvey after the first blast.\nThis survey was flown at a nominal 40~m flying height, at a speed of\n25~m\/s and with a flight-line spacing of 50~m.\n\n\\subsection{Experimental method -- truckborne survey}\n\\label{sec:method_truck}\n\n\\subsubsection{Data collection -- truckborne survey}\nTruckborne surveys were driven by criss-crossing the deposited fallout in an\nextemporaneous pattern following\nthe first and third RDD blasts, restricted to the part of the fallout outside\nof a 500~m x 500~m fenced hot zone~\\cite{Green_RDD_2015,Marshall_thesis_2014}. \nThe detection system was mounted in the bed of a pickup truck and consisted of four 10.2~x~10.2~x~40.6~cm$^{3}$\nNaI(Tl) crystals oriented vertically in a self-shielding arrangement for\nazimuthal direction measurement.\nTruckborne data following the first RDD blast will be presented here for\ncomparison with the aerial survey data. The truckborne data has not undergone\nsufficient analysis for a full quantitative evaluation, but the shape will\nnevertheless provide an interesting point of comparison for the interpretation of the aerial survey data.\n\n\\subsubsection{Sensitivity calculation -- truckborne survey}\n\nThe sensitivity of the truckborne system to a \\mbox{La-140} contamination on\nthe ground was determined using experimentally validated Monte Carlo\nsimulation.\nIn the simulations, the detector was placed with the centre of its sensitive\nvolume at a height of 1.2~m above ground. \nThe NaI(Tl) crystals and their housing were represented in the simulation in \ntheir vertical arrangement. 
\nThe $\\sim 5000$~kg of mass of the pick-up\ntruck carrying the detector was represented by simple blocks of steel.\n\nSensitivity validation data was collected by placing a \\mbox{La-140} source of\nknown emission rate at fixed locations around the truck and detector system.\nThe dead material was adjusted in size and position in the model until an acceptable agreement with the\nsensitivity validation data was obtained.\nThe engine and other materials at the front of the truck effectively block\nradiation coming from that direction, and most of the detected counts arise\nfrom radiation originating to the side and rear of the truck where there is\ncomparatively little material.\nThe uncertainty in the estimation of the sensitivity of the truckborne\ndetector obtained in this manner is large, about 20\\% to 30\\%.\nThis level of accuracy is sufficient for illumination of the value of the spatial deconvolution applied to the aerial\ndata by comparison with data collected from a ground-based system.\n\nFig.~\\ref{fig:truck_sensitivity}~a) shows the number of energy deposits\nregistered in the detector per second as a function of the radius $R$ of a\ndisc-shaped source centered beneath the truckborne detector on the ground as\ndetermined by the simulation.\n\\begin{figure}[h]\n \\begin{center}\n \\begin{tabular}{c}\n \\begin{overpic}[trim = .1cm .1cm .1cm .1cm, clip = true, height=4.7cm]{2crystal_sensivity_25.eps}\\put(18,52){\\textcolor{black}{a)}}\\end{overpic}\n \\hspace*{-.1cm}\n \\begin{overpic}[trim = .1cm .1cm .1cm .1cm, clip = true, height=4.7cm]{trk_scaled_25.eps}\\put(18,52){\\textcolor{black}{b)}}\\end{overpic}\n \\end{tabular}\n \\end{center}\n \\caption\n \n { \\label{fig:truck_sensitivity} \na) Sensitivity of the truckborne survey system to a disc-shaped distribution of isotropic emitters of the \\mbox{La-140} gamma spectrum, as a function of disk radius. Black dots show EGSnrc prediction. Solid curve shows fit of the expression $C(R,H)$ to the synthetic data. The dashed line shows the asymptote of the fit curve and represents the sensitivity to a uniform and infinite sheet source.\nb) Comparison of the shapes of the sensitivity curves for the aerial and truckborne survey systems. The dashed line shows the sensitivity to a disc source relative to the sensitivity to an infinite sheet as a function of disc radius for an aerial survey system at an altitude of 40~m~\\cite{Sinclair_RDD_2015}. 
The solid line shows the equivalent relative sensitivity curve for the truckborne survey system.\n}\n\\end{figure} \nThe expression for the flux, $\\Phi(R,H)$, due to a surface activity concentration $S_0$ at a point an elevation $H$ above a disc-shaped source of radius $R$ can be readily calculated~\\cite{King_1912},\n\\begin{equation}\n \\Phi(R,H) = \\frac{S_0}{2} (E_1(\\mu_{\\mbox{\\scriptsize a}}H) - E_1(\\mu_{\\mbox{\\scriptsize a}}\\sqrt{H^2+R^2})),\n\\label{eqn:flux_vs_R}\n\\end{equation}\nwhere E$_1$ is the exponential integral and $\\mu_{\\mbox{\\scriptsize a}}$ is the linear attenuation coefficient for gamma rays in \nair.\nTo determine the asymptotic sensitivity, we formed a function for the detected\ncounts as a function of the source radius, \n\\begin{equation}\n C(R,H) = \\epsilon \\Phi(R,H),\n\\label{eqn:count_rate}\n\\end{equation}\nand fit the expression for $C(R,H)$ to the synthetic data to obtain the\ndetection efficiency $\\epsilon$.\nThe fit result is shown as the solid curve in\nFig.~\\ref{fig:truck_sensitivity}a) and the asymptotic sensitivity, shown by\nthe dashed line, is $\\sim 71$~s$^{-1}$\/(kBq\/m$^2$).\nAs mentioned, the uncertainty on this sensitivity is large due to the lack of\ndetailed representation of the shielding material of the truck in the\nsimulation.\nNevertheless, the shape of the sensitivity curve is of value, as is the shape\nof the profile\nof counts measured with the truckborne system as it traversed the deposited plume.\n\n\\subsubsection{Comparison of aerial and truckborne sensitivity curves}\nFig.~\\ref{fig:truck_sensitivity}b) shows a comparison of the shapes of the\nsensitivity curves of the ground-based and aerial systems. Despite the tall\nnarrow shape of its detectors, which would tend to increase sensitivity to incoming radiation\nfrom the sides, for the truckborne system at ``altitude'' $H=1.2$~m, a greater\npercentage of detected gamma rays originate close to the point directly\nunderneath the detector as compared to the airborne\nsystem at $H=40$~m.\nThis leads to superior spatial precision in the results of truckborne survey.\n\n\\subsection{Aerial survey template response determination through Monte Carlo simulation}\n\\label{sec:simulation}\nThe radiation transport model EGSnrc~\\cite{EGSnrc1,EGSnrc2} was used to\ngenerate the individual uniform pixel sources $f_i(x,y)$ and to propagate the\ngenerated gamma rays and their interaction products through air and into the detection volume to create the\ntemplate responses $g_i(x,y)$. For the solutions presented herein, the\nsimulation geometry represented the experimental setup during the first RDD\ntrial~\\cite{Sinclair_RDD_2015}. The actual aerial survey system was described and shown\nin photographs in the previous publication and briefly reiterated in \nSect.~\\ref{sec:exp_method}.\nThe model of that system in EGSnrc is shown in\nFig.~\\ref{fig:egspp}~a). The simulated gamma detection system included the\ntwo NaI(Tl) crystals, as well as their aluminum cladding, and felt, foam and\ncarbon fibre enclosure. The exterior-mounted basket containing the detectors\nwas modelled in a simplified manner with 51 3~mm~x~1.5~mm bars of aluminum,\nrepresenting the basket strands, running the length of the basket in a\nsemicircle around its long axis. Dead materials due to the photo-multiplier\ntube readout of the crystals and associated electronics, as well as the\naltimeter and GNSS receivers, were modelled as simple blocks of metal of the\nappropriate overall mass. 
The ground was represented as perfectly flat with\nthe detection system at a height of 40~m. \nThe model was validated experimentally using data from point sources of known\nemission rate placed at\nknown locations around the actual detector. The uncertainty associated with approximation in the representation of the\nmeasurement system in the model was evaluated by variation of the arrangement\nof dead material in the model, by variation of the detector's position,\naltitude and\nattitude and by variation of the detector's energy resolution within\nreasonable limits. This uncertainty was evaluated to be about 12\\% on the activity\nconcentration and it was included in the overall systematic\nuncertainty quoted in the publication~\\cite{Sinclair_RDD_2015}.\n\n\\begin{figure}[h]\n \\begin{center}\n \\begin{tabular}{c}\n \\begin{overpic}[trim=0cm -5cm 0cm 3cm, clip = true, height=5.1cm]{basket_model_4.eps}\\put(5,89){\\textcolor{white}{a)}}\\end{overpic}\n \\hspace{1.4cm}\n \\begin{overpic}[height=5.5cm]{K_53_resp.eps}\\put(18,70){\\textcolor{white}{b)}}\\end{overpic}\\\\\n \\end{tabular}\n \\end{center}\n \\caption\n \n { \\label{fig:egspp} \na) The aerial survey system as modelled in EGSnrc. The two\n10.2~x~10.2~x~40.6~cm$^{3}$ NaI(Tl) crystals\nare represented in their housing, shown in purple, with steel blocks at the\nends representing the dead material of the PMTs and readout electronics. The\ntwo crystals are mounted lengthwise with their centres 152.4~cm apart on an aluminium plate which is \nrepresented with mass and dimensions according to the engineering drawing.\nAuxiliary instrumentation is represented by an\naluminium block in the centre of the basket, of the summed instrument mass.\nThis dead material is shown in green in the figure. The aluminum plate on\nwhich the detectors and auxiliary equipment is mounted is itself attached to a\nbasket which is mounted to\nthe skids on the outside of the helicopter. Dead material of the basket is represented by 51 thin aluminum bars running the length of the basket, in a\nsemicircle around the basket long axis, shown in gold. Uncertainty on the\nsensitivities due to misrepresentation of the system in the model has been\nestimated to be approximately 12\\%.\nb) Response function, $g_{53}(x,y)$, to the pixel source $f_{53}(x,y)$,\nnormalized to the number of generated events, where the $x$-axis shows Easting\nand the $y$-axis shows Northing. The true spatial extent of pixel 53 is indicated by the black square.}\n\\end{figure} \n\nThe simulated pixel sources, $f_i(x,y)$, consisted of uniform distributions of\nisotropic emitters of the \\mbox{La-140} spectrum of gamma rays, including all\nemission energies above 0.05\\% emission probability~\\cite{TabRad_v1}. Gammas\nwere emitted into the full 4$\\pi$ solid angle. Each pixel source $f_i(x,y)$\nwas square, and 50~m on a side. Note that with the survey parameters\nmentioned previously, we have one spectrum recorded approximately every\n25~m~x~50~m in the real data. Thus, the number of measurements in the data is\nhigher than the number of fit parameters in the simulation, the problem is over-determined, and we can expect reasonably rapid convergence of the method to a solution which is stable under different starting conditions.\n\nThe entire region to be unfolded measured 1.5~km~x~1.5~km. To speed up\nconvergence of the fitting, we chose to parallelize the computation, breaking\nthe problem up into individual tiles, each 500~m~x~500~m. 
Only eight of these\ntiles were necessary to cover the area over which the radioactivity actually\nsettled. To knit the tiles together, we extended the fit into a border of\npixel sources 50~m wide, surrounding each tile. Thus the fit area\ncorresponding to each tile was actually 600~m~x~600~m. If, after background\nsubtractions, fewer than one count corresponding to a \\mbox{La-140} energy\ndeposit was measured in the detection system in the region of space\ncorresponding to one fit parameter $w_i$, then that fit parameter was assigned\na value of zero and left out of the minimization. Thus, 144 or fewer\nfit parameters $w_i$ were determined from the inversion of each tile.\nTo merge the tiles after the individual inversions, the weighting parameters of the border pixels were simply ignored, and the central 500~m~x~500~m areas of the tiles were placed next to each other.\n\nThe simulation included one detection system for each of the 144 pixel sources, centered\nlaterally within the pixel, at 40~m height. \nHere we have used multiple detection systems at different places at one time to\nrepresent the real world in which one detection system was in different places\nat different times. Given the 40~m height of the detection systems above the\nsource, the 50~m spacing between them and the requirement that the deposited\nenergy lie in the highest-energy photopeak of the source, the error in this\napproximation, which would come from a single emitted gamma ray depositing energy in two\ndifferent detection systems, is negligible.\n\nThe volume of the air in which the gamma rays and electrons were tracked in each\ntile extended 1.2~km in easting (or $x$) and northing (or $y$), and 600~m up.\nA 5~m-thick concrete slab, underneath the emitters and filling the lateral\ndimensions of the simulated volume, represented the earth.\n\nThe simulation included all physical interactions of the emitted gammas and of\nthe gammas, electrons and positrons resulting from those interactions. Scattering\nand absorption in the air and ``earth'' of the simulated volume, and in the dead material\nof the basket system and housing of the NaI(Tl) crystals, was included. All\nprocesses leading to energy deposit within the NaI(Tl) crystal from either the\ndirect gamma rays, or the products of scattering, were included. Energy\ndeposits in the NaI(Tl) were then smeared to create simulated spectra,\naccording to the energy resolutions of the crystals determined during the experiment.\n\nFig.~\\ref{fig:egspp}~b) shows the response, $g_{53}(x,y)$, of the 144\ndetection systems of one tile, as a percentage of the number of events\ngenerated in a single one of the pixel sources, $f_{53}(x,y)$, where this\nhappens to correspond to the pixel numbered ``53''. The extent of the source\nis indicated by the black square. Note how the measured response extends much\nmore broadly in space. This is the spatial smearing which will be corrected\nby the unfolding.\n\n\\section{Results}\n\n\\subsection{Results obtained by application of the method to synthetic data}\nWe begin by applying the spatial inversion method to a synthetic data set for\nwhich we know the underlying distribution $f(x,y)$. \nAn annular region consisting of the area between two circles of radius 100~m\nand 250~m, centered at (550~m, 550~m), was uniformly populated with\n10~$\\cdot$~10$^{9}$ emitters of the \\mbox{La-140} gamma spectrum. Considering the\nbranching ratios for the \\mbox{La-140} gamma emissions, this amounts to a source\nconcentration of 28.4~kBq\/m$^2$. 
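For reference, this value is consistent with treating the 10~$\\cdot$~10$^{9}$ emitters as gamma emissions per second spread uniformly over the annulus area, together with roughly 2.1 gamma rays emitted per \\mbox{La-140} decay above the stated emission-probability threshold (the latter figure is our own illustrative estimate from the branching ratios):
\\begin{equation}
\\frac{10\\cdot10^{9}\\ \\gamma\\ \\mathrm{s}^{-1}}{\\pi\\left[(250\\ \\mathrm{m})^{2}-(100\\ \\mathrm{m})^{2}\\right]}\\times\\frac{1\\ \\mathrm{decay}}{2.1\\ \\gamma}\\approx 2.9\\times10^{4}\\ \\mathrm{Bq\\ m^{-2}}\\approx 29\\ \\mathrm{kBq\\ m^{-2}},
\\end{equation}
in line with the quoted concentration.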
The annular region\nis indicated by the black outlines in Fig.'s~\\ref{fig:annulus_data_and_fit}~a)\nand~b).\n\\begin{figure}[h]\n \\begin{center}\n \\begin{tabular}{c}\n \\begin{overpic}[trim = .1cm .1cm .1cm .1cm, clip = true, height=6.5cm, angle=270]{gdconc_K_giant_fast_99.eps}\\put(18,70){\\textcolor{white}{a)}}\\end{overpic}\n \\hspace{.4cm}\n \\begin{overpic}[trim = .1cm .1cm .1cm .1cm, clip = true, height=6.5cm, angle=270]{hDFIT_GIANT_ALL.eps}\\put(18,70){\\textcolor{white}{b)}}\\end{overpic}\\\\\n \\end{tabular}\n \\end{center}\n \\caption\n \n { \\label{fig:annulus_data_and_fit} \nSynthetic aerial survey and fit results, where the $x$\naxis is Easting, and the $y$ axis is Northing. The source was of concentration \n28.4~kBq\/m$^2$ and\nannular with inner radius 100~m and outer radius 250~m, centered at\n(550~m, 550~m), as indicated by the area between the black circles.\na) Synthetic aerial survey measurement result.\nb) Result of fit of template responses $g_i(x,y)$ to the synthetic aerial survey.}\n\\end{figure} \n\nGeneration of the synthetic dataset makes use of the\nidentical detector simulations as used in generating the template \nresponses $g_i(x,y)$ as described in Section~\\ref{sec:simulation}, however the detection systems were more narrowly spaced\nin the synthetic dataset.\nDetection systems for the synthetic dataset were located every 20~m in $x$ and\n$y$ such that there were 225 detection systems in total in each 600~m~x~600~m\ntile.\n\nThe template sources $f_i(x,y)$ and the template responses $g_i(x,y)$ utilized\nin the inversion are the same as will be used for the real data and are as\ndescribed in Section~\\ref{sec:simulation}.\nThus, the synthetic data measurement density of 20~m~x~20~m exceeds the\ndensity of the parametrization, 25~m~x~25~m, and the problem\nis overdetermined, as desired.\n\nFig.~\\ref{fig:annulus_data_and_fit}~a) shows the synthetic aerial survey\nmeasurement. The result is broader than the underlying true source\ndistribution. Contamination appears to extend outside of the known true\nunderlying borders. The central area of the annulus appears to be filled with\nsignificant contamination. The average concentration of contamination\nreconstructed within the annular region is lower than the known true concentration. \n\nFig.~\\ref{fig:annulus_data_and_fit}~b) shows the result of the fit of the\ntemplate measured activity distributions to the synthetic aerial survey\nmeasurement. The colour scale used in Fig.~\\ref{fig:annulus_data_and_fit}~b) is\nthe same as that of Fig.~\\ref{fig:annulus_data_and_fit}~a).\nThe first observation to note is that the tile knitting procedure\napparently works well. The synthetic data stretches over six of the\noverlapping 600~m x\n600~m simulation tiles. After knitting of the inverted result into adjacent \nnon-overlapping\n500~m~x~500~m tiles, there is seamless matching of the reconstructed activity\nconcentration at the tile borders.\nAlso to note is that within the limitations of the somewhat coarse pixellization of\nthe problem, the survey is well reproduced by the fit. 
\nThe tendency of the\nmeasurement to extend outside of the bounds of the true source is reproduced,\nas is the tendency of the measurement to underestimate the magnitude of the\nconcentration within the source boundary.\n\nThe reconstruction of the true underlying source distribution for the\nsynthetic data is presented in Fig.~\\ref{fig:annulus_inverted}.\n\\begin{figure}[h]\n \\begin{center}\n \\begin{tabular}{c}\n \\begin{overpic}[trim = .1cm .1cm .1cm .1cm, clip = true, height=6.5cm, angle=270]{hCONC_GIANT_ALL.eps}\\put(18,65){\\textcolor{white}{a)}}\\end{overpic}\\\\\n \\begin{overpic}[trim = .1cm .1cm .1cm .1cm, clip = true, height=6.5cm, angle=270]{hpERR_GIANT_ALL.eps}\\put(18,65){\\textcolor{white}{b)}}\\end{overpic}\n \\hspace{.4cm}\n \\begin{overpic}[trim = .1cm .1cm .1cm .1cm, clip = true, height=6.5cm, angle=270]{hmERR_GIANT_ALL.eps}\\put(18,65){\\textcolor{white}{c)}}\\end{overpic}\\\\\n \\end{tabular}\n \\end{center}\n \\caption\n \n { \\label{fig:annulus_inverted} \nSpatially deconvolved synthetic aerial survey result, $x$ axis is Easting and\n$y$ axis is Northing. Black circles indicate\nthe true annular source distribution.\na) The spatially deconvolved measurement.\nb) Positive statistical uncertainty on the spatially deconvolved measurement.\nc) Negative statistical uncertainty on the spatially deconvolved measurement.}\n\\end{figure} \nAs shown in Fig.~\\ref{fig:annulus_inverted}~a), the inversion process results\nin a reconstructed source distribution which is higher in magnitude and closer\nto the true activity concentration than the\ninitial survey measurement. The boundaries of the actual source are \nbetter reproduced after inversion, in particular the absence of radioactivity\nin the centre of the annulus is recovered.\n\nA major advantage of the spatial\ndeconvolution method presented here is that statistical uncertainties affecting\nthe measurement may be propagated through the minimization procedure by the MINOS algorithm as described\nin Sect.~\\ref{sec:unfolding_method}.\n A map\nshowing the one-sigma positive statistical uncertainty on the reconstructed surface activity\nconcentration is shown in\nFig.~\\ref{fig:annulus_inverted}~b) and the corresponding negative statistical uncertainty\nis shown in Fig.~\\ref{fig:annulus_inverted}~c) where the same colour scale is\nused for the uncertainties as for the measurement shown in \nFig.~\\ref{fig:annulus_inverted}~a).\nConsidering the uncertainties, the ability of the method to reconstruct the\ntrue underlying activity concentration is good. The reconstructed activity\nconcentration magnitude is in agreement with the known true activity\nconcentration within uncertainties in most places. For example, near the\ninner boundary of the annulus where the reconstructed concentration is low\ncompared to the known true concentration, further negative movement of the\nresult is not allowed by the negative uncertainty. 
The positive uncertainty exceeds the\nnegative uncertainty in magnitude, and does allow for a positive movement of the\nreconstructed value.\n\n\\subsection{Real data collected following detonation of a radiological\n dispersal device}\n\\subsubsection{Spatial deconvolution of aerial survey data}\n\nThe aerial survey measurement from the first RDD trial is shown in Fig.~\\ref{fig:data_and_fit}~a).\n\\begin{figure}[h]\n \\begin{center}\n \\begin{tabular}{c}\n \\begin{overpic}[trim = .1cm .1cm .1cm .1cm, clip = true, height=6.5cm, angle=270]{gdconc_99_1.eps}\\put(18,70){\\textcolor{white}{a)}}\\end{overpic}\n \\hspace{.4cm}\n \\begin{overpic}[trim = .1cm .1cm .1cm .1cm, clip = true, height=6.5cm, angle=270]{hDFIT_ALL.eps}\\put(18,70){\\textcolor{white}{b)}}\\end{overpic}\\\\\n \\end{tabular}\n \\end{center}\n \\caption\n \n { \\label{fig:data_and_fit} \na) Aerial survey measurement of the distribution of fallout following detonation\n of the radiological dispersal device.\nb) Result of fit of template histograms to the aerial survey measurement.}\n\\end{figure}\nThis result has been published previously~\\cite{Sinclair_RDD_2015} and the methods to\narrive at the result were recapitulated here in Sect.~\\ref{sec:exp_method}.\nWe observe an area of activity concentration exceeding 100~kBq\/m$^2$ near\nground zero of the detonation, with a long deposited plume of maximum width\nof 300~m to 400~m, and significant measured radioactivity extending over a\ndistance of about 2~km.\nThe total systematic uncertainty affecting this measurement was determined to\nbe around 12\\% and the statistical uncertainty peaks at 6~kBq\/m$^2$.\n\nFig.~\\ref{fig:data_and_fit}~b) shows the result of the $\\chi^2$ fit of\nthe weighting coefficients of the template response functions to the\nmeasurement of Fig.~\\ref{fig:data_and_fit}~a). The colour scales of \nFig.'s~\\ref{fig:data_and_fit}~a) and~b) have been chosen to be equal. \n(This colour\nscale has in fact been set to the optimal colour scale for the data from a\ntruckborne survey which will be presented in the upcoming \nSect.~\\ref{sec:truckborne}.)\nThe features of the measurement are broadly well-reproduced by the fit,\nconsidering the pixellization of the reconstruction. 
In particular, the\nmagnitude, width and extent of the contamination are well represented in the\nresult of the fit.\n\nThe underlying distribution of pixel sources which gives rise to the fit\nresult of Fig.~\\ref{fig:data_and_fit}~b) is presented in\nFig.~\\ref{fig:data_inverted}~a).\n\\begin{figure}[h]\n \\begin{center}\n \\begin{tabular}{c}\n \\begin{overpic}[trim = .1cm .1cm .1cm .1cm, clip = true, height=6.5cm, angle=270]{hCONC_ALL.eps}\\put(18,70){\\textcolor{white}{a)}}\\end{overpic}\\\\\n \\begin{overpic}[trim = .1cm .1cm .1cm .1cm, clip = true, height=6.5cm, angle=270]{hpERR_ALL.eps}\\put(18,70){\\textcolor{white}{b)}}\\end{overpic}\n \\hspace{.4cm}\n \\begin{overpic}[trim = .1cm .1cm .1cm .1cm, clip = true, height=6.5cm, angle=270]{hmERR_ALL.eps}\\put(18,70){\\textcolor{white}{c)}}\\end{overpic}\\\\\n \\end{tabular}\n \\end{center}\n \\caption\n \n { \\label{fig:data_inverted} \na) Spatially deconvolved aerial survey measurement of fallout following\nradiological dispersal device blast.\nb) Positive statistical uncertainty on spatially deconvolved measurement.\nc) Negative statistical uncertainty on spatially deconvolved measurement.}\n\\end{figure}\n This is the reconstructed distribution of\nthe surface activity concentration following spatial deconvolution.\nWe find that the width of the deposited plume is now much smaller than the\noriginal undeconvolved measurement, around\n50~m. Correspondingly, the peak concentration is higher, over 400~kBq\/m$^2$.\nNote that the width of the deposited plume is small with respect to the altitude and line spacing of the\nsurvey. This accounts for its significant overestimation when the ``infinite and\nuniform'' approximation was used to obtain a concentration measurement from the\nmeasured counts as shown in Fig.~\\ref{fig:data_and_fit}~a) and~\\cite{Sinclair_RDD_2015}.\nThe length of the deposition is\nhowever much larger than the survey parameters, so in this dimension the\n``infinite and uniform'' sheet approximation is not so bad and the original\nlength measurement is not much altered by the spatial deconvolution.\n\nThe positive and negative statistical uncertainties on the spatially\ndeconvolved deposited fallout map are shown in Fig.'s~\\ref{fig:data_inverted}~b)\nand~c) respectively. Note that the statistical uncertainties affecting the\nmeasurement are very small, and on the colour scale of\nFig.~\\ref{fig:data_inverted}~a) would be difficult to see, so the colour scale in\nFig.'s~\\ref{fig:data_inverted}~b) and~c) is chosen to optimize the\nrepresentation of the information in Fig.~\\ref{fig:data_inverted}~c). The\nuncertainties reveal important features of the measurement and its inversion.\nThe positive uncertainty indicates a region extending away from the measured deposited\nplume axis in which a positive quantity of activity is permitted, however at a\nvery low amount of between 5~kBq\/m$^2$ and 10~kBq\/m$^2$. The negative\nuncertainty shows that the measurement of the presence and distribution of\nradioactivity is significantly above zero.\n\nThe MINOS error propagation includes only the stochastic uncertainties on the\nmeasurement. There are additional uncertainties which are systematic and arise from\napproximations in the representation of the system in the simulation. 
These include mis-representation of the position, particularly the altitude; the attitude (yaw, pitch and roll); the amount of shielding material in the basket containing the detectors; the energy resolution and the air density.\nThe systematic uncertainty on the (undeconvolved) radioactivity concentration distribution was \ndetermined to be about 12\\% by variation of these parameters within reasonable limits~\\cite{Sinclair_RDD_2015}.\n\nFor the spatially deconvolved measurement, some of the systematic\nuncertainties must be re-examined as they can be expected to have an effect on the shape of the reconstructed fallout distribution, as well as its overall magnitude. These are the systematic uncertainties associated with the measurement of altitude and pitch angle. \nIt is also interesting to examine the effect of the measurement of yaw angle\non the spatially inverted measurement as it can have no effect at all on the\noriginal undeconvolved measurement which used the infinite sheet approximation\nfor the source and therefore the detector response was invariant under changes of yaw.\n \nThe inertial navigation system\ndetermined the yaw angle \nduring the measurement to be around -30$^\\circ$. The spatial inversion was conducted\ntwice. For the central value of the inversion as presented in \nFig.~\\ref{fig:data_inverted}~a) the helicopter systems in the template\nhistograms were assigned a yaw of -30$^\\circ$ to match the data. To allow for\nchanges in yaw during flight, the regression was repeated with yaw set\nmaximally different at 60$^\\circ$. Pitch was varied from the nominal\n0$^\\circ$ to -10$^\\circ$ according to the maximum deviation of pitch recorded\nby the inertial navigation system while on line during one sortie.\nAltitude was varied from nominal by 1~m to account for approximately one sigma\nof variability in height determination.\nThese variations did not significantly alter the measurement of length and\nwidth of the deposited fallout.\nAdded in quadrature, and considering that some of the variation was already\nincluded in the original systematic uncertainty associated with the\nsensitivity, the deviations do not yield a significant additional systematic\nuncertainty.\nAlthough not a significant additional source of uncertainty for the measurements\npresented here, these sources of uncertainty are worth discussing for the\nbenefit of researchers following this approach under different operating conditions.\n\n\\subsubsection{Comparison of spatially deconvolved aerial survey data\n with truckborne survey data}\n\\label{sec:truckborne}\n\nIn Fig.~\\ref{fig:cftruck}a) the truckborne survey result which followed the\nfirst blast is shown overlaid on\nthe undeconvolved aerial survey result from the same blast\non the same colour scale. 
\n\\begin{figure}[h]\n \\begin{center}\n \\begin{tabular}{c}\n \\begin{overpic}[trim = .1cm .1cm .1cm .1cm, clip = true, height=6.5cm, angle=270]{truck_on_data_1.eps}\\put(18,70){\\textcolor{white}{a)}}\\end{overpic}\n \\hspace{.4cm}\n \\begin{overpic}[trim = .1cm .1cm .1cm .1cm, clip = true, height=6.5cm, angle=270]{truck_on_unfold_1.eps}\\put(18,70){\\textcolor{white}{b)}}\\end{overpic}\\\\\n \\vspace*{.5cm}\\\\ \n \\begin{overpic}[height=6.5cm, angle=0]{RMSD_RDD.eps}\\put(95,45){\\textcolor{black}{c)}}\\end{overpic}\n \\end{tabular}\n \\end{center}\n \\caption\n \n { \\label{fig:cftruck} \na) Aerial radiation survey following detonation of radiological dispersal\ndevice, with results of radiation survey from a truck-based system overlaid.\nb) Spatially deconvolved aerial radiation survey of the RDD fallout, with\nresults of radiation survey from a truck-based system. Colour scale is the\nsame in a) and b) and is optimized to show the range of values of the result from the truckborne survey.\nc) (top) Transects of the deposited RDD plume following the path of the truck-based survey\nsystem. The\nsolid line shows the aerial survey result sampled at the truck location. The dot-dashed line shows the\ntruckborne survey result and the dashed line shows the aerial survey result\nafter spatial deconvolution sampled at the truck location. (bottom)\nRoot-mean-square deviation of the seven transects versus the location of maximum\nconcentration of each transect according to the truckborne survey. Circles\nshow the aerial survey result, squares show the truckborne survey and\ntriangles show the spatially deconvolved aerial survey result.}\n\\end{figure} \n(The colour\nscale is optimized to show the variation in the truckborne survey result.)\nThe truckborne survey reports much higher surface activity concentrations than\nthe aerial survey and the fallout appears to be narrower in width.\n\nFig.~\\ref{fig:cftruck}b) shows the same truckborne survey result, this time\noverlaid on the spatially deconvolved aerial survey measurement. (The colour\nscale is the same as that used in Fig.~\\ref{fig:cftruck}a).)\nHere, both the width and magnitude of the concentration are in better\nagreement.\n\nThe aerial survey maps were sampled at the locations of the truck and these\nsampled activity concentration values are presented in\nFig.~\\ref{fig:cftruck}c) (top) as a function of the distance from the start of the\ntruck-driven sortie.\nAgain, it is clear that the spatially deconvolved result is generally higher\nat maximum\nmagnitude and narrower than the undeconvolved aerial survey result. \nThe bottom part of Fig.~\\ref{fig:cftruck}c) shows the root-mean-square\ndeviation (RMSD)\nof the curves calculated by sampling each profile at regular intervals\nfrom the maximum concentration down to the point at which the concentration\nfalls below 10\\% of the maximum. This plot shows that the RMSD width\nof the deposition according to the undeconvolved survey is typically around\n90~m while the width of the deposition according to the\ntruckborne survey and the spatially deconvolved aerial survey tends to be\nsignificantly narrower, closer to 30~m.\nSpatially deconvolved aerial survey thus recovers the\nnarrowness of the fallout to approximately the same spatial\nprecision as the truckborne survey.\n\nTruckborne data is shown for the purpose of shape comparison only and does not\ninclude detailed error analysis. 
In any event, the truckborne system has its\nown finite area of sensitivity largely caused by its ``altitude'' of just over one\nmetre, causing smearing of the measured spatial distribution. \nA contact measurement of the deposited radioactivity can be expected to reveal even more\nconcentrated deposition in places~\\cite{Erhardt_deposition_RDD_2015}.\n\n\\section{Discussion and Conclusions}\n\nRadiometric survey would be performed to map fallout following a reactor\naccident or following a malicious\nrelease. To cover a large area quickly, the surveys are initially performed\nusing manned aircraft at some significant altitude $H$. If there is spatial\nvariability in the fallout at distance scales much less than $H$, then\nthe map result of the survey can underestimate the quantity of\nradioactivity on the ground in places.\n\nWe have presented here a method to deconvolve an aerial survey map for the\nspatial smearing caused by measurement at altitude, at least to the extent\npermitted by the sampling density as determined by the aircraft speed and line\nspacing.\nPerformed on synthetic data, the deconvolution method returns a distribution\nwhich is consistent with the true underlying distribution within\nmeasurement uncertainty.\nThe deconvolved distribution is narrower, and shows regions\nof locally higher radioactivity concentration than the initial undeconvolved\nmeasurement.\nPerformed on real data acquired following detonation of a radiological\ndispersal device, the method produces a distribution which is narrower and\nshows radioactivity concentration as much as four times that of the original\nmeasurement.\n\nThe method can unfold a distribution for smearing effects up to a resolution\npermitted by the sampling frequency of the original measurement. The\nmethod allows for propagation of stochastic measurement\nuncertainties through the unfolding to obtain the measurement uncertainties on\nthe fit parameters.\nThe method relies on application of the MINUIT and MINOS algorithms well known\nin the field of particle physics.\nWhat is perhaps not well known is that these algorithms can tolerate\noperating with hundreds of independent fit parameters, converging to a stable solution in\na reasonable amount of time from an arbitrary starting distribution.\nOur current need was to develop a method to extract the greatest possible\ninformation from a set of aerial surveys performed to improve scientific\nunderstanding of the behaviour of radiological dispersal devices.\nThe method, however, is also applicable to unfolding of any smeared distribution\nof any dimensionality. It could find application in other fields.\n\nThe result of the unfolding is limited in spatial resolution by the requirement\nthat the density of pixellization of the answer not exceed the density of\nmeasurements as determined by the aircraft speed and line spacing. 
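For the survey parameters used here (25~m\/s ground speed, one recorded spectrum per second and 50~m line spacing, that is, roughly one measurement per 25~m~x~50~m), the 50~m~x~50~m pixellization comfortably satisfies this requirement:
\\begin{equation}
\\frac{50\\ \\mathrm{m}\\times 50\\ \\mathrm{m}}{25\\ \\mathrm{m}\\times 50\\ \\mathrm{m}}\\approx 2\\ \\mbox{measurements per fit parameter}.
\\end{equation}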
\nNevertheless, aerial survey practitioners should be aware that there is \nimproved information about the spatial distribution of the radioactivity \ncontained in their aerial survey map that can be extracted provided good \nknowledge of the response function of the system is available.\nThe achieved spatial resolution for the particular aerial survey following RDD \ndetonation presented here approximately matched that obtained during\ntruckborne survey over the same deposition (while providing complete coverage).\nContact measurements ($H=0$~m) can be expected to reveal even greater\nlocal spatial variations than the truckborne data.\nStill, the truckborne survey ``height'' of about 1.2~m provides a salient\nbenchmark for spatial resolution as this is close to the average height of an\nadult human.\nShould humans be required to enter a possibly contaminated area guided by the\nresults of aerial survey alone, a spatially deconvolved aerial survey map could\nprovide a better predictor of the activity concentrations they will encounter\nthan the undeconvolved measurement.\n\n\\section*{Acknowledgements}\n\nThe authors gratefully acknowledge the leadership of the RDD field trials\nunder L. Erhardt, and helpful comments on the analysis from H.C.J.~Seywerd,\nP.R.B.~Saull and F.A,~Marshall. Funding for this project was provided through\nCanada's Chemical, Biological, Radiological-Nuclear and Explosives Research\nand Technology Initiative. This report is NRCan Contribution 20180112.\n\n\n\n\n\n\n\\bibliographystyle{elsarticle-num}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction \\label{s.intro}}\n\nHigh-mass stars are influential in galactic evolution by dynamically affecting and ionizing the interstellar medium, and also by chemically enriching heavy elements via supernova explosions. It is of fundamental importance to understand the physical processes in the evolution of molecular clouds where high-mass stars are forming. There have been numerous works on high-mass star formation in the literature \\citep[for reviews see e.g.,][]{Zinnecker2007,Tan2014}. In spite of these works we have not yet understood how high-mass star formation takes place. One of the promising candidates where young high-mass stars are forming is the very dense and massive cores such as infrared dark clouds in the Milky Way \\citep{Peretto2013}. Another possible candidate is the compressed layer formed in cloud-cloud collisions. Observations of a few super star clusters and smaller H$\\,${\\sc ii} regions in the Milky Way have shown signs of triggered formation of high-mass stars in the collision-compressed layers \\citep[e.g.,][]{Furukawa2009,Torii2011,Torii2015,Fukui2014,Fukui2015}. Magneto-hydro-dynamical numerical simulations of two colliding molecular clouds by \\citet{Inoue2013} have shown that turbulence is excited and the magnetic field is amplified in the collision-shocked layer between the clouds. These turbulence and magnetic field increase the mass accretion rate, favoring high-mass star formation.\n\nThe difficulty in studying young high-mass stars lies in the considerably small number of young high-mass stars as compared with low-mass stars in the solar vicinity; this is in part due to the lower frequency of high-mass stars and the heavy sightline contamination in the Galactic disk. 
ALMA is now opening a new possibility to explore high-mass star formation in external galaxies thanks to its unprecedented sensitivity and resolution, and has the potential to revolutionize our view of high-mass star formation. The Large and Small Magellanic Clouds (LMC and SMC), at a distance of 50\\,kpc \\citep{Schaefer2008} and 61\\,kpc \\citep{Szewczyk2009}, are actively forming high-mass stars. The LMC is an ideal laboratory to see the evolution of stars and clouds thanks to the non-obscured face-on view \\citep{Subramanian2010} of all the GMCs in a single galaxy \\citep[for a review][]{Fukui2010}. A $^{12}$CO($J$=1--0) survey for GMCs with the NANTEN 4m telescope \\citep{Fukui1999,Mizuno2001,Yamaguchi2001,Fukui2008} provided a sample of nearly 300 GMCs at 40\\,pc resolution and led to an evolutionary scheme from starless GMCs (Type I) to active star-forming GMCs (Type III) over a timescale of 20 Myrs \\citep{Fukui1999,Kawamura2009}. Aiming at revealing the finer-scale details of the molecular gas in the LMC, we have commenced systematic CO observations by using ALMA at sub-pc resolution.\n\nAmong the nearly 300 GMCs over the LMC obtained with NANTEN, N159 is the brightest one with H$\\,${\\sc ii} regions. Infrared studies have revealed nearly twenty young high-mass stars in N159 with Spitzer and Herschel (\\citealt{Chen2010}, \\citealt{Wong2011}, \\citealt{Carlson2012} and references therein; \\citealt{Seale2014}), where the two clumps N159 East and West are active in star formation. \\citet{Mizuno2010} showed that the CO $J$=4--3\/$J$=1--0 ratio shows enhancement toward the molecular peak without a well-developed H$\\,${\\sc ii} region in N159 West (N159W). This high excitation condition suggests that N159W is possibly on the verge of high-mass star formation, and thus the initial condition of high-mass star formation may still hold. Previous observations with the Australia Telescope Compact Array (ATCA), while low in resolution (HPBW$\\sim6\\arcsec$), presented some hint of small-scale clumps and filaments in N159W \\citep{Seale2012}. N159W is therefore the most suitable target for the purpose of witnessing the onset of high-mass star formation. \n\nWe present the first results of the ALMA observations of N159W in this Letter mainly based on the $^{13}$CO($J$=2--1) data.\n\n\\section{Observations \\label{s.obs}}\n\nWe carried out ALMA Cycle 1 Band 3 (86--116\\,GHz) and Band 6 (211--275\\,GHz) observations toward N159W both with the main array 12m antennas and the Atacama Compact Array (ACA) 7m antennas. The observations, centered at ($\\alpha_{J2000.0}$, $\\delta_{J2000.0}$) = (5$^{\\rm h}$39$^{\\rm m}$35\\fs34, -69\\arcdeg45\\arcmin33\\farcs2), were carried out between October 2013 and May 2014. The target molecular lines were $^{13}$CO($J$=1--0), C$^{18}$O($J$=1--0), CS($J$=2--1), $^{12}$CO($J$=2--1), $^{13}$CO($J$=2--1) and C$^{18}$O($J$=2--1) with a bandwidth of 58.6 MHz (15.3 kHz $\\times$ 3840 channels). Among the four spectral windows, the one with a bandwidth of 1875.0 MHz (488.3 kHz $\\times$ 3840 channels) was used for the observations of the continuum emission. The radio recombination lines of H30$\\alpha$ and H40$\\alpha$ were also included in the windows. The projected baseline length of the 12m array ranges from 16\\,m to 395\\,m. The ACA covers 9\\,m to 37\\,m baselines. The calibration of the complex gains was carried out through observations of seven quasars, phase calibration on four quasars, and flux calibration on five solar system objects. 
For the flux calibration of the solar system objects, we used the Butler-JPL-Horizons 2012 model (https:\/\/science.nrao.edu{\\slash}facilities{\\slash}alma{\\slash}aboutALMA{\\slash}Technology{\\slash}ALMA\\_Memo\\_Series{\\slash}alma594{\\slash}abs594). The data were reduced using the Common Astronomy Software Application (CASA) package (http:\/\/casa.nrao.edu), and the visibilities were imaged. \nWe used natural weighting for both the Band 3 and Band 6 data, providing synthesized beam sizes of $\\sim$2\\farcs5 $\\times$ 1\\farcs8 (0.6 pc $\\times$ 0.4 pc at 50\\,kpc) and $\\sim$1\\farcs3 $\\times$ 0\\farcs8 (0.3 $\\times$ 0.2 pc), respectively. \nThe rms noise levels of the molecular line data in Band 3 and Band 6 are $\\sim$40 mJy beam$^{-1}$ and $\\sim$20 mJy beam$^{-1}$, respectively, in emission-free channels.\nThe comparison of the cloud mass derived from the ALMA observation with that of a single dish observation described in Section \\ref{s.r.filaments} suggests that the missing flux of the present ALMA observation is not significant.\n\n\\section{Results \\label{results}}\n\\subsection{A complex filamentary structure \\label{s.r.filaments}}\n\nFigure \\ref{fig1} shows the $^{13}$CO velocity integrated intensity image of the $J$=2--1 transition. \nThe distribution of the $^{13}$CO emission is highly filamentary. \nThe filaments, which are often straight or curved, have a typical length of 5--10\\,pc and a width of 0.5--1.0\\,pc, defined as the full width of the emission area at the 3$\\sigma$ level of the intensity integrated over a range of 234 to 240\\,km\\,s$^{-1}$. This may be analogous to the dominance of filaments in the interstellar medium of the solar vicinity \\citep[e.g.,][]{Andre2013,Molinari2010}, possibly suggesting that filaments are ubiquitous in other galaxies as well. \nMore details of the filaments will be published separately.\nThe most active star formation is found in two regions denoted N159W-N and N159W-S in Figure \\ref{fig1}, both of which are associated with enhanced $^{13}$CO emission. \n\nThe cloud mass is estimated from the $^{12}$CO($J$=2--1) intensity by assuming a conversion factor from the $^{12}$CO($J$=1--0) intensity to the column density of X(CO)=$7\\times10^{20}$\\,cm$^{-2}$\\,(K\\,km\\,s$^{-1}$)$^{-1}$ \\citep{Fukui2008} and the typical $^{12}$CO($J$=2--1)\/$^{12}$CO($J$=1--0) ratio toward H{\\sc ii} regions of 0.85 (the ratio in the Orion-KL region of \\citealt{Nishimura2015}).\n We also assumed the absorption coefficient per unit dust mass at 1.2\\,mm and the dust-to-gas mass ratio to be 0.77\\,cm$^2$\\,g$^{-1}$ and $3.5\\times10^{-3}$, respectively, to derive the gas mass from the dust emission (Herrera et al. 2013).\nIn total, the filaments in N159W have a molecular mass of 2.4 $\\times$ 10$^{5}$ $M_{\\odot}$, corresponding to 35\\% of the total mass estimated by the lower-resolution studies \\citep{Minamidani2008,Mizuno2010}. \nWe define the N159W-N and N159W-S clumps at the 5\\,$\\sigma$ level of the Band 6 continuum (white contours in Figure 1), and the masses of these clumps are estimated to be $2.9\\times10^4$\\,$M_\\odot$ and $4.1\\times10^3$\\,$M_\\odot$, respectively, by assuming a dust temperature of 20\\,K. Their masses derived from the CO emission are $1.5\\times10^4$\\,$M_\\odot$ and $4.2\\times10^3$\\,$M_\\odot$, respectively, which is consistent with the dust-emission estimate.\n\n\\subsection{Outflows \\label{s.r.outflows}}\n\nWe have discovered two molecular outflows having velocity spans of 10--20\\,km\\,s$^{-1}$ in $^{12}$CO($J$=2--1). 
Figure \\ref{fig2} shows the distribution of the outflow wings. One of them corresponds to N159W-N and the other N159W-S. The N159W-S outflow has red-shifted and blue-shifted lobes which show offsets of 0.1--0.15\\,pc from the peak of the continuum emission. The outflow axis is along the east-west direction. The N159W-N outflow has only the blue-shifted lobe which shows an offset of 0.2 pc from the $^{13}$CO peak. It is possible that the complicated gas distribution around N159W-N may mask the possible red lobe. The size of the red-shifted and blue-shifted lobes is less than the beam size 0.2\\,pc $\\times$ 0.3\\,pc and the upper-limit timescale of the outflow is roughly estimated to be 10$^{4}$ yrs. This is the first discovery of extragalactic outflows associated with a single protostar. The positions of outflows in N159W-N and N159W-S coincide with YSOs identified based on the {\\it Spitzer} data: 053937.56-694525.4 (hereafter YSO-N; Chen et al. 2010) and 053941.89-694612.0 (hereafter YSO-S; \\citealt{Chen2010}; P2 in \\citealt{Jones2005}), respectively. \n\n\\subsection{YSO characteristics \\label{s.r.yso}}\nTwo YSOs associated with outflows, YSO-N and YSO-S, have been studied extensively at near- to far-infrared, submillimeter, and radio wavelengths (\\citealt{Carlson2012} and references therein; \\citealt{Seale2014}; \\citealt{Indebetouw2004}). Using the \\citet{Robitaille2006,Robitaille2007} YSO model grid and spectral energy distribution (SED) fitter, we model all the available data including the {\\it Spitzer} and {\\it Herschel} fluxes (1.2--500 $\\mu$m), as well as photometry we extracted from {\\it Spitzer}\/IRS spectra (5--37\\,$\\mu$m; \\citealt{Seale2009}), and the fit of the SEDs indicates that both YSO-N and YSO-S are Stage 0\/I YSOs. \nThe mass and luminosity are estimated to be $31\\pm8$\\,$M_\\odot$ and $(1.4\\pm0.4)\\times10^5$\\,$L_\\odot$ for YSO-N, and $37\\pm2$\\,$M_\\odot$ and $(2.0\\pm0.3)\\times10^5$\\,$L_\\odot$ for YSO-S. These results are consistent with those from Chen et al. (2010), who also used Robitaille fitter but without the Herschel constraints. \nThe dynamical ages of the two outflows are consistent with the ages output from the SED fitter \\citep{Robitaille2006}, $\\sim$10$^{4}$ yrs.\n\nAccording to 3\\,cm radio continuum measurements YSO-N is determined to be an O5.5V star, whereas YSO-S is not detected \\citep{Indebetouw2004}, suggesting that YSO-S is in an earlier evolutionary state than YSO-N.\nThis is consistent with a non-detection of the He 2.113\\,$\\mu$m and with the weak Br$\\gamma$ in YSO-S \\citep{Testor2006}.\nThe \\citet{Testor2006} near-IR VLT data revealed that YSO-S consists of at least two sources, whereas their detailed physical properties and relation with the mid-\/far-infrared source are yet unknown.\n\n\\subsection{Filamentary collision in N159W-S \\label{s.r.col}}\n\nIn Figure \\ref{fig1}, N159W-N shows complicated $^{13}$CO distribution, whereas the source N159W-S shows relatively simple $^{13}$CO morphology. The $^{13}$CO distribution in N159W-N consists of several filaments which are elongated generally in the direction from the northeast to southwest, and N159W-S is located at the tip of a V-shaped distribution of two filaments. We shall focus on N159W-S in the following to describe the filament distribution and the high-mass young star, because the simple morphology allows us to understand the physical process unambiguously. 
\n\nFigure \\ref{fig3} shows the two filaments toward N159W-S (Figure \\ref{fig3}(a) the whole velocity range, Figure \\ref{fig3}(b) red-shifted filament, and Figure \\ref{fig3}(c) blue-shifted filament, respectively). The two filaments overlap toward N159W-S, where the $^{13}$CO intensity and linewidth are significantly enhanced. Figures \\ref{fig3}(d-h) show position-velocity diagrams taken along the two filaments. We see that the filaments have small velocity span of 3\\,km\\,s$^{-1}$ in the north of N159W-S which shows significantly enhanced velocity span of 8\\,km\\,s$^{-1}$ at the 15\\,\\% level of the $^{13}$CO peak in Figure\\,\\ref{fig3}(g). \nAn HST image at near infrared indicates that the red-shifted filament is extended toward the south beyond N159W-S (Carlson et al. 2015 in preparation), while no CO emission is detected there in our $^{12}$CO or $^{13}$CO observations with ALMA. We also find that the blue-shifted filament has its extension beyond N159W-S in $^{13}$CO. So, although the filaments are apparently terminated toward N159W-S, they are actually more extended, placing N159W-S in the intersection of the two filaments. \n\nIn N159W-S the longer red-shifted filament in the east is highly elongated and mildly curved, having a length of 10\\,pc, while the other blue-shifted filament in the west is straight and elongated by 5\\,pc. N159W-S clearly demonstrates that a high-mass YSO with bipolar outflow is formed toward the intersection between the two thin filaments, and the velocity dispersion is significantly enhanced in the intersection. \n\nBased on these results we set up a hypothesis that formation of N159W-S was triggered by the collision between the two filaments. We first describe a possible scenario for N159W-S and then discuss the observational constraints on the collision and high-mass star formation. The two crossing filaments overlapping toward N159W-S give direct support for the present scenario. The lower limit for the relative velocity in the collision is given by the velocity difference of the two filaments 2--3\\,km\\,s$^{-1}$. The actual collision velocity should be higher than 2--3 km s$^{-1}$ because of the projection effect. According to the magneto-hydro-dynamical numerical simulations of two colliding molecular flows by \\citet{Inoue2013}, the collision-shocked layer enhances isotropic turbulence, independent of the direction of the collision, and the velocity span in the shocked layer is similar to the relative collision velocity. The simulations by \\citet{Inoue2013} for a velocity difference of 20 km s$^{-1}$ allow us to scale the relative velocity to $\\sim$10 km s$^{-1}$ with basically the same physical process. We therefore assume the velocity span in N 159W-S, 8 km s$^{-1}$, as the actual collision velocity. This implies the relative motion of the two filaments is nearly vertical, roughly 70$^\\circ$, to the line of sight.\n\n\\section{Discussion on the high-mass star formation processes \\label{s.discussion}}\n\nSince the rest of the filaments show no sign of velocity dispersion enhancement with high-mass star formation, we assume that the non-interacting filaments retain the initial condition prior to the collision. \nThe line-mass, mass per unit length, in the filaments changes from region to region by an order of magnitude. 
In order to estimate the typical mass of the filaments associated with the N159W-S clump for the following discussion, we select two segments of 1.5\\,pc and 1.8\\,pc in length and 0.7\\,pc and 0.6\\,pc in full width at a 35\\,\\% level of the $^{13}$CO peak for the red-shifted and blue-shifted filaments, respectively, as indicated in Figures 3(b) and 3(c). \nBelow the 35\\,\\% level, it is hard to estimate the line-mass of the individual filaments separately due to overlapping. Above this level, the mass sampled becomes underestimated.\nWe estimate the total mass of these two segments to be $2.9\\times10^3$\\,$M_\\odot$ from $^{12}$CO($J$=2--1) for a velocity range 234\\,--\\,240\\,km\\,s$^{-1}$. \nWe then estimate the average line-mass of these two filaments to be $8.9\\times10^2$\\,$M_\\odot$\\,pc$^{-1}$. \nThe filaments are not detected in the Band 6 continuum at the 3\\,$\\sigma$ noise level, which corresponds to a line-mass of $1.6\\times10^3$\\,$M_\\odot$\\,pc$^{-1}$, higher than the above CO-based line-mass of the filaments.\n \nThis suggests that the collision took place on a timescale of $\\sim$0.5\\,pc divided by 8\\,km\\,s$^{-1}$, i.e., $\\sim$6 $\\times$ 10$^{4}$ years ago. We assume that the formation of the high-mass star was initiated at the same time. Using the stellar mass of 37 $M_{\\odot}$, the average mass accretion rate of the star formation is given as 37\\,$M_{\\odot}$\/$6 \\times 10^{4}$\\,yrs $\\sim$6 $\\times$ 10$^{-4}$ $M_{\\odot}$ yr$^{-1}$. This rate is well in accord with the theoretical estimate of around 10$^{-3}$ $M_{\\odot}$ yr$^{-1}$ and satisfies the criterion to form high-mass stars by overcoming the stellar feedback \\citep[e.g.,][]{Wolfire1986}. The short outflow timescale of 10$^{4}$\\,yrs is consistent with this picture of rapid high-mass star formation.\n\nThe present case of N159W-S has shown that the 37\\,$M_{\\odot}$ high-mass star was formed in turbulent conditions created by the collisional shock. \nThe mass of the N159W-S clump is estimated to be $4\\times10^3$\\,$M_\\odot$ toward its CO peak. \nThere is no sign of such dense clumps over the rest of the filament according to our ALMA data, either in the CS($J$=2--1) data, whose line-mass detection limit is about 150\\,$M_\\odot$\\,pc$^{-1}$, or in the dust emission data, whose line-mass detection limit is about $1.6\\times10^3$\\,$M_\\odot$\\,pc$^{-1}$ assuming a filament width of 0.6\\,pc. \nThis offers the interesting possibility that high-mass stars do not necessarily require dense cloud cores as the initial condition. Instead, high-velocity colliding molecular flows are able to efficiently collect mass into a cloud core non-gravitationally. \\citet{Inoue2013} discuss that the mass flow in the collision can be efficiently converged into a shock-induced core due to the oblique shock effect, and that self-gravity is not important in the beginning of the high-mass star formation, while soon after, in the shock-collected core, self-gravity plays a role in forming the stellar core \\citep[see also][]{Vaidya2013}. \n\nIn the Milky Way we see increasing observational evidence for cloud-cloud collisions which trigger high-mass star formation. Four super star clusters, Westerlund2, NGC3603, RCW38 and DSB[2003]179, are found to be formed by collisions between two clouds \\citep{Furukawa2009,Ohama2010,Fukui2014,Fukui2015}. Isolated O stars with H$\\,${\\sc ii} regions, such as M20 and RCW120, are also suggested to have been triggered by cloud-cloud collisions \\citep{Torii2011, Torii2015}. 
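The order-of-magnitude numbers used above (the average line-mass of the two segments, the collision timescale, the implied mean accretion rate, and the projection angle of the relative filament motion) can be reproduced with the short Python script below. This is only an editorial arithmetic cross-check of the values already quoted in the text, not an independent analysis.
\\begin{verbatim}
import math
PC_KM, YR_S = 3.086e13, 3.156e7     # km per pc, s per yr
# average line-mass of the two selected segments
print(2.9e3 / (1.5 + 1.8))          # ~8.8e2 Msun per pc (8.9e2 quoted)
# collision timescale: ~0.5 pc crossed at 8 km per s
t_col_yr = 0.5 * PC_KM / 8.0 / YR_S
print(t_col_yr)                     # ~6e4 yr
# mean accretion rate for a 37 Msun star assembled over t_col
print(37.0 / t_col_yr)              # ~6e-4 Msun per yr
# angle between the relative filament motion and the line of sight,
# from a 2-3 km/s line-of-sight difference and an 8 km/s collision speed
print(math.degrees(math.acos(2.5 / 8.0)))   # ~72 deg, i.e. roughly 70 deg
\\end{verbatim}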
N159W-S is in the very early stage of star formation as indicated by the non-detection of ionized gas, as well as by the collision scenario and SED models. Therefore N159W-S is an optimal source for studying a filamentary collision leading to star formation. \nIt has been shown that the youngest O stars are formed coevally in a duration of $\\lesssim10^{5}$\\,yrs in NGC3603 and Westerlund1 by careful measurements of stellar ages with HST and VLT \\citep{Kudryavtseva2012}. We have here an independent estimate of the stellar age by taking advantage of the simple cloud morphology in N159W-S, and the present time-scale estimate is consistent with that of \\citet{Kudryavtseva2012}.\n\n\\section{Conclusions \\label{s.conclusion}}\n\nIn this Letter we presented $^{13}$CO($J$ = 2--1) observations with ALMA of the active star-forming region N159 West in the LMC. We have found the first two extragalactic protostellar molecular outflows toward young high-mass stars, whose dynamical timescale is $\\sim$10$^{4}$ yrs. One of the two stars, N159W-S, is clearly located toward the intersection of two filamentary clouds. We set up a hypothesis that the two filaments collided with each other $\\sim$10$^{5}$ yrs ago and triggered the formation of the high-mass star. The results demonstrate the unprecedented power of ALMA to resolve extragalactic star formation.\n\n\\acknowledgments\n\nThe authors thank the anonymous referee for his\/her helpful comments. This paper makes use of the following ALMA data: ADS\/JAO.ALMA\\#2012.1.00554.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), NSC and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI\/NRAO and NAOJ. This work was supported by JSPS KAKENHI grant numbers 22244014, 22540250, 22740127, 23403001, 24224005, 26247026; by JSPS and by the Mitsubishi Foundation. MM and ON are grateful for support from NSF grant \\#1312902.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Self-injection locking with microresonators and microcombs}\nThe solution proposed in\\cite{Maleki2015} for a compact microresonator-based Kerr comb used a relatively broadband (10-1000 times wider than the microresonator's resonance) single-frequency distributed feedback (DFB) laser. In this case, the role of the microresonator was twofold:\n1) acting as an external cavity it narrowed the linewidth of the laser via the self-injection locking effect\\cite{Vasil'ev1996,Yarovitsky1998,Donvalkar2018}, and 2) it provided a nonlinear low-threshold Kerr medium where a frequency comb appeared if the laser was appropriately detuned. Laser linewidth narrowing in this approach exploits the coupling of a free-running laser diode with the WGM microresonator\\cite{Braginsky1989} having internal and surface Rayleigh backscattering\\cite{Ilchenko1992,Ilchenko2000}. These scattering effects resonantly couple an excited traveling-wave WGM with the identical but counter-propagating mode, which returns to the laser, locking it to the cavity resonance. This technique was used for stabilisation of a DFB laser down to a record sub--Hz level\\cite{Liang2015}. The analytical theory of self-injection locking by a WGM cavity initially proposed in\\cite{Oraevsky} was recently revised and extended in\\cite{Kondratiev2017}. 
\n\nIt was previously assumed for granted that `only single-mode DFB lasers characterized with comparably high internal Q's are suitable for stable self-injection locking using multimode optical cavities'\\cite{Xie:15}. Similarly, earlier only single-frequency pre-stabilized external cavity diode lasers (ECDLs) with diffraction grating\\cite{Yarovitsky1998, Velichansky2003} were considered for self-injection locking to WGM cavity\\cite{Maleki2015, Maleki2010}. Any external cavity pre-stabilisation complicates devices and their integration. The DFB lasers, however, are not available for many wavelengths and have a limited power that in turn restricts the power of single-mode operation and generated frequency combs at milliwatt level. Meanwhile many applications require higher comb power. For example, absorption dual-comb spectroscopy using surface diffuse scattering has less than $1$\\% of light collection efficiency so that high power of incident combs must be provided\\cite{Hensley}. Evaluations in\\cite{Shchekin:18} show that dual-comb CARS spectroscopy of blood glucose provides a measurable glucose signal at about 100mW power of frequency combs with $10$~GHz mode spacing. \n\n\n\\section*{Single-frequency lasing with a multi-frequency laser}\n\nWe revealed that the initial mode pre-selection and pre-stabilisation in laser diodes are not required to obtain stable narrow-linewidth single-frequency lasing, and a WGM microresonator can handle all of these purposes efficiently as well. Consequently, simpler and cheaper FP laser diodes with higher power may be used. \n\nWe demonstrate for the first time an efficient (up to 50\\%) conversion of a broadband multifrequency FP laser diode, coupled to a high-Q WGM microresonator, into a narrow linewidth single-frequency light source in the 100 mW power range at optical telecom wavelength, with its subsequent transformation to a single-soliton Kerr comb oscillator. FP laser diode spectrum narrowing occurs in regime of competition between many longitudinal modes. Self-injection locking solves two critical technical problems of soliton Kerr combs: (1) thermal instability and (2) preferential excitation of multiple-soliton regimes. The soliton states are only possible at strong red detunings from the cavity resonance where the CW internal circulating power is small. That is why, the transition from a chaotic comb to a multiple solitons comb and finally to the single-soliton state via slow laser tuning leads to a fast drop of the internal power resulting in cooling of the resonator and finally, due to thermal refraction and expansion effects, to a large detuning from the required regime causing the loss of the soliton state. Detuning can be compensated by an electronic feedback which requires fast tuning (with the characteristic time of thermal relaxation of the resonator) of the laser frequency which is difficult to achieve. That is why different additional nonstationary `power kicking' methods\\cite{Kippenberg2016, Brasch2016, Yi2016} were proposed to reach the thermal equilibrium. However, the optical feedback in self-injection locking is fast enough to compensate thermal effects in real time. 
Additionally, slower tuning from a chaotic to soliton state, only possible with the supported thermal equilibrium, allows a transition to smaller initial numbers of circulating solitons and finally to a single-soliton state\\cite{Lobanov2016}.\n\n\n\\section*{Experimental setup} \n\n\\begin{figure*}[ht]\n\t\\centering\n\t\\includegraphics[width=1\\textwidth]{Spectra.pdf}\n\t\\caption{\\textbf{Self-injection locking and spectral narrowing of a multi-frequency diode laser coupled to a MgF$_2$ ultra-high-Q whispering gallery microresonator.} \\textbf{(a)} The spectrum of the free-running diode laser. \\textbf{(b)} The spectrum of the diode laser stabilised by a high-Q microresonator. \\textbf{(c)} Soliton generation in self-injection locking regime. \\textbf{(d)} The FSR beat note signal of the free running multi-frequency laser, the beat note frequency corresponds to the diode chip length of 2500 $\\mu m$. \\textbf{(e)} Heterodyne signal between self-injection locked diode laser and narrow linewidth fiber laser -- blue curve, Voigt fit -- red curve. \\textbf{(f)} repetition rate signal of a single-soliton state, central frequency corresponds to a WGM cavity with 5.5 mm diameter (ESA RBW is $1$~kHz).}\n\t\\label{ris:image2}\n\\end{figure*}\n\nA schematic view and a picture of the experimental setup is presented in Fig.\\ref{ris:image1}. \nThe laser beam from a free-space multi-frequency InP diode is collimated and coupled to a MgF$_2$ WGM resonator with a glass (BK-7) prism \\cite{Gorodetsky1999}. Resonantly backscattered Rayleigh radiation returns to the diode laser and forces self-injection locking of the laser frequency to the microresonator's WGM mode. The output beam is collimated to a single-mode fiber and analyzed with an optical spectrum analyzer (OSA), on a photodiode (PD) with an oscilloscope (OSC), and an electrical spectrum analyzer (ESA). The repetition rate of the soliton pulses is monitored by a fast photodiode and ESA. The detuning of the laser frequency from an optical resonance is monitored on a PD with an oscilloscope. A narrow linewidth tunable fiber laser is used for the heterodyne linewidth measurements.\n\nFor pumping millimeter-sized MgF$_2$ resonators, ordinary packaged uncapped free-space multi-frequency laser diodes were used (Seminex, chip length $L=2500$~$\\mu$m, central wavelengths 1535~nm, 1550~nm and 1650~nm covering spectral intervals of $\\Delta \\lambda \\sim 10$~nm and a total power of $P\\sim$200~mW). Generation of the self-injection locked soliton combs with a repetition rate signal linewidth $\\sim1$~kHz was observed when the laser diode driving current was manually adjusted to red detune the pump frequency in self-injection locked regime within a soliton-supporting high-Q cavity resonance.\n\nThe experimental results demonstrated in Fig.\\ref{ris:image2} were obtained with a MgF$_2$ resonator with a diameter of $5.5$~mm and edge curvature radius of $500 \\mu$m corresponding to the free spectral range (FSR) of $\\sim 12.5$~GHz (inverse of the pulse round-trip time in the microresonator). The group velocity dispersion (GVD) for all tested laser frequencies is anomalous allowing for the generation of DKS. The microresonator was manufactured by precise single-point diamond turning (DAC ALM lathe, see SI)\\cite{Tanabe2016}. The ultra-high intrinsic Q-factor exceeding $10^9$ was achieved by polishing with diamond slurries\\cite{Maleki2001}. 
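As a rough consistency check of the cavity and diode parameters quoted here and in the next paragraph, the $\\sim$12.5~GHz FSR of the 5.5~mm MgF$_2$ resonator and the $\\sim$17.7~GHz longitudinal mode spacing of the 2500~$\\mu$m chip follow from the standard expressions FSR~$\\approx c\/(\\pi n D)$ and $\\Delta f = c\/(2 n_g L)$. In the short Python illustration below the refractive index $n\\approx1.37$ of MgF$_2$ and the group index $n_g\\approx3.4$ of the InP chip are typical literature values assumed by us for this estimate, not numbers taken from the text.
\\begin{verbatim}
import math
c = 2.998e8                               # m per s
n_MgF2, D = 1.37, 5.5e-3                  # assumed index; resonator diameter (m)
print(c / (math.pi * n_MgF2 * D) / 1e9)   # ~12.7 GHz vs quoted ~12.5 GHz
n_g, L = 3.4, 2.5e-3                      # assumed group index; chip length (m)
print(c / (2 * n_g * L) / 1e9)            # ~17.6 GHz vs quoted 17.68 GHz
\\end{verbatim}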
Experimental results with other microresonators pumped at different wavelength are presented in SI.\n\nThe laser diode used to obtain results in Fig.\\ref{ris:image2}(a) has an optical spectrum consisting of tens of incoherent lines covering $\\sim 10$~nm with mode spacing $\\Delta f=\\frac{c}{2Ln} \\approx 17.68$~GHz around the central wavelength $1535$~nm and $200$~mW maximum output power. The intensity in the laser gain region is approximately uniformly distributed between the lines. Fig.\\ref{ris:image2}(d) shows the noisy beatnote signal from the adjacent lines of the free-running laser diode at $17.68$~GHz with $\\sim 1$~MHz full width at half maximum (FWHM) linewidth. The light back-reflected from the microresonator due to resonant Rayleigh backscattering, provided in case of crystalline materials mostly by surface inhomogeneities, leads to natural feedback for the laser diode. Back reflection is measured in our case at $10^{-3}$ intensity level (see SI for details). The backscattering intensity depends on the degree of loading (coupling efficiency) of the resonator, and thus can be regulated by changing the gap between the prism and the resonator\\cite{Yarovitsky1998}. \n\n\\section*{Self-injection locking with a multi-frequency laser diode}\n\nFig.\\ref{ris:image2}(b) demonstrates a collapse of the wide spectrum of the multi-frequency laser diode to a single-frequency line. In case of a multi-frequency diode in the self-injection locking regime, the power from multiple modes due to mode competition is transferred into a single narrow line, and its output power increases. In this way, the microresonator behaves not like a simple filter cavity but plays an active role in lasing. Fig.\\ref{ris:image2}(b) illustrates that in the case of self-injection locking to the WGM microresonator, the power in the dominant line increases by $\\sim 7$~dBm. This effect gives a significant additional advantage of using longer multi-frequency FP diode chips with higher power as compared to DFB lasers (with maximal power $\\sim 40$~mW at telecom wavelengths) \\cite{Maleki2010,Maleki2015}. The asymmetry of residual laser modes in the self-injection locking regime in Fig.\\ref{ris:image2}(b), with the dip on the high-frequency wing, is associated with the anomalous interaction of spectral modes in a semiconductor laser\\cite{Bogatov1975, ahmed2002,AhmedYamada2010}. Note that the contrast between the dominant line and residual lines of an order of 35~dB may in our case be significantly improved with an additional coupler prism as a drop port. In this case, the residual modes of the laser will be filtered out by the resonator. Recently a 446.5 nm self-injection locked laser \\cite{Donvalkar2018} was demonstrated with sub-MHz linewidth by using a high-${Q}$ (${Q>10^9}$) WGM ${\\rm MgF_2}$ microresonator in conjunction with a multi-longitudinal-mode laser diode. The presented blue FP laser had a two peak spectrum at low driving current in the free running regime. Longitudinal mode competition and conversion efficiency of the laser power to the power of a single mode were not considered. \n\nWe used the heterodyne technique to measure the instantaneous laser linewidth in the self-injection locking regime operating at $1550$~nm. The beatnote of the self-injection locked diode laser with the narrow linewidth tunable fiber laser (Koheras Adjustik) was analyzed on a fast PD and is shown in Fig.\\ref{ris:image2}(e). 
The Voigt profile\\cite{Stephan2005} fit provides a Lorentzian width contribution due to laser white frequency noise of $370$Hz, and Gaussian contribution due to flicker frequency noise of $1.7$ kHz. The theory of self-injection locking developed in\\cite{Kondratiev2017} allows this linewidth to be estimated (see the Supplementary Information (SI) for details) with a good agreement. In this way, an efficient and compact single-mode diode laser with a narrow linewidth at kHz range is demonstrated.\n\n\\section*{Soliton microcomb with a multifrequency laser}\n\n\\begin{figure*}[ht]\n\t\\centering\n\t\\includegraphics[width=1\\textwidth]{Pulse1.pdf}\n\t\\caption{\\textbf{Soliton generation with a laser diode.} \\textbf{(a)} Cavity mode response on oscilloscope when the frequency of the laser is swept through the WGM cavity resonance, characteristic for the self-injection locking, with step-like transition to a soliton state. \\textbf{(b)} The autocorrelation of a soliton waveform obtained from the spectrum in Fig.\\ref{ris:image2}b (blue) and the theoretical fit (red). $\\tau_R = 80 ps$ is the roundtrip time.}. \n\t\\label{ris:image3}\n\\end{figure*}\n\n\nFig.\\ref{ris:image3}(a) illustrates the typical cavity response on PD when the frequency of the laser is swept through the WGM cavity resonance, characteristic for the self-injection locking\\cite{Kondratiev2017}, with step-like transition to a soliton state. The self-injection locking range, calibrated with a fiber-loop cavity, was of an order of $1$~GHz, depending on the quality-factor of the WGM resonance. Region (a) corresponds to the free-running laser (Fig.\\ref{ris:image2}(a)), region (b) corresponds to the case of a self-injection locking regime without a frequency comb (Fig.\\ref{ris:image2}(b)), and region (c) is a soliton existence region analogous to the soliton step obtained in experiments with tunable single frequency lasers\\cite{Kippenberg2014} (Fig.\\ref{ris:image2}(c)). We use MgF$_2$ material for microresonator fabrication where thermo-optical instability is suppressed due to the same sign of thermo-refractive and thermal expansion effects and stable soliton generation is possible. In the self-injection locking regime, if the characteristic time of back-reflection is shorter than the thermal relaxation time, the laser frequency follows the cavity thermally shifted resonance and thermo-optical instabilities are suppressed. We observed partial suppressing of thermal nonlinearity while scanning the laser across the cavity resonance (Fig.\\ref{ris:image3}(a)) due to self-injection locking and we were able to tune into stable regime of soliton generation without thermal jumps.\n\nBy gradually red detuning the diode laser frequency with the current, but staying in the locked regime, we could smoothly switch the system into a soliton comb regime (predominantly single-soliton one) with a very characteristic sech$^2$(x) envelope Fig.\\ref{ris:image2}(c). The soliton comb has a span $\\sim 30$~nm with a line spacing of $12.5$~GHz. Additional residual laser lines separated by $17.68$~GHz are visible in optical spectrum, although they are weak and could be filtered out in drop-port configuration. By stronger variation of the diode current it was also possible to switch to a different resonance of the microresonator, jumping from one resonance to another within a single mode family, thus gradually changing the central frequency of the soliton and its bandwidth (see SI Fig.S.6). 
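For orientation (an editorial remark, with the centre wavelength assumed to be around 1545~nm rather than quoted): a 12.5~GHz line spacing corresponds to roughly 0.1~nm between comb teeth in this band, so the $\\sim$30~nm wide soliton spectrum contains on the order of 300 comb lines.
\\begin{verbatim}
c, lam, df = 2.998e8, 1.545e-6, 12.5e9    # m/s, assumed centre wavelength (m), spacing (Hz)
dlam_nm = lam**2 * df / c * 1e9
print(dlam_nm)                            # ~0.1 nm between adjacent comb lines
print(30.0 / dlam_nm)                     # ~300 lines across the ~30 nm span
\\end{verbatim}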
\n\nThe soliton repetition rate signal in the microwave range is demonstrated in Fig.\\ref{ris:image2}(f). Fig.\\ref{ris:image3}(b) shows the result of the inverse Fourier transform of the spectrum from \\ref{ris:image2}(c) and a fit $2A^2t\/\\sinh(Bt)$ which corresponds to the soliton waveform $A{\\,\\rm sech}(Bt)$, which reveals the soliton duration of $220$~fs. The residual fringes are caused by a dispersive wave formed due to parasitic mode-crossings which one can see in Fig. \\ref{ris:image2}(c). \n\nNote, that no particular technique with amplitude and frequency manipulations and hence no additional equipment was used to control the soliton generation in the presence of thermal nonlinearities\\cite{Kippenberg2014}. This is possible due to the very fast optical feedback which compensates thermal cavity detuning upon switching. Single-soliton states lived several hours in laboratory conditions without any additional stabilisation techniques -- another convenient consequence of self-injection locking. In our experiment coupling rate to soliton resonances was around 15--25\\% due to the non-optimised geometry of the microresonator and prism coupling\\cite{bilenko2017optimisation68043753}, so we did not efficiently use available laser power to generate a wide frequency comb. Nevertheless, our result shows that this proposed method is applicable due to pump power overhead even when coupling conditions are not optimal or the Q-factor is not ultra high, e.g. for integrated microresonators.\n\nWe have checked that the Kerr soliton comb is generated inside the microresonator and is not generated or amplified in the FP laser chip by adding a beam splitter between the laser chip and the microresonator and by observing a spectrum of the light immediately after the gain chip. In this way we confirm that only single frequency lasing (corresponding to Fig.\\ref{ris:image2}(b)) is observed at the output facet of the laser. It should be noted that in our work the FSRs of the FP diode laser and microresonator did not match.\n\nWe observe that single soliton generation is preferable although multi-soliton states are also possible. This results from a relatively slow transition from CW to soliton regime \\cite{Lobanov2016}, due to low speed of pump frequency tuning in the locked regime. Comparing to previous realisations of tuning methods to obtain soliton states (fast forward scan\\cite{Kippenberg2014}, slow backward tuning\\cite{Karpov16}, pump power kicking\\cite{Brasch2016}), in self-injection locking the frequency tuning is orders of magnitude slower. Considering realistic parameters of our system we estimate the tuning speed is $10^4$ times slower than without self-injection locking\\cite{Kondratiev2017} (see SI for details)\n\nWe have also checked and confirmed the generation of the soliton comb in the same microresonator using the traditional technique with a CW narrow linewidth tunable fiber laser (see SI for details). \n\n\\section*{Discussion and conclusion}\n\nIn conclusion, we have demonstrated a new efficient method for achieving a single-frequency narrow linewidth lasing and independently a method for generating stable Kerr soliton combs directly from multi-frequency high-gain laser diode chips using the self-injection locking effect. This result paves a way to compact, low-noise photonic microwave sources and for generating stable powerful frequency combs, which are important for spectroscopy, LIDAR application, astronomy, metrology and telecommunications. 
\n\n\\section*{Methods}\n\n{In the self-injection locking experiments with high-Q MgF$_2$ WGM resonators we used Indium Phosphide multi-frequency, single-lateral-mode laser diode chips. The chip length was 1.5--2.5 mm and the free-running spectrum consisted of approximately 50 FP lines, with a beatnote linewidth between adjacent diode FP lines of 1--3~MHz. The output power was 100--500~mW depending on the diode length and applied current. Such chips are commercially available in a wide wavelength range.}\n\n\n\n\n\\section*{Data availability statement}\nAll data used in this study are available from the corresponding authors upon reasonable request.\n\n\\section*{Authors contributions}\nExperiments were conceived by N.G.P., S.K., and M.L.G. Analysis of results was conducted by N.G.P., A.S.V., S.K., G.V.L. and M.L.G. N.G.P., A.S.V., S.K. and A.S.G. performed measurements with diode lasers and G.V.L. with a fiber laser. G.V.L., N.G.P. and A.S.V. fabricated devices. S.V.P. and M.R. set the research direction relevant for industrial needs: a comb source for a wearable spectrometer (Samsung Gear). S.V.P. supervised the project from the Samsung side. M.L.G. supervised the project. All authors participated in writing the manuscript.\n\n\\begin{acknowledgments}\nThis publication was supported by the Russian Science Foundation (17-12-01413). G.V.L., N.G.P. and A.S.V. were partially supported by the Samsung Research center in Moscow. The authors gratefully acknowledge valuable discussions with Tobias Kippenberg, Kerry Vahala and Vitaly Vassiliev. The authors are grateful to Hong-Seok Lee and Young-Geun Roh from the Samsung Advanced Institute of Technology for help in establishing the project and its further support.\n\\end{acknowledgments}\n\n\\clearpage\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \\la{intro}\nFigure \\ref{figConvergenceintro} is extracted from \\cite{Paper3} and \\cite{Andrea}. \n\\begin{figure}[h]\n\t\\centering \n\t\\includegraphics[width=\\linewidth]{FigConvergence.pdf}\n\t\\vspace{-6.2cm}\n\t\\caption{{\\textbf a)} Maximal cubic coupling showing up in the scattering of the lightest particle in a gapped theory with a single bound-state (in this channel at least)~\\cite{Paper3}. Convergence is perfect when the bound-state mass (measured in units of the lightest mass) is bigger than $\\sqrt{2}$ and quite painful otherwise. {\\textbf b)} The allowed chiral zeroes space of putative pion S-matrices associated to an $SU(2)$ chiral symmetry breaking pattern draws a beautiful peninsula-like object with a sharp tip~\\cite{Andrea}.\\protect\\footnotemark \\,Convergence is great almost everywhere except close to the tip where numerics struggle. In those cases where the primal problem struggles, having a dual rigorous bound would be a blessing. This paper is about such dual bounds. }\n\t\\label{figConvergenceintro}\n\\end{figure}\n\\footnotetext{There are, at least, two other structures that would benefit from a dual description. One is the ``pion lake''~\\cite{Andrea}, found imposing the presence of the physical $\\rho$ resonance only. Another interesting and recent structure is the ``pion river''~\\cite{river}, found imposing additional constraints on the scattering lengths arising from $\\chi$PT and monotonicity of the relative entropy. The dual formulation would allow one to rigorously define these structures, excluding theories not compatible with the assumed low energy QCD behavior.}\n\nThese works explore the allowed space of physical 4D S-matrices. 
One parametrizes a vast family of S-matrices compatible with given physical and mathematical assumptions and maximize or minimize quantities within this ansatz to find the boundaries of what is possible. The more parameters the ansatz has, the better is the exploration. As the number of parameters become very large, one hopes that these boundaries converge towards the true boundaries of the S-matrix space. \n\nSometimes this works beautifully as illustrated in the figure; sometimes convergence is painful, to say the least, as also illustrated in the figure.\nIn those cases where convergence is a struggle, what can we do? Sometimes, it is a simple matter of improving the ansatz; sometimes it is not clear what exactly is missing. And in either case, how can we ever tell how close to converging are we anyways? \n\nA solution would be to develop a dual numerical procedure -- called the \\text{dual} problem -- where instead of constructing viable S-matrices we would instead rule out unphysical S-matrix space.\\footnote{Such dual bounds were attempted more than 50 years ago already in \\cite{Archeo1, Archeo2, Archeo3, Archeo4}. Would be very important to do some archeology work and revive\/translate\/re-discover\/improve those old explorations in a modern computer friendly era. A beautiful first step is currently being pursued by Martin Kruczenski and Yifei He \\cite{MartinTalkBootstrap}. The conformal bootstrap bounds are also exclusion analysis of this sort \\cite{bootstrapReview}.} Then we would approach the boundaries of the S-matrix space from two sides, dual and primal, and in this way rigorously bracket the true boundaries of the sought after S-matrix space. \nThis was recently achieved in two dimensions for simple models with a single type of particle transforming in some non-trivial global symmetry group \\cite{Monolith}.\\footnote{The primal version of these single particle studies with global symmetry was the subject of \\cite{Martin,Miguel,Lucia}; the case without global symmetry was considered in \\cite{Creutz,Paper2}.} \n\nThis paper concerns two dimensional multi-particle systems with arbitrary mass spectra from this dual perspective, clearly one step further in the complexity ladder, closer to the full higher dimensional problem.\\footnote{Multi-particle primal problems of this kind were pioneered in \\cite{Paper4,ToAppearSUSY}.} We will also consider a different technical approach, complementary to \\cite{Monolith}, with some aspects which we hope can be more directly transposable to higher dimensions. \n\n\n\\section{Dual optimization and the S-matrix bootstrap}\n\\label{sec2}\n\n\\qquad To achieve the desired dual formulation, it is useful to revisit the S-matrix bootstrap with a slightly different perspective.\n\nIn the \\textit{primal} S-matrix bootstrap formulation\n one constructs scattering amplitudes consistent with a set of axioms, or constraints. Such amplitudes are said to be \\textit{feasible}, that is, they belong to the allowed space of theories. \nOne then optimizes physical observables, such as the interaction strength between stable particles, in the space of feasible amplitudes. The prototypical example is \\cite{Paper2,Creutz}: in a 2D theory with a single stable particle of mass $m$, what is the maximum cubic coupling $g$ consistent with a $2 \\rightarrow 2$ scattering amplitude $M$ satisfying the constraints of unitarity, extended analyticity, and crossing? 
\n\nIn other words, we would like to solve the optimization problem\n \\begin{mdframed}[frametitle={Primal problem},frametitlealignment=\\centering,backgroundcolor=blue!6, leftmargin=0cm, rightmargin=0cm, topline=false,\n\tbottomline=false, leftline=false, rightline=false] \\label{primal}\n\t\\vspace{-0.6cm}\n \\begin{align} &\\underset{\\text{in } M(s)\\text{, }g^2}{\\text{maximize}} && g^2 \\label{primal}\\\\\n & \\text{constrained by} && \\mathcal{A}(s) \\equiv M(s) - \\Big(M_\\infty -\\frac{g^2}{s-m^2} +\\!\\!\\! \\int\\limits_{4m^2}^\\infty \\! \\frac{dz}{\\pi} \\frac{\\text{Im}M(z)}{s-z {+}i0} {+} \\left(s \\leftrightarrow 4m^2-s \\right)\\Big) =0 \\nonumber\\\\\n & &&\\text{for } s>4m^2, \\label{analandcrossing}\\\\\n & \\text{} && \\mathcal{U}(s) \\equiv 2\\text{ Im}M(s) - \\frac{\\lvert M(s)\\rvert^2}{2\\sqrt{s-4m^2}\\sqrt{s}} \\geq 0 \\qquad \\text{for } s>4m^2. \\label{unitarity}\n \\end{align}\n \\end{mdframed}\nwhere we maximize over the space of analytic functions $M$, and emphasize that one parameter in this infinite dimensional space is the residue of such functions at $s=m^2$ which is equal to~$-g^2$. \nThe first constraint (\\ref{analandcrossing}), an exact equality, imposes that feasible scattering amplitudes must respect crossing, real analyticity, and have singularities determined by physical processes: poles corresponding to one particle states, and cuts corresponding to multi-particle states.\\footnote{It turns out that there is no loss of generality in omitting subtractions from (\\ref{analandcrossing}), since a more careful analysis shows that the inclusion of those leads to the same result (\\ref{dual bootstrap}). We opt for not including subtractions in the main text for the sake of clarity -- see appendix~\\ref{analyticstuff} for a more detailed discussion.}\n We choose to impose this condition for $s > 4m^2$, but because we maximise over analytic functions, feasible amplitudes will have this property for all $s$ in the physical sheet.\\footnote{The physical sheet is defined as the first Riemann sheet encountered after analytically continuing from physical kinematics, $s>4m^2$, using the $+ i \\epsilon$ prescription.\n } The convenience of imposing this condition for $s > 4m^2$ will become clear in time. The second constraint (\\ref{unitarity}) is the physical unitarity condition, equivalent to~$\\lvert S(s)\\rvert \\leq 1$. \n\nSince the quantity we are maximising (the objective) is a linear map on the space of analytic functions, namely the map that evaluates the residue at a point, and since the constraints~(\\ref{analandcrossing}),~(\\ref{unitarity}) are affine and convex respectively, the optimization problem we aim to solve is an infinite dimensional convex optimization problem. For such a simple problem, there are now two directions that can be taken. The first option is to solve the infinite dimensional problem analytically. As is well known by now, this follows from a simple application of the maximum modulus principle~\\cite{Paper2, Creutz}. The second option, available in more complicated situations, is to bring the problem to the realm of computers by maximizing our objective in some finite dimensional subspace of analytic functions. For example, one can consider analytic functions that are, up to poles, polynomials of degree at most $N_\\text{max}$ in some foliation variable $\\rho$ that trivializes the constraint (\\ref{analandcrossing}), as done in~\\cite{Paper3}. 
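To make the truncation concrete, the following schematic Python sketch parametrizes $M$ by its pole terms plus a degree-$N_\\text{max}$ polynomial in the foliation variable $\\rho$ (defined later in (\\ref{rhovariabledef})), which takes care of the constraint (\\ref{analandcrossing}), imposes the unitarity constraint (\\ref{unitarity}) on a finite grid of physical energies and maximizes $g^2$ with a general-purpose optimizer. It is only meant to illustrate the structure of the truncated primal problem; it is a crude editorial stand-in for the dedicated solvers discussed next, and its output should not be taken as a reliable bound.
\\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

m, Nmax = 1.0, 4
s = 4*m**2 + np.linspace(0.05, 40.0, 80)     # grid on the physical cut
t = 4*m**2 - s
# rho variable: boundary value from above the cut for s, real value for t < 0
rho_s = ((np.sqrt(2*m**2) + 1j*np.sqrt(s - 4*m**2))
         / (np.sqrt(2*m**2) - 1j*np.sqrt(s - 4*m**2)))
rho_t = ((np.sqrt(2*m**2) - np.sqrt(4*m**2 - t))
         / (np.sqrt(2*m**2) + np.sqrt(4*m**2 - t)))

def amplitude(x):                            # poles plus polynomial in rho
    g2, a = x[0], x[1:]
    pole = -g2/(s - m**2) - g2/(t - m**2)
    return pole + sum(a[n]*(rho_s**n + rho_t**n) for n in range(Nmax + 1))

def unitarity(x):                            # unitarity constraint, must be >= 0
    M = amplitude(x)
    return 2*M.imag - np.abs(M)**2/(2*np.sqrt(s - 4*m**2)*np.sqrt(s))

x0 = np.concatenate(([1.0], np.zeros(Nmax + 1)))
res = minimize(lambda x: -x[0], x0, method="SLSQP",
               constraints=[{"type": "ineq", "fun": unitarity}])
print("truncated primal estimate of max g^2:", res.x[0])
\\end{verbatim}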
This truncated problem can be efficiently solved by a convex optimization software, for example SDPB \\cite{SDPB, scaling}. By choosing and increasing the finite dimensional subspace smartly, one obtains lower bounds to the solution of the primal problem that should converge to the correct bound with more expensive numerics.\n\nThe primal formulation suffers from two important shortcomings. First, for some problems it is hard to identify a simple ansatz, or truncation scheme, that allows for fast convergence. This is often the case in higher dimensional S-matrix bootstrap applications, or when scattering heavy particles in 2D. Second, and perhaps more importantly, one may want to add extra variables and constraints to the primal problem. In the previous example, those variables and constraints could be, respectively, higher point amplitudes and higher point unitarity equations. It may be the case that a feasible $2 \\rightarrow 2$ amplitude in the original primal problem may no longer be feasible in the enlarged space with extra constraints. In those cases, a point in theory space previously said to be allowed becomes forbidden. It would be more satisfying if bounds on the space of theories obtained by studying some scattering subsector remained true once the full set of QFT constraints were imposed.\\footnote{Much in the same way that CFT data excluded by the numerical conformal bootstrap remains excluded once more crossing equations are included into the system.} To overcome both of this shortcomings, we introduce the dual formulation. We use the coupling maximization problem as a guiding example, before generalizing.\n\nConsider the Lagrangian\\footnote{Note $\\mathcal{A}(s)$ is actually real. }\n\\begBvR{equation}\n\\mathcal{L}(M,w,\\lambda) = g^2 + \\int_{4 m^2}^\\infty ds\\text{ } w(s) \\mathcal{A}(s) + \\lambda(s) \\mathcal{U}(s) \\label{lagrangian}\n\\end{equation}\nwith $\\lambda(s) \\geq 0$ and define the dual functional\n\\begBvR{equation}\nd(w,\\lambda) = \\underset{\\{M, g\\}}{\\text{sup}} \\mathcal{L}(M,w,\\lambda)\\label{dualfunctional}\n\\end{equation}\nNotice that the supremum is taken over unconstrained analytic functions $M$.\\footnote{It is useful to think of analytic functions as being defined through their independent real and imaginary parts along a line. Of course, if the dispersion (\\ref{analandcrossing}) were to hold, then those would not be independent. However, since we maximise over generic analytic functions, we are free to treat $\\text{Re }M$ and $\\text{Im } M$ for $s>4m^2$ as independent.} The dual functional $d$ is the central object in the dual formulation due to the following property: \n\n \\begin{mdframed}[frametitle={Weak Duality},frametitlealignment=\\centering,backgroundcolor=black!10, leftmargin=0cm, rightmargin=0cm, topline=false,\n\tbottomline=false, leftline=false, rightline=false] \\label{duality}\n\t\\begBvR{equation}\n\\text{Let the solution of the primal problem be $g_*^2$. Then\t}\nd(w,\\lambda) \\geq g_*^2. \\label{weak}\n\\end{equation}\n \\end{mdframed}\nWeak duality holds due to two observations. 
First, note that since\n\\begBvR{equation}\n\\underset{\\{\\lambda \\geq 0, w\\}}{\\text{inf }} \\mathcal{L}(M,w,\\lambda) = \n\\begin{cases}\n g^2& \\text{if } M \\text{ is feasible}\\\\\n -\\infty & \\text{otherwise},\n\\end{cases}\\label{since} \n\\end{equation}\nwe have that \n\\begin{equation*}\n\\normalfont g_*^2 = \\underset{\\{M, g\\}}{\\text{sup}} \\left[ \\underset{\\{\\lambda \\geq 0, w\\}}{\\text{inf }} \\mathcal{L}(M,w,\\lambda)\\right].\n\\end{equation*}\nWeak duality then follows from the max-min inequality\n\\begBvR{equation}\nd(w,\\lambda) \\geq \\underset{\\{\\lambda \\geq 0, w\\}}{\\text{inf }} \\left[ \\underset{\\{M, g\\}}{\\text{sup}} \\mathcal{L}(M,w,\\lambda) \\right] \\geq \\underset{\\{M, g\\}}{\\text{sup}} \\left[ \\underset{\\{\\lambda \\geq 0, w\\}}{\\text{inf }} \\mathcal{L}(M,w,\\lambda)\\right] = g_*^2. \\label{maxmin}\n\\end{equation}\n\nExploring the $\\{w,\\lambda\\}$ space, the space of dual variables, we therefore obtain upper bounds on the values of $g$ allowed by the axioms and exclude regions in theory space. This, in turn, partially solves the first shortcome of the primal formulation: by providing upper limits on the coupling, it bounds how far from converging an ineffective primal truncation scheme may be. To find the best possible upper bound, we solve the\n \\begin{mdframed}[frametitle={Dual problem (generic)},frametitlealignment=\\centering,backgroundcolor=red!6, leftmargin=0cm, rightmargin=0cm, topline=false,\n\tbottomline=false, leftline=false, rightline=false] \n\\vspace{-0.6cm}\n \\begin{align} &\\underset{\\text{in } w(s)\\text{, }\\lambda(s)}{\\text{minimize}} && d(w,\\lambda) \\label{dual generic}\\\\\n & \\text{constrained by} && \\lambda(s)\\geq 0\\nonumber\n \\end{align}\n \\end{mdframed}\n\nThe construction of dual functionals from a primal optimization problem is standard in optimization theory, but the particularities of the problems encountered in the S-matrix bootstrap lead to important simplifications. One of these is that the analyticity of the scattering amplitude is inherited by the dual variable $w(s)$, conjugate to the analyticity constraint. In fact, let's define a ``dual scattering function\", $W(s)$\\footnote{It is worth stressing that the introduction of an analytic function $W(s)$ is not mandatory. It is possible to work with real densities $w(s)$ and follow the argument presented in this section using the same logic. This possibility is particularly useful in higher dimensions if one wants to assume no more than the proven analyticity domains~\\cite{Archeo4}.}, odd under crossing and whose absorptive part is $w(s)$: \\begBvR{equation}\nW(s) \\equiv \\frac{1}{\\pi}\\int_{4m^2}^\\infty dz \\frac{w(z)}{s-z {+}i0} - \\left(s \\leftrightarrow 4m^2-s \\right). \\label{disp}\n\\end{equation}\n\nThen, swapping a few integrals in (\\ref{lagrangian}) and using $\\frac{1}{\\left(s-z{\\pm}i0\\right)} = \\mp i \\pi \\delta(s-z) + \\mathcal{P}\\frac{1}{(s-z)}$ leads to a very simple representation for the lagrangian as\n\\begBvR{equation}\n\\mathcal{L}(M,W,\\lambda) = g^2 \\left(1 + \\pi W(m^2)\\right) + \\int_{4 m^2}^\\infty ds\\text{ } \\text{Im}\\left(W(s) M(s)\\right) + \\lambda(s)\\, \\mathcal{U}(s). \n\\label{eq13}\n\\end{equation}\nNote that the Lagrangian density is now manifestly local in $M$ as the Cauchy kernel from~(\\ref{analandcrossing}) has been nicely absorbed into $W$. 
This locality, together with the quadratic nature of the constraint equations\\footnote{Dispersions for higher point amplitudes are no longer expected to be quadratic in lower point functions due to the presence of Landau singularities.} leads to the next simplification over generic dual optimization problems: we can perform both the maximization over $M$ in (\\ref{dualfunctional}) and the minimization over~$\\lambda$ in~\\eqref{dual generic} exactly. We now analyze those in sequence. \n\nBefore doing that, first notice that linearity of $\\mathcal{L}$ in $g^2$ implies that \n\\begBvR{equation}\nd(W, \\lambda) = + \\infty \\,\\,\\,\\, \\text{ unless } \\,\\,\\,\\, \\pi W(m^2)=-1. \\label{normunless}\n\\end{equation}\nThis means that unless $W$ is properly normalized at $m^2$, the bounds obtained from the dual functional are vacuous. Hence, in solving the dual problem, there is no loss of generality in restricting ourselves to the space of $W$ satisfying the constraint in (\\ref{normunless}).\n \n The linear Lagrange equations with respect to variations of $M(s)$ for $s>4m^2$ result in\n \\begBvR{equation}\n M_\\text{critical}(s) = \\left[\\text{Im}(W(s))\/\\lambda(s) + i \\left(2\\lambda(s) + \\text{Re}(W(s))\/\\lambda(s)\\right)\\right] \/(2 \\rho^2_{11}) .\n\\nonumber\n \\end{equation}\nwhere $ \\rho^2_{11} = 1\/(2 \\sqrt{s-4m^2}\\sqrt{s})$. Second order variations show that, indeed, this is a local maximum provided $\\lambda(s)>0$. It follows from the definition (\\ref{dualfunctional}) that, provided $\\pi W(m^2)=-1$,\n\n\\begBvR{equation}\nd(W, \\lambda) = \\int_{4m^2}^\\infty ds \\left( \\frac{\\lvert W(s)\\rvert^2}{4 \\lambda(s)} + \\lambda(s) + \\text{Re}(W(s))\\right)\/ \\rho^2_{11} . \\la{dWL}\n\\end{equation}\n\nNext, we minimize over $\\lambda$, leading to $\\lambda(s)=|W(s)|\/2$. The result is $D(W) \\equiv \\underset{\\lambda \\geq 0}{\\text{inf }} d(W,\\lambda)$ given by \n\\begBvR{equation}\nD(W) = \\int_{4m^2}^\\infty ds \\left(\\text{Re}(W(s)) + \\lvert W(s)\\rvert \\right)\/ \\rho^2_{11}. \\label{bfunc}\n\\end{equation}\nin which case\\footnote{Note that unitarity is automatically saturated once we minimize in $\\lambda$.} \n\\begBvR{equation}\n M_{\\text{critical}}(s) = \\frac{i}{\\rho^2_{11}}\\left(1 + \\frac{W^*}{\\lvert W\\rvert} \\right).\\nonumber\n\\end{equation}\n\nIn sum, the dual of (\\ref{primal}) simplifies to\n\n \\begin{mdframed}[frametitle={Dual problem (S-matrix bootstrap)},frametitlealignment=\\centering,backgroundcolor=red!6, leftmargin=0cm, rightmargin=0cm, topline=false,\n\tbottomline=false, leftline=false, rightline=false] \n\t\\vspace{-0.4cm}\n \\begin{align} &\\underset{\\text{in } W(s)}{\\text{minimize}} && D(W)=\\int_{4m^2}^\\infty ds \\left(\\text{Re}(W(s)) + \\lvert W(s)\\rvert \\right)\/ \\rho^2_{11} \\label{dual bootstrap}\\\\\n & \\text{constrained by} && \\pi W(m^2)=-1. \\label{norm}\n \\end{align}\n \\end{mdframed}\n\nThe dual problem can be tackled numerically through the same strategy used for the primal problem, that is, restricting our search to a finite dimensional subspace of analytic $W$s. For example, one could use the $\\rho$ foliation variables to write the ansatz\\footnote{The Ansatz (\\ref{wansatz}) is consistent with the dispersion (\\ref{disp}). 
In particular, the poles in (\\ref{wansatz}) correspond to a delta function contribution in $w(s)$.}\n\\begBvR{equation}\nW_{\\text{ansatz}}(s) = \\frac{1}{s (4m^2-s)}\\sum_{n=1}^{N_\\text{max}}a_n (\\rho(s)^n - \\rho(t)^n), \\label{wansatz}\n\\end{equation}\nwhere \n\\begBvR{equation}\n\\rho(s) = \\frac{\\sqrt{2m^2 } - \\sqrt{4m^2 -s}}{\\sqrt{2m^2} + \\sqrt{4m^2 -s}},\n\\label{rhovariabledef}\n\\end{equation}\nand minimize the functional (\\ref{dual bootstrap}) in the finite dimensional space parametrized by the $a_n$'s. Note that the constraint (\\ref{norm}) is a linear constraint in this space. The functional (\\ref{bfunc}) is nonlinear, but it is convex in $W$. Performing such minimization, say, in \\texttt{Mathematica} shows that, as one increases $N_\\text{max}$, the result of the problem (\\ref{dual bootstrap}) converges to the result of the primal problem (\\ref{primal}). This is expected if our optimization problem satisfies\n\n\\begin{mdframed}[frametitle={Strong Duality},frametitlealignment=\\centering,backgroundcolor=black!10, leftmargin=0cm, rightmargin=0cm, topline=false,\n\tbottomline=false, leftline=false, rightline=false]\nThe solutions to the primal (\\ref{primal}) and dual problem (\\ref{dual bootstrap}) are identical, i.e. $g_*^2 = \\underset{\\text{in } W}{\\text{min}} \\text{ } D(W).$ In other words, the $\\ge$ symbol in (\\ref{weak}) is actually an $=$ sign.\n \\end{mdframed}\nThis property is argued for in appendix \\ref{strong}.\n \nTo explain how the dual formulation solves the second shortcoming of the primal optimization, and in view of the applications in section \\ref{application}, let's consider a slightly different class of S-matrix Bootstrap problems. Consider a gapped theory with two real stable particles of masses $m_1$ and $m_2$ respectively, with $m_1<m_2<2m_1$. Collecting the various $2\\to2$ amplitudes $M_{a\\to b}$ into a matrix $\\mathbb{M}(s)$, the analogue of the primal problem (\\ref{primal}) reads\n \\begin{mdframed}[frametitle={Primal problem (matrix)},frametitlealignment=\\centering,backgroundcolor=blue!6, leftmargin=0cm, rightmargin=0cm, topline=false,\n\tbottomline=false, leftline=false, rightline=false] \n\t\\vspace{-0.6cm}\n \\begin{align} &\\underset{\\text{in } \\mathbb{M}(s)\\text{, }g^2}{\\text{maximize}} && g^2 \\nonumber\\\\\n & \\text{constrained by} && \\mathbb{A}(s) = 0 \\qquad \\qquad &&&\\text{for } s>4m_1^2, \\label{analandcrossing2}\\\\\n & \\text{} && \\mathbb{U}(s) \\equiv 2\\text{ Im}\\,\\mathbb{M}(s) - \\mathbb{M}^\\dagger \\text{\\outline{$\\rho$}}\\, \\mathbb{M} \\succeq 0 \\qquad \\qquad &&&\\text{for } s>4m_1^2. \\label{unitarity2}\n \\end{align}\n \\end{mdframed}\nwhere $\\mathbb{A}_{a b} \\equiv \\mathcal{A}_{a \\to b}$ are analogous to (\\ref{analandcrossing}) and impose the correct dispersion relations for the amplitudes $M_{a \\to b}$ (see e.g. (\\ref{AA}) in the next section). Here $\\text{\\outline{$\\rho$}}$ are the phase space factors for the intermediate states (see e.g. (\\ref{rhoMatrix}) in the next section). \nTo obtain the dual problem, we introduce the Lagrangian\n\\begBvR{equation}\n\\mathcal{L}(\\mathbb{M}, \\text{\\outline{w}} ,\\text{\\outline{$\\Lambda$}}) = g^2 + \\int_{4 m_1^2}^\\infty ds\\text{ } \\text{Tr}\\left( \\text{\\outline{w}} \\cdot \\mathbb{A} (s) + \\text{\\outline{$\\Lambda$}} \\cdot \\mathbb{U} (s) \\right), \\label{lagrangian2}\n\\end{equation}\nwhere $\\text{\\outline{w}}$ and $\\text{\\outline{$\\Lambda$}}$ are respectively symmetric and hermitian matrices of dual variables with~$\\text{\\outline{$\\Lambda$}}$ positive semi-definite. The new dual functional\n\\begBvR{equation}\nd(\\text{\\outline{w}}, \\text{\\outline{$\\Lambda$}}) = \\underset{\\mathbb{M}}{\\text{sup }} \\mathcal{L}(\\mathbb{M}, \\text{\\outline{w}} ,{\\text{\\outline{$\\Lambda$}}}) \\label{dualfunctional2}\n\\end{equation}\nsatisfies weak duality by similar arguments as those in equations (\\ref{since}-\\ref{maxmin}). 
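Before writing the matrix version of the dual problem, let us make the single-component minimization just described fully concrete. The schematic Python sketch below (an editorial illustration in the same spirit as the \\texttt{Mathematica} minimization mentioned above, with a finite integration cutoff whose error is power-suppressed since the integrand decays at large $s$) evaluates $D(W)$ on the ansatz (\\ref{wansatz}), imposes the linear normalization (\\ref{norm}) and minimizes over the $a_n$; by weak duality, any normalized $W$ already provides an upper bound on $g^2_*$, up to the accuracy of the numerical integration.
\\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

m, Nmax = 1.0, 6
def rho(z):                                  # rho variable at a real point z < 4m^2
    return ((np.sqrt(2*m**2) - np.sqrt(4*m**2 - z))
            / (np.sqrt(2*m**2) + np.sqrt(4*m**2 - z)))

s = 4*m**2 + np.logspace(-3, 4, 600)         # physical cut, finite cutoff
rho_s = ((np.sqrt(2*m**2) + 1j*np.sqrt(s - 4*m**2))
         / (np.sqrt(2*m**2) - 1j*np.sqrt(s - 4*m**2)))
rho_t = rho(4*m**2 - s)                      # t = 4m^2 - s is real and below threshold

def W_cut(a):                                # the W ansatz evaluated on the cut
    return sum(a[n-1]*(rho_s**n - rho_t**n) for n in range(1, Nmax+1))/(s*(4*m**2 - s))

def W_at(z, a):                              # the same ansatz at a real point below threshold
    return sum(a[n-1]*(rho(z)**n - rho(4*m**2 - z)**n)
               for n in range(1, Nmax+1))/(z*(4*m**2 - z))

def D(a):                                    # dual functional, trapezoid rule
    W = W_cut(a)
    f = (W.real + np.abs(W))*2*np.sqrt(s - 4*m**2)*np.sqrt(s)
    return np.sum(0.5*(f[1:] + f[:-1])*np.diff(s))

norm = {"type": "eq", "fun": lambda a: np.pi*W_at(m**2, a) + 1.0}
a0 = np.zeros(Nmax); a0[0] = -3.0/(np.pi*(rho(m**2) - rho(3*m**2)))
res = minimize(D, a0, method="SLSQP", constraints=[norm])
print("dual upper bound on g^2_*:", res.fun)
\\end{verbatim}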
The dual optimization problem is \n \\begin{mdframed}[frametitle={Dual problem (matrix)},frametitlealignment=\\centering,backgroundcolor=red!6, leftmargin=0cm, rightmargin=0cm, topline=false,\n\tbottomline=false, leftline=false, rightline=false] \n\t\\vspace{-0.6cm}\n \\begin{align} &\\underset{\\text{in } \\text{\\outline{w}}(s)\\text{, }\\text{\\outline{$\\Lambda$}}(s)}{\\text{minimize}} && d(\\text{\\outline{w}}, \\text{\\outline{$\\Lambda$}}) \\label{dual 2}\\\\\n & \\text{constrained by} && \\text{\\outline{$\\Lambda$}}(s)\\succeq 0. \\nonumber\n \\end{align}\n \\end{mdframed}\n Note that an upper bound on the solution of the primal problem (\\ref{primal}) is obtained by minimizing $d$ in the subspace $\\text{\\outline{w}}_{ab}(s) = \\delta_a^{11} \\delta_b^{11} w(s)$, $\\text{\\outline{$\\Lambda$}}_{ab} = \\delta_a^{11} \\delta_b^{11} \\lambda(s)$, $\\lambda\\geq0$. This is equivalent to the dual problem obtained by including only the amplitude $M_{11\\to11}$ in the bootstrap system, i.e.\\ in the primal problem. Restricting to a scattering subsector in the dual formulation provides true bounds to the more complete optimization problem. Conversely, bounds obtained by studying some restricted space of amplitudes and constraints remain valid once extra axioms and degrees of freedom are considered. We hope it is clear that the argument provided by means of an example is generic. This solves the second shortcoming of the primal formulation.\n \n\n\\section{An application}\n\\label{application}\n\\subsection{The setup}\nWe now turn our attention to a much richer S-matrix bootstrap problem. We consider a theory with two particles of mass $m_1$ and $m_2>m_1$. We will \\textit{not} assume any global symmetry. For concreteness, we will take\\footnote{Setting $m_1=1$ simply sets our units. All $m_2> \\sqrt{2}$ would then give very similar plots\/conclusions. We could also consider $m_2<\\sqrt{2}$; the plots are a little bit less eye pleasing in that case. The significance of the transition point $m_2^*=\\sqrt{2}$ is that this is the crossing invariant point for the $11\\to 11$ process; on either side of this point the residues have different signs leading to quite different optimization results.}\n\\begBvR{equation}\nm_1=1\\,, \\qquad m_2=3\/2 \\,.\\nonumber\n\\end{equation}\nThere are a priori four couplings involving these two particles: $g_{111},g_{112},g_{122},g_{222}$. They would show up as $s$-channel residues in the various scattering amplitudes:\n\\begBvR{equation}\n\\begin{array}{c|c|c}\n\\text{Amplitude} & \\text{Exchange of particle } 1 & \\text{Exchange of particle } 2 \\\\ \\hline\n11\\to 11 & {\\color{red}g_{111}^2} & {\\color{blue} g_{112}^2} \\\\ \\hline\n11\\to 12 &{\\color{red}g_{111}}{\\color{blue} g_{112}} &{\\color{blue} g_{112}}{\\color{cadmiumgreen}g_{122}} \\\\ \\hline\n12\\to 12 & {\\color{blue} g_{112}^2} & {\\color{cadmiumgreen} g_{122}^2} \\\\ \\hline\n11\\to 22 & {\\color{red} g_{111}} {\\color{cadmiumgreen} g_{122}} & {\\color{blue} g_{112}} {\\color{magenta}g_{222}} \\\\ \\hline\n12\\to 22 & {\\color{blue} g_{112}}{\\color{cadmiumgreen} g_{122}} & {\\color{cadmiumgreen} g_{122}} {\\color{magenta} g_{222}} \\\\ \\hline\n22\\to 22 & {\\color{cadmiumgreen} g_{122}^2} & {\\color{magenta} g_{222}^2 }\n\\end{array} \\nonumber\n\\end{equation}\nWe will not consider the full coupled system of six amplitudes. Instead we will consider a nice closed subset involving the $11\\to 11$, $11\\to 12$ and (the forward) $12\\to 12$ processes only (that is, the first three lines in the table). 
As such we will be insensitive to $g_{222}$. We will furthermore consider a section of the remaining three-dimensional space where $g_{122}=0$ so that the problem simplifies slightly to\\footnote{The analysis for any other fixed value of $g_{122}$ follows identically, see more at the end of this section. }\n\\begBvR{equation}\n\\begin{array}{c|c|c}\n\\text{Amplitude} & \\text{Exchange of particle } 1 & \\text{Exchange of particle } 2 \\\\ \\hline\n11\\to 11 & {\\color{red}g_{111}^2} & {\\color{blue} g_{112}^2} \\\\ \\hline\n11\\to 12 &{\\color{red}g_{111}}{\\color{blue} g_{112}} & 0 \\\\ \\hline\n12\\to 12 & {\\color{blue} g_{112}^2} & 0 \n\\end{array} \\nonumber\n\\end{equation}\nand our main goal here is to explore the allowed two dimensional $(g_{112},g_{111})$ space. A convenient way to find the boundary of this space is by shooting radially. We fix an angle $\\beta$ and define a radius $R$ as\n\\begBvR{equation}\n(g_{112},g_{111}) = R(\\cos\\beta,\\sin\\beta) \\,. \\nonumber\n\\end{equation}\nThen we find the maximum value of $R$ for each $\\beta$ choice to plot the full two-dimensional space. \n\nIn the primal language we will get larger and larger $R$'s as our ansatz is more and more complete. In the dual language we will rule out smaller and smaller $R$ as we improve our ansatz. Sandwiched between the two will be the true (two dimensional section of the) boundary of the S-matrix space. \n\nIt is equally straightforward to fix $g_{122}$ to any other value and analyze another 2d section in this way or even collect various values of $g_{122}$ to construct the full $3D$ space. We leave such detailed scans for the future when we will have more realistic setups designed to bootstrap particular relevant physical theories such as the (regular and tricritical) Ising model (perturbed by thermal and magnetic deformations) as discussed in the conclusions. \n\n\\subsection{Single Component Horn}\n\\label{Horn}\nLet us start our search for the two dimensional section of the allowed S-matrix space by focusing on the constraints arising from the single $M=M_{11\\to 11}$ component alone. \n\nThis is a warm up section and many of the results here are not new: indeed, the primal formulation of single component scattering has been the subject of \\cite{Paper2}; a minor new ingredient we will consider here is the radial search element. (The radial problem for the space of S-matrices with $O(N)$ symmetry and no bound states was introduced in~\\cite{Monolith}.) In appendix H of \\cite{Paper4} an almost identical primal problem was solved analytically; the analytic curves in figure \\ref{figHorn} are obtained by trivially adapting the arguments therein. The dual formulation for these single component cases with several exchanges masses, however, will be novel and provide very useful intuition for the most general case. \n\nThe primal radial problem can be compactly formulated as \n\n\\begin{mdframed}[frametitle={Primal Radial Problem for Single Component},frametitlealignment=\\centering,backgroundcolor=blue!6, leftmargin=0cm, rightmargin=0cm, topline=false,bottomline=false, leftline=false, rightline=false] \n\\vspace{-0.6cm}\n\\begin{align} &\\underset{\\text{in } {M, R^2}}{\\text{maximize}} && R^2\\nonumber\\\\\n& \\text{constr. 
by} && \\text{Res}_{m_1^2}(M)=R^2\\sin^2\\beta, \\quad \\text{Res}_{m_2^2}(M)=R^2\\cos^2\\beta \\label{radialcondition}\\\\ \n& s\\geq4m_1^2 && \\mathcal{A}(s)=M(s){-}M_\\infty+\\left(\\frac{g_{111}^2}{s-m_1^2}{+}\\frac{g_{112}^2}{s-m_2^2}{-}\\frac{1}{\\pi}\\int_{4m_1^2}^\\infty dz\\,\\frac{\\IM M(z)}{z-s} +(s\\leftrightarrow t)\\right){=}0\\nonumber\\\\\n& s\\geq 4m_1^2&& \\mathcal{U}(s)=2\\IM M(s) -\\rho_{11}^2 |M(s)|^2 \\geq 0. \n \\label{primal bootstrap 11to11}\n\\end{align}\n\\end{mdframed}\n\nWe will now construct the dual problem. If it were not for the radial additional equality constraints~\\eqref{radialcondition} the corresponding dual problem would be given already in eq.~\\eqref{dual bootstrap}.\nIn this case we need to introduce additional Lagrange multipliers $\\nu_1$ and $\\nu_2$\nto the lagrangian~\\eqref{lagrangian}\n\\begBvR{equation}\n\\mathcal{L}=R^2+\\nu_1 (\\text{Res}_{m_1^2}(M)-R^2\\sin^2\\beta)+\\nu_2 (\\text{Res}_{m_2^2}(M)-R^2\\cos^2\\beta)+\\int_{4m_1^2}^\\infty ds\\,\\mathcal{A}(s)w(s)+\\mathcal{U}(s)\\lambda(s).\n\\label{lagrangianhorn}\n\\end{equation}\nNow we follow the logic of section \\ref{sec2} verbatin modulo a few small differences inherent to the radial nature of the primal problem which we will highlight. First of all note that the maximum of the Lagrangian with respect to $R^2$ yields a bounded result only when\n\\begBvR{equation}\n1-\\nu_1 \\sin^2\\beta-\\nu_2 \\cos^2\\beta=0.\\nonumber\n\\end{equation}\nNext, identifying $w(s)=\\IM W(s)$ with $W(s)$ given by (\\ref{disp}) as before will lead to a beautiful dual problem formulation with a totally local optimization target. Importantly \n\\begBvR{equation}\n\\int_{4m_1^2}^\\infty ds\\,\\mathcal{A}(s)w(s)= \\int_{4m_1^2}^\\infty ds\\,\\text{Im}(M(s)W(s))+ \\pi \\text{Res}_{m_1^2}(M) W(m_1^2)+\\pi \\text{Res}_{m_2^2}(M) W(m_2^2)\\nonumber\n\\end{equation}\nso we see that the optimization with respect to the parameters $ \\text{Res}_{m_i^2}(M) $ identifies the lagrange multipliers $\\nu_i$ with the normalization of the dual functional at the stable mass values $W(m_i^2)$. All in all we therefore obtain the simple dual problem radial generalization of (\\ref{dual bootstrap}) as \n\\begin{mdframed}[frametitle={Dual Radial Problem for Single Component},frametitlealignment=\\centering,backgroundcolor=red!6, leftmargin=0cm, rightmargin=0cm, topline=false,bottomline=false, leftline=false, rightline=false] \n\\vspace{-0.3cm}\n\\begin{align} &\\underset{\\text{in } {W}}{\\text{minimize}} && D(W)=\\int_{4m^2_1}^\\infty ds \\left(\\text{Re}(W(s)) + \\lvert W(s)\\rvert \\right)\/ \\rho^2_{11}\\nonumber\\\\\n& \\text{constrained by} && 1+\\pi\\, W(m_1^2)\\sin^2\\beta+\\pi\\, W(m_2^2)\\cos^2\\beta=0.\n\\label{dual bootstrap 11to11}\n\\end{align}\n\\end{mdframed}\n\n\nNotice again the nice complementarity between the pole singularities associated to bound states in the physical amplitude and the absence of poles in the ``dual scattering function\" $W$ given by (\\ref{disp}), replaced instead by the simple normalization conditions (\\ref{dual bootstrap 11to11}). Conversely, when we maximize effective couplings in theories without bound-states the primal S-matrices have no bound-states and the dual functionals have poles \\cite{Monolith}. \n\n\\begin{figure}[t]\n\t\\centering \n\t\\includegraphics[width=\\linewidth]{Hornv1-crop.pdf}\n\t\\caption{Numerical bounds on the coupling space $\\{g_{111},g_{112}\\}$. The blue shaded regions enclose the allowed points for different $N_{\\text{max}}$ in our primal ansatz. 
The red shaded regions mark the points that are rigorously excluded. The thin black analytic curve is the boundary of the allowed region \\cite{Paper4}. \n\t As we increase $N_{\\max}$ from 1 to 5 in the primal problem, the blue regions enlarge, allowing for more and more points and eventually converging to touch the boundary of the permitted space (this is more evident in the ``horn'' region). In the dual strategy as we increase $N_{\\max}$ from 1 to 5 we exclude more and more points. At convergence the excluded region touches the boundary of the allowed space. We restrict the plot to the first quadrant since it is symmetric under $g \\leftrightarrow -g$.}\n\t\\label{figHorn}\n\\end{figure}\n\n\nIn figure~\\ref{figHorn} we show the numerical results for both the primal (inner blue shaded regions) and the dual problem (outer red shaded regions).\n\n\n\\subsection{Multiple Component Kinematics}\n\nNext we consider the full system with $11\\to 11$, $11\\to 12$ and \\textit{forward} $12\\to 12$ amplitudes.\\footnote{As reviewed in detail in \\cite{Paper4} when a particle of type $1$ scatters with a particle of type $2$ it can either continue straight (\\textit{forward amplitude}) or bounce back (\\textit{backward amplitude}). Here we consider the forward process only. This process is nicely crossing symmetric. (The backward process is not; instead it is related by crossing to $11\\to 22$ scattering so considering this backward process would require more scattering processes to close the system of {unitarity} equations.)} The two dimensional kinematics of the $11\\to 11$ process and of the \\textit{forward} $12\\to 12$ process are reviewed in great detail in section 2 of \\cite{Paper4} so here we will mostly focus on the new $11\\to 12$ process.\\footnote{This process was not considered in \\cite{Paper4} because it violates $\\mathbb{Z}_2$ symmetry. Here we don't have $\\mathbb{Z}_2$ symmetry so it is the first most natural process to consider after the lightest $11\\to 11$ scattering amplitude.} \nThis scattering process is a nice fully symmetric process. No matter which channel we look at it, it always describes two particles of type $1$ (in the infinite future or past) scattering into a particle of type $1$ and another of type $2$. As such \n\\begBvR{equation}\nM_{11\\to 12}(s,t,u)\\nonumber\n\\end{equation}\nis fully symmetric under any permutation of the three Mandelstam variables $s,t,u$. Of course, they are not independent. Besides \n\\begBvR{equation}\ns+t+u=3m_1^2+m_2^2 \\la{Plus}\n\\end{equation} \nwhich holds in any dimension, we have the two dimensional constraint \n\\begBvR{equation}\ns t u=m_1^2\\left(m_1^2- m_2^2\\right){}^2 \\la{Times}\n\\end{equation}\n\nEquations (\\ref{Plus}) and (\\ref{Times}) describe a curve. Its projection into real $s,t,u$ is given by the solid curved blue lines in figure \\ref{triangle}. \n\\begin{figure}[t]\n\t\\centering \n\t\\includegraphics[width=\\linewidth]{1112Triangle.pdf}\n\t\\caption{Maldelstam Triangle for $11\\to 12$ scattering. The x-axis is given by $x=(s+2 t-3 m_1^2-m_2^2)\/\\sqrt{3}$. The $11\\to 12$ scattering if fully crossing invariant and indeed so is this picture. Physical processes in 2D lie on top of the blue solid lines and outside the red lines; in higher dimensions they fill in the interior of the regions delimited by the blue solid lines as one scans over physical scattering angles. 
Similar triangle for $12\\to 12$ scattering can be found in \\cite{Paper4}.}\n\t\\label{triangle}\n\\end{figure}\nThere, we see four disconnected regions: three non-compact parabola like curves related by a rotation symmetry and a round triangle in the middle. The three outer curves are the three physical regions associated to the three scattering channels. The one in the top, for instance, corresponds to the $s$-channel. (Each outer curve has a left and right components which are equivalent; they are related to a simple parity transformation.) \nThe $s$-channel outer curve start at $s=(m_1+m_2)^2$ as indicated by the red solid line. That corresponds to the minimal energy necessary to produce a particle of type $1$ and a particle of mass $2$ at rest. (Recall that $2$ is heavier than $1$.) Another important energy marked by the blue dashed line in the figure occurs at $s=(2m_1)^2$ which would correspond to the minimal energy necessary to produce two particle of type $1$ at rest. This is however \\textit{not} a physical energy for this process since physical energies are those for which we can produce \\textit{both} initial \\textit{and} final state. Nonetheless, the region between $s=4m_1^2$ and $s=(m_1+m_2)^2$ is very interesting because we know precisely what are the only possible {physical} states in that energy range: they can only be two particle states involving two particles of type $1$.~\\cite{Landau} The equation which reflects this is the so called \\textit{extended} unitarity relation which in this case reads\n\\begBvR{equation}\n2\\IM M_{11\\to 12}=\\rho_{11}^2 M_{11\\to 11} M_{11\\to 12}^*, \\qquad 4 m_1^2< s < (m_1+m_2)^2 \\la{extUnit1112}\n\\end{equation}\n\nHere, since we are focusing on the top curve (which is crossing equivalent to any of the other two) we can think of $M$ as a single function of $s$ with \n\\begin{eqnarray}\n&&t(s)=\\frac{1}{2} \\left(3\n m_1^2+m_2^2-s-\\sqrt{\\frac{\\left(s-4 m_1^2\\right) \\left(-2 m_2^2\n \\left(m_1^2+s\\right)+\\left(s-m_1^2\\right){}^2+m_2^4\\right)}{s}}\\right)\\label{t11to12}\\\\\n&&u(s)=\\frac{1}{2} \\left(3\n m_1^2+m_2^2-s+\\sqrt{\\frac{\\left(s-4 m_1^2\\right) \\left(-2 m_2^2\n \\left(m_1^2+s\\right)+\\left(s-m_1^2\\right){}^2+m_2^4\\right)}{s}}\\right)\\label{u11to12}\n\\end{eqnarray}\nAs a check, note that as $m_2 \\to m_1$ we find $u \\to 0$ and $t\\to 4m_1^2-s$ as expected for two dimensional elastic scattering of particles of equal mass. \n\nThe extended unitarity relation (\\ref{extUnit1112}) is of course part of a coupled system of equations when we consider all components at once. They can all be nicely packed into matrix form by defining \n\\begBvR{equation}\n\\mathbb{U}\\equiv 2\\IM \\mathbb{M}-\\mathbb{M}^\\dagger \\text{\\outline{$\\rho$}}\\, \\mathbb{M} \\,,\n\\label{unitarityfullsystem}\n\\end{equation}\nwhere\n\\begBvR{equation}\n\\!\\!\\!\\! \\mathbb{M}\\equiv \\begin{pmatrix}\nM_{11\\to 11} & M_{11\\to 12} \\\\\nM_{11\\to 12} & M_{12\\to 12}\n\\end{pmatrix}, \\,\\,\\,\\,\\,\n\\text{\\outline{$\\rho$}} \\equiv \\begin{pmatrix}\n\\rho_{11}^2=\\frac{\\theta\\left(s-4m_1^2\\right)}{2\\sqrt{s-4m_1^2}\\sqrt{s}} & 0 \\\\\n0 & \\rho_{12}^2=\\frac{\\theta\\left(s-(m_1 + m_2)^2\\right)}{2\\sqrt{s-(m_1 + m_2)^2}\\sqrt{s-(m_1 - m_2)^2}} \n\\end{pmatrix} \\la{rhoMatrix}\n\\end{equation}\nThen extended unitarity is the statement that $\\mathbb{U}=\\mathbf{0}$ for $s\\in [4m_1^2,(m_1+m_2)^2]$. 
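As a quick sanity check of the $11\to 12$ kinematics, the short \texttt{Python} snippet below (ours, not part of the original analysis; it only assumes \texttt{numpy}) evaluates the two branches (\ref{t11to12}) and (\ref{u11to12}) and verifies that they obey the constraints (\ref{Plus}) and (\ref{Times}) in the physical $s$-channel region, and that they reduce to $u\to 0$ and $t\to 4m_1^2-s$ in the degenerate-mass limit $m_2\to m_1$.
\begin{verbatim}
import numpy as np

m1 = 1.0

def t_u(s, m2):
    # two branches t(s), u(s) of eqs. (t11to12)-(u11to12)
    rad = np.sqrt((s - 4*m1**2)*((s - m1**2)**2 - 2*m2**2*(m1**2 + s) + m2**4)/s)
    half = 0.5*(3*m1**2 + m2**2 - s)
    return half - 0.5*rad, half + 0.5*rad

m2 = 1.5
s = np.linspace((m1 + m2)**2 + 0.01, 40.0, 25)      # physical s-channel energies
t, u = t_u(s, m2)
assert np.allclose(s + t + u, 3*m1**2 + m2**2)       # constraint (Plus)
assert np.allclose(s*t*u, m1**2*(m1**2 - m2**2)**2)  # constraint (Times)

t_eq, u_eq = t_u(s, m1 + 1e-8)                       # degenerate-mass limit m2 -> m1
assert np.allclose(u_eq, 0.0, atol=1e-3)
assert np.allclose(t_eq, 4*m1**2 - s, atol=1e-3)
print("11->12 kinematics checks passed")
\end{verbatim}
In the extended unitarity window $4m_1^2<s<(m_1+m_2)^2$ the radicand turns negative and the two branches move off the real axis.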
Above $s=(m_1+m_2)^2$ we are at physical energies and the extended unitarity relation is replaced by regular unitarity, which is now nothing but the statement that $\mathbb{U}$ is a positive semi-definite matrix $\mathbb{U} \succeq 0$ for $s>(m_1+m_2)^2$.\footnote{Strictly speaking we can impose $\mathbb{U}=\mathbf{0}$ for a while longer in the unitarity region, more precisely until the energy where we can produce two particles of type $2$ or three particles of type $1$. In practice, the bounds we will find will saturate unitarity so this will be automatic. Because of this, in all implementations, we will actually impose $\mathbb{U} \succeq 0$ even in the extended unitarity region, that is for any $s>4m_1^2$. This is very convenient as it renders the problem convex. }\n\n\n\begin{figure}[t]\n\t\centering \n\t\includegraphics[scale=0.5]{1112Poles.pdf}\n\t\vspace{-1.5cm}\n\t\caption{$t(s)$ (blue) and $u(s)$ (yellow) for $11\to 12$ scattering and $m_2=\tfrac{3}{2} m_1$. $u(s)$ and $t(s)$ are two branches of the same analytic function. In the extended unitarity region they are complex. As a function of $s$, all poles are located before the extended unitarity region. The grey horizontal dashed lines are equal to $m_1^2$ and $m_2^2$ and fix the position of the $t$-- and $u$--channel poles.}\n\t\label{poles1112}\n\end{figure}\n\n\n\n\begin{figure}[t]\n\t\centering \n\t\includegraphics[scale=0.5]{1212Poles.pdf}\n\t\vspace{-1.5cm}\n\t\caption{$t(s)$ (blue) and $u(s)=0$ (yellow) for $12\to 12$ forward scattering and $m_2=\tfrac{3}{2} m_1$. In the $s$-channel extended unitarity region sit $t$-channel poles (and vice-versa). The $s$-channel poles lie before the $s$-channel extended unitarity region. As in the previous figure, the grey horizontal dashed lines are equal to $m_1^2$ and $m_2^2$ and determine the position of the $t$-channel poles. }\n\t\label{poles1212}\n\end{figure}\n\nFinally we have poles. These correspond to the single particle exchanges when $s$, $t$ or $u$ is equal to either $m_1^2$ or $m_2^2$. The poles show up in the (rounded) triangle region in the Mandelstam triangle picture \ref{triangle} in the $11\to 12$ process as depicted in figure \ref{poles1112}. For $12\to 12$, we have $u=0$ and the two $t$-channel poles lie in the extended unitarity region. Note here the important difference between unitarity and extended unitarity. In the unitarity region the amplitudes describe physical probability amplitudes, are bounded and can thus never have poles. In the extended unitarity region they can in principle. And here they do, as we see in the figure. \n\nAll in all, we can summarize the analytic structure of our amplitudes with their cuts and poles by dispersion relations as usual. 
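To make the pole pattern just described concrete, the following snippet (again ours, with \texttt{numpy} assumed) locates the poles numerically for $m_2=\tfrac{3}{2}m_1$: for the forward $12\to 12$ amplitude the two $t$-channel poles land inside the $s$-channel extended unitarity window $[4m_1^2,(m_1+m_2)^2]$, while for $11\to 12$ every pole sits below $4m_1^2$.
\begin{verbatim}
import numpy as np

m1, m2 = 1.0, 1.5
window = (4*m1**2, (m1 + m2)**2)                 # extended unitarity region in s

# 12->12 forward: u = 0 and t(s) = 2 m1^2 + 2 m2^2 - s
t_poles = [2*m1**2 + 2*m2**2 - m1**2, 2*m1**2 + 2*m2**2 - m2**2]   # t = m1^2, m2^2
print("12->12 t-channel poles at s =", t_poles, " window =", window)
# -> s = 5.5 and 4.25, both inside [4, 6.25]; the s-channel poles s = 1, 2.25 lie below it

# 11->12: impose t = m^2 together with s+t+u = 3 m1^2 + m2^2 and s t u = m1^2 (m1^2-m2^2)^2
sig, prod = 3*m1**2 + m2**2, m1**2*(m1**2 - m2**2)**2
for mpole2 in (m1**2, m2**2):
    roots = np.roots([1.0, -(sig - mpole2), prod/mpole2])   # the two values of s (and u)
    print("11->12 pole with t =", mpole2, "at s =", np.sort(roots))
# -> all four roots, and the s-channel poles s = 1, 2.25, are below 4 m1^2 = 4
\end{verbatim}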
\nThese dispersion relations can be conveniently packaged into a simple matrix statement $\mathbb{A}=\mathbf{0}$ with\n\begBvR{equation}\n\mathbb{A}\equiv \begin{pmatrix}\n\mathcal{A}_{11\to11} & \mathcal{A}_{11\to 12} \\\n\mathcal{A}_{11\to 12} & \mathcal{A}_{12\to12}\n\end{pmatrix} \, \la{AA}\n\end{equation}\nand \n\begin{align}\n\mathcal{A}_{11\to11}(s)\equiv&M_{11\to11}(s)-M_{11\to11}^\infty+{\color{red}g_{111}^2}\left(\frac{1}{s{-}m_1^2}+\frac{1}{t(s){-}m_1^2}\right)+{\color{blue}g_{112}^2}\left(\frac{1}{s{-}m_2^2}+\frac{1}{t(s){-}m_2^2}\right)\nonumber\\\n&- \frac{1}{\pi}\int_{4m_1^2}^\infty \IM M_{11\to11}(z)\left(\frac{1}{z-s}+\frac{1}{z-t(s)}\right)dz\,,\label{m11to11disp} \\\n\mathcal{A}_{11\to12}(s)\equiv&\,M_{11\to 12}(s)-M_{11\to12}^\infty+{\color{red}g_{111}}{\color{blue}g_{112}}\left(\frac{1}{s-m_1^2}+\frac{1}{t(s)-m_1^2}+\frac{1}{u(s)-m_1^2}\right)\nonumber\\\n&-\frac{1}{\pi}\int_{4m_1^2}^\infty \IM M_{11\to12}(z)\left(\frac{1}{z-s}+\frac{1}{z-t(s)}+\frac{1}{z-u(s)}\right)dz\,,\label{m11to12disp}\n\end{align}\n\begin{align}\n\mathcal{A}_{12\to12}(s)\equiv&\,M_{12\to12}(s)-M_{12\to12}^\infty+{\color{blue}g_{112}^2}\left(\frac{1}{s-m_1^2}+\frac{1}{t(s)-m_1^2}\right)\nonumber\\\n&-\frac{1}{\pi}\int_{4m_1^2}^\infty \IM M_{12\to12}(z)\left(\frac{1}{z-s}+\frac{1}{z-t(s)}\right)dz\,.\n\label{m12to12disp}\n\end{align}\nWe hope that no confusion is created by the fact that $t(s)$ signifies different things depending on which equation we are in, since crossing is implemented differently for different components. In~(\ref{m11to11disp}) it is $t(s)=4m_1^2-s$; in (\ref{m11to12disp}) it is given by (\ref{t11to12}); and in (\ref{m12to12disp}) it is given by~$t(s)=2m_1^2+2m_2^2-s$. In what follows, it should always be clear from the context which~$t(s)$ we are talking about. \n\n\n\subsection{Multiple Component Dual Problem} \la{mDual}\n\nThe formulation of the dual problem for the multiple component scenario can be derived following the steps outlined in Sec.~\ref{sec2}.\nThere are, however, two practical obstacles: one is the complicated analytic structure of the $11\to12$ component, the other is the presence of the \emph{extended} unitarity region. \nIn this section we solve both problems and arrive at an elegant and efficient dual numerical setup.\n\nAs always, we start from the primal radial problem\n\begin{mdframed}[frametitle={Primal Radial Problem for Multiple Component},frametitlealignment=\centering,backgroundcolor=blue!6, leftmargin=0cm, rightmargin=0cm, topline=false,bottomline=false, leftline=false, rightline=false] \n\vspace{-0.4cm}\n\begin{align} &\underset{\text{in } {R^2,\mathbb{M}}}{\text{maximize}} && R^2\nonumber\\\n& \text{constr. 
by} && \n0=c_1 \\equiv \\text{Res}_{m_1^2}(M_{11\\to11})- R^2\\sin^2\\beta \\,, \\nonumber\\\\\n& && 0=c_2\\equiv \\text{Res}_{m_2^2}(M_{11\\to11})-R^2\\cos^2\\beta \\,,\\nonumber \\\\\n& && 0=c_3 \\equiv \\text{Res}_{m_1^2}(M_{11\\to12})-R^2\\sin\\beta\\cos\\beta\\,, \\nonumber\\\\ \n& && 0=c_4 \\equiv \\text{Res}_{m_1^2}(M_{12\\to12})-R^2\\cos^2\\beta \\,,\\nonumber\\\\\n& s > 4m_1^2 && \\mathbb{A}=0 \\qquad \\text{where $\\mathbb{A}$ is given in (\\ref{AA})} \\nonumber \\,, \\\\\n& s > 4m_1^2&& \\mathbb{U} \\succeq 0\\qquad \\text{where $\\mathbb{U}$ is given in (\\ref{unitarityfullsystem})}\\,.\n \\label{primal bootstrap 11to12}\n\\end{align}\n\\end{mdframed}\nIf not for the $c_i=0$ equality constraints related to the radial problem, this setup would fit~(\\ref{primal2}). \nNote also that the last constraint incorporate automatically unitarity and extended unitarity. Sometimes it is convenient to analyze it separately in the extended and regular unitarity regions corresponding to $s$ bigger\/smaller than $(m_1+m_2)^2$ respectively. \n\nWe start our path towards the dual problem with the usual Lagrangian starting point \n\\begBvR{equation}\n\\mathcal{L}{=}R^2 + \\sum_{i=1}^4 c_i \\nu _i +\n\\int_{4m_1^2}^\\infty {\\rm tr~}{(\\text{\\outline{w}} \\mathbb{A})}\\,ds\n+\\int_{4m_1^2}^\\infty{\\rm tr~}{(\\text{\\outline{$\\Lambda$}} \\mathbb{U})}\\,ds,\n\\label{fullsystlag}\n\\end{equation}\nwith $$\\text{\\outline{w}} =\\begin{pmatrix}\nw_1 & \\tfrac{1}{2}w_2\\\\\n\\tfrac{1}{2}w_2 & w_3\n\\end{pmatrix}$$ \nand $\\text{\\outline{$\\Lambda$}}$ semi-definite positive. Next we want to identify $\\text{\\outline{w}}$ as the discontinuities of full analytic functions $\\mathbb{W}$ such that the resulting lagrangian becomes manifestly local. This is still possible here but turns out to be more interesting than before because of the richer $11\\to 12$ kinematics reviewed in the previous section. The final result is \n\\begBvR{equation}\\mathbb{W} =\\begin{pmatrix}\nW_1 & \\tfrac{1}{2}W_2\\\\\n\\tfrac{1}{2}W_2 & W_3\n\\end{pmatrix} \\label{Wmat}\n\\end{equation}\nwith the dispersive representations of the three \\emph{dual scattering functions}\n\\begin{align}\n\\label{analyticWcitable}\nW_1(s)&=\\frac{1}{\\pi}\\int_{4m_1^2}^\\infty dz\\, \\IM W_1(z)\\left(\\frac{1}{z-s}-\\frac{1}{z-4m_1^2+s}\\right),\\\\\nW_2(s)&=\\frac{1}{\\pi}\\int_{4m_1^2}^\\infty dz\\,\\IM W_2(z)\\left(\\frac{1}{z-s}+\\frac{J_t(s)}{z-t(s)}+\\frac{J_u(s)}{z-u(s)}\\right), \\la{W2disp}\\\\\nW_3(s)&=\\frac{1}{\\pi}\\int_{4m_1^2}^\\infty dz\\, \\IM W_3(z)\\left(\\frac{1}{z-s}-\\frac{1}{z-(m_1+m_2)^2+s}\\right).\n\\end{align}\nNote that the first and last lines here are pretty much as before: they correspond to anti-crossing symmetric symmetric functionals $W_1$ and $W_3$. The middle line -- with its Jacobians $J_t=dt\/ds$ and $J_u=du\/ds$ from (\\ref{u11to12},\\ref{t11to12}) -- is more interesting and more subtle. We explain its origin in full detail in appendix \\ref{W2explanation}. 
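The anti-crossing symmetry of $W_1$ and $W_3$ is easy to verify directly, since the kernels in (\ref{analyticWcitable}) are odd under $s\to 4m_1^2-s$ and $s\to (m_1+m_2)^2-s$ respectively. The following sketch (ours; the trial $\IM W$ is arbitrary and \texttt{numpy} is assumed) checks $W_1(4m_1^2-s)=-W_1(s)$ and $W_3((m_1+m_2)^2-s)=-W_3(s)$ numerically below threshold.
\begin{verbatim}
import numpy as np

m1, m2 = 1.0, 1.5
z = np.linspace(4*m1**2, 200.0, 20000)        # integration grid above the 11 threshold
imW = np.exp(-0.5*(z - 6.0)**2)               # arbitrary positive trial Im W(z)

def W(s, cp):
    # dispersive ansatz W(s) = (1/pi) Int dz Im W(z) [1/(z-s) - 1/(z-cp+s)],
    # with cp = 4 m1^2 for W1 and cp = (m1+m2)^2 for W3
    kern = 1.0/(z - s) - 1.0/(z - cp + s)
    return np.trapz(imW*kern, z)/np.pi

for cp, s in [(4*m1**2, 1.3), ((m1 + m2)**2, 3.0)]:
    lhs, rhs = W(cp - s, cp), -W(s, cp)
    print(cp, s, lhs, rhs)                    # identical up to rounding:
    # the kernel itself is odd under s -> cp - s, so this holds for any trial Im W
\end{verbatim}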
\n\n\nThen we have the crucial relation required to render the Lagrangian local:\n\begin{eqnarray}\n\int_{4m_1^2}^\infty {\rm tr~}{(\text{\outline{w}}\,\mathbb{A})}\,ds &=&\int_{4m_1^2}^\infty \IM {\rm tr~}{(\mathbb{W}\, \mathbb{M})}\,ds+\pi\big(\underset{m_1^2}{\text{Res}}(M_{11\to11}) W_1(m_1^2)+\underset{m_2^2}{\text{Res}}(M_{11\to11}) W_1(m_2^2) \nonumber\\\n&&+\underset{m_1^2}{\text{Res}}(M_{11\to12}) W_2(m_1^2)+\underset{m_1^2}{\text{Res}}(M_{12\to12}) W_3(m_1^2) \big)\nonumber\n\end{eqnarray}\nOnce we plug this relation into our Lagrangian (\ref{fullsystlag}) the last line nicely combines with the first two terms there; these terms are the only terms where $R$, $\nu_i$ and the various residues appear.\footnote{{Recall that $R$, the residues and $M(s)$ for $s>4$ are our primal variables, while $\nu_i$ and $W_i(s)$ are our dual variables.}} Maximization with respect to the residues will relate the various functionals $W$ evaluated at the stable particle masses to the Lagrange multipliers $\nu_i$ as before, while maximization with respect to $R$ will lead to a linear constraint involving all these functionals which plays the important role of our normalization condition. It reads: \n\begBvR{equation}\n1+\pi(W_1(m_1^2) \sin^2\beta+W_1(m_2^2)\cos^2\beta+W_2(m_1^2)\sin\beta\cos\beta+W_3(m_1^2)\cos^2\beta)=0 \,.\n\label{RadialCondFull}\n\end{equation}\nAt this point we already got rid of the Lagrange multipliers, the radius and the residues; our (partially extremized) Lagrangian is now a functional of the real and imaginary parts of the amplitudes $\mathbb{M}$ above $4m_1^2$ and of the functionals $W_i$ also for $s>4m_1^2$. Our dual functional $d$ is therefore the maximization over the amplitudes $\mathbb{M}$ of \n\begBvR{equation}\nd( \mathbb{W},\text{\outline{$\Lambda$}})= \sup_{\mathbb{M}} \int_{4m_1^2}^\infty ds \Big( {\rm tr~}\!(\IM \mathbb{W}\, \mathbb{M})+{\rm tr~}{(\text{\outline{$\Lambda$}} \mathbb{U}(\mathbb{M}))}\Big)\n\label{LindaLagrangia}\n\end{equation}\nSince we are dealing with small $2\times 2$ matrices we found it convenient to go to components at this point and also to separate the last integral into its extended and regular unitarity contributions. \n\nFor example, using \n\begBvR{equation}\n\text{\outline{$\Lambda$}}=\begin{pmatrix}\n\lambda_1 & \tfrac{1}{2} \lambda_2\\\n\tfrac{1}{2}\lambda_2^* & \lambda_3\n\end{pmatrix}\succeq \mathbf{0}, \label{Lmat} \n\end{equation}\nand evaluating the equations of motion for $\RE M_{12\to12}$ and $\IM M_{12\to12}$ in the extended unitarity region we get\n\begBvR{equation}\n\RE W_3+2\lambda_3=0,\qquad \IM W_3=0.\nonumber\n\end{equation}\nThese two equations constrain the dual scattering function associated to the $12\to 12$ process to have a discontinuity starting at $(m_1+m_2)^2$. 
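The few lines of algebra behind these conditions can be reproduced with a computer-algebra sketch (ours; component names are ad hoc and \texttt{sympy} is assumed): differentiating the integrand of (\ref{LindaLagrangia}) with respect to $\RE M_{12\to12}$ and $\IM M_{12\to12}$, with only the $11$ channel open in the phase-space matrix (\ref{rhoMatrix}), gives back $\IM W_3=0$ and $\RE W_3+2\lambda_3=0$.
\begin{verbatim}
import sympy as sp

# real and imaginary parts of the amplitudes, dual functions and multipliers (names ours)
M1r, M1i, M2r, M2i, M3r, M3i = sp.symbols('M1r M1i M2r M2i M3r M3i', real=True)
W1r, W1i, W2r, W2i, W3r, W3i = sp.symbols('W1r W1i W2r W2i W3r W3i', real=True)
l1, l3, l2r, l2i, r11 = sp.symbols('lambda1 lambda3 lambda2r lambda2i rho11', real=True)

M = sp.Matrix([[M1r + sp.I*M1i, M2r + sp.I*M2i], [M2r + sp.I*M2i, M3r + sp.I*M3i]])
W = sp.Matrix([[W1r + sp.I*W1i, (W2r + sp.I*W2i)/2], [(W2r + sp.I*W2i)/2, W3r + sp.I*W3i]])
Lam = sp.Matrix([[l1, (l2r + sp.I*l2i)/2], [(l2r - sp.I*l2i)/2, l3]])
rho = sp.Matrix([[r11**2, 0], [0, 0]])        # extended unitarity: only the 11 channel is open

ImM = -sp.I*(M - M.conjugate())/2             # elementwise imaginary part of the symmetric M
U = 2*ImM - M.conjugate().T*rho*M             # eq. (unitarityfullsystem)
dens = sp.im(sp.expand((W*M).trace())) + sp.re(sp.expand((Lam*U).trace()))

print(sp.simplify(sp.diff(dens, M3r)))        # -> W3i             i.e.  Im W3 = 0
print(sp.simplify(sp.diff(dens, M3i)))        # -> W3r + 2*lambda3 i.e.  Re W3 + 2 lambda3 = 0
\end{verbatim}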
Moreover, the semidefinite-positiveness condition on $\\text{\\outline{$\\Lambda$}}$ \nimplies\\footnote{Second order variations show that the full positive semidefiniteness of $\\text{\\outline{$\\Lambda$}}$ is required for the critical $\\mathbb{M}_c$ to be a maximum.} that \n\\begBvR{equation}\n\\lambda_3(s)\\geq 0 \\qquad \\implies \\qquad \\RE W_3(s)\\leq0, \\qquad \\text{for } 4m_1^20$ in $t((m_1+m_2)^2)=(m_1-m_2)^20$ such that\n$\\sigma^N(x_0,y_0)=(x_0,y_0)$.\n\\end{rem}\n\n\\begin{rem}\nLet us suppose $(x_0,y_0)\\in X\\times X$ is such that $\\sigma^N(x_0,y_0)=(x_0,y_0)$ and $u\\in X$ an arbitrary element.\nConsider the element \n\\[\n((\\mathrm{Id} \\times \\sigma)( \\sigma \\times \\mathrm{Id}) (u,x_0,y_0)=(\\widetilde x_0,\\widetilde y_0,u'')\n\\]\ngraphically\n\\[\n \\xymatrix{\n u\\ar[rd]&x\\ar[ld]&y\\ar[d]\\\\\n \\widetilde x\\ar[d]&u'\\ar[rd]&y\\ar[ld]\\\\\n \\widetilde x&\\widetilde y&\\widetilde u'' \n }\n\\]\nthen $\\sigma^N(\\widetilde x_0,\\widetilde y_0)=(\\widetilde x_0,\\widetilde y_0)$.\n\\end{rem}\n\\begin{proof}\\[\n (\\sigma^N \\times id)(\\widetilde x_0,\\widetilde y_0,u'')=(\\sigma^N\\times id)(id\\times \\sigma)(\\sigma \\times id)(u,x_0,y_0)=\n \\]\n\\[\n (\\sigma^{N-1}\\times id)(\\sigma\\times id)(id\\times \\sigma)(\\sigma \\times id)(u,x_0,y_0)=\n\\]\nusing YBeq\n\\[\n (\\sigma^{N-1}\\times id)(id\\times \\sigma)(\\sigma \\times id)(id\\times \\sigma)(u,x_0,y_0)=\n\\]\n\nrepeating the procedure $N-1$ times leaves\n\\[\n(id\\times \\sigma)(\\sigma \\times id)(id\\times \\sigma^N)(u,x_0,y_0)=(id\\times \\sigma)(\\sigma \\times id )(u,x_0,y_0)=(\\widetilde x_0,\\widetilde y_0,u'')\n\\]\n\n\\end{proof}\n\n\n\n\\section{$2^{nd}$ application: Comparison with Hochschild cohomology}\n\n$B$ is a differential graded algebra, and on each degree $n$\nit is isomorphic to $A\\otimes (TV)_n\\otimes A$, where $V=\\oplus_{x\\in X}ke_x$.\nIn particular $B_n$\nis free as $A^e$-module. We \nhave {\\em for free} the existence of a comparison map\n\\[\n\\xymatrix@-2ex{\n\\cdots\\ar[r]&B_n\\ar[r]\\ar@{=}[d]&\\cdots\\ar[r]&B_2\\ar[r]^d\\ar@{=}[d]&B_1\\ar[r]^d\\ar@{=}[d]&\\ar@{=}[d]B_0\\\\\n\\cdots\\ar[r]&A'(TX)_n A\\ar@{=}[d] ^{\\cong}\\ar[r]&\\cdots\\ar[r] &\\oplus_{x,y\\in X}A'e_xe_y A\\ar@{=} ^{\\cong}[d]\\ar[r]^d&\\oplus_{x\\in X} A'e_x A\\ar[r]^d\\ar@{=}[d]&A' A\\ar@{=}[d] ^{\\cong}\\\\\n\\cdots\\ar[r]&A\\otimes V^{\\otimes n}\\otimes A\\ar[d]^{\\widetilde\\mathrm{Id}}\\ar[r]&\\cdots\\ar[r] &A\\otimes V^{\\otimes 2}\\otimes A\\ar[d]^{\\widetilde\\mathrm{Id}}\\ar[r]^{d_2}& A\\otimes V\\otimes A\\ar[d]^{\\widetilde\\mathrm{Id}}\\ar[r]^{d_1}&A\\otimes A\\ar[d]^{\\mathrm{Id}}\\ar[r]^m& A\\ar[d]^{\\mathrm{Id}}\\ar[r]&0\\\\\n\\cdots\\ar[r]&A\\otimes A^{\\otimes n}\\otimes A\\ar[r]& \\cdots\\ar[r] &A\\otimes A^{\\otimes 2}\\otimes A\\ar[r]^{b'}& A\\otimes A\\otimes A\\ar[r]^{b'}&A\\otimes A\\ar[r]^m& A\\ar[r]&0\\\\\n}\\]\n\n\\begin{coro}\nFor all $A$-bimodule $M$, there exists natural maps\n\n\\[\n\\widetilde\\mathrm{Id}_*: H^{YB}_\\bullet(X,M)\\to H_\\bullet(A,M)\n\\]\n\\[\n\\widetilde\\mathrm{Id}^*: H^\\bullet(A,M)\\to H_{YB}^\\bullet(X,M)\n\\]\nthat are the identity in degree zero and 1.\n\\end{coro}\n\n\n\nMoreover, one can choose an explicit map with extra properties. 
For that we recall some definitions: there is a set theoretical section to the canonical projection from the\nBraid group to the symmetric group\n\\[\n\\xymatrix{\n\\mathbb{B}_n\\ar@{->>}[r]& \\mathbb{S}_n\\ar@\/_\/@{..>}[l]\n}\n\\]\n\\[\n\\xymatrix{\nT_s:=\\sigma_{i_1}\\dots\\sigma_{i_k}\n&\n s=\\tau_{i_1}\\dots \\tau_{i_k} \\ar@{|->}[l]\n}\n\\]\nwhere \n\\begin{itemize}\n \\item $\\tau\\in S_n$ are transpositions of neighboring elements $i$ and $i+1$, \nso-called simple transpositions,\n \\item $\\sigma_i$ are the corresponding generators of $\\mathbb{B}_n$,\n \\item $\\tau_{i_{1}}\\dots \\tau_{i_{k}}$ is one of the shortest words representing $s$.\n\\end{itemize}\nThis inclusion factorizes trough \n\\[\n \\mathbb{S}_n\\hookrightarrow \\mathbb{B}_n^+\\hookrightarrow \\mathbb{B}_n\n \\]\n It is a set inclusion not preserving the monoid structure.\n \n\\begin{defi}\nThe permutation sets\n\\[\n \\mathrm{Sh}_{p_1,\\dots,p_k}:=\\left\\{ s\\in \\mathbb{S}_{p_1+\\dots+p_k}\/s(1)<\\dots}[r]&\n a_1\\otimes(x_1\\shuffle_{-\\sigma} \\cdots \\shuffle_{-\\sigma} x_n)\\otimes a_2}\n\\]\nis a chain map lifting the identify. Moreover, \n$\\widetilde\\mathrm{Id}:B\\to (A\\otimes TA\\otimes A,b')$ is a differential graded algebra\nmap, where in $TA$ the product is $\\shuffle_{-\\sigma}$, and\nin $A\\otimes TA\\otimes A$ the multiplicative structure\n is not the usual tensor product algebra, but the braided one.\n In particular, this map factors through\n $A\\otimes \\mathfrak{B}\\otimes A$, where $\\mathfrak{B}$ is the Nichols algebra\n associated to the braiding $\\sigma'(x\\otimes y)= - z\\otimes t$, where\n $x,y\\in X$ and $\\sigma(x,y)=(z,t)$. \n\\end{teo}\n\n\n\\begin{rem}The Nichols algebra\n$\\mathfrak{B}$ is the quotient of $TV$ by the ideal generated by (skew)primitives that\nare not in $V$, so the result above explains the good behavior\nof the ideals $invo$, $idS$,\nor in general the ideal generated by\nelements\nof the form\n $\\omega=\\sum_{i=0}^{N-1}e_{x_i}e_{y_i}$ where $\\sigma(x_i,y_i)=(x_{i+1},y_{i+1})$ and $\\sigma^N(x_0,y_0)=(x_0,y_0)$.\n It would be interesting to know the properties of $A\\otimes\\mathfrak{B}\\otimes A$\n as a differential object, since it appears to be a candidate of\n Koszul-type resolution for the semigroup algebra $A$\n (or similarly the group algebra $k[G_X]$).\n \\end{rem}\n\nThe rest of the paper is devoted to the proof of \\ref{teoid}. Most of the Lemmas\nare \"folklore\" but we include them for completeness. The interested reader can\nlook at \\cite{Lebed2} and references therein.\n\n\\begin{lem}\\label{AC monoid}\n Let $\\sigma$ be a braid in the braided (sub)category that contains two associative algebras $A$ and $C$, meaning there \n exists\n bijective functions \n \\[\n\\sigma_A:A\\otimes A\\to A\\otimes A,\\\n\\sigma_C:C\\otimes C\\to C\\otimes C,\\\n\\sigma_{C,A}:C\\otimes A\\to A\\otimes C\\]\nsuch that \n\\[\n\\sigma_*(1,-)=(-,1)\\hbox{ and } \\sigma_*(-,1)=(1,-) \\ \\hbox{ for } *\\in \\{A,C;C,A\\} \n\\]\n\\[\n \\sigma_{C,A}\\circ (1\\otimes m_A)=(m_A\\otimes 1)(1\\otimes \\sigma_{C,A})(\\sigma_{C,A}\\otimes 1)\n\\]\n and \n\\[\\sigma_{C,A}\\circ ( m_C\\otimes 1)=(1\\otimes m_C)(\\sigma_{C,A}\\otimes 1)(1\\otimes \\sigma_{C,A})\\]\nDiagrammatically\n\\\n\\xymatrix{\nC\\ar[d]&A\\ar[rd]^{\\!\\!\\!m_A}&&A\\ar[ld]\\\\\n\\ar[rrd]^{\\!\\!\\!\\!\\!\\!\\! 
\\sigma_{C,A}}&&A\\ar[lld]&\\\\\nA&&C&\n}\n\\xymatrix{\n\\\\\n\\\\\n&=^{[*]}&\\\\}\n\\xymatrix{\nC\\ar[rrd]^{\\!\\!\\!\\!\\sigma_{C,A}}&&A\\ar[lld]&A\\ar[d]\\\\\nA\\ar[d]&&C\\ar[rd]&A\\ar[ld]\\\\\nA\\ar[rd]&&A\\ar[ld]&C\\ar[d]\\\\\n&A&&C\n}\n\\]\nand\n\\[\n\\xymatrix{\nC\\ar[rd]^{\\ \\ m_C}&&C\\ar[ld]&A\\ar[d]\\\\\n&C\\ar[rrd]^{\\!\\!\\!\\!\\!\\!\\!\\sigma_{C,A}}&&A\\ar[lld]\\\\\n&A&&C\n}\n\\xymatrix{\n\\\\\n\\\\\n&=^{[**]}&\\\\}\n\\xymatrix{\nC\\ar[d]&C\\ar[rrd]&&A\\ar[lld]\\\\\nC\\ar[rd]&A\\ar[ld]&&C\\ar[d]\\\\\nA\\ar[d]&C\\ar[rd]&&C\\ar[ld]\\\\\nA&&C&\n}\n\\]\nAssume that they\nsatisfy the braid equation with any combination of $\\sigma_A,\\sigma_C$ or $\\sigma_{A,C}$.\nThen, $A\\otimes_\\sigma C=A\\otimes C$ with product defined by\n \\[\n(m_A\\otimes m_C)\\circ(\\mathrm{Id}_A\\otimes \\sigma_{C,A}\\otimes\\mathrm{Id}_C)\\colon\n(A\\otimes C)\\otimes (A\\otimes C)\\to A\\otimes C\n\\]\nis an associative algebra. In diagram:\n\\[\n\\xymatrix{\nA\\ar[d]&&C\\ar[rd]^{\\!\\!\\!\\sigma}&A\\ar[ld]&&C\\ar[d]\\\\\nA\\ar[rd]^{\\ \\ m_A}&&A\\ar[ld]&C\\ar[rd]^{\\ \\ m_C}&&C\\ar[ld]\\\\\n&A&&&C&\n}\\]\n \n\\end{lem}\n\\begin{proof}\n\n\n\nTake $m\\circ (1\\otimes m)((a_1\\otimes c_2)\\otimes ((a_2\\otimes c_2)\\otimes (a_3\\otimes c_3))$\n use $[*]$, associativity in $A$, associativity in $C$ then $[**]$ and the result follows.\n\\end{proof}\n\n\n\n\n\n\\begin{lem}\nLet $M$ be the monoid freely generated by $X$\nmodule the relation $xy=zt$ where $\\sigma(x,y)=(z,t)$, then,\n$\\sigma:X\\times X\\to X\\times X$ naturally extends to a braiding in $M$ and verifies \n\n\n\n\\[\n\\xymatrix{\nM\\ar[rd]^{\\ \\ \\ m}& &M\\ar[ld]&M\\ar[d]^\\mathrm{Id}\\\\\n &M\\ar[rrd]^\\sigma&&M\\ar[lld]\\\\\n&M&&M\n} \n\\xymatrix{\n\\\\\n\\\\\n&=&\\\\\n} \n\\xymatrix{\nM\\ar[d]^{\\mathrm{Id}} &M\\ar[rrd]^\\sigma&&M\\ar[lld]\\\\\n M\\ar[rd]^{\\!\\!\\sigma}&M\\ar[ld]&&M\\ar[d]\\\\\nM\\ar[d]&M\\ar[rd]^{\\ \\ \\ m}&&M\\ar[ld]\\\\\nM&&M} \n\\]\n\n\\[\n\\xymatrix{\nM\\ar[d]&M\\ar[rd] &&M\\ar[ld]_{m}\\\\\n M \\ar[rrd]^\\sigma&&M\\ar[lld]\\\\\n M&&M\n} \n\\xymatrix{\n\\\\\n\\\\\n&=&\\\\\n} \n\\xymatrix{\nM\\ar[rd]^{\\!\\!\\sigma} &M\\ar[ld]&&M\\ar[d]\\\\\n M\\ar[d]&M\\ar[rrd]^\\sigma&&M\\ar[ld]\\\\\nM\\ar[rd]^m&&M\\ar[ld]&M\\ar[d]^\\mathrm{Id}\\\\\n&M&&M} \n\\]\n\\end{lem}\n\n\\begin{proof}\n It is enough to prove that the extension mentioned before is well defined in the quotient. 
Inductively, it will be enough to\n see that\n $\\sigma(axyb,c)=\\sigma(aztb,c)$ and $\\sigma(c,axyb)=\\sigma(c,aztb)$ where \n$ \\sigma(x,y)=(z,t)$, and this follows\n immediately from the braid equation:\n\n A diagram for the first equation is the following: \n \\[\n\\xymatrix{\na\\ar[d]&x\\ar[rd]&y\\ar[ld]&b\\ar[rd]&c\\ar[ld]\\\\\n\\ar[d]&z\\ar[d]&t\\ar[rd]&\\ar[ld]&\\ar[d]\\\\\n\\ar[d]&\\ar[rd]&\\ar[ld]&\\ar[d]&\\ar[d]\\\\\n\\ar[rd]&\\ar[ld]&\\ar[d]&\\ar[d]&\\ar[d]\\\\\n&&\\alpha&\\beta&} \n\\xymatrix{\n\\\\\n\\\\\n&=&\\\\\n} \n\\xymatrix{\na\\ar[d]&x\\ar[d]&y\\ar[d]&b\\ar[rd]&c\\ar[ld]\\\\\n\\ar[d]&\\ar[d]&\\ar[rd]&\\ar[ld]&\\ar[d]\\\\\n\\ar[d]&\\ar[rd]&\\ar[ld]&\\ar[d]&\\ar[d]\\\\\n\\ar[rd]&\\ar[ld]&\\ar[rd]&\\ar[ld]&\\ar[d]\\\\\n&&\\alpha^*&\\beta^*&\n } \n\\]\n \n As $\\alpha\\beta=\\alpha^*\\beta^*$ the result follows.\n \n \n\\end{proof}\n\n\n\n\n\n\\begin{lem}\n\n$m\\circ\\sigma=m$, diagrammatically:\n\\[\n\\xymatrix{\nM\\ar[rrd]&&M\\ar[lld]\\\\\nM \\ar[rd]^{\\ \\ \\ m} &&M\\ar[ld]\\\\\n& M } \n\\xymatrix{\n\\\\\n\\\\\n&=&\\\\\n} \n\\xymatrix{\nM\\ar[rd]^{\\ \\ \\ m}&&M\\ar[ld]\\\\\n& M\\ar[d]^\\mathrm{Id}\\\\\n& M\n } \n\\]\n\n\\end{lem}\n\n\\begin{proof} Using successively that $m\\circ \\sigma_i=m$, we have: \n\\[m\\circ \\sigma(x_1\\dots x_n, y_1\\dots y_k)=m\\left((\\sigma_k\\dots \\sigma_1)\\dots(\\sigma_{n+k-1}\\dots \\sigma_n)_{(x_1\\dots x_ny_1\\dots y_k)}\\right)\\]\n\\[=m\\left((\\sigma_{k-1}\\dots \\sigma_1)\\dots(\\sigma_{n+k-1}\\dots \\sigma_n)_{(x_1\\dots x_ny_1\\dots y_k)}\\right)=\\dots\\newline\\]\n\\[\n=m(x_1\\dots x_n,y_1\\dots y_k) \n\\]\n\\end{proof}\n\n\\begin{coro}\nIf one considers $A=k[M]$,\nthen the algebra $A$ verifies all diagrams in previous lemmas.\n \\end{coro}\n\n\\begin{lem}\nIf $T=(TA, \\shuffle_\\sigma)$ there are bijective functions \n\\[\n\\sigma_{T,A}:=\\sigma|_{T\\otimes A}: T\\otimes A\\rightarrow A\\otimes T \n\\]\n\\[\n\\sigma_{A,T}:=\\sigma|_{A\\otimes T}: A\\otimes T\\rightarrow T\\otimes A \n\\]\nthat verifies the hypothesis of Lemma \\ref{AC monoid}, and the same for \n $(TA, \\shuffle_{-\\sigma})$.\n\\end{lem}\n\\begin{coro}\n $A\\otimes (TA, \\shuffle_{-\\sigma})\\otimes A$ is an algebra.\n\\end{coro}\n\n\\begin{proof}\n Use \\ref{AC monoid} twice and the result follows.\n\\end{proof}\n\n\\begin{coro}\\label{btp}\nTaking $A=k[M]$, then the standard resolution of $A$ as $A$-bimodule has a natural algebra structure\n defining the braided tensorial product as follows: \n\\[\nA\\otimes TA\\otimes A=\nA\\otimes_\\sigma(T^cA,\\shuffle_{-\\sigma})\\otimes_\\sigma A\\]\n\\end{coro}\nRecall the differential of the standard resolution\nis defined as\n$b':A^{\\otimes n+1}\\to A^{\\otimes n}$\n\\[ b'(a_0\\otimes\\dots\\otimes a_n)= \\sum_{i=0}^{n-1}(-1)^{i}a_0\\otimes \\dots\\otimes a_ia_{i+1}\\otimes\\dots\\otimes a_n\\]\nfor all $n\\geq 2$.\nIf $A$ is a commutative algebra then the Hochschild resolution\nis an algebra viewed as $\\oplus_{n\\geq 2}A^{\\otimes n}=A\\otimes TA\\otimes A$, with right and left \n$A$-bilinear extension of the shuffle product on $TA$, and $b'$ is a (super) derivation with\n respect to that product (see for instance Prop. 4.2.2 \\cite{L}). \nIn the braided-commutative case we have the analogous result:\n\\begin{lem}\n$b'$ is a derivation with respect to the product mentioned in Corollary \\ref{btp}.\n\\end{lem}\n\\begin{proof}\nRecall the commutative proof \nas in Prop. 4.2.2 \\cite{L}. 
\nDenote $*$ the product\n \\[\n(a_0\\otimes\\dots\\otimes a_{p+1} )*(b_0\\otimes\\dots\\otimes b_{q+1})=\na_0b_0\\otimes((a_1\\dots\\otimes a_{p} )\\shuffle (b_1\\otimes\\dots\\otimes b_{q}))\\otimes a_{p+1}b_{q+1}\n\\]\nSince $\\oplus_{n\\geq 2}A^{\\otimes n}=A\\otimes TA\\otimes A$\nis generated by $A\\otimes A$ and $1\\otimes TA\\otimes 1$, we check on generators.\nFor $a\\otimes b\\in A\\otimes A$, $b'(a\\otimes b)=0$, in particular, it satisfies Leibnitz\nrule for elements in $A\\otimes A$.\n Also, $b'$ is $A$-linear on the left, and right-linear on the right, so\n\\[\n b'\\big((a_0\\otimes a_{n+1})*(1\\otimes a_1\\otimes \\cdots \\otimes a_n\\otimes 1)\\big)=\nb'(a_0\\otimes a_1\\otimes\\cdots\\otimes a_n\\otimes a_{n+1})\n\\]\n\\[=\na_0b'(1\\otimes a_1\\otimes\\cdots\\otimes a_n\\otimes 1)a_{n+1}\n=(a_0\\otimes a_{n+1})*b'(1\\otimes a_1\\otimes\\cdots\\otimes a_n\\otimes 1)\n\\]\n\\[\n=0+(a_0\\otimes a_{n+1})*b'(1\\otimes a_1\\otimes\\cdots\\otimes a_n\\otimes 1)\n \\]\\[\n=b'(a_0\\otimes a_{n+1})*(1\\otimes a_1\\otimes\\cdots\\otimes a_n\\otimes 1)\n+(a_0\\otimes a_{n+1})*b'(1\\otimes a_1\\otimes\\cdots\\otimes a_n\\otimes 1)\n\\]\nNow consider $(1\\otimes a_1\\otimes \\dots\\otimes a_{p}\\otimes 1 )*(1\\otimes b_1\\otimes\\dots\\otimes b_{q}\\otimes 1)$,\nit is a sum\nof terms where two consecutive tensor terms can be of the form\n$(a_i,a_{i+1})$, or $(b_j,b_{j+1})$, or $(a_i,b_j)$ or $(b_j,a_i)$.\nWhen one computes $b'$, multiplication of two consecutive tensor factors will give,\nrespectively, terms of the form\n\\[ \\cdots\\otimes a_ia_{i+1}\\otimes \\cdots, \\ \\cdots\\otimes b_jb_{j+1}\\otimes\\cdots,\\ \\cdots\\otimes a_ib_j\\otimes\\cdots,\n\\cdots \\otimes b_ja_i\\otimes\\cdots\n\\]\nThe first type of terms will recover $b'((1\\otimes a_1\\otimes\\cdots\\otimes a_n\\otimes 1))*\n(1\\otimes b_1\\otimes\\cdots\\otimes b_q\\otimes 1)$ and the second type of terms will recover\n$\\pm (1\\otimes a_1\\otimes\\cdots\\otimes a_n\\otimes 1)*b'((1\\otimes b_1\\otimes\\cdots\\otimes b_q\\otimes 1))$.\nOn the other hand, the difference between the third and forth type of terms is just\na single trasposition so they have different signs, while $a_ib_j=b_ja_i$ because the \nalgebra is commutative, if one take the {\\em signed} shuffle then they cancel each other.\n\n\nIn the {\\em braided} shuffle product, the summands are indexed by the same set\nof shuffles, so we have the same type of terms, that is, when computing $b'$ of \na (signed) shuffle product, one may do the product of two elements in coming form the \nfirst factor, two elements of the second factor. \nor a mixed term. 
For the mixed terms, they will have the form\n\\[\n\\cdots\\otimes A_iB_j \\otimes\\cdots \\hbox{, or }\n\\cdots\\otimes \\sigma^1(A_i,B_j)\\sigma^2(A_i,B_j)\\otimes\\cdots\n \\]\nAs in the algebra $A$ we have $A_iB_j=\\sigma^1(A_i,B_j)\\sigma^2(A_i,B_j)$ then this \nterms will cancel leaving only the terms corresponding to \n $b'(1\\otimes a_1\\otimes\\cdots\\otimes a_p \\otimes 1)\\shuffle_{-\\sigma} (1\\otimes b_1\\otimes\\cdots\\otimes b_q\\otimes )$\nand $\\pm(1\\otimes a_1\\otimes\\cdots\\otimes a_p\\otimes 1 )\\shuffle_{-\\sigma} b'(1\\otimes b_1\\otimes\\cdots\\otimes b_q\\otimes 1)$\nrespectively.\n\n\n\n\\end{proof}\n\n\n\n\n\\begin{coro} There exists a comparison morphism \n$f:(B,d)\\to (A\\otimes TA\\otimes A,b')$ which is a differential graded algebra morphism, $f(d)=b'(f)$,\n simply defining it on $e_x$ ($x\\in X$)\n and verifying $f(x'-x)=b'(f(e_x))$.\n\\end{coro}\n\\begin{proof}\nDefine $f$ on $e_x$, extend $k$-linearly to $V$, multiplicatively to $TV$, and $A'$-$A$ linearly to\n$A'\\otimes TV\\otimes A=B$. In order to see that $f$ commutes with the differential, by $A'$-$A$-linearity\nit suffices to check on $TV$, but since $f$ is multiplicative on $TV$ it is enough to check on $V$, and by $k$-linearity we check on basis, that is, we only need $f(de_x)=b'f(e_x)$.\n\\end{proof}\n\n\n\\begin{coro}\n$f|_{TX}$ is the quantum symmetrizer map, and therefore \n$\\mathrm{Ker}(f)\\cap TX\\subset B$ defines the Nichol's ideal \nassociated to $-\\sigma$.\n\\end{coro}\n\\begin{proof}\n\\[\n f(e_{x_1}\\cdots e_{x_n})=f(e_{x_1})*\\cdots *f(e_{x_n})=(1\\otimes x_1\\otimes 1)*\\cdots *(1\\otimes x_n \\otimes 1)=1\\otimes(x_1\\shuffle \\cdots \\shuffle x_n)\\otimes 1\n\\]\n\n\\end{proof}\n\nThe previous corollary explains why $\\mathrm{Ker}(\\mathrm{Id}-\\sigma)\\subset B_2$\ngives a Hopf ideal and also ends the proof of Theorem \\ref{teoid}.\n\n\\begin{question}\n$Im(f)=A\\otimes \\mathfrak{B}\\otimes A$ is a resolution of $A$ as a $A$-bimodule? namely,\nis $(A\\otimes\\mathfrak{B}\\otimes A,d)$ acyclic?\n\\end{question}\nThis is the case for involutive solutions in characteristic zero, but\n also for $\\sigma=$flip in any characteristic, and $\\sigma=\\mathrm{Id}$ (notice this $\\mathrm{Id}$-case \ngives the Koszul resolution for the tensor algebra). If the answer to that question is yes,\nand $\\mathfrak{B}$ is finite dimensional then $A$ have necessarily finite global dimension. \nAnother interesting question is how to relate \ngenerators for the relations defining $\\mathfrak{B}$ and cohomology classes for $X$.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe competition between thermal fluctuations, pinning and\ninteractions between vortices leads to many novel physical\nphenomena in type-II high-temperature\nsuperconductors~\\cite{blatter}. Examples include the melting of the\nAbrikosov flux-lattice into an entangled vortex-liquid~\\cite{ns} and\nthe proposed existence of low temperature Bose-glass~\\cite{nv},\nvortex glass~\\cite{fisher_glass} and Bragg glass~\\cite{nat_sch}\nphases.\n\nMany experimental probes have been used to study these phenomena.\nThey include decoration, transport and magnetization measurements,\nneutron scattering, electron microscopy, electron holography and\nHall probe microscopes. More recently it has become possible to\nmanipulate single vortices, for example using magnetic force\nmicroscopy (MFM)~\\cite{Wadas92}. 
These can, in principle, measure\ndirectly many microscopic properties which have been up to now under\ndebate or assumed. The possibility of performing such experiments is\nsimilar in spirit to single molecule experiments on motor proteins,\nDNA, and RNA which have opened a window on phenomena inaccessible\nvia traditional bulk biochemistry experiments~\\cite{singlemol}.\n\nIn this spirit Olson-Reichhardt and Hastings~\\cite{ORH04} have\nproposed using MFM to wind two vortices around each other. Such an\nexperiment allows direct probing of the energetic barrier for two\nvortices to cut through each other. A high barrier for flux lines\ncrossing has important consequences for the dynamics of the\nentangled vortex phase.\n\nIn this paper we introduce and study several experiments in which a\nsingle vortex is depinned from extended defects using, for example,\nMFM. A brief account of the results can be found in\nRef.~[\\onlinecite{knp}]. First we consider a setup where MFM is used\nto pull an isolated vortex bound to common extended defects such as\na columnar pin, screw dislocation, or a twin plane in the presence\nof point disorder. Using a scaling argument, supported by numerical\nand rigorous analytical results, we derive the displacement of the\nvortex as a function of the force exerted by the tip of a magnetic\nforce microscope. We focus on the behavior near the depinning\ntransition and consider an arbitrary dimension $d$. We argue that\nthe transition can be characterized by a universal critical\nexponent, which depends {\\it only on the dimensionality of the\ndefect}. We show that unzipping experiments from a twin plane\ndirectly measures the free-energy fluctuations of a vortex in the\npresence of point disorder in $d=1+1$ dimensions. To the best of our\nknowledge, there is only one, indirect, measurement of this\nimportant quantity in Ref.~[\\onlinecite{Bolle}]. The form of the\nphase diagram in the force temperature plane is also analyzed in\ndifferent dimensions. Related results apply when a tilted magnetic\nfield is used to tear away vortex lines in the presence of point\ndisorder, which was not considered in earlier work on clean\nsystems.~\\cite{hatano}. Furthermore, we show that a possible\nexperimental application of the scaling argument is a direct measurement of the vortex line tension in an unzipping\nexperiment. As we will show in this paper, in a system of finite size, the displacement of the flux line at the transition\ndepends only on the critical force exerted on the flux line by the\nMFM tip, the flux line tension and the sample thickness. Thus\nunzipping experiments can provide valuable information on the\nmicroscopic properties of flux lines.\n\nNext we consider a setup where a single vortex is pulled out of a\nplane with many vortices. It is known that the large-scale behavior\nof vortices in a plane is characterized by a single dimensionless\nnumber, often referred to as the Luttinger liquid parameter due to\nan analogy with bosons in $d=1+1$ dimensions. We show that\nexperiments which unzip a single vortex out of the plane can be used\nto directly probe the Luttinger liquid parameter. We also discuss\nthe effects of disorder both within the defect and in the bulk with the same setup.\n\n\n\\section{Unzipping a vortex from a defect}\n\\label{Sec2}\n\n\\subsection{Review of clean case}\n\\label{sectioncleancase}\n\nWe begin by considering the unzipping of a vortex from an extended\ndefect in a clean sample. 
For a columnar defect the system is\ndepicted in Fig.~\\ref{fig:clean_unzip}. At the top of the sample the\nMFM applies a constant force ${\\bf f}$ which pulls the vortex away\nfrom the defect. We assume that at the bottom of the sample the\nvortex is always bound to the defect at a specific location. This\nassumption will not influence the results since below the unzipping\ntransition the flux line is unaffected by the boundary conditions at\nthe far end of the sample.\n\n\\begin{figure}[ht]\n\\center\n\\includegraphics[width=8cm]{Clean_unzip.eps}\n\\caption{A MFM tip applies a constant force ${\\bf f}$ which pulls the\nvortex away from the defect. The configuration of the vortex is\nrepresented by ${\\bf r}(\\tau)$. We assume throughout that the vortex\nis always bound to the defect at the bottom of the sample so that\n${\\bf r}(\\tau=0)=0$. }\\label{fig:clean_unzip}\n\\end{figure}\n\nIn the absence of external force, and for an external field aligned\nwith the defect, the appropriate energy for a given configuration\n${\\bf r}(\\tau)$ of the vortex is given by \\cite{blatter}:\n\\begin{equation}\nF_0\\!\\!=\\!\\!\\!\\int_0^L \\!\\!\\!\\!d \\tau \\left[ \\frac{\\gamma}{2}\n(\\partial_\\tau {\\bf r}(\\tau))^2+V({\\bf r}(\\tau)) \\right] .\n\\label{f0}\n\\end{equation}\nHere $\\gamma$ is the line tension and $L$ is the length of the\nsample along the $\\tau$ direction. The vector ${\\bf r}(\\tau)$\nrepresents the configuration of the vortex in the $d$ dimensional\nspace and $V({\\bf r})$ is a short-ranged attractive\npotential describing the $d'$-dimensional extended defect (in\nFig.~\\ref{fig:clean_unzip} $d=3$ and $d'=1$). The effect of the the\nexternal force, exerted by the MFM, can be incorporated by adding to\nthe free energy the contribution\n\\begin{equation}\nF_1=-{\\bf f}\\cdot {\\bf r(L)}=-\\int_0^L {\\bf f}\\cdot\n\\partial_\\tau \\bf r(\\tau)\\,d\\tau\n\\label{eq:unzipfe}\n\\end{equation}\nwhere we have used ${\\bf r}(\\tau=0)={\\bf 0}$. Here ${\\bf f}$\nstands for the local force exerted by the MFM in the transverse\ndirection. The free energy of a given configuration of the vortex\nis given by\n\\begin{equation}\nF({\\bf r})=F_0({\\bf r})+F_1({\\bf r}) \\;.\n\\end{equation}\nThe problem, as stated, has been studied first in the context of\nvortices in the presence of a tilted magnetic field~\\cite{hatano}\nand the results have been applied to the related problem of DNA\nunzipping~\\cite{Lubensky}.\n\nWe note that a similar setup can be\nachieved by using a transverse magnetic field instead of the\nexternal force. See Fig.~(\\ref{fig:clean_unzip_mag}).\n\\begin{figure}[ht]\n\\center\n\\includegraphics[width=8cm]{Clean_unzip_mag.eps}\n\\caption{Same as in Fig.~\\ref{fig:clean_unzip} but with a transverse\nmagnetic field instead of the MFM force tearing the flux line away from a\ndefect.}\\label{fig:clean_unzip_mag}\n\\end{figure}\nIndeed in the free energy (\\ref{eq:unzipfe}) the external force\ncouples to the slope of the flux line $\\partial_\\tau {\\bf r}$ in the\nsame way as the external magnetic field does~\\cite{hatano}. The only\ndifference between the two setups is that there are now equal and\nopposite forces acting on the top and bottom ends of the sample.\nHowever this difference is important only in short samples, where\nthe two ends of the flux line are not independent from each other.\n\nIn this paper we focus on the thermal average of distance of the tip\nof the vortex from the extended defect $\\langle x_m (\\tau=L)\\rangle$. 
This quantity\nis related to the thermal average of the length of the vortex that is unzipped from the\ndefect, $\\langle \\tau_m \\rangle$, through $\\langle x_m \\rangle = f\n\\langle \\tau_m \\rangle \/ \\gamma$. Here and throughout the paper\n$\\langle \\ldots \\rangle$ denotes a thermal average while an overbar\ndenotes an average over realizations of the disorder.\n\nAs stated above the universal behavior of $\\langle \\tau_m \\rangle$\n(or equivalently $\\langle x_m \\rangle$) within this disorder-free model have been\nderived previously. Here we sketch two approaches which will be\ngeneralized to samples with quenched disorder in the rest of the paper.\n\nIn the first approach, instead of directly summing over\nconfigurations of the vortex we perform the sum in two parts by\ndividing the vortex into bound and unbound segments. The unbound\nsegment starts at the point where the vortex departs the defect\nwithout ever returning to hit it again up to the top of the sample.\nUsing Eq.~(\\ref{eq:unzipfe}) it is straightforward to integrate over\nvortex configurations to obtain for the partition function of the\nunzipped segment\n\\begin{eqnarray}\nZ_u(\\tau_m)&=&\\int {\\cal D}{\\bf r}(\\tau)\\,\\mathrm\ne^{-\\beta\\int_0^{\\tau_m}\\! d \\tau \\left[ \\frac{\\gamma}{2}\n(\\partial_\\tau {\\bf r}(\\tau))^2-{\\bf\nf}\\cdot\\partial_\\tau{\\bf r(\\tau)}\\right]} \\nonumber \\\\\n&\\propto& \\mathrm e^{\\tau_m \\beta f^2 \/2\\gamma} \\;,\n\\label{free_unzip1}\n\\end{eqnarray}\nso that the free energy associated with this conditional partition function is\n\\begin{equation}\n{\\cal F}_u(\\tau_m)=-\\beta^{-1}\\ln Z_u(\\tau_m)= - f^2 \\tau_m\/\n2\\gamma \\;,\n\\label{free_unzip}\n\\end{equation}\nwhere $\\beta$ is the inverse temperature. Henceforth in this paper we\nset $\\beta=1$, which can be always achieved by appropriate rescaling\nof the energy units. Even though the above sum also runs over\nconfigurations which return to the defect it is easy to verify that\nthese configurations give rise to exponentially small correction in\nthe $\\tau_m$. Equation~(\\ref{free_unzip}) implies that as the force,\n${\\bf f}$, increases the free energy density of the unzipped portion\nof the vortex decreases. In contrast, the free energy density of the\nbound part is, clearly, independent of the force and given by ${\\cal\nF}_b(\\tau_m)=V_0(L-\\tau_m)$, where $V_0$ is the free energy per unit\nlength of a bound vortex and $L$ is the length of the sample along\nthe defect. The vortex will be unzipped when $f=f_c=\\sqrt{2 \\gamma\n|V_0|}$ such that the free-energy densities of the bound and\nunzipped states are equal.\n\nIn this representation the total free energy of the vortex is\ngiven by\n\\begin{equation}\n{\\cal F}(\\tau_m)={\\cal F}_u(\\tau_m)+{\\cal F}_b(\\tau_m)\\;.\n\\end{equation}\nThe unconstrained partition function of the model is given by\n\\begin{equation}\nZ=\\int_0^L d\\tau_m e^{-(f_c^2-f^2)\\tau_m\/2 \\gamma} \\;.\n\\end{equation}\nSince both results are independent of the dimensionality of the\ndefect (columnar or planar) near the transition one always finds\nin the $L \\to \\infty$ limit\n\\begin{equation}\n\\langle \\tau_m \\rangle \\sim \\frac{1}{(f_c-f)^\\nu} \\;,\n\\label{eq:clean}\n\\end{equation}\nwith $\\nu=1$. 
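As a concrete illustration of this result (ours, not taken from the original references), the snippet below evaluates $\langle \tau_m\rangle$ from the unconstrained partition function written above at finite sample thickness $L$: away from the transition it reproduces the $1/(f_c-f)$ growth, and exactly at $f=f_c$ it saturates at $L/2$, so the tip displacement there is $\langle x_m\rangle=f_c L/2\gamma$, fixed by the critical force, the line tension and the thickness alone, consistent with the statement made in the introduction.
\begin{verbatim}
import numpy as np

gamma, L, f_c = 1.0, 200.0, 1.0

def tau_m(f):
    # <tau_m> for Z = int_0^L dtau exp(-(f_c^2 - f^2) tau / (2 gamma)),  k_B T = 1
    eps = (f_c**2 - f**2)/(2.0*gamma)
    if abs(eps*L) < 1e-12:
        return L/2.0                                  # exactly at the transition
    return 1.0/eps - L/np.expm1(eps*L)                # = -d ln Z / d eps

for f in [0.5, 0.9, 0.99, 0.999, 1.0]:
    print(f, tau_m(f), f*tau_m(f)/gamma)              # force, <tau_m>, tip displacement <x_m>
\end{verbatim}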
Note, that it can easily be seen that approaching the\ntransition from above the average length of the vortex which is\nbound to the defect, $\\langle (L-\\tau_m) \\rangle$, diverges in the\nsame manner~\\cite{hatano}.\n\nAn alternative approach, which will also be useful in this paper,\nuses the mapping of the problem to the physics of a fictitious\nquantum particle \\cite{NelsonBook}. The contribution of the external\nfield ${\\bf f}$ to the free energy now manifests itself as an\nimaginary vector potential acting on the particle in $d-1$\ndimensions (with the $\\tau$ axis acting as a time direction).\nExplicitly, using the standard conversion from path-integrals (see\nRef.~[\\onlinecite{hatano}] for details) one finds that the problem\ncan be described in terms of a non-Hermitian Hamiltonian:\n\\begin{equation}\n{\\cal H}={1\\over 2\\gamma} {\\bf p}^2 -{i\\over \\gamma} {\\bf f} \\cdot\n{\\bf p} +V({\\bf r}) \\;,\n\\label{H}\n\\end{equation}\nwhere ${\\bf p}=\\frac{1}{i} \\vec{\\nabla}$ is the momentum operator.\nIn this language the vortex is bound to the defect as long as there\nis a bound state in the Hamiltonian. As mentioned above $i {\\bf f}$\nis equivalent to a constant imaginary vector potential. This analogy\nmakes it apparent that solutions of the non-Hermitian problem can be\nrelated to those of the Hermitian Hamiltonian (where one sets ${\\bf\nf}=0$) by an imaginary gauge-transformation~\\cite{hatano}. In\nparticular the left $\\psi^L_n({\\bf r},{\\bf f})$ and the right\n$\\psi^R_n({\\bf r},{\\bf f})$ eigenfunctions of the non-Hermitian\nproblem can be obtained from those of the Hermitian problem,\n$\\psi_n({\\bf r},{\\bf f}={\\bf 0})$, using\n\\begin{eqnarray}\n\\psi^R_n({\\bf r},{\\bf f}) &=& {\\cal U} \\psi_n({\\bf r},{\\bf\nf}={\\bf 0}) \\nonumber \\\\\n\\psi^L_n({\\bf r},{\\bf f}) &=& \\psi_n({\\bf r},{\\bf f}={\\bf 0})\n{\\cal U}^{-1} \\;, \\label{eq:gauge1}\n\\end{eqnarray}\nwhere\n\\begin{equation}\n{\\cal U}=\\mathrm e^{{\\bf f} \\cdot {\\bf r}}= \\mathrm e^{f x}\\; ;\n\\;\\;\\;\\;\\; {\\cal U}^{-1}=\\mathrm e^{-{\\bf f} \\cdot {\\bf r}}=\\mathrm\ne^{-f x} \\;.\n\\label{eq:gauge2}\n\\end{equation}\nThe universal behavior of $\\tau_m$ at the transition,\nEq.~(\\ref{eq:clean}), was obtained in Ref.~[\\onlinecite{hatano}] by noting\nthat\n\\begin{equation}\n\\langle\\tau_m\\rangle\\propto \\langle x_m\\rangle ={\\int x\\psi^R_n({\\bf\nr}) d{\\bf r}\\over \\int \\psi^R_n({\\bf r}) d{\\bf r}}\\propto{1\\over\n{f_c-f}}, \\label{tau_m}\n\\end{equation}\nwhere $\\psi^R_n({\\bf r})\\propto \\mathrm e^{-f_c r}$ at long $r$. We\nnote that the imaginary gauge transformation is justified only at\n$f1$ this result\nis modified and one can use known results for the free energy of a\ndirect path in a random media (see Eq.~(\\ref{eq:fefluct})), which\nleads to $\\delta {\\cal F}_{b}\\propto \\tau_m^{\\omega(d')}$, where\n$d'$ is the dimensionality of the defect \\cite{Karreview}. Finally,\nthere is a contribution to the free energy fluctuations from the\ninteraction of the unzipped part of the vortex with the bulk point\ndisorder, $\\delta {\\cal F}_{u}$. This contribution behaves similarly\nto $\\delta \\mathcal F_b$ with a different bulk exponent: $\\delta\n{\\cal F}_{u}\\propto \\tau_m^{\\omega(d)}$, where $d>d'$ is the\ndimensionality of the sample. 
Collecting all three terms gives:\n\\begin{equation}\n{\\cal F}(\\tau_m)=a(f_c-f)\\tau_m +\\delta\\mathcal F_{b}(\\tau_m)\n+\\delta\\mathcal F_u(\\tau_m)\\;.\n\\label{f}\n\\end{equation}\nAs discussed above, $\\omega(d)$ has been studied extensively in the\npast and it is well known that $\\omega(d')>\\omega(d)$ for any\n$d'0$ it is relevant, i. e. long excursions are\nenergetically costly.\n\nAs mentioned above in $d=3$ numerical simulations indicate that\n$\\zeta \\approx 0.6$ which gives for the planar defect ($d^\\prime=2$)\n$\\varepsilon \\approx 1\/8$ and for the columnar pin ($d^\\prime=1$)\n$\\varepsilon \\approx -1\/2$. Therefore, a weak twin plane is always relevant and\nthe vortex is always bound to it. However, a weak columnar pin is\nirrelevant and then one expects an unbinding transition. In $d=2$,\nwhere there can be no twin plane, $\\zeta=2\/3$ and the columnar\ndefect is found to be marginal. As argued in Ref.~[\\onlinecite{HwaNatter}]\nit is in fact marginally {\\it relevant}.\n\nTo summarize this discussion for columnar defects in 3 dimensional samples we\nexpect there is a critical strength of the bulk disorder beyond which the\nflux line spontaneously unzips even at zero force. In contrast for a planar defect in\n3 dimensions and for columnar defects in planar 2 dimensions superconductors we expect\nthat for any strength of the disorder there is a finite non-zero\nvalue of the force needed to unzip the vortex.\n\nNext, we will check the scaling (\\ref{nu}) and the anticipated\nlocalization \/ delocalization behavior for a number of different\nsituations using both analytical methods based on the replica trick\nand numerical simulations.\n\n\n\\subsection{Unzipping from a disordered columnar pin without excursions.}\n\\label{replica}\n\nWe start our quantitative analysis from the simplest situation,\nwhere one can get exact analytical results. Namely, we consider\nunzipping from a 1D pin with disorder localized only on the pin.\nAdditionally we neglect all excursions of the vortex line from the\npin except for the unzipped region. This problem then becomes\nidentical to DNA unzipping. In Ref.~[\\onlinecite{Lubensky}] the\nauthors analyzed this problem using a Fokker-Planck approach and\nindeed derived $\\nu=2$ near the unzipping transition. Here we show\nhow the same problem can be solved using the replica trick. The\nsolution was sketched in Ref.~[\\onlinecite{kp}]. Here we review the\nderivation for completeness and provide additional details.\n\nIgnoring excursions of the bound part of the flux line into the bulk\ngives the free energy a particularly simple form. We again write it\nas a sum over the contribution from the bound and unbound segments.\nThe bound segment contribution is given by ${\\cal\nF}_b(\\tau_m)=V_0(L-\\tau_m)+\\int_{\\tau_m}^L d \\tau_m' U(\\tau_m')$,\nwhere $V_0<0$ is the mean value of the attractive potential, $L$ is\nthe length of the columnar defect which is assumed to be very large,\nand $U(\\tau_m)$ is a random Gaussian uncorrelated potential with\nzero mean satisfying\n$\\overline{U(\\tau_{m_1})U(\\tau_{m_2})}=\\Delta\\delta(\\tau_{m_1}-\\tau_{m_2})$.\nThe contribution from the unzipped part takes the same form as in\nthe clean case (see Eq. (\\ref{free_unzip})). Collecting the two\nterms gives:\n\\begin{equation}\n\\mathcal F(\\tau_m)=\\epsilon \\tau_m+\\int_{\\tau_m}^L d \\tau_m'\nU(\\tau_m').\n\\label{fz}\n\\end{equation}\nAs before we work in the units, where $k_B T=1$. 
In the equation above, the deviation from the unzipping transition is measured by $\\epsilon=(f_c^2-f^2)\/2\\gamma$, where $f$ is the force applied to the end of the flux line and $f_c=\\sqrt{2\\gamma |V_0|}$ is the critical force. In Eq.~(\\ref{fz}) we dropped an unimportant constant additive term $V_0 L$.\n\nThe statistical properties of the unzipping transition can be obtained by considering $n$ replicas of the partition function $Z(\\tau)=\\exp(-\\mathcal F(\\tau))$~\\cite{edwards anderson}:\n\\begin{equation}\n\\overline{Z^n}=\\int_0^L d\\tau_1\\ldots\\int_0^L d\\tau_n\\,\\overline{\\exp\\left(-\\sum_{\\alpha=1}^n \\mathcal F(\\tau_\\alpha)\\right)},\n\\label{Z_n}\n\\end{equation}\nwhere the overbar denotes averaging over point disorder. The averaging procedure can easily be carried out for a positive integer $n$; we eventually wish to take the limit $n \\to 0$. First we order the coordinates $\\tau_j$ at which the $j$-th replica unbinds from the pin according to $0\\leq \\tau_1\\leq \\tau_{2}\\leq\\dots\\leq \\tau_n$. Then for $\\tau\\in[0,\\tau_1)$ there are no replicas bound to the columnar pin, for $\\tau \\in[\\tau_1,\\tau_2)$ there is one replica on the pin, and so on, until finally for $L \\geq \\tau\\geq \\tau_n$ all $n$ replicas are bound to the pin. Using this observation and explicitly averaging over the point disorder in Eq.~(\\ref{Z_n}) we arrive at:\n\\begin{equation}\n\\overline{Z^n}=n!\\int\\limits_0^L d\\tau_1\\ldots\\int\\limits_{\\tau_{n-1}}^L d\\tau_n\\,\\exp\\left[\\sum\\limits_{j=1}^n\\left(-\\epsilon \\tau_j+{\\Delta\\over 2}\\, j^2(\\tau_{j+1}-\\tau_{j})\\right)\\right],\n\\label{tuam2}\n\\end{equation}\nwhere we use the convention $\\tau_{n+1}=L$. The integral above is straightforward to evaluate in the $L \\to \\infty$ limit so that\n\\begin{eqnarray}\n&&\\overline{Z^n}=\\mathrm e^{n^2L\\Delta\/2}{1\\over \\epsilon_n^n}\\prod_{j=1}^n {1\\over 1-\\kappa_n j} \\nonumber\n\\\\\n&&=\\mathrm e^{n^2L\\Delta\/2}\\left({2\\over\\Delta}\\right)^n {\\Gamma(1+1\/\\kappa_n-n)\\over\\Gamma(1+1\/\\kappa_n)}\\;, \\phantom{XXX} \\label{eq:partunzip}\n\\end{eqnarray}\nwhere $\\epsilon_n=\\epsilon+\\Delta n$ and $\\kappa_n=\\Delta\/2\\epsilon_n$. The exponential prefactor is an unimportant overall contribution of the whole columnar pin, while the rest of the expression is the ($L$-independent) contribution from the unzipped region. Interestingly, the restricted partition functions for the unbinding problem from a hard wall (with no external force) and for the unzipping from a one-dimensional pin are identical, and thus there is an equivalence between the two problems (see Ref.~[\\onlinecite{kp}] for more details).\n\nThe disorder-averaged free energy is given by the limit $\\overline{\\mathcal F}=-\\lim_{n \\to 0}(\\overline{Z^n}-1)\/n$~[\\onlinecite{edwards anderson}]. With the help of Eq.~(\\ref{eq:partunzip}) one obtains\n\\begin{equation}\n\\overline{\\mathcal F}=\\ln (\\epsilon \\kappa) + \\Psi(1\/\\kappa),\n\\label{free_en}\n\\end{equation}\nwhere $\\Psi(x)$ is the digamma function and $\\kappa=\\Delta\/2\\epsilon$. The unzipping transition occurs at $\\epsilon=0$ or equivalently at $\\kappa \\to \\infty$. 
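For completeness, we record the short expansion behind Eq.~(\\ref{free_en}); this is a routine check and uses only quantities already defined above. Writing $\\prod_{j=1}^n(1-\\kappa_n j)=\\kappa_n^n\\,\\Gamma(1\/\\kappa_n)\/\\Gamma(1\/\\kappa_n-n)$ and noting that $\\epsilon_n\\kappa_n=\\Delta\/2$ exactly, the first line of Eq.~(\\ref{eq:partunzip}) gives\n\\begin{equation}\n\\ln\\overline{Z^n}={n^2L\\Delta\\over 2}-n\\ln{\\Delta\\over 2}+\\ln{\\Gamma(1\/\\kappa_n-n)\\over\\Gamma(1\/\\kappa_n)}\\;.\n\\end{equation}\nExpanding to first order in $n$ (the term $\\propto n^2 L$ and the explicit $n$-dependence of $\\kappa_n$ do not contribute at this order) yields $\\overline{Z^n}\\approx 1-n\\left[\\ln(\\Delta\/2)+\\Psi(1\/\\kappa)\\right]$, which reproduces Eq.~(\\ref{free_en}) upon using $\\Delta\/2=\\epsilon\\kappa$. 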
The expression\n(\\ref{free_en}) is identical to the one found in\nRef.~[\\onlinecite{oper}] using a Fokker-Planck equation approach,\nsupporting the validity of the analytic continuation in $n$ for this\nparticular application of the replica calculation.\n\nIt is easy to see that this free energy yields\n\\begin{equation}\n\\overline{\\langle \\tau_m\\rangle}={\\partial \\overline{\\mathcal F}\\over\n\\partial\\epsilon}={1 \\over \\kappa\\epsilon}\\Psi^{(1)}(1\/\\kappa),\n\\label{zav}\n\\end{equation}\nwhere $\\Psi^{(n)}(x)$ stands for the $n$-th derivative of the\ndigamma function. The expression above predicts a crossover from\n$\\overline{\\langle \\tau_m\\rangle}\\approx 1\/\\epsilon$ for $\\kappa\\ll\n1$ (far from the transition) to $\\overline{\\langle\n\\tau_m\\rangle}\\approx\\kappa\/\\epsilon=\\Delta\/\\epsilon^2$ for\n$\\kappa\\gg 1$ (close to the transition) similarly to the unzipping\nfrom the wall problem analyzed above. Also, it is easy to check that\n\\begin{equation}\nw=\\overline{\\langle \\tau_m^2 \\rangle - \\langle \\tau_m\n\\rangle^2}={\\partial^2 \\overline{\\mathcal F}\\over \\partial\\epsilon^2}=-{1 \\over\n(\\kappa\\epsilon)^2}\\Psi^{(2)}(1\/\\kappa). \\label{fav}\n\\end{equation}\nHere there is a crossover from $w \\approx 1\/\\epsilon^2$ for $\\kappa\n\\ll 1$ to $w \\approx 2 \\kappa\/\\epsilon^2=\\Delta\/\\epsilon^3$ for\n$\\kappa\\gg 1$. As has been noted in the context of DNA unzipping\n\\cite{Lubensky} $\\sqrt{w}\/\\overline{\\langle \\tau_m\\rangle}$ changes\nfrom being of order unity for the weakly disordered $\\kappa \\ll 1$ case to\n$\\sim \\epsilon^{1\/2}$ for $\\kappa \\gg 1$. Thus for $\\kappa \\gg 1$,\nclose to the unzipping transition, thermal fluctuations become\nnegligible and one can work in the zero temperature limit.\n\nThe simplicity of the problem also allows finding the higher moments\nof the distribution. Here we evaluate the second moment, which gives\nthe width of the distribution of $\\overline{\\langle \\tau_m\\rangle}$\ndue to different disorder realizations. Note that since the order of\naveraging over thermal fluctuations and disorder is important this\nquantity can not be extracted directly from Eq. (\\ref{fav}). To\nproceed we consider the generating function, ${\\cal W}_n(\\epsilon_j)$ defined by\n\\begin{equation}\n{\\cal W}_n(\\epsilon_j)=\\int\\limits_{0}^L\nd\\tau_1\\ldots\\int\\limits_{\\tau_{n-1}}^L d\\tau_n\\,\\mathrm\ne^{-\\sum\\limits_{j=1}^n \\epsilon_j\\tau_j+\\Delta\/2\nj^2(\\tau_{j+1}-\\tau_j)}\\!\\!. \\nonumber \\label{zm1}\n\\end{equation}\nThe second (and similarly the higher) moments can be found by\ndifferentiating ${\\cal W}_n$ with respect to $\\epsilon_j$:\n\\begin{equation}\n\\overline{\\langle \\tau_m^2\\rangle}=\\lim_{n\\to 0} \\left. {1\\over\n{\\cal W}_n(\\epsilon_j)}\\,{1\\over n}\\sum_{j=1}^n {\\partial^2 {\\cal\nW}_n(\\epsilon_j)\\over\\partial\n\\epsilon_j^2}\\right|_{\\epsilon_j=\\epsilon}. 
\\label{zm3}\n\\end{equation}\nUpon evaluating the integral, we find\n\\begin{equation}\n{\\cal W}_n(\\epsilon_j)=\\prod_{j=1}^n {1\\over \\sum_{k=1}^j\\epsilon_k\\,-\\,\\Delta j^2\/2} \\label{zm2}\n\\end{equation}\nand correspondingly\n\\begin{equation}\n\\overline {\\langle \\tau_m^2\\rangle}={1\\over \\epsilon^2}\\lim_{n\\to 0}{1\\over n}\\sum_{j=1}^n {2\\over 1-\\kappa j}\\sum_{k=j}^n {1\\over k (1-\\kappa k)}.\n\\end{equation}\nThis double sum can be calculated using a trick similar to the one described in Ref.~[\\onlinecite{Kardar}]:\n\\begin{eqnarray}\n\\overline{\\langle \\tau_m^2\\rangle}&=&{2\\kappa^2\\over \\epsilon^2}\\int\\!\\!\\int\\limits_{x>y>0} dx\\, dy\\, {1\\over \\mathrm e^{\\kappa x}-1}\\,{y\\,\\mathrm e^{-y}\\over \\mathrm e^{\\kappa y }-1}\\left[ \\mathrm e^{\\kappa y}+\\mathrm e^{2y}\\mathrm e^{\\kappa x-x}\\right]\\nonumber\\\\\n&-&{4\\over \\kappa \\epsilon^2}\\Psi^{(1)}(1\/\\kappa)\\left(C+\\Psi(1\/\\kappa)\\right),\n\\label{z2}\n\\end{eqnarray}\nwhere $C\\approx 0.577$ is Euler's constant. In the limit of weak disorder or high temperature, $\\kappa\\ll 1$, not surprisingly we get $\\overline{\\langle \\tau_m^2\\rangle }\\approx 2\/ \\epsilon^2$, which agrees with the Poissonian statistics of $\\tau_m$ with an average given by $\\overline{\\langle \\tau_m \\rangle}=1\/\\epsilon$. In the opposite limit $\\kappa\\gg 1$ one finds $\\overline{\\langle \\tau_m^2\\rangle }=4\\kappa^2\/ \\epsilon^2$. Note that $\\overline{\\langle \\tau_m\\rangle}=\\kappa\/\\epsilon$; thus the relative width of the distribution ($\\delta \\tau_m\/\\overline{\\langle \\tau_m\\rangle}$), defined as the ratio of the standard deviation of the unzipping length $\\tau_m$ to its mean, is larger by a factor of $\\sqrt{3}$ than that in the high temperature regime. The distribution thus becomes super-Poissonian at large $\\kappa$. In fact, in the limit $\\kappa\\to\\infty$ one can derive the full distribution function $P_{\\kappa\\to\\infty}(\\tau_m)$ using extreme value statistics~\\cite{Lubensky, ledoussal}:\n\\begin{equation}\n{\\cal P}_{\\kappa\\to\\infty}(\\tau_m)\\approx {\\epsilon\/ \\kappa}\\, G(\\tau_m\\,\\epsilon\/\\kappa)\n\\end{equation}\nwith\n\\begin{equation}\nG(x)={1\\over\\sqrt{\\pi x}}\\,\\mathrm e^{-x\/4}-{1\\over 2}{\\rm erfc}(\\sqrt{x}\/2),\n\\end{equation}\nwhere ${\\rm erfc}(x)$ is the complementary error function. It is easy to check that this distribution indeed reproduces the correct expressions for the mean and the variance. We emphasize that while the thermal fluctuations of the unzipping length become negligible near the transition, the fluctuations due to different realizations of point disorder are enhanced and lead to a wider-than-Poissonian distribution of $\\tau_m$.\n\nTo check these results and uncover subtleties that might arise in experiments, we performed direct numerical simulations of the partition function of the free energy (\\ref{fz}). For this purpose we considered a discrete version of the problem where the partition function is\n\\begin{equation}\nZ=\\sum_l \\mathrm e^{-\\epsilon m_l+\\sum_{l^\\prime=1}^l U(m_{l^\\prime})}.\n\\end{equation}\nHere $U(m_l)$ is the random potential uniformly distributed in the interval $[-U_0,U_0]$ so that the disorder variance is $\\Delta=\\overline{U^2(m_l)}=U_0^2\/3$. 
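As an illustration of how such a simulation can be organized, a minimal sketch in Python is given below; it is only meant to indicate the structure of the calculation, and the system size and number of disorder realizations are placeholder values rather than the ones used for the figures (the defaults anticipate the parameter choices specified in the next paragraph). For each realization of $U$ it evaluates the Boltzmann weights of the discrete partition function above (with $k_B T=1$), computes the thermal averages $\\langle\\tau_m\\rangle$ and $\\langle\\tau_m^2\\rangle$, and then averages over realizations to obtain $\\overline{\\langle\\tau_m\\rangle}$ and the relative width $\\delta\\tau_m\/\\overline{\\langle\\tau_m\\rangle}$.\n\\begin{verbatim}\nimport numpy as np\n\ndef relative_width(eps=0.00232, U0=0.3, L=200000, n_real=200, seed=0):\n    # Disorder-averaged <tau_m> and delta tau_m \/ <tau_m> for the\n    # discrete model above (a sketch, not the production code).\n    rng = np.random.default_rng(seed)\n    m = np.arange(L + 1, dtype=float)\n    t1, t2 = [], []\n    for _ in range(n_real):\n        U = rng.uniform(-U0, U0, size=L)\n        # free energy of unzipping length m: eps*m - sum_{l<=m} U(l)\n        F = eps * m - np.concatenate(([0.0], np.cumsum(U)))\n        w = np.exp(-(F - F.min()))   # Boltzmann weights (shifted)\n        w \/= w.sum()\n        t1.append(np.dot(m, w))      # thermal average <tau_m>\n        t2.append(np.dot(m * m, w))  # thermal average <tau_m^2>\n    t1, t2 = np.array(t1), np.array(t2)\n    avg = t1.mean()                  # disorder average of <tau_m>\n    width = np.sqrt(t2.mean() - avg**2)\n    return avg, width \/ avg\n\nprint(relative_width())\n\\end{verbatim}\nAs emphasized below, $L$ must be taken much larger than $\\overline{\\langle\\tau_m\\rangle}$ for the width of the distribution to approach its thermodynamic-limit value. 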
For the simulations we choose\n$\\epsilon=\\ln(1.2)-0.18\\approx 0.00232$ and $U_0=0.3$, which gives\n$\\Delta=0.03$, $\\kappa\\approx 6.46$ and according to both\nEq.~(\\ref{zav}) and numerical simulations $\\overline{\\langle\n\\tau_m\\rangle}\\approx 2860$. Then we computed $\\delta\n\\tau_m\/\\overline{\\langle \\tau_m\\rangle}$ using both Eq.~(\\ref{z2})\nand performing numerical simulations. For the chosen parameters the\nequation~(\\ref{z2}) gives $\\delta \\tau_m\/\\overline{\\langle\n\\tau_m\\rangle}\\approx 1.68$, while the numerical simulations yield\n$\\delta \\tau_m\/\\overline{\\langle \\tau_m\\rangle}\\approx 1.67$.\nClearly the results are very close to each other and the small\ndiscrepancy can be attributed to the discretization error. In\nFig.~\\ref{fig_var} we plot dependence of\n$\\delta\\tau_m\/\\overline{\\langle \\tau_m\\rangle}$ vs. system size.\n\\begin{figure}[h]\n\\center\n\\includegraphics[width=9cm]{distrib1d6.eps}\n\\caption{Dependence of the relative width of the distribution\n$\\delta \\tau_m\/\\overline{\\langle \\tau_m\\rangle}$ on the system size.\nSymbols correspond to the actual data, the solid line is the guide\nto the eye, and the dashed line corresponds to the replica result in\nthe thermodynamic limit.}\\label{fig_var}\n\\end{figure}\nIt is obvious from the figure that in the thermodynamic limit\n$L\\to\\infty$ the replica result is in excellent agreement with\nnumerical simulations. We mention that numerical simulations of\n$\\delta \\tau_m$ show very strong finite size effects. Therefore one\nhas to go to very large $L\\gtrsim 50 \\overline{\\langle\n\\tau_m\\rangle}$ in order to approach the thermodynamic limit for the\nwidth of the distribution.\n\nDepending on the system the quantity $\\overline{\\langle\n\\tau_m^2\\rangle}$ is not always experimentally accessible. For\nexample, in the unzipping experiments it is easier to measure\nthermal average, $\\langle \\tau_m\\rangle$, in each experimental run.\nWe note that this quantity has sample to sample fluctuations only\ndue to the presence of disorder. Then the variance of the\ndistribution will be characterized by $\\overline{\\langle\n\\tau_m\\rangle^2}$. The difference between the two expectation values\nis given by $w$ found in Eq.~(\\ref{fav}). Defining $(\\delta\n\\tau_m^{T})^2=\\overline{\\langle \\tau_m\\rangle^2}-\\overline{\\langle\n\\tau_m\\rangle}^{\\,2}$ and using Eqs.~(\\ref{z2}) and (\\ref{fav}) we\nfind that $\\delta \\tau_m^T\/\\overline{\\langle \\tau_m\\rangle}\\approx\n\\sqrt{\\kappa\/2}$ in the weak disorder limit ($\\kappa\\ll 1$) and\n$\\delta \\tau_m^T\/\\overline{\\langle \\tau_m\\rangle}\\approx\n\\sqrt{3}-1\/(\\sqrt{3}\\kappa)$ in the opposite limit $\\kappa\\gg 1$. We\nplot both $\\delta \\tau_m^T$ and $\\delta \\tau_m$ versus the disorder\nparameter $\\kappa$ in Fig.~\\ref{fig_dz}.\n\\begin{figure}[h]\n\\center\n\\includegraphics[width=9cm]{width.eps}\n\\caption{Dependence of the relative width of the\ndistribution on the disorder parameter $\\kappa$. The two curves\ncorrespond to different averaging over temperature and disorder (see\ntext for details). 
The horizontal line at $\\sqrt{3}$ denotes the\nasymptotic value of both $\\delta \\tau_m$ and $\\delta \\tau_m^T$ at\n$\\kappa\\to\\infty$}\\label{fig_dz}\n\\end{figure}\nThe same issue of importance of the order of thermal and disorder\naveraging appears in the calculation of the higher moments of\n$\\tau_m$, becoming irrelevant only in the limit ($\\kappa\\to\\infty$),\nwhich effectively corresponds to the zero temperature case.\n\nBefore concluding this section let us make a few remarks about the\nrebinding transition, i.e., the rezipping that occurs with decreasing force. One can consider a similar setup with a lower\nend of the flux line fixed at the bottom of the columnar pin and the\ntop end is pulled away from the pin with a force $f$. However, now\nwe will be interested in $f>f_c$. Then clearly most of the flux line\nwill be unzipped from the pin except for a portion near the bottom\nend. If $f$ is very large, the length of the bound segment $\\tilde\n\\tau_m$ near the sample boundary is small. However as $f$ decreases and approaches $f_c$ from\nabove, the length of this segment increases and finally diverges at\nthe transition. This rebinding transition can be described in a\nsimilar spirit to the unbinding. For example instead of the free\nenergy (\\ref{fz}) one has to deal with\n\\begin{equation}\n\\mathcal F(\\tilde \\tau_m)=|\\epsilon|\n\\tilde\\tau_m+\\int_{0}^{\\tilde\\tau_m} d \\tau_m' U(\\tau_m').\n\\label{fzr}\n\\end{equation}\nAs we already noted the free energies (\\ref{fz}) and (\\ref{fzr}) are\nequivalent up to an unimportant constant equal to the total disorder\npotential of the pin: $\\int_0^L d\\tau_m' U(\\tau_m')$. We conclude that the unbinding and rebinding\ntransitions for a single flux line on a disordered columnar pin are\nidentical. In other words, statistical properties of $\\tau_m$ for a\ngiven $f=f_c-\\delta f$ are identical to those of $\\tilde\\tau_m$ for\n$f=f_c+\\delta f$.\n\n\n\n\\subsection{Unzipping from a planar defect without excursions.}\n\\label{replica_2D}\n\nWe now generalize the ideas of the previous section to the more\ncomplicated problem of unzipping of a single flux line from a\ndisordered twin plane. As before we ignore excursions out of the\nplane for the bound part of the flux line. Let us consider the\nrebinding transition first. That is we assume that $f$ is slightly\ngreater than $f_c$ and we study the statistics of the\nbound part of the flux line. We again assume that the flux line is\npinned at the bottom of the plane ($\\tau=0$) and unbinds for $\\tau$ larger than some\n$\\tilde\\tau_m$.\n\nThe point disorder potential now depends on the two coordinates\n$\\tau$ and $z$ spanning the twin plane. Using Eq. (\\ref{free_unzip}) the\npartition function reads:\n\\begin{eqnarray}\n&&Z=\\int_0^L d\\tilde\\tau_m \\int Dz(\\tau^\\prime)\n\\exp\\biggl[-{f^2\\over\n2\\gamma}\\tilde\\tau_m-V\\tilde\\tau_m\\nonumber\\\\\n&&~~~-\\beta\\int_0^{\\tilde\\tau_m} d\\tau^\\prime \\left({\\gamma\\over\n2}\\left({dz\\over d\\tau^\\prime}\\right)^2+\n\\mu(\\tau^\\prime,z^\\prime)\\right)\\Biggr],\n\\end{eqnarray}\nwhere $V<0$ is the mean attractive potential of the twin plane and\nwe have dropped the unimportant $L$-dependent factors. As before, we\nassume a Gaussian random noise with zero mean and\n\\begin{equation}\n\\overline{\\mu(\\tau_1,z_1)\\mu(\\tau_2,z_2)}=\\sigma\n\\delta(\\tau_1-\\tau_2)\\delta(z_1-z_2).\n\\end{equation}\nWe also introduce $\\epsilon=-f^2\/(2\\gamma)-V$. Note that for the\nrebinding transition $\\epsilon<0$. 
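(In analogy with the columnar-pin case, writing $V=-|V|$ and defining the critical force of the clean plane through $f_c=\\sqrt{2\\gamma|V|}$, this is simply $\\epsilon=(f_c^2-f^2)\/2\\gamma$, so $\\epsilon$ again measures the distance from the transition and is indeed negative for $f>f_c$.) 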
After replicating the partition function and averaging over point disorder we find\n\\begin{eqnarray}\n&&\\overline{Z^n}=n!\\int\\limits_{0}^L d\\tilde\\tau_n\\int\\limits_{\\tilde\\tau_n}^{L}d\\tilde\\tau_{n-1}\\ldots \\int\\limits_{\\tilde\\tau_2}^L d\\tilde\\tau_1\\int Dz_1(\\tau_1^\\prime)\\dots Dz_n(\\tau_n^\\prime)\\nonumber\\\\\n&&~~~~~~~\\exp\\left[\\sum_{\\alpha=1}^n\\left( \\epsilon\\tilde\\tau_\\alpha+\\int\\limits_{\\tilde\\tau_{\\alpha+1}}^{\\tilde\\tau_{\\alpha}} d\\tau_\\alpha^\\prime\\, \\mathcal L_{\\alpha}[z_1(\\tau_1^\\prime),\\ldots, z_\\alpha(\\tau_\\alpha^\\prime)]\\right)\\right],\n\\end{eqnarray}\nwhere we define $\\tilde\\tau_{n+1}\\equiv 0$ and $\\mathcal L_\\alpha$ is the Euclidean Lagrangian corresponding to the Hamiltonian ($\\mathcal H_\\alpha$) of $\\alpha$ interacting particles~\\cite{Kardar}:\n\\begin{equation}\n\\mathcal H_\\alpha=-{\\sigma\\over 2}\\alpha-{1\\over 2\\gamma}\\sum_{\\beta=1}^{\\alpha} {\\partial^2\\over\\partial z_\\beta^2}-\\sigma\\sum_{1\\leq\\beta<\\gamma\\leq\\alpha} \\delta(z_\\beta-z_\\gamma).\n\\end{equation}\nClose to the rebinding transition, we anticipate $\\tilde\\tau_m\\to\\infty$, and thus the mean separation between the rebinding times of different replicas, $\\tilde\\tau_\\alpha$ and $\\tilde\\tau_{\\alpha-1}$, diverges. Therefore the contribution to the partition function coming from the integration over $\\tilde\\tau_\\alpha$ will be dominated by the ground state of configurations with $\\alpha$ replicas. In this case we can significantly simplify the partition function and evaluate it analytically:\n\\begin{eqnarray}\n&&\\overline{Z^n}=n!\\int\\limits_{0}^L d\\tilde\\tau_n\\int\\limits_{\\tilde\\tau_n}^{L}d\\tilde\\tau_{n-1}\\ldots \\int\\limits_{\\tilde\\tau_2}^L d\\tilde\\tau_1\\\\\n&&\\exp\\left[\\sum_{\\alpha=1}^n \\left(\\epsilon+\\mathcal E_{\\alpha}-\\mathcal E_{\\alpha-1}\\right)\\tilde\\tau_{\\alpha}\\right].\\nonumber\n\\end{eqnarray}\nHere $\\mathcal E_\\alpha$ is the ground state energy of $\\mathcal H_{\\alpha}$ with a term linear in $\\alpha$ subtracted, which just renormalizes $f_c$. Close to the transition $\\epsilon$ is linear in the difference $f-f_c$. 
The energy, $\\mathcal E_\\alpha$, was\ncomputed in Ref.~[\\onlinecite{Kardar}]:\n\\begin{equation}\n\\mathcal E_\\alpha=-{\\sigma^2\\gamma\\over\n12}\\alpha^3=-\\xi\\alpha^3.\n\\end{equation}\nUpon integrating over $\\tilde\\tau_\\alpha$ one obtains\n\\begin{equation}\n\\overline{Z^n}=n!\\prod_{\\alpha=1}^n {1\\over\n|\\epsilon|\\alpha-\\xi\\alpha^3}\\to\\prod_{\\alpha=1}^n {1\\over\n|\\epsilon|-\\xi\\alpha^2}\n\\label{Z_n0}\n\\end{equation}\nThe product above can be reexpressed in terms of $\\Gamma$-functions,\nwhich in turn allows for a straightforward analytic continuation to\n$n\\to 0$:\n\\begin{equation}\n\\overline{Z^n} ={1\\over\n\\xi^n}{1\\over1+n{\\sqrt{\\xi}\\over\\sqrt{|\\epsilon|}}}\n{\\Gamma\\left({\\sqrt{|\\epsilon|}\\over\\sqrt{\\xi}}-n\\right)\\over\n\\Gamma\\left({\\sqrt{|\\epsilon|}\\over\\sqrt{\\xi}}+n\\right)}.\n\\label{Z_n1}\n\\end{equation}\nUsing this expression we obtain the free energy and the mean\nlength of the localized segment:\n\\begin{equation}\n\\mathcal F=-\\lim_{n\\to 0}{\\overline{Z^n}-1\\over n}=\\ln\n\\xi+{\\sqrt{\\xi}\\over\\sqrt{|\\epsilon|}}+2\\Psi\\left({\\sqrt{|\\epsilon|}\\over\n\\sqrt{\\xi}}\\right),\n\\label{f_2d}\n\\end{equation}\n\\begin{equation}\n\\overline{\\langle \\tilde\\tau_m \\rangle}={\\partial \\mathcal\nF\\over\\partial |\\epsilon|}=-{\\sqrt{\\xi}\\over\n2|\\epsilon|^{3\/2}}+{1\\over\n\\sqrt{|\\epsilon|\\xi}}\\Psi^{(1)}\\left({\\sqrt{|\\epsilon|}\\over\n\\sqrt{\\xi}}\\right)\n\\label{tau_2d}\n\\end{equation}\nwhere as before $\\Psi^{(n)}(x)$ stands for the $n$th derivative of\nthe digamma function. This expression has the asymptotic behaviors:\n\\begin{eqnarray}\n&&\\overline{\\langle \\tilde\\tau_m \\rangle}\\to\n{1\\over\\epsilon}\\qquad\\quad~ \\xi\\ll\n|\\epsilon|\\nonumber\\\\\n&&\\overline{\\langle \\tilde\\tau_m \\rangle}\\to {\\sqrt{\\xi}\\over\n2|\\epsilon|^{3\/2}} \\quad \\xi\\gg|\\epsilon|.\n\\end{eqnarray}\nThis scaling confirms the crossover between exponents $\\nu=1$ and\n$\\nu=3\/2$ for the rebinding transition to a two-dimensional\ndisordered plane predicted by the simple scaling argument leading to\nEq.~(\\ref{valuenu}).\n\nIn a similar way one can also consider an unzipping transition with\n$f \\leq f_c$. One finds an expression for the partition function\nwhich is identical to (\\ref{Z_n0}) with the substitution\n$\\xi\\to-\\xi$. Note however, that the analytic continuation of the\nproduct (\\ref{Z_n1}) results in a complex partition function and\nhence a complex free energy. It thus appears that the analytic\ncontinuation of the product (\\ref{Z_n0}) to noninteger values of $n$\nis not unique. One can always multiply it by any periodic function\nof $n$, which is equal to unity when the argument is integer. While\nwe were able to find some real-valued analytic continuations of\n$\\overline{Z^n}$ to negative values of $\\xi$, these continuations\ndid not lead to physically sensible results.\n\nBecause of the ambiguity of the analytic continuation and some\napproximations used to derive Eqs.~(\\ref{f_2d}) and (\\ref{tau_2d})\nwe also performed numerical simulations for the vortex unzipping\nfrom a disordered twin plane.\n\nFor numerical simulations we are using the lattice version of the\nmodel, where in each step along the $\\tau$ direction the vortex can\neither move to the left or the right one lattice spacing. Note that\nbecause we neglect excursions the vortex motion occurs strictly\nwithin the plane until the vortex is unbound. 
Then the restricted\npartition function for the bound part of the flux line, $Z(x,\\tau)$,\nwhich sums over the weights of all path leading to $x,\\tau$,\nstarting at $x=0,\\tau=0$ satisfies the recursion\nrelation~\\cite{Kardar}\n\\begin{eqnarray}\n&& Z(x,\\tau+1)=e^{\\mu(x,\\tau+1)}\\big[J Z(x-1,\\tau) +J\nZ(x+1,\\tau)\\nonumber\\\\\n&&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+(1-2J) Z(x,\\tau)\\big].\n\\label{eqz1}\n\\end{eqnarray}\nWe assume that $\\mu(x,\\tau)$ is uniformly distributed in the\ninterval $[-U_0,U_0]$ implying as before the variance $\\sigma=U_0^2\/3$. The\nvariable $J$ controls the line tension. In the continuum limit $J\\ll\n1$ and $U_0\\ll 1$ the equation (\\ref{eqz1}) reduces to the\nSchr\\\"odinger equation:\n\\begin{equation}\n{\\partial Z\\over \\partial\\tau}=-\\mathcal H Z(x,\\tau)\n\\label{Z_tauu}\n\\end{equation}\nwith the Hamiltonian given by Eq.~(\\ref{H}) with $\\gamma=2J$ and\n$f=0$ (there is no force acting on the flux line within the\nplane). We note that even if the parameters of the discrete model are not\nsmall we still expect that Eq.~(\\ref{Z_tauu}) remains valid at long\nlength and time scales. However, the relation between the\nmicroscopic parameters of the discrete model and the parameters of\nthe effective coarse-grained Hamiltonian (\\ref{H}) is more\ncomplicated.\n\nIn our simulations we evaluated numerically the free energy of the\nbound part of the vortex line for each realization of point disorder\nand used the analytical expression for the free energy of the\nunbound part, for which point disorder can be neglected. The latter\nis given by Eq.~(\\ref{free_unzip}). This free energy is controlled\nby a single parameter $f^2\/(2\\gamma)$. Use of the analytic result\n(\\ref{free_unzip}) significantly simplifies calculations of\n$\\overline{\\langle \\tau_m\\rangle}$ and allows us to perform large\nscale simulations.\n\nFirst we verify the scaling (\\ref{nu}) with $\\nu=3\/2$ at the\nunzipping transition. To do this we perform standard finite size\nscaling procedure.\n\\begin{figure}[ht]\n\\center\n\\includegraphics[width=9cm]{scaling_2d_6.eps}\n\\caption{Ratio of the unzipping length\n$\\overline{\\langle\\tau_m\\rangle}$ to the system size $L$ as a\nfunction of $f^2\/2\\gamma$ for different system sizes. Here $f$ is\nthe external force and $\\gamma$ is the line tension of the vortex\n(see Eqs.~(\\ref{free_unzip1} and (\\ref{free_unzip}))). According to\nthe scaling relation (\\ref{scaling1}) the crossing point corresponds\nto the unzipping transition. In simulations the parameters of the\nmicroscopic model (\\ref{eqz1}) are chosen to be $J=0.2$, $U_0=2$}\n\\label{fig3}\n\\end{figure}\nIn Fig.~\\ref{fig3} we show dependence of the ratio\n$\\overline{\\langle\\tau_m\\rangle}\/L$ on the parameter $f^2\/(2\\gamma)$\nfor four different sizes. As we expect from the scaling relation\n(\\ref{scaling1}) the three curves intersect at the same point\ncorresponding to the unzipping transition ($g_0\\approx 0.7$). Once\nwe determine the crossing point corresponding to the critical force\n$f_c$ we can verify the scaling relation (\\ref{scaling1}) with\n$\\nu=3\/2$.\n\\begin{figure}[ht]\n\\center\n\\includegraphics[width=9cm]{scaling_2d_1.eps}\n\\caption{Data collapse of $\\overline{\\langle\\tau_m\\rangle}\/L$ as a\nfunction of $\\epsilon L^{1\/\\nu}$ with the exponent $\\nu=3\/2$ for two\ndifferent system sizes (see Eq.~(\\ref{scaling1})). The parameters of\nthe model are the same as in Fig.~\\ref{fig3}. 
The inset shows\nderivative of $\\overline{\\langle\\tau_m\\rangle}$ with respect to\n$\\epsilon$ for $L=12800$. Clearly the scaling function is asymmetric\nwith respect to $\\epsilon\\to -\\epsilon$. Thus the unbinding and\nrebinding transitions are not equivalent.}\n\\label{fig:collapse2D}\n\\end{figure}\nIn Fig.~\\ref{fig:collapse2D} we plot\n$\\overline{\\langle\\tau_m\\rangle}\/L$ versus the scaling parameter\n$\\epsilon L^{1\/\\nu}$ (see Eq.~(\\ref{scaling1})) with $\\nu=3\/2$ for\ntwo different system sizes. Clearly the data collapse is nearly\nperfect, which proves the validity of the scaling (\\ref{scaling})\nwith $\\nu=3\/2$ for the unzipping of a flux line from a twin plane.\nThe inset shows the derivative of $\\overline{\\langle\\tau_m\\rangle}$\nwith respect to $\\epsilon$. Clearly this derivative is asymmetric\nwith respect to $\\epsilon\\to\\ -\\epsilon$, implying that there is no\nsymmetry between the unbinding and rebinding transitions. This is\ncontrary to the unzipping from a columnar pin with no excursions,\nwhere such a symmetry does exist.\n\nNext we turn to verifying the analytic prediction for\n$\\overline{\\langle\\tau_m\\rangle}$, Eq.~(\\ref{tau_2d}). As we argued\nabove the parameter $\\zeta$ describing the disorder strength can be\neasily extracted from microscopic parameters of the model only in\nthe continuum limit $U_0\\gg 1$, $J\\ll 1$. Unfortunately, it is not\npossible to do simulations directly in the continuum limit ($J\\ll 1$\nand $U_0\\ll 1$). Indeed as Eq.~(\\ref{tau_2d}) suggests in order to\nsee the scaling exponent $\\nu=3\/2$ one needs to go to length scales much\nlarger than $1\/\\xi$, where $\\xi=\\sigma^2 J\/12=U_0^4 J\/36$. If\n$J\\ll 1$ and especially $U_0\\ll 1$ then one has to simulate\nextremely large system sizes where $L$ is larger than $10^7$ for\n$U_0=0.1$ and $J=0.1$. Therefore we perform simulations in the regime\nwhere $J$ and especially $U_0$ are appreciable. We then\nregard $\\xi$ as a fitting parameter of the model which should be\nequal roughly to $U_0^4 J\/36$.\n\\begin{figure}[ht]\n\\center\n\\includegraphics[width=9cm]{scaling_2d_5.eps}\n\\caption{Dependence of the length of the bound part of the flux line\nto the twin plane $L-\\overline{\\langle\\tau_m\\rangle}$ on $\\epsilon$\nfor the rebinding transition. Different curves correspond to\ndifferent system sizes. The solid black line is the best single\nparameter fit using Eq.~(\\ref{tau_2d}) with $\\xi$ being the\nfitting parameter.}\n\\label{fig:replica1}\n\\end{figure}\nIn Fig.~\\ref{fig:replica1} we show results of numerical simulation\nfor the rezipping length $L-\\overline{\\langle\\tau_m\\rangle}$ on the\ndetuning parameter $\\epsilon$ for different system sizes. The solid\nblack line is the best single-parameter fit to the data using the\nanalytic expression (\\ref{tau_2d}). The fitting parameter $\\xi$\nfound from simulations is $\\xi \\approx 0.036$, while a continuum\nestimate $U_0^4 J\/36$ gives $\\xi \\approx 0.089$, which is very\nreasonable given that this estimate is valid only at $U_0\\ll 1$. We\nalso performed similar simulations for $U_0=1.5$ and got a very good\nfit with (\\ref{tau_2d}) for $\\xi=0.018$, while the continuum\nestimate gives $\\xi \\approx 0.028$. We thus see that indeed as\n$U_0$ decreases the fitting parameter $\\xi$ becomes closer to the\ncontinuum expression.\n\nWhile we were not able to derive a closed analytic expression for\n$\\overline{\\langle\\tau_m\\rangle}$ for the unbinding transition, we\nperformed numerical simulations. 
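For concreteness, a minimal sketch of the transfer-matrix iteration underlying these simulations is shown below. It iterates Eq.~(\\ref{eqz1}) on a strip of finite width to accumulate the restricted free energy of the bound segment, and combines it with the free energy $-f^2\\tau_m\/(2\\gamma)$ of the unzipped part (Eq.~(\\ref{free_unzip})) to obtain the thermal average $\\langle\\tau_m\\rangle$ for a single disorder realization. The default couplings correspond to those of Fig.~\\ref{fig3}; the boundary treatment at the strip edges, the starting point of the bound segment, and the system sizes are placeholder choices of this sketch, and the constant attraction of the plane (which only shifts the critical value of $f^2\/2\\gamma$) is omitted.\n\\begin{verbatim}\nimport numpy as np\n\ndef bound_free_energy(Lx, Ltau, J, U0, rng):\n    # Restricted free energy F_b(l) of a bound segment of length l,\n    # from iterating Eq. (eqz1) on a strip of width 2*Lx+1.\n    Z = np.zeros(2 * Lx + 1)\n    Z[Lx] = 1.0                 # segment starts at x = 0\n    Fb = np.zeros(Ltau + 1)\n    lognorm = 0.0               # running normalization (stability)\n    for tau in range(1, Ltau + 1):\n        mu = rng.uniform(-U0, U0, size=2 * Lx + 1)\n        Znew = (1.0 - 2.0 * J) * Z\n        Znew[1:] += J * Z[:-1]  # step from x-1\n        Znew[:-1] += J * Z[1:]  # step from x+1\n        Znew *= np.exp(mu)\n        s = Znew.sum()\n        lognorm += np.log(s)\n        Z = Znew \/ s\n        Fb[tau] = -lognorm      # F_b(tau) = -ln sum_x Z(x,tau)\n    return Fb\n\ndef mean_tau_m(g, Lx=400, Ltau=3200, J=0.2, U0=2.0, seed=1):\n    # g plays the role of f^2\/(2*gamma).\n    rng = np.random.default_rng(seed)\n    Fb = bound_free_energy(Lx, Ltau, J, U0, rng)\n    tau_m = np.arange(Ltau + 1)\n    F = -g * tau_m + Fb[::-1]   # bound length is Ltau - tau_m\n    w = np.exp(-(F - F.min()))\n    return np.dot(tau_m, w) \/ w.sum()\n\\end{verbatim}\nAveraging the output over many disorder realizations and repeating for several system sizes gives curves of the type shown in the figures; in particular, the crossing point of $\\overline{\\langle\\tau_m\\rangle}\/L$ can be used to locate the critical value of $f^2\/2\\gamma$, as described above. 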
As the inset in\nFig.~\\ref{fig:collapse2D} suggests the transition is highly\nasymmetric. In fact this asymmetry persists in the thermodynamic\nlimit.\n\\begin{figure}[ht]\n\\center\n\\includegraphics[width=9cm]{scaling_2d_7.eps}\n\\caption{Comparison of dependences of\n$L-\\overline{\\langle\\tau_m\\rangle}$ for the rebinding transition and\n$\\overline{\\langle\\tau_m\\rangle}$ for the unbinding transition on\n$|\\epsilon|$. We used the parameters of Fig.~\\ref{fig3} with\n$L=51200$. The finite size effects are negligible on the scale of\nthe graph. Both curves interpolate between $1\/|\\epsilon|$ dependence\nat $|\\epsilon|\\gg \\xi$ and $C\/|\\epsilon|^{3\/2}$ at $|\\epsilon|\\ll\n\\xi$. However, the prefactor $C$ for the unbinding transition is\nabout three times larger than for the rebinding.}\n\\label{fig:unbind_rebind}\n\\end{figure}\nIn Fig.~\\ref{fig:unbind_rebind} we plot\n$L-\\overline{\\langle\\tau_m\\rangle}$ for the rebinding transition and\n$\\overline{\\langle\\tau_m\\rangle}$ for the unbinding versus\n$|\\epsilon|$. Both curves interpolate between $1\/|\\epsilon|$\ndependence at weak disorder $|\\epsilon|\\ll \\xi$ and\n$C\/|\\epsilon|^{3\/2}$ dependence at strong disorder $|\\epsilon|\\gg\n\\xi$. However, the prefactor $C$ in front of $1\/|\\epsilon|^{3\/2}$\nis larger for the unzipping transition.\n\n\\subsection{Unzipping from a hard wall}\n\\label{Bethe_ansatz}\n\nAs the next step we consider unzipping from an attractive hard wall\nin $d=1+1$ dimensions with point disorder in the bulk. Our method\nis a straightforward generalization of the Bethe ansatz solution\nfound by Kardar in the absence of the external force~\\cite{Kardar}.\nThe system is illustrated in Fig. \\ref{Bethe}. Here the potential\nexperienced by the flux line, $V(x)$, has a short ranged attractive\npart and an impenetrable core at $x=0$. While the scaling argument\nis unchanged in this case, this problem has the merit of being\nexactly solvable within the replica approach. Since most details of\nthe calculation are identical to those presented in\nRef.~[\\onlinecite{Kardar}], here we only outline the solution.\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[scale=0.7]{unzipwall.eps}\n\\caption{\\label{Bethe} An illustration of the setup considered in\nthe Bethe Ansatz calculation. The flux line is restricted to the\nhalf plane and a MFM tip is acting on it at the top of the sample.}\n\\end{figure}\nAfter replicating the free energy Eq.~(\\ref{f0dis}) along with the\ncontribution from the external field Eq.~(\\ref{eq:unzipfe}) and\naveraging over the disorder, the replicated sum over all path weights connecting points $(0,0)$ and $(x,\\tau)$,\n$\\overline{Z^n(x,t)}$, can be calculated from\n\\begin{equation}\n\\partial_\\tau \\overline{Z^n(x,t)} = -{\\cal H} \\overline{Z^n(x,t)} \\;,\n\\end{equation}\nwith the initial condition $\\overline{Z^n(x,0)}=\\delta(x)$. The\nreplicated system describes $n$ {\\it attractively interacting\nbosons} with a non-Hermitian Hamiltonian ${\\cal H}$ given by\n\\begin{eqnarray}\n{\\cal H}&=& \\sum_{\\alpha=1}^n \\left[ -\\frac{1}{2\\gamma}\n \\partial^2_{x_\\alpha}-f\n\\partial_{x_\\alpha}+ V(x_\\alpha) \\right]\\nonumber\n\\\\&-&\\sigma \\sum_{\\alpha < \\beta}\n\\delta(x_\\alpha-x_\\beta) -\\frac{1}{2} \\sigma n \\;.\n\\end{eqnarray}\n\nIn Ref.~[\\onlinecite{Kardar}] the problem was solved for $f=0$ using\nthe Bethe Ansatz. 
The boundary conditions were that the ground state\nwave function should vanish at large $x$ should decay as\n$\\exp(-\\lambda x)$ for the particle closest to the wall. One then\nfinds that for the permutation ${\\bf P}$ of particles such that\n$0 \\lambda$) it is unbound.\n\nThe ground state wave function for the {\\it non-zero} value of the force\ncan be obtained by noting that the non-Hermitian term acts like an\nimaginary vector potential. In particular, it can be gauged away when\nthe vortices are bound to the wall as discussed in Sec.\n\\ref{sectioncleancase} (see Eqs.~(\\ref{eq:gauge1}) and\n(\\ref{eq:gauge2})). This imaginary gauge transformation gives\n\\begin{equation}\n\\Psi_{f}=\\Psi_{f=0}\\exp\\left( \\sum_{\\alpha=1}^n fx_\\alpha \\right)\n\\;,\n\\end{equation}\nwhich implies that the solution is\n\\begin{equation}\n\\Psi_{f}=\\exp \\left(-\\sum_{\\alpha=1}^n \\tilde{\\kappa}_\\alpha x_{P\n\\alpha}\\right) \\;,\n\\end{equation}\nwith $\\tilde{\\kappa}_\\alpha = \\lambda+2(\\alpha-1)\\kappa-f$. The\neffect of the force is simply to shift all the $\\kappa_\\alpha$'s by a\nconstant. The average localization length (which satisfies near the\ntransition $\\langle x _m\\rangle \\simeq f_c \\langle \\tau_m \\rangle \/\n\\gamma$) is then given by\n\\begin{equation}\n\\langle x_m \\rangle={1\\over \\tilde Z_n n}\\int_0^\\infty\n\\prod_{j=1}^n dx_j \\left[\\sum_{j=1}^n x_j\\right]\\,\\Psi_f(x_j),\n\\label{eq:Kardarresult}\n\\end{equation}\nwhere $\\tilde Z_n=\\int_0^\\infty \\prod_{j=1}^n dx_j \\Psi_f(x_j)$.\nNote that the normalization factor $\\tilde Z_n$ in the equation\nabove is formally equivalent to the partition function (\\ref{tuam2})\nfor the unzipping from a columnar pin without excursions if we\nidentify $\\lambda-\\kappa-f$ with $\\epsilon$ and $\\kappa$ with\n$\\Delta\/2$. This equivalence implies that $\\langle x_m\\rangle$ for\nthe unzipping from a hard wall has the same statistical properties\nas $\\overline{\\langle\\tau_m\\rangle}$ for the unbinding from a\ncolumnar pin (for more details see Ref.~[\\onlinecite{kp}]). In particular, the\nunzipping problem has a crossover from $\\langle x_m\\rangle \\sim\n1\/(f_c-f)$ for $\\lambda-f\\gg\\kappa$ to $\\langle x_m\\rangle \\sim 1\/\n(f_c-f)^{3\/2}$ in the opposite limit.\n\nThis example confirms another prediction of\nthe simple scaling argument: the critical exponents for the\nunbinding transition are determined only by the dimensionality of\nthe defect even if the disorder is also present in the bulk of the system.\n\n\\subsection{Unzipping from a columnar pin with excursions into the bulk.}\n\\label{sec:numerics}\n\nIn this section we consider the setup similar to Sec.~\\ref{replica},\nnamely unzipping from a columnar defect in $d=1+1$ dimensions, but\nallowing excursions of the flux line to the bulk (see\nFig.~\\ref{fig:unzip_1D}).\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=9cm]{unzip.eps}\n\\caption{A setup illustrating unzipping from a\ncolumnar pin in $d=1+1$ dimensions with excursions into the bulk.}\n\\label{fig:unzip_1D}\n\\end{figure}\nUnfortunately there is no analytic solution available for this\nproblem. Therefore we present only numerical results. As in\nSec.~\\ref{replica_2D} we consider a lattice version of the model\nwhere in each step along the $\\tau$ direction the vortex can either\nmove to the left or the right one lattice spacing. The attractive\npotential was placed at $x=0$. 
The restricted partition function of\nthis model, $Z(x,\\tau)$, which sums over the weights of all path\nleading to $x,\\tau$, starting at $x=0,\\tau=0$ satisfies the\nrecursion relation~\\cite{Kardar}:\n\\begin{eqnarray}\n&&Z(x,\\tau+1)=\\delta_{x,0}(e^{V}-1)Z(0,\\tau) \\nonumber\\\\\n&&+e^{\\mu(x,\\tau+1)}\\left[J e^f Z(x-1,\\tau) +J\ne^{-f}Z(x+1,\\tau)\\right]. \\label{eqz}\n\\end{eqnarray}\nSimilarly to Eq.~(\\ref{eqz1}) we assume that $\\mu(x,\\tau)$ is\nuniformly distributed in the interval $[-U_0,U_0]$ implying the\nvariance $\\sigma=U_0^2\/3$. The variable $J$ controls the line\ntension, $V$ is the attractive pinning potential, and $f$ is proportional\nto the external force. In the continuum limit $J\\ll 1$, $f\\ll 1$,\nand $U_0\\ll 1$, equation (\\ref{eqz}) reduces to the Schr\\\"odinger\nequation:\n\\begin{equation}\n{\\partial Z\\over \\partial\\tau}=-\\mathcal H Z(x,\\tau)\n\\label{Z_tau}\n\\end{equation}\nwith the Hamiltonian given by Eq.~(\\ref{H}) with $\\gamma=2J$.\n\nFor the simulations we have chosen particular values of $J=0.1$ and\n$V=0.1$. As before we work in units such that $k_B T=1$. In the\nresults described below the partition function was evaluated for\neach variance of the disorder for several systems of finite width\n$w=2L_x$ averaging over the time-like direction (typically $\\tau\n\\simeq 10^6$ ``time'' steps) with the initial condition $Z(0,0)=1$\nand $Z(x,0)=0$ for $x \\neq 0$.\n\nTo analyze the numerics we performed a finite size scaling analysis. In the spirit of Eq.~(\\ref{nu}), in the vicinity of the transition we\nexpect the scaling form (compare Eq.~(\\ref{scaling1})):\n\\begin{equation}\n\\overline{\\langle\\tau_m\\rangle}=L_x \\Phi\\left[L_x(f_c-f)^\\nu\\right],\n\\label{scaling}\n\\end{equation}\nwhere $\\Phi$ is some scaling function.\nBased on the results of previous sections we anticipate a smooth\ninterpolation between scaling exponents $\\nu=1$ and $\\nu=2$ with either\nincreasing $L_x$ or increasing strength of disorder at fixed $L_x$.\nTo perform the finite size scaling we obtain for each value of $L_x$\na value for the exponent $\\nu$ from the best collapse of the\nnumerical data of two systems sizes $L_x$ and $L_x\/2$. In\nFig.~\\ref{fig1} we plot $1\/\\nu$ as a function of the system size\n$L_x$. As can be seen the data is consistent with $\\nu$ saturating\nat $\\nu=2$ for large systems. The crossover to $\\nu=2$ is much more\nrapid if the point disorder is enhanced near the columnar pin (see\nthe inset in Fig.~\\ref{fig1}), as might be expected for damage\ntracks created by heavy ion radiation.\n\\begin{figure}\n\\center\n\\includegraphics[width=8.5cm]{scaling_1dv2.eps}\n\\caption{Effective exponent $1\/\\nu$ versus $L_x$ for a fixed\nstrength of point disorder $\\sigma=0.03$. The results are\nconsistent with the general argument that this exponent should\nsaturate at $\\nu=2$ as $L_x\\to\\infty$. The inset shows the same\nexponent vs $\\sigma_c$, the variance of additional point disorder\nplaced directly on the columnar pin extracted from two system\nsizes $L_x=600$ and $L_x=1200$. It appears that $\\nu\\to 2$ as\n$\\sigma_c$ increases.} \\label{fig1}\n\\end{figure}\n\nNext, we test the behavior of the critical force as the\ndisorder strength is increased. According to our discussion in Sec.\n\\ref{scalingphasediagram}, we anticipate that in the absence of an\nexternal force the flux line is always bound to the pin in $1+1$\ndimensions. 
This is in contrast with the problem of unzipping from\nthe wall discussed in the previous section, where there is a\ncritical strength of the disorder, $\\sigma_c$, which leads to an\nunbinding transition for $f=0$. Note that the existence of a critical value\nof the disorder is a direct consequence (see discussion in Sec.\n\\ref{scalingphasediagram}) of the excursions of the vortex from the\ndefect which, as argued above, do not modify the critical behavior\nof the unzipping transition. The existence of a critical value of\nthe disorder is therefore strongly dependent on the dimensionality\nof the problem.\n\nIn numerical simulations for each strength of disorder we determine\nthe critical force plotting the ratio\n$\\overline{\\langle\\tau_m\\rangle}\/L_x$ for two different sizes $L_x$\nand using the scaling relation~(\\ref{scaling}). Note that this ratio\ndoes not depend on $L_x$ at $f=f_c$ (see also the discussion in\nSec.~\\ref{replica_2D}). We checked that this is indeed the case.\nUpon repeating this procedure for different disorder strengths we\nobtain the dependence $f_c(U_0)$ which is plotted in\nFig.~\\ref{fig8}.\n\\begin{figure}[ht]\n\\hspace{0.5cm}\n\\includegraphics[bb=1cm 1cm 20cm 25cm, scale=0.38, angle=90]{crit_f.eps}\n\\caption{Critical force for unzipping from a columnar defect in\n$1+1$ dimensions as a function of the disorder\nstrength.}\\label{fig8}\n\\end{figure}\nThe graph suggests that there is no unbinding transition at zero\ntilt at any strength of disorder consistent with the scaling\nargument presented in Sec.~\\ref{scalingphasediagram} and those of\nRef.~[\\onlinecite{HwaNatter}]. We point out that the strongest disorder\nshown in the graph $U_0=0.9$ required samples quite extended in the\ntime-like direction, $L_\\tau\\approx 10^8$.\n\n\n\\section{Unzipping a Luttinger liquid}\n\\label{sec:Lutunzip}\n\nWe now turn to consider the effect of interactions on the unzipping\nof single vortices. To do this we study a system where the vortices\nare preferentially bound to a thin two-dimensional slab which is\nembedded in a three-dimensional sample so that the density of\nvortices in the slab is much higher than in the bulk.\nExperimentally, this setup could be achieved using, for example, a\ntwin plane in YBCO or by inserting a thin plane with a reduced lower\ncritical field $H_{c1}$ (with, for example, molecular beam epitaxy)\ninto a bulk superconductor. The scenario we analyze is one where a\nMFM is used to pull a single vortex out of the two-dimensional slab\n(see Fig. \\ref{fig9}). The physics of the vortices confined to two\ndimensions is well understood and is analogous to a spinless Luttinger\nliquid of bosons (see, e.g. Ref.~[\\onlinecite{AHNS}]).\n\nAs we show below the dependence of the displacement of the vortex\nfrom the two-dimensional slab on the force exerted by the MFM\ndepends on the physics of the two-dimensional vortex liquid which\nresides in the slab. Specifically, the critical properties of the\nunbinding transition depend on the ``Luttinger liquid parameter''\nwhich controls the large-distance behavior of the vortex liquid. The\nexperimental setup can thus be used to probe the two-dimensional\nphysics of the vortices in the slab.\n\n\\begin{figure}[ht]\n\\includegraphics[scale=0.6]{UnzipLuttinger.eps}\n\\caption{Possible experimental setup for studying unzipping from\nLuttinger Liquid. A MFM is used to pull a single vortex out of a\nplane where the vortices are confined. 
The measured quantity is the\ndistance of the pulled vortex from the confining plane as a function\nof the force $f$. }\\label{fig9}\n\\end{figure}\n\\subsection{Two-dimensional vortex liquids}\n\nThe physics of vortices in two dimensions is very well understood.\nThe vortices form a one-dimensional array located at position\n$x_i(\\tau)$. The density profile of the vortices is then given by\n\\begin{equation}\n n(x,\\tau)=\\sum_j \\delta \\left[ x-x_j(\\tau)\\right] \\;,\n\\end{equation}\nwhere $x$ and $\\tau$ denote transverse and longitudinal coordinates\nwith respect to the vortices and $i$ is an index labeling the\nvortices. By changing variable into the phonon displacement field\n$u_j$ through $x_j(\\tau)=a\\left[j+u_j(\\tau)\\right]$, where $a$ is\nthe mean distance between vortex lines the free-energy of a\nparticular configuration can be written as:\n\\begin{equation}\n {\\cal F}_0=\\frac{a^2}{2} \\int dx d\\tau \\left[ c_{11}\n (\\partial_x u)^2 + c_{44} (\\partial_\\tau u)^2\\right] \\;.\n\\end{equation}\nHere $c_{11}$ and $c_{44}$ are the compressional and the tilt moduli\nrespectively. After rescaling the variables $x$ and $\\tau$ according\nto\n\\begin{equation}\n x \\to x \\left(\\frac{c_{11}}{c_{44}}\\right)^{1\/4} \\;\\; ,\n \\; \\tau \\to \\tau \\left(\\frac{c_{44}}{c_{11}}\\right)^{1\/4} \\;,\n\\end{equation}\nthe free energy takes the isotropic form\n\\begin{equation}\n {\\cal F}_0=\\frac{A}{2}\\int dx d\\tau\n \\left[ (\\partial_x u)^2 + (\\partial_\\tau u)^2\\right]\n\\end{equation}\nwith $A=a^2\\sqrt{c_{11}c_{44}}$. The partition function is then\ngiven by the functional integral\n\\begin{equation}\n Z=\\int D u(x,\\tau) e^{-S} \\;,\n\\end{equation}\nwith $S=S_0={\\cal F}_0\/T$. In the limit of large sample sizes in the\n``timelike'' direction one can regard $Z$ as the zero temperature\npartition function of interacting bosons~\\cite{AHNS}. In this\nlanguage the imaginary time action can be written as\n\\begin{equation}\n S_0=\\frac{\\pi}{2g}\\int dx d\\tau\n \\left[ (\\partial_x u)^2 + (\\partial_\\tau u)^2\\right] \\;.\n \\label{freeaction}\n\\end{equation}\nHere we set $\\hbar=1$ and identified the Luttinger-liquid parameter,\n$g$, as\n\\begin{equation}\n g=\\frac{\\pi T}{A} \\;.\n \\label{Lutpara}\n\\end{equation}\nThe Luttinger-liquid parameter controls the long-distance properties\nof the model. For vortices $g$ it is a\ndimensionless combination of the compressional and tilt moduli, the\ndensity of vortices and temperature.\n\nVarious properties of Luttinger liquids are well understood. For\nexample, the correlation function for the density fluctuations\n$\\delta n(x,\\tau)=n(x,\\tau)-n_0$, where $n_0=1\/a$ is the mean\ndensity, obeys\n\\begin{equation}\n \\langle \\delta n(x,\\tau) \\delta n(0,0) \\rangle \\simeq\n \\frac{\\cos\\left( 2 \\pi n_0 x \\right)}{(x^2+\\tau^2)^g} \\;.\n\\end{equation}\nThere is quasi long-range order in the system and the envelope of\nthe density correlation function decays as a power law with the exponent\ndepending only on $g$. As we show below, $g$ can be probed by\nunzipping a single vortex out of a plane which contains a $(1+1)$-dimensional vortex liquid.\n\nIn what follows we also consider the case where there is point\ndisorder present in the sample. The behavior will be strongly influenced by the behavior of the vortices\nin two dimensions in the presence of disorder. This problem has been\nstudied in some detail in the past (see e.g. Ref.~[\\onlinecite{pkn}] and\nreferences therein). 
Here we briefly review features which will be\nimportant in analyzing the unzipping problem. The most relevant (in\nthe renormalization group sense) contributions to the action from\nthe point disorder is\n\\begin{equation}\n S_{PD}=2\\int dx d\\tau R(x,\\tau)\n \\cos \\left[2 \\pi u(x,\\tau) +\\beta(x,\\tau) \\right] \\;,\n\\end{equation}\nwhere positive (negative) $R$ implies a repulsive (attractive)\npotential between the vortices and the quenched random disorder. We assume, for simplicity, that $\\beta(x,\\tau)$ is\ndistributed uniformly between $0$ and $2 \\pi$ and $R(x,\\tau)$ has a\nan uncorrelated Gaussian distribution with the variance $\\Delta_0$:\n\\begin{equation}\n \\overline{R(x_1,\\tau_1)R(x_2,\\tau_2)}=\n \\Delta_0 \\delta(x_1-x_2)\\delta(\\tau_1-\\tau_2) \\;,\n\\end{equation}\nwhere the overbar, as before, represents averaging over disorder.\n\nTo analyze the disordered problem, similar to the single vortex case,\nwe use the replica trick. Then the replicated noninteracting part of\nthe action becomes\n\\begin{equation}\n S_0=\\frac{\\pi}{2g} \\sum_{\\alpha,\\beta} \\int\n \\int dx d\\tau \\left[ \\frac{\\partial u_\\alpha}{\\partial \\tau}\n \\frac{\\partial u_\\beta}{\\partial \\tau} +\\frac{\\partial u_\\alpha}{\\partial x}\n \\frac{\\partial u_\\beta}{\\partial x} \\right] \\left[ \\delta_{\\alpha,\\beta}\n - \\frac{\\kappa}{g} \\right] \\;.\n\\end{equation}\nHere $u_\\alpha(x,\\tau)$ is the replicated phonon field and $\\kappa$\nis an off-diagonal coupling which is zero in the bare model but is\ngenerated by the disorder. It plays the role of a quenched random\n``chemical potential'' which is coupled to the first derivative of\nthe phonon field $u$. The replica indices, $\\alpha$ and $\\beta$ run\nfrom $1$ to $n$ and at the end of the calculation one takes the\nlimit $n \\to 0$. After replication the contribution from the point\ndisorder becomes\n\\begin{equation}\n S_{PD}=-\\Delta_0 \\sum_{\\alpha,\\beta} \\int \\int dx d\\tau\n \\cos 2 \\pi \\left[ u_\\alpha (x,\\tau) - u_\\beta (x,\\tau) \\right] \\;.\n\\end{equation}\nThe combined action can be treated within the renormalization group\nusing a perturbation series near $g=1$ where a phase transition\nbetween a vortex liquid and a vortex glass\noccurs~\\cite{fisher_v_glass}. By continuously eliminating degrees of\nfreedom depending on frequency and momentum within the shell\n$\\Lambda - \\delta \\Lambda < \\sqrt{\\omega^2+q^2} < \\Lambda$, one\nobtains the following renormalization group equations~\\cite{Cardy,\npkn}\n\\begin{eqnarray}\n \\frac{dg}{dl}&=&0 \\\\\n \\frac{d \\Delta}{dl}&=&2(1-g) \\Delta - 2 C \\Delta^2 \\\\\n \\frac{d \\kappa}{dl}&=&C^2 \\Delta^2\n\\end{eqnarray}\nHere $l$ is the flow parameter $\\Lambda(l)=\\Lambda e^{-l}$. $C$ is\na non-universal constant which depends on the cutoff $\\Lambda$. The\nequations are subject to the initial conditions $\\kappa(l=0)=0$ and\n$\\Delta(l=0)=\\Delta_0$. Note that the Luttinger liquid parameter is\nnot renormalized. Analyzing the flow equations it has been shown\nthat in the vortex liquid phase ($g>1$) the correlation of the\ndensity fluctuation behaves in the vortex liquid phase as\n\\begin{equation}\n \\langle \\delta n(x,\\tau) \\delta n(0,0) \\rangle\n \\simeq \\frac{1}{(x^2+\\tau^2)^{g+\\tilde{\\kappa}\/2}} \\;,\n\\end{equation}\nwhere $\\tilde{\\kappa}$ is a nonuniversal exponent. 
In the glass\nphase ($g<1$) correlations decay faster than a power law, with\n\\begin{equation}\n \\langle \\delta n(x,\\tau) \\delta n(0,0) \\rangle\n \\simeq \\exp \\left( -(1-g)^2 \\ln^2 \\sqrt{x^2+\\tau^2}\\right) \\;.\n\\end{equation}\n\nIn what follows we consider a setup in which a two dimensional array\nof vortices, whose properties have been described above, is embedded\nin a three dimensional bulk sample. As shown below when a single vortex\nis unzipped into the bulk in a clean sample the critical properties\nof the unzipping transition yield information on the properties of\nthe two dimensional vortex liquid. In particular, they provide a\ndirect measure of the Luttinger-liquid parameter. In the same setup\nin a disordered sample we will show that the critical properties of\nthe unzipping transition will be modified. In particular, they can\nyield information on the on the three-dimension wandering exponent\nof a single vortex in a disordered sample.\n\n\n\\subsection{Unzipping a Luttinger liquid: The clean case}\nConsider first an experiment where an attractive two-dimensional\npotential holds vortices confined to it. A MFM then pulls a {\\it\nsingle} vortex out of the plane (see Fig. \\ref{fig9}). We assume\nthroughout that the density of vortices in the three dimensional bulk\nis so small that we can neglect interactions between the vortex that\nis pulled out of the sample and vortices in the three dimensional\nbulk. In this subsection only the clean case (no point disorder) will be studied.\n\nWe assume the MFM exerts a force ${\\bf f}=f \\hat{x}$. As in the\nunzipping experiments discussed above we expect that for large\nforces $f>f_c$ the vortex will be completely pulled out of the two\ndimensional slab. Similar to the case of the unzipping of a single\nvortex we write the free energy of the vortex as a sum of two\ncontributions. The first, ${\\cal F}_u(\\tau_m)$, arises from the part\nof the vortex that is outside the two dimensional slab. The second\n${\\cal F}_b(\\tau_m)$ is the change in the free-energy of the\nvortices that remain inside the two dimension slab. As before\n$\\tau_m$ is the length along the $\\tau$ direction which is unbound\nfrom the two-dimensional slab. The free-energy of the unzipped part\nis clearly identical to that calculated in Eq.~\\ref{free_unzip} or\nexplicitly\n\\begin{equation}\n {\\cal F}_u(\\tau_m)= - f^2 \\tau_m\/ 2\\gamma \\;.\n \\label{eq:unzupfeagain}\n\\end{equation}\n\nThe calculation of the free-energy, ${\\cal F}_b(\\tau_m)$, is\nsomewhat more involved. Clearly there is a linear contributions due\nto the length $\\tau_m$ removed from the attractive potential of the\nslab. However, in addition there is an extra contribution from the\nenergy of the dislocation, ${\\cal F}_d(\\tau_m)$, (see Fig.\n\\ref{fig9}) created in the two dimensional vortex array. This\ncontribution to the free-energy, as we show below, is {\\it\nnon-linear} and controlled by the Luttinger liquid parameter $g$.\nThis non-linearity results, near the unzipping transition, in a\nsensitivity of the critical properties to the value of $g$.\n\nWe leave the details of the calculation of the dislocation energy to\nAppendix~\\ref{App:dislocation} and present here only the key steps\nof derivation.\n\nIn order to satisfy boundary conditions near the interface one can\nuse the method of images (see Fig.~(\\ref{fig11})). The free energy\nof this dislocation pair can be calculated by standard methods (see\ndetails in Appendix~\\ref{App:dislocation}). 
In particular, at large\n$\\tau_m$ it behaves logarithmically (see e.g. Ref.~[\\onlinecite{chakin}]):\n\\begin{equation}\n {\\cal F}_d=\\frac{T}{4g} \\ln(\\tau_m\/a_0),\n \\label{free_en_dis}\n\\end{equation}\nwhere $a_0$ is the short range cutoff of the order of the distance\nbetween flux lines. We note that the free energy of the dislocation\nnear the interface (\\ref{free_en_dis}) is one half of the free\nenergy of a dislocation pair.\n\nWith the energy of the dislocation in hand we can now analyze the\nproperties of the unzipped length near the transition using the\nmethods used for analyzing the single vortex unzipping experiments.\nThe contributions to the free energy are from the unzipped part of\nthe vortex and the energy of the dislocation. Collecting all the\nrelevant terms, near the transition the free energy is given by\n\\begin{equation}\n {\\cal F}(\\tau_m)={\\cal F}_u(\\tau_m)+{\\cal F}_b(\\tau_m)=\n \\epsilon\\tau_m+\\frac{T}{4g}\\ln(\\tau_m\/a_0)\\;.\n\\end{equation} \nThe probability of finding a certain value of $\\tau_m$ is then given\nby\n\\begin{equation}\n P(\\tau_m) \\propto e^{-F(\\tau_m)\/T}=\\frac{C}{\\tau_m^{1\/(4g)}}e^{-\\epsilon\\tau_m}.,\n\\end{equation}\nwhere $C$ is the normalization constant. At the transition\n$\\epsilon=0$ the distribution becomes a pure power law in $\\tau_m$.\nTherefore, the average value of $\\tau_m$ is very sensitive to the\nvalue of $g$. In particular, for $g>1\/4$ (i.e. for weakly\ninteracting flux lines) the behavior of $\\langle\\tau_m\\rangle$ near\nthe transition is identical to that of a single vortex in the\nabsence of interactions with other vortices\n\\begin{equation}\n \\langle \\tau_m \\rangle \\sim {1\\over\\epsilon} \\;.\n\\end{equation}\nIn contrast, for $1\/8 < g < 1\/4$ (stronger interactions) there is a\ncontinuously varying exponent governing the transition\n\\begin{equation}\n\\langle\\tau_m\\rangle\\sim {1\\over \\epsilon^{2-1\/4g}} \\;.\n\\end{equation}\nAnd finally, for $g<1\/8$ (strongly interacting flux lines) we find\nthat $\\langle \\tau_m\\rangle$ does not diverge near the transition.\nNote that even though in this regime the mean displacement remains\nconstant at the transition the higher moments of $\\tau_m$ diverge\nand are thus sensitive to $\\epsilon$. The reason for this is at the\ntransition the distribution of $\\tau_m$ is a power law.\n\n\n\\subsection{Unzipping from a twin plane with point disorder}\n\\label{3c}\n\nWe now consider the problem of unzipping a vortex from a\nplane with many vortices in the presence of disorder. In the spirit\nof the treatments presented in this paper, one needs to calculate the\nfree-energy of the unzipped part of the vortex ${\\cal F}_u(\\tau_m)$,\nthe free-energy of the bound part of the vortex ${\\cal F}_b(\\tau_m)$\nand the {\\it fluctuations} in both quantities averaged over\nrealizations of disorder. This can be done perturbatively near $g=1$. We again relegate details of the derivation of\nthe dislocation energy to\nAppendix~\\ref{App:dislocation1}. One conclusion from our\ncalculations is that the mean free energy of the dislocation near\nthe boundary is not affected by the disorder and is given by\nEq.~(\\ref{free_en_dis}). 
Another important conclusion is that the fluctuations of the free energy also depend logarithmically on $\\tau_m$:\n\\begin{equation}\n \\overline{\\delta {\\cal F}^2_d(\\tau_m)}= T^2\\frac{\\kappa(\\infty)}{8g^2} \\ln(\\tau_m\/a_0)\n\\end{equation}\nfor $g>1$ and\n\\begin{equation}\n \\overline{\\delta {\\cal F}^2_d(\\tau_m)}=T^2\\frac{(1-g)^2}{4} \\ln^2(\\tau_m\/a_0)\n\\end{equation}\nfor $g<1$.\n\\begin{figure}[ht]\n\\includegraphics[scale=0.6]{UnzipLuttingerdis.eps}\n\\caption{Possible experimental setup for studying unzipping from Luttinger Liquid in the presence of disorder.}\\label{fig10}\n\\end{figure}\nWe note that in the case of many flux lines there is only a weak logarithmic dependence of the free energy fluctuations on $\\tau_m$, as opposed to the strong power-law dependence in the case of a single flux line (compare Eq.~(\\ref{eq:fefluct})). This somewhat surprising result is a consequence of the screening of the strong power-law fluctuations by the other flux lines. We note that if the pinning of the flux lines by disorder is extremely strong, so that tearing out a single flux line does not affect the positions of the other lines for the duration of the experiment, we are back to the single flux line physics and $\\overline{\\delta \\mathcal F_d^2}\\propto \\tau_m$.\n\nTo complete the analysis, we need to consider the free-energy contribution from the unzipped part. Of particular importance are the free-energy fluctuations due to the disorder in the bulk of the sample. As discussed in Sec. \\ref{Sec2}, in a three dimensional sample these grow as $\\delta {\\cal F}_u \\propto \\tau_m^{\\omega(3)}$ with $\\omega(3) \\simeq 0.22$. This contribution grows much more quickly than the contribution from the fluctuations in the free-energy of the dislocation. Therefore, following the ideas of Sec. \\ref{Sec2}, the total free-energy is given by\n\\begin{equation}\n{\\cal F}(\\tau_m)=a(f_c-f)\\tau_m -b\\tau_m^{\\omega(3)}\\;,\n\\label{fplane}\n\\end{equation}\nwhere $a$ and $b$ are positive constants. Minimizing Eq.~(\\ref{fplane}) then gives the critical behavior\n\\begin{equation}\n \\tau_m \\sim \\frac{1}{(f_c-f)^{1.28}} \\;.\n\\end{equation}\nThus the screening of the disorder fluctuations in the plane by the other flux lines effectively enhances the role of the disorder in the bulk. As a result, these unzipping experiments can serve as a probe of the three-dimensional anomalous wandering exponent.\n\n\\acknowledgements\n\nYK was supported by the Israel Science Foundation and thanks the Boston University visitors program for hospitality. YK and DRN were supported by the Israel-US Binational Science Foundation. Research by DRN was also supported by the National Science Foundation, through grant DMR 0231631 and through the Harvard Materials Research Science and Engineering center through grant DMR0213805. AP was supported by AFOSR YIP.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Learning with Adversarial Contexts}\n\\label{sec:adversarial}\n\n\nIn this section, we focus on the adversarial contexts case, where at time $t\\in [T]$, the contexts $x_{t,1},\\cdots,x_{t,K}$ can be arbitrarily chosen by an adversary who observes all past contexts and rewards, with $\\|x_{t,a}\\|_2\\le 1$ for any $a\\in [K]$. 
\nWe first state the main results in Section~\\ref{subsec:adversarial_main}, which\ncharacterize the upper and lower bounds of the regret.\nWe then give a UCB-based algorithm in the sequential batch setting in Section \\ref{subsec:adversarial_UCB} and describe several important aspects of the algorithm, including a variant that is used for theoretical bound purposes. \nNext, in Section \\ref{subsec.UCB_upperbound}, we show that the proposed sequential batching algorithm achieves the regret upper bound in Theorem \\ref{thm.adversarial} . \nFinally, we prove the regret lower bound in Section \\ref{subsec.adversarial} and therefore establish that the previous upper bound is close to be tight. \n\n\\subsection{Main Results}\\label{subsec:adversarial_main}\n\\begin{theorem}\\label{thm.adversarial}\n\tLet $T$, $M$ and $d$ be the learning horizon, number of batches and each context's dimension, respectively. Denote by $\\mathsf{polylog}(T)$ all the poly-logarithmic factors in $T$.\n\t\\begin{enumerate}\n\t\t\\item Under Assumption~\\ref{aspn.TKd}, there exists a sequential batch learning algorithm \\textbf{Alg}= $({\\mathcal{T}}, \\pi)$, where ${\\mathcal{T}}$ is a uniform grid defined by $t_m = \\lfloor \\frac{mT}{M}\\rfloor$ and $\\pi$ is explicitly defined in Section \\ref{subsec:adversarial_UCB},\n\t\tsuch that:\n\t\t\\begin{align*}\n\t\t\\sup_{\\theta^\\star: \\|\\theta^\\star\\|_2\\le 1} \\mathbb{E}_{\\theta^\\star}[R_T\\left(\\textbf{Alg}\\right)] \\le \\mathsf{polylog}(T)\\cdot \\left(\\sqrt{dT} + \\frac{dT}{M}\\right).\n\t\t\\end{align*}\n\t\t\\item Conversely, for $K=2$ and any sequential batch learning algorithm, we have:\n\t\t\\begin{align*}\n\t\t\\sup_{\\theta^\\star: \\|\\theta^\\star\\|_2\\le 1} \\mathbb{E}_{\\theta^\\star}[R_T\\left(\\textbf{Alg}\\right)] \\ge c\\cdot \\left(\\sqrt{dT} + \\left(\\frac{T\\sqrt{d}}{M}\\wedge \\frac{T}{\\sqrt{M}}\\right)\\right),\n\t\t\\end{align*}\n\t\twhere $c>0$ is a universal constant independent of $(T,M,d)$. \n\t\\end{enumerate}\n\\end{theorem}\n\nOur subsequent analysis easily gives high-probability regret upper bounds. However, for simplicity and to highlight more clearly the matching between the upper and lower bounds, \nwe stick with presenting results on expected regret.\nTheorem \\ref{thm.adversarial} shows a polynomial dependence of the regret on the number of batches $M$ under adversarial contexts, and the following corollary is immediate. \n\\begin{corollary}\\label{cor.adversarial}\n\tUnder adversarial contexts, $\\Theta(\\sqrt{dT})$ batches achieve the fully online regret $\\tilde{\\Theta}(\\sqrt{dT})$. \n\\end{corollary}\n\nAccording to Corollary \\ref{cor.adversarial}, $T$ batches are not necessary to achieve the fully online performance under adversarial contexts: $\\Theta(\\sqrt{Td})$ batches suffice. Since we are \\text{not} in the high-dimensional regime (per Assumption~\\ref{aspn.TKd}, $d \\le \\sqrt{T}$), the number of batches needed without any performance suffering is at most $O(T^{0.75})$, a sizable reduction from $O(T)$. Further, in the low-dimensional regime (i.e. when $d$ is a constant), only $O(\\sqrt{T})$ batches are needed to achieve fully online performance.\nNevertheless, $O(\\sqrt{dT})$ can still be a fairly large number. \nIn particular, if only a constant number of batches are available, then the regret is linear. The lower bound indicates that not much better can be done in the adversarial contexts. 
\nThis is because the power of the adversary under adversarial contexts is too strong when the learner only has a few batches: the adversary may simply pick any batch and choose all contexts anterior to this batch to be orthogonal with the contexts within this batch, such that the learner can learn nothing about the rewards in any given batch. \n\n\n\n\\subsection{A Sequential Batch UCB Algorithm}\\label{subsec:adversarial_UCB}\nThe overall idea of the algorithm is that, at the end of every batch, the learner computes an estimate $\\hat{\\theta}$ of the unknown parameter $\\theta^\\star$ via ridge regression as well as a confidence set that contains $\\theta^\\star$ with high probability. Then, whenever the learner enters a new batch, at each time $t$ he simply picks the action with the largest upper confidence bound. Finally, we choose the uniform grid, i.e., $t_m = \\lfloor \\frac{mT}{M}\\rfloor$ for each $m\\in [M]$. The algorithm is formally illustrated in Algorithm \\ref{algo.ucb}.\n\n\\begin{algorithm}[h!]\n\t\\DontPrintSemicolon \n\t\\SetAlgoLined\n\t\\BlankLine\n\t\\caption{Sequential Batch UCB (SBUCB) \\label{algo.ucb}}\n\t\\textbf{Input:} time horizon $T$; context dimension $d$; number of batches $M$; tuning parameter $\\gamma>0$.\n\t\n\t\\textbf{Grid choice:} ${\\mathcal{T}} = \\{t_1,\\cdots,t_M\\}$ with $t_m = \\lfloor \\frac{mT}{M}\\rfloor$. \n\t\n\t\\textbf{Initialization:} $A_0 = I_d\\in \\mathbb{R}^{d\\times d}$, $\\hat{\\theta}_0={\\bf 0}\\in \\mathbb{R}^d$, $t_0 = 0$.\n\t\n\t\\For{$m \\gets 1$ \\KwTo $M$}{\n\t\t\t\\For{$t\\gets t_{m-1}+1$ \\KwTo $t_m$}{\n\t\t\t\tChoose $a_t = \\arg\\max_{a\\in [K]} x_{t,a}^\\top \\hat{\\theta}_{m-1} + \\gamma\\sqrt{x_{t,a}^\\top A^{-1}_{m-1} x_{t,a}}$ (break ties arbitrarily). \\\\\n\t\t\t}\n\t\tReceive rewards in the $m$-th batch: $\\{r_{t,a_t}\\}_{t_{m-1}+1 \\le t \\le t_m}$. \n\t\t\n\t\t$A_m = A_{m-1} + \\sum_{t=t_{m-1}+1}^{t_m} x_{t,a_t}x_{t,a_t}^\\top$. \\\\\n\t\t$\\hat{\\theta}_m = A^{-1}_m\\sum_{t=t_{m-1}+1}^{t_m} r_{t,a_t}x_{t,a_t}$.\n\t}\n\\end{algorithm}\n\n\\begin{remark}\nNote that when $M=T$ (i.e. the fully online setting), Algorithm~\\ref{algo.ucb} degenerates to the standard LinUCB algorithm in~\\cite{chu2011contextual}.\n\\end{remark}\t\n\nTo analyze the sequential batch UCB algorithm, we need to first show that the constructed confidence bound is feasible. By applying \\cite[Lemma 1]{chu2011contextual} to our setting, we immediately obtain the following concentration result that the estimated $\\hat{\\theta}_{m-1}$ is close to the true $\\theta^\\star$: \n\n\\begin{lemma}\\label{lemma.concentration}\n\tFix any $\\delta > 0$.\n\tFor each $m\\in [M]$, if for a fixed sequence of selected contexts $\\{x_{t,a_t}\\}_{t\\in [t_m]}$ up to time $t_m$, the (random) rewards $\\{r_{t,a_t}\\}_{t\\in [t_m]}$ are independent, then for each $t \\in [t_{m-1}+1, t_m]$,\n\twith probability at least $1-\\frac{\\delta}{T}$, the following holds for all $a\\in [K]$:\n\t\\begin{align*}\n\t|x_{t,a}^\\top (\\hat{\\theta}_{m-1} - \\theta^\\star)| \\le \\left(1+\\sqrt{\\frac{1}{2}\\log\\left(\\frac{2KT}{\\delta}\\right)}\\right)\\sqrt{x_{t,a}^\\top A_{m-1}^{-1}x_{t,a}}.\n\t\\end{align*}\n\\end{lemma}\n\n\\begin{remark}\nLemma~\\ref{lemma.concentration} rests on an important conditional independence assumption of the rewards $\\{r_{t,a_t}\\}_{t\\in [t_m]}$. 
However, this assumption\ndoes not hold in the vanilla version of the algorithm as given in Algorithm~\\ref{algo.ucb}.\nThis is because a future selected action $a_t$ and hence the chosen context $x_{t,a_t}$ depends on \nthe previous rewards. Consequently, by conditioning on $x_{t,a_t}$, previous rewards, say $r_{\\tau_1}, r_{\\tau_2}$ ($\\tau_1, \\tau_2 < t$), can become dependent. Note the somewhat subtle issue here on\nthe dependence of the rewards: when conditioning on $x_{t,a_t}$, the corresponding reward $r_t$ becomes independent of all the past rewards $\\{r_\\tau\\}_{\\tau < t}$. Despite this, when a future $x_{t^\\prime, a_{t^\\prime}}$ is revealed ($t^\\prime > t$), these rewards (i.e. $r_t$ and all the rewards prior to $r_t$) become coupled again: what was known about $r_t$ now reveals information about the previous rewards $\\{r_\\tau\\}_{\\tau < t}$, because $r_t$ itself would not determine the selection of $x_{t^\\prime, a_{t^\\prime}}$:\nall those rewards have influence over $x_{t^\\prime, a_{t^\\prime}}$. Consequently, a complicated dependence structure is thus created when conditioning on $\\{x_{t,a_t}\\}_{t\\in [t_m]}$.\n\nThis lack of independence issue will be handled with a master algorithm variant of Algorithm~\\ref{algo.ucb} discussed in the next subsection. \nUsing the master algorithm to decouple dependencies is a standard technique in contextual bandits that was first developed in~\\cite{auer2002using}. Subsequently, it has been used for the same purpose in~\\cite{chu2011contextual, li2017provably}, among others. We will describe how to adapt the master algorithm in our current sequential batch learning setting next. We end this subsection by pointing out that, strictly speaking, our regret upper bound is achieved only by this master algorithm, rather than Algorithm~\\ref{algo.ucb}. However, we take the conventional view that the master algorithm is purely used as a theoretical construct (to resolve the dependence issue) rather than a practical algorithm that should actually be deployed in practice. In practice, Algorithm~\\ref{algo.ucb} should be used instead. For that reason, we discuss the master algorithm only in the proof.\n\\end{remark}\n\n\\subsection{Regret Analysis for Upper bound}\\label{subsec.UCB_upperbound}\n\nWe start with a simple fact from linear algebra that will be useful later.\n\n\\begin{lemma}\\cite[Lemma 11]{auer2002using}\\label{lemma.eigenvalue}\n\tLet $A$ be a symmetric matrix such that $I_d\\preceq A$, and $x\\in \\mathbb{R}^d$ be a vector satisfying $\\|x\\|_2\\le 1$. Then the eigenvalues $\\lambda_1,\\cdots,\\lambda_d$ of $A$ and the eigenvalues $\\nu_1,\\cdots,\\nu_d$ of $A+xx^\\top$ can be rearranged in a way such that $\\lambda_i\\le \\nu_i$ for all $i\\in [d]$, and\n\t\\begin{align*}\n\t\\mathsf{Tr}(A^{-1}xx^\\top) \\le 10\\sum_{j=1}^d \\frac{\\nu_j - \\lambda_j}{\\lambda_j}. \n\t\\end{align*}\n\\end{lemma}\n\nWe next establish a key technical lemma that will be used in establishing our regret upper bound. \n\\begin{lemma}\\label{lemma.trace_sum}\nDefine $X_m = \\sum_{t=t_{m-1}+1}^{t_m} x_{t,a_t}x_{t,a_t}^\\top$. We have:\n\\begin{align*}\n\\sum_{m=1}^M \\sqrt{ \\mathsf{Tr}(A_{m-1}^{-1} X_m)} \\le \\sqrt{10}\\log(T+1)\\cdot \\left(\\sqrt{Md} + d\\sqrt{\\frac{T}{M}} \\right). \n\\end{align*}\n\\end{lemma} \n\n\\begin{proof}\nWe start by noting that with the above notation, we have $A_m=A_{m-1}+X_m$ for any $m\\in [M]$ with $A_0=I_d$. 
\nApplying Lemma \\ref{lemma.eigenvalue} repeatedly, we may rearrange the eigenvalues $\\lambda_{m,1},\\cdots,\\lambda_{m,d}$ of $A_m$ in such a way that $\\lambda_{m-1,j}\\le \\lambda_{m,j}$ for all $m\\in [M], j\\in [d]$, and \n\\begin{align}\\label{eq.reduction_eigenvalue}\n\\sum_{m=1}^M \\sqrt{ \\mathsf{Tr}(A_{m-1}^{-1} X_m)} \\le \\sqrt{10}\\cdot \\sum_{m=1}^M \\sqrt{\\sum_{j=1}^d \\frac{\\lambda_{m,j} - \\lambda_{m-1,j}}{\\lambda_{m-1,j}}}. \n\\end{align}\nNote that $\\lambda_{0,j}=1$ for all $j\\in [d]$. \nNote further that $ \\lambda_{M,j}\\le 1+T, \\forall j \\in [d]$, which follows from the \nfact that $z^\\top (A_M) z = z^\\top (I_d + \\sum_{t=1}^T x_{t,a_t}x_{t,a_t}^\\top ) z = \\|z\\|_2^2 + \\sum_{t=1}^T \\|z^\\top x_{t,a_t}\\|^2_2 \\le (T+1) \\|z\\|_2^2$, since $\\|x_{t,a_t}\\|_2 \\le 1$.\nConsequently, every eigenvalue of $A_M$ must be bounded by $T+1$.\n\nUtilizing the above two pieces of information on $\\lambda_{0,j}$ and $ \\lambda_{M,j}$, we then have the following:\n\\begin{align}\n\\sum_{m=1}^M \\sqrt{\\sum_{j=1}^d \\frac{\\lambda_{m,j} - \\lambda_{m-1,j}}{\\lambda_{m,j}}} &\\le \\sqrt{M\\sum_{m=1}^M\\sum_{j=1}^d \\frac{\\lambda_{m,j} - \\lambda_{m-1,j}}{\\lambda_{m,j}} }\n =\\sqrt{M \\sum_{j=1}^d \\sum_{m=0}^{M-1} \\frac{\\lambda_{m+1,j} - \\lambda_{m,j}}{\\lambda_{m+1,j}} } \\nonumber \\\\\n&\\le \\sqrt{M\\sum_{j=1}^d \\int_{\\lambda_{0,j}}^{\\lambda_{M,j}} \\frac{dx}{x} } \n= \\sqrt{M\\sum_{j=1}^d \\log \\lambda_{M,j}}\n\\le \\sqrt{Md\\log(T+1)}, \\label{eq.inequality_1}\n\\end{align}\nwhere the first inequality follows from $(\\sum_{i=1}^n x_i)^2 \\le n \\sum_{i=1}^n x_i^2$, for any real numbers\n$x_1, \\dots, x_n$.\n\n\nWe now look at the difference between Equation~\\eqref{eq.reduction_eigenvalue} and Equation~\\eqref{eq.inequality_1} and have:\n\\begin{align*}\n\\sqrt{\\sum_{j=1}^d \\frac{\\lambda_{m,j} - \\lambda_{m-1,j}}{\\lambda_{m-1,j}}} - \\sqrt{\\sum_{j=1}^d \\frac{\\lambda_{m,j} - \\lambda_{m-1,j}}{\\lambda_{m,j}}} &\\stepa{\\le} \\frac{\\sum_{j=1}^d \\frac{(\\lambda_{m,j}-\\lambda_{m-1,j})^2}{\\lambda_{m,j}\\lambda_{m-1,j} }}{\\sqrt{\\sum_{j=1}^d \\frac{\\lambda_{m,j} - \\lambda_{m-1,j}}{\\lambda_{m-1,j}}}}\\\\\n & = \\frac{\\sum_{j=1}^d \\frac{(\\lambda_{m,j}-\\lambda_{m-1,j})^{1\/2}}{\\lambda_{m-1,j}^{1\/2} } \\cdot \n\\frac{(\\lambda_{m,j}-\\lambda_{m-1,j})^{3\/2}}{\\lambda_{m,j}\\lambda_{m-1,j}^{1\/2}}}{\\sqrt{\\sum_{j=1}^d \\frac{\\lambda_{m,j} - \\lambda_{m-1,j}}{\\lambda_{m-1,j}}}}\\\\\n &\\stepb{\\le} \n \\frac{\\sqrt{\\sum_{j=1}^d \\frac{(\\lambda_{m,j}-\\lambda_{m-1,j})}{\\lambda_{m-1,j} }} \\cdot \n \\sqrt{\\sum_{j=1}^d\t\\frac{(\\lambda_{m,j}-\\lambda_{m-1,j})^{3}}{\\lambda_{m,j}^2\\lambda_{m-1,j}}}}{\\sqrt{\\sum_{j=1}^d \\frac{\\lambda_{m,j} - \\lambda_{m-1,j}}{\\lambda_{m-1,j}}}}\\\\\n &= \\sqrt{\\sum_{j=1}^d \\frac{(\\lambda_{m,j}-\\lambda_{m-1,j})^3}{\\lambda_{m-1,j}\\lambda_{m,j}^2} }, \n\\end{align*}\nwhere step (a) follows from the basic inequality $\\sqrt{a}-\\sqrt{b}\\le (a-b)\/\\sqrt{a}$ for $a\\ge b\\ge 0$, and step (b) is due to Cauchy--Schwartz. 
\n\n\nNote further that $\\lambda_{m,j}-\\lambda_{m-1,j}\\le \\mathsf{Tr}(X_m) = \\sum_{t=t_{m-1}+1}^{t_m} \\|x_{t,a_t}\\|_2^2\\le t_m - t_{m-1} = \\frac{T}{M}$, we therefore have:\n\\begin{align}\n&\\sum_{m=1}^M \\left(\\sqrt{\\sum_{j=1}^d \\frac{\\lambda_{m,j} - \\lambda_{m-1,j}}{\\lambda_{m-1,j}}} - \\sqrt{\\sum_{j=1}^d \\frac{\\lambda_{m,j} - \\lambda_{m-1,j}}{\\lambda_{m,j}}} \\right) \\le \\sum_{m=1}^M \\sqrt{\\sum_{j=1}^d \\frac{(\\lambda_{m,j}-\\lambda_{m-1,j})^3}{\\lambda_{m-1,j}\\lambda_{m,j}^2} } \\nonumber \\\\\n& \\le \\sum_{m=1}^M \\sqrt{\\sum_{j=1}^d \\frac{(\\lambda_{m,j}-\\lambda_{m-1,j})^2}{\\lambda_{m,j}^2 }}\\cdot \\sqrt{\\frac{T}{M}}\n\\le \\sum_{m=1}^M \\sum_{j=1}^d \\frac{\\lambda_{m,j}-\\lambda_{m-1,j}}{\\lambda_{m,j} }\\cdot \\sqrt{\\frac{T}{M}} \\nonumber \\\\\n&\\le \\sqrt{\\frac{T}{M}}\\sum_{j=1}^d \\int_{\\lambda_{0,j}}^{\\lambda_{M,j}} \\frac{dx}{x} \\nonumber \\\\\n&\\le d\\sqrt{\\frac{T}{M}}\\log(T+1) \\label{eq.inequality_2},\n\\end{align}\nwhere the second inequality follows from the fact that\n$\\lambda_{m-1,j}\\ge \\lambda_{0,j}=1$ for any $m\\in [M]$.\nNow combining \\eqref{eq.reduction_eigenvalue}, \\eqref{eq.inequality_1} and \\eqref{eq.inequality_2} completes the proof. \n\\end{proof}\n\nWe are now ready to prove the regret upper bound stated in Theorem~\\ref{thm.adversarial}.\n\\begin{proof}[Proof of Statement 1 in Theorem~\\ref{thm.adversarial}]\n\\begin{enumerate}\n\\item[]\n\\item \\textbf{Regret bound under conditional independence assumption.}\n\nFor a given $\\delta > 0$, set the hyper-parameter $\\gamma$ in Algorithm~\\ref{algo.ucb} to be $1+\\sqrt{\\frac{1}{2}\\log(\\frac{2KT}{\\delta})}$ for the entire proof.\nUnder the conditional independence assumption in Lemma~\\ref{lemma.concentration},\nby a simple union bound over all $t \\in [T]$, we have with probability at least $1 - \\delta$, the following event holds:\n\\begin{align*}\n\\forall m \\in [M], \\forall t \\in [t_{m-1}+1, t_m], \\forall a\\in [K], \\quad |x_{t,a}^\\top (\\hat{\\theta}_{m-1} - \\theta^\\star)| \\le \\gamma\\sqrt{x_{t,a}^\\top A_{m-1}^{-1}x_{t,a}}.\n\\end{align*}\nOn this high probability event (with probability $1 - \\delta$), we can bound the regret as follows:\n\\begin{align}\nR_T(\\textbf{Alg}) &= \\sum_{t=1}^T \\left( \\max_{a\\in [K]}x_{t,a}^\\top \\theta^\\star - x_{t,a_t}^\\top \\theta^\\star \\right)\n= \\sum_{m=1}^M \\sum_{t=t_{m-1}+1}^{t_m} \\left( \\max_{a\\in [K]}x_{t,a}^\\top \\theta^\\star - x_{t,a_t}^\\top \\theta^\\star \\right) \\nonumber \\\\\n& \\le \\sum_{m=1}^M \\sum_{t=t_{m-1}+1}^{t_m} \\left( \\max_{a\\in [K]} \\Big(x_{t,a}^\\top \\hat{\\theta}_{m-1} + \\gamma\\sqrt{x_{t,a}^\\top A_{m-1}^{-1}x_{t,a}}\\Big) - x_{t,a_t}^\\top \\theta^\\star \\right) \\nonumber \\\\\n& = \\sum_{m=1}^M \\sum_{t=t_{m-1}+1}^{t_m} \\left( x_{t,a_t}^\\top \\hat{\\theta}_{m-1} + \\gamma\\sqrt{x_{t,a_t}^\\top A_{m-1}^{-1}x_{t,a_t}} - x_{t,a_t}^\\top \\theta^\\star \\right) \\nonumber \\\\\n& = \\sum_{m=1}^M \\sum_{t=t_{m-1}+1}^{t_m} \\left( x_{t,a_t}^\\top (\\hat{\\theta}_{m-1} - \\theta^\\star) + \\gamma\\sqrt{x_{t,a_t}^\\top A_{m-1}^{-1}x_{t,a_t}} \\right) \\nonumber\\\\\n& \\le \n\\sum_{m=1}^M \\sum_{t=t_{m-1}+1}^{t_m} 2\\gamma\\sqrt{x_{t,a_t}^\\top A_{m-1}^{-1}x_{t,a_t}} = 2\\gamma\\cdot \\sum_{m=1}^M \\sum_{t=t_{m-1}+1}^{t_m} 1\\cdot \\sqrt{x_{t,a_t}^\\top A_{m-1}^{-1} x_{t,a_t}} \\nonumber \\\\ \\label{eq.regret_ucb}\n&\\le 2\\gamma\\sqrt{\\frac{T}{M}}\\cdot \\sum_{m=1}^M \\sqrt{\\sum_{t=t_{m-1} +1}^{t_m} x_{t,a_t}^\\top A_{m-1}^{-1} x_{t,a_t}} = \n 
2\\gamma\\sqrt{\\frac{T}{M}}\\cdot \\sum_{m=1}^M \\sqrt{ \\mathsf{Tr}(A_{m-1}^{-1} X_m)}, \n\\end{align}\nwhere the inequality in~\\eqref{eq.regret_ucb} follows from Cauchy--Schwartz and the choice of a uniform grid (without loss of generality we assume that $T\/M$ is an integer).\n\nNext, setting $\\delta = \\frac{1}{T}$ (and hence resulting in $\\gamma = 1+\\sqrt{\\frac{1}{2}\\log\\left(2KT^2\\right)}$) and applying Lemma~\\ref{lemma.trace_sum} to the upper bound in~\\eqref{eq.regret_ucb}, we immediately obtain that again on this high-probability event:\n\\begin{align}\nR_T(\\textbf{Alg}) &\\le 2\\sqrt{10}\\left(\\sqrt{\\frac{1}{2}\\log\\left(2KT^2\\right)}+1\\right)\\log(T+1)\\sqrt{\\frac{T}{M}}\\left(\\sqrt{Md} + d\\sqrt{\\frac{T}{M}} \\right) \\nonumber\\\\ \n&\\label{eq.x}= \\mathsf{polylog}(T)\\cdot (\\sqrt{dT} + \\frac{dT}{M}).\n\\end{align}\nConsequently, taking the expectation of $R_T(\\textbf{Alg})$ yields the same bound as in Equation~\\eqref{eq.x}, since with probability at most $\\frac{1}{T}$, the total regret over the entire horizon is at most $T$ (each time accumulates at most a regret of $1$ by the normalization assumption).\nSince the regret bound is independent of $\\theta^\\star$, it \nimmediately follows that \n$\\sup_{\\theta^\\star: \\|\\theta^\\star\\|_2\\le 1} \\mathbb{E}_{\\theta^\\star}[R_T(\\pi)]\n\\le \\mathsf{polylog}(T)\\cdot (\\sqrt{dT} + \\frac{dT}{M}).$ \n\n\n\\item \\textbf{Building a Master algorithm that satisfies conditional independence}\n\nTo complete the proof, we need to validate the conditional independence assumption in Lemma~\\ref{lemma.concentration}. Since the length of the confidence intervals does not depend on the random rewards, this task can be done by using a master algorithm SupSBUCB (Algorithm~\\ref{algo:SupSBUCB}), which runs in $O(\\log T)$ stages at each time step $t$ similar to \\cite{auer2002using}, which is subsequently adopted in the linear contextual bandits setting~\\cite{chu2011contextual} and then in the generalized linear contextual bandits setting~\\cite{li2017provably} for the same purpose of meeting the conditional independence assumption. 
\nNote that SupSBUCB is responsible for selecting the actions $a_t$ and it does so by calling BaseSBUCB (Algorithm~\\ref{algo:BaseSBUCB}), which merely performs regression.\nThis master-base algorithm pair has by now become a standard trick to get around the conditional dependency in the vanilla UCB algorithm for a variety of contextual bandits problems (by sacrificing at most $O(\\log T)$ regret).\n\n\\begin{algorithm}[!h]\n\t\\DontPrintSemicolon \n\t\\SetAlgoLined\n\t\\BlankLine\n\t\\caption{SupSBUCB \\label{algo:SupSBUCB}}\n \\textbf{Inputs}: $T, M \\in \\mathbb{Z}_{++}$, Grid ${\\mathcal{T}}=\\{t_1,t_2,\\cdots,t_M\\}$.\n \n $S\\leftarrow \\log(T), \\Psi_1^s \\leftarrow \\emptyset$ for all $s\\in[S]$\n \n\t\\For{$m=1,2,\\cdots,M$}{\n\t\tInitialize $\\Psi_{m+1}^{s^{\\prime}}\\leftarrow \\Psi_{m}^{s^{\\prime}}$ for all $s^{\\prime} \\in [S]$.\n\t\t\n\t \\For{$t= t_{m-1}+1, \\dots, t_m$}{\n\t $s \\leftarrow 1$ and $\\hat{A}_1 \\leftarrow [K]$\n\t\t\n\t\t\\textbf{Repeat:}\n\t\t\n\n\t\t\n\t Use BaseSBUCB with $\\Psi_{m}^s$ to compute $\\theta_m^s$ and $A_m^s$\n\t \n\t For all $a \\in \\hat{A}_s$, compute $w^s_{t,a} =\\gamma \\sqrt{x_{t,a}^T (A_{m}^s)^{-1}x_{t,a}}$, $\\hat{r}_{t,a}^s = \\langle \\theta_m^s, x_{t,a} \\rangle$ \n\t\t\n\t\t\\textbf{(a)} If $w^s_{t,a}\\leq 1\/\\sqrt{T}$ for all $a\\in \\hat{A}_s$,\n\t\tchoose $a_t = \\arg\\max_{a\\in \\hat{A}_s}\\left( \\hat{r}_{t,a}^s+w^s_{t,a}\\right)$. \n\t\t\n\t\t\n\t\t\\textbf{(b)} Else if $w^s_{t,a}\\leq 2^{-s}$ for all $a \\in \\hat{A}_s$,\n\t\t$\\hat{A}_{s+1}\\leftarrow \\{a\\in \\hat{A}_s \\,\\,\\vert\\,\\,\\hat{r}_{t,a}^s+w^s_{t,a}\\geq \\max_{a^{\\prime}\\in \\hat{A}_s}(\\hat{r}_{t,a^{\\prime}}^s+w^s_{t,a^{\\prime}})-2^{1-s}\\}$, \n\t\t\n\t\t\\quad $s \\leftarrow s+1$.\n\t\t\n\t\t\\textbf{(c)} Else choose any $a_t \\in \\hat{A}_{s}$ such that $w_{t,a_t}^s >2^{-s}$, Update\n\t $\\Phi_{m+1}^{s} \\leftarrow\n\t\t\\Phi_{m+1}^{s} \\cup \\{t\\}.$\n\t\n\t\t\n\t\t\\textbf{Until}{\\quad an action $a_t$ is found.}\n\t}\n}\n\\end{algorithm}\n\n\\begin{algorithm}[!h]\n\t\\DontPrintSemicolon \n\t\\SetAlgoLined\n\t\\BlankLine\n\t\\caption{BaseSBUCB \\label{algo:BaseSBUCB}}\n\t\n\t\t\\textbf{Input}: $\\Psi_m$.\n\t\n\t\t$A_m = I_d+\\sum_{\\tau \\in \\Psi_m}x_{t,a_{\\tau}}x^{\\prime}_{t,a_{\\tau}}$\n\t\t\n\t\t$c_m = \\sum_{\\tau \\in \\Psi_m} r_{\\tau,a_{\\tau}}x_{\\tau,a_{\\tau}}$\n\t\n\t\t$\\theta_m = A_m^{-1} c_m$\n\t\t\n\t\t\\textbf{Return} $(\\theta_m, A_m)$.\n\\end{algorithm}\n\n\n\nMore specifically, the master algorithm developed by~\\cite{auer2002finite} has the following structure: each time step is divided into at most $\\log T$ stages. At the beginning of each stage $s$, the learner computes the confidence interval using only the previous contexts designated as belonging to that stage and selects any action whose confidence interval has a large length (exceeding some threshold). If all actions has a small confidence interval, then we end this stage, observe the rewards of the given contexts and move on to the next stage with a smaller threshold on the length of the confidence interval. In other words, conditional independence is obtained by successive manual masking and revealing of certain information. One can intuitively think of each stage $s$ as a color, and each time step $t$ is colored using one of the $\\log T$ colors (if colored at all). When computing confidence intervals and performing regression, only previous contexts that have the same color are used, instead of all previous contexts. 
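\nFor concreteness, the per-time-step control flow of such a staged scheme can be sketched as follows. This is only an illustration of the elimination logic just described (the function signature, the representation of each stage by a precomputed triple, and all names are ours), not a faithful transcription of Algorithm~\\ref{algo:SupSBUCB}.\n\\begin{verbatim}\nimport numpy as np\n\ndef staged_select(contexts, stages, T):\n    # contexts: array of shape (K, d) for the current time step.\n    # stages: list indexed by s = 1, ..., S; each entry is a triple\n    #   (theta_s, A_inv_s, gamma) computed only from the time steps\n    #   previously colored with stage s (the per-stage regression).\n    # Returns (action, forced_stage); forced_stage is None when no\n    # exploration is forced, otherwise the stage that absorbs this step.\n    active = list(range(contexts.shape[0]))\n    for s, (theta_s, A_inv_s, gamma) in enumerate(stages, start=1):\n        means = contexts[active] @ theta_s\n        widths = gamma * np.sqrt(\n            np.einsum('ij,jk,ik->i', contexts[active], A_inv_s, contexts[active]))\n        if np.all(widths <= 1.0 \/ np.sqrt(T)):            # case (a)\n            return active[int(np.argmax(means + widths))], None\n        if np.all(widths <= 2.0 ** (-s)):                 # case (b): eliminate\n            best = np.max(means + widths)\n            active = [a for a, v in zip(active, means + widths)\n                      if v >= best - 2.0 ** (1 - s)]\n            continue\n        # case (c): force exploration of a poorly estimated action\n        return active[int(np.argmax(widths))], s\n    return active[0], None  # safety fallback; with S of order log T, case (a) fires earlier\n\\end{verbatim}\nOnly the thresholds $1\/\\sqrt{T}$ and $2^{-s}$ and the restriction to same-stage data matter for the analysis; the rest is bookkeeping.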
\n\nAdapting this algorithm to the sequential batch setting is not difficult: we merely keep track\nof the sets $\\Psi_{m}^s$ ($s \\in [\\log T]$) per each batch $m$ (rather than per each time step $t$ as in the fully online learning case). Note that we are still coloring each time step $t$, the difference here lies in the frequency at which we are running BaseSBUCB to compute the confidence bounds and rewards. \nDue to great similarity we omit the details here and refer to \\cite[Section 4.3]{auer2002using}. In particular, by establishing similar results to \\cite[Lemma 15, Lemma 16]{auer2002using}, it is straightforward to show that the regret of the master algorithm SupSBUCB here is enlarged at most by a multiplicative factor of $O(\\log T)$, which leads to the upper bound in Theorem \\ref{thm.adversarial}. \n\\end{enumerate}\n\\end{proof}\n\n\n\n\n\n\\subsection{Regret Analysis for Lower bound}\\label{subsec.adversarial}\nIn this section, we establish the regret lower bound and show that for any fixed grid ${\\mathcal{T}}=\\{t_1,\\cdots,t_M\\}$ and any learner's policy on this grid, there exists an adversary who can make the learner's regret at least $\\Omega(\\sqrt{Td}+(T\\sqrt{d}\/M \\wedge T\/\\sqrt{M}))$ even if $K=2$. Since the lower bound $\\Omega(\\sqrt{Td})$ has been proved in \\cite{chu2011contextual} even in the fully online case, it remains to show the lower bound $\\Omega(T\\sqrt{d}\/M \\wedge T\/\\sqrt{M})$. Note that in the fully online case, the lower bound $\\Omega(\\sqrt{Td})$ given in \\cite{chu2011contextual} is obtained under the same assumption $d^2 \\le T$ as in Assumption~\\ref{aspn.TKd}.\n\n\\begin{proof}[Proof of Statement 2 in Theorem~\\ref{thm.adversarial}]\nFirst we consider the case where $M\\ge d\/2$, and without loss of generality we may assume that $d'=d\/2$ is an integer (if $d$ is odd, then we can take $d^\\prime = \\frac{d-1}{2}$ and modify the subsequent procedure only slightly). By an averaging argument, there must be $d'$ batches $\\{i_1,i_2,\\cdots,i_{d'}\\}\\subset [M]$ such that\n\\begin{align}\\label{eq.large_batch}\n\\sum_{k=1}^{d'} \\left(t_{i_k} - t_{i_k-1} \\right) \\ge \\frac{d'T}{M}. \n\\end{align}\nNow $\\theta^*$ is chosen as follows: Flip $d'$ independent fair coins to obtain $U_1,\\cdots,U_{d'}\\in \\{1,2\\}$, and set $\\theta^\\star = (\\theta_1,\\cdots,\\theta_d)$ with\n$\\theta_{2k-1} = \\frac{1}{\\sqrt{d'}}\\mathbbm{1}(U_k = 1), \\theta_{2k} = \\frac{1}{\\sqrt{d'}}\\mathbbm{1}(U_k = 2), \\forall k\\in [d']$.\n(If $d$ is odd, then the last component $\\theta_d$ is set to $0$.)\n\n\nNote that $\\theta^\\star$ is a random variable and clearly $\\|\\theta^\\star\\|_2=1$ (surely). Next the contexts are generated in the following manner: for $t\\in (t_{m-1},t_{m}]$, if $m=i_k$ for some $k\\in [d']$, set $x_{t,1}=e_{2k-1}, x_{t,2}=e_{2k}$, where $e_j$ is the $j$-th basis vector in $\\mathbb{R}^d$; otherwise, set $x_{t,1}=x_{t,2}={\\bf 0}$. \n\nNow we analyze the regret of the learner under this environment. Clearly, for any $k\\in [d']$, the learner has no information about whether $(\\theta_{2k-1}, \\theta_{2k}) = (1\/\\sqrt{d'},0)$ or $(0,1\/\\sqrt{d'})$ before entering the $i_k$-th batch, while an incorrect action incurs an instantenous regret $1\/\\sqrt{d'}$. 
Consequently, averaged over all possible coin flips $(U_1,\\cdots,U_{d'})\\in \\{1,2\\}^{d'}$, the expected regret is at least:\n\\begin{align*}\n\\frac{1}{2}\\sum_{k=1}^{d'} \\frac{t_{i_k} - t_{i_{k-1}}}{\\sqrt{d'}} \\ge \\frac{1}{2\\sqrt{2}}\\cdot \\frac{T\\sqrt{d}}{M}\n\\end{align*}\ndue to \\eqref{eq.large_batch}, establishing the lower bound $\\Omega\\left(\\frac{T\\sqrt{d}}{M}\\right)$ when $M\\ge d\/2$.\n\nNext, in the case where $M < d\/2$, choose $d^\\prime = M$.\nHere, we obviously have $\\sum_{k=1}^{d'} \\left(t_{i_k} - t_{i_k-1} \\right) = T.$ \nIn this case, again flip $d'$ independent fair coins to obtain $U_1,\\cdots,U_{d'}\\in \\{1,2\\}$, and set $\\theta^\\star = (\\theta_1,\\cdots,\\theta_d)$ with\n$\\theta_{2k-1} = \\frac{1}{\\sqrt{d'}}\\mathbbm{1}(U_k = 1), \\theta_{2k} = \\frac{1}{\\sqrt{d'}}\\mathbbm{1}(U_k = 2), \\forall k\\in [d']$.\nSet all remaining components of $\\theta$ to $0$.\nThe contexts are generated as follows: for $t\\in (t_{m-1},t_{m}], 1\\le m \\le M$, set $x_{t,1}=e_{2m-1}, x_{t,2}=e_{2m}$.\nIn this case, we again average over all possible coin flips $(U_1,\\cdots,U_{d'})\\in \\{1,2\\}^{d'}$, and the expected regret is at least:\n\\begin{align*}\n\\frac{1}{2}\\sum_{m=1}^{M} \\frac{t_{m} - t_{m-1}}{\\sqrt{d'}} = \\frac{1}{2}\\cdot \\frac{T}{\\sqrt{M}}\n\\end{align*}\n\nCombining the above two cases yields a lower bound of $ \\Omega\\left(\\frac{T\\sqrt{d}}{M}\\wedge \\frac{T}{\\sqrt{M}}\\right)$.\n\\end{proof}\n\n\\section{Definitions and Auxiliary Results}\\label{appendix.auxiliary}\n\n\\begin{definition}\nLet $(\\mathcal{X}, \\mathcal{F})$ be a measurable space and $P$, $Q$\nbe two probability measures on $(\\mathcal{X}, \\mathcal{F})$. \n\\begin{enumerate}\n\t\\item The total-variation distance between $P$ and $Q$ is defined as:\n\t$$ \\mathsf{TV}(P,Q) = \\sup_{A \\in \\mathcal{A}} |P(A) - Q(A)|.$$\n\t\\item The KL-divergence between $P$ and $Q$ is:\n\t\\begin{equation*}\n\tD_{\\text{\\rm KL}}(P\\|Q) = \\begin{cases}\n\t\\int \\log \\frac{dP}{dQ} dP \\text{\\quad if $P << Q$} \\\\\n\t+\\infty \\text{\\quad otherwise}\n\t\\end{cases}\n\t\\end{equation*}\n\\end{enumerate}\n\n\n\\end{definition}\n\\begin{lemma}\\cite[Lemma 2.6]{Tsybakov2008}\\label{lemma.TV_KL}\n\tLet $P$ and $Q$ be any two probability measures on the same measurable space. Then\n\t\\begin{align*}\n\t1- \\mathsf{TV}(P,Q) \\ge \\frac{1}{2}\\exp\\left(-D_{\\text{\\rm KL}}(P\\|Q)\\right). \n\t\\end{align*}\n\\end{lemma}\n\n\\begin{lemma}\\cite[Theorem 6.1]{wainwright2019high}\n\t\\label{lemma.wishart}\nLet $x_1,x_2,\\cdots,x_n\\sim {\\mathcal{N}}(0,I_d)$ be i.i.d. random vectors. Then for any $\\delta>0$, \n\\begin{align*}\n\\mathbb{P}\\left(\\sigma_{\\max}\\left(\\frac{1}{n}\\sum_{i=1}^n x_ix_i^\\top\\right) \\ge 1+\\sqrt{\\frac{d}{n}}+\\delta \\right) \\le \\exp\\left(-\\frac{n\\delta^2}{2}\\right),\n\\end{align*}\nwhere $\\sigma_{\\max}(A)$ denotes the largest singular value of $A$. \n\\end{lemma}\n\n\n\\section{Proof of Main Lemmas}\n\\subsection{Proof of Lemma \\ref{lemma.equator}}\nLet $y_{t,a} = \\Sigma^{-1\/2}x_{t,a}$, then each $y_{t,a}$ is marginally distributed as ${\\mathcal{N}}(0,I_d)$. 
Define\n\\begin{align*}\nB \\triangleq \\frac{1}{t_m-t_{m-1}}\\sum_{t=t_{m-1}+1}^{t_m} y_{t,a_t}y_{t,a_t}^\\top.\n\\end{align*}\n\nRecall that $a_t = \\arg\\max_{a\\in [K]} x_{t,a}^\\top \\hat{\\theta} = \\arg\\max_{a\\in [K]} y_{t,a}^\\top (\\Sigma^{1\/2}\\hat{\\theta})$ for any $t\\in [t_{m-1}+1,t_m]$, and $\\hat{\\theta}$ is an estimate of $\\theta^\\star$ that is independent of all contexts in the current batch $ [t_{m-1}+1,t_m]$. By rotational invariance of ${\\mathcal{N}}(0,I_d)$, we can without loss of generality assume $\\Sigma^{1\/2}\\hat{\\theta}=ce_d$ for some $c>0$. Consequently, each $y_{t,a_t}$ follows the distribution\n$\\mu_t = {\\mathcal{N}}(0,1) \\otimes \\cdots \\otimes {\\mathcal{N}}(0,1) \\otimes \\nu_t,$\nwhere $\\nu_t$ is the probability distribution of $\\max_{a\\in [K]} Z_{t,a}$, where each $Z_{t,a}$ is a standard Gaussian and the $Z_{t,a}$'s can be correlated across different $a$'s. \n\nNow for $y=(y_1,y_2,\\cdots,y_d)\\sim \\mu_t$ and any unit vector $u\\in \\mathbb{R}^d$, we show that there exist numerical constants $c_1,c_2>0$ independent of $(d,K)$ such that\n\\begin{align}\\label{eq.large_prob_fixed_u}\n\\mathbb{P}\\left(|y^\\top u| \\ge c_1\\right) \\ge c_2.\n\\end{align}\nTo establish \\eqref{eq.large_prob_fixed_u}, we distinguish into two cases. If $|u_d|<\\frac{1}{2}$, using the fact that $\\mathbb{P}(|{\\mathcal{N}}(0,1)+t|\\ge c)$ is minimized at $t=0$ for any fixed $c>0$, we conclude that\n\\begin{align*}\n\\mathbb{P}\\left(|y^\\top u| \\ge c_1 \\right) \\ge \\mathbb{P}\\left( \\left| \\sum_{i=1}^{d-1}y_iu_i \\right| \\ge c_1 \\right) = \\mathbb{P}(|{\\mathcal{N}}(0,1-u_d^2)|\\ge c_1) \\ge \\mathbb{P}\\left(\\left|{\\mathcal{N}}(0,\\frac{3}{4})\\right|\\ge c_1\\right)\n\\end{align*}\nis lower bounded by some positive constant. If $|u_d|\\ge \\frac{1}{2}$, we have\n\\begin{align*}\n\\mathbb{P}\\left(|y^\\top u| \\ge c_1 \\right)\\ge \\frac{1}{2}\\mathbb{P}\\left(|u_dy_d| \\ge c_1 \\right) \\ge \\frac{1}{2}\\mathbb{P}\\left(|y_d| \\ge 2c_1 \\right) \\ge \\frac{1}{2}\\mathbb{P}\\left(Z_{t,1}\\ge 2c_1\\right) = \\frac{1}{2}\\mathbb{P}({\\mathcal{N}}(0,1)\\ge 2c_1),\n\\end{align*}\nwhich is again lower bounded by a numerical constant. Hence the proof of \\eqref{eq.large_prob_fixed_u} is completed. \n\nBased on \\eqref{eq.large_prob_fixed_u} and the deterministic inequality\n\\begin{align*}\nu^\\top\\cdot \\left(\\frac{1}{t_m-t_{m-1}}\\sum_{t=t_{m-1}+1}^{t_m} y_{t,a_t}y_{t,a_t}^\\top\\right)\\cdot u \\ge \\frac{c_1^2}{t_m-t_{m-1}}\\sum_{t=t_{m-1}+1}^{t_m} \\mathbbm{1}\\left(|y_{t,a_t}^\\top u| \\ge c_1 \\right),\n\\end{align*}\nthe Chernoff inequality yields that for any unit vector $u\\in \\mathbb{R}^d$, we have\n\\begin{align}\\label{eq.concentration_fixed_u}\n\\mathbb{P}\\left( u^\\top B u \\ge \\frac{c_1^2c_2}{2}\\right) \\ge 1 - e^{-c_3(t_m-t_{m-1})},\n\\end{align}\nwhere $c_3>0$ is some numerical constant. \n\nNext we prove an upper bound of $\\lambda_{\\max}(B)$, i.e., the largest eigenvalue of $B$. Since $(a+b)(a+b)^\\top \\preceq 2(aa^\\top + bb^\\top)$ for any vectors $a,b\\in\\mathbb{R}^d$, for $y_t\\sim \\mu_t$ we have\n\\begin{align*}\ny_ty_t^\\top \\preceq 2(v_tv_t^\\top + w_tw_t^\\top), \n\\end{align*}\nwhere $v_t=(v_{t,1},\\cdots,v_{t,d-1},0)$ with $v_{t,i}\\sim{\\mathcal{N}}(0,1)$, and $w_t=(0,\\cdots,0,w_{t,d})$ with $w_{t,d}\\sim \\nu_t$. By concentration of Wishart matrices (cf. 
Lemma \\ref{lemma.wishart}), with probability at least $1-e^{-\\Omega(t_m-t_{m-1})}$, \n\\begin{align*}\n\\lambda_{\\max}\\left(\\frac{1}{t_m-t_{m-1}}\\sum_{t=t_{m-1}+1}^{t_m} v_tv_t^\\top \\right) \\le c_4\n\\end{align*}\nholds for some numerical constant $c_4>0$. For the second term, since $w_{t,d}\\sim \\nu_t$ is the maximum of $K$ arbitrary ${\\mathcal{N}}(0,1)$ random variables, the Gaussian tail and the union bound imply that $|w_{t,d}|\\le \\sqrt{c_5\\log(KT)}$ with probability at least $1-O(T^{-5})$. Hence, with probability at least $1 - O(T^{-4})$, we have\n\\begin{align*}\n\\lambda_{\\max}\\left(\\frac{1}{t_m-t_{m-1}}\\sum_{t=t_{m-1}+1}^{t_m} w_tw_t^\\top \\right) = \\frac{1}{t_m-t_{m-1}}\\sum_{t=t_{m-1}+1}^{t_m} w_{t,d}^2 \\le c_5\\log (KT). \n\\end{align*}\nCombining all the previous results, and using $\\lambda_{\\max}(A+B)\\le \\lambda_{\\max}(A)+\\lambda_{\\max}(B)$ for symmetric matrices $A,B$, we conclude that with probability at least $1-e^{-\\Omega(t_m-t_{m-1})} - O(T^{-4})$, we have\n\\begin{align}\\label{eq.lambda_max}\n\\lambda_{\\max}(B) \\le c_6\\log (KT)\n\\end{align}\nholds for some numerical constant $c_6>0$. \n\nFinally, we are ready to prove a lower bound on $\\lambda_{\\min}(B)$ via an $\\varepsilon$-net argument. Let ${\\mathcal{N}}_d(\\varepsilon)$ be an $\\varepsilon$-net of the unit ball in $\\mathbb{R}^d$ (both in $\\ell_2$ norm) with cardinality at most $(1+\\frac{2}{\\varepsilon})^d$. Standard $\\varepsilon$-net techniques (cf. \\cite[Section 2.3.1]{tao2012topics}) give\n\\begin{align*}\n\\min_{u: \\|u\\|_2=1} u^\\top Bu \\ge \\min_{u\\in {\\mathcal{N}}_d(\\varepsilon)} u^\\top Bu - 2\\varepsilon\\lambda_{\\max}(B).\n\\end{align*}\nHence, choosing $\\varepsilon = \\frac{c_1^2c_2}{8c_6\\log (KT)}$ and combining \\eqref{eq.concentration_fixed_u}, \\eqref{eq.lambda_max} and the union bound over ${\\mathcal{N}}_d(\\varepsilon)$ gives\n\\begin{align*}\n\\mathbb{P}\\left(\\lambda_{\\min}(B) \\ge \\frac{c_1^2c_2}{4}\\right) \\ge 1 - e^{O(d\\log\\log (KT)) - \\Omega(t_m-t_{m-1})} - O(T^{-4}). \n\\end{align*}\nBy noting that $t_m - t_{m-1} = \\Omega(d\\sqrt{T})$ due to the choice of the grid in \\eqref{eq.minimax_grid}, the parameter $a$ in \\eqref{eq.a}, and the assumption $M=O(\\log\\log T)$, we conclude that $\\lambda_{\\min}(B) \\ge c_7$ for some numerical constant $c_7>0$ with probability at least $1 - O(T^{-4})$. The proof is completed by noting that\n$$\n\\frac{1}{t_m - t_{m-1}}\\sum_{t = t_{m-1}+1}^{t_m} x_{t,a_t}x_{t,a_t}^\\top = \\Sigma^{1\/2} B \\Sigma^{1\/2} \\succeq \\Sigma^{1\/2} (c_7I_d) \\Sigma^{1\/2}= c_7\\Sigma\n$$\nwhenever $\\lambda_{\\min}(B) \\ge c_7$ and the assumption $\\lambda_{\\min}(\\Sigma)\\ge \\kappa\/d$. \n\\qed\n\n\\subsection{Proof of Lemma \\ref{lemma.Q}}\nLet $v_1,\\cdots,v_d$ be an orthonormal basis of $\\mathbb{R}^d$ with $v_1=u_t$. By rotational invariance of the uniform distribution on spheres, we have $(v_1^\\top \\theta, v_2^\\top \\theta, \\cdots, v_d^\\top \\theta)\\sim \\mathsf{Unif}(\\Delta\\mathbb{S}^{d-1})$ under $Q_0$. Now recall that\n\\begin{align*}\n\\frac{dQ_1}{dQ_0}(\\theta) = \\frac{r_t\\jiao{v_1,\\theta}_+}{Z_0}, \\qquad \\frac{dQ_2}{dQ_0}(\\theta) = \\frac{r_t\\jiao{v_1,\\theta}_-}{Z_0}, \n\\end{align*}\nwe conclude that if $\\theta' = \\theta - 2(v_1^\\top \\theta)v_1$, we have\n\\begin{align*}\n\\frac{dQ_1}{dQ_0}(\\theta) = \\frac{dQ_2}{dQ_0}(\\theta'). \n\\end{align*}\nAs a result, it is equivalent to have $\\theta\\sim Q_1$ or $\\theta' = \\theta - 2(v_1^\\top \\theta)v_1\\sim Q_2$. 
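\nBefore carrying out the explicit computations, we remark that both \\eqref{eq.Z_0} and \\eqref{eq.second_moment} are easy to sanity-check numerically. A minimal Monte Carlo sketch (ours; the sample size and the values of $d$, $\\Delta$ and $r_t$ are arbitrary illustrative choices) is:\n\\begin{verbatim}\nimport numpy as np\nfrom math import comb, pi\n\ndef check_sphere_identities(d=5, Delta=0.3, r_t=1.0, n=2_000_000, seed=0):\n    # Draw n points uniformly from the sphere of radius Delta in R^d and\n    # compare Monte Carlo estimates with the closed-form expressions.\n    rng = np.random.default_rng(seed)\n    theta = rng.normal(size=(n, d))\n    theta *= Delta \/ np.linalg.norm(theta, axis=1, keepdims=True)\n    t1 = theta[:, 0]                 # v_1 taken as the first basis vector\n    z0_mc = 0.5 * r_t * np.mean(np.abs(t1))\n    if d % 2 == 0:\n        z0_cf = r_t * Delta * 2 ** d \/ (pi * d * comb(d, d \/\/ 2))\n    else:\n        z0_cf = r_t * Delta * comb(d - 1, (d - 1) \/\/ 2) \/ 2 ** d\n    # second moment of v_1^T theta under the tilted measure Q_1\n    m2_mc = np.mean(t1 ** 2 * r_t * np.maximum(t1, 0.0)) \/ z0_cf\n    m2_cf = 2 * Delta ** 2 \/ (d + 1)\n    return (z0_mc, z0_cf), (m2_mc, m2_cf)\n\\end{verbatim}\nBoth pairs should agree up to the Monte Carlo error; the check plays no role in the derivations that follow.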
\n\nFor the identity \\eqref{eq.Z_0}, recall that the density of $\\theta=(\\theta_1,\\cdots,\\theta_d)\\sim \\mathsf{Unif}(\\Delta\\mathbb{S}^{d-1})$ is \n\\begin{align}\\label{eq.uniform_density}\nf(\\theta) = f(\\theta_2,\\cdots,\\theta_d) = \\left(\\frac{d\\pi^{d\/2}\\Delta^{d-1}}{\\Gamma(\\frac{d}{2}+1)}\\right)^{-1}\\frac{2\\Delta}{\\sqrt{\\Delta^2 - \\theta_2^2 - \\cdots - \\theta_d^2}}\\cdot \\mathbbm{1}\\left(\\sum_{i=2}^d \\theta_i^2 \\le \\Delta^2\\right), \n\\end{align}\nwhere $\\Gamma(t)=\\int_0^\\infty x^{t-1}e^{-x}dx$ is the Gamma function. Hence, by rotational invariance, we have\n\\begin{align*}\nZ_0 &= \\frac{r_t}{2}\\mathbb{E}_{Q_0}[|\\theta_1|] = r_t\\Delta\\cdot \\int_{\\sum_{i=2}^d \\theta_i^2 \\le \\Delta^2} \\left(\\frac{d\\pi^{d\/2}\\Delta^{d-1}}{\\Gamma(\\frac{d}{2}+1)}\\right)^{-1} d\\theta_2\\cdots d\\theta_d \\\\\n&= r_t\\Delta\\left(\\frac{d\\pi^{d\/2}\\Delta^{d-1}}{\\Gamma(\\frac{d}{2}+1)}\\right)^{-1}\\cdot \\frac{\\Delta^{d-1} \\pi^{\\frac{d-1}{2}}}{\\Gamma(\\frac{d-1}{2}+1)} = r_t\\Delta\\cdot\\begin{cases}\n\\frac{2^d}{\\pi d}\\binom{d}{d\/2}^{-1}, & \\text{if }d\\text{ is even} \\\\\n\\frac{1}{2^d}\\binom{d-1}{(d-1)\/2}, & \\text{if }d\\text{ is odd}\n\\end{cases}. \n\\end{align*}\n\nUsing Stirling's approximation $\\sqrt{2\\pi n}(\\frac{n}{e})^n\\le n!\\le e\\sqrt{n}(\\frac{n}{e})^{n}$ for any $n\\ge 1$, we have\n\\begin{align}\\label{eq.combinatorics}\n\\frac{2}{e^2}\\sqrt{\\frac{\\pi}{n}}\\le \\frac{1}{2^{2n}}\\binom{2n}{n} \\le \\frac{e}{\\pi\\sqrt{2n}}\n\\end{align}\nfor all $n\\ge 1$, and the rest of \\eqref{eq.Z_0} follows from \\eqref{eq.combinatorics}. \n\nAs for the second moment in \\eqref{eq.second_moment}, we use the spherical coordinates \n\\begin{align*}\n\\left\\{\n\\begin{array}{lr}\n\\theta_2 = r\\cos\\varphi_1, \\\\\n\\theta_3 = r\\sin\\varphi_1\\cos\\varphi_2, \\\\\n\\vdots \\\\\n\\theta_{d-1} = r\\sin\\varphi_1\\sin\\varphi_2\\cdots\\sin\\varphi_{d-3}\\cos\\varphi_{d-2}, \\\\\n\\theta_d = r\\sin\\varphi_1\\sin\\varphi_2\\cdots\\sin\\varphi_{d-3}\\sin\\varphi_{d-2}.\n\\end{array}\n\\right.\n\\end{align*}\nto obtain\n\\begin{align*}\n\\mathbb{E}_{Q_1}[(v_1^\\top \\theta)^2] &= \\frac{r_t}{2Z_0}\\cdot \\mathbb{E}_{Q_0}[|\\theta_1|^3] \\\\\n&= \\frac{r_t\\Delta}{Z_0}\\cdot \\int_{\\sum_{i=2}^d \\theta_i^2 \\le \\Delta^2} \\left(\\frac{d\\pi^{d\/2}\\Delta^{d-1}}{\\Gamma(\\frac{d}{2}+1)}\\right)^{-1} (\\Delta^2-\\theta_2^2-\\cdots-\\theta_d^2) d\\theta_2\\cdots d\\theta_d \\\\\n&= \\frac{r_t\\Delta}{Z_0}\\cdot \\int_0^\\Delta \\int_0^\\pi \\cdots \\int_0^\\pi \\int_0^{2\\pi} \\left(\\frac{d\\pi^{d\/2}\\Delta^{d-1}}{\\Gamma(\\frac{d}{2}+1)}\\right)^{-1} (\\Delta^2-r^2) \\\\\n&\\qquad \\cdot r^{d-2}\\sin^{d-3}(\\varphi_1) \\sin^{d-4}(\\varphi_2)\\cdots \\sin(\\varphi_{d-3}) drd\\varphi_1\\cdots d\\varphi_{d-2} \\\\\n&= \\frac{r_t\\Delta}{Z_0}\\left(\\frac{d\\pi^{d\/2}\\Delta^{d-1}}{\\Gamma(\\frac{d}{2}+1)}\\right)^{-1}\\cdot \\frac{2\\Delta^{d+1}}{d^2-1}\\cdot \\frac{\\Gamma(\\frac{d-2}{2})\\Gamma(\\frac{1}{2})}{\\Gamma(\\frac{d-1}{2})}\\cdot \\frac{\\Gamma(\\frac{d-3}{2})\\Gamma(\\frac{1}{2})}{\\Gamma(\\frac{d-2}{2})}\\cdot \\cdots \\cdot \\frac{\\Gamma(1)\\Gamma(\\frac{1}{2})}{\\Gamma(\\frac{3}{2})}\\cdot 2\\pi \\\\\n&= \\frac{r_t\\Delta}{Z_0}\\left(\\frac{d\\pi^{d\/2}\\Delta^{d-1}}{\\Gamma(\\frac{d}{2}+1)}\\right)^{-1}\\cdot \\frac{2\\Delta^{d+1}}{d^2-1}\\cdot\\frac{2\\pi^{\\frac{d-1}{2}}}{\\Gamma(\\frac{d-1}{2})} \\\\\n&= \\frac{2\\Delta^2}{d+1}. 
\n\\end{align*}\\qed\n\n\\section{Problem Formulation}\n\\label{sec:problem_formulation}\nWe introduce the problem of sequential batch learning on finite-action linear contextual bandits.\n\n\\subsection{Notation}\nWe start by fixing some notation that will be used throughout the paper. For a positive integer $n$, let $[n]\\triangleq\\{1,\\cdots,n\\}$. For real numbers $a,b$, let $a\\wedge b\\triangleq \\min\\{a,b\\}$. For a vector $v$, let $v^\\top$ and $\\|v\\|_2$ be the transpose and $\\ell_2$ norm of $v$, respectively. For square matrices $A,B$, let $\\mathsf{Tr}(A)$ be the trace of $A$, and let $A\\preceq B$ denote that the difference $B-A$ is symmetric and positive semi-definite. We adopt the standard asymptotic notations: for two non-negative sequences $\\{a_n\\}$ and $\\{b_n\\}$, let $a_n=O(b_n)$ iff $\\limsup_{n\\to\\infty} a_n\/b_n<\\infty$, $a_n=\\Omega(b_n)$ iff $b_n=O(a_n)$, and $a_n=\\Theta(b_n)$ iff $a_n=O(b_n)$ and $b_n=O(a_n)$. We also write $\\tilde{O}(\\cdot), \\tilde{\\Omega}(\\cdot)$ and $\\tilde{\\Theta}(\\cdot)$ to denote the respective meanings within multiplicative logarithmic factors in $n$. For probability measures $P$ and $Q$, let $P\\otimes Q$ be the product measure with marginals $P$ and $Q$. If measures $P$ and $Q$ are defined on the same probability space, we denote by $\\mathsf{TV}(P,Q) = \\frac{1}{2}\\int |dP-dQ| $ and $ D_{\\text{KL}}(P\\|Q) = \\int dP\\log\\frac{dP}{dQ}$ the total variation distance and Kullback--Leibler (KL) divergences between $P$ and $Q$, respectively. \n\n\n\\subsection{Decision Procedure and Reward Structures}\nLet $T$ be the time horizon of the problem. At the beginning of each time $t\\in [T]$, the decision maker observes a set of $K$ $d$-dimensional feature vectors (i.e. contexts) $\\{x_{t,a} \\mid a \\in [K]\\} \\subseteq \\mathbb{R}^d$ corresponding to the $t$-th unit.\nIf the decision maker selects action $a \\in [K]$, then a reward $r_{t,a} \\in \\mathbb{R}$ corresponding to time $t$ is incurred (although not necessarily immediately observed).\nWe assume the mean reward is linear: that is, there exists an underlying (but unknown) parameter\n$\\theta^\\star$ such that \n $$r_{t,a} = x_{t,a}^\\top \\theta^\\star + \\xi_t,$$\n where $\\{\\xi_t\\}_{t=0}^{\\infty}$ is a sequence of zero-mean independent sub-Gaussian random variables with a uniform upper bound on the sub-Gaussian constants. Without loss of generality and for notational simplicity,\n we assume each $\\xi_t$ is $1$-sub-Gaussian: $\\mathbf{E}[e^{\\lambda \\xi_t}] \\le e^{\\lambda^2\/2}, \\forall t, \\forall \\lambda \\in \\mathbb{R}$.\n Further, without loss of generality (via normalization), we assume $\\|\\theta^\\star\\|_2\\le 1$.\n We denote by $a_t$ and $r_{t, a_t}$ the action chosen and the reward obtained at time $t$, respectively. \n Note that both are random variables; in particular, $a_t$ is random either because the action is randomly selected based on the contexts $\\{x_{t,a} \\mid a \\in [K]\\}$ or because the contexts $\\{x_{t,a} \\mid a \\in [K]\\}$\n are random, or both.\n\nAs there are different (but equivalent) formulations of contextual bandits, we briefly discuss the meaning of the above abstract quantities and how they arise in practice. 
In general, at each round $t$, an individual characterized by $v_t$ (a list of characteristics associated with that individual) becomes available.\nWhen the decision maker decides to apply action $a_t$ to this individual,\n a reward $y_t(v_t, a_t)$, which depends (stochastically) on both $v_t$ and $a_t$, is obtained. In practice, for both modelling and computational reasons, one often first featurizes the individual characteristics and the actions.\nIn particular, with sufficient generality, one assumes $\\mathbf{E}[y_t(v_t, a_t) \\mid v_t, a_t] = g_{\\theta} (\\phi(v_t, a_t))$, \nwhere $g_{\\theta}(\\cdot)$ is the parametrized mean reward function and $\\phi(v_t, a_t)$ extracts the features from the given raw individual characteristics $v_t$ and action $a_t$. In the above formulation,\nas is standard in the literature, we assume the feature map $\\phi(\\cdot)$ is known and given and $x_{t,a} = \\phi(v_t, a)$. Consequently, we directly assume access to contexts $\\{x_{t,a} \\mid a \\in [K]\\}$.\nNote that the linear contextual bandits setting then corresponds to $g_{\\theta}(\\cdot)$ is linear.\n\n\n\\subsection{Sequential Batch Learning}\nIn the standard online learning setting, the decision maker immediately observes the reward $r_{t, a_t}$ after selecting action $a_t$ at time $t$. Consequently, in selecting $a_t$, the decision maker can base his decision on all the past contexts $\\{x_{\\tau,a} \\mid a \\in [K], \\tau\\le t\\}$ and all the past rewards $\\{r_{\\tau,a_\\tau} \\mid \\tau\\le t-1\\}$. \n\nIn constrast, we consider a \\textit{sequential batch learning} setting, where the decision maker is only allowed to partition the $T$ units into (at most) $M$ batches, and the reward corresponding to each unit in a batch can only be observed at the end of the batch. More specifically, given a maximum batch size $M$, the decision maker needs to choose a sequential batch learning algorithm \\textbf{Alg} that has the following two components:\n\\begin{enumerate}\n\t\\item A \\emph{grid} ${\\mathcal{T}}=\\{t_1,t_2,\\cdots,t_M\\}$, with $0 = t_0 < t_10$, where $\\lambda_{\\min}(\\Sigma), \\lambda_{\\max}(\\Sigma)$ denote the smallest and the largest eigenvalues of $\\Sigma$, respectively. \n \\end{assumption}\n \nThe upper bound $ \\lambda_{\\max}(\\Sigma) \\le 1\/d$ in Assumption \\ref{assumption:cov} ensures that $\\mathbb{E}\\|x_{t,a}\\|_2^2\\le 1$, and therefore the stochastic contexts share the similar constraint with the previous adversarial contexts. The lower bound $\\lambda_{\\min}(\\Sigma) \\ge \\kappa\/d$ ensures that each stochastic context is approximately distributed as an isotropic Gaussian random vector, with a bounded condition number no less than $\\kappa^{-1}$. We assume that $\\kappa>0$ is a fixed constant (say $0.1$) and will not optimize the dependence on $\\kappa$. \n \nThe next theorem presents tight regret bounds for the stochastic contexts case.\n\n\\begin{theorem}\\label{thm.stochastic}\n\tLet $T$, $M=O(\\log\\log T)$ and $d$ be the learning horizon, number of batches and each context's dimension, respectively. 
Denote by $\\mathsf{polylog}(T)$ all the poly-logarithmic factors in $T$.\n\t\\begin{enumerate}\n\t\\item \n\tUnder Assumptions \\ref{aspn.TKd} and \\ref{assumption:cov}, there exists a sequential batch learning algorithm \\textbf{Alg}= $({\\mathcal{T}}, \\pi)$ (explicitly defined in Section \\ref{subsec.pure-exp}) such that:\n\t\\begin{align*}\n\t\\sup_{\\theta^\\star: \\|\\theta^\\star\\|_2\\le 1} \\mathbb{E}_{\\theta^\\star}[R_T(\\pi)] \\le \\mathsf{polylog}(T)\\cdot \\sqrt{\\frac{dT}{\\kappa}}\\left(\\frac{T}{d^2}\\right)^{\\frac{1}{2(2^M-1)}}.\n\t\\end{align*}\n\t\\item\n\tConversely, even when $K=2$ and contexts $x_{t,a}\\sim {\\mathcal{N}}(0,I_d\/d)$ are independent over all $a\\in [K], t\\in [T]$, for any $M\\le T$ and any sequential batch learning algorithm, we have:\n\t\\begin{align*}\n\t\\sup_{\\theta^\\star: \\|\\theta^\\star\\|_2\\le 1} \\mathbb{E}_{\\theta^\\star}[R_T(\\pi)] \\ge c\\cdot \\sqrt{dT}\\left(\\frac{T}{d^2}\\right)^{\\frac{1}{2(2^M-1)}},\n\t\\end{align*}\n\twhere $c>0$ is a numerical constant independent of $(T,M,d)$. \n\\end{enumerate}\n\\end{theorem}\n\nTheorem \\ref{thm.stochastic} completely characterizes the minimax regret for the sequential batch learning problem in linear contextual bandits with stochastic contexts, and shows a doubly exponential dependence of the optimal regret on the number of batches $M$. The following corollary is immediate. \n\\begin{corollary}\\label{cor.stochastic}\n\tUnder stochastic contexts, it is necessary and sufficient to have $\\Theta(\\log\\log (T\/d^2))$ batches to achieve the fully online regret $\\tilde{\\Theta}(\\sqrt{dT})$. \n\\end{corollary}\n\nIn contrast to Corollary \\ref{cor.adversarial}, the above corollary shows that a much smaller number of batches are capable of achieving the fully online performance, which suits better for many practical scenarios. Note that for smaller number of batches, Theorem \\ref{thm.stochastic} also gives the tight regrets within logarithmic factors, e.g., the optimal regret is $\\tilde{\\Theta}(Td^{-1\/2})$ when $M=1$, is $\\tilde{\\Theta}(T^{2\/3}d^{1\/6})$ when $M=2$, is $\\tilde{\\Theta}(T^{4\/7}d^{5\/14})$ when $M=3$, and so on. \n\n\n\\subsection{A Sequential Batch Pure-Exploitation Algorithm}\\label{subsec.pure-exp}\n\nIn contrast to the adversarial contexts, under stochastic contexts the decision maker enjoys the advantage that he can choose to learn the unknown parameter $\\theta^\\star$ from any desired direction. In other words, the exploration of the learner is no longer subject to the adversary's restrictions, and strikingly, making decisions based on the best possible inference of $\\theta^\\star$ is already sufficient.\n\n\\begin{algorithm}[h!]\n\t\\DontPrintSemicolon \n\t\\SetAlgoLined\n\t\\BlankLine\n\t\\caption{Sequential Batch Pure-exploitation\t\\label{algo.pure-exp}}\n\t\\textbf{Input:} Time horizon $T$; context dimension $d$; number of batches $M$. \\\\\n\t\\textbf{Set} $a = \\Theta\\left( \\sqrt{T}\\cdot \\left(\\frac{T}{d^2}\\right)^{\\frac{1}{2(2^M-1)}} \\right)$\\\\\n\t\\textbf{Grid choice}: ${\\mathcal{T}} = \\{t_1,\\cdots,t_M\\}$, with $t_1 = ad, \\quad t_m = \\lfloor a\\sqrt{t_{m-1}} \\rfloor, m=2,3,\\cdots,M,.$\\\\\n\t\\textbf{Initialization:} $A = {\\bf 0}\\in \\mathbb{R}^{d\\times d}$, $\\hat{\\theta}={\\bf 0}\\in \\mathbb{R}^d$\\;\n\t\\For{$m \\gets 1$ \\KwTo $M$}{\n\n\t\t\t\\For{$t\\gets t_{m-1}+1$ \\KwTo $t_m$}{\n\t\t\t\tchoose $a_t = \\arg\\max_{a\\in [K]} x_{t,a}^\\top \\hat{\\theta}$ (break ties arbitrarily). \\\\\n\t\t\t\treceive reward $r_{t,a_t}$. 
\n\t\t\t}\n\t\t}\n\t\t$A\\gets A + \\sum_{t=t_{m-1}+1}^{t_m} x_{t,a_t}x_{t,a_t}^\\top$. \\\\\n\t\t$\\hat{\\theta} \\gets A^{-1}\\sum_{t=t_{m-1}+1}^{t_m} r_{t,a_t}x_{t,a_t}$.\n\\end{algorithm}\n\nThe algorithm we use in this setting is quite simple (see Algorithm~\\ref{algo.pure-exp}). Specifically, under a particularly chosen grid ${\\mathcal{T}}=\\{t_1,t_2,\\cdots,t_M\\}$, the learner, at the beginning of each batch, uses the least squares estimate $\\hat{\\theta}$ of $\\theta^\\star$ based on the data in the previous batches, and then simply selects the action $a\\in [K]$ which maximizes the estimated reward $x_{t,a}^\\top \\hat{\\theta}$ for any time $t$ in this batch. Then at the end of each batch, the learner updates his estimate $\\hat{\\theta}$ of $\\theta^\\star$ based on the new observations from the current batch. \n\nHow do we select the grid ${\\mathcal{T}}$? Intuitively, in order to minimize overall regret, we must ensure that the regret incurred on each batch is not too large, because the overall regret is dominated by the batch that has the largest regret. Guided by this observation, we can see intuitively an optimal way of selecting the grid must ensure that each batch's regret is the same (at least orderwise in terms of the dependence of $T$ and $d$): for otherwise, there is a way of reducing the regret order in one batch and increasing the regret order in the other and the sum of the two will still have smaller regret order than before (which is dominated by the batch that has larger regret order). As we shall see later, the following grid choice satisfies this equal-regret-across-batches requirement:\n\\begin{align}\\label{eq.minimax_grid}\nt_1 = ad, \\quad t_m = \\lfloor a\\sqrt{t_{m-1}} \\rfloor, \\qquad m=2,3,\\cdots,M,\n\\end{align}\nwhere the parameter $a = \\Theta\\left( \\sqrt{T}\\cdot \\left(\\frac{T}{d^2}\\right)^{\\frac{1}{2(2^M-1)}} \\right)$ is chosen so that $t_M=T$. \n\n\n\n\n\\subsection{Regret Analysis for Upper bound}\\label{subsec.stochastic_upperbound}\nWe now turn to establishing the upper bound in Theorem \\ref{thm.stochastic}. \nWe again execute a two-step program. First, we prove that Algorithm \\ref{algo.pure-exp} with the grid ${\\mathcal{T}}=\\{t_1,\\cdots,t_M\\}$ in \\eqref{eq.minimax_grid} attains the regret upper bound in Theorem \\ref{thm.stochastic}, assuming the conditional independence assumption (cf. Lemma \\ref{lemma.difference}) holds. Second, similar to the master algorithm in the previous section, we then modify Algorithm \\ref{algo.pure-exp} slightly to validate this condition. One thing to note here is that, unlike in the adversarial contexts case, here the modification is much simpler, as we shall see later.\n\nWe start by establishing that the least squares estimator $\\hat{\\theta}$ is close to the true parameter $\\theta^\\star$ at the beginning of every batch with high probability. By the theory of least squares, this would be obvious if the chosen contexts $x_{t,a_t}$ were i.i.d. Gaussian. However, since the action $a_t$ depends on all contexts $(x_{t,a})_{a\\in [K]}$ available at time $t$, the probability distribution of $x_{t,a_t}$ may be far from isotropic. Consequently, a priori, there might be one or more directions in the context space that were never chosen, hence yielding inaccurate estimation of $\\theta^\\star$ along that (or those) direction(s). 
However, as we shall see next, this is not a concern: we establish that the matrix formed by the selected contexts is reasonably well-conditioned, despite the contexts being selected in a greedy fashion.\n\n\n\\begin{lemma}\\label{lemma.equator}\n\tFor each $m\\in [M]$, with probability at least $1-O(T^{-4})$ we have\n\t\\begin{align*}\n\t\\lambda_{\\min}\\left(\\sum_{t=t_{m-1}+1}^{t_m} x_{t,a_t}x_{t,a_t}^\\top \\right) \\ge c\\cdot \\frac{\\kappa(t_m-t_{m-1})}{d},\n\t\\end{align*}\n\twhere $c>0$ is a numerical constant independent of $(K,T,d,m,\\kappa)$. \n\\end{lemma}\n\nThe proof of the above lemma is a bit long and hence deferred to the appendix. Based on Lemma \\ref{lemma.equator}, we are ready to show that the least squares estimator $\\hat{\\theta}$ is close to the true parameter $\\theta^\\star$ with high probability. For $m\\in [M]$, let $\\hat{\\theta}_m$ be the estimate at the end of the $m$-th batch, and $A_m = \\sum_{t=1}^{t_m} x_{t,a_t}x_{t,a_t}^\\top$ be the regression matrix. \n\\begin{lemma}\\label{lemma.difference}\n\tFor each $m\\in [M]$, if the rewards $\\{r_{t,a_t}\\}_{t\\in [t_m]}$ up to time $t_m$ are mutually independent given the selected contexts $\\{x_{t,a_t}\\}_{t\\in [t_m]}$, then with probability at least $1-O(T^{-3})$,\n\t\\begin{align*}\n\t\\|\\hat{\\theta}_m - \\theta^\\star\\|_2 \\le Cd\\cdot \\sqrt{\\frac{\\log T}{\\kappa t_m}}\n\t\\end{align*}\n\tfor a numerical constant $C>0$ independent of $(K,T,d,m,\\kappa)$. \n\\end{lemma}\n\\begin{proof}{Proof.}\n\tBy the standard algebra of linear regression, we have:\n\t\\begin{align*}\n\t\\hat{\\theta}_m - \\theta^\\star = A_m^{-1}\\sum_{t=1}^{t_m}x_{t,a_t} (r_{t,a_t} - x_{t,a_t}^\\top \\theta^\\star). \n\t\\end{align*}\n\tHence, conditioned on the contexts $\\{x_{t,a_t}\\}_{t\\in [t_m]}$, the noise terms $r_{t,a_t} - x_{t,a_t}^\\top \\theta^\\star$ are independent by the assumption, and each noise term $r_{t,a_t} - x_{t,a_t}^\\top \\theta^\\star$ is $1$-sub-Gaussian.\n\t\n\tNext, we show that the random vector $\\hat{\\theta}_m - \\theta^\\star$ is $\\sigma^2$-sub-Gaussian conditioned on the contexts with $\\sigma^2 = \\lambda_{\\min}(A_m)^{-1}$. To see this, we start by recalling that a centered (i.e. zero-mean) random vector $V$ is $v$-sub-Gaussian if the scalar random variable $\\langle V, u\\rangle$ is $v$-sub-Gaussian for any unit vector $u$.\n\tConsequently, taking any unit vector $u \\in \\mathbb{R}^d$, we have:\n\t$$\\langle \\hat{\\theta}_m - \\theta^\\star , u \\rangle = \\langle A_m^{-1}\\sum_{t=1}^{t_m}x_{t,a_t} (r_{t,a_t} - x_{t,a_t}^\\top \\theta^\\star), u\\rangle = \\sum_{t=1}^{t_m} u^T A_m^{-1}x_{t,a_t} (r_{t,a_t} - x_{t,a_t}^\\top \\theta^\\star).$$\n\tSince each term in the summand is $(u^T A_m^{-1}x_{t,a_t})^2$-sub-Gaussian, and since all of them are independent (after being conditioned on $\\{x_{t,a_t}\\}_{t\\in [t_m]}$), their sum is also sub-Gaussian with the sub-Gaussian constant equal to the sum of the sub-Gaussian constants:\n\t\\begin{align*}\n\t&\\sum_{t=1}^{t_m} (u^T A_m^{-1}x_{t,a_t})^2= \\sum_{t=1}^{t_m} u^T A_m^{-1}x_{t,a_t} x_{t,a_t}^T A_m^{-1} u\n\t= u^T A_m^{-1}\\big( \\sum_{t=1}^{t_m} x_{t,a_t} x_{t,a_t}^T \\big) A_m^{-1} u \\\\\n\t&= u^T A_m^{-1}A_m A_m^{-1} u = u^T A_m^{-1} u \\le \\lambda_{\\max}(A_m^{-1}) = \\lambda_{\\min}(A_m)^{-1}. 
\n\t\\end{align*}\n\tSince the above inequality holds for any unit vector $u$, choosing $\\sigma^2 = \\lambda_{\\min}(A_m)^{-1}$\n\testablishes the claim.\n\t\n\t\n\tProceeding further, by Lemma \\ref{lemma.equator}, we have for each $m\\in [M]$, with probability at least $1-O(T^{-4})$ \n\t$\\lambda_{\\min}\\left(\\sum_{t=t_{m-1}+1}^{t_m} x_{t,a_t}x_{t,a_t}^\\top \\right) \\ge c\\cdot \\frac{\\kappa(t_m-t_{m-1})}{d}$. Consequently, by a union bound over all $M$ (which is at most $T$),\n\twe have with probability at least $1-O(T^{-3})$, $\\lambda_{\\min}\\left(\\sum_{t=t_{m-1}+1}^{t_m} x_{t,a_t}x_{t,a_t}^\\top \\right) \\ge c\\cdot \\frac{\\kappa(t_m-t_{m-1})}{d}$ for all $m \\in [M]$.\n\tSince $\\lambda_{\\min}(X+Y)\\ge \\lambda_{\\min}(X)+\\lambda_{\\min}(Y)$ for any symmetric matrices $X,Y$, \n\tit then follows that with probability at least $1-O(T^{-3})$:\n\t\\begin{align*}\n\t\\lambda_{\\min}(A_m) = \\lambda_{\\min}\\left(\\sum_{l=1}^m \\sum_{t=t_{l-1}+1}^{t_l} x_{t,a_t}x_{t,a_t}^\\top \\right) \\ge \\sum_{l=1}^m \\lambda_{\\min}\\left(\\sum_{t=t_{l-1}+1}^{t_l} x_{t,a_t}x_{t,a_t}^\\top \\right) \\ge \\frac{c\\kappa t_m}{d}.\n\t\\end{align*}\n\t\n\tFinally, since $\\hat{\\theta}_m - \\theta^\\star$ is a $\\frac{d}{c\\kappa t_m}$-sub-Gaussian random vector, $\\|\\hat{\\theta}_m - \\theta^\\star\\|_2^2 $\n\tis a sub-exponential random variable.\n\tTherefore, conditioned on the above event for the stochastic contexts, the sub-exponential concentration gives the claimed upper bound on $\\|\\hat{\\theta}_m - \\theta^\\star\\|_2$ with a further probability at least $1 - O(T^{-3})$ over the random noises. Finally, taking a union bound to complete the proof. \n\\end{proof}\n\nLemma~\\ref{lemma.difference} shows that given the conditional independence assumption, the estimator $\\hat{\\theta}$ given by pure exploitation essentially achieves the rate-optimal estimation of $\\theta^\\star$ even if one purely explores. This now positions us well to prove the upper bound of Theorem \\ref{thm.stochastic}. Of course, bear in mind that when using Algorithm~\\ref{algo.pure-exp}, the conditional independence assumption does not hold, for the choice of future contexts depends on the rewards in the previous batches. Therefore, we will use sample splitting to build another master algorithm to gain independence at the cost of the sample size reduction by a multiplicative factor of $M$ (recall that $M = O(\\log\\log T)$). The following proof implements these two steps; note that in this setting, the master algorithm is entirely different from and much simpler than the one given in the adversarial case.\n\n\\begin{proof}[Proof of Statement 1 in Theorem~\\ref{thm.stochastic}]\n\\begin{enumerate}\n\\item[]\n\\item \\textbf{Regret bound under conditional independence assumption.}\n\n Consider the $m$-th batch with any $m\\ge 2$, and any time point $t$ inside this batch. By the definition of $a_t$, we have $x_{t,a_t}^\\top \\hat{\\theta}_{m-1}\\ge x_{t,a}^\\top \\hat{\\theta}_{m-1}$ for any $a\\in [K]$. Consequently, \n\\begin{align*}\n\\max_{a\\in [K]} (x_{t,a} - x_{t,a_t})^\\top \\theta^\\star &\\le \\max_{a\\in [K]} (x_{t,a} - x_{t,a_t})^\\top (\\theta^\\star - \\hat{\\theta}_{m-1}) \\\\\n&\\le \\max_{a,a'\\in [K]} (x_{t,a} - x_{t,a'})^\\top (\\theta^\\star - \\hat{\\theta}_{m-1}) \\\\\n&\\le 2\\max_{a\\in [K]} |x_{t,a}^\\top (\\theta^\\star - \\hat{\\theta}_{m-1})|. \n\\end{align*}\nFor fixed $a\\in [K]$, marginally we have $x_{t,a}\\sim {\\mathcal{N}}(0,\\Sigma)$ independent of $\\hat{\\theta}_{m-1}$. 
Therefore, conditioning on the previous contexts and rewards, we have $x_{t,a}^\\top (\\theta^\\star - \\hat{\\theta}_{m-1})\\sim {\\mathcal{N}}(0,\\sigma^2)$ with\n$$\n\\sigma^2 = (\\theta^\\star - \\hat{\\theta}_{m-1})^\\top \\Sigma (\\theta^\\star - \\hat{\\theta}_{m-1}) \\le \\frac{\\|\\theta^\\star - \\hat{\\theta}_{m-1}\\|_2^2}{d}\n$$\nby Assumption \\ref{assumption:cov}. By a union bound over $a\\in [K]$, with probability at least $1-O(T^{-3})$ over the randomness in the current batch we have\n\\begin{align*}\n\\max_{a\\in [K]} (x_{t,a} - x_{t,a_t})^\\top \\theta^\\star \\le 2\\max_{a\\in [K]} |x_{t,a}^\\top (\\theta^\\star - \\hat{\\theta}_{m-1})| = O\\left(\\|\\theta^\\star - \\hat{\\theta}_{m-1} \\|_2 \\cdot \\sqrt{\\frac{\\log(KT)}{d}}\\right). \n\\end{align*}\nApplying Lemma \\ref{lemma.difference} and another union bound, there exists some numerical constant $C'>0$ such that with probability at least $1-O(T^{-3})$, the instantaneous regret at time $t$ is at most\n\\begin{align*}\n\\max_{a\\in [K]} (x_{t,a} - x_{t,a_t})^\\top \\theta^\\star \\le C'\\sqrt{\\log(KT)\\log T}\\cdot \\sqrt{\\frac{d}{\\kappa t_{m-1}}}. \n\\end{align*}\nNow taking a union bound over $t\\in [T]$, the total regret incurred after the first batch is at most\n\\begin{align}\\label{eq.later_batch}\n\\sum_{m=2}^M C'\\sqrt{\\log(KT)\\log T}\\cdot t_m\\sqrt{\\frac{d}{\\kappa t_{m-1}}} \\le C'\\sqrt{\\frac{\\log(KT)\\log T}{\\kappa}}M\\cdot a\\sqrt{d}\n\\end{align}\nwith probability at least $1-O(T^{-2})$, where the inequality is due to the choice of the grid in \\eqref{eq.minimax_grid}. \n\nAs for the first batch, the instantaneous regret at any time point $t$ is at most the maximum of $K$ Gaussian random variables ${\\mathcal{N}}(0,(\\theta^\\star)^\\top \\Sigma \\theta^\\star)$. Since $\\|\\theta^\\star\\|_2\\le 1$ and $\\lambda_{\\max}(\\Sigma)\\le 1\/d$, we conclude that the instantaneous regret is at most $C''\\sqrt{\\log(KT)\/d}$ for some constant $C''>0$ with probability at least $1-O(T^{-3})$. Now by a union bound over $t\\in [t_1]$, with probability at least $1-O(T^{-2})$ the total regret in the first batch is at most\n\\begin{align}\\label{eq.first_batch}\nC''\\sqrt{\\log(KT)\/d}\\cdot t_1 = C''\\sqrt{\\log(KT)}\\cdot a\\sqrt{d}. \n\\end{align}\n\nNow combining \\eqref{eq.later_batch}, \\eqref{eq.first_batch} and the choice of $a$ in Algorithm~\\ref{algo.pure-exp} gives the desired regret bound in Theorem \\ref{thm.stochastic} with high probability (note that $M=O(\\log\\log T)$), and consequently in expectation. \n\n\\item \\textbf{Building a master algorithm that satisfies conditional independence.}\n\\begin{algorithm}[h!]\n\t\\DontPrintSemicolon \n\t\\SetAlgoLined\n\t\\BlankLine\n\t\\caption{Batched Pure-exploitation (with sample splitting)\t\\label{algo.sample_splitting}}\n\t\\textbf{Input:} Time horizon $T$; context dimension $d$; number of batches $M$; grid ${\\mathcal{T}} = \\{t_1,\\cdots,t_M\\}$ same as in Algorithm~\\ref{algo.pure-exp}.\\;\n\t\\textbf{Initialization:} Partition each batch evenly into $M$ intervals, i.e., $(t_{m-1},t_m]=\\cup_{j=1}^M T_m^{(j)}$ for each $m\\in[M]$. \\;\n\t\\For{$m \\gets 1$ \\KwTo $M$}{\n\t\t\\If{$m=1$}{\n\t\t\tchoose $a_t = 1$ and receive reward $r_{t,a_t}$ for any $t\\in [1,t_1]$. \n\t\t}\n\t\t\\Else{\n\t\t\t\\For{$t\\gets t_{m-1}+1$ \\KwTo $t_m$}{\n\t\t\t\tchoose $a_t = \\arg\\max_{a\\in [K]} x_{t,a}^\\top \\hat{\\theta}_{m-1}$ (break ties arbitrarily). \\\\\n\t\t\t\treceive reward $r_{t,a_t}$. 
\n\t\t\t}\n\t\t}\n\t\t$T^{(m)} \\gets \\cup_{m'=1}^m T_{m'}^{(m)}.$\\\\\n\t\t$A_m\\gets \\sum_{t\\in T^{(m)}} x_{t,a_t}x_{t,a_t}^\\top$. \\\\\n\t\t$\\hat{\\theta}_m \\gets A_m^{-1}\\sum_{t\\in T^{(m)}} r_{t,a_t}x_{t,a_t}$.\n\t}\n\t\\textbf{Output: resulting policy $\\pi=(a_1,\\cdots,a_T)$}.\n\\end{algorithm}\n\nWe start by proposing a sample splitting based master algorithm (see Algorithm~\\ref{algo.sample_splitting}) that ensures that when restricting to the subset of observations used for constructing $\\hat{\\theta}$, the rewards are conditionally independent given the contexts. \nThe key modification in Algorithm \\ref{algo.sample_splitting} lies in the computation of the estimator $\\hat{\\theta}_{m}$ after the first $m$ batches. Specifically, instead of using all past contexts and rewards before $t_m$, we only use the past observations inside the time frame $T^{(m)}\\subsetneq [t_m]$ to construct the estimator. The key property of the time frames is the disjointness, i.e., $T^{(1)},\\cdots,T^{(M)}$ are pairwise disjoint. Then the following lemma shows that the conditional independence condition holds within each time frame $T^{(m)}$. \n\n\n\\begin{lemma}\\label{lemma.cond_indep}\n\tFor each $m\\in [M]$, the rewards $\\{r_{t,a_t}\\}_{t\\in T^{(m)}}$ are mutually independent conditioning on the selected contexts $\\{x_{t,a_t}\\}_{t\\in T^{(m)}}$. \n\\end{lemma}\n\\begin{proof}{Proof.}\n\tFor $t\\in T^{(m)}$, the action $a_t$ only depends on the contexts $\\{x_{t,a}\\}_{a\\in [K]}$ at time $t$ and the past estimators $\\hat{\\theta}_1, \\cdots, \\hat{\\theta}_{m-1}$. However, for any $m'\\in [m-1]$, the estimator $\\hat{\\theta}_{m'}$ only depends on the contexts $x_{\\tau,a_\\tau}$ and rewards $r_{\\tau,a_\\tau}$ with $\\tau\\in T^{(m')}$. Repeating the same arguments for the action $a_\\tau$ with $\\tau\\in T^{(m')}$, we conclude that $a_t$ only depends on the contexts $\\{x_{\\tau,a}\\}_{a\\in [K],\\tau\\in \\cup_{m'\\le m-1} T^{(m')}\\cup \\{t\\}}$ and rewards $\\{r_{\\tau,a_\\tau} \\}_{\\tau\\in \\cup_{m'\\le m-1} T^{(m')}}$. Consequently, by the disjointness of $T^{(m)}$ and $\\cup_{m'\\le m-1} T^{(m')}$, the desired conditional independence holds. \n\\end{proof}\n\nBy Lemma \\ref{lemma.cond_indep}, the conditional independence condition of Lemma \\ref{lemma.difference} holds for Algorithm \\ref{algo.sample_splitting}. Moreover, the sample splitting in Algorithm \\ref{algo.sample_splitting} reduces the sample size by a multiplicative factor at most $M$ at each round, and $M=O(\\log\\log T)$, therefore all proofs in Section \\ref{subsec.pure-exp} continue to hold with a multiplicative penalty at most doubly logarithmic in $T$. As a result, Algorithm \\ref{algo.sample_splitting} achieves the regret upper bound in Theorem \\ref{thm.stochastic}. \n\\end{enumerate}\n\\end{proof}\n\n\n\\subsection{Lower bound}\\label{subsec.stochastic_lower}\nIn this section we prove the minimax lower bound of the regret under stochastic contexts for $K=2$. \nThe lower bound argument for the stochastic context case is quite involved and we start by establishing the following key lemma. \n\\begin{lemma}\\label{lemma.lower_bound}\n\tFor any fixed grid $0=t_0 1. \n\t\\end{align*}\n\tCombining \\eqref{eq.bayesian} and \\eqref{eq.target} completes the proof of Lemma \\ref{lemma.lower_bound}. 
\n\\end{proof}\n\nWe are now ready to put everything together and complete the proof of the lower bound.\n\n\\begin{proof}[Proof of Statement 2 in Theorem~\\ref{thm.stochastic}]\nFor any fixed grid ${\\mathcal{T}}=\\{t_1,\\cdots,t_M\\}$, define $s=\\min\\{m\\in [M]: t_m\\ge d^{2} \\}$, which always exists due to our assumption that $T\\ge d^2$. Now choosing some candidates of $\\Delta \\in \\{1, \\frac{d}{\\sqrt{t_s}}, \\frac{d}{\\sqrt{t_{s+1}}}, \\cdots, \\frac{d}{\\sqrt{T}}\\} \\subset [0,1]$ in Lemma \\ref{lemma.lower_bound} gives\n\\begin{align}\\label{eq.minimax}\n\\sup_{\\theta^\\star: \\|\\theta^\\star\\|_2 \\le 1} \\mathbb{E}[R_T(\\pi)] \\ge c\\cdot \\max\\left\\{\\frac{t_s}{\\sqrt{d}}, t_{s+1}\\sqrt{\\frac{d}{t_s}}, t_{s+2}\\sqrt{\\frac{d}{t_{s+1}}},\\cdots, T\\sqrt{\\frac{d}{t_{M-1}}} \\right\\}\n\\end{align}\nfor some numerical constant $c>0$. After some algebra, the right-hand side of \\eqref{eq.minimax} may be further lower bounded by\n\\begin{align*}\n\\sup_{\\theta^\\star: \\|\\theta^\\star\\|_2 \\le 1} \\mathbb{E}[R_T(\\pi)] \\ge c\\sqrt{dT}\\cdot \\left(\\frac{T}{d^2}\\right)^{\\frac{1}{2(2^{M-s+1}-1)}} \\ge c\\sqrt{dT}\\cdot \\left(\\frac{T}{d^2}\\right)^{\\frac{1}{2(2^{M}-1)}}. \n\\end{align*}\n\\end{proof}\n\n\\section{Problem-Dependent Regret Bounds}\\label{sec:gap}\n\nThe regret bounds given in the previous two sections are problem-independent regret bounds (also known as gap-independent regret bounds in the bandits literature): they do not depend on the underlying parameters of the probability distribution. When the contexts are stochastic, under certain ``margin\" conditions, we can also consider problem-dependent regret bounds that can result in sharper bounds than those problem-independent ones. When the number of contexts is small (e.g., $K=2$), there could be a large margin between the performance of the optimal context and any sub-optimal contexts if $\\|\\theta^\\star\\|_2$ is bounded away from zero, raising the possibility that a problem-dependent regret bound sometimes better than the worst-case regret $\\Theta(\\sqrt{dT})$ could be obtained in sequential batch learning. The next theorem characterizes this. \n\n\\begin{theorem}\\label{thm.problem-dependent}\n\tAssume $K=2$, and let $T$, $M=O(\\log T)$, $d$ be the learning horizon, number of batches and the dimension of each context respectively. Denote by $\\mathsf{polylog}(T)$ all the poly-logarithmic factors in $T$. Assume without loss of generality $\\|\\theta^*\\|_2 > 0$. \n\t\\begin{enumerate}\n\t\t\\item \n\t\tUnder Assumptions \\ref{aspn.TKd} and \\ref{assumption:cov}, there exists a sequential batch learning algorithm \\textbf{Alg}= $({\\mathcal{T}}, \\pi)$ (explicitly defined below ) that achieves the following regret:\n\t\t\\begin{align*}\n\t \\mathbb{E}_{\\theta^\\star}[R_T(\\pi)] \\le \\mathsf{polylog}(T)\\cdot \\frac{(d\/\\kappa)^{3\/2}}{\\|\\theta^\\star\\|_2} \\left(\\frac{T}{d^2}\\right)^{\\frac{1}{M}}.\n\t\t\\end{align*}\n\t\t\\item\n\t\tConversely, when the contexts $x_{t,a}\\sim {\\mathcal{N}}(0,I_d\/d)$ are independent over all $a\\in [K], t\\in [T]$, for any $M\\le T$ and any sequential batch learning algorithm, we have:\n\t\t\\begin{align*}\n\t\t\\sup_{\\theta^\\star: \\|\\theta^\\star\\|_2\\le 1} \\|\\theta^\\star\\|_2\\cdot \\mathbb{E}_{\\theta^\\star}[R_T(\\pi)] \\ge c\\cdot d^{3\/2} \\left(\\frac{T}{d^2}\\right)^{\\frac{1}{M}}, \n\t\t\\end{align*}\n\t\twhere $c>0$ is a numerical constant independent of $(T,M,d)$. 
\n\t\\end{enumerate}\n\\end{theorem}\n\\begin{corollary}\nIn this setting, it is necessary and sufficient to have $\\Theta(\\log(T\/d^2))$ batches to achieve the optimal problem-dependent regret $\\tilde{\\Theta}(d^{3\/2} \/ \\|\\theta^\\star\\|_2)$. \nHere we are not aiming to get the tightest dependence on $\\log T$ (note that $\\tilde{\\Theta}(\\cdot)$ hides polylog factors). \n\\end{corollary}\n\nNote that the dependence on $T$ is significantly better than $\\sqrt{T}$ in the problem-dependent bound, showing that a large $\\|\\theta^\\star\\|_2$ makes learning simpler. We remark that although the problem-dependent regret in Theorem \\ref{thm.problem-dependent} only holds for $K=2$, the generalization to a generic $K$ is straightforward. Moreover, the margin between the optimal context and the sub-optimal context shrinks quickly as $K$ gets larger, and therefore the margin-based problem-dependent bound is not that useful compared with the worst-case regret bound in Theorem \\ref{thm.stochastic} for large $K$. \n\n\\subsection{Proof of the Upper Bound in Theorem \\ref{thm.problem-dependent}}\nThe sequential batch learning algorithm that achieves the claimed upper bound is exactly the batched pure-exploitation algorithm with sample splitting shown in Algorithm \\ref{algo.sample_splitting}, with a different choice of the grid: we consider a geometric grid ${\\mathcal{T}}' = \\{t_1', t_2', \\cdots, t_M'\\}$ with\n\\begin{align*}\nt_1' = bd^2, \\qquad t_m' = \\lfloor bt_{m-1}' \\rfloor, \\quad m=2,3,\\cdots,M,\n\\end{align*}\nwhere $b = \\Theta((T\/d^2)^{1\/M})$ so that $t_M' = T$. Next we show that with the above choice of the grid, Algorithm \\ref{algo.sample_splitting} attains the regret upper bound in Theorem \\ref{thm.problem-dependent}. \n\nConsider the $m$-th batch with any $m\\ge 2$, and any time point $t$ inside this batch. Define $v_t = x_{t,1} - x_{t,2}$; then our algorithm chooses the wrong arm if and only if $v_t^\\top \\theta^\\star$ and $v_t^\\top \\hat{\\theta}_{m-1}$ have different signs. Hence, the instantaneous regret at time $t$ is\n\\begin{align*}\nv_t^\\top \\theta^\\star \\cdot \\mathbbm{1}(v_t^\\top \\theta^\\star \\ge 0, v_t^\\top \\hat{\\theta}_{m-1} \\le 0) - v_t^\\top \\theta^\\star \\cdot \\mathbbm{1}(v_t^\\top \\theta^\\star \\le 0, v_t^\\top \\hat{\\theta}_{m-1} \\ge 0),\n\\end{align*}\nand by the symmetry of $v_t\\sim {\\mathcal{N}}(0,2\\Sigma)$, it holds that\n\\begin{align*}\n\\mathbb{E}\\left[\\max_{a\\in \\{1,2\\}} (x_{t,a} - x_{t,a_t})^\\top \\theta^\\star \\right] = 2\\mathbb{E}\\left[v_t^\\top \\theta^\\star\\cdot \\mathbbm{1}(v_t^\\top \\theta^\\star \\ge 0, v_t^\\top \\hat{\\theta}_{m-1} \\le 0) \\right]. \n\\end{align*}\nSet $\\delta = \\sqrt{d\\log T\/(\\kappa t_{m-1}')}$, and partition the non-negative axis $\\mathbb{R}_+$ into $\\bigcup_{i=0}^\\infty [i\\delta, (i+1)\\delta)$. 
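Intuitively, $\\delta$ is chosen to match the scale of the estimation error: as used when establishing \\eqref{eq.concentration} below, the vector $\\theta^\\star - \\hat{\\theta}_{m-1}$ is $d\/(c\\kappa t_{m-1}')$-sub-Gaussian, so on the event $\\|v_t\\|_2 \\le \\sqrt{10\\log T}$ the fluctuation $v_t^\\top (\\theta^\\star - \\hat{\\theta}_{m-1})$ is sub-Gaussian with parameter at most $10d\\log T\/(c\\kappa t_{m-1}') = (10\/c)\\delta^2$; this is what makes the terms of the resulting series decay geometrically in $i$. 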
Using this partition gives\n\\begin{align}\n&\\mathbb{E}\\left[v_t^\\top \\theta^\\star\\cdot \\mathbbm{1}(v_t^\\top \\theta^\\star \\ge 0, v_t^\\top \\hat{\\theta}_{m-1} \\le 0)\\cdot \\mathbbm{1}(\\|v_t\\|_2 \\le \\sqrt{10\\log T}) \\right] \\nonumber\\\\\n&= \\sum_{i=0}^\\infty \\mathbb{E}\\left[v_t^\\top \\theta^\\star\\cdot \\mathbbm{1}(v_t^\\top \\theta^\\star \\in [i\\delta, (i+1)\\delta), v_t^\\top \\hat{\\theta}_{m-1} \\le 0)\\cdot \\mathbbm{1}(\\|v_t\\|_2 \\le \\sqrt{10\\log T}) \\right] \\nonumber\\\\\n&\\le \\sum_{i=0}^\\infty (i+1)\\delta\\cdot \\mathbb{P}\\left(v_t^\\top \\theta^\\star \\in [i\\delta, (i+1)\\delta), v_t^\\top \\hat{\\theta}_{m-1} \\le 0,\\|v_t\\|_2 \\le \\sqrt{10\\log T} \\right) \\nonumber\\\\\n&\\le \\sum_{i=0}^\\infty (i+1)\\delta\\cdot \\mathbb{P}\\left(v_t^\\top \\theta^\\star \\in [i\\delta, (i+1)\\delta), v_t^\\top (\\theta^\\star - \\hat{\\theta}_{m-1}) \\ge i\\delta,\\|v_t\\|_2 \\le \\sqrt{10\\log T} \\right) \\nonumber\\\\\n&\\le \\sum_{i=0}^\\infty (i+1)\\delta\\cdot \\mathbb{P}\\left(v_t^\\top \\theta^\\star \\in [i\\delta, (i+1)\\delta) \\right)\\cdot \\mathbb{P}\\left( v_t^\\top (\\theta^\\star - \\hat{\\theta}_{m-1}) \\ge i\\delta \\big| v_t^\\top \\theta^\\star \\in [i\\delta, (i+1)\\delta), \\|v_t\\|_2 \\le \\sqrt{10\\log T}\\right). \\label{eq.partition}\n\\end{align}\n\nWe deal with each term in \\eqref{eq.partition} separately. For $\\mathbb{P}\\left(v_t^\\top \\theta^\\star \\in [i\\delta, (i+1)\\delta) \\right)$, note that $v_t^\\top\\theta^\\star$ is a normal random variable with variance $(\\theta^\\star)^\\top \\Sigma \\theta^\\star \\ge \\lambda_{\\min}(\\Sigma)\\|\\theta^\\star\\|_2^2\\ge \\kappa\\|\\theta^\\star\\|_2^2\/d$, thus the probability density of this random variable is upper bounded by $\\sqrt{d\/2\\pi \\kappa}\/\\|\\theta^\\star\\|_2$ everywhere. Therefore, \n\\begin{align}\\label{eq.anticoncentration}\n\\mathbb{P}\\left(v_t^\\top \\theta^\\star \\in [i\\delta, (i+1)\\delta) \\right) \\le \\delta\\cdot \\frac{\\sqrt{d}}{\\sqrt{2\\pi\\kappa}\\|\\theta^\\star\\|_2}. \n\\end{align}\nFor the second term of \\eqref{eq.partition}, the proof of Lemma \\ref{lemma.difference} shows that the random vector $\\theta^\\star - \\hat{\\theta}_{m-1}\\in \\mathbb{R}^d$ is $d\/(c\\kappa t_{m-1}')$-subGaussian for some absolute constant $c>0$, and is also independent of $v_t$. Hence, conditioning on $\\|v_t\\|_2\\le \\sqrt{10\\log T}$, the random variable $v_t^\\top(\\theta^\\star - \\hat{\\theta}_{m-1})$ is also subGaussian with parameter $\\|v_t\\|_2^2d\/(c\\kappa t_{m-1})\\le 10d\\log T\/(c\\kappa t_{m-1}')$. Consequently, subGaussian concentration gives\n\\begin{align}\\label{eq.concentration}\n\\mathbb{P}\\left( v_t^\\top (\\theta^\\star - \\hat{\\theta}_{m-1}) \\ge i\\delta \\big| v_t^\\top \\theta^\\star \\in [i\\delta, (i+1)\\delta), \\|v_t\\|_2 \\le \\sqrt{10\\log T}\\right) \\le \\exp\\left(-\\frac{c\\kappa i^2\\delta^2t_{m-1}'}{20d\\log T}\\right). \n\\end{align}\n\nCombining \\eqref{eq.partition}, \\eqref{eq.anticoncentration}, \\eqref{eq.concentration} and the choice of $\\delta$, we conclude that\n\\begin{align*}\n\\mathbb{E}\\left[v_t^\\top \\theta^\\star\\cdot \\mathbbm{1}(v_t^\\top \\theta^\\star \\ge 0, v_t^\\top \\hat{\\theta}_{m-1} \\le 0)\\cdot \\mathbbm{1}(\\|v_t\\|_2 \\le \\sqrt{10\\log T}) \\right] &\\le \\frac{d^{3\/2}\\log T}{\\sqrt{2\\pi \\kappa^3}t_{m-1}'\\|\\theta^\\star\\|_2} \\sum_{i=0}^\\infty (i+1)e^{-ci^2\/20} \\\\\n&\\le C\\cdot \\frac{d^{3\/2}\\log T}{\\kappa^{3\/2}t_{m-1}'\\|\\theta^\\star\\|_2}. 
\n\\end{align*}\nMoreover, since $v_t^\\top \\theta^\\star \\le 2$ almost surely and $\\mathbb{P}(\\|v_t\\|_2\\ge \\sqrt{10\\log T})\\le T^{-5}$, we also have\n\\begin{align*}\n\\mathbb{E}\\left[v_t^\\top \\theta^\\star\\cdot \\mathbbm{1}(v_t^\\top \\theta^\\star \\ge 0, v_t^\\top \\hat{\\theta}_{m-1} \\le 0)\\cdot \\mathbbm{1}(\\|v_t\\|_2 > \\sqrt{10\\log T}) \\right] \\le 2T^{-5}. \n\\end{align*}\nTherefore, by the choice of the grid, the expected total regret in the $m$-th batch is at most\n\\begin{align*}\n\\left(C\\cdot \\frac{d^{3\/2}\\log T}{\\kappa^{3\/2}t_{m-1}'\\|\\theta^\\star\\|_2} + 2T^{-5}\\right)\\cdot t_m' = O\\left(\\frac{d^{3\/2}\\log T}{\\kappa^{3\/2}\\|\\theta^\\star\\|_2}\\cdot \\left(\\frac{T}{d^2}\\right)^{1\/M} \\right). \n\\end{align*}\n\nThe first batch is handled in the same way as the upper bound proof of Theorem \\ref{thm.stochastic}. Specifically, the expected total regret in the first batch is \n\\begin{align*}\nO\\left( t_1'\\cdot \\sqrt{\\frac{\\log T}{d}} \\right) = O\\left(\\sqrt{d^3\\log T}\\left(\\frac{T}{d^2}\\right)^{\\frac{1}{M}}\\right) = O\\left(\\frac{\\sqrt{d^3\\log T}}{\\kappa^{3\/2}\\|\\theta^\\star\\|_2}\\left(\\frac{T}{d^2}\\right)^{\\frac{1}{M}} \\right).\n\\end{align*}\nFinally, summing up all batches $m=1,2,\\cdots,M$ completes the proof. \n\n\\subsection{Proof of the Lower Bound in Theorem \\ref{thm.problem-dependent}} The proof is entirely analogous to the lower bound proof of Theorem \\ref{thm.stochastic}. First we observe that by Lemma \\ref{lemma.lower_bound}, for any $\\Delta \\in [0,1]$ and fixed grid ${\\mathcal{T}} = \\{t_1, t_2, \\cdots, t_M\\}$ we have\n\\begin{align*}\n\\inf_\\pi \\sup_{\\theta^\\star: \\|\\theta^\\star\\|_2\\le 1} \\|\\theta^\\star\\|_2\\cdot \\mathbb{E}_{\\theta^\\star}[R_T(\\pi)] &\\ge \\Delta\\cdot \\inf_\\pi \\sup_{\\theta^\\star: \\Delta\\le \\|\\theta^\\star\\|_2\\le 1} \\mathbb{E}_{\\theta^\\star}[R_T(\\pi)] \\\\\n&\\ge \\Delta^2 \\cdot\\sum_{m=1}^M \\frac{t_m-t_{m-1}}{10\\sqrt{d}}\\exp\\left(-\\frac{16t_{m-1}\\Delta^2}{d^2}\\right). \n\\end{align*}\nNow define $s = \\min\\{m\\in [M]: t_m \\ge d^2 \\}$, which always exists due to the assumption $T\\ge d^2$. Choosing $\\Delta \\in \\{1,d\/\\sqrt{t_s},d\/\\sqrt{t_{s+1}},\\cdots,d\/\\sqrt{T}\\} \\subseteq [0,1]$ in the above inequality gives\n\\begin{align*}\n\\inf_\\pi \\sup_{\\theta^\\star: \\|\\theta^\\star\\|_2\\le 1} \\|\\theta^\\star\\|_2\\cdot \\mathbb{E}_{\\theta^\\star}[R_T(\\pi)] \\ge c\\cdot \\max\\left\\{\\frac{t_s}{\\sqrt{d}}, \\frac{d^{3\/2}t_{s+1}}{t_s}, \\frac{d^{3\/2}t_{s+2}}{t_{s+1}},\\cdots, \\frac{d^{3\/2}T}{t_{M-1}} \\right\\}\n\\end{align*}\nfor some absolute constant $c>0$. Finally, applying $\\max\\{a_1,\\cdots,a_n\\} \\ge \\sqrt[n]{a_1a_2\\cdots a_n}$ gives\n\\begin{align*}\n\\inf_{\\pi} \\sup_{\\theta^\\star: \\|\\theta^\\star\\|_2\\le 1} \\|\\theta^\\star\\|_2\\cdot \\mathbb{E}_{\\theta^\\star}[R_T(\\pi)] \\ge c\\cdot d^{3\/2}\\left(\\frac{T}{d^2}\\right)^{\\frac{1}{M-s+1}} \\ge c\\cdot d^{3\/2}\\left(\\frac{T}{d^2}\\right)^{\\frac{1}{M}},\n\\end{align*}\nas claimed. \n\n\\section{Conclusion}\n\nAs we have shown in this paper, sequential batch learning provides an interesting and nontrivial departure from the traditional online learning setting where feedback is immediately observed and incorporated into making the next decision. We studied sequential batch learning in the linear contextual bandits setting and provided an in-depth inquiry into the algorithms and theoretical performance. 
An important insight here is that the nature of the contexts (adversarial or stochastic) has a significant impact on the optimal achievable performance, as well as the algorithms that would achieve the minimax optimal regret bounds.\n\nSeveral questions immediately suggest themselves.\nFirst, in the stochastic context setting, our current regret upper bound\ndepends heavily on the Gaussian assumption of the contexts. It would be interesting to see how far we can move beyond the Gaussian family. \nIt is unlikely that the same result holds for any distribution; hence, characterizing a (hopefully large) class of distributions under which the same tight bounds are achievable would be interesting.\nAnother direction would be to look at more complex reward structures that go beyond linear bandits and see to what extent the current set of results can be generalized. We leave them for future work. \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Datasets and their Templates}\n\\label{sec:appendix:analysis}\n\n\\subsection{Division of Crowdsourcing Instructions into Minimal Tasks}\n\\label{sec:appendix:division:screenshots}\nFig.~\\ref{fig:subtsakdivision} shows an example of how a task is divided into multiple subtasks for the MC-TACO dataset. MC-TACO has five categories (Event Duration, Event Frequency, etc.). Each category contributes two subtasks: one for question generation and one for answer generation.\n\n\n\n\n\n\\paragraph{Number of tasks in each dataset.} \nFig.~\\ref{fig:no. of subtasks} illustrates how the number of steps in the data creation process varies across the 6 datasets. QASC and MC-TACO contain a relatively higher number of steps in the data creation process in comparison to DROP, Quoref, CosmosQA, and Winogrande. \n\n\n\\begin{figure}[H]\n\\centering\n \\includegraphics[scale=0.28,trim=2cm 4cm 0cm 3cm]{figures\/Number_of_subtasks.pdf}\n \\caption{Variations in the number of subtasks}\n \\label{fig:no. of subtasks}\n\\end{figure}\n\n\\subsection{Analysis of Crowdsourcing Templates}\nWe analyzed crowdsourcing templates of 6 datasets: CosmosQA~\\cite{huang2019cosmos}, \nDROP~\\cite{dua2019drop}, \nMC-TACO~\\cite{zhou2019going}, \nQASC~\\cite{khot2020qasc}, \nQuoref~\\cite{dasigi2019quoref}, and\nWinogrande~\\cite{sakaguchi2020winogrande}. Our intention behind the analysis is to identify similarities and differences across templates and subsequently decide on the collection of more templates.\n\\label{appendix:analysis:templates}\n\n\n\\paragraph{Size of the instructions.} We observe significant variation in size across the 6 datasets (Fig.~\\ref{fig:size inst}). In the case of QASC, the instruction size associated with each step of the data creation process is very high, whereas for Winogrande, it is exactly the opposite: the instruction size associated with each step of the data creation process is very low. Instead, the size of the common instruction (i.e., the instruction preceding the first step of the data creation process) is high in Winogrande; this is also seen for DROP. The major mode of instruction varies across datasets. Examples and instructions associated with each step of data creation respectively take up the majority of space in Quoref and CosmosQA. 
MC-TACO relies on examples to explain the crowdsourcing task, while Winogrande and QASC depend mostly on common instructions and instructions associated with each step of the data creation process, respectively, to explain the task to the crowdworker.\n\n\\paragraph{The number of positive\/negative examples.} \nVariation in the occurrence of \\textsc{Positive} and \\textsc{Negative} Examples across datasets is illustrated in Fig.~\\ref{fig:no. of examples}. Only Winogrande provides an equal number of \\textsc{Positive} and \\textsc{Negative} Examples. \nQASC instructions do not contain any \\textsc{Negative} Examples. \nOverall, DROP instructions contain a relatively higher number of examples than the other datasets.\n\n\\begin{figure}[H]\n\\centering\n \\includegraphics[width=0.96\\columnwidth ]{figures\/example_num.png}\n \\caption{Variation in the number of positive and negative examples}\n\\label{fig:no. of examples}\n\\end{figure}\n\n\n\n\n\\begin{figure}[H]\n \\centering\n \\includegraphics[width=0.96\\columnwidth]{figures\/instruction_size.pdf}\n \\caption{Variation in the number of sentences in the crowdsourcing instructions across datasets}\n \\label{fig:size inst}\n\\end{figure}\n\n\\paragraph{Presence of reasons\/suggestions in examples.} All datasets except QASC contain both \\textsc{Positive} and \\textsc{Negative} Examples. \nHowever, Quoref is the only dataset to provide \\textsc{Reasons} for all the \\textsc{Positive} and \\textsc{Negative} Examples. There are explanations associated with each of the \\textsc{Negative} Examples, but the presence of explanations associated with \\textsc{Positive} Examples varies across datasets. Finally, Quoref is the only dataset to provide \\textsc{Suggestions} along with the \\textsc{Reasons} associated with the \\textsc{Negative} Examples.\n\n\\begin{comment}\n\\paragraph{Dimensions of Input and Output:}The input dimension of a step is defined as the number of previous step outputs that are fed as input. Parallely, the output dimension of a step is the number of distinct outputs the model needs to produce in that step-- for example, if a model has to generate both a question and an answer in a step, the output dimension will be 2. CosmosQA and QASC have relatively high dimensional instances, whereas Quoref and MC-TACO have relatively low dimensional instances.\n\\end{comment}\n\n\\begin{figure}\n\\centering\n \\includegraphics[width=0.5\\textwidth,trim=0cm 0cm 0cm 0cm]{figures\/sub-task.pdf}\n \\caption{\n Dividing a data creation task into multiple subtasks for the MC-TACO dataset. \n }\n \\label{fig:subtsakdivision}\n\\end{figure}\n\n\n\n\n\\subsection{Qualitative Analysis}\n\\paragraph{Writing Style.} There is significant variation in writing style across the datasets, even among those datasets \nthat have a common objective (e.g., DROP, Quoref and QASC). \nDROP instructions say \\textit{\"There is an AI running in the background which will also try to answer the question. You won't be able to submit the question if the AI gives the same response.\"} The writing style in Quoref, however, is different: \\textit{\"We also want you to avoid questions that can be answered correctly by someone without actually understanding the paragraph. ...\"} \n\n\\paragraph{Information.} We observe that the instructions of a dataset sometimes contain information that is relevant to several other datasets, even though the instructions of those datasets do not mention it. 
For example, Quoref, DROP and CosmosQA are datasets that are all based on reading comprehension tasks. CosmosQA contains a step in the data creation process asking users to skip passages containing inappropriate or offensive content. This information is also relevant to Quoref and DROP, but is not mentioned in their respective instructions.\n\n\n\n\n\\begin{figure}[t]\n \\centering\n \n \\includegraphics[scale=0.36,trim=0.1cm 0.1cm 0.1cm 0.1cm]{figures\/Task_specification.pdf}\n \n \\caption{Variation in Task Specification: Quoref contains a single line instruction whereas the CosomosQA contains a detailed instruction. QASC on the other hand, contains examples along with instruction.}\n \\label{fig:task_specification}\n\\end{figure}\n\n\n\n\\paragraph{Hardness.} In a typical crowdsourcing task, certain tasks may be harder than the others, often these are the core tasks, e.g.: question generation, adversarial data creation, etc. Additional information, especially in the form of tips is always helpful in solving these hard tasks. Figure~\\ref{fig:task_specification} illustrates that the task of question generation is stated differently in Quoref, CosmosQA, and QASC. QASC mentions an easy and detailed way to create questions, whereas CosmosQA mentions several different attributes of a good quality question. Knowing about the CosmosQA and QASC question generation processes may help with data creation for Quoref and other such question generation tasks, where less additional information is provided regarding question creation. \n\n\n\n\n\n\\subsection{Data Curation Effort}\n\\label{appendix:subsect:curation}\nTable \\ref{tab:datacuration} shows the effort distribution in the data curation process of \\textsc{Natural Instructions}{}. Step-8 which involves parsing instances is the main bottleneck in the data curation process. Table \\ref{tab:structure} shows the detailed structure of tasks in \\textsc{Natural Instructions}{}. Fig.~\\ref{fig:examplesfull} shows examples of four different tasks in \\textsc{Natural Instructions}{}.\n\n\\begin{table}[h]\n \\centering\n \\footnotesize\n \n \\begin{tabular}{m{0.5cm}p{4.5cm}p{1.5cm}}\n \\toprule\n step & task & time per task \\\\ \n \\midrule\n 1 & Identify crowdsourced dataset and engage with their authors. & 20-30 mins \\\\\n \n 2 & Go through the template and understand the task. & 10-15 mins \\\\ \n 3 & Manually fill fields in the schema with content from the template. & 30-45 mins \\\\ \n 4 & Iterate over the instructions to ensure their clarity while eliminating the repeated content. Fix writing issue in examples, also typos etc. \n \n & 2-3 hrs\\\\ \n 5 & Create negative examples if not present. Add the missing explanations to the examples. & 1-2 hrs \\\\ \n \n 6 & Extract the input\/output instances from raw crowdsourcing annotations. & 0.5-24 hrs \\\\ \n \n 7 & Final inspections of the data to verify the data quality \n \n & 0.25- 2hrs \\\\\n \\midrule\n & Overall & 6-34 hrs\\\\\n \\bottomrule\n \\end{tabular}\n \n \\caption{Steps taken to curate each task in \\textsc{Natural Instructions}{} and their estimated times.\n \n }\n \\label{tab:datacuration}\n\\end{table}\n\n\n\n\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[scale=0.75,trim=0.7cm 0.5cm 0.5cm 1.5cm]{figures\/examples_detailed.pdf}\n \\caption{\n Examples from \\textsc{Natural Instructions}{}. \n Each task follows the schema provided in Fig.~\\ref{fig:schema_plate}. 
\n }\n \\label{fig:examplesfull}\n\\end{figure*}\n\n\n\\begin{table*}\n \\centering\n \\small\n \\begin{adjustbox}{max width=\\textwidth}\n \\begin{tabular}{llcc}\n \\toprule\n task id & title& source dataset & task category\\\\\n \\midrule\n1 & task001\\_quoref\\_question\\_generation & Quoref & Question Generation \\\\\n2 & task002\\_quoref\\_answer\\_generation & Quoref & Answer Generation \\\\\n\\midrule \n3 & task003\\_mctaco\\_question\\_generation\\_event\\_duration & MC-TACO & Question Generation \\\\\n4 & task004\\_mctaco\\_answer\\_generation\\_event\\_duration & MC-TACO & Answer Generation \\\\\n5 & task005\\_mctaco\\_wrong\\_answer\\_generation\\_event\\_duration & MC-TACO & Incorrect Answer Generation \\\\\n6 & task006\\_mctaco\\_question\\_generation\\_transient\\_stationary & MC-TACO & Question Generation \\\\\n7 & task007\\_mctaco\\_answer\\_generation\\_transient\\_stationary & MC-TACO & Answer Generation \\\\\n8 & task008\\_mctaco\\_wrong\\_answer\\_generation\\_transient\\_stationary & MC-TACO & Incorrect Answer Generation \\\\\n9 & task009\\_mctaco\\_question\\_generation\\_event\\_ordering & MC-TACO & Question Generation \\\\\n10 & task010\\_mctaco\\_answer\\_generation\\_event\\_ordering & MC-TACO & Answer Generation \\\\\n11 & task011\\_mctaco\\_wrong\\_answer\\_generation\\_event\\_ordering & MC-TACO & Incorrect Answer Generation \\\\\n12 & task012\\_mctaco\\_question\\_generation\\_absolute\\_timepoint & MC-TACO & Question Generation \\\\\n13 & task013\\_mctaco\\_answer\\_generation\\_absolute\\_timepoint & MC-TACO & Answer Generation \\\\\n14 & task014\\_mctaco\\_wrong\\_answer\\_generation\\_absolute\\_timepoint & MC-TACO & Incorrect Answer Generation \\\\\n15 & task015\\_mctaco\\_question\\_generation\\_frequency & MC-TACO & Question Generation \\\\\n16 & task016\\_mctaco\\_answer\\_generation\\_frequency & MC-TACO & Answer Generation \\\\\n17 & task017\\_mctaco\\_wrong\\_answer\\_generation\\_frequency & MC-TACO & Incorrect Answer Generation \\\\\n18 & task018\\_mctaco\\_temporal\\_reasoning\\_presence & MC-TACO & Classification \\\\\n19 & task019\\_mctaco\\_temporal\\_reasoning\\_category & MC-TACO & Classification \\\\\n20 & task020\\_mctaco\\_span\\_based\\_question & MC-TACO & Classification \\\\\n21 & task021\\_mctaco\\_grammatical\\_logical & MC-TACO & Classification \\\\\n\\midrule \n22 & task022\\_cosmosqa\\_passage\\_inappropriate\\_binary & Cosmosqa & Classification \\\\\n23 & task023\\_cosmosqa\\_question\\_generation & Cosmosqa & Question Generation \\\\\n24 & task024\\_cosmosqa\\_answer\\_generation & Cosmosqa & Answer Generation \\\\\n25 & task025\\_cosmosqa\\_incorrect\\_answer\\_generation & Cosmosqa & Incorrect Answer Generation \\\\\n\\midrule \n26 & task026\\_drop\\_question\\_generation & DROP & Question Generation \\\\\n27 & task027\\_drop\\_answer\\_type\\_generation & DROP & Classification \\\\\n28 & task028\\_drop\\_answer\\_generation & DROP & Answer Generation \\\\\n\\midrule \n29 & task029\\_winogrande\\_full\\_object & Winogrande & Minimal Text Modification \\\\\n30 & task030\\_winogrande\\_full\\_person & Winogrande & Minimal Text Modification \\\\\n31 & task031\\_winogrande\\_question\\_generation\\_object & Winogrande & Question Generation \\\\\n32 & task032\\_winogrande\\_question\\_generation\\_person & Winogrande & Question Generation \\\\\n33 & task033\\_winogrande\\_answer\\_generation & Winogrande & Answer Generation \\\\\n34 & task034\\_winogrande\\_question\\_modification\\_object & Winogrande & Minimal Text Modification 
\\\\\n35 & task035\\_winogrande\\_question\\_modification\\_person & Winogrande & Minimal Text Modification \\\\\n\\midrule \n36 & task036\\_qasc\\_topic\\_word\\_to\\_generate\\_related\\_fact & QASC & Minimal Text Modification \\\\\n37 & task037\\_qasc\\_generate\\_related\\_fact & QASC & Minimal Text Modification \\\\\n38 & task038\\_qasc\\_combined\\_fact & QASC & Minimal Text Modification \\\\\n39 & task039\\_qasc\\_find\\_overlapping\\_words & QASC & Verification \\\\\n40 & task040\\_qasc\\_question\\_generation & QASC & Question Generation \\\\\n41 & task041\\_qasc\\_answer\\_generation & QASC & Answer Generation \\\\\n42 & task042\\_qasc\\_incorrect\\_option\\_generation & QASC & Incorrect Answer Generation \\\\\n\\midrule \n43 & task043\\_essential\\_terms\\_answering\\_incomplete\\_questions & Essential Terms & Answer Generation \\\\\n44 & task044\\_essential\\_terms\\_identifying\\_essential\\_words & Essential Terms & Verification \\\\\n\\midrule \n45 & task045\\_miscellaneous\\_sentence\\_paraphrasing & Miscellaneous & Minimal Text Modification \\\\\n46 & task046\\_miscellaenous\\_question\\_typing & Miscellaenous & Classification \\\\\n47 & task047\\_miscellaenous\\_answering\\_science\\_questions & Miscellaenous & Answer Generation \\\\\n\\midrule \n48 & task048\\_multirc\\_question\\_generation & MultiRC & Question Generation \\\\\n49 & task049\\_multirc\\_questions\\_needed\\_to\\_answer & MultiRC & Classification \\\\\n50 & task050\\_multirc\\_answerability & MultiRC & Classification \\\\\n51 & task051\\_multirc\\_correct\\_answer\\_single\\_sentence & MultiRC & Answer Generation \\\\\n52 & task052\\_multirc\\_identify\\_bad\\_question & MultiRC & Classification \\\\\n53 & task053\\_multirc\\_correct\\_bad\\_question & MultiRC & Minimal Text Modification \\\\\n54 & task054\\_multirc\\_write\\_correct\\_answer & MultiRC & Answer Generation \\\\\n55 & task055\\_multirc\\_write\\_incorrect\\_answer & MultiRC & Incorrect Answer Generation \\\\\n56 & task056\\_multirc\\_classify\\_correct\\_answer & MultiRC & Classification \\\\\n57 & task057\\_multirc\\_classify\\_incorrect\\_answer & MultiRC & Classification \\\\\n58 & task058\\_multirc\\_question\\_answering & MultiRC & Answer Generation \\\\\n\\midrule \n59 & task059\\_ropes\\_story\\_generation & ROPES & Minimal Text Modification \\\\\n60 & task060\\_ropes\\_question\\_generation & ROPES & Question Generation \\\\\n61 & task061\\_ropes\\_answer\\_generation & ROPES & Answer Generation \\\\\n \\bottomrule\n \\end{tabular}\n \\end{adjustbox}\n \\caption{Detailed set of tasks included in \\textsc{Natural Instructions}{}}\n \\label{tab:structure}\n\\end{table*}\n\n\n\\clearpage\n\\onecolumn\n\n\\changed{\n\\subsection{Qualitative Comparison to PromptSource}\n\\label{subsec:promptsource}\nWe provide a comparison between our proposed dataset and PromptSource~\\cite{sanh2021multitask}. \nPromptSource tasks are mainly focused on the common NLP downstream tasks (such as question-answering, coreference, NLI, etc). \nHowever, since we create tasks from various steps (including the intermediate steps) in a data creation process, our instructions contain a broader variety of tasks. For example, tasks for chaining facts (task 38; Table~\\ref{tab:structure}), question typing (task 27; Table~\\ref{tab:structure}) or detecting inappropriate content (task 22; Table~\\ref{tab:structure}) are unique additions in \\textsc{Natural Instructions}{}. 
\nAdditionally, since our instructions were originally written by various researchers and targeted at crowdworkers, they are elaborate and contain the complete definition of each task. \nThis is somewhat evident from the observation that GPT3 leads to higher performance on our instructions (Table~\\ref{tab:prompt:source:gpt3:eval}). \nLast but not least, since we represent the instructions in a structured format, we are able to ablate various elements of the instructions (definition, negative\/positive examples, etc.) and empirically quantify their contributions (\\S\\ref{sec:experiments}). \n}\n\n\\begin{table}[h]\n \\centering\n \\small\n \\begin{tabular}{clcc}\n \\toprule \n Task & Model & PromptSource & \\textsc{Natural Instructions}{} \\\\\n \\midrule\n \\multirow{2}{*}{ Quoref QA (002) } & GPT3-Instruct & 43 & {\\bf 47} \\\\\n & GPT3 & 2 & {\\bf 13} \\\\\n \\multirow{2}{*}{ DROP QA (028) } & GPT3-Instruct & 6 & {\\bf 10} \\\\\n & GPT3 & 2 & {\\bf 3} \\\\\n \\bottomrule\n \\end{tabular}\n \\caption{ \n Comparing zero-shot performance of GPT3 on our instructions vs. PromptSource. \n The instructions curated in this work, despite being lengthier, lead to higher performance. \n \n }\n \\label{tab:prompt:source:gpt3:eval}\n\\end{table}\n\n\\begin{table*}[h]\n \\centering\n \n \n \\includegraphics[scale=0.88,trim=1.4cm 13.4cm 1.2cm 1.85cm,clip=true]{figures\/comparisonWithPromptSource-3.pdf}\n \\caption{Qualitative comparison of the task instructions for several shared tasks between \\textsc{Natural Instructions}{} and PromptSource~\\cite{sanh2021multitask}.}\n \\label{tab:prompt:source}\n\\end{table*}\n\n\\twocolumn\n\\clearpage\n\n\\section{Building Baselines for \\textsc{Natural Instructions}{}}\nIn this section, we provide several details on the baselines included in our work. \n\n\\subsection{Encoding of the instructions}\n\\label{appendix:subsect:encoding}\n\nAccording to our schema (\\S\\ref{subsec:schema}), each instruction $I_t$ for the $t$-th task is a set that contains the following fields:\n$$\nI_t = \\setOf{ \n \\I{t}{title}, \n \\I{t}{def.}, \n \\I{t}{avoid}, \n \\I{t}{emph.}, \n \\I{t}{prompt},\n \\I{t}{pos. ex.},\n \\I{t}{neg. ex.}\n }\n$$\n\n\nTo feed the instances to LMs, we first encode them into plain text. \nLet $enc(I, x)$ denote a function that maps a given instruction $I$ and input instance $x$ to plain text. \nEvidently, there are many choices for this function. 
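For concreteness, the following minimal sketch illustrates one possible implementation of such a mapping, mirroring the \\textsc{Prompt + Definition} encoding defined below; it is purely illustrative (not our exact implementation), and the dictionary field names are placeholders for the schema fields above.\n\\begin{verbatim}\n# Illustrative sketch (not the exact code used in\n# our experiments): map an instruction and an input\n# instance x to the \"prompt + definition\" encoding.\ndef encode_prompt_definition(instruction, x):\n    # `instruction` is assumed to be a dict holding\n    # the schema fields, e.g. \"definition\", \"prompt\",\n    # \"pos_examples\", \"neg_examples\", ...\n    return (\"Definition: \" + instruction[\"definition\"]\n            + \"\\n\" + \"Prompt: \" + instruction[\"prompt\"]\n            + \"\\n\" + \"input: \" + x\n            + \"\\n\" + \"output:\")\n\\end{verbatim}\nThe encodings we study are assembled in this manner and differ only in which instruction fields are concatenated before the input instance.\n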
\nIn our study, we consider the following encodings: \n\n\\paragraph{\\textsc{No-instructions} encoding.}\nThis encoding is the conventional paradigm where no instructions exist: \n\n\\begin{equation} \\label{eq1}\n \\small\n \\begin{split}\n enc(I_t, x) := & \\mathtt{\\small input:} \\; x \\\\ \n & \\mathtt{\\small output:} \\textnormal{''}\n \\end{split}\n\\end{equation}\n\n\n\\paragraph{\\textsc{prompt} encoding.}\nIn this encoding, we append the prompt message before the input:\n\n\\begin{equation} \\label{eq2}\n \\small\n \\begin{split}\n enc(I_t, x) := & \\mathtt{\\small Prompt:} \\; \\I{t}{prompt} \\\\ \n & \\mathtt{\\small input:} \\; x \\\\ \n & \\mathtt{\\small output:} \\textnormal{''}\n \\end{split}\n\\end{equation}\n\n\\paragraph{\\textsc{Prompt + Definition} encoding.}\nIn this encoding, the prompt message and the task definition appear before the input: \n\\begin{equation} \\label{eq3}\n \\small\n \\begin{split}\n enc(I_t, x) := & \\textnormal{``}\\mathtt{\\small Definition:} \\; \\I{t}{def.} \\\\ \n & \\mathtt{\\small Prompt:} \\; \\I{t}{prompt} \\\\ \n & \\mathtt{\\small input:} \\; x \\\\\n & \\mathtt{\\small output:} \\textnormal{''}\n \\end{split}\n\\end{equation}\nIntuitively, this encoding is more informative and more complex than ``prompt'' encoding. \n\n\\paragraph{\\textsc{Full Instructions} encoding.}\nThis encoding contains all the instruction content: \n\\begin{equation} \n \\label{eq4}\n \\small\n \\begin{split}\n enc(I_t, x) := \n & \\textnormal{``}\\mathtt{\\small Definition:} \\; \\I{t}{def.} \\\\ \n & \\mathtt{\\small Prompt:} \\; \\I{t}{prompt} \\\\ \n & \\mathtt{\\small Things \\; to \\; Avoid:} \\; \\I{t}{avoid.} \\\\ \n & \\mathtt{\\small Emphasis \\& Caution:} \\; \\I{t}{emph.} \\\\ \n & \\textnormal{``}\\mathtt{\\small Negative Example1-} \\\\ \n & \\hspace{0.7cm} \\mathtt{\\small input:} \\; \\I{t}{pos. ex.}\\mathtt{(input)} \\\\ \n & \\hspace{0.7cm} \\mathtt{\\small output:} \\; \\I{t}{pos. ex.}\\mathtt{(output)} \\\\ \n & \\hspace{0.7cm} \\mathtt{\\small reason:} \\; \\I{t}{pos. ex.}\\mathtt{(reason)} \\\\ \n & \\mathtt{\\small Negative Example2-} \\\\ \n & \\hspace{0.7cm} \\mathtt{\\small \\hdots } \\\\ \n & \\textnormal{``}\\mathtt{\\small Positive Example1-} \\\\ \n & \\hspace{0.7cm} \\mathtt{\\small input:} \\; \\I{t}{pos. ex.}\\mathtt{(input)} \\\\ \n & \\hspace{0.7cm} \\mathtt{\\small output:} \\; \\I{t}{pos. ex.}\\mathtt{(output)} \\\\ \n & \\hspace{0.7cm} \\mathtt{\\small reason:} \\; \\I{t}{pos. ex.}\\mathtt{(reason)} \\\\ \n & \\mathtt{\\small Positive Example2-} \\\\ \n & \\hspace{0.7cm} \\mathtt{\\small \\hdots } \\\\ \n & \\mathtt{\\small input:} \\; x \\\\ \n & \\mathtt{\\small output:} \\textnormal{''}\n \\end{split}\n\\end{equation}\n\nwhere $enc_{\\textnormal{ex}} (I_t)$ is an alternating encoding positive and negative examples. We include as many examples as possible, before exceeding the input limit. \n\n\\begin{comment}\n\\newcommand{\\mathrel{+}=}{\\mathrel{+}=}\n\\begin{equation*} \n \\begin{split}\n & \\mathtt{for \\; (p, n) \\; in \\; zip(} \\I{t}{pos. ex.}, \\I{t}{neg. 
ex.} \\mathtt{):} \\\\ \n & \\hspace{0.5cm} enc_{\\textnormal{ex}} (I_t) \\mathrel{+}= \\\\ \n & \\hspace{0.99cm} \\textnormal{``}\\mathtt{\\small Positive Example-} \\\\ \n & \\hspace{0.99cm} \\mathtt{\\small input:} \\; \\mathtt{p}_{\\textnormal{\\tiny input}} \\; \\mathtt{\\small output:} \\; \\mathtt{p}_{\\textnormal{\\tiny output}} \\\\ \n & \\hspace{0.99cm} \\mathtt{\\small reason:} \\; \\mathtt{p}_{\\textnormal{\\tiny reason}} \\\\ \n & \\hspace{0.99cm} \\mathtt{\\small Negative Example-} \\\\ \n & \\hspace{0.99cm} \\mathtt{\\small input:} \\; \\mathtt{n}_{\\textnormal{\\tiny input}} \\; \\mathtt{\\small output:} \\; \\mathtt{n}_{\\textnormal{\\tiny output}} \\\\ \n & \\hspace{0.99cm} \\mathtt{\\small reason:} \\; \\mathtt{n}_{\\textnormal{\\tiny reason}} \\;\n \\mathtt{\\small suggestion:} \\; \\mathtt{n}_{\\textnormal{\\tiny sugg.}} \n \\textnormal{''} \n \\end{split}\n\\end{equation*}\n\\end{comment}\n\n\\paragraph{\\textsc{Positive Examples} encoding.}\nThis encoding contains only positive examples of the subtask (no task description, etc). \n\n\\begin{equation} \n \\label{eq5}\n \\small\n \\begin{split}\n enc(I_t, x) := \n \n & \\hspace{0.7cm} \\mathtt{\\small input:} \\; \\I{t}{pos. ex.}\\mathtt{(input)} \\\\ \n & \\hspace{0.7cm} \\mathtt{\\small output:} \\; \\I{t}{pos. ex.}\\mathtt{(output)} \\\\ \n \n \n & \\hspace{0.7cm} \\mathtt{\\small \\hdots } \\\\ \n & \\mathtt{\\small input:} \\; x \\\\ \n & \\mathtt{\\small output:} \\textnormal{''}\n \\end{split}\n\\end{equation}\nSuch example-only have been used in several recent studies in the field~\\cite{zhao2021calibrate}. \n\n\\clearpage\n\\onecolumn\n\n\\twocolumn\n\n\\section{Analysis on Baseline Results}\n\\label{sec:appendix:banalysis}\n\n\n\n\\changed{\n\\subsection{Comparison to Raw Instructions}\n\\label{subsec:efratlevycomparison}\nWe seek to understand the value of breaking the tasks into sub-tasks and mapping them into our proposed schema (\\S\\ref{sec:mapping}). \nWe compute performance of raw instructions (first sub-task of four datasets), \nin the same vein as \n\\citep{efrat2020turking}'s setup. \nWe compare this to our \\textsc{Full Instruction - neg examples} encoding. \nThe results in Table~\\ref{tab:comparison:raw:instructions} indicate that GPT3 leads to higher performance with our encoding (2nd row) compared to raw instructions (first row). \nWeak performance of LMs on raw instructions aligns with \\citep{efrat2020turking}'s finding that ``language model performs poorly''. \n\n\\newcolumntype{R}[2]{%\n >{\\adjustbox{angle=#1,lap=\\width-(#2)}\\bgroup}%\n l%\n <{\\egroup}%\n}\n\\newcommand*\\rot{\\multicolumn{1}{R{30}{1em}}\n\n\n\\begin{table}[h]\n \\small\n \\centering\n \\begin{tabular}{ccccc}\n \\toprule\n & \\rot{Quoref} & \\rot{MCTaco} & \\rot{CosmosQA} & \\rot{QASC} \\\\\n \\midrule\n \\makecell{raw instructions} & 12.5 & 5.00 & 6.9 & 3.7 \\\\\n \\makecell{our schema} & 25.8 & 42.6 & 17.7 & 51.3 \\\\ \n \\bottomrule\n \\end{tabular}\n \\caption{Comparing GPT3 performance on raw crowdsourcing instructions vs. our encoding. All numbers are ROUGE-L.} \n \\label{tab:comparison:raw:instructions}\n\\end{table}\n\nThis might be partly due to the verbose language of the raw instructions: \nthe average length of the raw instructions is $2.5k$ tokens, in comparison to $950$ tokens for our encoding. \nWhile repetition often helps human understanding, concise instructions seem to be more effective for computers. 
\n}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\begin{comment}\n\\subsection{An Ablation Study of Instructional Elements}\n\\label{sec:ablation:study}\nWe conduct an ablation study with GPT3 on 3 distinct tasks (answer generation from Winogrande; question generation from QASC; verifying temporal reasoning category of a given question from MC-TACO). \nTable~\\ref{tab:ablation:subset} (top) shows the effect of eliminating various fields in the encoding while Table~\\ref{tab:ablation:subset} (bottom) indicates the gains from adding each field. \nThe overall observation is that GPT3 benefits the most from \\emph{positive examples}, mildly from \\emph{definition}, and deteriorates with \\emph{negative examples}. \nWe hypothesize it is easier for GPT3 to mimic the patterns in positive examples while utilizing \\emph{negative examples} requires deeper understanding. \n\n\n\n\n\n\\begin{table}[h]\n \\centering\n \n \n \n \n \n \\includegraphics[scale=0.73,trim=8.7cm 5.7cm 2cm 1.9cm]{figures\/ablation-subset.pdf}\n \n \\caption{An ablation study of the different fields included in \\textsc{Natural Instructions}{} based on GPT3. This model benefits the most from \\textsc{positive} examples and the least from \\textsc{negative} examples. \n }\n \\label{tab:ablation:subset}\n\\end{table}\n\\newpage\n\\end{comment}\n\n\n\n\\begin{comment}\n\\begin{table}[ht]\n \\centering\n \\small\n \\resizebox{\\columnwidth}{!}{\n \\begin{tabular}{lcc}\n \\toprule\n error type & GPT3 & BART \\\\\n \\midrule\n \n \n \n \n does not follow instruction and generate an invalid question & 14 & 8\\\\\n \n generates a nonsensical\/vague question & 4 & 47\\\\\n copies the given fact or a subset of it & 8 & 3 \\\\\n explains the question after generating it & 6 & 0\\\\\n generates a yes\/no question & 12 & 4\\\\\n generates candidate answers as output &4 & 0\\\\\n generates questions whose answer does not exist &4 &3\\\\\n \\makecell[l]{generates generic questions independent\\\\ of the given context} &6 &0\\\\\n \\bottomrule\n \\end{tabular}\n }\n \\caption{\n Percentage of errors on QASC QG task (\\S\\ref{sec:error:analysis}). \n The numbers do not sum to 100 since the error types are not mutually exclusive. \n \n }\n \\label{Tab: Error Analysis}\n\\end{table}\n\\end{comment}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\begin{comment}\n\\begin{table*}[t]\n \\small \n \\centering\n \\resizebox{\\linewidth}{!}{\n\\begin{tabular}{lcccccc|l||cccccc|l}\n\\toprule\n& \\multicolumn{7}{c}{BART} & \\multicolumn{7}{c}{GPT3} \\\\\n\\cmidrule(r){2-8} \\cmidrule(r){9-15} \n task category \u2192 & QG & AG & CF & IAG & MM & VF & avg & QG & AG & CF & IAG & MM & VF & avg \\\\\n\\midrule\n\\textsc{No Instruction} & 26 & 6 & 0 & 21 & 33 & 7 & 13 & - & - & - & - & - & - & - \\\\\n\\midrule\n\\textsc{prompt} & 27 & 22 & 7 & 22 & 34 & \\textbf{9} & 20 & 33 & 32 & 14 & 13 & \\textbf{73} & 16 & 30 \\\\\n{\\ \\ \\ +\\textsc{definition}} & 35 & 24 & 50 & \\textbf{25} & 36 & 7 & 30$\\uparrow$ (+50) & 36 & 35 & 40 & 14 & 70 & 16 & 35$\\uparrow$ (+17)\\\\\n{ \\ \\ \\ +\\textsc{things to avoid}} & 33 & 24 & 4 & 24 & \\textbf{58} & \\textbf{9} & 25$\\uparrow$ (+25) & 28 & 33 & 11 & 16 & 68 & 14 & 28$\\downarrow$ (-7) \\\\\n{\\ \\ \\ +\\textsc{emphasis}} & 38 & 23 & 16 & \\textbf{26} & 49 & 3 & 26$\\uparrow$ (+30) & 29 & 28 & 18 & 16 & 72 & 16 & 30 \\\\\n{\\ \\ \\ +\\textsc{pos. examp.}} & 53 & 22 & 14 & \\textbf{25} & 17 & 7 & 23$\\uparrow$ (+15) & \\textbf{43} & 49 & 29 & 21 & 70 & \\textbf{36} & 41$\\uparrow$ (+37) \\\\\n{\\ \\ \\ +\\textsc{definition+pos. 
examp.}} & 51 & 23 & \\textbf{56} & \\textbf{25} & 37 & 6 & 33$\\uparrow$ (+65) & \\textbf{43} & 50 & \\textbf{45} & \\textbf{23} & 70 & 32 & \\textbf{44}$\\uparrow$(+47) \\\\\n{\\ \\ \\ +\\textsc{pos, neg ex+ explan.}} & 50 & 21 & 27 & 25 & 50 & 7 & 30 $\\uparrow$ (+50) & 32 & 19 & 8 & 12 & 61 & 13 & 24$\\downarrow$(-20) \\\\\n\\textsc{pos. examp.} & \\textbf{55} & 6 & 18 & \\textbf{25} & 8 & 6 & 20 & 30 & 32 & 15 & 16 & 68 & 23 & 31$\\uparrow$(+3) \\\\\n\\midrule\n\\textsc{Full Instruction} & 46 & 25 & 52 & 25 & 35 & 7 & 32$\\uparrow$ (+60) & 33 & 18 & 8 & 12 & 60 & 11 & 24$\\downarrow$(-20) \\\\\n{\\ \\ \\ -\\textsc{ examples }} & 40 & 24 & 36 & 25 & 55 & 8 & 31$\\uparrow$ (+55) & 31 & 34 & 39 & 14 & 69 & 13 & 33$\\uparrow$(+10) \\\\\n{\\ \\ \\ - \\textsc{neg. examp.}} & 52 & \\textbf{30} & 50 & \\textbf{25} & 47 & 8 & \\textbf{35}$\\uparrow$ (+75) & \\textbf{43} & \\textbf{54} & 44 & 21 & 70 & 32 & \\textbf{44}$\\uparrow$(+47) \\\\\n\\bottomrule\n\\end{tabular}\n}\n \\caption{\n Full BART and GPT3 results with various input encodings for different task categories, under random split (\\S\\ref{subsec:split}).\n Both models show improved results when encoded with instructions, comparing relative gains indicated in the `avg' columns (in percentage compared to \\textsc{prompt} encoding.)\n Category names: QG: Question Generation, AG: Answer Generation, CF: Classification, IAG: Incorrect Answer Generation, MM: Minimal Text Modification, VF: Verification.\n \n }\n \\label{tab:random:splitfull2}\n\\end{table*}\n\\end{comment}\n\n\n\n\\begin{comment}\n\\begin{figure*}\n \\centering\n \\begin{subfigure}[b]{0.4\\textwidth}\n \\caption{GPT3}\n \\includegraphics[scale=0.62,trim=0cm 0cm 0cm 0.1cm ]{figures\/gains-categories-gpt.pdf}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.4\\textwidth}\n \\caption{BART}\n \\includegraphics[scale=0.62,trim=0cm 0cm 0cm 0.1cm ]{figures\/gains-categories-bart.pdf}\n \\end{subfigure}\n \n \\caption{\n GPT3 and BART were evaluated with various encoding and various categories of tasks. \n The benefit of instructions to the models depends on the semantics of the task. For instance, for GPT3 (left) \\emph{minimal text modification} category benefits a lot, while the benefits to \\emph{verification} tasks are minimal. \n }\n \\label{fig:gains:per:categories}\n\\end{figure*}\n\\end{comment}\n\n\\begin{comment}\n\\subsection{Generalization vs. number of positive examples}\n\\label{subsection:numberofpositiveexamples}\nFig.~\\ref{fig:GPTexample} and \\ref{fig:BARTexample} illustrates the performance variation of models with respect to the number of examples. Clearly, addition of examples is not helping GPT3 and BART. Note that model performance with just the prompt+definition encoding is 35 in case of GPT3 and 30 in case of BART. This may suggest that the effort in creating many examples can be utilized to improve other aspects of Instruction. Another demerit of larger number of examples is that they increases input token size which increases the API usage cost in case of GPT3 and training time and higher memory usage in case of BART.\n\\end{comment}\n\n\\clearpage\n\n\\begin{comment}\n\\onecolumn\n\n\\subsection{User Study to Find Important Task-Specific Instruction Fields}\n\\label{subsec:appendix:user:study}\nWe ask our quality assessment annotators to also specify which instruction fields help them understand the task and answer prompts. 
For each of the 12 tasks in our evaluation set, we ask: \\textit{Which instruction field helps you the most to understand the task and answer questions and why? Remember, on removing this field significant major information should get lost.} We compile these results category-wise, and present them in Table \\ref{Tab: User Study}. In particular, there are two tasks Classification (CF) and Minimal Text Modification (MM) for which humans find only a single instruction field to be important. We find that models also find the same fields to be most important, as evinced in Table \\S\\ref{tab:random:splitfull}), where the performance of models with these fields is higher than the rest. Interestingly, this is similar to the patterns observed in the model performance (Table \\S\\ref{tab:random:splitfull}).\n\n\\begin{figure*}[]\n \\centering\n \\includegraphics[scale=0.55,trim=0.7cm 1cm 0.1cm 0cm]{figures\/example_variation_GPT.pdf}\n \\caption{GPT3 performance as a function of the number of examples in its encoding. The number of examples is limited by three upperbounds: 3, 10 and 70. This shows that addition of examples is not helping GPT3.\n }\n \\label{fig:GPTexample}\n\\end{figure*}\n\\begin{figure*}\n \\centering\n \\includegraphics[scale=0.55,trim=0.7cm 1cm 0.1cm 0cm]{figures\/example_variation_BART.pdf}\n \\caption{BART performance as a function of the number of examples in its encoding. The number of examples is limited by two upperbounds: 3 and 10. This shows that addition of examples is not helping BART. Since BART's maximum token size is 1024, it can not fit a lot examples unlike GPT3, so we did not experiment further with larger number of examples.\n }\n \\label{fig:BARTexample}\n\\end{figure*}\n\\end{comment}\n\n\n\n\\begin{comment}\n\\begin{table*}\n \\centering\n \n \\includegraphics[scale=0.65,trim=1.8cm 6.5cm 0cm 3cm]{figures\/ablation-polished.pdf}\n \n \\caption{Detailed results of the encoding ablation performed on three distinct subtasks.}\n \\label{tab:ablation:all}\n\\end{table*}\n\\end{comment}\n\n\\begin{comment}\n\\begin{table*}\n \\centering\n \\includegraphics[scale=0.65,trim=1.7cm 8.9cm 0cm 2cm]{figures\/results_table_det.pdf}\n \n \\caption{\n Empirical results \\textsc{Natural Instructions}{}. \n The best numbers among the four encodings are indicated with \\textbf{bold}. \n The first row is {\\color{gray} grayed out} since it is our oracle upperbound.\n }\n \\label{tab:results:detail}\n\\end{table*}\n\\end{comment}\n\\subsection{Evaluating Generalization Across Tasks} \n\n\n\n\\begin{comment}\n\\paragraph{\\textsc{Natural Instructions}{} tasks splits.}\nTo evaluate generalization across subtasks, we divide \\textsc{Natural Instructions}{} into two collections: \n(i) \\emph{evaluation} tasks \\task{eval} (35 subtasks) for test and \n(ii) \\emph{non-evaluation} tasks \\task{non-eval} (26 subtasks) for training.\\footnote{The tasks are enumerated in the appendix.} \nIn making this collection, we ensure that the tasks included in \\task{eval} accept a relatively reliable automatic evaluation. For example, those tasks that have restricted answer space (like classification tasks) or those that have several gold output references. 
The end tasks of the source datasets which are typically the answer generation tasks are often included in the evaluation set.\nAdditionally, we ensure to have at least one representative subtask from each of the semantic categories (\\S\\ref{sec:mapping}) in the \\emph{non-evaluation} collection\nHowever, tasks within categories are very different from each other. For instance, creation of DROP questions requires understanding of numerical reasoning and reading comprehension, whereas creation of Winogrande questions requires understanding of co-reference resolution and the requirement of the task to create twin question and answer pairs.\n\n\\daniel{\n emphasize that not any two tasks are exactly the same. Every two QG tasks are different (e.g., MC-TACO vs CosmosQA). \n}\n\\paragraph{Evaluation.}\nWe formulate three evaluation settings with different supervision types available to a model (Table~\\ref{tab:supervision:types}). \nIn `task-specific' setting, a model is supervised with the training instances of the evaluation task -- similar to the conventional setup. \nIn `few-shot' setting, a model only observes a few examples of the evaluation task.\\footnote{\n We use ``few-shot'' to refer to \\emph{any setup with a small number of labeled examples},\n \n regardless of whether these examples are used for fine-tuning or inference-time conditioning (no gradient updates). \n}\nIn `generalization' setting, a model does not observe any instances from the evaluation task.\n\\end{comment}\n\n\n\n\\begin{comment}\n\\paragraph {``no-instructions''} encoding.\nThis encoding is the conventional paradigm where no instructions exist, except the input instance (Eq.~\\ref{}). \n\n\\paragraph{\\emph{``prompt''} encoding.}\nIn this encoding, we append the prompt message before the input instance. \n\n\\paragraph{\\emph{``prompt + definition''} encoding.}\nIn this encoding, the prompt message and the task \\emph{definition} appear before the input instance.\nIntuitively, this encoding is more informative and more complex than \\emph{``prompt''} only encoding. \n\n\\paragraph{\\emph{``all instructions''} encoding.}\nThis encoding contains all the instruction content.\nWe include as many examples as possible, before exceeding the token limit of LMs. \n\n\\paragraph{\\emph{``positive examples''} encoding.}\nThis encoding contains only positive examples from the task instructions. \nSuch example-only encodings have been used in several recent studies in prompting LMs~\\cite{zhao2021calibrate}. \n\n\n\\end{comment}\n\n\\begin{comment}\n\\begin{table}[h]\n \\centering\n \\small\n \\resizebox{\\columnwidth}{!}{\n \\begin{tabular}{L{1.9cm}cL{3.9cm}}\n \\toprule\n Setup & Evaluation & Supervision \\\\\n \\midrule\n \n task-specific & $T \\in$ \\task{eval} & all the instances of $T$ \\\\ \n \\cmidrule(r){1-3} \n few-shot & $T \\in$ \\task{eval} & \n \n instructions of $T$ \\\\ \n \\cmidrule(r){1-3} \n generalization & $T \\in$ \\task{eval} & instructions+ instances of \\task{non-eval} tasks + instructions of $T$ \\\\ \n \n \n \\bottomrule\n \\end{tabular}\n }\n \\caption{\n Different modes of supervision considered in this work, when evaluating a model on the instances of a fixed task $T \\in$ \\task{eval}. 
\n }\n \\label{tab:supervision:types}\n\\end{table}\n\\end{comment}\n\n\\begin{comment}\n\\section{Evaluating Language Models to Address \\textsc{Natural Instructions}{}} \n\nWe use generative language models BART~\\cite{lewis2019bart} and GPT-3~\\cite{brown2020language} to address tasks in \\textsc{Natural Instructions}{}. Here, we describe how we encode instructions and instances into plain text and feed them into generative language models (\\S \\ref{subsect:encoding}). We then describe the model details (\\S \\ref{subsec:models}). \nWe then explain how we use language models to encode instruction (\\S \\ref{subsect:encoding}). \\daniel{to be updated}\n\\end{comment}\n\n\\begin{comment}\n\\paragraph{The benefit from instructions heavily depends on the task at hand.}\nFigure~\\ref{fig:gains:per:categories} shows the performance of our models on our task categories, broken down into several coarse input encodings. \nSimilar to our previous observations, \\emph{all instructions} encoding \\emph{typically} performs better than other encodings. \nHowever, these gains are not uniform across task categories. \n\n\n\n\\end{comment}\n\n\\begin{comment}\n\\paragraph{Task-specific BART (oracle upper-bound estimate).}\nWe train BART on input\/output instances of each task (no instructions) and evaluate on the same task. \nThis is the conventional setup where the model is fine-tuned to solve the task only, without any instructions involved.\nSuch a model, by design, won't generalize across different tasks since it is specialized to each subtask. \nHowever, the numbers elicited from this can be viewed as the upper-bounds for each task (i.e., how well can BART perform, if it were to be trained on many instances of this particular task). \n\n\\end{comment}\n\n\\begin{comment}\n\\subsection{Task-specific Calibration of GPT-3}\n\\label{subsec:calibration}\n\nWe transform inputs (Instructions) to make it more understandable for GPT-3. We have a human-in-the loop setup to perform calibration. We do calibration in two steps (i) we develop various calibration procedure by experimenting with various type of prompts and identifying the types of prompt which help model follow instructions better. This is done on the non-eval split. (ii) we employ one of the calibration procedures by looking at few samples of the end-task in the eval split.\n\\end{comment}\n\n\\begin{comment}\n\\paragraph{A Data Creation Toolbox:} \n\\textsc{Natural Instructions}{} covers various skills beyond question generation and answer generation, such as sentence paraphrasing, verification of whether a question is in a specified category, etc that are frequently used during NLP dataset creation. A successful model trained on \\textit{Natural Instructions} will be a toolbox for dataset creation. Our intuition behind the focus on dataset creation is that any NLP task can be expressed as a step in the data creation \n\\end{comment}\n\n\\section{Introduction}\n\nWe have witnessed great progress in solving many NLP datasets through fine-tuning pre-trained language models (LMs)~\\cite{peters2018deep,brown2020language}. \nMore recent studies show tremendous promise in generalization \\emph{within} the set of observed tasks through multi-task training and unified encoding~\\cite{khashabi2020unifiedqa,aghajanyan2021muppet}. 
\nHowever, cross-task generalization -- \\emph{generalization} to \\emph{unseen} tasks -- has generally remained under-explored.\nFor example, can we supervise a model with instances of grammar checking or question answering tasks, yet expect it to solve a different task like question typing (Fig.\\ref{fig:teaster}).\nEvidently, humans are capable of such generalizations; an average human can follow natural language \\emph{instructions} to solve a variety of problems, as evident by the success of crowdsourcing platforms (also argued in~\\citet{efrat2020turking}). In this paper, we study if models can generalize to \\emph{unseen} tasks given their \ncrowdsourcing instructions (Fig.\\ref{fig:teaster}). \n\n\n\n\\begin{figure}[t]\n \\centering\n \n \n \n \n \n \n \\includegraphics[scale=0.9, trim=0.75cm 0.8cm 0cm 1.0cm,clip=false]{figures\/teaser1-2.pdf}\n \n \\caption{\n We construct the \\textsc{Natural Instructions}{} dataset from crowdsourcing instructions and instances of different NLP datasets. We study if models can learn from {\\emph{\\color{blue} seen}} tasks and generalize to {\\emph{\\color{red} unseen}} tasks given their natural crowdsourcing instructions. \n \n \n \n \n \n \n \n \n }\n \\label{fig:teaster}\n\\end{figure}\n\n\n\n\\begin{figure*}\n \\centering\n \\begin{subfigure}[b]{0.49\\textwidth}\n \\centering\n \n \n \n \\small\n \\resizebox{\\linewidth}{!}{\n \\begin{tabular}{ccc}\n \\toprule \n Task & \\makecell{Instance-Level\\\\Generalization} & \\makecell{Task-Level\\\\Generalization} \\\\\n \\midrule \n \\makecell{Training\\\\data} & $X^{\\color{darkgreen} \\text{train}}, Y^{\\color{darkgreen} \\text{train}}$ & \\makecell{$(I_t, X_t^{{\\color{darkgreen} \\text{train}}}, Y_t^{{\\color{darkgreen} \\text{train}}})$ \\\\ $t \\in \\text{\\task{\\color{blue} seen}} $ \\\\ } \\\\ \n \\midrule \n Evaluation & \\makecell{ $x \\rightarrow y$ \\vspace{0.2cm} \\\\ where: \\\\ $(x, y) \\in (X^{ \\color{purple} \\text{test}}, Y^{ \\color{purple} \\text{test}})$ \\vspace{0.3cm} } & \\makecell{$(x, I_t) \\rightarrow y$ \\vspace{0.2cm} \\\\ where: \\\\ $(x, y) \\in (X_t^{ {\\color{purple} \\text{test}}}, Y_t^{{\\color{purple} \\text{test}}})$ \\\\ $t \\in$ \\task{\\color{red} unseen} } \\\\ \n \\bottomrule\n \\end{tabular}\n }\n \\caption{\n A comparison of \\emph{task} vs \\emph{instance}-level generalization \n $I_t$, $X_t$ and $Y_t$ indicate natural language instructions, input, and output sets respectively for task $t$.\n \n In the conventional setup, training and evaluation are done on the instances of the same task. \n However, in task-level generalization, a model is expected to generalize to {\\color{red} unseen} tasks, where \\task{\\color{red} unseen} $\\cap$ \\task{\\color{blue} seen}$ = \\emptyset $. \n }\n \\label{tab:comparison}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.49\\textwidth}\n \\centering\n \n \n \n \\includegraphics[scale=0.64,trim=0.2cm 0cm 0cm 1cm,clip=false]{figures\/fig5scaleup-4.pdf}\n \\caption{BART evaluation on {\\emph{unseen}} tasks ($y$-axis is perf. on \\task{unseen}) when supervised with {\\emph{seen}} tasks ($x$-axis is $|$\\task{seen}$|$). \n \n \\changed{\n A model using {\\color{purple}instructions} ($I_t$) consistently improves with more observed tasks. In contrast, models with {\\color{orange} no access to the instructions} show no sign of improved generalization. 
\n }\n \\changed{Details in \\S\\ref{subsec:supervision:size:experiment}.}\n }\n \\label{fig:scaling:tasks}\n \\end{subfigure}\n \\caption{The formal definition of generalization to unseen tasks (a) and a summary of its empirical outcome (b). }\n \n\\end{figure*}\n\n\n\n\nWe build \\textsc{Natural Instructions}, a dataset consisting of {\\it natural} crowdsourcing instructions for various tasks and their instances. \nTraining on {\\it seen} tasks $\\text{\\task{\\color{blue} seen}}$ in our dataset, we build a model that learns to follow natural instructions that define a task and perform tasks (i.e., mapping input to output).\nTesting on \\emph{unseen} tasks \\text{\\task{\\color{red} unseen}}, we evaluate if the model can perform {\\it unseen} tasks solely from their instructions and without any task-specific labeled data (Table~\\ref{tab:comparison}; right). \nIn contrast to the instance-level generalization (Table~\\ref{tab:comparison}; left), our model uses instruction as additional input, and evaluations are done on tasks that were not observed in the training stage. \n\n\\changed{\nWe compile \\textsc{Natural Instructions}{} from task instructions written by researchers for crowdsourcing existing NLP datasets. \nSuch crowdsourcing instructions often elaborate a variety of details about how a task should (and should not) be done. \nTo provide a systematic study of various elements of crowdsourcing instructions, we map them\n}\nto a unified {\\it schema} to cover the most important elements of task descriptions --- such as definition, constraints, positive and negative examples. \nWe collect tasks in \\textsc{Natural Instructions}{} as minimal stand-alone steps provided to crowdworkers to complete a downstream NLP task. \nFor example, tasks collected from \n\\changed{QASC~\\cite{khot2020qasc} include sub-tasks about generating topic words or combining facts, as well as answering multi-hop questions. \nTherefore our dataset not only contains typical downstream tasks in NLP, but also the intermediate subtasks that are not well-represented in the common benchmarks. \n}\nThe unified schema and the collection of minimal subtasks enable training LMs that can generalize across different tasks by learning from instructions.\nIn total, our dataset consists of 61 distinct NLP tasks and $193k$ instances.\n\n\nOur experimental results indicate that LMs learn to leverage natural language instructions as they show improved generalization to new\ntasks. \nFor example, a BART~\\cite{lewis2019bart} achieves a 19\\% gain in terms of cross-task generalization compared to a model not using instructions\n(\\S\\ref{sec:experiments}). \nImportantly, LMs can generalize better to unseen tasks if they observe more tasks in training (Fig.\\ref{fig:scaling:tasks}). \nThis upward trajectory suggests the potential for stronger cross-task generalizable models upon scaling up the diversity of tasks represented in a meta-dataset of task instructions. \nDespite the benefits of instructions, we observe a sizable gap between models' generalization and their estimated upperbounds (\\ref{subsec:task-specific}), encouraging the community to work on this challenging problem. 
\n\n\n\n\\vspace{.1cm}\n\\noindent\\textbf{Contributions:} In summary, the contributions of this work are as follows: \n(a) we introduce \\textsc{Natural Instructions}{}, a dataset of human-authored instructions curated from existing well-known datasets mapped to a unified schema, providing training and evaluation data for learning from instructions;\n(b) we build models that can encode instructions and show: \n(b.1) the benefit of cross-task generalization by leveraging instructions; \n(b.2) the importance of different elements of instructions in the performance; \n(b.3) noteworthy headroom for improvement on our benchmark, which hopefully will motivate further work in this direction. \n\n\\input{related}\n\n\n\\changed{\n\\section{Defining Cross-Task Generalization}\n\\label{subsec:input:output}\nHere we formally define the problem setup for generalization across tasks. \nEach task $t$ consists of input\/output instances $(X_t, Y_t)$ and is described in terms of its natural language instructions $I_t$. \n\n\n\\vspace{-.2cm}\n\\paragraph{Task-specific models.}\nStandard supervised learning algorithms use task-specific labeled instances to learn a mapping from input $x$ to output $y$: $M(x)=y$ for $(x,y)\\in (X_t^{\\text{train}}, Y_t^{\\text{train}})$ and is evaluated on the test instances of the same (or similar) task $(X_t^{\\text{test}}, Y_t^{\\text{test}})$. We refer to this as the \\emph{instance-level} generalization (Table~\\ref{tab:comparison}; left).\n\n\\vspace{-.2cm}\n\\paragraph{Cross-task models.} \nIn this setup, the goal is to learn a model $M$ that at inference obtains the output $y$ given the input $x$ and the task instruction $I_t$: $M(I_t, x) = y, \\; \\mbox{for} \\ (x,y)\\in (X_t, Y_t)$.\nIn contrast to the task-specific models, no task-specific training data is used to learn the mapping $M$. We collect \\textsc{Natural Instructions}\\ (\\S\\ref{sec:construction:natural:instructions}) to study this question: can a model be trained to follow instructions via training tasks \\task{seen} and be generalized to follow instructions for a task $t' \\in$ \\task{unseen}. \nWe refer to this as a \\emph{task}-level generalization (Table~\\ref{tab:comparison}; right). \n}\n\n\\section{\\textsc{Natural Instructions}{}}\n\\label{sec:construction:natural:instructions}\n\n\\textsc{Natural Instructions}{} consists of instructions that describe a task (e.g., question answering) and instances of that task (e.g., answers extracted for a given question). \nFig.\\ref{fig:examples} shows an example instruction for the task of `generating questions that require an understanding of event duration' accompanied with positive and negative examples \nthat contextualize the task. \nHere we introduce a schema for representing instructions (\\S\\ref{subsec:schema}) and then describe how existing datasets (their crowdsourcing templates) are mapped into our schema (\\S\\ref{sec:mapping}).\n\n\n\n\n\n\\begin{figure}[t]\n \\centering\n \n \n \\includegraphics[scale=0.70, trim=0.45cm 0.8cm 0cm 0.99cm]{figures\/examples_detailed_two.pdf}\n \\caption{\n An example from our dataset. \n Note that it follows the schema provided in Fig.\\ref{fig:schema_plate}. 
See Fig~.\\ref{fig:examplesfull} for more examples.\n }\n \\label{fig:examples}\n\\end{figure}\n\n\\subsection{Instruction Schema}\n\\label{subsec:schema}\n\n\nInstructions used in crowdsourcing various datasets, are written by distinct authors for different purposes, and they are different in a variety of ways (see Appendix~\\ref{appendix:analysis:templates} for their differences.) We introduce a unified schema (Fig.\\ref{fig:schema_plate}) to consistently represent these diverse forms of instructions. \nOur instruction schema is the result of our pilot study conducted on a subset of datasets. Below we describe the ingredients of this schema: \n\n\\begin{figure}[t]\n\\centering\n \\includegraphics[width=0.97\\columnwidth,trim=0.35cm 0.8cm 0.5cm 1cm]{figures\/schema-2.pdf}\n \\caption{The schema used for representing instruction in \\textsc{Natural Instructions}{} (\\S\\ref{subsec:schema}), shown in plate notation.\n }\n \\label{fig:schema_plate}\n\\end{figure}\n\n\n\\begin{itemize}[noitemsep,topsep=0pt,parsep=3pt,leftmargin=0.3cm]\n \n \\item \\underline{\\textsc{Title}} provides a high-level description of a task and its associated skill (such as question generation, answer generation).\n \\item \\underline{\\textsc{Prompt}} is a single sentence command that often appears before the input instance and connects it to the instructions.\n \\item \\underline{\\textsc{Definition}} provides the core detailed instructions for a task. \n \n \n \\item \\underline{\\textsc{Things to Avoid}} contain instructions regarding undesirable annotations that must be avoided. These help to define the scope of a task and the space of acceptable responses. \n \n \\item \\underline{\\textsc{Emphasis and Caution}} are short, but important statements highlighted in the crowdsourcing templates which were intended to be emphasized or warned against.\n \\item \\underline{\\textsc{Positive Examples}} contain inputs\/outputs similar to the input given to a worker\/system and its expected output, helping crowdworkers better understand a task~\\cite{ali1981use}. \n \\item \\underline{\\textsc{Negative Examples}} contain inputs\/outputs to emphasize \\textsc{Things to Avoid} by providing examples that must not be produced. \n \n \n \n \\item \\underline{\\textsc{Reason}} provides explanations behind why an example is positive or negative.\n \\item \\underline{\\textsc{Suggestion}} contains suggestions on how a negative example could be modified to turn it into a positive example. \n\\end{itemize}\n\n The next section describes the process of mapping the raw instructions (designed for crowdworkers) to our instruction schema. \n\n\n\n\n\n\n\n\n\n\\subsection{Constructing \\textsc{Natural Instructions}} \n\\label{sec:mapping}\n\n\n\n\n\n\\subsubsection{Collecting Data}\n\\label{sec:datacollection}\n\\paragraph{Collecting raw instructions and instances.} \n We use existing, widely adopted NLP benchmarks that are collected via crowdsourcing platforms and hence, come with crowdsourcing templates. \n In the first step, we identified several datasets and engaged with their authors to get their crowdsourcing templates and raw data. 
\nThis yields the following datasets: \nCosmosQA~\\cite{huang2019cosmos}, \nDROP~\\cite{dua2019drop}, \nEssential-Terms~\\cite{khashabi2017learning}, \nMCTACO~\\cite{zhou2019going}, \nMultiRC~\\cite{khashabi2018looking}, \nQASC~\\cite{khot2020qasc}, \nQuoref~\\cite{dasigi2019quoref}, ROPES~\\cite{lin2019reasoning} and\nWinogrande~\\cite{sakaguchi2020winogrande}.\\footnote{\n We only focus on textual instructions and avoid datasets that involve visual or auditory steps, mostly focusing on QA datasets that were available to the authors. \n} \n \n\\vspace{-.2cm}\n\\paragraph{Splitting crowdsourcing instructions into minimal tasks.} \nAlmost all the crowdworking instructions include sequences of steps to guide crowdworkers in creating task instances.\nFor example, QASC and MCTACO include 7 and 19 steps in the data creation process, respectively. \nWe divide crowdsourcing instructions into their underlying steps and generate multiple subtasks that are minimal and standalone.\\footnote{\n We eliminate tasks that involve model-in-the-loop. \n} Table~\\ref{tab:sample:tasks} shows subtasks extracted for Quoref and QASC. For example, the main task in Quoref is to answer a question given a context paragraph, but the crowdsourcing template consists of two sub-tasks of {\\it question generation} and {\\it answer generation} with their separate instructions. This process results in a more consistent definition of tasks, enabling a successful mapping of instructions into our schema, in contrast to the work of \\citet{efrat2020turking} that uses crowdsourcing instructions as-is. \n\n\n\\begin{table}\n \\centering\n \\footnotesize \n \\begin{tabular}{ll}\n \\toprule\n source dataset & task \\\\\n \\midrule\n \\multirow{2}{*}{\\makecell{Quoref\\\\ \\cite{dasigi2019quoref} }} & question generation \\\\ \n & answer generation \\\\ \n \\midrule\n \\multirow{6}{*}{\\makecell{QASC\\\\ \\cite{khot2020qasc}} } & topic word generation \\\\\n & fact generation \\\\ \n & combining facts \\\\ \n & question generation \\\\ \n & answer generation \\\\ \n & incorrect answer generation \\\\ \n \\bottomrule\n \\end{tabular}\n \\caption{\n Examples of the datasets and the tasks formed from them. \n The extracted tasks are independent annotation assignments in the crowdsourcing templates of the datasets. \n The complete list is in Table~\\ref{tab:structure} in Appendix. \n }\n \\label{tab:sample:tasks}\n\\end{table}\n\n\\begin{table}\n \\footnotesize \n \\begin{tabular}{lcc}\n \\toprule\n category & \\# of tasks & \\# of instances \\\\\n \\midrule\n {question generation} & 13 & 38$k$ \\\\ \n {answer generation} & 16 & 53$k$ \\\\ \n {classification} & 12 & 36$k$ \\\\ \n {incorrect answer generation} & 8 & 18$k$ \\\\ \n {minimal modification} & 10 & 39$k$ \\\\ \n {verification} & 2 & 9$k$ \\\\ \n \\midrule\n Total & 61 & 193$k$ \\\\\n \\bottomrule\n \\end{tabular}\n \\caption{Task categories and their statistics. \n \n }\n \\label{tab:taskcategories}\n\\end{table}\n\n\nIn total, there are 61 tasks, which are categorized into 6 semantic categories (Table~\\ref{tab:taskcategories}). \nWe assigned these broad categories to the tasks to understand their collective behavior in the experiments. \nIt is noteworthy that, despite the apparent resemblance of the tasks included in the same category, \nany pair of tasks are distinct. \nFor example, while \\emph{question generation} is part of Quoref, CosmosQA, and QASC, each has its own separate variant of the question generation task (see Fig.\\ref{fig:task_specification} in Appendix). 
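\n\nTo make the structure of the extracted subtasks concrete, the sketch below shows how a single subtask could be represented once its raw instructions are mapped to the schema of \S\ref{subsec:schema}. The field values are illustrative paraphrases rather than verbatim dataset content, and the Python-style layout is only one convenient rendering of the schema fields.\n\begin{verbatim}\n# Illustrative (hypothetical) rendering of one subtask; the field names\n# follow the instruction schema, the field values are paraphrased.\ntask = {\n    'Title': 'Generating questions on event duration',\n    'Prompt': 'Ask a question on event duration based on the given sentence.',\n    'Definition': 'Write a question that asks how long an event lasts ...',\n    'Things to Avoid': 'Do not ask questions answerable without the sentence ...',\n    'Emphasis & Caution': 'The questions need not have a single correct answer.',\n    'Positive Examples': [\n        {'input': 'Sentence: Jack played basketball after school.',\n         'output': 'How long did Jack play basketball?',\n         'reason': 'The question asks about the duration of an event.'}],\n    'Negative Examples': [\n        {'input': 'Sentence: Jack played basketball after school.',\n         'output': 'When did Jack play basketball?',\n         'reason': 'The question asks for a point in time, not a duration.',\n         'suggestion': 'Ask how long the game lasted instead.'}],\n    'Instances': [\n        {'input': 'Sentence: He went on a trip to Europe.',\n         'output': ['How long was his trip?']}],\n}\n\end{verbatim}\n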
\n\n\n\n\n\\subsubsection{Mapping Raw Instructions to Schema } \n\\label{subsec:maptoschema}\n We manually fill in the fields of our instruction schema with the content from the crowdsourcing instructions.\n For instance, parts of the raw instructions that are highlighted for emphasis are incorporated as part of our \\emph{emphasis\/caution} field. \nThe modifications suggested in this step were applied by one author and were verified by another author.\\footnote{On average, the process of data curation for each task takes around 5 hrs-34 hrs (details in Appendix; Table~\\ref{tab:datacuration}).} \n\n\\vspace{-.2cm}\n\\paragraph{Improving description quality and consistency.}\n We edit raw instructions to ensure their quality. Particularly, we fix writing issues (typos, ambiguities, etc.) and redact repetitions. \n While repetition often helps in augmenting human understanding, short and concise instructions are often more effective for computers due to their limited attention span~\\cite{beltagy2020longformer}. \n \n\\vspace{-.2cm}\n\\paragraph{Augmenting examples and reasons.}\n \n There is a large variance in the number of examples provided in the raw instructions. Instructions often include more positive examples, or some instructions do not include any negative examples (e.g., QASC). \nWhenever possible, we add negative examples such that each task has at least two negative examples. \nFurthermore, not all raw instructions contain \\textsc{reasons} or \\textsc{suggestions} for each of their examples. For example, positive examples are usually not accompanied by explanations, and most datasets do not include suggestions.\nWe add them, wherever such information is missing in the instructions. \n\n\\vspace{-.2cm}\n\\paragraph{Collecting input\/output instances for subtasks.} \nMost of our tasks are the intermediate steps in the crowdsourcing process. \n Therefore, to extract input\/output instances for each task, we need to parse the raw annotations of crowdworkers for every step. Since each dataset stores its annotations in a slightly different format, extracting and unifying such intermediate annotations can be non-trivial. \n \n \n \n \n \n\\vspace{-.2cm}\n\\paragraph{Verification.} \n\\changed{\nAn annotator verified the quality of the resulting data in consultation with dataset authors. \nThe annotator iterated on the authors' feedback (avg of 3 iters) until they were \nsatisfied. \n}\n\n\\vspace{-.2cm}\n\\paragraph{Quality assessment.}\nWe ask independent human annotators to answer 240 random instances (20 instances from 12 random tasks, used later for our evaluation~\\S\\ref{subsec:split}). \nThe subsequent evaluation of the human-generated responses results in more than 96\\% accuracy, which indicates that humans can effortlessly understand and execute our instructions. \n\n\n\n\n\n\n\n\n\n\n\\subsubsection{\\textsc{Natural Instructions}\\ Statistics}\n\\label{subsec:dataset:statistics}\n\nIn summary, \\textsc{Natural Instructions}\\ consists of subtasks each with a set of instructions and input\/output instances (Fig.\\ref{fig:examples} and \\ref{fig:schema_plate}). The complete list of instructions is included in the appendix. In total, the dataset includes 61 tasks and 193$k$ instances.\nTable~\\ref{tab:taskcategories} shows data statistics for each task category.\\footnote{We limit the number of instances in each task to $6.5k$ to avoid massive instance imbalance.} On average, instructions contain 4.9 positive examples and 2.2 negative examples. 
\nThe longest element of instructions is usually \\textsc{Definitions} with 65.5 tokens and the shortest is \\textsc{title} with 8.3 tokens (more statistics in Table~\\ref{tab:schemastat}).\n\n\\begin{table}[ht]\n \\centering\n \\small\n \\begin{tabular}{lc}\n \\toprule\n statistic & value \\\\ \n \\midrule\n ``title'' length & 8.3 tokens \\\\ \n ``prompt'' length & 12.6 tokens \\\\ \n ``definition'' length & 65.5 tokens \\\\ \n ``things to avoid'' length & 24.1 tokens\\\\ \n ``emphasis\/caution'' length & 45.0 tokens\\\\\n ``reason'' length & 24.9 tokens\\\\ \n ``suggestion'' length & 19.6 tokens\\\\ \n \n \n \n num of positive examples & 4.9 \\\\ \n num of negative examples & 2.2 \\\\ \n \n \\bottomrule\n \\end{tabular}\n \\caption{\n Statistics of \\textsc{Natural Instructions}{}\n \n}\n \\label{tab:schemastat}\n\\end{table}\n\n\n\n\\section{Problem Setup and Models }\n\\label{subsec:setup}\n\\changed{\nHere we define different cross-task generalization settings (\\S \\ref{subsec:split}) and the models (\\S\\ref{subsec:models}). \n}\n\n\\begin{comment}\n\\subsection{Learning Tasks From Instructions} \n\\label{subsec:input:output}\nEvery task $t$ in \\textsc{Natural Instructions}{} consists of an instruction $I_t$ and a set of input and output instances $D_t=\\{(x,y)|x \\in X_t,y \\in Y_t\\}.$\n\\vspace{-.2cm}\n\\paragraph{Task-specific models.} Standard supervised learning uses task-specific training instances to train a model that learns a mapping between input and output: $M(x)=y$ for $(x,y)\\in D_t$. \n\n\\vspace{-.2cm}\n\\paragraph{Learning from instructions.} In this setup, the goal is to learn a model $M$ that at inference obtains the output $y$ given the input $x$ and the task instruction $I_t$: $M(I_t, x) = y, \\; \\mbox{for} \\ (x,y)\\in D_t$.\n\nIn contrast to the task-specific models, no task-specific training data is used to learn the mapping $M$. Instead, we use \\textsc{Natural Instructions}\\ to study this question: can a model be trained to follow instructions via training tasks \\task{seen} and be generalized to follow instructions for a task $t' \\in$ \\task{unseen}. \nWe study various generalization settings with different splits of the tasks. \n\n\\end{comment}\n\n\\subsection{Task Splits and Generalizations Types}\n\\label{subsec:split}\n\n\n\n\n\\paragraph{Random split.}\nThis setup follows the common practice in benchmarking NLP models with random data splits. Here, two tasks from each task category (Table~\\ref{tab:taskcategories}) in \\textsc{Natural Instructions}{} are randomly selected for evaluation, and the rest of the tasks are used for training. This leads to 12 tasks in \\task{unseen} and 49 tasks in \\task{seen}.\\footnote{Those tasks that do not accept a relatively reliable automatic evaluation are excluded from \\task{unseen}. } \n\n\n\\paragraph{Leave-one-out generalization.}\nTo better understand the nature of cross-task generalization, we study more restrictive settings of dividing training and evaluation tasks. \n\n\\noindent \\ul{leave-one-category}: evaluates how well a model generalizes to a task category if it is trained on others -- no task of that category is in \\task{seen}. \n\n\n\\noindent \\ul{leave-one-dataset}: evaluates how well a model can generalize to all tasks in a particular dataset if it is trained on all other tasks -- no task of that dataset is in \\task{seen}.\nThis split prevents any leakage across tasks that belong to the same source datasets. 
\n\n\\noindent \\underline{leave-one-task}: evaluates how well a model can learn a single task by training on all other tasks. \\\\\n\n\n\n\n\\subsection{Models}\n\\label{subsec:models}\nWe build models using pre-trained LMs with encoder-decoder architectures BART~\\cite{lewis2019bart} for fine-tuning and GPT3~\\cite{brown2020language} for few-shot experiments. \n \\paragraph{Encoding instructions and instances.} \n For every problem setup, we map a given instruction $I_t$ and an input instance $x$ into a textual format and decode an output $y$ and obtain $enc(I_t, x)$. \nThis encoding function is then fed to an encoder-decoder model to predict $y$: $M:enc(I_t, x) \\rightarrow y$. \n\n\n\\begin{figure}\n\\centering\n\\begin{boxedminipage}{\\columnwidth}\n\\begin{equation*} \n \\small\n \\begin{split}\n & \\mathtt{\\small Prompt:} \\; \\I{t}{prompt} \\\\ \n & \\mathtt{\\small Definition:} \\; \\I{t}{Definition} \\\\ \n & \\mathtt{\\small Things \\; to \\; Avoid:} \\; \\I{t}{avoid.} \\\\ \n & \\mathtt{\\small Emphasis \\& Caution:} \\; \\I{t}{emph.} \\\\ \n & \\textnormal{}\\mathtt{\\small Negative Example1-} \\\\ \n & \\hspace{0.7cm} \\mathtt{\\small input:} \\; \\I{t}{pos. ex.}, \\mathtt{\\small output:} \\; \\I{t}{pos. ex.},\n \\mathtt{\\small reason:} \\; \\I{t}{pos. ex.} \\\\ \n \n & \\mathtt{\\small Positive Example1-} \\\\ \n & \\hspace{0.7cm} \\mathtt{\\small input:} \\; \\I{t}{pos. ex.}, \n \\mathtt{\\small output:} \\; \\I{t}{pos. ex.} \\mathtt{\\small reason:} \\; \\I{t}{pos. ex. } \\\\ \n \n & \\mathtt{\\small input:} \\; x, \\mathtt{\\small output:} \\textnormal{''}\n \\end{split}\n\\end{equation*}\n\\end{boxedminipage}\n\\caption{Encoding instruction $I_t$, where $I_t^c$ refers to the text of a component $c$ in the instruction schema.}\n\\label{fig:encoding}\n\\end{figure}\nEncoding instances follows a standard NLP paradigm of mapping an input instance to text. \nEach instruction $I_t$ consists of multiple elements as described in our instruction schema (\\S\\ref{subsec:schema}). Here, we map each element of the instruction to a textual format and append it before the input instance. Fig.\\ref{fig:encoding} shows how we encode the full instruction. \n\nTo study the impact of each instruction element for cross-task generalization, we compare these encodings: (1) \\textsc{prompt}, (2) \\textsc{pos. examples}, (3) \\textsc{prompt + definition}, (4) \\textsc{prompt + things to avoid}, (5) \\textsc{prompt + emphasis} , (6) \\textsc{prompt + pos. examples}, (7) \\textsc{prompt + + definition + pos. examples}, and (8) \\textsc{Full instruction}.\n\\changed{\n Each of these (e.g., \\textsc{prompt} and \\textsc{pos. examples}) correspond to prompting setups in the recent literature~\\cite{scao2021many,lu2021fantastically}. 
\n} \n\n \\begin{table*}[t]\n \\small \n \\centering\n \n \\begin{tabular}{clcccc}\n \\toprule\n \n \n \n \\makecell{model \u2193} & \\makecell{evaluation set \\task{unseen} \u2192} & \\makecell{random split\\\\of tasks} &\\makecell{leave-one-\\\\category (QG)} & \\makecell{leave-one-\\\\dataset (QASC)} & \\makecell{leave-one-\\\\task (QASC QG)} \\\\\n \\cmidrule(lr){1-1} \\cmidrule(lr){2-2} \\cmidrule(lr){3-3} \\cmidrule(lr){4-4} \\cmidrule(lr){5-5} \\cmidrule(lr){6-6} \n \n \n \\multirow{2}{*}{\\makecell{BART (fine-Tuned)} } & \\textsc{No instructions} & 13 & 6 & 37 & 20 \\\\\n & \\textsc{Full instructions} & \\textbf{32} & \\textbf{17} & \\textbf{51} & \\textbf{56} \\\\\n \\midrule\n GPT3 (not fine-tuned)& \\textsc{Full instructions} & 24 & 33 & 22 & 33 \\\\\n \\bottomrule\n \\end{tabular}\n \n \\caption{Cross-task generalization of BART under various splits (\\S\\ref{subsec:split}).\n \n Fine-tuned BART shows improved performance when provided with instructions. \n It also archives better performance than GPT3, despite being over $1k$ times smaller. \n \n \\changed{All numbers are ROUGE-L. }\n }\n \\label{tab:bart:generalization:all:splits}\n\\end{table*}\n\n\\vspace{-.2cm}\n\\paragraph{BART.}\n\\label{sec:bart}\nWe use BART (base)~\\cite{lewis2019bart} which allows us to fine-tune its model parameters. \nThis is an encoder-decoder architecture with $140m$ parameters.\nFor each setup, the input is encoded using different instruction elements, trained on all \\task{seen} tasks, and evaluated on \\task{unseen} (\\S\\ref{subsec:split}). \n\n\\vspace{-.2cm}\n\\paragraph{GPT3.}\nAs a comparison, we evaluate\nGPT3~\\cite{brown2020language} which is a $175B$ parameter autoregressive LM ($\\times1.2k$ larger than BART) and has shown promising results in mimicking demonstrations provided in its prompt.\nWe cannot fine-tune the parameters of this massive model and use it as-is \nunder its default setting on the evaluation tasks in \\task{unseen} (\\S\\ref{subsec:split}) using the encoding introduced earlier. \n\n\\begin{comment}\n \\begin{table*}[t]\n \\small \n \\centering\n \\resizebox{\\linewidth}{!}{\n \\begin{tabular}{cl|c|cc|cc|cc}\n \\toprule\n model \u2193 & & random split & \\multicolumn{6}{c}{leave-one-$x$ split} \\\\\n \n \\cmidrule(lr){3-3} \\cmidrule(lr){4-9}\n & & &\\multicolumn{2}{c}{$x =$ category} & \\multicolumn{2}{c}{$x =$ dataset} & \\multicolumn{2}{c}{$x =$ task} \\\\\n \\cmidrule(lr){4-5} \\cmidrule(lr){6-7} \\cmidrule(lr){8-9} \n & evaluation set \\task{unseen} \u2192 & \\makecell{ALL} &\\makecell{AG} & \\makecell{QG} & \\makecell{QASC} & \\makecell{Quoref} & \\makecell{Winogrande AG } & \\makecell{QASC QG } \\\\\n \\midrule\n \n \n \n \\multirow{4}{*}{\\makecell{\\cha{BART-Fine-Tuned}} } & \\textsc{No instructions} & 13 & 11 & 6 & 37 & 10 & 11 & 20 \\\\\n \\cmidrule(lr){2-9}\n & \\textsc{prompt+definition} & 30 &18 & 10 & 43 & \\textbf{39} & 11 & 22 \\\\\n & \\textsc{prompt+pos. examp.} & 23 &18 & \\textbf{20} & 47 & 33 & 16 & 55 \\\\\n \n \n & \\textsc{Full instructions} & \\textbf{32} &\\textbf{19} & 17 & \\textbf{51} & 37 & \\textbf{19} & \\textbf{56} \\\\\n \\midrule\n \n \\cha{GPT3-Few-Shot}& \\textsc{Full instructions} & 24 & 18 & 33 & 22 & 24 & 10 & 33 \\\\\n \\bottomrule\n \\end{tabular}\n }\n \\caption{BART generalization under various leave-one-out splits (\\S\\ref{subsec:split}). Encoding instructions improve cross-task generalization across all settings. \n \\changed{All numbers are ROUGE-L. 
}\n }\n \\label{tab:bart:generalization}\n\\end{table*}\n\\end{comment}\n\n\n\n\n\n\n\n\n\n\\input{maintable}\n\n\n\n\n\\section{Experiments}\n\\label{sec:experiments}\n\\vspace{-.1cm}\n\\paragraph{Evaluation metrics.}\nWe treat all of our tasks as text generation problems and evaluate them with \nautomated evaluation metrics for text generation. \nIn particular, we use \nROUGE-L~\\cite{lin2004rouge} to automatically evaluate the generated outputs.\\footnote{\nOur experiments show that other metrics, e.g. BLEURT~\\cite{sellam2020bleurt} are also correlated with ROUGE-L, which has also been used in generative QA tasks.\n}\n\n\n\\vspace{-.2cm}\n\\paragraph{Implementation details.}\nFor BART, our models are trained for 3 epochs with a learning rate of 5e-5 for a given training split and input encoding. For GPT3, we use the {\\texttt{davinci-instruct}} engine and produce outputs with greedy decoding, \ngenerating up to a maximum number of tokens of 16 (the default value). We use the default stop condition which is 2 newline tokens.\\footnote{The relevant code is available at: \n\\url{https:\/\/github.com\/allenai\/natural-instructions-v1}\n}\n\n\n\\subsection{Generalization Under Various Task Splits}\n\\label{sec:gen:various:splits}\n\n\\changed{\nTable~\\ref{tab:bart:generalization:all:splits} reports the results of the BART model train and evaluated with various task splits (\\S\\ref{subsec:split})}. \nFor comparison, we evaluate GPT3 which uses no fine-tuning, unlike BART that is fine-tuned with the \\task{seen} tasks.\nThe first column corresponds to random split of tasks, while the remaining columns report cross-task generalization results of the BART model under \n\\changed{\nleave-one-$x$\n}\nsplits (\\S\\ref{subsec:split}). \nFor \n\\changed{\n$x =$ \\ul{category},}\nthe tasks in \\emph{question-generation} category are held out during training. \nFor \n\\changed{\n$x =$ \\ul{dataset},}\nthe tasks that were extracted from the \\emph{QASC} dataset were excluded from training. \nFor \n\\changed{\n$x =$ \\ul{task},}\nwe train a model on all tasks, except \\emph{QASC question generation} task which is used for evaluation. \n\n\n\n\\vspace{-.2cm}\n\\paragraph{Instructions benefit cross-task generalization.} \nThe results indicate that BART benefits from instructions in generalizing to new tasks, regardless of task splits. \nFor example, under random split, the model using \\textsc{Full Instructions} results in +19\\% gains over a model that is not using instructions. \n This is particularly interesting for \nleave-one-\\ul{category}-out split\nsince the trained model can generalize to the tasks of a particular semantic category, without being exposed to it. \nIn comparison to GPT3, the fine-tuned BART model that utilizes instructions achieves a stronger performance despite being $\\times 1k$ smaller than GPT3. \nFor example, a BART models using \\textsc{Full Instructions} achieves 8\\% higher performance than GPT3 under random split of tasks. \n\n\nNote that the absolute values in leave-one-category are lower due to the difficulty of this setup compared to, for example, the random split setup. 
\nWhile all settings involve evaluating on tasks not seen during training, the leave-one-category setting enforces more dissimilarity among training and evaluation tasks.\n\n\n\subsection{Generalization Under Instruction Encoding and Task Categories}\nTable~\ref{tab:random:splitfull2} reports the results of the BART model \nper encoding of different instruction elements (\S\ref{subsec:models}) and for different task categories.\nThe table shows that encoding more elements of the instructions generally achieves better results than just using \textsc{prompt} or \textsc{positive examples}. \nIt additionally shows that the benefit of the instruction elements seems to depend on the target task category. \nWe observe that the \emph{question-generation} (QG) tasks benefit the most from \textsc{positive examples}, whereas in \emph{classification} (CF),\n\textsc{positive examples} are of little help. We hypothesize that this is because it is easier to mimic question-generation based on a few examples, whereas it is difficult to define classes via a few examples, where \textsc{definition} can be more helpful. \nThe models show little improvement in \emph{verification} (VF). \nWe hypothesize that these tasks are inherently more difficult, partially because of their distinctness from the rest of the tasks in the dataset. \nWe hope future work along this line will study a wider variety of tasks and will improve our understanding of such failure cases. \n\n\subsection{Generalization vs. Number of Seen Tasks}\n\label{subsec:supervision:size:experiment}\nFig.\ref{fig:scaling:tasks} compares the impact of the number of seen tasks on cross-task generalization. \nFor supervision, we randomly sample a few tasks as \task{seen} and evaluate on 6 tasks (one from each category). \n(Each point in the figure is averaged over 5 random subsamples.) \nThe results\nshow that with \textsc{no-instruction} encoding there is no tangible value in observing more tasks. \nIn contrast, the generalization of the models that encode instructions improves with observing more tasks. \nThis is an exciting observation since it suggests that scaling up our dataset to more tasks may lead to stronger instruction-following systems. \n\n\n\n\n\n\subsection{Analyses} \label{subsec:task-specific}\n\n\n\paragraph{Upperbound: Task-specific Models.}\nFor each task, we obtain a task-specific model (\S~\ref{subsec:input:output}) by training BART separately on each task's annotated training data. We evaluate these task-specific models to obtain a loose estimate of \emph{upperbounds} for each task. \nOn average, task-specific models score 66\%, which is considerably higher than our models' best generalization (32\%; Table~\ref{tab:bart:generalization:all:splits}).\nThis indicates that { there is considerable room for improving generalization-based models} that use instructions. \n\n\begin{comment}\n\begin{table}\n \centering\n \n \footnotesize\n \resizebox{\columnwidth}{!}{\n \begin{tabular}{lcc}\n \toprule\n error type & GPT3 & BART \\\\\n \midrule\n \n \n \n \n \n \n generates a nonsensical\/vague question & 4 & 47\\\\\n \n explains the question after generating it & 6 & 0\\\\\n generates a yes\/no question & 12 & 4\\\\\n \n \n \makecell[l]{generates generic questions independent\\\\ of the given context} &6 &0\\\\\n \bottomrule\n \end{tabular}\n }\n \caption{\n Percentage of errors on QASC QG task. \n The numbers do not sum to 100 since the error types are not mutually exclusive. 
\n \n }\n \\label{tab_error_analysis_main_text}\n\\end{table}\n\\subsection{Error Analysis}\nTable~\\ref{tab_error_analysis_main_text} shows the breakdown of most common error types for the QASC question generation task by analyzing 30 errors (more error analyses can be found in Appendix~\\ref{sec:error:analysis}; Table~\\ref{Tab: Error Analysis}).\n\\end{comment}\n\n\\begin{table}\n \\centering\n \\small\n \\resizebox{0.99\\linewidth}{!}{\n \\begin{tabular}{llcc}\n \n \\toprule\n Model \u2193 & Split \u2193 & \\makecell{w\/ neg.\\\\examples} & \\makecell{w\/o neg.\\\\examples} \\\\\n \\midrule\n \\multirow{5}{*}{BART} & random & 32 & {\\bf 35} \\\\ \n & leave-one-$x$ \\\\ \n & \\ $\\drsh x=$ category (AG) & 19 & {\\bf 21} \\\\ \n & \\ $\\drsh x=$ dataset (Quoref) & 37 & 37 \\\\ \n & \\ $\\drsh x=$ task (QASC QG) & 56 & {\\bf 57} \\\\ \n \\midrule\n GPT3 & - & 24 & {\\bf 44} \\\\ \n \\bottomrule\n \\end{tabular}\n }\n \\caption{\n Effect of excluding negative examples from \\textsc{Full Instruction} encoding. Negative instructions are surprisingly difficult for the models to learn from. \n }\n \\label{tab:negative:examples}\n\\end{table}\n\n\n\n\\paragraph{Impact of Negative Examples.}\nCrowdsourcing instructions often include negative examples to exemplify undesirable responses. \nWe study how negative examples in instructions affect cross-task generalization. \nOur cases study (Table~\\ref{tab:negative:examples}) indicates that the models work better \\emph{without} (w\/o) negative examples, \ncontrary to the previously-observed benefits of other instructional elements (e.g., definition, positive examples). \nThis is aligned with the previous studies ~\\cite{xuan2020hard,lin2003bootstrapped} that discuss the challenges of learning from negative examples.\nInterestingly, GPT3's drop (44 vs 24) is more significant than BART (35 vs 32), showing that BART can partly recover through the training step. \n\n\\begin{table*}[ht]\n \\centering\n \\resizebox{0.78\\textwidth}{!}{\n \\footnotesize\n \\begin{tabular}{p{4.5cm}p{3.5cm}p{6.5cm}}\n \\toprule\n Category & Helpful Fields & Explanation \\\\\n \\midrule\n Question Generation (QG) & 1. \\textsc{Definition} & - Provides a holistic picture of the task.\\\\\n & 2. \\textsc{Emphasis \\& Caution} & - Provides key information for solving the task.\\\\\n & 3. \\textsc{Positive Examples} & - This gives an idea of what is expected in the output.\\\\\n & 4. \\textsc{Negative Examples} & - Good to know the common mistakes people do.\\\\ \n \\midrule\n Answer Generation (AG) & \\textsc{1. Prompt} & - It limits the exploration space to question spans.\\\\\n & \\textsc{2. Definition} & - Provides a general understanding of the task. \\\\\n & \\textsc{3. Positive Examples} & - Reason field is very helpful.\\\\\n \\midrule\n Classification (CF) & \\textsc{1. Definition} & - The task is unclear without this field.\\\\\n \\midrule\n Incorrect Answer Generation (IAG) & \\textsc{1. Definition} & - Helps understand the utility of such a task.\\\\\n & \\textsc{2. Emphasis \\& Caution} & - Source of some useful shortcuts.\\\\\n & \\textsc{3. Positive Examples} & - Helps in understanding the type of questions asked.\\\\\n \\midrule\n Minimal Text Modification (MM) & \\textsc{1. Things to Avoid} & - Provides critical information.\\\\\n \\midrule\n Verification (VF) & \\textsc{1. Definition} & - Makes the task easy to understand.\\\\\n & \\textsc{2. Things to avoid} & - Contains useful tips required for this task.\\\\\n & \\textsc{3. 
Positive Examples} & - Exemplifies task understanding.\\\\ \n & \\textsc{4. Negative examples} & - Helps avoid potential mistakes.\\\\\n \\bottomrule\n \\end{tabular}\n }\n \\caption{Results of humans' perceived importance of instruction elements. Our annotators, for example, find \\textsc{Definition} and \\textsc{Thing to Avoid} to be helpful for \\textit{Classification} and \\textit{Minimal Text Modification} tasks, respectively.}\n \\label{Tab:User:Study}\n\\end{table*}\n\n\n\\paragraph{Error Analysis.}\nWe randomly sample 30 erroneous predictions of our fine-tuned BART on 3 distinct tasks (Winogrande answer generation; QASC question generation; MC-TACO incorrect answer generation). We categorize the errors into common patterns (Table~\\ref{Tab: Error Analysis}).\n\\begin{table}[ht]\n \\centering\n \\small\n \n \\begin{tabular}{lc}\n \\toprule\n error type & BART \\\\\n \\midrule\n {\\color{brown} \\textit{Generates a nonsensical\/vague question}} & 47\\\\\n {\\color{brown}\\textit{Generate an invalid question}} & 8\\\\\n {\\color{brown}\\textit{Generates a yes\/no question}} & 4\\\\\n {\\color{brown}\\textit{Copies the given fact or a subset of it}} & 3 \\\\\n {\\color{brown}\\textit{Generates unanswerable questions}} &3\\\\\n \\bottomrule\n \\end{tabular}\n \n \\caption{\n Percentage of errors on QASC QG task. \n The numbers do not sum to 100 since the error types are not mutually exclusive.\n \n }\n \\label{Tab: Error Analysis}\n\\end{table}\n\nUnlike GPT3 which generally suffers from generating redundant content irrelevant to instructions, our BART model provides more control, however, it can fail to generate proper output. \nHere are several erroneous predictions from our model: \n\n\n\\egbox{\n\\fontsize{8pt}{10pt}\\selectfont\n {\\color{brown}\\textit{Generates a nonsensical\/vague question}}\\\\\n \\textbf{BART input:}[...]input: Fact: graduations are used for measuring the volume of substances. output: ?\\\\\n \\textbf{BART output:} How many substances are there in volume?\n}\n\n\\egbox{\n\\fontsize{8pt}{10pt}\\selectfont\n {\\color{brown}\\textit{Copies the given fact or a subset of it}}\\\\\n \\textbf{BART input:}[...] input: Fact: If someone is not breathing, begin with ventilation. output: ? \\\\\n \\textbf{BART output:} If someone is breathing, begin with ventilation.\n}\n\n\n\\egbox{\n\\fontsize{8pt}{10pt}\\selectfont\n {\\color{brown}\\textit{Generates a yes\/no question }}\\\\\n \\textbf{BART input:}[...] input: Fact: Lack of saliva hinders the break down of food into components the body can absorb. output: ?\n \\textbf{BART output:} Does saliva hinders the break down of food into components the body can absorb?\n}\n\n\\paragraph{Perceived Impact of Instruction Elements.}\nWe survey human annotators to find out the value of instruction elements to humans.\nExcept for the negative examples which were shown to be difficult for models, we observe similar trends between humans' perceived value of those elements (Table~\\ref{Tab:User:Study}) and their contributions to the model performance (Table~\\ref{tab:random:splitfull2}). \nFor example, humans viewed \\textsc{Definition} and \\textsc{Things to Avoid} as necessary fields for \\emph{classification} and \\emph{minimal text modification} categories, respectively, which is compatible with our empirical observations (e.g., \\textsc{prompt + definition} has the highest score on CF category in Table~\\ref{tab:random:splitfull2}). 
\n\n\n\n\section{Conclusion}\n\label{sec:discussion}\nIn this paper, we studied the goal of building models that generalize to new tasks by encoding and understanding crowdsourcing instructions. \nWe introduced \textsc{Natural Instructions}{}, which is built from existing crowdsourced datasets and enables building such models and evaluating them systematically. \nTo the best of our knowledge, this is the first work to show the benefit of instructions towards improved cross-task generalization. \nAdditionally, we observe that our proposed task leaves considerable room for improvement, which we believe \nwill bring more attention to building stronger models that can generalize to a wider range of tasks. \n\n\n\begin{comment}\n\vspace{-.2cm}\n\paragraph{Future extensions.}\n The observations made in \S\ref{subsec:supervision:size:experiment} indicate that there are likely benefits to repeating our study with a larger set of datasets. \n We hope the future work expands our work with a larger and broader range of tasks. \n \n We use automatic evaluation, in order to facilitate the replicability of the follow-up work on \textsc{Natural Instructions}{}. Admitting limitations of automatic evaluations, we hope future work will provide an easy-to-reproduce human evaluation for the tasks studied here, based on the recent proposals for streamlining human evaluation of text generation models~\cite{khashabi2021genie}. \n\end{comment}\n\n\n\n\n\n\section*{Acknowledgements}\nWe thank OpenAI for providing access to the GPT3 API, \nauthors who generously shared their dataset templates with us, Matt Peters and Nicholas Lourie for helpful input, the Beaker team for their support with experiments, and the anonymous reviewers for their helpful feedback. \nThe support of DARPA SAIL-ON, DARPA CHESS program, NSF IIS-2044660, ONR N00014-18-1-2826,\nand Paul G. Allen Foundation is gratefully acknowledged.\n\n\n\n\n\n\section{Related Works}\n\label{sec:related:work}\n\begin{comment}\n\paragraph{Instructions in NLP applications.} \hanna{if no space, you can cut this paragraph}\nPrior work has studied ``instructions'' in various niches, such as\nrobotic instructions~\cite{shridhar2020alfred, stepputtis2020language}, \ndatabases~\cite{kim2020natural}, \nprogramming~\cite{lin2018nl2bash,shao2020chartdialogs}, \emph{inter alia}. \nSuch {instructions} are inherently different from ours, as they are intended to be mapped to pre-defined symbolic forms (e.g., SQL commands). \nConversely, our instructions describe general NLP tasks (no underlying grammar) for measuring task-level generalization. \n\end{comment}\n\n\changed{\n\vspace{-.2cm} \paragraph{Learning from instructions.}\nThere is recent literature on the extent to which models follow language instructions~\cite{hase2021can,ye2021zero,Gupta2021TowardsGP,Zhong2021AdaptingLM}.\nFor example, \citet{efrat2020turking} examine if language models can follow crowdsourcing instructions with no further training. In contrast, our work pursues a fundamentally different goal: creating a dataset of crowdsourcing instructions and task instances and formulating cross-task generalization by training models on seen tasks and measuring generalization to the remaining unseen ones.\n\citet{weller-etal-2020-learning} construct a crowdsourced dataset with short question-like task descriptions. \nCompared to this work, our instructions are longer, more complex, and more natural, since they were used to collect datasets through crowdsourcing. 
\n\nPromptSource and FLAN~\\cite{wei2021finetuned,sanh2021multitask} are two concurrent works that pursue a similar goal as ours. \nA key difference between our work to these works is in terms of data collection strategy. \nOur work uses natural instructions created by NLP researchers before the dataset instances were created by crowd workers, and hence it contains the complete definition of each task (definition, things to avoid, negative examples, etc.). \nOn the other hand, instructions in the concurrent work are collected retroactively based on the already-available task instances. \nOur {\\it natural} instructions enable evaluating models on how they learn tasks given different elements of task descriptions. (See \\S\\ref{subsec:promptsource} for further comparisons.) \nNevertheless, we believe that all these approaches to constructing instructions and task categories are complementary and the community will benefit from considering both towards solving the challenging problem of cross-task generalization.\n\n\\vspace{-.2cm}\\paragraph{Prompt engineering.}\nConstructing effective discrete prompts for language models to perform NLP tasks is an active area of research~\\cite{schick2020few,reynolds2021prompt,liu2021pre}. \nSuch prompts are often extremely short and may not include a complete definition of complex tasks. \nIn contrast, our instructions encode detailed instructions as they were used to collect the datasets. \nMoreover, the goals are different:\nMost prompt-engineering approaches seek prompts with higher performance on a particular task, \ntypically through assumptions about their target task which make them non-trivial to generalize to any other task. \nHowever, our introduced meta dataset enables the measurement of generalization to unseen tasks. \n\n\n\\vspace{-.2cm}\\paragraph{Beyond standard multi-task learning.}\nMulti-task learning is a long-standing goal for AI~\\cite{caruana1997multitask} and has led to successful models that can support a wider range of tasks\n~\\cite{mccann2018natural,raffel2020exploring,khashabi2020unifiedqa,mishra2020towards,aghajanyan2021muppet,ye2021crossfit}.\nMost of the conventional setups in the multi-tasking literature evaluate on instances that belong to the tasks that are seen, i.e., their labeled instances were observed during training (1st column of Table~\\ref{tab:comparison}). \nWe augment this setup by \nintroducing natural language instructions which enable our models to bridge to tasks that were not seen during training. \n}\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Lambda Function and Motivations}\n\nThe existence of the core decomposition with Peano quotient for planar compacta \\cite{LLY-2019} enables us to associate to each compact set $K\\subset\\hat{\\bbC}$ a map $\\lambda_K:\\hat{\\bbC}\\rightarrow\\bbN\\cup\\{\\infty\\}$, called the {\\em lambda function of $K$}. This function sends all points $x\\notin K$ to zero and may take a positive value for some $x\\in K$. It ``quantifies'' certain aspects of the topological structure of $K$, that is more or less related to the property of being locally connected. In particular, a continuum $K\\subset\\hat{\\bbC}$ is locally connected if and only if $\\lambda_K(x)=0$ for all $x\\in\\hat{\\mathbb{C}}$. 
On the other hand, if a continuum $K\\subset\\hat{\\bbC}$ is not locally connected at $x\\in K$ then $\\lambda_K(x)\\ge1$; but the converse is not necessarily true.\n\nThe quantification in terms of lambda function allows us to carry out a new analysis of the topology of $K$, by computing or estimating $\\lambda_K(x)$ for specific choices of $x\\in K$. In the current paper, we will investigate an interesting phenomenon that was firstly revealed in a fundamental result by Marie Torhorst, as one of the three highlights of \\cite{Torhorst}. This result is often referred to as Torhorst Theorem \\cite[p.106, (2.2)]{Whyburn42} and reads as follows.\n\\begin{theorem*}[{\\bf Torhorst Theorem}]\nThe boundary $F$ of every complementary domain $R$ of a locally connected continuum $M\\subset\\hat{\\mathbb{C}}$ is itself a locally connected continuum.\n\\end{theorem*}\n\nWe will obtain an inequality that includes the Torhorst Theorem as a simple case.\nThe inequality is about the lambda function $\\lambda_K$. The function $\\lambda_K$ is based on the core decomposition of $K$ with Peano quotient \\cite{LLY-2019}, which is motivated by some open questions in \\cite{Curry10} and extends two earlier models of polynomial Julia sets developed in \\cite{BCO11,BCO13}. Those models, briefly called BCO models, provide efficient ways (1) to describe the topology of unshielded compacta, like polynomial Julia sets, and (2) to obtain specific factor systems for polynomials restricted to the Julia set. The BCO models are special cases of a more general model, working well for all planar compacta, that associates natural factor systems to the dynamics of rational functions \\cite{LLY-2019,LYY-2020}.\n\nRecall that a {\\bf Peano continuum} means the image of $[0,1]$ under a continuous map. By Hahn-Mazurkiewicz-Sierpi\\'nski Theorem\n\\cite[p.256, \\S 50, II, Theorem 2]{Kuratowski68}, a continuum is locally connected if and only if it is a Peano continuum. On the other hand,\na {\\bf Peano compactum} is defined to be a compactum having locally connected components such that for any constant $C>0$ at most finitely many of its components are of diameter greater than $C$. Therefore, the Cantor ternary set is a Peano compactum and a Peano continuum is just a Peano compactum that is connected. Concerning how such a definition arises from the discussions of BCO models, we refer to \\cite[Theorems 1-3]{LLY-2019}.\n\n\n\nGiven a compactum $K\\subset\\hat{\\mathbb{C}}$, there exists an upper semi-continuous decomposition of $K$ into sub-continua, denoted as $\\Dc_K^{PC}$, such that (1) the quotient space is a Peano compactum and (2) $\\Dc_K^{PC}$ refines every other such decomposition of $K$ \\cite[Theorem 7]{LLY-2019}.\nWe call $\\Dc_K^{PC}$ the core decomposition of $K$ with Peano quotient. The hyperspace $\\Dc_K^{PC}$ under quotient topology is called the Peano model of $K$. Every $d\\in\\Dc_K^{PC}$ is called an {\\bf atom} of $K$, or an {\\bf order-one atom}, or an atom of order $1$. Every atom of an order-one atom is called an {\\bf order-two atom}, and so on. 
Note that a compactum such as the pseudo-arc or Cantor's Teepee may have a non-degenerate atom of order $\\infty$.\n\nConsidering the atoms of a compactum $K\\subset\\hat{\\mathbb{C}}$ as its structural units, we summarize the results obtained in \\cite[Theorem 7]{LLY-2019} and \\cite[Theorem 1.1]{LYY-2020} in the following way.\n\\begin{theorem*}[{\\bf Theory of Atoms}\nEvery compactum $K\\subset\\hat{\\mathbb{C}}$ is made up of atoms; all its atoms are sub-continua of $K$ and they form an upper semi-continuous decomposition, with its quotient space being a Peano compactum, that refines every other such decomposition; moreover, for any finite-to-one open map $f:\\hat{\\bbC}\\rightarrow\\hat{\\bbC}$ and any atom $d$ of $K$, each component of $f^{-1}(d)$ is an atom of $f^{-1}(K)$.\n\\end{theorem*}\n\nUsing the hierarchy formed by {\\bf atoms of atoms}, we introduce the lambda function.\n\\begin{definition*}[{\\bf Lambda Function}]\nGiven a compactum $K\\subset\\hat{\\bbC}$. Let $\\lambda_K(x)=0$ for $x\\notin K$. Let $\\lambda_K(x)=m-1$ for any $x\\in K$, if there is a smallest integer $m\\ge1$ such that $\\{x\\}$ is an order-$m$ atom of $K$. If such an integer $m$ does not exist, we put $\\lambda_K(x)=\\infty$.\n\\end{definition*}\nWhen little is known about the topology of $K$, it is difficult to completely determine the values of $\\lambda_K$. On the other hand, the level sets $\\lambda_K^{-1}(n)(n\\ge0)$ are ``computable'' for typical choices of $K$. In such circumstances, the lambda function $\\lambda_K$ is useful in describing certain aspects of the topology of $K$. For instance, one may check the following observations: (1) a compact set $K\\subset\\hat{\\bbC}$ is a Peano compactum if and only if $\\lambda_K(x)=0$ everywhere; (2) if $K=\\left\\{t+\\left(\\sin\\frac1t\\right){\\bf i}: 00$ its complement has at most finitely many components of diameter $>\\varepsilon$. Unfortunately, the condition of $E$-compactum alone is still not sufficient for the {\\bf Lambda Equality} $\\tilde{\\lambda}_K=\\lambda_K$. See Examples \\ref{E-compactum} and \\ref{finite-comp}.\n\nThe theorem below gives three conditions under which the Lambda Equality holds.\n\n\\begin{main-theorem}\\label{equality-case}\nGiven a compactum $K\\subset\\hat{\\mathbb{C}}$, the Lambda Equality $\\tilde{\\lambda}_K=\\lambda_K$ holds if one of the following conditions is satisfied:\n\n(i) $K$ is an $E$-compactum such that the envelope function $\\tilde{\\lambda}_K(x)$ vanishes everywhere;\n\n(ii) $K$ is an $E$-compactum whose complementary components have disjoint closures.\n\n(iii) $K$ is a partially unshielded compactum.\n\\end{main-theorem}\n\n\\begin{rem}\nIn (i) and (ii) of Theorem \\ref{equality-case}, the assumption that $K$ is an $E$-compactum can not be removed. We may set $K=[0,1]\\!\\times\\![0,{\\bf i}]\\setminus\\left(\\bigcup\\limits_1^\\infty R_n\\right)$, with $R_n=\\left(\\frac{1}{3n},\\frac{2}{3n}\\right)\\times\\left(\\frac13{\\bf i},\\frac23{\\bf i}\\right)$. If $W=\\hat{\\mathbb{C}}\\setminus[0,1]\\!\\times\\![0,{\\bf i}]$ then $\\partial W\\cap\\partial R_n=\\emptyset$ for all $n\\ge1$ and $\\partial R_n\\cap \\partial R_m=\\emptyset$ for $n\\ne m$. See Figure \\ref{non-E} for a simple depiction of $K$.\nThe continuum $K$ is not an $E$-continuum but it satisfies the other assumptions in (i) and (ii) of Theorem \\ref{equality-case}. It has exactly one non-degenerate atom, the segment $\\left[\\frac13{\\bf i},\\frac23{\\bf i}\\right]$. 
Thus $\\lambda_K(x)-\\tilde{\\lambda}_K(x)=\\left\\{\\begin{array}{ll}1& x\\in\\left[\\frac13{\\bf i},\\frac23{\\bf i}\\right]\\\\ 0&\\text{otherwise}.\\end{array}\\right.$\n\n\\begin{figure}[ht]\n\\vskip -0.75cm\n\\begin{center}\n\\begin{tikzpicture}[x=5cm,y=5cm,scale=0.618]\n\\fill[gray!20,thick] (0,0) -- (0,1) -- (1,1) -- (1,0) -- (0,0);\n\\draw[gray,thick] (0,0) -- (0,1) -- (1,1) -- (1,0) -- (0,0);\n\\draw[black, ultra thick] (0,1\/3) -- (0,2\/3);\n\n\\foreach \\j in {1,...,3}\n{\n \\fill[white] (1\/3^\\j,1\/3) -- (2\/3^\\j,1\/3) -- (2\/3^\\j,2\/3) -- (1\/3^\\j,2\/3) --(1\/3^\\j,1\/3);\n \\draw[gray, thick] (1\/3^\\j,1\/3) -- (2\/3^\\j,1\/3) -- (2\/3^\\j,2\/3) -- (1\/3^\\j,2\/3) --(1\/3^\\j,1\/3);\n}\n\\foreach \\j in {1,...,6}\n{\\fill[gray] (1\/60,0.32+0.05*\\j) circle(0.3ex);\n}\n\n\\draw(0,0.02) node[left]{$0$};\n\\draw(1,0.02) node[right]{$1$};\n\\draw(0,0.98) node[left]{${\\bf i}$};\n\n\\end{tikzpicture}\n\\end{center}\n\\vskip -0.95cm\n\\caption{A depiction for $K$ and some of the rectangles.}\\label{non-E}\n\\vskip -0.25cm\n\\end{figure}\n\\end{rem}\n\nNote that the Lambda Equality may not hold for an $E$-compactum $K\\subset\\hat{\\mathbb{C}}$, even if it has finitely many complementary components. See Example \\ref{finite-comp}. Also notice that the Lambda Equality under condition (i) implies the theorem below. This extends Whyburn's Theorem \\cite[p.113, (4.4)]{Whyburn42}, which says that {\\em an $E$-continuum is a Peano continuum if and only if the boundary of any of its complementary components is a Peano continuum}.\n\\begin{theorem*}[{Extended Whyburn's Theorem}] An $E$-compactum is a Peano compactum if and only if the boundary of any of its complementary components is a Peano compactum.\n\\end{theorem*}\n\n\nTheorem \\ref{lambda_inequality} addresses how $\\lambda_K$ and $\\lambda_L$ are related when $L$ lies on the boundary of a component of $\\hat{\\bbC}\\setminus K$. There are other choices of planar compacta $K\\supset L$ so that $\\lambda_K$ and $\\lambda_L$ are intrinsically related. A typical situation happens if the common part of $\\overline{K\\setminus L}$ and $L$ is a finite set.\n\n\\begin{main-theorem}\\label{gluing_lemma}\nIf $K\\supset L$ are planar compacta such that $\\overline{K\\setminus L}$ intersects $L$ at finitely many points then $\\lambda_K(x)=\\max\\left\\{\\lambda_{\\overline{K\\setminus L}}(x),\\lambda_L(x)\\right\\}$ for all $x$. \\end{main-theorem}\n\nSetting $A=\\overline{K\\setminus L}$, we can infer that $\\lambda_K(x)$ coincides with $\\lambda_A(x)$ for $x\\in A\\setminus L$ and with $\\lambda_L(x)$ for $x\\in L\\setminus A$, equals $\\max\\left\\{\\lambda_{A}(x),\\lambda_L(x)\\right\\}$ for $x\\in A\\cap L$, and vanishes for every $x\\notin(A\\cup L)$. Therefore, we have the following.\n\n\\begin{theorem*}[{Gluing Lemma for Lambda Functions}]\\label{gluing_lemma_1}\nIf in addition $\\lambda_A(x)=\\lambda_L(x)$ for all $x\\in A\\cap L$ then $\\lambda_K=\\lambda_{A\\cup L}$ may be obtained by gluing $\\lambda_A$ and $\\lambda_L$, in the sense that\n\\begin{equation}\\label{form-1}\n\\lambda_{A\\cup L}(x)=\\left\\{\\begin{array}{ll}\\lambda_A(x)& x\\in A\\\\ \\lambda_L(x)& x\\in L\\\\ 0& {otherwise.}\\end{array}\\right.\n\\end{equation}\n\\end{theorem*}\n\\begin{rem}\nThe formulation in Equation (\\ref{form-1}) is similar to the one illustrated in the well-known gluing lemma for continuous maps.
See for instance \\cite[p.69, Theorem (4.6)]{Armstrong}.\nIn certain situations, Theorem \\ref{gluing_lemma} helps us to analyze questions concerning local connectedness of polynomial Julia sets. See Question \\ref{small_julia}. However,\nthe case that $A\\cap L$ is an infinite set is more involved. In Theorem \\ref{baby_M}, we will extend Theorem \\ref{gluing_lemma} to such a case under additional assumptions. This extension allows one to choose $K$ to be the Mandelbrot set and $L$ the closure of a hyperbolic component. For concrete choices of $A$ and $L$ so that Equation (\\ref{form-1}) does not hold, we refer to Examples \\ref{cantor_combs}, \\ref{brooms} and \\ref{cup-fs}.\n\\end{rem}\n\n\n\n\n\n\n\nThe other parts of this paper are arranged as follows. Section \\ref{proof-c} is devoted to the proofs for Theorems \\ref{compare_atoms} to \\ref{lambda_inequality}. Section \\ref{equality} gives a proof for Theorem \\ref{equality-case}. In Section \\ref{glue} we first prove Theorem \\ref{gluing_lemma} and then continue to establish Theorem \\ref{baby_M}.\nSection \\ref{examples} presents several examples.\n\n\n\n\n\n\n\\section{The Lambda Inequality}\\label{proof-c}\n\nIn this section we prove Theorems \\ref{compare_atoms} and \\ref{lambda_inequality}.\n\nWe will study relations on compacta $K\\subset\\hat{\\mathbb{C}}$. Such a relation is considered as a subset of the product space $K\\times K$ and is said to be {\\bf closed} if it is closed in $K\\times K$. Given a relation $\\Rc$ on $K$, we call $\\Rc[x]=\\{y\\in K: \\ (x,y)\\in\\Rc\\}$ the fiber of $\\Rc$ at $x$. We mostly consider closed relations $\\Rc$ that are reflexive and symmetric, so that for all $x,y\\in K$ we have (1) $x\\in \\Rc[x]$ and (2) $x\\in\\Rc[y]$ if and only if $y\\in\\Rc[x]$. For such a relation, the {\\em iterated relation} $\\Rc^2$ is defined naturally so that\n$\\displaystyle y\\in\\Rc^2[x]$ if and only if there exists $z\\in K$ with $(x,z), (z,y)\\in \\Rc$.\n\nRecall that the {\\bf Sch\\\"onflies relation} on a planar compactum $K$ is a reflexive symmetric relation. Under this relation, two points $x_1, x_2$ are related provided that either $x_1=x_2$ or there exist two disjoint Jordan curves $J_i\\ni x_i$ such that $\\overline{U}\\cap K$ has infinitely many components $P_n$, intersecting $J_1$ and $J_2$ both, whose limit under Hausdorff distance contains $\\{x_1,x_2\\}$. Here $U$ is the component of \\ $\\hat{\\bbC}\\setminus(J_1\\cup J_2)$ with $\\partial U=J_1 \\cup J_2$.\n\nGiven a compactum $K\\subset\\hat{\\bbC}$, denote by $R_K$ the Sch\\\"onflies relation on $K$ and by $\\overline{R_K}$ the closure of $R_K$. We also call $\\overline{R_K}$ the {\\bf closed Sch\\\"onflies relation}.\nLet $\\Dc_K$ be the finest upper semi-continuous decomposition of $K$ into sub-continua that splits none of the fibers $R_K[x]$. Then $\\Dc_K$ coincides with $\\Dc_K^{PC}$, the core decomposition of $K$ with Peano quotient \\cite[Theorem 7]{LLY-2019}. Therefore the elements of $\\mathcal{D}_K$ are the (order-one) atoms of $K$.\n\nEvery fiber $\\overline{R_K}[x]$ is a continuum. See\n\\cite[Theorem 1.4]{LYY-2020}. However, the compactness and connectedness of the fibers of $R_K$ remain open questions.\nMoreover, in order that a point $y\\ne x$ lies in $\\overline{R_K}[x]$ it is necessary and sufficient that for small enough $r>0$ the difference $K\\setminus(B_r(x)\\cup B_r(y))$ has infinitely many components that intersect each of the circles $\\partial B_r(x)$ and $\\partial B_r(y)$.
See \\cite[Theorem 1.3]{LYY-2020}.\nThe lemma below relates the fibers of $\\overline{R_K}^2$ to those of $\\overline{R_L}$, where $L$ is a compact subset of $K$ satisfying certain properties.\n\n\\begin{lemma}\\label{key-lemma}\nGiven a compactum $K\\subset\\hat{\\mathbb{C}}$ and a component $U$ of \\ $\\hat{\\bbC}\\setminus K$. If $L\\subset\\partial U$ is compact then $\\overline{R_L}[x]\\subset \\overline{R_K}^2[x]$ for any $x\\in L$.\n\\end{lemma}\n\\begin{proof}\nTo obtain the containment $\\overline{R_L}[x]\\subset \\overline{R_K}^2[x]$ for any given $x\\in L$, we may fix an arbitrary point $y\\in\\overline{R_L}[x]\\setminus\\{x\\}$ and consider the annulus $A_n=\\hat{\\mathbb{C}}\\setminus\\left(B_{1\/n}(x)\\cup B_{1\/n}(y)\\right)$, for any integer $n\\ge1$ such that $\\overline{B_{1\/n}(x)}\\bigcap\\overline{B_{1\/n}(y)}=\\emptyset$. Here $B_{1\/n}(x)$ and $B_{1\/n}(y)$ are open disks with radius $1\/n$ under spherical distance, which are respectively centered at $x$ and $y$. By \\cite[Theorem 1.3]{LYY-2020}, $A_n\\cap L$ has infinitely many components intersecting both $\\partial B_{1\/n}(x)$ and $\\partial B_{1\/n}(y)$. So we can find an infinite sequence $\\{P_i\\}$ of such components that converge to some continuum $P_\\infty$ under Hausdorff metric.\n\nSince $x,y\\in L\\subset\\partial U$, we can find an open arc $\\alpha\\subset U$ that connects a point on $\\partial B_{1\/n}(x)$ to one on $\\partial B_{1\/n}(y)$. Passing to an appropriate sub-arc, if necessary, we may assume that $\\alpha\\subset A_n$. Then, we may slightly thicken the closed arc $\\overline{\\alpha}$ and obtain a topological disc $\\alpha^*\\subset A_n$, satisfying $\\alpha^*\\cap K=\\emptyset$. From this we see that $\\overline{A_n\\setminus\\alpha^*}$ is homeomorphic to $[0,1]^2$. We will obtain the following.\n\n\n{\\bf Claim}. $P_\\infty$ contains two points $u_n\\in \\partial B_{1\/n}(x), v_n\\in \\partial B_{1\/n}(y)$ with $v_n\\in\\overline{R_K}^2[u_n]$.\n\nThe flexibility of the large enough integers $n$ ensures that $\\lim\\limits_nu_n=x$ and $\\lim\\limits_nv_n=y$. Since $\\overline{R_K}^2$ is a closed relation, we surely obtain $y\\in \\overline{R_K}^2[x]$. This will complete our proof. Thus, the remaining issue is to verify the above claim.\n\nAs $\\overline{A_n\\setminus\\alpha^*}$ is a topological disc, we consider it to be the unit square $[0,1]^2$. Moreover, we may represent by $[0,1]\\times\\{1\\}$ the arc $l_1=\\overline{A_n\\setminus\\alpha^*}\\cap\\partial B_{1\/n}(x)$ and by $[0,1]\\times\\{0\\}$ the arc $l_2=\\overline{A_n\\setminus\\alpha^*}\\cap\\partial B_{1\/n}(y)$. Fix any point $z$ in $P_\\infty\\cap(0,1)^2$.
For any $r>0$ that is small, let $W_r$ denote the open rectangle centered at $z$ with diameter $r$.\nSince $P_i\\rightarrow P_\\infty$ under Hausdorff distance we may assume that every $P_i$ intersects $W_r$ and lies in $[0,1]^2$, which from now on represents $\\overline{A_n\\setminus\\alpha^*}$.\nSee Figure \\ref{key}.\n\\begin{figure}[ht]\n\\vskip -0.5cm\n\\center{\n\\begin{tikzpicture}[scale=0.8,x=1.618cm, y=0.618cm]\n\\draw(-2,0)--(-2,7);\n\\draw(-2,0)--(7,0)node[below]{$l_2\\subset \\partial B_n(y)$};\n\\draw(-2,7)--(7,7)node[above]{$l_1\\subset \\partial B_n(x)$};\n\\draw(7,0)--(7,7);\n\\draw(2,0)--(2,7)node[above]{$P_\\infty$};\n\\fill(2,3.5)circle(2pt);\n\\draw[blue,thick](-1,2)--(5,2) -- (5,5) -- (-1,5) -- (-1,2);\n\\draw(2,3.5) node[left]{$W_r\\ni z$};\n\\draw(3,0)--(3,7)node[above]{$P_{2i+1}$};\n\\draw(4.5,0)--(4.5,7)node[above]{$P_{2i-1}$};\n\\draw[dashed](3.75,0)--(3.75,7);\n\\draw (3.75,0)node[below]{$P_{2i}$};\n\\draw(3.75,3) node[left]{$a_i$};\n\\fill(3.75,3)circle(2pt);\n\\draw(4,4) node[right]{$b_i$};\n\\fill(4,4)circle(2pt);\n\\draw[red](4.25,5)--(4.25,7); \\draw(4.3,6) node[left]{$\\beta_i$};\n\\draw[red](4.25,2)--(4.25,0); \\draw(4.3,1) node[left]{$\\beta_i$};\n\\end{tikzpicture}\n}\\vskip -0.75cm\n\\caption{Relative locations of $z,l_1,l_2,W_r, P_{2i-1}, P_{2i}, P_{2i+1}$ and $a_i, b_i$.}\\label{key}\n\\vskip -0.25cm\n\\end{figure}\n\nRecall that $[0,1]^2\\setminus P_\\infty$ has two components, one containing $\\{1\\}\\times[0,1]$ and the other $\\{0\\}\\times[0,1]$. One of these components contains infinitely many $P_i$. Without losing generality we may assume that every $P_i$ lies in the one containing $\\{1\\}\\times[0,1]$, denoted $V$. Thus $P_i$ can be connected to $\\{1\\}\\times[0,1]$ by an arc in $[0,1]^2$ that does not intersect $P_\\infty$. Moreover, rename $P_i(i\\ge1)$ so that every $P_i$ can be connected to $\\{1\\}\\times[0,1]$ by an arc in $[0,1]^2$ that does not intersect $P_j$ for $j\\ge i+1$. Therefore, each $P_i$ is ``to the right of'' $P_{i+1}$.\n\n\nFor all $i\\ge1$ let $V_i$ be the unique component of $\\hat{\\bbC}\\setminus\\left(P_{2i-1}\\cup P_{2i+1}\\cup l_1\\cup l_2\\right)$ whose boundary intersects each of $l_1$, $l_2$, $P_{2i-1}$ and $P_{2i+1}$. Then $P_{2i}\\subset \\overline{V_i}$ for $i\\ge1$. For the previously given point $z$ in $P_\\infty\\cap(0,1)^2$, we can find for each $i\\ge1$ a point $a_i\\in P_{2i}\\cap W_r$ such that $\\lim\\limits_{i\\rightarrow\\infty}a_i=z$. Since $P_{2i}\\subset L\\subset \\partial U$, we further find a point $b_i\\in (W_r\\cap V_i\\cap U)$ for every $i\\ge1$, such that the distance between $a_i$ and $b_i$ converges to zero as $i\\rightarrow\\infty$. Check Figure \\ref{key} for relative locations of $a_i\\in P_{2i+1}$ and $b_i\\in(W_r\\cap V_i\\cap U)$.\n\nNow, we may find arcs $\\alpha_i\\subset U$ for each $i\\ge1$ that starts from a fixed point $b_0\\in U$ and ends at $b_i$. Let $c_i$ be the last point on $\\alpha_i$ that leaves $\\partial[0,1]^2$. Let $d_i$ be the first point on $\\alpha_i$ after $c_i$ at which $\\alpha_i$ intersects $\\partial W_r$. Clearly, we have $c_i\\in(l_1\\cup l_2)$. Let $\\beta_i$ be the sub-arc of $\\alpha_i$ from $c_i$ to $d_i$. Check Figure \\ref{key} for a rough depiction of two possible locations for $\\beta_i$. 
Then $\\beta_i$ and $\\beta_j$ for $i\\ne j$ are contained in distinct components of $\\mathcal{A}_r\\setminus L$, where $\\mathcal{A}_r=[0,1]^2\\setminus W_r$ is topologically a closed annulus.\nSince $L\\subset K$ and $K\\cap U=\\emptyset$, the arcs $\\beta_i$ and $\\beta_j$ for $i\\ne j$ are contained in distinct components of $\\mathcal{A}_r\\setminus K$.\n\nLet $x_n$ be the only point on $l_1\\cap P_\\infty$ such that the right piece of $l_1\\setminus\\{x_n\\}$ does not intersect $P_\\infty$. Let $y_n$ be the point on $l_2\\cap P_\\infty$ such that the right piece of $l_2\\setminus\\{y_n\\}$ does not intersect $P_\\infty$. The sequence $\\{c_i\\}$ then has a limit point in $\\{x_n,y_n\\}$. We may assume that $z_r=\\lim\\limits_{i\\rightarrow\\infty}d_i$ for some point $z_r\\in\\partial W_r$. Since $\\partial[0,1]^2$ and $\\partial W_r$ are disjoint Jordan curves, from the choices of $x_n, y_n$ and $z_r$ we can infer that either $(x_n,z_r)\\in R_K$ or $(y_n,z_r)\\in R_K$. The flexibility of $r>0$ then leads to the inclusion $z\\in\\left(\\overline{R_K}[x_n]\\cup\\overline{R_K}[y_n]\\right)$.\n\nNow consider the two closed sets $E_n=P_\\infty\\cap \\left(\\overline{R_K}[x_n]\\cup l_1\\right)$ and $F_n=P_\\infty\\cap \\left(\\overline{R_K}[y_n]\\cup l_2\\right)$, which satisfy $P_\\infty=E_n\\cup F_n$. From the connectedness of $P_\\infty$ we see that $E_n\\cap F_n\\ne\\emptyset$. Clearly, each point $w\\in (E_n\\cap F_n)$ necessarily falls into one of the following cases:\n\\begin{itemize}\n\\item[(1)] $w$ lies in $l_1\\subset\\partial B_{1\/n}(x)$ and belongs to $\\overline{R_K}[y_n]$,\n\\item[(2)] $w$ lies in $l_2\\subset\\partial B_{1\/n}(y)$ and belongs to $\\overline{R_K}[x_n]$,\n\\item[(3)] $w\\notin(l_1\\cup l_2)$ and it lies in $\\overline{R_K}[x_n]\\cap\\overline{R_K}[y_n]\\cap(0,1)^2$.\n\\end{itemize}\nIn case (1) we set $u_n=w,v_n=y_n$; in case (2) we set $u_n=x_n, v_n=w$; in case (3) we set $u_n=x_n, v_n=y_n$. Then, in cases (1) and (2) we have $v_n\\in\\overline{R_K}[u_n]\\subset\\overline{R_K}^2[u_n]$; and in case (3) we will have $v_n\\in\\overline{R_K}^2[u_n]$. This verifies the claim and completes our proof.\n\\end{proof}\n\nWith Lemma \\ref{key-lemma}, we are well prepared to prove Theorems \\ref{compare_atoms} and \\ref{lambda_inequality} as follows.\n\n\\begin{proof}[{\\bf Proof for Theorem \\ref{compare_atoms}}]\nSince $U$ is also a complementary component of $\\partial U$, we only verify that every atom of $L$ is contained in a single atom of $K$.\n\nTo this end, let $\\Dc_L^\\#$ consist of all those continua that are each a component of $d^*\\cap L$ for some $d^*\\in\\Dc_K$. By \\cite[p.44, Theorem 3.21]{Nadler92} and \\cite[p.278, Lemma 13.2]{Nadler92}, we see that $\\Dc_L^\\#$ is an upper semi-continuous decomposition of $L$. As every fiber of $\\overline{R_K}^2$ is entirely contained in a single element of $\\Dc_K$, by Lemma \\ref{key-lemma} we know that every fiber $\\overline{R_L}[z]$ is entirely contained in a single element of $\\Dc_L^\\#$. This implies that $\\Dc_L^\\#$ is refined by $\\Dc_L$. In other words, every atom of $L$ is entirely contained in a single atom of $K$.\n\\end{proof}\n\n\\begin{proof}[{\\bf Proof for Theorem \\ref{lambda_inequality}}]\nTo obtain $\\lambda_L(x)\\le\\lambda_K(x)$ for all $x$, we only need to consider the points $x\\in L$. With no loss of generality, we may assume that $\\lambda_K(x)=m-1$ for some integer $m\\ge1$. 
That is to say, there exist strictly decreasing continua $d_1^*\\supset d_2^*\\supset\\cdots\\supset d_{m}^*=\\{x\\}$ such that $d_1^*$ is an atom of $K$ and $d_{i+1}^*$ an atom of $d_i^*$ for $1\\le i\\le m-1$. Here we may have $m=1$. By Theorem \\ref{compare_atoms}, the atom of $L$ containing $x$, denoted as $d_1$, is a subset of $d_1^*$. Since $d_1\\subset d_1^*$ also satisfies the assumptions of Theorem \\ref{compare_atoms}, we can infer that the atom of $d_1$ containing $x$, denoted as $d_2$, is a subset of $d_2^*$. Repeating the same argument $m$ times, we obtain for $1\\le i\\le m$ an order-$i$ atom $d_i$ of $L$ with $d_i\\subset d_i^*$. Here we have $d_{m}=\\{x\\}$ and hence $\\lambda_L(x)\\le m-1=\\lambda_K(x)$.\n\\end{proof}\n\\begin{rem}\nIn the proof for Theorem \\ref{lambda_inequality}, we know that $U$ is a component of $\\hat{\\bbC}\\setminus K$ and $L\\subset\\partial U$. Therefore, in the same way we can show that $\\lambda_L(x)\\le\\lambda_{\\partial U}(x)$ for all $x$. From this we can infer that\n$\\sup\\limits_U\\lambda_{\\partial U}(x)=\\tilde{\\lambda}_K(x)$ for all $x\\in\\hat{\\mathbb{C}}$.\n\\end{rem}\n\n\n\n\n\\section{On Lambda Equalities}\\label{equality}\n\nWe prove Theorem \\ref{equality-case}, establishing three equalities in terms of the lambda function. Two of these equalities are for $E$-compacta. The other one is for partially unshielded compacta.\n\nLet $K\\subset\\hat{\\mathbb{C}}$ be an $E$-compactum with complementary components $U_1,U_2,\\ldots$, so that the diameters $\\delta(U_i)$ form either a finite sequence or an infinite one converging to zero. The Torhorst Inequality ensures that $\\sup\\limits_i\\lambda_{\\partial U_i}(x)\\le\\lambda_K(x)$ for all $x\\in\\hat{\\mathbb{C}}$. Since $\\lambda_K(x)=\\tilde{\\lambda}_K(x)=0$ for all $x\\in K^o\\cup\\left(\\bigcup\\limits_iU_i\\right)$, we only need to consider the points on $\\partial K$, which may not equal $\\bigcup\\limits_i\\partial U_i$.\n\nLemma \\ref{bridging_lemma} follows from \\cite[Lemma 3.3]{LLY-2019} and is useful when we prove Lemma \\ref{trivial-fiber}.\n\\begin{lemma}\\label{bridging_lemma}\nIf $A\\subset\\hat{\\bbC}$ is a closed topological annulus and $K\\subset\\hat{\\bbC}$ a compactum then the following statements are equivalent: (1) $A\\cap K$ has infinitely many components intersecting each of the two components of $\\partial A$; (2) $A\\setminus K$ has infinitely many components intersecting each of the two components of $\\partial A$.\n\\end{lemma}\n\n\n\n\\begin{lemma}\\label{trivial-fiber}\nGiven an $E$-compactum $K\\subset\\hat{\\mathbb{C}}$ with complementary components $U_1,U_2,\\ldots$. If $\\overline{R_K}[x]$ contains a point $y\\ne x$ then $y\\in\\overline{R_{\\partial U_i}}[x]$ for some $i$.\n\\end{lemma}\n\\begin{proof}\nLet $\\rho(x,y)$ be the spherical distance between $x$ and $y$. For each $n\\ge2$ let $B_n(x)$ and $B_n(y)$ be the open disks of radius $2^{-n}\\rho(x,y)$ that are centered at $x$ and $y$ respectively. Then $A_n=\\hat{\\mathbb{C}}\\setminus\\left(B_n(x)\\cup B_n(y)\\right)$ is a topological annulus. By \\cite[Theorem 1.3]{LYY-2020}, the intersection $A_n\\cap K$ has infinitely many components that intersect $\\partial B_n(x)$ and $\\partial B_n(y)$ both. By Lemma \\ref{bridging_lemma}, the difference $A_n\\setminus K$ has infinitely many components, say $\\{P^n_j: j\\ge1\\}$, that intersect $\\partial B_n(x)$ and $\\partial B_n(y)$ both.
Since the diameters of those $P^n_j$ are no less than $\\rho(x,y)\/2$ (each of them is a connected set meeting both $\\partial B_n(x)$ and $\\partial B_n(y)$, which are at spherical distance at least $\\left(1-2^{1-n}\\right)\\rho(x,y)\\ge\\rho(x,y)\/2$ from each other) and since we assume $K$ to be an $E$-compactum, there is an integer $i(n)$ such that $U_{i(n)}$ contains infinitely many of those $P^n_j$. Here all those $P^n_j$ that are contained in $U_{i(n)}$ are each a component of $A_n\\cap U_{i(n)}$.\n\nNow, choose a subsequence $\\{Q^n_k: k\\ge1\\}$ of $\\{P^n_j: j\\ge1\\}$, with $Q_k^n\\subset U_{i(n)}$, such that $\\overline{Q^n_k}$ converges under Hausdorff distance to a continuum $M_n$. Then $M_n$ is a subset of $\\partial U_{i(n)}$ and intersects $\\partial B_n(x)$ and $\\partial B_n(y)$ both. Fixing any $a_n$ in $M_n\\cap \\partial B_n(x)$ and $b_n$ in $M_n\\cap \\partial B_n(y)$, we will have $(a_n,b_n)\\in R_{\\partial U_{i(n)}}$. Since $K$ is an $E$-compactum, there are infinitely many integers $n$ such that $i(n)$ takes the same value, say $i$. Therefore, we have two infinite sequences $\\{c_n\\}\\subset\\{a_n\\}$ and $\\{d_n\\}\\subset \\{b_n\\}$, with $c_n,d_n\\in\\partial U_i$, such that $(c_n,d_n)\\in R_{\\partial U_i}$ for all $n\\ge2$. Since $\\lim\\limits_{n\\rightarrow\\infty}c_n=x$ and $\\lim\\limits_{n\\rightarrow\\infty}d_n=y$, we readily have $(x,y)\\in\\overline{R_{\\partial U_i}}$, or equivalently $y\\in\\overline{R_{\\partial U_i}}[x] $.\n\\end{proof}\n\nNow we are well prepared to prove parts (i) and (ii) of Theorem \\ref{equality-case}, whose results are respectively included in the next two propositions.\n\n\\begin{proposition}\\label{equality-case-1}\nIf $K$ is an $E$-compactum such that $\\tilde{\\lambda}_K(x)=0$ for all $x\\in\\hat{\\mathbb{C}}$ then $\\lambda_K(x)$ vanishes everywhere.\n\\end{proposition}\n\\begin{proof}\nAs $\\tilde{\\lambda}_K(x)$ vanishes everywhere, all the relations $\\overline{R_{\\partial U_i}}$ are trivial, in the sense that the fibers $\\overline{R_{\\partial U_i}}[x]$ are each a singleton for all $i$ and all $x\\in\\partial U_i$. Combining this with the conclusion of Lemma \\ref{trivial-fiber}, we can infer that the fiber $\\overline{R_K}[x]=\\{x\\}$ for all $x\\in K$. From this, we see that every atom of $K$ is a singleton and that $\\lambda_K(x)=0$ for all $x$.\n\\end{proof}\n\n\n\\begin{proposition}\\label{equality-case-2}\nGiven an $E$-compactum $K$. If $\\partial U_i\\cap\\partial U_j=\\emptyset$ for $i\\ne j$ then $\\lambda_K=\\tilde{\\lambda}_K$.\n\\end{proposition}\n\\begin{proof}\nLet $\\mathcal{D}_i$ denote the core decomposition of $\\partial U_i$. Since we assume that $\\partial U_i\\cap\\partial U_j=\\emptyset$ for $i\\ne j$, the collection\n$\\displaystyle \\mathcal{D}_K^*:=\\left(\\bigcup\\limits_i\\mathcal{D}_i\\right)\\cup\\left\\{\\{x\\}: x\\in K\\setminus\\left(\\bigcup\\limits_i\\partial U_i\\right)\\right\\}$\nis a partition that divides $K$ into sub-continua. It suffices to show that $\\Dc_K^*$ is the core decomposition of $K$.\n\nRecall that $\\Dc_K$ is the finest monotone decomposition such that every fiber of $\\overline{R_K}$ is contained in a single element of $\\Dc_K$. By Lemma \\ref{key-lemma}, we know that $\\Dc_K$ is refined by $\\Dc_K^*$. On the other hand, since $K$ is an $E$-compactum and since $\\partial U_i\\cap\\partial U_j=\\emptyset$ for $i\\ne j$, we can use Lemma \\ref{trivial-fiber} to infer that every fiber of $\\overline{R_K}$ is contained in a single element of $\\Dc^*_K$.
Therefore, we only need to verify that $\\mathcal{D}^*_K$ is upper semi-continuous, which then indicates that $\\Dc_K^*$ is a monotone decomposition hence is refined by $\\Dc_K$.\n\nIn other words, we need to verify that the equivalence $\\sim$ determined by the partition $\\Dc_K^*$ is closed as a subset of $K\\times K$. To this end, we consider an arbitrary sequence $\\{(x_n,y_n): n\\ge1\\}$ in $K\\times K$ with $\\lim\\limits_{n\\rightarrow\\infty}(x_n,y_n)=(x,y)$ such that $x_n\\sim y_n$ for all $n\\ge1$. There are two possibilities: either $x=y$ or $x\\ne y$. In the first case, we have $(x,y)=(x,x)$, which is surely an element of $\\sim$. In the second, the assumption that $K$ is an $E$-compactum implies that there is some $U_i$ such that $\\{x_n,y_n\\}\\subset\\partial U_i$ for infinitely many $n\\ge1$. Consequently, the subset $\\{x,y\\}$ is contained in a single element of $\\Dc_{i}$, which is a sub-collection of $\\Dc_K^*$. That is to say, we have $x\\sim y$. This ends our proof.\n\\end{proof}\n\n\nThe arguments in the above proof actually imply the following.\n\\begin{theo}\\label{equal-cd}\nGiven an $E$-compactum $K$. If $\\partial U_i\\cap\\partial U_j=\\emptyset$ for $i\\ne j$ then every atom of $K$ is either an atom of some $\\partial U_i$ or a singleton $\\{x\\}$ with $x\\in K\\setminus\\left(\\bigcup_i\\partial U_i\\right)$.\n\\end{theo}\n\n\nNow we go on to consider partially unshielded compacta and obtain Theorem \\ref{equality-case}(iii).\n\n\\begin{deff}\\label{part-unshielded}\nLet $L\\subset\\hat{\\mathbb{C}}$ be an unshielded compactum, which equals the boundary $\\partial U$ of one of its complementary components $U$. A compactum $K$ formed by the union of $L$ with some complementary components of $L$ other than $U$ is called a {\\bf partially unshielded compactum} determined by $L$.\n\\end{deff}\n\nIn order to find typical examples, one may set $L$ to be a polynomial Julia set, $U$ the unbounded Fatou component, and $K$ the union of $L$ and some bounded Fatou components. The next proposition discusses the relation between the atoms of any given compactum $L\\subset\\hat{\\mathbb{C}}$ and those of a compactum $K$, where $K$ is the union of $L$ with some (not all) components of $\\hat{\\bbC}\\setminus L$.\n\n\n\\begin{proposition}\\label{useful}\nGiven a planar compactum $L\\subset \\hat{\\mathbb{C}}$ and a family $\\{U_\\alpha:\\ \\alpha\\in I\\}$ of components of \\ $\\hat{\\mathbb{C}}\\!\\setminus\\!L$. If $\\displaystyle K=L\\cup\\left(\\bigcup_{\\alpha\\in I}U_\\alpha\\right)$ then $\\overline{R_K}$ is a subset of $\\{(z,z):\\ z\\in K\\!\\setminus\\!L\\}\\cup\\overline{R_L}$. Consequently, every atom of $K$ is either a singleton lying in $K\\setminus L$ or a sub-continuum of an atom of $L$.\n\\end{proposition}\n\n\\begin{proof}\nSince $\\displaystyle K=L\\cup\\left(\\bigcup_{\\alpha\\in I}U_\\alpha\\right)$, every point $z\\in (K\\setminus L)$ lies in some $U_\\alpha$. Thus the atom of $K$ containing $z$ is exactly the singleton $\\{z\\}$. From this it readily follows that every atom $d^*$ of $K$ that intersects $L$ is a sub-continuum of $L$. So we have $\\overline{R_K}=\\{(z,z):\\ z\\in K\\!\\setminus\\!L\\}\\cup\\left(\\overline{R_K}\\cap L^2\\right)$. 
Therefore, we only need to show that $\\left(\\overline{R_K}\\cap L^2\\right)\\subset \\overline{R_L}$.\n\nIndeed, if on the contrary there were some $(x,y)\\in \\overline{R_K}\\cap L^2$ not belonging to $\\overline{R_L}$ then, for any small enough number $r>0$, the difference $L\\setminus (B_r(x)\\cup B_r(y))$ would have at most finitely many components intersecting $\\partial B_r(x)$ and $\\partial B_r(y)$ both. Let $A_r=\\hat{\\mathbb{C}}\\setminus (B_r(x)\\cup B_r(y))$. By Lemma \\ref{bridging_lemma}, $A_r\\setminus L$ has at most finitely many components that intersect $\\partial B_r(x)$ and $\\partial B_r(y)$ both. As we assume that $\\displaystyle K=L\\cup\\left(\\bigcup_{\\alpha\\in I}U_\\alpha\\right)$, every component of $A_r\\setminus K$ is also a component of $A_r\\setminus L$. Thus $A_r\\setminus K$ has at most finitely many components that intersect both $\\partial B_r(x)$ and $\\partial B_r(y)$. In other words, we have $(x,y)\\notin\\overline{R_K}$. This is absurd since we assume that $(x,y)\\in \\overline{R_K}$.\n\\end{proof}\n\n\nThere are other basic facts concerning an unshielded compactum $L$ and a partially unshielded compactum $K$ determined by $L$. Firstly, every interior point of $K$ lies in some complementary component of $L$; secondly, every boundary point of $K$ lies in $L$. Thus we always have $\\partial K=L$; moreover, every atom of $K$ that intersects the interior $K^o$ is necessarily a singleton. Therefore, in order to determine the atoms of $K$ we only need to consider those of $L$.\n\n\\begin{theo}\\label{part-2}\nLet $L\\subset\\hat{\\mathbb{C}}$ be an unshielded compactum. Let $K$ be a partially unshielded compactum determined by $L$. Then every atom of $L$ is also an atom of $K$ and we have $\\Dc_K=\\Dc_L\\cup\\{\\{x\\}: x\\in K\\setminus L\\}$. Consequently, $\\tilde{\\lambda}_K(x)=\\lambda_K(x)$ for all $x\\in\\hat{\\mathbb{C}}$.\n\\end{theo}\n\n\\begin{proof}\nAs $L$ is unshielded, there is a component $U$ of $\\hat{\\mathbb{C}}\\setminus L$ with $L=\\partial U$. By Lemma \\ref{key-lemma}, every atom of $L$ lies in a single atom of $K$. By Proposition \\ref{useful}, every atom of $K$ intersecting $L$ is contained in a single atom of $L$. Thus every atom of $L$ is also an atom of $K$. As any singleton $\\{x\\}$ with $x\\in K^o= K\\setminus L$ is an atom of $K$, we have $\\Dc_K=\\Dc_L\\cup\\{\\{x\\}: x\\in K\\setminus L\\}$. This indicates the Lambda Equality $\\tilde{\\lambda}_K=\\lambda_K$.\n\\end{proof}\n\n\\begin{rem}\\label{why_partially_unshielded}\nTheorem \\ref{part-2} gives a result that is slightly stronger than Theorem \\ref{equality-case}(iii). In particular, for any full compactum $K$ we have $\\mathcal{D}_{\\partial K}\\subset\\mathcal{D}_K$. Therefore, a full compactum $K$ is a Peano compactum if and only if the boundary $\\partial K$ is. In particular, if $G\\subset\\hat{\\mathbb{C}}$ is a simply connected bounded domain then $\\partial G$ is locally connected if and only if $K=\\hat{\\mathbb{C}}\\setminus G$ is locally connected, or equivalently when $K$ is a Peano continuum. This basic fact has been well known; see for instance the items (iii) and (iv) of \\cite[p.20, Theorem 2.1]{Pom92}. Now, it is extended to a quantitative version in Theorem \\ref{part-2}.
This extension applies to an arbitrary full continuum, that may or may not be locally connected.\n\\end{rem}\n\n\n\\section{The Gluing Lemma for Lambda Functions}\\label{glue}\n\nWe will follow the philosophy of the well known gluing lemma for continuous maps.\nSee for instance \\cite[p.69, Theorem (4.6)]{Armstrong} for the simple case and \\cite[p.70, Theorem (4.8)]{Armstrong} for the general setting. Our aim is to prove Theorem \\ref{gluing_lemma}, which deals with the lambda functions $\\lambda_K,\\lambda_L$ for planar compacta $K\\supset L$ such that $A=\\overline{K\\setminus L}$ intersects $L$ at finitely many points $x_1,\\ldots,x_n$. In Theorem \\ref{baby_M}, we further extend Theorem \\ref{gluing_lemma} to the case that $A\\cap L$ is a countably infinite set, under additional assumptions. Notice that when $A\\cap L$ is an infinite set Theorem \\ref{gluing_lemma} may not hold. See Examples \\ref{cantor_combs}, \\ref{brooms} and \\ref{cup-fs}.\n\n\\begin{proof}[{\\bf Proof for Theorem \\ref{gluing_lemma}}]\nFor $1\\le i\\le n$, denote by $d_i^1$ the order-$1$ atom of $A$ that contains $x_i$. Similarly, denote by $e_i^1$ the atom of $L$ that contains $x_i$.\nLet $K_1=A_1\\cup L_1$, where $\\displaystyle A_1=\\bigcup_id_i^1$ and $\\displaystyle L_1=\\bigcup_ie_i^1$. Then $K_1$ has finitely many components. Let $\\Ec_1$ be the collection of these components.\n\nBy \\cite[Theorem 1.3]{LYY-2020}, a point $y\\ne x$ lies in $\\overline{R_K}[x]$\nif and only if $K\\setminus(B_r(x)\\cup B_r(y))$ has infinitely many components that intersect both $\\partial B_r(x)$ and $\\partial B_r(y)$ for small enough $r>0$. Because of this, we can directly check that $\\overline{R_K}=\\overline{R_A}\\cup\\overline{R_L}$. Here $\\overline{R_K},\\overline{R_A},\\overline{R_L}$ are respectively the closed Sch\\\"onflies relations on $K,A$ and $L$. Let \\[\n\\Dc_1=\\left(\\Dc_L\\setminus\\left\\{e_1^1,\\ldots,e_n^1\\right\\}\\right)\\cup\n\\left(\\Dc_A\\setminus\\left\\{d_1^1,\\ldots,d_n^1\\right\\}\\right)\\cup\n\\Ec_1.\\]\nThen $\\Dc_1$ is an upper semi-continuous decomposition of $K$ into subcontinua. Since $\\Dc_1$ does not split the fibers of $\\overline{R_K}$, it is refined by $\\Dc_K$, the core decomposition of $K$ with Peano quotient. On the other hand, the equality $\\overline{R_K}=\\overline{R_A}\\cup\\overline{R_L}$ indicates that $\\mathcal{D}_K$ does not split the fibers of $\\overline{R_A}$ and those of $\\overline{R_L}$. Thus each atom of $A$ lies in an atom of $K$; similarly, every atom of $L$ lies in an atom of $K$. Consequently, we have.\n\\begin{lemma}\\label{gluing_atoms_a}\n$\\Dc_K=\\Dc_1$. Thus $d\\cap A$ (or $d\\cap L$) either is empty or consists of finitely many atoms of $A$ (resp. $L$) for any atom $d$ of $K$.\n\\end{lemma}\n\n\nLemma \\ref{gluing_atoms_a} ensures that $\\displaystyle \\lambda_K(x)=\\max\\left\\{\\lambda_A(x),\\lambda_L(x)\\right\\}$ for all $x\\notin K_1$. That is to say, the equation $\\lambda_K(x)=\\max\\left\\{\\lambda_{\\overline{K\\setminus L}}(x),\\lambda_L(x)\\right\\}$ in Theorem \\ref{gluing_lemma} holds for all points $x\\notin K_1$, so that we only need to consider the points $x\\in K_1$.\n\nNotice that we have set $A=\\overline{K\\setminus L}$, $\\displaystyle A_1=\\bigcup_id_i^1$, and $\\displaystyle L_1=\\bigcup_ie_i^1$. 
We will need to verify that $\\displaystyle \\lambda_{K_1}(x)=\\max\\left\\{\\lambda_{A_1}(x),\\lambda_{L_1}(x)\\right\\}$ for all $x\\in K_1$, since for $x\\in A_1$ and $y\\in L_1$ we have\n\\[\n\\begin{array}{ccc}\n\\lambda_A(x)=\\left\\{\\begin{array}{ll} 0& \\{x\\}\\in\\Dc_A\\\\\n1+\\lambda_{A_1}(x)& otherwise\\end{array}\\right. &\\text{and}&\n\\lambda_L(y)=\\left\\{\\begin{array}{ll} 0& \\{y\\}\\in\\Dc_L\\\\\n1+\\lambda_{L_1}(y)& otherwise.\\end{array}\\right.\n\\end{array}\\]\nTo do that, we recall that $\\mathcal{D}_{A_1}$ consists of all the order-$2$ atoms of $A$ lying in $A_1$. Similarly, $\\mathcal{D}_{L_1}$ consists of all the order-$2$ atoms of $L$ lying in $L_1$. Thus we may repeat the above procedure again, replacing $A$ and $L$ by $A_1$ and $L_1$. This then gives rise to two compacta $A_2\\subset A_1$ and $L_2\\subset L_1$ such that\n$\\displaystyle \\lambda_{K_1}(x)=\\max\\left\\{\\lambda_{A_1}(x),\\lambda_{L_1}(x)\\right\\}$ for all $x\\notin K_2=A_2\\cup L_2$.\n\nWe may carry out the same procedure indefinitely and obtain two decreasing sequences of compacta: (1) $A_1\\supset A_2\\supset\\cdots$ and (2) $L_1\\supset L_2\\supset\\cdots$. Setting $K_p=A_p\\cup L_p$ for $p\\ge1$, we have the following equations:\n\\begin{equation}\n\\displaystyle \\lambda_{K_{p}}(x)=\\left\\{\\begin{array}{ll}0& \\{x\\}\\in\\Dc_{K_p}\\\\\n1+\\lambda_{K_{p+1}}(x)& otherwise\\end{array}\\right. \\quad (x\\in K_{p+1}).\n\\end{equation}\n\\begin{equation}\n\\displaystyle \\lambda_{A_{p}}(x)=\\left\\{\\begin{array}{ll}0& \\{x\\}\\in\\Dc_{A_p}\\\\\n1+\\lambda_{A_{p+1}}(x)& otherwise\\end{array}\\right. \\quad(x\\in A_{p+1})\n\\end{equation}\n\\begin{equation}\n\\displaystyle \\lambda_{L_{p}}(x)=\\left\\{\\begin{array}{ll}0& \\{x\\}\\in\\Dc_{L_p}\\\\\n1+\\lambda_{L_{p+1}}(x)& otherwise\\end{array}\\right.\\quad (x\\in L_{p+1})\n\\end{equation}\n\\begin{equation}\n\\lambda_{K_p}(x)=\\max\\left\\{\\lambda_{A_p}(x),\\lambda_{L_p}(x)\\right\\}\\quad (x\\notin K_{p+1})\n\\end{equation}\nThere are two possibilities. In the first, we have $K_p=K_{p+1}$ for some $p\\ge1$, indicating that $K_m=K_p$ for all $m\\ge p$.\nIn such a case, we have $\\lambda_{K_p}(x)=\\max\\left\\{\\lambda_{A_p}(x),\\lambda_{L_p}(x)\\right\\}$ and hence $\\lambda_{K}(x)=\\max\\left\\{\\lambda_{A}(x),\\lambda_{L}(x)\\right\\}$.\nIn the second, we have $K_p\\ne K_{p+1}$ for all $p\\ge1$. This implies that $\\lambda_K(x)=\\max\\left\\{\\lambda_A(x),\\lambda_L(x)\\right\\}=\\infty$ holds for all $x\\in K_\\infty=\\bigcap_pK_p$ and that $\\lambda_{K}(x)=p+\\lambda_{K_p}(x)=p+\\max\\left\\{\\lambda_{A_p}(x),\\lambda_{L_p}(x)\\right\\}=\n\\max\\left\\{\\lambda_{A}(x),\\lambda_{L}(x)\\right\\}$ holds for $p\\ge1$ and $x\\in K_p\\setminus K_{p+1}$. Here $\\displaystyle K_1\\setminus K_{\\infty}=\\bigcup_{p=1}^\\infty(K_p\\setminus K_{p+1})$. This completes our proof.\n\\end{proof}\n\nLemma \\ref{gluing_atoms_a} and Theorem \\ref{gluing_lemma} are useful when we study $\\lambda_K$ for certain choices of planar compacta $K$. For instance, we may choose $K$ to be the Julia set of a renormalizable polynomial $f(z)=z^2+c$ and $L$ the small Julia set. For the sake of convenience, we further assume that the only critical point of $f$ is recurrent and that there is no irrationally neutral cycle.
Then it is possible to choose a decreasing sequence of Jordan domains $\\{U_n\\}$, with $\\overline{U_{n+1}}\\subset U_n$ and $\\displaystyle L=\\bigcap_{n=1}^\\infty U_n$, such that every $K\\cap \\partial U_n$ consists of finitely many points that are periodic or pre-periodic. See for instance \\cite[section 2.2]{Jiang00}.\nFor any $n\\ge1$ we can use \\cite[Theorems 2 and 3]{Kiwi04} to infer that every singleton $\\{x\\}$ with $x\\in (K\\cap\\partial U_n)$ is an atom of $K$ hence is also an atom of $L_n=K\\cap\\overline{U_n}$. Combining these with Lemma \\ref{gluing_atoms_a} and Theorem \\ref{gluing_lemma}, we further see that $\\mathcal{D}_{L_{n+1}}\\subset\\mathcal{D}_{L_n}\\subset\\mathcal{D}_K$ for all $n\\ge1$. However, we are not sure whether $\\mathcal{D}_L\\subset\\mathcal{D}_K$. Similarly, it is not clear whether $\\lambda_K(x)=\\lambda_L(x)$ holds for $x\\in L$. Therefore, we propose the following.\n\n\\begin{que}\\label{small_julia}\nLet $K=L_0\\supset L_1\\supset L_2\\supset\\cdots$ be a decreasing sequence of planar compacta such that $L_n\\cap\\overline{K\\setminus L_n}$ is a finite set for all $n\\ge1$.\nSet $L=\\bigcap_{n\\ge1}L_n$. Find conditions so that (1) $\\mathcal{D}_L\\subset\\mathcal{D}_K$ or (2) $\\lambda_K(x)=\\lambda_L(x)$ holds for all $x\\in L$.\n\\end{que}\n\nAs a response to Question \\ref{small_julia}, we turn to study the lambda functions of two planar compacta $K\\supset L$\nsuch that $K\\setminus L$ is contained in the union of at most countably many continua $P_n\\subset K$ that satisfy the following properties:\n\\begin{itemize}\n\\item[(P1)] every $P_n$ intersects $L$ at a single point $x_n$;\n\\item[(P2)] for any constant $C>0$ at most finitely many $P_n$ are of diameter greater than $C$; and\n\\item[(P3)] $P_n\\cap P_m=\\emptyset$ for $n\\ne m$.\n\\end{itemize}\nHere $P_n\\setminus\\{x_n\\}$ might be disconnected for some of the integers $n\\ge1$. Notice that there is a special situation when $K$ is the Mandelbrot set $\\M$. Then, in order that the above properties (P1)-(P3) be satisfied, we may choose $L$ to be the closure of a hyperbolic component or a {\\bf Baby Mandelbrot set}.\n\nAs an extension of Theorem \\ref{gluing_lemma}, we will obtain the following.\n\n\\begin{theo}\\label{baby_M}\nGiven two planar compacta $K\\supset L$ that satisfy (P1) to (P3), we have \\begin{equation}\\label{baby}\n\\lambda_K(x)=\\left\\{\\begin{array}{lll} \\lambda_{P_n}(x)&x\\in P_n\\setminus\\{x_n\\}\\ {for\\ some}\\ n&({case}\\ 1)\\\\ \\lambda_L(x)& x\\in L\\setminus\\{x_n: n\\in\\mathbb{N}\\}&({case}\\ 2)\\\\ \\max\\left\\{\\lambda_L(x_n),\\lambda_{P_n}(x_n)\\right\\}& x=x_n\\ {for\\ some}\\ x_n&({case}\\ 3)\\\\\n0 &{otherwise}&({case}\\ 4)\\end{array}\\right. \\end{equation}\n\\end{theo}\n\nWe just need to consider the above equation for points $x\\in K$.\nTo do that, we may\ndefine an equivalence $\\sim$ on $\\mathbb{N}$ so that $m\\sim n$ if and only if $x_m,x_n$ are contained in the same atom of $L$. Let $\\{I_j:j\\}$ be the equivalence classes of $\\sim$.\n\nDenote by $d_n$ the atom of $P_n$ that contains $x_n$, and by $e_j$ the atom of $L$ that contains all $x_n$ with $n\\in I_j$.
Moreover, set $e'_j=e_j\\cup\\left(\\bigcup_{n\\in I_j}d_n\\right)$ for every $j$.\n\nThen $\\{e'_j: j\\}$ is a collection of at most countably many continua that are pairwise disjoint.\nNow we consider the following upper semi-continuous decomposition of $K$:\n\\begin{equation}\\label{baby_M_partition}\n\\Dc_1=\\left(\\Dc_L\\setminus\\left\\{e_j: j\\right\\}\\right)\\cup\n\\left(\\bigcup_n\\Dc_{P_n}\\setminus\\{d_n\\}\\right)\\cup\n\\{e'_j: j\\}.\n\\end{equation}\nAll its elements are sub-continua of $K$ that do not split the fibers of $\\overline{R_K}$. So it is refined by $\\Dc_K$. On the other hand, by \\cite[Theorem 1.3]{LYY-2020}, we also have $\\overline{R_K}=\\overline{R_L}\\cup\\left(\\bigcup_{n\\ge1}\\overline{R_{P_n}}\\right)$. Thus $\\mathcal{D}_K$ does not split the fibers of $\\overline{R_L}$ and those of $\\overline{R_{P_n}}$ for all $n\\ge1$. Therefore, every atom of $L$ lies in an atom of $K$, so does every atom of $P_n$. Consequently, we have.\n\\begin{lemma}\\label{gluing_atoms_b}\n$\\Dc_K=\\Dc_1$. Therefore, for any atom $d$ of $K$ the intersection $d\\cap L$ (or $d\\cap P_n$ for any $n\\ge1$) is either empty or a single atom of $L$ (respectively, $P_n$).\n\\end{lemma}\n\n\n\\begin{proof}[{\\bf Proof for Theorem \\ref{baby_M}}]\nClearly, $\\lambda_K(x)=\\lambda_L(x)$ for all $x$ in $L\\setminus\\left(\\bigcup_je'_j\\right)$. Similarly, $\\lambda_K(x)=\\lambda_{P_n}(x)$ for all $x$ in $P_n\\setminus d_n$. Moreover, $\\lambda_K(x_n)=\\lambda_L(x_n)=\\lambda_{P_n}(x_n)=0$ for all $x_n$ such that $\\{x_n\\}$ is an atom of $L$ and also an atom of $P_n$. Therefore, we just need to consider those $e_j'$ that are non-degenerate.\n\nLet $\\mathcal{N}_1$ be the collection of all the integers $j$ such that $e_j'$ is not a singleton.\nThen $e_{n_1}$ is a subcontinuum of $e_{n_1}'$ for any $n_1\\in\\mathcal{N}_1$, such that $e_{n_1}'\\setminus e_{n_1}$ is covered by all those $d_n$ with $n\\in I_{n_1}$. Thus the properties (P1) - (P3) are satisfied, if $K$ and $L$ are respectively replaced by $e_{n_1}'$ and $e_{n_1}$. It is then routine to check the following:\n\\begin{equation}\\label{inductive_1}\n\\lambda_K(x)=\\left\\{\\begin{array}{ll}\n\\lambda_{P_n}(x)&x\\in P_n\\setminus\\left(\\bigcup_{n_1}e_{n_1}'\\right)\\\\\n\\lambda_{L}(x)& x\\in L\\setminus\\left(\\bigcup_{n_1}e_{n_1}'\\right)\\\\\n1+\\lambda_{e_{n_1}'}(x)& x\\in e_{n_1}'\\ \\text{for\\ some}\\ n_1\\in\\mathcal{N}_1\n\\end{array}\\right.\n\\end{equation}\n\nEvery atom of $e_{n_1}'$ falls into exactly one of the following possibilities: (1) an order-two atom of $P_n$ for some $n\\in I_{n_1}$ that is disjoint from $\\{x_n\\}$, (2) an order-two atom of $L$ that is disjoint from $\\{x_n: n\\in I_{n_1}\\}$, (3) a singleton $\\{x_n\\}$ for some $n\\in I_{n_1}$, which is an order-two atom of $L$ and is also an order-two atom of $P_n$, (4) a non-singleton continuum that consists of the order-two atom of $L$ containing some $x_n$, with $n\\in I_{n_1}$, and the order-two atom of $P_n$ containing $x_n$.\n\nAn atom falling in the first three possibilities is called an atom of {\\bf pure type}.\nWe can check that $e_{n_1}'$ has at most countably many atoms that is not of pure type. Such an atom is generally denoted as $e'_{n_1n_2}$. Similarly we can define continua $e'_{n_1n_2\\ldots n_p}$ for $p>2$. 
On the one hand, such a continuum is an order-$p$ atom of $K$; on the other, it is also an atom of $e'_{n_1n_2,\\ldots n_{p-1}}$ that is not of pure type.\nBy the same arguments, that have been used in obtaining Equation (\\ref{inductive_1}), we can infer the following equation:\n\\begin{equation}\\label{inductive_2}\n\\lambda_K(x)=\\left\\{\\begin{array}{ll}\n\\lambda_{P_n}(x)&x\\in P_n\\setminus\\left(\\bigcup_{n_1,n_2}e_{n_1n_2}'\\right)\\\\\n\\lambda_{L}(x)& x\\in L\\setminus\\left(\\bigcup_{n_1,n_2}e_{n_1n_2}'\\right) \\\\\n2+\\lambda_{e_{n_1n_2}'}(x)& x\\in e_{n_1n_2}'\\ \\text{for\\ some}\\ n_1,n_2\n\\end{array}\\right.\n\\end{equation}\nThis equation may be extended to order-$p$ atoms $e'_{n_1n_2\\ldots n_p}$ with $p\\ge2$ in the following way.\n\\begin{equation}\\label{inductive_p}\n\\lambda_K(x)=\\left\\{\\begin{array}{ll}\n\\lambda_{P_n}(x)&x\\in P_n\\setminus\\left(\\bigcup_{n_1,\\ldots,n_p}e_{n_1\\cdots n_p}'\\right)\\\\\n\\lambda_{L}(x)& x\\in L\\setminus\\left(\\bigcup_{n_1,\\ldots,n_p}e_{n_1\\cdots n_p}'\\right) \\\\\np+\\lambda_{e_{n_1\\cdots n_p}'}(x)& x\\in e_{n_1\\cdots n_p}'\\ \\text{for\\ some}\\ n_1,\\ldots,n_p\n\\end{array}\\right.\n\\end{equation}\nNotice that Theorem \\ref{baby_M} holds for every $x\\in K$ lying in an atom of $e'_{n_1n_2\\ldots n_p}$ that is of pure type. Such a point $x$ does not lie in $e'_{n_1n_2\\ldots n_pn_{p+1}}$ for any choice of $n_{p+1}$ and hence falls into exactly one of the following possibilities:\n\\begin{itemize}\n\\item that $x\\in P_n\\setminus\\{x_n\\}$ for some $n\\ge1$ and $\\lambda_K(x)=\\lambda_{P_n}(x)\\ge p$.\n\\item that $x\\in L\\setminus\\{x_n: n\\ge1\\}$ and $\\lambda_K(x)=\\lambda_L(x)\\ge p$.\n\\item that $x=x_n$ for some $n\\ge1$ and $\\lambda_K(x)=\\max\\left\\{\\lambda_L(x),\\lambda_{P_n}(x)\\right\\}=p$.\n\\end{itemize}\nEvery other point $x\\in K$ necessarily lies in $e'_{n_1n_2\\ldots n_p}$ for infinitely many $p$. The continua $e'_{n_1n_2\\ldots n_p}$ decrease to a continuum $M_x$. There are three possibilities, either $x\\in L\\setminus\\{x_n\\}$, or $x\\in P_n\\setminus\\{x_n\\}$ for some $n\\ge1$, or $x=x_n$ for some $n\\ge1$. In the first case, we have $\\lambda_K(x)=\\lambda_L(x)=\\infty$; in the second, we have $\\lambda_K(x)=\\lambda_{P_n}(x)=\\infty$; in the third, we have $\\lambda_K(x)=\\max\\left\\{\\lambda_L(x),\\lambda_{P_n}(x)\\right\\}=\\infty$. This completes our proof.\n\\end{proof}\n\n\n\\begin{rem}\nLet $K=L_0\\supset L_1\\supset L_2\\supset\\cdots$ be given as in Question \\ref{small_julia}. Also let $L=\\bigcap_{n\\ge1}L_n$. Then $L_n\\cap\\overline{K\\setminus L_n}$ is a finite set for all $n\\ge1$.\nAssume in addition that (1) every singleton $\\{x_n\\}$ is an atom of $K$ and (2) $K$ and $L$ satisfy the requirements in Theorem \\ref{baby_M}. By Lemma \\ref{gluing_atoms_a}, we see that $\\mathcal{D}_{L_{n+1}}\\subset\\mathcal{D}_{L_n}\\subset\\mathcal{D}_K$ for all $n\\ge1$; thus from Theorem \\ref{gluing_lemma} we can infer that $\\lambda_K(x)=\\lambda_{L_n}(x)$ for all $x\\in L_n$. Moreover, by Lemma \\ref{gluing_atoms_b} we have $\\mathcal{D}_L\\subset\\mathcal{D}_K$. 
Therefore, by Theorem \\ref{baby_M} we further infer that $\\lambda_K(x)=\\lambda_L(x)$ holds for all $x\\in L$.\n\\end{rem}\n\n\n\\section{Examples}\\label{examples}\n\nWe shall construct examples.\nThe beginning two provide choices of compacta $A,B\\subset\\hat{\\bbC}$ such that $\\lambda_{A\\cup B}(x)\\ne\\max\\left\\{\\lambda_A(x),\\lambda_B(x)\\right\\}$ for some $x$, although $\\lambda_A(x)=\\lambda_B(x)$ for all $x\\in A\\cap B$. In the first, $A\\cap B$ is an uncountable set; in the second, $A\\cap B$ is a countably infinite set. Therefore, the conditions of Theorem \\ref{gluing_lemma} are not satisfied.\n\n\n\\begin{exam}\\label{cantor_combs}\nLet $A\n\\{t+s{\\bf i}: t\\in\\Kc, 0\\le s\\le1\\}$, where $\\Kc$ is the Cantor ternary set. Let\n$B\n\\{t+(1+s){\\bf i}: 0\\le t\\le 1, s\\in\\Kc\\}$. Let $A_1=A\\cup B$ and $B_1=(A+1+{\\bf i})\\cup(B+1-{\\bf i})$.\nSee Figure \\ref{not_glued} for a simplified depiction of $A, B, A_1, B_1$.\n\\begin{figure}[ht]\n\\vspace{-0.05cm}\n\\begin{tabular}{ccccc}\n\\begin{tikzpicture}[x=1cm,y=1cm,scale=0.5]\n\\foreach \\i in {0,...,3}\n{\n \\draw[gray,very thick] (3*\\i\/27,0) -- (3*\\i\/27,3);\n \\draw[gray,very thick] (6\/9+3*\\i\/27,0) -- (6\/9+3*\\i\/27,3);\n \\draw[gray,very thick] (2+3*\\i\/27,0) -- (2+3*\\i\/27,3);\n \\draw[gray,very thick] (2+6\/9+3*\\i\/27,0) -- (2+6\/9+3*\\i\/27,3);\n}\n\\end{tikzpicture} \\hspace{0.25cm}\n&\n\\hspace{0.25cm}\n\\begin{tikzpicture}[x=1cm,y=1cm,scale=0.5]\n\\foreach \\i in {0,...,3}\n{\n \\draw[gray,very thick] (0,3*\\i\/27+3) -- (3,3*\\i\/27+3);\n \\draw[gray,very thick] (0,6\/9+3*\\i\/27+3) -- (3,6\/9+3*\\i\/27+3);\n \\draw[gray,very thick] (0,2+3*\\i\/27+3) -- (3,2+3*\\i\/27+3);\n \\draw[gray,very thick] (0,2+6\/9+3*\\i\/27+3) -- (3,2+6\/9+3*\\i\/27+3);\n}\n\\draw[gray,dashed] (0,0) -- (3,0)-- (3,3)-- (0,3)-- (0,0);\n\\end{tikzpicture} \\hspace{0.25cm}\n\n&\n\\hspace{0.25cm}\n\\begin{tikzpicture}[x=1cm,y=1cm,scale=0.5]\n\n\\foreach \\i in {0,...,3}\n{\n \\draw[gray,very thick] (3*\\i\/27,0) -- (3*\\i\/27,3);\n \\draw[gray,very thick] (6\/9+3*\\i\/27,0) -- (6\/9+3*\\i\/27,3);\n \\draw[gray,very thick] (2+3*\\i\/27,0) -- (2+3*\\i\/27,3);\n \\draw[gray,very thick] (2+6\/9+3*\\i\/27,0) -- (2+6\/9+3*\\i\/27,3);\n}\n\\foreach \\i in {0,...,3}\n{\n \\draw[gray,very thick] (0,3*\\i\/27+3) -- (3,3*\\i\/27+3);\n \\draw[gray,very thick] (0,6\/9+3*\\i\/27+3) -- (3,6\/9+3*\\i\/27+3);\n \\draw[gray,very thick] (0,2+3*\\i\/27+3) -- (3,2+3*\\i\/27+3);\n \\draw[gray,very thick] (0,2+6\/9+3*\\i\/27+3) -- (3,2+6\/9+3*\\i\/27+3);\n}\n\\end{tikzpicture} \\hspace{0.25cm}\n\n&\n\\hspace{0.25cm}\n\\begin{tikzpicture}[x=1cm,y=1cm,scale=0.5]\n\\foreach \\i in {0,...,3}\n{\n \\draw[gray,very thick] (3+3*\\i\/27,3) -- (3+3*\\i\/27,6);\n \\draw[gray,very thick] (3+6\/9+3*\\i\/27,3) -- (3+6\/9+3*\\i\/27,6);\n \\draw[gray,very thick] (3+2+3*\\i\/27,3) -- (3+2+3*\\i\/27,6);\n \\draw[gray,very thick] (3+2+6\/9+3*\\i\/27,3) -- (3+2+6\/9+3*\\i\/27,6);\n}\n\\foreach \\i in {0,...,3}\n{\n \\draw[gray,very thick] (3,3*\\i\/27) -- (3+3,3*\\i\/27);\n \\draw[gray,very thick] (3,6\/9+3*\\i\/27) -- (3+3,6\/9+3*\\i\/27);\n \\draw[gray,very thick] (3,2+3*\\i\/27) -- (3+3,2+3*\\i\/27);\n \\draw[gray,very thick] (3,2+6\/9+3*\\i\/27) -- (3+3,2+6\/9+3*\\i\/27);\n}\n\\end{tikzpicture} \\hspace{0.25cm}\n\n&\n\\hspace{0.25cm}\n\\begin{tikzpicture}[x=1cm,y=1cm,scale=0.5]\n\n\n\\foreach \\i in {0,...,3}\n{\n \\draw[gray,very thick] (3*\\i\/27,0) -- (3*\\i\/27,3);\n \\draw[gray,very thick] (6\/9+3*\\i\/27,0) -- (6\/9+3*\\i\/27,3);\n \\draw[gray,very thick] 
(2+3*\\i\/27,0) -- (2+3*\\i\/27,3);\n \\draw[gray,very thick] (2+6\/9+3*\\i\/27,0) -- (2+6\/9+3*\\i\/27,3);\n}\n\\foreach \\i in {0,...,3}\n{\n \\draw[gray,very thick] (0,3*\\i\/27+3) -- (3,3*\\i\/27+3);\n \\draw[gray,very thick] (0,6\/9+3*\\i\/27+3) -- (3,6\/9+3*\\i\/27+3);\n \\draw[gray,very thick] (0,2+3*\\i\/27+3) -- (3,2+3*\\i\/27+3);\n \\draw[gray,very thick] (0,2+6\/9+3*\\i\/27+3) -- (3,2+6\/9+3*\\i\/27+3);\n}\n\\foreach \\i in {0,...,3}\n{\n \\draw[gray,very thick] (3+3*\\i\/27,3) -- (3+3*\\i\/27,6);\n \\draw[gray,very thick] (3+6\/9+3*\\i\/27,3) -- (3+6\/9+3*\\i\/27,6);\n \\draw[gray,very thick] (3+2+3*\\i\/27,3) -- (3+2+3*\\i\/27,6);\n \\draw[gray,very thick] (3+2+6\/9+3*\\i\/27,3) -- (3+2+6\/9+3*\\i\/27,6);\n}\n\\foreach \\i in {0,...,3}\n{\n \\draw[gray,very thick] (3,3*\\i\/27) -- (3+3,3*\\i\/27);\n \\draw[gray,very thick] (3,6\/9+3*\\i\/27) -- (3+3,6\/9+3*\\i\/27);\n \\draw[gray,very thick] (3,2+3*\\i\/27) -- (3+3,2+3*\\i\/27);\n \\draw[gray,very thick] (3,2+6\/9+3*\\i\/27) -- (3+3,2+6\/9+3*\\i\/27);\n}\n\\end{tikzpicture}\n\\\\ $A$& $B$& $A_1=A\\cup B$& $B_1$& $A_1\\cup B_1$\\end{tabular}\n\\caption{The two compacta $A, B$ and their union.}\\label{not_glued}\n\\end{figure}\nThen\n$\\lambda_A(x)=1$ for all $x\\in A$ and vanishes otherwise; similarly, $\\lambda_B(x)=1$ for all $x\\in B$ and vanishes otherwise.\nHowever, both $A\\cap B$ and $A_1\\cap B_1$ are uncountable, thus the conditions in Theorem \\ref{gluing_lemma} are not satisfied. Moreover, we have\n\\[\\lambda_{A_1}(x)=\\lambda_{A\\cup B}(x)=\\left\\{\\begin{array}{ll}2&x\\in A\\\\ 1& B\\setminus A\\\\ 0& {otherwise}\\end{array}\\right.\\quad {and}\\quad\n\\lambda_{A_1\\cup B_1}(x)=\\left\\{\\begin{array}{ll}\\infty & x\\in (A_1\\cup B_1)\\\\ 0& {otherwise}.\\end{array}\\right.\\]\n\n\\end{exam}\n\n\n\\begin{exam}\\label{brooms}\nSet $A=\\bigcup\\limits_{n\\ge0}A_n$. Here $A_0=\\{s{\\bf i}: 0\\le s\\le1\\}$ and $A_1$ is the continuum that consists of the line $\\displaystyle\\left\\{1+t{\\bf i}: 0\\le t\\le1\\right\\}$ and all those lines connecting $1+{\\bf i}$ to $\\displaystyle\\frac{k}{k+1}$ for $k\\ge1$; moreover, for $n\\ge2$, $A_n=\\displaystyle \\left\\{2^{-n+1}t+s{\\bf i}: t+s{\\bf i}\\in A_1\\right\\}$. 
See Figure \\ref{broom_comb}.\n\\begin{figure}[ht]\n\\begin{center}\n\\begin{tabular}{cc}\n\\begin{tikzpicture}[x=1.618cm,y=1cm,scale=1]\n\\foreach \\i in {0,...,3}\n{\n \\draw[gray,very thick] (0,3+3*\\i\/27) -- (3,3+3*\\i\/27);\n \\draw[gray,very thick] (0,3+6\/9+3*\\i\/27) -- (3,3+6\/9+3*\\i\/27);\n \\draw[gray,very thick] (0,3+2+3*\\i\/27) -- (3,3+2+3*\\i\/27);\n \\draw[gray,very thick] (0,3+2+6\/9+3*\\i\/27) -- (3,3+2+6\/9+3*\\i\/27);\n}\n\n\\draw[gray,very thick] (0,0) --(0,3);\n\\draw[gray,very thick] (3,0) --(3,3);\n\\draw[gray,very thick] (3\/2,0) --(3\/2,3);\n\\draw[gray,very thick] (3\/4,0) --(3\/4,3);\n\\draw[gray,very thick] (3\/8,0) --(3\/8,3);\n\\draw[gray,very thick] (3\/16,0) --(3\/16,3);\n\\draw[gray,very thick] (3\/32,0) --(3\/32,3);\n\\draw[gray,very thick] (3\/64,0) --(3\/64,3);\n\\draw[gray,very thick] (3\/128,0) --(3\/128,3);\n\\draw[gray,very thick] (3\/256,0) --(3\/256,3);\n\\draw[gray,very thick] (3\/512,0) --(3\/512,3);\n\n\\foreach \\i in {2,...,7}\n{\n \\draw[gray,very thick] (3,3) -- (3-3\/\\i,0);\n}\n\\node at (2.8,0.15) {$\\ldots$};\n\n\n\\foreach \\i in {2,...,7}\n{\n \\draw[gray,very thick] (3\/2,3) -- (3\/2-1.5\/\\i,0);\n}\n\n\n \\node at (9\/16,1.75) {$\\vdots$};\n \\node at (9\/16,1.25) {$\\vdots$};\n\\node at (-0.1,0.2){$0$}; \\node at (3.1,0.2){$1$};\n\\node at (0,0){$\\cdot$}; \\node at (3,0){$\\cdot$};\n\\node at (3,3){$\\cdot$}; \\node at (3,6){$\\cdot$};\n\\node at (3.35,3.0){$1\\!+\\!{\\bf i}$}; \\node at (3.35,6.0){$1\\!+\\!2{\\bf i}$};\n\\end{tikzpicture}\n&\n\\begin{tikzpicture}[x=1.618cm,y=1cm,scale=1]\n\\foreach \\i in {0,...,3}\n{\n \\draw[gray,dashed] (0,3+3*\\i\/27) -- (3,3+3*\\i\/27);\n \\draw[gray,dashed] (0,3+6\/9+3*\\i\/27) -- (3,3+6\/9+3*\\i\/27);\n \\draw[gray,dashed] (0,3+2+3*\\i\/27) -- (3,3+2+3*\\i\/27);\n \\draw[gray,dashed] (0,3+2+6\/9+3*\\i\/27) -- (3,3+2+6\/9+3*\\i\/27);\n}\n\n\\draw[gray,very thick] (0,3) --(3,3);\n\n\\draw[gray,very thick] (0,0) --(0,3);\n\\draw[gray,very thick] (3,0) --(3,3);\n\\draw[gray,very thick] (3\/2,0) --(3\/2,3);\n\\draw[gray,very thick] (3\/4,0) --(3\/4,3);\n\\draw[gray,very thick] (3\/8,0) --(3\/8,3);\n\\draw[gray,very thick] (3\/16,0) --(3\/16,3);\n\\draw[gray,very thick] (3\/32,0) --(3\/32,3);\n\\draw[gray,very thick] (3\/64,0) --(3\/64,3);\n\\draw[gray,very thick] (3\/128,0) --(3\/128,3);\n\\draw[gray,very thick] (3\/256,0) --(3\/256,3);\n\\draw[gray,very thick] (3\/512,0) --(3\/512,3);\n\n\\node at (-0.1,0.2){$0$}; \\node at (3.1,0.2){$1$}; \\node at (3.35,3.0){$1\\!+\\!{\\bf i}$};\n\\end{tikzpicture}\\\\ $A\\cup B$ & $d$\n\\end{tabular}\n\\end{center}\n\\vskip -0.75cm\n\\caption{The compactum $A\\cup B$ and the atom $d$ of $A\\cup B$.}\\label{broom_comb}\n\\end{figure}\nFurther setting $B$ as in Example \\ref{cantor_combs}, we have $A\\cap B=\\{ {\\bf i}\\}\\cup\\left\\{2^{-n}+{\\bf i}: n\\ge0\\right\\}$.\nIf $x\\in A\\cap B$ then $\\lambda_A(x)=\\lambda_B(x)=1$. Let\n$\\displaystyle L_1=\\left\\{t+s{\\bf i}: 0\\le s\\le 1, t=0\\ {or}\\ 2^{-n}\\ {for\\ some}\\ n\\ge0\\right\\}.$ Then $d=L_1 \\cup\\ \\{t+{\\bf i}: 0\\le t\\le1\\}$ is an atom of $A\\cup B$ and is not locally connected at any $x\\in A_0$. 
Moreover, we have\n\\[\\lambda_{A}(x)=\\left\\{\\begin{array}{ll}1&x\\in L_1 \\\\ 0& {otherwise}\\end{array}\\right.\\quad {and}\\quad\n\\lambda_{A\\cup B}(x)=\\left\\{\\begin{array}{ll}2& x\\in A_0\\\\ 1&x\\in (B\\cup d)\\setminus A_0\\\\ 0& {otherwise}.\\end{array}\\right.\\]\n\\end{exam}\n\n\n\nThe next two examples are about $E$-continua $K\\subset\\hat{\\mathbb{C}}$ such that the lambda equality given in (i) or (ii) of Theorem \\ref{equality-case} does not hold.\n\n\n\\begin{exam}\\label{E-compactum}\nLet $X$ denote the square $[1,2]\\times[0,{\\mathbf i}]\\subset\\hat{\\mathbb{C}}$. Let $Y$ be an embedding of $[0,\\infty)$ whose closure $\\overline{Y}$ equals the union of $Y$ with $\\partial X$. See the left part of Figure \\ref{negative} for a simplified representation of \\ $\\overline{Y}$, which is depicted as \\tb{blue}.\n\\begin{figure}[ht]\n\\vskip -0.25cm\n\\begin{tabular}{ll}\n\\begin{tikzpicture}[x=0.2cm,y=0.2cm,scale=0.55]\n\\draw[gray,thick] (64,0) -- (64,32) -- (0,32) -- (0,0) -- (64,0);\n\n\\foreach \\j in {0,1}\n{ \\draw[gray, thick] (32,\\j*16) -- (32,16+\\j*16) -- (16,16+\\j*16) -- (16,\\j*16) --(32,\\j*16);\n}\n\\foreach \\j in {0,...,3}\n{\n \\draw[gray, thick] (16,\\j*8) -- (16,8+\\j*8) -- (8,8+\\j*8) -- (8,\\j*8) --(16,\\j*8);\n}\n\\foreach \\j in {0,...,7}\n{\n \\draw[gray, thick] (8,\\j*4) -- (8,4+\\j*4) -- (4,4+\\j*4) -- (4,\\j*4) --(8,\\j*4);\n}\n\\foreach \\j in {0,...,15}\n{\n \\draw[gray, thick] (4,\\j*2) -- (4,2+\\j*2) -- (2,2+\\j*2) -- (2,\\j*2) --(4,\\j*2);\n}\n\\foreach \\j in {0,...,31}\n{\n \\draw[gray, thick] (2,\\j*1) -- (2,1+\\j*1) -- (1,1+\\j*1) -- (1,\\j*1) --(2,\\j*1);\n}\n\\foreach \\j in {0,...,63}\n{\n \\draw[gray, thick] (1,\\j*1\/2) -- (1,1\/2+\\j*1\/2) -- (1\/2,1\/2+\\j*1\/2) -- (1\/2,\\j*1\/2) --(1,\\j*1\/2);\n}\n\n\\foreach \\j in {0,...,127}\n{\n \\draw[gray, thick] (1\/2,\\j*1\/4) -- (1\/2,1\/4+\\j*1\/4) -- (1\/4,1\/4+\\j*1\/4) -- (1\/4,\\j*1\/4) --(1\/2,\\j*1\/4);\n}\n\\foreach \\j in {0,...,255}\n{\n \\draw[gray, thick] (1\/4,\\j*1\/8) -- (1\/4,1\/8+\\j*1\/8) -- (1\/8,1\/8+\\j*1\/8) -- (1\/8,\\j*1\/8) --(1\/4,\\j*1\/8);\n}\n\n\n\\foreach \\k in {0,1}\n{\n\\draw[gray, dashed, thick] (16+1,16*\\k+1) -- (32-1,16*\\k+1) -- (32-1, 16*\\k+16-1) -- (16+1\/2,16*\\k+16-1);\n}\n\n\n\\foreach \\k in {0,1,2,3}\n{\n\\draw[gray, dashed, thick] (8+1\/2,8*\\k+1\/2) -- (16-1\/2,8*\\k+1\/2) -- (16-1\/2, 8*\\k+8-1\/2) -- (8+1\/2,8*\\k+8-1\/2);\n}\n\n\\foreach \\i in {0,1}\n{\\foreach \\j in {2,...,6}\n{\n \\draw[gray, thick] (16+\\j,\\j+16*\\i) -- (32-\\j,\\j+16*\\i) -- (32-\\j,16-\\j+16*\\i) -- (16+\\j-1,16-\\j+16*\\i)--(16+\\j-1,\\j-1+16*\\i);\n}\n}\n\n\\foreach \\i in {0,...,3}\n{\\foreach \\j in {2,...,6}\n{\n \\draw[gray] (8+\\j\/2,\\j\/2+8*\\i) -- (16-\\j\/2,\\j\/2+8*\\i) -- (16-\\j\/2,8-\\j\/2+8*\\i) --\n (8+\\j\/2-1\/2,8-\\j\/2+8*\\i)--(8+\\j\/2-1\/2,\\j\/2-1\/2+8*\\i);\n}\n}\n\\foreach \\j in {0,...,7}\n{\n \\fill(5.5,2+\\j*4)circle(1pt);\n \\fill(6,2+\\j*4)circle(1pt);\n \\fill(6.5,2+\\j*4)circle(1pt);\n}\n\\node at (0.25,-1.5){$0$};\n\\node at (32,-1.5){$1$};\n\\node at (64,-1.5){$2$};\n\\node at (0.25,33.5){${\\mathbf i}$};\n\n\\draw[blue,thick] (64,0) -- (64,32) -- (32,32) -- (32,0) -- (64,0);\n\n\\foreach \\j in {2,...,6}\n{ \\draw[blue, thick] (32+2*\\j,2*\\j) -- (64-2*\\j,2*\\j) -- (64-2*\\j,32-2*\\j) -- (32+2*\\j-2,32-2*\\j) --(32+2*\\j-2,2*\\j-2);\n}\n\n\\draw[blue, dashed, thick] (32+2,2) -- (64-2,2) -- (64-2,32-2) -- (32+1,32-2);\n\n\\end{tikzpicture}\n&\n\\begin{tikzpicture}[x=0.2cm,y=0.2cm,scale=0.55]\n\\draw[gray,thick] (64,0) -- 
(64,32) -- (0,32) -- (0,0) -- (64,0);\n\\foreach \\j in {0,1}\n{ \\draw[gray, thick] (32,\\j*16) -- (32,16+\\j*16) -- (16,16+\\j*16) -- (16,\\j*16) --(32,\\j*16);\n}\n\\foreach \\j in {0,...,3}\n{\n \\draw[gray, thick] (16,\\j*8) -- (16,8+\\j*8) -- (8,8+\\j*8) -- (8,\\j*8) --(16,\\j*8);\n}\n\\foreach \\j in {0,...,7}\n{\n \\draw[gray, thick] (8,\\j*4) -- (8,4+\\j*4) -- (4,4+\\j*4) -- (4,\\j*4) --(8,\\j*4);\n}\n\\foreach \\j in {0,...,15}\n{\n \\draw[gray, thick] (4,\\j*2) -- (4,2+\\j*2) -- (2,2+\\j*2) -- (2,\\j*2) --(4,\\j*2);\n}\n\\foreach \\j in {0,...,31}\n{\n \\draw[gray, thick] (2,\\j*1) -- (2,1+\\j*1) -- (1,1+\\j*1) -- (1,\\j*1) --(2,\\j*1);\n}\n\\foreach \\j in {0,...,63}\n{\n \\draw[gray, thick] (1,\\j*1\/2) -- (1,1\/2+\\j*1\/2) -- (1\/2,1\/2+\\j*1\/2) -- (1\/2,\\j*1\/2) --(1,\\j*1\/2);\n}\n\n\\foreach \\j in {0,...,127}\n{\n \\draw[gray, thick] (1\/2,\\j*1\/4) -- (1\/2,1\/4+\\j*1\/4) -- (1\/4,1\/4+\\j*1\/4) -- (1\/4,\\j*1\/4) --(1\/2,\\j*1\/4);\n}\n\\foreach \\j in {0,...,255}\n{\n \\draw[gray, thick] (1\/4,\\j*1\/8) -- (1\/4,1\/8+\\j*1\/8) -- (1\/8,1\/8+\\j*1\/8) -- (1\/8,\\j*1\/8) --(1\/4,\\j*1\/8);\n}\n\n\n\\node at (0.25,-1.5){$0$};\n\\node at (32,-1.5){$1$};\n\\node at (64,-1.5){$2$};\n\\node at (0.25,33.5){${\\mathbf i}$};\n\\end{tikzpicture}\n\\end{tabular}\n\\vskip -0.25cm\n\\caption{(left): the $E$-continuum $K$; (right): the only non-degenerate atom.}\\label{negative}\n\\vskip -0.25cm\n\\end{figure}\nLet $f_1(z)=\\frac{z}{2}$ and $f_2(z)=\\frac{z+{\\mathbf i}}{2}$. Let $K_0=\\overline{Y}$. For all $n\\ge1$, let $K_n=f_1\\left(K_{n-1}\\right)\\cup f_2\\left(K_{n-1}\\right)$. Then $K_0,K_1,\\ldots$ is an infinite sequence of continua converging to the segment $[0,{\\mathbf i}]$ under Hausdorff distance. Clearly,\n\\[K=\\left(\\bigcup\\limits_{n\\ge0}K_n\\right)\\cup\\{s{\\mathbf i}: 0\\le s\\le1\\}\\]\nis an $E$-continuum. See left part of Figure \\ref{negative}. Let $L_0=\\partial X$. For all $n\\ge1$, let $L_n=f_1\\left(L_{n-1}\\right)\\cup f_2\\left(L_{n-1}\\right)$. Then $L_0,L_1,\\ldots$ is an infinite sequence of continua converging to the segment $[0,{\\mathbf i}]$ under Hausdorff distance. Similarly, we see that\n\\[L=\\left(\\bigcup\\limits_{n\\ge0}L_n\\right)\\cup\\{s{\\mathbf i}: 0\\le s\\le1\\}\\]\nis also an $E$-continuum. See right part of Figure \\ref{negative}. Moreover, the continuum $K$ has exactly one atom of order $1$ that is not a singleton. This atom equals $L$. Thus we have\n\\[\n\\lambda_K(x)=\\left\\{\\begin{array}{ll}1&x\\in L\\\\ 0& {otherwise}\\end{array}\\right.\\ \\text{and}\\\n\\tilde{\\lambda}_K(x)=\\left\\{\\begin{array}{ll}\n1& x\\in L\\setminus[0,{\\mathbf i}]\\\\ 0& {otherwise}.\\end{array}\\right.\n\\]\n\\end{exam}\n\n\n\\begin{exam}\\label{finite-comp}\nLet $\\mathcal{C}$ denote Cantor's ternary set. 
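Recall that the ternary Cantor set admits the explicit description
\[
\mathcal{C}=\left\{\sum_{k\ge1}\frac{a_k}{3^k}:\ a_k\in\{0,2\}\ \text{for all}\ k\ge1\right\},
\]
so that $\mathcal{C}$ is a compact, totally disconnected and uncountable subset of $[0,1]$.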
Let $U_1\\subset\\hat{\\mathbb{C}}$ be the domain, not containing $\\infty$, whose boundary consists of $[0,1]\\times{\\bf i}\\mathcal{C}=\\{t+s{\\bf i}: 0\\le t\\le 1, s\\in\\mathcal{C}\\}$ and $\\partial\\left([0,\\frac43]\\times[0,{\\bf i}]\\right)$.\n\\begin{figure}[ht]\n\\vspace{-0.05cm}\n\\begin{center}\n\\begin{tabular}{ccc}\n\\begin{tikzpicture}[x=1cm,y=1cm,scale=0.618]\n\\draw[gray,very thick] (0,0) -- (3,0)-- (3,4)-- (0,4)-- (0,0);\n\\draw[gray,very thick] (3,0) -- (-1,0)-- (-1,-3)-- (3,-3)-- (3,0);\n\\draw[gray,very thick] (3,0) -- (3,-4)-- (6,-4)-- (6,0)-- (3,0);\n\\draw[gray,very thick] (3,0) -- (7,0)-- (7,3)-- (3,3)-- (3,0);\n\\foreach \\i in {0,...,3}\n{\n \\draw[gray,very thick] (3*\\i\/27,0) -- (3*\\i\/27,3);\n \\draw[gray,very thick] (6\/9+3*\\i\/27,0) -- (6\/9+3*\\i\/27,3);\n \\draw[gray,very thick] (2+3*\\i\/27,0) -- (2+3*\\i\/27,3);\n \\draw[gray,very thick] (2+6\/9+3*\\i\/27,0) -- (2+6\/9+3*\\i\/27,3);\n}\n\n\\foreach \\i in {0,...,3}\n{\n \\draw[gray,very thick] (0,3*\\i\/27-3) -- (3,3*\\i\/27-3);\n \\draw[gray,very thick] (0,6\/9+3*\\i\/27-3) -- (3,6\/9+3*\\i\/27-3);\n \\draw[gray,very thick] (0,2+3*\\i\/27-3) -- (3,2+3*\\i\/27-3);\n \\draw[gray,very thick] (0,2+6\/9+3*\\i\/27-3) -- (3,2+6\/9+3*\\i\/27-3);\n}\n\n\\foreach \\i in {0,...,3}\n{\n \\draw[gray,very thick] (3+3*\\i\/27,0) -- (3+3*\\i\/27,-3);\n \\draw[gray,very thick] (3+6\/9+3*\\i\/27,0) -- (3+6\/9+3*\\i\/27,-3);\n \\draw[gray,very thick] (3+2+3*\\i\/27,0) -- (3+2+3*\\i\/27,-3);\n \\draw[gray,very thick] (3+2+6\/9+3*\\i\/27,0) -- (3+2+6\/9+3*\\i\/27,-3);\n}\n\n\\foreach \\i in {0,...,3}\n{\n \\draw[gray,very thick] (3,3*\\i\/27) -- (3+3,3*\\i\/27);\n \\draw[gray,very thick] (3,6\/9+3*\\i\/27) -- (3+3,6\/9+3*\\i\/27);\n \\draw[gray,very thick] (3,2+3*\\i\/27) -- (3+3,2+3*\\i\/27);\n \\draw[gray,very thick] (3,2+6\/9+3*\\i\/27) -- (3+3,2+6\/9+3*\\i\/27);\n}\n\n\\draw(1.5,2.75) node[above]{$U_2$};\n\\draw(0.25,-1.5) node[left]{$U_3$};\n\\draw(4.5,-2.75) node[below]{$U_4$};\n\\draw(5.75,1.5) node[right]{$U_1$};\n\\draw(4.5,3.5) node[right]{$U_5$};\n\\end{tikzpicture}\n&&\\hskip 1.0cm\n\\begin{tikzpicture}[x=1cm,y=1cm,scale=0.618]\n\\draw[gray,dashed] (0,0) -- (3,0)-- (3,4)-- (0,4)-- (0,0);\n\\draw[gray,dashed] (3,0) -- (-1,0)-- (-1,-3)-- (3,-3)-- (3,0);\n\\draw[gray,dashed] (3,0) -- (3,-4)-- (6,-4)-- (6,0)-- (3,0);\n\\draw[gray,dashed] (3,0) -- (7,0)-- (7,3)-- (3,3)-- (3,0);\n\n\\foreach \\i in {0,...,3}\n{\n \\draw[gray,very thick] (3*\\i\/27,0) -- (3*\\i\/27,3);\n \\draw[gray,very thick] (6\/9+3*\\i\/27,0) -- (6\/9+3*\\i\/27,3);\n \\draw[gray,very thick] (2+3*\\i\/27,0) -- (2+3*\\i\/27,3);\n \\draw[gray,very thick] (2+6\/9+3*\\i\/27,0) -- (2+6\/9+3*\\i\/27,3);\n}\n\n\\foreach \\i in {0,...,3}\n{\n \\draw[gray,very thick] (0,3*\\i\/27-3) -- (3,3*\\i\/27-3);\n \\draw[gray,very thick] (0,6\/9+3*\\i\/27-3) -- (3,6\/9+3*\\i\/27-3);\n \\draw[gray,very thick] (0,2+3*\\i\/27-3) -- (3,2+3*\\i\/27-3);\n \\draw[gray,very thick] (0,2+6\/9+3*\\i\/27-3) -- (3,2+6\/9+3*\\i\/27-3);\n}\n\n\\foreach \\i in {0,...,3}\n{\n \\draw[gray,very thick] (3+3*\\i\/27,0) -- (3+3*\\i\/27,-3);\n \\draw[gray,very thick] (3+6\/9+3*\\i\/27,0) -- (3+6\/9+3*\\i\/27,-3);\n \\draw[gray,very thick] (3+2+3*\\i\/27,0) -- (3+2+3*\\i\/27,-3);\n \\draw[gray,very thick] (3+2+6\/9+3*\\i\/27,0) -- (3+2+6\/9+3*\\i\/27,-3);\n}\n\n\\foreach \\i in {0,...,3}\n{\n \\draw[gray,very thick] (3,3*\\i\/27) -- (3+3,3*\\i\/27);\n \\draw[gray,very thick] (3,6\/9+3*\\i\/27) -- (3+3,6\/9+3*\\i\/27);\n \\draw[gray,very thick] (3,2+3*\\i\/27) -- (3+3,2+3*\\i\/27);\n 
\\draw[gray,very thick] (3,2+6\/9+3*\\i\/27) -- (3+3,2+6\/9+3*\\i\/27);\n}\n\\end{tikzpicture}\n\\end{tabular}\n\\end{center}\n\\vskip -0.5cm\n\\caption{The continuum $K$ and the only non-degenerate atom $d\\in\\Dc_K$.}\\label{finite-comp-pic}\n\\end{figure}\nFor $2\\le j\\le 4$ let $U_j=f^{j-1}(U_1)$, where $f(z)={\\bf i}z$. See the left part of Figure \\ref{finite-comp-pic}. Then $K=\\bigcup_i\\partial U_i$ is a continuum, whose complementary components are $U_1,\\ldots, U_4, U_5$. Here $U_5$ is the one containing $\\infty$. Moreover, the only non-degenerate atom of $K$ is\n$\\displaystyle d=\\bigcup_{j=0}^3 f^j([0,1]\\times{\\bf i}\\mathcal{C})$.\nSince the continuum $d$ has a single atom, which is itself, we have\n$\\lambda_K(x)=\\left\\{\\begin{array}{ll}\\infty& x\\in d\\\\ 0&{otherwise}.\\end{array}\\right.$\nOn the other hand, by the construction of $U_1,\\ldots, U_4$ and $U_5$, we also have\n$\\tilde{\\lambda}_K(x)=\\left\\{\\begin{array}{ll}1& x\\in d\\\\ 0&{otherwise}.\\end{array}\\right.$\nConsequently, we have $\\lambda_K(x)-\\tilde{\\lambda}_K(x)=\\left\\{\\begin{array}{ll} \\infty& x\\in d\\\\ 0& {otherwise}.\\end{array}\\right.$\n\n\\end{exam}\n\n\n\n\nThen we continue to find planar continua $K$, trying to describe possible relations between $\\lambda_K$ and $\\lambda_{\\partial K}$. The first one is Peano continuum $K$ but its boundary is a continuum that is not locally connected. Therefore, $\\lambda_{\\partial K}(x)\\ge\\lambda_K(x)$ for all $x\\in\\hat{\\mathbb{C}}$ and $\\lambda_{\\partial K}(x)>\\lambda_K(x)$ for uncountably many $x$.\n\n\\begin{exam}\\label{bd_larger}\nConsider a spiral made of broken lines, lying in the open square $W=\\{t+s{\\mathbf i}: 0< t,s<1\\}\\subset\\hat{\\mathbb{C}}$, which converges to $\\partial W$. 
See the left of Figure \\ref{spiral}\n\\begin{figure}[ht]\n\\vspace{-0.05cm}\n\\center{\\begin{tabular}{ccc}\n\\begin{tikzpicture}[x=0.2cm,y=0.2cm,scale=0.6]\n\n\\draw[blue, thick] (-16,0) -- (16,0) -- (16,32) -- (-16,32) --(-16,0);\n\\foreach \\j in {1,2}\n\\foreach \\k in {1,2}\n{\n\\draw[blue, ultra thick] (-4,-4*\\j+16) -- (4*\\j,-4*\\j+16) -- (4*\\j,4*\\j+16) --(-4*\\j-4,4*\\j+16)-- (-4*\\j-4,-4*\\j+12) --(0,-4*\\j+12);\n }\n\\draw[blue,ultra thick, dashed] (0,4) -- (13.0,4);\n\n\\draw[blue,ultra thick, dashed] (12.75,4.0) -- (12.75,27.5);\n\n\\end{tikzpicture}\n\\hspace{0.25cm}\n&\n\\hspace{0.25cm}\n\\begin{tikzpicture}[x=0.2cm,y=0.2cm,scale=0.6]\n\n\n\n\n\n\n\n\\fill[gray!80] (-16,0) -- (16,0) -- (16,32) -- (-16,32) --(-16,0);\n\\draw[blue, thick] (-16,0) -- (16,0) -- (16,32) -- (-16,32) --(-16,0);\n\\foreach \\j in {1,2}\n\\foreach \\k in {1,2}\n{\n\\draw[gray!16, ultra thick] (-4,-4*\\j+16) -- (4*\\j,-4*\\j+16) -- (4*\\j,4*\\j+16) --(-4*\\j-4,4*\\j+16)-- (-4*\\j-4,-4*\\j+12) --(0,-4*\\j+12);\n\\draw[gray!16, ultra thick] (-4,-4*\\j+16-\\k*0.45) -- (4*\\j+\\k*0.45,-4*\\j+16-\\k*0.45) -- (4*\\j+\\k*0.45,4*\\j+16+\\k*0.45) --(-4*\\j-4-\\k*0.45,4*\\j+16+\\k*0.45)-- (-4*\\j-4-\\k*0.45,-4*\\j+12-\\k*0.45) --(0,-4*\\j+12-\\k*0.45);\n }\n\n\\draw[gray!16,ultra thick, dashed] (0,4) -- (13.2,4); \\draw[gray!16,ultra thick, dashed] (0,3.55) -- (13.2,3.55); \\draw[gray!16,ultra thick, dashed] (0,3.1) -- (13.2,3.1);\n\n\n\\end{tikzpicture}\n\\hspace{0.25cm}\n&\n\\hspace{0.25cm}\n\\begin{tikzpicture}[x=0.2cm,y=0.2cm,scale=0.6]\n\\fill[gray!80] (-16,0) -- (16,0) -- (16,32) -- (-16,32) --(-16,0);\n\\draw[blue, thick] (-16,0) -- (16,0) -- (16,32) -- (-16,32) --(-16,0);\n\\foreach \\j in {1,2}\n\\foreach \\k in {1,2}\n{\n\\draw[gray!16, ultra thick] (-4,-4*\\j+16) -- (4*\\j,-4*\\j+16) -- (4*\\j,4*\\j+16) --(-4*\\j-4,4*\\j+16)-- (-4*\\j-4,-4*\\j+12) --(0,-4*\\j+12);\n\\draw[gray!16, ultra thick] (-4,-4*\\j+16-\\k*0.45) -- (4*\\j+\\k*0.45,-4*\\j+16-\\k*0.45) -- (4*\\j+\\k*0.45,4*\\j+16+\\k*0.45) --(-4*\\j-4-\\k*0.45,4*\\j+16+\\k*0.45)-- (-4*\\j-4-\\k*0.45,-4*\\j+12-\\k*0.45) --(0,-4*\\j+12-\\k*0.45);\n }\n\\draw[gray!16,ultra thick, dashed] (0,3.8) -- (13.2,3.8);\n\\draw[gray!16,ultra thick, dashed] (0,3.55) -- (13.2,3.55);\n\\draw[gray!16,ultra thick, dashed] (0,3.1) -- (13.2,3.1);\n\n\n\\foreach \\j in {1,2}\n\\foreach \\k in {2}\n{\n\\draw[blue] (-4,-4*\\j+16+0.2) -- (4*\\j-0.2,-4*\\j+16+0.2) -- (4*\\j-0.2,4*\\j+16-0.2) --(-4*\\j-4+0.2,4*\\j+16-0.2)-- (-4*\\j-4+0.2,-4*\\j+12+0.2) --(0,-4*\\j+12+0.2);\n\\draw[blue] (-4,-4*\\j+16-\\k*0.45-0.3) -- (4*\\j+\\k*0.45+0.3,-4*\\j+16-\\k*0.45-0.3) -- (4*\\j+\\k*0.45+0.3,4*\\j+16+\\k*0.45+0.3) --(-4*\\j-4-\\k*0.45-0.3,4*\\j+16+\\k*0.45+0.3)-- (-4*\\j-4-\\k*0.45-0.3,-4*\\j+12-\\k*0.45-0.3) --(0,-4*\\j+12-\\k*0.45-0.3);\n }\n\n\n\\foreach \\i in {0,1,2}\n{\n\\draw[blue] (-4+3.9*\\i,12.25) -- (-4+3.9*\\i,12-1.25);\n\\draw[blue] (3.8,13.8+3*\\i) -- (5.2,13.8+3*\\i);\n\\draw[blue] (-7.8,13.8+3*\\i) -- (-9.2,13.8+3*\\i);\n\\draw[blue] (-7.8,12-1.9*\\i) -- (-9.2,12-1.9*\\i);\n}\n\n\n\\foreach \\i in {0,...,3}\n{\n\\draw[blue] (-5.6+3.6*\\i,19.8) -- (-5.6+3.6*\\i,21.2);\n\\draw[blue] (-7.8+1.2*\\i,8.2) -- (-7.8+1.2*\\i,6.8);\n\\draw[blue] (-3+1.2*\\i,8.2) -- (-3+1.2*\\i,6.8);\n\\draw[blue] (1.8+1.0*\\i,8.2) -- (1.8+1.0*\\i,6.8);\n\\draw[blue] (4.8+1.0*\\i,8.2) -- (4.8+1.0*\\i,6.8);\n\\draw[blue] (7.8,7.8+1.0*\\i) -- (9.2,7.8+1.0*\\i);\n\\draw[blue] (7.8,11.8+1.0*\\i) -- (9.2,11.8+1.0*\\i);\n\\draw[blue] (7.8,15.8+0.8*\\i) -- (9.2,15.8+0.8*\\i);\n\\draw[blue] 
(7.8,19.2+0.8*\\i) -- (9.2,19.2+0.8*\\i);\n\\draw[blue] (7.8,22.4+0.7*\\i) -- (9.2,22.4+0.7*\\i);\n\\draw[blue] (7.8-0.8*\\i,23.8) -- (7.8-0.8*\\i,25.2);\n\\draw[blue] (4.6-0.8*\\i,23.8) -- (4.6-0.8*\\i,25.2);\n\\draw[blue] (1.4-0.8*\\i,23.8) -- (1.4-0.8*\\i,25.2);\n\\draw[blue] (-1.8-0.8*\\i,23.8) -- (-1.8-0.8*\\i,25.2);\n\\draw[blue] (-5-0.8*\\i,23.8) -- (-5-0.8*\\i,25.2);\n\\fill[blue!62](-9-\\i,24.5) circle(1.8pt);\n}\n\n\\end{tikzpicture}\n\n\\end{tabular}\n}\n\\caption{A Peano continuum $K$ whose boundary is not locally connected.}\\label{spiral}\n\\end{figure}\nWe may thicken the spiral to an embedding $h: [0,\\infty)\\times[0,1]\\rightarrow W$, of the unbounded strip $U=[0,\\infty)\\times[0,1]$. Such an embedding may be chosen appropriately, so that $h(\\partial U)$ consists of countably many segments. Then, we obtain a continuum\n$K_0=\\overline{W}\\setminus h(U)$. See the middle part of Figure \\ref{spiral}. Clearly, the continuum $K_0$ is not locally connected on $\\partial W$; and it is locally connected at all the other points. Now, divide the thickened spiral $h(U)$ into smaller and smaller quadruples, which are depicted in the right part of Figure \\ref{spiral} as small rectangles. Let $K$ be the union of $K_0$ with the newly added bars, used in the above mentioned division. Then $K$ is locally connected everywhere hence is a Peano continuum. However, its boundary $\\partial K$ is not locally connected on $\\partial W$ and is locally connected elsewhere. Therefore, we have\n\\[\\lambda_K(x)\\equiv 0\\quad{and}\\quad\n\\lambda_{\\partial K}(x)=\\left\\{\\begin{array}{ll}1 & x\\in\\partial W\\\\\n0& x\\notin\\partial W.\\end{array}\\right.\\]\n\\end{exam}\n\n\n\n\\begin{exam}\\label{bd_smaller}\nLet the continuum $K$ be defined as in Example \\ref{bd_larger}. Let $f_j(z)=\\frac{z}{2}+\\frac{j-1}{2}{\\mathbf i}$ for $j=1,2$. For any compact set $X\\subset\\hat{\\bbC}$, put $\\Phi(X)=f_1(X)\\cup f_2(X)$. We will use the continuum $K$ and the mapping $\\Phi$ to construct a continuum $L$. 
See Figure \\ref{spiral-double}.\n\\begin{figure}[ht]\n\\vskip -0.05cm\n\\center{\n\\begin{tikzpicture}[x=0.2cm,y=0.2cm,scale=0.8]\n\\draw[gray,thick] (32,0) -- (0,0) -- (0,32) -- (32,32);\n\\draw[gray,thick] (32,0) -- (64,0) -- (64,32) -- (32,32) -- (32,0);\n\\foreach \\j in {0,1}\n{ \\draw[gray, thick] (32,16+\\j*16) -- (16,16+\\j*16) -- (16,\\j*16) --(32,\\j*16);\n}\n\\foreach \\j in {0,...,3}\n{\n \\draw[gray, thick] (16,\\j*8) -- (16,8+\\j*8) -- (8,8+\\j*8) -- (8,\\j*8) --(16,\\j*8);\n}\n\\foreach \\j in {0,...,7}\n{\n \\draw[gray, thick] (8,\\j*4) -- (8,4+\\j*4) -- (4,4+\\j*4) -- (4,\\j*4) --(8,\\j*4);\n}\n\\foreach \\j in {0,...,15}\n{\n \\draw[gray, thick] (4,\\j*2) -- (4,2+\\j*2) -- (2,2+\\j*2) -- (2,\\j*2) --(4,\\j*2);\n}\n\\foreach \\j in {0,...,31}\n{\n \\draw[gray, thick] (2,\\j*1) -- (2,1+\\j*1) -- (1,1+\\j*1) -- (1,\\j*1) --(2,\\j*1);\n}\n\\foreach \\j in {0,...,63}\n{\n \\draw[gray, thick] (1,\\j*1\/2) -- (1,1\/2+\\j*1\/2) -- (1\/2,1\/2+\\j*1\/2) -- (1\/2,\\j*1\/2) --(1,\\j*1\/2);\n}\n\n\\foreach \\j in {0,...,127}\n{\n \\draw[gray, thick] (1\/2,\\j*1\/4) -- (1\/2,1\/4+\\j*1\/4) -- (1\/4,1\/4+\\j*1\/4) -- (1\/4,\\j*1\/4) --(1\/2,\\j*1\/4);\n}\n\\foreach \\j in {0,...,255}\n{\n \\draw[gray, thick] (1\/4,\\j*1\/8) -- (1\/4,1\/8+\\j*1\/8) -- (1\/8,1\/8+\\j*1\/8) -- (1\/8,\\j*1\/8) --(1\/4,\\j*1\/8);\n}\n\n\n\n \\draw[gray, thick] (-4,-4) -- (64+4,-4) -- (64+4,32+4) --(-3,32+4)--(-3,-3);\n \\draw[gray, thick] (-3,-3) -- (64+3,-3) -- (64+3,32+2) --(-2,32+2) -- (-2,-2);\n \\draw[gray, thick] (-2,-2) -- (64+2,-2) -- (64+2,32+1) --(-1,32+1) -- (-1,-1);\n \\draw[gray, thick, dashed] (-1,-1)--(64+1,-1)--(64+1,30);\n\n\\node at (33,2) {$1$}; \\fill(32,0) circle(2pt);\n\\node at (63,2) {$2$}; \\fill(64,0) circle(2pt);\n\\node at (61,30) {$2\\!+\\!{\\mathbf i}$}; \\fill(64,32) circle(2pt);\n\n\\node at (48,16) {$\\partial K+1$};\n\\node at (24,8) {$f_1(\\partial K\\!+\\!1)$};\n\\node at (24,24) {$f_2(\\partial K\\!+\\!1)$};\n\n\\draw[gray, very thin] (12,4) -- (-12.5,8);\n\\node at (-15,10) {$f_1\\circ f_1(K\\!+\\!1)$};\n\\draw[gray, very thin] (12,12) -- (-12.5,16);\n\\node at (-15,18) {$f_1\\circ f_2(K\\!+\\!1)$};\n\n\n\\draw[gray, very thin] (12,20) -- (-12.5,24);\n\\node at (-15,26) {$f_2\\circ f_1(K\\!+\\!1)$};\n\\end{tikzpicture}\n}\\vskip -0.5cm\n\\caption{Relative locations of $\\partial K+1$, $\\Phi^{1}(\\partial K+1)$ and $\\Phi^{2}(K+1)$.}\\label{spiral-double}\n\\end{figure}\nThe continuum $L$ consists of five parts:\n\\begin{enumerate}\n\\item the segment $[0,{\\mathbf i}]=\\{s{\\bf i}: 0\\le s\\le1\\}$;\n\\item a spiral converging to the boundary of $[0,2]\\times[0,{\\mathbf i}]$;\n\\item $\\partial K+1=\\{z+1: z\\in \\partial K\\}$;\n\\item $\\Phi^{2n}(K+1)$ for all integers $n\\ge1$; and\n\\item $\\Phi^{2n-1}(\\partial K+1)$ for all integers $n\\ge1$.\n\\end{enumerate}\nOn the one hand, we can directly check that $L$ has a unique non-degenerate atom $d$, which consists of the following four parts: (1) the segment $[0,{\\mathbf i}]$; (2) the boundary of $[1,2]\\times[0,{\\mathbf i}]$, denoted as $A$; (3) $\\Phi^{2n-1}(A)$ for all integers $n\\ge1$; and (4) the boundary of $[2^{-2n},2^{-2n+1}]\\times[0,{\\mathbf i}]$ for all integers $n\\ge1$.\nOn the other hand, the boundary $\\partial L$ has a unique non-degenerate atom $d^*$, which is the union of $A$, the segment $[0,{\\mathbf i}]$, and $\\Phi^{n}(A)$ for all integers $n\\ge1$. 
See Figure \\ref{atoms} for a depiction of $d$ and $d^*$.\n\\begin{figure}[ht]\n\\vspace{-0.25cm}\n\\center{\\begin{tabular}{cc}\n\\begin{tikzpicture}[x=0.2cm,y=0.2cm,scale=0.6]\n\\draw[gray,thick] (32,0) -- (0,0) -- (0,32) -- (32,32);\n\\draw[gray,thick] (32,0) -- (64,0) -- (64,32) -- (32,32) -- (32,0);\n\n\n\\foreach \\j in {0,1}\n{ \\draw[gray, thick] (32,\\j*16) -- (16,\\j*16) -- (16,\\j*16) --(32,\\j*16);\n}\n\n\\foreach \\j in {0} \n{\n \\draw[gray, thick] (16,+\\j*8) -- (16,32+\\j*8) -- (8,32+\\j*8) -- (8,\\j*8) --(16,\\j*8);\n}\n\n\n\\foreach \\j in {0,...,7}\n{\n \\draw[gray, thick] (8,\\j*4) -- (8,\\j*4) -- (4,\\j*4) -- (4,\\j*4) --(8,\\j*4);\n}\n\n\\foreach \\j in {0} \n{\n \\draw[gray, thick] (4,\\j*2) -- (4,32+\\j*2) -- (2,32+\\j*2) -- (2,\\j*2) --(4,\\j*2);\n}\n\n\n\\foreach \\j in {0,...,31}\n{\n \\draw[gray, thick] (2,\\j*1) -- (2,\\j*1) -- (1,\\j*1) -- (1,\\j*1) --(2,\\j*1);\n}\n\n\\foreach \\j in {0} \n{\n \\draw[gray, thick] (1,\\j*1\/2) -- (1,32+\\j*1\/2) -- (1\/2,32+\\j*1\/2) -- (1\/2,\\j*1\/2) --(1,\\j*1\/2);\n}\n\n\\foreach \\j in {0,...,127}\n{\n \\draw[gray, thick] (1\/2,\\j*1\/4) -- (1\/2,1\/4+\\j*1\/4) -- (1\/4,1\/4+\\j*1\/4) -- (1\/4,\\j*1\/4) --(1\/2,\\j*1\/4);\n}\n\n\\foreach \\j in {0,...,255}\n{\n \\draw[gray, thick] (1\/4,\\j*1\/8) -- (1\/4,1\/8+\\j*1\/8) -- (1\/8,1\/8+\\j*1\/8) -- (1\/8,\\j*1\/8) --(1\/4,\\j*1\/8);\n}\n\n\\node at (34,2) {$1$}; \\fill(32,0) circle(2pt);\n\\node at (62,2) {$2$}; \\fill(64,0) circle(2pt);\n\\node at (60,30) {$2\\!+\\!{\\mathbf i}$}; \\fill(64,32) circle(2pt);\n\n\\end{tikzpicture}\n\n& \\begin{tikzpicture}[x=0.2cm,y=0.2cm,scale=0.6]\n\\draw[gray,thick] (32,0) -- (0,0) -- (0,32) -- (32,32);\n\\draw[gray,thick] (32,0) -- (64,0) -- (64,32) -- (32,32) -- (32,0);\n\\foreach \\j in {0,1}\n{ \\draw[gray, thick] (32,16+\\j*16) -- (16,16+\\j*16) -- (16,\\j*16) --(32,\\j*16);\n}\n\\foreach \\j in {0,...,3}\n{\n \\draw[gray, thick] (16,\\j*8) -- (16,8+\\j*8) -- (8,8+\\j*8) -- (8,\\j*8) --(16,\\j*8);\n}\n\\foreach \\j in {0,...,7}\n{\n \\draw[gray, thick] (8,\\j*4) -- (8,4+\\j*4) -- (4,4+\\j*4) -- (4,\\j*4) --(8,\\j*4);\n}\n\\foreach \\j in {0,...,15}\n{\n \\draw[gray, thick] (4,\\j*2) -- (4,2+\\j*2) -- (2,2+\\j*2) -- (2,\\j*2) --(4,\\j*2);\n}\n\\foreach \\j in {0,...,31}\n{\n \\draw[gray, thick] (2,\\j*1) -- (2,1+\\j*1) -- (1,1+\\j*1) -- (1,\\j*1) --(2,\\j*1);\n}\n\\foreach \\j in {0,...,63}\n{\n \\draw[gray, thick] (1,\\j*1\/2) -- (1,1\/2+\\j*1\/2) -- (1\/2,1\/2+\\j*1\/2) -- (1\/2,\\j*1\/2) --(1,\\j*1\/2);\n}\n\n\\foreach \\j in {0,...,127}\n{\n \\draw[gray, thick] (1\/2,\\j*1\/4) -- (1\/2,1\/4+\\j*1\/4) -- (1\/4,1\/4+\\j*1\/4) -- (1\/4,\\j*1\/4) --(1\/2,\\j*1\/4);\n}\n\\foreach \\j in {0,...,255}\n{\n \\draw[gray, thick] (1\/4,\\j*1\/8) -- (1\/4,1\/8+\\j*1\/8) -- (1\/8,1\/8+\\j*1\/8) -- (1\/8,\\j*1\/8) --(1\/4,\\j*1\/8);\n}\n\n\\node at (34,2) {$1$}; \\fill(32,0) circle(2pt);\n\\node at (62,2) {$2$}; \\fill(64,0) circle(2pt);\n\\node at (60,30) {$2\\!+\\!{\\mathbf i}$}; \\fill(64,32) circle(2pt);\n\n\\end{tikzpicture}\n\\end{tabular}\n}\\vskip -0.0cm\n\\caption{A depiction of $d$ and $d^*$.}\\label{atoms}\n\\end{figure}\nThe atom $d^*$ (of $\\partial L$) is a Peano continuum and contains $d$. However, the atom $d$ (of $L$) is not locally connected at points $s{\\mathbf i}$ with $0\\lambda_{\\partial K}(x)$ for at least one point $x\\in\\partial K$}.\n\n\n\nTo conclude this section, we now consider unions and intersections of specific Peano compacta in the plane. 
We will find concrete Peano continua in the plane, say $X$ and $Y$, such that $X\\cap Y$ is a continuum that is not locally connected. Notice that $X\\cup Y$ is always a Peano continuum.\n\n\\begin{exam}\\label{cap-peano}\nLet $M$ be the union of $[0,1]\\times\\{0\\}$ with the vertical segments $\\{0\\}\\times[0,1]$ and $\\{2^{-k}\\}\\times[0,1]$ for integers $k\\ge0$. Then $M$ is a continuum and is not locally connected at points on $\\{0\\}\\times(0,1]$; moreover, we have\n\\[\\lambda_M(x)=\\left\\{\\begin{array}{ll}1& x\\in \\{t{\\bf i}: 0\\le t\\le 1\\}\\\\ 0 & otherwise\\end{array}\\right.\n\\]\nWe will construct two Peano continua $X$ and $Y$ satisfying $X\\cap Y=M$. To this end, for all integers $k\\ge1$ we put\n\\[A_k=\\bigcup_{j=1}^{2^k-1}\\left[0,2^{-k+1}\\right]\\times\\left\\{j2^{-k}\\right\\}.\\]\nThen $X=M\\cup\\left(\\bigcup_kA_k\\right)$ is a Peano continuum.\n\\begin{figure}[ht]\n\\vskip -0.05cm\n\\begin{center}\n\\begin{tabular}{ccc}\n\\begin{tikzpicture}[x=1cm,y=1cm,scale=0.8]\n\\foreach \\i in {1,...,3}\n{\n\\draw[red] (0,1.296*\\i) -- (2.592,1.296*\\i);\n}\n\n\\foreach \\i in {1,...,7}\n{\n\\draw[red] (0,0.648*\\i) -- (1.296,0.648*\\i);\n}\n\n\\foreach \\i in {1,...,15}\n{\n\\draw[red] (0,0.324*\\i) -- (0.648,0.324*\\i);\n}\n\n\\foreach \\i in {1,...,31}\n{\n\\draw[red] (0,0.162*\\i) -- (0.324,0.162*\\i);\n}\n\n\\foreach \\i in {1,...,63}\n{\n\\draw[red] (0,0.081*\\i) -- (0.162,0.081*\\i);\n}\n\n\\draw[blue,thick] (0,0) -- (0,5.184);\n\\draw[blue,thick] (0,0) -- (5.184,0);\n\\draw[red] (2.592,2.592) -- (5.184,2.592);\n\n\\foreach \\i in {1,2,4,8,16,32}\n{\n\\draw[blue,thick] (5.184\/\\i,0) -- (5.184\/\\i,5.184);\n}\n\n\n\\fill[black] (5.184,0) circle (0.35ex); \\draw[purple] (5.184,0) node[right]{$1$};\n\\fill[black] (5.184,5.184) circle (0.35ex); \\draw[purple] (5.184,5.184) node[right]{$1+{\\bf i}$};\n\\fill[black] (0,0) circle (0.35ex); \\draw[purple] (0,0) node[left]{$0$};\n\\end{tikzpicture}\\hspace{0.25cm}\n&\n&\\hspace{0.25cm}\n\\begin{tikzpicture}[x=1cm,y=1cm,scale=0.8]\n\\foreach \\i in {1,...,2}\n{\n\\draw[red] (0,1.728*\\i) -- (5.184,1.728*\\i);\n}\n\n\\foreach \\i in {1,...,8}\n{\n\\draw[red] (0,0.576*\\i) -- (2.592,0.576*\\i);\n}\n\n\\foreach \\i in {1,...,26}\n{\n\\draw[red] (0,0.192*\\i) -- (1.296,0.192*\\i);\n}\n\n\\foreach \\i in {1,...,80}\n{\n\\draw[red] (0,0.064*\\i) -- (0.648,0.064*\\i);\n}\n\n\\draw[blue,thick] (0,0) -- (0,5.184);\n\\draw[blue,thick] (0,0) -- (5.184,0);\n\n\\foreach \\i in {1,2,4,8}\n{\n\\draw[blue,thick] (5.184\/\\i,0) -- (5.184\/\\i,5.184);\n}\n\n\n\\fill[black] (5.184,0) circle (0.35ex); \\draw[purple] (5.184,0) node[right]{$1$};\n\\fill[black] (5.184,5.184) circle (0.35ex); \\draw[purple] (5.184,5.184) node[right]{$1+{\\bf i}$};\n\\fill[black] (0,0) circle (0.35ex); \\draw[purple] (0,0) node[left]{$0$};\n\\end{tikzpicture}\n\\end{tabular}\n\\end{center}\n\\vskip -0.5cm\n\\caption{\\small Two Peano continua that intersect at a non-locally connected continuum.}\\label{peano-cap}\n\\end{figure}\nSee left part of Figure \\ref{peano-cap} for a rough approximate of $X$. Similarly, if for every $k\\ge1$ we set\n\\[B_k=\\bigcup_{j=1}^{3^k-1}\\left[0,2^{-k+1}\\right]\\times\\left\\{j3^{-k}\\right\\},\\]\nthen $\\displaystyle Y=M\\cup\\left(\\bigcup\\limits_kB_k\\right)$ is also a Peano continuum. See right part of Figure \\ref{peano-cap} for a rough approximate of $Y$. 
Moreover, we have $X\\cap Y=M$.\n\\end{exam}\n\nWe also find Peano compacta $X$ and $Y$ such that the union $X\\cup Y$ is not a Peano compactum, although the intersection $X\\cap Y$ is always a Peano compactum. We will use {\\em fractal squares} to construct two such compacta.\nHere a {\\bf fractal square of order $n\\ge2$} is the attractor of an iterated function system\n$\\displaystyle \\Fc_\\Dc:=\\left\\{f_d(x)=\\frac{x+d}{n}: d\\in\\Dc\\right\\}$\nfor some $\\Dc\\subset\\{0,1,\\ldots,n-1\\}^2$ which contains at least $2$ and at most $n^2-1$ elements.\nFor general theory on iterated function systems, we refer to \\cite{Hutchinson81}.\n\n\\begin{exam}\\label{cup-fs}\nLet $X$ and $Y$ be the fractal squares determined by $\\Fc_{\\Dc_X}$ and $\\Fc_{\\Dc_Y}$. Here $\\Dc_X=\\{(i,0): i=0,1,2\\}\\cup\\{(0,2)\\}$ and $\\Dc_Y=\\{(i,0): i=0,1,2\\}\\cup\\{(1,2),(2,2)\\}$. See Figure \\ref{fs-cup} for relative locations of the small squares $f_d([0,1]^2)$ with $d\\in\\Dc_X$ and $d\\in\\Dc_Y$.\n\\begin{figure}[ht]\n\\begin{center}\n\\begin{tabular}{ccc}\n\\begin{tikzpicture}[x=1cm,y=1cm,scale=0.618]\n\n\\fill[purple!30] (0,0) -- (5.184,0) -- (5.184,1.728) -- (0,1.728) -- (0,0);\n\\fill[purple!30] (0,3.456) -- (1.728,3.456) -- (1.728,5.184) -- (0,5.184) -- (0,3.456);\n\n\\foreach \\i in {0,...,3}\n{\n\\draw[gray,thick] (0,1.728*\\i) -- (5.184,1.728*\\i);\n\\draw[gray,thick] (1.728*\\i,0) -- (1.728*\\i,5.184);\n}\n\\end{tikzpicture}\n&\n\\hskip 0.5cm\n&\n\\begin{tikzpicture}[x=1cm,y=1cm,scale=0.618]\n\n\\fill[purple!30] (0,0) -- (5.184,0) -- (5.184,1.728) -- (0,1.728) -- (0,0);\n\\fill[purple!30] (1.728,3.456) -- (5.184,3.456) -- (5.184,5.184) -- (1.728,5.184) -- (1.728,3.456);\n\n\\foreach \\i in {0,...,3}\n{\n\\draw[gray,thick] (0,1.728*\\i) -- (5.184,1.728*\\i);\n\\draw[gray,thick] (1.728*\\i,0) -- (1.728*\\i,5.184);\n}\n\\end{tikzpicture}\n\\end{tabular}\n\\end{center}\n\\vskip -0.5cm\n\\caption{\\small The small squares $f_d([0,1]^2)$ for $d\\in\\Dc_1$ (left part) and for $d\\in\\Dc_2$ (right part).}\\label{fs-cup}\n\\end{figure}\nThen $X$ and $Y$ are Peano compacta, each of which contains the interval $[0,1]$, such that $X\\cup Y$ contains all the segments $[0,1]\\times\\{\\frac{2}{3^k}\\}$ for $k\\ge1$. Moreover, $X\\cap Y$ is a Peano compactum having uncountably many components. All but one of these components are single points. The only non-degenerate component is the interval $[0,1]$. On the other hand, for all $k\\ge1$ the horizontal strip $\\bbR\\times\\left(\\frac{1}{3^k},\\frac{2}{3^k}\\right)$ is disjoint from $X\\cup Y$. This implies that $X\\cup Y$ is not a Peano compactum. Consequently, we have\n\\[\n\\begin{array}{ccc}\\lambda_{X}(x)=0 (\\forall x\\in \\hat{\\mathbb{C}}); & \\lambda_{Y}(x)= 0 (\\forall x\\in \\hat{\\mathbb{C}}); & \\lambda_{X\\cup Y}(x)=\\left\\{\\begin{array}{ll}1& x\\in [0,1]\\\\ 0 & otherwise.\\end{array}\\right.\n\\end{array}\n\\]\nNotice that $Y\\cup X+1$ is also a Peano compactum, although $Y\\cap X+1$ has uncountably many components. Thus $\\lambda_{X+1}(x)=\\lambda_{Y}(x)=\\lambda_{Y\\cup X+1}(x)=0$ for all $x\\in \\hat{\\mathbb{C}}$. \\end{exam}\n\n\n\n\n\n\\noindent\n{\\bf Acknowledgement}. The authors are grateful to Dr. 
Yi Yang at Sun Yat-sen University, for valuable discussions during the period of his phd study.\n\n\\bibliographystyle{plain}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzbjgd b/data_all_eng_slimpj/shuffled/split2/finalzzbjgd new file mode 100644 index 0000000000000000000000000000000000000000..ddcfb4a4987e02b6740c50e50a28c739feca9fe3 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzbjgd @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nAfter tremendous success in task automation, machine learning has recently been used to accelerate expert development. In the field of medical research, new tools to efficiently exploit knowledge database or to accelerate simulations by orders of magnitude can be of great aid. In that matter, the problem of drug design is a particularly challenging but important problem: Before a compound is commercialized, it is proposed as a candidate and will have to pass an extensive battery of tests and certifications on animals and humans. In current modern pipelines, only one candidate in ten thousand will make its way to the market, explaining the development cost (between 1 and 3 billion of dollars) and time (10 to 15 years) needed for a single new drug.\n\nWith a large quantity of freely available data, many recent works aiming at improving this ratio have been proposed by the community. Searching for a good molecule requires two things: a way to score its quality, and a method to search the ensemble of reasonable candidates. The former was addressed by supervised algorithms, trained to predict many molecular properties of interest, that could be efficiently used to accelerate the prediction while increasing its accuracy \\citep{gilmer2017neural, wu2018moleculenet}. Notably, the use of graph neural networks seems to be particularly adapted to these tasks \\citep{Hu2020, wu2018moleculenet}. Although the latter has also been explored extensively by different methods, some remaining recurring issues still limit their practical use. In particular, \\citet{gao2020synthesizability} have shown that molecules generated by deep learning models are often unrealistic, and cannot be synthesized. \n\nMolecular data can be represented in many forms, but in this work we focus on molecules represented as graphs, with atoms seen as nodes and chemical bonds seen as edges. In literature, three main approaches can be found. \nFirstly, a simple representation learning method can be used to produce an embedding of molecules in a smooth space, in which standard optimization methods can be used, such as bayesian optimization \\citep{jin2018junction}. This approach relies heavily on the quality of the embedding space, limiting the optimization performance. A second one is to explicitly train a model to produce molecules with given properties \\citep{jin2018learning}. However, this requires a large quantity of data specific to the target properties, limiting the usefulness of such methods for the elaboration of drugs with \\emph{new} effects. Finally, reinforcement learning (RL) methods have proven effective as a pure optimization method \\citep{zhou2019optimization, you2018graph, shi2020graphaf}. Most works often add a component to enforce realism regarding a dataset, such as a discriminator or pretraining, as it is often easier for the model to learn the specifics of the score function, leading to the generation of unrealistic molecules. 
Although one can argue that proposing new ways to optimize a score can be interesting, these models are difficult to use in practice.\n\nIn this work, we propose a different approach to enforce molecule realism, combining machine learning and traditional importance sampling. More precisely, we train a graph autoregressive generative model to learn the distribution of reasonable molecules on the ZINC dataset \citep{irwin2005zinc}. We then alter a simple Metropolis sampling algorithm to propose molecules which are both optimized and realistic. The resulting method, which we refer to as Learned Realism Sampling (LRS), does not need score-specific data, produces realistic molecules when RL algorithms do not, and improves performance on all our baselines in the task of molecule \emph{improvement} on a variety of different scores.\n\n\n\section{Related work}\n\label{related}\n\n\subsection*{Molecule generation}\nThis work focuses mainly on graph representations of molecules.\nMolecular graphs do not have a well-defined node ordering and their ensemble lacks a topology, making the problem of molecular graph generation difficult in most deep learning frameworks. Three main approaches can be found in the literature. First, a standard generative framework such as a Variational Autoencoder \citep{Kingma2013VAE} can be used to generate the graph adjacency matrix \citep{simonovsky2018graphvae}. Although simple, these methods suffer from the fact that they are not node permutation invariant. A second popular approach relies on the junction tree representation \citep{jin2018junction, jin2018learning}. Briefly, it consists of heuristics that represent the molecule as a tree, i.e., that remove its cycles. A tree is considerably simpler than a molecular graph, and recurrent methods can be efficiently applied to it. Finally, autoregressive models can be used to generate a graph node-by-node and edge-by-edge \citep{popova2019molecularrnn, shi2020graphaf, you2018graph, zhou2019optimization, li2018multi-objective}. This has the advantage of breaking the symmetries introduced by the incomplete ordering of the nodes.\n\nMore broadly used than graph representations, SMILES and more recently SELFIES \citep{krenn2020self} are text-based representations of molecules, on which standard sequence models can be used. The main drawback of this approach is that neighboring atoms in the molecule can be far away in the representation. In this regard, we highlight the work of \citet{alperstein2019all}, which solved the problem by using multiple SMILES representations of the molecule. Here, we also consider the work of \cite[SMILES LSTM]{segler2018generating} in section \ref{section: unconstrained optimisation} as another baseline.\n\n\subsection*{Optimization}\nA first approach for property optimization is to perform representation learning to embed molecules as vectors in a smooth space. Standard algorithms such as Bayesian optimization \citep[JTVAE]{jin2018junction}, gradient descent \citep[All SMILES]{alperstein2019all} or hill climbing \citep[SMILES LSTM]{segler2018generating} can then be used on those representations. 
Although efficient in principle, these algorithms are limited by the quality of the representation as well as by the ability of the model to reconstruct the original molecule from it.\nA second approach is to train the model in a \\emph{domain transfer} framework, that is to train the model to take a molecule with low score as input and give a similar molecule with high score as output \\citep[VJTNN or VJTNN+GAN]{jin2018learning}. These models can be effective, although they do not actually optimize properties. Notice that both these approaches rely heavily on the availability of labelled data. \n\nIn order to overcome this last issue, a standard approach is to train a model to directly maximize a score with reinforcement learning. These algorithms are very efficient for the optimization itself. Methods to enforce molecule realism regarding a dataset are added, such as pretraining \\citep[MRNN]{popova2019molecularrnn}\\citep[GraphAF]{shi2020graphaf} or a discriminator \\citep[GCPN]{you2018graph}. However, despite these efforts, they usually generate unrealistic samples, as highlighted in this work.\n\nFinally, there is a proposition of an hybrid approach between machine learning and other algorithms. Most similar to our work, we highlight two recent models based on Markov Chain Monte Carlo (MCMC). The MARS model \\citep{xie2021mars} trains and generates molecules simultaneously, adding better candidates to its training set. The generation procedure shares a lot of similarity with our work. Although the model produces impressive result in optimization, it does not attempt to enforce realism and we do not use it as a baseline. The MIMOSA model \\citep{fu2020mimosa} performs MCMC and directly models the transition probability with a pretrained neural network, as opposed to this work, where we model a dataset distribution. \n\n\n\n\n\n\n\n\\section{Method}\n\\subsection{Graph generation with an auto-regressive model}\n\\label{section:armodel}\n\nWe consider a dataset representing a distribution $\\mathcal{Q}(G)$ over molecular graphs G that we wish to approximate. We first represent a graph $G$ in a constructive fashion, as a sequence $s$ of vectors which can either represent the addition of an atom or of a chemical bond, as detailed in section \\ref{section:sequence definition}. The resulting distribution $\\mathcal{Q}(s)$ is then approximated by an autoregressive distribution $\\mathcal{P}(s)$ parameterized with neural networks, as detailed in section \\ref{section: nn}.\n\n\\subsubsection{Graphs as sequences}\n\\label{section:sequence definition}\nWe consider a molecule, seen as a complete graph $G(\\mathcal{V}, \\mathcal{E})$ where $\\mathcal{V} = 1 .. N$ is the set of nodes and $\\mathcal{E}\\subset \\mathcal{V}\\times\\mathcal{V}$ is the set of \\emph{undirected} edges, $x_v \\in \\{1..D\\}$ is the label of the node $v\\in \\mathcal{V}$ and $e_{ij} \\in \\{1..E\\}$ is the label of the edge $(i, j) \\in \\mathcal{E}$. \n\nThis graph can be represented as a sequence $s$ of vectors $V_k$ that can represent either a node or an edge. This sequence is constructed sequentially by adding each node label $x_i$ followed by all the edge labels between this node and the previously added ones $e_{ij}$:\n\\begin{equation*}\n s = \\left(V_k\\right)_{k=1..T} = \\left(x_0, x_1, e_{01}, x_2, e_{02}, e_{12}, x_3, ...\\right)\n\\end{equation*}\nwhere $T$ is the length of the sequence. 
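For concreteness, the following short sketch illustrates this construction; it is only an illustration, and the label encodings as well as the \texttt{NO\_EDGE} and end-of-sequence symbols are placeholders rather than identifiers taken from our implementation.
\begin{verbatim}
# Illustrative sketch of the graph-to-sequence construction described
# above (not the released code). Node labels x[j] and edge labels
# e[(i, j)] with i < j are small integers.
NO_EDGE, END_TOKEN = 0, "end"

def graph_to_sequence(x, e):
    s = []
    for j, label in enumerate(x):
        s.append(("node", label))          # add node j
        for i in range(j):                 # edges to previously added nodes
            s.append(("edge", i, j, e.get((i, j), NO_EDGE)))
    s.append(("node", END_TOKEN))          # end-of-sequence token
    return s

# Propane, a chain of three carbons with single bonds, yields the
# sequence (C, C, single, C, no edge, single, end) used as an
# illustration below. Here the atomic number stands in for the label.
CARBON, SINGLE = 6, 1
propane = graph_to_sequence([CARBON, CARBON, CARBON],
                            {(0, 1): SINGLE, (1, 2): SINGLE})
\end{verbatim}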
\nFor practical reasons, we define all sequences to end with a particular node type vector acting as an \\emph{end sequence} token. \nAs an illustration, the representation of propane, ie a chain of three carbons with single bonds, could be $s=$ (\\emph{carbon, carbon, single bond, carbon, no edge, single bond, end token}). Any sequence $s_k = \\left(V_u\\right)_{u=1..k-1}$ defines a valid subgraph $G_{s_k}$. \n\nDifferent sequences can represent the same molecule. Typically, the choice of the node indices $\\mathcal{V}$ is arbitrary but changing the node indices will change the sequence s. We note $seqs(G)$ the ensemble of sequences allowed by our algorithm that represent graph $G$.\n\nFrom a distribution over graphs $\\mathcal{Q}(G)$ we can then trivially obtain a distribution on sequences $\\mathcal{Q}(s)=\\mathbb{E}_{G\\sim\\mathcal{Q}(G)}\\mathcal{Q}(s|G)$ where $\\mathcal{Q}(s|G)$ is uniform over $seqs(G)$. Notice that to any sequence corresponds only one graph, so that ${\\mathcal{Q}(s) = \\frac{\\mathcal{Q}(G)}{|seqs(G)|}}$.\n\n\\subsubsection{Auto-regressive model}\n\\label{section: nn}\n\nA learned distribution over sequences $\\mathcal{P}(s)$ is then modelled in an auto-regressive fashion:\n\n\\begin{equation}\n \\mathcal{P}(s=\\left(V_k\\right)_{k=1..T}) = \\prod_k \\mathcal{P}_\\theta(V_k|G_{s_k})\n\\end{equation}\nwhere $\\mathcal{P}_\\theta$ is a graph neural network defined below, with parameter $\\theta$.\n\nAs there is only one graph per sequence, the corresponding distribution on graphs is simply $\\mathcal{P}(G) = \\sum_{s\\in seqs(G)} \\mathcal{P}(s)$.\n\nWe explicitly model $\\mathcal{P}_\\theta(V_k|G_{s_k})$ as a categorical distribution whose probability vector is the output of a graph neural network. \nFirst, node embeddings $h_i$ are produced by $L$ layers of convolution. As in \\citet{shi2020graphaf}, we use an edge sensitive version of the standard GCN convolution:\n\n\\begin{equation}\n\\begin{aligned}\n\\label{eq:conv}\n h_i^{(l)} &= \\sum_{e=1}^E ReLU\\left(W_e h_i^{(l-1)} + V_e\\sum_{j \\in \\mathcal{N}_e(i)} \\frac{h_j^{(l)}}{\\sqrt{d_i^e d_j^e}} + U_e\\right)\n\\end{aligned}\n\\end{equation}\nwhere $U_e$, $V_e$ and $W_e$ are learnable weight matrices, $\\mathcal{N}_e(i) = \\left\\{j \/ (i,j)\\in E \\text{ and } e_{ij} = e\\right\\}$ is the set of neighbours of $i$ via an edge labelled $e$ and $d^e_i=|\\mathcal{N}_e(i)|$ is the degree of node $i$ in the graph containing only edges labelled $e$. \nWe set $h_i^{(0)}$ as the one hot embedding of $x_i$.\n\nA global embedding is then defined by global addition pooling, ie $H = \\sum_i h_i^{(l)}$.\n\nFinally, two independent MLPs with \\emph{softmax} activation are used to produce probability vectors for node and edge distributions:\n\n\\begin{equation}\n\\label{eq:mlp}\n\\begin{aligned}\n\\mathcal{P}(V_k=x_i|G_k) &= Cat(MLP^{node}(H)) \\\\\n\\mathcal{P}(V_k=e_{ij}|G_k) &= Cat(MLP^{edge}(H, h_i^{(l)}, h_j^{(l)}))\n\\end{aligned}\n\\end{equation}\nwhere $Cat(.)$ is the categorical distribution with probabilities defined by the outputs of the network.\n\nThe network is then trained by minimizing the cross-entropy loss of each decision over the dataset.\n\n\\subsubsection{Further improvements and modifications}\n\n\n\\nlparagraph{Conditioning the network on molecule size}\n\\label{section:size}\nIn the next sections, we use among others a version of the model conditioned on molecule size \\footnote{Additional care must be taken in training such network, as detailed in supplementary materials}. 
\nIn order to do so, we redefine the convolution step in eq. \\ref{eq:conv} as:\n\n\\begin{equation}\n h_i^{(l)} = \\widehat{h}_i^{(l)} + C * \\frac{s}{S}\n\\end{equation}\nwhere $s$ is the final molecule size, $S$ is the maximal allowed molecule size, $C^{(l)}$ is a learned vector, and $\\widehat{h}_i^{(l)}$ is the original definition in eq. \\ref{eq:conv}.\n\n\n\\nlparagraph{Limiting the length of sequences}\n\\label{section:seq}\nThe length of the sequences is the major limitation of auto-regressive methods in term of time efficiency.\nMoreover, most $V_k$ correspond to $e_{ij}=\\text{\\emph{no edge}}$, which seems like a waste of resources. \nA common approach is to always present the nodes ordered as a breadth first search(BFS) traversal of the graph, without repetition and rooted randomly. \nIn that setting, there exists an upper limit $B \\ge |i-j|$ depending on the maximal width of the BFS tree in the dataset. \nIn the ZINC dataset, $B=12$. We can then omit all $e_{ij}$ if $|i-j|> B$ in $s$.\nNotice that the same advantage occurs if the nodes are instead presented in the reversed order of a BFS exploration, meaning that the last explored node is presented first in the sequence. \nAs opposed to the former case, in the latter, the last generated node is uniformly sampled among all the nodes. \n\nThese two choices correspond to two definitions of $seqs(G)$. The difference between them will be exploited in section \\ref{section:mcmc}.\n\n\\nlparagraph{Labelling the nodes for better cycle detection}\nA common issue in graph generation is the occurrence of unrealistic cycles, never seen in the dataset. \nWe believe this is due to the fact that atoms are anonymous, so that two atoms in a similar environment will have similar embeddings. Hence, the atom embedding cannot capture information about the distance to another specific atom. \nIn order to partially solve the problem, we add a random label to each node to enable the model to distinguish between close pair of nodes, that have interacted with each other through convolutions, and far-away pairs. \nWe implement that by redefining the initial node embedding: $h_i^{(0)} = (F(x_i), \\epsilon)$ where $F(x_i)$ is the one hot embedding of $x_i$ and $\\epsilon$ is a random normal vector of fixed dimension. \nWe found empirically that, without completely solving the problem, this improved the distribution of cycles in generated molecules.\n\n\\nlparagraph{Enforcing validity}\n\\label{section:validity}\nA great advantage of auto-regressive models is that we can easily ensure that the maximal valency of each atom is respected. \nIn order to do so, we simply set the probability of edges that would break the valency constraint to zero during generation. \nIf the nodes are presented in standard BFS order, we can also ensure that the molecule remains connected during the generation process, by stopping the generation if a new node has no real bond to the rest of the molecule.\n\n\\subsection{Direct optimization in molecule manifold}\n\\label{section:mcmc}\nIn order to optimize molecule properties we use the Metropolis Hastings algorithm, with a slight adaptation in order to bias the simulated distribution, centered around the criterion to optimize, toward the learned distribution, which defines implicitly the criterion of realism. We refer to that method as Learned Realism Sampling (LRS).\n\n\\subsubsection{Algorithm}\nFirst, we define a conditional transition probability $\\mathcal{T}(G_1|G_0)$ from graph $G_0$ to graph $G_1$. 
\nIntuitively, $\\mathcal{T}$ is a distribution over the neighborhood of $G_0$ in the ensemble of all graphs, which is not defined. \nMoreover, in order to ensure the realism of the proposal $G_1$, we need to sample from the learned distribution $\\mathcal{P}$. \nIn order to satisfy both conditions, we simply resample the last steps of the generative algorithm. \nMore precisely, we start by sampling uniformly a generation sequence $s \\in seqs(G_0)$ of length $M$, and remove the last elements corresponding to $\\delta n\\in\\mathbb{N}$ node additions, yielding a shorter sequence that we call $s*$. We then rerun the generative model, starting from the intermediate graph $G_{s*}$. \n\nThen, given a score function $sc$ and temperature $T$, we define an acceptance probability $\\mathcal{R}$ as:\n\n\\begin{equation}\n\\begin{aligned}\n\\mathcal{R}(G_1|G_0) &= \\min\\left(\\exp\\left(\\frac{sc(G_1) - sc(G_0)}{T}\\right), 1\\right)\n\\end{aligned}\n\\label{eq:acceptance}\n\\end{equation}\n\nFrom those two distributions and an initial graph $G_0$, we iteratively produce new graphs similarly to the Metropolis Hastings algorithm, as summed up in algorithm \\ref{algo}:\n\n\\begin{algorithm}[h]\n\\begin{enumerate}\n\\item Sample a proposal graph $G^{proposal}_{n+1} \\sim \\mathcal{T}(G_{n+1}|G_n)$\n\\item Accept the proposal graph with probability $\\mathcal{R}(G^{proposal}_{n+1}, G_n)$\n\\item If the proposal is accepted, set $G_{n+1} = G^{proposal}_{n+1}$. Else, set $G_{n+1}=G_n$.\n\\end{enumerate}\n\\caption{Learned Realism Sampling}\n\\label{algo}\n\\end{algorithm}\n\nNotice that this is \\emph{not} Metropolis Hastings algorithm as $\\mathcal{T}$ is not symmetric, which should require a correction in $r$. Hence the simulated distribution is \\emph{not} proportional to $F$. \n\n\\begin{lemma}\\ \\\\\nGiven a joint distribution $\\mathcal{P}(s, G)$ over $\\mathcal{S}\\times\\mathcal{G}$, if the conditional distribution $\\mathcal{P}(s|G)$ is the uniform distribution over $seqs(G)$, then algorithm \\ref{algo} is equivalent to Metropolis Hastings sampling for a distribution $\\pi(G) \\propto F(G)\\mathcal{P}(G)$.\n\\label{lemma}\n\\end{lemma}\nThe proof is given in supplementary materials. \nNotice that the condition is fulfilled by the dataset distribution $\\mathcal{Q}$, as detailed in \\ref{section:sequence definition}. The result is only exact in the limit where the learned distribution $\\mathcal{P}$ perfectly matches $\\mathcal{Q}$.\nThis lemma should however describe the general tendency of the algorithm even if the approximation is imperfect, as there is no consistent bias during training. \n\nAs a result, algorithm \\ref{algo} produces graphs which both have high scores and are likely with regards to the learned distribution $\\mathcal{P}$. \nThe temperature parameter can be used to tune the importance of both those aspects or to perform annealing.\n\nA great advantage of the method is that the learned distribution is completely independent from the score calculation. \nTypically, $\\mathcal{P}$ can be learned on large public datasets, like ZINC, while the score can be learned on scarce data, using very simple models, or even on no data at all if an analytic solution exists. \nNotice that many models optimize the score by learning, in particular reinforcement learning ones. There is then a risk that some specificity of the score is learned at the expense of neglecting the dataset statistics. 
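For concreteness, one step of algorithm \ref{algo} can be sketched as follows; \texttt{model.sample\_sequence}, \texttt{model.truncate} and \texttt{model.complete} are placeholder names for the routines of the learned autoregressive model, and the default parameter values are arbitrary, so this is a sketch of the procedure rather than the released implementation.
\begin{verbatim}
import math, random

def lrs_step(G, sc, model, T, delta_n):
    # Proposal: sample a generation sequence of G uniformly, drop the
    # last delta_n node additions and let the learned model regrow the
    # molecule from the intermediate graph G_{s*}.
    s = model.sample_sequence(G)
    s_star = model.truncate(s, delta_n)
    G_prop = model.complete(s_star)
    # Acceptance probability: min(1, exp((sc(G_prop) - sc(G)) / T)).
    diff = sc(G_prop) - sc(G)
    accept = 1.0 if diff >= 0 else math.exp(diff / T)
    return G_prop if random.random() < accept else G

def lrs(G0, sc, model, T=1.0, delta_n=2, n_steps=1000):
    # Iterate the step above and keep the best visited candidate.
    G, best = G0, G0
    for _ in range(n_steps):
        G = lrs_step(G, sc, model, T, delta_n)
        if sc(G) > sc(best):
            best = G
    return best
\end{verbatim}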
This is not the case of our model as the learning phase is only performed prior to optimization.\n\n\\subsubsection{Further improvements and variations}\n\n\\nlparagraph{Controlling the sampled graph size}\nIf the initial graph $G_0$ of size $N$ has few nodes compared to the median size of the training dataset, then the graph $G_1\\sim\\mathcal{T}(G_1|G_0)$ is likely to be much larger than $G_0$, breaking the intuition that $G_1$ should be similar to $G_0$. \nTo correct that effect, a model conditioned on size can be used. We remove $\\delta n_r$ nodes and condition the output size to be $N - \\delta n_r + \\delta n_a$. We sample both quantities $\\delta n_r$ and $\\delta n_a$ from a Poisson distribution with parameter $\\lambda$.\n\n\\nlparagraph{Sampling the modified region uniformly}\nAs detailed in section \\ref{section:seq}, the nodes can be presented in different orders. \nFollowing \\citet{shi2020graphaf}, our basic model presents the nodes in a BFS order to improve efficiency. The removed atoms are the ones visited last. Thus, the probability to be removed is not the same among all nodes. For example, in a long carbon chain, the atoms in the middle will almost never be removed.\n\nIn order to circumvent that effect and without loss of efficiency, the nodes can be presented in reversed BFS order, so that the root is presented last. \nBy doing so, the first removed node is sampled uniformly among the graph. \nNotice that if $\\delta n_r > 1$, this modification is not enough to ensure that nodes have a uniform probability to be removed, as more connected nodes will be selected more frequently. \nStill, we find that this correction is enough for substantial improvement while remaining very simple.\n\n\\section{Experiments}\nWe compare our model to multiple state-of-the-art models described in more details in section \\ref{related}.\n\n\\subsection{Distribution Learning}\n\n\\begin{table}[t]\n\\caption{Distribution learning performance of all the baselines. Validity(Val.), Novelty(Nov.), Unicity(Uni.), Kl, FCD scores are presented. ($\\uparrow$) and ($\\downarrow$) indicates respectively that higher or lower is better. VJTNN and MIMOSA are missing from this table, as they cannot be trained only for distribution learning. All models have been retrained using the official released code, except for MRNN which is not available. 
}\n\\label{results:distribution learning}\n\\vskip 0.15in\n\\centering\n\\begin{tabularx}{\\linewidth}{@{}lXXXXX@{}}\n\\toprule\n~& Val.($\\uparrow$) & Nov.($\\uparrow$) & Uni.($\\uparrow$) & KL($\\downarrow$) & FCD($\\downarrow$)\\\\\n \\midrule\nJTVAE & 1.00 & 1.0 & 1.000 & 0.078 & 1.46\\\\\nGCPN & 0.20 & 1.0 & 1.000 & 0.604 & 12.49\\\\\nMRNN & 0.65 & 1.0 & 0.999 & - & -\\\\\nGraphAF & 0.68 &1.0 & 0.991 & 0.2294 &10.06\\\\\nSMILES LSTM & 0.87 & 0.999 & 1.000 & 0.178 & 0.515\\\\\n \\midrule\nAR & 0.97 & 1.0 & 1.000 & 0.060 & 1.48\\\\\nAR-SC & 0.97& 1.0 & 1.000 & 0.058 & 1.50\\\\\nAR-RBFS & 0.88 & 1.0 & 1.000 & 0.102 & 2.83\\\\\n \\bottomrule\n\\end{tabularx}\n\\end{table}\n\n\\newcommand{\\textwidth}{\\textwidth}\n\\newcommand{0.35}{0.35}\n\\newcommand{3}{3}\n\n\\begin{figure*}[htb]\n\\centering{\n\\begin{subfigure}[b]{0.3\\textwidth}\n\\def\\molsize}\\input{ours_reversed_0.pdf_tex}{3\\textwidth}\n\\scalebox{0.35}{\\input{basic_2x2_compressed.pdf_tex}}\n\\caption{AR}\n\\end{subfigure}\n\\hfill\n\\begin{subfigure}[b]{0.3\\textwidth}\n\\def\\molsize}\\input{ours_reversed_0.pdf_tex}{3\\textwidth}\n\\scalebox{0.35}{\\input{sc_2x2_compressed.pdf_tex}}\n\\caption{AR-SC}\n\\end{subfigure}\n\\hfill\n\\begin{subfigure}[b]{0.3\\textwidth}\n\\def\\molsize}\\input{ours_reversed_0.pdf_tex}{3\\textwidth}\n\\scalebox{0.35}{\\input{rbfs_2x2_compressed.pdf_tex}}\n\\caption{AR-RBFS}\n\\end{subfigure}\n}\n\\caption{Samples from the different versions of our model. No cherrypicking.}\n\\label{figure:generation}\n\\end{figure*}\n\n\n\n\n\nOur method begins by training the autoregressive models described above to learn the dataset distribution. \nThis is obviously a critical step, as the subsequent algorithm will produce samples biased toward that distribution. \n\nWe train three versions of our autoregressive model:\n\\begin{itemize}\n\\item The basic model described in \\ref{section:armodel}, simply referred to as AR.\n\\item A model conditioned on size as described in section \\ref{section:size}, referred to as AR-SC\n\\item A model conditioned on size, and with nodes presented in reversed BFS order, as in section \\ref{section:seq}, referred to as AR-RBFS\n\\end{itemize}\nThe model is trained on the ZINC250K dataset from \\cite{irwin2005zinc}.\nImplementation details can be found in supplementary materials, such as input format and hyperparameters.\n\nThe model is compared to state-of-the-art baselines using multiple metrics, all detailed in appendix \\ref{metrics definition}.\nWe use the standard validity, novelty and uniqueness metrics, measured for a set of 10,000 molecules. \nFollowing \\citet{brown2019guacamol}, we also provide two other metrics: the Kullback-Leibler divergence between the distributions of the generated molecules and the dataset, evaluated on many molecular descriptors. The Fr\u00e9chet ChemNet Distance (FCD) \\citep{preuer2018frechet} is a molecular version of the Fr\u00e9chet Inception Distance. \nFinally, we provide images of generated molecules in figure \\ref{figure:generation}.\n\n\nThe results are displayed in table \\ref{results:distribution learning}, showing that our model achieves state-of-the-art performance, and generally better than reinforcement learning based methods. \nThe RBFS model achieves slightly lower validity, KL and FCD, which is accountable to its more complex task as it must also ensure molecule connectivity.\n\n\n\\subsection{Score optimization}\nWe then use the three models from the previous section to perform our proposed LRS methods. 
Our method using the AR-X model is referred to as LRS-X. In the first section, we demonstrate that optimization can produce irrealistic molecules. In the second section, we show that although our model has the additional limitation to produce realistic molecules, it can still perform as well or better than state-of-the-art methods in the task of molecule improvement. \n\n\\subsubsection{Unsafe optimization can lead to generation of unrealistic molecules}\n\\label{section: unconstrained optimisation}\n\\newcommand{8cm}{8cm}\n\\newcommand{0.25}{0.25}\n\\newcommand{0.2}{0.2}\n\\begin{figure*}[htb]\n\\begin{subtable}[b]{\\textwidth}\n\\begin{tabularx}{\\textwidth}{@{}cXccXc@{}}\n (4.52) & & GCPN (7.98) & JTVAE (5.30) & & LRS (5.42)\\\\\n\\scalebox{0.25}{\\def\\molsize}\\input{ours_reversed_0.pdf_tex}{8cm}\\input{DB_0.pdf_tex}} & & \\scalebox{0.25}{\\def\\molsize}\\input{ours_reversed_0.pdf_tex}{8cm}\\input{GCPN_0.pdf_tex}} & \\scalebox{0.25}{\\def\\molsize}\\input{ours_reversed_0.pdf_tex}{8cm}\\input{JTVAE_0.pdf_tex}} & & \\scalebox{0.25}{\\def\\molsize}\\input{ours_reversed_0.pdf_tex}{8cm}\\input{ours_basic_0.pdf_tex}}\\\\\n(4.23) & & GraphAF (12.23) & All SMILES (12.34) & & LRS-SC (5.90)\\\\\n\n\\scalebox{0.25}{\\def\\molsize}\\input{ours_reversed_0.pdf_tex}{8cm}\\input{DB_1.pdf_tex}} & & \\scalebox{0.25}{\\def\\molsize}\\input{ours_reversed_0.pdf_tex}{8cm}\\input{GraphAF_0.pdf_tex}} & \\scalebox{0.25}{\\def\\molsize}\\input{ours_reversed_0.pdf_tex}{8cm}\\input{all_smiles_0.pdf_tex}} & & \\scalebox{0.25}{\\def\\molsize}\\input{ours_reversed_0.pdf_tex}{8cm}\\input{ours_size_1.pdf_tex}}\\\\\n(4.22) & & MRNN (8.63) & SMILES LSTM (11.87) & & LRS-RBFS (6.98)\\\\\n\n\\scalebox{0.25}{\\def\\molsize}\\input{ours_reversed_0.pdf_tex}{8cm}\\input{DB_2.pdf_tex}} & & \\scalebox{0.25}{\\def\\molsize}\\input{ours_reversed_0.pdf_tex}{8cm}\\input{MRNN_0.pdf_tex}} & \\scalebox{0.25}{\\def\\molsize}\\input{ours_reversed_0.pdf_tex}{8cm}\\input{smiles_lstm_0.pdf_tex}} & & \\scalebox{0.25}{\\def\\molsize}\\input{ours_reversed_0.pdf_tex}{8cm}\\input{ours_reversed_0.pdf_tex}}\\\\\n(a) Dataset molecules & & \\multicolumn{2}{c}{(b) Baselines} & & (c) Ours \\\\\n\n\\end{tabularx}\n\\end{subtable}\n\\caption{Candidates with largest NPLogP from the dataset (a), or sampled from the baselines (b) and our model (c).}\n\\label{figure:high NPLogP}\n\\end{figure*}\n\nA standard task is to generate molecules with the largest score possible. A common proof-of-principle score is the normalized penalized water octanol partition logP coefficient (NPlogP). This score happens to be easy to maximize, if no realism constraint is required. For example, a very long carbon chain, although unlikely in the training dataset, has a very high score. We show that many models will disregard the dataset statistics in order to optimize the score.\n\nIn general, it is hard to provide a metric, or even to evaluate, the realism of a molecule regarding a dataset. In this particular case, the molecules obtained from a model that disregarded its training set are particularly odd, so that we can attempt to quantify their quality. As in literature, we provide the top 3 molecules generated by each model. To estimate their realism, we provide their mean log-likelihood in feature space: First, we use a set of property descriptors as for the KL metric in previous section. Second, we use the ChemNet features. Details can be found in appendix. Results are grouped in table \\ref{results:property optimization}. 
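As background, and as an assumption about the recipe commonly used in this literature rather than a definition taken from this paper, the penalized logP score is usually computed with RDKit as the octanol-water logP minus a synthetic-accessibility score and a penalty for rings with more than six atoms; the normalized variant further standardizes each term with ZINC statistics.
\begin{verbatim}
# Hedged sketch of the (un-normalized) penalized logP score; the exact
# normalization constants used for NPlogP are omitted.
import os, sys
from rdkit import Chem, RDConfig
from rdkit.Chem import Descriptors

sys.path.append(os.path.join(RDConfig.RDContribDir, "SA_Score"))
import sascorer  # synthetic-accessibility scorer from RDKit's Contrib

def penalized_logp(smiles):
    mol = Chem.MolFromSmiles(smiles)
    log_p = Descriptors.MolLogP(mol)
    sa = sascorer.calculateScore(mol)
    rings = mol.GetRingInfo().AtomRings()
    largest_ring = max((len(r) for r in rings), default=0)
    cycle_penalty = max(largest_ring - 6, 0)
    return log_p - sa - cycle_penalty

# A long carbon chain already gets a high value, which is why the raw
# score can be maximized by unrealistic molecules.
penalized_logp("C" * 30)
\end{verbatim}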
Finally, we provide images of the top candidate for each model in figure \\ref{figure:high NPLogP}.\n\nFirst, we notice that \\emph{all} models produce NPlogP scores that are considerably above what can be found in the dataset, which is already an alarming sign. We also notice that most models produce extremely unlikely molecules. Indeed, as can be seen on figure \\ref{figure:high NPLogP}, most models only learned to repeat the same pattern many times. On the other hand, both JTVAE and our models produce more reasonable candidates (with some doubt concerning the descriptor log likelihood of LRS-SC and LRS-RBFS). We conclude that our model is indeed efficiently able to maximize a criterion while better enforcing realism of the generated candidates.\n\n\n\n\n\n\\begin{table}[hb]\n\\caption{Top three performances of NPLogP optimization for all models and log-likelihood against descriptors and ChemNet features.}\n\\label{results:property optimization}\n\\vskip 0.15in\n\\centering\n\\begin{tabularx}{\\linewidth}{@{}lXXXXX@{}}\n \\toprule\n & \\multicolumn{3}{c}{NPLogP ($\\uparrow$)} & \\multicolumn{2}{c}{Log-Likelihood ($\\uparrow$)}\\\\ \n \\cmidrule(r){2-4} \\cmidrule(l){5-6}\n & 1st & 2nd & 3rd & Descriptors & Chemnet Feat.\\\\\n \\midrule\nDataset & 4.52 & 4.23 & 4.22 & -37 & 291\\\\\n \\hline\n JTVAE & 5.30 & 4.93 & 4.49 & -38 & 283\\\\\n GCPN & 7.98 & 7.85 & 7.80 & -101 & -226 \\\\\n MRNN & 8.63 & 6.90 & 4.73 & -44 & -125\\\\\n GraphAF & 12.23 & 11.29 & 11.05 & -286 & 130\\\\\n All-SMILES & 12.34 & 12.16 & 12.05 & -210 & -333\\\\\n \\small{SMILES} LSTM & 11.87 & 11.84 & 11.70 & -164 & -171\\\\\n \\midrule\n LRS & 5.42 & 5.22 & 5.11 & -36 & 427\\\\\n LRS-SC & 5.90& 5.90 & 5.83 & -47 & 284\\\\\n LRS-RBFS & 6.98 & 6.92 & 6.69 & -51 & 293\\\\\n \\bottomrule\n\\end{tabularx}\n\\end{table}\n\n\n\n\\subsubsection{Molecule improvement}\n\n\\begin{table}[t]\n\\caption{NPlogP score improvement for a given similarity threshold $\\delta$. Results for JTVAE are taken from \\citet{jin2018learning}.}\n\\label{results:molecule improvement NPlogP}\n\\vskip 0.15in\n\\centering\n\\begin{tabularx}{\\linewidth}{@{}lXXX@{}}\n \\toprule\n & \\multicolumn{3}{c}{NPlogP}\\\\\n \\cmidrule(l){2-4}\n & $\\delta = 0.2$ & $\\delta = 0.4$ & $\\delta = 0.6$\\\\\n \\midrule\n JTVAE & - & $1.03 \\pm 1.39$ & $0.28 \\pm 0.79$ \\\\\n VJTNN & - & $3.55 \\pm 1.67$ & $2.33 \\pm 1.24$ \\\\\n GCPN & $4.12 \\pm 1.19$& $2.49 \\pm 1.30$ & $0.79 \\pm 0.63$ \\\\\n GraphAF & $5.00 \\pm 1.37$& $3.72 \\pm 1.19$ & $1.94 \\pm 1.00$ \\\\\n MIMOSA & $3.48 \\pm 1.41$& $3.44 \\pm 1.41$ & $\\mathbf{3.24 \\pm 1.48}$ \\\\\n \\midrule\n LRS & $4.68 \\pm 1.48$ & $3.63 \\pm 1.23$ & $2.29 \\pm 1.12$ \\\\\n LRS-SC & $5.49 \\pm 1.37$ & $4.14 \\pm 1.08$& $2.81 \\pm 0.92$ \\\\\n LRS-RBFS & $\\mathbf{6.23 \\pm 1.28}$ & $\\mathbf{4.44 \\pm 1.15}$ & $2.80 \\pm 1.09$ \\\\\n \\bottomrule\n\\end{tabularx}\n\\end{table}\n\nWe then evaluate our model in the context of molecule improvement. Given an initial molecule $m_0$ and a threshold $\\delta$, the objective is to produce a molecule $m^*$ with maximal score so that the similarity $sim(m^*, m_0)$ between $m^*$ and $m_0$ is larger than $\\delta$. As in our baselines, the similarity is defined as the Tanimoto similarity between the Morgan fingerprints with radius of 2 of $m^*$ and $m_0$. We simply set the score of a molecule with $sim(m^*, m_0) < \\delta$ to be $-\\infty$. \n\nAs a first task, we adopt the same setup as most previous works. 
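The similarity-constrained objective described above can be written compactly; the sketch below uses RDKit Morgan fingerprints of radius 2 and Tanimoto similarity, with score_fn standing in for NPlogP, QED or the DRD2 predictor (the function and variable names are illustrative assumptions, not our implementation).
\begin{verbatim}
# Sketch of the constrained objective: the score is -infinity whenever the
# Tanimoto similarity between Morgan fingerprints (radius 2) of the candidate
# m* and the initial molecule m0 falls below the threshold delta.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def constrained_score(smiles_candidate, smiles_initial, score_fn, delta=0.4):
    m_star = Chem.MolFromSmiles(smiles_candidate)
    m_0 = Chem.MolFromSmiles(smiles_initial)
    if m_star is None:
        return float("-inf")                   # invalid candidates are rejected
    fp_star = AllChem.GetMorganFingerprintAsBitVect(m_star, 2, nBits=2048)
    fp_0 = AllChem.GetMorganFingerprintAsBitVect(m_0, 2, nBits=2048)
    sim = DataStructs.TanimotoSimilarity(fp_star, fp_0)
    return score_fn(m_star) if sim >= delta else float("-inf")
\end{verbatim}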
More precisely, we optimize the NPlogP scores of 800 molecules with initially low score in ZINC test set, and report the average score improvement for different values of $\\delta$. \nThe results are presented in table \\ref{results:molecule improvement NPlogP}.\nOur model outperforms all baselines by a large margin, except MIMOSA in the case $\\delta=0.6$.\n\n\n\n\\begin{table}[t]\n\\caption{QED and DRD2 score improvement and success for a similarity threshold $\\delta = 0.4$.}\n\\label{results:molecule improvement QED and DRD2}\n\\begin{tabularx}{\\linewidth}{@{}lXXXX@{}}\n \\toprule\n & \\multicolumn{2}{c}{QED}&\\multicolumn{2}{c}{DRD2}\\\\\n \\cmidrule(l){2-3}\n \\cmidrule(l){4-5}\n & Improvement & Success& Improvement & Success\\\\\n \\cmidrule(l){1-3}\n \\cmidrule(l){4-5}\nVJTNN + GAN & $0.11(0.08)$ & $79.3\\%$& $0.61(0.39)$&$\\mathbf{97.3\\%}$\\\\\nMIMOSA & $0.10(0.05)$&$16.3\\%$& $0.28(0.30)$&$25.8\\%$\\\\\n \\cmidrule(l){1-3}\n \\cmidrule(l){4-5}\nLRS-SC& $0.17(0.04)$& $80.0\\% $&$0.73(0.31)$&$79.1\\%$\\\\\nLRS-RBFS& $\\mathbf{0.18(0.03)}$ & $\\mathbf{94.5\\%}$ &$\\mathbf{0.76(0.26)}$&$83\\%$\\\\\n \\bottomrule\n\\end{tabularx}\n\\end{table}\n\nThen, following \\citet{jin2018learning}, we optimize the QED score and the DRD2 score, which is the affinity with dopamine receptor of type 2, as predicted by a model from \\citet{olivecrona2017molecular}, and report the average score improvement and success rate. In the former case, we optimize a set of 800 molecules with QED scores between 0.7 and 0.8. An optimization is considered a success if a candidate with a score greater than 0.9 is found. In the latter case, we optimize a set of 1000 molecules with DRD2 scores lower than 0.05. An optimization is considered successful if a candidate with a score above 0.5 is found. For those experiments, we use a threshold $\\delta = 0.4$. The results are presented in table \\ref{results:molecule improvement QED and DRD2}. In general, our method produces larger improvements.\n\n\\section{Conclusion}\n\nIn this work, we proposed a method for molecular property optimization that only produces realistic candidates with respect to a given dataset. In order to do so, we start by learning the dataset distribution using a generative graph autoregressive model. Based on this learned model, we then perform sampling akin to Metropolis Hastings, implicitly forcing the exploration to focus on regions with higher probability under the training distribution.\n\nWe demonstrate that the generated molecules are visually varied and realistic, compared to all reinforcement learning based baselines. Furthermore, our model outperforms all baselines in the task of molecule \\emph{improvement}, showing that enforcing realism is not only desirable but also beneficial. \n\nNotably, the optimization step in our model is not based on learning, so that no data specific to the optimized score is required, which is essential for practical applications where data is scarce.\n\nThese results prove that the combination of distribution learning and traditional importance sampling is promising to improve current molecule design methods. Other than Markov Chain Monte Carlo methods, like ours, we believe that similar ideas could be promising when applied to genetic algorithms, which are prominent in modern pipelines. 
We leave this perspective to future work.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \\label{sec:intro}\nThroughout the years,\nthe use of predictive computational models has become standard practice\nin many professional fields such as logistics, economics or R\\&D\n\\cite{generic0, generic1, generic2, generic3, generic4}, with computational\nchemistry being no exception.\nPredictive models in this field often approximate\nthe potential energy surface (PES) and derived properties for a given chemical system,\nand can be broadly categorized based on their level of theory:\nQuantum mechanical (QM) approaches\nsuch as wave function or density functional theory\nexplicitly model the electronic structure,\ngenerally resulting in a high accuracy and broad applicability.\nHowever, explicit electronic treatments come with a high computational price tag,\nmaking calculations unfeasible for larger systems.\nThis limitation can be overcome by introducing more empiricism to the model\non the electronic (\\textit{e.g. } tight binding), atomic (\\textit{e.g. } molecular mechanics) or \nmolecular (\\textit{e.g. } coarse-graining)\nlevel.\nSuch empirical models attempt to strike the balance between speed and prediction accuracy by approximating the PES description through the introduction of parameters.\nProminent examples include approximate functionals in\ndensity functional theory (DFT) \\cite{becke3, lyp, b3lyp, wB97XD},\ndensity functional tight binding (DFTB) \\cite{dftb1, dftb2, dftb3, xtb1, xtb2},\nmachine learning (ML) potentials \\cite{behler, schnet, ani1x, pinn},\nand force fields (FF). \\cite{amber, charmm, reax1, reax2}\n\nWhile some of the best empirical models can closely approximate higher-level QM theories\nat only a fraction of the computational time \\cite{ani1ccx, gfn2xtb},\ntheir quality strongly varies with different sets of parameters\n\\cite{anna,ogolem,forcebalance,potfit,garfield}.\nAdditionally, many empirical models can only deliver accurate results for a comparably limited chemical space,\neither due to their functional form or parameters being specific to certain (combinations of) elements.\nThese limitations can lead to the existence of multiple parameter sets\nbased on the desired application, \\textit{e.g. } ReaxFF has distinct families of combustion or condensed phase parameterizations. \\cite{reax_npj}\nSuch lack of general parameters gives rise to the research field of parameter fitting\n\\cite{fitting1, fitting2, fitting3, fitting4, anna},\nwhere the task is to find an optimal parameter set, given training data\nconstructed from chemical systems and their properties of interest.\nAlthough the fitting process can be an appealing solution to the above shortcomings,\nits practical implementation remains hardly accessible to the broader audience\nand instead is almost exclusively carried out by specialized research groups.\nIn our experience, the majority of researchers,\nalthough being interested in individual parameter fitting,\nare discouraged by the high barrier that comes with it.\nThe main reason for this being a lack of generalization and transparency:\nTraining data often comes in a variety of formats, optimizers expect a different input all together and the format in which parameters are stored is specific to each method. 
\\cite{reax_manual, forcebalance_manual, ogolem_manual, posmat}\nThe combination of these often results in works that can hardly be comprehended or reproduced by third parties.\nIn an effort to address the above issues, we introduce the ParAMS scripting package for Python. \nThe following section briefly summarizes the architecture of ParAMS and we refer to the documentation for further technical details. \\cite{doc}\nThe Results section demonstrates how the package can be used to i) generate density functional-based tight binding (DFTB) two-body repulsive potentials for an ionic material, and ii) reparameterize a ReaxFF reactive force field \\cite{reax1, reax2} for organic disulfides.\nStep-by-step Jupyter notebooks for the two case studies are provided as supporting information, as well as on GitHub \\cite{si}.\nAdditional application examples are available in the package's documentation. \\cite{doc}\nThe final section concludes with a summary and an outlook on future work.\n\n\n\\section{Implementation}\nParAMS follows a modular package structure\nwith well-defined application programming interfaces (APIs),\nwhich allows components to be treated independently.\nThis is essential for future development, as individual sub-modules can\nbe easily worked on and extended.\nWe describe the main components and their functionality below.\nFor a mathematical description of the functionality as well as \nadditional explanation of the syntax, please refer to section S1 of the supporting information.\n\n\\textbf{Job Collection and Data Set}\nclasses are responsible for the input \/ output (IO) of relevant data.\nIn the context of ParAMS, these are collections of chemical systems, properties and settings alongside optional metadata.\nA Job Collection clearly defines job entries by combining systems and settings.\nThe Data Set defines which properties of a job are relevant to the optimization and stores\nthe reference values of all entries in a vector $\\bm{y}$.\nReference values can be added from any source: experimental \/ external results, or a high-level calculation.\nTo ensure reproducibility and ease-of-use, ParAMS makes use of the\nYAML data-serialization format \\cite{yaml} as the default for all IO operations.\n\n\\textbf{Extractors} tell ParAMS how to \nextract a property of interest $P$ from a calculated job.\nTechnically, they are small standalone Python modules that read the native\n(\\textit{e.g. } stream, text or binary file) output of the Amsterdam Modeling Suite\\cite{ams, scm} into Python variables.\nExamples of implemented extractors include:\ninteratomic distances, valence angles, dihedral angles, atomic charges, reaction energies,\nlinear transit energy profiles, lattice vectors, bulk moduli, atomic forces, stress tensors, Hessian matrices and vibrational frequencies.\nNew extractors are easily written by the user,\neffectively allowing any property that can be calculated with AMS\nto be fitted within the scope of ParAMS.\nExtractors also support more elaborate cases that require additional processing before a comparison. 
This is for example needed when computing the minimal root-mean-square deviation of atomic positions.\n\n\\textbf{Loss Functions} implement various metrics that describe the distance between two vectors.\nIn the context of parameter fitting, a vector consisting of all reference values \n$\\bm{y}$ has to be compared to the predictions vector $\\bm{\\hat{y}}$,\nas generated given a specific set of parameters,\nin order to measure the quality of the fit.\nImplemented metrics are least absolute error (LAE), mean absolute error (MAE),\nroot-mean-square error (RMSE) and residual sum of squares (RSS).\nAdditionally, user-defined metrics are supported.\n\n\\textbf{Optimizers} provide a unified interface to a variety of optimization algorithms.\nCurrently, the following are supported:\nCovariance Matrix Adaptation Evolution Strategy (CMA-ES) \\cite{cma1,cma2,cma_py},\nAdaptive Rate Monte Carlo (ARMC) \\cite{armc} and\noptimizers available through the Nevergrad \\cite{nevergrad} and SciPy \\cite{scipy} packages. \nTo guarantee parameterization support for a wider range of models,\nthe current version of ParAMS is designed to work with gradient-free optimization algorithms only.\n\n\\textbf{Parameter interfaces} translate the parameter vector $\\bm{x}$ into the native\nformat of the empirical model (\\textit{e.g. } a file on disk).\nAny existing parameter interface can be parameterized.\nAt the time of writing, ParAMS supports interfaces to\nReaxFF \\cite{reax_ADF}, SCC-DFTB repulsive potentials, GFN1-xTB \\cite{xtb1}, and Lennard-Jones potentials.\n\n\\textbf{Callbacks} allow the interaction with a parameterization at runtime.\nSuch interactions can be progress loggers, timeouts, early stopping criteria or plotting functions \\cite{matplotlib}.\n\nThe \\textbf{main script} is a command line interface to ParAMS for users who do not wish\nto spend much time writing their own parameterization scripts.\nIt allows the setup and execution of the most common tasks through a configuration file.\n\nFigure \\ref{fig:loop} shows the general parameterization loop and highlights the main input-output relationships.\n\\begin{figure}\n \\centering\n \\includegraphics[width=.4\\textwidth]{fig\/flowchart.png}\n \\caption{\n A schematic representation of the ParAMS parameterization loop.\n Highlighting the interplay between the three main components:\n Optimizer, Job Collection and Data Set.\n Every parameter set suggested by the optimizer produces different job results,\n which are evaluated by the Data Set based on a unique \\textit{jobID}.\n The results are then compared with the reference values,\n weighted and combined into the loss function value.\n A more detailed description is available in the package documentation.\n }\n \\label{fig:loop}\n\\end{figure}\nThe implementation allows users already experienced in other data science packages to prepare inputs and process results in a familiar way.\nTechniques like training- and validation set splitting,\ncross-validation, early stopping, batching or outlier detection are supported out of the box.\nParameter constraints can be further used to limit the search space of a problem.\nIn addition to regular box constraints,\nusers can express inequality constraints involving multiple parameters (\\textit{e.g. 
} $x_1 \\leq x_2$), as will be demonstrated in the ReaxFF example application.\nParAMS implements two levels of parallelism.\nMultiple parameter vectors and multiple jobs per vector can be evaluated at the same time,\nresulting in workloads that can be distributed effectively based on the training set size and optimization algorithm.\nMoreover, since the signatures of all classes in a submodule are the same,\ndifferent models and optimizers can be effortlessly compared and deployed (\\textit{e.g. } comparison of different levels of theory in Tab. \\ref{tab:repulsive}).\nDaily regression tests are performed to guarantee an error-free functionality of the package.\n\n\n\n\\section{Results}\n\\subsection{DFTB two-body repulsive potential parameterization}\\label{sec:dftb}\nHere, we illustrate how the ParAMS package can be used to train a two-body SCC-DFTB repulsive potential \\cite{dftb1}. \nFor simplicity, we use ZnO, for which several previous parameterizations already exist in the literature \\cite{znopt}. We will approximately follow the approach from Ref. \\citenum{znopt}, in which the authors reused the electronic parameters from the znorg-0-1 DFTB parameter set \\cite{znorg} and reparametrized the two-body Zn-O repulsive potential to reference data calculated for the wurtzite and rocksalt polymorphs of ZnO. With the znorg-0-1 parameters, the rocksalt polymorph is predicted to be more stable than the wurtzite polymorph, but in experiments and DFT calculations, the opposite is true, which motivates the reparameterization.\n\nThe entire code needed for the parameterization is provided as supporting information \\cite{si}. It fits the Zn-O pairwise repulsive potential $V^\\text{rep}$ as a tapered double exponential function of the form\n\\begin{equation}\n\\begin{split}\n &V^\\mathrm{rep}(r) = \\\\ &[A_0\\exp(-A_1 r) + A_2\\exp(-A_3 r)]f^\\mathrm{cut}(r) \n\\end{split}\n\\end{equation}\nwhere $A_0$, $A_1$, $A_2$, $A_3$ are the parameters and $f^\\text{cut}(r)$ is a tapering function of the form\n\\begin{equation}\n f^\\text{cut}(r) = \\frac{1}{2}\\left(\\cos(\\frac{\\pi r}{r_\\text{cut}})+1\\right)\n\\end{equation}\nwith the cutoff distance $r_\\text{cut} = 5.67$ bohr.\n\nWith ParAMS, it is possible to either directly define the reference values for the training set (for example, from literature values) or to automatically calculate them if no reference values have been given. Here, we illustrate the second approach, and perform the reference calculations using the periodic DFT code BAND\\cite{band} in the Amsterdam Modeling Suite\\cite{ams} (AMS). \n\nThe training set comprises the $a$ and $c$ lattice parameters of wurtzite ZnO, the bulk modulus $B_0$ of wurtzite ZnO, as well as the relative energies of the wurtzite and rocksalt polymorphs of ZnO,\n$\\Delta E = E_\\text{wurtzite}-E_\\text{rocksalt}$ (per ZnO formula unit). We do not need to specify the reference values themselves, as they will automatically be calculated by ParAMS.\n\nThe job collection contains two jobs: lattice optimizations of the wurtzite and rocksalt polymorphs. From these two jobs, the $a$, $c$, $B_0$, and $\\Delta E$ quantities can be extracted. For wurtzite, $B_0$ can be extracted by requesting that the elastic tensor be calculated at the end of the lattice optimization.\n\nThe reference DFT calculations were run with the PBE exchange-correlation functional, a triple-$\\zeta$ (TZP) basis set, and ``Good'' numerical quality (dense k-space and integration grids). 
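The repulsive potential and tapering function defined above are simple to evaluate on their own; the following standalone sketch (distances in bohr, with the potential taken to be zero beyond the cutoff) makes the functional form explicit and is not the DFTB engine implementation itself.
\begin{verbatim}
# Standalone evaluation of the tapered double-exponential repulsive potential
# V_rep(r) = [A0 exp(-A1 r) + A2 exp(-A3 r)] * f_cut(r), distances in bohr.
import numpy as np

R_CUT = 5.67  # bohr

def f_cut(r, r_cut=R_CUT):
    # Cosine tapering function; the potential is taken as zero beyond the cutoff.
    return np.where(r < r_cut, 0.5 * (np.cos(np.pi * r / r_cut) + 1.0), 0.0)

def v_rep(r, a0, a1, a2, a3):
    return (a0 * np.exp(-a1 * r) + a2 * np.exp(-a3 * r)) * f_cut(r)

# Example: evaluate the potential on a grid of Zn-O distances for a given
# parameter vector (a0, a1, a2, a3), e.g. during the fit.
r_grid = np.linspace(2.0, R_CUT, 100)
curve = v_rep(r_grid, 0.5, 1.0, 0.3, 0.4)   # placeholder parameter values
\end{verbatim}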
For the parametrized DFTB engine, a ``Good'' (dense) k-space grid was also used, since the results of lattice optimizations can be quite sensitive to the k-space grid.\n\n\nThe optimization was done with the Nelder-Mead algorithm \\cite{neldermead} from scipy, with a sum-of-squared-errors loss function. The smallest loss function value was obtained for $A_0 = 0.45$, $A_1 = 1.01$, $A_2 = 0.25$, $A_3 = 0.40$. The resulting repulsive potential is shown in Figure \\ref{fig:repulsive}. Typical Zn-O distances in wurtzite (``w\") and rocksalt (``rs\") are indicated with gray lines at 3.8 bohr and 4.1 bohr, respectively. With znorg-0-1 (red line), the repulsive potential decays very rapidly between the typical wurtzite and rocksalt distances. This decrease is not as pronounced with either the potential in this work (black line) or znopt (blue line). This affects the relative stability of the wurtzite and rocksalt ZnO polymorphs. \n\nTable \\ref{tab:repulsive} compares the resulting ZnO properties from the parameterization in this work to the DFT reference data, as well as previous DFTB ZnO parameterizations (note: znorg-0-1 was not parametrized to the DFT data in Table~\\ref{tab:repulsive}, and znopt was parametrized to also reproduce some adsorption energies). The repulsive potential in this work closely reproduces the wurtzite lattice parameters $a$ and $c$ and the relative energy $\\Delta E$, and provides a good estimate of the bulk modulus, compared to the DFT reference to which it was trained. \n\nWith ParAMS it is additionally possible to evaluate the loss function, and individual training set entries, using any of the engines in the Amsterdam Modeling Suite. For comparison, Table~\\ref{tab:repulsive} also gives the corresponding quantities for the UFF (Universal Force Field) engine, which performs significantly worse than the DFTB parameterizations for these quantities.\n\n\\begin{figure}[htb]\n \\centering\n \\includegraphics[width=0.5\\textwidth]{fig\/fig-repulsive.pdf}\n \\caption{\n ZnO parameterization data.\n (a-b) pictures of the unit cells of ZnO wurtzite and rocksalt polymorphs. Zn is shown in blue and O in red. (c) DFTB Zn-O pairwise repulsive potentials for the znorg-0-1 \\cite{znorg} (red), znopt \\cite{znopt} (blue), and refitted repulsive potential from this work (black). The gray lines mark typical Zn-O distances in the wurtzite (w) and rocksalt (rs) polymorphs, respectively.} \n \\label{fig:repulsive}\n\\end{figure}\n\n\\begin{table*}\n \\centering\n \\begin{tabular}{l|llll}\n \n & $a$ (\\AA) & $c$ (\\AA) & $B_0$ (GPa) & $\\Delta E$ (eV) \\\\\n \\hline\n DFT (this work) & 3.30 & 5.32 & 126 & $-$0.24 \\\\\n DFTB (this work) & 3.28 & 5.34 & 146 & $-$0.24 \\\\\n DFT \\cite{znopt} & 3.29 & 5.31 & 129 & $-$0.30 \\\\\n \n DFTB, znopt \\cite{znopt} & 3.21 & 5.25 & 161 & $-$0.32 \\\\\n DFTB, znorg-0-1 \\cite{znorg} & 3.29 & 5.38 & 161 & +0.14 \\\\\n UFF & 2.90 & 4.73 & 199 & $-$16.2 \\\\\n \\end{tabular}\n \\caption{Calculated ZnO wurtzite lattice constants $a$ and $c$, bulk modulus $B_0$, and energy relative to the rocksalt polymorph $\\Delta E = E_\\text{wurtzite}-E_\\text{rocsksalt}$ (per ZnO formula unit)}\n \\label{tab:repulsive}\n\\end{table*}\n\n\n\n\\subsection{ReaxFF parameterization} \\label{sec:application}\nReaxFF is another contemporary example of an empirical model. 
\\cite{reax1,reax2}.\nThis formalism has been applied to a wide range of chemical problems,\nand consequently has seen a lot of new parameter development\n\\cite{reax_RDX, reax_Silica, muellerhartke}\n(for a general overview of ReaxFF and its development, see Senftle \\textit{et al.}\\cite{reax_npj}).\nIn this section, we demonstrate the parameterization of ReaxFF with a training set\npreviously published by M\u00fcller and Hartke (MH) \\cite{muellerhartke}. The optimized\nparameter vector $\\bm{x}^*$, as found by MH, is called Mue2016.\nAn overview of the training set can be found in Table S1 of the supporting information.\nIt features a total of 231 geometries needed for the computation of 4875 chemical properties. Additionally, MH included a validation set to check for overfitting.\nExamples of three structures included in the data are depicted in Fig. S1 of the supporting information, showing cyclopentathione, diphenyl disulfide and \ndimethyl disulfide.\nPrior to the parameter optimization, we evaluate\nthe training and validation sets with the Mue2016 ReaxFF parameters and\nreport sum-of-squared-errors (SSE) losses of 14441 and 14451 respectively.\nNote that in the original publication, MH report a training set\nloss of 12400\\cite{muellerhartke}, while in a more recent work, Shchygol \\textit{et al. } calculate a loss of 16300\\cite{anna}.\nSuch differences are expected, because numerical instabilities inherent to ReaxFF and software improvements (mostly related to geometry optimization) may result in different optimized geometries. \\cite{anna, tapered_bond_orders}\n\nIn our setup, we use the covariance matrix adaptation evolution strategy (CMA-ES)\n\\cite{cma1, cma2, cma_py} as the optimization algorithm with Mue2016 as the initial point.\nCMA-ES is gradient-free, and relies on a population to sample\nnew parameter vectors from an adapted, n-dimensional Gaussian.\nIt does not require additional hyperparameters other than a population size and an initial\nwidth of the Gaussian distribution, $\\sigma$.\nHere, we use a population size of 36 and an initial $\\sigma$\nof 0.3.\nFurthermore, we limit the optimization to 24 hours and set up an \nearly stopping mechanism based on the validation set.\nThe optimization is set up to stop early only if there has been no\nimprovement in the validation set error for the last 6000 evaluations.\n\nRather than optimizing the same 87 parameters as MH,\nwe perform a one-dimensional scan on all\nparameters and select the 35 most sensitive with respect to the training set:\nAlthough Mue2016 is a set of 701 parameters in total,\nonly a subset of these significantly affects the overall cost function value.\nThis is for example the case when a model includes parameters for each chemical element (\\textit{e.g. } C, H, O),\nbut the total training set of systems $R$ can be constructed from fewer elements (\\textit{e.g. 
} C, H).\nIn such cases, the dimensionality of the problem can be reduced by scanning for a\nrelevant parameter subset which yields the biggest change\nin the cost function value.\nThe simplest setting, which we used in this case study, only modifies one parameter at a time to determine its influence on the objective function.\nIt is also possible to scan all parameter combinations, to discover coupling between parameters, albeit at a highly increased computational cost.\nOut of the 35 parameters selected this way, 16 have also been optimized by MH.\nWe list all optimized parameters in the provided Python notebooks\\cite{si}.\n\nParameter bounds are set to be relative to the initial values such that \n$\\bm{x}_\\pm = \\bm{x}_0 \\pm 0.2|\\bm{x}_0|$.\nIn addition to box constraints, ParAMS enables a definition of inequality constraints.\nAs the ReaxFF formalism works with bond orders, we limit the parameters responsible for the covalent radii of\n$\\sigma$, $\\pi$ and $\\pi\\pi$ bonds to\n$r_0^\\sigma \\geq r_0^\\pi \\geq r_0^{\\pi\\pi}$\nfor every atom and atom pair defined in the force field.\nThis approach effectively limits the search space and is available in combination with all optimizers.\n\nA summary of all settings is provided in Table S2 in the supporting information.\nTo compensate for the randomness of CMA-ES, we repeat the optimization\nset-up nine times.\nFor the best solution, we report improved training and validation set losses of 11877 and 5377 respectively.\nWe make this work's optimized parameter set available through the supporting information under the title MueParAMS.\nCorrelation plots between reference and predicted values for the\nnew parameters are presented in Figure \\ref{fig:corrplot}, showing very good agreement to the reference data.\nMoreover,\nFigure \\ref{fig:pes} compares the S-S dissociation curve of\ndiphenyl disulfide, as computed with Mue2016 and the new MueParAMS parameters, showing an improved agreement to the reference data for this case.\n\n\\begin{figure}\n \\includegraphics[width=.5\\textwidth]{fig\/corrplots.png}\n \\centering\n \\caption{\n Training data correlation plots. Showing\n (a,b) energy differences, (c,d) atomic forces, (e,f) atomic distances, (g,h) interatomic angles and (i,k) dihedral angles\n as calculated with the Mue2016 (left) and MueParAMS (right) parameters. Reaction energies and internal coordinates are compared after geometry optimization.\n X and Y axes depict reference and predicted values respectively.\n }\n \\label{fig:corrplot}\n\\end{figure}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=.45\\textwidth]{fig\/ssopscan.png}\n \\caption{\n S-S dissociation curve of diphenyl disulfide. Computed with the\n Mue2016 and MueParAMS ReaxFF parameterizations. For details about the reference data, see Ref. 
\\citenum{muellerhartke}.\n }\n \\label{fig:pes}\n\\end{figure}\n\n\n\\section{Summary and Outlook}\nWith ParAMS, we have presented a modern Python package, supporting versatile parameterization workflows with minimal effort.\nIts integration with the Amsterdam Modeling Suite adds a high amount of\nflexibility through the number of properties that can be fitted\nalongside the support for multiple codes when it comes to the model, optimization algorithm and reference data selection.\nFeatures such as highly customizable optimizations,\nsupport for multiple validation sets, or the\nintuitive processing of data\naim to make ParAMS accessible to both, advanced and less experienced users.\nAt the same time, developers can easily extend existing functionality.\nWe showed how an SCC-DFTB repulsive potential could easily be parameterized for the inorganic crystal ZnO. The reference data was calculated automatically using a DFT engine within AMS.\nThis example application also demonstrates how ParAMS can be used\nto compare the accuracy of different chemical simulation packages given the same\ntraining set.\nWe also demonstrated how the package can be used to easily process, set up and start a fitting procedure for ReaxFF.\nUsing previously published data by M\u00fcller and Hartke,\nwe were able to find parameters that produce a considerably lower error for the validation set\nwhile maintaining a similar accuracy in the training data.\nIn future, we hope to extend the number of empirical models that can be fitted with ParAMS\nand further improve the ease of use through the introduction of additional shortcut functions for training set building.\nWe also expect additions in other functionality such as optimization algorithms or extractors\nbased on user feedback and wishes as the project matures.\nThe package is included in all AMS releases since 2020.\n\n\\textbf{Supporting Information Available}\nPDF file with mathematical description of the optimization problem, summary of the reference data published by M\u00fcller and Hartke used in the ReaxFF example, settings used for the ReaxFF parameterization and visualization of some structures in the M\u00fcller and Hartke training set.\n\n\n\\textbf{Data and Software Availability}\nAll data needed to reproduce the examples is available\nat \\url{www.doi.org\/10.5281\/zenodo.4629706}.\nThe package's documentation is available at\n\\url{www.scm.com\/doc.trunk\/params}.\nParAMS is distributed with the Amsterdam Modeling Suite, for which a free trial can be requested at \\url{www.scm.com}.\n\n\n\\begin{acknowledgement}\nWe thank Micha\u0142 Handzlik for the initial design of the \nParAMS package and Dr.\\ Tom\u00e1\u0161 Trnka for the implementation of the AMSWorker interface,\nresulting in a considerable computational speed-up.\nThis project has received funding from the European Union's Horizon\n2020 research and innovation programme under grant agreement No 814143 (L.K. \\& T.V.), No 798129 (M.H).\nT.V. also acknowledges funding of the research board of Ghent University.\nR.R. and M.H. have received funding from the Netherlands Enterprise Agency (RVO) and Stimulus under the MIT R\\&D Collaboration programme, project number PROJ-02612.\nThe computational resources (Stevin Supercomputer Infrastructure) and services used in this work were provided by the VSC (Flemish Supercomputer Center), funded by Ghent University, FWO and the Flemish Government \u2013 department EWI.\n\\end{acknowledgement}\n\n\\textbf{Author Contributions}\nL.K. and R.R. 
developed the ParAMS Python package.\nL.K. and M.H. performed the case studies.\nL.K., M.H. and T.V. wrote the paper.\nT.V. oversaw the project.\nAll authors read and approved the final manuscript.\n\n\\textbf{Competing Interests}\nAuthors L.K., R.R. and M.H. were employed by the company Software for Chemistry and Materials (SCM). SCM develops and commercializes the Amsterdam Modeling Suite, of which ParAMS is a new module.\n\n\\footnotesize{\n\n\\section{The Parameterization Problem}\n\nThis section provides a mathematical framework for the parameterization problem. \nWe assume that the training data can be defined as a set of physico-chemical properties for a number of isolated or periodic systems.\nExamples for relevant properties are energy differences, nuclear gradients or system geometries.\nIn the context of ParAMS, we define an arbitrary property $P$ \nthat can be expressed as the output of a computational job.\nWhen working with multiple jobs as part of a training set, a job function\ncan be defined as\n\n\\begin{equation}\n \\label{eq:jobcollection} \n J(R_j, S_j, M) =\n \\left( R'_j, P_j^n\\right)\n \\quad \\forall\n \\; j\\in \\{1\\ldots N_\\text{job}\\},\n \\; n\\in \\{1\\ldots N_\\text{prop}(j)\\},\n\\end{equation}\n\ncalculating the output geometry $R'_j$ and all properties $P_j^n$ of a job $j$.\nThe input for every job in $J$ consists of the input geometry $R_j$, \nthe job settings $S_j$ (\\textit{e.g. } geometry optimization and frequencies)\nand the computational model $M$.\nNote that a parametric model is additionally a function of the parameter vector $\\bm{x}$,\nin which case the outputs of the above equation can be denoted with the hat operator\n(\\textit{i.e. } $ \\hat{R}'_j, \\hat{P}_j^n$),\nas to distinguish between reference properties and properties predicted by the parametric model.\nTraining set entries can be constructed, for example, from a linear combination of multiple properties\n\\begin{align}\n \\label{eq:dsentry}\n y_i &=\n \\sum_{k=1}^{N_\\text{lc}(i)} c_{i,k} P_{j(i,k)}^{n(i,k)}\n \\quad \\forall\\; i\\in \\{1\\ldots N_\\text{data}\\},\n\\end{align}\nwhere $c_{i,k}$ is the coefficient for term $k$ of training set entry $i$ and $N_\\text{lc}(i)$ is the total number of terms\nper entry.\nNon-linear combinations of properties to construct $y_i$ are also possible.\nSuch a formulation offers a high degree of flexibility for the construction of a training set.\nOne example is the combination of multiple system energies into one reaction energy.\nIt should be noted that a training set entry, as defined in Eq.\\ \\ref{eq:dsentry},\ndoes not have to originate from the results of computational jobs.\nThe reference value can instead be provided directly, making it easy to work with experimental or external data.\n\nWhile training set entries $\\bm{y}$ have to be defined only once,\ntheir predicted counterpart $\\bm{\\hat{y}}$ has to be re-calculated every time the model parameters\nchange.\nFor this purpose, we introduce a Data Set function\n\\begin{equation}\n \\label{eq:dataset}\n DS(\\bm{x} | \\bm{y}) = \\bm{y} - \\hat{\\bm{y}},\n\\end{equation}\nwhich extracts all properties needed for the calculation of $\\bm{\\hat{y}}$\nbased on a parameter set $\\bm{x}$ and returns the respective vector of residuals.\nA metric in the form of a loss function $L(\\bm{y} - \\hat{\\bm{y}}, \\bm{w})$\nis then applied to the residuals for a qualitative measure of how close\nreference and predicted values are.\nThe additional weights vector vector $\\bm{w}$ can be used to 
balance\npossibly different orders of magnitude in the data set or\nmake certain entries more relevant for the fitting process than others.\n\nFinally, the optimization algorithm can be defined as a function that minimizes $L$ with respect to the parameters\n\\begin{equation}\n \\label{eq:optimizer}\n O(\\bm{x}_0, L) = \n \\underset{\\bm{x}} {\\mathop{\\mathrm{arg\\,min}}}\\,\n L = \\bm{x}^*,\n\\end{equation}\nfinding an optimal solution $\\bm{x}^*$ from an initial point $\\bm{x}_0$.\n\n\n\n\n\n\\section{Additional Display Items}\n\\begin{table}\n\\centering\n\\caption{\nComposition of the reference data published by M\\\"uller and Hartke\\cite{muellerhartke},\nsplit by the computational tasks Single Point (SP) and Geometry Optimization (GO).\nFor each of the two sets, the upper part describes the chemical systems,\nwhile the lower breaks down the individual entries in the training and validation sets.\nNote that some entries might be a function of multiple chemical systems,\nmeaning that the sum of SP+GO is not necessarily equal to the total\nnumber of entries for that row (\\textit{cf. } Sec.\\ 3.1 in the main text).\n}\n\\begin{tabular}{lrrr}\n\n \\textbf{Training Set}&\n \\textbf{SP}&\n \\textbf{GO}&\n \\textbf{Total}\n \\\\\n \\hline\n \n Number of systems&\n 222&\n 9&\n 231\n \\\\\n \n Mean system size (atoms)&\n 6.6&\n 11.4&\n 6.8\n \\\\\n \n Std. dev. (atoms)&\n 2.9&\n 7.7&\n 3.3\n \\\\\n \n \n \\\\\n Total number of entries&\n 4620&\n 317&\n 4875\n \\\\\n \n Energies&\n 219&\n 62&\n 219\n \\\\\n \n Forces&\n 4401&\n 0&\n 4401\n \\\\\n \n Atomic distances&\n 0&\n 94&\n 94\n \\\\\n \n Angles&\n 0&\n 85&\n 85\n \\\\\n \n Dihedrals&\n 0&\n 76&\n 76\n \\\\\n \n\n \\\\\n \\textbf{Validation Set}&\n &\n &\n \\\\\n \\hline\n \n Number of systems&\n 200&\n 24&\n 224\n \\\\\n \n Mean system size (atoms)&\n 24.0&\n 12.7&\n 22.8\n \\\\\n \n Std. dev. 
(atoms)&\n 0.0&\n 5.9&\n 4.0\n \\\\\n\n \n \\\\ \n Total number of entries&\n 199&\n 771&\n 970\n \\\\\n \n Energies&\n 199&\n 0&\n 199\n \\\\\n \n Forces&\n &\n &\n 0\n \\\\\n \n Atomic distances&\n 0&\n 281&\n 281\n \\\\\n \n Angles&\n 0&\n 257&\n 257\n \\\\\n \n Dihedrals&\n 0&\n 233&\n 233\n\\label{tab:trainingset}\n\\end{tabular}\n\\end{table}\n\n\n\n\\begin{table}\n\\centering\n\\caption{Summary of relevant ParAMS settings used for the re-parameterization of Mue2016.}\n\\begin{tabular}{lr}\n \\textbf{Setting}&\n \\textbf{Value}\n \\\\\n \\hline\n\n Number of optimizations&\n 9\n \\\\\n \n Number of parameters to optimize&\n 35\n \\\\\n \n Lower \/ upper parameter bounds&\n $\\bm{x}_0 \\pm 0.2|\\bm{x}_0|$\n \\\\\n \n Optimization timeout&\n 24 hours\n \\\\\n \n CMA-ES population size&\n 36\n \\\\\n \n CMA-ES sigma&\n 0.3\n \\\\\n \n Loss function&\n sum of squared errors\n \\\\\n \n Early stopping patience&\n 6000 evaluations\n \\\\\n \n Constraints&\n $r_0^\\sigma \\geq r_0^\\pi$\n and\n $r_0^\\pi \\geq r_0^{\\pi\\pi}$\n\\label{tab:params_setup}\n\\end{tabular}\n\\end{table}\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=.45\\textwidth]{fig\/compounds.pdf}\n \\caption{\n From top to bottom: Example structures of\n cyclopentathione, diphenyl disulfide and dimethyl disulfide, \n containing S (yellow), C (black), and H (white),\n included in the data provided by MH.\n The fitted properties include bond distances, angles, relative\n energies and atomic forces.\n }\n \\label{fig:compounds}\n\\end{figure}\n\n\\newpage\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \\label{sec:intro}\n\nMilky Way (MW) satellite galaxies have long anchored our understanding of low-mass mass galaxy formation. Their number counts, spatial distributions, and structural properties are used to constrain dark matter cosmology on small scales \\citep[e.g.,][]{bullock2017, simon2019}. Their star formation histories (SFHs) and chemical content provide insight into cosmic reionization and the baryonic processes that uniquely shape low-mass galaxy evolution \\citep[e.g.,][]{kirby2011, brown2014, weisz2014a}. More recently, their orbital histories, as measured by the Hubble Space Telescope (HST) and Gaia, reveal the complex effects of central galaxies on the evolution of low-mass satellites \\citep[e.g.,][]{sohn2012, fritz2018, simon2018}.\n\nAt the same time, there is growing evidence that the MW satellites may not be representative of low-mass satellites in general. Compared to the MW, satellite systems throughout the local Universe show varying luminosity functions, stellar populations, quenching properties, and spatial configurations, often in excess of cosmic variance \\citep[e.g.,][]{mcconnachie2006, brasseur2011, Tollerud:2011wd, geha2017, muller2018, pawlowski2018, smercina2018, pawlowski2019}. Even in our nearest neighbor, M31, there are hints that the internal \\citep[e.g., kinematics, stellar content;][]{dacosta1996, collins2010, collins2014, martin2017} and global \\citep[e.g., ``plane of satellites'';][]{ibata2013, pawlowski2018} properties of M31 and MW satellites are different. Thus, it is unclear whether the fundamental insights established in MW satellites are applicable to all low-mass systems or stem from the specific accretion history of the MW.\n\nIn this Letter, we present the first uniform SFH measurements of many faint M31 satellites. 
We use sub-horizontal branch (HB) HST imaging, which has been presented in two previous papers in this series \\citep{martin2017, weisz2019a}, to measure their SFHs, and we compare our measurements to literature SFHs of MW satellites. Relative to the MW satellites, the SFHs we measure for M31 satellites have coarser age resolution, because the M31 CMDs do not reach the oldest main sequence turn off (MSTO). We will acquire oldest MSTO photometry for all known M31 satellites during HST Cycle 27 as part of HST-GO-15902 (PI D.~Weisz), which will strengthen the results we preview in this Letter.\n\n\n\\begin{figure*}[ht!]\n\\begin{center}\n\\includegraphics[scale=0.65]{m31_example_fit.png}\n\\caption{An example BaSTI-based SFH measurement using And~XXI. Panel (a) shows the observed Hess diagram. Panel (b) shows the best fit model Hess diagram for the BaSTI stellar evolution models. Panel (c) shows the residual significance diagram, i.e., (data-model)\/model, in units of standard deviation. The lack of systematic structures in Panel (c), i.e., large contiguous regions of black or white points, indicates a good fit. Panel (d) shows the cumulative SFH, i.e., the fraction of stars formed before a given epoch. The solid black line is the best fit SFH for the BaSTI models and the grey shaded envelope shows the total (random plus systematic) uncertainties for the 68\\% confidence interval. The thin colored lines are the best fit SFHs for different stellar evolution models. Overall, the shape of the SFH is similar between the models, but the ages of SFH features can shift by $\\sim 2$~Gyr owing to differences in the underlying stellar physics. The grey shaded region captures the 1-$\\sigma$ scatter in the different SFHs reasonably well.}\n\\label{fig:4panel}\n\\end{center}\n\\end{figure*}\n\n\\section{The Data} \\label{sec:data}\n\nThe observations and data reduction used in this program are described in detail in \\citet{martin2017}. Here, we provide a brief summary. \n\nThrough HST-GO-13699 (PI N.~Martin), we observed 16 faint M31 satellites with the Advanced Camera for Surveys (ACS) that had no previous HST imaging. Each galaxy was observed for a single orbit with equal integration times in the F606W and F814W filters. We added archival F606W and F814W ACS data of comparable depth for And~XVIII (HST-SNAP-13442; PI B.~Tully). We also added archival observations of And~XI, And~XII, and And~XIII (HST-GO-11084; PI D.~Zucker), which were taken in F606W and F814W with the Wide Field Planetary Camera 2 (WFPC2).\n\nIn total, our sample has 20 systems with $-12 \\lesssim M_V \\lesssim -6$, including 7 ultra-faint dwarf galaxies (UFDs, $M_{V} > -7.7$; \\citealt{simon2019}).\n\nFor each galaxy, we performed point spread function photometry with \\texttt{DOLPHOT}, a widely-used package for reducing observations of resolved stellar populations with HST-specific modules \\citep{dolphin2000b}. We adopted the \\texttt{DOLPHOT} parameters recommended in \\citet{williams2014}. The raw photometric catalogs were culled to include only good stars as described in \\citet{martin2017}. We ran $\\sim 10^5$ artificial star tests (ASTs) for each galaxy to quantify completeness and photometric uncertainties. The 50\\% completeness limit for a typical galaxy in our sample is $F606W \\sim 27.1$ and $F814W \\sim 26.2$.\n\nFigure \\ref{fig:4panel} illustrates the quality of our data by showing a Hess diagram, i.e., a binned CMD, for And~XXI ($M_{V} = -9.2$). 
The CMD shows a clear red giant branch (RGB), red clump (RC), and a predominantly red HB, as described in \\citet{martin2017}. The faint limit of the CMD is set to the 50\\% completeness limit of $F814W =26.3$, which is $\\sim$1.5 magnitudes fainter than the HB. \n\n\\section{Methodology} \\label{sec:methods}\n\nWe measure the SFHs of the 20 galaxies in our sample using \\texttt{MATCH} \\citep{dolphin2002}, a software package that forward models the CMD of a galaxy in order to measure its SFH as described in \\citet{weisz2014a}. Here, we provide a brief summary of the pertinent details. \n\n\nFor this analysis, we adopt a Kroupa IMF \\citep{kroupa2001}, a binary fraction of 0.35, HB-based distances for the ACS data \\citep{weisz2019a} and self-consistent TRGB-based distances for the WFPC2 data \\citep{weisz2014a}, and foreground extinction values from \\citet{schlafly2011}. \n\nWe fit each entire CMD with five different stellar evolution libraries: Dartmouth \\citep{dotter2008}, Padova \\citep{girardi2010}, PARSEC \\citep{bressan2012}, MIST \\citep{choi2016}, and BaSTI \\citep{hidalgo2018}. We find that for all CMDs analyzed in this paper the BaSTI 2018 models provide the best overall fits in terms of visual inspection of the residuals and through comparison of likelihood ratios between models. Thus, we adopt the BaSTI models for this paper. \n\nWe adopt a metallicity grid that ranges from $-2.3 \\le$ [M\/H] $\\le -0.5$ with a resolution of 0.1 dex and an age grid that ranges from $9.00 \\le \\log(t\/{\\rm yr}) \\le 10.15$ with a resolution of $\\log(t\/{\\rm yr}) = 0.05$ dex. We find that including ages younger than $\\log(t\/{\\rm yr}) =9.0$ did not change the SFHs (as these galaxies have no young populations) but increased the computational time. Thus, we simply exclude ages with $\\log(t\/{\\rm yr}) < 9.0$ from our CMD modeling.\n\nFinally, given that our CMDs do not reach the oldest MSTO, we follow \\citet{weisz2011a} and\n\\citet{weisz2014a} and adopt a prior on the age-metallicity relationship that requires the metallicity to increase monotonically with time, with a modest dispersion allowed at each age. This choice helps to mitigate some of the age-metallicity degeneracy on the RGB and RC. We compute random (which account for the finite number of stars on the a CMD) and systematic uncertainties (which are a proxy for uncertainties in the physics of the underlying stellar models) as described in \\citet{weisz2014a}. Finally, as detailed in \\S 3.6 and Figure 6 of \\citet{weisz2014a}, SFHs measured from CMDs that include the HB but do not include the oldest MSTO have larger systematic uncertainties, but are consistent with SFHs measured from oldest MSTO-depth CMDs.\n\n\nFigure \\ref{fig:4panel} illustrates the CMD modeling process. Panel (a) shows the observed Hess diagram of And~XXI and panel (b) shows the best fit model for the BaSTI stellar library. A visual comparison of the model and data indicates good overall agreement. Panel (c) is a residual significance diagram, i.e., (model-data)\/model, which quantifies the level of (dis)agreement. The majority of populated pixels are consistent within $\\sim$ 1-$\\sigma$, while only a handful of pixels are highly discrepant (i.e., $>$ 3-$\\sigma$). 
There are no signs of poorly fit regions of the CMD (e.g., large swaths of only black or white pixels), which indicates that the model is a good fit to the data.\n\n\\begin{figure*}[ht!]\n\\begin{center}\n\\includegraphics[scale=0.7]{m31_shallow_basti_sfhs_v2.png}\n\\caption{The cumulative SFHs of 20 faint M31 satellites ordered by increasing luminosity. The black solid line is the best fit BaSTI SFH. The grey and purple shaded envelopes reflect the 68\\% confidence intervals for the random and total uncertainties, respectively. Fainter M31 satellites generally form the bulk of their stellar mass at earlier time compared to the brighter systems. All galaxies appear to have quenching times between 3 and 9 Gyr ago. \\label{fig:sfhs}}\n\\end{center}\n\\end{figure*}\n\nPanel (d) shows the cumulative SFH of And~XXI, i.e., the fraction of total stellar mass formed prior to a given epoch. The solid black line is the best fit BaSTI solution and the grey shaded band reflects the 68\\% confidence interval of the total (i.e., random plus systematic) uncertainties. The random uncertainties are negligibly small compared to the systematic uncertainties. \n\nThe thin colored lines in panel (d) are the best fit SFHs from the other four stellar libraries. These SFHs are similar in shape to the BaSTI solution, though particlar features can be shifted by up to $\\sim 2$ Gyr, which is due to differences in the underlying stellar physics \\citep[e.g.,][]{Gallart:2005qy}. As intended, the scatter in the SFHs from different models is well-captured by the grey-shaded error envelope. \n\nThe small amount of apparent star formation $\\sim$ 1-3 Gyr ago is likely due to a handful of blue stragglers which occupy similar portions of the CMD as younger main sequence stars \\citep[e.g.,][]{monelli2012}. \n\n\n\\begin{figure}[ht!]\n\\begin{center}\n\\includegraphics[scale=0.7]{m31_mw_t50_t90_all_lines.png}\n\\caption{The lookback time (Gyr ago) at which 50\\% of the stellar mass formed ($\\tau_{50}$) versus the time at which 90\\% of the stellar mass formed ($\\tau_{90}$), i.e., the quenching time. The top panel includes the 20 M31 satellites from this paper and 6 from \\citet{skillman2017}. Points are color-coded by luminosity and their relative sizes reflect their half-light radii. The grey point indicates a size of 500~pc. The black lines illustrate a constant and exponentially declining SFHs. The bottom panels shows results from literature SFHs of MW satellites. The area enclosed by the blue dotted line contains half the M31 sample, but no MW satellites. The smaller uncertainties for the \\citet{skillman2017} M31 dSphs are indicative of what can be expected from the forthcoming cycle 27 observations. \\label{fig:t50_t90}}\n\\end{center}\n\\end{figure}\n\n\n\n\n\n\n\n\\section{Results and Discussion} \\label{sec:discussion}\n\nFigure \\ref{fig:sfhs} shows the cumulative SFHs of 20 faint M31 satellites plotted in order of increasing luminosity from upper left to lower right. The solid black lines are the best fit SFHs, while the grey and purple shaded envelopes reflect the 68\\% confidence intervals for the random and total (random plus systematic) uncertainties, respectively. These values are tabulated in Table \\ref{tab:sfhs}.\n\nThis figure reveals both a diversity of SFHs among the M31 faint satellite population and some broad trends. For example, galaxies with $M_V \\gtrsim -8.5$ tend to have formed $\\gtrsim 50$\\% of their stars prior to $\\sim 10-12$~Gyr ago, compared to 6-9~Gyr ago for more luminous systems. 
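The $\tau_{50}$ and $\tau_{90}$ values shown in Figure \ref{fig:t50_t90} (and listed in Table \ref{tab:sfhs}) follow from each cumulative SFH by interpolation; a short numerical sketch, in which the array names and sampling are placeholders, is given below.
\begin{verbatim}
# Sketch: lookback times at which 50% and 90% of the total stellar mass had
# formed, interpolated from a cumulative SFH. Arrays run from the oldest
# epoch to today, so cumulative_fraction rises monotonically from 0 to 1.
import numpy as np

def tau_fraction(lookback_gyr, cumulative_fraction, f):
    return float(np.interp(f, cumulative_fraction, lookback_gyr))

# tau_50 = tau_fraction(lookback, csfh, 0.5)
# tau_90 = tau_fraction(lookback, csfh, 0.9)   # quenching-time proxy
\end{verbatim}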
Interestingly, galaxies such as And~XXIX, Per~I, and Lac~I appear to have formed $\\lesssim$ 10\\% of their stellar mass prior to $\\sim 10$~Gyr ago, which is unusually low when put into context with our knowledge of LG dwarf galaxy SFHs \\citep[e.g.,][]{weisz2014a, gallart2015}. \n\nAnother interesting feature is the quenching times. That is, very few systems have either very early ($>$10-12 Gyr ago) or very late ($\\lesssim 3$~Gyr ago) quenching times. Instead, the vast majority of the systems stopped forming stars $\\sim 3-6$ Gyr ago, almost independent of luminosity. \n\nThe top panel of Figure \\ref{fig:t50_t90} consolidates the SFHs of the 20 faint M31 satellites from this paper and the 6 systems from \\citet{skillman2017} into a more digestible form. Here, we plot the quenching time\\footnote{Following the literature \\citep[e.g.,][]{weisz2015a, skillman2017}, we adopt the time at which 90\\% of the total stellar mass formed as a proxy for the quenching time to avoid ambiguity due to blue stragglers.} ($\\tau_{90}$) versus the time at which 50\\% of the total stellar mass formed ($\\tau_{50}$). Points are color-coded by luminosity, point sizes are proportional to half-light radius, and the error bars reflect the total (random plus systematic) uncertainties. To guide the eye, we overplot black lines that illustrate cases of a constant (solid) and exponentially declining SFHs ($\\tau_{SFH}=10$~Gyr, dashed lined; $\\tau_{SFH}=2$~Gyr, dot-dashed lined). \n\nThis plot shows several interesting trends. First, although there are several predominantly ancient galaxies (i.e., $\\tau_{50}$ $>12$~Gyr), there are very few systems with $\\tau_{90}$ $>12$~Gyr. Instead, the predominantly ancient systems have a range of $\\tau_{90}$\\ values that extend from 3 to 10 Gyr ago. This is particularly interesting in the context of reionization, in which the prevailing view is that the lowest-mass galaxies have star formation shutdown by reionization in the very early Universe \\citep[e.g.,][]{ bullock2000, benson2002, ricotti2005}. However, we caution against over-interpretation of this finding, as (i) we lack a complete census of UFDs around M31 and (ii) current SFHs are uncertain at the oldest ages.\n\nSecond, there are no galaxies that quenched within the last 3 Gyr, i.e., $\\tau_{90}$\\ $<3$~Gyr ago. Instead, most galaxies have quenching times concentrated at intermediate ages, with a notable grouping at $\\tau_{90}$$\\sim$ 3-6~Gyr ago. \n\nThird, there appears to be some degree of synchronicity in the quenching of some systems. Most notably, the over-density of M31 satellites at $\\tau_{50}$ $\\sim 6-9$~Gyr ago also all seem to have $\\tau_{90}$\\ $\\sim 3-6$~Gyr ago, with no clear trends in galaxy size or luminosity, in agreement with the results of the representative sample from \\citet{skillman2017}. \n\n\n\n\nFourth, compared to the over-plotted fiducial SFH models (i.e., constant SFH, exponentially declining) there are two distinct groups. Eight galaxies fall tightly on the $\\tau_{SFH}=2$~Gyr track while 15 are to the right of the constant SFH track (i.e., they have rising SFHs). Only three galaxies exist between these groups. \n\n\n\nThe age information provided by our SFHs may help model the complex formation and accretion history of the M31 and its satellites. 
The M31 halo hosts rich stellar substructures (e.g., streams, over-densities) that suggest an active history of mergers in M31 (see \\citealt{mcconnachie2018} and references therein) and its stellar halo and outer disk have large populations of intermediate age stars, as revealed by oldest MSTO-depth CMD analysis \\citep[e.g.,][]{brown2006a, bernard2015b}. Several models have posited a major merger between M31 and (what would have been) the third largest member of the LG $\\sim$2-4 Gyr ago \\citep[e.g.,][]{hammer2018, dsouza2018}. These models can qualitatively explain some observed features of M31, such as a global burst of star formation 2-4 Gyr ago \\citep[e.g.,][]{bernard2015b, williams2017} and the metal-rich inner halo. For example, \\citet{dsouza2018} hypothesize such an interaction between M31 and M32p (the putative progenitor of M32) could explain the metal-rich component of M31's halo and the unusually compact nature of M32. This model implies that M32p had a $M_{\\star} \\sim 2.5\\times 10^{10}$ $M_{\\odot}$ prior to its interaction with M31, making it the third largest member of the LG just a few Gyr ago. One implication of this scenario may be that the large number of satellites with $\\tau_{90}$ $\\sim$ 3-6~Gyr ago may have been environmentally quenched during the merger. A second speculative angle is that the dichotomy of SFHs in the top panel of Figure \\ref{fig:t50_t90}, i.e., rising SFHs vs exponentially declining, may be due to the presence of two difference satellite populations, i.e., one set from M31, the other from M32p. Though speculative, we use these examples to illustrate the potential of our data for deciphering the formation history of M31's halo and emphasize that more rigorous analysis is clearly warranted.\n\n\n\nWe also consider the relationship between our SFHs and sub-structures in the M31 system, e.g., the plane of satellites from \\citet{ibata2013}. We find no clear evidence for a correlation with membership in structures identified in \\citet{ibata2013} and \\citet{santossantos2019}. However, given the large uncertainties and unclear theoretical expectations between sub-structures and SFHs, the lack of a clear correlation is challenging to interpret.\n\nFigure \\ref{fig:t50_t90} also summarizes differences in the formation histories of M31 and MW satellites. In the bottom panel, we plot $\\tau_{90}$\\ \\emph{vs.} $\\tau_{50}$\\ for the MW satellites using literature SFHs \\citep[e.g.,][]{brown2014, weisz2015a} and global properties \\citet[e.g., luminosity, size;][]{mcconnachie2012}. \n\nIt is striking that the M31 and MW satellite populations do not share many similar trends. The M31 satellites fill out intermediate values of $\\tau_{50}$\\ and $\\tau_{90}$, i.e., $6 \\lesssim$ $\\tau_{50}$\\ $\\lesssim 12$~Gyr ago and $3 \\lesssim$ $\\tau_{90}$\\ $\\lesssim 6$~Gyr ago (the dotted blue box in Figure \\ref{fig:t50_t90}, whereas there are essentially no MW satellites in that range. In terms of quenching, the MW satellites Fornax, Carina, and Leo~I (galaxies located in the upper right region of the lower panel) all ceased star formation within the most recent $\\sim 1-3$~Gyr, whereas none of the M31 satellites did. The faintest MW satellites ($M_V \\gtrsim -7$) all quenched $\\gtrsim 12$~Gyr ago, presumably due to reionization \\cite[e.g.,][]{brown2014}, but some of the comparably faint M31 satellites appear to have more extended SFHs. \n\nThis may indicate that the evolution of the satellites are coupled to the accretion history of the host galaxy. 
By extension, it may be that the MW satellites do not cover the full range of intrinsic formation histories of low-mass galaxies.\n\n\\begin{deluxetable}{lcccc}\n\\tabletypesize{\\footnotesize}\n\\tablecaption{Summary statistics for SFHs of faint M31 satellites. Values of $\\tau_{50}$\\ and $\\tau_{90}$\\ are from the best fit SFHs measured in this paper. Error bars are the 68\\% confidence intervals for the total uncertainties (i.e., random plus systematic). Values for the galaxies below the horizontal lines are taken directly from \\citet{skillman2017}. }\n\\label{tab:sfhs}\n\\tablehead{\n\\colhead{Name} & \\colhead{$M_V$} & \n\\colhead{$r_h$} & \\colhead{$\\tau_{50}$} & \n\\colhead{$\\tau_{90}$} \\\\ \n\\colhead{} & \\colhead{(mag)} & \n\\colhead{(pc)} & \\colhead{(Gyr ago)} & \n\\colhead{(Gyr ago)} \\\\\n\\colhead{(1)} & \\colhead{(2)} & \n\\colhead{(3)} & \\colhead{(4)} & \n\\colhead{(5)} \n} \n\\startdata\nCas~III & $-12.6$ & 1640 & 7.9$_{-1.6}^{+1.6}$ & 4.1$_{-1.5}^{+2.5}$\\\\\nLac~I & $-11.5$ & 967 & 8.1$_{-1.7}^{+0.8}$ & 4.9$_{-1.7}^{+1.7}$\\\\\nCas~II & $-11.2$ & 275 & 9.8$_{-1.1}^{+3.4}$ & 7.2$_{-3.4}^{+2.8}$\\\\\nPer~I & $-10.2$ & 384 & 7.9$_{-1.8}^{+1.4}$ & 4.0$_{-1.6}^{+2.6}$\\\\\nAnd~XXIII & $-10.0$ & $1277$ & 6.8$_{-0.9}^{+1.9}$ & 5.1$_{-2.8}^{+1.5}$\\\\\nAnd~XXV & $-9.3$ & 679 & 8.7$_{-1.1}^{+2.8}$ & 5.8$_{-1.3}^{+2.6}$\\\\\nAnd~XXI & $-9.2$ & 1033 & 8.3$_{-1.9}^{+1.2}$ & 5.8$_{-2.5}^{+0.9}$\\\\\nAnd~XVIII & $-9.2$ & 262 & 8.5$_{-1.6}^{+2.0}$ & 4.6$_{-2.1}^{+1.7}$\\\\\nAnd~IX & $-9.0$ & 444 & 7.2$_{-0.3}^{+2.5}$ & 5.1$_{-2.0}^{+1.8}$\\\\\nAnd~XIV & $-8.6$ & 379 & 8.7$_{-0.4}^{+4.5}$ & 4.8$_{-0.7}^{+5.2}$\\\\\nAnd~XXIX & $-8.5$ & 397 & 7.6$_{-0.7}^{+3.1}$ & 5.2$_{-1.2}^{+2.2}$\\\\\nAnd~XVII & $-8.2$ & $339$ & 13.2$_{-0.3}^{+0.0}$ & 10.5$_{-5.0}^{+2.1}$\\\\\nAnd~XXIV & $-7.9$ & $579$ & 12.9$_{-3.3}^{+0.3}$ & 5.4$_{-3.1}^{+4.4}$\\\\\nAnd~X & $-7.5$ & 239 & 9.5$_{-0.2}^{+3.6}$ & 6.5$_{-2.1}^{+4.8}$\\\\\nAnd~XII & $-7.0$ & 420 & 12.9$_{-6.3}^{+0.3}$ & 3.4$_{-0.2}^{+2.6}$\\\\\nAnd~XXII & $-6.7$ & 253 & 11.5$_{-0.8}^{+2.0}$ & 6.8$_{-2.5}^{+5.8}$\\\\\nAnd~XX & $-6.7$ & 110 & 10.2$_{-0.9}^{+2.6}$ & 6.9$_{-2.1}^{+4.6}$\\\\\nAnd~XIII & $-6.5$ & $130$ & 9.1$_{-0.4}^{+4.1}$ & 6.5$_{-0.1}^{+3.5}$\\\\\nAnd~XI & $-6.3$ & 120 & 13.2$_{-0.3}^{+0.0}$ & 7.4$_{-1.4}^{+2.4}$\\\\\nAnd~XXVI & $-6.1$ & 228 & 12.9$_{-0.3}^{+0.3}$ & 9.1$_{-6.0}^{+2.9}$\\\\\n\\hline \n\\hline\nAnd~II & $-12.6$ & 965 & 11.5$_{-2.1}^{+0.8}$ & 6.3$_{-0.6}^{+0.5}$\\\\\nAnd~I & $-12.0$ & 815 & 12.6$_{-3.9}^{+0.3}$ & 7.4$_{-0.7}^{+0.9}$ \\\\\nAnd~III & $-10.1$ & 405 & 11.7$_{-1.3}^{+1.1}$ & 8.7$_{-0.6}^{+1.5}$\\\\\nAnd~XXVIII & $-8.8$ & 265 & 12.6$_{-0.8}^{+0.3}$ & 7.6$_{-0.3}^{+1.7}$\\\\\nAnd~XV & $-8.4$ & 230 & 12.9$_{-0.9}^{+0.3}$ & 9.3$_{-0.8}^{+3.3}$\\\\\nAnd~XVI & $-7.5$ & 130 & 9.8$_{-1.1}^{+1.4}$ & 5.9$_{-0.6}^{+0.4}$\n\\enddata\n\\end{deluxetable}\n\nThere are several caveats with the present analysis. First, while SFHs from MW satellites are all measured from CMDs that reach the oldest MSTO, our new M31 data are much shallower. Consequently, we are left with large uncertainties that may hide various trends in the data. Moreover, our SFHs are based primarily on the HB morphology, which is not as well an understood phase of stellar evolution as the MSTO \\citep[e.g.,][]{Gallart:2005qy}. However, we note that comparisons of SFHs measured from different depths (i.e., HB vs. oldest MSTO) generally show good agreement \\citep[e.g.,][]{weisz2014a} as previously described. 
We urge appropriate caution against over-interpreting this generation of M31 SFHs, particularly at ancient epochs. \nSecond, there are various selection effects that we have not explicitly considered. One is the size of the HST field of view relative to the size of a galaxy. In some cases, this can lead to $\\sim 1$~Gyr biases in the measured SFH relative to the true global SFH \\citep[e.g.,][]{graus2019b}. Another is the lack of many known UFDs in the M31 ecosystem. Detecting faint systems is quite challenging at the distance of M31, given the paucity of bright stars and the high level of contamination. \n\nDespite these challenges, we are optimistic about prospects of placing the M31 satellites onto equal observational footing with their MW counterparts. The 244 orbit cycle 27 HST Treasury program (HST-GO-15902; PI D.~Weisz) will obtain MSTO depth imaging across the entire M31 satellite system, which will significantly reduce the uncertainties on the SFHs and establish a first epoch for proper motion measurements. \n\nIn our view, among the most important next steps for M31 satellites is to identify UFDs around M31. If UFDs around M31 are found to have substantially extended SFHs, then our picture of how reionization affects low-mass galaxy formation, halo occupation, etc.\\ may fundamentally change. Finding and characterizing UFDs around M31 requires dedicated imaging and spectroscopic efforts, as well as the power of HST and\/or JWST for measuring their star formation and orbital histories. \n\n\\section*{Acknowledgements}\n\nSupport for HST program GO-13699 was provided by NASA through a grant from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555. These observations are associated with program HST-SNAP-13442 and HST-GO-13699. DRW acknowledges support from an Alfred P. Sloan Fellowship and an Alexander von Humboldt Fellowship. SMA is supported by the National Science Foundation Graduate Research Fellowship under Grant DGE 1752814. This research has made use of the NASA\/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. \n\n\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{{Introduction \\& Related Work}}\n The spatial degrees of freedom offered by multiple antenna arrays are a valuable interference mitigation resource. Advanced signal processing techniques are currently employed to boost the performance of the multi-antenna transmitters without compromising the complexity of single antenna receivers.\nThese beamforming (or equivalently precoding) techniques efficiently manage the co-channel interferences to achieve the targeted service requirements (Quality of Service--$\\mathrm{QoS}$ targets). 
As a result, the available spectrum can be aggressively reused towards increasing the system throughput.\n\nThe optimal downlink transmission strategy, in the sense of minimizing the total transmit power whilst guaranteeing specific $\\mathrm{QoS}$ targets at each user, was given in \\cite{Bengtsson2001,bengtsson1999}.\n{Therein, the tool of Semi-Definite Relaxation ($\\mathrm{SDR}$) reduced the non-convex quadratically constrained quadratic problem ($\\mathrm{QCQP}$) to a relaxed semi-definite programming instance by changing the optimization variables and disregarding the unit-rank constraints over the new variable. The solution of the relaxed problem was proven to be optimal. {The multiuser downlink beamforming problem in terms of maximizing the minimum $\\mathrm{SINR}$ was optimally solved in \\cite{Schubert2004}. The goal of the latter formulation is to increase the fairness of the system by boosting the $\\mathrm{SINR}$ of the user that is furthest from the targeted performance. Hence, the problem is commonly referred to as \\textit{max--min fair}. In \\cite{Schubert2004}, this problem was solved using the principles of uplink\/downlink duality. Therein, \\textit{Schubert and Boche} developed a strongly convergent iterative alternating optimization algorithm for the equivalent uplink problem. In the same work, the power minimization problem of \\cite{Bengtsson2001} was also solved by acknowledging its inherent connection with the max-min fair problem. Consequently, a significantly less complex framework to solve the optimal beamforming problem was established. }Extending these works, the practical per-antenna power constraints ($\\mathrm{PAC}$s) were considered in \\cite{Yu2007}. {Generalized power constraints, including sum power, per-antenna power and per-antenna array power constraints, were considered in \\cite{Dartmann2013}, where the proposed max-min fair solution was derived based on an extended duality framework. This framework accounted for both instantaneous and long term channel state information ($\\mathrm{CSI}$).} $\\mathrm{PAC}$s are motivated by the practical implementation of systems that rely on precoding. The lack of flexibility in sharing energy resources amongst the antennas of the transmitter is usually the case, since a common practice in multi-antenna systems is the use of individual amplifiers per antenna. Despite the fact that flexible amplifiers could be incorporated in multi-antenna transmitters, specific communication systems cannot afford this design. Typical per-antenna power limited systems can be found in multibeam satellite communications \\cite{Christopoulos2013AIAA}, where flexible on-board payloads are difficult to implement, and in cooperative multicell systems (also known as distributed antenna systems, $\\mathrm{DAS}$), where the physical co-location of the transmitting elements is not a requisite and hence power sharing might be infeasible.\n\n\nA fundamental consideration of the aforementioned works is that independent data is addressed to multiple users. However, the new generation of multi-antenna communication standards has to adapt the physical layer design to the needs of the higher network layers. Examples of such cases include highly demanding applications (e.g. 
video broadcasting) that stretch the throughput limits of multiuser broadband systems.\nIn this direction, physical layer ($\\mathrm{PHY}$) multicasting\nhas the potential to efficiently address the nature of future traffic demand and has become part of the new generation of communication standards. $\\mathrm{PHY}$ multicasting is also relevant for the application of beamforming without changing the framing structure of standards. {Such a scenario can be found in satellite communications, where the communication standards are optimized to cope with long propagation delays and guarantee scheduling efficiency by framing multiple users per transmission \\cite{Christopoulos2013AIAA,Christopoulos2014_ASMS}.}\n\nIn \\cite{Sidiropoulos2006}, the NP-hard multicast problem was accurately approximated by $\\mathrm{SDR}$ and Gaussian randomization.\nThe natural extension of the multicast concept lies in assuming multiple interfering groups of users.\nA unified framework for physical layer multicasting to multiple co-channel groups, where independent sets of common data are transmitted to groups of users by the multiple antennas, was given in \\cite{Karipidis2005CAMSAP,Karipidis2008}. Therein, the $\\mathrm{QoS}$ and the fairness problems were formulated, proven NP-hard and solved for the sum power constrained multicast multigroup case. In parallel to \\cite{Karipidis2005CAMSAP}, the independent work of \\cite{Gao2005} involved complex dirty paper coding methods. Also, a convex approximation method was proposed in \\cite{Pesavento2012b} that exhibits superior performance as the number of users per group grows. Finally, in \\cite{Silva2009}\nthe multicast multigroup problem under $\\mathrm{SPC}$ was solved based on approximations and uplink-downlink duality \\cite{Schubert2004}.\n{In the context of coordinated multicast multicell systems{\\footnote{{ Coordinated multicell networks consist of connected base stations ($\\mathrm{BS}$), with each $\\mathrm{BS}$ serving a single multicast group, a case tackled in \\cite{Xiang2013}. Extending this, the methods presented herein can be applied in cooperative multicell systems where all $\\mathrm{BS}$s will jointly transmit to several multicast groups \\cite{Chatzinotas_JWCOM}.}}}, max--min fair beamforming with per base-station ($\\mathrm{BS}$) constraints has been considered in \\cite{Xiang2013}, where each $\\mathrm{BS}$ transmits to a single multicast group. Hence, a power constraint over each precoder was imposed, while no optimization weights were considered. This formulation still considers power sharing amongst the multiple antennas at each transmitter.}\n\n\n\n\n\n\nTowards deriving the optimal multigroup multicast precoders when a maximum limit is imposed on the transmitted power of each antenna, a new optimization problem with one constraint per transmit antenna needs to be formulated.\n{Amid the extensive literature on multigroup multicast beamforming, the $\\mathrm{PAC}$s have only been considered in \\cite{Christopoulos2014_ICC}, where an equally fair multicast multigroup solution is presented.} Extending these considerations, the present work accounts for optimization weights. Therefore, a consolidated solution for the weighted max--min fair multigroup multicast beamforming under $\\mathrm{PAC}$s is hereafter presented. 
The contributions of the present work are summarized as follows\n{\\begin{itemize}\n\\item The $\\mathrm{PAC}$ weighted fair multigroup multicast beamforming problem is formulated and solved.\n\\item Practical system design insights are given by examining the implications of the $\\mathrm{PAC}$s on multigroup multicast distributed antenna systems ($\\mathrm{DAS}$), modulation constrained systems and uniform linear array ($\\mathrm {ULA}$) transmitters.\n\\item A robust to erroneous $\\mathrm{CSI}$ multigroup multicast design under $\\mathrm{PAC}$s is proposed.\n\\item The {performance of the solution} is evaluated through extensive numerical results under various system setups.\\end{itemize}\n}\n\nThe rest of the paper is structured as follows. The multigroup multicast system model is presented in Sec. \\ref{sec: System Model} while the weighted fair problem is formulated and solved in Sec. \\ref{sec: problem}. In Sec. \\ref{sec: performance}, the performance of the design is evaluated for various system setups along with a robust extension of the derived algorithm and a weighted multigroup multicast application paradigm. Finally, Sec. \\ref{sec: conclusions} concludes the paper.\n\n\n\n\n\n\n\n\n\n\n\n\n\n{\\textit{Notation}: In the remainder of this paper, bold face lower case and upper case characters denote column vectors and matrices, respectively. The operators \\(\\left(\\cdot\\right)^\\text{T}\\), \\(\\left(\\cdot\\right)^\\dag\\), $|\\cdot|$, ${\\mathrm{Tr}\\left(\\cdot\\right)}$ and \\(||\\cdot||_2, \\) correspond to the transpose, the conjugate transpose, the absolute value, the {trace} and the Frobenius norm operations, while $[\\cdot]_{ij} $ denotes the $i, j$-th element of a matrix. The principal eigenvalue of a matrix $\\mathbf X$ are denoted as $\\lambda_{max}(\\mathbf X)$. Calligraphic indexed characters denote sets}.\n\n\\section{System Model }\\label{sec: System Model}\n Herein, the focus is on a multi-user ($\\mathrm{MU}$) multiple input single output ($\\mathrm{MISO}$) multicast system. Assuming a single transmitter, let $N_t$ denote the number of transmitting elements and $N_{u}$ the total number of users served. The input-output analytical expression will read as $y_{i}= \\mathbf h^{\\dag}_{i}\\mathbf x+n_{i},$\nwhere \\(\\mathbf h^{\\dag}_{i}\\) is a \\(1 \\times N_{t}\\) vector composed of the channel coefficients (i.e. channel gains and phases) between the \\(i\\)-th user and the \\(N_{t}\\) antennas of the transmitter, \\(\\mathbf x\\) is the \\(N_{t} \\times 1\\) vector of the transmitted symbols and \\(n_{i}\\) is the independent complex circular symmetric (c.c.s.) independent identically distributed (i.i.d) zero mean Additive White Gaussian Noise ($\\mathrm{AWGN}$) measured at the \\(i\\)-th user's receive antenna.\nFocusing in a multigroup multicasting scenario, let there be a total of $1\\leq G \\leq N_{u}$ multicast groups with $\\mathcal{I} = \\{\\mathcal{G}_1, \\mathcal{G}_2, \\dots \\mathcal{G}_G\\}$ the collection of index sets and $\\mathcal{G}_k$ the set of users that belong to the $k$-th multicast group, $k \\in \\{1\\dots G \\}$. Each user belongs to only one group, thus $\\mathcal{G}_i\\cap\\mathcal{G}_j=$\\O ,$ \\forall i,j \\in \\{1\\cdots G\\}$. 
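For concreteness, the grouping and channel notation just introduced (together with the per-group precoding vectors $\\mathbf w_k$ defined next) can be instantiated numerically as in the following short sketch, written in Python with NumPy; the dimensions, the Rayleigh-fading assumption and all variable names are illustrative choices and not part of the system model itself:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions, matching the notation of the system model.
N_t, groups = 5, [[0, 1], [2, 3]]        # antennas and index sets G_1, G_2
N_u = sum(len(g) for g in groups)        # each user belongs to exactly one group
sigma2 = np.ones(N_u)                    # noise variances (normalised)

# Rayleigh-fading channels: column i is the N_t x 1 vector h_i of user i.
H = (rng.standard_normal((N_t, N_u)) + 1j * rng.standard_normal((N_t, N_u))) / np.sqrt(2)

def sinr(W, H, groups, sigma2):
    """SINR of every user for precoders W (N_t x G), one column per multicast group."""
    out = np.empty(H.shape[1])
    for k, members in enumerate(groups):
        for i in members:
            gains = np.abs(W.conj().T @ H[:, i]) ** 2     # |w_l^H h_i|^2 for all l
            out[i] = gains[k] / (gains.sum() - gains[k] + sigma2[i])
    return out
\end{verbatim}

A helper of this form suffices later on to verify the $\\mathrm{SINR}$ levels achieved by candidate precoders.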
Let $\\mathbf w_k \\in \\mathbb{C}^{N_t \\times 1}$ denote the precoding weight vector applied to the transmit antennas to beamform towards the $k$-th group.\nThe assumption of independent data transmitted to different groups renders the symbol streams $\\{s_k\\}_{k=1}^G$ mutually uncorrelated and the total power radiated from the antenna array is\n\\begin{align}\nP_{tot} = \\sum_{k=1}^ G \\mathbf w_k{^\\dag} \\mathbf w_k\n\\end{align}\nThe power radiated by each\nantenna element is a linear combination of all precoders\n\\cite{Yu2007}:\\begin{align}\\label{eq: PAC}\nP_n = \\left[\\sum_{k=1}^G \\mathbf w_k \\mathbf w_k^\\dag \\right]_{nn}\n\\end{align}\nwhere $n \\in \\{1\\dots N_t\\}$ is the antenna index.\nThe fundamental difference between the $\\mathrm{SPC}$ of \\cite{Karipidis2008} and the proposed $\\mathrm{PAC}$ is clear in \\eqref{eq: PAC}, where instead of one, $N_t$ constraints are realized, each one involving all the precoding vectors. A more general constraint formulation to model power flexibility amongst groups of antennas can be found in \\cite{Zheng2011a}.\n\\section{Multicast Multigroup Beamforming with Per Antenna Power Constraints}\\label{sec: problem}\n\\subsection{Weighted Max-Min Fair Formulation}\nThe $\\mathrm{PAC}$ weighted max-min fair problem is defined as\n\\begin{empheq}[box=\\fbox]{align}\n\\mathcal{F:}\\ \\max_{\\ t, \\ \\{\\mathbf w_k \\}_{k=1}^{G}} &t& \\notag\\\\\n\\mbox{subject to } & \\frac{1}{\\gamma_i}\\frac{|\\mathbf w_k^\\dag \\mathbf h_i|^2}{\\sum_{l\\neq k }^G |\\mathbf w_l^\\dag\\mathbf h_i|^2+\\sigma_i^2 }\\geq t, &\\label{const: F SINR}\\\\\n&\\forall i \\in\\mathcal{G}_k, k,l\\in\\{1\\dots G\\},\\notag\\\\\n \\text{and to }\\ \\ \\ \\ & \\left[\\sum_{k=1}^G \\mathbf w_k\\mathbf w_k^\\dag \\right]_{nn} \\leq P_n, \\label{eq: max-min fair power const1 }\\\\\n &\\forall n\\in \\{1\\dots N_{t}\\},\\notag\n \\end{empheq}\n where $\\mathbf w_k\\in \\mathbb{C}^{N_t}$ and $t \\in \\mathbb{R}^{+}$. Different service levels between the users can be acknowledged in this weighted formulation. Problem $ \\mathcal{F}$ receives as inputs the $\\mathrm{PAC}$s vector $\\mathbf p = [P_1, P_2\\dots P_{N_t}]$ and the target $\\mathrm{SINR}$s vector $\\mathbf g = [\\gamma_1,\\gamma_2, \\dots \\gamma_{N_u}]$. {Its goal is to maximize the slack variable $t$ while keeping all $\\mathrm{SINR}$s above this value. Thus, it constitutes a max-min problem that guarantees fairness amongst users.} Following the common in the literature notation for ease of reference, the optimal objective value of $ \\mathcal{F}$ is denoted as $t^*=\\mathcal{F}(\\mathbf g, \\mathbf p)$ and the associated optimal point as $\\{\\mathbf w_k^\\mathcal{F}\\ \\}_{k=1}^{G}$. {Of particular interest is the case where the co-group users share the same target i.e. $\\gamma_i = \\gamma_{k},\\ \\forall i \\in\\mathcal{G}_k, k\\in\\{1\\dots G\\}$.}\n\n{\\textit{Remark 1}: {The difference of the present formulation with respect to the weighted max-min fair problem with $\\mathrm{SPC}$ presented in \\cite{Sidiropoulos2006,Karipidis2008} lies in the $N_t$ power constraints over each individual radiating element. Additionally, this formulation differs from the coordinated multicell multicasting Max-Min formulation of \\cite{Xiang2013} since {the constraint is imposed on the $n$-th diagonal element of the summation of the correlation matrices of all precoders, while weights on each users } $\\mathrm{SINR}$ are also inserted. 
On the contrary, in \\cite{Xiang2013}, the imposed per base station constraints are translated to one power constraint per each precoder.\nIn the present work, weights to differentiate the $\\mathrm{QoS}$ targets between users are also proposed.}\n\n\\subsection{Per-antenna power minimization}\n{ The relation between the fairness and the power minimization problems for the multicast multigroup case was firstly established in \\cite{Karipidis2008}. As a result, by bisecting the solution of the $\\mathrm{QoS}$ optimization, a solution to the weighted fairness problem can be derived. Nevertheless, fundamental differences between the existing formulations and problem $\\mathcal{F}$ complicate the solution. In more detail, the per-antenna constraints are not necessarily met with equality (a discussion on this is also given in Sec. \\ref{sec: power consumption}). Therefore, the fairness problem is no longer equivalent to the sum power minimization under $\\mathrm{QoS}$ constraints problem. Since the absence of a related, solvable problem prohibits the immediate application of bisection, a novel equivalent per-antenna power minimization problem is proposed as}\n\\begin{empheq}[box=\\fbox]{align}\n\\mathcal{Q:} \\min_{\\ r, \\ \\{\\mathbf w_k \\}_{k=1}^{G}} &r& \\notag\\\\\n\\mbox{subject to } & \\frac{|\\mathbf w_k^\\dag \\mathbf h_i|^2}{\\sum_{l\\neq k }^G | \\mathbf w_l^\\dag\\mathbf h_i|^2 + \\sigma^2_i}\\geq \\gamma_i, \\label{const: Q SINR}\\\\\n&\\forall i \\in\\mathcal{G}_k, k,l\\in\\{1\\dots G\\},\\notag\\\\\n\\text{and to} \\ \\ \\ \\ \\ & \\frac{1}{P_n} \\left[\\sum_{k=1}^G \\mathbf w_k\\mathbf w_k^\\dag \\right]_{nn} \\leq r,\\label{eq: PAC Q}\\\\\n& \\forall n\\in \\{1\\dots N_{t}\\}, \\notag\n \\end{empheq}\n with $r\\in\\mathbb{R^+}$. Problem $\\mathcal{Q }$ receives as input $\\mathrm{SINR}$ constraints for all users, defined before as $\\mathbf g $, as well as the per antenna power constraint vector $\\mathbf p$ of \\eqref{eq: max-min fair power const1 }. {The introduction of the slack-variable $r$, a common practice in convex optimization \\cite{convex_book}, constraints the power consumption of each and every antenna. Subsequently, at the optimum $r^*$, the maximum power consumption out of all antennas is minimized} and this solution is denoted as $r^*=\\mathcal{Q}(\\mathbf g, \\mathbf p)$. {The generic difference of the present min-max formulation and the formulation proposed in \\cite{Xiang2013} lies in the per antenna constraint \\eqref{eq: PAC Q}. Instead of constraining the power of each antenna, the authors of \\cite{Xiang2013} impose a constraint over each precoder that serves a common multicast group. 
In the case tackled herein, the number of constraints is increased from one to $N_t$, while each constraint is a function of all multigroup precoders as the summation in \\eqref{eq: PAC Q} reveals.} The following claim reveals the relation between the described problems.\n\n \\textit{Claim 1}: Problems $\\mathcal{F}\\ $ and $\\mathcal{Q}\\ $ are related as follows\n\\begin{align}\n1 = \\mathcal{Q}\\left(\\mathcal{F}\\left( \\mathbf g, \\mathbf p\\right)\\cdot\\mathbf g, \\mathbf p \\right)\\label{eq: equivalence 1}\\\\\nt = \\mathcal{F}\\left(\\mathbf g, \\mathcal{Q}\\left( t\\cdot \\mathbf g,\\mathbf p\\right)\\cdot\\mathbf p \\right)\\label{eq: equivalence 2}\n \\end{align}\n \\textit{Proof: }{Similar to the line of reasoning in \\cite{Xiang2013} the above claims will be proven by contradiction.} Starting with \\eqref{eq: equivalence 1}, let $t^*=\\mathcal{F}(\\mathbf g,\\ \\mathbf p)$ denote the optimal value of $\\mathcal{F}$ with associated variable $\\{\\mathbf w_k^F \\}_{k=1 }^{G}$.\nAlso, let $\\hat r=\\mathcal{Q}\\left( t^*\\cdot\\mathbf g,\\ \\mathbf p\\right)$ be the optimal value of $\\mathcal{Q}$ at the point $ \\{\\mathbf w_k^Q \\}_{k=1 }^{G}$. Then, assuming that $\\hat r > 1$, the vectors $ \\{\\mathbf w_k^F \\}_{k=1 }^{G}$ satisfy the feasibility criteria of $\\mathcal{Q}$ and produce a lower optimal value thus contradicting the optimality of $ \\{\\mathbf w_k^Q \\}_{k=1 }^{G}$ and opposing the hypothesis. Alternatively, assuming that $\\hat r < 1$ then the solutions $ \\{\\mathbf w_k^Q \\}_{k=1 }^{G}$ can be scaled by the non-negative $ \\hat r $. The vectors $ \\{ \\hat r \\cdot\\mathbf{ w}_k^Q \\}_{k=1 }^{G}$ are feasible solutions to $\\mathcal{F }$ which provide the same optimal objective value with however some remaining power budget. Therefore, the power could be scaled up until at least one of the $\\mathrm{PAC}$s is satisfied with equality and a higher objective value would be derived thus again contradicting the hypothesis. Consequently, $\\hat r = 1$.\nThe same line of reasoning is followed to prove \\eqref{eq: equivalence 2}. Let $r^*=\\mathcal{Q}(t\\cdot\\mathbf g,\\ \\mathbf p)$ denote the optimal value of $\\mathcal{Q}$ with associated solution $ \\{\\mathbf w_k^Q \\}_{k=1 }^{G}$. Assuming that the optimal value of $\\mathcal{F}$ under constraints scaled by the solution of $\\mathcal{Q}$ is different, i.e. $\\hat t= \\mathcal{F}\\left(\\mathbf g , \\ \\mathcal{Q}\\left(t\\cdot \\mathbf g ,\\ \\mathbf p\\right)\\cdot\\mathbf p\\right)$ with $ \\{\\mathbf w_k^F \\}_{k=1}^{G}$, the following contradictions arise. In the case where $\\hat t < t $, then the precoders $ \\{\\mathbf w_k^Q \\}_{k=1 }^{G}$ are feasible solutions to $\\mathcal{F}$ which lead to a higher minimum $\\mathrm{SINR}$, thus contradicting the optimality of $\\hat t$. Alternatively, if $\\hat t > t $ then the solution set $ \\{\\mathbf w_k^F \\}_{k=1 }^{G}$ can be scaled by a positive constant $ c = t \/\\hat t<1$. The new solution $ \\{c \\mathbf w_k^F \\}_{k=1 }^{G}$ respects the feasibility conditions of $\\mathcal{Q}$ and provides a lower optimal value, i.e. $c\\cdot r^*$, thus again contradicting the hypothesis. As a result, $\\hat t = t\\ \\square$.\n\n \\subsection{Semidefinite Relaxation}\\label{sec: SDR}\n Problem $\\mathcal{Q}$ belongs in the general class of non-convex $\\mathrm{QCQP}$s for which the $\\mathrm{SDR}$ technique is proven to be a powerful and computationally efficient approximation technique \\cite{Luo2010}. 
The relaxation is based on the observation that\n$|\\mathbf w_k^\\dag \\mathbf h_i|^2 = \\mathbf w_k^\\dag \\mathbf h_i \\mathbf h_i^\\dag \\mathbf w_k = {\\mathrm{Tr}}(\\mathbf w_k^\\dag \\mathbf h_i \\mathbf h_i^\\dag \\mathbf w_k) = {\\mathrm{Tr}}(\\mathbf w_k \\mathbf w_k^\\dag \\mathbf h_i \\mathbf h_i^\\dag )$. With the change of variables $\\mathbf X_i = \\mathbf w_i \\mathbf w_i^\\dag $,\n $\\mathcal{Q}$ can be relaxed to\n $\\mathcal{Q}_r$\n\n\\begin{empheq}[box=\\fbox]{align}\n\\mathcal{Q}_r:\\min_{r,\\ \\{\\mathbf X_k \\}_{k=1}^{G}} &r& \\notag\\\\\n\\mbox{subject to } & \\frac{\\mathrm{Tr}\\left(\\mathbf Q_i \\mathbf X_k\\right)}{\\sum_{l\\neq k }^G \\mathrm{Tr}\\left(\\mathbf Q_i \\mathbf X_l\\right) +\\sigma_i^2}\\geq \\gamma_i, \\label{const: Q_r SINR}\\\\\n&\\forall i \\in\\mathcal{G}_k, k,l\\in\\{1\\dots G\\},\\notag\\\\\n \\text{and to} \\ \\ \\ \\ \\ &\\frac{1}{P_n} \\left[\\sum_{k=1}^G \\mathbf X_{k}\\right]_{nn} \\leq r, \\label{const: Q_r Power}\\\\\n \\text{and to} \\ \\ \\ \\ \\ &\\ \\mathbf X_k\\succeq0,\\label{const: Q_r SDF}\n\\ \\forall n\\in \\{1\\dots N_{t}\\},\\notag\n \\end{empheq}\nwhere $\\mathbf Q_i = \\mathbf h_i\\mathbf h_i^\\dag$, $r \\in \\mathbb{R}^{+}$, while the constraint $\\text{rank}(\\mathbf X_i) = 1$ is dropped. Now the relaxed $\\mathcal{Q}_r$ is convex, thus solvable to an arbitrary accuracy. {This relaxation can be interpreted as a Lagrangian bi-dual of the original problem \\cite{convex_book}. }The weighted max-min fair optimization is also relaxed as\n\\begin{empheq}[box=\\fbox]{align}\n\\mathcal{F}_r:\\max_{t, \\ \\{\\mathbf X_k \\}_{k=1}^{G}}&t& \\notag\\\\\n\\mbox{subject to } & \\frac{1}{\\gamma_i}\\frac{\\mathrm{Tr}\\left(\\mathbf Q_i \\mathbf X_k\\right)}{\\sum_{l\\neq k }^G \\mathrm{Tr}\\left(\\mathbf Q_i \\mathbf X_l\\right)+ \\sigma_i^2}\\geq t, \\label{const: F_r SINR}\\\\\n&\\forall i \\in\\mathcal{G}_k, k,l\\in\\{1\\dots G\\}, \\notag\\\\\n\\mbox{and to }\\ \\ \\ \\ &\\left[\\sum_{k=1}^G \\mathbf X_k \\right]_{nn} \\leq P_n,\\\\\n &\\forall n\\in \\{1\\dots N_{t}\\},\\notag \\\\\n \\mbox{and to }\\ \\ \\ \\ & \\mathbf X_k\\succeq0,\n\\end{empheq}\n{which, however, remains non-convex due to \\eqref{const: F_r SINR}, as explained in detail in \\cite{Karipidis2008}}. However, this obstacle can be overcome by the following observation.\n\n\\textit{ \\textit{Claim 2}: Problems $\\mathcal{F}_r$ and $\\mathcal{Q}_r$ are related as follows}\n\\begin{align}\n1 = \\mathcal{Q}_r\\left(\\mathcal{F}_{r}\\left( \\mathbf g, \\mathbf p\\right)\\cdot\\mathbf g, \\mathbf p \\right)\\label{eq: equivalence 3}\\\\\nt = \\mathcal{F}_r\\left(\\mathbf g, \\mathcal{Q}_{r}\\left( t\\cdot \\mathbf g,\\mathbf p\\right)\\cdot\\mathbf p \\right)\\label{eq: equivalence 4}\n \\end{align}\n\\textit{Proof:} Follows the steps of the proof of \\textit{Claim 1} and is therefore omitted. $\\square$\n\\subsection{Gaussian Randomization} \\label{sec: Gaussian randomization}\nDue to the NP-hardness of the multicast problem, the relaxed problems do not necessarily yield unit rank matrices. Consequently, one can apply a rank-1 approximation over $\\mathbf X^{*}$.\nMany types of rank-1 approximations are possible depending on the nature of the original problem.\nThe solution with the highest provable accuracy for the multicast case is given by the {Gaussian randomization method \\cite{Luo2010}.} In more detail, let $\\mathbf X^{*}$ be a symmetric positive semidefinite solution of the relaxed problem. 
Then, a candidate solution to the original problem can be generated as a Gaussian random variable with zero mean and covariance equal to $\\mathbf X^{*}$, i.e. $\\hat{\\mathbf{w}} \\backsim\\mathbb{C}\\mathbb{N}(0, \\mathbf X^{*} )$.\nAfter generating a predetermined number of candidate solutions, the one that yields the highest objective value of the original problem can be chosen. The accuracy of this approximate solution is measured by the distance of the approximate objective value and the optimal value of the relaxed problem and it increases with the predetermined number of randomizations {\\cite{Luo2010,Karipidis2008}}.\nNonetheless, an intermediate problem dependent step between generating a Gaussian instance with the statistics obtained from the relaxed solution and creating a feasible candidate instance of the original problem still remains, since the feasibility of the original problem is not yet guaranteed.\n\n\\subsection{Feasibility Power Control }\\label{sec: power control}\n\nAfter generating a random instance of a Gaussian variable with statistics defined by the relaxed problem, an additional step comes in play to guarantee the feasibility of the original problem. { In \\cite{Sidiropoulos2006}, the feasibility of the candidate solutions, as given by the Gaussian randomization, was guaranteed by a simple power rescaling.} Nevertheless, since in the multigroup case an interference scenario is dealt with, a simple rescaling does not guarantee feasibility. Therefore, an additional optimization step is proposed in \\cite{Karipidis2008} to re-distribute the power amongst the candidate precoders. To account for the inherently different $\\mathrm{PAC}$s, a novel power control problem with per antenna power constraints is proposed. Given a set of Gaussian instances, $\\{\\mathbf{\\hat w}_k\\}_{k=1 }^G$, the \\textit{Multigroup Multicast Per Antenna power Control} ($\\mathrm{MMPAC}$) problem reads as\n\\begin{empheq}[box=\\fbox]{align}\n\\mathcal{S^{\\mathcal{F}}:} \\max_{t, \\ \\{ p_{k} \\}_{k=1}^{G}} &t& \\notag\\\\\n\\mbox{subject to } & \\frac{1}{\\gamma_i}\\frac{|\\mathbf{\\hat w}_k^\\dag \\mathbf{ h}_i|^2 p_{k}}{\\sum_{l\\neq k }^G |\\mathbf{\\hat w}_l \\mathbf{ h}_i |^2 p_{l} +\\sigma_i^2}\\geq t, \\label{const: S F SINR}\\\\\n&\\forall i \\in\\mathcal{G}_k,k,l\\in\\{1\\dots G\\}\\notag \\\\\n\\mbox{and to} \\ \\ \\ \\ \\ & \\left[\\sum_{k=1}^G \\mathbf{\\hat w}_k\\mathbf{\\hat w}_k^\\dag p_k\\right]_{nn} \\leq P_{n,} \\label{eq: MMPAC PACs}\\\\\n&\\forall n\\in \\{1\\dots N_{t}\\},\\notag\n \\end{empheq}\nwith $\\{ p_{k} \\}_{k=1}^{G}\\in\\mathbb{R}^{+}$. Problem $\\mathcal{S^{\\mathcal{F}}}$ receives as input the $\\mathrm{PAC}$s as well as the $\\mathrm{SINR }$ targets and returns the maximum scaled worst $\\mathrm{SINR}$ $t^{*} = \\mathcal{S}(\\mathbf g, \\mathbf p)$ and is also non-convex like $ \\mathcal{F}$.\n{{The difference of this problem compared to \\cite{Karipidis2008} lies in \\eqref{eq: MMPAC PACs}.\n}\n\\textit{\\\\ Remark 2:} A very important observation is clear in the formulation of the power control problem. The optimization variable $\\mathbf p $ is of size $G$, i.e. equal to the number of groups, while the power constraints are equal to the number of antennas, $N_t$. In each constraint, all the optimization variables contribute. This fact prohibits the total exploitation of the available power at the transmitter. 
Once at least one of the $N_t$ constraints is satisfied with equality while some power budget remains, then the rest cannot be scaled up, since this would lead to at least one constraint exceeding the maximum permitted value.}\n\n\\subsection{Bisection}\\label{sec: bisec}\n\n{The establishment of Claims 1 and 2 allows for the application of the bisection method, as developed in \\cite{Sidiropoulos2006,Karipidis2008}.} The solution of $\nr^* = \\mathcal{Q}_r \\left ( \\frac{L+U}{2}\\mathbf g, \\mathbf p\\right)\n$ is obtained by bisecting the interval $[L, U]$ as defined by the minimum and maximum $\\mathrm{SINR}$ values. Since $t=(L+U)\/2$ represents the $\\mathrm{SINR}$, it will always be positive or zero. Thus, $L = 0.$ Also, if the system was interference free while all the users had the channel of the best user, then the maximum worst $\\mathrm{SINR}$ would be attained, thus $U = \\max_i\\{P_{tot}\\lambda_{max}(\\mathbf Q_i)\/\\sigma^2_i \\}. $ If $r^*<1$, then the lower bound of the interval is updated with this value. Otherwise the value is assigned to the upper bound of the interval. Bisection is iteratively performed until the interval size is reduced to a pre-specified value $\\epsilon$.\nThis value needs to be dependent on the magnitude of $L \\text{ and } U$ so that the accuracy of the solution is maintained regardless of the region of operation.\n After a finite number of iterations the optimal value of $\\mathcal{F}_r$ is given as the resulting value for which $L \\text{ and } U$ become almost identical. This procedure provides an accurate solution to the non-convex $\\mathcal{F}_r$.\nFollowing this, for each and every solution $\\{\\mathbf{\\hat w}_k\\}_{k=1 }^G$, the power of the precoders needs to be controlled.\n{Consequently, problem $\\mathcal{S}^F$ can be solved using the well established framework of bisection \\cite{convex_book} over its convex equivalent problem, which reads as}\n\\begin{empheq}[box=\\fbox]{align}\n\\mathcal{S^{\\mathcal{Q}}:} \\min_{r, \\ \\{ p_k \\}_{k=1}^{G}} &r& \\notag\\\\\n\\mbox{subject to } & \\frac{|\\mathbf{\\hat w}_k^\\dag \\mathbf h_i|^2 p_{k}}{\\sum_{l\\neq k }^G | \\mathbf{\\hat w}_l^\\dag\\mathbf h_i|^2 p_{l} +\\sigma_i^2}\\geq \\gamma_i, \\label{const: SQ SINR}\\\\\n&\\forall i \\in\\mathcal{G}_k, k,l\\in\\{1\\dots G\\},\\notag \\\\\n\\mbox{and to} \\ \\ \\ \\ \\ & \\frac{1}{ P_{n}} \\left[\\sum_{k=1}^G \\mathbf{\\hat w}_k\\mathbf{\\hat w}_k^\\dag p_k\\right]_{nn} \\leq r, \\\\\n&\\forall n\\in \\{1\\dots N_{t}\\},\\notag\n \\end{empheq}\n{ Problem $\\mathcal{S^\\mathcal{Q}}$ is an instance of a linear programming (LP) problem.}\n\n{\\textit{Remark 3}: For completeness, the possible reformulation of the non-convex problem $\\mathcal{S^{\\mathcal{F}}} $ into the following geometric program ($\\mathrm{GP}$) is considered, thus surpassing the need for bisection:}\n\\begin{empheq}[box=\\fbox]{align}\n\\mathcal{S^{\\mathcal{F}}_{\\mathcal{GP}}}:& \\min_{t, \\ \\{ p_{k} \\}_{k=1}^{G}} t^{-1}& \\notag\\\\\n\\mbox{s. t. 
}& {\\sum_{l\\neq k }^G |\\mathbf{\\hat w}_l \\mathbf{ h}_i |^2 p_{l} +\\sigma_i^2}\\leq \\frac{t^{-1}}{\\gamma_i}|\\mathbf{\\hat w}_k^\\dag \\mathbf{ h}_i|^2 p_{k}, \\label{const: S F SINR}\\\\\n&\\forall i \\in\\mathcal{G}_k,k,l\\in\\{1\\dots G\\}\\notag \\\\\n\\mbox{and to} & \\left[\\sum_{k=1}^G \\mathbf{\\hat w}_k\\mathbf{\\hat w}_k^\\dag p_k\\right]_{nn} \\leq P_{n,} \\label{eq: MMPAC PACs GP}\n\\forall n\\in \\{1\\dots N_{t}\\},\\notag\n \\end{empheq}\n\\subsection{Complexity}\\label{sec: complexity}\n An important discussion involves the complexity of the employed techniques to approximate a solution of the highly complex, NP-hard multigroup multicast problem under $\\mathrm{PAC}$s. Focusing on the proposed algorithm (cf. Alg. 1), the main complexity burden originates from the solution of a $\\mathrm{SDP}$. The present work relies on the CVX tool \\cite{convex_book} which calls numerical solvers such as SeDuMi to solve semi-definite programs. The complexity of the $\\mathrm{SDR}$ technique has been exhaustively discussed in \\cite{Luo2010} and the references therein. To calculate the total worst case complexity of the solution proposed in the present work, the following are considered.\n\nInitially, a bisection search is performed over $\\mathcal{Q}_r$ to obtain the relaxed solution. This bisection runs for $N_{iter} = \\lceil\\log_2\\left(U_{1}-L_{1}\\right)\/\\epsilon_{1}\\rceil$ where $\\epsilon_1 $ is the desired accuracy of the search. Typically $\\epsilon_1 $ needs to be at least three orders of magnitude below the magnitudes of $U_{1}, L_{1}$ for sufficient accuracy. In each iteration of the bisection search, problem $\\mathcal{Q}_r$ is solved. This $\\mathrm{SDP}$ has $G$ matrix variables of $N_t\\times N_t$ dimensions and $N_{u}+N_t$ linear constraints. The interior point methods employed to solve this $\\mathrm{SDP}$ require at most { $\\mathcal{O}\\left(\\sqrt{GN_t}\\log(1\/\\epsilon) \\right)$ } iterations, where $\\epsilon $ is the desired numerical accuracy of the solver. Moreover, in each iteration not more than $\\mathcal{O}({G^3 N_t^6 +GN_{t}^{3} +N_{u}GN_t^2})$ arithmetic operations will be performed. The increase in complexity stems from increasing the number of constraints, i.e. $N_t+N_{u}$ constraints are considered instead of only $N_{u}$ as in \\cite{Karipidis2008}. However, this increase is not significant, since the order of the polynomial with respect to the number of transmit antennas is not increased. The solver used also exploits the specific structure of matrices hence the actual running time is reduced. Next, a fixed number of Gaussian random instances with covariance given by the previous solution are generated. {The complexity burden of this step is given by the following considerations. For each randomization, a second bisection search is performed this time over the $\\mathrm{LP}$ $\\mathcal{S}^Q$. An $\\epsilon-$optimal solution of this problem can be generated with a worst case complexity of $\\mathcal O(G^{3.5}\\log(1\/\\epsilon))$ \\cite{ye2011interior} . The second bisection runs for $N_{iter} = \\lceil\\log_2\\left(U_{2}-L_{2}\\right)\/\\epsilon_{2}\\rceil$ iterations, which are significantly reduced since the upper bound $U_{2}$ is now the optimal value of the relaxed problem.\n Moreover, the Gaussian randomization is executed for a fixed number of iterations. The accuracy of the solution increases with the number of randomizations \\cite{Karipidis2008,Sidiropoulos2006,Luo2010}. 
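To make the above concrete, the following sketch outlines how the relaxed problem $\\mathcal{Q}_r$, the bisection of Sec. \\ref{sec: bisec}, the Gaussian randomization and the per-antenna power control $\\mathcal{S^{\\mathcal{Q}}}$ could be composed in practice (cf. Alg. 1). It is written in Python and assumes the generic convex modelling package cvxpy (with the SCS solver) together with scipy; all function and variable names are illustrative, the sketch is not the implementation used for the reported results, and no attempt is made to exploit problem structure.

\begin{verbatim}
import numpy as np
import cvxpy as cp                       # convex modelling layer (assumed available)
from scipy.optimize import linprog


def solve_Qr(H, groups, gamma, sigma2, P):
    """Relaxed per-antenna power minimisation Q_r for fixed SINR targets gamma (an SDP)."""
    Nt, G = H.shape[0], len(groups)
    X = [cp.Variable((Nt, Nt), hermitian=True) for _ in range(G)]
    r = cp.Variable(nonneg=True)
    cons = [Xk >> 0 for Xk in X]
    for k, members in enumerate(groups):
        for i in members:
            Qi = np.outer(H[:, i], H[:, i].conj())
            sig = cp.real(cp.trace(Qi @ X[k]))
            intf = sum(cp.real(cp.trace(Qi @ X[l])) for l in range(G) if l != k)
            cons.append(sig >= gamma[i] * (intf + sigma2[i]))
    for n in range(Nt):                  # one power constraint per transmit antenna
        cons.append(cp.real(sum(X[k][n, n] for k in range(G))) <= r * P[n])
    prob = cp.Problem(cp.Minimize(r), cons)
    try:
        prob.solve(solver=cp.SCS)
    except cp.SolverError:
        return np.inf, None
    if prob.status not in ("optimal", "optimal_inaccurate"):
        return np.inf, None
    return r.value, [Xk.value for Xk in X]


def solve_SQ(W, H, groups, gamma, sigma2, P):
    """Per-antenna power control S^Q over fixed candidate precoders W (Nt x G); an LP."""
    Nt, G = W.shape
    a = np.abs(W.conj().T @ H) ** 2      # a[l, i] = |w_l^H h_i|^2
    A, b = [], []
    for k, members in enumerate(groups): # SINR constraints, linear in (p, r)
        for i in members:
            row = np.zeros(G + 1)
            row[:G] = gamma[i] * a[:, i]
            row[k] = -a[k, i]
            A.append(row); b.append(-gamma[i] * sigma2[i])
    for n in range(Nt):                  # sum_k |w_k[n]|^2 p_k <= r P_n
        row = np.zeros(G + 1)
        row[:G], row[G] = np.abs(W[n, :]) ** 2, -P[n]
        A.append(row); b.append(0.0)
    c = np.r_[np.zeros(G), 1.0]
    res = linprog(c, A_ub=np.array(A), b_ub=np.array(b), bounds=[(0, None)] * (G + 1))
    return (res.x[G], res.x[:G]) if res.success else (np.inf, None)


def bisect_fair(solve_r, g, U, tol=1e-3):
    """Largest t in [0, U] for which the problem with targets t*g returns r <= 1."""
    L, sol = 0.0, None
    while U - L > tol:
        t = 0.5 * (L + U)
        r, s = solve_r(t * np.asarray(g))
        if r <= 1.0:
            L, sol = t, s
        else:
            U = t
    return L, sol


def draw_cgauss(X, rng):
    """One sample w ~ CN(0, X) for a Hermitian PSD matrix X (randomization step)."""
    vals, vecs = np.linalg.eigh(X)
    z = (rng.standard_normal(len(vals)) + 1j * rng.standard_normal(len(vals))) / np.sqrt(2)
    return vecs @ (np.sqrt(np.clip(vals, 0.0, None)) * z)


# Illustrative composition of the steps of Algorithm 1 on a small random instance.
rng = np.random.default_rng(0)
Nt, groups, Nu = 5, [[0, 1], [2, 3]], 4
H = (rng.standard_normal((Nt, Nu)) + 1j * rng.standard_normal((Nt, Nu))) / np.sqrt(2)
g, sigma2, P = np.ones(Nu), np.ones(Nu), np.full(Nt, 10.0 / Nt)   # 10 W split evenly
U0 = max(P.sum() * np.linalg.norm(H[:, i]) ** 2 / sigma2[i] for i in range(Nu))

t_upper, X_opt = bisect_fair(lambda gg: solve_Qr(H, groups, gg, sigma2, P), g, U0)
best_t, best_W = -np.inf, None
for _ in range(100):                     # N_rand Gaussian randomizations
    W = np.column_stack([draw_cgauss(Xk, rng) for Xk in X_opt])
    t_cand, p = bisect_fair(lambda gg: solve_SQ(W, H, groups, gg, sigma2, P), g, t_upper)
    if p is not None and t_cand > best_t:
        best_t, best_W = t_cand, W * np.sqrt(p)   # scale candidates by the LP powers
print("relaxed upper bound:", t_upper, "  achieved worst SINR:", best_t)
\end{verbatim}

The same bisection helper is reused for the outer search over $\\mathcal{Q}_r$ and for the inner search over $\\mathcal{S^{\\mathcal{Q}}}$, mirroring the relations \\eqref{eq: equivalence 3}--\\eqref{eq: equivalence 4}; the relaxed optimum also serves as the upper bound of the second bisection, as noted above.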
Finally, the complexity burden can be further reduced by the reformulation of the non-convex\n $ \\mathcal{S}^{\\mathcal{F}} $\n into the $\\mathrm{GP}$, $\\mathcal{S^{\\mathcal{F}}_{\\mathcal{GP}}}$ which is efficiently solved by successive approximations of primal-dual interior point numerical methods \\cite{convex_book}. Thus the need for the second bisection can be surpassed. }\n\\begin{algorithm\n \\SetAlgoLined\n \\KwIn{$ N_{rand}, \\mathbf p, \\mathbf g, \\mathbf Q_i, \\sigma_i^2 \\ \\forall i \\in\\{1\\dots G\\} $ }\n \\KwOut{ $ \\{\\mathbf w_{k}^{opt}\\}_{k=1}^G$, $t^*_{opt}$ of $\\mathcal{F}$ $ \\{\\mathbf w_{k}^{out}\\}_{k=1}^G$ $t^*_{out}$ \n \\Begin{\n \\textbf{\\textit{\\textit{\\uline{Step 1:}}}} Solve $t_{opt} = \\mathcal{F}_r\\left(\\mathbf g, \\mathbf p \\right)$ by bisecting $\\mathcal{Q}_r\\left(\\frac{L+U}{2}\\mathbf g, \\mathbf p \\right)$, (see Sec. \\ref{sec: bisec}).\nLet the associated point be $\\{\\mathbf w_{k}^{opt}\\}_{k=1}^G$.\\\\\n \\eIf{$\\mathrm{rank}(\\mathbf X_{k}^{opt} )= 1, \\forall \\ k \\in\\{1\\dots G\\} $}\n{$ \\{\\mathbf w_{k}^{out}\\}_{k=1}^G$ is given by $\\lambda_{max}(\\mathbf X^{opt})$. }\n{\n\\textbf{\\textit{\\uline{Step 2:}}}\n: Generate $N_{rand}$ precoding vectors\n$\\{ \\mathbf{\\hat{w}}_{k}\\}_{k=1}^G$, (see Sec. \\ref{sec: Gaussian randomization} ).\n$t^{*}_{(0)}= 0$\\;\n\\For{$i=1\\dots N_{rand}$}{\n\\textbf{\\textit{\\uline{Step 3:}}}\nSolve $\\mathcal{S}^{\\mathcal{F}}\\left(\\mathbf g, \\mathbf p \\right)$ by bisecting $\\mathcal{S}^{\\mathcal{Q}}\\left(\\frac{L+U}{2}\\mathbf g, \\mathbf p \\right)$ to get a $\\{\\mathbf w_{k}^{can}\\}_{k=1}^G$ with $t_{(i)}^*$.\\\\\n \\If {$t_{(i)}^* > t_{(i-1)}$ }{ $t^{*}_{out}=t^*_{(i)}, \\{\\mathbf w_{k}^{out}\\}_{k=1}^G = \\{\\mathbf w_{k}^{can}\\}_{k=1}^G$}\n}\n}\n\n \\caption{ Fair multigroup multicasting under $\\mathrm{PAC}$s.}\n\\end{algorithm}\n\n\n\\section{ Performance Evaluation \\& Applications} \\label{sec: performance}\n\\subsection{Multigroup multicasting over Rayleigh Channels}\\label{sec: performance rayleigh}\nThe performance of linear multicast multigroup beamforming under per antenna power constraints is examined for a system with $N_t = 5$ transmit antennas, $G = 2 $ groups and $N_u = 4 $ users. Rayleigh fading is considered, thus the channels are generated as Gaussian complex variable instances with unit variance and zero mean. For every channel instance, the approximate solutions of the max-min fair $\\mathrm{SPC}$ and the proposed $\\mathrm{PAC}$ problems are evaluated using $N_{rand} = 100$ Gaussian randomizations\\cite{Karipidis2008}.\nThe results are averaged over one hundred channel realizations, while the noise variance is normalized to one for all receivers and all $\\mathrm{SINR}$ targets are assumed equal to one.\n\n {The achievable minimum rate\n is plotted for the $\\mathrm{SPC}$ and the $\\mathrm{PAC }$ optimization in Fig. \\ref{fig: power}\n with respect to the total transmit power in dBWs. Noise is assumed normalised to one.} For fair comparison, the total power constraint $P_{tot}$~[Watts] is equally distributed amongst the transmit antennas when $\\mathrm{PAC}$s are considered, hence each antenna can radiate at most $P_{tot}\/N_t $~[Watts]. The accuracy of the approximate solutions for both problems, {given by comparing the actual solution to the relaxed upper bound \\cite{Sidiropoulos2006,Karipidis2008},} is clear across a wide range of $\\mathrm{SNR}$. Nevertheless, the accuracy due to the $\\mathrm{PAC }$s is slightly reduced. 
This accuracy degradation is intuitively justified. A Gaussian randomization instance is less likely to approach the optimal point when the number of constraints is increased while the same number of Gaussian randomizations are performed ($N_\\mathrm{rand} = 100$).\n{Towards quantifying the gains of the proposed solution, the performance of the $\\mathrm{SPC }$ solution re-scaled to respect the $\\mathrm{PAC }$s is also included in Fig. 1. Re-scaling is achieved by multiplying each line of the precoding matrix with the square root of the inverse level of power over satisfaction of the corresponding antenna. In Fig. 1 it is clear that more than 1 dB of gain can be obtained by the proposed method over the suboptimal re-scaling approach.}\n\nA significant issue for the $\\mathrm{SDR}$ techniques in multicast applications is the tightness of the approximate solution versus an increasing number of receivers per multicast. In the extreme case of one user per group, it was proven in\\cite{Bengtsson2001} that the relaxation provides an optimal solution. Thus the solution is no longer approximate but exact. However, the increasing number of users per group degrades the solution, as depicted in Fig. \\ref{fig: users} for both problems. It is especially noticed that the $\\mathrm{PAC}$ system suffers more than the $\\mathrm{SPC}$ of \\cite{Karipidis2008} as the number of users per multicast group increases. An attempt to solve this inaccuracy, but only under sum power constraints, is presented in \\cite{Pesavento2012b}.\n\\begin{figure}[h]\n\\centering\n \\includegraphics[width=0.8\\columnwidth]{rate_Gauss_100_revised2}\\\\\n \\caption{Minimum user rate with $\\mathrm{SPC}$ and $\\mathrm{PAC}$.\n Results for $N_{u} = 4$ users, $N_t = 5$ antennas, $L= 2$ groups and $\\rho = 2$ users per group.\n}\\label{fig: power}\n \\end{figure}\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=0.8\\columnwidth]{PAC_user_Gauss_100_prop}\\\\\n\\caption{Minimum $\\mathrm{SINR}$ with $\\mathrm{SPC}$ and $\\mathrm{PAC}$ versus an increasing ratio of users per group $ \\rho = N_u \/ G $, for ${P}_{tot} = 10$~dBW.\n}\\label{fig: users}\n\\end{figure}\n\\subsection{Power Consumption in DAS }\\label{sec: power consumption}\nThe main difference between the $\\mathrm{SPC}$ and the $\\mathrm{PAC}$ optimization problems is the utilization of the available on board power in each system architecture.\nIn \\cite{Karipidis2008}, the sum power constraint is always satisfied with equality, since any remaining power budget can be equally distributed to the precoding vectors and the solution is further maximized.\nOn the contrary, the $\\mathrm{PAC }$ system includes $N_t$ constraints which are coupled via the precoders. According to the relation between $\\mathcal{F}$ and $\\mathcal{Q}$, i.e. \\eqref{eq: equivalence 1}, the ratio of transmitted power over the power constraint (i.e. $r$) is one. Since this ratio applies for at least one of the $N_t$ power constraints, if one is met with equality and the remaining $N_t - 1$ are not, then no more power can be allocated to the precoders.\nLet us assume a channel matrix with one compromised transmit antenna, i.e. 
$\\mathbf{H}=$\n\\begin{align}\n \\begin{bmatrix}\\notag\n2.94\\angle41^\\circ & 11\\angle{-25}^\\circ & 4.4\\angle{50}^\\circ &6.6\\angle{-4}^\\circ \\\\\n 13.2 \\angle{-150}^\\circ & 4.8\\angle{14}^\\circ & 15.2\\angle{-7}^\\circ &4.8\\angle{-37}^\\circ \\\\\n 12 \\angle{-155}^\\circ&1.5\\angle{163}^\\circ & 13.5\\angle{-105}^\\circ &3.9\\angle{-46}^\\circ \\\\\n 0.02 \\angle{-53}^\\circ & 0.03\\angle{-66}^\\circ & 0.03\\angle{120}^\\circ &0.03\\angle{-129}^\\circ \\\\\n 5.66 \\angle{137}^\\circ & 9.2\\angle{49}^\\circ & 13\\angle{-175}^\\circ & 2.45\\angle{126}^\\circ \\\\\n\\end{bmatrix} ^\\text T,\n \\end{align}\nwhere $4$ users, divided into $2$ groups, are served by $5$ antennas. One of the antennas (the $4$-th antenna) has severely degraded gains towards all users. This practical case can appear in a $\\mathrm{DAS }$ where the physical separation of the transmit antennas not only imposes per antenna constraints but can also justify highly unbalanced channel conditions around the environment the antennas.\nThe power utilization of the solution of the optimization for each of the two problems is defined as the total transmitted power over the total available power $P_{tot}$, that is $P_u =\\left(\\sum_{k=1}^ G \\mathbf w_k{^\\dag} \\mathbf w_k\\right)\/{P_{tot}}$,\n and is plotted versus an increasing power budget in Fig. \\ref{fig: power consumption}. It is clear that in the low power regime the available power is not fully utilized. As the available power increases, however, the power consumption of the $\\mathrm{PAC}$ increases. This result is in accordance with the optimality of equal power allocation in the high power regime and renders the $\\mathrm{PAC}$ formulation relevant for power limited systems.\nFurther insights for this $\\mathrm{PAC}$ system are given in Fig. \\ref{fig: power bar}, where the power utilization of each antenna is shown, for different total power budgets.\nInterpreting these results, it can be concluded that the $\\mathrm{PAC }$ problem is highly relevant for power-over-noise limited systems. Otherwise, in the high power regime, the solution of the $\\mathrm{SPC }$ problem with less constraints could be also used as an accurate approximation.\n \\begin{figure}[h]\n \\centering\n \\includegraphics[width=0.8\\columnwidth]{power_cons_Gauss_100_prop2}\\\\\n \\caption{Total power consumption of a $\\mathrm{PAC}$ system versus available power.}\\label{fig: power consumption}\n \\end{figure}\n \\begin{figure}[h]\n \\centering\n \\includegraphics[width=0.8\\columnwidth]{power_cons_bar}\\\\\n \\caption{Per-antenna consumption in a $\\mathrm{PAC}$ system versus transmit power.\n}\\label{fig: power bar}\n\\end{figure}\n\\subsection{Weighted Fairness Paradigm}\nTo the end of establishing the importance of the weighted optimization, a simple paradigm is elaborated herein. Under the practical assumption of a modulation constrained system, the weighted fair design can be exploited for rate allocation towards increasing the total system throughput.\n More specifically, the considered system employs adaptive modulation and allocates binary phase shift keying ($\\mathrm{BPSK}$) modulation if the minimum $\\mathrm{SINR_i}$ in the $k$-th group is less than the ratio for which the maximum modulation constrained spectral efficiency is achieved. This ratio is simply given by $ \\log_2 M$, where $M $ is the modulation order. Hence for $\\mathrm{BPSK}$, $\\gamma_2 = 0$~dB, and so forth. 
If for some group $k$, $\\min_i{\\mathrm{SINR_i}} \\geq \\gamma_{ \\mathrm{2}},\\ \\forall i \\in\\mathcal{G}_k, $ then quaternary phase shift keying ($\\mathrm{QPSK}$) is used for all users in the group. Forward error correction is not assumed. Let there be a two-antenna transmitter that serves four users partitioned into two groups. The considered channel matrix reads as\n \\begin{align}\\mathbf{H }=\n \\begin{bmatrix}\\notag\n 0.2 \\angle106^\\circ & 90\\angle{-69}^\\circ & 0.5\\angle{-99}^\\circ &0.5\\angle{61}^\\circ \\\\\n 0.8 \\angle111^\\circ & 120 \\angle{-112}^\\circ & 1\\angle{127}^\\circ & 1.5\\angle{49}^\\circ \\\\\n\\end{bmatrix} ^\\text T.\n \\end{align}\nThe attributes of the specific channel matrix depict one possible instance of the system where one user with a good channel state (i.e. user two) is in the same group with a jeopardized user, namely user one. On the other hand, the second group contains relatively balanced users in terms of channel conditions. For an un-weighted optimization (i.e. $\\mathbf g = [1\\ 1\\ 1\\ 1]$), the spectral efficiency of each user is shown in Fig. \\ref{fig: spf bar}. Bearing in mind that each user is constrained by the minimum group rate, the actual rate at which all users will receive data is 0.52 [bps\/Hz]. Both groups achieve the same spectral efficiency since the minimum $\\mathrm{SINR}$s and hence the minimum rates are balanced between the groups. Subsequently, a modulation constrained multicast transmitter will employ $\\mathrm{BPSK}$ for all users.\n By heuristically choosing the constraint vector to be $\\mathbf g = [1\\ 1\\ 5.3\\ 5.3]$, each user rate is modified. As depicted in Fig. \\ref{fig: spf bar}, both users in the second group are achieving adequate $\\mathrm{SINR}$ to support a higher order modulation. This gain is achieved at the expense of the rates of the users of the first group.\nFollowing this paradigm, the weight optimization can lead to an improved modulation assignment and thus higher throughput in practical systems. Hence, the weighted formulation offers substantial degrees of freedom to maximize the total throughput of a modulation constrained multicast system by properly allocating the rates amongst the groups.\n \\begin{figure}[h]\n \\centering\n \\includegraphics[width=0.8\\columnwidth]{SINR_bar_weights}\\\\\n \\caption{Modulation constrained paradigm.}\n\\label{fig: spf bar}\n\\end{figure}\n\\subsection{{Uniform Linear Arrays}}\nTowards investigating the sensitivity of the proposed algorithm with respect to the angular separation of co-group users, a uniform linear array ($\\mathrm{ULA}$) transmitter is considered. Assuming far-field, line-of-sight conditions, the user channels can be modeled using Vandermonde matrices. For this important special case, the $\\mathrm{SPC}$ multicast multigroup problem was reformulated into a convex optimization problem and solved in \\cite{Karipidis2006,Karipidis2007}. These results were motivated by the observation that in $\\mathrm{ULA}$ scenarios, the relaxation consistently yields rank one solutions. Thus, for such cases, the $\\mathrm{SDR}$ is essentially optimal \\cite{Sidiropoulos2006}. The fact that the $\\mathrm{SDR}$ of the sum power minimization problem is tight for Vandermonde channels was established in \\cite{Karipidis2007}.\nLet us consider a $\\mathrm{ULA}$ serving $4$ users allocated to $2$ distinct groups. In Fig. \\ref{fig: ULA pos}, its radiation pattern for co-group angular separation $\\theta_a = 35^\\circ$ is plotted. 
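For reference, Vandermonde-structured channels of this kind and the plotted radiation pattern can be generated along the following lines (a Python sketch assuming half-wavelength element spacing; the chosen user angles, the spacing and all names are illustrative assumptions rather than the exact simulation setup):

\begin{verbatim}
import numpy as np

def ula_steering(theta_deg, Nt, spacing=0.5):
    """Far-field ULA steering vector (Vandermonde structure), half-wavelength spacing."""
    n = np.arange(Nt)
    return np.exp(1j * 2 * np.pi * spacing * n * np.sin(np.deg2rad(theta_deg)))

# Illustrative scenario: 4 users in 2 groups with co-group angular separation theta_a.
Nt, theta_a = 4, 35.0
user_angles = [0.0, theta_a, 45.0, 45.0 + theta_a]          # groups {0,1} and {2,3}
H = np.column_stack([ula_steering(th, Nt) for th in user_angles])

def beampattern(W, look_angles):
    """Total radiated power towards each look angle for precoders W (Nt x G)."""
    A = np.column_stack([ula_steering(th, Nt) for th in look_angles])
    return np.sum(np.abs(W.conj().T @ A) ** 2, axis=0)
\end{verbatim}

Evaluating such a helper over a dense grid of look angles produces beampatterns of the type shown in Fig. \\ref{fig: ULA pos}.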
The symmetricity due to the inherent ambiguity of the $\\mathrm{ULA}$ is apparent. Clearly, the multigroup multicast beamforming optimizes the lobes to reduce interferences between the two groups. The $\\mathrm{SPC}$ solution, re-scaled to respect the $\\mathrm{PAC}$s are also included in Fig. \\ref{fig: ULA pos}. The superiority of the proposed solution is apparent.\n\nIn Fig. \\ref{fig: ULA sr}, the performance in terms of minimum user rate over the area with respect to an increasing angular separation is investigated. When co-group users are collocated, i.e. $\\theta_a = 0^\\circ$, the highest performance is attained. As the separation increases, the performance is reduced reaching the minimum when users from different groups are placed in the same position, i.e. $\\theta_a = 45^\\circ$. In Fig. \\ref{fig: ULA sr}, the tightness of the relaxation for the $\\mathrm{SPC}$ problem \\cite{Karipidis2007} is clear. However, the same does not apply for the proposed $\\mathrm{PAC}$. As co-group channels tend to become orthogonal, the approximation becomes less tight. Nevertheless, $N_\\mathrm{rand} = 200$ randomizations are sufficient to maintain the solution above the re-scaled $\\mathrm{SPC}$, as shown in Fig. \\ref{fig: ULA sr}. Consequently, the proposed solution outperforms a re-scaled to respect the per-antenna constraints, $\\mathrm{SPC}$ solution, over the span of the angular separations.\n\n\\textit{Remark 4:} The semidefinite relaxation of the per-antenna power minimization problem in $\\mathrm{ULA}$ transmitters is not always tight.\n\nFor every optimum high rank set of matrices $\\{\\mathbf { X}_k ^{opt}\\}_{k=1}^G$, there exists a set of rank one positive semidefinite matrices $\\{\\mathbf {\\bar X}_k^{opt}\\}_{k=1}^G$, i.e. $\\mathrm{rank}(\\mathbf {\\bar X}_k ^{opt})=1, \\forall k \\in \\{1\\dots G\\} $, which is equivalent with respect to the power received at each user, i.e $\\mathrm{Tr}(\\mathbf { X}_k ^{opt}\\mathbf { Q}_i)=\\mathrm{Tr}(\\mathbf {\\bar X}_k ^{opt}\\mathbf { Q}_i) , \\forall i \\in\\mathcal{G}_k, k,l\\in\\{1\\dots G\\}$. This result is based on the Riesz-F\\'ejer theorem on real valued complex trigonometric polynomials \\cite{Karipidis2007}. Therefore, the Vandermonde channels impose a specific structure to the $\\mathrm{SPC}$ solution that allows for a convex reformulation.\nThe difference in the case tackle herein lies in the $N_t$ $\\mathrm {PAC}$s, i.e. $\\left[\\sum_{k=1}^G \\mathbf X_k \\right]_{nn} \\leq P_n, \\forall n\\in \\{1\\dots N_{t}\\}$, in which the channel structure is not involved. Thus, a rank-1 matrix is equivalent in terms of per user received power \\cite{Karipidis2007} but not necessarily in terms of per-antenna consumed power, as shown herein.\n \\begin{figure}[h]\n \\centering\n \\includegraphics[width=0.8\\columnwidth]{ULA_tot}\\\\\n \\caption{ULA beampattern for $\\mathrm{PAC}$ and re-scaled $\\mathrm{SPC}$ solutions.}\n\\label{fig: ULA pos}\n\\end{figure}\n \\begin{figure}[h]\n \\centering\n \\includegraphics[width=0.8\\columnwidth]{ULA_final3}\\\\\n \\caption{ $\\mathrm{ULA}$ performance for increasing co-group user angular separation.\n\\label{fig: ULA sr}\n\\end{figure}\n\\subsection{{Robust Design under $\\mathrm{PAC}$s}}\\label{sec: robustness}\n When beamforming under uncertainty is considered, three different designs can be realized\\cite{Gershman2010}. 
Namely, the probabilistic design, where acceptable performance is guaranteed for some percentage of time, the expectation based design that requires knowledge of the second order channel statistics but cannot guarantee any outage performance and the worst-case design. The latter approach guarantees a minimum $\\mathrm{QoS}$ requirement for any error realization.\n\nFocusing on a worst-case design, let us assume an elliptically bounded error vector. In this context, the actual channel is given as $\\mathbf h_{i} = \\bar{ \\mathbf h}_{i} +\\mathbf e_{i}$ where $\\bar{\\mathbf h_{i}}$ is the channel available at the transmitter and $\\mathbf e_{i}$ is an error vector bounded by $\\mathbf e_{i}^\\dag \\mathbf C_i \\mathbf e_{i}\\leq 1$. The hermitian positive definite matrix $\\mathbf C_{i}$ defines the shape and size of the ellipsoidal bound. For $\\mathbf C_{i} = 1\/\\sigma_\\epsilon^2\\mathbf I_{N_t}$, then $||\\mathbf e_i||^2_2\\leq \\mathbf \\sigma_\\epsilon^2$ and the error remains in a spherical region of radius $ \\mathbf \\sigma_\\epsilon$ \\cite{Shenouda2007}. This spherical error model is mostly relevant when the feedback quantization error of a uniform quantizer at the receiver is considered \\cite{Jindal2004}.\nThe proposed design is formulated as \\begin{empheq}[box=\\fbox]{align}\n\\mathcal{F_{RB}:}&\\max_{\\ t, \\ \\{\\mathbf w_k \\}_{k=1}^{G}} t\\notag\\\\\n\\mbox{s. t. } &\\frac{1}{\\gamma_i}\\frac{|\\mathbf w_k^\\dag \\left(\\bar{ \\mathbf h}_{i} +\\mathbf e_{i}\\right)|^2}{\\sum_{l\\neq k }^G |\\mathbf w_l^\\dag\\left(\\bar{ \\mathbf h}_{i} +\\mathbf e_{i}\\right)|^2+\\sigma_i^2 }\\geq t, \\label{const: FRB SINR}\\\\\n&\\forall i \\in\\mathcal{G}_k, k,l\\in\\{1\\dots G\\},\\notag\\\\\n \\text{and to }& \\left[\\sum_{k=1}^G \\mathbf w_k\\mathbf w_k^\\dag \\right]_{nn} \\leq P_n, \\forall n\\in \\{1\\dots N_{t}\\}\\label{const: FRB PAC},\n \\end{empheq}\nand involves the channel imperfections only in the $\\mathrm{SINR}$ constraints. The novelty of $\\mathcal{F_{RB}}$ over existing robust multicast formulations lies in \\eqref{const: FRB PAC}.\nThe $\\mathrm{SINR}$ constraints of $\\mathcal{F_{RB}}$, i.e. \\eqref{const: FRB SINR}, are over all possible error realizations and cannot be handled. However, by applying the S-lemma \\cite{convex_book}, the error vector in \\eqref{const: FRB SINR} can be eliminated. This procedure is analytically described in \\cite{Chen2012}. Thus, $\\mathcal{F_{RB}}$ can be converted to a $\\mathrm{SDP}$ and solved efficiently using the methods described in Sec. \\ref{sec: problem}.\nThe performance gain of the proposed robust design for a $\\mathrm{ULA}$ with $N_t = 2$ transmit antennas, serving $N_{u}=6$ users is given in Fig. \\ref{fig: robust SR config}, versus an increasing error radius $ \\mathbf \\sigma_\\epsilon$, for different user per group configurations, $\\rho$. These results exhibit the significant gains of the proposed technique as the error and the group sizes increase.\n \\begin{figure}[h]\n \\centering\n \\includegraphics[width=0.8\\columnwidth]{Robust_prop2}\\\\\n \\caption{Robust performance for various user per group configurations.\n}\n\\label{fig: robust SR config}\n\\end{figure}\n\n { To establish the importance of the novel formulation, the performance in terms of minimum user rate over 1000 error realizations is given in Fig. 
\\ref{fig: robust SR}, versus a wide range of the error radius $ \\mathbf \\sigma_\\epsilon$ for the proposed $\\mathcal{F_{RB}}$ as well as the existing $\\mathrm{SPC}$ solutions re-scaled to respect the per-antenna constrains. For this figure, a $\\mathrm{ULA}$ with $N_{t} = 3 $ transmit antennas is considered, serving $N_{u}=6$ users partitioned into $L=2$ multicast groups. The co-group angular separation is $\\theta_a = 10^\\circ$ and the number of Gaussian randomizations chosen is $N_\\mathrm{rand} = 200$ and $N_\\mathrm{rand} = 1000$ for the high and low precision curves respectively. According to Fig. \\ref{fig: robust SR}, the proposed robust $\\mathrm{PAC}$ formulation (i.e. $\\mathcal{F_{RB}}$) outperforms existing solutions, in a per-antenna power constrained setting, for a wide range of channel error radius.\n However, as the error radius increases, a slight performance degradation is noted, especially for the low precision results.\nTo further investigate on this result, the following remark is given.}\n\n{\\textit {Remark 5}: The semidefinite relaxation of robust multigroup multicasting under $\\mathrm{PAC}$s yields non rank-1 solutions with higher probability as the channel errors increase. }\n\n{The accuracy of the minimum rate results of Fig. 9, is presented in Fig. 10. The accuracy is measured by the distance of the randomized solution from the upper bound given by the relaxation, following the standards of Sec. IV-A and \\cite{Sidiropoulos2006,Karipidis2008}. In Fig. 10, the results are also normalized by the value of the upper bound. According to these results, the probability for the $\\mathrm{SDR}$ to yield rank-1 solutions is reduced as the error radius increases, for all problems. The accuracy reduction of the $\\mathrm{SDR}$ technique as the channel errors increase was also reported via simulations in \\cite{Zheng2009_TSP}, but for unicast scenarios. What is more, $\\mathcal{F_{RB}}$ yields non rank-1 solutions as the errors increase, with higher probability than the $\\mathrm{SPC}$ problem. However, 1000 randomizations are sufficient to reduce the inaccuracy of all solutions to less than 7\\%, as illustrated in Fig. 10. }\n{\n It is therefore concluded that although the relaxation of the robust formulations does not consistently yield rank-1 solutions, especially for higher values of error radius, the Gaussian randomization can provide solutions with adequate accuracy. Finally, the proposed solutions surpass the performance of existing approaches, in practical per-antenna power constrained settings. }\n\n \\begin{figure}[h]\n \\centering\n \\includegraphics[width=1\\columnwidth]{Robust_round2_prop3}\\\\\n\\caption{{Minimum user rate versus increasing $\\mathrm{CSI}$ error.} }\n\\label{fig: robust SR}\n\\end{figure}\n \\begin{figure}[h]\n \\centering\n \\includegraphics[width=1\\columnwidth]{Robust_round2_accu_prop3}\\\\\n \\caption{{Accuracy of the semidefinite relaxation versus an increasing $\\mathrm{CSI}$ error}.\n}\n\\label{fig: robust accuracy}\n\\end{figure}\n\n\\section{Conclusions} \\label{sec: conclusions}\n\\textit{}In the present work, optimum linear precoding vectors are derived under per antenna power constraints, when independent sets of common information are transmitted by an antenna array to distinct co-channel sets of users. The novel weighted max--min fair multigroup multicast problem under $\\mathrm{PAC}$s is formulated. 
An approximate solution for this NP-hard problem is presented based on the well established methods of semidefinite relaxation.\nThe performance of the weighted max--min fair multigroup multicast optimization is examined under various system parameters and important insights on the system design are gained. Moreover, an application paradigm of the new system design is described while robust to imperfect $\\mathrm{CSI}$ extensions are given. \nConsequently, an important practical constraint towards the implementation of physical layer multigroup multicasting is alleviated.\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \\label{sec:intro}\n\nMotivated by the equations describing the steady motion of\ngeneralized Newtonian fluids we study the following fully\ninhomogeneous system \n\\begin{equation} \\label{eq:main_problem} \\begin{split}\n- \\div \\S(\\vec{Dv}) + \\vec{v} \\cdot \\nabla \\vec{v} + \\nabla \\pi &= \\vec{f}, \\\\\n\\div \\vec{v} &= g_1, \\\\\n\\vec{v}_{\\vert \\del \\Omega} &= \\vec{g_2}.\n\\end{split} \\end{equation} In this setting, $\\S$ is an extra stress\ntensor with $p$-$\\delta$-structure, $\\vec{v}$ is the velocity field\nwith its symmetric gradient $\\vec{Dv}$, $\\pi$ is the pressure,\n$\\vec{f}$ is the external force and $g_1$ and $\\vec{g_2}$ are data\non a sufficiently regular bounded domain\n$\\Omega \\subset \\Rd$ of dimension $d \\in \\{2, 3\\}$.\n\nSince \\eqref{eq:main_problem}$_1$ leads to a pseudomonotone and\ncoercive operator in the homogeneous case $g_1 = 0$,\n$\\vec{g}_2=\\vec{0}$ and $p >\\frac {3d}{d+2}$ (cf.~\\cite{lions-quel})\nand in the shear-thickening case $p>2$ (cf.~\\cite{r-mol-inhomo}), the\nexistence of weak solutions $(\\vec{v}, \\pi)$ to\n\\eqref{eq:main_problem} follows directly from the theory of\npseudomonotone operators in these cases. This approach can be adapted to the situation of homogeneous data and very low values of $p$: if $g_1 = 0$,\n$\\vec{g}_2=\\vec{0}$ and $p \\in (\\frac{2d}{d+2}, \\frac{3d}{d+2}]$, one can construct approximate solutions by the theory of pseudomonotone operators and prove their convergence with the Lipschitz truncation method (cf. \\cite{dms}, \\cite{fms2}). In the case $p=2$, we have to\ndeal with the fully inhomogeneous steady Navier-Stokes equations which\nare studied intensively (cf.~\\cite{Galdi}) and where the existence of\nsolutions is known under appropriate smallness conditions. In the\nshear-thinning, inhomogeneous case, i.e.~if $p \\in (1, 2)$ and the\ndata $g_1$, $\\vec{g_2}$ do not vanish, the coercivity of the elliptic\nterm is weaker than the growth of the convective term, i.e.~we are in\nthe supercritical\ncase.\nThis situation is treated in \\cite{mikelic}, \\cite{Sin} for\n$g_1=0$. In \\cite{Sin} even the case of electrorheological fluids is\ncovered. The result there is based on a nice smallness argument (\\cite[Lemma\n3.2]{Sin}), which is applied to estimate the convective term. \nSince we did not understand the application of this lemma in detail,\nwe give a different proof of local coercivity here. Our main result\nshows the existence of weak solutions of the fully inhomogeneous\nproblem \\eqref{eq:main_problem} in the shear-thinning case under appropriate smallness conditions\ninvolving higher regularity of the data.\n\nThe paper is organised as follows: by representing the inhomogeneous\ndata by a fixed function $\\vec{g}$ (Subsection \\ref{sub:div_eq}),\n\\eqref{eq:main_problem} turns into a homogeneous problem. 
We\ninvestigate the newly formed elliptic and convective terms in\nSubsections \\ref{sub:extra_stress} and \\ref{sub:conv_term}. Then we\ndeduce properties and local coercivity of the whole system and prove\nexistence of solutions (Subsection \\ref{sub:existence}). In the case\n$p \\in (\\frac{2d}{d+2}, \\tfrac{3d}{d+2})$ we use the Lipschitz\ntruncation method in order to establish convergence of approximate\nsolutions. This step is presented as an abstract statement, Theorem~\\ref{thm:ident_limits}, which should also fit more general situations.\nIn contrast to \\cite{Sin}, we had to require additional regularity of\nthe data in our proof of local coercivity. We discuss this issue in\nSubsection \\ref{sub:less_regularity} and prove that the additional\nregularity assumption is necessary in the framework of pseudomonotone\noperator theory.\n\nThe results presented here are based on the thesis \\cite{MA} of the\nfirst author.\n\n\\section{Preliminaries}\n\n\\subsection{Notation} \\label{sub:notation} We work on a bounded\nLipschitz domain $\\Omega \\subset \\Rd$, $d \\in \\{2, 3\\}$, which\npossesses an exterior normal $\\vec{\\nu}$. Points and scalar-valued\nquantities are written in normal letters whereas vector- and matrix-valued functions, variables and operators are denoted in bold\nletters. The space of symmetric square matrices is denoted by\n$\\R_\\textit{sym}^{d\\times d}$.\n\nWe use standard Lebesgue measure and integration theory. For a ball $B$, we\ndenote the ball with the same center and twice the radius\nby~$2B$. The characteristic function of a set $S \\subset \\Rd$ is\ndenoted by $\\chi_S$.\n\nWe use standard notation for Lebesgue and Sobolev spaces. Due to\n\\cite{Gagliardo}, there exists a well-defined, surjective trace\noperator $W^{1,p}(\\Omega) \\to W^{1-\\frac{1}{p}, p}(\\del \\Omega)$ that\nassigns boundary values to a Sobolev function. We denote by\n$L_0^p(\\Omega)$ the subspace of $L^p(\\Omega)$ of functions with mean\nvalue zero and by $V_p$ the subspace of $ W_0^{1,p}(\\Omega)$ of vector\nfields with zero boundary values and zero divergence. For a\nvector-valued function $\\vec{v} \\in W^{1,p}(\\Omega)$, the definition\nof the (weak) gradient field follows the convention\n$(\\nabla \\vec{v})_{ij} = \\del_j v_i$ and the symmetric gradient is\ndefined as\n$\\vec{Dv} := \\frac{1}{2} (\\nabla\\vec{v}+\\nabla\\vec{v}^\\top)$. On\n$W_0^{1,p}(\\Omega)$ and on $V_p$, we may work with the symmetric\ngradient norm $\\norm{\\vec{D} \\cdot}_p$, thanks to Poincar\\'e's and\nKorn's inequalities.\n\nThe dual of a Banach space $X$ is denoted by $X\\d\\!$ and\n$\\fp{\\cdot}{\\cdot}$ denotes the canonical dual pairing. For an\nexponent $p \\in [ 1, \\infty ]$, we define its conjugate exponent\n$p' \\in [ 1, \\infty ]$ via $\\frac{1}{p} + \\frac{1}{p'} = 1$ and use\nthe duality $L^{p'}(\\Omega) = L^p(\\Omega)\\d$ for $p <\n\\infty$. 
Finally, we define the critical Sobolev exponent\n$p\\d := \\frac{pd}{d-p} \\in (p, \\infty)$ for~$p < d$.\n\n\\subsection{The divergence equation} \\label{sub:div_eq} In order to\nfulfil the boundary and divergence conditions in\n\\eqref{eq:main_problem}, we follow the usual ansatz\n$\\vec{v} = \\vec{u} + \\vec{g}$, where $\\vec{u} \\in V_p$ and\n$\\vec{g} \\in W^{1,p}(\\Omega)$ fulfils the boundary and divergence\ndata, i.e.~the vector field $\\vec{g} \\in W^{1,p}(\\Omega)$ solves\n\\begin{equation} \\label{eq:g_problem}\n\\begin{split}\n\\div \\vec{g} &= g_1, \\\\\n\\vec{g}_{\\vert \\del \\Omega} &= \\vec{g_2}.\n\\end{split}\n\\end{equation}\nFor the corresponding homogeneous system, we have the fundamental\nresult due to Bogovski\\u{\\i} (cf.~\\cite{bo1}, \\cite{bo2}, \\cite{Galdi}):\n\\begin{thm}[Bogovski\\u{\\i} operator]\\label{thm:bogovski}\n\tLet $\\Omega \\subset \\Rd$ be a bounded Lipschitz domain with\n\t$d \\geq 2$ and $p \\in (1,\\infty)$. Then there exists a linear and\n\tbounded operator $\\Bog \\colon L_0^p(\\Omega) \\to W_0^{1,p}(\\Omega)$\n\tand a constant $c_{Bog} = c(\\Omega, p)$ such that\n\t\\begin{align*}\n\t\t\\div \\Bog f &= f, \\\\\n\t\t\\norm{\\Bog f}_{1,p} &\\leq c_{Bog} \\norm{f}_p\n\t\\end{align*}\n\tfor all $f \\in L_0^p(\\Omega)$. \n\\end{thm}\n\nFor the inhomogeneous system, we combine Bogovski\\u{\\i}'s Theorem and\nthe fact that $ W^{1-\\frac{1}{p}, p}(\\del \\Omega)$ is precisely the\nspace of boundary values of $W^{1,p}(\\Omega)$-functions:\n\\begin{lem}[The inhomogeneous divergence equation] \\label{lem:div_eq}\n\tLet $\\Omega \\subset \\Rd$ be a bounded Lipschitz domain with\n\t$d \\geq 2$ and $p \\in (1, \\infty)$. Suppose $g_1 \\in L^p(\\Omega)$\n\tand $\\vec{g_2} \\in W^{1-\\frac{1}{p}, p}(\\del \\Omega)$ satisfy\n\t$\\int_{\\Omega} g_1\\, dx = \\int_{\\del \\Omega} \\vec{g_2} \\cdot\n\t\\vec{\\nu}\\, do$.\n\tThen there exists a solution $\\vec{g} \\in W^{1,p}(\\Omega)$ of\n\tproblem \\eqref{eq:g_problem} that satisfies\n\t\\begin{equation*}\n\t\t\\norm{\\vec{g}}_{1,p} \n\t\t\\leq c_{\\textit{lift}} \\left( 1 + c_{Bog} \\right) \\norm{\\vec{g_2}}_{1-\\frac{1}{p}, p} \n\t\t+ c_{Bog} \\norm{g_1}_p\n\t\\end{equation*}\n\twith constants $c_\\text{lift}$ and $c_\\text{Bog}$ from the trace\n\tlifting and the Bogovski\\u{\\i} operator.\n\\end{lem}\n\\begin{proof}\n\tDue to \\cite{Gagliardo}, there exists a trace lifting\n\t$\\vec{\\hat{g}} \\in W^{1,p}(\\Omega)$ of the boundary values\n\t$\\vec{g_2}$. By integration by parts, we see that the function\n\t$g_1 - \\div \\vec{\\hat{g}}$ has mean value zero. Thus, we may apply the\n\tBogovski\\u{\\i} operator and directly obtain that\n\t$\\vec{g} := \\vec{\\hat{g}} + \\Bog(g_1 - \\div \\vec{\\hat{g}}) \\in\n\tW^{1,p}(\\Omega)$ solves \\eqref{eq:g_problem}. The estimate of\n\t$\\vec{g}$ follows from the boundedness of the trace lifting and the\n\tBogovski\\u{\\i} operator.\n\\end{proof}\n\n\\subsection{Local coercivity}\nWe will work with the following notion of local coercivity: \n\\begin{defi}[local coercivity]\\label{defi:loc_coercive}\n\tLet $X$ be a Banach space. 
An operator $A \\colon X \\to X\\d$ is\n\tcalled \\emph{locally coercive with radius} $R$ if there exists a\n\tpositive real number $R$ such that\n\t\\begin{equation*}\n\t\t\\fp{Ax}{x} \\geq 0\n\t\\end{equation*}\n\tholds for all $x \\in X$ with $\\norm{x}_X = R$.\n\\end{defi}\n\nLocal coercivity is precisely the condition that allows to apply\nBrouwer's fixed point theorem in order to obtain approximate solutions\nin the proof of Br\\'ezis' theorem about pseudomonotone operators\n\\cite[Thm.~27.A]{Zeidler2B}. Therefore, we get a generalized version\nof Br\\'ezis' theorem that can be proved along the lines of the\nstandard version. It can also be regarded as a special case of the\nexistence theorem of Hess and Kato \\cite[Thm.~27.B]{Zeidler2B}.\n\n\\begin{thm}[Existence theorem for pseudomonotone operators]\\label{thm:brezis}\n\tLet $X$ be a reflexive and separable Banach space and\n\t$A \\colon X \\to X^\\ast$ be a pseudomonotone, demicontinuous and\n\tbounded operator that is locally coercive with radius $R$. Then\n\tthere exists a solution $u \\in X$ of the problem\n\t\\begin{equation*}\n\t\tAu = 0\n\t\\end{equation*}\n\tthat satisfies $\\norm{u}_X \\leq R$.\n\\end{thm}\n\n\\subsection{The extra stress tensor and its induced\n\toperator} \\label{sub:extra_stress}\n\nThe stress tensor describes the mechanical properties \nof the fluid in dependence on the strain rate $\\vec{Dv}$. In Newtonian fluid\ndynamics, the viscosity is a constant $\\kappa \\in \\R$ which induces\nthe linear operator $- \\div \\S(\\vec{Dv}) = - \\kappa \\Delta \\vec{v}$\ndescribing the viscous part of the stress tensor. The general\nsituation of non-Newtonian fluids can be modeled in various ways\n(cf.~\\cite{Saramito}, \\cite{BoyerFabrie}). Here, we consider the\nclass of fluids with extra stress tensor having\n$p$-$\\delta$-structure. This class includes and generalizes power law fluids,\nwhere the constitutive relation is given by \n\\begin{equation*}\n\t\\S(\\vec{Dv}) = \\mu_0 \\vec{Dv} + \\mu (\\delta + \\abs{\\vec{Dv}})^{p-2} \\vec{Dv}\n\\end{equation*}\nwith material constants $p \\in (1, \\infty)$,\n$\\mu_0, \\mu, \\delta \\geq 0$ (cf.~\\cite{Cetraro}).\n\n\\begin{defi}[extra stress tensor] \\label{defi:stress_tensor} An operator\n\t$\\S \\colon \\mathbb{R}_\\textrm{sym}^{d \\times d} \\to\n\t\\mathbb{R}_{\\textrm{sym}}^{d \\times d}$ is called an \\emph{extra\n\t\tstress tensor} with $p$-$\\delta$-structure if it is continuous,\n\tsatisfies $\\S(\\vec{0}) = \\vec{0}$ and if there exist constants\n\t$p \\in (1, \\infty)$, $\\delta \\geq 0$ and $C_1(\\S)$, $C_2(\\S) > 0$\n\tsuch that \n\t\\begin{equation} \\begin{split} \\label{eq:pdelta_growth1}\n\t(\\S(\\vec{A}) - \\S(\\vec{B}))\\cdot (\\vec{A}-\\vec{B}) &\\geq C_1(\\S)\n\t\\, (\\delta + \\abs{\\vec{B}} + \\abs{\\vec{A}-\\vec{B}})^{p-2}\n\t\\abs{\\vec{A}-\\vec{B}}^2,\n\t\\\\ \n\t\\abs{\\S(\\vec{A}) - \\S(\\vec{B})} &\\leq C_2(\\S) \\, (\\delta +\n\t\\abs{\\vec{B}} + \\abs{\\vec{A}-\\vec{B}})^{p-2}\n\t\\abs{\\vec{A}-\\vec{B}}\n\t\\end{split}\\end{equation}\n\tholds for all $\\vec{A}, \\vec{B} \\in \\mathbb{R}_\\text{sym}^{d \\times\n\t\td}$. The constants $C_1(\\S), C_2(\\S)$ and $p$ are called the\n\tcharacteristics of $\\S$. \n\\end{defi}\n\n\n\\begin{lem} [{\\cite{Cetraro}}] \\label{lem:pdelta_growth}\n\tLet $\\S$ be an extra stress tensor with $p$-$\\delta$-structure. 
Then, it holds\n\t\\begin{equation*}\n\t\t\\fp{\\S(\\vec{Dv}) - \\S(\\vec{Dw})}{\\vec{Dv}-\\vec{Dw}}\n\t\t\\geq C_3(\\S) \\int_{\\Omega} \\int_0^{\\abs{\\vec{Dv}-\\vec{Dw}}} (\\abs{\\vec{Dw}} + \\delta + s)^{p-2}s \\, ds\\, dx\n\t\\end{equation*}\n\tfor $\\vec{v}, \\vec{w} \\in W^{1,p}(\\Omega)$ with a constant $C_3(\\S)$\n\tthat only depends on the characteristics of $\\S$.\n\\end{lem}\n\n\\begin{comment}\n\\begin{defi}[extra stress tensor]\\emph{\\cite{Cetraro}}\nA map $\\S \\colon \\mathbb{R}^{d \\times d} \\to \\mathbb{R}_{\\textrm{sym}}^{d \\times d}$ is called an \\emph{extra stress tensor} if it is continuous on $\\mathbb{R}^{d \\times d}$, differentiable on $\\mathbb{R}^{d \\times d} \\setminus \\{ \\vec{0} \\}$, satisfies $\\S(\\vec{0}) = \\vec{0}$ and if it depends only on the symmetric part of the argument, i.e., $\\S(\\vec{A}) = \\S(\\tfrac{1}{2}(\\vec{A}+\\vec{A}^\\top))$.\n\nAn extra stress tensor $\\S$ has $p$-$\\delta$-structure if there exist constants $p \\in (1, \\infty)$, $\\delta \\geq 0$ and $C_1, C_2 > 0$ such that\n\\begin{align}\n\\sum_{i,j,k,l = 1}^{d} \\del_{kl} \\S_{ij}(\\vec{A}) \\vec{B}_{ij} \\vec{B}_{kl} &\\geq C_1 (\\delta + \\abs{\\vec{A}})^{p-2} \\abs{\\vec{B}}^2, \\label{eq:pdelta_cond1} \\\\\n\\abs{\\del_{kl} \\S_{ij} (\\vec{A})} &\\leq C_2 (\\delta + \\abs{\\vec{A}})^{p-2} \\label{eq:pdelta_cond2}\n\\end{align}\nholds for all $\\vec{A}, \\vec{B} \\in \\mathbb{R}_\\text{sym}^{d \\times d}$ with $\\vec{A} \\neq \\vec{0}$ and all $i,j,k,l \\in \\{1, ..., d\\}$.\n\nThe constants $C_1, C_2$ and $p$ are called the characteristics of $\\S$.\n\\end{defi}\n\nThe following Lemma shows how the growth behavior of the extra stress\ntensor $\\S$ is determined by the $p$-$\\delta$-structure:\n\\begin{lem} \\emph{\\cite{Cetraro}}\nLet $\\S$ be an extra stress tensor with\n$p$-$\\delta$-structure. Then, we have the following relations, where\nthe constants only depend on the characteristics of $\\S$:\n\\begin{enumerate}\n\\item For all\n$\\vec{A}, \\vec{B} \\in \\mathbb{R}_{\\textrm{sym}}^{d \\times d}$, we\nhave\n\\begin{equation}\n\\begin{split}\n\\abs{\\S(\\vec{A}) - \\S(\\vec{B})} &\\sim (\\delta + \\abs{\\vec{B}}\n+ \\abs{\\vec{A}-\\vec{B}})^{p-2} \\abs{\\vec{A}-\\vec{B}}, \\\\\n(\\S(\\vec{A}) - \\S(\\vec{B}))\\cdot (\\vec{A}-\\vec{B}) &\\sim\n(\\delta + \\abs{\\vec{B}} + \\abs{\\vec{A}-\\vec{B}})^{p-2}\n\\abs{\\vec{A}-\\vec{B}}^2.\n\\end{split}\n\\end{equation}\n\\item For $\\vec{v}, \\vec{w} \\in W^{1,p}(\\Omega)$, we have\n\\begin{equation*}\n\\fp{\\S(\\vec{Dv}) - \\S(\\vec{Dw})}{\\vec{Dv}-\\vec{Dw}}\n\\sim \\int_{\\Omega} \\int_0^{\\abs{\\vec{Dv}-\\vec{Dw}}} (\\abs{\\vec{Dw}} + \\delta + s)^{p-2}s \\, ds\\, dx.\n\\end{equation*}\n\\end{enumerate}\n\\end{lem}\n\\end{comment}\n\nSince we represented the inhomogeneous data in \\eqref{eq:main_problem}\nby a fixed function $\\vec{g}$ and since we want to solve\n\\eqref{eq:main_problem} by the ansatz $\\vec{v}=\\vec{u}+\\vec{g}$ with\n$\\vec{u} \\in V_p$, we shall work with a shifted version of the viscous\nstress tensor. 
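To make the shift explicit (on a purely formal level, before passing to the weak formulation), inserting the ansatz $\\vec{v}=\\vec{u}+\\vec{g}$ into \\eqref{eq:main_problem}$_1$ yields\n\\begin{equation*}\n- \\div \\S(\\vec{Du}+\\vec{Dg}) + (\\vec{u}+\\vec{g}) \\cdot \\nabla (\\vec{u}+\\vec{g}) + \\nabla \\pi = \\vec{f},\n\\end{equation*}\nso that the viscous part acts on the shifted symmetric gradient $\\vec{Du}+\\vec{Dg}$.\n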
Therefore, we define the \\emph{induced} operator\n$\\vec{S}\\colon W_0^{1,p}(\\Omega) \\to W_0^{1,p}(\\Omega)\\d$ via\n\\begin{equation} \\label{eq:defi_S}\n\\fp{\\vec{S}(\\vec{v})}{\\vec{\\phi}}\n:= \\fp{\\S(\\vec{Dv}+\\vec{Dg})}{\\vec{D\\phi}}\n\\end{equation}\nfor $\\vec{v}, \\vec{\\phi} \\in W_0^{1,p}(\\Omega)$.\n\n\\begin{lem}[Properties of $\\vec{S}$] \\label{lem:properties_S} Let $\\S$\n\tbe an extra stress tensor with $p$-$\\delta$-structure,\n\t$p \\in (1, 2]$ and $\\vec{g} \\in W^{1,p}(\\Omega)$. Then the induced\n\toperator~$\\vec{S}$ defined in \\eqref{eq:defi_S} is well-defined,\n\tbounded and continuous.\n\\end{lem}\n\\begin{proof}\n\tUsing \\eqref{eq:pdelta_growth1}$_2$ with $\\vec{A} = \\vec{Dw}$ and\n\t$\\vec{B} = \\vec{0}$, we obtain\n\t\\begin{equation*}\n\t\t\\abs{\\S(\\vec{Dw})}^{p'} \n\t\t\\leq C_2(\\S)^{p'} \\left[ (\\abs{\\vec{Dw}}+\\delta)^{p-2} \\abs{\\vec{Dw}} \\right]^{p'}\n\t\t\\leq C_2(\\S)^{p'} (\\abs{\\vec{Dw}}+\\delta)^p\n\t\\end{equation*}\n\tand consequently \n\t\\begin{equation} \\label{eq:S_bdd}\n\t\\norm{\\S(\\vec{Dw})}_{p'} \\leq\n\tC_2(\\S) \\bignorm{\\abs{\\vec{Dw}}+\\delta}_p^{p-1}\n\t\\end{equation}\n\tfor any $\\vec{w} \\in W^{1,p}(\\Omega)$. From this, we deduce that\n\t$\\vec{S}\\colon W_0^{1,p}(\\Omega) \\to W_0^{1,p}(\\Omega)\\d$ is\n\twell-defined and bounded.\n\t\n\tIn order to prove continuity, let\n\t$\\vec{v^n} \\to \\vec{v} \\in W_0^{1,p}(\\Omega)$ be a convergent\n\tsequence. Then, by the H\\\"older inequality and by\n\t\\eqref{eq:pdelta_growth1}$_2$ we get \n\t\\begin{align*}\n\t\t\\Vert\\vec{S}(\\vec{v^n}) - \\vec{S}(\\vec{v})\n\t\t&\\Vert_{W_0^{1,p}(\\Omega)\\d}\n\t\t\\leq \\norm{\\S(\\vec{Dv^n}+\\vec{Dg}) - \\S(\\vec{Dv}+\\vec{Dg})}_{p'} \\\\\n\t\t&\\leq C_2(\\S) \\norm{\\left(\\delta + \\abs{\\vec{Dv}+\\vec{Dg}} +\n\t\t\t\\abs{\\vec{Dv^n}-\\vec{Dv}}\\right)^{p-2} \n\t\t\t\\abs{\\vec{Dv^n}-\\vec{Dv}}}_{p'} \\\\\n\t\t&\\leq C_2(\\S) \\norm{\\vec{Dv^n}-\\vec{Dv}}_p^\\frac{p}{p'}\n\t\t\\xrightarrow{n \\to \\infty} 0.\n\t\t\\\\[-12mm]\n\t\\end{align*}\n\\end{proof}\n\nOur next goal is to describe coercivity properties of the operator\n$\\vec{S}$. For the proof of a good lower bound of $\\vec{S}$, we\nprove an auxiliary algebraic result.\n\n\\begin{lem} \\label{lem:estim_young_fct}\n\tLet $a, t \\geq 0$ and $p \\in (1,2 ] $. Then it holds\n\t\\begin{equation*}\n\t\t\\int_{0}^{t} (a+s)^{p-2} s \\, ds \\geq \\tfrac{1}{p} t^p - ta^{p-1}.\n\t\\end{equation*}\n\\end{lem}\n\\begin{proof}\n\tThe statement becomes trivial if $a = 0$, so we may assume $a >\n\t0$. For all $s \\geq 0$, it holds\n\t$\\frac{a}{(a+s)^{2-p}} \\leq \\frac{a}{a^{2-p}} = a^{p-1}$. We\n\testimate\n\t\\begin{equation*}\n\t\t\\frac{s}{(a+s)^{2-p}} = (a+s)^{p-1} - \\frac{a}{(a+s)^{2-p}} \\geq\n\t\t(a+s)^{p-1} - a^{p-1} \\geq s^{p-1} - a^{p-1} \n\t\\end{equation*}\n\tand by integration we obtain the result.\n\\end{proof}\n\n\\begin{comment}\nFor $p \\in (1,2)$, the function $x \\mapsto x^{p-1}$ is not\nconvex. However, we can still get a Jensen-type inequality:\n\\marginpar{still needed?}%\n\\begin{lem} \\label{lem:rev_jensen}\nFor $x, y \\geq 0$ and $p \\in (1,2 ] $, it holds\n\\begin{equation*}\n(x+y)^{p-1} \\leq x^{p-1} + y^{p-1}.\n\\end{equation*}\n\\end{lem}\n\\begin{proof}\nThe assertion becomes obvious in the case $y = 0$. Since the claimed\ninequality is homogeneous, we may fix $y = 1$ without loss of\ngenerality. 
The auxiliary function\n$f \\colon \\R _{\\geq 0} \\to \\R _{\\geq 0}\\colon x \\mapsto x^{p-1} + 1\n- (x+1)^{p-1}$ is continuous on $\\R _{\\geq 0} $, satisfies\n$f(0) = 0$ and it is differentiable on $\\R _{ x > 0 }$ with a\nnon-negative derivative. Thus $f(x)$ is non-negative for all\n$x \\geq 0$, which yields the result.\n\\end{proof}\n\\end{comment}\n\nWith this tool, we are able to prove a lower bound for $\\vec{S}$: \n\\begin{lem}[Lower bound for $\\vec{S}$] \\label{lem:S_estimate} For a\n\tgiven extra stress tensor $\\S$ with $p$-$\\delta$-structure,\n\t$p \\in (1, 2 ] $, and a function $\\vec{g} \\in W^{1,p}(\\Omega)$, the\n\tinduced operator\n\t$\\vec{S}\\colon W_0^{1,p}(\\Omega) \\to W_0^{1,p}(\\Omega)\\d$, defined\n\tin \\eqref{eq:defi_S}, satisfies the lower bound \n\n\n\n\n\n\n\t\\begin{equation*}\n\t\t\\fp{\\vec{S}(\\vec{v})}{\\vec{v}} \n\t\t\\geq \\tfrac{C_3(\\S)}{p} \\norm{\\vec{Dv}}_p^p - \\big ( C_2(\\S) +\n\t\tC_3(\\S) \\big) \\bignorm{\\abs{\\vec{Dg}}+\\delta}_p^{p-1}\n\t\t\\norm{\\vec{Dv}}_p \n\t\\end{equation*}\n\tfor all $\\vec{v} \\in W_0^{1,p}(\\Omega)$.\n\n\\end{lem}\n\n\\begin{proof}\n\tWe apply Lemma \\ref{lem:pdelta_growth} with\n\t$\\vec{v} = \\vec{v}+\\vec{g}$ and $\\vec{w}=\\vec{g}$ and Lemma\n\t\\ref{lem:estim_young_fct} to estimate\n\t\\begin{align*}\n\t\t\\fp{\\S(\\vec{Dv}+\\vec{Dg}) - \\S(\\vec{Dg})}{\\vec{Dv}}\n\t\t&\\geq C_3(\\S) \\int_{\\Omega} \\int_{0}^{\\abs{\\vec{Dv}}}\n\t\t(\\abs{\\vec{Dg}} + \\delta + s)^{p-2} s \\, ds \\, dx\n\t\t\\\\\n\t\t&\\geq C_3(\\S) \\int_{\\Omega} \\tfrac{1}{p} \\abs{\\vec{Dv}}^p -\n\t\t\\abs{\\vec{Dv}} (\\abs{\\vec{Dg}} + \\delta)^{p-1} \\, dx.\n\t\\end{align*}\n\tThis, the H\\\"older inequality and \\eqref{eq:S_bdd} with\n\t$\\vec{w} = \\vec{g}$\n\t\\begin{align*}\n\t\t\\fp{\\vec{S}(\\vec{v})}{\\vec{v}}\n\t\t&= \\fp{\\S(\\vec{Dv}+\\vec{Dg}) - \\S(\\vec{Dg})}{\\vec{Dv}} +\n\t\t\\fp{\\S(\\vec{Dg})}{\\vec{Dv}} \\nonumber\n\t\t\\\\\n\t\t&\\geq \\tfrac{C_3(\\S)}{p} \\norm{\\vec{Dv}}_p^p \n\t\t- \\left[C_3(\\S)\\, \\big\\Vert\n\t\t\\left(\\abs{\\vec{Dg}}+\\delta\\right)^{p-1} \\big\\Vert_{p'} +\n\t\t\\norm{\\S(\\vec{Dg})}_{p'}\\right] \\norm{\\vec{Dv}}_p\n\t\t\\\\ \n\t\t&\\geq \\tfrac{C_3(\\S)}{p} \\norm{\\vec{Dv}}_p^p \n\t\t- \\big (C_2(\\S) + C_3(\\S)\\big )\n\t\t\\bignorm{\\abs{\\vec{Dg}}+\\delta}_p^{p-1}\n\t\n\t\t\\norm{\\vec{Dv}}_p , \n\t\\end{align*}\n\twhich is the assertion.\n\\end{proof}\n\nIn the treatment of the inhomogeneous problem \\eqref{eq:main_problem},\nwe will have to deal with the shifted extra stress tensor\n$\\vec{A} \\mapsto \\S(\\vec{A} + \\vec{G})$ for some constant symmetric\nmatrix $\\vec{G}$. In order to get a precise description of the growth\nbehavior of this mapping, we introduce the notion of locally uniform\nmonotonicity:\n\n\\begin{defi}[Locally uniform monotonicity] \\label{defi:uni_monotone}\n\tLet $X$ be a reflexive Banach space and $A\\colon X \\to X\\d$ an\n\toperator. 
The operator $A$ is called \\emph{locally uniformly\n\t\tmonotone on $X$}\n\tif for every $y\\in X$ there exists a strictly monotonically\n\tincreasing function $\\rho_y\\colon [0, \\infty) \\to [0, \\infty)$ with\n\t$\\rho_y(0) = 0$ such that for all $x\\in X$ holds\n\t\\begin{equation} \\label{eq:uni_monotone}\n\t\\fp{Ax-Ay}{x-y} \\geq \\rho_y(\\norm{x-y}_X)\\,.\n\t\\end{equation}\n\n\\end{defi}\n\nBy the lower bound \\eqref{eq:pdelta_growth1}$_1$, we obtain that\n(possibly shifted) extra stress tensors are locally uniformly\nmonotone.\n\n\\begin{lem} \\label{lem:uni_monotonicity}\n\tLet $\\S\\colon \\R_{\\text{sym}}^{d\\times d}\\to \\R_{\\text{sym}}^{d\\times\n\t\td}$ be an extra stress tensor with $p$-$\\delta$-structure and\n\t$\\vec{G} \\in \\R_{\\text{sym}}^{d\\times d}$ be a symmetric\n\tmatrix. Then the shifted extra stress tensor\n\t$\\vec{A} \\mapsto \\S(\\vec{A}+\\vec{G})$ is a locally uniformly\n\tmonotone operator on $\\R_{\\text{sym}}^{d\\times d}$.\n\\end{lem}\n\\begin{proof}\n\tBy \\eqref{eq:pdelta_growth1}$_1$, we obtain for any $\\vec{A},\n\t\\vec{B} \\in \\R_{\\text{sym}}^{d\\times d}$ \n\t\\begin{align*}\n\t\t&(\\S(\\vec{A}\\!+\\!\\vec{G}) \\! -\\! \\S(\\vec{B}\\!+\\!\\vec{G})) \\cdot\n\t\t(\\vec{A}\\!-\\!\\vec{B})\n\t\t\\geq C_1(\\S) \\big(\\delta \\!+\\! \\abs{\\vec{B}\\!+\\!\\vec{G}}\\! +\\!\n\t\t\\abs{\\vec{A}\\!-\\!\\vec{B}}\\big)^{p-2} \\abs{\\vec{A}\\!-\\!\\vec{B}}^2.\n\t\\end{align*}\n\tFor any $\\vec{B}$, the function\n\t$\\rho_{\\vec{B}}(t) := C_1(\\S) (\\delta + \\abs{\\vec{B}+\\vec{G}} +\n\tt)^{p-2} t^2$ is non-negative, satisfies $\\rho_{\\vec{B}}(0) = 0$,\n\tand it is strictly monotonically increasing since for its derivative\n\tit holds\n\t\\begin{equation*}\n\t\t\\rho'_{\\vec{B}}(t) \n\t\t= C_1(\\S) (\\delta + \\abs{\\vec{B}+\\vec{G}} + t)^{p-3} t (2\\delta +\n\t\t2\\abs{\\vec{B}+\\vec{G}} + pt) > 0 \n\t\\end{equation*}\n\tfor all $t > 0$. Therefore, it fulfils the requirements from\n\tDefinition \\ref{defi:uni_monotone}.\n\\end{proof}\n\n\\subsection{Properties of the convective term} \\label{sub:conv_term}\n\nSince we fixed a function $\\vec{g}$ that expresses the inhomogeneous\ndata in \\eqref{eq:main_problem}, we shall work with a \"shifted\"\nversion of the convective term\n$\\fp{(\\vec{u}+\\vec{g}) \\, \\cdot \\nabla (\\vec{u}+\\vec{g})}{\\vec{\\phi}}$\nthat is integrable and thus well-defined even for\n$p > \\tfrac{2d}{d+2}$ and sufficiently regular $\\vec{\\phi}$ and\n$\\vec{g}$. Therefore, we set\n\\begin{equation}\\label{eq:s}\ns=s(p) := \\max \\Big\\{ p, \\Big(\\frac{p\\d}{2}\\Big)' \\Big\\}\n= \\left\\{ \\begin{array}{cl}\np &\\text{if }p > \\frac{3d}{d+2},\n\\\\\n\\left(\\frac{p\\d}{2}\\right)' &\\text{if } p \\leq \\frac{3d}{d+2}\n\\end{array} \\right.\n\\end{equation}\nfor $p \\in \\big(\\tfrac{2d}{d+2}, 2\\big)$ and define the convective\nterm $\\vec{T}\\colon V_p \\to W_0^{1,s}(\\Omega)\\d$ via\n\\begin{equation} \\label{eq:defi_T}\n\\fp{\\vec{T}(\\vec{u})}{\\vec{\\phi}}\n:= - \\fp{(\\vec{u}+\\vec{g}) \\otimes (\\vec{u}+\\vec{g})}{\\vec{D\\phi}} -\n\\fp{(\\div \\vec{g})(\\vec{u}+\\vec{g})}{\\vec{\\phi}}\n\\end{equation}\nfor $\\vec{u} \\in V_p$ and $\\vec{\\phi} \\in W_0^{1,s}(\\Omega)$.\n\n\\begin{lem}[Properties of the convective term] \\label{lem:t2_str_contin}\n\tFor $p \\in \\big(\\frac{2d}{d+2}, 2\\big)$ let $s $ be defined in\n\t\\eqref{eq:s}\n\tand let $q \\in \\R$ satisfy $q \\geq s$ and $q > (\\frac{p\\d}{2})'$. 
Then, for any given function\n\t$\\vec{g} \\in W^{1,s}(\\Omega)$, the operator $\\vec{T}$ defined in \n\t\\eqref{eq:defi_T} is formally equivalent to\n\t$\\fp{(\\vec{u}+\\vec{g}) \\, \\cdot \\nabla\n\t\t(\\vec{u}+\\vec{g})}{\\vec{\\phi}}$. It is well-defined and bounded\n\tfrom $V_p$ to $W_0^{1,s}(\\Omega)\\d$ and also from $V_p$ to\n\t$W_0^{1,q}(\\Omega)\\d$. The operator $\\vec{T}$ is continuous from $V_p$ to\n\t$W_0^{1,s}(\\Omega)\\d$ and strongly continuous from\n\t$V_p$ to $W_0^{1,q}(\\Omega)\\d$. It fulfils the estimate\n\t\\begin{align}\\label{eq:econv}\n\t\t\\begin{split}\n\t\t\t\\abs{\\fp{\\vec{T}(\\vec{u})}{\\vec{u}}} &\\leq c_\\text{Sob}\\,\n\t\t\tc_\\text{Korn}^2 \\left( \\norm{ \\vec{Dg}}_s + \\tfrac{1}{2}\n\t\t\t\\norm{\\div \\vec{g}}_s \\right) \\norm{\\vec{Du}}_p^2\n\t\t\t\\\\\n\t\t\t&\\quad + c_\\text{Sob}\\, \\big( \\norm{\\vec{g}}_{1,s}^2 +\n\t\t\tc_\\text{Korn} \\norm{\\div \\vec{g}}_s \\norm{\\vec{g}}_{1,s} \\big)\n\t\t\t\\norm{\\vec{Du}}_p\n\t\t\\end{split}\n\t\\end{align}\n\tfor all $\\vec{u} \\in V_q$, where\n\t$c_\\text{Sob}$ are Sobolev embedding constants and\n\t$c_\\text{Korn}$ is the constant in the Korn inequality for $\\Omega$.\n\\end{lem}\n\n\\begin{proof}\n\tThe formal equivalence follows from a straightforward computation\n\twith integration by parts. We abbreviate $g_1 := \\div \\vec{g} \\in\n\tL^s(\\Omega)$ and use the continuous Sobolev embeddings\n\t$W^{1,s}(\\Omega) \\injto W^{1,p}(\\Omega) \\injto\n\tL^{p\\d}(\\Omega)$. The definition of $s$ implies $\\frac{1}{p\\d} +\n\t\\frac{1}{p\\d}+ \\frac{1}{s} \\leq\n\t1$, so both well-definedness of $\\vec{T}(\\vec{u}) \\in\n\tW_0^{1,s}(\\Omega)\\d$ for $\\vec{u} \\in\n\tV_p$ and boundedness follow by the H\\\"older inequality.\n\t\n\tIn view of the continuous embedding\n\t$W_0^{1,s}(\\Omega)\\d \\injto W_0^{1,q}(\\Omega)\\d$, we immediately\n\tobtain well-definedness and boundedness if $\\vec{T}$ is considered\n\tas an operator from $V_p$ to $W_0^{1,q}(\\Omega)\\d$.\n\t\n\tSince $\\frac{1}{p\\d} + \\frac{1}{p\\d}+ \\frac{1}{q} < 1$, there is\n\tsome $\\tau < p\\d$ such that\n\t$\\frac{1}{\\tau} + \\frac{1}{p\\d}+ \\frac{1}{q} = 1$. Let\n\t$\\vec{u^n} \\weakto \\vec{u} \\in V_p$~be a weakly convergent\n\tsequence. 
The Sobolev embedding\n\t$W^{1,p}(\\Omega) \\injto\\injto L^{\\tau}(\\Omega)$ is compact, so\n\t$\\vec{u^n} \\to \\vec{u} \\in L^{\\tau}(\\Omega)$ converges strongly.\n\tThus, we estimate\n\t\\begin{align*}\n\t\t&\\sup_{\\norm{\\vec{D\\phi}}_q \\leq 1} \n\t\t\\abs{\\fp{(\\vec{u^n} +\\vec{g}) \\otimes (\\vec{u^n}+\\vec{g}) \n\t\t\t\t- (\\vec{u}+\\vec{g}) \\otimes (\\vec{u}+\\vec{g})}{\\vec{D\\phi}}} \n\t\t\\\\\n\t\t&\\quad = \\sup_{\\norm{\\vec{D\\phi}}_q \\leq 1} \\big \\vert \\langle\\vec{u^n} \\otimes (\\vec{u^n}-\\vec{u}) \n\t\t+ (\\vec{u^n}-\\vec{u}) \\otimes \\vec{u} \\\\\n\t\t&\\qquad\\qquad\\qquad \t+ \\vec{g} \\otimes (\\vec{u^n}-\\vec{u}) \n\t\t+ (\\vec{u^n}-\\vec{u}) \\otimes \\vec{g}, \\vec{D\\phi}\\rangle \\big \\vert \n\t\t\\\\\n\t\t&\\quad \\leq \\norm{\\vec{u^n}}_{p\\d} \\norm{\\vec{u^n}-\\vec{u}}_\\tau \n\t\t+ \\norm{\\vec{u^n}-\\vec{u}}_\\tau \\norm{\\vec{u}}_{p\\d}\n\t\t+ 2 \\norm{\\vec{g}}_{p\\d} \\norm{\\vec{u^n}-\\vec{u}}_\\tau\n\t\t\\xrightarrow{\\,n \\to \\infty\\,} 0.\n\t\\end{align*}\t\n\tSimilarly, we obtain\n\t\\begin{equation*}\n\t\t\\sup_{\\norm{\\vec{D\\phi}}_q \\leq 1} \\abs{\\fp{g_1(\\vec{u^n}-\\vec{u})}{\\vec{\\phi}}}\n\t\t\\leq C \\norm{g_1}_s \\norm{\\vec{u^n}-\\vec{u}}_\\tau\n\t\t\\xrightarrow{n \\to \\infty} 0.\n\t\\end{equation*}\n\tThus, we proved $\\vec{T}(\\vec{u^n}) \\to \\vec{T}(\\vec{u})$ in\n\t$W_0^{1,q}(\\Omega)\\d$, i.e.\n\t$\\vec{T}\\colon V_p \\to W_0^{1,q}(\\Omega)\\d$ is strongly continuous.\n\t\n\tAnalogously, we prove continuity for\n\t$\\vec{T} \\colon V_p \\to W_0^{1,s}(\\Omega)\\d$ using\n\t$\\vec{u^n} \\to \\vec{u} \\in V_p$ and the continuous embedding\n\t$W^{1,p}(\\Omega) \\injto L^{p\\d}(\\Omega)$.\n\t\n\tFor the bound \\eqref{eq:econv} of $\\vec{T}$, we use \n\t\\begin{equation} \\label{eq:t2_eq1}\n\t\\fp{\\vec{u}\\otimes\\vec{u}}{\\vec{Du}} = 0\n\t\\end{equation}\n\twhich follows by integration by parts, since $\\vec{u}$ has zero\n\tdivergence and zero boundary values. In the same way, we see\n\t\\begin{equation} \\label{eq:t2_eq2}\n\t\\fp{\\vec{u} \\otimes\n\t\t\\vec{g}}{\\nabla \\vec{u}} = - \\fp{\\vec{u} \\otimes\n\t\t\\vec{u}}{\\vec{Dg}}\n\t\\end{equation}\n\tand\n\t\\begin{equation} \\label{eq:t2_eq3}\n\t\\fp{\\vec{u} \\otimes\n\t\t\\vec{g}}{\\nabla \\vec{u}^T} = -\\tfrac{1}{2}\\, \\fp{g_1\n\t\t\\vec{u}}{\\vec{u}}.\n\t\\end{equation}\n\tUsing \\eqref{eq:t2_eq1}, \\eqref{eq:t2_eq2} and \\eqref{eq:t2_eq3}\n\tin the definition of $\\vec{T}$, we obtain the following expression\n\tfor the convective term:\n\t\\begin{align}\n\t\t\\langle \\vec{T} (\\vec{u}), \\vec{u}\\rangle\n\t\t&= \\fp{\\vec{u} \\otimes \\vec{u}}{\\vec{Du}} + \\fp{\\vec{u} \\otimes\n\t\t\t\\vec{g}}{\\nabla \\vec{u}} \n\t\t+ \\fp{\\vec{u} \\otimes \\vec{g}}{\\nabla \\vec{u}^T} + \\fp{\\vec{g}\n\t\t\t\\otimes \\vec{g}}{\\vec{Du}} \\nonumber\n\t\t\\\\\n\t\t&\\quad + \\fp{g_1 \\vec{u}}{\\vec{u}} + \\fp{g_1 \\vec{g}}{\\vec{u}}\n\t\t\\nonumber\n\t\t\\\\\n\t\t&= - \\fp{\\vec{u} \\otimes \\vec{u}}{\\vec{Dg}} + \\tfrac{1}{2}\\,\n\t\t\\fp{g_1 \\vec{u}}{\\vec{u}} + \\fp{\\vec{g} \\otimes\n\t\t\t\\vec{g}}{\\vec{Du}} + \\fp{g_1\\vec{g}}{\\vec{u}}. \\label{eq:t2_eq4} \n\t\\end{align}\n\tIn order to estimate this expression, we use the Sobolev embedding\n\t$W^{1,s}(\\Omega) \\injto L^{2p'}(\\Omega)$. In fact, in the case\n\t$s \\geq d$ this follows directly. If $\\frac{3d}{d+2} < p < 2$ and\n\t$s < d$, then $s\\d = p\\d > 2p'$, due to a straightforward\n\tcomputation. 
If $\\frac{2d}{d+1} < p \\leq \\frac{3d}{d+2}$ and\n\t$s < d$, we get\n\t$s\\d \\geq \\big(\\big(\\frac{p\\d}{2}\\big)'\\big)\\d = \\frac{pd}{pd-2d+p}\n\t\\geq 2p'$. Finally, if $\\frac{2d}{d+2} < p \\leq \\frac{2d}{d+1}$,\n\tthen it holds $s \\geq \\big(\\frac{p\\d}{2}\\big)' \\geq d$. Applying\n\tthe H\\\"older and the Korn inequality and the embeddings\n\t$W^{1,s}(\\Omega) \\injto L^{2p'}(\\Omega)$ and\n\t$W^{1,p}(\\Omega) \\injto L^{p\\d}(\\Omega)$ to \\eqref{eq:t2_eq4}, the \n\tclaimed estimate follows.\n\\end{proof}\n\n\\subsection{Lipschitz truncation} \\label{sub:lip_trunc}\nIn case of a small growth parameter $p$, this means\n$p \\in \\big( \\frac{2d}{d+2}, \\frac{3d}{d+2} \\big)$, a function\n$\\vec{v} \\in W^{1,p}(\\Omega)$ does not have enough integrability to be\nchosen as a test function in operators like\n$\\fp{\\vec{v}\\otimes \\vec{v}}{\\vec{D\\phi}}$. Hence, we use\nsufficiently smooth approximations of the test functions in the limit\nprocess of the existence proof, which are given by the Lipschitz\ntruncation method. The existence of Lipschitz truncations is\nguaranteed by the following result proved in \\cite{fms2}, \\cite{dms}, \\cite{Cetraro}:\n\\begin{thm}[Lipschitz truncation] \\label{thm:lip_trunc}\n\tLet $\\Omega$ be a bounded domain with Lipschitz continuous\n\tboundary, let $p \\in (1, \\infty)$ and let \n\t$(\\vec{v^n})_{n \\in \\N} \\subset W_0^{1, p}(\\Omega)$ be a sequence\n\tsuch that $\\vec{v^n} \\weakto \\vec{0}$ weakly.\n\t\n\tThen, for all $j, n \\in \\N$, there exists a function\n\t$\\vec{v_j^n} \\in W_0^{1, \\infty}(\\Omega)$ and a number\n\t$\\lambda_j^n \\in \\big[ 2^{2^j}, 2^{2^{j+1}} \\big]$ such that\n\t\\begin{align} \\label{eq:lip_trunc1} \\begin{split}\n\t\t\t\\lim\\limits_{n \\to \\infty} \\big( \\sup\\limits_{j \\in \\N}\n\t\t\t\\norm{\\vec{v_j^n}}_{\\infty} \\big) &= 0,\n\t\t\t\\\\ \n\t\t\t\\norm{\\nabla \\vec{v_j^n}}_{\\infty} &\\leq c \\, \\lambda_j^n,\n\t\t\t\\\\\n\t\t\t\\limsup_{n \\to \\infty} \\left( \\lambda_j^n \\right)^p \\abs{\\{\n\t\t\t\t\\vec{v_j^n} \\neq \\vec{v^n} \\}} &\\leq c \\, 2^{-j},\n\t\t\t\\\\ \n\t\t\t\\limsup_{n \\to \\infty} \\norm{\\nabla \\vec{v_j^n} \\, \\chi_{\\{\n\t\t\t\t\t\\vec{v_j^n} \\neq \\vec{v^n} \\}}}_p^p &\\leq c \\, 2^{-j}\n\t\\end{split} \\end{align}\n\tholds with a uniform constant $c = c(d, p, \\Omega)$. \n\t\n\tMoreover, for fixed $j \\in \\N$ and $r \\in [1, \\infty)$, we have\n\t\\begin{align} \\label{eq:lip_trunc2}\n\t\t\\begin{split}\n\t\t\t\\nabla \\vec{v_j^n} \\weakto \\vec{0} &\\quad \\text{ in }\n\t\t\tL^r(\\Omega),\n\t\t\t\\\\\n\t\t\t\\nabla \\vec{v_j^n} \\weakstarto \\vec{0} &\\quad \\text{ in }\n\t\t\tL^{\\infty}(\\Omega) \n\t\t\\end{split}\n\t\\end{align}\n\tas $n \\to \\infty$.\n\\end{thm}\n\nThe following lemma shows how Lipschitz truncation can be used to get\na connection between weak and almost everywhere convergence. Statement\nand proof are close to \\cite[Lemma 2.6]{dms}, only the\nassumptions on the operator $\\S$ have been reduced for the reasons\ndiscussed in the previous Subsection~\\ref{sub:extra_stress}.\n\n\\begin{lem}[Almost everywhere convergence for the Lipschitz truncation\n\tmethod] \\label{lem:ae_equality2} Let $\\Omega$ be a bounded domain,\n\t$p \\in (1, \\infty)$,\n\t$(\\vec{u^n})_{n \\in \\N} \\subset W^{1,p}(\\Omega)$ be a weakly\n\tconvergent sequence with limit $\\vec{u} \\in W^{1,p}(\\Omega)$. 
Let\n\t$\\A\\colon \\R_{\\text{sym}}^{d\\times d}\\to \\R_{\\text{sym}}^{d \\times\n\t\td}$ be a locally uniformly monotone operator on $\\R_{\\text{sym}}^{d\\times d}$ such that the induced\n\toperator $\\vec{w} \\mapsto \\A(\\vec{Dw})$ is well-defined and\n\tbounded in $W^{1,p}(\\Omega) \\to L^{p'}(\\Omega)$.\n\n\n\n\n\n\n\n\t\n\tNow let $B$ be a ball with $2B \\subset \\subset \\Omega$ and\n\t$\\xi \\in C_0^\\infty (\\Omega)$ be a cutoff function such that \n\t$\\chi_B \\leq \\xi \\leq \\chi_{2B}$. We set\n\t$\\vec{v^n} := \\xi (\\vec{u^n}- \\vec{u})$ and let $\\vec{v_j^n}$ be the\n\tLipschitz truncation of $\\vec{v^n}$ with respect to the domain $2B$\n\tas described in Theorem \\ref{thm:lip_trunc}. If we have\n\t\\begin{equation} \\label{eq:ae_equality_assum}\n\t\\lim_{n \\to \\infty} \\abs{\\fp{\\A(\\vec{Du^n}) -\n\t\t\t\\A(\\vec{Du})}{\\vec{Dv_j^n}}} \\leq \\delta_j \n\t\\end{equation}\n\tfor all $j \\in \\N$ and some sequence $(\\delta_j)_{j \\in \\N}$ with\n\t$\\lim_{j \\to \\infty} \\delta_j = 0$, then a subsequence of\n\t$\\vec{Du^n}$ converges to $\\vec{Du}$ almost everywhere in $B$.\n\\end{lem}\n\n\\begin{rem}\\label{rem:S}\n\tIn Lemma \\ref{lem:uni_monotonicity} and lemma \\ref{lem:properties_S}\n\twe have seen that for an extra stress tensor\n\t$\\S\\colon \\R_{\\text{sym}}^{d\\times d}\\to \\R_{\\text{sym}}^{d\\times d}$\n\twith $p$-$\\delta$-structure and a given vector field\n\t$\\vec {g} \\in W^{1,p}(\\Omega)$ the operator\n\t$\\A(\\vec{B}):=\\S(\\vec{B} + \\vec{Dg})$ fulfils the requirements in\n\tLemma~\\ref{lem:ae_equality2}.\n\\end{rem}\n\n\\begin{proof}[Proof of Lemma \\ref{lem:ae_equality2}]\n\tLet $\\theta \\in (0, 1)$. Making use of the properties of $\\A$, we\n\tobtain strong convergence\n\t$\\left[(\\A(\\vec{Du^n})-\\A(\\vec{Du}))\\cdot\n\t(\\vec{Du^n}-\\vec{Du})\\right] ^\\theta \\to 0$ in $L^1(B)$ as\n\t$n \\to \\infty$ along the lines of \\cite[Lemma 2.6]{dms}. We switch to a\n\tsubsequence that converges almost everywhere.\n\tBy the definition of locally uniform monotonicity, there exists a\n\tstrictly monotonically increasing function\n\t$\\rho_x\\colon [0, \\infty) \\to [0, \\infty)$ with\n\t\\begin{align*}\n\t\t&(\\A(\\vec{Du^n}(x))-\\A(\\vec{Du}(x)))\\cdot\n\t\t(\\vec{Du^n}(x)-\\vec{Du}(x))\n\t\t\\geq \\rho_x (\\abs{\\vec{Du^n}(x)-\\vec{Du}(x)}) \n\t\\end{align*}\n\tfor all $n\\in\\N$ and almost every $x\\in B$ ($\\rho_x$~depends on\n\t$\\vec{Du}(x)$). Utilizing the almost everywhere convergence of the\n\tleft-hand side and the non-negativity of the right-hand side, we\n\tobtain a subsequence that fulfils\n\t$\\rho_x (\\abs{\\vec{Du^n}(x)-\\vec{Du}(x)}) \\to 0$ almost\n\teverywhere. Thus, it holds $\\vec{Du^n}(x) \\to \\vec{Du}(x)$ as\n\t$n\\to\\infty$ for this subsequence and almost every~$x\\in B$.\t\n\\end{proof}\n\nBy applying a covering argument and taking the diagonal sequence we\nobtain a global version of Lemma \\ref{lem:ae_equality2}\n(cf.~\\cite[Cor. 3.32]{Cetraro}):\n\n\\begin{cor}\\label{cor:ae_equality3}\n\tAssume that the assumptions of Lemma \\ref{lem:ae_equality2} are fulfilled\n\tfor all balls $B$ with $2B \\subset \\subset \\Omega$ (with sequences\n\t$\\delta_j$ that may depend on the ball $B$). 
Then $\\vec{Du^n}$\n\tconverges to $\\vec{Du}$ almost everywhere on $\\Omega$ for a suitable\n\tsubsequence.\n\\end{cor}\n\nUsing the almost everywhere convergence established in Corollary\n\\ref{cor:ae_equality3}, we may prove a general statement about the\nlimit process with the Lipschitz truncation method in existence\nproofs:\n\n\\begin{thm}[Identification of limits using the Lipschitz truncation\n\tmethod] \\label{thm:ident_limits} Let $\\Omega \\subset \\Rd$ be a\n\tbounded domain, $p \\in (1, \\infty)$ and $\\vec{g}\\in W^{1,p}(\\Omega)$\n\tbe given. Let\n\t$\\A\\colon \\R_{\\text{sym}}^{d\\times d}\\to \\R_{\\text{sym}}^{d \\times\n\t\td}$ be a continuous, locally uniformly monotone operator on\n\t$\\R_{\\text{sym}}^{d\\times d}$ such that the induced operator\n\t$\\vec{w} \\mapsto \\A(\\vec{Dw})$ is well-defined and bounded\n\tin $W^{1,p}(\\Omega) \\to L^{p'}(\\Omega)$. Let there be an operator\n\t$\\vec{B} \\colon V_p \\to V_s\\d$ for some $s \\in [p, \\infty)$ and a\n\tspace $X$ such that $X \\injto V_p$ embeds continuously and such that\n\t$\\vec{B}$ is well-defined as an operator $X \\to X\\d$. Assume we\n\thave a sequence of operators $\\vec{A_n}\\colon X \\to X\\d$ and\n\tsolutions $\\vec{u^n} \\in X$ to\n\t\\begin{equation} \\label{eq:lem_ident_appr}\n\t\\fp{\\A(\\vec{Du^n})}{\\vec{D\\phi}} +\n\t\\fp{\\vec{B}(\\vec{u^n})}{\\vec{\\phi}} +\n\t\\fp{\\vec{A_n}(\\vec{u^n})}{\\vec{\\phi}} = 0\n\t\\end{equation}\n\twith test functions $\\vec{\\phi} \\in X\\!$.\n\t\n\tIn addition, assume that for some $r \\in (1, \\infty)$ the embedding\n\t$V_r \\injto X$ is continuous and dense, that $\\vec{B}$ is strongly\n\tcontinuous as an operator $V_p \\to W_0^{1,r}(\\Omega)\\d$ and that we have\n\tconvergences\n\t\\begin{align} \\label{eq:ident_assum}\n\t\t\\begin{split}\n\t\t\t\\vec{u^n} \\weakto \\vec{u} &\\quad \\text{weakly in } V_p,\\\\\n\t\t\t\\vec{A_n}(\\vec{u^n}) \\to \\vec{0} &\\quad \\text{strongly in } W_0^{1,r}(\\Omega)\\d\n\t\t\\end{split}\n\t\\end{align}\n\tas $n \\to \\infty$.\n\t\n\tThen $\\vec{u}$ is a solution of the limit equation\n\t\\begin{equation} \\label{eq:lem_ident_limit}\n\t\\fp{\\A(\\vec{Du})}{\\vec{D\\phi}} + \\fp{\\vec{B}(\\vec{u})}{\\vec{\\phi}} = 0\n\t\\end{equation}\n\tfor all $\\vec{\\phi} \\in V_s$.\n\\end{thm}\n\n\\begin{rem}\n\tThe operator $\\A$ represents a (possibly shifted) extra stress\n\ttensor (cf.~Remark \\ref{rem:S}) and\n\t$\\vec{B}$ may be chosen as the convective term. Typical choices for\n\tthe space $X$ are $X = V_q$ or $X = V_p \\cap L^q(\\Omega)$ with\n\tcoercive operators\n\t$\\fp{\\vec{A_n}(\\vec{v})}{\\vec{\\phi}} = \\fp{\\abs{ \\vec{Dv}}^{q-2}\n\t\t\\vec{Dv}}{\\vec{D\\phi}}$ and\n\t$\\fp{\\vec{A_n}(\\vec{v})}{\\vec{\\phi}} = \\fp{\\abs{\\vec{v}}^{p-2}\n\t\t\\vec{v}}{\\vec{\\phi}}$ respectively.\n\t\n\tThe inclusions $X\\injto V_p$, $V_s \\injto V_p$ and $V_r \\injto X$\n\tguarantee the well-definedness of $\\vec{A_n}$ and of the operator\n\twhich is induced by $\\A$.\n\\end{rem}\n\n\\begin{proof}[Proof of Theorem \\ref{thm:ident_limits}]\n\tThe proof of Theorem \\ref{thm:ident_limits} follows and generalizes\n\tthe procedure in \\cite{dms}, \\cite{Cetraro}. 
First, we check the assumptions of\n\tLemma \\ref{lem:ae_equality2}\/ Corollary \\ref{cor:ae_equality3} in\n\torder to obtain almost everywhere convergence\n\t$\\vec{Du^n} \\to \\vec{Du}$, then we use this to prove\n\t\\eqref{eq:lem_ident_limit}.\n\t\n\tAs in Lemma \\ref{lem:ae_equality2} we let $B$ be a ball with\n\t$2B \\subset \\subset \\Omega$ and $\\xi \\in C_0^\\infty (\\Omega)$ be a\n\tcutoff function such that $\\chi_B \\leq \\xi \\leq \\chi_{2B}$. We set\n\t$\\vec{v^n} := \\xi (\\vec{u^n}- \\vec{u})$ and let $\\vec{v_j^n}$ be the\n\tLipschitz truncation of $\\vec{v^n}$ with respect to the domain $2B$\n\tfrom Theorem \\ref{thm:lip_trunc}. Since the functions $\\vec{v_j^n}$\n\tare in general not divergence-free, we have to introduce correction\n\tterms in order to use them as test functions in\n\t\\eqref{eq:lem_ident_appr}. We use the Bogovski\\u{\\i} operator\n\t$\\Bog \\colon L_0^r(\\Omega) \\to W_0^{1,r}(\\Omega)$ and set\n\t\\begin{equation} \\label{eq:defi_eta}\n\t\\vec{\\psi_j^n} := \\Bog(\\div \\vec{v_j^n}) \\in W_0^{1,r}(\\Omega)\n\t\\quad\\text{and}\\quad \\vec{\\eta_j^n} := \\vec{v_j^n} -\n\t\\vec{\\psi_j^n} \\in V_r. \n\t\\end{equation}\n\tBy \\eqref{eq:lip_trunc2}$_1$, we get\n\t$\\nabla \\vec{v_j^n} \\weakto \\vec{0}$ in $L^r(\\Omega)$ for each\n\t$j \\in \\N$ as $n\\to\\infty$. Since both the divergence and the\n\tBogovski\\u{\\i} operator are linear and continuous, we get the\n\tconvergence\n\t\\begin{align} \\label{eq:conv_psi1}\n\t\t\\vec{\\psi_j^n} \\weakto \\vec{0} &\\quad \\text{ weakly in } W_0^{1,r}(\\Omega)\n\t\n\t\\end{align}\n\tas $n \\to \\infty$ for every $j \\in \\N$. By a well-known fact, we\n\tknow $\\nabla \\vec{v_j^n} = \\nabla \\vec{v^n}$ on the set\n\t$\\{\\vec{v_j^n} = \\vec{v^n}\\}$ (cf.~\\cite{Maly}). Thus, we obtain\n\t$\\div \\vec{v^n} = \\nabla \\xi \\cdot (\\vec{u^n}-\\vec{u})$ by the\n\tproduct rule and get\n\t$\\div\\vec{v_j^n} = \\chi_{\\{\\vec{v^n}\\neq\\vec{v_j^n} \\}} \\div\n\t\\vec{v_j^n} + \\chi_{\\{\\vec{v^n}=\\vec{v_j^n} \\}} \\nabla \\xi \\cdot\n\t(\\vec{u^n}-\\vec{u})$. Together with the continuity of the\n\tBogovski\\u{\\i} operator and the $W^{1, \\infty}(\\Omega)$-boundedness\n\tof the cutoff function~$\\xi$, this implies\n\t\\begin{equation} \\label{eq:estim_psi1}\n\t\\norm{\\vec{\\psi_j^n}}_{1,p}\n\t\\leq c \\norm{\\div \\vec{v_j^n}}_p \\leq c\n\t\\big\\Vert\\chi_{\\{\\vec{v^n}\\neq\\vec{v_j^n} \\}} \\nabla \\vec{v_j^n}\n\t\\big\\Vert_p + c\\,(\\xi) \\norm{\\vec{u^n}-\\vec{u}}_p\\!.\n\t\\end{equation}\n\tFurthermore, due to the assumption $\\vec{u^n} \\weakto \\vec{u}$ in\n\t$W_0^{1,p}(\\Omega)$ and the compact embedding\n\t$W_0^{1,p}(\\Omega) \\injto \\injto L^p(\\Omega)$, we have strong\n\tconvergence $\\vec{u^n} \\to \\vec{u}$ in $L^p(\\Omega)$. 
Applying\n\t\\eqref{eq:lip_trunc1}$_4$ and this strong convergence in\n\t\\eqref{eq:estim_psi1}, we obtain\n\t\\begin{equation} \\label{eq:estim_psi2}\n\t\\limsup_{n \\to \\infty} \\norm{\\vec{\\psi_j^n}}_{1,p} \\leq c\\, 2^{\\frac{-j}{p}}\n\t\\end{equation}\n\tfor all $j \\in \\N$.\n\t\n\tFrom \\eqref{eq:lip_trunc2}$_1$, \\eqref{eq:conv_psi1} and the compact\n\tembedding $W_0^{1,r}(\\Omega) \\injto \\injto L^r(\\Omega)$, we conclude\n\t\\begin{align} \\label{eq:conv_eta1}\n\t\t\\vec{\\eta_j^n} \\weakto \\vec{0}\n\t\t&\\quad \\text{ weakly in } W_0^{1,r}(\\Omega)\n\t\n\t\\end{align}\n\tfor all $j \\in \\N$ as $n \\to \\infty$.\n\t\n\tSince $\\vec{B}\\colon V_p \\to W_0^{1,r}(\\Omega)\\d$ is strongly\n\tcontinuous and $\\vec{u^n} \\weakto \\vec{u}$ in $V_p$, we obtain the\n\tconvergence $\\vec{B}(\\vec{u^n}) \\to \\vec{B}(\\vec{u})$ in\n\t$W_0^{1,r}(\\Omega)\\d$. This and \\eqref{eq:conv_eta1} imply\n\t\\begin{equation} \\label{eq:lip2}\n\t\\lim_{n \\to \\infty} \\fp{\\vec{B}(\\vec{u^n})}{\\vec{\\eta_j^n}} = 0.\n\t\\end{equation}\n\tSimilarly, we obtain\n\t\\begin{equation} \\label{eq:lip3}\n\t\\lim_{n \\to \\infty} \\fp{\\vec{A_n}(\\vec{u^n})}{\\vec{\\eta_j^n}} = 0\n\t\\end{equation}\n\tfrom \\eqref{eq:ident_assum}$_2$ and \\eqref{eq:conv_eta1}.\n\tFurthermore, \\eqref{eq:lip_trunc2}$_1$ implies\n\t$\\vec{v_j^n} \\weakto \\vec{0}$ in $W_0^{1,p}(\\Omega)$ and\n\t\\begin{equation} \\label{eq:lip4}\n\t\\lim_{n \\to \\infty} \\fp{\\A(\\vec{Du})}{\\vec{Dv_j^n}} \n\t= 0\n\t\\end{equation}\n\tfor all $j \\in \\N$.\n\t\n\tBy \\eqref{eq:defi_eta} and equation\n\t\\eqref{eq:lem_ident_appr} we have\n\t\\begin{align*}\n\t\t&\\langle \\A (\\vec{Du^n}) - \\A(\\vec{Du}), \\vec{Dv_j^n} \\rangle\n\t\t\\\\\n\t\t&= - \\fp{\\vec{B}(\\vec{u^n})}{\\vec{\\eta_j^n}} - \\fp{\\vec{A_n}(\\vec{u^n})}{\\vec{\\eta_j^n}} \n\t\t+ \\fp{\\A(\\vec{Du^n})}{\\vec{D\\psi_j^n}} - \\fp{\\A(\\vec{Du})}{\\vec{Dv_j^n}}.\n\t\\end{align*}\n\tWe use the convergences \\eqref{eq:lip2}, \\eqref{eq:lip3},\n\t\\eqref{eq:estim_psi2} and \\eqref{eq:lip4} in this identity and obtain\n\t\\begin{equation*}\n\t\t\\limsup_{n \\to \\infty} \\abs{\\fp{\\A(\\vec{Du^n}) -\n\t\t\t\t\\A(\\vec{Du})}{\\vec{Dv_j^n}}} \\leq c\\, 2^{\\frac{-j}{p}}\\!. \n\t\\end{equation*}\n\tSince $2^{\\frac{-j}{p}} \\to 0$ as $j \\to \\infty$, we may apply\n\tCorollary \\ref{cor:ae_equality3} and conclude\n\t$\\vec{Du^n} \\to \\vec{Du}$ almost everywhere in $\\Omega$ up to some\n\tsubsequence. By the continuity of $\\A$, it follows\n\t$\\A(\\vec{Du^n}) \\to \\A(\\vec{Du})$ almost everywhere in $\\Omega$.\n\t\n\tBy assumption, the mapping $\\vec{v} \\mapsto \\A(\\vec{Dv})$ defines a\n\tbounded operator \\linebreak$W^{1,p}(\\Omega) \\to L^{p'}(\\Omega)$,\n\tand thus the sequence \n\t$(\\A(\\vec{Du^n}))_{n \\in \\N}$ is bounded. We may extract a weakly convergent\n\tsubsequence $\\A(\\vec{Du^n}) \\weakto \\vec{\\chi}$\n\tin~$L^{p'}(\\Omega)$. 
The combination of almost everywhere\n\tconvergence $\\A(\\vec{Du^n}) \\to \\A(\\vec{Du})$ and weak convergence\n\t$\\A(\\vec{Du^n}) \\weakto \\vec{\\chi}$ (for some subsequences) implies\n\t$\\A(\\vec{Du}) = \\vec{\\chi}$ by a well-known convergence principle\n\t(cf.~\\cite{Gajewski}); in particular, it follows\n\t\\begin{equation} \\label{eq:conv_s}\n\t\\A(\\vec{Du^n}) \\weakto \\A(\\vec{Du}) \\quad \\text{weakly in } L^{p'}(\\Omega).\n\t\\end{equation}\n\tWe pass to the limit for $n \\to \\infty$ in\n\t\\eqref{eq:lem_ident_appr} and use \\eqref{eq:conv_s}, the strong\n\tcontinuity of $\\vec{B}$ and \\eqref{eq:ident_assum}$_2$ to obtain\n\t\\begin{align*}\n\t\t0 &= \\lim_{n \\to \\infty} \\fp{\\A(\\vec{Du^n})}{\\vec{D\\phi}} +\n\t\t\\fp{\\vec{B}(\\vec{u^n})}{\\vec{\\phi}} +\n\t\t\\fp{\\vec{A_n}(\\vec{u^n})}{\\vec{\\phi}}\n\t\t\\\\ \n\t\t&= \\fp{\\A(\\vec{Du})}{\\vec{D\\phi}} + \\fp{\\vec{B}(\\vec{u})}{\\vec{\\phi}}\n\t\\end{align*}\n\tfor $\\vec{\\phi} \\in V_p \\cap V_r \\cap X = V_r$ and therefore, by\n\tdensity, for all $\\vec{\\phi} \\in V_s$.\n\\end{proof}\n\n\\section{Existence of weak solutions} \\label{sec:existence}\n\n\\subsection{Smallness condition and main result}\nAs mentioned in the introduction, our ansatz for proving existence requires smallness of the boundary and the divergence data which is necessary for proving local coercivity. In order to formulate a precise smallness condition, we define the following dependent constants:\n\nFor a domain $\\Omega$, an extra stress tensor $\\S$ with\n$p$-$\\delta$-structure, $s = \\max \\big\\{ p, \\big (\\frac{p\\d}{2}\\big )' \\big\\}$, a functional $\\vec{f} \\in W_0^{1,p}(\\Omega)\\d$\nand a function $\\vec{g} \\in W^{1,s}(\\Omega)$ we define \n\\begin{equation} \\begin{split} \\label{eq:g123} G_1 &:= \\tfrac{1}{p} \\,\nC_3(\\S),\n\\\\\nG_2 &:= c_\\text{Sob} c_\\text{Korn}^2 \\left[ \\norm{\\vec{Dg}}_{s} +\n\\tfrac{1}{2} \\norm{\\div \\vec{g}}_s \\right],\n\\\\\nG_3 &:= \\left( C_2(\\S) + C_3(\\S) \\right)\n\\bignorm{\\abs{\\vec{Dg}}+\\delta}_p^{p-1} + c_\\text{Sob}\n\\norm{\\vec{g}}_{1,s}^2\n\\\\\n&\\qquad + c_\\text{Sob} c_\\text{Korn} \\norm{\\div \\vec{g}}_s\n\\norm{\\vec{g}}_{1,s} + c_\\text{Korn}\n\\norm{\\vec{f}}_{W_0^{1,p}(\\Omega)\\d}\n\\end{split} \\end{equation} with constants $c_\\text{Korn}$,\n$c_\\text{Sob}$, $C_2(\\S)$ and $C_3(\\S)$ that do only depend on\n$\\Omega$ and the characteristics of~$\\S$.\n\nWith these constants, we impose a smallness condition on the data\n$g_1$ and $\\vec{g_2}$:\n\n\\begin{assum} \\label{assum:smallness}\n\tWe assume that $g_1 \\in L^s(\\Omega)$ and\n\t$\\vec{g_2} \\in W^{s-\\frac{1}{s}, s}(\\del \\Omega)$ satisfy the\n\tcompatibility condition\n\t$\\int_{\\Omega} g_1\\,dx = \\int_{\\del \\Omega} \\vec{g_2} \\cdot\n\t\\vec{\\nu}\\, do$\n\tand that their norms are so small that a solution\n\t$\\vec{g} \\in W^{1,s}(\\Omega)$ of the corresponding inhomogeneous\n\tdivergence equation (see Lemma \\ref{lem:div_eq}) satisfies\n\t\\begin{equation} \\label{eq:g_smallness}\n\t(2-p)^{2-p} (p-1)^{p-1} G_1 \\geq G_2^{p-1}G_3^{2-p}\n\t\\end{equation}\n\tfor the constants $G_1, G_2, G_3$ from \\eqref{eq:g123}.\n\\end{assum}\n\nUnder that condition, we are able to prove the following existence result:\n\n\\begin{thm}[Existence] \\label{thm:main_thm} Let $\\Omega \\subset \\Rd$\n\tbe a bounded Lipschitz domain with $d \\in \\{2, 3\\}$. 
Let~$\\S$ be an\n\textra stress tensor with $p$-$\\delta$-structure,\n\t$p \\in \\big( \\frac{2d}{d+2}, 2 \\big)$,\n\t$s := \\max \\big\\{ p, \\big (\\frac{p\\d}{2}\\big )' \\big\\}$ and\n\t$\\vec{f} \\in W_0^{1,p}(\\Omega)\\d$. For any $g_1 \\in L^s(\\Omega)$ and\n\t$\\vec{g_2} \\in W^{s-\\frac{1}{s}, s}(\\del \\Omega)$ that satisfy\n\tAssumption \\ref{assum:smallness}, there exists a weak solution\n\t$(\\vec{v}, \\pi) \\in W^{1,p}(\\Omega) \\times L^{s'}(\\Omega)$\n\tof~\\eqref{eq:main_problem}.\n\\end{thm}\n\n\\subsection{Existence proof} \\label{sub:existence}\nTo get a formulation of \\eqref{eq:main_problem}$_1$, we use the\ndefinitions \\eqref{eq:defi_S} and \\eqref{eq:defi_T} of the operators\n$\\vec{S}$ and $\\vec{T}$ and define the \"full\" operator\n$\\vec{P} \\colon V_p \\to W_0^{1,s}(\\Omega)\\d$ via\n\\begin{equation} \\begin{split} \\label{eq:defi_P}\n\\fp{\\vec{P}(\\vec{v})}{\\vec{\\phi}} := \\,\n&\\fp{\\vec{S}(\\vec{v}) + \\vec{T}(\\vec{v}) - \\vec{f}}{\\vec{\\phi}} \\\\\n= \\,&\\fp{\\S(\\vec{Dv}+\\vec{Dg})}{\\vec{D\\phi}} - \\fp{(\\vec{v}+\\vec{g}) \\otimes (\\vec{v}+\\vec{g})}{\\vec{D \\phi}} \\\\\n&- \\fp{(\\div \\vec{g}) (\\vec{v}+\\vec{g})}{\\vec{\\phi}}\n- \\fp{\\vec{f}}{\\vec{\\phi}}\n\\end{split} \\end{equation}\nfor $\\vec{v} \\in V_p$ and $\\vec{\\phi} \\in W_0^{1,s}(\\Omega)$.\n\nWe collect our results on $\\vec{S}$ and $\\vec{T}$ to deduce properties\nof $\\vec{P}$: \n\\begin{cor} \\label{cor:t_estim}\n\tFor $p$, $q$ and $s$ as in Lemma\n\t\\ref{lem:t2_str_contin}, the operator $\\vec{P}$ defined in\n\t\\eqref{eq:defi_P} is well-defined, bounded and continuous on\n\t$V_p \\to W_0^{1,s}(\\Omega)\\d$. It is well-defined, bounded,\n\tcontinuous and pseudomonotone on $V_q \\to V_q\\d$ and it fulfils the\n\testimate\n\t\\begin{equation*}\n\t\t\\fp{\\vec{P}(\\vec{u})}{\\vec{u}} \\geq G_1 \\norm{\\vec{Du}}_p^p - G_2\n\t\t\\norm{\\vec{Du}}_p^2 - G_3 \\norm{\\vec{Du}}_p \n\t\\end{equation*}\n\tfor $\\vec{u} \\in V_q$. If furthermore\n\t\\begin{equation} \\label{eq:smallness2}\n\t(2-p)^{2-p} (p-1)^{p-1} G_1 \\geq G_2^{p-1}G_3^{2-p},\n\t\\end{equation}\n\tthen\n\t\\begin{equation*}\n\t\t\\fp{\\vec{P}(\\vec{u})}{\\vec{u}} \\geq 0\n\t\\end{equation*}\n\tholds for all $\\vec{u} \\in V_q$ with $\\norm{\\vec{Du}}_p = R :=\n\t\\left[\\frac{G_3}{(2-p)\\,G_1}\\right]^\\frac{1}{p-1}$. \n\\end{cor}\n\\begin{proof}\n\tWell-definedness, boundedness and continuity follow similarly to the\n\tproperties of $\\vec{S}$ and $\\vec{T}$ in Lemmas\n\t\\ref{lem:properties_S} and \\ref{lem:t2_str_contin}.\n\t\n\tThe continuity of $\\S$ and its monotonicity, which follows from\n\t\\eqref{eq:pdelta_growth1}$_1$, yield that $\\vec{S}\\colon V_q \\to V_q\\d$ is\n\tpseudomonotone. Lemma \\ref{lem:t2_str_contin} shows that\n\t$\\vec{T}\\colon V_q \\to V_q\\d$ is strongly continuous and thus\n\tpseudomonotone. Therefore, the sum\n\t$\\vec{P} = \\vec{S} + \\vec{T} - \\vec{f}$ is also pseudomonotone.\n\t\n\tBy Lemmas \\ref{lem:S_estimate} and \\ref{lem:t2_str_contin} and by\n\tthe definition of the constants $G_1$, $G_2$, $G_3$\n\tin~\\eqref{eq:g123}, we have\n\t\\begin{align} \\label{eq:t_estim1}\n\t\t\\fp{\\vec{P}(\\vec{u})}{\\vec{u}}\n\t\t&\\geq \\fp{\\vec{S}(\\vec{u})}{\\vec{u}} -\n\t\t\\abs{\\fp{\\vec{T}(\\vec{u})}{\\vec{u}}}\n\t\t- \\abs{\\fp{\\vec{f}}{\\vec{u}}} \\nonumber\n\t\t\\\\\n\t\t&\\geq G_1 \\norm{\\vec{Du}}_p^p - G_2 \\norm{\\vec{Du}}_p^2 - G_3\n\t\t\\norm{\\vec{Du}}_p\n\t\\end{align}\n\tfor any $\\vec{u} \\in V_q$. \n\t\n\tNow assume that \\eqref{eq:smallness2} holds. 
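As a short intermediate computation (splitting $G_1 = G_1^{p-1}\\, G_1^{2-p}$), this condition can be rewritten as\n\t\\begin{equation*}\n\t\t\\left[(p-1)G_1\\right]^{p-1} \\left[(2-p)G_1\\right]^{2-p} \\geq G_2^{p-1} G_3^{2-p}.\n\t\\end{equation*}\n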
It follows\n\t$\\left[\\frac{(p-1)\\,G_1}{G_2}\\right]^{p-1} \\geq \\left[\n\t\\frac{G_3}{(2-p)G_1}\\right]^{2-p}$, so we may define\n\t$R := \\left[\\frac{G_3}{(2-p)G_1}\\right]^\\frac{1}{p-1}$ and obtain\n\t$\\frac{(p-1)\\,G_1}{G_2} \\geq R^{2-p}$. Together, we get\n\t\\begin{equation} \\label{eq:t_estim2}\n\tG_1 R^p = (p-1)G_1 R^p + (2-p)G_1 R^p \\geq G_2 R^2 + G_3 R.\n\t\\end{equation}\n\tWe use $\\norm{\\vec{Du}}_p = R$, insert \\eqref{eq:t_estim2} into\n\t\\eqref{eq:t_estim1} and obtain the result. \n\\end{proof}\n\\begin{rem}\n\n\t(i) Note that the dependence of the constants $G_i$, $i=1,2,3$, on\n\t$\\norm{\\vec{g}}_{1,s}$ stems only from the estimate of the\n\tconvective term.\\\\[-3mm]\n\t\n\t(ii) In order to prove $G_1 R^p - G_2 R^2 - G_3 R \\geq 0$, we have split\n\tthe positive summand $G_1 R^p$ into two parts using the weights\n\t$p-1$ and $2-p$. By considering the weights as a free parameter,\n\tone can show that this choice is optimal.\n\\end{rem}\n\nNow we are ready to complete the proof of Theorem\n\\ref{thm:main_thm}. Since the proof requires an approximation process\nonly if $p \\in {\\big ( \\frac{2d}{d+2}, \\frac{3d}{d+2} \\big ]} $, we\nhandle the two cases separately.\n\n\\begin{proof}[Proof of Theorem \\ref{thm:main_thm} in the case $p \\in\n\t\\big (\\frac{3d}{d+2}, 2\\big )$]\n\tIn this case we have $s \n\n\tp$. By Lemma \\ref{lem:div_eq}, we find a function\n\t$\\vec{g} \\in W^{1,p}(\\Omega)$ that solves the corresponding\n\tinhomogeneous divergence equation \\eqref{eq:g_problem}.\n\t\n\tWe consider the corresponding operator $\\vec{P}$ defined in\n\t\\eqref{eq:defi_P} and prove existence of a function\n\t$\\vec{u} \\in V_p$ which satisfies $\\vec{P}(\\vec{u}) = \\vec{0}$. The\n\tspace $V_p$ is reflexive and separable as a closed subspace of\n\t$W^{1,p}(\\Omega)$. In Corollary \\ref{cor:t_estim} (with\n\t$q = s = p$), we proved that $\\vec{P}$ is well-defined, bounded,\n\tcontinuous and pseudomonotone on $V_p \\to V_p\\d$ and we concluded\n\tfrom assumption \\eqref{eq:g_smallness} that there exists a positive\n\tnumber~$R$ such that $\\vec{P}$ is locally coercive with radius $R$.\n\tSo we may apply the main theorem on pseudomonotone operators,\n\tTheorem \\ref{thm:brezis}, to the operator $\\vec{P}$ on the space\n\t$V_p$ and obtain a weak solution $\\vec{u} \\in V_p$ of\n\t$\\vec{P}(\\vec{u}) = \\vec{0}$.\n\t\n\tBy a standard characterization of weak gradient fields (cf.~\\cite{Sohr}),\n\tthis is equivalent to the existence of a pressure $\\pi$ such that\n\t$(\\vec{u}+\\vec{g}, \\pi) \\in W^{1,p}(\\Omega) \\times L^{p'}(\\Omega)$\n\tsolves the original system \\eqref{eq:main_problem}.\n\\end{proof}\n\n\\begin{proof}[Proof of Theorem \\ref{thm:main_thm} in the case\n\t$p \\in {\\big ( \\frac{2d}{d+2}, \\frac{3d}{d+2} \\big ]} $]\n\tIn this case we have $s = \\big (\\frac{p\\d}{2}\\big\n\t)' $. By assumption \\ref{assum:smallness}, there is a function\n\t$\\vec{g} \\in W^{1,s}(\\Omega)$ which solves the inhomogeneous\n\tdivergence equation \\eqref{eq:g_problem} and satisfies\n\t\\eqref{eq:g_smallness}. We prove the existence of a function\n\t$\\vec{u} \\in V_p$ that solves $\\vec{P}(\\vec{u}) = \\vec{0}$ for the\n\tcorresponding operator $\\vec{P}$ from \\eqref{eq:defi_P}. For\n\tregularization, we choose some $q > s$ with $q > 2$ and consider the\n\tsymmetric $q$-Laplacian $\\vec{A} \\colon V_q \\to V_q\\d$ defined via\n\t\\begin{equation*}\n\t\t\\fp{\\vec{A}(\\vec{u})}{\\vec{\\phi}} := \\fp{\\abs{ \\vec{Du}}^{q-2}\n\t\t\t\\vec{Du}}{\\vec{D\\phi}}. 
\n\t\\end{equation*}\n\tThe operator $\\vec{A}$ is well-defined, bounded, continuous and\n\tmonotone.\n\t\n\tWe work in the reflexive and separable spaces $V_q^n$, which are\n\tdefined as the space $V_q$ equipped with the equivalent norms\n\t$\\lVert {\\vec{u}} \\rVert _{q, n} := \\max \\left\\{ n^\\frac{-2}{2q-1}\n\t\\norm{\\vec{Du}}_q, \\norm{\\vec{Du}}_p \\right\\}$. For sufficiently\n\tlarge $n$, we want to establish the existence of solutions $\\vec{u^n} \\in V_q^n$ to\n\tthe equation\n\t\\begin{equation} \\label{eq:tq}\n\t\\fp{\\vec{P}(\\vec{u^n})}{\\vec{\\phi}} + \\tfrac{1}{n}\n\t\\fp{\\vec{A}(\\vec{u^n})}{\\vec{\\phi}} = 0 \n\t\\end{equation}\n\tfor $\\vec{\\phi} \\in V_q$, which shall approximate a solution of the\n\toriginal equation. The operator $\\vec{P}$ is pseudomonotone,\n\tcontinuous and bounded by Corollary~\\ref{cor:t_estim} and the same\n\tholds for $\\vec{A}$ and their sum $\\vec{P}+\\frac{1}{n}\\vec{A}$. We\n\tprove local coercivity of $\\vec{P} + \\frac{1}{n} \\vec{A}$ with\n\tradius $R := \\left[\\frac{G_3}{(2-p)G_1}\\right]^\\frac{1}{p-1}$. So,\n\tlet $\\lVert {\\vec{u}} \\rVert _{q, n} = R$. If\n\t$n^\\frac{-2}{2q-1} \\norm{\\vec{Du}}_q \\leq \\norm{\\vec{Du}}_p$, we\n\thave $\\norm{\\vec{Du}}_p = \\lVert {\\vec{u}} \\rVert _{q, n} = R$ and we get\n\t$\\fp{\\vec{P}(\\vec{u})}{\\vec{u}} \\geq 0$ by assumption\n\t\\eqref{eq:g_smallness} and Corollary \\ref{cor:t_estim}. Otherwise,\n\tsuppose $n^\\frac{-2}{2q-1} \\norm{\\vec{Du}}_q > \\norm{\\vec{Du}}_p$,\n\tso $R = n^\\frac{-2}{2q-1} \\norm{\\vec{Du}}_q$ and\n\t$R > \\norm{\\vec{Du}}_p$. This, Corollary \\ref{cor:t_estim} and the\n\tSobolev embedding $V_q \\injto V_p$ imply\n\t\\begin{align*}\n\t\t\\fp{\\vec{P}(\\vec{u})}{\\vec{u}} + \\tfrac{1}{n}\\fp{\\vec{A}(\\vec{u})}{\\vec{u}} \n\t\t&\\geq G_1 \\norm{\\vec{Du}}_p^p - G_2 \\norm{\\vec{Du}}_p^2 - G_3\n\t\t\\norm{\\vec{Du}}_p + \\tfrac{1}{n}\\norm{\\vec{Du}}_q^q\n\t\t\\\\ \n\t\t&> -G_2 R^{\\,2} - G_3 R + n^\\frac{1}{2q-1} R^{\\,q}\\!.\n\t\\end{align*}\n\tAs $n^\\frac{1}{2q-1}$ grows to infinity, the latter expression\n\tbecomes positive for any $R > 0$ and sufficiently large $n$.\n\t\n\tThus, the existence Theorem \\ref{thm:brezis} gives us solutions\n\t$\\vec{u^n} \\in V_q^n$ of \\eqref{eq:tq} with\n\t\\begin{equation} \\label{eq:bound_un} \\max \\left\\{ n^\\frac{-2}{2q-1}\n\t\\norm{ \\vec{Du}}_q\\!, \\norm{\\vec{Du}}_p \\right\\} =\n\t\\lVert {\\vec{u^n}} \\rVert _{q, n} \\leq R\n\t\\end{equation}\n\tand the bound $R$ holds uniformly with respect to $n$.\n\t\n\tWe switch to a weakly convergent (and renamed) subsequence\n\t$\\vec{u^n}\\weakto \\vec{u}$ in $V_p$. 
The bound \\eqref{eq:bound_un}\n\timplies $\\norm{n^{-1}\\vec{A}(\\vec{u^n})}_{q'} = n^{-1}\n\t\\norm{\\vec{Du^n}}_{q}^{q-1} \\leq n^{\\frac{-1}{2q-1}} R^{\\,q-1} \\to\n\t0$ as $n \\to \\infty $.\n\t\n\tWe apply Theorem \\ref{thm:ident_limits} with the shifted\n\textra stress tensor $\\A(\\cdot) := \\S(\\cdot+\\vec{Dg})$,\n\t$\\vec{B} := \\vec{T} - \\vec{f}\\colon V_p \\to V_s\\d$, $r := q$, $X := V_q$ and\n\t$\\vec{A_n} := n^{-1}\\vec{A}\\colon V_q \\to V_q\\d\\!$ and obtain that\n\t$\\vec{u}$ solves $\\vec{P}(\\vec{u}) = \\vec{0}$ weakly.\n\t\n\tSimilarly to the proof in the first case, we obtain a pressure $\\pi$\n\tsuch that the pair\n\t$(\\vec{u}+\\vec{g}, \\pi) \\in W^{1,p}(\\Omega) \\times L^{s'}(\\Omega)$\n\tis a weak solution of \\eqref{eq:main_problem}.\n\\end{proof}\n\n\\subsection{Less regular data} \\label{sub:less_regularity}\n\nIn Theorem \\ref{thm:main_thm}, we demanded additional regularity of\nthe data: we required $\\vec{g} \\in W^{1,s}(\\Omega)$ with\n$s = \\big (\\frac{p\\d}{2}\\big )' > p$ in the case\n$p \\in \\big (\\frac{2d}{d+2}, \\frac{3d}{d+2}\\big )$, while the solution\n$\\vec{v}$ is sought only in $W^{1,p}(\\Omega)$. Thus, we want to\ndiscuss whether this assumption is really necessary or if it may be\nremoved, perhaps for the price of more regular test functions. This\nquestion only arises if\n$p \\in \\big (\\frac{2d}{d+2}, \\frac{3d}{d+2}\\big )$, since one has\n$s = p$ and $\\vec{g} \\in W^{1,s}(\\Omega) = W^{1,p}(\\Omega)$ in the\nother case.\n\nIn the proof of Theorem \\ref{thm:main_thm}, we used the additional\nregularity of our data to get more convenient estimates of the\nconvective term in Lemma \\ref{lem:t2_str_contin}. Since these\nestimates are mainly based on the H\\\"older inequality, one has to use\na stronger norm of $\\vec{u}$ if only $\\vec{g}\\in W^{1,p}(\\Omega)$ is\npresumed. By \\eqref{eq:t2_eq4} and the H\\\"older inequality, one\nobtains the following result similar to Lemma \\ref{lem:t2_str_contin}:\n\n\\begin{lem}[Properties of $\\vec{T}$] \\label{lem:t2_str_contin2}\n\tLet $p \\in \\big (\\frac{2d}{d+2}, \\frac{3d}{d+2}\\big )$ and $q$ be so\n\tlarge that it holds both $q > s = \\big (\\frac{p\\d}{2}\\big )'$ and\n\t$\\frac{1}{q'} \\geq \\frac{1}{p}+\\frac{1}{p\\d}$. For any given\n\tfunction $\\vec{g} \\in W^{1,p}(\\Omega)$, the operator $\\vec{T}$\n\thas the upper bound\n\t\\begin{align*}\n\t\t\\abs{\\fp{\\vec{T}(\\vec{u})}{\\vec{u}}}\n\t\t&\\leq c_\\text{Sob} c_\\text{Korn}^2 \\left( \\norm{\\vec{Dg}}_p +\n\t\t\\tfrac{1}{2} \\norm{\\div \\vec{g}}_p \\right) \\norm{\\vec{Du}}_p\n\t\t\\norm{\\vec{Du}}_q\n\t\t\\\\ \n\t\t&\\quad + c_\\text{Sob} \\left( \\norm{\\vec{g}}_{1,p}^2 +\n\t\tc_\\text{Korn} \\norm{\\div \\vec{g}}_p \\norm{\\vec{g}}_{1,p} \\right)\n\t\t\\norm{\\vec{Du}}_q \n\t\\end{align*}\n\tfor all $\\vec{u} \\in V_q$, where $C_\\text{Sob}$ are Sobolev\n\tembedding constants.\n\\end{lem}\n\nThis and Lemma \\ref{lem:S_estimate} yield an alternative lower bound\nof $\\vec{P}$: \n\n\\begin{cor}[Alternative estimate of $\\vec{P}$] \\label{cor:t_estim2}\n\tLet $p \\in \\big (\\frac{2d}{d+2}, \\frac{3d}{d+2}\\big )$ and $q$ be so\n\tlarge that it holds both $q > s = \\big (\\frac{p\\d}{2}\\big )'$ and\n\t$\\frac{1}{q'} \\geq \\frac{1}{p}+\\frac{1}{p\\d}$. 
For any given\n\tfunction $\\vec{g} \\in W^{1,p}(\\Omega) \\setminus \\{\\vec{0}\\}$, there\n\tare constants $F_1, F_2, G_1 \\geq 0$ with $F_1, G_1 > 0$ such that\n\tit holds\n\t\\begin{equation*}\n\t\t\\fp{\\vec{P}(\\vec{u})}{\\vec{u}} \\geq G_1 \\norm{\\vec{Du}}_p^p -\n\t\tF_1 \\norm{\\vec{Du}}_q - F_2 \\norm{\\vec{Du}}_p\n\t\t\\norm{\\vec{Du}}_q \n\t\\end{equation*}\n\tfor all $\\vec{u} \\in V_q$.\n\\end{cor}\n\nTo the authors' knowledge, a substantial improvement of the\nestimates in Lemma~\\ref{lem:t2_str_contin2} and Corollary\n\\ref{cor:t_estim2} is not available. So we ask whether the proof of\nTheorem \\ref{thm:main_thm} can be modified such that it works out with\n\\emph{these} estimates.\n\nThe main theorem on pseudomonotone operators, Theorem\n\\ref{thm:brezis}, which was used to obtain approximate solutions in\nsome smoother space $X = V_p \\cap Y$ already gave a priori estimates\nfor these approximate solutions. These a priori bounds were needed to\nestablish a weak accumulation point. The next Lemma shows that it is\nimpossible to obtain approximate solutions which are coming with an a\npriori bound in $V_p$, by Brouwer's fixed point theorem\/the main\ntheorem on pseudomonotone operators.\n\n\\begin{lem}[Limits for the applicability of pseudomonotone operator\n\ttheory for solving~\\eqref{eq:main_problem}] \\label{lem:no_brouwer2}\n\tLet $F_1, G_1, R, 1 1$ so large that\n\t$\\frac{c_2\\, R^{\\,q-1}}{F_1} \\geq c_1$ and define the auxiliary\n\tfunctions $t_n\\colon \\R\\to\\R$ via\n\t\\begin{equation*}\n\t\tt_n(x) := \\frac{c_2}{n} x^{q-1} - F_1.\n\t\\end{equation*}\n\tWe claim that for all sufficiently large $n$, the equation\n\t$t_n(x)=0$ has a solution $y_n \\in \\big ( \\frac{R}{c_s}, \\frac{R}{f(n)} \\big )$.\n\tSince the upper bound on $f$ implies $f(n) \\to 0$ as $n \\to\\infty$,\n\tit holds $\\frac{R}{c_s} < \\frac{R}{f(n)}$ for sufficiently large $n$\n\tand the interval $\\big ( \\frac{R}{c_s}, \\frac{R}{f(n)} \\big )$ is\n\tnot empty. We have\n\t$t_n\\! \\big (\\frac{R}{c_s}\\big ) = \\frac{c_2 R^{\\,q-1}}{n\\,\n\t\tc_s^{q-1}} - F_1 < 0$ for sufficiently large $n$ and the\n\tdefinitions of $c_1$ and $c_2$ imply\n\t$\\frac{c_2 R^{\\,q-1}}{F_1} \\geq c_1 > n f(n)^{q-1}$, thus\n\t$t_n\\! \\big (\\frac{R}{f(n)}\\big ) = \\frac{c_2\\, R^{\\,q-1}}{n\\,\n\t\tf(n)^{\\,q-1}} - F_1 > 0$. The existence of zeroes $y_n$ then\n\tfollows from the mean value theorem.\n\t\n\tRight from the definition of $y_n$, we obtain\n\t$\\frac{1}{n}\\, y_n^{\\,q} - \\frac{1}{c_2} F_1 y_n = 0$ and\n\t\\begin{equation*}\n\t\t\\left[1-\\frac{1}{c_2}\\right] F_1 y_n =\n\t\t\\left[1-\\frac{1}{c_2}\\right] F_1 \\left[ \\frac{nF_1}{c_2}\n\t\t\\right]^{\\frac{1}{q-1}} > G_1 R^p \n\t\\end{equation*}\n\tfor sufficiently large $n$. 
Together, it follows\n\t\\begin{equation*}\n\t\tG_1 R^p + \\frac{1}{n} y_n^q - F_1 y_n = G_1 R^p -\n\t\t\\left[1-\\frac{1}{c_2}\\right] F_1 y_n + \\frac{1}{n} y_n^q -\n\t\t\\frac{1}{c_2} F_1 y_n < 0.\n\t\\end{equation*}\n\tThus, any function $\\vec{u^n} \\in V_p \\cap Y$ with\n\t$\\norm{\\vec{u^n}}_{Y_n}=R$ and $\\norm{\\vec{u^n}}_{Y}=y_n$ satisfies\n\t\\begin{equation*}\n\t\tP_n(\\vec{u^n}) \\leq G_1 R^p + \\frac{1}{n} y_n^q - F_1 y_n < 0.\n\t\\end{equation*}\n\t\n\t\\emph{Step 3: Construction of functions $\\vec{u^n}\\in V_p \\cap Y$\n\t\twith prescribed norms.}\n\t\n\tIt remains to prove that for any\n\t$y_n \\in \\big ( \\frac{R}{c_s}, \\frac{R}{f(n)} \\big )$, there is a\n\tfunction $\\vec{u^n} \\in V_p \\cap Y$ with $\\norm{\\vec{u^n}}_{Y_n}=R$\n\tand $\\norm{\\vec{u^n}}_{Y}=y_n$.\n\t\n\tAssume that $\\norm{\\vec{u}}_{Y}>y_n$ for all $\\vec{u} \\in Y_n$ with\n\t$\\norm{\\vec{u}}_{Y_n}=R$. Since\n\t$y_n \\in \\big( \\frac{R}{c_s}, \\frac{R}{f(n)} \\big)$, it holds\n\t$c_s > \\frac{R}{y_n}$. By the definition of $c_s$, this implies the\n\texistence of a function $\\vec{u} \\in Y_n$ such that\n\t$\\norm{\\vec{D} \\vec{u}}_p > \\frac{R}{y_n} \\norm{\\vec{u}}_Y$. Without\n\tloss of generality, we may scale $\\vec{u}$ such that\n\t$\\norm{\\vec{u}}_{Y_n}=R$. We compile these estimates of $\\vec{u}$\n\tand apply \\eqref{eq:eq2} to obtain\n\t\\begin{equation*}\n\t\tR < \\frac{R}{y_n} \\norm{\\vec{u}}_Y \n\t\t< \\norm{\\vec{D} \\vec{u}}_p\n\t\t\\leq R,\n\t\\end{equation*}\n\twhich is impossible.\n\t\n\tNow assume that $\\norm{\\vec{u}}_{Y} f(n)$. This and the definition of $f(n)$ imply the\n\texistence of a function $\\vec{u} \\in Y_n$ such that it holds\n\t$\\norm{\\vec{u}}_{Y_n} < \\frac{R}{y_n} \\norm{\\vec{u}}_Y$. We may\n\tscale $\\vec{u}$ such that $\\norm{\\vec{u}}_{Y_n} = R$ and it follows\n\t\\begin{equation*}\n\t\tR = \\norm{\\vec{u}}_{Y_n} < \\frac{R}{y_n} \\norm{\\vec{u}}_Y < R, \n\t\\end{equation*}\n\twhich is again a contradiction. \n\t\n\tHence, there are $\\vec{v^n}, \\vec{w^n} \\in Y_n$ with\n\t$\\norm{\\vec{v^n}}_{Y_n} = \\norm{\\vec{w^n}}_{Y_n} = R$ and\n\t$\\norm{\\vec{v^n}}_{Y} \\leq y_n \\leq \\norm{\\vec{w^n}}_{Y}$ for every\n\t$n\\in\\N$. By assumption, the mapping\n\t$\\vec{u}\\mapsto\\norm{\\vec{u}}_Y$ is continuous on $Y_n$. We apply\n\tthe mean value theorem on the (path-connected) sphere\n\t$\\left\\{ \\norm{\\vec{\\cdot}}_{Y_n} = R \\right\\}$ and obtain functions\n\t$\\vec{u^n} \\in V_p \\cap Y$ with $\\norm{\\vec{u^n}}_{Y_n}=R$ and\n\t$\\norm{\\vec{u^n}}_{Y} = y_n$. In Step 2 we proved that such an\n\telement $\\vec{u^n}$ solves \\eqref{eq:eq1} for all sufficiently large\n\t$n$.\n\\end{proof}\n\nLemma \\ref{lem:no_brouwer2} shows that, without assuming additional\nregularity, it is impossible to find a radius $R$ such that local\ncoercivity is fulfilled and Brouwer's fixed point theorem becomes\napplicable. 
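To get a feeling for the size of the obstruction in Step 2 of the proof above, here is a minimal numerical illustration; the values are hypothetical, and the side conditions involving $c_1$, $c_s$ and $f$, which constrain how the constants may be chosen and how large $n$ must be taken, are ignored. Taking $q = 3$, $c_2 = 2$, $F_1 = 1$ and $G_1 R^p = 1$, the zero of $t_n$ is $y_n = \\big(\\tfrac{n F_1}{c_2}\\big)^{\\frac{1}{q-1}} = \\big(\\tfrac{n}{2}\\big)^{\\frac{1}{2}}$, and\n\\begin{equation*}\nG_1 R^p + \\tfrac{1}{n}\\, y_n^{\\,q} - F_1 y_n \\ = \\ 1 + \\tfrac{1}{n}\\big(\\tfrac{n}{2}\\big)^{\\frac{3}{2}} - \\big(\\tfrac{n}{2}\\big)^{\\frac{1}{2}} \\ = \\ 1 - \\tfrac{1}{2}\\big(\\tfrac{n}{2}\\big)^{\\frac{1}{2}} \\ < \\ 0 \\quad \\text{for } n \\geq 9,\n\\end{equation*}\nwhich illustrates how the regularizing term $\\tfrac{1}{n}\\, y_n^{\\,q}$ is too weak to restore coercivity once $n$ is large.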
Consequently, the authors view the existence proof in\n\\cite[Theorem 1.3]{Sin} with suspicion; in particular, the requirements for\nBrouwer's fixed point theorem do not seem to be satisfied in our eyes.\nWe conclude from Lemma \\ref{lem:no_brouwer2} that it is impossible to\nmodify the proof of existence theorem \\ref{thm:main_thm} such that it\navoids the critical regularity assumption within the framework of\npseudomonotone operator theory.\n\n\\bibliographystyle{abbrv}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzbspp b/data_all_eng_slimpj/shuffled/split2/finalzzbspp new file mode 100644 index 0000000000000000000000000000000000000000..d94fd628e5a419a0ce51c6a3bf8e09dca4ba4fd5 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzbspp @@ -0,0 +1,5 @@ +{"text":"\\section{The domain of complex metrics}\n\nA Riemannian metric on a manifold $M$ is a positive-definite symmetric bilinear form $g: T_x \\times T_x \\to \\mathbb R$ on the tangent space $T_x$ at each point $x \\in M$. The metrics we shall consider will be defined by symmetric $\\mathbb R$-bilinear maps $g: T_x \\times T_x \\to \\mathbb C$ at each point, with an appropriate generalization of the positivity condition.\n\nTo see what condition we should require, let us consider the simplest example of a field theory: a free real scalar field of mass $m$. Then the space of `fields' $\\Phi_M$ is the vector space C$^{\\infty}(M;\\mathbb R)$ of smooth functions, and in the exponent of the path-integral we have the quadratic form\n\\begin{eqnarray*}\n{\\rm i} S_g(\\phi) \\ \\ &=& \\ \\ \\frac{1}{2}\\int_M ( d\\phi\\wedge*d\\phi + m^2 \\phi\\wedge *\\phi) \\\\ &=& \\ \\ \\frac{1}{2}\\int_M \\left\\{\\sum g^{ij}\\frac{\\partial \\phi}{\\partial x^i}\\frac{\\partial \\phi}{\\partial x^j} + m^2\\phi^2 \\right\\} ( \\det g)^{1\/2} |dx^1 \\ldots dx^d|.\n\\end{eqnarray*}\nHere $(g^{ij})$ denotes the inverse of the matrix $g = (g_{ij})$, and $*$ is the Hodge star-operator defined by the metric, which takes differential forms of degree $p$ to forms of degree $d-p$ twisted by the orientation bundle. (We shall not assume the space-time $M$ is orientable.) In particular the star-operator takes the constant function 1 to the volume element \n$$*1 \\ = \\ {\\rm vol}_g \\ = \\ ({\\rm det} g)^{1\/2}|dx^1 \\ldots dx^d| \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ (3)$$\nNotice that for a Lorentzian metric $g$ the volume element $*1$ is pure imaginary. This agrees with the fact that the `action' $S_g$ should be real for a Lorentzian manifold. We want the real part of the quadratic form i$S_g$ to be positive-definite for all the complex metrics we allow. This imposes two conditions. First, we need the real part of the twisted $d$-form vol$_g$ defined by the formula (3) to be a positive volume-form on $M$. We therefore require that $\\det g$, which is invariantly defined up to multiplication by a positive real number, is \\emph{not} real and negative, and we choose $(\\det g)^{1\/2}$ to have positive real part. \n\nThe second condition we need is that the real part of the matrix (det$g)^{1\/2}g^{-1}$ --- or equivalently of the inverse matrix $({\\rm det}g)^{-1\/2}g$ --- is positive-definite. The two conditions together would give us a domain whose Shilov boundary (like that of the Siegel generalized half-plane) contains indefinite real quadratic forms of all signatures, and not only the Lorentzian ones. But we shall impose further conditions. 
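To see concretely why further conditions are needed, consider the following illustrative example (ours, not taken from the text): on $\\mathbb R^4$ let $g = {\\rm diag}({\\rm e}^{{\\rm i}\\varphi}, {\\rm e}^{{\\rm i}\\varphi}, {\\rm e}^{-{\\rm i}\\varphi}, {\\rm e}^{-{\\rm i}\\varphi})$ with $\\varphi = \\pi\/3$. Then $\\det g = 1$, so the volume condition holds, and the real part of $(\\det g)^{-1\/2}g$ is positive-definite because $|\\varphi| < \\pi\/2$; nevertheless, for the 2-form $\\alpha = dx^1 \\wedge dx^2$ one finds, in terms of the star-operator defined below, that\n$$\\alpha \\wedge *\\alpha \\ = \\ {\\rm e}^{-2{\\rm i}\\varphi}\\,{\\rm vol}_g,$$\nwhose real part is negative. So the two conditions above do not yet control the natural quadratic forms on 2-forms; this example is excluded by condition (4) below, since here $\\sum_k |\\arg(\\lambda_k)| = 4\\pi\/3 > \\pi$.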
A clue to what more is needed comes from the theory of the electromagnetic field on $M$, with its field-strength given by a real 2-form $F$ on $M$, and with the action-functional\n$${\\rm i} S_g(F) \\ = \\ \\frac{1}{2}\\int_M F \\wedge * F.$$\nThe Hodge $*$-operator makes sense for a complex metric: for a $p$-form $\\alpha$ we define a twisted $(d-p)$-form $*\\alpha$ by taking the inner-product of $\\alpha$ with vol$_g \\ = \\ *1$, using the complex inner-product $g$. We regard vol$_g$ as an element of the complex line $|\\wedge^d(T^*_x)|_{\\mathbb C}$, where $|\\wedge^d(T^*_x)|$ is the tensor product of $\\wedge^d(T^*_x)$ with the real orientation line of $T_x$, and $|\\wedge^d(T^*_x)|_{\\mathbb C}$ is its complexification, \\emph{but} with the convention that the orientation-reversing automorphisms of $T_x$ act \\emph{antilinearly}. We say that an element of the real part of the line is positive if it is a positive volume-element.\n\nFor the electromagnetic field we need the real part of the quadratic form\n$$\\wedge^2(T_x^*) \\ \\longrightarrow \\ |\\wedge^d(T_x^*)|_{\\mathbb C}$$ \ngiven by $F \\mapsto F\\wedge *F$ to be positive-definite. \n\nThis makes it natural, if we are going to consider space-time manifolds $M$ of all dimensions, to propose \n\n\\bigskip\n\n\\noindent{\\bf Definition 2.1} \\ \\ {\\it On a $d$-dimensional real vector space $V$ a quadratic form $g:V \\to \\mathbb C$ is called an {\\rm allowable} complex metric if, for all degrees $p \\geq 0$, the real part of the quadratic form \n$$\\wedge^p(V^*) \\ \\longrightarrow \\ |\\wedge^d(V^*)|_{\\mathbb C}$$\ngiven by $\\alpha \\mapsto \\alpha \\wedge * \\alpha$ is positive-definite.} \n\n\\bigskip\n\nFortunately, this definition has an equivalent formulation which is much more explicit and illuminating.\n\n\\bigskip\n\n\n\\noindent{\\bf Theorem 2.2} \\ \\ {\\it Definition 2.1 is equivalent to: there is a basis of the real vector space $V$ in which the quadratic form $g$ can be written\n$$\\lambda_1 y_1^2 + \\lambda_2 y_2^2 + \\ldots + \\lambda_d y_d^2,$$\nwhere the $y_i$ are coordinates with respect to the basis, and the $\\lambda_i$ are non-zero complex numbers, not on the negative real axis, such that}\n$$|\\arg (\\lambda_1) | + |\\arg (\\lambda_2) | + \\ldots + |\\arg (\\lambda_d)| < \\pi. \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ (4)$$ \n\n\\bigskip\n\nThe complex-valued quadratic forms $g:V \\to \\mathbb C$ on a real vector space $V$ which satisfy the conditions of (2.1) or (2.2) form an open subset $Q_{\\mathbb C}(V)$ of the complex vector space $S^2(V^*_{\\mathbb C})$. It follows from Theorem 2.2 that the real inner products with signature $(d-1,1)$ --- but not those with other signatures --- lie on the boundary of the domain $Q_{\\mathbb C}(V)$. For if the metric is real then each $|\\arg(\\lambda_i)|$ is either 0 or $\\pi$, and the inequality (4) shows that at most \\emph{one} of the $|\\arg(\\lambda_i)|$ can become $\\pi$ on the boundary. \n\nAnother consequence of (4) is that \n$$\\max \\arg \\lambda_i \\ - \\ \\min \\arg \\lambda_i \\ < \\ \\pi,$$\nwhich shows that when $v$ runs through $V$ the complex numbers $g(v)$ form a closed convex cone in $\\mathbb C$ disjoint from the open negative real axis. 
In particular, $g(v)$ can never be real and negative.\n \n Using the criterion mentioned at the end of Section 1 we see that the real Lorentzian metrics --- and no other nondegenerate metrics --- belong to the \\emph{Shilov} boundary of $Q_{\\mathbb C}(V)$, when it is regarded as a bounded domain in an affine variety (cf. the proof of 2.7 below). Indeed if $g = \\sum \\lambda_j y_j^2$ is a complex metric for which the inequality (4) becomes an equality, and at least two of the eigenvalues $\\lambda_j$ and $\\lambda_k$ are not on the negative real axis, then (after rescaling the basis vectors $e_j$ and $e_k$ so that $|\\lambda_j | = |\\lambda_k| = 1$) we get a holomorphic curve through $g$, in the closure of $Q_{\\mathbb C}(V)$, by changing $\\lambda_j$ to $(\\lambda_j)^{1+z}$ and $\\lambda_k$ to $(\\lambda_k)^{1-\\varepsilon z}$, where $\\varepsilon$ is $+1$ or $-1$ according as the arguments of $\\lambda_j$ and $\\lambda_k$ have the same or opposite signs. \n \n In fact the Shilov boundary of $Q_{\\mathbb C}(V)$ contains \\emph{two} disjoint copies of the space of Lorentzian metrics on $V$, for an eigenvalue $\\lambda$ can approach the negative real axis either from above or from below. The two copies are interchanged by the complex-conjugation map on $Q_{\\mathbb C}(V)$. Because of our choice to make the orientation-reversing elements of GL$(V)$ act antilinearly on the orientation-line of $V$, we can say that the nondegenerate points of the Shilov boundary of $Q_{\\mathbb C}(V)$ are the \\emph{time-oriented} Lorentzian metrics.\n\n\\bigskip\n\n\nWe define the space Met$_{\\mathbb C}(M)$ of allowable complex metrics on a smooth manifold $M$ as the space of smooth sections of the bundle on $M$ whose fibre at $x$ is $Q_{\\mathbb C}(T_x)$. \n\n\\bigskip\n\nBefore giving the surprisingly simple proof of Theorem 2.2\nlet us say what motivated the two different-looking conditions. The desire to make the real parts of natural quadratic action functionals positive-definite hardly needs further comment, but choosing to focus on the `higher abelian gauge field' actions $\\alpha \\wedge * \\alpha$ --- the `Ramond-Ramond' fields of superstring theory --- may well seem arbitrary. Why not allow other kinds of tensor fields? Our conditions do not imply that they will be positive-definite. Witten has kindly suggested to us a justification for our focus, based on properties of the classical energy-mometum tensor explained in [WW]. Including the higher gauge theories does, in any case, impose an \\emph{upper} bound on the class of complex metrics we can allow, for the partition functions of these theories on a $d$-dimensional torus $M$ with a flat Riemannian metric $g$ are explicitly known (cf. [Ke], [Sz](4.4)), and we can see to which complex metrics they can be analytically continued. The gauge-equivalence classes of fields form an infinite-dimensional Lie group which is a product of a torus, a lattice, and an infinite-dimensional real vector space, and the partition function is the product of three corresponding factors. More precisely, an abelian gauge $(p-1)$-field $A$ has a \\emph{field-strength} $F_A$, a closed $p$-form on $M$ with integral periods, which determines $A$ up to the finite-dimensional torus $H^{p-1}(M;\\mathbb T)$ of flat gauge fields with $F_A = 0$. 
The space of fields is therefore a product\n$$H^{p-1}(M;\\mathbb T) \\ \\times \\ \\Phi_p \\ \\times \\ \\Gamma_p,$$\nwhere $\\Phi_p$ is the vector space of exact $p$-forms on $M$, and $\\Gamma_p \\cong {\\rm Harm}_{\\mathbb Z}^p(M) \\cong H^p(M;\\mathbb Z)$ is the finite-dimensional lattice of harmonic (and hence constant) $p$-forms with integral periods. The partition function is a Gaussian integral on this product: the torus of flat fields contributes its volume (for an appropriate metric determined by the geometry of $M$), the lattice $\\Gamma_p$ of harmonic $p$-forms contributes its theta-function $$\\sum_{\\alpha \\in \\Gamma_p} \\ \\exp \\left(-\\frac{1}{2} \\int_M\\ \\alpha \\wedge *\\alpha \\right ) \\ ,$$\nwhile the vector space $\\Phi_p$ contributes an `analytic torsion' which is a power of the determinant of the Laplace operator acting on smooth functions on $M$ (with the zero-eigenvalue omitted) --- an analogue of the Dedekind eta-function, but with the lattice of characters of the torus $M$ replacing the lattice $\\mathbb Z + \\tau \\mathbb Z \\subset \\mathbb C$. Of these three factors, the first clearly extends holomorphically to the space of all flat complex metrics on $M$, and the analytic torsion can be continued to a non-vanishing holomorphic function in the open set of complex metrics $g$ for which (det $g)^{-1\/2}g$ belongs to the Siegel domain $\\mathbb U(V))$; but the theta-function cannot be continued beyond those metrics for which the real part of the form $\\int \\alpha \\wedge *\\alpha$ is positive. \n\n\n\\bigskip\n\nApproaching from the opposite direction, the inequality (4) is motivated by \nthe traditional analytical continuation of vacuum expection values to an open subset of the $k$-fold product of complexified Minkowski space $\\mathbb M_{\\mathbb C}$. The Wightman axioms imply that the expectation values extend holomorphically to a domain $\\mathcal U_k$ called the `permuted extended tube'\\footnote{A set of points $x_1, \\ldots , x_k$ belongs to $\\mathcal U_k$ if, after ordering them suitably, there is an element $\\gamma$ of the complexified Lorentz group such that the imaginary part of $\\gamma(x_i - x_{i+1})$ belongs to the forward light-cone for each $i$. }, which is functorially associated to $\\mathbb M_{\\mathbb C}$ with its $\\mathbb C$-bilinear metric. It is a basic result in the Wightman theory (cf. [SW], or [Ka](2.1)) that $\\mathcal U_k$ contains the configuration space Conf$_k(\\mathbb E)$ of all $k$-tuples of \\emph{distinct} points of the standard Euclidean subspace $\\mathbb E \\subset \\mathbb M_{\\mathbb C}$. For a $d$-dimensional real vector space $V$ with a complex metric the complexification $V_{\\mathbb C}$ is isomorphic to $\\mathbb M_{\\mathbb C}$, uniquely up to a complex Lorentz transformation, and so the domain $\\mathcal U_k(V)$ is well-defined in $(V_{\\mathbb C})^k$. In the next section we shall give a definition of a quantum field theory on space-times $M$ with complex metrics: it implies that the expectation values are smooth functions on the configuration spaces Conf$_k(M)$ of distinct $k$-tuples in $M$. That makes it natural to\nask which (constant) complex metrics on $V$ have the property that the configuration space Conf$_k(V)$ is contained in the holomorphic envelope of $\\mathcal U_k(V)$, i.e. the largest Stein manifold to which all holomorphic functions on $\\mathcal U_k(V)$ automatically extend. 
The original motivation of condition (4) was\n\n\\bigskip\n\n\\noindent{\\bf Proposition 2.3} \\ {\\it If a complex metric on a $d$-dimensional real vector space $V$ satisfies condition (4) then ${\\rm Conf}_k(V)$ is contained in the holomorphic envelope of $\\mathcal U_k(V)$.}\n\n\\bigskip\n\n We shall postpone the proof of this result to an appendix at the end of this section. \n\n\\bigskip\n \n\\noindent{\\it Proof of Theorem 2.2} \\ \\ The first point is to show that a quadratic form which satisfies the conditions of Definition 2.1 can be written in the diagonal form $\\sum \\lambda_j y_j^2$ with respect to real coordinates $y_j$ on $V$. To diagonalize a complex form $g = A+ {\\rm i} B$ with respect to a real basis is to diagonalize its real and imaginary parts simultaneously, which is possible if either $A$ or $B$ --- or, more generally, a real linear combination of them such as the real part of $({\\rm det}g)^{-1\/2}g$ --- is positive-definite. But 2.1, applied when $p=1$, implies that the real part of $({\\rm det}g)^{-1\/2}g$ is positive.\n\nSuppose now that $g$ is diagonalized with respect to a basis $\\{e_i\\}$ of $V$. Then the form $\\alpha \\mapsto \\alpha \\wedge * \\alpha$ on $\\wedge^p(V^*)$ is diagonal with respect to the basis $\\{e_S^* = e^*_{i_1}\\wedge \\ldots \\wedge e^*_{i_p}\\}$, where $\\{e_i^*\\}$ is the dual basis to $\\{e_i\\}$, and $S$ runs through $p$-tuples $S = (i_1, \\ldots , i_p)$. The value of the form $\\alpha \\wedge * \\alpha$ on the basis element $e^*_S$ is\n$$(\\lambda_1 \\ldots \\lambda_d)^{1\/2}\\prod_{i \\in S}\\lambda_i^{-1},$$\nwhich has positive real part if its argument\n$$\\frac{1}{2}\\left\\{\\sum_{i \\in S} {\\rm arg}(\\lambda_i) - \\sum_{i \\not \\in S} {\\rm arg}(\\lambda_i)\\right\\}$$\nlies in the open interval $(-\\pi\/2,\\pi\/2)$. But to say that this is true for every subset $S$ of $\\{1, \\ldots , d\\}$ is precisely condition $(4)$. $\\spadesuit$\n\n\\bigskip\n\n\nThe proof of Theorem 2.2 shows that to give an element $g$ of $Q_{\\mathbb C}(V)$ is the same as to give a finite sequence $\\theta_1 \\geq \\theta_2 \\geq \\ldots \\geq \\theta_m$ in the interval $(-\\pi, \\pi)$ together with a decomposition\n $$V = V_1 \\oplus \\ldots \\oplus V_m$$\n such that\n $$\\sum_k \\dim V_k\\cdot |\\theta_k|<\\pi. $$ Thus on $V_k$ the bilinear form $g$ is ${\\rm e}^{{\\rm i} \\theta_k}$ times a real positive-definite form. The only ambiguity in this description is that if, say, $\\theta_k = \\theta_{k+1}$ we can replace $V_k$ by $V_k \\oplus V_{k+1}$ and omit $\\theta_{k+1}$ and $V_{k+1}$. This means that the subspace $P = \\bigoplus {\\rm e}^{-{\\rm i} \\theta_k\/2}V_k$ of the complexification $V_{\\mathbb C}$ of $V$ is \\emph{canonically} associated to the form $g$. On the real subspace $P$ the complex bilinear form $g$ is real and positive-definite. Our argument gives us canonical isomorphisms \n $$V \\ = \\ \\exp({\\rm i}\\pi\\Theta\/2)(P) \\ \\subset \\ P_{\\mathbb C} \\ = \\ V_{\\mathbb C},$$\n where $\\Theta:P \\to P$ is the self-adjoint operator which is multiplication by $\\theta_k$ on $P_k = {\\rm e}^{-{\\rm i}\\theta_k\/2} V_k$. Condition $(4)$ becomes the assertion that $\\Theta$ has trace-norm\\footnote{The trace-norm is the sum of the absolute values of the eigenvalues.} $||\\Theta ||_1 < 1$. 
\nThis shows that the space $Q_{\\mathbb C}(V)$ is parametrized by the pairs $(g_0, \\Theta)$, where $g_0$ is a positive-definite inner-product on $V$ and $\\Theta$ belongs to the convex open set $\\Pi(V,g_0)$ of operators in $V$ which are self-adjoint with respect to $g_0$ and satisfy $||\\Theta||_1 < 1$, i.e. the interior of the convex hull of the rank 1 orthogonal projections in $V$. In fact we have proved\n \n \\bigskip\n \n \\noindent{\\bf Proposition 2.4} \\ \\ $Q_{\\mathbb C}(V)$ {\\it is a fibre-bundle over the space of positive-definite inner products on $V$ whose fibre at a point $g_0$ is $\\Pi(V,g_0)$. Equivalently, choosing a reference inner-product on $V$, we have\n $$Q_{\\mathbb C}(V) \\ \\cong \\ {\\rm GL}(V) \\times_{{\\rm O}(V)} \\Pi(V).$$\n In particular, $Q_{\\mathbb C}(V)$ is contractible.}\n \n \\bigskip\n\nIt is an important fact that an allowable complex metric on $V$ remains allowable when restricted to any subspace $W$ of $V$. This follows from an analogous property of the trace-norm, but we shall give a direct proof, as its point of view on the angles $\\theta_i$ as critical values helps give a feeling for allowable complex metrics.\n\n\\bigskip \n\n \\noindent{\\bf Proposition 2.5} \\ \\ {\\it If $g \\in Q_{\\mathbb C}(V)$ and $W$ is any vector subspace of $V$ then $g|W$ belongs to $Q_{\\mathbb C}(W)$. }\n \n \\bigskip\n\n\\noindent{\\it Proof} \\ \\ \\ For any $g \\in Q_{\\mathbb C}(V)$ the function $v \\mapsto {\\rm arg}(g(v))$ is a smooth map from the real projective space $\\mathbb P(V)$ to the open interval $(-\\pi, \\pi) \\subset \\mathbb R$. By rescaling the basis elements $\\{e_k\\}$ we can write $g$ as $\\sum{\\rm e}^{{\\rm i}\\theta_k}y_k^2$. The numbers $\\theta_k$ are precisely the critical values of arg$(g)$. We shall order the basis elements so that\n$$\\pi \\ > \\ \\theta_1 \\ \\geq \\ \\theta_2 \\ \\geq \\ \\ldots \\ \\geq \\ \\theta_d \\ > \\ -\\pi.$$\n\n\nFor each vector subspace $A$ of $V$ let us write $\\theta^A$ and $\\theta_A$ for the supremum and infimum of arg$(g)$ on $\\mathbb P(A)$. Then we have\n$$\\theta_k \\ \\ = \\ \\ {\\rm sup}\\{\\theta_A: {\\rm dim}(A) = k\\} \\ \\ = \\ \\ {\\rm inf}\\{\\theta^A: {\\rm dim}(A) = d-k+1\\}.$$\nIt is enough to prove Proposition 2.5 when $W$ is a subspace of $V$ of codimension 1. In that case the preceding characterization of the critical values shows that if \n$\\theta'_1 \\geq \\ldots \\geq \\theta'_{d-1}$\nare the critical values of arg$(g|W)$ we have $\\theta_k \\geq \\theta'_k \\geq \\theta_{k+1}$. The critical values for $g|W$ therefore \\emph{interleave} those for $g$:\n$$\\theta_1 \\geq \\theta'_1 \\geq \\theta_2 \\geq \\theta'_2 \\geq \\ldots \\geq \\theta_{d-1} \\geq \\theta'_{d-1} \\geq \\theta_d.$$\nThis implies that $\\sum |\\theta'_k| \\leq \\sum |\\theta_k| < \\pi$, as we want. $ \\ \\spadesuit$\n\n\n \n\\bigskip\n\nIn Section 5 we shall need the following variant of the preceding formulation. \nSuppose that $Z$ is a $d$-dimensional complex vector space with a nondegenerate quadratic form $g$. (Any such pair $(Z,g)$ is isomorphic to $\\mathbb C^d$ with the standard form $\\sum z^2_k$.) Let $\\mathcal R(Z)$ denote the space of all $d$-dimensional real subspaces $A$ of $Z$ such that $g|A$ belongs to $Q_{\\mathbb C}(A)$. This is an open subset of the Grassmannian of all real subspaces of $Z$. 
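As an orienting example (ours): take $Z = \\mathbb C^d$ with $g = \\sum z_k^2$, and for real numbers $\\theta_k$ with $|\\theta_k| < \\pi$ set $A_{\\theta} = \\{({\\rm e}^{{\\rm i}\\theta_1\/2}y_1, \\ldots , {\\rm e}^{{\\rm i}\\theta_d\/2}y_d): y \\in \\mathbb R^d\\}$. Then $g|A_{\\theta} = \\sum {\\rm e}^{{\\rm i}\\theta_k}y_k^2$, so $A_{\\theta}$ belongs to $\\mathcal R(Z)$ whenever $\\sum_k|\\theta_k| < \\pi$, by Theorem 2.2; the case $\\theta = 0$ is the standard real subspace $\\mathbb R^d \\subset \\mathbb C^d$, and Proposition 2.6 below says in effect that every element of $\\mathcal R(Z)$ arises from such an $A_{\\theta}$ by applying an element of ${\\rm O}_{\\mathbb C}(Z)$.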
If $Z_{\\mathbb R}$ is any $d$-dimensional real vector subspace of $Z$ for which $g|z_{\\mathbb R}$ is real and positive-definite then the projection $A \\subset Z \\to Z_{\\mathbb R}$ is an isomorphism, for any non-zero element of its kernel would have the form i$v$ with $v \\in Z_{\\mathbb R}$, and so $g({\\rm i}v)$ would be real and negative, which cannot happen if $g|A$ is allowable.\n\n\\bigskip\n\n\\noindent{\\bf Proposition 2.6} \\ \\ {\\it The space $\\mathcal R(Z)$ is contractible, and is isomorphic to}\n$$ {\\rm O_{\\mathbb C}}(Z) \\times_{{\\rm O}(Z_{\\mathbb R})} \\Pi(Z_{\\mathbb R}).$$\n\n\\bigskip\n\n\\noindent{\\it Proof} \\ \\ This is essentially a reformulation of what has been said, but it may be helpful to relate the spaces $Q_{\\mathbb C}(V)$ and $\\mathcal R(Z)$ by considering, for a complex quadratic vector space $(Z,g)$ as above, the intermediate space $\\mathcal R(V;Z)$ of $\\mathbb R$-linear embeddings $f:V \\to Z$ of the real vector space $V$ such that $f^*(g)$ is allowable. This space has two connected components, corresponding to the orientation of the projection $V \\to Z_{\\mathbb R}$.\n\nThe groups GL$(V)$ and O$_{\\mathbb C}(Z)$ act by right- and left-composition on $\\mathcal R(V;Z)$, and each action is free. Thus $\\mathcal R(V;Z)$ is at the same time a principal GL$(V)$-bundle with base $\\mathcal R(Z)$ and a principal O$_{\\mathbb C}(Z)$-bundle with base $Q_{\\mathbb C}(V)$. But the Lie groups GL$(V)$ and O$_{\\mathbb C}(Z)$ are homotopy equivalent to their maximal compact subgroups, i.e. in both cases to the compact orthogonal group O$_d$. More precisely, the contractibility of $Q_{\\mathbb C}(V)$ implies that $\\mathcal R(V;Z)$ is homotopy-equivalent to the fibre O$_{\\mathbb C}(Z)f$ for any $f\\in \\mathcal R(V;Z)$. If we choose $f$ so that $f^*(g)$ is a positive-definite real form on $V$ this gives us a homotopy-equivalence $ {\\rm O}(V) \\to {\\rm O}_{\\mathbb C}(Z)f \\to \\mathcal R(V;Z)$. But O$(V)$ is also contained in and equivalent to the fibre $f{\\rm GL}(V)$ of the other fibration $\\mathcal R(V;Z) \\to \\mathcal R(Z)$, which implies the contractibility of its base $\\mathcal R(Z)$. \\ $\\spadesuit$\n\n\\bigskip\n\nThe last property of $Q_{\\mathbb C}(V)$ which we shall record briefly, for the sake of experts, is\n\n\\bigskip\n \n \n\\noindent{\\bf Proposition 2.7} \\ \\ {\\it The domain $Q_{\\mathbb C}(V)$ is holomorphically convex, i.e.\\ a `domain of holomorphy'.}\n\n\\bigskip\n\n\\noindent{\\it Proof} \\ \\ The Siegel domain $\\mathbb U(V)$ of complex-valued inner products with positive-definite real part on a real vector space $V$ is known to be a domain of holomorphy in $S^2(V^*_{\\mathbb C})$. So therefore is the product\n$$\\prod_{0 \\leq p \\leq d\/2} \\mathbb U(\\wedge^p(V))$$\ninside its ambient complex vector space. The space $Q_{\\mathbb C}(V)$ is the intersection of this product domain with the affine variety which is the natural embedding of $S^2(V^*_{\\mathbb C})$ in this ambient vector space, and so it too is a domain of holomorphy. \\ $\\spadesuit$\n\n\\bigskip\n\n\n\\bigskip\n \n\\noindent {\\bf The two-dimensional case}\n\n\\bigskip\n\nThe case $d = 2$ is especially simple because then the matrix $(\\det g)^{- 1\/2}g$ depends only on the conformal structure, and decouples from the volume element.\n\nA non-degenerate complex inner product $g$ on a 2-dimensional real vector space $V$ is determined up to a scalar multiple by its two distinct null-directions in the complexified space $V_{\\mathbb C}$. 
We can think of these as two points of the Riemann sphere $\\mathbb P(V_{\\mathbb C})$. Then $(\\det g)^{-1\/2}g$ has positive real part precisely when the two points lie one in each of the open hemispheres of the sphere $\\mathbb P(V_{\\mathbb C})$ separated by the real equatorial circle $\\mathbb P(V)$. When the two points move to distinct points of the equator we get a Lorentzian inner product, with its two light-directions in $\\mathbb P(V)$.\n\nA point of the sphere $\\mathbb P(V_{\\mathbb C})$ not on the equator can be regarded as a complex structure on the real vector space $V$, and the two hemispheres correspond to the two possibilities for the orientation which a complex structure defines. On a smooth surface $\\Sigma$ any almost-complex structure is integrable, so a point of Met$_{\\mathbb C}(\\Sigma)$ is a pair of complex structures on $\\Sigma$ of opposite orientations, together with a complex volume element. The Riemannian metrics are those for which the two complex structures are complex-conjugate to each other, and the volume element is real.\n\nWhen $d=2$ the domain $Q_{\\mathbb C}(V)$ is thus a 3-dimensional polydisc, one disc for each of the complex structures, and the third for the volume-element.\n\n\\bigskip\n\n\\newpage\n\n\\noindent{\\bf The one-dimensional case: electric circuits}\n\n\\bigskip\n\nOur concept of an allowable complex metric does not at first look interesting in the one-dimensional case, but if we allow \\emph{singular} 1-manifolds --- identified with finite graphs $M$ --- we find that complex metrics arise naturally in electrical circuit theory. A Riemannian metric on $M$ is determined (up to isometry) by the assignment of a positive real number to each edge of the graph, and can be interpreted as its \\emph{resistance} when the edge is regarded as a wire in an electrical circuit. A state of the system (perhaps with current entering or leaving at each node) is determined by a continuous potential function $\\phi:M \\to \\mathbb R$ which is smooth on each closed edge, and whose gradient is the current flowing in the circuit. Because $\\phi$ is determined only up to adding a constant we shall normalize it by $\\int_M \\phi = 0.$ The energy of a state is \n$$\\frac{1}{2}\\int_M ||\\nabla \\phi ||^2 {\\rm d}s,$$\nand so the system can be regarded as a free massless field theory on the graph: in particular the vacuum expectation value $\\langle \\phi(x)\\phi(y)\\rangle$, when $x$ and $y$ are two nodes of the graph, is the ratio of the potential-difference $\\phi(x) - \\phi(y)$ to the current flowing in at $x$ and out at $y$ when no current is allowed to enter or leave at other nodes.\n\nWe encounter complex metrics when we consider a circuit in which an alternating current with frequency $\\omega$ is flowing, and in which each branch has not only a resistance $R$ but also a positive inductance $L$ and a positive capacitance $C$. In that situation the volume element $\\sqrt g = R$ is replaced by the \\emph{impedance} $$\\sqrt g = R + {\\rm i}\\omega L + 1\/{\\rm i} \\omega C,$$ \na complex number which defines an allowable metric because Re$\\sqrt g > 0$.\n\nQuite apart from electric circuitry, however, singular one-dimensional manifolds with allowable complex metrics can arise in quantum field theory as the Gromov-Hausdorff limits of non-singular space-times of higher dimension. 
For example, if we embed a smooth graph $M$ in $\\mathbb R^3$, then for almost all sufficiently small $\\varepsilon >0$ the boundary of the $\\varepsilon$-neighbourhood of $M$ is a smooth surface $M_{\\varepsilon}$ whose limit is $M$ as $\\varepsilon \\to 0$: this is one way of viewing the passage from closed string theory to quantum field theory.\n\n\\bigskip\n\n\\newpage\n\n\\noindent{\\bf Appendix to Section 2: proof of 2.3}\n\n\\bigskip\n\nIf $V$ is a real vector space with an allowable complex metric then the preceding discussion shows that it can be identified with the subspace\n$$V \\ = \\ \\exp({\\rm i} \\Theta \/2)(\\mathbb E)$$\nof $\\mathbb M_{\\mathbb C}$. Here $\\mathbb E = \\mathbb R^d$ is the standard Euclidean subspace of $\\mathbb M_{\\mathbb C}$, and $\\Theta$ is a real diagonal matrix whose entries $\\theta_1, \\ldots , \\theta_d$ belong to the `generalized octahedron' $\\Pi_0 \\subset \\mathbb R^d$ consisting of those $\\Theta$ whose diagonal entries $\\theta_1, \\ldots , \\theta_d$ satisfy the inequality (4). We want to prove that $\\exp({\\rm i} \\Theta \/2)$ maps each $k$-tuple ${\\bf x} = \\{x_1,\\ldots , x_k\\}$ of distinct points of $\\mathbb E$ to a point of the holomorphic envelope $\\hat \\mathcal U_k$ of the Wightman permuted extended tube $\\mathcal U_k$. In fact we shall prove the stronger statement that $\\exp(\\rm i\\Theta\/2)({\\bf x}) \\in \\hat \\mathcal U_k$ when $\\Theta$ is a \\emph{complex} diagonal matrix with Re$(\\Theta) \\in \\Pi_0$.\n\n\n\nThe crucial fact is that $\\Pi_0$ is the convex hull of its intersection $\\Pi_{00}$ with the coordinate axes in $\\mathbb R^d$, (i.e.\\ $\\Pi_{00}$ consists of the diagonal matrices with only one entry $\\theta_r$ non-zero, and $-\\pi < \\theta_r < \\pi$). Our strategy is to show that $\\exp(\\rm i \\Theta \/2)({\\bf x}) \\in \\mathcal U_k$ when Re$(\\Theta) \\in \\Pi_{00}$, and to deduce that the same is true when Re$(\\Theta)$ belongs to the convex hull $\\Pi_0$. The essential tool is Bochner's `tube theorem' ([H\\\"or] Thm 2.5.10), which asserts that if $P$ is a connected open subset of $\\mathbb R^d$ then a holomorphic function defined in the tube domain $P\\times \\rm i\\mathbb R^d$ extends holomorphically to the tube domain $P' \\times \\rm i\\mathbb R^d$, where $P'$ is the convex hull of $P$.\n\nHaving fixed a $k$-tuple ${\\bf x}$ in $\\mathbb M_{\\mathbb C}$, let us first show that if Re$(\\Theta) \\in \\Pi_{00}$ then $\\exp({\\rm i} \\Theta \/2)({\\bf x})$ is contained in $\\mathcal U_k$. Suppose that the non-zero diagonal element of $\\Theta$ is in the $r^{\\rm th}$ place. Because $\\mathcal U_k$ in invariant under the orthogonal group O$(\\mathbb E)$ we can assume that the $r^{\\rm th}$ basis vector $e_r$ of $\\mathbb E$ is the Wick-rotated time-axis of $\\mathbb M$, so that $e_r$ belongs to $\\rm i C$, where $C$ is the forward light-cone in $\\mathbb M$. With respect to the real structure $\\mathbb M_{\\mathbb C} = \\mathbb M \\oplus \\rm i \\mathbb M$ the imaginary part of the $k$-tuple $${\\bf y} \\ = \\ \\exp(\\rm i\\Theta\/2)({\\bf x})$$ lies on the line $\\mathbb R e_r$, and so, after ordering the points appropriately, ${\\bf y}$ will belong to the forward tube in $\\mathbb M_{\\mathbb C}$ providing the points of ${\\bf x}$ have distinct $r^{\\rm th}$ coordinates. 
But if the $r^{\\rm th}$ coordinates of Im$({\\bf y})$ are not distinct, we can make them so by choosing a unit vector $e \\in \\mathbb E$ perpendicular to $e_r$ such that the coordinates $\\langle {\\bf x}, e \\rangle$ are distinct, and rotating the $k$-tuple ${\\bf y}$ by a small amount in the $\\{e,e_r\\}$-plane, again using the O$(\\mathbb E)$-invariance of $\\mathcal U_k$.\n\nWe now know that $\\mathcal U_k$ contains an open neighbourhood of $\\Pi_{00} \\times \\rm i\\mathbb R^d$ in $\\mathbb C^d$. To apply Bochner's theorem we need to know that the envelope $\\hat \\mathcal U_k$ contains a \\emph{tube} $P \\times \\rm i\\mathbb R^d$, where $P$ is an open neighbourhood of $\\Pi_{00}$ in $\\mathbb R^d$. In fact it is enough, by induction, to treat the case $d = 2$, for that case, together with Bochner's theorem, implies that a function holomorphic in a neighbourhood of $(\\Pi_0(\\mathbb R^r) \\cup \\Pi_{00}(\\mathbb R^{d-r}) \\times \\rm i\\mathbb R^d$ is holomorphic in a neighbourhood of $(\\Pi_0(\\mathbb R^{r+1}) \\cup \\Pi_{00}(\\mathbb R^{d-r-1})) \\times \\rm i\\mathbb R^d$.\n\nTo reduce the $d=2$ case to the standard Bochner theorem it is enough to prove the following\n\n\\bigskip\n\n\\noindent{\\bf Lemma 2.8} \\ {\\it Let $L$ be the L-shaped subset $(\\{0\\} \\times [0,1)) \\cup ([0,1) \\times \\{0\\})$ of the quadrant $(\\mathbb R_+)^2$. Then any holomorphic function $F$ defined in a neighbourhood of $L \\times \\rm i\\mathbb R^2 \\subset \\mathbb C^2$ can be extended holomorphically to $P \\times \\rm i\\mathbb R^2$, where $P$ is the intersection of $(\\mathbb R_+)^2$ with a neighbourhood of $L$ in $\\mathbb R^2$.}\n\n\\bigskip\n\n\\noindent{\\it Proof} \\ For any $t \\in (0,1\/2)$ we define $D_t$ as the intersection of the two unit discs $\\{z \\in \\mathbb C: |z - (1-t)| \\leq 1\\}$ and $\\{z \\in \\mathbb C: |z + (1-t)| \\leq 1 \\}$. Then we define $f:D_t \\to \\mathbb C^2$ by\n$$f(z) \\ = \\ (-\\log ((1-t)-z), -\\log ((1-t) +z).$$\nThe map $f$ is a holomorphic embedding in a neighbourhood of $D_t$ in $\\mathbb C$, and Re $f(\\partial D_t)$ is contained in the coordinate axes of $\\mathbb R^2$. If we choose \n $T = (1 - {\\rm e}^{-1})\/2$ then Re $f(\\partial D_T)$ is precisely the closure of $L$.\n \nFor any $\\eta \\in \\mathbb R^2$, define $f_{\\eta}:D_T \\to \\mathbb C^2$ by $ f_{\\eta}(z) = f(z)+ \\rm i\\eta$. Then the holomorphic map $F$ is defined in a neighbourhood of the curve $f_{\\eta}(\\partial D_T)$, and if we can show that $F \\circ f_{\\eta}$ extends holomorphically over $D_T$ then we shall have continued $F$ analytically to the tube domain $f(\\mathring D_T) + \\rm i\\mathbb R^2$, and the proof will be complete.\n\nWhen a function $F$ is holomorphic in an open domain containing the boundary of a holomorphically-embedded disc --- in this case $f_{\\eta}(D_T)$ --- then to show that $F$ can be extended over the whole disc the standard method is to show that the disc can be moved holomorphically, keeping its boundary within the domain of $F$, until the whole disc is contained in the domain of $F$; the Cauchy integral formula then defines the desired extension. In our case we can deform $f_{\\eta}(D_T)$ through the family $f_{\\eta}(D_t)$ as $t$ decreases from $T$ towards 0. As $t \\downarrow 0$ the domain $D_t$ shrinks to the origin in $\\mathbb C$, and $f_{\\eta}(D_t) \\to \\rm i\\eta$, which is contained in the domain of $F$. 
\\ \\ $\\spadesuit$ \n\n\n\n\\newpage\n\n\\section{Quantum field theories as functors}\n\nThe traditional Wightman approach to quantum field theory is not well-adapted to important examples such as gauge theories, especially when the space-time is not flat. Another formulation --- potentially more general --- views a $d$-dimensional field theory as something more like a group representation, except that the group is replaced by a {\\it category} $\\mathcal C_d^{\\mathbb C}$ of space-time manifolds. The guiding principle of this approach is to preserve as much as possible of the path-integral intuition. We shall present it very briefly here, with minimal motivation.\n\nRoughly, the objects of the category $\\mathcal C_d^{\\mathbb C}$ are compact smooth $(d-1)$-dimensional manifolds $\\Sigma$ equipped with complex metrics $g \\in {\\rm Met}_{\\mathbb C}(\\Sigma)$. A morphism from $\\Sigma_0$ to $\\Sigma_1$ is a cobordism $M$ from $\\Sigma_0$ to $\\Sigma_1$, also with a complex metric. We shall write $M:\\Sigma_0 \\leadsto \\Sigma_1$ to indicate a cobordism. Composition of morphisms is by concatenation of the cobordisms. The reason for the word `roughly' is that, because there is no canonical way to give a smooth structure to the concatenation of two smooth cobordisms, we must modify the definition slightly so that an object of $\\mathcal C_d^{\\mathbb C}$ is not a $(d-1)$-manifold but rather is a {\\it germ} of a $d$-manifold along a given $(d-1)$-manifold $\\Sigma$ --- i.e. $\\Sigma$ is given as a closed submanifold of a $d$-manifold $U$, but any two open neighbourhoods of $\\Sigma$ in $U$ define the same object of $\\mathcal C_d^{\\mathbb C}$. We require $\\Sigma$ to be \\emph{two-sided} in $U$, and equipped with a \\emph{co-orientation} which tells us which side is incoming and which is outgoing. (Nevertheless, we shall usually suppress the thickening $U$, the co-orientation, and the complex metric $g$ from the notation.) Furthermore, two morphisms $M$ and $M'$ from $\\Sigma_0$ to $\\Sigma_1$ are identified if there is an isometry $M \\to M'$ which is the identity on the germs $\\Sigma_0$ and $\\Sigma_1$. (We shall return below to the question of the existence of identity morphisms in the cobordism category.)\n\n\n\\bigskip\n\nIn terms of the category $\\mathcal C_d^{\\mathbb C}$ we make the \n\n\\bigskip\n\n\\noindent{\\bf Definition} \\ {\\it A $d$-dimensional field theory is a \\emph{holomorphic} functor from $\\mathcal C_d^{\\mathbb C}$ to the category of Fr\\'echet topological vector spaces and \\emph{nuclear} (i.e. trace-class) linear maps which takes disjoint unions to tensor products.}\n\n\\bigskip\n\nUnfortunately, almost every word in this definition requires further explication.\n\n We shall write $E_{\\Sigma}$ for the vector space associated to an object $\\Sigma$, and $Z_M: E_{\\Sigma_0} \\to E_{\\Sigma_1}$ for the linear map associated to a cobordism $M: \\Sigma_0 \\leadsto \\Sigma_1$. 
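It may help to keep in mind the simplest case $d = 1$ as an informal illustration (the Hilbert space and the operator $H$ below are assumptions of the illustration, not data supplied by the definition). An object is a point inside a germ of 1-manifold with complex metric, and $E_{\\rm pt}$ is a fixed Hilbert space; a cobordism is an interval with metric $\\lambda(t)\\,dt^2$, to which we may associate the complex length $\\tau = \\int \\lambda^{1\/2}\\,dt$ (taking the square root with positive real part), so that ${\\rm Re}\\,\\tau > 0$ for an allowable metric. A natural family of examples is $Z = {\\rm e}^{-\\tau H}$ for a self-adjoint $H$ which is bounded below and has ${\\rm e}^{-\\epsilon H}$ of trace class for every $\\epsilon > 0$: these operators are nuclear, depend holomorphically on $\\tau$, satisfy ${\\rm e}^{-\\tau_1 H}\\,{\\rm e}^{-\\tau_2 H} = {\\rm e}^{-(\\tau_1 + \\tau_2)H}$ under concatenation of intervals, and degenerate to the unitary, but no longer trace-class, operators ${\\rm e}^{-{\\rm i}tH}$ as ${\\rm Re}\\,\\tau \\to 0$.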
To say that the functor is `holomorphic' means that, for a given smooth manifold-germ $\\Sigma \\subset U$, the topological vector spaces $E_{\\Sigma}$ form a locally trivial holomorphic vector bundle on the complex manifold Met$_{\\mathbb C}(U)$ of complex metrics on $U$, and that the maps $Z_M:E_{\\Sigma_0} \\to E_{\\Sigma_1}$ define a morphism of holomorphic vector bundles on the manifold Met$_{\\mathbb C}(M)$ (to which the bundles $\\{E_{\\Sigma_0}\\}$ and $\\{E_{\\Sigma_1}\\}$ are pulled back).\n\nIn practice, theories are usually defined on cobordism categories where the manifolds are required to have additional structure such as an orientation or a spin-structure. These can easily be included, but are not relevant to our account. For the same reason we do not mention that, for a theory including fermions, the vector spaces $E_{\\Sigma}$ will have a mod 2 grading, and the usual sign-conventions must be applied when we speak of their tensor products.\n\n\\bigskip\n\nBecause our objects $\\Sigma \\subset U$ are really germs of $d$-manifolds, we automatically have a family of cobordisms $\\Sigma' \\leadsto \\Sigma$ embedded in $U$, each diffeomorphic to the trivial cobordism $\\Sigma \\times [0,1]$ with the outgoing boundary $\\Sigma \\times \\{1\\}$ corresponding to $\\Sigma \\subset U$. These cobordisms can be ordered by inclusion, giving us a direct system of objects $\\Sigma'$ with cobordisms to $\\Sigma$. Similarly, looking downstream rather than upstream, we have a family of cobordisms $\\Sigma \\leadsto \\Sigma''$ contained in $U$, giving us an inverse system of objects $\\Sigma''$ to which $\\Sigma$ maps. For any field theory, therefore, there are natural maps\n$$\\lim_{\\rightarrow} E_{\\Sigma'} \\ \\ \\to \\ \\ E_{\\Sigma} \\ \\ \\to \\ \\ \\lim_{\\leftarrow} E_{\\Sigma''},$$\nwhich we shall write\n$$\\check E_{\\Sigma} \\ \\to \\ E_{\\Sigma} \\ \\to \\ \\hat E_{\\Sigma},$$\nintroducing the notations $\\check E_{\\Sigma} = \\varinjlim E_{\\Sigma'}$ and $\\hat E_{\\Sigma} = \\varprojlim E_{\\Sigma''}$ for the upstream and downstream limits. \n\nWe shall assume the functor has the \\emph{continuity} property that each of these maps is injective with dense image. The space $\\hat E_{\\Sigma}$, being the inverse-limit of a countable sequence of nuclear maps of Fr\\'echet spaces, is a \\emph{nuclear} Fr\\'echet space\\footnote {A very useful concise account of nuclear spaces can be found in [C].}. The other space $\\check E_{\\Sigma}$ is also nuclear, but usually not metrizable: it is the dual of the nuclear Fr\\'echet space $\\hat E_{\\Sigma^*}$, where $\\Sigma^*$ denotes the germ $\\Sigma$ with its co-orientation reversed. As this is such a basic point, we have included a proof as an Appendix at the end of this section. \n\nWhen we have a cobordism $M:\\Sigma_0 \\leadsto \\Sigma_1$ we automatically get maps $\\check E_{\\Sigma_0} \\to \\check E_{\\Sigma_1}$ and $\\hat E_{\\Sigma_0} \\to \\hat E_{\\Sigma_1}$.\nThe space $E_{\\Sigma}$ with which we began plays only a provisional role in the theory, serving to construct the fundamental nuclear spaces between which it is sandwiched. \n\n \n\n\\bigskip\n\nThe essential requirement we place on the functor is that it takes disjoint unions to tensor products, i.e., we are given an isomorphism of functors\n$$\\check E_{\\Sigma} \\otimes \\check E_{\\Sigma'} \\ \\to \\ \\check E_{\\Sigma \\sqcup \\Sigma'} ,$$\nwhich is associative and commutative in terms of the usual isomorphisms for the disjoint union and tensor product. 
There is a unique natural concept of tensor product here, because all the vector spaces are nuclear, and $\\check E_{\\Sigma} \\otimes \\check E_{\\Sigma'} \\cong \\check E_{\\Sigma \\sqcup \\Sigma'}$ is equivalent to $\\hat E_{\\Sigma} \\otimes \\hat E_{\\Sigma'} \\cong \\hat E_{\\Sigma \\sqcup \\Sigma'}$. The functoriality means that we are assuming \n$$Z_M \\otimes Z_{M'} \\ = \\ Z_{M \\sqcup M'}$$\nfor two cobordisms $M$ and $M'$. \n\n\\bigskip\n\nThe tensoring assumption implies that $E_{\\emptyset} = \\mathbb C$, where $\\emptyset$ denotes the empty $(d-1)$-manifold. Thus for a \\emph{closed} $d$-manifold $M$ we have a \\emph{partition function} $Z_M \\in {\\rm End}(E_{\\emptyset}) = \\mathbb C$. The whole structure of the theory is a way of expressing the sense in which the number $Z_M$ depends \\emph{locally} on $M$.\n\n\n\\bigskip \n\nIn this discussion we have still committed an abuse of language: the ``category\" $\\mathcal C_d^{\\mathbb C}$ is not really a category because it does not have identity maps. We could deal with this by agreeing that an isomorphism $\\Sigma_0 \\to \\Sigma_1$ is a cobordism of zero length, but then these degenerate cobordisms are represented by operators which are not nuclear. The true replacement for the missing identity operators is our assumption that the maps $\\check E_{\\Sigma} \\to \\hat E_{\\Sigma}$ are injective with dense image. To avoid the abuse of language we can say that a field theory is a functor $\\Sigma \\mapsto E_{\\Sigma}$ from $(d-1)$-manifolds and isomorphisms to vector spaces, together with a transformation $Z_M: E_{\\Sigma_0} \\to E_{\\Sigma_1}$ for each cobordism. Whichever line we take, we must assume that an isomorphism $f:\\Sigma_0 \\to \\Sigma_1$ of germs of $d$-manifolds induces an isomorphism $f_*: E_{\\Sigma_0} \\to E_{\\Sigma_1}$ which depends smoothly on $f$, in the sense that for any family $P \\times \\Sigma_0 \\to \\Sigma_1$ parametrized by a finite-dimensional manifold $P$ the induced map $P \\times E_{\\Sigma_0} \\to E_{\\Sigma_1}$ is smooth.\n \n\n \n \\bigskip\n\n\n\\bigskip\n\nLet us explain briefly \nhow to get from this functorial picture to the traditional language of local observables and vacuum expectation values. For a point $x$ of a $d$-manifold $M$ we define the vector space $\\mathcal O_x$ of observables at $x$ as follows. We consider the family of all closed discs $D$ smoothly embedded in $M$ which contain $x$ in the interior $\\mathring D$. If $D' \\subset \\mathring D$ then $D \\setminus \\mathring D'$ is a cobordism $\\partial D' \\leadsto \\partial D$ and gives us a trace-class map $E_{\\partial D'} \\to E_{\\partial D}$. We therefore have an inverse system $\\{E_{\\partial D}\\}$ indexed by the discs $D$, and we define $\\mathcal O_x$ as its inverse-limit.\n\nNow suppose that $M$ is closed, and that $x_1, \\ \\ldots \\ x_k$ are distinct points of $M$. Let $D_1, \\ \\ldots \\ D_k$ be disjoint discs in $M$ with $x_i \\in \\mathring D_i$. Then $M' = M \\setminus \\bigcup \\mathring D_i$ is a cobordism from $\\bigsqcup \\partial D_i$ to the empty $(d-1)$-manifold $\\emptyset$, and defines $Z_{M'}:E_{\\sqcup \\partial D_i} \\to E_{\\emptyset} =\\mathbb C$. 
Using the tensoring property we can write this\n$$Z_{M'} : \\bigotimes E_{\\partial D_i} \\ \\longrightarrow \\ \\mathbb C,$$\nand then we can pass to the inverse-limits to get the expectation-value map\n$$\\bigotimes \\mathcal O_{x_i} \\ \\longrightarrow \\ \\mathbb C.$$\n\n\n\nWe might prefer the language of ``field operators\" to that of vacuum expectation values. If the space-time $M$ is a cobordism $\\Sigma_0 \\leadsto \\Sigma_1$, then for any $x$ in the interior of $M$ --- say $x \\in \\mathring D \\subset M$ --- the cobordisms $M \\setminus \\mathring D$ from $\\partial D \\sqcup \\Sigma_0$ to $\\Sigma_1$ define maps \n$$\\mathcal O_x \\ \\to \\ {\\rm Hom}_{nucl}(E_{\\Sigma_0}; E_{\\Sigma_1}),$$\nwhile if $x$ lies on a hypersurface $\\Sigma$ an observable at $x$ defines a map $\\check E_{\\Sigma} \\to \\hat E_{\\Sigma}$, i.e. it acts on $E_{\\Sigma}$ as an unbounded operator. But on a Lorentzian space-time $M$ we sometimes want to make the observables at all points $x \\in M$ act on a \\emph{single} vector space, and to ask whether they commute when space-like separated. We shall postpone that discussion to Section 5.\n\n\\bigskip\n\nOne observable which we should mention is the \\emph{energy-momentum tensor}. If we think of a field theory as analogous to a group representation then the energy-momentum tensor is the analogue of the induced representation of Lie algebras: for every cobordism $M:\\Sigma_0 \\leadsto \\Sigma_1$ it is the derivative of the operator $Z_M$ with respect to the metric of $M$. This makes it a distributional symmetric tensor-density $T^{ij}$ on $\\mathring M$ with values in Hom$_{nucl}(E_{\\Sigma_0};E_{\\Sigma_1})$. If we cover $M$ with small balls $D_i$, then by using a partition of unity we can write an infinitesimal change in the metric as the sum of contributions supported in the interiors of the $D_i$, and so the change in $Z_M$ is the sum of contributions coming from the spaces $E_{\\partial D_i}$, and hence from field operators placed at the centres of the balls $D_i$. But to develop this picture properly needs much more discussion, which we shall not embark on here; it probably requires the assumption that the theory is asymptotically conformal at short distances. The case of a 2-dimensional conformal theory is treated fully in Section 9 of [Se2].\n\n\\bigskip\n \n\n\n\n\\noindent {\\bf Lorentzian manifolds}\n\n\\bigskip\n\n\nThere is a category $\\mathcal C^{\\rm Lor}_d$ which at first sight looks more relevant to quantum field theory than $\\mathcal C_d^{\\mathbb C}$. Its objects are compact Riemannian manifolds of dimension $(d-1)$ and its morphisms are $d$-dimensional cobordisms equipped with real Lorentzian metrics. Fredenhagen and his coworkers (cf. [BF]) have developed the theory of quantum fields in curved space-time using a version of this category. The category $\\mathcal C^{\\rm Lor}_d$ lies ``on the boundary\" of the category $\\mathcal C_d^{\\mathbb C}$. In Section 5 we shall discuss the sense in which a representation of $\\mathcal C_d^{\\mathbb C}$ has a ``boundary value\" on $\\mathcal C_d^{\\rm Lor}$, at least if it is \\emph{unitary}. \n\n\\bigskip\n\n\n\n\\noindent {\\bf Unitarity}\n\n\n\n\\nopagebreak\n\n\n\\bigskip\n\nSo far we have not asked for an inner product on the topological vector space $E_{\\Sigma}$ associated to a $(d-1)$-manifold $\\Sigma$. 
Our main concern in this work is with {\\it unitary} theories, even though not all interesting quantum field theories are unitary.\n\nTo define unitarity in our context, recall that, if $\\Sigma^*$ denotes the manifold germ $\\Sigma$ with its co-orientation reversed, then $\\check E_{\\Sigma^*}$ is the dual topological vector space to $\\hat E_{\\Sigma}$. \nFurthermore, a cobordism $M: \\Sigma_0 \\leadsto \\Sigma_1$ can also be regarded as a cobordism from $\\Sigma^*_1$ to $\\Sigma^*_0$, and the two maps $E_{\\Sigma_0} \\to E_{\\Sigma_1}$ and $E_{\\Sigma_1^*} \\to E_{\\Sigma_0^*}$ are automatically algebraic transposes of each other. Thus $\\Sigma \\mapsto \\Sigma^*$ is a {\\it contravariant} functor.\n\n\nIn a unitary theory we shall not expect the vector space $E_{\\Sigma}$ to have an inner product for every $(d-1)$-manifold $\\Sigma$. A complex metric $g \\in {\\rm Met}_{\\mathbb C}(\\Sigma)$ has a complex conjugate $\\bar g$.\nIf we write $\\bar \\Sigma$ for $\\Sigma$ with the metric $\\bar g$ but with its co-orientation unchanged\\footnote{If our theory is defined on a category of \\emph{oriented} space-time manifolds, we must give $\\bar \\Sigma$ the opposite orientation to $\\Sigma$, although the same co-orientation in $U$.} then $\\Sigma \\mapsto \\bar \\Sigma$ is a \\emph{covariant} functor. It is natural to require that there is an antilinear involution\n$$E_{ \\bar\\Sigma} \\cong \\bar E_{\\Sigma}. \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ (5)$$\n\n\n\n\\bigskip\n\nFor a theory satisfying condition (5) the conjugate dual of the vector space $\\check E_{\\Sigma}$ is $\\hat E_{\\bar\\Sigma^*}$. We expect $\\check E_{\\Sigma}$ to have an inner product only when $\\Sigma \\cong \\bar\\Sigma^*$, i.e. when the $d$-manifold germ $\\Sigma \\subset U$ admits a reflection with fixed-point set $\\Sigma$ which reverses the co-orientation and changes the metric to its complex conjugate.\nSuch a hypersurface-germ $\\Sigma$ will be called {\\it time-symmetric}. Its metric is real and Riemannian when restricted to the $(d-1)$-dimensional hypersurface $\\Sigma$ itself.\n\n\\bigskip\n\nWe can now define a {\\it unitary} theory as one which satisfies two conditions:\n\n\\bigskip\n\n(i) \\ \\ the reality condition (5), and\n\n\\bigskip\n\n(ii) \\ \\ {\\it reflection-positivity}, in the sense that when we have a time-symmetric hypersurface $\\Sigma \\cong \\bar \\Sigma^*$ the hermitian duality between $\\check E_{\\Sigma}$ and $\\check E_{\\bar \\Sigma}$ is positive-definite. \n\n\\bigskip\n\nFor a unitary theory, when we have a time-symmetric germ $\\Sigma$ we can complete the pre-Hilbert space $\\check E_{\\Sigma}$ to obtain a Hilbert space $E^{Hilb}_{\\Sigma}$ with\n$$\\check E_{\\Sigma} \\ \\to \\ E_{\\Sigma}^{Hilb} \\ \\to \\ \\hat E_{\\Sigma}.$$ \n\n\n\n\n\n\n\n\n\n\n\\noindent{\\bf The theory on flat tori}\n\n\\bigskip\n\nThe partition function of a theory on oriented flat Riemannian tori already gives us a lot of information about the theory. The moduli space of such tori is the double-coset space$${\\rm O}_d \\backslash {\\rm GL}_d(\\mathbb R)\/{\\rm SL}_d(\\mathbb Z) \\ \\cong \\ Q(\\mathbb R^d)\/{\\rm SL}_d(\\mathbb Z),$$\nwhere $Q(\\mathbb R^d) = {\\rm O}_d\\backslash {\\rm GL}_d(\\mathbb R)$ is the space of positive-definite real $d \\times d$ matrices. This space is an orbifold, so the partition function is best described as a smooth function $Z: Q(\\mathbb R^d) \\to \\mathbb C$ which is invariant under SL$_d(\\mathbb Z)$. 
Our axioms imply that $Z$ extends to a \\emph{holomorphic} function\n$$Q_{\\mathbb C}(\\mathbb R^d) \\ \\to \\ \\mathbb C,$$\nbut they also imply very strong constraints beyond that. Notably, each choice of a surjection $\\mathbb Z^d = \\pi_1(M) \\to \\mathbb Z$ gives us a way of writing the torus $M$ as a cobordism $\\tilde M : \\Sigma \\leadsto \\Sigma$ from a $(d-1)$-dimensional torus $\\Sigma$ to itself, and then we have $Z(M) = {\\rm trace} (Z_{\\tilde M})$, where \\mbox{$Z_{\\tilde M}:E_{\\Sigma} \\to E_{\\Sigma}$} is a nuclear operator in the vector space $E_{\\Sigma}$, which is graded by the characters $\\chi$ of the translation-group $T_{\\Sigma}$ of $\\Sigma$. More explicitly, $M$ is constructed from the product manifold $\\Sigma \\times [0,t]$ by attaching the ends to each other after translating by a vector $\\xi \\in T_{\\Sigma}$, and we have\n$$Z(A,t,\\xi) \\ = \\ \\sum_{i,\\chi} n_{i,\\chi }\\chi(\\xi) \\ {\\rm e}^{-\\lambda_i t},$$\nwhere $\\{\\lambda_i = \\lambda_i(A) \\}$ is the sequence (tending to $+\\infty$) of eigenvalues of the Hamiltonian operator on $E_{\\Sigma}$, and the $n_{i,\\chi}$ are non-negative integers which, for each $i$, vanish for all but finitely many characters $\\chi$. \n\n\\bigskip\n\n\n\\noindent{\\bf Appendix to Section 3: \\ The duality $(\\check E_{\\Sigma})^* \\ \\cong \\ \\hat E_{\\Sigma^*}$}\n\n\\bigskip\n\nTo keep things as general as possible, we suppose that $\\Sigma \\mapsto E_{\\Sigma}$ is a functor from the $d$-dimensional cobordism category to a category of metrizable topological vector spaces and nuclear maps. We suppose also that the category of vector spaces is equipped with a tensor product functor\\footnote{For example, we could work with the category of Hilbert spaces with the natural Hilbert space tensor product.} which is coherently associative and commutative, and that we are given natural isomorphisms $E_{\\Sigma_1} \\otimes E_{\\Sigma_2} \\ \\to \\ E_{\\Sigma_1 \\sqcup \\Sigma_2}$.\n\n\\bigskip\n\nComposable cobordisms\n$\\Sigma_1 \\leadsto \\Sigma_2 \\leadsto \\Sigma_3$ \n give us maps\n$$E_{\\Sigma_1} \\to E_{\\Sigma_2} \\to E_{\\Sigma_3}. \\ \\qquad \\ \\qquad \\ (6)$$\nBy reinterpreting $\\Sigma_1 \\leadsto \\Sigma_2$ as a cobordism $\\Sigma_1 \\sqcup \\Sigma_2^* \\leadsto \\emptyset$ we get a map $E_{\\Sigma_1} \\otimes E_{\\Sigma_2^*} \\to \\mathbb C$, and hence $E_{\\Sigma_1} \\to (E_{\\Sigma_2^*})^*$. Similarly, we can reinterpret $\\Sigma_2 \\leadsto \\Sigma_3$ as $\\emptyset \\leadsto \\Sigma_2^* \\sqcup \\Sigma_3$, which gives $(E_{\\Sigma_2^*})^* \\to E_{\\Sigma_3}$. It is easy to see that the composite $E_{\\Sigma_1} \\to (E_{\\Sigma_2^*})^* \\to E_{\\Sigma_3}$ coincides with $E_{\\Sigma_1} \\to E_{\\Sigma_2} \\to E_{\\Sigma_3}$. \n\nYet again, performing the reinterpretations in the reverse order, we get maps\n$$(E_{\\Sigma^*_1})^* \\to E_{\\Sigma_2} \\to (E_{\\Sigma^*_3})^*$$\nwhose composite is the transpose of the map induced by the composite cobordism $\\Sigma_3^* \\leadsto \\Sigma_1^*$.\n\n\\bigskip\n\nNow suppose that we have an infinite sequence of cobordisms \n$$\\ldots \\leadsto \\Sigma_{i+1} \\leadsto \\Sigma_{i} \\leadsto \\Sigma_{i-1} \\leadsto \\ldots \\ , \\qquad \\qquad (7)$$ indexed by $i \\geq 0$, which form the downstream tail of a manifold-germ $\\Sigma$, i.e. the sequence which we used above to define the space $\\hat E_{\\Sigma} = \\lim_{\\leftarrow} E_{\\Sigma_i}$. 
\nLet us perform the two manipulations that we performed on (6) alternately on the sequence (7), thereby obtaining a sequence whose even terms are $E_{\\Sigma_{2i}}$ and whose odd terms are $(E_{\\Sigma^*_{2i+1}})^*$. The inverse-limit of the whole sequence is the same as that of any cofinal subsequence. Considering the cofinal subsequence of even terms shows that the inverse-limit is $\\hat E_{\\Sigma}$. But the inverse-limit of the cofinal sequence of odd terms is\n$$\\lim_{\\leftarrow} \\ (E_{\\Sigma_{2i+1}^*})^* \\ = \\ (\\lim_{\\rightarrow} E_{\\Sigma^*_{2i+1}})^*.$$ This shows that $\\hat E_{\\Sigma} \\cong (\\check E_{\\Sigma^*})^*$. But, because $\\hat E_{\\Sigma}$ is automatically a nuclear Fr\\'echet space, we can dualize again and conclude that $(\\hat E_{\\Sigma})^* \\cong \\check E_{\\Sigma^*}$ also.\n\n\\bigskip \n\n\n\\section{Some analogies from representation theory}\n\n\\nopagebreak\n\nThe relation between representations of the category $\\mathcal C_d^{\\mathbb C}$ and of the Lorentzian category $\\mathcal C_d^{\\rm Lor}$ which lies ``on its boundary\" follows a pattern familiar in the representation theory of many Lie groups which have a special class of unitary representations characterized as the boundary values of holomorphic representations of a complex semigroup by contraction operators. The essential features can all be seen in the simplest example.\n\n\\bigskip\n\nThe group $G = {\\rm PSL}_2(\\mathbb R)$ is the group of M\\\"obius transformations of the Riemann sphere $\\Sigma = \\mathbb C \\cup \\infty$ which map the open upper half-plane $\\mathbb U$ to itself. It lies on the boundary of the complex sub-semigroup of $G_{\\mathbb C} = {\\rm PSL}_2(\\mathbb C)$ consisting of M\\\"obius transformations which map the closure of $\\mathbb U$ into its own interior. It is natural, however, to consider a slightly larger semigroup $G^<_{\\mathbb C}$ by including the degenerate M\\\"obius transformations which collapse $\\mathbb U$ to a single point in $\\mathbb U$ --- these correspond to complex $2 \\times 2$ matrices of rank one. The resulting semigroup is then a contractible open subset of the 3-dimensional complex projective space formed from the $2 \\times 2$ matrices. The topological boundary of $G^<_{\\mathbb C}$ consists of the M\\\"obius transformations which take $\\mathbb U$ to a disc or point in the upper half-plane which touches the real axis, and the Shilov boundary consists of the group $G$ of real M\\\"obius transformations --- an open solid torus --- compactified by its 2-torus boundary, which is the hyperboloid det$(A) = 0$ in $\\mathbb P^3_{\\mathbb R}$ consisting of the degenerate real M\\\"obius transformations. (Thus the complete Shilov boundary is the part of $\\mathbb P^3_{\\mathbb R}$ where det$(A) \\geq 0$.)\n\n\n\n\\bigskip\n\nThe irreducible unitary representations of the group $G = {\\rm PSL}_2(\\mathbb R)$ are essentially\\footnote{We shall ignore the ``supplementary\" series, which is of measure zero in the space of representations.} of two kinds, the {\\it principal series} and the {\\it discrete series}. The best-known principal series representation is the action of $G$ on the Hilbert space of $1\/2$-densities on the circle $\\mathbb P^1_{\\mathbb R}$ which is the boundary of $\\mathbb U$ --- the general member of the series is the action on densities of complex degree $s$ with Re$(s) = 1\/2$. 
The best-known discrete series representation is the action of $G$ on the square-summable holomorphic 1-forms on $\\mathbb U$, with the natural norm \n$$\\parallel \\alpha \\parallel ^2 = {\\rm i}\\int_{\\mathbb U} \\alpha \\wedge \\bar \\alpha$$\n --- more generally, for each positive integer $p$ we have the action on holomorphic $p$-forms $\\alpha = f(z)(dz)^{\\otimes p}$, where one must divide $\\alpha \\wedge \\bar \\alpha$ by the $(p-1)^{\\rm st}$ power of the $G$-invariant area form on the Poincar\\'e plane $\\mathbb U$ to define the norm. \n \n The discrete series representations obviously extend to bounded holomorphic representations of the semigroup $G^<_{\\mathbb C}$ by contraction operators. They are singled out by this `positive energy' property: the principal series representations cannot extend to $G^<_{\\mathbb C}$, because when $|a| < 1$ the element $w \\mapsto aw$ (here $w = (z-{\\rm i})\/(z+{\\rm i})$ is the coordinate in the unit-disc model $|w|<1$ of $\\mathbb U$) of the semigroup $G^<_{\\mathbb C}$ would be represented by an operator whose eigenvalues are $a^n$ for \\emph{all} $n \\in \\mathbb Z$. But let us notice that, though the discrete series representations are unitary on the boundary group $G = {\\rm PSL}_2(\\mathbb R)$, the degenerate elements of $G^<_{\\mathbb C}$, which collapse $\\mathbb U$ to a point $p \\in \\mathbb U$, are represented by bounded operators of rank 1. So these unitary representations of PSL$_2(\\mathbb R)$ do not extend unitarily to the whole Shilov boundary: the degenerate elements correspond to unbounded rank 1 operators $\\xi \\mapsto \\langle \\zeta , \\xi \\rangle \\eta$, where $\\eta$ and $\\zeta$ are ``non-normalizable elements\" of the Hilbert space --- i.e. they belong to an appropriate completion of it.\n \n\\bigskip \n\nThe group $G$ is a subgroup of the group Diff$^+(S^1)$ of orientation-preserving diffeomorphisms of the circle. This infinite-dimensional Lie group does not possess a complexification, though its Lie algebra, the space of smooth vector fields on the circle, can of course be complexified. The beginning of the present work was the observation, made in the 1980s quite independently by the two authors and also by Yu. Neretin ([N], [Se1]), that there is an infinite-dimensional complex semigroup $\\mathcal A$ which has exactly the same relation to Diff$^+(S^1)$ as $G_{\\mathbb C}^<$ has to $G = {\\rm PSL}_2(\\mathbb R)$. Its elements are complex annuli with parametrized boundary circles: one can think of them as ``exponentiations\" of outward-pointing complex vector fields defined on a circle in the complex plane. The annuli form a complex semigroup when concatenated as cobordisms, and the lowest-weight or ``positive-energy\" representations of Diff$^+(S^1)$ --- and of loop groups --- which arise in 2-dimensional conformal field theory are precisely those which are boundary values of holomorphic representations of the semigroup $\\mathcal A$ by trace-class operators.\n\n\\bigskip\n\nThe discussion of PSL$_2(\\mathbb R)$ generalizes to the symplectic group $G = {\\rm Sp}(V) \\cong {\\rm Sp}_{2n}(\\mathbb R)$ of a real symplectic vector space $V$ of dimension $2n$. The role of the upper half-plane $\\mathbb U$ is played by the Siegel `generalized upper half-plane' --- the domain $\\mathbb U(V)$ of positive Lagrangian subspaces of the complexification $V_{\\mathbb C}$ described in Section 1. 
The group $G$ lies on the boundary of a semigroup $G^<_{\\mathbb C}$ which is the Siegel domain $\\mathbb U(\\tilde V \\oplus V)$, where $\\tilde V$ denotes $V$ with sign of its symplectic form reversed. A generic element of this domain is the graph of a complex symplectic transformation of $V_{\\mathbb C}$ which maps the closure of $\\mathbb U(V)$ into its own interior, but, just as was the case with PSL$_2(\\mathbb C)$, there are degenerate elements which map $\\mathbb U(V)$ non-injectively into itself.\nThe complex semigroup $G_{\\mathbb C}^<$ has been carefully studied by Roger Howe [H], who called it the {\\it oscillator semigroup}. \n\nThe Shilov boundary of $G^<_{\\mathbb C}$ is the Grassmannian of real Lagrangian subspaces of $\\tilde V \\oplus V$: generically, these are the graphs of elements of the real group $G = {\\rm Sp}(V)$, but this group is compactified by the addition of Lagrangian subspaces which intersect the axes of $\\tilde V \\oplus V$ nontrivially, and thus correspond to Lagrangian correspondences from $V$ to $V$ which are not actual maps $V \\to V$. Once again, whereas Sp$^<(V_{\\mathbb C})$ is a genuine semigroup, the composition-law of the real group Sp$(V)$ does not extend to the compactification.\n\n\\bigskip\n\n\n\\bigskip\n\n\n\nThe group $G = {\\rm Sp}(V)$ has a discrete series of unitary representations generalizing those of PSL$_2(\\mathbb R)$. The most important is the \\emph{metaplectic representation} --- actually a representation of a double covering $\\tilde G$ of Sp$(V)$ --- which is the action on the {\\it quantization} $\\mathcal H_V$ of the symplectic space $V$. The Hilbert space $\\mathcal H_V$ is characterized by the property that it contains a copy of the ray $(\\bigwedge^n (W))^{\\otimes (1\/2)}$ for each point $W$ of the domain $\\mathbb U(V)$ --- the square-root of the natural hermitian holomorphic line bundle $\\{\\bigwedge^n(W)\\}$ on $\\mathbb U(V)$ is canonical up to multiplication by $\\pm 1$, and is holomorphically embedded in $\\mathcal H_V$. It is acted on by $\\tilde G$ rather than $G$.\n\nThe action of $\\tilde G$ on $\\mathcal H_V$ is the boundary-value of a holomorphic projective representation of the oscillator semigroup $G_{\\mathbb C}^<$. For $G_{\\mathbb C}^<$ is just the domain $\\mathbb U(\\tilde V \\oplus V)$, each point of which defines a ray in\n$$\\mathcal H_{\\tilde V \\oplus V} \\ \\cong \\ \\mathcal H_V^* \\otimes \\mathcal H_V \\ \\cong \\ {\\rm End}_{HS}(\\mathcal H_V),$$\nwhere End$_{HS}$ denotes the Hilbert-Schmidt endomorphisms. (A more careful discussion shows that $G^<_{\\mathbb C}$ is represented by operators of trace class.)\n\n\\bigskip\n\nWhen $n = 1$ the group Sp$(V)$ is SL$_2(\\mathbb R)$, a double covering of the group PSL$_2(\\mathbb R)$ of M\\\"obius transformations we considered before. To relate the cases of PSL$_2(\\mathbb R)$ and Sp$(V)$, recall that PSL$_2(\\mathbb C)$ is an open subspace of the complex projective space $\\mathbb P^3_{\\mathbb C}$ formed from the vector space of $2 \\times 2$ matrices: in fact it is the complement of the quadric $Q^2_{\\mathbb C} \\cong \\mathbb P^1_{\\mathbb C} \\times \\mathbb P^1_{\\mathbb C}$ defined by the vanishing of the determinant, i.e. by the matrices of rank 1. 
The double covering group SL$_2(\\mathbb C)$ sits inside the Grassmannian of complex Lagrangian subspaces of $\\mathbb C^4$, which is a quadric 3-fold $Q^3_{\\mathbb C}$ in $\\mathbb P^4_{\\mathbb C}$: it is a non-singular hyperplane section (corresponding to the Lagrangian condition) of the Klein quadric formed by all the lines in $\\mathbb P^3(\\mathbb C)$. The quadric $Q^3_{\\mathbb C}$ is the branched double-covering of the projective space $\\mathbb P^3_{\\mathbb C}$ of $2 \\times 2$ matrices, branched along the quadric $Q^2_{\\mathbb C}$ of rank 1 matrices. The contractible semigroup SL$^<_2(\\mathbb C)$ is the open subset of the Lagrangian Grassmannian of $\\mathbb C^4$ consisting of the positive Lagrangian subspaces, and it is a double covering of PSL$^<_2(\\mathbb C)$.\n\n\\bigskip\n \n\n\\section{Unitarity and global hyperbolicity}\n\nIn the previous section we saw how a holomorphic representation of a complex semigroup by contraction operators on a Hilbert space can give rise --- on passing to the boundary --- to a unitary representation of a group which is a dense open subset of the Shilov boundary of the semigroup. The remaining points of the Shilov boundary are not represented by unitary operators; the representation extends to them only in some ``weak\" sense. We now come to the analogue of this phenomenon in quantum field theory, where the Lorentzian cobordism category $\\mathcal C_d^{\\rm Lor}$ lies on the boundary of $\\mathcal C_d^{\\mathbb C}$, and the role of the open dense subgroup of the Shilov boundary is played by the subcategory of \\emph{globally hyperbolic} cobordisms which we shall define below. We should mention, however, that although the category of globally hyperbolic cobordisms is very natural, the category $\\mathcal C_d^{{\\rm Lor}}$ may be smaller than the optimal category we could put on the boundary of $\\mathcal C_d^{\\mathbb C}$. For example, the Lorentzian cobordisms could possibly be allowed to contain `black holes' surrounded by horizons, rather analogous to the `cobordisms-with-boundaries' used to describe two-dimensional theories with both open and closed strings. We shall not pursue such speculations here.\n\n\\bigskip\n\nWhen we have a theory defined on $\\mathcal C_d^{\\mathbb C}$ let us first consider how to extend the assignment $\\Sigma \\mapsto E_{\\Sigma}$ to a \\emph{Lorentzian} germ\n $\\Sigma \\subset U$, with $\\Sigma$ co-oriented in $U$. We can identify $U$ with $\\Sigma \\times (-\\varepsilon, \\varepsilon)$ by exponentiating the geodesic curves emanating perpendicularly from $\\Sigma$. The metric then takes the form $h_t - {\\rm d}t^2$, where $t \\mapsto h_t$ is a smooth map from $(-\\varepsilon, \\varepsilon)$ to the manifold of Riemannian metrics on $\\Sigma$. If the germ is time-symmetric then we can define $E_{\\Sigma}$ by replacing the Lorentzian metric by the `Wick rotated' Riemannian metric $h_{{\\rm i}t} + {\\rm d}t^2$, which makes sense because if $h_t = h_{-t}$ then $h_t$ is a function of $t^2$, so that $h_{{\\rm i}t}$ is defined and real. But this does not help for a general hypersurface, and in any case seems rather arbitrary: we shall return to this point in Remark 5.3 below.\n\n\\bigskip\n\nIt is less easy to assign an operator $Z_M:E_{\\Sigma_0} \\to E_{\\Sigma_1}$ to a Lorentzian cobordism $M:\\Sigma_0 \\leadsto \\Sigma_1$. Even if $M$ is a cylinder topologically, it can be complicated in its ``causal\" structure. Consider, for example, a 2-dimensional cylindrical space-time. 
We saw in Section 2 that, up to a conformal multiplier, a complex metric on a surface is a pair of complex structures with opposite orientations. At the Shilov boundary the complex structures degenerate to the foliations by the left- and right-moving light-lines of a Lorentzian surface. If each light-line which sets out from the incoming boundary circle of the cylinder eventually reaches the outgoing boundary circle then each family of light-lines gives us a diffeomorphism from the incoming to the outgoing boundary. In fact (cf. [Se2] p.8 and p.16) the isomorphism classes of Lorentzian cylinders of this kind are determined up to conformal equivalence by the pair of diffeomorphisms together with a positive integer which counts the number of times that the left- and right-moving lines emanating from a given point of the incoming circle cross before hitting the outgoing circle. This agrees with the well-known fact that the Hilbert space associated to a circle in 2-dimensional conformal field theory comes with a projective unitary representation of the group Diff$^+(S^1) \\times {\\rm Diff}^+(S^1)$.\n\nBut the light-lines from the incoming circle can behave in a more complicated way. For example, one set of light-lines may spiral closer and closer to a closed limit-cycle of the foliation, a light-line which is a circle parallel to the incoming boundary circle of the annulus. That set of lines will then never reach the outgoing circle. One might think of this phenomenon as akin to a black hole in the space-time, though, unlike a black hole, the Lorentzian metric here has no singularity. The ``blocked'' foliation is conformally the same as the ``degenerate annulus'' obtained by collapsing the closed light-line to a point, i.e. a pair of discs with their centre-points identified. This is usually regarded as an ``annulus of infinite length\", and it acts on an irreducible positive-energy representation of Diff$^+(S^1)$ by a projection operator of rank one, like the action of a degenerate complex M\\\"obius transformation in a discrete-series representation of PSL$_2(\\mathbb R)$.\n\n\n\n\\bigskip\n\n\n\nIn works on general relativity a Lorentzian cobordism $M:\\Sigma_0 \\leadsto \\Sigma_1$ between Riemannian manifolds is called {\\it globally hyperbolic} if every maximally-extended time-like geodesic in $M$ travels from $\\Sigma_0$ to $\\Sigma_1$. Such an $M$ must be diffeomorphic to $\\Sigma_0 \\times [0,1]$. It is only for globally hyperbolic manifolds that, for example, the Cauchy problem for the wave-equation on $M$ is soluble.\n\nOf course here we are only considering {\\it compact} cobordisms, which are not the usual focus in relativity theory. In the compact situation we can take the definition of global hyperbolicity to be the existence of a smooth time-function $t: M \\to [0,1]$ whose gradient is everywhere in the positive light-cone, and which is therefore a fibration with Riemannian fibres. 
From $t$ we obtain a diffeomorphism $M \\to \\Sigma_0 \\times [0,1]$ by following the orthogonal trajectories to the time-slices.\n\nThe existence of a time-function on a compact Lorentzian cobordism is clearly an open condition, and so the globally hyperbolic cobordisms form an open subcategory $\\mathcal C_d^{\\rm gh}$ of $\\mathcal C_d^{\\rm Lor}$ which should play the role of the real Lie group to which the holomorphic contraction representations of Section 4 can be extended (though the result (5.2) we prove below is unfortunately weaker).\n\n\\bigskip\n\nFor a globally hyperbolic cobordism equipped with a time-function, the metric, in terms of the diffeomorphism $M \\to \\Sigma_0 \\times [0,1]$, takes the form $h_t + c^2{\\rm d}t^2$ for some function $c: \\Sigma_0 \\times [0,1]\\to {\\rm i}\\mathbb R$. A small deformation $\\delta c$ of $c$ into the right half-plane changes the Lorentzian metric into an allowable complex metric, and we could hope to define $Z_M$ in the Lorentzian case as the limit of the operators associated to such deformations. That, however, encounters the problem that the deformed metric depends not only on the choice of the deformation $\\delta c$, but, more importantly, on the choice of the time-function, which should be irrelevant to the operator $Z_M$. Happily, there is a better point of view, which also shows why the boundary-value of a semigroup of contraction operators is a \\emph{unitary} representation. There is, after all, no obvious reason why the concatenation of a Lorentzian cobordism with its reverse should be represented by the identity operator --- quite unlike what happens with Riemannian cobordisms. (A possible analogy is the process of making a based loop-space into a topological group by collapsing paths which retrace their steps.) \n\n\\bigskip\n\nThe passage from $\\mathcal C_d^{\\mathbb C}$ to $\\mathcal C_d^{\\rm Lor}$ is already interesting when $d=1$, i.e. for quantum mechanics rather than quantum field theory --- the case when the Euclidean path-integral can be treated by traditional measure-theory. It is worthwhile to spell out the argument in this case, before passing to higher dimensions. \n\nWe began this work with the relation of positive energy to 1-parameter contraction semigroups. Our first task now is to understand why a holomorphic representation of the category $\\mathcal C_1^{\\mathbb C}$ is just such a 1-parameter semigroup, where the parameter runs through the open half-plane $\\mathbb C_+ = \\{z \\in \\mathbb C: {\\rm Re}(z)>0\\}$. Whereas a Riemannian structure on a closed interval is completely determined by its length, the allowable complex metrics on the interval have an infinite-dimensional moduli-space.\n\nAny complex metric on $I = [0,1]$ can be pulled back from the holomorphic quadratic differential ${\\rm d}z^2$ on $\\mathbb C$ by means of a smooth embedding $f: I \\to \\mathbb C$ such that $f(0) = 0$ and Re $f'(t) > 0$ for all $t \\in I$. In fact the space Emb$(I;\\mathbb C)$ of such embeddings is isomorphic to Met$_{\\mathbb C}(I)$ {\\it as a complex manifold}. If $f'(t) = 1$ when $t$ is sufficiently close to the ends of the interval $I$ then the pulled-back metric defines a morphism $I_f: P \\to P$ in the category $\\mathcal C_1^{\\mathbb C}$, where $P$ denotes the object defined by the germ of the standard metric on the line $\\mathbb R$ at the origin. \n\nThe crucial observation is that the operator $Z_f: E_P \\to E_P$ defined by $I_f$ depends only on the point $f(1) \\in \\mathbb C_+$. 
It is as if $Z_f$ were the `contour integral' of a holomorphic differential on $\\mathbb C$ along the path $f$. The argument is as follows. First, $Z_f$ does not change if $f$ is replaced by $\\tilde f = f \\circ \\phi$ where $\\phi$ is any diffeomorphism $I \\to I$ which is the identity near the ends of the interval. This means that $Z_f$ does not change if $f$ moves along a curve in Emb$(I;\\mathbb C)$ whose tangent vector at each point is the action of an element of the Lie algebra Vect$(\\mathring I)$ of compactly supported vector fields on the interior of $I$. But then --- because $Z_f$ depends holomorphically on $f$ --- it does not change if each tangent vector is the action of an element of the \\emph{complexified} Lie algebra Vect$_{\\mathbb C}(\\mathring I)$. Finally, if $f,\\tilde f \\in {\\rm Emb}(I;\\mathbb C)$ define two morphisms $P \\to P$ and have $f(1) = \\tilde f(1)$, the tangent vectors to the obvious linear path from $f$ to $\\tilde f$ are given by the action of elements of Vect$_{\\mathbb C}(\\mathring I)$. \n\nWe can therefore write $Z_f = u(z)$, where $z = f(1)$. Obviously we have $u(z_1)u(z_2) = u(z_1 + z_2)$ for any $z_1,z_2 \\in \\mathbb C_+$. Furthermore, because the object $P$ of $\\mathcal C_1^{\\rm gh}$ is time-symmetric, the vector space $\\check E_P$ is a pre-Hilbert space, and the unitarity condition tells us that $u(\\bar z)$ is the hermitian transpose of $u(z)$.\n\nThe desired unitary semigroup $\\{u({\\rm i} T)\\}_{T \\in \\mathbb R}$, which will act on the triple $\\check E_P \\to E_P^{Hilb} \\to \\hat E_P$, can now be defined as follows. As explained in Section 3, any vector $\\xi \\in \\check E_P$ can be written $\\xi = u(\\varepsilon )\\eta$ for some $\\varepsilon > 0$ and some $\\eta \\in E_P$. We define $u({\\rm i} T)\\xi = u(\\varepsilon + {\\rm i}T)\\eta$, which is plainly independent of $\\varepsilon$. Finally, $u({\\rm i} T)$ is unitary because\n\\begin{eqnarray*}\nu(-{\\rm i}T)u({\\rm i}T) \\xi \\ &=& \\ u(-{\\rm i} T)u(\\varepsilon + {\\rm i} T)\\eta\\\\\n&=& \\ u(-{\\rm i} T)u(\\varepsilon\/2)u(\\varepsilon\/2 + {\\rm i} T)\\eta\\\\\n&=& \\ u(\\varepsilon\/2 - {\\rm i} T)u(\\varepsilon\/2 + {\\rm i} T)\\eta\\\\\n&=& \\ u(\\varepsilon)\\eta \\ = \\ \\xi.\n\\end{eqnarray*}\n\n\\bigskip\n\nTo pass from $d=1$ to higher-dimensional cobordisms we observe that the essential step in our argument was the first case of the following \n\n\\bigskip\n\n\\noindent{\\bf Principle 5.1} \\ \\ \\ {\\it If a $d$-dimensional cobordism $M$ is a real submanifold of a complex $d$-manifold $M_{\\mathbb C}$, and $M$ has an allowable complex metric induced from a holomorphic symmetric form $g$ on the tangent bundle $TM_{\\mathbb C}$, then the linear map $Z_M$ does not change when $M$ is moved around smoothly inside $M_{\\mathbb C}$ (leaving its ends fixed), providing the restriction of $g$ to $M$ remains an allowable complex metric.}\n\n\\bigskip\n\nAs in the $d=1$ case, this principle holds because any infinitesimal movement of $M$ inside $M_{\\mathbb C}$ is given by a complex vector field on $M$, while $Z_M$ depends holomorphically on $M$ and, being invariant under the action of \\mbox{Diff$(M$ rel $\\partial M)$}, does not change when $M$ moves in a direction given by the action of a complexified tangent vector to this group. 
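\n\nAs an aside, the one-dimensional mechanism described above is easy to check in a toy model. The following minimal numerical sketch (purely illustrative, with a hypothetical finite diagonal generator standing in for the Hamiltonian on $E_P$) verifies that the boundary values $u({\\rm i}T)$ of a holomorphic contraction semigroup $u(z)$, ${\\rm Re}(z)>0$, are unitary and satisfy the relation $u(-{\\rm i}T)u(\\varepsilon + {\\rm i}T) = u(\\varepsilon)$ used in the argument:\n\\begin{verbatim}\nimport numpy as np\n\n# toy positive 'Hamiltonian': a few eigenvalues lambda_i > 0 (hypothetical)\nlam = np.array([0.3, 1.1, 2.7, 5.0])\n\ndef u(z):\n    # u(z) = exp(-z H); a contraction for Re(z) > 0, unitary for Re(z) = 0\n    return np.diag(np.exp(-z * lam))\n\neps, T = 1e-3, 7.0\nassert np.allclose(u(-1j * T) @ u(eps + 1j * T), u(eps))\nU = u(1j * T)\nassert np.allclose(U.conj().T @ U, np.eye(len(lam)))   # unitarity\n\\end{verbatim}\nIn the infinite-dimensional setting the same algebra is applied on the dense subspace $\\check E_P$, which is the point of the construction above.\n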
\n\n\\bigskip\n\nUnfortunately, to use the principle we need the cobordism $M$ to be embedded in a complexification $M_{\\mathbb C}$, and the only natural way to ensure this is to pass from the smooth Lorentzian category $\\mathcal C_d^{\\rm Lor}$ to the corresponding \\emph{real-analytic} cobordism category $\\mathcal C_d^{\\rm Lor,\\omega}$, where both the manifolds and their metrics are assumed real-analytic. Inside this category there is the subcategory $\\mathcal C_d^{\\rm gh,\\omega}$ of globally hyperbolic cobordisms: we shall also assume that the time-function $\\tau:M \\to {\\rm i}[0,1]$ is real-analytic, though that could be avoided, because any smooth function can be approximated real-analytically. \n\n\\bigskip\n\nThere are two ways of thinking about restricting to real-analytic cobordisms. One might think that the smooth cobordism category is the natural object, and try to eliminate the analyticity hypothesis. But one could also think that the natural allowable space-times really do come surrounded by a thin holomorphic thickening, within which the choice of a smooth totally-real representative is essentially arbitrary. In any case, we can prove the following theorem.\n\n\\bigskip\n\n\\noindent{\\bf Theorem 5.2} \\ \\ {\\it A unitary quantum field theory as defined in Section 3 on the category $\\mathcal C_d^{\\mathbb C}$ induces a functor from $\\mathcal C_d^{\\rm gh, \\omega}$ to topological vector spaces. The functor takes time-symmetric objects to Hilbert spaces, and takes cobordisms between them to unitary operators.} \n\n\\bigskip\n\nTo be quite precise: the theorem asserts that if $\\Sigma$ is a time-symmetric $(d-1)$-manifold germ then there is a Hilbert space $E_{\\Sigma}^{Hilb}$ with\n$$\\check E_{\\Sigma} \\ \\ \\subset \\ \\ E_{\\Sigma}^{Hilb} \\ \\ \\subset \\hat E_{\\Sigma},$$\nand a real-analytic globally hyperbolic cobordism $\\Sigma_0 \\leadsto \\Sigma_1$ between time-symmetric hypersurfaces induces a unitary isomorphism $E_{\\Sigma_0}^{Hilb} \\to E_{\\Sigma_1}^{Hilb}$ which also maps $\\check E_{\\Sigma_0}$ to $\\check E_{\\Sigma_1}$ and $\\hat E_{\\Sigma_0} $ to $\\hat E_{\\Sigma_1}$.\n\n\\bigskip\n\n\n\n\n\n\n\\bigskip\n\n \n \n\\noindent {\\it Proof of 5.2} \\ \\ Given a real-analytic globally hyperbolic cobordism \\mbox{$M:\\Sigma_0 \\leadsto \\Sigma_1$} we choose a time function $t:M \\to [0,1]$ whose level surfaces foliate $M$ by Riemannian manifolds, and, following the orthogonal trajectories to the foliation, we identify $M$ with $\\Sigma_0 \\times [0,1]$ as before. \n \n Using the real-analyticity assumptions, we can find a complexification $M_{\\mathbb C}$ of $M$ to which both $t$ and $g$ can be extended holomorphically, and we can assume that $\\tau = {\\rm i}t:M_{\\mathbb C} \\to U \\subset \\mathbb C$ is a holomorphic fibre bundle over a neighbourhood $U$ of the interval ${\\rm i} [0,1]$. Furthermore, the isomorphism $\\Sigma_0 \\times [0,1] \\to M$ extends to a holomorphic trivialization of the bundle $M_{\\mathbb C} \\to U$. For any smooth curve $f:[0,1] \\to U$ such that $f(0) = 0$ and ${\\rm Re} \\ f'(s) > 0$ for $s \\in [0,1]$ this gives us a totally real submanifold $M_f$ of $M_{\\mathbb C}$ sitting over the curve. We can use the morphism associated to the cobordism $M_f$ in exactly the way we used $Z_f$ in discussing the 1-dimensional case, to obtain a unitary operator $Z_M$ associated to the Lorentzian cobordism.\n \n It is important that $Z_M$ does not depend on the choice of the time-function $t$ defining the foliation. 
For two choices of $t$ are linearly homotopic, and changing from one to the other amounts to deforming the totally-real embedding $\\Sigma_0 \\times [0,1] \\to M_{\\mathbb C}$ by a real-analytic diffeomorphism of $\\Sigma_0 \\times [0,1]$.\n \n \\bigskip\n \n\\noindent{\\bf Remark 5.3} \\ \\ \\ We can apply the principle 5.1 to understand better how a theory defined on $\\mathcal C_d^{\\mathbb C}$ assigns a vector space $E_{\\Sigma}$ to a Lorentzian germ $\\Sigma \\subset U$. \n \n If the Lorentzian metric on $U$ is real-analytic then the complex theory gives us a holomorphic bundle $\\{\\hat E_f\\}$ on the space $\\mathcal J$ of germs of embeddings $f:(-\\varepsilon,\\varepsilon) \\to \\mathbb C$ such that $f(0)=0$ and Re $f'(t) >0$ for all $t$. In particular, for $\\lambda \\in \\mathbb C_+$ we have the radial paths $f_{\\lambda} \\in \\mathcal J$ for which $f_{\\lambda}(t) = \\lambda t$. But recall that $\\hat E_f$ is the inverse-limit of a sequence of spaces associated to the germs of $f$ at the points $f(t_k)$, for any sequence $\\{t_k \\downarrow 0\\}$. \n\nNow consider two neighbouring rays $f_{\\lambda},f_{\\lambda'}$ with $|\\lambda| = |\\lambda'|$, and choose a sequence $\\{t'_k \\downarrow 0 \\}$ which interleaves $\\{t_k \\}$, i.e. $t_k > t'_k >t_{k+1}$. We can choose a path $f \\in \\mathcal J$ which lies in the sector bounded by the rays $f_{\\lambda}$ and $f_{\\lambda '}$ and coincides with them alternately in the neighbourhoods of the points $\\lambda t_k$ and $\\lambda't'_k$. This $f$ gives us a family of cobordisms from the germ at $\\lambda' t'_k$ to the germ at $\\lambda t_k$, and from the germ at $\\lambda t_{k+1}$ to the germ at $\\lambda 't'_k$. Putting these together, we obtain mutually inverse canonical isomorphisms between $\\hat E_{f_{\\lambda}}$ and $\\hat E_{f_{\\lambda '}}$. The coherence of these isomorphisms when we consider three nearby rays also follows from the principle 5.1.\n\nBy this means we see that we could have chosen \\emph{any} smooth path $f$ to define $\\hat E_{\\Sigma}$. However the family $\\hat E_f$ has the property that $\\hat E_{\\bar f}$ is the complex-conjugate space to $\\hat E_f$, so that reversing the complex time-direction conjugates the identification of $\\hat E_{\\Sigma}$ with the Euclidean choice $\\hat E_{f_1}$. If the Lorentzian germ $\\Sigma \\subset U$ is time-symmetric --- but not otherwise --- the arguments we have already used will give us a hermitian inner product on $\\check E_{\\Sigma}$.\n \n \n \n\\bigskip\n\n\\bigskip\n\n\\noindent{\\bf Field operators}\n\n\\nopagebreak\n\n\\bigskip\n\nFinally, we come to the Wick rotation of field operators, though our account will be sketchy. The first step is to understand how the vector space $\\mathcal O_x$ of observables at a point $x$ of a space-time $M$ behaves as the metric of $M$ passes from complex to Lorentzian. We shall continue to assume that $M$ and its Lorentzian metric are real-analytic.\n\nIn Section 3 we associated a space $\\mathcal O_x$ to a germ at $x$ of a complex metric on a manifold containing $x$: it is the fibre of a bundle on the space Met$_{\\mathbb C}(\\hat x)$ of such germs. If we embed a Lorentzian $M$ in a complexification $M_{\\mathbb C}$ there will be a holomorphic exponential map from a neighbourhood of $0$ in the complexified tangent space $T_x^{\\mathbb C} = T_xM \\otimes \\mathbb C$ to $M_{\\mathbb C}$. 
Inside $T_x^{\\mathbb C}$ we can consider the $d$-dimensional real vector subspaces $V$ on which the metric induced from the complex bilinear form of $T_x^{\\mathbb C}$ is allowable. We saw in (2.6) that these $V$ form a contractible open subset $\\mathcal U$ of the real Grassmannian Gr$_d(T_x^{\\mathbb C})$. Exponentiating $V$ will give us a germ of a $d$-manifold with a complex metric, and hence a map $\\mathcal U \\to {\\rm Met}_{\\mathbb C}(\\hat x)$. Pulling back the bundle of observables by this map gives us a bundle on $\\mathcal U$, which, using the principle (5.1) as we did in (5.3), we see to be trivial. Identifying its fibres gives us our definition of $\\mathcal O_x$ for Lorentzian $M$.\n\n\\bigskip\n\nWe need no new ideas to see that for any Lorentzian cobordism $M:\\Sigma_0 \\leadsto \\Sigma_1$ and any $x \\in \\mathring M$ an element $\\psi \\in \\mathcal O_x$ acts as an operator $E_{\\Sigma_0} \\to E_{\\Sigma_1}$. Furthermore, if $x$ lies on a time-slice $\\Sigma$ we get an operator $\\psi \\in {\\rm Hom}( \\check E_{\\Sigma}; \\hat E_{\\Sigma})$, i.e. an unbounded operator in $E_{\\Sigma}$, simply by considering the cobordisms corresponding to a sequence of successively thinner collars of $\\Sigma$. Indeed the same argument shows that if $x_1, \\ldots , x_k$ are distinct points on $\\Sigma$, we have a map\n$$\\mathcal O_{x_1} \\otimes \\ldots \\otimes \\mathcal O_{x_k} \\ \\ \\to \\ \\ {\\rm Hom}( \\check E_{\\Sigma}; \\hat E_{\\Sigma})$$\nwhich does not depend on choosing an ordering of the points. \n\n\\bigskip\n\nIn the introduction we mentioned the Wightman axiom that field operators at space-like separated points must commute. We can now see how this follows from our framework, at least in a globally hyperbolic space-time. For the spaces $\\check E_{\\Sigma_t} \\subset E_{\\Sigma_t}^{Hilb} \\subset \\hat E_{\\Sigma_t}$ for all times $t_0 \\leq t \\leq t_1$ can be identified with those at time $t_0$ by the unitary propagation $Z_{t,t'}$ from time $t$ to a later time $t'$, giving a single rigged Hilbert space $\\check E \\subset E^{Hilb} \\subset \\hat E$, and we can define an unbounded operator\n$$\\tilde \\psi \\ \\ = \\ \\ Z_{t_0,t}^{-1}\\circ \\psi \\circ Z_{t_0,t}:\\check E \\to \\hat E$$\nfor any $\\psi \\in \\mathcal O_x$ with $x \\in \\Sigma_t$. Furthermore, if we change the choice of time-function on the cobordism, so that $x$ lies on a different time-slice, then $\\tilde \\psi$ will not change.\n\nThe fact that two observables $\\psi,\\psi'$ situated at space-like separated points $x,x'$ give rise to operators $\\tilde \\psi, \\tilde \\psi'$ which are composable, and commute, is now clear. For if $x$ and $x'$ are space-like separated we can choose a single time-slice $\\Sigma_t$ which contains them both, and we see that the composed operator, in either order, is $Z_{t_0,t}^{-1}\\circ ( \\psi \\otimes \\psi') \\circ Z_{t_0,t}$.\n\n\\bigskip\n\n\n\n\\noindent{\\bf The domain of holomorphicity of the vacuum expectation values}\n\n\\bigskip\n\n\nWe end with a conjecture about a question arising in the traditional treatment of field theories defined in the standard Minkowski space $\\mathbb M = \\mathbb R^{d-1,1}$. 
There, the Wightman axioms imply that the vacuum expectation values, initially defined as distributions or other generalized functions on the $k$-fold products $\\mathbb M \\times \\ldots \\times \\mathbb M$, are boundary values of holomorphic functions defined in an open domain $\\mathcal U_k$ in the complexified space $\\mathbb M_{\\mathbb C}\\times \\ldots \\times \\mathbb M_{\\mathbb C}$. The definition of $\\mathcal U_k$, known as the `permuted extended tube', was given in Section 2. Recall that $\\mathcal U_2$ consists of all pairs of points $x,y$ such that $||x-y||^2$ is not a real number $\\leq 0$.\n\nIf $k > 2$, however, $\\mathcal U_k$ is known not to be holomorphically convex, so it cannot be the largest complex manifold to which the expectation values can be analytically continued. It is an old problem to describe this largest manifold $\\mathcal V_k$, or even the holomorphic envelope of $\\mathcal U_k$.\n\nThe ideas of this paper suggest a candidate for $\\mathcal V_k$. It sits over the open subset $\\check{\\mathcal V}_k$ of all $k$-tuples ${\\bf x} = \\{x_1, \\ldots ,x_k\\}$ of distinct points in $\\mathbb M_{\\mathbb C}$ which lie on some totally-real submanifold $M$ with two properties:\n\n(i) \\ the metric on $M$ induced from $\\mathbb M_{\\mathbb C}$ is allowable, and\n\n(ii) \\ $M$ projects surjectively onto the usual real Euclidean subspace $\\mathbb E = \\mathbb R^d$ of $\\mathbb M_{\\mathbb C} = \\mathbb E \\oplus \\rm i\\mathbb E$.\n\nNotice that, by the remark before Prop.\\ 2.6, the projection $M \\to \\mathbb E$ is a local diffeomorphism if the metric of $M$ is allowable, so (ii) implies that $M$ is the graph of a smooth map $\\mathbb E \\to \\rm i\\mathbb E$.\n\nLet $\\mathcal F_k$ denote the space of all pairs $(M,{\\bf x})$ satisfying the above conditions. It is an infinite-dimensional complex manifold projecting to the open subset $\\check{\\mathcal V}_k$ of Conf$_k(\\mathbb M_{\\mathbb C})$, and it is easy to see that the map $\\pi: \\mathcal F_k \\to \\check{\\mathcal V}_k$ is open. We define $\\mathcal V_k$ as the largest Hausdorff quotient manifold of $\\mathcal F_k$ through which $\\pi$ factorizes and which maps to $\\check{\\mathcal V}_k$ by a local diffeomorphism. Thus two points $(M,{\\bf x}),\\, ( M',\\bf x)$ of the fibre $\\mathcal F_{k,\\bf x}$ of $\\pi$ at $\\bf x$ have the same image in $\\mathcal V_k$ if they are in the same connected component of the fibre, but --- as that equivalence relation need not give a Hausdorff quotient --- also if there are paths $\\gamma,\\gamma'$ from $(M,\\bf x)$ and $(M',\\bf x)$ to a third point\n$(M'',\\bf x'')$ of $\\mathcal F_k$ which cover the same path from $\\bf x$ to $\\bf x''$ in $\\check{\\mathcal V}_k$.\n\n\\bigskip\n\n\n\n\n\n\n \n\nTo motivate the definition of $\\mathcal V_k$ we must enlarge our framework to allow Lorentzian space-times whose time-slices are not compact. The simplest way to do this is to introduce the cobordism category in which a morphism is the part of a $d$-dimensional allowable submanifold $M$ of $\\mathbb M_{\\mathbb C}$ cut off between two time-symmetric hypersurfaces.\n\nA field theory defined and holomorphic on this category, if it has a Lorentz-invariant vacuum state in a natural sense, will have vacuum expectation values which are holomorphic functions $\\mathcal E_k$ on the spaces $\\mathcal F_k$ of pairs $(M,{\\bf x})$. 
Strictly speaking, $\\mathcal E_k$ is a holomorphic section of a bundle on $\\mathcal F_k$, but we can use the local diffeomorphism $M \\to \\mathbb E$ to trivialize the bundle, giving us a holomorphic function\n$$\\mathcal E_k: \\mathcal F_k \\ \\to \\ {\\rm Hom}(\\mathcal O^{\\otimes k};\\mathbb C),$$\nwhere $\\mathcal O$ is the space of observables at a point of $\\mathbb E$.\n\nOur much-used Principle 5.1 tells us that the value of the function $\\mathcal E_k$ does not change if, while holding the marked points {\\bf x} fixed in $\\mathbb M_{\\mathbb C}$, we move $M$ smoothly in the allowable class. So in fact we have a holomorphic function on $\\mathcal F_k$ which is constant on the connected components of the fibres $\\mathcal F_{k,\\bf x}$ of the map $\\pi$, i.e. on the isotopy classes of allowable manifolds containing $\\bf x$.\n\n\\bigskip \n\nUnfortunately we have no proof that $\\mathcal V_k$ is a domain of holomorphy, but at least we can assert\n\n\\bigskip\n\n\\noindent {\\bf Proposition 5.4} \\ {\\it $\\mathcal V_k$ contains the Wightman domain $\\mathcal U_k$.}\n\n\\bigskip\n\nFurthermore, we saw in Proposition 2.3 that the holomorphic envelope of $\\mathcal U_k$ contains $\\mathcal V_k^{flat}$, the part of $\\mathcal V_k$ represented by flat affine submanifolds $M \\subset \\mathbb M_{\\mathbb C}$.\n\n\\bigskip\n\n\\noindent{\\it Proof of 5.4} \\ First, $\\mathcal V_k$ is invariant under the complex orthogonal group of $\\mathbb M_{\\mathbb C}$, and under reorderings of the points ${\\bf x} = \\{x_1, \\ldots ,x_k\\}$. So it is enough to consider ${\\bf x}$ such that the imaginary part of $x_{i+1} - x_i$ belongs to the forward light-cone $C \\subset \\mathbb M$ for each $i$.\n\nSmoothing the obvious polygonal path joining the points, we can thus assume that the $x_i$ lie on a curve $x: \\mathbb R \\to \\mathbb M_{\\mathbb C}$ whose derivative Im$(x'(t))$ belongs to $C$ for all $t$. But then we can choose, smoothly in $t$, a set of $d-1$ orthonormal vectors $e_j(t)$ in $\\mathbb M_{\\mathbb C}$ which are all orthogonal to $x'(t)$. Let $M_t$ be the \\emph{real} vector subspace of $\\mathbb M_{\\mathbb C}$ spanned by the vectors $e_j(t)$. The points ${\\bf x}$ lie on the $d$-dimensional real ruled manifold $M$ swept out by the affine $(d-1)$-planes $x(t)+ M_t$, and the metric of $M$ is clearly allowable. \\ \\ $\\spadesuit$ \n\n\n\n\n\\bigskip\n\n\n\\noindent{\\bf References}\n\n\\bigskip\n\n\\begin{enumerate}\n\\item[[BF]] Brunetti, R., and K. Fredenhagen, \\,{\\it Quantum field theory on curved backgrounds}. \\, Lecture Notes in Physics {\\bf 786}, 129 -- 155, Springer 2009.\n\\item[[C]] Costello, Kevin, \\, {\\it Renormalization and effective field theory}. \\, Math. Surveys and Monographs {\\bf 170}, Amer.\\ Math.\\ Soc. 2011.\n\\item[[F]] Fermi, E., \\,{\\it Quantum theory of radiation}. \\,Rev.\\ Mod.\\ Phys. {\\bf 4}\\, (1932), 87--132.\n\\item[[FH]] Feynman, R. P. and A. R. Hibbs, \\,{\\it Quantum mechanics and path integrals.} \\, McGraw-Hill, 1965.\n\\item[[H\\\"or]] H\\\"ormander, Lars, \\, {\\it An introduction to complex analysis in several variables}. \\, North-Holland, 1990. \n\\item[[H]] Howe, Roger, \\, {\\it The Oscillator Semigroup}. \\, Proc.\\ Symp.\\ Pure Math. {\\bf 48}, 1196 -- 1200, \\, Amer.\\ Math.\\ Soc. 1988.\n\\item[[Ka]] Kazhdan, David, \\, {\\it Introduction to QFT.} \\, In {\\it Quantum fields and strings: a course for mathematicians.} Vol 1 (Princeton 1996\/1997) 377--418. Amer.\\ Math.\\ Soc. 
1999.\n\\item[[Ke]] Kelnhofer, Gerald, \\, {\\it Functional integration and gauge ambiguities in generalized abelian gauge theories.} \\, J.\\ Geom.\\ Physics {\\bf 59} (2009) \\, 1017--1035. (arXiv:0711.4085 [hep-th])\n\\item[[N]] Neretin, Yu. A., \\, {\\it Holomorphic continuations of representations of the group of diffeomorphisms of the circle.} \\, Mat. Sb. {\\bf 180} \\, (1989), 635--57. \\ (English translation Math.\\ USSR-Sb. {\\bf 67} (1990).\n\\item[[PS]] Pressley, A., and G. Segal, \\, {\\it Loop Groups}. Oxford U.P. 1986.\n\\item[[Se1]] Segal, Graeme, \\, {\\it The definition of conformal field theory}. \\, In: {\\it Differential geometrical methods in theoretical physics. (Como 1987)}\\, NATO Adv.\\ Sci.\\ Inst. Ser C, Math Phys. Sci. {\\bf 250}, 165 -- 171, Kluwer 1988.\n\\item[[Se2]] Segal, Graeme, \\, {\\it The definition of conformal field theory.} \\, In: {\\it Topology, Geometry, and Conformal Field Theory}, \\, ed. U. Tillmann, \\, London Math.\\ Soc. Lecture Notes {\\bf 308} (2004), 421 -- 577.\n\\item[[Sz]] Szabo, R., \\, {\\it Quantization of higher abelian gauge theory in generalized differential cohomology.} \\, In: {\\it Proc. 7$^{\\rm th}$ Internat.\\ Conf.\\ on Math.\\ Methods in Physics} \\, (ICMP2012) (arXiv:1209.2530 [hep-th]).\n\\item[[SW]] Streater, R. F., and A. S. Wightman, \\, {\\it PCT, Spin and Statistics, and all that.} \\, Princeton U.P. 2000. \\ ($1^{\\rm st}$ edn Benjamin 1964)\n\\item[[WW]] Weinberg, S., and E. Witten, \\, {\\it Limits on massless particles.} \\, Phys. Lett. B{\\bf 96} (1980), 59 -- 62.\n\n\n\n\n\\end{enumerate}\n\n\\\n\n\\textit{E-mail addresses}: \\texttt{maxim@ihes.fr}, \\texttt{ graeme.segal@all-souls.ox.ac.uk}\n\n \n\n\n\\end{document}\n\n \n \n \n\n\n\n\n\n \n\n\n\\end{document}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\subsection*{Results}\n\nThe combined UED\/UEDS experiments are based on a pump-probe scheme, and were performed with a radio-frequency compressed ultrafast electron scattering instrument described in detail elsewhere~\\cite{Chatelain2012, Otto2017}. Briefly, \\SI{35}{\\femto\\second} pump pulses at \\SI{800}{\\nano\\meter} (\\SI{1.55}{\\electronvolt}) drive vertical electronic transitions at $t=t_0$, photodoping electrons and holes into the band valleys near $\\Gamma$, $\\frac{2}{3}Y$ and $\\frac{3}{4}Z$~\\cite{Li2015,Zhao2016,Melendez2018,Wei2019} (Fig. \\ref{FIG:structure}c). The effects of this photoexcitation on both the lattice structure and phonon system are measured with UED and UEDS simultaneously at a time $t = t_0 + \\tau$. Time-series of the changes in electron scattering intensity at all scattering vectors, $\\vec{q}$, were assembled by scanning the pump-probe time delay, $\\tau$. The experiments were repeated over a range of pump-fluences at \\SI{300}{\\kelvin}.\n\nThe total measured electron scattering intensity $I(\\vec{q}, \\tau)$ can be decomposed into $I(\\vec{q}, \\tau) = I_0(\\vec{q}, \\tau) + I_1(\\vec{q}, \\tau) + ...$, where $I_0$ corresponds to the Bragg peaks of conventional diffraction and $I_1$ is known as \\emph{single-phonon diffuse scattering}. The intensity of Bragg peaks at scattering vector $\\vec{G}$, $I_0(\\vec{q}=\\vec{G},\\tau)$, reports on the lattice constants, unit cell structure, coherent modulation~\\cite{Chatelain2014, Sie2019} and\/or transient Debye-Waller (DW) suppression~\\cite{Siwick2003, Ernstorfer2009, Waldecker2016} of peak intensity. 
Diffraction signals can be $10^5$--$10^8$ times more intense than phonon diffuse scattering signals~\\cite{Stern2018,RenedeCotret2019}. The expression for single-phonon diffuse scattering (PDS) is given by:\n\\begin{equation}\n\tI_1(\\vec{q}, \\tau) \\propto \\sum_\\lambda \\left| a_{\\lambda \\vec{k}}\\right|^2 \\left| F_{1\\lambda}\\left(\\vec{q}, \\left\\{ e_{\\lambda\\vec{k}}\\right\\} \\right) \\right|^2 \\label{EQ:diffuse}\n\\end{equation}\nwhere the label $\\lambda$ indicates the specific phonon branch, $\\vec{q}$ is the electron scattering vector, $\\vec{k}$ is the reduced phonon wavevector (i.e. $\\vec{k}$ = $\\vec{q}$ - $\\vec{G}$, where $\\vec{G}$ is the closest Bragg peak), $a_{\\lambda\\vec{k}}$ is the vibrational amplitude of mode $\\lambda$, and $F_{1\\lambda}$ are known as the one-phonon structure factors. $I_1$ provides momentum-resolved information on the nonequilibrium distribution of phonons across the entire Brillouin zone, since $I_1(\\vec{q}, \\tau)$ depends only on phonon modes with wavevector $\\vec{k}$ = $\\vec{q}$ - $\\vec{G}$ (Fig. \\ref{FIG:data}a). The one-phonon structure factors $F_{1\\lambda}$ are geometrical weights that depend sensitively on the phonon mode atomic polarization vectors $\\left\\{ e_{\\lambda\\vec{k}}\\right\\}$~\\cite{RenedeCotret2019}. Most importantly, $F_{1\\lambda}\\left(\\vec{q}, \\left\\{ e_{\\lambda\\vec{k}}\\right\\} \\right)$ are relatively large when the phonon mode $\\lambda$ is polarized parallel to the scattering vector $\\vec{q}$. Terms of higher-order than $I_1$ have lower cross-sections and do not contribute significantly to the interpretation of the low-order Brillouin zone (BZ) signals reported on here~\\cite{Wang2013,Zacharias2021a,Zacharias2021b}. \n\n\\begin{figure*}\n \\centering\n\t\\includegraphics{diffuse.pdf}\n\t\\caption{Ultrafast electron diffraction and diffuse scattering signals from photodoped SnSe. \\textbf{a} Equilibrium scattering pattern of SnSe oriented along the $[100]$ direction with key vectors, the square BZ ($\\vec{b}^\\star$--$\\vec{c}^\\star$ plane) and high symmetry points indicated. \\textbf{b} Regions of interest for scattering as described in the text, shown around reflection $(00\\bar{2})$ as an example; (1) Bragg intensity, (2) small-wavevector phonons ($\\SI{0.114}{\\per\\angstrom} <|\\vec{k}| < \\SI{0.228}{\\per\\angstrom}$) and (3) larger wavevector phonons ($|\\vec{k}| > \\SI{0.228}{\\per\\angstrom}$) \\textbf{c} Line cut across the horizontal line shown in panel b). The Bragg peak lineshape is fit with a Voigt profile (solid black line) with a full-width at half-max of \\SI{0.158}{\\per\\angstrom}. \\textbf{d} Transient (photoinduced) ultrafast electron scattering intensity changes in several regions of the BZ shown in a) and b). The decrease of intensity directly on the Bragg peaks shows transient DW decays that are strongly anisotropic. The fast ($\\sim \\SI{300}{\\femto\\second}$) decay component is maximized in Bragg peaks along $\\vec{c}^\\star$ (black), and only the slow components ($\\sim \\SI{4}{\\pico\\second}$) is observed in peaks perpendicular to $\\vec{c}^\\star$ (red). Transient diffuse intensity also shows pronounced anisotropy and $\\vec{k}$-dependence. A fast rise in diffuse intensity (purple) is only observed for $|\\vec{k}| < \\SI{0.228}{\\per\\angstrom}$ (region 2, panel b) in BZs that show the fast DW dynamics. A slow increase in diffuse intensity (orange) is observed at all zone boundary high-symmetry points (see Fig. 
S4) and elsewhere in the BZ for all reflections $\\vec{G}$ (e.g. region (3), panel b). Error bars represent the standard error in the mean of intensity before time-zero, but are generally smaller than the markers.}\n\t\\label{FIG:data}\n\\end{figure*}\n\nFig. \\ref{FIG:data}a shows an equilibrium diffraction pattern of SnSe at room temperature along the $[100]$ zone axis with the rectangular in-plane ($\\vec{b}^\\star$--$\\vec{c}^\\star$) BZ of the $Pnma$ phase indicated. This pattern contains both Bragg scattering (at zone-center positions) and PDS contributions at all scattering vectors. Following photoexcitation, Bragg peaks show transient decreases in intensity whose dynamics are well-described by a biexponential decay with time-constants (\\SI{400 \\pm 130}{\\femto\\second} and \\SI{4 \\pm 1}{\\pico\\second}), as shown in Fig. \\ref{FIG:data}d. The fast component of these dynamics are profoundly anisotropic, with a maximum contribution in the $\\vec{c}^\\star$ direction and below detection along $\\vec{b}^\\star$ (Fig. \\ref{FIG:data}d). Given the DW factor ($\\exp(-\\frac{1}{2}\\langle \\vec{q} \\cdot \\vec{u} \\rangle^2$), this indicates that there are at least two distinct processes that contribute to increasing the atomic displacement $\\vec{u}$ following photoexcitation. The ultrafast dynamics of the PDS intensity following photoexcitation provides a clear perspective on these distinct processes. At all high-symmetry BZ boundary positions ($Z$, $Y$ and $T$) we find that PDS intensity increases with a single exponential time constant of \\SI{4 \\pm 1}{\\pico\\second} (Fig. S4) that is the complement of the slow time-constant observed in the Bragg peak dynamics. In fact, within experimental uncertainties an identical time-constant is determined for increases in PDS observed at all BZ positions $|\\vec{k}| > \\SI{0.228}{\\per\\angstrom}$ (i.e. far from Bragg peaks), as is shown in Fig. \\ref{FIG:data}d (yellow) and Fig. \\ref{FIG:polaron}e. \n\nPrevious work has identified a number of soft and strongly-coupled optical phonons in the zone-center region of the $Pnma$ phase~\\cite{Chattopadhyay1986,Li2015,Gong2020,Lanigan2020}. As $\\vec{k}$ approaches zone-center, the PDS contribution overlaps with the Bragg peak lineshape. Thus, Bragg peak intensity must be subtracted to accurately determine the differential PDS from small-wavevector phonons following photoexcitation. As part of this analysis we investigated whether photoexcitation resulted in measurable time-dependence of Bragg peak positions and widths. Fig. S3 demonstrates that the Bragg peak center positions and widths are shot noise limited and neither parameter shows a measurable time-dependence; i.e. any photoinduced change to in-plane lattice constants (which shift Bragg peak positions) or in-plane long-range strain (which broaden and skew the width of peaks) over the range of delay times investigated here are at a level that is below our signal-to-noise ratio. The Bragg peak lineshapes are effectively constant, but have integrated intensities that vary according to the observed transient Debye-Waller dynamics. Panels b and c of Fig. \\ref{FIG:data} show two regions of interest around every Bragg peak. We define an area at the BZ center associated with strongest Bragg scattering (region (1), $|\\vec{k}| \\leq \\SI{0.114}{\\per\\angstrom}$) and a surrounding region associated with wavevectors in the range $\\SI{0.114}{\\per\\angstrom} < |\\vec{k}| \\leq \\SI{0.228}{\\per\\angstrom}$ (region (2)). 
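A minimal sketch of how these annular regions can be selected on the detector, assuming calibrated pixel-wise arrays of the in-plane scattering-vector components and the position of the relevant Bragg reflection (all hypothetical variable names), is\n\\begin{verbatim}\nimport numpy as np\n\n# Hypothetical inputs: 'qx', 'qy' give the in-plane scattering vector at every\n# pixel (inverse Angstrom) and 'G' is the position of the Bragg reflection.\ndef region_masks(qx, qy, G, r1=0.114, r2=0.228):\n    k = np.hypot(qx - G[0], qy - G[1])   # reduced phonon wavevector |k| = |q - G|\n    bragg   = k <= r1                    # region (1): Bragg intensity\n    small_k = (k > r1) & (k <= r2)       # region (2): small-wavevector phonons\n    large_k = k > r2                     # region (3): larger-wavevector phonons\n    return bragg, small_k, large_k\n\\end{verbatim}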
By integrating scattered intensity in these areas separately, we assemble time-series that probe both the Bragg (region 1) and the small wavevector ($|\\vec{k}| \\sim \\Gamma$) PDS (region 2) after subtraction of Bragg intensity. The results for reflections parallel to $\\vec{c}^\\star$ is shown in Fig. \\ref{FIG:data}d (purple), demonstrating that the fast component of the DW dynamics is exclusively associated with a rapid increase in the amplitude of phonons near zone-center. This signal shows the same $\\vec{b}^\\star$\/$\\vec{c}^\\star$ anisotropy as the Bragg peaks (Fig. S6) and is \\emph{insensitive} to the precise definition of the indicated regions, as demonstrated below. Determination of the relatively small differential PDS signals directly under the Bragg peak (region 1) is obviously subject to large uncertainties and is not reported here. These observations corroborate computational work~\\cite{Caruso2019,Ma2018} which found that the electron-phonon coupling is profoundly anisotropic in SnSe, with carriers coupling very strongly to polar zone-center modes and only much more weakly elsewhere. \n\nThe PDS in intermediate regions of the BZ is a mixture of the fast and slow dynamics shown in Fig. \\ref{FIG:data}d, with the magnitude of these components varying as a function of $|\\vec{k}|$ as shown in Fig. \\ref{FIG:polaron}. In a conventional weakly polar semiconductor like GaAs, these dynamics can be described in terms of a model for the re-equilibration of the photodoped carriers with the phonon-system via intra- and inter-valley inelastic electron-phonon scattering processes based on the electron and phonon bandstructures of the material~\\cite{Sjakste2018}. The dynamics observed here for SnSe cannot be described in these terms as is explained further in the discussion below. Polaron formation, however, provides a robust description of these observations. \n\n\\begin{figure*}\n \\centering\n \\includegraphics{polaron.pdf}\n \\caption{Polaron formation in SnSe visualized with UEDS. \\textbf{a} Wavevector-dependent scattering intensity for a Gaussian displacement field model of the polaron lattice distortion as a function of size (polaron FWHM), as described in the text. \\textbf{b} Measured change in diffuse intensity at \\SI{1}{\\pico\\second} (black triangles) and \\SI{5}{\\pico\\second} (orange circles) fit to the Gaussian displacement model above (solid curves). Best-fit FWHM polaron dimensions are $f=\\SI{18.7 \\pm 0.3}{\\angstrom}$ (\\SI{1}{\\pico\\second}) and $f=\\SI{4.2 \\pm 0.1}{\\angstrom}$ (\\SI{5}{\\pico\\second}). The larger polaron is consistent with uniaxial displacements along $\\vec{c}^\\star$ and the smaller polaron with uniform displacements in the $\\vec{b}$--$\\vec{c}$ plane. The boundary between regions (2) and (3) from Fig. \\ref{FIG:data}b are indicated. The shaded region represents the standard error in the mean of intensity across the integration region. \\textbf{c} Differential scattering intensity across the BZ due to the large polaron lattice distortion. \\textbf{d} Differential scattering intensity across the BZ due to the small polaron lattice distortion. 
\\textbf{e} The measured differential diffuse intensity across the BZ at $\\tau=\\SI{5}{\\pico\\second}$ is in excellent agreement with the predicted scattering vector dependence of the model.}\n \\label{FIG:polaron}\n\\end{figure*}\n\n\\begin{figure}\n \\centering\n \\includegraphics{polaron-realspace.pdf}\n \\caption{Real-space visualization of the in-plane ($\\vec{b}$ -- $\\vec{c}$) atomic displacement due to the large and small polarons in Fig. \\ref{FIG:polaron}. The unperturbed in-plane dimensions of the unit cell are marked by solid lines. \\textbf{a} Large (\\SI{18.7}{\\angstrom} FWHM) one-dimensional polaron aligned to the $c$-axis \\textbf{b} Small (\\SI{4.2}{\\angstrom} FWHM) three-dimensional polaron. In both subpanels, the blue-shaded background represents the FWHM of the atomic displacement field $\\vec{u}(\\vec{r})$. The magnitude of the atomic displacements has been exaggerated for visual clarity.}\n \\label{FIG:polaron-realspace}\n\\end{figure}\n\nThe local lattice distortions associated with a polaron in real-space can be modelled as an atomic displacement field $\\vec{u}(\\vec{r})$, where atoms are displaced as $\\vec{r}_j \\to \\vec{r}_j + \\vec{u}(\\vec{r}_j)$ \\cite{Guzelturk2021}. For small displacement fields, the contribution of such a localized lattice distortion to the total scattering amplitude, $f^p$, is given by:\n\\begin{equation}\n f^p(\\vec{q}) \\approx - i \\sum_j f_{e,j}(\\vec{q}) e^{-i \\vec{q} \\cdot \\vec{r}_j} \\left( \\vec{G} \\cdot \\vec{u}_j \\right)\n\\end{equation}\nwhere $\\left\\{ \\vec{r}_j \\right\\}$ are the atomic positions, $\\left\\{ f_{e,j} \\right\\}$ are the atomic form factors for electron scattering, and $\\vec{G}$ is the Bragg reflection nearest to $\\vec{q}$ as in Fig. \\ref{FIG:data} (see SI). We consider two specific displacement fields due to the anisotropic observations described above; an effectively one-dimensional displacement field directed along the $c$-axis ($\\vec{u}_c(\\vec{r}) \\propto e^{-|\\vec{r}|^2\/r_p^2} ~ \\hat{\\vec{r}} \\cdot \\hat{\\vec{c}}$), and a three-dimensional displacement field ($\\vec{u}(\\vec{r}) \\propto e^{-|\\vec{r}|^2\/r_p^2} \\hat{\\vec{r}}$) (see SI). In both cases, $2 \\sqrt{2 \\ln(2)} ~ r_p$ is the full-width at half-maximum (FWHM) of the local lattice distortion associated with the polaron. The effect of both displacement fields is identical in terms of the impact on the diffuse scattering measured within a BZ, shown as a function of polaron FWHM in Fig. \\ref{FIG:polaron} a). The one-dimensional displacement field, however, has the same $\\vec{b}^\\star$\/$\\vec{c}^\\star$ anisotropy in scattering intensity as our measurements.\n\nThis model was fit to the measured photoinduced differential PDS signals at \\SI{1}{\\pico\\second} and at \\SI{5}{\\pico\\second}, to capture changes due to the fast and slow dynamics respectively. The fast PDS dynamics are in excellent agreement with the formation of a \\SI{18.7 \\pm 0.3}{\\angstrom} polaron displacement field. The slow PDS dynamics, by contrast, are in excellent agreement with the displacement field associated with a \\SI{4.2 \\pm 0.1}{\\angstrom} FWHM polaron. A qualitative real-space representation of the two polaron modes is presented in Fig. \\ref{FIG:polaron-realspace}. We tentatively assign the larger polaron to the electron and the smaller polaron to the hole, in analogy to the work on Sio \\emph{et al.}~\\cite{Sio2019} on other polar materials. 
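For orientation, the expression for $f^p(\\vec{q})$ above can also be evaluated directly for a toy cluster. The sketch below is not the fitting code used for Fig.~\\ref{FIG:polaron}; it assumes a hypothetical simple-cubic arrangement of unit scatterers ($f_{e,j}=1$) in place of the actual SnSe lattice, with the displacement field and $\\vec{q}$ both taken along $x$:\n\\begin{verbatim}\nimport numpy as np\n\n# Toy evaluation of f^p(q) for a Gaussian displacement field u(r); 'a0' is a\n# hypothetical lattice constant and all lengths are in Angstrom.\ndef toy_polaron_intensity(k_values, G=2.0, r_p=5.0, a0=3.0, n=15, u0=1e-2):\n    axis = a0*np.arange(-n, n + 1)\n    X, Y, Z = np.meshgrid(axis, axis, axis, indexing='ij')\n    r = np.stack([X, Y, Z], axis=-1).reshape(-1, 3)\n    d = np.linalg.norm(r, axis=1)\n    d[d == 0] = 1.0                              # avoid dividing by zero at r = 0\n    u_x = u0*np.exp(-d**2 \/ r_p**2)*r[:, 0] \/ d  # x component of u(r)\n    out = []\n    for k in k_values:                           # scan |k| along the G direction\n        q = np.array([G + k, 0.0, 0.0])\n        fp = -1j*np.sum(np.exp(-1j*(r @ q))*G*u_x)\n        out.append(np.abs(fp)**2)\n    return np.array(out)\n\nprint(toy_polaron_intensity(np.linspace(0.02, 0.3, 8)))\n\\end{verbatim}\nScanning the width $r_p$ in such a toy model reproduces the qualitative trend of Fig.~\\ref{FIG:polaron}a: broader real-space distortions concentrate the diffuse response closer to the Bragg peak, which is the behaviour exploited in the fits quoted above.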
However, these observations are also consistent with electron and hole polarons being a similar size in SnSe (i.e. \\SI{4.2 \\pm 0.1}{\\angstrom} FWHM) and formation occurring in two steps. We discuss these results within the context of both interpretations below. \n\n\\subsection*{Discussion}\n\n The carriers generated through photoexcitation in these experiments localize via their interactions with the phonon system. These interactions create a local potential that minimizes the free energy of the system according to the standard picture of polaron formation ~\\cite{Franchini2021} shown schematically in Fig. 1 d) - f). A polaron quasiparticle is thus formed as a localized carrier dressed by phonons, potentially in several phonon branches and over a range of wavevectors~\\cite{Sio2019}; a local lattice distortion in real-space is equivalent to a distribution of lattice normal modes in reciprocal space. These details are revealed by the UED and UEDS data. As recent ab initio work by Giustino and colleagues has demonstrated in other polar lattices~\\cite{Sio2019}, polarons require the recruitment of phonon modes across the entire BZ when the dimensions of the polaron approaches those of the lattice constants. In SnSe, we propose that this manifests in the bimodal formation dynamics reported here due to the profoundly anisotropic (momentum-dependent) nature of electron-phonon coupling in the material. Strong Fr\\\"ohlich coupling to near zone-center polar optical phonons ($A_g$, $B_u$, and $B_g$) results in the rapid ($\\sim\\SI{300}{\\femto\\second}$) formation of relatively large (electron) polarons, since polarons of this size only require the recruitment of strongly coupled small-wavevector phonon modes. The formation time of the smaller (hole) polarons is an order of magnitude longer ($\\sim \\SI{4}{\\pico\\second}$) due to the relatively weak coupling to the large-wavevector phonons that must be recruited to form polarons of this size~\\cite{Sio2019}. \n\nAlternatively, the observed bimodal polaron formation dynamics are also consistent with a two-step process that has features in common with Onsager's inverse snowball effect, often discussed in the context of the theory of solvation~\\cite{neria1992simulations}. Here the rapid but weak localization of the charge carriers is provided by the 1D ferroelectric-like lattice distortions along the $c$-axis. The slower but strong localization is provided by the subsequent 3D distortions. Polaron formation dynamically proceeds from the outside (long range) inwards (short range), not via layer accumulation from the inside out like a typical snowball. \n\nThese measurements alone cannot identify precisely how vibrational excitation is distributed over the phonon branches near zone-center by $\\sim \\SI{1}{\\pico\\second}$; however, the increase in mean-square atomic displacements can be determined from the Bragg peak DW decays and can be used to estimate the fraction of excitation energy that has left the carrier system in the form of the polaronic lattice distortion by $\\sim \\SI{1}{\\pico\\second}$. This quantity is shown in Fig. \\ref{FIG:msd}, and is linear with pump fluence over the range investigated. The observed increase, $\\Delta \\langle u_c^2\\rangle$, is consistent with a nearly complete transfer ($>85\\%$) of the excess photodoped carrier energy to zone center phonons (see SI and Fig. S7). 
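For reference, taking the DW factor quoted earlier at face value, and reading $\\langle \\vec{q} \\cdot \\vec{u} \\rangle^2$ as the mean-square projection $\\langle (\\vec{q} \\cdot \\vec{u})^2 \\rangle$, a reflection $\\vec{G}$ oriented along $\\vec{c}^\\star$ with $q_c = |\\vec{G}|$ yields the per-reflection estimate\n\\begin{equation}\n\\Delta \\langle u_c^2 \\rangle (\\tau) \\approx -\\frac{2}{q_c^2} \\ln \\left[ \\frac{I_0(\\vec{G},\\tau)}{I_0(\\vec{G},\\tau<0)} \\right],\n\\end{equation}\nwhich, averaged over the $\\vec{c}^\\star$-oriented reflections, is one simple route to an estimate of the kind plotted in Fig.~\\ref{FIG:msd}.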
Thus, the coupled electron-phonon system at $\\sim$\\SI{1}{\\pico\\second} is well described as a state in which the photodoped carriers in each valley have reached an equilibrium with only this limited set of strongly coupled small-wavevector phonons. This equilibrium is well described by the formation of polaron quasiparticles (Fig. \\ref{FIG:polaron} b) rather than the simple heating of the phonon system as has been observed in other systems, like graphite where rapid electron cooling through a pre-equilibrium with specific strongly coupled modes has been observed ~\\cite{Stern2018,RenedeCotret2019}. A similarly rapid coupling of electronic excitation energy to polaronic lattice distortions is thought to be present in methylammonium lead iodide perovskites~\\cite{Niesner2016}.\n\nPolaron formation is the simplest \\emph{self-consistent} explanation of these data. The isotropic slow-rise in diffuse scattering observed across the entire BZ specifically precludes an understanding of these results in terms of the conventional semiconductor picture of carrier relaxation through intervalley scattering mediated by large-wavevector phonons~\\cite{Sjakste2018,Waldecker2017,Stern2018,RenedeCotret2019,Otto2021}. Based on the electronic dispersion of SnSe calculated by Wei \\emph{et al.}~\\cite{Wei2019}, we modelled the decay of hot electrons and holes, mediated by phonons, via energy-allowed and momentum-conserving pathways. This relaxation mechanism imprints the structure of the electronic dispersion onto the PDS, including a pronounced anisotropy between the $\\vec{b}^\\star$ and $\\vec{c}^\\star$ directions as shown in Fig. S8. This is ruled out by our measurements, which are azimuthally symmetric in reciprocal space (Fig. \\ref{FIG:polaron}e). Neither does the anharmonic decay of phonons provide an explanation for these data, given that phonon lifetimes are estimated to be almost an order of magnitude longer (\\SIrange{15}{30}{\\pico\\second})~\\cite{Chandrasekhar1977,Li2015,Lanigan2020} than the timescales observed herein. Moreover, the anharmonic decay of phonons measured in the time-domain displays an imprint of the phonon dispersion due to energy- and momentum-conservation rules~\\cite{Stern2018,RenedeCotret2019}, which is not seen in our data (Fig. \\ref{FIG:polaron}e).\n\n\\begin{figure}\n \\centering\n \\includegraphics{displacement.pdf}\n \\caption{Increase in mean-square-displacement of all atoms along the $c$ axis, $\\Delta \\langle u_c^2 \\rangle$, due to the change in vibrational amplitude of the strongly-coupled zone-center modes exclusively. The associated photocarrier concentration $N_{\\gamma}$ for a sample of dimensions $\\SI{50 x 50 x 0.045}{\\micro\\meter}$ is shown above the plot. Boxes are used to represent error bars along both axes.}\n \\label{FIG:msd}\n\\end{figure}\n\nThere are a number of important connections between these observations and an understanding of the thermoelectric properties of SnSe. First, the rate of large polaron formation is very rapid and appears to be at or near the limit imposed by the period of the highest frequency phonons in SnSe ($\\sim \\SI{200}{\\femto\\second}$) due to the strong Fr\\\"ohlich coupling in the polar lattice, even at the high carrier density operating conditions of optimally doped SnSe for thermoelectric device applications~\\cite{Sootsman2009,Zhao2016,Fan2018}. 
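(A rough geometric estimate puts the density scale quoted next in context: if each small polaron is pictured as excluding a sphere of radius equal to its FWHM, neighbouring polarons begin to overlap at a number density of order $\\left[ \\tfrac{4\\pi}{3} (\\SI{4.2}{\\angstrom})^3 \\right]^{-1} \\approx \\SI{3e21}{\\per\\cubic\\centi\\meter}$.)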
At this doping level, these results indicate that SnSe is well described as a dense polaron system, since the density of polarons overlapping at a distance equal to their FWHM is \\SI{3.2e21}{\\per\\cubic\\centi\\meter} for the smaller (hole) polarons, overlapping with this optimal doping range. \nThe polaronic nature of the charge carriers in SnSe likely plays an important role in preserving the electron-crystal, phonon-glass conditions that are important for high-performance thermoelectric materials~\\cite{Rowe1995}. The dressed charges are better screened from scattering mechanisms that could otherwise deteriorate mobility at high carrier density and temperatures approaching a structural phase transition, where $ZT$ in SnSe is highest. Several open questions remain regarding the nature of polarons in SnSe. Our measurements directly probe the in-plane lattice distortions, but do not provide information on the inter-layer (or $a$-axis) dependence. There is also the important question regarding the potential ferroelectric nature of polarons in SnSe, as has been discussed in the context of lead halide perovskites~\\cite{Frost2017,Joshi2019}. Monolayer SnSe has been investigated as a possible platform for ultrathin ferroelectrics~\\cite{Fei2016}, and polar nanodomain formation and interlayer coupling may contribute to the dynamics we have observed. The in-plane polarization within a single layer can be modulated simply through changes in the angle of the Sn-Se bonds relative to the $a$-axis ~\\cite{Fei2016} in a manner that is consistent with the local polaronic displacements shown schematically in Fig. \\ref{FIG:polaron-realspace}.\n\nElectron-phonon coupling is not normally considered to be an important contributor to lattice thermal conductivity, however, previous work has shown that these interactions can play a role in suppressing thermal conductivity and enhancing thermoelectric performance in Si~\\cite{Liao2015} and SiGe~\\cite{Fan2018} at high carrier doping. While a giant lattice anharmonicity and 3-phonon scattering processes seems to be sufficient to explain the ultralow thermal conductivity of undoped SnSe~\\cite{Zhao2014, Li2015}, a quantitative understanding of electron-phonon interactions and their impact on both electronic and lattice thermal conductivity (in addition to electrical conductivity) in this strongly coupled, high-carrier density regime is important for the further development of thermoelectrics and higher $ZT$. The rate of electron-phonon scattering with the strongly coupled polar modes is more than an order of magnitude higher than 3-phonon scattering processes at the carrier densities investigated here~\\cite{Lanigan2020}.\n\nWe are currently in an excellent position to make significant progress understanding this complex, strongly-coupled regime across many material classes. The recent parallel development of ab initio approaches for both electron-phonon coupling~\\cite{Ponce2016} and polaron formation~\\cite{Sio2019} and time- and momentum-resolved measurement techniques like ultrafast electron\/xray diffuse scattering that are capable of interrogating electron-phonon interactions in exquisite detail and directly visualizing polaron formation. 
Future work that combines these approaches together with more well established ultrafast spectroscopic methods are likely to yield significant insights.\n\n\\subsection*{Synthesis and sample preparation} \n\nA SnSe ingot (\\SI{20}{\\gram}) was synthesized by mixing appropriate ratios of high purity starting materials (Sn chunk, 99.999\\%, American Elements, USA and Se shot, 99.999\\%, 5N Plus, Canada) in \\SI{13}{\\milli\\meter} diameter quartz tube. The tube was flame-sealed at a residual pressure of $\\SI{1e-4}{\\mmHg}$, then slowly heated to \\SI{1223}{\\kelvin} over \\SI{10}{\\hour}, soaked at this temperature for \\SI{6}{\\hour} and subsequently furnace cooled to room temperature. The obtained ingot was crushed into powder and flame-sealed in a quartz tube, which was placed into another, bigger, flame-sealed quartz tube. A crystal with dimensions of $\\sim$\\SI{13}{\\milli\\meter} (diameter) $\\times$ \\SI{20}{\\milli\\meter} (length) was obtained.\n\nSeven samples were used for the ultrafast electron scattering measurements, to ensure reproducibility. Six samples were ultramicrotomed with a \\ang{35} diamond blade, while one sample was mechanically exfoliated. Three of the ultramicrotomed samples were cut from a first SnSe mother flake at a thickness of \\SI{90}{\\nano\\meter}. The remaining three ultramicrotomed samples were cut to a thickness of \\SI{70}{\\nano\\meter} from a different mother flake, synthesized separately from the first. The exfoliated sample was prepared from the second mother crystal, with a final thickness of \\SI{45}{\\nano\\meter} as estimated based on the relative ratio of intensities, compared to thicker ultramicrotomed samples.\n\n\\subsection*{Ultrafast electron scattering experiments} \nElectron scattering experiments employed \\SI{35}{\\femto\\second} pulses of \\SI{800}{\\nano\\meter} light at a repetition rate of \\SI{1}{\\kilo\\hertz}. Part of these light pulses is upconverted to \\SI{266}{\\nano\\meter} pulses via third-harmonic generation, which is then used to generate bunches of $10^6$ -- $10^7$ electrons from a bulk copper photocathode. These electrons are then accelerated to \\SI{90}{\\kilo\\electronvolt} in a DC electric field. Electron bunches are compressed with a radio-frequency cavity to counterbalance space-charge repulsion, resulting in a time-resolution of $\\SI{130}{\\femto\\second}$. Electrons are transmitted through the samples before being collected by an electron camera. The other part of the \\SI{800}{\\nano\\meter} pulses is used to photoexcite the sample almost co-linearly with the electron propagation axis ($\\sim \\ang{5}$). Experiments were repeated for up to \\SI{72}{\\hour} to maximize signal-to-noise, which was made possible by improvements in RF-laser synchronization~\\cite{Otto2017}. Detailed descriptions of this instrument are presented elsewhere~\\cite{Chatelain2012, Otto2017}. The samples were thinner than than the optical depth of $>\\SI{100}{\\nano\\meter}$ at \\SI{800}{\\nano\\meter}~\\cite{Barrios2014, Makinistian2009}, ensuring that the entire sample was photoexcited. The samples were oriented in the $\\langle 100 \\rangle$ direction, giving UEDS measurements a full view of lattice dynamics in the plane spanned by $\\vec{b}$ and $\\vec{c}$.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\nIn recent years, the studies on black holes in higher dimensions have attracted much attention. 
\nSome of these studies show that they have much more complicated and richer structure than 4-dimensional ones.\nEspecially, the study of Kaluza-Klein (KK) black holes is important \nin association with our apparent 4-dimensional spacetime. \nThe Gregory-Laflamme instability~\\cite{Gregory:1993vy} \n(see also \\cite{Harmark:2007md} and references therein) and black hole\/black string phase transition \n(see {\\it e.g.} the reviews \\cite{Kol:2004ww}) \nmotivate many studies on thermodynamic aspects of KK black holes. \nNow, the thermodynamic properties and the first law for asymptotically flat KK black holes \nare widely investigated \\cite{Kol:2003if}-\\cite{Kastor:2007wr}. \n\n\nIn 5-dimensional Einstein-Maxwell theory, there is an analytic solution representing an \nelectrically charged black hole with squashed horizons \n\\cite{Ishihara:2005dp} as a generalization of the solution given in \\cite{Dobiasch:1981vh} and \\cite{Gibbons:1985ac}. \nIntriguingly, the spacetime far from the black hole is \nlocally a product of the 4-dimensional Minkowski spacetime and $S^1$. \nIn this sense, the black hole resides in KK spacetime and \nis worth to be named a KK black hole. \n\n\nThe black hole has an interesting property that various definitions of mass take different values, \nwhich means that the black hole gives an opportunity to investigate \nthe differences in various definitions of mass. \nIn this paper, we show some expressions for the first law of black hole thermodynamics \nsatisfied by those masses and discuss the differences from the viewpoint of thermodynamics. \n\n\n\\section{Kaluza-Klein black hole}\n\n\nLet us review the KK black holes with squashed horizons~\\cite{Ishihara:2005dp}, \nwhich is a solution of the 5-dimensional Einstein-Maxwell theory. \nThe action is given by \n\n \\begin{eqnarray}\n I = \\frac{1}{16\\pi G} \\int_M d^5 x \\sqrt{-g} \n \\left[ R - F^{\\mu\\nu} F_{\\mu\\nu} \\right]+\\frac{1}{8\\pi G} \\int_{\\partial M} \n K\\sqrt{-h}d^4x, \n \\end{eqnarray}\n\n where $G$ is the 5-dimensional Newton constant, $g_{\\mu\\nu}$ is the metric, $R$ is scalar curvature, \n $F_{\\mu\\nu} = \\partial_\\mu A_\\nu - \\partial_\\nu A_\\mu $ is \n the field strength of the 5-dimensional $U(1)$ gauge field $A_\\mu$. \nThe second term is so-called Gibbons-Hawking term, in which $h$ is determinant of the induced metric and \n $K$ is trace of the extrinsic curvature of the boundary $\\partial M$, respectively. \nThe metric of the black hole is given by \n\\begin{eqnarray}\n ds^2 &=& - V(\\rho) d\\tau^2 + \\frac{B (\\rho)}{V(\\rho)} d\\rho^2 \n + \\rho^2 B (\\rho) d\\Omega^2 \n + \\frac{r_{\\infty}^2}{4 B (\\rho)} \\left(d\\psi + \\cos \\theta d\\phi \\right)^2 ,\n\\label{eq:metric-Trho}\n\\end{eqnarray}\nwhere $d\\Omega^2 = d\\theta^2 + \\sin^2 \\theta d\\phi^2$ is the metric of the unit two-dimensional sphere and \n\\begin{eqnarray}\nV(\\rho)= \\left( 1- \\frac{\\rho_{+}}{\\rho}\\right)\\left( 1-\\frac{\\rho_{-}}{\\rho} \\right) , \\ \\ \nB(\\rho) = 1+ \\frac{\\rho_0}{\\rho}, \\ \\ \nr_{\\infty} = 2\\sqrt{(\\rho_+ +\\rho_0 )(\\rho_- + \\rho_0 )}. \n\\end{eqnarray}\nHere, the coordinate ranges are \n$0 \\leq \\theta <2\\pi ,\\ 0 \\leq \\phi < 2\\pi ,\\ 0 \\leq \\psi < 4\\pi $. 
\nThe gauge potential is given by\n\\begin{eqnarray}\nA = \\mp \\frac{\\sqrt{3}}{2} \\frac{\\sqrt{{\\rho_+}{\\rho_-}}}{ \\rho_0}\n \\left(1+\\frac{\\rho_0}{\\rho} \\right) d\\tau.\n \\end{eqnarray}\n\n\nIt is easy to see the apparent singularity at $\\rho_{+}$ corresponds to the outer horizon of the black hole.\nThe inner horizon $\\rho_{-}$ is analogous to that of the Reissner-Nordstr\\\"om (RN) black holes.\nThe spatial infinity corresponds to a limit $\\rho \\to \\infty$. \nIt should be noted that the shape of the horizon is a squashed sphere \nas was discussed in~\\cite{Ishihara:2005dp}.\nFrom the metric (\\ref{eq:metric-Trho}), \nit is seen that the $S^1$ circle parametrized by a coordinate $\\psi$ has finite size even at the spatial infinity. \nThe non-trivial twisting of the $S^1$ circle fibrated over the $S^2$ base space \nleads a 4-dimensional U(1) gauge field by KK reduction.\nActually, in no horizon limit $\\rho_+, \\rho_- \\to 0$, the black hole spacetime becomes the KK monopole \nspacetime \\cite{Gross-Perry}\\cite{Sorkin}. \nIn the limit $r_\\infty \\to \\infty$, the KK monopole becomes 5-dimensional Minkowski spacetime and \nthe black hole reduces to the 5-dimensional RN black hole.\nWe term this limit the spherically symmetric limit. \n\n\nGiven the metric (\\ref{eq:metric-Trho}), we can calculate various physical quantities.\nThe surface gravity is calculated as\n\\begin{eqnarray}\n \\kappa_{+} = \\frac{\\rho_{+} -\\rho_{-}}{2\\rho_{+} \\sqrt{{\\rho_{+}}({\\rho_{+} + \\rho_0})}},\n \\label{eq:surfacegravity}\n\\end{eqnarray}\nwhich gives the Hawking temperature of the black hole $T= \\kappa_+\/2\\pi$ \\cite{Hawking:1974sw}.\nWe assume that the entropy of the black hole is given by the Bekestein-Hawking formula \n\\begin{eqnarray}\nS = \\frac{A_+}{4G} = \\frac{4\\pi^2}{ G} \\rho_+ (\\rho_+ + \\rho_0) \\sqrt{\\rho_+(\\rho_- + \\rho_0 ) },\n \\label{eq:entropy}\n\\end{eqnarray}\nwhich is consistent with the Wald's entropy formula \\cite{Wald:1993nt} \\cite{Iyer:1994ys}. \nThe electric charge and electrostatic potential of the black hole are also calculated as \\cite{Ishihara:2005dp}\n\\begin{eqnarray}\n Q = \n \\pm \\frac{\\sqrt{3} \\pi}{G} r_\\infty \\sqrt{\\rho_{+} \\rho_{-} }, \\quad \n\\Phi \n= \\pm \\frac{\\sqrt{3}}{2} \\sqrt{\\frac{\\rho_-}{\\rho_+} }. \n\\end{eqnarray}\n\n\n\\section{Mass and free energy}\n\nThere are several definitions of mass for the black hole spacetime.\nCai {\\it et al.} \\cite{Cai:2006td} discussed mass of the black hole defined by the counter-term method for\nasymptotically locally flat spacetime \\cite{Mann:1999pc}-\\cite{Kraus:1999di}. \nUsing the counter-term mass $M_{ct}$, they investigated the first law of black hole thermodynamics and \nsuggested the existence of a new work term in the first law. \nThe direct calculation reveals \n\\begin{eqnarray}\n M_{ct} = M_{AD} = \\frac{\\pi}{2G}r_{\\infty} \\left( 2\\rho_+ + 2\\rho_- +\\rho_0 \\right), \n\\label{eq:ct-mass-BH}\n\\end{eqnarray}\nwhere $M_{AD}$ is the Abbott-Deser (AD) definition of mass~\\cite{Abbott:1981ff} \nevaluated on a product $S^1$ bundle over 4-dimensional Minkowski spacetime, which is a completely \nflat spacetime\\footnote{ Cai {\\it et al.} showed that $M_{ct}$ equals the AD mass evaluated on a \"twisted\" \n$S^1$ bundle over 4-dimensional Minkowski spacetime. \nHowever, this background spacetime is not a solution of the vacuum Einstein equation \nand we can not obtain finite free energy and Hamiltonian of the black hole on it. 
\nThus, we consider the flat background.\n}.\nHereafter, we term this spacetime flat background, shortly.\nTherefore, $M_{AD}$ satisfies the same equations as $M_{ct}$. \nThe Komar mass is also meaningful as a mass of black holes which possess a timelike Killing vector.\nUsing the timelike Killing vector $\\partial \/\\partial \\tau $ normalized at the spatial infinity, \nwe can calculate the Komar mass for the black hole (\\ref{eq:metric-Trho}) as \n\\begin{eqnarray}\n M_K = \\frac{3\\pi }{4 G} r_{\\infty} \\left( \\rho_{+} + \\rho_{-} \\right) ,\n \\label{eq:Komar}\n\\end{eqnarray}\nwhere the integral is taken over the 3-dimensional sphere at the spatial infinity.\nSmarr-type formula was shown generally in~\\cite{Gauntlett:1998fz} for the Komar mass \n\\begin{eqnarray}\n M_K -Q\\Phi = \\frac{3}{2} TS,\n\\end{eqnarray}\nwhich is sometimes called integrated expression for the first law. \nThis implies that we would have a differential expression for the first law using the Komar mass.\nClearly, since $M_K$ does not equal $M_{ct}$, then \nthe expressions for the first law satisfied by these masses should be different. \n\n\n\nIn order to deduce the work term suggested by Cai {\\it et al.}, \nwe note the fact that, far from the black hole, the geometry locally looks like the black string. \nIt is known that, in the case of the black p-branes or black string without twisting, \nthe so-called gravitational tension and the size of the extra-dimension \ncontribute to the first law~\\cite{Kol:2003if}\\cite{Townsend:2001rg}\\cite{Harmark:2003eg}\\cite{Traschen:2001pb}. \nOne may think that, also in the case of the squashed KK black hole, \nthese quantities contribute to the first law as a work term. \nThe gravitational tension which can be applied to non-asymptotically flat spacetime was \ndefined by using the Hamiltonian formalism to a foliation of the spacetime \nalong asymptotically translationally-invariant spatial direction~\\cite{Harmark:2004ch}. \nThe definition requires some reference spacetime in order to give finite gravitational tension. \nAs a reference background, we choose the flat background.\nThen, using the definition given in \\cite{Harmark:2004ch}, we can calculate the gravitational tension \nassociated with the direction $\\partial_{\\psi}$ as\n\\begin{eqnarray}\n\\mathcal{T} = \\frac{1}{4 G}\\left(\\rho_+ +\\rho_- +2\\rho_0\\right). \n\\end{eqnarray}\nThe conjugate variable to $\\mathcal{T}$ is \nthe size of the extra-dimension at the spatial infinity, $L:=2\\pi r_{\\infty}$. \nWith these quantities, $M_{AD}$ is related to the entropy and the electric charge \nvia the following expression for the first law: \n\\begin{eqnarray}\ndM_{AD} &=& TdS + \\Phi dQ +\\mathcal{T} dL.\n\\label{eq:first-law-AD}\n\\end{eqnarray}\nThus, the last term $\\mathcal{T}dL$ is \nthought of as a work term suggested by Cai {\\it et al.}~\\cite{Cai:2006td}. \nThe expression for the first law (\\ref{eq:first-law-AD}) shows that \n$M_{AD}$ is a thermodynamic potential as a function of $(S,Q,L)$. \nIt means that $M_{AD}$ is relevant to the thermodynamic system with natural variables $(S,Q,L)$. 
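As a quick consistency check on these relations (not needed for the derivation), the quantities above can be evaluated symbolically in units with $G=1$, treating $(\\rho_+,\\rho_-,\\rho_0)$ as independent parameters; a minimal sketch is\n\\begin{verbatim}\nfrom sympy import symbols, sqrt, pi, Rational, diff\n\n# Parameters of the solution, all taken positive (units with G = 1).\nrp, rm, r0 = symbols('rho_p rho_m rho_0', positive=True)\nrinf = 2*sqrt((rp + r0)*(rm + r0))\nT    = (rp - rm) \/ (4*pi*rp*sqrt(rp*(rp + r0)))   # Hawking temperature\nS    = 4*pi**2*rp*(rp + r0)*sqrt(rp*(rm + r0))    # Bekenstein-Hawking entropy\nQ    = sqrt(3)*pi*rinf*sqrt(rp*rm)                # electric charge\nPhi  = Rational(1, 2)*sqrt(3)*sqrt(rm \/ rp)       # electrostatic potential\nMK   = Rational(3, 4)*pi*rinf*(rp + rm)           # Komar mass\nMAD  = Rational(1, 2)*pi*rinf*(2*rp + 2*rm + r0)  # Abbott-Deser mass\nTens = Rational(1, 4)*(rp + rm + 2*r0)            # gravitational tension\nL    = 2*pi*rinf                                  # size of the S^1 at infinity\n\nchecks  = [MK - Q*Phi - Rational(3, 2)*T*S]       # Smarr-type formula\nchecks += [diff(MAD, x) - T*diff(S, x) - Phi*diff(Q, x) - Tens*diff(L, x)\n           for x in (rp, rm, r0)]                 # first law, component-wise\n\nvals = {rp: 3, rm: 2, r0: 1}\nfor c in checks:\n    print(c.subs(vals).evalf())                   # each value should be ~ 0\n\\end{verbatim}\nand each printed value is expected to vanish up to numerical round-off, in line with the Smarr-type formula and Eq.~(\\ref{eq:first-law-AD}), confirming that $M_{AD}$ indeed behaves as a thermodynamic potential in the variables $(S,Q,L)$.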
\nThe same holds true for $M_{ct}$, because of $M_{ct}=M_{AD}$.\n\n\nThe free energy of the black hole is obtained from the classical Euclidean action $I_E$ as \n\\begin{eqnarray}\nF =T {I_E} = \\frac{\\pi}{2G}r_{\\infty} (\\rho_+ + \\rho_0 ),\n\\end{eqnarray}\nwhich was calculated by the traditional background subtraction method \nwith the flat background~\\cite{Gibbons:1976ue}.\nThe free energy $F$ equals one evaluated by the counter-term method.\nThus, in this case, \nthe counter-term method is equivalent to background subtraction method with the flat background.\nIn the calculation of the free energy by the background subtraction method, we fixed the temperature, \nthe electro-static potential and the size of the extra-dimension at the boundary of the spacetime. \nIt follows that the free energy satisfies the following differential relation: \n\\begin{eqnarray}\ndF = -SdT -Qd\\Phi +\\mathcal{T} dL.\n\\label{eq:first-law-F}\n\\end{eqnarray}\nThus, the free energy $F$ has natural variables $(T, \\Phi,L)$. \nThe equations (\\ref{eq:first-law-AD}) and (\\ref{eq:first-law-F}) suggest the following relation with the AD mass:\n\\begin{eqnarray} \nF=M_{AD} - Q\\Phi -TS,\n\\label{eq:Legendre-F-AD}\n\\end{eqnarray}\nwhich can be easily shown by direct calculation. \nThis relation (\\ref{eq:Legendre-F-AD}) is nothing but Legendre transformation between \n$F$ and $M_{AD}$. \n\n\n\nFor asymptotically flat electrically charged black holes, it is known that Hamiltonian \ndoes not equal ADM mass and these two quantities differ by $Q\\Phi$~\\cite{Hawking:1995ap}.\nThe black hole we consider here is not asymptotically flat but asymptotically locally flat.\nIn spite of the difference in asymptotic structure, \nif we regard the AD mass as a counterpart of ADM mass, \nthe same is true for the case of the squashed black hole; \nThe Hamiltonian of the black hole evaluated on the flat background can be related to the AD mass as\n~\\cite{Hawking:1998jf}\n\\begin{eqnarray}\nH = \\frac{\\pi}{2G} r_{\\infty} \\left( 2\\rho_+ - \\rho_- +\\rho_0 \\right)\n=M_{AD} -Q\\Phi.\n\\label{eq:Hamiltonian}\n\\end{eqnarray}\nOften, Hamiltonian for solution without electric charge is called Hawking-Horowitz (HH) mass \\cite{Hawking:1995fd}.\nIn the case of $Q=0$, the HH mass of the black hole is equal to the AD mass \nas is in the case of asymptotically flat black hole. \nFrom the relation (\\ref{eq:Hamiltonian}), the first law can take the form with the Hamiltonian as \n\\begin{eqnarray}\ndH &=& TdS - Qd\\Phi+\\mathcal{T} dL.\n\\label{eq:first-law-flat-H} \n\\end{eqnarray}\nEquivalently, \nthe relation (\\ref{eq:Hamiltonian}) is a Legendre transformation between the Hamiltonian and the AD mass. \nThe equation (\\ref{eq:first-law-flat-H}) shows \nthat the Hamiltonian is the thermodynamic potential with natural variables $(S, \\Phi, L)$. \nThe Hamiltonian is also the Legendre transform of the free energy with respect to $TS$; \n$F = H-TS$.\nTherefore, \nthe AD mass and the Hamiltonian are related to the free energy $F$ via Legendre transformations and \ncan be interpreted as different thermodynamic potentials. \n\n\nHowever, it can be shown that the Komar mass does not have $\\mathcal{T}$ or $L$ as a natural variable. \nNow, let us obtain the differential expression for the first law by use of the Komar mass. 
\nIn order to do so, we require that the Komar mass should be related to the free energy via Legendre transformations.\nIn stead of $(L, \\mathcal{T})$, we introduce a couple of new variables $(\\epsilon, \\Sigma)$ satisfying \n\\begin{eqnarray}\n F=M_K - TS -Q\\Phi + \\epsilon \\Sigma,\n\\quad \ndM_K = TdS + \\Phi dQ -\\epsilon d \\Sigma.\n\\label{eq:first-law-Komar}\n\\end{eqnarray}\nThen, $(\\epsilon, \\Sigma)$ is determined up to a constant $C$ as\n\\begin{eqnarray}\n\\epsilon = C (2\\pi r_{\\infty})^2=CL^2, \\quad \n\\Sigma =\\frac{1}{16 \\pi GC r_{\\infty}} \\left(\\rho_+ +\\rho_-+2\\rho_{0}\\right)= \\frac{\\mathcal{T}}{2CL}. \n\\end{eqnarray}\nThus, the Komar mass has this quantity $\\Sigma$ as a natural variable as well as $S$ and $Q$.\nThe pair of variables $(\\epsilon, \\Sigma)$ contributes not only to the differential relation \nfor the Komar mass \nbut also to the following:\n\\begin{eqnarray}\ndF = -SdT -Qd\\Phi +\\Sigma d\\epsilon,\\ \ndH = TdS -Qd\\Phi +\\Sigma d\\epsilon, \\ dM_{AD} = TdS +\\Phi dQ +\\Sigma d\\epsilon.\n\\label{eq:first-law-H-AD-epsilon}\n\\end{eqnarray}\nTherefore, the expression for the first law including the free energy, \n the Hamiltonian or the AD mass is not unique. \nThese expressions are consistent with the interpretation that $H$ or $M_{AD}$ is the thermodynamic potential \nwith natural variables $(S,\\Phi, L)$ or $(S, Q,L)$, because \nsystem with fixed $L$ \nis equivalent to that with fixed $\\epsilon$ due to the relation $\\epsilon \\propto L^2$.\n\n\n\nIn order to clarify the relation with the case of the 5-dimensional RN black hole,\nlet us consider the spherically symmetric (SS) limit $r_{\\infty}\\to \\infty$.\nHowever, the free energy, the Hamiltonian and the AD mass evaluated on the flat background diverge in the limit.\nAs an alternative background, we consider the KK monopole spacetime. 
\nThe free energy and the Hamiltonian on the KK monopole background are calculated as \n\\begin{eqnarray}\n\\tilde{F} = \\frac{\\pi}{2G} r_{\\infty} \n \\left(\\rho_+ + \\rho_0 - \\frac{r_{\\infty}}{2} \\right), \\quad \n\\tilde{H} = \\frac{\\pi}{2G} r_{\\infty} \\left(2\\rho_+ -\\rho_- +\\rho_0 -\\frac{r_{\\infty}}{2} \\right).\n\\end{eqnarray}\nIt is easily checked that the difference between $F$ and $\\tilde{F}$ is the free energy of the KK monopole \non the flat background; $F-\\tilde{F}=\\frac{\\pi}{4G}r_\\infty^2$, \nwhich equals the free energy calculated by the counter-term method \\cite{Mann:2005cx}.\nThe gravitational tension calculated on the KK monopole background is \n\\begin{eqnarray}\n\\tilde{\\mathcal{T}} &=& \\frac{1}{4 G} \\left( \\rho_+ +\\rho_- + 2\\rho_0 -r_{\\infty}\\right).\n\\end{eqnarray}\nWith this tension, \nthe free energy and the Hamiltonian satisfy \n\\begin{eqnarray}\nd\\tilde{F} = -SdT -Qd\\Phi + \\tilde{\\mathcal{T}} dL, \\quad \nd\\tilde{H} = TdS -Qd\\Phi + \\tilde{\\mathcal{T}} dL.\n\\end{eqnarray}\nAs before, $\\tilde{\\mathcal{T}}$ or $L$ can not be a natural variable for the Komar mass.\nAs in the previous case, we can obtain a couple of thermodynamic variables $(\\epsilon, \\tilde{\\Sigma})$ satisfying\n\\begin{eqnarray}\n\\tilde{F}=M_K-TS-Q\\Phi+\\epsilon \\tilde{\\Sigma}, \\quad dM_K = TdS + \\Phi dQ -\\epsilon d \\tilde{\\Sigma}.\n\\label{eq:first-Komar-KKm}\n\\end{eqnarray}\nThe result is \n\\begin{eqnarray}\n\\epsilon ={C} (2\\pir_\\infty)^2 = {C}L^2, \\quad \n\\tilde{\\Sigma} = \n\\frac{1}{16\\pi G {C} r_{\\infty}} \\left(\\rho_+ +\\rho_-+2\\rho_{0}-r_\\infty \\right)=\\frac{\\tilde{\\mathcal{T}}}{2{C}L},\n\\end{eqnarray}\nwhere $\\tilde{\\Sigma}$ is different from $\\Sigma$ by a constant $(16 \\pi GC)^{-1}$. \nTherefore, the two quantities $\\Sigma$ and $\\tilde{\\Sigma}$ are essentially the same, \nand the differential expressions (\\ref{eq:first-law-Komar}) \nand (\\ref{eq:first-Komar-KKm}) are equivalent.\nThis is consistent with the fact that the Komar mass does not depend on the choice of background spacetime.\nIn the SS limit, the products $\\epsilon\\tilde{\\Sigma}$ and $L\\tilde{\\mathcal{T}}$ become zero, \nand $M_K$, $\\tilde{F}$ and $\\tilde{H}$ \nbecome those of 5-dimensional RN black hole evaluated on the 5-dimensional Minkowski background.\nThus, this formulation for the squashed black hole \nincludes usual thermodynamic formulation for 5-dimensional RN black hole as a limit.\nIn this sense, it \nis a generalized formulation of thermodynamics for 5-dimensional electrically charged static black holes. \n\n\nBecause $\\tilde{\\Sigma}$ and $\\tilde{\\mathcal{T}}$ vanish when the outer horizon is a perfectly-round three-sphere,\none may think that\nthe thermodynamic variable $\\tilde{\\Sigma}$ or $\\tilde{\\mathcal{T}}$ can be interpreted as \na quantities representing the deformation in the shape of the horizon. 
\nIn fact, the variables $\\tilde{\\Sigma}$ and $\\tilde{\\mathcal{T}}$ can be rewritten as \n\\begin{eqnarray} \n\\tilde{\\Sigma} = \\frac{1}{32\\pi G C}\\frac{1}{R_+ \\ell_+} \\left({\\ell_+} - R_+ \\right)^2, \\quad \n\\tilde{\\mathcal{T}} = \\frac{1}{8G}\\frac{ r_{\\infty}}{R_+ \\ell_+} \\left( \\ell_+ - R_+ \\right)^2,\n\\label{eq:Sigma-squashing}\n\\end{eqnarray}\nwhere we have defined new parameters $R_+$ and $\\ell_+$ as follows:\n \\begin{eqnarray}\nR_+ := \\sqrt{\\rho_+(\\rho_+ + \\rho_0 )}, \\quad {\\ell_+} := \\sqrt{\\rho_+ (\\rho_- +\\rho_0)}, \n \\end{eqnarray}\nwhich denote \nthe circumference radius of the $S^2$ base space at the outer horizon and \nthat of the twisted $S^1$ fibre there, respectively. \nIn this way, $\\tilde{\\Sigma}$ and $\\tilde{\\mathcal{T}}$ measure the squashing of the outer horizon.\n\n\n\\section{Summary}\n\n\nAs shown in this paper, \nthe Abbott-Deser mass which equals the counter-term mass, the Komar mass and the Hamiltonian \ncontribute to different expressions for the first law \nand are related to each other by the Legendre transformations. \nEach mass can be interpreted as a thermodynamic potential with its own natural variables.\nThe consistent set of natural variables for each mass has been revealed, \nand we have obtained a more general thermodynamic formulation for electrically charged black holes\nin 5-dimensional Einstein-Maxwell system.\n\n\nNow, we discuss the relation between $(L, \\mathcal{T})$ and \nthe pair of new quantities ($\\epsilon, \\Sigma$).\nLet us begin by considering the case of the free energy. \nIn the evaluation of the free energy, the size of the extra-dimension at spatial infinity $L$ was fixed, \nso that $L$ is a natural variable of the free energy. \nIn general, $L$ can be replaced by any monotonic function of $L$, say $f(L)$, as a thermodynamics variable. \nThis may be trivial because thermodynamic environment characterized by fixing the size $L$ is equivalent to\nthat by fixing $f(L)$. \nOne may write the differential relation for the free energy as \n\\begin{eqnarray}\ndF = -SdT -Qd\\Phi + W d f(L),\n\\label{eq:first-law-fL}\n\\end{eqnarray}\nwhere $W$ is conjugate to $f(L)$. \nSince the last work term can be rewritten as $Wdf(L) =W f'(L) dL$, \nthe relation (\\ref{eq:first-law-fL}) is equivalent to (\\ref{eq:first-law-F}) \nif the conjugate variable $W$ satisfies \n\\begin{eqnarray}\nW= \\frac{\\mathcal{T}}{f'(L)}.\n\\end{eqnarray}\nIn this way, the work term with the form $\\mathcal{T} dL$ can be replaced by $W d f(L)$.\nThe first law (\\ref{eq:first-law-fL}) is equivalent to (\\ref{eq:first-law-H-AD-epsilon}),\nif $f(L) = CL^2 = \\epsilon$, \nand the relation between $\\Sigma$ and $\\mathcal{T}$ is given as\n\\begin{eqnarray}\n\\Sigma = \\frac{\\mathcal{T}L}{2\\epsilon}\\ \\propto\\ \\frac{\\mathcal{T}}{2L}.\n\\label{eq:Sigma-T-relation}\n\\end{eqnarray}\nThe Hamiltonian and the AD mass are Legendre transforms of the free energy with respect to \n$TS$ or $TS+Q\\Phi$ respectively, so that the expressions in (\\ref{eq:first-law-H-AD-epsilon}) are \nequivalent to (\\ref{eq:first-law-AD}) and (\\ref{eq:first-law-flat-H}). \nTherefore, the work term in the first law for the Hamiltonian and the AD mass is not unique. \n\n\nHowever, the quantities $\\mathcal{T}$ and $\\Sigma$, which are conjugate to $L$ and $\\epsilon$, \nare different in the sense of thermodynamics: \nthermodynamic environment characterized by fixing $\\Sigma$ is one by fixing $\\mathcal{T}\/2L$, as shown in \n(\\ref{eq:Sigma-T-relation}). 
\nThus, the pair ($\\epsilon, \\Sigma$) is thermodynamically different from ($L, \\mathcal{T}$). \nIt is natural that thermodynamic potentials suitable for different environments are different. \nThe Komar mass is a thermodynamic potential for environment characterized by ($S, Q, \\Sigma$), \nwhile the AD mass is a thermodynamic potential for that by ($S, Q, \\epsilon$) or ($S, Q, L$). \nIn this way, we can interpret the difference of masses from thermodynamical viewpoint. \n\n\nIt is interesting to investigate thermodynamic properties and stability of the black hole in each environment \nand to compare the black hole with black string. \nIt will be reported in a future publication. \n\n\n\n\n\\begin{acknowledgments} \nY.K. was partially supported by the Yukawa memorial foundation and is also supported by \nthe 21st Century COE \"Constitution of wide-angle mathematical basis focused on knots\". \nThis work is supported by the Grant-in-Aid for Scientific Research Fund of the Ministry of Education, Science and \nCulture of Japan No. 13135208 and No. 14540275. \n\\end{acknowledgments} \n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{Introduction} \n\nMulticomponent quantum gases constitute an ideal playground for investigating a plethora of many-body (MB) \nprocesses~\\cite{Bloch2008,Cazalilla2011} \nincluding in particular the generation of quasiparticles~\\cite{landau1933bewegung,pekar1946autolocalization} such as polarons. \nQuasiparticle formation can be studied owing to the unprecedented experimental tunability of the impurity-medium \ninteraction strength, via Feshbach \nresonances~\\cite{Ospelkaus2006,Zaccanti2006,chin2010feshbach}, while systems containing few particles \ncan be realized especially in one spatial dimension~\\cite{serwane2011deterministic,wenz2013few}. \nDepending on the quantum statistics of the host, these quasiparticles are known \nas Bose~\\cite{grusdt2015new,rath2013field} and Fermi~\\cite{schmidt2018universal,massignan2014polarons} polarons respectively. \nTheir existence and a variety of their properties have already been experimentally probed in both \nBose~\\cite{jorgensen2016observation,hu2016bose,yan2020bose} and \nFermi~\\cite{schirotzek2009observation,kohstall2012metastability,koschorreck2012attractive,cetina2016ultrafast,scazza2017repulsive} \ngases, e.g. via employing injection spectroscopy~\\cite{cetina2016ultrafast,jorgensen2016observation,hu2016bose}. \nThe progress regarding the understanding of the quasiparticles features has also been corroborated by an extensive \ntheoretical activity revealing different aspects of their underlying dressing mechanism such as their effective \nmass~\\cite{grusdt2017bose,ardila2015impurity}, lifetime~\\cite{kohstall2012metastability}, induced \ninteractions~\\cite{dehkharghani2018coalescence,mistakidis2020many,mistakidis2019repulsive}, and bound \nstates termed bipolarons~\\cite{camacho2018bipolarons,schmidt2018universal} or trimerons~\\cite{nishida2015polaronic,alhyder2020impurity,naidon2018two}. \n\nAccordingly, the interaction of the impurities with their surrounding leads to deformations of the latter in the vicinity of the former being manifested \nas impurity-medium bound states~\\cite{ardila2019strong} for strong attractions as well as sound wave emission~\\cite{marchukov2020shape} \nand phase-separation~\\cite{mistakidis2020many,mistakidis2019quench} for repulsive interactions. 
\nThese phenomena are a direct imprint of the inevitable entangled nature of these systems~\\cite{mistakidis2019quench} \nwhose non-equilibrium dynamics is far less appreciated~\\cite{skou2020non}. \nThe impurity dynamics holds the premise of unveiling even more complex \nprocesses that will shape our understanding on these settings and may be exploited in future technological applications. \nTo date remarkable demonstrations of the impurities' non-equilibrium dynamics include the spontaneous generation \nof nonlinear excitations~\\cite{grusdt2017bose,mistakidis2019correlated}, collision induced pattern \nformation~\\cite{mistakidis2019dissipative,burovski2014momentum,gamayun2018impact,knap2014quantum,tajima2019collisional}, their \nmediated correlations~\\cite{kwasniok2020correlated,mistakidis2020many,mistakidis2020induced} and relaxation \nprocesses~\\cite{mistakidis2020pump,lausch2018prethermalization,boyanovsky2019dynamics}, as well as their \ntransport properties in optical lattices~\\cite{johnson2011impurity,theel2020entanglement,keiler2020doping,keiler2019interaction}. \nIt is also important to emphasize that the above-mentioned investigations have predominantly considered a single impurity \nwhilst the effect of larger impurity concentrations leading to enhanced correlation-induced phenomena is until now largely unexplored. \nFor these latter settings, the interplay of the quantum statistics between the impurities and the host is of \nimportance especially for the induced impurity-impurity correlations. \n\nIn this context, a very promising candidate is a fermionic environment containing two bosonic impurities which can \ninteract via direct $s$-wave scattering. \nIndeed, most of the experimental and theoretical endeavors of Fermi polarons have been mainly focused on the \nlimiting case of a strongly spin imbalanced Fermi gas ~\\cite{Schmidt2012,Vlietinck2013,Mora2009,Trefzger2012,Massignan2013,Pilati2010,Schmidt2011,Sanner2012}, while the situation of bosonic impurities in a Fermi sea is arguably much less studied~\\cite{De2014,Fratini2012,cetina2016ultrafast,huber2019medium}. \nIn this setting it is very interesting to reveal the presence and nature of induced impurity-impurity interactions which are known to be suppressed for fermionic impurities~\\cite{mistakidis2019correlated,dehkharghani2018coalescence,mistakidis2019repulsive}. \nThe study of the competition between induced and direct $s$-wave interactions, with the latter being naturally absent for fermionic impurities, is \nan intriguing prospect. \nAn additional perspective is the possible emergence of impurity-impurity and impurity-medium bound states for strong attractions. \nCertainly, the identification of the above properties in the dynamical response of the system e.g. subjected to an \nimpurity-medium interaction quench~\\cite{volosniev2015real,mistakidis2019effective} as well as the characterization of the respective pattern formation especially of the host is desirable. \nIn order to address these questions we consider, as a paradigmatic setup, a one-dimensional harmonically trapped Bose-Fermi (BF) mixture consisting of two bosonic impurities immersed in a few-body fermionic environment. 
\nTo track the stationary properties and importantly the quantum dynamics of this impurity setting we resort to the multi-layer multi-configuration time-dependent Hartree method for atomic mixtures (ML-MCTDHX)~\\cite{cao2017unified,cao2013multi} being a variational approach that allows us to capture all the relevant correlations of the BF mixture. \n\nFor the ground state we find that the impurities and the fermionic environment \nphase-separate for strong impurity-medium repulsions~\\cite{mistakidis2019correlated,Viverit2000,lous2018probing}, while \nthey exhibit a localization \ntendency close to the trap center for large attractions. \nInterestingly, attractive induced impurity-impurity interactions~\\cite{huber2019medium} mediated by the fermionic host \nare revealed for the case of non-interacting bosons for increasing impurity-medium repulsion and attraction. \nHowever, for repulsively interacting impurities we unveil that the \ninduced interactions dominate the direct $s$-wave ones for increasing impurity-medium attractions. \n\nA quench from zero to finite repulsive impurity-medium interactions triggers a breathing motion~\\cite{boudjemaa2020breathing,kiehn2019spontaneous}, \nin each component, with an interaction dependent frequency and amplitude for the impurities. \nMoreover, a dynamical impurity-bath phase-separation takes place for quenches to strong repulsions. \nImportantly, induced impurity-impurity correlations mediated by the host are identified during the evolution of two \nnon-interacting impurities and become more pronounced for quenches to stronger repulsions. \nHowever, in the case of repulsively interacting impurities a competition of induced and direct interactions is evident \nwith the latter dominating the former and enforcing the impurities to reside in a two-body superposition. \n\nQuenching to attractive impurity-medium interactions gives rise to \na beating pattern~\\cite{mistakidis2020many} on the single-particle level which originates from the participation of two breathing \nfrequencies in the dynamics of the impurities due to the dominant presence of their attractive induced interactions. \nThe impurities show a spatial localization tendency around the trap center leading to a density accumulation of the Fermi sea at \ntheir instantaneous location. \nThe strength of the attractive induced interactions is larger compared to the reverse quench scenario and it is possible to overcome \nthe direct impurities coupling for large post-quench attractions~\\cite{mistakidis2020induced,mistakidis2020many}. \nIn all cases, we show that the degree of impurity-medium entanglement is appreciable, and exhibits a hierarchy. \nFor instance, it is larger for fixed impurity interaction and increasing \nquench amplitude. \n\n\nThis work is structured as follows. \nSection~\\ref{theory} introduces the setup under consideration [Sec.~\\ref{setup}], the employed many-body variational approach [Sec.~\\ref{method}] \nand the main observables [Sec.~\\ref{observables}] utilized for the characterization of the ground state and the dynamics of the BF mixture. \nIn section~\\ref{Ground state} we address the ground state properties of the BF mixture \nwith a particular focus on the impurity-impurity induced interactions [Sec.~\\ref{corel_ground}]. 
\nThe non-equilibrium dynamics upon considering a quench of the impurity-medium coupling to either repulsive [Sec.~\\ref{repulsive_quench}] \nor attractive [Sec.~\\ref{attractive quench}] interaction regimes is discussed in Sec.~\\ref{quench_dynamics}. \nThe emergent entanglement dynamics is presented in Sec.~\\ref{entanglemet_dynamics}. \nWe summarize our results and give an outlook in Sec.~\\ref{conclusion}. \nAppendix~\\ref{convergence} elaborates further on the details of our variational method and \ndelineates the convergence of the presented results exemplarily. \n\n\n\n\\section{Theoretical Background}\\label{theory} \n\n\\subsection{Setup and Hamiltonian}\\label{setup}\n\nWe consider a particle-imbalanced ultracold BF mixture containing $N_B=2$ bosonic impurities and $N_F=6$ spin-polarized fermions constituting the environment. \nThe mixture is assumed to be mass-balanced i.e. $M_B=M_F\\equiv M$ and both species are confined in the same one-dimensional (1D) harmonic trap namely $\\omega_B=\\omega_F\\equiv \\omega=0.1$. \nThis 1D geometry can be experimentally realized by imposing a strong transverse confinement ($\\omega_{\\perp}$) compared to the longitudinal ($\\omega_{\\parallel}$) \none obeying $\\omega=\\omega_{\\parallel}\/\\omega_{\\perp} \\ll 1$~\\cite{katsimiga2020observation,serwane2011deterministic,wenz2013few}. \nThe individual species of such an approximately mass-balanced BF mixture correspond, for instance, to bosonic and fermionic isotopes of the same element e.g. $^{7}$Li-$^{6}$Li~\\cite{Kempen2004, Delehaye2015} or $^{171}$Yb-$^{172}$Yb~\\cite{Honda2002}. \nThe underlying MB Hamiltonian of the above-described system reads \n\\begin{equation}\\label{1}\n\\begin{split}\n&H = \\sum_{\\sigma = F, B}^{}\\sum_{i = 1}^{N_\\sigma} \\bigg [ -\\frac{\\hbar^2}{2M}\\bigg (\\frac{\\partial}{\\partial x^{\\sigma}_i} \\bigg)^2 \n+ \\frac{1}{2}M\\omega^2(x^\\sigma_i)^2 \\bigg ] \\\\ & + g_{BB}\\sum_{ i < j }^{} \\delta(x^{B}_i - x^{B}_j) + g_{BF}\\sum_{ i = 1 }^{N_F} \n\\sum_{j = 1}^{N_B}\\delta(x^{F}_i - x^{B}_j).\n\\end{split}\n\\end{equation}\nOperating in the ultracold regime, $s$-wave scattering constitutes the dominant two-body interaction process and hence interparticle interactions can be modeled by a short-range contact potential~\\cite{olshanii1998atomic}. \nNote that for the spin-polarized fermions $s$-wave scattering is forbidden due to the Pauli exclusion principle~\\cite{pethick2008bose,pitaevskii2003international} and therefore \ntheir intraspecies interactions vanish. \nAccordingly, the boson-boson and boson-fermion (alias impurity-medium) 1D effective coupling constants~\\cite{olshanii1998atomic} are $g_{BB} = 4 \\hbar^2a_{BB}\/(M a^2_{\\perp}) [1 - C a_{BB}\/a^2_{\\perp,B}]^{-1}$ and $g_{BF} = 4 \\hbar^2a_{BF}\/(M a^2_{\\perp}) [1 - Ca_{BF}\/a^2_{\\perp,B}]^{-1}$ respectively. \nHere, $a_{BB}$ ($a_{BF}$) is the three-dimensional boson-boson (boson-fermion) $s$-wave scattering length and $C \\approx 1.4603$. \nThe parameter $a_\\perp = \\sqrt{\\hbar\/M\\omega_\\perp}$ denotes the transversal confinement length scale, with $\\omega_{\\perp}$ being the transversal trapping frequency. \nImportantly, the boson-boson and boson-fermion interaction strengths $g_{BB}$ and $g_{BF}$ can be experimentally tuned \neither by means of ${a^s_{BB}}$, ${a^s_{BF}}$ using Feshbach resonances~\\cite{kohler2006production,chin2010feshbach} or \nvia adjusting ${{\\omega_\\bot}}$ by employing confinement-induced resonances~\\cite{olshanii1998atomic}. 
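A minimal numerical sketch of these couplings, keeping the prefactor exactly as quoted above and, as an assumption of the sketch, reading the transverse correction as the dimensionless combination $C a\/a_{\\perp}$, is\n\\begin{verbatim}\nimport numpy as np\n\nhbar = 1.0                                   # working in units with hbar = M = 1\n\ndef g_1d(a_3d, omega_perp, M=1.0, C=1.4603):\n    # Effective 1D coupling in the form quoted in the text; the correction\n    # term is taken as C*a_3d\/a_perp, an assumption of this sketch.\n    a_perp = np.sqrt(hbar \/ (M*omega_perp))\n    return (4.0*hbar**2*a_3d \/ (M*a_perp**2)) \/ (1.0 - C*a_3d \/ a_perp)\n\n# Confinement-induced resonance: g_1d diverges as a_3d approaches a_perp\/C.\nprint([g_1d(a, omega_perp=1.0) for a in np.linspace(0.05, 0.6, 5)])\n\\end{verbatim}\nwith the divergence at $a = a_{\\perp}\/C$ being the confinement-induced resonance invoked above.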
\n\nBelow, we rescale the MB Hamiltonian of Eq.~(\\ref{1}) in terms of $\\hbar \\omega_\\perp$. \nAs a consequence, the length, time and interaction strengths are expressed in units of $\\sqrt{\\hbar/(M\\omega_\\perp)}\\equiv a_{\\perp}$, $\\omega^{-1}_\\perp$ and $\\sqrt{\\hbar^3 \\omega_\\perp/M}$, respectively. \nIt is also worth mentioning that a BF mixture with $N_B \\ll N_F$, as the one considered herein, features suppressed three-body recombination particle losses, since their rate is known to be proportional to $N^2_B N_F$~\\cite{Helfrich2010}. \n\nIn the following, we characterize the ground-state properties of the highly particle-imbalanced BF mixture, particularly focusing on the emergent correlation patterns and unveiling, for instance, phase-separation processes as well as identifying impurity-impurity induced interactions for varying boson-boson and impurity-medium interaction strengths, see Sec.~\\ref{Ground state}. \nRecall that in the absence of an external confinement the two species are miscible, in the sense that they spatially overlap, when $g^2_{BF} < g_{BB}$, and otherwise they phase-separate~\\cite{mistakidis2018correlation,Viverit2000,Roth2002,lous2018probing,mistakidis2019correlated}. \nIn the presence of an external trap and also away from the thermodynamic limit the above-mentioned relation is modified, i.e. $g_{BF}$ should become substantially larger than $g_{BB}$ in order to achieve phase separation. \nSubsequently, we trigger the non-equilibrium dynamics of the BF mixture by applying a quench of the impurity-medium interaction strength ($g_{BF}$) from zero to either repulsive [Sec.~\\ref{repulsive_quench}] or attractive [Sec.~\\ref{attractive quench}] couplings. \nImportantly, within these latter post-quench interaction regimes impurity-impurity correlations are finite whilst they vanish for the initial state. \nThus, the system is driven towards regions of finite impurity-impurity interactions aiming at exploring their dynamical fate, the consequent pattern formation and the associated build-up of correlations. \n\n\n\n\\subsection{Variational wavefunction ansatz and quantum dynamical approach} \\label{method} \n\nTo investigate the ground-state and most importantly the quench dynamics of the particle-imbalanced BF mixture we solve the underlying MB Schr{\\\"o}dinger equation using the variational ML-MCTDHX approach~\\cite{cao2017unified,cao2013multi}. \nIt is based on expanding the MB wavefunction in terms of a time-dependent and variationally optimized basis. \nThis asset enables us to capture both the inter- and the intraspecies correlations of the binary system in a computationally efficient manner compared to methods relying on a time-independent basis set. \n\nThe MB wavefunction, $\\Psi_{\\rm MB}$, is initially expressed in the form of a truncated Schmidt decomposition of rank $D$~\\cite{Horodecki2009}. \nNamely \n\\begin{equation} \\label{4}\n\\Psi_{\\rm MB}(\\vec{x}^B, \\vec{x}^F;t) = \\sum_{k = 1}^{D} \\sqrt{\\lambda_k(t)}\\Psi^B_k(\\vec{x}^B;t)\\Psi^F_k(\\vec{x}^F;t). \n\\end{equation}\nThe values of the Schmidt coefficients, $\\lambda_k(t)$, characterize the degree of entanglement of the binary system. \nArranged in decreasing order, they are also known as the natural species populations of the $k$-th species function. Evidently, the system is entangled~\\cite{roncaglia2014,Horodecki2009,mistakidis2018correlation} in the case that more than a single coefficient $\\lambda_k(t)$ exhibits a non-zero population. 
\nThen, the many-body state [Eq.~(\\ref{4})] is a superposition of the respective species states instead of being a direct product of only two states (non-entangled case). \n\nAs a next step, each of the above-mentioned species functions is expanded in terms of the determinants and permanents of $d_\\sigma$ distinct time-dependent fermionic and bosonic single particle functions (SPFs) respectively. \nTherefore, each $\\Psi^{\\sigma}_k(\\vec{x}^{\\sigma};t)$ reads \n\\begin{equation}\\label{5}\n\\begin{split}\n&\\Psi_k^{\\sigma}(\\vec x^{\\sigma};t) = \\sum_{\\substack{l_1,\\dots,l_{d_{\\sigma}} \\\\ \\sum l_i=N_{\\sigma}}} C_{k,(l_1,\n\\dots,l_{d_{\\sigma}})}(t)\\sum_{i=1}^{N_{\\sigma}!} \\big ( {\\rm sign} (\\mathcal{P}_i) \\big ) ^{\\zeta} \\\\ & \\times \\mathcal{P}_i\n \\left[ \\prod_{j=1}^{l_1} \\varphi_1^{\\sigma}(x_j;t) \\cdots \\prod_{j=1}^{l_{d_{\\sigma}}} \\varphi_{d_{\\sigma}}^{\\sigma}(x_{K(d_{\\sigma})+j};t) \\right]. \n \\end{split}\n\\end{equation} \nIn this expression, $C_{k,(l_1,\\dots,l_{d_{\\sigma}})}(t)$ denote the time-dependent expansion coefficients of a particular determinant for fermions or permanent for bosons, while $l_i$ is the occupation number of the SPF $\\varphi_i(x;t)$. \nThe index $\\zeta = 0, 1 $ for bosons and fermions respectively and $\\mathcal{P}$ is the permutation operator exchanging the particle positions $x_{\\nu}^{\\sigma}$, $\\nu=1,\\dots,N_{\\sigma}$, within the SPFs. \nAlso, $\\rm sign(\\mathcal{P}_i)$ is the sign of the corresponding permutation and $K(r)\\equiv l_1+l_2+\\dots+l_{r-1}$,\nwhere $l_{r}$ is the occupation of the $r$-th SPF and $r\\in\\{1,2,\\dots,d_{\\sigma}\\}$. \nWe remark that the bosonic subsystem is termed intraspecies correlated if more than one SPF is occupied, otherwise it is fully coherent~\\cite{lode2020colloquium}, see also the discussion below. \nOn the other hand, the fermionic species exhibits non-trivial correlations beyond the Hartree-Fock level when more than $N_F$ eigenvalues possess a macroscopic population~\\cite{mistakidis2019correlated,erdmann2019phase}. \n\nThe time-evolution of the $(N_B+N_F)$-body wavefunction obeying the MB Hamiltonian of Eq.~(\\ref{1}) is determined by calculating the corresponding ML-MCTDHX equations of motion~\\cite{cao2013multi}. \nThe latter are found by employing, e.g., the Dirac-Frenkel variational principle~\\cite{Dirac193,Frenkel1934} for the MB ansatz provided by Eqs.~\\eqref{4} and \\eqref{5}. \nAs a result we obtain a set of $D^2$ linear differential equations of motion for the coefficients $\\lambda_k(t)$ being coupled to $D\\left[\\binom{N_B+d_B-1}{d_B-1}+\\binom{d_F}{N_F}\\right]$ non-linear integro-differential equations for the species functions and $d_F+d_B$ integro-differential equations for the SPFs. \nFinally, let us mention in passing that the variational ML-MCTDHX ansatz can be easily reduced to different levels of approximation. \nAs a case example, the corresponding mean-field wavefunction ansatz of the BF mixture corresponds to the case of $D = d_B = 1$ and $d_F = N_F$, while the respective mean-field equations of motion are retrieved by following the same variational principle, see e.g. Refs.~\\cite{lode2020colloquium,kohler2019dynamical} for details. \n\n\n\\subsection{Observables and analysis}\\label{observables}\n\nIn the following, we briefly introduce the basic observables that will be employed in the remainder of our work in order to characterize both the stationary properties and the non-equilibrium dynamics of the BF mixture. \nParticular emphasis is placed on the impurity subsystem. 
\nTo visualize the spatial distribution of the $\\sigma=B,F$ species, i.e. the impurities and the medium respectively, on the single-particle level we invoke the corresponding one-body reduced density matrix \n\\begin{equation}\\label{1BD}\n\\rho^{(1)}_{\\sigma}(x, x';t)=\\langle \\Psi_{\\rm MB}(t)| \\hat{\\Psi}_\\sigma^{\\dagger}(x) \\hat{\\Psi}_{\\sigma}(x') | \\Psi_{\\rm MB}(t) \\rangle.\n\\end{equation}\nHere, $\\hat{\\Psi}_{B}(x)$ [$\\hat{\\Psi}_{F}(x)$] is the so-called bosonic [fermionic] field operator acting on position $x$ and satisfying the standard commutation [anti-commutation] relations~\\cite{pethick2008bose,pitaevskii2003international}. \nThe diagonal of $\\rho^{(1)}_{\\sigma}(x, x';t)$ is the well-known one-body density of the $\\sigma$-species, i.e. $\\rho^{(1)}_{\\sigma}(x;t) = \\rho^{(1)}_{\\sigma}(x, x' = x;t)$~\\cite{lode2020colloquium}. \nThe latter is accessible in ultracold atom experiments using the single-shot absorption imaging technique~\\cite{sakmann2016single,Bloch2008} and especially for few atoms can be retrieved by averaging over a sample of single-shots~\\cite{mistakidis2018correlation,Klein2005,mistakidis2019dissipative}. \nWe remark that the eigenfunctions of the $\\sigma$-species one-body reduced density matrix are known as the $\\sigma$-species natural orbitals, namely $\\phi^{\\sigma}_i$. \nIn this sense, when more than $N_F$ (one) fermionic (bosonic) natural orbitals are significantly populated the corresponding subsystem is called fragmented or intraspecies correlated~\\cite{mistakidis2018correlation,lode2020colloquium}. \nAccordingly, the underlying degree of fragmentation can be quantified via measuring $F_F(t) = 1 -\\sum_{i = 1}^{N_F}n^F_i(t)$ and $F_B(t) = 1 - n^B_1(t)$ for the fermionic and the bosonic subsystems respectively. \nHere, the natural populations are normalized to unity, i.e. $\\sum_{i=1}^{d_F}n^{F}_i(t) = 1$ and $\\sum_{i=1}^{d_B}n^{B}_i(t)=1$. \nRecall that in the MF limit of the BF mixture~\\cite{pitaevskii2003international,mistakidis2019correlated,karpiuk2004soliton} where $\\Psi_{\\rm MB}(\\vec{x}^F, \\vec{x}^B;t) \\rightarrow \\Psi_{MF}(\\vec{x}^{F}, \\vec{x}^B ; t) $ the natural populations of the fermionic and the bosonic species satisfy the constraints $\\sum_{i=1}^{N_F}n^{F}_i(t) = 1$, $n^F_{i > N_F}(t) = 0$, and $n^{B}_1(t) = 1$, $n^{B}_{i >1}(t) = 0$. \n\nThe emergence of impurity-medium entanglement can be identified by calculating the Schmidt coefficients, $\\lambda_k(t)$, participating in the MB wavefunction ansatz as described by Eq. (\\ref{4}). \nIndeed, in the case that more than one coefficient is populated, i.e. $\\lambda_{k>1}(t)\\neq 0$, the MB wavefunction is not a single product state and the system is entangled~\\cite{Horodecki2009,mistakidis2018correlation}. \nThe Schmidt coefficients are essentially the eigenvalues of the species reduced density matrix namely $\\rho^{N_{\\sigma}} (\\vec{x}^{\\sigma}, \\vec{x}'^{\\sigma};t)=\\int d^{N_{\\sigma'}} x^{\\sigma'} \\Psi^*_{\\rm MB}(\\vec{x}^{\\sigma}, \\vec{x}^{\\sigma'};t) \\Psi_{\\rm MB}(\\vec{x}'^{\\sigma},\\vec{x}^{\\sigma'};t)$, with $\\vec{x}^{\\sigma}=(x^{\\sigma}_1, \\cdots,x^{\\sigma}_{N_{\\sigma}})$, and $\\sigma\\neq \\sigma'$. 
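\n\nFor the interested reader, a minimal numerical sketch of how the above fragmentation measures can be obtained from the natural populations is provided below; the arrays are synthetic placeholders standing in for the actual ML-MCTDHX output. \n\\begin{verbatim}\nimport numpy as np\n\ndef fragmentation(natural_populations, N):\n    # F = 1 - (sum of the N largest natural populations),\n    # with N = N_F for the fermions and N = 1 for the bosons\n    n = np.sort(np.asarray(natural_populations))[::-1]\n    return 1.0 - n[:N].sum()\n\n# synthetic natural populations, normalized to unity\nn_F = [0.162, 0.161, 0.160, 0.158, 0.155, 0.150, 0.030, 0.024]\nn_B = [0.93, 0.05, 0.02]\n\nprint(fragmentation(n_F, N=6))   # F_F > 0: correlations beyond Hartree-Fock\nprint(fragmentation(n_B, N=1))   # F_B > 0: bosonic intraspecies correlations\n\\end{verbatim}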
\nTo determine the degree of the impurity-medium entanglement we employ the von Neumann entropy~\\cite{Catani2009, Horodecki2009} given by \n\\begin{equation}\\label{VN}\nS_{VN}(t) = - \\sum_{k = 1}^{D} \\lambda_k(t) \\ln[\\lambda_k(t)].\n\\end{equation}\nIt becomes apparent that $S_{VN}(t) > 0$ only when $\\lambda_{k>1}(t)\\neq 0$, meaning that entanglement is present. \nFor instance, in the mean-field limit where $\\lambda_1 (t) = 1$, $\\lambda_{k>1}(t)= 0$ and entanglement is absent it holds that $S_{VN}(t)=0$. \n\nTo infer the role of impurity-impurity and fermion-fermion two-body correlation processes in the ground state as well as in the dynamics of the BF mixture in a spatially resolved manner we resort to the diagonal of the two-body reduced density matrix~\\cite{mistakidis2018correlation,lode2020colloquium,sakmann2008reduced}\n\\begin{equation}\\label{2BD}\n\\begin{split}\n \\rho^{(2)}_{\\sigma \\sigma}(x, x';t) = & \\langle \\Psi_{\\rm MB}(t)| \\hat{\\Psi}^{\\dagger}_{\\sigma}(x')\n \\hat{\\Psi}^{\\dagger}_{\\sigma}(x) \\hat{\\Psi}_{\\sigma}(x') \\\\ &\n \\times \\hat{\\Psi}_\\sigma(x) | \\Psi_{\\rm MB}(t) \\rangle.\n \\end{split}\n\\end{equation} \nThis measure refers to the probability of simultaneously detecting one bosonic ($\\sigma=B$) [fermionic ($\\sigma=F$)] particle located at $x$ and another one at $x'$. \nIn that light, it reveals the occurrence of impurity-impurity (fermion-fermion) two-body correlations and thus provides insights into how the two bosons (fermions) behave with respect to one another~\\cite{mistakidis2019correlated,erdmann2019phase,mistakidis2020many,mistakidis2020induced}. \n\nTo estimate the strength of the effective interactions between the two bosonic impurities we utilize their relative distance~\\cite{mistakidis2019correlated,mistakidis2020many,mistakidis2019repulsive} defined as\n\\begin{equation} \\label{7}\n\\mathcal{D}_{\\rm rel}(t) = \\frac{\\int dx_1 dx_2 \\abs{x_1 -x_2} \\rho^{(2)}_{BB} (x_1, x_2; t)}{\\langle \\Psi_{\\rm MB}(t) | \\hat{N}_B(\\hat{N}_B -1) | \n\\Psi_{\\rm MB}(t) \\rangle}.\n\\end{equation}\nHere, $\\hat{N}_B$ is the bosonic number operator and $\\rho^{(2)}_{BB} (x_1, x_2; t)$ denotes the two-body density matrix [Eq.~(\\ref{2BD})] of the bosonic impurities subsystem. \nThe relative distance can be experimentally accessed using {\\it in-situ} spin-resolved single-shot measurements~\\cite{bergschneider2018spin}, where in particular the actual shape of $\\mathcal{D}_{\\rm rel}(t)$ can be retrieved by averaging over a sample of the individually obtained images. \n\n\n\n\\section{Ground state properties of two bosonic impurities in a fermionic environment}\\label{Ground state} \n\nWe consider $N_B = 2$ bosonic impurities in a fermionic finite-sized medium composed of $N_F=6$ spin-polarized fermions. \nRecall that a one-dimensional Fermi sea with $N_F>5$ atoms approaches the behavior of a many-body fermionic environment, see for instance Ref.~\\cite{wenz2013few} for a corresponding experimental verification. \nIn our setting we have checked that our results, to be presented below, regarding both the ground state and the dynamics remain qualitatively the same also for $N_F=8$ (not shown here for brevity). \nThe system is mass-balanced and both species are trapped in the same harmonic oscillator of frequency $\\omega = 0.1$, unless it is stated otherwise. 
\nBelow, we examine the ground state characteristics of the composite system with a particular focus on the impurity properties for attractive and repulsive impurity-medium interactions. \nIn order to discriminate between direct and effective impurity-impurity interaction effects we analyze both the cases of non-interacting and interacting impurities. \nThe impact of the impurities' mass on their induced interactions mediated by the fermionic environment is also discussed. \nAnother objective of our analysis is to unveil the spatial distributions of each species, discuss possibly emerging phases of the BF mixture as well as their associated correlation properties for varying impurity-medium interactions. \nTo obtain the ground state of the BF mixture governed by Eq.~\\eqref{1} we employ either the imaginary-time propagation or the improved relaxation method within ML-MCTDHX~\\cite{cao2017unified}. \n\n\n\\subsection{Single-particle density distribution}\\label{ground_state_density} \n\nLet us first inspect the spatial configuration of the ground state of the bosonic impurities and the fermionic sea for varying impurity-medium interaction strength $g_{BF}$. \nTo this end, we employ the corresponding single-particle densities $\\rho^{(1)}_{F}(x)$ and $\\rho^{(1)}_{B}(x)$ with respect to $g_{BF}$ [Fig.~\\ref{spd_g}] for both the case of two non-interacting ($g_{BB}=0$) [Figs.~\\ref{spd_g}($a_1$), ($b_1$)] and of two repulsively interacting ($g_{BB}=1$) [Figs.~\\ref{spd_g}($a_2$), ($b_2$)] bosonic impurities. \nOverall, we observe that the behavior of both the impurities and the medium depends strongly on the value of $g_{BF}$. \nAlso $\\rho^{(1)}_{F}(x)$ exhibits six shallow local density maxima [Figs.~\\ref{spd_g}($c$)-($h$)] almost irrespective of $g_{BF}$, which indicates the presence of six fermions~\\cite{kwasniok2020correlated}, see also the remark in \\footnote{Note that for increasing attraction, i.e. $g_{BF}<-2.5$, a major portion of $\\rho^{(1)}_{F}(x)$ resides around $x=0$ and its two central local maxima come very close to each other and eventually merge for very strong attractions.}. \nInterestingly, the shape of $\\rho^{(1)}_{F}(x)$ for fixed $g_{BF}$ remains almost unchanged between the $g_{BB}=0$ and the $g_{BB}=1$ cases, see Figs.~\\ref{spd_g}($a_1$) and ($a_2$) as well as Figs.~\\ref{spd_g}($c$)-($h$). \nOn the other hand, $\\rho^{(1)}_{B}(x)$ at a certain value of $g_{BF}$ is affected by the direct impurity-impurity interactions since for $g_{BB}=1$ it becomes slightly broader than for $g_{BB}=0$, especially when $-1.5<g_{BF}<1.5$. \nFor $g_{BF}>1.5$ the spatial configuration of the system and especially of the Fermi sea is significantly changed compared to smaller values of $g_{BF}$. \nIndeed, a local density dip builds upon $\\rho^{(1)}_{F}(x)$ around $x\\approx 0$ [Fig.~\\ref{spd_g}($g$)] which becomes more pronounced for increasing $g_{BF}$, and for $g_{BF}>2$ $\\rho^{(1)}_{F}(x)$ is segregated into two fragments residing on the left and right side with respect to $x=0$ [Figs. \\ref{spd_g} ($a_1$), ($h$)]. \nNote that each of the fragments exhibits three local density maxima, indicating that predominantly three fermions populate each of them, and also reflecting the fact that the six lowest-lying single-particle eigenstates of the harmonic trap contribute the most to the fermionic MB wavefunction. 
\nThe impurities density $\\rho^{(1)}_{B}(x)$ lies in between the two fragments of $\\rho^{(1)}_{F}(x)$ and therefore an impurity-medium phase-separation process takes place~\\cite{mistakidis2018correlation}, see e.g. Figs. \\ref{spd_g} ($a_1$), ($a_2$) and ($h$). \nThis procedure is identified by the small spatial overlap among the components~\\cite{mistakidis2018correlation} which becomes suppressed for increasing $g_{BF}$. \nWe remark that the phase-separation region is shifted to larger $g_{BF}$ values when $g_{BB}$ is finite, compare in particular Figs. \\ref{spd_g} ($a_1$) and ($a_2$). \nIndeed, phase-separation occurs when the interspecies interaction energy overcomes the intraspecies one and thus a larger $g_{BF}$ is required for \nincreasing $g_{BB}$~\\cite{mistakidis2018correlation} in order to accomplish this process. \nIt is also worth mentioning at this point that a system of two fermionic impurities immersed in a bosonic bath exhibits a similar phase-separation behavior at repulsive $g_{BF}$ but in this case the impurities reside at the edges of \nthe bosonic medium~\\cite{mistakidis2019correlated}. \n\nThe above-described phase-separation process as well as the localization tendency of the components taking place at large repulsive and attractive impurity-medium interactions respectively can be intuitively understood in terms of an effective potential approach~\\cite{mistakidis2020many,mistakidis2019quench,kiehn2019spontaneous}. \nFor this picture one can consider an effective potential for the impurities [Fermi sea] constructed by superimposing the single-particle density of the Fermi sea [impurities] to the external harmonic trap, namely $V_{\\rm eff}^{B}(x) = \\frac{1}{2}m \\omega^2 x^2 + g_{BF}\\rho^{(1)}_{F}(x)$ [$V_{\\rm eff}^{F}(x) = \\frac{1}{2}m \\omega^2 x^2 + g_{BF}\\rho^{(1)}_{B}(x)$]. \nReferring to the impurities subsystem at strong repulsive $g_{BF}$ their effective potential $V_{\\rm eff}^{B}(x)$ corresponds to a deformed harmonic trap due to $\\rho^{(1)}_{F}(x)$, see e.g. Fig. \\ref{spd_g} ($h$). \nIn this sense the impurities reside around the trap center possessing a Gaussian-like spatial distribution [Fig.~\\ref{spd_g}($h$)]. \nOn the other hand, for $g_{BF}>0$ the corresponding $V_{\\rm eff}^{F}(x)$ has a double-well like structure where the role of the potential barrier at $x=0$ is played by $\\rho^{(1)}_{B}(x)$. \nIn turn, this $V_{\\rm eff}^{F}(x)$ enforces the splitting of $\\rho^{(1)}_{F}(x)$ into two fragments, see e.g. Fig.~\\ref{spd_g}($h$). \nNote also here that for $g_{BB}=1$ the maximum of $\\rho^{(1)}_{B}(x)$ is smaller compared to the $g_{BB}=0$ case [Fig.~\\ref{spd_g}($g$)]. \nThis gives rise to a shallower double-well effective potential for fixed $g_{BF}$ and thus the barrier height that allows for phase-separation is achieved for larger values of $g_{BF}$ when $g_{BB}$ is finite. \nA similar argumentation can also be applied for attractive $g_{BF}$ where, for instance, the aforementioned localization tendency of $\\rho^{(1)}_{B}(x)$ is essentially determined by the hump structure building upon $\\rho^{(1)}_{F}(x)$ [Fig. \\ref{spd_g} ($c$)] and vice versa due to back-action. \nFor more details on the range of applicability of this effective potential picture we refer the interested \nreader to Refs.~\\cite{mistakidis2020many,mistakidis2019dissipative,mistakidis2019quench,kiehn2019spontaneous}. 
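\n\nTo render the effective potential picture more concrete, the following schematic Python snippet constructs $V_{\\rm eff}^{B}(x)$ using, as a crude stand-in for $\\rho^{(1)}_{F}(x)$, the density of six noninteracting fermions in the bare trap; in the actual analysis the ML-MCTDHX one-body densities enter this construction instead. \n\\begin{verbatim}\nimport numpy as np\nfrom numpy.polynomial.hermite import hermval\nfrom math import factorial\n\nhbar, M, omega, g_BF = 1.0, 1.0, 0.1, 2.5\nx = np.linspace(-15.0, 15.0, 1201)\n\ndef ho_orbital(n, x):\n    # n-th harmonic-oscillator eigenfunction for mass M and frequency omega\n    xi = np.sqrt(M * omega / hbar) * x\n    norm = (M * omega / (np.pi * hbar))**0.25 / np.sqrt(2.0**n * factorial(n))\n    return norm * hermval(xi, [0.0] * n + [1.0]) * np.exp(-xi**2 / 2.0)\n\n# stand-in Fermi-sea density (N_F = 6 noninteracting fermions)\nrho_F = sum(abs(ho_orbital(n, x))**2 for n in range(6))\n\nV_trap = 0.5 * M * omega**2 * x**2\nV_eff_B = V_trap + g_BF * rho_F   # effective potential felt by the impurities\n# analogously, V_eff_F = V_trap + g_BF * rho_B for the fermionic medium\n\\end{verbatim}\nFeeding the actual (correlated) densities into this construction reproduces the deformed trap experienced by the impurities and the double-well-like potential experienced by the fermions that were invoked above.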
\n\n\n\n\\subsection{Impurity-impurity induced interactions}\\label{corel_ground}\n\nThe impurities being immersed in the Fermi sea are dressed by its excitations forming quasiparticles, herein Fermi \npolarons~\\cite{schmidt2018universal,massignan2014polarons,mistakidis2019repulsive}. \nAn intriguing property of the generated quasiparticles is the emergence of attractive induced interactions among them mediated by their host~\\cite{dehkharghani2018coalescence,mistakidis2020many,mistakidis2019repulsive} and that they can possibly form a bound pair for strong impurity-medium attractions~\\cite{schmidt2018universal,massignan2014polarons}. \nTo identify such quasiparticle related mechanisms in the ground state of the BF mixture we subsequently inspect the relative \ndistance $\\mathcal{D}_{\\rm rel}$ [Eq.~(\\ref{7})] and the spatially resolved two-body reduced density matrix $\\rho^{(2)}_{BB}(x_{1}, x_2)$ of \nthe bosonic impurities~\\cite{mistakidis2020induced} for different impurity-medium interaction strengths, see \nFigs.~\\ref{Relative_dis}, \\ref{2bd_g0} and \\ref{2bd_g1}. \nRecall that $\\rho^{(2)}_{BB}(x_1,x_2)$ measures the probability of finding a boson at position $x_1$ while the second one is located at $x_2$. \nImportantly, the combination of the behavior of $\\mathcal{D}_{\\rm rel}$ and $\\rho^{(2)}_{BB}(x_{1}, x_2)$ enables us to infer the presence \nand strength of the attractive induced interactions as well as the spatial configuration of the impurities~\\cite{mistakidis2020many}. \nBelow, we discuss the cases of both non-interacting ($g_{BB}=0$) and repulsively interacting ($g_{BB}=1.0$) impurities as well as the \neffect of a mass-imbalance between the impurities and their bath. \n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{Fig2.eps}\n\\caption{Relative distance $\\mathcal{D_{\\rm rel}}$ between the two bosonic impurities in the ground state of the BF mixture with \nrespect to the impurity-medium interaction strength $g_{BF}$. The relative distance is presented for the cases of two \nnon-interacting ($g_{BB} = 0$), and interacting ($g_{BB} = 1$) impurities in a mass-balanced as well as a mass-imbalanced system (see legend). \nThe medium consists of $N_F = 6$ fermions, while the BF mixture is confined in a harmonic trap with frequency $\\omega = 0.1$.} \n\\label{Relative_dis}\n\\end{figure}\n\nThe corresponding relative distance $\\mathcal{D}_{\\rm rel}$ between the impurities and their two-body density $\\rho^{(2)}_{BB}(x_{1}, x_2)$ \nfor a mass-balanced BF system containing non-interacting impurities are presented in Fig.~\\ref{Relative_dis} and \nFigs.~\\ref{2bd_g0}($a_1$)-($a_5$) respectively as a function of the impurity-medium interaction strength $g_{BF}$ ranging \nfrom attractive to repulsive values. \nIt becomes apparent that $\\mathcal{D}_{\\rm rel}$ gradually decreases, when compared to its value for $g_{BF}=0$, as $\\abs{g_{BF}}$ is \nincreased towards the attractive or the repulsive interaction regime. This overall decreasing behavior of $\\mathcal{D}_{\\rm rel}$ for \nlarger $\\abs{g_{BF}}$ indicates that for finite attractive or repulsive impurity-medium interactions the impurities move close to each other as compared to the $g_{BF}=0$ scenario. \nThe latter tendency suggests the emergence of attractive impurity-impurity induced interactions~\\cite{huber2019medium,mistakidis2020many}. 
\nInterestingly, $\\mathcal{D}_{\\rm rel}$ tends to approach a constant value which is different for strong repulsions ($g_{BF}>3$) and \nattractions ($g_{BF}<-3$). \nIndeed, the saturation value of $\\mathcal{D}_{\\rm rel}$ for strong repulsions is somewhat larger when compared to the corresponding \nvalue for strong attractions. \nThis means that for attractive $g_{BF}$ the impurities are substantially closer with respect to one another than in the repulsive case. \nConcluding, $\\mathcal{D}_{\\rm rel}$ signals the presence of induced impurity-impurity interactions, which are manifested to be attractive \nin general, irrespectively of the sign of the impurity-medium coupling~\\cite{huber2019medium,mistakidis2020many}. \n\n\nTo confirm the existence of attractive impurity-impurity induced interactions when $g_{BB}=0$ we next rely on the impurities two \nparticle density $\\rho^{(2)}_{BB}(x_1, x_2)$, see Figs.~\\ref{2bd_g0} ($a_1$)-($a_5$). \nThis quantity allows us to explicitly identify the spatial distribution of impurities. \nAs it can be seen, irrespectively of the value of $g_{BF}$ the two non-interacting bosons prefer to reside together close to the \ntrap center since $\\rho^{(2)}_{BB}(x_1, x_2)$ shows a maximum value in the vicinity of $x_1=0$, $x_2=0$, see Figs.~\\ref{2bd_g0}($a_1$)-($a_5$). \nIn particular, for $g_{BF} = 0$ [Fig.~\\ref{2bd_g0}($a_3$)] $\\rho^{(2)}_{BB}(x_1, x_2)$ has a circularly symmetric shape in the ($x_1,x_2$)-plane while showing a peak around $x_1=x_2=0$. \nThis can be understood by the fact that in the absence of any correlation with the majority species there is no induced interaction among the bosons. \nHence, the probability of finding the two bosons together at $x_1=x_2$ or one at $x_1$ and the other at $x_2=-x_1$ is the same and becomes maximal at the trap minimum i.e. at $x_1=x_2=0$. \nHowever, for a finite $g_{BF}$ the shape of $\\rho^{(2)}_{BB}(x_1, x_2)$ is significantly altered when compared to the $g_{BF}=0$ case \nsince predominantly the diagonal $\\rho^{(2)}_{BB}(x_1,x_2=x_1)$ is populated. \nIn fact, $\\rho^{(2)}_{BB}(x_1, x_2)$ becomes more elongated along the diagonal ($x_1 = x_2$) with increasing $\\abs{g_{BF}}$, while it \nshrinks across its anti-diagonal ($x_2 = -x_1$), see Figs.~\\ref{2bd_g0}($a_4$)-($a_5$) and Figs.~\\ref{2bd_g0}($a_1$)-($a_2$). \nThis means that the probability of detecting the two bosons at two different positions is substantially smaller than that of being \nclose together. \nTherefore, an effective attractive interaction between the impurities is established and occurs for both attractive and repulsive \nimpurity-medium interactions. \nImportantly, within the attractive $g_{BF}$ regime the shrinking of the anti-diagonal of $\\rho^{(2)}_{BB}(x_1, x_2)$ is much more \npronounced than on the repulsive $g_{BF}$ side, compare in particular Figs.~\\ref{2bd_g0}($a_1$)-($a_2$) with Figs.~\\ref{2bd_g0}($a_4$)-($a_5$). \nThe latter observation supports our previous statement in terms of $\\mathcal{D}_{\\rm rel}$ of a much stronger effective attraction \nbetween the impurities for negative than positive $g_{BF}$, a result that also holds for Bose polarons as it has been \ndemonstrated in Ref.~\\cite{mistakidis2020many}. \nMoreover, the pronounced elongation along the diagonal accompanied by the strong suppression of the anti-diagonal of $\\rho^{(2)}_{BB}(x_1, x_2)$ e.g. 
for $g_{BF}=-3$ is indicative of a bound state being formed between the impurities known as a bipolaron state~\\cite{mistakidis2020many,camacho2018bipolarons,schmidt2018universal}. \n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.47\\textwidth]{Fig3.eps}\n\\caption{Reduced two-body ($a_1$)-($a_5$) boson-boson $\\rho^{(2)}_{BB}(x_1, x_2)$, and ($b_1$)-($b_5$) fermion-fermion $\\rho^{(2)}_{FF}(x_1, x_2)$ \ndensity in the ground state of the BF mixture for selective impurity-medium interaction strengths (see legend). \nThe system contains $N_B =2$ non-interacting ($g_{BB}=0$) bosonic impurities and $N_F = 6$ fermions. \nIt is further confined in a harmonic trap of $\\omega = 0.1$. } \n\\label{2bd_g0}\n\\end{figure}\n\nNext, we turn our attention to repulsively interacting bosonic impurities with $g_{BB}=1$ aiming to investigate the competition between attractive induced interactions and direct $s$-wave ones. \nTo this end, we measure the impurities relative distance [Fig.~\\ref{Relative_dis}] and their two-particle density [Fig.~\\ref{2bd_g1}($a_1$)-($a_5$)] for distinct values of $g_{BF}$. \nAs expected, in the absence of direct impurity-impurity interactions i.e. $g_{BB}=0$ the impurities distance $\\mathcal{D}_{\\rm rel}$ is in general smaller than the corresponding for two repulsively interacting ones with $g_{BB} = 1$. \nThis difference becomes maximal for zero impurity-medium interactions, namely $g_{BF}=0$. \nIndeed, $\\mathcal{D}_{\\rm rel}$ decreases for a larger positive or negative $g_{BF}$ tending to approach a constant value which is smaller for attractive $g_{BF}$ interactions. \nConsequently, also the difference $\\mathcal{D}_{\\rm rel}(g_{BB} = 1)-\\mathcal{D}_{\\rm rel}(g_{BB}=0)$ gradually decreases and becomes constant for increasing $\\abs{g_{BF}}$. \nA direct comparison between $\\mathcal{D}_{\\rm rel}(g_{BB}=1)$ and $\\mathcal{D}_{\\rm rel}(g_{BB}=0)$ reveals that the \ndistance saturates at relatively larger (smaller) positive (negative) $g_{BF}$ values when $g_{BB}=1$. \nFor instance, the decreasing rate of $\\mathcal{D}_{\\rm rel}$ in the attractive $g_{BF}$ regime is much larger in the $g_{BB}=1$ scenario before showcasing a saturation tendency around $g_{BF}=-4$ towards $\\mathcal{D}_{\\rm rel} \\approx 0.5$. \nThe above-described overall behavior of $\\mathcal{D}_{\\rm rel}$ for varying $g_{BF}$ suggests the occurence of attractive induced interactions for a finite $g_{BF}$ despite the existence of direct $s$-wave ones. \nNevertheless, as we shall explicate in the following the direct interactions dominate the induced ones at least for repulsive impurity-medium \ncouplings, a result which reveals that the strength of induced interactions is larger in the negative $g_{BF}$ regime. \n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.47\\textwidth]{Fig4.eps}\n\\caption{Reduced two-body ($a_1$)-($a_5$) impurity-impurity $\\rho^{(2)}_{BB}(x_1, x_2)$, and ($b_1$)-($b_5$) fermion-fermion $\\rho^{(2)}_{FF}(x_1,x_2)$ \ndensity in the ground state of the BF mixture for different values of the boson-medium coupling constant $g_{BF}$ (see legend). \nThe system consists of $N_B =2$ interacting with $g_{BB}=1$ bosonic impurities and $N_F = 6$ fermions while it is trapped in a harmonic oscillator with frequency $\\omega = 0.1$.} \n\\label{2bd_g1}\n\\end{figure}\n\nIndeed by inspecting the impurities two-body density $\\rho^{(2)}_{BB}(x_1,x_2)$, illustrated in Figs.~\\ref{2bd_g1}($a_1$)-($a_5$), the following conclusions can be immediately drawn. 
\nThe circularly symmetric pattern of $\\rho^{(2)}_{BB}(x_1,x_2)$ occurring for $g_{BB}=0$ when $g_{BF}=0$ [Fig.~\\ref{2bd_g0}($a_3$)] is completely modified for repulsively interacting impurities [Fig.~\\ref{2bd_g1}($a_3$)]. \nThis modification favors a pattern whose diagonal is depleted, giving rise to a correlation hole~\\cite{erdmann2019phase,kwasniok2020correlated}, whilst the anti-diagonal develops two symmetric lobes with respect to $x_1=x_2$ and it is predominantly populated. \nThis is an explicit imprint of the direct $s$-wave interaction among the impurities and means that the probability of finding two bosons exactly at the same position is vanishingly \nsmall in contrast to the situation where each boson resides on a separate side in terms of the trap center. \nSwitching on $\\abs{g_{BF}}$ introduces deformations in the shape of $\\rho^{(2)}_{BB}(x_1, x_2)$ and in particular in the position of its anti-diagonal lobes which suggests that the induced interactions set in. \nReferring to repulsive impurity-medium interactions [Figs.~\\ref{2bd_g1}($a_4$-($a_5$)] it is apparent that for increasing $g_{BF}$ the anti-diagonal lobes of $\\rho^{(2)}_{BB}(x_1, x_2)$ approach the diagonal. \nTherefore, the two bosons get closer due to the presence of their attractive induced interactions mediated by the fermionic environment. \nNotice, however, that the two lobe structure of $\\rho^{(2)}_{BB}(x_1, x_2)$ is maintained also at $g_{BF}=3$ [Fig.~\\ref{2bd_g1}($a_5$)], indicating that the $s$-wave interactions dominate the induced ones. Turning to the attractive $g_{BF}$ regime we observe that for weak $g_{BF}$ values the anti-diagonal of $\\rho^{(2)}_{BB}(x_1, x_2)$ shrinks and thus the bosons come closer when compared to the $g_{BF}=0$ case due to the existence of attractive induced interactions. \nImportantly, this behavior of $\\rho^{(2)}_{BB}(x_1, x_2)$ is drastically changed for large attractive impurity-medium interactions. \nMore precisely, the two-lobed anti-diagonal structure related to the dominant repulsive contact interaction is changed into a circularly symmetric pattern, see e.g. $\\rho^{(2)}_{BB}(x_1,x_2)$ at $g_{BF}=-3$ depicted in Fig.~\\ref{2bd_g1}($a_1$). \nRecall that the appearance of such a circularly symmetric structure in $\\rho^{(2)}_{BB}(x_1, x_2)$ occurs in the case of zero effective interactions between the two bosons when $g_{BB}=0$ and $g_{BF}=0$ [Fig.~\\ref{2bd_g0}($a_3$)]. \nThis observation suggests that the attractive induced interactions nullify the direct repulsive contact ones for large attractive impurity-medium couplings, a phenomenon that is absent in the repulsive $g_{BF}$ regime. \nSummarizing, attractive impurity-medium couplings lead to stronger induced interactions than repulsive ones. \n\nSubsequently, we study the impact of the impurities mass on the strength of the induced interactions by invoking as an appropriate measure the impurities relative distance [Fig. \\ref{Relative_dis}]. \nFor this investigation we consider a harmonically trapped mass-imbalanced BF mixture consisting of a $^{40}$K fermionic environment and two $^{87}$Rb bosonic impurities~\\cite{Fratini2012}. \nEvidently, the overall phenomenology of $\\mathcal{D}_{\\rm rel}$ for varying $g_{BF}$ is similar to the mass-balanced scenario for both the $g_{BB}=0$ and the $g_{BB}=1$ cases. 
\nMoreover, $\\mathcal{D}_{\\rm rel}$ is always reduced when compared to the mass-balanced system, thus suggesting that heavier impurities prefer to stay closer to each other than lighter ones~\\cite{kwasniok2020correlated,mistakidis2020many}. Accordingly, we can deduce that an increasing impurity mass allows for stronger attractive induced interactions. \n\n\n\\subsection{Two-body correlations of the fermionic medium}\\label{corel_ground_bath} \n\nHaving explicated the existence of attractive induced impurity-impurity correlations we then analyze the two-particle distributions of the fermionic environment for different impurity-medium interactions. \nOur main objective here is to expose the back-action of the impurities onto their host~\\cite{mukherjee2020pulse,mistakidis2019dissipative}. \nRegarding the system with two non-interacting impurities ($g_{BB}=0$), $\\rho^{(2)}_{FF}(x_1,x_2)$ is presented in Figs. \\ref{2bd_g0} ($b_1$)-($b_5$) for specific values of $g_{BF}$. \nA depleted diagonal is observed irrespective of $g_{BF}$ due to the Pauli exclusion principle, namely two fermions cannot occupy the same spatial region. \nAt $g_{BF}=0$ two fermions can be found at any two distinct positions within the interval $[-10, 10]$, see Fig.~\\ref{2bd_g0}($b_3$), possessing a slightly larger probability to reside close to the trap center either on the same or on opposite sides with respect to $x=0$. \nInterestingly, even the presence of a very small number of bosonic impurities is able to significantly alter the properties of the Fermi sea if $g_{BF}\\neq 0$. \nFor repulsive $g_{BF}$, the fermions exhibit a tendency to stay away from the trap center, e.g. $\\rho^{(2)}_{FF}(x_1=5,x_2=-5)\\approx 0.14$ at $g_{BF} = 1$ in Fig. \\ref{2bd_g0} ($b_4$). \nThis behavior is manifested by the appearance of relatively low density stripes along the lines $x_1=0$ and $x_2=0$ e.g. for $g_{BF}=1$ [Fig.~\\ref{2bd_g0}($b_4$)] which are transformed into completely depleted density regions e.g. for $g_{BF} = 3$ [Fig. \\ref{2bd_g0}($b_5$)]. \nTurning to the attractive $g_{BF}$ regime [Figs.~\\ref{2bd_g0}($b_1$)-($b_2$)], the distribution of $\\rho^{(2)}_{FF}(x_1,x_2)$ is changed significantly. \nIndeed, the probability of finding two fermions at different positions in the vicinity of the trap center is the dominant contribution to $\\rho^{(2)}_{FF}(x_1,x_2)$ especially for larger attractions. \nFor instance, at $g_{BF} = -1$, the two-particle density shown in Fig.~\\ref{2bd_g0}($b_2$) is higher close to the trap center [e.g. $\\rho^{(2)}_{FF}(x_1=1,x_2=-1)\\approx 0.16$] compared to the edges [e.g. $\\rho^{(2)}_{FF}(x_1=5,x_2=-14)\\approx 0.09$]. \nAlso, for $g_{BF} = -3$, the spatial region apart from the one close to the trap center is almost completely depleted [see Fig.~\\ref{2bd_g0}($b_1$)], as identified by the cross-like pattern building upon $\\rho^{(2)}_{FF}(x_1,x_2)$. \n\nComparing now $\\rho^{(2)}_{FF}(x_1, x_2)$ between the cases of $g_{BB}=1$ [Figs.~\\ref{2bd_g1}($b_1$)-($b_5$)] and $g_{BB}=0$ [Figs.~\\ref{2bd_g0}($b_1$)-($b_5$)] we can easily deduce that their shapes at specific $g_{BF}$ values are to a great extent similar. \nA slight difference occurs for moderate repulsive interactions e.g. $g_{BF}=1$ where the two-body density stripes imprinted along the lines $x_1=0$ and $x_2=0$ for $g_{BB}=0$ [Fig.~\\ref{2bd_g0}($b_4$)] are not noticeable for $g_{BB}=1$ [Fig.~\\ref{2bd_g1}($b_4$)]. 
\nAlso, for attractive impurity-medium interactions $g_{BF}<0$ the regions away from the trap center are relatively \nstronger populated for $g_{BB}=1$, compare Figs. \\ref{2bd_g1} ($b_1$)-($b_2$) with Figs. \\ref{2bd_g0} ($b_1$)-($b_2$). \nAs a case example, for $g_{BF}=-3$ it holds that $\\rho^{(2)}_{FF}(x_1=4,x_2=4) \\approx 0.1$ when $g_{BB} = 0$ \nwhile $\\rho^{(2)}_{FF}(x_1=4,x_2=-4)\\approx 0.12$ for $g_{BB} = 1$, see Fig.~\\ref{2bd_g0}($b_1$) and \nFig. ~\\ref{2bd_g1}($b_1$) respectively. \n\n \n\\section{Quench Dynamics}\\label{quench_dynamics}\n\nUp to now we have discussed the ground state properties of the harmonically trapped particle imbalanced BF mixture \nwith $N_F=6$ and $N_B=2$ for different impurity-medium interaction strengths ranging from attractive to repulsive values. \nImportantly, we have identified the presence of attractive induced interactions for the non-interacting impurities and \nanalyzed the competition between the direct $s$-wave repulsive interactions with the induced ones. \nAlso, in all cases we have quantified the back-action of the impurities to their fermionic environment. \n\nBelow, we explore the corresponding non-equilibrium dynamics of the impurities and the Fermi sea. \nThe mixture is prepared in its ground-state configuration, as already discussed in Sec.~\\ref{Ground state}, with zero impurity-medium coupling strength. \nThe dynamics is triggered by applying a quench of this coupling towards either the repulsive [Sec.~\\ref{repulsive_quench}] or the \nattractive [Sec.~\\ref{attractive quench}] regime of interactions~\\cite{volosniev2015real,mistakidis2019effective}. \nOur main objective is to inspect the dynamical emergence of induced impurity-impurity correlations and the pattern formation of the fermionic environment as a result of the impurities motion. \nIn the subsequent analysis we first study the dynamics of two non-interacting impurities and then contrast our findings to the case of two repulsively interacting ones. \n\n\n\\subsection{Quench to repulsive interactions}\\label{repulsive_quench} \n\nWe focus first on the correlated dynamics of the BF mixture induced by a quench from a vanishing to repulsive impurity-medium interactions. \nThe emergent dynamics is firstly analyzed by employing the corresponding single-particle density evolution of the participating \ncomponents [Sec.~\\ref{density_evol_repul}] and then by inspecting their two-body density matrix [Sec.~\\ref{two_body_evol_repul}] in \nthe course of the evolution. \nThese observables enable us to gain an overview of the dynamical evolution and importantly shed light on \nthe existence of impurity-impurity and bath correlations respectively. \n\n\n\\subsubsection{Single-particle density evolution}\\label{density_evol_repul}\n\nTo gain an overview of the spatially resolved quench-induced dynamics of the BF mixture we show the corresponding single-particle density evolution of the impurities $\\rho^{(1)}_{B}(x;t)$ and the Fermi sea $\\rho^{(1)}_{F}(x;t)$ in Fig.~\\ref{spd_dr} for different impurity-medium and impurity-impurity interaction strengths. \nNaturally, we commence our discussion on the system containing non-interacting impurities which provides the most clear signatures of induced correlations. 
\nReferring to weak post-quench interactions, namely $g_{BF}=0.8$, the fermionic environment performs an overall breathing motion~\\cite{huang2019breathing,boudjemaa2020breathing} manifested as a small amplitude periodic expansion and contraction of its cloud, see Fig.~\\ref{spd_dr}($a_1$). \nThe frequency of this global breathing mode is $\\omega_F^{br}\\approx 0.193\\approx2\\omega$ which is indeed in accordance with the corresponding theoretical predictions~\\cite{bauch2009quantum,abraham2012quantum}. \nMoreover, $\\rho^{(1)}_{F}(x;t)$ exhibits at each time-instant of the evolution six shallow local maxima in total, namely three on the left and three on the right side with respect to $x=0$, while a shallow density dip occurs around the trap center $x=0$. \nThese local maxima are indicative of the six fermions present in the system and also of the fact that predominantly the six lowest single-particle eigenstates of the trap participate in the dynamics. \nOn the other hand, the shallow dip of $\\rho^{(1)}_{F}(x=0;t)$ is caused by the presence of the impurities at the same location. \nThe impurities density $\\rho^{(1)}_{B}(x;t)$ undergoes a very weak amplitude breathing dynamics [see Fig.~\\ref{spd_dr}($a_3$)] characterized by a predominant frequency $\\omega_{B}^{br}\\approx 0.24$. \nNotice here that $\\omega_B^{br}$ is slightly larger than $\\omega_F^{br}$ since the impurities experience an effective potential, created by the external trap and the density of the Fermi sea, which possesses a frequency larger than that of the bare trap. \nMoreover, $\\rho^{(1)}_{B}(x;t)$ completely overlaps with $\\rho^{(1)}_{F}(x;t)$ throughout the time-evolution, thus indicating the miscible character of the dynamics. \nRecall that the two components are also miscible in the ground state of the system for $g_{BF}=0.8$ as discussed in Sec.~\\ref{ground_state_density} and demonstrated in Figs. \\ref{spd_g} ($a_1$),($b_1$). \n\nTurning to repulsively interacting impurities with $g_{BB}=1$ for the same quench amplitude, i.e. from $g_{BF} = 0$ to $g_{BF} = 0.8$, we observe that a dynamics qualitatively similar to the above-described one takes place for both the fermionic environment [Fig.~\\ref{spd_dr}($a_5$)] and the bosons [Fig.~\\ref{spd_dr}($a_7$)]. \nA notable difference is that $\\rho^{(1)}_{B}(x;t)$ is broader in the $g_{BB}=1$ case due to the presence of the direct $s$-wave interaction, compare in particular Fig.~\\ref{spd_dr}($a_3$) with Fig.~\\ref{spd_dr}($a_7$). \nNote that this broadening of $\\rho^{(1)}_{B}(x;t)$ for finite $g_{BB}$ occurs also in the ground state of the system, see Figs.~\\ref{spd_g}($b_1$), ($b_2$). \nAs expected, also the breathing amplitude of the impurities is larger when $g_{BB}=1$ while the frequency of this motion remains almost the same ($\\omega_{B}^{br}\\approx 0.22$) as in the $g_{BB}=0$ case. \nThis small deviation in the value of $\\omega_{B}^{br}$ is attributed to interaction effects~\\cite{schmitz2013quantum,kiehn2019spontaneous}. Consequently, due to the broader $\\rho^{(1)}_{B}(x;t)$ when $g_{BB}=1$ the central dip in $\\rho^{(1)}_{F}(x=0;t)$ occurring for $g_{BB}=0$ [Fig.~\\ref{spd_dr}($a_1$)] becomes very shallow and almost disappears for $g_{BB}=1$ [Fig.~\\ref{spd_dr}($a_5$)]. \nOtherwise, the inclusion of direct $s$-wave impurity-impurity interactions does not alter the characteristics of the system's dynamics, at least on the single-particle level. 
\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.47\\textwidth]{Fig5.eps}\n\\caption{One-body density evolution of [($a_1$), ($a_2$), ($a_5$), ($a_6$)] the Fermi sea $\\rho^{(1)}_{F}(x,t)$ and [($a_3$), ($a_4$), ($a_7$), ($a_8$)] the bosonic impurities $\\rho^{(1)}_{B}(x,t)$ upon considering an impurity-medium interaction quench from $g_{BF}=0$ to a finite repulsive value (see legend). \nThe impurities are ($a_1$)-($a_4$) either free i.e. $g_{BB} = 0$ or ($a_5$)-($a_8$) repulsively interacting with $g_{BB} = 1$. \nThe system is confined in a harmonic trap with $\\omega=0.1$ and comprises $N_B = 2$ bosons immersed in a Fermi sea of $N_F = 6$ fermions. \nIt is initialized in its ground state with $g_{BF}=0$ and either $g_{BB}=0$ or $g_{BB}=1$.} \n\\label{spd_dr}\n\\end{figure} \n\nIncreasing the post-quench interaction strength e.g. to $g_{BF}=2.5$ gives rise to a much more intricate dynamics for both the non-interacting impurities [Fig.~\\ref{spd_dr}($a_4$)] and the fermionic medium [Fig.~\\ref{spd_dr}($a_2$)] when compared to the $g_{BF}=0.8$ quench amplitude. \nWe remark that such a difference is already expected from the ground state properties of the system since for $g_{BF}=0.8$ the components are spatially overlapping (miscible) and become immiscible for $g_{BF}=2.5$, see also Figs. \\ref{spd_g} ($a_1$), ($b_1$). \nIn particular, the cloud of the Fermi sea exhibits a breathing oscillation with almost the same frequency $\\omega_F^{br}\\approx0.195$ as for the quench to $g_{BF}=0.8$. \nHowever, the amplitude of the contraction and expansion dynamics of $\\rho^{(1)}_{F}(x;t)$ [Fig.~\\ref{spd_dr}($a_2$)] is larger when compared to the smaller quench amplitude $g_{BF}=0.8$ [Fig.~\\ref{spd_dr}($a_1$)], leading to a comparatively more excited medium in the former case. \nAccordingly, $\\rho^{(1)}_{F}(x;t)$ appears to be in general wider for $g_{BF}=2.5$, a result that can again be traced back to the ground state density of the Fermi sea [Fig.~\\ref{spd_g}($a_1$)]. \nMoreover, the local density humps building upon $\\rho^{(1)}_{F}(x;t)$ [Fig.~\\ref{spd_dr}($a_2$)] are found to be shallower (deeper) during the expansion (contraction) of the fermionic cloud for $g_{BF}=2.5$ than for $g_{BF}=0.8$. \nImportantly, the density dip of $\\rho^{(1)}_{F}(x;t)$ around the trap center is substantially deeper when $g_{BF}=2.5$. \nThis is a direct consequence of the emergent phase-separation being anticipated already from the ground state of the system for such strongly repulsive impurity-medium interactions, see also Figs.~\\ref{spd_g} ($a_1$), ($b_1$). \n\nOf course, most of the above-described features of $\\rho^{(1)}_{F}(x;t)$ are intimately connected with the corresponding behavior of the single-particle density of the bosons $\\rho^{(1)}_{B}(x;t)$ [Fig.~\\ref{spd_dr}($a_4$)] since the two components are inevitably interdependent due to their mutual finite coupling $g_{BF}$. \nSpecifically, the impurities density $\\rho^{(1)}_{B}(x;t)$ shows a relatively larger localization tendency [Fig.~\\ref{spd_dr}($a_4$)] than for $g_{BF}=0.8$ [Fig.~\\ref{spd_dr}($a_3$)] which is expected due to the aforementioned phase-separated behavior among the components. \nMoreover, $\\rho^{(1)}_{B}(x;t)$ exhibits a breathing motion of weaker amplitude and larger frequency, $\\omega_{B}^{br}\\approx 0.36$, for $g_{BF}=2.5$ than for $g_{BF}=0.8$. \nThe alteration of the impurities breathing frequency for $g_{BF}=2.5$ can in turn be explained within an effective potential picture. 
\nIndeed, as already argued for the ground state of the system, the impurities can be viewed as trapped in the potential formed by the harmonic trap with the density of their Fermi sea superimposed. \nSince $\\rho^{(1)}_{F}(x;t)$ is wider for increasing $g_{BF}$, the impurities' effective trapping frequency, which is related to the breathing one, is also larger. \n\nThe dynamics of interacting impurities with $g_{BB}=1$ following a quench to $g_{BF}=2.5$ as captured by $\\rho^{(1)}_{F}(x;t)$ [Fig.~\\ref{spd_dr}($a_6$)] and $\\rho^{(1)}_{B}(x;t)$ [Fig.~\\ref{spd_dr}($a_8$)] is more involved than in the $g_{BB}=0$ case, especially for long evolution times $t>40$. \nEvidently, the impurities exhibit a significantly broader density distribution for $g_{BB}=1$ [Fig.~\\ref{spd_dr}($a_8$)] than for $g_{BB}=0$ [Fig.~\\ref{spd_dr}($a_4$)], while performing a breathing motion of a larger amplitude and smaller frequency $\\omega_{B}^{br}\\approx 0.33$ in the former case. \nFurthermore, $\\rho^{(1)}_{B}(x;t)$, initially ($t=0$) having a Gaussian profile, deforms already within the initial stages of the dynamics ($t>5$) by developing three shallow humps which are more pronounced during expansion and come very close to each other at the contraction points [Fig.~\\ref{spd_dr}($a_8$)]. \nThis behavior of $\\rho^{(1)}_{B}(x;t)$ essentially indicates that for $t<5$ the interacting impurities dominantly occupy the lowest-lying single-particle eigenstate of their external potential while as time evolves the contribution of higher-lying eigenstates becomes significant and excitations are formed. \nThis statement is also supported by the population of the individual bosonic orbitals $\\phi_i^{B}(x,t)$ with $i=1,2,\\dots,12$ (see also the discussion following Eq.~(\\ref{1BD})), of which the first eight have a non-negligible population during the evolution (results not shown). \nThe above-mentioned differences, regarding mainly the breathing mode and the structure of $\\rho^{(1)}_{B}(x;t)$ for $g_{BB}=1$ compared to the $g_{BB}=0$ case, are attributed to the presence of the direct $s$-wave repulsive interaction between the impurities. \nAs expected, the features of $\\rho^{(1)}_{B}(x;t)$ are also imprinted to a certain extent in the density of the fermionic environment $\\rho^{(1)}_{F}(x;t)$ [Fig.~\\ref{spd_dr}($a_6$)] due to the finite $g_{BF}$. \nNotably, $\\rho^{(1)}_{F}(x;t)$ shows a suppressed central dip compared to the $g_{BB}=0$ case due to the broader distribution of the impurities for $g_{BB}=1$. \nThis in turn gives rise to an almost vanishing degree of phase-separation in this $g_{BB}=1$ case. \nOther properties, such as the amplitude and the frequency of its breathing mode, remain almost the same as in the $g_{BB}=0$ scenario. \n\nConcluding this section, it is important to emphasize that the quench dynamics of non-interacting and interacting bosonic impurities differs noticeably already on the single-particle level, especially for large post-quench interaction strengths. \nThis impact of the direct $s$-wave interaction of the impurities is also imprinted in the Fermi sea, leading to changes in its pattern formation. \nAs we shall explicate below, the origin of the above-mentioned differences is the presence of impurity-impurity induced interactions. 
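\n\nWithin the above effective potential picture, a rough consistency check of the enhanced trapping frequency can be obtained from the curvature of $V_{\\rm eff}^{B}(x)$ at the trap center, as in the schematic snippet below; this is only an order-of-magnitude estimate and not the procedure used to extract the breathing frequencies quoted in the text. \n\\begin{verbatim}\nimport numpy as np\n\ndef effective_frequency(V, x, M=1.0):\n    # omega_eff = sqrt(V''(0)/M) via central finite differences at x = 0\n    i0 = int(np.argmin(np.abs(x)))\n    dx = x[1] - x[0]\n    curvature = (V[i0 + 1] - 2.0 * V[i0] + V[i0 - 1]) / dx**2\n    return float(np.sqrt(curvature / M))\n\n# sanity check on the bare trap: recovers omega_eff = omega = 0.1, and a\n# noninteracting cloud in a harmonic trap breathes at 2 * omega_eff = 2 * omega\nM, omega = 1.0, 0.1\nx = np.linspace(-15.0, 15.0, 1201)\nV_trap = 0.5 * M * omega**2 * x**2\nprint(effective_frequency(V_trap, x, M))   # ~ 0.1\n\\end{verbatim}\nApplying the same function to $V_{\\rm eff}^{B}(x)$ constructed from the instantaneous fermionic density provides the corresponding estimate for the impurities and illustrates why their breathing frequencies exceed the bare-trap value $2\\omega$.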
\n\n\n\\subsubsection{Dynamics of impurity-impurity correlations}\\label{two_body_evol_repul} \n\nTo track the spatially resolved dynamics of the two bosonic impurities with respect to one another we next invoke their two-particle density, $\\rho^{(2)}_{BB}(x_1,x_2;t)$ [Eq.~(\\ref{2BD})], which essentially provides the probability of measuring simultaneously one particle at position $x_1$ and the other at $x_2$. \nAs a complementary measure of the impurities' position we also calculate their relative distance $\\mathcal{D}_{\\rm rel}(t)$ [Eq.~(\\ref{7})] during the time-evolution. \nThis observable will allow us to identify whether the impurities interact with each other via induced correlations mediated by their host or move independently~\\cite{mistakidis2019correlated,mistakidis2020many}. \nSnapshots of $\\rho^{(2)}_{BB}(x_1, x_2;t)$ at specific time-instants of the evolution upon considering a quench from $g_{BF}=0$ to $g_{BF}=2.5$ for the cases of $g_{BB}=0$ and $g_{BB}=1$ are presented in Figs.~\\ref{2B_den_r}($a_1$)-($a_4$) and Figs.~\\ref{2B_den_r}($c_1$)-($c_4$) respectively. \nMoreover, the corresponding $\\mathcal{D}_{\\rm rel}(t)$ when $g_{BB}=0$ [Fig.~\\ref{2B_den_r}($d$)] and $g_{BB}=1$ [Fig.~\\ref{2B_den_r}($e$)] is demonstrated for different post-quench interactions providing an overview of the impurity-impurity correlations. \n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.47\\textwidth]{Fig6.eps}\n\\caption{Snapshots of the two-body reduced density of ($a_1$)-($a_4$) two non-interacting bosons $\\rho^{(2)}_{BB}(x_1,x_2)$, ($b_1$)-($b_4$) two fermions of the medium $\\rho^{(2)}_{FF}(x_1,x_2)$ and ($c_1$)-($c_4$) two repulsively interacting bosons $\\rho^{(2)}_{BB}(x_1,x_2)$ at specific time-instants of the evolution (see legends). \nThe system consists of $N_F=6$ fermions and $N_B=2$ bosons while it is confined in a harmonic trap with $\\omega=0.1$. \nIt is initialized in its ground state with $g_{BF}=0$ and either $g_{BB}=0$ or $g_{BB}=1$. \nTo trigger the dynamics an interaction quench is performed from $g_{BF} = 0$ to $g_{BF} = 2.5$. \nTime-evolution of the relative distance $\\mathcal{D}_{\\rm rel}(t)$ between the two bosonic impurities with ($d$) $g_{BB} = 0$ and ($e$) $g_{BB} =1$ at distinct post-quench $g_{BF}$ values (see legend).} \n\\label{2B_den_r}\n\\end{figure}\n\nFor non-interacting bosonic impurities, $\\rho^{(2)}_{BB}(x_1,x_2;t=0)$ has a circular shape in the ($x_1,x_2$)-plane [Fig.~\\ref{2B_den_r}($a_1$)] with a peak around $x_1,x_2\\in [-2,2]$. \nTherefore, the bosons are likely to reside in this spatial region close to the trap center. \nHowever, in the course of the dynamics this shape of $\\rho^{(2)}_{BB}(x_1,x_2;t)$ is drastically altered, exhibiting an elongated diagonal and a suppressed anti-diagonal, see Figs.~\\ref{2B_den_r}($a_2$)-($a_4$). \nNote that the anti-diagonal of the two-particle density of the impurities dictates their relative distance $\\mathcal{D}_{\\rm rel}(t)$ [Eq.~(\\ref{7})]. \nThe latter is illustrated in Fig.~\\ref{2B_den_r}($d$) for a variety of post-quench $g_{BF}$ values. \nAs it can be seen, in all cases $\\mathcal{D}_{\\rm rel}(t)$ undergoes a decaying amplitude oscillatory motion characterized by two dominantly participating frequencies which essentially correspond to the center-of-mass and relative coordinate breathing modes~\\cite{schmitz2013quantum}, e.g. 
$\\omega_1\\approx 0.19$, $\\omega_2\\approx 0.24$ for $g_{BF}=0.8$ and $\\omega_1\\approx 0.19$, $\\omega_2\\approx 0.36$ when $g_{BF}=2.5$. \nIndeed, the oscillatory behavior of $\\mathcal{D}_{\\rm rel}(t)$ reflects the breathing motion of the impurities cloud already identified in the dynamics of $\\rho^{(1)}_{B}(x;t)$ [Fig.~\\ref{spd_dr}($a_3$)]. \nThis is also imprinted in the modulated shape of $\\rho^{(2)}_{BB}(x_1,x_2;t)$ [Figs.~\\ref{2B_den_r}($a_2$)-($a_4$)], e.g. the anti-diagonal of $\\rho^{(2)}_{BB}(x_1,x_2;t)$ is more expanded at $t=55$ compared to $t=130$. \nAlso, the oscillation amplitude of $\\mathcal{D}_{\\rm rel}(t)$ and, as a consequence, of the breathing mode is enhanced for a larger post-quench $g_{BF}$, see also Figs.~\\ref{spd_dr}($a_3$), ($a_7$). \nImportantly, the decaying amplitude in time of $\\mathcal{D}_{\\rm rel}(t)$, and thus the elongated shape of $\\rho^{(2)}_{BB}(x_1,x_2;t)$ across its diagonal, implies that the impurities tend to approach each other during the dynamics and, since they are non-interacting ($g_{BB}=0$), they experience an effective attraction mediated by the fermionic environment. \n\nProceeding, we examine the role played by the direct $s$-wave repulsive contact interaction between the impurities and its competition with the induced interactions on the quench dynamics. \nDue to the finite impurity-impurity repulsion, herein $g_{BB}=1$, the bosons initially ($t=0$) reside one in the left ($x<0$) and the other in the right ($x>0$) side of the trap, see the pronounced anti-diagonal distribution of $\\rho^{(2)}_{BB}(x_1,x_2=-x_1;t=0)$ in Fig.~\\ref{2B_den_r}($c_1$). \nIn contrast, after the quench ($t>0$) three distinct segments develop in $\\rho^{(2)}_{BB}(x_1,x_2;t)$ [Figs.~\\ref{2B_den_r}($c_2$)-($c_4$)]. \nNamely, the impurities are either very close to each other around the trap center [see the diagonal of $\\rho^{(2)}_{BB}(x_1,x_2;t)$] or they remain spatially separated with one of them located in the left and the other in the right side of the trap with respect to $x=0$ [see the anti-diagonal of $\\rho^{(2)}_{BB}(x_1,x_2;t)$]. \nThis two-body superposition is a consequence of the competition between the direct repulsive and induced attractive interactions~\\cite{mistakidis2020many,mistakidis2020induced}. \nInspecting now $\\mathcal{D}_{\\rm rel}(t)$ for different post-quench values of $g_{BF}$ [Fig.~\\ref{2B_den_r}($e$)] we can readily see that it performs oscillations possessing two predominant frequencies, for instance $\\omega_1\\approx 0.22$, $\\omega_2\\approx 0.19$ when $g_{BF}=0.8$ and $\\omega_1\\approx 0.33$, $\\omega_2\\approx 0.19$ if $g_{BF}=2.5$. \nThese frequencies are again related to the center-of-mass and relative coordinate breathing modes respectively. \nInterestingly, the oscillation amplitude of $\\mathcal{D}_{\\rm rel}(t)$ e.g. for $g_{BF}=0.8$ and $g_{BF}=2.5$ is almost constant while for $g_{BF}=4$ it shows a decaying tendency. \nThis means that in the latter case the induced attraction tends to surpass the impurities direct repulsion. \nFinally, we remark that the oscillation amplitude (decay rate) of $\\mathcal{D}_{\\rm rel}(t)$ for fixed $g_{BF}$ is in general larger (smaller) when $g_{BB}=1$ [Fig.~\\ref{2B_den_r}($e$)] compared to $g_{BB}=0$ [Fig.~\\ref{2B_den_r}($d$)], thus evincing the presence of the direct impurity repulsion. 
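\n\nThe dominant frequencies quoted above can be read off from the spectrum of $\\mathcal{D}_{\\rm rel}(t)$. A schematic Python sketch of such an analysis is given below, where a synthetic two-frequency signal, built from the values quoted for the quench to $g_{BF}=2.5$ with $g_{BB}=0$, stands in for the actual ML-MCTDHX data. \n\\begin{verbatim}\nimport numpy as np\n\n# synthetic stand-in for D_rel(t): two breathing frequencies on top of a mean\n# value, with a slowly decaying oscillation amplitude\nt = np.arange(0.0, 400.0, 0.1)\nd_rel = 2.0 + np.exp(-t / 300.0) * (0.30 * np.cos(0.19 * t) + 0.20 * np.cos(0.36 * t))\n\n# Fourier transform of the mean-free signal; peaks mark the dominant modes\nspec = np.abs(np.fft.rfft(d_rel - d_rel.mean()))\nfreq = 2.0 * np.pi * np.fft.rfftfreq(t.size, d=t[1] - t[0])   # angular frequencies\n\nprint(np.sort(freq[np.argsort(spec)[-2:]]))   # ~ [0.19, 0.36]\n\\end{verbatim}\nA completely analogous analysis can be applied to the numerically obtained $\\mathcal{D}_{\\rm rel}(t)$, or to the width of $\\rho^{(1)}_{B}(x;t)$, in order to identify the center-of-mass and relative coordinate breathing frequencies.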
\n\n\n\\subsubsection{Correlations of the fermionic environment}\\label{two_body_evol_repul_bath} \n\nTo complement our study we then investigate the correlation patterns of the Fermi sea as encoded in its two-body \ndensity $\\rho^{(2)}_{FF}(x_1,x_2;t)$ shown in Figs.~\\ref{2B_den_r}($b_1$)-($b_4$) at specific time-instants after a \nquench from $g_{BF}=0$ to $g_{BF}=2.5$ for $g_{BB}=0$. \nA correlation hole occurs along the diagonal of $\\rho^{(2)}_{FF}(x_1,x_2;t)$ throughout the evolution due to the Pauli exclusion principle. \nAlso, an expansion [Fig.~\\ref{2B_den_r}($b_2$)] and contraction [Fig.~\\ref{2B_den_r}($b_4$)] of the anti-diagonal \nof $\\rho^{(2)}_{FF}(x_1,x_2;t)$ \ntakes place which manifest the collective breathing motion of the fermionic cloud~\\cite{kwasniok2020correlated}, see \nalso Fig.~\\ref{spd_dr}($a_2$). \nMoreover, a depletion along the $x_1=0$ and $x_2=0$ spatial regions is observed indicating that it is more likely for \none fermion to be located in the vicinity of a density hump at $x<0$ and the other one being symmetrically placed \nwith respect to the trap center, see also Fig.~\\ref{spd_dr}($a_2$). \n\n\n\n\\subsection{Quench to attractive interactions}\\label{attractive quench} \n\nIn the following, we shall study the dynamical response of the impurities and the fermionic bath after a quench from $g_{BF}=0$ to \nthe attractive ($g_{BF}<0$) impurity-medium interaction regime. \nTo quantify the arising distinctive dynamical features we analyze the single-particle density [Sec.~\\ref{density_evol_attract}] \nand the corresponding two-body density [Sec.~\\ref{two_body_evol_attract}] evolution of the participating components. \nAs in the previous section, we first discuss the time-evolution of two non-interacting ($g_{BB} = 0$) impurities and subsequently \ncompare to the case of two repulsively interacting ($g_{BB} = 1.0$) ones. \n\n\n\\subsubsection{Density evolution}\\label{density_evol_attract}\n\nThe spatio-temporal evolution of $\\rho^{(1)}_{F}(x;t)$ and $\\rho^{(1)}_{B}(x;t)$ after a quench from $g_{BF}=0$ to $g_{BF}=-0.8$ \nfor $g_{BB}=0$ is presented in Fig.~\\ref{s_den_a} ($a_1$) and Fig.~\\ref{s_den_a} ($a_3$) respectively. \nAs a consequence of the interaction quench both the impurities and the fermionic clouds undergo a collective weak amplitude \nbreathing motion identified by their contraction and expansion dynamics~\\cite{huang2019breathing,mistakidis2019correlated}. \nThe breathing frequency of the fermionic bath is $\\omega_F^{br}\\approx 0.208\\approx2\\omega$ while for the impurities it corresponds to \n$\\omega_B^{br} \\approx 0.251$ since they experience a modified external potential composed of the harmonic oscillator and the density of their host, see for details Refs.~\\cite{mistakidis2020many,kiehn2019spontaneous}. \nImportantly, the attractive impurity-medium coupling results in the formation of a shallow density hump in $\\rho^{(1)}_{F}(x;t)$ [Fig.~\\ref{s_den_a}($a_1$)] at the instantaneous location of the impurities [Fig.~\\ref{s_den_a}($a_3$)] i.e. around the trap center. \nTurning to repulsively interacting impurities where $g_{BB}=1$ we observe that a similar to the above-described dynamics takes place, see Figs.~\\ref{s_den_a} ($a_5$), ($a_7$). \nHowever, the expansion amplitude of $\\rho^{(1)}_{B}(x;t)$ is larger and the breathing frequency $\\omega_B^{br}\\approx 0.226$ is slightly smaller compared to the $g_{BB}=0$ scenario due to the inclusion of direct $s$-wave repulsive interactions. 
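The breathing frequencies quoted above can be extracted, for instance, from the spectrum of the time-resolved width of each cloud. The sketch below illustrates such an extraction on a synthetic signal; the text does not specify the procedure actually used, and the signal $\\langle x^2\\rangle(t)$, its sampling step and the test frequency are assumptions made purely for demonstration.
\\begin{verbatim}
import numpy as np

def breathing_frequency(width, dt):
    # dominant angular frequency of a zero-mean width signal, e.g. <x^2>(t)
    sig = width - width.mean()
    spec = np.abs(np.fft.rfft(sig))
    freq = np.fft.rfftfreq(sig.size, d=dt)
    return 2.0 * np.pi * freq[np.argmax(spec[1:]) + 1]

# synthetic test signal oscillating at the quoted bath value 0.208 (illustrative)
t = np.arange(0.0, 2000.0, 0.5)
width = 1.0 + 0.1 * np.cos(0.208 * t)
print(breathing_frequency(width, dt=0.5))   # ~0.208
\\end{verbatim}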
\nAlso, since $\\rho^{(1)}_{B}(x;t)$ is relatively wider than for $g_{BB}=0$ the density hump developed in $\\rho^{(1)}_{F}(x;t)$ in \nthe latter case almost disappears for $g_{BB}=1$. \n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.47\\textwidth]{Fig7.eps}\n\\caption{Time-evolution of the one-body density of [($a_1$), ($a_2$), ($a_5$), ($a_6$)] the fermionic medium $\\rho^{(1)}_{F}(x,t)$ and [($a_3$), ($a_4$), ($a_7$), ($a_8$)] the bosonic impurities $\\rho^{(1)}_{B}(x,t)$ after an interaction quench of the boson-fermion coupling constant from $g_{BF}=0$ to different attractive values (see legend). \nThe impurities are considered to be ($a_1$)-($a_4$) free i.e. $g_{BB} = 0$ and ($a_5$)-($a_8$) repulsively interacting with $g_{BB} = 1$. \nThe harmonically trapped mixture with $\\omega=1$ consists of $N_F = 6$ fermions and $N_B = 2$ bosons while it is prepared in its ground state with $g_{BF}=0$ and either $g_{BB}=0$ or $g_{BB}=1$.} \n\\label{s_den_a}\n\\end{figure} \n\nFollowing a quench to stronger impurity-medium interactions, e.g. $g_{BF}=-2.5$, leads to a more intricate response of both components than for $g_{BF}=-0.8$, see Figs.~\\ref{s_den_a}($a_2$), ($a_4$), ($a_6$), ($a_8$). \nReferring to the system containing non-interacting impurities [Figs.~\\ref{s_den_a}($a_2$), ($a_4$)] we observe that $\\rho^{(1)}_{B}(x;t)$ has a pronounced spatial localization tendency around the trap center while performing an ``irregular'' weak amplitude breathing dynamics. \nThe latter is characterized by two dominant frequencies, namely $\\omega_{B_1}^{br}=0.234$ and $\\omega_{B_2}^{br}=0.263$ corresponding to the \ncenter-of-mass and relative coordinate breathing mode respectively. \nSince these frequencies are close the dynamics of $\\rho^{(1)}_{B}(x;t)$ is reminiscent of a beating pattern. \nWe remark that a similar time-evolution takes place also for bosonic impurities immersed in a bosonic background~\\cite{mistakidis2020many}.\nAccordingly, as a result of this sharply peaked distribution of $\\rho^{(1)}_{B}(x;t)$ in the vicinity of $x=0$ there is an accumulation \nof the fermionic density at the same location due to the finite $g_{BF}$. \nIndeed, a prominent density hump builds upon $\\rho^{(1)}_{F}(x;t)$ [Fig.~\\ref{s_den_a}($a_2$)] which otherwise exhibits collective \nbreathing oscillations of a frequency $\\omega_F^{br}\\approx 0.211$. \nThe dynamical response is somewhat changed when considering interacting impurities ($g_{BB}=1$) as depicted in Figs.~\\ref{s_den_a}($a_6$), ($a_8$). \nThe impurities possess a comparatively wider density distribution than for $g_{BB}=0$ as a consequence of their finite repulsion, $g_{BB}=1$. \nAlso, the amplitude of their breathing motion is slightly larger compared to the one of $g_{BB}=0$ and the participating frequencies \n$\\omega_{B_1}^{br}=0.355$ and $\\omega_{B_2}^{br}=0.376$ are very close thereby producing a beating pattern (hardly visible \nin Fig.~\\ref{s_den_a}($a_8$)) manifested by the periodic \nappearance of sharp peaks in $\\rho^{(1)}_{B}(x;t)$ as a result of its contraction. \nThe differences in the dynamics of non-interacting and interacting impurities for fixed post-quench $g_{BF}$ will be further analyzed in the next section. \nThis beating pattern is also imprinted in $\\rho^{(1)}_{F}(x;t)$ which as a back-action develops humps which follow the location of the impurities. 
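For completeness, the period of this beating envelope follows directly from the two nearly resonant frequencies quoted above,
\\begin{equation*}
T_{\\rm beat}=\\frac{2\\pi}{\\abs{\\omega_{B_2}^{br}-\\omega_{B_1}^{br}}}\\approx\\frac{2\\pi}{0.376-0.355}\\approx 3.0\\times 10^{2},
\\end{equation*}
in the normalized time units used here, i.e. more than an order of magnitude longer than the single-mode breathing period $2\\pi\/\\omega_{B_1}^{br}\\approx 18$.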
\nOtherwise, $\\rho^{(1)}_{F}(x;t)$ performs a breathing mode of almost the same amplitude and equal \nfrequency $\\omega_{F}^{br}\\approx0.2$ compared to the $g_{BB}=0$ case. \n\n\n\n\\subsubsection{Evolution of impurity-impurity correlations}\\label{two_body_evol_attract} \n\nTo identify and consequently characterize the nature of the impurity-impurity correlations in the course of the dynamics we monitor the two-body density $\\rho^{(2)}_{BB}(x_1,x_2;t)$ and relative distance $\\mathcal{D}_{\\rm rel}(t)$ of the impurities depicted in Fig.~\\ref{2B_den_a} for different quench amplitudes. \nFigures~\\ref{2B_den_a}($a_1$)-($a_4$) show snapshots of $\\rho^{(2)}_{BB}(x_1,x_2;t)$ upon considering a quench of two non-interacting impurities from $g_{BF}=0$ to $g_{BF}=-2.5$. \nInitially, $t=0$, the impurities lie in the vicinity of the trap center since $\\rho^{(2)}_{BB}(x_1,x_2;t)$ is non-zero within the spatial region $x_1,x_2\\in [-2,2]$ [Fig.~\\ref{2B_den_a}($a_1$)]. However, as time evolves, the two bosons start to occupy a smaller spatial region and therefore approach each other, a tendency that becomes evident by the gradual shrinking of $\\rho^{(2)}_{BB}(x_1,x_2;t)$ across its diagonal accompanied by the depression of its anti-diagonal, see Figs.~\\ref{2B_den_a}($a_2$)-($a_4$). \n\nAs it has already been discussed in Sec.~\\ref{two_body_evol_repul} the shape of the anti-diagonal of $\\rho^{(2)}_{BB}(x_1,x_2;t)$ is well captured by $\\mathcal{D}_{\\rm rel}(t)$ which is presented in Fig.~\\ref{2B_den_a}($d$) for distinct post-quench values of $g_{BF}$. \nEvidently $\\mathcal{D}_{\\rm rel}(t)$ oscillates irrespectively of $g_{BF}$, a behavior that corresponds to the breathing motion of the impurities and can also be inferred from the weak expansion [Fig.~\\ref{2B_den_a}($a_3$)] and contraction [Fig.~\\ref{2B_den_a}($a_2$)] of $\\rho^{(2)}_{BB}(x_1,x_2=-x_1;t)$. \nIts evolution contains a multitude of frequencies whose number and value depend on $g_{BF}$ and refer to the underlying \nbreathing motion, e.g. for $g_{BF}=-2.5$ the dominantly involved frequencies are $\\omega_1=0.234$ and $\\omega_2=0.263$ respectively. \nAlso, the oscillation amplitude of $\\mathcal{D}_{\\rm rel}(t)$ is smaller for a larger $\\abs{g_{BF}}$ which is in accordance to the localization tendency of the impurities for quenches to stronger impurity-medium attractions. \nImportantly, $\\mathcal{D}_{\\rm rel}(t)$ exhibits a decaying tendency in time which is more pronounced for increasing $\\abs{g_{BF}}$ and shows a saturation behavior for quite strong attractions, e.g. $g_{BF}=-4$ here, and long evolution times $t>80$. \nThis latter decaying behavior is again a manifestation of the presence of attractive induced impurity-impurity interactions. \nIt is also worth commenting at this point that the suppression of the anti-diagonal of $\\rho^{(2)}_{BB}(x_1,x_2;t)$ or equivalently \nthe decay of $\\mathcal{D}_{\\rm rel}(t)$ is significantly more pronounced for quenches towards the attractive interaction regime than \nin the repulsive one, e.g. compare Fig.~\\ref{2B_den_r}($d$) and Fig.~\\ref{2B_den_a}($d$). \nTherefore, we can infer the generation of stronger attractive induced interactions for quenches in the attractive than in the repulsive \nimpurity-medium interaction regime~\\cite{mistakidis2020induced}, a result that also holds for the ground state of the system as \nexplicated in Sec.~\\ref{corel_ground}. 
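The decay and eventual saturation of $\\mathcal{D}_{\\rm rel}(t)$ described above can be quantified, for instance, by a least-squares fit to an exponentially damped oscillation around a saturation value. We stress that the functional form, the synthetic trace and the initial guesses in the following sketch are illustrative assumptions and do not correspond to a fitting procedure used for the data of Fig.~\\ref{2B_den_a}.
\\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def damped(t, d_inf, a, gamma, omega, phi):
    # exponentially damped oscillation around a saturation value d_inf
    return d_inf + a * np.exp(-gamma * t) * np.cos(omega * t + phi)

# synthetic stand-in for a decaying D_rel(t) trace (illustrative values)
t = np.linspace(0.0, 150.0, 600)
trace = damped(t, 2.0, 1.0, 0.02, 0.25, 0.0) + 0.02 * np.random.randn(t.size)

popt, _ = curve_fit(damped, t, trace, p0=(2.0, 1.0, 0.01, 0.25, 0.0))
print("decay rate gamma =", popt[2])
\\end{verbatim}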
\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.47\\textwidth]{Fig8.eps} \n\\caption{Instantaneous profiles of the two-body reduced density of ($a_1$)-($a_4$) two non-interacting bosons $\\rho^{(2)}_{BB}(x_1,x_2)$, ($b_1$)-($b_4$) two fermions of the bath $\\rho^{(2)}_{FF}(x_1,x_2)$ and ($c_1$)-($c_4$) two repulsively interacting bosons $\\rho^{(2)}_{BB}(x_1,x_2)$. \nThe BF mixture contains $N_F = 6$ fermions and $N_B = 2$ bosons. \nIt is confined in a harmonic trap with $\\omega = 0.1$ and it is prepared in its ground state with $g_{BF} = 0$ and either $g_{BB}=0$ or $g_{BB}=1$. \nThe dynamics is triggered upon considering an impurity-medium interaction quench from $g_{BF} = 0$ to $g_{BF} = -2.5$. \nTemporal-evolution of the boson-boson relative distance $\\mathcal{D}_{\\rm rel}(t)$ for ($d$) $g_{BB} = 0$ and ($e$) $g_{BB} =1$ at specific post-quench $g_{BF}$ couplings (see legend).} \n\\label{2B_den_a}\n\\end{figure} \n\nOn the other hand, the two-body dynamics of two repulsively interacting impurities, here $g_{BB}=1$, subjected to a quench \nfrom $g_{BF}=0$ to $g_{BF}=-2.5$ showcases quite different characteristics from the $g_{BB}=0$ case, compare in particular \nFigs.~\\ref{2B_den_a}($c_1$)-($c_4$) with Figs.~\\ref{2B_den_r}($c_1$)-($c_4$). \nIndeed, even for the ground state of the system ($t=0$) the impurities, since $g_{BB}$ is finite, are spatially separated with the one residing \nat $x<0$ and the other at $x>0$ as can be inferred from the correlation hole of $\\rho^{(2)}_{BB}(x_1,x_2=x_1;t=0)$ in Fig.~\\ref{2B_den_a}($c_1$). \nAfter the quench, they oscillate between two distinct configurations. \nNamely they either stay separated, see the elongated anti-diagonal of $\\rho^{(2)}_{BB}(x_1,x_2;t)$ in Figs.~\\ref{2B_den_a}($c_2$), ($c_3$), or they move close to each other, see their bunching tendency in the region $x_1,x_2\\in [-2,2]$ in Fig.~\\ref{2B_den_a}($c_4$). \nThis behavior is caused by the competition of their inherent repulsive contact interaction and the induced attraction mediated by the fermionic environment~\\cite{huber2019medium}. \n\nTo understand better the aforementioned competing mechanism we present the time-evolution of the impurities relative distance $\\mathcal{D}_{\\rm rel}(t)$ for a variety of post-quench interactions $g_{BF}$ in Fig.~\\ref{2B_den_a}($e$). \nAs it can be directly seen, the response of $\\mathcal{D}_{\\rm rel}(t)$ depends crucially on $g_{BF}$. Indeed for quenches to weak attractions, e.g. $g_{BF}=-0.8$, $\\mathcal{D}_{\\rm rel}(t)$ oscillates with an almost constant amplitude and a \ndominant frequency $\\omega_1=0.226$. \nHowever, by increasing the quench amplitude e.g. to $g_{BF}=-2.5$ $\\mathcal{D}_{\\rm rel}(t)$ performs ``irregular'' oscillations characterized by multiple frequencies and importantly a decaying amplitude. \nThis decay is more pronounced for larger attractions e.g. $g_{BF}=-4$ where $\\mathcal{D}_{\\rm rel}(t)$ drops at the early stages of the dynamics and saturates to a fixed value for $t>50$. \nAs a consequence, we can deduce that the attractive induced interactions become stronger for quenches towards larger impurity-medium attractions and gradually dominate with respect to the impurities direct repulsion. \nNote here that such a mechanism is also present for quenches to repulsive impurity-medium interactions, see Fig.~\\ref{2B_den_r}($e$), but it is apparently less effective compared to the attractive $g_{BF}$ quench scenario. 
\n\\begin{figure*}[ht]\n\\centering\n\\includegraphics[width=\\textwidth]{Fig9.eps}\n\\caption{Temporal-evolution of the Von-Neumann entropy $S_{VN}(t)$ for different post-quench ($a$)-($b$) attractive and ($c$)-($d$) repulsive impurity-medium interaction strengths (see legends). \nThe bosonic impurities are considered to be either [($a$), ($c$)] non-interacting $g_{BB}=0$ or [($b$), ($d$)] interacting with $g_{BB}=1$. \n($e$) Time-averaged Von-Neumann entropy $\\bar{S}_{VN}$ for distinct post-quench $\\abs{g_{BF}}$ attractive and repulsive values (see legend) when $g_{BB}=0$. \nThe solid and dashed lines provide a guide to the eye. \nThe BF mixture consists of $N_B=2$ bosons and $N_F=6$ fermions with equal masses and are confined in the same harmonic trap with $\\omega = 0.1$.} \n\\label{fig:vn_ent}\n\\end{figure*}\n\n\n\\subsubsection{Correlations of the Fermi bath}\\label{two_body_evol_bath_attract}\n\nTurning to the Fermi sea we observe the appearance of completely different correlation patterns building upon $\\rho^{(2)}_{FF}(x_1,x_2;t)$ when compared to the repulsive quench scenario [Figs.~\\ref{2B_den_r}($c_1$)-($c_4$)] as demonstrated in Figs.~\\ref{2B_den_a}($c_1$)-($c_4$) for a quench to $g_{BF}=-2.5$ in the system with two non-interacting ($g_{BB}=0$) impurities. \nAs expected, a correlation hole exists for the entire time-evolution due to the fermionic character of the bath~\\cite{erdmann2019phase,kwasniok2020correlated}. \nInitially, $t=0$, the fermions are symmetrically placed with respect to $x=0$ and predominantly reside one at $x>0$ and the other \nat $x<0$, see the bright spots close to the diagonal in Fig.~\\ref{2B_den_a}($c_1$). \nFollowing the quench a cross-like correlation pattern appears in $\\rho^{(2)}_{BB}(x_1,x_2;t)$ which becomes more elongated across the spatial regions lying in the vicinity of $x_1=0$ and $x_2=0$ [Figs.~\\ref{2B_den_a}($c_3$), ($c_4$)]. \nThis cross-like correlation pattern is the two-body analogue of the accumulation of the Fermi density $\\rho^{(1)}_{F}(x;t)$ [Fig.~\\ref{spd_dr}($a_2$)] around the trap center and more precisely in the vicinity of the position of the impurities due to the attractive $g_{BF}$. \nThus it is a direct imprint of the impurities motion into their host, see also the elongated diagonal of $\\rho^{(2)}_{BB}(x_1,x_2;t)$ in Figs.~\\ref{2B_den_a}($a_2$)-($a_4$), evincing that for strong attractive $g_{BF}$ the fermions move close to the trap center, a mechanism that competes with their inherent Fermi pressure. \n\n\n\\subsection{Entanglement dynamics}\\label{entanglemet_dynamics} \n\nTo further unveil the degree of impurity-medium correlations during the quench dynamics of the BF mixture we employ the time-evolution of the Von-Neumann entropy $S_{VN}(t)$ [Eq.~\\eqref{VN}]. \nThis quantity provides a measure of the overall build up of the impurity-medium entanglement~\\cite{mukherjee2020pulse,theel2020entanglement,kwasniok2020correlated} and also reveals the complexity of the time-evolved post-quench state of the system. \nThe dynamics of $S_{VN}(t)$ after a quench to attractive (repulsive) interactions for the system of two non-interacting and interacting impurities is shown in Figs.~\\ref{fig:vn_ent}($a$) and ($b$) [Figs.~\\ref{fig:vn_ent}($c$) and ($d$)] respectively. 
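Since Eq.~\\eqref{VN} is not reproduced in this section, the sketch below assumes the standard definition of the Von-Neumann entropy in terms of the Schmidt spectrum $\\{\\lambda_k\\}$ of the impurity-medium bipartition, $S_{VN}=-\\sum_k\\lambda_k\\ln\\lambda_k$ (natural logarithm assumed), together with the time average $\\bar{S}_{VN}$ employed further below; the example Schmidt weights are placeholders.
\\begin{verbatim}
import numpy as np

def vn_entropy(schmidt_weights):
    # S_VN = -sum_k lambda_k ln(lambda_k), natural logarithm assumed
    lam = np.asarray(schmidt_weights, dtype=float)
    lam = lam[lam > 1e-12]
    return -np.sum(lam * np.log(lam))

def time_averaged_entropy(s_of_t, t):
    # (1/T) * integral_0^T S_VN(t) dt on a discrete time grid
    return np.trapz(s_of_t, t) / (t[-1] - t[0])

# illustrative Schmidt weights of the impurity-medium bipartition (placeholders)
print(vn_entropy([0.9, 0.08, 0.02]))
\\end{verbatim}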
\n\nFocusing on the attractive post-quench interaction regime [Figs.~\\ref{fig:vn_ent}($a$), ($b$)] we observe that independently of the inclusion \nof direct $s$-wave impurity-impurity interactions $S_{VN}(t=0)=0$ and hence the components are initially non-entangled. \nHowever, directly after the quench an appreciable impurity-medium entanglement generation takes place in all cases \nsince $S_{VN}(t)\\neq 0$~\\cite{mistakidis2019correlated,mistakidis2019quench,mukherjee2020pulse}. \nMore precisely, an almost ballistic linear growth of $S_{VN}(t)$ is manifested at the very early stages of the dynamics ($t<5$) accompanied by a fluctuating behavior of $S_{VN}(t)$ around a fixed $g_{BF}$-dependent value at later evolution times. \nNotice that for both $g_{BB}=0$ and $g_{BB}=1$ the response of $S_{VN}(t)$ shows a hierarchy in terms of $g_{BF}$, namely it acquires \nlarger values for stronger attractions. \nAlso, the temporal fluctuations of $S_{VN}(t)$ deep in the evolution are suppressed for quenches to weak attractions, e.g. compare $S_{VN}(t)$ for $g_{BF}=-0.8$ and $g_{BF}=-2.5$ in Figs.~\\ref{fig:vn_ent}($a$), ($b$). \nThe latter means that for larger post-quench attractions the system is in a more complicated many-body superposition involving a \nlarger amount of states [see also Eq.~(\\ref{4})] than for smaller negative $g_{BF}$ values. \nThis situation holds equal for fixed $g_{BF}$ but increasing $g_{BB}$. \nIndeed, it becomes apparent by inspecting $S_{VN}(t)$ for fixed $g_{BF}$ between the $g_{BB}=0$ and $g_{BB}=1$ cases that in the latter case the temporal fluctuations of $S_{VN}(t)$ are enhanced, especially for a larger $\\abs{g_{BF}}$. \nWe remark that the saturating tendency of $S_{VN}(t)$ for long times can be attributed to the finite size of the system~\\cite{Calabrese_2005}, i.e. if the system would have been infinite then $S_{VN}(t)$ should increase linearly in time throughout the time-evolution. \n\nA similar to the above-described phenomenology regarding the entanglement dynamics takes place also during the unitary evolution of the system for quenches towards the repulsive impurity-medium interaction regime for both non-interacting [Fig.~\\ref{fig:vn_ent}($c$)] and interacting [Fig.~\\ref{fig:vn_ent}($d$)] impurities. \nIndeed, the sudden increase of $g_{BF}$ leads to entanglement formation since $S_{VN}(t)\\neq 0$ while $S_{VN}(t=0)=0$. \nAs for $g_{BF}<0$, here also $S_{VN}(t)$ increases linearly for $t<5$ and subsequently oscillates around a mean value, see Figs.~\\ref{fig:vn_ent}($c$), ($d$). \nInterestingly, the degree of entanglement is larger for quenches to the repulsive than the attractive interaction regimes, e.g. compare Fig.~\\ref{fig:vn_ent}($a$) with Fig.~\\ref{fig:vn_ent}($c$). \nThis fact evinces that a larger amount of dynamical impurity-medium entanglement is established in the repulsive interaction regime. \nTo support our argument we exemplarily showcase the time-averaged Von-Neumann entropy, defined as $\\bar{S}_{VN} = (1\/T)\\int_{0}^{T}dtS_{VN}(t)$ with $T$ being the considered evolution time, in Fig.~\\ref{fig:vn_ent}($e$) for varying post-quench repulsive ($g_{BF}>0$) and attractive ($g_{BF}<0$) impurity-medium interactions in the system containing the non-interacting impurities. \nAs it can be readily seen, irrespectively of the quench direction $\\bar{S}_{VN}$ increases monotonously with increasing magnitude of $g_{BF}$. 
\nHowever, it is also apparent that $\\bar{S}_{VN}$ is in general slightly larger for quenches to repulsive than to attractive interactions at a specific post-quench $\\abs{g_{BF}}$. \n\n\n\\section{Conclusions}\\label{conclusion} \n\nWe have unraveled the role of induced correlations and pattern formation in the ground state and the non-equilibrium quantum dynamics of two bosonic impurities embedded in a fermionic environment. \nThe one-dimensional Bose-Fermi mixture is harmonically trapped and the time-evolution is initiated upon considering a quench of the \nimpurity-medium coupling from a vanishing towards the repulsive or the attractive interaction regime. \nInspecting both one- and two-body observables enables us to expose correlation-induced phenomena mediated by the host, analyze the \ncompetition of induced interactions and direct $s$-wave ones, the emergent phase-separation processes and the underlying entanglement dynamics. \n\nReferring to the ground state of two non-interacting bosonic impurities it is shown that on the single-particle level they phase-separate \nwith the Fermi sea for strong repulsions and accumulate at the trap center together with their environment for large attractions, otherwise they are miscible. \nIn the system of two repulsively interacting impurities the boundaries of the aforementioned regions are shifted to larger interactions. \nImportantly, we identify the presence of induced impurity-impurity interactions mediated by the fermionic environment, in the system with non-interacting bosons, for either increasing impurity-medium repulsion or attraction. \nFor repulsively interacting impurities we elaborate on the competition of induced and direct interactions with the latter (former) dominating for \nrepulsive (attractive) impurity-medium couplings, evincing that the strength of induced interactions is larger for attractive impurity-bath interactions. \nInspecting the two-body correlation function of the Fermi sea we showcase that two fermions are likely to remain far apart (approach each other) for larger impurity-medium repulsions (attractions). \n\nWe trigger the dynamics by suddenly changing the impurity-medium interaction strength from zero to finite repulsive or attractive values. \nA quench to repulsive interactions induces in both components a collective breathing motion. \nThe impurities breathing frequency and amplitude depend on the post-quench coupling and their interacting nature. \nMoreover, a dynamical phase-separation occurs for quenches to large repulsions with the impurities residing at the origin and the \nfermionic environment splitting into two symmetric density branches with respect to the trap center. \nHere, two fermions are likely to lie one on the left and the other on the right density branch. \nInterestingly, induced impurity-impurity correlations mediated by the host are manifested in the course of the evolution of two \nnon-interacting impurities and become more pronounced for quenches to stronger repulsions. \nOn the other hand, monitoring the dynamics of repulsively interacting impurities we showcase the competition of induced and \ndirect interactions with the latter prevailing and enforcing the impurities to be in a two-body superposition. \nThe impact of induced interactions is also captured by the decaying amplitude in time of the impurities relative distance, which is \nclearly more prominent for non-interacting bosonic impurities. 
\n\nFor quenches to attractive impurity-medium interactions both components perform an overall breathing motion, whose amplitude and frequency regarding the impurities are impacted by the considered impurity-impurity and post-quench impurity-medium couplings. \nRemarkably, a beating pattern appears on the single-particle level stemming from the involvement of two nearly resonant breathing frequencies in the dynamics of the impurities due to the dominant nature of their attractive induced interactions. \nFurthermore, the impurities exhibit a spatial localization tendency around the trap center causing a density accumulation of the Fermi sea at their instantaneous location. \nThis mechanism becomes more pronounced for quenches to larger attractions and it is imprinted as a cross-like correlation pattern in the Fermi sea and dictates the dominant presence of attractive induced interactions whose strength is enhanced for quenches to larger attractions. \nIndeed they can even gradually surpass the direct impurity-impurity repulsive coupling, a result that is also evident by the prominent decaying amplitude of the impurities relative distance during the time-evolution. \n\nMoreover, by measuring the Von-Neumann entropy we explicate that in all cases the impurity-medium entanglement rises in a linear manner at the initial stages of the dynamics and afterwards it exhibits a fluctuating \nbehavior around a constant value. \nAlso, the entanglement exhibits a hierarchy by means that it is larger for fixed impurity (post-quench impurity-medium) interaction \nand increasing quench amplitude (impurity coupling). \n\nThere are several research directions that can be pursued in future endeavors. \nAn intriguing perspective is to study the robustness of the discussed phenomena in the presence of finite \ntemperature effects~\\cite{tajima2019thermal} and in particular their impact on the impurities induced interactions. \nCertainly, the generalization of our findings to higher dimensional settings as well as to larger impurity concentrations is desirable. \nAnother interesting direction would be to consider an additional long-range interparticle interaction potential~\\cite{kain2014polarons} \nand unravel the corresponding quench induced dynamics. \nThe emergent quasiparticle properties~\\cite{schmidt2018universal,mistakidis2019repulsive} such as the lifetime, residue, \neffective mass and induced interactions are of particular interest. \n\n\n\n\\begin{acknowledgments} \nK.M. acknowledges a research fellowship (Funding ID no 57381333) from the Deutscher Akademischer Austauschdienst (DAAD). \nS. I. M. gratefully acknowledges financial support in the framework of the Lenz-Ising Award of the University of Hamburg. \nP.S. is grateful for financial support by the Deutsche Forschungsgemeinschaft (DFG) in the framework \nof the SFB 925 ``Light induced dynamics and control of correlated quantum systems''. \n\\end{acknowledgments} \n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\subsubsection{Formulation}\n\nUnlike pulsed lasers, pulsation in optical microresonators requires neither active nor passive mode locking elements (e.g. modulators or saturable absorbers)\\cite{haus2000mode, kutz2006mode}. Rather, pulsed states arise naturally from a simple damped driven nonlinear Schroedinger equation known as the Lugiato-Lefever equation (LLE) \\cite{lugiato1987spatial, haelterman1992dissipative, matsko2011mode, coen2013modeling, chembo2013spatiotemporal}. 
Two categories of stable pulsed state solutions have been identified for the LLE: stable modulational instability (MI, also known as hyper-parametric oscillations or Turing rolls) and stable cavity solitons \\cite{matsko2012hard, erkintalo2014coherence}. Turing rolls arise from the intra-cavity equilibrium field through modulational instability of vacuum fluctuations and usually have multiple-FSR (free spectral range) spacing between their adjacent teeth, while the word soliton is used to refer to coherent combs with single-FSR spacing. Experimental and theoretical studies have suggested that soliton states are not accessible from the continuous wave (CW) intra-cavity field without seeding \\cite{taheri2015soliton, lobanov2015generation}, changing the pump frequency or power \\cite{matsko2012hard, lamont2013route, herr2014temporal}, or a suitable input pulse \\cite{leo2010temporal}. Owing to the low phase noise and exceedingly stable frequency spacing of the comb teeth in Turing rolls and solitons, chip-scale pure low-phase-noise radio frequency (RF) sources \\cite{liang2015high} and coherent communication with speeds in excess of 100 Gbit\/s per comb line have been demonstrated \\cite{pfeifle2014coherent, pfeifle2015optimally}.\\newline\n\\indent In addition to the generation of a frequency comb with equidistant teeth, temporal pulse generation requires mutual phase locking of the complex amplitudes. Phase locking in optical microresonators has been studied in terms of the cascaded emergence of phase-locked triplets \\cite{coillet2014robustness}. Injection locking of overlapping bunched combs has been explained using the Adler equation \\cite{del2014self}. Few-mode models have explained the phase offset between the pumped mode and the rest of the comb teeth \\cite{loh2014phase, taheri2015anatomy}. More recently, Wen \\emph{et al.} \\cite{wen2014self} have emphasized the link between oscillator synchronization---most famously described by the Kuramoto model \\cite{strogatz2000kuramoto}---and the onset of pulsing behavior. However, while stable ultrashort pulses have been demonstrated in a variety of microresonator platforms \\cite{herr2014temporal, saha2013modelocking, brasch2016photonic, vahala2015soliton}, the underlying phase locking mechanism is still unknown. As a result, features of microcomb phase spectra revealed in recent measurements \\cite{del2015phase} are yet not understood.\\newline\n\\indent In this paper, we introduce a reduced phase model which governs the nonlinear mode interactions responsible for spontaneous creation of pulsed states in the LLE which result from a balance between Kerr nonlinearity, dispersion (or, in the spatial case, diffraction), parametric gain, and cavity loss \\cite{grelu2012dissipative}. The model interactions are \\emph{ternary} (that is, they involve three-variable combinations) rather than binary, as in typical phase models. Our model admits attracting solutions which, interpreted in the context of nonlinear optical cavities, correspond to stable cavity solitons and Turing patterns, and provides an explanation of recent observations of phase steps in optical frequency combs. Moreover, our model clarifies the role of MI and chaos in the generation and stability of Turing rolls and solitons. 
\\newline\n\\indent The LLE is a nonlinear partial differential equation in time and the azimuthal angle around the whispering-gallery mode resonator \\cite{chembo2013spatiotemporal}, or, equivalently, in a slow and a fast time variable \\cite{haelterman1992dissipative,coen2013modeling}. Equivalently, a set of coupled nonlinear ordinary differential equations (ODEs) can be used to study resonator-based optical frequency comb generation \\cite{chembo2010modal}. The generalized spatiotemporal LLE in normalized form\n\\begin{equation}\\label{eq:LLE}\n\\pdifdisp{\\psi}{\\tau}=-(1+\\imi\\alpha)\\psi-\\imi \\frac{d_2}{2} \\frac{\\partial^2\\psi}{\\partial\\theta^2}+\\imi|\\psi|^2\\psi+F,\n\\end{equation}\nand its corresponding ODEs\n\\begin{equation}\\label{eq:CNODE}\n\\difdisp{\\tilde{a}_\\eta}{\\tau}=-(1+\\imi\\alpha)\\tilde{a}_\\eta+\\imi\\frac{d_2}{2}\\eta^2\\tilde{a}_\\eta+\\imi\\sum_{l,\\, m,\\, n}\\tilde{a}_l \\tilde{a}_m^*\\tilde{a}_n \\, \\delta_{\\eta_{lmn}\\eta}+\\tilde{F}_\\eta,\n\\end{equation} \nare Fourier transform pairs, the conjugate variables of the transform being the azimuthal angle around the resonator $\\theta$ and comb mode number $\\eta$, see the Supplemental Material (SM). The number of ODEs equals the number of modes comprising the frequency comb; each equation follows the temporal evolution of the complex amplitude (magnitude and phase) of a single mode. In the ODEs picture, each optical comb tooth can be thought of as a nonlinear oscillator coupled to other oscillators (comb teeth). In Eq.~(\\ref{eq:LLE}), $\\psi(\\theta, \\tau)$ is the normalized field envelope, $\\tau = t\\Delta\\omega_0\/2$ is the normalized time with $\\Delta\\omega_0$ the resonance linewidth for the cavity mode closest to the pump (the pumped resonance) and $t$ the laboratory time, $\\alpha = -2(\\omega_{\\mathrm{P}}-\\omega_0)\/\\Delta\\omega_0$ is the normalized detuning between the pump laser frequency $\\omega_{\\mathrm{P}}$ and the cold-cavity pumped resonance frequency $\\omega_0$, $d_2 = -2D_2\/\\Delta\\omega_0$ is the normalized second-order dispersion parameter, $D_2$ being the cavity second-order dispersion coefficient, and $F$ is the normalized pump amplitude. The field envelop $\\psi$ and the pump amplitude $F$ are normalized to the sideband generation threshold such that the comb generation threshold in Eq.~(\\ref{eq:LLE}) is equal to unity \\cite{chembo2010modal}. In Eq.~(\\ref{eq:CNODE}), $\\tilde{a}_\\eta=a_\\eta(\\tau) \\exp[\\imi\\phi_\\eta(\\tau)]$ is the complex-valued comb tooth amplitude for mode $\\eta$ with magnitude $a_\\eta(\\tau)$ and phase $\\phi_\\eta(\\tau)$, $\\tilde{F}_\\eta(\\tau)$ is the Fourier transform of $F$ and equals $\\delta_{0\\eta}F_{\\mathrm{P}}\\exp(\\imi\\phi_{\\mathrm{P}})$ for CW pumping, $\\delta_{pq}$ (for integers $p$ and $q$) is the Kronecker delta, and $\\eta_{lmn}=l-m+n$. All mode numbers $\\eta$ are define relative to the pumped mode. 
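Equation~(\\ref{eq:LLE}) lends itself to a split-step Fourier integration, in which the loss, detuning and dispersion act diagonally in mode space [they constitute the linear part of Eq.~(\\ref{eq:CNODE})] while the Kerr term and the pump are applied in $\\theta$ space. The following first-order sketch only illustrates this structure; the parameter values, grid, step size and noise-seeded initial condition are illustrative choices and not the settings used for the results discussed in this work.
\\begin{verbatim}
import numpy as np

# illustrative normalized parameters and grid; not the settings used in the text
alpha, d2, F = 2.0, -0.0124, 1.41
n_modes, dtau, n_steps = 512, 1.0e-3, 20000

eta = np.fft.fftfreq(n_modes, d=1.0 / n_modes)         # integer mode numbers
lin = -(1.0 + 1j * alpha) + 0.5j * d2 * eta**2         # loss, detuning, dispersion
half_step = np.exp(0.5 * dtau * lin)

# noise-seeded continuous-wave initial condition (placeholder)
psi = F + 1.0e-3 * (np.random.randn(n_modes) + 1j * np.random.randn(n_modes))
for _ in range(n_steps):
    psi = np.fft.ifft(half_step * np.fft.fft(psi))             # half linear step
    psi = psi * np.exp(1j * np.abs(psi)**2 * dtau) + F * dtau  # Kerr + pump
    psi = np.fft.ifft(half_step * np.fft.fft(psi))             # half linear step
# longer integration, smaller steps and convergence checks are needed in practice
\\end{verbatim}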
For a soliton, $\\mathord{\\eta\\in\\{0, \\pm 1, \\dots, \\pm N\\}}$ while for Turing rolls $\\mathord{\\eta\\in\\{0, \\pm\\mu, \\pm2\\mu, \\dots, \\pm N\\mu\\}}$, where the integer $\\mu \\geq 1$ is the mode number at which MI gain peaks.\\newline\n\\indent When driven by a CW pump, experiments and numerical simulations suggest that for stable solutions, the magnitude of the pumped mode is much larger than that of the other modes and that in the absence of third- and higher-order dispersion, the magnitude spectrum of these solutions are symmetric with respect to the pumped mode $\\eta=0$ \\cite{saha2013modelocking,herr2014temporal} (see, e.g., the inset curves $a_\\eta^2$ vs. mode number in Fig.~\\ref{fig:prelim:phasealigned}). Therefore, we exploit the symmetry of the magnitude spectrum, adopt a perturbative approach (with $a_\\eta$ for $\\eta\\ne0$ as the small perturbation parameters), and following Ref.~\\cite{wen2014self} simplify Eq.~(\\ref{eq:CNODE}) by keeping only terms with at least one contribution from the pumped mode $a_0$ in the triple summations \\cite{taheri2015anatomy}. The magnitude and phase equations for the pumped mode include no linear contributions from $a_{\\eta\\ne0}$ (corrections are proportional to $a_\\eta^2$, $\\eta\\ne0$), and their solutions settle on a fast time scale to the equilibrium intra-cavity field $\\psi_\\mathrm{e}=a_0\\exp(\\imi\\phi_0)$; subsequently, $a_0$ and $\\phi_0$ can be treated as constants (\\cite{taheri2015anatomy}, also see SM).\\newline\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[width=0.49\\textwidth]{phasealigned}\n\\caption[Phase alignment in (a) solitons and (b) Turing rolls.]{\\label{fig:prelim:phasealigned}Phase alignment in (a) solitons and (b) Turing rolls seen in the steady-state solutions of Eqs.~(\\ref{eq:LLE}) and (\\ref{eq:CNODE}). The inset curves in red (top corners) show the spatiotemporal waveforms and those in black (bottom corners) are the frequency spectra. For both solitons and rolls the phases lie on straights lines of arbitrary slope. Parameter values are (a) $\\alpha=2,\\, d_2=-0.0124,\\, F=1.41$, and (b) $\\alpha=0,\\, d_2=-0.0124,\\, F=1.63$. The phase profile has been unwrapped in (b).}\n\\end{figure}\n\\indent Equations of motion for the magnitudes $a_\\eta(\\tau)$ and phases $\\phi_\\eta(\\tau)$ are readily found from Eq.~(\\ref{eq:CNODE}). The equation for the temporal evolution of the centered phase averages $\\zeta_\\eta=\\bar{\\phi}_\\eta-\\phi_0$, where the phase average $\\bar{\\phi}_\\eta=(\\phi_\\eta+\\phi_{-\\eta})\/2$ is centered to the pumped mode phase $\\phi_0$, can be found using the equations for $\\phi_{\\pm\\eta}$ and $\\phi_0$. To lowest non-zero order in $a_{\\eta\\ne 0}$, this equation can be integrated directly to give\n\\begin{equation}\\label{eq:antisym}\n\\tan{\\zeta_\\eta}=\\sqrt{\\abs*{\\frac{C+2}{C}}}\\tanh[\\sqrt{|C(C+2)|}a_0^2(\\tau-\\tau_0)].\n\\end{equation}\nHere $C=d_2\\eta^2\/2a_0^2-F_\\mathrm{P}\\sin(\\phi_\\mathrm{P}-\\phi_0)\/a_0^3$, $\\phi_\\mathrm{P}$ and $F_\\mathrm{P}$ are the phase and normalized magnitude of the pump, and $\\tau_0$ accounts for constants of integration (or initial conditions). Equation~(\\ref{eq:antisym}) holds when $|2a_0^2-\\alpha+d_2\\eta^2\/2| 0$. The Jacobian matrix $\\bm{\\mathrm{J}}$ and its eigenvalues can be expressed in closed form for any $N$ (see SM). 
Except for one zero eigenvalue forced by the rotational symmetry of the LLE, all of the eigenvalues are negative and real, indicating asymptotic stability of the synchronized state. Figure~\\ref{fig:prelim:eigs}(a) shows the non-zero eigenvalues of the equilibrium for increasing comb span ($2N+1$). It is seen that the eigenvalue closest to zero grows more negative with increasing comb span. Hence, for the constant comb amplitude case, a wider comb demonstrates superior stability.\n\nTo investigate the effect of a non-constant comb amplitude profile, we set $\\mathord{a_\\eta\\propto\\exp(-k_0|\\eta|)}$. This profile assumes a linear decay (in logarithmic scale) of the comb teeth magnitude with slope $-20k_0$ dB per increasing mode number by unity, (see, e.g., the insets $a_\\eta^2$ vs. mode number in Fig.~\\ref{fig:prelim:phasealigned}). Though not analytically tractable, we find numerically that the eigenvalues of $\\bm{\\mathrm{J}}$ all have negative real part (except for the single zero eigenvalue forced by symmetry). Figure~\\ref{fig:prelim:eigs}(b) shows the eigenvalue spectrum vs. increasing combs span for $\\mathord{a_\\eta\\propto\\exp(-k_0|\\eta|)}$. Note that as the comb span increases, the smallest magnitude eigenvalue becomes bounded and almost independent of $N$ (black curve in Fig.~\\ref{fig:prelim:eigs}(b)). Therefore, the stability of the comb \\emph{does not improve}---nor does it degrade---with increasing comb span when the mode number dependence of the comb teeth magnitude is taken into account. Pfeifle \\emph{et al}. \\cite{pfeifle2015optimally} showed that in the presence of pump magnitude and frequency noise, solitons are less robust than Turing rolls in the same microresonator with comparable pump power. Our results suggest that the superior stability of Turing rolls does not originate from their smaller number of comb teeth compared to solitons. Rather, it is linked to the presence of MI gain, which is responsible for the generation of Turing rolls from vacuum fluctuations. We note that Eq.~(\\ref{eq:pl}) does not explicitly include the effect of MI gain; this influence is reflected through the coupling coefficients $K(l,\\eta)$.\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.48\\textwidth]{eigs}\n\\caption{\\label{fig:prelim:eigs}Non-zero eigenvalues of the equilibrium (the Jacobian matrix $\\bm{\\mathrm{J}}$) versus comb span for Eq.~(\\ref{eq:pl}) for (a) uniform and (b) mode-number--dependent comb teeth magnitude profile of $\\mathord{a_\\eta\\propto\\exp(-k_0|\\eta|)}$, ($k_0=0.1$). The negative eigenvalue of smallest magnitude (black curve) increases in size with increasing comb span for constant magnitudes, but reaches a constant for the realistic comb magnitude profile.}\n\\end{figure}\n\\begin{figure*}[htbp]\n\\centering\n\\includegraphics[width=0.7\\textwidth]{numsol}\n\\caption[Numerical solutions of Eq.~(\\ref{eq:pl}) for uniform as well as mode-number--dependent comb teeth magnitude spectra, and the emergence of phase steps.]{\\label{fig:prelim:numsol}Numerical solutions of Eq.~(\\ref{eq:pl}) for uniform and mode-number--dependent comb teeth magnitude spectra, and the emergence of $\\pi$ phase steps. (a) Sample steady-state solution of Eq.~(\\ref{eq:pl}) for a comb with 101 teeth, with uniform magnitudes and random initial conditions for the phase differences (shown in (e)). (b) The temporal evolution of the phase differences shown in (a). 
Most of the phase differences settle to integer multiples of $\\pi$, but some deviations may arise (dotted red circles). (c,d) Same as (a,b) but for comb magnitude profile of $\\mathord{a_\\eta\\propto\\exp(-k_0|\\eta|)}$, ($k_0=0.1$). All of the phase differences $\\Delta_\\eta$ settle to integer multiples of $\\pi$. In (a,c), only the $\\pi$ phase steps are physically significant. The $2\\pi$ steps have not been removed (e.g., through unwrapping the phases) to better illustrate the correspondence of (a,c) with (b,d). (e) The phase differences at the onset of integration (initial conditions) for (a-d). (f) Schematic illustrating $\\pi$ steps in the phase profile $\\phi_\\eta$ of a comb resulting from steps in the phase differences $\\Delta_\\eta$. Comb teeth symmetrically positioned around the pumped mode ($\\eta$ and $-\\eta$) will show $\\pi$ phase steps, with one phase increasing as its counterpart decreases. Such phase steps do not change the phase averages $\\bar{\\phi}_\\eta$.}\n\\end{figure*}\n\nWe consider next numerical solutions of Eq.~(\\ref{eq:pl}). Our numerous runs of numerical integration for different comb spans ($N$ from 5 to 1000) and random initial phase differences taken from a uniform distribution over the range $(-\\pi, \\pi]$ and for uniform comb magnitude spectrum, typically lead to $\\Delta_\\eta=k\\pi$, ($k$ an integer). While a steady-state is always obtained, other steady-state solutions are also possible. A mode-number-dependent comb magnitude spectrum, in contrast, always leads to phase differences equal to integer multiples of $\\pi$, even for those cases in which steady-state phase differences for uniform comb magnitude spectra are not equal to integer multiples of $\\pi$. The reason is that a non-uniform comb magnitude profile places more strict constraints on the steady-state solution of Eq.~(\\ref{eq:pl}). In Figs.~\\ref{fig:prelim:numsol}(a-d), we show sample solutions found by numerically integrating Eq.~(\\ref{eq:pl}) for $N=50$ phase differences (a comb with $2N+1=101$ teeth) for constant comb teeth magnitudes (Fig.~\\ref{fig:prelim:numsol}(a,b)) and the non-constant magnitude profile of $\\mathord{a_\\eta\\propto\\exp(-k_0|\\eta|)}$ (Fig.~\\ref{fig:prelim:numsol}(c,d)). We show the steady-state solutions $\\Delta_\\eta$ at the end of the simulation time vs. mode number as well as the the evolution of the phase differences with time. The initial values of $\\Delta_\\eta$, $\\mathord{\\eta\\in\\{1,2,\u2026,50\\}}$, in both cases is shown in Fig.~\\ref{fig:prelim:numsol}(e). While most of the steady-state phase differences in Fig.~\\ref{fig:prelim:numsol}(a) are integer multiples of $\\pi$, some of them deviate from these values (the dotted red circles). For the non-uniform magnitude spectrum, however, steady-state phase differences are all integer multiples of $\\pi$, as seen in Fig.~\\ref{fig:prelim:numsol}(c). \\newline\n\\indent The $\\pi$ phase steps in the phase differences $\\Delta_\\eta$ imply similar steps in the phases $\\phi_\\eta$. To show this, we assume a set of solutions $\\Delta_\\eta=s_0\\eta$ is known and try to find another set based on it. It may seem that any constant $x$ can be added to the phases of comb teeth symmetrically positioned with respect to the pumped mode (i.e., $\\phi_{\\pm\\eta}\\to\\phi_{\\pm\\eta}+x$) without affecting the solution. Unfortunately, this alters the phase averages $\\bar{\\phi}_\\eta$ and so invalidates the stability analysis presented earlier. 
However, we can generate new stable phase-locked solutions by considering anti-symmetric changes of the phases, i.e., $\\phi_\\eta\\to\\phi_\\eta\\pm x$ and $\\phi_{-\\eta}\\to\\phi_{-\\eta}\\mp x$, which means $2\\Delta_\\eta=\\phi_\\eta-\\phi_{-\\eta}\\pm 2x=2s_0\\eta +2k\\pi$, and hence $x=\\pm \\pi$ (recall that $k$ is an integer). This demonstrates that the appearance of $\\pi$ steps in the phase spectrum of stable phase-locked frequency combs is permissible, as shown schematically in Fig.~\\ref{fig:prelim:numsol}(f). Indeed, such phase steps have been observed experimentally \\cite{del2015phase} (see, e.g., Fig.~\\ref{fig:prelim:steps}(a)) but have, to the best of our knowledge, remained unexplained until now. \\newline\n\\indent Besides $\\pi$ phase steps, $\\pi\/2$ phase steps \\cite{del2015phase} have also been observed in experiments. These phase steps also can be explained within the framework of our model, as follows. A comb with $\\pi\/2$ phase steps is in fact two interleaved \\emph{non-interacting} combs, each of which has $\\pi$ steps in its phase spectrum \\cite{del2015phase}. These two combs do not interact as a result of the $\\pi\/2$ offset between their phase spectra because this phase offset causes the coupling coefficients between their comb teeth, $K(l, \\eta)$, to vanish. To clarify this point, we refer to Fig.~\\ref{fig:prelim:steps}(c,d) where comb teeth labeled with $\\mathord{\\eta_\\mathrm{A}\\in\\{0,2,4,6,\u2026,14\\}}$ (red) share a constant phase, while those with mode numbers $\\mathord{\\eta_\\mathrm{B}\\in\\{1,3,5,\u2026,13\\}}$ (blue) share another phase, different from that of the former group by $\\pi\/2$, such that $\\zeta_1=\\bar{\\phi}_1-\\phi_0=\\pi\/2$ and $\\zeta_{\\eta_\\mathrm{A}}-\\zeta_{\\eta_\\mathrm{B}}=\\pi\/2$ (recall that the value of $\\zeta_{\\eta}$ is independent of the mode number for a stable comb, cf. Fig.~\\ref{fig:prelim:numsol}(f)). As a result, the coupling coefficient $K(\\eta, \\eta+1)$ is zero because $\\zeta_{\\eta+1}\\pm\\zeta_{\\eta}=\\pm\\pi\/2$, and so there is no coupling between modes $\\eta$ and $\\eta+1$ of the comb for any $\\eta$ (see Eq.~(\\ref{eq:pl})). It is worth noting that the frequency combs with phase steps in \\cite{del2015phase} were obtained through tuning the laser pump into resonance, which alters the MI gain profile and sweeps its peak.\\newline\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[width=0.49\\textwidth]{steps}\n\\caption[Experimental data showing steps in the measured phase spectra of optical frequency combs.]{\\label{fig:prelim:steps}Experimental data showing steps in the measured phase spectra of optical frequency combs. (a,c) are the phase spectra while (b,d) depict the power profiles of the combs. (a,b) $\\pi$-steps in the phase spectrum of a stable comb; The green dot corresponds to the pumped mode phase. (c,d) $\\pi$\/2-steps in the phase spectrum of a stable comb; This comb is recognized as two interleaved combs (red and blue) with phases offset by $\\pi\/2$, each exhibiting $\\pi$-phase steps as well (cf. Fig.~\\ref{fig:prelim:numsol}(f)). The $\\pi\/2$-phase offset leads to the decoupling of the two combs, indicated by vanishing coupling coefficients, i.e., $K(l, \\eta)=0$ in Eq.~(\\ref{eq:pl}). The green dots correspond to the phases of the stronger comb teeth. 
(These plots are reproduced using data originally presented in~\\cite{del2015phase}.)}\n\\end{figure}\nIt has been argued that passing through the chaotic state is necessary for microcomb soliton formation \\cite{lamont2013route}. The foregoing analysis suggests a way of understanding this: passage through the chaotic state serves to provide the system with a large pool of initial conditions, which increases the odds of getting peaks that will then grow into solitons. Numerical simulations of Eq.~(\\ref{eq:pl}) suggest that with increasing comb span and non-uniform comb magnitude spectrum, chances of getting groups of phase-locked comb teeth, or weak pulses, will increase. These weak pulses will then grow into the modes of the nonlinear system (i.e., the solitons). \\newline\n\\indent Finally, although we have compared our theoretical results with microresonator-based frequency comb experiments, they should also apply to mode-locked laser systems. In 2002, Gordon and Fisher developed a many-body statistical mechanical theory to describe the onset of laser pulsations as a first order phase transition, treating the modes as the elementary degrees of freedom \\cite{gordon2002phase}. Their ordered collective state is analogous to our synchronized dynamical attractor. Now, Eq.~(\\ref{eq:pl}) roots in the cubic nonlinear term in the LLE, and the same nonlinearity appears in the master equation for passive mode locking based on a saturable absorber, which approximates the absorber with a cubic nonlinearity \\cite{haus2000mode}. We therefore expect the same dynamical mechanism to be responsible for the creation of sharp pulses in passively mode-locked lasers, despite the different physical source of optical gain (population inversion and stimulated emission rather than parametric amplification). What matters is the fundamental link between spatiotemporal pulse formation and mode synchronization. \\newline\n\\indent The generic reduced nonlinear oscillator model introduced in this work clearly demonstrates the fundamental link between mode synchronization and spatiotemporal pulse formation in Kerr-nonlinear media. This model admits attracting fixed point solutions corresponding to stable cavity solitons and Turing patterns, permits analyzing their stability in a unified scheme, and explains phase jumps observed in recent phase measurements of stable optical frequency combs. It also provides insight into the role of chaos and parametric gain in the generation of solitons and Turing rolls. This insight can be utilized towards devising novel techniques for controlled formation of robust pulses in optical microresonators.\n\\begin{acknowledgments}\nK.W. and H.T. thank Brian Kennedy for useful discussions. K.W. also thanks Henry Wen and Steve Strogatz for generously discussing the details of their results reported in \\cite{wen2014self}. H.T. 
was supported by the Air Force Office of Scientific Research Grant No.~2106DKP.\n\\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzceua b/data_all_eng_slimpj/shuffled/split2/finalzzceua new file mode 100644 index 0000000000000000000000000000000000000000..dc3db6519597669efd35715409e53f8197f6b7e4 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzceua @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nIn the last years a focused ion beam (FIB) of Ga$^+$-ions for etching\n\\cite{mat91} has attracted the attention of the community as an\nalternative and flexible method to produce micro- and nanostructures\nof materials, especially where the use of conventional methods\nappears to be limited. This FIB technique has been successfully used\nfor nanostructuring different materials like magnetic and\nsuperconducting \\cite{gie05,tru,tse05} or more recently to study the\nconduction behavior in metallic constrictions \\cite{fer08}. One of\nthe advantages of this technique is its versatility; the use of any\nresist appears, a priory, unnecessary. Nowadays, FIB devices are also\nused for deposition of metallic or insulating materials with the help\nof the same Ga$^+$-ions. These ions induce a decomposition of a\nchemical metal precursor over the surface in question, a technique\ncalled ion-beam induced or assisted deposition (IBID,IBAD)\n\\cite{mel87,mat96,lan02}. The main advantage of IBID\/EBID is the\ndeposition of the desired patterns of a material without the need of\na mask or a pre-structured pattern using optical or e-beam\nlithography (EBL). Also the possibility to modify only parts of the\npatterns in electronic devices within nanometer dimensions is other\nof the FIB advantages.\n\nThe modification of different properties of different materials\n has been studied in the past, as e.g. in\n magnetic La$_{0.7}$Sr$_{0.3}$MnO$_3$ thin films \\cite{pal08} or\n Co\/Pt multilayers \\cite{hyn01}. However, a fundamental\n problem of FIB was less discussed in\nliterature, namely the modification of the sample near surface region\nand to a certain extent also its interior and their influence in the\ntransport properties by the use of Ga$^+$ions of energies up to\n30~keV and fluences below $10^{12}$~cm$^{-2}$ in usual devices. We\nnote that before cutting or depositing material, the use of FIB\nrequires the precise alignment of the Ga-beam and this is done taking\na picture of the region in question irradiating it with the same\nGa$^+$-ions. Depending on the surface properties of the material in\nquestion, in general Ga$^+$ fluences $\\gtrsim 10^{11}~$cm$^{-2}$ are\nused. As we demonstrate below, these Ga$^+$-fluences necessary for\nthe first alignment may already affect seriously the intrinsic\nproperties of the material of interest and can lead to wrong\ninterpretations of the effects that a reduction of the sample\ngeometry may produce.\n\nThe influence of Ga irradiation during the FIB preparation processes\nwas not yet quantitatively studied, neither in situ nor after\nirradiation for fluences below $10^{12}$~cm$^{-2}$. Specially when\nmaterials are selected to investigate their transport properties\nwhile their size is being reduced, care should be taken since the\nelectrical transport can be sensitive to lattice defects as well as\nto the produced Ga contamination. 
The aim of this paper is to report\non the changes in the electrical resistivity measured in-situ and its\ntemperature $T$ and magnetic field $B$ dependence after irradiation\nof three different thin film materials to show the influence of the\nGa irradiation used in FIB devices and at fluences as low as the ones\nused for beam alignment. The experiments were realized in two stages.\nIn the first stage we have done in-situ measurements of the\nelectrical resistance during irradiation and in the second stage the\nresistance was measured as a function of $T$ and $B$ to compare with\nthe corresponding virgin states. Using graphite as a test material\nbecause its electrical resistance is extremely sensitive to lattice\ndefects \\cite{arn09}, we provide in this work a possible solution\nthat can be used to strongly diminish the effects due to the\nGa$^+$-irradiation on different materials.\n\n\\section{Experimental Details} \\label{details}\n\n\n\\begin{table}[b]\n \\caption{Samples used, their dimensions (total\n length $\\times$ width $\\times$ thickness) and the Ga$^+$-fluence\n irradiated on the samples. The fluence number in brackets refers to the total\n fluence after the second irradiation. The corresponding data of the Co-sample \\#2 is\n given in the Table. The Co\\#1 sample had a thickness of\n 57~nm. Other dimensions can be taken from Fig.~\\protect\\ref{samples_2}(c).}\n \\begin{tabular}[htbp]{@{}llll@{}}\n \\hline\n Sample & graphite & La$_{0.7}$Ca$_{0.3}$MnO$_3$ & Co\\#2 \\\\\n \\hline\n Dimensions ($\\mu$m)& $11\\times 2 \\times 0.015$ & $52 \\times 7.3 \\times 0.035$ & $18 \\times 0.9 \\times 0.022$ \\\\\n Fluence ($10^{11}$cm$^{-2}$) & 5 (10) & $\\geqslant 2.2$ & $\\geqslant 2.2$ \\\\\n\\hline\n \\end{tabular}\n \\label{table}\n\\end{table}\n\n\nWe have used the FIB capabilities of a FEI NanoLab XT 200 Dual Beam\nmicroscope (DBM). The acceleration voltage was fixed in all our\nexperiments to 30~kV. The ion current and the area to be irradiated\nwere changed in order to obtain different Ga$^+$-fluences, see\nTable~\\ref{table}. The selected samples were a crystalline graphite\nflake prepared by a rubbing and ultrasonic process and\npre-characterized with electron backscattering diffraction (EBSD) and\nRaman scattering \\cite{bar08,arn09}, a La$_{0.7}$Ca$_{0.3}$MnO$_3$\n(LCMO) thin film prepared by pulsed laser deposition (PLD) and\nmicro-structured by EBL and a wet etching process \\cite{barpat}, and\nthermally evaporated Co thin films (\\#1 and \\#2) previously structured\nby EBL, see Fig.~\\ref{samples_2}(a-c) and Table~\\ref{table}.\n\nLow-noise four-wire resistance measurements (two for the input\ncurrent and two for the voltage measurement, important to eliminate\ncontributions of the lead resistance) have been performed with the AC\ntechnique (Linear Research LR-700 Bridge with an 8-channel LR-720\nmultiplexer) with ppm resolution and in some cases also with the DC\ntechnique (Keithley 2182 with 2001 Nanovoltmeter and Keithley 6221\ncurrent source).\n\nThe Au\/Pt lead contacts used for all samples were prepared by an EBL\nprocess using the e-beam resist PMMA 950K of $\\sim 200~$nm thickness.\nThe lithography process was done with the Raith ELPHY Plus system\nincluded in our microscope. The Au\/Pt deposition of the contact\nelectrodes was done by evaporation in a high-vacuum chamber with a\nnominal thickness of 25 and 9~nm, respectively. 
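The fluences listed in Table~\\ref{table} are set by the beam current $I$, the exposure time $t$ and the scanned area $A$ through $\\Phi=I\\,t\/(eA)$ for singly charged Ga$^+$-ions. A minimal sketch of this bookkeeping is given below; the numerical beam parameters are illustrative examples and not the actual settings of our irradiation runs.
\\begin{verbatim}
E_CHARGE = 1.602e-19   # C per singly charged Ga+ ion

def fluence_cm2(current_A, time_s, area_um2):
    # ion fluence (ions / cm^2) = I * t / (e * A)
    ions = current_A * time_s / E_CHARGE
    return ions / (area_um2 * 1.0e-8)   # 1 um^2 = 1e-8 cm^2

# illustrative example: 10 pA scanned for 0.1 s over a 50 um x 50 um window
print(fluence_cm2(10e-12, 0.1, 2500.0))   # ~2.5e11 ions / cm^2
\\end{verbatim}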
The in-situ\nresistance measurements performed before and during the irradiation\nof the sample were done using a self-made sample holder fixed inside\nthe microscope chamber. The temperature and magnetic field dependence\nmeasurements were performed using a commercial cryostat with a\ntemperature stability of 0.1~mK at 100~K. The magnetic field\ngenerated by a superconducting solenoid was always applied normal to\nthe sample and input current.\n\nTo avoid or diminish irradiation effects we protected the graphite\nsample with a negative electron beam resist (AR-N 7500) of thickness\nof 300~nm. In order to test the effectiveness of the resist film to\navoid contamination during irradiation, part of the graphite flake\nwas covered with the above mentioned resist in an additional process\nafter the Au\/Pt leads were deposited on the sample, see\nFig.~\\ref{samples_2}(a). This resin protects the graphite sample in\nthe region of the three upper electrodes, allowing us to compare the\nchange in voltage in the unprotected, protected and intermediate\nregions.\n\nThe advantages of this resist is that it allows us the patterning\nby EBL in the desired shape, it is sufficiently robust in the used\n temperature range and it is a very bad electrical\nconductor. The penetration depth of the Ga$^+$-ions in this resist as\nwell as in the samples and the distribution of the density of\nvacancies as a function of sample depth were estimated taking into\naccount their density and using Monte Carlo simulations given by the\nstopping and range of ions in matter (SRIM) \\cite{ziegler,Ziegler2}\ntaking into account the energy of the Ga$^+$-ions, see\nFig.~\\ref{SRIM}.\n\n\n\n\\begin{figure}[htb]%\n\\includegraphics[width=\\textwidth]{samples_22.eps}\n\\caption{Scanning electron microscope images of: (a) the graphite\nflake (dashed line denotes its borders) with the six Au\/Pt contacts.\n(b) The LCMO film with the two inner voltages electrodes and one of\nthe input current electrodes. (c) The Co microwire \\#1 with\nelectrodes at different positions. The irradiation has been made in\nthe whole region and the electrical resistance was measured between\nthe third and second electrodes from right.} \\label{samples_2}\n\\end{figure}\n\n\n\n\n\n\n\\section{Results and discussion}\\label{results}\n\n\\subsection{In-situ transport measurements}\n\n\\begin{figure}[htb]%\n\\includegraphics[width=0.9\\textwidth]{Fig2.eps}\n\\caption{Resistance as a function of time before, during and after\nGa$^+$ irradiation inside the FIB chamber for the samples (a)\ncrystalline graphite flake (first irradiation with a fluence of $5\n\\times 10^{11}$~cm$^{-2}$, (b) LCMO film and (c) Co film \\#1. For\nthese two samples the used fluences are written in the figures.\nAll these measurements were done in-situ and at room temperature.\n} \\label{rgra}\n\\end{figure}\n\n\\begin{figure}[htb]%\n\\includegraphics[width=0.6\\textwidth]{SRIM.eps}\n\\caption{(a) Qualitative distribution of implanted Ga ions as a\nfunction of penetration depth into the sample for the three studied\nmaterials. (b) Similarly for the vacancy density but taking into\naccount fixed fluences. 
The curves were obtained using the stopping and range of ions in matter (SRIM or IIS) simulations \\protect\\cite{ziegler,Ziegler2}, taking into account the energy of the Ga$^+$-ions and the material densities.} \\label{SRIM}\n\\end{figure}\n\nA detailed study of the electrical resistance of the above mentioned materials was carried out {\\em in-situ} during the Ga$^+$-ion irradiation in the microscope chamber. In the case of the graphite sample we simultaneously measured the resistance of the covered and uncovered parts before, during and after irradiation. Figure~\\ref{rgra} shows the changes observed in this sample during and after the first irradiation with a fluence of $5 \\times 10^{11}~$cm$^{-2}$. This fluence nominally produces a vacancy concentration of $10^3$~ppm inside the sample, whereas the implanted Ga concentration is less than 1~ppm, see Fig.~\\ref{SRIM}. The disorder produced by the irradiation increases the resistance of the uncovered part of this sample by a factor $> 4$.\n\nThe resistance of the covered part remains unchanged within a relative change of $10^{-4}$, indicating that the 300~nm thick resist was enough to stop the Ga$^+$-ions, as expected, since according to the SRIM calculations the maximal penetration of the Ga$^+$-ions in the resist should be $\\simeq 75~$nm. Immediately after irradiation the resistance of the uncovered part starts to decay exponentially with two characteristic relaxation times, as has also been observed after proton irradiation at room temperature \\cite{arn09}. This relaxation is observed for all samples just after the irradiation finishes. The decay is related to local thermal relaxation processes and to the diffusion of carbon interstitials and vacancies \\cite{niw95,lee05_vac}.\n\nQualitatively similar resistance changes during the irradiation process are also observed in the two other samples, see Fig.~\\ref{rgra}(b,c). In the case of the LCMO sample the maximal penetration of the ions is $\\sim 45~$nm, see Fig.~\\ref{SRIM}(a). Because the thickness of this sample is 35~nm, part of the Ga$^+$-ions are expected to be implanted and the rest to go through the sample, generating a considerable amount of defects, as can be seen in the calculated curves, see Fig.~\\ref{SRIM}(b). In the case of the Co wire \\#1 the maximal ion penetration is $\\sim 30~$nm, smaller than the 57~nm thickness, see Fig.~\\ref{SRIM}; therefore the produced defects plus the Ga implantation are responsible for the relatively small increase in the electrical resistance, see Fig.~\\ref{rgra}(c). To study the influence on the transport properties of mainly the defects produced by the Ga$^+$ irradiation, a second, thinner Co sample (\\#2) has been studied and its results are discussed below.\n\nThe effect of the Ga irradiation on sample volume expansion (thickness swelling) as well as on milling (thickness reduction) depends on the ion fluence, the ion energy and the target material. As an example we refer to the work done in Ref.~\\cite{pal08}, where such studies were performed on La$_{0.7}$Sr$_{0.3}$MnO$_3$ thin films.
According to that work, and taking into account the fluences used here, any thickness increase as well as any milling is completely negligible and does not affect the changes measured in the resistance.\n\n\\subsection{Temperature and magnetic field dependence before and after irradiation}\n\n1.{\\em Graphite}: The special lattice structure of graphite and the weak coupling between graphene layers make graphite a quasi-2D semimetal, whose carrier density depends strongly on lattice defects such as vacancies and\/or impurities. In a recent work it was demonstrated that the electrical resistance of thin graphite crystals of micrometer size changes after inducing less than 1~ppm vacancy concentration by ion irradiation \\cite{arn09}. This makes graphite an extraordinary sensor for testing the efficiency of the resin cover as well as for showing the dramatic changes produced by a relatively weak Ga$^+$-ion irradiation fluence.\n\n\\begin{figure}[htb]%\n\\includegraphics[width=0.50\\textwidth]{Fig3.eps}\n\\caption{Normalized resistance as a function of temperature for the crystalline graphite flake in the (1) as-prepared state, (2) after the first ($5 \\times 10^{11}~$cm$^{-2}$) and (3) after the second ($1 \\times 10^{12}~$cm$^{-2}$) Ga$^+$ irradiation.} \\label{rtgra}\n\\end{figure}\n\nFigure~\\ref{rtgra} shows the temperature dependence of the graphite sample in its three states, as prepared and after the two irradiations. In the as-prepared state the temperature dependence shows a semiconducting behavior above 50~K and a metallic one below. The semiconducting behavior is mainly due to the increase in carrier concentration with temperature, because most of the carriers are thermally activated and the Fermi energy increases linearly with temperature \\cite{arn09}. The metallic behavior below 50~K is not intrinsic to graphite but comes from internal interfaces between crystalline regions parallel to the graphene layers but of slightly different orientation~\\cite{bar08}. The mentioned interfaces originate during the pyrolysis process \\cite{interfaces}.\n\nThe first irradiation increased the resistance over the whole temperature range without strongly changing its relative temperature dependence, see Fig.~\\ref{rtgra}. The metallic part was shifted to below 25~K and its $T$-dependence became weaker, suggesting that the irradiation also affected the properties of the internal interface(s). After the second irradiation the metallic region vanishes completely and the resistance decreases roughly linearly with $T$, see Fig.~\\ref{rtgra}. We note that the absolute value of the resistance is roughly proportional to the applied fluence: between the virgin state and the first irradiation the resistance increases by a factor of 3.5, so doubling the fluence we expect an increase by a factor of $\\sim 7$ with respect to the as-received state, in agreement with the experimental observation.\n\nAs shown in \\cite{arn09}, the increase in resistance is due to the decrease in the mean free path, which overwhelms the increase in the carrier density that these irradiations produce inside the graphite structure. This behavior is related to the different weights the carrier density $n$ and the mean free path $l$ have in the 2D resistivity, i.e. $R \\propto 1\/(n^{1\/2}l)$, in contrast with the 3D expression $R \\propto 1\/(nl)$; a simple numerical illustration is given below. Note also that the second irradiation already produces a vacancy density that implies a vacancy distance of less than 2~nm in the graphene plane.
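\n\nAs a simple numerical illustration of this point (with purely illustrative factors, not extracted from the present measurements): if an irradiation step were to multiply the carrier density by $4$ while reducing the mean free path by a factor of $8$, the 2D resistance $R \\propto 1\/(n^{1\/2}l)$ would increase by a factor $8\/\\sqrt{4}=4$, whereas the 3D expression $R \\propto 1\/(nl)$ would only give a factor $8\/4=2$. The weaker dependence on $n$ in two dimensions is the reason why the reduction of the mean free path dominates the measured resistance.\n\n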
At distances smaller than $\\sim\n3~$nm we do not expect a large increase in the carrier density with\nfurther irradiation \\cite{arn09}.\n\n\\begin{figure}[htb]%\n\\includegraphics[width=0.95\\textwidth]{Fig4.eps}\n\\caption{Magnetoresistance as a function of field applied normal to\nthe graphene planes for the sample in its virgin state (a), after the\nfirst (b) and second (c) irradiation. } \\label{mrgra}\n\\end{figure}\n\n\n\n\n\n\nFigure~\\ref{mrgra} shows the magnetoresistance (MR) vs. applied field\nat different constant temperatures for the graphite sample in the\nvirgin (a), first (b) and second (c) irradiated state. The MR in the\nvirgin state agrees with previous reports \\cite{bar08}. Note that the\nMR shows a quasi linear field behavior at low temperatures. Also\nanomalous is the systematic increase of the MR below 100~K. This is\nrelated to the decrease in the resistance with decreasing\ntemperature, see Fig.~\\ref{rtgra}. The MR reaches a value of 16 at\n8~T and 2~K. After the first irradiation the MR decreased a factor 16\nand remains practically temperature independent. After the second\nirradiation the MR decreased further a factor of 6 and shows a\nsimilar temperature independence. The data reveal that the\nShubnikov-de Haas (SdH) oscillations increase their amplitude and\nstart to be measurable at lower fields after the first irradiation\n\\cite{arn09}. This behavior is related to the increase in the carrier\ndensity due to the creation of defects, whereas the decrease in the\nMR is due to the decrease in the mean free path.\n\n\\begin{figure}[htb]%\n\\includegraphics[width=0.7\\textwidth]{Fig4b.eps}\n\\caption{First derivative of the resistance on field vs. inverse\nfield for the first (continuous line) and second (dotted line)\nirradiation. The data have been multiplied by a factor in order to\nshow both derivatives in the same $y-$axis scale.} \\label{sdh}\n\\end{figure}\n\nNote that after the second irradiation the SdH oscillations do not\nchange qualitatively (their absolute amplitude changes due to the\nlarge change of the MR), see Fig.~\\ref{sdh}. After the second\nirradiation we observe the first low-field oscillation at a similar\nfield and a similar oscillation period in reciprocal field as after\nthe first irradiation, see Fig.~\\ref{sdh}. These results indicate\nthat no further change in the carrier density has been produced for a\nvacancy distance less than 2~nm. Taking into account that 3~nm is of\nthe order of the range of modification of the electronic structure\nproduced by, e.g. a single vacancy \\cite{rufi00}, then a saturation\nof the carrier density is reached decreasing the vacancy distance\nbelow $\\sim 3~$nm but keeping the graphene structure. As expected,\nthe covered part of the sample does not show any change after the\nfirst or second irradiation, as can be seen in Fig.~\\ref{MRcovered}.\n\n\n\n\\begin{figure}[htb]%\n\\includegraphics[width=0.6\\textwidth]{MRC.eps}\n\\caption{Magnetoresistance of the upper covered part of the graphite\nflake, see Fig.~\\ref{samples_2}(a), at $T = 4~$K and in the virgin\nstate (continuous line) and after the second Ga$^+$ irradiation\n($\\circ$).} \\label{MRcovered}\n\\end{figure}\n\n\n\n\\begin{figure}[htb]%\n\\includegraphics[width=0.9\\textwidth]{Fig5.eps}\n\\caption{(a) Temperature dependence of the normalized resistance\nof the LCMO thin film before and after irradiation. The inset\nshows the same data but in a double logarithmic scale. (b) The MR\nvs. applied field for the as-prepared state. 
} \\label{tlaca}\n\\end{figure}\n\n\n\\begin{figure}[htb]%\n\\includegraphics[width=0.9\\textwidth]{Fig6.eps}\n\\caption{(a) The MR of the LCMO film after irradiation. Compare\nthese results with those in Fig.~\\protect\\ref{tlaca}(b) and note\nthe induced changes in the MR by the irradiation. (b)\nMagnetoresistance as a function of field at 75~K before and after\nirradiation in a reduced field range to show the clear increase in\ncoercivity. This increase is observed at all temperatures in the\nferromagnetic state.} \\label{hlaca}\n\\end{figure}\n2.{\\em LaCaMnO film}: The manganite sample undergoes a paramagnetic\ninsulator to ferromagnetic metal transition leading to a sharp peak\nin the resistance near its Curie temperature as shown in\nFig.~\\ref{tlaca}(a). For our sample this peak is observed at $T_c =\n106~$K. Interestingly, after irradiation the temperature dependence\ndoes change only in the ferromagnetic state of the sample, which\nshows now a $T_c = 95~$K, see Fig.~\\ref{tlaca}(a). The measured MR of\nthis sample in the as-prepared state agrees with published literature\nand shows hysteretic behavior in the ferromagnetic state whereas no\nhysteresis in the paramagnetic state above $T_c$, see\nFig.~\\ref{tlaca}(b).\n\nThe influence of ion irradiation on the magnetic and transport\nproperties of manganites were studied in the past but mainly at much\nhigher ion-energies, see e.g.\nRefs.~\\cite{str97,wol01,cha05,ram09,pal08}. In general at fluences\nabove $10^{12}~$cm$^{-2}$ the ion irradiation decreases the\nmetal-insulator transition temperature and the magnetic hysteresis\ngets broader reflecting the increase of the pinning of the domain\nwalls by the induced defects. A similar behavior is observed in our\nsample, see Fig.~\\ref{hlaca}(a). Figure~\\ref{hlaca}(b) shows in\ndetail the MR between -2~T and 2~T at 75~K. The irradiation induces\nan increase in the coercivity $H_c$, from 0.1~T to 0.42~T after\nirradiation, defined at the maxima of the MR, as well as in the\noverall hysteresis width.\n\nThese results indicate that care should be taken when changes in\nthe transport properties of ferromagnetic oxides are observed\nafter microstructuring the samples with FIB. The observed results\nafter patterning might not be due to the change of the sample\ndimension but to the induced structural changes by the Ga$^+$\nirradiation.\n\n\n\n\\begin{figure}[htb]%\n\\includegraphics[width=0.6\\textwidth]{Fig7.eps}\n\\caption{Scanning electron microscope image of the Co sample ~\\#2\nwith its four electrodes for resistance measurement. The thickness\nof the wire was 22~nm.} \\label{Co}\n\\end{figure}\n\n\n3.{\\em Co film}: Figure~\\ref{Co} shows a SEM picture of the\nCo-sample \\#2 with the four electrodes for the resistance\nmeasurement. The resistance as a function of temperature and\nmagnetic field before and after a Ga$^+$ irradiation is shown in\nFig.~\\ref{Co-T-B}(a) and (b). Due to the smaller thickness of this\nsample the irradiation at 30~kV produces mainly defects instead of\ndoping since the Ga$^+$ ions stop beyond the sample thickness,\ni.e. inside the substrate. The influence of the induced defects in\nthe Co sample can be clearly recognized by the increase in the\nabsolute value of the resistance. For example at 275~K the\nresistance of the Co-sample~\\#2 increases from $36~\\Omega$ to\n$175~\\Omega$ flattening the temperature dependence, see\nFig.~\\ref{Co-T-B}(a). 
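\n\nSuch a large increase together with a flatter $R(T)$ is what one would qualitatively expect from Matthiessen's rule, $\\rho(T)\\simeq\\rho_{\\mathrm{def}}+\\rho_{\\mathrm{ph}}(T)$, if the irradiation adds a large, essentially temperature-independent defect contribution $\\rho_{\\mathrm{def}}$ to the phonon term $\\rho_{\\mathrm{ph}}(T)$. We stress that this is only a qualitative consistency check and not an analysis performed on the present data.\n\n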
The MR shown in Fig.~\\ref{Co-T-B}(b)\nindicates a clear change in the hysteresis indicating a change in\npinning of the domain walls. These results are qualitatively\nsimilar to those obtained for the manganite shown above, see\nFig.~\\ref{hlaca}.\n\n\n\n\n\\begin{figure}[htb]%\n\\includegraphics[width=0.9\\textwidth]{Fig8.eps}\n\\caption{(a) The normalized resistance of the Co~\\#2 wire as a\nfunction of temperature for the as-prepared and after irradiation\nstates. (b) MR of this sample for the two states at 4~K. The\nGa$^+$-irradiation fluence was $2.2 \\times 10^{11}~$cm$^{-2}$.\nNote the change in the MR at the coercivity field.} \\label{Co-T-B}\n\\end{figure}\n\n\n\\section{Conclusion}\\label{con}\n\nIn conclusion our studies indicate clearly that care should be taken\nwith the change of the intrinsic properties of the materials when FIB\ndevices are used for patterning, cutting or just for deposit other\nmaterials for contacts, for example. Our work demonstrates that\nalready the usual Ga$^+$ fluences needed for a precise alignment of\nthe Ga$^+$ beam before really using it, induce relevant changes in\nthe transport properties of the three different materials studied\nhere. Using a thin crystalline graphite flake we were able to\ndemonstrate also that covering the sample with a sufficiently thick\nresist film one can avoid the irradiation damage completely. This\nindicates that in principle one can use this technique to protect\ncertain parts and produce defined changes in other parts of the\nsample of interest. This method might be used to induce changes in\nthe hysteretic properties of ferromagnetic micro and nano-structures\nor in the electronic density in graphite or multigraphene in specific\nparts of the sample, for example.\n\n\n\n\n\n\\ack This work has been possible with the support of the DFG grant\nDFG ES86\/16-1. The authors S.D. and G.B. are supported by the\nGraduate school BuildMona and the Collaborative Research Center (SFB\n762) ``Functionality of Oxide Interfaces'', respectively.\n\n\n\n\n\\bibliographystyle{unsrt}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nLoop quantum gravity (for lecture books, see \\cite{rovelli2007quantum,thiemann2007modern,Gambini:2011zz}) proposes a non-perturbative and background independent framework for quantum gravity. Based on a 3+1 splitting of space-time distinguishing time from the 3d space, it realizes a canonical quantization of the general relativity reformulated as a gauge field theory. The canonically conjugate fields are the triad, defining the local 3d frame, and the Ashtekar-Barbero $\\mathrm{SU}(2)$ connection \\cite{PhysRevLett.57.2244,Barbero:1994ap,Immirzi:1996di}. Quantum states of geometry, called spin network states, define the excitations of those fields over the kinematical Ashtekar-Lewandowski vacuum of vanishing triad and connection (corresponding to the ``nothing''-state of a degenerate vanishing metric). 
Their dynamics is implemented through the Hamiltonian constraints, ensuring the invariance of the theory under space-time diffeomorphisms.\nSeveral explicit proposals for the quantum dynamics exist, either in the pure canonical formalism from the original Thiemann definition of the Hamiltonian constraint operator \\cite{Thiemann:1996ay,Thiemann:1996aw,Thiemann:1996av} to more recent constructions \\cite{Thiemann:2003zv,Alesci:2011aj,Alesci:2015wla,Assanioussi:2015gka} (for a review see \\cite{Bonzom:2011jv}), or in a covariant path integral approach with transition amplitudes defined from spinfoam models as in the EPRL model \\cite{Engle:2007wy,Geloun:2010vj,rovelli2014quantum} and variations \\cite{Dupuis:2011fz,Speziale:2012nu,Wieland:2013cr} (for a review of the spinfoam framework see \\cite{Livine:2010zx,Perez:2012wv,Bianchi:2012nk}).\n\nThe main challenges in this context are the correct definition of the quantum dynamics and the coarse-graining of the theory. These two issues are intimately intertwined in that the proper quantum gravity dynamics should address and solve the perturbative non-renormalisability of general relativity, and more generally because a consistent theory of quantum gravity should give us the effective dynamics of the geometry at all scale of length and energy and provide us with a flow from the probably discrete and quantum dynamics at the Planck scale to the classical dynamics of a classical space-time manifold prescribed by general relativity.\nThe main difficulty in realizing this program in the loop quantum gravity framework is to proceed to a coarse-graining of the gravitational degrees of freedom in a background independent theory with no a priori length or energy scale and no a priori regular geometrical background on which to define a coarse-graining procedure.\nThis translates into the problem of defining new vacuum states representing non-degenerate metrics and geometries (such as the flat Minkowski space-time) and working out how to expand the quantum gravity theory, initially defined above the ``nothing''-state, around them with an explicit dictionary between the fundamental geometry excited states and the new effective gravity excitations.\n\nSome progress in this direction has been achieved by Koslowski and Sahlmann in \\cite{Koslowski:2011vn}, where they define new vacuum states for loop quantum gravity peaked on some classical field configuration for the triad and connection and describe the spin network excitations above them. Another approach by Dittrich and Geiller was to define another vacuum state, given as the flat connection vacuum of a topological BF theory, and to work out a Hilbert space better suited to account for curvature defects \\cite{Dittrich:2014wpa,Bahr:2015bra}. Another line of research in the spinfoam framework is the promising renormalisation program for tensor models and group field theories \\cite{Oriti:2013aqa,Rivasseau:2011hm,Rivasseau:2013uca,Carrozza:2013mna,Carrozza:2014rya}, but this deviates from the canonical point of view that we will pursue in the present paper. Indeed, we would like to propose another path to define the effective loop quantum gravity dynamics on non-trivial background quantum geometries following the previous work on the coarse-graining of spin network states \\cite{Livine:2006xk,Livine:2013gna}.\n\n\\medskip\n\nIn loop quantum gravity, quantum states of geometry are wave-functions of the Ashtekar-Barbero connection. 
More precisely, they are defined as cylindrical functionals of that connection, in that they realize a finite sampling of the connection. Indeed each wave-function is constructed with respect to a graph $\\Gamma$ (embedded in the canonical hypersurface) and depends on the holonomies of the connection along the graph edges (defined as $\\mathrm{SU}(2)$ group elements). The set of wave-functions is obtained by taking the union over all embedded graphs with the requirement of cylindrical consistency\\footnotemark, that is a wave-function initially defined over a given graph $\\Gamma$ is considered as equivalently defined over any refinement of that graph (i.e. any graph containing $\\Gamma$ as a subgraph). This union quotiented by the cylindrical consistency is rigorously defined as a projective limit \\cite{Ashtekar:1994mh,Ashtekar:1994wa}. It leads to the Hilbert space of quantum states of geometry of loop quantum gravity and has been shown to be the $L^{2}$ space of functionals of the connection with respect to the Ashtekar-Lewandowski measure \\cite{Ashtekar:1993wf}.\n\\footnotetext{The definition of cylindrical consistency can be generalized and extended to a mapping or identification between two wave-functions living on a given graph and a refinement of it. Formulated as such, it becomes equivalent to the choice of a coarse-graining procedure for the quantum states of geometry. This logic has been used to construct new Hilbert spaces for loop quantum states describing the excited states of geometry above non-trivial vacua \\cite{Koslowski:2011vn}.\n}\n\n\n\n\\begin{figure}\n\n\n\\begin{subfigure}[t]{.3\\linewidth}\n\\begin{tikzpicture}[scale=1]\n\\coordinate(A) at (0,0);\n\\coordinate(B) at (2,0);\n\\coordinate(C) at (2,-2);\n\\coordinate(D) at (0,-2);\n\n\\draw (A) to[bend left] node[midway,sloped]{$>$} node[midway,above]{$\\ell_1$} (B);\n\\draw (B) to[bend left] node[midway,sloped]{$>$} node[midway,right]{$\\ell_2$} (C);\n\\draw (C) to[bend left] node[midway,sloped]{$>$} node[midway,above]{$\\ell_3$} (D);\n\\draw (D) to[bend right] node[midway,sloped]{$>$} node[midway,left]{$\\ell_4$} (A);\n\n\\draw (A) to[bend left] node[midway,sloped]{$>$} ++(-1,0.5) node[above]{$j_1$};\n\\draw (B) to[bend right] node[midway,sloped]{$>$} ++(1,0.5) node[above]{$j_2$};\n\\draw (C) to[bend right] node[midway,sloped]{$>$} ++(1,-0.5) node[right]{$j_3$};\n\\draw (D) to[bend left] node[midway,sloped]{$>$} ++(-1,-0.5) node[left]{$j_4$};\n\n\\draw (A) node {$\\bullet$} node[above]{$i_1$};\n\\draw (B) node {$\\bullet$} node[above]{$i_2$};\n\\draw (C) node {$\\bullet$} node[right]{$i_3$};\n\\draw (D) node {$\\bullet$} node[left]{$i_4$};\n\\end{tikzpicture}\n\\caption{Spin networks are graphs coloured with spins at the edges and intertwiners at the vertices.}\n\\label{fig:spinnetwork_a}\n\\end{subfigure}\n\\hspace{2mm}\n\\begin{subfigure}[t]{.3\\linewidth}\n\\begin{tikzpicture}[scale=1]\n\\coordinate (O) at (0,0,0);\n\n\\coordinate (A) at (0,1.061,0);\n\\coordinate (B) at (0,-0.354,1);\n\\coordinate (C) at (-0.866,-0.354,-0.5);\n\\coordinate (D) at (0.866,-0.354,-0.5);\n\n\\draw[blue] (A) -- (B);\n\\draw[blue] (A) -- (C);\n\\draw[blue] (A) -- (D);\n\\draw[blue] (B) -- (C);\n\\draw[dashed,blue] (C) -- (D);\n\\draw[blue] (D) -- (B);\n\n\\draw[dotted] (O) -- ++(0,-0.531,0);\n\\draw (0,-0.531,0) -- ++(0,-0.531,0);\n\n\\draw[dotted] (O) -- ++(0,0.177,-0.5);\n\\draw[dashed] (0,0.177,-0.5) -- ++(0,0.177,-0.5);\n\n\\draw[dotted] (O) -- ++(0.433,0.177,0.25);\n\\draw (0.433,0.177,0.25) -- 
++(0.433,0.177,0.25);\n\n\\draw[dotted] (O) -- ++(-0.433,0.177,0.25);\n\\draw (-0.433,0.177,0.25) -- ++(-0.433,0.177,0.25);\n\n\\draw (O) node{$\\bullet$};\n\\draw[blue] (A) node{$\\bullet$};\n\\draw[blue] (B) node{$\\bullet$};\n\\draw[blue] (C) node{$\\bullet$};\n\\draw[blue] (D) node{$\\bullet$};\n\\end{tikzpicture}\n\n\\caption{Each node of a spin network can naturally be interpreted as a polyhedron (in blue on the figure), each face corresponding to an edge of the graph.}\n\\label{fig:spinnetwork_b}\n\\end{subfigure}\n\\hspace{2mm}\n\\begin{subfigure}[t]{.3\\linewidth}\n\\begin{tikzpicture}[scale=1]\n\\coordinate (A1) at (0,0,-2);\n\\coordinate (B1) at (0,1,0);\n\\coordinate (C1) at (-0.866,-0.5,0);\n\\coordinate (D1) at (0.866,-0.5,0);\n\n\\fill[blue!50] (B1) -- (C1) -- (D1) -- cycle;\n\\fill[blue!50] (B1) -- (D1) -- (A1) -- cycle;\n\n\\draw[blue] (A1) -- (B1);\n\\draw[blue,dashed] (A1) -- (C1);\n\\draw[blue] (A1) -- (D1);\n\\draw[blue] (B1) -- (C1);\n\\draw[blue] (C1) -- (D1);\n\\draw[blue] (D1) -- (B1);\n\n\n\\draw[blue] (A1) node{$\\bullet$};\n\\draw[blue] (B1) node{$\\bullet$};\n\\draw[blue] (C1) node{$\\bullet$};\n\\draw[blue] (D1) node{$\\bullet$};\n\n\\coordinate (A2) at (0,0,0.5);\n\\coordinate (B2) at (0,-1,0);\n\\coordinate (C2) at (-0.866,0.5,0);\n\\coordinate (D2) at (0.866,0.5,0);\n\n\\fill[red!50] (A2) -- (B2) -- (D2) -- cycle;\n\\fill[red!50] (A2) -- (C2) -- (D2) -- cycle;\n\\fill[red!50] (A2) -- (B2) -- (C2) -- cycle;\n\n\\draw[red] (A2) -- (B2);\n\\draw[red] (A2) -- (C2);\n\\draw[red] (A2) -- (D2);\n\\draw[red] (B2) -- (C2);\n\\draw[red] (C2) -- (D2);\n\\draw[red] (D2) -- (B2);\n\n\\draw[red] (A2) node{$\\bullet$};\n\\draw[red] (B2) node{$\\bullet$};\n\\draw[red] (C2) node{$\\bullet$};\n\\draw[red] (D2) node{$\\bullet$};\n\\end{tikzpicture}\n\n\\caption{Spin network geometry is not Regge geometry since the shapes of the faces of the glued polyhedra do not have to match.}\n\\label{fig:spinnetwork_c}\n\\end{subfigure}\n\n\n\\caption{The geometrical interpretation of spin networks}\n\\label{fig:spinnetwork}\n\\end{figure}\n\n\n\nSpin networks provide basis states of this Hilbert space. They are introduced as diagonalizing the area and volume operators, both shown to have discrete spectra \\cite{Rovelli:1994ge,Rovelli:1995ac}. A spin network state, as drawn on fig.\\ref{fig:spinnetwork_a}, lives on a given graph $\\Gamma$ (and therefore lives on any refinement of that graph by cylindrical consistency) and is defined by spins (as half-integers determining an irreducible representation of the Lie group $\\mathrm{SU}(2)$) on each edge and intertwiners (as singlet states) on each vertex or node of the graph. Spins define quanta of area while intertwiners give the quanta of volume. This hints towards a geometrical interpretation of spin networks as discrete geometries. This can be realized explicitly and spin networks have been shown to be the quantization of twisted geometries \\cite{Freidel:2010aq,Dupuis:2012yw}, which generalize 3d Regge geometries to account for some torsion.\nNodes of the graph represent polyhedra, which are glued along faces dual to the graph edges, as illustrated on fig.\\ref{fig:spinnetwork_b}.\nThe precise face matching of Regge geometries is relaxed to a simpler area face matching, and the resulting potential shape mismatch is interpreted as torsion along the graph edge, as depicted on fig.\\ref{fig:spinnetwork_c}. 
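\n\nTo fix ideas, let us recall the standard result that the spin $j_{e}$ carried by an edge $e$ determines the eigenvalue of the area operator for the dual surface,\n\\begin{equation}\nA_{e}\\,\\propto\\,\\ell_{\\mathrm{Pl}}^{2}\\,\\sqrt{j_{e}(j_{e}+1)}\\,,\n\\end{equation}\nup to a numerical factor involving the Immirzi parameter \\cite{Rovelli:1994ge}. This is the precise sense in which the spins define the quanta of area mentioned above.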
\n\nThe Hamiltonian operators of loop quantum gravity then act on both algebraic and combinatorial data, meaning that they modify both the spins and intertwiners living on a given graph and the graph itself. Thus the loop quantum gravity (LQG) dynamics appears to be a careful balance between fixed-graph dynamics and graph-changing dynamics, which reflects the classical dynamics of general relativity as both a dynamics on a given space-time manifold and a dynamics of the space-time geometry itself. This mixture of the two types of dynamics renders analytical and numerical calculations extremely difficult.\nThe usual strategy for discrete systems on fixed graphs, as in condensed matter theory, is to coarse-grain the theory, that is, to integrate out the microscopic degrees of freedom inside bounded regions, which are thus assimilated to points, and to write effective theories for the relevant macroscopic degrees of freedom. This process of coarse-graining ultimately leads to the continuum limit of the theory.\nOne can also study the statistical physics of a varying graph, for instance using matrix models for 2d quantum gravity. Putting these two ingredients together, for example to study matter coupled to 2d quantum gravity through condensed matter models on random lattices, is much more involved. Results in this direction, like the KPZ conjecture \\cite{Knizhnik:1988ak}, mostly rely on conformal field theory techniques in the continuum limit.\n\n\\medskip\n\nIn the loop quantum gravity context, through the logic of coarse-graining, we would like to map the dynamics on varying graphs onto a fixed-graph dynamics.\nThe rationale behind this is the following: starting from a base graph, each node will correspond to a varying coarse-grained region. This means that we will add some extra internal degrees of freedom to the graph vertices in the effective theory, which should reflect the fact that each vertex actually represents a possibly varying subgraph itself. This method should mimic an expansion around this base graph, considered as a skeleton graph for the gravity excitations.\nDigging deeper into the structure of the spin networks, the wave-functions carry non-trivial curvature around the loops of the graph (when the composition of the holonomies along the edges of a loop does not yield the identity).\nWhen coarse-graining, curvature should build up and the effective coarse-grained vertices should naturally describe locally curved geometries.\nAs illustrated in fig.\\ref{fig:newstructure}, we will need additional data at each vertex to encode the coarse-grained curvature now located at the vertices.\nWe will realize this program using the coarse-graining through gauge-fixing procedure already developed in \\cite{Freidel:2002xb,Livine:2013gna}. The idea is that the gauge-fixing procedure collapses a subgraph into a single vertex plus some additional (self-)loops connecting that vertex to itself. These self-loops carry the curvature initially carried by the loops of the subgraph. In this picture, coarse-graining a spin network state thus leads to a state living on a coarser graph with many little loops living at each vertex.\n\nReversing the logic of this procedure, we propose to fix a background graph and define the Fock space of ``loopy spin networks'' above that graph. The base states will be spin network states living on the background graph itself, while excitations will be spin networks living on the graph plus an arbitrary number of little loops attached to its vertices.
These little loops allow to represent excitations of the curvature and take into account that each vertex of the background graph is in fact a coarse structure which could be unfolded into a non-trivial, possibly complex, subgraph. Taking into account these little loops as localized excitations of the quantum geometry allow to project the graph changing LQG dynamics onto a fixed background graph but with dynamical curvature excitations living at each vertex.\nIn some sense, the underlying full graph is still dynamical and changes, but we always coarse-grain it to the same skeleton graph plus some little loops. The dynamics then affect the little loops without touching the skeleton graph and of course change the algebraic data -spin and intertwiners- carried by the edges and vertices of the graph.\nThis explicitly realizes the idea proposed in the conclusion of \\cite{Livine:2013gna}.\n\nSo we are here proposing an expansion of the theory around an arbitrary non-trivial background graph, in some sense truncating the dynamics by a coarse-graining procedure to keep the quantum states living on that chosen background graph with some localized excitations of the geometry. This is rather different from the expansion around a continuous background metric, proposed up to now in loop quantum gravity as in \\cite{Koslowski:2011vn}, and it will be necessary to later compare these two approaches in a continuum limit of spin networks.\n\n\n\n\n\n\\begin{figure}\n\\centering\n\n\\begin{tikzpicture}[scale=0.85]\n\\coordinate(A) at (0,0);\n\\coordinate(B) at (2,0);\n\\coordinate(C) at (2,-2);\n\\coordinate(D) at (0,-2);\n\n\\draw (A) -- (B) node[midway,above] {$\\ell_1$};\n\\draw (B) -- (C) node[midway,right] {$\\ell_2$};\n\\draw (C) -- (D) node[midway,below] {$\\ell_3$};\n\\draw (D) -- (A) node[midway,left] {$\\ell_4$};\n\\draw (B) -- (D) node[midway,sloped,above] {$\\ell_5$};\n\n\\draw (A) -- ++(-1,0.5) node[above] {$j_1$};\n\\draw (B) -- ++(1,0.5) node[above] {$j_2$};\n\\draw (C) -- ++(1,-0.5) node[below] {$j_3$};\n\\draw (D) -- ++(-1,-0.5) node[below] {$j_4$};\n\n\\draw (A) node {$\\bullet$} node[above]{$i_1$};\n\\draw (B) node {$\\bullet$} node[above]{$i_2$};\n\\draw (C) node {$\\bullet$} node[below]{$i_3$};\n\\draw (D) node {$\\bullet$} node[below]{$i_4$};\n\n\\draw[gray,dashed] (1,-1) circle(2);\n\n\\draw[->,>=stealth,very thick] (3.5,-1) -- (5.5,-1);\n\n\\coordinate(O) at (8,-1);\n\\draw (O) -- ++(-2,1) node[above] {$j_1$};\n\\draw (O) -- ++(2,1) node[above] {$j_2$};\n\\draw (O) -- ++(2,-1) node[below] {$j_3$};\n\\draw (O) -- ++(-2,-1) node[below] {$j_4$};\n\n\\draw[fill=lightgray] (O) circle(0.5);\n\\draw[scale=2] (O) node{\\textbf{?}};\n\n\\end{tikzpicture}\n\n\\caption{When coarse-graining, the curvature carried by the loops in the collapsed bounded region leads to curvature. Coarse-grained vertices thus need a new structure to be described.}\n\\label{fig:newstructure}\n\\end{figure}\n\n\\medskip\n\nThis paper is organized as follows.\nThe first section will review the Hilbert space of spin networks for loop quantum gravity, underlining the cylindrical consistency requirement, and the ``coarse-graining through gauge-fixing'' of these quantum states of geometry. 
From this perspective, we will introduce a hierarchy of possible extensions of spin networks encoding extra information at the graph nodes: folded, loopy and tagged spin networks, from the finer to coarser objects.\nFolded spin networks allow for an arbitrary number of little (self-)loops at every node of the graph and moreover contain the data of a circuit at each node, that is a tree linking the little loop ends to the edges attached to the node. Loopy spin networks forget about the local circuit data, while tagged spin networks simply trace out all the little loop data and only retains the resulting closure defect at each node.\n\nThe second section is dedicated to the definition and investigation of loopy spin networks. We discuss the holonomy operator, which is the elementary brick of the loop quantum gravity formalism, and check the compatibility of our definition with the cylindrical consistency conditions. We apply this construction to the topological BF theory and solve for the physical state of flat connections in our new Hilbert of loopy spin networks.\nWe define the BF Hamiltonian constraints as holonomy constraints peaking the group elements along the little loops on the identity. We show that these are (unexpectedly) not enough to enforce the uniqueness of the physical state and lead to an infinite-dimensional space of (almost)-flat states defined from higher derivatives of the $\\delta$-distribution. This is due to the highly non-trivial structure of the intertwiner space recoupling the loops. We supplement these constraints with new Laplacian constraints, which decouple the loop and trivialize the intertwiner, finally leading to a unique flat state defined by the $\\delta$-distribution.\n\nIn a third section, we explore the possibility of endowing the little loops at the graph's nodes with bosonic statistics and define the symmetrized Fock space of loopy spin networks. We discuss the interplay between loop creation, annihilation and spin shift in the definition of the holonomy operator. This leads us to define the Hamiltonian constraints for BF theory in terms of creation and annihilation operators.\n\nThe fourth section develops tagged spin networks and shows how they provide a basis for reduced density matrix when coarse-graining spin networks.\nWe conclude this paper with a discussion on the potential applications of this new framework, for instance using fixed skeleton graph as background lattices or to the coarse-graining of loop quantum gravity dynamics.\n\n\\section{Spin networks and their coarse-graining}\n\\label{sec:overview}\n\n\\subsection{Spin networks as projective limits}\n\\label{ProjectiveLimits}\n\nLoop quantum gravity is based on the first order reformulation of general relativity in terms of the Ashtekar-Barbero variables\\cite{PhysRevLett.57.2244}. The fundamental variables of the theory on the canonical hypersurface are the densitized triad and the Ashtekar-Barbero connection\\footnotemark, which are endowed with the following symplectic structure:\n\\begin{equation}\n\\{E^a_i(x),A^j_b(y)\\} = \\frac{\\beta \\kappa}{2} \\delta^a_b \\delta^j_i \\delta(x-y)\\,,\n\\end{equation}\nwith all other brackets being zero.\nThe indices $a,b,c,..$ denote space coordinates while the indices $i,j,k,...$ refer to tangent space coordinates.\nThe canonical fields are the densitized triad $E^a_i(x)$ and the Ashtekar-Barbero $\\mathrm{SU}(2)$ connection $A^j_b(x)$. 
The Poisson backet coupling is given in terms of the gravitational constant $\\kappa = 16\\pi G$ and the Immirzi parameter $\\beta$ \\cite{Barbero:1994ap,Immirzi:1996di,Holst:1995pc}. \n\\footnotetext{The Ashtekar-Barbero connection is only a space connection defined on the canonical hypersurface and is not generically the pull-back of a space-time connection \\cite{Samuel:2000ue,Alexandrov:2001wt,Geiller:2011cv,Geiller:2011bh,Charles:2015rda}, except in the case of the self-dual and anti-self dual connections given by the purely imaginary choice of Immirzi paramater $\\beta=\\pm i$.} \nThe classical theory is defined by imposing on this phase space a set of seven first class constraints: the three Gauss constraints generating the local $\\mathrm{SU}(2)$ gauge invariance and the four constraints generating space-time diffeomorphisms. \n\n\n\\begin{figure}[t!]\n\\centering\n\\begin{tikzpicture}\n\\coordinate(A) at (0,0);\n\\coordinate(B) at (2,0);\n\\coordinate(C) at (2,-2);\n\\coordinate(D) at (0,-2);\n\n\\draw (A) to[bend left] node[midway,sloped]{$>$} node[midway,above]{$g_e$} node[midway,below]{e} (B);\n\\draw[lightgray] (B) to[bend left] node[midway,sloped]{$>$} (C);\n\\draw[lightgray] (C) to[bend left] node[midway,sloped]{$>$} (D);\n\\draw[lightgray] (D) to[bend right] node[midway,sloped]{$>$} (A);\n\n\\draw[lightgray] (A) to[bend left] node[midway,sloped]{$>$} ++(-1,0.5);\n\\draw[lightgray] (B) to[bend right] node[midway,sloped]{$>$} ++(1,0.5);\n\\draw[lightgray] (C) to[bend right] node[midway,sloped]{$>$} ++(1,-0.5);\n\\draw[lightgray] (D) to[bend left] node[midway,sloped]{$>$} ++(-1,-0.5);\n\n\\draw (A) node {$\\bullet$} node[above]{$s(e)$};\n\\draw (B) node {$\\bullet$} node[above]{$t(e)$};\n\\draw[lightgray] (C) node {$\\bullet$};\n\\draw[lightgray] (D) node {$\\bullet$};\n\\end{tikzpicture}\n\\caption{We consider cylindrical wavefunctions: the function only depends on a support graph with oriented edges. Each edge $e$ carries a colouring $g_e$ which is a group element and corresponds to the (open) holonomy along the corresponding path.}\n\\label{fig:graph}\n\\end{figure}\nAt the quantum level, we consider cylindrical wave-functions of the Ashtekar-Barbero connection $A$. We choose an arbitrary oriented graph $\\Gamma$ embedded in the canonical hypersurface and consider functions of the holonomies of the connection along the links or edges $e$ of the graph, as illustrated in fig.\\ref{fig:graph}:\n\\begin{equation}\n\\psi_{\\Gamma}[A]\n\\,\\equiv\\,\n\\psi\\left(\n\\{U_{e}[A]\\}_{e\\in\\Gamma}\n\\right)\\,,\n\\qquad\nU_{e}[A]\\in\\mathrm{SU}(2)\\,.\n\\end{equation} \nSuch functionals realize a finite sampling of the connection along the considered graph. We require these functionals to solve the Gauss law, that is to be invariant under local $\\mathrm{SU}(2)$ gauge transformations. These acts at the end points of the holonomies, that is at the nodes or vertices $v$ of the graph $\\Gamma$:\n\\begin{equation}\n\\label{gaugeinv}\n\\forall h_{v}\\in\\mathrm{SU}(2)^{\\times V}\\,,\n\\quad\n\\psi\\left(\n\\{U_{e}[A]\\}_{e\\in\\Gamma}\n\\right)\n\\,=\\,\n\\psi\\left(\n\\{h_{s(e)}^{-1}\\,U_{e}[A]\\,h_{t(e)}\\}_{e\\in\\Gamma}\n\\right)\\,,\n\\end{equation}\nwhere $V$ is the number of vertices of the graph $\\Gamma$, while $s(e)$ and $t(e)$ respectively denote the source and target vertices of the oriented edge $e$. 
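\n\nAs a direct consequence of the transformation law (\\ref{gaugeinv}), the holonomy around an oriented closed loop ${\\mathcal L}=e_{1}\\circ\\dots\\circ e_{n}$ based at a vertex $v$ transforms covariantly,\n\\begin{equation}\nG_{{\\mathcal L}}\\,=\\,g_{e_{1}}g_{e_{2}}\\dots g_{e_{n}}\n\\quad\\longrightarrow\\quad\nh_{v}^{-1}\\,G_{{\\mathcal L}}\\,h_{v}\\,,\n\\end{equation}\nso that its trace, the Wilson loop, is gauge-invariant. This elementary remark, which we spell out here for later convenience, is the reason why the gauge-invariant curvature information carried by a cylindrical function is attached to the loops of its support graph.\n\n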
The Hilbert space of states of the fixed graph $\\Gamma$ is defined by endowing this set of wave-functions with the natural scalar product induced by the Haar measure on $\\mathrm{SU}(2)$:\n\\begin{equation}\n{\\mathcal H}_{\\Gamma}\\,\\equiv\\,L^{2}\\left(\\mathrm{SU}(2)^{E}\/\\mathrm{SU}(2)^{V}\\right)\\,,\\nonumber\n\\end{equation}\n\\begin{equation}\n\\forall \\Psi\\,,\\widetilde{\\Psi}\\,\\in\\,{\\mathcal H}_{\\Gamma}\\,,\\quad\n\\langle \\Psi | \\widetilde{\\Psi} \\rangle\n\\,=\\,\n\\int_{\\mathrm{SU}(2)^{E}} \\prod_{e=1}^{E} dg_e\\,\\,\n\\overline{\\Psi(g_1,...,g_{E})} \\widetilde{\\Psi}(g_1,...,g_{E})\\,,\n\\end{equation}\nwhere $E$ counts the number of edges in the graph. A basis of this space is provided by the spin networks with support on the graph $\\Gamma$. Technically, these are obtained through the Peter-Weyl decomposition of $L^{2}$ functions on the Lie group $\\mathrm{SU}(2)$ in terms of the orthogonal Wigner matrices in the irreducible representations of $\\mathrm{SU}(2)$. As a result, a spin network state is labeled by a spin on each edge, which is a half-integer $j_{e}\\in{\\mathbb N}\/2$ determining the corresponding irreducible $\\mathrm{SU}(2)$-representation ${\\mathcal V}^{j_{e}}$ of dimension $(2j_{e}+1)$, and an intertwiner $i_{v}$ at each vertex $v$, which is an invariant tensor in the tensor product of the representations living on the incoming and outgoing edges attached to the vertex $v$:\n\\begin{equation}\ni_{v}:\\bigotimes_{e|s(e)=v}{\\mathcal V}^{j_{e}}\\longrightarrow\\bigotimes_{e|t(e)=v}{\\mathcal V}^{j_{e}}\\,.\n\\end{equation}\nThe spin network function is defined by contracting the chosen intertwiners with the Wigner matrices of the holonomies living along the graph edges:\n\\begin{eqnarray}\n\\psi_{\\Gamma}^{\\{j_{e,i_{v}}\\}}\n\\Big{(}\\{g_{e}\\}_{e\\in\\Gamma}\\Big{)}\n&=&\n{\\mathrm{Tr}}\\,\\left[\\bigotimes_{e} D^{j_{e}}(g_{e}) \\otimes \\bigotimes_{v} i_{v}\\right]\n\\nonumber\\\\\n&=&\n\\prod_{e}\\langle j_{e}m_{e}^{s}|\\,g_{e}\\,|j_{e}m_{e}^{t}\\rangle\\,\n\\prod_{v}\\langle \\otimes_{e|t(e)=v}j_{e}m_{e}^{t}|\\,i_{v\\,}|\\otimes_{e|s(e)=v}j_{e}m_{e}^{s}\\rangle\\,,\n\\end{eqnarray}\nwith an implicit sum over all the $m_{e}^{s,t}$ indices, where we have introduced the usual spin basis $|j,m\\rangle$ of the Hilbert space ${\\mathcal V}^{j}$ with the index $m$ running from $-j$ to $+j$ by integer steps.\nThat spin network functional is automatically gauge-invariant due to the $\\mathrm{SU}(2)$-invariance of the intertwiners at each vertex. Intertwiners $i_{v}$ at the vertex give the volume excitations, thus representing chunks of volume dual to each vertex, while the spin $j_{e}$ living on the edge $e$ linking two vertices give the area quanta of the (quantum) surface boundary between the corresponding two chunks of space.\nThis endows spin networks with a natural interpretation as discrete quantum geometries.\n\n\n\nFrom here, the loop quantum gravity programme proceeds in two steps. First, one sums over all possible graphs $\\Gamma$ imposing cylindrical consistency. This yields the kinematical Hilbert space of spin network states. Second, one imposes the Hamiltonian constraints generating the space-time diffeomorphisms at the quantum level to define the physical Hilbert space of loop quantum gravity.\n\n\\medskip\n\nIndeed, in order to consider the full space of connections and not just its finite sampling on a fixed graph $\\Gamma$, we will consider all possible graphs and sums of cylindrical functions over different graphs. 
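\n\nBefore doing so, let us give the simplest illustration of the previous construction, which will also be the elementary building block of the loopy spin networks introduced later: a graph made of a single vertex with one little loop attached to it, carrying a group element $g$. Gauge invariance then requires $\\psi(g)=\\psi(h^{-1}gh)$ for all $h\\in\\mathrm{SU}(2)$, so that $\\psi$ is a class function, and the spin network basis reduces to the characters\n\\begin{equation}\n\\chi_{j}(g)\\,=\\,{\\mathrm{Tr}}\\,D^{j}(g)\\,,\n\\end{equation}\nreflecting the fact that the intertwiner space ${\\mathrm{Inv}}\\left({\\mathcal V}^{j}\\otimes{\\mathcal V}^{j}\\right)$ at the single vertex is one-dimensional. We now come back to the issue of considering all possible graphs at once.\n\n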
To this purpose, one needs to compare wave-functions with support on different graphs, take their sum and scalar product. This is achieved through requiring cylindrical consistency. A function $\\psi$ on a graph $\\Gamma$ is considered as equivalent to another function $\\widetilde{\\psi}$ defined on a larger graph $\\widetilde{\\Gamma}$, containing $\\Gamma$ as a subgraph, if the finer function does not depend on the group elements living on the extra edges and coincides with the original coarser function $\\psi$ on the subgraph:\n\\begin{equation}\n\\Gamma\\subset\\widetilde{\\Gamma},\\qquad\n\\psi_{\\Gamma}\\,\\sim\\,\\widetilde{\\psi}_{\\widetilde{\\Gamma}}\n\\quad\\Leftrightarrow\\quad\n\\forall g_{e}\\in\\mathrm{SU}(2)\\,\\quad \\widetilde{\\psi}(\\{g_{e}\\}_{e\\in\\widetilde{\\Gamma}})=\\psi(\\{g_{e}\\}_{e\\in\\Gamma})\\,.\n\\end{equation}\nThis means that any wave-function defined on a graph $\\Gamma$ is automatically extended to live on any refinement of $\\Gamma$. Now, when summing two wave-functions living on a priori different graphs or taking their scalar product, one will refine the two graphs, say $\\Gamma$ and $\\Gamma'$, to a larger and finer graph $\\widetilde{\\Gamma}$ containing both of them as subgraphs: the two wave-functions will then be compared on that larger graph $\\widetilde{\\Gamma}$ and their sum will be be defined as living on it.\nA precise and rigorous treatment of these projective limit techniques can be found in \\cite{Ashtekar:1993wf,Ashtekar:1994mh,Ashtekar:1994wa}. The set of wave-functions is defined by the union of the sets of wave-functions with support on every graph quotiented by the cylindrical consistency equivalence relation, and the resulting kinematical Hilbert space for loop quantum gravity is obtained by the sum of all graphs also quotiented by the cylindrical consistency:\n\\begin{equation}\n\\mathcal{F}= \\left.\\left(\\bigcup_\\Gamma \\mathcal{F}_\\Gamma\\right)\\right\/\\sim\\,,\n\\qquad\n\\mathcal{H}_\\textrm{kin} = \\left.\\left(\\bigoplus_\\Gamma \\mathcal{H}_\\Gamma\\right)\\right\/\\sim\\,.\n\\end{equation}\nThis Hilbert space, defined as a projective limit, was shown to be the space of $L^{2}$-functionals of the connection with respect to the Ashtekar-Lewandowski measure \\cite{Ashtekar:1994wa}.\nGoing to the spin network basis, the cylindrical consistency corresponds to identifying graphs with edges $e$ carrying a vanishing spin $j_{e}=0$ to the same graphs without those edges. 
Thus, if we went to pick a specific representative for every equivalence class, we could simply choose all spin networks carrying no vanishing spins:\n\\begin{equation}\n\\mathcal{H}_\\textrm{kin} = \\bigoplus_\\Gamma \\widetilde{\\mathcal{H}}_\\Gamma\\,,\n\\qquad\n\\widetilde{\\mathcal{H}}_\\Gamma = \\bigoplus_{\\{j_e \\neq 0,i_v\\}} \\mathbb{C}|j_e,i_v\\rangle\n\\end{equation}\n\n\n\n\\begin{figure}[t!]\n\\centering\n\n\\begin{tikzpicture}[scale=0.5]\n\\coordinate(A) at (0,0);\n\\coordinate(B) at (2,0);\n\\coordinate(C) at (4,0);\n\\coordinate(D) at (0,-2);\n\\coordinate(E) at (2,-2);\n\\coordinate(F) at (4,-2);\n\n\\draw[decorate, decoration=snake] (A) to[bend left] (B);\n\\draw[decorate, decoration=snake] (B) to[bend right] (C);\n\\draw[decorate, decoration=snake] (D) to[bend left] (E);\n\\draw[decorate, decoration=snake] (E) to[bend right] (F);\n\n\\draw[decorate, decoration=snake] (A) -- (D);\n\\draw[decorate, decoration=snake] (B) -- (E);\n\\draw[decorate, decoration=snake] (C) -- (F);\n\n\\draw (A) node {$\\bullet$};\n\\draw (B) node {$\\bullet$};\n\\draw (C) node {$\\bullet$};\n\\draw (D) node {$\\bullet$};\n\\draw (E) node {$\\bullet$};\n\\draw (F) node {$\\bullet$};\n\n\\coordinate(O1) at (-4,6);\n\\coordinate(A1) at ($(O1)+(0,0)$);\n\\coordinate(B1) at ($(O1)+(2,0)$);\n\\coordinate(C1) at ($(O1)+(4,0)$);\n\\coordinate(D1) at ($(O1)+(0,-2)$);\n\\coordinate(E1) at ($(O1)+(2,-2)$);\n\\coordinate(F1) at ($(O1)+(4,-2)$);\n\n\\draw[<-,>=stealth,very thick] (1,1) -- (-1,3);\n\n\\draw[decorate, decoration=snake] (A1) to[bend left] (B1);\n\\draw[gray,dashed,decorate, decoration=snake] (B1) to[bend right] (C1);\n\\draw[decorate, decoration=snake] (D1) to[bend left] (E1);\n\\draw[gray,dashed,decorate, decoration=snake] (E1) to[bend right] (F1);\n\n\\draw[decorate, decoration=snake] (A1) -- (D1);\n\\draw[decorate, decoration=snake] (B1) -- (E1);\n\\draw[gray,dashed,decorate, decoration=snake] (C1) -- (F1);\n\n\\draw (A1) node {$\\bullet$};\n\\draw (B1) node {$\\bullet$};\n\\draw[gray] (C1) node {$\\bullet$};\n\\draw (D1) node {$\\bullet$};\n\\draw (E1) node {$\\bullet$};\n\\draw[gray] (F1) node {$\\bullet$};\n\n\\coordinate(O2) at (4,6);\n\\coordinate(A2) at ($(O2)+(0,0)$);\n\\coordinate(B2) at ($(O2)+(2,0)$);\n\\coordinate(C2) at ($(O2)+(4,0)$);\n\\coordinate(D2) at ($(O2)+(0,-2)$);\n\\coordinate(E2) at ($(O2)+(2,-2)$);\n\\coordinate(F2) at ($(O2)+(4,-2)$);\n\n\\draw[<-,>=stealth,very thick] (3,1) -- (5,3);\n\n\\draw[gray,dashed,decorate, decoration=snake] (A2) to[bend left] (B2);\n\\draw[decorate, decoration=snake] (B2) to[bend right] (C2);\n\\draw[gray,dashed,decorate, decoration=snake] (D2) to[bend left] (E2);\n\\draw[decorate, decoration=snake] (E2) to[bend right] (F2);\n\n\\draw[gray,dashed,decorate, decoration=snake] (A2) -- (D2);\n\\draw[decorate, decoration=snake] (B2) -- (E2);\n\\draw[decorate, decoration=snake] (C2) -- (F2);\n\n\\draw[gray] (A2) node {$\\bullet$};\n\\draw (B2) node {$\\bullet$};\n\\draw (C2) node {$\\bullet$};\n\\draw[gray] (D2) node {$\\bullet$};\n\\draw (E2) node {$\\bullet$};\n\\draw (F2) node {$\\bullet$};\n\\end{tikzpicture}\n\n\\caption{To define the scalar product in the continuum, we use cylindrical consistency: wavefunctions with trivial dependancy on some edges are identified with functions on coarser graphs (with the gray dashed edges removed from the graph). 
As a consequence, two coarse graphs can always be considered as being embedded in another finer graph on which the scalar product is well-defined.}\n\\label{fig:cylindrical}\n\\end{figure}\n\n\\medskip\n\nThis projective limit technique was introduced for graphs embedded in the canonical hypersurface, but was also shown to work for equivalence classes of graphs under (spatial) diffeomorphisms, and can be directly extended to abstract graphs defined purely combinatorially without reference to an embedding in a specific manifold. In the following, we will not make direct use of the embedding of spin network states, so our definition and procedures can be applied to any of those cases. Nevertheless, since we do not discuss the coarse-graining from an embedding point of view, it is simpler to consider all our definitions as for abstract graphs.\n\n\n\\subsection{Coarse-graining by gauge-fixing}\n\nLet us now discuss the main context of this paper: the coarse-graining of spin networks for loop quantum gravity. The idea of coarse-graining is to integrate out the microscopic degrees of freedom, by an iterative procedure, up to some given energy or length scale to get the effective dynamics of the macroscopic degrees of freedom.\nIn condensed matter models, one typically works on a regular lattice with degrees of freedom living on its edges and\/or nodes and one can decimate consistently the variables, integrating out one node out of two for example, and thus derive an effective Hamiltonian on the coarser lattice. The length scale is set by the lattice spacing. In quantum field theory, the renormalisation group scheme integrates out quantum fluctuations of the field of high momentum and energy to derive an effective dynamics on the low momentum degrees of freedom.\nIn general relativity, the main difficulty is that the space-time geometry itself has become dynamical thus leading to some serious obstacles: in a background independent context, we face the problems of defining consistently a length or energy scale and of properly localizing perturbations and degrees of freedom both in position and momentum. These issues persist in the quantum theory.\n\nIn loop quantum gravity, one could think that the natural graph structure of the theory makes it simpler to tackle the coarse-graining of the theory. However, even putting aside the huge complication of fluctuating graphs and graph superpositions, working out the coarse-graining of loop quantum gravity on a fixed graph still faces the problem of localizing and determining the energy scale of the geometry fluctuations. Indeed, a natural coarse-graining procedure on a fixed graph is to subdivide it into a partition of bounded (usually connected) regions and to collapse those subgraphs to single points. The internal geometrical information carried by the spin network state on those subgraphs would be coarse-grained to some effective data living at the new node of the coarser graph, as illustrated on fig.\\ref{fig:coarsegraining}. Integrating over these local degrees of freedom would lead to new effective dynamics on the coarser graph. 
Such a procedure would then be iterated to obtain a tower of effective theories \\textit{\\`a la} Wilson for loop quantum gravity towards a large scale limit.\n\\begin{figure}[h!]\n\n\\centering\n\n\\begin{tikzpicture}[scale=0.3]\n\\coordinate(A) at (0,0);\n\\coordinate(B) at (2,0);\n\\coordinate(C) at (2,-2);\n\\coordinate(D) at (0,-2);\n\n\\coordinate(E) at (6,0);\n\\coordinate(F) at (6,-2);\n\\coordinate(G) at (8,0);\n\\coordinate(H) at (0,-6);\n\\coordinate(I) at (2,-6);\n\\coordinate(J) at (2,-8);\n\\coordinate(K) at (6,-6);\n\n\\draw (A) -- (B);\n\\draw (B) -- (C);\n\\draw (C) -- (D);\n\\draw (D) -- (A);\n\\draw (B) -- (D);\n\n\\draw (A) -- ++(-1,1);\n\\draw (B) -- (E);\n\\draw (C) -- (F);\n\\draw (C) -- (I);\n\\draw (D) -- (H);\n\n\\draw (G) -- ++(1,1);\n\\draw (E) -- (F) -- (G) -- (E);\n\n\\draw (H) -- ++(-1,-1);\n\\draw (J) -- ++(-1,-1);\n\\draw (H) -- (I) -- (J) -- (H);\n\n\\draw (I) -- (K);\n\\draw (J) -- (K);\n\\draw (K) -- (F);\n\n\\draw (K) -- ++(1,-1);\n\n\\draw (A) node {$\\bullet$};\n\\draw (B) node {$\\bullet$};\n\\draw (C) node {$\\bullet$};\n\\draw (D) node {$\\bullet$};\n\\draw (E) node {$\\bullet$};\n\\draw (F) node {$\\bullet$};\n\\draw (G) node {$\\bullet$};\n\\draw (H) node {$\\bullet$};\n\\draw (I) node {$\\bullet$};\n\\draw (J) node {$\\bullet$};\n\\draw (K) node {$\\bullet$};\n\n\\draw[gray,dashed] (1,-1) circle(2);\n\\draw[gray,dashed] (1.3,-6.7) circle(1.7);\n\\draw[gray,dashed] (6.7,-0.7) circle(1.7);\n\\draw[gray,dashed] (K) circle(1);\n\n\\draw[->,>=stealth,very thick] (9,-4) -- (13,-4);\n\n\\coordinate(O1) at (15,-1);\n\\coordinate(O2) at (15.3,-6.7);\n\\coordinate(O3) at (20.7,-0.7);\n\\coordinate(O4) at (20,-6);\n\\draw (O1) -- ++(-1,1);\n\\draw (O2) -- ++(-1.7,-0.5);\n\\draw (O2) -- ++(-0.5,-1.7);\n\\draw (O3) -- ++(1,1);\n\\draw (O4) -- ++(1,-1);\n\n\\draw (O1) to[bend left] (O2);\n\\draw (O1) to[bend right] (O2);\n\\draw (O1) to[bend left] (O3);\n\\draw (O1) to[bend right] (O3);\n\\draw (O2) to[bend left] (O4);\n\\draw (O2) to[bend right] (O4);\n\\draw (O4) -- (O3);\n\n\n\\draw[fill=lightgray] (O1) circle(0.3);\n\\draw[fill=lightgray] (O2) circle(0.2);\n\\draw[fill=lightgray] (O3) circle(0.2);\n\\draw[fill=lightgray] (O4) circle(0.1);\n\\end{tikzpicture}\n\n\\caption{\\label{fig:coarsegraining}\nWe coarse-grain a graph by partionning it into disjoint connected subgraphs. We will reduce each of these bounded region of space by a single vertex of the coarser graph. Since each of these regions of space had some internal geometrical structure and were likely carrying curvature, the natural question is whether spin network vertices carry each data to account for these internal structure and curvature. We will see that standard spin network vertices can be interpreted as flat and that we need to introduce some new notion of ``curved vertices'' carrying extra algebraic information and define new extensions of spin network states more suitable to the process of coarse-graining loop quantum gravity.}\n\n\n\\end{figure}\n\n\nIt remains to decide which partition to choose in practice, which one is the most ``coarse-grainable''. We need to identify the regions of the spin network state whose geometry has the smallest (quantum) fluctuations. Since the algebraic data -spins and intertwiners- living on the graph determine the discrete geometry defined by the spin network state, the combinatorial data of the subgraph is not enough to decide if it is to be coarse-grained. 
One actually needs to find a suitable scale function -length or energy or another geometrical observable such as curvature- and to use an optimization algorithm running through all possible bounded regions and partitions of the graph in order to find the correct partition to coarse-grain at each step. In simpler words, the obstacle is that the geometry corresponding to the considered graph is not entirely determined by the combinatorial definition of the graph but crucially depends on the algebraic data living on it and carried by the spin network state. And on top of this difficulty, one still needs to find a consistent way to deal with graph fluctuations and superpositions.\n\n\\medskip\n\nWe propose a truncation of the theory re-introducing a background lattice through coarse-graining. From the point of view of a given observer, one chooses a lattice, which defines the network of points whose geometry the observer will probe. The lattice is not considered as the fundamental graph underlying the physical spin network state. Instead, since the observer is assumed to have a finite resolution, its nodes represent bounded regions of space whose internal geometry can fluctuate. Then, if we consider a spin network state based on a graph with a very fine structure, we will coarse-grain it onto our chosen lattice. Such a scheme allows us to take into account graph fluctuations and superpositions while actually working on a fixed lattice. Indeed, a superposition of two graphs will live, by cylindrical consistency, on a finer graph containing both of them. Then we will coarse-grain the quantum geometry state on the finer graph until it lives on our reference lattice.\n\nA key step of this procedure is the coarse-graining of subgraphs to nodes. We use the ``coarse-graining through gauge-fixing'' procedure introduced in \\cite{Livine:2006xk, Livine:2013gna} and also exploited in \\cite{Dittrich:2014wpa,Bahr:2015bra} to reformulate the algebra of geometrical observables in loop quantum gravity. This is based on the gauge-fixing for spin networks defined earlier in \\cite{Freidel:2002xb}, which allows one to collapse an arbitrary subgraph to a {\\it flower}, that is a single vertex with self-loops -or petals- attached to it. These loops account for the building-up of the curvature and thus of the gravitational energy density within these microscopic bounded regions which we will coarse-grain to single points on the measurement lattice chosen by the observer.\n\n\\medskip\n\nLet us take a closer look at this gauge-fixing procedure and the resulting coarse-graining of spin networks.\nAt the classical level, a spin network state is given by the graph dressed with discrete holonomy-flux data: each oriented edge carries an $\\mathrm{SU}(2)$ group element $g_{e}\\,\\in\\mathrm{SU}(2)$ while each edge's extremity around a vertex is colored with a vector $X^{v}_{e}\\in{\\mathbb R}^{3}$. So one edge carries two vectors, one living at its source vertex and the other living at its target vertex, respectively $X^{s,t}_{e}\\equiv X^{s,t(e)}_{e}$. 
The group element gives the parallel transport of the vectors along the edges, that is $X^{t}_{e}=-\\,g_{e}\\triangleright X^{s}_{e}$ with the action of $g_{e}$ as a $\\mathrm{SO}(3)$-rotation on the flat 3d space.\nThis obviously forces the two vectors to have equal norm, $|X^{t}_{e}|=|X^{s}_{e}|$, which is called the (area-)matching constraint.\nOne requires another set of constraints: we impose the closure constraint at each vertex $v$, so that the sum of the fluxes around the vertex vanishes, $\\sum_{e\\ni v} X^{v}_{e}=0$. This holonomy-flux data can be interpreted as some discrete geometry in the framework of twisted geometries\n\\cite{Freidel:2010aq,Freidel:2013bfa}.\nThis is achieved through Minkowski's theorem stating that the closure constraint determines a unique convex polyhedron in flat 3d space dual to each vertex $v$, such that the fluxes $X^{v}_{e}$ are the normal vectors to the polyhedron faces.\n\nCurvature appears as non-trivial holonomies around loops ${\\mathcal L}$ of the graph, when $\\overrightarrow{\\prod_{e\\in{\\mathcal L}}}g_{e}\\,\\ne{\\mathbb{I}}$. As pointed out in \\cite{Livine:2013gna}, coarse-graining a subgraph carrying non-trivial curvature leads to an effective vertex breaking the closure constraint. This underlines the fact that a generalization of spin network states is required in order to properly carry out a coarse-graining procedure: we need an extended structure allowing for {\\it curved} vertices.\n\nWe illustrate this in fig.\\ref{fig:nonclosure}, in 2d instead of 3d. Let us consider a bounded region in space and the normals to its boundary.\nDue to gauge-invariance, if the region contains a single vertex, the sum of the normals will sum up to zero. But if there are loops inside the region, the parallel transport around these loops might introduce non-trivial rotations. And indeed, as soon as the parallel transport around the loops is non-trivial, the sum of the normals is no longer zero, leading to a closure defect \\cite{Livine:2013gna}. This is natural and translates the fact that curvature is carried by the loops of the spin network. 
And this must be taken into account when coarse-graining.\n\n\\begin{figure}[h!]\n\n\\centering\n\n\\begin{tikzpicture}\n\n\\def 1.2 {1.2}\n\\def 0.2 {0.2}\n\n\\coordinate (A1) at (1*1.2,0.2,0);\n\\coordinate (A2) at (0.5*1.2,0.2,0.866*1.2);\n\\coordinate (A3) at (-0.5*1.2,0.2,0.866*1.2);\n\\coordinate (A4) at (-1*1.2,0.2,0);\n\\coordinate (A5) at (-0.5*1.2,0.2,-0.866*1.2);\n\\coordinate (A6) at (0.5*1.2,0.2,-0.866*1.2);\n\\coordinate (B) at (0,0.8+0.2,0);\n\n\\draw (A6) -- (A1) -- (A2) -- (A3) -- (A4);\n\\draw[dashed] (A4) -- (A5) -- (A6);\n\n\\draw[blue,thick] (A1) -- (B);\n\\draw (A2) -- (B);\n\\draw (A3) -- (B);\n\\draw (A4) -- (B);\n\\draw[dashed] (A5) -- (B);\n\\draw (A6) -- (B);\n\n\\draw (A1) node {$\\bullet$};\n\\draw (A2) node {$\\bullet$};\n\\draw (A3) node {$\\bullet$};\n\\draw (A4) node {$\\bullet$};\n\\draw (A5) node {$\\bullet$};\n\\draw (A6) node {$\\bullet$};\n\\draw (B) node {$\\bullet$};\n\n\\draw[->,>=stealth,thick] (2,0.5) -- (4,0.5);\n\n\\coordinate (P) at (6.5,0.5);\n\n\\def 50 {50}\n\\def 1.4 {1.4}\n\n\\coordinate (C1) at ($(P) + (0*50:1.4)$);\n\\coordinate (C2) at ($(P) + (1*50:1.4)$);\n\\coordinate (C3) at ($(P) + (2*50:1.4)$);\n\\coordinate (C4) at ($(P) + (3*50:1.4)$);\n\\coordinate (C5) at ($(P) + (4*50:1.4)$);\n\\coordinate (C6) at ($(P) + (5*50:1.4)$);\n\\coordinate (C7) at ($(P) + (6*50:1.4)$);\n\n\\fill[lightgray] (C7) -- (P) -- (C1) -- cycle;\n\\draw[red,thick,dashed] (C1) -- (C7);\n\n\\draw[blue,thick] (P) -- (C1);\n\\draw (P) -- (C2);\n\\draw (P) -- (C3);\n\\draw (P) -- (C4);\n\\draw (P) -- (C5);\n\\draw (P) -- (C6);\n\\draw[blue,thick] (P) -- (C7);\n\n\\draw (C1) -- (C2) -- (C3) -- (C4) -- (C5) -- (C6) -- (C7);\n\n\\draw[red,thick] ($(P) + (6*50:0.3*1.4)$) arc (6*50:360:0.3*1.4);\n\n\\def 0.01 {0.01}\n\\def 0.9 {0.9}\n\n\\draw[->,>=stealth] ($(P) + (0.5*50:1.4*0.9)$) -- ++(0.5*50:50*0.01);\n\\draw[->,>=stealth] ($(P) + (1.5*50:1.4*0.9)$) -- ++(1.5*50:50*0.01);\n\\draw[->,>=stealth] ($(P) + (2.5*50:1.4*0.9)$) -- ++(2.5*50:50*0.01);\n\\draw[->,>=stealth] ($(P) + (3.5*50:1.4*0.9)$) -- ++(3.5*50:50*0.01);\n\\draw[->,>=stealth] ($(P) + (4.5*50:1.4*0.9)$) -- ++(4.5*50:50*0.01);\n\\draw[->,>=stealth] ($(P) + (5.5*50:1.4*0.9)$) -- ++(5.5*50:50*0.01);\n\n\\draw[->,>=stealth,red,thick,dashed] ($(P) + ({180+3*50}:1.2)$) -- ++({180+3*50}:{360*0.01-6*50*0.01});\n\n\\draw (P) node {$\\bullet$};\n\n\\draw (C1) node {$\\bullet$};\n\\draw (C2) node {$\\bullet$};\n\\draw (C3) node {$\\bullet$};\n\\draw (C4) node {$\\bullet$};\n\\draw (C5) node {$\\bullet$};\n\\draw (C6) node {$\\bullet$};\n\\draw (C7) node {$\\bullet$};\n\n\\end{tikzpicture}\n\n\n\\caption{On this figure, we represented the dual graph of a 2d trivalent graph. The curvature at the vertex manifests itself as a defect in the closure condition. This can be seen by flattening the triangulation, which amounts to gauge-fix the variables. The curvature manifests itself as a gap (in gray on the figure) at some edge (in blue on the figure) in the flattened manifold. 
The closure defect can be seen as the missing normal coming from the closure of the flattened polygon (in red on the figure).}\n\\label{fig:nonclosure}\n\n\\end{figure}\n\n\nA rigorous way to make this explicit is to gauge-fix the spin network state, following the procedure devised in \\cite{Freidel:2002xb}.\nLet us consider a bounded region of a larger spin network, defined as a finite connected subgraph $\\gamma$ of the larger graph $\\Gamma$, as in fig.\\ref{fig:GaugeFix}.\nThe procedure goes as follows:\n\\begin{enumerate}\n\n\\item Choose arbitrarily a root vertex $v_{0}$ of the subgraph and select a maximal tree $T$ of the region: \n\nThe subgraph being connected, the maximal tree goes through every vertex of the region and defines a unique path of edges from the root vertex $v_{0}$ to any vertex of the subgraph.\n\n\\item Gauge-fix iteratively all the group elements along the edges of the tree $g_{e\\in T}={\\mathbb{I}}$:\n\nUsing the gauge-invariance of the wave-functions as given by \\eqref{gaugeinv} with gauge transformations acting at every vertex by $\\mathrm{SU}(2)$ group elements $h_{v}$ as $g_{e}\\rightarrow h_{s(e)}^{-1}g_{e}h_{t(e)}$, we can start from the root of the tree $v_{0}$ and progress through the tree until we reach the boundary of our subgraph. We define the appropriate gauge transformations $h_{v}$ at every vertex in order to fix all the group elements along the edges of the tree to the identity ${\\mathbb{I}}$. The absence of loops in the tree, by definition, guarantees the consistency of this gauge-fixing. We can somewhat interpret this maximal tree as a synchronization network: we set all the parallel transports along the tree edges to the identity, thus synchronizing the reference frames at all the vertices and identifying them to a single reference frame living at the root of the subgraph. This realizes the coarse-graining of the subgraph $\\gamma$ to its chosen root vertex $v_{0}$.\nThe action of $\\mathrm{SU}(2)$ gauge transformations inside the region is not entirely gauge-fixed: we are still left with the $\\mathrm{SU}(2)$ gauge transformations at the root vertex.\n\n\n\\item Having collapsed the subgraph $\\gamma$ to its root vertex $v_{0}$, the edges of the subgraph $\\gamma$ which are not in the tree, $e\\in\\gamma\\setminus T$, label all the (independent) loops of the subgraph and lead to self-loops attached to the root vertex $v_{0}$:\n\nAs illustrated on fig.\\ref{fig:GaugeFix}, these self-loops or little loops carry the holonomies around the loops of the original subgraph $\\gamma$, that is the curvature living in the bounded region. The flux-vectors living on the boundary edges, linking the region to the outside bulk, generically do not satisfy the closure constraint anymore since the effective vertex satisfies a closure constraint which takes into account not only the flux-vectors of those boundary edges but also those of the internal loops. The closure defect, induced by the little loops, thus reflects the non-trivial internal structure of the coarse-grained subgraph and the curvature developed in the corresponding region of the spin network state. We write this closure relation explicitly below. 
The interested reader can find details and proof in the previous work \\cite{Livine:2013gna}.\n\n\n\n\\end{enumerate}\n\n\n\\begin{figure}[h!]\n\n\\begin{tikzpicture}[scale=0.8]\n\\coordinate(A) at (0,0);\n\\coordinate(B) at (2,0);\n\\coordinate(C) at (2,-2);\n\\coordinate(D) at (0,-2);\n\n\\draw[blue,thick] (A) -- (B) node[midway,above] {$\\ell_1$};\n\\draw[blue,thick] (B) -- (C) node[midway,right] {$\\ell_2$};\n\\draw[red,very thick] (C) -- (D) node[midway,below] {$\\ell_3$};\n\\draw[red,very thick] (D) -- (A) node[midway,left] {$\\ell_4$};\n\\draw[red,very thick] (B) -- (D) node[midway,sloped,above] {$\\ell_5$};\n\n\\draw (A) -- ++(-1,0.5) node[above] {$j_1$};\n\\draw (B) -- ++(1,0.5) node[above] {$j_2$};\n\\draw (C) -- ++(1,-0.5) node[below] {$j_3$};\n\\draw (D) -- ++(-1,-0.5) node[below] {$j_4$};\n\n\\draw (A) node {$\\bullet$} node[above]{$i_1$};\n\\draw (B) node {$\\bullet$} node[above]{$i_2$};\n\\draw (C) node {$\\bullet$} node[below]{$i_3$};\n\\draw[red] (D) node[scale=2] {$\\bullet$} node[below]{$i_4$};\n\n\\draw[gray,dashed] (1,-1) circle(2);\n\n\\draw[->,>=stealth,very thick] (3.5,-1) -- (5.5,-1);\n\n\\coordinate(O) at (8,-1);\n\n\\draw (O) -- ++(-2,1) node[above] {$j_1$};\n\\draw (O) -- ++(2,1) node[above] {$j_2$};\n\\draw (O) -- ++(2,-1) node[below] {$j_3$};\n\\draw (O) -- ++(-2,-1) node[below] {$j_4$};\n\\draw[blue,thick,scale=3] (O) to[loop] (O);\n\\draw[blue,thick] (O) ++(0,1) node {$k_1$};\n\\draw[blue,thick,scale=3,rotate=180] (O) to[loop] (O);\n\\draw[blue,thick] (O) ++(0,-1) node {$k_2$};\n\n\\draw[red] (O) node[scale=2] {$\\bullet$} ++(0.35,0) node{$i$};\n\\end{tikzpicture}\n\n\\caption{Coarse-graining via gauge-fixing: we can gauge-fix the subgraph using a maximal subtree (in red). The remaining edges (in blue) correspond to loops on the coarse-grained vertex. There is a residual gauge-freedom at the coarse-grained vertex that corresponds to the action of the gauge group at the root of the tree (red vertex on the figure).}\n\\label{fig:GaugeFix}\n\\end{figure}\n\n\nThis gauge-fixing procedure allows to clearly identify and distinguish between the degrees of freedom of the internal geometry of the considered bounded region of space to coarse-grain. The tree encodes the internal combinatorial structure of the region and describes the network of points and links within: they provide the bulk structure on which we can create curvature. The little loops and the $\\mathrm{SU}(2)$ group elements coloring them are the excitations of the parallel transport and curvature. Together, tree and little loops attached to a vertex describe all its internal structure and are the extra data needed to define \\textit{curved vertices} for the effective coarse-grained theory. These curvature excitations create a closure defect for the flux-vectors living on the boundary edges linking the coarse-grained vertex -the root vertex- to the rest of the spin network (obtained by the actually satisfied closure constraint between boundary edges and little loops)\n\nWhen coarse-graining in practice, we do not want to retain all the information about the internal geometry, but only want to retain the degrees of freedom most relevant to the dynamics and interaction with the exterior geometry. 
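To make this closure defect fully explicit, let us write it schematically in formulas, with the sign conventions introduced above (we refer to \\cite{Livine:2013gna} for the detailed statement and proof). Denoting by $\\partial\\gamma$ the set of boundary edges linking the region to the rest of the graph, the effective vertex obtained after gauge-fixing satisfies a closure constraint involving both the boundary edges and the two ends of each little loop,\n\\begin{equation}\n\\sum_{e\\in\\partial\\gamma} X_{e}\n\\,+\\,\n\\sum_{\\ell\\in\\gamma\\setminus T}\\big{(}X^{s}_{\\ell}+X^{t}_{\\ell}\\big{)}\n\\,=\\,0\\,,\n\\qquad\nX^{s}_{\\ell}+X^{t}_{\\ell}\\,=\\,({\\mathbb{I}}-g_{\\ell})\\triangleright X^{s}_{\\ell}\\,,\n\\end{equation}\nso that the boundary flux-vectors alone generically fail to close,\n\\begin{equation}\n\\sum_{e\\in\\partial\\gamma} X_{e}\n\\,=\\,\n-\\sum_{\\ell\\in\\gamma\\setminus T}({\\mathbb{I}}-g_{\\ell})\\triangleright X^{s}_{\\ell}\\,,\n\\end{equation}\nwith a defect that vanishes exactly when all the little-loop holonomies $g_{\\ell}$ are trivial.\n\n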
In the next section, we will therefore introduce a hierarchy of extensions of spin network states with {\\it curved vertices}, from the finest notion of spin networks decorated with both trees and little loops to the coarser notion of spin networks with a simple tag at each vertex recording the induced closure defect.\n\n\n\n\\subsection{A hierarchy of coarse-grained spin network structures}\n\nIn loop quantum gravity, we start with spin network states, which are graphs decorated with spins on the edges and intertwiners at the vertices:\n\\begin{equation}\n{\\mathcal{H}}_\\Gamma = \\bigoplus_{\\{j_e,i_v\\}} \\mathbb{C}|j_e,i_v\\rangle\\,.\n\\end{equation}\nCurvature is carried by the loops of the graph.\nWe have argued that coarse-graining these networks should naturally lead to extended spin networks that can carry localized curvature excitations at the vertices. Following the coarse-graining through gauge-fixing procedure\\footnotemark, we propose a hierarchy of three possible extensions of the spin network states, which depend on how much extra information and structure are added to each vertex:\n\\footnotetext{\nAnother approach is to define spin networks made of intertwiners directly interpretable as dual to polyhedra in a curved space. This has been developed in the framework of spin networks for loop quantum gravity with a non-vanishing cosmological constant and is based on a quantum deformation of the $\\mathrm{SU}(2)$ gauge group \\cite{Dupuis:2013haa,Bonzom:2014wva,Dupuis:2014fya,Charles:2015lva,Haggard:2014xoa,Haggard:2015ima,Haggard:2015yda}. However, it is not yet clear how, if possible, to depart from a homogeneous curvature and glue pieces carrying different curvatures, thus obtaining actual spin networks with variable curvature.\nOne possible link with our present framework would be to show that these curved and quantum-deformed intertwiners can be obtained in a continuum limit as a vertex with an infinite number of little loops creating a constant homogeneous curvature excitation (for instance, triangulating a hyperbolic tetrahedron with finer and finer tetrahedra which can be considered as flat in an infinite refinement limit).\n}\n\\begin{figure}\n\\begin{subfigure}[t]{.33\\linewidth}\n\\centering\n\\begin{tikzpicture}[scale=0.7]\n\\coordinate(O1) at (0,0);\n\\coordinate(O2) at (2.5,3.5);\n\\coordinate(A) at ($(O2)+(0,0)$);\n\\coordinate(B) at ($(O2)+(1,0)$);\n\\coordinate(C) at ($(O2)+(1,-1)$);\n\\coordinate(D) at ($(O2)+(0,-1)$);\n\n\\draw (O1) -- ++(-2,1) node[above] {$j_1$};\n\\draw (O1) -- ++(2,1) node[above] {$j_2$};\n\\draw (O1) -- ++(2,-1) node[below] {$j_3$};\n\\draw (O1) -- ++(-2,-1) node[below] {$j_4$};\n\\draw[blue,thick] (O1) to[loop,scale=3] (O1) ++(0,1) node {$k_1$};\n\\draw[blue,thick] (O1) to[loop,scale=3,rotate=180] (O1) ++(0,-1) node {$k_2$};\n\n\\draw[red] (O1) node[scale=2] {$\\bullet$} ++(0.35,0) node{$i$};\n\n\\draw[gray,dashed] (O1) circle (0.5) ++(45:0.5) -- (45:3) ++(45:1.2) coordinate (O3) circle (1.2);\n\n\n\\clip (O3) circle (1.2);\n\n\\draw[blue,thick] (A) -- (B);\n\\draw[blue,thick] (B) -- (C);\n\\draw[red,very thick] (C) -- (D);\n\\draw[red,very thick] (D) -- (A);\n\\draw[red,very thick] (B) -- (D);\n\n\\draw (A) -- ++(-2,1);\n\\draw (B) -- ++(2,1);\n\\draw (C) -- ++(2,-1);\n\\draw (D) -- ++(-2,-1);\n\n\\draw (A) node {$\\bullet$};\n\\draw (B) node {$\\bullet$};\n\\draw (C) node {$\\bullet$};\n\\draw[red] (D) node[scale=2] {$\\bullet$};\n\\end{tikzpicture}\n\n\\caption{All the information can be preserved by carrying the $SU(2)$ labels and an 
unfolding tree describing the inner details of the coarse-grained vertex.}\\label{fig:loopy_a}\n\\end{subfigure}%\n\\hspace{2mm}\n\\begin{subfigure}[t]{.28\\linewidth}\n\\centering\n\\begin{tikzpicture}[scale=0.7]\n\\coordinate(O1) at (0,0);\n\n\\draw (O1) -- ++(-2,1) node[above] {$j_1$};\n\\draw (O1) -- ++(2,1) node[above] {$j_2$};\n\\draw (O1) -- ++(2,-1) node[below] {$j_3$};\n\\draw (O1) -- ++(-2,-1) node[below] {$j_4$};\n\\draw[blue,thick] (O1) to[loop,scale=3] (O1) ++(0,1) node {$k_1$};\n\\draw[blue,thick] (O1) to[loop,scale=3,rotate=180] (O1) ++(0,-1) node {$k_2$};\n\n\\draw[red] (O1) node[scale=2] {$\\bullet$} ++(0.35,0) node{$i$};\n\\end{tikzpicture}\n\\caption{The particular subgraph can be forgotten and only the $SU(2)$ information is preserved.}\\label{fig:loopy_b}\n\\end{subfigure}\n\\hspace{2mm}\n\\begin{subfigure}[t]{.28\\linewidth}\n\\centering\n\\begin{tikzpicture}[scale=0.7]\n\\coordinate(O1) at (0,0);\n\n\\draw (O1) -- ++(-2,1) node[above] {$j_1$};\n\\draw (O1) -- ++(2,1) node[above] {$j_2$};\n\\draw (O1) -- ++(2,-1) node[below] {$j_3$};\n\\draw (O1) -- ++(-2,-1) node[below] {$j_4$};\n\\draw[blue,thick] (O1) to ++(0,0.5) node[above]{$j,m$};\n\n\\draw[red] (O1) node[scale=2] {$\\bullet$} ++(0,-0.3) node{$i'$};\n\n\\end{tikzpicture}\n\\caption{Everything except the closure defect is forgotten. Only a ``tag'' remains.}\\label{fig:loopy_c}\n\\end{subfigure}\n\\caption{The hierarchy of possible coarse-graining frameworks}\\label{fig:loopy}\n\\end{figure}\n\\begin{enumerate}\n\n\\item {\\bf Folded spin networks~:}\n\nIn the first scenario, we follow the gauge-fixing procedure but we do a minimal coarse-graining, retaining as much information as possible on the original state.\nEach vertex is allowed with an arbitrary number of little loops attached to it and is endowed with a tree connecting the ends of the external edges and of the internal loops, as represented in fig.\\ref{fig:loopy_a}. This tree can be seen as a circuit telling us how to unfold the vertex, reversing the gauge-fixing procedure and recovering the original (finer) graph.\nThis Hilbert space ${\\mathcal H}_{\\Gamma}^\\mathrm{folded}$ can be written formally as:\n\\begin{equation}\n{\\mathcal H}_{\\Gamma}^\\mathrm{folded}\n= \\bigoplus_{\\{j_e,j^{(v)}_{\\ell},i_v,\\mathcal{T}_v\\}} \\mathbb{C}\\,|j_e,j^{(v)}_{\\ell},i_v,\\mathcal{T}_v\\rangle\\,.\n\\end{equation}\n $\\mathcal{T}_v$ is the unfolding tree for each vertex, $j^{(v)}_{\\ell}$ are the spins carried by the additional loops labeled by the index $\\ell$ and the intertwiners $i_{v}$ now lives in the tensor product of the spins $j_{e}$ of the edges linking to the other neighboring vertices and (twice) the spins $j^{(v)}_{\\ell}$ living on the internal loops (because each loop has its two ends at the vertex).\n \n With such an internal space at each vertex, we actually lose no information at all on the internal degrees of freedom. Starting with a spin network state living on a finer graph $\\widetilde{\\Gamma}$, we simply gauge-fix it to a spin network on our coarser graph $\\Gamma$. And we can follow the reverse path. Using the tree at each vertex, we can fully reconstruct the original finer graph $\\widetilde{\\Gamma}$ thus simply perform generic gauge transformations to recover the fully gauge-invariant spin network state.\n\nThus the chosen graph $\\Gamma$ can be considered as a skeleton graph, to which we can add extra information to represent spin network states living on any (finer) graph. In a sense, we have not done any coarse-graining yet. 
The truncation of the theory will happen when defining the dynamics on the folded spin network Hilbert space, distinguishing actual edges and spins of our skeleton lattice -the background- from spins and edges on the unfolding trees and little loops, when the fundamental dynamics would have considered them on equal footing.\n\n\\item {\\bf Loopy spin networks~:}\n\nIn a second scenario, we coarse-grain the internal structure of the effective vertices by discarding the unfolding trees. We keep the curvature excitations living on the little loops, but we discard the combinatorial information of the internal subgraph: we forget that the vertex effectively represents an actual extended region of space and we localize all the internal curvature degrees of freedom on that coarse-grained vertex. This leads to loopy spin networks, with an arbitrary number of loops at each vertex but no unfolding tree data:\n\\begin{equation}\n\\mathcal{H}^{\\mathrm{loopy}}_{\\Gamma} = \\bigoplus_{\\{j_e,j^{(v)}_{\\ell},i_v\\}} \\mathbb{C}|j_e,j^{(v)}_{\\ell},i_v\\rangle\\,,\n\\end{equation}\nwhere the $j^{(v)}_{\\ell}$ are the spins living on the little loops attached to the vertex $v$ and the intertwiners $i_{v}$ live again in the tensor product of the spins carried by the graph edges attached to the vertex $v$ and the spins carried by its little loops.\n\nNow our chosen graph $\\Gamma$ for loopy spin network states is to be considered as a background graph. The little loops are explicit local excitations of the gravitational field located at each vertex of the graph. A given loopy spin network comes from the coarse-graining of several possible finer spin network states living on finer graphs, but we lack the unfolding tree information to recover the original more fundamental state.\n\nThe truncation of the full theory is clear. Spin network states on the ``loopy graphs'' living on top of $\\Gamma$, that is the base graph $\\Gamma$ plus an arbitrary number of self-loops at every vertex, are already in the Hilbert space of loop quantum gravity, although we do not usually focus on such graphs. Restricting ourselves to this subset of states is a clear truncation of the full Hilbert space. The difference with the standard interpretation is that we think here of the base graph $\\Gamma$ as embedded in the space manifold, while the little loops are abstract objects decorating the base graph vertices.\n\nSince we have local degrees of freedom, carried by the little loops, we need to discuss their statistics, which leads to a few variations of this theme:\n\n\\begin{enumerate}\n\\item {\\it Distinguishable loops~:}\nFirst, it is natural to consider that the loops are distinguishable as they come from a substructure. The loops do come from different edges of a finer graph and create curvature excitations at different places within the coarse-grained bounded region. As a result, we should distinguish them and allow ourselves to number and order them.\n\n\\item {\\it Undistinguishable bosonic loops~:}\nA second possibility is to push further along the logic of coarse-graining and to consider the loops as undistinguishable since we do not have access anymore to the specific substructure. This should lead to bosonic statistics, as expected for gravitational field excitations. 
Formally, this can be written as the identification:\n\\begin{equation}\n|j_e,i_v,j^{(v)}_{\\ell}\\rangle = |j_e,i_v,j^{(v)}_{\\sigma_{v}(\\ell)}\\rangle\n\\end{equation}\nfor any permutation $\\sigma_{v}\\in\\,S_{\\#\\ell}$ in the symmetric group of order $\\#\\ell$ when the vertex $v$ has $\\#\\ell$ loops. This point of view is compatible with considering the action of space diffeomorphisms on the little loops around the vertex as gauge transformations.\n\n\\item {\\it Anyonic statistics~:} \nWe can easily imagine other statistics, for instance by allowing for a phase in the equality above (i.e a non-trivial representation of the permutation group). In fact, instead of thinking of the vertex as a mere point, we can represent the boundary of the bounded region as a sphere and consider the little loops as living on a sphere around it. Then the diffeomorphism invariance on the sphere will lead to an action of the braiding group leading to interesting anyonics statistics, similarly to the punctures of a Chern-Simons theory as already explored in the case of black holes in loop quantum gravity \\cite{Pithis:2014uva}.\n\n\\end{enumerate}\n\n\\item {\\bf Tagged spin networks~:}\n\nIn this third and last scenario, we fully coarse-grain the internal geometry of the bounded region now reduced to a graph vertex. We discard the unfolding tree, used in the gauge-fixing and unfixing procedure, and we integrate out the little loops attached to the vertex. All we retain is the closure defect induced by the non-trivial holonomies and spins carried by those little loops. The fact that coarse-graining spin networks, or their classical counterpart of twisted geometries, leads to closure defect, accounting for the presence of a non-trivial curvature within the coarse-grained region was already pointed out in \\cite{Livine:2013gna}.\nHere, the simplest method to see how this comes about is to use the intermediate spin decomposition of the intertwiner at the vertices, as illustrated on fig.\\ref{fig:intermediatespin}, introducing a fiducial link separating the external edges from the internal loops:\n\\begin{equation}\n\\mathrm{Inv}_{\\mathrm{SU}(2)}\\,\\Big{[}\n\\bigotimes_{e}{\\mathcal V}^{j_{e}}\n\\otimes\n\\bigotimes_{\\ell}\\big{(}{\\mathcal V}^{j_{\\ell}}\\otimes{\\mathcal V}^{j_{\\ell}}\\big{)}\n\\Big{]}\n\\,=\\,\n\\bigoplus_{J}\n\\,\n\\mathrm{Inv}_{\\mathrm{SU}(2)}\\,\\Big{[}\n{\\mathcal V}^{J}\n\\otimes\n\\bigotimes_{e}{\\mathcal V}^{j_{e}}\n\\Big{]}\n\\otimes\n\\mathrm{Inv}_{\\mathrm{SU}(2)}\\,\\Big{[}\n{\\mathcal V}^{J}\n\\otimes\n\\bigotimes_{\\ell}\\big{(}{\\mathcal V}^{j_{\\ell}}\\otimes{\\mathcal V}^{j_{\\ell}}\\big{)}\n\\Big{]}\\,.\n\\end{equation}\n\\begin{figure}[h!]\n\n\\centering\n\n\\begin{tikzpicture}[scale=0.8]\n\\coordinate(O1) at (0,0);\n\n\\draw (O1) -- ++(-2,1);\n\\draw (O1) -- ++(-2,0.5);\n\\draw (O1) -- ++(-2,-0.5);\n\\draw (O1) -- ++(-2,-1);\n\\draw (O1) to[in=-25,out=25,loop,scale=3] (O1);\n\\draw (O1) to[in=30,out=80,loop,scale=3] (O1);\n\\draw (O1) to[in=-80,out=-30,loop,scale=3] (O1);\n\n\\draw (O1) node {$\\bullet$} ++(-0.15,0.5) node{$i_v$};\n\n\\draw[->,>=stealth,very thick] (3,0) -- (5,0);\n\n\\coordinate(O2) at (8,0);\n\\coordinate(O3) at (10,0);\n\n\\draw (O2) -- ++(-2,1);\n\\draw (O2) -- ++(-2,0.5);\n\\draw (O2) -- ++(-2,-0.5);\n\\draw (O2) -- ++(-2,-1);\n\n\\draw[in=-25,out=25,scale=3] (O3) to[loop] (O3);\n\\draw[in=30,out=80,scale=3] (O3) to[loop] (O3);\n\\draw[in=-80,out=-30,scale=3] (O3) to[loop] (O3);\n\n\\draw[red] (O2) -- (O3) node[midway,below]{$J$};\n\\draw 
(O2) node {$\\bullet$} ++(0.12,0.4) node{$i_v^{J}$};\n\\draw (O3) node {$\\bullet$} ++(-0.2,0.4) node{$\\tilde{i}_v^{J}$};\n\n\\end{tikzpicture}\n\n\\caption{We represent a loopy vertex $v$, here with three little loops attached to it. The intertwiner $i_{v}$ can be decomposed onto the intermediate spin basis, where we introduce a fiducial edge between the external legs and the internal loops. This orthogonal basis is labeled by the intermediate spin $J$, and two intertwiners $i^{J}_{v}$ and $\\tilde{i}^{J}_{v}$ intertwining between that intermediate spin and respectively the external legs or the internal loops.} \n\\label{fig:intermediatespin}\n\n\\end{figure}\nThis spin $J_{v}$ living at the vertex $v$ encodes the closure defect and is the only extra information with which we decorate the graph.\nWe call it the {\\it tag}; it amounts to adding an open leg to every vertex of the graph. This open edge is colored with the spin $J_{v}$ and a vector in that $\\mathrm{SU}(2)$ representation. Using the standard spin basis labeled by the magnetic quantum number $M$, the Hilbert space of {\\it tagged spin networks} on the base graph $\\Gamma$ is then formally defined as:\n\\begin{equation}\n\\mathcal{H}_\\Gamma^{\\mathrm{tag}} = \\bigoplus_{\\{j_e,J_v,M_v,i_v\\}} \\mathbb{C}|j_e,J_v,M_v,i_v\\rangle\\,,\n\\end{equation}\nwhere the intertwiner $i_{v}$ at the vertex $v$ now lives in the tensor product of the spins $j_{e\\ni v}$ on the external edges $e$ attached to the vertex and of the vertex tag $J_{v}$.\n\nThe state $|J_v,M_v\\rangle$ is the quantized version of the closure defect vector. Indeed, at the classical level, as shown in \\cite{Livine:2013gna}, the sum of the flux-vectors living on the external edges $e\\ni v$ does not vanish anymore and should be balanced by the sum of the flux-vectors living on the internal loops. This defect vector means that there is no convex polyhedron dual to the vertex, as there usually is in twisted geometries. One way to go would be to try to open the polyhedron somehow, which wouldn't have a clear geometrical interpretation. Instead we propose to interpret it as meaning that the dual convex polyhedron should not be embedded in flat space but in a (homogeneous) curved space, the curvature radius depending on the actual value of the closure defect. Progress in this direction has been achieved in the study of hyperbolic and spherical tetrahedra \\cite{Bonzom:2014wva,Charles:2015lva,Haggard:2015ima} but we do not yet have an explicit embedding and formula relating the curvature to the norm of the defect. It would ultimately be enlightening to relate this tag $J_{v}$ to the spectrum of some quasi-local energy operator in loop quantum gravity (e.g. \\cite{Yang:2008th}), which would allow us to view it as a measure of the gravitational energy density within the bounded coarse-grained region.\n\n\\end{enumerate}\n\n\\medskip\n\nThese three extended spin network structures are at the heart of our present proposal for studying effective truncations for the coarse-graining of loop quantum gravity. The goal would be to reformulate the dynamics of loop quantum gravity on these new structures and study their renormalisation flow under coarse-graining. An important point is that these folded, loopy and tagged spin networks sidestep the problem of fluctuating graph dynamics and allow us to project the whole dynamics onto a fixed background graph, or skeleton, interpreted as the lattice postulated by the observer\\footnotemark. 
We then have local excitations of the geometry, representing the internal fluctuations of the gravitational field in the coarse-grained regions, living at the graph vertices and represented by the new information attached to them, respectively unfolding trees, little loops or tags.\n\\footnotetext{\nThe background lattice can then be adapted to the studied models. We could choose a regular lattice or a much simpler graph, such as a flower with a single vertex and an arbitrary number of little loops. Such simple graphs could prove useful in the study of highly symmetric problems, as is the case in cosmology or in the study of Einstein-Rosen waves \\cite{Korotkin:1997ps,Ashtekar:1996cm}.\n}\nThe use of a background lattice, which might be regular, would greatly simplify the setting of a systematic coarse-graining of loop quantum gravity.\n\nThe folded spin networks are mathematically a simple gauge-fixing of spin networks onto the skeleton graph. In the following sections, we will focus on providing a clean mathematical definition of loopy and tagged spin networks and exploring the definition of a Fock space of loopy spin networks with bosonic statistics for the little loops living at every graph vertex.\n\n\n\n\n\\section{Loopy spin networks}\n\n\nHere we would like to properly define loopy spin networks and investigate their properties.\nChoosing a fixed graph $\\Gamma$ with $E$ edges, and given numbers of little loops $N_{v}$ at each vertex $v$, we consider the following space of wave-functions on $\\mathrm{SU}(2)^{\\times\\,(E+\\sum_{v}N_{v})}$ invariant under $\\mathrm{SU}(2)$ gauge transformations acting at every vertex:\n\\begin{equation}\n\\psi\\Big{(}\\{g_{e}\\,,\\,h^{v}_{\\ell}\\}_{e,v\\in\\Gamma}\\Big{)}\n\\,=\\,\n\\psi\\Big{(}\\{a_{s(e)}g_{e}a_{t(e)}^{-1}\\,,\\,a_{v}h^{v}_{\\ell}a_{v}^{-1}\\}\\Big{)}\n\\,,\\qquad\n\\forall a_{v}\\in\\mathrm{SU}(2)^{\\times V}\\,.\n\\end{equation}\nThe $\\mathrm{SU}(2)$ gauge transformations act as usual on the edges $e$ of the graph, while they act by conjugation as expected on the little loops.\nA basis is provided by the spin decomposition on functions in $L^{2}(\\mathrm{SU}(2))$ as with standard spin networks. The loopy spin network basis states are labeled with a spin $j_{e}$ on each edge $e$, a spin $k^{v}_{\\ell}$ on each little loop $\\ell$ attached to a vertex $v$, and an intertwiner $i_{v}$ at each vertex living in the tensor product of the attached edges and of the loop spins:\n\\begin{equation}\ni_{v}\\in\\,\\mathrm{Inv}_{\\mathrm{SU}(2)}\\,\n\\Big{[}\n\\bigotimes_{e\\ni v}{\\mathcal V}^{j_{e}}\n\\,\\otimes\\,\n\\bigotimes_{\\ell \\ni v}({\\mathcal V}^{k^{v}_{\\ell}}\\otimes\\bar{{\\mathcal V}}^{k^{v}_{\\ell}})\n\\Big{]}\\,\n\\end{equation}\nso that the Hilbert space of loopy spin networks on the graph $\\Gamma$ with given number $N_{v}$ of little loops at every vertex is, as announced in the previous section presenting the hierarchy of extended spin network structures:\n\\begin{equation}\n{\\mathcal H}_{\\Gamma,\\{N_{v}\\}}^{\\mathrm{loopy}}\n\\,=\\,\nL^{2}\\big{(}\n\\mathrm{SU}(2)^{\\times\\,(E+\\sum_{v}N_{v})}\n\\,\/\\,\n\\mathrm{SU}(2)^{\\times V}\n\\big{)}\n\\,=\\,\n\\bigoplus_{\\{j_{e},k^{v}_{\\ell},i_{v}\\}}\\,\n{\\mathbb C}\\,|j_{e},k^{v}_{\\ell},i_{v}\\rangle\\,.\n\\end{equation}\nWhat needs to be properly defined and analyzed is the Hilbert space of states with an arbitrary number of little loops, allowing $N_{v}$ to run all over ${\\mathbb N}$ and summing over all these possibilities. 
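Before proceeding, it is useful to keep in mind the most elementary example (a simple illustration in our own notation): a single vertex carrying one single little loop and no external edge. Gauge invariance then reduces to invariance under conjugation, so that the wave-functions are class functions on $\\mathrm{SU}(2)$ and the spin network basis is simply given by the characters,\n\\begin{equation}\n\\Psi(h)\\,=\\,\\Psi(ghg^{-1})\\quad\\forall g\\in\\mathrm{SU}(2)\n\\qquad\\Longrightarrow\\qquad\n\\Psi(h)\\,=\\,\\sum_{k\\in\\frac{{\\mathbb N}}{2}} c_{k}\\,\\chi_{k}(h)\\,,\n\\end{equation}\nthe intertwiner space $\\mathrm{Inv}_{\\mathrm{SU}(2)}\\big{[}{\\mathcal V}^{k}\\otimes\\bar{{\\mathcal V}}^{k}\\big{]}$ being one-dimensional for each spin $k$.\n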
To define this larger space, the full graph structure $\\Gamma$ does not intervene, so we can ignore it and focus on the space of little loops around a single vertex. Thus, for the sake of simplifying the discussion, we will focus on a single vertex with no external edges, but with an arbitrary number of little loops attached to it. This is the {\\it flower} graph.\n\nIn this section, we will assume the little loops to be distinguishable. We define the spin network states with a given number of loops -the flower graph with a fixed number of petals- and we then discuss the whole Hilbert space of states with an arbitrary number of excitations by a projective limit. We define and analyze the holonomy operators acting on that space and we finally implement the BF theory dynamics on that space as a first application of our framework and a consistency check.\nWe will tackle the case of indistinguishable little loops in the next section, imposing bosonic statistics and defining the holonomy operator on symmetrized states.\n\n\\subsection{Loopy intertwiners}\n\\label{sec:LoopyIntertwiners}\n\nLet us start with the flower graph with a fixed number $N$ of petals, that is a single vertex with $N$ little loops attached to it as drawn on fig.\\ref{fig:singlevertex}. We are going to define the wave-functions on that graph, the corresponding decomposition on the spin and intertwiner basis and the action of the holonomy operators.\n\\begin{figure}[h!]\n\\centering\n\\begin{tikzpicture}\n\\coordinate(O1) at (0,0);\n\n\\draw (O1) to[in=-30,out=+30,loop,scale=3,rotate=90] (O1) ++(0,1.2) node {$k_2$};\n\\draw (O1) to[in=-30,out=+30,loop,scale=3,rotate=30] (O1) ++(1,0.7) node {$k_3$};\n\\draw (O1) to[in=-30,out=+30,loop,scale=3,rotate=-30] (O1) ++(1,-0.7) node {$k_4$};\n\\draw (O1) to[in=-30,out=+30,loop,scale=3,rotate=150] (O1) ++(0,-1.2) node {$k_5$};\n\\draw (O1) to[in=-30,out=+30,loop,scale=3,rotate=-90] (O1) ++(-1,0.7) node {$k_{1}$};\n\n\\draw (O1) node[scale=1] {$\\bullet$} ++(-0.32,-0.18) node{${\\mathcal I}$};\n\n\\end{tikzpicture}\n\\caption{We consider the special class of graphs, flowers, with a single vertex and an arbitrary number $N$ of little loops attached to it. Here we have drawn a flower with $N=5$ petals. 
The spin network states on such graphs are labeled by a spin on each loop, $k_{\\ell=1..N}$, and an intertwiner ${\\mathcal I}$ living in the tensor product $\\bigotimes_{\\ell=1}^{N} ({\\mathcal V}^{k_{\\ell}}\\otimes\\bar{{\\mathcal V}}^{k_{\\ell}})$.}\n\\label{fig:singlevertex}\n\\end{figure}\n\nWave-functions are gauge-invariant functions of $N$ group elements, that is functions on $\\mathrm{SU}(2)^{\\times N}$ invariant under the global action by conjugation:\n\\begin{equation}\n\\Psi(h_1,...,h_N) = \\Psi(gh_1g^{-1},...,gh_Ng^{-1})\\,.\n\\end{equation}\nThe scalar product is defined by integration with respect to the Haar measure on $\\mathrm{SU}(2)$ and the resulting Hilbert space is: \n\\begin{equation}\n{\\mathcal H}_{N}=\nL^2\\,\\Big{(}\\mathrm{SU}(2)^{\\times N}\/\\mathrm{Ad}\\,\\mathrm{SU}(2)\\Big{)}\\,.\n\\end{equation}\nA basis of this space is provided as usual by the spin network states, labeled by a spin on each loop, $k_{\\ell=1..N}\\,\\in\\frac{{\\mathbb N}}{2}$, and an intertwiner ${\\mathcal I}$ living in the tensor product $\\bigotimes_{\\ell=1}^{N} ({\\mathcal V}^{k_{\\ell}}\\otimes\\bar{{\\mathcal V}}^{k_{\\ell}})$ and invariant under the action of $\\mathrm{SU}(2)$:\n\\begin{equation}\n\\Psi^{\\{k_{\\ell},{\\mathcal I}\\}}\\big{(}\\{h_{\\ell}\\}_{\\ell=1..N}\\big{)}\n\\,=\\,\n\\langle h_{\\ell}\\,|\\,k_{\\ell},{\\mathcal I}\\rangle\n\\,=\\,\n{\\mathrm{Tr}}\\,\\Big{[}\n{\\mathcal I}\\otimes\\bigotimes_{\\ell=1}^{N}D^{k_{\\ell}}(h_{\\ell})\n\\Big{]}\\,,\n\\end{equation}\nwhere the trace is taken over the tensor product $\\bigotimes_{\\ell=1}^{N} ({\\mathcal V}^{k_{\\ell}}\\otimes\\bar{{\\mathcal V}}^{k_{\\ell}})$. To underline that each spin representation is doubled and that ${\\mathcal I}$ is an intertwiner between the loops around the vertex, we can dub it a {\\it loopy intertwiner}.\n\nThe holonomy operator is the basic gauge-invariant operator of loop quantum gravity. It can shift and increase the spins along the edges on which it acts and so is used in practice as a creation operator. We define the holonomy operators $\\hat{\\chi}_\\ell$ along the loops around the vertex as acting by multiplication on the wave-functions in the group representation:\n\\begin{equation}\n(\\hat{\\chi}_\\ell \\triangleright\\Psi)\\,(h_1,...,h_N)\n\\,=\\,\n\\chi_\\frac{1}{2}(h_\\ell) \\Psi(h_1,...,h_N)\n\\end{equation}\nwhere $\\chi_\\frac{1}{2}$ is the character of the fundamental two-dimensional representation of $\\mathrm{SU}(2)$. We can of course also consider holonomy operators that wrap around several loops around the flower:\n\\begin{equation} \n(\\hat{\\chi}_{i,j,k,l,...} \\triangleright\\Psi)\\,(h_1,...,h_N)\n\\,=\\,\n\\chi_\\frac{1}{2}(h_i h_j h_k h_l ...) \\Psi(h_1,...,h_N)\\,,\n\\end{equation}\nwhere the $i,j,k,l,..$ indices label loops. These operators are obviously still gauge-invariant, and we can further take the inverse or arbitrary powers of each group element.\nThere are two remarks we should make about these multi-loop operators. 
First, they can be decomposed as a composition of single loop operators combining both holonomy operators and grasping operators (action of the ${\\mathfrak{su}}(2)$ generators as a quantization of the flux-vectors) by iterating the following 2-loop identity:\n\\begin{equation}\n\\chi_\\frac{1}{2}(h_i h_j)\n\\,=\\,\n\\f12\\,\\left[\n\\chi_\\frac{1}{2}(h_i)\\chi_\\frac{1}{2}(h_j)\n+\\sum_{a=1}^{3}\\chi_\\frac{1}{2}(h_i\\sigma_{a})\\chi_\\frac{1}{2}(h_j\\sigma_{a})\n\\right]\\,,\n\\end{equation}\nwhere the $\\sigma_{a}$'s are the three Pauli matrices, normalized such that their square is equal to the identity matrix.\nSecond, if the loopy spin network state comes from the gauge fixing of a more complicated graph down to a single vertex, we had chosen a particular maximal tree on that graph to define the gauge-fixing procedure. The loops around the coarse-grained vertex correspond to the edges that didn't belong to the folding tree. Changing the tree actually maps the single loop holonomies onto multi-loop holonomies \\cite{Livine:2013gna}. So, from the coarse-graining perspective, there is no special reason to prefer single loops over multi-loop operators.\n\n\n\n\\subsection{Superposition of number of loops}\n\\label{sec:loopyprojective}\n\nWe would like to allow for an arbitrary number of loops $N$, with possibly an infinite number of loops, and superpositions of number of loops. We will apply the usual projective limit techniques used in loop quantum gravity, as briefly reviewed in section \\ref{ProjectiveLimits}.\nWe assume here that the little loops are all distinguishable, so we avoid all symmetrization issue. The case of indistinguishable loops will be dealt with in the next section \\ref{sec:bosonisation}. We discuss the countable infinity of loops around the vertex, so we can number them using the integers ${\\mathbb N}$. The point, as with standard spin networks, is that a state with a spin-0 on an edge does not actually depend on the group element carried by that edge and is thus equivalent to a state on the flower without that edge. Reversing this logic, a state built on a finite number of loops is equivalent to a state with an arbitrary larger number of loops carrying a spin-0 on all the extra edges , which will allow to define it in the projective limit as a state on the flower with an infinite number of loops.\n\n\\begin{figure}[h!]\n\\centering\n\\begin{tikzpicture}\n\n\\coordinate(O1) at (0,0);\n\n\\draw (O1) to[in=-30,out=+30,loop,scale=3,rotate=90] (O1) ++(0,1.2) node {$k_2$};\n\\draw (O1) to[in=-30,out=+30,loop,scale=3,rotate=-90] (O1) ++(0,-1.2) node {$k_5$};\n\\draw (O1) to[in=-30,out=+30,loop,scale=3,rotate=30] (O1) ++(1,0.7) node {$k_3$};\n\\draw[dashed] (O1) to[in=-30,out=+30,loop,scale=3,rotate=-30] (O1) ++(1,-0.7) node {$k_4$};\n\\draw[dashed] (O1) to[in=-30,out=+30,loop,scale=3,rotate=150] (O1) ++(-1,0.7) node {$k_{1}$};\n\\draw[black!50,dashed] (O1) to[in=-30,out=+30,loop,scale=3,rotate=-150] (O1) ++(-1,-0.7) node {$k_6$};\n\n\\draw (O1) node[scale=1] {$\\bullet$} ;\n\n\\end{tikzpicture}\n\n\\caption{We consider a loopy spin network state with a varying number of loops as a superposition of states with support over different loops.}\n\\label{fig:variousloops}\n\n\\end{figure}\n\nLet us consider the set $\\mathcal{P}_{<\\infty}(\\mathbb{N})$ of all finite subsets of $\\mathbb{N}$. A flower with a finite number of loops corresponds to a finite subset $E\\in \\,\\mathcal{P}_{<\\infty}(\\mathbb{N})$ of indices labeling its loops. 
Since we keep the loops distinguishable, we do not identify subsets with the same cardinality and keep on distinguishing them. We define the Hilbert space of gauge-invariant wave-functions on the flower corresponding to $E$:\n\\begin{equation}\n{\\mathcal H}_{E}\n\\,=\\,\nL^2\\,\\Big{(}\\mathrm{SU}(2)^{E}\/\\mathrm{Ad}\\,\\mathrm{SU}(2)\\Big{)}\\,,\n\\qquad\n\\Psi(\\{h_{\\ell}\\}_{\\ell\\in E})=\\Psi(\\{gh_{\\ell}g^{-1}\\}_{\\ell\\in E})\n\\quad\n\\forall g\\in\\mathrm{SU}(2)\\,.\n\\end{equation}\nWe would like to consider arbitrary superpositions of states with support on arbitrary subsets $E$ of loops, but we do not wish to brutally consider the direct sum over all $E$'s. We still require cylindrical consistency. Indeed, a function on $\\mathrm{SU}(2)^{E}$ which actually does not depend at all on the loop $\\ell_{0}\\in E$ can legitimately be considered as a function on $\\mathrm{SU}(2)^{E\\setminus\\ell_{0}}$. We introduce the equivalence relation making this explicit. For two subsets $E\\subset F$, and two functions $\\Psi$ and $\\widetilde{\\Psi}$ respectively on $\\mathrm{SU}(2)^{E}$ and $\\mathrm{SU}(2)^{F}$, the two wave-functions are defined as equivalent if:\n\\begin{equation}\nE\\subset F\\,,\n\\quad\n\\Psi:\\mathrm{SU}(2)^{E}\\rightarrow{\\mathbb C}\\,,\n\\quad\n\\widetilde{\\Psi}:\\mathrm{SU}(2)^{F}\\rightarrow{\\mathbb C}\\,,\n\\qquad\n\\Psi\\sim\\widetilde{\\Psi}\n\\quad\\Leftrightarrow\\quad\n\\widetilde{\\Psi}(\\{h_{\\ell}\\}_{\\ell\\in F})\n\\,=\\,\n{\\Psi}(\\{h_{\\ell}\\}_{\\ell\\in E})\\,,\n\\end{equation}\nthat is, the function $\\widetilde{\\Psi}$ on the larger set $F$ does not depend on the group elements $h_{\\ell}$ for $\\ell\\in F\\setminus E$ and coincides with the function $\\Psi$ on the smaller set $E$. More generally, when neither of the two subsets $E$ and $F$ contains the other, we transit through their intersection $E\\cap F$.\n\nThe space of wave-functions in the projective limit is defined as the union over all subsets $E$ of functions on $\\mathrm{SU}(2)^{E}$, quotiented by this equivalence. We similarly define the projective limit of the integration measure over $\\mathrm{SU}(2)$. We use this measure to define the Hilbert space ${\\mathcal H}^{\\mathrm{loopy}}$ of states on the flower with an arbitrary number of loops. All the rigorous mathematical definitions and proofs are given in the appendix \\ref{app:ProjectiveLimit}.\n\nThe practical way to see this Hilbert space is to use the spin network basis and understand that a loop carrying a spin-0 means that the wave-function actually does not depend on the group element living on that loop. For every state, we can thus reduce its underlying graph to the minimal possible one by removing all the loops with trivial dependency. Following this logic, for every subset $E$, we define the space of proper states living on $E$, that is without any spin-0 on its loops. 
This amounts to removing all possible 0-modes:\n\\begin{equation}\n\\mathcal{H}_{E}^0 \n\\,=\\,\n\\Bigg{\\{}\n\\Psi\\in{\\mathcal H}_{E}\n\\,:\\,\n\\forall \\ell_{0}\\in E\\,,\\,\\,\\int_{\\mathrm{SU}(2)}\\mathrm{d}h_{\\ell_{0}}\\,\\Psi =0\n\\Bigg{\\}}\\,.\n\\end{equation}\nWe can decompose the Hilbert space of states on the subset $E\\subset{\\mathbb N}$ of loops onto proper states:\n\\begin{prop}\n\\label{proper}\nThe Hilbert space ${\\mathcal H}_{E}$ on loopy intertwiners on the set of loops $E$ decomposes as a direct sum of the Hilbert spaces of proper states with support on every subset of $E$:\n\\begin{equation}\n\\mathcal{H}_{E} \\simeq \\bigoplus_{F \\subset E} \\mathcal{H}_{F}^0\n\\,.\n\\end{equation}\nThis isomorphism is realized through the projections $f_{F}=P_{E,F}f\\in{\\mathcal H}^{0}_{F}$, acting on wave-functions $f\\in{\\mathcal H}_{E}$, defined for an arbitrary subset $F\\subset E$:\n\\begin{equation}\nf_{F}\\big{(}\n\\{h_{\\ell}\\}_{\\ell\\in F}\n\\big{)}\n\\,=\\,\n\\sum_{\\widetilde{F}\\subset F}\n(-1)^{\\#\\widetilde{F}}\n\\int \\prod_{\\ell\\in E\\setminus F}\\mathrm{d}g_{\\ell}\n\\prod_{\\ell\\in\\widetilde{F}}\\mathrm{d}k_{\\ell}\\,\nf\\big{(}\n\\{h_{\\ell}\\}_{\\ell\\in F\\setminus \\widetilde{F}},\n\\{k_{\\ell}\\}_{\\ell\\in\\widetilde{F}},\n\\{g_{\\ell}\\}_{\\ell\\in E\\setminus F}\n\\big{)}\\,.\n\\end{equation}\nThese projections realize a combinatorial transform of the state $f\\in{\\mathcal H}_{E}$:\n\\begin{equation}\nf=\\sum_{F\\subset E} f_{F}\\,,\n\\qquad\nf_{F}\\in{\\mathcal H}_{F}^{0}\\,,\n\\qquad\n\\forall\\ell\\in F\\,,\\quad\\int \\mathrm{d}h_{\\ell}\\,f_{F}=0\\,.\n\\end{equation}\n\\end{prop}\nThis decomposition is straightforward to prove. It will also be crucial in the case of undistinguishable loops and symmetrized states, as we will see in the next section \\ref{sec:bosonisation}.\nThen, as we show in the appendix \\ref{app:ProjectiveLimit}, the Hilbert space of loopy spin networks on the flower, with an arbitrary number of distinguishable loops, defined as the projective limit of the Hilbert spaces ${\\mathcal H}_{E}$ is realized as the direct sum of those spaces of proper states:\n\\begin{equation}\n\\mathcal{H}^{\\mathrm{loopy}} \\simeq \\bigoplus_{F \\in \\mathcal{P}_{<\\infty}(\\mathbb{N})} \\mathcal{H}_{F}^0\\,,\n\\qquad\n\\mathcal{H}_{F}^0 = \\bigoplus_{j_{\\ell \\in F} \\neq 0, {\\mathcal I}} \\mathbb{C} |j_{\\ell \\in F}, {\\mathcal I}\\rangle\\,.\n\\end{equation}\n\n\n\\subsection{Holonomy operators as creation and annihilation operators}\n\nWe can revisit the definition of the holonomy operators\\footnotemark on our Hilbert space ${\\mathcal H}$ of states with arbitrary number of loops. Let us consider the loop $\\ell_{0}\\in{\\mathbb N}$ and define the corresponding holonomy operator $\\hat{\\chi}_{\\ell_{0}}$. Looking at its action on a state $\\Psi$ with finite number of loops living in the Hilbert space ${\\mathcal H}_{E}$, we have two possibilities: either the loop $\\ell_{0}$ belongs to the subset $E$ or it doesn't. 
If the acting loop $\\ell_{0}$ is already a loop of our state $\\Psi$, then the holonomy operator acts on it as before by multiplication:\n\\footnotetext{\nIn order to identify a complete set of operators acting on the Hilbert space ${\\mathcal H}^{\\mathrm{loopy}}$, we should further consider multi-loop holonomy operators, grasping operators or deformation operators such as the $\\mathrm{U}(N)$ operators \\cite{Borja:2010rc}, but in this first exploration we decide to focus on the single-loop holonomy operator.}\n\\begin{equation}\n\\ell_{0}\\in E\\,,\n\\quad\n\\Psi\\in{\\mathcal H}_{E}\\,,\n\\quad\n\\hat{\\chi}_{\\ell_{0}}\\,\\Psi\\in{\\mathcal H}_{E}\\,,\n\\qquad\n(\\hat{\\chi}_{\\ell_{0}}\\Psi)\\,(\\{h_{\\ell}\\}_{\\ell\\in E})\n\\,=\\,\n\\chi_{\\frac{1}{2}}(h_{\\ell_{0}})\\,\\Psi\\,(\\{h_{\\ell}\\}_{\\ell\\in E})\\,.\n\\end{equation}\nIf the acting loop doesn't belong to the initial subset $E$, we use the cylindrical consistency equivalence relation and we embed both the new loop and the initial loops in a larger graph, say $E\\cup\\{\\ell_{0}\\}$,\n\\begin{equation}\n\\ell_{0}\\notin E\\,,\n\\quad\n\\Psi\\in{\\mathcal H}_{E}\\,,\n\\quad\n\\hat{\\chi}_{\\ell_{0}}\\,\\Psi\\in{\\mathcal H}_{E\\cup\\{\\ell_{0}\\}}\\,,\n\\qquad\n(\\hat{\\chi}_{\\ell_{0}}\\Psi)\\,(\\{h_{\\ell}\\}_{\\ell\\in E})\n\\,=\\,\n\\chi_{\\frac{1}{2}}(h_{\\ell_{0}})\\,\\Psi\\,(\\{h_{\\ell}\\}_{\\ell\\in E})\\,,\n\\end{equation}\nwith the holonomy operator $\\hat{\\chi}_{\\ell_{0}}$ acting as a creation operator, creating a new loop and curvature excitation.\nSince the $\\mathrm{SU}(2)$ character $\\chi_{\\frac{1}{2}}$ is real and bounded by two, $|\\chi_{\\frac{1}{2}}|\\le 2$, we can check that the holonomy operators $\\hat{\\chi}_{\\ell}$ are Hermitian, bounded and thus essentially self-adjoint.\n\nThe holonomy operator $\\hat{\\chi}_{\\ell}$ is Hermitian and has a component acting as a creation operator. It must have an annihilation counterpart. The best way to see this explicitly is to write its action on proper states, consistently removing the zero-modes. Indeed, if a loop carries a spin $\\f12$, then it gets partly annihilated by the holonomy operator:\n\\begin{equation}\n\\ell_{0}\\in E\\,,\n\\quad\n\\Psi\\in{\\mathcal H}_{E}^{0}\\,,\n\\quad\n\\hat{\\chi}_{\\ell_{0}}\\,\\Psi\\in{\\mathcal H}_{E}^{0}\\oplus{\\mathcal H}_{E\\setminus\\{\\ell_{0}\\}}^{0}\\,, \\nonumber\n\\end{equation}\n\\begin{equation}\n(\\hat{\\chi}_{\\ell_{0}}\\Psi)\\,(\\{h_{\\ell}\\}_{\\ell\\in E})\n\\,=\\,\n\\underset{\\in\\,{\\mathcal H}_{E}^{0}}{\\underbrace{\\Bigg{[}\\chi_{\\frac{1}{2}}(h_{\\ell_{0}})\\Psi\\,(\\{h_{\\ell}\\}_{\\ell\\in E})\n-\\int\\mathrm{d}h_{\\ell_{0}}\\chi_{\\frac{1}{2}}(h_{\\ell_{0}})\\Psi\\,(\\{h_{\\ell}\\}_{\\ell\\in E})\\Bigg{]}}}\n+\\underset{\\in\\,{\\mathcal H}_{E\\setminus\\{\\ell_{0}\\}}^{0}}\n{\\underbrace{\\Bigg{[}\\int\\mathrm{d}h_{\\ell_{0}}\\chi_{\\frac{1}{2}}(h_{\\ell_{0}})\\Psi\\,(\\{h_{\\ell}\\}_{\\ell\\in E})\\Bigg{]}}}\n\\,.\n\\end{equation}\nThis way, it is clear that the holonomy operator $\\hat{\\chi}_{\\ell_{0}}$ creates transitions adding and removing one loop. 
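As an elementary check of this formula (in our own notation), take a proper state carrying a spin $\\f12$ on the loop $\\ell_{0}$, of the factorized form $\\Psi=\\chi_{\\f12}(h_{\\ell_{0}})\\,\\widetilde{\\Psi}$ with $\\widetilde{\\Psi}\\in{\\mathcal H}^{0}_{E\\setminus\\{\\ell_{0}\\}}$. The $\\mathrm{SU}(2)$ character recoupling $\\chi_{\\f12}^{2}=\\chi_{1}+\\chi_{0}$ together with the normalization $\\int\\mathrm{d}h\\,\\chi_{\\f12}(h)^{2}=1$ then gives\n\\begin{equation}\n\\hat{\\chi}_{\\ell_{0}}\\,\\Psi\n\\,=\\,\n\\underset{\\in\\,{\\mathcal H}_{E}^{0}}{\\underbrace{\\chi_{1}(h_{\\ell_{0}})\\,\\widetilde{\\Psi}}}\n\\,+\\,\n\\underset{\\in\\,{\\mathcal H}_{E\\setminus\\{\\ell_{0}\\}}^{0}}{\\underbrace{\\widetilde{\\Psi}}}\\,,\n\\end{equation}\nwhere the first term raises the spin on the loop from $\\f12$ to $1$ while the second term, with $\\chi_{0}=1$, removes the loop altogether.\n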
This proper state decomposition of the holonomy operator will become essential when defining it on the Fock space of symmetrized loopy spin networks in the next section \\ref{sec:bosonisation}.\n\n\\subsection{Imposing BF dynamics on loopy spin networks}\n\nNow that we have described the whole kinematics of loopy spin networks, with distinguishable loops, we would like to tackle the issue of the dynamics and of imposing the Hamiltonian constraints on the Hilbert space of loopy states ${\\mathcal H}^{\\mathrm{loopy}}$. The final goal of our proposal is to write the Hamiltonian constraints of loop quantum gravity on ${\\mathcal H}^{\\mathrm{loopy}}$, in a form that explicitly allows for local degrees of freedom, to study their renormalization group flow under coarse-graining and to extract the large scale or continuum limit.\nHere we will instead describe the much simpler BF dynamics. BF theory can be considered as a consistency check for all attempts and methods to define the dynamics in (loop) quantum gravity\\footnotemark. Its physical states are well-known and the Hamiltonian constraints project onto flat connection states. It is furthermore a topological theory with no local degrees of freedom -they are pure gauge.\nFinally it has a trivial renormalization flow. Indeed the flatness constraint behaves very nicely under coarse-graining, as illustrated on fig.\\ref{fig:Flat}~: considering a spin network graph, imposing the flatness of the connection on all small loops guarantees that larger loops will be flat too.\nAll these features must be reflected in any proposal for the quantum dynamics of BF theory. \n\\footnotetext{\nOnce the dynamics of BF theory is properly implemented and well under control in a certain framework, one usually uses it as a starting point for imposing the true gravity dynamics, with local degrees of freedom, relying on the reformulation of general relativity as a BF theory with constraints. 
This is for instance the logic behind the construction of spinfoam models for a quantum gravity path integral \\cite{Livine:2010zx,Perez:2012wv,Bianchi:2012nk}.\n}\n\n\\begin{figure}[h!]\n\n\\centering\n\n\\begin{tikzpicture}\n\\coordinate(A) at (0,0);\n\\coordinate(A1) at (0.2,-0.2);\n\\coordinate(A2) at (0.4,-0.2);\n\\coordinate(B) at (2,0);\n\\coordinate(B1) at (1.8,-0.2);\n\\coordinate(B2) at (2.2,-0.2);\n\\coordinate(B3) at (2.4,-0.2);\n\\coordinate(C) at (4,0);\n\\coordinate(C1) at (3.8,-0.2);\n\\coordinate(D) at (0,-2);\n\\coordinate(D1) at (0.2,-1.8);\n\\coordinate(D2) at (0.2,-0.6);\n\\coordinate(D3) at (0.2,-0.4);\n\\coordinate(E) at (2,-2);\n\\coordinate(E1) at (1.8,-1.8);\n\\coordinate(E2) at (2.2,-1.8);\n\\coordinate(E3) at (2.2,-0.6);\n\\coordinate(E4) at (2.2,-0.4);\n\\coordinate(F) at (4,-2);\n\\coordinate(F1) at (3.8,-1.8);\n\\coordinate(O1) at (1,-1);\n\\coordinate(O2) at (3,-1);\n\n\n\\draw[dashed] (A) -- (B) -- (E) -- (D) -- (A);\n\\draw[dashed] (B) -- (C) -- (F) -- (E);\n\n\\draw (A) node {$\\bullet$};\n\\draw (B) node {$\\bullet$};\n\\draw (C) node {$\\bullet$};\n\\draw (D) node {$\\bullet$};\n\\draw (E) node {$\\bullet$};\n\\draw (F) node {$\\bullet$};\n\n\n\\draw[blue,rounded corners,thick] (A2) -- (B1) -- (E1) -- (D1) -- (D2);\n\\draw[blue,thick,->,>=stealth] (D2) -- (D3);\n\\draw[blue,rounded corners,thick] (B3) -- (C1) -- (F1) -- (E2) -- (E3);\n\\draw[blue,thick,->,>=stealth] (E3) -- (E4);\n\n\\draw[blue] (O1) node[scale=2] {\\textbf{${\\mathbb{I}}$}};\n\\draw[blue] (O2) node[scale=2] {\\textbf{${\\mathbb{I}}$}};\n\n\n\\draw[->,>=stealth,very thick] (5,-1) -- (7,-1);\n\n\\coordinate(P) at (8,0);\n\\coordinate(A0) at ($(P)+(0,0)$);\n\\coordinate(A01) at ($(P)+(0.2,-0.2)$);\n\\coordinate(A02) at ($(P)+(0.4,-0.2)$);\n\\coordinate(B0) at ($(P)+(2,0)$);\n\\coordinate(B01) at ($(P)+(1.8,-0.2)$);\n\\coordinate(B02) at ($(P)+(2.2,-0.2)$);\n\\coordinate(B03) at ($(P)+(2.4,-0.2)$);\n\\coordinate(C0) at ($(P)+(4,0)$);\n\\coordinate(C01) at ($(P)+(3.8,-0.2)$);\n\\coordinate(D0) at ($(P)+(0,-2)$);\n\\coordinate(D01) at ($(P)+(0.2,-1.8)$);\n\\coordinate(D02) at ($(P)+(0.2,-0.6)$);\n\\coordinate(D03) at ($(P)+(0.2,-0.4)$);\n\\coordinate(E0) at ($(P)+(2,-2)$);\n\\coordinate(E01) at ($(P)+(1.8,-1.8)$);\n\\coordinate(E02) at ($(P)+(2.2,-1.8)$);\n\\coordinate(E03) at ($(P)+(2.2,-0.6)$);\n\\coordinate(E04) at ($(P)+(2.2,-0.4)$);\n\\coordinate(F0) at ($(P)+(4,-2)$);\n\\coordinate(F01) at ($(P)+(3.8,-1.8)$);\n\\coordinate(O0) at ($(P)+(2,-1)$);\n\n\\draw[dashed] (A0) -- (B0);\n\\draw[dashed,gray] (B0) -- (E0);\n\\draw[dashed] (E0) -- (D0) -- (A0);\n\\draw[dashed] (B0) -- (C0) -- (F0) -- (E0);\n\n\\draw (A0) node {$\\bullet$};\n\\draw (B0) node {$\\bullet$};\n\\draw (C0) node {$\\bullet$};\n\\draw (D0) node {$\\bullet$};\n\\draw (E0) node {$\\bullet$};\n\\draw (F0) node {$\\bullet$};\n\n\\draw[blue,rounded corners,thick] (A02) -- (C01) -- (F01) -- (D01) -- (D02);\n\\draw[blue,thick,->,>=stealth] (D02) -- (D03);\n\n\\draw[blue] (O0) node[scale=2] {\\textbf{${\\mathbb{I}}$}};\n\n\\end{tikzpicture}\n\n\\caption{In BF theory, holonomies behave very nicely under coarse-graining. If each small loops is flat, large loops are flat too. 
In other words, the physical state of BF theory is a flat space, which is flat at all scales.}\n\\label{fig:Flat}\n\n\\end{figure}\n\nConsidering the full space of loopy spin networks on some arbitrary graph $\\Gamma$, we would like the BF Hamiltonian constraints to project onto the flat connection state(s), that is, to impose flatness around all the loops of the graph $\\Gamma$ and to kill all the local excitations represented by the little loops at every vertex. Flatness around the loops of the background graph is the standard result for BF constraints. So here we will focus on the fate of the little loops that we introduced. For this purpose, it suffices to focus on a single vertex, that is to work on the flower graph.\n\nConsidering the flower graph with an arbitrary number of loops, as defined above, we introduce the following set of constraints:\n\\begin{equation}\n\\forall \\ell \\in \\mathbb{N},\n\\quad\n\\big{(}\\hat{\\chi}_\\ell-2\\big{)} |\\Psi\\rangle \\,=\\, 0\\,.\n\\end{equation}\nWe impose one constraint for every (possible) loop by requiring that the corresponding holonomy operator saturates its bound and projects on its highest eigenvalue. These constraints all commute with each other.\nLet us underline the dual role of Hamiltonian constraints. As first class constraints, we need to solve them and identify their solution space, but they also generate gauge transformations and we need to gauge out their action. Here, the holonomy constraint operators not only impose the flatness of the connection, they also imply that the little loops are pure gauge, so that their action can change the number of loops to arbitrary values.\nWe will see below that these one-loop holonomy constraints are almost enough to fully constrain the theory to the single flat state on the flower graph.\n\n\\bigskip\n\nLet us solve these constraints and consider a loop $\\ell_{0}$ and the action of its holonomy operator $\\widehat{\\chi}_{\\ell_{0}}$ on a wave-function $\\Psi\\in{\\mathcal H}_{E}$ with support on the finite subset $E\\subset{\\mathbb N}$ of loops. A first case is when $\\ell_{0}\\in E$ belongs to the subset, in which case we have a simple functional equation on $\\mathrm{SU}(2)^{E}$:\n$$\n(\\hat{\\chi}_{\\ell_{0}}\\Psi)\\,(\\{h_{\\ell}\\}_{\\ell\\in E})\n\\,=\\,\n\\chi_{\\frac{1}{2}}(h_{\\ell_{0}})\\Psi\\,(\\{h_{\\ell}\\}_{\\ell\\in E})\n\\,=\\,\n2\\,\\Psi\\,(\\{h_{\\ell}\\}_{\\ell\\in E})\\,.\n$$\nThe second case is when the considered loop $\\ell_{0}\\notin E$ doesn't belong to the subset. The holonomy operator $\\hat{\\chi}_{\\ell_{0}}$ then creates a loop, making a transition from ${\\mathcal H}_{E}^{0}$ to the orthogonal space ${\\mathcal H}_{E\\cup \\{\\ell_{0}\\}}^{0}$. This illustrates that the flow generated by those Hamiltonian constraints can arbitrarily shift the number of loops and therefore the little loops become pure gauge at the dynamical level in BF theory. This also means that there is no solution to all holonomy constraints with support on a finite subset $E$: a physical state must have support on all possible loops.\n\n\\medskip\n\nTo be rigorous, we need to go to the dual space $({\\mathcal H}^{\\mathrm{loopy}})^*$ and solve the holonomy constraints on the space of distributions defined in the projective limit. 
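Let us first note why we cannot stay within the original Hilbert space: writing $h=e^{i\\theta\\,\\hat{n}\\cdot\\vec{\\sigma}\/2}$, the fundamental character satisfies\n$$\n\\chi_{\\f12}(h)=2\\cos\\frac{\\theta}{2}\\,<\\,2\n\\qquad\\textrm{for all}\\quad h\\ne{\\mathbb{I}}\\,,\n$$\nso the pointwise equation $(\\chi_{\\f12}(h)-2)\\Psi(h)=0$ forces a square-integrable wave-function $\\Psi$ to vanish almost everywhere. The only $L^{2}$ solution is thus the zero state and the flat state can only exist as a distribution.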
We are looking for a family of distributions $\\varphi_{E}$ on $\\mathrm{SU}(2)^{E}$, that is, continuous linear forms over smooth functions on $\\mathrm{SU}(2)^{E}$ (see appendix \\ref{app:distribution} for a discussion of the definition of distributions over $\\mathrm{SU}(2)$). Cylindrical consistency means that their evaluations on two cylindrically equivalent smooth functions must be equal:\n$$\n\\forall E\\subset \\widetilde{E}\\,,\\,\\,\nf_{E}\\sim f_{\\widetilde{E}}\\quad\n\\Rightarrow\n\\varphi_{E}(f_{E})=\\int_{\\mathrm{SU}(2)^{E}}\\varphi_{E}f_{E}\n\\,=\\,\n\\int_{\\mathrm{SU}(2)^{\\widetilde{E}}}\\varphi_{\\widetilde{E}}f_{\\widetilde{E}}=\\varphi_{\\widetilde{E}}(f_{\\widetilde{E}})\\,.\n$$\nThen the holonomy constraints read:\n$$\n\\forall \\ell\\in{\\mathbb N}\\,,\n\\forall E \\ni \\ell\\,,\n\\forall f_{E}\\in{\\cal C}^{\\infty}_{\\mathrm{SU}(2)^{E}}\\,,\n\\,\\,\n\\int_{\\mathrm{SU}(2)^{E}}\\ \\varphi_{E} (\\chi_{\\ell}-2)f_{E}\n\\,=\\,0\\,,\n$$\nwhere we have considered by default that the loop $\\ell$ belongs to the wave-function support $E$. Indeed, if $\\ell$ didn't belong to $E$, then we could enlarge the subset $E$ to $E\\cup \\{\\ell\\}$ by cylindrical consistency and consider both the test function $f$ and the distribution $\\varphi$ as living on that larger subset.\nOur goal is to show that the unique solution to these equations is the flat state, i.e. that there exists $\\lambda\\in{\\mathbb C}$ such that $\\varphi_{E}=\\lambda\\,\\delta^{\\otimes E}$:\n\\begin{equation}\n\\forall f_{E}\\in{\\cal C}^{\\infty}_{\\mathrm{SU}(2)^{E}}\\,,\n\\varphi_{E}(f_{E})=\\lambda \\delta_{E}(f_{E})\n=\n\\lambda\\int_{\\mathrm{SU}(2)^{E}}\\prod_{\\ell\\in E}\\delta(h_{\\ell}) f_{E}(\\{h_{\\ell}\\}_{\\ell\\in E})\n=\n\\lambda f_{E}({\\mathbb{I}},..,{\\mathbb{I}})\\,.\n\\end{equation}\nCylindrical consistency simply requires that the factor $\\lambda$ does not depend on the subset $E$.\nSo we are led to solve the holonomy constraint on every finite subset $E$. Thus, let us consider the functional equation on $\\mathrm{SU}(2)^{N}$:\n\\begin{equation}\n\\forall 1\\le\\ell\\le N\\,,\\,\\,\n\\left(\\hat{\\chi}_\\ell - 2\\right)\\varphi = 0\\,,\n\\end{equation}\nwhere we drop the subset label $E$.\n\n\\subsubsection{Holonomy constraint on $\\mathrm{SU}(2)$}\n\nLet us start with the one-loop case and solve for distributions $\\varphi$ on $\\mathrm{SU}(2)$ the equation:\n\\begin{equation}\n\\forall h\\in\\mathrm{SU}(2)\\,,\\,\\,\n\\chi_{\\f12}(h)\\varphi(h)=2\\varphi(h)\\,.\n\\end{equation}\nSince the character $\\chi_{\\f12}$ is smooth and reaches its maximum value $2$ at a single point, the identity ${\\mathbb{I}}$, it seems natural that $\\varphi$ must be a distribution peaked at the identity. We therefore expect the only solution to be the $\\delta$-distribution on $\\mathrm{SU}(2)$, $\\varphi=\\delta$. However, since the identity is actually an extremum of $\\chi_{\\f12}$ and the first derivatives of the character thus vanish at this point, this equation admits more solutions: the first derivatives of the $\\delta$-distribution. This came as a surprise to us.\n\nLet us first assume that $\\varphi$ is gauge-invariant, i.e. invariant under conjugation. 
Its Fourier decomposition on $\\mathrm{SU}(2)$ involves only the characters of all spins:\n$$\n\\varphi=\\sum_{j\\in\\frac\\N2}\\varphi_{j}\\chi_{j}\\,.\n$$\nAs is well known, the holonomy constraint leads to a recursion relation on the coefficients $\\varphi_{j}$:\n\\begin{equation}\n\\chi_{\\f12}\\chi_{j}=\\chi_{j-\\f12}+\\chi_{j+\\f12}\n\\quad\\Rightarrow\\qquad\n2 \\varphi_{0}=\\varphi_{\\f12}\\,,\\quad\n2\\varphi_{j\\ge\\f12}=\\varphi_{j-\\f12}+\\varphi_{j+\\f12}\\,.\n\\end{equation}\nOnce the initial condition $\\varphi_{0}$ is fixed, these lead to a unique solution:\n\\begin{equation}\n\\varphi_{j}=(2j+1)\\varphi_{0}\\,,\n\\qquad\n\\varphi=\\varphi_{0}\\sum_{j}(2j+1)\\chi_{j}=\\varphi_{0}\\delta\\,.\n\\end{equation}\nWhen solving such functional equations in the Fourier basis, one should nevertheless be very careful to work with well-defined distributions. These are characterized by Fourier coefficients $\\varphi_{j}$ growing at most polynomially with the spin $j$. This ensures that evaluations $\\int f\\varphi$ of the distribution $\\varphi$ on smooth test functions $f$ are convergent series. The $\\delta$-distribution is clearly a good solution. But, as an example, solving for eigenvectors of the holonomy operator associated to (real) eigenvalues (strictly) larger than 2 would lead to exponentially growing Fourier coefficients, which are too divergent to define a proper distribution. The interested reader will find more details in the appendix \\ref{app:distribution}.\n\n\\medskip\n\nOn the space of functions invariant under conjugation, everything works as expected. Let us now consider the general case dropping the requirement of gauge-invariance. The $\\delta$-distribution is obviously still a solution:\n\\begin{equation}\n\\forall f\\in{\\mathcal C}^{\\infty}_{\\mathrm{SU}(2)}\\,,\n\\,\\,\n\\int f\\, (\\chi_{\\f12}-2)\\,\\delta\n=\n(\\chi_{\\f12}({\\mathbb{I}})-2)f({\\mathbb{I}})\n=0\\,.\n\\end{equation}\nBut now the first derivatives of the $\\delta$-distribution are also solutions:\n\\begin{equation}\n\\forall f\\in{\\mathcal C}^{\\infty}_{\\mathrm{SU}(2)}\\,,\n\\,\\,\n\\int f\\, (\\chi_{\\f12}-2)\\,\\partial_{x}\\delta\n=\n\\left.-(\\partial_{x}\\chi_{\\f12})\\,f-(\\chi_{\\f12}-2)\\,\\partial_{x}f\\right|_{{\\mathbb{I}}}\n=0\\,,\n\\label{ppdelta}\n\\end{equation}\nwhere $x\\in{\\mathbb R}^{3}$ indicates the direction of the derivative: the first term vanishes since the derivatives of the character vanish at the identity, which is an extremum, while the second term vanishes since $\\chi_{\\f12}({\\mathbb{I}})=2$.\nWe remind the reader that the right-derivative $\\partial_{x}^{R}$ on $\\mathrm{SU}(2)$ is an anti-Hermitian operator ($i\\partial$ is Hermitian) defined by the infinitesimal action of the ${\\mathfrak{su}}(2)$ generator $\\vec{x}\\cdot\\vec{J}$ (where the $\\vec{J}$ in the fundamental spin-$\\f12$ representation are simply half the Pauli matrices):\n\\begin{equation}\n\\partial_{x}^{R}f(h)=\n\\lim_{\\epsilon\\rightarrow 0}\\frac{f(h e^{i\\epsilon \\vec{x}\\cdot \\vec{J}})-f(h)}{\\epsilon}\n=if(h\\, x)\\,,\\quad\n\\textrm{with}\\quad x= \\vec{x}\\cdot \\vec{J}\\,.\n\\end{equation}\nWe usually differentiate along the three directions in ${\\mathbb R}^{3}\\sim{\\mathfrak{su}}(2)$ leading to the insertion of the generators $J_{a=1,2,3}$:\n\\begin{equation}\n\\partial_{a}^{R}f(h)=if(h J_{a})\\,,\n\\quad\n\\partial_{a}^{L}f(h)=if(J_{a}h)\\,.\n\\end{equation}\nActing on the $\\delta$-distribution gives the following Fourier decomposition for its derivatives 
$\\partial_{a}^{L}\\delta=\\partial_{a}^{R}\\delta=\\partial_{a}\\delta$:\n\\begin{equation}\n\\partial_{a}\\delta(h)\n=i\\sum_{j} (2j+1)D^{j}_{nm}(J_{a})\\,D^{j}_{mn}(h)\\,,\n\\end{equation}\nwhere we use the Wigner matrices for the group element $h$ and the ${\\mathfrak{su}}(2)$ generators.\n\nWe can actually generate a whole tower of higher derivative solutions to the holonomy constraints. We simply need to identify the differential operators whose action on the spin-$\\f12$ character vanishes at the identity. Thus, at second order, we get five new independent solutions given by the following operators:\n\\begin{equation}\n\\partial_{1}\\partial_{2}\n\\,,\\,\\,\n\\partial_{1}\\partial_{3}\n\\,,\\,\\,\n\\partial_{2}\\partial_{3}\n\\,,\\,\\,\n(\\partial_{1}\\partial_{1}-\\partial_{2}\\partial_{2})\n\\,,\\,\\,\n(\\partial_{1}\\partial_{1}-\\partial_{3}\\partial_{3})\n\\,,\n\\end{equation}\nthat is the $\\partial_{a}\\partial_{b}$ and $(\\partial_{a}\\partial_{a}-\\partial_{b}\\partial_{b})$ for $a\\ne b$. Following this logic, we will get 7 new independent solutions at third order, and so on with $(2n+1)$ independent differential operators at order $n$, for a total of $(n+1)^{2}$ independent solutions to the holonomy constraints given by differential operators of order at most $n$ acting on the $\\delta$-distribution.\n\nSuch as in the conjugation-invariant case, it is enlightening to switch to the Fourier decomposition and translate the holonomy constraint into a recursion relation on the Fourier coefficients. The difference is that we had one Fourier coefficient $\\varphi^{j}$ for each spin $j$ in the gauge-invariant case while in the general case $\\varphi^{j}$ is a $(2j+1)\\times(2j+1)$ matrix. Implementing the recursion, we start from spin 0 and work the way up to higher spins. The problem is that the recursion relations determine only $(2j)^{2}$ matrix elements of $\\varphi^{j}$ in terms of the lower spins coefficients, leaving $(2j+1)^{2}-(2j)^{2}=(4j+1)$ matrix elements free to be specified as initial conditions. This leads to an infinite number of solutions to the recursion relations, which reproduces the tower of higher order derivative solutions.\nThe interested reader will find all of the details on the recursion relations in appendix \\ref{app:recursion}. \n\n\\subsubsection{Introducing the Laplacian constraint on $\\mathrm{SU}(2)$}\n\\label{derivativesolution1}\n\nIf we work with a single loop, a single petal on the flower, then the wave-function is obviously gauge-invariant and we do not have to deal with these extra solutions to the holonomy constraint\\footnotemark.\n\\footnotetext{\nIndeed, since we already proved that the $\\delta$-distribution is the only gauge-invariant solution to the holonomy constraint, all the derivative solutions can not be gauge-invariant. We can also prove this directly.\nTo get a solution invariant under conjugation, we need to contract the derivative indices together. Now a fundamental theorem on rotational invariants states that all $\\mathrm{SO}(3)$-invariant polynomial of $n$ 3d-vectors $\\vec{v}_{i=1..n}$ are generated by scalar products $\\vec{v}_{i}\\cdot\\vec{v}_{j}$ and triple products $\\vec{v}_{i}\\cdot(\\vec{v}_{j}\\wedge\\vec{v}_{k})$. 
We simply have to check that the Laplacian and triple-grasping of the $\\delta$-distribution are not solutions of the holonomy constraints:\n$$\n\\int f (\\chi_{\\f12}-2)\\partial_{a}\\partial_{a}\\delta\n=\n\\Delta \\chi_{\\f12}({\\mathbb{I}})\\,f({\\mathbb{I}})\n=\n\\frac{-3}2\\,f({\\mathbb{I}})\\ne 0\n\\,,\n\\quad\n\\int f (\\chi_{\\f12}-2)\\epsilon^{abc}\\partial_{a}\\partial_{b}\\partial_{c}\\delta\n=\n\\frac{-i^{3}}{2^{3}}\\epsilon^{abc}\\chi_{\\f12}(\\sigma_{c}\\sigma_{b}\\sigma_{a})\\,f({\\mathbb{I}})\n=\n\\frac{3}2\\,f({\\mathbb{I}})\\ne 0\\,.\n$$\n}\nHowever, as soon as we add external legs attached to the vertex (linking the flower to other vertices in the graph) or add more loops, then we have to find a way to suppress those derivative solutions, in $\\partial_{a}\\delta$ and so on, which would lead to extra degrees of freedom as some kind of polarized flat states.\n\nSince we want to ensure the full flatness of the holonomy, the most natural proposal is to constrain all the components of the group element living on the loop and not only its trace:\n$$\n\\forall m,n=\\pm\\f12\\,,\\,\\,\nD^{\\f12}_{mn}(h) \\,\\varphi(h)=\\delta_{mn}\\,\\varphi(h)\\,.\n$$\nOne can indeed check, both from the differential calculus point of view or the recursion relations in Fourier space, that these equations admit the $\\delta$-distribution as unique solutions. We can also go beyond multiplicative operators and insert some differential operators. Then supplementing the trace holonomy constraint with the other constraints $\\chi_{\\f12}\\partial_{a}\\,\\varphi =2\\partial_{a}\\varphi$ for $a=1,2,3$ also ensures a unique flat solution. However, these constraints are not gauge-invariant: the constraint operators map wave-functions invariant under conjugation to non-invariant functions.\n\nIn order to keep gauge-invariant constraints, we go to the second derivatives and consider the Laplacian operator. Actually, we introduce the right-Laplacian $\\Delta\\equiv \\sum_{a}\\partial_{a}^{R}\\partial_{a}^{R}$ and a mixed Laplacian operator $\\widetilde{\\Delta}\\equiv \\sum_{a}\\partial_{a}^{L}\\partial_{a}^{R}$, and we propose a new constraint\\footnotemark:\n\\begin{equation}\n\\Delta \\varphi =\\widetilde{\\Delta} \\varphi\\,,\n\\end{equation}\n\\footnotetext{\nAs an example, we can see how $\\Delta$ and $\\widetilde{\\Delta}$ differ through their action on the coupled character $\\chi(h_{1}h_{2})$:\n$$\n\\Delta_{1}\\chi_{\\f12}(h_{1}h_{2})=-\\f14\\chi_{\\f12}(h_{1}\\sigma_{a}\\sigma_{a}h_{2})=-\\f34\\chi_{\\f12}(h_{1}h_{2})\n\\,,\n\\quad\n\\widetilde{\\Delta}_{1}\\chi_{\\f12}(h_{1}h_{2})=-\\f14\\chi_{\\f12}(\\sigma_{a}h_{1}\\sigma_{a}h_{2})\n=-\\f14\\,\\big{(}2\\chi_{\\f12}(h_{1})\\chi_{\\f12}(h_{2})-\\chi_{\\f12}(h_{1}h_{2})\\big{)}\\,,\n$$\nwhich are of course equal at $h_{1}={\\mathbb{I}}$.\n}\nAt the classical level, the differential operator $\\partial_{a}$ represents the flux vector $X_{a}$: the right derivative represents the flux $\\vec{X}^{s}$ at the source of the loop while the left derivative is the flux $\\vec{X}^{t}$ at the target of the loop. The target flux is equal to the source flux parallely transported around the loop by the holonomy $h$. The Laplacian constraint is the equality of the scalar product $\\vec{X}^{t}\\cdot\\vec{X}^{s}$ with the squared norm $\\vec{X}^{s}\\cdot\\vec{X}^{s}$ and therefore means that the two flux are equal, $\\vec{X}^{s}=\\vec{X}^{t}$. This implies the flatness of the group element $h$ (up to the $\\mathrm{U}(1)$ stabilizer of the flux vector). 
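More explicitly, since the parallel transport preserves the norm of the flux, $|\\vec{X}^{t}|=|\\vec{X}^{s}|$, the Cauchy-Schwarz inequality gives\n$$\n\\vec{X}^{t}\\cdot\\vec{X}^{s}\\,\\le\\,|\\vec{X}^{t}|\\,|\\vec{X}^{s}|\\,=\\,\\vec{X}^{s}\\cdot\\vec{X}^{s}\\,,\n$$\nwith equality if and only if $\\vec{X}^{t}=\\vec{X}^{s}$, that is if and only if the rotation defined by $h$ leaves the flux vector $\\vec{X}^{s}$ invariant.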
\n\nAt the quantum level, the Laplacian constraint turns out to play a different role. It implies the invariance of the wave-function under conjugation:\n\\begin{equation}\n\\Delta \\varphi =\\widetilde{\\Delta} \\varphi\n\\quad\\Rightarrow\\quad\n\\forall h,g\\in\\mathrm{SU}(2)\\,,\\,\\,\n\\varphi(h)=\\varphi(ghg^{-1})\\,.\n\\end{equation}\nWe rigorously prove this statement in the appendix \\ref{app:Laplacian} by explicitly solving the recursion relations implied by the Laplacian constraint on the Fourier coefficients of $\\varphi$. Another way to understand the relation of the Laplacian constraint to the invariance under conjugation is to think in terms of spin recoupling. Let us call $\\vec{J}^{L,R}$ respectively the ${\\mathfrak{su}}(2)$ generators living at the two ends of the loop and defining the left and right derivations. The two Casimirs, given by the two scalar products $\\vec{J}^{L}\\cdot\\vec{J}^{L}$ and $\\vec{J}^{R}\\cdot\\vec{J}^{R}$, are equal and their (eigen)value is $j(j+1)$ if the loop carries the spin $j$. Then the Laplacian constraint means that their recoupling is trivial:\n\\begin{equation}\n0=\\vec{J}^{R}\\cdot\\vec{J}^{R}-\\vec{J}^{R}\\cdot\\vec{J}^{L}=\\f12(\\vec{J}^{R}-\\vec{J}^{L})^{2}\\,,\n\\end{equation}\nso that the two ends of the loop recouple to the trivial representation, i.e. the spin-0. As illustrated on fig.\\ref{fig:Laplacian}, this also allows us to show that the Laplacian constraint operator $(\\widetilde{\\Delta}-\\Delta)$ is positive and its spectrum is $k(k+1)\/2$ where $k$ is an integer running from 0 to $(2j)$ if the loop carries the spin $j$.\n\\begin{figure}[h!]\n\n\\centering\n\n\\begin{tikzpicture}[scale=0.8]\n\\coordinate(O2) at (-2,0);\n\\coordinate(O3) at (0,0);\n\n\\draw (O3) to[in=-45,out=+45,loop,scale=5] (O3)++(1.6,0) node {$j$};\n\n\\draw[red] (O2) -- (O3) node[midway,below]{$k$};\n\\draw (O3) node {$\\bullet$} ++(0.2,0.6) node{$\\vec{J}^{L}$} ++(0,-1.2) node{$\\vec{J}^{R}$};\n\n\\end{tikzpicture}\n\n\\caption{The left and right derivations respectively act as graspings at the source and target of the loop, inserting ${\\mathfrak{su}}(2)$ generators in the wave-functions. The Laplacian operator $(\\widetilde{\\Delta}-\\Delta)$ then measures the difference between the two scalar products $\\vec{J}^{R}\\cdot\\vec{J}^{R}$ and $\\vec{J}^{R}\\cdot\\vec{J}^{L}$, or equivalently the Casimir $(\\vec{J}^{R}-\\vec{J}^{L})^{2}\/2$ of the recoupling of the spins at the two ends of the loop. Assuming that the loop carries the spin $j$, recoupling $j$ with itself gives a spin $k$ running from 0 to $(2j)$.\n} \n\\label{fig:Laplacian}\n\n\\end{figure}\nOne can also see that the derivatives of the $\\delta$-distribution are eigenstates of $(\\widetilde{\\Delta}-\\Delta)$ with non-vanishing eigenvalues. For example, we compute:\n\\begin{equation}\n\\int f(\\widetilde{\\Delta}-\\Delta)\\partial_{a}^{R}\\delta\n\\,=\\,\ni^{3}\\sum_{b}\n\\big{[}\nf(J_{a}J_{b}J_{b}) -f(J_{b}J_{a}J_{b})\n\\big{]}\n\\,=\\,\n-if(J_{a})\n\\,=\\,\n+\\int f \\partial_{a}^{R}\\delta\n\\,,\n\\end{equation}\nand so on with higher order differential operators. In particular, the derivative distribution $\\partial_{a}\\delta$ corresponds to the eigenvalue $k(k+1)\/2$ for $k=1$. 
Higher order derivatives will explore higher eigenvalues.\n\nTo conclude, the original holonomy constraint, supplemented with the new Laplacian constraint, acting on functions on $\\mathrm{SU}(2)$ admit the $\\delta$-distribution as unique solution: the Laplacian constraint imposes invariance under conjugation while the holonomy constraint then imposes the flatness of the group element along the loop.\n\n\\begin{prop}\nThere is a unique solution (up to a numerical factor) as a distribution over $\\mathrm{SU}(2)$ to the holonomy and Laplacian constraints:\n\\begin{equation}\n\\left|\n\\begin{array}{l}\n(\\widehat{\\chi}-2)\\,\\varphi=0\\\\\n(\\Delta-\\widetilde{\\Delta})\\,\\varphi=0\n\\end{array}\n\\right.\n\\quad\\Longrightarrow\\quad\n\\exists \\lambda\\in{\\mathbb C}\\,,\\,\\, \\varphi(h)=\\lambda\\,\\delta(h)\n\\,.\n\\end{equation}\n\\end{prop}\n\nBelow, we look at the generic case of an arbitrary number of loops. We will show that we can supplement the holonomy constraints around each loop either with Laplacian constraints for each loop or with multi-loop holonomy constraints (that still act by multiplication) wrapping around several loops at once.\n\n\\subsubsection{Holonomy constraints on $\\mathrm{SU}(2)^{N}$ for $N\\ge 2$}\n\\label{derivativesolution2}\n\nWe now turn to the holonomy constraints on $\\mathrm{SU}(2)^{N}$:\n$$\n\\forall 1\\le\\ell\\le N, \\,\\,(\\widehat{\\chi}_{\\ell}-2)\\varphi=0\\,,\n$$\nwith the requirement of invariance under simultaneous conjugation of all the arguments $h_{\\ell}$. Since we do not require the invariance under the individual action of conjugation on each little loop, the gauge invariance is not enough to kill the spurious solution identified above.\nAs proposed above, we can reach the uniqueness of the physical state by further imposing the Laplacian constraint on each loop:\n\\begin{equation}\n\\forall \\ell\\in{\\mathbb N}\\,,\\,\\,\n(\\widetilde{\\Delta}_{\\ell}-\\Delta_{\\ell})\\,\\varphi=0\\,.\n\\end{equation}\nThis now implies the invariance of the wave-function under the individual action of conjugation on each loop. In terms of spin recoupling, each little loop is linked to the vertex by a spin-0, as illustrated on fig.\\ref{fig:0spincoupling}, this effectively trivializes the intertwiner space living at the vertex and the loops can be thought of as decoupled from one another.\nThe holonomy constraints then impose that the only solution state is the $\\delta$-distribution. 
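As a quick consistency check with the conventions introduced above, the factorized flat state $\\delta^{\\otimes N}=\\prod_{\\ell}\\delta(h_{\\ell})$ does satisfy both sets of constraints: for any test function $f$ and any loop $\\ell$,\n$$\n\\int f\\,(\\chi_{\\f12}(h_{\\ell})-2)\\,\\delta^{\\otimes N}\n=\\big{(}\\chi_{\\f12}({\\mathbb{I}})-2\\big{)}\\,f({\\mathbb{I}},..,{\\mathbb{I}})=0\\,,\n\\qquad\n\\int f\\,(\\widetilde{\\Delta}_{\\ell}-\\Delta_{\\ell})\\,\\delta^{\\otimes N}\n=\\sum_{a}\\big{[}\\partial^{R}_{a}\\partial^{L}_{a}-\\partial^{R}_{a}\\partial^{R}_{a}\\big{]}f\\,\\Big{|}_{{\\mathbb{I}},..,{\\mathbb{I}}}=0\\,,\n$$\nwhere the derivatives act on the $\\ell$-th argument: both second order terms insert the same product of ${\\mathfrak{su}}(2)$ generators, $-f(..,J_{a}J_{a},..)$, once evaluated at the identity, so their difference vanishes.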
\n\\begin{figure}[h!]\n\\centering\n\\begin{tikzpicture}[scale=0.55]\n\n\\coordinate(O) at (0,0);\n\\coordinate(O1) at (2,0);\n\\coordinate(O2) at (1,1.7);\n\\coordinate(O3) at (-1,1.7);\n\\coordinate(O4) at (-2,0);\n\\coordinate(O5) at (-1,-1.7);\n\\coordinate(O6) at (1,-1.7);\n\n\n\n\\draw (O) node[scale=1] {$\\bullet$} ;\n\\draw (O1) node[scale=1] {$\\bullet$} ;\n\\draw (O2) node[scale=1] {$\\bullet$} ;\n\\draw (O3) node[scale=1] {$\\bullet$} ;\n\\draw (O4) node[scale=1] {$\\bullet$} ;\n\\draw (O5) node[scale=1] {$\\bullet$} ;\n\\draw (O6) node[scale=1] {$\\bullet$} ;\n\n\\draw[red] (O) -- (O1) ++(-0.9,-0.3) node{$k_{1}$};\n\\draw[red] (O) -- (O2) node[midway,right]{$k_{2}$};\n\\draw[red] (O) -- (O3) node[midway,right]{$k_{3}$};\n\\draw[red] (O) -- (O4) node[midway,above]{$k_{4}$};\n\\draw[red] (O) -- (O5) ++(0,0.8) node{$k_{5}$};\n\\draw[red] (O) -- (O6) ++(-0.75,0.8) node{$k_{6}$};\n\n\n\\draw[in=-45,out=45,scale=5] (O1) to[loop] (O1) ++(0.35,0) node {$j_{1}$};\n\\draw[in=-45,out=45,scale=5,rotate=60] (O2) to[loop] (O2) ++(0.35,0) node {$j_{2}$};\n\\draw[in=-45,out=45,scale=5,rotate=120] (O3) to[loop] (O3) ++(0.35,0) node {$j_{3}$};\n\\draw[in=-45,out=45,scale=5,rotate=180] (O4) to[loop] (O4) ++(0.35,0) node {$j_{4}$};\n\\draw[in=-45,out=45,scale=5,rotate=-120] (O5) to[loop] (O5) ++(0.35,0) node {$j_{5}$};\n\\draw[in=-45,out=45,scale=5,rotate=-60] (O6) to[loop] (O6) ++(0.35,0) node {$j_{6}$};\n\n\\end{tikzpicture}\n\n\\caption{ The Laplacian constraint on a loop $\\ell$ constraint the spin $j_{\\ell}$ carried by the loop to recouple with itself into the trivial representation with vanishing spin $k_{\\ell}=0$. Imposing this constraint on every loop, the vertex then recouples a collection of spin-0, the intertwiner is thus trivial and the loops are totally decoupled.}\n\\label{fig:0spincoupling}\n\n\\end{figure}\n\n\n\\medskip\n\nInstead of imposing the Laplacian constraints, another way to proceed is to introduce multi-loop holonomy constraints. To prove this, let us start by describing the gauge-invariant derivative solutions to the holonomy constraints. The general structure is as follows. One acts with arbitrary derivatives on the $\\delta$-distribution $\\prod_{\\ell=1}^{N}\\delta(h_{\\ell})$. Then to ensure invariance under simultaneous conjugation, one must contract all the indices with a $\\mathrm{SO}(3)$-invariant tensor ${\\mathcal I}$:\n\\begin{equation}\n\\varphi^{{\\mathcal I}}(\\{h_{\\ell}\\})=\n\\sum_{\\{a_{i}^{\\ell}\\}_{i=1..n_{\\ell}}}\n{\\mathcal I}^{a^{1}_{1}..a^{N}_{n_{N}}}\\,\n\\prod_{\\ell=1}^{N}\\partial_{a^{\\ell}_{1}}..\\partial_{a^{\\ell}_{n_{\\ell}}}\\delta(h_{\\ell})\\,,\n\\end{equation}\nwhere $n_{\\ell}$ is the order of the differential operator acting on the loop $\\ell$, for an overall order $n=\\sum_{\\ell}n_{\\ell}$, and ${\\mathcal I}$ is a rotational invariant tensor defining the contraction of the differential indices $a$'s, i.e. it is an intertwiner between $n$ spin-1 representations.\n\nTo be explicit, for $n=2$ differential insertions, there is a single invariant tensor: ${\\mathcal I}^{ab}=\\delta^{ab}$. 
Either we act with the two derivatives on the same group element, but then we already know that $\\Delta\\delta$ is not a solution to the holonomy constraint, or we act on two different loops, getting the non-trivial distribution $\\sum_{a}\\partial_{a}\\delta(h_{1})\\partial_{a}\\delta(h_{2})$ (here we put aside all the other loops, on which no differential operator acts):\n\\begin{equation}\n\\langle \\,\\sum_{a}\\partial_{a}\\delta_{1}\\partial_{a}\\delta_{2}\\,|f\\rangle=\n-f(J_{a},J_{a})\\,,\n\\end{equation}\nwhich yields, up to a sign, the evaluation $f(J_{a},J_{a})$ of the spin network state obtained by acting with the double grasping $J_{a}\\otimes J_{a}$ on the test wave-function $f$. \nWe easily check that this provides a solution to the individual one-loop holonomy constraints:\n\\begin{equation}\n\\forall f\\in {\\mathcal C}^{\\infty}_{\\mathrm{SU}(2)^{2}}\\,,\\,\\,\n\\int f (\\chi_{\\f12}(h_{1})-2)\\,\\sum_{a=1}^{3}\\partial_{a}\\delta(h_{1})\\partial_{a}\\delta(h_{2})\n=\n0\\,.\n\\end{equation}\nThe double grasping, as shown on fig.\\ref{fig:doublegrasping}, couples the two loops. The goal is to suppress such coupling between the two loops in order to get as unique solution the factorized flat state $\\delta^{\\otimes N}$ where all the loops are entirely decoupled.\n\\begin{figure}[h!]\n\\centering\n\\begin{tikzpicture}[scale=1.5]\n\n\\coordinate(O1) at (0,0);\n\\draw (O1) node[scale=2] {$\\bullet$} ;\n\n\\draw (O1) to[in=-45,out=+45,loop,scale=3,rotate=90] node[very near end](A){} (O1)++(0,1) node {$h_1$};\n\\draw (O1) to[in=-45,out=+45,loop,scale=3,rotate=-30] node[very near end](B){} (O1) ++(0.8,-0.7) node {$h_2$};\n\\draw (O1) to[in=-45,out=+45,loop,scale=3,rotate=-150] (O1) ++(-0.8,-0.7) node {$h_3$};\n\n\\draw[dotted,thick] (A) to[bend left] node[very near start,above,right](C){} (B) ++(-0.1,-0.25) node{$\\partial_{a}$};\n\\draw (A) node[scale=1,red!50] {$\\bullet$} ++(0.22,-0.05)node{$\\partial_{a}$};\n\\draw (B) node[scale=1,red!50] {$\\bullet$} ;\n\n\\end{tikzpicture}\n\n\\caption{We act with derivatives $\\partial_{a}$ on the group elements $h_{1}$ and $h_{2}$ and contract the indices, which translates graphically as a double grasping linking the two loops.}\n\\label{fig:doublegrasping}\n\n\\end{figure}\nTo make the system more rigid, the natural constraint to introduce is a two-loop holonomy constraint, which would kill any correlation between the two loops:\n\\begin{equation}\n(\\widehat{\\chi}_{12}-2)\\varphi (h_{1},h_{2})\\equiv\n(\\chi_{\\f12}(h_{1}h_{2})-2)\\varphi (h_{1},h_{2})\n=0\\,.\n\\end{equation}\nWe check that this two-loop constraint eliminates the coupled solution proposed above:\n\\begin{equation}\n\\int f (\\chi_{\\f12}(h_{1}h_{2})-2)\\,\\sum_{a=1}^{3}\\partial_{a}\\delta(h_{1})\\partial_{a}\\delta(h_{2})\n=\n\\left.f\\,\\Delta\\chi_{\\f12}\\right|_{{\\mathbb{I}}}\n=\n-\\f32\\,f({\\mathbb{I}},{\\mathbb{I}})\\,\\ne0\\,.\n\\end{equation}\n\nFor $n=3$ differential insertions, we still have a unique intertwiner, given by the completely antisymmetric tensor $\\epsilon^{abc}$. This corresponds to a triple grasping. 
The three derivatives can all act on the same loop, in which case we do not get a solution of the one-loop holonomy constraint, or they can act on two different loops, in which case it is not a solution of the two-loop holonomy constraints we have just introduced, or they can act on three different loops, in which case we need to introduce a three-loop holonomy constraint to discard it:\n\\begin{equation}\n\\int f (\\chi_{\\f12}(h_{1}h_{2}h_{3})-2)\\epsilon^{abc}\\partial_{a}\\delta(h_{1})\\partial_{b}\\delta(h_{2})\\partial_{c}\\delta(h_{3})\n=\n-\\frac{i^{3}}{2^{3}}\\,\\epsilon^{abc}\\chi_{\\f12}(\\sigma_{a}\\sigma_{b}\\sigma_{c})\\,f({\\mathbb{I}},{\\mathbb{I}},{\\mathbb{I}})\n=\n-\\frac{3}{2}\\,f({\\mathbb{I}},{\\mathbb{I}},{\\mathbb{I}})\\,\\,\\ne0\\,.\n\\end{equation}\n\nFor an arbitrary number $n$ of differential insertions acting on the $N$ loops, the grasping will potentially couple the $N$ loops. In order to kill all those coupled solutions, we introduce all multi-loop holonomy constraints:\n\\begin{equation}\n\\forall E\\subset \\{1,..,N\\}\\,,\n\\,\\,\n\\Big{[}\n\\chi_{\\f12}\\big{(}\n\\prod_{\\ell\\in E}h_{\\ell}\n\\big{)}\n-2\n\\Big{]}\\,\\varphi=0\n\\,.\n\\end{equation}\nThe ordering of the group elements is of course important for the precise definition of the multi-loop operator, but it is irrelevant for ensuring that the action of the corresponding constraint operator on the coupled derivative distributions does not vanish.\nIn fact, looking deeper into the structure of $\\mathrm{SO}(3)$-invariant tensors, \na fundamental theorem on rotational invariants states that all $\\mathrm{SO}(3)$-invariant polynomials of $n$ 3d-vectors $\\vec{v}_{i=1..n}$ are generated by scalar products $\\vec{v}_{i}\\cdot\\vec{v}_{j}$ and triple products $\\vec{v}_{i}\\cdot(\\vec{v}_{j}\\wedge\\vec{v}_{k})$. This means that we only need the two-loop and three-loop holonomy constraints to ensure that the flat state, defined as the $\\delta$-distribution, is the only solution to the Hamiltonian constraints.\n\n\n\n\n\\subsubsection{The full Hamiltonian constraints for BF theory on loopy spin networks}\n\\label{derivativesolution3}\n\nTo summarize the implementation of BF theory on loopy spin networks, we have introduced individual holonomy constraints on each little loop around each vertex of the background graph. This is the usual procedure, for instance when constructing spinfoam amplitudes for BF theory from a canonical point of view. Surprisingly, these constraints are not strong enough to fully constrain the theory to the single flat state and kill all the little loop excitations. This can be traced back to the simple fact that the identity ${\\mathbb{I}}$ is an extremum of the $\\mathrm{SU}(2)$-character $\\chi_{\\f12}$ and thus the derivatives of the character vanish at that point. As a result, the $\\delta$-function on $\\mathrm{SU}(2)$ is not the unique solution to the holonomy constraints, but its first derivatives are also solutions. While all the solution distributions are peaked on the identity and vanish elsewhere, they still allow for grasping operators coupling the loops together. 
To forbid such coupling and enforce a unique physical state, we have shown that we can supplement the original one-loop holonomy constraints either with one-loop Laplacian constraints or with multi-loop holonomy constraints, which leads us to two proposals for the Hamiltonian constraints for BF theory on loopy intertwiners:\n\\begin{itemize}\n\\item We impose on each loop two gauge-invariant constraints, the holonomy constraint that acts by multiplication and the Laplacian constraint which acts by differentiation:\n\\begin{equation}\n\\forall \\ell\\,,\\quad\n\\widehat{\\chi}_{\\ell}\\,\\varphi=2\\varphi\\,,\\quad\n\\Delta_{\\ell}\\,\\varphi=\\widetilde{\\Delta}_{\\ell}\\,\\varphi\\,.\n\\end{equation}\n\n\\item We impose all multi-loop holonomy constraints, requiring not only that the group element $h_{\\ell}$ on each loop $\\ell$ is the identity ${\\mathbb{I}}$ but also that all their products remain flat. This means one constraint for each finite subset $E$ of the set of all loops:\n\\begin{equation}\n\\forall E\\subset {\\mathbb N},\\quad\n\\widehat{\\chi}_{E}\\,\\varphi=2\\varphi\\,,\n\\qquad\n\\widehat{\\chi}_{E}\\,\\varphi (\\{h_{\\ell}\\}_{\\ell\\in{\\mathbb N}})=\n\\chi_{\\f12}\\Big{(}\\prod_{\\ell \\in E}h_{\\ell}\\Big{)}\\,\\varphi (\\{h_{\\ell}\\}_{\\ell\\in{\\mathbb N}})\\,.\n\\end{equation}\nThe ordering of the group elements does not matter in order to impose the flatness. These multi-loop constraints kill any correlation or entanglement between the loops. It is actually sufficient to impose only the two-loop and three-loop holonomy constraints.\n\n\\end{itemize}\n\nIf we only impose the one-loop holonomy constraints, then the totally flat state defined by the $\\delta$-distribution is not the only physical state. We get an infinite-dimensional space of physical states, obtained by the action of first order grasping operators on the $\\delta$-distribution, allowing for non-trivial coupling and correlations between the little loops. It would be interesting to understand the geometrical meaning of those states and whether they play a special role\\footnotemark in the spinfoam models for BF theory (the Ponzano-Regge and Turaev-Viro models for 3d BF theory and the Crane-Yetter model for 4d BF theory). Maybe those local excitations could provide a first extension of the topological BF theory to a field theory with local degrees of freedom.\n\\footnotetext{\nAs an example, we have in mind the recursion relation satisfied by the 6j symbol, which is understood to be the expression of the action of the holonomy operator on the flat state on the tetrahedron graph \\cite{Bonzom:2009zd,Bonzom:2011hm,Bonzom:2014bua}. Our results suggest that the double and triple graspings on the 6j symbol might be other solutions to this recursion relation. That would be especially interesting since the triply grasped 6j symbol is understood to be the first order correction to the q-deformed 6j-symbol \\cite{Freidel:1998ua}.\n}\n\n\\medskip\n\nOn the other hand, imposing the full set of Hamiltonian constraints proposed above leads to a unique physical state for BF theory: the flat state $\\varphi_{\\mathrm{BF}}=\\delta$.\nThis physical state is clearly not normalizable. 
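Indeed, its formal norm is the evaluation of the $\\delta$-distribution at the identity, which diverges:\n$$\n\\langle\\varphi_{\\mathrm{BF}}|\\varphi_{\\mathrm{BF}}\\rangle\n\\,=\\,\n\\int \\delta\\,\\delta\n\\,=\\,\n\\delta({\\mathbb{I}})\n\\,=\\,\n\\sum_{j\\in\\frac{{\\mathbb N}}{2}}(2j+1)^{2}\\,=\\,+\\infty\\,.\n$$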
But since it is unique, it is not a big problem to define the scalar product on this final one-dimension Hilbert space\\footnotemark.\n\\footnotetext{\nIf we had more solutions to the constraints, for example if we only impose the holonomy constraints without the Laplacian constraints, we would propose to render the distributions normalizable by modulating them by a damping operator $\\exp(-\\tau\\Delta)$ (heat kernel) or similar. Expanding all the functions and distributions in Fourier modes, all the scalar products would become finite and we could then compute expectation values of operators on the resulting physical Hilbert space, for instance:\n$$\n\\int \\delta e^{-\\tau\\Delta}\\delta=\\sum_{j}(2j+1)^{2}e^{-\\tau j(j+1)} <+\\infty\n\\,,\\qquad\n\\int \\delta e^{-\\tau\\Delta}\\partial_{a}\\delta=i\\sum_{j}(2j+1)^{2}e^{-\\tau j(j+1)}\\chi_{j}(J_{a})=0\\,.\n$$\nBut we haven't investigated this line of research further.\n}\nThe physical scalar product on the initial Hilbert space of loopy spin networks is defined by projecting on this physical state, which amounts at the end of the day to simply evaluate the wave-functions at the identity i.e. on flat connections:\n\\begin{equation}\n\\forall f,\\tilde{f}\\in{\\mathcal H}^{\\mathrm{loopy}}\\,,\\quad\n\\langle f|\\tilde{f}\\rangle_{\\mathrm{phys}}\n\\,=\\,\n\\langle f|\\varphi_{BF}\\rangle\\,\\langle\\varphi_{BF}|\\tilde{f}\\rangle\n\\,=\\,\n\\overline{\\langle\\varphi_{BF}|f\\rangle}\\,\\langle\\varphi_{BF}|\\tilde{f}\\rangle\n\\,=\\,\n\\overline{f({\\mathbb{I}})}\\,\\tilde{f}({\\mathbb{I}})\\,.\n\\end{equation}\nAs expected, we are left with a single physical state on the flower, the little loops have been projected out and all local degrees of freedom have disappeared.\n\n\\medskip\n\nNow that we have checked that loopy spin networks allow for a correct implementation of BF theory's topological dynamics, we would like to later introduce Hamiltonian constraints allowing for local degrees of freedom. We wouldn't want to kill the little loops as happens for BF theory. The goal would be to have dynamics coupling the little loops to the spins living on the links of the background graph, in such a way that it reproduces the propagation of the local geometry excitations of general relativity in a continuum limit. The strategy would be to slightly modify the BF dynamics -``constrain the BF theory''- most likely following the approaches for the dynamics of discrete\/twisted geometries \\cite{Bonzom:2011hm,Bonzom:2011nv,Bonzom:2013tna} or of EPRL spinfoam models \\cite{Engle:2007wy,Geloun:2010vj,Bianchi:2012nk}.\n\n\n\n\n\n\\section{The Fock space of loopy spin networks}\n\\label{sec:bosonisation}\n\nUp to now, we have introduced the loopy spin network states, on a fixed background graph with an arbitrary number of little loop excitations attached to each vertex. Here we would like to tackle the issue of endowing these local loops with bosonic statistics and define the Fock space of loop spin networks over a background skeleton graph with indistinguishable little loops living at its vertices.\n\nFrom the perspective of coarse-graining, the little loops represent curvature excitations within the bounded region coarse-grained to a single vertex. 
Keeping these little loops distinguishable amounts to remembering that they are excitations of different parts of the internal geometry of that region, while fully coarse-graining the region should erase any memory of internal localization and the little loops should be considered as indistinguishable: incoming energy at the vertex would then equally excite any of those loops, irrespective of their a priori different localization on the internal subgraph that we coarse-grained.\n\nIn this section, we will thus symmetrize our spin network states over the little loops attached at each vertex. The difficulty resides in the compatibility of the symmetrization with the cylindrical consistency. Indeed, a little loop carrying a spin-0 is considered as a non-existent loop, and vice-versa. We cannot symmetrize these non-existent loops with the other loops carrying non-trivial spins: since we would like to allow for an infinite number of loops, the symmetrization operation would be ill-defined. In the framework of usual quantum field theory, this would be similar to considering particles carrying a vanishing momentum as non-existent. We would have to update the definition of the symmetrization to take this new fact into account. Here, we will show how to systematically subtract the 0-mode components of the loopy spin network states, symmetrize over non-trivial little loops and define an appropriate holonomy operator acting on symmetrized states. A resulting subtlety is that we will be led to distinguish three components of the holonomy operator, which respectively conserve the number of loops, act as a creation operator adding one little loop, or act as an annihilation operator removing a loop.\n\n\n\\subsection{Bosonic statistics for loops}\n\nWe would like to define symmetrized loopy intertwiner states in ${\\cal H}^{\\mathrm{loopy}}$. A direct way would be to work directly on states with an arbitrary number of loops\\footnotemark, but imposing invariance under the symmetry group for an infinite number of loops introduces a lot of technicalities. So we follow a more constructive approach and work with a finite number of loops, symmetrize and then allow for a varying number of loops.\n\\footnotetext{\nWe would use an extension of the finite symmetry groups $S_{n}$ to the group of permutations of integers which only act non-trivially on a finite subset:\n$$\nS_\\infty = \\{f : \\mathbb{N} \\rightarrow \\mathbb{N}, f \\textrm{ bijective and }\\exists n \\in \\mathbb{N}, \\forall m>n, f(m)=m\\}\\,.\n$$\nWe would use the canonical action of $S_\\infty$ on the Hilbert spaces of loopy spin networks ${\\mathcal H}_{E}$ with a finite number of loops:\n$$\n\\sigma : {\\mathcal H}_{E} \\rightarrow {\\mathcal H}_{\\sigma(E)}\\,,\n\\qquad\n(\\sigma \\triangleright f)(\\{h_{e_i}\\}) = f(\\{h_{\\sigma^{-1}(e_i)}\\})\\,.\n$$\nThis action is compatible with the cylindrical consistency conditions and naturally extends to the projective limit.\nHowever, requiring invariance of states $|\\Psi\\rangle \\in{\\cal H}^{\\mathrm{loopy}}$ under permutations, $\\forall \\sigma \\in S_\\infty,~\\sigma \\triangleright |\\Psi\\rangle = |\\Psi\\rangle$, only provides non-normalizable states. This forces us to work on the dual space to define symmetrized states and creates unnecessary technicalities for our present purpose.\n}\n\nWe start from the definition of the loopy states in terms of proper states, ${\\cal H}^{\\mathrm{loopy}}=\\bigoplus_{E\\in\\mathcal{P}_{<\\infty}(\\mathbb{N})}{\\mathcal H}_{E}^{0}$. 
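For a single loop, for instance, this amounts to splitting a wave-function into its average over the group and the remaining fluctuation,\n$$\nf(h)\\,=\\,\\int \\mathrm{d}g\\,f(g)\\,+\\,\\Big{[}f(h)-\\int \\mathrm{d}g\\,f(g)\\Big{]}\\,,\n$$\nwhere the second term carries no spin-0 component and defines the corresponding proper state.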
This decomposition has removed all spin-0 modes and avoids all the redundancies due to the cylindrical consistency. We can now symmetrize the states. For each number of loops $N$, we consider gauge-invariant wave-functions, symmetric under the exchange of the $N$ loops and such that no loop carries a vanishing spin. The full symmetrized Hilbert space ${\\cal H}^{\\mathrm{sym}}$ will then be the direct sum over $N$ of all the finite symmetrized states.\n\nLet us realize this programme explicitly. We start with the Hilbert space ${\\cal H}^{\\mathrm{sym}}_{N}$ of wave-functions, $f\\in L^{2}(\\mathrm{SU}(2)^{\\times N})$, gauge-invariant and symmetrized on $N$ loops:\n\\begin{eqnarray}\n\\forall k \\in \\mathrm{SU}(2),&&\nf(h_1,...,h_N) = f(kh_1k^{-1},...,kh_N k^{-1})\\,,\\nonumber\\\\\n\\forall \\sigma \\in S_N, &&\nf(h_1,...,h_N) = f(h_{\\sigma(1)},...,h_{\\sigma(N)})\\,.\n\\end{eqnarray}\nWe define the subspace of proper states, removing the 0 mode:\n\\begin{equation}\n{\\mathcal H}_{N}^{0}=\n\\Bigg{\\{}\nf\\in{\\cal H}^{\\mathrm{sym}}_{N}\n\\,:\\,\n\\int \\mathrm{d}h_{1}\\,f(h_1,...,h_N) =0\n\\Bigg{\\}}\\,.\n\\end{equation}\nWe only need to impose one integration condition, since the function is invariant under permutation of its arguments.\nWe have a simplified version of the decomposition onto proper states given in lemma \\ref{proper}:\n\\begin{lemma}\n\\label{propersym}\nThe Hilbert space of symmetrized states on $N$ loops decomposes as a direct sum of the Hilbert spaces of proper symmetrized states on at most $N$ loops:\n\\begin{equation}\n{\\cal H}^{\\mathrm{sym}}_{N}=\\bigoplus_{n=0}^{N}{\\mathcal H}_{n}^{0}\\,.\n\\end{equation}\nThis isomorphism is realized through a combinatorial transform of the wave-functions:\n\\begin{equation}\n\\label{symresum}\n\\forall f\\in{\\cal H}^{\\mathrm{sym}}_{N}\\,,\\quad\nf(h_{1},..,h_{N})=\\sum_{n=0}^{N}\\,\\,\n\\sum_{1\\le i_{1}<..<i_{n}\\le N}\n\\tilde{f}_{n}(h_{i_{1}},..,h_{i_{n}})\\,,\n\\qquad\n\\tilde{f}_{n}\\in{\\mathcal H}_{n}^{0}\\,.\n\\end{equation}\n\\end{lemma}\nThe proof proceeds as for lemma \\ref{proper}, by recursively subtracting the spin-0 component on each loop. The resulting Fock space of symmetrized loopy intertwiners is the direct sum over all numbers of loops of the proper symmetrized states, ${\\cal H}^{\\mathrm{sym}}=\\bigoplus_{N\\in{\\mathbb N}}{\\mathcal H}_{N}^{0}$, a state being given by a family $(f_{N})_{N\\in{\\mathbb N}}$ of proper symmetrized wave-functions.\n\nWe can now define the holonomy operator on this Fock space. It decomposes into three components, which respectively lower, conserve and raise the number of loops:\n\\begin{definition}\n\\label{AAB-def}\nWe define the three operators $A$, $B$ and $\\widetilde{A}$ on the Fock space ${\\cal H}^{\\mathrm{sym}}$, acting on an arbitrary state $(f_{N})_{N\\in{\\mathbb N}}$ as:\n\\begin{equation}\n(Af)_{N}(h_{1},..,h_{N})\n\\,=\\,\n\\int \\mathrm{d}k\\,\\chi_{\\f12}(k)\\,f_{N+1}(h_{1},..,h_{N},k)\\,,\n\\end{equation}\n\\begin{equation}\n(Bf)_{0}=2f_{0}\\,,\n\\quad\n\\forall N>0\\,,\\,\\,\n(Bf)_{N}(h_{1},..,h_{N})\n\\,=\\,\n\\f1N\\sum_{i=1}^{N}\\Bigg{[}\n\\chi_{\\f12}(h_{i})f_{N}(h_{1},..,h_{N})\n-\\int \\mathrm{d}k_{i}\\,\\chi_{\\f12}(k_{i})\\,f_{N}(h_{1},..,k_{i},..,h_{N})\n\\Bigg{]}\n\\end{equation}\n\\begin{equation}\n(\\widetilde{A} f)_{0}=0\\,,\n\\quad\n\\forall N>0\\,,\\,\\,\n(\\widetilde{A} f)_{N}(h_{1},..,h_{N})\n\\,=\\,\n\\f1{N}\\sum_{i=1}^{N}\\chi_{\\f12}(h_{i})f_{N-1}(h_{1},..,\\widehat{h_{i}},..,h_{N})\n\\end{equation}\nThe operator $B$ is the usual action of the holonomy operator by multiplication by the character up to the subtraction of the resulting spin-0 component.\nThe operator $A$ is the annihilation operator, removing one loop, while the operator $\\widetilde{A}$ creates a new loop. 
We have the following relations on ${\\cal H}^{\\mathrm{sym}}$:\n\\begin{equation}\n\\widetilde{A}=A^{\\dagger}\\,,\\qquad\nB=B^{\\dagger}\n\\,.\n\\end{equation}\nFinally the one-loop holonomy operator for spin $\\f12$ is defined as the sum of these three components and is self-adjoint:\n\\begin{equation}\n\\widehat{\\chi}_{\\f12}\\,\\equiv\\,\\f12\\big{(}A+\\widetilde{A}+B\\big{)}\\,.\n\\end{equation}\n\\end{definition}\nThe convention $(Bf)_{0}=2f_{0}$ follows the logic that the 0-component $f_{0}$, with no loop, represents by default a flat holonomy and thus should be multiplied by $\\chi_{\\f12}({\\mathbb{I}})=2$.\nHere is the proof for the Hermicity relations:\n\\begin{proof}\nWe compare the action of $A$ and $\\widetilde{A}$:\n$$\n\\langle \\phi|A\\psi\\rangle\n\\,=\\,\n\\sum_{N\\in{\\mathbb N}}\n\\int [\\mathrm{d}h_{i}]_{i=1}^{N}\\mathrm{d}k\\,\n\\chi_{\\f12}(k)\\,\\overline{\\phi_{N}(h_{1},..,h_{N})}\\,\\psi_{N+1}(h_{1},..,h_{N},k)\\,,\n$$\n$$\n\\langle \\widetilde{A} \\phi|\\psi\\rangle\n\\,=\\,\n\\sum_{N>0}\n\\int [\\mathrm{d}h_{i}]_{i=1}^{N}\n\\f1N\\sum_{i=1}^{N}\\chi_{\\f12}(h_{i})\\,\n\\overline{\\phi_{N-1}(h_{1},..,\\widehat{h_{i}},..,h_{N})}\\,\n\\psi_{N}(h_{1},..,h_{N})\\,.\n$$\nWe shift the sum over $N$ in $\\langle \\widetilde{A} \\phi\\,|\\,\\psi\\rangle$ and we use the invariance of $\\psi_{N}$ under permutation of its arguments to conlude that these two expressions coincides, $\\langle \\phi|A\\psi\\rangle=\\langle \\widetilde{A} \\phi|\\psi\\rangle$. As for the operator $B$, we compute:\n\\begin{eqnarray}\n\\langle \\phi|B\\psi\\rangle\n&=&\n2\\overline{\\phi_{0}}\\,\\psi_{0}\n\\,+\\,\n\\sum_{N>0}\n\\int [\\mathrm{d}h_{i}]_{i=1}^{N}\n\\f1N\\sum_{i}^{N}\n\\chi_{\\f12}(h_{i})\\overline{\\phi_{N}(h_{1},..,h_{N})}\\,\\psi_{N}(h_{1},..,h_{N})\\nonumber\\\\\n&&\n-\\,\\int [\\mathrm{d}h_{i}]_{i=1}^{N}\n\\f1N\\sum_{i}^{N}\n\\int \\mathrm{d}k_{i}\\,\\chi_{\\f12}(k_{i})\\overline{\\phi_{N}(h_{1},..,h_{i},..,h_{N})}\\,\\psi_{N}(h_{1},..,k_{i},..,h_{N})\\,.\\nonumber\n\\end{eqnarray}\nThe last term vanishes due to the absence of 0-mode, $\\int \\mathrm{d}h_{i}\\,\\phi_{N}=0$. This ensures that $\\langle \\phi|B\\psi\\rangle=\\langle B\\phi|\\psi\\rangle$ and thus $B$ is a Hermitian operator.\n\\end{proof}\n\nTo ensure that the operators $A$ and $B$ are well-defined and that the holonomy operator $\\widehat{\\chi}_{\\f12}$ is self-adjoint, it is enough to check that it is bounded. And we show below that it is indeed bounded by 2 as in the usual framework.\n\\begin{lemma}\nThe two parts of the holonomy operators are both bounded by 2, that is for all states $\\phi\\in{\\cal H}^{\\mathrm{sym}}$, we have the two inequalities:\n\\begin{equation}\n|\\langle \\phi|\\,(A+\\widetilde{A})\\,|\\phi\\rangle|\\,\\le\\,2\\langle \\phi|\\phi\\rangle\n\\,,\\qquad\n|\\langle \\phi|B|\\phi\\rangle|\\,\\le\\,2\\langle \\phi|\\phi\\rangle\\,.\n\\end{equation}\nThis ensures that they are both self-adjoint. The holonomy operator $\\widehat{\\chi}_{\\f12}$ is then also bounded by 2 and self-adjoint.\n\\end{lemma}\n\n\\begin{proof}\nLet us start with the operator $B$. 
The analysis is simpler since it doesn't shift the number of loops:\n$$\n\\langle \\phi|B|\\phi\\rangle=2|\\phi_{0}|^{2}+\\sum_{N>0}B_{N}\\,,\n\\quad\nB_{N}=\n\\f1N\\sum_{i}^{N}\n\\int [\\mathrm{d}h_{i}]_{i=1}^{N}\\chi_{\\f12}(h_{i})\\overline{\\phi_{N}}(h_{1},..,h_{N})\\,\\phi_{N}(h_{1},..,h_{N})\\,.\n$$\nThe extra term in the action of $B$ on the state $\\phi$ vanishes as earlier due to the integral condition on proper states, $\\int \\mathrm{d}h_{i}\\,\\phi_{N}=0$ for all $i$'s. Since the character $\\chi_{\\f12}$ is bounded by 2, it is straightforward to conclude:\n$$\n|B_{N}|\\le \\f2N\\sum_{i}^{N}\\int |\\phi_{N}|^{2}= 2\\int |\\phi_{N}|^{2}\\,,\n\\quad\n\\langle \\phi|B|\\phi\\rangle\n\\le 2|\\phi_{0}|^{2}+2\\sum_{N>0}\\int |\\phi_{N}|^{2}=2\\langle \\phi|\\phi\\rangle\\,.\n$$\nWe can proceed similarly with the operator $A+\\widetilde{A}$:\n$$\n\\langle \\phi|\\,(A+\\widetilde{A})\\,|\\phi\\rangle=\n\\langle \\phi|A|\\phi\\rangle+\\langle \\phi|A^{\\dagger}|\\phi\\rangle=\\langle \\phi|A|\\phi\\rangle+\\overline{\\langle \\phi|A|\\phi\\rangle}\\,,\\quad\n|\\langle \\phi|\\,(A+\\widetilde{A})\\,|\\phi\\rangle|\\le 2 |\\langle \\phi|A|\\phi\\rangle|\\,,\n$$\n\\begin{equation}\n\\langle \\phi|A|\\phi\\rangle=\\sum_{N}A_{N}\\,,\\quad\nA_{N}=\\int \\prod_{i=1}^{N}\\mathrm{d}h_{i}\\,\\mathrm{d}k\\,\n\\chi_{\\f12}(k)\\,\\overline{\\phi_{N}}(h_{1},..,h_{N})\\,\\phi_{N+1}(h_{1},..,h_{N},k)\n\\end{equation}\nAs long as the components $\\phi_{N}$ are square-integrable, we can use the Cauchy-Schwarz inequality to bound these integrals:\n$$\n\\left|\nA_{N}\n\\right|\n\\le\n\\sqrt{\\int \\prod_{i}^{N}\\mathrm{d}h_{i}\\,\\mathrm{d}k\\,\\chi_{\\f12}(k)^{2}\\,\\big{|}\\phi_{N}(h_{1},..,h_{N})\\big{|}^{2}}\\,\n\\sqrt{\\int \\prod_{i}^{N}\\mathrm{d}h_{i}\\,\\mathrm{d}k\\,\\big{|}\\phi_{N+1}(h_{1},..,h_{N},k)\\big{|}^{2}}\\,.\n$$\nWe use that the $\\mathrm{SU}(2)$ character is normalized, $\\int \\chi_{\\f12}^{2}=1$, and then apply the inequality bounding a product $ab\\le (a^{2}+b^{2})\/2$:\n\\begin{equation}\n\\left|\nA_{N}\n\\right|\n\\le\n\\f12{\\int \\prod_{i}^{N}\\mathrm{d}h_{i}\\,\\big{|}\\phi_{N}(h_{1},..,h_{N})\\big{|}^{2}}\n+\\f12{\\int \\prod_{i}^{N}\\mathrm{d}h_{i}\\,\\mathrm{d}k\\,\\big{|}\\phi_{N+1}(h_{1},..,h_{N},k)\\big{|}^{2}}\n=\\f12\\,\\Big{[}\\langle \\phi_{N}|\\phi_{N}\\rangle+\\langle \\phi_{N+1}|\\phi_{N+1}\\rangle\\Big{]}\\,.\n\\end{equation}\nSumming over $N\\in{\\mathbb N}$, this allows us to conclude that $|\\langle \\phi|A|\\phi\\rangle|\\le \\langle\\phi|\\phi\\rangle-\\f12|\\phi_{0}|^{2}\\le \\langle\\phi|\\phi\\rangle$ and thus reproduces the expected bound $|\\langle \\phi|\\,(A+\\widetilde{A})\\,|\\phi\\rangle|\\le 2\\langle\\phi|\\phi\\rangle$.\n\\end{proof}\n\nAlthough we consider the holonomy operator $\\widehat{\\chi}_{\\f12}$ to be the averaged sum of the two self-adjoint components $(A+A^{\\dagger})$ and $B$, each of these is a legitimate operator in itself. We could push this logic further and state that we have defined two different holonomy operators: \non the one hand, a holonomy operator $(A+A^{\\dagger})$ that acts as a ladder operator, creating and annihilating loops, and on the other hand, a holonomy operator $B$ which acts as spin shifts on existing loops (i.e. 
modifies the area quanta carried by each loop).\n\nThe important consistency check, which will be essential for the analysis of the BF theory dynamics, is that the flat state is an eigenvector of the one-loop holonomy operator:\n\\begin{prop}\n\\label{flat-prop1}\nThe flat state $\\delta$, defined in \\eqref{delta-def} by its proper state projections, $\\delta_{0}=1$ and $\\delta_{N}=(\\delta-1)^{\\otimes N}$ for $N\\ge 1$, is an eigenvector of the spin-$\\f12$ one-loop holonomy operator $\\widehat{\\chi}_{\\f12}$ with the highest eigenvalue on ${\\cal H}^{\\mathrm{sym}}$:\n\\begin{equation}\n\\widehat{\\chi}_{\\f12}\\,\n|\\delta\\rangle\n\\,=\\, 2\\,|\\delta\\rangle\n\\,.\n\\end{equation}\nThis distributional flat state is also an eigenvector of the loop annihilation operator $A$ and of the loop creation operator $(B+A^{\\dagger})$:\n\\begin{equation}\nA|\\delta\\rangle\n\\,=\\,\n(B+A^{\\dagger})|\\delta\\rangle\n\\,=\\,\n2\\,|\\delta\\rangle\n\\,.\n\\end{equation}\n\\end{prop}\n\\begin{proof}\nWe compute the action of the three parts of the holonomy operators acting on the flat state defined explicitly as\n$$\n\\delta_{0}=1\\,,\\quad\n\\delta_{N}(h_{1},..,h_{N})\n=\\prod_{i}^{N}\\big{[}\\delta(h_{i})-1\\big{]}\\,.\n$$\nFor the no-loop component, we get:\n$$\n(A\\delta)_{0}=\\int \\mathrm{d}k\\,\\chi_{\\f12}(k)\\,\\big{[}\\delta(k)-1\\big{]}=2\n\\,,\\quad\n(A^{\\dagger}\\delta)_{0}=0\n\\,,\\quad\n(B\\delta)_{0}=2\\,,\n$$\nwhile we compute for all other components:\n\\begin{eqnarray}\n(A\\delta)_{N}(h_{1},..,h_{N})&=& 2\\prod_{i}^{N}\\big{[}\\delta(h_{i})-1\\big{]}=2\\delta_{N}(h_{1},..,h_{N})\n\\\\\n(A^{\\dagger}\\delta)_{N}(h_{1},..,h_{N})&=& \n\\f1N\\sum_{i}^{N}\\chi_{\\f12}(h_{i})\\,\\prod_{\\ell\\ne i}^{N}\\big{[}\\delta(h_{\\ell})-1\\big{]}\n\\\\\n(B\\delta)_{N}(h_{1},..,h_{N})&=&\n2\\prod_{i}^{N}\\big{[}\\delta(h_{i})-1\\big{]}\n-\\f1N\\sum_{i}^{N}\\chi_{\\f12}(h_{i})\\,\\prod_{\\ell\\ne i}^{N}\\big{[}\\delta(h_{\\ell})-1\\big{]}\n\\end{eqnarray}\nAdding these three contributions, we get as expected for all number of loops $(\\widehat{\\chi}_{\\f12}\\,\\delta)_{N}=2\\delta_{N}$.\n\n\\end{proof}\n\n\\smallskip\n\nSince we have three operators built in the holonomy operator, it is natural to investigate their commutation algebra. It actually involves higher spin operators. 
We generalize the definition of the operators $A$, $A^{\\dagger}$ and $B$ to arbitrary spins: one simply replaces in their definition \\ref{AAB-def} the character in the fundamental representation $\\chi_{\\f12}$ by the higher spin character $\\chi_{j}$ for any $j\\in{\\mathbb N}^{*}\/2$, thus producing new operators $A_{j}$ annihilating a loop excitation of spin $j$, $A^{\\dagger}_{j}$ creating a new loop carrying a spin $j$ and $B_{j}$ acting with a spin $j$ excitation on an existing loop.\n\nThen, acting on an arbitrary state $f$, we get:\n$$\n(ABf)_{N}=\\frac{N}{N+1}(BAf)_{N}+\\f1{N+1}(A_{1}f)_{N}\n\\,,\\quad\n(BA^{\\dagger}f)_{N}=\\frac{N-1}{N}(A^{\\dagger}Bf)_{N}+\\f1{N}(A_{1}^{\\dagger}f)_{N}\\,,\n$$\n$$\n(AA^{\\dagger}f)_{N}\n=\\frac{N}{N+1}(A^{\\dagger}Af)_{N}+\\f1{N+1}f_{N}\\,.\n$$\nRemembering that the number of loops $N$ is not constant on the Fock space of loopy intertwiners and should be treated as an operator $\\hat{N}$, these translate into commutation relations, being careful about the operator ordering:\n\\begin{equation}\n\\hat{N} B = B\\hat{N}\n\\,,\\quad\n(\\hat{N}+1)A=A\\hat{N}\n\\,,\\quad\n\\hat{N} A^{\\dagger}= A^{\\dagger}(\\hat{N}+1)\\,,\n\\end{equation}\n\\begin{equation}\n\\label{commAB1}\nAB\\hat{N}=\\hat{N} BA+A_{1}\n\\,,\\quad\n\\hat{N} B A^{\\dagger}=A^{\\dagger}B\\hat{N}+A_{1}^{\\dagger}\n\\,,\\quad\nA\\hat{N} A^{\\dagger}={\\mathbb{I}}+A^{\\dagger}A\\hat{N}={\\mathbb{I}}+\\hat{N} A^{\\dagger}A\n\\,.\n\\end{equation}\nThese generalize to the whole tower of higher spin operators, for all spins $a,b\\in\\frac{{\\mathbb N}^{*}}2$:\n\\begin{equation}\n\\label{commAB2}\nA_{a}B_{b}\\hat{N}=\\hat{N} B_{b}A_{a}+A_{a\\otimes b}\n\\,,\\quad\n\\hat{N} B_{b} A_{a}^{\\dagger}=A_{a}^{\\dagger}B_{b}\\hat{N}+A_{a\\otimes b}^{\\dagger}\n\\,,\\quad\nA_{a}\\hat{N} A_{b}^{\\dagger}=\\delta_{ab}{\\mathbb{I}}+A_{b}^{\\dagger}A_{a}\\hat{N}=\\delta_{ab}{\\mathbb{I}}+\\hat{N} A_{b}^{\\dagger}A_{a}\n\\,,\n\\end{equation}\nwhere we use the (natural) convention of the tensor product of spins for the annihilation operator:\n\\begin{equation}\nA_{a\\otimes b}\\,\\equiv\\,\n\\sum_{c=|a-b|}^{a+b}A_{c}\\,.\n\\end{equation}\nWe give the last commutation relation:\n\\begin{equation}\n\\hat{N}\\,[B_{a},B_{b}]\n\\,=\\,\nA^{\\dagger}_{b}A_{a}-A^{\\dagger}_{a}A_{b}\\,.\n\\end{equation}\n\n\nWe can combine these higher spin creation and annihilation operators to define a spin-$j$ holonomy operator $\\widehat{\\chi}_{j}$ as the averaged sum of those operators, as for the fundamental representation:\n\\begin{equation}\n\\widehat{\\chi}_{j}=\\f12\\,\\big{(}\nA_{j}+A^{\\dagger}_{j}+B_{j}\n\\big{)}\\,.\n\\end{equation}\nThis rather natural definition unfortunately doesn't ensure that the operators $\\widehat{\\chi}_{j}$ for different spins $j$ commute with each other. Using the algebra computed above, the commutator of two holonomy operators $\\widehat{\\chi}_{a}$ and $\\widehat{\\chi}_{b}$ is actually quite cumbersome.\nNevertheless, we can simplify the expressions by introducing suitable number-of-loops factors. Inserting the operator $\\hat{N}$ next to the holonomy operators, we find:\n\\begin{equation}\n\\big{[}\n\\hat{N}\\widehat{\\chi}_{a},\\hat{N}\\widehat{\\chi}_{b}\n\\big{]}\n=\\frac{\\hat{N}}2\\,(A^{\\dagger}_{b}A_{a}-A^{\\dagger}_{a}A_{b})\n=\\frac{\\hat{N}^{2}}2\\,[B_{a},B_{b}]\\,.\n\\end{equation}\nThis combination $\\hat{N}\\widehat{\\chi}_{a}$ isn't Hermitian, but this can easily be remedied by considering $\\sqrt{\\hat{N}}\\widehat{\\chi}_{a}\\sqrt{\\hat{N}}$ instead. 
This commutator doesn't vanish, but we can easily find other combinations of the creation and annihilation operators that do:\n\\begin{equation}\n\\big{[}\n\\hat{N}(B_{a}+A_{a}^{\\dagger}-A_{a}),\\hat{N}(B_{b}+A_{b}^{\\dagger}-A_{b})\n\\big{]}\n\\,=\\,\n0\n\\,.\n\\end{equation}\nThis suggests using the operators $A_{a}$ and $(B_{a}+A^{\\dagger}_{a})$ as more fundamental as the holonomy operators. Although they are not Hermitian, the flat state is an eigenvector of both operators and we will exploit this fact in defining flatness constraints for BF theory in the following section \\ref{BFsym}.\n\n\\medskip\n\nThe other way to proceed to defining higher spin holonomy operators is to reproduce the classical algebra of the $\\mathrm{SU}(2)$ characters. For instance, a spin-1 is obtained from the tensor product of two spin-$\\f12$ representations:\n$$\n\\chi_{1}(h)=\\chi_{\\f12}(h)^{2}-1\\,.\n$$\nWe propose to promote these relations to the quantum level:\n\\begin{equation}\n\\widehat{\\chi}_{1}^{\\,\\mathrm{full}}\n\\,\\equiv\\,\n\\widehat{\\chi}_{\\f12}^{\\,2}-1\n=\n\\f14\\Big{[}\nA^{2}+AB+BA+AA^{\\dagger}+A^{\\dagger}A+B^{2}+A^{\\dagger}B+BA^{\\dagger}+A^{\\dagger}{}^{2}\n\\Big{]}-1\n\\,.\n\\end{equation}\nThis new spin-1 holonomy operator is already a multi-loop operator: it has a component $A^{2}$ annihilating two loops and its adjoint component $A^{\\dagger}{}^{2}$ creating two loops, and so on.\nWe then define all the other spin-$j$ holonomy operators by recursion as polynomials of the fundamental $\\widehat{\\chi}_{\\f12}$ operator:\n\\begin{equation}\n\\chi_{\\f12}^{4}=\\chi_{2}+3\\chi_{1}+2\n\\,\\,\\Rightarrow\\,\\,\n\\widehat{\\chi}_{2}^{\\,\\mathrm{full}}\n\\,\\equiv\\,\n\\widehat{\\chi}_{\\f12}^{\\,4}-3\\widehat{\\chi}_{\\f12}^{\\,2}+1\\,,\n\\quad\\dots\n\\end{equation}\nand so on with $\\widehat{\\chi}_{j}^{\\,\\mathrm{full}}$ constructed from $\\widehat{\\chi}_{\\f12}^{2j}$ and smaller powers.\nThis construction clearly ensures that all the holonomy operators commute with each other. This method closely intertwines the definition of higher spin operators with multi-loop holonomies. These multi-loop operators create spins $\\f12$ (and then higher spins too) excitations on several loops at once. \n\n\\medskip\n\nTo conclude the exploration of the basic loop quantum gravity operators, we should also deal with the symmetrized flux operators (and scalar products) with the ${\\mathfrak{su}}(2)$ generators acting as derivations on the wave-functions, and check their commutation relations with our new holonomy operators.\nThe flux and grasping operators are especially important since they allow to explore the intertwiner structure at the vertices. Indeed, acting with one-loop holonomy operators will only create decoupled loops at the vertex, while a generic intertwiner will couple them. So, even though we postpone the detailed analysis of the action of flux operators on loopy spin network to future investigation, we discuss below multi-loop holonomy operators that allow for coupled loops and thus explore the space of (loopy) intertwiners at the vertex.\n\nFor instance, considering two loops with group elements $h_{1}$ and $h_{2}$, we would like to excite the overall holonomy instead of the two independent holonomies, that is act with $\\chi(h_{1}h_{2})$ instead of $\\chi(h_{1})\\chi(h_{2})$. 
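Let us recall in passing how the coupled and decoupled characters are related classically: the Cayley-Hamilton identity $h+h^{-1}=\\chi_{\\f12}(h)\\,{\\mathbb{I}}$, valid for unimodular $2\\times2$ matrices, gives\n$$\n\\chi_{\\f12}(h_{1})\\,\\chi_{\\f12}(h_{2})\n\\,=\\,\n\\chi_{\\f12}(h_{1}h_{2})+\\chi_{\\f12}(h_{1}h_{2}^{-1})\\,,\n$$\nso the coupled holonomy $\\chi_{\\f12}(h_{1}h_{2})$ carries information about the relative orientation of the two loops which is not captured by the mere product of the one-loop characters.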
Proceeding similarly to the one-loop holonomy operator, we define a two-loop holonomy operator $\\widehat{\\chi}_{\\f12}^{(2)}$, which creates and annihilates pairs of coupled loops:\n\n\\begin{definition}\n\\label{C-def}\nWe define the following five operators $C_{-2,-1,0,+1,+2}$ on the Fock space of symmetrized loopy intertwiners ${\\cal H}^{\\mathrm{sym}}$. They act on an arbitrary state $(f_{N})_{N\\in{\\mathbb N}}$ as:\n\n\\begin{equation}\n(C_{-2}f)_{N}(h_{1},..,h_{N})\n\\,=\\,\n\\int \\mathrm{d}k \\mathrm{d}k\\,\\chi_{\\f12}(k\\tilde{k})\\,f_{N+2}(h_{1},h_{N},k,\\tilde{k}) \n\\end{equation}\n\\begin{equation}\n(C_{-1}f)_{N}(h_{1},..,h_{N})\n\\,=\\,\n\\f1N\\sum_{i}^{N}\n\\Bigg{[}\n\\int \\mathrm{d}k\\,\\chi_{\\f12}(kh_{i})\\,f_{N+1}(h_{1},..,h_{N},k)\n-\\int \\mathrm{d}k\\, \\mathrm{d}k_{i}\\,\\chi_{\\f12}(kk_{i})\\,f_{N+1}(h_{1},..,k_{i},..,h_{N},k)\n\\Bigg{]}\n\\end{equation}\n\\begin{eqnarray}\n(C_{0}f)_{N}(h_{1},..,h_{N})\n\\,=\n&&\n\\f2{N(N-1)}\\sum_{i,>=stealth](a) -- node[midway,above]{2} (b) ;\n\\draw[->,>=stealth](a) -- node[midway,above]{3} (c) ;\n\\draw[->,>=stealth](a) -- node[midway,below]{1} (d) ;\n\\draw[->,>=stealth](b) -- node[midway,left]{4} (c) ;\n\\draw[->,>=stealth](c) -- node[midway,right]{5} (d) ;\n\\draw[->,>=stealth](d) -- node[midway,left]{6} (b) ;\n\n\n\n\\end{tikzpicture}\n\n\\caption{The $\\Theta$-graph on the left and the tetrahedron graph on the right.}\n\\label{fig:thetatetra}\n\n\\end{figure}\n\nLet us, for the sake of simplicity, look at the $\\Theta$-graph, made of two vertices connected by three edges, as in fig.\\ref{fig:thetatetra}. The flat state is:\n\\begin{equation}\n\\delta_{\\Theta}(g_{1},g_{2},g_{3})\n\\,=\\,\n\\delta(g_{1}g_{2}^{-1})\\delta(g_{1}g_{3}^{-1})\n\\,=\\,\n\\delta(g_{1}g_{2}^{-1})\\delta(g_{2}g_{3}^{-1})\n\\,=\\,\n\\delta(g_{1}g_{3}^{-1})\\delta(g_{2}g_{3}^{-1})\n\\,,\n\\end{equation}\nwhere we give the three possible sets of independent loops, corresponding to the three choices of tree on the $\\Theta$-graph.\nThis is clearly a solution to the three holonomy constraints:\n$$\n\\Big{(}\\chi_{\\f12}(g_{1}g_{2}^{-1})-2\\Big{)}\\,\\delta_{\\Theta}\n\\,=\\,\n\\Big{(}\\chi_{\\f12}(g_{2}g_{3}^{-1})-2\\Big{)}\\,\\delta_{\\Theta}\n\\,=\\,\n\\Big{(}\\chi_{\\f12}(g_{3}g_{1}^{-1})-2\\Big{)}\\,\\delta_{\\Theta}\n\\,=\\,\n0\\,.\n$$\nHowever, any first derivative give also a solution to those holonomy constraints. As an example, if we differentiate along the first group element $g_{1}$, we ge the distribution \n$\\partial_{a}^{(1)}\\delta_{12}\\delta_{23}\n=\n\\partial_{a}^{(1)}\\delta(g_{1}g_{2}^{-1})\\delta(g_{2}g_{3}^{-1})\n=\ni\\delta(g_{1}J_{a}g_{2}^{-1})\\delta(g_{2}g_{3}^{-1})$\n also satisfying:\n\\begin{equation}\n\\Big{(}\\chi_{\\f12}(g_{1}g_{2}^{-1})-2\\Big{)}\\,\\partial_{a}^{(1)}\\delta_{12}\\delta_{23}\n\\,=\\,\n\\Big{(}\\chi_{\\f12}(g_{2}g_{3}^{-1})-2\\Big{)}\\,\\partial_{a}^{(1)}\\delta_{12}\\delta_{23}\n\\,=\\,\n\\Big{(}\\chi_{\\f12}(g_{3}g_{1}^{-1})-2\\Big{)}\\,\\partial_{a}^{(1)}\\delta_{12}\\delta_{23}\n\\,=\\,\n0\n\\,.\n\\end{equation}\nAnd we can go on constructing an infinity of higher order derivative solutions, as explained in the framework of loopy spin networks in sections \\ref{derivativesolution1} and \\ref{derivativesolution2}.\nIn some sense, the totally flat state, defined by the $\\delta$-distribution, is the primary state. Then we act on it with differentiation operators and generate a whole space of solutions. 
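Before moving on, the mechanism behind these derivative solutions can be made explicit with a short symbolic check (an added illustration, not from the original text): along any one-parameter subgroup $g=e^{it\\,\\hat{n}\\cdot\\vec{\\sigma}\/2}$ the fundamental character equals $2\\cos(t\/2)$, so that $\\chi_{\\f12}-2$ vanishes at the identity together with its first derivative, and only second and higher derivatives of the $\\delta$-distribution can detect it:\n\\begin{verbatim}\n# Sketch: chi_{1\/2} - 2 has a second-order zero at the identity, which is why\n# first-derivative distributions still satisfy the holonomy constraints.\nimport sympy as sp\n\nt = sp.symbols('t', real=True)\nchi_half = 2*sp.cos(t\/2)      # character of exp(i t n.sigma\/2) for any unit vector n\nprint(sp.series(chi_half - 2, t, 0, 4))          # -t**2\/4 + O(t**4)\nassert (chi_half - 2).subs(t, 0) == 0\nassert sp.diff(chi_half - 2, t).subs(t, 0) == 0\nassert sp.diff(chi_half - 2, t, 2).subs(t, 0) != 0\n\\end{verbatim}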
All these states are technically still flat, since they are peaked exclusively on the identity group element ${\\mathbb{I}}$. They seem to be pure excitations of the triad conjugate to the connection (similar to quantum excitations of the electric field at vanishing magnetic field). This non-trivial space of flat states over the flat state sounds similar in spirit to the construction by Dittrich and collaborators \\cite{Dittrich:2014wpa,Bahr:2015bra} where they build a Hilbert space representation for loop quantum gravity with the flat state as ``ground state''. It would be interesting to check if an explicit relation with our present construction could be identified.\n\n\\medskip\n\nAs the first derivatives are not gauge-invariant distributions, it is reasonable to wonder if restricting to gauge-invariant spin network states by imposing vanishing tags allows us to kill all those higher order flat solutions and leave the original $\\delta$-distribution as the sole physical state. We show below that this is indeed the case, but with the very important subtlety that we need to impose the holonomy constraints along all the loops of the graph and not only a set of independent loops.\n\n\nFirst, we impose the tags to vanish by a Laplacian constraint at every vertex:\n\\begin{equation}\n\\Delta_{v}\n\\,\\equiv\\,\n\\big{(}\n\\sum_{e|v=t(e)}\\partial_{a}^{(e)R}\n-\\sum_{e|v=s(e)}\\partial_{a}^{(e)L}\n\\big{)}^{2}\\,.\n\\end{equation}\nRequiring $\\Delta_{v}\\,\\varphi=0$ implies that the recoupling of the spins $j_{e}$ on the edges attached to the vertex $v$ is trivial, i.e. that the tag $J_{v}$ is the trivial representation, i.e. that the wave-function $\\varphi$ satisfies gauge-invariance at the vertex $v$. Imposing this constraint at all the vertices allows us to project from the tagged spin networks down to the usual gauge-invariant spin network states.\n\n\\medskip\n\nSecond, let us revisit the flatness constraints to impose on standard spin networks in order to implement BF theory and its projector on the flat state. Imposing the holonomy constraints on a set of independent loops turns out not to be enough to get a unique solution state and we have gauge-invariant remnants of the derivative solutions. To remedy this, it is necessary and sufficient to impose the holonomy constraints on all possible loops of the graph. This is the counterpart of the multi-loop holonomy constraints for imposing the flatness of loopy intertwiners as explained in sections \\ref{derivativesolution2} and \\ref{derivativesolution3}.\n\nFor instance, on the $\\Theta$-graph, a set of independent loops is provided by the two loops, $g_{1}g_{2}^{-1}$ and $g_{2}g_{3}^{-1}$. Classically, imposing that these two group elements are trivial implies the flatness of the third ``composite'' loop, $g_{1}g_{3}^{-1}$. 
At the quantum level however, we can construct an infinity of derivative solutions to the holonomy constraints $\\widehat{\\chi}_{12}$ and $\\widehat{\\chi}_{23}$ by acting with grasping operators, as an example:\n\\begin{eqnarray}\n\\varphi=\\sum_{a}\\partial_{a}^{(1)}\\delta(g_{1}g_{2}^{-1})\\partial_{a}^{(3)}\\delta(g_{3}g_{2}^{-1})\n\\quad\\Rightarrow\n&&\n\\Big{(}\\chi_{\\f12}(g_{1}g_{2}^{-1})-2\\Big{)}\\,\\varphi\n\\,=\\,\n\\Big{(}\\chi_{\\f12}(g_{2}g_{3}^{-1})-2\\Big{)}\\,\\varphi\n=0\n\\nonumber\\\\\n\\quad\\textrm{but}&&\n\\int f (\\chi_{\\f12}(g_{1}g_{3}^{-1})-2)\\,\\varphi\n\\,=\\,\n\\left.-\\f14f\\Delta\\chi_{\\f12}\\right|_{{\\mathbb{I}}}\n=-\\f32\\,f({\\mathbb{I}})\\ne 0\\,.\n\\end{eqnarray}\nThese distributions will actually not be solution to the third holonomy constraint $\\widehat{\\chi}_{13}$ and, so imposing directly the three holonomy constraints together determines the $\\delta$-state $\\delta_{\\Theta}$ as unique solution.\n\nAnother interesting case is on the tetrahedron graph, see fig.\\ref{fig:thetatetra}, where imposing the holonomy constraints around three triangles $(g_{1}g_{6}g_{2}^{-1})$, $(g_{2}g_{4}g_{3}^{-1})$ and $(g_{3}g_{5}g_{1}^{-1})$ does not imply the holonomy constraint around the fourth triangle $(g_{4}g_{5}g_{6})$. This is realized by a triple grasping around the vertex $(123)$ acting on the $\\delta$-distribution, which gives the following gauge-invariant state:\n\\begin{equation}\n\\varphi\n=\n\\epsilon^{abc}\\partial_{a}^{(1)L}\\delta(g_{1}g_{6}g_{2}^{-1})\n\\partial_{b}^{(2)L}\\delta(g_{2}g_{4}g_{3}^{-1})\n\\partial_{c}^{(3)L}\\delta(g_{3}g_{5}g_{1}^{-1})\\,.\n\\end{equation}\nApplying this distribution against a gauge-invariant test function $f(g_{1},..,g_{6})$, we compute the action of the holonomy operators around the four 3-cycles of the tetrahedron graph:\n\\begin{equation}\n\\int f (\\chi_{\\f12}(g_{1}g_{6}g_{2}^{-1})-2)\\,\\varphi\n\\,=\\,\n\\int f (\\chi_{\\f12}(g_{2}g_{4}g_{3}^{-1})-2)\\,\\varphi\n\\,=\\,\n\\int f (\\chi_{\\f12}(g_{3}g_{5}g_{1}^{-1})-2)\\,\\varphi\n\\,=\\,\n 0\\,,\\nonumber\n\\end{equation}\n\\begin{eqnarray}\n\\int f (\\chi_{\\f12}(g_{4}g_{5}g_{6})-2)\\,\\varphi\n&=&\n\\int f({\\mathbb{I}},{\\mathbb{I}},{\\mathbb{I}},g_{4},g_{5},g_{6})\\,\n(\\chi_{\\f12}(g_{4}g_{5}g_{6})-2)\\,\n\\partial_{a}\\delta(g_{6})\n\\partial_{b}\\delta(g_{4})\n\\partial_{c}\\delta(g_{5}) \\nonumber\\\\\n&=&\n\\frac i8\\epsilon^{abc}\\chi_{\\f12}(\\sigma_{a}\\sigma_{b}\\sigma_{c})\\,f({\\mathbb{I}})\\,\\ne 0\\,.\n\\end{eqnarray}\nSo this triple-grasped flat state is a solution to three holonomy constraints, but the flatness around these three loops does not imply the flatness around the composite loop at the quantum level.\n\nAt the end of the day, to fully impose the flatness of the physical state, we require ``redundant'' holonomy constraints: imposing the holonomy constraints on an independent set of loops, as expected at the classical level, is not sufficient anymore at the quantum level to kill all the potential geometry excitations. This is especially relevant for the coarse-graining of the dynamics of loop quantum gravity. A common scenario is that we impose the flatness of the smallest loops, at the ``fundamental'' Planck scale, and that these will induce the flatness of the larger loops at larger scale. 
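As a numerical cross-check of the tetrahedron computation above (an aside added here), the non-vanishing result traces back to the standard Pauli-matrix identity $\\mathrm{Tr}(\\sigma_{a}\\sigma_{b}\\sigma_{c})=2i\\,\\epsilon_{abc}$, which gives $\\epsilon^{abc}\\chi_{\\f12}(\\sigma_{a}\\sigma_{b}\\sigma_{c})=12i$ and hence, with the prefactor displayed above, a value $-\\f32\\,f({\\mathbb{I}})$ for the fourth holonomy constraint:\n\\begin{verbatim}\n# Added check: epsilon^{abc} Tr(sigma_a sigma_b sigma_c) = 12i,\n# the identity behind the non-zero result for the fourth triangle.\nimport numpy as np\n\nsigma = [np.array([[0, 1], [1, 0]], dtype=complex),\n         np.array([[0, -1j], [1j, 0]], dtype=complex),\n         np.array([[1, 0], [0, -1]], dtype=complex)]\n\ndef eps(a, b, c):                     # Levi-Civita symbol for indices 0, 1, 2\n    return (a - b)*(b - c)*(c - a)\/2\n\ntotal = sum(eps(a, b, c)*np.trace(sigma[a] @ sigma[b] @ sigma[c])\n            for a in range(3) for b in range(3) for c in range(3))\nprint(total)                          # (0+12j)\nassert np.isclose(total, 12j)\n\\end{verbatim}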
However, we see that it is not enough: there exist flat states (distributions peaked on the identity), solutions to the fundamental holonomy constraints, but which couple and entangle the fundamental loops through some grasping operators in such a way that they are not solutions anymore to the holonomy constraints around larger loops. In order to impose the complete flatness of the state at all scales, we need to impose the flatness of all the loops at all scales, allowing for holonomy constraints around loops of arbitrary size similarly to the construction of Ising-like states for loop quantum gravity defined in \\cite{Feller:2015yta}.\n\n\n\n\\section*{Conclusion \\& Outlook}\n\nFollowing the logic of coarse-graining the quantum geometry of loop quantum gravity, we have extended spin network states by enriching \nthe structure of its vertices: we have attached to the vertices new local degrees of freedom so that they effectively represent coarse-grained regions of space with non-trivial gravitational field and geometry fluctuations. To this purpose, we have introduced a hierarchy of three generalization of spin network states, based on the coarse-graining through gauge-fixing approach developed in \\cite{Livine:2013gna}: folded, loopy and tagged spin networks.\n\nFolded spin networks on a graph $\\Gamma$ are spin network states with an arbitrary number of additional little loops attached to each vertex of the graph and the extra information of a circuit, or folding tree, for each vertex describing how these little loops are connected to each other and to the edges of the graph. This is a mathematical reformulation of gauge-fixed spin networks where the original states live on finer graphs which have been coarse-grained down to $\\Gamma$.\nLoopy spin networks forget about the folding trees and describe spin networks on the base graph $\\Gamma$ plus the little loops attached to its vertices. These little loops describe local excitations of curvature and geometry, which can then propagate on top of the background geometry defined by the base graph. We have payed special attention to describing the cases of distinguishable and indistinguishable little loops, leading to a definition of a Fock space for loopy spin network with bosonic little loop excitations.\nTagged spin networks are the last step of coarse-graining and define a basis for non-gauge-invariant spin network states. Each vertex carries an extra spin, or tag, represented as living on an open leg attached the vertex, which defines the closure defect, that is how much the local gauge-invariance is broken. From the perspective of coarse-graining, this closure defect provides an overall measure of the non-trivial holonomies which have developed within the coarse-grained region or along the little loops attached to the vertex. Ultimately we would like to interpret this tag as some quasi-local mass or energy density for the (quantum) gravitational field fluctuations inside the coarse-grained region.\n\n\\medskip\n\nThese structures define a new framework for loop quantum gravity, where we can implement and study its graph-changing dynamics while working on a fixed background graph. Indeed, starting from a given base graph $\\Gamma$, we represent any spin network states on a finer graph as a loopy spin network on the base graph plus little loops taking into account the more complex structure of the original graph. 
In a way, we constantly coarse-grain spin networks to our chosen background graph and the little loops represent all the finer geometry excitations. This proposes a truncation of loop quantum gravity where the little loops are effective local degrees of freedom, which can propagate and interact on and with the base graph $\\Gamma$. For instance, we could choose as background graph, a regular 3d cubic lattice (e.g. as for defining Bianchi I cosmology as a truncation of loop quantum gravity\\cite{Alesci:2013xd}) or any other base graph suited for the case at study, and consider all the finer geometry fluctuations from an effective point of view as little loop inhomogeneities propagating on that base graph, as illustrated on fig.\\ref{fig:lattice}.\nThe little loops are the extra information carried by the loopy spin networks compared to a lattice formulation of loop quantum gravity. They encode an infinity of degrees of freedom attached to each vertex of the graph and describing excitations and fluctuations of the gravitational field.\n\\begin{figure}[h!]\n\\centering\n\\begin{tikzpicture}[scale=0.6]\n\n\\coordinate(a1) at (0,1);\n\\draw (a1) node {$\\bullet$};\n\\draw[in=-30,out=30,scale=1.8,rotate=45] (a1) to[loop] (a1);\n\n\\coordinate(a2) at (1,2);\n\\draw (a2) node {$\\bullet$};\n\\draw[in=-30,out=30,scale=1.8,rotate=45] (a2) to[loop] (a2);\n\\draw[in=-30,out=30,scale=1.8,rotate=135] (a2) to[loop] (a2);\n\n\\coordinate(a3) at (3,3);\n\\draw (a3) node {$\\bullet$};\n\\draw[in=-30,out=30,scale=1.8,rotate=-135] (a3) to[loop] (a3);\n\n\n\\coordinate(a4) at (3,0);\n\\draw (a4) node {$\\bullet$};\n\\draw[in=-30,out=30,scale=1.8,rotate=-135] (a4) to[loop] (a4);\n\\draw[in=-30,out=30,scale=1.8,rotate=135] (a4) to[loop] (a4);\n\n\\coordinate(a5) at (2,1);\n\\draw (a5) node {$\\bullet$};\n\\draw[in=-30,out=30,scale=1.8,rotate=135] (a5) to[loop] (a5);\n\\draw[in=-30,out=30,scale=1.8,rotate=-135] (a5) to[loop] (a5);\n\\draw[in=-30,out=30,scale=1.8,rotate=45] (a5) to[loop] (a5);\n\n\\coordinate(a6) at (0,0);\n\\draw (a6) node {$\\bullet$};\n\\draw[in=-30,out=30,scale=1.8,rotate=135] (a6) to[loop] (a6);\n\n\n\\draw (0,-.5) --(0,3.5);\n\\draw (1,-.5) --(1,3.5);\n\\draw (2,-.5) --(2,3.5);\n\\draw (3,-.5) --(3,3.5);\n\n\\draw (-.5,0) --(3.5,0);\n\\draw (-.5,1) --(3.5,1);\n\\draw (-.5,2) --(3.5,2);\n\\draw (-.5,3) --(3.5,3);\n\n\\end{tikzpicture}\n\n\\caption{Loopy spin networks on a 2d square lattice: the little loop excitations attached to its vertices represent finer graph structures which have been coarse-grained and allow to take into account graph changing dynamics while working on a fixed base graph. }\n\\label{fig:lattice}\n\n\\end{figure}\n\nRe-introducing in such a way a background structure offers a perfect setting for studying the coarse-graining of the dynamics of loop quantum gravity.\nIn a sense, we have split the gravitational field degrees of freedom into a dynamical background geometry on a fixed graph and localized fluctuations of geometry, which can be though of as higher energy or finer scale excitations. As we coarse-grain, structures of the base graph will become little loops.\n\n\\medskip\n\nWe have also studied in great detail how to implement the dynamics of BF theory and defined suitable Hamiltonian constraints that select ultimately the flat state as unique physical state. In particular, it should kill any local degree of freedom and project out all the potential little loop excitations (or non-trivial tags). We faced two subtleties. 
First, the holonomy operators of loop quantum gravity now act also as creation and annihilation operators for the little loops. Second we identified an infinity of solutions to the holonomy constraints, defined as higher derivative of the $\\delta$-distribution or equivalently by acting with grasping operators on the totally flat state. These are still peaked exclusively on the flat connection, but introduce some non-trivial correlation and entanglement between the little loop excitations. This comes from the fact that the dimension of the intertwiner space at a vertex grows with the number of little loops and these ``spurious'' solutions can be interpreted as non-trivial intertwiners between flat little loops. We have introduced Laplacian constraints to supplement the holonomy constraints and decouple the loops, allowing us to get rid of this tower of higher order flat states and get finally as wanted a unique physical state. Nevertheless, this Hilbert space of ``grasped flat states'' could prove an interesting sector of loop quantum gravity, with an infinity of degrees of freedom, especially as a toy model to investigate the continuum limit of the theory.\n\n\\begin{figure}[h!]\n\\centering\n\n\\begin{tikzpicture}[scale=1]\n\n\n\n\\node[draw,circle,dotted,fill=gray!30,scale=2] (A) at (0,0) {};\n\\node[draw,circle,dotted,fill=gray!30,scale=2] (B) at (2,0) {};\n\\draw[->] (A) to[bend left=20] (B);\n\\draw[->] (A) to[bend right=20] (B);\n\n\\draw[->,>=stealth,thick] (3,0) -- (4,0);\n\n\\node[draw,circle,dotted,fill=gray!30,scale=2] (C) at (5,0) {};\n\\node[draw,circle,dotted,fill=gray!30,scale=2] (D) at (7,0) {};\n\n\\coordinate(O) at (6,0);\n\\draw[red] (O) node {$\\bullet$};\n\n\\draw(C) -- (O) node[midway,below]{$j_{e}^{s}$};\n\\draw[->] (O) -- (D) node[midway,below]{$j_{e}^{t}$};\n\\draw[red,thick] (O) -- ++ (0,0.3) node[above] {$k_{e}$};\n\n\\draw[->,>=stealth,thick] (-1,0) -- (-2,0);\n\n\\node[draw,circle,dotted,fill=gray!30,scale=2] (E) at (-5,0) {};\n\\node[draw,circle,dotted,fill=gray!30,scale=2] (F) at (-3,0) {};\n\n\\coordinate(P) at (-4,0);\n\\draw[red] (P) node {$\\bullet$};\n\n\\draw(E) -- (P) node[midway,below]{$j_{e}^{s}$};\n\\draw[->] (P) -- (F) node[midway,below]{$j_{e}^{t}$};\n\\draw[in=-40,out=40,scale=1.8,rotate=90,red,thick] (P) to[loop] node[midway,above] {$k_{\\ell}$}(P);\n\n\\end{tikzpicture}\n\n\\caption{Two regions, to be coarse-grained, can be related by several links. These links should in turn be coarse-grained into a single edge. However since the holonomies along the original links might have been different, the effective edge should be able to carry some data reflecting this non-trivial curvature. Identifying a spin network edge as a bivalent vertex, it is natural to introduce tagged edges, where the tag $k_{e}$ creates a trivalent intertwiner with the spins at the source and target of the edge, $j_{e}^{s}$ and $j_{e}^{t}$, which can now be different. Another natural possibility is to attach a loop or tadpole to the edge to account for the non-trivial curvature between the two regions, which would also create a defect along the edge leading to different spins at the source and target of the edge. 
The difference between these two generalizations of spin network edges is that the loop implies that the midway vertex now has a non-trivial volume while the tag does not.\n}\n\\label{fig:tagged-edge}\n\n\\end{figure}\n\nAfter having set in the present work the kinematics of loopy and tagged spin networks and shown how to implement the topological dynamics of BF theory, the next step will be to define some non-topological loop quantum gravity dynamics, coupling the degrees of freedom on the base graph to the little loops, and study its coarse-graining flow.\n\nThere is however another generalization of spin networks worth investigating before moving on to the dynamics of the theory. We have focussed up to now on coarse-graining bounded regions of space into effective vertices and therefore introduced the notions of loopy and tagged spin network states with extra structure and data attached to the vertices. We have dressed the graph's nodes, so shouldn't we also consider dressing its links? Comparing with Feynman diagrams in quantum field theory, we have renormalized the interaction vertices but we should also describe how to renormalize the propagator. Indeed, after partitioning the graph spanning the 3d space into bounded regions and coarse-graining each region to a single vertex, there are generically several edges linking these effective vertices. We need to bundle them together into a single new effective edge. Since those edges to coarse-grain link the same two regions and thus form loops, there are naturally non-trivial holonomies which can form between the two regions and therefore the new effective edge should be able to carry some notion of curvature. One way to proceed is to consider all the edges of a spin network state as bivalent intertwiners, recoupling between the two spins living at its source and at its target. Such intertwiners are trivial and the source and target spins are identified. But as curvature builds up, it is natural to allow this intertwiner to acquire a tag or little loops, accounting for the curvature excitations carried by the edge. For instance, as shown on fig.\\ref{fig:tagged-edge}, a tag would turn the bivalent intertwiner into a trivalent intertwiner allowing the source and target spins to differ. It would be interesting to develop the notion of spin networks with tagged links. And it could be relevant to compare such spin networks with both tagged vertices and edges to projected $\\mathrm{SL}(2,{\\mathbb C})$ spin networks\\cite{Livine:2002ak,Dupuis:2010jn}, which allow for both features of non-vanishing closure defect at vertices and non-matching spins along edges, and which are the basic states and building blocks for the EPRL-FK spinfoam models for loop gravity path integrals\\cite{Engle:2007wy,Geloun:2010vj,rovelli2014quantum}.\n\nFinally, tags and little loops closely resemble particle insertions on spin network states and it would be enlightening to understand if they can truly be interpreted as matter field degrees of freedom, especially from the perspective of working out the continuum limit of loop quantum gravity as a quantum field theory.\n \n\n\\section*{Acknowledgments}\n\nC.C. 
would like to thank Michel Fruchart and Dimitri Cobb for their keen insights and useful discussions with them.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{Section I}\n\n\tQuantum theory as a theory of measurement was promulgated by Bohr\\cite{Bohr}, and this point of view has its adherents today\\cite{Peres-Fuchs}. But, the success of the theory suggests that it could be more, a theory of reality. Einstein wrote Schr\\\"odinger\\cite{Einsteinletter} that, if there were no more complete description of nature than that envisaged by Bohr, ``...then physics could only claim the interest of shopkeepers and engineers; the whole thing would be a wretched bungle.\" Bell made much the same point, writing\\cite{Bell} `` To restrict quantum mechanics to be exclusively about piddling laboratory experiments is to betray the great enterprise.\"\n\t\n\tThe lack in standard quantum theory has been called the ``measurement problem,\" but I prefer to call it the ``reality problem.\" I believe it is most precisely characterized as a failure to give a satisfactory response to the following operational requirement for a well-defined quantum theory of reality. You are given, at time $t=0$, a state vector for an arbitrary physical system, together with its Hamiltonian, but no other information. For $t>0$, specify (i.e., give a procedure for how to identify) the realizable states and their probabilities of realization. \n\t\t\n\tNot only Bohr's version of quantum theory, but more recent sophisticated attempts, such as many worlds\\cite{lev}, environmental decoherence\\cite{zehzurek}, consistent histories\\cite{griffithshartlegellmannomnes}, have not succeeded so far at providing such a specification\\cite{mekent}. They are successfully applied to many important and interesting examples, but these examples require additional information ({\\it this} is the nature of the system, {\\it that} is the apparatus, {\\it there} is the environment...) which lie outside the theory. \n\t\n\tThe reality problem {\\it has} been solved by enlarging quantum theory so that it describes state vector collapse as a physical process. The Continuous Spontaneous Localization (CSL) model\\cite{me, GPR, myreview} provides a modified Schrodinger equation. It describes how, in addition to their usual interactions, the particles of the world interact with a posited classical field $w({\\bf x}, t)$ resulting in a nonunitary state vector evolution (which changes the norm of the state vector). What are the realizable states? Each such state vector, corresponding to a different \n$w({\\bf x}, t)$ (drawn from the class of white noise ``functions), is realizable. What are the associated probabilities? CSL provides a second equation, the prescription that the probability of occurrence of a state vector (the probability of occurrence of its associated $w({\\bf x}, t)$) is proportional to its squared norm. In this straightforward manner, the operational requirement cited above is satisfied. Each of these CSL states accurately describes (except for occasional brief moments) the macroscopic world we see around us. The predictions agree with all presently performed experiments (but there are testable differences from standard quantum theory). The CSL model is reviewed in section II. \n\n\tI have previously presented a model\\cite{Ways, mekent, myreview} called the ``Index\" model, but here called the ``Completely Quantized Collapse\" (CQC) model. It is reviewed in section III. 
It utilizes a formalism invented to model continuous measurements\\cite{contmeas}. CQC is completely equivalent to CSL in its specification of the realizable states and their probabilities, in the following manner.\n\t\t\n\tIn CQC, the classical field is replaced by a quantized field $W({\\bf x}, t)$ which, like the classical field $w({\\bf x}, t)$ of CSL, commutes with itself, $[W({\\bf x}, t), W({\\bf x}', t')]=0$. There is a conjugate field, $\\Pi({\\bf x}, t)$, which also commutes with itself, but which satisfies $[W({\\bf x}, t), \\Pi ({\\bf x}', t')]=\ni\\delta(t-t')\\delta({\\bf x}-{\\bf x}')$. The basis of eigenstates of $W({\\bf x}, t)$ for every ${\\bf x}$, $t$, is denoted $|w\\rangle$. It satisfies $W({\\bf x}, t)|w\\rangle=w({\\bf x}, t)|w\\rangle$, i.e., there is a different eigenstate for every possible $w({\\bf x}, t)$ function. There is a single state vector, called here the ``ensemble vector,\" which evolves unitarily (the Hamiltonian is given). When the ensemble vector is expanded as a Schmidt decomposition in the ``preferred basis\" $|w\\rangle$, the particle state in the direct product with $|w\\rangle$ is the CSL state associated with $w({\\bf x}, t)$: schematically, \n\n\\[ |\\hbox{ensemble vector}, t\\rangle=\\sum_{\\hbox{all}\\thinspace\\thinspace w}|w\\rangle|\\psi_{\\hbox{csl}}, t\\rangle_{w}. \n\\]\n\n\\noindent The usual quantum rule for probabilities (the squared norm of a state in the superposition) gives the CSL probability of realization of that state. \n\n\tThe measurement (reality) problem in standard quantum theory has often been phrased in terms of difficulties associated with the ill-defined collapse postulate, nowhere more succinctly than Bell's Boolean phrase``And is not Or,\" i.e., Schr\\\"odinger's equation gives a sum of terms but we observe one term or another. CSL provides a resolution of the problem by individually, dynamically, providing each term we can observe. However, CQC gives a sum of terms: how can that be considered as a resolution of the problem? \n\t\n\tA colleague once observed to me that construction of CQC goes against everything I have worked for, the resolution of the reality problem by dynamical collapse. However, what I have worked for is to make a well-defined quantum theory of reality\\cite{Pearle'67}, i.e., one which satisfies the aforementioned operational requirement. CSL satisfies this requirement, but so does CQC. The terms in the superposition which make up the ensemble vector are precisely defined (by the orthogonal preferred basis $|w\\rangle$), do not interfere (they obey a superselection rule), \nare macroscopically sensible (because the associated CSL states are), and occur with the correct amplitudes (which give the correct probabilities). In his famous ``cat paradox\" paper\\cite{Schr}, Schr\\\"odinger referred to the state vector as the ``prediction catalog\" \nor the ``expectation catalog\" because, in Bohr's quantum theory, it lists what can be expected to occur if a measurement is performed. The CQC state vector may, likewise, be called a ``reality catalog.\"\n\nHowever, the justification here is greater than Schr\\\"odinger's. His usage requires judgment {\\it outside} the theory as to {\\it when} the state vector has evolved sufficiently to be considered a catalog, and as to {\\it which} are the appropriate states in the superposition (i.e., which basis) to consider as making up the catalog. 
In CQC, these criteria lie {\\it inside} the theory, since the ensemble-vector may {\\it always} be considered a catalog, and the appropriate states are {\\it labeled} by the preferred basis $|w\\rangle$. Roughly speaking, CSL provides a vertical catalog (Or), and CQC provides a horizontal catalog (And), but it is the same catalog. \nIf one has a precise rule for identifying a superposition of states at time $t$ which never interfere thereafter, it is a matter of indifference whether one removes them from the superposition and follows their separate evolutions that way, or leaves them in the superposition and follows their joint evolution that way.\n\nAs previously pointed out\\cite{mekent}, one may regard CQC as a model for a satisfactory resolution of other attempted interpretations of quantum theory\\cite{GS}. The superposed states making up the ensemble vector could be thought of as many worlds (although it would seem that there would then be no reason to insist upon such an interpretation). If the $W$-field is regarded as the environment, then CQC could be regarded as describing environmental decoherence. The evolving superposed states making up the ensemble vector could be taken to represent a single family of consistent histories\\cite{kent}. The Schmidt decomposition of the ensemble vector is precisely what the modal interpretation requires. What is currently missing from these approaches, and what CQC provides, is a subsystem with a well-defined ``universal basis\" (here the $|w\\rangle$ basis vectors) whose Schmidt decomposition partners make macroscopic sense. One might hope that such a basis may be found arising in a natural way from a physical entity, whence perhaps CSL\/CQC and all these interpretations may merge (see the last two paragraphs of this introduction for further remarks on this). \n\n\tNow I turn to the topics which will occupy the rest of this paper. Because collapse narrows wave functions, the energy of particles in CSL increases with time. This growth of energy of particles provides for experimental tests of the model\\cite{exptlpapers} but, from a theoretical point of view, it is desirable to have a conserved energy. Recently, I showed how to define energy associated with the $w$-field in CSL so that the first moment of the total energy (particles plus $w$-field) is conserved\\cite{energy}. However, energy conservation requires all moments of the energy to be conserved. The formalism of CQC facilitates that demonstration (which can be converted to CSL language with less transparency). Because the ensemble vector evolution in CQC is a Hamiltonian evolution, the Hamiltonian is a conserved energy associated with the ensemble. \n\n\tThe Hamiltonian does not commute with $W({\\bf x}, t)$ since it is composed of the $W$ {\\it and} $\\Pi$-field operators. Thus one cannot associate a conserved energy with an individual realizable state labeled by eigenvalues of $W$ . Nonetheless, it is of interest to see how energy fares for the ensemble of possible states, for example, how the ensemble energy of the $W$-field can go negative to compensate for the energy increase of the particles whose wavefunctions are narrowed by collapse. The discussion of energy conservation takes place in section IV.\n\n\tAnother example of the utility of the CQC model is considered in section V. Because the $W$-field energy has the whole real line for its spectrum, it is possible to construct its self-adjoint conjugate, a time-operator $T$, built solely out of the $W$ and $\\Pi$-field operators. 
One may consider that time should be intimately related to the realization of physical states, since without such a realization there are no events and without events there is no time. This, and the fact that, like the Matterhorn, it is ``there\" (i.e., very few viable physical theories possess an unbounded Hamiltonian) motivate looking at $T$.\t\n\t\t\n\t$T$'s ensemble probability distribution is examined. $T$'s variance diminishes with time. $T$'s mean, roughly speaking, is a ``center of time,\" i.e., the average time over all space, and over time from 0 to $t$, weighted by the square of the particle number density. Thus, if the particle Hamiltonian vanishes so the only particle evolution is collapse, which does not change the ensemble spatial particle density, the mean of $T$ is $t\/2$. If, however, the particles have a high clumping rate (for example, due to gravitational interaction), the spatial average of the square of the particle number density is largest most recently, and the mean of $T$ is closer to $t$. \n\n\tOne of the motivations of this work is to provide a bridge from dynamical collapse models to other interpretations which seek a well-defined quantum theory based upon the standard quantum formalism alone (although they may be satisfied with just accounting for observation, not reality), rather than the altered formalism of collapse models. Now, what I consider to be a pressing problem associated with CSL is that it is phenomenological. (Some have said ``ad hoc,\" but that is not appropriate terminology since ad hoc means ``for this case only,\" whereas CSL applies to all cases, all physical situations: in fact, the label ``ad hoc\" more appropriately belongs to these other interpretations to the extent that they require additional information appropriate to each physical situation). It could be that CSL dynamics may arise from a larger structure which generates quantum theory itself, as has been suggested by Adler\\cite{Adler}. In this paper, we consider another possibility.\n\t\n\tCSL depends upon the interaction of a posited field with particles, but no identification of that field with a physical entity has been made. A purpose in discussing CQC is that, since it is formulated completely quantum mechanically, it may be easier to identify the universal $W$-field with a quantized physical entity which arises naturally in another context. In that case, CQC would appear to be based upon the standard quantum formalism alone. \n\t\n\tHowever, it is not so easy to find a candidate for a universal fluctuating field with the desired properties. There is experimental evidence\\cite{exptlpapers} that the coupling of $W$ is to the mass-density of particles, which strongly suggests a gravitational connection, a proposal which has a long history\\cite{gravity, pearlesquires}. This suggests looking for a mechanism whereby gravitational excitations could naturally act like the $W$-field. \n\n\tBy way of illustration, Section VI contains two models for $W$ with gravitational overtones, neither to be taken too seriously. In one, $W$ is constructed out of many scalar quantum fields (of Planck mass), each interacting only for a brief interval (the Planck time) with the particle's mass-density. The other utilizes aspects of a previous gravitational classical model\\cite{pearlesquires} of $W$ and crudely pays homage to spin-network models\\cite{smolinseth}. 
It assumes that space consists of Planck-volume cells, each containing a bound spin 1\/2 entity with two possible energy states equal to $\\pm$Planck mass. A spin occasionally breaks free to briefly interact with a thermal bath, as well as gravitationally with the particle's mass-density, before it is bound once more. In this case, $W({\\bf x},t)$ is identified with the free mean spin, in a small volume surrounding ${\\bf x}$, during the small time interval surrounding $t$,. These examples suggest that what is required for $W$ is a quantity which \nhas the freedom to vary independently over all space-time and for which the state vector describes its prior space-time values . This is unlike a conventional field, which can vary independently (in magnitude and time derivative) only on a single spacelike hypersurface, and whose present values are all that is described by the state vector. \n\n\t\t\t\n\\section{CSL}\n\\subsection{The Simplest Model}\n\tThe simplest CSL model, which will supply most of the illustrative calculations in this paper, describes collapse toward an eigenstate of the operator $A$. The state vector evolution, governed by a function $w(t)$ of white noise class, is given by \n\\begin{equation}\\label{1} \n|\\psi, t\\rangle_{S}={\\cal T} e^{-\\int_{0}^{t}dt'\\{iH_{A}+(4\\lambda)^{-1}[w(t')-2\\lambda A]^{2}\\}}|\\phi\\rangle.\n\\end{equation}\n\\noindent In Eq.(1), ${\\cal T}$ is the time-ordering operator, $H_{A}$ is the (usual) Hamiltonian of the system, $|\\phi\\rangle=|\\psi,0\\rangle$ is the initial state vector, the subscript $S$ denotes the Schr\\\"odinger-picture state vector, and the parameter $\\lambda$ governs the collapse rate. Eq. (1) is the solution of the modified Schr\\\"odinger equation:\n\\begin{equation}\nd|\\psi, t\\rangle_{S}\/dt=\\{-iH_{A}-(4\\lambda)^{-1}[w(t)-2\\lambda A]^{2}\\}|\\psi, t\\rangle_{S}.\\nonumber\n\\end{equation}\n\\noindent However, only the solution form (1) shall be utilized in this paper. \n\n\tCSL is completed by specifying the probability that a particular $w$, lying in the interval $(w(t), w(t)+dw),$ is realized in nature:\t\n\\begin{equation}\\label{2}\n\tP_{t}(w)Dw=_{S}\\negthinspace\\langle\\psi,t|\\psi,t\\rangle_{S}Dw.\n\\end{equation}\n\\noindent In Eq.(2), and for all integrations, one may discretize time, i.e., with $t_{j}\\equiv jdt$, each $w(t_{j})$ is considered to be an independent variable ($-\\infty>1$, $\\Delta V\/\\ell^{3}>>1$), large enough to contain $N$ interacting spins where $N>>1$, but still small on the scale of particle physics (e.g., \n$\\rho ({\\bf x}_{i})$ is essentially constant over $\\Delta V$). Denote by $p$ the probability that \na spin in a cell is freed up to interact over a time interval $\\tau$ so, on average, \n\\begin{equation}\\label{46}\nN=p\\frac{\\Delta V}{\\ell^{3}}\\frac{\\Delta t}{\\tau}.\n\\end{equation}\n\\noindent For simplicity, we shall take (46) to be the exact expression for $N$, not just for its average. \n\n\tNow consider the (interaction picture) statevector $|\\psi,t\\rangle$. It may be written as the direct product of states associated with all volumes $\\Delta V$ and all time intervals \n$\\Delta t$ within $(0, t)$. 
The state which is the contribution of one such volume and interval has the form \n\\begin{equation}\\label{47}\n|\\chi\\rangle=\\sum_{s_{1}=-1}^{1}...\\sum_{s_{N}=-1}^{1}|s_{1}...s_{N}\\rangle|\\hbox {bath},s_{1}...s_{N}\\rangle C_{s_{1}...s_{N}}(\\rho).\n\\end{equation}\n\\noindent The sum in Eq.(47) contains all the spin states entangled with the associated bath states which interacted with the spins and brought them to thermal equilibrium: assume for simplicity that the bath states are mutually orthogonal. \n \n\tThen, $|\\chi\\rangle$'s contribution to the density matrix describing the spins alone is\n\\begin{equation}\n\\hbox{Tr}_{\\hbox{bath}}|\\chi\\rangle\\langle\\chi |=\\sum_{s_{1}=-1}^{1}...\\sum_{s_{N}=-1}^{1}|s_{1}...s_{N}\\rangle \\langle s_{1}...s_{N}| |C_{s_{1}...s_{N}}(\\rho)|^{2}.\\nonumber \n\\end{equation}\n\\noindent On the other hand, because these states are all in thermal equilibrium, using Eq.(45)'s Hamiltonian for these spins, the thermal density matrix is:\n\\begin{equation}\n\\sum_{s_{1}=-1}^{1}...\\sum_{s_{N}=-1}^{1}|s_{1}...s_{N}\\rangle \\langle s_{1}...s_{N}| \n\\frac{e^{\\beta G\\rho \\mu a^{2}\\sum_{i=1}^{N}s_{i}}}{\\sum_{s_{1}=-1}^{1}...\\sum_{s_{N}=-1}^{1}e^{\\beta G\\rho \\mu a^{2}\\sum_{i=1}^{N}s_{i}}}.\\nonumber \n\\end{equation}\t\nThus, by comparison of these two equations, \n\\begin{equation}\\label{48}\n|C_{s_{1}...s_{N}}(\\rho)|^{2}=\n\\frac{e^{\\beta G\\rho \\mu a^{2}\\sum_{i=1}^{N}s_{i}}}{\\sum_{s_{1}=-1}^{1}...\\sum_{s_{N}=-1}^{1}e^{\\beta G\\rho \\mu a^{2}\\sum_{i=1}^{N}s_{i}}}.\n\\end{equation}\n\n\tEventually, $W$ associated with this space-time region will be taken $\\sim S\\equiv \\sum_{i=1}^{N}\\sigma_{i}$. The statevector which is the sum of all states in (47) which have the same eigenvalue of $S$ \nand which is normalized to 1 is: \n\\begin{equation}\n|s\\rangle\\equiv\\sum_{\\{s_{i}\\}, \\sum s_{i}=s}|s_{1}...s_{N}\\rangle|\\hbox {bath},s_{1}...s_{N}\\rangle \\frac{C_{s_{1}...s_{N}}(\\rho)}{\\sqrt{\\sum_{\\{s_{i}\\},\\sum s_{i}=s}|C_{s_{1}...s_{N}}|^{2}}}\\nonumber, \n\\end{equation}\nso, using (48), \n\\begin{equation}\\label{49}\n\\langle s|\\chi\\rangle=\\bigg[\\sum_{\\{s_{i}\\}, \\sum s_{i}=s}|C_{s_{1}...s_{N}}(\\rho)|^{2}\\bigg]^{1\/2}=\n\\bigg[\\frac{\\sum_{\\{s_{i}\\},\\sum s_{i}=s}e^{\\beta G\\rho \\mu a^{2}\\sum_{i=1}^{N}s_{i}}}{\\sum_{\\{s_{i}\\}}e^{\\beta G\\rho \\mu a^{2}\\sum_{i=1}^{N}s_{i}}}\\bigg]^{1\/2}.\n\\end{equation}\t\n\n\tEvaluation of (49) is well known but, for completeness, it is sketched here. The \npartition function in the denominator of (49) is (writing $C= G\\rho \\mu a^{2}$ and taking $N$ even)\n\\begin{equation}\\label{50}\n\\hbox{Tr}e^{\\beta CS} =\\sum_{k=0}^{N}e^{\\beta C(N-2k)}\\frac{N!}k!{(N-k)!}=(2\\cosh\\beta C)^{N}, \n\\end{equation}\t\n\\noindent so the expression in (49) under the square root is \t\n\\begin{subequations}\\label{51}\n\\begin{eqnarray}\n\\sum_{\\{s_{i}\\}, \\sum s_{i}=s}|C_{s_{1}...s_{N}}(\\rho)|^{2}&=&\\frac{e^{\\beta Cs}}{(2\\cosh\\beta C)^{N}}\\frac{N!}{[(N-s)\/2]![(N+s)\/2]!}\\\\\n&\\approx&\\frac{1}{\\sqrt{2\\pi N \\cosh^{2}\\beta C}}e^{-[s-N \\tanh\\beta C]^{2}\/2N\\cosh^{2}\\beta C}\\\\\n&\\approx&\\frac{1}{\\sqrt{2\\pi N}}e^{-[s-N\\beta C]^{2}\/2N}.\n\\end{eqnarray}\n\\end{subequations}\t\t\n\\noindent In (51b) the well-known gaussian approximation to the binomial distribution has been employed\\cite{Feller} \nwhich is valid for large $N$ and for s within a few standard deviations of its mean. 
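As a quick illustration (added here, not part of the original derivation), the quality of this Gaussian approximation is easy to confirm numerically by comparing the $s$-dependence of the exact weighted sum (51a) with the Gaussian of (51c) in the relevant regime $\\beta C\\ll1$; overall normalization constants are not compared, since they are anyway absorbed into the element of integration below:\n\\begin{verbatim}\n# Shape check of the Gaussian approximation (51a) -> (51c) for beta*C << 1.\nfrom math import lgamma, log, cosh\n\nN, betaC = 2000, 0.01\n\ndef log_exact(s):      # log of Eq.(51a); s = sum of N spins +-1 (same parity as N)\n    k = (N - s)\/\/2\n    return betaC*s + lgamma(N + 1) - lgamma(k + 1) - lgamma(N - k + 1) - N*log(2*cosh(betaC))\n\ndef log_gauss(s):      # log of the exponential factor in Eq.(51c)\n    return -(s - N*betaC)**2\/(2*N)\n\nfor s in [-80, -40, 0, 40, 80, 120]:\n    print(s, round(log_exact(s) - log_exact(0), 4), round(log_gauss(s) - log_gauss(0), 4))\n# the two columns agree closely near the peak of the distribution\n\\end{verbatim}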
\nThe approximation in (51c) depends upon validity of the inequality\n\\begin{equation}\\label{52}\n\\beta G\\rho \\mu a^{2}<<1\n\\end{equation}\n\\noindent which shall need to be checked. \n\n\tNow it is possible to put together the contributions of all volumes $\\Delta V$ and all time intervals \n$\\Delta t$ within $(0, t)$ to construct the statevector $|\\psi,t\\rangle$. \nDefine $|\\tilde{s}\\rangle\\equiv\\prod_{\\Delta V, \\Delta t}|s\\rangle_{i,j}$, the normalized basis vector which is the joint eigenvector of $S({\\bf x}_{i},t_{j})$ for all ${\\bf x}_{i}$ and for $0\\leq t_{j}\\leq t$. The scalar \nproduct $\\langle \\tilde{s} |\\psi,t\\rangle$ is the direct product of the expressions (49) so, from (51c) (without the normalization factors, which are tucked into the element of integration):\n\\begin{subequations}\\label{53}\n\\begin{eqnarray}\n\\langle \\tilde{s} |\\psi,t\\rangle&=&e^{-(4N)^{-1}\\sum_{{\\bf x}_{i},t_{j}=0}^{t}[s_{i,j}-N\\beta G\\rho \\mu a^{2}]^{2}}\\\\\n&=&e^{-4^{-1}N(\\beta G \\mu a^{2})^{2}\\sum_{{\\bf x}_{i},t_{j}=0}^{t}[s_{i,j}\/N\\beta G \\mu a^{2}-\\rho_{i}]^{2}}\\\\\n&\\approx&e^{-4^{-1}(p\/\\ell^{3}\\tau)(\\beta G \\mu a^{2})^{2}\\int d{\\bf x}\\int_{0}^{t}dt'[s'(\\bf {x},t')-\\rho(\\bf {x})]^{2}},\n\\end{eqnarray}\n\\end{subequations}\n\\noindent where (53c) follows from (46), and we have defined:\n\\begin{equation}\\label{54}\ns'({\\bf x},t)\\equiv s_{i,j}\/N\\beta G \\mu a^{2}. \n\\end{equation}\n\tNow, compare (53c) with CQC's comparable expression. Note from (8) \nthat $A({\\bf x})=m_{0}^{-1}a^{3\/2}\\rho(\\bf {x})$ (up to a numerical factor which shall be ignored) so the CQC expression comparable to Eq.(9) is\n\\begin{equation}\\label{55}\n\\langle w |\\psi,t\\rangle=e^{-(4\\lambda)^{-1}\\int d{\\bf x}\\int_{0}^{t}dt'[w({\\bf x},t')-2\\lambda m_{0}^{-1}a^{3\/2} \\rho(\\bf {x})]^{2}}. \n\\end{equation}\n\\noindent The model's Eq.(53c) and the CQC Eq.(55) are identical if \n\\begin{subequations}\\label{56}\n\\begin{eqnarray}\nW({\\bf x},t)&=&\\frac{2\\lambda a^{3\/2}}{m_{0}}S'({\\bf x},t)=\\frac{2\\lambda}\n {(\\beta m_{0}c^{2})\\ell a^{1\/2}}\\frac{S({\\bf x},t)}{N},\\\\\n4(\\lambda \\tau)(\\ell\/a)&=& p(\\beta m_{0}c^{2})^{2}, \n \\end{eqnarray}\n\\end{subequations}\t\n\\noindent where $G\\mu=\\ell c^{2}$ has ben used, and $S'{\\bf x},t)$ is the operator whose eigenvalues are $s'({\\bf x},t)$ given in Eq.(54).\n\n\tThus, this model produces CSL dynamics. \n\t\n\tIt remains to check whether the numerical values in Eqs.(52), (56) are reasonable.\nConsider two possible temperatures for the bath, the cosmic radiation temperature $\\beta^{-1}\\approx 2\\cdot 10^{-4}$ eV. and the Planck temperature $\\beta^{-1}\\approx 10^{28}$ eV. For normal mass \ndensities, $\\rho\\approx 1 \\hbox{gm\/cc}\\approx 5\\cdot 10^{33}\\hbox{eV\/cc}$, one gets\nrespectively $\\beta G\\rho \\mu a^{2}\\approx 10^{-6}$ and $10^{-38}$, so the inequality of \nEq.(52) is satisfied for both. \n\n\tFor these two temperatures, (56b) gives $p\\approx 10^{-112} $ and $10^{-49}$ respectively. \nThe second number is too small for the conceptual picture we have outlined since, after \n$10^{49}\\tau\\approx 10^{5}$sec, the spin in every cell is likely to have interacted and frozen, so the \nprocess would cease. However, one may change the conceptual picture. It is possible for a spin in a cell to interact repeatedly, without changing the dynamical equation because the bath states \n preserve the past orthogonality of the realizable states. 
Although they are represented by bath states, they are labeled by the past values of W({\\bf x},t). \nEach of these orthogonal realizable states at time $t$ evolves independently of the other states over the next $\\Delta t$, via the CSL evolution. \n\n\tLastly, look at Eq.(56a). Note that $W\\sim S\/N$ is intensive, proportional to the mean \nvalue of the spin in a cell, so this result is independent of the size of $\\Delta {\\bf x}$, $\\Delta t$ \nas it should be. For the numerical coefficient in (56a), it is more informative to discuss $S'({\\bf x},t)$'s relation to $S({\\bf x},t)$ rather than \n$W({\\bf x},t)$'s relation to either, because $S'({\\bf x},t)$'s mean value is simply $\\rho$ (see (53c)). One obtains from Eq.(56a), for a cosmic bath and for a Planck bath respectively, \n$S'\\approx (S\/N)10^{39}$eV\/cc and $S'\\approx (S\/N)10^{71}$eV\/cc. Both factors are large compared \nto normal mass density $\\approx 10^{34}$eV\/cc, so that the mean spin value $S\/N$ \n for such a density is respectively $10^{-5}$ and $10^{-37}$. \n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nOver the past few years much attention has been devoted to \nthe study of traffic flow. \nSince the seminal work of Lighthill and Whitham \nin the middle of the 50's \\cite{LIGHT_1} many attempts have been\nmade to construct more and more sophisticated models which incorporate\nvarious phenomena occurring in real \ntraffic (for an overview see \\cite{WOLF}).\nRecently, a new class of models, based on the idea of cellular\nautomata, has been proven to describe traffic dynamics\nin a very efficient way \\cite{NASCH_1}.\nEspecially the transition from free flow to jammed\ntraffic with increasing car density could be investigated\nvery accurately.\nNevertheless, besides various indications \\cite{CSANYI_1}, \nno unique description for a dynamical transition has \nbeen found (see for instance \\cite{SASVARI_1,LUEB_6} and\nreferences therein).\nIn this article we consider a method of analysis\nwhich allows us to identify the different phases of the system\nand to describe the phase transition, i.e., considering the \nfluctuations which drive the transition,\nand determining the phase diagram.\n\n\nWe consider a one-dimensional cellular automaton of\nlinear size $L$ and $N$ particles.\nEach particle is associated the integer values \n$v_i\\in\\{0,1,2,...,v_{\\rm max}\\}$ and $d_i\\in\\{0,1,2,3,...\\}$,\nrepresenting the velocity and the distance to the next\nforward particle \\cite{NASCH_1}.\nFor each particle, the following \nupdate steps representing the acceleration, the slowing down, the noise,\nand the motion of the particles are done in parallel:\n(1) if $v_i < d_i$ then $v_i \\to \\mbox{Min}\\{v_i+1, v_{\\rm max}\\}$,\n(2) if $v_i > d_i$ then $v_i \\to d_i$,\n(3) with probability $P\\\/$ $v_i \\to \\mbox{Max}\\{v_i-1, 0\\}$, \nand\n(4) $r_i \\to r_i+v_i$, \nwhere $r_i$ denotes the position of the $i$-th particle.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Simulation and Results}\n\n\n\n\\begin{figure}[b]\n \\begin{center}\n \\epsfxsize=8.6cm\n \\epsfysize=7.0cm\n \\epsffile{space_time_dia.eps} \n \\end{center}\n \\vspace{-0.8cm}\n \\caption{Space-time plot for $v_{\\rm max}=5$, $P=0.5$, and $\\rho_g>\\rho_c$.\n Note the separation of the system in high and low density\n\t regions.}\n \\label{space_time_dia} \n\\end{figure}\n\n\n\n\n\n\nFigure~\\ref{space_time_dia} shows a space-time plot of the system.\nEach dot corresponds to a particle at a given time step.\nThe global 
density $\\rho_g=N\/L$ exceeds the critical density and \njams occur.\nTraffic jams are characterized by a high local density of the\nparticles and by a backward movement of shock waves \\cite{LIGHT_1}.\nOne can see from Fig.~\\ref{space_time_dia} that in the jammed regime \nthe system is inhomogeneous, \ni.e., traffic jams with a high local density and free flow regions with\na low local density coexist.\nIn order to investigate this transition one has to take this\ninhomogeneity into account.\n\nTraditionally one determines the so-called fundamental diagram, \ni.e.,~the diagram of the flow vs the density.\nThe global flow is given by \n$\\Phi \\; = \\; \\rho_g \\, \\langle v \\rangle$,\nwhere $\\langle v \\rangle$ denotes the averaged velocity of the \nparticles.\nThese non-local measurements are not sensitive to the inhomogeneous\ncharacter of the system, i.e.,~the information\nabout the two different coexisting phases is lost.\nIn the following we consider\na method of analysis which is based on the measurement of \nthe local density distribution $p(\\rho)$~\\cite{LUEB_6}.\nThe local density $\\rho$ is measured on a section of the \nsystem of size $\\delta$ according to\n\\begin{equation}\n\\rho \\; = \\; \\frac{1}{\\rho_g \\delta} \\, \\sum_{i=1}^{N} \\,\n\\theta(\\delta -r_i). \n\\label{eq:density_dist}\n\\end{equation}\n\n\n\n\\begin{figure}[p]\n \\begin{center}\n \\epsfxsize=8.0cm\n \\epsfysize=8.0cm\n \\epsffile{density_dist.eps} \n \\end{center}\n \\vspace{-0.9cm}\n \\caption{The local density distribution $p(\\rho)$ for various\n\t values of the global density, $v_{\\rm max}=5$, $P=0.5$\n\t and $\\delta=256$. \n\t The dashed line corresponds to the characteristic density \n\t of the free flow phase.}\n \\label{density_dist} \n \\vspace{0.5cm}\n \\begin{center}\n \\epsfxsize=7.4cm\n \\epsfysize=7.4cm\n \\epsffile{b_8192_0128_2.ps} \n \\end{center}\n \\vspace{-0.0cm}\n \\caption{The local density distribution $p(\\rho_g,\\rho)$ as a \n\t function of the global density (horizontal axis) and \n\t local density (vertical axis), respectively.\n\t The colors correspond to the values of the probability \n\t $p(\\rho_g,\\rho)$, increasing from black to red.}\n \\label{density_dist_2} \n\\end{figure}\n\n\nThe local density distribution $p(\\rho)$ is plotted \nfor various values of the global \ndensity $\\rho_g$ in Fig.~\\ref{density_dist}.\nIn the case of small values of $\\rho_g$, see Fig.~\\ref{density_dist}a, \nthe particles can\nbe considered as independent (see below) and the \nlocal density distribution is simply Gaussian with\nthe mean value $\\rho_g$ and a width which scales \nwith $\\sqrt{\\delta}$. 
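For concreteness, the following sketch (an added illustration, not the authors' code) implements the parallel update rules (1)--(4) of the introduction on a ring and measures local densities on sections of size $\\delta$; it is the kind of simulation from which distributions such as $p(\\rho)$ can be accumulated:\n\\begin{verbatim}\n# Minimal Nagel-Schreckenberg simulation with local-density measurement (sketch).\nimport numpy as np\n\nL, N, vmax, P, delta = 10240, 1024, 5, 0.5, 256   # global density N\/L = 0.1\nrng = np.random.default_rng(1)\n\npos = np.sort(rng.choice(L, size=N, replace=False))   # car positions (fixed cyclic order)\nvel = np.zeros(N, dtype=int)\n\ndef step(pos, vel):\n    gap = (np.roll(pos, -1) - pos - 1) % L             # d_i: empty cells ahead of car i\n    vel = np.minimum(vel + 1, vmax)                    # (1) acceleration\n    vel = np.minimum(vel, gap)                         # (2) slowing down\n    vel = np.where(rng.random(N) < P, np.maximum(vel - 1, 0), vel)   # (3) noise\n    return (pos + vel) % L, vel                        # (4) motion\n\nfor _ in range(20000):                                 # relax towards the steady state\n    pos, vel = step(pos, vel)\n\nocc = np.zeros(L, dtype=int)\nocc[pos] = 1\nrho_local = occ.reshape(-1, delta).sum(axis=1)\/delta   # occupied fraction per section\nprint('global density', N\/L, ' local densities', rho_local.min(), rho_local.max())\n\\end{verbatim}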
\nIncreasing the global density, jams occur\nand the distribution displays two different \npeaks (Fig.~\\ref{density_dist}c).\nThe first peak corresponds to the density of free particles and\nin the phase coexistence regime the position of this peak\ndoes not depend on the global density (see the dashed lines\nin Fig.~\\ref{density_dist}).\nThe second peak is located at larger densities and characterizes\nthe jammed phase.\nWith increasing density the second peak occurs in the vicinity\nof the critical density $\\rho_c$ (Fig.~\\ref{density_dist}b)\nand grows further (Fig.~\\ref{density_dist}c) until it dominates\nthe distribution in the sense that the first peak \ndisappears (Fig.~\\ref{density_dist}d).\nThe two-peak structure of the local density distribution clearly \nreflects the coexistence of the free flow and \njammed phase above the critical value $\\rho_c$.\nIn Fig.~\\ref{density_dist_2} we present the \nprobability distribution as a function of the\nglobal and local density.\nAbove a certain value of the global density $\\rho_g$ \nthe two-peak structure occurs.\nThe behavior of the first peak yields a criterion to \ndetermine the transition point~\\cite{LUEB_6} and one gets\n$\\rho_c=0.0695\\pm 0.0007$ for $P=0.5$ and $v_{\\rm max}=5$.\n\n\\begin{figure}[t]\n \\begin{center}\n \\epsfxsize=8.6cm\n \\epsfysize=7.5cm\n \\epsffile{struc_5.eps} \n \\vspace{-0.8cm}\n \\caption{The structure factor $S(k)$ for $P=0.5$, $v_{\\mbox{max}}=5$\n\t and for various values of the\n\t density $\\rho$. The dashed line marks the characteristic\n\t wavelength $\\lambda_0$ of the free flow phase.}\n \\label{struc_5} \n \\end{center}\n\\end{figure}\n\nIn order to describe the spatial decomposition of the\ncoexisting phases we measured the steady state structure factor \\cite{SCHMITT_1}\n\\begin{equation}\nS(k) \\; = \\; \\frac{1}{L} \\left \\langle \\left | \\, \\sum_{r=1}^{L} \\,\n\\eta(r) \\, e^{i k r} \\right|^2 \\right \\rangle,\n\\label{eq:structure_factor}\n\\end{equation}\nwhere $\\eta(r)=1$ if the lattice site $r$ is occupied and\n$\\eta(r)=0$ otherwise.\nIn Fig.~\\ref{struc_5} we plot the structure factor $S(k)$\nfor the same values of the global density as in Fig.~\\ref{density_dist}, \ni.e., below, in the vicinity of, above, and far away from the transition\npoint.\nIt is remarkable that $S(k)$ exhibits a maximum for all considered\nvalues of the global density at $k_0 \\approx 0.72$ (dashed lines in\nFig.~\\ref{struc_5}). \nThis value corresponds to the characteristic wave length \n$\\lambda_0=\\frac{2 \\pi}{k_0}$ of the density fluctuations \nin the free flow phase.\nThe steady state structure factor is related to the \nFourier transform of the real space density-density\ncorrelation function.\nThe wave length $\\lambda_0$ corresponds to a maximum of the\ncorrelation function, i.e., $\\lambda_0$ describes the \nmost likely distance of two particles in the free flow phase.\nFor low densities the structure factor is almost independent\nof the density and displays a minimum\nfor small $k$ values indicating the lack of long-range correlations.\nCrossing the transition point, the smallest mode \n$S(k=\\frac{2 \\pi}{L})$ increases quickly. 
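The structure factor itself is straightforward to evaluate with a fast Fourier transform. The added sketch below (illustration only) checks the interpretation given above: a noisy particle arrangement with typical spacing $a$ produces a peak of $S(k)$ at $k_{0}=2\\pi\/a$, so the peak position indeed encodes the most likely distance between particles:\n\\begin{verbatim}\n# S(k) = (1\/L) <|sum_r eta(r) exp(i k r)|^2> evaluated via FFT (sketch).\nimport numpy as np\n\nL, a, n, T = 8192, 9, 900, 200        # lattice size, typical spacing, particles, snapshots\nrng = np.random.default_rng(2)\n\nS = np.zeros(L)\nfor _ in range(T):\n    pos = (a*np.arange(n) + rng.integers(-1, 2, size=n)) % L   # jittered arrangement\n    eta = np.zeros(L)\n    eta[pos] = 1.0\n    S += np.abs(np.fft.fft(eta))**2\/L\nS \/= T\n\nk = 2*np.pi*np.arange(L)\/L\nk0 = k[1 + np.argmax(S[1:4096])]      # peak position among modes with 0 < k <= pi\nprint(k0, 2*np.pi\/a)                  # the two values are close\n\\end{verbatim}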
\nThis suggests that the jammed phase is characterized by \nlong-range correlations which decay in the limit $\\rho_g \\gg \\rho_c$ \nalgebraically as one can see from the log-log plot \nin Fig.~\\ref{struc_5}d.\n\n\n\\begin{figure}[t]\n \\begin{center}\n \\epsfxsize=8.6cm\n \\epsfysize=7.0cm\n \\epsffile{phase_dia.eps} \n \\end{center}\n \\vspace{-1.0cm}\n \\caption{The phase diagram of the Nagel-Schreckenberg model.\n\t Note that in the non-deterministic region $00$ of the space-time dimension $D=4-\\epsilon$ for dimensional regularization. The parameter is then varied towards the value that it would have if the divergent integral existed ($\\Lambda\\rightarrow\\infty$ for Pauli-Villars, $\\epsilon\\rightarrow0$ for dimensional) and the badly behaved contributions (e.g. terms of the order of $\\log(\\Lambda)$ or $\\epsilon^{-1}$) are subtracted out or ignored to obtain finite answers which can be compared with experiment. The process of removing the badly behaved contributions is called renormalization (e.g. minimal subtraction).\n\nMany of the initial developers of QFT such as Dirac and Feynman were not happy with the fact that many of the integrals in QFT, in particular, those involving fermion loops, do not exist in the mathematical sense but more recently, especially through the important work of Wilson and others on the renormalization group, the parameterized variation with scale has been seen to have physical significance and not just the result of a mathematical artifice. In particular the concept of running coupling constant and the comparison of its behavior in QED and QCD, which is asymptotically free, is seen to be of great physical significance. The significance has propagated into other areas of physics such as solid state physics and critical phenomena in condensed matter physics. The exact renormalization group equations (ERGE) of both Wilson and Polchinski involve cutoffs (a smooth ultraviolet regulator in the work of Polchinski).\n\nWe believe, nevertheless, that it would be desirable to have well defined initial equations or principles as a starting point for physical theory which are such that concepts such as the running coupling constant would follow from these basic principles. We have shown in a previous paper (Mashford, 2017b) how one can, through a brief formal argument, consider the problematical objects in QFT as being Lorentz invariant Borel complex measures (more generally $K$ invariant ${\\bf C}^{4\\times4}$ valued measures) on Minkowski space. We will repeat this derivation in the present paper for the case of the contraction of the vacuum polarization tensor. 
Having given a definition of the objects as well defined mathematical objects one can proceed and analyze these objects, computing the consequences of assuming them, without infinities or ill-definedness propagating through the calculations.\n\nIt can be shown (see Appendix 3) that any Lorentz invariant Borel complex measure on Minkowski space has a certain spectral representation.\nAn important part of this paper is the presentation of a spectral calculus whereby the spectrum of a causal Lorentz invariant Borel measure on Minkowski space can be calculated, where by causal is meant that the support of the measure is contained in the closed future null cone of the origin.\n\nIf, using the spectral calculus, one can obtain a spectrum for a causal Lorentz invariant Borel measure which is a continuous function (or, more generally, a sufficiently well behaved measurable function) then, as we will show, one can compute an equivalent density for the measure with respect to Lebesque measure on ${\\bf R}^4$ which can be used in QFT calculations. \n\nWe will show, generally, how to convolve or form products of causal Lorentz invariant Borel measures using their spectral representations. This is to be compared to the work of Scharf and others, dating back to the paper of Epstein and Glaser, (1973) on forming products of causal distributions.\n\nThe concept of spectral representation in QFT dates back to the work of K\\\"{a}llen (1952) and Lehmann (1954) who, independently, proposed the representation\n\\begin{equation}\n<0|[\\phi(x),\\phi^{\\dagger}(y)]|0>=i\\int_0^{\\infty}d{m^{\\prime}}^2\\sigma({m^{\\prime}}^2)\\Delta_{m^{\\prime}}(x-y),\n\\end{equation}\nfor the commutator of interacting fields where $\\Delta_{m^{\\prime}}$ is the Feynman propagator corresponding to mass $m^{\\prime}$. Itzikson and Zuber (1980) state, with respect to $\\sigma$, ``In general this is a positive measure with $\\delta$-function singularities.\" While K\\\"{a}llen, Lehmann and others propose and use this decomposition they do not present a way to compute the spectral measure $\\sigma$. As mentioned above one of the main results of the present paper is a presentation of the spectral calculus which enables one to compute the spectral function of a causal Lorentz invariant Borel measure on Minkowski space. This spectral calculus is quite easy to use in practice but it is somewhat tedious to prove rigorously its validity (see Appendix 6).\n\nIn Section 3 of the paper we use the spectral calculus and other methods to compute the spectrum of the measure $\\Omega_m*\\Omega_m$ which is a convolution of the standard Lorentz invariant measure on the mass $m$ mass shell (i.e the Feynman propagator corresponding to mass $m$ on the space of positive energy functions) with itself, where $m>0$. In Section 4 we use general arguments to compute the spectrum of $\\Omega_{im}*\\Omega_{im}$ where $\\Omega_{im}$ is standard Lorentz invariant measure on the imaginary mass hyperboloid corresponding to mass $im$, $m>0$. These computations form practice for the main application of the paper which is an investigation in Section 7 of vacuum polarization, i.e. the self energy of the photon.\n\nIn Section 7 we compute the spectral function and hence the density associated with the complex measure obtained by contracting the vacuum polarization tensor. This is used to define our spectral vacuum polarization function. 
Our function is seen to agree with a high degree of accuracy (up to finite renormalization) with the vacuum polarization function obtained using regularization\/renormalization. \n\nWe follow Weinberg and others' method for the computation of the Uehling contribution to the Lamb shift in the H atom. Ours differs because we have a different vacuum polarization function in the imaginary mass regime. We compute using the Born approximation a value for the Uehling effect of $\\approx-28.7$ MHz for the hydrogen atom.\n\nWe compute and display the running coupling constant for 1 loop QED. This computation is shown to be convergent when the spectral vacuum polarization function is used while the standard method using the vacuum polarization function obtained using regularization\/renormalization is shown to be divergent for all non-zero energies. \n\n\\section{A spectral calculus of Lorentz invariant measures}\n\nConsider the following general form of a complex measure $\\mu$ on Minkowski space.\n\\begin{equation} \\label{eq:invariant1}\n\\mu(\\Gamma)=c\\delta(\\Gamma)+\\int_{m=0}^{\\infty}\\Omega^{+}_m(\\Gamma)\\,\\sigma_1(dm)+\\int_{m=0}^{\\infty}\\Omega_m^{-}(\\Gamma)\\,\\sigma_2(dm)+\\int_{m=0}^{\\infty}\\Omega_{im}(\\Gamma)\\,\\sigma_3(dm),\n\\end{equation}\nwhere $c\\in{\\bf C}$ (the complex numbers), $\\delta$ is the Dirac delta function (measure), $\\sigma_1,\\sigma_2,\\sigma_3:{\\mathcal B}([0,\\infty))\\rightarrow{\\bf C}$ are Borel complex measures (where ${\\mathcal B}([0,\\infty))$ denotes the Borel algebra of $[0,\\infty)$), $\\Omega_m^{+}$ is the standard Lorentz invariant measure concentrated on the mass shell $H_m^{+}$ (see (Mashford, 2017b)), $\\Omega_m^{-}$ is the standard Lorentz invariant measure concentrated on the mass shell $H_m^{-}$ and $\\Omega_{im}$ is the standard Lorentz invariant measure on the imaginary mass hyperboloid $H_{im}$. Then $\\mu$ is a Lorentz invariant measure. Conversely we have the following.\n\\newtheorem{theorem}{Theorem}\n\\begin{theorem}{The Spectral Theorem.}\nLet $\\mu:{\\mathcal B}({\\bf R}^4)\\rightarrow{\\bf C}$ be a Lorentz invariant Borel complex measure. Then $\\mu$ has the form of Eq.~\\ref{eq:invariant1} for some $c\\in{\\bf C}$ and Borel spectral measures $\\sigma_1,\\sigma_2$ and $\\sigma_3$.\n\\end{theorem}\nThe proof of this theorem is given in Appendix 3.\n\nIf $\\sigma_2=\\sigma_3=0$ then $\\mu$ will be said to be {\\em causal} or a type I measure. If $\\sigma_1=\\sigma_3=0$ then $\\mu$ will be said to be a type II measure and if $c=0$ and $\\sigma_1=\\sigma_2=0$ then $\\mu$ will be said to be a type III measure. Thus any Lorentz invariant measure is a sum of a type I measure, a type II measure and a type III measure. In particular, any measure of the form\n\\begin{equation} \\label{eq:invariant2}\n\\mu(\\Gamma)=\\int_{m=0}^{\\infty}\\sigma(m)\\Omega^{+}_m(\\Gamma)\\,dm,\n\\end{equation} \nwhere $\\sigma$ is locally integrable function and the integration is carried out with respect to the Lebesgue measure, is a causal Lorentz invariant Borel complex measure. 
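For illustration, the simplest causal examples have purely singular spectra: taking $c=0$, $\sigma_2=\sigma_3=0$ and $\sigma_1=\delta_{m_0}$, the unit point mass concentrated at some $m_0>0$, in Eq.~\ref{eq:invariant1} gives $\mu=\Omega_{m_0}^{+}$, the standard measure on the mass $m_0$ mass shell. Measures of the form of Eq.~\ref{eq:invariant2}, by contrast, have spectra which are functions; both kinds of spectrum occur in what follows.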
If $\\sigma$ is polynomially bounded then $\\mu$ is a tempered measure.\n\nThe spectral calculus that we will now explain is a very simple way to compute the spectrum $\\sigma$ of a Lorentz invariant measure $\\mu$ if we know that $\\mu$ can be written in the form of Eq.~\\ref{eq:invariant2} and $\\sigma$ is continuous.\n\nFor $m>0$ and $\\epsilon>0$ let $S(m,\\epsilon)$ be the hyperbolic (hyper-)disc defined by\n\\begin{equation}\nS(m,\\epsilon)=\\{p\\in{\\bf R}^4:p^2=m^2,|{\\vct p}|<\\epsilon, p^0>0\\},\n\\end{equation}\nwhere, as usual in QFT, $p^2=\\eta_{\\mu\\nu}p^{\\mu}p^{\\nu}=(p^0)^2-(p^1)^2-(p^2)^2-(p^3)^2$ and ${\\vct p}=\\pi(p)=(p^1,p^2,p^3)$. For $a,b\\in{\\bf R}$ with $0\\frac{4\\pi}{3m}-\\epsilon^{-3}I>0. \\]\nHence\n\\begin{equation}\n\\left|\\epsilon^{-3}I-\\frac{4\\pi}{3m}\\right|<\\frac{4\\pi}{3m}-\\frac{4\\pi}{3(\\epsilon^2+m^2)^{\\frac{1}{2}}}.\n\\end{equation}\nWe have\n\\begin{eqnarray}\n\\frac{4\\pi}{3m}-\\frac{4\\pi}{3(\\epsilon^2+m^2)^{\\frac{1}{2}}} & = & \\frac{4\\pi}{3}\\frac{(\\epsilon^2+m^2)^{\\frac{1}{2}}-m}{m(\\epsilon^2+m^2)^{\\frac{1}{2}}} \\nonumber \\\\\n & = & \\frac{4\\pi}{3}\\frac{\\epsilon^2}{m(\\epsilon^2+m^2)^{\\frac{1}{2}}((\\epsilon^2+m^2)^{\\frac{1}{2}}+m)} \\nonumber \\\\\n & < & \\frac{4\\pi}{3}\\frac{\\epsilon^2}{2m^3} \\nonumber \\\\\n & \\leq & \\frac{4\\pi}{3}\\frac{\\epsilon^2}{2a^3}, \\mbox{ for all }m\\in[a,b]. \\nonumber\n\\end{eqnarray}\nTherefore\n\\begin{equation}\n\\left|\\epsilon^{-3}I-\\frac{4\\pi}{3m}\\right|<\\frac{4\\pi}{3}\\frac{\\epsilon^2}{2a^3},\n\\end{equation}\nfor all $m\\in[a,b]$\n\n $\\Box$\n\n This lemma justifies the step of taking the limit under the integral sign (indicated by the symbol $\\approx$) in the proof of Theorem~\\ref{theorem:fundamental}.\n\nMore generally, suppose that $\\mu:{\\mathcal B}({\\bf R}^4)\\rightarrow{\\bf C}$ is a causal Lorentz invariant Borel measure on Minkowski space with spectrum $\\sigma$. Then, by the Lebesgue decomposition theorem there exist unique measures $\\sigma_c,\\sigma_s:{\\mathcal B}([0,\\infty))\\rightarrow{\\bf C}$ such that $\\sigma=\\sigma_c+\\sigma_s$ where $\\sigma_c$, the continuous part of the spectrum of $\\mu$, is absolutely continuous with respect to Lebesque measure and $\\sigma_s$, the singular part of the spectrum of $\\mu$, is singular with respect to $\\sigma_c$. \n\nIt is straightforward to prove the following.\n\\begin{theorem} \\label{Theorem:Th2}\nSuppose that $a^{\\prime},b^{\\prime}\\in{\\bf R}$ are such that $00\\},\n\\end{equation}\nand therefore, that $\\mu$ is causal.\nLet $U\\subset{\\bf R}^4$ be open. Then\n\\begin{equation}\n\\mu(U)=\\int_{{\\bf R}^3}\\int_{{\\bf R}^3}\\chi_U(\\omega_m({\\vct p})+\\omega_m({\\vct q}),{\\vct p}+{\\vct q})\\,\\frac{d{\\vct p}}{\\omega_m({\\vct p})}\\,\\frac{d{\\vct q}}{\\omega_m({\\vct q})}.\n\\end{equation} \nTherefore, using continuity, it follows that\n\\begin{eqnarray}\n\\mu(U)>0 & \\Leftrightarrow & (\\exists p\\in U,{\\vct q}_1,{\\vct q}_2\\in{\\bf R}^3)\\mbox{ } p=(\\omega_m({\\vct q}_1)+\\omega_m({\\vct q}_2),{\\vct q}_1+{\\vct q}_2). \\nonumber\n\\end{eqnarray}\nSuppose that $p\\in\\mbox{supp}(\\mu)$ (the support of the measure $\\mu$) i.e $p$ is such that $\\mu(U)>0$ for all open neighborhoods $U$ of $p$. Let $U$ be an open neighborhood of $p$. Then, since $\\mu(U)>0$, there exists $q\\in U, {\\vct q}_1,{\\vct q}_2\\in{\\bf R}^3$ such that $q=(\\omega_m({\\vct q}_1)+\\omega_m({\\vct q}_2),{\\vct q}_1+{\\vct q}_2)$. Clearly $q^0\\ge2m$. 
Since this is true for all neighborhoods $U$ of $p$ it follows that $p^0\\ge2m$. By Lorentz invariance we may assume without loss of generality that ${\\vct p}=0$. Therefore $p^2\\ge 4m^2$. Thus supp$(\\mu)\\subset C_m$. \n\nFor the converse, let $p=(\\omega_m({\\vct p}),{\\vct p}), q= (\\omega_m({\\vct p}),-{\\vct p})\\in H_m^{+}$ for ${\\vct p}\\in{\\bf R}^3$. As ${\\vct p}$ ranges over ${\\bf R}^3$, $p+q=(2\\omega_m({\\vct p}),{\\vct 0})$ ranges over $\\{(m^{\\prime},{\\vct 0}):m^{\\prime}\\geq2m\\}$. It follows using Lorentz invariance that supp$(\\mu)\\supset C_m$.\n\nTherefore the support supp$(\\mu)$ of $\\mu$ is $C_m$. Therefore by the spectral theorem $\\mu$ has a spectral representation of the form\n\\begin{equation} \\label{eq:convolution1}\n\\mu(\\Gamma)=\\int_{m^{\\prime}=2m}^{\\infty}\\Omega_{m^{\\prime}}(\\Gamma)\\,\\sigma(dm^{\\prime}),\n\\end{equation}\nfor some Borel measure $\\sigma:{\\mathcal B}([2m,\\infty))\\rightarrow{\\bf C}$. \n\n\\subsection{Computation of the spectrum of $\\Omega_m*\\Omega_m$ using the spectral calculus}\n\nLet $a,b\\in{\\bf R}$ with $00$. $\\Omega_{im}$ is a Lorentz invariant measure on $H_{im}=\\{p\\in{\\bf R}^4:p^2=-m^2\\}$.\n\nDefine, for $m\\in{\\bf C}$\n\\begin{equation}\nJ_m^{+}=\\{p\\in{\\bf C}^4:p^2=m^2,\\mbox{Re}(p^0)\\geq0, \\mbox{Im}(p^0)\\geq0\\},\n\\end{equation}\nwhere $p^2=\\eta_{\\mu\\nu}p^{\\mu}p^{\\nu}$. Then, for $m>0$,\n\\begin{equation}\nJ_m^{+}\\cap{\\bf R}^4=\\{p\\in{\\bf R}^4:p^2=m^2, p^0\\geq0\\}=H_m^{+},\n\\end{equation}\n\\begin{eqnarray} \\label{eq:cx_hyp_property}\nJ_m^{+}\\cap(i{\\bf R}^4) & = & \\{p\\in i{\\bf R}^4:p^2=m^2, \\mbox{Re}(p^0)\\geq0, \\mbox{Im}(p^0)\\geq0\\} \\nonumber \\\\\n & = & \\{iq:q\\in{\\bf R}^4, q^2=-m^2, q^0\\geq0\\} \\nonumber \\\\\n & = & iH_{im}^{+}. \n\\end{eqnarray}\n\nOne may consider the measure $\\Omega_m^{+}$ to be defined on $i{\\bf R}^4$ as well as ${\\bf R}^4$ and for all $m\\in{\\bf C}$ according to\n\\begin{equation}\n\\Omega_{m}^{+}({\\Gamma})=\\int_{\\pi(\\Gamma\\cap J_{m}^{+})}\\frac{d{\\vct p}}{\\omega_{m}({\\vct p})},\n\\end{equation}\nwhere, for $m\\in{\\bf C}$,\n\\begin{equation}\n\\omega_m:{\\bf C}^3\\rightarrow{\\bf C}, \\omega_m({\\vct p})=({\\vct p}^2+m^2)^{\\frac{1}{2}}, \\mbox{ where }{\\vct p}^2=\\delta_{jk}p^jp^k.\n\\end{equation}\nThen from Equation~\\ref{eq:cx_hyp_property}\n\\begin{equation}\n\\Omega_m^{+}(i\\Gamma)=\\int_{i\\pi(\\Gamma\\cap H_{im}^{+})}\\frac{d{\\vct p}}{\\omega_m({\\vct p})}.\n\\end{equation}\nNow make the substitution ${\\vct p}=i{\\vct q}$. Then $d{\\vct p}=-id{\\vct q}$. Also\n\\[ \\omega_{m}({\\vct q})=(m^2+{\\vct q}^2)^{\\frac{1}{2}}=(m^2-{\\vct p}^2)^{\\frac{1}{2}}=(-((im)^2+{\\vct p}^2))^{\\frac{1}{2}}=i\\omega_{im}({\\vct p}). \\]\nThis is true for all $m\\in{\\bf C}$. Therefore\n\\begin{equation}\ni\\omega_m({\\vct p})=\\omega_{(-im)}({\\vct q})=\\omega_{im}({\\vct q}),\n\\end{equation}\nand hence\n\\begin{equation}\n\\omega_m({\\vct p})=-i\\omega_{im}({\\vct q}).\n\\end{equation}\nThus\n\\begin{equation}\n\\Omega_m^{+}(i\\Gamma)=\\int_{\\pi(\\Gamma\\cap H_{im}^{+})}\\frac{-id{\\vct q}}{-i\\omega_{im}({\\vct q})}=\\Omega_{im}^{+}(\\Gamma). \n\\end{equation}\nDefine\n\\begin{equation}\n{\\mathcal B}_0({\\bf R}^4)=\\{\\Gamma\\in{\\mathcal B}({\\bf R}^4):\\Gamma\\mbox{ is relatively compact}\\}.\n\\end{equation}\nNow suppose that\n\\begin{equation}\n\\psi=\\sum_k c_k\\chi_{E_k},\n\\end{equation}\nwhere $c_i\\in{\\bf C}$ and $E_k\\in{\\mathcal B}_0({\\bf R}^4)$, is a simple function. 
Then\n\\begin{eqnarray}\n\\int_{{\\bf R}^4}\\psi(p)\\,\\Omega_{im}(dp) & = & \\sum_k c_k\\Omega_{im}(E_k) \\nonumber \\\\\n & = & \\sum_k c_k\\Omega_m(iE_k) \\nonumber \\\\\n & = & \\sum_k c_k\\int_{i{\\bf R}^4}\\chi_{iE_k}(p)\\,\\Omega_m(dp) \\nonumber \\\\\n & = & \\sum_k c_k\\int_{i{\\bf R}^4}\\chi_{E_k}(\\frac{p}{i})\\,\\Omega_m(dp) \\nonumber \\\\\n & = & \\int_{i{\\bf R}^4}\\psi(\\frac{p}{i})\\,\\Omega_m(dp). \\\\\n\\end{eqnarray}\nSince this is true for every such simple function $\\psi$ it follows that\n\\begin{equation}\n\\int_{{\\bf R}^4}\\psi(p)\\,\\Omega_{im}(dp)=\\int_{i{\\bf R}^4}\\psi(\\frac{p}{i})\\,\\Omega_m(dp),\n\\end{equation}\nfor every locally integrable function $\\psi$. Therefore\n\\begin{eqnarray} \\label{eq:convolve}\n(\\Omega_{im}*\\Omega_{im})(\\Gamma) & = & \\int_{({\\bf R}^4)^2}\\chi_{\\Gamma}(p+q)\\,\\Omega_{im}(dp)\\,\\Omega_{im}(dq) \\nonumber \\\\\n & = & \\int_{(i{\\bf R}^4)^2}\\chi_{\\Gamma}\\left(\\frac{p+q}{i}\\right)\\,\\Omega_m(dp)\\,\\Omega_m(dq) \\nonumber \\\\\n & = & \\int_{(i{\\bf R}^4)^2}\\chi_{i\\Gamma}(p+q)\\Omega_m(dp)\\Omega_m(dq) \\nonumber \\\\\n & = & (\\Omega_m*\\Omega_m)(i\\Gamma).\n\\end{eqnarray}\n\nNow in general, suppose that a measure $\\mu$ has a causal spectral representation of the form\n\\begin{equation}\n\\mu(\\Gamma)=\\int_{m^{\\prime}=0}^{\\infty}\\Omega_{m^{\\prime}}^{+}(\\Gamma)\\,\\sigma(m^{\\prime}),\n\\end{equation}\nfor some Borel spectral measure $\\sigma:[0,\\infty)\\rightarrow{\\bf C}$. \nThen $\\mu$ extends to a measure defined on $i{\\bf R}^4$ by\n\\begin{equation}\n\\mu(i\\Gamma)=\\int_{m=0}^{\\infty}\\Omega_m^{+}(i\\Gamma)\\,\\sigma(dm)=\\int_{m=0}^{\\infty}\\Omega_{im}^{+}(\\Gamma)\\,\\sigma(dm),\n\\end{equation}\nfor $\\Gamma\\in{\\mathcal B}({\\bf R}^4)$.\nTherefore since, as we have determined above, $\\Omega_m^{+}*\\Omega_m^{+}$ is a causal spectral measure with spectrum\n\\begin{equation} \\label{eq:imaginary_spectrum}\n\\sigma(m^{\\prime})=4\\pi mZ(m^{\\prime}) \\mbox{ for } m^{\\prime}\\ge2m,\n\\end{equation}\nit follows that\n\\begin{equation}\n(\\Omega_m^{+}*\\Omega_m^{+})(i\\Gamma)=\\int_{m=0}^{\\infty}\\Omega_{im^{\\prime}}^{+}(\\Gamma)\\,\\sigma(dm^{\\prime}).\n\\end{equation}\nTherefore using Eq.~\\ref{eq:convolve} $\\Omega_{im}^{+}*\\Omega_{im}^{+}$ is a measure with spectral representation\n\\begin{equation}\n(\\Omega_{im}^{+}*\\Omega_{im}^{+})(\\Gamma)=\\int_{m^{\\prime}=2m}^{\\infty}\\Omega_{im^{\\prime}}^{+}(\\Gamma)\\,\\sigma(m^{\\prime})\\,dm^{\\prime},\n\\end{equation}\nwhere $\\sigma$ is the spectral function given by Eq.~\\ref{eq:imaginary_spectrum}.\n Note that $\\Omega_{im}^{+}*\\Omega_{im}^{+}$ is not causal, it is a type III measure, and\n\\begin{equation}\n\\mbox{supp}(\\Omega_{im}^{+}*\\Omega_{im}^{+})=\\{p\\in{\\bf R}^4:p^2\\leq-4m^2, p^0\\geq0\\}.\n\\end{equation}\n\n\\section{Determination of the density defining a causal Lorentz invariant measure from its spectrum}\n\nSuppose that $\\mu$ is of the form of Eq.~\\ref{eq:invariant2} where $\\sigma$ is a well behaved (e.g. locally integrable) function. We would like to see if $\\mu$ can be defined by a density with respect to the Lebesgue measure, i.e. 
if there exists a function $g:{\\bf R}^4\\rightarrow{\\bf C}$ such that\n\\begin{equation}\n\\mu(\\Gamma)=\\int_{\\Gamma}g(p)\\,dp.\n\\end{equation}\nWell we have that\n\\begin{equation}\n\\mu(\\Gamma)=\\int_{m=0}^{\\infty}\\sigma(m)\\Omega_m^{+}(\\Gamma)\\,dm=\\int_{m=0}^{\\infty}\\sigma(m)\\int_{\\pi(\\Gamma\\cap H_m^{+})}\\frac{d{\\vct p}}{\\omega_m({\\vct p})}\\,dm.\n\\end{equation}\nNow \n\\begin{eqnarray}\n{\\vct p}\\in\\pi(\\Gamma\\cap H_m^{+}) & \\Leftrightarrow & (\\exists p\\in{\\bf R}^4){\\vct p}=\\pi(p),p\\in H_m^{+}, p\\in\\Gamma \\nonumber \\\\\n & \\Leftrightarrow & (\\omega_m({\\vct p}),{\\vct p})\\in\\Gamma \\nonumber \\\\\n & \\Leftrightarrow & \\chi_{\\Gamma}(\\omega_m({\\vct p}),{\\vct p})=1. \\nonumber\n\\end{eqnarray}\nTherefore\n\\begin{equation}\n\\mu(\\Gamma)=\\int_{m=0}^{\\infty}\\sigma(m)\\int_{{\\bf R}^3}\\chi_{\\Gamma}(\\omega_m({\\vct p}),{\\vct p})\\frac{1}{\\omega_m({\\vct p})}\\,d{\\vct p}\\,dm.\n\\end{equation}\nNow consider the transformation defined by the function $h:(0,\\infty)\\times{\\bf R}^3\\rightarrow{\\bf R}^4$ given by\n\\begin{equation}\nh(m,{\\vct p})=(\\omega_m({\\vct p}),{\\vct p}).\n\\end{equation}\nLet \n\\begin{equation}\nq=h(m,{\\vct p})=(\\omega_m({\\vct p}),{\\vct p})=((m^2+{\\vct p}^2)^{\\frac{1}{2}},{\\vct p}).\n\\end{equation}\nThen\n\\begin{equation}\n\\frac{\\partial q^0}{\\partial m}=m\\omega_m({\\vct p})^{-1}, \\frac{\\partial q^0}{\\partial p^j}= p^j\\omega_m({\\vct p})^{-1}, \\frac{\\partial q^i}{\\partial m}=0, \\frac{\\partial q^i}{\\partial p^j}=\\delta_{ij}, \n\\end{equation}\nfor $i,j=1,2,3$. Thus the Jacobian of the transformation is\n\\begin{equation}\nJ(m,{\\vct p})=m\\omega_m({\\vct p})^{-1}.\n\\end{equation}\nNow $q=(\\omega_m({\\vct p}),{\\vct p})$. Therefore $q^2=\\omega_m({\\vct p})^2-{\\vct p}^2=m^2$. So $m=(q^2)^{\\frac{1}{2}},q^2>0$.\nThus\n\\begin{eqnarray}\n\\mu(\\Gamma) & = & \\int_{q\\in{\\bf R}^4,q^2>0,q^0>0}\\chi_{\\Gamma}(q)\\frac{\\sigma(m)}{\\omega_m({\\vct p})}\\frac{dq}{J(m,{\\vct p})} \\nonumber \\\\\n & = & \\int_{q^2>0,q^0>0}\\chi_{\\Gamma}(q)\\frac{\\sigma(m)}{m}\\,dq.\n\\end{eqnarray}\nHence\n\\begin{eqnarray}\n\\mu(\\Gamma) & = & \\int_{q^2>0,q^0>0}\\chi_{\\Gamma}(q)\\frac{\\sigma((q^2)^{\\frac{1}{2}})}{(q^2)^{\\frac{1}{2}}}\\,dq \\nonumber \\\\\n & = & \\int_{\\Gamma}g(q)\\,dq, \\nonumber\n\\end{eqnarray}\nwhere $g:{\\bf R}^4\\rightarrow{\\bf C}$ is defined by\n\\begin{equation}\ng(q)=\\left\\{\\begin{array}{l}\n(q^2)^{-\\frac{1}{2}}\\sigma((q^2)^{\\frac{1}{2}})\\mbox{ if } q^2>0,q^0>0 \\\\\n0\\mbox{ otherwise.}\n\\end{array}\\right.\n\\end{equation}\nWe have therefore shown how, given a spectral representation of a causal measure in which the spectrum is a complex function, one can obtain and equivalent representation of the measure in terms of a density with respect to Lebesgue measure.\n\n\\section{Convolutions and products of causal Lorentz invariant Borel measures}\n\n\\subsection{Convolution of measures}\n\nLet $\\mu$ and $\\nu$ be causal Lorentz invariant Borel measures. 
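(A concrete example to keep in mind is the measure $\Omega_m*\Omega_m$ of Section 3: its spectrum is the continuous function $\sigma(m^{\prime})=4\pi mZ(m^{\prime})$ for $m^{\prime}\ge2m$ of Eq.~\ref{eq:imaginary_spectrum}, with $Z(m^{\prime})=(\frac{{m^{\prime}}^2}{4m^2}-1)^{\frac{1}{2}}$, so by the result of the previous section it is given by the density $g(q)=4\pi m(q^2)^{-\frac{1}{2}}Z((q^2)^{\frac{1}{2}})$ on the set $\{q\in{\bf R}^4:q^2>4m^2,q^0>0\}$ and by $0$ elsewhere.)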
Then there exist Borel spectral measures $\\sigma,\\rho:{\\mathcal B}([0,\\infty))\\rightarrow{\\bf C}$ such that\n\\begin{eqnarray}\n\\mu & = & \\int_{m=0}^{\\infty}\\Omega_m\\,\\sigma(dm), \\nonumber \\\\\n\\nu & = & \\int_{m=0}^{\\infty}\\Omega_m\\,\\rho(dm).\n\\end{eqnarray}\nThe convolution of $\\mu$ and $\\nu$, if it exists, is given by\n\\begin{equation}\n(\\mu*\\nu)(\\Gamma)=\\int\\chi_{\\Gamma}(p+q)\\,\\mu(dp)\\,\\nu(dq).\n\\end{equation}\nNow let $\\psi=\\sum_i c_i\\chi_{E_i}$ with $c_i\\in{\\bf C}, E_i\\in{\\mathcal B}_0({\\bf R}^4)$ be a simple function. Then\n\\begin{eqnarray}\n\\int\\psi(p)\\,\\mu(dp) & = & \\int\\sum_i c_i\\chi_{E_i}\\,\\mu(dp) \\nonumber \\\\\n & = & \\sum_i c_i\\mu(E_i) \\nonumber \\\\\n & = & \\sum_i c_i\\int_{m=0}^{\\infty}\\Omega_m(E_i)\\,\\sigma(dm) \\nonumber \\\\\n & = & \\sum_i c_i\\int_{m=0}^{\\infty}\\int_{{\\bf R}^4}\\chi_{E_i}(p)\\,\\Omega_m(dp)\\,\\sigma(dm) \\nonumber \\\\\n & = & \\int_{m=0}^{\\infty}\\int_{{\\bf R}^4}\\psi(p)\\,\\Omega_m(dp)\\,\\sigma(dm). \\nonumber\n\\end{eqnarray}\nTherefore for any sufficiently well behaved (e.g Schwartz) measurable function $\\psi:{\\bf R}^4\\rightarrow{\\bf C}$\n\\begin{equation}\n\\int\\psi(p)\\mu(dp)=\\int\\psi(p)\\,\\Omega_m(dp)\\,\\sigma(dm).\n\\end{equation}\n(Note that the integral exists because $\\sigma$ is a Borel measure.) Hence\n\\begin{eqnarray} \\label{eq:convolve1}\n(\\mu*\\nu)(\\Gamma) & = & \\int\\chi_{\\Gamma}(p+q)\\,\\mu(dp)\\,\\nu(dq) \\nonumber \\\\\n & = & \\int\\chi_{\\Gamma}(p+q)\\,\\Omega_m(dp)\\,\\sigma(dm)\\,\\Omega_{m^{\\prime}}(dq)\\,\\rho(dm^{\\prime}) \\nonumber \\\\\n & = & \\int\\chi_{\\Gamma}(p+q)\\,\\Omega_m(dp)\\,\\Omega_{m^{\\prime}}(dq)\\sigma(dm)\\,\\rho(dm^{\\prime}), \n\\end{eqnarray}\nby Fubini's theorem, as long as the integral\n\\begin{equation}\n\\int\\chi_{\\Gamma}(p+q)\\,\\Omega_m(dp)\\,\\Omega_{m^{\\prime}}(dq)|\\sigma|(dm)\\,|\\rho|(dm^{\\prime}),\n\\end{equation}\nexists where $|\\sigma|,|\\rho|$ are the total variations of the measures $\\sigma,\\rho$. \n\nSuppose that $\\Gamma\\subset{\\bf R}^4$ is compact. Then there exists $a,R\\in(0,\\infty)$ such that $\\Gamma\\subset(-a,a)\\times B_R({\\vct 0})$, where $B_R({\\vct 0})=\\{{\\vct p}\\in{\\bf R}^3:|{\\vct p}|a$. Then\n\\begin{equation}\np\\in H_m^{+},q\\in H_{m^{\\prime}}^{+}\\Rightarrow(p+q)^0=p^0+q^0\\geq m+m^{\\prime}>2a\\Rightarrow (p+q){\\slas\\in}\\Gamma.\n\\end{equation}\nThus\n\\begin{equation}\n\\int\\chi_{\\Gamma}(p+q)\\,\\Omega_m(dp)\\,\\Omega_{m^{\\prime}}(dq)=0.\n\\end{equation}\n\nTherefore since $\\sigma$ and $\\rho$ are Borel, $(\\mu*\\nu)(\\Gamma)$ exists, is finite and is given by Eq.~\\ref{eq:convolve1}. \n\nNow let $\\Lambda\\in O(1,3)^{+\\uparrow}$, $\\psi:{\\bf R}^4\\rightarrow{\\bf C}$ be a measurable function of compact support. Then\n\\begin{eqnarray}\n<\\mu*\\nu,\\Lambda\\psi> & = & \\int\\psi(\\Lambda^{-1}(p+q))\\,\\Omega_m(dp)\\,\\Omega_{m^{\\prime}}(dq)\\,\\sigma(dm)\\,\\rho(dm^{\\prime}) \\nonumber \\\\\n & = & \\int\\psi(p+q)\\,\\Omega_m(dp)\\,\\Omega_{m^{\\prime}}(dq)\\,\\sigma(dm)\\,\\rho(dm^{\\prime}). \\nonumber \\\\\n & = & <\\mu*\\nu,\\psi> \\nonumber\n\\end{eqnarray}\nTherefore $\\mu*\\nu$ is Lorentz invariant. 
It can be shown, by an argument similar to that used for the case $\\Omega_m*\\Omega_m$ that $\\mu*\\nu$ is causal.\n\nWe have therefore shown that the convolution of two causal Lorentz invariant Borel measures exists an is a causal Lorentz invariant Borel measure.\n\n\\subsection{Product of measures}\n\nWe now turn to the problem of computing the product of two causal Lorentz invariant Borel measures. The problem of computing the product of measures or distributions is difficult in general and has atracted a large amount of research (Colombeau, 1984; Oberguggenberger, 1992). In such work one generally seeks a definition of product of measures or distributions which agrees with the ordinary product when the measures or distributions are functions (i.e. densities with respect to Lebesgue measure). The most common approach is to use the fact that, for Schwartz functions $f,g\\in{\\mathcal S}({\\bf R}^4)$ multiplication in the spatial domain corresponds to convolution in the frequency domain, i.e. $(fg)^{\\wedge}=f^{\\wedge}*g^{\\wedge}$ (where $\\wedge$ denotes the Fourier transform operator). Thus one defines the product of measures or distributions $\\mu,\\nu$ as\n\\begin{equation}\n\\mu\\nu=(\\mu^{\\wedge}*\\nu^{\\wedge})^{\\vee}.\n\\end{equation}\nHowever this definition is only successful when the convolution that it involves exists which may not be the case in general. If $\\mu,\\nu$ are tempered measures then $\\mu^{\\wedge}$ and $\\nu^{\\wedge}$ exist as tempered distributions, however they are generally not causal, even if $\\mu,\\nu$ are causal. \n\nWe will therefore not use the ``frequency space\" approach to define the product of measures but will use a different approach. Our approach is just as valid as the frequency space approach because our product will coincide with the usual function product when the measures are defined by densities. Furthermore, our approach is useful for the requirements of QFT because measures and distributions in QFT are frequently Lorentz invariant and causal.\n\nLet int$(C)=\\{p\\in{\\bf R}^4:p^2>0,p^0>0\\}$. Suppose that $f:\\mbox{int}(C)\\rightarrow{\\bf C}$ is a Lorentz invariant locally integrable function. Then it defines a causal Lorentz invariant Borel measure $\\mu_f$ which, by the spectral theorem, must have a representation of the form\n\\begin{equation}\n\\mu_f(\\Gamma)=\\int_{\\Gamma}f(p)\\,dp=\\int_{m=0}^{\\infty}\\Omega_m(\\Gamma)\\,\\sigma(dm),\n\\end{equation}\nfor some spectral measure $\\sigma:{\\mathcal B}([0,\\infty))\\rightarrow{\\bf C}$. Since $\\mu_f$ is absolutely continuous with respect to Lebesgue measure it follows that $\\sigma$ must be non singular, i.e. a function.\nBy the result of the previous section a density defining $\\mu_f$ is ${\\tld f}:\\mbox{int}(C)\\rightarrow{\\bf C}$ defined by\n\\begin{equation}\n{\\tld f}(p)=(p^2)^{-\\frac{1}{2}}\\sigma((p^2)^{\\frac{1}{2}}), p\\in\\mbox{ int}(C).\n\\end{equation}\nWe must have that ${\\tld f}=f$ (almost everywhere). Therefore (almost everywhere on int$(C)$)\n\\begin{equation}\nf(p)=(p^2)^{-\\frac{1}{2}}\\sigma((p^2)^{\\frac{1}{2}}).\n\\end{equation}\n$f(p)$ depends only on $p^2$. Therefore $\\sigma(m)=mf(p)$ for all $p\\in\\mbox{int}(C)$ such that $p^2=m^2$. In particular\n\\begin{equation}\n\\sigma(m)=mf((m,{\\vct 0})^{T}), \\forall m>0.\n\\end{equation}\nNow we are seeking a definition of product which has useful properties. 
Two such properties would be that it is distributive with respect to generalized sums such as integrals and also that it agrees with the ordinary product when the measures are defined by functions. Suppose that we had such a product. Let $f,g:\\mbox{int}(C)\\rightarrow{\\bf C}$ be Lorentz invariant locallly integrable functions. Let $\\mu,\\nu:{\\mathcal B}(\\mbox{int}(C))\\rightarrow{\\bf C}$ be the associated measures with spectra $\\sigma,\\rho$. Then\n\\begin{eqnarray}\n\\mu\\nu & = & \\int_{m=0}^{\\infty}\\Omega_m\\,\\sigma(dm)\\int_{m^{\\prime}=0}^{\\infty}\\Omega_{m^{\\prime}}\\,\\rho(dm^{\\prime}) \\nonumber \\\\\n & = & \\int_{m=0}^{\\infty}\\Omega_m\\,mf((m,{\\vct 0})^{T})\\,dm\\int_{m^{\\prime}=0}^{\\infty}\\Omega_{m^{\\prime}}\\,m^{\\prime}g((m^{\\prime},{\\vct 0})^{T})\\,dm^{\\prime} \\nonumber \\\\\n & = & \\int_{m=0}^{\\infty}\\int_{m^{\\prime}=0}^{\\infty}\\Omega_m\\Omega_{m^{\\prime}}mf((m,{\\vct 0})^{T})m^{\\prime}g((m^{\\prime},{\\vct 0})^{T})\\,dm\\,dm^{\\prime}. \\nonumber \n\\end{eqnarray}\nNow we want this to be equal to\n\\begin{equation}\n\\int_{m=0}^{\\infty}\\Omega_mm(fg)((m,{\\vct 0})^{T})\\,dm\n\\end{equation}\nThis will be the case (formally) if we have\n\\begin{equation}\n\\Omega_m\\Omega_{m^{\\prime}}=\\frac{1}{m}\\delta(m-m^{\\prime})\\Omega_m, \\forall m,m^{\\prime}>0.\n\\end{equation}\nPhysicists will be familiar with such a formula (e.g. the equal time commutation relations). Rather than attempting to define its meaning in a rigorous way we will simply carry out the following formal computation for general Lorentz invariant Borel measures $\\mu,\\nu$ with spectra $\\sigma,\\rho$\n\\begin{eqnarray}\n\\mu\\nu & = & \\int_{m=0}^{\\infty}\\Omega_m\\,\\sigma(dm)\\int_{m^{\\prime}=0}^{\\infty}\\Omega_{m^{\\prime}}\\,\\rho(dm^{\\prime}) \\nonumber \\\\\n & = & \\int_{m=0}^{\\infty}\\int_{m^{\\prime}=0}^{\\infty}\\Omega_m\\Omega_{m^{\\prime}}\\sigma(m)\\rho(m^{\\prime})\\,dm\\,dm^{\\prime} \\nonumber \\\\\n & = & \\int_{m=0}^{\\infty}\\int_{m^{\\prime}=0}^{\\infty}\\frac{1}{m}\\Omega_m\\delta(m-m^{\\prime})\\sigma(m)\\rho(m^{\\prime})\\,dm^{\\prime}\\,dm \\nonumber \\\\\n & = & \\int_{m=0}^{\\infty}\\frac{1}{m}\\Omega_m\\sigma(m)\\rho(m)\\,dm. \\nonumber\n\\end{eqnarray}\nTherefore we can simply define the product $\\mu\\nu$ in general by\n\\begin{equation}\n\\mu\\nu=\\int_{m=0}^{\\infty}\\frac{1}{m}\\Omega_m\\,(\\sigma\\rho)(dm).\n\\end{equation}\nWe have therefore reduced the problem of computing the product of measures on int$(C)$ to the problem of computing the product of their 1D spectral measures. The problem of multiplying 1D measures is somewhat less problematical than the problem of multiplying 4D measures. A large class of 1D measures is made up of measures which are of the form of a function plus a finite number of ``atoms\" (singularities of the form $c\\delta_a$ where $c\\in{\\bf C},a\\in[0,\\infty)$, where $\\delta_a$ is the Dirac delta function (measure) concentrated at $a$). There are other pathological types of 1D measure but these may not be of interest for physical applications. 
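As a simple consistency check of the formal product rule just obtained, suppose for illustration that $f\equiv1$ and $g(p)=(p^2)^{-\frac{1}{2}}$ on $\mbox{int}(C)$. Then $\sigma(m)=mf((m,{\vct 0})^{T})=m$ and $\rho(m)=mg((m,{\vct 0})^{T})=1$, and the rule gives
\begin{equation}
\mu\nu=\int_{m=0}^{\infty}\frac{1}{m}\Omega_m\,\sigma(m)\rho(m)\,dm=\int_{m=0}^{\infty}\Omega_m\,dm,
\end{equation}
whose spectrum $\tau(m)=1=m(fg)((m,{\vct 0})^{T})$ is exactly the spectrum of the measure defined by the ordinary product $(fg)(p)=(p^2)^{-\frac{1}{2}}$, as required.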
\n\nIn the general non-pathological case, if $\\mu,\\nu$ are causal Lorentz invariant Borel measures with spectra $\\sigma(m)=\\xi(m)+\\sum_{i=1}^{k}c_i\\delta(m-a_i), \\rho(m)=\\zeta(m)+\\sum_{j=1}^l d_j\\delta(m-b_j)$ where $\\xi,\\zeta:[0,\\infty)\\rightarrow{\\bf C}$ are locally integrable functions, $c_i,d_j\\in{\\bf C},a_i,b_j\\in[0,\\infty)$ are such that $a_i\\neq b_j,\\forall i,j$ then we may define the product of $\\mu$ and $\\nu$ to be the causal Lorentz invariant measure $\\mu\\nu$ given by\n\\begin{equation}\n\\mu\\nu=\\int_{m=0}^{\\infty}\\Omega_m\\tau(dm),\n\\end{equation}\nwhere \n\\begin{equation}\n\\tau(m)=\\frac{1}{m}(\\xi(m)\\zeta(m)+\\sum_{i=1}^k\\sum_{j=1}^l c_id_j\\delta(m-a_i)\\delta(m-b_j)).\n\\end{equation}\n\nThis definition will suffice for many of the requirements of QFT, and has the properties that we desire.\n\n\\section{Vacuum polarization}\n\n\\subsection{Definition of the contraction of the vacuum polarization tensor as a Lorentz invariant tempered complex measure $\\Pi$}\n\nThe vacuum polarization tensor is written as\n\n\\begin{equation}\n\\Pi^{\\mu\\nu}(k)=-e^2\\int\\frac{dp}{(2\\pi)^4}\\mbox{Tr}(\\gamma^{\\mu}\\frac{1}{{\\slas p}-m+i\\epsilon}\\gamma^{\\nu}\\frac{1}{{\\slas p}-{\\slas k}-m+i\\epsilon}),\n\\end{equation}\n(Itzikson and Zuber, 1980, p. 319). This can be rewritten as\n\\begin{equation}\n\\Pi^{\\mu\\nu}(k)=-\\frac{e^2}{(2\\pi)^4}\\int\\frac{\\mbox{Tr}(\\gamma^{\\mu}({\\slas p}+m)\\gamma^{\\nu}({\\slas p}-{\\slas k}+m))}{(p^2-m^2+i\\epsilon)((p-k)^2-m^2+i\\epsilon)}\\,dp.\n\\end{equation}\nTherefore, contracting with the Minkowski space metric tensor, the ``function\" that we are interested in computing is\n\\begin{equation} \\label{eq:Pi1}\n\\Pi(k)=-\\frac{e^2}{(2\\pi)^4}\\int\\frac{\\mbox{Tr}(\\eta_{\\mu\\nu}\\gamma^{\\mu}({\\slas p}+m)\\gamma^{\\nu}({\\slas p}-{\\slas k}+m))}{(p^2-m^2+i\\epsilon)((p-k)^2-m^2+i\\epsilon)}\\,dp.\n\\end{equation}\nAs is well known, the integral defining this ``function\" is divergent for all $k\\in{\\bf R}^4$ and all the machinery of regularization and renormalization has been developed to get around this problem.\n\nWe propose that the object defined by Eq.~\\ref{eq:Pi1} exists when viewed as a measure on Minkowski space. To show this, suppose that $\\Pi$ were a density for a measure which we also denote as $\\Pi$. Then we may make the following formal computation.\n\\begin{eqnarray}\n\\Pi(\\Gamma) & = & \\int_{\\Gamma}\\Pi(k)\\,dk \\nonumber \\\\\n & = & \\int\\chi_{\\Gamma}(k)\\Pi(k)\\,dk \\nonumber \\\\\n & = & -\\frac{e^2}{(2\\pi)^4}\\int\\chi_{\\Gamma}(k)\\frac{\\mbox{Tr}(\\eta_{\\mu\\nu}\\gamma^{\\mu}({\\slas p}+m)\\gamma^{\\nu}({\\slas p}-{\\slas k}+m))}{(p^2-m^2+i\\epsilon)((p-k)^2-m^2+i\\epsilon)}\\,dp\\,dk \\nonumber \\\\\n & = & -\\frac{e^2}{(2\\pi)^4}\\int\\chi_{\\Gamma}(k)\\frac{\\mbox{Tr}(\\eta_{\\mu\\nu}\\gamma^{\\mu}({\\slas p}+m)\\gamma^{\\nu}({\\slas p}-{\\slas k}+m))}{(p^2-m^2+i\\epsilon)((p-k)^2-m^2+i\\epsilon)}\\,dk\\,dp \\nonumber \\\\ \n & = & -\\frac{e^2}{(2\\pi)^4}\\int\\chi_{\\Gamma}(k+p)\\frac{\\mbox{Tr}(\\eta_{\\mu\\nu}\\gamma^{\\mu}({\\slas p}+m)\\gamma^{\\nu}(-{\\slas k}+m))}{(p^2-m^2+i\\epsilon)(k^2-m^2+i\\epsilon)}\\,dk\\,dp \\nonumber \\\\ \n & = & \\frac{e^2}{(2\\pi)^4}\\int\\chi_{\\Gamma}(k+p)\\frac{\\mbox{Tr}(\\eta_{\\mu\\nu}\\gamma^{\\mu}({\\slas p}+m)\\gamma^{\\nu}({\\slas k}-m))}{(p^2-m^2+i\\epsilon)(k^2-m^2+i\\epsilon)}\\,dk\\,dp. 
\\nonumber\n\\end{eqnarray} \nNow the propagators in QFT can be viewed in a rigorous fashion as measures on Minkowski space and we make the identification\n\\begin{equation}\n\\frac{1}{p^2-m^2+i\\epsilon}\\rightarrow -\\pi i\\Omega_m^{\\pm}(p), m\\ge0,\n\\end{equation}\n(see (Mashford, 2017b) for explanation). Therefore the outcome of our formal computations is that\n\\begin{equation} \\label{eq:Pi_gen}\n\\Pi(\\Gamma)=-\\frac{e^2}{16\\pi^2}\\int\\chi_{\\Gamma}(k+p)\\mbox{Tr}(\\eta_{\\mu\\nu}\\gamma^{\\mu}({\\slas p}+m)\\gamma^{\\nu}({\\slas k}-m))\\,\\Omega_m^{\\pm}(dk)\\,\\Omega_m^{\\pm}(dp).\n\\end{equation}\nWe will consider the case\n\\begin{equation} \\label{eq:Pi2}\n\\Pi(\\Gamma)=-\\frac{e^2}{16\\pi^2}\\int\\chi_{\\Gamma}(k+p)\\mbox{Tr}(\\eta_{\\mu\\nu}\\gamma^{\\mu}({\\slas p}+m)\\gamma^{\\nu}({\\slas k}-m))\\,\\Omega_m(dk)\\,\\Omega_m(dp), m>0.\n\\end{equation}\n(We use the symbol $\\Omega_m$ to denote $\\Omega_m^{+}$ if $m>0$ or $\\Omega_m^{-}$ if $m<0$.)\n\nThe important thing is that the object defined by Eq.~\\ref{eq:Pi2} exists as a Borel complex measure (i.e. when its argument $\\Gamma$ is a relatively compact Borel set in ${\\bf R}^4$). This is because\n\\begin{equation}\n\\int\\chi_{\\Gamma}(k+p)|\\mbox{Tr}(\\eta_{\\mu\\nu}\\gamma^{\\mu}({\\slas p}+m)\\gamma^{\\nu}({\\slas k}-m))|\\,\\Omega_m(dk)\\,\\Omega_m(dp)<\\infty, \n\\end{equation}\nfor all $\\Gamma\\in{\\mathcal B}_0({\\bf R}^4)$.\n\nIt also exists as a tempered distribution since\n\\begin{equation} \\label{eq:Pi3}\n\\int\\psi(k+p)\\mbox{Tr}(\\eta_{\\mu\\nu}\\gamma^{\\mu}({\\slas p}+m)\\gamma^{\\nu}({\\slas k}-m))\\,\\Omega_m(dk)\\,\\Omega_m(dp),\n\\end{equation}\nis convergent for any Schwartz function $\\psi\\in{\\mathcal S}({\\bf R}^4,{\\bf C})$. The basic reason for both these facts is that as $|p|,|q|\\rightarrow{\\infty}$ with $p,q\\in H_m^{+}$, $(p+q)^0\\rightarrow{\\infty}$.\n\nThus $\\Pi$ exists as a tempered measure. Hence we have in a few lines of formal argument arrived at an object which has a well defined existence and can investigate the properties of this object $\\Pi$ without any further concern about ill-definedness or the fear of propagating ill-definedness through our calculations.\n\nBy (Mashford, 2017b, Theorem 5) the ${\\bf C}^{4\\times4}$ valued measure defined by\n\\begin{equation} \\label{eq:Pi4}\n\\Phi(\\Gamma)=-\\frac{e^2}{16\\pi^2}\\int\\chi_{\\Gamma}(k+p)(\\eta_{\\mu\\nu}\\gamma^{\\mu}({\\slas p}+m)\\gamma^{\\nu}({\\slas k}-m))\\,\\Omega_m(dk)\\,\\Omega_m(dp),\n\\end{equation}\nis $K$ invariant for all $m>0$ (see (Mashford, 2017b) for a definition of the group $K$ and of the notion of $K$ invariance). Also we have the following. \n\\begin{theorem}\nIf $\\Psi:{\\mathcal B}_0({\\bf R}^4)\\rightarrow{\\bf C}^{4\\times4}$ is a $K$ invariant measure then the object $\\mbox{Tr}(\\Psi):{\\mathcal B}_0({\\bf R}^4)\\rightarrow{\\bf C}$ defined by\n\\begin{equation}\n(\\mbox{Tr}(\\Psi))(\\Gamma)=\\mbox{Tr}(\\Psi(\\Gamma)),\n\\end{equation}\nis a Lorentz invariant complex measure.\n\\end{theorem}\n{\\bf Proof} It is straightforward to show that Tr$(\\Psi)$ is countably subadditive. By definition of $K$ invariance (Mashford, 2017b) we have that\n\\begin{equation}\n\\Psi(\\Lambda(\\kappa)(\\Gamma))=\\kappa\\Psi(\\Gamma)\\kappa^{-1}, \\forall\\kappa\\in K,\\Gamma\\in{\\mathcal B}_0({\\bf R}^4),\n\\end{equation}\nwhere $\\Lambda(\\kappa)\\in O(1,3)^{+\\uparrow}$ is the Lorentz transformation corresponding to $\\kappa\\in K$. 
Therefore\n\\begin{equation}\n\\mbox{Tr}(\\Psi)(\\Lambda(\\kappa)(\\Gamma))=\\mbox{Tr}(\\kappa\\Psi(\\Gamma)\\kappa^{-1})=\\mbox{Tr}(\\Psi(\\Gamma)), \\forall\\kappa\\in K,\\Gamma\\in{\\mathcal B}_0({\\bf R}^4),\n\\end{equation}\nand hence\n\\begin{equation}\n\\mbox{Tr}(\\Psi)(\\Lambda(\\Gamma))=\\mbox{Tr}(\\Psi(\\Gamma))=(\\mbox{Tr}(\\Psi))(\\Gamma), \\forall\\Lambda\\in O(1,3)^{+\\uparrow},\\Gamma\\in{\\mathcal B}_0({\\bf R}^4). \n\\end{equation}\n$\\Box$\n\nIt follows that the measure $\\Pi$ that we have defined is a Lorentz invariant measure.\n\nUsing an argument similar to that for $\\Omega_m*\\Omega_m$ it can be shown that the support supp$(\\Pi)$ of $\\Pi$ is a subset of $C_m$. \n\n\\subsection{Application of the spectral calculus to determine the spectrum of $\\Pi$}\n\nWe have shown that $\\Pi$ is a Lorentz invariant tempered complex measure with support contained in $C_m$. Therefore by the spectral theorem $\\Pi$ must have a spectral representation of the form\n\\begin{equation} \\label{eq:vp_spectral1}\n\\Pi(\\Gamma)=\\int_{m^{\\prime}=2m}^{\\infty}\\sigma(dm^{\\prime})\\Omega_{m^{\\prime}}(\\Gamma).\n\\end{equation}\nWe would like to compute the spectral measure $\\sigma$. First we have\n\\begin{eqnarray}\n& & \\mbox{Tr}(\\eta_{\\mu\\nu}\\gamma^{\\mu}({\\slas p}+m)\\gamma^{\\nu}({\\slas k}-m)) \\nonumber \\\\\n& & = \\eta_{\\mu\\nu}p_{\\alpha}k_{\\beta}\\mbox{Tr}(\\gamma^{\\mu}\\gamma^{\\alpha}\\gamma^{\\nu}\\gamma^{\\beta})+0+0-\\mbox{Tr}(\\gamma^{\\mu}\\gamma_{\\mu}m^2) \\nonumber \\\\\n& & = \\eta_{\\mu\\nu}p_{\\alpha}k_{\\beta}\\mbox{Tr}(\\gamma^{\\mu}\\gamma^{\\alpha}\\gamma^{\\nu}\\gamma^{\\beta})-16m^2 \\nonumber \\\\\n& & = 4p_{\\alpha}k_{\\beta}\\eta_{\\mu\\nu}(\\eta^{\\mu\\alpha}\\eta^{\\nu\\beta}-\\eta^{\\mu\\nu}\\eta^{\\alpha\\beta}+\\eta^{\\mu\\beta}\\eta^{\\alpha\\nu})-16m^2 \\nonumber \\\\\n& & = 4\\eta_{\\mu\\nu}(p^{\\mu}k^{\\nu}-\\eta^{\\mu\\nu}p.k+k^{\\mu}p^{\\nu})-16m^2 \\nonumber \\\\\n& & = 4(p.k-4p.k+p.k-4m^2) \\nonumber \\\\\n& & = -8(p.k+2m^2), \\nonumber \n\\end{eqnarray}\nwhere we have used in the second line the fact that the trace of a product of an odd number of gamma matrices vanishes.\n\nWe now compute in a fashion similar to that used when determining the spectrum of $\\Omega_m*\\Omega_m$ (which can be justified in a fashion similar to the justification of Argument 1) as follows.\n\\begin{eqnarray}\ng(a,b,\\epsilon) & = & \\Pi(\\Gamma(a,b,\\epsilon)) \\nonumber \\\\\n & = & -\\frac{e^2}{16\\pi^2}\\int\\chi_{\\Gamma(a,b,\\epsilon)}(k+p)\\mbox{Tr}(\\eta_{\\mu\\nu}\\gamma^{\\mu}({\\slas p}+m)\\gamma^{\\nu}({\\slas k}-m))\\,\\Omega_m(dk)\\,\\Omega_m(dp) \\nonumber \\\\\n & = & \\frac{e^2}{2\\pi^2}\\int\\chi_{\\Gamma(a,b,\\epsilon)}(k+p)(p.k+2m^2)\\,\\Omega_m(dk)\\,\\Omega_m(dp) \\nonumber \\\\\n & \\approx & \\frac{e^2}{2\\pi^2}\\int\\chi_{(a,b)}(\\omega_m({\\vct k})+\\omega_m({\\vct p}))\\chi_{B_{\\epsilon}({\\vct 0})}({\\vct k}+{\\vct p})(\\omega_m({\\vct p})\\omega_m({\\vct k})-{\\vct p}.{\\vct k}+2m^2)\\, \\nonumber \\\\\n & & \\frac{d{\\vct k}}{\\omega_m({\\vct k})}\\,\\frac{d{\\vct p}}{\\omega_m({\\vct p})} \\nonumber \\\\\n & = & \\frac{e^2}{2\\pi^2}\\int\\chi_{(a,b)}(\\omega_m({\\vct k})+\\omega_m({\\vct p}))\\chi_{B_{\\epsilon}({\\vct 0})-{\\vct p}}({\\vct k})(\\omega_m({\\vct p})\\omega_m({\\vct k})-{\\vct p}.{\\vct k}+2m^2)\\, \\nonumber \\\\\n & & \\frac{d{\\vct k}}{\\omega_m({\\vct k})}\\,\\frac{d{\\vct p}}{\\omega_m({\\vct p})} \\nonumber \\\\\n & \\approx & \\frac{e^2}{2\\pi^2}\\int\\chi_{(a,b)}(2\\omega_m({\\vct p}))(3m^2+2{\\vct 
p}^2)\\,\\frac{d{\\vct p}}{\\omega_m({\\vct p})^2}(\\frac{4}{3}\\pi\\epsilon^3) \\nonumber \n\\end{eqnarray}\nTherefore \n\\begin{eqnarray}\ng_a(b) & = & \\lim_{\\epsilon\\rightarrow0}\\epsilon^{-3}g(a,b,\\epsilon) \\nonumber \\\\\n & = & \\frac{e^2}{2\\pi^2}\\int\\chi_{(a,b)}(2\\omega_m({\\vct p}))(3m^2+2{\\vct p}^2)\\,\\frac{d{\\vct p}}{\\omega_m({\\vct p})^2}(\\frac{4}{3}\\pi) \\nonumber \\\\\n & = & \\frac{2e^2}{\\pi}\\int_{r=mZ(a)}^{mZ(b)}(3m^2+2r^2)\\frac{r^2}{m^2+r^2}\\,dr(\\frac{4}{3}\\pi). \\nonumber\n\\end{eqnarray}\nThus we compute the spectum of $\\Pi$ as follows.\n\\begin{eqnarray}\n\\sigma(b) & = & \\frac{3}{4\\pi}bg_a^{\\prime}(b) \\nonumber \\\\\n & = & \\frac{2e^2}{\\pi}b(3m^2+2m^2Z^2(b))\\frac{m^2Z^2(b)}{m^2+m^2Z^2(b)}\\frac{b}{4mZ(b)} \\nonumber \\\\\n & = & \\frac{2}{\\pi}e^2m^3Z(b)(3+2Z^2(b)), \\nonumber\n\\end{eqnarray}\nwhere $Z:[2m,\\infty)\\rightarrow[0,\\infty)$ is given by Eq.~\\ref{eq:Z_def}.\n\nThe spectrum has this value $\\sigma(b)$ for $b\\ge2m$ and the value 0 for $b\\leq2m$. One can now see that $\\Pi$ is a Borel measure in the ordinary sense of the word, i.e. $[0,\\infty]$ valued countably subadditive function on ${\\mathcal B}({\\bf R}^4)$ which is finite on compact sets. (It is clearly defined on the larger sigma algebra of Lebesgue measurable sets.) $\\Pi$ is finite on compact sets and when evaluated on test functions of rapid decrease, it is not divergent.\n\n\\subsection{The vacuum polarization function $\\pi$}\n\nWe know that $q\\mapsto\\Pi(q)$ does not exist pointwise as a function, the integral defining it is divergent. However, pretend for the moment that $\\Pi$ did exist as a function. Then we can define a measure which we also denote by $\\Pi$ by\n\\begin{equation}\n\\Pi(\\Gamma)=\\int_{\\Gamma}\\Pi(q)\\,dq.\n\\end{equation}\nThus the function $\\Pi$ is the density defining the measure $\\Pi$. Now we know that in fact $\\Pi$ exists as a tempered measure with density\n\\begin{equation}\n\\Pi(q)=\\left\\{\\begin{array}{l}\n(q^2)^{-\\frac{1}{2}}\\sigma((q^2)^{\\frac{1}{2}})\\mbox{ if } q^2>0,q^0>0 \\\\\n0\\mbox{ otherwise,}\n\\end{array}\\right.\n\\end{equation}\nwhere \n\\begin{equation} \\label{eq:pi_spectrum}\n\\sigma(b)=\\frac{2}{\\pi}e^2m^3Z(b)(3+2Z^2(b)), b\\ge0,\n\\end{equation} \nis the spectrum of the measure $\\Pi$. Thus we may think of $\\Pi$ the function as being defined to be equal to this density.\n\nWe define the vacuum polarization function $\\pi:\\{q\\in{\\bf R}^4:q^2>0,q^0>0\\}\\rightarrow{\\bf R}$ by\n\\begin{equation} \\label{eq:pi_def}\n\\pi(q)=\\frac{\\Pi(q)}{q^2},\n\\end{equation}\n(Weinberg (2005, p. 478) states that $\\Pi^{\\rho\\sigma}$ has the form $\\Pi^{\\rho\\sigma}(q)=(q^2\\eta^{\\rho\\sigma}-q^{\\rho}q^{\\sigma})\\pi(q^2)$ from which, contracting with $\\eta_{\\rho\\sigma}$, it would follow that $\\pi(q)=(3q^2)^{-1}\\Pi(q)$. However Eq. 11.2.23 of (Weinberg, 2005, p. 480) is consistent with $\\pi$ having the form of Eq.~\\ref{eq:pi_def}). \n\nThen our spectral vacuum polarization is\n\\begin{equation}\n\\pi(q)=\\frac{\\Pi(q)}{q^2}=\\left\\{\\begin{array}{l}\n(q^2)^{-\\frac{3}{2}}\\sigma((q^2)^{\\frac{1}{2}})\\mbox{ if } q^2>4m^2,q^0>0 \\\\\n0\\mbox{ otherwise,}\n\\end{array}\\right.\n\\end{equation}\nfor $q\\in{\\bf R}^4$.\n$\\pi$ is a function on ${\\bf R}^4$ supported on $C_m$ but its value for argument $q$ only depends on $q^2$. 
Therefore, with no fear of confusion, one may define the vacuum polarization function $\\pi:[2m,\\infty)\\rightarrow[0,\\infty)$ by\n\\begin{equation} \\label{eq:polarization_function}\n\\pi(s)=s^{-3}\\sigma(s)=\\frac{2}{\\pi}s^{-3}e^2m^3Z(s)(3+2Z^2(s)),\n\\end{equation}\nwhere\n\\begin{equation} \\label{eq:Z_def_1}\nZ(s)=(\\frac{s^2}{4m^2}-1)^{\\frac{1}{2}}.\n\\end{equation}\n\n\\subsection{Definition of $\\pi(q)$ for $q^2<0$}\n\n\\subsubsection{On the non-positivity of $q^2$ for momentum transfer $q$}\n\nConsider a scattering process involving a particular particle of mass $m$ with $|\\mbox{in}>$ momentum $p$ and $|\\mbox{out}>$ momentum $p^{\\prime}$ with $q=p^{\\prime}-p$ the momentum transferred. Let $p=(p^0,{\\vct p})=(\\omega_m({\\vct p}),{\\vct p}), p^{\\prime}=(p^{\\prime0},{\\vct {p^{\\prime}}})=(\\omega_m({\\vct {p^{\\prime}}}),{\\vct {p^{\\prime}}})$. (So the incoming and outgoing particles are ``on shell\".) Suppose that\n\\begin{equation}\n|{\\vct p}|=\\alpha m, |{\\vct {p^{\\prime}}}|=\\beta m, \\alpha,\\beta\\in[0,\\infty).\n\\end{equation}\nThen\n\\begin{eqnarray}\nq^2 & = & (p^{\\prime}-p)^2 \\nonumber \\\\\n & = & p^2+p^{\\prime2}-2p.p^{\\prime} \\nonumber \\\\\n & = & 2(m^2-\\omega_m({\\vct p})\\omega_m({\\vct {p^{\\prime}}})+{\\vct p}.{\\vct {p^{\\prime}}}) \\nonumber \\\\\n & = & 2(m^2-(m^2+\\alpha^2 m^2)^{\\frac{1}{2}}(m^2+\\beta^2m^2)^{\\frac{1}{2}}+{\\vct p}.{\\vct {p^{\\prime}}}) \\nonumber \\\\\n & = & 2m^2(1+\\alpha\\beta\\cos(\\theta)-(1+\\alpha^2)^{\\frac{1}{2}}(1+\\beta^2)^{\\frac{1}{2}}), \\nonumber\n\\end{eqnarray}\nwhere $\\theta$ is the angle between ${\\vec p}$ and ${\\vec {p^{\\prime}}}$. Hence\n\\begin{equation}\nq^2<2m^2(1+\\alpha\\beta\\cos(\\theta)-\\alpha\\beta)=2m^2(1-\\alpha\\beta(1-\\cos(\\theta)).\n\\end{equation}\n If $\\theta\\neq0$ and $\\beta>0$ then $q^2<0$ for $\\alpha$ sufficiently large and $\\rightarrow-\\infty$ as $\\alpha\\rightarrow\\infty$. A similar statement applies for $\\beta$. Furthermore when considering the non-relativistic approximation, $q^2$ is invariably spacelike.\n\n\\subsubsection{Definition of $\\pi$ in the spacelike (imaginary mass) domain}\n\nWe have found that the vacuum polarization function $q\\mapsto\\pi(q)$ that we have defined is zero when its argument $q$ is such that $q^2<0$. However we will find shortly that we need to consider $\\pi(q)$ for values of $q$ which are such that $q^2<0$ (since the $q$ values are momentum transfer values). In this case we may consider the vacuum polarization function to be defined in the spacelike, or imaginary mass, domain by making the substitution $\\Omega_m*\\Omega_m\\rightarrow\\Omega_{im}*\\Omega_{im}$. Then as in Section 4 we may consider the type III measure with spectrum $\\sigma$ given by Eq.~\\ref{eq:pi_spectrum} and the vacuum polarization function given by\n\\begin{equation}\n\\pi(q)=\\left\\{\\begin{array}{l}\n(-q^2)^{-\\frac{3}{2}}\\sigma((-q^2)^{\\frac{1}{2}})\\mbox{ if } q^2<0,q^0>0 \\\\\n0\\mbox{ otherwise,}\n\\end{array}\\right.\n\\end{equation}\nand the associated function $s\\mapsto\\pi(s)$ given by Eq.~\\ref{eq:polarization_function} where $s>0$ now represents a spacelike label.\n\n\\subsection{Comparison of the spectral vacuum polarization function with the renormalized vacuum polarization function}\n\nRegularization and renormalization are techniques invented by physicists to control the infinities in divergent integrals in quantum field theory to obtain finite answers which can be compared with experiment. 
The answers obtained using these methods are in close agreement with experiment so there is clearly great merit in the approach. However many mathematicians are confused by these methods since they do not seem to make mathematical sense (e.g. introducing infinite ``counterterms\" into Lagrangians to cancel infinities produced when carrying out integrations implied by these Lagrangians or perturbing the dimension D of space-time to $D=4-\\epsilon, \\epsilon>0$ because everything blows up when $D=4$ and then later ignoring or subtracting out terms proportional to $\\epsilon^{-1}$ before taking the limit as $\\epsilon$ tends to $0$ to obtain the answers which are compared with experiment (dimensional regularization\/renormalization)). \n\nThe vacuum polarization function is generally computed in QFT using the dimensional regularization approach with the result\n\\begin{equation}\n\\pi_r(k^2)=-\\frac{2\\alpha}{\\pi}\\int_0^1dz\\,z(1-z)\\log(1-\\frac{k^2z(1-z)}{m^2}),\n\\end{equation}\n(Mandl and Shaw, 1991, p. 229) where $m>0$ is the mass of the electron and (in natural units) $\\alpha=(4\\pi)^{-1}e^2$ is the fine structure constant in which $e>0$ is the magnitude of the electron charge. $\\pi_r$ is defined for all $k\\in{\\bf R}^4$ for which $k^2\\leq4m^2$. The integral can be performed leading to the analytic expression\n\\begin{equation} \\label{eq:pi_Itzikson}\n\\pi_r(k^2)=-\\frac{\\alpha}{3\\pi}\\{\\frac{1}{3}+2(1+\\frac{2m^2}{k^2})[(\\frac{4m^2}{k^2}-1)^{\\frac{1}{2}}\\mbox{arcot}(\\frac{4m^2}{k^2}-1)^{\\frac{1}{2}}-1]\\}.\n\\end{equation} \n(See Appendix 1 for a proof of this.)\n\nThus \n\\begin{equation}\n\\pi_r(k)=-\\frac{\\alpha}{\\pi}\\frac{1}{3}(\\frac{1}{3}+(Y^2+3)(Y\\mbox{arcot}(Y)-1)),\n\\end{equation}\nwhere\n\\begin{equation}\nY=Y(k)=\\left(\\frac{4m^2}{k^2}-1\\right)^{\\frac{1}{2}}.\n\\end{equation}\nThis expression for $\\pi_r$ is defined on $\\{k\\in{\\bf R}^4:00, \\\\\n0 \\mbox{ otherwise}.\n\\end{array}\\right.\n\\end{equation}\nIn configuration space,\n\\begin{eqnarray}\n(\\Delta V)(x) & = & (\\Delta V)(t,{\\vct x}) \\nonumber \\\\\n & = & -(2\\pi)^{-2}\\int\\frac{\\pi(q)}{q^2}e^{iq.(x-x^{\\prime})}\\,dq\\,j({\\vct {x^{\\prime}}})\\,d{\\vct {x^{\\prime}}}\\,dt^{\\prime} \\nonumber \\\\\n & = & -(2\\pi)^{-2}Ze\\int\\frac{\\pi(q)}{q^2}e^{-i{\\vct q}.{\\vct x}}e^{iq^0(t-t^{\\prime})}\\,dt^{\\prime}\\,dq^0\\,d{\\vct q} \\nonumber \\\\\n & = & -(2\\pi)^{-2}Ze\\int_{q^2<0,q^0>0}\\frac{\\pi((-q^2)^{\\frac{1}{2}})}{q^2}e^{-i{\\vct q}.{\\vct x}}e^{iq^0(t-t^{\\prime})}\\,dt^{\\prime}\\,dq^0\\,d{\\vct q} \\nonumber \\\\\n\\end{eqnarray}\nTherefore\n\\begin{equation}\n(\\Delta V)(t,{\\vct x})=-(2\\pi)^{-2}Ze\\int I(t,{\\vct q})e^{-i{\\vct q}.{\\vct x}}\\,d{\\vct q},\n\\end{equation}\nwhere\n\\begin{eqnarray}\nI(t,{\\vct q}) & = & \\int_{(q^0)^2<{\\vct q}^2, q^0>0}\\frac{\\pi((-q^2)^{\\frac{1}{2}})}{q^2}e^{iq^0(t-t^{\\prime})}\\,dt^{\\prime}\\,dq^0 \\nonumber \\\\\n & = & \\int_{(q^0)^2<{\\vct q}^2,q^0>0}\\frac{\\pi((-q^2)^{\\frac{1}{2}})}{q^2}e^{-iq^0t^{\\prime}}\\,dt^{\\prime}\\,dq^0.\n\\end{eqnarray}\nNow\n\\begin{eqnarray}\nI^{*}(t,{\\vct q}) & = & \\int_{(q^0)^2<{\\vct q}^2, q^0>0}\\int\\frac{\\pi((-q^2)^{\\frac{1}{2}})}{q^2}e^{iq^0t^{\\prime}}\\,dt^{\\prime}\\,dq^0 \\nonumber \\\\\n & = & \\int_{(q^0)^2<{\\vct q}^2,q^0<0}\\frac{\\pi((-q^2)^{\\frac{1}{2}})}{q^2}e^{-iq^0t^{\\prime}}\\,dt^{\\prime}\\,dq^0. 
\\nonumber \\\\\n\\end{eqnarray}\nTherefore\n\\begin{equation}\n\\mbox{Re}(I(t,{\\vct q}))=\\frac{1}{2}\\int_{(q^0)^2<{\\vct q}^2}\\frac{\\pi((-q^2)^{\\frac{1}{2}})}{q^2}e^{-iq^0t^{\\prime}}\\,dt^{\\prime}\\,dq^0.\n\\end{equation} \nBut\n\\begin{equation}\nq^0\\mapsto\\frac{\\pi((-q^2)^{\\frac{1}{2}})}{q^2},\n\\end{equation}\nis even. Hence $I(t,{\\vct q})$ is real. Therefore\n\\begin{eqnarray}\n(\\Delta V)(t,{\\vct x}) & = & -(2\\pi)^{-2}Ze\\frac{1}{2}\\int_{q^2<0}\\frac{\\pi((-q^2)^{\\frac{1}{2}})}{q^2}e^{-i{\\vct q}.{\\vct x}}e^{-iq^0t^{\\prime}}\\,dt^{\\prime}\\,dq^0\\,d{\\vct q} \\nonumber \\\\\n & = & -(2\\pi)^{-1}Ze\\frac{1}{2}\\int_{q^2<0}\\frac{\\pi((-q^2)^{\\frac{1}{2}})}{q^2}e^{-i{\\vct q}.{\\vct x}}\\delta(q^0)\\,dq^0\\,d{\\vct q} \\nonumber \\\\\n & = & (2\\pi)^{-1}Ze\\frac{1}{2}\\int\\frac{\\pi(|{\\vct q}|)}{{\\vct q}^2}e^{-i{\\vct q}.{\\vct x}}\\,d{\\vct q} \\nonumber \\\\\n & = & (2\\pi)^{-1}Ze\\frac{1}{2}\\int_{s=0}^{\\infty}\\int_{\\theta=0}^{\\pi}\\frac{\\pi(s)}{s^2}e^{-irs\\cos(\\theta)}s^2(2\\pi)\\sin(\\theta)\\,d\\theta\\,ds \\nonumber \\\\\n & = & Ze\\frac{1}{2}\\int\\pi(s)\\frac{1}{irs}\\left.{e^{irsu}}\\right|_{u=-1}^1\\,ds \\nonumber \\\\\n & = & \\frac{Ze}{r}\\int\\frac{\\pi(s)}{s}\\sin(rs)\\,ds, \\nonumber\n\\end{eqnarray}\nwhere $r=|{\\vct x}|$ (${\\vct x}\\mapsto(\\Delta V)(t,{\\vct x})$ is invariant under orthogonal transformations for all $t\\in{\\bf R}$). Thus\n\\begin{equation}\n(\\Delta V)(x)=(\\Delta V)(t,{\\vct x})=(\\Delta V)({\\vct x})=(\\Delta V)(r),\n\\end{equation}\nwhere\n\\begin{equation}\n(\\Delta V)(r)=\\frac{Ze}{r}\\int\\frac{\\pi(s)}{s}\\sin(rs)\\,ds.\n\\end{equation}\nFor vacuum polarization $\\pi=\\pi_s$ in the spacelike domain determined by the spectral calculus is given by\n\\begin{equation}\n\\pi(s)=\\frac{\\sigma(s)}{s^3},\n\\end{equation}\nwhere $\\sigma$ is the spectrum of $\\pi$ in the timelike domain. Therefore\n\\begin{equation}\n(\\Delta V)(r)=\\frac{Ze}{r}\\int\\frac{\\sigma(s)}{s^4}\\sin(rs)\\,ds.\n\\end{equation}\n \nApplying first order perturbation theory we compute the Uehling contribution to the Lamb shift to be\n\\begin{equation}\n\\Delta E=<\\psi|-e(\\Delta V)|\\psi>=-e\\int\\psi^2({\\vct x})(\\Delta V)({\\vct x})\\,d{\\vct x},\n\\end{equation}\nwhere $\\psi$ is the 2s wave function for the hydrogenic atom. Therefore our prediction for the Uehling contribution to the Lamb shift is\n\\begin{eqnarray}\n\\Delta E & = & -4\\pi Ze^2\\int\\frac{\\sigma(s)}{s^4}r\\psi^2(r)\\sin(rs)\\,ds\\,dr \\nonumber \\\\\n & = & -4\\pi Ze^2\\int\\frac{\\sigma(s)}{s^4}r\\psi^2(r)\\sin(rs)\\,dr\\,ds. \\label{eq:Delta_E}\n\\end{eqnarray}\nThe notation and argument that we have used to compute $\\Delta E$ given the vacuum polarization function $\\pi$ is somewhat formal but is entirely consistent with standard usage in the physics literature. Any issues that may arise in presenting it in a rigorous fashion will be discussed in a separate publication.\n\nWe calculate $\\Delta E$ for the H atom using numerical integration based on Eq.~\\ref{eq:Delta_E} with the result that $\\Delta E\\approx-28.7$ MHz. 
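For orientation we include a stripped-down sketch of such a computation; it is illustrative only and is not the program of Appendix 2. It assumes natural units with the electron mass set equal to $1$, hydrogen (nuclear charge $Z=1$) with Bohr radius $a_0=1/(\alpha m)$, and the standard hydrogenic 2s wave function $\psi_{2s}(r)=(32\pi a_0^3)^{-\frac{1}{2}}(2-r/a_0)e^{-r/(2a_0)}$. The radial integral in Eq.~\ref{eq:Delta_E} is evaluated in closed form using $\int_0^{\infty}r^ne^{-br}\sin(sr)\,dr=\mbox{Im}(n!/(b-is)^{n+1})$, which avoids the oscillatory factor $\sin(rs)$ in the numerical work, and the remaining one dimensional $s$ integral is done by elementary quadrature; the cutoff, step size and MHz conversion factor below are ad hoc.
\begin{verbatim}
#include <cmath>
#include <complex>
#include <cstdio>

// Illustrative sketch (not the Appendix 2 program): crude evaluation of
// Eq. (eq:Delta_E) for the hydrogen atom in natural units, electron mass m = 1.
int main()
{
    const double PI    = std::acos(-1.0);
    const double alpha = 1.0 / 137.035999;   // fine structure constant
    const double e2    = 4.0 * PI * alpha;   // e^2 = 4 pi alpha
    const double m     = 1.0;                // electron mass
    const double Znuc  = 1.0;                // nuclear charge (hydrogen)
    const double a0    = 1.0 / (alpha * m);  // Bohr radius

    // Spectrum sigma(s) of Pi, Eq. (eq:pi_spectrum), supported on s >= 2m.
    auto sigma = [&](double s) {
        double Z = std::sqrt(s * s / (4.0 * m * m) - 1.0);   // Eq. (eq:Z_def_1)
        return (2.0 / PI) * e2 * m * m * m * Z * (3.0 + 2.0 * Z * Z);
    };
    // Closed form of int_0^inf r psi_2s(r)^2 sin(r s) dr, with
    // psi_2s(r)^2 = (32 pi a0^3)^{-1} (2 - r/a0)^2 exp(-r/a0).
    auto radial = [&](double s) {
        std::complex<double> d(1.0 / a0, -s);                 // b - i s, b = 1/a0
        double I1 = std::imag(1.0 / (d * d));                 // n = 1
        double I2 = std::imag(2.0 / (d * d * d));             // n = 2
        double I3 = std::imag(6.0 / (d * d * d * d));         // n = 3
        return (4.0 * I1 - 4.0 * I2 / a0 + I3 / (a0 * a0))
               / (32.0 * PI * a0 * a0 * a0);
    };

    const double sMax = 200.0 * m, ds = 1.0e-3 * m;           // ad hoc cutoff/step
    double dE = 0.0;
    for (double s = 2.0 * m + 0.5 * ds; s < sMax; s += ds)
        dE += sigma(s) / (s * s * s * s) * radial(s) * ds;
    dE *= -4.0 * PI * Znuc * e2;                              // Eq. (eq:Delta_E)

    // m_e c^2 / h is approximately 1.2356e20 Hz = 1.2356e14 MHz.
    std::printf("Delta E = %.3e electron masses  (about %.3e MHz)\n",
                dE, dE * 1.2356e14);
    return 0;
}
\end{verbatim}
The step sizes, cutoffs and conversion factor above are purely indicative.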
The C++ code for the program to carry out this computation can be found in Appendix 2 and a graph of the convergence of the integral can be found in Figure 2.\n\n\\begin{figure} \n\\centering\n\\includegraphics[width=15cm]{convergence.jpg}\n\\caption{Convergence of integral for $\\Delta E$ = Uehling contribution to Lamb shift for H atom} \\label{fig:Fig3}\n\\end{figure}\n\n\\section{The running coupling constant}\n\nThe total equivalent potential for the electron-proton system (H atom) in the Born approximation is \n\\begin{equation}\nV(r)=-\\frac{e^2}{4\\pi r}-\\frac{e^2}{r}\\int\\frac{\\pi(s)}{s}\\sin(rs)\\,ds.\n\\end{equation}\n\nAt range $r$ the potential is equivalent to that produced by an effective charge or running coupling constant $e_r$ given by\n\\begin{eqnarray}\n-\\frac{e_r^2}{4\\pi r} & = & -\\frac{e^2}{4\\pi r}(1+4\\pi\\int\\frac{\\pi(s)}{s}\\sin(rs)\\,ds) \\nonumber \\\\\n & = & -\\frac{e^2}{4\\pi r}(1+4\\pi\\int\\pi(\\frac{s}{r})\\frac{\\sin(s)}{s}\\,ds). \\nonumber \n\\end{eqnarray}\nTherefore the running fine structure ``constant\" at energy $\\mu$ is given by\n\\begin{equation} \\label{eq:running_coupling}\n\\alpha(\\mu)=\\alpha(0)(1+4\\pi\\int\\pi(\\mu s)\\mbox{sinc}(s)\\,ds).\n\\end{equation}\n\n$\\alpha(0)\\approx1\/137$ and $\\alpha$ increases with increasing energy having been measured to have a value of $\\alpha(\\mu)\\approx1\/127$ for $\\mu=90$ GeV. Given this explicit expression for the running coupling it is not neccessary to use the techniques of the renormalization group equation involving a beta function to investigate its behavior. \n\nOur expression for the running coupling only involves the vacuum polarization contribution. Other contributions such as the electron self energy and higher order Feynman diagrams need to be considered in order to determine the complete running coupling behavior.\n\n\\subsection{Determination of the behavior of the running coupling constant in one loop QED when using the renormalized vacuum polarization function $\\pi=\\pi_r$ \\label{section:running_cc_renormalization}}\n\nIn this case we have\n\\begin{equation}\n\\pi(\\mu s)=\\pi_r(\\mu s),\n\\end{equation}\nwhere\n\\begin{equation} \n\\pi_r(s)=-\\frac{\\alpha}{3\\pi}(\\frac{1}{3}+(3-W^2)(W\\mbox{arcoth}(W)-1)),\n\\end{equation}\nand $W$ is given by\n\\begin{equation} \nW=W(s)=(1+\\frac{4m^2}{s^2})^{\\frac{1}{2}}, s\\in(0,\\infty),\n\\end{equation}\n(see Appendix 1). Note that we are using $\\pi_r$ as defined in the imaginary mass domain because we are considering $s$ corresponding to spacelike $q$.\n\n\\begin{theorem}\nThe integral Eq.~\\ref{eq:running_coupling} defing the running coupling constant is divergent at all non zero energies when $\\pi=\\pi_r$.\n\\end{theorem}\n{\\bf Proof}\nLet $\\mu>0$ Now\n\\begin{equation}\nW(\\mu s)=(1+\\frac{4m^2}{\\mu^2 s^2})^{\\frac{1}{2}}, \\forall s>0.\n\\end{equation}\nTherefore\n\\begin{equation}\ns=\\frac{2m}{\\mu(W^2(\\mu s)-1)^{\\frac{1}{2}}}.\n\\end{equation}\nNow as $s\\rightarrow\\infty$, $W(\\mu s)\\rightarrow1^{+}$. We will now show that \n\\begin{equation} \\label{eq:frac_limit1}\n\\frac{\\pi(\\mu s)}{s}\\rightarrow\\infty,\n\\end{equation}\nas $s\\rightarrow\\infty$. Terms in $\\pi$ that have a finite limit as $s\\rightarrow\\infty$ vanish in the limit of Eq.~\\ref{eq:frac_limit1}. Therefore we are interested in the limiting behavior of\n\\[ s\\mapsto\\mbox{atanh}(\\frac{1}{W(\\mu s)})(W^2(\\mu s)-1)^{\\frac{1}{2}}, \\] \nas $s\\rightarrow\\infty$. 
This is the same as the limiting behavior of\n\\[ W\\mapsto\\mbox{atanh}(\\frac{1}{W})(W-1)^{\\frac{1}{2}}(W+1)^{\\frac{1}{2}}, \\]\nas $W\\rightarrow1^{+}$, which is the same as the limiting behavior of\n\\[ x\\mapsto \\mbox{atanh}(\\frac{1}{x+1})x^{\\frac{1}{2}}=\\frac{x^{\\frac{1}{2}}}{f(x)}, \\]\nas $x\\rightarrow0^{+}$, where \n\\begin{equation}\nf(x)=\\frac{1}{\\mbox{atanh}(\\frac{1}{x+1})}.\n\\end{equation}\nNow, $x^{\\frac{1}{2}}\\rightarrow0^{+},f(x)\\rightarrow0^{+}$ as $x\\rightarrow0^{+}$. Therefore, by L'H\\^{o}pital's rule\n\\begin{equation}\n\\lim_{x\\rightarrow0^{+}}\\frac{x^{\\frac{1}{2}}}{f(x)}=\\lim_{x\\rightarrow0^{+}}\\frac{\\frac{1}{2}x^{-\\frac{1}{2}}}{f^{\\prime}(x)},\n\\end{equation}\nif the limit exists. Now\n\\begin{eqnarray}\n\\lim_{x\\rightarrow0^{+}}f^{\\prime}(x) & = & \\lim_{x\\rightarrow0^{+}}[-(\\mbox{atanh}(\\frac{1}{x+1}))^{-2}\\frac{1}{1-(\\frac{1}{x+1})^2}(-(x+1)^{-2})] \\nonumber \\\\\n & = & \\lim_{x\\rightarrow0^{+}}[(\\mbox{atanh}(\\frac{1}{x+1}))^{-2}\\frac{1}{x(x+2)}] \\nonumber \\\\\n & = & \\lim_{x\\rightarrow0^{+}}\\frac{g(x)}{x(x+2)}, \\nonumber \\nonumber \\\\\n\\end{eqnarray}\nwhere\n\\begin{equation}\ng(x)=(\\mbox{atanh}(\\frac{1}{x+1}))^{-2}.\n\\end{equation}\nNow\n\\[ g^{\\prime}(x)=-2(\\mbox{atanh}(\\frac{1}{x+1}))^{-3}(-(x+1)^{-2}). \\]\nTherefore\n\\begin{equation}\n\\lim_{x\\rightarrow0^{+}}f^{\\prime}(x)=\\lim_{x\\rightarrow0^{+}}\\frac{2\\mbox{atanh}(\\frac{1}{x+1}))^{-3}}{2x+2}=0.\n\\end{equation}\n Thus\n\\begin{equation}\n\\lim_{x\\rightarrow0^{+}}\\frac{x^{\\frac{1}{2}}}{f(x)}=\\infty.\n\\end{equation}\nTherefore the integrand of the integral Eq.~\\ref{eq:running_coupling} defining the running coupling constant is oscillatory with ever increasing amplitude and hence the integral divergent for all non zero energies.\n$\\Box$\n\nThis is to be compared with the work of Landau and others relating to the Landau pole or ``ghost\" pole in the solution of the renormalization group equations in QED ``the possible existence of which leads to a serious contradiction with a number of general principles of the theory\" (Bogoliubov and Shirkov, 1980, p. 517). \n\n\\subsection{Determination of the behavior of the running coupling constant in one loop QED when using the spectral vacuum polarization function $\\pi=\\pi_s$}\n\nIn this case we have\n\\begin{equation}\n\\pi(\\mu s)=\\pi_s(\\mu s),\n\\end{equation}\nwhere\n\\begin{equation}\n\\pi_s(s)=\\frac{2}{\\pi}s^{-3}e^2m^3Z(s)(3+2Z^2(s)),\n\\end{equation}\nand\n\\begin{equation} \nZ(s)=(\\frac{s^2}{4m^2}-1)^{\\frac{1}{2}},\n\\end{equation}\nfor $s\\in(0,\\infty)$.\n\nAs $s\\rightarrow\\infty$, $s^{-1}Z(s)\\rightarrow(2m)^{-1}$ and so $\\pi_s(s)\\rightarrow\\frac{e^2}{2\\pi}$. Thus\n\\begin{equation}\n\\pi(\\mu s)\\rightarrow\\frac{e^2}{2\\pi}, \\mbox{ as }s\\rightarrow{\\infty},\n\\end{equation}\nwhich is a finite limit. \n\\begin{theorem}\nThe integral given by Eq.~\\ref{eq:running_coupling} is convergent for all energies $\\mu\\geq0$ when $\\pi=\\pi_s$.\n\\end{theorem}\n{\\bf Proof}\nConsider the case when $\\mu=1$. All other values of $\\mu$ can be dealt with similarly. We want to show that the integral\n\\[ \\int\\mbox{sinc}(s)\\pi_s(s)\\,ds \\]\nis convergent. It is sufficient to show that the integral\n\\begin{equation} \\label{eq:rcc_theorem2_integral}\n\\int_{s=2m}^{\\infty}\\mbox{sinc}(s)\\frac{Z^3(s)}{s^3}\\, ds,\n\\end{equation}\nis convergent. 
Let\n\begin{equation}\nL=\lim_{s\rightarrow\infty}\frac{Z^3(s)}{s^3}=\frac{1}{(2m)^3}.\n\end{equation}\nThen the integral Eq.~\ref{eq:rcc_theorem2_integral} will converge if $\frac{Z^3(s)}{s^3}\rightarrow L$ fast enough. Let\n\begin{equation}\n\epsilon_n=\mbox{sup}\{\left|\frac{Z^3(s)}{s^3}-L\right|:s\geq2\pi (n-1)\}, \mbox{ for }n=1,2,\ldots.\n\end{equation}\nIf we define\n\begin{equation}\nI_n=\sum_{i=1}^n(I_i^{+}-I_i^{-}),\n\end{equation}\nwhere\n\begin{equation}\nI_i^{+}=\int_{2\pi(i-1)}^{2\pi(i-1)+\pi}\frac{\sin(s)}{s}\,ds,\n\end{equation}\nand\n\begin{equation}\nI_i^{-}=-\int_{2\pi(i-1)+\pi}^{2\pi i}\frac{\sin(s)}{s}\,ds,\n\end{equation}\nthen, as is well known,\n\begin{equation}\nI_n\rightarrow\frac{\pi}{2}, \mbox{ as }n\rightarrow\infty.\n\end{equation}\nNow define\n\begin{equation}\nJ_i^{+}=\int_{2\pi(i-1)}^{2\pi(i-1)+\pi}\frac{\sin(s)}{s}\frac{Z^3(s)}{s^3}\,ds,\n\end{equation}\nand\n\begin{equation}\nJ_i^{-}=-\int_{2\pi(i-1)+\pi}^{2\pi i}\frac{\sin(s)}{s}\frac{Z^3(s)}{s^3}\,ds,\n\end{equation}\nand let\n\begin{equation}\nS_n=\sum_{i=1}^n(J_i^{+}-J_i^{-}), n=1,2,\ldots.\n\end{equation}\nWe want to show that $S_n$ converges to a finite limit as $n\rightarrow\infty$. We have\n\begin{eqnarray}\nS_n & \in & (\sum_{i=1}^n((L-\epsilon_i)I_i^{+}-(L+\epsilon_i)I_i^{-}),\sum_{i=1}^n((L+\epsilon_i)I_i^{+}-(L-\epsilon_i)I_i^{-})) \nonumber \\\n & = & (\sum_{i=1}^n L(I_i^{+}-I_i^{-})-\epsilon_i(I_i^{+}+I_i^{-}),\sum_{i=1}^n L(I_i^{+}-I_i^{-})+\epsilon_i(I_i^{+}+I_i^{-})). \nonumber\n\end{eqnarray}\nClearly, if $\epsilon_i\rightarrow0$ fast enough then $S_n$ is convergent. Indeed, we have\n\begin{eqnarray}\n\left|\frac{Z^3(s)}{s^3}-L\right| & = & \left|(\frac{1}{4m^2}-\frac{1}{s^2})^{\frac{3}{2}}-\frac{1}{(2m)^3}\right| \nonumber \\\n & = & \left|\frac{(\frac{1}{4m^2}-\frac{1}{s^2})^{3}-\frac{1}{(2m)^6}}{(\frac{1}{4m^2}-\frac{1}{s^2})^{\frac{3}{2}}+\frac{1}{(2m)^3}}\right| \nonumber \\\n & \leq & (2m)^3\left|(\frac{1}{4m^2}-\frac{1}{s^2})^{3}-\frac{1}{(2m)^6}\right| \nonumber \\\n & = & \frac{(2m)^3}{s^2}\left|\frac{3}{4m^2s^2}-\frac{3}{(2m)^4}-\frac{1}{s^4}\right|. \nonumber \n\end{eqnarray}\nNow if $s$ is sufficiently large then\n\begin{eqnarray}\n|\frac{3}{4m^2s^2}-\frac{3}{(2m)^4}-\frac{1}{s^4}| & = & \frac{3}{(2m)^4}-\frac{3}{4m^2s^2}+\frac{1}{s^4} \nonumber \\\n & < & \frac{3}{(2m)^4}+\frac{1}{s^4} \nonumber \\\n & \leq & \frac{3}{(2m)^4}+\frac{1}{(2m)^4} \nonumber \\\n & = & \frac{1}{4m^4}. \nonumber\n\end{eqnarray} \nTherefore\n\begin{equation}\n\mbox{sup}\{\left|\frac{Z^3(s)}{s^3}-L\right|:s\geq a\}\leq\frac{2}{ma^2}, \mbox{ for $a$ sufficiently large}.\n\end{equation}\nHence\n\begin{equation}\n\epsilon_i\leq\frac{2}{m(2\pi(i-1))^2}, \mbox{ for $i$ sufficiently large}.\n\end{equation}\nThus\n\begin{equation}\n\epsilon_i(I_i^{+}+I_i^{-})\leq\frac{2}{m(2\pi(i-1))^2}\int_{2\pi(i-1)}^{2\pi i}\frac{1}{s}\,ds\leq\frac{2}{m(2\pi(i-1))^2}\frac{1}{2\pi(i-1)}2\pi,\n\end{equation}\nfor $i$ sufficiently large. 
Therefore the sequence $S_n$ is convergent as $n\\rightarrow\\infty$.\n$\\Box$\n\nA graph of the running fine structure constant versus $\\mu^{-1}=\\frac{r}{2\\pi}$ for $r\\in(0,\\frac{1}{10}a_0)$ where $a_0=$ the first Bohr radius of the H atom in natural units is shown in Figure \\ref{fig:Fig3}.\n\n\\begin{figure} \n\\centering\n\\includegraphics[width=15cm]{vp_rcc.jpg}\n\\caption{QED running fine structure constant on the basis of vacuum polarization} \\label{fig:Fig3}\n\\end{figure}\n\n\\section{Conclusion}\n\nWe have presented a spectral calculus for the computation of the spectrum of causal Lorentz invariant Borel complex measures on Minkowski space and shown how this enables one to compute the density for such a measure with respect to Lebesgue measure. This has been applied to the case of the contraction of the vacuum polarization tensor resulting in a spectral vacuum polarization function which has very close agreement with the vacuum polarization function computed using dimensional regularization \/ renormalization in the domain of real mass.\n\nUsing the Born approximation together with the spectral vacuum polarization function the Uehling effect contribution to the Lamb shift for the H atom is computed to be $\\approx-28.7$ MHz. \nWith the spectral vacuum polarization function we obtain a well defined convergent running coupling function whereas the running coupling function generated using dimensional regularization \/ renormalization is shown to be divergent at all non-zero energies. \n\nIn subsequent work we will apply the spectral calculus to the electron self energy and generally to all renormalization issues arising in the QFT of the electroweak force. In addition QCD will be formulated in the context of locally conformally flat space-times (M\\\"{o}bius structures) (Mashford, 2017a and b) and the running coupling constant for QCD will be computed with a view to proving, or deriving, the asymptotic freedom of QCD.\n\n\\section*{Acknowledgements}\nThe author thanks Randolf Pohl and Christopher Chantler for very helpful discussions. \n\n\\section*{References}\n\n\\vskip .1in\\par\\sloppy\\hangindent=1pc\\hangafter=1 \\noindent Bogoliubov, N. N., Logunov, A. A. and Todorov, I. T., {\\em Introduction to Axiomatic Quantum Field Theory}, Benjamin, 1975.\n\n\\vskip .1in\\par\\sloppy\\hangindent=1pc\\hangafter=1 \\noindent Bogoliubov, N. N., and Shirkov, D. V., {\\em Introduction to the theory of quantized fields}, Third edition, Wiley, New York, 1980.\n\n\\vskip .1in\\par\\sloppy\\hangindent=1pc\\hangafter=1 \\noindent Choquet-Bruhat, Y., DeWitte-Morette, C., and Dillard-Bleick, M., {\\em Analysis, manifolds and physics}, North-Holland, Amsterdam, 1982.\n\n\\vskip .1in\\par\\sloppy\\hangindent=1pc\\hangafter=1 \\noindent Colombeau, J. F., {\\em New generalized functions and multiplication of distributions}, North Holland, Amsterdam, 1984.\n\n\\vskip .1in\\par\\sloppy\\hangindent=1pc\\hangafter=1 \\noindent Epstein, H. and Glaser, V., ``Role of locality in perturbation theory\", {\\em Annales de l'Institut Henri Poincar\\'{e} Section A Physique Th\\'{e}orique} 19(3), 211-295, 1973.\n\n\\vskip .1in\\par\\sloppy\\hangindent=1pc\\hangafter=1 \\noindent Halmos, P. R., {\\em Measure theory}, Springer-Verlag, New York, 1988.\n\n\\vskip .1in\\par\\sloppy\\hangindent=1pc\\hangafter=1 \\noindent Itzykson, C. and Zuber, J.-B., {\\em Quantum Field Theory}, McGraw-Hill, New York, 1980.\n\n\\vskip .1in\\par\\sloppy\\hangindent=1pc\\hangafter=1 \\noindent Mandl, F. 
and Shaw, G., {\em Quantum Field Theory}, Wiley, Chichester, 1991.\n\n\vskip .1in\par\sloppy\hangindent=1pc\hangafter=1 \noindent Mashford, J. S., ``A non-manifold theory of space-time\", {\em Journal of Mathematical Physics} 22(9), 1981, 1990-1993.\n\n\vskip .1in\par\sloppy\hangindent=1pc\hangafter=1 \noindent Mashford, J. S., {\em Invariant measures and M\"{o}bius structures: A framework for field theory}, PhD thesis, University of Melbourne, 2005.\n\n\vskip .1in\par\sloppy\hangindent=1pc\hangafter=1 \noindent Mashford, J. S., ``An approach to classical quantum field theory based on the geometry of locally conformally flat space-time\", \n{\em Advances in Mathematical Physics} (2017), Article ID 8070462, 15 pages, https:\/\/doi.org\/10.1155\/2017\/8070462, 2017a. \n\n\vskip .1in\par\sloppy\hangindent=1pc\hangafter=1 \noindent Mashford, J. S., ``Second quantized quantum field theory based on invariance properties of locally conformally flat space-times\", arXiv:1709.09226, 2017b.\n\n\vskip .1in\par\sloppy\hangindent=1pc\hangafter=1 \noindent Oberguggenberger, M., {\em Multiplication of distributions and applications to partial differential equations}, Longman, 1992.\n\n\vskip .1in\par\sloppy\hangindent=1pc\hangafter=1 \noindent Pohl, R., ``Laser Spectroscopy of Muonic Hydrogen and the Puzzling Proton\", {\em Journal of the Physical Society of Japan} 85, 091003, 2016.\n\n\vskip .1in\par\sloppy\hangindent=1pc\hangafter=1 \noindent Scharf, G., {\em Finite Quantum Electrodynamics: The Causal Approach}, Springer-Verlag, Berlin, 1995.\n\n\vskip .1in\par\sloppy\hangindent=1pc\hangafter=1 \noindent Weinberg, S., {\em The quantum theory of fields}, Vol. 1, Cambridge University Press, 2005.\n\n\vskip .1in\par\sloppy\hangindent=1pc\hangafter=1 \noindent Wilson, K., ``The renormalization group and critical phenomena\", {\em Reviews of Modern Physics}, 55(3), 1983.\n\n\section*{Appendix 1: Derivation of closed form solution for regularized\/renormalized vacuum polarization}\n\nThe standard formula for the vacuum polarization function $\pi_r$ as obtained using regularization and renormalization is\n\begin{equation} \label{eq:appendix1_vp1}\n\pi_r(k^2)=-\frac{2\alpha}{\pi}\int_0^1d\beta\,\beta(1-\beta)\log\left(1-\frac{k^2\beta(1-\beta)}{m^2}\right),\n\end{equation}\n(Mandl and Shaw, 1991, p. 229) where $m>0$ is the mass of the electron and (in natural units) $\alpha=(4\pi)^{-1}e^2$ is the fine structure constant in which $e>0$ is the magnitude of the charge of the electron. $\pi_r$ is defined for all $k\in{\bf R}^4$ for which $k^2<4m^2$. This integral can be performed leading to the closed form solution (see Itzikson and Zuber (1980, p. 
323)) \n\begin{equation} \label{eq:pi_Itzikson}\n\pi_r(k^2)=-\frac{\alpha}{3\pi}\left\{\frac{1}{3}+2\left(1+\frac{2m^2}{k^2}\right)\left[\left(\frac{4m^2}{k^2}-1\right)^{\frac{1}{2}}\mbox{arccot}\left(\frac{4m^2}{k^2}-1\right)^{\frac{1}{2}}-1\right]\right\}.\n\end{equation}\nThe function defined by Eq.~\ref{eq:pi_Itzikson} is only defined for $0<k^2<4m^2$. Let $m>0$ and $I:\{k\in{\bf R}^4:k^2<4m^2\}\rightarrow(-\infty,0)$ be defined by\n\begin{equation}\nI(k)=2\int_{\beta=0}^1d\beta\,\beta(1-\beta)\log(1-\frac{k^2\beta(1-\beta)}{m^2}).\n\end{equation}\nThen\n\begin{eqnarray}\nI(k) & = & 2\int_{\beta=0}^1d(\frac{1}{2}\beta^2-\frac{1}{3}\beta^3)\log(1-\frac{k^2\beta(1-\beta)}{m^2}) \nonumber \\\n & = & -2\int_{\beta=0}^1\frac{\frac{1}{2}\beta^2-\frac{1}{3}\beta^3}{1-\beta(1-\beta)m^{-2}k^2}\frac{k^2}{m^2}(2\beta-1)\,d\beta \nonumber \\\n & = & 2\int_{\beta=0}^1\frac{(\frac{1}{3}\beta^3-\frac{1}{2}\beta^2)(2\beta-1)}{m^2(k^2)^{-1}-\beta(1-\beta)}\,d\beta. \nonumber\n\end{eqnarray}\nNow\n\begin{equation}\n\frac{m^2}{k^2}-\beta(1-\beta)=\frac{m^2}{k^2}-\beta+\beta^2=(\beta-\frac{1}{2})^2-\frac{1}{4}+\frac{m^2}{k^2}.\n\end{equation}\nTherefore, changing variables,\n\begin{eqnarray}\nI(k) & = & 2\int_{\beta=-\frac{1}{2}}^{\frac{1}{2}}\frac{(\frac{1}{3}(\beta+\frac{1}{2})^3-\frac{1}{2}(\beta+\frac{1}{2})^2)2\beta}{\beta^2-\frac{1}{4}+m^2(k^2)^{-1}}\,d\beta \label{eq:appendix1_vp2} \\\n & = & 2\int_{\beta=-\frac{1}{2}}^{\frac{1}{2}}\frac{(\frac{1}{3}(\beta+\frac{1}{2})^3-\frac{1}{2}(\beta+\frac{1}{2})^2)2\beta}{\beta^2+X^2}\,d\beta,\n\end{eqnarray}\nwhere\n\begin{equation}\nX=X(k)=\frac{1}{2}(\frac{4m^2}{k^2}-1)^{\frac{1}{2}}\in(0,\infty), \mbox{ for } 0<k^2<4m^2.\n\end{equation}\n\n\section*{Appendix 3: Proof of the spectral theorem}\n\nThe orbits of the proper orthochronous Lorentz group $O(1,3)^{+\uparrow}$ acting on ${\bf R}^4$ include the positive mass hyperboloids $H_m=\{p\in{\bf R}^4:p^2=m^2,p^0>0\}$ with little group isomorphic to $SO(3)$. Then there are the negative mass hyperboloids, the positive open null cone, the negative open null cone and the imaginary mass hyperboloids. The spectral theorem is proved by considering separately each class of orbit. We will prove it for the space $X=\{p\in{\bf R}^4:p^2>0,p^0>0\}$ consisting of the union of all positive mass hyperboloids. The other cases can be proved similarly. We will prove the spectral theorem first for Lorentz invariant Borel measures $\mu:{\mathcal B}(X)\rightarrow[0,\infty]$ and then generalize the theorem later to Lorentz invariant Borel complex measures.\n\nLet Rotations $\subset O(1,3)^{+\uparrow}$ be defined by\n\begin{equation}\n\mbox{Rotations}=\left\{\left(\begin{array}{ll}\n1 & 0 \\\n0 & A\n\end{array}\right):A\in SO(3)\right\},\n\end{equation}\nand Boosts $\subset O(1,3)^{+\uparrow}$ be the set of pure boosts. Then it can be shown that for every $\Lambda\in O(1,3)^{+\uparrow}$ there exist unique $B\in$ Boosts and $R\in$ Rotations such that\n\[ \Lambda = BR. \]\nThus there exist maps $\pi_1:O(1,3)^{+\uparrow}\rightarrow$ Boosts and $\pi_2:O(1,3)^{+\uparrow}\rightarrow$ Rotations such that for all $\Lambda\in O(1,3)^{+\uparrow}$\n\begin{eqnarray}\n& & \Lambda = \pi_1(\Lambda)\pi_2(\Lambda), \nonumber \\\n& & \Lambda = BR \mbox{ with } B\in\mbox{Boosts and }R\in\mbox{ Rotations }\Rightarrow B=\pi_1(\Lambda), R=\pi_2(\Lambda). 
\\nonumber\n\\end{eqnarray}\nFor $m>0$, define $h_m:\\mbox{Boosts}\\rightarrow H_m$ by\n\\begin{equation}\nh_m(B)=B(m,{\\vct 0})^{T}.\n\\end{equation}\nWe will show that $h_m$ is a bijection. Let $p\\in H_m$. Choose $\\Lambda\\in O(1,3)^{+\\uparrow}$ such that $p=\\Lambda(m,{\\vct 0})^{T}$. Then $p=\\pi_1(\\Lambda)\\pi_2(\\Lambda)(m,{\\vct 0})^{T}=\\pi_1(\\Lambda)(m,{\\vct 0})^{T}\\in h(\\mbox{Boosts})$. Therefore $h_m$ is surjective. Now suppose that $h(B_1)=h(B_2)$. Then $B_1(m,{\\vct 0})^{T}=B_2(m,{\\vct 0})^{T}$. Thus $B_2^{-1}B_1(m,{\\vct 0})^{T}=(m,{\\vct 0})^{T}$. Hence $B_2^{-1}B_1=R$ for some $R\\in$ Rotations. Therefore $B_1=\\pi_1(B_1)=\\pi_1(B_2R)=\\pi_1(B_2)=B_2$. Therefore $h_m$ is a bijection.\n\nNow there is an action $\\rho_m:O(1,3)^{+\\uparrow}\\times H_m\\rightarrow H_m$ of $O(1,3)^{+\\uparrow}$ on $H_m$ defined by\n\\begin{equation}\n\\rho_m(\\Lambda,p)=\\Lambda p.\n\\end{equation}\n$\\rho_m$ induces an action ${\\tld \\rho}_m:O(1,3)^{+\\uparrow}\\times$Boosts$\\rightarrow$Boosts according to\n\\begin{eqnarray}\n{\\tld \\rho}_m(\\Lambda,B) & = & h_m^{-1}(\\rho_m(\\Lambda,h_m(B))) \\nonumber \\\\\n & = & h_m^{-1}(\\Lambda B(m,{\\vct 0})^{T}) \\nonumber \\\\\n & = & h_m^{-1}(\\pi_1(\\Lambda B)\\pi_2(\\Lambda B)(m,{\\vct 0})^{T}) \\nonumber \\\\\n & = & h_m^{-1}(\\pi_1(\\Lambda B)(m,{\\vct 0})^{T}) \\nonumber \\\\\n & = & \\pi_1(\\Lambda B).\n\\end{eqnarray}\nNote that the induced action is independent of $m$ for all $m>0$.\n\n Let\n\\begin{equation}\nX=\\bigcup_{m>0}H_m=\\{p\\in{\\bf R}^4:p^2>0,p^0>0\\}.\n\\end{equation}\nDefine the action $\\rho:O(1,3)^{+\\uparrow}\\times X\\rightarrow X$ by $\\rho(\\Lambda,p)=\\Lambda p$. Then $\\rho$ induces an action ${\\tld\\rho}:O(1,3)^{+\\uparrow}\\times(0,\\infty)\\times\\mbox{Boosts}\\rightarrow(0,\\infty)\\times\\mbox{Boosts}$ according to\n\\begin{equation} \\label{eq:action_Boosts}\n{\\tld\\rho}(\\Lambda,m,B)={\\tld\\rho}_m(B)=\\pi_1(\\Lambda B).\n\\end{equation}\nDefime, for each $m>0$, $f_m:O(1,3)^{+\\uparrow}\\rightarrow H_m\\times\\mbox{Rotations}$ by\n\\begin{equation}\nf_m(\\Lambda)=(h_m(\\pi_1(\\Lambda)),\\pi_2(\\Lambda)).\n\\end{equation}\nThen each $f_m$ is a bijection.\nThe map $g:(0,\\infty)\\times\\mbox{Boosts}\\rightarrow X$ defined by\n\\begin{equation}\ng(m,B)=h_m(B)=B(m,{\\vct 0})^{T},\n\\end{equation}\nis a bijection. Define $f:(0,\\infty)\\times O(1,3)^{+\\uparrow}\\rightarrow X\\times\\mbox{Rotations}$ by\n\\begin{equation}\nf(m,\\Lambda)=f_m(\\Lambda).\n\\end{equation}\n$f$ is a bijection so we can push forward or pull back measures using $f$ at will. \n\nSuppose that $\\mu:{\\mathcal B}(X)\\rightarrow[0,\\infty]$ is a Borel measure on $X$ (by Borel measure we mean a measure defined on ${\\mathcal B}(X)$ which is finite on compact sets) and that $\\mu$ is invariant under the action . Let $\\mu_R$ be the measure on Rotations induced by Haar measure on $SO(3)$. Let $\\nu$ be the product measure $\\nu=\\mu\\times\\mu_R$ whose existence and uniqueness is guaranteed by the Hahn-Kolmogorov theorem and the fact that both $X$ and Rotations are $\\sigma$-finite. Let $\\nu\\#f^{-1}$ denote the pull back of $\\nu$ by $f$ (i.e. the push forward of $\\nu$ by $f^{-1}$). 
Then\n\\begin{equation}\n(\\nu\\#f^{-1})(\\Gamma)=\\nu(f(\\Gamma)), \\forall\\Gamma\\in{\\mathcal B}((0,\\infty)\\times O(1,3)^{+\\uparrow}).\n\\end{equation}\nConsider the action $\\tau:O(1,3)^{+\\uparrow}\\times(0,\\infty)\\times O(1,3)^{+\\uparrow}\\rightarrow(0,\\infty)\\times O(1,3)^{+\\uparrow}$ defined by\n\\begin{equation}\n\\tau(\\Lambda,m^{\\prime},\\Lambda^{\\prime})=(m^{\\prime},\\Lambda\\Lambda^{\\prime}).\n\\end{equation}\n$\\tau$ induces an action ${\\tld \\tau}:O(1,3)^{+\\uparrow}\\times X\\times\\mbox{Rotations}\\rightarrow X\\times\\mbox{Rotations}$ so that if $p^{\\prime}\\in X$ with $p^{\\prime}=B^{\\prime}(m^{\\prime},{\\vct 0})^{T}$ with $m^{\\prime}\\in(0,\\infty), B^{\\prime}\\in\\mbox{Boosts}$ and $R^{\\prime}\\in\\mbox{Rotations}$ then\n\\begin{eqnarray}\n{\\tld \\tau}(\\Lambda,(p^{\\prime},R^{\\prime})) & = & {\\tld \\tau}(\\Lambda,B^{\\prime}(m^{\\prime},{\\vct 0})^{T},R^{\\prime}) \\nonumber \\\\\n & = & f_{m^{\\prime}}(\\Lambda f_{m^{\\prime}}^{-1}(h_{m^{\\prime}}(B^{\\prime}(m^{\\prime},{\\vct 0})^{T},R^{\\prime})) \\nonumber \\\\\n & = & f_{m^{\\prime}}(\\Lambda f_{m^{\\prime}}^{-1}(h_{m^{\\prime}}(\\pi_1(\\Lambda^{\\prime})),\\pi_2(\\Lambda^{\\prime})) \\nonumber \\\\\n & = & f_{m_{\\prime}}(\\Lambda\\Lambda^{\\prime}) \\nonumber \\\\\n & = & (h_{m^{\\prime}}(\\pi_1(\\Lambda\\Lambda^{\\prime})),\\pi_2(\\Lambda\\Lambda^{\\prime})), \\nonumber\n\\end{eqnarray}\nwhere $\\Lambda^{\\prime}=B^{\\prime}R^{\\prime}$. We will now show that the measure $\\nu$ is an invariant measure on $X\\times\\mbox{Rotations}$ with respect to the action ${\\tld \\tau}$. To this effect let $E_1^{\\prime}\\subset\\mbox{Boosts}, E_2^{\\prime}\\subset\\{(m^{\\prime},{\\vct 0}):m^{\\prime}\\in(0,\\infty)\\}$ and $E_3^{\\prime}\\subset\\mbox{Rotations}$ be Borel sets. Then\n\\begin{eqnarray}\n\\nu({\\tld\\tau}(\\Lambda,E_1^{\\prime}E_2^{\\prime},E_3^{\\prime})) & = & \\nu(\\pi_1(\\Lambda E_1^{\\prime})E_2^{\\prime}\\times\\pi_2(\\Lambda E_3^{\\prime})) \\nonumber \\\\\n & = & \\mu(\\pi_1(\\Lambda E_1^{\\prime})E_2^{\\prime})\\mu_R(\\pi_2(\\Lambda E_3^{\\prime})) \\nonumber \\\\\n & = & \\mu(\\pi_1(\\Lambda E_1^{\\prime}))\\mu_R(\\pi_2(\\Lambda)\\pi_2(E_3^{\\prime})) \\nonumber \\\\\n & = & \\mu(E_1^{\\prime}E_2^{\\prime})\\mu_R(E_3^{\\prime}) \\nonumber \\\\\n & = & \\nu(E_1^{\\prime}E_2^{\\prime},E_3^{\\prime}), \\nonumber\n\\end{eqnarray}\n (here we have used the notation of juxtaposition of sets to denote the set of all products i.e. $ S_1S_2=\\{xy:x\\in S_1, y\\in S_2\\}$, also $xS=\\{xy:y\\in S\\}$). Therefore the measure $\\nu\\#f^{-1}$ is an invariant measure on $O(1,3)^{+\\uparrow}$ with respect to the action $\\tau$. Therefore for each Borel set $E\\subset(0,\\infty)$ the measure $(\\nu\\#f^{-1})_E:{\\mathcal B}(O(1,3)^{+\\uparrow})\\rightarrow[0,\\infty]$ defined by\n\\begin{equation}\n(\\nu\\#f^{-1})_E(\\Gamma)=(\\nu\\#f^{-1})(E,\\Gamma),\n\\end{equation}\nis a translation invariant measure on the group $O(1,3)^{+\\uparrow}$. Therefore since, $O(1,3)^{+\\uparrow}$ is a locally compact second countable topological group there exists, by the uniqueness part of Haar's theorem, a unique $c=c(E)\\ge0$ such that \n\\begin{equation}\n(\\nu\\#f^{-1})_E=c(E)\\mu_{O(1,3)^{+\\uparrow}},\n\\end{equation} \nwhere $\\mu_{{O(1,3}^{+\\uparrow}}$ is the Haar measure on $O(1,3)^{+\\uparrow}.$ Denote by $\\sigma$ the map $\\sigma:{\\mathcal B}((0,\\infty))\\rightarrow[0,\\infty]$ defined by $\\sigma(E)=c(E)$.\n\nWe will now show that $\\sigma$ is a measure on $(0,\\infty)$. 
We have that for any $\\Gamma\\in{\\mathcal B}(O(1,3)^{+\\uparrow}),E\\in{\\mathcal B}((0,\\infty))$\n\\begin{equation}\n\\sigma(E)\\mu_{O(1,3)^{+\\uparrow}}(\\Gamma)=(\\nu\\#f^{-1})_E(\\Gamma)=\\nu(f(E,\\Gamma))=\\nu(\\pi_1(\\Gamma)E\\times\\pi_2(\\Gamma)).\n\\end{equation}\nChoose $\\Gamma\\in{\\mathcal B}(O(1,3)^{+\\uparrow})$ such that $\\mu_{O(1,3)^{+\\uparrow}}(\\Gamma)\\in(0,\\infty)$.\n\nThen \n\\begin{equation}\n\\sigma(\\emptyset)\\mu_{O(1,3)^{+\\uparrow}}(\\Gamma)=\\nu(\\pi_1(\\Gamma)\\emptyset\\times\\pi_2(\\Gamma))=\\nu(\\emptyset)=0.\n\\end{equation}\nTherefore \n\\begin{equation}\n\\sigma(\\emptyset)=0.\n\\end{equation}\nAlso let $\\{E_n\\}_{n=1}^{\\infty}\\subset{\\mathcal B}((0,\\infty))$. Then\n\\begin{eqnarray}\n\\sigma(\\bigcup_{n=1}^{\\infty}(E_n) & = & \\mu_{O(1,3)^{+\\uparrow}}(\\Gamma)^{-1}\\nu(\\pi_1(\\Gamma)\\bigcup_{n=1}^{\\infty}E_n\\times\\pi_2(\\Gamma)) \\nonumber \\\\\n & = & \\mu_{O(1,3)^{+\\uparrow}}(\\Gamma)^{-1}\\nu(\\bigcup_{n=1}^{\\infty}(\\pi_1(\\Gamma)E_n\\times\\pi_2(\\Gamma))) \\nonumber \\\\\n & = & \\sum_{n=1}^{\\infty}\\mu_{O(1,3)^{+\\uparrow}}(\\Gamma)^{-1}\\nu((\\pi_1(\\Gamma)E_n\\times\\pi_2(\\Gamma))) \\nonumber \\\\\n & = & \\sum_{n=1}^{\\infty}\\sigma(E_n).\n\\end{eqnarray}\nThus $\\sigma$ is a measure. \n\nThe above argument holds for all invariaant measures $\\mu:{\\mathcal B}(X)\\rightarrow[0,\\infty]$. Therefore, in particular, it is true for $\\Omega_m$ for $m\\in(0,\\infty)$. Hence there exists a measure $\\sigma_{\\Omega_m}:{\\mathcal B}((0,\\infty))\\rightarrow[0,\\infty]$ such that\n\\begin{equation}\n((\\Omega_m\\times\\mu_{SO(3)})\\#f^{-1})(E,\\Gamma)=\\sigma_{\\Omega_m}(E)\\mu_{O(1,3)^{+\\uparrow}}(\\Gamma),\n\\end{equation}\nfor $E\\in{\\mathcal B}((0,\\infty)), \\Gamma\\in{\\mathcal B}(O(1,3)^{+\\uparrow}$. But\n\\begin{eqnarray}\n((\\Omega_m\\times\\mu_{SO(3)})\\#f^{-1})(E,\\Gamma) & = & (\\Omega_m\\times\\mu_{SO(3)})(f(E,\\Gamma)) \\nonumber \\\\\n & = & (\\Omega_m\\times\\mu_{SO(3)})(\\pi_1(\\Gamma)(E\\times\\{{\\vct 0}\\}),\\pi_2(\\Gamma)) \\nonumber \\\\\n & = & \\Omega_m(\\pi_1(\\Gamma)(E\\times\\{{\\vct 0}\\}))\\mu_{SO(3)}(\\pi_2(\\Gamma)) \\nonumber \\\\\n & = & \\Omega_m(\\pi_1(\\Gamma)(m,{\\vct 0})^{T})\\mu_{SO(3)}(\\pi_2(\\Gamma))\\delta_E(m) \\nonumber \\\\\n\\end{eqnarray}\nwhere $\\delta_m$ is the Dirac measure concentrated on $m$. Thus\n\\begin{equation}\n\\sigma_{\\Omega_m}(E)=\\mu_{O(1,3)^{+\\uparrow}}(\\Gamma)^{-1}\\Omega_m(\\pi_1(\\Gamma)(m,{\\vct 0})^{T})\\mu_{SO(3)}(\\pi_2(\\Gamma))\\delta_m(E),\n\\end{equation}\nfor any $E\\in{\\mathcal B}((0,\\infty)),\\Gamma\\in{\\mathcal B}(O(1,3)^{+\\uparrow})$ such that $\\mu_{O(1,3)^{+\\uparrow}}(\\Gamma)\\in(0,\\infty).$ Choose any $\\Gamma\\in{\\mathcal B}(O(1,3)^{+\\uparrow})$ such that $\\mu_{O(1,3)^{+\\uparrow}}(\\Gamma)\\in(0,\\infty)$ and define $\\sigma_{\\Omega}:(0,\\infty)\\rightarrow(0,\\infty)$ by\n\\begin{equation}\n\\sigma_{\\Omega}(m)=\\mu_{O(1,3)^{+\\uparrow}}(\\Gamma)^{-1}\\Omega_m(\\pi_1(\\Gamma)(m,{\\vct 0})^{T})\\mu_{SO(3)}(\\pi_2(\\Gamma)).\n\\end{equation}\nThen\n\\begin{equation}\n\\sigma_{\\Omega_m}=\\sigma_{\\Omega}(m)\\delta_m, \\forall m\\in(0,\\infty).\n\\end{equation}\n\nReturning now to the general invariant measure $\\mu:{\\mathcal B}(X)\\rightarrow[0,\\infty]$ we will now show that $\\mu$ can be written as a product $\\mu=\\sigma\\times\\mu_B$ for some measure $\\mu_B:{\\mathcal B}(\\mbox{Boosts})\\rightarrow[0,\\infty]$ relative to the identification $g:(0,\\infty)\\times\\mbox{Boosts}\\rightarrow X$. 
We have\n\begin{equation}\n\nu(\Gamma\times F)=\mu(\Gamma)\mu_{SO(3)}(F), \forall\Gamma\in{\mathcal B}(X),F\in{\mathcal B}(SO(3)).\n\end{equation}\nTherefore\n\begin{equation}\n\mu(\Gamma)=\mu_{SO(3)}(F)^{-1}\nu(\Gamma\times F),\forall\Gamma\in{\mathcal B}(X),F\in{\mathcal B}(SO(3)),\mbox{ such that }\mu_{SO(3)}(F)>0.\n\end{equation}\nChoose $F\in{\mathcal B}(SO(3)),\mbox{ such that }\mu_{SO(3)}(F)>0$. Then for all $\Gamma\in{\mathcal B}(X)$\n\begin{eqnarray}\n\mu(\Gamma) & = & \mu_{SO(3)}(F)^{-1}\nu(\Gamma\times F) \nonumber \\\n & = & \mu_{SO(3)}(F)^{-1}\nu(f(f^{-1}(\Gamma\times F))) \nonumber \\\n & = & \mu_{SO(3)}(F)^{-1}\nu(f(f^{-1}(B(E\times\{{\vct 0}\})\times F))) \nonumber \\\n & = & \mu_{SO(3)}(F)^{-1}\nu(f((E\times\{{\vct 0}\})\times BF)) \nonumber \\\n & = & \mu_{SO(3)}(F)^{-1}\sigma(E)\mu_{O(1,3)^{+\uparrow}}(BF) \nonumber \\\n & = & \sigma(E)\mu_B(B), \nonumber\n\end{eqnarray} \nwhere $\Gamma=g(E\times B)=B(E\times\{{\vct 0}\})$ and\n\begin{equation}\n\mu_B(B)=\mu_{SO(3)}(F)^{-1}\mu_{O(1,3)^{+\uparrow}}(BF).\n\end{equation}\nIt is straightforward to show that $\mu_B$ is a well defined Borel measure.\n\nTherefore for any measurable function $\psi:X\rightarrow[0,\infty]$\n\begin{eqnarray}\n<\mu,\psi> & = & \int\psi(p)\,\mu(dp)\nonumber \\\n & = & \int\psi(g(m,B))\,\mu_B(dB)\,\sigma(dm) \nonumber \\\n & = & \int<M_m,\psi>\,\sigma(dm), \label{eq:spectral_rep}\n\end{eqnarray}\nwhere\n\begin{equation}\n<M_m,\psi>=\int\psi(g(m,B))\mu_B(dB).\n\end{equation}\nIt is straightforward to show that for all $m\in(0,\infty)$ $M_m$ defines a Borel measure on $X$ with supp$(M_m)=H_m$. Therefore by the above argument, there exists $c=c(m)\in(0,\infty)$ such that $M_m=c_m\Omega_m$. This fact, together with the spectral representation Eq.~\ref{eq:spectral_rep}, establishes (rescaling $\sigma$) that there exists a Borel measure $\sigma:{\mathcal B}((0,\infty))\rightarrow[0,\infty]$ such that\n\begin{equation} \label{eq:spectral_decomposition1}\n\mu(\Gamma)=\int_{m=0}^{\infty}\Omega_m(\Gamma)\,\sigma(dm),\n\end{equation}\nas desired. \n\nNow suppose that $\mu:{\mathcal B}(X)\rightarrow{\bf R}$ is a Borel signed measure which is Lorentz invariant. Then by the Jordan decomposition theorem $\mu$ has a decomposition $\mu=\mu^{+}-\mu^{-}$ where $\mu^{+},\mu^{-}:{\mathcal B}({\bf R}^4)\rightarrow[0,\infty]$ are measures. $\mu^{+}$ and $\mu^{-}$ must be Borel (finite on compact sets). In fact if $P,N\in{\mathcal B}(X)$ is a Hahn decomposition of $X$ with respect to $\mu$ then\n\begin{equation}\n\mu^{+}(\Gamma)=\mu(\Gamma\cap P), \mu^{-}(\Gamma)=\mu(\Gamma\cap N), \forall\Gamma\in{\mathcal B}(X).\n\end{equation}\nNow let $\Lambda\in O(1,3)^{+\uparrow}$. Then, since $(\Lambda P)\cup(\Lambda N)=\Lambda(P\cup N)=X$, $(\Lambda P)\cap (\Lambda N)=\Lambda(P\cap N)=\emptyset$, $\mu((\Lambda P)\cap\Gamma)=\mu((\Lambda P)\cap(\Lambda\Lambda^{-1}\Gamma))=\mu(P\cap(\Lambda^{-1}\Gamma))\ge0$ and, similarly, $\mu((\Lambda N)\cap\Gamma)\le0$, the sets $\Lambda P$ and $\Lambda N$ form a Hahn decomposition of $\mu$. Therefore \n\begin{equation}\n\mu^{+}(\Lambda\Gamma)=\mu(\Lambda P\cap(\Lambda\Gamma))=\mu(\Lambda(P\cap\Gamma))=\mu(P\cap\Gamma)=\mu^{+}(\Gamma).\n\end{equation}\nHence $\mu^{+}$ is a Lorentz invariant Borel measure. 
Therefore it has a spectral decomposition of the form of Eq.~\\ref{eq:spectral_decomposition1}. Sinilarly $\\mu^{-}$ is a Lorentz invariant Borel measure and so it has a spectral decomposition of the form of Eq.~\\ref{eq:spectral_decomposition1}. Thus $\\mu$ has a spectral decomposition of the form of Eq.~\\ref{eq:spectral_decomposition1} where $\\sigma:{\\mathcal B}((0,\\infty)\\rightarrow{\\bf R}$ is a Borel signed measure. \n\nFinally suppose that $\\mu:{\\mathcal B}(X)\\rightarrow{\\bf C}$ is a Lorentz invariant Borel complex measure. Define Re$(\\mu):{\\mathcal B}(X)\\rightarrow{\\bf R}$ and Im$(\\mu):{\\mathcal B}(X)\\rightarrow{\\bf R}$ by\n\\begin{equation}\n(\\mbox{Re}(\\mu))(\\Gamma)=\\mbox{Re}(\\mu(\\Gamma)), (\\mbox{Im}(\\mu))(\\Gamma)=\\mbox{Im}(\\mu(\\Gamma)), \\forall\\Gamma\\in{\\mathcal B}(X).\n\\end{equation}\nThen for all $\\Lambda\\in O(1,3)^{+\\uparrow}$\n\\begin{equation}\n(\\mbox{Re}(\\mu))(\\Lambda\\Gamma)=\\mbox{Re}(\\mu(\\Lambda\\Gamma))=\\mbox{Re}(\\mu(\\Gamma))=(\\mbox{Re}(\\mu))(\\Gamma).\n\\end{equation}\nThus Re$(\\mu)$ is a Lorentz invariant Borel signed measure and so has a representation of the form of Eq.~\\ref{eq:spectral_decomposition1} for some Borel signed measure $\\sigma$. Similarly Im$(\\mu)$ has such a representation. Therefore $\\mu$ has a representation of this form for some Borel complex spectral measure $\\sigma:{\\mathcal B}((0,\\infty))\\rightarrow{\\bf C}$. This completes the proof of the spectral theorem.\n\n\\section*{Appendix 4: Dirac spinors}\n\n\\subsection*{Construction of the Dirac spinors}\n\nDirac spinors are usually obtained by seeking solutions to the Dirac equation of the form\n\\begin{eqnarray}\n\\psi^{+}(x) & = & e^{-ik.x}u(k), \\mbox{ positive energy} \\nonumber \\\\\n\\psi^{-}(x) & = & e^{ik.x}v(k), \\,\\,\\,\\mbox{ negative energy},\n\\end{eqnarray}\n(Itzikson and Zuber, 1980, p. 55). \n\nThus, in general, we are seeking solutions to the Dirac equation of the form\n\\begin{equation} \\label{eq:Appendix4_1}\n\\psi(x)=e^{-ip.x}u,\n\\end{equation}\nfor some $p\\in{\\bf R}^4,u\\in{\\bf C}^4$. If $u=0$ the the Dirac equation is trivially satisfied, so assume that $u\\neq0$. Now if $\\psi$ is of this form then\n\\begin{equation}\n(i{\\slas\\partial}-m)\\psi=0\\Leftrightarrow({\\slas p}-m)u=0.\n\\end{equation}\nIf this is the case then\n\\begin{equation}\n0=({\\slas p}+m)({\\slas p}-m)u=(p^2-m^2)u.\n\\end{equation}\nTherefore we must have that $p^2=m^2$, i.e. that $p\\in H_{\\pm m}$.\nThus we are seeking $p\\in H_{\\pm m}, u\\in{\\bf C}^4\\backslash\\{0\\}$ such that $({\\slas p}-m)u=0$, i.e. $u\\in\\mbox{Ker}({\\slas p}-m)$. \n\nLet $p\\in H_{\\pm m}$. Choose $\\Lambda\\in O(1,3)^{+\\uparrow},\\kappa\\in K$ such that \n\\begin{equation}\n\\Lambda p=(\\pm m,{\\vct 0})^{T}, \\Lambda=\\Lambda(\\kappa),\n\\end{equation}\n(see (Mashford, 2017a)). Then\n\\begin{eqnarray}\n\\mbox{Ker}({\\slas p}-m) & = & \\kappa^{-1}\\mbox{Ker}(\\kappa({\\slas p}-m)\\kappa^{-1}) \\nonumber \\\\\n & = & \\kappa^{-1}\\mbox{Ker}(\\Sigma(\\Lambda p)-m) \\nonumber \\\\\n & = & \\kappa^{-1}\\mbox{Ker}(\\Sigma((\\pm m,{\\vct 0})^{T})-m), \\nonumber\n\\end{eqnarray}\nwhere $\\Sigma$ denotes the map $p\\mapsto{\\slas p}$. 
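\n\nBefore specializing to a concrete representation, it may help to check the kernel numerically. The following small program is a sketch of our own (it is not part of the code of Appendix 2); it assumes the standard Dirac representation of the gamma matrices introduced in the next subsection and an arbitrarily chosen sample momentum, and it verifies that ${\slas p}-m$ has rank 2 for $p\in H_m$, in agreement with the two-dimensional kernel found below.\n\begin{verbatim}\n\/\/ Numerical check (our illustration): rank of pslash - m*I for p in H_m.\n\/\/ Dirac representation; metric signature (+,-,-,-); natural units.\n#include <complex>\n#include <cmath>\n#include <cstdio>\n#include <utility>\n\ntypedef std::complex<double> C;\n\nint main() {\n  const double m = 1.0;                        \/\/ electron mass\n  const double px = 0.3, py = -0.2, pz = 0.7;  \/\/ sample spatial momentum\n  const double E = std::sqrt(m*m + px*px + py*py + pz*pz);  \/\/ p^0 so that p lies on H_m\n  const C i(0.0, 1.0);\n\n  \/\/ pslash - m = gamma^0 p^0 - gamma^k p^k - m, written out explicitly\n  \/\/ in the Dirac representation.\n  C M[4][4] = {\n    { E - m,      0.0,        -pz,          -(px - i*py) },\n    { 0.0,        E - m,      -(px + i*py),  pz          },\n    { pz,         px - i*py,  -E - m,        0.0         },\n    { px + i*py, -pz,          0.0,         -E - m       }\n  };\n\n  \/\/ Gaussian elimination with partial pivoting; count the non-negligible pivots.\n  int rank = 0;\n  for (int col = 0; col < 4 && rank < 4; ++col) {\n    int piv = rank;\n    for (int r = rank + 1; r < 4; ++r)\n      if (std::abs(M[r][col]) > std::abs(M[piv][col])) piv = r;\n    if (std::abs(M[piv][col]) < 1e-12) continue;   \/\/ no pivot in this column\n    for (int c = 0; c < 4; ++c) std::swap(M[piv][c], M[rank][c]);\n    for (int r = rank + 1; r < 4; ++r) {\n      C factor = M[r][col] \/ M[rank][col];\n      for (int c = col; c < 4; ++c) M[r][c] -= factor * M[rank][c];\n    }\n    ++rank;\n  }\n\n  std::printf(\"rank(pslash - m) = %d, dim Ker = %d (expected 2)\\n\", rank, 4 - rank);\n  return 0;\n}\n\end{verbatim}\n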
\n\nWe will use the Dirac representation for the gamma matrices in which \n\\begin{equation}\n\\gamma^0=\\left(\\begin{array}{cc}\n1 & 0 \\\\\n0 & -1\n\\end{array}\\right).\n\\end{equation}\nWith respect to the metric $g=\\gamma^0$ the vectors $\\{e_{\\alpha}\\}_{\\alpha=0}^3$ form an orthonormal basis where\n\\begin{equation}\n(e_{\\alpha})_{\\beta}=\\delta_{\\alpha\\beta},\n\\end{equation}\ni.e.\n\\begin{equation}\n\\overline{e}_{\\alpha}e_{\\beta}=e_{\\alpha}^{\\dagger}\\gamma^0e_{\\beta}=\\gamma^0_{\\alpha\\beta},\\forall\\alpha,\\beta\\in\\{0,1,2,3\\}.\n\\end{equation}\nNow\n\\begin{equation}\n\\Sigma((\\pm m,{\\vct 0})^{T})-m=\\left(\\begin{array}{cc}\n\\pm m-m & 0 \\\\\n0 & \\mp m-m\n\\end{array}\\right),\n\\end{equation}\nTherefore, if $u=(u_1,u_2)^{T}$ then\n\\begin{equation}\nu\\in\\mbox{Ker}(\\Sigma((\\pm m,{\\vct 0})^{T})-m) \\Leftrightarrow \\left(\\begin{array}{cc}\n\\pm m-m & 0 \\\\\n0 & \\mp m-m\n\\end{array}\\right)\\left(\\begin{array}{l}\nu_1 \\\\\nu_2\n\\end{array}\\right)=0 \\nonumber\n\\end{equation}\nIn the positive energy case, i.e. when $p\\in H_m$ this is equivalent to \n\\begin{equation}\nu_1=\\mbox{ arbitrary}, u_2=0.\n\\end{equation}\nHence Dim$(\\mbox{Ker}({\\slas p}-m))=2$. In other words fermions have 2 polarization states. A basis for $\\mbox{Ker}({\\slas p}-m)$ is \n\\begin{equation}\nu_0=\\kappa^{-1}e_0, u_1=\\kappa^{-1}e_1,\n\\end{equation}\nand we may describe $u_0,u_1$ as being Dirac spinors associated with $p\\in H_m$ ($u_0,u_1$ are not unique because the choice of $\\kappa$ is not unique).\n\nSimilarly, in the negative energy case, i.e. when $p\\in H_{-m}$ a basis for $\\mbox{Ker}({\\slas p}-m)$ is \n\\begin{equation}\nv_0=\\kappa^{-1}e_2, v_1=\\kappa^{-1}e_3.\n\\end{equation}\n\nNow let $v\\in{\\bf C}^4$. Then clearly $({\\slas p}+m)v\\in\\mbox{Ker}({\\slas p}-m)$. Therefore the space $<({\\slas p}+m)e_{\\alpha},\\alpha=0,1,2,3>$ is a subspace of Ker$({\\slas p}-m)$. We will show that in fact it is equal to Ker$({\\slas p}-m)$. We have\n\\begin{eqnarray}\n({\\slas p}+m) & = & \\kappa^{-1}\\kappa({\\slas p}+m)\\kappa^{-1}\\kappa \\nonumber \\\\\n & = & \\kappa^{-1}(\\Sigma((\\pm m,{\\vct 0})^{T}+m)\\kappa \\nonumber \\\\\n & = & \\kappa^{-1}\\left(\\begin{array}{ll}\n\\pm m+m & 0 \\\\\n0 & \\mp m+m\n\\end{array}\\right)\\kappa \\nonumber \\\\\n & = & \\kappa^{-1}\\left(\\begin{array}{cccc}\n\\pm m+m & 0 & 0 & 0 \\\\\n0 & \\pm m+m & 0 & 0 \\\\\n0 & 0 & \\mp m+m & 0 \\\\\n0 & 0 & 0 & \\mp m+m\n\\end{array}\\right)\\kappa. \\nonumber\n\\end{eqnarray}\nThus, in the positive energy case,\n\\begin{equation}\n({\\slas p}+m)=2m\\kappa^{-1}(e_0,e_1,0,0)\\kappa,\n\\end{equation}\nand in the negative energy case\n\\begin{equation}\n({\\slas p}+m)=2m\\kappa^{-1}(0,0,e_2,e_3)\\kappa.\n\\end{equation}\nTherefore\n\\begin{equation}\n\\frac{{\\slas p}+m}{2m}=(u_0,u_1,0,0)\\kappa,\n\\end{equation}\n(positive energy) and\n\\begin{equation}\n\\frac{{\\slas p}+m}{2m}=(0,0,v_0,v_1)\\kappa,\n\\end{equation}\n(negative energy).\n\nLet $w_{\\alpha}=\\kappa^{-1}e_{\\alpha}, \\alpha=0,1,2,3$. $\\{w_{\\alpha}\\}_{\\alpha=0}^3$ forms an orthonormal basis for ${\\bf C}^4$ with respect to the metric $g=\\gamma^0$. 
Then\n\\begin{equation}\n\\frac{{\\slas p}+m}{2m}w_{\\alpha}=u_{\\alpha},\\mbox{ for }\\alpha=0,1,\n\\end{equation}\n(positive energy) and\n\\begin{equation}\n\\frac{{\\slas p}+m}{2m}w_{\\alpha+2}=v_{\\alpha},\\mbox{ for }\\alpha=0,1,\n\\end{equation}\n(negative energy).\n\nSince $\\{(2m)^{-1}({\\slas p}+m)w_{\\alpha},\\alpha=0,1\\}$ is a basis for Ker$({\\slas p}-m)$ it follows that $\\{(2m)^{-1}({\\slas p}+m)e_{\\alpha},\\alpha=0,1,2,3\\}$ spans Ker$({\\slas p}-m)$.\n\nIt is straightforward to show that the Dirac spinors that we have constructed satisfy the usual normalization properties (Itzikson and Zuber, 1980, p. 696).\n\n\\subsection*{Dirac bilinears in the non-relativistic approximation}\n\nIn the non-relativistic approximation we have\n\\begin{equation}\n\\frac{p^0}{m}\\approx1,\\frac{p^j}{m}\\approx0,\\mbox{ for } j=1,2,3.\n\\end{equation}\nTherefore\n\\begin{equation}\n\\kappa\\approx I.\n\\end{equation}\nTherefore we can take, in the positive energy case,\n\\begin{equation}\n(u_0,u_1,0,0)=\\frac{{\\slas p}+m}{m}\\kappa^{-1}=\\Sigma((1,{\\vct 0})^{T})+1=\\left(\\begin{array}{cccc}\n1 & 0 & 0 & 0 \\\\\n0 & 1 & 0 & 0 \\\\\n0 & 0 & 0 & 0 \\\\\n0 & 0 & 0 & 0 \n\\end{array}\\right).\n\\end{equation}\nThus \n\\begin{equation}\nu_0(p)=e_0,u_1(p)=e_1,\\forall p\\in H_m.\n\\end{equation}\nTherefore\n\\begin{equation}\n\\overline{u}_{\\alpha}(p^{\\prime})\\gamma^0u_{\\beta}(p)=u^{\\dagger}_{\\alpha}(p^{\\prime})\\gamma^0\\gamma^0u_{\\beta}(p)=e_{\\alpha}^{\\dagger}e_{\\beta}=\\delta_{\\alpha\\beta},\\forall\\alpha,\\beta\\in\\{0,1\\},p,p^{\\prime}\\in H_m.\n\\end{equation}\nAlso\n\\begin{eqnarray}\n\\overline{u}_{\\alpha}(p^{\\prime})a_{j}\\gamma^{j}u_{\\beta}(p) & = & u^{\\dagger}_{\\alpha}(p^{\\prime})\\gamma^{0}a_{j}\\gamma^{j}u_{\\beta}(p) \\nonumber \\\\\n & = & e_{\\alpha}^{\\dagger}\\left(\\begin{array}{cccc}\n1 & 0 & 0 & 0 \\\\\n0 & 1 & 0 & 0 \\\\\n0 & 0 & -1 & 0 \\\\\n0 & 0 & 0 & -1\n\\end{array}\\right)\\left(\\begin{array}{cccc}\n0 & 0 & a_3 & a_1-ia_2 \\\\\n0 & 0 & a_1+ia_2 & -a_3 \\\\\n-a_3 & -a_1+ia_2 & 0 & 0 \\\\\n-a_1-ia_2 & a_3 & 0 & 0\n\\end{array}\\right)e_{\\beta} \\nonumber \\\\\n & = 0, \\nonumber\n\\end{eqnarray}\nfor all $\\alpha,\\beta\\in\\{0,1\\},a\\in{\\bf R}^4,p,p^{\\prime}\\in H_m$. 
Therefore\n\\begin{equation}\n\\overline{u}_{\\alpha}(p^{\\prime})\\gamma^{j}u_{\\beta}(p)=0,\\forall\\alpha,\\beta\\in\\{0,1\\},j\\in\\{1,2,3\\},p,p^{\\prime}\\in H_m.\n\\end{equation}\n\n\\section*{Appendix 6: Rigorous justification of Argument 1}\n\nWe want to show that if $g(a,b,\\epsilon)$ is defined by $g(a,b,\\epsilon)=\\mu(\\Gamma(a,b,\\epsilon))$ then the following formal argument\n\\begin{eqnarray}\ng(a,b,\\epsilon) & = & \\mu(\\Gamma(a,b,\\epsilon)) \\label{eq:Argument1} \\\\\n & = & \\int\\chi_{\\Gamma(a,b,\\epsilon)}(p+q)\\,\\Omega_m(dp)\\,\\Omega_m(dq) \\nonumber \\\\\n & \\approx & \\int\\chi_{(a,b)\\times B_{\\epsilon}(0)}(p+q)\\,\\Omega_m(dp)\\,\\Omega_m(dq) \\nonumber \\\\\n & = & \\int\\chi_{(a,b)}(\\omega_m({\\vct p})+\\omega_m({\\vct q}))\\chi_{B_{\\epsilon}(0)}({\\vct p}+{\\vct q})\\,\\frac{d{\\vct p}}{\\omega_m({\\vct p})}\\frac{d{\\vct q}}{\\omega_m({\\vct q})} \\nonumber \\\\\n & = & \\int\\chi_{(a,b)}(\\omega_m({\\vct p})+\\omega_m({\\vct q}))\\chi_{B_{\\epsilon}(0)-{\\vct q}}({\\vct p})\\,\\frac{d{\\vct p}}{\\omega_m({\\vct p})}\\frac{d{\\vct q}}{\\omega_m({\\vct q})} \\nonumber \\\\\n & \\approx & \\int\\chi_{(a,b)}(2\\omega_m({\\vct q}))\\frac{\\frac{4}{3}\\pi\\epsilon^3}{\\omega_m({\\vct q})^2}\\,d{\\vct q}, \\nonumber\n\\end{eqnarray}\nis justified in the sense that\n\\begin{equation}\n\\lim_{\\epsilon\\rightarrow0}\\epsilon^{-3}g(a,b,\\epsilon)=\\frac{4}{3}\\pi\\int\\chi_{(a,b)}(2\\omega_m({\\vct q}))\\frac{1}{\\omega_m({\\vct q})^2}\\,d{\\vct q}.\n\\end{equation}\nThere are 2 $\\approx$ signs that we have to consider. The first is between lines 2 and 3 and arises because we are approximating the hyperbolic cylinder between $a$ and $b$ with an ordinary cylinder of radius $\\epsilon$. We will show that the error is of order greater than $\\epsilon^3$. Let $\\Gamma$ be the aforementioned hyperbolic cylinder. Then\n\\begin{equation}\n\\Gamma=\\bigcup_{m\\in(a,b)}S(m,\\epsilon).\n\\end{equation}\nNow \n\\begin{equation}\n\\Gamma=\\Gamma^{\\prime}\\sim\\Gamma^{\\prime-}\\cup\\Gamma^{\\prime+},\n\\end{equation}\nwhere\n\\begin{eqnarray}\n\\Gamma^{\\prime} & = & \\bigcup_{m\\in(a,b)}\\{m\\}\\times B_{\\epsilon}({\\vct 0}) \\nonumber \\\\\n\\Gamma^{\\prime-} & = & \\bigcup_{m\\in(a,a^{+})}(\\{m\\}\\times B_{\\epsilon}({\\vct 0})\\sim S(m,\\epsilon))\\subset\\bigcup_{m\\in(a,a^{+})}(\\{m\\}\\times B_{\\epsilon}({\\vct 0})) \\nonumber \\\\\n\\Gamma^{\\prime+} & = & \\bigcup_{m\\in(b^{-},b)}(\\{m\\}\\times B_{\\epsilon}({\\vct 0})\\sim S(m,\\epsilon))\\subset\\bigcup_{m\\in(b^{-},b)}(\\{m\\}\\times B_{\\epsilon}({\\vct 0})), \\nonumber\n\\end{eqnarray}\nin which\n\\begin{equation}\na^{+}=(a^2+\\epsilon^2)^{\\frac{1}{2}}, b^{-}=(b^2-\\epsilon^2)^{\\frac{1}{2}}, \\epsilon 40000$. Comparing to the Ref. \\cite{GBD-L}, we can see a different expression of $\\gamma$ in the metric-GBD theory, where $\\gamma$ depends on the parameters: $\\omega$, $F_{0}$ and $m_{s}$. (3) Given that investigating the polarization of gravitational waves (GWs) can serve to discriminate the different gravitational theories, we also study the polarization modes of GWs in the Palatini-GBD theory. The Newman-Penrose (NP) method \\cite{gw-polar4,gw-polar5} and the geodesic deviation (GD) method \\cite{gw-polar6} are used to explore the polarization modes of GWs in the GBD theory. It is observed that the polarization modes of GWs in the Palatini-GBD theory (the three polarization types) are different from that in the metric-GBD theory (the four polarization types). 
The results in the GBD theory also show that the extra scalar field $F$ and the BD scalar field generate new polarizations (the scalar polarization modes) of GWs which are not present in the standard GR or the Palatini-$f(R)$ theory.\n\n\n The structure of our paper is as follows. In Section II, we derive the basic equations in the Palatini-GBD theory. In Section III, the linearized field equations are obtained by using the weak-field approximation method. Section IV investigates the polarization modes of gravitational waves in the Palatini-GBD theory. Section V discusses the parametrized post-Newtonian (PPN) parameter. Section VI presents the conclusions.\n\n\n\n\n\section{$\text{Field equations in the Palatini-formalism of GBD theory}$}\n\n\nThis section is devoted to deriving the basic equations in the Palatini-GBD theory. The action of the GBD theory in the Palatini formalism reads\n\begin{equation}\nS=S_g(g_{\mu \nu },\widetilde{\Gamma}^{\lambda}_{\mu\nu},\phi)+S_{\phi}(g_{\mu \nu },\phi)+S_m(g_{\mu \nu },\psi )=\frac{1}{2}\int d^4x{\cal L}_{T},\label{action}\n\end{equation}\nwith the total Lagrangian\n\begin{equation}\n{\cal L}_T=\sqrt{-g}[\phi f(\tilde{R})- \frac{\omega}{\phi}\partial _\mu \phi \partial ^\mu \phi+\frac{16\pi }{c^4}{\cal L}_m].\label{lagrange}\n\end{equation}\nHere the metric $g_{\mu \nu }$ and the connection $\widetilde{\Gamma}^{\lambda}_{\mu\nu}$ are considered as the independent dynamical variables, $g$ denotes the determinant of $g_{\mu\nu}$, and ${\cal L}_m$ denotes the matter Lagrangian associated with the matter field $\psi$ and $g_{\mu\nu}$. $f(\tilde{R})$ is an arbitrary function of the Ricci scalar $\tilde{R}=g^{\mu\nu}\tilde{R}_{\mu\nu}$, and the Ricci tensor $\tilde{R}_{\mu\nu}$ is defined by the independent Palatini connection $\widetilde{\Gamma}^{\lambda}_{\mu\nu}$ as\n\begin{equation}\n\tilde{R}_{\mu\nu}=\tilde{R}^{\alpha}_{\mu\alpha\nu}=\partial_{\lambda}\widetilde{\Gamma}^{\lambda}_{\mu\nu}-\partial_{\mu}\widetilde{\Gamma}^{\lambda}_{\lambda\nu}\n+\widetilde{\Gamma}^{\lambda}_{\mu\nu}\widetilde{\Gamma}^{\rho}_{\rho\lambda}-\widetilde{\Gamma}^{\lambda}_{\nu\rho}\widetilde{\Gamma}^{\rho}_{\mu\lambda}\n\label{Ricci-Tensor-Pala}.\n\end{equation}\n Using the variational principle, we can derive the evolution equations of the dynamical fields in the Palatini-formalism of GBD theory. Varying the action (\ref{action}) with respect to $g_{\mu\nu}$ and $\phi$, we obtain the following two field equations\n\begin{eqnarray}\n\phi \left[ F(\tilde{R})\tilde{R}_{\mu \nu }-\frac{1}{2}f(\tilde{R})g_{\mu \nu }\right]+ \frac{1}{2}\frac{\omega}{\phi}g_{\mu\nu}\partial_\sigma\phi\partial^\sigma\phi\n-\frac{\omega}{\phi}\partial_\mu\phi\partial_\nu\phi = 8\pi T_{\mu \nu },\label{gravitational-eq-Pala}\n\end{eqnarray}\n\begin{equation}\nf(\tilde{R})+2\omega\frac{\Box \phi}{\phi} -\frac{\omega}{\phi^{2}}\partial _\mu \phi \partial ^\mu \phi=0,\label{BD-scalar-eq}\n\end{equation}\nwhere $F(\tilde{R})\equiv\partial f(\tilde{R})\/\partial \tilde{R}$, $\Box \equiv \nabla ^\mu \nabla _\mu $ and $T_{\mu \nu }=\frac{-2}{\sqrt{-g}}\frac{\delta S_m}{\delta g^{\mu \nu }}$ is the energy-momentum tensor of matter. 
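To spell out the intermediate step leading to the trace equation quoted next (it uses only Eq.~(\ref{gravitational-eq-Pala}) and $g^{\mu\nu}g_{\mu\nu}=4$), contracting Eq.~(\ref{gravitational-eq-Pala}) with $g^{\mu\nu}$ gives\n\begin{equation}\n\phi\left[F(\tilde{R})\tilde{R}-2f(\tilde{R})\right]+\frac{\omega}{\phi}\partial_{\sigma}\phi\partial^{\sigma}\phi=8\pi T,\n\end{equation}\nwhere $T=g^{\mu\nu}T_{\mu\nu}$; dividing through by $\phi$ then yields the following trace equation.\n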
The trace of Eq.(\ref{gravitational-eq-Pala}) is\n\begin{eqnarray}\nF(\tilde{R})\tilde{R}-2f(\tilde{R})+\frac{\omega}{\phi^{2}}\partial_\mu\phi\partial^\mu\phi = \frac{8\pi T}{\phi}.\label{trace-grav-Pala}\n\end{eqnarray}\nVarying the action with respect to $\widetilde{\Gamma}^{\lambda}_{\mu\nu}$ gives\n\begin{eqnarray}\n\widetilde{\nabla}_{\lambda}(\sqrt{-g}\phi F(\tilde{R})g^{\mu\nu})=0,\label{connection-eq}\n\end{eqnarray}\nwhere $\widetilde{\nabla}$ is the covariant derivative with respect to the Palatini connection. Eq.(\ref{connection-eq}) implies that the connection can be represented as the Christoffel symbol associated with the metric $h_{\mu\nu}$ defined by $h_{\mu\nu}=\phi F(\tilde{R})g_{\mu\nu}$. Then we have the following relation\n\begin{equation}\n\widetilde{\Gamma}^{\lambda}_{\mu\nu}=\Gamma^{\lambda}_{\mu\nu}+\frac{1}{2\phi F}[-g_{\mu\nu}\partial^{\lambda}(\phi F)+\delta^{\lambda}_{\nu}\partial_{\mu}(\phi F)+\delta^{\lambda}_{\mu}\partial_{\nu}(\phi F)],\label{connection-relation}\n\end{equation}\nwhere $\Gamma^{\lambda}_{\mu\nu}$ is the Levi-Civita connection associated with the metric $g_{\mu\nu}$. Thus, by using Eq.(\ref{Ricci-Tensor-Pala}), the Ricci tensor and the Ricci scalar in the Palatini formalism are rewritten as\n\begin{equation}\n\tilde{R}_{\mu\nu}=R_{(g)\mu\nu}+\frac{3}{2(\phi F)^{2}}\nabla_{\mu}(\phi F)\nabla_{\nu}(\phi F)-\frac{1}{\phi F}\nabla_{\mu}\nabla_{\nu}(\phi F)-\frac{1}{2\phi F}g_{\mu\nu}\Box(\phi F),\label{Ricci-Tensor-metric}\n\end{equation}\n\begin{equation}\n\tilde{R}=R_{(g)}+\frac{3}{2(\phi F)^{2}}\nabla^{\sigma}(\phi F)\nabla_{\sigma}(\phi F)-\frac{3}{\phi F}\Box(\phi F),\label{Ricci-scalar-metric}\n\end{equation}\nwhere $R_{(g)\mu\nu}$ and $R_{(g)}$ denote the Ricci tensor and the Ricci scalar defined in the metric formalism, and all covariant derivatives are taken with respect to the metric $g_{\mu\nu}$. Combining the above equations, the modified Einstein equation is derived as\n\begin{equation}\nG_{\mu\nu}=R_{(g)\mu\nu}-\frac{1}{2}R_{(g)}g_{\mu\nu}=\frac{8\pi T_{\mu\nu}}{\phi F}+8\pi T_{\mu\nu}^{eff}\label{Einstain-Tensor-Pala}\n\end{equation}\nwith\n\begin{equation}\n\begin{split}\n8\pi T_{\mu\nu}^{eff}=-\frac{\omega}{2\phi^2F}g_{\mu\nu}\partial_{\sigma}\phi\partial^{\sigma}\phi+\frac{f}{2F}g_{\mu\nu}+\frac{\omega}{\phi^{2} F}\partial_{\mu}\phi\partial_{\nu}\phi-\frac{3}{2(\phi F)^{2}}\nabla_{\mu}(\phi F)\nabla_{\nu}(\phi F) \\\n+\frac{1}{\phi F}\nabla_{\mu}\nabla_{\nu}(\phi F)-\frac{1}{2}g_{\mu\nu}\tilde{R}+\frac{3}{4(\phi F)^{2}}g_{\mu\nu}\nabla^{\sigma}(\phi F)\nabla_{\sigma}(\phi F)-\frac{1}{\phi F}g_{\mu\nu}\Box(\phi F).\label{re-gravitation-eq}\n\end{split}\n\end{equation}\nWhen $f(\tilde{R})$ is linear in $\tilde{R}$, Eq.(\ref{Einstain-Tensor-Pala}) is identical to the field equation in the metric formalism of the GBD theory \cite{GBD}. 
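As a simple illustration of the definition $F(\tilde{R})\equiv\partial f(\tilde{R})\/\partial\tilde{R}$ and of the conformal relation above (the quadratic model here is chosen by us purely for illustration and is not a model advocated in this paper), a choice such as $f(\tilde{R})=\tilde{R}+\alpha\tilde{R}^{2}$ gives\n\begin{equation}\nF(\tilde{R})=1+2\alpha\tilde{R},\qquad h_{\mu\nu}=\phi\,(1+2\alpha\tilde{R})\,g_{\mu\nu},\n\end{equation}\nwhile the linear case $f(\tilde{R})=\tilde{R}$ corresponds to $F=1$ and $h_{\mu\nu}=\phi g_{\mu\nu}$.\n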
The trace of the gravitational field equation and the BD field equation take the forms:\n\begin{equation}\n\Box (\phi F)=\frac{8\pi T}{3}-\frac{\omega}{3\phi}\partial_{\mu}\phi\partial^{\mu}\phi+\frac{2\phi f(\tilde{R})}{3}-\frac{2\phi F}{3}\tilde{R}+\frac{1}{2\phi F}\nabla_{\mu}(\phi F)\nabla^{\mu}(\phi F)+\frac{\phi F}{3}R_{(g)},\label{trace-gravity-eq-re}\n\end{equation}\n\begin{equation}\n\Box\phi-\frac{\partial_{\mu}\phi\partial^{\mu}\phi}{4\phi}=\frac{1}{4\omega}[8\pi T-\phi F R_{(g)}-\frac{3}{2\phi F}\nabla_{\mu}(\phi F)\nabla^{\mu}(\phi F)+3\Box (\phi F)].\label{re-BD-equation}\n\end{equation}\n We can see that Eqs. (\ref{trace-gravity-eq-re}) and (\ref{re-BD-equation}) describe the dynamics of the two scalar fields $\phi$ and $F$ in the Palatini-GBD theory, which is different from the situation in the Palatini-$f(R)$ theory, where the scalar field carries no dynamics of its own \cite{fr-review2}.\n\nIn this section, we derived the field equations of the GBD theory by using the non-standard Palatini approach, where the connection was treated as an independent dynamical variable. The gravitational field equations in this theory were obtained by performing variations of the action with respect to the metric and the connection, respectively. Variation with respect to the metric gave a new field equation containing $F(\tilde{R})$, and variation with respect to the connection gave the Riemannian (Levi-Civita) connection associated with the metric $h_{\mu\nu}$ via an appropriate conformal transformation. Based on the above field equations, we derive the linearized equations in the Palatini-GBD theory in the following.\n\n\n\n\n\section{$\text{ Linearized field equations in the Palatini-GBD theory}$}\n\n A modified gravitational theory should have the correct weak-field limit at the Newtonian and the post-Newtonian levels. In this section, we derive the linearized field equations in the Palatini-GBD theory via the weak-field approximation method. Then, in the following two sections, we solve the linearized field equations for two cases: the vacuum case and the static point-mass case, respectively.\n\n\n As a beginning, we consider the weak-field approximation of the GBD theory in the Palatini formalism via\n\begin{eqnarray}\ng_{\mu\nu}=\eta_{\mu\nu}+b_{\mu\nu},~~~~~~\phi=\phi_{0}+\varphi,~~~~~~F=F_{0}+\delta F,\label{weak-conditions}\n\end{eqnarray}\nwhere $\eta_{\mu\nu}$ denotes the Minkowski metric, $\phi$ and $F$ are two scalar fields, and the following three relations are required: $|b_{\mu\nu}|\ll 1$, $|\varphi|\ll \phi_{0}$ and $|\delta F|\ll F_{0}$. 
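To make the linearization explicit (a short worked step based only on Eq.~(\ref{weak-conditions})), the combination $\phi F$ appearing throughout Section II expands to first order as\n\begin{equation}\n\phi F\simeq\phi_{0}F_{0}\left(1+\frac{\varphi}{\phi_{0}}+\frac{\delta F}{F_{0}}\right),\qquad \frac{1}{\phi F}\nabla_{\mu}\nabla_{\nu}(\phi F)\simeq\partial_{\mu}\partial_{\nu}\left(\frac{\varphi}{\phi_{0}}+\frac{\delta F}{F_{0}}\right),\n\end{equation}\nwhile terms quadratic in $\nabla(\phi F)$ or $\partial\varphi$ are of second order and drop out; this is the origin of the $\partial_{\mu}\partial_{\nu}(\delta F\/F_{0})$ and $\partial_{\mu}\partial_{\nu}(\varphi\/\phi_{0})$ terms in the linearized equations that follow.\n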
Using Eqs.(\ref{Einstain-Tensor-Pala}-\ref{weak-conditions}), the linearized field equations in the Palatini-GBD theory are derived as\n \begin{eqnarray}\n\bar{R}_{(g)\mu\nu}-\frac{\bar{R}_{(g)}}{2}\eta_{\mu\nu}=\n\partial_{\mu}\partial_{\nu}\frac{\delta F}{F_{0}}+\partial_{\mu}\partial_{\nu}\frac{\varphi}{\phi_{0}}\n-\eta_{\mu\nu}\bar{\Box}_{p}\frac{\delta F}{F_{0}}-\eta_{\mu\nu}\bar{\Box}_{p}\frac{\varphi}{\phi_{0}}+\frac{8\pi T_{\mu\nu}}{\phi_{0}F_{0}},\label{eq-weak-gravity}\n\end{eqnarray}\n \begin{eqnarray}\n\bar{\Box}_{p}\varphi=\frac{3}{4\omega-3F_{0}}[8\pi T-\phi_{0} F_{0}\bar{R}_{(g)}+3\phi_{0}\bar{\Box}_{p}\delta F],\label{eq-weak-phi}\n\end{eqnarray}\n \begin{eqnarray}\n\bar{\Box}_{p}\frac{\delta F}{F_{0}}=\frac{8\pi T}{3\phi_{0}F_{0}}-\frac{\bar{\Box}_{p}\varphi}{\phi_{0}}+\frac{\bar{R}_{(g)}}{3},\label{eq-weak-Phi}\n\end{eqnarray}\nwhere $\bar{\Box}_{p}=\partial^{\mu}\partial_{\mu}$. $\bar{R}_{(g)\mu\nu}$ and $\bar{R}_{(g)}$ denote the linearized quantities, and they can be rewritten as\n\begin{eqnarray}\n\bar{R}_{(g)\mu\nu}=\frac{1}{2}(-2\partial_{\mu}\partial_{\nu} b_{f}+2\partial_{\mu}\partial_{\nu}\frac{\varphi}{\phi_{0}}-\bar{\Box}_{p}\theta_{\mu\nu}+\frac{\eta_{\mu\nu}}{2}\bar{\Box}_{p}\theta-\eta_{\mu\nu}\bar{\Box}_{p} b_{f}+\eta_{\mu\nu}\bar{\Box}_{p}\frac{\varphi}{\phi_{0}}),\label{linear-Rmunu}\n\end{eqnarray}\n\begin{eqnarray}\n\bar{R}_{(g)}=-3\bar{\Box}_{p} b_{f}+3\bar{\Box}_{p}\frac{\varphi}{\phi_{0}}+\frac{\bar{\Box}_{p}\theta}{2},\label{linear-R}\n\end{eqnarray}\nby introducing a new tensor $\theta_{\mu\nu}=b_{\mu\nu}-\frac{1}{2}\eta_{\mu\nu}b-\eta_{\mu\nu}\frac{\varphi}{\phi_{0}}+\eta_{\mu\nu}b_{f}$, where $b_{f}\equiv \frac{\delta F}{F_{0}}$, $b=\eta^{\mu\nu}b_{\mu\nu}$ and $\theta=\eta^{\mu\nu}\theta_{\mu\nu}$.\nAfter choosing the so-called Lorenz (harmonic) gauge $\partial^{\nu}\theta_{\mu\nu}=0$ and using Eqs. (\ref{eq-weak-gravity}-\ref{linear-R}), we get the linearized gravitational field equation and the linearized scalar-field equations in the Palatini-GBD theory as follows\n\begin{eqnarray}\n\bar{\Box}_{p}\theta_{\mu\nu}=-\frac{16\pi T_{\mu\nu}}{\phi_{0}F_{0}},\label{eq-box-theta}\n\end{eqnarray}\n\begin{eqnarray}\n\bar{\Box}_{p}\varphi=\frac{4\pi T}{\omega},\label{eq-box-varphi}\n\end{eqnarray}\n\begin{eqnarray}\n\bar{\Box}_{p} b_{f}=0,\label{eq-box-Phi}\n\end{eqnarray}\nwith $T=\eta^{\mu\nu}T_{\mu\nu}$. Comparing the linearized field equation (\ref{eq-box-Phi}) in the Palatini-GBD theory with that in the metric-GBD theory: $\bar{\Box}_{p} b_{f}-m_{s}^{2}b_{f}=\frac{16\pi\omega T}{3\phi_{0}F_{0}(2\omega+3F_{0})}$, we can read off some different properties of the scalar field $b_{f}$. In the Palatini-formalism of GBD theory, the scalar field $b_{f}$ is massless, while in the metric-formalism GBD the scalar field is massive. We can also see that the scalar field $b_{f}$ is source-free in the Palatini-formalism GBD, which is different from the result in the metric-formalism GBD.\n\n\n\section{$\text{Gravitational-wave polarization in the Palatini-GBD theory}$}\n\n Gravitational-wave (GW) physics is an important tool for probing viable gravitational theories. Studying the polarization modes of GWs is also useful for extracting valuable information about the early universe \cite{GW-polar}. 
How many additional polarization modes are detected in GWs experiments could instruct us to study which theories of gravity. Given that more accurate observational data on GWs will be received in the future \\cite{GW-observations-future}, it is worthwhile to investigate GWs physics in alternative theories of gravity, especially in the Palatini-formalism of modified gravity. The weak-field approximation method provides a natural way to study the GWs. And in some references, the authors have applied this method to discuss the polarization of GWs in different theories \\cite{GW-other1,GW-other2,GW-other3,GW-other4,GW-other5,GW-other6,GW-other7,GW-other8}.\n\n Considering GWs which propagate along the $z$-direction, we have $k^{\\alpha}=\\varpi(1,0,0,1)$ with the angular frequency $\\varpi$. And let us consider an observer detecting the gravitational radiation described by a unit timelike vector: $u^{\\alpha}=(1,0,0,0)$. In the vacuum, we solve the wave Eqs. (\\ref{eq-box-theta}-\\ref{eq-box-Phi}) in the Palatini-formalism GBD to get\n\\begin{equation}\n\\theta_{\\mu\\nu}=A_{\\mu\\nu}(\\vec{p})\\exp(i k_{\\alpha}x^{\\alpha}),\\label{solution-theta}\n\\end{equation}\n\\begin{equation}\n\\varphi=c(\\vec{p})\\exp(i p_{\\alpha}x^{\\alpha}),\\label{solution-varphi}\n\\end{equation}\n\\begin{equation}\nb_{f}=d(\\vec{p})\\exp(i q_{\\alpha}x^{\\alpha}).\\label{solution-hf}\n\\end{equation}\nWhere $k_{\\alpha}$ denotes the four-wavevector, and it is a null vector with $\\eta_{\\mu\\nu}k^{\\mu}k^{\\nu}=0$. Eq.(\\ref{solution-theta}) denotes the plane-wave solution of gravitational radiation, while Eqs. (\\ref{solution-varphi}) and (\\ref{solution-hf}) denote the plane-wave solutions for the massless BD-field perturbation $\\varphi$ and the massless geometry-field perturbation $b_{f}$, respectively.\n\n\n\n\n\n\n In theory, several methods have been developed to analyze the polarization of GWs \\cite{gw-polar4,gw-polar5,gw-polar6,gw-polar1,gw-polar2,gw-polar3,GW-polar1a,GW-polar2a,GW-polar3a,GW-polar4a}, such as the Newman-Penrose (NP) method \\cite{gw-polar4,gw-polar5}, the geodesic deviation (GD) method \\cite{gw-polar1,gw-polar6}, etc. In the following, we investigate the polarization modes of GWs in the Palatini-GBD theory by using these two methods. In a local proper reference frame, the equation of geodesic deviation can be described as\n \\begin{equation}\n \\ddot{x}^{i}=-R^{i}_{~0k0}x^{k},\\label{eq-GD}\n \\end{equation}\n here $i$ and $k$ can be taken as $\\{1,2,3\\}$, respectively. $R^{i}_{~0k0}$ denotes the so-called \"electric\" components of the Riemann tensor with its expression as follows \\cite{GW-polar3a}\n \\begin{equation}\n R^{(1)}_{i0j0}=(h_{i0,0j}+h_{0j,i0}-h_{ij,00}-h_{00,ij}),\\label{linear-Riemann}\n \\end{equation}\n where $h_{\\mu\\nu}$ denotes the linear perturbation. Using Eqs. (\\ref{eq-GD}) and (\\ref{linear-Riemann}), we gain\n \\begin{eqnarray}\n \\ddot{x}(t)=-(xh_{11,00}+yh_{12,00}),~~~~~~~~\\nonumber\\\\\n \\ddot{y}(t)=-(xh_{12,00}+yh_{11,00}),~~~~~~~~\\nonumber\\\\\n \\ddot{z}(t)=(2h_{03,03}-h_{33,00}-h_{00,33})z.\\label{xyzdott}\n \\end{eqnarray}\n Using solution (\\ref{solution-theta}) and Eqs. 
(\\ref{xyzdott}), we obtain\n \\begin{eqnarray}\n \\ddot{x}(t)=k_{0}^{2}[\\hat{\\epsilon}^{(+)}(k_{0})x+\\hat{\\epsilon}^{(\\times)}(k_{0})y]\\exp(ik_{\\alpha}x^{\\alpha})+c.c.,~~\\nonumber\\\\\n \\ddot{y}(t)=k_{0}^{2}[-\\hat{\\epsilon}^{(+)}(k_{0})y+\\hat{\\epsilon}^{(\\times)}(k_{0})x]\\exp(ik_{\\alpha}x^{\\alpha})+c.c.,\\nonumber\\\\\n \\ddot{z}(t)=0,~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\label{xyzdott-tensor}\n \\end{eqnarray}\nwhich describe the two standard plus and cross polarization modes of GR with the frequency $k_{0}$. For case of massless scalar field $\\varphi$ we have\n \\begin{eqnarray}\n \\ddot{x}(t)=-p_{0}^{2}xc(\\vec{p})\\exp(ip_{\\alpha}x^{\\alpha})+c.c.,~~~~\n \\ddot{y}(t)=-p_{0}^{2}yc(\\vec{p})\\exp(ip_{\\alpha}x^{\\alpha})+c.c.,~~~~\n \\ddot{z}(t)=0,\\label{xyzdott-bf}\n \\end{eqnarray}\nand for case of massless scalar field $b_{f}$ we obtain\n \\begin{eqnarray}\n \\ddot{x}(t)=-q_{0}^{2}xd(\\vec{p})\\exp(iq_{\\alpha}x^{\\alpha})+c.c.,~~~~\n \\ddot{y}(t)=-q_{0}^{2}yd(\\vec{p})\\exp(iq_{\\alpha}x^{\\alpha})+c.c.,~~~~\n \\ddot{z}(t)=0.\\label{xyzdott-varphi}\n \\end{eqnarray}\nObviously, Eqs.(\\ref{xyzdott-bf}) and (\\ref{xyzdott-varphi}) indicate a breathing type of GWs polarization, which have two oscillation modes with the frequency $q_{0}$ and frequency $p_{0}$, respectively. The same results can also be obtained by using the Newman-Penrose (NP) method \\cite{GW-polar3a,NP-tetrad}. Following the method shown in Refs. \\cite{GW-polar3a,GW-polar4a,GNP-polar-six1,GNP-polar-six2,GNP-polar-six3,GNP-polar-six4,GNP-polar-six5,GNP-polar-six6,NP-polar-six}, the amplitudes of six polarizations in the Palatini-GBD theory can be calculated as follows\n \\begin{eqnarray}\n p^{(l)}_{1}=-\\frac{1}{6}R_{0303}=0,~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\nonumber\\\\\n p^{(x)}_{2}=-\\frac{1}{2}R_{0301}=0,~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\nonumber\\\\\n p^{(y)}_{3}=\\frac{1}{2}R_{0302}=0,~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\nonumber\\\\\n p^{(+)}_{4}=-R_{0101}+R_{0202}=-2k_{0}^{2}\\hat{\\epsilon}^{(+)}(k_{0})\\exp(ik_{\\alpha}x^{\\alpha})+c.c.,~~~~~~~~~~~~~~~~~~~~~\\nonumber\\\\\n p^{(\\times)}_{5}=2R_{0102}=2k_{0}^{2}\\hat{\\epsilon}^{(\\times)}(k_{0})\\exp(ik_{\\alpha}x^{\\alpha})+c.c.,~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\nonumber\\\\\n p^{(b)}_{6}=-R_{0101}-R_{0202}=2p_{0}^{2}c(\\vec{p})\\exp(ip_{\\alpha}x^{\\alpha})+2q_{0}^{2}d(\\vec{p})\\exp(iq_{\\alpha}x^{\\alpha})+c.c..\n \\label{p-np}\n \\end{eqnarray}\nHere the six polarizations modes of GWs are: the longitudinal scalar mode $p_{1}^{(l)}$, the vector-$x$ model $p_{2}^{(x)}$, the vecotr-$y$ mode $p_{3}^{(y)}$, the plus tensorial mode $p_{4}^{(+)}$, the cross tensorial mode $p_{5}^{(\\times)}$, and the breathing scalar mode $p_{6}^{(b)}$, respectively. Form expressions (\\ref{p-np}), we can read the plus tensor polarization mode, the cross tensor polarization mode, and a breathing scalar mode with two oscillation in the Palatini-GBD theory. It indicates that the extra scalar field $F$ and the BD scalar field generate the new polarizations of GWs which are not present in the standard GR or the Palatini-$f(\\tilde{R})$ theory. In the GR and the Palatini-$f(\\tilde{R})$ theory, both of them predict two tensorial polarization modes: $+$ and $\\times$, not have any scalar modes \\cite{GW-polar}.\n\n Comparing our results with other theoretical results in Refs. 
\\cite{GW-polar,GW-polar1a,GW-polar2a,GW-polar3a,GW-polar4a}, it is observed that the polarization modes of GWs in the Palatini-GBD theory are different from the results given in some other gravitational theories. For example, in the massive BD theory \\cite{GW-polar}, it has two standard tensorial modes of GR and two scalar modes (the longitudinal and the breathing modes); In the massless BD theory it owns two standard tensorial modes and one breathing scalar mode \\cite{GW-polar}; In the $f(R)$ theories, there are two tensorial modes and a massive scalar mode that is a mix of the longitudinal and the transverse breathing polarization \\cite{GW-polar1a,GW-polar2a}; In the $f(T,B)$ theory of teleparallel gravity (it is equivalent to $f(R)$ gravity by linearized the field equations in the weak field limit approximation), there are three polarizations \\cite{GW-polar3a}: the two standard of general relativity and an additional massive scalar mode, where the boundary term $B$ excites the extra scalar polarization; In the higher order local and non-local theories of gravity, they have three state of polarization and $n + 3$ oscillation modes \\cite{GW-polar4a} (concretely, they are the two transverse tensor ($+$) and ($\\times$) standard polarization modes of frequency $\\omega_{1}$, and the $n+1$ further scalar modes of frequency $\\omega_{2}, ..., \\omega_{n+2}$, each of which has the same mixed polarization, partly longitudinal and partly transverse).\n\n\n\n We also compare the results of GWs polarization in the Palatini-GBD theory with that in the metric-GBD theory. The plane-wave solutions in the metric-GBD theory can be expressed as \\cite{GBD-L}: $\\theta_{\\mu\\nu}=A_{\\mu\\nu}(\\vec{p})\\exp(i k_{\\alpha}x^{\\alpha})$, $\\varphi=a(\\vec{p})\\exp(i p_{\\alpha}x^{\\alpha})$, and $b_{f}=b(\\vec{p})\\exp(i q_{\\alpha}x^{\\alpha})$. Here, $\\varphi$ denotes the massless BD-field perturbation and $b_{f}$ denotes the massive geometry-field perturbation, respectively. For the massive plane wave propagating along $z-$direction, we have $q_{\\alpha}=(q_{0},0,0,q_{3})$ with $ m^{2}=q_{0}^{2}-q_{3}^{2}\\neq 0$. Originally, the NP formalism was applied to work out for massless waves. Recently, it was also generalized to explore the massive waves propagating along non-null geodesics \\cite{GW-polar3a,NP-tetrad}. Using this method, the non-zero amplitudes of polarizations for the metric-GBD theory in Ref. 
\\cite{GBD-L} are calculated as\n \\begin{eqnarray}\n p^{(l)}_{1}=\\frac{1}{6}(-q_{3}^{2}+q_{0}^{2})b(\\vec{p})exp(iq^{\\alpha}x_{\\alpha})+c.c.,~~~~~~~~~~~~~~~~~~\\nonumber\\\\\n p^{(+)}_{4}=-\\sqrt{2}k_{0}^{2}\\hat{\\epsilon}^{(+)}(k_{0})\\exp(ik_{\\alpha}x^{\\alpha})+c.c.,~~~~~~~~~~~~~~~~~~\\nonumber\\\\\n p^{(\\times)}_{5}=\\sqrt{2}k_{0}^{2}\\hat{\\epsilon}^{(\\times)}(k_{0})\\exp(ik_{\\alpha}x^{\\alpha})+c.c.,~~~~~~~~~~~~~~~~~~~~\\nonumber\\\\\n p^{(b)}_{6}=2p_{0}^{2}a(\\vec{p})\\exp(ip_{\\alpha}x^{\\alpha})+2q_{0}^{2}b(\\vec{p})\\exp(iq_{\\alpha}x^{\\alpha})+c.c..\n \\label{p-np-metric}\n \\end{eqnarray}\n Obviously, Eqs.(\\ref{p-np-metric}) show that there are four polarizations modes for GWs in the metric-GBD theory: the two standard tensorial modes ($+$ and $\\times$), a scalar breathing mode with frequency $p_{0}$, and a massive scalar mode that is a mix of the longitudinal and the transverse breathing polarization with frequency $q_{0}$.\n\n\n\n\\section{$\\text{ PPN parameter in the Palatini-GBD theory}$}\n\n\nIn this section, we derive the theoretical expressions of the parametrized post-Newtonian (PPN) parameter $\\gamma$ in the Palatini-GBD theory by using the weak-field approximation method. Considering a static point-mass source, we have the form of the energy-momentum tensor: $T_{\\mu\\nu}=m_{p}\\delta(\\vec{r})diag(1,0,0,0)$. Obviously, the point particle is located at $\\vec{r}=0$. Solving Eqs. (\\ref{eq-box-theta}) and (\\ref{eq-box-varphi}), we get the perturbation solutions: $\\theta_{00}=\\frac{4m_{p}}{\\phi_{0}F_{0}}\\frac{1}{r}$ and $\\varphi(r)=\\frac{m_{p}}{\\omega}\\frac{1}{r}$. Combining relations: $b_{\\mu\\nu}=\\theta_{\\mu\\nu}-\\eta_{\\mu\\nu}\\frac{\\theta}{2}+\\eta_{\\mu\\nu}b_{f}-\\eta_{\\mu\\nu}\\frac{\\varphi}{\\phi_{0}}$ and $\\theta=\\eta^{\\mu\\nu}\\theta_{\\mu\\nu}=-\\frac{4m_{p}}{\\phi_{0}F_{0}}\\frac{1}{r}$, we gain the non-vanishing components of the metric perturbation\n\\begin{equation}\nb_{00}=\\frac{2m_{p}}{\\phi_{0}F_{0}r}+\\frac{m_{p}}{\\phi_{0}\\omega r}-b_{f},\\label{h00}\n\\end{equation}\n\\begin{equation}\nb_{ij}=\\frac{2m_{p}}{\\phi_{0}F_{0}r}-\\frac{m_{p}}{\\phi_{0}\\omega r}+b_{f}.\\label{hij}\n\\end{equation}\nHere $i,j =1,2,3$ is the space index. The term $\\frac{m_{p}}{\\phi_{0}\\omega r}$ in above two equations reflect the effect of scalar field $\\phi$. Considering that $b_{f}$ is negligible for a point-mass case, then the concrete form of the PPN parameter $\\gamma$ in the Palatini-GBD theory are derived as follows\n\\begin{equation}\n\\gamma=\\frac{b_{ii}}{b_{00}}=\\frac{2\\omega-F_{0}}{2\\omega+F_{0}}.\\label{gamma}\n\\end{equation}\n From Eq. (\\ref{gamma}), we can see the dependence of the PPN parameter $\\gamma$ with respect to model parameters: $\\omega$ and $F_{0}$. It is well known that a gravity theory alternative to GR should be tested by the well-founded experimental results. Some observations can be directly applied to constrain the value of the PPN parameter $\\gamma$. In Ref. \\cite{bound-omega-gamma}, the experimental bound on $\\gamma$ is: $|\\gamma-1|<2.3*10^{-5}$. For the Palatini-GBD theory, then we have that $\\gamma\\sim 1$ requires $\\omega \\gg F_{0}$, which can be consistent with the observational constraint: $ |\\omega| > 40000$ \\cite{bound-omega-gamma}.\n\n\n\n\\section{$\\text{Conclusions}$}\n\nSeveral observational and theoretical problems motivate us to investigate the modified or alternative theories of GR. Lots of modified gravity theories have been proposed and widely studied. 
In the BD modified theory of gravity, the scalar field is introduced by treating the Newtonian gravitational constant as a time-dependent quantity. Many extended versions of the BD theory have been explored and developed. As one of the generalized BD theories, the metric-formalism GBD theory exhibits several interesting properties \\cite{GBD,GBD-L}. For example: (1) In the GBD model, the state parameter of geometrical dark energy can cross the phantom boundary $w=-1$, as in the quintom model, without suffering from the problems of the two-field quintom model, such as the negative kinetic term and the fine-tuning problem. (2) It is well known that the metric-$f(R)$ theories are equivalent to the BD theory with a potential (abbreviated as BDV) for the specific value $\\omega=0$ of the BD parameter; this choice is quite exceptional, and the corresponding absence of a kinetic term for the field is hard to understand. The GBD theory, by contrast, resembles a double scalar-field model and retains a non-vanishing kinetic term in the action. In addition, the GBD theory investigates the physics from the viewpoint of geometry, while the BDV or the two scalar-field quintom model addresses physical problems from the viewpoint of matter. It is possible that several special characteristics of the scalar field could be revealed through studies of geometrical gravity in the GBD.\n\n\nIn this paper, we studied the generalized Brans-Dicke theory in the Palatini formalism. Firstly, we derived the field equations for the gravitational field, the independent connection and the BD scalar field, respectively. Secondly, using the weak-field method we obtained the linearized gravitational field equation and the linearized scalar-field equations. We showed that the geometrical scalar field in the Palatini formalism of the GBD theory is massless and source-free, in contrast to the results obtained in the metric formalism. Based on the weak-field equations, we investigated the parametrized post-Newtonian parameter in the Palatini-GBD theory by using the point-source method. It was shown that $\\gamma=\\frac{2\\omega-F_{0}}{2\\omega+F_{0}}\\sim 1$ requires $\\omega \\gg F_{0}$, which is consistent with the observational constraint $ |\\omega| > 40000$. Comparing with Ref. \\cite{GBD-L}, we can see the difference between the expressions of $\\gamma$ in the Palatini-GBD theory and in the metric-GBD theory: in the metric-GBD theory \\cite{GBD-L}, $\\gamma$ depends on the parameters $\\omega$, $F_{0}$ and $m_{s}$. Thirdly, we discussed the physics of gravitational waves in the Palatini-GBD theory. The properties of GWs in modified gravity theories have recently attracted much attention \\cite{gw-mg1,gw-mg2,gw-mg3,gw-mg4,gw-mg5}, since the polarization modes of GWs can serve to discriminate between different gravitational theories.\nThe results in the Palatini formalism of GBD showed that the extra scalar field $F$ and the BD scalar field generate new polarizations of GWs which are not present in standard GR or in the Palatini-$f(\\tilde{R})$ theory; both of the latter predict only the two tensorial polarization modes, $+$ and $\\times$, without any scalar modes. Concretely, there are three polarization modes and four oscillation modes for GWs in the Palatini-GBD theory, i.e.
the plus tensor polarization mode, the cross tensor polarization mode, and a breathing scalar mode with two oscillation. The results of GWs polarization in the Palatini-GBD theory are different from that in the metric-GBD theory. In the metric-GBD theory, there are four polarizations modes: the two standard tensorial modes ($+$ and $\\times$), a scalar breathing mode, and a massive scalar mode that is a mix of the longitudinal and the breathing polarization.\n\n\n\n\n\\textbf{\\ Acknowledgments }\n The research work is supported by the National Natural Science Foundation of China (11645003,11705079), and supported by LiaoNing Revitalization Talents Program.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n An intuitive understanding of a prototype of non-Fermi liquid (NFL), a multichannel Kondo model, was proposed by Nozi\\`eres and Blandin about 25 years ago.\\cite{NB} The understanding of the multichannel Kondo model was obtained by various methods, such as Bethe-ansatz\\cite{Wiegman, Andrei}, boundary conformal field theory (BCFT)\\cite{AffLud1,AffLud2,AffLud3,AffLud4,2IK}, numerical renormalization group (NRG)\\cite{Cragg,Pang}, and so on. In particular, we can compare the NRG finite size spectra and those of the BCFT even if we have NFL spectra and the agreements are excellent.\\cite{Kim2,AffLud4,2IK} \n\n In the realistic model of diluted f-electron systems, a candidate for a two-channel Kondo (2CK) model is the quadrupolar Kondo model\\cite{Cox} for U and Ce based alloys, in which non-Kramers doublet in $f^2$ configuration plays an important role. Several models that have a NFL property were proposed, in which the $\\Gamma_8$ conduction electrons interact with a localized $f^1$ or $f^2$ crystalline-electric-field state under cubic symmetry\\cite{Koga1,KusuKura,Koga2,Kim1}. These models may be relevant for dilute alloys of Ce$^{3+}$ or U$^{4+}$ ions, such as La$_{1-x}$Ce$_x$Cu$_2$Si$_2$\\cite{Andraka}, UCu$_{5-x}$Pd$_x$\\cite{Andrade} among others. In general, NFL behaviors are observed in real materials as transient phenomena, which are controlled by temperature, pressure, alloying and so on.\n\n In this Letter, we reexamine an impurity Kondo model in which a localized f-electron with $\\Gamma_8$ symmetry interacts with $\\Gamma_8$ conduction electrons under cubic symmetry. This model was proposed as that of Ce$_x$La$_{1-x}$B$_6$\\cite{Koga1}. The earlier NRG calculations confirmed the existence of a NFL fixed point that is unstable against the particle-hole (PH) symmetry breaking. Indeed, Ce$_x$La$_{1-x}$B$_6$ is not located near this NFL fixed point, so that the property of this NFL may not be relevant to the case of Ce$_x$La$_{1-x}$B$_6$. However, the NFL would be realized in the system with small quadrupolar interactions that break PH symmetry. Theoretically, the origin of the NFL was thought to be mysterious from the conventional BCFT point of view. The NRG energy spectrum was similar to that of 2CK model but the origin was unknown.\n\nAlthough it is difficult to control PH asymmetry in experiments, for both the pursuit of new materials and theoretical interest, it is worthwhile to study the detailed properties of this NFL by BCFT. It is noted that a BCFT approach is a powerful tool for answering {\\it why} NFL behaviors emerge in various impurity problems. 
The same situation occurs in the case of the two-impurity Kondo model (the NFL is unstable against PH asymmetry of conduction electrons), which has been extensively investigated so far\\cite{Jones}. Roughly speaking, the quantum critical phenomena near the antiferromagnetic critical point at zero temperature\\cite{Stewart} can be seen as a kind of a lattice generalization of the two-impurity Kondo model (however, it is not so simple). The PH asymmetry in a generalized 2CK model\nwas also discussed\\cite{Pan2}. In this case, in the presence of a double tensor interaction that breaks the PH symmetry, the system flows to a Fermi liquid. \n\n\n We assume the following points for the local f-electron state: (i) cubic symmetry, (ii) $f^1$ $\\Gamma_8$ ground state configuration, (iii) the lowest excited state is f$^0$ and\/or $f^2$ $\\Gamma_1$ singlet configuration. \n Under these assumptions, we can map the $\\Gamma_8$ index to a pseudospin-$3\/2$ representation as $|\\Gamma_{8,{\\pm \\frac{3}{2}}}\\rangle=\\mp(| \\pm \\frac{3}{2}\\rangle+\\sqrt{5}|\\mp \\frac{5}{2}\\rangle)\/\\sqrt{6}$ and $|\\Gamma_{8,{\\pm \\frac{1}{2}}}\\rangle=\\pm| \\pm \\frac{1}{2}\\rangle$, where $|j_z\\rangle$ is a state with the total angular momentum $J=5\/2$ and its z-component $j_z$.\n\n\n\nWe start with the following pseudospin-$3\/2$ Kondo model (see the derivation in ref. 13) in 1-dimension considering the s-wave scattering at the impurity site\\cite{AffLud2}:\n\\begin{eqnarray}\nH&=&H_0+\\sum_{m=\\rm dip,quad,oct}H_{m},\\label{H}\\\\\nH_0&=&\\frac{iv_F}{2\\pi}\\sum_{\\mu=\\pm \\frac{3}{2},\\pm\\frac{1}{2}}\\int dx\\psi_{\\mu}^{\\dagger}(x)\\partial \\psi_{\\mu}(x),\\\\\nH_{m}&=& J_{m}\\sum_{\\mu,\\nu=\\pm \\frac{3}{2},\\pm\\frac{1}{2}}\\psi_{\\mu}^{\\dagger}(0)({\\bf x}_c^m)_{\\mu\\nu}\\psi_{\\nu}(0)\\cdot {\\bf X}_I^m,\n\\end{eqnarray}\nwhere we introduce left-moving fermion annihilation operators $\\psi_{\\mu}(x)$, Fermi velocity $v_F$ and spin $3\/2$ dipolar (${\\bf x}_c^{\\rm dip}={\\bf s}_c$), quadrupolar (${\\bf x}_c^{\\rm quad}={\\bf q}_c$) and octupolar (${\\bf x}_c^{\\rm oct}={\\bf t}_c$) matrices of the conduction electron (similar definitions for ${\\bf X}_I^m$ of the impurity).\\cite{Relation} $J_{m}$ is the coupling constant of each multipoles. It is noted that under the assumptions (i)-(iii) the interactions are isotropic in the pseudospin space.\n\n\nThe ``conformal embedding'' often found in literatures is SU(4)$\\to$ SU(2)$_{10}\\otimes$ SU(2)$_4$.\\cite{Kim2}\n The SU(2)$_{10}$ corresponds to a spin current $ {{ \\mbox{\\boldmath $ \\mathcal J$}}}(x)$\\cite{J}, i.e., {\\it dipole}. The other SU(2) current, SU(2)$_4$, is an axial charge (AC) current $\\mbox{\\boldmath $\\mathcal Q$}(x)$\\cite{Q}. In this embedding, $H_0$ can be written in the following Sugawara form:\n\\begin{eqnarray}\n\\frac{l}{\\pi v_F}H_0=\\sum_{n=-\\infty}^{\\infty}\\Big(\\frac{1}{4}:\\mbox{\\boldmath $\\mathcal Q$}_{-n}\\cdot\\mbox{\\boldmath $\\mathcal Q$}_{n}: + \\frac{1}{12}:{\\mbox{\\boldmath $\\mathcal J$}}_{-n}\\cdot \n{\\mbox{\\boldmath $\\mathcal J$}}_{n}:\\Big), \\label{Hboson}\n\\end{eqnarray}\nwhere \n${\\mbox{\\boldmath $\\mathcal J$}}_{n}\\ {\\rm and}\\ {\\mbox{\\boldmath $\\mathcal Q$}}_{n}$ are the Fourier components of ${\\mbox{\\boldmath $\\mathcal J$}}(x)\\ {\\rm and}\\ {\\mbox{\\boldmath $\\mathcal Q$}}(x)$, respectively. We set the system size to $2l$. $:A:$ indicates the normal ordering of the operator $A$. The energy eigenvalues of the right hand side of eq. 
(\\ref{Hboson}) for primary states that form conformal towers\\cite{Itz} are given by\n\\begin{eqnarray}\nE(q,j)=\\frac{q(q+1)}{4}+\\frac{j(j+1)}{12},\\label{E0}\n\\end{eqnarray}\n where $(q=0,1\/2, 1)$ and $(j=0,1\/2, \\cdots, 5)$. The energies of the descendant states\\cite{Itz} are given by $E(q,j)+n$, where $n$ is a positive integer and indicates the PH excitations from the ground state.\\cite{AffLud3} The descendant states generally have different quantum numbers from those of the corresponding primary state in the same conformal tower.\n The form of eq. (\\ref{Hboson}) is suitable to the case of $J_{\\rm quad}=J_{\\rm oct}=0$. In this case, the impurity spin can be absorbed into ${\\mbox{\\boldmath $\\mathcal J$}}_{n}$\\cite{AffLud1} and the predicted NFL spectra are in complete agreement with the NRG results\\cite{Kim2,Koga1,Koga2}.\n\nIn the presence of quadru- and octupolar interactions, the above SU(2)$_{10}$ absorption is not applicable.\nAs noted by Wu {\\it et al.} the spin $3\/2$ fermionic system has an exact SO(5) symmetry under some conditions\\cite{Wu1}. In the following, we transform the Hamiltonian (\\ref{H}) into the SO(5) language, i.e., the embedding is SU(4)$\\to $SO(5)$_2\\otimes$ SU(2)$_4$.\n\nFirst, quadrupolar matrices of $s_c=3\/2$ have the following forms:\n\\begin{eqnarray}\nq_c^{3z^2-r^2}&\\equiv&\\frac{1}{\\sqrt{3}}(3s_c^zs_c^z-\\frac{3}{2}\\frac{5}{2}\\hat{\\bf 1})\\equiv \\sqrt{3}\\Gamma^4 \\label{qz},\\\\\nq_c^{x^2-y^2}&\\equiv&\\frac{1}{\\sqrt{2}}(s_c^xs_c^x-s_c^ys_c^y)\\equiv-\\sqrt{3}\\Gamma^5,\\\\\nq_c^{xy}&\\equiv&s_c^xs_c^y+s_c^ys_c^x\\equiv -\\sqrt{3}\\Gamma^1,\\\\\nq_c^{yz}&\\equiv&s_c^ys_c^z+s_c^zs_c^y\\equiv\\sqrt{3}\\Gamma^3,\\\\\nq_c^{zx}&\\equiv&s_c^zs_c^x+s_c^xs_c^z\\equiv\\sqrt{3}\\Gamma^2,\\label{qzx}\n\\end{eqnarray}\nwhere $\\hat{\\bf 1}$ is an identity matrix and the spin $3\/2$ operators $s_c^i\\ (i=x,y,z)$ are constructed on the basis of $^{\\dagger}(\\psi_{\\frac{3}{2}},\\psi_{\\frac{1}{2}},\\psi_{\\frac{-1}{2}},\\psi_{\\frac{-3}{2}})$. We have introduced five Dirac matrices $\\Gamma^a\\ (1\\le a \\le 5)$.\nFrom eqs. (\\ref{qz})-(\\ref{qzx}), we can define ten SO(5) generators $\\Gamma^{ab}$ as $\\Gamma^{ab}\\equiv \\frac{1}{2i}[\\Gamma^a, \\Gamma^b]$. These generators satisfy the SO(5) commutation relations:\n\\begin{eqnarray}\n[\\Gamma^{ab}, \\Gamma^{cd}]=-2i(\\delta_{bc}\\Gamma^{ad}-\\delta_{ac}\\Gamma^{bd}-\\delta_{bd}\\Gamma^{ac}+\\delta_{ad}\\Gamma^{bc}).\n\\end{eqnarray}\n\nThe important point is that $H_{\\rm dip}$ and $H_{\\rm oct}$ are written by the ten $\\Gamma^{ab}$ when the condition $J_{\\rm dip}=J_{\\rm oct}$ is satisfied. 
Thus, under this condition, each multipolar Hamiltonian can be written in the SO(5)'s vector $n^a_I$ ($n_c^a$) and generators $L^{ab}_I$ ($L_c^{ab}$) of the impurity (conduction electron), as follows:\n\\begin{eqnarray}\nH_{\\rm quad}&=& 12J_{\\rm quad}\\sum_{a=1}^5n_c^a(0)n^a_I,\\label{Hq}\\\\\nH_{\\rm dip}+H_{\\rm oct}&=& 5J_{\\rm dip}\\sum_{a=1}^5\\sum_{b>a}^5L_c^{ab}(0)L^{ab}_I,\\label{Hdo}\n\\end{eqnarray}\n where \n\\begin{eqnarray}\nn_c^a(x)&\\equiv& \\frac{1}{2}\\sum_{\\mu\\,\\nu=\\pm \\frac{3}{2},\\pm\\frac{1}{2}}\\psi_{\\mu}^{\\dagger}(x)(\\Gamma^a)_{\\mu\\nu}\\psi_{\\nu}(x),\\label{nc}\\\\\nL_c^{ab}(x)&\\equiv&-\\frac{1}{2}\\sum_{\\mu,\\nu=\\pm \\frac{3}{2},\\pm\\frac{1}{2}}\\psi^{\\dagger}_{\\mu}(x)(\\Gamma^{ab})_{\\mu\\nu}\\psi_{\\nu}(x).\\label{Lc}\n\\end{eqnarray}\n\\begin{table}[t]\n\t\\begin{tabular}{cccc}\n \\hline\n $q$ & $j$ & {\\bf u} & $E(q,{\\bf u}) \\ (E(q,j))$\\\\\n \\hline\n \\hline\n $0$ & $0$ & $\\bf 1$ & $0$\\\\\n \\hline\n $1\/2$ & $3\/2$ & $\\bf 4$ & $1\/2$\\\\\n \\hline\n $1$ & $2$ & $\\bf 5$ & $1$\\\\\n \\hline\n $0$ & $3$ & $\\bf 10$ & $1$\\\\\n $0$ & $1$ & & $1$\\\\\n \\hline\n $1$ & $0$ & $\\bf 1$ & $1$\\\\\n \\hline\n $3\/2$ & $3\/2$ & $\\bf 4$ & $3\/2$\\\\\n $1\/2$ & $3\/2$ & $\\bf 4$ & $3\/2$\\\\\n \\hline\n $1\/2$ & $7\/2$ & $\\bf 16$ & $3\/2$\\\\\n $1\/2$ & $5\/2$ & & $3\/2$\\\\\n $1\/2$ & $1\/2$ & & $3\/2$\\\\\n \\hline\n \\end{tabular}\n\\caption{Low-energy spectrum of free Hamiltonian for nondegenerate ground state. The first and the third columns are the AC and SO(5) indices, respectively. The second column is the eigenvalue of the spin current in eq. (\\ref{E0}). $E(q,{\\bf u})$ and $E(q,j)$ are measured from the ground state using eqs. (\\ref{E1}) and (\\ref{E0}), respectively.}\n\\label{tbl-1}\n\\end{table}\n\n\nThe free Hamiltonian $H_0$ in (\\ref{H}) is expressed using the SO(5) generators\n\\begin{eqnarray}\n\\frac{l}{\\pi v_F}H_0=\\sum_{n=-\\infty}^{\\infty}\\Big(\\frac{1}{4}:\\mbox{\\boldmath $\\mathcal Q$}_{-n}\\cdot\\mbox{\\boldmath $\\mathcal Q$}_{n}: + \\frac{1}{8}:{\\mbox{\\boldmath $\\mathcal L$}}_{-n}\\cdot \n{\\mbox{\\boldmath $\\mathcal L$}}_{n}:\\Big), \\label{Hboson2}\n\\end{eqnarray}\nwhere $({\\mbox{\\boldmath $\\mathcal L$}}_{n})^{ab}$ is the n-th Fourier component of SO(5) current $L_c^{ab}(x)$: ${\\mbox{\\boldmath $\\mathcal L$}_n}=({\\mathcal L^{12}_{n}}, {\\mathcal L^{13}_{n}},\\cdots ,{\\mathcal L^{35}_{n}}, {\\mathcal L^{45}_{n}})$ and satisfies a level 2 SO(5) Kac-Moody algebra. The energy eigenvalues $E(q,{\\bf u})$ for the primary states are expressed by the Casimir operators in each sector\n\\begin{eqnarray}\nE(q,{\\bf u})=\\frac{q(q+1)}{4}+\\frac{\\bar{c}_{\\bf u}}{8},\\label{E1}\n\\end{eqnarray}\nwhere $q=0,1\/2, 1$ for the AC sector (this is as same as in eq. (\\ref{E0})), and $\\bf u=1\\ {\\rm(identity)},\\ 4\\ {\\rm (spinor)}\\ ,5\\ {\\rm (vector)}$ for the SO(5) sector. ${\\bf u}$ indicates the dimension of the representation in the SO(5), see Fig. \\ref{fig-1}. There are three primary fields in both the AC and the SO(5) sectors.\nThe values of $\\bar{c}_{\\bf u}$ are $\\bar{c}_{\\bf 1}=0,\\ \\bar{c}_{\\bf 4}=5\/2\\ {\\rm and }\\ \\bar{c}_{\\bf 5}=4$. The noninteracting energy spectra for the nondegenerate ground state are shown in Table \\ref{tbl-1} together with the indices of $j$ in eq. (\\ref{E0}). We can see that both eqs. 
(\\ref{E0}) and (\\ref{E1}) can reproduce the spectrum of free $s_c=3\/2$ fermions.\n\n\n\\begin{figure}[t]\n \\begin{center}\n \\includegraphics[width=.25\\textwidth]{.\/fig1.eps}\n \\end{center}\n\\caption{$H_1-H_2$ diagrams associated with the SO(5) multiplies that enter the low-energy spectrum, where $2H_1\\equiv \\Gamma^{15}$ and $2H_2\\equiv \\Gamma^{23}$ forming the Cartan subalgebra of SO(5) group. In (a), the corresponding components of the spinor are indicated.}\n\\label{fig-1}\n\\end{figure}\n\n\nNext, we consider eqs. (\\ref{Hq}), (\\ref{Hdo}) and (\\ref{Hboson2}) together. It is noted that the total Hamiltonian (\\ref{H}) is written as\n\n\\begin{eqnarray}\n\\frac{l}{\\pi v_F}H&=&\\sum_{n=-\\infty}^{\\infty}\\Big(\\frac{1}{4}:\\mbox{\\boldmath $\\mathcal Q$}_{-n}\\cdot\\mbox{\\boldmath $\\mathcal Q$}_{n}: + \\frac{1}{8}:{\\mbox{\\boldmath $\\mathcal L$}}_{-n}^{'}\\cdot \n{\\mbox{\\boldmath $\\mathcal L$}}_{n}^{'}:\\Big)\\nonumber\\\\\n&&+\\frac{12lJ_{\\rm quad}}{\\pi v_F}\\sum_{a=1}^5n_c^a(0)n^a_I+\\rm const.,\\label{HH}\n\\end{eqnarray}\nwhere\n\\begin{eqnarray}\n{\\mathcal L'}_{n}^{ab}\\equiv{\\mathcal L}_{n}^{ab}+\\frac{20lJ_{{\\rm dip}}}{\\pi v_F}L_I^{ab}.\n\\end{eqnarray}\nThe values at possible fixed points, the $J_{\\rm dip}^*$, $J^*_{\\rm quad}$ and $J^*_{\\rm oct}$ are determined (assumed) as\n\\begin{eqnarray}\n J_{\\rm dip}^*=J_{\\rm oct}^*=\\frac{\\pi v_F}{20l}{\\rm,\\ \\ \\ \\ and}\\ \\ \\ J^*_{\\rm quad}=0.\\label{Jc}\n\\end{eqnarray}\n The case in which $J_{\\rm quad}\\not=0$ is discussed later. At these values of the couplings, the impurity SO(5) ``superspin'' can be absorbed into the conduction electron ``superspin'' current, generating the new SO(5) ``superspin'' $ {\\mbox{\\boldmath $\\mathcal L$}}_{n}^{'}$. This impurity absorption does not affect the SO(5) Kac-Moody algebra, except the gluing conditions in Table \\ref{tbl-1}, which is much the same as in the case of the multichannel Kondo model\\cite{AffLud1}. \n\n\n At this stage, there is a need to introduce a suitable fusion rule to generate the nontrivial spectrum. Because the interaction is only in the SO(5) sector, any fusion in the AC sector is unphysical. An important point is that the impurity is described in the spin $3\/2$ representation (i.e., dimension 4). We find that the desired fusion is a direct product of $\\bf 4$ representation in the SO(5) sector to each state in the spectra of the free Hamiltonian. To execute this, the following formulae are useful (easily deducible from Fig. \\ref{fig-1}): $\n\\bf 1\\otimes 4= \\bf 4,\\ \n\\bf 4\\otimes 4= \\bf 1\\oplus 5\\oplus 10\\ {\\rm and}\\ \n\\bf 4\\otimes 5= \\bf 4\\oplus 16$.\n\nThus, even after absorbing the impurity ``superspin'', we can calculate the energy spectra at this fixed point by using eq. (\\ref{E1}\n. \nThe obtained spectra are shown in Table \\ref{tbl-2}(a). As shown by previous NRG studies\\cite{Koga1, KusuKura}, the NFL spectra are the same as those of the 2CK NFL if the AC sector in the present model and the spin sector in the 2CK model are interchanged. 
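As an explicit illustration of this fusion, the lowest entries of Table \\ref{tbl-2}(a) can be obtained by hand: fusing the SO(5) label of each free state in Table \\ref{tbl-1} with ${\\bf 4}$ and evaluating eq. (\\ref{E1}) with $\\bar{c}_{\\bf 1}=0$, $\\bar{c}_{\\bf 4}=5\/2$ and $\\bar{c}_{\\bf 5}=4$ gives
\\begin{eqnarray}
(q,{\\bf u})=(\\frac{1}{2},{\\bf 1})&:& E=\\frac{1}{4}\\cdot\\frac{1}{2}\\cdot\\frac{3}{2}+0=\\frac{3}{16},\\nonumber\\\\
(0,{\\bf 4})&:& E=0+\\frac{1}{8}\\cdot\\frac{5}{2}=\\frac{5}{16},\\nonumber\\\\
(\\frac{1}{2},{\\bf 5})&:& E=\\frac{3}{16}+\\frac{1}{2}=\\frac{11}{16},\\nonumber\\\\
(1,{\\bf 4})&:& E=\\frac{1}{2}+\\frac{5}{16}=\\frac{13}{16},\\nonumber
\\end{eqnarray}
where $(\\frac{1}{2},{\\bf 1})$ and $(\\frac{1}{2},{\\bf 5})$ descend from the free state $(\\frac{1}{2},{\\bf 4})$ via ${\\bf 4}\\otimes{\\bf 4}={\\bf 1}\\oplus{\\bf 5}\\oplus{\\bf 10}$, $(0,{\\bf 4})$ from $(0,{\\bf 1})$ via ${\\bf 1}\\otimes{\\bf 4}={\\bf 4}$, and $(1,{\\bf 4})$ from $(1,{\\bf 5})$ via ${\\bf 4}\\otimes{\\bf 5}={\\bf 4}\\oplus{\\bf 16}$. Measured from the new ground state energy $3\/16$, these reproduce the relative energies $0$, $1\/8$, $1\/2$ and $5\/8$ in the first four rows of Table \\ref{tbl-2}(a).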
Indeed, the existence of a SO(5) symmetry in the 2CK model has been pointed out before in literatures.\\cite{AffLud4}\n\n\\begin{table}[t]\n\\begin{center}\n\t\\begin{tabular}{cccccccc}\n \\hline\n (a)&$q$ & {\\bf u} & $E(q,{\\bf u})$ & (b) & $q$ & {\\bf u} & $\\Delta$\\\\\n \\hline\n \\hline\n &$1\/2$ & $\\bf 1$ & $0$ & & $0$ & $\\bf 1$ & $0$ \\\\\n &$0$ & $\\bf 4$ & $1\/8$ & & $0$ & $\\bf 5$ & $1\/2$ \\\\\n &$1\/2$ & $\\bf 5$ & $1\/2$ & & $1\/2$ & $\\bf 4$ & $1\/2$ \\\\\n &$1$ & $\\bf 4$ & $5\/8$ & & $1\/2$ & $\\bf 4$ & $1\/2$ \\\\\n &$3\/2$ & $\\bf 1$ & $1$ & & $1$ & $\\bf 1$ & $1\/2$ \\\\\n &$1\/2$ & $\\bf 10$ & $1$ & & $1$ & $\\bf 5$ & $1$ \\\\\n &$1\/2$ & $\\bf 1$ & $1$ & & $0$ & $\\bf 10$ & $1$ \\\\\n \\hline\n \\end{tabular}\n\\caption{(a) Low energy spectrum of NFL fixed point with $E(q,{\\bf u})\\le 1$. In the third column, the ground state energy $3\/16$ is subtracted. (b) Operator contents at the NFL fixed point. We show the operators with $\\Delta \\le 1$ where $\\Delta$ means the scaling dimension of the corresponding operator. \n}\n\\label{tbl-2}\n\\end{center}\n\\end{table}\n\n\nThe operator contents at the NFL fixed point are obtained by applying a double fusion\\cite{AffLud1} in the SO(5) sector (see Table \\ref{tbl-2}(b)). Again, we obtain the same scaling dimensions $\\Delta$ as those at the 2CK fixed point with the above-mentioned interchange. The operator with $(q, {\\bf u})=(0,{\\bf 5})$ is the SO(5) vector $\\mbox{\\boldmath $\\mathcal \\phi$}_{\\rm SO(5)}$, which corresponds to the quadrupolar operator in the original spin $3\/2$ representation. This operator has $\\Delta=1\/2$, so that the corresponding local quadrupolar susceptibility diverges logarithmically at low temperatures. This is consistent with the result of NRG studies \\cite{Koga1, KusuKura}. The local charge and pair field susceptibility would diverge at low temperatures, because the operator $(q, {\\bf u})=(1,{\\bf 1})$, $\\mbox{\\boldmath $\\mathcal \\phi$}_{\\rm AC}$, which is the AC vector, has $\\Delta=1\/2$. The dipolar susceptibility in addition to the octupolar one, is classified in $\\bf 10$ representation with $\\Delta=1$. This means that the dipolar and the octupolar susceptibilities $\\chi$ are not singular, which is also consistent with the NRG calculation\\cite{Koga1,KusuKura}.\n\n\nNext we discuss the stability of the NFL fixed point against various perturbations.\n\na) In the presence of a uniaxial distortion, the term $h_Qn_I^4$, which breaks the SO(5) symmetry, appears in the effective Hamiltonian. This term allows $\\mbox{\\boldmath $\\mathcal \\phi$}_{\\rm SO(5)}$ with $\\Delta=1\/2$ to appear. Thus, the conjugate field $h_Q$ becomes a relevant perturbation and the NFL fixed point becomes unstable.\n\nb) In the presence of a potential scattering at the impurity site $V\\sum_{\\mu}\\psi_{\\mu}^{\\dagger}(0)\\psi_{\\mu}(0)=VQ_z(0)+\\ \\rm const.$, the SU(2) symmetry in the AC sector is broken. In this case, $\\phi_{\\rm AC}^3$, the component $q_z=0$ of $\\mbox{\\boldmath $\\mathcal \\phi$}_{\\rm AC}$ with $\\Delta=1\/2$, can appear in the effective Hamiltonian. Again, the NFL fixed point becomes unstable. The PH symmetry can also be broken by the quadrupolar interaction of eq. (\\ref{Hq}). Indeed, the system flows into the Fermi liquid fixed point of SU(4) Coqblin-Schrieffer model without PH symmetry\\cite{Koga1}. 
\n\nc) The exchange anisotropy in the SO(5) sector is irrelevant, because the marginal operator $(q,{\\bf u})=(0,{\\bf 10})$ cannot appear under the time-reversal symmetry as discussed in Ref. 8 (that is, our assumption $J_{\\rm dip}=J_{{\\rm oct}}$, does not affect the present result).\\cite{COMMENT}\n\n\\begin{figure}[t!]\n \\begin{center}\n \\includegraphics[width=.4\\textwidth]{.\/fig2.eps}\n \\end{center}\n\\caption{$C_{\\rm imp}\/T$ vs $T$ and ${\\rm Im}\\chi_c(\\omega)$ vs $\\omega$. $V$ is the strength of potential scattering at the impurity site. The parameters used are $J_{\\rm dip}=J_{\\rm oct}=0.1D$ and $J_{\\rm quad}=0.0D$, where $D$ is the half of the bandwidth of conduction electrons.}\n\\label{fig-2}\n\\end{figure}\n\nIn early NRG studies\\cite{Koga1,KusuKura}, the NFL fixed point was considered to be similar to the 2CK fixed point. But the question arises, `` Is the present NFL really equivalent to the 2CK ?'' To verify this, we consider the effective Hamiltonian near the NFL fixed point. If the present NFL and the NFL of the 2CK were equivalent, the leading irrelevant operator would be $\\mbox{\\boldmath $\\mathcal Q$}_{-1}\\cdot\\mbox{\\boldmath $\\mathcal \\phi$}_{\\rm AC}$ with $\\Delta=1\/2+1=3\/2$. This operator, however, is not physically adequate because the impurity absorption occurs in the SO(5) sector. The low-energy effective Hamiltonian should be made by using operators in the SO(5) sector and these should transform as a singlet\\cite{AffLud1}. \n\nBecause ${\\mathcal L}^{ab}_{-1}\\phi^c_{\\rm SO(5)}\\ (1\\le a,b,c\\le 5)$ type operators with $\\Delta=3\/2$ cannot become a singlet, the leading irrelevant operator should be the energy-momentum operator $\\sum_{ab}:L_c^{ab}(0)L_c^{ab}(0):$ with $\\Delta=2$. This leads us to an important conclusion: {\\it impurity specific heat does not diverge at low temperatures} unlike in the 2CK case, {\\it but shows a linear temperature behavior}. As we mentioned above, $\\chi$ is not singular, so the Wilson ratio $R_W=(\\delta\\chi\/\\chi)\/(\\delta C\/C)$ is calculated using the conformal charge in each of sectors $c_{\\rm SO(5)}$ and $c_{\\rm AC}$ as $R_W=(c_{\\rm SO(5)}+c_{\\rm AC})\/c_{\\rm SO(5)}=(5\/2+3\/2)\/(5\/2)=8\/5$($C$: specific heat).\\cite{AffLud3} This result is different from that of the 2CK case: $R_W=8\/3$.\n\nFinally, to check the new results above, we show the results of the NRG calculations of the impurity specific heat $C_{\\rm imp}$ and the z-component of the dynamical AC susceptibility for localized conduction electrons $\\chi_c(\\omega)$ at zero temperature in Fig. {\\ref{fig-2}}. Note that $\\chi_c(\\omega)$ is the dynamical charge susceptibility of the conduction electrons at the impurity site. We used logarithmic discretization parameter $\\Lambda=3$ and kept 400 states at each NRG step. It was confirmed that $C_{\\rm imp}\/T$ is constant at low temperatures and that Im$\\chi(0)\\not=0$ indicates the divergence of the conduction electron's charge susceptibility. We can see that the divergence of the charge susceptibility is suppressed when PH symmetry is broken ($V\\not=0$ case). We also obtained the residual impurity entropy $S_0=k_B\\log \\sqrt{2}$, which is the same value as that at the 2CK NFL fixed point. An increase in the data $V\/D=10^{-4}$ of the $C_{\\rm imp}\/T$ is the natural consequence of the entropy release from $k_B\\log \\sqrt{2}$ to $0$.\n\nIn summary, we have investigated a multipolar Kondo model with $S_I=3\/2$ and $s_c=3\/2$ using BCFT and NRG. 
The 2CK-like NFL fixed point observed in the earlier NRG calculation is explicitly derived using the ``superspin'' absorption in a hidden symmetry SO(5). We find that the leading irrelevant operator at the NFL fixed point is ``Fermi liquid-like'' in contrast to its 2CK-like NFL spectra. All predictions are consistent with earlier works and the present NRG calculation. In particular, the low temperature impurity specific heat is proportional to temperature, the Wilson ratio $R_W=8\/5$ and the local charge susceptibility of conduction electrons diverges at zero temperature. These results remarkably distinguish the present NFL from the 2CK model.\n\n\\vspace{.5cm}\nThe author would like to thank A. Yotsuyanagi, T. Takimoto, H. Kohno and K. Miyake for valuable comments. This work was supported by the 21st Century COE Program (G18) of the Japan Society for the Promotion of Science.\nThe author is supported by the Research Fellowships of JSPS for Young Scientists.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Introduction}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Introduction}\n\nIn the face of the proliferation of real-time IoT applications, fog computing has come as a promising complement to cloud computing by extending cloud to the edge of the network to meet the stringent latency requirements and intensive computation demands of such applications \\cite{xiao2017qoe}.\n\nA typical fog computing system consists of a set of geographically distributed fog nodes\nwhich are deployed at the network periphery with elastic resource provisioning such as storage, computation, and network bandwidth\\cite{yi2015fog}. \nDepending on their distance to IoT devices, fog nodes are often organized in a hierarchical fashion, with each layer as a \\textit{fog tier}.\nIn such a way, resource-limited IoT devices, when heavily loaded, can delegate workloads via wireless links to nearby fog nodes, \\textit{a.k.a.}, \\textit{workload offloading}, \nto reduce the power consumption and accelerate workload processing; \nmeanwhile, each fog node can offload workloads to nodes in its upper fog tier. \nHowever, along with all the benefits come the extended latency and extra power consumption. \nGiven such a power-latency tradeoff, two interesting questions arise.\nOne is \\textit{where} and \\textit{how much workloads} to offload between successive fog tiers. \nThe other is how to \\textit{allocate resources} for workload processing and offloading.\nThe timely decision making regarding these two questions is critical but challenging, due to temporal variations of system dynamics in wireless environment, uncertainty in the resulting offloading latency, and the unknown traffic statistics.\n\nWe summarize the main challenges of dynamic offloading and resource allocation in fog computing as follows: \n\\begin{enumerate}\n\t\\item[$\\diamond$] \\textbf{Characterization of system dynamics and the power-latency tradeoff}: In practice, a fog system often consists of multiple tiers, with complex interplays between fog tiers and the cloud, not to mention the constantly varying dynamics and intertwined power-latency tradeoffs therein.\n\t\tA model that accurately characterizes the system and tradeoffs is the key to the fundamental understanding of the design space.\n\t\\item[$\\diamond$] \\textbf{Efficient online decision making:}\n\t\tThe decision making must be computationally efficient, so as to minimize the overheads. 
The difficulties often come from the uncertainties of traffic statistics, online nature of workload arrivals, and intrinsic complexity of the problem.\n\t\\item[$\\diamond$] \\textbf{Understanding the benefits of predictive offloading:} One natural extension to online decision making is to employ predictive offloading to further reduce latencies and improve quality of service. For example, Netflix preloads videos onto users' devices based on user behavior prediction\\cite{NetflixPred}. Despite the wide applications of such approaches, the fundamental limits of predictive offloading in fog computing still remain unknown.\n\\end{enumerate}\n\n\n\\begin{table*}[!h]\n\\centering\n\\caption{Comparisons of related works}\n\\label{table: related works}\n\\begin{threeparttable}\t\n\\begin{tabular}{|c|c|c|c|c|c|c|c|}\n\\hline\n & D2D-enabled IoT & IoT-Fog\\tnote{1} & Fog-Fog\\tnote{2} & Fog-Cloud\\tnote{3} & Dynamic & Prior Arrival Distribution & Prediction \\\\\n\\hline\n\\cite{xiao2017qoe} & & \\checkmark & \\checkmark & \\checkmark & & -- & \\\\\n\\hline\n\\cite{wang2019cooperative} & & \\checkmark & & \\checkmark & & -- & \\\\\n\\hline\n\\cite{liu2018socially} & & \\checkmark & & \\checkmark & \\checkmark & Poisson & \\\\\n\\hline \n\\cite{misra2019detour} & & \\checkmark & & & & -- & \\\\\n\\hline\n\\cite{lei2019joint} & & \\checkmark & & & \\checkmark & Poisson & \\\\\n\\hline\n\\cite{mao2016power} & & \\checkmark & & & \\checkmark & Not Required & \\\\\n\\hline\n\\cite{chen2019energy} & & \\checkmark & & & \\checkmark & Not Required & \\\\\n\\hline\n\\cite{gao2019dynamic} & \\checkmark & \\checkmark & & & \\checkmark & Not Required & \\\\\n\\hline\n\\cite{zhang2019near} & & \\checkmark & & & \\checkmark & Not Required & \\\\\n\\hline\nOurs & & & \\checkmark & \\checkmark & \\checkmark & Not Required & \\checkmark \\\\\n\\hline\n\\end{tabular}\n\\begin{tablenotes}\n\t\\footnotesize\n\t \\item[1,2,3] ``IoT-Fog'' means offloading from IoT devices to fog, ``Fog-Fog'' means offloading between fog tiers, while ``Fog-Cloud'' means offloading from fog to cloud. \n\\end{tablenotes}\n\\end{threeparttable}\n\t\\vspace{-0.5cm} \n\\end{table*}\n\nIn this paper, we focus on the workload offloading problem for multi-tiered fog systems. We address the above challenges by developing a fine-grained queueing model that accurately depicts such systems and proposing an efficient online scheme that proceeds the offloading on a time-slot basis. \nTo the best of our knowledge, we are the first to conduct systematic study on predictive offloading in fog systems. \nOur key results and main contributions are summarized as follows:\n\\begin{enumerate}\n\t\\item[$\\diamond$] \\textbf{Problem Formulation:} We formulate the problem of dynamic offloading and resource allocation as a stochastic optimization problem, aiming at minimizing the long-term time-average expectation of total power consumptions of fog tiers with queue stability guarantee.\n\t\\item[$\\diamond$] \\textbf{Algorithm Design:} \n\t\tThrough a non-trivial transformation, \n\t\twe decouple the problem into a series of subproblems over time slots. By exploiting their unique structures, we propose PORA, an efficient scheme that exploits predictive scheduling to make decisions in an online manner.\n\t\\item[$\\diamond$] \\textbf{Theoretical Analysis and Experimental Verification:} We conduct theoretical analysis and trace-driven simulations to evaluate the effectiveness of PORA. 
\n\t\tThe results show that PORA achieves a tunable power-latency tradeoff while effectively reducing the average latency with only mild-value of predictive information, even in the presence of prediction errors.\n\t\\item[$\\diamond$] \\textbf{New Degree of Freedom in the Design of Fog Computing Systems:} We systematically investigate the fundamental benefits of predictive offloading in fog computing systems, with both theoretical analysis and numerical evaluations.\n\\end{enumerate}\n\n\nWe organize the rest of the paper as follows. \nSection \\ref{sec: related work} discusses the related work. \nNext, in Section \\ref{sec: motivating example}, we provide an example that motivates our design for dynamic offloading and resource consumption in fog computing systems. \nSection \\ref{sec: model} presents the system model and problem formulation, followed by the algorithm design of PORA and performance analysis in Section \\ref{sec: algorithm}.\nSection \\ref{sec: simulation} analyzes the results from trace-driven simulations, while Section \\ref{sec: conclusion} concludes the paper. \n\n\n\\section{Related Work}\\label{sec: related work}\n\nIn recent years, a series of works have been proposed to optimize the performance fog computing systems from various aspects \\cite{xiao2017qoe, taneja2017resource, chen2019dynamic, wang2019cooperative, liu2018socially, misra2019detour, lei2019joint, mao2016power, chen2019energy, gao2019dynamic, zhang2019near}.\nAmong such works, the most related are those focusing on the design of effective offloading schemes. \nFor example, \nby adopting alternating direction method of multipliers (ADMM) methods,\nXiao \\textit{et al.}\\cite{xiao2017qoe} and Wang \\textit{et al.}\\cite{wang2019cooperative} proposed two offloading schemes for cloud-aided fog computing systems to minimize average task duration and average service response time under different energy constraints, respectively. \nLater, Liu \\textit{et al.} \\cite{liu2018socially} took the social relationships among IoT users into consideration and developed a socially aware offloading scheme by advocating game theoretic approaches. \nMisra \\textit{et al.} \\cite{misra2019detour} studied the problem in software-defined fog computing systems and proposed a greedy heuristic scheme to conduct multi-hop task offloading with offloading path selection. \nLei \\textit{et al.} \\cite{lei2019joint} considered the joint minimization of delay and power consumption over all IoT devices; they formulated the problem under the settings of continuous-time Markov decision process and solved it via approximate dynamic programming techniques.\nThe above works, despite their effectiveness, generally assume the availability of the statistical information on task arrivals in the systems which is usually unattainable in practice with highly time-varying system dynamics \\cite{zhang2017resource}.\n\n\n\nIn the face of such uncertainties, a number of works have applied stochastic optimization methods such as Lyapunov optimization techniques to online and dynamic offloading scheme design \n\\cite{mao2016power, chen2019energy, gao2019dynamic, zhang2019near}. 
\nFor instance, Mao \\textit{et al.}\\cite{mao2016power} investigated the tradeoff between the power consumption and execution delay, then developed a dynamic offloading scheme for energy-harvesting-enabled IoT devices.\nChen \\textit{et al.} \\cite{chen2019energy} designed an adaptive and efficient offloading scheme to minimize the transmission energy consumption with queueing latency guarantee.\nGao \\textit{et al.} \\cite{gao2019dynamic} investigated efficient offloading and social-awareness-aided network resource allocation for device-to-device-enabled (D2D-enabled) IoT users. \nZhang \\textit{et al.} \\cite{zhang2019near} designed an online rewards-optimal scheme for the computation offloading of energy harvesting-enabled IoT devices based on Lyapunov optimization and Vickrey-Clarke-Groves auction.\nDifferent from such works that focus on fog computing systems with flat or two-tiered architectures, \nour solution is applicable to general multi-tiered fog computing systems with time-varying wireless channel states and unknown traffic statistics. \nMoreover, to the best of our knowledge, \nour solution is also the first to proactively leverage the predicted traffic information to optimize the system performance with theoretical guarantee. \nWe are also the first to investigate the fundamental benefits of predictive offloading in fog computing systems.\nWe compare our work with the above mentioned works \nin TABLE \\ref{table: related works}.\n\n\n \n\\section{Motivating Example}\\label{sec: motivating example}\n\nIn this section, we provide a motivating example to show the potential power-latency tradeoff in multi-tiered fog computing systems. \nThe objective is to achieve low power consumptions and short average workload latency (in the unit of packets). \n\nFigure \\ref{figure: motivating example} shows an instance of time-slotted fog computing system with two fog tiers, \\textit{i.e.}, edge fog tier and central fog tier. \nWithin each fog tier resides one fog node, \\textit{i.e.}, an edge fog node (EFN) in edge fog tier and a central fog node (CFN) in central fog tier. \nThe EFN connects to the CFN via a wireless link, while the CFN connects to the cloud data center over wired links. \nEach fog node maintains one queue to store packets. \nFigure \\ref{figure: motivating example}(a) shows that during time slot $t_{0}$, both the EFN and the CFN store $8$ packets in their queues. \n\nWe assume that each fog node sticks to one policy all the time to handle packets, \\textit{i.e.}, either \\textit{processing packets locally} or \\textit{offloading them to its next tier}.\nThe local processing capacities of EFN and CFN are $1$ and $8$ packets per time slot, respectively.\nThe transmission capacities from EFN to CFN and from CFN to cloud are $4$ and $5$ packets per time slot, respectively. \nThe power consumption is assumed linearly proportional to the number of processed\/transmitted packets. \nIn particular, processing one packet locally consumes $1$ mW power, while transmitting one packet over wireless link consumes $0.5$ mW. \nWe ignore the processing latency in the cloud due to its powerful processing capacity. \n\nTABLE \\ref{table: motivating example} lists the total power consumptions and average packet latencies under all four possible settings. \nFigures \\ref{figure: motivating example}(b)-\\ref{figure: motivating example}(d) show the case when EFN sticks to offloading and CFN sticks to local processing. 
\nIn time slot $(t_{0}+1)$, EFN offloads four packets to CFN at its full transmission capacity, while CFN processes all the eight packets locally. \nIn time slot $(t_{0}+2)$, EFN offloads the rest four packets to CFN; meanwhile, CFN locally processes the four packets that arrive in previous time slot. \nIn time slot $(t_{0}+3)$, CFN finishes processing the rest four packets. \nIn this case, the system consumes $16$ mW power in local processing and $4$ mW power in transmission, with an average packet latency of $1.75$ time slots. \n\n\\begin{figure}[!t]\n\t\\centering\n\t\\includegraphics[scale=0.44]{figures1\/motivatingExample}\n\t\\caption{Motivating example of dynamic offloading and resource consumption in multi-tiered fog computing systems.}\n\t\\label{figure: motivating example}\n\\end{figure}\n\n\\begin{table}[!h]\n\\centering\n\\caption{Performance under different offloading policies}\n\\label{table: motivating example}\n\\begin{tabular}{|c|c|c|c|}\n\\hline\nPolicy of & Policy of & Total Power & Average Packet \\\\ \nEFN & CFN & Consumptions (mW) & Latency (time slot) \\\\ \\hline\nLocal & Local & 16 & 2.75 \\\\ \\hline\nLocal & Offload & 8 & 2.9375 \\\\ \\hline\nOffload & Local & 20 & 1.75 \\\\ \\hline\nOffload & Offload & 4 & 2.125 \\\\ \\hline\n\\end{tabular}\n\\end{table}\n\nFrom TABLE \\ref{table: motivating example}, we conclude that: \nFirst, when EFN sticks to offloading and CFN sticks to local processing, the system achieves the lowest average packet latency of $1.75$ slots but the maximum power consumption of $20$mW. \nSecond, with the same offloading policy on EFN, there is a tradeoff between the total power consumptions and the average packet latency when CFN sticks to different policies. \nThe reason is that offloading to the cloud can not only reduce power consumptions but also prolong latency as well. \nThird, when CFN sticks to local processing, there is a power-latency tradeoff with different policies at EFN, in that offloading to CFN can induce lower processing latency but at the cost of even higher power consumption for wireless transmissions.\n\n\\section{Model and Problem Formulation}\n\\label{sec: model}\n\nWe consider a multi-tiered fog computing system, as shown in Figure \\ref{figure: hierarchical fog system}.\nThe system evolves over time slots indexed by $t \\in \\{0, 1, 2,...\\}$.\nEach time slot has a length of $\\tau_{0}$.\nInside the edge fog tier (EFT) are a set of edge fog nodes (EFNs) that offer low-latency access to IoT devices.\nOn the other hand, the central fog tier (CFT) comprises of central fog nodes (CFNs) with greater processing capacities than EFNs.\nWe assume that the workload on each EFN can be offloaded to and processed by any of its accessible CFNs, and that each CFN can offload its workload to the cloud.\nIn our model, we do not consider the power consumptions and latencies within the cloud. We mainly focus on the power consumptions and latencies within fog tiers, as shown in TABLE \\ref{table: model}. First, the power consumptions we consider include two parts: processing power and transmit power. The processing power consumption is induced by the workload processing on both EFT and CFT. The transmit power is induced by the transmissions from EFT to CFT. We do not consider the transmit power consumption from CFT to cloud because we assume that the CFT communicates with the cloud through wireline connections. Second, the latencies we consider include three parts: queueing latency, processing latency and transmit latency. 
We focus on the queueing latency on both EFT and CFT. We assume that the workload processing in each time slot can be completed by the end of the same time slot, and then we can ignore the processing latency. Since the EFT communicates with the CFT through high-speed wireless connections and the CFT communicates with the cloud through high-speed wireline connections, we assume that transmission latencies from both EFT to CFT and CFT to Cloud are negligible.\n\n\\begin{table}[!h]\n\\centering\n\\caption{Performance Metrics in Our Model}\n\\label{table: model}\n\\begin{tabular}{|p{1.2cm}<{\\centering}|p{1.1cm}<{\\centering}|p{0.9cm}<{\\centering}|p{1.0cm}<{\\centering}|p{1.1cm}<{\\centering}|p{0.9cm}<{\\centering}|}\n\\hline\n\\multirow{2}{*}{} & \\multicolumn{2}{c|}{Power Consumption} & \\multicolumn{3}{c|}{Latency} \\\\ \\cline{2-6} \n& Processing & Transmit & Queueing & Processing & Transmit \\\\ \\hline\nEFT & \\checkmark & & \\checkmark & & \\\\ \\hline\nEFT2CFT & & \\checkmark & & & \\\\ \\hline\nCFT & \\checkmark & & \\checkmark & & \\\\ \\hline\nCFT2Cloud & & & & & \\\\ \\hline\n\\end{tabular}\n\\end{table}\n\n\nIn the following, we first introduce the basic settings in Section \\ref{subsec: basic setting}, then elaborate the queueing models in Section \\ref{subsec: local queue}. \nNext, we define the optimization objective in Section \\ref{subsec: power} while proposing the problem formulation in Section \\ref{subsec: formulation}.\nWe summarize the key notations in TABLE \\ref{table: key notations}.\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\includegraphics[scale=0.35]{figures1\/edgeCentralFog}\n\t\\caption{An example of fog computing systems with two fog tiers.}\n\t\\label{figure: hierarchical fog system}\n\\end{figure}\n\n\\begin{table}[!t]\n\\renewcommand{\\arraystretch}{1.3}\n\\caption{Key notations}\n\\label{table: key notations}\n\\centering\n\\begin{tabular}{p{0.89cm} l}\n \\hline\\hline\n Notation & Description \\\\ \\hline\n $\\tau_{0}$ & Length of each time slot \\\\ \\hline\n $\\mathcal{N}$ & $\\mathcal{N}$ is the set of EFNs with $|\\mathcal{N}|\\triangleq N$ \\\\ \\hline\n $\\mathcal{M}$ & $\\mathcal{M}$ is the set of CFNs with $|\\mathcal{M}|\\triangleq M$ \\\\ \\hline\n $\\mathcal{N}_{j}$ & Set of accessible EFNs from CFN $j$ \\\\ \\hline \n $\\mathcal{M}_{i}$ & Set of accessible CFNs from EFN $i$ \\\\ \\hline\n\t$A_{i}(t)$ & Amount of workload arriving to EFN $i$ in time slot $t$ \\\\ \\hline\n\t$\\lambda_{i}$ & Average workload arriving rate on EFN $i$, $\\lambda_{i}\\triangleq \\mathbb{E}\\{A_{i}(t)\\}$ \\\\ \\hline\n\t$W_{i}$ & Prediction window size of EFN $i$ \\\\ \\hline\n\t$A_{i,-1}(t)$ & Arrival queue backlog of EFN $i$ in time slot $t$ \\\\ \\hline\n\t\\multirow{2}*{$A_{i,w}(t)$} & Prediction queue backlog of EFN $i$ in time slot $t$, such that \\\\ \n\t& $0\\leq w\\leq W_{i}-1$ \\\\ \\hline\n\t$Q_{i}^{(e,a)}(t)$ & Integrate queue backlog of EFN $i$ in time slot $t$ \\\\ \\hline\n\t$Q_{i}^{(e,l)}(t)$ & Local processing queue backlog of EFN $i$ in time slot $t$ \\\\ \\hline\n\t$Q_{i}^{(e,o)}(t)$ & Offloading queue backlog of EFN $i$ in time slot $t$ \\\\ \\hline\n\t$b_{i}^{(e,l)}(t)$ & Amount of workload to be sent to $Q_{i}^{(e,l)}(t)$ in time slot $t$ \\\\ \\hline\n\t$b_{i}^{(e,o)}(t)$ & Amount of workload to be sent to $Q_{i}^{(e,o)}(t)$ in time slot $t$ \\\\ \\hline\n\t$f_{i}^{(e)}(t)$ & CPU frequency of EFN $i$ in time slot $t$ \\\\ \\hline\n\t$H_{i,j}(t)$ & Wireless channel gain between EFN $i$ and CFN $j$ \\\\ \\hline\n\t$p_{i,j}(t)$ & Transmit power 
from EFN $i$ to CFN $j$ in time slot $t$ \\\\ \\hline\n\t$R_{i,j}(t)$ & Transmit rate from EFN $i$ to CFN $j$ in time slot $t$ \\\\ \\hline\n\t$Q_{j}^{(c,a)}(t)$ & Arrival queue backlog of CFN $j$ in time slot $t$ \\\\ \\hline\n\t$Q_{j}^{(c,l)}(t)$ & Local processing queue backlog of CFN $j$ in time slot $t$ \\\\ \\hline\n\t$Q_{j}^{(c,o)}(t)$ & Offloading queue backlog of CFN $j$ in time slot $t$ \\\\ \\hline\n\t$b_{j}^{(c,l)}(t)$ & Amount of workload to be sent to $Q_{j}^{(c,l)}(t)$ in time slot $t$ \\\\ \\hline\n\t$b_{j}^{(c,o)}(t)$ & Amount of workload to be sent to $Q_{j}^{(c,o)}(t)$ in time slot $t$ \\\\ \\hline\n\t$f_{j}^{(c)}(t)$ & CPU frequency of CFN $j$ in time slot $t$ \\\\ \\hline\n\t$P(t)$ & Total power consumptions in time slot $t$ \\\\ \\hline\n\t\\hline\n\\end{tabular}\n\\end{table}\n\n\n\\subsection{Basic Settings}\\label{subsec: basic setting}\n\nThe fog computing system consists of $N$ EFNs in EFT and $M$ CFNs in CFT. \nLet $\\mathcal{N}$ and $\\mathcal{M}$ be the sets of EFNs and CFNs.\nEach EFN $i$ has access to a subset of CFNs in their proximities. We denote the subset by $\\mathcal{M}_{i}\\subset\\mathcal{M}$. \nFor each CFN $j$, $\\mathcal{N}_{j}\\subset\\mathcal{N}$ denotes the set of its accessible EFNs. \nAccordingly, for any $i\\in\\mathcal{N}_{j}$ we have $j\\in\\mathcal{M}_{i}$. \n\n\\subsection{Queueing Model for Edge Fog Node}\\label{subsec: queue for EFN}\n\nDuring time slot $t$, there is an amount $A_{i}(t)$ ($\\le A_{\\text{max}}$ for some constant $A_{\\text{max}}$) of workload generated from IoT devices arrive to be processed on EFN $i$ such that $\\mathbb{E}\\{ A_{i}(t)\\} =\\lambda_{i}$.\nWe assume that such arrivals are independent over time slots and different EFNs. \nEach EFN $i$ is equipped with a learning module\\footnote{We do not specify any particular learning method in this paper, since our work aims to explore the \\textit{fundamental} benefits of predictive offloading. \nIn practice, one can leverage machine learning techniques such as time-series prediction methods \\cite{ahmed2010empirical} for workload arrival prediction.}\nthat can predict the future workload within a \\textit{prediction window} of size $W_{i}$, \\textit{i.e.} workload will arrive in the next $W_{i}$ time slots. The predicted arrivals are pre-generated and recorded, then arrive to EFN $i$ for pre-serving. Once the predicted arrivals actually arrive after pre-serving, they will be considered finished.\n\nOn each EFN, as Figure \\ref{figure: queue} shows, there are four types of queues:\nprediction queues with the backlogs as $A_{i,0}(t)$, ..., $A_{i,W_{i}-1}(t)$, \narrival queue $A_{i,-1}(t)$, \nlocal processing queue $Q_{i}^{(e,l)}(t)$, and offloading queue $Q_{i}^{(e,o)}(t)$.\nIn time slot $t$, prediction queue $A_{i,w}(t)$ ($0\\leq w\\leq W_{i}-1$) stores untreated workload that will arrive in time slot $(t+w)$. \nWorkload that actually arrives at EFN $i$ is stored in the arrival queue $A_{i,-1}(t)$, \nawaiting being forwarded to the local processing queue $Q_{i}^{(e,l)}(t)$ or the offloading queue $Q_{i}^{(e,o)}(t)$. 
\nWorkload in $Q_{i}^{(e,l)}(t)$ will be processed locally by EFN $i$, \nwhile workload in $Q_{i}^{(e,o)}(t)$ will be offloaded to CFNs in set $\\mathcal{M}_{i}$.\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\includegraphics[scale=0.3]{figures1\/model}\n\t\\caption{Queueing model of the system.}\n\t\\label{figure: queue}\n\\end{figure}\n\n\\subsubsection{Prediction Queues and Arrival Queues in EFNs}\n\nWithin each time slot $t$, in addition to the current arrivals in the arrival queue, EFN $i$ can also forward future arrivals in the prediction queues.\nWe define $\\mu_{i,w}(t)$ as the amount of output workload from $A_{i,w}(t)$, for $w\\in\\{-1,0,...,W_{i}-1\\}$.\nSuch workload should be distributed to the local processing queue and offloading queue.\nWe denote the amounts of workloads to be distributed to the local processing queue and offloading queue as $b_{i}^{(e,l)}(t)$ and $b_{i}^{(e,o)}(t)$, respectively, such that\n\\begin{equation}\n\t0\\leq b^{(e,\\beta)}_{i}\\left(t\\right)\\leq b^{(e,\\beta)}_{i,\\text{max}},\\ \\forall \\beta\\in\\{l,o\\} \\label{constraint: EFN offloading decision 1}\n\\end{equation}\nwhere each $b^{(e,\\beta)}_{i,\\text{max}}$ is a positive constant.\nAs a result, we have\n\\begin{equation}\\label{constraint: pre-serve rates}\n\t\\sum_{w=-1}^{W_{i}-1}\\mu_{i,w}\\left(t\\right)= b^{(e,l)}_{i}\\left(t\\right)+b^{(e,o)}_{i}\\left(t\\right).\n\\end{equation}\n\nNext, we consider the queueing dynamics for different types of queues in EFN, respectively.\n\nRegarding $A_{i,w}(t)$, it is updated whenever pre-service is finished and the lookahead window moves one slot ahead at the end of each time slot. \nTherefore, we have\n\\begin{enumerate}[(i)]\n\t\\item If $w=W_{i}-1$, then\n\t\\begin{equation}\\label{update: fog node prediction queue update 1}\n\t\tA_{i,W_{i}-1}\\left(t+1\\right)=A_{i}\\left(t+W_{i}\\right).\n\t\\end{equation}\n\t\\item If $0\\leq w\\leq W_{i}-2$, then\n\t\\begin{equation}\\label{update: fog node prediction queue update 2}\n\t\tA_{i,w}(t+1)=[A_{i,w+1}(t)-\\mu_{i,w+1}(t)]^{+},\n\t\\end{equation}\n\\end{enumerate}\nwhere $[x]^{+}\\triangleq \\max\\{x,0\\}$ for $x \\in \\mathbb{R}$. \nIn time slot $(t+1)$, the amount of workload that will arrive after $(W_{i}-1)$ time slots is $A_{i}(t+W_{i})$ and it remains unknown until time slot $(t+1)$. \n\nRegarding the arrival queue $A_{i,-1}(t)$, it records the actual backlog of EFN $i$ with the update equation as follows:\n\\begin{equation}\\label{update: fog node true queue update}\n\tA_{i,-1}(t+1)\\!=\\![A_{i,-1}(t)-\\mu_{i,-1}(t)]^{+}\\!+\\![A_{i,0}(t)-\\mu_{i,0}(t)]^{+}.\n\\end{equation}\nNote that $\\mu_{i,-1}(t)$ denotes the amount of distributed workload that have already being in $A_{i,-1}(t)$. \n\nNext, we introduce an integrate queue with a backlog size as the sum of all prediction queues and the arrival queue on EFN $i$, denoted by $Q_{i}^{\\left(e,a\\right)}\\left(t\\right)\\triangleq \\sum_{w=-1}^{W_{i}-1}A_{i,w}\\left(t\\right)$ . 
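To make this bookkeeping concrete, the following Python fragment sketches the one-slot update of the prediction and arrival queues of a single EFN. It is purely illustrative (the function and variable names are ours, not part of PORA), and it assumes that the service amounts $\\mu_{i,w}(t)$ have already been chosen subject to (\\ref{constraint: pre-serve rates}); its last line also returns the integrate backlog $Q_{i}^{(e,a)}(t+1)$ as the sum of the updated queues.
\\begin{verbatim}
# Illustrative sketch (not part of PORA): one-slot update of the
# prediction queues A_{i,0..W-1} and the arrival queue A_{i,-1} of a
# single EFN, following the three update rules above. `A` and `mu` are
# dicts keyed by w = -1, 0, ..., W-1; `mu[w]` is the amount already
# chosen to be served from queue w in this slot, and `new_pred` is the
# newly predicted arrival A_i(t+W_i).
def efn_prediction_step(A, mu, new_pred, W):
    A_next = {}
    # Arrival queue: keeps its unserved backlog and absorbs the unserved
    # part of the prediction queue whose workload actually arrives now.
    A_next[-1] = max(A[-1] - mu[-1], 0.0) + max(A[0] - mu[0], 0.0)
    # Intermediate prediction queues: the lookahead window shifts by one.
    for w in range(0, W - 1):
        A_next[w] = max(A[w + 1] - mu[w + 1], 0.0)
    # Farthest prediction queue: refilled by the newly predicted arrival.
    A_next[W - 1] = new_pred
    # Integrate backlog Q_i^{(e,a)}(t+1): the sum of all these queues.
    return A_next, sum(A_next.values())
\\end{verbatim}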
\nUnder \\textit{fully-efficient} \\cite{huang2016predictive} service policy, $Q_{i}^{\\left(e,a\\right)}\\left(t\\right)$ is updated as\n\\begin{multline}\\label{update: prediction sum queue update}\n\tQ_{i}^{\\left(e,a\\right)}\\left(t+1\\right)=[Q_{i}^{\\left(e,a\\right)}\\left(t\\right)-(b^{(e,l)}_{i}\\left(t\\right)+b^{(e,o)}_{i}\\left(t\\right))]^{+}\\\\\n\t+A_{i}\\left(t+W_{i}\\right).\n\\end{multline}\nThe input of integrate queue $Q_{i}^{(e,a)}(t)$ consists of the predicted workload that will arrive at EFN $i$ in time slot $(t+W_{i})$, while its output consists of workloads being forwarded to the local processing queue and the offloading queue. \nNote that $b^{(e,l)}_{i}(t)+b^{(e,o)}_{i}(t)$ is the output capacity of integrate queue $Q_{i}^{(e,a)}(t)$ in time slot $t$. If the capacity is larger than the queue backlog size, the true output amount will be smaller than $b^{(e,l)}_{i}(t)+b^{(e,o)}_{i}(t)$.\n \n\n\\subsubsection{Offloading Queues in EFNs}\n\nIn time slot $t$, workload in queue $Q^{(e,o)}_{i}(t)$ will be offloaded to CFNs in set $\\mathcal{M}_{i}$. \nThe transmission capacities are determined by the transmit power decisions $(p_{i,j}(t))_{j\\in\\mathcal{M}_{i}}$, where $p_{i,j}(t)$ is the transmit power from EFN $i$ to CFN $j$. \nThe transmit power is nonnegative and the total transmit power of each EFN is upper bounded, \\textit{i.e.}, \n\\begin{align}\n\t&p_{i,j}\\left(t\\right)\\geq0,\\ \\forall i\\in\\mathcal{N},j\\in\\mathcal{M}_{i}\\text{ and }t, \\label{constraint: power allocation 1} \\\\\n\t&\\sum_{j\\in\\mathcal{M}_{i} }p_{i,j}\\left(t\\right)\\leq p_{i,\\text{max}},\\ \\forall i\\in\\mathcal{N}\\text{ and }t.\\label{constraint: power allocation 2}\n\\end{align}\nAccording to Shannon's capacity formula \\cite{gallager2008principles}, the transmission capacity from EFN $i$ to CFN $j$ is\n\\begin{equation}\\label{equation: offload rate to central fog}\n\tR_{i,j}(t)\\!\\triangleq\\!\\hat{R}_{i,j}(p_{i,j}(t))\\!=\\!\\tau_{0}B\\log_{2}\\left(1\\!+\\!\\frac{p_{i,j}(t)H_{i,j}(t)}{N_{0}B}\\right),\n\\end{equation}\nwhere $\\tau_{0}$ is the length of each time slot, $B$ is the channel bandwidth, $H_{i,j}(t)$ is the wireless channel gain between EFN $i$ and CFN $j$, and $N_{0}$ is the system power spectral density of the additive white Gaussian noise.\nNote that $H_{i,j}(t)$ is an uncontrollable environment state with positive upper bound $H_{\\text{max}}$.\nWe do not consider the interferences among fog nodes and tiers. \nBy adjusting the transmit power $p_{i,j}(t)$, we can offload different amounts of workload from EFN $i$ to CFN $j$ in time slot $t$. \nAccordingly, the update equation of offloading queue $Q^{(e,o)}_{i}(t)$ is\n\\begin{equation}\\label{update: edge offloading queue update}\n\tQ^{(e,o)}_{i}\\left(t+1\\right)\\leq [Q^{(e,o)}_{i}\\left(t\\right)\\!-\\!\\sum_{j\\in\\mathcal{M}_{i}}R_{i,j}\\left(t\\right)]^{+}+b^{(e,o)}_{i}\\left(t\\right),\n\\end{equation}\nwhere $\\sum_{j \\in \\mathcal{M}_{i}} R_{i,j}(t)$ is the total transmission capacity to EFN $i$ in time slot $t$.\nThe inequality here means that the actual arrival of $Q_{i}^{(e,o)}(t)$ may be less than $b_{i}^{(e,o)}(t)$, because $b^{(e,o)}_{i}(t)$ is the transmission capacity from integrate queue $Q_{i}^{(e,a)}(t)$ to offloading queue $Q_{i}^{(e,o)}(t)$ instead of the amount of truly transmitted workload. 
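\n\nFor concreteness, the following short sketch evaluates the transmission capacity in (\\ref{equation: offload rate to central fog}); the numerical values used are purely illustrative (they are not prescribed by the model) and only serve to show how $R_{i,j}(t)$ scales with the transmit power $p_{i,j}(t)$.\n\\begin{verbatim}\nimport math\n\n# powers in mW, bandwidth in Hz, tau0 in seconds; the channel gain\n# value below is an arbitrary illustrative number\ndef transmit_capacity(p_mw, gain, tau0=1.0, B_hz=2e6, N0_mw=10 ** (-17.4)):\n    snr = p_mw * gain / (N0_mw * B_hz)\n    return tau0 * B_hz * math.log2(1.0 + snr)  # bits per time slot\n\nprint(transmit_capacity(p_mw=100.0, gain=1e-8))\n\\end{verbatim}\n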
\nRecall that we assume the transmission latency from EFT to CFT is negligible compared to the length of each time slot, the workload transmission in each time slot can be accomplished by the end of that time slot.\n\n\\subsection{Queueing Model for Central Fog Node}\\label{subsec: queue for CFN}\n\nFigure \\ref{figure: queue} also shows the queueing model on CFN.\nEach CFN $j\\in\\mathcal{M}$ maintains three queues: \nan arrival queue $Q_{j}^{(c,a)}(t)$, a local processing queue $Q_{j}^{(c,l)}(t)$, and an offloading queue $Q_{j}^{(c,o)}(t)$. Similar to EFNs, workload offloaded from the EFT will be firstly stored in the arrival queue,\nthen distributed to $Q_{j}^{(c,l)}(t)$ for local processing and to $Q_{j}^{(c,o)}(t)$ for further offloading.\n\n\n\\subsubsection{Arrival Queues in CFNs}\n\nThe arrivals on CFN $j$ consist of \nworkloads offloaded from EFNs in the set $\\mathcal{N}_{j}$. \nWe denote the amounts of workloads distributed to the local processing queue and offloading queue in time slot $t$ as $b_{j}^{(c,l)}(t)$ and $b_{j}^{(c,o)}(t)$, respectively, such that\n\\begin{equation}\n\t0\\leq b^{(c,\\beta)}_{j}\\left(t\\right)\\leq b^{(c,\\beta)}_{j,\\text{max}},\\ \\forall \\beta\\in\\{l,o\\}, \\label{constraint: CFN offloading decision 1} \n\\end{equation}\nwhere each $b^{(c,\\beta)}_{j,\\text{max}}$ is a positive constant. Accordingly, $Q_{j}^{(c,a)}(t)$ is updated as follows:\n\\begin{multline}\\label{update: central arrival queue update}\n\tQ_{j}^{(c,a)}(t+1)\\\\\n\t\\leq [Q_{j}^{(c,a)}(t)-(b_{j}^{(c,l)}(t)+b_{j}^{(c,o)}(t))]^{+}\\!+\\!\\sum_{i\\in\\mathcal{N}_{j}}R_{i,j}(t).\n\\end{multline}\n\n\\subsubsection{Offloading Queues in CFNs}\n\nFor each CFN $j\\in\\mathcal{M}$, its offloading queue $Q^{(c,o)}_{j}(t)$ stores the workload to be offloaded to the cloud. \nWe define $D_{j}(t)$ as the transmission capacity of the wired link from CFN $j$ to the cloud during time slot $t$, \nwhich depends on the network state and is upper bounded by some constant $D_{\\text{max}}$ for all $j$ and $t$. Then we have the following update function for $Q^{(c,o)}_{j}(t)$:\n\\begin{equation}\\label{update: central offloading queue update}\n\tQ^{(c,o)}_{j}\\left(t+1\\right)\\leq [Q^{(c,o)}_{j}\\left(t\\right)-D_{j}\\left(t\\right)]^{+}+b^{(c,o)}_{j}\\left(t\\right).\n\\end{equation}\nNote that the amount of actually offloaded workload to the cloud is $\\min\\{Q_{j}^{(c,o)}(t),D_{j}(t)\\}$.\n\n\\subsection{Local Processing Queues on EFNs and CFNs}\\label{subsec: local queue}\n\nWe assume that all fog nodes are able to adjust their CPU frequencies in each time slot, by applying \\textit{dynamic voltage and frequency scaling} (DVFS) techniques\\cite{mao2017survey}. \nNext, we define $L_{k}^{(\\alpha)}$ as the number of CPU cycles that fog node $k\\in\\mathcal{N}\\cup\\mathcal{M}$ requires to process one bit of workload, where $\\alpha$ is an indicator of fog node $k$'s type ($\\alpha=e$ if $k$ is an EFN, and $\\alpha=c$ if $k$ is a CFN). \n$L_{k}^{(\\alpha)}$ is assumed constant and can be measured offline \\cite{miettinen2010energy}. 
\nTherefore, the local processing capacity of fog node $k$ is $f_{k}^{(\\alpha)}(t)\/L_{k}^{(\\alpha)}$.\nThe local processing queue on fog node $k$ evolves as follows:\n\\begin{equation}\\label{update: local processing queue update}\n\tQ_{k}^{(\\alpha,l)}(t+1)\\!\\leq \\![Q_{k}^{\\left(\\alpha,l\\right)}\\left(t\\right)\\!-\\tau_{0}f_{k}^{\\left(\\alpha\\right)}\\left(t\\right)\\!\/\\!L_{k}^{\\left(\\alpha\\right)}]^{+}\\!+b_{k}^{\\left(\\alpha,l\\right)}\\left(t\\right).\n\\end{equation}\nAll CPU frequencies are nonnegative and finite:\n\\begin{equation}\\label{constraint: CPU frequency}\n\t0\\leq f_{k}^{(\\alpha)}\\left(t\\right)\\leq f^{(\\alpha)}_{k,\\text{max}},\\ \\forall k\\in\\mathcal{N}\\cup\\mathcal{M}\\text{ and }t,\n\\end{equation}\nwhere each $f^{(\\alpha)}_{k,\\text{max}}$ is a positive constant. \n\n\\subsection{Power Consumptions}\\label{subsec: power}\n\nThe total power consumptions $P(t)$ of fog tiers in time slot $t$\nconsist of the processing power consumption and wireless transmit power consumption. \nGiven a local CPU with frequency $f$, its power consumption per time slot is $\\tau_{0}\\varsigma f^{3}$, where $\\varsigma$ is a parameter depending on the deployed hardware and is measurable in practice \\cite{kim2018dual}. \nThus $P(t)$ is defined as follows:\n\\begin{multline}\\label{equation: total fog power consumptions}\n\tP\\left(t\\right)\\triangleq\\hat{P}\\left(\\boldsymbol{f}\\left(t\\right),\\boldsymbol{p}\\left(t\\right)\\right)=\\sum_{i\\in\\mathcal{N}}\\tau_{0}\\varsigma(f_{i}^{\\left(e\\right)}\\left(t\\right))^{3}\\\\\n\t+\\sum_{j\\in\\mathcal{M}}\\tau_{0}\\varsigma(f_{j}^{\\left(c\\right)}\\left(t\\right))^{3}+\\sum_{i\\in\\mathcal{N}}\\sum_{j\\in\\mathcal{M}_{i}} \\tau_{0}p_{i,j}\\left(t\\right),\n\\end{multline}\nwhere $\\boldsymbol{f}(t)\\triangleq((f_{i}^{(e)}(t))_{i\\in\\mathcal{N}},(f_{j}^{(c)}(t))_{j\\in\\mathcal{M}})$ is the vector of all CPU frequencies, and $\\boldsymbol{p}(t)\\triangleq(\\boldsymbol{p}_{i}(t))_{i\\in\\mathcal{N}}$ in which $\\boldsymbol{p}_{i}(t)=(p_{i,j}(t))_{j\\in\\mathcal{M}_{i}}$ is the transmit power allocation of EFN $i$. \n\n\n\\subsection{Problem Formulation}\\label{subsec: formulation}\n\nWe define the long-term time-average expectation of total power consumptions $\\bar{P}\n$ and total queue backlog $\\bar{Q}$ as follows:\n\\begin{align}\n\t&~~~~~~~~~~~~~~\\bar{P}\\triangleq\\limsup_{T\\rightarrow\\infty}\\frac{1}{T}\\sum_{t=0}^{T-1}\\mathbb{E}\\left\\{ P\\left(t\\right)\\right\\}, \\label{definition: time average exp power}\\\\\n\t&\\bar{Q}\\triangleq\\limsup_{T\\rightarrow\\infty}\\frac{1}{T}\\sum_{t=0}^{T-1}\\sum_{\\beta\\in\\{ a,l,o\\} }(\\sum_{i\\in\\mathcal{N}}\\mathbb{E}\\{ Q_{i}^{\\left(e,\\beta\\right)}(t)\\}\\nonumber \\\\\n\t&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+\\sum_{j\\in\\mathcal{M}}\\mathbb{E}\\{ Q_{j}^{(c,\\beta)}(t)\\}) \\label{definition: time average exp backlog}.\n\\end{align}\nIn this paper, we aim to minimize the long-term time-average expectation of total power consumptions $\\bar{P}$, while ensuring the stability of all queues in the system, \\textit{i.e.}, $\\bar{Q}<\\infty$. 
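\n\nBefore stating the optimization problem, note that the per-slot cost in (\\ref{equation: total fog power consumptions}) can be evaluated directly from the decision variables; the snippet below is a minimal sketch of this computation, in which the helper name and the example numbers are illustrative rather than part of the model.\n\\begin{verbatim}\ndef total_power(f_efn, f_cfn, p, tau0=1.0, varsigma=1e-27):\n    # f_efn, f_cfn: lists of CPU frequencies in cycles per second\n    # p: dict mapping (i, j) to transmit power p_{i,j}(t) in watts\n    cpu = sum(tau0 * varsigma * f ** 3 for f in list(f_efn) + list(f_cfn))\n    radio = sum(tau0 * pw for pw in p.values())\n    return cpu + radio\n\n# two EFNs at 2 GHz, one CFN at 4 GHz, one active link at 0.2 W\nprint(total_power([2e9, 2e9], [4e9], {(0, 0): 0.2}))\n\\end{verbatim}\n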
\nThe problem formulation is given by\n\\begin{equation}\\label{problem: general}\n\\begin{array}{cl}\n\\underset{\\{\\boldsymbol{b}(t),\\boldsymbol{f}(t),\\boldsymbol{p}(t)\\}_{t}}{\\text{Minimize}}\n\t&\\displaystyle \\bar{P}\\\\\n\t\\text{Subject to}&\\displaystyle (\\ref{constraint: EFN offloading decision 1})\n\t(\\ref{constraint: power allocation 1})\n\t(\\ref{constraint: power allocation 2})(\\ref{constraint: CFN offloading decision 1})(\\ref{constraint: CPU frequency}),\\\\\n\t&\\displaystyle \\bar{Q}<\\infty.\n\\end{array}\n\\end{equation}\n\n\n\\section{Algorithm Design}\\label{sec: algorithm}\n\n\\subsection{Predictive Algorithm}\n\nTo solve problem (\\ref{problem: general}), we adopt Lyapunov optimization techniques\\cite{huang2016predictive}\\cite{neely2010stochastic} to decouple the problem into a series of subproblems over time slots. We show the details of this process in Appendix \\ref{appendix: pora design}. \nBy solving each of these subproblems during each time slot, \nwe propose PORA, an efficient and predictive scheme that conducts workload offloading in an online and distributed manner. \nWe show the pseudocode of PORA in Algorithm \\ref{algorithm: pora}. \nNote that symbol $\\alpha \\in \\{ e, c \\}$ indicates the type of fog node. \nSpecifically, for each fog node $k$, $\\alpha=e$ if $k$ is an EFN and $\\alpha=c$ if $k$ is a CFN.\n\\begin{algorithm}[t]\n\\caption{Predictive Offloading and Resource Allocation (PORA) in One Time Slot}\n\\label{algorithm: pora}\n\\begin{algorithmic}[1]\n \\STATE Initialize $\\boldsymbol{b}(t)\\leftarrow \\boldsymbol{0}$, $\\boldsymbol{f}(t)\\leftarrow\\boldsymbol{0}$, $\\boldsymbol{p}(t)\\leftarrow\\boldsymbol{0}$.\n \\FOR {each fog node $k\\in\\mathcal{N}\\cup\\mathcal{M}$}\n \\STATE \\%\\%\\textit{Make Offloading Decisions}\n \\IF {$Q_{k}^{(\\alpha,a)}(t)> Q_{k}^{(\\alpha,l)}(t)$}\n \\STATE Set $b_{k}^{(\\alpha,l)}(t)\\leftarrow b^{(\\alpha,l)}_{k,\\text{max}}$.\n \\ENDIF\n \\IF {$Q_{k}^{(\\alpha,a)}(t)> Q_{k}^{(\\alpha,o)}(t)$}\n \\STATE Set $b_{k}^{(\\alpha,o)}(t)\\leftarrow b^{(\\alpha,o)}_{k,\\text{max}}$. \n \\ENDIF\n \\STATE \\%\\%\\textit{Local CPU Resource Allocation}\n \\STATE Set $ f^{(\\alpha)}_{k}\\left(t\\right)\\leftarrow\\min\\{ \\sqrt{Q_{k}^{(\\alpha,l)}(t)\/3V\\varsigma L_{k}^{(\\alpha)}},f^{(\\alpha)}_{k,\\text{max}}\\}$.\n \\ENDFOR\n \\STATE \\%\\%\\textit{Transmit Power Allocation}\n \\FOR {each EFN $i\\in\\mathcal{N}$}\n \\STATE Set $\\lambda_{\\text{min}}\\leftarrow 0$.\n \\STATE Set $\\lambda_{\\text{max}}\\leftarrow \\max_{j\\in\\mathcal{M}_{i}}\\frac{(Q_{i}^{(e,o)}-Q_{j}^{(c,a)})H_{i,j}(t)}{N_{0}}-V$.\n\\WHILE{$\\lambda_{\\text{max}}-\\lambda_{\\text{min}}> \\varepsilon$}\n \\STATE \\%\\%\\textit{Water Filling with Bisection Method}\n \\STATE Set $\\lambda^{*}\\leftarrow (\\lambda_{\\text{min}}+\\lambda_{\\text{max}})\/2$. \n \\STATE Set $p_{i,j}(t)\\!\\leftarrow\\!B\\left[\\frac{Q_{i}^{(e,o)}(t)-Q_{j}^{(c,a)}(t)}{V+\\lambda^{*}}-\\frac{N_{0}}{H_{i,j}(t)}\\right]^{+}$.\n \\IF {$\\sum_{j\\in\\mathcal{M}_{i}}p_{i,j}(t)> p_{i,\\text{max}}$}\n \\STATE Set $\\lambda_{\\text{max}}\\leftarrow\\lambda^{*}$.\n \\ELSE\n \\STATE Set $\\lambda_{\\text{min}}\\leftarrow\\lambda^{*}$.\n \\ENDIF\n\\ENDWHILE\n\\ENDFOR\n \\STATE Enforce scheduling decisions $\\boldsymbol{b}(t)$, $\\boldsymbol{f}(t)$, and $\\boldsymbol{p}(t)$.\n\\end{algorithmic}\n\\end{algorithm}\nNext, we introduce PORA in detail.\n\n\n\\subsubsection{Offloading Decision}\n\nIn each time slot $t$, under PORA, each fog node $k\\in\\mathcal{N}\\cup\\mathcal{M}$ decides the amounts of workload scheduled to the local processing queue and the offloading queue, denoted by $b_{k}^{(\\alpha,l)}(t)$ and $b_{k}^{(\\alpha,o)}(t)$, respectively. \nSuch decisions are obtained by solving the following problem:\n\\begin{equation}\\label{problem: offloading decision}\n\\begin{array}{cl}\n\\underset{0\\leq b_{k}^{(\\alpha,\\beta)}\\leq b^{(\\alpha,\\beta)}_{k,\\text{max}}}{\\text{Minimize}}\n\t\\displaystyle \\left(Q^{(\\alpha,\\beta)}_{k}\\left(t\\right)-Q_{k}^{(\\alpha,a)}\\left(t\\right)\\right)b_{k}^{(\\alpha,\\beta)},\n\\end{array}\n\\end{equation}\nwhere $\\beta\\in\\{l,o\\}$. \nAccordingly, the optimal solution to (\\ref{problem: offloading decision}) is\n\\begin{equation}\\label{equation: optimal offload decision}\n\tb_{k}^{(\\alpha,\\beta)}\\left(t\\right)=\\begin{cases}\nb^{(\\alpha,\\beta)}_{k,\\text{max}}, & \\text{if }Q_{k}^{(\\alpha,\\beta)}\\left(t\\right)<Q_{k}^{(\\alpha,a)}\\left(t\\right),\\\\\n0, & \\text{otherwise}.\n\\end{cases}\n\\end{equation}\nIn other words, fog node $k$ forwards the maximum amount $b^{(\\alpha,\\beta)}_{k,\\text{max}}$ of workload to a queue only if that queue's backlog is smaller than the backlog of its arrival queue, which matches the \\textit{Make Offloading Decisions} step in Algorithm \\ref{algorithm: pora}.\n\n\\subsection{Performance Analysis}\n\nThe following theorem characterizes the tradeoff between power consumption and queue backlog achieved by PORA, where $P^{*}$ denotes the optimal value of problem (\\ref{problem: general}) and $P_{\\text{max}}$ denotes the maximum per-slot power consumption.\n\\begin{theorem}\\label{theorem: performance}\n\\textit{Under PORA, there exist constants $\\theta>0$ and $\\epsilon>0$ such that\n\\begin{equation*}\n\t\\bar{P}\\leq \\theta\/V+P^{*},\\ \\bar{Q}\\leq (\\theta+VP_{\\text{max}})\/\\epsilon,\n\\end{equation*}\nwhere $\\bar{P}$ and $\\bar{Q}$ are defined in (\\ref{definition: time average exp power}) and (\\ref{definition: time average exp backlog}), respectively.}\n\\end{theorem}\nThe proof is quite standard and hence omitted here.\n\n\\textbf{Remark:}\nBy \\textit{Little's} theorem\\cite{leon2017probability}, the average queue backlog size is proportional to the average queueing latency. \nTherefore, Theorem \\ref{theorem: performance} implies that by adjusting parameter $V$, PORA can achieve an $[O(1\/V),O(V)]$ power-latency tradeoff in the non-predictive case. \nFurthermore, the average power consumption $\\bar{P}$ approaches the optimum $P^{*}$ asymptotically as the value of $V$ increases to infinity. \n\n\\subsubsection{Latency Reduction}\nWe analyze the latency reduction induced by PORA under perfect prediction compared to the non-predictive case. 
\nIn particular, we denote the prediction window vector $(W_{i})_{i\\in\\mathcal{N}}$ by $\\boldsymbol{W}$ and the corresponding delay reduction by $\\eta(\\boldsymbol{W})$. \nFor each unit of workload on EFN $i$, let $\\pi_{i,w}$ denote the steady-state probability that it experiences a latency of $w$ time slots in $A_{i,-1}(t)$. \nWithout prediction, \nthe average latency on its \\textit{arrival queues} is $\n\td=\\sum_{i\\in\\mathcal{N}}\\lambda_{i}\\sum_{w\\geq1}w\\pi_{i,w}\/\\sum_{i\\in\\mathcal{N}}\\lambda_{i}$. Then we have the following theorem.\n\\begin{theorem}\\label{theorem: delay}\n\\textit{Suppose the system steady-state behavior depends only on the statistical behaviors of the arrivals and service processes. Then the latency reduction $\\eta(\\boldsymbol{W})$ is\n\\begin{multline}\\label{theorem 2: result 1}\n\t\\eta\\left(\\boldsymbol{W}\\right)\\\\\n\t=\\frac{\\sum_{i\\in\\mathcal{N}}\\lambda_{i}\\!\\left(\\!\\sum_{1\\leq w\\leq W_{i}}\\!w\\pi_{i,w}\\!+\\!W_{i}\\!\\sum_{w\\geq1}\\!\\pi_{i, w+W_{i}}\\right)}{\\sum_{i\\in\\mathcal{N}}\\lambda_{i}}.\n\\end{multline}\nFurthermore, if $d<\\infty$, as $\\boldsymbol{W}\\rightarrow\\infty$, \\textit{i.e.}, \nwith inifinite predictive information, we have\n\\begin{equation}\\label{theorem 2: result 2}\n\t\\lim_{\\boldsymbol{W}\\rightarrow\\infty}\\eta\\left(\\boldsymbol{W}\\right)=d.\n\\end{equation}\n}\n\\end{theorem}\n\nWe relegate the proof of Theorem \\ref{theorem: delay} to Appendix \\ref{proof: delay}.\n\n\\textbf{Remark:}\nTheorem \\ref{theorem: delay} implies that predictive offloading conduces to a shorter workload latency; \nin other words, with predicted information, PORA can break the barrier of $[O(1\/V),O(V)]$ power-latency tradeoff.\nFurthermore, the latency reduction induced by PORA is proportional to the inverse of the prediction window size, and approaches zero as prediction window sizes go to infinity. \nIn our simulations, we see that PORA can effectively shorten the average arrival queue latency with only mild-value of future information.\n\n\n\\subsection{Impact of Network Topology}\n\nFog computing systems generally proceed in wireless environments, thus the network topology of such systems is usually dynamic and may change over time slots.\nHowever, at the beginning of each time slot, the network topology is observed and deemed fixed by the end of the time slot. \nTherefore, in the following, we put the focus of our discussion on the impact of network topology within each time slot.\n\nRecall that in our settings, each EFN has access to only a subset of CFNs in its vicinity. For each EFN $i$, the subset of its accessible EFNs is denoted by $\\mathcal{M}_{i}$ with a size of $|\\mathcal{M}_{i}|$. From the perspective of graph theory, we can view the interconnection among fog nodes of different tiers as a directed graph, in which each vertex corresponds to a fog node and each edge indicates a directed connection between nodes. Hence, the value of $|\\mathcal{M}_{i}|$ can be regarded as the out-degree of EFN $i$, which is an important parameter of network topology that measures the number of directed connections originating from EFN $i$. \nDue to time-varying wireless dynamics, the out-degree of each fog node may vary over time slots; \nconsequentially, the resulting topology would significantly affect the system performance. 
In the following, we discuss such impacts under two channel conditions, respectively.\n\nOn the one hand, within each time slot, poor channel conditions (\\textit{e.g.} in terms of low SINR) would often lead to unreliable or even unavailable connections among fog nodes and hence a network topology with a relatively smaller out-degree of nodes. In this case, each fog node may have a very limited freedom to choose the best target node to offload its workloads, further leading to backlog imbalance among fog nodes or even overloading in its upper tier with a large cumulative queue backlog size. Besides, poor channel conditions may also require more power consumptions to ensure reliable communication between successive fog nodes.\n\nOn the other hand, within each time slot, good channel conditions allow each fog node to have a broader access to the fog nodes in its upper tier, resulting a network topology with a relatively larger out-degree of nodes. In this case, each fog node is able to conduct better decision-making with more freedom in choosing the fog nodes in its upper fog tier, thereby achieving a better tradeoff between power consumptions and backlog sizes.\n\n\\begin{table}[!t]\n\\centering\n\\begin{threeparttable}\n\\caption{Simulation Settings}\n\\label{table: simulation parameters}\n\\begin{tabular}{|c|c|}\n\\hline\nParameter & Value \\\\\n\\hline\n$B$ & $2$ MHz \\\\\n\\hline\n$H_{i,j}(t),\\forall i\\in\\mathcal{N},j\\in\\mathcal{M}$ & $24\\log_{10}d_{i,j}+20\\log_{10}5.8$+60 \\tnote{a} \\\\\n\\hline\n$N_{0}$ & $-174$ dBm\/Hz \\\\\n\\hline\n$P_{i,\\text{max}},\\forall i\\in\\mathcal{N}$ & $500$ mW \\\\\n\\hline\n$L^{(e)}_{i}\\forall i\\in\\mathcal{N}$, $L^{(c)}_{j}\\forall j\\in\\mathcal{M}$ & $297.62$ cycles\/bit \\\\\n\\hline\n$f^{(e)}_{i,\\text{max}},\\forall i\\in\\mathcal{N}$ & $4$ G cycles\/s \\\\\n\\hline\n$f^{(c)}_{j,\\text{max}},\\forall j\\in\\mathcal{M}$ & $8$ G cycles\/s \\\\\n\\hline\n$\\varsigma$ & $10^{-27}$ W$\\cdot$s$^{3}$\/cycle$^{3}$ \\\\\n\\hline\n$b^{(e,l)}_{i,\\text{max}},b^{(e,o)}_{i,\\text{max}},\\forall i\\in\\mathcal{N}$ & $6$ Mb\/s \\\\\n\\hline\n$b^{(c,l)}_{j,\\text{max}},b^{(c,o)}_{j,\\text{max}},\\forall j\\in\\mathcal{M}$ & $12$ Mb\/s \\\\\n\\hline\n$D_{j}(t),\\forall j\\in\\mathcal{M},t$ & $6$ Mb\/s \\\\\n\\hline\n\\end{tabular}\n\\begin{tablenotes}\\footnotesize\n\\item [a] $d_{i,j}$ is the distance between EFN $i$ and CFN $j$.\n\\end{tablenotes}\n\\end{threeparttable}\n\\end{table}\n\n\n\\subsection{Use Cases}\nIn practice, PORA can be applied as a theoretical framework to design the offloading schemes for fog computing systems under various use cases, such as public safety systems, intelligent transportation, and smart healthcare systems.\nFor example, in a public safety system, each street is usually deployed with multiple smart cameras (IoT devices). \nAt runtime, such smart cameras would upload real-time vision data to one of their accessible EFNs. Each EFN aggregates such data to extract or even analyze the instant road conditions within multiple streets. \nSuch EFNs can upload some of the workload to their upper-layered CFNs (each taking charge of one community consisting of several streets) with greater computing capacities. \nEach CFN can further offload the workload to the cloud via optical fiber links. \nFor latency-sensitive applications, the real-time vision data will be processed locally on EFNs or offloaded to CFNs. 
\nFor latency-insensitive applications with intensive computation demand, the data will be offloaded to the cloud through the fog nodes. \nPORA conduces to the design of dynamic and online offloading and resource allocation schemes to support such fog systems with various applications.\n\n\n\\section{Numerical Results}\\label{sec: simulation}\n\n\nWe conduct extensive simulations to evaluate PORA and its variants.\nThe parameter settings in our simulation are based on the commonly adopted wireless environment settings that have been used in \\cite{liu2017latency, du2017computation}.\nThe simulation is conducted on a MacBook Pro with 2.3 GHz Intel Core i5 processor and 8GB 2133 MHz LPDDR3 memory, and the simulation program is implemented using Python 3.7.\nThis section firstly presents the basic settings of our simulations, and then provides the key results under perfect and imperfect prediction, respectively.\n\n\n\n\\subsection{Basic Settings}\n\nWe simulate a hierarchal fog computing system with $80$ EFNs and $20$ CFNs. All EFNs have a uniform prediction window size $W$, which varies from $0$ to $30$. Note that $W=0$ refers to the case without prediction. \nFor each EFN $i$, its accessible CFN set $\\mathcal{M}_{i}$ is chosen uniformly randomly from the power set of the CFN set with size $|\\mathcal{M}_{i}|=5$. \nWe set the time slot length $\\tau_{0}=1$ second. \nDuring each time slot, workload arrives to the system in the unit of packets, each with a fixed size of $4096$ bits. \nThe packet arrivals are drawn from previous measurements \\cite{benson2010network}, \nwhere the average flow arrival rate is $538$ flows\/s, \nand the distribution of flow size has a mean of $13$ Kb. \nGiven these settings, the average arrival rate is about $7$ Mbps. \nAll results are averaged over $50000$ time slots. \nWe list all other parameter settings in TABLE \\ref{table: simulation parameters}.\n\n\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\includegraphics[width=0.8\\linewidth]{figures1\/decisions_V}\n\t\\caption{Offloading decisions when $W=10$.}\n\t\\label{figure: offloading decisions vs. V}\n\\end{figure}\n\n\n\\begin{figure}[!t]\n \\centering\n \\subfigure[Queue backlogs.]{\n \\label{subfig: backlog-W}\n \\includegraphics[width=0.8\\linewidth]{figures1\/backlog_W}}\n \\subfigure[Power consumptions.]{\n \\label{subfig: power-W}\n \\includegraphics[width=0.8\\linewidth]{figures1\/power_W}}\n \\caption{Performance of PORA vs. $W$ when $V=10^{11}$.}\n \\label{figure: performance vs. window size}\n\\end{figure}\n\n\\subsection{Evaluation with Perfect Prediction}\n\nUnder the perfect prediction settings, we evaluate how the values of parameter $V$ and prediction window size $W$ influence the performance of PORA, respectively.\n\n\n\\textbf{System Performance under Different Values of $V$:}\nFigure \\ref{figure: offloading decisions vs. V} shows the impact of parameter $V$ on the offloading decisions of PORA: \nWhen the value of $V$ is around $10^{10}$, the time-average amount of locally processed workload on EFNs reaches the bottom of the curve, \nwhile other offloading decisions induce the peak workload. \nThe reason is that the offloading decisions are not only determined by the value of $V$, \nbut also influenced by the queue backlog sizes.\n\nFigure \\ref{figure: performance vs. V} presents the impact of the value of $V$ on different types of queues and power consumptions in the system, respectively. 
\nAs the value of $V$ increases, we see a rising trend in the sizes of all types of queue backlogs, and a roughly falling trend in all types of power consumptions. \n\n\n\n\n\\begin{figure}[!t]\n \\centering\n \\subfigure[Queue backlogs.]{\n \\label{subfig: backlog vs. V}\n \\includegraphics[width=0.8\\linewidth]{figures1\/backlog_V}}\n \\subfigure[Power consumptions.]{\n \\label{subfig: power vs. V}\n \\includegraphics[width=0.8\\linewidth]{figures1\/power_V}}\n \\caption{Performance of PORA when $W=10$.}\n \\label{figure: performance vs. V}\n\\end{figure}\n\n\n\\begin{figure*}[!t]\n \\centering\n \\subfigure[Total queue backlogs.]{\n \\label{subfig: backlog-V-variant}\n \\includegraphics[width=0.4\\linewidth]{figures1\/backlog_V_variant}}\n \\subfigure[Total power consumptions.]{\n \\label{subfig: power-V-variant}\n \\includegraphics[width=0.4\\linewidth]{figures1\/power_V_variant}}\n \\caption{Performance of variants of PORA.}\n \\label{figure: performance v.s. variants}\n\\end{figure*}\n\n\\begin{figure*}[!t]\n \\centering\n \\subfigure[Total queue backlogs.]{\n \\label{subfig: backlog v.s. time}\n \\includegraphics[width=0.4\\linewidth]{figures1\/backlog_time}}\n \\subfigure[Total power consumptions.]{\n \\label{subfig: power v.s. times}\n \\includegraphics[width=0.4\\linewidth]{figures1\/power_time}}\n \\caption{Comparison between PORA and baselines.}\n \\label{figure: comparison}\n\\end{figure*}\n\n\n\\textbf{System Performance with Different Values of Prediction Window Size $W$:}\nFigures \\ref{subfig: backlog-W} and \\ref{subfig: power-W} show the system performance with the prediction window size $W$ varying from $0$ to $30$. \nWith perfect prediction, PORA effectively shortens the average queueing latencies on EFN arrival queues -- eventually close to zero with no extra power consumption and only a mild-value of prediction window size ($W=20$ in this case).\n\n\n\\textbf{PORA vs. PORA-$d$ (Low-Sampling Variant):}\nIn practice, since PORA requires to sample system dynamics across various fog nodes, it may incur considerable sampling overheads.\nBy adopting the idea of randomized load balancing techniques \\cite{mitzenmacher2001power}, we propose PORA-$d$, a variant of PORA that reduces the sampling overheads by \nprobing $d$ ($d\\in\\{1,2,3,4\\}$) \\footnote{When $d=1$, the scheme degenerates to uniform random sampling.} CFNs and conducting resource allocation on which are uniformly chosen for each EFN from its accessible CFN set. \n\nFigure \\ref{figure: performance v.s. variants} compares the performance of PORA with PORA-$d$. \nWe observe that PORA achieves the smallest queue backlog size. The result is reasonable since each EFN has access to 5 CFNs under PORA, more than the $d\\leq 4$ CFNs under PORA-$d$. As a result, each EFN has more chance to access to the CFNs with better wireless channel condition and processing capacity under PORA when compared with PORA-$d$. The observation that the queue backlog size increases as $d$ decreases further verifies our analysis. In fact, we can view $d$ as the degree of each EFN in the network topology. As $d$ decreases, the system performance degrades.\nHowever, when the value of $V$ is sufficiently large, PORA-$d$ achieves the similar power consumptions as PORA and the ratio of increment in the backlog size is small. 
For example, when $V=2\\times 10^{11}$, PORA-$4$ achieves $4.3$\\% larger backlog size than PORA, and PORA-$3$ achieves $10.9$\\% larger backlog size than PORA.\nIn summary, PORA-$d$ (when $d=2,3,4$) can reduce the sampling overheads by trading off only a little performance degradation under large $V$.\n\n\n\\textbf{Comparison of PORA and Baselines:}\nWe introduce four baselines to evaluate the performance of PORA: (1) NOL (No Offloading): All nodes in the EFT process packets locally. (2) O2CFT (Offload to CFT): All packets are offloaded to the CFT and processed therein. (3) O2CLOUD (Offload to Cloud): All packets are offloaded to the cloud. (4) RANDOM: Each fog node randomly chooses to offload each packet or process it locally with equal chance. \nNote that all above baselines are also assumed capable of pre-serving future workloads in the prediction window.\nFigure \\ref{figure: comparison} compares the instant total queue backlog sizes and power consumptions over time slots under the five schemes (PORA, NOL, O2CFT, O2CLOUD, RANDOM), \nwhere $W=10$ and $V\\in\\{10^{9},10^{11}\\}$.\n\n\nWe observe that scheme O2CLOUD achieves the minimum power consumptions, but incurs constantly increasing queue backlog sizes over time. \nThe reasons are shown as follows. \nOn one hand, in our settings, the mean power consumption for transmitting workload from EFT to CFT is smaller than the mean power consumption of processing the same amount of workload on fog nodes; under scheme O2CLOUD, only wireless transmit power is consumed and hence the minimum is achieved. \nOn the other hand, all the workload must travel through all fog tiers before being offloaded to the cloud, which results in network congestion within fog tiers and thus workload accumulation with increasing queue backlogs.\n\nAs Figure \\ref{figure: comparison} illustrates, PORA achieves the maximum power consumptions but the smallest backlog size when $V=10^{9}$. \nUpon convergence of PORA, the power consumptions under all these schemes reach the same level, but the differences between their queue backlog sizes become more obvious:\nPORA ($V=10^{9}$) reduces $96$\\% of the queue backlog when compared with NOL and RANDOM.\nThe results demonstrate that with the appropriate choice of the value of $V$,\nPORA can achieve less latency than the four baselines under the same power consumptions.\n\n\\subsection{Evaluation with Imperfect Prediction}\n\nIn practice, prediction errors are inevitable. \nHence, we investigate the performance of PORA in the presence of prediction errors \\cite{chen2017timely}. \nParticularly, we consider two kinds of prediction errors: false alarm and missed detection.\nA packet is falsely alarmed if it is predicted to arrive but it does not arrive actually.\nA packet is missed to be detected if it will arrive but is not predicted.\nWe assume that all EFNs have the uniform false-alarm rate $p_{1}$ and missed-detection rate $p_{2}$.\nIn our simulation, we consider different pairs of values of $(p_{1},p_{2})$: $(0.0, 0.0)$, $(0.05, 0.05)$, $(0.5, 0.05)$, $(0.05, 0.25)$, and $(0.5, 0.25)$. \nNote that $(p_{1},p_{2})=(0.0,0.0)$ corresponds to the case when the prediction is perfect. 
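\n\nSuch errors can be injected at the packet level in more than one way; the sketch below shows one simple possibility, in which each truly arriving packet is missed with probability $p_{2}$ and a fraction $p_{1}$ of the announced packets is falsely alarmed. The function name and the mechanism are shown only for illustration.\n\\begin{verbatim}\nimport random\n\ndef noisy_prediction(actual, p1, p2, rng=random.Random(0)):\n    # each truly arriving packet is predicted with probability 1 - p2\n    detected = sum(1 for _ in range(actual) if rng.random() > p2)\n    # add false alarms so that a fraction p1 of all announced packets\n    # never actually arrives\n    false_alarms = int(round(detected * p1 / (1.0 - p1))) if p1 < 1 else 0\n    return detected + false_alarms  # number of packets announced\n\\end{verbatim}\n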
\n\n\\begin{figure}[!t]\n \\centering\n \\subfigure[Total queue backlogs.]{\n \\label{subfig: backlog under imperfect prediction}\n \\includegraphics[width=0.8\\linewidth]{figures1\/backlog_V_error}}\n \\subfigure[Total power consumptions.]{\n \\label{subfig: power under imperfect prediction}\n \\includegraphics[width=0.8\\linewidth]{figures1\/power_V_error}}\n \\caption{Performance of PORA under imperfect prediction.}\n \\label{figure: performance under imperfect prediction}\n\\end{figure}\n\nFigure \\ref{figure: performance under imperfect prediction} presents the results under prediction window size $W=10$.\nWe observe when $V\\leq 7.5\\times 10^{10}$, both the total queue backlog sizes and power consumptions under imperfect prediction are larger than that under perfect prediction. The reason for this performance degradation is twofold:\nFirst, arrivals that are missed to be detected cannot be pre-served, thus leading to larger queue backlog sizes.\nSecond, PORA allocates redundant resources to handle the falsely predicted arrivals, thus causing more power consumptions.\nAs the value of $V$ increases, this performance degradation becomes negligible. Taking the total queue backlog under $(p_{1},p_{2})=(0.25,0.5)$ as an example, when compared with the case under perfect prediction, it increases by $4.72\\%$ at $V=10^{11}$, and increases by $2.24\\%$ at $V=2\\times 10^{11}$. \nMoreover, there is no extra power consumption under imperfect prediction when $V\\geq 7.5\\times 10^{10}$ since PORA tends to reserve resources to reduce power consumptions under large $V$.\n\nIn summary, there will be performance degradation in both total queue backlog sizes and power consumptions in the presence of prediction errors. However, as the value of $V$ increases, this degradation decreases and becomes negligible.\nThough a large value of $V$ can improve the robustness of PORA and achieve small power consumptions, it brings long workload latencies. \nIn practice, the choice of the value of $V$ depends on how the system designer trades off all these criterions.\n\n\\section{Conclusion}\\label{sec: conclusion}\n\nIn this paper, we studied the problem of dynamic offloading and resource allocation with prediction in a fog computing system with multiple tiers. \nBy formulating it as a stochastic network optimization problem, we proposed PORA, an efficient online scheme that exploits predictive offloading to minimize power consumption with queue stability guarantee. \nOur theoretical analysis and trace-driven simulations showed that PORA achieves a tunable power-latency tradeoff, while effectively shortening latency with only mild-value of future information, even in the presence of prediction errors. 
\nAs for future work, our model can be further extended to more general settings such that the instant wireless channel states may be unknown by the moment of decision making or the underlying system dynamics is non-stationary.\n\n\n\n\n\n\n\n\n\n\n\n\\ifCLASSOPTIONcaptionsoff\n \\newpage\n\\fi\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \\label{sec:intro}\nThe recently proposed model of quantum gravity by Ho\\v{r}ava \\cite{horava1,horava2,horava3} has recently attracted much attention, and many aspects of it\n have been extensively analyzed, ranging from formal developments, cosmology, dark energy\n and dark matter, spherically symmetric solutions, gravitational waves, and its viability with observational constraints; for a full list of references see, e.g., \\cite{cinesi,Bom,Wang,Carloni}.\nSuch a theory admits Lifshitz's scale invariance: $\\mathbf{x} \\rightarrow b \\mathbf{x}, \\quad t \\rightarrow b^{q} t$, and, after this, it is referred to as Ho\\v{r}ava-Lifshitz (HL) gravity. Actually, it has anistropic scaling in the short distances domain (UV), since it is $q=3$, while isotropy is recovered at large distances (IR).\n\nOne of the key features of the theory is its good UV behavior, since it is power-counting renormalizable; for a discussion of the renormalizability beyond power counting arguments, see \\cite{Orla}. However, in its original formulation, it experiences some problems: for instance, it leads to a non-zero cosmological constant with the wrong sign, in order to be in agreement with the observations \\cite{Soda,horatius,visser1}. To circumvent these issues, it was suggested to abandon the principle of ``detailed balance'' \\cite{Calca1,Calca2}, initially introduced by Ho\\v{r}ava in his model to restrict the number of possible parameters. As a consequence, phenomenologically viable extensions of the theory were proposed \\cite{visser1,visser2}. It was also shown that HL gravity can reproduce General Relativity (GR) at large distances \\cite{pope,KS}; for other solutions non-asymptotically flat see \\cite{Cai1,Cai2}. However, there is still an ongoing discussion on the consistency of HL gravity, since it seems that modes arise which develop exponential instabilities at short distances, or become strongly coupled \\cite{padilla,blas}. Moreover, according to \\cite{Miao}, the constraint algebra does not form a closed structure. Perturbative instabilities affecting HL gravity have been pointed out in \\cite{Saridakis}.\n\nActually, it is important to stress that, up to now, in HL gravity the gravitational field is purely geometrical: in other words, the way matter has to be embedded still needs to be studied. Nevertheless, there are interesting vacuum solutions that can be studied, such as the static spherically symmetric solution found by Kehagias and Sfetsos (hereafter KS) \\cite{KS}. Such a solution is the analog of Schwarzschild solution of GR and, moreover, it asymptotically reproduces the usual behavior of Schwarzschild spacetime. It is interesting to point out that it is obtained without requiring the projectability condition, assumed in the original HL theory, while spherically symmetric solutions with the projectability condition are however available \\cite{tang,Wang}. {Nonetheless, because of its simplicity, it is possible to consider KS solution as toy model useful to better understand some phenomenological implications of HL gravity. 
Actually, in \\cite{Tib} it was shown that KS solution is in agreement with the classical tests of GR, while in a previous paper \\cite{HLIR} we studied the corrections to the general relativistic Einstein's pericentre precession determined by this solution and compared the theoretical predictions to\nthe latest determinations of the corrections to the standard Newtonian\/Einsteinian planetary perihelion precessions recently estimated with the EPM2008 ephemerides. We found that the KS dimensionless parameter is constrained from the bottom at $\\omega_0\\geq 10^{-12}-10^{-24}$ level depending on the planet considered.\n\nIn our analysis, we assumed that particles followed geodesics of KS metric: however, it is important to point out that this is true if matter is minimally and universally coupled to the metric, which is not necessarily true in HL gravity, where, as we said above, the role of matter has not been yet clarified. In this paper, starting from the same assumption, we focus on the effects induced by the examined solution on the orbital period $P_{\\rm b}$ of a test particle, on an extra solar system environment. We will explicitly work out the consequent correction $P_{\\omega_0}$ to the usual third Kepler law in Section \\ref{3kep}. In Section \\ref{osiris} we compare it with the observations of the transiting extrasolar planet HD209458b \\virg{Osiris}. {We point out that the resulting constraints are to be considered as preliminary and just order-of-magnitude figures because, actually, the entire data set of HD209458b should be re-processed again by explicitly modeling the effect of the KS gravity; however, this is outside the scopes of the present paper.} Section \\ref{conclu} is devoted to the conclusions.\n\\section{KS corrections to the third Kepler law}\\lb{3kep}\nAs shown in \\cite{HLIR},\nfrom \\cite{Brum}\n\\begin{eqnarray} \\ddot x^i &=& -\\frac{1}{2}c^2 h_{00,i} - \\frac{1}{2}c^2 h_{ik}h_{00,k}+h_{00,k}\\dot x^k\\dot x^i \\nonumber \\\\ &+& \\left(h_{ik,m}-\\frac{1}{2}h_{km,i}\\right)\\dot x^k\\dot x^m,\\ i=1,2,3,\\end{eqnarray}\nit is possible to obtain the following radial acceleration acting upon a test particle at distance $d$ from a central body of mass $M$\n\\eqi \\vec{A}_{\\omega_0}\\approx \\frac{4 (GM)^4}{\\omega_0 c^6 d^5}\\hat{d},\\lb{accel}\\eqf valid up to terms of order $\\mathcal{O}(v^2\/c^2)$.\nIts effect on the pericentre of a test particle have been worked out in \\cite{HLIR}; here we want to look at a different orbital feature affected by \\rfr{accel} which can be compared to certain observational determinations.\n\nThe mean anomaly is defined as\n\\eqi \\mathcal{M}\\doteq n(t-t_p);\\eqf in it $n=\\sqrt{GM\/a^3}$ is the Keplerian mean motion, $a$ is the semimajor axis and $t_p$ is the time of pericentre passage.\nThe anomalistic period $P_{\\rm b}$ is the time elapsed between two consecutive pericentre passages; for an unperturbed Keplerian orbit it is $P_{\\rm b}=2\\pi\/n$. Its modification due to a small perturbation of the Newtonian monopole can be evaluated with standard perturbative approaches. 
The Gauss equation for the variation of the mean anomaly is, in the case of a radial perturbation $A_d$ to the Newtonian monopole \\cite{Ber},\n\\eqi \\frac{d{\\mathcal{M}}}{dt}=n-\\rp{2}{na}A_d\\left(\\rp{d}{a}\\right)+\\rp{(1-e^2)}{nae}A_d\\cos f,\\lb{Gaus}\\eqf where $e$ is the eccentricity and $f$ is the true anomaly counted from the pericentre position.\nThe right-hand-side of \\rfr{Gaus} has to be valuated onto the unperturbed Keplerian orbit given by (see \\cite{Roy})\n\\eqi d=\\rp{a(1-e^2)}{1+e\\cos f}.\\eqf\nBy using (see \\cite{Roy})\n\\eqi df = \\left(\\rp{a}{d}\\right)^2(1-e^2)^{1\/2}d\\mathcal{M}\\eqf\nand\n\\eqi \\int_0^{2\\pi} (1+e\\cos f)^2\\left[2-\\rp{(1+e\\cos f)}{e}\\cos f\\right]df=\\pi\\left(1+\\rp{5}{4}e^2\\right),\\eqf it is possible to work out the correction to the Keplerian period due to \\rfr{accel}; it is\n\\eqi P_{\\omega_0}=\\rp{4\\pi (GM)^4\\left(1+\\rp{5}{4}e^2\\right)}{\\omega_0 c^6 n^3 a^6(1-e^2)^{5\/2}}=\\rp{4\\pi (GM)^{5\/2}\\left(1+\\rp{5}{4}e^2\\right)}{\\omega_0 c^6 a^{3\/2}(1-e^2)^{5\/2}}.\\lb{PHL}\\eqf\nNote that \\rfr{PHL} retains its validity in the limit $e\\rightarrow 0$ becoming equal to\n\\eqi P_{\\omega_0}\\rightarrow\\rp{4\\pi (GM)^{5\/2}}{\\omega_0 c^6 d^{3\/2}}\\lb{circo},\\eqf where $d$ represents now the fixed radius of the circular orbit. It turns out that \\rfr{circo} is equal to the expression that can be easily obtained by equating the centripetal acceleration $\\Omega^2 d$, where $\\Omega$ is the particle's angular speed, to the total gravitational acceleration $GM\/d^2 - 4(GM)^4\/\\omega_0c^6 d^5$ with the obvious assumption that the Newtonian monopole is the dominant term in the sum.\n\\section{Confrontation with the observations}\\lb{osiris}\nIn the scientific literature there is a large number of papers (see, e.g., \\cite{Tal,Wri,Ove,Jae,Rey,Ser1,Ser2,Ior,Bro,Peri,Mof,Adl,Page}) in which the authors use the third Kepler law to determine, or, at least, constrain un-modeled dynamical effects of mundane, i.e. due to the standard Newtonian\/Einsteinian laws of gravitation, or non-standard, i.e. induced by putative modified models of gravity.\nAs explained below, in many cases such a strategy has been, perhaps, followed in a self-contradictory way, so that the resulting constraints on, e.g., new physics, may be regarded as somewhat \\virg{tautologic}.\n\nLet us briefly recall that the orbital period $P_{\\rm b}$ of two point-like bodies of mass $m_1$ and $m_2$ is, according to the third Kepler law,\n\\eqi P^{\\rm Kep}=2\\pi\\sqrt{\\rp{a^3}{G{M}}},\\eqf\nwhere $a$ is the relative semi-major axis and ${M}=m_1+m_2$ is the total mass of the system.\nLet us consider an unmodeled dynamical effect which induces a non-Keplerian (NK) correction to the third Kepler law, i.e.\n\\eqi P_{\\rm b}= P^{\\rm Kep} + P^{\\rm NK},\\eqf\nwhere\n\\eqi P^{\\rm NK}=P^{\\rm NK}(M,a,e; p_j),\\eqf\nis the analytic expression of the correction to the third Kepler law in which $p_j$, $j=1,2,...$N are the parameters of the NK effect to be determined or constrained. Concerning standard physics, $P_{\\rm NK}$ may be due to the centrifugal oblateness of the primary, tidal distortions, General Relativity; however, the most interesting case is that in which $P_{\\rm NK}$ is due to some putative modified models of gravity. 
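\n\nIn the case at hand, the non-Keplerian correction is the one in \\rfr{PHL}; as a simple illustration, the short numerical sketch below (in Python) evaluates it in SI units for input values merely chosen to be of the order of those of the planetary system considered in the next Section, without any pretence of being fitted values.\n\\begin{verbatim}\nimport math\n\nG = 6.674e-11   # m^3 kg^-1 s^-2\nc = 2.998e8     # m/s\n\ndef P_omega0(M, a, e, omega0):\n    # correction to the Keplerian period derived above, SI units\n    num = 4.0 * math.pi * (G * M) ** 2.5 * (1.0 + 1.25 * e ** 2)\n    den = omega0 * c ** 6 * a ** 1.5 * (1.0 - e ** 2) ** 2.5\n    return num / den\n\n# M ~ 2.2e30 kg, a ~ 7e9 m, nearly circular orbit, omega_0 = 1e-18\nprint(P_omega0(M=2.2e30, a=7.0e9, e=0.0, omega0=1.0e-18))  # seconds\n\\end{verbatim}\n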
{As a first, relatively simple step to gain insights into the NK effect one can act as follows.}\nBy comparing the measured orbital period to the computed Keplerian one it is possible, in principle, to obtain preliminary information on the dynamical effect investigated from $\\Delta P\\doteq P_{\\rm b}-P^{\\rm Kep}$. {Actually, one should re-process the entire data set of the system considered by explicitly modeling the non-standard gravity forces, and simultaneously solving for one or more dedicated parameter(s) in a new global solution along with the other ones routinely estimated. Such a procedure would be, in general, very time-consuming and should be repeated for each models considered. Anyway, it is outside the scopes of the present paper, but it could be pursued in further investigations.}\n\nConcerning our simple approach, in order to meaningfully solve for $p_j$ in\n\\eqi\\Delta P=P^{\\rm NK}\\eqf\nit is necessary that\n\\begin{itemize}\n \\item In the system considered a measurable quantity which can be identified with the orbital period and directly measured independently of the third Kepler law itself, for example from spectroscopic or photometric measurements, must exist. This is no so obvious as it might seem at first sight; indeed, in a N-body system like, e.g., our solar system a directly measurable thing like an \\virg{orbital period} simply does not exist because the orbits of the planets are not closed due to the non-negligible mutual perturbations. Instead, many authors use values of the \\virg{orbital periods} of the planets which are retrieved just from the third Kepler law itself.\n Examples of systems in which there is a measured orbital period are many transiting exoplanets, binaries and, e.g, the double pulsar.\n Moreover, if the system considered follows an eccentric path one should be careful in identifying the measured orbital period with the predicted sidereal or anomalistic periods. A work whose authors are aware of such issues is \\cite{Capoz}.\n \n \n \\item The quantities entering $P^{\\rm Kep}$, i.e. the relative semimajor axis $a$ and the total mass $M$, must be known independently of the third Kepler law. Instead, in many cases values of the masses obtained by applying just the third Kepler law itself are used.\n Thus, for many exoplanetary systems the mass $m_1\\doteq M_\\bigstar$ of the hosting star should be taken from stellar evolution models and the associated scatter should be used to evaluate the uncertainty $\\delta M_\\bigstar$ in it, while for the mass $m_2=m_p$ of the planet a reasonable range of values should be used instead of straightforwardly taking the published value because it comes from the mass function which is just another form of the third Kepler law.\n {Some extrasolar planetary systems represent good scenarios because it is possible to know many of the parameters entering $P^{\\rm Kep}$ independently of the third Kepler law itself, thanks to the redundancy offered by the various techniques used.}\n \n\nSuch issues have been accounted for in several astronomical and astrophysical scenarios in, e.g., \\cite{IorWD,IorIJMPA,IorCyg,IorReg,IorNA}.\n\\end{itemize}\n\n\\section{The transiting exoplanet HD209458b}\nLet us consider HD 209458b \\virg{Osiris}, which is the first exoplanet\\footnote{See on the WEB http:\/\/www.exoplanet.eu\/} discovered with the transit method \\cite{Cha,Hen}. 
Its orbital period $P_{\\rm b}$ is known with a so high level of accuracy that it was proposed to use it for the first time to test General Relativity in a planetary system different from ours \\cite{osi}; for other proposals to test General Relativity with different orbital parameters of other exoplanets, see \\cite{ada,ung1,ung2,RAGO}.\n\nIn the present case, the system's parameters entering the Keplerian period\ni.e. the relative semimajor axis $a$, the mass $M_{\\bigstar}$ of the host star and the mass $m_p$ of the planet, can be determined independently of the third Kepler law itself, so that it is meaningful to compare the photometrically measured orbital period $P_{\\rm b}=3.524746$ d \\cite{exo} to the computed Keplerian one $P^{\\rm Kep}$: their difference can be used to put genuine constraints on KS solution which predicts the corrections of \\rfr{PHL} to the third Kepler law.\nIndeed, the mass $M_{\\bigstar}=1.119\\pm 0.033$M$_{\\odot}$ and the radius $R_{\\bigstar}=1.155^{+0.014}_{-0.016}$R$_{\\odot}$ of the star \\cite{exo}, along with other stellar properties, are fairly straightforwardly estimated by matching direct spectral observations with stellar evolution models since for HD 209458 we have also the Hipparcos parallax $\\pi_{\\rm Hip}=21.24\\pm 1.00$ mas \\cite{Perry}.\nThe semimajor axis-to-stellar radius ratio $a\/R_{\\bigstar}=8.76\\pm 0.04$ is estimated from the photometric light curve, so that $a=0.04707^{+0.00046}_{-0.00047}$AU \\cite{exo}. The mass $m_p$ of the planet can be retrieved from the parameters of the photometric light curve and of the spectroscopic one entering the formula for the planet's surface gravity $g_p$ (eq.(6) in \\cite{exo}). As a result, after having computed the uncertainty in the Keplerian period by summing in quadrature the errors due to $\\delta a,\\delta M_{\\bigstar},\\delta m_p$, it turns out\n \\eqi \\Delta P\\doteq P_{\\rm b}-P^{\\rm Kep} = 204\\pm 5354\\ {\\rm s};\\lb{exop}\\eqf the uncertainties $\\delta M_{\\bigstar}$, $\\delta a$, $\\delta m_p$ contribute 4484.88 s, 2924.77 s, 2.66 s, respectively to $\\delta(\\Delta P)=5354$ s.\n\n The discrepancy $\\Delta P$ between $P_{\\rm b}$ and $P^{\\rm Kep}$ of \\rfr{exop} is statistically compatible with zero; thus, \\rfr{exop} allows to constrain the parameter $\\omega_0$ entering $P_{\\omega_0}$.\n Since\n \\eqi P^{\\rm NK}\\doteq P_{\\omega_0}=\\rp{\\mathcal{K}}{\\omega_0},\\eqf\n with\n \\eqi \\mathcal{K}\\doteq\\rp{4\\pi(GM)^{5\/2}}{c^6 d^{3\/2}}=8\\times 10^{-15}\\ {\\rm s},\\lb{cazza}\\eqf\n by equating the non-Keplerian correction $P_{\\omega_0}$ to the measured $\\Delta P$ one has\n \\eqi \\omega_0=\\rp{\\mathcal{K}}{\\Delta P}.\\lb{vaffa}\\eqf\n Since $\\Delta P$ is statistically compatible with zero, the largest value of $\\omega_0$ is infinity; from \\rfr{vaffa} a lower bound on $|\\omega_0|$ can be obtained amounting to\\eqi |\\omega_0|\\geq 1.4\\times 10^{-18}.\\lb{lowerbo}\\eqf\n A confrontation with the solar system constraints\\footnote{To avoid confusions with the perihelion $\\omega$, the KS parameter is dubbed $\\psi_0$ in \\cite{HLIR}.} Our previous paper \\cite{HLIR} shows that such a lower bound is at the level of those from Jupiter and Saturn, while it contradicts the possibility that values of $\\omega_0$ as small as those allowed by Uranus, Neptune and Pluto ($|\\omega_0|\\geq 10^{-24}-10^{-22}$) may exist.\n However, tighter constraints are established by the inner planets for which $|\\omega_0|\\geq 
10^{-15}-10^{-12}$.\n\n\n\n\n\\section{Conclusions}\\lb{conclu}\nWe have investigated how the third Kepler law is modified by the KS solution,\nwhose Newtonian and lowest order post-Newtonian limits coincides with those of GR, by using the standard Gauss perturbative approach. The resulting expression for $P_{\\omega_0}$, obtained from the Gauss equation of the variation of the mean anomaly $\\mathcal{M}$, in the limit $e\\rightarrow 0$ reduces to the simple formula which can be derived by equating the centripetal acceleration to the Newton$+$KS gravitational acceleration for a circular orbit.\n\nThen, after having discussed certain subtleties connected, in general, with a meaningful use of the third Kepler law to put on the test alternative theories of gravity, we compared our explicit expression for $P_{\\omega_0}$ to the discrepancy $\\Delta P$ between the phenomenologically determined orbital periods $P_{\\rm b}$ and the computed Keplerian ones $P^{\\rm Kep}$ for the transiting extrasolar planet HD209458b \\virg{Osiris}. Since $\\Delta P$ is statistically compatible with zero, it has been possible to {preliminary} obtain the lower bound $|\\omega_0|\\geq 1.4 \\times 10^{-18}$ on the dimensionless KS parameter. {However, the entire data set of HD209458b should be re-processed by including KS gravity as well, and a dedicated, solve-for parameter should be estimated as well. The previously reported } constraint rules out certain smaller values allowed by the lower bounds obtained from the perihelia of Uranus, Neptune and Pluto ($|\\omega_0|\\geq 10^{-24}-10^{-22}$). On the other hand, our exoplanet bound still leaves room for values of $\\omega_0$ too small according to the constraints from the perihelia of Mercury, Venus and the Earth ($|\\omega_0|\\geq 10^{-14}-10^{-12}$).\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\\paragraph{Planar maps.}\n\nMaps, i.e. gluings of polygons forming an orientable surface, have been the object of extensive research in the last decades, both from the combinatorial and probabilistic viewpoints. The most popular category of maps are planar maps, i.e. maps homeomorphic to the sphere. Their combinatorial study goes back to Tutte in the 60s, e.g.~\\cite{Tutte63}, who gave explicit formulas for the enumeration of various classes of planar maps using a generating functions approach. More recently, bijective approaches have been developped such as the Cori--Vauquelin--Schaeffer bijection for quadrangulations~\\cite{Sch98} and its generalization, the Bouttier--di Francesco--Guitter bijection~\\cite{BDG04}.\n \nOn the other hand, much attention has been given in the last 20 years to asymptotic properties of large random planar maps picked uniformly in certain classes. These asymptotic properties are usually understood by proving the convergence of random maps in some sense when the size goes to infinity. Two different notions of limits are commonly used: scaling and local limits. Scaling limits, which we will not study in this work, consist in renormalizing the distances in order to build continuous objects. In particular, many discrete models are known to have the Brownian map as a scaling limit~\\cite{LG11, Mie11, Mar16}. The theory of scaling limits of planar maps shares deep links with other random geometry models such as Liouville Quantum Gravity~\\cite{MS16b}. 
On the other hand, local limits, on which the present work focuses, study the neighbourhood of a typical point in a map in order to obtain an infinite but discrete object in the limit. In the context of planar maps, this was first considered by Angel and Schramm who proved the convergence of large uniform triangulations towards the Uniform Infinite Planar Triangulation (UIPT)~\\cite{AS03}. The study of the UIPT using Markovian explorations called \\emph{peeling} explorations was then initiated by Angel~\\cite{Ang03}. More general models have followed since, such as general planar maps with Boltzmann weights on the face degrees~\\cite{St18}. The bipartite case, which will be of particular interest for us, is investigated in~\\cite{Bud15, BC16}, see also~\\cite{C-StFlour} for a complete survey.\n\n\\paragraph{Maps of higher genus.}\nIt seems natural to try to extend the combinatorial and probabilistic study of planar maps to maps of higher genus. On the combinatorial side, the enumeration of maps with any genus is a very rich topic, with links to irreducible representations of the symetric group and integrable hierarchies~\\cite{MJD00,Okounkov00}. In particular, double recursions on both the size and the genus are known for the counting of maps, see~\\cite{GJ08} for triangulations and \\cite{Lo19} for general classes of bipartite maps. However, explicit enumeration formulas are lacking. Asymptotics can be obtained when the genus is fixed and the size goes to infinity~\\cite{BC86}, but are still missing when the genus goes to infinity as well.\n\nSimilarly, on the probabilistic side, higher genus versions of random surface models have been constructed such as Brownian surfaces~\\cite{Bet16} or Liouville quantum gravity on complex tori \\cite{DRV16}. However, their behaviour when the genus goes to infinity is still poorly understood. Finally, a regime that is much easier to handle is the regime where the genus is not constrained, and the faces are simply glued uniformly at random \\cite{Gam06, CP16, BCP19}. In this case, the genus is concentrated very close to its maximal possible value.\n\nMore recently, some progress was made in the study of high genus maps, namely when the genus grows linearly in the size of the map. In this case, the Euler formula shows that maps satisfy a discrete notion of \"negative average curvature\", which suggests that the neighbourhood of a typical vertex should look hyperbolic. The first category of maps that was investigated in this setting were uniform unicellular maps (i.e. maps with one face). See \\cite{ACCR13} for the proof of local convergence to a supercritical Galton--Watson tree, and \\cite{Ray13a} for the study of more global properties such as logarithmic diameter.\n\nShortly after, Curien introduced a one-parameter family of random hyperbolic triangulations of the plane \\cite{CurPSHIT}, following the work of Angel and Ray in the half-planar case \\cite{AR13}. 
More precisely, random maps of this family are called Planar Stochastic Hyperbolic Triangulations (PSHT) $\\left( \\mathbb{T}_{\\lambda} \\right)_{0<\\lambda \\leq (12\\sqrt{3})^{-1}}$ and they are the only random triangulations satisfying the following spatial Markov property: for any finite triangulation $t$ with $|t|$ vertices in total and a hole of perimeter $p$, we have\n\\begin{equation}\n\\P \\left( t \\subset \\mathbb{T}_{\\lambda} \\right) = C_{p}(\\lambda) \\lambda^{|t|}.\n\\end{equation}\nIn particular, such a triangulation exists if and only if $\\lambda \\in \\left( 0, \\frac{1}{12\\sqrt{3}} \\right]$. Except for the critical case $\\lambda=\\frac{1}{12\\sqrt{3}}$ (which corresponds to the UIPT), these objects exhibit hyperbolic properties \\cite{CurPSHIT, B18}. Benjamini and Curien conjectured in~\\cite{CurPSHIT} that they are the local limits of uniform high genus triangulations. \n\nIn a recent paper \\cite{BL19}, the authors of the present work proved this conjecture. Asymptotics for the enumeration of high genus triangulations up to subexponential factors were derived as a byproduct.\n\n\\paragraph{Infinite Boltzmann Planar Maps.}\nThe goal of the present work is to generalize the results of~\\cite{BL19} to a much wider family of models, where faces do not have to be triangles. For combinatorial reasons\\footnote{More precisely, the enumeration results in the planar case are simpler for bipartite maps, and the recursion of~\\cite{Lo19} holds only for bipartite maps.}, we will restrict ourselves to bipartite maps, which means that the face degrees have to be even. The limiting objects are the \\emph{Infinite Boltzmann Bipartite Planar Maps} (IBPM) introduced in~\\cite[Appendix C]{B18these} as an analog of the PSHT for bipartite maps. We also refer to~\\cite[Chapter 8]{C-StFlour} for the study of basic properties of these objects.\n\nThe IBPM are characterized by a spatial Markov property similar to the one satisfied by the PSHT. If $m$ is a finite map with one hole, we write $m \\subset M$ if $M$ can be obtained by filling the hole of $m$, possibly with a map with a nonsimple boundary\\footnote{More precisely, we use the sense introduced by Budd in~\\cite{Bud15}, i.e. the sense corresponding to the \"lazy\" peeling process, as opposed to the one introduced by Angel in~\\cite{Ang03}. See Section~\\ref{subsec_univ_defns} for precise definitions.}. Let $\\mathbf{q}=(q_j)_{j \\geq 1}$ be a sequence of nonnegative numbers. An infinite random planar map $M$ is called a $\\mathbf{q}$-IBPM if there are numbers $\\left( C_p \\right)_{p \\geq 1}$ such that, for any finite map $m$ with one hole of perimeter $2p$, we have\n\\[ \\P \\left( m \\subset M \\right) = C_p \\times \\prod_{f \\in m} q_{\\mathrm{deg}(f)\/2}, \\]\nwhere the product is over all internal faces of $m$. In particular, they generalize the infinite \\emph{critical} bipartite Boltzmann maps defined and studied by Budd in \\cite{Bud15}. It was proved in~\\cite[Appendix C]{B18these} that there is at most one $\\mathbf{q}$-IBPM, that we denote by $\\mathbb{M}_{\\mathbf{q}}$. Moreover, \\cite[Appendix C]{B18these} provides both necessary conditions and sufficient ones on $\\mathbf{q}$ for the existence of $\\mathbb{M}_{\\mathbf{q}}$, but no explicit characterization (these results are recalled in Section~\\ref{subsec_IBPM} below). The present work improves on these results by giving an explicit parametrization of the weight families $\\mathbf{q}$ for which $\\mathbb{M}_{\\mathbf{q}}$ exists. 
More precisely, as stated in Theorem~\\ref{thm_prametrization_rootface} below, such families $\\mathbf{q}$ can be parametrized by the law of the degree of the root face of $\\mathbb M_{\\mathbf{q}}$, which may be any law on $\\{2,4,6,\\dots\\}$, and an additional hyperbolicity parameter $\\omega \\in [1, +\\infty)$. The critical maps of~\\cite{Bud15} correspond to the case $\\omega=1$, and are already known to be local limits of \\emph{planar} maps. On the other hand, the case $\\omega>1$ has a hyperbolic flavour.\n\n\\paragraph{Local limits of high genus bipartite maps.}\nThe main result of this work is that uniform maps with high genus and prescribed face degrees converge locally to the $\\mathbf{q}$-IBPM when the size goes to infinity. This can be seen as a \\textit{universality} result in the domain of high genus maps, in the sense that regardless of the precise model of maps, the same phenomenon is observed.\n\nMore precisely, we will use the notation $\\mathbf{f}=(f_j)_{j \\geq 1}$ for face degree sequences (i.e. $f_j \\geq 0$ for all $j$, and $f_j=0$ eventually). For such a sequence $\\mathbf{f}$, we set $|\\mathbf{f}|=\\sum_{j \\geq 1} j f_j$, which describes the number of edges of a map with $f_j$ faces of degree $2j$ for all $j \\geq 1$. For $g \\geq 0$, we also write\n\\begin{equation}\\label{defn_v_f_g}\nv(\\mathbf{f}, g) = 2-2g+\\sum_{j \\geq 1} (j-1)f_j.\n\\end{equation}\nBy the Euler formula, a bipartite map with genus $g$ and face degrees described by $\\mathbf{f}$ exists if and only if $v(\\mathbf{f}, g) \\geq 2$, and in this case $v(\\mathbf{f}, g)$ is the number of vertices of such a map. For such $\\mathbf{f}$ and $g$, we denote by $M_{\\mathbf{f},g}$ a uniform bipartite map with genus $g$ and $f_j$ faces of degree $2j$ for all $j \\geq 1$.\n\n\\begin{thm}\\label{univ_main_thm}\nLet $(\\mathbf{f}^{n})_{n \\geq 1}$ be a sequence of face degree sequences, and let $(g_n)$ be a sequence such that $v(\\mathbf{f}^{n}, g_n) \\geq 2$ for all $n \\geq 1$. We assume that $|\\mathbf{f}^n| \\to +\\infty$ when $n \\to +\\infty$ and that $\\frac{f^n_j}{|\\mathbf{f}^n|} \\to \\alpha_j$ for all $j \\geq 1$, where $\\sum_{j \\geq 1} j \\alpha_j=1$.\nWe also assume $\\frac{g_n}{|\\mathbf{f}^n|} \\to \\theta$, where $0 \\leq \\theta < \\frac{1}{2} \\sum_{j \\geq 1} (j-1) \\alpha_j$.\nFinally, assume that $\\sum_{j \\geq 1} j^2 \\alpha_j <+\\infty$.\n\nThen we have the convergence in distribution\n\\[ M_{\\mathbf{f}^n, g_n} \\xrightarrow[n \\to +\\infty]{(d)} \\mathbb M_{\\mathbf{q}}\\]\nfor the local topology, where the weight sequence $\\mathbf{q}$ depends only on $\\theta$ and $\\left( \\alpha_j \\right)_{j \\geq 1}$, in an injective way.\n\\end{thm}\n\nLet us now make a few comments on the various assumptions of the main theorem.\n\\begin{itemize}\n\\item[$\\bullet$]\nWe first note that the assumption that $\\sum_{j \\geq 1} j \\alpha_j=1$ means that the proportion of the edges that are incident to a face with degree larger than $A$ goes to $0$ as $A \\to +\\infty$, uniformly in $n$. This is equivalent to saying that the root face stays almost surely finite in the limit, so this assumption is necessary to obtain a local limit with finite faces. If this assumption is waived, we expect to obtain different limit objects with infinitely many infinite faces.\n\\item[$\\bullet$]\nThe assumption $\\theta < \\frac{1}{2} \\sum_{j \\geq 1} (j-1) \\alpha_j$ means that the number of vertices is roughly proportional to $|\\mathbf{f}^n|$, so that the average degree of the vertices stays bounded. 
Therefore, it is also necessary in order to have a proper local limit with finite vertex degrees. Note that this assumption also implies $\\alpha_1<1$, i.e. it is not possible that almost all faces are $2$-gons.\n\\item[$\\bullet$]\nThe assumption $\\sum_{j \\geq 1} j^2 \\alpha_j <+\\infty$ means that the \\emph{expected} degree of the root face stays finite in the limit. We do not expect this assumption to be necessary. However, one of the steps of our proof (the \"two-holes argument\" of Section~\\ref{sec_arg_deux_trous}) crucially requires a bound on the tail of the degrees of the faces.\n\\end{itemize}\nFinally, the map associating to $\\left( \\theta, \\left( \\alpha_j \\right)_{j \\geq 1}\\right)$ the weight sequence $\\mathbf{q}$ given by Theorem~\\ref{univ_main_thm} is surjective, in the sense that every IBPM $\\mathbb M_{\\mathbf{q}}$ for which the degree of the root face has finite expectation can be obtained as a local limit through Theorem~\\ref{univ_main_thm}.\n\n\\paragraph{The heavy tail case.}\nAlthough we could not remove the assumption $\\sum_{j \\geq 1} j^2 \\alpha_j<+\\infty$ in Theorem~\\ref{univ_main_thm}, most of the steps of the proof do not require this assumption. In particular, we can still obtain the following partial result in the general case.\n\n\\begin{thm}\\label{thm_main_more_general}\nLet $(\\mathbf{f}^{n})_{n \\geq 1}$ be a sequence of face degree sequences, and let $(g_n)$ be a sequence such that $v(\\mathbf{f}^{n}, g_n) \\geq 2$ for all $n \\geq 1$. We assume that $|\\mathbf{f}^n| \\to +\\infty$ when $n \\to +\\infty$ and that $\\frac{f^n_j}{|\\mathbf{f}^n|} \\to \\alpha_j$ for all $j \\geq 1$, where $\\sum_{j \\geq 1} j \\alpha_j=1$.\nWe also assume $\\frac{g_n}{|\\mathbf{f}^n|} \\to \\theta$, where $0 \\leq \\theta < \\frac{1}{2} \\sum_{j \\geq 1} (j-1) \\alpha_j$.\n\nThen the sequence of random maps $\\left( M_{\\mathbf{f}^n, g_n} \\right)_{n \\geq 1}$ is tight for the local topology. Moreover, all its subsequential limits are of the form $\\mathbb M_{\\mathbf{Q}}$, where $\\mathbf{Q}$ is a random Boltzmann weight sequence.\n\\end{thm}\n\n\\paragraph{A parametrization of Infinite Bipartite Boltzmann Planar Maps.}\nAs explained briefly above, we also provide a new parametrization of the Boltzmann weight families $\\mathbf{q}$ associated to an IBPM: instead of directly using the Boltzmann weights $q_j$, we parametrize them according to the law of the degree of the root face.\n\n\\begin{thm}\\label{thm_prametrization_rootface}\nLet $\\boldsymbol{\\alpha}$ be a probability measure on $\\mathbb N^*$. Then the set of IBPM for which the half-degree of the root face has law $\\boldsymbol{\\alpha}$ forms a one-parameter family $\\left( \\mathbb M_{\\mathbf{q}^{(\\omega)}} \\right)_{\\omega \\geq 1}$. Moreover, $\\mathbf{q}^{(\\omega)}$ is critical if and only if $\\omega=1$, and the vertex degrees in $\\mathbb M_{\\mathbf{q}^{(\\omega)}}$ go to infinity when $\\omega \\to +\\infty$.\n\\end{thm}\n\nIn particular, the existence of hyperbolic Boltzmann maps with arbitrarily heavy-tailed face degrees answers a question from~\\cite{C-StFlour} which was not settled by the results of \\cite[Appendix C]{B18these}. 
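For instance, if $\\boldsymbol{\\alpha}$ is the Dirac mass at $2$, then all the faces of $\\mathbb M_{\\mathbf{q}^{(\\omega)}}$ have degree $4$, and Theorem~\\ref{thm_prametrization_rootface} yields a one-parameter family of infinite random quadrangulations of the plane, which are critical for $\\omega=1$ and have a hyperbolic flavour for $\\omega>1$, in the same way as the PSHT for triangulations. 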
Moreover, we can think of $\\mathbf{q}^{(\\omega)}$ as interpolating between a critical non-hyperbolic map and a degenerate map with infinite vertex degrees.\n\n\\paragraph{Asymptotic enumeration.}\nAs for triangulations, the most natural way to try to prove Theorem~\\ref{univ_main_thm} would be to obtain precise asymptotics on the counting of maps with prescribed genus and face degrees, in order to mimic classical arguments going back to~\\cite{AS03}. However, such asymptotics are not available and seem difficult to obtain. On the other hand, just like in~\\cite{BL19} for triangulations, once Theorem~\\ref{univ_main_thm} is proved, applying the arguments of~\\cite{AS03} \"backwards\" allows us to obtain a result about convergence of the ratio when we add one face of fixed degree. We denote by $\\beta_g(\\mathbf{f})$ the number of bipartite maps of genus $g$ with face degrees prescribed by $\\mathbf{f}$.\n\n\\begin{corr}\\label{prop_cv_ratio}\nLet $\\left( \\mathbf{f}^{n} \\right)_{n \\geq 0}$ and $\\left( g_n \\right)_{n \\geq 0}$ be such that $\\frac{g_n}{|\\mathbf{f}^n|}\\to\\theta$ and $\\frac{f^n_j}{|\\mathbf{f}^n|}\\to \\alpha_j$ for all $j \\geq 1$. We assume that $\\sum j \\alpha_j=1$, that $0 \\leq \\theta < \\frac{1}{2} \\sum (j-1) \\alpha_j$ and that $\\sum j^2 \\alpha_j < +\\infty$. We recall that by Theorem~\\ref{univ_main_thm}, there is a weight sequence $\\mathbf{q}$ such that $M_{\\mathbf{f}^{n},g_n}$ converges locally to $\\mathbb{M}_{\\mathbf{q}}$. Then for all $j \\geq 1$, we have\n\\begin{equation}\\label{eq_cv_ratio}\n\\frac{\\beta_{g_n}(\\mathbf{f}^{n}-\\mathbf{1}_j)}{\\beta_{g_n}(\\mathbf{f}^{n})} \\xrightarrow[n \\to +\\infty]{} C_2(\\mathbf{q})q_j.\n\\end{equation}\n\\end{corr}\n\nWe also believe that the following is true:\n\n\\begin{conj}\\label{thm_univ_asympto}\nLet $(\\mathbf{f}^{n})_{n \\geq 1}$ be a sequence of face degree sequences, and let $(g_n)$ be a sequence such that $v(\\mathbf{f}^{n}, g_n) \\geq 2$ for all $n \\geq 1$. We assume that $|\\mathbf{f}^n| \\to +\\infty$ when $n \\to +\\infty$ and that $\\frac{f^n_j}{|\\mathbf{f}^n|} \\to \\alpha_j$ for all $j \\geq 1$, where $\\sum_{j \\geq 1} j \\alpha_j=1$.\nWe also assume $\\frac{g_n}{|\\mathbf{f}^n|} \\to \\theta$, where $0 \\leq \\theta \\leq \\frac{1}{2} \\sum_{j \\geq 1} (j-1) \\alpha_j$.\nFinally, assume that $\\sum_{j \\geq 1} j^2 \\alpha_j <+\\infty$.\nThen\n\\[\\beta_{g_n}(\\mathbf{f}^{n})= |\\mathbf{f}^n|^{2g_n} \\exp \\left( \\varphi \\left( \\theta,\\left( \\alpha_j \\right)_{j \\geq 1} \\right) |\\mathbf{f}^n| + o \\left( |\\mathbf{f}^n| \\right) \\right),\\]\nwhere $\\varphi$ is some function.\n\\end{conj}\n\nMore precisely, in~\\cite{BL19}, the proof consists of first using the analog of Corollary~\\ref{prop_cv_ratio} to estimate the ratio between the number of triangulations of any genus and the number of triangulations of genus close to maximal (say with $\\varepsilon |\\mathbf{f}|$ vertices). To count such triangulations, we contracted a spanning tree to reduce the problem to triangulations with only one vertex, for which explicit formulas are known. This \"contraction\" is the step that is difficult to extend to our setting here: while for triangulations we simply obtained a triangulation with fewer faces, here the impact on the face degrees may become much more complex. This is why we leave the question open.\n\n\\paragraph{Sketch of the proof of Theorem~\\ref{univ_main_thm}: common points and differences with the triangular case.}\nThe proof is a combination of combinatorial and probabilistic ideas. 
It follows the same global strategy as in \\cite{BL19}, which shows the robustness of this approach. However, new difficulties arise at each of the steps, which makes the overall proof much longer. \n\nMore precisely, the first step consists of showing the tightness of $M_{\\mathbf{f}^n, g_n}$. This follows from a \\emph{bounded ratio lemma} (Lemma~\\ref{lem_BRL}), stating that under certain assumptions the ratio $\\frac{\\beta_g \\left(\\mathbf{f}+\\mathbf{1}_j \\right)}{\\beta_g \\left(\\mathbf{f} \\right)}$ is bounded. As in~\\cite{BL19}, this lemma is established using surgery operations to remove a face, but this surgery can affect a larger region than in the triangular case, which makes it more involved. The second step is to prove that any subsequential limit is planar and one-ended. This relies on the recurrence proved by the second author in~\\cite{Lo19}, and only requires minor adaptations compared to~\\cite{BL19}. We then notice that any subsequential limit enjoys a weak spatial Markov property, which implies that it must be of the form $\\mathbb M_{\\mathbf{Q}}$, for some random weight sequence $\\mathbf{Q}$. This part is also similar to~\\cite{BL19}, although additional technicalities arise. None of these arguments uses any assumption on the tail of the degrees of the faces, and together they prove Theorem~\\ref{thm_main_more_general}.\n\nThe end of the proof consists in showing that $\\mathbf{Q}$ is actually deterministic. As in~\\cite{BL19}, this step relies on a surgery argument called the \\emph{two holes argument}, for which we need to explore two pieces of maps with the exact same boundary length. This is where the assumption $\\sum j^2 \\alpha_j<+\\infty$ is crucial: without it, when we explore a piece of map \"face by face\", the perimeter makes large positive jumps and misses too many values. Another major difference with \\cite{BL19} is in the last step, where we match the average degree in finite models (computed with the Euler formula) and in infinite ones. In particular, we need to argue that a weight sequence $\\mathbf{q}$ is determined by the law of the root face of $\\mathbb{M}_{\\mathbf{q}}$ and the average vertex degree. While for triangulations we were able to obtain an explicit formula for the average vertex degree, this is not the case here. Therefore, we need to develop new arguments making use of the local limit results obtained earlier in the paper. This is also the reason why the link between $\\theta$, $\\left( \\alpha_j\\right)$ and $\\mathbf{q}$ in Theorem~\\ref{univ_main_thm} is not explicit.\n\n\\paragraph{Weakly Markovian bipartite maps.}\nJust like in~\\cite{BL19}, the argument showing that a subsequential limit is a mixture of IBPM is a result of independent interest, so we give its statement here. Let $M$ be a random infinite, one-ended, bipartite planar map. We call $M$ \\emph{weakly Markovian} if for any finite map $m$ with one hole, the probability that $m \\subset M$ depends only on the perimeter of $m$ and on the family of degrees of its internal faces. 
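In particular, for any weight sequence $\\mathbf{q}$ such that the $\\mathbf{q}$-IBPM exists, the map $\\mathbb M_{\\mathbf{q}}$ is itself weakly Markovian: by definition, $\\P \\left( m \\subset \\mathbb M_{\\mathbf{q}} \\right)$ depends only on the perimeter of $m$ (through the constant $C_p$) and on the degrees of the internal faces of $m$ (through the product of their weights). Theorem~\\ref{thm_weak_Markov_general} below states that, up to randomizing the weight sequence, these are the only examples. 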
We denote by $\\mathcal{Q}_h$ the set of weight sequences $\\mathbf{q}$ for which $\\mathbb{M}_{\\mathbf{q}}$ exists, and by $\\mathcal{Q}_f \\subset \\mathcal{Q}_h$ the set of those $\\mathbf{q}$ for which the expected degree of the root face in $\\mathbb{M}_{\\mathbf{q}}$ is finite (this will be useful to handle the last assumption in Theorem~\\ref{univ_main_thm}).\n\n\\begin{thm}\\label{thm_weak_Markov_general}\nLet $M$ be a weakly Markovian infinite, one-ended, bipartite random planar map. Then there is a random weight sequence $\\mathbf{Q} \\in \\mathcal{Q}_h$ such that $M$ has the same distribution as $\\mathbb M_{\\mathbf{Q}}$. Moreover, if the degree of the root face of $M$ has finite expectation, then $\\mathbf{Q} \\in \\mathcal{Q}_f$ almost surely.\n\\end{thm}\n\n\\paragraph{Structure of the paper.}\nIn Section~\\ref{sec_univ_prelim}, we review basic definitions on maps, and previous combinatorial results that will be used throughout the paper. We also introduce the IBPM and describe various parametrizations of the set of IBPMs, and in particular prove Theorem~\\ref{thm_prametrization_rootface}. In Section~\\ref{sec_univ_tight}, under the assumptions of Theorem~\\ref{thm_main_more_general} (i.e. without the assumption on the tail of face degrees), we prove that the maps $M_{\\mathbf{f}^n, g_n}$ are tight for the local topology, and that any subsequential limit is a.s. planar and one-ended. Section~\\ref{sec_univ_markov} is devoted to the proof of Theorem~\\ref{thm_weak_Markov_general}, which implies that any subsequential limit of $M_{\\mathbf{f}^n, g_n}$ is an IBPM with random parameters. This is sufficient to prove Theorem~\\ref{thm_main_more_general}. In Section~\\ref{sec_univ_end}, we conclude the proof of Theorem~\\ref{univ_main_thm} by showing that the parameters are deterministic and depend only on $\\theta$ and $\\left( \\alpha_j \\right)_{j \\geq 1}$. In Section~\\ref{sec_univ_asymp}, we deduce the combinatorial estimate of Corollary~\\ref{prop_cv_ratio} from Theorem~\\ref{univ_main_thm}. Finally, the Appendices contain the proofs of some technical results.\n\n\\tableofcontents\n\n\\newpage\n\n\\section*{Index of notations} \\label{sec_univ_index}\n\n\\addcontentsline{toc}{section}{\\protect\\numberline{}Index of notations}\n\nIn general, we will use lower case letters such as $m$ to denote deterministic objects or quantities, upper case letters such as $M$ for random objects and $\\mathtt{mathcal}$ letters such as $\\mathcal{M}$ for sets of objects. We will use $\\mathtt{mathbf}$ characters such as $\\mathbf{q}$ for sequences, and normal characters such as $q_j$ for their terms.\n\n\\begin{itemize}\n\\item\n$\\mathbf{f}=(f_j)_{j \\geq 1}$: denotes a face degree sequence ($\\mathbf{F}$ will denote a random face degree sequence).\n\\item\n$g$: will denote the genus.\n\\item\n$\\mathcal{B}_{g}(\\mathbf{f})$: set of finite bipartite maps with genus $g$ and $f_j$ faces of degree $2j$ for all $j \\geq 1$.\n\\item\n$\\beta_{g}(\\mathbf{f})$: cardinality of $\\mathcal{B}_{g}(\\mathbf{f})$.\n\\item\n$M_{\\mathbf{f},g}$: uniform random map in $\\mathcal{B}_{g}(\\mathbf{f})$.\n\\item\n$|\\mathbf{f}|=\\sum_{j \\geq 1} j f_j$ (i.e. the number of edges of a map with $f_j$ faces of degree $2j$ for all $j \\geq 1$).\n\\item\n$v(\\mathbf{f}, g) = 2-2g+\\sum_{j \\geq 1} (j-1)f_j$ (i.e. 
the number of vertices of a map in $\\mathcal{B}_{g}(\\mathbf{f})$).\n\\item\n$\\overline{\\mathcal{B}}$: space of finite or infinite bipartite maps with finite vertex degrees, equipped with the local distance $d_{\\mathrm{loc}}$.\n\\item\n$\\overline{\\mathcal{B}}^*$: space of finite or infinite bipartite maps with finite or infinite vertex degrees, equipped with the dual local distance $d_{\\mathrm{loc}}^*$.\n\\item\n$\\theta$: limit value of $\\frac{g}{|\\mathbf{f}|}$ when $|\\mathbf{f}| \\to +\\infty$.\n\\item\n$\\mathbf{q}=(q_j)_{j \\geq 1}$: denotes a weight sequence ($\\mathbf{Q}$ denotes a random weight sequence).\n\\item\n$\\mathbb M_{\\mathbf{q}}$: the infinite bipartite Boltzmann planar map with weight sequence $\\mathbf{q}$.\n\\item\n$W_p(\\mathbf{q})$: partition function of finite Boltzmann bipartite maps of the $2p$-gon with weights $\\mathbf{q}$.\n\\item\n$\\mathcal{Q}=[0,1]^{\\mathbb N^*}$.\n\\item\n$\\mathcal{Q}^*=\\left\\{ \\mathbf{q}=(q_j)_{j \\geq 1} \\in \\mathcal{Q} | \\exists j \\geq 2, q_j>0 \\right\\}$.\n\\item\n$\\mathcal{Q}_a$: set of admissible families of Boltzmann weights $\\mathbf{q}$, i.e. such that $W_p(\\mathbf{q})<+\\infty$.\n\\item\n$\\mathcal{Q}_h$: set of Boltzmann weights for which $\\mathbb M_{\\mathbf{q}}$ exists. We have $\\mathcal{Q}_h \\subset \\mathcal{Q}_a \\cap \\mathcal{Q}^*$.\n\\item\n$\\mathcal{Q}_f$: set of Boltzmann weights $\\mathbf{q} \\in \\mathcal{Q}_h$ for which the expectation of the degree of the root face of $\\mathbb M_{\\mathbf{q}}$ is finite.\n\\item\n$c_{\\mathbf{q}}$: for $\\mathbf{q} \\in \\mathcal{Q}_a$, denotes the solution of the equation \\[\\sum_{j \\geq 1} q_j \\frac{1}{4^{j-1}} \\binom{2j-1}{j-1} c_{\\mathbf{q}}^{j-1} = 1-\\frac{4}{c_{\\mathbf{q}}}.\\]\n\\item\n$\\nu_{\\mathbf{q}}(i)= \\left\\{ \n\\begin{array}{ll}\nq_{i+1} \\, c_{\\mathbf{q}}^{i} & \\mbox{ if $i \\geq 0$,} \\\\\n2 W_{-1-i}(\\mathbf{q}) \\, c_{\\mathbf{q}}^i & \\mbox{ if $i \\leq -1$.}\n\\end{array} \\right.$. Step distribution of the random walk associated to the perimeter process of a finite $\\mathbf{q}$-Boltzmann planar map.\n\\item\n$\\omega_{\\mathbf{q}} \\geq 1$: for $\\mathbf{q} \\in \\mathcal{Q}_h$, denotes the solution (other than $1$, unless $\\mathbf{q}$ is critical) of \\[\\sum_{i \\in \\mathbb Z} \\omega^i \\nu_{\\mathbf{q}}(i)=1.\\]\n\\item\n$\\widetilde{\\nu}_{\\mathbf{q}}(i)=\\omega_{\\mathbf{q}}^i \\nu_{\\mathbf{q}}(i)$. Step distribution of the random walk on $\\mathbb Z$ associated to the peeling process of $\\mathbb M_{\\mathbf{q}}$.\n\\item\n$C_p(\\mathbf{q})$: for $\\mathbf{q} \\in \\mathcal{Q}_h$ and $p \\geq 1$, constants such that \\[\\P \\left( m \\subset \\mathbb M_{\\mathbf{q}} \\right)=C_p(\\mathbf{q}) \\times \\prod_{f \\in m} q_{\\mathrm{deg}(f)\/2}\\] for every finite map $m$ with one hole of perimeter $2p$.\n\\item\n$h_p(\\omega)=\\sum_{i=0}^{p-1} (4 \\omega)^{-i} \\binom{2i}{i}$. We have $C_p(\\mathbf{q})= (c_{\\mathbf{q}} \\omega_{\\mathbf{q}})^{p-1} h_p(\\omega_{\\mathbf{q}})$. 
Also, the perimeter process associated to a peeling exploration of $\\mathbb M_{\\mathbf{q}}$ is a Doob transform of the $\\widetilde{\\nu}_{\\mathbf{q}}$-random walk by the harmonic function $\\left( h_p(\\omega_{\\mathbf{q}}) \\right)_{p \\geq 1}$.\n\\item\n$a_j(\\mathbf{q})=\\frac{1}{j} \\P \\left( \\mbox{the degree of the root face of $\\mathbb M_{\\mathbf{q}}$ is $2j$}\\right)$ for all $j \\geq 1$.\n\\item\n$\\alpha_j$: denotes a possible value of $a_j(\\mathbf{q})$, or the limit of the ratio $\\frac{f_j}{|\\mathbf{f}|}$ when $|\\mathbf{f}| \\to +\\infty$. We will always have $\\sum_{j \\geq 1} j \\alpha_j=1$ and $\\alpha_1<1$.\n\\item\n$\\mathbf{q}^{(\\omega)}$: once $\\left( \\alpha_j \\right)_{j \\geq 1}$ has been fixed, denotes the weight sequence for which $\\omega_{\\mathbf{q}}=\\omega$, and the law of the degree of the root face is described by $\\left( \\alpha_j \\right)_{j \\geq 1}$.\n\\item\n$\\mathcal{A}$: denotes a peeling algorithm.\n\\item\n$\\mathcal{E}_t^{\\mathcal{A}}(m)$: explored map after $t$ filled-in peeling steps on the map $m$ using algorithm $\\mathcal{A}$. This is a finite map with holes.\n\\item\n$P_t, V_t$: denote respectively the perimeter (i.e. the half-length of the boundary of the hole) and the volume (i.e. the total number of edges) of the explored map after $t$ steps during a peeling exploration.\n\\item\n$d(\\mathbf{q})=\\mathbb E \\left[ \\left( \\mbox{degree of the root vertex in $\\mathbb M_{\\mathbf{q}}$} \\right)^{-1} \\right]$.\n\\item\n$r_j(\\mathbf{q})=\\left( c_{\\mathbf{q}} \\omega_{\\mathbf{q}} \\right)^{j-1} q_j=\\lim_{t \\to +\\infty} \\frac{1}{t} \\sum_{i=0}^{t-1} \\mathbbm{1}_{P_{i+1}-P_i=j-1}$ for $\\mathbf{q} \\in \\mathcal{Q}_h$ and $j \\geq 1$, where $P$ is the perimeter process associated to a peeling exploration of $\\mathbb M_{\\mathbf{q}}$ (see Proposition~\\ref{prop_q_as_limit}).\n\\item\n$r_{\\infty}(\\mathbf{q}) = \\frac{ \\left(\\sqrt{\\omega_{\\mathbf{q}}}-\\sqrt{\\omega_{\\mathbf{q}}-1} \\right)^2}{2 \\sqrt{\\omega_{\\mathbf{q}}(\\omega_{\\mathbf{q}}-1)}} = \\lim_{n \\to +\\infty} \\frac{V_n-2P_n}{n}$ for $\\mathbf{q} \\in \\mathcal{Q}_h$, where $P$ and $V$ are the perimeter and volume processes associated to a peeling exploration of $\\mathbb M_{\\mathbf{q}}$ (see Proposition~\\ref{prop_q_as_limit}).\n\\end{itemize}\n\n\\newpage\n\n\\section{Preliminaries}\\label{sec_univ_prelim}\n\nOur purpose in this section is to recall basic definitions related to maps, local topology and peeling explorations as well as combinatorial results from previous works, and to introduce precisely the infinite objects that will appear in this paper.\n\n\\subsection{Definitions: maps and local topology}\n\\label{subsec_univ_defns}\n\\label{subsec_local_topology_univ}\n\n\\paragraph{Maps.} A (finite or infinite) \\emph{map} $M$ is a way to glue a finite or countable collection of finite oriented polygons, called the \\emph{faces}, along their edges in a connected way. By forgetting the faces of $M$ and looking only at its vertices and edges, we obtain a graph $G$ (if $M$ is infinite, then $G$ may have vertices with infinite degree). The maps that we consider will always be \\emph{rooted}, i.e. equipped with a distinguished oriented edge called the \\emph{root edge}. 
The face on the right of the root edge is the \\emph{root face}, and the vertex at the start of the root edge is the \\emph{root vertex}.\n\nThe \\emph{dual map} of a map $m$ is the map $m^*$ whose vertices are the faces of $m$, and whose edges are the dual edges to those of $m$. We root $m^*$ at the oriented edge crossing the root edge of $m$ from left to right (see Figure~\\ref{fig_map_dual}).\nIf the number of faces is finite, then $M$ is always homeomorphic to an orientable topological surface, so we can define the genus of $M$ as the genus of this surface. In particular, we call a map \\emph{planar} if it has genus $0$.\n\nA \\emph{bipartite map} is a rooted map where it is possible to color every vertex in black or white without any monochromatic edge. By convention, we may assume that the root is always oriented from white to black, and each edge of the map has a natural orientation from white to black. In a bipartite map, all faces have an even degree. In what follows, we will only deal with bipartite maps (except when dealing with dual maps). Therefore, we will not always specify that the map we consider is bipartite.\n\nFor every $\\mathbf{f}=(f_j)_{j \\geq 1}$ and $g\\geq 0$, we will denote by $\\mathcal{B}_g(\\mathbf{f})$ the set of bipartite maps of genus $g$ with exactly $f_j$ faces of degree $2j$ for all $j\\geq 1$.\nA map of $\\mathcal{B}_g(\\mathbf{f})$ has $|\\mathbf{f}| =\\sum_{j\\geq 1} j f_j$ edges, $\\sum_{j\\geq 1} f_j$ faces and $v(\\mathbf{f}, g)=2-2g+\\sum_{j\\geq 1} (j-1)f_j$ vertices by the Euler formula. In particular, such a map exists if and only if $v(\\mathbf{f},g) \\geq 2$. We will denote by $\\beta_g(\\mathbf{f})$ the cardinality of $\\mathcal{B}_g(\\mathbf{f})$.\n\n\\begin{figure}\n\\center\n\\includegraphics[scale=0.5]{carte_duale}\n\\caption{A map (in black) and its dual (in blue). The arrows mark the roots.}\\label{fig_map_dual}\n\\end{figure}\n\n\\paragraph{Maps with boundaries.} We will need to consider two different notions of bipartite maps with boundaries, that we call \\emph{maps with holes} and \\emph{maps of multi-polygons}. Roughly speaking, the first ones will be used to describe a small neighbourhood of the root in a larger map, and the second ones to describe the complement of this neighbourhood. Note that, since we will only consider bipartite maps in this work, we assume in both definitions that the maps are bipartite.\n\n\\begin{defn}\nA \\emph{map with holes} is a finite, bipartite map with a set of marked faces (called \\emph{holes}) such that:\n\\begin{itemize}\n\\item\nthe boundary of each hole is a simple cycle,\n\\item\nthe boundaries of the different holes are vertex-disjoint,\n\\item\nthe adjacency graph of the \\emph{internal faces} (i.e. the faces that are not holes) is connected,\n\\item\nthe root edge may be any oriented edge of the map.\n\\end{itemize}\nBy convention, the map consisting of two vertices joined by a single edge is a map with one hole and no internal face.\nIf $m$ is a map with holes, we denote by $\\partial m$ its boundary, i.e. the union of the boundaries of its holes.\n\\end{defn}\n\n\\begin{defn}\nLet $\\ell \\geq 1$ and $p_1, p_2, \\dots, p_{\\ell} \\geq 1$. 
A \\emph{map of the $(2p_1, \\dots, 2p_{\\ell})$-gon} is a finite or infinite bipartite map with $\\ell$ marked oriented edges $(e_i)_{1 \\leq i \\leq \\ell}$, such that:\n\\begin{itemize}\n\\item\n$e_1$ is the root edge,\n\\item\nthe faces on the right of the $e_i$ are distinct,\n\\item\nfor all $1 \\leq i \\leq \\ell$, the face on the right of $e_i$ has degree $2p_i$.\n\\end{itemize}\nThe faces on the right of the marked edges are called \\emph{external faces}, and the other ones are called the \\emph{internal faces}.\nWe denote by $\\mathcal{B}^{(p_1,p_2,\\dots,p_{\\ell})}_g(\\mathbf{f})$ the set of bipartite maps of the $(2p_1,2p_2,\\dots,2p_{\\ell})$-gon of genus $g$ with interior faces given by $\\mathbf{f}$. We also denote by $\\beta^{(p_1,p_2,\\dots,p_{\\ell})}_g(\\mathbf{f})$ its cardinality, with the convention that $\\beta_g^{(0)}(\\mathbf{f})$ is $1$ if $g=0$ and $\\mathbf{f}=\\mathbf{0}$, and $0$ otherwise.\n\\end{defn}\n\nNote that, in this second definition, we do not ask that the boundaries are simple or disjoint. The convention for the $0$-gon can be interpreted as saying that the only map of the $0$-gon is the map with $1$ vertex, no edge and no internal face.\n\n\n\\paragraph{Map inclusion.}\nGiven a map $m$, let $m^*$ be its dual map. Let $\\mathfrak{e}$ be a finite, connected subset of edges of $m^*$ such that the root vertex of $m^*$ is incident to $\\mathfrak{e}$. To $\\mathfrak{e}$, we associate the map $m_{\\mathfrak{e}}$ that is obtained by gluing the faces of $m$ corresponding to the vertices of $m^*$ incident to $\\mathfrak{e}$ along the dual of the edges of $\\mathfrak{e}$ (see Figure~\\ref{fig_univ_lazy_inclusion}). Note that $m_{\\mathfrak{e}}$, once rooted at the root edge of $m$, is a map with holes. We will refer to $m_{\\mathfrak{e}}$ as the \\emph{submap of $m$ spanned by $\\mathfrak{e}$}.\n\nIf $m'$ is a map with holes and $m$ is a (finite or infinite) map, we write\n\\[m' \\subset m\\]\nif $m'$ can be obtained from $m$ by the procedure described above. By convention, we also write $m' \\subset m$ if $m'$ is the trivial map with two vertices and one edge, or if $m'$ consists of a simple cycle with the same perimeter as the root face of $m$ (which corresponds to the case $\\mathfrak{e} = \\emptyset$).\n\n\\begin{figure}[!ht]\n\\center\n\\includegraphics[scale=1]{lazy_inclusion}\n\\caption{Inclusion of bipartite maps, on an example. On the right, the map $m$ and, in red, the set of dual edges $\\mathfrak{e}$. On the left, the map $m_{\\mathfrak{e}}$.}\\label{fig_univ_lazy_inclusion}\n\\end{figure}\n\nEquivalently, we have $m' \\subset m$ if $m$ can be obtained from $m'$ by gluing one or several maps of multipolygons in the holes of $m'$. We highlight that this definition of map inclusion is taken from \\cite{C-StFlour} and is tailored for the \\emph{lazy peeling process} of \\cite{Bud15}. More precisely, maps of multipolygons may have boundaries which are neither simple nor disjoint, so if $m' \\subset m$, it is possible that two boundary edges of $m'$ actually coincide in $m$.\n\n\\paragraph{Local convergence and dual local convergence.}\nThe goal of this paragraph is to recall the definition of local convergence in a setting that is not restricted to planar maps. We denote by $\\overline{\\mathcal{B}}$ the set of finite or infinite bipartite maps in which all the vertices have finite degrees. A map $m$ is naturally equipped with a \\emph{graph distance} $d_m$ on the set of its vertices. 
If $m \\in \\overline{\\mathcal{B}}$, for every $r \\geq 1$, we denote by $B_r(m)$ the submap of $m$ spanned by the duals of those edges of $m$ which have an endpoint at $d_m$-distance at most $r-1$ from the root vertex. The map $B_r(m)$ is then a map with holes. We also write $B_0(m)$ for the trivial bipartite map consisting of only one edge.\n\nFor any two maps $m,m' \\in \\overline{\\mathcal{B}}$, we write\n\\[d_{\\mathrm{loc}}(m,m')=\\left( 1+\\max \\{r \\geq 0 | B_r(m)=B_r(m')\\} \\right)^{-1}.\\]\nThis is the \\emph{local distance} on $\\overline{\\mathcal{B}}$. As in the planar case, the space $\\overline{\\mathcal{B}}$ is a Polish space and is the completion of the space of finite bipartite maps for $d_{\\mathrm{loc}}$. However, this space is not compact, since $B_1(m)$ may take infinitely many values.\n\nIn our tightness argument, it will be more convenient to first work with a weaker notion of convergence which we call the \\emph{dual local convergence}. We denote by $\\overline{\\mathcal{B}}^*$ the set of finite or infinite bipartite maps (regardless of whether vertex degrees are finite or not).\nLet $m \\in \\overline{\\mathcal{B}}^*$, and let $d_{m^*}$ be the graph distance on its dual. For $r \\geq 1$, we denote by $B_r^{*}(m)$ the submap of $m$ spanned by those edges of $m^*$ which are incident to a face of $m$ lying at $d_{m^*}$-distance at most $r-1$ from the root face of $m$. By convention, let also $B_0^*(m)$ be the map consisting of a simple cycle with the same length as the boundary of the root face. Like $B_r(m)$, the \"ball\" $B^*_r(m)$ is a finite map with holes. For any $m,m' \\in \\overline{\\mathcal{B}}^*$, we write\n\\[d_{\\mathrm{loc}}^*(m,m')=\\left( 1+\\max \\{r \\geq 0 | B_r^*(m)=B_r^*(m')\\} \\right)^{-1}.\\]\nWe call $d^*_{\\mathrm{loc}}$ the \\emph{dual local distance}. Then $\\overline{\\mathcal{B}}^*$ is a Polish space for $d_{\\mathrm{loc}}^*$ and is the completion of the set of finite bipartite maps.\n\nThe reason why we introduced $d_{\\mathrm{loc}}^*$ is that it will be very easy to obtain tightness for this distance. This will allow us to work directly on infinite objects and deduce tightness for $d_{\\mathrm{loc}}$ later. Tightness for $d_{\\mathrm{loc}}^*$ will be deduced from the next result.\n\n\\begin{lem}\\label{lem_tight_degree_in_ball}\nLet $A(\\cdot)$ be a function from $(0,1)$ to $\\mathbb N$ and let $r \\geq 1$. There is a function $A_r(\\cdot)$ from $(0,1)$ to $\\mathbb N$ with the following property. Let $G$ be a stationary (for the simple random walk) random graph such that, for all $\\varepsilon>0$, we have\n\\[ \\P \\left( \\mathrm{deg}_G(\\rho)>A(\\varepsilon) \\right) \\leq \\varepsilon,\\]\nwhere $\\rho$ is the root vertex. Then for all $\\varepsilon>0$, we have\n\\[ \\P \\left( \\max_{x \\in B_r(G)} \\mathrm{deg}_G(x)>A_r(\\varepsilon) \\right) \\leq \\varepsilon,\\]\nwhere $B_r(G)$ is the ball of radius $r$ centered at the root vertex in $G$.\n\\end{lem}\n\n\\begin{proof}\nThis result goes back to~\\cite{AS03}. More precisely, although not stated explicitly as such, it is proved by induction on $r$ in the proof of tightness of uniform triangulations for the local topology~\\cite[Lemma 4.4]{AS03}. 
See also~\\cite[Theorem 3.1]{BLS13} for a general statement with minimal assumptions.\n\\end{proof}\n\nFrom here, we easily obtain tightness for $d_{\\mathrm{loc}}^*$ in our setting.\n\n\\begin{lem}\\label{lem_easy_dual_convergence}\nLet $\\mathbf{f}^{n}$ be face degree sequences such that $\\frac{1}{\\left| \\mathbf{f}^{n} \\right|} \\sum_{j> A} j f_j^{n}\\rightarrow 0$ as $A \\to +\\infty$ uniformly in $n$, and let $(g_n)$ be any sequence such that $\\mathcal{B}_{g_n}(\\mathbf{f}^{n}) \\ne \\emptyset$ for every $n$. Recall that $M_{\\mathbf{f}^n, g_n}$ is a uniform map in $\\mathcal{B}_{g_n}(\\mathbf{f}^{n})$. Then $(M_{\\mathbf{f}^n, g_n})$ is tight for $d_{\\mathrm{loc}}^*$.\n\\end{lem}\n\n\\begin{proof}\nLet $M_{\\mathbf{f}^n, g_n}^*$ be the dual map of $M_{\\mathbf{f}^n, g_n}$. Since $M_{\\mathbf{f}^n, g_n}$ is invariant under rerooting at a uniform edge, the probability that the root vertex of $M_{\\mathbf{f}^n, g_n}^*$ has degree $2j$ is equal to $\\frac{j f^{n}_j}{\\left| \\mathbf{f}^{n} \\right|}$. Therefore, it follows from the assumption of the lemma that the root degree of $M_{\\mathbf{f}^n, g_n}^*$ is tight. Moreover, $M_{\\mathbf{f}^n, g_n}^*$ is invariant under rerooting along the simple random walk. Therefore, by Lemma~\\ref{lem_tight_degree_in_ball}, for every $r \\geq 1$, the maximal degree in the ball of radius $r$ centered at the root in $M_{\\mathbf{f}^n, g_n}^*$ is tight. This implies tightness for $d_{\\mathrm{loc}}^*$.\n\\end{proof}\n\nFinally, as in \\cite{BL19}, tightness for $d_{\\mathrm{loc}}$ will be deduced from tightness for $d^*_{\\mathrm{loc}}$ using the following result (the proof is the same as for triangulations, and is therefore omitted).\n\n\\begin{lem}\\label{lem_dual_convergence_univ}\nLet $(m_n)$ be a sequence of maps of $\\overline{\\mathcal{B}}$. Assume that\n\\[ m_n \\xrightarrow[n \\to +\\infty]{d_{\\mathrm{loc}}^*} m,\\]\nwith $m \\in \\overline{\\mathcal{B}}$ (i.e. with finite vertex degrees). Then $m_n \\to m$ for $d_{\\mathrm{loc}}$ as $n \\to +\\infty$.\n\\end{lem}\n\n\\subsection{The lazy peeling process of bipartite maps}\n\\label{subsec_lazy_peeling}\n\nWe now recall the definition of the \\emph{lazy peeling process} of maps introduced in \\cite{Bud15} (see also \\cite{C-StFlour} for an extensive study). We will make heavy use of this notion in our proofs.\n\nA \\emph{peeling algorithm} is a function $\\mathcal{A}$ that takes as input a finite bipartite map $m$ with holes, and that outputs an edge $\\mathcal{A}(m)$ on $\\partial m$ (i.e. on the boundary of one of the holes). Given an infinite, planar, one-ended bipartite map $m$ and a peeling algorithm $\\mathcal{A}$, we can define an increasing sequence $\\left( \\mathcal{E}_t^{\\mathcal{A}}(m) \\right)_{t \\geq 0}$ of maps with one hole, such that $\\mathcal{E}_t^{\\mathcal{A}}(m) \\subset m$ for every $t$, in the following way. First, the map $\\mathcal{E}_0^{\\mathcal{A}}(m)$ is the trivial map consisting of the root edge only. For every $t \\geq 0$, we call the edge $\\mathcal{A} \\left( \\mathcal{E}_t^{\\mathcal{A}}(m) \\right)$ the \\emph{peeled edge at time $t$}. Let $F_t$ be the face of $m$ on the other side of this peeled edge (i.e. the side incident to a hole in $\\mathcal{E}_t^{\\mathcal{A}}(m)$). 
There are two possible cases, as summed up in Figure~\\ref{fig_univ_lazy_peeling}:\n\\begin{itemize}\n\\item either $F_t$ does not belong to $\\mathcal{E}_t^{\\mathcal{A}}(m)$, and then $\\mathcal{E}_{t+1}^{\\mathcal{A}}(m)$ is the map obtained from $\\mathcal{E}_t^{\\mathcal{A}}(m)$ by gluing a simple face of size $\\mathrm{deg}(F_t)$ along $\\mathcal{A} \\left( \\mathcal{E}_t^{\\mathcal{A}}(m) \\right)$;\n\\item or $F_t$ belongs to $\\mathcal{E}_t^{\\mathcal{A}}(m)$. In that case, by planarity, there exists an edge $e_t \\in \\mathcal{E}_t^{\\mathcal{A}}(m)$ on the same hole as $\\mathcal{A} \\left( \\mathcal{E}_t^{\\mathcal{A}}(m) \\right)$ such that $e_t$ and $\\mathcal{A} \\left( \\mathcal{E}_t^{\\mathcal{A}}(m) \\right)$ are actually identified in $m$. The map $\\mathcal{E}_{t+1}^{\\mathcal{A}}(m)$ is then obtained from $\\mathcal{E}_t^{\\mathcal{A}}(m)$ by gluing $\\mathcal{A} \\left( \\mathcal{E}_t^{\\mathcal{A}}(m) \\right)$ and $e_t$ together and, if this creates a finite hole, by filling it in the same way as in $m$.\n\\end{itemize}\nSuch an exploration is called \\emph{filled-in}, because all the finite holes are filled at each step.\n\nLet us now discuss two different ways to define peeling explorations on finite or nonplanar maps. We first note that, if we do not fill the region in the second case, then the definition of a peeling exploration still makes sense for any finite or infinite map, with the only difference that the explored part may now have several holes. This is what we will call a \\emph{non-filled-in} peeling exploration, and this will only be used briefly in Section \\ref{subsec_planarity}.\n\nFinally, for a finite map $m$, we define a filled-in exploration using the following convention. Assume that the peeled edge at time $t$ is glued to another boundary edge of $\\mathcal{E}^{\\mathcal{A}}_t(m)$ and forms two holes:\n\\begin{itemize}\n\\item\nif these two holes are connected in $m \\backslash \\mathcal{E}^{\\mathcal{A}}_t(m)$ (which may occur if $m$ is not planar), we stop the exploration at time $t$;\n\\item\nif not, we obtain $\\mathcal{E}^{\\mathcal{A}}_{t+1}(m)$ by completely filling the hole which contains the smallest number of edges in $m$.\n\\end{itemize}\nNote that with these conventions, the map $\\mathcal{E}^{\\mathcal{A}}_t(m)$ always has exactly one hole. This definition will be used to compare peeling explorations of finite and infinite maps in Section \\ref{sec_univ_end}. At this point, the local planarity results from Section \\ref{sec_univ_tight} will allow us to assume that with high probability, the explorations are not stopped too early.\n\n\\begin{figure}\n\\center\n\\includegraphics[scale=1]{lazy_peeling}\n\\caption{The lazy peeling on an example. The peeled edge is in red. Either a new face is discovered (center case), or the chosen edge is glued to another boundary edge (right case, the glued edge is in blue and the filled part in pink).}\\label{fig_univ_lazy_peeling}\n\\end{figure}\n\n\\subsection{Combinatorial enumeration}\\label{subsec_prelim_combi}\n\n\\paragraph{Partition functions for Boltzmann planar maps.}\nBefore describing infinite Boltzmann models in detail, we recall well-known enumeration results in the finite, planar case. We write $\\mathcal{Q}=[0,1]^{\\mathbb N^*}$. Fix a sequence $\\mathbf{q} \\in \\mathcal{Q}$. 
The partition function of bipartite, planar maps of the $2p$-gon with Boltzmann weights $\\mathbf{q}$ is defined as\n\\[W_p(\\mathbf{q})=\\sum_{m} \\prod_{f \\in m} q_{\\mathrm{deg}(f)\/2},\\]\nwhere the sum ranges over all planar bipartite maps $m$ of the $2p$-gon, and the product is over internal faces of $m$. We also denote by $W_p^{\\bullet}(\\mathbf{q})$ the \\emph{pointed} partition function, i.e. the sum obtained by multiplying the weight of a map $m$ by its total number of vertices. Note that $W_1(\\mathbf{q})$ can also be interpreted as the partition function of maps of the sphere.\n\nWe recall from \\cite{MM07} the classical necessary and sufficient condition for the finiteness of these partition functions. Given a weight sequence $\\mathbf{q} \\in \\mathcal{Q}$, let \n\\begin{equation*}\nf_{\\mathbf{q}}(x)=\\sum_{j \\geq 1} q_{j}\\binom{2 j-1}{j-1} x^{j-1}.\n\\end{equation*}\nIf the equation \n\\begin{equation}\\label{eq_univ_admissible}\nf_{\\mathbf{q}}(x)=1-\\frac{1}{x}\n\\end{equation}\nhas a positive solution $Z_{\\mathbf{q}}$, we call $\\mathbf{q}$ \\emph{admissible}, and write $c_{\\mathbf{q}}=4Z_{\\mathbf{q}}$. Then by results from \\cite{MM07}, for all $p \\geq 1$, the partition functions $W_p(\\mathbf{q})$ and $W_p^{\\bullet}(\\mathbf{q})$ are finite if and only if $\\mathbf{q}$ is admissible. Moreover, in this case, for $p \\geq 0$, we have\n\\begin{equation}\\label{eqn_exact_pointed_partition_function}\nW_p^{\\bullet}(\\mathbf{q}) = c_{\\mathbf{q}}^p \\times \\frac{1}{4^p} \\binom{2p}{p}.\n\\end{equation}\nIt is also possible to derive simple integral formulas for $W_p(\\mathbf{q})$ in terms of $c_{\\mathbf{q}}$ but this will not be needed here; see \\cite{C-StFlour} for more details. We denote by $\\mathcal{Q}_a$ the set of admissible weight sequences.\n\nFinally, let $\\mathcal{Q}^*$ be the set of those $\\mathbf{q}=(q_j)_{j \\geq 1} \\in \\mathcal{Q}$ for which there exists $j \\geq 2$ such that $q_j>0$ (which ensures $W_p(\\mathbf{q})>0$ for all $p \\geq 1$). For $\\mathbf{q} \\in \\mathcal{Q}^* \\cap \\mathcal{Q}_a$, we define the \\emph{Boltzmann distribution} with weights $\\mathbf{q}$ on finite planar bipartite maps of the $2p$-gon as\n\\[\\P(m)=\\frac{1}{W_p(\\mathbf{q})} \\prod_{f \\in m} q_{\\mathrm{deg}(f)\/2}\\]\nfor every bipartite planar map $m$ of the $2p$-gon.\n\n\\paragraph{A general recursion for bipartite maps.}\nAs in \\cite{BL19}, we are lacking precise asymptotics on the enumeration of maps when both the genus and the size go to infinity. The following recurrence formula, proved in \\cite{Lo19}, will play the same role as the Goulden--Jackson formula for triangulations \\cite{GJ08}. We set the convention $\\beta_g(\\mathbf{0})=0$ for all $g$. Then, for every $g\\geq 0$ and every face degree sequence $\\mathbf{f}$, we have\n\\begin{equation}\\label{rec_biparti_genre_univ}\n\\binom{|\\mathbf{f}|+1}{2}\\beta_g(\\mathbf{f})=\\hspace{-0.5cm}\\sum_{\\substack{\\mathbf{h}^{(1)}+\\mathbf{h}^{(2)}=\\mathbf{f}\\\\g^{(1)}+g^{(2)}+g^*=g}}\\hspace{-0.5cm}(1+|\\mathbf{h}^{(1)}|)\\binom{v \\left( \\mathbf{h}^{(2)},g^{(2)} \\right) }{2g^*+2}\\beta_{g^{(1)}}(\\mathbf{h}^{(1)}) \\beta_{g^{(2)}}(\\mathbf{h}^{(2)})+\\sum_{g^*\\geq 0}\\binom{v \\left( \\mathbf{f},g \\right) +2g^*}{2g^*+2}\\beta_{g-g^*}(\\mathbf{f}),\n\\end{equation}\nwhere we recall that $|\\mathbf{f}|=\\sum_{j \\geq 1} j f_j$ and $v(\\mathbf{f},g)=2-2g+\\sum_j (j-1) f_j$ (i.e. 
it is the number of vertices of a map with face degrees $\\mathbf{f}$ and genus $g$).\n\n\\subsection{Infinite Boltzmann bipartite planar maps}\n\\label{subsec_IBPM}\n\n\\paragraph{Definition of the models.}\nOur goal here is to recall the definition of infinite Boltzmann bipartite planar maps introduced in \\cite[Appendix C]{B18these} (and earlier in \\cite{Bud15} in the critical case). We also refer to \\cite[Chapter 8]{C-StFlour} for some basic properties of these objects that we will state below.\n\nLet $\\mathbf{q}=(q_j)_{j \\geq 1}$ be a sequence of nonnegative real numbers that we will call the \\emph{Boltzmann weights}. A random infinite bipartite planar map $M$ is called $\\mathbf{q}$-Boltzmann if there are constants $\\left( C_p(\\mathbf{q}) \\right)_{p \\geq 1}$ such that, for every finite bipartite map $m$ with one hole of perimeter $2p$, we have\n\\[ \\P \\left( m \\subset M \\right)=C_p(\\mathbf{q}) \\prod_{f \\in m} q_{\\mathrm{deg}(f)\/2},\\]\nwhere the product is over all internal faces of $m$.\n\nWe will see that given $\\mathbf{q}$, such a map does not always exist, but when it does, it is unique, i.e. the constants $C_p(\\mathbf{q})$ are determined by $\\mathbf{q}$, which justifies the notation $C_p(\\mathbf{q})$. More precisely, as noted in \\cite[Appendix C]{B18these}, if a $\\mathbf{q}$-Boltzmann map exists, then the partition function of maps of a $2$-gon with Boltzmann weights $\\mathbf{q}$ must be finite, which is equivalent to the admissibility criterion \\eqref{eq_univ_admissible}. Moreover, with the notation of Section~\\ref{subsec_univ_defns}, we call $\\mathbf{q}$ \\emph{critical} if $f'_{\\mathbf{q}}(Z_{\\mathbf{q}})=\\frac{1}{Z_{\\mathbf{q}}^2}$ and \\emph{subcritical} if this is not the case.\n\nFinally, we define a measure $\\nu_{\\mathbf{q}}$ on $\\mathbb Z$ as follows:\n\\begin{equation}\\label{eqn_defn_nu}\n\\nu_{\\mathbf{q}}(i)= \\left\\{ \n\\begin{array}{ll}\nq_{i+1} \\, c_{\\mathbf{q}}^{i} & \\mbox{ if $i \\geq 0$,} \\\\\n2 W_{-1-i}(\\mathbf{q}) \\, c_{\\mathbf{q}}^i & \\mbox{ if $i \\leq -1$.}\n\\end{array} \\right.\n\\end{equation}\nAs noted in \\cite{Bud15}, this is the step distribution of the random walk on $\\mathbb Z$ describing the evolution of the perimeter of a finite $\\mathbf{q}$-Boltzmann map with a large perimeter (see also \\cite[Chapter 5.1]{C-StFlour}). 
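Informally, a step $i \\geq 0$ of $\\nu_{\\mathbf{q}}$ corresponds to the discovery of a new face of degree $2(i+1)$ (hence the weight $q_{i+1}$), while a step $i \\leq -1$ corresponds to the identification of the peeled edge with another boundary edge, which swallows a hole of perimeter $2(-1-i)$ filled with a finite Boltzmann map; this accounts for the factor $W_{-1-i}(\\mathbf{q})$, the factor $2$ coming from the two possible sides on which this hole can be swallowed. 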
Then previous results about the existence of $\\mathbf{q}$-IBPM can be summed up as follows.\n\n\\begin{thm}\\label{thm_rappel_these}\n\\begin{enumerate}\n\\item\nIf a $\\mathbf{q}$-IBPM exists, it is unique (in distribution), so we can denote it by $\\mathbb M_{\\mathbf{q}}$.\n\\item\nIf $\\mathbf{q} \\notin \\mathcal{Q}^* \\cap \\mathcal{Q}_a$, then $\\mathbb M_{\\mathbf{q}}$ does not exist.\n\\item\nIf $\\mathbf{q} \\in \\mathcal{Q}^* \\cap \\mathcal{Q}_a$ is critical, then $\\mathbb M_{\\mathbf{q}}$ exists and $C_p(\\mathbf{q})=c_{\\mathbf{q}}^{p-1} \\times \\frac{2p}{4^p} \\binom{2p}{p}$.\n\\item\nIf $\\mathbf{q} \\in \\mathcal{Q}^* \\cap \\mathcal{Q}_a$ is subcritical, then $\\mathbb M_{\\mathbf{q}}$ exists if and only if the equation\n\\begin{equation}\\label{boltzmann_equation_omega}\n\\sum_{i \\in \\mathbb Z} \\nu_{\\mathbf{q}}(i) \\omega^i =1\n\\end{equation}\nhas a solution $\\omega_{\\mathbf{q}}>1$.\n\\item\nIn this case, the solution $\\omega_{\\mathbf{q}}$ is unique and, for every $p \\geq 1$, we have\n\\begin{equation}\\label{boltzmann_formula_cp}\nC_p(\\mathbf{q})= \\left( c_{\\mathbf{q}} \\omega_{\\mathbf{q}} \\right)^{p-1} \\sum_{i=0}^{p-1} (4\\omega_{\\mathbf{q}})^{-i} \\binom{2i}{i}.\n\\end{equation}\n\\end{enumerate}\n\\end{thm}\nThe third point is from \\cite{Bud15}, and the others are from \\cite[Appendix C]{B18these}\\footnote{We have fixed a small mistake from \\cite[Appendix C]{B18these}, where $c_{\\mathbf{q}}$ was omitted in the formula for $C_p(\\mathbf{q})$.}. When it exists, we will call the map $\\mathbb M_{\\mathbf{q}}$ the \\emph{$\\mathbf{q}$-IBPM} (for \\emph{Infinite Boltzmann Planar Map}).\nWe denote by $\\mathcal{Q}_h \\subset \\mathcal{Q}^* \\cap \\mathcal{Q}_a$ the set of weight sequences $\\mathbf{q}$ for which $\\mathbb M_{\\mathbf{q}}$ exists. We also note that the formula for $C_p(\\mathbf{q})$ in the critical case is a particular case of the subcritical one where $\\omega=1$. Since this function will appear many times later, for $\\omega \\geq 1$ and $p \\geq 1$, we write:\n\\begin{equation}\\label{eqn_defn_homega}\nh_p(\\omega)=\\sum_{i=0}^{p-1} (4\\omega)^{-i} \\binom{2i}{i}.\n\\end{equation}\nIn particular, if $\\omega=1$, then $h_p(\\omega)=\\frac{2p}{4^p} \\binom{2p}{p} \\sim \\frac{2}{\\sqrt{\\pi}} \\sqrt{p}$ as $p \\to +\\infty$. If $\\omega>1$, then $h_p(\\omega) \\to \\sqrt{\\frac{\\omega}{\\omega-1}}$ as $p \\to +\\infty$.\n\n\\paragraph{The random walk $\\widetilde{\\nu}_{\\mathbf{q}}$.}\nTo study the $\\mathbf{q}$-IBPM, we define the measure $\\widetilde{\\nu}_{\\mathbf{q}}$ on $\\mathbb Z$ by $\\widetilde{\\nu}_{\\mathbf{q}}(i)=\\omega_{\\mathbf{q}}^i \\nu_{\\mathbf{q}}(i)$, where $\\omega_{\\mathbf{q}}$ is given by \\eqref{boltzmann_equation_omega} if $\\mathbf{q}$ is subcritical, and $\\omega_{\\mathbf{q}}=1$ if $\\mathbf{q}$ is critical. The random walk with step distribution $\\widetilde{\\nu}_{\\mathbf{q}}$ plays an important role when studying $\\mathbb M_{\\mathbf{q}}$. We first note that, if $\\mathbf{q}$ is not critical, then this walk has a positive drift. Indeed, denoting by $F_{\\mathbf{q}}$ the generating function of $\\nu_{\\mathbf{q}}$, we have\n\\[ \\sum_{i \\in \\mathbb Z} i \\, \\widetilde{\\nu}_{\\mathbf{q}}(i) = \\omega_{\\mathbf{q}} F'_{\\mathbf{q}}(\\omega_{\\mathbf{q}})>0,\\]\nsince $F_{\\mathbf{q}}$ is convex and takes the value $1$ both at $1$ and at $\\omega_{\\mathbf{q}}>1$. 
Note also that it is possible that the drift is $+\\infty$.\n\n\\paragraph{Lazy peeling explorations of the $\\mathbf{q}$-IBPM.}\nWe now perform a few computations related to lazy peeling explorations of the $\\mathbf{q}$-IBPM. For this, we fix a peeling algorithm $\\mathcal{A}$, and consider a filled-in exploration of $\\mathbb M_{\\mathbf{q}}$ according to $\\mathcal{A}$. We recall that $\\mathcal{E}_t^{\\mathcal{A}}(\\mathbb M_{\\mathbf{q}})$ is the explored region after $t$ steps, and we denote by $\\left( \\mathcal{F}_t \\right)_{t \\geq 0}$ the filtration generated by this exploration. We denote by $P_t$ (resp. $V_t$) the half-perimeter (resp. total number of edges) of $\\mathcal{E}_t^{\\mathcal{A}}(\\mathbb M_{\\mathbf{q}})$. We will call $P$ and $V$ the \\emph{perimeter and volume processes} associated to a peeling exploration of $\\mathbb M_{\\mathbf{q}}$.\n\nIt follows from the definition of $\\mathbb M_{\\mathbf{q}}$ that $(P_t, V_t)_{t \\geq 0}$ is a Markov chain on $\\mathbb N^2$ and that its law does not depend on the algorithm $\\mathcal{A}$. More precisely $P$ is a Doob transform of the random walk with step distribution $\\widetilde{\\nu}_{\\mathbf{q}}$, i.e. it has the following transitions:\n\\begin{equation}\\label{peeling_transitions}\n\\P \\left( P_{t+1}=P_t+i | \\mathcal{F}_t \\right)= \\widetilde{\\nu}_{\\mathbf{q}}(i) \\frac{h_{P_t+i}(\\omega_{\\mathbf{q}})}{h_{P_t}(\\omega_{\\mathbf{q}})},\n\\end{equation}\nwhere $h_p(\\omega)$ is given by \\eqref{eqn_defn_homega}. As noted in \\cite{C-StFlour}, this implies that $\\left( h_p(\\omega_{\\mathbf{q}}) \\right)_{p \\geq 1}$ is harmonic on $\\{1,2, \\dots\\}$ for the random walk with step distribution $\\widetilde{\\nu}_{\\mathbf{q}}$, and that for $\\mathbf{q}$ subcritical $P$ has the distribution of this random walk, conditioned to stay positive (if $\\mathbf{q}$ is critical, the conditioning is degenerate, but this can still be made sense of).\n\n\\paragraph{IBPM with finite expected degree of the root face.}\nWe denote by $\\mathcal{Q}_f$ the set of $\\mathbf{q} \\in \\mathcal{Q}_h$ such that the degree of the root face of $\\mathbb M_{\\mathbf{q}}$ has finite expectation. Since our Theorem~\\ref{univ_main_thm} only holds if the expected degree of the root face is finite in the limit, we will need to gather a few consequences of this assumption on $\\mathbf{q}$ and the peeling process of $\\mathbb M_{\\mathbf{q}}$. Note that, for all $\\mathbf{q} \\in \\mathcal{Q}_h$, the degree of the root face is determined by the first peeling step on $\\mathbb M_{\\mathbf{q}}$. More precisely, by \\eqref{peeling_transitions}, we have for all $j \\geq 1$:\n\\begin{equation}\\label{eqn_walk_to_rootface}\n\\P \\left( \\mbox{the root face of $\\mathbb M_{\\mathbf{q}}$ has degree $2j$} \\right)=\\frac{h_j(\\omega_{\\mathbf{q}})}{h_1(\\omega_{\\mathbf{q}})} \\, \\widetilde{\\nu}_{\\mathbf{q}}(j-1) = \\frac{h_j(\\omega_{\\mathbf{q}})}{h_1(\\omega_{\\mathbf{q}})} \\left( c_{\\mathbf{q}} \\omega_{\\mathbf{q}} \\right)^{j-1} q_j.\n\\end{equation}\nIf $\\mathbf{q}$ is critical, the right hand-side is equivalent to $\\frac{2}{\\sqrt{\\pi}}\\sqrt{j} c_{\\mathbf{q}}^{j-1} q_j$ as $j \\to +\\infty$, so $\\mathbf{q} \\in \\mathcal{Q}_f$ if and only if\n\\begin{equation}\\label{eqn_finite_32_moment}\n\\sum_{j \\geq 1} j^{3\/2} c_{\\mathbf{q}}^j q_j <+\\infty.\n\\end{equation}\nOn the other hand, we recall (see e.g. 
\\cite[Chapter 5.2]{C-StFlour}) that a critical weight sequence $\\mathbf{q}$ is called \\emph{of type $\\frac{5}{2}$}, or \\emph{critical generic}, if\n\\[ \\sum_{j \\geq 1} (j-1)(j-2) \\binom{2j-1}{j-1} q_j \\left( \\frac{c_{\\mathbf{q}}}{4} \\right)^{j-3} <+\\infty, \\]\nwhich is clearly equivalent to \\eqref{eqn_finite_32_moment}.\nIn the subcritical case, by \\eqref{eqn_walk_to_rootface}, $\\mathbf{q} \\in \\mathcal{Q}_f$ is equivalent to\n\\[\\sum_{j \\geq 1} j \\widetilde{\\nu}_{\\mathbf{q}}(j) <+\\infty,\\] i.e. the drift of $\\widetilde{\\nu}$ is finite. To sum up:\n\\begin{itemize}\n\\item\nIn the critical case, $\\mathbf{q} \\in \\mathcal{Q}_f$ if and only if $\\mathbf{q}$ is critical generic, which means that the perimeter process $(P_n)$ converges to a $3\/2$-stable L\u00e9vy process with no positive jump, conditioned to be positive (see \\cite[Theorem 10.1]{C-StFlour}). This basically means that $\\mathbf{q}$-Boltzmann finite maps for $\\mathbf{q} \\in \\mathcal{Q}_f$ lie in the domain of attraction of the Brownian map \\cite{MM07}.\n\\item\nIn the subcritical case, $\\mathbf{q} \\in \\mathcal{Q}_f$ if and only if the measure $\\widetilde{\\nu}$ has finite expectation. Since the perimeter process $P$ has the law of a $\\widetilde{\\nu}$-random walk conditioned on an event of positive probability, this means that $P$ has linear growth (instead of super-linear if the expectation of $\\widetilde{\\nu}$ were infinite).\n\\end{itemize}\n\n\\subsection{Four ways to describe Boltzmann weights}\n\n\\paragraph{Four parametrizations of $\\mathcal{Q}_h$.}\nIn this work, we will make use of four different \"coordinate systems\" to navigate through the spaces $\\mathcal{Q}_h$ and $\\mathcal{Q}_f$, each with its own advantages. The goal of this section is to define these parametrizations and to establish some relations between them. In particular, we will prove Theorem \\ref{thm_prametrization_rootface}.\n\nOur first coordinate system, already used in the previous pages, consists in using directly the Boltzmann weights $q_j$ for $j \\geq 1$. It is the simplest way to define the model $\\mathbb M_{\\mathbf{q}}$ and gives the simplest description of its law.\n\nThe second parametrization we will use is the one given by Proposition~\\ref{prop_q_as_limit} below: we describe $\\mathbf{q}$ by parameters $r_j(\\mathbf{q}) \\in [0,1)$ for $j \\geq 1$ and $r_{\\infty}(\\mathbf{q}) \\in (0,+\\infty]$. Here $r_j(\\mathbf{q})$ describes the proportion of peeling steps where we discover a face of degree $2j$ during a peeling exploration of $\\mathbb M_{\\mathbf{q}}$, and $r_{\\infty}(\\mathbf{q})$ comes from a comparison between the volume and perimeter growths. The advantage of these parameters is that they allow us to \"read\" $\\mathbf{q}$ as an almost sure observable on a peeling exploration of the map $\\mathbb M_{\\mathbf{q}}$. This will be useful in Section~\\ref{sec_arg_deux_trous}.\n\nThe third parametrization consists in using on the one hand the law of the root face, and on the other hand the average degree of the vertices. 
More precisely, for $j \\geq 1$, we write\n\\[a_j(\\mathbf{q})=\\frac{1}{j} \\P \\left( \\mbox{the root face of $\\mathbb M_{\\mathbf{q}}$ has degree $2j$} \\right).\\]\nWe note that $\\sum_{j \\geq 1} j a_j(\\mathbf{q})=1$ and that $a_1(\\mathbf{q})<1$ since a map consisting only of $2$-gons would have vertices with infinite degrees.\nWe also write $d(\\mathbf{q})=\\mathbb E \\left[ \\frac{1}{\\mathrm{deg}_{\\mathbb M_{\\mathbf{q}}}(\\rho)} \\right]$, where $\\rho$ is the root vertex. The advantage of this parametrization is that the analogues of these quantities are easy to compute if we replace $\\mathbb M_{\\mathbf{q}}$ by a finite uniform map with prescribed genus and face degrees. These parameters are our only way to link the finite and infinite models, and will therefore be useful in the end of the proof of Theorem~\\ref{univ_main_thm}. However, it is not obvious at all that $\\left( a_j(\\mathbf{q}) \\right)_{j \\geq 1}$ and $d(\\mathbf{q})$ are sufficient to characterize $\\mathbf{q}$. We will actually prove this in the end of the paper, only for $\\mathbf{q} \\in \\mathcal{Q}_f$, as a consequence of local convergence arguments (Proposition~\\ref{prop_monotonicity_deg}). \n\nFinally, the fourth coordinate system is the one from Theorem \\ref{thm_prametrization_rootface}: it is a variant of the third one where we replace $d(\\mathbf{q})$ by $\\omega_{\\mathbf{q}}$, which makes it easier to handle. This one is useful as an intermediate step towards the third one. Moreover, contrary to the third one, we can prove rather quickly (Theorem \\ref{thm_prametrization_rootface}) that it provides a nice parametrization of the whole space $\\mathcal{Q}_h$.\n\n\\paragraph{Recovering $\\mathbf{q}$ from explorations of $\\mathbb M_{\\mathbf{q}}$.}\nWe now describe more precisely our second parametrization of $\\mathcal{Q}_h$. The next result basically states that we can recover the weight sequence $\\mathbf{q}$ by just observing the perimeter and volume processes defined above (we recall that the volume is measured by the total number of edges).\n\n\\begin{prop}\\label{prop_q_as_limit}\nLet $\\mathbf{q} \\in \\mathcal{Q}_h$, and let $P$ and $V$ be the perimeter and volume processes associated to a peeling exploration of $\\mathbb M_{\\mathbf{q}}$. We have the following almost sure convergences:\n\\begin{equation}\\label{limit_number_jgons}\n\\frac{1}{t} \\sum_{i=0}^{t-1} \\mathbbm{1}_{P_{i+1}-P_i=j-1} \\xrightarrow[t \\to +\\infty]{a.s.} \\left( c_{\\mathbf{q}} \\omega_{\\mathbf{q}} \\right)^{j-1} q_j =: r_j(\\mathbf{q}) \\in [0,1)\n\\end{equation}\nfor every $j \\geq 1$, and\n\\begin{equation}\\label{limit_volume_growth}\n\\frac{V_t-2P_t}{t} \\xrightarrow[t \\to +\\infty]{a.s.} \\frac{ \\left(\\sqrt{\\omega_{\\mathbf{q}}}-\\sqrt{\\omega_{\\mathbf{q}}-1} \\right)^2}{2 \\sqrt{\\omega_{\\mathbf{q}}(\\omega_{\\mathbf{q}}-1)}} =: r_{\\infty}(\\mathbf{q}) \\in (0,+\\infty].\n\\end{equation}\nMoreover, the weight sequence $\\mathbf{q}$ is a measurable function of the numbers $r_j(\\mathbf{q})$ for $j \\in \\mathbb N^* \\cup \\{\\infty\\}$.\n\\end{prop}\n\n\\begin{proof}\nIn the subcritical case, the second convergence is Proposition 10.12 of \\cite{C-StFlour}. In the critical case, we have $\\omega_{\\mathbf{q}}=1$ so the right-hand side of \\eqref{limit_volume_growth} is infinite, and the result follows from Lemma 10.9 of \\cite{C-StFlour}.\n\nLet us now prove the first convergence. For this, we first note that we have $P_t \\to +\\infty$ almost surely as $t \\to +\\infty$. 
Indeed, this again follows from \\cite[Proposition 10.12]{C-StFlour} in the subcritical case and from \\cite[Lemma 10.9]{C-StFlour} in the critical case. On the other hand, given the asymptotics for $h_p(\\omega)$ right after \\eqref{eqn_defn_homega}, for any fixed $j \\geq 1$, we have $\\frac{h_{p+j-1}(\\omega_{\\mathbf{q}})}{h_p(\\omega_{\\mathbf{q}})} \\to 1$ as $p \\to +\\infty$. It follows that\n\\[ \\P \\left( P_{t+1}-P_t=j-1 | \\mathcal{F}_t \\right) \\xrightarrow[t \\to +\\infty]{a.s.} \\widetilde{\\nu}_{\\mathbf{q}}(j-1) = \\left( c_{\\mathbf{q}} \\omega_{\\mathbf{q}} \\right)^{j-1} q_j, \\]\nand the first convergence follows by the law of large numbers.\n\nFinally, the function $\\omega \\to \\frac{ \\left(\\sqrt{\\omega}-\\sqrt{\\omega-1} \\right)^2}{2 \\sqrt{\\omega(\\omega-1)}}$ is a decreasing homeomorphism from $[1,+\\infty)$ to $(0,+\\infty]$, so $\\omega_{\\mathbf{q}}$ is a measurable function of $r_{\\infty}(\\mathbf{q})$. Moreover, by the\ndefinition of $c_{\\mathbf{q}}$~\\eqref{eq_univ_admissible}, we have\n\\[ 1-\\frac{4}{c_{\\mathbf{q}}} = \\sum_{j \\geq 1} \\binom{2j-1}{j-1} q_j \\left( \\frac{c_{\\mathbf{q}}}{4} \\right)^{j-1} = \\sum_{j \\geq 1} \\frac{1}{4^{j-1}} \\binom{2j-1}{j-1} \\frac{r_j(\\mathbf{q})}{\\omega_{\\mathbf{q}}^{j-1}},\\]\nwhich implies that $c_{\\mathbf{q}}$ is a measurable function of $\\omega_{\\mathbf{q}}$ and the numbers $r_j(\\mathbf{q})$ for $j \\in \\mathbb N^*$. Finally, given $c_{\\mathbf{q}}$ and the $r_j(\\mathbf{q})$, we easily recover the $q_j$ from~\\eqref{limit_number_jgons}.\n\\end{proof}\n\n\\paragraph{Weight sequences corresponding to a given distribution of the root face.}\nWe now prove Theorem~\\ref{thm_prametrization_rootface} by showing that our fourth parametrization is indeed bijective. We first state the precise version of Theorem~\\ref{thm_prametrization_rootface}. We recall that for $\\mathbf{q} \\in \\mathcal{Q}_h$, the numbers $a_j(\\mathbf{q})$ satisfy $\\sum_{j \\geq 1} j a_j(\\mathbf{q})=1$ and $a_1(\\mathbf{q})<1$, and we have $\\omega_{\\mathbf{q}} \\geq 1$.\n\n\\begin{prop}\\label{prop_third_parametrization}\nLet $(\\alpha_j)_{j \\geq 1}$ be such that $\\sum_{j \\geq 1} j \\alpha_j=1$ and $\\alpha_1 < 1$, and let $\\omega \\geq 1$. Then there is a unique $\\mathbf{q} \\in \\mathcal{Q}_h$ such that\n\\[ \\omega_{\\mathbf{q}}=\\omega \\mbox{ and } \\forall j \\geq 1, \\, a_j(\\mathbf{q})=\\alpha_j.\\]\nMoreover, this weight sequence $\\mathbf{q}$ is given by\n\\begin{equation}\\label{eqn_qjomega_univ}\nq_j=\\frac{j \\alpha_j}{\\omega^{j-1}h_{j}(\\omega)} \\left( \\frac{1-\\sum_{i \\geq 1} \\frac{1}{4^{i-1}} \\binom{2i-1}{i-1} \\frac{i \\alpha_i}{\\omega^{i-1} h_{i}(\\omega)} }{4} \\right)^{j-1}.\n\\end{equation}\n\\end{prop}\n\n\\begin{proof}[Proof of Proposition~\\ref{prop_third_parametrization}]\nWe start with uniqueness. We note that\n\\begin{equation}\\label{eqn_alpha_from_q}\na_j(\\mathbf{q})=\\frac{1}{j} C_j(\\mathbf{q}) q_j = \\frac{1}{j} \\left( c_{\\mathbf{q}} \\omega_{\\mathbf{q}}\\right)^{j-1} h_{j}(\\omega_{\\mathbf{q}}) q_j,\n\\end{equation}\nso $q_j$ can be obtained as a function of $a_j(\\mathbf{q})=\\alpha_j$, $\\omega_{\\mathbf{q}}$ and $c_{\\mathbf{q}}$.
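More explicitly, \\eqref{eqn_alpha_from_q} can be inverted as\n\\[ q_j = \\frac{j \\alpha_j}{\\left( c_{\\mathbf{q}} \\omega_{\\mathbf{q}} \\right)^{j-1} h_{j}(\\omega_{\\mathbf{q}})}. \\]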
Moreover, by the definition~\\eqref{eq_univ_admissible} of $c_{\\mathbf{q}}$, we have\n\\begin{equation}\\label{eqn_g_function_of_aj}\n1-\\frac{4}{c_{\\mathbf{q}}}=\\sum_{i \\geq 1} \\frac{1}{4^{i-1}} \\binom{2i-1}{i-1} q_i c_{\\mathbf{q}}^{i-1} = \\sum_{i \\geq 1} \\frac{1}{4^{i-1}} \\binom{2i-1}{i-1} \\frac{i a_i(\\mathbf{q})}{\\omega_{\\mathbf{q}}^{i-1} h_{i}(\\omega_{\\mathbf{q}})},\n\\end{equation}\nso $c_{\\mathbf{q}}$, and therefore $q_j$ for all $j \\geq 1$, can be deduced from $\\omega_{\\mathbf{q}}$ and $\\left( a_j(\\mathbf{q}) \\right)_{j \\geq 1}$. More precisely, we obtain the formula~\\eqref{eqn_qjomega_univ}, which in particular proves the uniqueness.\n\nTo prove the existence, it is enough to check that, for all $\\omega \\geq 1$ and $(\\alpha_j)_{j \\geq 1}$ with $\\sum j \\alpha_j=1$ and $\\alpha_1<1$, the sequence $\\mathbf{q}$ given by \\eqref{eqn_qjomega_univ} is indeed in $\\mathcal{Q}_h$, with $\\omega_{\\mathbf{q}}=\\omega$ and $a_j(\\mathbf{q})=\\alpha_j$ for all $j$. \nFollowing \\eqref{eqn_g_function_of_aj}, we first write\n\\begin{equation}\\label{eqn_c_of_alpha_omega}\nc=\\frac{4}{1-\\sum_{i \\geq 1} \\frac{1}{4^{i-1}} \\binom{2i-1}{i-1} \\frac{i \\alpha_i}{\\omega^{i-1} h_{i}(\\omega) }},\n\\end{equation}\nand check that $\\mathbf{q}$ is admissible with $c_{\\mathbf{q}}=c$. First $\\omega^{i-1} h_{i}(\\omega)$ is a polynomial in $\\omega$ with nonnegative coefficients so $\\omega^{i-1} h_{i}(\\omega) \\geq h_i(1)=\\frac{2i}{4^i} \\binom{2i}{i}$. From here, we get\n\\[ \\sum_{i \\geq 1} \\frac{1}{4^{i-1}} \\binom{2i-1}{i-1} \\frac{i \\alpha_i}{\\omega^{i-1} h_{i}(\\omega) } \\leq \\sum_{i \\geq 1} \\alpha_i < \\sum_{i \\geq 1} i \\alpha_i = 1 \\]\nbecause $\\alpha_1 <1$. Therefore, the numbers $q_j$ are nonnegative and $c>0$, and we can rewrite \\eqref{eqn_qjomega_univ} as\n\\[q_j=\\frac{j \\alpha_j}{(\\omega c)^{j-1} h_{j}(\\omega)}. \\]\nFrom here, we get\n\\[ \\sum_{i \\geq 1} \\frac{1}{4^{i-1}} \\binom{2i-1}{i-1} q_i c^{i-1} = 1-\\frac{4}{c}\\]\nimmediately by the definition of $c$, which proves $\\mathbf{q} \\in \\mathcal{Q}_a$ and $c_{\\mathbf{q}}=c$.\nAlso, we know that $\\alpha_1<1$ so there is $j \\geq 2$ with $\\alpha_j>0$, which implies $q_j>0$, so $\\mathbf{q} \\in \\mathcal{Q}^*$.\n\nWe now prove $\\mathbf{q} \\in \\mathcal{Q}_h$ with $\\omega_{\\mathbf{q}}=\\omega$, which is equivalent to proving\n\\[ \\sum_{i \\in \\mathbb Z} \\nu_{\\mathbf{q}}(i) \\omega^i = 1,\\]\nwhere we recall that $\\nu_{\\mathbf{q}}$ is defined by \\eqref{eqn_defn_nu}. For this,\ninspired by similar arguments in the critical case (see e.g. \\cite[Lemma 5.2]{C-StFlour}), the basic idea will be to show that $\\left( \\omega^i h_{i}(\\omega) \\right)_{i \\geq 1}$ is harmonic for $\\nu_{\\mathbf{q}}$. More precisely, the equality $\\sum_{i \\geq 1} i \\alpha_i=1$ can be interpreted as a harmonicity relation at $1$: setting $h_i(\\omega)=0$ for $i \\leq -1$, we have\n\\begin{equation}\\label{eqn_harmo_at_one}\n\\sum_{i \\in \\mathbb{Z}} h_{i+1}(\\omega) \\omega^i \\nu_{\\mathbf{q}}(i) = \\sum_{j \\geq 1} \\omega^{j-1} h_{j}(\\omega) c^{j-1} q_j = \\sum_{j \\geq 1} j \\alpha_j = 1 = h_{1}(\\omega),\n\\end{equation}\nwhere in the beginning we do the change of variables $j=i+1$. On the other hand, we know that $h_{p}(\\omega)=\\sum_{i=0}^{p-1} \\omega^{-i} u(i)$, where $u(i)=\\frac{1}{4^i} \\binom{2i}{i}$ for $i \\geq 0$ (and we set the convention $u(i)=0$ for $i \\leq -1$). 
But the same function $u$ plays an important role in the description of the law of the peeling process of finite Boltzmann maps. In particular, we know that $u$ is $\\nu_{\\mathbf{q}}$-harmonic on positive integers for any admissible weight sequence $\\mathbf{q}$ (this can be found in the proof of Lemma 5.2 in \\cite{C-StFlour}). That is, for all $j \\geq 1$, we have\n\\[ u(j)=\\sum_{i \\in \\mathbb{Z}} \\nu_{\\mathbf{q}}(i) u(i+j).\\]\nMultiplying by $\\omega^{-j}$ and summing over $1 \\leq j \\leq p-1$, we get, for all $p \\geq 1$:\n\\[ h_{p}(\\omega)-h_{1}(\\omega) = \\sum_{i \\in \\mathbb Z} \\omega^i \\nu_{\\mathbf{q}}(i) \\left( h_{p+i}(\\omega) - h_{i+1}(\\omega) \\right). \\]\nSumming this with \\eqref{eqn_harmo_at_one} and dividing by $h_{p}(\\omega)$, we obtain\n\\[ \\sum_{i \\in \\mathbb Z} \\omega^i \\nu_{\\mathbf{q}}(i) \\frac{h_{p+i}(\\omega)}{h_{p}(\\omega)} =1 \\]\nfor all $p \\geq 1$. When $p \\to +\\infty$, we have that\n$h_{p}(\\omega)$ has a positive limit if $\\omega>1$ and is equivalent to $\\frac{2}{\\sqrt{\\pi}}\\sqrt{p}$ if $\\omega=1$, so $\\frac{h_{p+i}(\\omega)}{h_{p}(\\omega)} \\to 1$ in every case. Therefore, by dominated convergence, we get\n\\[ \\sum_{i \\in \\mathbb Z} \\nu_{\\mathbf{q}}(i) \\omega^i=1,\\]\nwhere the domination $\\sum_{i \\in \\mathbb Z} \\nu_{\\mathbf{q}}(i) \\omega^i <+\\infty$ is immediate for negative values of $i$ since $\\omega \\geq 1$, and comes from the convergence of the sum \\eqref{eqn_harmo_at_one} for positive values of $i$. This proves $\\mathbf{q} \\in \\mathcal{Q}_h$ with $\\omega_{\\mathbf{q}}=\\omega$, and from here $a_j(\\mathbf{q})=\\alpha_j$ is immediate using \\eqref{eqn_alpha_from_q}.\n\\end{proof}\n\n\\begin{proof}[Proof of Theorem \\ref{thm_prametrization_rootface}]\nIt is clear from Proposition \\ref{prop_third_parametrization} that weight sequences with a given root face distribution are parametrized by $\\omega \\in [1,+\\infty)$. We denote by $\\mathbf{q}^{(\\omega)}$ the unique weight sequence for which the law of the root face is given by $(\\alpha_j)_{j \\geq 1}$ and for which $\\omega_{\\mathbf{q}^{(\\omega)}}=\\omega$. Then $\\mathbf{q}^{(1)}$ is critical by definition. Moreover, using \\eqref{eqn_qjomega_univ} and \\eqref{eqn_c_of_alpha_omega}, we get for $i \\geq 0$:\n\\begin{equation}\\label{eqn_omega_infinite}\n\\widetilde{\\nu}_{\\mathbf{q}^{(\\omega)}}(i)=q^{(\\omega)}_{i+1} \\omega^i c_{\\mathbf{q}^{(\\omega)}}^i \\xrightarrow[\\omega \\to +\\infty]{} (i+1) \\alpha_{i+1}.\n\\end{equation}\nThe sum over $i \\geq 0$ of the right-hand side is equal to $1$, so $\\widetilde{\\nu}_{\\mathbf{q}^{(\\omega)}} \\left( (-\\infty,-1] \\right) \\to 0$ as $\\omega \\to +\\infty$. By \\eqref{peeling_transitions}, this means that the probability of peeling cases decreasing the perimeter goes to $0$. Since these are the cases creating cycles in the dual, the dual of $\\mathbb M_{\\mathbf{q}^{(\\omega)}}$ becomes close to a tree when $\\omega \\to +\\infty$, and the vertex degrees in $\\mathbb M_{\\mathbf{q}^{(\\omega)}}$ go to infinity.\n\\end{proof}\n\n\\paragraph{Two technical results on the dependence in $\\omega$.}\nWe conclude this section with two technical results that we will need at the end of the proof (Section~\\ref{subsec_last_step}). Both deal with the way that some quantities depend on the parameter $\\omega$. We fix $(\\alpha_j)_{j \\geq 1}$ such that $\\sum_{j \\geq 1} j \\alpha_j = 1$ and $\\alpha_1<1$.
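(A concrete example to keep in mind is $\\alpha_2=\\frac{1}{2}$ and $\\alpha_j=0$ for $j \\neq 2$: by \\eqref{eqn_qjomega_univ}, the corresponding weight sequences satisfy $q_j=0$ for $j \\neq 2$, so the associated infinite maps only have quadrangular faces.)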
By Proposition~\\ref{prop_third_parametrization}, we can denote by $\\mathbf{q}^{(\\omega)}$ the unique weight sequence for which the law of the root face is given by $(\\alpha_j)_{j \\geq 1}$ and $\\omega_{\\mathbf{q}^{(\\omega)}}=\\omega$.\n\nThe first technical lemma states that we can recover $\\mathbf{q}$ from the law of the root face $(\\alpha_j)_{j \\geq 1}$ and a single weight $q_j$, provided $j \\geq 2$ and $\\alpha_j>0$.\n\\begin{lem}\\label{lem_qj_is_monotone}\nFor every $j \\geq 1$, the function $\\omega \\to q_j^{(\\omega)}$ is nonincreasing. Moreover, if $j \\geq 2$ and $\\alpha_j>0$, this function is decreasing.\n\\end{lem}\nSince the proof is not particularly enlightening, we postpone it to Appendix \\ref{appendix_qj_monotone}.\n\nOur second technical lemma is a reinforcement of a part of Proposition~\\ref{prop_q_as_limit} above. It states that the second convergence result~\\eqref{limit_volume_growth} is uniform in $\\omega$ as long as $\\omega$ is bounded away from $1$ and $+\\infty$.\n\n\\begin{lem}\\label{lem_unif_volume}\nLet $\\left( P_t^{(\\omega)} \\right)_{t \\geq 0}$ and $\\left( V_t^{(\\omega)} \\right)_{t \\geq 0}$ denote respectively the perimeter and volume processes associated to a peeling exploration of $\\mathbb M_{\\mathbf{q}^{(\\omega)}}$.\nThe convergence in probability\n\\[ \\frac{V_t^{(\\omega)}-2P_t^{(\\omega)}}{t} \\xrightarrow[t \\to +\\infty]{(P)} \\frac{ \\left(\\sqrt{\\omega}-\\sqrt{\\omega-1} \\right)^2}{2 \\sqrt{\\omega(\\omega-1)}} \\]\nis uniform in $\\omega$ over any compact subset $K$ of $(1,+\\infty)$ in the sense that for all $\\varepsilon>0$, there is $t_0>0$ such that, for all $t \\geq t_0$ and $\\omega \\in K$:\n\\[ \\P \\left( \\left| \\frac{V_t^{(\\omega)}-2P_t^{(\\omega)}}{t} - \\frac{ \\left(\\sqrt{\\omega}-\\sqrt{\\omega-1} \\right)^2}{2 \\sqrt{\\omega(\\omega-1)}} \\right| >\\varepsilon \\right) < \\varepsilon. \\]\n\\end{lem}\n\nThe proof of Lemma~\\ref{lem_unif_volume} is an adaptation of the proof of~\\eqref{limit_volume_growth} in~\\cite{C-StFlour}, but using a uniform weak law of large numbers. It is deferred to Appendix~\\ref{subsec_unif_volume}.\n\n\\section{Tightness, planarity and one-endedness}\\label{sec_univ_tight}\n\nThroughout this section, we will work in the general setting of Theorem \\ref{thm_main_more_general}, i.e. we do not assume $\\sum_{j \\geq 1} j^2 \\alpha_j<+\\infty$.\n\n\\begin{prop}\\label{prop_tightness_dloc_univ}\nLet $(\\mathbf{f}^n, g_n)_{n \\geq 1}$ be as in Theorem \\ref{thm_main_more_general}. Then the sequence $\\left( M_{\\mathbf{f}^n, g_n} \\right)_{n \\geq 1}$ is tight for $d_{\\mathrm{loc}}$, and every subsequential limit is a.s. planar and one-ended.\n\\end{prop}\n\nOur strategy to prove Proposition~\\ref{prop_tightness_dloc_univ} will be similar to \\cite{BL19}, and in particular relies on a Bounded ratio Lemma (Lemma \\ref{lem_BRL}). Sections \\ref{subsec_BRL}, \\ref{subsec_good_sets} and \\ref{subsec_proof_BRL} are devoted to the proof of the Bounded ratio Lemma, which is significantly more complicated than in \\cite{BL19}. In Section \\ref{subsec_planarity}, we prove that any subsequential limit of $\\left( M_{\\mathbf{f}^n, g_n} \\right)_{n \\geq 1}$ for $d^*_{\\mathrm{loc}}$ (which exist by Lemma~\\ref{lem_easy_dual_convergence}) is planar and one-ended.
Finally, in Section \\ref{subsec_finite_degrees}, we finish the proof of Proposition \\ref{prop_tightness_dloc_univ} using Lemma \\ref{lem_dual_convergence_univ} and the Bounded ratio Lemma.\n\n\\subsection{The Bounded ratio Lemma}\n\\label{subsec_BRL}\n\nThe Bounded ratio Lemma below means that, as long as the faces are not too large and the number of vertices remains proportional to the number of edges (i.e. basically under the assumptions of Theorem~\\ref{thm_main_more_general}), removing a face of degree $2j_0$ changes the number of maps by at most a constant factor, provided the faces of degree $2j_0$ represent a positive proportion of the faces. We recall from \\eqref{defn_v_f_g} that $|\\mathbf{f}|$ and $v(\\mathbf{f},g)$ are respectively the number of edges and of vertices of a map with genus $g$ and face degrees given by $\\mathbf{f}$. For $j \\geq 1$, we denote by $\\mathbf{1}_j$ the face degree sequence consisting of a single face of degree $2j$, i.e. $\\left( \\mathbf{1}_j \\right)_i$ is $1$ if $i=j$ and $0$ otherwise.\n\n\\begin{lem}[Bounded ratio Lemma]\\label{lem_BRL}\nWe fix $\\kappa, \\delta>0$ and a function $A:(0,1]\\rightarrow \\mathbb N$. Let $\\mathbf{f}$ be a face degree sequence, and let $g \\geq 0$. We assume that\n\\begin{equation}\\label{eq_petites_faces}\nv(\\mathbf{f}, g) > \\kappa |\\mathbf{f}| \\quad \\mbox{ and } \\quad \\forall \\varepsilon>0, \\, \\sum_{i>A(\\varepsilon)} i f_i< \\varepsilon |\\mathbf{f}|.\n\\end{equation}\nLet also $j_0 \\geq 1$ be such that $j_0 f_{j_0}>\\delta |\\mathbf{f}|$. Then the ratio\n\\[\\frac{\\beta_g(\\mathbf{f})}{\\beta_g(\\mathbf{f}-\\mathbf{1}_{j_0})}\\]\nis bounded by a constant depending only on $\\delta, \\kappa$ and the function $A$.\n\\end{lem}\n\nWe will not try to obtain an explicit constant. As in \\cite{BL19}, we will use the Bounded ratio Lemma to estimate the probability of certain events during peeling explorations, so we will need versions with a boundary. Here are the precise versions that we will need later in the paper.\n\n\\begin{corr}\\label{lem_BRL_boundaries}\nLet $\\kappa, \\delta>0$ and $A(\\cdot)$ be as in Lemma \\ref{lem_BRL}. Then there is a constant $C$ such that the following holds.\n\\begin{enumerate}\n\\item\nLet $p,p',j\\geq 1$. Then there is $N$ such that, for all $\\mathbf{f}$ and $g$ satisfying \\eqref{eq_petites_faces} and $j f_j > \\delta |\\mathbf{f}|$ and $|\\mathbf{f}|>N$, we have\n\\[\\frac{\\beta_g^{(p,p')}(\\mathbf{f})}{\\beta_g^{(p,p')}(\\mathbf{f}-\\mathbf{1}_j)}< C\\]\nand in particular\n\\begin{equation}\\label{eqn_BRL_one_boundary}\n\\frac{\\beta_g^{(p)}(\\mathbf{f})}{\\beta_g^{(p)}(\\mathbf{f}-\\mathbf{1}_j)}< 2C.\n\\end{equation}\n\\item\nLet $p_1, p_2 \\geq 1$ and $i_1, i_2 \\geq 0$. Then there is $N$ such that, for all $\\mathbf{f}$ and $g$ satisfying \\eqref{eq_petites_faces} and $|\\mathbf{f}|>N$, we have\n\\[ \\frac{\\beta_g^{(p_1+i_1,p_2+i_2)}(\\mathbf{f})}{\\beta_g^{(p_1,p_2)}(\\mathbf{f})} < C^{i_1+i_2}. \\]\n\\end{enumerate}\n\\end{corr}\n\nSince this will be very important later, we highlight that the constant $C$ does not depend on $p, p', j$ but that $N$ does. 
The inequality~\\eqref{eqn_BRL_one_boundary} will be used in the tightness argument just like in \\cite{BL19}, whereas the statements with two boundaries will be needed in the two hole argument (Section \\ref{subsec_same_perimeter}).\n\n\\begin{proof}\nWe first claim that we have the identity\n\\begin{equation}\\label{eqn_add_boundary}\n\\beta_g^{(p,p')}(\\mathbf{f})=\\frac{2p(f_{p}+1)p'(f_{p'}+\\mathbbm{1}_{p \\ne p'})}{2(|\\mathbf{f}|+p+p')}\\beta_g(\\mathbf{f}+\\mathbf{1}_{p}+\\mathbf{1}_{p'}).\n\\end{equation}\nIndeed, the factor $p(f_p+1)$ corresponds to the number of ways to add a second root to a map of $\\mathcal{B}_g(\\mathbf{f}+\\mathbf{1}_p)$ such that this second root has a face of degree $2p$ on its right (with respect to the canonical white to black orientation of edges). The factor $p'(f_{p'}+\\mathbbm{1}_{p \\ne p'})$ corresponds to the number of ways of adding a third root next to a face of degree $2p'$ so that the two root faces are distinct. The $(|\\mathbf{f}|+p+p')$ in the denominator corresponds to forgetting the original root. Moreover, if $(\\mathbf{f},g)$ satisfy the assumptions of Lemma \\ref{lem_BRL} for $\\delta, \\kappa, A(\\cdot)$ and $|\\mathbf{f}|$ is large enough, then $(\\mathbf{f}+\\mathbf{1}_p+\\mathbf{1}_{p'},g)$ also satisfies the assumptions of Lemma \\ref{lem_BRL} for $\\frac{\\delta}{2}, \\frac{\\kappa}{2}, A \\left( \\frac{\\cdot}{2}\\right)$. Therefore, the first point of the corollary follows from Lemma \\ref{lem_BRL} and \\eqref{eqn_add_boundary}. To deduce \\eqref{eqn_BRL_one_boundary}, just take $p'=1$ and use the identity $\\beta_g^{(p,1)}(\\mathbf{f})=|\\mathbf{f}|\\beta_g^{(p)}(\\mathbf{f})$ (adding a $2$-gon is equivalent to marking an edge) and the fact that $|\\mathbf{f}|$ is large enough.\n\nFor the second point, we first note that it is sufficient to prove it for $\\{i_1, i_2\\}=\\{0,1\\}$. Since $C$ does not depend on $(p_1, p_2)$, the general case easily follows by induction on $i_1+i_2$. Without loss of generality, we assume $i_1=1, i_2=0$.\n\nWe now note that there is $\\delta, j_1>0$ depending only on $A(\\cdot)$ such that, if \\eqref{eq_petites_faces} is satisfied, then there is $2 \\leq j \\leq j_1$ such that $jf_j > \\delta |\\mathbf{f}|$ (we can assume $j \\geq 2$ because if there are too many $2$-gons, then the number of vertices cannot be macroscopic). We fix such a $j$.\nThen, by the injection that consists in gluing a $2j$-gon on the first boundary as on Figure~\\ref{fig_reducing_boundary}, we have \n\\[\\beta_g^{(p_1+1,p_2)}(\\mathbf{f}) \\leq \\beta_g^{(p_1,p_2)}(\\mathbf{f}+\\mathbf{1}_j) \\leq C \\beta_g^{(p_1,p_2)}(\\mathbf{f}),\\]\nwhere the last inequality uses the first item of the Corollary. This proves the second point.\n\\begin{figure}\n\\center\n\\includegraphics[scale=0.8]{reducing_boundary}\n\\caption{Reducing the size of a boundary by $2$ by adding a $2j$-gon (here, the boundary is in blue, and $j=3$).}\\label{fig_reducing_boundary}\n\\end{figure}\n\\end{proof}\n\n\\paragraph{Outline of the proof of Lemma \\ref{lem_BRL}.}\nThe general idea is the same as in \\cite{BL19}, namely building an injection that removes a small piece of a map (here, we would like to remove a face of degree $2j_0$). Just like in \\cite{BL19}, this implies to merge vertices, so we will try to bound the degrees of the vertices involved, so that the number of ways to do the surgery backwards is not too high. However, since we work in a more general setting, several new constraints appear. 
First, the degrees of the faces are not bounded, so we must make sure that our surgery operations do not involve faces of huge degrees. This is the purpose of finding \"very nice edges\" in Section \\ref{subsec_good_sets} below. Also, we will not always be able to remove a face of degree exactly $2j_0$. We will therefore remove either a face with degree higher than $2j_0$, or several faces which combined are \"larger\" than a face of degree $2j_0$. We will then use the two (easy) Lemmas~\\ref{lem_grande_face} and~\\ref{lem_petites_faces} to conclude.\n\n\\begin{lem}\\label{lem_grande_face}\nIf $p\\geq j_0 \\geq 1$, then\n\\[ |\\mathbf{f}| \\beta_g \\left( \\mathbf{f}-\\mathbf{1}_{j_0} \\right) \\geq j_0 f_{j_0} \\, \\beta_g \\left( \\mathbf{f}-\\mathbf{1}_p \\right).\\]\nIn particular, if $j_0 f_{j_0} \\geq \\delta |\\mathbf{f}|$, then\n\\begin{equation}\\label{eq_transfer_grande_face}\n\\beta_g(\\mathbf{f}-\\mathbf{1}_{j_0}) \\geq \\delta \\beta_g(\\mathbf{f}-\\mathbf{1}_p).\n\\end{equation}\n\\end{lem}\n\n\\begin{proof}\nThe second point is immediate from the first. For the first point, the right-hand side counts maps in $\\mathcal{B}_g \\left( \\mathbf{f}-\\mathbf{1}_p \\right)$ with a marked edge such that the face on its right has degree $2j_0$. The left-hand side counts maps in $\\mathcal{B}_g \\left( \\mathbf{f}-\\mathbf{1}_{j_0} \\right)$ with a marked edge, so it is enough to build an injection from the first set to the second. Take a map $m$ in $\\mathcal{B}_g \\left( \\mathbf{f}-\\mathbf{1}_p \\right)$ and mark an edge $e$ of $m$ with a face of degree $2j_0$ on its right. We glue a path of $p-j_0$ edges to the starting point of $e$, just on the right of $e$ as on Figure~\\ref{fig_plus_grande_face}. One obtains a map of $\\mathcal{B}_g \\left( \\mathbf{f}-\\mathbf{1}_{j_0} \\right)$ with a marked edge, and going backwards is straightforward.\n\n\\begin{figure}[!ht]\n\\center\n\\includegraphics[scale=0.7]{plus_grande_face}\n\\caption{The injection of Lemma~\\ref{lem_grande_face} (here with $j_0=3$ and $p=5$). The marked edge is in red.}\\label{fig_plus_grande_face}\n\\end{figure}\n\\end{proof}\n\n\\begin{lem}\\label{lem_petites_faces}\nIf $1 \\leq d_1, \\ldots, d_k < j_0$ are such that $\\sum_{i=1}^k (d_i-1) \\geq j_0-1$, then\n\\[ |\\mathbf{f}| \\beta_g \\left( \\mathbf{f}-\\mathbf{1}_{j_0} \\right) \\geq j_0 f_{j_0} \\, \\beta_g \\Big( \\mathbf{f}-\\sum_{i=1}^k \\mathbf{1}_{d_i} \\Big).\\]\nIn particular, if $j_0 f_{j_0} \\geq \\delta |\\mathbf{f}|$, then\n\\[ \\beta_g(\\mathbf{f}-\\mathbf{1}_{j_0}) \\geq \\delta \\beta_g \\Big( \\mathbf{f}-\\sum_{i=1}^k \\mathbf{1}_{d_i} \\Big). \\]\n\\end{lem}\n\n\\begin{proof}\nAs in the previous proof, the right-hand side counts maps in $\\mathcal{B}_g \\big( \\mathbf{f}-\\sum_{i=1}^k \\mathbf{1}_{d_i} \\big)$ with a marked edge such that the face on its right has degree $2j_0$, so it is enough to build an injection from this set into the set of maps of $\\mathcal{B}_g \\left( \\mathbf{f}-\\mathbf{1}_{j_0} \\right)$ with a marked edge. Let $d=1+\\sum_{i=1}^k (d_i-1) \\geq j_0$. If $d>j_0$, transform the face of degree $2j_0$ into a face of degree $2d$ by adding a path of $d-j_0$ edges like in the previous proof. Then tessellate this face of degree $2d$ into faces of degrees $2d_1, \\ldots, 2d_k$ as on Figure~\\ref{fig_plus_petites_faces}. We obtain a map of $\\mathcal{B}_g \\left( \\mathbf{f}-\\mathbf{1}_{j_0} \\right)$ with a marked edge, and this operation is also injective.\n\n\\begin{figure}[!ht]\n\\center\n\\includegraphics[scale=0.8]{plus_petites_faces}\n\\caption{The injection of Lemma~\\ref{lem_petites_faces} (here with $j_0=5$ and $(d_1,d_2,d_3)=(2,3,3)$).}\\label{fig_plus_petites_faces}\n\\end{figure}\n\\end{proof}\n\n\\subsection{Good sets of edges}\n\\label{subsec_good_sets}\n\nThe injection we will build to prove the Bounded ratio Lemma takes as input a pair $(m, E)$, where $m$ is a map and $E$ is a set of edges of $m$ satisfying the properties we will need to perform some surgery around $E$. We will call such a set a \\emph{good set}. Our goal in this subsection is to define a good set and to prove that any map contains a linear number of good sets (Proposition~\\ref{prop_good_sets}). We recall that we consider that the edges are oriented from white to black, and therefore it makes sense to define the left or right side of an edge.\n\nThroughout this section, we work under the assumptions of Lemma~\\ref{lem_BRL}. Let $m \\in \\mathcal{B}_g(\\mathbf{f})$.
Let $A_1:=2A \\left( \\min \\left( \\frac{\\kappa}{32},\\delta \\right) \\right)$.\n\n\\begin{rem}\\label{rem_causal_graph}\nWe will have several different constants (depending on $A(\\cdot)$, $\\delta$ and $\\kappa$) defined in terms of each other in this subsection. To help convince the reader there is no circular dependency between them, we provide a \"causal graph\" of all the involved constants.\n\\begin{figure}[!ht]\n\\center\n\\includegraphics[scale=0.8]{causal_graph}\n\\end{figure}\n\\end{rem}\n\nWe say that an edge $e$ of $m$ is \\emph{nice} if it is not incident to a face of degree larger than $2A_1$.\n\\begin{fact} At least $\\left( 1-\\frac{\\kappa}{16} \\right) |\\mathbf{f}|$ of the edges in $m$ are nice.\n\\end{fact}\n\\begin{proof}\nDraw an edge $e$ of $m$ uniformly at random. The face $f$ sitting to the right of $e$ is drawn at random with a probability proportional to its degree. By the second assumption of \\eqref{eq_petites_faces}, the probability that $f$ has degree larger than $2A_1$ is less than $\\frac{\\kappa}{32}$. The same is true for the face sitting to the left of $e$.\n\\end{proof}\n\nWe will need to bound the degrees not only of the faces incident to an edge, but also of the faces close to this edge for the dual distance. More precisely, we will define the \\emph{dual distance between two edges $e_1, e_2$ of $m$} as the dual distance between the face on the right of $e_1$ and the face on the right of $e_2$. We fix a value $r$ (depending on $A_1$ and $\\kappa$) that we will specify later. Let $A_r$ be the function given by Lemma~\\ref{lem_tight_degree_in_ball} for $A(\\cdot)$ and $r$, and let $A_2=A_r \\left( \\frac{\\kappa}{16} \\right)$. We will call an edge $e$ of $m$ \\emph{very nice} if it is nice and no edge at dual distance $r$ or less from $e$ is incident to a face of degree larger than $2A_2$. By applying Lemma~\\ref{lem_tight_degree_in_ball} to the stationary random graph obtained by rooting the dual map $m^*$ at a uniform edge, the proportion of edges of $m$ at dual distance $r$ or less from a face larger than $2A_2$ is at most $\\frac{\\kappa}{16}$. Hence, we get the following observation.\n\n\\begin{fact}At least $\\left( 1-\\frac{\\kappa}{8} \\right) |\\mathbf{f}|$ of the edges of $m$ are very nice.\n\\end{fact}\n\nLet $D=\\frac{4}{\\kappa}$. By the first assumption of \\eqref{eq_petites_faces}, we know that $D$ is larger than twice the average vertex degree in $m$. Since at most half of the vertices have degree at least twice the average degree, and since there are more than $\\kappa |\\mathbf{f}|$ vertices, we have the following.\n\n\\begin{fact}\nThere are at least $\\frac{\\kappa}{4}|\\mathbf{f}|$ vertices of the same colour with degree less than $D$ in $m$.\n\\end{fact}\n\nWithout loss of generality, assume that this colour is white (we recall that the vertices are coloured black and white so that each edge joins two vertices of different colours and the root is oriented from white to black). We say that a white vertex is \\emph{fine} if it has degree at most $D$, and that an edge is \\emph{fine} if it is incident to a fine white vertex and the face on its right is not of degree $2$. By the previous fact, and since every vertex is incident to at least a face of degree $>2$, there are at least $\\frac{\\kappa}{4} |\\mathbf{f}|$ fine edges in $m$, incident to $\\frac{\\kappa}{4}|\\mathbf{f}|$ distinct white vertices.\nAn edge is said to be \\textit{good} if it is both very nice and fine. 
Summing up the last results, we have the following.\n\n\\begin{lem}\\label{lem_good_edges}\nThere are at least $\\frac{\\kappa }{8} |\\mathbf{f}|$ good edges in $m$, incident to $\\frac{\\kappa}{8} |\\mathbf{f}|$ distinct white vertices.\n\\end{lem}\n\nWe now fix the value of $r$ at $r=\\frac{16A_1}{\\kappa}+1$ (which is possible since $A_1$ does not depend on $r$, see Remark~\\ref{rem_causal_graph} above). We call a set $S$ of $A_1$ edges of $m$ a \\textit{good set} if all the edges of $S$ are good, they are incident to distinct white vertices and they are all at dual distance at most $2r$ from each other. Our next goal is to find a large number of good sets in the map $m$. Note that these good sets do not need to be disjoint.\n\n\\begin{prop}\\label{prop_good_sets}\nThere are at least $\\frac{\\kappa}{16} |\\mathbf{f}|$ good sets of edges in $m$.\n\\end{prop}\n\n\\begin{proof}\nThe proof follows the argument from~\\cite{BL19}.\nLet $G$ be a set of $\\frac{\\kappa }{8} |\\mathbf{f}|$ good edges incident to distinct white vertices given by Lemma~\\ref{lem_good_edges}. In this proof, the balls $B_r^*(e)$ that we will consider will be for the dual distance. We can assume that for every $e\\in G$, the ball $B_r^*(e)$ does not contain all the edges of $m$, since otherwise the proposition is obviously true.\n\nIn that case, for all $e\\in G$, since $m^*$ is connected we must have $|B^*_r(e)|>r$.\nWe are going to find a collection of distinct good sets $(S_i)$. For this, we build by induction a decreasing sequence of sets of good edges $(G_i)$, such that for each $i$, the set $G_{i+1}$ is obtained from $G_i$ by removing one element. We set $G_0=G$. Let $0 \\leq i<\\frac{\\kappa }{16} |\\mathbf{f}|$, and assume that we have built $G_0, G_1, \\dots, G_i$. Then $|G_i|=|G|-i$, so\n\\begin{align*}\n\\sum_{e\\in G_i} |B_r^*(e)|> \\left( |G|-i \\right)r >\\frac{\\kappa }{16} |\\mathbf{f}| r> A_1 |\\mathbf{f}|\n\\end{align*}\nby our choice of $r$. Therefore, there must be an \"$A_1$-overlap\", i.e. there exist $A_1$ edges whose balls of radius $r$ have a nonempty intersection. Thus they are all at dual distance at most $2r$ from each other, and we just found a good set $S_{i+1}$. Choose $e_{i+1} \\in S_{i+1}$ arbitrarily, and let $G_{i+1}=G_{i}\\setminus \\{e_{i+1}\\}$. This way we can build $G_i$ and $S_i$ for $1 \\leq i < \\frac{\\kappa}{16} |\\mathbf{f}|$, which proves the proposition.\n\\end{proof}\n\n\\subsection{Proof of the Bounded ratio Lemma: the injection}\n\\label{subsec_proof_BRL}\n\nWe now prove the Bounded ratio Lemma (Lemma~\\ref{lem_BRL}). We start with the easy case $j_0=1$: a marked digon can be contracted into a marked edge (see Figure~\\ref{fig_digon}), and if $f_1>\\delta |\\mathbf{f}|$, we have\n\\[|\\mathbf{f}|\\beta_g(\\mathbf{f}-\\mathbf{1}_1)>\\delta |\\mathbf{f}| \\beta_g(\\mathbf{f})\\]\nwhich yields the result.\n\n\\begin{figure}[!ht]\n\\centering\n\\includegraphics[scale=0.5]{digon}\n\\caption{Contraction of a digon.}\\label{fig_digon}\n\\end{figure}\n\nWe can now assume that $j_0>1$ and $j_0 f_{j_0} > \\delta |\\mathbf{f}|$. The injection we will build takes as input a map of $\\mathcal{B}_g(\\mathbf{f})$ with a marked good set of edges, and outputs a map of $\\mathcal{B}_g(\\mathbf{\\tilde{f}})$ with a marked edge and some finite information (i.e.
with values in a finite set whose size depends only on $\\delta, \\kappa$ and the function $A$), with $\\mathbf{\\tilde{f}}$ of the form:\n\\begin{enumerate}\n\\item[(i)]\neither\n\\begin{equation}\\label{brl_ftilde_first_condition}\n\\mathbf{\\tilde{f}}=\\mathbf{f}-\\mathbf{1}_p \\mbox{ where } j_0 \\leq p < A_1,\n\\end{equation}\n\\item[(ii)]\nor\n\\begin{equation}\\label{brl_ftilde_second_condition}\n\\mathbf{\\tilde{f}}=\\mathbf{f}-\\sum_{i=1}^k\\mathbf{1}_{d_i} \\mbox{ where } k \\leq A_1 \\mbox{ and } 1j_0-1$. Therefore, we can consider the first index $k$ such that $\\sum_{i=1}^k (d_i-1)>j_0-1$. Then we have $d_k < j_0$ by assumption, so $\\sum_{i=1}^k (d_i-1) < 2j_0-1$. Finally, by definition of $A_1$, we have $A_1 \\geq 2A(\\delta) \\geq 2j_0$ (the second inequality follows from the assumption $j_0 f_{j_0} > \\delta|\\mathbf{f}|$), so~\\eqref{eqn_brl_case_2} is indeed satisfied for this $k$.\n\nWe now choose the anchor of $S$ arbitrarily, and apply Step 1. We then apply Step 2 with $d=1+\\sum_{i=1}^k (d_i-1)$, i.e. we delete $d$ good vertices, including the ones incident to $e_1, \\ldots, e_k$. Note that at this point, all the faces that were incident to the edges $e_1, \\dots, e_k$ have been destroyed. We then apply Step 3 and obtain a map $m_3$ with face degree sequence $\\mathbf{f'}$. The first two assumptions on $\\mathbf{f'}$ in Step 4 are satisfied for the same reason as in Case 1, with $\\mathbf{f'}$ of the form $\\mathbf{f}+\\mathbf{1}_F-\\mathbf{1}_{j_1}-\\ldots-\\mathbf{1}_{j_{\\ell}}$. Moreover, since the faces incident to $e_1, \\dots, e_k$ have been destroyed previously, up to reordering the $j_i$'s, we may assume $j_i=d_i$ for $1 \\leq i \\leq k$. Therefore, by our choice of $d$, the third assumption~\\eqref{eq_condition_BRL} of Step 4 is also satisfied. After applying Step 4, we obtain a map $m_4$ with face degree sequence\n\\[\\mathbf{\\tilde{f}}=\\mathbf{f}-\\sum_{i=1}^{k} d_i.\\]\nFinally, we have $10$. Indeed, this is true for the root face since the root face of $M$ has degree $j$ with probability $j \\alpha_j$ for all $j \\geq 1$, and this can be extended to all faces using stationarity with respect to the simple random walk on the dual of $M$. Therefore, if $m$ has a face of degree $2j$ with $\\alpha_j=0$, then $\\P \\left( m \\subset M \\right)=0$.\n\nIf not, let $\\mathbf{h}^{(0)}$ be the internal face degree sequence of $m$. Then $f^n_j \\to +\\infty$ as $n \\to +\\infty$ for every $j$ such that $h^{(0)}_j>0$, so $\\mathbf{h}^{(0)} \\leq\\mathbf{f}^{n}$ for $n$ large enough. In particular, we are in position to use Lemma~\\ref{lem_calcul_planar}. The proof is now exactly the same as in~\\cite{BL19}: we use the fact that $\\P \\left( m \\subset M \\right)$ can be expressed using the number of ways to fill the holes of $m$ with maps of multipolygons, which is given by the left-hand side of Lemma~\\ref{lem_calcul_planar}.\n\\end{proof}\n\nWe now move on to one-endedness. The proof is quite similar, and relies on the following estimate.\n\n\\begin{lem}\\label{lem_calcul_OE}\nWe fix $(\\mathbf{f}^{n})_{n \\geq 1}$ and $(g_n)_{n \\geq 1}$ which satisfy the assumptions of Theorem~\\ref{thm_main_more_general}. We also fix $\\mathbf{h}^{(0)}$ a face degree sequence such that $\\mathbf{h}^{(0)} \\leq\\mathbf{f}^{n}$ for $n$ large enough.\n\\begin{itemize}\n\\item\nLet $k\\geq 1$, numbers $\\ell_1, \\ell_2,\\ldots,\\ell_k$ \\textbf{not all equal to $1$} and perimeters $p_i^j$ for $1\\leq j\\leq k$ and $1\\leq i\\leq \\ell_j$. 
Then\n\\[\\sum_{\\substack{\\mathbf{h}^{(1)}+\\mathbf{h}^{(2)}+\\ldots+\\mathbf{h}^{(k)}=\\mathbf{f}^n-\\mathbf{h}^{(0)} \\\\ g^{(1)}+g^{(2)}+\\ldots+g^{(k)}=g_n-\\sum_j(\\ell_j-1)}}\\prod_{j=1}^k \\beta_{g^{(j)}}^{(p^j_1,p^j_2,\\ldots,p^j_{\\ell_j})}(\\mathbf{h}^{(j)})=o\\left(\\beta_{g_n}(\\mathbf{f}^n)\\right)\\]\nas $n\\rightarrow \\infty$.\n\\item \nLet $k\\geq 1$ and perimeters $p_1,\\ldots,p_k$. There is a constant $C$ (that may depend on everything above) such that for every $a$ and $n$ large enough we have\n\\[\\sum_{\\substack{\\mathbf{h}^{(1)}+\\mathbf{h}^{(2)}+\\ldots+\\mathbf{h}^{(k)}=\\mathbf{f}^n-\\mathbf{h}^{(0)} \\\\ g^{(1)}+g^{(2)}+\\ldots+g^{(k)}=g_n\\\\ |\\mathbf{h}^{(1)}|, |\\mathbf{h}^{(2)}|>a}} \\, \\prod_{j=1}^k \\beta_{g^{(j)}}^{(p_j)}(\\mathbf{h}^{(j)})\\leq \\frac{C}{a} \\beta_{g_n}(\\mathbf{f}^n).\\]\n\\end{itemize}\n\\end{lem}\n\n\\begin{corr}\\label{cor_OE}\nLet $(\\mathbf{f}^{n})_{n \\geq 1}$ and $(g_n)_{n \\geq 1}$ satisfy the assumptions of Theorem~\\ref{thm_main_more_general}. Every subsequential limit $M$ of $\\left( M_{\\mathbf{f}^n, g_n} \\right)$ for $d^*_{\\mathrm{loc}}$ is a.s. one-ended in the sense that, for every finite map $m$ with holes such that $m \\subset M$, only one hole of $m$ is filled with infinitely many faces\\footnote{This is a \"weak\" definition of one-endedness. For example, it does not prevent $M$ from being the dual of a tree. However, once we have proved that $M$ has finite vertex degrees, this will be equivalent to the usual definition.}.\n\\end{corr}\n\n\\begin{proof}\nGiven Lemma~\\ref{lem_calcul_OE}, the proof is exactly the same as~\\cite[Corollary 9]{BL19}, except for the additional assumption in Lemma~\\ref{lem_calcul_OE} that $\\mathbf{h}^{(0)} \\leq\\mathbf{f}^{n}$ for $n$ large enough. We take care of it in the same way as in the proof of Corollary~\\ref{cor_planar}.\n\\end{proof}\n\n\\subsection{Finiteness of the root degree}\n\\label{subsec_finite_degrees}\n\nWe now finish the proof of tightness for $d_{\\mathrm{loc}}$ (Proposition~\\ref{prop_tightness_dloc_univ}). Let $M$ be a subsequential limit of $\\left( M_{\\mathbf{f}^n, g_n} \\right)$ for $d_{\\mathrm{loc}}^*$. By Lemma~\\ref{lem_dual_convergence_univ}, to get tightness for $d_{\\mathrm{loc}}$, we need to show that almost surely, all the vertices of $M$ have finite degree. Our argument is now very similar to \\cite{BL19} and inspired by \\cite{AS03}: we will first study the degree of the root vertex by using the Bounded ratio Lemma, and then extend finiteness by using invariance under the simple random walk.\n\n\\begin{lem}\\label{lem_root_degree_is_finite_univ}\nThe root vertex of $M$ has a.s. finite degree.\n\\end{lem}\n\n\\begin{proof}\nFollowing the approach of \\cite{AS03}, we perform a filled-in lazy peeling exploration of $M$. Note that we already know by Corollary~\\ref{cor_planar} that the explored part will always be planar, so no peeling step will merge two different existing holes. Moreover, by Corollary~\\ref{cor_OE}, if a peeling step separates the boundary into two holes, then one of them is finite and will be filled with a finite map. Therefore, at each step, the explored part will have only one hole.\n\nThe peeling algorithm $\\mathcal{A}$ that we use is the following: if the root vertex $\\rho$ belongs to $\\partial m$, then $\\mathcal{A}(m)$ is the edge on $\\partial m$ on the left of $\\rho$. If $\\rho \\notin \\partial m$, then the exploration is stopped. Let $\\tau$ be the time at which the exploration is stopped.
Since only finitely many edges incident to $\\rho$ are added at each step, it is enough to prove $\\tau<+\\infty$ a.s.. We recall that $\\mathcal{E}_t^{\\mathcal{A}}(M)$ is the explored part at time $t$.\n\nWe will prove that at each step, conditionally on $\\mathcal{E}_t^{\\mathcal{A}}(M)$, the probability to swallow the root and finish the exploration in a bounded amount of time is bounded from below by a positive constant. We fix $j^* \\geq 2$ with $\\alpha_{j^*}>0$. Note that such a $j^*$ exists because of the assumption $\\theta<\\frac{1}{2} \\sum_{j \\geq 1} (j-1) \\alpha_j$. For every map $m$ with one hole such that $\\rho \\in \\partial m$, we denote by $m^+$ the map constructed from $m$ as follows (see Figure~\\ref{fig_swallowing_root_univ}):\n\\begin{itemize}\n\\item\nwe first glue a \"face\" of degree $2j^*$ to $m$ along the edge of $\\partial m$ on the left of $\\rho$;\n\\item\nwe then glue together the two edges of the boundary incident to $\\rho$ together;\n\\item\nduring the next $j^*-2$ steps, at each step, we pick two consecutive edges of the boundary according to some fixed convention and glue them together.\n\\end{itemize}\n\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.7]{swallow_root}\n\\caption{The construction of $m^+$ from $m$. In gray, the map $m$. In red, the root vertex. In blue, the new face. Here, $|\\partial m|=j^*=3$.}\n\\label{fig_swallowing_root_univ}\n\\end{figure}\n\nNote that $m^+$ is a planar map with the same perimeter as $m$ but one more face (of degree $2j^*$). By the choice of our peeling algorithm, if we have $\\tau \\geq t$ and $\\mathcal{E}_t^{\\mathcal{A}}(M)^+ \\subset M$, then we have $\\tau \\leq t+2$. Hence it is enough to prove that the quantity\n\\[ \\P \\left( m^+ \\subset M | m \\subset M \\right)\\]\nis bounded from below over finite, planar maps $m$ with one hole such that $\\rho \\in \\partial m$.\n\nWe fix such an $m$, with half-perimeter $p$ and internal face degrees given by $\\mathbf{h}$. Along some subsequence, we have $M_{\\mathbf{f}^{n},g_{n}} \\to M$ in distribution (for $d_{\\mathrm{loc}}^*$). Along the same subsequence, it holds that\n\\[ \\P \\left( m^+ \\subset M | m \\subset M \\right)\\hspace{-0.1cm}=\\hspace{-0.1cm}\\lim_{n \\to +\\infty}\\hspace{-0.1cm} \\frac{\\P \\left( m^+ \\in M_{\\mathbf{f}^{n},g_{n}} \\right)}{\\P \\left( m\\in M_{\\mathbf{f}^{n},g_{n}} \\right)} \\hspace{-0.1cm} = \\hspace{-0.1cm}\\lim_{n \\to +\\infty}\\hspace{-0.1cm} \\frac{\\beta^{(p)}_{g_n} \\left( \\mathbf{f}^n-\\mathbf{h}-\\mathbf{1}_{j^*} \\right)}{\\beta^{(p)}_{g_n} \\left( \\mathbf{f}^n-\\mathbf{h} \\right)}. \\]\nBy our choice of $j^*$, we have $f^n_{j^*} \\geq \\frac{\\alpha_{j^*}}{2} |\\mathbf{f}^n|$ for $n$ large enough, so we can apply the Bounded ratio Lemma, which concludes the proof.\n\\end{proof}\n\n\\begin{proof}[Proof of Proposition~\\ref{prop_tightness_dloc_univ}]\nLet $M$ be a subsequential limit of $\\left( M_{\\mathbf{f}^{n},g_{n}} \\right)$. We recall that for all $n$, the map $\\left( M_{\\mathbf{f}^{n},g_{n}} \\right)$ is stationary for the simple random walk on its vertices. Therefore, by Lemma~\\ref{lem_root_degree_is_finite_univ} and the same argument as in \\cite{AS03} (see also the proof of Lemma~\\ref{lem_easy_dual_convergence} above), almost surely all the vertices of $M$ have finite degree. By Lemma~\\ref{lem_dual_convergence_univ}, this guarantees that $\\left( M_{\\mathbf{f}^{n},g_{n}} \\right)$ is tight for $d_{\\mathrm{loc}}$.\n\nThe a.s. planarity of $M$ is proved in~Corollary \\ref{cor_planar}. 
Finally, it is easy to check that for maps with finite vertex degrees, the weak version of one-endedness proved in Corollary~\\ref{cor_OE} implies the usual one. Indeed, if $V$ is a finite set of vertices of $M$, one can consider a finite, connected submap of $M$ containing all the faces and edges incident to vertices of $V$. Then Corollary~\\ref{cor_OE} ensures that this submap does not separate $M$ into two infinite maps.\n\\end{proof}\n\n\\section{Weakly Markovian bipartite maps}\\label{sec_univ_markov}\n\nOur goal in this section is to prove Theorem~\\ref{thm_weak_Markov_general}.\n\n\\paragraph{Weakly Markovian bipartite maps.}\nFor a finite, bipartite map $m$ with one hole, we denote by $|\\partial m|$ the half-perimeter of the hole of $m$. For all $j \\geq 1$, we also denote by $v_j(m)$ the number of internal faces of $m$ with degree $2j$.\n\n\\begin{defn}\\label{defn_weak_Markov}\nLet $M$ be a random infinite, one-ended, bipartite planar map. We say that $M$ is \\emph{weakly Markovian} if for every finite map $m$ with one hole, the probability $\\P \\left( m \\subset M \\right)$ only depends on $|\\partial m|$ and $\\left( v_j(m) \\right)_{j \\geq 1}$.\n\\end{defn}\n\nLet $\\mathcal{V}$ be the set of sequences $\\mathbf{v}=(v_j)_{j \\geq 1}$ such that $v_j=0$ for $j$ large enough. If $M$ is weakly Markovian and $\\mathbf{v} \\in \\mathcal{V}$, we will denote by $a^p_{\\mathbf{v}}$ the probability $\\P \\left( m \\subset M \\right)$ for a map $m$ with $|\\partial m|=p$ and $v_j(m)=v_j$ for all $j$. Note that this only makes sense if there is such a map $m$, which is equivalent to\n\\begin{equation}\\label{eqn_good_pvv}\np \\leq 1+\\sum_{j \\geq 1} (j-1)v_j.\n\\end{equation}\nTherefore, if $p \\geq 1$, we will denote by $\\mathcal{V}_p \\subset \\mathcal{V}$ the set of those $\\mathbf{v}$ satisfying \\eqref{eqn_good_pvv}. Note that $\\mathcal{V}_1=\\mathcal{V}$.\nIn particular, by definition, for $\\mathbf{q} \\in \\mathcal{Q}_h$, the $\\mathbf{q}$-IBPM is weakly Markovian, and the corresponding constants $a^p_{\\mathbf{v}}$ are:\n\\[a^p_{\\mathbf{v}}(\\mathbf{q}):= C_p(\\mathbf{q}) \\mathbf{q}^{\\mathbf{v}},\\]\nwhere $\\mathbf{q}^{\\mathbf{v}} := \\prod_{j \\geq 1} q_j^{v_j}$. Therefore, if $M$ is of the form $\\mathbb M_{\\mathbf{Q}}$ for some random weight sequence $\\mathbf{Q}$, we have $a^p_{\\mathbf{v}} = \\mathbb E[ C_p(\\mathbf{Q}) \\mathbf{Q}^{\\mathbf{v}} ]$.\n\n\\paragraph{Sketch of the proof of Theorem~\\ref{thm_weak_Markov_general}.}\nWe first note that the second point of Theorem~\\ref{thm_weak_Markov_general} is immediate once the first point is proved. Indeed, let us write $\\mathrm{Rootface}(m)$ for the degree of the root face of a map $m$. If $\\mathrm{Rootface}(M)$ has finite expectation, then\n\\[ \\mathbb E \\left[ \\mathbb E \\left[ \\mathrm{Rootface}(\\mathbb M_{\\mathbf{Q}}) | \\mathbf{Q} \\right] \\right] = \\mathbb E \\left[ \\mathrm{Rootface}(\\mathbb M_{\\mathbf{Q}}) \\right] <+\\infty, \\]\nso $\\mathbb E \\left[ \\mathrm{Rootface}(\\mathbb M_{\\mathbf{Q}}) | \\mathbf{Q} \\right]<+\\infty$ a.s., so $\\mathbf{Q} \\in \\mathcal{Q}_f$ a.s..\n\nThe first point of Theorem~\\ref{thm_weak_Markov_general} is the natural analogue of Theorem~2 of \\cite{BL19}, where triangulations are replaced by more general maps.
The proof will rely on similar ideas: we fix a weakly Markovian map $M$ with associated constants $a^p_{\\mathbf{v}}$, and we would like to find a random $\\mathbf{Q}$ such that $a^p_{\\mathbf{v}} = \\mathbb E[ C_p(\\mathbf{Q}) \\mathbf{Q}^{\\mathbf{v}} ]$ for all $p$ and $\\mathbf{v}$. We will use peeling equations to establish inequalities between the $a^p_{\\mathbf{v}}$, and the existence of $\\mathbf{Q}$ will follow from the Hausdorff moment problem. However, compared to~\\cite{BL19}, two new difficulties arise:\n\\begin{itemize}\n\\item[$\\bullet$]\nthe random weights $\\mathbf{q}$ form a family of real numbers instead of just one real number;\n\\item[$\\bullet$]\nin the triangular case, with the notations of Definition~\\ref{defn_weak_Markov}, it was immediate that all the numbers $a^p_{\\mathbf{v}}$ are determined by the numbers $a^1_{\\mathbf{v}}$. This is not true anymore.\n\\end{itemize}\nThe first issue can be handled by using the multi-dimensional version of the Hausdorff moment problem. The second one, on the other hand, will make the proof a bit longer than in \\cite{BL19}. More precisely, the Hausdorff moment problem will now provide us, for every $p \\geq 1$, a $\\sigma$-finite measure $\\mu_p$ on the set of weight sequences, which describes $a^p_{\\mathbf{v}}$ for all $\\mathbf{v}$. We will then use the peeling equations to prove that all the $\\mu_p$ are actually determined by $\\mu_1$.\n\nBecause of the condition \\eqref{eqn_good_pvv}, we will need to find a measure with suitable $\\mathbf{v}$-th moments for all $\\mathbf{v} \\in \\mathcal{V}_p$, which is slightly different than the usual Hausdorff moment problem where $\\mathbf{v} \\in \\mathcal{V}$. Therefore, we first need to state a suitable version of the moment problem, which will follow from the usual one. This is done in the next subsection.\n\n\\subsection{The incomplete Hausdorff moment problem}\n\nTo state our version of the moment problem (Proposition~\\ref{prop_moment_Hausdorff} below), we will need to consider the space of sequences $\\left( u_{\\mathbf{v}} \\right)_{\\mathbf{v} \\in \\mathcal{V}_p}$. For $j \\geq 1$, we denote by $\\Delta_j$ the discrete derivation operator on the $j$-th coordinate on this space. That is, if $u=(u_{\\mathbf{v}})$, we write\n\\[ \\left( \\Delta_j u \\right)_{\\mathbf{v}} = u_{\\mathbf{v}}-u_{\\mathbf{v}+\\mathbf{1}_j}.\\]\nIt is easy to check that the operators $\\Delta_j$ commute with each other. For all $\\mathbf{k}=(k_j)_{j \\geq 1}$ such that $k_j=0$ for $j$ large enough (say for $j \\geq j_0$), we define the operator $\\Delta^{\\mathbf{k}}$ by\n\\[\n\\Delta^{\\mathbf{k}} u= \\Delta_1^{k_1} \\Delta_2^{k_2} \\dots \\Delta_{j_0}^{k_{j_0}} u.\n\\]\nIn other words, we have\n\\[\\Delta^{\\mathbf{k}} u= \\sum_{\\mathbf{i}} \\left( \\prod_{j \\geq 1} (-1)^{i_j} \\binom{k_j}{i_j} \\right) u_{\\mathbf{v}+\\mathbf{i}}, \\]\nwhere the sum is over families $\\mathbf{i}=(i_j)_{j \\geq 1}$, and the terms with a nonzero contribution are those for which $0 \\leq i_j \\leq k_j$ for every $j \\geq 1$. 
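Note that the nonnegativity of all these iterated differences is clearly necessary for $\\left( u_{\\mathbf{v}} \\right)$ to be a sequence of moments: if $u_{\\mathbf{v}}=\\int \\mathbf{q}^{\\mathbf{v}} \\mu(\\mathrm{d} \\q)$ for some finite measure $\\mu$ on $[0,1]^{\\mathbb N^*}$, then for all $\\mathbf{k} \\geq \\mathbf{0}$, we have\n\\[ \\Delta^{\\mathbf{k}} u_{\\mathbf{v}} = \\int \\mathbf{q}^{\\mathbf{v}} \\prod_{j \\geq 1} (1-q_j)^{k_j} \\, \\mu(\\mathrm{d} \\q) \\geq 0. \\]\nThe content of the moment problem is the converse statement.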
The \"usual\" Hausdorff moment problem is then the following.\n\n\\begin{thm}\\label{thm_hausdorff_usual}\nLet $(u_{\\mathbf{v}})_{\\mathbf{v} \\in \\mathcal{V}}$ be such that, for any $\\mathbf{v} \\in \\mathcal{V}$ and any $\\mathbf{k} \\geq \\mathbf{0}$, we have\n\\[ \\Delta^{\\mathbf{k}} u_{\\mathbf{v}} \\geq 0.\\]\nThen there is a unique measure $\\mu$ on $\\mathcal{Q}=[0,1]^{\\mathbb N^*}$ (equipped with the product $\\sigma$-algebra) such that, for all $\\mathbf{v} \\in \\mathcal{V}$, we have\n\\[ u_{\\mathbf{v}}=\\int \\mathbf{q}^{\\mathbf{v}} \\mu(\\mathrm{d} \\q). \\]\nIn particular $\\mu$ is finite, with total mass $u_{\\mathbf{0}}$.\n\\end{thm}\nMore precisely, this is the infinite-dimensional Hausdorff moment problem, which can be deduced immediately from the finite-dimensional one by the Kolmogorov extension theorem.\n\nFor $p \\geq 1$, we recall that $\\mathcal{V}_p \\subset \\mathcal{V}$ is the set of $\\mathbf{v} \\in \\mathcal{V}$ that satisfy $\\sum_{j \\geq 1} (j-1) v_j \\geq p-1$. We also denote by $\\mathcal{V}_p^*$ the set of $\\mathbf{v} \\in \\mathcal{V}_p$ for which there is $j \\geq 2$ such that $v_j>0$ and $\\mathbf{v}-\\mathbf{1}_j \\in \\mathcal{V}_p$. In other words $\\mathcal{V}_p^*$ can be thought of as the \"interior\" of $\\mathcal{V}_p$. Finally, we recall that\n\\[ \\mathcal{Q}^*=\\{ \\mathbf{q} \\in [0,1]^{\\mathbb N^*} | \\exists j \\geq 2, q_j>0\\}.\\]\n\n\\begin{prop}\\label{prop_moment_Hausdorff}\nFix $p \\geq 1$, and let $\\left( u_{\\mathbf{v}} \\right)_{\\mathbf{v} \\in \\mathcal{V}_p}$ be a family of real numbers. We assume that for all $\\mathbf{v} \\in \\mathcal{V}_p$ and all $\\mathbf{k} \\geq \\mathbf{0}$, we have\n\\[ \\Delta^{\\mathbf{k}} u_{\\mathbf{v}} \\geq 0.\\]\nThen there is a $\\sigma$-finite measure $\\mu$ on $\\mathcal{Q}^*$ such that, for all $\\mathbf{v} \\in \\mathcal{V}_p^*$, we have\n\\[ u_{\\mathbf{v}}=\\int \\mathbf{q}^{\\mathbf{v}} \\mu(\\mathrm{d} \\q). \\]\nMoreover, if $p=1$, then $\\mu$ is finite and $\\mu(\\mathcal{Q}^*) \\leq u_\\mathbf{0}$.\n\\end{prop}\n\nNote that this version is \"weaker\" than Theorem~\\ref{thm_hausdorff_usual} in the sense that it is not always possible to have $u_{\\mathbf{v}}=\\int \\mathbf{q}^{\\mathbf{v}} \\mu(\\mathrm{d} \\q)$ for $\\mathbf{v} \\in \\mathcal{V}_p \\backslash \\mathcal{V}^*_p$. A simple example of this phenomenon in dimension one is that the sequence $\\left( \\mathbbm{1}_{i=1} \\right)_{i \\geq 1}$ has all its discrete derivatives nonnegative. However, there is no measure on $[0,1]$ with first moment $1$ and all higher moments $0$. On the other hand, we can assume an additional property of our measure $\\mu$, namely it is supported by $\\mathcal{Q}^*$ instead of $\\mathcal{Q}$ in Theorem~\\ref{thm_hausdorff_usual}.\n\n\\begin{proof}\nWe start with the case $p=1$. Then $\\mathcal{V}_1=\\mathcal{V}$, so by Theorem~\\ref{thm_hausdorff_usual}, there is a measure $\\widetilde{\\mu}$ on $\\mathcal{Q}$ such that, for all $\\mathbf{v} \\in \\mathcal{V}_1$, we have\n\\[u_{\\mathbf{v}} = \\int_{\\mathcal{Q}} \\mathbf{q}^{\\mathbf{v}} \\widetilde{\\mu}(\\mathrm{d} \\q). \\]\nLet $\\mu$ be the restriction of $\\widetilde{\\mu}$ to $\\mathcal{Q}^*$. If $\\mathbf{v} \\in \\mathcal{V}_1^*$ and $\\mathbf{q} \\in \\mathcal{Q} \\backslash \\mathcal{Q}^*$, then there is $j \\geq 2$ such that $\\mathbf{v}_j >0$ but $q_j=0$, so $\\mathbf{q}^{\\mathbf{v}}=0$. 
It follows that, for all $\\mathbf{v} \\in \\mathcal{V}_1^*$, we have\n\\[ \\int_{\\mathcal{Q}^*} \\mathbf{q}^{\\mathbf{v}} \\mu(\\mathrm{d} \\q) = \\int_{\\mathcal{Q}} \\mathbf{q}^{\\mathbf{v}} \\widetilde{\\mu}(\\mathrm{d} \\q) = u_{\\mathbf{v}}.\\]\nMoreover, the total mass of $\\mu$ is not larger than the total mass of $\\widetilde{\\mu}$, so it is at most $u_{\\mathbf{0}}$.\n\nWe now assume $p \\geq 2$. Let $\\mathbf{v} \\in \\mathcal{V}_p$. Then $\\mathbf{v}+\\mathbf{w} \\in \\mathcal{V}_p$ for all $\\mathbf{w} \\in \\mathcal{V}$, so the sequence $\\left( u_{\\mathbf{v}+\\mathbf{w}} \\right)_{\\mathbf{w} \\in \\mathcal{V}}$ satisfies the assumptions of Theorem~\\ref{thm_hausdorff_usual}. Therefore, there is a finite measure $\\mu_{\\mathbf{v}}$ on $\\mathcal{Q}$ such that\n\\[ u_{\\mathbf{v}+\\mathbf{w}} = \\int \\mathbf{q}^{\\mathbf{w}} \\mu_{\\mathbf{v}}(\\mathrm{d} \\q)\\]\nfor all $\\mathbf{w} \\in \\mathcal{V}$. Now let $\\mathbf{v}, \\mathbf{v}' \\in \\mathcal{V}_p$. For all $\\mathbf{w}$, we have\n\\[ \\int \\mathbf{q}^{\\mathbf{v}'} \\mathbf{q}^{\\mathbf{w}} \\mu_{\\mathbf{v}}(\\mathrm{d} \\q)= u_{\\mathbf{v}+\\mathbf{v}'+\\mathbf{w}}=\\int \\mathbf{q}^{\\mathbf{v}} \\mathbf{q}^{\\mathbf{w}} \\mu_{\\mathbf{v}'}(\\mathrm{d} \\q).\\]\nIn other words, the measures $\\mathbf{q}^{\\mathbf{v}'} \\mu_{\\mathbf{v}}(\\mathrm{d} \\q)$ and $\\mathbf{q}^{\\mathbf{v}} \\mu_{\\mathbf{v}'}(\\mathrm{d} \\q)$ have the same moments, so by uniqueness in Theorem~\\ref{thm_hausdorff_usual}\n\\begin{equation}\\label{eqn_consistence_muv}\n\\mathbf{q}^{\\mathbf{v}'} \\mu_{\\mathbf{v}}(\\mathrm{d} \\q)=\\mathbf{q}^{\\mathbf{v}} \\mu_{\\mathbf{v}'}(\\mathrm{d} \\q).\n\\end{equation}\nIn particular, for all $\\mathbf{v} \\in \\mathcal{V}_p$, we can consider the $\\sigma$-finite measure \\[\\widetilde{\\mu}_{\\mathbf{v}}(\\mathrm{d} \\q)= \\frac{\\mu_{\\mathbf{v}}(\\mathrm{d} \\q)}{\\mathbf{q}^{\\mathbf{v}}}\\] defined on $\\{ \\mathbf{q}^{\\mathbf{v}}>0\\}$. Then \\eqref{eqn_consistence_muv} implies that, for any $\\mathbf{v}, \\mathbf{v}' \\in \\mathcal{V}_p$, the measures $\\widetilde{\\mu}_{\\mathbf{v}}$ and $\\widetilde{\\mu}_{\\mathbf{v}'}$ coincide on $\\{ \\mathbf{q}^{\\mathbf{v}}>0 \\} \\cap \\{ \\mathbf{q}^{\\mathbf{v}'}>0\\}$. Therefore, there is a measure $\\mu$ on $\\bigcup_{\\mathbf{v} \\in \\mathcal{V}_p} \\{\\mathbf{q}^{\\mathbf{v}}>0\\} = \\mathcal{Q}^*$ such that, for all $\\mathbf{v} \\in \\mathcal{V}_p$, we have\n\\begin{equation}\\label{eqn_mu_and_muv_weak}\n\\mu_{\\mathbf{v}}(\\mathrm{d} \\q) = \\mathbf{q}^{\\mathbf{v}} \\mu(\\mathrm{d} \\q) \\quad \\mbox{ on } \\quad \\{\\mathbf{q}^{\\mathbf{v}}>0\\}.\n\\end{equation}\nSince $\\mu$ is finite on $\\{q_j>\\varepsilon\\}$ for all $\\varepsilon>0$ and $j \\geq 2$, the measure $\\mu$ is $\\sigma$-finite. We would now like to extend the equality~\\eqref{eqn_mu_and_muv_weak} to all $\\mathcal{Q}^*$ under the condition $\\mathbf{v} \\in \\mathcal{V}_p^*$.\n\nFor this, let $\\mathbf{v} \\in \\mathcal{V}_p^*$, and let $j \\geq 2$ be such that $v_j>0$ and $\\mathbf{v}-\\mathbf{1}_j \\in \\mathcal{V}_p$. We have $p \\mathbf{1}_j \\in \\mathcal{V}_p$, so we can apply \\eqref{eqn_consistence_muv} to $\\mathbf{v}$ and $p \\mathbf{1}_j$. We obtain, on $\\{ q_j>0 \\}$:\n\\[ \\mu_{\\mathbf{v}}(\\mathrm{d} \\q) = \\mathbf{q}^{\\mathbf{v}} \\frac{\\mu_{p \\mathbf{1}_j}(\\mathrm{d} \\q)}{q_j^p} = \\mathbf{q}^{\\mathbf{v}} \\mu(\\mathrm{d} \\q), \\]\nusing also \\eqref{eqn_mu_and_muv_weak} for $p \\mathbf{1}_j$. 
In other words, \\eqref{eqn_mu_and_muv_weak} holds on $\\{ q_j>0 \\}$.\n\nOn the other hand, for $\\mathbf{v}$ and $\\mathbf{v}-\\mathbf{1}_j$, we can obtain a stronger version of \\eqref{eqn_consistence_muv}. More precisely, for all $\\mathbf{w}$, we have\n\\[ \\int q_j \\mathbf{q}^{\\mathbf{w}} \\mu_{\\mathbf{v}-\\mathbf{1}_j}(\\mathrm{d} \\q) = u_{\\mathbf{v}+\\mathbf{w}} = \\int \\mathbf{q}^{\\mathbf{w}} \\mu_{\\mathbf{v}}(\\mathrm{d} \\q),\\]\nso the measures $q_j \\mu_{\\mathbf{v}-\\mathbf{1}_j}(\\mathrm{d} \\q)$ and $\\mu_{\\mathbf{v}}(\\mathrm{d} \\q)$ have the same moments, so they coincide. But the first one is $0$ on $\\{q_j=0\\}$, so it is also the case for the second. Therefore, \\eqref{eqn_mu_and_muv_weak} holds on $\\{ q_j=0 \\}$, with both sides equal to $0$.\n\nTherefore, we have proved that \\eqref{eqn_mu_and_muv_weak} holds on $\\mathcal{Q}$. By integrating over $\\mathcal{Q}^*$ and using that the total mass of $\\mu_{\\mathbf{v}}$ is $u_{\\mathbf{v}}$ and that $\\mu_{\\mathbf{v}}$ is supported by $\\mathcal{Q}^*$, we get the result.\n\\end{proof}\n\n\\subsection{Proof of Theorem~\\ref{thm_weak_Markov_general}}\n\nAs in \\cite{BL19}, we start by writing down the peeling equations, which are linear equations relating the numbers $a^p_{\\mathbf{v}}$ to each other. For every $p \\geq 1$ and $\\mathbf{v} \\in \\mathcal{V}_p$, we have\n\\begin{equation}\\label{eqn_peeling_equation_bipartite}\na^p_{\\mathbf{v}}= \\sum_{j \\geq 1} a^{p+j-1}_{\\mathbf{v}+\\mathbf{1}_j}+2\\sum_{i=1}^{p-1} \\sum_{\\mathbf{w} \\in \\mathcal{V}} \\beta_0^{(i-1)}(\\mathbf{w}) a^{p-i}_{\\mathbf{v}+\\mathbf{w}},\n\\end{equation}\nwhere we recall that $\\beta_0^{(i-1)}(\\mathbf{w})$ is the number of planar, bipartite maps of the $2(i-1)$-gon with exactly $w_j$ internal faces of degree $2j$ for all $j \\geq 1$. These equations, together with the facts that $a^1_\\mathbf{0}=1$ and $a^p_{\\mathbf{v}} \\geq 0$, characterize the families $(a^p_{\\mathbf{v}})$ of numbers that may arise from a weakly Markovian map. In order to be able to use the Hausdorff moment problem, we now need to check that the discrete derivatives of $(a^p_{\\mathbf{v}})$ are nonnegative.\n\n\\begin{lem}\\label{lem_abs_monotone}\nLet $M$ be a weakly Markovian bipartite map, and let $\\left( a^p_{\\mathbf{v}} \\right)$ be the associated constants. For every $\\mathbf{k} \\geq \\mathbf{0}$, $p \\geq 1$ and $\\mathbf{v} \\in \\mathcal{V}_p$, we have\n\\[ \\left( \\Delta^{\\mathbf{k}} a^p \\right)_{\\mathbf{v}} \\geq 0.\\]\n\\end{lem}\n\n\\begin{proof}\nThe proof is similar to the proof of Lemma 16 in \\cite{BL19}, with the following modification: in \\cite{BL19}, it was useful that in the same peeling equation, we had $a^p_v$ appearing on the left and $a^p_{v+1}$ on the right. However, in \\eqref{eqn_peeling_equation_bipartite} $a^p_{\\mathbf{v}+\\mathbf{1}_j}$ does not appear in the right-hand side (this is because we are using the lazy peeling process of \\cite{Bud15} instead of the simple peeling of \\cite{Ang03}). Therefore, instead of using directly the peeling equation, we will need to use the \\emph{double peeling equation}, which corresponds to performing two peeling steps, instead of one in~\\eqref{eqn_peeling_equation_bipartite}.\n\nMore precisely, the peeling equation \\eqref{eqn_peeling_equation_bipartite} gives an expansion of $a^p_{\\mathbf{v}}$. The \\emph{double peeling equation} is obtained from \\eqref{eqn_peeling_equation_bipartite} by replacing all the terms in the right-hand side by their expansion given by \\eqref{eqn_peeling_equation_bipartite}.
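For instance, the term $a^{p+j-1}_{\\mathbf{v}+\\mathbf{1}_j}$ gets replaced by its own expansion\n\\[ a^{p+j-1}_{\\mathbf{v}+\\mathbf{1}_j}= \\sum_{j' \\geq 1} a^{p+j+j'-2}_{\\mathbf{v}+\\mathbf{1}_j+\\mathbf{1}_{j'}}+2\\sum_{i=1}^{p+j-2} \\sum_{\\mathbf{w} \\in \\mathcal{V}} \\beta_0^{(i-1)}(\\mathbf{w}) a^{p+j-1-i}_{\\mathbf{v}+\\mathbf{1}_j+\\mathbf{w}}, \\]\nand taking $j'=j$ in the first sum produces the term $a^{p+2j-2}_{\\mathbf{v}+2\\cdot\\mathbf{1}_j}$ with coefficient $1$; this is the algebraic counterpart of Item~\\ref{item_coeff_equal_one} below.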
Note that this indeed makes sense because if $\\mathbf{v} \\in \\mathcal{V}_p$, then $\\mathbf{v}+\\mathbf{1}_j \\in \\mathcal{V}_{p+j-1}$ for all $j \\geq 1$, and $\\mathbf{v}+\\mathbf{w} \\in \\mathcal{V}_{p-i}$ for all $i \\geq 1$ and $\\mathbf{w} \\in \\mathcal{V}$.\n\nThe equation we obtain is of the form\n\\begin{equation}\\label{eqn_double_peeling_equation}\na^p_{\\mathbf{v}}=\\sum_{i \\in \\mathbb Z, \\, \\mathbf{w} \\in \\mathcal{V}} c^{p,i}_{\\mathbf{v},\\mathbf{w}} a^{p+i}_{\\mathbf{v}+\\mathbf{w}},\n\\end{equation}\nwhere the coefficients $c^{p,i}_{\\mathbf{v},\\mathbf{w}}$ are nonnegative integers. An explicit formula for these could be computed in terms of the $\\beta_0^{(i)}(\\mathbf{w})$, but this will not be needed. Here are the facts that will be useful:\n\\begin{enumerate}\n\\item\nthe coefficients $c^{p,i}_{\\mathbf{v},\\mathbf{w}}$ actually do not depend on $\\mathbf{v}$, so we can write them $c^{p,i}_{\\mathbf{w}}$,\n\\item\\label{item_coeff_equal_one}\nwe have $c^{p,2j-2}_{2\\cdot\\mathbf{1}_j}=1$ for every $j \\geq 1$,\n\\item\\label{item_coeff_positive}\nwe have $c^{p,0}_{\\mathbf{1}_j} \\geq 1$ for every $j \\geq 2$.\n\\end{enumerate}\nThe first item follows from the fact that at each time, the available next peeling steps do not depend on the internal face degrees of the explored region.\nThe second item expresses the fact that, for a given peeling algorithm, there is a unique way to obtain a map with half-perimeter $p+2j-2$ with internal faces $\\mathbf{v}+2\\cdot\\mathbf{1}_j$ in two peeling steps. This way is to discover a unique face of degree $2j$ at both steps. The third item means that it is possible (not necessarily in a unique way) to obtain in two peeling steps a map with the same perimeter but one more face of degree $2j$. This is achieved by discovering a new face of degree $2j$ at the first step, and gluing all but two sides of this face two by two at the second step (see Figure~\\ref{figure_pconstant_vincrease}). \n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.7]{one_more_face}\n\\caption{In two peeling steps, the perimeter stays constant and one face with degree $2j$ is added (here $j=3$).}\\label{figure_pconstant_vincrease}\n\\end{center}\n\\end{figure}\n\nWe now prove the lemma by induction on $|\\mathbf{k}|=\\sum_{j \\geq 1} k_j$. First, the case $|\\mathbf{k}|=0$ just means that $a^p_{\\mathbf{v}} \\geq 0$ for all $p \\geq 1$ and $\\mathbf{v} \\in \\mathcal{V}_p$, which is immediate. Let us now assume that the lemma is true for $\\mathbf{k}$ and prove it for $\\mathbf{k}+\\mathbf{1}_j$, where $j \\geq 1$. We will first treat the case where $j \\geq 2$.\nUsing the double peeling equation \\eqref{eqn_double_peeling_equation} for $(p, \\mathbf{v}+\\mathbf{i})$ for different values of $\\mathbf{i}$, we have\n\\[ \\left( \\Delta^{\\mathbf{k}} a^p \\right)_{\\mathbf{v}} = \\sum_{i \\in \\mathbb Z, \\, \\mathbf{w} \\in \\mathcal{V}} c^{p,i}_{\\mathbf{w}} \\left( \\Delta^{\\mathbf{k}} a^{p+i}\\right)_{\\mathbf{v}+\\mathbf{w}}. 
\\]\nTherefore, using the induction hypothesis and Item~\\ref{item_coeff_equal_one} above, we can write \n\\begin{align*}\n0 & \\leq \\left( \\Delta^{\\mathbf{k}} a^{p+2j-2} \\right)_{\\mathbf{v}+2\\cdot\\mathbf{1}_j}\\\\\n&= c^{p,2j-2}_{2\\cdot\\mathbf{1}_j} \\left( \\Delta^{\\mathbf{k}} a^{p+2j-2} \\right)_{\\mathbf{v}+2\\cdot\\mathbf{1}_j}\\\\\n&= \\left( \\Delta^{\\mathbf{k}} a^{p} \\right)_{\\mathbf{v}} - \\sum_{(i,\\mathbf{w}) \\ne (2j-2, 2\\cdot\\mathbf{1}_j)} c^{p,i}_{\\mathbf{w}} \\left( \\Delta^{\\mathbf{k}} a^{p+i} \\right)_{\\mathbf{v}+\\mathbf{w}}.\n\\end{align*}\nUsing the induction hypothesis again, we can remove all the terms in the last sum except the one where $(i,\\mathbf{w})=(0,\\mathbf{1}_j)$. Moreover, by Item~\\ref{item_coeff_positive} above, we can replace the coefficient $c^{p,0}_{\\mathbf{1}_j}$ by $1$. We obtain\n\\[0 \\leq \\left( \\Delta^{\\mathbf{k}} a^{p} \\right)_{\\mathbf{v}} - \\left( \\Delta^{\\mathbf{k}} a^{p} \\right)_{\\mathbf{v}+\\mathbf{1}_j}= \\left( \\Delta^{\\mathbf{k}+\\mathbf{1}_j} a^p \\right)_{\\mathbf{v}}, \\]\nwhich proves the induction step for $j \\geq 2$. If $j=1$, Item~\\ref{item_coeff_positive} is not true anymore (it is not possible to add only one face of degree $2$ in $2$ steps without changing the perimeter). Therefore, instead of \\eqref{eqn_double_peeling_equation}, we use the simple peeling equation \\eqref{eqn_peeling_equation_bipartite} like in \\cite{BL19}. More precisely, in the induction step, we fix $j' \\geq 2$ and write, using \\eqref{eqn_peeling_equation_bipartite}:\n\\begin{align*}\n0 &\\leq \\left( \\Delta^{\\mathbf{k}} a^{p+j'-1} \\right)_{\\mathbf{v}+\\mathbf{1}_{j'}}\\\\\n&= \\left( \\Delta^{\\mathbf{k}} a^{p} \\right)_{\\mathbf{v}} - \\sum_{j'' \\ne j'} \\left( \\Delta^{\\mathbf{k}} a^{p+j''-1} \\right)_{\\mathbf{v}+\\mathbf{1}_{j''}} -2 \\sum_{i=1}^{p-1} \\sum_{\\mathbf{w}} \\beta_0^{(i-1)}(\\mathbf{w}) \\left( \\Delta^{\\mathbf{k}} a^{p-i} \\right)_{\\mathbf{v}+\\mathbf{w}}.\n\\end{align*}\nEach term in the two sums is nonnegative by the induction hypothesis, so we can remove the second sum and keep only the term $j''=1$ in the first one to obtain\n\\[0 \\leq \\left( \\Delta^{\\mathbf{k}} a^{p} \\right)_{\\mathbf{v}} - \\left( \\Delta^{\\mathbf{k}} a^{p} \\right)_{\\mathbf{v}+\\mathbf{1}_1} = \\left( \\Delta^{\\mathbf{k}+\\mathbf{1}_1} a^{p} \\right)_{\\mathbf{v}}.\\]\nThis concludes the proof of the lemma.\n\\end{proof}\n\nBy Lemma~\\ref{lem_abs_monotone} and Proposition~\\ref{prop_moment_Hausdorff}, for all $p \\geq 1$, there is a $\\sigma$-finite measure $\\mu_p$ on $\\mathcal{Q}^*$ such that, for all $\\mathbf{v} \\in \\mathcal{V}^*_p$,\n\\begin{equation}\\label{eqn_apv_as_moment}\na^p_{\\mathbf{v}}=\\int_{\\mathcal{Q}^*} \\mathbf{q}^{\\mathbf{v}} \\mu_p(\\mathrm{d} \\q)\n\\end{equation}\nand furthermore $\\mu_1(\\mathcal{Q}^*) \\leq a^1_{\\mathbf{0}}=1$. We now replace $a_{\\mathbf{v}}^p$ by this expression in the peeling equation \\eqref{eqn_peeling_equation_bipartite}.
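This substitution is legitimate: if $\\mathbf{v} \\in \\mathcal{V}^*_p$ and $j_0 \\geq 2$ is such that $v_{j_0}>0$ and $\\mathbf{v}-\\mathbf{1}_{j_0} \\in \\mathcal{V}_p$, then the same index $j_0$ witnesses that $\\mathbf{v}+\\mathbf{1}_j \\in \\mathcal{V}^*_{p+j-1}$ for all $j \\geq 1$ and that $\\mathbf{v}+\\mathbf{w} \\in \\mathcal{V}^*_{p-i}$ for all $1 \\leq i \\leq p-1$ and $\\mathbf{w} \\in \\mathcal{V}$, so \\eqref{eqn_apv_as_moment} applies to every term appearing in \\eqref{eqn_peeling_equation_bipartite}.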
We get\n\\begin{align*}\n\\int \\mathbf{q}^{\\mathbf{v}} \\, \\mu_p(\\mathrm{d} \\mathbf{q}) \\hspace{-0.1cm}&= \\hspace{-0.1cm}\\sum_{j \\geq 1} \\int \\mathbf{q}^{\\mathbf{v}+\\mathbf{1}_j} \\, \\mu_{p+j-1}(\\mathrm{d} \\q) + 2\\sum_{i=1}^{p-1} \\sum_{\\mathbf{w} \\in \\mathcal{V}} \\beta_0^{(i-1)}(\\mathbf{w}) \\int \\mathbf{q}^{\\mathbf{v}+\\mathbf{w}} \\, \\mu_{p-i}(\\mathrm{d} \\q)\\\\\n&=\\hspace{-0.1cm} \\int \\mathbf{q}^{\\mathbf{v}} \\left( \\sum_{j \\geq 1} q_j \\, \\mu_{p+j-1}(\\mathrm{d} \\q) + 2 \\sum_{i=1}^{p-1} W_{i-1}(\\mathbf{q}) \\, \\mu_{p-i}(\\mathrm{d} \\q) \\right),\n\\end{align*}\nwhere we recall that $W_{i-1}(\\mathbf{q})$ is the partition function of Boltzmann bipartite maps of the $2(i-1)$-gon with Boltzmann weights $\\mathbf{q}$.\nIn particular, the right-hand side for $i=2$ must be finite, which means that $\\mu_p$ is supported by the set $\\mathcal{Q}_a$ of admissible weight sequences. Moreover, the last display means that the two measures\n\\[ \\mu_p(\\mathrm{d} \\q) \\mbox{ and } \\nu_p(\\mathrm{d} \\q) = \\sum_{j \\geq 1} q_j \\, \\mu_{p+j-1}(\\mathrm{d} \\q) + 2 \\sum_{i=1}^{p-1} W_{i-1}(\\mathbf{q}) \\, \\mu_{p-i}(\\mathrm{d} \\q) \\]\nhave the same $\\mathbf{v}$-th moment for all $\\mathbf{v} \\in \\mathcal{V}^*_p$. In particular, if we fix $j \\geq 2$, this is true as soon as $v_j \\geq p$, so the measures $q_j^p \\mu_p(\\mathrm{d} \\q)$ and $q_j^p \\nu_p(\\mathrm{d} \\q)$ have the same moments so they are equal, so $\\mu_p$ and $\\nu_p$ coincide on $\\{q_j>0\\}$. Since this is true for all $j \\geq 2$ and $\\mu_p, \\nu_p$ are defined on $\\mathcal{Q}^*=\\bigcup_{j \\geq 2} \\{q_j>0\\}$, the measures $\\mu_p$ and $\\nu_p$ are the same, that is,\n\\begin{equation}\\label{peeling_equation_mu}\n\\mu_p(\\mathrm{d} \\q) = \\sum_{j \\geq 1} q_j \\, \\mu_{p+j-1}(\\mathrm{d} \\q) + 2 \\sum_{i=1}^{p-1} W_{i-1}(\\mathbf{q}) \\, \\mu_{p-i}(\\mathrm{d} \\q).\n\\end{equation}\nWe now note that this equation is very similar to the one satisfied by the constants $C_p(\\mathbf{q})$ used to define the $\\mathbf{q}$-IBPM. More precisely, we fix a finite measure $\\mu$ such that all the $\\mu_p$ are absolutely continuous with respect to $\\mu$ (take e.g. $\\mu (\\mathrm{d} \\q)=\\sum_{p \\geq 1} \\frac{g_p(\\mathbf{q}) \\mu_p(\\mathrm{d} \\q)}{2^p}$, where $g_p(\\mathbf{q})>0$ is such that the total mass of $g_p(\\mathbf{q}) \\mu_p(\\mathrm{d} \\q)$ is at most $1$). We denote by $f_p(\\mathbf{q})$ the density of $\\mu_p$ with respect to $\\mu$. Then \\eqref{peeling_equation_mu} becomes\n\\[ f_p(\\mathbf{q}) = \\sum_{j \\geq 1} q_j f_{p+j-1}(\\mathbf{q}) + 2 \\sum_{i=1}^{p-1} W_{i-1}(\\mathbf{q}) f_{p-i}(\\mathbf{q}) \\]\nfor $\\mu$-almost every $\\mathbf{q} \\in \\mathcal{Q}^*$. In other words, $\\left( f_p(\\mathbf{q}) \\right)_{p \\geq 1}$ satisfies the exact same equation as $\\left( C_p(\\mathbf{q}) \\right)_{p \\geq 1}$ in \\cite[Appendix C]{B18these}. These equations have a nonzero solution if and only if $\\mathbf{q} \\in \\mathcal{Q}_h$, so the measures $\\mu_p$ are actually supported by $\\mathcal{Q}_h$. Moreover, by uniqueness of the solution (up to a multiplicative constant), we have\n\\[ f_p(\\mathbf{q})=\\frac{C_p(\\mathbf{q})}{C_1(\\mathbf{q})} f_1(\\mathbf{q}) = C_p(\\mathbf{q}) f_1(\\mathbf{q})\\]\nfor $\\mu$-almost every $\\mathbf{q}$, so $\\mu_p(\\mathrm{d} \\q)=C_p(\\mathbf{q}) \\mu_1(\\mathrm{d} \\q)$. Now let $\\alpha \\leq 1$ be the total mass of the measure $\\mu_1$, and let $\\mathbf{Q}$ be a random variable with distribution $\\alpha^{-1} \\mu_1$.
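In other words, for every $p \\geq 1$ we have $\\mu_p(\\mathrm{d} \\q)=\\alpha \\, C_p(\\mathbf{q}) \\, \\P \\left( \\mathbf{Q} \\in \\mathrm{d} \\q \\right)$, which is the form of the identity used in the next display.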
We then have, for all $p \\geq 1$ and $\\mathbf{v} \\in \\mathcal{V}^*_p$, if $m$ is a map with half-perimeter $p$ and face degrees $\\mathbf{v}$:\n\\begin{equation}\\label{eqn_apv_expectation}\n\\P \\left( m \\subset M \\right) = a^p_{\\mathbf{v}} = \\int \\mathbf{q}^{\\mathbf{v}} \\mu_p(\\mathrm{d} \\q) = \\alpha \\mathbb E \\left[ C_p(\\mathbf{Q}) \\mathbf{Q}^{\\mathbf{v}} \\right] = \\alpha \\P \\left( m \\subset \\mathbb M_{\\mathbf{Q}} \\right).\n\\end{equation}\nNote that $\\mathbf{Q}$ is not well-defined if $\\alpha=0$, but in this case $\\mu_p=0$ for all $p$ so \\eqref{eqn_apv_expectation} remains true for any choice of $\\mathbf{Q}$. To conclude that $M$ has the law of $\\mathbb M_{\\mathbf{Q}}$, all we have left to prove is that $\\alpha=1$ and that \\eqref{eqn_apv_expectation} can be extended to any $\\mathbf{v} \\in \\mathcal{V}_p$. For this, we will show that, when we explore $M$ via a peeling exploration, the perimeter and volumes of the explored region at time $t$ satisfy $\\mathbf{v} \\in \\mathcal{V}_p^*$ for $t$ large enough.\n\nMore precisely, if $\\mathcal{A}$ is a peeling algorithm, we recall that $\\mathcal{E}^{\\mathcal{A}}_t(M)$ is the explored part of $M$ after $t$ steps of a filled-in peeling exploration according to $\\mathcal{A}$. We denote by $P_t$ the half-perimeter of the hole of $\\mathcal{E}^{\\mathcal{A}}_t(M)$ and by $\\mathbf{V}_t$ the sequence of degrees of its internal faces (that is, $V_{t,j}$ is the number of internal faces of $\\mathcal{E}^{\\mathcal{A}}_t(M)$ with degree $2j$). Since $M$ is weakly Markovian, the process $\\left( P_t, \\mathbf{V}_t \\right)_{t \\geq 0}$ is a Markov chain whose law does not depend on the peeling algorithm $\\mathcal{A}$.\n\n\\begin{lem}\\label{lem_volumes_star}\nWe have\n\\[ \\P \\left( \\mathbf{V}_{t} \\in \\mathcal{V}^*_{P_t} \\right) \\xrightarrow[t \\to +\\infty]{} 1.\\]\n\\end{lem}\n\n\\begin{proof}\nSince the probability in the lemma does not depend on $\\mathcal{A}$, it is sufficient to prove the result for a particular peeling algorithm. Therefore, we can assume that $\\mathcal{A}$ has the following property: if the root face of $m$ and its hole have a common vertex $m$, then the peeled edge $\\mathcal{A}(m)$ is incident to such a vertex. We will prove that for this algorithm, we have a.s. $\\mathbf{V}_{t} \\in \\mathcal{V}^*_{P_t}$ for $t$ large enough.\n\nMore precisely, since the vertex degrees of $M$ are a.s. finite and by definition of $\\mathcal{A}$, all the vertices incident to the root face will eventually disappear from the boundary of the explored part. Therefore, for $t$ large enough, no vertex incident to the root face is on $\\partial \\mathcal{E}^{\\mathcal{A}}_t(M)$. We now fix $t$ with this property. If we denote by $\\mathrm{Inn}(m)$ the number of internal vertices of a map $m$ with a hole and by $2J$ the degree of the root face of $M$, this implies $\\mathrm{Inn} \\left( \\mathcal{E}^{\\mathcal{A}}_t(M) \\right) \\geq 2J$ for $t$ large enough.\n\nOn the other hand, the total number of edges of $\\mathcal{E}^{\\mathcal{A}}_t(M)$ is $p+\\sum_{j \\geq 1} j V_{t,j}$, so by the Euler formula\n\\begin{align*}\n\\mathrm{Inn} \\left( \\mathcal{E}^{\\mathcal{A}}_t(M) \\right) &= 2 + \\left( P_t+ \\sum_{j \\geq 1} j V_{t,j} \\right) - \\left( 1+\\sum_{j \\geq 1} V_{t,j} \\right) -2P_t \\\\&= 1-P_t+\\sum_{j \\geq 1} (j-1) V_{t,j}. 
\n\\end{align*}\nTaking $t$ large enough to have $\\mathrm{Inn} \\left( \\mathcal{E}^{\\mathcal{A}}_t(M) \\right) \\geq 2J$, we obtain\n\\[ \\left( \\sum_{j \\geq 1} (j-1) V_{t,j} \\right) -(J-1) \\geq \\left( 2J +P_t -1 \\right) -(J-1) = P_t+J > P_t-1,\\]\nso $V_{t,J}>0$ and $\\mathbf{V}-\\mathbf{1}_{J} \\in \\mathcal{V}_{P_t}$. This proves $\\mathbf{V}_{t} \\in \\mathcal{V}^*_{P_t}$ for $t$ large enough.\n\\end{proof}\n\nWe now conclude the proof of Theorem~\\ref{thm_weak_Markov_general} from \\eqref{eqn_apv_expectation}. We consider a finite map $m_0$ with a hole and a peeling algorithm $\\mathcal{A}$ that is consistent with $m_0$ in the sense that $m_0$ is a possible value of $\\mathcal{E}_{t_0}^{\\mathcal{A}}$ for some $t_0 \\geq 0$. We note that $\\mathcal{E}^{\\mathcal{A}}_{t_0}(M)=m_0$ if and only if $m_0 \\subset M$. Indeed, the direct implication is immediate. The indirect one comes from the fact that, if $m_0 \\subset M$, then all the peeling steps until time $t_0$ must be consistent with $m_0$, so $m_0 \\subset M$ determines the first $t_0$ peeling steps. We now take $t \\geq t_0$. We sum \\eqref{eqn_apv_expectation} over all possible values $m$ of $\\mathcal{E}^{\\mathcal{A}}_t(M)$ such that $m_0 \\subset m$ and the half-perimeter $p$ and internal face degrees $\\mathbf{v}$ of $m$ satisfy $\\mathbf{v} \\in \\mathcal{V}_p^*$. We get\n\\[ \\P \\left( m_0 \\subset M \\mbox{ and } \\mathbf{V}_t \\in \\mathcal{V}^*_{P_t} \\right) = \\alpha \\P \\left( m_0 \\subset \\mathbb M_{\\mathbf{Q}} \\mbox{ and } \\mathbf{V}^{\\mathbf{Q}}_t \\in \\mathcal{V}^*_{P_t^{\\mathbf{Q}}} \\right), \\]\nwhere $P_t^{\\mathbf{Q}}$ and $\\mathbf{V}_t^{\\mathbf{Q}}$ are the analogues of $P_t$ and $\\mathbf{V}_t$ for $\\mathbb M_{\\mathbf{Q}}$ instead of $M$. Since $\\mathbb M_{\\mathbf{Q}}$ is weakly Markovian, we can apply Lemma~\\ref{lem_volumes_star} to both $M$ and $\\mathbb M_{\\mathbf{Q}}$. Therefore, letting $t \\to +\\infty$ in the last display, we get \\[\\P \\left( m_0 \\subset M \\right) = \\alpha \\P \\left( m_0 \\subset \\mathbb M_{\\mathbf{Q}} \\right)\\] for all $m_0$. In particular, if $m_0$ is the trivial map consisting only of the root edge, we get $\\alpha=1$, so $M$ and $\\mathbb M_{\\mathbf{Q}}$ have the same law. This proves Theorem~\\ref{thm_weak_Markov_general}.\n\n\\begin{proof}[Proof of Theorem~\\ref{thm_main_more_general}]\nBy Proposition~\\ref{prop_tightness_dloc_univ}, any subsequential limit $M$ of $\\left( M_{\\mathbf{f}^n, g_n} \\right)$ is planar and one-ended. Moreover, let $m$ be a map with one hole of half-perimeter $p$ and $v_{j}$ faces of degree $2j$ for all $j \\geq 1$. Then\n\\[\\P \\left( m \\subset M \\right) = \\lim_{n \\to +\\infty} \\P \\left( m \\subset M_{\\mathbf{f}^n, g_n} \\right) = \\lim_{n \\to +\\infty} \\frac{\\beta_{g_n}^{(p)}(\\mathbf{f}^n-\\mathbf{v})}{\\beta_{g_n}(\\mathbf{f}^n)},\\]\nwhere the limits are along some subsequence. In particular, the dependence in $m$ is only in $p$ and $\\mathbf{v}$, so $M$ is weakly Markovian and the result follows by Theorem~\\ref{thm_weak_Markov_general}.\n\\end{proof}\n\n\\section{The parameters are deterministic}\\label{sec_univ_end}\n\n\\subsection{Outline}\\label{sec_arg_deux_trous}\n\nOur goal is now to prove Theorem~\\ref{univ_main_thm}. We fix face degree sequences $\\mathbf{f}^n$ and genuses $g_n$ for $n \\geq 0$ satisfying the assumptions of Theorem~\\ref{univ_main_thm} (in particular, we now assume $\\sum_{j} j^2 \\alpha_j <+\\infty$ until the end of the paper). 
By Theorem~\\ref{thm_main_more_general}, up to extracting a subsequence, we can assume $M_{\\mathbf{f}^n, g_n}$ converges to $\\mathbb M_{\\mathbf{Q}}$, where $\\mathbf{Q}$ is a random variable with values in $\\mathcal{Q}_h$. Moreover, the law of the degree of the root face in $M_{\\mathbf{f}^n, g_n}$ converges in distribution to $\\left( j \\alpha_j \\right)_{j \\geq 1}$, which has finite expectation. By the last point of Theorem~\\ref{thm_weak_Markov_general}, we have $\\mathbf{Q} \\in \\mathcal{Q}_f$ a.s.. To prove Theorem~\\ref{univ_main_thm}, it is enough to prove that $\\mathbf{Q}$ is deterministic, and only depends on $\\left( \\alpha_j \\right)_{j \\geq 1}$ and $\\theta$.\n\n\\paragraph{Sketch of the end of the proof.}\nSince we will follow similar ideas, let us first recall the strategy of \\cite{BL19}. If $e_n$ is the root edge of $M_{\\mathbf{f}^n, g_n}$, the parameters $\\mathbf{Q}$ can be observed on a large neighbourhood of $e_n$ in $M_{\\mathbf{f}^n, g_n}$ for $n$ very large.\nThe first step of the proof (Proposition~\\ref{prop_two_holes_argument}) roughly consists of showing that once $M_{\\mathbf{f}^n, g_n}$ is picked, the weights $\\mathbf{Q}$ do not depend on the choice of $e_n$. This is proved by the \\emph{two holes argument}: if $e^1_n$ and $e_n^2$ are two roots chosen uniformly on $M_{\\mathbf{f}^n, g_n}$, we swap two large neighbourhoods of $e^1_n$ and $e^2_n$ in $M_{\\mathbf{f}^n, g_n}$. We then remark that if the weights observed around the two roots are too different, then the map obtained after swapping does not look like a map of the form $\\mathbb M_{\\mathbf{Q}}$.\nThe second step consists of noticing that the average value over all choices of the root of some functions of $\\mathbf{Q}$ is fixed by $\\left( \\alpha_j \\right)_{j \\geq 1}$ and $\\theta$ (Corollary~\\ref{corr_main_minus_monotonicity}).\nFinally, in the third step we prove that these functions are sufficient to characterize $\\mathbf{Q}$ completely (Proposition~\\ref{prop_monotonicity_deg}).\n\nHowever, two important difficulties appear here compared to~\\cite{BL19}:\n\\begin{itemize}\n\\item\nin the first step, we need to find two large pieces around $e_n^1$ and $e_2^n$ with the exact same perimeter, in order to be able to swap them. This was easy for triangulations since the perimeter process associated to a peeling exploration takes all values. This is not true anymore in our general setting. This part will make crucial use of the assumption $\\sum_j j^2 \\alpha_j < +\\infty$ (this is actually the only place in the paper where we will use it). A consequence of this difficulty is that instead of performing the swapping operation \\emph{with high probability}, we will perform it \\emph{with positive probability}.\n\\item\nIn the third step, one of the parameters that we control is the average vertex degree. For triangulations, it followed from an explicit formula that the average degree characterizes $\\mathbf{Q}$. We do not have such a formula here, so our argument will be more involved, and rely on the partial results obtained so far in the present paper.\n\\end{itemize}\n\n\\paragraph{Intermediate results.}\nLet $\\left( M_n, e_n^1, e_n^2 \\right)$ be a uniform, bi-rooted map with face degrees $\\mathbf{f}^n$ and genus $g_n$ (i.e. $e_n^1$ and $e_n^2$ are picked uniformly and independently among the edges of $M_n$). 
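Note that, since all maps with face degrees $\\mathbf{f}^n$ have the same number of edges, the marginals $(M_n, e_n^1)$ and $(M_n, e_n^2)$ are both uniform rooted maps with face degrees $\\mathbf{f}^n$ and genus $g_n$, so each of them has the same law as $M_{\\mathbf{f}^n, g_n}$.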
We highlight that we will write $M_n$ instead of $M_{\\mathbf{f}^n, g_n}$ in this section to lighten the notation.\nUp to extracting a subsequence, we can assume the joint convergence\n\\[ \\left( (M_n, e_n^1), (M_n, e_n^2) \\right) \\xrightarrow[n \\to +\\infty]{(d)} \\left( \\mathbb M^1_{\\mathbf{Q}^1}, \\mathbb M^2_{\\mathbf{Q}^2} \\right) \\]\nfor the local topology, where $\\mathbf{Q}^1$ and $\\mathbf{Q}^2$ have the same distribution as $\\mathbf{Q}$. Moreover, by the Skorokhod representation theorem, we can assume this joint convergence is almost sure. \\emph{We will stay in this setting in Sections~\\ref{subsec_same_perimeter} and~\\ref{subsec_two_holes}.} The first step of the proof will consist of proving the following.\n\n\\begin{prop}\\label{prop_two_holes_argument}\nWe have $\\mathbf{Q}^1=\\mathbf{Q}^2$ almost surely.\n\\end{prop}\n\nFrom here, the second step will be to deduce the next result. For $\\mathbf{q} \\in \\mathcal{Q}_h$, we recall that $j a_j(\\mathbf{q})$ is the probability that the root face of $\\mathbb M_{\\mathbf{q}}$ has degree $2j$, and that $d(\\mathbf{q})=\\mathbb E \\left[ \\frac{1}{\\mathrm{deg}_{\\mathbb M_{\\mathbf{q}}} (\\rho)} \\right]$. \n\n\\begin{corr}\\label{corr_main_minus_monotonicity}\nUnder the assumptions of Theorem~\\ref{univ_main_thm}, let $\\mathbb M_{\\mathbf{Q}}$ be a subsequential limit. Then almost surely, we have\n\\[ d(\\mathbf{Q})=\\frac{1}{2} \\left( 1-2\\theta-\\sum_i \\alpha_i \\right) \\mbox{ and, for all $j \\geq 1$, } a_j(\\mathbf{Q})=\\alpha_j.\\] \n\\end{corr}\n\n\\paragraph{Structure of the section.}\nIn Section~\\ref{subsec_same_perimeter}, we will address the issue of finding two large neighbourhoods of the two roots with exactly the same perimeters. In Section~\\ref{subsec_two_holes}, we use this to prove Proposition~\\ref{prop_two_holes_argument} and Corollary~\\ref{corr_main_minus_monotonicity}. Finally, Section~\\ref{subsec_last_step} is devoted to the end of the proof of Theorem~\\ref{univ_main_thm}, and consists mostly of showing that $d(\\mathbf{q})$ and $\\left( a_j(\\mathbf{q}) \\right)_{j \\geq 1}$ are sufficient to characterize $\\mathbf{q}$.\n\n\\subsection{Finding two pieces with the same perimeter}\n\\label{subsec_same_perimeter}\n\nAs explained above, given the uniform bi-rooted map $\\left( M_n, e^1_n, e^2_n \\right)$, we want to find two neighbourhoods of $e^1_n$ and $e^2_n$ with the same large perimeter $2p$. For this, we will perform a peeling exploration around the two roots and stop it when the perimeter of the explored region is exactly $2p$. However, the perimeter process has a positive drift and can make large positive jumps, so we cannot guarantee that both perimeters will hit the value $p$ with high probability. We will therefore show a weaker result: roughly speaking, the probability that the perimeter processes around $e_n^1$ and $e_n^2$ both hit $p$ is bounded from below, even if we condition on $\\mathbf{Q}^1$ and $\\mathbf{Q}^2$. \n\nMore precisely, we fix a deterministic peeling algorithm $\\mathcal{A}$, and let $p,v_0 \\geq 1$. We recall from the end of Section~\\ref{subsec_lazy_peeling} that we can make sense of a filled-in peeling exploration on the finite map $M_n$ around $e^1_n$ or $e_n^2$.
We perform the following exploration:\n\\begin{itemize}\n\\item\nwe explore the map $M_n$ around $e^1_n$ according to the algorithm $\\mathcal{A}$ until the number of edges in the explored region is larger than $v_0$, or the perimeter of the explored region is exactly $2p$, and denote by $\\tau^1_n$ the time at which we stop;\n\\item\nwe do the same thing around $e^2_n$ and denote by $\\tau^2_n$ the stopping time.\n\\end{itemize}\nWe write $\\mathcal{S}_{n,p,v_0}$ for the event where both $\\tau^1_n$ and $\\tau^2_n$ occur because the perimeter hits $2p$, and where the two regions explored around $e_n^1$ and $e_n^2$ are face-disjoint (the dependence of $\\mathcal{S}$ in $\\mathcal{A}$ will stay implicit). We note right now that $(M_n, e_n^1)$ has a planar, one-ended local limit. Hence, with probability $1-o(1)$ as $n \\to +\\infty$, the exploration is not stopped before $\\tau_n^1$ or $\\tau_n^2$ for the reason stated in the end of Section~\\ref{subsec_lazy_peeling}.\n\nThe goal of this subsection is to prove the next result. We recall that the functions $r_j(\\mathbf{q})$ for $j \\in \\mathbb N^* \\cup \\{ \\infty \\}$ and $\\mathbf{q} \\in \\mathcal{Q}_h$ are defined in Proposition~\\ref{prop_q_as_limit}.\n\n\\begin{prop}\\label{prop_finding_regions_to_cut}\nLet $(M_n, e^1_n, e_2^n)$ and $\\mathbf{Q}^1, \\mathbf{Q}^2$ be as in Section~\\ref{sec_arg_deux_trous}.\nWe fix $j \\in \\mathbb N^* \\cup \\{ \\infty \\}$, and $\\varepsilon>0$. Then there is $\\delta>0$ with the following property. For every $p \\geq 1$ large enough, there is $v_0$ such that, for $n$ large enough:\n\\[ \\mbox{if } \\P \\left( | r_j(\\mathbf{Q}^1)-r_j(\\mathbf{Q}^2) |>\\varepsilon \\right) \\geq \\varepsilon,\\]\n\\[ \\mbox{then }\\P \\left( | r_j(\\mathbf{Q}^1)-r_j(\\mathbf{Q}^2)|>\\frac{\\varepsilon}{2} \\mbox{ and } (M_n, e_n^1, e_n^2) \\in \\mathcal{S}_{n,p,v_0} \\right) \\geq \\delta. \\]\n\\end{prop}\n\nWe recall that we have used the Skorokhod theorem to couple the finite and infinite maps together, so the last event makes sense.\n\nHere is why Proposition~\\ref{prop_finding_regions_to_cut} seems reasonable: we know that conditionally on $(\\mathbf{Q}^1, \\mathbf{Q}^2)$, the perimeters of the explored region along a peeling exploration of $\\mathbb M^1_{\\mathbf{Q}^1}$ and $\\mathbb M^2_{\\mathbf{Q}^2}$ are random walks conditioned to stay positive. Moreover, since $\\mathbf{Q}^1, \\mathbf{Q}^2 \\in \\mathcal{Q}_f$, these random walks do not have a too heavy tail, so each of them have a reasonable chance of hitting exactly $p$. However, there is no reason a priori why $\\mathbb M^1_{\\mathbf{Q}^1}$ and $\\mathbb M^2_{\\mathbf{Q}^2}$ should be independent conditionally on $\\mathbf{Q}^1$ and $\\mathbf{Q}^2$, so it might be unlikely that \\emph{both} processes hit $p$. Therefore, the sketch of the proof will be the following:\n\\begin{itemize}\n\\item\nwe fix a large constant $C>0$ ($C$ will be much smaller than $p$),\n\\item\nwe prove that both walks have a large probability to hit the interval $[p,p+C]$ before the explored volume exceeds $v_0(p)$ (Lemma~\\ref{lem_RW_hits_interval}),\n\\item\nonce both perimeter processes around $e_n^1$ and $e_n^2$ in $M_n$ have hit $[p,p+C]$, we use the Bounded ratio Lemma (Lemma~\\ref{lem_BRL}, item 2) to show that, with probability bounded from below by roughly $e^{-C}$, both perimeters fall to exactly $p$ in at most $C$ steps. 
This will prove the proposition with $\\delta \\approx \\varepsilon e^{-C}$ and $v_0=v_0(p)$. \n\\end{itemize}\nThe point of replacing $p$ by $[p,p+C]$ is to deal with events of large probability, so that we do not need any independence to ensure that the two events happen simultaneously.\n\nFor this, consider the peeling exploration of $\\mathbb M^1_{\\mathbf{Q}^1}$ according to $\\mathcal{A}$. We denote by $\\sigma^{1,\\infty}_{[p,p+C]}$ the first time at which the half-perimeter is in $[p,p+C]$ (this stopping time might be infinite). We define $\\sigma^{1,n}_{[p,p+C]}$ (resp. $\\sigma^{2, \\infty}_{[p,p+C]}$, $\\sigma^{2,n}_{[p,p+C]}$) as the analogous quantity for the exploration in $M_n$ around $e_n^1$ (resp. in $\\mathbb M^2_{\\mathbf{Q}^2}$, in $\\left( M_n, e_n^2 \\right)$).\n\n\\begin{lem}\\label{lem_RW_hits_interval}\nWe have\n\\[ \\lim_{C \\to +\\infty} \\liminf_{p \\to +\\infty} \\P \\left( \\sigma_{[p,p+C]}^{1,\\infty} <+\\infty \\right) =1.\\]\n\\end{lem}\n\n\\begin{proof}\nWe know that $\\mathbf{Q}^1 \\in \\mathcal{Q}_f$ a.s.. Hence, it is enough to prove that, for any $\\mathbf{q} \\in \\mathcal{Q}_f$, we have\n\\begin{equation}\\label{eqn_RW_hits_interval_infinite}\n\\lim_{C \\to +\\infty} \\liminf_{p \\to +\\infty} \\P \\left( \\sigma^{1,\\infty}_{[p,p+C]}<+\\infty \\big| \\mathbf{Q}^1=\\mathbf{q} \\right) =1.\n\\end{equation}\nThe lemma then follows by taking the expectation and using Fatou's lemma. Conditionally on $\\mathbf{Q}^1=\\mathbf{q}$, the law of $\\mathbb M^1_{\\mathbf{Q}^1}$ is the law of $\\mathbb M_\\mathbf{q}$. In particular, the process $P$ describing the half-perimeter of the explored region has the law of $X$ conditioned to stay positive, where $X$ is a random walk with step distribution $\\widetilde{\\nu}_{\\mathbf{q}}$.\n\nTo prove \\eqref{eqn_RW_hits_interval_infinite}, we distinguish two cases: the case where $\\mathbf{q}$ is critical, and the case where it is not. We start with the second one. Then by the results of Section~\\ref{subsec_IBPM}, the walk $X$ satisfies $\\mathbb E \\left[ |X_1| \\right]<+\\infty$ and $\\mathbb E \\left[ X_1 \\right]>0$, so the conditioning to stay positive is non-degenerate. Therefore, it is enough to prove\n\\begin{equation}\\label{eqn_RW_hits_interval_conditioned}\n\\lim_{C \\to +\\infty} \\liminf_{p \\to +\\infty} \\P \\left( \\mbox{$X$ hits $[p,p+C]$} \\right) =1.\n\\end{equation}\nThis follows from standard renewal arguments: if we denote by $(H_i)_{i \\geq 0}$ the ascending ladder heights of $X$, then $(H_i)$ is a renewal set with density $\\frac{1}{\\mathbb E[H_1]}>0$, where $\\mathbb E[H_1]<+\\infty$ because $\\mathbb E \\left[ |X_1| \\right]<+\\infty$ and $\\mathbb E \\left[ X_1 \\right]>0$. Let $I_p$ be such that $H_{I_p} < p \\leq H_{I_p+1}$. Then the law of $H_{I_p+1}-H_{I_p}$ converges as $p \\to +\\infty$ to the law of $H_1$ biased by its size, so\n\\begin{align*}\n\\P \\left( \\mbox{$X$ does not hit $[p,p+C]$} \\right) &\\leq \\P \\left( H_{I_p+1} \\notin [p,p+C] \\right)\\\\& \\leq \\P \\left( H_{I_p+1}-H_{I_p} > C \\right) \\xrightarrow[p \\to +\\infty]{} \\frac{\\mathbb E \\left[ H_1 \\mathbbm{1}_{H_1>C} \\right]}{\\mathbb E[H_1]},\n\\end{align*} \nand this last quantity goes to $0$ as $C \\to +\\infty$.\n\nWe now tackle the case where $\\mathbf{q}$ is critical, which by the results at the end of Section~\\ref{subsec_IBPM} implies $\\sum_{i \\geq 1} i^{3\/2} \\widetilde{\\nu}_{\\mathbf{q}}(i)<+\\infty$. This case is more complicated since renewal arguments are not available anymore, and the conditioning is now degenerate, so absolute continuity arguments between $P$ and $X$ become more elaborate.
On the other hand, the growth is now slower and the unconditioned walk $X$ with step distribution $\\widetilde{\\nu}_{\\mathbf{q}}$ is now recurrent, so it seems more difficult to jump over a large interval. And indeed, we will prove\n\\[ \\lim_{p \\to +\\infty} \\P \\left( \\mbox{$P$ hits $p$} \\right) =1,\\]\nwhich is a much stronger version of \\eqref{eqn_RW_hits_interval_conditioned}.\n\nFor this, our strategy will be the following: let $\\tau_p$ be the first time at which $P$ is at least $p$.\n\\begin{itemize}\n\\item\nThe scaling limit of $P$ is a process with no positive jump, so ${P_{\\tau_p}=p+o(p)}$ in probability as $p \\to +\\infty$.\n\\item\nBetween time $\\tau_p$ and $\\tau_p+o(p^{3\/2})$, the process $P$ looks a lot like an unconditioned random walk $X$ started from $P_{\\tau_p}$.\n\\item\nIf $X$ is started from $p+o(p)$, the time it takes to first hit $p$ is $o(p^{3\/2})$. This is a stronger version of the recurrence of $X$, and will follow from a local limit theorem for random walks.\n\\end{itemize}\nLet us now be more precise. By Theorem 3 of \\cite{Bud15} (see also \\cite[Chapter 10]{C-StFlour}), we have the convergence\n\\[ \\left( \\frac{P_{nt}}{n^{2\/3}} \\right)_{t \\geq 0} \\xrightarrow[n \\to +\\infty]{(d)} \\left( b_{\\mathbf{q}} S^+_t \\right)_{t \\geq 0} \\]\nfor the Skorokhod topology, where $S^+$ is a $3\/2$-stable Lévy process with no positive jump conditioned to stay positive, and $b_{\\mathbf{q}}>0$ (the precise value will not matter here). Since this limiting process has no positive jump, we have $P_{\\tau_p}-p=o(p)$ in probability. Hence, there is a deterministic function $f(p)$ with $\\frac{f(p)}{p} \\to 0$ when $p \\to +\\infty$ such that, for any $\\varepsilon>0$,\n\\[ \\P \\left( P_{\\tau_p}-p \\geq \\varepsilon f(p) \\right) \\xrightarrow[p \\to +\\infty]{} 0. \\]\nWe now fix $\\varepsilon>0$, and condition on $P_{\\tau_p}=p'$ for some $p \\leq p' \\leq p+\\varepsilon f(p)$. We claim that then $\\left( P_{\\tau_p+i}-p' \\right)_{0 \\leq i \\leq f(p)^{3\/2}}$ can be coupled with $(X_i)_{0 \\leq i \\leq f(p)^{3\/2}}$ in such a way that both processes are the same with probability $1-o(1)$. For this, recall from~\\eqref{peeling_transitions} that $P$ can be described as a Doob $h$-transform of $X$, where $h$ is given by \\eqref{eqn_defn_homega}. Hence, the Radon--Nikodym derivative of the first process with respect to the second is\n\\begin{equation}\\label{critical_Radon_Nikodym}\n\\frac{h_{p'+X_{f(p)^{3\/2}}}(1)}{h_{p'}(1)}.\n\\end{equation}\nSince $\\frac{X_{f(p)^{3\/2}}}{f(p)}$ converges in distribution, we have $\\frac{X_{f(p)^{3\/2}}}{p} \\to 0$ in probability. By using the fact that $\\frac{p'}{p} \\to 1$ uniformly in $p'$ and that $h_1(x)\\sim c\\sqrt{x}$ for some $c>0$ (see Section~\\ref{subsec_IBPM}), we conclude that \\eqref{critical_Radon_Nikodym} goes to $1$ as $p \\to +\\infty$, uniformly in $p' \\in \\left[ p,p+\\varepsilon f(p) \\right]$. This proves our coupling claim. Note that under this coupling, the time where $P$ hits exactly $p$ is $\\tau_p$ plus the time where $X$ hits $p-p'$.\n\nWe will now show that, if $p$ is large enough, for any $k \\in [-\\varepsilon f(p),0]$, we have\n\\begin{equation}\\label{eqn_critical_quantitative_recurrence}\n\\P \\left( \\mbox{$X$ hits $k$ before time $f(p)^{3\/2}$} \\right) \\geq 1-\\delta(\\varepsilon),\n\\end{equation}\nwhere $\\delta(\\varepsilon) \\to 0$ as $\\varepsilon \\to 0$.
Together with our coupling result, this will imply that the probability for $P$ to hit $p$ before time $\\tau_p+f(p)^{3\/2}$ is at least $1-\\delta (\\varepsilon)-o(1)$ as $p \\to +\\infty$. Since this is true for any $\\varepsilon>0$, this will conclude the proof of Lemma~\\ref{lem_RW_hits_interval} in the critical case.\n\nThe proof of \\eqref{eqn_critical_quantitative_recurrence} relies on the Local Limit Theorem (this is e.g. Theorem 4.2.1 of \\cite{IL71}). This theorem (in the case $\\alpha=3\/2$) states that\n\\[ \\sup_{k \\in \\mathbb Z} \\left| n^{2\/3} \\P (X_n=k)-g \\left( \\frac{k}{n^{2\/3}} \\right) \\right| \\xrightarrow[n \\to +\\infty]{} 0, \\]\nwhere $g$ is a continuous function (the density of a $3\/2$-stable variable).\nOn the other hand, let us denote $t=f(p)^{3\/2}$. By the strong Markov property, for all $k \\in \\mathbb Z$, we have\n\\begin{align*}\n\\mathbb E_0 \\left[ \\sum_{i=0}^{t} \\mathbbm{1}_{X_i=k} \\right] &\\leq \\P_0 \\left( \\mbox{$X$ hits $k$ before time $t$} \\right) \\mathbb E_k \\left[ \\sum_{i=0}^{t} \\mathbbm{1}_{X_i=k} \\right]\\\\\n&= \\P_0 \\left( \\mbox{$X$ hits $k$ before time $t$} \\right) \\mathbb E_0 \\left[ \\sum_{i=0}^{t} \\mathbbm{1}_{X_i=0} \\right].\n\\end{align*}\nTherefore, using the local limit theorem, we can write, for $-\\varepsilon f(p) \\leq k \\leq 0$ and $p$ large (the $o$ terms are all uniform in $k$):\n\\begin{align*}\n\\P_0 \\left( \\mbox{$X$ hits $k$ before $t$} \\right) &\\geq \\frac{\\sum_{i=0}^{t} \\P_0 \\left( X_i=k \\right)}{\\sum_{i=0}^{t} \\P_0 \\left( X_i=0 \\right)}\\\\\n&= \\frac{\\sum_{i=1}^{t} \\left( \\frac{1}{i^{2\/3}} g \\left( \\frac{k}{i^{2\/3}} \\right) + o \\left( \\frac{1}{i^{2\/3}} \\right) \\right)}{1+\\sum_{i=1}^{t} \\left( \\frac{1}{i^{2\/3}} g (0) + o \\left( \\frac{1}{i^{2\/3}} \\right) \\right)}\\\\\n&\\geq \\frac{-\\varepsilon t^{1\/3} + \\sum_{i=\\varepsilon t}^{t} \\left( \\frac{1}{i^{2\/3}} \\min_{[-\\varepsilon^{1\/3},0]} g +o \\left( \\frac{1}{i^{2\/3}} \\right)\\right)}{3t^{1\/3} g(0) + \\varepsilon t^{1\/3}}\\\\\n&\\geq \\frac{-2\\varepsilon t^{1\/3}+ \\left( 3t^{1\/3}-3\\varepsilon^{1\/3} t^{1\/3}\\right) \\min_{[-\\varepsilon^{1\/3},0]} g}{\\left( 3g(0)+\\varepsilon \\right)t^{1\/3}}\\\\\n&= \\frac{-2\\varepsilon +3(1-\\varepsilon^{1\/3}) \\min_{[-\\varepsilon^{1\/3},0]} g}{3g(0)+\\varepsilon},\n\\end{align*}\nwhere the third line uses that, for any index $i \\geq \\varepsilon t$, we have\n\\[0 \\geq \\frac{k}{i^{2\/3}} \\geq -\\frac{\\varepsilon f(p)}{(\\varepsilon t)^{2\/3}}=-\\varepsilon^{1\/3}. \\]\nWe obtain a lower bound that goes to $1$ as $\\varepsilon \\to 0$, so this proves \\eqref{eqn_critical_quantitative_recurrence}, and Lemma~\\ref{lem_RW_hits_interval}.\n\\end{proof}\n\n\\begin{proof}[Proof of Proposition~\\ref{prop_finding_regions_to_cut}]\nThe subtlety in the proof is that we would like to say something about the finite maps $M_n$ conditionally on the values of $\\mathbf{Q}^1$ and $\\mathbf{Q}^2$, but $\\mathbf{Q}^1$ and $\\mathbf{Q}^2$ are defined in terms of the infinite limits. However, we can condition on the maps explored at the time when the perimeters of the explored parts hit $[p,p+C]$ for the first time. Then Proposition~\\ref{prop_q_as_limit} guarantees that from these explored parts, we can get good approximations of $\\mathbf{Q}^1$ and $\\mathbf{Q}^2$ if $p$ is large enough.\n\nIn this proof, we will use a shortened notation for our peeling explorations. 
For $i \\in \\{1,2\\}$ and $t \\geq 0$, we will write $\\mathcal{E}^{n,i}_t=\\mathcal{E}_t^{\\mathcal{A}} \\left( M_n, e_n^i \\right)$ and $\\mathcal{E}^{\\infty,i}_t=\\mathcal{E}_t^{\\mathcal{A}} \\left( \\mathbb M^i_{\\mathbf{Q}^i} \\right)$.\n\nWe fix $\\varepsilon>0$. By Lemma~\\ref{lem_RW_hits_interval}, let $C$ be a constant (depending only on $\\varepsilon$) such that\n\\[ \\liminf_{p \\to +\\infty} \\P \\left( \\sigma_{[p,p+C]}^{1,\\infty} <+\\infty \\right) > 1-\\frac{\\varepsilon}{20}.\\]\nFor $p$ large enough (where \"large enough\" may depend on $\\varepsilon$), there is $v_0=v_0(p)$ such that\n\\begin{equation}\\label{eqn_sigma_reasonable}\n\\P \\left( \\sigma^{\\infty,1}_{[p,p+C]} \\leq v_0 \\mbox{ and } \\left| \\mathcal{E}^{\\infty,1}_{\\sigma^{\\infty,1}_{[p,p+C]}} \\right| \\leq v_0 \\right)>1-\\frac{\\varepsilon}{20},\n\\end{equation}\nwhere $|m|$ is the number of edges of a map $m$.\nOn the other hand, let us fix $j \\in \\mathbb N^* \\cup \\{\\infty\\}$. Proposition~\\ref{prop_q_as_limit} provides a function $\\widetilde{r}_j$ on the set of finite maps with a hole such that $\\widetilde{r}_j(\\mathcal{E}^{\\infty, 1}_t) \\to r_j(\\mathbf{Q}^1)$ almost surely as $t \\to +\\infty$. Let $\\eta<1$ be a small constant, which will be fixed later and will only depend on $\\varepsilon$. For $p$ large enough, we have\n\\begin{equation}\\label{eqn_fq_approximation}\n\\P \\left( \\sigma^{\\infty,1}_{[p,p+C]} \\leq v_0(p) \\mbox{ but } \\left| \\widetilde{r}_j \\left( \\mathcal{E}^{\\infty, 1}_{\\sigma^{\\infty,1}_{[p,p+C]}} \\right) - r_j(\\mathbf{Q}^1) \\right| \\geq \\frac{\\varepsilon}{8} \\right) < \\eta \\frac{\\varepsilon}{20}.\n\\end{equation}\nFrom now on, we take $p$ large enough so that both \\eqref{eqn_sigma_reasonable} and \\eqref{eqn_fq_approximation} hold. By almost sure local convergence and \\eqref{eqn_sigma_reasonable}, for $n$ large enough (where \"large enough\" may depend on $\\varepsilon$ and $p$), we have\n\\[ \\P \\left( \\sigma^{n,1}_{[p,p+C]}, \\sigma^{n,2}_{[p,p+C]} \\leq v_0 \\mbox{ and } \\left| \\mathcal{E}^{n,1}_{\\sigma^{n,1}_{[p,p+C]}} \\right|, \\left| \\mathcal{E}^{n,2}_{\\sigma^{n,2}_{[p,p+C]}} \\right| \\leq v_0 \\right) > 1-\\frac{\\varepsilon}{10}.\\]\nBy the assumption that $|r_j(\\mathbf{Q}^1)-r_j(\\mathbf{Q}^2)|>\\varepsilon$ with probability at least $\\varepsilon$ and by \\eqref{eqn_fq_approximation}, we deduce that\n\\[ \\P \\left(\\begin{array}{c} \\sigma^{n,1}_{[p,p+C]}, \\sigma^{n,2}_{[p,p+C]} \\leq v_0 \\mbox{ and } \\left| \\mathcal{E}^{n,1}_{\\sigma^{n,1}_{[p,p+C]}} \\right|, \\left| \\mathcal{E}^{n,2}_{\\sigma^{n,2}_{[p,p+C]}} \\right| \\leq v_0\\\\ \\mbox{and } \\left| \\widetilde{r}_j \\left( \\mathcal{E}^{\\infty, 1}_{\\sigma^{\\infty,1}_{[p,p+C]}} \\right) - \\widetilde{r}_j \\left( \\mathcal{E}^{\\infty, 2}_{\\sigma^{\\infty,2}_{[p,p+C]}} \\right) \\right| \\geq \\frac{3}{4}\\varepsilon\\end{array} \\right) > \\frac{4}{5} \\varepsilon.\\]\n\n\n\nNote that if this last event occurs but the two regions $\\mathcal{E}^{n,1}_{\\sigma^{n,1}_{[p,p+C]}}$ and $\\mathcal{E}^{n,2}_{\\sigma^{n,2}_{[p,p+C]}}$ have a common face, then the dual graph distance between the two roots is bounded by $2v_0$. However, by Proposition~\\ref{prop_tightness_dloc_univ}, the volume of the ball of radius $2v_0$ around $e^1_n$ is tight as $n \\to +\\infty$, so the probability that this happens goes to $0$ as $n \\to +\\infty$. 
Hence, for $n$ large enough:\n\\begin{equation}\\label{eqn_two_distinct_regions}\n\\P \\left( \\begin{array}{c}\\mathcal{E}^{n,1}_{\\sigma^{n,1}_{[p,p+C]}}, \\mathcal{E}^{n,2}_{\\sigma^{n,2}_{[p,p+C]}} \\mbox{ are well-defined, face-disjoint, have at}\\\\\\mbox{most $v_0$ edges and } \\left| \\widetilde{r}_j \\left( \\mathcal{E}^{\\infty, 1}_{\\sigma^{\\infty,1}_{[p,p+C]}} \\right) - \\widetilde{r}_j \\left( \\mathcal{E}^{\\infty, 2}_{\\sigma^{\\infty,2}_{[p,p+C]}} \\right) \\right| \\geq \\frac{3}{4}\\varepsilon \\end{array} \\right) > \\frac{4}{5} \\varepsilon.\n\\end{equation}\n\nNow assume that this last event occurs and condition on the $\\sigma$-algebra $\\mathcal{F}_{\\sigma}$ generated by the pair $\\left( \\mathcal{E}^{n,1}_{\\sigma^{n,1}_{[p,p+C]}}, \\mathcal{E}^{n,2}_{\\sigma^{n,2}_{[p,p+C]}} \\right)$ of explored regions. Then, let $I_1, I_2 \\in [0,C]$ be such that the perimeters of the two explored regions are $2p+2I_1$ and $2p+2I_2$. Then the complementary map is a uniform map of the $\\left( 2p+2I_1, 2p+2I_2 \\right)$-gon with genus $g_n$ and face degrees given by $\\mathbf{\\widetilde{F}}^n$ as follows. If $F_j^n$ is the number of internal faces of degree $2j$ in $\\mathcal{E}^{n,1}_{\\sigma^{n,1}_{[p,p+C]}} \\cup \\mathcal{E}^{n,2}_{\\sigma^{n,2}_{[p,p+C]}}$, then $\\widetilde{F}_j^n=f_j^n-F_j^n$. We now perform $I_1$ peeling steps according to $\\mathcal{A}$ around $\\mathcal{E}^{n,1}_{\\sigma^{n,1}_{[p,p+C]}}$, followed by $I_2$ peeling steps according to $\\mathcal{A}$ around $\\mathcal{E}^{n,2}_{\\sigma^{n,2}_{[p,p+C]}}$. We call a peeling step \\emph{nice} if it consists of gluing together two boundary edges, which decreases the perimeter by $2$. The number of possible values of the map $M_n \\backslash \\left( \\mathcal{E}^{n,1}_{\\sigma^{n,1}_{[p,p+C]}} \\cup \\mathcal{E}^{n,2}_{\\sigma^{n,2}_{[p,p+C]}} \\right)$ is\n\\[\\beta_{g}^{(p+I_1,p+I_2)}(\\mathbf{\\widetilde{F}}^n).\\]\nOn the other hand, if the $I_1+I_2$ additional peeling steps are all good and the regions around $e_n^1$ and $e_n^2$ are still disjoint after these steps, the number of possible complementary maps is\n\\[\\beta_{g}^{(p,p)} (\\mathbf{\\widetilde{F}}^n). \\]\nIt follows that\n\\[ \\P \\left( \\mbox{the $I_1+I_2$ peeling steps are all nice} | \\mathcal{F}_{\\sigma} \\right) = \\frac{\\beta_{g}^{(p,p)} (\\mathbf{\\widetilde{F}}^n)}{\\beta_{g}^{(p+I_1,p+I_2)}(\\mathbf{\\widetilde{F}}^n)}. \\]\nSince $|\\mathbf{F}^n|$ is bounded by $v_0(p)$, for $n$ large enough (where \"large enough\" may depend on $p$), the Bounded ratio Lemma applies to $\\mathbf{\\widetilde{F}}^n$. Therefore, by the Bounded ratio Lemma (more precisely, by Corollary~\\ref{lem_BRL_boundaries}, item 2), the last ratio is always larger than a constant $\\eta$ depending on $\\varepsilon$. More precisely $\\eta$ may depend on $I_1$ and $I_2$, but $0 \\leq I_1, I_2 \\leq C(\\varepsilon)$, so $(I_1, I_2)$ can take finitely many values given $\\varepsilon$, so $\\eta$ only depends on $\\varepsilon$ (and not on $p$). This is the value of $\\eta$ that we choose for \\eqref{eqn_fq_approximation}. For $i \\in \\{1,2\\}$, we write $\\tau_p^{n,i}=\\sigma_{[p,p+C]}^{n,i}+I_i$. If the last $I_1+I_2$ peeling steps are nice, then after they are performed, both explored regions have perimeter $2p$. 
Therefore, it follows from the last computation and from \\eqref{eqn_two_distinct_regions} that, for $n$ large enough, we have\n\\[ \\P \\left( \\begin{array}{c}\\mathcal{E}^{n,1}_{\\tau_p^{n,1}} \\mbox{ and } \\mathcal{E}^{n,2}_{\\tau_p^{n,2}} \\mbox{ are both face-disjoint, have perimeter $2p$}\\\\\\mbox{and volume $\\leq v_0$, and } \\left| \\widetilde{r}_j \\left( \\mathcal{E}^{\\infty, 1}_{\\sigma^{\\infty,1}_{[p,p+C]}} \\right) - \\widetilde{r}_j \\left( \\mathcal{E}^{\\infty, 2}_{\\sigma^{\\infty,2}_{[p,p+C]}} \\right) \\right| \\geq \\frac{3}{4}\\varepsilon\\end{array} \\right) \\geq \\frac{4}{5} \\varepsilon \\eta.\\]\nFinally, we can use \\eqref{eqn_fq_approximation} to replace back the approximations $\\widetilde{r}_j \\left( \\mathcal{E}^{\\infty, i}_{\\sigma^{\\infty,i}_{[p,p+C]}} \\right)$ by $r_j(\\mathbf{Q}^i)$. We obtain\n\\[ \\P \\left( \\begin{array}{c}\n\\mathcal{E}^{1,n}_{\\tau_p^{n,i}} \\mbox{ and } \\mathcal{E}^{2,n}_{\\tau_p^{n,2}} \\mbox{ are both face-disjoint, have perimeter $2p$ }\\\\ \\mbox{and $\\leq v_0$ edges, and } \\left| r_j(\\mathbf{Q}^1)-r_j(\\mathbf{Q}^2) \\right| \\geq \\frac{1}{2}\\varepsilon\n\\end{array} \\right) \\geq \\frac{3}{5} \\varepsilon \\eta.\\]\nOn this event, we have $(M_n, e_n^1, e_n^2) \\in \\mathscr{S}_{n,p,v_0}$. Therefore, this concludes the proof of the proposition, with $\\delta=\\frac{3}{5} \\eta \\varepsilon$.\n\\end{proof}\n\n\\subsection{The two holes argument: proof of Proposition~\\ref{prop_two_holes_argument} and Corollary~\\ref{corr_main_minus_monotonicity}}\n\\label{subsec_two_holes}\n\nNow that we have Proposition~\\ref{prop_finding_regions_to_cut}, the proof of Proposition~\\ref{prop_two_holes_argument} is basically the same as two holes argument in~\\cite{BL19} (i.e. the proof of Proposition 18). Therefore, we will not write the argument in full details, but only sketch it. We first stress two differences:\n\\begin{itemize}\n\\item\nThe first one is that the involution obtained by (possibly) swapping the two explored parts is now non-identity on a relatively small set of maps (but still on a positive proportion). The only consequence is that in the end, instead of contradicting the almost sure convergence of Proposition~\\ref{prop_q_as_limit} on an event of probability $\\varepsilon$, we will contradict it on an event of probability $\\delta<\\varepsilon$, where $\\delta$ is given by Proposition~\\ref{prop_finding_regions_to_cut}.\n\\item\nThe other difference is that in \\cite{BL19}, the only observable we were using to approximate the Boltzmann weights was the ratio between perimeter and volume, which corresponds to our function $r_{\\infty}$. Here we also need to deal with the functions $r_j$ for $j \\in \\mathbb N^*$. For this, we simply need the observation that, if $q$ is much larger than $p$, the proportion of peeling steps before $\\tau_q$ where we discover a new face of perimeter $2j$ depends almost only on the part of the exploration between $\\tau_p$ and $\\tau_q$.\n\\end{itemize}\n\n\n\\begin{proof}[Sketch of proof of Proposition~\\ref{prop_two_holes_argument}]\nFix $j \\in \\mathbb N^* \\cup \\{ \\infty\\}$. Let $\\varepsilon>0$, and assume\n\\begin{equation}\\label{eqn_q1_ne_q2}\n\\P \\left( |r_j(\\mathbf{Q}^1)-r_j(\\mathbf{Q}^2)|>\\varepsilon \\right)>\\varepsilon.\n\\end{equation}\nLet $\\delta>0$ be given by Proposition~\\ref{prop_finding_regions_to_cut}. Consider $p$ large (depending on $\\varepsilon$) and let $v_0$ be given by Proposition~\\ref{prop_finding_regions_to_cut}. 
We assume that the peeling algorithm $\\mathcal{A}$ that we work with has the property that the edge peeled at time $t$ for $t \\geq \\tau_p$ only depends on $\\mathcal{E}_t^{\\mathcal{A}}(m) \\backslash \\mathcal{E}_{\\tau_p}^{\\mathcal{A}}(m)$ (see~\\cite{BL19} for a more careful description of this property). This is not a problem since Proposition~\\ref{prop_finding_regions_to_cut} is independent of the choice of $\\mathcal{A}$.\n\nWe define an involution $\\Phi_n$ on the set of bi-rooted maps with genus $g_n$ and face degrees $\\mathbf{f}^n$ as follows: if $m \\in \\mathscr{S}_{n,p,v_0}$, then $\\Phi_n(m)$ is obtained from $m$ by swapping the two regions $\\mathcal{E}^{1,n}_{\\tau_p^{1,n}}$ and $\\mathcal{E}^{2,n}_{\\tau_p^{2,n}}$. If $m \\notin \\mathscr{S}_{n,p,v_0}$, then $\\Phi_n(m)=m$. Note that $\\Phi_n(M_n,e^1_n,e^2_n)$ is still uniform on bi-rooted maps with prescribed genus and face degrees. This map rooted at $e_1^n$ converges to a map $\\widehat{M}$, with either\n\\[\\widehat{M}=M^1_{\\mathbf{Q}^1}\\]\nor\n\\[\\mathcal{E}^{\\mathcal{A}}_{\\tau^1_p}(\\widehat{M}) = \\mathcal{E}^1_{\\tau^1_p} \\quad \\mbox{ and } \\quad \\widehat{M} \\backslash \\mathcal{E}^{\\mathcal{A}}_{\\tau^1_p}(\\widehat{M}) = M^2_{\\mathbf{Q}^2} \\backslash \\mathcal{E}^2_{\\tau^2_p}. \\]\n\nMoreover, by Proposition~\\ref{prop_finding_regions_to_cut}, if $p$ has been chosen large enough, then with probability at least $\\delta$, we are in the second case and furthermore $|r_j(\\mathbf{Q}^1)-r_j(\\mathbf{Q}^2)|>\\frac{\\varepsilon}{2}$. Now assume that this last event occurs and that $q \\gg p \\gg 1$. Then we have\n\\begin{equation}\\label{eqn_rtilde_and_r_badcase}\n\\widetilde{r}_j \\left( \\mathcal{E}^{\\mathcal{A}}_{\\tau^1_p}(\\widehat{M}) \\right) \\approx r_j(\\mathbf{Q}^1) \\quad \\mbox{ and } \\quad \\widetilde{r}_j \\left( \\mathcal{E}^{\\mathcal{A}}_{\\widehat{\\tau}_q}(\\widehat{M}) \\right) \\approx r_j(\\mathbf{Q}^2),\n\\end{equation}\nwhere $\\widehat{\\tau}_q$ is the first step where the perimeter of the explored part of $\\widehat{M}$ is at least $q$. The approximations of \\eqref{eqn_rtilde_and_r_badcase} can be made arbitrarily precise if $p$ and $q$ were chosen large enough, so for $p$ large enough and $q$ large enough (depending on $p$), we have\n\\begin{equation}\\label{eqn_rj_different}\n\\P \\left( \\left| \\widetilde{r}_j \\left( \\mathcal{E}^{\\mathcal{A}}_{\\tau^1_p}(\\widehat{M}) \\right) - \\widetilde{r}_j \\left( \\mathcal{E}^{\\mathcal{A}}_{\\widehat{\\tau}_q}(\\widehat{M}) \\right) \\right| > \\frac{\\varepsilon}{4} \\right) \\geq \\delta.\n\\end{equation}\nOn the other hand $\\widehat{M}$ is a local limit of finite uniform maps, so by Theorem~\\ref{thm_main_more_general} it has to be a mixture of Boltzmann infinite planar maps. But then~\\eqref{eqn_rj_different} contradicts the almost sure convergence of Proposition~\\ref{prop_q_as_limit}, so~\\eqref{eqn_q1_ne_q2} cannot be true. Therefore, we must have $r_j(\\mathbf{Q}^1)=r_j(\\mathbf{Q}^2)$ a.s.. 
Since this is true for all $j \\in \\mathbb N^* \\cup \\{\\infty\\}$, by the last point of Proposition~\\ref{prop_q_as_limit}, we have $\\mathbf{Q}^1=\\mathbf{Q}^2$, which concludes the proof.\n\\end{proof}\n\nThe passage from Proposition~\\ref{prop_two_holes_argument} to Corollary~\\ref{corr_main_minus_monotonicity} does also not require any new idea compared to~\\cite{BL19}, so we do not write it down completely.\n\n\\begin{proof}[Sketch of the proof of Corollary~\\ref{corr_main_minus_monotonicity}]\nThe proof is basically the same as the end of the proof of the main theorem in \\cite{BL19}. The only difference is that we could not prove directly that $d(\\mathbf{q})$ and $\\left( a_j(\\mathbf{q}) \\right)_{j \\geq 1}$ are sufficient to characterize the weight sequence $\\mathbf{q}$, so the result that we obtain is only Corollary~\\ref{corr_main_minus_monotonicity} and not Theorem~\\ref{univ_main_thm}.\n\nMore precisely, by the Euler formula, any map with genus $g_n$ and face degrees $\\mathbf{f}^n$ has exactly $|\\mathbf{f}^n|$ edges and $|\\mathbf{f}^n|-\\sum_{j \\geq 1} f_j^n +2-2g_n$ vertices, so, by invariance of $M_n$ under uniform rerooting, we have\n\\[ \\mathbb E \\left[ \\frac{1}{\\mathrm{deg}_{M_n}(\\rho)} \\right] = \\frac{|\\mathbf{f}^n|-\\sum_{j \\geq 1} f_j^n +2-2g_n}{2|\\mathbf{f}^n|} \\xrightarrow[n \\to +\\infty]{} \\frac{1}{2} \\left( 1-2\\theta - \\sum_j \\alpha_j \\right). \\]\nBy the exact same argument as in \\cite{BL19}, we deduce from Proposition~\\ref{prop_two_holes_argument} that if $M_n \\to \\mathbb M_{\\mathbf{Q}}$, then $d(\\mathbf{Q})=\\frac{1}{2} \\left( 1-2\\theta - \\sum_j \\alpha_j \\right)$ a.s.. Similarly, by invariance under rerooting, for all $j \\geq 1$, we have\n\\[ \\P \\left( \\mbox{the root face of $M_n$ has degree $2j$} \\right) = \\frac{2j f_j^n}{2n} \\xrightarrow[n \\to +\\infty]{} j \\alpha_j. \\]\nBy the same argument as for the mean vertex degree, we obtain $a_j(\\mathbf{Q})=\\alpha_j$ a.s..\n\\end{proof}\n\n\\subsection{Monotonicity of the mean inverse degree}\n\\label{subsec_last_step}\n\nTo conclude the proof of the main theorem, given Corollary~\\ref{corr_main_minus_monotonicity}, it is enough to show that if $\\sum_{j \\geq 1} j^2 a_j(\\mathbf{q}) <+\\infty$, the weight sequence $\\mathbf{q}$ is completely determined by $\\left( a_j(\\mathbf{q}) \\right)_{j \\geq 1}$ and $d(\\mathbf{q})$. For all this subsection, we fix a sequence $(\\alpha_j)_{j \\geq 1}$ such that $\\sum_j j \\alpha_j =1$ and $\\sum_j j^2 \\alpha_j <+\\infty$ and $\\alpha_1<1$. We recall from Proposition~\\ref{prop_third_parametrization} that the weight sequences $\\mathbf{q}$ such that $a_j(\\mathbf{q})=\\alpha_j$ for all $j \\geq 1$ form a one-parameter family $\\left( \\mathbf{q}^{(\\omega)} \\right)_{\\omega \\geq 1}$ given by\n\\[ q_j^{(\\omega)}=\\frac{j \\alpha_j}{\\omega^{j-1}h_j(\\omega)} c_{\\mathbf{q}^{(\\omega)}}^{-(j-1)}, \\quad \\mbox{where} \\quad c_{\\mathbf{q}^{(\\omega)}} = \\frac{4}{1-\\sum_{i \\geq 1} \\frac{1}{4^{i-1}} \\binom{2i-1}{i-1} \\frac{i \\alpha_i}{\\omega^{i-1} h_i(\\omega)}}. 
\\]\nTo prove Theorem~\\ref{univ_main_thm}, it is sufficient to prove the following.\n\n\\begin{prop}\\label{prop_monotonicity_deg}\nUnder the assumption $\\sum_{j \\geq 1} j^2 \\alpha_j<+\\infty$, the function $\\omega \\to d(\\mathbf{q}^{(\\omega)})$ is strictly decreasing.\n\\end{prop}\n\nSince we were not able to establish this result by a direct argument, we will prove it using Corollary~\\ref{corr_main_minus_monotonicity}.\n\n\\subsubsection{Basic properties of the mean inverse degree function}\n\nBefore moving on to the core of the argument, we start with some basic properties of the function $\\omega \\to d(\\mathbf{q}^{(\\omega)})$.\n\n\\begin{lem}\\label{lem_degreefunction_basic}\n\\begin{itemize}\n\\item[$\\bullet$]\nThe function $\\omega \\to d(\\mathbf{q}^{(\\omega)})$ is continuous on $[1,+\\infty)$ and analytic on $(1,+\\infty)$.\n\\item[$\\bullet$]\nWe have $d(\\mathbf{q}^{(\\omega)})>0$ for all $\\omega$ and $\\lim_{\\omega \\to +\\infty} d(\\mathbf{q}^{(\\omega)}) = 0$.\n\\item[$\\bullet$]\nWe have $d(\\mathbf{q}^{(\\omega)}) \\leq \\frac{1}{2} \\left( 1-\\sum_{j \\geq 1} \\alpha_j \\right)$ for all $\\omega \\geq 1$, with equality if and only if $\\omega=1$.\n\\end{itemize}\n\\end{lem}\n\n\\begin{proof}\nThe proof of the analyticity and continuity on $(1,+\\infty)$ is a bit long and is deferred to Appendix~\\ref{subsec_analyticity}. The third item will follow from results of Angel, Hutchcroft, Nachmias and Ray~\\cite{AHNR16}. The other properties are quite easy.\n\nWe start with the continuity statement in the first item. The analyticity proved in Appendix~\\ref{subsec_analyticity} implies continuity on $(1,+\\infty)$, so it is sufficient to prove the continuity at $\\omega=1$. By the monotone convergence theorem, the function $\\omega \\to c_{\\mathbf{q}^{(\\omega)}}$ is continuous at $\\omega=1$, so $q_j^{(\\omega)}$ is continuous at $\\omega=1$ for all $j$. Therefore, for every finite map $m$ with one hole, we have\n\\[ \\P \\left( m \\subset \\mathbb M_{\\mathbf{q}^{(\\omega)}} \\right) \\xrightarrow[\\omega \\to 1]{} \\P \\left( m \\subset \\mathbb M_{\\mathbf{q}^{(1)}} \\right), \\]\nso $\\mathbb M_{\\mathbf{q}^{(\\omega)}} \\to \\mathbb M_{\\mathbf{q}^{(1)}}$ in distribution for the local topology. Since the inverse degree of the root vertex is bounded and continuous for the local topology, the function $\\omega \\to d(\\mathbf{q}^{(\\omega)})$ is continuous at $1$.\n\nWe now prove the second item: $d(\\mathbf{q}^{(\\omega)})>0$ is immediate by finiteness of vertex degrees and $d(\\mathbf{q}^{(\\omega)}) \\to 0$ is equivalent to proving $\\mathrm{deg}_{\\mathbb M_{\\mathbf{q}^{(\\omega)}}}(\\rho) \\to +\\infty$ in probability when $\\omega \\to +\\infty$. For this, we notice (see~\\eqref{eqn_omega_infinite} above) that when $\\omega \\to +\\infty$, we have $h_{i}(\\omega) \\to 1$ for all $i \\geq 1$ and\n\\[\\widetilde{\\nu}_{\\mathbf{q}^{(\\omega)}}(i) \\xrightarrow[\\omega \\to +\\infty]{} \\begin{cases}\n0 \\mbox{ if $i \\leq -1$,}\\\\\n(i+1) \\alpha_{i+1} \\mbox{ if $i \\geq 0$}.\n\\end{cases}\n\\]\nIn other words, the probability of any peeling step swallowing at least one boundary vertex goes to $0$ when $\\omega \\to +\\infty$. Therefore, if we perform a peeling exploration where we peel the edge on the right of $\\rho$ whenever it is possible, the probability of completing the exploration of the root in fewer than $k$ steps goes to $0$ for all $k$. It follows that the root degree goes to $+\\infty$ in probability.\n\nFinally, we move on to the third item. 
Since $\\mathbb M_{\\mathbf{q}}$ is stationary, if we denote by $\\mathbb M_{\\mathbf{q}}^{\\mathrm{uni}}$ a map with the law of $\\mathbb M_{\\mathbf{q}}$ biased by the inverse of the root vertex degree, then $\\mathbb M_{\\mathbf{q}}^{\\mathrm{uni}}$ is unimodular. A simple computation shows that $d(\\mathbf{q}) \\leq \\frac{1}{2} \\left( 1-\\sum_{j \\geq 1} \\alpha_j \\right)$ is equivalent to $\\mathbb E \\left[ \\kappa_{\\mathbb M_{\\mathbf{q}}^{\\mathrm{uni}}}(\\rho) \\right] \\geq 0$ where, if $v$ is a vertex of a map $m$:\n\\[ \\kappa_m(v)=2\\pi-\\sum_{f} \\frac{\\mathrm{deg}(f)-2}{\\mathrm{deg}(f)}\\pi,\\]\nand the sum is over all faces that are incident to $v$ in $m$, counted with multiplicity.\nMoreover, we have equality if and only if $\\mathbb E[\\kappa_{\\mathbb M_{\\mathbf{q}}^{\\mathrm{uni}}}(\\rho)] = 0$. The fact that $\\mathbb E[\\kappa_{\\mathbb M_{\\mathbf{q}}^{\\mathrm{uni}}}(\\rho)] \\geq 0$ is then a consequence of \\cite[Theorem 1]{AHNR16}. Moreover, \\cite{AHNR16} shows the equivalence between 17 different definitions of hyperbolicity. In particular, we have $\\mathbb E[\\kappa_{\\mathbb M_{\\mathbf{q}}^{\\mathrm{uni}}}(\\rho)] > 0$ (Definition 1 in \\cite{AHNR16}) if and only if $p_c<p_u$ for bond percolation on $\\mathbb M_{\\mathbf{q}}$, which is equivalent to $\\mathbf{q}$ being supercritical (i.e. $\\omega>1$) by \\cite[Theorem 12.9]{C-StFlour}\\footnote{More precisely \\cite[Theorem 12.9]{C-StFlour} is about half-plane supercritical maps. Here is a way to extend it to full-plane maps: there is a percolation regime on the half-plane version of $\\mathbb M_{\\mathbf{q}}$ such that with positive probability, there are infinitely many infinite clusters. For topological reasons, at most two of them intersect the boundary infinitely many times. Hence, by changing the colour of finitely many edges, with positive probability there are two infinite clusters that do not touch the boundary. Since there is a coupling in which the half-plane version of $\\mathbb M_{\\mathbf{q}}$ is included in the full-plane version, we have with positive probability two disjoint infinite clusters in $\\mathbb M_{\\mathbf{q}}$ in a certain bond percolation regime, so $p_c<p_u$ for the full-plane map as well.}.\n\\end{proof}\n\n\\subsubsection{The main argument}\\label{subsubsec_final_argument}\n\nFrom now on, we fix $\\omega_1>\\omega_0>1$; our goal is to show that $d(\\mathbf{q}^{(\\omega_0)}) \\geq d(\\mathbf{q}^{(\\omega_1)})$. Let $\\varepsilon>0$ be small enough to satisfy\n\\begin{equation}\\label{eqn_choice_eps}\n\\varepsilon < \\min_{\\omega \\geq \\min(\\omega_0, \\omega_1)} \\left( d(\\mathbf{q}^{(1)}) - d(\\mathbf{q}^{(\\omega)}) \\right) \\quad \\mbox{and} \\quad \\varepsilon < \\min_{1 \\leq \\omega \\leq \\max(\\omega_0, \\omega_1)} d(\\mathbf{q}^{(\\omega)}).\n\\end{equation}\nNote that the existence of such an $\\varepsilon$ is guaranteed by the second and third items of Lemma~\\ref{lem_degreefunction_basic}. 
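To fix ideas (this particular case plays no role in what follows), consider the case of quadrangulations, $\\alpha_2=\\frac{1}{2}$ and $\\alpha_j=0$ for $j \\neq 2$, which satisfies $\\sum_j j \\alpha_j=1$, $\\sum_j j^2 \\alpha_j<+\\infty$ and $\\alpha_1<1$. Then the third item of Lemma~\\ref{lem_degreefunction_basic} gives\n\\[ d(\\mathbf{q}^{(1)})=\\frac{1}{2} \\left( 1-\\sum_{j \\geq 1} \\alpha_j \\right)=\\frac{1}{4}, \\]\nso the second constraint in~\\eqref{eqn_choice_eps} already forces any admissible $\\varepsilon$ to be smaller than $\\frac{1}{4}$, and the first constraint shrinks it further when $\\min(\\omega_0,\\omega_1)$ is close to $1$. 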
For $g \\geq 0$, we write\n\\[ k_{\\min}(g) = \\frac{2 \\sum_{j \\geq 1} \\alpha_j}{1-\\varepsilon-\\sum_{j \\geq 1} \\alpha_j} g \\quad \\mbox{and} \\quad k_{\\max}(g)=\\frac{1}{\\varepsilon} \\left( \\sum_{j \\geq 1} \\alpha_j \\right) g.\\]\nSince $\\varepsilon$ will be fixed until the end, we omit the dependence in $\\varepsilon$ in the notation.\nThese values were chosen so that a map with face degrees $\\mathbf{F}^{k_{\\min}(g)}$ and genus $g$ has average degree of order $\\frac{1}{\\varepsilon}$ (in particular such a map exists), whereas in a map with face degrees $\\mathbf{F}^{k_{\\max}(g)}$ and genus $g$ the genus is about $\\varepsilon$ times the size.\n\nWe now set, for $t \\geq 1$ and $g \\geq 0$:\n\\begin{align}\\label{eqn_defn_k0tg}\nk_0^t(g) &= \\min \\left\\{ k \\in \\left[ k_{\\min}(g), k_{\\max}(g) \\right] \\big| \\mathbb E \\left[ \\left( \\Omega^t_{k,g} \\right)^{-1} \\right] \\geq \\omega_0^{-1} \\right\\},\\\\\nk_1^t(g) &= \\min \\left\\{ k \\in \\left[ k_{\\min}(g), k_{\\max}(g) \\right] \\big| \\mathbb E \\left[ \\left( \\Omega^t_{k,g} \\right)^{-1} \\right] \\geq \\omega_1^{-1} \\right\\}.\\nonumber\n\\end{align}\nThe only reason why we use $\\omega_0^{-1}$ instead of $\\omega_0$ in the definition is to have a bounded quantity in the expectation, and pass easily from convergences in distribution to convergences of the expectation later. We first prove that this definition indeed makes sense and that the map $M_{\\mathbf{F}^{k_0^t(g)}, g}$ is well-defined with high probability.\n\n\\begin{lem}\\label{lem_k0_tg_reasonable}\nThere is $t_0$ depending only on $\\omega_0, \\omega_1$ such that for all $t \\geq t_0$:\n\\begin{enumerate}\n\\item\nfor $g$ large enough the number $k_0^t(g)$ is well defined;\n\\item\nfor $g$ large enough we have $k_0^t(g)>k_{\\min}(g)$;\n\\item\nwith probability $1-o(1)$ as $g \\to +\\infty$, we have $v \\left( \\mathbf{F}^{k_0^t(g)}, g \\right) \\geq \\frac{\\varepsilon}{2} \\left| \\mathbf{F}^{k_0^t(g)}\\right|$, and the same is true with $k_0^t(g)-1$ instead of $k_0^t(g)$.\n\\end{enumerate}\nMoreover, the same is true if we replace $k_0^t(g)$ with $k_1^t(g)$.\n\\end{lem}\n\n\\begin{proof}\nWe first recall that\n\\[v \\left( \\mathbf{F}^k, g \\right) = 2-2g+\\sum_{j \\geq 1} (j-1) F^k_j.\\]\nMoreover, by the law of large numbers and the definition of $\\mathbf{F}^k$, we have\n\\[\\frac{1}{k} \\sum_{j \\geq 1} (j-1) F^k_j \\xrightarrow[k \\to +\\infty]{a.s.} \\frac{1}{\\sum_{i \\geq 1} \\alpha_i} \\sum_{j \\geq 1} (j-1) \\alpha_j \\quad \\mbox{and} \\quad \\frac{1}{k} \\left| \\mathbf{F}^k \\right| \\xrightarrow[k \\to +\\infty]{a.s.} \\frac{1}{\\sum_{i \\geq 1} \\alpha_i}. \\]\nFrom here, by the choice of $k_{\\min}(g)$, it follows easily that almost surely, for $g$ large enough, we have $v \\left( \\mathbf{F}^k, g \\right) \\geq \\frac{\\varepsilon}{2} \\left| \\mathbf{F}^k \\right|$ for all $k \\geq k_{\\min}(g)$. In particular, the third item of the Lemma will follow from the first and the second.\n\nWe now prove the first item. We need to prove that the minimum in the definition of $k_0^t(g)$ is over a nonempty set, so it is enough to prove that, if $t$ is larger than some $t_0$, we have\n\\begin{equation}\\label{eqn_omega_kmax}\n\\mathbb E \\left[ \\left( \\Omega^t_{k_{\\max}(g), g} \\right)^{-1} \\right]> \\omega_0^{-1}\n\\end{equation}\nfor $g$ large enough. 
We know that $v \\left( \\mathbf{F}^k, g \\right) \\geq \\frac{\\varepsilon}{2} \\left| \\mathbf{F}^k \\right|$ for $g$ large enough, so by Theorem~\\ref{thm_main_more_general} and Corollary~\\ref{corr_main_minus_monotonicity}, up to extracting a subsequence in $g$, we have the local convergence\n\\begin{equation}\\label{eqn_subseqlimit_kmax}\nM_{\\mathbf{F}^{k_{\\max}(g)}, g} \\xrightarrow[g \\to +\\infty]{(d)} \\mathbb M_{\\mathbf{q}^{(\\Omega_{\\max})}},\n\\end{equation}\nwhere $\\Omega_{\\max}$ is a random variable on $(1,+\\infty)$. On the other hand, by the law of large numbers and the definition of $k_{\\max}(g)$, we have\n\\[ \\frac{1}{g} v \\left( \\mathbf{F}^{k_{\\max}(g)}, g \\right) \\xrightarrow[g \\to +\\infty]{a.s.} -2+ \\frac{1}{\\varepsilon} \\sum_{j \\geq 1} (j-1)\\alpha_j \\quad \\mbox{and} \\quad \\frac{1}{g}\\left| \\mathbf{F}^{k_{\\max}(g)}\\right| \\xrightarrow[g \\to +\\infty]{a.s.} \\frac{1}{\\varepsilon}.\\]\nTherefore, the mean inverse degree in $M_{\\mathbf{F}^{k_{\\max}(g)}, g}$ tends to $-\\varepsilon+\\frac{1}{2} \\left( 1-\\sum_{j} \\alpha_j \\right) = d \\left( \\mathbf{q}^{(1)}\\right)-\\varepsilon$. By Corollary~\\ref{corr_main_minus_monotonicity}, we must have\n\\begin{equation}\\label{eqn_degree_omegamax}\nd \\left( \\mathbf{q}^{(\\Omega_{\\max})}\\right) = d \\left( \\mathbf{q}^{(1)}\\right)-\\varepsilon\n\\end{equation}\nalmost surely. By the first inequality in our choice~\\eqref{eqn_choice_eps} of $\\varepsilon$, this implies $\\Omega_{\\max}<\\omega_0$ a.s., so $\\mathbb E \\left[ \\Omega_{\\max}^{-1} \\right] > \\omega_0^{-1}$.\n\nOn the other hand, let $t \\geq 1$. Since the explored map at time $t$ of a peeling exploration is a local function, the convergence~\\eqref{eqn_subseqlimit_kmax} implies\n\\begin{equation}\\label{eqn_convergence_omegamax_t}\n\\Omega^t_{k_{\\max}(g), g} \\xrightarrow[g \\to +\\infty]{(d)} r^{-1} \\left( \\frac{V_t^{(\\Omega_{\\max})} - 2 P_t^{(\\Omega_{\\max})}}{t} \\right).\n\\end{equation}\nMoreover, by the third item and the continuity in Lemma~\\ref{lem_degreefunction_basic}, we also get that $\\Omega_{\\max}$ is supported on a compact subset of $(1,+\\infty)$ depending only on $\\varepsilon$ (and not on the subsequence in $g$ that we are working with). Therefore, by the uniform convergence result of Lemma~\\ref{lem_unif_volume}, the right-hand side of~\\eqref{eqn_convergence_omegamax_t} converges in probability to $\\Omega_{\\max}$ as $t \\to +\\infty$, at a speed independent of the subsequence. Hence, remembering $\\mathbb E \\left[ \\Omega_{\\max}^{-1} \\right] > \\omega_0^{-1}$, there is $t_0$ depending only on $\\varepsilon$ such that for $t \\geq t_0$, we have\n\\[ \\mathbb E \\left[ \\left( r^{-1} \\left( \\frac{V_t^{(\\Omega_{\\max})} - 2 P_t^{(\\Omega_{\\max})}}{t} \\right) \\right)^{-1} \\right] > \\omega_0^{-1}. \\]\nCombined with~\\eqref{eqn_convergence_omegamax_t}, this implies $\\mathbb E \\left[ \\left( \\Omega^t_{k_{\\max}(g), g} \\right)^{-1} \\right] > \\omega_0^{-1}$ for $g$ large enough. We have proved that for $t \\geq t_0$, every subsequence in $g$ contains a subsubsequence along which~\\eqref{eqn_omega_kmax} holds for $g$ large enough, so~\\eqref{eqn_omega_kmax} holds for $g$ large enough. 
This proves the first item of the lemma.\n\nTo prove the second item, we need to show that there is $t'_0$ such that if $t \\geq t'_0$, then\n\\[ \\mathbb E \\left[ \\left( \\Omega^t_{k_{\\min}(g), g} \\right)^{-1} \\right] < \\omega_0^{-1} \\]\nfor $g$ large enough.\nThe proof is very similar to the proof of~\\eqref{eqn_omega_kmax}, so we do not write it out in full detail. The only difference is the computation of the mean inverse degree: this time, if $\\Omega_{\\min}$ plays the role of $\\Omega_{\\max}$ for the first item, using the definition of $k_{\\min}(g)$ we get $d \\left( \\mathbf{q}^{(\\Omega_{\\min})} \\right)=\\frac{\\varepsilon}{2}$ a.s.. Hence, by the second inequality of~\\eqref{eqn_choice_eps}, this implies $\\Omega_{\\min} > \\omega_0$, and the end of the proof is the same.\n\\end{proof}\n\n\\begin{proof}[Proof of Proposition~\\ref{prop_monotonicity_deg}]\nLet $t$ be larger than the $t_0$ from Lemma~\\ref{lem_k0_tg_reasonable}. By the third item of Lemma~\\ref{lem_k0_tg_reasonable} and Theorem~\\ref{thm_main_more_general}, we know that $\\left( M_{\\mathbf{F}^{k_0^t(g)},g} \\right)_{g \\geq 0}$ is tight and any subsequential limit is of the form $\\mathbb M_{\\mathbf{Q}}$. Moreover, by the law of large numbers, for all $j \\geq 1$ we have $\\frac{F^k_j}{\\left| \\mathbf{F}^k \\right|} \\to \\alpha_j$ a.s. when $k \\to +\\infty$. Therefore, by Corollary~\\ref{corr_main_minus_monotonicity}, any subsequential limit $\\mathbb M_{\\mathbf{Q}}$ must satisfy $a_j(\\mathbf{Q})=\\alpha_j$ for all $j \\geq 1$, so $\\mathbf{Q}$ is of the form $\\mathbf{q}^{(\\Omega)}$ for some random $\\Omega$. Moreover, the same holds if $k_0^t(g)$ is replaced by $k_0^t(g)-1$, or $k_1^t(g)$, or $k_1^t(g)-1$. Therefore, for all $t \\geq t_0$, we can fix a subsequence $S^t$ (depending on $t$) such that when $g \\to +\\infty$ along $S^t$ the following convergences hold jointly:\n\\begin{align*}\nM_{\\mathbf{F}^{k_0^t(g)},g} & \\xrightarrow[g \\to +\\infty]{(d)} \\mathbb M_{\\mathbf{q}^{(\\Omega_0^t)}}, & M_{\\mathbf{F}^{k_1^t(g)},g} & \\xrightarrow[g \\to +\\infty]{(d)} \\mathbb M_{\\mathbf{q}^{(\\Omega_1^t)}},\\\\\nM_{\\mathbf{F}^{k_0^t(g)-1},g} & \\xrightarrow[g \\to +\\infty]{(d)} \\mathbb M_{\\mathbf{q}^{(\\Omega_0^{t,-})}}, & M_{\\mathbf{F}^{k_1^t(g)-1},g} & \\xrightarrow[g \\to +\\infty]{(d)} \\mathbb M_{\\mathbf{q}^{(\\Omega_1^{t,-})}},\\\\\n\\Omega^t_{k_0^t(g),g} & \\xrightarrow[g \\to +\\infty]{(d)} \\widetilde{\\Omega}^t_0, & \\Omega^t_{k_1^t(g),g} & \\xrightarrow[g \\to +\\infty]{(d)} \\widetilde{\\Omega}^t_1,\\\\\n\\Omega^t_{k_0^t(g)-1,g} & \\xrightarrow[g \\to +\\infty]{(d)} \\widetilde{\\Omega}^{t,-}_0, & \\Omega^t_{k_1^t(g)-1,g} & \\xrightarrow[g \\to +\\infty]{(d)} \\widetilde{\\Omega}^{t,-}_1,\\\\\n\\frac{g}{\\left| \\mathbf{F}^{k_0^t(g)} \\right|} & \\xrightarrow[g \\to +\\infty]{(d)} \\theta_0^t, & \\frac{g}{\\left| \\mathbf{F}^{k_1^t(g)} \\right|} & \\xrightarrow[g \\to +\\infty]{(d)} \\theta_1^t,\n\\end{align*}\nwhere the first four convergences are for the local topology. We highlight that for our purpose it is sufficient to consider \\emph{one} subsequence $S^t$, and that we will not try to understand \\emph{all} subsequential limits. In what follows, we will always consider $g$ in the subsequence $S^t$ without mentioning it explicitly. 
Note that we have $\\Omega_0^t, \\Omega_0^{t,-} \\in [1,+\\infty)$ and $\\widetilde{\\Omega}_0^t, \\widetilde{\\Omega}_0^{t,-} \\in [1,+\\infty]$, since we have no bound a priori on $\\Omega^t_{k_0^t(g),g}$.\nWe also have $\\theta_0^t \\in \\left[ \\varepsilon, \\frac{1}{2} \\left( 1-\\varepsilon-\\sum_j \\alpha_j \\right) \\right]$ by the inequality $k_{\\min}(g) \\leq k_0^t(g) \\leq k_{\\max}(g)$ and the law of large numbers to estimate $\\mathbf{F}^{k_{\\min}(g)}$ and $\\mathbf{F}^{k_{\\max}(g)}$. By Corollary~\\ref{corr_main_minus_monotonicity}, we also know that almost surely\n\\begin{equation}\\label{eqn_omega0_in_compact}\nd \\left( \\mathbf{q}^{(\\Omega_0^t)} \\right) = d \\left( \\mathbf{q}^{(\\Omega_0^{t,-})} \\right) = \\frac{1}{2} \\left( 1-2\\theta_0^t-\\sum_{j \\geq 1} \\alpha_j \\right).\n\\end{equation}\nIn particular $d \\left( \\mathbf{q}^{(\\Omega_0^t)} \\right)$ and $d \\left( \\mathbf{q}^{(\\Omega_0^{t,-})} \\right)$ are bounded away from $0$ and $d(\\mathbf{q}^{(1)})$. By Lemma~\\ref{lem_degreefunction_basic}, this implies that $\\Omega_0^t$ and $\\Omega_0^{t,-}$ take their values in a compact subset of $(1,+\\infty)$ that depends only on $\\varepsilon$. Therefore, there is a subsequence $S$ such that, when $t \\to +\\infty$ along $S$ the following convergences hold jointly: \n\\begin{align*}\n\\Omega_0^t & \\xrightarrow[t \\to +\\infty]{(d)} \\Omega_0, & \\Omega_1^t & \\xrightarrow[t \\to +\\infty]{(d)} \\Omega_1,\\\\\n\\Omega_0^{t,-} & \\xrightarrow[t \\to +\\infty]{(d)} \\Omega_0^{-} , & \\Omega_1^{t,-} & \\xrightarrow[t \\to +\\infty]{(d)} \\Omega_1^{-},\\\\\n\\widetilde{\\Omega}_0^t & \\xrightarrow[t \\to +\\infty]{(d)} \\widetilde{\\Omega}_0, & \\widetilde{\\Omega}_1^t & \\xrightarrow[t \\to +\\infty]{(d)} \\widetilde{\\Omega}_1,\\\\\n\\widetilde{\\Omega}_0^{t,-} & \\xrightarrow[t \\to +\\infty]{(d)} \\widetilde{\\Omega}_0^{-} , & \\widetilde{\\Omega}_1^{t,-} & \\xrightarrow[t \\to +\\infty]{(d)} \\widetilde{\\Omega}_1^{-},\\\\\n\\theta_0^t & \\xrightarrow[t \\to +\\infty]{} \\theta_0, & \\theta_1^t & \\xrightarrow[t \\to +\\infty]{} \\theta_1,\n\\end{align*}\nwhere $\\Omega_0, \\Omega_0^- \\in (1,+\\infty)$ and $\\widetilde{\\Omega}_0, \\widetilde{\\Omega}_0^- \\in [1,+\\infty]$ and $\\theta_0 \\in \\left[ \\varepsilon, \\frac{1}{2} \\left( 1-\\varepsilon-\\sum_j \\alpha_j \\right) \\right]$. From now on, we will always consider $t \\geq t_0$ with $t$ in this subsequence $S$, and we will omit to precise it. 
\n\nThe rough sketch of the argument is to show that\n\\begin{equation}\\label{eqn_cyclic_ineq_omega}\n\\mathbb E \\left[ \\left( \\widetilde{\\Omega}^{-}_0 \\right)^{-1} \\right] \\leq \\omega_0^{-1} \\leq \\mathbb E \\left[ \\left( \\widetilde{\\Omega}_0 \\right)^{-1} \\right] = \\mathbb E \\left[ \\left( \\Omega_0 \\right)^{-1} \\right] \\leq \\mathbb E \\left[ \\left( \\Omega_0^{-} \\right)^{-1} \\right] = \\mathbb E \\left[ \\left( \\widetilde{\\Omega}^{-}_0 \\right)^{-1} \\right],\n\\end{equation}\nand deduce from the equality in the third inequality that $\\Omega_0$ is deterministic and therefore equal to $\\omega_0$.\n\nMore precisely, by definition of $k_0^t(g)$ and since $k_0^t(g)>k_{\\min}(g)$, we have\n\\[ \\mathbb E \\left[ \\left( \\Omega^t_{k_0^t(g)-1} \\right)^{-1} \\right] < \\omega_0^{-1} \\leq \\mathbb E \\left[ \\left( \\Omega^t_{k_0^t(g)} \\right)^{-1} \\right].\\]\nBy letting $g \\to +\\infty$ (along $S^t$) and then $t \\to +\\infty$ (along $S$), this implies\n\\begin{equation}\\label{eqn_ineq_omegatilde}\n\\mathbb E \\left[ \\left( \\widetilde{\\Omega}^{-}_0 \\right)^{-1} \\right] \\leq \\omega_0^{-1} \\leq \\mathbb E \\left[ \\left( \\widetilde{\\Omega}_0 \\right)^{-1} \\right],\n\\end{equation}\nwhich are the first and second inequalities in~\\eqref{eqn_cyclic_ineq_omega}.\n\nOn the other hand, the third inequality will be obtained by the argument sketched in the beginning of this Subsection~\\ref{subsubsec_final_argument}. Let us first look at the relation between $M_{\\mathbf{F}^{k}, g}$ and $M_{\\mathbf{F}^{k-1}, g}$.\nWe recall that $\\mathbf{F}^{k}=\\mathbf{F}^{k-1}+\\mathbf{1}_{J_k}$, where $\\P (J_k=j)=\\frac{\\alpha_j}{\\sum_{i \\geq 1} \\alpha_i}$ for all $j \\geq 1$. If we condition on $\\mathbf{F}^{k-1}$ and $\\mathbf{F}^{k}$, then the law of $M_{\\mathbf{F}^{k-1}, g}$ is the law of $M_{\\mathbf{F}^{k}, g} \\backslash m_{J_k}^0$, conditioned on $m_{J_k}^0 \\subset M_{\\mathbf{F}^{k}, g}$, where $m^0_{J_k}$ is the map with perimeter $2$ of Figure~\\ref{fig_m_0_j}. Therefore, for any map $m$ with one hole, we have\n\\[ \\P \\left( m \\subset M_{\\mathbf{F}^{k-1}, g} | J_k=j \\right) = \\P \\left( m+m_j^0 \\subset M_{\\mathbf{F}^{k}, g} | J_k=j, \\, m_j^0 \\subset M_{\\mathbf{F}^{k}, g} \\right), \\]\nwhere $m+m_{j}^0$ is the map obtained from $m$ by replacing the root edge of $m$ by a copy of $m_j^0$. By summing over $j$, we obtain\n\\[\n\\P \\left( m \\subset M_{\\mathbf{F}^{k-1}, g} \\right) = \\frac{1}{\\sum_{i \\geq 1} \\alpha_i} \\sum_{j \\geq 1} \\alpha_j \\frac{\\P \\left( m+m_j^0 \\subset M_{\\mathbf{F}^{k}, g} \\right)}{\\P \\left( m_j^0 \\subset M_{\\mathbf{F}^{k}, g} \\right)}.\n\\]\nWe now take $k=k_0^t(g)$ and let $g \\to +\\infty$ (along $S^t$) to replace $M_{\\mathbf{F}^{k-1}, g}$ and $M_{\\mathbf{F}^{k}, g}$ by respectively $\\mathbb M_{\\mathbf{q}^{(\\Omega_0^{t,-})}}$ and $\\mathbb M_{\\mathbf{q}^{(\\Omega_0^{t})}}$. We note that $m_j^0+m$ has the same perimeter as $m$ but one more internal face of degree $2j$. 
We obtain, for every finite map $m$ with one hole,\n\\[ \\mathbb E \\left[ C_{|\\partial m|} \\left( \\mathbf{q}^{(\\Omega_0^{t,-})} \\right) \\prod_{f \\in m} q^{(\\Omega_0^{t,-})}_{\\mathrm{deg}(f)\/2} \\right] = \\frac{1}{\\sum_{i \\geq 1} \\alpha_i} \\sum_{j \\geq 1} \\alpha_j \\frac{\\mathbb E \\left[ C_{\\partial m} \\left( \\mathbf{q}^{(\\Omega_0^{t})} \\right) \\times \\prod_{f \\in m} q^{(\\Omega_0^{t})}_{\\mathrm{deg}(f)\/2} \\times q^{(\\Omega_0^{t})}_j \\right]}{ \\mathbb E \\left[ C_1 \\left( \\mathbf{q}^{(\\Omega_0^{t})} \\right) q_j^{(\\Omega_0^{t})} \\right]}. \\]\nThis can be interpreted as a Radon--Nikodym derivative, i.e. the map $\\mathbb M_{\\mathbf{q}^{(\\Omega_0^{t,-})}}$ has the law of $\\mathbb M_{\\mathbf{q}^{(\\Omega_0^{t})}}$ biased by\n\\begin{equation}\\label{eqn_radon_nikodym_omega}\n\\frac{1}{\\sum_{i \\geq 1} \\alpha_i} \\sum_{j \\geq 1} \\alpha_j \\frac{q_j^{(\\Omega_0^{t})}}{\\mathbb E \\left[ q_j^{(\\Omega_0^{t})} \\right]},\n\\end{equation}\nusing the fact that $C_1(\\mathbf{q})=1$. Since $\\Omega$ is a measurable function of the map $\\mathbb M_{\\mathbf{q}^{(\\Omega)}}$ by Proposition~\\ref{prop_q_as_limit}, it follows that $\\Omega_0^{t,-}$ has the law of $\\Omega_0^t$ biased by \\eqref{eqn_radon_nikodym_omega}. In particular, we have\n\\[\n\\mathbb E \\left[ \\left( \\Omega_0^{t,-} \\right)^{-1} \\right] = \\frac{1}{\\sum_{i \\geq 1} \\alpha_i} \\sum_{j \\geq 1} \\alpha_j \\frac{\\mathbb E \\left[ q_j^{(\\Omega_0^t)} \\, \\left( \\Omega_0^t \\right)^{-1} \\right]}{\\mathbb E \\left[ q_j^{(\\Omega_0^t)}\\right]}.\n\\]\nWe can now let $t \\to +\\infty$ (along $S$) to get\n\\begin{equation}\\label{eqn_expectation_omega_inverse}\n\\mathbb E \\left[ \\left( \\Omega_0^{-} \\right)^{-1} \\right] = \\frac{1}{\\sum_{i \\geq 1} \\alpha_i} \\sum_{j \\geq 1} \\alpha_j \\frac{\\mathbb E \\left[ q_j^{(\\Omega_0)} \\, \\left( \\Omega_0 \\right)^{-1} \\right]}{\\mathbb E \\left[ q_j^{(\\Omega_0)}\\right]}.\n\\end{equation}\nBut by Lemma~\\ref{lem_qj_is_monotone}, we have $\\mathbb E \\left[ q_j^{(\\Omega_0)} \\, \\left( \\Omega_0 \\right)^{-1} \\right] \\geq \\mathbb E \\left[ q_j^{(\\Omega_0)} \\right] \\mathbb E \\left[ \\left( \\Omega_0 \\right)^{-1} \\right]$ for all $j \\geq 1$, so the last display implies $\\mathbb E \\left[ \\left( \\Omega_0^{-} \\right)^{-1} \\right] \\geq \\mathbb E \\left[ \\left( \\Omega_0 \\right)^{-1} \\right]$, which is the third inequality of the sketch~\\eqref{eqn_cyclic_ineq_omega}.\n\nWe now move on to the two equalities of~\\eqref{eqn_cyclic_ineq_omega}. For this, we need to argue that $\\widetilde{\\Omega}_0^t$ is a good approximation of $\\Omega_0^t$ for $t$ large. 
By definition of $\\widetilde{\\Omega}_0^t$ and by local convergence, we have (the limits in $g$ are along $S^t$)\n\\begin{align}\\label{eqn_omegatilde_t_as_limit}\n\\mathbb E \\left[ \\left( \\widetilde{\\Omega}_0^t \\right)^{-1} \\right] &= \\lim_{g \\to +\\infty} \\mathbb E \\left[ \\left( \\Omega_{k_0^t(g),g}^t \\right)^{-1} \\right] \\nonumber \\\\\n&= \\lim_{g \\to +\\infty} \\mathbb E \\left[ \\left( r^{-1} \\left( \\frac{ \\left| \\mathcal{E}_t^{\\mathcal{A}} \\left( M_{\\mathbf{F}^{k_0^t(g)},g} \\right) \\right| - 2 \\left| \\partial \\mathcal{E}_t^{\\mathcal{A}} \\left( M_{\\mathbf{F}^{k_0^t(g)},g} \\right) \\right|}{t} \\right) \\right)^{-1} \\right] \\nonumber\n\\\\\n&= \\mathbb E \\left[ \\left( r^{-1} \\left( \\frac{V_t^{(\\Omega_0^t)}-2 P_t^{(\\Omega_0^t)}}{t} \\right) \\right)^{-1} \\right].\n\\end{align}\nWhen $t \\to +\\infty$ (along $S$), the left-hand side of~\\eqref{eqn_omegatilde_t_as_limit} goes to $\\mathbb E \\left[ \\left( \\widetilde{\\Omega}_0 \\right)^{-1} \\right]$. On the other hand, we recall that~\\eqref{eqn_omega0_in_compact} implies that $\\Omega_0^t$ lies in a compact subset of $(1,+\\infty)$ depending only on $\\varepsilon$. Since $\\Omega_0^t \\to \\Omega_0$ along $S$, by Lemma~\\ref{lem_unif_volume} we have the convergence (along $S$)\n\\[ \\frac{V_t^{(\\Omega_0^t)} -2P_t^{(\\Omega_0^t)}}{t} \\xrightarrow[t \\to +\\infty]{(P)} r(\\Omega_0). \\]\nTherefore, when $t \\to +\\infty$ (along $S$), the right-hand side of~\\eqref{eqn_omegatilde_t_as_limit} goes to $\\mathbb E \\left[ \\left( \\Omega_0 \\right)^{-1} \\right]$. This proves the first equality of~\\eqref{eqn_cyclic_ineq_omega}. The second one is proved in the exact same way, using $\\Omega^{t,-}_0$, $\\widetilde{\\Omega}^{t,-}_0$ instead of $\\Omega^{t}_0$, $\\widetilde{\\Omega}^{t}_0$.\n\nWe have therefore proved all of~\\eqref{eqn_cyclic_ineq_omega}, so all the inequalities must be equalities. In particular, \\eqref{eqn_expectation_omega_inverse} becomes\n\\[ \\mathbb E \\left[ \\left( \\Omega_0 \\right)^{-1} \\right] = \\frac{1}{\\sum_{i \\geq 1} \\alpha_i} \\sum_{j \\geq 1} \\alpha_j \\frac{\\mathbb E \\left[ q_j^{(\\Omega_0)} \\, \\left( \\Omega_0 \\right)^{-1} \\right]}{\\mathbb E \\left[ q_j^{(\\Omega_0)}\\right]}. \\]\nHowever, we also know by Lemma~\\ref{lem_qj_is_monotone} that $\\mathbb E \\left[ q_j^{(\\Omega_0)} \\, \\left( \\Omega_0 \\right)^{-1} \\right] \\geq \\mathbb E \\left[ q_j^{(\\Omega_0)} \\right] \\mathbb E \\left[ \\left( \\Omega_0 \\right)^{-1} \\right]$ for all $j$, so for all $j \\geq 1$ we must have the equality\n\\[ \\alpha_j \\mathbb E \\left[ q_j^{(\\Omega_0)} \\left( \\Omega_0 \\right)^{-1} \\right] = \\alpha_j \\mathbb E \\left[ q_j^{(\\Omega_0)} \\right] \\mathbb E \\left[ \\left( \\Omega_0 \\right)^{-1} \\right].\\]\nIn particular, we fix $j \\geq 2$ such that $\\alpha_j>0$ (such a $j$ exists because $\\alpha_1<1$). Then $\\mathbb E \\left[ q_j^{(\\Omega_0)} \\left( \\Omega_0 \\right)^{-1} \\right] = \\mathbb E \\left[ q_j^{(\\Omega_0)} \\right] \\mathbb E \\left[ \\left( \\Omega_0 \\right)^{-1} \\right]$. Since $\\omega \\to \\omega^{-1}$ and $\\omega \\to q_j^{(\\omega)}$ are two decreasing functions (by Lemma~\\ref{lem_qj_is_monotone}), this is only possible if $\\Omega_0$ is deterministic. But then~\\eqref{eqn_cyclic_ineq_omega} yields $\\mathbb E \\left[ \\left( \\Omega_0 \\right)^{-1} \\right] = \\omega_0^{-1}$, so $\\Omega_0=\\omega_0$ a.s..\n\nWe can now finish the proof. 
By the exact same argument as for $\\Omega_0$, we also have $\\Omega_1=\\omega_1$ a.s.. We recall that $\\omega_0<\\omega_1$. By letting $t \\to +\\infty$ (along $S$) in~\\eqref{eqn_omega0_in_compact} and using the continuity result of Lemma~\\ref{lem_degreefunction_basic}, we get\n\\begin{equation}\\label{eqn_fin_omega0_theta0}\nd \\left( \\mathbf{q}^{(\\omega_0)} \\right) = \\frac{1}{2} \\left( 1-2\\theta_0-\\sum_{j \\geq 1} \\alpha_j \\right)\n\\end{equation}\nand similarly\n\\begin{equation}\\label{eqn_fin_omega1_theta1}\nd \\left( \\mathbf{q}^{(\\omega_1)} \\right) = \\frac{1}{2} \\left( 1-2\\theta_1-\\sum_{j \\geq 1} \\alpha_j \\right).\n\\end{equation}\nOn the other hand, by the definition~\\eqref{eqn_defn_k0tg} of $k_0^t(g)$ and $k_1^t(g)$, since $\\omega_0<\\omega_1$, we have\n\\[k_0^t(g) \\geq k_1^t(g)\\]\nfor all $t$ and $g$. Therefore, we have $\\left| \\mathbf{F}^{k_0^t(g)} \\right| \\geq \\left| \\mathbf{F}^{k_1^t(g)} \\right|$. By letting $g \\to +\\infty$ (along $S^t$) we deduce $\\theta_0^t \\leq \\theta_1^t$. Letting $t \\to +\\infty$ (along $S$) we get $\\theta_0 \\leq \\theta_1$. Combining this with~\\eqref{eqn_fin_omega0_theta0} and~\\eqref{eqn_fin_omega1_theta1}, this proves $d \\left( \\mathbf{q}^{(\\omega_0)} \\right) \\geq d \\left( \\mathbf{q}^{(\\omega_1)} \\right)$, so the function $\\omega \\to d \\left( \\mathbf{q}^{(\\omega)} \\right)$ is nonincreasing on $(1,+\\infty)$. Since it is nonconstant (for example by the second item of Lemma~\\ref{lem_degreefunction_basic}) and analytic (first item of Lemma~\\ref{lem_degreefunction_basic}), it is decreasing on $(1,+\\infty)$. Finally, we extend the result to $[1,+\\infty)$ by continuity at $1$ (first item of Lemma~\\ref{lem_degreefunction_basic}).\n\\end{proof}\n\n\\section{Asymptotic enumeration: convergence of the ratio}\\label{sec_univ_asymp}\n\n\\begin{proof}[Proof of Corollary \\ref{prop_cv_ratio}]\nFix $j \\geq 1$, and let $m_j^0$ be the map of Figure~\\ref{fig_m_0_j} with a hole of perimeter $2$. On the one hand, we have\n\\[\\P \\left( m_j^0 \\subset \\mathbb{M}_{\\mathbf{q}} \\right)=C_2(\\mathbf{q})q_j.\\]\nOn the other hand, \n\\[\\P \\left( m_j^0 \\subset M_{\\mathbf{f}^{n},g_n} \\right)=\\frac{\\beta_{g_n}(\\mathbf{f}^{n}-\\mathbf{1}_j)}{\\beta_{g_n}(\\mathbf{f}^{n})}.\\]\nThe last equality is proved by contracting $m_j^0$ in $M_{\\mathbf{f}^{n},g_n}$ into the root edge of a map with face degrees given by $\\mathbf{f}^{n}-\\mathbf{1}_j$. The corollary follows by letting $n \\to +\\infty$.\n\\end{proof}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzcupy b/data_all_eng_slimpj/shuffled/split2/finalzzcupy new file mode 100644 index 0000000000000000000000000000000000000000..5dfe22d9bb159f23dd6594678f0fd97c11b72142 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzcupy @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\setcounter{equation}{0}\n\\citeAPY{Berryman:1990:VCE} found that classical variational principles could be used to obtain information about the\nconductivity inside a body from electrical measurements on the exterior. In this paper our main focus is on using classical\nvariational principles and known bounds on the response of periodic composites to bound\nthe volume fraction of one phase in a two-phase body $\\Omega$ from measurements on the exterior of the body. Of course if one\nknows the mass densities of the two phases, the easiest way to do this is just to weigh the body. 
However this \nmay not always be practical, or the densities of the two phases may be very close. \n\nTwo types of boundary conditions are most natural: what we call special Dirichlet conditions where affine Dirichlet\nconditions are imposed on the boundary of $\\Omega$ (which would render the field inside $\\Omega$ uniform if the body were\nhomogeneous) or what we call special Neumann conditions where Neumann conditions are imposed that would render the field inside $\\Omega$\nuniform if the body were homogeneous. Bounds on the electrical and elastic response of the body to these special boundary conditions were obtained \nby Nemat-Nasser and Hori (\\citeyearNP{Nemat-Nasser:1993:MOP}, \\citeyearNP{Nemat-Nasser:1995:UBO}), and were extended to piezoelectricity\nby \\citeAPY{Hori:1998:UBE}. They called these bounds universal because they did not depend on any assumption about the microgeometry in the body.\nThey obtained both elementary universal bounds based on the classical variational principles, which are reviewed below in section 3, and \nuniversal bounds based on the Hashin-Shtrikman variational principles (Hashin and Shtrikman \\citeyearNP{Hashin:1962:VAT}, \\citeyearNP{Hashin:1963:VAT}).\nThe latter bounds were obtained under the assumption that $\\Omega$ is either an ellipsoid or a parallelopiped, but we will see here that they\ncan easily be improved and generalized to bodies $\\Omega$ of arbitrary shape. The key is to consider an assemblage of copies of $\\Omega$ packed to\nfill all space, and then to use the bounds of \\citeAPY{Huet:1990:AVC} which relate the effective tensor of this composite to the\nresponses of $\\Omega$ under the special boundary conditions. Then existing bounds on the effective tensor [as surveyed in the books\nof \\citeAPY{Nemat-Nasser:1993:MOP}, \\citeAPY{Cherkaev:2000:VMS}, \\citeAPY{Allaire:2002:SOH}, \\citeAPY{Milton:2002:TC}, and \\citeAPY{Tartar:2009:GTH}]\ncan be directly applied to bound the responses of $\\Omega$ under special boundary conditions (see sections 5,6,7, and 8).\nSince these bounds involve the volume fractions of the phases\n(and the moduli of the phases), they can be used in an inverse fashion to bound the volume fraction. As shown by \\citeAPY{Kang:2011:SBV}\nthe volume fraction bounds thus obtained \nfor electrical conductivity generalize those obtained by Capdeboscq and Vogelius (\\citeyearNP{Capdeboscq:2003:OAE}, \\citeyearNP{Capdeboscq:2004:RSR})\nfor the important case when the volume fraction is asymptotically small.\n\n Given the close connection between bounds on effective tensors\nand bounds on the responses of $\\Omega$ under special boundary conditions, a natural question to ask is whether methods that have been used\nto derive bounds on effective tensors could be directly used to derive bounds on the response of $\\Omega$ under more general boundary conditions.\nOne such method is the Hashin-Shtrikman (\\citeyearNP{Hashin:1962:VAT}, \\citeyearNP{Hashin:1963:VAT}) variational method and this led\nNemat-Nasser and Hori to their bounds for ellipsoidal or parallelopipedic $\\Omega$. 
Another particularly successful method is the translation method \n(\\citeAY{Tartar:1979:ECH}; Lurie and Cherkaev \\citeyearNP{Lurie:1982:AEC}, \\citeyearNP{Lurie:1984:EEC}; \\citeAY{Murat:1985:CVH}; \\citeAY{Tartar:1985:EFC}; \\citeAY{Milton:1990:CSP})\nand indeed as shown in a companion paper (\\citeAY{Kang:2011:SBV}) this method yields upper and lower bounds on the volume fraction in a\ntwo-phase body with general boundary conditions for two-dimensional conductivity without making any assumption on the shape of $\\Omega$.\nFor special boundary conditions the bounds thus derived reduce to the ones derived here. \n\nWe also provide (in section 4) some new conductivity bounds which involve the results of just one (flux, voltage) pair\nmeasured at the boundary of $\\Omega$, and which improve upon the elementary bounds of \\citeAPY{Nemat-Nasser:1993:MOP}.\nAgain these new bounds can be used in an inverse fashion to bound the volume fraction. Other volume fraction\nbounds using one measurement were derived by \\citeAPY{Kang:1997:ICP},\n\\citeAPY{Ikehata:1998:SEI}, \\citeAPY{Alessandrini:1998:ICP}, \\citeAPY{Alessandrini:2000:OSE}, and \\citeAPY{Alessandrini:2002:DCE}.\nThese other bounds involve constants which are not easy to determine, making it difficult to make a general comparison with\nour new bounds. \n\nThe various bounds on the volume fraction we have derived are too numerous to summarize in this introduction. \nHowever we want to draw attention to the bounds \\eq{3.12} and \\eq{3.21} which are the natural extension of the\nfamous \\citeAPY{Hashin:1962:VAT} conductivity bounds to this problem. Also of particular note is the bound \\eq{5.18ag},\nwhich is one natural generalization of the bulk modulus bounds of \\citeAPY{Hashin:1963:VAT} and \\citeAPY{Hill:1963:EPR},\nand implies that a bound on the volume fraction can be obtained by simply immersing the body in a water-filled \ncylinder with a piston at one end and measuring the change in water pressure when the piston is displaced by a known\nsmall amount.\n\n\n\\section{The conductivity response tensors with special Dirichlet and special Neumann boundary conditions}\n\\setcounter{equation}{0}\n\n\nIn electrical impedance tomography in a body $\\Omega$ containing two isotropic components with (positive, scalar) \nconductivities $\\sigma_1$ and $\\sigma_2$, the potential $V$ satisfies \n\\begin{equation} \\nabla \\cdot\\sigma\\nabla V=0, \\quad {\\rm where}\\quad \\sigma({\\bf x})=\\chi({\\bf x})\\sigma_1+(1-\\chi({\\bf x}))\\sigma_2,\n\\eeq{1.1}\nand $\\chi({\\bf x})$ is the indicator function of component $1$, taking the value $1$ in component $1$ and $0$ in component\n$2$. Equivalently, in terms of the current field ${\\bf j}({\\bf x})$ and electric field ${\\bf e}({\\bf x})$ we have\n\\begin{equation} \\nabla \\cdot{\\bf j}=0,\\quad{\\bf j}=\\sigma{\\bf e},\\quad {\\bf e}=-\\nabla V. \\eeq{1.2} \nLet us assume the components have been labeled so that $\\sigma_1\\geq\\sigma_2$. We are given a set of Cauchy data, i.e. \nmeasurements of pairs $(V_0,q)$, where $V_0({\\bf x})$ and $q({\\bf x})$ are the boundary values of the voltage $V({\\bf x})$ and\nthe flux $q({\\bf x})=-{\\bf n}\\cdot{\\bf j}({\\bf x})$ at the boundary $\\partial\\Omega$ of $\\Omega$, in which ${\\bf n}({\\bf x})$ is the outwards \nnormal to the boundary. 
From this boundary information we can immediately determine, using integration by parts, \nvolume averages such as\n\\begin{eqnarray} \\langle {\\bf e}\\cdot{\\bf j}\\rangle &= & \\frac{1}{|\\Omega|}\\int_{\\partial\\Omega}-V_0({\\bf j}\\cdot{\\bf n})= \\frac{1}{|\\Omega|}\\int_{\\partial\\Omega}V_0q, \\nonumber \\\\\n \\langle {\\bf e}\\rangle & = & \\frac{1}{|\\Omega|}\\int_{\\partial\\Omega}-V_0{\\bf n}, \\nonumber \\\\\n \\langle {\\bf j}\\rangle & = & \\frac{1}{|\\Omega|}\\int_{\\partial\\Omega}-{\\bf x} q,\n\\eeqa{1.3}\nwhere the angular brackets denote the volume average, i.e.\n\\begin{equation} \\langle g \\rangle=\\frac{1}{|\\Omega|}\\int_{\\Omega} g, \\eeq{1.4}\nfor any quantity $g({\\bf x})$. From such averages our objective is to bound the\nvolume fraction $f_1=\\langle \\chi\\rangle$ of component 1 (and hence also the volume fraction $f_2=1-f_1$ of component\n2).\n\nTo obtain good estimates of the volume fraction $f_1$ it makes physical sense to use\nmeasurements where the fields ${\\bf e}({\\bf x})$ and ${\\bf j}({\\bf x})$ probe well into the interior of $\\Omega$. In this\nconnection two sets of measurements are most natural. We could apply special Dirichlet boundary conditions\n\\begin{equation} V_0=-{\\bf e}_0\\cdot{\\bf x}, \\eeq{1.5}\nand measure ${\\bf j}_0=\\langle{\\bf j}\\rangle$. Here, according to \\eq{1.3}, ${\\bf e}_0$ equals $\\langle{\\bf e}\\rangle$. Since\n${\\bf j}_0$ is linearly related to ${\\bf e}_0$ we can write\n\\begin{equation} {\\bf j}_0=\\bfm\\sigma^D{\\bf e}_0, \\eeq{1.6} \nwhich defines the conductivity tensor $\\bfm\\sigma^D$ ($D$ for Dirichlet). To determine $\\bfm\\sigma^D$ \nin dimension $d=2,3$ it of course suffices to measure ${\\bf j}_0$ for $d$ linearly independent values of ${\\bf e}_0$. \nAlternatively we could apply the special Neumann boundary conditions \n\\begin{equation} q=-{\\bf j}_0\\cdot{\\bf n}, \\eeq{1.7}\nand measure ${\\bf e}_0=\\langle{\\bf e}\\rangle$. Again according to \\eq{1.3}, ${\\bf j}_0=\\langle{\\bf j}\\rangle$. The linear relation\nbetween ${\\bf e}_0$ and ${\\bf j}_0$,\n\\begin{equation} {\\bf e}_0=(\\bfm\\sigma^N)^{-1}{\\bf j}_0 \\eeq{1.8}\ndefines the resistivity tensor $(\\bfm\\sigma^N)^{-1}$ and hence the conductivity tensor $\\bfm\\sigma^N$ ($N$ for Neumann):\nwe will see later that $(\\bfm\\sigma^N)^{-1}$ is invertible. To determine $\\bfm\\sigma^N$ it suffices to measure ${\\bf e}_0$ \nfor $d$ linearly independent values of ${\\bf j}_0$. With either of these two sorts of boundary conditions (but \nnot in general) \\citeAPY{Hill:1963:EPR} has shown that\n\\begin{equation} \\langle {\\bf e}\\cdot{\\bf j}\\rangle=\\langle{\\bf e}\\rangle\\cdot\\langle{\\bf j}\\rangle, \\eeq{1.9}\nas follows by substituting \\eq{1.5} or \\eq{1.7} in the first of equations \\eq{1.3}.\nUsing this relationship, and its obvious generalizations, it is easy to check that both $\\bfm\\sigma^D$ and \n$\\bfm\\sigma^N$ are self-adjoint. Thus if ${\\bf e}'({\\bf x})$ and ${\\bf j}'({\\bf x})$ denote the electric and current fields\nassociated with the boundary conditions \\eq{1.5}, with ${\\bf e}_0$ replaced by ${\\bf e}_0'$, while keeping\nthe same conductivity $\\sigma({\\bf x})$ then\n\\begin{equation} {\\bf e}_0'\\cdot\\bfm\\sigma^D{\\bf e}_0=\\langle{\\bf e}'\\cdot{\\bf j}\\rangle=\\langle{\\bf e}'\\cdot\\sigma{\\bf e}\\rangle\n=\\langle{\\bf e}\\cdot{\\bf j}'\\rangle={\\bf e}_0\\cdot\\bfm\\sigma^D{\\bf e}_0',\n\\eeq{1.10}\nwhich implies $\\bfm\\sigma^D$ is self-adjoint. 
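To spell out the steps in \\eq{1.10}: the first and last equalities use the generalization of \\eq{1.9} to a pair of solutions sharing the same conductivity $\\sigma({\\bf x})$ but having different affine boundary potentials, namely $\\langle{\\bf e}'\\cdot{\\bf j}\\rangle=\\langle{\\bf e}'\\rangle\\cdot\\langle{\\bf j}\\rangle={\\bf e}_0'\\cdot\\bfm\\sigma^D{\\bf e}_0$ and $\\langle{\\bf e}\\cdot{\\bf j}'\\rangle=\\langle{\\bf e}\\rangle\\cdot\\langle{\\bf j}'\\rangle={\\bf e}_0\\cdot\\bfm\\sigma^D{\\bf e}_0'$, while the middle equality holds simply because $\\sigma({\\bf x})$ is a scalar, so that ${\\bf e}'\\cdot\\sigma{\\bf e}={\\bf e}\\cdot\\sigma{\\bf e}'$ pointwise. 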
By a similar argument $\\bfm\\sigma^N$ is self-adjoint.\n\n\n\n\\section{Known elementary bounds}\n\\setcounter{equation}{0}\n\nThis section reviews the elementary bounds on $\\bfm\\sigma^D$ and $\\bfm\\sigma^N$ obtained by\n\\citeAPY{Nemat-Nasser:1993:MOP} and by Willis\nin a 1989 private communication to Nemat-Nasser and Hori. Their implications for bounding the volume fraction\nwill be studied. We will make use of two classical variational principles: the Dirichlet variational principle that\n\\begin{equation} \\min_{\\matrix{{\\underline{{\\bf e}}}({\\bf x})=-\\nabla{\\underline{V}}({\\bf x}) \n\\cr {\\underline{V}}({\\bf x})=V_0({\\bf x})~{\\rm on}~\\partial\\Omega}}\\int_{\\Omega}\\underline{{\\bf e}}\\cdot\\sigma\\underline{{\\bf e}}\n=\\int_{\\partial\\Omega}V_0q,\n\\eeq{2.1}\nwhich is attained when $\\underline{V}({\\bf x})=V({\\bf x})$, and the Neumann variational principle that\n\\begin{equation} \\min_{\\matrix{{\\underline{{\\bf j}}}({\\bf x})\\cr \\nabla \\cdot{\\underline{{\\bf j}}}({\\bf x})=0 \\cr \n {\\bf n}\\cdot{\\underline{{\\bf j}}}({\\bf x})=-q({\\bf x})~{\\rm on}~\\partial\\Omega}}\\int_{\\Omega}\\underline{{\\bf j}}\\cdot\\sigma^{-1}\\underline{{\\bf j}}\n=\\int_{\\partial\\Omega}V_0q,\n\\eeq{2.2}\nwhich is attained when $\\underline{{\\bf j}}({\\bf x})={\\bf j}({\\bf x})$. With the special Dirichlet boundary conditions\n\\eq{1.5} the Dirichlet variational principle implies \n\\begin{equation} \\min_{\\matrix{{\\underline{{\\bf e}}}({\\bf x})=-\\nabla{\\underline{V}}({\\bf x}) \n\\cr {\\underline{V}}({\\bf x})=-{\\bf e}_0\\cdot{\\bf x}~{\\rm on}~\\partial\\Omega}}\\langle\\underline{{\\bf e}}\\cdot\\sigma\\underline{{\\bf e}}\\rangle\n={\\bf e}_0\\cdot\\bfm\\sigma^D{\\bf e}_0.\n\\eeq{2.3}\nTaking a trial potential $\\underline{V}=-{\\bf e}_0\\cdot{\\bf x}$ produces the elementary upper bound on ${\\bf e}_0\\cdot\\bfm\\sigma^D{\\bf e}_0$\n\\begin{equation} {\\bf e}_0\\cdot\\bfm\\sigma^D{\\bf e}_0\\leq \\langle\\sigma\\rangle{\\bf e}_0\\cdot{\\bf e}_0,\n\\eeq{2.4}\ngiven by \\citeAPY{Nemat-Nasser:1993:MOP}.\nTo obtain a lower bound observe, following a standard argument, that the left hand side of \\eq{2.3} is surely\ndecreased if we take the minimum over a larger class of trial fields. Since the constraints on \n${\\underline{{\\bf e}}}({\\bf x})$ imply $\\langle{\\underline{{\\bf e}}}\\rangle={\\bf e}_0$, let us replace them by this weaker constraint to \nobtain the inequality\n\\begin{equation} {\\bf e}_0\\cdot\\bfm\\sigma^D{\\bf e}_0\\geq \\min_{\\matrix{{\\underline{{\\bf e}}}({\\bf x})\\cr\\langle{\\underline{{\\bf e}}}\\rangle={\\bf e}_0}}\n\\langle\\underline{{\\bf e}}\\cdot\\sigma\\underline{{\\bf e}}\\rangle,\n\\eeq{2.5}\nwhere the minimum is now over fields ${\\underline{{\\bf e}}}$ which are not necessarily curl-free. 
Using Lagrange\nmultipliers one finds that the minimum is attained when ${\\underline{{\\bf e}}}=\\sigma^{-1}\\langle\\sigma^{-1}\\rangle^{-1}{\\bf e}_0$\nand so we obtain the lower bound \n\\begin{equation} {\\bf e}_0\\cdot\\bfm\\sigma^D{\\bf e}_0\\geq \\langle\\sigma^{-1}\\rangle^{-1}{\\bf e}_0\\cdot{\\bf e}_0\n\\eeq{2.6}\nof \\citeAPY{Nemat-Nasser:1993:MOP}.\nTaken together, \\eq{2.4} and \\eq{2.6} imply the lower and upper bounds\n\\begin{equation} \\left(\\frac{{\\bf e}_0\\cdot\\bfm\\sigma^D{\\bf e}_0}{{\\bf e}_0\\cdot{\\bf e}_0}-\\sigma_2\\right)\/(\\sigma_1-\\sigma_2)\n\\leq f_1 \\leq \\left(\\sigma_2^{-1}-\\frac{{\\bf e}_0\\cdot{\\bf e}_0}{{\\bf e}_0\\cdot\\bfm\\sigma^D{\\bf e}_0}\\right)\/(\\sigma_2^{-1}-\\sigma_1^{-1})\n\\eeq{2.7}\non the volume fraction $f_1$. These bounds give useful information even if we know ${\\bf j}_0=\\bfm\\sigma^D{\\bf e}_0$\nfor only one value of ${\\bf e}_0$, i.e. if we only take one measurement. These bounds \\eq{2.7} are sharp in \nthe sense that the lower bound is approached arbitrarily closely if $\\Omega$ is filled with a periodic laminate\nof components 1 and 2, oriented with the normal to the layers orthogonal to ${\\bf e}_0$ and we let the period\nlength go to zero, while the upper bound is approached arbitrarily closely for the same geometry, but \noriented with the normal to the layers parallel to ${\\bf e}_0$. If the full tensor $\\bfm\\sigma^D$ is known,\nfrom $d=2,3$ measurements of pairs $({\\bf e}_0,{\\bf j}_0)$ \nthen we can take the intersection of the bounds \\eq{2.7} as ${\\bf e}_0$ is varied, and so obtain \n\\begin{equation} (\\lambda^D_+-\\sigma_2)\/(\\sigma_1-\\sigma_2)\n\\leq f_1 \\leq (1\/\\sigma_2-1\/\\lambda^D_-)\/(\\sigma_2^{-1}-\\sigma_1^{-1}),\n\\eeq{2.8}\nwhere $\\lambda^D_+$ and $\\lambda^D_-$ are the maximum and minimum eigenvalues of $\\bfm\\sigma^D$. However\nwe will see in the next section that an additional and typically sharper upper bound on $f_1$ can be obtained.\n\nWith the special Neumann boundary conditions \\eq{1.7} the variational principle \\eq{2.2} implies \n\\begin{equation} \\min_{\\matrix{{\\underline{{\\bf j}}}({\\bf x})\\cr \\nabla \\cdot{\\underline{{\\bf j}}}({\\bf x})=0 \\cr \n {\\bf n}\\cdot{\\underline{{\\bf j}}}({\\bf x})={\\bf n}\\cdot{\\bf j}_0~{\\rm on}~\\partial\\Omega}}\n\\langle\\underline{{\\bf j}}\\cdot\\sigma^{-1}\\underline{{\\bf j}}\\rangle={\\bf j}_0\\cdot(\\bfm\\sigma^N)^{-1}{\\bf j}_0.\n\\eeq{2.9}\nBy taking a constant trial field ${\\underline{{\\bf j}}}({\\bf x})={\\bf j}_0$ or alternatively by taking the minimum over the larger\nclass of trial fields satisfying only $\\langle{\\underline{{\\bf j}}}\\rangle={\\bf j}_0$ we \nobtain the bounds\n\\begin{equation} \\langle\\sigma\\rangle^{-1}{\\bf j}_0\\cdot{\\bf j}_0\\leq{\\bf j}_0\\cdot(\\bfm\\sigma^N)^{-1}{\\bf j}_0\\leq \\langle\\sigma^{-1}\\rangle{\\bf j}_0\\cdot{\\bf j}_0\n\\eeq{2.10}\nof \\citeAPY{Nemat-Nasser:1993:MOP} which imply\n\\begin{equation} \\left(\\frac{{\\bf j}_0\\cdot{\\bf j}_0}{{\\bf j}_0\\cdot(\\bfm\\sigma^N)^{-1}{\\bf j}_0}-\\sigma_2\\right)\/(\\sigma_1-\\sigma_2)\n\\leq f_1 \\leq \\left(\\sigma_2^{-1}-\\frac{{\\bf j}_0\\cdot(\\bfm\\sigma^N)^{-1}{\\bf j}_0}{{\\bf j}_0\\cdot{\\bf j}_0}\\right)\/(\\sigma_2^{-1}-\\sigma_1^{-1}).\n\\eeq{2.11}\nThese bounds are applicable even if we know ${\\bf e}_0=(\\bfm\\sigma^N)^{-1}{\\bf j}_0$\nfor only one value of ${\\bf j}_0$. 
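As a simple numerical illustration of \\eq{2.11} (the numbers here are hypothetical and serve only as an example), suppose $\\sigma_1=2$ and $\\sigma_2=1$, and that a single measurement yields ${\\bf j}_0\\cdot(\\bfm\\sigma^N)^{-1}{\\bf j}_0=\\frac{2}{3}{\\bf j}_0\\cdot{\\bf j}_0$. Then \\eq{2.11} gives\n\\[ \\frac{\\frac{3}{2}-1}{2-1}=\\frac{1}{2}\\leq f_1\\leq\\frac{1-\\frac{2}{3}}{1-\\frac{1}{2}}=\\frac{2}{3}, \\]\nso this single measurement already confines $f_1$ to an interval of length $\\frac{1}{6}$. 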
For comparison, with these special Neumann boundary conditions \\eq{1.7},\nthe bounds in Theorem 3.1 of \\citeAPY{Kang:1997:ICP} coupled with the improvement in Proposition 0\nof \\citeAPY{Ikehata:1998:SEI},\nwith $\\sigma_1=k>1$, $\\sigma_2=1$ and ${\\bf j}_0\\cdot{\\bf j}_0=1$, imply\n\\begin{equation} \n\\frac{1}{k-1}(1-{\\bf j}_0\\cdot(\\bfm\\sigma^N)^{-1}{\\bf j}_0)\n\\leq f_1 \\leq \\frac{k}{k-1}(1-{\\bf j}_0\\cdot(\\bfm\\sigma^N)^{-1}{\\bf j}_0).\n\\eeq{2.12}\nIn this case it is easy to check that the upper bounds in \\eq{2.11} and \\eq{2.12} coincide while the lower bound\nin \\eq{2.11} is tighter. The bounds \\eq{2.11} are each approached arbitrarily closely if $\\Omega$ is filled with a periodic laminate\nof components 1 and 2, oriented with ${\\bf j}_0$ either parallel or orthogonal to the layers\nand we let the period length go to zero.\n\nIn summary, \\eq{2.4},\\eq{2.6} and \\eq{2.10} imply the matrix inequalities\n\\begin{equation} \\langle\\sigma^{-1}\\rangle^{-1}{\\bf I}\\leq\\bfm\\sigma^D\\leq \\langle\\sigma\\rangle{\\bf I},\\quad\\quad\n \\langle\\sigma^{-1}\\rangle^{-1}{\\bf I}\\leq\\bfm\\sigma^N\\leq \\langle\\sigma\\rangle{\\bf I}\n\\eeq{2.13}\nof \\citeAPY{Nemat-Nasser:1993:MOP}.\n\nFor arbitrary boundary conditions, i.e. for any ${\\bf e}$ and ${\\bf j}$ satisfying \\eq{1.2} within $\\Omega$,\nwe have the bounds \n\\begin{equation} \\langle{\\bf e}\\cdot{\\bf j}\\rangle \\geq {\\bf e}_0\\cdot\\bfm\\sigma^N{\\bf e}_0,\\quad\n\\langle{\\bf e}\\cdot{\\bf j}\\rangle \\geq {\\bf j}_0\\cdot(\\bfm\\sigma^D)^{-1}{\\bf j}_0,\n\\eeq{2.14}\nwhere ${\\bf e}_0=\\langle{\\bf e}\\rangle$ and ${\\bf j}_0=\\langle{\\bf j}\\rangle$,\ndue to Willis in a 1989 private communication to Nemat-Nasser and Hori, and presented by \\citeAPY{Nemat-Nasser:1993:MOP}. In conjunction with \\eq{2.13} they imply\nthe volume fraction bounds,\n\\begin{equation} \\left(\\frac{{\\bf j}_0\\cdot{\\bf j}_0}{\\langle{\\bf e}\\cdot{\\bf j}\\rangle}-\\sigma_2\\right)\/(\\sigma_1-\\sigma_2)\n\\leq f_1 \\leq \\left(\\sigma_2^{-1}-\\frac{{\\bf e}_0\\cdot{\\bf e}_0}{\\langle{\\bf e}\\cdot{\\bf j}\\rangle}\\right)\/(\\sigma_2^{-1}-\\sigma_1^{-1}).\n\\eeq{2.15}\n\n\n\\section{New bounds with one measurement}\n\\setcounter{equation}{0}\n\n\nIf we have measurements of $\\langle{\\bf e}\\cdot{\\bf j}\\rangle$ and both vectors ${\\bf e}_0$ and ${\\bf j}_0$ for arbitrary boundary \nconditions then the bounds \\eq{2.14} and \\eq{2.15} can be improved. The classical variational principle\n\\eq{2.1} implies \n\\begin{equation}\n\\min_{\\matrix{{\\underline{{\\bf e}}}({\\bf x})=-\\nabla{\\underline{V}}({\\bf x}) \n\\cr {\\underline{V}}({\\bf x})=V_0({\\bf x})~{\\rm on}~\\partial\\Omega \\cr\n\\langle\\sigma\\underline{{\\bf e}}\\rangle={\\bf j}_0}}\\langle\\underline{{\\bf e}}\\cdot\\sigma\\underline{{\\bf e}}\\rangle\n=\\langle{\\bf e}\\cdot{\\bf j}\\rangle,\n\\eeq{2.16}\nwhere we have chosen to add the constraint that $\\langle\\sigma\\underline{{\\bf e}}\\rangle={\\bf j}_0$ since we know that\nwithout this constraint the minimizer $\\underline{{\\bf e}}={\\bf e}$ satisfies $\\langle\\sigma{\\bf e}\\rangle={\\bf j}_0$. We surely\nobtain something lower if we take the minimum over the larger class of fields satisfying only\n$\\langle\\underline{{\\bf e}}\\rangle={\\bf e}_0$ and $\\langle\\sigma\\underline{{\\bf e}}\\rangle={\\bf j}_0$. 
Thus we obtain the inequality\n\\begin{equation} \n\\min_{\\matrix{{\\underline{{\\bf e}}}({\\bf x})\\cr \\langle\\underline{{\\bf e}}\\rangle={\\bf e}_0 \\cr \\langle\\sigma\\underline{{\\bf e}}\\rangle={\\bf j}_0}}\n\\langle\\underline{{\\bf e}}\\cdot\\sigma\\underline{{\\bf e}}\\rangle\\leq\\langle{\\bf e}\\cdot{\\bf j}\\rangle.\n\\eeq{2.17}\nBy introducing two vector valued Lagrange multipliers associated with the two vector valued constraints we find that\nthe minimum is attained when\n\\begin{equation} \\underline{{\\bf e}}({\\bf x})\n={\\bf e}_0+(\\langle\\sigma^{-1}\\rangle-\\sigma^{-1}({\\bf x}))(\\langle\\sigma\\rangle\\langle\\sigma^{-1}\\rangle-1)^{-1}({\\bf j}_0-\\langle\\sigma\\rangle{\\bf e}_0).\n\\eeq{2.18}\nSubstituting this back in \\eq{2.17} gives the bound\n\\begin{equation} (\\langle\\sigma\\rangle\\langle\\sigma^{-1}\\rangle-1)(\\langle{\\bf e}\\cdot{\\bf j}\\rangle-{\\bf e}_0\\cdot{\\bf j}_0)\n\\geq ({\\bf j}_0-\\langle\\sigma\\rangle{\\bf e}_0)\\cdot(\\langle\\sigma^{-1}\\rangle{\\bf j}_0-{\\bf e}_0).\n\\eeq{2.19}\nIf, with general boundary conditions, we are interested in bounding the volume fraction $f_1$ given measured values of \n$\\langle{\\bf e}\\cdot{\\bf j}\\rangle$, ${\\bf e}_0$ and ${\\bf j}_0$ then the difference between the left hand side and right hand side\nof \\eq{2.19} is a quadratic in $f_1$ whose two roots give upper and lower bounds on $f_1$. (Unless the roots happen\nto be complex, in which case there is no configuration of the two phases within $\\Omega$ which produce the measured\n$\\langle{\\bf e}\\cdot{\\bf j}\\rangle$, ${\\bf e}_0$ and ${\\bf j}_0$, indicating the presence of other phases or indicating\nan error in measurements.)\n\n\nIn the particular cases of either special Dirichlet or special Neumann boundary conditions, \n\\eq{1.5} or \\eq{1.7}, the left hand\nside of \\eq{2.19} vanishes (see \\eq{1.9}) and we obtain the reduced bounds\n\\begin{equation} 0\\geq ({\\bf j}_0-\\langle\\sigma\\rangle{\\bf e}_0)\\cdot(\\langle\\sigma^{-1}\\rangle{\\bf j}_0-{\\bf e}_0),\n\\eeq{2.20}\nwhich are in fact implied by the matrix inequalities \\eq{2.13}. This bound \\eq{2.20} is optimal.\nFor any given fixed ${\\bf e}_0$, and fixed volume fraction $f_1$, the vector ${\\bf j}_0$ \nhas an endpoint which is constrained by \\eq{2.20} to lie within a\nsphere (disk in two dimensions) centered at $(\\langle\\sigma\\rangle+\\langle\\sigma^{-1}\\rangle^{-1}){\\bf e}_0\/2$.\nWhen $\\Omega$ is filled with a periodic laminate of the two phases\nwith interfaces orthogonal to some unit vector ${\\bf m}$, and we let the period length go to zero,\nthen the endpoint of the vector ${\\bf j}_0$ covers the entire surface of this sphere (disk) as ${\\bf m}$\nranges over all unit vectors. These bounds are the analogs, for arbitrary bodies $\\Omega$, of \nbounds on possible $({\\bf e}_0,{\\bf j}_0)$ pairs\nfor composites derived by \\citeAPY{Raitum:1983:QES} and \\citeAPY{Tartar:1995:RHM}. If we are given ${\\bf e}_0$ and ${\\bf j}_0$\nand want to bound $f_1$ then we should find the range of $f_1$ where the sphere (or disk)\ncontains the endpoint of the vector ${\\bf j}_0$. The endpoints of this range are the roots\nof the right hand side of \\eq{2.20} which is a quadratic function of $f_1$. \n\nKnowledge of ${\\bf e}_0$ and ${\\bf j}_0$ is equivalent to knowledge of $\\langle{\\bf e}\\cdot{\\bf v}\\rangle$\nand $\\langle{\\bf v}\\cdot{\\bf j}\\rangle$ for all constant fields ${\\bf v}$. 
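To spell out this equivalence (a one-line check): for every constant vector ${\\bf v}$ we have\n\\[ \\langle{\\bf e}\\cdot{\\bf v}\\rangle=\\langle{\\bf e}\\rangle\\cdot{\\bf v}={\\bf e}_0\\cdot{\\bf v},\\quad\\quad \\langle{\\bf v}\\cdot{\\bf j}\\rangle={\\bf v}\\cdot\\langle{\\bf j}\\rangle={\\bf v}\\cdot{\\bf j}_0, \\]\nso such averages carry precisely the same information as the pair $({\\bf e}_0,{\\bf j}_0)$. 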
A more general alternative is to use\nthe information about \n\\begin{eqnarray} a_i=\\langle {\\bf e}\\cdot{\\bf j}_i\\rangle &= & \\frac{1}{|\\Omega|}\\int_{\\partial\\Omega}-V_0({\\bf j}_i\\cdot{\\bf n}),\n\\nonumber \\\\\n b_k=\\langle \\nabla V_k\\cdot{\\bf j}\\rangle &= & \\frac{1}{|\\Omega|}\\int_{\\partial\\Omega}-V_k({\\bf j}\\cdot{\\bf n}),\n\\eeqa{2.21}\nfor a given set of ``comparison flux fields'' ${\\bf j}_i({\\bf x})$ satisfying $\\nabla \\cdot{\\bf j}_i=0$,\n$i=1,2,\\ldots n$ and ``comparison potentials'' $V_k({\\bf x}),~k=1,2,\\ldots m$. Suppose, for\nexample, that we have just one comparison flux field ${\\bf j}_1$. We have the variational\nprinciple\n\\begin{equation}\n\\min_{\\matrix{{\\underline{{\\bf e}}}({\\bf x})=-\\nabla{\\underline{V}}({\\bf x}) \n\\cr {\\underline{V}}({\\bf x})=V_0({\\bf x})~{\\rm on}~\\partial\\Omega \\cr\n\\langle \\underline{{\\bf e}}\\cdot{\\bf j}_1\\rangle=a_1}}\\langle\\underline{{\\bf e}}\\cdot\\sigma\\underline{{\\bf e}}\\rangle\n=\\langle{\\bf e}\\cdot{\\bf j}\\rangle,\n\\eeq{2.22}\nwhere we have chosen to add the constraint that $\\langle\\underline{{\\bf e}}\\cdot{\\bf j}_1\\rangle=a_1$ since we know that\nwithout this constraint the minimizer $\\underline{{\\bf e}}={\\bf e}$ \nsatisfies $\\langle{\\bf e}\\cdot{\\bf j}_1\\rangle=a_1$. This implies the inequality\n\\begin{equation} \\langle{\\bf e}\\cdot{\\bf j}\\rangle\\geq \\min_{\\matrix{{\\underline{{\\bf e}}}({\\bf x})\\cr \n\\langle \\underline{{\\bf e}}\\cdot{\\bf j}_1\\rangle=a_1}}\\langle\\underline{{\\bf e}}\\cdot\\sigma\\underline{{\\bf e}}\\rangle. \n\\eeq{2.23}\nBy introducing a Lagrange multiplier associated with the constraint $\\langle{\\bf e}\\cdot{\\bf j}_1\\rangle=a_1$\nwe see the minimum occurs when\n\\begin{equation} \\underline{{\\bf e}}=a_1\\sigma^{-1}{\\bf j}_1\/\\langle{\\bf j}_1\\cdot\\sigma^{-1}{\\bf j}_1\\rangle,\n\\eeq{2.24}\ngiving the inequality\n\\begin{equation} \\langle{\\bf e}\\cdot{\\bf j}\\rangle\\geq a_1^2\/\\langle{\\bf j}_1\\cdot\\sigma^{-1}{\\bf j}_1\\rangle. \\eeq{2.25a}\nThis inequality gives information about $\\sigma({\\bf x})$ through $\\langle{\\bf j}_1\\cdot\\sigma^{-1}{\\bf j}_1\\rangle$.\nIf we only want bounds which involve the volume fraction we should choose ${\\bf j}_1({\\bf x})$\nwith\n\\begin{equation} |{\\bf j}_1({\\bf x})|=1\\quad {\\rm for~all~}{\\bf x}\\in\\Omega.\n\\eeq{2.25} \nThere are many divergence free fields ${\\bf j}_1({\\bf x})$ which satisfy this constraint. For example\nin two dimensions we can take\n\\begin{equation} {\\bf j}_1=({\\partial \\phi\/\\partial x_2,-\\partial \\phi\/\\partial x_1}),~~\n{\\rm with}~|\\nabla\\phi({\\bf x})|=1\\quad {\\rm for~all~}{\\bf x}\\in\\Omega.\n\\eeq{2.26}\nThus $\\phi({\\bf x})$ satisfies an Eikonal equation, and we could take $\\phi({\\bf x})$ to be the shortest\ndistance between ${\\bf x}$ and a curve outside $\\Omega$. Once \\eq{2.25} is satisfied \\eq{2.25a}\nimplies the volume fraction bound\n\\begin{equation} f_1\\leq \n \\left(\\sigma_2^{-1}-\\frac{a_1^2}{\\langle{\\bf e}\\cdot{\\bf j}\\rangle}\\right)\/(\\sigma_2^{-1}-\\sigma_1^{-1}).\n\\eeq{2.27}\nIn the special case when ${\\bf j}_1={\\bf e}_0\/|{\\bf e}_0|$ this reduces to the upper bound on \n$f_1$ given by \\eq{2.15}. \n\nAn important question is whether this new bound is sharp, and if so for what $\\sigma({\\bf x})$?\nThe new bound will be sharp when ${\\bf e}=\\underline{{\\bf e}}$ where $\\underline{{\\bf e}}$ is\ngiven by \\eq{2.24}. 
In that case\n\\begin{equation} {\\bf j}({\\bf x})=a_1{\\bf j}_1\/\\langle{\\bf j}_1\\cdot\\sigma^{-1}{\\bf j}_1\\rangle\n\\eeq{2.28}\nhas zero divergence because it is proportional to ${\\bf j}_1$. \nLet us impose the Neumann boundary condition\n\\begin{equation} {\\bf j}({\\bf x})\\cdot{\\bf n}={\\bf j}_1\\cdot{\\bf n}\\quad {\\rm for~all}~{\\bf x}\\in\\partial\\Omega,\n\\eeq{2.29}\nand look for a $\\sigma({\\bf x})$ so ${\\bf j}({\\bf x})={\\bf j}_1({\\bf x})$ and ${\\bf e}({\\bf x})=\\sigma^{-1}{\\bf j}_1({\\bf x})$\nis curl-free. Now as schematically represented by figure \\fig{0}, choose\n$\\sigma({\\bf x})$ to correspond to a finely layered composite with layers\northogonal to the streamlines of ${\\bf j}_1({\\bf x})$, and with phase 1 occupying\na local volume fraction $p({\\bf x})$. This composite will support a\ncurrent field ${\\bf j}({\\bf x})={\\bf j}_1({\\bf x})$ and an electric field ${\\bf e}({\\bf x})=\\sigma^{-1}{\\bf j}_1({\\bf x})$\nprovided \n\\begin{equation} \\nabla \\times{\\bf e}_0=0,\\quad{\\bf e}_0\\equiv[\\sigma_2^{-1}-p({\\bf x})(\\sigma_2^{-1}-\\sigma_1^{-1})]{\\bf j}_1({\\bf x}).\n\\eeq{2.30}\nHere ${\\bf e}_0({\\bf x})$ is the weak limit (local volume average) of ${\\bf e}({\\bf x})$ as the\nlayer spacing goes to zero. In two dimensions, given ${\\bf j}_1({\\bf x})$ we could look for solutions\nfor $p({\\bf x})$ such that \\eq{2.30} is satisfied and $0\\leq p({\\bf x})\\leq 1$ in $\\Omega$. We expect such\nsolutions to exist for a wide class of fields ${\\bf j}_1({\\bf x})$. This example shows that non-constant\n``comparison flux fields'' can lead to sharp bounds on the volume fraction. In three\ndimensions we only expect to find a solution of the vector equation \\eq{2.30}\nfor the scalar field $p({\\bf x})$ if ${\\bf j}_1({\\bf x})$ satisfies some additional conditions. \n \n\n\\begin{figure}\n\\vspace{2in}\n\\hspace{1.0in}\n{\\resizebox{2.0in}{1.0in}\n{\\includegraphics[0in,0in][6in,3in]{wavelam.eps}}}\n\\vspace{0.1in}\n\\caption{A schematic of the type of layered microstructure achieving the volume fraction bound \\eq{2.27}, where the black regions denote one phase, and the\nwhite regions the other phase. The layer widths should be much finer than the size of $\\Omega$. }\n\\labfig{0}\n\\end{figure}\n\n\n\n\\section{Relationship to bounding effective tensors of composites}\n\\setcounter{equation}{0}\n\nConsider a periodic composite obtained by taking the unit cell boundaries outside $\\Omega\\equiv\\Omega_1$\nand almost filling the rest of the unit cell by non-intersecting\nrescaled and translated copies $\\Omega_i$, $i=2,\\ldots, n$ of\n$\\Omega$, as illustrated in figure \\fig{1}. The remainder of the unit cell is filled by phase 2 with\nconductivity $\\sigma_2$. The unit cell structure is periodically repeated to fill all space.\nLet $\\sigma_C({\\bf x})$ ($C$ for composite) denote this effective conductivity, i.e. in \nthe unit cell\n\\begin{eqnarray} \\sigma_C({\\bf x}) &=& \\sigma({\\bf x}\/a_i+{\\bf b}_i)~{\\rm in}~\\Omega_i,~~i=1,2,\\ldots, N \\nonumber \\\\\n &=& \\sigma_2~{\\rm elsewhere~outside}~\\cup_{i}\\Omega_i,\n\\eeqa{3.1}\nwhere the scaling constants $a_i$ and translation vectors ${\\bf b}_i$ (with $a_1=1$ and ${\\bf b}_1=0$)\nare determined by the size and\nposition of each copy $\\Omega_i$, so that ${\\bf x}\/a_i+{\\bf b}_i$ is on the boundary of $\\Omega$ if and only if\n${\\bf x}$ is on the boundary of $\\Omega_i$. Let $p_n$ denote the volume fraction in the unit cell\noccupied by the material with conductivity $\\sigma_2$. 
Let $\\bfm\\sigma^*_n$ denote the (matrix valued) \neffective conductivity of this composite, which in general depends upon the relative positions\nof the copies $\\Omega_i$ within the unit cell. \n\n\n\\begin{figure}\n\\vspace{2in}\n\\hspace{1.0in}\n{\\resizebox{2.0in}{1.0in}\n{\\includegraphics[0in,0in][6in,3in]{omegacopies.eps}}}\n\\vspace{0.1in}\n\\caption{A period cell containing rescaled copies of $\\Omega$. }\n\\labfig{1}\n\\end{figure}\n\nWe have the classical variational inequality \n\\begin{equation} {\\bf e}_0\\cdot\\bfm\\sigma^*_n{\\bf e}_0\\leq \\langle\\underline{{\\bf e}}_C\\cdot\\sigma_C\\underline{{\\bf e}}_C\\rangle,\n\\eeq{3.2}\nwhich holds for any trial electric field $\\underline{{\\bf e}}_C$ satisfying\n\\begin{equation} \\nabla \\times\\underline{{\\bf e}}_C=0,\\quad\\underline{{\\bf e}}_C~{\\rm periodic},\\quad\n\\langle\\underline{{\\bf e}}_C\\rangle={\\bf e}_0,\n\\eeq{3.3}\nwhere now the volume averages are over the entire unit cell, rather than just $\\Omega$. \nIn particular, letting ${\\bf e}({\\bf x})$ denote the electric field within $\\Omega$ when the special\nDirichlet boundary conditions \\eq{1.5} are applied, we may take in the unit cell\n\\begin{eqnarray} \\underline{{\\bf e}}_C({\\bf x}) &=& {\\bf e}({\\bf x}\/a_i+{\\bf b}_i)~{\\rm in}~\\Omega_i,~~i=1,2,\\ldots, N \\nonumber \\\\\n &=& {\\bf e}_0~{\\rm elsewhere~outside}~\\cup_{i}\\Omega_i,\n\\eeqa{3.4}\nand periodically extend it. Then we get\n\\begin{eqnarray} \\langle\\underline{{\\bf e}}_C\\cdot\\sigma_C\\underline{{\\bf e}}_C\\rangle & = & p_n\\sigma_2{\\bf e}_0\\cdot{\\bf e}_0 \\nonumber \\\\\n&~& +(1-p_n)\\sum_{i=1}^N\\langle{\\bf e}({\\bf x}\/a_i+{\\bf b}_i)\\cdot\\sigma({\\bf x}\/a_i+{\\bf b}_i){\\bf e}({\\bf x}\/a_i+{\\bf b}_i)\\rangle_{\\Omega_i} \\nonumber \\\\\n&=& p_n\\sigma_2{\\bf e}_0\\cdot{\\bf e}_0+(1-p_n)\\langle{\\bf e}({\\bf x})\\cdot\\sigma({\\bf x}){\\bf e}({\\bf x})\\rangle_{\\Omega} \\nonumber \\\\\n&=& p_n\\sigma_2{\\bf e}_0\\cdot{\\bf e}_0+(1-p_n){\\bf e}_0\\cdot\\bfm\\sigma^D{\\bf e}_0,\n\\eeqa{3.5}\nwhere $\\langle\\cdot\\rangle_{\\Omega_i}$ denotes an average over the region $\\Omega_i$.\nCombined with the variational inequality \\eq{3.2} this implies the bound\n\\begin{equation} {\\bf e}_0\\cdot\\bfm\\sigma^*_n{\\bf e}_0\\leq p_n\\sigma_2{\\bf e}_0\\cdot{\\bf e}_0+(1-p_n){\\bf e}_0\\cdot\\bfm\\sigma^D{\\bf e}_0\n\\leq {\\bf e}_0\\cdot\\bfm\\sigma^D{\\bf e}_0,\n\\eeq{3.6}\nwhere we have used the inequality $\\bfm\\sigma^D\\geq\\sigma_2{\\bf I}$ implied by \\eq{2.13}. Thus we get\n\\begin{equation} \\bfm\\sigma^*_n\\leq \\bfm\\sigma^D.\n\\eeq{3.7}\nThis composite has a volume fraction $f_1'=(1-p_n)f_1$ of phase 1.\nThus any bound ``from below'' on the effective conductivity $\\bfm\\sigma^*_n$, applicable to composites\nhaving a volume fraction $f_1'$ of phase 1, immediately translates into\nbound ``from below'' on $\\bfm\\sigma^D$. Now consider what happens as we increase $N$, inserting\nmore and more regions $\\Omega_i$, while leaving undisturbed the regions $\\Omega_i$ already in place,\nso that $p_n\\to 0$ as $n\\to\\infty$. We are assured that this is possible. Rescaled copies\nof any shaped region can be packed to fill all space: see, for example, Theorem A.1\nin \\citeAPY{Benveniste:2003:NER}. 
Define\n\\begin{equation} \\bfm\\sigma^*=\\lim_{n\\to\\infty}\\bfm\\sigma^*_n.\n\\eeq{3.7a}\nWe are assured this limit exists since if we change the geometry in some\nsmall volume then the effective conductivity (assuming $\\sigma_1$ and $\\sigma_2$ are strictly positive and finite)\nis perturbed only by a small amount (\\citeAY{Zhikov:1994:HDO}).\nWe will call $\\bfm\\sigma^*$ the effective conductivity tensor of an assemblage of rescaled copies\nof $\\Omega$ packed to fill all space. Then \\eq{3.7} implies\n\\begin{equation} \\bfm\\sigma^*\\leq \\bfm\\sigma^D, \\eeq{3.7b}\nwhich is essentially the bound of \\citeAPY{Huet:1990:AVC} applied to this assemblage.\nAssume the bound is continuous with respect\nto $f_1'$ at the point $f_1'=f_1$, as expected. Then taking the limit $n\\to\\infty$ the \n``lower bound'' on the effective tensor of composites having\nvolume fraction $f_1$ must also be a lower bound on $\\bfm\\sigma^D$.\n\nIn particular, the harmonic mean bound \n$\\bfm\\sigma^*\\geq\\langle\\sigma^{-1}\\rangle^{-1}{\\bf I}$ translates into the elementary bound \n$\\bfm\\sigma^D\\geq\\langle\\sigma^{-1}\\rangle^{-1}{\\bf I}$ of Nemat-Nasser and Hori, obtained before. \nAdditionally, in our two-phase composite, the effective conductivity $\\bfm\\sigma^*$ satisfies\nthe Lurie-Cherkaev-Murat-Tartar bound (Lurie and Cherkaev \\citeyearNP{Lurie:1982:AEC}, \\citeyearNP{Lurie:1984:EEC}; \\citeAY{Murat:1985:CVH}; \\citeAY{Tartar:1985:EFC})\n\\begin{equation} f_1{\\rm Tr}[(\\bfm\\sigma^*-\\sigma_2{\\bf I})^{-1}]\\leq d\/(\\sigma_1-\\sigma_2)+f_2\/\\sigma_2,\n\\eeq{3.8}\n[which are a generalization of the bounds of \\citeAPY{Hashin:1962:VAT}] where $d=2,3$ is the dimensionality\nof the composite. Since $\\bfm\\sigma^D\\geq\\bfm\\sigma^*\\geq \\sigma_2{\\bf I}$ it follows that \n\\begin{equation} (\\bfm\\sigma^*-\\sigma_2{\\bf I})^{-1}\\geq (\\bfm\\sigma^D-\\sigma_2{\\bf I})^{-1}, \\eeq{3.9}\nand so \\eq{3.8} implies the new bound\n\\begin{equation} f_1{\\rm Tr}[(\\bfm\\sigma^D-\\sigma_2{\\bf I})^{-1}]\\leq d\/(\\sigma_1-\\sigma_2)+f_2\/\\sigma_2.\n\\eeq{3.10}\nBy multiplying this inequality by $\\sigma_2^2$ and adding $df_1\\sigma_2$ to both sides we see that it \ncan be rewritten in the equivalent form \n\\begin{equation} f_1{\\rm Tr}[(\\sigma_2^{-1}{\\bf I}-(\\bfm\\sigma^D)^{-1})^{-1}]\\leq d\/(\\sigma_2^{-1}-\\sigma_1^{-1})-(d-1)f_2\\sigma_2.\n\\eeq{3.10a}\nAs $d^2\/{\\rm Tr}({\\bf A})\\leq {\\rm Tr}({\\bf A}^{-1})$ for any positive definite matrix ${\\bf A}$ we also obtain\nthe weaker bound\n\\begin{equation} \\frac{1}{d}{\\rm Tr}[(\\bfm\\sigma^D)^{-1}]\\leq \\sigma_2^{-1}-\\frac{f_1 d}{d\/(\\sigma_2^{-1}-\\sigma_1^{-1})-(d-1)f_2\\sigma_2},\n\\eeq{3.11}\nwhich is a particular case of the universal bounds first derived by \nNemat-Nasser and Hori (\\citeyearNP{Nemat-Nasser:1993:MOP}, \\citeyearNP{Nemat-Nasser:1995:UBO}), see equation (5.4.9) in their 1995 paper,\nobtained under the assumption that $\\Omega$ is ellipsoidal or parallelpipedic.\n(which we see is not needed).\n\nIf one is interested in bounds on the volume fraction $f_1$ then \\eq{3.10} implies the upper bound\n\\begin{equation} f_1\\leq \\frac{1\/\\sigma_2+d\/(\\sigma_1-\\sigma_2)}{1\/\\sigma_2+{\\rm Tr}[(\\bfm\\sigma^D-\\sigma_2{\\bf I})^{-1}]}.\n\\eeq{3.12}\n\nTo obtain lower bounds on $f_1$, we consider the same periodic composite\nand apply the dual variational inequality\n\\begin{equation} {\\bf j}_0\\cdot(\\bfm\\sigma^*_n)^{-1}{\\bf j}_0\\leq \\langle\\underline{{\\bf j}}_C\\cdot\\sigma_C^{-1}\\underline{{\\bf 
j}}_C\\rangle\n\\eeq{3.13}\nvalid for any trial current field $\\underline{{\\bf j}}_C$ satisfying\n\\begin{equation} \\nabla \\cdot\\underline{{\\bf j}}_C=0,\\quad\\underline{{\\bf j}}_C~{\\rm periodic},\\quad\n\\langle\\underline{{\\bf j}}_C\\rangle={\\bf j}_0.\n\\eeq{3.14}\nLetting ${\\bf j}({\\bf x})$ denote the current field within $\\Omega$ when the special\nNeumann boundary conditions \\eq{1.7} are applied, we may take in the unit cell\n\\begin{eqnarray} \\underline{{\\bf j}}_C({\\bf x}) &=& {\\bf j}({\\bf x}\/a_i+{\\bf b}_i)~{\\rm in}~\\Omega_i,~~i=1,2,\\ldots, N \\nonumber \\\\\n &=& {\\bf j}_0~{\\rm elsewhere~outside}~\\cup_{i}\\Omega_i,\n\\eeqa{3.15}\nand periodically extend it. \nSubstituting this trial field in \\eq{3.13} gives the bound \n\\begin{equation} (\\bfm\\sigma^*_n)^{-1}\\leq p_n\\sigma_2^{-1}{\\bf I}+(\\bfm\\sigma^N)^{-1}, \\eeq{3.15a}\nwhich in the limit $n\\to\\infty$ implies\n\\begin{equation} \\bfm\\sigma^*\\geq \\bfm\\sigma^N, \\eeq{3.16}\nwhich is essentially the bound of \\citeAPY{Huet:1990:AVC} applied to the\nassemblage of rescaled copies of $\\Omega$ packed to fill all space.\n\nThus any bound ``from above'' on the effective conductivity $\\bfm\\sigma^*_n$ of composites having a volume \nfraction $f_1'$ immediately \ntranslates into a bound ``from above'' on $(p_n\\sigma_2^{-1}{\\bf I}+(\\bfm\\sigma^N)^{-1})^{-1}$. Taking\nthe limit $n\\to\\infty$\nand assuming continuity of the bound at $f_1'=f_1$ the ``upper bound'' \non the effective tensor of composites having volume fraction $f_1$ must also be an upper bound\non $\\bfm\\sigma^N$.\nIn particular, the other Murat-Tartar-Lurie-Cherkaev bound \n\\begin{equation} f_2{\\rm Tr}[(\\sigma_1{\\bf I}-\\bfm\\sigma^*)^{-1}]\\leq d\/(\\sigma_1-\\sigma_2)-f_1\/\\sigma_1,\n\\eeq{3.17}\nimplies\n\\begin{equation} f_2{\\rm Tr}[(\\sigma_1{\\bf I}-\\bfm\\sigma^N)^{-1}]\\leq d\/(\\sigma_1-\\sigma_2)-f_1\/\\sigma_1. \n\\eeq{3.18}\nAgain using the inequality $d^2\/{\\rm Tr}({\\bf A})\\leq {\\rm Tr}({\\bf A}^{-1})$ for ${\\bf A}>0$, we obtain\nthe weaker bound\n\\begin{equation} \\frac{1}{d}{\\rm Tr}(\\bfm\\sigma^N)\\leq \\sigma_1-\\frac{f_2 d}{d\/(\\sigma_1-\\sigma_2)-f_1\/\\sigma_1}\n\\eeq{3.20}\nwhich is a particular case of the universal bounds derived by Nemat-Nasser and Hori (\\citeyearNP{Nemat-Nasser:1993:MOP}, \\citeyearNP{Nemat-Nasser:1995:UBO}), see \nequation (5.3.11) in their 1995 paper,\nobtained under the assumption that $\\Omega$ is ellipsoidal or parallelpipedic\n(which we see is not needed).\n\nFrom \\eq{3.18} we directly obtain the volume fraction bound\n\\begin{equation} f_2\\leq \\frac{d\/(\\sigma_1-\\sigma_2)-1\/\\sigma_1}{\\{{\\rm Tr}[(\\sigma_1{\\bf I}-\\bfm\\sigma^N)^{-1}]\\}-1\/\\sigma_1},\n\\eeq{3.21}\ngiving a lower bound on the volume fraction $f_1=1-f_2$. \n\nIn the asymptotic limit as the volume fraction goes to zero\nthe volume fraction bounds \\eq{3.12} and \\eq{3.21} reduce to those of Capdeboscq and Vogelius (\\citeyearNP{Capdeboscq:2003:OAE}, \\citeyearNP{Capdeboscq:2004:RSR}),\nas shown in the two dimensional case by \\citeAPY{Kang:2011:SBV}. The paper of Kang, Kim and Milton also tests the bounds numerically, and their (two-dimensional) results \nshow the bound \\eq{3.12} is typically close to the actual volume fraction for a variety of inclusions of phase 1 in a matrix of phase 2. Similarly we can expect\nthat the bound \\eq{3.21} will be typically close to the actual volume fraction for a variety of inclusions of phase 2 in a matrix of phase 1. 
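As a simple illustration of how \\eq{3.12} is applied, suppose the measured tensor happens to be isotropic, $\\bfm\\sigma^D=a{\\bf I}$ for some scalar $a$, so that ${\\rm Tr}[(\\bfm\\sigma^D-\\sigma_2{\\bf I})^{-1}]=d\/(a-\\sigma_2)$. Then \\eq{3.12} reads\n$$ f_1\\leq \\frac{1\/\\sigma_2+d\/(\\sigma_1-\\sigma_2)}{1\/\\sigma_2+d\/(a-\\sigma_2)}. $$\nWith the purely illustrative values $d=2$, $\\sigma_1=5$, $\\sigma_2=1$ and a measured $a=1.5$ this gives $f_1\\leq 1.5\/5=0.3$.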
\n\n\n\n\\section{Coupled bounds in two-dimensions}\n\\setcounter{equation}{0}\nThe tensors $\\bfm\\sigma^D$ and $\\bfm\\sigma^N$ obviously depend on $\\sigma_1$ and $\\sigma_2$, i.e. \n$\\bfm\\sigma^D=\\bfm\\sigma^D(\\sigma_1,\\sigma_2)$ and $\\bfm\\sigma^N=\\bfm\\sigma^N(\\sigma_1,\\sigma_2)$. Let us assume we have\nmeasurements of these tensors for an additional pair of conductivities $(k_1, k_2)$,\n(which could be obtained, say from thermal, magnetic permeability, or diffusivity\nmeasurements) and let ${\\bf k}^D$ and ${\\bf k}^N$ denote these tensors,\n\\begin{equation} {\\bf k}^D=\\bfm\\sigma^D(k_1,k_2),\\quad {\\bf k}^N=\\bfm\\sigma^N(k_1,k_2).\n\\eeq{4.1}\nWe still let $\\bfm\\sigma^D$ and $\\bfm\\sigma^N$ denote the tensors associated with the first\npair of conductivities $(\\sigma_1, \\sigma_2)$, with $\\sigma_1>\\sigma_2$. From \\eq{3.7b} and\n\\eq{3.16} we have the inequalities\n\\begin{eqnarray} \\sigma_2{\\bf I}\\leq\\bfm\\sigma^N\\leq\\bfm\\sigma^*\\leq \\bfm\\sigma^D\\leq \\sigma_1{\\bf I}, \\nonumber \\\\\nk^-\\leq {\\bf k}^N\\leq{\\bf k}^*\\leq {\\bf k}^D\\leq k^+{\\bf I},\n\\eeqa{4.2}\nwhere $k^-=\\min\\{k_1,k_2\\}$ and $k^+=\\max\\{k_1,k_2\\}$ and\n${\\bf k}^*$ is the effective conductivity the composite considered\nin the previous section when $\\sigma_1$ and $\\sigma_2$ are replaced by $k_1$ and $k_2$. (It can\neasily be checked that these inequalities still hold if $k_2>k_1$.) \n\nFor two dimensional conductivity from duality \n(\\citeAY{Keller:1964:TCC}; \\citeAY{Dykhne:1970:CTD})\nwe know the functions \n$\\bfm\\sigma^D=\\bfm\\sigma^D(\\sigma_1,\\sigma_2)$ and $\\bfm\\sigma^N=\\bfm\\sigma^N(\\sigma_1,\\sigma_2)$ satisfy\n\\begin{eqnarray} \\bfm\\sigma^D(\\sigma_2,\\sigma_1)& = &\\sigma_1\\sigma_2{\\bf R}_\\perp^T[\\bfm\\sigma^N(\\sigma_1,\\sigma_2)]^{-1}{\\bf R}_\\perp, \\nonumber \\\\\n \\bfm\\sigma^N(\\sigma_2,\\sigma_1)& = &\\sigma_1\\sigma_2{\\bf R}_\\perp^T[\\bfm\\sigma^D(\\sigma_1,\\sigma_2)]^{-1}{\\bf R}_\\perp,\n\\eeqa{4.3}\nwhere \n\\begin{equation} {\\bf R}_\\perp=\\pmatrix{0 & 1 \\cr -1 & 0}\n\\eeq{4.4}\nis the matrix for a $90^\\circ$ rotation.\nSo if we know these tensors for the conductivity pair $(k_1, k_2)$, we also know them for the \nconductivity pair $(k_2, k_1)$. Hence, by making such an interchange if necessary, we may assume without\nloss of generality that $k_1>k_2$, i.e. that $k^+=k_1$ and $k^-=k_2$. Finally, by \ninterchanging $k$ with $\\sigma$ if necessary,\nwe may assume without loss of generality that \n\\begin{equation} \\sigma_1\/\\sigma_2\\geq k_1\/k_2>1. \\eeq{4.5}\n\n\n\nOptimal bounds on all possible matrix pairs $(\\bfm\\sigma^*,{\\bf k}^*)$ for composites having a prescribed\nvolume fraction $f_1$ of phase 1 have been derived by \\citeAPY{Cherkaev:1992:ECB}, \nand extended to an arbitrary number of effective conductivity function values by \\citeAPY{Clark:1995:OBC}.\nHowever it seems\ndifficult to extract bounds on $f_1$ from these optimal bounds. Instead we consider a\npolycrystal checkerboard with conductivities \n\\begin{equation} \\bfm\\sigma({\\bf x})={\\bf R}^T({\\bf x})\\bfm\\sigma^*{\\bf R}({\\bf x}),\\quad {\\bf k}({\\bf x})={\\bf R}^T({\\bf x}){\\bf k}^*{\\bf R}({\\bf x}),\n\\quad {\\rm with}~{\\bf R}^T({\\bf x}){\\bf R}({\\bf x})={\\bf I},\n\\eeq{4.6}\nin which the rotation field ${\\bf R}({\\bf x})$ is ${\\bf I}$ in the ``white squares'' and\n${\\bf R}_\\perp$ in the ``black squares''. 
By a result of \\citeAPY{Dykhne:1970:CTD} this material\nhas effective conductivities $(\\sigma_*{\\bf I},k_*{\\bf I})$ where\n\\begin{equation} \\sigma_*=\\sqrt{\\det\\bfm\\sigma^*},\\quad k_*=\\sqrt{\\det{\\bf k}^*}.\n\\eeq{4.7}\nNow we replace the ``white squares'' by the limiting composite considered\nin the previous section (with structure much smaller than the size of the\nsquares) and we replace the ``black squares'' by the limiting composite considered\nin the previous section, rotated by $90^\\circ$. The resulting material is an isotropic\ncomposite of phases 1 and 2 and so the pair $(\\sigma_*,k_*)$ satisfies the bounds\nof \\citeAPY{Milton:1981:BTO},\n\\begin{equation} u(k_*)\\leq \\sigma_* \\leq v(k_*), \\eeq{4.10}\nwhich are attained when the composite is an assemblage of doubly coated disks, where\n\\begin{eqnarray} \n v(k_*) & = &\\sigma_1-\\frac{2f_2\\sigma_1(\\sigma_1^2-\\sigma_2^2)}{(f_2\\sigma_1+f_1\\sigma_2+\\sigma_1)(\\sigma_1+\\sigma_2)\n+(\\sigma_1-\\sigma_2)^2\\alpha_1(k_*)}, \\nonumber \\\\\nu(k_*) & \\equiv &\\sigma_2+\n\\frac{2f_1\\sigma_2(\\sigma_1^2-\\sigma_2^2)}{(f_2\\sigma_1+f_1\\sigma_2+\\sigma_2)(\\sigma_1+\\sigma_2)\n+(\\sigma_1-\\sigma_2)^2\\alpha_2(k_*)},\n\\eeqa{4.11}\nand\n\\begin{eqnarray}\n\\alpha_1(k_*)& = &\\frac{(k_1+k_2)[2f_2k_1(k_1-k_2)\/(k_1-k_*)-(f_2k_1+f_1k_2+k_1)]}{(k_1-k_2)^2}, \\nonumber \\\\\n\\alpha_2(k_*)& = &\\frac{(k_1+k_2)[2f_1k_2(k_1-k_2)\/(k_*-k_2)-(f_2k_1+f_1k_2+k_2)]}{(k_1-k_2)^2}.\n\\eeqa{4.12}\nNow for any two symmetric matrices ${\\bf A}$ and ${\\bf B}$ with ${\\bf A}\\geq{\\bf B}>0$ we \nhave ${\\bf B}^{-1\/2}{\\bf A}{\\bf B}^{-1\/2}\\geq{\\bf I}$, and so $\\det({\\bf B}^{-1\/2}{\\bf A}{\\bf B}^{-1\/2})\\geq 1$\nimplying $\\det({\\bf A})>\\det({\\bf B})$. Thus \\eq{4.2} and \\eq{4.7} imply\n\\begin{equation} \\sigma_2\\leq\\sigma_N\\leq\\sigma_*\\leq \\sigma_D\\leq\\sigma_1, \n\\quad k_2\\leq k_N\\leq k_*\\leq k_D\\leq k_1,\n\\eeq{4.13}\nwhere we define\n\\begin{equation} \\sigma_N=\\sqrt{\\det\\bfm\\sigma_N},\\quad \\sigma_D=\\sqrt{\\det\\bfm\\sigma_D},\\quad\n k_N=\\sqrt{\\det{\\bf k}_N},\\quad k_D=\\sqrt{\\det{\\bf k}_D}.\n\\eeq{4.13a}\nThe Hashin-Shtrikman bounds (\\citeAY{Hashin:1962:VAT}; \\citeAY{Hashin:1970:TCM}),\n\\begin{equation} k_1-\\frac{2f_2k_1(k_1-k_2)}{f_2k_1+f_1k_2+k_1}\\geq k_*\n\\geq k_2+\\frac{2f_1k_2(k_1-k_2)}{f_2k_1+f_1k_2+k_2},\n\\eeq{4.14}\nimply that both $\\alpha_1(k_*)$ and $\\alpha_2(k_*)$ are non-negative. 
Hence the denominators\nin \\eq{4.11} are positive and so \\eq{4.10} implies\n\\begin{eqnarray} \n(\\sigma_1-\\sigma_*)[(f_2\\sigma_1+f_1\\sigma_2+\\sigma_1)(\\sigma_1+\\sigma_2)\n+(\\sigma_1-\\sigma_2)^2\\alpha_1(k_*)]\\geq 2f_2\\sigma_1(\\sigma_1^2-\\sigma_2^2),\n\\nonumber \\\\\n(\\sigma_*-\\sigma_2)[(f_2\\sigma_1+f_1\\sigma_2+\\sigma_2)(\\sigma_1+\\sigma_2)\n+(\\sigma_1-\\sigma_2)^2\\alpha_2(k_*)]\\geq 2f_1\\sigma_2(\\sigma_1^2-\\sigma_2^2).\n\\eeqa{4.15}\nSince $\\alpha_1(k_D)\\geq\\alpha_1(k_*)$ and $\\alpha_2(k_N)\\geq\\alpha_2(k_*)$, we get using\n\\eq{4.13},\n\\begin{eqnarray}\n(\\sigma_1-\\sigma_N)[(f_2\\sigma_1+f_1\\sigma_2+\\sigma_1)(\\sigma_1+\\sigma_2)\n+(\\sigma_1-\\sigma_2)^2\\alpha_1(k_D)]\\geq 2f_2\\sigma_1(\\sigma_1^2-\\sigma_2^2),\n\\nonumber \\\\\n(\\sigma_D-\\sigma_2)[(f_2\\sigma_1+f_1\\sigma_2+\\sigma_2)(\\sigma_1+\\sigma_2)\n+(\\sigma_1-\\sigma_2)^2\\alpha_2(k_N)]\\geq 2f_1\\sigma_2(\\sigma_1^2-\\sigma_2^2).\n\\nonumber \\\\ ~\n\\eeqa{4.16}\nAs $\\alpha_1(k_D)$ and $\\alpha_2(k_N)$ depend linearly on $f_1$ and $f_2=1-f_1$,\nthe equations \\eq{4.16} readily yield bounds on the volume fraction. Eunjoo Kim\nhas used an integral equation solver [as described by \\citeAPY{Kang:2011:SBV}]\nto compare the bounds \\eq{4.16} with the bounds \\eq{3.12} and \\eq{3.21}. Her results\nare presented in figures \\ref{3}, \\ref{4}, and \\ref{5}. More numerical results testing\nthe bounds \\eq{3.12} and \\eq{3.21} are in the paper by \\citeAPY{Kang:2011:SBV}.\n\n\n\n\\begin{figure}[htbp]\n\\begin{center}\n\\epsfig{figure=LU_DYDNa6.3.eps,width=10cm}\n\\end{center}\n\\caption{The first figure shows the circular body $\\Omega$ containing an ellipse of phase 1 surrounded by phase 2.\nThe second figure shows the results for the bounds\n\\eq{3.12} and \\eq{3.21} while the third figure shows the results \nfor the bounds \\eq{4.16}.\nThe bounds are for increasing $\\sigma_1$, with $\\sigma_2=1$\nand (for the third figure) the pairs $(\\sigma_1,k_1)$ are taken as\n$(1.1,1.05)$, $(1.2,1.1)$, $(1.5,1.2)$, $(2,1.5)$, $(3,2)$, $(5,3)$, $(10,5)$ and $(20,10)$,\nwith $\\sigma_2=k_2=1$.\nHere $U(\\sigma_1)$ and $L(\\sigma_1)$ are the upper and \nlower bounds on the volume fraction, and the true volume fraction is $f_1=0.08$.\nFigure supplied courtesy of Eunjoo Kim.}\\label{3}\n\\end{figure}\n\n\\begin{figure}[htbp]\n\\begin{center}\n\\epsfig{figure=LU_DYDNa5.3.eps,width=10cm}\n\\end{center}\n\\caption{The same as for figure \\ref{3} but with the elliptical \ninclusion moved closer to the boundary of $\\Omega$. Figure supplied courtesy of Eunjoo Kim.}\\label{4}\n\\end{figure}\n\n\n\n \n\\begin{figure}[htbp]\n\\begin{center}\n\\epsfig{figure=LU_DNDNa4.3.eps,width=10cm}\n\\end{center}\n\\caption{The same as for figure \\ref{3} but with a non-elliptical inclusion of phase 1 in a square region $\\Omega$. The true volume\nfraction is $f_1=0.0673$. Figure supplied courtesy of Eunjoo Kim. }\\label{5}\n\\end{figure}\n\n\\section{Coupled bounds in three-dimensions}\n\\setcounter{equation}{0}\n\nWe can also derive coupled bounds in three dimensions. Let us assume the phases\nhave been labeled so that\n\\begin{equation} \\sigma_1 k_1\\geq \\sigma_2 k_2, \\quad {\\rm i.e.}~\\sigma_1\/\\sigma_2\\geq k_2\/k_1,\n\\eeq{4.17}\nand by interchanging $\\sigma$ with $k$ if necessary let us assume\n\\begin{equation} \\sigma_1\/\\sigma_2\\geq k_1\/k_2.\n\\eeq{4.18}\nThese two inequalities imply $\\sigma_1\/\\sigma_2>1$ as before. We want to use the\ninequalities \\eq{4.2} to derive bounds on the volume fraction. 
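(Multiplying the two forms $\\sigma_1\/\\sigma_2\\geq k_2\/k_1$ and $\\sigma_1\/\\sigma_2\\geq k_1\/k_2$ of \\eq{4.17} and \\eq{4.18} gives $(\\sigma_1\/\\sigma_2)^2\\geq 1$, confirming that phase 1 is the more conducting phase.)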
As in the \ntwo-dimensional case the idea is to first construct an isotropic\npolycrystal, where the polycrystal has the conductivities \\eq{4.6}\nin which the rotation field ${\\bf R}({\\bf x})$ is constant within grains\nwhich we take to be spheres. These spheres fill all space, and the crystal\norientation varies randomly from sphere to sphere so that the composite\nhas isotropic conductivities $(\\sigma_*{\\bf I},k_*{\\bf I})$. We use the effective medium formula\n(\\citeAY{Stroud:1975:GEM}; \\citeAY{Helsing:1991:ECA})\nwhich gives \n\\begin{equation} \\sigma_*=g(\\bfm\\sigma_*), \\quad k_*=g({\\bf k}_*),\n\\eeq{4.19}\nwhere for any positive definite symmetric $3\\times 3$ matrix ${\\bf A}$, $g=g({\\bf A})$ is taken to be the\nunique positive root of\n\\begin{equation} \\frac{\\lambda_1-g}{\\lambda_1+2g}+\\frac{\\lambda_2-g}{\\lambda_2+2g}+\\frac{\\lambda_3-g}{\\lambda_3+2g}=0,\n\\eeq{4.20}\nin which $\\lambda_1$, $\\lambda_2$, and $\\lambda_3$ are the eigenvalues of ${\\bf A}$. This effective\nmedium formula is realizable (\\citeAY{Milton:1985:TCP}; \\citeAY{Avellaneda:1987:IHD})\nin the sense that it corresponds to a limiting composite\nof spherical grains with hierarchical structure (where any pair of grains of \ncomparable size are well separated from each other, relative to their diameter). \nNote that the left hand side of side of \\eq{4.20} increases if any of the eigenvalues\n$\\lambda_i$ increase, and decreases if $g$ increases. So $g({\\bf A})$ must increase if\nany or all of the eigenvalues of ${\\bf A}$ increase. It follows that $g({\\bf B})\\geq g({\\bf A})$\nif ${\\bf B}\\geq{\\bf A}>0$. Hence the inequalities \\eq{4.2} imply\n\\begin{equation} \\sigma_2\\leq\\sigma_N\\leq\\sigma_*\\leq \\sigma_D\\leq\\sigma_1, \n\\quad k^-\\leq k_N\\leq k_*\\leq k_D\\leq k^+,\n\\eeq{4.20a}\nwhere now\n\\begin{equation} \\sigma_N=g(\\bfm\\sigma_N),\\quad \\sigma_D=g(\\bfm\\sigma_D),\\quad k_N=g({\\bf k}_N),\\quad k_D=g({\\bf k}_D).\n\\eeq{4.20b}\n\nWe next replace the material in each sphere by the appropriately oriented \nlimiting composite considered in the previous section (with structure much \nsmaller than the sphere diameter) to obtain a two-phase isotropic composite with\n$(\\sigma_*,k_*)$ as its conductivities. Thus $\\sigma_*$ must satisfy the upper bound of Bergman (\\citeyearNP{Bergman:1976:VBS},\\citeyearNP{Bergman:1978:DCC})\n\\begin{equation}\n\\sigma_*\\leq f_1\\sigma_1+f_2\\sigma_2\n-\\frac{f_1f_2(\\sigma_1-\\sigma_2)^2}{3\\sigma_2+(\\sigma_1-\\sigma_2)\\gamma(k_*)},\n\\eeq{4.23}\nwhere\n\\begin{equation} \\gamma(k_*)=\\frac{f_1f_2(k_1-k_2)}{f_1k_1+ f_2k_2 -k_*}-\\frac{3k_2}{k_1-k_2},\n\\eeq{4.24}\nand the lower bound\n\\begin{equation} \\sigma_*\\geq \\sigma_2+\n\\frac{3f_1\\sigma_2(\\sigma_1-\\sigma_2)(\\sigma_2+2\\sigma_1)}{(f_2\\sigma_1+f_1\\sigma_2+2\\sigma_2)(\\sigma_2+2\\sigma_1)\n+(\\sigma_1-\\sigma_2)^2\\beta(k_*)},\n\\eeq{4.21}\nwhere\n\\begin{equation}\n\\beta(k_*) = \\frac{(k_2+2k_1)[3f_1k_2(k_1-k_2)\/(k_*-k_2)-(f_2k_1+f_1k_2+2k_2)]}{(k_1-k_2)^2}.\n\\eeq{4.22}\nThis lower bound was first conjectured by \\citeAPY{Milton:1981:BTO}. A proof was proposed\nby \\citeAPY{Avellaneda:1988:ECP} which was corrected by \\citeAPY{Nesi:1991:MII} and \\citeAPY{Zhikov:1991:EHM}.\n\nThe lower bound \\eq{4.21} is sharp, being attained for two-phase assemblages of doubly\ncoated spheres (\\citeAY{Milton:1981:BTO}). 
The upper bound \\eq{4.23} is attained at $5$ values of $\\gamma(k_*)$\nnamely when $\\gamma(k_*)=f_2, 3f_2\/2, 3f_2, 3-3f_1\/2,$ and $3-f_1$ (\\citeAY{Milton:1981:BCP}).\n\nThe Hashin-Shtrikman bound (\\citeAY{Hashin:1962:VAT}),\n\\begin{equation} (k_*-k_2)\/(k_1-k_2)\\geq 3f_1k_2\/(f_2k_1+f_1k_2+2k_2),\n\\eeq{4.25}\nimplies that $\\beta(k_*)$ is non-negative. Hence the denominator\nin \\eq{4.21} is positive and so the inequality implies\n\\begin{equation} (\\sigma_*-\\sigma_2)[(f_2\\sigma_1+f_1\\sigma_2+2\\sigma_2)(\\sigma_2+2\\sigma_1)\n+(\\sigma_1-\\sigma_2)^2\\beta(k_*)]\\geq 3f_1\\sigma_2(\\sigma_1-\\sigma_2)(\\sigma_2+2\\sigma_1).\n\\eeq{4.26}\nThe Hashin-Shtrikman bounds can also be rewritten in the form\n\\begin{equation}\nf_2k_1+f_1k_2+2k^- \\leq \\frac{f_1f_2(k_1-k_2)^2}{f_1k_1+ f_2k_2 -k_*}\\leq f_2k_1+f_1k_2+2k^+.\n\\eeq{4.27}\nThese inequalities imply\n$\\gamma(k_*)$ lies between $f_2$ and $3-f_1$. Hence the denominator in \\eq{4.23} is positive and\nthe inequality can be rewritten as \n\\begin{equation}\n(f_1\\sigma_1+f_2\\sigma_2-\\sigma_*)(3\\sigma_2+(\\sigma_1-\\sigma_2)\\gamma(k_*))\\geq f_1f_2(\\sigma_1-\\sigma_2)^2.\n\\eeq{4.28}\nWhen $k_1\\geq k_2$ \\eq{4.20a} implies $\\beta(k_N)\\geq\\beta(k_*)$ and $\\gamma(k_D)\\geq\\gamma(k_*)$, and \nhence \n\\begin{eqnarray}\n(\\sigma_D-\\sigma_2)[(f_2\\sigma_1+f_1\\sigma_2+2\\sigma_2)(\\sigma_2+2\\sigma_1)\n+(\\sigma_1-\\sigma_2)^2\\beta(k_N)]& \\geq & 3f_1\\sigma_2(\\sigma_1-\\sigma_2)(\\sigma_2+2\\sigma_1),\\nonumber \\\\\n(f_1\\sigma_1+f_2\\sigma_2-\\sigma_N)(3\\sigma_2+(\\sigma_1-\\sigma_2)\\gamma(k_D))& \\geq & f_1f_2(\\sigma_1-\\sigma_2)^2. \\nonumber \\\\\n\\eeqa{4.29}\nOn the other hand when $k_1\\leq k_2$ then \\eq{4.20a} implies $\\beta(k_D)\\geq\\beta(k_*)$ and\n$\\gamma(k_N)\\geq\\gamma(k_*)$, and hence \n\\begin{eqnarray}\n(\\sigma_D-\\sigma_2)[(f_2\\sigma_1+f_1\\sigma_2+2\\sigma_2)(\\sigma_2+2\\sigma_1)\n+(\\sigma_1-\\sigma_2)^2\\beta(k_D)]& \\geq & 3f_1\\sigma_2(\\sigma_1-\\sigma_2)(\\sigma_2+2\\sigma_1),\\nonumber \\\\\n(f_1\\sigma_1+f_2\\sigma_2-\\sigma_N)(3\\sigma_2+(\\sigma_1-\\sigma_2)\\gamma(k_N))&\\geq& f_1f_2(\\sigma_1-\\sigma_2)^2. \\nonumber \\\\\n\\eeqa{4.30}\nSince $\\beta(k_N)$ and $\\beta(k_D)$ depend linearly on the volume fractions $f_1$ and $f_2=1-f_1$, \nthe first inequalities\nin \\eq{4.29} and \\eq{4.30} also depend linearly on the volume fraction and easily yield\nbounds on the volume fraction. On the other hand, finding bounds on the volume fraction\nfrom the second inequalities in \\eq{4.29} and \\eq{4.30}, involves solving a cubic equation in $f_1$. So instead\nof analytically computing the roots of this cubic it is probably better to numerically\nsearch for the range of values of $f_1$ where the second inequalities in \\eq{4.29} and \\eq{4.30}\nare satisfied. 
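In practice the first inequality in \\eq{4.29} or \\eq{4.30}, being linear in $f_1$, can be collected into the schematic form\n$$ c_0+c_1f_1\\geq 0, $$\nwhere the coefficients $c_0$ and $c_1$ (introduced here only for bookkeeping) are computable from the measured quantities $\\sigma_D$, $\\sigma_N$, $k_D$, $k_N$ and the moduli $\\sigma_1,\\sigma_2,k_1,k_2$, so that it yields $f_1\\leq -c_0\/c_1$ when $c_1<0$, or $f_1\\geq -c_0\/c_1$ when $c_1>0$.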
\n\n\n\n\n\n\n\\section{Bounds for elasticity}\n\\setcounter{equation}{0}\n\nLet us consider solutions to the linear elasticity equations \n\\begin{equation} \\bfm\\tau({\\bf x})={\\bfm{\\cal C}}({\\bf x})\\bfm\\epsilon({\\bf x}),\\quad\\nabla \\cdot\\bfm\\tau=0,\\quad\\bfm\\epsilon=(\\nabla{\\bf u}+(\\nabla{\\bf u})^T)\/2,\n\\eeq{5.1}\nwithin $\\Omega$, where ${\\bf u}({\\bf x})$, $\\bfm\\epsilon({\\bf x})$ and $\\bfm\\tau({\\bf x})$ are the displacement\nfield, strain field, and stress field, and ${\\bfm{\\cal C}}({\\bf x})$ is the fourth-order\nelasticity tensor field\n\\begin{equation} {\\bfm{\\cal C}}({\\bf x})=\\chi({\\bf x}){\\bfm{\\cal C}}^1+(1-\\chi({\\bf x})){\\bfm{\\cal C}}^2,\n\\eeq{5.2}\nin which ${\\bfm{\\cal C}}^1$ and ${\\bfm{\\cal C}}^2$ are the elasticity tensors of the phases, assumed\nto be isotropic with elements\n\\begin{equation} {\\cal C}_{ijk\\ell}^h\n=\\mu_h(\\delta_{ik}\\delta_{j\\ell}+\\delta_{i\\ell}\\delta_{jk})+(\\kappa_h-2\\mu_h\/d)\\delta_{ij}\\delta_{k\\ell},\\quad h=1,2,\n\\eeq{5.3}\nin which $d=2$ or 3 is the dimensionality, and \n$\\mu_1,\\mu_2$ and $\\kappa_1,\\kappa_2$ are the shear and bulk moduli of the\ntwo phases. From boundary information on the displacement ${\\bf u}_0({\\bf x})={\\bf u}({\\bf x})$\nand traction ${\\bf f}({\\bf x})=\\bfm\\tau({\\bf x})\\cdot{\\bf n}$\nwe can immediately determine, using integration by parts, \nvolume averages such as\n\\begin{eqnarray} \\langle \\bfm\\epsilon:\\bfm\\tau\\rangle &= & \\frac{1}{|\\Omega|}\\int_{\\partial\\Omega}{\\bf u}\\cdot{\\bf f}, \\nonumber \\\\\n \\langle \\bfm\\epsilon\\rangle & = & \\frac{1}{|\\Omega|}\\int_{\\partial\\Omega}({\\bf n}{\\bf u}^{T}+{\\bf u}{\\bf n}^{T})\/2, \\nonumber \\\\\n \\langle \\bfm\\tau\\rangle & = & \\frac{1}{|\\Omega|}\\int_{\\partial\\Omega}{\\bf x}{\\bf f}^T,\n\\eeqa{5.3a}\nin which $\":\"$ denotes a contraction of two indices.\n\nThere are two natural sets of boundary conditions. \nFor any symmetric matrix $\\bfm\\epsilon_0$ we could\nprescribe the special Dirichlet boundary conditions \n\\begin{equation} {\\bf u}({\\bf x})=\\bfm\\epsilon_0{\\bf x},\\quad {\\rm for}~{\\bf x}\\in\\partial\\Omega, \n\\eeq{5.4}\nand measure $\\bfm\\tau_0=\\langle\\bfm\\tau\\rangle$. Here, according to \\eq{5.3a}, $\\bfm\\epsilon_0$ equals $\\langle\\bfm\\epsilon\\rangle$. Since\n$\\bfm\\tau_0$ is linearly related to $\\bfm\\epsilon_0$ we can write\n\\begin{equation} \\bfm\\tau_0={\\bfm{\\cal C}}^D\\bfm\\epsilon_0,\n\\eeq{5.5} \nwhich defines the elasticity tensor ${\\bfm{\\cal C}}^D$ ($D$ for Dirichlet). Alternatively\nfor any symmetric matrix $\\bfm\\tau_0$ we could\nprescribe the special Neumann boundary conditions \n\\begin{equation} \\bfm\\tau({\\bf x})\\cdot{\\bf n}=\\bfm\\tau_0\\cdot{\\bf n},\\quad {\\rm for}~{\\bf x}\\in\\partial\\Omega, \n\\eeq{5.6}\nand measure $\\bfm\\epsilon_0=\\langle\\bfm\\epsilon\\rangle$. Here, according to \\eq{5.3a}, $\\bfm\\tau_0$ equals $\\langle\\bfm\\tau\\rangle$. Since\n$\\bfm\\epsilon_0$ is linearly related to $\\bfm\\tau_0$ we can write\n\\begin{equation} \\bfm\\epsilon_0=({\\bfm{\\cal C}}^N)^{-1}\\bfm\\tau_0, \n\\eeq{5.7} \nwhich defines the elasticity tensor ${\\bfm{\\cal C}}^N$ ($N$ for Neumann). 
It is easy to check\nthat ${\\bfm{\\cal C}}^D$ and ${\\bfm{\\cal C}}^N$ satisfy all the usual symmetries of elasticity tensors.\n\nDirectly analogous to \\eq{2.13} we have the bounds \n\\begin{equation} \\langle{\\bfm{\\cal C}}^{-1}\\rangle^{-1}\\leq{\\bfm{\\cal C}}^D\\leq \\langle{\\bfm{\\cal C}}\\rangle,\\quad\\quad\n \\langle{\\bfm{\\cal C}}^{-1}\\rangle^{-1}\\leq{\\bfm{\\cal C}}^N\\leq \\langle{\\bfm{\\cal C}}\\rangle\n\\eeq{5.8}\nof \\cite{Nemat-Nasser:1993:MOP}, and directly analogous to \\eq{2.14} \nfor any boundary condition (not just the\nspecial boundary conditions \\eq{5.4} and \\eq{5.6}) we have the bounds\n\\begin{equation} \\langle\\bfm\\epsilon:\\bfm\\tau\\rangle \\geq \\bfm\\epsilon_0:{\\bfm{\\cal C}}^N\\bfm\\epsilon_0,\\quad\n\\langle\\bfm\\epsilon:\\bfm\\tau\\rangle \\geq \\bfm\\tau_0:({\\bfm{\\cal C}}^D)^{-1}\\bfm\\tau_0,\n\\eeq{5.9}\nwhere $\\bfm\\epsilon_0=\\langle\\bfm\\epsilon\\rangle$ and $\\bfm\\tau_0=\\langle\\bfm\\tau\\rangle$,\ndue to Willis in a 1989 private communication to Nemat-Nasser and Hori, and presented by \\citeAPY{Nemat-Nasser:1993:MOP}.\n\nAlso directly analogous to \\eq{3.7b} and \\eq{3.16} we have the bounds\n\\begin{equation} {\\bfm{\\cal C}}^N\\leq{\\bfm{\\cal C}}^*\\leq {\\bfm{\\cal C}}^D,\n\\eeq{5.10}\nwhere ${\\bfm{\\cal C}}^*$ is the effective elasticity tensor of any assemblage of\nrescaled copies of $\\Omega$ packed to fill all space. These are \nessentially the bounds of \\citeAPY{Huet:1990:AVC} applied to this assemblage. Thus ``lower\nbounds'' on ${\\bfm{\\cal C}}^*$ directly give ``lower bounds'' on ${\\bfm{\\cal C}}^D$ and\n``upper bounds'' on ${\\bfm{\\cal C}}^*$ directly give ``upper bounds'' on ${\\bfm{\\cal C}}^N$.\nIn particular, in two dimensions lower and upper bounds on $\\bfm\\epsilon_0\\cdot{\\bfm{\\cal C}}^*\\bfm\\epsilon_0$ \nhave been obtained by \\citeAPY{Gibiansky:1984:DCPa} (for the equivalent plate equation) and also by \\citeAPY{Allaire:1993:EOB}.\nAssuming that the phases have been labeled so that $\\mu_1\\geq\\mu_2$ ($\\kappa_1-\\kappa_2$ could be either positive\nor negative),\nand letting $\\epsilon_1$ and $\\epsilon_2$ denote the two eigenvalues of $\\bfm\\epsilon_0$,\nthe bounds imply\n\\begin{eqnarray}\n&~& \\bfm\\epsilon_0\\cdot{\\bfm{\\cal C}}^D\\bfm\\epsilon_0 \\geq\n(\\epsilon_1+\\epsilon_2)^2\/(f_1\/\\kappa_1+f_2\/\\kappa_2)\n+(\\epsilon_1-\\epsilon_2)^2\/(f_1\/\\mu_1+f_2\/\\mu_2), \\nonumber \\\\\n&~& \\quad \\quad {\\rm if~~} |\\kappa_1-\\kappa_2|(f_1\\mu_2+f_2\\mu_1)|\\epsilon_1+\\epsilon_2|\\leq\n |\\mu_1-\\mu_2|(f_1\\kappa_2+f_2\\kappa_1)|\\epsilon_1-\\epsilon_2|; \\nonumber \\\\\n&~& \\nonumber \\\\ &~&\n\\bfm\\epsilon_0\\cdot{\\bfm{\\cal C}}^D\\bfm\\epsilon_0 \\geq (\\epsilon_1+\\epsilon_2)^2(f_1\\kappa_1+f_2\\kappa_2)\n+(\\epsilon_1-\\epsilon_2)^2(f_1\\mu_1+f_2\\mu_2) \\nonumber \\\\\n&~&~~~~~~~~~~~~~~~~~~~-f_1f_2\\frac{[|\\kappa_1-\\kappa_2||\\epsilon_1+\\epsilon_2|+|\\mu_1-\\mu_2||\\epsilon_1-\\epsilon_2|]^2}\n{f_1(\\mu_2+\\kappa_2)+f_2(\\mu_1+\\kappa_1)}, \\nonumber \\\\\n&~&\\quad \\quad {\\rm if~~} (\\mu_2+f_1\\kappa_2+f_2\\kappa_1)|\\epsilon_1-\\epsilon_2| \\geq\nf_2|\\kappa_1-\\kappa_2||\\epsilon_1+\\epsilon_2| \\nonumber \\\\\n&~& \\quad \\quad{\\rm and~~} |\\kappa_1-\\kappa_2|(f_1\\mu_2+f_2\\mu_1)|\\epsilon_1+\\epsilon_2|\\geq\n |\\mu_1-\\mu_2|(f_1\\kappa_2+f_2\\kappa_1)|\\epsilon_1-\\epsilon_2|; \\nonumber \\\\\n&~& \\nonumber \\\\ &~&\n \\bfm\\epsilon_0\\cdot{\\bfm{\\cal C}}^D\\bfm\\epsilon_0 
\\geq\n\\mu_2(\\epsilon_1-\\epsilon_2)^2+\n\\frac{\\kappa_1\\kappa_2+\\mu_2(f_1\\kappa_1+f_2\\kappa_2)}{\\mu_2+f_1\\kappa_2+f_2\\kappa_1}(\\epsilon_1+\\epsilon_2)^2, \\nonumber \\\\\n&~ & \\quad \\quad{\\rm if~~} (\\mu_2+f_1\\kappa_2+f_2\\kappa_1)|\\epsilon_1-\\epsilon_2| \\leq\nf_2|\\kappa_1-\\kappa_2||\\epsilon_1+\\epsilon_2|;\n\\eeqa{5.12}\nand\n\\begin{eqnarray}\n&~& \\bfm\\epsilon_0\\cdot{\\bfm{\\cal C}}^N\\bfm\\epsilon_0 \\leq (\\epsilon_1+\\epsilon_2)^2(f_1\\kappa_1+f_2\\kappa_2)\n+(\\epsilon_1-\\epsilon_2)^2(f_1\\mu_1+f_2\\mu_2) \\nonumber \\\\\n&~&~~~~~~~~~~~~~~~~~~~-f_1f_2\\frac{[|\\kappa_1-\\kappa_2||\\epsilon_1+\\epsilon_2|-|\\mu_1-\\mu_2||\\epsilon_1-\\epsilon_2|]^2}\n{f_1(\\mu_2+\\kappa_2)+f_2(\\mu_1+\\kappa_1)}, \\nonumber \\\\\n&~&\\quad \\quad {\\rm if~~~} f_1|\\kappa_1-\\kappa_2||\\epsilon_1+\\epsilon_2|\\leq (\\mu_1+f_1\\kappa_2+f_2\\kappa_1)|\\epsilon_1-\\epsilon_2| \\nonumber \\\\\n&~&\\quad \\quad {\\rm and~~} f_+|\\mu_1-\\mu_2||\\epsilon_1-\\epsilon_2|\\leq (\\kappa_++f_1\\mu_2+f_2\\mu_1)|\\epsilon_1+\\epsilon_2|; \\nonumber \\\\\n&~& \\nonumber \\\\ &~&\n\\bfm\\epsilon_0\\cdot{\\bfm{\\cal C}}^N\\bfm\\epsilon_0 \\leq \\mu_1(\\epsilon_1-\\epsilon_2)^2+\\frac{\\kappa_1\\kappa_2+\\mu_1(f_1\\kappa_1+f_2\\kappa_2)}{\\mu_1+f_1\\kappa_2+f_2\\kappa_1}(\\epsilon_1+\\epsilon_2)^2, \\nonumber \\\\\n&~&\\quad \\quad {\\rm if~~~} f_1|\\kappa_1-\\kappa_2||\\epsilon_1+\\epsilon_2|\\geq (\\mu_1+f_1\\kappa_2+f_2\\kappa_1)|\\epsilon_1-\\epsilon_2|; \\nonumber \\\\\n&~& \\nonumber \\\\ &~&\n\\bfm\\epsilon_0\\cdot{\\bfm{\\cal C}}^N\\bfm\\epsilon_0 \\leq \\kappa_+(\\epsilon_1+\\epsilon_2)^2+\\frac{\\mu_1\\mu_2+\\kappa_+(f_1\\mu_1+f_2\\mu_2)}{\\kappa_++f_1\\mu_2+f_2\\mu_1}(\\epsilon_1-\\epsilon_2)^2, \\nonumber \\\\\n&~&\\quad \\quad {\\rm if~~~} f_+|\\mu_1-\\mu_2||\\epsilon_1-\\epsilon_2|\\geq (\\kappa_++f_1\\mu_2+f_2\\mu_1)|\\epsilon_1+\\epsilon_2|,\n\\eeqa{5.12a}\nwhere $\\kappa_+$ is the maximum of $\\kappa_1$ and $\\kappa_2$ and $f_+$ is the volume fraction of the material corresponding to $\\kappa_+$.\n\n\nThe corresponding three-dimensional bounds follow directly from\n\\eq{5.10} and the bounds of \\citeAPY{Allaire:1993:OBE}, but are not so explicit. 
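Before turning to the three-dimensional case, it is worth recording a simple consequence of the two-dimensional bounds. For a purely dilational applied strain, $\\epsilon_1=\\epsilon_2=\\epsilon$, we have $|\\epsilon_1-\\epsilon_2|=0$, the condition in the third case of \\eq{5.12} and in the second case of \\eq{5.12a} is automatically met (when $\\kappa_1\\neq\\kappa_2$), and the bounds reduce to\n$$ \\frac{\\kappa_1\\kappa_2+\\mu_2(f_1\\kappa_1+f_2\\kappa_2)}{\\mu_2+f_1\\kappa_2+f_2\\kappa_1}(2\\epsilon)^2\\leq\\bfm\\epsilon_0\\cdot{\\bfm{\\cal C}}^D\\bfm\\epsilon_0,\\quad\\quad\n\\bfm\\epsilon_0\\cdot{\\bfm{\\cal C}}^N\\bfm\\epsilon_0\\leq\\frac{\\kappa_1\\kappa_2+\\mu_1(f_1\\kappa_1+f_2\\kappa_2)}{\\mu_1+f_1\\kappa_2+f_2\\kappa_1}(2\\epsilon)^2, $$\nthe two coefficients being recognizable as two-dimensional Hashin--Shtrikman-type bounds on the effective bulk modulus. Each coefficient is a monotone function of $f_1$, so either inequality is easily inverted to bound the volume fraction from a single dilational experiment.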
Assuming\nthat the Lam\\'e moduli\n\\begin{equation} \\lambda_1=\\kappa_1-2\\mu_1\/3~~{\\rm and}~~\\lambda_2=\\kappa_2-2\\mu_2\/3 \n\\eeq{5.13}\nof both phases are positive, and that the bulk and shear moduli of the two phases are well-ordered with\n\\begin{equation} \\kappa_1>\\kappa_2>0 {\\rm ~~and~~}\\mu_1>\\mu_2>0, \n\\eeq{5.13a}\nthese three-dimensional bounds are\n\\begin{eqnarray}\n\\bfm\\epsilon_0:{\\bfm{\\cal C}}^D\\bfm\\epsilon_0 & \\geq & \\bfm\\epsilon_0:{\\bfm{\\cal C}}_2\\bfm\\epsilon_0\n+f_1\\max_{\\bfm\\eta}[2\\bfm\\epsilon_0:\\bfm\\eta-\\bfm\\eta:({\\bfm{\\cal C}}_1-{\\bfm{\\cal C}}_2)^{-1}\\bfm\\eta-f_2g(\\bfm\\eta)], \\nonumber \\\\\n\\bfm\\epsilon_0:{\\bfm{\\cal C}}^N\\bfm\\epsilon_0 & \\leq & \\bfm\\epsilon_0:{\\bfm{\\cal C}}_1\\bfm\\epsilon_0\n+f_2\\min_{\\bfm\\eta}[2\\bfm\\epsilon_0:\\bfm\\eta+\\bfm\\eta:({\\bfm{\\cal C}}_1-{\\bfm{\\cal C}}_2)^{-1}\\bfm\\eta-f_1h(\\bfm\\eta)],\n\\eeqa{5.14}\nwhere $g(\\bfm\\eta)$ and $h(\\bfm\\eta)$ are functions of the eigenvalues $\\eta_1, \\eta_2$, and $\\eta_3$ \nof the symmetric matrix $\\bfm\\eta$.\nAssuming that\nthese are labeled with\n\\begin{equation} \\eta_1\\leq\\eta_2\\leq\\eta_3, \n\\eeq{5.15}\nwe have\n\\begin{eqnarray}\ng(\\bfm\\eta)&=&\n\\frac{(\\eta_1-\\eta_3)^2}{4\\mu_2}+\\frac{(\\eta_1+\\eta_3)^2}{4(\\lambda_2+\\mu_2)} ~~{\\rm if}~~\n\\eta_3\\geq\\frac{\\lambda_2+2\\mu_2}{2(\\lambda_2+\\mu_2)}(\\eta_1+\\eta_3)\\geq\\eta_1, \\nonumber \\\\\ng(\\bfm\\eta)&=&\n\\frac{\\eta_1^2}{\\lambda_2+2\\mu_2} ~~{\\rm if}~~\n\\eta_1>\\frac{\\lambda_2+2\\mu_2}{2(\\lambda_2+\\mu_2)}(\\eta_1+\\eta_3), \\nonumber \\\\\ng(\\bfm\\eta)&=&\n\\frac{\\eta_3^2}{\\lambda_2+2\\mu_2} ~~{\\rm if}~~\n\\eta_3<\\frac{\\lambda_2+2\\mu_2}{2(\\lambda_2+\\mu_2)}(\\eta_1+\\eta_3),\n\\eeqa{5.16}\nand\n\\begin{equation} h(\\bfm\\eta)=\\frac{1}{\\lambda_1+2\\mu_1}\\min\\{\\eta_1^2,\\eta_2^2,\\eta_3^2\\}.\n\\eeq{5.17}\n\nThe bounds \\eq{5.12}, \\eq{5.12a} and \\eq{5.14} can be used in an inverse way to bound\nthe volume fraction $f_1=1-f_2$ from a single experiment in which, under the special Dirichlet conditions,\n$\\bfm\\epsilon_0$ is prescribed and $\\bfm\\tau_0$ ($={\\bfm{\\cal C}}^D\\bfm\\epsilon_0$) is measured, or in which, under the special Neumann conditions,\n$\\bfm\\tau_0$ is prescribed and $\\bfm\\epsilon_0$ ($=({\\bfm{\\cal C}}^N)^{-1}\\bfm\\tau_0$) is measured. \\citeAPY{Allaire:1993:OBE} also derive\nbounds on the complementary energy and these imply\n\\begin{equation} \\bfm\\tau_0:({\\bfm{\\cal C}}^N)^{-1}\\bfm\\tau_0 \\geq \\bfm\\tau_0:{\\bfm{\\cal C}}_1^{-1}\\bfm\\tau_0\n+f_2\\max_{\\bfm\\zeta}[2\\bfm\\tau_0:\\bfm\\zeta-\\bfm\\zeta:({\\bfm{\\cal C}}_2^{-1}-{\\bfm{\\cal C}}_1^{-1})^{-1}\\bfm\\zeta-f_1\\bfm\\zeta:{\\bfm{\\cal C}}_1\\bfm\\zeta+f_1h({\\bfm{\\cal C}}_1\\bfm\\zeta)],\n\\eeq{5.17aa}\nand\n\\begin{equation}\n\\bfm\\tau_0:({\\bfm{\\cal C}}^D)^{-1}\\bfm\\tau_0 \\leq \\bfm\\tau_0:{\\bfm{\\cal C}}_2^{-1}\\bfm\\tau_0\n+f_1\\min_{\\bfm\\zeta}[2\\bfm\\tau_0:\\bfm\\zeta+\\bfm\\zeta:({\\bfm{\\cal C}}_2^{-1}-{\\bfm{\\cal C}}_1^{-1})^{-1}\\bfm\\zeta-f_2\\bfm\\zeta:{\\bfm{\\cal C}}_2\\bfm\\zeta+f_2g({\\bfm{\\cal C}}_2\\bfm\\zeta)].\n\\eeq{5.17ab}\n\nThe bound in \\eq{5.17aa} is particularly useful when $\\bfm\\tau_0=-p{\\bf I}$, corresponding to immersing the body $\\Omega$ in a fluid\nwith pressure $p$. 
Then from measurements of the resulting volume change of the body one can determine $\\bfm\\tau_0:({\\bfm{\\cal C}}^N)^{-1}\\bfm\\tau_0=-p\\mathop{\\rm Tr}\\nolimits\\bfm\\epsilon_0$.\nLet us assume $\\lambda_1>0$ and set $\\bfm\\zeta=\\alpha{\\bf I}+{\\bf A}$, with ${\\bf A}$ being a trace free matrix with eigenvalues $a_1$, $a_2$ and $a_3$. Then \nwe have $\\bfm\\eta={\\bfm{\\cal C}}_1\\bfm\\zeta=2\\mu_1(k{\\bf I}+{\\bf A})$ where $k=\\alpha[1+3\\lambda_1\/(2\\mu_1)]$. Substitution gives\n\\begin{eqnarray} &~&[\\bfm\\zeta:{\\bfm{\\cal C}}_1\\bfm\\zeta-h({\\bfm{\\cal C}}_1\\bfm\\zeta)]-\\alpha^2[{\\bf I}:{\\bfm{\\cal C}}_1{\\bf I}-h({\\bfm{\\cal C}}_1{\\bf I})] \\nonumber \\\\\n&~&~=2\\mu_1\\left[a_1^2+a_2^2+a_3^2-\\frac{2\\mu_1}{\\lambda_1+2\\mu_1}\\min\\{(k+a_1)^2-k^2,(k+a_2)^2-k^2,(k+a_3)^2-k^2\\}\\right] \\nonumber \\\\\n&~&~\\geq 2\\mu_1[a_1^2+a_2^2+a_3^2-\\min\\{2a_1k+a_1^2,2a_2k+a_2^2,2a_3k+a_3^2\\}],\n\\eeqa{5.17ac}\nwhich is surely positive since $\\min\\{2a_1k+a_1^2,2a_2k+a_2^2,2a_3k+a_3^2\\}\\leq a_j^2$ where $j$ is such that $ka_j$ is non-positive. (Note that\n$a_1$, $a_2$ and $a_3$ cannot all have the same sign since they sum to zero). Consequently when $\\bfm\\tau_0=-p{\\bf I}$\nthe maximum over $\\bfm\\zeta$ in \\eq{5.17aa} is achieved when ${\\bf A}=0$ and taking the maximum over $\\alpha$ gives\n\\begin{equation} -p\\mathop{\\rm Tr}\\nolimits\\bfm\\epsilon_0 \\geq p^2\\left[\\frac{1}{\\kappa_1}+\\frac{f_2}{\\frac{\\kappa_1\\kappa_2}{\\kappa_1-\\kappa_2}+\\frac{4f_1\\mu_1\\kappa_1}{3\\kappa_1+4\\mu_1}}\\right],\n\\eeq{5.17ad}\nor equivalently\n\\begin{equation} -p\/(\\mathop{\\rm Tr}\\nolimits\\bfm\\epsilon_0)\\leq \\kappa_{HSH}^+\\equiv\\kappa_1-\\frac{f_2}{1\/(\\kappa_1-\\kappa_2)-f_1\/(\\kappa_1+4\\mu_1\/3)}, \\eeq{5.17ae}\nwhere $\\kappa_{HSH}^+$ is the upper bulk modulus bound of \\citeAPY{Hashin:1963:VAT} and\n\\citeAPY{Hill:1963:EPR}. The inequality \\eq{5.17ad} can be \nrewritten as\n\\begin{equation} \\frac{f_2}{-(\\mathop{\\rm Tr}\\nolimits\\bfm\\epsilon_0\/p)-1\/\\kappa_1}\\leq \\frac{\\kappa_1\\kappa_2}{\\kappa_1-\\kappa_2}+\\frac{4f_1\\mu_1\\kappa_1}{3\\kappa_1+4\\mu_1},\n\\eeq{5.17af}\nwhere we have used the fact that $-(\\mathop{\\rm Tr}\\nolimits\\bfm\\epsilon_0\/p)-1\/\\kappa_1$ is positive (since $\\bfm\\tau_0:({\\bfm{\\cal C}}^N)^{-1}\\bfm\\tau_0\\geq \\bfm\\tau_0:({\\bfm{\\cal C}}_1)^{-1}\\bfm\\tau_0$\nby \\eq{5.8}). This then yields the volume fraction bound\n\\begin{equation} f_2\\leq \\frac{\\frac{\\kappa_1\\kappa_2}{\\kappa_1-\\kappa_2}+\\frac{4\\mu_1\\kappa_1}{3\\kappa_1+4\\mu_1}}{\\frac{1}{-(\\mathop{\\rm Tr}\\nolimits\\bfm\\epsilon_0\/p)-1\/\\kappa_1}+\\frac{4\\mu_1\\kappa_1}{3\\kappa_1+4\\mu_1}},\n\\eeq{5.18ag}\nwhich we expect to be closest to the actual volume fraction when phase 2 (the softer phase) \nis the inclusion phase. Thus the\nbound may be particularly effective for estimating the volume of cavities in a body. Note that\nif some granules of phase 1 lie within these cavities, then such granules will not\ncontribute to this volume fraction estimate, but will contribute to the overall weight.\nIf the weight of the body has been measured (and the density of phase 1 is known)\nthis provides a way of estimating the volume of granules of phase 1 which lie within the\ncavities. 
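In the extreme case where phase 2 consists of empty cavities one can formally let $\\kappa_2\\to 0$ in \\eq{5.18ag} (the term $\\kappa_1\\kappa_2\/(\\kappa_1-\\kappa_2)$ then drops out), giving the simpler illustrative form\n$$ f_2\\leq \\frac{\\frac{4\\mu_1\\kappa_1}{3\\kappa_1+4\\mu_1}}{\\frac{1}{-(\\mathop{\\rm Tr}\\nolimits\\bfm\\epsilon_0\/p)-1\/\\kappa_1}+\\frac{4\\mu_1\\kappa_1}{3\\kappa_1+4\\mu_1}}, $$\nin which only the moduli of the solid phase and the measured volume change enter.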
\n\nWhen multiple experiments have been\ndone, and the full tensor ${\\bfm{\\cal C}}^D$ or ${\\bfm{\\cal C}}^N$ has been determined, then the ``trace bounds'' of\nZhikov(\\citeyearNP{Zhikov:1988:ETA}, \\citeyearNP{Zhikov:1991:EHM}) and \\citeAPY{Milton:1988:VBE} can be used. \n(These generalize the well known Hashin-Shtrikman (\\citeyearNP{Hashin:1963:VAT})\nbounds to anisotropic elastic composites.) Define the two traces\n\\begin{equation} \\mathop{\\rm Tr}\\nolimits_h{\\bfm{\\cal A}}=A_{iijj}\/d,\\quad\\quad Tr_s{\\bfm{\\cal A}}=A_{ijij}-(A_{iijj}\/d),\n\\eeq{5.17a}\nfor any fourth order tensor ${\\bfm{\\cal A}}$ with elements $A_{ijk\\ell}$ in spatial dimension $d$.\nThen, assuming the moduli of the two-phases are well ordered satisfying \\eq{5.13a},\ntheir lower and upper ``bulk modulus type bounds''\nimply, through \\eq{5.10}, the universal bounds\n\\begin{eqnarray} f_1\\mathop{\\rm Tr}\\nolimits_h[({\\bfm{\\cal C}}^D-{\\bfm{\\cal C}}_2)^{-1}]\n& \\leq & \\frac{1}{d(\\kappa_1-\\kappa_2)}+\\frac{f_2}{d\\kappa_2+2(d-1)\\mu_2}, \\nonumber \\\\\nf_2\\mathop{\\rm Tr}\\nolimits_h[({\\bfm{\\cal C}}_1-{\\bfm{\\cal C}}^N)^{-1}]\n& \\leq & \\frac{1}{d(\\kappa_1- \\kappa_2)}-\\frac{f_1}{d\\kappa_1+2(d-1)\\mu_1},\n\\eeqa{5-18}\nwhile their lower and upper ``shear modulus type bounds,'' imply the universal bounds\n\\begin{eqnarray} f_1\\mathop{\\rm Tr}\\nolimits_s[({\\bfm{\\cal C}}^D-{\\bfm{\\cal C}}_2)^{-1}] & \\leq &\n\\frac{(d-1)(d+2)}{4(\\mu_1-\\mu_2)}+\\frac{d(d-1)(\\kappa_2+2\\mu_2)f_2}{2\\mu_2(d\\kappa_2+2(d-1)\\mu_2)},\\nonumber \\\\\nf_2\\mathop{\\rm Tr}\\nolimits_s[({\\bfm{\\cal C}}_1-{\\bfm{\\cal C}}^N)^{-1}] & \\leq &\n\\frac{(d-1)(d+2)}{4(\\mu_1-\\mu_2)}-\\frac{d(d-1)(\\kappa_1+2\\mu_1)f_1}{2\\mu_1(d\\kappa_1+2(d-1)\\mu_1)}.\n\\eeqa{5.19}\nSince these inequalities depend linearly on $f_1=1-f_2$ they can easily be inverted to obtain\nbounds on $f_1$ given ${\\bfm{\\cal C}}^D$ or ${\\bfm{\\cal C}}^N$.\n\nAs noted by \\citeAPY{Milton:1988:VBE} the lower and upper ``bulk modulus type bounds'' are tighter than those obtained by \\citeAPY{Kantor:1984:IRB} and\n\\citeAPY{Francfort:1986:HOB}, which imply\n\\begin{eqnarray} \\mathop{\\rm Tr}\\nolimits_h{\\bfm{\\cal C}}^N&\\leq& d\\kappa_1-\\frac{f_2}{\\frac{1}{d(\\kappa_1-\\kappa_2)}-\\frac{f_1}{d\\kappa_1+2(d-1)\\mu_1}}, \\nonumber \\\\\n1\/\\mathop{\\rm Tr}\\nolimits_h[({\\bfm{\\cal C}}^D)^{-1}]&\\geq& d\\kappa_2+\\frac{f_1}{\\frac{1}{d(\\kappa_1-\\kappa_2)}+\\frac{f_2}{d\\kappa_2+2(d-1)\\mu_2}}.\n\\eeqa{5.20}\nFor bodies $\\Omega$ of ellipsoidal or parallelopipedic shape the universal bounds \\eq{5.20} were \nobtained by Nemat-Nasser and Hori (\\citeyearNP{Nemat-Nasser:1993:MOP}, \\citeyearNP{Nemat-Nasser:1995:UBO}):\nsee the equations (4.3.9) and (4.4.8), with $I=1$, in their 1995 paper. Their other bounds, with $I=2$,\nwhich incorporate the ``shear responses'' of the tensors ${\\bfm{\\cal C}}^N$ and ${\\bfm{\\cal C}}^D$ are improved upon by the bounds \\eq{5.19}\nas can be seen using the inequality \n\\begin{equation} \\mathop{\\rm Tr}\\nolimits_s{\\bfm{\\cal A}}^{-1}\\geq (d-1)^2(d+2)^2\/(4\\mathop{\\rm Tr}\\nolimits_s{\\bfm{\\cal A}}), \\eeq{5.21}\nwhich holds for any positive definite fourth-order tensor ${\\bfm{\\cal A}}$. \n\n\\section*{Acknowledgements}\nEunjoo Kim is deeply thanked for generously providing figures \\ref{3}, \\ref{4}, and \\ref{5}, and for doing the numerical \nsimulations which generated them. 
Additionally the author is grateful to Hyeonbae Kang and \nMichael Vogelius for stimulating\nhis interest in this problem, and for their comments on an initial draft of the manuscript. \nThe author is most thankful for support from the Mathematical Sciences Research Institute and the Simons foundation, \nthrough an Eisenbud fellowship, and from\nNational Science Foundation through grant DMS-0707978. \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Appendix: Volterra equation for the cavity amplitude}\n\\normalsize\nWe start from the Hamiltonian (1) of the main article and derive the Heisenberg operator equations (limit of zero temperature), for the cavity and spin operators, $\\dot a=i [{\\cal H},a]-\\kappa a$, $\\dot \\sigma_k^-=i [{\\cal H},\\sigma_k^-]-\\gamma \\sigma_k^-$, respectively. Here $\\kappa$ and $\\gamma$ stand for the total cavity and spin losses, respectively. We then write a set of equations for the expectation values in the frame rotating with the probe frequency $\\omega_p$, using the commonly used Holstein-Primakoff-approximation, $\\langle \\sigma_k^z \\rangle \\approx -1$, which is valid if the number of the excited spins is small compared to the ensemble size (which is the case for all experimental results reported in the main article). Denoting $A(t)\\equiv \\langle a(t)\\rangle$ and $B_k(t)\\equiv\\langle\\sigma_k^-(t)\\rangle$, we end up with the following set of first-order ODEs with respect to the cavity and spin amplitudes\n\\begin{subequations}\n\\begin{eqnarray}\n\\label{Eq_a_Volt}\n\\dot{A}(t) & = & -\\left[\\kappa-i(\\omega_c-\\omega_p)\\right]A(t) + \\sum_k\ng_k B_k(t)-\\eta(t), \\\\\n\\label{Eq_bk_Volt}\n\\dot{B}_k(t) & = & -\\left[\\gamma+i(\\omega_k-\\omega_p)\\right] B_k(t) - g_k A(t).\n\\end{eqnarray}\n\\end{subequations}\nNote, that the size of our spin ensemble is very large (typically $N\\sim 10^{12}$) and individual spins are distributed around a certain mean frequency $\\omega_s$. We can thus go to the continuum limit by introducing the continuous spectral density as $\\rho(\\omega)=\\sum_k g_k^2 \\delta(\\omega-\\omega_k)\/\\Omega^2$ (see, e.g. \\onlinecite{Diniz2011}), where $\\Omega$ is the collective coupling strength of the spin ensemble to the cavity and $\\int d\\omega\\rho(\\omega)=1$. In what follows we will replace any discrete function $F(\\omega_k)$ by its continuous counterpart, $F(\\omega)$: $F(\\omega_k) \\rightarrow \\Omega^2 \\int d\\omega \\rho(\\omega) F(\\omega)$. By integrating Eq.~(\\ref{Eq_bk_Volt}) in time, each individual spin amplitude, $B_k(t)$, can formally be expressed in terms of the cavity amplitude, $A(t)$. By plugging the resulting equation into Eq.~(\\ref{Eq_a_Volt}) and assuming that initially all spins are in the ground state, $B_k(t=0)=0$, we arrive at the following integro-differential Volterra equation for the cavity amplitude ($\\omega_c=\\omega_s$)\n\\begin{eqnarray}\n\\dot A(t)=-\\kappa A(t)-\\Omega^2 \\int d\\omega \\rho(\\omega) \\int\\limits_{0}^t d\\tau\ne^{-i(\\omega-\\omega_c-i\\gamma)(t-\\tau)}A(\\tau)-\\eta(t), \n\\label{Eq_rigor}\n\\end{eqnarray}\nNote that in the $\\omega_p$-rotating frame the rapid oscillations presented in the original Hamiltonian (1) are absent, so that the time variation of $\\eta(t)$ in Eq.~(\\ref{Eq_rigor}) is much slower as compared to $1\/\\omega_p$. \n\nFor a proper description of the resulting dynamics, it is essential to capture the form of the spectral density $\\rho(\\omega)$ realized in the experiment as accurately as possible. 
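For completeness, the formal solution of Eq.~(\\ref{Eq_bk_Volt}) employed in this step reads\n$$ B_k(t)=-g_k\\int\\limits_{0}^{t} d\\tau\\, e^{-\\left[\\gamma+i(\\omega_k-\\omega_p)\\right](t-\\tau)}A(\\tau), $$\nwhich, after insertion into Eq.~(\\ref{Eq_a_Volt}) and passage to the continuum limit, produces the memory kernel in Eq.~(\\ref{Eq_rigor}).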
Following \\onlinecite{Sandner2012}, we take the $q$-Gaussian function for that purpose \n\\begin{eqnarray}\n\\label{rho_w_Eq}\n\\rho(\\omega)=C\\cdot\\left[1-(1-q)\\dfrac{(\\omega-\\omega_s)^2}{\\Delta^2}\\right]^{\n\\dfrac{1}{1-q}},\n\\end{eqnarray}\ncharacterized by the dimensionless shape parameter $1 \\xi_i$. It's easy to see that:\n\\begin{equation*}\n\\begin{aligned}\n&\\left\\|g_m\\right\\|^2 > \\xi_i^2 \\left\\|\\tilde{g}\\right\\|^2 \\\\\n&\\Longleftrightarrow ~~ \\sum_{j=1}^{d}(\\mu_j-z\\sigma_j)^2 > \\xi_i^2 \\sum_{j=1}^{d}(\\mu_j)^2 \\\\\n&\\Longleftrightarrow ~~ \\sum_{j=1}^{d}(\\mu_j^2-2z\\mu_j\\sigma_j+z^2\\sigma_j^2) > \\xi_i^2 \\sum_{j=1}^{d}(\\mu_j)^2 \\\\\n&\\Longleftrightarrow ~~ \\sum_{j=1}^{d}(z^2\\sigma_j^2) > \\xi_i^2 \\sum_{j=1}^{d}(2z\\mu_j\\sigma_j)\\\\\n&\\Longleftrightarrow ~~ z > \\xi_i^2 \\frac{\\sum_{j=1}^{d}(2\\mu_j\\sigma_j)}{\\sum_{j=1}^{d}(\\sigma_j^2)}\n\\end{aligned}\n\\end{equation*}\nTherefore, given appropriate $z$, there exists $i$, such that $1\\le \\xi_i < \\xi_m$. By using these gradient norm relations, we can get:\n\n\\begin{equation*}\n\t\\begin{aligned}\n\t&\\cos(g_m,\\tilde{g}) - \\cos(g^{(i)},\\tilde{g})\\\\\n\t&= \\frac{(\\xi_m^2+1)\\left\\|\\tilde{g}\\right\\|^2-\\left\\|g_m-\\tilde{g}\\right\\|^2}{2\\xi_m\\left\\|\\tilde{g}\\right\\|^2} - \\frac{(\\xi_i^2+1)\\left\\|\\tilde{g}\\right\\|^2-\\left\\|g^{(i)}-\\tilde{g}\\right\\|^2}{2\\xi_i\\left\\|\\tilde{g}\\right\\|^2}\\\\\n\t&> \\left(\\frac{(\\xi_m^2+1)}{2\\xi_m}-\\frac{(\\xi_i^2+1)}{2\\xi_i}\\right)+\\frac{\\left\\|g_m-\\tilde{g}\\right\\|^2}{2\\left\\|\\tilde{g}\\right\\|^2}\\left(\\frac{1}{\\xi_i}-\\frac{1}{\\xi_m}\\right)\\\\\n\t&=\\frac{(\\xi_m-\\xi_i)(\\xi_m \\xi_i -1)}{2\\xi_m \\xi_i}+\\frac{(\\xi_m-\\xi_i)\\left\\|g_m-\\tilde{g}\\right\\|^2}{2\\xi_m \\xi_i\\left\\|\\tilde{g}\\right\\|^2}\\\\\n\t&>0\n\t\\end{aligned}\n\\end{equation*}\nHence, it's possible for the malicious gradient to have bigger cosine-similarity with true averaged gradient than that of some honest gradients.\n\n\n\n\n\\subsection{Proof of Lemma 2}\n\\label{appendix:lemm2} \nGiven a arbitrary subset of clients $\\mathcal{G}$ with $|\\mathcal{G}|=(1-\\beta)n$ and $\\beta<0.5$.\nLet $\\mathbf{A}=\\sum\\limits_{i\\notin \\mathcal{G}}\\left(g_{t}^{(i)}-\\nabla F(\\mathbf{x}_{t})\\right)$, $\\mathbf{B}=\\sum\\limits_{j\\in \\mathcal{G}}\\left(g_{t}^{(j)}-\\nabla F(\\mathbf{x}_{t})\\right)$, then $\\mathbf{A}$ and $\\mathbf{B}$ are independent. We have $\\mathbb{E}[\\mathbf{A}+\\mathbf{B}]=\\mathbf{0}$.\nRecall that $\\sigma^2$ is the bounded local variance for local gradient and $\\kappa^2$ is bounded deviation between local and global gradient. 
Applying the Jensen inequality, we have\n\\begin{equation*}\n\\begin{aligned}\n\\left\\|\\mathbb{E}\\left[\\mathbf{A}\\right]\\right\\|^2 &\\leq \\beta n\\sum\\limits_{i\\notin \\mathcal{G}}\\left\\|\\nabla F_i(\\mathbf{x}_{t})-\\nabla F(\\mathbf{x}_{t})\\right\\|^2 \\leq \\beta^2n^2\\kappa^2\\\\\n\\left\\|\\mathbb{E}\\left[\\mathbf{B}\\right]\\right\\|^2 &\\leq (1-\\beta)n\\sum\\limits_{i\\in \\mathcal{G}}\\left\\|\\nabla F_i(\\mathbf{x}_{t})-\\nabla F(\\mathbf{x}_{t})\\right\\|^2 \\leq (1-\\beta)^2n^2\\kappa^2\\\\\n\\end{aligned}\n\\end{equation*}\nNotice that $\\mathbb{E}[\\mathbf{A}]=-\\mathbb{E}[\\mathbf{B}]$, thus\n\\begin{equation*}\n\\left\\|\\mathbb{E}\\left[\\mathbf{A}\\right]\\right\\|^2=\\left\\|\\mathbb{E}\\left[\\mathbf{B}\\right]\\right\\|^2\\leq \\min\\{\\beta^2 n^2\\kappa^2, (1-\\beta)^2n^2\\kappa^2\\} = \\beta^2 n^2\\kappa^2\n\\end{equation*}\nUsing the basic relation between expectation and variance, we have\n\\begin{equation*}\n\\begin{aligned}\n&\\mathbb{E}\\left\\|\\mathbf{A}\\right\\|^2=\\left\\|\\mathbb{E}[\\mathbf{A}]\\right\\|^2+\\text{var}[\\mathbf{A}]\\leq\\left\\|\\mathbb{E}[\\mathbf{A}]\\right\\|^2+\\beta n\\sigma^2\\\\\n&\\mathbb{E}\\left\\|\\mathbf{B}\\right\\|^2=\\left\\|\\mathbb{E}[\\mathbf{B}]\\right\\|^2+\\text{var}[\\mathbf{B}]\\leq\\left\\|\\mathbb{E}[\\mathbf{B}]\\right\\|^2+(1-\\beta) n\\sigma^2\n\\end{aligned}\n\\end{equation*}\nwhich leads to\n\\begin{equation*}\n\\begin{aligned}\n\\mathbb{E}\\left\\|\\mathbf{B}\\right\\|^2 \\le \\beta^2 n^2\\kappa^2 +(1-\\beta)n\\sigma^2\n\\end{aligned}\n\\end{equation*}\nThen, we directly have\n\\begin{equation*}\n\\begin{aligned}\n&\\mathbb{E}\\left[\\left\\|\\frac{1}{|\\mathcal{G}|}\\sum\\limits_{i\\in \\mathcal{G}}\\left(g_{t}^{(i)}\\right)-\\nabla F(\\mathbf{x}_{t})\\right\\|^2\\right]=\\frac{1}{(1-\\beta)^2n^2}\\mathbb{E}\\left\\|\\mathbf{B}\\right\\|^2\\\\\n&\\leq { \\frac{\\beta^2\\kappa^2}{(1-\\beta)^2}}+\\frac{\\sigma^2}{(1-\\beta)n}\n\\end{aligned}\n\\end{equation*}\nIt completes the proof of Lemma 2.\n\n\n\\subsection{Proof of Theorem 1} \n\\label{append:theorem1}\n\nTaking the total expectations of averaged gradient on local sampling and randomness in aggregation rule, we have\n\\begin{equation*}\n\\begin{aligned}\n&\\mathbb{E}_t[F(\\mathbf{x}_{t+1})] - F(\\mathbf{x}_t) \\\\\n&\\leq -\\eta \\left\\langle \\nabla F(\\mathbf{x}_t),\\mathbb{E}_t\\left[\\hat{g}_t\\right]\\right\\rangle+\\frac{L\\eta^2}{2}\\mathbb{E}_t\\left[\\left\\|\\hat{g}_t\\right\\|^2\\right]\\\\\n&= -\\eta \\left\\langle \\nabla F(\\mathbf{x}_t),\\mathbb{E}_t\\left[\\hat{g}_t-\\tilde{g}_t+\\tilde{g}_t-\\nabla F(\\mathbf{x}_t)+\\nabla F(\\mathbf{x}_t)\\right]\\right\\rangle\\\\\n&\\qquad+\\frac{L\\eta^2}{2}\\mathbb{E}_t\\left[\\left\\|\\hat{g}_t-\\nabla F(\\mathbf{x}_t)+\\nabla F(\\mathbf{x}_t)\\right\\|^2\\right]\\\\\n&\\leq -\\eta \\left\\langle \\nabla F(\\mathbf{x}_t),\\mathbb{E}_t\\left[\\hat{g}_t-\\tilde{g}_t\\right]\\right\\rangle -\\eta \\left\\langle \\nabla F(\\mathbf{x}_t),\\mathbb{E}_t\\left[\\tilde{g}_t-\\nabla F(\\mathbf{x}_t)\\right]\\right\\rangle\\\\\n&\\qquad-\\eta\\left\\|\\nabla F(\\mathbf{x}_t)\\right\\|^2 +L\\eta^2\\left\\|\\nabla F(\\mathbf{x}_t)\\right\\|^2+L\\eta^2\\mathbb{E}_t\\left[\\left\\|\\hat{g}_t-\\nabla F(\\mathbf{x}_t)\\right\\|^2\\right]\\\\\n\\end{aligned}\n\\end{equation*}\nFrom Assumption 1 \\& 2, we have\n\\begin{equation*}\n\\begin{aligned}\n\\left[\\mathbb{E}\\left\\|\\hat{g}_t-\\bar{g}_t\\right\\|\\right]^2\n\\leq c{\\delta}\\sup_{i,j\\in 
\\mathcal{G}}\\mathbb{E}[\\|{g_t^{(i)}}-{g_t^{(j)}}\\|^2]\\leq 2c{\\delta}(\\sigma^2+\\kappa^2)\n\\end{aligned}\n\\end{equation*}\nthen by Young's Inequality with $\\rho=2$, we can get\n\\begin{equation*}\n\\begin{aligned}\n&-\\eta \\left\\langle \\nabla F(\\mathbf{x}_t),\\mathbb{E}_t\\left[\\hat{g}_t-\\tilde{g}_t\\right]\\right\\rangle\\\\\n&\\leq \\eta \\left\\|\\nabla F(\\mathbf{x}_t)\\right\\| \\cdot \\mathbb{E}_t\\left\\|\\hat{g}_t-\\tilde{g}_t\\right\\|\\\\\n& \\leq\\frac{\\sqrt{\\delta}\\eta}{2\\rho}\\left\\|\\nabla F(\\mathbf{x}_t)\\right\\|^2 + \\frac{\\rho}{2}\\cdot 2\\sqrt{\\delta}\\eta c(\\sigma^2+\\kappa^2)\\\\\n&\\leq\\frac{\\sqrt{\\delta}\\eta}{4}\\left\\|\\nabla F(\\mathbf{x}_t)\\right\\|^2 + 2\\sqrt{\\delta}\\eta c(\\sigma^2+\\kappa^2)\n\\end{aligned}\n\\end{equation*}\nCombining with Lemma 2, we get\n\\begin{equation*}\n\\begin{aligned}\n&-\\eta \\left\\langle \\nabla F(\\mathbf{x}_t),\\mathbb{E}_t\\left[\\tilde{g}_t-\\nabla F(\\mathbf{x}_t)\\right]\\right\\rangle\\\\\n&\\leq \\eta \\left\\|\\nabla F(\\mathbf{x}_t)\\right\\| \\cdot \\mathbb{E}_t\\left\\|\\tilde{g}_t-\\nabla F(\\mathbf{x}_t)\\right\\|\\\\\n&\\leq\\frac{\\beta\\eta}{2}\\left\\|\\nabla F(\\mathbf{x}_t)\\right\\|^2 + \\frac{\\beta\\eta\\kappa^2}{2(1-\\beta)^2}\n\\end{aligned}\n\\end{equation*}\nand\n\\begin{equation*}\n\\begin{aligned}\n&\\mathbb{E}_t\\left[\\left\\|\\hat{g}_t-\\nabla F(\\mathbf{x}_t)\\right\\|^2\\right] \\\\\n&= \\mathbb{E}_t\\left[\\left\\|\\hat{g}_t-\\bar{g}_t + \\bar{g}_t - \\nabla F(\\mathbf{x}_t)\\right\\|^2\\right]\\\\\n&\\leq 2\\mathbb{E}_t\\left[\\left\\|\\hat{g}_t-\\bar{g}_t\\right\\|^2\\right]+2\\mathbb{E}_t\\left[\\left\\|\\bar{g}_t - \\nabla F(\\mathbf{x}_t)\\right\\|^2\\right]\\\\\n&=2\\left[\\mathbb{E}\\left\\|\\hat{g}_t-\\bar{g}_t\\right\\|\\right]^2+2\\text{var}\\left\\|\\hat{g}_t\\right\\|+2\\mathbb{E}_t\\left[\\left\\|\\bar{g}_t - \\nabla F(\\mathbf{x}_t)\\right\\|^2\\right]\\\\\n&\\leq \\quad \\begin{matrix} \\underbrace{ 4c\\delta(\\sigma^2+\\kappa^2)+2b^2+\\frac{2\\beta^2\\kappa^2}{(1-\\beta)^2}+\\frac{2\\sigma^2}{(1-\\beta)n} } \\\\ =\\Delta_1 \\end{matrix}\n\\end{aligned}\n\\end{equation*}\nIn the above derivations, the basic inequality $2\\mathbf{a}\\cdot \\mathbf{b}\\leq \\mathbf{a}^2+\\mathbf{b}^2$ is applied. Taking total expectation and rearranging the terms, we get\n\\begin{equation*}\n\\begin{aligned}\n&\\eta\\left(\\frac{4-\\sqrt{\\delta}-2\\beta}{4}-L\\eta \\right)\\mathbb{E}[\\left\\|\\nabla F(\\mathbf{x}_t)\\right\\|^2]\\leq \\mathbb{E}[F(\\mathbf{x}_{t})-F(\\mathbf{x}_{t+1})]\\\\\n&\\qquad\\qquad+2\\sqrt{\\delta}\\eta c(\\sigma^2+\\kappa^2)+\\frac{\\beta\\eta\\kappa^2}{2(1-\\beta)^2}+L\\eta^2\\Delta_1\\\\\n\\end{aligned}\n\\end{equation*}\nAssume that $\\eta \\le (2-\\sqrt{\\delta}-2\\beta)\/(4L)$, thus $\\left(\\frac{4-\\sqrt{\\delta}-2\\beta}{4}-L\\eta \\right)\\geq\\dfrac{1}{2}$. 
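\nIndeed, the step-size condition directly gives the claimed bound:\n\\begin{equation*}\nL\\eta \\le \\frac{2-\\sqrt{\\delta}-2\\beta}{4}\n\\quad\\Longrightarrow\\quad\n\\frac{4-\\sqrt{\\delta}-2\\beta}{4}-L\\eta \\ \\ge\\ \\frac{4-\\sqrt{\\delta}-2\\beta}{4}-\\frac{2-\\sqrt{\\delta}-2\\beta}{4}\\ =\\ \\frac{1}{2}.\n\\end{equation*}\n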
Taking summation and dividing by $\\eta\\left(\\frac{4-\\sqrt{\\delta}-2\\beta}{4}-L\\eta \\right)T$, then we finally get\n\\begin{equation*}\n\\begin{aligned}\n&\\dfrac{1}{T}\\sum_{t=0}^{T-1}\\mathbb{E}[\\left\\|\\nabla F(\\mathbf{x}_t)\\right\\|^2]\\leq \\frac{2(F(\\mathbf{x}_0)-F^*)}{\\eta T}+2L\\eta\\Delta_1\\\\\n&\\qquad\\qquad\\qquad\\qquad+\\quad\n\\begin{matrix} \\underbrace{ 4\\sqrt{\\delta} c(\\sigma^2+\\kappa^2)+\\frac{\\beta\\kappa^2}{(1-\\beta)^2} } \\\\ =\\Delta_2 \\end{matrix}\n\\end{aligned}\n\\end{equation*}\nwhich completes the proof.\n\\section{Background and Related Work}\n\\label{secRelated} \n\n\\subsection{Safety \\& Security in Federated Learning}\n\nThe model safety and data security are essential principles of federated learning due to the concern of privacy risks and adversarial threats \\cite{yang2019federated, kairouz2019advances, lyu20survey, ma20safeFL}, especially in the\nage of emerging privacy regulations such as General Data\nProtection Regulation (GDPR) \\cite{sharma2019data}. In the context of FL, instead of raw data, the gradient information are shared to jointly train a model. More advanced technologies such as secure multiparty computation or differential privacy are also employed to enhance the privacy guarantees\\cite{abadi2016deepdp,Bonawitz17secureML,Gao19FTL}. Meanwhile, the learning systems are vulnerable to various kinds of failures, including non-malicious faults and malicious attacks. Data poisoning attacks and model update poisoning attacks (aka. untargeted attacks) aim to degrade or even fully break the global model during training phase, while backdoor attacks (aka. targeted attacks) make the model misclassify certain samples during inference phase \\cite{kairouz2019advances}.\nIn particular, the Byzantine threats can be viewed as worst-case attacks, in which corrupted clients can produce arbitrary outputs and are allowed to collude. In many studies, the Byzantine attacker is assumed to be omniscient and have capability to access white-box model parameters and all honest gradients to conduct strong attacks \\cite{Fang20Local}. As pointed in many works, appropriately crafted attacks can give significant impact on the model performance while circumventing most of current defenses \\cite{BaruchBG19LIE}. However, security mechanisms to protect privacy inevitably make it a more challenging task to successfully detect those failures and attacks, such as secure aggregation where the server can not directly see any individual client updates but an aggregate result \\cite{Bonawitz17secureML,kairouz2019advances}. Thus, the trade-off between privacy assurance and system robustness needs more investigations.\n\n\\subsection{Existing Defense Strategies}\n\n\\noindent \\textbf{Statistic-based.} This is also known as majority-vote based strategy, requiring the percentage of Byzantine clients less than 50\\%. This kind of methods use the $\\ell_p$-norm distance or cosine-similarity to measure the confidence for received gradients. The {Krum} as well as extended {Multi-Krum} are pioneering work towards Byzantine-robust learning\\cite{Blanchard17Byz}. In \\cite{Yin18optimalrate}, the convergence rate and error rate of {trimmed-mean (TrMean)} and {coordinate-wise median (Median)} is rigorously studied. Moreover, \\cite{Mhamdi18Bulyan} has shown that the Krum and median defenses are vulnerable to $\\ell_p$-attack and developed a meta-method called {Bulyan} on top of other robust aggregation methods. 
In particular, some works only aggregate the signs of gradients to mitigate the Byzantine effect \\cite{bernstein18sign, li2019rsa}. Recently, a method called {Divide and Conquer (DnC)} has been proposed to tackle strong attacks \\cite{shejwalkar2021manipulating}.\n\n\\noindent \\textbf{Validation-based.} The most straightforward approach to evaluate whether a particular gradient is honest is to utilize auxiliary data in the PS to validate the performance of the updated model. {Zeno} \\cite{Xie19Zeno} uses a stochastic descendant score to evaluate the correctness of each gradient and chooses those with the highest scores. Fang et al. \\cite{Fang20Local} use error-rate-based and loss-function-based rejection mechanisms to reject gradients that have a bad impact on model updating. In \\cite{cao20FLTrust}, the authors utilize the ReLU-clipped cosine-similarity between each received gradient and a standard gradient as a weight to obtain a robust aggregation. The main concern of such approaches is the accessibility of auxiliary data.\n\n\\noindent \\textbf{History-aided.} If the one-to-one correspondence between gradients and client entities is known to the PS, then it is possible to utilize historical data to trace the clients' behaviors. Some studies show that malicious behavior could be revealed from the gradient trace by designing advanced filter techniques \\cite{Alistarh18Byz, zhu20safeguard}. In \\cite{Mu19AFA}, the authors propose a Hidden Markov Model to learn the quality of model updates and discard the bad or malicious updates. Besides, momentum SGD can also be considered as a history-aided method and can help to alleviate the impact of Byzantine attacks \\cite{Karimireddy20history,Mahdi21momentum}.\n\n\\noindent \\textbf{Redundancy-based.} In the context of traditional distributed training, it is possible to assign redundant data to each node and use this redundancy to eliminate the effect of Byzantine failures. In \\cite{Chen18Draco}, the authors present a scalable framework called {DRACO} for robust distributed training using ideas from coding theory. In \\cite{DataSD21dataencoding}, a method based on data encoding and error correction techniques over real numbers is proposed to combat adversarial attacks. In \\cite{Rajput19Detox}, a framework called {DETOX} is proposed by combining computational redundancy and hierarchical robust aggregation to filter out Byzantine gradients.\n\n\\noindent \\textbf{Learning-based.} In \\cite{li20byzautoencoder}, the authors use a VAE as a spectral anomaly detection model to learn the representation of honest gradients and use the reconstruction error in each round as a detection threshold.\nIn \\cite{Pan20Justinian}, a method called Justinian's GAAvernor is proposed to learn a robust gradient aggregation policy against Byzantine attacks via reinforcement learning. In \\cite{regatti20bygars}, the authors use auxiliary data in the PS to learn the coefficients of a weighted-average aggregation for each received gradient.\n\n\\noindent \\textbf{Ensemble-learning.} Another line of work leverages ensemble learning to provably guarantee that the predicted label for a testing example is not affected by Byzantine clients, in which multiple global models are trained and each of them is learned by using a randomly selected subset of clients \\cite{cao2021provably,qiao21provebackdoor}. 
However, such ensemble-learning methods significantly enlarge computational overhead and storage cost.\n\n\n\n\\section{Rethink of Recent Attacks}\n\\label{secAttackAnalysis}\n\nIn this section, we first give the threat model and then present our theoretical analysis along with empirical evidence of the\\emph{ Little is Enough (LIE)} attack \\cite{BaruchBG19LIE} to demonstrate the limitation of existing median- and distance-based defenses.\n\n\\noindent \\textbf{Threat Model.} Similar to the threat models in previous works \\cite{Blanchard17Byz,BaruchBG19LIE,Fang20Local,shejwalkar2021manipulating}, we assume that there exists an attacker that controls some malicious clients to perform model poisoning attacks. The malicious clients could be fake clients that injected by the attacker or genuine ones but corrupted by the attacker. Specially, we assume the attacker has full knowledge on all benign gradients, and model parameters, and the corrupted clients can collude to conduct strong attacks. However, the attacker cannot corrupt the server and the proportion of malicious clients $\\beta$ is less than half. For a system with $n$ clients, without loss of generality, we assume that the first $m$ clients are corrupted and $\\beta=\\frac{m}{n}<0.5$.\n\n\\vspace{1ex}\n\\noindent \\textbf{{LIE} Attack.} Byzantine clients first estimate coordinate-wise mean ($\\mu_j$) and standard deviation ($\\sigma_j$), and then send malicious gradient vector with elements crafted as follows:\n\\begin{equation}\\label{eq:lie}\n(g_m)_j = \\mu_j - z\\cdot\\sigma_j, ~ j \\in [d]\n\\end{equation}\nwhere the positive attack factor $z$ depends on the total number of clients and Byzantine fraction. The design mechanism behind this attack is circumventing the coordinate-wise median and trimmed-mean methods. As advised by original paper, the $z$ can be determined by using cumulative standard normal function $\\phi(z)$:\n\n\\begin{equation}\nz_{max} = max_z \\left(\\phi(z)<\\frac{n-\\left\\lfloor\\frac{n}{2}+1\\right\\rfloor}{n-m}\\right)\n\\end{equation}\n\nIn the following, we will show why this attack is harmful and hard to detect. From an optimization point of view, we can check the upper bound of non-convex distributed optimization problem before and after \\textit{LIE} attack, where we assume the distributed data are IID for simplicity. Lemma~\\ref{lemma:SGD-1} gives out general upper bound when no attack and no defense are performed \\cite{bottou2018optim,yu2019on}, from which we can see that the objective function will converge to a critical point given large iterations $T$ and small learning rate $\\eta$. Applying similar analysis method, we can get a new upper bound when \\textit{LIE} attack and coordinate-wise median defense are conducted as presented in Proposition~\\ref{proposition:SGD-LIE}, where we assume the training can converge.\n\\begin{lemma}\n\tFor a distributed non-convex optimization problem $F(\\mathbf{x})$ with $n$ benign workers, suppose the data are IID and the gradient variance is bounded by $\\sigma^2$. 
Employ SGD with a fixed learning rate $\\eta \\le 1\/L$ and assume $F^*=\\min_{\\mathbf{x}}F(\\mathbf{x})$, then we have the following convergence result\\footnote{In this paper, $\\|\\cdot\\|$ denotes the $\\ell_2$ norm.}:\n\t\\begin{equation}\n\t\t\\dfrac{1}{T}\\sum_{t=0}^{T-1}\\mathbb{E}[\\left\\|\\nabla F(\\mathbf{x}_t)\\right\\|^2] \\leq \\frac{2(F(\\mathbf{x}_0)-F^*)}{\\eta T}+\\frac{L\\eta \\sigma^2}{n}\n\t\\end{equation}\n\t\\label{lemma:SGD-1}\n\\end{lemma}\n\\vspace{-2ex}\n\\begin{proposition}\n\tFor a distributed non-convex optimization problem $F(\\mathbf{x})$ with $(n-m)$ benign workers and $m$ malicious workers conducting the \\textit{LIE} attack with appropriate $z$, suppose the data are IID and the gradient variance is bounded by $\\sigma^2$. Employ Median-SGD with a fixed learning rate $\\eta \\le 1\/L$, and assume $F^*=\\min_{\\mathbf{x}}F(\\mathbf{x})$, then the averaged squared gradient norm admits the following upper bound $B$:\n\t\\begin{align}\n\t\tB \\leq \\frac{2(F(\\mathbf{x}_0)-F^*)}{\\eta T} + \\frac{L\\eta \\sigma^2}{n} +{\\left(1+\\frac{1}{n}\\right)z^2\\sigma^2}\n\t\\end{align}\n\t\\label{proposition:SGD-LIE}\n\\end{proposition}\n\\vspace{-2ex}\n\\begin{proof}\n\tDetailed proof is in Appendix \\ref{appendix:prop1}.\n\\end{proof}\n\nCompared with the result in Lemma~\\ref{lemma:SGD-1}, there exists an extra constant term in Proposition~\\ref{proposition:SGD-LIE}, which does not diminish even with a decreasing learning rate, enlarging the convergence error and even making the model training collapse entirely. When no defense is employed, $z$ is simply replaced by ${\\beta z}$, because the perturbation $m\\cdot z\\sigma$ is averaged across all $n$ workers, resulting in a smaller upper bound than the median-based defense. This also explains the phenomenon that naive Mean aggregation even has better results than median-based and distance-based defenses in some cases, as shown in \\cite{BaruchBG19LIE} and in the experimental results in this paper. Next, we turn to the coordinate-wise point of view to further analyze why this type of crafted gradient is harmful for model training. \nRecall that signSGD can achieve good model accuracy by only utilizing the sign of the gradient, which highlights the fact that the sign of the gradient plays a crucial role in model updating. Therefore, it is worthwhile to check the signs of gradients under this type of attack. The crafting rule of the \\textit{LIE} attack is already shown in Eq. (\\ref{eq:lie}), from which we can see that $(g_m)_j$ could have the opposite sign of $\\mu_j$ when $\\mu_j>0$. For the coordinate-wise median with $\\mu_j>0$, assuming this aggregation rule results in $\\tilde{g} = g_m$, we have:\n\\begin{equation}\n\\text{if}~~ z > \\frac{\\mu_j}{\\sigma_j}, ~~\\text{then}~~ \\mathrm{sign}(\\tilde{g}_j) \\ne \\mathrm{sign}(\\mu_j)\n\\end{equation} For the mean aggregation rule with $\\mu_j>0$, if $\\mu_j$ and $\\sigma_j$ are estimated on benign clients, then the $j$-th element becomes:\n\\begin{equation}\n\\tilde{g}_j = \\frac{1}{n}[m\\cdot (g_m)_j + (n-m)\\mu_j] = \\mu_j - z\\cdot \\beta \\cdot\\sigma_j\n\\end{equation}\nand in this case a bigger $z$ is needed to reverse the sign:\n\\begin{equation}\n\\text{if}~~ z > \\frac{n\\mu_j}{m\\sigma_j}, ~~\\text{then}~~ \\mathrm{sign}(\\tilde{g}_j) \\ne \\mathrm{sign}(\\mu_j)\n\\end{equation}\n\nEmpirical results in \\cite{BaruchBG19LIE} show that the coordinate-wise standard deviation is mostly larger than the corresponding gradient element, so a small $z$ can turn a large number of positive elements negative, leading to incorrect model updates. 
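\n\nTo make the crafting rule concrete, the following minimal NumPy sketch (written for illustration only, not the attackers' reference code; the array \\texttt{benign\\_grads} of shape $(n-m)\\times d$ holding the benign gradients is assumed) implements Eq. (\\ref{eq:lie}) and counts how many coordinates have their signs flipped relative to the benign mean:\n\\begin{verbatim}\nimport numpy as np\n\ndef lie_attack(benign_grads, z):\n    # craft a LIE-style gradient: mu_j - z * sigma_j per coordinate\n    mu = benign_grads.mean(axis=0)     # coordinate-wise mean\n    sigma = benign_grads.std(axis=0)   # coordinate-wise std\n    return mu - z * sigma\n\n# toy usage: fraction of coordinates whose sign is reversed\nbenign_grads = 0.5 * np.random.randn(40, 1000) + 0.05\ng_m = lie_attack(benign_grads, z=0.3)\nmu = benign_grads.mean(axis=0)\nflipped = np.mean(np.sign(g_m) != np.sign(mu))\nprint(\"sign-flipped fraction:\", flipped)\n\\end{verbatim}\n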
To verify this theoretical result, we adopt default training setting in Section~\\ref{secExperimentSetup} to train a CNN on MNIST dataset and ResNet-18 on CIFAR-10 dataset under no attack, and calculate averaged sign statistics across all workers as well as the sign statistics of a virtual gradient that crafted as Eq. (\\ref{eq:lie}). We plot the sign statistics over iterations as Fig.~\\ref{fig:sign_byz}, which convincingly supports our theoretical analysis. \n\n\\begin{figure}[ht]\n\t\\centering\n\t\\subfigure[Honest Gradient of CNN]{\n\t\t\\includegraphics[width=0.46\\columnwidth]{.\/figs\/mnist_honest.pdf}\n\t}\n\n\t\\subfigure[Malicious Gradient of CNN]{\n\t\t\\includegraphics[width=0.46\\columnwidth]{.\/figs\/mnist_lie.pdf}\n\t}\n\n\t\\subfigure[Honest Gradient of ResNet18]{\n\t\t\\includegraphics[width=0.46\\columnwidth]{.\/figs\/cifar_honest.pdf}\n\t}\n\n\t\\subfigure[Malicious Gradient of ResNet18]{\n\t\t\\includegraphics[width=0.46\\columnwidth]{.\/figs\/cifar_lie.pdf}\n\t}\n\t\\caption{Sign statistics of honest and malicious gradient. }\n\t\\label{fig:sign_byz}\n\\end{figure}\n\nNext, we present the following Proposition~\\ref{proposition:Safe-LIE} to explain why \\textit{LIE} attack is hard to detect, in which we compare the distance to averaged true gradient $\\tilde{g}=\\frac{1}{n}\\sum_{i=1}^{n}g^{(i)}$ and similarity with $\\tilde{g}$ for malicious gradient and honest gradient, respectively.\n\n\\begin{proposition}\n\tFor a distributed non-convex optimization problem $F(\\mathbf{x})$ with $(n-m)$ benign workers and $m$ malicious workers conducting \\textit{LIE} attack, suppose the data are IID and the gradient variance is bounded by $\\sigma^2$. Given small enough $z$, then the distance between malicious gradient and true averaged gradient could be smaller than that of certain honest gradient:\n\t\\begin{equation}\n\t\\exists ~i, ~s.t. ~~ \\mathbb{E}[\\left\\|g_m-\\tilde{g}\\right\\|^2] < \\mathbb{E}[\\|g^{(i)}-\\tilde{g}\\|^2]\n\t\\end{equation}\n\tand the cosine-similarity between malicious gradient and true averaged gradient could be bigger than that of certain honest gradient:\n\t\\begin{equation}\n\t\\exists ~i, ~s.t. ~~cos(g_m,\\tilde{g}) > cos(g^{(i)},\\tilde{g})\n\t\\end{equation}\n\t\\label{proposition:Safe-LIE}\n\\end{proposition}\n\\begin{proof}\n\tDetailed proof is in Appendix \\ref{appendix:prop2}.\n\\end{proof}\n\nFrom the above results we can see that it's possible for the malicious gradient to be more ``safe'' when evaluated by Krum and Bulyan methods. Hence, it's almost impossible to detect the malicious gradient from the distance and cosine-similarity perspectives. Instead, checking the sign statistics is a novel and promising perspective to detect abnormal gradients. Similar analysis is also valid for the recent proposed Min-Max\/Min-Sum attacks as well as the adaptive attack that uses different perturbation vectors \\cite{shejwalkar2021manipulating}, along with which a new method called \\textit{Divide and Conquer (DnC)} is also proposed to tackle those attacks. However, this method makes the assumption that malicious gradients are in the direction of largest singular vector of gradient matrix, and would fail when multiple attacks exist simultaneously or in non-IID settings. \n\n\\vspace{1ex}\n\\noindent \\textbf{New Hybrid Attack.} In this work, we extend the OFOM attack in\\cite{chang19cronus} and propose a type of hybrid attack called \\textbf{ByzMean} attack, which makes the mean of gradients be arbitrary targeted malicious gradient. 
More specifically, the malicious clients are divided into two sets: one set with $m_1$ clients chooses an arbitrary gradient value $g_{m_1}=*$, and the other set with $m_2=m-m_1$ clients chooses the gradient value $g_{m_2}$ such that the average of all gradients is exactly $g_{m_1}$, as follows:\n\\begin{equation}\ng_{m_1} = *, ~ g_{m_2}=\\frac{(n-m_1)g_{m_1}-\\sum_{i=m+1}^{n}g^{(i)}}{m_2}\n\\label{eq:byzMean}\n\\end{equation}\nAll existing attacks can be integrated into this ByzMean attack, making this hybrid attack even stronger than any single attack. For example, we can set $g_{m_1}$ as a random gradient or even as the gradient crafted by the \\textit{LIE} attack. In that case, all existing defense methods, including DnC, will be broken.\n\n\n\\section{Our SignGuard Framework}\n\\label{secFramework} \n\nIn this section, we present the formal problem formulation, introduce our SignGuard framework for Byzantine-robust federated learning, and provide theoretical analysis of the training convergence.\n\n\\subsection{System Overview and Problem Setup}\nOur federated learning system consists of a parameter server and a number of benign clients along with a small portion of Byzantine clients. We assume there exists an attacker (adversary) that aims at poisoning the global model and controls the Byzantine clients to perform malicious attacks. We first give the following definitions of benign and Byzantine clients, along with the attacker's capability and the defense goal.\n\n\\begin{definition}\n\t\\textbf{(Benign Client)} A benign client always sends an honest gradient to the server, which is an unbiased estimate of the local true gradient at each iteration.\n\\end{definition}\n\n\\begin{definition}\n\t\\textbf{(Byzantine Client)} A Byzantine client may act maliciously and can send an arbitrary message to the server.\n\\end{definition}\n\n\\noindent \\textbf{Attacker's Capability:}\nAs mentioned in the threat model in Section~\\ref{secAttackAnalysis}, the attacker has full knowledge of all benign gradients and the corrupted clients can collude to conduct various kinds of attacks. However, the attacker cannot compromise the server and the proportion of Byzantine clients is less than 50\\%.\n\\vspace{-1ex}\n\n\\noindent \\textbf{Defender's Capability:} As in previous studies \\cite{Fang20Local,cao20FLTrust}, we assume the defense is performed on the server side. The parameter server does not have access to the raw training data on the clients, and the server does not know the exact number of malicious clients. However, the server has full access to the global model as well as the local model updates (i.e., local gradients) from all clients in each iteration. Specifically, we further assume the received gradients are anonymous, which means the behavior of each client is untraceable. In consideration of privacy and security, we believe this assumption is reasonable in the context of federated learning.\n\n\\noindent \\textbf{Defense Goal:}\nAs mentioned in \\cite{cao20FLTrust}, an ideal defense method should give consideration to the following three aspects: Fidelity, Robustness and Efficiency. We hope the defense method achieves Byzantine-robustness against various malicious attacks without sacrificing the model accuracy. Moreover, the defense should be computationally cheap so that it does not affect the overall training efficiency.\n\n\\noindent \\textbf{Problem Formulation:}\nWe focus on federated learning in IID settings and then extend our algorithm to non-IID settings. 
We assume that training data are distributed over a number of clients in a network, and all clients jointly train a shared model based on disjoint local data. Mathematically, the underlying distributed optimization problem can be formalized as follows:\n\\begin{equation}\\label{eq:objective}\n\\min_{\\mathbf{x}\\in R^d}{F(\\mathbf{x})}=\\frac{1}{n}\\sum_{i=1}^{n}\\mathbb{E}_{\\xi_i\\sim D_i}\\left[F(\\mathbf{x};\\xi_i)\\right]\n\\end{equation}\nwhere $n$ is the total number of clients, $D_i$ denotes the local dataset of \\textit{i}-th client and could have different distribution from other clients, and $F(\\mathbf{x};\\xi_i)$ denotes the local loss function given shared model parameters $\\mathbf{x}$ and training data $\\xi_i$ sampled from $D_i$. We make all clients initialize to the same point $\\mathbf{x_0}$, then FedAvg \\cite{mcmahan17} can be employed to solve the problem. At each iteration, the \\textit{i}-th benign client draws $\\xi_i$ from $D_i$, and computes local stochastic gradient with respect to global shared parameter $\\mathbf{x}$, while Byzantine clients can send arbitrary gradient message:\n\\begin{equation}\n\\begin{aligned}\ng_{t}^{(i)} = \\begin{cases} \n\\nabla F(\\mathbf{x}_{t};\\xi_i) ,&\\text{if \\textit{i}-th client is benign}\n\\\\\n arbitrary , &\\text{if \\textit{i}-th client is Byzantine}\n\\end{cases}\n\\end{aligned}\n\\end{equation} \nThe parameter server collects all the local gradients and employs robust gradient aggregation rule to get a global model update:\n\\begin{equation}\n\\mathbf{x_{t+1}} = \\mathbf{x_{t}} -\\eta_{t}\\cdot \\textsl{GAR}(\\{g_{t}^{(i)}\\}_{i=1}^{n})\n\\end{equation}\nIn a synchronous and full participation setting, the result will be broadcast to all clients to update their local models and start a new iteration. In a partial participation setting, the model update is finished in PS and the updated model will be sent to the selected clients for next round. This process will repeat until the stop condition is satisfied.\n\n\nTo characterize the impact of Byzantine attack, we define the following two metrics:\n\\begin{definition}\n\t\\textbf{(Attack Success Rate)} The averaged proportion of malicious gradients that were selected by the detection-based GAR throughout the training iterations.\n\\end{definition}\n\\begin{definition}\n\t\\textbf{(Attack Impact)} The model accuracy drop compared with benchmark result that under no attack and no defense.\n\\end{definition}\n\nBased on above metrics, we can measure the effect of Byzantine attack by calculating the accuracy drop due to model poisoning and measure the validity of detection-based defense by calculating the attack success rate.\n\n\\begin{figure*}[htbp]\n\t\\centering\n\t{\n\t\n\t\t\\includegraphics[width=2.0\\columnwidth,clip=true]{.\/figs\/illustration_framework.pdf}\n\t}\t\n\t\\vspace{-2ex}\n\t\\caption{Illustration of the workflow of proposed SignGuard. The collected gradients are anonymous and sent into multiple filters, after which the intersection of multiple outputs are selected as trusted gradients.}\n\n\t\\label{fig:illustration}\n\\end{figure*}\n\n\\subsection{Our Proposed Solution}\n\nThe proposed SignGuard framework is described in Algorithm~\\ref{alg1}-\\ref{alg2} and the workflow is illustrated in Fig.~\\ref{fig:illustration}. On a high level, we pay attention to the magnitude and direction of the received gradients. 
At each iteration, the collected gradients are sent into multiple filters, including norm-based thresholding filer and sign-based clustering filter, etc. \\textbf{Firstly}, for the norm-based filter, the median of gradient norms is utilized as reference norm as the median always lies in benign set. Considering that small magnitude of gradients do less harm to the training while significantly large one is definitely malicious, we will perform a loose lower threshold and a strict upper threshold. \\textbf{Secondly}, for the sign-based clustering filter, we extract some statistics of gradients as features and using Mean-Shift \\cite{meanshif} algorithm as unsupervised clustering model with adaptive number of cluster classes, while the cluster with largest size is selected as the trusted set. In this work, the proportions of positive, zero and negative signs are computed as basic features, which are sufficient for a number of attacks, including LIE attack. \n\\begin{algorithm}[t] \n\t\\setstretch{1}\n\t\\caption{~SignGuard-based Robust Federated Learning} \n\t\\begin{algorithmic}[1] \n\t\t\\State \\textbf{Input:} learning rate $\\eta$, total iteration $T$, total client number $n$\n\t\t\\State \\textbf{Initial:} $\\mathbf{x}_0\\in R^d$\n\t\t\\For{$t=0, 1, ..., T-1$} \n\t\t\\State \\textbf{On each client \\textit{i} :}\n\t\t\\State Sample a mini-batch of data to compute gradient $\\displaystyle g_{t}^{(i)}$\n\t\n\t\t\\State Send $\\displaystyle g_{t}^{(i)}$ to the parameter server\n\t\t\\State Wait for global gradient $\\tilde{g}_{t}$ from server\n\t\t\\State Update local model: $\\displaystyle \\mathbf{x}_{t+1}=\\mathbf{x}_{t}-\\eta\\tilde{g}_{t}$ \n\t\t\\vspace{1ex}\n\t\t\\State \\textbf{On server:}\n\t\t\\State Collect gradients from all clients\n\t\n\t\t\\State Obtain global gradient: $\\displaystyle\\tilde{g}_{t}=SignGuard(\\{g_{t}^{(i)}\\}_{i=1}^{n})$\n\t\t\\State Send $\\tilde{g}_{t}$ to all clients\n\t\t\\EndFor \n\t\\end{algorithmic} \n\t\\label{alg1}\n\\end{algorithm}\n\n\\begin{algorithm}[t] \n\t\\setstretch{1}\n\t\\caption{~SignGuard Function} \n\t\\begin{algorithmic}[1] \n\t\t\\State \\textbf{Input:} Set of received gradients $S_t=\\{g_{t}^{(i)}\\}_{i=1}^{n}$, lower and upper bound $L,R$ for gradient norm\n\t\n\t\t\\State \\textbf{Initial:} $S_1 = S_2 = \\emptyset $\n\t\t\\State \\quad Get $l_2$-norm and element-wise sign of each gradient\n\t\t\n\t\t\\State \\textbf{Step 1:} Norm-threshold Filtering\n\t\t\\State \\quad Get the median of norm $M = med(\\{\\|g_{t}^{(i)}\\|\\}_{i=1}^{n})$\n\t\t\\vspace{1ex}\n\t\t\\State \\quad Add the gradient that satisfies $L \\leq \\dfrac{\\|g_{t}^{(i)}\\|}{M} \\leq R $ into $S_1$\n\t\t\n\t\t\\State \\textbf{Step 2:} Sign-based Clustering\n\t\t\\State \\quad Randomly select a subset of gradient coordinates\n\t\t\\State \\quad Compute sign statistics on selected coordinates for each gradient as features\n\t\t\\State \\quad Train a Mean-Shift clustering model\n\t\t\\State \\quad Choose the cluster with most elements as $S_2$\n\t\t\\State \\textbf{Step 3:} Aggregation\n\t\t\\State \\quad Get trusted set: $S'_t=S_1 \\cap S_2$ \n\t\t\\State \\quad Get $\\displaystyle\\tilde{g}_{t}=\\frac{1}{|S'_t|}\\sum_{i\\in S'_t}g_t^{(i)}$ \n\t\t\n\t\t\\State \\textbf{Output:} Global gradient: $\\displaystyle\\tilde{g}_{t}$\n\t\\end{algorithmic} \n\t\\label{alg2}\n\\end{algorithm}\n\n\nHowever, those features only consider the overall statistics and lose sight of local properties. 
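\n\nBefore turning to this limitation, the two filtering steps of Algorithm~\\ref{alg2} can be sketched as follows; this is a minimal illustration assuming PyTorch tensors and scikit-learn's \\texttt{MeanShift}, and it is not the exact implementation used in our experiments (in particular, it omits the magnitude normalization applied in the final aggregation and the optional similarity feature discussed below):\n\\begin{verbatim}\nimport torch\nfrom sklearn.cluster import MeanShift\n\ndef sign_guard(grads, L=0.1, R=3.0, frac=0.1):\n    # Step 1: norm-threshold filtering around the median norm\n    norms = torch.stack([g.norm() for g in grads])\n    med = norms.median()\n    s1 = {i for i, nrm in enumerate(norms) if L <= nrm / med <= R}\n\n    # Step 2: sign statistics on a random coordinate subset + Mean-Shift\n    d = grads[0].numel()\n    idx = torch.randperm(d)[: int(frac * d)]\n    feats = []\n    for g in grads:\n        s = torch.sign(g.flatten()[idx])\n        feats.append([(s > 0).float().mean().item(),\n                      (s == 0).float().mean().item(),\n                      (s < 0).float().mean().item()])\n    labels = MeanShift().fit(feats).labels_.tolist()\n    major = max(set(labels), key=labels.count)\n    s2 = {i for i, lab in enumerate(labels) if lab == major}\n\n    # Step 3: aggregate the intersection of the two trusted sets\n    trusted = sorted(s1 & s2)\n    return torch.stack([grads[i] for i in trusted]).mean(dim=0), trusted\n\\end{verbatim}\n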
As a toy example, when the proportions of positive and negative elements are roughly equal (as in ResNet-18), the naive sign statistics may be insufficient to detect sign-flipped gradients \\cite{Rajput19Detox} or those well-crafted attacks that have similar sign statistics. To mitigate this problem, we introduce randomized coordinate selection and add a similarity metric as an additional feature in our algorithm, such as the cosine-similarity or Euclidean distance between each received gradient and a ``correct'' gradient. However, without the help of auxiliary data in the PS, the ``correct'' gradient is not directly available. A practical way is to compute pairwise similarities among all received gradients and take the median as the similarity with the ``correct'' gradient. Or, more efficiently, utilize the aggregated gradient from the previous iteration as the ``correct'' gradient. Intuitively, this is promising for distinguishing irrelevant gradients and helps to improve the robustness of anomaly detection. The challenge is that, as shown in Section~\\ref{secAttackAnalysis}, the Euclidean distance and cosine-similarity metrics are not reliable against state-of-the-art attacks, and they can even mislead the judgment of SignGuard, as we found in experiments. In this work, the plain ``SignGuard\" only uses sign statistics by default, and the enhanced variants that add a cosine-similarity feature or a Euclidean-distance feature are called ``SignGuard-Sim\" and ``SignGuard-Dist\", respectively. We will provide comparative results for them. We emphasize that SignGuard is a flexible approach and more advanced features could be further extracted to enhance the effectiveness of anomaly detection. How to design a more reliable similarity metric is left as an open problem for future work. \n\n\nAfter filtering, the server eventually selects the intersection of multiple filter outputs as the trusted gradient set, and obtains a global gradient by robust aggregation, e.g. trimmed-mean. In this work, we use the mean aggregation with magnitude normalization. It is worth noting that a small fraction of honest gradients could also be filtered out due to gradient diversity, especially in the non-IID settings, depending on the variance of honest gradients and the closeness to malicious gradients.\n\n\n\\subsection{Convergence Analysis}\n\nIn this part, we provide some theoretical analysis of the security guarantee of SignGuard and the convergence of the non-convex optimization problem, jointly considering IID and non-IID data. We first claim that high separability can be achieved when the distributions of the test statistics for malicious and honest gradients have negligible overlap.\n\n\\begin{claim}\n\tSuppose all honest gradients are computed with the global model parameters and the same batch size, and assume the test statistics of honest and malicious gradients follow two finite-covariance distributions $P$ and $Q$. For $0<\\beta<1\/2$, let $U=(1-\\beta)P+\\beta Q$ be a mixture of sample points from $P$ and $Q$, and denote by $f(\\mathbf{x})$ and $g(\\mathbf{x})$ the PDFs of $P$ and $Q$. Then, there exists an algorithm that separates the data points with low probability of error if the total variation distance satisfies $TV(f,g)=1-o(1)$.\n\\end{claim}\n\\begin{remark}\n\tNote that the Byzantine clients face an inevitable trade-off between the attack impact and the risk of exposure when manipulating the gradient deviation. 
Therefore, under our detection-based SignGuard framework, the malicious gradient either have limited attack impact or become obvious to get detected, depending on the discrepancy between $P$ and $Q$.\n\\end{remark}\n\nTo conduct convergence analysis, we also make the following basic assumption, which is commonly used in the literature \\cite{yu2019on,bottou2018optim,Karimireddy19error_fix} for convergence analysis of distributed optimization.\n\\begin{assumption}\n\tAssume that problem (\\ref{eq:objective}) satisfies:\n\t\n\t\\textbf{1. Smoothness}: The objective function $F(\\cdot)$ is smooth with Lipschitz constant $L>0$, which means $\\forall \\mathbf{x}, \\forall \\mathbf{y},~\\left\\| \\nabla F(\\mathbf{x})-\\nabla F(\\mathbf{y})\\right\\| \\leq L\\left\\| \\mathbf{x}-\\mathbf{y}\\right\\|$.\n\tIt implies that:\n\t\\begin{equation}\n\tF(\\mathbf{x})-F(\\mathbf{y}) \\leq \\nabla F(\\mathbf{x})^{T}(\\mathbf{y}-\\mathbf{x})+\\frac{L}{2}\\left\\| \\mathbf{x}-\\mathbf{y}\\right\\|^2\n\t\\end{equation}\n\t\n\t\\textbf{2. Unbiased local gradient}: For each worker with local data, the stochastic gradient is locally unbiased:\n\t\\begin{equation}\n\t\\mathbb{E}_{\\xi_i\\sim D_i}\\left[\\nabla F(\\mathbf{x};\\xi_i)\\right] = \\nabla F_i(\\mathbf{x})\n\t\\end{equation}\n\t\n\t\\textbf{3. Bounded variances}: The stochastic gradient of each worker has a bounded variance uniformly, satisfying:\n\t\\begin{equation}\n\t\\mathbb{E}_{\\xi_i\\sim D_i}[\\left\\|\\nabla F(\\mathbf{x};\\xi_i)-\\nabla F_i(\\mathbf{x})\\right\\|^2] \\leq \\sigma^2\n\t\\end{equation}\n\tand the deviation between local and global gradient satisfies:\n\t\\begin{equation}\n\t\\left\\|\\nabla F_i(\\mathbf{x})-\\nabla F(\\mathbf{x})\\right\\|^2 \\leq \\kappa^2\n\t\\end{equation}\n\t\n\t\\label{as:1}\n\\end{assumption}\n\n\nFor SignGuard framework, the trusted gradients attained by filters may still contain a part of malicious gradients. In this case, any gradient aggregation rule necessarily results in an error to the averaged honest gradient \\cite{LaiRV16agnostic,Karimireddy20history}. Here we make another assumption on the capability of aggregation rule:\n\\begin{assumption}\n\tFor problem (\\ref{eq:objective}) with $(1 - \\beta)n$ benign clients (denoted by $\\mathcal{G}$) and $\\beta n$ Byzantine clients, suppose that at most $\\delta n$ Byzantine clients can circumvent SignGuard at each iteration. We assume that the robust aggregation rule in SignGuard outputs $\\hat{g}_t$ such that for some constant $c$ and constant $b$,\n\t\\begin{equation}\n\t\\begin{aligned}\n\t&\\textbf{1. Bounded Bias:}~~\\left[\\mathbb{E}\\left\\|\\hat{g}_t-\\bar{g}_t\\right\\|\\right]^2\n\t\\leq c{\\delta}\\sup_{i,j\\in \\mathcal{G}}\\mathbb{E}[\\|{g_t^{(i)}}-{g_t^{(j)}}\\|^2]\\\\\n\t&\\textbf{2. Bounded Variance:}~~ \\text{var}\\left\\|\\hat{g}_t\\right\\|\n\t\\leq b^2\n\t\\end{aligned}\t\n\t\\end{equation}\n\twhere $\\bar{g}_t=\\frac{1}{|\\mathcal{G}|}\\sum_{i\\in \\mathcal{G}}g_t^{(i)}$ and $0\\le \\delta < \\beta<0.5~$.\n\t\\label{as:2}\n\\end{assumption}\n\n\\begin{remark}\nWhen $\\delta=0$, it's possible to exactly recover the averaged honest gradient. For most aggregation rules such as Krum, the output is deterministic and thus has $b^2=0$. 
For clustering-based rules, the output is randomized and could have negligible variance if the clustering algorithm is robust.\n\\end{remark}\n\nWhen $\\beta n$ Byzantine clients exist and act maliciously, the desired gradient aggregation result is the average of $(1 - \\beta)n$ honest gradients, which still has a deviation to the global gradient of no attack setting. We give the following lemma to characterize the deviation:\n\n\\begin{lemma}\n\tSuppose the training data are non-IID under Assumption 1, then the deviation between averaged gradient of $(1-\\beta)n$ clients $\\bar{g}$ and the true global gradient $\\nabla F(\\mathbf{x})$ can be characterized as follows:\n\t\\begin{equation}\n\t\\mathbb{E}\\left[\\left\\|\\bar{g}-\\nabla F(\\mathbf{x})\\right\\|^2\\right]\n\t\\leq \\frac{\\beta^2\\kappa^2}{(1-\\beta)^2}+\\frac{\\sigma^2}{(1-\\beta)n}\n\t\\end{equation}\n\\end{lemma}\n\\begin{proof}\n\tDetailed proof is in Appendix \\ref{appendix:lemm2}.\n\\end{proof}\n\n\nGiven above assumptions and lemma, extending the analysis techniques in \\cite{bottou2018optim,yu2019on, Karimireddy19error_fix, Karimireddy20history}, now we can characterize the convergence of SignGuard by the following theorem. \n\\begin{theorem}\n\tFor problem (\\ref{eq:objective}) under Assumption 1, suppose the SignGuard satisfying Assumption 2 is employed with a fixed learning rate $\\eta \\le (2-\\sqrt{\\delta}-2\\beta)\/(4L)$ and $F^*=\\min_{\\mathbf{x}}F(\\mathbf{x})$, then we have the following convergence result:\n\t\\begin{equation}\n\t\\dfrac{1}{T}\\sum_{t=0}^{T-1}\\mathbb{E}[\\left\\|\\nabla F(\\mathbf{x}_t)\\right\\|^2] \\leq \\frac{2(F(\\mathbf{x}_0)-F^*)}{\\eta T}+2L\\eta\\Delta_1 + \\Delta_2\n\t\\end{equation}\n\twhere the constant terms are $\\Delta_1=4c\\delta(\\sigma^2+\\kappa^2)+2b^2+\\frac{2\\beta^2\\kappa^2}{(1-\\beta)^2}+\\frac{2\\sigma^2}{(1-\\beta)n}$ and $\\Delta_2=4c\\sqrt{\\delta}(\\sigma^2+\\kappa^2)+\\frac{\\beta\\kappa^2}{(1-\\beta)^2}$.\n\t\\label{theorem:signGuard}\n\\end{theorem}\n\\begin{proof}\n\tDetailed proof is in Appendix \\ref{append:theorem1}.\n\\end{proof}\n\n\\begin{remark}\n\tThe terms $\\Delta_1$ and $\\Delta_2$ arise from the existence of Byzantine clients and are influenced by the capability of aggregation rule. When no Byzantine client exists ($\\beta=0$ and thus $\\delta=0$), we have $\\Delta_2=0$ and the convergence is guaranteed with sufficiently small learning rate. If Byzantine clients exist ($\\beta>0$), even the defender is capable to remove all malicious gradients ($\\delta=0$), we still have $\\Delta_2>0$ due to non-IID data and may result in some model accuracy gaps to benchmark results. \n\\end{remark}\n\n\n\\section{Experimental Setup}\\label{secExperimentSetup}\nThe proposed SignGuard framework is evaluated on various datasets for image and text classification tasks. We mainly implement the learning tasks in IID fashion, and investigate the performance of different defenses in non-IID settings as well. The models that trained under no attack and no defense are used as benchmarks. All evaluated attack and defense algorithms are implemented in PyTorch.\n\n\\subsection{Datasets and Models}\n\n\\noindent \\textbf{MNIST.} MNIST is a 10-class digit image classification dataset, which consists of 60,000 training samples and 10,000 test samples, and each sample is a grayscale image of size 28 \u00d7 28. 
For MNIST, we construct a convolutional neural network (CNN) as the global model (see Appendix~\\ref{appendix:cnn}).\n\n\\noindent \\textbf{Fashion-MNIST.} Fashion-MNIST\\cite{xiao17fmnist} is a clothing image classification dataset, which has exactly the same image size and structure of training and testing splits as MNIST, and we use the same CNN as global model.\n\n\\noindent \\textbf{CIFAR-10.} CIFAR-10 \\cite{cifar10\/100} is a well-known color image classification dataset with 60,000 32 \u00d7 32 RGB images in 10 classes, including 50,000 training samples and 10,000 test samples. We use ResNet-18 \\cite{he2016residual} as the global models\\footnote{We use open-source implementation of ResNet-18, which is available at https:\/\/github.com\/kuangliu\/pytorch-cifar}.\n\n\\noindent \\textbf{AG-News.} AG-News is a 4-class topic classification dataset. Each class contains 30,000 training samples and 1,900 testing samples. The total number of training samples is 120,000 and 7,600 for test. We use a TextRNN that consists of two-layer bi-directional LSTM network \\cite{LiuQH16textRNN} as the global model.\n\n\\subsection{Evaluated Attacks}\nWe consider various popular model poisoning attacks in literature as well as recently proposed state-of-the-art attacks as introduced in Section~\\ref{secRelated}, and we assume the attacker knows all the benign gradients and the GAR in server. \n\n\\textbf{Random Attack.} The Byzantine clients send gradients with randomized values that generated by a multi-dimensional Gaussian distribution $\\mathcal{N}(\\mu,\\sigma^2 \\textbf{I})$. In our experiments, we take $\\mu = (0,...,0) \\in \\mathbb{R}^d\\ $and $\\sigma=0.5$ to conduct random attacks.\n\n\\textbf{Noise Attack.} The Byzantine clients send noise perturbed gradients that generated by adding Gaussian noise into honest gradients: $ g_{m} = g_{b} + \\mathcal{N}(\\mu,\\sigma^2 \\textbf{I}) $. We take the same Gaussian distribution parameters as random attack.\n\n\\textbf{Sign-Flipping.} The Byzantine clients send reversed gradients without scaling: $ g_{m} = -g_{b}$. This is a special case of reversed gradient attack \\cite{Rajput19Detox} or empire attack \\cite{Xie19Empires}.\n\n\\textbf{Label-Flipping.} The Byzantine clients flip the local sample labels during training process to generate faulty gradient. This is also a type of data poisoning attack. In particular, the label of each training sample in Byzantine clients is flipped from $l$ to $C-1-l$, where $C$ is the total categories of labels and $l\\in \\{0,1,\\cdots,C-1\\}$.\n\n\\textbf{Little is Enough.} As in \\cite{BaruchBG19LIE}, the Byzantine clients send malicious gradient vector with elements crafted as Eq.~(\\ref{eq:lie}). We set $z=0.3$ for default training settings in our experiments.\n\n\\textbf{ByzMean Attack.} As introduced in Section~\\ref{secAttackAnalysis}, we set $m_1=\\lfloor 0.8m \\rfloor$ and $m_2=m-m_1$, and set $g_{m_1}$ as LIE attack in all experiments.\n\n\\textbf{Min-Max\/Min-Sum.} As in \\cite{shejwalkar2021manipulating}, the malicious gradient is a perturbed version of the benign aggregate as Eq.~(\\ref{eq:gstd}), where $\\nabla^p$ is a perturbation vector and $\\gamma$ is a scaling coefficient, and those two attacks are formulated in Eq.~(\\ref{eq:minmax})-(\\ref{eq:minsum}). 
The first Min-Max attack ensures that the malicious gradients lie close to the clique of the benign gradients, while the Min-Sum attack ensures that the sum of squared distances of the malicious gradient from all the benign gradients is upper bounded by the sum of squared distances of any benign gradient from the other benign gradients. To maximize the attack impact, all malicious gradients keep the same. By default, we choose $\\nabla^p$ as $-std(g^{\\{i\\in [n]\\}})$, i.e., the inverse standard deviation.\n\\begin{equation}\ng_m = f_{avg}(g^{\\{i\\in [n]\\}})+\\gamma \\nabla^p\n\\label{eq:gstd}\n\\end{equation}\n\\begin{equation}\n\t\\mathop{\\arg\\max}\\limits_{\\gamma} ~ \\mathop{\\max}\\limits_{i\\in [n]}\\|g_m-g^{(i)}\\|\\leq \\mathop{\\max}\\limits_{i,j\\in [n]}\\|g^{(i)}-g^{(j)}\\|\n\t\\label{eq:minmax}\n\\end{equation}\n\\begin{equation}\n\t\\mathop{\\arg\\max}\\limits_{\\gamma} ~ \\mathop{\\sum}\\limits_{i\\in [n]}\\|g_m-g^{(i)}\\|^2\\leq \\mathop{\\max}\\limits_{i\\in [n]}\\mathop{\\sum}\\limits_{j\\in [n]}\\|g^{(i)}-g^{(j)}\\|^2\n\t\\label{eq:minsum}\n\\end{equation}\n\n\nSpecially, we investigate the fixed and randomized attacking behaviors respectively. In fixed settings, all corrupted clients play the role of Byzantine nodes and always perform the predefined attack method during the whole training process. In randomized settings, all corrupted clients will change their collusion attack strategy at each training epoch.\n\n\\subsection{Training Settings}\nBy default, we assume there are $n = 50$ clients in total for each task, 20\\% of which are Byzantine nodes with fixed attack method, and the training data are IID among clients. To verify the resilience and robustness, we will also evaluate the impact of different fractions of malicious clients for different attacks and defenses. Furthermore, our approach will also be evaluated in non-IID settings. In all experiments, we set the lower and upper bounds of gradient norm as $L = 0.1$ and $R = 3.0$, and randomly select 10\\% of coordinates to compute sign statistics in our SignGuard-based algorithms. Each training procedure is run for 60 epochs for MNIST\/Fashion-MNIST\/AG-News and 160 epochs for CIFAR-10, and local iteration is always set to 1. We employ momentum in PS side and the momentum parameter is set to 0.9, and weight decay is set to 0.0005. More details on some key hyper-parameters are described in Appendix~\\ref{appendix:train}\n\n\n\\subsection{Performance Metrics}\nWe train the models for a fixed number of epochs and use the test accuracy to evaluate the model performance. Considering the instability of model training and the fluctuation of model accuracy under strong attacks, we test the training model at the end of each training epoch and take the best test accuracy during the whole training process to assess the efficacy of defenses. We repeat each experiment for three times and report the average results. When a certain defense is performed, the accuracy gap to the baseline can be utilized to evaluate the efficacy of defense under various attacks, and smaller gap indicates more effective defense method. 
\n\n\\section{Evaluation Results}\\label{secExperimentResult}\n\n\n\\begin{table*}[ht] \\footnotesize\n\t\\centering\n\t\\caption{Comparison of defenses under various model poisoning attacks} \n\t\\label{tab:main_result_iid} \n\t\\renewcommand\\arraystretch{1.1}\n\t\\newcommand{\\tabincell}[2]{\\begin{tabular}{@{}#1@{}}#2\\end{tabular}} \n\t\\begin{tabular}{| c | c | p{1.08cm}<{\\centering} | p{1.08cm}<{\\centering} | p{1.08cm}<{\\centering} | c | p{1.08cm}<{\\centering} | p{1.08cm}<{\\centering} | p{1.08cm}<{\\centering} | p{1.08cm}<{\\centering} | p{1.08cm}<{\\centering} |}\n\t\t\\hline\n\t\t\\multirow{2}{*}{\\tabincell{c} {Dataset\\\\(Model)}} & \n\t\t\\multirow{2}{*}{GAR} & \n\t\t\\multirow{2}{*}{No Attack} & \n\t\t\\multicolumn{3}{c|}{Simple Attacks}&\n\t\t\\multicolumn{5}{c|}{State-of-the-art Attacks}\\\\\n\t\t\\cline{4-5} \n\t\t\\cline{5-6} \n\t\t\\cline{6-7}\n\t\t\\cline{7-8}\n\t\t\\cline{8-9} \n\t\t\\cline{9-10} \n\t\t\\cline{10-11} \n\t\t& & & {Random} & {Noise} & {Label-flip} & {ByzMean} & {Sign-flip} & {LIE} & {Min-Max} & {Min-Sum}\\\\\n\t\t\\hline\n\t\t\n\t\t\\multirow{10}{*}{\\tabincell{c} {MNIST\\\\(CNN)}}&\n\t\tMean & 99.23 & 84.84 & 90.48 & 99.05 & 31.98 & 98.42 & 84.49 & 68.89 & 34.46 \\\\\n\t\t& TrMean & 98.23 & 98.63 & 98.53 & 95.31 & 58.87 & 98.44 & 94.50 & 34.48 & 43.89\\\\\n\t\t& Median & 97.46 & 94.18 & 97.45 & 93.84 & 40.04 & 97.73 & 74.37 & 26.11 & 38.13 \\\\\n\t\t& GeoMed & 93.21 & 82.77 & 78.68 & 86.20 & 45.02 & 74.78 & 34.37 & 15.62 & 20.53 \\\\\n\t\t& Multi-Krum & 99.20 & 98.98 & 99.11 & 99.06 & 83.26 & 98.82 & 90.04 & 52.77 & 27.27 \\\\\n\t\t& Bulyan & 99.10 & 99.17 & {99.12} & 99.15 & 98.58 & 98.81 & 98.86 & 52.45 & 51.95 \\\\\n\t\t& DnC & 99.09 & 99.07 & 99.08 & {99.17} & 82.25 & 98.73 & {99.12} & 98.97 & 81.04 \\\\\n\t\t& SignGuard & 99.11 & {99.09} & {98.97} & \\textbf{99.18} & \\textbf{99.02} & \\textbf{99.13} & 99.15 & \\textbf{99.18} & {99.15} \\\\\n\t\t& SignGuard-Sim & 99.16 & \\textbf{99.18} & 99.16 & 99.07 & {98.91} & {99.06} & \\textbf{99.22} & {99.08} & {99.13} \\\\\n\t\t& SignGuard-Dist & 98.95 & 99.05 & \\textbf{99.18} & 99.11 & 98.93 & 98.86 & 98.96 & 99.01 & \\textbf{99.19} \\\\\n\t\t\\hline \\hline\n\t\t\n\t\t\\multirow{10}{*}{\\tabincell{c} {Fashion-MNIST\\\\(CNN)}}&\n\t\tMean & 89.51 & 69.88 & 31.83 & 89.37 & 16.31 & 86.68 & 79.78 & 47.73 & 45.12 \\\\\n\t\t& TrMean & 87.02 & 87.81 & 87.45 & 79.58 & 62.66 & 87.45 & 54.28 & 45.71 & 42.96 \\\\\n\t\t& Median & 80.77 & 82.96 & 82.59 & 77.41 & 47.46 & 82.52 & 45.14 & 47.43 & 50.83 \\\\\n\t\t& GeoMed & 76.51 & 79.96 & 78.93 & 78.16 & 40.51 & 70.65 & 10.00 & 73.75 & 66.63 \\\\\n\t\t& Multi-Krum & 87.89 & 89.12 & 88.94 & 89.27 & 69.95 & 87.59 & 72.22 & 40.08 & 47.36 \\\\\n\t\t& Bulyan & 88.80 & 89.31 & 89.32 & 89.21 & 88.72 & 87.52 & 88.64 & 59.65 & 43.63 \\\\\n\t\t& DnC & 89.21 & 88.89 & 88.14 & 88.85 & 70.15 & 87.58 & 71.82 & 88.43 & 88.94 \\\\\n\t\t& SignGuard & 89.48 & \\textbf{89.34} & \\textbf{89.32} & 89.12 & 89.35 & 88.69 & 89.34 & \\textbf{89.48} & \\textbf{88.51} \\\\\n\t\t& SignGuard-Sim & 89.43 & {89.24} & 89.21 & \\textbf{89.33} & {89.28} & {89.08} & \\textbf{89.36} & {89.04} & {88.18}\\\\\n\t\t& SignGuard-Dist & 89.37 & 88.87 & 89.30 & 89.31 & \\textbf{89.39} & \\textbf{89.21} & \\textbf{89.36} & 89.34 & 88.38 \\\\\n\t\t\\hline \\hline\n\t\t\n\t\t\\multirow{10}{*}{\\tabincell{c} {CIFAR-10\\\\(ResNet-18)}}&\n\t\tMean & 93.16 & 44.53 & 46.34 & {91.98} & 17.18 & 79.63 & 55.86 & 23.84 & 18.17 \\\\\n\t\t& TrMean & 93.15 & 89.61 & 89.47 & 85.15 & {30.13} & 85.54 & 43.76 & 24.81 & 23.36 
\\\\\n\t\t& Median & 74.18 & 68.27 & 71.42 & 71.19 & 23.47 & 70.75 & 27.35 & 20.46 & 22.74 \\\\\n\t\t& GeoMed & 65.62 & 70.41 & 69.35 & 70.76 & 24.86 & 67.82 & 23.55 & 50.36 & 45.23 \\\\\n\t\t& Multi-Krum & 93.14 & \\textbf{92.88} & \\textbf{92.91} & 92.26 & 50.41 & 92.36 & 42.58 & 21.17 & 38.24 \\\\\n\t\t& Bulyan & 92.78 & 91.87 & 92.47 & 92.24 & 81.33 & 90.12 & 74.52 & 29.87 & 37.79 \\\\\n\t\t& DnC & 92.73 & 88.01 & 88.25 & 92.05 & 36.56 & 84.76 & 47.37 & 52.94 & 35.36 \\\\\n\t\t& SignGuard & 93.03 & \\textbf{92.78} & 92.52 & 92.28 & \\textbf{92.46} & 88.61 & \\textbf{92.93} & 92.56 & {92.47} \\\\\n\t\t& SignGuard-Sim & 93.19 & 92.51 & 91.38 & 92.26 & {92.26} & \\textbf{92.48} & 92.62 & {92.63} & 92.75 \\\\\n\t\t& SignGuard-Dist & 92.76 & 92.64 & 92.26 & \\textbf{92.51} & 92.42 & 91.69 & 92.36 & \\textbf{92.82} & \\textbf{92.93} \\\\\n\t\t\\hline \\hline\n\t\t\\multirow{10}{*}{\\tabincell{c} {AG-News\\\\(TextRNN)}}&\n\t\tMean & 89.36 & 28.18 & 28.41 & 86.72 & 25.05 & 84.18 & 79.34 & 27.32 & 25.24 \\\\\n\t\t& TrMean & 87.57 & 88.33 & 88.72 & 85.50 & 37.51 & 84.84 & 66.95 & 30.05 & 30.28 \\\\\n\t\t& Median & 84.57 & 84.52 & 84.59 & 82.08 & 28.99 & 81.10 & 32.39 & 30.28 & 29.71 \\\\\n\t\t& GeoMed & 82.38 & 77.63 & 77.18 & 78.42 & 27.36 & 81.64 & 31.57 & 74.82 & 71.48 \\\\\n\t\t& Multi-Krum & 88.86 & 89.18 & 89.22 & 86.89 & 68.53 & \\textbf{87.42} & 72.98 & 53.51 & 32.46 \\\\\n\t\t& Bulyan & 88.22 & 88.86 & 88.93 & 85.54 & 85.80 & 86.55 & 85.49 & 47.76 & 51.25 \\\\\n\t\t& DnC & 89.13 & 86.42 & 86.28 & 86.72 & 31.47 & 86.30 & 76.58 & 88.45 & 89.05 \\\\\n\t\t& SignGuard & 89.29 & \\textbf{89.22} & 89.23 & 86.78 & \\textbf{89.24} & 86.53 & 89.26 & 89.23 & 89.27 \\\\\n\t\t& SignGuard-Sim & 89.24 & 89.13 & \\textbf{89.29} & 87.05 & 89.36 & 86.76 & \\textbf{89.33} & \\textbf{89.27} & \\textbf{89.37}\\\\\n\t\t& SignGuard-Dist & 89.23 & 89.16 & 89.23 & \\textbf{87.25} & 89.31 & 87.30 & 89.17 & 89.22 & 89.35 \\\\\n\t\t\\hline\n\t\\end{tabular}\n\\end{table*}\n\n\n\nIn this section, we conduct extensive experiments with various attack-defense pairs on both IID and non-IID data settings. We compare our methods with several existing defense methods, including TrMean, Median, GeoMed, Multi-Krum, Bulyan and DnC. The numerical results demonstrate the efficacy and superiority of our proposed SignGuard framework.\n\n\\subsection{Main Results in IID Settings}\nThe main results of best achieved test accuracy during training process under different attack and defense methods in IID setting are collected in Table~\\ref{tab:main_result_iid}. The results of naive \\textit{Mean} aggregation under \\textit{No Attack} are used as benchmarks. Note that we favor other defenses by assuming the defense algorithms know the fraction of Byzantine clients, which is somewhat unrealistic but intrinsically required by existing defenses. However, we do not use the Byzantine fraction information in our SignGuard-type methods, including plain SignGuard, SignGuard-Sim and SignGuard-Dist.\n\n\\vspace{1ex}\n\\noindent \\textbf{Sign Statistics are Powerful.} Test results on four datasets consistently show that our SignGuard-type methods can leverage the power of sign statistics and similarity features to filter out most malicious gradients and achieve comparable test accuracy as general distributed SGD under no attack. 
Consistent with the original papers \\cite{BaruchBG19LIE,shejwalkar2021manipulating}, the state-of-the-art attacks, such as LIE and Min-Max\/Min-Sum, can circumvent the median-based and distance-based defenses, preventing successful model training. Taking the results of Multi-Krum on ResNet-18 as an example, when no attack is performed, Multi-Krum has a negligible accuracy drop (less than 0.1\\%). However, the best test accuracy drops to 42.58\\% under the LIE attack and even below 40\\% under the Min-Max\/Min-Sum attacks. Similar phenomena can also be found in model training under the TrMean, Median and Bulyan methods. Besides, even under no attack, the Median and GeoMed methods are only effective in simple tasks, such as the CNN for digit classification on MNIST and the TextRNN for text classification on AG-News. When applied to complicated model training, such as ResNet-18 on CIFAR-10, these two methods have high convergence error and result in significant model degradation. While Multi-Krum and Bulyan suffer from well-crafted attacks, they perform well on naive attacks and even better than our plain SignGuard in mitigating the random-noise and sign-flip attacks. Though the DnC method is effective under many attacks, we find that it is unstable during training and can be easily broken by our proposed ByzMean attack. In contrast, our proposed SignGuard-type methods are able to distinguish most of those well-crafted malicious gradients and achieve satisfactory model accuracy under various types of attacks. Considering that the local data of Byzantine clients also contribute to the global model when no attack is performed, it is not surprising that even the best defense against Byzantine attacks still results in a small gap to the benchmark results.\n\n\n\\begin{figure*}[htbp]\n\t\\centering\n\t\\subfigure[CNN trained on Fashion-MNIST]{\n\t\t\\includegraphics[width=2.1\\columnwidth]{.\/figs\/fmnist_byznum.pdf}\n\t}\n\n\t\\subfigure[ResNet-18 trained on CIFAR-10]{\n\t\t\\includegraphics[width=2.1\\columnwidth]{.\/figs\/cifar_byznum.pdf}\n\t}\n\n\t\\caption{Accuracy drop comparison under various attacks and different percentages of Byzantine clients. SignGuard has the smallest gap to the baseline. }\n\t\\label{fig:acc_byznum}\n\\end{figure*}\n\n\n\\vspace{1ex}\n\\noindent \\textbf{Sign Statistics are Insufficient.} Table~\\ref{tab:rate_iid} reports the average selected rate of both benign and Byzantine clients during the training process of ResNet-18. We notice that the SignGuard-type methods inevitably exclude part of the honest gradients, and select some malicious gradients under the sign-flip attack, even with the help of the similarity feature. The reason is that the proportions of positive and negative elements in a normal gradient are roughly equal for ResNet-18, even after randomized downsampling of gradient elements. Consequently, the ratios of positive and negative signs remain roughly equal in the sign-flipped gradient. Therefore, the simple sign statistics are insufficient to distinguish honest gradients from sign-flipped ones. We also notice that although SignGuard-Sim is resilient to all kinds of attacks and achieves high accuracy results, it selects less than 80\\% of the honest gradients during training. 
One possible reason is that the cosine-similarity feature also has some diversity across honest gradients.\n\n\\begin{table}[htbp] \\footnotesize\n\t\\centering\n\t\\caption{Selected Rate of Honest and Malicious Gradients} \n\t\\label{tab:rate_iid} \n\t\\renewcommand\\arraystretch{1.3}\n\t\\newcommand{\\tabincell}[2]{\\begin{tabular}{@{}#1@{}}#2\\end{tabular}} \n\t\\begin{tabular}{| c | c | c | c | c | c | c |}\n\t\t\\hline\n\t\t\\multirow{2}{*}{\\tabincell{c} {\\textbf{Attack}}} & \n\t\t\\multicolumn{2}{c|}{\\textbf{SignGuard}} &\n\t\t\\multicolumn{2}{c|}{\\textbf{SignGuard-Sim}} &\n\t\t\\multicolumn{2}{c|}{\\textbf{SignGuard-Dist}} \\\\\n\t\t\\cline{2-3} \n\t\t\\cline{3-4} \n\t\t\\cline{4-5}\n\t\t\\cline{5-6}\n\t\t\\cline{6-7}\n\t\t& {H} & {M} & {H} & {M} & {H} & {M}\\\\\n\t\t\\hline\n\t\tByzMean & 0.9625 & 0 & 0.7791 & 0 & 0.9272 & 0.0003\\\\\n\t\t\\hline\n\t\tSign-flip & 0.6870 & 0.3908 & 0.7639 & 0.0981 & 0.7570 & 0.2440\\\\\n\t\t\\hline\n\t\tLIE & 0.9532 & 0 & 0.7727 & 0 & 0.9151 & 0\\\\\n\t\t\\hline\n\t\tMin-Max & 0.9650 & 0 & 0.7866 & 0.0003 & 0.9105 & 0.0009\\\\\n\t\t\\hline\n\t\tMin-Sum & 0.9640 & 0 & 0.7752 & 0 & 0.9111 & 0\\\\\n\t\t\\hline\n\t\\end{tabular}\n\\end{table}\n\n\\vspace{1ex}\n\\noindent \\textbf{Percentage of Byzantine Clients.} We also evaluate the performance of signGuard-Sim with different percentages of Byzantine clients. In this part, we conduct experiments of CNN trained on the Fashion-MNIST dataset and ResNet-18 trained on CIFAR-10 dataset. We keep the total number of clients be 50 and vary the fraction of Byzantine clients from 10\\% to 40\\% to study the impact of Byzantine percentage for different defenses. We use the default training settings, and experiments are conducted under various state-of-the-art attacks. Particularly, we compare the results of SignGuard-Sim with Median, TrMean, Multi-Krum and DnC as shown in Fig.~\\ref{fig:acc_byznum}. It can be seen that our approach can effectively filter out malicious gradients and result in slight accuracy drop regardless of the high percentage of Byzantine clients, while other defense algorithms suffer much more attack impact with increasing percentage of Byzantine clients. In particular, we also find that Multi-Krum can mitigate sign-flip attack well in ResNet-18 training, possibly because the exact percentage of Byzantine clients is provided.\n\n\\vspace{1ex}\n\\noindent \\textbf{Time-varying Attack Strategy.} Further, we test different defense algorithms under time-varying Byzantine attack strategy. We still use the default system setting, and change attack method randomly at each epoch (including no attack scenario). The test accuracy curves of CNN on Fashion-MNIST and ResNet-18 on CIFAR-10 are presented in Fig.~\\ref{fig:acc_random_attack}, where the baseline is training under no attack and no defense, and we only test the State-of-the-art defenses. It can be found that our SignGuard could ensure successful model training and closely follow the baseline, while other defenses resulted in significant accuracy fluctuation and model deterioration. For CNN, the training process even collapsed eventually for other defenses\n\n\\begin{figure}[htbp]\n\t\\centering\n\t\\subfigure[CNN on Fashion-MNIST]{\n\t\t\\includegraphics[width=0.46\\columnwidth]{.\/figs\/fmnist_attack.pdf}\n\t}\n\n\t\\subfigure[ResNet-18 on CIFAR-10]{\n\t\t\\includegraphics[width=0.46\\columnwidth]{.\/figs\/cifar_attack.pdf}\n\t}\n\n\t\\caption{Defense effect comparison under time-varying attacks. 
SignGuard can ensure safe training and achieve decent model accuracy. }\n\t\\label{fig:acc_random_attack}\n\\end{figure}\n\n\n\\begin{figure*}[htbp]\n\t\\centering\n\t\\subfigure[CNN on Fashion-MNIST]{\n\t\t\\includegraphics[width=2.0\\columnwidth]{.\/figs\/fmnist_noniid.pdf}\n\t}\n\n\t\\subfigure[CifarNet on CIFAR-10]{\n\t\t\\includegraphics[width=2.0\\columnwidth]{.\/figs\/cifar_noniid.pdf}\n\t}\n\t\\caption{Model accuracy comparison under various attacks and different degrees of non-IID skew. SignGuard has the best performance compared with other state-of-the-art defenses. }\n\t\\label{fig:acc_noniid}\n\\end{figure*}\n\n\n\\subsection{Main Results in Non-IID Settings}\n\nByzantine mitigation in non-IID FL settings is a well-known challenging task due to the diversity of honest gradients. We evaluate our SignGuard-Sim method on synthetic non-IID partitions of the Fashion-MNIST and CIFAR-10 datasets. As in previous works, we simulate the non-IID data distribution across clients by allocating an $s$-fraction of the dataset in an IID fashion and the remaining $(1-s)$-fraction in a sort-and-partition fashion (a short sketch of this procedure is given at the end of this subsection). Specifically, we first randomly select an $s$-proportion of the whole training data and evenly distribute it to all clients. Then, we sort the remaining data by label and divide them into multiple shards, where all data in the same shard share the same label, after which each client is randomly allocated 2 different shards. The parameter $s$ measures the skewness of the data distribution: a smaller $s$ generates a more skewed data distribution among clients. We consider three levels of skewness, with $s$ = 0.3, 0.5, 0.8, respectively. \n\n\\vspace{1ex}\n\\noindent \\textbf{Efficacy on Non-IID Data.} We compare SignGuard-Sim with various state-of-the-art defenses. As shown in Fig.~\\ref{fig:acc_noniid}, our method still works well under strong attacks in non-IID settings, achieving satisfactory accuracy results in various scenarios. In contrast, TrMean and Multi-Krum cannot defend against the LIE and ByzMean attacks, which makes them unreliable. Bulyan performs well for the CNN trained on Fashion-MNIST, but is ineffective under the LIE attack for ResNet-18 trained on CIFAR-10. DnC defends against the sign-flip attack well, but performs poorly in the other scenarios. Those results in non-IID settings further demonstrate the general validity of sign statistics.\n
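\n\\vspace{1ex}\n\\noindent \\textbf{A Sketch of the Non-IID Partition.} For concreteness, the sort-and-partition procedure described above can be sketched in a few lines of Python. The function name, arguments and seeding below are our own illustrative choices rather than the exact implementation used in our experiments, and the labels are assumed to be given as a NumPy array.\n\\begin{verbatim}\nimport numpy as np\n\ndef noniid_partition(labels, num_clients, s, shards_per_client=2, seed=0):\n    # Spread an s-fraction of the sample indices IID across clients; sort the\n    # rest by label and deal out label-homogeneous shards, 2 shards per client.\n    rng = np.random.default_rng(seed)\n    idx = rng.permutation(len(labels))\n    n_iid = int(s * len(labels))\n    iid_part, rest = idx[:n_iid], idx[n_iid:]\n    clients = [list(chunk) for chunk in np.array_split(iid_part, num_clients)]\n    rest = rest[np.argsort(labels[rest], kind='stable')]\n    shards = np.array_split(rest, shards_per_client * num_clients)\n    order = rng.permutation(len(shards))\n    for c in range(num_clients):\n        for k in range(shards_per_client):\n            clients[c].extend(shards[order[c * shards_per_client + k]])\n    return clients\n\\end{verbatim}\nA smaller $s$ leaves more of the data in the label-sorted shards and therefore yields a more skewed per-client label distribution, which matches the role of $s$ described above.\n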
\n\n\n\\subsection{Computational Overhead Comparison}\nThe following Table~\\ref{tab:aggtime} reports the averaged aggregation time of different defenses when training ResNet-18, where we omit TrMean and Median since they induce negligible computation cost. It can be seen that SignGuard requires the shortest time compared with GeoMed, Multi-Krum and Bulyan, which means our method can achieve efficiency and robustness simultaneously. For the other two variants, we found that the pairwise similarity\/distance calculation is time-consuming, and that using the previous aggregate as the reference gradient for computing similarity\/distance can alleviate this issue.\n\n\\begin{table}[htbp]\n\\centering\n\\caption{Averaged Aggregation Time} \n\\label{tab:aggtime} \n\\renewcommand\\arraystretch{1.2}\n\\newcommand{\\tabincell}[2]{\\begin{tabular}{@{}#1@{}}#2\\end{tabular}} \n\\begin{tabular}{| c | c | c | c | c |}\n\t\\hline\n\t\\textbf{Method} & GeoMed & Multi-Krum & Bulyan & SignGuard\\\\\n\t\\hline\n \\textbf{Time(s)} & 0.39314 & 0.29847 & 0.29629 & 0.04706\\\\\n\t\\hline \\hline\n\t\\multirow{2}{*}{\\tabincell{c} {\\textbf{Method}}}&\n\t\\multicolumn{2}{c|}{SignGuard-Sim}&\n\t\\multicolumn{2}{c|}{SignGuard-Dist}\\\\\n\t\\cline{2-3} \n\t\\cline{3-4} \n\t\\cline{4-5}\n\t& pairwise & previous & pairwise & previous \\\\\n\t\\hline\n\t\\textbf{Time(s)} & 0.73887 & 0.07686 & 0.39117 & 0.07834\\\\\n\t\\hline\n\\end{tabular}\n\\end{table}\n\n\\section{Discussions}\\label{secDiscussion}\n\nFrom the previous sections we can see that the SignGuard approach performs well in many scenarios and can mitigate a number of attacks effectively. Extensive experiments demonstrate that sign statistics are powerful for malicious gradient detection but also insufficient in some cases. In this section, we present some discussions of our approach.\n\n\n\\subsection{Strength and Limitation of SignGuard} \n\nOur SignGuard approach mainly leverages the sign statistics to distinguish malicious gradients from honest gradients, which overcomes the drawbacks of distance- and cosine-similarity-based detection methods and provides a new evaluation criterion for gradient correctness. The feasibility of our algorithm depends on the fact that the sign statistics of honest gradients gather in a compact range, which still lacks a theoretical explanation at the moment. The experimental results show that the sign statistics are capable of detecting recent state-of-the-art attacks, which are distance-indistinguishable but do not take the variation of sign statistics into account. This is the key strength of SignGuard; however, it also reveals the main limitation: the method essentially depends on the distinguishability of sign statistics. As shown in the evaluation results, if the ratios of positive and negative signs are approximately equal, then it is hard for the sign statistics alone to separate honest gradients from sign-flipped ones. If the attacker keeps the values of the gradient elements unmodified but shuffles their order, then the gradient norm and sign statistics will remain unchanged; this is why random coordinate selection and the similarity feature are required in the design of the SignGuard algorithm.\n\nAnother advantage is that SignGuard has good extensibility: not only sign statistics but also more sophisticated distance and similarity features can be extracted, and more advanced clustering algorithms can be applied to separate benign and malicious gradients. The final aggregation rule can also be replaced by TrMean or Multi-Krum to mitigate those malicious gradients that evade the detection filters. Another obvious limitation is that we only design a 2-class K-Means algorithm in this work, which limits its wider applicability. The K-Means algorithm has its own drawbacks, since it assumes that clusters are convex and isotropic, and using a fixed cluster number cannot handle dynamically varying attacks. Therefore, more advanced and adaptive clustering algorithms should be developed to improve flexibility and robustness.
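\nTo make the preceding discussion concrete, the following sketch illustrates the kind of sign-statistic clustering we have in mind. It is a simplified stand-in rather than the full SignGuard pipeline: the function names, the sampled coordinate fraction, and the use of a plain scikit-learn K-Means with majority-cluster selection are our own illustrative choices, and the norm-based and similarity-based filters are omitted.\n\\begin{verbatim}\nimport numpy as np\nfrom sklearn.cluster import KMeans\n\ndef sign_statistics(grads, sample_frac=0.1, seed=0):\n    # Fractions of positive, negative and zero entries per client gradient,\n    # computed on a random subset of coordinates (random coordinate selection).\n    rng = np.random.default_rng(seed)\n    dim = grads.shape[1]\n    idx = rng.choice(dim, size=max(1, int(sample_frac * dim)), replace=False)\n    sub = grads[:, idx]\n    pos = (sub > 0).mean(axis=1)\n    neg = (sub < 0).mean(axis=1)\n    return np.stack([pos, neg, 1.0 - pos - neg], axis=1)\n\ndef filter_by_sign_clustering(grads):\n    # Two-cluster K-Means on the sign statistics; keep the larger cluster,\n    # assuming honest clients form the majority, then average the survivors.\n    feats = sign_statistics(grads)\n    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats)\n    keep = labels == np.argmax(np.bincount(labels))\n    return grads[keep].mean(axis=0), keep\n\\end{verbatim}\nIn the full framework, the surviving gradients would additionally pass a norm-based filter and, in the SignGuard-Sim\/SignGuard-Dist variants, a similarity or distance feature would be appended before clustering.\n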
\n\\subsection{Possible Improvement for SignGuard} \n\nThe SignGuard framework in this work is a preliminary attempt that leverages the sign-gradient to address the Byzantine attack problem. We believe that there exist gradient characteristics more robust than naive sign statistics that can reflect malicious manipulation, and the design of the SignGuard framework needs to be improved as well. First, as shown in previous sections, the similarity feature is not always helpful and can even hide the sign-statistic distinction, since the features in K-Means are treated isotropically, so it is hard to decide whether to use the similarity feature or not. One possible solution is to apply more advanced clustering models such as spectral clustering\\cite{} and Gaussian mixture models (GMM) with the expectation-maximization (EM) algorithm\\cite{}. Second, when a variety of malicious gradients exist simultaneously, it is impossible to specify the number of clusters beforehand. In that case, the desired clustering model should be able to find a suitable number of clusters automatically, such as hierarchical clustering \\cite{} and DBSCAN\\cite{}. Third, we find that the sign statistics of a randomly selected subset of gradient elements are almost consistent with those of the original gradient, making the sign-flipped gradient still hard to detect when the ratios of positive and negative signs are approximately equal. Thus, the algorithm could possibly discard the coordinates where all gradients have positive (or negative) sign, enlarging the difference between the two ratios to improve cluster separability. Moreover, it is promising to improve SignGuard by combining it with existing defense algorithms, because no single defense method can mitigate all possible attacks, and different defense algorithms have their own advantages and weaknesses. For example, Multi-Krum is more robust to the sign-flip attack than SignGuard in both IID and non-IID settings, as shown in previous sections; hence we are encouraged to adopt the design ideas behind Multi-Krum to compute a useful distance feature and take it into consideration during malicious gradient filtering.\n\n\\subsection{Trade-off Among Safety, Security and Fairness} \n\nTo ensure the safety of model training, we would like more detailed information about the gradients, or even raw training data, in order to evaluate the correctness of each received gradient. If a small set of raw training data can be collected from voluntary clients, we can directly use it to validate the gradients. Otherwise, we should extract a number of statistical features, including gradient norm, distance, similarity and sign statistics, to perform anomaly detection. However, not only raw training data but also raw gradients carry the risk of privacy leakage. Hence, encryption algorithms and secure multiparty computation are also considered for data protection, which may not support many detection algorithms and makes Byzantine gradient detection especially challenging. Our SignGuard mainly makes use of the sign-gradient to achieve effective anomaly detection, and thus has the potential to preserve users' data security to a great extent. However, our experimental results also show that sign statistics and sign-similarity are insufficient in some cases, requiring the original gradient values to improve robustness. This is an important trade-off between safety and security when developing federated learning systems. 
Moreover, honest gradients could be filter out as well, which may lead to some algorithm unfairness, especially in non-IID scenarios.\n\n\n\n\n\n\n\n\n\\section{Conclusion and Future Work}\\label{secConclusion}\n\nIn this work, we proposed a novel Byzantine attack detection framework, namely SignGuard, to mitigate malicious gradients in federated learning systems. It can overcome the drawbacks of median- and distance-based approaches which are vulnerable to well-crafted attacks and unlike validation-based approaches that require extra data collection in PS. And it also does not depend on historical data or other external information, only utilizing magnitude and robust sign statistics from current local gradients, making it a practical way to defend most kinds of model poisoning attacks. Extensive experimental results on image and text classification tasks verify our theoretical and empirical findings, demonstrating the extraordinary effectiveness of our proposed SignGuard-type algorithms. We hope this work can provide a new perspective on the Byzantine attack problems in machine learning security. Future directions include developing strategies to defend dynamic and hybrid model poisoning attacks as well as backdoor attacks in more complex federated learning scenarios. And how to design more effective and robust filters in the SignGuard framework for real-world learning systems is also left as an open problem.\n\n\\section{Acknowledgment}\\label{secAcknowledgment}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{}\nIn many of statistical physics applications the entropy notion enters as\na basic concept suitable for characterize the behaviour of macroscopic\nsystems \\cite{entropy_economic,tsallis2004,entropy_ecology,Franzosi_EPL15,\nFranzosi_PRE16,Felice_PhysA_2018}.\nIn the present manuscript, we address the problem of the correct definition\nof the microcanonical entropy for classical systems. In fact, the latter\nconcern has recently become a matter of a debate where it has been discussed\nwhich one between the Boltzmann and the Gibbs definition provides the correct\nentropy. \n\nA mechanically and adiabatically isolated system, at the equilibrium and\ncomposed of a macroscopic number of interacting particles is statistically\ndescribed with the microcanonical ensemble.\nIn this statistic description the relevant thermodynamical quantities are derived\nfrom the entropy $S$ through suitable thermodynamic relations.\nNow, there are -at least- two accepted definitions for the microcanonical entropy,\nthe ones commonly referred to as Boltzmann entropy and Gibbs entropy.\nThe former is proportional to the logarithm\nof the density of microstates at a given ``energy shell'', whereas the latter \nis proportional to the logarithm of the number of microstates up to a given\nenergy.\nThe debate as to which of these definitions of entropy is the correct one dates back\nto many years ago\n\\cite{Hertz10,Einstein11,Schluter48,Jaynes,Munster_1987,Pearson85,Berdichevsky91,Adib04,\nLavis2005245,Campisi05}. \n\nVery recently \\cite{Dunkel2013,Hilbert_PRE_2014,Hanggi_15}, it has been argued that\nthe Gibbs entropy yields a consistent thermodynamics, and they have been discussed some\nconsistency issues that, the microcanonical statistical mechanics founded on the\nBoltzmann entropy, would unveil \n\\cite{Dunkel2013,Hilbert_PRE_2014,Sokolov_2014,DunkelHilbertRep1,DunkelHilbertRep2,\nCampisi_2015,Campisi_2016}. 
These and other related arguments \n\\cite{Romero-Rochin,Treumann_2014,Treumann_2014a} have\nbeen contended \\cite{Vilar_2014, Frenkel_2015, Schneider_2014,Wang_2015, Cerino_2015, Swendsen_Wang_Physicaa_2016,Puglisi_PhysRep_2017,\nBaldovin_JStatMech_2017},\nin what has become a lively debate.\nAlthough this may seem a marginal issue, it has crucial consequences about the foundations of statistical mechanics.\nFor instance the negative temperatures notion wouldn't make sense, since\nthey are a well founded concept in the Boltzmann description, whereas, they\nare forbidden in the case of the Gibbs entropy since the number of microstates\nwith energy below a given value $E$ is a non-decreasing function of $E$.\nEven if we do not share the point of view of authors of Refs.\n\\cite{Dunkel2013,Hilbert_PRE_2014,Hanggi_15, Campisi_2015,Campisi_2016},\nas we have clarified in Refs. \\cite{Buonsante_AoP_2016, Buonsante_2015}\nwhere we have shown that the Boltzmann entropy provides a consistent description\nof the microcanonical ensemble, in our opinion these authors must be given\ncredit for having raised this key question.\n\nA further issue raised by the authors of Refs.\n\\cite{Dunkel2013,Hilbert_PRE_2014,Hanggi_15,\nCampisi_2015,Campisi_2016} pertains to the fact that the caloric equation\nof state, for instance in the simple case of an isolated ideal gas system,\nderived with the Boltzmann entropy is not strictly extensive.\nAbout this point, in Ref. \\cite{Buonsante_AoP_2016,Swendsen_2017}, we have shown that the correction to the extensive behaviour, is of the order of $1\/(nd)$, and therefore it vanishes in the limit of infinite degrees of freedom.\nAlthough in the case of a macroscopic system (as the ones more often considered in\nstatistical mechanics) this is not an issue and it represents just an aesthetical\nmathematical problem, it pose a relevant matter when microcanonical thermodynamics\nis applied to systems that for their nature do not admit the thermodynamic limit.\nExamples of the latter class include proteins, DNA helix, nanosystems.\n\n\nIn the present manuscript we propose a modified version of the Boltzmann\nentropy that overcomes all of these issues. In fact, this entropy reproduces\nthe same results as the Boltzmann entropy for systems with a macroscopic\nnumber of particles and predicts the correct extensivity for the caloric\nequation in the case of small systems.\nLet $H(x)$ be a classical Hamiltonian describing an autonomous many-body system\nof $n$ interacting particles in $d$ spatial dimensions, whose\ncoordinates and canonical momenta $(q_1\\ldots, p_1 ,\\ldots)$ are represented as \n$N$-component vectors $x\\in \\mathbb{R}^{N}$, with $N=2nd$.\nMoreover, we assume that no other conserved\nquantities do exist in addition to the total energy $H$ \\cite{Franzosi_JSP11,Franzosi_PRE12}.\nLet \n$\nM_E = \\left\\{x\\in \\mathbb{R}^{N} | H(x) \\leq E \\right\\}\n$\nbe the set of phase-space states with total energy less than or equal to $E$.\nThe Gibbs entropy for this system is\n\\begin{equation}\nS_G (E) = \\kappa_B \\ln \\Omega(E) \\, ,\n\\label{gibbs}\n\\end{equation}\nwhere $\\kappa_B$ is the Boltzmann constant and\n\\begin{equation}\n\\Omega(E) = \\dfrac{1}{h^{nd}} \\int d^N x \\Theta(E-H(x)) \\, ,\n\\label{OmegaE}\n\\end{equation}\nis the number of states with energy below $E$. 
$h$ is the Planck constant and\n$\\Theta$ is the Heaviside function.\n\nThe Boltzmann entropy concerns the energy level sets \n$\n\\Sigma_E = \\left\\{x\\in \\mathbb{R}^{N} | H(x) = E \\right\\} \\, ,\n$\nand is given in terms of $\\omega(E) = \\partial \\Omega\/\\partial E$, according to\n\\begin{equation}\nS_B (E) = \\kappa_B \\ln \\left(\\omega(E)\\Delta \\right) \\, ,\n\\label{boltzmann}\n\\end{equation}\nwhere the constant $\\Delta$ with the dimension of energy makes the argument of the logarithm dimensionless,\nand\n\\begin{equation}\n\\omega(E) = \\dfrac{1}{h^{nd}} \\int d^N x \\delta(E-H(x)) \\, ,\n\\label{omegaE}\n\\end{equation}\nis expressed in terms of the Dirac $\\delta$ function. Remarkably, in the case\nof smooth level sets $\\Sigma_E$, $\\omega(E)$\ncan be cast in the following form \n\\cite{RughPRL97,Franzosi_JSP11,Franzosi_PRE12}\n\\begin{equation}\n\\omega(E) = \\dfrac{1}{h^{nd}} \n\\int_{\\Sigma_E} \\dfrac{m^{N-1}(\\Sigma_E)}{\\Vert\\nabla H(x) \\Vert} \\, ,\n\\label{omegaEdiff}\n\\end{equation}\nwhere $m^{N-1}(\\Sigma_E)$ is the metric induced from $\\mathbb{R}^N$ on the\nhypersurface $\\Sigma_E$ and $\\Vert\\nabla H(x) \\Vert$ is the norm of the gradient\nof $H$ at $x$. \n\nThe entropy that we propose here is\n\\begin{equation}\nS (E) = \\kappa_B \\ln \\left( \\sigma(E) \\Delta^{1\/2} \\right) \\, ,\n\\label{enew}\n\\end{equation}\nwhere\n\\begin{equation}\n\\sigma(E) =\\dfrac{1}{h^{nd}} \\int_{\\Sigma_E} m^{N-1}(\\Sigma_E) \\, .\n\\label{sigmaEdiff}\n\\end{equation}\nIn the case of a system of identical particles, to avoid the Gibbs\nparadox it is appropriate to introduce a factor 1\/n! in the definitions of $\\Omega$,\n$\\omega$ and $\\sigma$, Eqs. \\eqref{OmegaE}, \\eqref{omegaE}, \\eqref{omegaEdiff}\nand \\eqref{sigmaEdiff}, as we will do in the following.\n\n\nThe entropy is the fundamental thermodynamic potential of the microcanonical ensemble, from which\nsecondary thermodynamic quantities are obtained by derivatives with respect to the control\nparameters: the total energy $E$, the occupied volume $V$ and, possibly, further Hamiltonian\nparameters $A_\\mu$ (in the following we omit the explicit dependence on $A_\\mu$ in\norder to simplify the notation).\nThe inverse temperature $\\beta=(\\kappa_B T)^{-1}$ is derived from\nthe entropy according to $\\beta = (\\partial S\/\\partial E)\/\\kappa_B$, thus in the\nthree cases under consideration we have\n\\begin{eqnarray}\n\\beta_G &=& \n\\dfrac{\\Omega^\\prime}{\\Omega} \\, , \\\\\n\\beta_B &=& \n\\dfrac{\\omega^\\prime}{\\omega} \\, , \\\\\n\\beta &=& \n\\dfrac{\\sigma^\\prime}{\\sigma} \\, ,\n\\end{eqnarray}\nwhere the symbol $^\\prime$ denotes the partial derivative of the corresponding\nterm with respect to the energy $E$.\n\nA basic requisite for $S$ is that it allows the measurement of temperature\nand of the other secondary\nthermodynamic quantities via microcanonical averages.\nIn terms of the microscopic dynamics, from the Liouville theorem it follows that the\ninvariant measure $d \\mu$ for the dynamics on each energy level-set $\\Sigma_E$ is\n$d \\mu = {m^{N-1}(\\Sigma_E)}\/{\\Vert \\nabla H \\Vert}$.\nIn the case of the Boltzmann entropy, the temperature definition meets the\nmentioned requisite since\n\\begin{equation}\n\\beta_B =\\left\\langle \n\\nabla \\cdot \\left( \\frac{\\nabla H}{\\Vert \\nabla H \\Vert^2} \\right) \\right\\rangle \\, ,\n\\label{betaB}\n\\end{equation}\nwhere $\\langle \\rangle$ indicates the microcanonical average\n\\begin{equation}\n\\langle \\phi \\rangle = \\dfrac{1}{\\omega} 
\\int_{\\Sigma_E} \\phi d\\mu \\, .\n\\end{equation} \nEq. \\eqref{betaB} is derived in Ref. \\cite{RughPRL97}\nfor the case of many-particle systems for which the energy is the only conserved quantity, and in Refs. \n\\cite{Franzosi_JSP11,Franzosi_PRE12} for the general case of two or more\nconserved quantities. On the contrary, the Gibbs definition of \ntemperature does not meet such an important requisite, as extensively\ndiscussed in Ref. \\cite{Buonsante_AoP_2016}.\nBy using the Federer-Laurence derivation formula \\cite{Federer_1969,Laurence_1989,Franzosi_JSP11,Franzosi_PRE12}, \nin the case of the proposed entropy we get\n\\begin{equation}\n\\beta = \\dfrac{\\sigma^\\prime}{\\sigma} = \\dfrac{\\sigma^\\prime\/\\omega}{\\sigma\/\\omega} =\n\\dfrac{\\langle \\nabla \\cdot \\left( \\frac{\\nabla H}{\\Vert \\nabla H \\Vert} \\right) \\rangle}\n{\\langle \\Vert \\nabla H \\Vert \\rangle} \\, .\n\\label{beta}\n\\end{equation}\nThis shows that $S$ too, besides $S_B$, satisfies the requirement of providing secondary thermodynamic\nquantities measurable as microcanonical averages. In passing, we note that,\nunder the hypothesis of ergodicity, the\naverages of each dynamical observable of the system can be equivalently measured along the dynamics.\n\nAs a simple test, let us consider a classical ideal gas in $d$ spatial\ndimensions composed of $n$ \nidentical particles of mass $m$, for which it is an easy matter to verify that\n\\begin{eqnarray}\n\\Omega (E,V) &=& \\dfrac{V^n (2\\pi m)^{nd\/2}}{\\Gamma(\\frac{nd}{2} +1) n! h^{nd} } E^{nd\/2} \\, ,\n\\\\\n\\omega (E,V) &=& \\dfrac{V^n (2\\pi m)^{nd\/2}}{\\Gamma(\\frac{nd}{2}) n! h^{nd} } E^{nd\/2-1} \\, ,\n\\\\\n\\sigma (E,V) &=& \\dfrac{2 V^n (2\\pi m)^{nd\/2}}{\\Gamma(\\frac{nd}{2}) n! h^{nd} } E^{(nd-1)\/2} \\, ,\n\\end{eqnarray}\nwhere the factor $1\/n!$ is introduced in order to avoid the Gibbs paradox.\nFrom these formulas one finds the following expressions of the caloric \nequation\n\\begin{eqnarray}\n\\beta^{-1}_G &=& \\dfrac{E}{{nd}\/{2}} \\, ,\\\\\n \\beta^{-1}_B &=& \\dfrac{E}{\\left({nd}\/{2}-1\\right)} \\, , \\\\\n\\beta^{-1} &=& \\dfrac{E}{{(nd-1)}\/{2}} \\, .\n\\end{eqnarray}\n
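As a quick symbolic cross-check of the three ideal-gas caloric equations above, one may differentiate the logarithms of $\\Omega$, $\\omega$ and $\\sigma$ with respect to $E$. The short SymPy sketch below is added purely for illustration and is not part of the original derivation; only the energy dependence of each expression is retained, since the $E$-independent prefactors do not contribute to $\\partial \\ln(\\cdot)\/\\partial E$.\n\\begin{verbatim}\nimport sympy as sp\n\nE, n, d = sp.symbols('E n d', positive=True)\n# Energy dependence only; k_B is set to 1.\nlogs = {'Gibbs': (n*d\/2) * sp.log(E),\n        'Boltzmann': (n*d\/2 - 1) * sp.log(E),\n        'proposed': ((n*d - 1)\/2) * sp.log(E)}\nfor name, logS in logs.items():\n    beta = sp.diff(logS, E)\n    print(name, '1\/beta =', sp.simplify(1\/beta))\n\\end{verbatim}\nThe printed inverse temperatures are equivalent to the three expressions above.\n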
In the count of the degrees of freedom for a system of free particles,\nonly the kinetic term contributes. Thus, in $d$ spatial dimensions a\nsystem of $n$ particles has $nd$ degrees of freedom and, by setting\nthe energy $E$ to a given value, we are left with $nd-1$ degrees of\nfreedom.\nTherefore, among these only the latter expression is exactly extensive\nand, hence, rigorously satisfies the equipartition theorem for any $n$.\nWith an analogous calculation, it is an easy matter to show that for a system\nof $n$ independent identical harmonic oscillators, of mass $m$ and\nfrequency $\\nu$ in $d$ spatial dimensions, the caloric equations\nderived from the three entropies are\n\\begin{eqnarray}\n\\beta^{-1}_G &=& \\dfrac{E}{nd} \\, ,\\\\ \n\\beta^{-1}_B &=& \\dfrac{E}{\\left({nd}-1\\right)} \\, , \\\\\n\\beta^{-1} &=& \\dfrac{E}{{(2nd-1)}\/{2}} \\, .\n\\end{eqnarray}\nIn this case, both the coordinates and the momenta\ncontribute to the count of the degrees of freedom of the system.\nThus, when the energy has a fixed value $E$, the number of degrees\nof freedom is $2nd-1$, and only $S$ leads to the correct equipartition\nformula.\n\n\\emph{In addition to leading to the correct relation between the total energy\nand the true number of degrees of freedom, the entropy we propose rigorously\nsatisfies the postulate of equal a priori probability, which lies at the very\nfoundations of equilibrium microcanonical statistical mechanics.\nAs a matter of fact, for a generic isolated physical system at\nequilibrium, a given thermodynamic state is completely determined once\nwe know the values of the macroscopic parameters, such as energy, volume, and\npossibly further external parameters, that characterize the system.\nIn this way, from a thermodynamic point of view we do not distinguish\nbetween the states of the system represented by different points on\nthe same energy level set that are consistent with the further constraints.\nThis is just what Eq. 
\\eqref{sigmaEdiff} does: it ``counts the number''\nof microstates satisfying the macroscopic constraint $H=E$,\nconsistently with the above-mentioned postulate.\nOn the contrary, the standard\nBoltzmann entropy adopts a place-dependent weight\n$1\/\\Vert \\nabla H \\Vert$.}\n\nIn order to better clarify the connection between the Boltzmann\nentropy and the one we propose, let us perform the following\nrough calculation.\nFor a system with $N$ degrees of freedom, if $\\Delta E \\ll E$,\napproximately we have\n\\begin{equation}\n\\Omega(E+\\Delta E) - \\Omega(E) \\approx \\omega(E) \\Delta E + O(\\Delta E^{2})\\, ;\n\\end{equation}\non the other hand, by Cavalieri's principle, we have\n\\begin{equation}\n\\Omega(E+\\Delta E ) - \\Omega(E) \\approx \n\\left(\\sigma(E)\\Delta^{1\/2} \n\\right) \\dfrac{\\Delta E}{\\Delta} + O(\\Delta E^{2})\n\\, .\n\\end{equation}\nHence it follows that\n$\n\\sigma (E)\\Delta^{1\/2} = \\omega(E)\\Delta + O(N^{2})\n$\nand, consequently,\n\\begin{equation}\n\\lim_{N\\to \\infty} \\dfrac{1}{N} \\left(\\ln (\\sigma \\Delta^{1\/2})-\\ln(\\omega \\Delta)\n\\right)\n= 0 \\, .\n\\label{bigN}\n\\end{equation}\nThis makes it evident that in the limit of a large number of degrees of freedom\nthe proposed entropy predicts the same results as the Boltzmann entropy,\nwhereas in the case of systems with small $N$ the two\nentropies differ from each other.\n\nIn order to verify our assumption, we have tested the proposed\nentropy on two systems: the two dimensional $\\Phi^4$ model and\na one dimensional model of rotors.\n\nThe $\\phi^4$ model \\cite{Franzosi_PRE99,Franzosi_PRL00,BPV_PRB04,Franzosi_PRA10} is defined by the Hamiltonian\n\\begin{equation}\nH = \\sum_{\\bf j} \\dfrac{1}{2} \\pi^2_{\\bf j} + V(\\phi)\n\\label{Hphi4}\n\\end{equation}\nwhere\n\\begin{equation}\nV(\\phi) =\\sum_{\\bf j} \\left[ \n \\dfrac{\\lambda}{4!} \\phi^4_{\\bf j} - \\dfrac{\\mu^2}{2}\\phi^2_{\\bf j} +\n\\dfrac{J}{4} \\sum_{{\\bf k}\\in I({\\bf j})} (\\phi_{\\bf j} - \\phi_{\\bf k})^2\n \\right] \\, ,\n\\label{Vphi4}\n\\end{equation}\nand $\\pi_{\\bf j}$ is the conjugate momentum of the variable $\\phi_{\\bf j}$\nthat defines the field at the ${\\bf j}^{th}$ site. Here,\n${\\bf j} = (j_1,j_2)$ denotes a site of a two dimensional lattice\nand\n$I({\\bf j})$ denotes the nearest-neighbour lattice sites of the\n${\\bf j}^{th}$ site. The coordinates of the sites are integer numbers\n$j_k =1,\\ldots,N_k$, $k=1,2$, so that the total number of sites\nin the lattice is $N=N_1\\,N_2$. Furthermore, periodic boundary conditions\nare assumed.\nThe local potential displays a double-well shape whose minima are located \nat $\\pm \\sqrt{{3! \\mu^2}\/{\\lambda}}$, to which corresponds \nthe ground-state energy per particle $e_0 = - 3! \\mu^4\/(2 \\lambda)$.\nAt low energies the system is dominated by an ordered phase in which the time \naverages of the local field are non-vanishing. By increasing the\nenergy the system undergoes a second order phase transition and\nthe local $\\mathbb{Z}_2$ symmetry is restored. In fact, at high\nenergies the time averages of the local field go to zero.\n\nThe second model \\cite{Cerino_2015} is composed of $N$ rotators with\ncanonical coordinates\n$\\phi_1,\\ldots,\\phi_N,\\pi_1,\\ldots,\\pi_N$ and Hamiltonian\n\\begin{equation}\nH=\\sum^N_{j=1} [1-\\cos(\\pi_j)] + \\epsilon \n\\sum^N_{j=1}[1-\\cos(\\phi_j-\\phi_{j-1})] \\, ,\n\\label{Hrot}\n\\end{equation}\nwhere $\\phi_0 = 0$ is assumed. The form of the kinetic and potential terms
The form of kinetic and potential terms\nin \\eqref{Hrot} makes the energy bounded either from above and from below\nand such Hamiltonian implies the existence of negative Boltzmann temperatures\n\\cite{Cerino_2015}.\n\nWe have numerically integrated the equation of motion associated to the \nHamiltonian of both the models, by using a third order symplectic algorithm\nand starting from initial conditions corresponding to different values\nof the system total energy $E$. We have measured along the dynamics the time averages of the relevant quantities that appear in \\eqref{betaB} \nand \\eqref{beta} and, then we have derived the curves $\\beta_B(E)$ and\n$\\beta(E)$ for the two models.\n\\begin{figure}[h]\n \\includegraphics[height=5.cm]{phi4_beta-e.eps}\n\\caption{The figure compares $\\beta_B(E\/N)$ (dotted line) and\n$\\beta(E\/N)$ (continuous line) numerically computed for a lattice\nof $128\\times 128$ sites for the\n$\\Phi^4$-model. The agreement is astonishing,\nin fact the two curves are indistinguishable. In the inset we report\na zoom in order to show the two curves.\n\\label{fig1}}\n\\end{figure} \n\\begin{figure}[h]\n \\includegraphics[height=5.cm]{rot_1d_beta-e.eps}\n\\caption{The figure compares $\\beta_B(E\/N)$ (dotted line) and\n$\\beta(E\/N)$ (continuous line) numerically computed for an array of $512$\nrotors. Also here the agreement is astonishing,\nthe two curves are indistinguishable thus we report the inset with\na zoom that shows the two curves.\n\\label{fig2}}\n\\end{figure} \nFigs. 1 and 2 clearly show the remarkable agreement between the curves\n$\\beta_B(E\/N)$ and $\\beta(E\/N)$, for both the models studied.\n\nIn conclusion we have proposed a novel definition of the microcanonical\nentropy for classical systems.\nWe have shown that this definition definitely resolve the debate\non the correct definition of the microcanonical entropy. In fact, we have\nshown that this entropy definition fixes the issue inherent the full\nextensivity of the caloric equation.\nFurthermore, we have given evidence by investigating\ntwo different models, that this entropy reproduces results which are in\nagreement with the ones predicted with standard Boltzmann entropy in the\ncase of macroscopic systems. \nSince the differences between the predictions of Boltzmann entropy and\nof the one here proposed, are more evident in systems with small number of\ndegrees of freedom, we conclude that the Boltzmann entropy (with the our one)\nprovides a correct description for macroscopic systems whereas\nextremely small systems should be described with the entropy that we\nhave proposed in order to avoid, for instance, issues with the\nextensivity of the caloric equation.\n\n \n\n\n\n\n\n\n\n\n\n\n\n\\begin{acknowledgments}\nWe are grateful to A. Smerzi and P. Buonsante for useful discussions.\n\\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Sample Section Title}\n\\label{sec:sample1}\n\n\nLorem ipsum dolor sit amet, consectetur adipiscing \\citep{Fabioetal2013} elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud \\citet{Blondeletal2008} exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. 
Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit \\citep{Blondeletal2008,FabricioLiang2013} anim id est laborum.\n\nLorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum see appendix~\\ref{sec:sample:appendix}.\n\n\n\\section{Sample Section Title}\n\\label{sec:sample1}\n\n\nLorem ipsum dolor sit amet, consectetur adipiscing \\citep{Fabioetal2013} elit, sed do eiusmod tempor incididunt ut labore et dolore magna \\citet{Blondeletal2008} aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit \\citep{Blondeletal2008,FabricioLiang2013} anim id est laborum.\n\nLorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum see appendix~\\ref{sec:sample:appendix}.\n\n\n\\section{Introduction}\n\\label{sec1}\nCompound refractive lens assemblies (CRLs) are frequently used as objective lenses in X-ray microscopes \\cite{lengeler99}, or as upstream condensers \\cite{schroer05,vaughan11}. Both applications are extremely sensitive to lateral misalignment. The full focusing procedure of a CRL requires five degrees of freedom. Independent translations along the $x,$ and $y$ axes, in addition to two rotations $r_x$ and $r_y$ about those axes, produce a lateral alignment. The fifth degree of freedom is a translation along the $z$ axis, which locates the classical position of the focus (see Figure~\\ref{hugh_image}). \n\nOur principal motivation for this work is the task of laterally aligning compound refractive lens assemblies (CRLs) at X-ray free electron laser facilities (XFELs). XFELs are a new class of X-ray sources that produce the shortest duration and brightest X-ray pulses currently attainable, paving the way for experiments that were previously not possible \\cite{Yabashi2017}. A new approach to the alignment of CRLs is necessary, in large part due to the novel amplification process to generate X-ray pulses. At XFEL facilities, Self-Amplification of Spontaneous Emission (SASE) causes the beam position, spatial mode, propagation direction (pointing), and intensity to fluctuate stochastically \\cite{Emma2010,Schneidmiller2016}. The proper lateral alignment of focusing optics is crucial to produce the highest resolution and smallest focal spots, as required for many modern X-ray experiments.\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=5cm]{CRL_diagram.eps}\n \\caption{This example diagram depicts a CRL-based imaging configuration along an optical axis $z$. A CRL is aligned to the optical axis by four independent motors. 
Two control translations along the perpendicular $x$ and $y$ axes, and two control rotations about the $x$ and $y$ axes.}\n \\label{hugh_image}\n\\end{figure}\n\nThe orientation of the CRL is controlled by four motorized stages: two that translate the optic along the $x$ and $y$ axes, and two that rotate about them. To align this optic, a detector is placed behind the exit surface of the CRL to measure the transmitted X-ray beam. Typically, these sensors are either charge-coupled device (CCD) cameras or different implementations of a photodiode (e.g. ion chamber). The beam-line scientist's task is to laterally align the CRL by maximizing X-ray transmitted light that reaches the sensor as a function of the four positions. \n\nIn more formal language, let $f\\colon \\mathbb{R}^4 \\to \\mathbb{R}$ denote an idealized model of a noiseless, steady (in space and amplitude) X-ray transmission through a CRL as a function of the orientation. In \\cite{simons17}, Simons et al. demonstrated that for a convex region $\\Omega \\subset \\mathbb{R}^4$, that transmission $f$ can be modelled with a Gaussian distribution. In this idealized case, the alignment procedure reduces to the trivially convex optimization problem\n\\begin{equation*}\n\\min_{{\\bf x} \\in \\Omega} -f({\\bf x}).\n\\end{equation*}\n\nGiven the simplicity of the problem, it is common at scientific beam-lines with more-stable amplitudes to establish lateral alignment through a simple manual process. A recent study automated this task by using a modified stochastic simplex method on the lateral alignment of CRL assemblies at synchrotron facilities \\cite{Breckling2021}. In lieu of automating, the usual approach is to perform a rough initial alignment, then select two of the four dimensions of $\\Omega$ and perform a raster scan of the transmission, logging a detector's response at each particular orientation. The ``best\" position from that 2D scan is selected, and the micro controllers are driven to that position. The alternate dimensions are then selected, and the procedure repeats until alignment is satisfactory. \n\nFor many scientific beam-lines, this dead-reckoning approach is sufficient. Unfortunately, the SASE process for generating X-rays at XFEL facilities introduces an unpredictable time-dependent intensity drift, as well as stochastic perturbations of the beam's propagation axes. This, of course, is in addition to the usual sources of measurement noise. These complications prevent the reliable success of a direct implementation of the simplex-based approach seen in \\cite{Breckling2021}. Given that it is only possible to record X-ray transmission for a single orientation at a single moment in time, an orderly raster-like scan of the transmission at an XFEL facility is not likely to see a distribution that strongly agrees with the Gaussian model developed in \\cite{simons17}. As a result, it remains the common practice to rely heavily on the intuition of the beamline scientist to interpret such scans, substantially extending the time required to produce an acceptable initial lateral alignment, and realignment. Given that time is an extremely limited resource at XFELs, an alternative technique to quickly and reliably expedite this procedure is sought. \n\nIn this paper we propose a technique to estimate the gradient of the transmission function that accounts for both time-dependent amplitude fluctuations, and instrumentation noise. If successful, such a gradient could be utilized in a classic steepest descent algorithm. 
Given that stochastic descent-based approaches have been successful in automating similar optical alignment and focusing tasks, including the control of directed energy sources \\cite{belen2007laboratory}, aligning line-of-sight communication arrays \\cite{Raj2010}, and the alignment of two-mirror telescopes \\cite{Li20}, we suspect that these corrections will allow for expedient and accurate alignments in our application. \n\nLet $t \\in \\mathbb{R}^+$ represent time, $T\\colon \\mathbb{R}^+ \\to \\mathbb{R}$ be an arbitrarily smooth function which denotes the intensity of the beam over time, $\\varsigma$ be the aggregate of all additive stochastic noise, and $\\Theta$ denote stochastic perturbations to the beam's orientation. We then formulate our estimate of the transmission function as $G({\\bf x},t)=-T(t)f({\\bf x}+\\Theta) + \\varsigma$. Our alignment procedure then looks like the optimization problem\n\\begin{equation}\\label{exp_prob}\n\\min_{{\\bf x} \\in \\Omega} E\\left[G({\\bf x},t)\\right], \\text{ for all } t \\geq 0,\n\\end{equation}\nwhere $E$ is the expected value. While this problem does admit an optimal solution, common stochastic steepest-descent methods are not amenable to finding it without directly addressing the amplitude fluctuations \\cite{Spall}. \n\nWe propose an approach to this problem that substitutes the usual finite difference method with one which corrects for the non-steady amplitude. The method systematically intertwines the usual spatial samples for the gradient with additional samples from a fixed central location. We demonstrate through error asymptotics and numerical benchmarks that these additional samples, when collected at a sufficient rate, can account for amplitude changes in intensity over time. Thus, an amplitude-corrected gradient, when paired with a standard stochastic descent algorithm, becomes well-suited for minimization problems like \\eqref{exp_prob}.\n\nThe remainder of the paper is organized as follows: We formally introduce the amplitude-correcting scheme in Section~\\ref{DiffScheme}, along with notation and asymptotic error estimates. We provide two numerical benchmarks in Section~\\ref{NumericalExperiments}. There, we first develop asymptotic error estimates for stochastic gradient descent (SGD) schemes using the amplitude correction, along with a demonstration of the resulting convergence rates. We then demonstrate the efficacy of our amplitude-correcting gradient on a modified version of the Rosenbrock valley benchmark. Section~\\ref{XFEL} outlines how our method shows promise in automating the lateral alignment of CRLs at X-ray experimental facilities. There, we provide a proof-of-concept implementation of our full optimization scheme against a synthetic cost function modelled to behave appreciably like one used at a genuine XFEL facility. Finally, we provide remarks in summary in Section~\\ref{conclusions}.\n\n\\section{Constructing the Amplitude-Correcting Differencing Scheme} \\label{DiffScheme}\nGiven a function $f\\colon \\mathbb{R}^n \\to \\mathbb{R}$, we are primarily concerned with computing estimates of $\\nabla f$. To this end, we assume a high degree of smoothness, i.e., $f$ is sufficiently G\\^{a}teaux and Fr\\'{e}chet differentiable to satisfy the necessary conditions of our estimates to follow. 
The gradient of a function at a particular point ${\\bf x}_c \\in \\mathbb{R}^n$ is typically estimated by sampling that function $\\mathcal{O}(N)$ $(N \\in \\mathbb{N}, N > n)$ times in a local region around ${\\bf x}_c$. We consider an $n$-dimensional ball of radius $\\delta>0$ centered at ${\\bf x}_c$, denoted $\\mathcal{B}_\\delta({\\bf x}_c)$, and define $\\Omega$ to be an open, connected, bounded set containing $\\mathcal{B}_\\delta({\\bf x}_c)$ within its interior. \n\nFor our application, $n=4$, given the degrees of freedom for lateral alignment. Additionally, the function $f$ can only be evaluated at one particular position ${\\bf x} \\in \\mathbb{R}^n$ at a time. Sampling another position ${\\bf x}' \\in \\mathbb{R}^n$ requires a discrete amount of time $h>0$ to elapse. Given a particular starting time $t_0 > 0$, we denote the interval of time required to compute a gradient using our technique defined below to be \n\\begin{equation*}\n \\mathcal{T}_{h,N} := [t_0,t_0 + (4N + 1)h].\n\\end{equation*}\nIn the interest of brevity, we may refer to $\\mathcal{T}_{h,N}$ as simply $\\mathcal{T}$. Finally, we assume that our amplitude function $T\\colon \\mathbb{R}^+ \\to \\mathbb{R}$ is at least four-times differentiable, i.e., $T \\in \\mathcal{C}^4(\\mathcal{T})$.\n\nLet $E$ and $V$ denote the expectation and variance of a time series over $\\mathcal{T}$. Our source of additive noise is assumed to be normal and i.i.d. such that $E(\\varsigma) = 0$ and $V(\\varsigma) = \\sigma^2$. Our smooth and additive noise-corrupted functions are written:\n\\begin{eqnarray*}\n\tF({\\bf x},t) &=& T(t)f({\\bf x}), \\\\\n\tG({\\bf x},t) &=& F({\\bf x},t) + \\varsigma(t).\n\\end{eqnarray*}\n\nTo organize our scheme, we arrange our sample indices serially in terms of the position in $\\mathcal{B}_\\delta({\\bf x}_c)$ and time $t_k \\in \\mathcal{T}$. Let ${\\bf e}$ be an arbitrary unit vector in $\\mathbb{R}^n$. For the noise-free case, we write:\n\\begin{center}\n\t\\begin{tabular}{rclcl}\n\t\t$F({\\bf x}_c,t_k)$ & = & $F_k^c$ & = & $T_k f^c$, \\\\\n\t\t$F({\\bf x}_c \\pm \\delta {\\bf e}, t_k \\pm h)$ & = & $F_{k\\pm1}^{{\\bf e}^{\\pm}} $& = & $T_{k\\pm1} f^{{\\bf e}^{\\pm}}$.\n\t\\end{tabular}\n\\end{center}\n\\noindent Similarly, the noise-corrupted case is written:\n\\begin{center}\n\t\\begin{tabular}{rclcl}\n\t\t$G({\\bf x}_c,t_k)$ & = & $G_k^c$ & = & $F_k^c + \\varsigma_k$, \\\\\n\t\t$G({\\bf x}_c \\pm \\delta {\\bf e}, t_k \\pm h)$ & = & $G_{k\\pm1}^{{\\bf e}^{\\pm}} $& = & $F_{k\\pm1}^{{\\bf e}^{\\pm}} + \\varsigma_{k\\pm1}$,\n\t\\end{tabular}\n\\end{center}\nwhere ${\\bf e}$ is a unit vector in the selected direction.\n\nWe use the over-bar shorthand to denote time-averaged terms, e.g.,\n\\begin{equation*}\n\\bar{F}_k^c = \\frac{F_{k-1}^c + F_{k+1}^c}{2}.\n\\end{equation*}\nWe make use of the usual norm notation, i.e., $|| \\cdot ||_2$ denotes an $L^2$ norm; the subscript is dropped in the context of Euclidean vectors. When discussing discretized approximations to the usual gradient operator $\\nabla$, we use $\\nabla_\\delta$ to denote the uncorrected differencing scheme provided in Definition \\ref{basic_grad}, and $\\nabla_{\\delta,h}$ for the amplitude-correcting gradient estimate developed further below. Directional derivative operators and their approximations are then written as $({\\bf e} \\cdot \\nabla)$, $({\\bf e} \\cdot \\nabla_\\delta)$, and $({\\bf e} \\cdot \\nabla_{\\delta,h})$ respectively. 
\n\n\\begin{definition}[A Linear Regression-Based Gradient Estimate]\\label{basic_grad} Let $\\delta > 0$, and $\\Omega \\subset \\mathbb{R}^n$ contain the open ball $\\mathcal{B}_{\\delta}({\\bf x}_c)$. Further, let the points $\\lbrace {\\bf x}_i \\rbrace_{i=1}^{N}$ be a collection of $N$ unique points on the surface of the ball $\\mathcal{B}_{\\delta}({\\bf x}_c)$ such that $N>n$. For a given function $f :\\Omega \\rightarrow \\mathbb{R}$, sample each point on the ball, collecting each sample in the vector ${\\bf F} = \\lbrace f_i \\rbrace_{i=1}^{N}$. Use the matrix ${\\bf X} = \\lbrace 1, {\\bf x}_i \\rbrace_{i=1}^{N}$ and corresponding samples ${\\bf F}$ to assemble the linear regression problem \n\\begin{equation*}\n {\\boldsymbol \\eta} = ({\\bf X}^T {\\bf X})^{-1} {\\bf X}^T {\\bf F}.\n\\end{equation*}\nThe solution ${\\boldsymbol \\eta} = \\lbrace \\eta_i \\rbrace_{i=1}^{n+1}$ determines the gradient estimate\n\\begin{equation*}\n \\nabla f({\\bf x}_c) \\approx \\nabla_\\delta f({\\bf x}_c) =: \\lbrace \\eta_k \\rbrace_{k=2}^{n+1}.\n\\end{equation*}\n\\end{definition}\n\n\\subsection{The Differencing Scheme}\n\\label{sec:diffscheme}\nThe definition below is assembled similarly to that seen in Definition \\eqref{basic_grad}, but coordinates all sampling according to a uniformly-discretized time series. A example diagram is provided in Figure \\ref{ball}. If the sampling distance $\\delta > 0$ remains uniform, it is assumed that the time required to visit each point within the sequence is uniform. While this isn't a necessary limitation in practice, this assumption simplifies the analysis provded in Appendix \\ref{AccuracyEst}.\n\n\\begin{figure} \n\t\\begin{center}\n\t\t{\\bf An Example 6-Point Stencil in 2D} \\\\\n\t\t\\begin{tikzpicture}[fill = white]\n\t\t\\draw[blue!20,thick,dashed] (0,0) circle (2cm);\n\t\t\\path (0,0) node(a) [circle, draw, fill] {${\\bf x}_c$}\n\t\t(2.0,0.0) node(b) [circle, draw, fill] {${\\bf x}_1$}\n\t\t(1, 1.7320) node(c) [circle, draw, fill] {${\\bf x}_3$}\n\t\t(-1,1.7320) node(d) [circle, draw, fill] {${\\bf x}_5$}\n\t\t(-2.0,0) node(e) [circle, draw, fill] {${\\bf x}_2$}\n\t\t(-1,-1.7320)node(f) [circle, draw, fill] {${\\bf x}_4$}\n\t\t( 1,-1.7320)node(g) [circle, draw, fill] {${\\bf x}_6$};\n\t\t\n\t\t\\draw[blue!100,thick,-{Straight Barb[left]}] (node cs:name=a, angle=8 ) -- (node cs:name=b, angle=180-8);\n\t\t\\draw[blue!100, thick,-{Straight Barb[left]}] (node cs:name=b, angle=180+8) -- (node cs:name=a, angle=-8 ); \n\t\t\n\t\t\\draw[purple!100,thick,-{Straight Barb[left]}] (node cs:name=a, angle=60+8 ) -- (node cs:name=c, angle=180+60-8);\n\t\t\\draw[purple!100, thick,-{Straight Barb[left]}] (node cs:name=c, angle=180+60+8) -- (node cs:name=a, angle=60-8 ); \t\t\t\n\t\t\n\t\t\\draw[green!100,thick,-{Straight Barb[left]}] (node cs:name=a, angle=120+8 ) -- (node cs:name=d, angle=180+120-8);\n\t\t\\draw[green!100, thick,-{Straight Barb[left]}] (node cs:name=d, angle=180+120+8) -- (node cs:name=a, angle=120-8 ); \t\n\t\t\n\t\t\\draw[blue!100,thick,-{Straight Barb[left]}] (node cs:name=a, angle=180+8 ) -- (node cs:name=e, angle=180+180-8);\n\t\t\\draw[blue!100, thick,-{Straight Barb[left]}] (node cs:name=e, angle=180+180+8) -- (node cs:name=a, angle=180-8 ); \t\t\t\n\t\t\n\t\t\\draw[purple!100,thick,-{Straight Barb[left]}] (node cs:name=a, angle=240+8 ) -- (node cs:name=f, angle=180+240-8);\n\t\t\\draw[purple!100, thick,-{Straight Barb[left]}] (node cs:name=f, angle=180+240+8) -- (node cs:name=a, angle=240-8 ); 
\t\t\t\n\t\t\n\t\t\\draw[green!100,thick,-{Straight Barb[left]}] (node cs:name=a, angle=300+8 ) -- (node cs:name=g, angle=180+300-8);\n\t\t\\draw[green!100, thick,-{Straight Barb[left]}] (node cs:name=g, angle=180+300+8) -- (node cs:name=a, angle=300-8 ); \t\t\t\n\t\t\\end{tikzpicture}\n\t\\end{center}\n\t\\caption{\\label{ball} This depicts an example six-point ($N=3$) sampling stencil for a two-dimensional search space. The procedure requires a total of 13 samples. Begin by sampling at ${\\bf x}_c.$ Next, sample at ${\\bf x}_1$, then return and sample ${\\bf x}_c$. Repeat this process sequentially for the remaining ${\\bf x}_i$. }\n\\end{figure}\n\n\n\\begin{definition}[Amplitude-Correcting Gradient Estimate]\\label{def_grad} Let $\\delta, h > 0,$ and $\\Omega \\subset \\mathbb{R}^n$ contain the open ball $\\mathcal{B}_{\\delta}({\\bf x}_c)$. Let the points $\\lbrace {\\bf x}_i \\rbrace_{i=1}^{N}$ be a collection of $N$ unique points on the surface of the ball $\\mathcal{B}_{\\delta}({\\bf x}_c)$, along with the $N$ corresponding antipodal points $\\lbrace {\\bf x}_i' \\rbrace_{i=1}^N$, such that $N>n$. Let $\\mathcal{T}_{h,N}$ be the uniform discretization of the time interval $\\mathcal{T}.$ For the function $f:\\Omega \\rightarrow \\mathbb{R}$, the amplitude function $T:\\mathcal{T} \\rightarrow \\mathbb{R}$, and additive noise $\\varsigma: \\mathcal{T} \\rightarrow \\mathbb{R}$, we write the given function $G : \\Omega \\times \\mathcal{T} \\rightarrow \\mathbb{R}$ such that $G({\\bf x},t) = T(t)f({\\bf x}) + \\varsigma(t)$. Let $\\mu_T = E\\left[T\\left(\\mathcal{T}_{h,N} \\right) \\right]$, and ${\\bf e}_k^+$ be the unit vector in the direction ${\\bf x}_k - {\\bf x}_c.$ Index our uniformly discretized time-steps as $k=1,\\ldots,4N+1$. Sample ${\\bf x}_c$ at odd values of $k,$ i.e. $k = 2k'-1$, collecting each sample as $G_{k}^c.$ Time-average these values gives $\\bar{G}_{2k'}^c$. For even values of $k,$ i.e. $k= 2k'$, alternate sampling ${\\bf x}_{k'}$ and its antipodal counterpart ${\\bf x}_{k'}'$, collecting each sample as $G_{k}^{{\\bf e}_{k}^+}.$ We organize the matrix ${\\bf X}$ such that\n\\begin{equation*}\n {\\bf X} = \\begin{bmatrix}\n 1 & {\\bf x}_1 \\\\\n 1 & {\\bf x}_1' \\\\\n \\vdots & \\vdots \\\\\n 1 & {\\bf x}_N \\\\\n 1 & {\\bf x}_N'\n\\end{bmatrix}, \n\\end{equation*}\nthe sample matrix ${\\bf G}$ such that\n\\begin{equation*}\n {\\bf G} = \\frac{1}{\\mu_T}\\lbrace G_{2i}^{{\\bf e}_{2i}^+} - \\bar{G}_{2i}^c , \\rbrace_{i=1}^{2N}\n\\end{equation*}and the linear regression problem \n\\begin{equation*}\n {\\boldsymbol \\eta} = ({\\bf X}^T {\\bf X})^{-1} {\\bf X}^T {\\bf G}.\n\\end{equation*}\nThe solution ${\\boldsymbol \\eta} = \\lbrace \\eta_i \\rbrace_{i=1}^{n+1}$ determines our gradient estimate\n\\begin{equation*}\n \\nabla f({\\bf x}_c) \\approx \\nabla_{\\delta,h}G({\\bf x}_c, \\mathcal{T}) =: \\lbrace \\eta_i \\rbrace_{i=2}^{n+1}.\n\\end{equation*}\n\\end{definition}\n\nIf we use the following short-hand for the standard central-differencing stencil (in spatial coordinates), directional derivatives can be written \n\\begin{equation*}\n\\left({\\bf e}_k \\cdot \\nabla_{\\delta}\\right)f({\\bf x}_c) = \\frac{1}{2\\delta} \\left(f^{{\\bf e}_k^+} - f^{{\\bf e}_k^-}\\right).\n\\end{equation*}\nOur convention of selecting antipodal points in sequence allows us to utilize these directional derivative stencils directly. 
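\nA compact numerical sketch of the estimator in Definition \\ref{def_grad} may help fix ideas. The code below is our own illustrative implementation rather than part of the formal development: the callable $G$, the argument names, and the use of an ordinary least-squares solve are assumptions. It follows the $4N+1$-sample schedule, subtracts the time-averaged centre value from each displaced sample, and rescales by $\\mu_T$ before regressing.\n\\begin{verbatim}\nimport numpy as np\n\ndef amplitude_corrected_gradient(G, x_c, directions, delta, h, mu_T, t0=0.0):\n    # G(x, t): measured response; directions: (N, n) array of unit vectors;\n    # mu_T: known mean of the amplitude T over the sampling window.\n    pts, vals = [], []\n    t = t0\n    g_prev_c = G(x_c, t)                 # initial centre sample\n    for e in directions:\n        for sgn in (1.0, -1.0):          # displaced point, then its antipode\n            t += h\n            x = x_c + sgn * delta * e\n            g_x = G(x, t)\n            t += h\n            g_next_c = G(x_c, t)         # return to the centre\n            pts.append(x)\n            vals.append((g_x - 0.5 * (g_prev_c + g_next_c)) \/ mu_T)\n            g_prev_c = g_next_c\n    X = np.hstack([np.ones((len(pts), 1)), np.array(pts)])\n    eta, *_ = np.linalg.lstsq(X, np.array(vals), rcond=None)\n    return eta[1:]   # slope coefficients approximate grad f(x_c)\n\\end{verbatim}\nWith $T \\equiv 1$ and noise-free samples, the centred differences reduce this to the regression estimate of Definition \\ref{basic_grad} applied to the $2N$ displaced points, up to the constant offset absorbed by the intercept term.\n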
Since each observation of $G({\\bf x},t)$ results in an independent noise term $\\varsigma$, combining like-terms results in\n\\begin{align*}\n\t\\left({\\bf e}_k \\cdot \\nabla_{\\delta,h}\\right)G({\\bf x}_c,\\mathcal{T}) = \\frac{1}{2\\mu_T\\delta} & \\left[\\left(G_{2k}^{{\\bf e}_k^+} - \\bar{G}_{2k}^{c} \\right) - \\left(G_{2k+2}^{{\\bf e}_k^-} - \\bar{G}_{2k+2}^{c}\\right)\\right] \\\\\n\t = \\frac{1}{2\\mu_T\\delta} & \\Big[\\left(F_{2k}^{{\\bf e}_k^+} - \\bar{F}_{2k}^{c} \\right) - \\left(F_{2k+2}^{{\\bf e}_k^-} - \\bar{F}_{2k+2}^{c}\\right) \\\\\n\t & + \\frac{\\varsigma_{1,k}}{2} + \\varsigma_{2,k} + \\varsigma_{3,k} + \\frac{\\varsigma_{4,k}}{2}\\Big].\n\\end{align*}\n\n\\begin{thm}[Error Estimate on Noise-Free Functions]\\label{NoiseFreeThm} Let $F({\\bf x},t) = T(t)f({\\bf x})$ where $T$ and $f$ are at least $\\mathcal{C}^4(\\mathcal{T})$ and $\\mathcal{C}^3(\\Omega)$ respectively. We sample $N$ antipodal pairs such that the resulting sampling is unbiased, and quasi-uniform. For ${\\bf x}_c \\in \\Omega$, and $\\delta > 0$ such that $\\mathcal{B}_\\delta({\\bf x}_c)$ is in the interior of $\\Omega$, we let ${\\bf e}_k$ be the unit vector associated with the $k^{th}$ antipodal pair of points. Further, we let $\\mu_T$ be the known expectation of $T(t)$ over $\\mathcal{T}$. Selecting $h$ such that $h^3 < \\delta$ guarantees that there exists a constant $C^*(\\delta, h, N, T,T',T^{(4)},f, \\nabla f) > 0$ such that, \n\t\\begin{equation*}\n\t\\left|\\left| \\nabla f({\\bf x}_c) - \\nabla_{\\delta,h} F({\\bf x}_c,\\mathcal{T})\\right|\\right| \\leq C^* \\left(h + \\delta^2 \\right).\n\t\\end{equation*}\n\\end{thm}\n\n\\noindent A similar result is provided for the case when additive i.i.d. noise is present. \n\n\\begin{thm}[Error Estimate on Noisy Functions]\\label{NoisyThm} Let $G({\\bf x},t) = F({\\bf x},t) + \\varsigma(t)$. Under the same assumptions as Theorem \\ref{NoiseFreeThm}, the total contribution of error from stochastic sources can be written\n\t\\begin{eqnarray*}\n\t\t{\\varepsilon} &:=& \\nabla_{\\delta,h} G({\\bf x}_c, \\mathcal{T}) - \\nabla_{\\delta,h} F({\\bf x}_c, \\mathcal{T}) \\\\\n\t\t& =& \\left\\lbrace \\sum_{k=1}^N \\frac{\\hat{{\\bf e}}_i^T \\cdot {\\bf e}_k}{2 \\mu_T N \\delta} \\left[\\frac{\\varsigma_{1,k}}{2} + \\varsigma_{2,k} + \\varsigma_{3,k} + \\frac{\\varsigma_{4,k}}{2}\\right]\\right\\rbrace_{i=1}^n.\n\t\\end{eqnarray*} \n\tThen it follows that\n\t\\begin{equation*}\n\t E\\left[||\\varepsilon||\\right] \\leq 4 \\frac{\\sigma}{\\mu_T \\delta} \\sqrt{\\frac{n}{N}},\n\t\\end{equation*}\n\tand for $p \\in (0,1)$\n\t\\begin{equation*}\n\t \\mathcal{P} \\left[ ||\\varepsilon|| \\leq 4 \\frac{\\sigma}{\\mu_T \\delta} + 2 \\frac{\\sigma}{\\mu_T \\delta}\\sqrt{\\frac{\\log(1\/p)}{N}}\\right] \\geq 1-p.\n\t\\end{equation*}\n\\end{thm}\n\n\n\n\\section{Numerical Demonstrations} \\label{NumericalExperiments}\nGiven that our motivation is to employ the amplitude-correcting gradient in steepest descent methods, our demonstrations will focus on that application. We begin by presenting two accelerated versions of the classic SGD algorithm, differing only by which gradient estimation technique utilized. A full discussion on proper choices for $\\alpha$ and $\\beta$ can be found in \\cite{Nesterov}.\n\n\\begin{alg}[Accelerated SGD] \\label{Alg2} Choose a suitable initial condition ${\\bf x}_0 \\in \\mathbb{R}^n$, step-size $\\alpha_i > 0$, $\\alpha_i \\rightarrow 0$ as $i \\rightarrow \\infty$, and $\\beta \\in [0,1)$. 
Additionally, choose a radius $\\delta > 0$ for the gradient estimator. Indexing our steps with $i=0,1,\\ldots$ we proceed such that\n\t\\begin{eqnarray*}\n\t\t{\\bf y}_{i+1} &=& \\beta {\\bf y}_i - \\nabla_{\\delta}f({\\bf x}_{i+1}), \\\\\n\t\t{\\bf x}_{i+1} &=& {\\bf x}_i - \\alpha_i {\\bf y}_{i+1}.\n\t\\end{eqnarray*}\n\\end{alg}\n\n\\begin{alg}[Dynamic Amplitude-Corrected Accelerated SGD] \\label{Alg1} Choose a suitable initial condition ${\\bf x}_0 \\in \\mathbb{R}^n$, step-size $\\alpha_i > 0$, $\\alpha_i \\rightarrow 0$ as $i \\rightarrow \\infty$, and $\\beta \\in [0,1)$. Additionally, prescribe a spatial radius and time-step $\\delta, h > 0$ for the gradient estimator. Indexing our steps with $i=0,1,\\ldots$ we proceed such that\n\t\\begin{eqnarray*}\n\t\t{\\bf y}_{i+1} &=& \\beta {\\bf y}_i - \\nabla_{\\delta,h}G({\\bf x}_{i+1},t), \\\\\n\t\t{\\bf x}_{i+1} &=& {\\bf x}_i - \\alpha_i {\\bf y}_{i+1}.\n\t\\end{eqnarray*}\n\\end{alg}\n\nIn our first demonstration, we seek a direct comparison of the classic SGD algorithm with the amplitude-correcting version. In order for such a comparison to be salient, we consider two functions: Rosenbrock's valley with and without a time-varying amplitude. We then demonstrate, for well-selected parameters, that Algorithm \\ref{Alg2}'s performance on the steady-amplitude function qualitatively matches Algorithm \\ref{Alg1}'s performance on the non-steady version. When both simulations succeed on minimization problems that are otherwise formulated identically, we can conclude that the amplitude corrections encoded into the online gradient estimate effectively overcome the variations.\n\nIn the second numerical experiment, we show that the error asymptotics provided in Theorems \\ref{NoiseFreeThm} and \\ref{NoisyThm} can be seen in SGD executions. We cite two theorems that respectively provide sufficient conditions for the convergence of Algorithm \\ref{Alg2} with probability 1, and asymptotic error estimates. We then construct a noisy, time-varying function that otherwise adheres to those conditions, then prove that well-selected parameters guarantee that Algorithm \\ref{Alg1} also converges. This is numerically verified by isolating each source of error to see if the analytic rates match those encountered numerically. \n\n\\subsection{A Quake in Rosenbrock's Valley}\nRosenbrock's Valley \\cite{Rosenbrock} is a polynomial on $\\mathbb{R}^2$ defined as\n\\begin{equation}\\label{rosenbrock}\nf(x,y) = (1-x)^2 + 100(y-x^2)^2.\n\\end{equation}\nThis polynomial has a global minimum value of $f(1,1) = 0$, and is locally convex around that point. However, the downward slope along the minimal ridge is quite shallow in the parabolic valley. It is this feature that made Rosenbrock's Valley a popular benchmark, since many steepest descent algorithms tend to reach the ridge quite quickly, but struggle to reach the optimal answer due to the oscillations spurred by the large values of $|\\nabla f(x,y)|$ for $(x,y)$ not precisely on the ridge path. 
In the interest of clarity, we will refer to these as \\emph{spatial oscillations.}\n\nThe classic benchmark nonlinear programming problem is typically presented as \n\\begin{equation}\\label{nlprog1}\n{\\bf x}^* = \\text{argmin}_{{\\bf x} \\in \\mathbb{R}^2} f(x,y).\n\\end{equation}\nWe complicate matters by including the amplitude function $T(t)$ such that\n\\begin{equation}\\label{nlprog2}\n{\\bf x}^* = \\text{argmin}_{{\\bf x} \\in \\mathbb{R}^2} E\\left[T(t)f(x,y)\\right].\n\\end{equation}\nwhere \n\\begin{equation}\\label{dynamic_amp_T1}\nT(t) = 1 + \\frac{3}{4} \\cos{(2 \\pi t)}.\n\\end{equation}\nAgain, for the sake of clarity, we shall refer to oscillations caused by a dynamic amplitudes like \\eqref{dynamic_amp_T1} as \\emph{temporal oscillations}.\n\nIn our first experiment, we attempt to solve our temporally oscillating problem \\eqref{nlprog2} with the standard gradient descent method (Algorithm \\ref{Alg2}.) We initialize at ${\\bf x}_0 = (-1.2,1)$, fix $\\alpha_i = \\delta = 1\/500$, $\\beta = 0$, enforce a step-size maximum $||{\\bf x}_{i+1} - {\\bf x}_{i}|| \\leq 1 \/ 4$, and a maximum iteration count of $i_{\\text{max}} = 1200$. The gradient is computed by a uniform sampling of $N=15$ antipodal pairs. In Figure \\ref{rosenbrockfigs_noise}, we see that the gradient estimates are erroneous far beyond what can be tolerated by the standard algorithm. The figure only depicts steps up to $i_{200}$, since the full path eventually diverges. Increasing the momentum value $\\beta$ has no appreciable impact on this outcome.\n\n\\begin{figure}\n\t\\begin{center}\n\t\t\\includegraphics[width=10cm]{diverge.eps} \\\\\n\t\t\\hspace{0.5cm} 0 \\ \\includegraphics[width=6cm]{colorbar.eps} \\ 1800\n\t\\end{center}\n\t\\caption{\\label{rosenbrockfigs_noise}This figure demonstrates the failure of Algorithm \\ref{Alg2} to solve the temporally-fluctuating problem \\eqref{nlprog2}. Our plot only considers the first 200 steps, due to an eventual divergence. Each step is depicted by a red dot, connected in sequence by a white line. The white cross in each plot depicts the optimal solution at $(1,1)$. The spatial coordinates and color axis are all non-dimensionalized.}\n\\end{figure}\n\nIn the second experiment, we seek to demonstrate that our amplitude correcting gradient estimate is effective in overcoming the temporal oscillations imposed by \\eqref{dynamic_amp_T1}. We accomplish this by comparing the performance of Algorithm \\ref{Alg1}, which utilizes the dynamic amplitude correction, on the temporally-oscillating problem \\eqref{nlprog2} to the performance of classic gradient descent method in Algorithm \\ref{Alg2} on the non temporally-oscillating problem in \\eqref{nlprog1}. For each execution we initialize at ${\\bf x}_0 = (-1.2,1)$, selecting $\\alpha_i = \\delta = 1\/500$, $\\beta = 0$, enforce a step-size maximum $||{\\bf x}_{i+1} - {\\bf x}_{i}|| \\leq 1 \/ 4$, and a maximum iteration count of $i_{\\text{max}} = 1200$. The gradient is computed by a uniform sampling of $N=15$ antipodal pairs. In the temporally oscillating problem, we prescribe a time-step of $h = 1\/16$. We provide comparisons with, and without momentum in Figure~\\ref{rosenbrockfig}.\n\nIn the first row of Figure \\ref{rosenbrockfig} we see that without momentum ($\\beta = 0$) neither implementation manages to overcome the spatial oscillations. By iteration count $i_{\\text{max}} = 1200$, both executions seem to terminate in roughly the same position. 
In the second row, we see that when momentum is included, both methods overcome the spatial oscillations and reach the global minimum position. When considering the apparent qualitative similarity between these outcomes, in conjunction with the failure demonstrated in Figure \\ref{rosenbrockfigs_noise}, we posit that the amplitude corrections are effective in mitigating the temporal oscillations imposed on \\eqref{nlprog2}.\n\n\\begin{figure} \n\t\\begin{center}\n\t\t\\includegraphics[width=10cm]{comparison_plot.eps}\\\\\n\t\t0 \\ \\includegraphics[width = 6cm]{colorbar.eps} \\ 600\n\t\\end{center}\n\t\\caption{\\label{rosenbrockfig} The top row compares the results from Algorithms \\ref{Alg1} and \\ref{Alg2} to problems \\eqref{nlprog2} and \\eqref{nlprog1} respectively, with no momentum term ($\\beta = 0$). The bottom row makes the same comparison, but selects a momentum term $\\beta = 0.75$. Each step is depicted by a red dot, connected in sequence by a white line. The white cross in each plot depicts the optimal solution at $(1,1)$. The spatial coordinates and color axis are again all non-dimensionalized.}\n\\end{figure}\n\n\\subsection{A Convergence Study}\nThe following theorem provides conditions sufficient for the convergence of the standard differencing gradient in SGD (Algorithm \\ref{Alg2}), as well as an error estimate. Proof can be found in \\cite{Nguyen2019}.\n\n\\begin{thm}[Convergence of SGD with Probability One]\\label{sgd_converge} Under the following assumptions,\n\\begin{enumerate}\n \\item[{\\bf 1.)}] The objective function $f: \\mathbb{R}^n \\rightarrow \\mathbb{R}$ is $\\mu-$strongly convex, i.e., there exists $\\mu > 0$ such that\n \\begin{equation*}\n f({\\bf x}) - f({\\bf x}') \\geq \\nabla f({\\bf x}')^T \\cdot ({\\bf x} - {\\bf x}') + \\frac{\\mu}{2}||{\\bf x} - {\\bf x}'||^2.\n \\end{equation*} \n \\item[{\\bf 2.)}] For particular realizations of ${\\bf \\varsigma}$, the noise-corrupted objective function $\\hat{f}({\\bf x}) = f({\\bf x}) + \\varsigma$ is L-smooth, i.e., there exists an $L > 0$ such that for any ${\\bf x}',{\\bf x} \\in \\mathbb{R}^n$,\n \\begin{equation*}\n ||\\nabla_\\delta \\hat{f}({\\bf x}) - \\nabla_\\delta \\hat{f}({\\bf x}')|| \\leq L||{\\bf x}-{\\bf x}'||.\n \\end{equation*}\n \\item[{\\bf 3.)}] The noise-corrupted cost function $\\hat{f}$ is convex for every realization of $\\varsigma$, i.e., for any ${\\bf x}, {\\bf x}' \\in \\mathbb{R}^n$\n \\begin{equation*}\n \\hat{f}({\\bf x}) - \\hat{f}({\\bf x}') \\geq \\nabla_\\delta \\hat{f}({\\bf x}')^T \\cdot ({\\bf x}-{\\bf x}').\n \\end{equation*}\n\\end{enumerate}\nThen considering Algorithm \\ref{Alg2} with step sizes \n\\begin{equation*}\n 0 < \\alpha_i < \\frac{1}{2L}, \\ \\sum_{i=0}^\\infty \\alpha_i = \\infty \\ \\text{and} \\ \\sum_{i=0}^{\\infty} \\alpha_i^2 < \\infty,\n\\end{equation*}\nthe following holds with probability 1 (almost surely)\n\\begin{equation*}\n ||{\\bf x} - {\\bf x}^* ||^2 \\rightarrow 0,\n\\end{equation*}\nwhere ${\\bf x}^* = \\text{\\emph{argmin}}_{{\\bf x} \\in \\mathbb{R}^n} f({\\bf x})$.\n\\end{thm}\n\nThe following result presents convergence of the stochastic gradient descent method in terms of the error seen in the gradient estimates of the cost function. \n\\begin{cor}\\label{sgd_conv_rate} Under the same assumptions of Theorem \\ref{sgd_converge}, let $\\mathcal{E} = \\frac{4L}{\\mu}$. 
Initialize Algorithm \\ref{Alg2} with step size $\\alpha_i = \\frac{2}{\\mu(t+\\mathcal{E})} \\leq \\alpha_0 = \\frac{1}{2L}.$ Then,\n\\begin{equation*}\n E\\left[||{\\bf x} - {\\bf x}^* ||^2 \\right] \\leq \\frac{16M}{\\mu^2} \\frac{1}{(t-\\tau+\\mathcal{E})},\n\\end{equation*}\nfor \n\\begin{equation*}\n t \\geq \\tau = \\frac{4L}{\\mu} \\text{\\emph{max}}\\left\\lbrace \\frac{L\\mu}{M}||{\\bf x}_0 - {\\bf x}^*||^2,1 \\right\\rbrace - \\frac{4L}{\\mu},\n\\end{equation*}\nwhere $M = 2E\\left[||\\nabla_\\delta \\hat{f}({\\bf x}^*) ||^2\\right]$ and ${\\bf x}^* = \\text{\\emph{argmin}}_{{\\bf x} \\in \\mathbb{R}^n} \\hat{f}({\\bf x})$.\n\\end{cor}\n\nWe now look to numerically verify the convergence of Algorithm \\ref{Alg1} on the problem:\n\\begin{equation}\\label{last_nlp}\n \\min_{{\\bf x} \\in \\mathbb{R}^3} E\\left[G({\\bf x},t)\\right],\n\\end{equation}\nwhere the cost function\n\\begin{equation}\\label{cost_f}\n G({\\bf x},t) = -\\left(1 + \\frac{3}{4}\\cos{\\left(2\\sqrt{2} \\pi t\\right)}\\right)\\left({\\bf x}^T \\mathbf{\\Sigma} {\\bf x} \\right) + \\varsigma(t),\n\\end{equation}\nwith $\\mathbf{\\Sigma}$ given by\n\\begin{equation*}\n \\mathbf{\\Sigma} = \\begin{bmatrix}\n 2 & -0.5 & 0 \\\\\n -0.5 & 2 & -0.5 \\\\\n 0 & -0.5 & 2\n\\end{bmatrix}.\n\\end{equation*}\n\nWe proceed by first demonstrating that an \\emph{a priori} accuracy of the SGD algorithm can be written in terms of the asymptotic error of our gradient estimate developed in Theorem \\ref{NoisyThm}. This allows us to formalize a parameterization of the error developed in Algorithm \\ref{Alg1} as a function of $\\delta, h, \\sigma,$ and $N,$ and to test the error rates. As before, the additive noise term $\\varsigma(t)$ is i.i.d. and $\\mathcal{N}(0,\\sigma)$. The asymptotic error of the dynamic amplitude-correcting gradient estimates of $G$ are given, in expectation, in Corollary \\ref{grad_cor}. Proof of the following comes directly from Theorems \\ref{NoiseFreeThm}, \\ref{NoisyThm}, and Young's inequality.\n\n\n\\begin{cor} \\label{grad_cor} The gradient of the cost function $G({\\bf x},t)$ in \\eqref{cost_f} can be estimated such that given $\\sigma, \\delta, h> 0,$ where $h^3 < \\delta$, there exists positive constants $c_1, c_2,$ and $c_3$ such that\n\\begin{equation*}\n E\\left[||\\nabla_{\\delta,h}G({\\bf 0},t)||^2\\right] \\leq c_1(1 + N^2)h^2 + c_2 \\delta^4 + c_3 \\frac{1}{N} \\frac{\\sigma^2}{\\delta^2}.\n\\end{equation*}\n\\end{cor}\n\nWhen we ignore the time-dependent amplitude of the cost function $G$ from \\eqref{cost_f}, we note that it was constructed to satisfy the assumptions from Theorem \\ref{sgd_converge}. In particular, Assumption 1 is satisfied with $\\mu = 2.$ In the calculations to follow, the initial position is ${\\bf x}_0 := (1,1,1)$, hence the total distance we intend Algorithm \\ref{Alg1} to travel is $|| {\\bf x}_0 - {\\bf 0}|| = \\sqrt{3}$. We also note that since $L$ is sensitive to $\\varsigma(t)$, it is not precisely known. 
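For concreteness, one cell of the Monte Carlo study described in the remainder of this section can be sketched as follows. The sketch re-uses the \\texttt{acg\\_estimate} routine given after Definition~\\ref{def_grad}; the convex profile $+{\\bf x}^T \\mathbf{\\Sigma} {\\bf x}$ is used so that plain descent contracts toward the origin, as the discussion above intends, and the wall-clock bookkeeping, the window average used for $\\mu_T$, and all default parameter values are illustrative assumptions.
\\begin{verbatim}
# One Monte Carlo cell of the convergence study (illustrative sketch).
import numpy as np

Sigma = np.array([[ 2.0, -0.5,  0.0],
                  [-0.5,  2.0, -0.5],
                  [ 0.0, -0.5,  2.0]])

def run_once(h=1/128, delta=1/100, sigma=1e-5, N=5, i_max=500, seed=0):
    rng = np.random.default_rng(seed)
    T = lambda t: 1.0 + 0.75 * np.cos(2.0 * np.sqrt(2.0) * np.pi * t)
    # convex profile of the time-varying quadratic cost, additive N(0, sigma) noise
    G = lambda x, t: T(t) * float(x @ Sigma @ x) + sigma * rng.standard_normal()
    x, y = np.ones(3), np.zeros(3)        # x_0 = (1,1,1)
    alpha, beta, t_now = delta, 0.0, 0.0  # fixed step size alpha_i = delta
    for i in range(i_max):
        mu_T = float(np.mean(T(t_now + h * np.arange(4 * N + 1))))
        g = acg_estimate(G, x, delta, h, mu_T, n=3, N=N, t0=t_now, seed=seed + i)
        y = beta * y + g                  # momentum buffer (beta = 0 here)
        x = x - alpha * y                 # descent step toward the origin
        t_now += (4 * N + 1) * h          # clock advances by one stencil sweep
    return float(x @ x)                   # ||x_{i_max} - 0||^2

err = [run_once(seed=k) for k in range(30)]
print(np.mean(err))
\\end{verbatim}
Varying one of $h$, $\\delta$, $\\sigma$ or $N$ while holding the remaining parameters fixed then exposes the individual error terms of Corollary~\\ref{grad_cor}.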
For appropriately converging step-sizes $\\lbrace \\alpha_i \\rbrace_{i = 1}^{\\infty}$, we will see\n\\begin{equation*}\n || {\\bf x}_i - {\\bf 0}||^2\\rightarrow 0,\n\\end{equation*}\nwith probability 1 and when \n\\begin{equation*}\n i > \\tau = \\max \\left\\lbrace \\frac{2\\sqrt{3}L^2}{E\\left[||\\nabla_{\\delta,h}G({\\bf 0},t)||^2\\right]}, L \\right\\rbrace,\n\\end{equation*}\nwe see\n\\begin{eqnarray} \\label{alg_Error}\n E \\left[|| {\\bf x}_i - {\\bf 0}||^2 \\right] & \\leq & E\\left[||\\nabla_{\\delta,h}G({\\bf 0},t)||^2\\right] \\frac{1}{i - \\tau} \\nonumber \n \\\\ & \\leq & c_1(1 + N^2)h^2 + c_2 \\delta^4 + c_3 \\frac{1}{N} \\frac{\\sigma^2}{\\delta^2}.\n\\end{eqnarray}\n\\noindent Thus, for steps $i > \\tau$, the error seen in \\eqref{alg_Error} is proportional to that seen for the gradient estimate in Corollary \\ref{grad_cor}. \n\nThese error estimates are verified in a series of Monte Carlo studies. For each parametrization, we repeat and store the results from 30 executions of Algorithm \\ref{Alg1}, storing the results in\n\\begin{equation*}\n \\text{err}(\\delta,h,\\sigma,N) = \\left\\lbrace||{\\bf x}_{i_{\\text{max}}, k}||^2 \\right\\rbrace_{k=1}^{30}.\n\\end{equation*}\nIn the first experiment, we fix $\\delta$, $\\sigma$, and $N$ such that their contributions to the error in \\eqref{alg_Error} are several orders of magnitude below our choices for $h$. We further assume that $L \\approx ||\\mathbf{\\Sigma}||_2 = 2 + \\sqrt{2}\/2,$ which gives for $h$ sufficiently small, that our critical algorithm step $\\tau$ is $\\mathcal{O}(h^{-2}).$ Selecting a fixed step-size $\\alpha_i = \\delta$, with a fixed stopping point, trivially satisfies the convergence requirements of Theorem \\ref{sgd_converge}. In addition, given our estimate of $\\tau$, and the minimum travel distance required, selecting $N=5$, and $\\delta =$1\/100, we find a choice of $i_{\\text{max}} = 500$ to be appropriate. The results of this test confirm the \\emph{a priori} rate estimate of $\\mathcal{O}(h^2)$, and are presented in terms of the average result over the 30 simulations in Table \\ref{h_conv_sim}. \n\\begin{table} \n\\caption{\\label{h_conv_sim} Monte Carlo simulations were used to estimate the accuracy of 500 steps from Algorithm \\ref{Alg1}, in terms of $h$. Corollary \\ref{grad_cor} suggests we should see error converge at a rate of 2. We fix $N$=5, $\\delta$ = 1\/100, and $\\sigma$ = 1E-5. The other sources of error begin to dominate for choices of $h \\leq 1\/512$.}\n\\begin{center}\n\\begin{tabular}{|ccc|} \n \\hline\n $h$ & $\\text{AVG}\\left(\\text{err}(h)\\right)$ & Rate \\\\\n \\hline\n 1\/16& 1.7E-4 & - \\\\\n 1\/32& 3.2E-5 & 2.41 \\\\\n 1\/64& 2.2E-6 & 3.87 \\\\\n 1\/128& 5.2E-7 & 2.07 \\\\\n 1\/256& 1.1E-7 & 2.28 \\\\\n 1\/512& 3.7E-8 & 1.53 \\\\\n \\hline \n\\end{tabular}\n\\end{center}\n\\end{table}\n\nWe proceed similarly in the second experiment. We fix $h = 1\/1024$, $N=10$, and $\\delta=1\/100$, varying $\\sigma$. Noting that smallest choice for $\\sigma = 1\/2560$, we again estimate the critical time-step as $\\tau =$ 500. The optimal rates are observed and presented in Table \\ref{sig_conv_sim}.\n\n\\begin{table} \n\\caption{\\label{sig_conv_sim} Monte Carlo simulations were used to estimate the accuracy of 500 steps from Algorithm \\ref{Alg1}, in terms of $\\sigma$. We fix $N$=10, $\\delta$ = 1\/100, and $h$ = 1\/1024. Corollary \\ref{grad_cor} suggests we should see error converge at a rate of 2. 
The other sources of error begin to dominate for choices of $\\sigma \\leq 1\/2560$.}\n\\begin{center}\n\\begin{tabular}{|ccc|} \n \\hline\n $\\sigma$ & $\\text{AVG}\\left(\\text{err}(\\sigma)\\right)$ & Rate \\\\\n \\hline\n 1\/80 & 4.0E-4 & - \\\\\n 1\/160 & 1.2E-4 & 1.73 \\\\\n 1\/320 & 4.0E-5 & 2.04 \\\\\n 1\/640 & 9.7E-6 & 2.03 \\\\\n 1\/1280 & 2.1E-6 & 2.19 \\\\\n 1\/2560 & 5.8E-7 & 1.87 \\\\\n \\hline \n\\end{tabular}\n\\end{center}\n\\end{table}\n\nFor the third, we fix $N$=256, $\\sigma$ = 1\/2048, and $h$ = 1\/2048, varying $\\delta.$ We maintain our choice of $i_{\\text{max}}$ = 500, presenting the results in Table \\ref{delta_conv_sim}. We see rates comparable to the $\\mathcal{O}(\\delta^4)$ rate. \n\n\\begin{table} \n\\caption{\\label{delta_conv_sim} Monte Carlo simulations were used to estimate the accuracy of 5000 steps from Algorithm \\ref{Alg1}, in terms of $\\delta$. We fix $N$=256, $\\sigma$ = 1\/2048, and $h$ = 1\/2048. Corollary \\ref{grad_cor} suggests we should see error converge at a rate of 4. The other sources of error begin to dominate for choices of $\\delta \\leq 1\/100.$}\n\\begin{center}\n\\begin{tabular}{|ccc|} \n \\hline\n $\\delta$ & $\\text{AVG}\\left(\\text{err}(\\delta)\\right)$ & Rate \\\\\n \\hline\n 0.300 & 5.4E-2 & - \\\\\n 0.210 & 1.98E-2 & 2.90 \\\\\n 0.149 & 5.480E-3 & 3.70 \\\\\n 0.105 & 1.38E-3 & 3.98 \\\\\n 0.074 & 3.32E-4 & 4.11 \\\\\n \\hline \n\\end{tabular}\n\\end{center}\n\\end{table}\n\nIn our final experiment, we test the linear convergence rate of the sampling count parameter $N.$ Fixing $h$ = 1\/1024, $\\delta$ = 1\/100, and $\\sigma$ = 0.64, varying $N$. We select $i_{\\text{max}} = 500$. In Table \\ref{N_conv_sim} we see convergence at a rate slightly better than the expected $\\mathcal{O}(N^{-1})$ rate. \n\n\\begin{table} \n\\caption{\\label{N_conv_sim} Monte Carlo simulations were used to estimate the accuracy of 500 steps from Algorithm \\ref{Alg1}, in terms of the inverse sample count $N^{-1}$. We fix $\\delta$ = 1\/100, $h$ = 1\/1024, and $\\sigma = $0.64. Corollary \\ref{grad_cor} suggests we should see error converge at a rate of 1.}\n\\begin{center}\n\\begin{tabular}{|ccc|} \n \\hline\n $N$ & $\\text{AVG}\\left(\\text{err}(N)\\right)$ & Rate \\\\\n \\hline\n 8 & 6.2E-1 & - \\\\\n 16 & 3.6E-1 & 0.77 \\\\\n 32 & 1.7E-1 & 1.05 \\\\\n 64 & 1.0E-2 & 1.86 \\\\\n 128 & 3.3E-3 & 1.67 \\\\\n 256 & 1.1E-3 & 1.58 \\\\\n \\hline \n\\end{tabular}\n\\end{center}\n\n\\end{table}\n\n\\section{Compound Refractive Lens Alignment on Simulated XFEL Experimental Beamline}\\label{XFEL}\nWhat follows is a proof-of-concept implementation of Algorithm \\ref{Alg1} by simulating the alignment of a CRL assembly on a scientific beam-line with a highly dynamic intensity. We begin by developing a model X-ray transmission function from data collected at the Advanced Photon Source (APS) at Argonne National Laboratory \\cite{Breckling2021}. This steady-amplitude model is then augmented with a time-dependent intensity function, recorded during an experiment performed at the Pohang Accelerator Laboratory's XFEL facility (PAL-XFEL).\n\nGiven that access to XFEL beam-lines is competitive and limited, our goal is to demonstrate the feasibility of our amplitude-correcting SGD approach to overcome the beam intensity fluctuations inherent to XFEL facilities. 
We break this effort into two parts: the construction of our model cost function, and the results of our implementation of Algorithm \\ref{Alg1} using that cost function in settings similar to those seen at PAL-XFEL. \n\n\\subsection{Developing a Model Cost Function}\nLet $\\Omega_{max} \\subset \\mathbb{R}^4$ denote the travel limits for the four stepper motors that determine the orientation of the CRL. For a given orientation ${\\bf x} = (x,y,r_x,r_y) \\in \\Omega$, let the resulting image deposited on the detector panel be denoted as $I({\\bf x}),$ or simply $I$ when convenient; see Figure \\ref{hugh_image}. Further, we describe position of a given pixel by its indices $I_{i,j}.$ Example detector images are shown in Figures \\ref{sensor_imgs}(a) and (b).\n\nLet $\\xi(I)$, $\\mu(I)$ and $\\sigma(I)$ denote the median, mean, and standard deviation, respectively, of the pixel values of the image $I$. We then constrain\n$I$ to a selected region of interest (ROI)\ndefined as\n$$\\hat{I}_M := \\left\\lbrace I_{i,j} \\in I \\ \\Big| \\ |I_{i,j}-\\xi(I)| > M \\times \\sigma(I) \\right\\rbrace,$$\nwhere $M > 0$ is a user-selected threshold parameter. In practice, we found $M=2$ to be a good choice. Figure \\ref{sensor_imgs} (c) and (d) highlight the corresponding ROIs, $\\hat{I}_M.$ \n\n\\begin{figure} \n \\begin{center}\n \\includegraphics[width=2cm]{not_well_aligned.eps} \\ \n \\includegraphics[width=2cm]{well_aligned.eps} \\ \n \\includegraphics[height=5.75cm]{vert_colorbar.eps} \\hspace{1cm} \n \\includegraphics[width=2cm]{not_well_aligned_support.eps} \\ \n \\includegraphics[width=2cm]{well_aligned_support.eps} \\ \n \\includegraphics[height=5.75cm]{vert_binarybar.eps}\n \\caption{Figure (a) is a cropped region collected from the imaging sensor when the CRL was poorly aligned. Figure (b) is the same cropped region, but shows the result from a well-aligned CRL. The images are shown on the same color axis, after feature normalizing against the maximum pixel value recorded. Figures (c) and (d) are binary images depicting the pixels identified in the ROI for Figures (a) and (b) respectively. \\label{sensor_imgs}}\n \\end{center}\n\\end{figure}\n\nIn the synchrotron experiments performed at the Advanced Photon Source in \\cite{Breckling2021}, a set of coordinates found by manual alignment were defined as the ground-truth to provide the ``well-aligned'' position of the CRL. We denote that position as ${\\bf x}^* = (x^*, y^*, r_x^*, r_y^*)$. This ground truth served two purposes. First, we were then able to define a feature scaling such that our metric of X-ray transmission, in terms of CRL orientation, \n\\begin{equation}\\label{crude_cost}\n f({\\bf x}; M) := \\mu\\left(\\hat{I}_{M}({\\bf x})\\right),\n\\end{equation}\nhad a maximal value of 1. Second, the ground-truth position allowed us to establish a four-dimensional rectangular region $\\hat{\\Omega} \\subset \\Omega_{max}$ around the best point that contained the support of $f$ above the noise floor. With this ground-truth and 4D window, we then collected several raster scans of $f(\\hat{\\Omega}; M=2)$. We make use of a full four-dimensional scan, and two high-resolution, independent, 2-dimensional raster scans of $\\hat{\\Omega}$.\n\n\\begin{figure}\n \\begin{center}\n \\includegraphics[width=4cm]{XrY_raster.eps} \\ \n \\includegraphics[width=4cm]{YrX_raster.eps} \\\\\n 0 \\ \\includegraphics[width = 6cm]{colorbar.eps} \\ 1\n \\caption{Two 2D raster scans of $f$ in $\\hat{\\Omega}$ are depicted above. 
Both figures are mutually min-max normalized, and plotted on the same color axis.\\label{intensity_plots}}\n \\end{center}\n\\end{figure}\n\nAssuming a steady beam amplitude, it follows from the model developed by Simons \\textit{et al.} that an idealized transmission function $f:\\mathbb{R}^4 \\rightarrow \\mathbb{R}^+$ is given by a 4-variate Gaussian distribution \\cite{simons17}. We generalize that model as \n\\begin{equation} \\label{regression}\n f_{\\text{Simons}}({\\bf x}; a,b,\\mathbf{A}, \\hat{{\\bf x}}) = a\\exp{\\left(-({\\bf x}-\\hat{{\\bf x}})^T \\mathbf{A} ({\\bf x}-\\hat{{\\bf x}}) \\right)} + b,\n\\end{equation}\nwhere $a,b \\in \\mathbb{R}$, the matrix $\\mathbf{A} \\in \\mathbb{R}^{4\\times4}$ is symmetric, and $\\hat{{\\bf x}} \\in \\mathbb{R}^4$ is the position associated with optimal lateral alignment. Fitting the four-dimensional raster scan data $f(\\hat{\\Omega}; M=2)$ to Simons' model \\eqref{regression} gives the idealized X-ray transmission $f_{\\text{Simons}}^*({\\bf x})$. We present two, two-dimensional slice views of $f_{\\text{Simons}}^*({\\bf x})$ in Figure \\ref{f_simons_plots}. \n\n\\begin{figure}\n \\begin{center}\n \\includegraphics[width=4cm]{XrY_Gauss.eps} \\ \n \\includegraphics[width=4cm]{YrX_Gauss.eps} \\\\\n 0 \\ \\includegraphics[width = 6cm]{colorbar.eps} \\ 1\n \\caption{Depicted here are 2D slices selected from $f_{\\text{Simons}}^*(\\hat{\\Omega})$. The aspect ratios were selected to agree with Figures (a) and (b) from Figure \\ref{intensity_plots}. \\label{f_simons_plots}}\n \\end{center}\n\\end{figure}\n\nTo model the noise functions that are characteristic to the XFEL light sources, we include additive measurement noise as $\\varsigma_\\Omega(t)$. Let $\\text{diam}(\\hat{\\Omega})$ denote the maximal \\text{diam}eter of the set $\\hat{\\Omega}$. We collected a sampling $\\mathcal{S} = \\lbrace {\\bf x}_i \\rbrace_{i=1}^{500}$ such that for every orientation ${\\bf x}_i$, $||{\\bf x}_i - {\\bf x}^*|| > \\text{diam}(\\hat{\\Omega})$. We found that $\\sigma(f(\\mathcal{S};M=2)) \\approx 4.5 \\times 10^{-3}.$ We then model the time-series of additive noise $\\varsigma_\\Omega(t)$ as i.i.d. and $\\mathcal{N}(0, 4.5 \\times 10^{-3})$. \n\nWe additionally consider fluctuations that occur because of pointing jitter (from the SASE generation scheme) \\cite{kang2017hard}. We assume the position and direction of the beam may randomly fluctuate as a function of the beam's divergence profile, which was estimated at the APS to be 6.5$\\times 10^{-3}$ Radians. We account for jitter in our model as random perturbations of the orientation vector ${\\bf x}$ in the $r_x$ and $r_y$ directions. Further, we expect that the beam will jitter randomly within 10\\% of the beam-divergence. In doing so, we define \n\\begin{equation*}\n \\Theta(t) = (0, 0, \\theta_x(t), \\theta_y(t))\n\\end{equation*}\nwhere $\\theta_x$ and $\\theta_y$ are respectively i.i.d and $\\mathcal{N}(0,6.5\\times 10^{-4})$.\n\nWe lastly introduce the fluctuating intensity of the beam over time. To this end, we utilize the measured shot-to-shot intensity values recorded at the PAL-XFEL facility, which was recorded using a quadrant beam position monitor (QBPM) at 30 Hz \\cite{DresselhausMarais2020}. We feature-scale the raw pulse-to-pulse time-series data by normalizing the full signal against the mean recorded value. This scaled signal is written as $T_{\\text{PAL}}(t,\\kappa)$ where $\\kappa$ determines the number of pulses averaged during a data collection event. 
In Figure \\ref{intensity_over_time} we show $T_{\\text{PAL}}(t,1)$ in dark gray, $T_{\\text{PAL}}(t,8)$ in light gray, and $T_{\\text{PAL}}(t,264)$ in red. The mollified signals at $\\kappa=8$ and $\\kappa=264$ respectively represent the average beam intensity over a sampling interval, and the amount of time required to collect all samples necessary to compute the amplitude-correcting gradient. \n\n\\begin{figure}\n \\centering\n \\includegraphics[width=12cm]{T_plot.eps}\n \\caption{This figure depicts a two-minute interval of the signals $T_{\\text{PAL}}(t,\\kappa=1)$ in dark gray, $T_{\\text{PAL}}(t,\\kappa=8)$ in light gray, and $T_{\\text{PAL}}(t,\\kappa=264)$ in red. All signals are feature normalized by their mean value. The data was recorded at the PAL-XFEL facility. {\\color{green} \\cite{DresselhausMarais2020}} \\label{intensity_over_time} }\n\\end{figure}\n\nOur full cost function model is hence written and evaluated as\n\\begin{equation}\\label{xfel_cost}\n G_{\\text{XFEL}}({\\bf x},t; \\kappa) = -T_{\\text{PAL}}(t,\\kappa) f_{\\text{Simons}}^*({\\bf x} + \\Theta(t)) + \\varsigma_\\Omega(t).\n\\end{equation}\n\n\\subsection{Solving the CRL Alignment Problem}\nWe now endeavor to study the performance of Algorithm \\ref{Alg1} on our model of the CRL alignment problem\n\\begin{equation*}\n \\min_{{\\bf x} \\in \\hat{\\Omega}} E\\left[G_{\\text{XFEL}}({\\bf x},t;\\kappa = 8)\\right], \\ \\forall t > 0.\n\\end{equation*}\nOur goal is to identify a range of nominal parameter choices for Algorithm \\ref{Alg1} that can be implemented as a starting point at an XFEL facility. \n\nWe begin by noting that when sampling \\eqref{xfel_cost} to estimate $\\nabla f_{\\text{Simons}}^*({\\bf x})$ as per Definition \\ref{def_grad}, we consider $N=8$ quasi-uniformly distributed antipodal pairs in our differencing stencil. We select our effective integration time interval for the camera to be $h_{\\text{cam}} = 8\/30$ seconds, and establish the full time interval required to complete the scheme as $\\mathcal{T}_{h,N} := [t_0, (4N+1)h + t_0].$ Further, we make use of the estimate\n\\begin{equation*}\n \\mu_T = T_{\\text{PAL}}(t_0 + 264\/30,264),\n\\end{equation*}\nwhere $t_0$ is the moment we began estimating the gradient.\n\nFor each execution of Algorithm \\ref{Alg1} that follows, the stopping condition is established to be a maximal iteration count $i_{\\text{max}}.$ No other stopping conditions are considered. Additionally, we conceptualize our initial gradient sphere radius $\\alpha_0$ as some multiple $C r,$ where $r = ||{\\bf x}_0 - \\hat{{\\bf x}}||,$ though we don't expect users to know what $r$ is \\textit{a priori.} At each step $i$, the gradient sampling radius $\\alpha_0$ is scaled by a cooling factor such that\n\\begin{equation*}\n \\alpha_i = \\frac{\\alpha_0}{(1 + i)^\\gamma},\n\\end{equation*}\nwhere $\\gamma > 0$ and fixed. 
Further, we enforce a maximum step size $||{\\bf x}_i - {\\bf x}_{i-1}|| \\leq \\delta_i = \\alpha_i.$ Given that the true distance $r$ is unknown upon initialization, the executions that follow are intended to identify a performance relationship between $\\alpha_0$ with respect to $r$, $\\gamma$, and the stopping condition.\n\nWe demonstrate a single execution of Algorithm \\ref{Alg1} with an initial position ${\\bf x}_0$ selected randomly a distance of $r = 0.4$ from $\\hat{{\\bf x}}.$ We note that this Euclidean distance is significantly further away from $\\hat{{\\bf x}}$ than the positions selected during the manually-tuned rough alignments completed during data collections at the more stable synchrotron source at APS \\cite{Breckling2021}. We fix $\\gamma = 0.3$, select $\\alpha_0$ according to the distance scalar $C = 3.0$, assign a momentum term $\\beta = 0.15,$ and set the stopping condition to $i_{\\text{max}} = 100$ iterations. In addition to the time required to collect the image data from the camera sensor $h_{\\text{cam}}$, we need to include an estimate of the time required to move the four stepper motors, and process the data. We assume $h_{\\text{move}}=5\/30s$. Given the full time interval time interval $h_{\\text{total}} = h_{\\text{cam}} + h_{\\text{move}} = 13\/30s$, the total execution time assumed necessary to reach 100 iterations is \n\\begin{equation*}\n (4N+1) \\times i_{\\text{max}} \\times h_{\\text{total}} = 1430s,\n\\end{equation*}\n or 23.8 minutes. A figure depicting the particular route taken is presented in the 2D projections shown in Figure \\ref{single_exec}.\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=9cm]{SingleExecution.eps} \\\\\n\\ \\ -1 \\includegraphics[width=6cm]{colorbar.eps} 0 \\\\\n$-f_{\\text{Simons}}^*({\\bf x})$\n\\caption{\\label{single_exec\nA single execution of Algorithm \\ref{Alg1} with an initial position ${\\bf x}_0$ selected randomly at a distance $r = ||{\\bf x}_0 - \\hat{{\\bf x}}|| = 0.4.$ The initial step-size $\\alpha_0$ is fixed to $3r = 1.20,$ and scaled with each step by the cooling parameter $\\gamma = 0.3.$ The slices depicted in (a) and (b) use the optimal off-axis values in $\\hat{{\\bf x}}$. The blue dots depict the initial position projected onto the respective 2D planes, the white crosses depict the optimum alignment coordinates $\\hat{{\\bf x}}$, while the red dots depict the 100 positions ${\\bf x}_i$. Each position is connected sequentially by a white line.}\n\\end{figure}\n\nNext, we present the result of three Monte Carlo experiments. We maintain the parameter choices established in the demonstration execution above, varying only the stopping condition $i_{\\text{max}} = 50, 100,$ and $200$. Each Monte Carlo executes Algorithm \\ref{Alg1} to completion 100 times, varying the initial position randomly on $\\partial \\mathcal{B}_r({\\bf x}_0)$ where $r = 0.4.$ The results depicted in Figure \\ref{final_MC_1} demonstrate the expected convergence behavior for those well-selected parameters. 
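The bookkeeping behind these runs is straightforward; the short sketch below (the helper names are ours, the numerical values are those quoted above) reproduces the wall-clock estimate, the cooling schedule, and the random draw of the initial position on $\\partial \\mathcal{B}_r(\\hat{{\\bf x}})$.
\\begin{verbatim}
# Run bookkeeping for the CRL-alignment experiments (illustrative sketch).
import numpy as np

N, i_max = 8, 100                    # antipodal pairs, stopping condition
h_cam, h_move = 8/30, 5/30           # camera and motor/processing time [s]
h_total = h_cam + h_move             # 13/30 s per sample
print((4*N + 1) * i_max * h_total)   # 33 * 100 * 13/30 = 1430 s, about 23.8 min

r, C, gamma = 0.4, 3.0, 0.3
alpha = (C * r) / (1.0 + np.arange(i_max))**gamma   # alpha_i = alpha_0/(1+i)^gamma

def random_start(x_hat, r, rng):
    """Uniform draw on the sphere of radius r around x_hat."""
    d = rng.standard_normal(x_hat.shape)
    return x_hat + r * d / np.linalg.norm(d)

# x_hat placed at the origin here purely for illustration
x_0 = random_start(np.zeros(4), r, np.random.default_rng(0))
\\end{verbatim}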
\n\n\\begin{figure}\n \\centering\n $i_{\\text{max}} = 50; (11.9 \\ \\text{Minutes})$\\\\\n \\includegraphics[width=7cm]{itermaxmax_50.eps}\\\\\n $i_{\\text{max}} = 100; (23.8 \\ \\text{Minutes})$ \\\\\n \\includegraphics[width=7cm]{itermaxmax_100.eps}\\\\\n $i_{\\text{max}} = 200; (47.7 \\ \\text{Minutes})$ \\\\\n \\includegraphics[width=7cm]{itermaxmax_200.eps}\\\\\n \\caption{\\label{final_MC_1}Depicted above are the results of three Monte Carlo simulations, wherein Algorithm \\ref{Alg1} is executed 100 times to solve the synthetic CRL alignment problem, varying the stopping condition $i_{\\text{max}}$. The point $\\hat{{\\bf x}}$ is depicted in each figure as a red cross. Figures (a) and (b) show the spatial distribution of results when $i_{\\text{max}} = 50$. Similarly, Figures (c) and (d) denote the results when $i_{\\text{max}} = 100$, and Figures (e) and (f) show $i_{\\text{max}} = 200$. The result of a particular execution is shown as a dark blue dot. The blue ellipses highlight the 99.3\\% uncertainty region.}\n\\end{figure}\n\nWhat remains to be assessed is performance as a function of the user's choice of step size, and cooling parameter. We consider two values for the cooling parameter $\\gamma$, five initial step-size scales $C$, and six stopping conditions $i_{\\text{max}}.$ For each particular set of parameters, Algorithm \\ref{Alg1} is executed 100 times, where the starting position ${\\bf x}_0$ is again sampled randomly from $\\partial \\mathcal{B}_r(\\hat{{\\bf x}})$. These regions demonstrate a collection of parameters that tend to reliably converge under an hour ($i_{\\text{max}} < 200$ iterations.) \n\n\\begin{figure}\n \\centering\n \\includegraphics[width=5cm]{gamma_0_3.eps} \\ \n \\includegraphics[width=5cm]{gamma_0_4.eps}\\\\\n \\caption{Depicted above are the results from 60 Monte Carlo studies varying $i_{\\text{max}}$, the cooling parameter $\\gamma$, and gradient sphere-radius interval $\\alpha_0$. We vary $\\alpha_0$ as a multiple of the initial position's distance from ground truth $C||{\\bf x}_0 - \\hat{{\\bf x}}|| = Cr.$ Figures (a) and (b) fix $\\gamma$ as 0.3 and 0.4, respectively. Each Monte Carlo simulation executes Algorithm \\ref{Alg1} a total of 100 times; the average value of which is depicted normalized by $r$, and depicted as the vertical height. The shaded regions above and below the interpolated lines represent the standard deviation trend, in terms of the 4D Euclidean distance. \\label{final_all}}\n\\end{figure}\n\nWhile additional parameters remain to be thoroughly studied, namely the momentum term $\\beta,$ we found that choices of $\\beta > 0.15$ tended to perform poorly over longer periods of computation time. In particular, when $i_{\\text{max}} > 50$ we saw no apparent improvement to performance. Given that the settings identified above demonstrate convergence that tends to improve with additional computation, an attractive behavior for an unsupervised optimization method, we advise being conservative with $\\beta.$ We additionally note that our choice to equate the maximal step-size with the gradient \\text{diam}eter was born out of observation. Choices of $\\alpha_i$ substantially larger than $\\delta_i$ frequently resulted in failure over longer time intervals. Lastly, we observed that selecting $\\gamma$ too large tended to collapse $\\alpha_i$ too quickly, which was also detrimental to long-time performance. Conversely, selecting $\\gamma$ too small tends to result in slow convergence. 
\n\n\\section{Conclusions}\\label{conclusions}\nThe motivation for this work was encountered while attempting to automate the task of laterally aligning optics at an X-ray Free-Election Laser (XFEL) facility. These facilities, in aggregate, are capable of generating extremely bright pencil beams of X-ray light, but from moment to moment that brightness fluctuates in time. If not for the stochastic noise sources and the apparent intensity fluctuations, the task of orienting beam-line optics reduces to a rather simple minimization problem \\cite{simons17}. While the apparent level of stochastic measurement noise is certainly tractable for many stochastic descent methods, the intensity fluctuations are so severe that they required a separate, independent treatment. \n\nIn this paper, we introduced a differencing scheme to estimate the gradient of a cost functional potentially corrupted by both stochastic noise and independent amplitude fluctuations. We assume that only one position in the search space can be measured at any particular moment in time. Thus, any finite differencing scheme is going to require procedurally moving from point to point, recording each intensity along with the corresponding position and time. In this scheme, we account for the fluctuating amplitude by introducing additional samples at a single, fixed location central to the differencing stencil. By alternating these samples sequentially in time, separating the resulting data post-hoc provides a proportional estimate of the functional's amplitude. This additional signal is then interpolated along the time axis, and subtracted from the corresponding signal generated sequentially by the finite differencing stencil. When well-sampled in space and time, this method is effective at detrending those measurements. \n\nWe included a detailed error analysis of this amplitude-correcting gradient estimate, as well as numerical benchmarking of its performance in nonlinear programming problems solved with SGD. Additionally, given that access to XFEL facilities are highly limited, we included a proof-of-concept implementation of an amplitude-correcting SGD method that we believe shows promise. In doing so, we identified regions of parameter choices that will likely be effective in a similarly-configured apparatus. \n\n\\section*{Acknowledgments} \nThis manuscript has been authored in part by Mission Support and Test Services, LLC, under Contract No. DE-NA0003624 with the U.S. Department of Energy, National Nuclear Security Administration (DOE-NNSA), NA-10 Office of Defense Programs, and supported by the Site-Directed Research and Development Program. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published content of this manuscript, or allow others to do so, for United States Government purposes. The U.S. Department of Energy will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan (http:\/\/energy.gov\/downloads\/doe-public-access-plan). The views expressed in the article do not necessarily represent the views of the U.S. Department of Energy or the United States Government. DOE\/NV\/03624--1406.\n\nPortions of this work were performed at High Pressure Collaborative Access Team (HPCAT; Sector 16), Advanced Photon Source (APS), Argonne National Laboratory. 
HPCAT operations are supported by the DOE-NNSA's Office of Experimental Sciences. The Advanced Photon Source is a DOE Office of Science User Facility operated for the DOE Office of Science by Argonne National Laboratory under Contract No. DE-AC02-06CH11357\n\nSunam Kim, Sangsoo Kim, and Daewoong Nam would like to acknowledge support from the National Research Foundation of Korea (NRF), specifically NRF-2019R1A6B2A02098631 and NRF-2021R1F1A1051444.\n\nPart of this work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. We also acknowledge the support of the Lawrence Fellowship in this work.\n\n \\bibliographystyle{elsarticle-num} \n ","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzdmhn b/data_all_eng_slimpj/shuffled/split2/finalzzdmhn new file mode 100644 index 0000000000000000000000000000000000000000..f7d39b7863a69ab6959ad759b4188acb74dbd27a --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzdmhn @@ -0,0 +1,5 @@ +{"text":"\n\\section{Introduction}\n\nThe inherent randomness of the wireless medium can be utilized for extracting a shared secret, since wireless channels exhibit the feature of \\emph{reciprocity}. This approach is referred to as channel-reciprocity based key generation (CRKG). The underlying assumption is that an eavesdropper (Eve) cannot obtain the same channel state, and thus cannot compute the key. The general feasibility of the approach has been reported by several early works in the literature~\\cite{src:li2006securing,src:wilson2007channel},\nwhich have been extended by subsequent studies related to practical key agreement~\\cite{src:liu2012exploiting,src:pierrot2013}. In particular, there have been some works that deal with the removal of temporal correlation, by methods like principal component analysis (PCA)~\\cite{src:chen2011secret}, beamforming~\\cite{src:madiseh2012applying} or linear prediction~\\cite{src:mcguire2014bounds}. \n\nThroughout the paper, we use \\textit{cross-correlation}, \\textit{mutual information}, and \\emph{secret-key rates} as performance metric. The theoretical foundation of secret-key rates has been established by Maurer~\\cite{src:maurer1993} and Ahlswede et al.~\\cite{src:ahlswede1993}. They coined the information-theoretic \\emph{source-type model}, where Alice, Bob and Eve have access to a jointly random source, and derived bounds on the secret-key \\emph{capacity}. Their result is used in a large body of research, especially for Gaussian channels, e.g., reference~\\cite{src:wallace2009key} for a multi-observation model or \\cite{src:wilson2007channel} for the application to UWB channels. \n\nHowever, some of the popular beliefs regarding the capabilities of the eavesdropper have to be challenged. Many previous works, e.g.,~\\cite{DBLP:conf\/mobicom\/MathurTMYR08sh,DBLP:conf\/mobicom\/JanaPCKPK09sh}, have relied on the assumption that the channel of Alice-to-Bob gets uncorrelated to that of Eve, as long as Eve is positioned more than half a wavelength away from Alice and Bob, commonly referred to as Jake's model~\\cite[Chapter 3.2.1]{src:goldsmith2005wireless}. In the literature, this is usually referred to as \\emph{spatial decorrelation}~\\cite{src:zhang2016key}. A study~\\cite{src:pierrot2013} has questioned this assumption by practical evaluation. 
Recently, a comprehensive study~\\cite{src:he2016toward} has shown that for many popular correlation models of scattering environments, the eavesdropper might obtain largely correlated observations, especially if Eve is located within the line-of-sight beam of Alice and Bob. \n\nIn this work, we intend and shed more light on the threats for CRGK from passive eavesdropping. \nAs a consequence, we extend the work of~\\cite{src:he2016toward} by providing more elaborated practical measurements. We quantify the leakage of Alice and Bob in relation to Eve with respect to the distance, especially for low ranges that introduce near-field effects. The measurement setup is designed with the objective to generate \\emph{reproducible} results, such that we can justify\n\\emph{stationary} random processes. This is a fundamental necessity in order to obtain meaningful results, which has sometimes been overlooked in previous work. The cross-correlation and achievable secret-key rate serve as the performance metrics that indicate the common randomness available to Alice and Bob, and likewise, the information loss to Eve. We evaluate the metrics for the original data and the processed versions after down-sampling or decorrelation. The results demonstrate that the close physical presence of Eve in the communication setting significantly changes the channel statistics. This phenomenon is so far not covered by conventional channel models for CRKG.\n\nSection~\\ref{sec:systemmodel} introduces the system model and elaborates on both the processing of the measured data and the performance metrics on security. The measurement setup is described in section~\\ref{sec:measurements}. The evaluation and results of the measurement campaign are presented in section~\\ref{sec:evaluation}. Finally, section~\\ref{sec:conclusion} concludes the paper. \n\n\\section{System model} \n\\label{sec:systemmodel}\n\\begin{figure}\n\\centering\n\\begin{tikzpicture}[\nblock\/.style={draw, drop shadow, fill=white, rectangle, minimum height=0.75cm, minimum width=3.5em},\npublicch\/.style={draw, drop shadow, fill=white, rectangle, rounded corners,minimum height=1.5em, minimum width=2.5em},\n]\n\\node[block] (alice) at (0,0) {Alice};\t\n\\node[block] (eve) at ($(alice)+(2.5,-1.25)$) {Eve};\n\\node[block] (bob) at ($(alice)+(5,0)$) {Bob};\n\n\\draw[thick,->] ($(alice)+(1,0.2)$) -- node[above] {$h_{ab,k}$} ($(bob)+(-1,0.2)$);\n\\draw[thick,->] ($(bob)+(-1,-0.1)$) -- node[below] {$h_{ba,k}$} ($(alice)+(1,-0.1)$);\n\\draw[thick,->,dashed] (alice) -- node[below,pos=0.5] {$h_{ae,k}$} (eve);\n\\draw[thick,->,dashed] (bob) -- node[below,pos=0.3] {$h_{be,k}$} (eve);\n\\end{tikzpicture}\n\\caption{Overview of the system model.} \\label{fig:systemmodel}\n\\end{figure}\nAs depicted in Figure~\\ref{fig:systemmodel}, we consider Alice, Bob and Eve measuring the channel $h_{ab,k}\\in\\mathbb{R}$, $h_{ba,k}\\in\\mathbb{R}$, $h_{ae,k}\\in\\mathbb{R}$ and $h_{be,k}\\in\\mathbb{R}$, which represent the state of Alice-to-Bob, Bob-to-Alice, Alice-to-Eve and Bob-to-Eve channels, respectively, and $k$ denotes a discrete time instant. We model these variables as joint stationary and ergodic random processes. In general, Eve gets two channel states $\\left(h_{ae,k},h_{be,k}\\right)$, however, in this study we focus on $h_{ae,k}$ only. In the following, we use the labels $x_k:=h_{ba,k}$ for Alice, $y_k:=h_{ab,k}$ for Bob, and $z_k:=h_{ae,k}$ for Eve. 
Furthermore, we define the vector process\n$\\mybold{v}_k:=\\left(x_{k}, y_{k}, z_{k}\\right)^T$.\n\n\\subsection{Processing}\nFor different $k$, the random vectors $\\mybold{v}_k$ are likely to exhibit correlation in time, since the wireless channel is varying only slowly in indoor environments. In order to remove the temporal dependencies, we perform two alternative options of processing, namely either downsampling or decorrelation. We show both options for $x_{k}$ only, since we have the same processing for $y_{k}$ and $z_{k}$. \n\n\\subsubsection{Downsampling}\nIf we keep only every $N_m$th variable of the process $x_{k}$, we effectively\ndownsample by factor $N_m$ and obtain\n\\begin{align}\n\\label{eq:donwsampled}\nx_{k}^{\\text{ds}}=x_{kN_m}.\n\\end{align} \nThe generated $x^{\\text{ds}}_k$ can be assumed independent under the condition that the process does not exhibit any dependence after an interval of $N_m$ variables. Subsequently, we assume that the $\\mybold{v}^{\\text{ds}}_k=\\left(x_{k}^{\\text{ds}},y_{k}^{\\text{ds}},z_{k}^{\\text{ds}}\\right)^T$ are identically and independently distributed (i.i.d.) for different $k$. \n\n\\subsubsection{Decorrelation}\n We need to provide an estimator for the autocorrelation function \n\\begin{align}\n\\label{eq:autocorrest}\n\\hat{r}_{xx}[l] = \\frac{1}{N-l} \\sum_{i=0}^{N-l-1}x_{i}x_{i+l}.\n\\end{align}\nThis estimator is unbiased if the process is correlation-ergodic. The linear forward predictor for $x_{k}$ of order $N_m$ is given by\n\\begin{align}\n\\label{eq:predictor}\n\\hat{x}_{k} = \\sum_{i=1}^{N_m} a_i x_{k-i},\n\\end{align}\nwhere $a_i\\in\\mathbb{R}$ are parameter coefficients, which can be computed by Levinson-Durbin recursion based on Yule-Walker equations~\\cite{src:vaidyanathan2007the}. We define \n\\begin{align}\n\\label{eq:decorr}\nx_{k}^{\\text{de}} = x_{k}-\\hat{x}_{k}\n\\end{align}\nas \\emph{innovation sequence}, which is orthogonal to past $h_{ab,k-i}$ for $i>0$. However, orthogonal (or uncorrelated if zero-mean) variables do not necessarily imply independence, especially not \\emph{joint} independence of $\\mybold{v}^{\\text{de}}_k=\\left(x_{k}^{\\text{de}},y_{k}^{\\text{de}},z_{k}^{\\text{de}}\\right)^T$ for different $k$. Decorrelation is practically more relevant than downsampling (even if no i.i.d. can be achieved), since the information loss is significantly lower.\n\n\\subsection{Performance metrics}\nThroughout the paper, we use (1) the Pearson correlation and (2) secret-key rates as performance metrics for security.\n\n\\subsubsection{Pearson correlation}\nThe Pearson correlation provides a measure of linear dependence between two data series. The values span between $-1$ and $1$, where $1$ refers to absolute correlation, $0$ to no correlation, and $-1$ to perfect inverse correlation. It is a wide-used metric for secrecy of practical secret-key generation~\\cite{src:he2016toward}. 
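Both this correlation metric and the mutual-information estimates below are evaluated on the raw, downsampled and decorrelated sequences. For reproducibility, a minimal sketch of the two processing options~\\eqref{eq:donwsampled} and~\\eqref{eq:decorr} is given here; it assumes NumPy, removes the sample mean before fitting the predictor, and solves the Yule--Walker normal equations directly, which yields the same coefficients as the Levinson--Durbin recursion.
\\begin{verbatim}
# Sketch of the downsampling and decorrelation steps (NumPy only).
import numpy as np

def downsample(x, Nm):
    return x[::Nm]                       # keep every Nm-th sample

def autocorr(x, lmax):
    N = len(x)
    return np.array([x[:N - l] @ x[l:] / (N - l) for l in range(lmax + 1)])

def decorrelate(x, Nm):
    """Innovation sequence x_k - x_hat_k of an order-Nm forward predictor."""
    xc = x - x.mean()                    # remove the sample mean first
    r = autocorr(xc, Nm)
    R = np.array([[r[abs(i - j)] for j in range(Nm)] for i in range(Nm)])
    a = np.linalg.solve(R, r[1:Nm + 1])  # Yule-Walker normal equations
    pred = np.array([a @ xc[k - Nm:k][::-1] for k in range(Nm, len(xc))])
    return xc[Nm:] - pred

# e.g. x_ds = downsample(x, 30) and x_de = decorrelate(x, 30) for N_m = 30
\\end{verbatim}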
Given a finite collection of $N$ pairs $\\left(x_{k},y_{k}\\right)$ from the process, we use the estimator\n\\begin{align}\n\\label{eq:pearson}\n\\rho_{xy}=\\frac{\\sum\\limits_{i=0}^{N-1}\\left(x_{i}-\\bar{x}\\right)\\left(y_{i}-\\bar{y}\\right)}{\\sqrt{\\sum\\limits_{i=0}^{N-1}\\left(x_{i}-\\bar{x}\\right)^2}\\sqrt{\\sum\\limits_{i=0}^{N-1}\\left(y_{i}-\\bar{y}\\right)^2}},\n\\end{align}\nwhere $\\bar{x}=\\frac{1}{N}\\sum_{j=0}^{N-1}x_{j}$ and $\\bar{y}=\\frac{1}{N}\\sum_{j=0}^{N-1}y_{j}$ are the sample means.\n\n\n\n\\subsubsection{Secret-key rate} \n\\label{sec:sk}\nWe introduce the information-theoretic secret-key rate and use the downsampled process~\\eqref{eq:donwsampled}.\nRecall that the $\\mybold{v}^{\\text{ds}}_k$ are i.i.d. We characterize $\\mybold{v}^{\\text{ds}}_k$ by the joint probability density function $f_{\\mybold{v}^{\\text{ds}}_k}$. We apply a lower bound on secret-key capacity based on the source-type model, under the following conditions:\n\\begin{enumerate}\n\\item The joint probability density function $f_{\\mybold{v}^{\\text{ds}}_k}$ is known a priori at all terminals.\n\\item Alice and Bob exchange messages over an authenticated, public channel with unlimited communication capacity.\n\\item Eve remains passive at all times.\n\\end{enumerate}\nSubsequently, the asymptotic bound is given by~\\cite{src:ahlswede1993}\n\\begin{align}\n\\label{eq:sklower}\nC_{\\text{sk}} &\\geq \\mui\\left(x_{k}^{\\text{ds}};y_{k}^{\\text{ds}}\\right) \\notag\\\\\n& \\qquad-\\min\\left[ \\mui\\left(x_{k}^{\\text{ds}};z_{k}^{\\text{ds}}\\right), \\mui\\left(y_{k}^{\\text{ds}};z_{k}^{\\text{ds}}\\right) \\right]=:R_{\\text{sk}}\n\\end{align}\nfor each $k$, since the process is stationary. Since the actual probability distributions are unknown in practice, we evaluate the lower bound~\\eqref{eq:sklower} by estimations, based on a finite number of measured samples. We utilize a $k$-nearest neighbor estimator (NNE) for the mutual information, which is based on the idea and implementation of~\\cite{src:kraskov2004}. Mutual information is a function of joint and marginal probability densities. For a measure of the joint density, the estimator computes the distance between a tuple of samples and its $k$th-next neighbor. A similar approach is provided for the marginal densities. To best of our knowledge, the reliability of the NNE has not been studied systematically. However, results in~\\cite{src:kraskov2004} indicate that at least for multivariate Gaussian variables, the estimation error is very low if $N>10^4$ samples are used for the estimation.\n\nNote that the bound~\\eqref{eq:sklower} could have been defined with the original $\\mybold{v}_k$ or the decorrelated processes~\\eqref{eq:decorr}, such that less information is discarded than in case of downsampling. However, in order to obtain an accurate estimation of~\\eqref{eq:sklower}, we require i.i.d. samples for the two following reasons: \n\\begin{enumerate}\n\\item The bound~\\eqref{eq:sklower} has been derived under the assumption of an unlimited number of i.i.d. observations from a random source. Therefore, a value of $R_{\\text{sk}}$ measured in bits per observation, is meaningful only if the time series is i.i.d. as well.\n\\item The NNE of~\\cite{src:kraskov2004} requires i.i.d. samples, since it relies on Khinchin's theorem~\\cite[p. 277]{src:papoulis2002probability}. 
If the time series of samples exhibits some dependence in time, the estimator might induce an undesired bias.\n\\end{enumerate}\n \nTherefore, if we apply the process $\\mybold{v}_k$ or its decorrelated modification~\\eqref{eq:decorr}, we have an approximation of the lower bound $R_{\\text{sk}}$~\\eqref{eq:sklower} only. While approximating the common information of Alice and Bob is a rather \"safe\" option, we need to be cautious regarding Eve. In order to minimize the risk of underestimating Eve, we verify our results obtained from $\\mybold{v}_k$ or the decorrelated version~\\eqref{eq:decorr} by comparing them with the downsampling approach, since it provides a more accurate description of the information leakage to Eve. Unfortunately, by removing samples from the estimation, the NNE gets more biased.\n\n\n\n\n\\begin{figure}[h]\n\t\t\\centering\n\\includegraphics[width=0.5\\textwidth]{figures\/raumplan\/Raumplan_Setup2.pdf}\n \t\t\\caption{The testbed includes several experimental setups for performance evaluation as well as for security analysis. \n \t\tAlice (X), Bob (Y) and Eve (Z) are mounted on a automated antenna positioning system.}\n \\label{fig:setup}\n\\end{figure}\n\n\\section{Measurements}\n\\label{sec:measurements}\nThe testbed is applied at the premises of our research group, which is an office area in a university building. \nAlice is positioned at a predestined access point position. Bob and Eve are mounted on an automated antenna positioning setup, which is located at several predestined \"end-device\" positions (cf.\\ Figure~\\ref{fig:setup}). For this, we choose positions which are representative for security-related IoT devices, such as doorknobs (keyless entry systems), window frames (perimeter fence intrusion sensor), and wall (motion detectors) positions. Due to a lack of space, in this version of the paper we restrict ourselves to a description of one representative realization of all experiments. We will also provide a full version of the paper with results of $23$ further positionings in the building.\n\n\\begin{table}\n\\caption{Parameters of the measurement setup}\n\\begin{center}\n\\begin{tabular}{|l | c | l| }\n\t\\hline\n\tParameter & Variable & Value \\\\\n\t\\hline\\hline\n\tSampling interval & $T_s$ & $100$ msec \\\\\\hline\n\tProbing duration & $T_p$ & $<5$ msec \\\\\\hline\n\tStep size & $\\Delta_d$ & $5$ mm \\\\\\hline\n\tAccuracy of step size & $\\hat{\\Delta}_d$ & $\\pm 0.05$ mm \\\\\\hline\n\tGeometrical distance Bob-Eve & $\\Delta_{BE}$ & $[0,30]$ cm \\\\\\hline\t\n\tGeometrical distance Alice-Bob & $\\Delta_{AB}$ & $5$ m \\\\\\hline\t\n\tSamples per step & $N$ & $3\\cdot 10^5$ \\\\\\hline\n\\end{tabular}\n\\end{center}\n\n\\label{tab:parameters}\n\\end{table}\n\nWe perform mobile, long-time narrow-band channel measurements on $2.4$~GHz (wavelength $12.5$~cm). The data exchange protocol is implemented on three Raspberry Pi $2$ platforms (credit-card sized computer). \nAll devices are equipped with a CC$2531$ USB enabled IEEE $802.15.4$ communication interface\\footnote{http:\/\/www.ti.com\/tool\/cc2531emk}. The CC$2531$ is a true SoC solution for IEEE $802.15.4$ applications, that is compatible to network layer standards for resource-constrained devices: ZigBee, WirelessHART, and 6LoWPAN. The platform is equipped with proprietary PCB antennas, i.e., \\textit{Meandered Inverted-F antenna} (MIFA), with the size of $5\\times 12$~mm. These antennas provide good performance with a small form factor. 
The platform and antenna design are widely used in commercial products and suited for systems where ultra-low-power consumption is required. \n\nIn order to establish common channel probing, Alice periodically sends data frames to Bob and waits for acknowledgments. Eve also receives these request-response pairs. When receiving a probe, all three devices extract Received Signal Strength Indicators (RSSI) values and, thus, can measure a channel-dependent sequence over time. For evaluation of the channel measurements, we store and process the realizations of $\\mybold{v}_k:=\\left(x_k, y_k, z_k\\right)^T$, locally on a monitoring laptop\n\n\nTable~\\ref{tab:parameters} lists the relevant parameters of our measurement setup. We obtain a complete realization of $\\mybold{v}_k$ on every sampling interval $T_s=100$~msec. The protocol ensures that Alice, Bob, and Eve can probe the channel within a probing duration $T_p<5$~msec. We want to analyze the joint statistical properties of the samples with respect to the position of Eve in the scene. As a consequence, we apply an automated antenna positioning system, which is constructed from a low-reflective material, cf. Figure~\\ref{fig:setup}. \nIt moves the antenna of Eve on a linear guide towards the fixed antenna of Bob in step size $\\Delta_d=5$~mm with accuracy $\\hat{\\Delta}_d=\\pm 0.05$~mm. The variable distance $\\Delta_{BE}$ ranges from $0$ to $30$~cm in order to provide $60$ different locations. Alice's antenna is placed orthogonal to the linear guiding at a fixed distance $\\Delta_{AB}=5$~m. For each position of Eve's antenna, we record at least $N$ samples. \n\nAlice and Bob extract the common randomness $x_k$ and $y_k$ from a time-varying channel. Since we aim for meaningful and reproducible results, we have to create an environment which provides the joint stationarity to the random process. \nTherefore, with a distance of $10$~cm to Alice's antenna, we deploy a curtain of $30 \\times 30$~cm aluminum strips that continuously rotates at $ \\approx 0.1$~rotations per second, cf. Figure~\\ref{fig:setup}. However, the rotation itself inserts a deterministic component into the channel. The evolution of the self-dependence of channel gains --- we show exemplary $x^{\\text{ds}}_k$ --- is illustrated in Figure~\\ref{fig:MI_cyclic}. It shows that the mutual information decays rapidly and vanishes after four samples, corresponding to approximately $400$~ms. However, due to the continuously rotating curtain of aluminum strips, we discover strong stochastical dependencies after $96$ samples, corresponding to approximately $9.6$~s. 
To remove this deterministic component, we connect a random source (the Unix file \\texttt{\/dev\/urandom}) to the motor controller and program the instrument to rotate at a random speed between $0.24$~rad\/s and $1$~rad\/s, in a random direction, and over random interval lengths of $0^\\circ, 1^\\circ, \\ldots, 60^\\circ$ (uniformly distributed).\n Figure~\\ref{fig:MI_cyclic} shows that no strong stochastic dependencies remain.\n\n\\begin{figure}\n\\centering\n\\begin{tikzpicture}\n\\begin{axis}[%\nwidth=7cm,\nheight=4cm,\nscale only axis,\nxmin=0,\nxmax=120,\nxlabel={$l$},\nymin=0,\nymax=0.6,\nylabel={$I(x_0;x_l)$ [bits\/observation]},\nlegend style={draw=black,fill=white,legend cell align=left,at={(0.7,0.9)}}\n]\n\\addplot [color=blue,solid,very thick,dashed]\n table[]{figures\/muiVsDelayAliceOld-1.tsv};\n\\addlegendentry{Continuous rotation}; \n\\addplot [color=red,solid,very thick]\n table[]{figures\/muiVsDelayAliceNew-1.tsv};\n\\addlegendentry{Random rotation}; \n\\end{axis}\n\\end{tikzpicture}%\n\t\\caption{Self-dependence of channel gains with respect to time delay. The setup is equipped with aluminum strips under either continuous or random rotation.}\n\t\\label{fig:MI_cyclic}\n\\end{figure}\n\n\n\\section{Evaluation and Results}\n\\label{sec:evaluation}\nWe now use the experimental measurements to evaluate and compare the results of the Pearson correlation~\\eqref{eq:pearson}, the mutual information, as well as the achievable lower bound on the secret-key capacity~\\eqref{eq:sklower}, as a function of the attacker's distance $\\Delta_{BE}$ to Bob. We interpret the original measurements as realizations of $\\mybold{v}_k$. In addition, we have the decorrelated and downsampled outcomes, denoted by the processes $\\mybold{v}^{\\text{de}}_k$ and $\\mybold{v}^{\\text{ds}}_k$, respectively. The decorrelated samples are obtained by a linear prediction of order $N_m = 30$. To generate the i.i.d. random vectors $\\mybold{v}^{\\text{ds}}_k$, we downsample $\\mybold{v}_k$ by the factor $N_m = 30$. In Section~\\ref{sec:sk}, we have already outlined the necessity of i.i.d. random vectors to obtain accurate estimations. This does not hold for $\\mybold{v}_k$ and $\\mybold{v}^{\\text{de}}_k$; however, they provide valid approximations, as the results indicate later on.\nWe present three Figures~\\ref{fig:original}, \\ref{fig:DS}, and \\ref{fig:decorr} with three Subfigures a)--c) each, which are arranged in a $3\\times 3$ matrix on the next page. The \\emph{rows} denote the Figures as follows.\n\\begin{enumerate}\n\\item Fig.~\\ref{fig:original} illustrates the results for the \\emph{original} process $\\mybold{v}_k$.\n\\item Fig.~\\ref{fig:DS} shows the results for the \\emph{downsampled} process $\\mybold{v}^{\\text{ds}}_k$ of~\\eqref{eq:donwsampled}.\n\\item Fig.~\\ref{fig:decorr} depicts the results for the \\emph{decorrelated} process $\\mybold{v}^{\\text{de}}_k$ of~\\eqref{eq:decorr}.\n\\end{enumerate}\nThe \\emph{columns} constitute the Subfigures as follows. For convenience, we introduce generic labels $X\\in\\left\\lbrace x_k,x^{\\text{de}}_k,x^{\\text{ds}}_k \\right\\rbrace$ for Alice, $Y\\in\\left\\lbrace y_k,y^{\\text{de}}_k,y^{\\text{ds}}_k \\right\\rbrace$ for Bob, and $Z\\in\\left\\lbrace z_k,z^{\\text{de}}_k,z^{\\text{ds}}_k \\right\\rbrace$ for Eve.\n\\begin{enumerate}\n\\item Subfigures a) show the Pearson correlation~\\eqref{eq:pearson} vs. 
geometrical distance $\\Delta_{BE}$ between the three pairs (Alice$\\leftrightarrow$Bob $\\rho_{XY}$, Alice$\\leftrightarrow$Eve $\\rho_{XZ}$, Bob$\\leftrightarrow$Eve $\\rho_{YZ}$).\n\\item Subfigures b) zoom into the correlation $\\rho_{XY}$ of Alice$\\leftrightarrow$Bob.\n\\item Subfigures c) depict the three mutual information results ($I(X;Y)$, $I(X;Z)$, $I(Y;Z)$) and the secret-key rate $R_{\\text{sk}}$ of~\\eqref{eq:sklower} vs. geometrical distance $\\Delta_{BE}$.\n\\end{enumerate}\n\nMost of the practical key generation schemes use downsampling or decorrelation on the original observations $\\mybold{v}_k$. We introduce Figures~\\ref{fig:original}, \\ref{fig:DS}, and \\ref{fig:decorr} in order to analyze whether downsampling and decorrelation obscure certain features of the channel that are important for the security evaluation of the system. We start with a comparison of the cross-correlation behavior between Alice and Bob, as well as to a potential attacker. By comparing Figure~\\ref{fig:original} (a-b) and Figure~\\ref{fig:DS} (a-b) we see that no significant differences in $\\rho_{XY}$ and $\\rho_{XZ}$ occur after downsampling. (Further, $\\rho_{XZ}$ and $\\rho_{YZ}$ are almost identical due to the channel reciprocity between Alice and Bob.) The high similarity is due to the fact that even the process $\\mybold{v}_k$ does not exhibit much dependency in time, as already hinted in Figure~\\ref{fig:MI_cyclic}. As a consequence, the results obtained for $\\mybold{v}_k$ provide a valid approximation of the cross-correlation. As can be seen from Figure~\\ref{fig:DS}, the downsampled results are noisier, since far fewer samples are available for the estimation.\n\n\n\\begin{figure*}[htp!]\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/results\/R_sk_new\/before_decorr_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=0.5cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/results\/R_sk_new\/before_decorr_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=2.2cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/results\/R_sk_new\/before_decorr_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}_k$. In (a) and (b) the cross-correlations are given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given.}\n\t\\label{fig:original}\n\\end{figure*}\n\nAfter decorrelation, the results (see Figure~\\ref{fig:decorr}) show that (unlike in the case of downsampling) the correlation decreases on average by $\\approx 0.05$, which can have a significant negative impact on the performance of a potential quantization scheme, cf.~\\cite[Figure 3]{WiComSec-Phy-QuantAna}. \nFurthermore, the spread between the minimum and the maximum value increases significantly: whereas in the original (and downsampled) signal the difference is $0.995-0.98=0.015$, it is $0.97-0.89=0.08$ for the decorrelated signal. \nThis probably stems from errors of the autocorrelation estimate~\\eqref{eq:autocorrest}, which is necessary for the linear forward prediction. Another reason might be the Pearson correlation itself, where single outliers (e.g., strong peaks) significantly influence the result. 
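\nTo make this preprocessing step explicit, the following minimal sketch indicates how the decorrelated and downsampled traces can be derived from a raw RSSI sequence. It is only an illustration under the assumption that the forward-predictor coefficients are obtained from the estimated autocorrelation via the Yule-Walker equations; the function names are ours and not part of the measurement software.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.linalg import toeplitz\n\ndef autocorr(x, max_lag):\n    # Biased autocorrelation estimate for lags 0..max_lag.\n    x = np.asarray(x, dtype=float) - np.mean(x)\n    n = len(x)\n    return np.array([np.dot(x[:n - l], x[l:]) / n for l in range(max_lag + 1)])\n\ndef decorrelate(x, order=30):\n    # Residual of a linear forward predictor of the given order,\n    # approximating the decorrelated process.\n    x = np.asarray(x, dtype=float) - np.mean(x)\n    r = autocorr(x, order)\n    a = np.linalg.solve(toeplitz(r[:-1]), r[1:])  # Yule-Walker equations\n    return np.array([x[k] - np.dot(a, x[k - 1::-1][:order])\n                     for k in range(order, len(x))])\n\ndef downsample(x, factor=30):\n    # Keep every factor-th sample to obtain approximately i.i.d. samples.\n    return np.asarray(x)[::factor]\n\\end{verbatim}\nApplying \\texttt{decorrelate} and \\texttt{downsample} with $N_m=30$ to each of the three traces yields sequences in the spirit of $\\mybold{v}^{\\text{de}}_k$ and $\\mybold{v}^{\\text{ds}}_k$, respectively.\n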
Analyzing the impact of decorrelation techniques on the reciprocity and security in detail is left for future work.\n\n\n\\begin{figure*}[htp!]\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/results\/R_sk_new\/after_ds_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=0.5cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/results\/R_sk_new\/aftere_ds_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=2.2cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/results\/R_sk_new\/after_ds_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}^{\\text{ds}}_k$. In (a) and (b) the cross-correlations are given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given.}\n\t\\label{fig:DS}\n\\end{figure*}\n\n\\begin{figure*}[t!]\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/results\/R_sk_new\/after_decorr_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=1cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/results\/R_sk_new\/aftere_decorr_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=1.8cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/results\/R_sk_new\/after_decorr_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}^{\\text{de}}_k$. In (a) and (b) the cross-correlations are given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given.}\n\t\\label{fig:decorr}\n\\end{figure*}\n\n\n\n\n\n\nBy analyzing the attacker's opportunity, we observe a wavelength-dependent behavior of the correlation between $z_k$ and $x_k$ (or $y_k$), as illustrated in Subfigures a). The following findings hold for all three processes: $\\mybold{v}_k$, $\\mybold{v}^{\\text{ds}}_k$, $\\mybold{v}^{\\text{de}}_k$. The correlation vs. distance function $\\rho_{XZ}$ (and $\\rho_{YZ}$) looks similar to the channel diversity function known from Jake's model~\\cite{src:goldsmith2005wireless}, which is a zero-order Bessel function\\footnote{A zero-order Bessel function is expected for the cross-correlation behavior of two receivers if the scatterers are uniformly distributed. According to Jake's model, the first zero of the correlation occurs at $\\approx 0.4\\lambda$, where $\\lambda$ is the wavelength of the carrier~\\cite{src:goldsmith2005wireless,DBLP:books\/daglib\/0025266}.} (cf. Figure~\\ref{fig:bessel}). However, the highest correlation is not at distance $\\Delta_{BE} =0$, where the correlation is only $0.2$. The highest cross-correlation occurs at a distance of $\\Delta_{BE} \\approx 12.5$~cm, which is the wavelength of the $2.4$~GHz carrier. The first zero of the correlation occurs at a distance of $4$~cm. \n\n\\begin{figure}[htp]\n\t\t\\centering\n\\includegraphics[width=0.375\\textwidth]{figures\/bessel\/bessel.pdf}\n \t\t\\caption{Bessel function versus distance.}\n \\label{fig:bessel}\n\\end{figure}\n\n\n\n\nNote that the cross-correlation behavior of $x_k$ to $y_k$ is not independent of Eve's antenna position. Figure~\\ref{fig:original}(b) illustrates the correlation behavior in detail. The correlation has an \"oscillating\" behavior with a wavelength of approximately $11$~cm, whereby at a distance of $5$~cm the curve drops rapidly to its lowest level of $\\approx 0.98$. The reason might be that the scatterers in the environment are not perfectly uniformly distributed, as assumed by Jake's model. 
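\nFor reference, under the idealized assumption of uniformly distributed scatterers, Jake's model predicts for two receive antennas separated by a distance $d$ the spatial cross-correlation\n$$\n\\rho(d)=J_0\\!\\left(\\frac{2\\pi d}{\\lambda}\\right),\n$$\nwhose first zero lies at $2\\pi d\/\\lambda\\approx 2.405$, i.e., at $d\\approx 0.38\\lambda\\approx 4.8$~cm for $\\lambda=12.5$~cm. Both the measured first zero at $4$~cm and the correlation peak near one wavelength therefore deviate from this idealized prediction.\n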
\nThe oscillating behavior in Alice's and Bob's original observation is also present in the downsampled and decorrelated versions, cf. Figure~\\ref{fig:DS}(b) and Figure~\\ref{fig:decorr}(b).\nThis behavior contradicts theoretical approaches based on Jake's Doppler spectrum~\\cite{DBLP:books\/daglib\/0025266}. The reason might be that the narrow-band fading models do not include coupling and near-field effects between both antennas for the spatial evaluation of autocorrelation, cross-correlation, and power spectral density (cf. \\cite[Chapter 3.2]{src:goldsmith2005wireless}). \n\n\nThe boundary $B$ between the near field zone and the far field zone can usually be determined by the following relationship: $B\\geq \\frac{2D^2}{\\lambda}$, where $D$ is the largest antenna size~\\cite{DBLP:journals\/comnet\/DlugoszT10}. We estimated the largest dimension $D$ of our antenna to be $6$~cm; with $\\lambda=12.5$~cm, the boundary is therefore $\\approx 5.7$~cm. \nAnalyzing near field boundaries in detail is left for future work. \n\n\n\n\nCompared with the cross-correlation behavior between the i.i.d. samples $x^{\\text{ds}}_k$ and $y^{\\text{ds}}_k$ (after downsampling), both the mutual information $I(X;Y)$ and $R_{\\text{sk}}$ show a very similar oscillating behavior, as shown in Subfigures c). The (minimum, maximum) values of the correlation are ($0.980$, $0.995$) and those of the mutual information are ($2.1$, $2.75$). \nBy analyzing Eve's observation, we see only a slight similarity between the mutual information $I(X;Z)$ (and $I(Y;Z)$) and the correlation behavior of her observation $\\rho_{XZ}$ (and $\\rho_{YZ}$). The similarity can be found by comparing the maximum absolute values. For instance, the highest correlation occurs at $10$~cm with a value of $0.5$, and corresponds to the highest mutual information of $0.5$~bits per sample. \nHowever, the Bessel-like behavior is not evident. Notable is the fact that the attacker's observation $z_k$ does not significantly impact $R_{\\text{sk}}$. Our results show that $R_{\\text{sk}}$ is mainly dependent on $x_k$ and $y_k$. \nHowever, Eve's antenna affects Alice's and Bob's observation and, therefore, affects $R_{\\text{sk}}$. Table~\\ref{tab:results} summarizes our results.\n\n\n\\begin{table}\n\\caption{Averaged results of our experiment.}\n\\begin{center}\n\\begin{tabular}{|l | c | c | c | }\n \\hline\n & $\\mybold{v}_k$ & $\\mybold{v}^{\\text{ds}}_k$ & $\\mybold{v}^{\\text{de}}_k$ \\\\\n \\hline\\hline\n $\\rho_{x_k,y_k}$ & $\\approx 0.99$ & $\\approx 0.99$ & $0.94$ \\\\\\hline\n $\\rho_{y_k,z_k}$ & $\\approx 0.09$ & $\\approx 0.09$ & $\\approx 0.07$ \\\\\\hline\n $I(X;Y)$ & $\\approx 2.92$ & $\\approx 2.89$ & $\\approx 2.31$ \\\\\\hline\n $I(Y;Z)$ & $\\approx 0.26$ & $\\approx 0.27$ & $0.10$ \\\\\\hline\n $R_{\\text{sk}}$ & $\\approx 2.67$ & $\\approx 2.63$ & $\\approx 2.22$ \\\\\\hline\n\\end{tabular}\n\\end{center}\n\n\\label{tab:results}\n\\end{table}\n\n\n\n\n\\section{Conclusion}\n\\label{sec:conclusion}\nIn this work, we have provided an important pillar to bridge the gap between theory- and practice-oriented approaches for CRKG. Our experimental study helps to provide a better understanding of channel statistics in wireless environments for security applications. \nWe present reproducible results based on a relevant environment which provides joint stationarity of the random process. \nWe show results of cross-correlation, mutual information, and secret-key rates, which depend on the attacker's (or a third device's) position. 
\nAs a result, we discovered that the \\textit{observer effect} occurs, which most probably originates from near-field distortions. \nWe believe the effect needs to be considered in the future. Common channel models like Jake's model for channel diversity need to be extended in order to be valid for key generation setups. Furthermore, the effect might be exploited, for instance, to detect the proximity of Eve. Based on our results, two bidirectionally communicating nodes might recognize a third device, its relative position, and its motion in their proximity. Further studies might use complex-valued channel profiles to analyze position- and motion-based influences of third parties.\n\n\n\n\n\\section{Full Measurement Results}\n\\begin{figure*}\n\t\t\\centering\n\\includegraphics[width=0.75\\textwidth]{figures\/raumplan\/Raumplan_Setup.pdf}\n \t\t\\caption{The testbed includes several experimental setups for performance evaluation as well as for security analysis. \tAlice (X), Bob (Y) and Eve (Z) are mounted on an automated antenna positioning system.}\n \\label{fig:setup_ur}\n\\end{figure*}\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-05_Nachtmessung_Paar_Office\/results\/03-02-2016_11-44-39\/before_decorr_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=0.5cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-05_Nachtmessung_Paar_Office\/results\/03-02-2016_11-44-39\/before_decorr_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=2.2cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-05_Nachtmessung_Paar_Office\/results\/03-02-2016_11-44-39\/before_decorr_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. Position 0.}\n\t\\label{fig:app_original_0}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-05_Nachtmessung_Paar_Office\/results\/03-02-2016_11-44-39\/after_ds_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=0.5cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-05_Nachtmessung_Paar_Office\/results\/03-02-2016_11-44-39\/aftere_ds_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=2.2cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-05_Nachtmessung_Paar_Office\/results\/03-02-2016_11-44-39\/after_ds_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}^{\\text{ds}}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. 
Position 0.}\n\t\\label{fig:app_ds_0}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-05_Nachtmessung_Paar_Office\/results\/03-02-2016_11-44-39\/after_decorr_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=1cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-05_Nachtmessung_Paar_Office\/results\/03-02-2016_11-44-39\/aftere_decorr_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=1.8cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-05_Nachtmessung_Paar_Office\/results\/03-02-2016_11-44-39\/after_decorr_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}^{\\text{de}}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. Position 0.}\n\t\\label{fig:app_decorr_0}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-06_Nachtmessung_Paar_Office\/results\/03-02-2016_11-54-10\/before_decorr_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=0.5cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-06_Nachtmessung_Paar_Office\/results\/03-02-2016_11-54-10\/before_decorr_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=2.2cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-06_Nachtmessung_Paar_Office\/results\/03-02-2016_11-54-10\/before_decorr_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. Position 1.}\n\t\\label{fig:app_original_1}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-06_Nachtmessung_Paar_Office\/results\/03-02-2016_11-54-10\/after_ds_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=0.5cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-06_Nachtmessung_Paar_Office\/results\/03-02-2016_11-54-10\/aftere_ds_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=2.2cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-06_Nachtmessung_Paar_Office\/results\/03-02-2016_11-54-10\/after_ds_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}^{\\text{ds}}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. 
Position 1.}\n\t\\label{fig:app_ds_1}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-06_Nachtmessung_Paar_Office\/results\/03-02-2016_11-54-10\/after_decorr_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=1cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-06_Nachtmessung_Paar_Office\/results\/03-02-2016_11-54-10\/aftere_decorr_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=1.8cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-06_Nachtmessung_Paar_Office\/results\/03-02-2016_11-54-10\/after_decorr_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}^{\\text{de}}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. Position 1.}\n\t\\label{fig:app_decorr_1}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-06_Tagmessung_Paar_Office\/results\/03-02-2016_12-06-20\/before_decorr_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=0.5cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-06_Tagmessung_Paar_Office\/results\/03-02-2016_12-06-20\/before_decorr_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=2.2cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-06_Tagmessung_Paar_Office\/results\/03-02-2016_12-06-20\/before_decorr_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. Position 2.}\n\t\\label{fig:app_original_2}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-06_Tagmessung_Paar_Office\/results\/03-02-2016_12-06-20\/after_ds_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=0.5cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-06_Tagmessung_Paar_Office\/results\/03-02-2016_12-06-20\/aftere_ds_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=2.2cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-06_Tagmessung_Paar_Office\/results\/03-02-2016_12-06-20\/after_ds_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}^{\\text{ds}}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. Position 2.}\n\t\\label{fig:app_ds_2}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-06_Tagmessung_Paar_Office\/results\/03-02-2016_12-06-20\/after_decorr_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=1cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-06_Tagmessung_Paar_Office\/results\/03-02-2016_12-06-20\/aftere_decorr_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=1.8cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-06_Tagmessung_Paar_Office\/results\/03-02-2016_12-06-20\/after_decorr_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}^{\\text{de}}_k$. 
In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. Position 2.}\n\t\\label{fig:app_decorr_2}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-07_TagNachtMessung_Paar_Office\/results\/03-02-2016_12-13-48\/before_decorr_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=0.5cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-07_TagNachtMessung_Paar_Office\/results\/03-02-2016_12-13-48\/before_decorr_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=2.2cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-07_TagNachtMessung_Paar_Office\/results\/03-02-2016_12-13-48\/before_decorr_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. Position 3.}\n\t\\label{fig:app_original_3}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-07_TagNachtMessung_Paar_Office\/results\/03-02-2016_12-13-48\/after_ds_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=0.5cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-07_TagNachtMessung_Paar_Office\/results\/03-02-2016_12-13-48\/aftere_ds_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=2.2cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-07_TagNachtMessung_Paar_Office\/results\/03-02-2016_12-13-48\/after_ds_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}^{\\text{ds}}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. Position 3.}\n\t\\label{fig:app_ds_3}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-07_TagNachtMessung_Paar_Office\/results\/03-02-2016_12-13-48\/after_decorr_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=1cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-07_TagNachtMessung_Paar_Office\/results\/03-02-2016_12-13-48\/aftere_decorr_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=1.8cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-07_TagNachtMessung_Paar_Office\/results\/03-02-2016_12-13-48\/after_decorr_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}^{\\text{de}}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. 
Position 3.}\n\t\\label{fig:app_decorr_3}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-08_Tagmessung_Paar_Office\/results\/03-02-2016_14-47-03\/before_decorr_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=0.5cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-08_Tagmessung_Paar_Office\/results\/03-02-2016_14-47-03\/before_decorr_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=2.2cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-08_Tagmessung_Paar_Office\/results\/03-02-2016_14-47-03\/before_decorr_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. Position 4.}\n\t\\label{fig:app_original_4}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-08_Tagmessung_Paar_Office\/results\/03-02-2016_14-47-03\/after_ds_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=0.5cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-08_Tagmessung_Paar_Office\/results\/03-02-2016_14-47-03\/aftere_ds_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=2.2cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-08_Tagmessung_Paar_Office\/results\/03-02-2016_14-47-03\/after_ds_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}^{\\text{ds}}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. Position 4.}\n\t\\label{fig:app_ds_4}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-08_Tagmessung_Paar_Office\/results\/03-02-2016_14-47-03\/after_decorr_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=1cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-08_Tagmessung_Paar_Office\/results\/03-02-2016_14-47-03\/aftere_decorr_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=1.8cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-08_Tagmessung_Paar_Office\/results\/03-02-2016_14-47-03\/after_decorr_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}^{\\text{de}}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. 
Position 4.}\n\t\\label{fig:app_decorr_4}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-08_Wochenendmessung_Paar_Office_und_Serverraum\/results\/03-02-2016_16-00-49\/before_decorr_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=0.5cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-08_Wochenendmessung_Paar_Office_und_Serverraum\/results\/03-02-2016_16-00-49\/before_decorr_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=2.2cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-08_Wochenendmessung_Paar_Office_und_Serverraum\/results\/03-02-2016_16-00-49\/before_decorr_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. Position 5.}\n\t\\label{fig:app_original_5}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-08_Wochenendmessung_Paar_Office_und_Serverraum\/results\/03-02-2016_16-00-49\/after_ds_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=0.5cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-08_Wochenendmessung_Paar_Office_und_Serverraum\/results\/03-02-2016_16-00-49\/aftere_ds_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=2.2cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-08_Wochenendmessung_Paar_Office_und_Serverraum\/results\/03-02-2016_16-00-49\/after_ds_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}^{\\text{ds}}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. Position 5.}\n\t\\label{fig:app_ds_5}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-08_Wochenendmessung_Paar_Office_und_Serverraum\/results\/03-02-2016_16-00-49\/after_decorr_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=1cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-08_Wochenendmessung_Paar_Office_und_Serverraum\/results\/03-02-2016_16-00-49\/aftere_decorr_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=1.8cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-08_Wochenendmessung_Paar_Office_und_Serverraum\/results\/03-02-2016_16-00-49\/after_decorr_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}^{\\text{de}}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. 
Position 5.}\n\t\\label{fig:app_decorr_5}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-11_Nachtmessung_Archiv_und_Serverraum\/results\/03-02-2016_17-15-42\/before_decorr_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=0.5cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-11_Nachtmessung_Archiv_und_Serverraum\/results\/03-02-2016_17-15-42\/before_decorr_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=2.2cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-11_Nachtmessung_Archiv_und_Serverraum\/results\/03-02-2016_17-15-42\/before_decorr_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. Position 6.}\n\t\\label{fig:app_original_6}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-11_Nachtmessung_Archiv_und_Serverraum\/results\/03-02-2016_17-15-42\/after_ds_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=0.5cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-11_Nachtmessung_Archiv_und_Serverraum\/results\/03-02-2016_17-15-42\/aftere_ds_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=2.2cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-11_Nachtmessung_Archiv_und_Serverraum\/results\/03-02-2016_17-15-42\/after_ds_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}^{\\text{ds}}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. Position 6.}\n\t\\label{fig:app_ds_6}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-11_Nachtmessung_Archiv_und_Serverraum\/results\/03-02-2016_17-15-42\/after_decorr_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=1cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-11_Nachtmessung_Archiv_und_Serverraum\/results\/03-02-2016_17-15-42\/aftere_decorr_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=1.8cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-11_Nachtmessung_Archiv_und_Serverraum\/results\/03-02-2016_17-15-42\/after_decorr_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}^{\\text{de}}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. 
Position 6.}\n\t\\label{fig:app_decorr_6}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-11_Tagmessung_Archiv_und_Serverraum\/results\/03-02-2016_17-27-43\/before_decorr_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=0.5cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-11_Tagmessung_Archiv_und_Serverraum\/results\/03-02-2016_17-27-43\/before_decorr_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=2.2cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-11_Tagmessung_Archiv_und_Serverraum\/results\/03-02-2016_17-27-43\/before_decorr_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. Position 7.}\n\t\\label{fig:app_original_7}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-11_Tagmessung_Archiv_und_Serverraum\/results\/03-02-2016_17-27-43\/after_ds_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=0.5cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-11_Tagmessung_Archiv_und_Serverraum\/results\/03-02-2016_17-27-43\/aftere_ds_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=2.2cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-11_Tagmessung_Archiv_und_Serverraum\/results\/03-02-2016_17-27-43\/after_ds_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}^{\\text{ds}}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. Position 7.}\n\t\\label{fig:app_ds_7}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-11_Tagmessung_Archiv_und_Serverraum\/results\/03-02-2016_17-27-43\/after_decorr_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=1cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-11_Tagmessung_Archiv_und_Serverraum\/results\/03-02-2016_17-27-43\/aftere_decorr_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=1.8cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-11_Tagmessung_Archiv_und_Serverraum\/results\/03-02-2016_17-27-43\/after_decorr_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}^{\\text{de}}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. 
Position 7.}\n\t\\label{fig:app_decorr_7}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-12_Nachtmessung_Irmgard_und_Serverraum\/results\/03-02-2016_17-35-29\/before_decorr_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=0.5cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-12_Nachtmessung_Irmgard_und_Serverraum\/results\/03-02-2016_17-35-29\/before_decorr_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=2.2cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-12_Nachtmessung_Irmgard_und_Serverraum\/results\/03-02-2016_17-35-29\/before_decorr_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. Position 8.}\n\t\\label{fig:app_original_8}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-12_Nachtmessung_Irmgard_und_Serverraum\/results\/03-02-2016_17-35-29\/after_ds_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=0.5cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-12_Nachtmessung_Irmgard_und_Serverraum\/results\/03-02-2016_17-35-29\/aftere_ds_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=2.2cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-12_Nachtmessung_Irmgard_und_Serverraum\/results\/03-02-2016_17-35-29\/after_ds_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}^{\\text{ds}}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. Position 8.}\n\t\\label{fig:app_ds_8}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-12_Nachtmessung_Irmgard_und_Serverraum\/results\/03-02-2016_17-35-29\/after_decorr_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=1cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-12_Nachtmessung_Irmgard_und_Serverraum\/results\/03-02-2016_17-35-29\/aftere_decorr_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=1.8cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-12_Nachtmessung_Irmgard_und_Serverraum\/results\/03-02-2016_17-35-29\/after_decorr_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}^{\\text{de}}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. 
Position 8.}\n\t\\label{fig:app_decorr_8}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-12_Tagmessung_Archiv_und_Serverraum\/results\/03-02-2016_17-44-50\/before_decorr_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=0.5cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-12_Tagmessung_Archiv_und_Serverraum\/results\/03-02-2016_17-44-50\/before_decorr_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=2.2cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-12_Tagmessung_Archiv_und_Serverraum\/results\/03-02-2016_17-44-50\/before_decorr_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. Position 9.}\n\t\\label{fig:app_original_9}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-12_Tagmessung_Archiv_und_Serverraum\/results\/03-02-2016_17-44-50\/after_ds_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=0.5cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-12_Tagmessung_Archiv_und_Serverraum\/results\/03-02-2016_17-44-50\/aftere_ds_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=2.2cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-12_Tagmessung_Archiv_und_Serverraum\/results\/03-02-2016_17-44-50\/after_ds_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}^{\\text{ds}}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. Position 9.}\n\t\\label{fig:app_ds_9}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-12_Tagmessung_Archiv_und_Serverraum\/results\/03-02-2016_17-44-50\/after_decorr_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=1cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-12_Tagmessung_Archiv_und_Serverraum\/results\/03-02-2016_17-44-50\/aftere_decorr_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=1.8cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-12_Tagmessung_Archiv_und_Serverraum\/results\/03-02-2016_17-44-50\/after_decorr_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}^{\\text{de}}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. 
Position 9.}\n\t\\label{fig:app_decorr_9}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-13_Nachtmessung_Serverraum_und_Tim\/results\/03-02-2016_17-52-17\/before_decorr_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=0.5cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-13_Nachtmessung_Serverraum_und_Tim\/results\/03-02-2016_17-52-17\/before_decorr_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=2.2cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-13_Nachtmessung_Serverraum_und_Tim\/results\/03-02-2016_17-52-17\/before_decorr_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. Position 10.}\n\t\\label{fig:app_original_10}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-13_Nachtmessung_Serverraum_und_Tim\/results\/03-02-2016_17-52-17\/after_ds_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=0.5cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-13_Nachtmessung_Serverraum_und_Tim\/results\/03-02-2016_17-52-17\/aftere_ds_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=2.2cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-13_Nachtmessung_Serverraum_und_Tim\/results\/03-02-2016_17-52-17\/after_ds_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}^{\\text{ds}}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. Position 10.}\n\t\\label{fig:app_ds_10}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-13_Nachtmessung_Serverraum_und_Tim\/results\/03-02-2016_17-52-17\/after_decorr_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=1cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-13_Nachtmessung_Serverraum_und_Tim\/results\/03-02-2016_17-52-17\/aftere_decorr_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=1.8cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-13_Nachtmessung_Serverraum_und_Tim\/results\/03-02-2016_17-52-17\/after_decorr_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}^{\\text{de}}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. 
Position 10.}\n\t\\label{fig:app_decorr_10}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-14_Nachtmessung_Serverraum_und_Falk\/results\/03-02-2016_18-02-23\/before_decorr_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=0.5cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-14_Nachtmessung_Serverraum_und_Falk\/results\/03-02-2016_18-02-23\/before_decorr_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=2.2cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-14_Nachtmessung_Serverraum_und_Falk\/results\/03-02-2016_18-02-23\/before_decorr_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. Position 11.}\n\t\\label{fig:app_original_11}\n\\end{figure*}\n\n\\clearpage\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-14_Nachtmessung_Serverraum_und_Falk\/results\/03-02-2016_18-02-23\/after_ds_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=0.5cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-14_Nachtmessung_Serverraum_und_Falk\/results\/03-02-2016_18-02-23\/aftere_ds_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=2.2cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-14_Nachtmessung_Serverraum_und_Falk\/results\/03-02-2016_18-02-23\/after_ds_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}^{\\text{ds}}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. Position 11.}\n\t\\label{fig:app_ds_11}\n\\end{figure*}\n\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-14_Nachtmessung_Serverraum_und_Falk\/results\/03-02-2016_18-02-23\/after_decorr_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=1cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-14_Nachtmessung_Serverraum_und_Falk\/results\/03-02-2016_18-02-23\/aftere_decorr_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=1.8cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-14_Nachtmessung_Serverraum_und_Falk\/results\/03-02-2016_18-02-23\/after_decorr_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}^{\\text{de}}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. 
Position 11.}\n\t\\label{fig:app_decorr_11}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-14_Tagmessung_Serverraum\/results\/03-02-2016_18-11-21\/before_decorr_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=0.5cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-14_Tagmessung_Serverraum\/results\/03-02-2016_18-11-21\/before_decorr_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=2.2cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-14_Tagmessung_Serverraum\/results\/03-02-2016_18-11-21\/before_decorr_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. Position 12.}\n\t\\label{fig:app_original_12}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-14_Tagmessung_Serverraum\/results\/03-02-2016_18-11-21\/after_ds_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=0.5cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-14_Tagmessung_Serverraum\/results\/03-02-2016_18-11-21\/aftere_ds_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=2.2cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-14_Tagmessung_Serverraum\/results\/03-02-2016_18-11-21\/after_ds_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}^{\\text{ds}}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. Position 12.}\n\t\\label{fig:app_ds_12}\n\\end{figure*}\n\n\\clearpage\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-14_Tagmessung_Serverraum\/results\/03-02-2016_18-11-21\/after_decorr_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=1cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-14_Tagmessung_Serverraum\/results\/03-02-2016_18-11-21\/aftere_decorr_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=1.8cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-14_Tagmessung_Serverraum\/results\/03-02-2016_18-11-21\/after_decorr_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}^{\\text{de}}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. 
Position 12.}\n\t\\label{fig:app_decorr_12}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-15_Tagmessung_Serverraum_und_Archiv\/results\/03-02-2016_18-19-03\/before_decorr_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=0.5cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-15_Tagmessung_Serverraum_und_Archiv\/results\/03-02-2016_18-19-03\/before_decorr_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=2.2cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-15_Tagmessung_Serverraum_und_Archiv\/results\/03-02-2016_18-19-03\/before_decorr_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. Position 13.}\n\t\\label{fig:app_original_13}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-15_Tagmessung_Serverraum_und_Archiv\/results\/03-02-2016_18-19-03\/after_ds_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=0.5cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-15_Tagmessung_Serverraum_und_Archiv\/results\/03-02-2016_18-19-03\/aftere_ds_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=2.2cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-15_Tagmessung_Serverraum_und_Archiv\/results\/03-02-2016_18-19-03\/after_ds_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}^{\\text{ds}}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. Position 13.}\n\t\\label{fig:app_ds_13}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-15_Tagmessung_Serverraum_und_Archiv\/results\/03-02-2016_18-19-03\/after_decorr_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=1cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-15_Tagmessung_Serverraum_und_Archiv\/results\/03-02-2016_18-19-03\/aftere_decorr_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=1.8cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-15_Tagmessung_Serverraum_und_Archiv\/results\/03-02-2016_18-19-03\/after_decorr_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}^{\\text{de}}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. 
Position 13.}\n\t\\label{fig:app_decorr_13}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-15_Wochenendmessung_Serverraum_und_Christian\/results\/03-02-2016_18-25-55\/before_decorr_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=0.5cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-15_Wochenendmessung_Serverraum_und_Christian\/results\/03-02-2016_18-25-55\/before_decorr_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=2.2cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-15_Wochenendmessung_Serverraum_und_Christian\/results\/03-02-2016_18-25-55\/before_decorr_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. Position 14.}\n\t\\label{fig:app_original_14}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-15_Wochenendmessung_Serverraum_und_Christian\/results\/03-02-2016_18-25-55\/after_ds_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=0.5cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-15_Wochenendmessung_Serverraum_und_Christian\/results\/03-02-2016_18-25-55\/aftere_ds_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=2.2cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-15_Wochenendmessung_Serverraum_und_Christian\/results\/03-02-2016_18-25-55\/after_ds_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}^{\\text{ds}}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. Position 14.}\n\t\\label{fig:app_ds_14}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-15_Wochenendmessung_Serverraum_und_Christian\/results\/03-02-2016_18-25-55\/after_decorr_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=1cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-15_Wochenendmessung_Serverraum_und_Christian\/results\/03-02-2016_18-25-55\/aftere_decorr_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=1.8cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-15_Wochenendmessung_Serverraum_und_Christian\/results\/03-02-2016_18-25-55\/after_decorr_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}^{\\text{de}}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. 
Position 14.}\n\t\\label{fig:app_decorr_14}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-18_Nachtmessung_Serverraum_und_Kueche\/results\/03-02-2016_19-06-14\/before_decorr_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=0.5cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-18_Nachtmessung_Serverraum_und_Kueche\/results\/03-02-2016_19-06-14\/before_decorr_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=2.2cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-18_Nachtmessung_Serverraum_und_Kueche\/results\/03-02-2016_19-06-14\/before_decorr_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. Position 15.}\n\t\\label{fig:app_original_15}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-18_Nachtmessung_Serverraum_und_Kueche\/results\/03-02-2016_19-06-14\/after_ds_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=0.5cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-18_Nachtmessung_Serverraum_und_Kueche\/results\/03-02-2016_19-06-14\/aftere_ds_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=2.2cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-18_Nachtmessung_Serverraum_und_Kueche\/results\/03-02-2016_19-06-14\/after_ds_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}^{\\text{ds}}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. Position 15.}\n\t\\label{fig:app_ds_15}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-18_Nachtmessung_Serverraum_und_Kueche\/results\/03-02-2016_19-06-14\/after_decorr_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=1cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-18_Nachtmessung_Serverraum_und_Kueche\/results\/03-02-2016_19-06-14\/aftere_decorr_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=1.8cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-18_Nachtmessung_Serverraum_und_Kueche\/results\/03-02-2016_19-06-14\/after_decorr_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}^{\\text{de}}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. 
Position 15.}\n\t\\label{fig:app_decorr_15}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-18_Tagmessung_Serverraum_und_Archiv\/results\/03-02-2016_19-18-10\/before_decorr_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=0.5cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-18_Tagmessung_Serverraum_und_Archiv\/results\/03-02-2016_19-18-10\/before_decorr_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=2.2cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-18_Tagmessung_Serverraum_und_Archiv\/results\/03-02-2016_19-18-10\/before_decorr_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. Position 16.}\n\t\\label{fig:app_original_16}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-18_Tagmessung_Serverraum_und_Archiv\/results\/03-02-2016_19-18-10\/after_ds_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=0.5cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-18_Tagmessung_Serverraum_und_Archiv\/results\/03-02-2016_19-18-10\/aftere_ds_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=2.2cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-18_Tagmessung_Serverraum_und_Archiv\/results\/03-02-2016_19-18-10\/after_ds_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}^{\\text{ds}}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. Position 16.}\n\t\\label{fig:app_ds_16}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-18_Tagmessung_Serverraum_und_Archiv\/results\/03-02-2016_19-18-10\/after_decorr_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=1cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-18_Tagmessung_Serverraum_und_Archiv\/results\/03-02-2016_19-18-10\/aftere_decorr_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=1.8cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-18_Tagmessung_Serverraum_und_Archiv\/results\/03-02-2016_19-18-10\/after_decorr_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}^{\\text{de}}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. 
Position 16.}\n\t\\label{fig:app_decorr_16}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-19_Nachtmessung_Serverraum_und_Lounge\/results\/03-02-2016_19-24-32\/before_decorr_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=0.5cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-19_Nachtmessung_Serverraum_und_Lounge\/results\/03-02-2016_19-24-32\/before_decorr_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=2.2cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-19_Nachtmessung_Serverraum_und_Lounge\/results\/03-02-2016_19-24-32\/before_decorr_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. Position 17.}\n\t\\label{fig:app_original_17}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-19_Nachtmessung_Serverraum_und_Lounge\/results\/03-02-2016_19-24-32\/after_ds_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=0.5cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-19_Nachtmessung_Serverraum_und_Lounge\/results\/03-02-2016_19-24-32\/aftere_ds_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=2.2cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-19_Nachtmessung_Serverraum_und_Lounge\/results\/03-02-2016_19-24-32\/after_ds_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}^{\\text{ds}}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. Position 17.}\n\t\\label{fig:app_ds_17}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-19_Nachtmessung_Serverraum_und_Lounge\/results\/03-02-2016_19-24-32\/after_decorr_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=1cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-19_Nachtmessung_Serverraum_und_Lounge\/results\/03-02-2016_19-24-32\/aftere_decorr_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=1.8cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-19_Nachtmessung_Serverraum_und_Lounge\/results\/03-02-2016_19-24-32\/after_decorr_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}^{\\text{de}}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. 
Position 17.}\n\t\\label{fig:app_decorr_17}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-19_Tagmessung_Serverraum_und_Tim\/results\/03-02-2016_19-32-16\/before_decorr_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=0.5cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-19_Tagmessung_Serverraum_und_Tim\/results\/03-02-2016_19-32-16\/before_decorr_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=2.2cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-19_Tagmessung_Serverraum_und_Tim\/results\/03-02-2016_19-32-16\/before_decorr_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. Position 18.}\n\t\\label{fig:app_original_18}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-19_Tagmessung_Serverraum_und_Tim\/results\/03-02-2016_19-32-16\/after_ds_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=0.5cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-19_Tagmessung_Serverraum_und_Tim\/results\/03-02-2016_19-32-16\/aftere_ds_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=2.2cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-19_Tagmessung_Serverraum_und_Tim\/results\/03-02-2016_19-32-16\/after_ds_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}^{\\text{ds}}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. Position 18.}\n\t\\label{fig:app_ds_18}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-19_Tagmessung_Serverraum_und_Tim\/results\/03-02-2016_19-32-16\/after_decorr_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=1cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-19_Tagmessung_Serverraum_und_Tim\/results\/03-02-2016_19-32-16\/aftere_decorr_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=1.8cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-19_Tagmessung_Serverraum_und_Tim\/results\/03-02-2016_19-32-16\/after_decorr_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}^{\\text{de}}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. 
Position 18.}\n\t\\label{fig:app_decorr_18}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-20_Nachtmessung_Serverraum_und_Tobias\/results\/03-02-2016_19-38-45\/before_decorr_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=0.5cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-20_Nachtmessung_Serverraum_und_Tobias\/results\/03-02-2016_19-38-45\/before_decorr_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=2.2cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-20_Nachtmessung_Serverraum_und_Tobias\/results\/03-02-2016_19-38-45\/before_decorr_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. Position 19.}\n\t\\label{fig:app_original_19}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-20_Nachtmessung_Serverraum_und_Tobias\/results\/03-02-2016_19-38-45\/after_ds_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=0.5cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-20_Nachtmessung_Serverraum_und_Tobias\/results\/03-02-2016_19-38-45\/aftere_ds_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=2.2cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-20_Nachtmessung_Serverraum_und_Tobias\/results\/03-02-2016_19-38-45\/after_ds_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}^{\\text{ds}}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. Position 19.}\n\t\\label{fig:app_ds_19}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-20_Nachtmessung_Serverraum_und_Tobias\/results\/03-02-2016_19-38-45\/after_decorr_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=1cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-20_Nachtmessung_Serverraum_und_Tobias\/results\/03-02-2016_19-38-45\/aftere_decorr_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=1.8cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-20_Nachtmessung_Serverraum_und_Tobias\/results\/03-02-2016_19-38-45\/after_decorr_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}^{\\text{de}}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. 
Position 19.}\n\t\\label{fig:app_decorr_19}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-20_Tagmessung_Serverraum_und_Lounge\/results\/03-02-2016_19-50-33\/before_decorr_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=0.5cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-20_Tagmessung_Serverraum_und_Lounge\/results\/03-02-2016_19-50-33\/before_decorr_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=2.2cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-20_Tagmessung_Serverraum_und_Lounge\/results\/03-02-2016_19-50-33\/before_decorr_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. Position 20.}\n\t\\label{fig:app_original_20}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-20_Tagmessung_Serverraum_und_Lounge\/results\/03-02-2016_19-50-33\/after_ds_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=0.5cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-20_Tagmessung_Serverraum_und_Lounge\/results\/03-02-2016_19-50-33\/aftere_ds_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=2.2cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-20_Tagmessung_Serverraum_und_Lounge\/results\/03-02-2016_19-50-33\/after_ds_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}^{\\text{ds}}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. Position 20.}\n\t\\label{fig:app_ds_20}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-20_Tagmessung_Serverraum_und_Lounge\/results\/03-02-2016_19-50-33\/after_decorr_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=1cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-20_Tagmessung_Serverraum_und_Lounge\/results\/03-02-2016_19-50-33\/aftere_decorr_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=1.8cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-20_Tagmessung_Serverraum_und_Lounge\/results\/03-02-2016_19-50-33\/after_decorr_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}^{\\text{de}}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. 
Position 20.}\n\t\\label{fig:app_decorr_20}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-21_Nachtmessung_Serverraum_und_Erik\/results\/03-02-2016_19-55-50\/before_decorr_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=0.5cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-21_Nachtmessung_Serverraum_und_Erik\/results\/03-02-2016_19-55-50\/before_decorr_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=2.2cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-21_Nachtmessung_Serverraum_und_Erik\/results\/03-02-2016_19-55-50\/before_decorr_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. Position 21.}\n\t\\label{fig:app_original_21}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-21_Nachtmessung_Serverraum_und_Erik\/results\/03-02-2016_19-55-50\/after_ds_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=0.5cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-21_Nachtmessung_Serverraum_und_Erik\/results\/03-02-2016_19-55-50\/aftere_ds_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=2.2cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-21_Nachtmessung_Serverraum_und_Erik\/results\/03-02-2016_19-55-50\/after_ds_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}^{\\text{ds}}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. Position 21.}\n\t\\label{fig:app_ds_21}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-21_Nachtmessung_Serverraum_und_Erik\/results\/03-02-2016_19-55-50\/after_decorr_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=1cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-21_Nachtmessung_Serverraum_und_Erik\/results\/03-02-2016_19-55-50\/aftere_decorr_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=1.8cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-21_Nachtmessung_Serverraum_und_Erik\/results\/03-02-2016_19-55-50\/after_decorr_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}^{\\text{de}}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. 
Position 21.}\n\t\\label{fig:app_decorr_21}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-21_Tagmessung_Serverraum_und_Erik\/results\/03-02-2016_20-08-10\/before_decorr_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=0.5cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-21_Tagmessung_Serverraum_und_Erik\/results\/03-02-2016_20-08-10\/before_decorr_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=2.2cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-21_Tagmessung_Serverraum_und_Erik\/results\/03-02-2016_20-08-10\/before_decorr_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. Position 22.}\n\t\\label{fig:app_original_22}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-21_Tagmessung_Serverraum_und_Erik\/results\/03-02-2016_20-08-10\/after_ds_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=0.5cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-21_Tagmessung_Serverraum_und_Erik\/results\/03-02-2016_20-08-10\/aftere_ds_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=2.2cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-21_Tagmessung_Serverraum_und_Erik\/results\/03-02-2016_20-08-10\/after_ds_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}^{\\text{ds}}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. Position 22.}\n\t\\label{fig:app_ds_22}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-21_Tagmessung_Serverraum_und_Erik\/results\/03-02-2016_20-08-10\/after_decorr_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=1cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-21_Tagmessung_Serverraum_und_Erik\/results\/03-02-2016_20-08-10\/aftere_decorr_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=1.8cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-21_Tagmessung_Serverraum_und_Erik\/results\/03-02-2016_20-08-10\/after_decorr_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}^{\\text{de}}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. 
Position 22.}\n\t\\label{fig:app_decorr_22}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-22_Tagmessung_Serverraum_und_Christian\/results\/03-02-2016_20-15-07\/before_decorr_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=0.5cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-22_Tagmessung_Serverraum_und_Christian\/results\/03-02-2016_20-15-07\/before_decorr_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=2.2cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-22_Tagmessung_Serverraum_und_Christian\/results\/03-02-2016_20-15-07\/before_decorr_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. Position 23.}\n\t\\label{fig:app_original_23}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-22_Tagmessung_Serverraum_und_Christian\/results\/03-02-2016_20-15-07\/after_ds_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=0.5cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-22_Tagmessung_Serverraum_und_Christian\/results\/03-02-2016_20-15-07\/aftere_ds_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=2.2cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-22_Tagmessung_Serverraum_und_Christian\/results\/03-02-2016_20-15-07\/after_ds_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}^{\\text{ds}}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. Position 23.}\n\t\\label{fig:app_ds_23}\n\\end{figure*}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[trim=1.4cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-22_Tagmessung_Serverraum_und_Christian\/results\/03-02-2016_20-15-07\/after_decorr_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=1cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-22_Tagmessung_Serverraum_und_Christian\/results\/03-02-2016_20-15-07\/aftere_decorr_AB_Corr.pdf}}\n\t\\subfloat[]{\\includegraphics[trim=1.8cm 0.1cm 3.5cm 1.6cm, clip=true, height=0.224\\textwidth]{figures\/apendix_results\/2016-01-22_Tagmessung_Serverraum_und_Christian\/results\/03-02-2016_20-15-07\/after_decorr_SC.pdf}}\n\t\\caption{Evaluation results of $\\mybold{v}^{\\text{de}}_k$. In (a) and (b) the cross-correlations is given; in (c) the mutual information as well as $R_{\\text{sk}}$ is given. Position 23.}\n\t\\label{fig:app_decorr_23}\n\\end{figure*}\n\n\n\\end{appendices}","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nHigher-order derivative terms play important roles in the several contexts, e.g., inflation models, modified gravity, renormalization of gravity, and so on. From a phenomenological and theoretical viewpoint, their embeddings into supersymmetry (SUSY) or supergravity (SUGRA) are also interesting. In particular, there exist many non-renormalizable terms in SUGRA and it is quite natural to consider the extension including higher-order derivative terms and the effects of them on cosmology and particle phenomenology. 
The higher-order derivative terms of a chiral superfield in 4D SUSY or SUGRA and their cosmological applications have been investigated, e.g., in Refs.~\\cite{Khoury:2010gb,Khoury:2011da,Baumann:2011nm,Farakos:2012je,Koehn:2012ar,Farakos:2012qu,Koehn:2012te,Farakos:2013zya,Gwyn:2014wna,Aoki:2014pna,Aoki:2015eba,Ciupke:2015msa,Bielleman:2016grv}. \n\n\nThe Dirac-Born-Infeld (DBI) action \\cite{Born:1934gh,Dirac:1962iy} includes such higher-order derivative terms. It was first proposed as a nonlinear generalization of Maxwell theory. The DBI action is also motivated by string theory, which is a promising candidate for a unified theory including gravity. In the context of string theory, the effective action of a D-brane is described by a DBI-type action, which in general consists of the Maxwell field strength $F_{\\mu \\nu }$ as well as the scalar-field terms $\\partial _{\\mu }\\phi^i\\partial _{\\nu }\\phi^jg_{ij}$ and a two-form $B_{\\mu \\nu }$, \n\\begin{align}\nS_{\\rm{DBI}}=\\int d^Dx \\sqrt{-g}\\left( 1-\\sqrt{{\\rm{det}}(g_{\\mu \\nu }+ \\partial _{\\mu }\\phi ^i\\partial _{\\nu }\\phi ^jg_{ij}+B_{\\mu \\nu }+F_{\\mu \\nu })} \\right) . \\label{string DBI}\n\\end{align}\n\nSUSY Dp-brane actions in $D$ dimensions are also important for the effective theory of superstrings. Such actions have been discussed extensively in the literature within a component formalism. For example, in Refs.~\\cite{Aganagic:1996pe,Aganagic:1996nn}, the authors construct SUSY Dp-brane actions with local kappa symmetry based on a component formalism in 10-dimensional spacetime. In a similar way, the p-brane action in various dimensions has also been discussed in Ref.~\\cite{Bergshoeff:2013pia}. In Refs.~\\cite{Howe:1996mx,Cederwall:1996pv,Cederwall:1996ri,Bergshoeff:1996tu}, the SUSY Dp-brane in a SUGRA background is constructed by considering the background super-vielbein on the brane and the couplings between them. \n\nAn approach based on superfields is useful for constructing a manifestly SUSY invariant action and for generalizing it. Within this formalism, 4D $\\mathcal{N}=1$ SUSY extensions of the DBI action are only partially known. The DBI action of a vector superfield, which corresponds to the case with $\\phi^i=B_{\\mu \\nu }=0$ in Eq. $\\eqref{string DBI}$, is constructed in Refs.~\\cite{Cecotti:1986gb,Bagger:1996wp,Rocek:1997hi,Kuzenko:2002vk,Kuzenko:2005wh}. In particular, in Refs.~\\cite{Bagger:1996wp,Rocek:1997hi}, it is shown that such an action appears from the partial breaking of 4D $\\mathcal{N}=2$ SUSY. Its SUGRA embedding has also been discussed in Refs.~\\cite{Cecotti:1986gb,Kuzenko:2002vk,Kuzenko:2005wh,Abe:2015nxa}. Its application to inflation models has been investigated in Ref.~\\cite{Abe:2015fha}. Furthermore, in global SUSY, multiple $U(1)$~\\cite{Ferrara:2014oka,Ferrara:2014nwa} and massive~\\cite{Ferrara:2015ixa} extensions of the DBI action have been discussed. In particular, for the case with multiple U(1) vector multiplets, linear actions~\\cite{Andrianopoli:2014mia}, general conditions for partial SUSY breaking~\\cite{Andrianopoli:2015wqa,Andrianopoli:2015rpa}, and c-maps~\\cite{Andrianopoli:2016eub} have also been discussed.\n\nFor the DBI action of scalar fields, which corresponds to the case with $F_{\\mu \\nu }=B_{\\mu \\nu }=0$ in Eq. $\\eqref{string DBI}$, its SUSY extension has been achieved via a partially broken $\\mathcal{N}=2$ SUSY theory, where the Goldstino multiplet is an $\\mathcal{N}=1$ real linear superfield \\cite{Rocek:1997hi,Bagger:1997pi,GonzalezRey:1998kh}. 
However, the SUGRA extension of the DBI action of a real linear superfield has not been constructed so far. In this paper, we discuss the embedding of the DBI action of a real linear superfield into SUGRA. The corresponding action written in terms of a chiral superfield can be found in Ref.~\\cite{Koehn:2012ar}. In general, it is known that an action written with a chiral superfield can be rewritten in terms of one with a real linear superfield, and vice versa (via the linear-chiral duality \\cite{Siegel:1979ai}). Therefore, our action, which will be discussed in this paper, would be equivalent to that derived in Ref.~\\cite{Koehn:2012ar} through the duality transformation. We will discuss this point and the differences between their result and ours.\n\nIn Refs. \\cite{Rocek:1997hi,Bagger:1997pi,GonzalezRey:1998kh}, the DBI action of a real linear multiplet is realized with a chiral multiplet, which is constrained by a specific $\\mathcal{N}=1$ SUSY constraint. We will investigate the corresponding constraint, which is the key to the construction of the DBI action, in SUGRA. To achieve this, we use a formulation based on conformal SUGRA \\cite{Kaku:1978nz,Kugo:1982cu,Kugo:1983mv}\\footnote{We will use the superconformal tensor calculus~\\cite{Kaku:1978nz,Kugo:1982cu,Kugo:1983mv}. See also another formulation, conformal superspace~\\cite{Butter:2009cp,Kugo:2016zzf}.}, where one can treat off-shell SUGRA with different sets of auxiliary fields in a unified manner. Because of the restrictions on the SUGRA embedding of the ${\\cal N}=1$ constraint, we will find that the DBI action of a real linear superfield can be realized only in the so-called new minimal formulation of SUGRA. Furthermore, we will extend the DBI action to its matter coupled version.\n\nThe remaining parts of this paper are organized as follows. First, we will briefly review the SUSY DBI action of a real linear superfield in Sec.~\\ref{review}. There, we will find that the constraint imposed between a chiral and a real linear superfield is important for the construction. Then, we will extend the constraint to that in conformal SUGRA in Sec.~\\ref{extension}. After a short review of conformal SUGRA, we will also review the concept of the {\\it{u-associated}} derivative, which is crucial for the superconformal extension. Using this {\\it{u-associated}} derivative, we will complete the embedding and find that the constraint can be consistently realized in the new minimal SUGRA. With the constraint, we will construct the corresponding action in the new minimal SUGRA, and write down the bosonic component action in Sec.~\\ref{Component action}. The linear-chiral duality and the matter coupled extension will also be discussed there. Finally, we will discuss the correspondence and differences between the results in related works and ours in Sec.~\\ref{discussion}, and summarize this paper in Sec.~\\ref{summary}. In Appendix~\\ref{explicit}, the explicit components of the multiplet including the {\\it{u-associated}} derivative are shown.\n\nIn this paper, we use the unit $M_P=1$, where $M_P=2.4\\times 10^{18}$ GeV is the reduced Planck mass, and follow the conventions of \\cite{Wess:1992cp} in Sec.~\\ref{review} and of \\cite{Freedman:2012zz} in other parts. $a,b\\cdots$ denote Minkowski indices and $\\mu,\\nu\\cdots$ denote curved indices.\n\n\\section{Review of DBI action in global SUSY}\\label{review}\nIn this section, we briefly review the DBI action of a real linear superfield in global SUSY \\cite{Bagger:1997pi}. 
We use a chiral superfield $X$ and a real linear superfield $L$ which satisfy the conditions,\n\\begin{align}\n\\bar{D}_{\\dot{\\alpha }} X=0, \\ \\ \\ D^2L=\\bar{D}^2L=0,\n\\end{align}\nwhere $D_{\\alpha }$ and $\\bar{D}_{\\dot{\\alpha }}$ are a SUSY spinor derivative and its complex conjugate.\nTo construct the DBI action for $L$, we consider the following constraint between $X$ and $L$,\n\\begin{align}\nX-\\frac{1}{4}X\\bar{D}^2\\bar{X}-\\bar{D}_{\\dot{\\alpha }}L\\bar{D}^{\\dot{\\alpha }}L=0 , \\label{global constraint}\n\\end{align}\nwhere $\\bar{X}$ is a complex conjugate of $X$ \\footnote{In Ref.~\\cite{Bagger:1997pi}, the constraint $\\eqref{global constraint}$ has been obtained from the tensor multiplet in $\\mathcal{N}=2$ SUSY through partial breaking of it. Here, we do not discuss its origin and we just use the constraint as a guideline to obtain the DBI action. In Sec.~\\ref{discussion}, we will briefly comment on the relation between the partial breaking of ${\\cal N}=2$ SUSY and our construction.}. The equation. $\\eqref{global constraint}$ can be solved with respect to $X$ and we obtain\n\\begin{align}\nX=\\bar{D}_{\\dot{\\alpha }}L\\bar{D}^{\\dot{\\alpha }}L +\\frac{1}{2}\\bar{D}^2\\Biggl[ \\frac{D^{\\alpha }LD_{\\alpha }L\\bar{D}_{\\dot{\\alpha }}L\\bar{D}^{\\dot{\\alpha }}L }{1-\\frac{1}{2}A+\\sqrt{1-A+\\frac{1}{4}B^2}}\\Biggr],\\label{solution for global constraint}\n\\end{align}\nwhere\n\\begin{align}\nA\\equiv \\frac{1}{2}\\{ D^2( \\bar{D}_{\\dot{\\alpha }}L\\bar{D}^{\\dot{\\alpha }}L)+{\\rm{h.c.}} \\}, \\ \\ \\ B\\equiv \\frac{1}{2}\\{ D^2( \\bar{D}_{\\dot{\\alpha }}L\\bar{D}^{\\dot{\\alpha }}L)-{\\rm{h.c.}} \\}.\n\\end{align}\nUsing this solution $\\eqref{solution for global constraint}$, we can construct the SUSY DBI action as \n\\begin{align}\n\\mathcal{L}=\\int d^2 \\theta X(L) +{\\rm{h.c.}}. \\label{global DBI}\n\\end{align}\nOne can check that the bosonic part of the Lagrangian $\\eqref{global DBI}$ produces,\n\\begin{align}\n\\mathcal{L}_B=1-\\sqrt{1-B\\cdot B+\\partial C\\cdot \\partial C-(B\\cdot \\partial C)^2} ,\\label{component form of global DBI}\n\\end{align}\nwhere $C$ and $B_a$ are a real scalar and a constrained vector satisfying $\\partial ^a B_a=0$, in the real linear superfield, and we use the notation $B\\cdot \\partial C \\equiv B^a \\partial _aC $. It is known that, through the linear-chiral duality, Eq. $\\eqref{component form of global DBI}$ produces the DBI action of a complex scalar, which can be interpreted as the 4D effective D3-brane action. \nWe call Eq. $\\eqref{component form of global DBI}$ the DBI action of a real linear superfield in this paper.\n\nIt is worth noting that Eq. $\\eqref{solution for global constraint}$ satisfies the nilpotency condition, i.e., $X^2=0$, due to the Grassmann property of the SUSY spinor derivative, $\\bar{D}_{\\dot{\\alpha }}$. This reflects the underlying Volkov-Akulov SUSY \\cite{Volkov:1972jx,Rocek:1978nb}. Instead of writing the action like Eq. $\\eqref{global DBI}$, we can also rewrite the same system imposing the constraint $\\eqref{global constraint}$ by a chiral superfield Lagrange multiplier $\\Lambda $,\n\\begin{align}\n\\mathcal{L}=\\int d^2 \\theta \\biggl[ X +\\Lambda \\left( X-\\frac{1}{4}X\\bar{D}^2\\bar{X}-\\bar{D}_{\\dot{\\alpha }}L\\bar{D}^{\\dot{\\alpha }}L\\right) +\\tilde{\\Lambda }X^2\\biggr] +{\\rm{h.c.}}. \\label{global DBI with constraint}\n\\end{align}\nHere we have introduced another Lagrange multiplier $\\tilde{\\Lambda }$, which ensures the nilpotency of $X$. 
Indeed, we need not impose this condition in the Lagrangian, since $X$ satisfies $X^2=0$ once $\\Lambda$ is integrated out first and $X$ is solved with respect to $L$; nevertheless, the condition is consistent and simplifies the calculation as long as we focus on the bosonic part of the action, as we will see in the following section.\n\\section{Extension to 4D $\\mathcal{N}=1$ conformal SUGRA}\\label{extension}\nIn this section, we generalize the SUSY DBI action $\\eqref{global DBI with constraint}$ discussed in Sec.~\\ref{review} to that in SUGRA. \n\n\\subsection{Review of conformal SUGRA} \\label{conformal SUGRA}\nTo construct the action in SUGRA, we use the conformal SUGRA formulation. Let us therefore briefly review the basics of conformal SUGRA before proceeding to the specific construction of the DBI action. \n\nIn this formulation, there are extra gauge symmetries such as dilatation, $U(1)_A$ symmetry, S-SUSY and conformal boost in addition to translation, Lorentz transformation and SUSY. The commutation and anti-commutation relations are governed by the superconformal algebra, and its representation $\\Phi$, called a superconformal multiplet, has the following components, \n\\begin{align}\n\\Phi =\\{ \\mathcal{C}, \\mathcal{Z},\\mathcal{H},\\mathcal{K},\\mathcal{B}_a,\\Lambda ,\\mathcal{D}\\} ,\\label{general multiplet}\n\\end{align}\nwhere $\\mathcal{Z}$ and $\\Lambda $ are spinors; $\\mathcal{B}_a$ is a vector; the others are complex scalars. We also denote the superconformal multiplet $\\Phi$ by its first component $\\mathcal{C}$,\n\\begin{align}\n\\Phi =\\langle \\mathcal{C} \\rangle ,\n\\end{align}\nwhere $\\langle ...\\rangle $ represents the superconformal multiplet which has $\\mathcal{C}$ as the first component. $\\mathcal{C}$ must be invariant under the transformations of S-SUSY and conformal boost in order for $\\Phi=\\langle \\mathcal{C}\\rangle$ to be a superconformal multiplet \\cite{Kugo:1983mv}.\n \nA superconformal multiplet is characterized by the charges $(w,n)$ under dilatation and $U(1)_A$ symmetry, called the Weyl weight and the chiral weight, respectively. For example, a chiral multiplet $X$ has the weights $(w,w)$, in order to satisfy\n\\begin{align}\n\\bar{\\mathcal{D}}_{\\dot{\\alpha}}X=0, \\label{dbar}\n\\end{align}\nwhere $\\bar{\\mathcal{D}}_{\\dot{\\alpha}}$ is a spinor derivative \\cite{Kugo:1983mv}. For a real linear multiplet $L$ defined by \n\\begin{align}\n\\Sigma L =\\bar{\\Sigma } L=0,\n\\end{align}\nwhere $\\Sigma $ ($\\bar{\\Sigma }$) is the (anti-)chiral projection operator, the weights are determined as $(w,n)=(2,0)$. We will discuss these operators, $\\mathcal{D}_\\alpha$ and $\\Sigma $, more precisely in the following subsections.\n\nThe chiral multiplet consists of the components $\\{z,P_L\\chi,F\\}$, where $z$ and $F$ are complex scalars and $P_L\\chi$ is a chiral spinor; $P_L=(1+\\gamma_5)\/2$ is the left-handed projection operator. It is embedded into a general superconformal multiplet $\\eqref{general multiplet}$ as\n\\begin{align}\n\\{ z,-\\sqrt{2}iP_L\\chi , -F,iF,iD_az,0,0 \\} , \\label{embedding chiral}\n\\end{align}\nwhere $D_a$ is a superconformal covariant derivative. On the other hand, a real linear multiplet has the components $\\{C,Z,B_a\\}$, where $C$ is a real scalar, $Z$ is a Majorana spinor and $B_a$ is a constrained vector which satisfies $D^aB_a=0$. 
A real linear multiplet is embedded into a general superconformal multiplet $\\eqref{general multiplet}$ as\n\\begin{align}\n\\{ C,Z,0,0,B_a,-\\slash{D}Z,-\\Box C\\}, \\label{embedding linear}\n\\end{align}\nwhere $\\slash{D} \\equiv \\gamma ^a D_a$.\n\nFor later convenience, we also introduce a multiplication rule for superconformal multiplets. For a function of multiplets $f(\\mathcal{C}^I)$, where $I$ classifies different multiplets, we have\n\\begin{align}\n\\nonumber \\langle f({\\cal C}^I)\\rangle=\\biggl[ &f,f_I\\mathcal{Z}^I, f_I\\mathcal{H}^I-\\frac{1}{4}f_{IJ}\\bar{\\mathcal{Z}}^J\\mathcal{Z}^I, f_I\\mathcal{K}^I+\\frac{i}{4}f_{IJ}\\bar{\\mathcal{Z}}^J\\gamma _5\\mathcal{Z}^I, f_I\\mathcal{B}_a^I-\\frac{i}{4}f_{IJ}\\bar{\\mathcal{Z}}^J\\gamma_a \\gamma _5\\mathcal{Z}^I,\\\\\n\\nonumber &f_I\\Lambda^I-\\frac{i}{2}\\gamma _5\\left( \\mathcal{K}^I-\\slash{\\mathcal{B}}^I-i\\gamma _5\\slash{D}\\mathcal{C}^I+i\\gamma _5\\mathcal{H}^I\\right) f_{IJ}\\mathcal{Z}^J-\\frac{1}{4}\\left( \\bar{\\mathcal{Z}}^J\\mathcal{Z}^I\\right) \\mathcal{Z}^Kf_{IJK},\\\\\n\\nonumber &f_I\\mathcal{D}^ I+\\frac{1}{2}f_{IJ}\\left( \\mathcal{K}^I\\mathcal{K}^J+\\mathcal{H}^I\\mathcal{H}^J-\\mathcal{B}^{aI}\\mathcal{B}_a^J-D_a\\mathcal{C}^ID^a\\mathcal{C}^J-2\\bar{\\mathcal{Z}}^J\\Lambda^I-\\bar{\\mathcal{Z}}^J\\slash{D}\\mathcal{Z}^I\\right)\\\\\n&-\\frac{1}{4}f_{IJK}\\bar{\\mathcal{Z}}^J(\\mathcal{H}^K-i\\gamma _5\\mathcal{K}^K-i\\slash{\\mathcal{B}}^K\\gamma _5)\\mathcal{Z}^I+\\frac{1}{16}f_{IJKL}(\\bar{\\mathcal{Z}}^J\\mathcal{Z}^I)(\\bar{\\mathcal{Z}}^K\\mathcal{Z}^L) \\biggr] , \\label{formulaF}\n\\end{align}\nwhere $f_{IJ\\cdots }$ is $\\partial f\/\\partial \\mathcal{C}^I\\partial\\mathcal{C}^J\\cdots$ and $\\bar{\\mathcal{Z}}\\equiv \\mathcal{Z}^T\\hat{C}$ ($\\hat{C}$ is a charge conjugation matrix).\n\nWe also need action formulas to construct a superconformal action. For a chiral multiplet $X=\\{z,P_L\\chi,F\\}$ with its weight $(3,3)$, there exists the so-called F-term formula~\\cite{Kugo:1982cu}, \n\\begin{align}\n[X]_F=\\int d ^{4}x\\sqrt{-g}{\\rm{Re}} \\biggl[ F+\\frac{1}{\\sqrt{2}} \\bar {\\psi }_{\\mu}\\gamma ^{\\mu}P _{L}\\chi +\\frac{1}{2}z\\bar {\\psi }_{\\mu}\\gamma ^{\\mu \\nu}P _{R}\\psi _{\\nu} \\biggr] , \\label{Fformula}\n\\end{align}\nwhere $\\psi _{\\mu}$ is a gravitino.\nFor a real multiplet $\\phi =\\{C,Z,H,K,B_a,\\Lambda,D\\}$ with its weight $(2,0)$, we can apply the following D-term formula~\\cite{Kugo:1982cu},\n\\begin{align}\n\\nonumber [\\phi ]_D= \\int d^{4}x\\sqrt{-g}\\biggl[ &D-\\frac{1}{2}i\\bar{\\psi}\\cdot \\gamma \\gamma _{5}\\lambda -\\frac{1}{3}CR+\\frac{1}{3}(C\\bar{\\psi }_{\\mu}\\gamma ^{\\mu \\rho \\sigma }-i\\bar{Z }\\gamma ^{\\rho \\sigma }\\gamma _{5})D_{\\rho }\\psi _{\\sigma }\\\\\n &+\\frac{1}{4} \\varepsilon ^{abcd}\\bar {\\psi }_{a}\\gamma _{b}\\psi _{c}\\left(B _{d}-\\frac{1}{2}\\bar{\\psi }_{d}Z \\right)\\biggr] . \\label{Dformula}\n\\end{align}\nHere, all the components of $\\phi$ are real (Majorana).\n\nUsing these superconformal multiplets, the multiplication rule $\\eqref{formulaF}$, and the action formulas $\\eqref{Fformula}$ and $\\eqref{Dformula}$, we can construct superconformal invariant actions. 
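\n\nAs a simple illustration of how these ingredients are combined (a standard two-derivative example shown only for orientation, up to normalization conventions, and not used in the DBI construction below), a weight-$(1,1)$ chiral multiplet $S_0$, which will play the role of a compensator shortly, together with weight-$(0,0)$ matter chiral multiplets $\\Phi^i$, a real function $K(\\Phi ,\\bar{\\Phi })$ and a holomorphic superpotential $W(\\Phi )$ yields the familiar matter-coupled SUGRA action\n\\begin{align}\nS_{\\rm{matter}}=\\left[ -3S_0\\bar{S}_0e^{-K(\\Phi ,\\bar{\\Phi })\/3}\\right] _D+\\left[ S_0^3W(\\Phi )\\right] _F .\n\\end{align}\nHere the D-term entry has the weight $(2,0)$ and the F-term entry has the weight $(3,3)$, as required by the formulas $\\eqref{Dformula}$ and $\\eqref{Fformula}$, and the components of the entries follow from the multiplication rule $\\eqref{formulaF}$. 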
Finally, we fix some parts of the extra gauge symmetries by imposing gauge conditions on one of the superconformal multiplets $\\Phi_0$, called a compensator multiplet, and obtain the $\\rm{Poincar\\acute{e}}$ SUGRA action.\n\n\n\n\\subsection{{\\it{u-associated}} derivative} \\label{u-associated derivative}\nNow, we have prepared the tools for constructing the DBI action in SUGRA. Within the conformal SUGRA formulation, we will discuss a constraint corresponding to that in global SUSY, \n\\begin{align}\nX-\\frac{1}{4}X\\bar{D}^2\\bar{X}-\\bar{D}_{\\dot{\\alpha }}L\\bar{D}^{\\dot{\\alpha }}L=0 , \\label{global constraint 2}\n\\end{align}\nin the following. However, it is a nontrivial task to extend the term including SUSY spinor derivatives,\n\\begin{align}\n\\bar{D}_{\\dot{\\alpha }}L\\bar{D}^{\\dot{\\alpha }}L \\label{global derivative}\n\\end{align}\n to that in conformal SUGRA. \n \nTo treat the term $\\eqref{global derivative}$ in conformal SUGRA, we need the spinor derivative defined as a superconformal operation. In Ref.~\\cite{Kugo:1983mv}, it is pointed out that the spinor derivative in conformal SUGRA, $\\mathcal{D}_{\\alpha }$ ($\\bar{\\mathcal{D}}_{\\dot{\\alpha }}$), cannot be defined on a superconformal multiplet $\\Phi$ unless $\\Phi$ satisfies a specific weight condition, $w=-n$ ($w =n$). This is because $\\mathcal{D}_{\\alpha }\\Phi $ ($\\bar{\\mathcal{D}}_{\\dot{\\alpha }}\\Phi$) is not generically a superconformal multiplet, i.e., its first component is inert under S-SUSY and conformal boost only when $w=-n$ ($w=n$) is satisfied. \nIt is then obvious that we cannot define $\\bar{\\cal D}_{\\dot{\\alpha}}L$ as a superconformal multiplet, since $L$ has the weight $(2,0)$.\n\nHowever, the authors in Ref.~\\cite{Kugo:1983mv} also proposed an improved spinor derivative operation, which can be defined on any supermultiplet. They introduced another multiplet, ${\\bf{u}}$, called a {\\it{u-associated}} multiplet, \n\\begin{align}\n{\\bf{u}} =\\{ \\mathcal{C}_u, \\mathcal{Z}_u,\\mathcal{H}_u,\\mathcal{K}_u,\\mathcal{B}_{au},\\Lambda _u,\\mathcal{D}_u \\} ,\n\\end{align}\nin order to force the first component of ${\\cal D}_\\alpha \\Phi$ to be invariant under S-SUSY and conformal boost. To be specific, they defined the {\\it{u-associated}} spinor derivative as \n\\begin{align}\n\\mathcal{D}^{({\\bf{u}})}_{\\alpha }\\Phi =\\langle (P_L\\mathcal{Z})_{\\alpha }+i(n+w)\\lambda _{\\alpha }\\mathcal{C}\\rangle ,\\ \\ \\ \\lambda _{\\alpha } \\equiv \\frac{i(P_L\\mathcal{Z}_u)_{\\alpha }}{(w_u+n_u)\\mathcal{C}_u}, \\label{def u}\n\\end{align}\nwhere $w_u$ and $n_u$ are the Weyl and chiral weights of the {\\it{u-associated}} multiplet, respectively. Any multiplet with $w_u+n_u\\neq 0$ can be chosen as the {\\it{u-associated}} multiplet. \nThen, the spinor derivative of an arbitrary superconformal multiplet can be defined by this {\\it{u-associated}} spinor derivative.\n\nFor our purpose, we need the {\\it{u-associated}} spinor derivative acting on a real linear multiplet, $\\mathcal{D}^{({\\bf{u}})}_{\\alpha }L $. More generally, we can consider \n\\begin{align}\n\\mathcal{D}^{({\\bf{u}}_1)}_{\\alpha }({\\bf{u}}_2L), \\label{u-derivative}\n\\end{align}\nwhere ${\\bf{u}}_1$ is a {\\it{u-associated}} multiplet and ${\\bf{u}}_2$ is an additional multiplet. 
These multiplets must satisfy ${\\bf{u}}_1\\neq {\\bf{u}}_2$, since $\\mathcal{D}^{({\\bf{u}})}_{\\alpha }{\\bf{u}}$ is identically zero obviously from the definition $\\eqref{def u}$.\\footnote{As we will discuss, we choose ${\\bf{u}}_1$ and ${\\bf{u}}_2$ as compensators, which become some parts of the gravity multiplet after superconformal gauge fixings. In the global SUSY expression $\\eqref{global derivative}$, all the fields in the gravitational multiplet decouple from it. Therefore, it is natural to consider a possibility that a compensator appears as in Eq. $\\eqref{u-derivative}$.} Using this {\\it{u-associated}} spinor derivative, Eq. $\\eqref{global derivative}$ can be generalized to the one in conformal SUGRA as\n\\begin{align}\n\\frac{1}{{\\bf{u}}_3}\\bar{\\mathcal{D}}^{({\\bf{u}}_1)}(\\bar{\\bf{u}}_2L)\\bar{\\mathcal{D}}^{({\\bf{u}}_1)}(\\bar{\\bf{u}}_2L) , \\label{u-derivative part}\n\\end{align}\nwhere we have introduced a new multiplet ${\\bf{u}}_3$\\footnote{We will refer all of ${\\bf {u}}_i$ as {\\it u-associated} multiplets.} for generality and omitted the spinor index, $\\dot{\\alpha }$, and we have also defined the conjugate of a {\\it{u-associated}} derivative as $\\bar{\\mathcal{D}}_{\\dot{\\alpha }}^{\\bf{u}} \\Phi = (\\mathcal{D}_{\\alpha }^{\\bf{u}}(\\Phi )^*)^*$ following Ref. \\cite{Kugo:1983mv}.\n\nLet us comment on the weight of the multiplet $\\eqref{u-derivative part}$. The operator $\\bar{\\mathcal{D}}^{({\\bf{u}})}_{\\dot{\\alpha} }$ has the weight $(1\/2,3\/2)$, then the total weight of Eq. $\\eqref{u-derivative part}$ is $(2w_2-w_3+5,2n_2-n_3+3)$, where $w_i$ and $n_i$ with $i=1,2,3$ are the Weyl and chiral weights of ${\\bf{u}}_i$, respectively. \n\nFurthermore, Eq. $\\eqref{global constraint 2}$ is a ``chiral\" constraint since the first and second term in Eq. $\\eqref{global constraint 2}$ are chiral multiplets. Then, we require a condition that the multiplet $\\eqref{u-derivative part}$ is a chiral multiplet, that is, \n\\begin{align}\n\\bar{\\mathcal{D}}\\biggl[ \\frac{1}{{\\bf{u_3}}}\\bar{\\mathcal{D}}^{({\\bf{u_1}})}(\\bar{\\bf{u_2}}L)\\bar{\\mathcal{D}}^{({\\bf{u_1}})}(\\bar{\\bf{u_2}}L) \\biggr] =0 . \\label{condition for chiral}\n\\end{align}\nTo apply $\\bar{\\mathcal{D}}$ for Eq. $\\eqref{u-derivative part}$, the Weyl and chiral weight of Eq. $\\eqref{u-derivative part}$ must satisfy $w=n$ as mentioned before,\n\\begin{align}\n2w_2-w_3+5=2n_2-n_3+3. \\label{weight condition between w3n3 and w2n2}\n\\end{align}\nThe condition $\\eqref{condition for chiral}$ implies that \n\\begin{align}\nP_R \\mathcal{Z}'=0, \\label{condition for chiral2}\n\\end{align}\nwhere $P_R= (1-\\gamma _5)\/2$ is a right-handed projection operator and $\\mathcal{Z}'$ is the second component of the multiplet $\\eqref{u-derivative part}$. 
The equation $\\eqref{condition for chiral2}$ can be written explicitly as\n\\begin{align}\n\\nonumber &\\bar{\\tilde{\\mathcal{Z}}}_2^cP_R\\tilde{\\mathcal{Z}}_2^c\\biggl[P_R\\tilde{Z}+k P_R\\tilde{\\mathcal{Z}}_1^c- P_R\\tilde{\\mathcal{Z}}_3\\biggr] \n+\\bar{\\tilde{Z}}P_R\\tilde{Z}\\biggl[P_R\\tilde{\\mathcal{Z}}_2^c+k P_R\\tilde{\\mathcal{Z}}_1^c- P_R\\tilde{\\mathcal{Z}}_3\\biggr] \\\\\n\\nonumber &-k \\bar{\\tilde{\\mathcal{Z}}}_1^cP_R\\tilde{\\mathcal{Z}}_1^c\\biggl[\\left( 1-2k \\right) \\left( P_R\\tilde{Z}+P_R\\tilde{\\mathcal{Z}}_2^c\\right) +P_R\\tilde{\\mathcal{Z}}_3\\biggr] \\\\\n\\nonumber &-2k \\biggl[\\bar{\\tilde{\\mathcal{Z}}}_2^cP_R\\tilde{\\mathcal{Z}}_1^c \\left( 2P_R\\tilde{Z}-P_R\\tilde{\\mathcal{Z}}_3\\right)+\\bar{\\tilde{Z}}P_R\\tilde{\\mathcal{Z}}_1^c\\left( 2P_R\\tilde{\\mathcal{Z}}_2^c-P_R\\tilde{\\mathcal{Z}}_3\\right) \\biggr] \\\\\n&-2i\\biggl[i\\tilde{\\mathcal{H}}_2^*+\\tilde{\\mathcal{K}}_2^*-k \\left( i\\tilde{\\mathcal{H}}_1^*+\\tilde{\\mathcal{K}}_1^*\\right) \\biggr] \\biggl[P_R\\tilde{\\mathcal{Z}}_2^c+P_R\\tilde{Z}-k P_R\\tilde{\\mathcal{Z}}_1^c\\biggr] \n-2\\bar{\\tilde{\\mathcal{Z}}}_2^cP_R\\tilde{Z}P_R\\tilde{\\mathcal{Z}}_3=0,\\label{explicit condotion for chiral} \n\\end{align}\nwhere \n\\begin{align}\n&{\\bf{u}}_i =\\{ \\mathcal{C}_i, \\mathcal{Z}_i,\\mathcal{H}_i,\\mathcal{K}_i,\\mathcal{B}_{ai},\\Lambda _i,\\mathcal{D}_i \\} ,\\ \\ \\ (i=1,2,3),\\\\\n&\\tilde{Z}\\equiv \\frac{1}{C}Z, \\ \\ \\ \\tilde{\\mathcal{Z}}_i\\equiv \\frac{1}{C_i}\\mathcal{Z}_i, \\ \\ \\ \\tilde{\\mathcal{H}}_i(\\tilde{\\mathcal{K}}_i)\\equiv \\frac{1}{C_i}\\mathcal{H}_i(\\mathcal{K}_i), \\label{tilde}\\\\\n&k \\equiv \\frac{w_2+n_2+2}{w_1+n_1},\n\\end{align}\nand $``c\"$ denotes the charge conjugation for spinors. \n\nAs a summary, we find that the superconformal realization of Eq. $\\eqref{global derivative}$ is the multiplet $\\eqref{u-derivative part}$ satisfying the conditions $\\eqref{weight condition between w3n3 and w2n2}$ and $\\eqref{explicit condotion for chiral}$. \n\n\\subsection{Old minimal versus New minimal}\nWe have found, in the previous subsection ~\\ref{u-associated derivative}, the conditions for extending Eq. $\\eqref{global derivative}$ to that in conformal SUGRA. Here, we will choose a conformal compensator $\\Phi_0$ as {\\it{u-associated}} multiplets, ${\\bf{u}}_i$. Then, we have two choices of compensators; one of them is a chiral compensator $S_0$ realizing the old minimal SUGRA and the other is a real linear compensator $L_0$ realizing the new minimal SUGRA.\\footnote{We do not discuss the case of the non-minimal formulation which is realized with a complex linear compensator.} \n\nNow, we will examine what forms of ${{\\bf{u}}_i}$ with both compensators are allowed. Let us start from the old minimal SUGRA realized with a chiral compensator,\n\\begin{align}\nS_0=\\{ z_0,-\\sqrt{2}iP_L\\chi_0 , -F_0,iF_0,iD_az_0,0,0 \\},\n\\end{align}\nwith its weight $(1,1)$. Here we assume that the multiplets ${\\bf{u}}_i$ take the following form \n\\begin{align}\n{\\bf{u}}_i=S_0^{p_i}\\bar{S}_0^{q_i},\\ \\ \\ (i=1,2,3), \\label{s0s0}\n\\end{align}\nwhere $p_i$ and $q_i$ are the power of $S_0$ and $\\bar{S}_0$, and satisfy $p_1\\neq 0$ since $w_1+n_1=(p_1+q_1)+(p_1-q_1)=2p_1$ must be nonzero by a definition of the {\\it{u-associated}} multiplet. Here we have to stress that Eq. 
$\\eqref{s0s0}$ is the most general form except for the case including derivative operators on a compensator,\\footnote{For example, $S_0\\Sigma \\bar{S}_0$ could be considered.} which might produce higher-derivative terms of gravity. Using Eq. $\\eqref{embedding chiral}$ and the multiplication rule $\\eqref{formulaF}$, the components of the multiplet in Eq. $\\eqref{s0s0}$ are written as \n\\begin{align}\n\\nonumber &\\{ \\mathcal{C}_i, \\mathcal{Z}_i,\\mathcal{H}_i,\\mathcal{K}_i,\\mathcal{B}_{ai},\\Lambda _i,\\mathcal{D}_i \\} \\\\\n\\nonumber &= \\{ z_0^{p_i}z_0^{*q_i},\\sqrt{2}iz_0^{p_i-1}z_0^{*q_i-1}(q_iz_0P_R\\chi_0 -p_iz_0^*P_L\\chi_0 ),\\\\\n\\nonumber &z_0^{p_i-2}z_0^{*q_i-2}\\left( -q_iz_0^2z_0^*F_0^*-p_iz_0z_0^{*2}F_0+\\frac{1}{2}q_i(q_i-1)z_0^2\\bar{\\chi}_0P_R\\chi _0+\\frac{1}{2}p_i(p_i-1)z_0^{*2}\\bar{\\chi}_0P_L\\chi _0\\right) ,\\\\\n\\nonumber &z_0^{p_i-2}z_0^{*q_i-2}\\left( -iq_iz_0^2z_0^*F_0^*+ip_iz_0z_0^{*2}F_0+\\frac{i}{2}q_i(q_i-1)z_0^2\\bar{\\chi}_0P_R\\chi _0-\\frac{i}{2}p_i(p_i-1)z_0^{*2}\\bar{\\chi}_0P_L\\chi _0\\right) , \\\\\n&...,...,...\\}, \\label{explicit s0s0}\n\\end{align}\nwhere we have omitted the components, $\\mathcal{B}_{ai},\\Lambda _i$ and $\\mathcal{D}_i $, which are not necessary to evaluate Eq. $\\eqref{explicit condotion for chiral}$. \nOne finds that Eq. $\\eqref{explicit condotion for chiral}$ cannot be satisfied by Eq. $\\eqref{s0s0}$ by the following reason: Terms including ${\\cal H}_i$ and ${\\cal K}_i$ must vanish by themselves since any other terms cannot cancel them. After substituting Eq. $\\eqref{explicit s0s0}$ into such a part, we obtain\n\\begin{align}\n\\nonumber &i\\mathcal{\\tilde{H}}_2^*+\\tilde{\\mathcal{K}}_2^*-k \\left( i\\tilde{\\mathcal{H}}_1^*+\\tilde{\\mathcal{K}}_1^*\\right) =2iF_0^*z_0^{*-1}+i\\bar{\\chi}_0P_R\\chi _0z_0^{*-2}(p_2^2-p_2p_1-p_1+1).\n\\end{align}\nApparently, the first term cannot be eliminated no matter how we choose the parameters $p_i$ and $q_i$, and the other terms in Eq. $\\eqref{explicit condotion for chiral}$ cannot eliminate it because they do not contain $F_0^*$. \nTherefore, we find that Eq. $\\eqref{s0s0}$ cannot be a solution of Eq. $\\eqref{explicit condotion for chiral}$. This means that Eq. $\\eqref{u-derivative part}$ cannot be realized as a chiral constraint in the old minimal SUGRA.\n\nNext, we examine the case in the new minimal SUGRA with a real linear compensator\n\\begin{align}\nL_0=\\{ C_0,Z_0,0,0,B_{0a},-\\slash{D}Z_0,-\\Box C_0\\}\n\\end{align}\nwith its weight $(2,0)$. In the same way as the old minimal case, we assume the general form of ${\\bf{u}}_i$ as\n\\begin{align}\n{\\bf{u}}_i=L_0^{r_i},\\ \\ \\ (i=1,2,3), \\label{l0}\n\\end{align}\nwhose components are \n\\begin{align}\n\\nonumber &\\{ \\mathcal{C}_i, \\mathcal{Z}_i,\\mathcal{H}_i,\\mathcal{K}_i,\\mathcal{B}_{ai},\\Lambda _i,\\mathcal{D}_i \\} \\\\\n &= \\{ C_0^{r_i}, r_iC_0^{r_i-1}Z_0,-\\frac{1}{4}r_i(r_i-1)C_0^{r_i-2}\\bar{Z}_0Z_0, \\frac{i}{4}r_i(r_i-1)C_0^{r_i-2}\\bar{Z}_0\\gamma _5Z_0,...,...,...\\} . \\label{explicit l0}\n\\end{align}\nHere we have used Eq. $\\eqref{embedding linear}$ and Eq. $\\eqref{formulaF}$. Then, after substituting Eq. $\\eqref{explicit l0}$ into Eq. $\\eqref{explicit condotion for chiral}$ with the Fierz rearrangement, Eq. $\\eqref{explicit condotion for chiral}$ is summarized as\n\\begin{align}\n(2r_2-r_3+1)\\left\\{ CP_RZ\\bar{Z}_0P_RZ_0+C_0P_RZ_0\\bar{Z}P_RZ\\right\\}=0. \\label{L0 condition}\n\\end{align}\nTo satisfy Eq. 
$\\eqref{L0 condition}$, the coefficient must be zero,\n\\begin{align}\n2r_2-r_3+1=0.\\label{L0 condition2}\n\\end{align}\nThen, we find that the chiral condition $\\eqref{explicit condotion for chiral}$ is satisfied as long as the {\\it{u-associated}} multiplets obey the condition $\\eqref{L0 condition2}$.\n\nNoting that $w_i=2r_i$ and $n_i=0$ in the ansatz $\\eqref{l0}$, the weight condition $\\eqref{weight condition between w3n3 and w2n2}$ which the chiral multiplet should obey is now reduced to\n\\begin{align}\n2r_2-r_3+1=0. \\label{L0 condition22}\n\\end{align}\nThis is nothing but Eq. $\\eqref{L0 condition2}$, and hence it is satisfied automatically.\n\nTherefore, we conclude that one can make the multiplet in Eq. $\\eqref{u-derivative part}$ a chiral one with the real linear compensator if Eq. $\\eqref{L0 condition2}$ is satisfied. Here and hereafter, we focus on the case of the new minimal SUGRA with $r_1=r_3=1$ and $r_2=0$ for simplicity. In this case, the multiplet in Eq. $\\eqref{u-derivative part}$ becomes\n\\begin{align}\n\\frac{1}{L_0}\\bar{\\mathcal{D}}^{(L_0)}L\\bar{\\mathcal{D}}^{(L_0)}L . \\label{Lo derivative part}\n\\end{align}\nWe present the components of this chiral multiplet $\\eqref{Lo derivative part}$ explicitly in Appendix A.\n\n\\subsection{Embedding the constraint into conformal SUGRA} \\label{embedding}\nLet us consider the remaining terms, $X$ and $X\\bar{D}^2\\bar{X}$, in Eq. $\\eqref{global constraint 2}$. For $X$, we simply regard it as a superconformal chiral multiplet with the weight $(w,w)$. In order to extend the second one, $X\\bar{D}^2\\bar{X}$, to a superconformal multiplet, we replace it with $X\\Sigma \\bar{X}$, where $\\Sigma $ is the chiral projection operator in conformal SUGRA. However, as with the spinor derivative $\\mathcal{D}$, $\\Sigma $ cannot be applied to an arbitrary multiplet $\\Phi$. It can be applied only when $\\Phi$ satisfies the following weight condition, \n\\begin{align}\nw_{\\Phi }=n_{\\Phi }+2. \\label{condition for projection}\n\\end{align}\nTherefore, we compensate the weight of $\\bar{X}$, which has the weight $(w,-w)$, by the real linear compensator multiplet $L_0^s$, where $s$ is the power of $L_0$, \n\\begin{align}\nX\\Sigma \\left( \\frac{1}{L_0^s}\\bar{X} \\right) . \\label{compensated second term}\n\\end{align}\nHere, the term $\\frac{1}{L_0^s}\\bar{X}$ has the weight $(-2s+w,-w)$. According to Eq. $\\eqref{condition for projection}$, $s$ must satisfy the condition,\n\\begin{align}\ns=w-1. \\label{weight condition of v1}\n\\end{align}\nTaking into account this condition and the fact that $\\Sigma $ raises the weight by $(1,3)$, Eq. $\\eqref{compensated second term}$ has the weight $(3,3)$, which is correct for a chiral multiplet. Since the total weight of Eq. $\\eqref{compensated second term}$ must be the same as that of the first term $X$, the value of $w$ is determined as \n\\begin{align}\nw=3. \\label{weight condition of v2}\n\\end{align}\nThen, we find $s=2$ from Eq. $\\eqref{weight condition of v1}$, and Eq. $\\eqref{compensated second term}$ becomes \n\\begin{align}\nX\\Sigma \\left( \\frac{1}{L_0^2}\\bar{X} \\right) . \\label{compensated second term2}\n\\end{align}\nFinally, the weight of the multiplet in Eq. $\\eqref{u-derivative part}$ with the ansatz $\\eqref{l0}$ is also $(3,3)$ as long as Eq. $\\eqref{L0 condition22}$ is satisfied; in particular, this holds for our choice $\\eqref{Lo derivative part}$. 
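\n\nFor concreteness, let us spell out the weight bookkeeping behind the last statement (a simple consistency check using the weights quoted above). With $r_1=r_3=1$ and $r_2=0$, i.e., $(w_2,n_2)=(0,0)$ and $(w_3,n_3)=(2,0)$, the weight $(2w_2-w_3+5,2n_2-n_3+3)$ of the multiplet $\\eqref{Lo derivative part}$ becomes\n\\begin{align}\n(2\\cdot 0-2+5,\\ 2\\cdot 0-0+3)=(3,3),\n\\end{align}\nwhile $X$ has $(3,3)$ by assignment and $X\\Sigma (\\bar{X}\/L_0^2)$ carries $(3,3)+(-1,-3)+(1,3)=(3,3)$, where the three contributions come from $X$, from $\\bar{X}\/L_0^2$ and from the projection $\\Sigma $, respectively. Hence all three terms can consistently appear in a single chiral constraint. 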
\n\nTherefore, we find the complete embedding of a global SUSY expression $\\eqref{global constraint 2}$,\n\\begin{align}\nX+\\frac{1}{2}X\\Sigma \\left( \\frac{1}{L_0^2}\\bar{X} \\right) +\\frac{1}{4L_0}\\bar{\\mathcal{D}}^{(L_0)}L\\bar{\\mathcal{D}}^{(L_0)}L =0,\\label{local constraint}\n\\end{align}\nwhere $X$ is a chiral multiplet with $(3,3)$, $L$ is a real linear multiplet with $(2,0)$, and $L_0$ is a real linear compensator with $(2,0)$.\n\n\n \n\n\n\\section{Component action}\\label{Component action}\nIn this section, we derive the DBI action based on the constraint $\\eqref{local constraint}$ in the new minimal SUGRA.\n\n\\subsection{Minimal action} \\label{Minimal action}\nWe first consider the minimal extension of the action $\\eqref{component form of global DBI}$. The action corresponding to Eq. $\\eqref{global DBI with constraint}$ is expected to be\n\\begin{align}\nS=&[2X]_F+\\Biggl[2\\Lambda \\left\\{ X+\\frac{1}{2}X\\Sigma \\left( \\frac{\\bar{X}}{L_0^2}\\right) +\\frac{1}{4L_0}\\bar{\\mathcal{D}}^{(L_0)}L\\bar{\\mathcal{D}}^{(L_0)}L \\right\\} \\Biggr] _F+[\\tilde{\\Lambda }X^2]_F +\\Biggl[\\frac{3}{2}L_0V_R\\Biggr] _D, \\label{SL0}\n\\end{align}\nwhere $V_R\\equiv \\log \\frac{L_0}{S\\bar{S}}$, $S$ is a chiral multiplet with $(1,1)$, and we have assigned the weights of the Lagrange multiplier chiral multiplet $\\Lambda$ to $(0,0)$ and also $\\tilde{\\Lambda }$ to $(-3,-3)$ in such a way that the total weight is equal to $(3,3)$.\nThe last term in Eq. $\\eqref{SL0}$ is responsible for the kinetic term of the gravitational multiplet. Note that this term is invariant under the transformation $S\\to Se^{i\\Theta}$ where $\\Theta$ is a chiral multiplet with the weight $(0,0)$ since $[L_0(\\Theta+\\bar{\\Theta})]_D\\equiv0$ by the nature of a real linear multiplet. Due to this additional gauge invariance, we have gauge degrees of freedom other than superconformal ones. After imposing the gauge fixing condition for this additional gauge symmetry as $S=\\{1,0,0\\}$, the bosonic part of $\\eqref{SL0}$ is given by\n\\begin{align}\n\\nonumber S_B=& \\int d^4 x \\sqrt{-g}\\Biggl[ \\Biggl( F_X (1+\\Lambda )-\\frac{|F_X|^2\\Lambda }{C_0^2}-\\frac{\\Lambda}{4C_0}(B_a-i\\hat{D}_aC)^2\\\\\n\\nonumber &+\\frac{C\\Lambda}{2C_0^2}(B_a-i\\hat{D}_aC)(B_0^a-i\\hat{D}^aC_0)\n-\\frac{C^2\\Lambda}{4C_0^3}(B_{0a}-i\\hat{D}_aC_0)^2 +{\\rm{h.c.}}\\Biggr) \\\\\n&-\\frac{3}{2}\\hat{\\Box} C_0\\log C_0-\\frac{3}{2}\\hat{\\Box} C_0-\\frac{3}{4C_0}(B_0\\cdot B_0+\\hat{D}C_0\\cdot \\hat{D}C_0) \n+3A\\cdot B_0 \\Biggr] , \\label{SBfull}\n\\end{align}\nwhere $\\Lambda$ and $F_X$ are a scalar component of the chiral multiplet $\\Lambda$ and an auxiliary field of $X$, and $\\hat{D}_{\\mu}$ is a superconformal covariant derivative only including bosonic fields, for example,\n\\begin{align}\n\\hat{D}_{\\mu}C=\\partial_{\\mu}C-2b_{\\mu}C,\n\\end{align}\nwhere $b_{\\mu}$ is the gauge field of dilatation. The third term in Eq. $\\eqref{SL0}$, $\\tilde{\\Lambda }X^2$, imposes the nilpotency condition for $X$. Thanks to this, we can drop the scalar component of the chiral multiplet $X$ since the first scalar component can be represented as a fermion bilinear after solving $X^2=0$. That is why, we have inserted this term into the action from the beginning. 
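\n\nFor orientation, let us sketch at the component level why this works, suppressing the covariantizations dictated by the superconformal tensor calculus and possible convention-dependent normalizations in the multiplication rule $\\eqref{formulaF}$. Writing the components of $X$ as $\\{ z_X,P_L\\chi _X,F_X\\}$, one has schematically\n\\begin{align}\nX^2=\\left\\{ z_X^2,\\ 2z_XP_L\\chi _X,\\ 2z_XF_X-\\bar{\\chi}_XP_L\\chi _X\\right\\} ,\n\\end{align}\nso the constraint $X^2=0$ with $F_X\\neq 0$ fixes\n\\begin{align}\nz_X=\\frac{\\bar{\\chi}_XP_L\\chi _X}{2F_X},\n\\end{align}\nwhich is the fermion bilinear mentioned above; the remaining component equations then hold automatically because any product of more than two components of $P_L\\chi _X$ vanishes.\n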
\nIntegrating out the gauge field of $U(1)_A$ symmetry $A_\\mu$, we obtain \n\\begin{align}\nB_{0a}=0.\n\\end{align}\nTo eliminate the dilatation symmetry and conformal boost symmetry, we impose the following $D$-gauge and $K$-gauge conditions, \n\\begin{align}\nC_0=1,\\ \\ \\ \\ b_{\\mu}=0.\n\\end{align}\nThese conditions simplify the action $\\eqref{SBfull}$, which becomes\n\\begin{align}\n\\nonumber S_B=\\int d^4 x \\sqrt{-g}\\Biggl[ &\\frac{1}{2}R+\\Bigl( F_X (1+\\Lambda ) -|F_X|^2\\Lambda \\\\\n&-\\frac{\\Lambda }{4}(B\\cdot B-2iB\\cdot \\partial C-\\partial C\\cdot \\partial C)+{\\rm{h.c.}}\\Bigr) \\Biggr] . \\label{FX}\n\\end{align}\nThen, eliminating the auxiliary field $F_X$ leads to\n\\begin{align}\nS_B=\\int d^4 x \\sqrt{-g}\\Biggl[ &\\frac{1}{2}R+\\frac{1}{2\\lambda }\\Bigl( (\\lambda +1)^2+\\chi ^2\\Bigr) -\\frac{1}{2}(B\\cdot B-\\partial C\\cdot \\partial C)\\lambda -B\\cdot \\partial C \\chi \\Biggr] , \\label{FXX}\n\\end{align}\nwhere $\\lambda={\\rm Re}\\Lambda$ and $\\chi={\\rm Im}\\Lambda$.\nFinally, we obtain the following conditions from the E.O.Ms for $\\lambda $ and $\\chi $,\n\\begin{align}\n&\\frac{\\chi }{\\lambda }=B\\cdot \\partial C ,\\\\\n&\\frac{1}{\\lambda ^2}=1-(B\\cdot \\partial C)^2-B\\cdot B+\\partial C\\cdot \\partial C .\n\\end{align}\nSubstituting them into the action $\\eqref{FXX}$, we obtain the on-shell DBI action of a real linear multiplet,\n\\begin{align}\n S_B=\\int d^4 x \\sqrt{-g}\\Biggl[ \\frac{1}{2}R+1-\\sqrt{1-B\\cdot B+\\partial C\\cdot \\partial C-(B\\cdot \\partial C)^2}\\Biggr] . \\label{DBI}\n\\end{align}\nThis is almost the same form as Eq. $\\eqref{component form of global DBI}$ except for that our action $\\eqref{DBI}$ is formulated in curved background.\n\nBefore closing this subsection, let us discuss the linear-chiral duality. It is known that the action of a real linear multiplet can be rewritten in terms of that of a chiral multiplet. However, in the case with the action including derivative terms such as Eq. $\\eqref{SL0}$, it is nontrivial to take this duality transformation in a manifestly SUSY way.\\footnote{In global SUSY, the dual action has been obtained at the level of superfield in Ref.~\\cite{GonzalezRey:1998kh}.} Then, we focus only on the bosonic part $\\eqref{DBI}$ and discuss this duality at the component level of bosonic part.\n\nWe start from the following Lagrangian which is the relevant part in the action $\\eqref{DBI}$,\n\\begin{align}\n\\mathcal{L}=1-\\sqrt{1-B\\cdot B+\\partial C\\cdot \\partial C-(B\\cdot \\partial C)^2}. \\label{Ori}\n\\end{align}\nTo rewrite this Lagrangian $\\eqref{Ori}$ in terms of the complex scalar of a chiral multiplet, we first relax the constraint on the vector field $B_a$. We impose it by the E.O.M for a scalar field $\\ell$, that is, we use \n\\begin{align}\n\\mathcal{L}=1-\\sqrt{1-B\\cdot B+\\partial C\\cdot \\partial C-(B\\cdot \\partial C)^2}+B\\cdot \\partial \\ell ,\\label{LL}\n\\end{align}\nwhere $B_a$ is now an unconstrained vector. The Lagrangian $\\eqref{LL}$ is equivalent to the original one $\\eqref{Ori}$ since the variation with respect to $\\ell$ leads to the constraint, $\\partial _a B^a=0$.\nInstead of $\\ell$, varying with respect to $B_a$ gives\n\\begin{align}\n\\partial ^a\\ell + (\\partial ^a C B\\cdot \\partial C+B^a) \\{1-B\\cdot B+\\partial C\\cdot \\partial C-(B\\cdot \\partial C)^2\\} ^{-1\/2} =0. \\label{du}\n\\end{align}\nOur task is now to solve this equation $\\eqref{du}$ with respect to $B_a$.\nBy taking scalar products of Eq. 
$\\eqref{du}$ with $B_a, \\partial _a C$ and $\\partial _a \\ell $, we obtain three independent equations and can solve them with respect to $B^2$, $B\\cdot\\partial C$, and $B\\cdot \\partial \\ell$. The solutions are\n\\begin{align}\n&B^2=\\frac{(\\partial \\ell) ^2(1+(\\partial C)^2)^2-(\\partial C\\cdot \\partial \\ell )^2(2+(\\partial C)^2)}{Y^2},\\\\\n&B\\cdot \\partial C=-\\frac{\\partial C\\cdot \\partial \\ell }{Y},\\\\\n&B\\cdot \\partial \\ell =\\frac{-(\\partial \\ell) ^2(1+(\\partial C)^2)+(\\partial C\\cdot \\partial \\ell )^2}{Y},\n\\end{align}\nwhere\n\\begin{align}\nY \\equiv \\{ (1+(\\partial C)^2)(1+(\\partial \\ell) ^2)-(\\partial C\\cdot \\partial \\ell )^2\\}^{1\/2}.\n\\end{align}\nSubstituting these solutions into the Lagrangian $\\eqref{LL}$, we obtain the dual action,\n\\begin{align}\n\\nonumber \\mathcal{L}&=1-\\sqrt{1+(\\partial C)^2+(\\partial \\ell) ^2+(\\partial C)^2(\\partial \\ell) ^2-(\\partial C\\cdot \\partial \\ell )^2}\\\\\n&=1-\\sqrt{1+\\partial \\phi \\cdot \\partial \\bar{ \\phi }-\\frac{1}{4}(\\partial \\phi)^2 (\\partial \\bar{ \\phi })^2+\\frac{1}{4}(\\partial \\phi \\cdot \\partial \\bar{ \\phi })^2} ,\\label{LC}\n\\end{align}\nwhere we have defined a complex scalar $\\phi =\\ell + iC$. The Lagrangian $\\eqref{LC}$ can be written as the DBI form\n\\begin{align}\n\\mathcal{L}=1-\\sqrt{{\\rm{det}} \\left( g_{ab}+\\frac{1}{2}\\partial_a \\phi \\partial_b \\bar{\\phi} \\right)}.\\label{LC2}\n\\end{align}\nThis Lagrangian $\\eqref{LC2}$ agrees with the one constructed in Ref.~\\cite{Koehn:2012ar} using a chiral multiplet directly. \n \n\n\\subsection{Matter coupled extension} \\label{matter}\nFinally, we discuss the matter coupled DBI action given by\n\\begin{align}\nS=&[2f(\\Phi^{I})X]_F+\\left[2\\Lambda \\left\\{ X+\\frac{1}{2}X\\Sigma\\left(\\frac{\\bar{X}}{M(L_0,\\Phi^I,\\bar{\\Phi}^{\\bar{J}})}\\right)+\\frac{1}{4L_0}\\bar{\\cal D}^{(L_0)}L\\bar{\\cal D}^{(L_0)}L\\right\\}\\right]_F\\nonumber\\\\\n&+[{\\cal F}(L_0,\\Phi^I,\\bar{\\Phi}^{\\bar{J}})]_D+[\\tilde{\\Lambda}X^2]_F\\label{mDBI},\n\\end{align}\nwhere $\\Phi^I$ ($\\bar{\\Phi}^{\\bar{J}}$) is a (anti-) chiral matter multiplet; $f(\\Phi)$ is a holomorphic function of $\\Phi^I$ with $(0,0)$; $M(L_0,\\Phi^I,\\bar{\\Phi}^{\\bar{J}})$ and ${\\cal F}(L_0,\\Phi^I,\\bar{\\Phi}^{\\bar{J}})$ are real functions of $\\Phi^I,\\bar{\\Phi}^{\\bar{J}}$ and $L_0$ with $(4,0)$ and $(2,0)$, respectively. Note that we have omitted superpotential term $[W(\\Phi^I)]_F$, where $W(\\Phi^I)$ is a holomorphic function of $\\Phi^I$ with the weight $(w,n)=(3,3)$, since the term is irrelevant to the following discussion. 
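\n\nAs a quick consistency check of these weight assignments, recall that an $F$-term density requires its argument to have the weight $(3,3)$ and a $D$-term density the weight $(2,0)$. The first term is consistent since $f(\\Phi^{I})X$ has $(0,0)+(3,3)=(3,3)$. In the second term, $\\bar{X}\/M$ has the weight $(3-4,-3-0)=(-1,-3)$, so the condition $\\eqref{condition for projection}$ is satisfied and $\\Sigma $ may act on it; since $\\Sigma $ raises the weight by $(1,3)$, the multiplet $X\\Sigma (\\bar{X}\/M)$ again has $(3,3)$, and the same is true for the derivative term in the braces and for $\\tilde{\\Lambda }X^2$ with $(-3,-3)+(6,6)=(3,3)$, while $[{\\cal F}]_D$ is allowed because ${\\cal F}$ has $(2,0)$.\n\n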
Taking into account the nilpotency condition on $X$, the bosonic component of the action~(\\ref{mDBI}) is given by\n\\begin{align}\nS_B=&\\int d^4x \\sqrt{-g}\\Biggl[ \\Biggl( F_X(f+\\Lambda) -\\frac{\\Lambda |F_X|^2}{M}-\\frac{\\Lambda}{4C_0}(B_a-i\\hat{D}_aC)^2 \\nonumber\\\\\n&+\\frac{C\\Lambda}{2C_0^2}(B_a-i\\hat{D}_aC)(B_0^a-i\\hat{D}^aC_0)-\\frac{C^2\\Lambda}{4C_0^3}(B_0^a-i\\hat{D}^aC_0)^2+{\\rm h.c.}\\Biggr)+{\\cal L}_m\\Biggr],\n\\end{align}\nwhere\n\\begin{align}\n{\\cal L}_m=&-\\frac{1}{3}({\\cal F}-{\\cal F}_{C_0}C_0)R(b)+\\frac{1}{2}{\\cal F}_{C_0C_0}(\\hat{D}C_0\\cdot \\hat{D}C_0-B_0\\cdot B_0)\\nonumber\\\\\n&+2{\\cal F}_{I\\bar{J}}(F^I\\bar{F}^{\\bar{J}}-\\hat{D}\\Phi^I \\cdot \\hat{D}\\bar{\\Phi}^{\\bar{J}})+\\left(-i{\\cal F}_{C_0I}B_0\\cdot \\hat{D}\\Phi^I+{\\rm h.c.}\\right)\\label{defLm}.\n\\end{align}\nIn the above expression, $\\Phi^I$ ($\\bar{\\Phi}^{\\bar{J}}$) and $F^I$ ($\\bar{F}^{\\bar{J}}$) represent the scalar and auxiliary components of the (anti-) chiral matter multiplet, and subscripts denote the derivative with respect to the corresponding scalar. $R(b)$ becomes a Ricci scalar when $b_{\\mu}=0$ is imposed as the $K$-gauge condition.\n\nBefore setting superconformal gauge conditions, we integrate out the auxiliary field $F_X$ and the Lagrange multiplier $\\Lambda$. We can easily solve the E.O.M for $F_X$ and obtain\n\\begin{align}\nS_B=&\\int d^4x \\sqrt{-g}\\Biggl[\\frac{M}{2\\lambda}\\left\\{(\\lambda+p)^2+(\\chi+q)^2\\right\\}-\\frac{\\lambda}{2C_0}(B\\cdot B-\\hat{D}C\\cdot \\hat{D}C)\\nonumber\\\\\n&-\\frac{\\chi}{C_0}B \\cdot \\hat{D}C+\\frac{C\\lambda}{C_0^2}(B_0\\cdot B-\\hat{D}C_0\\cdot \\hat{D}C)+\\frac{C\\chi}{C_0^2}(B_0\\cdot \\hat{D}C+B\\cdot \\hat{D}C_0)\\nonumber\\\\\n&-\\frac{C^2\\lambda}{2C_0^3}(B_0 \\cdot B_0-\\hat{D}C_0\\cdot \\hat{D}C_0)-\\frac{C^2\\chi}{C_0^3}B_0\\cdot \\hat{D}C_0+{\\cal L}_m\\Biggr],\\label{mDBI2}\n\\end{align}\nwhere $\\lambda={\\rm Re}\\Lambda$, $\\chi={\\rm Im}\\Lambda$, $p={\\rm Re}f$, and $q={\\rm Im}f$. Note that, at this stage, the matter Lagrangian ${\\cal L}_m$ is not affected by the DBI sector. Next, we eliminate $\\lambda$ and $\\chi$ by using their E.O.Ms, which are given by\n\\begin{align}\n&-\\frac{M}{2\\lambda^2}\\left\\{(\\lambda+p)^2+(\\chi+q)^2\\right\\}+\\frac{M}{\\lambda}(\\lambda+p)+\\mathcal{A}=0,\\\\\n&\\frac{M}{\\lambda}(\\chi+q)+\\mathcal{B}=0,\n\\end{align}\nwhere\n\\begin{align}\n&\\mathcal{A}\\equiv -\\frac{1}{2C_0}(B\\cdot B-\\hat{D}C\\cdot \\hat{D}C)+\\frac{C}{C_0^2}(B_0\\cdot B-\\hat{D}C_0\\cdot \\hat{D}C)-\\frac{C^2}{2C_0^3}(B_0\\cdot B_0-\\hat{D}C_0\\cdot \\hat{D}C_0),\\\\\n&\\mathcal{B}\\equiv -\\frac{1}{C_0}B\\cdot \\hat{D}C+\\frac{C}{C_0^2}(B_0\\cdot \\hat{D}C+B\\cdot \\hat{D}C_0)-\\frac{C^2}{C_0^3}B_0\\cdot \\hat{D}C_0.\n\\end{align}\nSolutions for them are\n\\begin{align}\n&\\lambda|_{\\rm sol}^{-1}=\\frac{1}{p}\\sqrt{1+\\frac{2\\mathcal{A}}{M}-\\frac{\\mathcal{B}^2}{M^2}},\\\\\n&\\chi|_{\\rm sol}=-q-\\frac{\\lambda|_{\\rm sol}}{M}\\mathcal{B}.\n\\end{align} \nSubstituting the above solutions into the action~(\\ref{mDBI2}), we obtain a relatively simple form\n\\begin{align}\nS_B=\\int d^4x\\sqrt{-g}\\left[Mp\\left(1-\\sqrt{1+\\frac{2\\mathcal{A}}{M}-\\frac{\\mathcal{B}^2}{M^2}}\\right)-q\\mathcal{B}+{\\cal L}_m\\right].\\label{mDBI3}\n\\end{align}\n\nThe remaining issue is the elimination of auxiliary fields $B_0^a$ and $A_a$. However, it is difficult to do it because of the presence of nonlinear terms of $B_0^a$ contained in the first term in Eq.~(\\ref{mDBI3}). 
In addition, ${\\cal L}_m$ has $A_aA^a$ as well as mixing terms between $B_0^a$ and $A_a$ in general cases. Therefore, integration of those auxiliary fields is technically difficult and we cannot obtain the complete on-shell action.\\footnote{The general matter coupled system in the new minimal SUGRA not including higher-order derivative terms can be found in Ref.~\\cite{Ferrara:1983dh}.}\n\nAlthough a general case is difficult to complete the remaining task, we can continue our discussion for the following special case. Let us consider the following choice of ${\\cal F}(L_0,\\Phi^I,\\bar{\\Phi}^{\\bar{J}})$,\n\\begin{align}\n{\\cal F}=L_0\\log \\left(\\frac{L_0G(\\Phi^i,\\bar{\\Phi}^{\\bar{j}})}{S\\bar{S}}\\right), \\label{special form}\n\\end{align}\nwhere $\\Phi^i$ is a matter chiral multiplet with its weight $(0,0)$, $G(\\Phi^i,\\bar{\\Phi}^{\\bar{j}})$ is a real function of $\\Phi^i$ and $\\bar{\\Phi}^{\\bar{j}}$, and $S$ is a chiral multiplet with $(1,1)$. This action is also invariant under the transformation $S\\to Se^{i\\Theta}$ in the same way as the last term in Eq. $\\eqref{SL0}$, which characterizes the new minimal SUGRA.\n\nWe use the $D$-gauge condition to make the Ricci scalar term canonical. From Eq.~(\\ref{defLm}), we can find an appropriate $D$-gauge choice~\\cite{Ferrara:1983dh}\n\\begin{align}\n{\\cal F}-{\\cal F}_{C_0}C_0=-\\frac{3}{2}.\n\\end{align}\nAs the choice of the additional gauge, we set ${\\cal F}_{C_0}=0$~\\cite{Ferrara:1983dh}. Then, we can solve these gauge conditions with respect to $C_0$ and $S$ and obtain\n\\begin{align}\nS\\bar{S}=&\\frac{3}{2}eG,\\\\\nC_0=&\\frac{3}{2}.\n\\end{align} \nUsing the $K$-gauge, we also set a condition $b_\\mu=0$.\n\nUnder these conditions, ${\\cal L}_m$ becomes\n\\begin{align}\n{\\cal L}_m=&\\frac{1}{2}R+2{\\cal F}_{i\\bar{j}}(F^i\\bar{F}^{\\bar{j}}-\\partial_a\\Phi^i\\partial^a\\bar{\\Phi}^{\\bar{j}})-\\frac{1}{2}B_0^aB_{0a}\\nonumber\\\\\n&+(-i{\\cal F}_{C_0i}B_0^a\\partial_a \\Phi^i+{\\rm h.c.})+(iB_0^a\\partial_a\\log S+{\\rm h.c.})+2B_0^aA_a,\n\\end{align}\nwhere $A_a$ is the $U(1)_A$ gauge field mentioned above. We find that the E.O.M for $A_a$ gives a constraint $B_0^a=0$ and the difficulty due to the nonlinear term of $B_0^a$ is circumvented in this case. This result is irrelevant to other parts of the action (\\ref{mDBI3}) since they do not contain terms of $A_a$. $F^i$ can be eliminated by their E.O.Ms, and we finally obtain the following on-shell action,\n\\begin{align}\nS_B=\\int d^4x\\sqrt{-g}\\left[Mp\\left(1-\\sqrt{1+\\frac{2\\mathcal{A}}{M}-\\frac{\\mathcal{B}^2}{M^2}}\\right)-q\\mathcal{B}+\\frac{1}{2}R-2{\\cal F}_{i\\bar{j}}\\partial_a\\Phi^i\\partial^a\\bar{\\Phi}^{\\bar{j}}\\right], \\label{special matter}\n\\end{align}\nwith\n\\begin{align}\n\\mathcal{A}=\\frac{1}{3}(\\partial C\\cdot \\partial C-B\\cdot B),\\ \\ \\ \\mathcal{B}=-\\frac{2}{3}B\\cdot \\partial C.\n\\end{align}\nHere, the real function $M$ should be understood as $M|_{C_0=3\/2}$.\nNote that, in this case, we cannot add superpotential terms of $\\Phi^i$ by the following reason: To obtain the constraint $B_0^a=0$, we assumed that only $S$ has the weight $(w,n)=(1,1)$ and a special form of ${\\cal F}$ giving ${\\cal F}_{S\\bar{S}}=0$, otherwise such a constraint does not appear. For the superconformal invariance, the superpotential $W$ should have $(3,3)$. From the weight condition, a possible form is $W=S^3g(\\Phi^i)$ but this term is forbidden by the symmetry under $S\\to Se^{i\\Theta}$ which the D-term part $[{\\cal F}]_D$ has. 
Therefore, we cannot add any superpotential terms of matter multiplets.\n\\section{Relation between our results and other works}\\label{discussion}\nHere, we comment on the differences between ours and the results in Ref.~\\cite{Koehn:2012ar}, in which the DBI action of a chiral multiplet is constructed in the old minimal SUGRA. As we mentioned before, the DBI action of a real linear multiplet can be rewritten in terms of a chiral multiplet through the linear-chiral duality and the whole action of a chiral multiplet is obtained in global SUSY in terms of superfield \\cite{GonzalezRey:1998kh}. The authors of Ref.~\\cite{Koehn:2012ar} embedded the dual chiral multiplet action into the old minimal SUGRA. On the other hand, our starting point is the action of a real linear multiplet, more precisely, the constraint $\\eqref{global constraint}$ imposed upon it. This constraint has its origin in the tensor multiplet of $\\mathcal{N}=2$ SUSY \\cite{Rocek:1997hi,Bagger:1997pi,GonzalezRey:1998kh}. Indeed, in global SUSY case, the real linear multiplet corresponds to a Goldstino multiplet for the broken SUSY. From such a viewpoint, our construction is important since it makes the connection with the partial breaking of $\\mathcal{N}=2$ SUSY much clearer . \n\nAlthough the ways of construction are different, our action would realize their result. Indeed, at the bosonic component level, we have found the correspondence between the result in Ref.~\\cite{Koehn:2012ar} and ours. However, we also found that the action cannot be realized in the old minimal SUGRA when we do not consider the case including higher-derivative terms of a chiral compensator, which may contradict the result of Ref.~\\cite{Koehn:2012ar}. Unlike the DBI action of a real linear multiplet, that of a vector multiplet can be constructed in both of the old and new minimal SUGRA \\cite{Abe:2015nxa}. The difference originates from the necessity of {\\it u-associated} derivatives in the DBI action of a real linear multiplet. For a vector superfield case, we can construct the DBI action only with the chiral projection operator $\\Sigma$, which does not require {\\it u-associated} multiplet to make the operand superfield a primary superfield~\\cite{Kugo:1983mv,Butter:2009cp,Kugo:2016zzf}. It is interesting to explore these reasons and we expect that the direct derivation of the constraint $\\eqref{global constraint}$ and also DBI action from $\\mathcal{N}=2$ SUGRA are necessary to understand this issue, which would be our future work \\footnote{For the DBI action of a vector multiplet, such attempts have been recently discussed \\cite{Kuzenko:2015rfx}. There, the partial breaking of ${\\cal N}=2$ SUSY in some ${\\cal N}=1$ SUSY background has been discussed.}.\n\n\n\n\n\\section{Summary}\\label{summary}\nIn this paper, we have discussed superconformal generalization of a DBI action of a real linear superfield known in global SUSY. \n\nTo achieve this, we have focused on the constraint $\\eqref{global constraint}$ between a chiral multiplet and a real linear multiplet, which comes from the partial breaking of 4D $\\mathcal{N}=2$ SUSY \\cite{Bagger:1997pi}. However, it is a nontrivial task to embed this constraint into conformal SUGRA due to the existence of the SUSY spinor derivative, which in general, cannot be applied for arbitrary multiplets in conformal SUGRA. Instead of using an original spinor derivative, we have adopted the {\\it{u-associated}} spinor derivative, proposed in Ref.~\\cite{Kugo:1983mv}. 
We obtained the condition $\\eqref{weight condition between w3n3 and w2n2}$ and $\\eqref{explicit condotion for chiral}$ by requiring that the corresponding constraint $\\eqref{u-derivative part}$ in conformal SUGRA becomes a chiral constraint. Surprisingly, we have found that these conditions can be realized only in the new minimal formulation of SUGRA when we choose the general power function of compensator as the {\\it{u-associated}} multiplet. Then, we have derived the condition $\\eqref{L0 condition2}$ which {\\it{u-associated}} multiplets must satisfy.\n\nAfter embedding the constraint into the new minimal SUGRA, we have shown the component action which is formulated in curved spacetime. We have also discussed the linear-chiral duality at the level of bosonic components and rewritten the action from a complex scalar field of a chiral multiplet. Finally, we have constructed the action where matter multiplets are directly coupled to the DBI sector. Due to the appearance of nonlinear terms for vector field $B_{0a}$, we have restricted the discussion to the special form of matter function $\\eqref{special form}$ and derived the bosonic action $\\eqref{special matter}$. \n\nIn this paper, we have shown that the DBI action of a real linear multiplet cannot be realized in the old minimal SUGRA as a naive embedding of the constraint $\\eqref{global constraint}$, which may contradict the result of Ref.~\\cite{Koehn:2012ar}. The duality relation between the old and new minimal SUGRA \\cite{Ferrara:1983dh} is generically not obvious when there exist higher-derivative terms. For example, the non-minimal coupling of gravity is realized only in new minimal SUGRA \\cite{Farakos:2012je} as in the case of the DBI action we discussed here. Such an issue may be revealed with the help of deep understanding of SUGRA system with higher-order derivative terms. \n\nTo investigate our model further, we need the direct derivation of the constraint from $\\mathcal{N}=2$ SUGRA. And also, the remaining part in Eq. $\\eqref{string DBI}$, i.e., a term including $B_{\\mu \\nu }$, and possible combinations of the Maxwell, scalar and 2-form parts have not been constructed. We leave them for future work.\n\n\n\n\n\\section*{Acknowledgment}\nThe authors would like to thank Taichiro Kugo for helpful discussions and comments. YY would like thank also to Hiroyuki Abe and Yutaka Sakamura for useful discussion and collaboration in the related work. The work of YY is supported by JSPS\nResearch Fellowships for Young Scientists No. 26-4236 in Japan.\n\\begin{appendix}\n\\section{The components of {\\it{u-associated}} spinor derivative multiplet}\\label{explicit}\nHere we show the explicit component form of \n\\begin{align}\n\\frac{1}{L_0}\\bar{\\mathcal{D}}^{(L_0)}L\\bar{\\mathcal{D}}^{(L_0)}L. \\label{Lo derivative part2}\n\\end{align}\nAs we have seen in Sec.~\\ref{extension}, Eq. $\\eqref{Lo derivative part2}$ is a chiral multiplet with weight $(3,3)$. 
The components of this multiplet, $\\{ z',P_L\\chi' , F'\\}$, are \n\\begin{align}\nz'&=\\frac{C^2}{C_0} \\left(\\bar{\\tilde{Z}}-\\bar{\\tilde{Z}}_0 \\right) P_R\\left( \\tilde{Z}-\\tilde{Z}_0 \\right),\\\\\n\\nonumber P_L\\chi' &=\\frac{\\sqrt{2}C^2}{C_0}P_L\\biggl[ \\left( \\tilde{\\slash {B}}-i\\slash{D}\\tilde{C}-\\tilde{\\slash {B}}_0+i\\slash{D}\\tilde{C}_0\\right) \\left( \\tilde{Z}-\\tilde{Z}_0 \\right) -\\frac{3i}{2}\\tilde{Z}_0\\bar{\\tilde{Z}}_0P_R\\tilde{Z}_0\\\\\n&-\\frac{i}{2}\\tilde{Z}_0\\bar{\\tilde{Z}}P_R\\tilde{Z}+\\frac{i}{4}\\gamma ^a\\tilde{Z}_0\\bar{\\tilde{Z}}\\gamma _a\\gamma _5 \\tilde{Z}+i\\tilde{Z}\\bar{\\tilde{Z}}_0P_R\\tilde{Z}_0-\\frac{i}{2}\\gamma ^a\\tilde{Z}\\bar{\\tilde{Z}}_0\\gamma _a\\gamma _5 \\tilde{Z}_0\\biggr] ,\\\\\n\\nonumber F'&=\\frac{C^2}{C_0}\\biggl[ -\\left( \\tilde{B}_a-i D_a\\tilde{C}\\right) ^2 +2\\left( \\tilde{B}_a-i D_a\\tilde{C} \\right) \\left( \\tilde{B}^a-i D^a\\tilde{C} \\right) -\\left( \\tilde{B}_{0a}-i D_{0a}\\tilde{C}\\right) ^2\\\\\n\\nonumber &+i\\bar{\\tilde{Z}}_0\\gamma _5\\left( \\tilde{\\slash {B}}-i\\slash{D}\\tilde{C} \\right)\\left( \\tilde{Z}-\\tilde{Z}_0 \\right)+\\frac{i}{2}\\bar{\\tilde{Z}} \\gamma _5\\left( \\tilde{\\slash {B}}_0-i\\slash{D}\\tilde{C} _0\\right)\\tilde{Z}\\\\\n\\nonumber &-2i\\bar{\\tilde{Z}} \\gamma _5\\left( \\tilde{\\slash {B}}_0-i\\slash{D}\\tilde{C} _0\\right)\\tilde{Z}_0+\\frac{3i}{2}\\bar{\\tilde{Z}}_0 \\gamma _5\\left( \\tilde{\\slash {B}}_0-i\\slash{D}\\tilde{C} _0\\right)\\tilde{Z}_0\\\\\n\\nonumber &+2\\left( \\bar{\\tilde{Z}}-\\bar{\\tilde{Z}}_0 \\right)P_R \\slash{D}\\left( \\tilde{Z}-\\tilde{Z}_0 \\right) +\\frac{1}{2}\\bar{\\tilde{Z}}_0P_R\\tilde{Z}_0\\bar{\\tilde{Z}}\\tilde{Z}+\\frac{1}{2}\\bar{\\tilde{Z}}P_R\\tilde{Z}\\bar{\\tilde{Z}}_0\\tilde{Z}_0\\\\\n&+2\\bar{\\tilde{Z}}P_R\\tilde{Z}_0\\bar{\\tilde{Z}}\\tilde{Z}_0-3\\bar{\\tilde{Z}}P_R\\tilde{Z}_0\\bar{\\tilde{Z}}_0\\tilde{Z}_0-3\\bar{\\tilde{Z}}\\tilde{Z}_0\\bar{\\tilde{Z}}_0P_R\\tilde{Z}_0+\\frac{1}{2}\\bar{\\tilde{Z}}_0P_R\\tilde{Z}_0\\bar{\\tilde{Z}}_0\\tilde{Z}_0\\biggr] ,\n\\end{align}\nwhere the fields with $\\tilde{}$ are divided by the first components of the multiplet they belong to, in the same way as Eq. $\\eqref{tilde}$, and the superconformal derivative $D_a$ is understood to act only on the numerator but not on the denominator, e.g., $D^a\\tilde{C} \\equiv D^aC\/C =D^a \\log C$.\n\n\n\\end{appendix}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe famous Hasse--Minkowski theorem gives precise conditions for a quadratic form to have a non-trivial zero. For the solubility of a system of two quadratic forms there are still many open questions. In this paper we restrict to the study of smooth intersections of two quadrics in five variables. Such a system describes a geometric object $X \\subseteq \\mathbb P^4$ called a \\textit{del Pezzo surfaces of degree $4$}.\n\nAs in the Hasse--Minkowski theorem, a natural first step is to study the inclusion $X(\\mathbb{Q}) \\subseteq X(\\mathbb{A}_\\mathbb{Q})$ of the rational points into the \\textit{adelic points}. Since we are considering homogeneous equations we can identify $X(\\mathbb{A}_\\mathbb{Q})$ with $\\prod_{p \\leq \\infty} X(\\mathbb{Q}_p)$, that is, an adelic solution consists of local solutions in each completion of $\\mathbb{Q}$. We say that a class of varieties satisfies the \\textit{Hasse principle} if for each member of the family the existence of local solutions implies the existence of a global solution. 
In general the Hasse principle can fail. One such example was the quartic del Pezzo surface\n$$\n\\begin{cases}\nx^2-5y^2=uv;\\\\\nx^2-5z^2=(u+v)(u+2v),\n\\end{cases}\n$$\nby Birch and Swinnerton-Dyer \\cite{BSD}.\n\nThis failure can be explained by the Brauer--Manin obstruction, as introduced by Manin \\cite{ManinICM}; any element $\\mathcal{Q}$ in the Brauer group of $X$ determines an intermediate set\n\\[\nX(\\mathbb{Q}) \\subseteq X(\\mathbb{A}_\\mathbb{Q})^{\\mathcal{Q}} \\subseteq X(\\mathbb{A}_\\mathbb{Q})\n\\]\nwhich can be used in some cases to show that $X(\\mathbb{Q})=\\emptyset$ even if there are adelic points on $X$. It has been conjectured by Colliot-Th\\'el\\`ene and Sansuc \\cite{CTSansuc} that for varieties such as del Pezzo surfaces, the Brauer--Manin obstruction is the \\textit{only obstruction to the Hasse principle}, that is, $X(\\mathbb{Q})$ is empty if and only if $X(\\mathbb{A}_{\\mathbb{Q}})^{\\Br} := \\bigcap_{\\mathcal{Q} \\in \\Br X} X(\\mathbb{A}_{\\mathbb{Q}})^{\\mathcal{Q}}$ is.\n\nPioneering work of Swinnerton-Dyer shows that for quartic del Pezzo surfaces the Brauer group $\\Br X\/\\Br_0 X$ is isomorphic to $\\left(\\mathbb{Z}\/2\\mathbb{Z} \\right)^i$ for $i \\in\\{0,1,2\\}$ \\cite{SDBrauerGroupCubicSurfaces}. Furthermore, the Brauer group can be explicitly computed from the equations \\cite{BBFL}, \\cite{VAV}.\n\nThe conjecture by Colliot-Th\\'el\\`ene and Sansuc was proven by Wittenberg \\cite{WittenbergBook} for quartic del Pezzo surface with a trivial Brauer group subject to the Schinzel hypothesis and the conjectured finiteness of Tate--Shafarevich groups of elliptic curves. These techniques were then applied by V\u00e1rilly-Alvarado and Viray for certain surfaces with a Brauer group of order $2$.\n\nWe will study the arithmetic of quartic del Pezzo surfaces with a Brauer group of order $4$, using fibration $X \\dashrightarrow \\mathbb P^1$ into curves. First we show that the existence of a commonly studied fibration implies the Hasse principle.\n\n\\begin{thm}[Thm.~\\ref{thm:conicfibrations}]\\label{thm:thm1}\nLet $X$ be a quartic del Pezzo surface with $\\# \\Br X\/\\Br_0 X =4$ over a number field $K$. If $X$ admits a conic fibration $X \\dashrightarrow \\mathbb P^1$ then $X(K)\\neq \\emptyset$.\n\\end{thm}\n\nThe work of V\u00e1rilly-Alvarado and Viray in the case of a smaller Brauer group, however uses maps $X \\dashrightarrow \\mathbb P^1$ obtained by embedding $X$ anticanonically and projecting away from a plane. They ask if a Brauer group of order $4$ can be ``vertical'' with respect to such maps. The second result of this paper is that this is not the case.\n\n\\begin{thm}[Thm.~\\ref{thm:nonewpoints}]\nLet $X \\subseteq \\mathbb P^4$ be an anticanonically embedded quartic del Pezzo surface over a number field $K$ with $\\# \\Br X\/\\Br_0 X =4$ and $X(K)=\\emptyset$. Then $\\Br X$ is not vertical with respect to a map $f \\colon X \\dashrightarrow \\mathbb P^1$ obtained by projecting away from a plane.\n\\end{thm}\n\nThis shows that the techniques of \\cite{CTSSD}, \\cite{WittenbergBook} and \\cite{VAV} cannot be directly applied to prove that the Brauer--Manin obstruction is the only one to the Hasse principle for quartic del Pezzo surfaces with a Brauer group of order $4$.\n\nSurfaces with two independent classes in the Brauer group have rarely shown up in the literature. 
Jahnel and Schindler \\cite{JS} prove that quartic del Pezzo surfaces with a Brauer group of order $2$ (and even those for which the conjecture is true) are Zariski dense in the moduli space of all quartic del Pezzo surfaces. In contrast, Mitankin and Salgado \\cite{MS} study a subfamily where infinitely many members have a Brauer group of order $4$, but all of these surfaces admit a conic fibration and hence have a rational point in the light of Theorem~\\ref{thm:thm1}.\n\nWe will restrict to a different subfamily, which we will show to be non-empty, in which each member has a Brauer group of order $4$.\n\n\\begin{thm}[Thm.~\\ref{thm:failureWA}, Thm.~\\ref{thm:curlyABorC}]\\label{thm:intro2}\nLet $X$ be a quartic del Pezzo surface with $\\#\\Br X\/\\Br_0 X =4$ given by a system\n\\[\n\\begin{cases}\ny^2-px^2 = (A_1u+B_1v)(C_1u+D_1v);\\\\\nz^2-px^2 = (A_2u+B_2v)(C_2u+D_2v),\n\\end{cases}\n\\]\nwhere $p$ is an odd prime. Then $X$ fails weak approximation and there is at most one element $\\mathcal{Q} \\in \\Br X\/\\Br_0 X$ for which $X(\\mathbb{A}_{\\mathbb{Q}})^{\\mathcal{Q}}=\\emptyset$.\n\\end{thm}\n\nNote the contrast with the result that if $X(\\mathbb{A}_\\mathbb{Q})^{\\Br} = \\emptyset$ then there exists an element $\\mathcal{Q} \\in \\Br X$ such that $X(\\mathbb{A}_{\\mathbb{Q}})^{\\mathcal{Q}}=\\emptyset$ for any quartic del Pezzo surface \\cite[Rem.~2 following Lem.~3.4]{CTPoonen}.\n\nFor every quartic del Pezzo surface $X$ for which $\\Br X\/\\Br_0 X$ has order $4$, we can write down two explicit generators $\\mathcal{A}_X$ and $\\mathcal{B}_X$, where $\\mathcal{A}_X$ is uniquely defined and $\\mathcal{B}_X$ is any other non-trivial element. We produce examples in which either generator obstructs the Hasse principle.\n\n\\begin{thm}\nConsider the surfaces\n$$\nY\\colon\\begin{cases}\ny^2-13x^2=uv;\\\\\nz^2-13x^2=(2u-13v)(u-6v),\n\\end{cases}\n$$\nand\n$$\nS \\colon \\begin{cases}\ny^2-13x^2=uv;\\\\\nz^2-13x^2=(u+v)(153u+179v).\n\\end{cases}\n$$\n\\begin{enumerate}\n\\item[(a)] The surfaces $Y$ and $S$ are everywhere locally soluble.\n\\item[(b)] The Brauer groups $\\Br Y\/\\Br_0 Y$ and $\\Br S\/\\Br_0 S$ are both isomorphic to $(\\mathbb Z\/2\\mathbb Z)^2$.\n\\item[(c)] We have $Y(\\mathbb{A}_\\mathbb{Q})^{\\mathcal{A}_Y} = \\emptyset$ for $\\mathcal{A}_Y = \\left(13,\\frac{u-6v}v\\right) \\in \\Br Y$, and $S(\\mathbb{A}_\\mathbb{Q})^{\\mathcal{B}_S} = \\emptyset$ for $\\mathcal{B}_S = \\left(13,\\frac{y+z}u\\right) \\in \\Br S$.\n\\end{enumerate}\n\\end{thm}\n\nUsing Theorem~\\ref{thm:intro2} we see that $\\mathcal{A}_Y$ and $\\mathcal{B}_S$ are the unique elements in respectively $\\Br Y$ and $\\Br S$ with these properties.\n\nIt remains an open problem to prove, for a general quartic del Pezzo surface with a Brauer group of order $4$, that $X(\\mathbb{Q})\\ne\\emptyset$ whenever neither $\\mathcal A_X$, $\\mathcal B_X$ nor $\\mathcal{A}_X+\\mathcal{B}_X$ obstructs the Hasse principle.\n\n\\subsection*{Acknowledgements}\n\nThis paper is a product of an internship of the second author at IST Austria under the supervision of the first author.\n\n\\section{Preliminaries}\n\nLet us fix our terminology.\n\n\\begin{defi}\nLet $k$ be a field. A $k$-scheme $X$ is called \\textit{nice} if it is proper, geometrically integral and smooth over $k$. A \\textit{surface} will be a nice $2$-dimensional $k$-scheme.\n\\end{defi}\n\nAll varieties under consideration in this paper will be nice $\\mathbb Q$-surfaces. 
So by the properness we can identify $X(\\mathbb{A}_\\mathbb{Q})$ with $\\prod_{p\\leq \\infty} X(\\mathbb{Q}_p)$ where we write $\\mathbb{Q}_\\infty := \\mathbb R$. We have the diagonal embedding $X(\\mathbb{Q}) \\hookrightarrow X(\\mathbb{A}_\\mathbb{Q})$.\n\nWe will say that a class $\\mathcal S$ of varieties satisfies the \\textit{Hasse principle} when $X(\\mathbb{Q})=\\emptyset$ if and only if $X(\\mathbb{A}_\\mathbb{Q}) = \\emptyset$ for all $X \\in \\mathcal S$. If $X(\\mathbb{Q})$ is dense in $X(\\mathbb{A}_\\mathbb{Q})$ we say that X satisfies \\textit{weak approximation}.\n\n\\subsection*{The Brauer--Manin obstruction}\n\nThe failure of the Hasse principle or weak approximation can be explained using the Brauer group. So let us consider $\\Br X := \\H^2(X_{\\text{\\'et}},\\mathbb{G}_m)$ and its natural subgroup $\\Br_0 X := \\Im(\\Br \\mathbb{Q} \\to \\Br X)$. Since $X$ is nice we have an inclusion $\\Br X \\hookrightarrow \\Br \\kappa(X)$, and we will only be interested in elements $\\mathcal Q \\in \\Br X$ of order $2$. Hence such elements are always represented by a quaternion algebra $(f,g)$ over the function field $\\kappa(X)$ of $X$.\n\nRecall the \\textit{invariant map} $\\inv_v \\mathcal Q \\colon X(\\mathbb{Q}_v) \\to \\mathbb{Q}\/\\mathbb{Z}$ of an element $\\mathcal Q=(f,g) \\in \\Br X[2]$ at a place $v$ \\cite[Thm.~1.5.36]{Poonen}, which are constant for $\\mathcal Q \\in \\Br_0 X$. At a point $P \\in X(\\mathbb{Q})$ for which $f,g\\in \\mathcal O^\\times_{X,P}$ the invariant $\\inv_v \\mathcal Q(P)$ coincides with the Hilbert symbol $(f(P),g(P)) \\in \\{\\pm 1\\}$ after identifying the groups $\\{\\pm 1\\}$ and $\\frac12\\mathbb{Z}\/\\mathbb{Z}$. By abuse of terminology we will say that $\\inv_v\\mathcal Q$ is \\textit{surjective} if $\\# \\Im(\\inv_v \\mathcal Q) = 2$.\n\nFor $\\mathcal Q \\in \\Br X$ we can consider\n\\[\nX(\\mathbb{A}_\\mathbb{Q})^{\\mathcal Q} := \\{(P_v)_v \\in X(\\mathbb{A}_\\mathbb{Q})\\mid \\sum_{v \\leq \\infty} \\inv_v \\mathcal Q(P_v) =0\\}.\n\\]\nBy global reciprocity we have $X(\\mathbb{Q}) \\subseteq X(\\mathbb{A}_\\mathbb{Q})^{\\mathcal{Q}} \\subseteq X(\\mathbb{A}_\\mathbb{Q})$ or even $X(\\mathbb{Q}) \\subseteq X(\\mathbb{A}_\\mathbb{Q})^{\\Br} \\subseteq X(\\mathbb{A}_\\mathbb{Q})$ for $X(\\mathbb{A}_\\mathbb{Q})^{\\Br} := \\bigcap X(\\mathbb{A}_\\mathbb{Q})^{\\mathcal{Q}}$.\n\nIf $X(\\mathbb{A}_\\mathbb{Q}) \\ne \\emptyset$, but $X(\\mathbb{A}_\\mathbb{Q})^{\\Br} = \\emptyset$, then the Hasse principle fails for $X$ and we say there is a \\textit{Brauer--Manin obstruction to the Hasse principle}. If $X(\\mathbb{A}_\\mathbb{Q})^{\\Br} \\subsetneq X(\\mathbb{A}_\\mathbb{Q})$, then $X$ cannot satisfy weak approximation, and there is a \\textit{Brauer--Manin obstruction to weak approximation}.\n\nNote that if an invariant map of $\\mathcal Q$ is surjective, then there is a Brauer--Manin obstruction to weak approximation, but not necessarily to the Hasse principle.\n\n\\subsection*{Quartic del Pezzo surfaces}\n\nWe will be interested in a particular type of surface, namely del Pezzo surface of degree $4$. For a general treatise on general del Pezzo surfaces and their arithmetic the reader is referred to \\cite{arithmeticdps}. 
We will give the following characterisation of quartic del Pezzo surfaces, which we will take as the definition.\n\n\\begin{defi}\nA \\textit{del Pezzo surface of degree $4$} is a surface $X \\subseteq \\mathbb P^4$, hence in particular smooth, given by two quadratic forms $Q=0=\\tilde Q$.\n\\end{defi}\n\nIt is known that the Brauer group of a quartic del Pezzo surface modulo constants is either trivial, of order $2$ or isomorphic to the Klein four-group. Algorithms to determine this isomorphism class and explicit generators are given in both \\cite{BBFL} and \\cite{VAV}. Let us introduce some notation from the latter algorithm to determine if we are dealing with a Brauer group of order $4$.\n\nRepresent the quadratic forms $Q$ and $\\tilde Q$ by symmetric matrices $M$ and $\\tilde M$. For a point $T=(\\kappa \\colon \\lambda) \\in \\mathbb P^1(\\bar{\\mathbb{Q}})$ we define the symmetric matrix $M_T = \\kappa M + \\lambda \\tilde M$, with associated quadratic form $Q_T$ over $\\kappa(T)$. Let $f$ be the discriminant of $M_T$, then $\\mathscr{S}=V(f)\\subseteq\\mathbb{P}^1$ is the locus of the degenerate fibres. For the quadratic forms $Q_T$ of rank $4$ we will need the determinant $\\varepsilon_T$ of its restriction to a linear subspace of codimension $1$ on which it is nondegenerate. The invariant $\\varepsilon_T$ is well defined up to squares.\n\n\\begin{pro}[{\\cite[\\textsection 4.1]{VAV}}]\\label{pro:Brauergrouporder4}\nLet $X \\subseteq \\mathbb P^4$ be a quartic del Pezzo surface. Then $\\#\\Br X\/\\Br_0 X =4$ if and only if there exists an $\\varepsilon\\not\\in\\mathbb{Q}^{\\times, 2}$, and three degree 1 points $T_i \\in \\mathscr{S}$ such that each $Q_{T_i}$ has rank $4$ and satisfies $\\varepsilon\\cdot\\varepsilon_{T_i}\\in\\mathbb{Q}^{\\times, 2}$.\n\\end{pro}\n\n\\section{A Brauer group of order $4$}\n\nFrom now on $X\/\\mathbb Q$ will be a quartic del Pezzo surface anticanonically embedded in $\\mathbb P^4$. Such a surface is the intersection of two quadrics. We will be particularly interested in the $X$ for which the Brauer group modulo constants has order $4$. 
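\n\nThe criterion of Proposition~\\ref{pro:Brauergrouporder4} is easy to test in examples. The following rough computational sketch is included only as an illustration: it assumes the Python library sympy, it computes $\\varepsilon_T$ as a nonzero $4\\times 4$ principal minor of $M_T$ (that is, as the determinant of the restriction of $Q_T$ to a coordinate hyperplane on which it is nondegenerate), and it is written for the surface $Y$ of the introduction. It should report the class of $13$ modulo squares for each of the three rational points of $\\mathscr{S}$ at which $Q_T$ has rank $4$, so that $\\varepsilon =13$ satisfies the criterion.\n\\begin{verbatim}\nfrom sympy import Matrix, Poly, Rational, factorint, roots, symbols\n\nk = symbols('k')\n\ndef gram(coeffs):\n    # Gram matrix of a quadratic form in the variables (u, v, x, y, z);\n    # coeffs maps a pair (i, j) with i <= j to the coefficient of x_i*x_j\n    m = Matrix.zeros(5, 5)\n    for (i, j), c in coeffs.items():\n        if i == j:\n            m[i, j] = c\n        else:\n            m[i, j] += Rational(c, 2)\n            m[j, i] += Rational(c, 2)\n    return m\n\ndef squarefree(q):\n    # class of a nonzero rational number modulo squares\n    q = Rational(q)\n    s = 1\n    for prime, e in factorint(q.p * q.q).items():\n        if e % 2:\n            s *= prime\n    return s\n\n# the surface Y:  y^2 - 13*x^2 = u*v  and  z^2 - 13*x^2 = (2*u - 13*v)*(u - 6*v)\nM1 = gram({(3, 3): 1, (2, 2): -13, (0, 1): -1})\nM2 = gram({(4, 4): 1, (2, 2): -13, (0, 0): -2, (0, 1): 25, (1, 1): -78})\n\n# rational points of the degenerate locus: (1:0), i.e. M1 itself (M1 is singular),\n# together with the rational roots of det(k*M1 + M2)\ndet_pencil = Poly((k * M1 + M2).det(), k)\nfibres = [M1] + [k0 * M1 + M2 for k0 in roots(det_pencil) if k0.is_rational]\n\nfor MT in fibres:\n    assert MT.rank() == 4\n    eps = next(squarefree(MT.minor(i, i)) for i in range(5) if MT.minor(i, i) != 0)\n    print(eps)   # expected output: 13, printed three times\n\\end{verbatim}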
\n\n\\begin{pro}\\label{pro:gen_sys}\nA quartic del Pezzo surface $X$ over $\\mathbb Q$ satisfies $\\Br X\/\\Br \\mathbb Q \\cong (\\mathbb Z\/2\\mathbb Z)^2$ precisely if it can be given by a system of equations of the form\n\\begin{equation}\\label{gen_sys}\n\\begin{cases}\nd_0y^2-\\varepsilon x^2=a_0u^2+2b_0uv+c_0v^2;\\\\\nd_1z^2-\\varepsilon x^2=a_1u^2+2b_1uv+c_1v^2,\n\\end{cases}\n\\end{equation}\nsuch that\n\\begin{enumerate}\n\\item $\\varepsilon\\not\\in\\mathbb{Q}^{\\times, 2}$,\n\\item $d_0=b_0^2-a_0c_0$ and $d_1=b_1^2-a_1c_1$,\n\\item $\\varepsilon d_0d_1d_2\\in \\mathbb{Q}^{\\times, 2}$ where $d_2:=(b_1-b_0)^2-(a_1-a_0)(c_1-c_0)$, and\n\\item the quadratic forms $a_iu^2+2b_iuv+c_iv^2$ do not have a common projective root.\n\\end{enumerate}\n\\end{pro}\n\n\\begin{proof}\nProposition~4.2 in \\cite{VAV} and Proposition~\\ref{pro:Brauergrouporder4} yield that after a linear change of variables the matrices of $Q_{T_i}$ have the following form\n$$\nM_{T_0}=\n\\begin{pmatrix}\na_0 & b_0 & 0 & 0 & 0\\\\\nb_0 & c_0 & 0 & 0 & 0\\\\\n0 & 0 & m_0 & 0 & 0\\\\\n0 & 0 & 0 & n_0 & 0\\\\\n0 & 0 & 0 & 0 & 0\n\\end{pmatrix},\\ \nM_{T_1}=\n\\begin{pmatrix}\na_1 & b_1 & 0 & 0 & 0\\\\\nb_1 & c_1 & 0 & 0 & 0\\\\\n0 & 0 & m_1 & 0 & 0\\\\\n0 & 0 & 0 & 0 & 0\\\\\n0 & 0 & 0 & 0 & k_1\n\\end{pmatrix}\n\\ \n\\text{ and }\n\\ \nM_{T_2}=\n\\begin{pmatrix}\na_2 & b_2 & 0 & 0 & 0\\\\\nb_2 & c_2 & 0 & 0 & 0\\\\\n0 & 0 & 0 & 0 & 0\\\\\n0 & 0 & 0 & n_2 & 0\\\\\n0 & 0 & 0 & 0 & k_2\n\\end{pmatrix}.\n$$\nWe get that\n$$\n\\varepsilon_{T_0}=-d_0m_0 n_0,\\quad\\varepsilon_{T_1}=-d_1m_1 k_1 \\quad \\text{ and } \\quad \\varepsilon_{T_2}=-d_2n_2k_2,\n$$\nwhere $d_i:=b_i^2-a_ic_i$. Since the $M_{T_i}$ are linearly dependent we can assume $M_{T_2}=M_{T_1}-M_{T_0}$. Without loss of generality we can set $m_0=m_1$ equal to $\\varepsilon$. We can still change $n_0$ and $k_1$ independently up to squares, and the condition $\\varepsilon\\cdot\\varepsilon_{T_i}\\in\\mathbb{Q}^{\\times,2}$ allows us to take $n_0=-d_0$ and $k_1=-d_1$. The condition $\\varepsilon d_0d_1d_2\\in\\mathbb{Q}^{\\times,2}$ is then automatically satisfied. The last condition is equivalent to the projective scheme defined by \\eqref{gen_sys} is smooth.\n\nFrom Proposition~\\ref{pro:Brauergrouporder4} it is easy to check that any such system defines a surface with a Brauer group of order $4$.\n\\end{proof}\n\nThe algorithm in \\cite{VAV} even allows us to represent the elements of the Brauer group by explicit quaternion algebras. Given the explicit element of the Brauer group we can now discuss whether the Brauer--Manin obstruction to Hasse principle is the only one. For simplicity we restrict to surfaces over the rational numbers, but the results and proofs in this and the last section equally apply to quartic del Pezzo surfaces with a Brauer group of order $4$ over any number fields. \n\nFor quartic del Pezzo surfaces with a large Brauer group which admit conic fibrations there cannot be a Brauer--Manin obstruction to the Hasse principle.\n\n\\begin{thm}\\label{thm:conicfibrations}\nLet $X\/\\mathbb Q$ be a quartic del Pezzo surface with $\\# \\Br X\/\\Br_0 X =4$. If $X$ admits a conic fibration $X \\dashrightarrow \\mathbb P^1$ then $X(\\mathbb Q)\\neq \\emptyset$.\n\\end{thm}\n\n\\begin{proof}\nSince $X$ is a nice variety over a number field we conclude from the Hochschild--Serre spectral sequence \\cite[Cor.~6.7.8]{Poonen} that $\\Br X\/\\Br_0 X \\cong \\H^1(G_{\\mathbb Q}, \\Pic \\bar X)$. 
We will need to understand the Galois action on the geometric Picard group.\n\nFor a general quartic del Pezzo surface the action of the absolute Galois group $G_{\\mathbb Q}$ on $\\Pic \\bar X$ factors through a subgroup $H \\subseteq W_5$. Here $W_5$ is the finite group Weyl group which acts on $\\Pic \\bar X$, see \\cite[\\textsection 1.5]{arithmeticdps}. Since $\\Pic \\bar X$ is torsion-free we conclude from the inflation-restriction sequence that $\\H^1(G_{\\mathbb Q}, \\Pic \\bar X) \\cong \\H^1(H, \\Pic \\bar X)$. For each of the 196 different subgroups of $W_5$ (up to conjugation) we can compute the first cohomology group directly and we find that only four subgroups yield a Brauer group of order $4$.\n\nAgain using the Hochschild--Serre spectral sequence we see that $\\Pic X \\hookrightarrow (\\Pic \\bar X)^{G_{\\mathbb Q}}$ is injective. For the four relevant Galois actions on $\\Pic \\bar X$ we see that $(\\Pic \\bar X)^{G_{\\mathbb Q}}$ has rank $1$ in three cases, and rank $2$ in one case.\n\nIn the case of rank $1$ we conclude that $\\Pic X$ is free of rank $1$, since it contains the canonical class. If $X$ would admit a conic fibration then the class of the fibre should be a (rational) multiple of the canonical class, and have self-intersection $0$. Clearly a contradiction.\n\nIn the case of rank $2$ we find that the action of Galois permutes two exceptional curves, whose intersection pairing equals $1$. Hence their intersection point is defined over the base field.\n\\end{proof}\n\n\\begin{rema} More precisely, in the latter case there are multiple pairs of such intersecting lines which sum to the same class in $\\Pic X$. One can conclude from this that among the quartic del Pezzo surfaces with a Brauer group of order $4$ the following statements are equivalent.\n\\begin{enumerate}\n\\item $\\rk \\Pic X =2$.\n\\item $X$ contains two conjugate intersecting lines.\n\\item $X$ admits a conic fibration.\n\\end{enumerate}\n\\end{rema}\n\nFor general maps $f\\colon X\\dashrightarrow \\mathbb{P}^1$ the following notion will be very useful. We say that $\\Br X$ is \\textit{vertical} with respect to $f$ if $\\Br X\\subseteq f^*(\\Br \\kappa(\\mathbb{P}^1))$.\n\nFor quartic del Pezzo surfaces with vertical Brauer groups there is a method to prove that $X(\\mathbb{A}_\\mathbb{Q})^{\\Br}\\neq\\emptyset$ \nimplies $X(\\mathbb{Q})\\neq\\emptyset$ (see \\cite{WittenbergBook} and \\cite{CTSSD}). V\u00e1rilly-Alvarado and Viray proved in \\cite{VAV} that the Brauer group is vertical for certain quartic del Pezzo surfaces, and they concluded that the Brauer--Manin obstruction to Hasse principle is the only one for these surface of ``BSD-type''.\n\nNext they considered quartic del Pezzo surface with a Brauer group of order $4$. First they noted that if $X(\\mathbb{Q})\\neq \\emptyset$, then $\\Br X$ is vertical with respect to a specific map. Next they asked if $X(\\mathbb{Q})= \\emptyset$ whether the Brauer group can be vertical, in particular for rational maps $f \\colon X \\dashrightarrow \\mathbb P^1$ obtained from projecting away from a plane. They suggested the following approach.\n\nLet $T_i$ be the three degree 1 points of $\\mathscr{S}$, see Proposition~\\ref{pro:Brauergrouporder4}. For $P_i\\in V(Q_{T_i})(\\mathbb{Q})$, denote $M(P_0,P_1,P_2)$ the matrix whose $(i,j)$th entry is $\\frac{\\partial Q_{T_i}}{\\partial x_j}(P_i)$. 
Then take a hyperplane $H\\subseteq \\mathbb{P}^4$ which does not pass through the vertices of $V(Q_{T_i})$, and define\n$$\nY^{(X)}=\\{(P_0,P_1,P_2)\\in H^3\\mid \\rnk M(P_0,P_1,P_2)\\le 2 \\text{ and } Q_{T_i}(P_i)=0\\}.\n$$\nThere is an inclusion $X(\\mathbb{Q})\\hookrightarrow Y^{(X)}(\\mathbb{Q})$, which is described in \\cite{VAV}.\n\nIf we can find a rational point on $Y^{(X)}$, then we obtain the verticality of $\\Br X\/\\Br\\mathbb{Q}$ with respect to a certain map $f$. We already know that if $X(\\mathbb{Q})\\neq \\emptyset$, then $Y^{(X)}(\\mathbb{Q})\\neq\\emptyset$. The question is, whether there is a point in $Y^{(X)}(\\mathbb{Q})$ which does not come from a point in $X(\\mathbb{Q})$ (see \\cite{VAV}, Question~6.3). Here we show that the answer is no.\n\n\\begin{defi}\nWe define a point $P=(u \\colon v \\colon x \\colon y \\colon z) \\in \\mathbb P^4(\\mathbb{Q})$ to be equivalent to the points $(u \\colon v \\colon \\pm x \\colon \\pm y \\colon \\pm z)$. This induces an equivalence relation $\\sim$ on $Y^{(X)}(\\mathbb{Q})$.\n\\end{defi}\n\nIn other words, two points $\\mathcal P = (P_0,P_1,P_2)$ and $\\mathcal P' = (P'_0,P'_1,P'_2)$ are equivalent precisely if the $P_i$ and $P'_i$ are the same projective point up to a possible change of sign in the last three coordinates.\n\n\nConsider a point $\\mathcal P \\in Y^{(X)}(\\mathbb{Q})$. Although $\\mathcal P$ need not lie the image of the map $X(\\mathbb{Q})\\hookrightarrow Y^{(X)}(\\mathbb{Q})$, we now show that it is equivalent to a point that does.\n\n\\begin{thm}\\label{thm:nonewpoints}\nThe composite map\n$$\nX(\\mathbb{Q})\\hookrightarrow Y^{(X)}(\\mathbb{Q})\\longrightarrow Y^{(X)}(\\mathbb{Q})\/\\sim\n$$\nis surjective.\n\\end{thm}\n\n\\begin{proof}\nWe will represent $X$ by a system of equations as in \\eqref{gen_sys}. In addition, we can assume without loss of generality $a_0 \\ne 0$ and hence after completing the square that $b_0=0$. This implies that $d_0 = -a_0c_0$ and we conclude that also $c_0 \\ne 0$.\n\nConsider the point $(P_0,P_1,P_2)\\in Y^{(X)}(\\mathbb{Q})$ where $P_i=(u_i:v_i:x_i:y_i:z_i)$. Our goal is to prove that $(P_1,P_2,P_3)\\sim (P,P,P)$ for some $P\\in X(\\mathbb{Q})$. By the definition of $Y^{(X)}$ using $T_0=(1:0)$, $T_1=(0:1)$ and $T_2=(-1:1)$ as in the proof of Proposition~\\ref{pro:gen_sys}, we have\n$$\n\\mathrm{rank\\,}\n\\begin{pmatrix}\n2a_0u_0 & 2c_0v_0 & 2\\varepsilon x_0 & -2d_0y_0 & 0 \\\\\n2a_1u_1+2b_1v_1 & 2b_1u_1+2c_1v_1 & 2\\varepsilon x_1 & 0 & -2d_1z_1\\\\\n2(a_1-a_0)u_2+2b_1v_2 & 2b_1u_2+2(c_1-c_0)v_2 & 0 & 2d_0y_2 & -2d_1z_2\n\\end{pmatrix}\n=2.\n$$\nWrite $\\ell_i \\in\\mathbb{Q}^5$ for the rows of the above matrix. The condition on the rank implies the existence of a non-trivial relation $\\kappa\\ell_0+\\lambda \\ell_1+\\mu \\ell_2=0$. After multiplying the coordinates of $P_0$, $P_1$ and $P_2$ by respectively $\\kappa$, $-\\lambda$ and $\\mu$, we obtain $\\ell_0-\\ell_1+\\ell_2=0$. 
This implies \n\\begin{equation}\\label{xyz}\nx_0=x_1,\\ y_0=y_2\\ \\text{ and }\\ z_1=z_2,\n\\end{equation} \nand the first two columns give us the relations\n\\begin{equation}\\label{subst}\nu_0=\\frac{1}{a_0}\\Bigl(a_1u_1+b_1v_1-(a_1-a_0)u_2-b_1v_2\\Bigr)\n\\quad \\text{ and } \\quad\nv_0=\\frac{1}{c_0}\\Bigl(b_1u_1+c_1v_1-b_1u_2-(c_1-c_0)v_2\\Bigr).\n\\end{equation}\nCondition $Q_{T_i}(P_i)=0$ gives us\n\\begin{multline}\\label{quadr}\n(a_0u_0^2+c_0v_0^2)-(a_1u_1^2+2b_1u_1v_1+c_1v_1^2)+((a_1-a_0)u_2^2+2b_1u_2v_2+(c_1-c_0)v_2^2)=\\\\\n(d_0y_0^2-\\varepsilon x_0^2)-(d_1z_1^2-\\varepsilon x_1^2)+(d_1z_2^2-d_0y_2^2)=0.\n\\end{multline}\nSubstituting \\eqref{subst} into \\eqref{quadr}, we obtain the following:\n\\begin{multline}\\label{quad_form}\n(a_0^2c_0+b_0^2a_0-a_1a_0c_0)(u_1-u_2)^2+2(a_1b_1c_0+b_1c_1a_0-b_1a_0c_0)(u_1-u_2)(v_1-v_2)+\\\\\n(b_1^2c_0+c_1^2a_0-c_1a_0c_0)(v_1-v_2)^2=0.\n\\end{multline}\nThis is a quadratic form in $u_1-u_2$ and $v_1-v_2$ whose determinant equals \n$$\n-a_0c_0(b_1^2-a_1c_1)(b_1^2-(a_1-a_0)(c_1-c_0))=d_0d_1d_2.\n$$ \nFrom $\\varepsilon\\notin\\mathbb{Q}^{\\times,2}$ and $\\varepsilon d_0d_1d_2\\in\\mathbb{Q}^{\\times,2}$ we conclude $d_0d_1d_2\\notin\\mathbb{Q}^{\\times,2}$, so \n\\eqref{quad_form} implies $u_1-u_2=0=v_1-v_2$. Then \\eqref{subst} implies that $u_0=u_1=u_2$ and $v_0=v_1=v_2$.\n\nFinally, conditions \\eqref{xyz} and $Q_{T_i}(P_i)=0$ give us $x_0=x_1=\\pm x_2$, $y_0=y_2=\\pm y_1$ and $z_1=z_2=\\pm z_0$, \nwhich means that $P_i \\in X(\\mathbb{Q})$ and $P_0 \\sim P_1 \\sim P_2$ which proves the proposition.\n\\end{proof}\n\n\nWe deduce that the answer to Question~6.3 in \\cite{VAV} is no. We conclude that one cannot directly apply to machinery of \\cite{CTSSD} and \\cite{WittenbergBook} to prove there are rational points on quartic del Pezzo surfaces with a Brauer group of order $4$.\n\n\\section{A subfamily}\n\nWe will study the arithmetic of quartic del Pezzo surfaces with a Brauer group of order $4$. In \\cite{MS} such surfaces appeared, but those all had a rational point for obvious geometrical reasons; on these surfaces there is a pair of intersecting lines which, as a pair, are fixed by the Galois action. We will restrict to a subfamily for which such behaviour does occur.\n\n\\begin{defi}\nLet $\\mathcal X$ be the set of isomorphism classes of quartic del Pezzo surfaces for which the two quadratic forms $Q_i=a_iu^2+b_iuv+c_iv^2$ split over $\\mathbb Q$, and $\\varepsilon$ is an odd prime number $p$.\n\\end{defi}\n\n\\begin{pro}\\label{pro:sys_subfam}\nAny $X \\in \\mathcal X$ can be written as\n\\begin{equation}\\label{sys}\n\\begin{cases}\ny^2-px^2=Muv;\\\\\nz^2-px^2=(Au+Bv)(Cu+Dv),\n\\end{cases}\n\\end{equation}\nfor $A,B,C,D,M,N\\in\\mathbb{Z}$ which satisfy\n\\begin{enumerate}\n\\item[\\textbf{(C1)}] $(AD+BC-M)^2-4ABCD=pN^2$, and\n\\item[\\textbf{(C2)}] $ NM(AD-BC)\\neq 0$.\n\\end{enumerate}\n\\end{pro}\n\nCondition (C1) is a restatement of $\\varepsilon d_0d_1d_2\\in\\mathbb{Q}^{\\times,2}$ and (C2) is equivalent to $X$ being smooth. \n\n\n\\begin{proof}\nIt is an immediate consequence of Proposition \\ref{pro:gen_sys}.\n\\end{proof}\n\nTo study the members of this family which fail the Hasse principle we will first consider the question of local solubility. For places $v$ such that $p\\in\\mathbb{Q}_v^{\\times,2}$ we have $(0\\colon 0\\colon 1\\colon \\sqrt{p}\\colon \\sqrt{p})\\in X(\\mathbb{Q}_v)$, so $X(\\mathbb{Q}_v)\\neq\\emptyset$. 
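\n\nFor instance, the surfaces $Y$ and $S$ of the introduction lie in this family with $p=13$ and $(M,A,B,C,D)=(1,2,-13,1,-6)$, respectively $(1,1,1,153,179)$. A minimal script checking (C1) and (C2) directly (plain Python, included only as an illustration) is\n\\begin{verbatim}\nfrom math import isqrt\n\ndef in_family(p, M, A, B, C, D):\n    # (C1): (AD + BC - M)^2 - 4ABCD = p*N^2 for some integer N\n    # (C2): N*M*(AD - BC) != 0, i.e. smoothness\n    lhs = (A * D + B * C - M) ** 2 - 4 * A * B * C * D\n    if lhs <= 0:\n        return False\n    quot, rem = divmod(lhs, p)\n    if rem:\n        return False\n    N = isqrt(quot)\n    return N * N == quot and N * M * (A * D - B * C) != 0\n\nprint(in_family(13, 1, 2, -13, 1, -6))    # Y: True, with N = 2\nprint(in_family(13, 1, 1, 1, 153, 179))   # S: True, with N = 1\n\\end{verbatim}\nIn particular $N=2$ for $Y$ and $N=1$ for $S$.\n\n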
The following proposition covers all but two of the other places.\n\n\n\\begin{pro}\\label{pro:loc_sol_subfam}\nConsider $X=X_{p,A,B,C,D,M,N}$. Let $v\\not \\in \\{2,p\\}$ be a place such that $p\\notin\\mathbb{Q}_v^{\\times,2}$ and $v\\nmid N$. Then $X(\\mathbb{Q}_v)\n\\neq\\emptyset$.\n\\end{pro}\n\n\\begin{proof}\nObviously $v\\neq \\infty$, so it corresponds to a prime number $q$. The Chevalley--Warning theorem yields that the system \\eqref{sys} has a non-trivial solution $(u,v,x,y,z)\\in\\mathbb{F}_q^5$. Let $(u_0,v_0,x_0,y_0,z_0)$ be an arbitrary lift to $\\mathbb{Z}_q^5$. We distinguish four cases.\n\n\\noindent\\textbf{Case 1.} $q \\nmid y_0,z_0$. In this case take \n$$\ny_1=\\sqrt{Mu_0v_0+px_0^2}\\in\\mathbb{Z}_q,\\quad z_1=\\sqrt{(Au_0+Bv_0)(Cu_0+Dv_0)+px_0^2}\\in\\mathbb{Z}_q,\n$$ \nthen $(u_0\\colon v_0\\colon x_0\\colon y_1\\colon z_1)\\in X(\\mathbb{Q}_q)$.\n\n\n\n\\noindent\\textbf{Case 2.} $q \\mid y_0$ and $q \\nmid x_0,z_0$. Take \n$$\nx_1=\\sqrt{-\\tfrac{M}{p}u_0v_0}\\in\\mathbb{Z}_q,\\quad z_1=\\sqrt{(Au_0+Bv_0)(Cu_0+Dv_0)+px_1^2}\\in\\mathbb{Z}_q,\n$$ \nthen $(u_0\\colon v_0\\colon x_1\\colon 0\\colon z_1)\\in X(\\mathbb{Q}_q)$.\n\n\\noindent\\textbf{Case 3.} $q \\mid x_0, y_0$ and $q \\nmid z_0$. We deduce that $q \\mid u_0v_0$. If $q \\mid u_0$, take $(0\\colon v_0\\colon 0\\colon 0\\colon \\sqrt{BDv_0^2})\\in X(\\mathbb{Q}_q)$. If on the other hand $q \\mid v_0$, take $(u_0\\colon 0\\colon 0\\colon 0\\colon \\sqrt{ACu_0^2})\\in X(\\mathbb{Q}_q)$.\n\n\\noindent\\textbf{Case 4.} $q \\mid y_0, z_0$. Subtracting the two equations in \\eqref{sys} yields\n$$\nACu^2+(AD+BC-M)uv+BDv^2\\equiv 0\\mod q,\n$$\nwhich can be rewritten to\n$$\n(2ACu+(AD+BC-M)v)^2 \\equiv (AD+BC-M)^2v^2-4ABCDv^2 \\equiv pN^2v^2 \\mod q,\n$$\nand equivalently\n$$\n(2BDv+(AD+BC-M)u)^2 \\equiv pN^2u^2 \\mod q.\n$$\nFrom $\\left(\\frac{p}{q}\\right)=-1$ and $q\\nmid N$ we obtain $u\\equiv v\\equiv 0\\mod q$, and \\eqref{sys} gives us $x\\equiv 0\\mod q$. Contradicting the non-triviality of $(u,v,x,y,z)\\in\\mathbb{F}_q^5$.\n\\end{proof}\n\n\n\n\n\nMost of our examples will satisfy $N=1$ so $X$ will be locally soluble everywhere away from $2$ and $p$.\n\nThe algorithm in \\cite[\\textsection 4.1]{VAV} allows to write down the explicit elements of $\\Br X\/\\Br \\mathbb{Q}$. Choosing $P_{T_0}=(0\\colon 1\\colon 0\\colon 0\\colon 0)\\in Q_{T_0}(\\mathbb{Q})$, $P_{T_1}=(-B\\colon A\\colon 0\\colon 0\\colon 0)\\in Q_{T_1}(\\mathbb{Q})$ and $P_{T_2}=(0\\colon 0\\colon 0\\colon 1\\colon 1)\\in Q_{T_2}(\\mathbb{Q})$, we obtain\n\\begin{equation}\\label{Br_subfam}\n\\Br X\/\\Br \\mathbb{Q}=\\left\\{\\id, \\left(p,\\frac{u}{Au+Bv}\\right), \\left(p,\\frac{z-y}{u}\\right), \\left(p,\\frac{Au+Bv}{z-y}\n\\right)\\right\\}.\n\\end{equation}\n\n\n\\begin{defi}\\label{defi:ABC}\nFor any $X\\in\\mathcal{X}$ given by \\eqref{sys}, we will write $\\mathcal{A}=\\left(p,\\frac{u}{Au+Bv}\\right)$, $\\mathcal{B}=\\left(p,\\frac{z-y}{u}\\right)$ and $\\mathcal{C}=\\left(p,\\frac{Au+Bv}{z-y}\\right)$ and assume the dependency on $X$ to be understood.\n\\end{defi}\n\n\\begin{rema}\nThe above notation actually depends on the representation \\eqref{sys} we chose for the equivalence class $X\\in\\mathcal{X}$. 
One can see that the change of variables $\\tilde{u}=\\frac{Au+Bv}{AD-BC}$, $\\tilde{v}=\\frac{Cu+Dv}{AD-BC}$, $\\tilde{y}=z$, $\\tilde{z}=y$ provides another representation of $X$ in the form \\eqref{sys}, but with possibly different coefficients, and $\\mathcal{B}$ for the new representation is exactly $\\mathcal{C}$ for the old one. \nHowever, one can show that $\\mathcal{A}$ does not depend on the representation of $X$.\n\n\n\n\\end{rema}\n\n\n\n\\begin{rema}\nOne can see that the classes $\\mathcal{A}$ and $\\mathcal{B}$ are also represented by\n\\begin{equation}\\label{Br_gr_rewr}\n\\mathcal{A}=\\left(p,\\frac{Mv}{Au+Bv}\\right) \\quad \\text{ and } \\quad\n\\mathcal{B}=\\left(p,\\frac{AC(z+y)}{u}\\right).\n\\end{equation}\n\\end{rema}\n\n\n\n\n\n\\section{Simultaneous obstructions to the Hasse principle}\n\nWe use the notation from the previous section. In particular, $X$ is a quartic del Pezzo surface given by explicit equations as in Proposition~\\ref{pro:sys_subfam}. We then see from \\eqref{Br_subfam} and Definition~\\ref{defi:ABC} that\n\\[\n\\Br X\/\\Br \\mathbb Q = \\{1,\\mathcal A, \\mathcal B, \\mathcal C\\}.\n\\]\nWe will concern ourselves with the Brauer--Manin obstruction to weak approximation and the Hasse principle. The first results is on weak approximation.\n\n\\begin{thm}\\label{thm:failureWA}\nAny $X \\in \\mathcal X$ fails weak approximation.\n\\end{thm}\n\nThe second result is about the Hasse principle and seems related to the following fact by Colliot-Th\\'el\\`ene and Poonen \\cite[Rem.~2 following Lem.~3.4]{CTPoonen} on quartic del Pezzo surfaces: suppose that $X(\\mathbb A_\\mathbb{Q})^{\\Br} = \\emptyset$ then there exists an element $\\mathcal Q \\in \\Br X$ such that\n $X(\\mathbb A_\\mathbb{Q})^{\\mathcal Q} = \\emptyset$. However, neither result implies the other.\n\n\\begin{thm}\\label{thm:curlyABorC}\nOnly one of $\\mathcal A$, $\\mathcal B$ and $\\mathcal C$ can give an obstruction to the Hasse principle on $X \\in \\mathcal X$.\n\\end{thm}\n\nTheorem~\\ref{thm:failureWA} and Theorem~\\ref{thm:curlyABorC} are direct consequences of the following proposition.\n\n\\begin{pro}\\label{prop:surjectiveinvariantmap}\nConsider an $X\\in\\mathcal{X}$. Assume $X(\\mathbb{Q}_p)\\neq\\emptyset$, then the invariant map at $p$ of $\\mathcal{A}$, $\\mathcal{B}$ or $\\mathcal{C}$ is surjective.\n\\end{pro}\n\nWe will need the following lemmas.\n\n\\begin{lem}\\label{lem:quadres}\nLet $p\\equiv 1\\mod 4$ be a prime. Consider the set $S(a,b)=\\{a+by\\mid y\\in\\mathbb{F}_p^{\\times,2}\\}$ for $a,b\\in\\mathbb{F}_p^\\times$. Denote $S(a,b)_\\zeta=\\{x\\in S(a,b)\\mid \\left( \\frac xp\\right)=\\zeta\\}$. \n\\begin{enumerate}\n\\item[(a)] If $a,b\\in\\mathbb{F}_p^{\\times,2}$, then $|S(a,b)_0|=1$, $|S(a,b)_1|=\\dfrac{p-5}{4}$ and $|S(a,b)_{-1}|=\\dfrac{p-1}{4}$.\n\\item[(b)] If $a\\in\\mathbb{F}_p^{\\times,2}$, $b\\notin\\mathbb{F}_p^{2}$, then $|S(a,b)_0|=0$, $|S(a,b)_1|=\\dfrac{p-1}{4}$ and $|S(a,b)_{-1}|=\\dfrac{p-1}{4}$.\n\\end{enumerate}\nIn particular, for $a\\in\\mathbb{F}_p^{\\times,2}$ we conclude that $S(a,b)$ contains a non-square.\n\\end{lem}\n\n\\begin{proof}\nBoth parts follow from the identity $\\sum_{x\\in\\mathbb{F}_p} (\\frac{a+bx^2}{p})=-(\\frac{b}{p})$ where $a,b\\in\\mathbb{F}_p^\\times$.\n\\end{proof}\n\n\n\\begin{lem}\\label{lem:quadres_table}\nLet $p\\equiv 1\\mod 4$ be a prime and fix $a,b,c,d\\in\\mathbb{F}_p^{\\times,2}$. 
Then there exists a $y_0\in\mathbb{F}_p^\times$ such that either\n\begin{enumerate}\n\item[(i)] $a+by_0^2,\ c+dy_0^2\in\mathbb{F}_p^{\times}\setminus\mathbb{F}_p^{\times,2}$, or\n\item[(ii)] $a+by_0^2\in \mathbb{F}_p^{\times}\setminus\mathbb{F}_p^{\times,2}$ and $c+dy_0^2=0$.\n\end{enumerate}\n\end{lem}\n\n\begin{proof}\nApply statement (a) of Lemma~\ref{lem:quadres} to $S(a,b)$ and $S(c,d)$.\n\end{proof}\n\n\n\begin{proof}[Proof of Proposition~\ref{prop:surjectiveinvariantmap}]\nWe will first make a few reductions.\n\nFirst suppose that $p \mid M, A, C$. Then the system \eqref{sys} is equivalent to the one with $M$, $A$ and $C$ divided by $p$ (under the morphism $(u\colon v\colon x\colon y\colon z)\mapsto(pu\colon v\colon x\colon y\colon z)$). Also if $p^2 \mid M, A, B$ then we obtain an equivalent system by dividing these three coefficients by $p^2$ (by rescaling $x$, $y$ and $z$). This shows that we can restrict to the cases where\n\begin{equation}\label{gcd_cond_1}\n p \nmid \gcd(A,C,M) \text{ and } p \nmid \gcd(B,D,M),\n\end{equation}\nand\n\begin{equation}\label{gcd_cond_2}\np^2\nmid\gcd(A,B,M) \text{ and } p^2\nmid \gcd(C,D,M).\n\end{equation}\n\nConsider the case $p \equiv 3 \mod 4$. For a point $P=(u\colon v \colon x \colon y \colon z) \in X(\mathbb Q_p)$ we compute the invariant map at $P$ and $P'=(u\colon v \colon x \colon -y \colon -z)$. We see that\n\[\n\inv_p \mathcal{B}(P) \ne \inv_p \mathcal{B}(P')\n\]\nand this proves the proposition in this case.\n\nNow we can assume \n\begin{equation}\label{p_1_mod4}\np\equiv 1\mod 4.\n\end{equation}\nCondition (C1) implies $(p,ABCD)_p=1$. Consider the case $(p,AC)_p=(p,BD)_p=-1$. For a point $P=(u\colon v\colon x\colon y\colon z)\in X(\mathbb{Q}_p)$, we define $P'=(u\colon v\colon x\colon -y\colon z)\in X(\mathbb{Q}_p)$. One can see that $\inv_p\mathcal{B}(P)\neq\inv_p\mathcal{B}(P')$ using \eqref{Br_gr_rewr}, and this proves the proposition in this case.\n\nSo now we can assume\n\begin{equation}\label{hilbert_cond}\n(p,AC)_p=(p,BD)_p=1.\n\end{equation}\n\nFor equations satisfying (C1) and (C2), and the additional assumptions \eqref{gcd_cond_1} up to \eqref{hilbert_cond} we consider different cases depending on the valuation of $M$ and $m = \max\{v_p(A),v_p(B),v_p(C),v_p(D)\}$. 
Without loss of generality we can assume that $p^m \\mathrel{\\|} A$ and $p^{m+1} \\nmid B,C,D$.\n\nIn the first two cases we show that the invariant map of $\\mathcal{B}$ at $p$ is surjective.\n\\begin{enumerate}\n\\item[\\textbf{Case 1.}] $\\mathrm{v}_p(M)=0$ and $m\\geq 1$\n\\item[\\textbf{Case 2.}] $\\mathrm{v}_p(M)=1$ and $m=1$\n\\end{enumerate}\n In the next two cases we show that the invariant map of $\\mathcal{A}$ at $p$ is surjective.\n\\begin{enumerate}\n\\item[\\textbf{Case 3.}] $\\mathrm{v}_p(M)=0$ and $m=0$\n\\item[\\textbf{Case 4.}] $\\mathrm{v}_p(M)$ is odd and $m=0$\n\\end{enumerate}\nIn the last four cases we can make a change of variables to reduce to one of the previous cases.\n\\begin{enumerate}\n\\item[\\textbf{Case 5.}] $\\mathrm{v}_p(M)=1$ and $m\\geq 2$\n\\item[\\textbf{Case 6.}] $\\mathrm{v}_p(M) \\geq 3$ is odd and $m \\geq 1$\n\\item[\\textbf{Case 7.}] $\\mathrm{v}_p(M) \\geq 2$ is even and $m=0$\n\\item[\\textbf{Case 8.}] $\\mathrm{v}_p(M) \\geq 2$ is even and $m\\geq 1$\n\\end{enumerate}\n\n\\noindent\n\\textbf{Surjectivity for $\\mathcal{B}$}.\nTo prove $\\inv_p\\mathcal B$ is surjective we will find points $P_i=(u_i\\colon v_i\\colon x_i\\colon y_i\\colon z_i)\\in X(\\mathbb{Q}_p)$ such that \n$$\n\\left(\\frac{u_1(z_1-y_1)}{p}\\right)\\left(\\frac{u_2(z_2-y_2)}{p}\\right)=-1,\n$$ \nsince this implies $\\inv_p\\mathcal{B}(P_1)+\\inv_p\\mathcal{B}(P_2)=\\frac{1}{2}$.\n\n\\noindent\\textbf{Case 1.} Since $p \\mid A$, condition (C1) implies that \n$\nBC\\equiv M\\not\\equiv 0\\mod p.\n$\nHence $\\mathrm{v}_p(B)=\\mathrm{v}_p(C)=0$.\n\n\\textbf{Case 1a.} If $\\mathrm{v}_p(D)=0$ then \\eqref{p_1_mod4} and \\eqref{hilbert_cond} imply $-BD\\in\\mathbb{Z}_p^{\\times, 2}$.\nChoose $r\\in\\mathbb{Z}_p^{\\times}$ such that $r\\sqrt{-BD}\\notin\\mathbb{Z}_p^{\\times, 2}$. Consider $y_2=\\frac{1}{2}\\left(\\frac{BD}{r}-r\n\\right)$ and note that\n\\begin{equation*}\n-\\frac{DM}{C}\\equiv -BD\\mod p\n\\text{\\quad and\\quad}\n\\left(\\frac{A}{M}y_2^2+B\\right)\\left(\\frac{C}{M}y_2^2+D\\right)\\equiv y_2^2 + BD\\equiv \\frac{1}{4}\\left(r+\\frac{BD}{r}\\right)^2 \\not\\equiv 0\\mod p.\n\\end{equation*}\nThen take \n$$\nP_1=\\left(-\\frac DC \\colon 1 \\colon 0\\colon \\sqrt{-\\dfrac{DM}C}\\colon 0\\right) \\; \\text{ and } \\; P_2=\\left(\\frac{y_2^2}{M}\\colon 1\\colon 0\\colon y_2\\colon \\sqrt{\\left(\\frac{A}{M}y_2^2+B\\right)\\left(\\frac{C}{M}y_2^2+D\\right)}\\right)\\in X(\\mathbb{Q}_p).\n$$\n\n\\textbf{Case 1b.} If $\\mathrm{v}_p(D)\\geq 1$ then take $y_1\\in\\mathbb{Z}_p^{\\times, 2}$ and $y_2\\notin \\mathbb{Z}_p^{\\times, 2}$.\nThen\n\\begin{equation}\n\\left(\\frac{A}{M}y_i^2+B\\right)\\left(\\frac{C}{M}y_i^2+D\\right)\\equiv \\frac{BC}{M}y_i^2\\equiv y_i^2\\mod p,\n\\end{equation}\nso we can choose $z_i\\in\\mathbb{Z}_p$ such that $z_i\\equiv -y_i\\mod p$ and $z_i^2=\\left(\\frac{A}{M}y_i^2+B\\right)\\left(\\frac{C}{M}y_i^2+D\\right)$.\nNow take \n$$\nP_i=\\left(1\\colon \\frac{y_i^2}{M}\\colon 0\\colon y_i\\colon z_i\\right)\\in X(\\mathbb{Q}_p).\n$$\n\n\n\n\\noindent\\textbf{Case 2.} In this case we have $p \\mathrel{\\|} M$ and $p \\mathrel{\\|} A$ and $p^2 \\nmid B,C,D$. From \\eqref{gcd_cond_1} we get $\\mathrm{v}_p(C)=0$, so condition (C1) implies $p \\mid B$. From $p^2 \\nmid B$, \\eqref{gcd_cond_1} and (C1) we conclude in order that $\\mathrm{v}_p(B)=1$, $\\mathrm{v}_p(D)=0$ and $\\mathrm{v}_p(N)\\ge 1$.\n\nIntroduce $A=pA'$, $B=pB'$, $M=pM'$ and $N=pN'$. Then (C1) becomes $(A'D+B'C-M')^2-4A'B'CD=pN'^2$ and we conclude $\\mathrm{v}_p(A'D+B'C-M)=0$. 
\nOur system is equivalent to\n$$\n\\begin{cases}\n-x^2+\\dfrac{y^2}{p}=M'uv;\\\\\n\\dfrac{4A'C}{p}(z^2-y^2)=\\bigl(2A'Cu+(B'C+A'D-M')v\\bigr)^2-pN'^2v^2.\n\\end{cases}\n$$\n\\indent\\textbf{Case 2a.} Suppose that $M'\\cdot\\frac{B'C+A'D-M'}{2A'C}\\in\\mathbb{Z}_p^{\\times 2}$. \nChoose $r_1\\in\\mathbb{Z}_p^{\\times 2}$ and $r_2\\not\\in\\mathbb{Z}_p^{\\times 2}$ and define \n$$\ny_i=\\frac{1}{2}\\left(\\frac{A'C}{r_i}-r_i\\right)N',\\quad z_i=\\frac{1}{2}\\left(\\frac{A'C}{r_i}+r_i\\right)N'\\quad \\text{ and } \\quad x_i=\\sqrt{M'(B'C+A'D-M')\\cdot 2A'C+py_i^2}.\n$$ \nNow we can just take\n$\nP_i=(-(B'C+A'D-M')\\colon 2A'C\\colon x_i\\colon py_i\\colon pz_i)\\in X(\\mathbb{Q}_p).\n$\n\n\\textbf{Case 2b.} Now suppose that $M'\\cdot\\frac{B'C+A'D-M'}{2A'C}\\notin\\mathbb{Z}_p^{\\times 2}$. Let $(u:v:x:y:z)\\in X(\\mathbb{Q}_p)$ be represented by a primitive tuple in $\\mathbb{Z}_p$. Obviously, $\\mathrm{v}_p(y),\\mathrm{v}_p(z)\\ge 1$ so write $y=py_1$ and $z=pz_1$. Then \n$$\n(2A'Cu+(B'C+A'D-M')v)^2-pN'^2v^2=4A'Cp(z_1^2-y_1^2),\n$$ \nhence $u\\equiv -\\frac{B'C+A'D-M'}{2A'C} v\\mod p$. This turns the first equation into $M'\\cdot\\frac{B'C+A'D-M'}{2A'C} \nv^2\\equiv x^2\\mod p$. It is possible only if $u\\equiv v\\equiv x\\equiv 0\\mod p$. This contradicts the assumption $X(\\mathbb{Q}_p) \\ne \\emptyset$.\n\\bigskip\n\n\\noindent\\textbf{Surjectivity for $\\mathcal{A}$.}\nIn Cases 3 and 4 we apply the same technique for proving that $\\inv_p \\mathcal A$ is surjective.\n\n\\noindent\\textbf{Case 3.} By assumption $p \\nmid MABCD$. Since $(p,AC)_p=(p,BD)_p=1$, we have $AC,BD\\in \\mathbb{Z}_p^{\\times, 2}$.\n\n\n\\textbf{Case 3a.} If $ABM\\notin \\mathbb{Z}_p^{\\times, 2}$ then $\\mathcal A$ has different invariants at the points\n$$\nP_1=(1\\colon 0\\colon 0\\colon 0\\colon \\sqrt{AC})\\in X(\\mathbb{Q}_p)\\quad \\text{ and } \\quad P_2=(0\\colon 1\\colon 0\\colon 0\\colon \\sqrt{BD})\\in X(\\mathbb{Q}_p).\n$$\nas one can see using \\eqref{Br_gr_rewr}.\n\n\n\\textbf{Case 3b.} Now let $ABM\\in\\mathbb{Z}_p^{\\times, 2}$. It follows from Lemma \\eqref{lem:quadres_table} that there exists a $y_0\\in \\mathbb{Z}_p$ such that $1+\\frac{B}{MA}y_0^2$, $\\frac{C}{A}+\\frac{D}{MA}y_0^2\\not\\in\\mathbb{Z}_p^{\\times, 2}$, or $\\frac{C}{A}+\\frac{D}{MA}y_0^2 \\equiv 0 \\mod p$ and $1+\\frac{B}{MA}y_0^2\\not\\in\\mathbb{Z}_p^{\\times, 2}$. In the first case we consider\n$$\nP_1=(1\\colon 0\\colon 0\\colon 0\\colon \\sqrt{AC})\\quad \\text{ and } \\quad\nP_2=\\left(1\\colon \\frac{y_0^2}{M}\\colon 0\\colon y_0\\colon A\\sqrt{\\left(1+\\frac{B}{MA}y_0^2\\right)\\left(\\frac{C}{A}+\\frac{D}{MA}y_0^2\\right)}\\right)\n\\in X(\\mathbb{Q}_p).\n$$\nIn the second case we choose a $y_1\\in\\mathbb{Z}_p$ such that $y_1\\equiv y_0\\mod p$ and $\\frac{C}{A}+\\frac{D}{MA}y_1^2=0$. We then work with\n$$\nP_1=(1\\colon 0\\colon 0\\colon 0\\colon \\sqrt{AC})\\quad \\text{ and } \\quad P_2=\\left(1\\colon \\frac{y_1^2}{M}\\colon 0\\colon y_1\\colon 0\\right)\\in X(\\mathbb{Q}_p).\n$$\n\n\\noindent\\textbf{Case 4.} Let $M=p^{2k+1}M'$ with $k\\geq 0$ and $M' \\in \\mathbb Z_p^\\times$. From (C1) we obtain that $AD\\equiv BC\\mod p$. Lemma~\\ref{lem:quadres}(b) for $S\\left(1,\\frac{B}{M'A}\\right)$ implies the existence of an $x_0\\in\\mathbb{Z}_p^\\times$ such that $1-\\frac{B}{M'A}x_0^2\\in\\mathbb{Z}_p^\\times\\setminus\\mathbb{Z}_p^{\\times 2}$. 
Then \n$$\n1-\\frac{D}{M'C}\nx_0^2\\equiv 1-\\frac{B}{M'A}x_0^2\\mod p,\n$$ \nso we can take \n$$\nP_1=(1:0:0:0:\\sqrt{AC})\\, \\text{ and }\\, P_2=\\left(1:-\\frac{x_0^2}{M'}:p^kx_0:0:\\sqrt{AC\\left(1-\\frac{Bx_0^2}{M'A}\\right)\\left(1-\\frac{Dx_0^2}{M'C}\\right)+p^{2k+1}x_0^2}\\right).\n$$\n\n\\noindent\\textbf{Case 5.} We have $p\\mathrel{\\|} M$ and $p^2 \\mid A$. Conditions (C1) and \\eqref{gcd_cond_1} imply $p\\nmid C$, $p \\mid B$, $p \\nmid D$ and $p \\mid N$. Then \\eqref{sys} is isomorphic to the system with coefficients $(p^{-2}A,p^{-1}B,C,pD,p^{-1}M,p^{-1}N)$ under the morphism $(u\\colon v \\colon x \\colon y \\colon z) \\mapsto (pu\\colon v \\colon x\\colon y \\colon z)$. For the new system we have $\\mathrm{v}_p(M)=0$ and $m\\geq 1$, which was considered in Case 1.\n\n\\noindent\\textbf{Case 6.} From $p^{2k+1} \\mathrel{\\|} M$ and $p\\mid A$ we get, as in the previous case, $p \\mid B,N$ and $p \\nmid C,D$. Considering (C1) modulo $p^{2k+1}$ we get $(AD-BC)^2 \\equiv pN^2 \\mod p^{2k+1}$. From this we conclude that $p^{k+1} \\mid AD-BC$ and $p^k \\mid N$. In particular, $p^2 \\mid A$ if and only if $p^2 \\mid B$. So we conclude $p\\mathrel{\\|} A,B$ from \\eqref{gcd_cond_2}. This implies $2\\mathrm{v}_p(AD-BC+M) = \\mathrm{v}_p(pN^2-4ABCD) = 2$ and hence $\\mathrm{v}_p(AD+BC)=1$. From this we deduce that\n\\[\n2\\mathrm{v}_p(AD-BC)=\\mathrm{v}_p(pN^2 +2(AD+BC)M-M^2) = 2k+2.\n\\]\nHence $p^{k+1} \\mathrel{\\|} AD-BC$.\n\nOur system of equations is equivalent to \\eqref{sys} with the coefficients\n\\[\n(p^{-2k}MD,-p^{-2k-1}MB,-C,p^{-1}A,p^{-2k-1}(AD-BC)^2)\n\\]\nunder the morphism $(u\\colon v \\colon x \\colon y \\colon z) \\mapsto (\\frac{Au+Bv}{AD-BC} \\colon p\\frac{Cu+Dv}{AD-BC} \\colon p^{-k}x \\colon p^{-k}y \\colon p^{-k}z)$. This new system satisfied $\\mathrm{v}_p(M)=1$ and $m=1$, which was discussed in Case 2.\n\n\\noindent\\textbf{Case 7.} Now we know that $p^{2k} \\mathrel{\\|} M$ and $p\\nmid ABCD$. Condition (C1) implies $AD\\equiv BC\\mod p$, so $\\mathrm{v}_p(AD+BC)=0$. From $\\mathrm{v}_p(M)=2k$ and (C1) we obtain that $\\mathrm{v}_p(AD-BC)=k$. Now consider the coefficients\n\\[\n(p^{-2k}MD,-p^{-2k}MB,-C,A,p^{-2k}(AD-BC)^2)\n\\]\nwith variables $(\\frac{Au+Bv}{AD-BC}\\colon \\frac{Cu+Dv}{AD-BC}\\colon p^{-k}x\\colon p^{-k}y\\colon p^{-k}z)$ for the system \\eqref{sys}. This is precisely Case 3.\n\n\\noindent\\textbf{Case 8.} Now we assume $p^{2k} \\mathrel{\\|} M$ and $m=1$. Without loss of generality $p \\mathrel{\\|} A$ and as in the previous cases we can prove $p \\mathrel{\\|} B$, $p\\nmid CD$, $\\mathrm{v}_p(AD+BC)=1$ and $\\mathrm{v}_p(AD-BC)\\ge k+1$. The system \\eqref{sys} with coefficients $(p^{-2k}MD,-p^{-2k-1}MB,-C,p^{-1}A,p^{-2k-1}(AD-BC)^2)$ with variables $(\\frac{Au+Bv}{AD-BC}\\colon p\\frac{Cu+Dv}{AD-BC}\\colon p^{-k}x\\colon p^{-k}y\\colon p^{-k}z)$ is isomorphic to our system. This new system satisfies $\\mathrm{v}_p(M)$ is odd and $m=0$, which is precisely Case~4.\n\\end{proof}\n\n\\section{Explicit examples}\n\nAll known examples of Brauer--Manin obstructions for quartic del Pezzo surfaces occur on surfaces with a Brauer group modulo constants of order $2$. The first such obstruction was described by Birch and Swinnerton-Dyer in \\cite{BSD}. 
Work by Jahnel and Schindler \\cite{JS} showed that the locus of quartic del Pezzo surfaces with a Brauer group of order $2$ for which the Brauer--Manin obstruction is the only one, is dense in the moduli space of all del Pezzo surfaces of degree $4$.\n\nMitankin and Salgado \\cite{MS} restricted to the subfamily in which a positive proportion has a Brauer group of order $4$. Such surfaces can be written as\n$$\n\\begin{cases}\nx_0x_1-x_2x_3=0;\\\\\na_0x_0^2+a_1x_1^2+a_2x_2^2+a_3x_3^2+a_4x_4^2=0,\n\\end{cases}\n$$\nwhere $a_0a_1,a_2a_3,-a_0a_2\\in\\mathbb{Q}^{\\times,2}$ and $-a_0a_4(a_0a_1-a_2a_3)\\notin\\mathbb{Q}^{\\times,2}$. The Hasse principle, however, trivially holds for such surfaces, because of the rational point $(1:0:\\sqrt{-a_0\/a_2}:0:0)$.\n\nIn this section we will exhibit families of quartic del Pezzo surfaces with a Brauer group of order $4$ with a Brauer--Manin obstruction to the Hasse principle.\n\n\\subsection{Obstructions coming from $\\mathcal A$}\n\nLet us define our first subfamily. Consider a prime number $p\\equiv 1\\mod 4$ and integers $a,b$ such that $ab=p-1$. The surface $Y_{p,a,b}$ is given by the system\n$$\n\\begin{cases}\ny^2-px^2=uv;\\\\\nz^2-px^2=(au-pv)(u-bv).\n\\end{cases}\n$$\nObviously, $Y_{p,a,b}=X_{p,a,-p,1,-b,1,1}$, and one can check that conditions (C1) and (C2) are satisfied with $N=1$.\n\nFor such surfaces we have $\\mathcal{A}=\\left(p,\\frac{u}{au-pv}\\right)$, $\\mathcal{B}=\\left(p,\\frac{z-y}{u}\\right)$ and $\\mathcal{C}=\\left(p,\\frac{au-pv}{z-y}\\right)$.\n\n\\begin{pro}\\label{pro:ex1_inv_B}\nConsider the surface $Y=Y_{p,a,b}$.\n\\begin{enumerate}\n\\item[(a)] The variety $Y$ is everywhere locally soluble.\n\\item[(b)] The invariant map of $\\inv_p\\mathcal{B} \\colon Y(\\mathbb A_\\mathbb{Q}) \\to \\frac12\\mathbb{Z}\/\\mathbb{Z}$ is surjective.\n\\end{enumerate}\n\\end{pro}\n\n\\begin{proof}\nLocal solubility at all places $v\\not\\in \\{2,p\\}$ has already been proven in Proposition \\ref{pro:loc_sol_subfam}. Local solubility at $p$ and the surjectivity of the invariant map of $\\mathcal{B}$ at $p$ follow from Case 1 in the proof of Proposition~\\ref{prop:surjectiveinvariantmap}. For local solubility at $2$ one can simply consider all possibilities of $a$, $b$ and $p$ modulo $8$.\n\\end{proof}\n\nSo we might expect an obstruction coming from $\\mathcal{A}$.\n\n\\begin{pro}\\label{pro:ex1_inv_A}\nFor all $(s_v)_v\\in Y_{p,a,b}(\\mathbb{A}_\\mathbb{Q})$ we have: $\\inv_v \\mathcal{A}(s_v)=0$ for $v\\neq p$, and:\n$$\n\\inv_p \\mathcal{A}(s_p)=\\begin{cases}\n0 & \\text{ if }\\left(\\dfrac{a}{p}\\right)=1;\\\\\n\\dfrac{1}{2} & \\text{ if }\\left(\\dfrac{a}{p}\\right)=-1.\n\\end{cases}\n$$\n\\end{pro}\n\n\\begin{proof}\nOne can prove this by direct computation. However, we will apply results from \\cite{Beffeval} and \\cite{CTS13} to conclude that $\\inv_v$ is constant for $v \\ne p$ and compute it at a single point in $X(\\mathbb{Q}_v)$ to conclude $\\inv_v$ is identically zero.\n\nFinally, consider the case $v=p$. Let $s_p=(u\\colon v\\colon x\\colon y\\colon z)$ with $(u,v,x,y,z)$ a primitive tuple in $\\mathbb{Z}^5_p$. Suppose $p \\mid u$, then immediately $p \\mid y,z$. Denote $u=pu'$, then \n$$\nu'v\\equiv -x^2\\equiv (au'-v)(pu'-bv)\\equiv u'v+bv^2\\mod p\n$$\nwhich means that $p \\mid v$ and then $p \\mid x$, which contradicts the primitivity of $(u,v,x,y,z)$. Thus, $\n\\mathrm{v}_p(u)=0$, so $\\left(p,\\frac{u}{au-pv}\\right)_p=\\left(\\frac{u(au-pv)}{p}\\right)=\\left(\\frac{a}{p}\\right)$. 
This proves the proposition \nin the case $v=p$.\n\\end{proof}\n\n\nWe can now compute the Brauer--Manin obstruction coming from $\\mathcal{A}$.\n\n\\begin{thm}\\label{thm_ex1}\nConsider the surface $Y_{p,a,b}$. Note that from $ab=p-1$ and $p\\equiv 1\\mod 4$ we conclude $(\\frac{a}{p})=(\\frac{b}{p})$. \nAlso, $(\\frac{a}{p})=(\\frac{b}{p})=-1$ holds if and only if $p\\equiv 5\\mod 8$, and $a$ and $b$ are both even.\n\\begin{enumerate}\n\\item[(a)] If $(\\frac{a}{p})=(\\frac{b}{p})=-1$ then $Y_{p,a,b}(\\mathbb{Q})=\\emptyset$ and this failure of the Hasse principle is \nexplained by the Brauer--Manin obstruction coming from $\\mathcal{A}$.\n\\item[(b)] Otherwise, we have\n\\[\nY_{p,a,b}(\\mathbb A_\\mathbb{Q})^{\\Br X} \\ne \\emptyset.\n\\]\n\\end{enumerate}\n\\end{thm}\n\n\\begin{proof}\nThe first observation comes from the following facts: for $p\\equiv 1\\mod 4$ we have $(\\frac{-1}{p})=1$ and $(\\frac{q}{p})=1$ for odd prime divisors $q$ of $p-1$. Also, $(\\frac{2}{p})=1$ if $p\\equiv 1\\mod 8$ and $(\\frac{2}{p})=-1$ if $p\\equiv 5\\mod 8$.\n\nIf $(\\frac{a}{p})=(\\frac{b}{p})=-1$ then Proposition \\ref{pro:ex1_inv_A} implies that $\\sum_v\\inv_v\\mathcal{A}(s_v)=\\frac{1}{2}$ for all $(s_v)_v\\in Y_{p,a,b}(\\mathbb{A}_\\mathbb{Q})$, so $Y_{p,a,b}(\\mathbb{A}_\\mathbb{Q})^\\mathcal{A}=\\emptyset$.\n\nIf $(\\frac{a}{p})=(\\frac{b}{p})=1$ then Proposition \\ref{pro:ex1_inv_A} and Proposition \\ref{pro:ex1_inv_B} imply that \n$\\sum_v\\inv_v\\mathcal{A}(s_v)=0$ for all ${(s_v)_v\\in Y_{p,a,b}(\\mathbb{A}_\\mathbb{Q})}$, and invariant maps of $\\mathcal{B}$ and $\\mathcal{C}$ at $p$ are \nsurjective. Thus, $Y_{p,a,b}(\\mathbb{A}_\\mathbb{Q})^{\\Br}\\neq\\emptyset$.\n\\end{proof}\n\n\n\n\n\\begin{exa}\nConsider a surface $Y_{13,2,6}$ given by\n$$\n\\begin{cases}\ny^2-13x^2=uv;\\\\\nz^2-13x^2=(2u-13v)(u-6v)\n\\end{cases}\n$$\nTheorem \\ref{thm_ex1} shows that $Y_{13,2,6}(\\mathbb{Q})=\\emptyset$.\n\\end{exa}\n\n\\begin{exa}\nConsider a surface $Y_{13,1,12}$ given by\n$$\n\\begin{cases}\ny^2-13x^2=uv;\\\\\nz^2-13x^2=(u-13v)(u-12v)\n\\end{cases}\n$$\nTheorem \\ref{thm_ex1} shows that there is no Brauer--Manin obstruction for $Y_{13,1,12}$ to the Hasse principle, and it has a trivial rational point $(1:0:0:0:1)\\in Y_{13,1,12}(\\mathbb{Q})$.\n\\end{exa}\n\n\n\\begin{exa}\nConsider a surface $Y_{13,12,1}$ given by\n$$\n\\begin{cases}\ny^2-13x^2=uv;\\\\\nz^2-13x^2=(12u-13v)(u-v)\n\\end{cases}\n$$\nTheorem \\ref{thm_ex1} shows that there is no Brauer--Manin obstruction for $Y_{13,12,1}$, and it has a rational point $(1:-3:2:7:16)\\in Y_{13,12,1}(\\mathbb{Q})$ which is actually non-trivial to find.\n\\end{exa}\n\n\\begin{rema}\nNote that we can change $a$ and $b$ up to squares without changing the isomorphism class of the surface. Unless $(a,b)$ equals either $(1,p-1)$ or $(-1,-(p-1))$, up to squares, there does not seem to be an obvious rational point. However, for each explicit surface we studied we were always able to find such a point.\n\nThe machinery from \\cite{CTSSD} and \\cite{WittenbergBook} seems ideally suited to proving such a statement by making $X$ into an elliptic fibration using the projection $f \\colon X \\dashrightarrow \\mathbb P^2$ away from a plane studied in the beginning of this paper. 
Unfortunately the ``condition (D)'' in those works is not satisfied, which relates to the fact that the Brauer group of $X$ is not vertical.\n\\end{rema}\n\n\\begin{rema}\nAlso, note that the technique described above would not just establish the existence of a rational point, but even prove that these are Zariski dense. We already know these two statements to be equivalent by work of Salberger and Skorobogatov \\cite{SalbergerSkorobogatov91}.\n\\end{rema}\n\n\\subsection{Obstructions coming from $\\mathcal B$}\n\nWe will now consider a second subfamily for which $\\mathcal B$ might give an obstruction.\n\n\\begin{defi}\\label{def_ex_B}\nLet $p$ be an odd prime, and $a$ and $b$ integers such that\n\\begin{enumerate}\n\\item $(a+b-1)^2-4ab=p$,\n\\item $4a\\equiv 4b\\equiv 1\\mod p$,\n\\item $a\\equiv b\\equiv 1\\mod 2$, and\n\\item $a\\equiv 1\\mod 8$.\n\\end{enumerate}\nLet $S_{p,a,b}$ be the surface given by\n\\begin{equation}\\label{sys_ex_B}\n\\begin{cases}\ny^2-px^2=uv;\\\\\nz^2-px^2=(u+v)(au+bv).\n\\end{cases}\n\\end{equation}\n\\end{defi}\n\nSuch triples $(p,a,b)$ do exist, as one can see from considering $p \\equiv 5 \\mod 8$, an integer $t \\equiv 3\\frac{p-1}{4}\\mod 8$ and\n\\[\na_t=t^2p^2-tp-\\frac{p-1}{4} \\quad \\text{ and } \\quad b_t=t^2p^2+tp-\\frac{p-1}{4}.\n\\]\n\nOne can also check that $S_{p,a,b}=X_{p,1,1,a,b,1,1}$ satisfies conditions (C1) and (C2).\n\n\\begin{pro}\n\\\n\\begin{enumerate}\n\\item[(a)] All $S_{p,a,b}$ are everywhere locally soluble.\n\\item[(b)] For all $S_{p,a,b}$ the invariant map of $\\mathcal{A}$ at $p$ is surjective.\n\\end{enumerate}\n\\end{pro}\n\n\\begin{proof}\nProposition \\ref{pro:loc_sol_subfam} addresses the local solubility at all places $v\\not\\in\\{2,p\\}$. Case 3 in the proof of Proposition~\\ref{prop:surjectiveinvariantmap} establishes the local solubility at $p$ and the surjectivity of the invariant map of $\\mathcal{A}$ at $p$. The local solubility at $2$ follows from the point $(1:0:0:0:\\sqrt{a})\\in X(\\mathbb{Q}_2)$, since $a\\equiv 1\\mod 8$.\n\\end{proof}\n\nWe will now consider the obstruction coming from $\\mathcal{B}$.\n\n\\begin{thm}\\label{thm_ex_B}\nWe have $S_{p,a,b}(\\mathbb A_\\mathbb{Q})^\\mathcal{B}=\\emptyset$ and in particular $S_{p,a,b}(\\mathbb{Q})=\\emptyset$.\n\\end{thm}\n\n\\begin{proof}\nWe can prove that $\\inv_v$ is identically zero for $v \\ne p$ as we did in the proof of Proposition~\\ref{pro:ex1_inv_A}.\n\nFinally, consider the case $v=p$. Let $s_p=(u:v:x:y:z)$ such that $(u,v,x,y,z)$ is primitive in $\\mathbb{Z}_p$, hence in particular $\\mathrm{v}_p(u)\\mathrm{v}_p(v)=0$. Assume $\\mathrm{v}_p(u)=0$; the other case is similar. The system \\eqref{sys_ex_B} together with the condition $4a\\equiv 4b\\equiv 1\\mod p$ gives us\n$$\nv\\equiv \\frac{y^2}{u}\\mod p\\quad \\text{ and } \\quad z^2\\equiv \\frac{1}{4}(u+v)^2\\mod p.\n$$\nLet us write $z \\equiv \\frac{\\varepsilon}2(u+v) \\mod p$ with $\\varepsilon = \\pm 1$. Then we get\n\\begin{align*}\nz\\pm y \\equiv \\frac{\\varepsilon}2(u+v) \\pm y & \\equiv \\frac{\\varepsilon}2 \\frac{y^2}u \\pm y + \\frac{\\varepsilon}2u\\\\\n & \\equiv \\frac{\\varepsilon}2u\\left(\\frac{y^2}{u^2} \\pm 2 \\varepsilon \\frac yu + 1 \\right)\\\\\n& \\equiv \\frac{\\varepsilon}2u\\left(\\frac yu \\pm \\varepsilon\\right)^2 \\mod p.\n\\end{align*}\n\nSo at least one of $\\eta = \\frac u{z\\pm y}$ is non-zero modulo $p$ at $s_p$, and we have $(p,\\eta)=-1$ since $(\\frac{2}{p})=-1$. 
We conclude $\\inv_p\\mathcal{B}(s_p)=\\frac{1}{2}$.\n\\end{proof}\n\n\n\n\n\\begin{exa}\nConsider $S_{13,153,179}$ given by\n$$\n\\begin{cases}\ny^2-13x^2=uv;\\\\\nz^2-13x^2=(u+v)(153u+179v).\n\\end{cases}\n$$\nTheorem \\ref{thm_ex_B} shows that $S_{13,153,179}(\\mathbb{Q})=\\emptyset$ and hence this surface does not satisfy the Hasse principle.\n\\end{exa}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIn the last decade, hybrid organic-inorganic materials with the perovskite structure, most notably CH$_3$NH$_3$Pb(I,Cl)$_3$, have become the most promising light-harvesting active layer for the implementation of high efficiency and low cost solar cells \\cite{Graetzel_2014,Hodes_2013,Xiao_2016,Lee_2012,Heo_2013,You_2014,Zhou_2014,Nanni_2014}. As a matter of fact, the power conversion efficiency of perovskite solar cells (PSCs) has shown a very fast growth, increasing from 3.8\\% \\cite{Kojima_2009} in 2009 to 22.1\\% \\cite{NREL_eff_chart} in 2016. The ease of fabrication and the high radiative efficiency have made perovskites also extremely attractive for the development of bright LEDs and lasers \\cite{Veldhuis_2016,Sutherland_2016,Palma_2016,Palma_2016_2}.\n\nNevertheless, a major drawback originates from the poor material stability\\cite{Xu_2016,Xiao_2016}: decomposition after exposure to moisture, cracks and defects generated by thermal stresses, crystal phase transition, UV-light exposure represent the principal causes of aging\\cite{Xu_2016,Xiao_2016}.\nMoreover, a high density of trap states, both on surfaces and in grain boundaries \\cite{Xiao_2016,Draguta_2016,Wu_2015,Kim_2014}, are present in polycrystalline perovskites, used for solar cells and light emitters. \nIndeed, even though theoretical predictions show that deep trap states are not generally formed inside perovskites grains, the opposite is observed at the grain boundaries and at the surfaces\\cite{Xiao_2016,Park_2016_book}.\nTherefore, sophisticated passivation strategies are essential for increasing the efficiency of PSCs and light emitters \\cite{Son_2016,Giordano_2016,deQuilettes_2016,Park_2016_book}.\n\nFor CH$_3$NH$_3$PbI$_3$ (MAPI) perovskites, a correlation between the solar cell efficiency and the grain morphology was recently demonstrated \\cite{Shao_2016}. Results of local short-circuit photocurrent, open-circuit voltage and dark drift current within individual grains correlate these quantities to different crystal facets, as a consequence of a facet-dependent density of trap states \\cite{Leblebici_2016} and it was also proven that the structural order of the electron transport layer (ETL) impacts the overall cell performance \\cite{Shao_2016}. 
Moreover, the nature of grain boundaries was shown to affect the carrier recombination kinetics because of non-radiative pathways that would also play a role in the process of charge separation and collection \cite{deQuilettes_2016}.\n\n\n\nIt turns out that the possibility of controlling the morphology of the perovskite thin film, and understanding whether different realizations of the ETL can modify the morphology of the grains, are of the utmost relevance to further improvements of perovskite based solar cells.\n\nRecently, graphene and related two-dimensional materials have been introduced in the device structure in order to improve the charge injection and\/or collection at the electrodes: an enhancement of the power conversion efficiency\cite{Acik_2016} and a long-term stability\cite{Agresti_2016} were obtained.\nAs a matter of fact, interfaces between perovskite and transport layers have been recently demonstrated to dramatically affect the charge recombination processes and material instability within the working device.\cite{Capasso_2016}\nIn fact, when free charges are rapidly injected from perovskite to the electron transport layer, the perovskite degradation is slowed down and the non-radiative recombination is reduced\cite{Ahn_2016}. \nIn particular, the insertion of graphene flakes into the mesoporous-TiO$_2$ layer (mTiO$_2$) and of lithium-neutralized graphene oxide (GO-Li) as an interlayer at the perovskite\/mTiO$_2$ interface showed enhanced conversion efficiency and stability on both small and large area devices, demonstrating the crucial role of graphene interface engineering in perovskite-based devices\cite{Agresti_2017}. Thus, the influence of mesoscopic graphene-modified substrates on the perovskite film needs to be investigated in more detail to finely control the photovoltaic performance of complete devices.\n\nGiven the typical size of the grains (a few hundreds of nanometers), high resolution techniques such as TEM (Transmission Electron Microscopy), AFM (Atomic-Force Microscopy), SNOM (Scanning Near-field Optical Microscopy), etc.~are employed to investigate the grain morphology, but a difficult task is to correlate physical properties at the nanoscale with the device performance obtained, for instance, by measuring the $I$-$V$ curve of the cell\cite{Leblebici_2016}.\nThus, it would be of extreme relevance to extract information on the active film morphology with much easier techniques on a length scale of tens\/hundreds of microns and even larger, so as to assess the homogeneity of the film deposition and the reliability of the synthesis protocol and post-deposition treatments. 
Moreover, the high spatial resolution analysis can explore a limited thickness of the film, and therefore it appears difficult to get information in the case of a real device.\nPhotoluminescence (PL) spectroscopy is an effective tool to investigate the film quality: in fact, from the comparison of samples with different ETLs, it is possible to extract quantitative information on the carrier capture and transport and, by the spectral shape, identify the crystalline phase of the active layer and evaluate the density of traps\/defects.\nMoreover, given the limited thickness (few hundreds of nm) of the perovskite film in solar cells and given the steep behavior of the absorption coefficient in MAPI\\cite{Loper_2015}, by varying the excitation wavelength in the range 300-700\\,nm, it is possible to probe thicknesses of the film from a hundred of nm to the whole film layer.\n\nIn this paper we aim to establish the effects of different graphene-based\nETLs in sensitized MAPI solar cells. In particular we will study the ETL\neffects on the carrier collection efficiency and on the MAPI morphology along\nthe thickness.\nBy picosecond time-resolved measurements we correlate the carrier recombination dynamics to the crystalline quality of the active material in presence\/absence of the ETL. We will find an increase of the electron collection efficiency up to a factor 3 with respect to standard mTiO$_2$.\nTaking advantage of the absorption coefficient dispersion, we are able to assess the film morphology along the thickness. In fact, by tuning the excitation wavelength, we investigate a thickness range from 150 to 400\\,nm and we can highlight the morphology changes induced by different ETLs.\nOur results will indicate that, when a graphene doped mesoporous TiO$_2$ (G+mTiO$_2$) with the addition of a GO-Li interlayer is used as ETL, the morphology of the MAPI film embedded in the mesoporous layer is frozen in the tetragonal phase, regardless of the temperature. In addition, the defect concentration is about one order of magnitude lower than that found with the other ETLs.\n\n\n\n\n\n\n\n\n\\section{Results and discussion}\n\nFour types of samples were prepared using different combinations of ETLs: mTiO$_2$, G+mTiO$_2$, mTiO$_2$ plus GO-Li interlayer and G+mTiO$_2$ plus GO-Li interlayer. \nThe different sample structures are schematically shown in Fig.\\,\\ref{Fig1} and listed in Tab.\\,\\ref{Tab1}. For clear comparison a sample of MAPI on FTO without ETL (reference sample) was also investigated. \nDetails on the sample preparation are reported in the Experimental Section.\nIt is worth noting that PL analysis was carried out on simple photo-electrodes, lacking of the hole collecting layer and the bottom contact: this allows us to focus only on the electron collection and transport after the electron-hole pairs creation due to the photon absorption. Moreover, PL experiments can be realized illuminating the samples either from the perovskite film side (side A) or the FTO side (side B). Varying the excitation wavelength and the excitation side, we can differently penetrate into the MAPI film and selectively probe spatial regions few tens of nm near the ETL or far from it.\n\n\\begin{figure}\n\\includegraphics[width=1\\columnwidth]{Fig1.pdf}\n\\caption{Structures of the investigated samples. 
The ETLs are indicated in red.}\n\label{Fig1}\n\end{figure}\n\n\n\begin{table}[b]\n\caption{List of the investigated samples and description of their corresponding ETLs.}\n\label{Tab1}\n\addtolength{\tabcolsep}{3.8mm}\n\begin{tabular*}{\columnwidth}{l@{\hspace{22mm}}l}\n\hline\nSample & ETL \\\\\n\hline\nReference & No ETL \\\\\nETL 1 & mTiO$_2$ \\\\\nETL 2 & G+mTiO$_2$ \\\\\nETL 3 & mTiO$_2$ plus GO-Li\\\\\nETL 4 & G+mTiO$_2$ plus GO-Li \\\\\n\hline\n\end{tabular*}\n\end{table}\n\n\nIn Fig.\,\ref{Fig2} PL decays at room temperature, after excitation with photons of 2.06\,eV, are compared for the different samples, while in the inset a PL spectrum at room temperature is shown: typical spectra of the tetragonal phase are observed\cite{Milot_2015,Dar_2016}. \nBy exciting from side A (Fig.\,\ref{Fig2}a), the PL decays are identical for all samples and no effect related to the presence of the ETL is detected. \nIn contrast, there is a significant difference in the PL decay by exciting from side B, depending on the sample, as shown in Fig.\,\ref{Fig2}b: faster decays are observed in the presence of an ETL, especially when graphene and\/or GO-Li are used.\nIt is worth noting that the PL decay of the reference sample excited from side B is slower with respect to the decays of side A. This can be attributed to non-radiative states at the surface of the uncovered perovskite film (side A).\n\n\begin{figure}\n\includegraphics[width=1\columnwidth]{Fig2.pdf}\n\caption{PL decay (at the PL peak energy) and PL spectra (in the inset) at room temperature, after ps excitation at 2.06\,eV with an average intensity of 10\,W\/cm$^2$. a) From side A. b) From side B.}\n\label{Fig2}\n\end{figure}\n\nTo extract information about the efficiency in the carrier injection from perovskite to ETL, we first fitted the decays of Fig.\,\ref{Fig2}b with a double exponential\cite{Son_2016,Bi_2016}, taking into account the laser pulse repetition period \cite{Warren_2013}. The fitting function $I(t)$ can be written as\n\begin{equation}\nI(t) = f(t)+g(t)\n\end{equation}\nwhere $f(t)$ is the original double exponential and $g(t)$ the correction term due to the intrinsic periodic nature of TCSPC measurements\cite{Warren_2013}:\n\begin{equation}\nf(t)=\Theta(t-t_0)[Ce^{-(t-t_0)\/\tau_1}+ \n(1-C)e^{-(t-t_0)\/\tau_2}]\n\end{equation}\n\begin{equation}\ng(t)=C\frac{e^{-(t-t_0)\/\tau_1}}{e^{T\/\tau_1}-1}+ \n(1-C)\frac{e^{-(t-t_0)\/\tau_2}}{e^{T\/\tau_2}-1} \n\end{equation}\nIn the previous equations $\Theta(t)$ is the Heaviside function, $\tau_1$ and $\tau_2$ are the decay time constants, $T$ is the laser pulse repetition period (13.15\,ns), $C$ is the contribution of the $\tau_1$ exponential to the fit and $t_0$ is a constant. \nThe results obtained by the fitting procedure are reported in Tab.\,\ref{Tab2}.\nInserting graphene-based ETLs in the samples reduces $\tau_1$ from 25\,ns to 15\,ns while the reduction of $\tau_2$ depends on the ETL.\nIn the literature\cite{Son_2016,Bi_2016} the longer decay constant ($\tau_1$) is ascribed to the radiative recombination in MAPI while the shorter decay constant can be attributed to the carrier removal from the MAPI layer towards the ETL.\n\nTo get rid of the local inhomogeneities in the samples, which can give rise to variations of the TI-PL intensity, and considering that the PL time evolution does not depend on the detection spot, we estimate the TI-PL intensity from the PL decay. 
It turns out that we can express the integrated PL intensity ($I_\mathrm{PL}$) as\n\begin{equation}\nI_\mathrm{PL}=\eta P,\n\end{equation}\nwhere $\eta$ is the radiative efficiency and $P$ is the pump intensity, equal to 10\,W\/cm$^2$ for all the TR-PL measurements.\nAssuming a unitary radiative efficiency for the reference sample, we can evaluate the change in $\eta$ for the different ETLs (see Tab.\,\ref{Tab2}). Lower $\eta$ is obtained in the case of ETLs 2, 3 and 4: thus the insertion of graphene and\/or of the GO-Li interlayer improves the electron capture from perovskite to ETL by a factor between two and three with respect to ETL 1.\nWe want to remark that this result does not necessarily imply an increase of the solar cell short circuit current density ($J_\mathrm{sc}$) of the same amount. In fact, hole collection by the hole transport layer\cite{Pydzinska_2016} and non-radiative recombination in the ETL have to be taken into account. However, as shown in the Supporting Information, a significant increase of $J_\mathrm{sc}$ is measured for the complete cell when ETL 4 is used.\n\n\begin{table}\n\caption{Results of the fits of data of Fig.\,\ref{Fig2}b: $\tau_1$ and $\tau_2$ are the decay time constants, $C$ is the contribution of $\tau_1$ to the fit and $\eta$ is the radiative efficiency.}\n\label{Tab2}\n\addtolength{\tabcolsep}{3.8mm}\n\begin{tabular*}{\columnwidth}{lcccc}\n\hline\nSample & $\tau_1$ (ns) & $\tau_2$ (ns) & $C$ &$\eta$ \\\\\n\hline\nReference & 25 & -- & 1.00 &1 \\\\\nETL 1 & 25 & 2.05 & 0.43 & 0.48 \\\\\nETL 2 & 15 & 1.99 & 0.36 & 0.27\\\\\nETL 3 & 15 & 1.24 & 0.20 &0.16\\\\\nETL 4 & 15 & 1.30 & 0.34 &0.24\\\\\n\hline\n\end{tabular*}\n\end{table}\n\nMore insight into the role of the ETL can be obtained by PL spectra at low temperature ($T=11$\,K). In Fig.\,\ref{Fig3} PL spectra, obtained by exciting the samples from side A (Fig.\,\ref{Fig3}a) and side B (Fig.\,\ref{Fig3}b), are reported.\nThe PL spectra from side A show two peaks. As expected for $T < 150$\,K in MAPI perovskite \cite{Milot_2015,Kong_2015}, the peak at about 1.65\,eV is attributed to the orthorhombic phase of MAPI. But the major contribution to the spectra comes from the other peak, centered at 1.55\,eV. This emission is likely due to the sum of two contributions: the radiative recombination arising from the residual tetragonal phase at 1.56\,eV and the radiative recombination from localized states below 1.52\,eV. In the literature, these localized states are identified with radiative traps\cite{Wu_2015,Fang_2015} or, more recently, with methylammonium-disordered domains in the orthorhombic phase of MAPI\cite{Dar_2016}.\nWe want to remark that, in both interpretations of the low energy side emission as radiative traps or disordered domains, a carrier localization is present.\nMore relevant is the fact that the low energy states are radiative and do not produce a loss of photogenerated carriers.\n\n\begin{figure}\n\includegraphics[width=1\columnwidth]{Fig3.pdf}\n\caption{TI-PL spectra at 11\,K for the various samples after excitation at 2.06\,eV with an average intensity of 10\,W\/cm$^2$. a) From side A. 
b) From side B.}\n\\label{Fig3}\n\\end{figure}\n\nConsidering that the absorption length of the MAPI film at about 2\\,eV is roughly 200\\,nm \\cite{Loper_2015} and that the thickness of the perovskite layer in our samples is about 350\\,nm (see Fig.\\,S5 in the Supporting Information), we can conclude that the emission, exciting from side A, comes mostly from the MAPI film and that no effect related to the presence of the ETL is detected. \nOn the contrary, the excitation from side B can reveal the ETL effect on the MAPI. As a matter of fact, relevant differences are observed between the emissions from side B (Fig.\\,\\ref{Fig3}b). First of all we observe a smaller contribution of the orthorhombic phase for all the samples with respect to the excitation from side A. Moreover, in the case of GO-Li plus G+mTiO$_2$ (ETL 4), we detect a strong reduction of the radiative traps at 1.52\\,eV and the dominance of the emission from the tetragonal phase at 1.56\\,eV.\n\nSuch results suggest that an incomplete phase transition occurs for the perovskite wrapped into the mesoporous ETL layer. In particular, in the case of ETL 4, where the PL lineshape shows a negligible low energy tail and a smaller linewidth with respect to the other samples, we can argue that the crystallization of the MAPI film is very good, as also confirmed by the scanning electron microscopy (SEM) image reported in Fig.\\,S4 of the Supporting Information. However, the interaction of MAPI and ETL inhibits the phase transition, at least for a thickness of about 200\\,nm (see SEM cross section in Fig.\\,S5 of the Supporting Information). In order to confirm and test our hypothesis we performed PL measurements as a function of temperature, excitation wavelength and excitation power.\n\n\\begin{figure}\n\\includegraphics[width=1\\columnwidth]{Fig4.pdf}\n\\caption{Temperature-dependent measurement on ETL 4 by exciting with photon energy of 2.06\\,eV with an average intensity of 10\\,W\/cm$^2$. The labels D, T and O stand for defects (trap states), tetragonal and orthorhombic, respectively. a) PL spectra from 10 to 300\\,K after excitation on side A. b) PL spectra from 10 to 300\\,K after excitation on side B. c) Position of the PL peaks of the spectra in Fig.\\,\\ref{Fig4}a and Fig.\\,\\ref{Fig4}b as a function of temperature. The excitation side is indicated in brackets. d) FWHM of the PL peaks of Fig.\\,\\ref{Fig4}a and Fig.\\,\\ref{Fig4}b as a function of temperature. The excitation side is indicated in brackets. Red solid line shows the fitting of FWHM through Eq.\\,\\eqref{Eq_Gamma}.}\n\\label{Fig4}\n\\end{figure}\n\nIn Fig.\\,\\ref{Fig4}a we report the emission spectra of ETL 4 exciting from side A, varying the sample temperature from 10 to 300\\,K. 
As already shown before, at low temperature we observe two bands, one at 1.65\\,eV corresponding to the orthorhombic phase and one at about 1.55 eV corresponding to the sum of the emission from optically active trap states and the emission from a residual tetragonal phase.\nAs expected\\cite{Kong_2015,Fang_2015,Dar_2016,Wright_2016}, increasing the temperature, the orthorhombic phase emission shifts at higher energy (see Fig.\\,\\ref{Fig4}c), showing a monotonic increase of its full width at half maximum (FWHM) (see Fig.\\,\\ref{Fig4}d), and it disappears above 150\\,K, where the phase transition of MAPI from orthorhombic to tetragonal phase occurs.\nBy increasing the temperature, the low energy band shows instead the typical S shape both in the peak emission energy (see Fig.\\,\\ref{Fig4}c) and its FWHM (see Fig.\\,\\ref{Fig4}d). This is an indication of the phase transition from orthorhombic to tetragonal phase, with the concurrent lower contribution of the traps.\nAbove 150\\,K the PL spectrum has only one peak, arising from the tetragonal phase emission, which continues to monotonically blue shift increasing the temperature (see Fig.\\,\\ref{Fig4}c), as expected\\cite{Kong_2015,Fang_2015,Dar_2016,Wright_2016}. \nApart from different relative weights of the emission bands, for all samples we find PL spectra very similar to the one of the sample with ETL 4 when the excitation is performed from side A.\n\nA very similar trend with temperature, as shown in Fig.\\,\\ref{Fig4}a for ETL 4, is found for ETL 1, 2 and 3, irrespective of the excitation side, and this trend is commonly reported in the literature\\cite{Dar_2016}. On the contrary, relevant differences are observed in case of ETL 4 exciting from side B (Fig.\\,\\ref{Fig4}b). By increasing the temperature, the PL spectrum shows a single band, corresponding to the tetragonal phase, with a monotonic increase of the emission energy (see Fig.\\,\\ref{Fig4}c) and the FWHM (see Fig.\\,\\ref{Fig4}d). Such behavior indicates that the MAPI film embedded in the mesoporous ETL side remains in the tetragonal phase even down to 10\\,K.\n\nThe FWHM of the PL bands as a function of temperature can be fitted taking into account the temperature-independent inhomogeneous broadening and the interaction between carriers and acoustic and longitudinal optical (LO) phonons\\cite{Dar_2016,Wright_2016}, using the following equation:\n\\begin{equation}\n\\Gamma(T)=\\Gamma_0+\\gamma_{\\mathrm{ac}}T + \\frac{\\gamma_{\\mathrm{LO}}}{e^{E_{\\mathrm{LO}}\/k_\\mathrm{B}T}-1},\n\\label{Eq_Gamma}\n\\end{equation}\nwhere $\\Gamma_0$ is the inhomogeneous broadening, $\\gamma_{\\mathrm{ac}}$ and $\\gamma_\\mathrm{LO}$ are the acoustic and LO phonon-carrier coupling strengths, respectively, and $E_\\mathrm{LO}$ is the LO phonon energy.\nWe fitted the FWHM data extracted from the PL from side A: in particular we considered the orthorhombic phase from 10\\,K to 150\\,K and the tetragonal phase from 150\\,K up to room temperature. If the two set of data can be fitted with a single function, we can conclude that the acoustic and optical phonons causing the PL line broadening have very similar energies for the two phases. The solid line in Fig.\\,\\ref{Fig4}d shows the best fitting curve and a good agreement is reached between the data and the model with the fitting parameters $\\Gamma_0 = (23\\pm1)$\\,meV, $\\gamma_\\mathrm{ac} = (30\\pm5)$\\,\\micro eV\/K, $\\gamma_\\mathrm{LO} = (75\\pm5)$\\,meV, $E_\\mathrm{LO} = (19\\pm1)$\\,meV. 
These values agree well with data in the literature \cite{Dar_2016,Wright_2016}.\n\nFurther evidence that the emission band at low temperature in ETL 4 must be attributed to a good-quality tetragonal crystalline phase of MAPI is given in Fig.\,\ref{Fig5}a. We report two spectra, already shown before, from side A (red curve) and side B (black curve), acquired at the same average excitation intensity $I_0=10$\,W\/cm$^2$.\nDecreasing the excitation intensity by one order of magnitude, the emission from side B changes completely, showing a spectrum (blue curve) similar to that of side A (red curve), with the dominant contribution of the traps and a small signal from the orthorhombic phase. This is explained by the fact that, lowering the power density, only the trap energy levels close to the band gap tail are filled.\nIn addition, the PL spectrum of the sample ETL 4, obtained by exciting from side B with an intensity of $I_0\/10$ (blue curve in Fig.\,\ref{Fig5}a), is very similar to the spectra of the ETLs 1, 2 and 3 (Fig.\,\ref{Fig3}b) by exciting from side B with a higher intensity $I_0$. This means that the trap density in the mesoporous region of MAPI in ETL 4 is lower by about one order of magnitude with respect to the other samples. This result agrees with the better crystal quality found by SEM analysis for ETL 4 (see Fig.\,S4 in the Supporting Information).\n\n\begin{figure}\n\includegraphics[width=1\columnwidth]{Fig5.pdf}\n\caption{a) Normalized PL spectra of ETL 4 at $T=11$\,K, after excitation at 2.06\,eV, for different excitation densities and excitation sides (in brackets). $I_0$ corresponds to an average intensity of 10\,W\/cm$^2$. b) Normalized PL spectra of ETL 4 at $T=11$\,K for different excitation photon energies and excitation sides (in brackets).}\n\label{Fig5}\n\end{figure}\n\nAs stated previously, to confirm that the crystalline nature of the film changes when in contact with the ETL, in particular in the presence of GO-Li plus G+mTiO$_2$, we probed ETL 4 along the thickness by exploiting the variation of the MAPI absorption coefficient with the excitation photon energy.\nWe spanned the range from 1.73 to 3.1\,eV, where FTO and cTiO$_2$ have a low and nearly constant absorption.\nThe result of this experiment, performed at 11\,K, is reported in Fig.\,\ref{Fig5}b. Let us focus on side B. At high photon energy excitation, 3.1\,eV (blue curve) and 2.06\,eV (orange curve), the MAPI is excited for a few tens of nanometers close to the ETL and, as already observed above, the spectra show only the tetragonal phase (which should not be observed at this temperature since the only stable phase below 150\,K is the orthorhombic one). Decreasing the excitation photon energy down to 1.73\,eV, the absorption coefficient of MAPI decreases and therefore the sample is excited more uniformly in depth. The resulting spectrum (red curve) shows an increase of the contribution of the orthorhombic phase.\nFor comparison, a spectrum from side A with an excitation of 2.06\,eV is reported (black curve). The comparison of this spectrum with that at 1.73\,eV from side B (red curve) shows, apart from a difference in the intensity of the orthorhombic phase, exactly the same contribution from the radiative traps and the tetragonal phase. 
The observed behavior proves that the crystalline nature of MAPI at low temperature is influenced by the interaction with the ETL, which inhibits the MAPI phase change into the orthorhombic form.\n\nOur results demonstrate a substantial improvement of the active layer morphology of ETL 4 with an efficient carrier capture from the ETL.\nThis is confirmed by a remarkable increase in power conversion efficiency (PCE) for complete devices, obtained by $I$-$V$ characterization (see Fig.\\,S6, S7 and Tab.\\,S1 in the Supporting Information), which is mainly ascribed to an improved short circuit current density ($J_\\mathrm{SC}$).\n\n\n\\section{Conclusions}\n\nWe investigated the effects of different graphene-based ETLs in sensitized MAPI photoelectrodes. In particular we have compared four different samples with the following ETLs: mTiO$_2$, G+mTiO$_2$, mTiO$_2$ plus GO-Li interlayer and G+mTiO$_2$ plus GO-Li interlayer. \nWe have studied the ETL effects on the carrier collection efficiency and on the MAPI morphology and quality along the thickness.\nIn presence of ETL, we found faster PL decays by exciting on FTO side with respect to the MAPI side which is explained by efficient electron removal from MAPI layer due to the ETL. In particular an increase of the electron collection efficiency up to a factor 3 with respect to standard mTiO$_2$ is reported.\n\nMoreover, the MAPI layer embedded in G+mTiO$_2$ plus GO-Li ETL shows a crystalline quality much better than the other samples, with a trap density about one order of magnitude lower.\nExploiting the dispersion of the MAPI absorption coefficient, we could probe the sample along the thickness, finding that the morphology of the MAPI film embedded in the G+mTiO$_2$ plus GO-Li ETL is frozen in the tetragonal phase, regardless of the temperature.\nMoreover, the observed morphology improvement of the MAPI encapsulated in the mTiO$_2$ plus GO-Li layer supports the increased efficiency measured for the complete devices.\n\nFinally, our results show that graphene based ETLs significantly improve both the carrier collection and the crystalline quality of the active material, opening new routes to the development of efficient and stable MAPI solar cells.\n\n\n\\section{Experimental section}\n\n\\textbf{Sample preparation.} \nSolar cells photoelectrodes were prepared on Fluorine-doped Tin Oxide (FTO) conductive glass (Pilkington TEC 8, 8\\,\\ohm\/$\\square$, 25\\,mm${}\\times{}$25\\,mm). \nThe substrates were cleaned in an ultrasonic bath, using three sequential steps: detergent with de-ionized water, acetone and 2-Propanol (10\\,min for each step).\nThe substrates were covered by a compact layer of TiO$_2$ (cTiO$_2$).\nA solution of acetylacetone (2\\,mL), titanium diisopropoxide (3\\,mL) and ethanol (45\\,mL) was deposited onto the FTO substrates by Spray Pyrolysis Deposition at 450\\,\\C. The final thickness of the cTiO$_2$ layer was measured about 50\\,nm by a Dektak Veeco 150 profilometer.\n\nThe mTiO$_2$ layer was obtained starting by an ethanol solution of 18NR-T titania paste (Dyesol) dissolved in pure ethanol (1:5 by weight), stirred overnight. \nThe graphene-doped mTiO$_2$ was obtained by adding sonicated graphene ink (1\\% in vol.),\nprepared by dispersing 5\\,g of graphite flakes (+100 mesh, $\\ge$75\\%, Sigma Aldrich) in 500\\,mL of N-methyl-2-pyrrolidone, NMP (Sigma Aldrich). 
The initial dispersion was ultrasonicated (VWR) for 6 hours and subsequently ultracentrifuged using a SW32Ti rotor in a Beckman-Coulter Optima XPN ultracentrifuge at 10000\\,rpm ($\\sim$12200 g) for 30 mins at 15\\,$^\\circ$C. After ultracentrifugation, the upper 80\\% supernatant was extracted by pipetting. The concentration of the graphitic flakes is calculated from the OAS (see Fig.\\,S1 in the Supporting Information), giving a concentration of 0.25\\,g L$^{-1}$. The morphology of the flakes, i.e., lateral size and thicknesses are characterized by transmission electron microscopy (TEM) and atomic force microscopy (AFM), respectively (see Fig.\\,S2 in the Supporting Information) giving a lateral size distribution of 150\\,nm and thickness of 1.7\\,nm. Raman spectroscopy data are found in the Supporting Information (see Fig.\\,S3).\nBoth standard mTiO$_2$ and G+mTiO$_2$ dispersions were sonicated 10\\,min prior to be spin-coated in air at 1700\\,rpm for 20\\,s onto the cTiO$_2$ surface, followed by a calcination step at 450\\,\\C\\ for 30\\,min.\n\nThe GO-Li interlayer was realized by spin coating (2000\\,rpm for 10\\,s) 200\\,\\micro L of GO-Li dispersion in ethanol\/H$_2$O (3:1) prepared as reported in Ref.\\,\\onlinecite{Agresti_2016b}. After the deposition, the substrates were annealed at 110\\,\\C\\ for 10\\,min.\n\nThe photo-electrodes were completed by depositing the perovskite active layer in dry conditions (relative humidity less than 30\\%) by a double step method: a lead iodide solution (PbI$_2$ in N,N-dimethylformamide, 1\\,M, heated at 70\\,\\C) was spin coated at 6000\\,rpm for 10\\,s on heated substrates (50\\,\\C) which were then dipped into a CH$_3$NH$_3$I (Dyesol) in anhydrous 2-propanol solution (10\\,mg\/mL) for 15\\,min. Finally, the samples were heated at 80\\,\\C\\ for 20\\,min in air. The MAPI absorbing layer has a typical thickness of 350\\,nm with a perovskite penetration into mesoporous layer of about 200\\,nm.\nThe samples were not encapsulated. \nComplete solar cells were realized with the same structure of the investigated samples to compare the PL results with the current-voltage ($I$-$V$) characteristics.\nIn the case of complete devices doped spiro-OMeTAD (73.5\\,mg\/mL) in chlorobenzene solution doped with tert-butylpyridine (TBP 26.77\\,\\micro L\/mL), lithium bis(trifluoromethanesulfonyl)imide (LiTFSI 16.6\\,\\micro L\/mL), and cobalt(III) complex (FK209 from Lumtec, 7.2\\,\\micro L\/mL) is spin coated (2000\\,rpm for 20\\,s) onto the tested photo-electrodes. The final devices are completed by Au counter-electrode thermal evaporation (100\\,nm). $I$-$V$ characteristics are recorded under AM1.5G solar simulator Solar Constant from KHS at 1000\\,W\/m$^2$ (1\\,sun). Results on the solar cells are shown in the Supporting Information.\n\n\\textbf{Optical Absorption Spectroscopy (OAS).}\nThe OAS of the as-produced inks was carried out in the 300-1000\\,nm range with a Cary Varian 5000i UV-vis-NIR spectrometer. The absorption spectra was acquired using a 1\\,mL quartz glass cuvette. The ink is diluted to 1:7 in NMP. The NMP solvent baseline was subtracted. 
The concentration of graphitic flakes is determined from the extinction coefficient at 660\,nm, using $A = \alpha l c$ where $l$ [m] is the light path length, $c$ [gL$^{-1}$] is the concentration of dispersed graphitic material, and $\alpha$ [Lg$^{-1}$m$^{-1}$] is the absorption coefficient, with $\alpha \sim 1390$\,Lg$^{-1}$m$^{-1}$ at 660\,nm.\cite{Lotya_2009}\n\n\textbf{Transmission electron microscopy (TEM).}\nThe morphology of the exfoliated flakes is characterized by using a JEOL JEM 1011 TEM with an acceleration voltage of 100\,kV. The sample preparation was performed by diluting the ink in NMP (1:10). 20\,\micro L of the diluted sample were drop cast on copper grids (200 mesh), and dried in vacuum overnight. Statistical analyses are fitted with log-normal distributions.\n\n\textbf{Atomic force microscopy (AFM).}\nThe dispersions are diluted 1:30 in NMP. 100\,\micro L of the dilutions are drop-cast onto Si\/SiO$_2$ wafers. AFM images are acquired with a Bruker Innova AFM in tapping mode using silicon probes (frequency = 300\,kHz, spring constant = 40\,Nm$^{-1}$). Statistical analyses are fitted with log-normal distributions.\n\n\textbf{Raman spectroscopy.}\nThe graphene inks are drop-cast onto Si\/SiO$_2$ wafers (LDB Technologies Ltd.) and dried under vacuum. Raman measurements are collected with a Renishaw inVia confocal Raman microscope using an excitation line of 514\,nm with a 100X objective lens, and an incident power of $\sim 1$\,mW on the sample. 20 spectra are collected for each sample. Peaks are fitted with Lorentzian functions.\n\n\textbf{Scanning electron microscopy (SEM).}\nElectrodes are imaged by means of a field-emission scanning electron microscope, FE-SEM (JEOL JSM-7500 FA). The acceleration voltage is set at 5\,kV. Images are collected using the in-lens sensors (secondary electron in-lens image, SEI) and the secondary electron sensor (lower secondary electron image, LEI). No coating is applied.\n\n\n\textbf{Photoluminescence spectroscopy (PL).}\nPL experiments were performed, in a quasi back-scattering geometry, keeping the samples in a closed cycle cryostat, and the temperature was changed from 10 to 300\,K. Time integrated photoluminescence (TI-PL) measurements were performed by exciting the samples with different mode-locked ps laser sources: a tunable (700--850\,nm and 350--425\,nm with the second harmonic generator) Ti-Sapphire laser operating at 81.3\,MHz repetition rate with 1.2\,ps pulses and a 4\,ps Rhodamine 6G dye laser synchronously pumped by the second harmonic of a mode-locked Nd-YAG laser, operating at 76\,MHz. The PL signal was spectrally dispersed by a 50\,cm monochromator providing a spectral resolution of 1\,meV and detected by a microchannel plate photomultiplier.\nTime resolved (TR-PL) measurements were carried out by exciting the samples with the ps dye laser operating at 600\,nm and using the time-correlated single photon counting (TCSPC) technique with a temporal resolution of about 60\,ps.\n\n\n\n\begin{acknowledgments}\nFB acknowledges funding from the Italian Ministry for Education, University and Research within the Futuro in Ricerca (FIRB) program (project DeLIGHTeD, Protocollo RBFR12RS1W). We warmly acknowledge Franco Bogani for fruitful discussions. This work was partially supported by ENTE CARIFI grant n.\,2015\/11162 and by the European Union's Horizon 2020 research and innovation programme under grant agreement n. 
696656 - GrapheneCore1.\n\\end{acknowledgments}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe doping phase diagram near the underdoped region is one of the\nimportant and long-debated issues with high-$T_{c}$ cuprates\n\\cite{LeeRMP06}.\nSince the parent compound is antiferromagnetic (AFM) Mott insulator,\nthe AFM correlation plays a significant role in the emergence of\nsuperconductivity by doping charge carries.\nThe intrinsic proximity of the superconducting (SC) phase with the\nAFM phase is also shared by the phase diagrams of other SC\nmaterials, such as iron pnictides \\cite{StewartRMP11} and heavy\nfermion superconductors \\cite{NicklasPRB07}.\nThe multi-layered cuprate superconductors exhibit the coexistence of\nAFM and SC states at underdoping discovered by nuclear magnetic\nresonance measurements \\cite{MukudaJPSJ08,MukudaJPSJ09}.\nHowever, the AFM phase and the SC phase never coexist in the phase\ndiagram of single-layered cuprates such as\nLa$_{2-x}$Sr$_{x}$CuO$_{4}$ \\cite{KeimerPRB92} and\nBi$_{2}$Sr$_{2}$CuO$_{6+\\delta}$ \\cite{KatoJSSC97}.\nIn particular, Bi$_{2}$(Sr$_{2-x}$La$_{x}$)CuO$_{6+\\delta}$ systems\nshows that the three-dimensional AFM region, separated by the SC\nphase, even survives until a high underdoping level\n\\cite{KawasakiPRL10}.\n\nThe existence of the coexisting state has been found by analytical\nand numerical approaches in Hubbard$-$type models\n\\cite{ReissPRB07,JarrellEPL01,AichhornPRB07,DSPRL05,KobayashiPhysicaC10,CaponePRB06,KancharlaPRB08}\nand $t-J-$type models\n\\cite{InabaPhysicaC96,HimedaPRB99,YamasePRB04,ShihPRB04,ShihLTP05,PathakPRL09,WatanabePhysicaC10}.\nThey seem to contribute the underlying mechanism to the coexisting\nstate observed in multi-layered cuprates.\nHowever, a proper mechanism to explain why these two phases do not\nlike to coexist in single-layered cuprates remains needed.\nInterestingly, some previous studies proposed the spin-bag mechanism\nfor superconductivity since two spin bags would attract each other\nto form a Cooper pair and lower the total energy\n\\cite{SchriefferPRB89,WengPRB90,EderPRB94}.\nAs for doping more holes, therefore, it is necessary to re-examine\nhow the local distortion of the AFM background around holes\ninfluences AFM order and SC order.\n\nOn the other hand, one of the most exciting experimental results is\nthe observation of quantum oscillations in the hole-doped cuprates\nwhich pointed to electron pockets \\cite{LeBoeufNat07,LeBoeufPRB11}.\nIn particular, they proposed that these electron pockets probably\noriginate from the Fermi surface reconstruction caused by the onset\nof a density-wave phase, e.g. 
the AFM phase.\nUnfortunately the electron-like Fermi pockets have never been found\nin most of hole-doped cuprates using angle-resolved photoemission\nspectroscopy (ARPES) \\cite{YangNat08,MengNat09,LuARCMP12}.\nThus, to comprehend the loss of the electron pocket observed by\nARPES experiments, we inquire to what extent into the electronic\ncorrelations ignored in mean-field calculations.\n\nIn this work, we study Gutzwiller's trial wave functions with the\ncoexistence of AFM order and SC order by means of variational Monte\nCarlo (VMC) method.\nTo improve the trial state, we further consider the off-site\ncorrelations between two electrons by applying suitable Jastrow\ncorrelators.\nSurprisingly, the long-range AFM order is strongly enhanced due to\nthe local ferromagnetic (FM) Jastrow correlation, or precisely local\nAFM distortion, giving rise to the disappearance of the coexisting\nstate near underdoping in the phase diagram.\nBesides, the spin-spin correlation in the non-coexisting state\ntransfers the quasiparticle spectral weight from the antinodal\nelectron pockets to the lower AFM band, and also the nodal hole\npockets can remain until superconductivity occurs.\nTherefore, it is expected that the signal of the electron pockets\naround antinodes cannot be found in many hole-doped compounds by\nusing ARPES.\n\n\\section{Theory}\nLet us begin by the Hamiltonian on a square lattice of size\n$16\\times16$,\n\\begin{eqnarray}\nH=-\\sum_{i,j,\\sigma}t_{ij}\\tilde{c}_{i\\sigma}^{\\dag}\\tilde{c}_{j\\sigma}+J\\sum_{\\langle\ni,j\\rangle}\\left(\\mathbf{S}_{i}\\cdot\\mathbf{S}_{j}-\\frac{1}{4}n_{i}n_{j}\\right),\n\\label{e:equ1}\n\\end{eqnarray}\nwhere the hopping $t_{ij}=t$, $t'$, and $t''$ for sites i and j\nbeing the nearest, second-nearest, and third-nearest neighbors,\nrespectively.\nOther notations are standard. 
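For orientation, the bare square-lattice dispersion built from these three hopping amplitudes, presumably the normal-state $\\varepsilon_{\\bf{k}}$ used below (its $t'$ part, $-4t'\\cos k_{x}\\cos k_{y}$, is the component plotted in Fig.\\ref{fig2}(d)), takes the standard tight-binding form
$$\\varepsilon_{\\bf{k}}=-2t\\left(\\cos k_{x}+\\cos k_{y}\\right)-4t'\\cos k_{x}\\cos k_{y}-2t''\\left(\\cos 2k_{x}+\\cos 2k_{y}\\right).$$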
We restrict the electron creation\noperators $\\tilde{c}_{i\\sigma}^{\\dag}$ to the subspace without\ndoubly-occupied sites.\nIn the following, the bare parameters $(t',t'',J)\/t$ in the\nHamiltonian are set to be in the hole-doped regime:\n$(-0.3,0.15,0.3)$.\nIn order to understand how AFM order and SC order compete in\nvariational phase diagram, we choose the mean-field ground state\nincluding both AFM order and SC order (AFSC) as a starting point,\n\\begin{eqnarray}\n|\\Psi_{AFSC}\\rangle=\\prod'_{\\bf{k},s=\\{a,b\\}}\\gamma_{\\bf{k}\\uparrow}^{s}\\gamma_{-\\bf{k}\\downarrow}^{s}|0\\rangle,\\label{e:equ2}\n\\end{eqnarray}\nwhere the prime means the product only includes momenta inside the\nmagnetic zone boundary (MZB).\nNote that $s$ represents the quasiparticle coming from the upper AFM\nband ($s=b$) or the lower AFM band ($s=a$).\nThe Bogoliubov's quasiparticle operators $\\gamma_{\\bf{k}\\sigma}^{s}$\nare defined as\n\\begin{eqnarray}\n\\gamma_{\\bf{k}\\sigma}^{s}=u_{\\bf{k}}^{s}\\hat{s}_{\\bf{k}\\sigma}-\\sigma\nv_{\\bf{k}}^{s}\\hat{s}_{-\\bf{k}\\bar{\\sigma}}^{\\dag}.\\label{e:equ5}\n\\end{eqnarray}\nThe coefficients $u_{\\bf{k}}^{s}$ and $v_{\\bf{k}}^{s}$ are the BCS\ncoherence factor of AFM quasiparticles corresponding to the $s$\nband,\n\\begin{eqnarray}\n(u_{\\bf{k}}^{s})^{2}&=&\\frac{1}{2}\\left(1+\\frac{\\xi_{\\bf{k}}^{s}}{\\sqrt{(\\xi_{\\bf{k}}^{s})^{2}+\\Delta_{\\bf{k}}^{2}}}\\right),\\nonumber\\\\\n(v_{\\bf{k}}^{s})^{2}&=&1-(u_{\\bf{k}}^{s})^{2},\\label{e:equ6}\n\\end{eqnarray}\nwhere the AFM band dispersion\n$\\xi_{\\bf{k}}^{b\/a}=\\epsilon_{\\bf{k}}^{+}\\pm\\sqrt{(\\epsilon_{\\bf{k}}^{-})^{2}+m^{2}}$\nand\n$\\epsilon_{\\bf{k}}^{\\pm}\\equiv\\left(\\varepsilon_{\\bf{k}}\\pm\\varepsilon_{\\bf{k+Q}}\\right)\/2$.\nHere $\\varepsilon_{\\bf{k}}$ is the normal-state dispersion.\n$\\Delta_{\\bf{k}}$($=2\\Delta\\left(\\cos\\bf{k_{x}}-\\cos\\bf{k_{y}}\\right)$)\nis $d$-wave pairing amplitude and $m$ AFM order parameter.\nThe annihilation operators for AFM bands, $\\hat{s}_{\\bf{k}\\sigma}$,\nare given by\n\\begin{eqnarray}\n\\left(\n \\begin{array}{c}\n a_{\\bf{k}\\sigma} \\\\\n b_{\\bf{k}\\sigma} \\\\\n \\end{array}\n \\right)=\\left(\n \\begin{array}{cc}\n \\alpha_{\\bf{k}} & \\sigma\\beta_{\\bf{k}} \\\\\n -\\sigma\\beta_{\\bf{k}} & \\alpha_{\\bf{k}} \\\\\n \\end{array}\n \\right)\\left(\n \\begin{array}{c}\n c_{\\bf{k}\\sigma} \\\\\n c_{\\bf{k+Q}\\sigma} \\\\\n \\end{array}\n \\right),\\label{e:equ3}\n\\end{eqnarray}\nwith $\\bf{Q}=(\\pi,\\pi)$ and the coefficients\n\\begin{eqnarray}\n\\alpha_{\\bf{k}}^{2}&=&\\frac{1}{2}\\left(1-\\frac{\\epsilon_{\\bf{k}}^{-}}{\\sqrt{(\\epsilon_{\\bf{k}}^{-})^{2}+m^{2}}}\\right),\\nonumber\\\\\n\\beta_{\\bf{k}}^{2}&=&1-\\alpha_{\\bf{k}}^{2}.\\label{e:equ4}\n\\end{eqnarray}\n\nIn order to introduce more correlations in the mean-field wave\nfunction, we first formulate the trial wave function fixing the\nnumber of electrons $\\hat{P}_{N_{e}}$ with on-site Gutzwiller\nprojector\n$\\hat{P}_{G}(=\\prod_{i}\\left(1-\\hat{n}_{i\\uparrow}\\hat{n}_{i\\downarrow}\\right))$\nand charge-charge Jastrow correlator ($\\hat{P}_{J}^{CC}$)\n\\cite{CPC08,CPC12},\n\\begin{eqnarray}\n|\\Psi_{CC}\\rangle=\n\\hat{P}_{N_{e}}\\hat{P}_{G}\\hat{P}_{J}^{CC}|\\Psi_{AFSC}\\rangle.\\label{e:equ7}\n\\end{eqnarray}\nMore importantly, we also consider the correlation between spins by\nusing spin-spin Jastrow correlator ($\\hat{P}_{J}^{SS}$),\n\\begin{eqnarray}\n|\\Psi_{CCSS}\\rangle=\\hat{P}_{J}^{SS}|\\Psi_{CC}\\rangle.\\label{e:equ82}\n\\end{eqnarray}\nThe Jastrow correlator 
is constructed by classical Boltzmann\noperator, $\\hat{P}_{J}^{i}=e^{\\hat{H}_{i}}$, encoding the intersite\ncorrelations.\nFor the sake of simplicity, $\\hat{H}_{i}$ depicting charge ($i=CC$)\nand spin ($i=SS$) parts are chosen to be diagonal in real-space\nconfiguration.\nThe charge-charge Jastrow correlator describes the short- and\nlong-range correlations between holes in the lattice system.\nThus,\n\\begin{eqnarray}\n\\hat{H}_{CC}=\\sum_{i1$) and repulsive short-range ($r_{ij}<1$) correlations\nbetween holes if $\\alpha>0$.\n\nA similar formalism to the spin-spin correlation has been considered\nat half-filling \\cite{HuesPRL88}.\nWe further imitate the formalism described above to write down the\nspin-spin Jastrow correlator,\n\\begin{eqnarray}\n\\hat{H}_{SS}=\\sum_{i0$ ($<0$).\nIn addition to the parameter $\\beta$ controlling the long-range spin\ncorrelations, we consider the other three parameters\n$w_{\\gamma=1,2,3}$ for the neighboring spin-spin correlations.\nFor example of the FM case, the short-range correlation would be\nsuppressed when $w_{\\gamma}<1$.\nOn the other hand, the factor $r_{ij}^{\\beta}$ control the\nlong-range ($r_{ij}>1$) and short-range ($r_{ij}<1$) correlations.\nIn the long-range case of $\\beta<0$, for instance, $r_{ij}^{\\beta}$\nwould decrease the FM correlation but conversely increase the AFM\ncorrelation.\n\nIn addition to the ground state, we also propose a trial wave\nfunction for the low-lying excitation of the Gutzwiller-projected\ncoexisting state simply generated by Gutzwiller projecting the\nmean-field excited state\n\\begin{eqnarray}\n|\\Psi_{AFSC}^{\\bf{k}\\sigma\ns}\\rangle=\\left(\\gamma_{\\bf{k}\\sigma}^{s}\\right)^{\\dag}|\\Psi_{AFSC}\\rangle.\\label{e:equ9}\n\\end{eqnarray}\nHere we have applied the particle-hole transformation\n\\cite{YokoyamaJPSJ88,CPCPRB12} into Eq.(\\ref{e:equ9}) to avoid the\ndivergence from the nodes of the mean-field wave function.\nThe Gutzwiller-projected excited state with both AFM order and SC\norder fixing to $N_{e}-1$ electrons is written as\n\\begin{eqnarray}\n|\\Psi_{\\bf{k}\\sigma}^{s}\\rangle=\n\\hat{P}_{N_{e}-1}\\hat{P}_{G}\\hat{P}_{J}^{CC}\\hat{P}_{J}^{SS}|\\Psi_{AFSC}^{\\bf{k}\\sigma\ns}\\rangle.\\label{e:equ10}\n\\end{eqnarray}\nHence we can compute the excitation energies\n$E_{k}(\\equiv\\langle\\Psi_{\\bf{k}\\sigma}^{s}|H|\\Psi_{\\bf{k}\\sigma}^{s}\\rangle-\\langle\\Psi_{0}|H|\\Psi_{0}\\rangle)$\nfor either upper ($s=b$) or lower ($s=a$) AFM quasiparticles.\nFurthermore, the quasiparticle spectral weight measured from ARPES\ncan be obtained by calculating\n\\begin{eqnarray}\nZ_{\\bf{k}}^{-}\\equiv\\frac{\\left|\\langle\\Psi_{\\bf{k}\\sigma}^{s}|c_{-\\bf{k}\\bar{\\sigma}}|\\Psi_{0}\\rangle\\right|^{2}}{\\langle\\Psi_{\\bf{k}\\sigma}^{s}|\\Psi_{\\bf{k}\\sigma}^{s}\\rangle\\langle\\Psi_{0}|\\Psi_{0}\\rangle}.\\label{e:equ11}\n\\end{eqnarray}\nSome details in the VMC calculation should be noticed.\nThe boundary condition we use is periodic along both directions.\nIn order to achieve a reasonable acceptance ratio, the simulation\nconsists of a combination of one-particle moves and two-particle\nmoves.\nThe variational parameters of the Gutzwiller-projected coexisting\nstate are optimized by using the stochastic reconfiguration method\n\\cite{SorellaPRB01}.\nAll physical quantities are evaluated using the optimized\nparameters.\nWe also take a sufficient number of samples ($=2\\times10^{5}$) to\nreduce the statistical errors, and keep the sampling interval\n($\\sim40$) long enough to ensure statistical 
independence between\nsamples.\n\n\\section{Results}\n\n\\begin{figure}[t]\n\\begin{center}\\rotatebox{0}{\\includegraphics[height=3.5in,width=3in]{fig1.png}}\\end{center}\n\\caption{(a) Variational phase diagram plotted by staggered\nmagnetization $M_{s}$ (squares) and superconducting order parameter\n$\\Delta_{SC}$ (circles). Filled and empty symbols represent\n$|\\Psi_{CC}\\rangle$ and $|\\Psi_{CCSS}\\rangle$, respectively. (b) The\ndifference of the energy components between $|\\Psi_{CCSS}\\rangle$\nand $|\\Psi_{CC}\\rangle$ as a function of hole doping $\\delta$ in\n$16\\times16$ lattice.}\\label{fig1}\n\\end{figure}\n\nWe first consider the trial state with only the charge-charge\nJastrow correlator to better demonstrate the variational phase\ndiagram.\nThen we further include the spin-spin Jastrow correlator to\nsee how the phase diagram changes.\nOrder parameters shown in the phase diagram are determined by the\nstaggered magnetization\n\\begin{eqnarray}\nM_{s}=\\frac{1}{N}\\sum_{i}\\langle\\hat{S}_{i}^{z}\\rangle\ne^{i\\bf{Q}\\cdot\\bf{R}_{i}}\\label{e:mag}\n\\end{eqnarray}\nand the long-range pair-pair correlation function\n\\begin{eqnarray}\nC_{PP}(R)=\\frac{1}{N}\\sum_{i,\\alpha,\\alpha'}\\lambda_{\\alpha,\\alpha'}\\langle\\Delta_{i,\\alpha}^{\\dag}\\Delta_{i+R,\\alpha'}\\rangle.\\label{e:ppcf}\n\\end{eqnarray}\nThe creation operator\n$\\Delta_{i,\\alpha}^{\\dag}$($\\equiv\\tilde{c}_{i\\uparrow}^{\\dag}\\tilde{c}_{i+\\alpha\\downarrow}^{\\dag}-\\tilde{c}_{i\\downarrow}^{\\dag}\\tilde{c}_{i+\\alpha\\uparrow}^{\\dag}$)\ncreates a singlet on the bond $(i,i+\\alpha)$, $\\alpha=x,y$.\nThe factor $\\lambda_{\\alpha,\\alpha'}$ describes $d$-wave symmetry:\n$\\lambda_{\\alpha,\\alpha'}=1$($-1$) as\n$\\alpha=\\alpha'$($\\alpha\\neq\\alpha'$).\n\nIn Fig.\\ref{fig1}(a), without the spin-spin Jastrow correlators as\nindicated by $|\\Psi_{CC}\\rangle$, there exists a region showing the\ncoexistence of AFM order and SC order within doping\n$\\delta\\lesssim0.125$ in the phase diagram\n\\cite{ShihLTP05,PathakPRL09,WatanabePhysicaC10}, where $M_{s}$ and\n$\\Delta_{SC}$($\\equiv\\sqrt{C_{PP}(R>2)}$) are finite.\nLet us turn to the case with both charge-charge and spin-spin\nJastrow correlators denoted by $|\\Psi_{CCSS}\\rangle$.\nObviously the coexisting region disappears and a clear boundary\nseparating the AFM phase and the SC phase shows up at doping\n$\\delta=0.156$.\nNote that near the boundary the spin-spin Jastrow correlator can\ngreatly improve the ground-state energies from $0.3\\%$ to $0.7\\%$.\nFrom the numerical optimization, we find the spin-spin Jastrow\ncorrelator can provide a conduit to vary the mean-field AFM order in\n$|\\Psi_{AFSC}\\rangle$.\nSurprisingly, the optimized spin-spin Jastrow parameters slightly\ndisplay short-range FM correlations in the AFM background (e.g. 
at\n$\\delta=0.156$ the spin-spin Jastrow weights $w_{\\gamma}$ for\n$\\gamma=1$, $2$ and $3$ would be increased to $1.12$, $1.02$ and\n$1.01$, respectively).\nThe local FM correlation introduced by the Jastrow factors is\nharmful to the mean-field AFM order.\nTo make them balance, it is inevitable to largely enhance the AFM\nbackground in $|\\Psi_{AFSC}\\rangle$.\nThe surprising competition between AFM order and SC order near the\nphase boundary is mainly due to the hugely enhanced AFM order\nfurther leading to the diminished SC order.\n\nTo further demonstrate the energy competition, we analyze the\ndifference of the energy components in the Hamiltonian between\n$|\\Psi_{CCSS}\\rangle$ and $|\\Psi_{CC}\\rangle$ shown in\nFig.\\ref{fig1}(b).\nOur data clearly show that within $0.04<\\delta<0.2$ the spin Jastrow\ncorrelator helps the trial mean-field state gain much more energy\nfrom the second-nearest-neighbor hopping term.\nOn the other hand, the competing energy primarily comes from the\nspin-spin superexchange interaction.\nFrom real-space point of view, holes prefer to move along diagonal\ndirection in strong AFM background so that the hopping energy from\nthe second nearest neighbors ($t'$) is likely to compete with the\nsuperexchange energy ($J$).\n\n\\begin{figure}[t]\n\\begin{center}\\rotatebox{0}{\\includegraphics[height=2.4in,width=3.4in]{fig2.png}}\\end{center}\n\\caption{The difference of the momentum distribution function\nbetween $|\\Psi_{CCSS}\\rangle$ and $|\\Psi_{CC}\\rangle$ for doping (a)\n$\\delta=0.125$, (b) $\\delta=0.156$ and (c) $\\delta=0.188$ plotted in\nthe first Brillouin zone. (d) The next-nearest-neighbor energy\ncomponent, $-4t'\\cos(k_{x})\\cos(k_{y})$. The black diamond is the\nhalf-filled Fermi surface. White (Purple) regions present the\npositive (negative) values. 
Red lines mean zero.}\\label{fig2}\n\\end{figure}\n\nIn momentum space, it is apparent that the $t'$ energy gain would\ninfluence how the band dispersion evolves from Fermi pocket to Fermi\nsurface as increasing doping.\nIn Fig.\\ref{fig2}(a)-(c), the difference of the momentum\ndistribution function between $|\\Psi_{CCSS}\\rangle$ and\n$|\\Psi_{CC}\\rangle$ shows how electrons distribute in the band\nstructure.\nAt $\\delta=0.125$ (Fig.\\ref{fig2}(a)), obviously electrons in the\nsystem would prefer to stay around \"hot spots\" rather than living\nnear nodes and antinodes, which hole pockets and electron pockets\nseem to be observed as well.\nThe hot spot is defined as the momenta along the MZB that can be\nconnected by ($\\pi,\\pi$) momentum scattering.\nOnce doping is increased to $0.156$ which is the phase boundary\n(Fig.\\ref{fig2}(b)), hole pockets become larger and electron pockets\nslightly shrink.\nNow that electrons like to circle just outside the electron pockets,\nthey attempt to form a large Fermi surface.\nIndeed, as further increasing doping to $0.188$ where the long-range\nAFM order almost disappears (Fig.\\ref{fig2}(c)), a clear Fermi\nsurface in which electrons cluster together can be seen.\nSo far, we also understand the reason why the system gain much\nenergy from $t'$ term since the hot spots are located right at the\npurple region shown in Fig.\\ref{fig2}(d).\n\n\\begin{figure}[t]\n\\begin{center}\\rotatebox{0}{\\includegraphics[height=3.7in,width=2.4in]{fig3.png}}\\end{center}\n\\caption{(a) Spin-spin, (b) hole-hole and (c) pair-pair correlation\nfunctions for the optimized state $|\\Psi_{CCSS}\\rangle$\n($|\\Psi_{CC}\\rangle$), denoted by red circle (black square) symbols.\nThere are 40 doped holes in $16\\times16$ lattice\n($\\delta=0.156$).}\\label{fig3}\n\\end{figure}\n\nIn Fig.\\ref{fig3}, we compute the spin-spin, hole-hole and pair-pair\ncorrelation functions (already shown in Eq.(\\ref{e:ppcf})) defined\nas,\n\\begin{eqnarray}\nC_{CC}(\\bf{R})&=&\\frac{1}{N}\\sum_{i}\\langle\\hat{n}_{i}^{h}\\hat{n}_{i+\\bf{R}}^{h}\\rangle,\\\\\nC_{SS}(\\bf{R})&=&\\frac{1}{N}\\sum_{i}\\langle\n\\hat{S}_{i}^{z}\\hat{S}_{i+\\bf{R}}^{z}\\rangle\ne^{i\\bf{Q}\\cdot\\bf{R}}.\\label{e:equ12}\n\\end{eqnarray}\nThe doping density we choose to present is $0.156$.\nFigure \\ref{fig3}(a) illustrates that the spin-spin Jastrow\ncorrelators indirectly induce the stronger AFM background showing a\nconstant tail in the staggered spin-spin correlation function which\nimplies a clear AFM order.\nNote that The enhancement of the AFM order mainly arises from the\nmean-field wave function $|\\Psi_{AFSC}\\rangle$.\nFurthermore, we find in Fig.\\ref{fig3}(b) that the hole-hole\ncorrelation function makes no difference even if including the\nspin-spin Jastrow correlators, except that the short-range part\nbecomes less staggered.\nFor spin and charge, there is no correlation for their long-range\nbehavior.\nFinally, we can also see in Fig.\\ref{fig3}(c) that as considering\n$\\hat{P}_{J}^{SS}$ the pair-pair correlation almost vanishes at\nlarge distances so that the SC properties is not available.\n\nNext, it would be interesting to examine the low-lying\nsingle-particle excitation spectra near the phase boundary.\nIn Fig.\\ref{fig4}, by applying the ansatz (Eq.(\\ref{e:equ10})) to\nthe single-particle excitation, we calculate two quasiparticle band\ndispersions ($s=a,b$) and their corresponding spectral weight for\nremoving one particle defined by Eq.(\\ref{e:equ11}).\nIn order to compare with the 
excitations with\/without spin-spin\nJastrow correlators $\\hat{P}_{J}^{SS}$, we plot their excitation\nenergy $E_{\\bf{k}}$ along the high symmetric momenta in\nFig.\\ref{fig4}(a).\nIn the case where the trial state only includes the charge-charge\nJastrow factors, its optimized mean-field parameters $\\Delta\\gg m$.\nDue to large $d$-wave BCS pairing contribution, the dispersions thus\nshow convex around the antinodes and almost zero gap between the two\nbands at nodes.\nEspecially, the upper AFM band is beneath the lower AFM band near\nthe antinodal regions, and hence there is a clear signal of electron\npockets arising from the upper AFM band shown in Fig.\\ref{fig4}(c).\n\n\\begin{figure}[t]\n\\begin{center}\\rotatebox{0}{\\includegraphics[height=2in,width=3.4in]{fig4.png}}\\end{center}\n\\caption{(a) The quasi-particle excitation dispersion $E_{\\bf{k}}$\nfor different optimized states (denoted in the legend of (c)) along\nhigh symmetric momenta at $\\delta=0.156$. Empty (Filled) symbols\nrepresent the upper (lower) AFM band and squares (circles) the trial\nstate $|\\Psi_{CC}\\rangle$ ($|\\Psi_{CCSS}\\rangle$). Due to much\nsmaller $\\Delta$ than $m$ for the trial state $|\\Psi_{CCSS}\\rangle$,\nwe simply plot the lower AFM band (red circles) below the Fermi\nlevel (pink line) except the nodal regions for clear demonstration.\nThe quasiparticle spectral weight $Z_{\\bf{k}}^{-}$ are obtained from\n(b) the lower AFM band and (c) the upper AFM band.}\\label{fig4}\n\\end{figure}\n\nWhen further considering spin-spin Jastrow correlators, the\noptimized mean-field parameters $m\\gg\\Delta$.\nSuch a huge AFM parameter $m$ gives rise to a typical AFM band\ndispersion and opens a AFM gap between these two bands at nodes, as\nindicated by red circles in Fig.\\ref{fig4}(a).\nInterestingly, Fig.\\ref{fig4}(c) shows that near antinodes the\nquasiparticle spectral weight of the upper AFM band disappear and\ntransfer to almost entire lower AFM band (see Fig.\\ref{fig4}(b)).\nIn particular, a clear hole pocket of the lower AFM band centering\naround $\\bf{Q}\/2$ is also observed in Fig.\\ref{fig4}(b).\nThe Gutzwiller and Jastrow correlators arising from electronic\ncorrelation firmly influence the low-lying quasiparticle excitation\nspectra of the mean-field state $|\\Psi_{AFSC}\\rangle$.\nTherefore, the loss of the electron pockets due to electron\ncorrelations provides a route to figure out why electron pockets\nhave never been found in most of hole-doped cuprates measured by\nARPES.\n\n\\section{Conclusions}\nSumming up, by using VMC approach we have studied the coexisting\nstate with both AFM order and SC order simultaneously underneath the\nGutzwiller's projection and Jastrow correlators.\nWe have thereby re-examined the variational ground-state phase\ndiagram and found that the AFM phase competes with the SC phase as\nfurther considering off-site spin correlations.\nThe reasoning for the competition is that the mean-field AFM order\nis considerably enhanced due to short-range FM correlation\nintroduced by the spin-spin Jastrow factors, further leading to the\nvanished SC order.\nAs well, we have first investigated the Gutzwiller-projected\nquasiparticle excitations of the coexisting state.\nBased on the Gutzwiller ansatz, passing through the boundary between\nAFM and SC phases, we have observed the loss of electron pockets\nnear antinodes coming from the upper AFM band and the occurrence of\nhole pockets near nodes arising from the lower AFM band as long as\nthe spin-spin Jastrow correlators 
are included.\nTherefore, such a strongly correlated electron system needs to be\ncarefully inspected in the explanation for the low-lying\nquasiparticle excitations observed by ARPES experiments.\n\n\\section{Acknowledgments}\n\\label{Acknowledgment} Greatly thanks S.-M. Huang, W. Ku and T.-K.\nLee for helpful discussions. This work is supported by the\nPostdoctoral Research Abroad Program sponsored by National Science\nCouncil in Taiwan with Grant No. NSC 101-2917-I-564-010 and by CAEP\nand MST. All calculations are performed in the National Center for\nHigh-performance Computing in Taiwan.\n\\\\\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzdmkv b/data_all_eng_slimpj/shuffled/split2/finalzzdmkv new file mode 100644 index 0000000000000000000000000000000000000000..b1db10e4bc57c2982365f901fcd542f34fc59b96 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzdmkv @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\n\n\n\n\nThis paper is part of a program aimed to understand equivariant and $K$-theoretic invariants for Artin groups.\nThese groups are ubiquitious in geometric group theory, as they comprehend many different subfamilies of groups (braid groups, right-angled, spherical type, extra-large, free groups...), provide examples and counterexamples for many interesting phenomena, and at the same time they are not well understood from a global point of view: it is not even known if every Artin group is torsion-free. See \\cite{Par09} for a good survey. In this context, Azzali \\emph{et al.} have recently computed explicitly both sides of Baum-Connes for pure braid groups \\cite{ABGRW}; in joint work of the second author with J. Gonz\\'alez-Meneses \\cite{FlGo18} it is computed the minimal dimension of a model of ${\\underline{\\underline{E}}} G$ for braid groups; J. and virtually-cyclic dimensions of mapping class groups (which include in particular certain Artin groups) have been recently investigated by Aramayona \\emph{et al.} (\\cite{AJT18}, \\cite{ArMa}) and by Petrosyan and Nucinkis \\cite{NuPe18}.\n\n \n\nIn the present paper we consider the case of Artin groups of dihedral type, denoted by $A_n$, which are the groups defined by\n$$A_n=\\gp{a,b}{\\mathrm{prod}(a,b;n)=\\mathrm{prod}(b,a;n)}.$$\nHere $\\mathrm{prod}(x,y;n)$ denotes the word of length $n$ that alternates $x$ and $y$ and starts with $x$.\n These groups are one-relator and torsion-free, they have small geometric dimension and its group-theoretic structure is quite well understood. These features makes them strongly appropriate for computations, and in fact have been recently studied from different angles, as growth series \\cite{MaMa06}, systems of equations \\cite{CHR20} or geodesics \\cite{Wal09}.\n\nWe compute here Bredon homology groups of the classifying space of these groups with respect to the family of virtually cyclic groups. Bredon homology was first described by Glen Bredon in the sixties \\cite{Bre67}, and it is a $G$-equivariant homology theory that takes into account the action over the target space of the subgroups of $G$ that belong to a predefined family. Since their appearance, Bredon homology groups have played a prominent role in Homotopy Theory and Group Theory, particularly in relation with finiteness conditions, dimension theory for groups and classifying spaces, and in the framework of the Isomorphism Conjectures. 
In fact, our choice of coefficients in the $K$-theory of a ring $G$ has been done with an eye in possible applications of the computations to the left-hand side of Farrell-Jones Conjecture (see below), although we believe that the methods of the present paper may be useful in a more general context, switching the $K$-theoretic coefficients to a general coefficient module.\n\nOur main technical result is the following:\n\n\n\n\\noindent\\textbf{Theorem \\ref{Thm:Bredon}.} Let $A_n$ be an Artin group of dihedral type and $R$ a ring, $R$ a ring, $K_q(R[-])$ the covariant module over the orbit category $O_{\\mathcal F}(G)$ that sends every (left) coset $G\/H$ to the $K$-theory group $K_q(R[H])$. Then we have the following:\n\\begin{enumerate}\n \\setlength{\\itemindent}{-2em}\n\\item $H_i^{vc}({\\underline{\\underline{E}}} A_n,K_q (R[-]))=\\{0\\}$ for $i\\geq 4$.\n\\item $H_3^{vc}({\\underline{\\underline{E}}} A_n,K_q (R[-]))=\\begin{cases}\\bigoplus_{[H]\\neq [Z(A_n)]} K_q(R) & \\text{$n$ odd}\\\\\n\\ker g_2^2 & \\text{$n$ even.}\n\\end{cases}$\n\n\\item $H_2^{vc}({\\underline{\\underline{E}}} A_n,K_q (R[-]))= \\ker g_2^1$.\n\n\\item $H_1^{vc}({\\underline{\\underline{E}}} A_n,K_q (R[-]))= \\emph{coker } g_2^1 = \\begin{cases}(\\bigoplus_{[H]\\neq [Z(A_n)]}N_q^{[H]})\\oplus T_1(K_q(R))\\oplus T_2(K_q(R)) & \\text{$n$ odd}\\\\\n\\bigoplus_{[H]} N_q^{[H]}\n & \\text{$n$ even.}\n\\end{cases}$%\n\\item $H_0^{vc}({\\underline{\\underline{E}}} A_n,K_q (R[-]))= \\emph{ coker } g_2^0= \\begin{cases}(\\bigoplus_{[H]\\neq [Z(A_n)]} N_q^{[H]})\\oplus K_q(R)\\oplus\\overline{C}(K_q(R)) & \\text{$n$ odd}\\\\\n(\\bigoplus_{[H]} N_q^{[H]})\\oplus K_q(R)\n & \\text{$n$ even.}\n \\end{cases}$%\n\\end{enumerate}\n\n\nHere $g^j_i$ stands for a homomorphism in the Degrijse-Petrosyan exact sequence {\\cite[Section 7]{DP}}, $[H]$ for the commensurability class of a {non-trivial} cyclic subgroup $H$, and $N_q^{[H]}$, $T_i(K_q(R))$ and $\\overline{C}(K_q(R))$ for groups that depend on $H$ and the Bass-Heller-Swan decomposition of $K_q(R[\\Z])$. See Section \\ref{Sect:BredonArtin} for details.\n\n\n\n\nThe main tool used in the proof of Theorem \\ref{Thm:Bredon} is an exact sequence in Bredon homology \\cite{DP}, which is in turn the Mayer-Vietoris sequence associated to the push-out that defines the L\\\"{u}ck-Weiermann model for ${\\underline{\\underline{E}}} G$ (\\cite{LW}, see also Section \\ref{Sect:Prelim} below). The knowledge about the group-theoretic structure of the groups includes a complete understanding of the commensurators, which is crucial in the computations. Using the theorem, we are able to describe with precision the Bredon homology of $A_n$ with coefficients in the $K$-theory of several rings, both regular and non-regular (see Section \\ref{Sect:concrete}). \n\nNext we describe the implications of our work in relation with the Farrell-Jones Conjecture. 
Recall that given a group $G$, a ring $R$ and $n\\in\\mathbb{Z}$, Farrell-Jones stated in \\cite{FaJo93} the existence of an assembly map:\n\n$$H_n^G({\\underline{\\underline{E}}} G,\\mathbf{K} (R))\\rightarrow K_n(RG),$$\nwhere ${\\underline{\\underline{E}}} G$ is the classifying space of $G$ with respect to the family of virtually cyclic groups,\n$H_*^G(-,\\mathbf{K} (R))$ is the $G$-homology theory defined in Section 1 of \\cite{FaJo93} and $K_n(RG)$ stands for the $n$-th group of algebraic $K$-theory of the group ring $RG$.\nThe Farrell-Jones conjecture predicts that the assembly map is an isomorphism.\nThe conjecture has been verified for a big family of groups (see \\cite{LR05} for an excellent survey), and no counterexample has been found so far.\nThe philosophy in this context is, for a group for which the assembly map is known to be an isomorphism, to perform computations in the topological side in order to extract information about the algebraic $K$-theory of the group ring. It is remarkable that the latter are difficult to compute, and at the same time encode fundamental invariants of manifolds, including obstructions to the existence of cobordisms and information about groups of pseudoisotopies. Explicit calculations in this context can be found in \\cite{BuSa16}, \\cite{DQR11}, \\cite{KLL21} or \\cite{SaVe18}, for example.\n\nThe left-hand side of the conjecture can be approached by means of a $G$-equivariant version of the Atiyah-Hirzebruch spectral sequence, which converges to the Farrell-Jones $K$-homology, and whose $E_2$-page is the Bredon homology of the classifying space ${\\underline{\\underline{E}}} G$ with coefficients in the $K$-theory of the group rings of the virtually cyclic subgroups of $G$; example of such calculations in the Farrell-Jones framework can be found in \\cite{BJV14} \\cite{LuRo14}. In this setting, Theorem \\ref{Thm:Bredon} and the examples of Section \\ref{Sect:concrete} can be interpreted as an explicit computation of such $E_2$-page, in the case of Artin groups of dihedral type. We remark that in the case of $R$ regular, the left-hand side of the conjecture for these Artin groups can be deduced from \\cite[Lemma 16.12]{Luc21}, using previously a splitting result of L\\\"{u}ck-Steimle \\cite{LS16} (see the end of Section \\ref{Sect:concrete} for details), so our results provide new information in this context for a non-regular $R$; in this sense, we expect that Examples 5.3-5.5 will be useful. It is worth to point out that the Atiyah-Hirzebruch spectral sequence collapses at most at the $E_5$-page in this context, so an (at least partial) computation of the differentials may not be completely out of sight.\n\nWe finish by pointing out that Artin groups of dihedral type are free-by-cyclic, and the Farrell-Jones conjecture has been recently verified for this class of groups \\cite{BFW21}. Hence, all the computations in the left-hand side can be read in terms of algebraic $K$-theory of $RG$.\n\n\n\n\\textbf{Summary of contents}. In Section \\ref{Sect:Prelim} we recall the main definitions about classifying spaces for families and Bredon homology, with special emphasis in the L\\\"{u}ck-Weiermann model and its associated Mayer-Vietoris sequence. In Section \\ref{Sect:Artin} the main properties of Artin groups that will be used on the rest of the paper are studied. 
Then, in Section \\ref{Sect:BredonArtin}, we carefully analyze the homomorphisms in the exact sequence and prove Theorem \\ref{Thm:Bredon}; and in final Section \\ref{Sect:concrete} we apply our results to describe different concrete examples.\n\n\n\\section{Preliminaries}\n\\label{Sect:Prelim}\n\nIn this section we state some notions of $G$-equivariant homotopy that will frequently appear in the rest of the paper.\nThe exposition will be sketchy and very focused to our goals; the reader interested in a thorough treatment of the subject is referred to \\cite{TDieck} for the theory of $G$-$CW$-complexes and actions on them, to \\cite{Luc05} for the theory of classifying spaces and to the first part of \\cite{MiVa03} for Bredon homology.\n\n\\subsection{Classifying spaces for families}\n\\label{Sect:classify}\nIn this section we will briefly recall the notion of classifying space for a family of subgroups, which is the central object in the topological side of the Isomorphism Conjectures. Then we will review L\\\"{u}ck-Weiermann model and the definition of commensurator, which will be crucial in our computations.\n\n\\begin{defn}\n\nLet $G$ be a discrete group, and $\\mathcal{F}$ be a family of subgroups of $G$ closed under passing to subgroups and conjugation. A $G$-CW-complex $X$ is a \\emph{classifying space for the family} $\\mathcal{F}$ if for every $H\\in \\mathcal{F}$ the fixed-point set $X^H$ is contractible, and empty otherwise.\n\n\\end{defn}\n\nThe classifying space for the family $\\mathcal{F}$ is usually denoted by $E_{\\mathcal{F}}G$.\nMoreover, two models for $E_{\\mathcal{F}}G$ are $G$-homotopy equivalent. A point is always a model for $E_{\\mathcal{F}}G$ if $G\\in \\mathcal{F}$, and the closeness under subgroups implies that $E_{\\mathcal{F}}G$ is always a contractible space.\n\nIf there is a family ${\\mathcal{F}}$ of subgroups of $G$ with the previous closeness properties and a subgroup $H\\leqslant G$, we denote by ${\\mathcal{F}}\\cap H$ the family whose elements are the intersections $F\\cap H$, with $F\\in {\\mathcal{F}}$.\nThe family ${\\mathcal{F}}\\cap H$ of subgroups of $H$ is again closed under $H$-conjugation and taking subgroups.\nIn these conditions the action of $H$ over $E_{{\\mathcal{F}}}G$ by restriction turns $E_{{\\mathcal{F}}}G$ into a model for $E_{{\\mathcal{F}}\\cap H}H$.\n\n\nThe most important families of subgroups in this context are the trivial family $\\mathcal F_{\\{1\\}}$, the family $\\mathcal{F}_{Fin}$ of finite groups and the family $\\mathcal{F}_{vc}$ of virtually cyclic groups of $G$; the classifying spaces for these families are respectively denoted by $EG$, $\\underline{E} G$ and ${\\underline{\\underline{E}}} G$. Observe that $\\mathcal F_{\\{1\\}}\\subseteq \\mathcal{F}_{Fin}\\subseteq \\mathcal{F}_{vc}$, and that $\\mathcal F_{\\{1\\}}=\\mathcal{F}_{Fin}$ if and only if $G$ is torsion-free.\nIt is also a standard argument to show that torsion-free virtually cyclic groups are cyclic (see for example Lemma 3.2 in \\cite{Mac96}).\nThen, for torsion-free groups, $\\mathcal F_{vc}$ is the set of cyclic subgroups.\n\nFrom now on we will describe the model of ${\\underline{\\underline{E}}} G$ developed by L\\\"{u}ck-Weiermann in \\cite{LW}, for the special families we are interested (the construction is indeed more general). 
Given a group $G$, and two subgroups $H$ and $K$, we consider the equivalence relation generated by $H\\sim K$ if $H\\cap K$ has finite index in both $H$ and $K$.\nObserve that if $H$ is in ${\\mathcal F}_{vc}\\setminus \\mathcal{F}_{Fin}$ then $H\\sim K$ if and only if $K$ is virtually cyclic and $H\\cap K$ is infinite.\nAlso remark that if $H\\in \\mathcal{F}_{Fin}$, then $H\\sim K$ if and only if $K\\in \\mathcal{F}_{Fin}$.\nThe equivalence class of $H$ will be denoted by $[H]$.\nThis equivalence relation is preserved by conjugation, it can be defined $g^{-1}[H]g$ as $[g^{-1}Hg]$ for any $H\\leqslant G$ and for any $g\\in G$.\n\nNow we can define the notion of \\emph{commensurator}, central in this model and in our paper:\n\n\\begin{defn}\n\\label{defn:comm}\nGiven an equivalence class $[H]$ of the relation $\\sim$, the \\emph{commensurator} of $[H]$ in $G$ is defined as the subgroup\n$$ \\textrm{Comm}_G[H]=\\{g\\in G\\:|\\ g^{-1}[H]g=[H]\\}.$$\n\\end{defn}\n\n\nIn \\cite{LW}, it is also defined a family of subgroups of $\\textrm{Comm}_G[H]$ as:\n$${\\mathcal{F}} [H]:=\\{K<{ \\textrm{Comm}_G[H]} \\: |\\ K\\sim H \\text{ or } |K|<\\infty \\}.$$\nIf $H$ is virtually cyclic, it is easy to check that this family is closed under taking subgroups and conjugation in {$\\textrm{Comm}_G[H]$}. {We remark that the commensurator of $[H]$ is sometimes denoted by $N_G[H]$ in the literature}.\n\nNow we have all the ingredients needed for building L\\\"{u}ck-Weiermann model:\n\n\\begin{thm}\n\\label{maintheorem}{\\rm (\\cite{LW}, Theorem 2.3)} We denote by $I$ a complete set of representatives of the $G$-orbits (under conjugation) of equivalence classes $[H]$ of infinite virtually cyclic subgroups of $G$, and we choose, for every $[H]\\in I$, models for the classifying spaces $\\underline{E} \\Comm_G[H]$ and $E_{{\\mathcal{F}} [H]}\\Comm_G[H]$.\nWe also choose a model for $\\underline{E} G$. Consider the $G$-pushout:\n$$\n\\xymatrix{ \\coprod_{[H]\\in I}G\\times_{\\Comm_G[H]}\\underline{E} \\Comm_G[H] \\ar[r]^{\\hspace{2cm} i} \\ar[d]^{\\coprod_{[H]\\in I}id_G\\times_{\\Comm_G[H]}f_{[H]}} & \\underline{E} G \\ar[d] \\\\\n\\coprod_{[H]\\in I}G\\times_{\\Comm_G[H]}E_{{\\mathcal{F}} [H]}\\Comm_G[H] \\ar[r] & X }\n$$\nwhere $f_{[H]}$ is a cellular $\\Comm_G[H]$-map for every $[H]\\in I$ and $i$ is the inclusion.\n In these conditions, $X$ is a model for ${\\underline{\\underline{E}}} G$.\n\\end{thm}\n\n\nIn practice, this theorem implies that the existence of good models for the proper classifying space of the commensurators and $G$, and also of the classifying spaces with respect to the families ${\\mathcal{F}} [H]$ will lead to the knowledge of good models for ${\\underline{\\underline{E}}} G$. Moreover, the push-out implies the existence of a long exact sequence in Bredon homology, and dimensional consequences that we will analyze in next section.\n\n\n\n\\subsection{Bredon homology}\n\\label{Sect:Bredon}\n\n{In this subsection we will briefly review the main definitions concerning Bredon homology. We follow the topological concise approach from \\cite{San08}, which we use in our computations}.\n\n{Consider a discrete group $G$, $\\mathcal{F}$ a family of groups which is closed under conjugation and taking subgroups. 
Let $O_{\\mathcal{F}}(G)$ be the \\emph{orbit category} whose objects are the homogeneous spaces $G\/K$, $K\\subset G$ with $K\\in\\mathcal{F}$, and whose morphisms are the $G$-equivariant maps.\nThen a \\emph{left Bredon module} $N$ over $O_{\\mathcal{F}}(G)$ is a covariant functor $$N:O_{\\mathcal{F}}(G)\\rightarrow \\textbf{Ab},$$ where $\\textbf{Ab}$ is the category of abelian groups}.\n\n{Let $N$ be a left Bredon module and $X$ a $G$-CW-complex, and assume that all the stabilizers of the $G$-action belong to the family $\\mathcal{F}$. Then the \\emph{Bredon chain complex} $(C_n^{\\mathcal{F}}(X,N),\\Phi_n)$ can be defined in the following way.\nFor every $d\\geq 0$, consider a set $\\{e_i^d\\}_{i\\in I}$ of representatives of orbits of $d$-cells in $X$, and denote by $\\stab(e_i^d)$ the stabilizer of $e_i^d$.\nThen we define the \\emph{n-th group of Bredon chains} as $C_n^{\\mathcal{F}}(X,N)=\\bigoplus_{i\\in I} N(G\/\\stab(e_i^d))$}.\n\n{Consider now a $(d-1)$-face of $e_i^d$, which can be given as $ge$ for a certain $(d-1)$-cell $e$.\nThen we have an inclusion of stabilizers $g^{-1}\\stab(e_i^d)g\\subseteq \\stab(e)$.\nAs $g^{-1}\\stab(e_i^d)g$ and $stab(e_i^d)$ are isomorphic, the previous inclusion induces an equivariant $G$-map $f\\colon G\/\\stab(e_i^d)\\rightarrow G\/\\stab(e)$.\nIn turn, as $N$ is a functor, we have an induced homomorphism $N(f)\\colon N(G\/\\stab(e_i^d))\\rightarrow N(G\/\\stab(e))$.\nTaking into account that the boundary of $e_i^d$ can be written as $\\partial e^d_i=\\sum_{j=1}^n g_j e_j^{d-1}$ for certain $g_j\\in G$ and using linear extension to all representatives of equivariant $d$-cells, we obtain a differential $\\Phi_d\\colon C_d^{\\mathcal{F}}(X,N)\\rightarrow C_{d-1}^{\\mathcal{F}}(X,N)$ for every $d>0$.\nSo we have the following definition:}\n\n\\begin{defn}\n{The homology groups of the chain complex $(C_i^{\\mathcal{F}}(X,N),\\Phi_i)$ will be denoted by $H_i^{\\mathcal{F}}(X,N)$ and called \\emph{Bredon homology groups} of $X$ with coefficients in $N$ with respect to the family $\\mathcal{F}$}.\n\nWe define $H_i^{\\mathcal{F}}(G,N)$, the {\\emph Bredon homology groups} of $G$ with coefficients in $N$ with respect to the family $\\mathcal{F}$ as $H_i^{{\\mathcal{F}}}(E_{\\mathcal{F}}G,M)$.\n\\end{defn}\n\n{These groups are preserved under $G$-equivariant homotopy equivalence}.\n\n\n\\textbf{Notation}. When $\\mathcal{F}$ is the family of finite groups, we use indistinctly the notations $\\underline{E}G$ or $E_{\\mathcal{F}}G$, and similarly when $\\mathcal{F}$ is the family of virtually cyclic groups and ${\\underline{\\underline{E}}} G$ and $E_{\\mathcal{F}}G$ notations for the corresponding classifying space. If $\\mathcal{F}$ is the family that only contains the trivial group, the superindex in the Bredon homology will be supressed, as it is ordinary homology in this case.\nIt is worth noticing that there is an algebraic definition of $H_*^{\\mathcal{F}}(G,M)$, however there is an isomorphism $H_*^{\\mathcal{F}}(G,M)\\simeq H_*^{\\mathcal{F}}(E_{\\mathcal{F}}G,M)$ between the algebraic and the topological definitions of Bredon homology \\cite[page 15]{MiVa03}.\nTo not overload the paper with unnecessary notation, we commonly denote this homology groups by $H_*^{\\mathcal{F}}(G,M)$, although as said above, we mainly deal with the topological definition.\n\nThe cyclic group of order $n$ will be denoted by $C_n$. When we want to consider its ring structure we might use $\\Z \/{\\bf n}$. 
When $n$ is prime, we might use $\\mathbb{F}_n$ to emphasize its field structure.\n\n\n\n\\section{Artin groups of dihedral type}\n\\label{Sect:Artin}\nIn this section we present the main features of the Artin groups of dihedral type that we will need in the remaining of the paper. We start with the definition of the groups:\n\n\\begin{defn}\n\\label{Defn:Artindih}\nLet $n\\geq 1$. By $\\mathrm{prod}(x,y;n)$ we denote the word of length $n$ that alternates $x$ and $y$ and starts with $x$.\nFor example, $\\mathrm{prod}(x,y;3)=xyx$ and $\\mathrm{prod}(x,y;4)=xyxy$. With this notation, a {\\it dihedral Artin group of type $n$} is the group $$A_n=\\gp{a,b}{\\mathrm{prod}(a,b;n)=\\mathrm{prod}(b,a;n)}.$$\n\\end{defn}\nThe name ``dihedral\" comes from the associated Coxeter group, $$\\gp{a,b}{a^2=b^2=1, \\mathrm{prod}(a,b;n)=\\mathrm{prod}(b,a;n)}$$ which is the dihedral group of order $2n$. Dihedral Artin groups are torsion-free, even more $A_n\\cong F_{n-1}\\rtimes \\Z$, where $F_k$ is a free group of rank $k$. To see this, one can check that the kernel of $A_n\\to \\Z$, $a,b\\mapsto 1$ is free on rank $n-1$. In particular, Dihedral Artin groups satisfy the Farrell-Jones conjecture \\cite{BFW21}.\n\nWe are interested on understanding the commensurators of the virtually cyclic subgroups of $A_n$, for that we will use that $A_n$ is also a central extension of a virtually free group.\n\nAs indicated above an important ingredient of our calculations will be the description of some commensurators inside these Artin groups of subgroups from the family of virtually cyclic groups (which in this case turn to be just cyclic groups). We start by observing that for any virtually cyclic subgroup $H$ of a group $G$, and any $h\\in H$ of` infinite order, we have that\n$$\\Comm_G[H]=\\Comm_G(\\gen{h})=\\{g\\in G\\mid gh^mg^{-1}=h^n \\text{ for some }n,m\\in \\Z-\\{0\\}\\}.$$\n\nWe will be interested in computing commensurators up to isomorphism, and we will use the fact that commensurators of conjugated subgroups are conjugated.\nLet us denote by $\\pi$ the natural\nprojection map $\\pi\\colon A_{n}\\to \\ol{A_{n}}\\mathrel{\\mathop\\mathchar\"303A}\\mkern-1.2mu= A_{n}\/Z(A_{n})$. Then, for every $g\\in A_{n}$,\n$$\\pi(\\Comm_{A_{n}}[\\gen{g}])\\leqslant \\Comm_{\\ol{A_{n}}}[\\gen{\\pi(g)}],$$\nand\n$$Z(A_{n})\\leqslant \\Comm_{A_{n}}[\\gen{g}].$$\n\nWe will prove the following.\n\\begin{lem}\\label{lem:commensurators}\nLet $A_n$ be a dihedral Artin group and $g\\in A_n$ of infinite order. Then\n\\begin{enumerate}\n\\item If $\\langle g\\rangle \\cap Z(A_n)\\neq \\{1\\}$ then $\\Comm_{A_n}[\\langle g\\rangle]= A_n$ and $Z(A_n)\\in [H]$,\n\\item If $\\langle g\\rangle \\cap Z(A_n)= \\{1\\}$ then $\\Comm_{A_n}[\\langle g\\rangle]\\cong \\mathbb{Z}^2$ and there is $\\langle g'\\rangle \\in [\\langle g\\rangle]$ that is a direct factor of $\\Comm_{A_n}[\\langle g\\rangle]$.\n\\end{enumerate}\n\\end{lem}\n\n\\begin{proof}\n\nIt is a well-known fact that the centers of the dihedral Artin groups have different shape depending on the parity of $n$. 
Hence, we should divide the study of the commensurators of their cyclic subgroups in two subcases, even and odd.\n\n\n{\\bf Case $n$ even: }\nIn the group $$A_{2n}=\\gp{a,b}{\\mathrm{prod}(a,b;2n)=\\mathrm{prod}(b,a;2n)},$$\nit is known that $Z(A_{2n})=\\gen{(ab)^{n}}$.\nMoreover, $A_{2n}\\cong \\gp{x,y}{x^{-1}y^nx=y^n}$ via the isomorphism defined by $x\\to b$, $y\\mapsto ba$.\nTherefore $A_{2n}\/Z(A_{2n})=\\gp{x,y}{x^{-1}y^nx=y^n,y^n =1}=\\gp{\\ol{x},\\ol{y}}{\\ol{y}^n=1}\\cong C_{\\infty}*C_{n}$.\n\n\nSince $\\ol{A_{2n}}$ is hyperbolic (even more, virtually free), the commensurator of an infinite order element is virtually cyclic. (See for example \\cite[Theorem 2]{Arz} bearing in mind that infinite cyclic subgroups of hyperbolic groups are quasi-convex).\n\nSuppose first that $\\pi(g)$ has infinite order. Then $\\Comm_{\\overline{A_{2n}}}[\\langle \\pi(g)\\rangle]$ is infinite and virtually cyclic and hence $\\pi (\\Comm_{A_{2n}}[\\gen{g}])$ is infinite and virtually cyclic.\nSince all infinite virtually cyclic subgroups of $\\ol{A_{2n}}$ are infinite cyclic we have that $\\pi (\\Comm_{A_{2n}}[\\gen{g}])$ is infinite cyclic. Therefore,\n$\\Comm_{A_{2n}}[\\gen{g}]$ is a central extension of $\\Z$ by $\\Z$ and hence $\\Comm_{A_n}[\\gen{g}]\\cong \\Z^2$.\nWe can take $g'$ as a pre-image of a generator of $\\pi (\\Comm_{A_{2n}}[\\gen{g}])$.\n\nSuppose now that $\\pi(g)$ has finite order.\nThus $\\langle g \\rangle \\cap Z(A_{2n})$ is non-trivial and infinite, and thus $[\\langle g \\rangle]= [Z(A_{2n})]$ and $\\Comm_{A_{2n}}[\\gen{g}]=\\Comm_{A_{2n}}Z(A_n)=A_{2n}.$\n\n\n\n{\\bf Case $n$ odd:}\nIn the group $$A_{2n+1}=\\gp{a,b}{\\mathrm{prod}(a,b;2n+1)=\\mathrm{prod}(b,a;2n+1)}$$ we have that $Z(A_{2n+1})=\\gen{(ab)^{2n+1}}$.\nMoreover, $A_{2n+1}\\cong \\gp{x,y}{xy^n=y^{n+1}x^{-1}}$ via the isomorphism defined by $x\\to b$, $y\\mapsto ab$.\nTherefore\n\\begin{align*}\nA_{2n+1}\/Z(A_{2n+1})&=\\gp{x,y}{xy^nx=y^{n+1},y^{2n+1}=1}=\\gp{x,y}{(xy^n)^2=1=y^{2n+1}}\\\\\n&=\\gp{\\ol{z},\\ol{y}}{\\ol{z}^2=1=\\ol{y}^{2n+1}}\\cong C_{2}*C_{2n+1},\n\\end{align*}\nwhere $\\ol{z}$ denotes the class of $xy^nZ(A_{2n+1})$.\n\n\n\nSince $\\ol{A_{2n+1}}$ is hyperbolic and infinite virtually cyclic subgroups of $\\ol{A_{2n+1}}$ are infinite cyclic we get, arguing as above, that if $\\pi(g)$ has infinite order then $\\Comm_{A_{2n+1}}[\\gen{g}]\\cong \\Z^2$.\n\nSuppose now that $\\pi(g)$ has finite order. Arguing as above, $\\Comm_{A_{2n+1}}[\\gen{g}] = \\Comm_{A_{2n+1}}[Z(A_{2n+1})] = A_{2n+1}$.\n\\end{proof}\n\n\n\nWe recall the ordinary homology of these groups, which will be important in the remaining sections of the paper.\n\n\\begin{prop}\n\\label{OrdHom}\nFor $n\\geq 2$.\n\nIf $n$ is even, we have $H_0(A_n)=\\mathbb{Z}$, $H_1(A_{n})=\\mathbb{Z}\\oplus \\mathbb{Z}$, $H_2(A_{n})=\\mathbb{Z}$ and $H_i(A_{n})=0$ for $i>2$.\n\n If $n$ is odd, we have $H_0(A_n)=\\mathbb{Z}$, $H_1(A_{n})=\\mathbb{Z}$ and $H_i(A_{n})=0$ for $i>1$.\n\n\\end{prop}\n\n\\begin{proof}\nAs dihedral Artin groups are one-relator groups, the Cayley complex (associated to the one-relator presentation) gives a 2-dimensional model for $K(A_n,1)$ (see \\cite{Lyndon} or \\cite{CMW04} for more general Artin groups).\n As $A_n$ is not free, this implies that $\\textrm{gd }A_n=\\textrm{cd }A_n=2$ for every $n$, and then in particular $H_i(A_n)=0$ for $i\\geq 3$. Moreover, as $K(A_n,1)$ is connected, $H_0(A_n)=\\mathbb{Z}$ for every $n$.\nThe formulae for $H_1$ follow from performing abelianization to the groups. 
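Explicitly, and as a routine verification for the reader, in the abelianization the defining relation can be compared term by term: if $n=2k$ then both $\\mathrm{prod}(a,b;n)$ and $\\mathrm{prod}(b,a;n)$ become $a^{k}b^{k}$, so the relation is redundant and
$$H_1(A_n)\\cong A_n^{ab}\\cong\\mathbb{Z}\\oplus\\mathbb{Z},$$
while if $n=2k+1$ the relation becomes $a^{k+1}b^{k}=a^{k}b^{k+1}$, that is $a=b$, and hence
$$H_1(A_n)\\cong A_n^{ab}\\cong\\mathbb{Z}.$$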
Finally, the results for the Schur multiplier $H_2$ are a consequence of Theorem 3.1 in \\cite{CE09}.\n\\end{proof}\n\n\n\n\n\n\n\n\\section{Bredon homology of Artin groups of dihedral type}\n\\label{Sect:BredonArtin}\n\nIn this section we start with the computation of homological invariants of the Artin groups of dihedral type, which is the main goal of this paper. The computation of the Farrell-Jones homology in the case of a general ring is usually very complicated. A powerful tool to undertake this computation is the $G$-equivariant version of the Atiyah-Hirzebruch spectral sequence, which is a spectral sequence of the 1st and 4th quadrant. In our case of interest, the $E_2$-page of this sequence is given by Bredon homology of the classifying space ${\\underline{\\underline{E}}} G$, and the $E_{\\infty}$-page encodes the Farrell-Jones homology. Let us make the last statement more precise.\n\nLet $G$ be a discrete group, $\\mathcal{F}$ its family of virtually cyclic subgroups, $R$ a ring. For every $q\\in\\mathbb{Z}$, denote by $K_q(R[-])$ the covariant module over the orbit category $O_{\\mathcal F}(G)$ that sends every (left) coset $G\/H$ to the $K$-theory group $K_q(R[H])$. In these conditions, the $E_2$-page of the Atiyah-Hirzebruch spectral sequence that we will use is defined as $E_2^{p,q}=H_p^{\\mathcal{F}}({\\underline{\\underline{E}}} G,K_q(R[-]))$, for every $p\\geq 0$ and $q\\in\\mathbb{Z}$, and converges to $H^G_{p+q}({\\underline{\\underline{E}}} G,\\mathbf{K}(R))$ the $(p+q)$-th group of Farrell-Jones homology of ${\\underline{\\underline{E}}} G$ with coefficients in $R$. An excellent source for more information about this sequence is \\cite{LR05}.\n\nAlthough the path {to} the computation is very clear, in general it is difficult to obtain explicit formulae for the Bredon\nhomology of these classifying spaces. Different reasons for this are the complexity of the models for ${\\underline{\\underline{E}}} G$ and the fact that the exact values of $K_q(R[H])$ are only known for very special instances of $R$ and $H$. In fact, taking $R=\\mathbb{Z}$ and $H$ the trivial group, the groups $K_q(\\mathbb{Z})$ are not completely listed, as their value in some cases depend on the solution of the Vandiver conjecture, which remains unsolved. See \\cite{Wei05}.\n\n\nIn this section we show that for Artin groups of dihedral type it is possible to describe to some extent the aforementioned Bredon homology groups, with coefficients in $K_q(R[-])$ for a general ring $R$ (see Theorem \\ref{Thm:Bredon}); of course, the result strongly depends on the concrete shape of the $K$-theory groups of the group rings involved. The key to our computations is the description of the commensurators of the virtually cyclic subgroups of the groups $A_n$ (see previous section), and mainly the Mayer-Vietoris sequence associated to the push-out of L\\\"{u}ck-Weiermann model (Theorem \\ref{maintheorem}), that was explicitly stated by Degrijse-Petrosyan in Section 7 of \\cite{DP}, for the cohomological case. We offer here the homological version, which is the one needed in our context. 
To avoid confusions, we denote Bredon homology with respect to the family of virtually cyclic groups as $H_*^{vc}$ from now on.\n\n\\begin{prop}{\\rm (\\cite[Proposition 7.1]{DP}, see also \\cite[Theorem 2.3]{LW})}\n\\label{mainMV} Let $M$ be a left Bredon module over $O_{{\\mathcal{F}}_{vc}}(A_n)$.\nThere is an exact sequence:\n\n$$\\ldots \\stackrel{}{\\rightarrow} H^{vc}_{i+1}(A_n,M) \\stackrel{g_1^{i+1}}{\\rightarrow} \\bigoplus_{[H]\\in I}H_i^{Fin\\cap \\Comm_{A_n}[H]}(\\Comm_{A_n}[H],M) \\stackrel{g_2^i}{\\rightarrow} $$\n\n$$\\stackrel{g_2^i}{\\rightarrow} (\\bigoplus_{[H]\\in I}H_i^{\\mathcal{F}[H]}(\\Comm_{A_n}[H],M))\\oplus H_i^{Fin}(A_n,M) \\stackrel{g_3^i}{\\rightarrow} H^{vc}_i(A_n,M) \\rightarrow \\ldots$$\n\n\\end{prop}\n\nBefore we undertake our computations of the homology groups, we will compute their geometric dimension ${\\underline{\\underline{\\textrm{gd}}}\\ }$ with respect to the family of virtually cyclic groups.\n\n\\begin{prop}\n\\label{gd}\n\nThe geometric dimension of $A_n$ with respect of the family of virtually cyclic groups is 3.\n\n\\end{prop}\n\n\\begin{proof}\nAs seen in Lemma \\ref{lem:commensurators}, $A_n=\\langle a,b\\,|\\,\\textrm{prod}(a,b;n)=\\textrm{prod}(b,a;n) \\rangle$, contains subgroups isomorphic to $\\mathbb{Z}\\oplus\\mathbb{Z}$ (for example $\\Comm_{A_n}(\\langle a\\rangle))$. According to Example 5.21 in \\cite{LW}, this implies that ${\\underline{\\underline{\\textrm{gd}}}\\ } A_n\\geq 3.$\n\n\nOn the other hand, as the Artin groups $A_n$ are one-relator, Corollary 3 in \\cite{Deg16} implies that ${\\underline{\\underline{\\textrm{gd}}}\\ } A_n\\leq 3$, and then ${\\underline{\\underline{\\textrm{gd}}}\\ } A_n=3$.\n\\end{proof}\n\nAs stated, the strategy to compute $H_i^{vc}(A_n,M)$ goes through describing the different elements of the exact sequence of Proposition \\ref{mainMV}.\nWe first understand the terms $H_i^{\\mathcal{F}[H]}$ on Subsection \\ref{HiF[H]}.\nThen, in Subsection \\ref{Ktheory}, we give a more concrete description of the different homology groups when we take coefficients in the $K$-theory.\nFinally, in Subsection \\ref{morphism} {we study the homomorphisms} $g_1$ and $g_2$ of the exact sequence of Proposition \\ref{mainMV} and prove our main theorems.\n\n\\subsection{Computing {the homology of the commensurators}}\\label{HiF[H]}\nLet us start the main calculations of this section.\nIt is clear from the previous Mayer-Vietoris sequence that the computations of the ordinary homology of the commensurators\n(which includes the homology of $A_n$, as the center of $A_n$ is virtually cyclic)\n and the homology of the commensurators with coefficients in $\\mathcal{F}[H]$ will give us valuable information about the Bredon homology of $A_n$ with respect to the family of virtually cyclic subgroups,\nso we will perform these calculations in the sequel.\nWe start with the ordinary homology, which is straightforward and depends on the shape of the commensurators:\n\n\\begin{itemize}\n\n\\item {If} $\\textrm{Comm}_{A_n}[H]\\simeq\\mathbb{Z}\\oplus \\mathbb{Z}$, we have $H_0(\\textrm{Comm}_{A_n}[H])=H_2(\\textrm{Comm}_{A_n}[H])=\\mathbb{Z}$, $H_1(\\textrm{Comm}_{A_n}[H])=\\mathbb{Z}\\oplus \\mathbb{Z}$, $H_i(\\textrm{Comm}_{A_n}[H])=0$ for $i>2$.\n\n\\item {If} $\\textrm{Comm}_{A_n}[H]\\simeq A_n$, see Proposition \\ref{OrdHom}.\n\n\n\\end{itemize}\n\n\nWe concentrate now in the case of $\\mathcal{F}[H]$, which we recall is the family of subgroups of $\\Comm_{A_n}[H]$ that are either finite or commensurable with $H$. 
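To fix ideas, the family can be made completely explicit in the first case of Lemma \\ref{lem:commensurators}: if $\\Comm_{A_n}[H]\\cong\\mathbb{Z}\\oplus\\mathbb{Z}$ with $H$ identified with the first factor, then a non-trivial subgroup $K\\leqslant\\mathbb{Z}\\oplus\\mathbb{Z}$ is commensurable with $H$ exactly when $K=m\\mathbb{Z}\\times\\{0\\}$ for some $m\\geq 1$ (if $K$ has rank two then $K\\cap H$ has infinite index in $K$, while an infinite cyclic $K$ meeting $H$ non-trivially is contained in $\\mathbb{Z}\\times\\{0\\}$), so here $\\mathcal{F}[H]$ consists of the trivial group together with the subgroups $m\\mathbb{Z}\\times\\{0\\}$, $m\\geq 1$.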
Concretely, we intend to compute $H_i^{\\mathcal{F}[H]}(\\textrm{Comm}_{A_n}[H],M)$ for every non-trivial cyclic subgroup $H$ of $A_n$.\nWe have two cases, either $H\\cap Z(A_n)$ is trivial or not. We will use the following Convention throughout the paper without explictly refering to it.\n\n\\begin{conv}\nFor $[H]$ a class of infinite cyclic subgroups, we can always take a representative $H$ that is normal in $\\Comm_{A_n}[H]$.\n\n\nIf $H\\cap Z(A_n)$ is trivial, then Lemma \\ref{lem:commensurators} states that $H\\unlhd \\Comm_{A_n}[H]$. Moreover, by Lemma \\ref{lem:commensurators} we assume that $H$ is a direct factor of $\\Comm_{A_n}[H]$.\n\n\nIf $H\\cap Z(A_n)$ is non-trivial, then $[H]=[Z(A_n)]$ and we will assume in this case that we choose $Z(A_n)$ as the representative of $[H]$, and thus $H\\unlhd \\Comm_{A_n}[H]$ again.\n\nWith this convention, the projection $\\pi\\colon\\textrm{Comm}_{A_n}[H]\\rightarrow \\textrm{Comm}_{A_n}[H]\/H$ is well-defined and $\\Comm_{A_n}[H]\/H$ is isomorphic to $C_\\infty$, $C_\\infty*C_n$ or $C_2*C_{2n+1}$.\n\\end{conv}\n\n\nLet now $M$ be a module over the orbit category of $\\textrm{Comm}_{A_n}[H]$ with respect to $\\mathcal{F}[H]$, and $\\pi^{-1}M$ the induced module over the orbit category of $\\Comm_{A_n}[H]\/H$ with respect to the family of finite groups.\nThat is, for $K\\leqslant \\textrm{Comm}_{A_n}[H]\/H$ finite {it is defined}\n$$\\pi^{-1}M((\\Comm_{An}[H]\/H)\/K)\\mathrel{\\mathop\\mathchar\"303A}\\mkern-1.2mu= M(\\Comm_{A_n}[H]\/\\pi^{-1}(K)).$$ {Observe that this assignation gives rise to a natural transformation of functors $\\pi^{-1}M\\rightarrow M$. In the next result, which is essentially \\cite[Lemma 4.2]{DP}, but stated for homology instead of cohomology, we will see that this natural transformation induces an isomorphism in Bredon homology}. It will be a powerful tool in our computations.\n\\begin{prop}\n\\label{induced}\nLet $H$ be an infinite cyclic subgroup of $A_n$ normal in its commensurator.\nFor every $n\\geq 0$, and every module $M$ over the orbit category of $\\Comm_{A_n}[H]$ there is an isomorphism\n{$$H_i^{Fin}(\\Comm_{A_n}[H]\/H,\\pi^{-1} M)\\simeq H_i^{\\mathcal{F}[H]}(\\Comm_{A_n}[H],M).$$}\nMoreover, every model for $\\uE(\\Comm_{A_n}[H]\/H)$ is a model for $E_{\\mathcal{F}[H]}\\Comm_{A_n}H$, with the action induced by the quotient map $\\Comm_{A_n}[H]\\to \\Comm_{A_n}[H]\/H$.\n\\end{prop}\n\\begin{proof}\nThe argument here is taken from the proof of \\cite[Lemma 4.2]{DP}. 
We write it here to make our paper more self-contained.\n\nThe projection $\\pi\\colon \\Comm_{A_n}[H]\\to \\Comm_{A_n}[H]\/H$ maps the family $\\mathcal{F}[H]$ onto the family $Fin$ of finite subgroups of the quotient $\\textrm{Comm}_{A_n}[H]\/H$.\nMoreover, the pre-image $\\pi^{-1}(K)$ for any finite group $K$ of $\\Comm_{A_n}[H]\/H$ lies in $\\mathcal{F}[H]$.\nTherefore, $\\uE \\Comm_{A_n}[H]\/H$ is a model for $E_{\\mathcal{F}[H]}\\textrm{Comm}_{A_n}[H]$, with the action induced by the projection $\\pi$.\n\nConsider now the spectral sequence associated to the short exact sequence\n$1\\to H\\to \\Comm_{A_n}[H]\\stackrel{\\pi}{\\to} \\Comm_{A_n}[H]\/H\\to 1$ for homology \\cite{Mar02}.\nFor every module $M$ over the orbit category of $\\Comm_{A_n}[H]$ we have\n$$E^{p,q}_2 (M)= H_p^{{Fin}}(\\Comm_{A_n}[H]\/H, H_q^{\\mathcal{F}[H]\\cap \\pi^{-1}(-)}(\\pi^{-1}(-), M)), $$\nwhich converges to $E^{p,q}_{\\infty} (M)=H_{p+q}^{\\mathcal{F}[H]}(\\Comm_{A_n}[H], M)$.\n\nObserve that $E^{p,q}_2$ is trivial for $q\\geq 1$, as for every finite subgroup $K<\\Comm_{A_n}[H]\/H$, $\\pi^{-1}(K)$ belongs to the family $\\mathcal{F}[H]\\cap \\pi^{-1}(K)$, and then $H_q^{\\mathcal{F}[H]\\cap \\pi^{-1}(-)}(\\pi^{-1}(-),M)$ is zero. Thus, as $E^{i,0}_2 (M)=H_i^{Fin}(\\Comm_{A_n}[H]\/H,\n\\pi^{-1}M)$\nin the 0-th row of the sequence, we have {$$H_i^{Fin}(\\Comm_{A_n}[H]\/H,\\pi^{-1} M)\\simeq H_i^{\\mathcal{F}[H]}(\\Comm_{A_n}[H],M).$$} for every $i\\geq 0.$\nWe are done.\n\\end{proof}\n\n The family of finite subgroups is easier to deal with than the family $\\mathcal{F}[H]$, taking account of the previous proposition, the next step is to construct the corresponding classifying spaces for proper actions of the commensurators in $A_n$ modulo the corresponding subgroups. We have the following:\n\n\\begin{itemize}\n\n\n\\item If $\\textrm{Comm}_{A_n}[H]\\simeq\\mathbb{Z}\\oplus \\mathbb{Z}$, then there is a representative of the class $[H]$ that can be identified with one of these copies of $\\Z$; hence, we may assume that the inclusion $H\\hookrightarrow \\textrm{Comm}_{A_n}[H]=\\mathbb{Z}\\oplus \\mathbb{Z}$ is the inclusion of the first factor, and so $\\textrm{Comm}_{A_n}[H]\/H$ is isomorphic to $\\mathbb{Z}$. Then a model for $\\underline{{E}}(\\textrm{Comm}_{A_n}[H]\/H)$ is the straight line, and the action is by shifting.\n\n\\item If $\\textrm{Comm}_{A_n}[H]\\simeq A_n$, then $Z(A_n)$ is a representative of the class $[H]$.\nWe have seen that $\\textrm{Comm}_{A_n}[H]\/H=A_n\/Z(A_n)$ is an amalgamated product of two cyclic groups, depending its concrete shape on the parity of $n$.\nIn this case a tree model for ${\\uE}(\\textrm{Comm}_{A_n}[H]\/H)$ can be explicitly constructed.\n\nIf $n$ is even, we have that $A_{n}\/Z(A_{n})\\cong C_{\\infty}*C_{n}$.\nDenote $C_\\infty*C_{n}$ by $\\overline{A_n}$.\nLet $s$ be a generator of $C_\\infty$.\nThen Bass-Serre theorem guarantees that the graph with vertex set $\\overline{A_n}\/C_n$, edge set $\\overline{A_n}$ and incidence maps $\\iota(g) = gC_\\infty$ and $\\tau(g)=gsC_\\infty$ is a $\\overline{A_n}$-equivariant oriented tree.\n\n\nIf $n$ is odd, we have that $A_{n}\/Z(A_{n})\\cong C_{2}*C_{n}$. 
Denote $C_2*C_{n}$ by $\\overline{A_n}$.\nThen Bass-Serre theorem guarantees that the graph with vertex set $\\overline{A_n}\/C_2 \\sqcup \\overline{A_n}\/C_n$, edge set $\\overline{A_n}$ and incidence maps $\\iota(g) = gC_2$ and $\\tau(g)=gC_n$ is a $\\overline{A_n}$-equivariant oriented tree.\n\nNote that in both cases, the isotropy groups are the finite subgroups of $\\overline{A_n}$ and they fix exactly a vertex; and therefore these are models for $\\uE \\overline{A_n}$.\n\\end{itemize}\n\n\n\n\nAs the classifying spaces of the commensurators that we have described are all 1-dimensional, we will use the following result of Mislin:\n\n\n\\begin{lem}[\\cite{MiVa03}, Lemma 3.14]\n\\label{1dim}\nSuppose that for a family $\\mathcal{F}$ of subgroups of a group $G$ there is a tree model $T$ for $E_{\\mathcal{F}}G$. Let $S_e$ be the stabilizer of the edge $e\\in T$ and $S_v$ the stabilizer of a vertex. Let $N$ be a coefficient module. Then $H_i^{\\mathcal{F}}(G,N)=0$ for $i>1$ and there is an exact sequence:\n\n$$ 0\\rightarrow H_1^{\\mathcal{F}}(G,N)\\rightarrow \\bigoplus_{[e]} N(G\/S_e) \\rightarrow \\bigoplus_{[v]} N(G\/S_v)\\rightarrow H_0^{\\mathcal{F}}(G,N)\\rightarrow 0,$$ where $[e]$ and $[v]$ run over the $G$-orbits of edges and vertices of $T$, respectively.\n\\end{lem}\n\n\\begin{rem}\n\\label{diff}\nIt is interesting to remark that the middle map $\\bigoplus_{[e]} N(G\/S_e) \\rightarrow \\bigoplus_{[v]} N(G\/S_v)$ is induced by the (formal) border operation defined by $\\partial[e]=[v_1]-[v_2]$, being $v_1$ and $v_2$ vertices of a representative $e$ of $[e]$. The tree is assumed to be oriented.\n\\end{rem}\n\nIn particular we obtain:\n\n\\begin{cor}\n\\label{zerohomology}\nFor every $n\\geq 2$, $i\\geq 2$, $H\\leqslant A_n$ infinite cyclic and every module $M$ over the orbit category with respect to the family $\\mathcal{F}[H]$, we have $H_i^{\\mathcal{F}[H]}(\\Comm_{A_n}[H],M)=0$.\n\\end{cor}\n\\begin{proof}\nRecall that by Proposition \\ref{induced} $H_i^{\\mathcal{F}[H]}(\\Comm_{A_n}[H],M)\\simeq H_i^{Fin}(\\Comm_{A_n}[H]\/H,\\pi^{-1}M)$. As $\\uE\\Comm_{A_n}[H]\/H$ is 1-dimensional, the result follows by Lemma \\ref{1dim}.\n\\end{proof}\n\nSo it remains to compute $H_0^{\\mathcal{F}[H]}(\\textrm{Comm}_{A_n}[H],M)$ and $H_1^{\\mathcal{F}[H]}(\\textrm{Comm}_{A_n}[H],M)$. 
We assume here a general coefficient module $M$, but the reader may keep in mind that our case of interest is $M(A_n\/-\n)=K_q (R[-])$, $q\\in\\Z$.\n\n\n\\begin{prop}\\label{homology0and1}\nLet $A_n =\\langle a, b \\,\\textrm{ }|\\textrm{ } \\mathrm{prod}(a,b;n) = \\mathrm{prod}(b,a;n)\\rangle$\n be a dihedral Artin group, let $H\\leqslant A_n$ be an infinite cyclic group that is normal in its commensurator, let $\\pi\\colon \\Comm_{A_n} [H]\\to \\Comm_{A_n}[H]\/H$ be the natural projection.\nLet $M$ be a module over the orbit category of $\\Comm_{A_n}[H]$.\nThen\n\\begin{enumerate}\n\\item[(i)] If $H\\cap Z(A_n)$ is trivial, then\n$$H_0^{\\mathcal{F}[H]}(\\Comm_{A_n}[H],M)=H_1^{\\mathcal{F}[H]}(\\Comm_{A_n}[H],M)=M(\\Comm_{A_n}[H]\/H).$$\n\\item[(ii)] If $H=Z(A_{n})$, $n=2k+1$ and if $f\\colon M(A_{n}\/Z(A_n))\\to M(A_n\/\\langle b(ab)^k\\rangle ) \\oplus M(A_n\/\\langle ab \\rangle)$ {is} induced by the natural projections, we have\n$$H_0^{\\mathcal{F}[H]}(A_{n},M)= \\mathrm{coker} f\\quad \\text { and } \\quad H_1^{\\mathcal{F}[H]}(A_{n},M)= \\ker f.$$\n\n\\item[(iii)] If $H=Z(A_n)$ and $n=2k$, we have\n $$H_0 ^{\\mathcal{F}[H]}(A_{n},M)=M(A_{n}\/\\langle ab \\rangle)\\quad \\text{ and }\\quad H_1^{\\mathcal{F}[H]}(A_{n},M)= M(A_{n}\/Z(A_n)).$$\n\\end{enumerate}\nNote that in cases (ii) and (iii), $A_n=\\Comm_{A_n}[H]$.\n\\end{prop}\n\n\\begin{proof}\nIn the case $H\\cap Z(A_n)$ is trivial, the classifying space $\\uE\\Comm_{A_n}[H]\/H$ is the real line, there is one orbit of edges and one orbit of vertices, and the action is free. Then by Remark \\ref{diff}, the central map in that exact sequence is trivial and by Lemma \\ref{1dim},\n$$H_1^{Fin}(\\Comm_{A_n}[H]\/H,\\pi^{-1}M)= \\pi^{-1}M((\\Comm_{A_n}[H]\/H)\/\\{1\\})$$ and\n$$H_0^{Fin}(\\Comm_{A_n}[H]\/H,\\pi^{-1}M)=\\pi^{-1}M((\\Comm_{A_n}[H]\/H)\/\\{1\\}).$$\nUsing Proposition \\ref{induced}, case (i) follows.\n\nLet us discuss the case $H = Z(A_n)$. Now $\\Comm_{A_n}[H]\/H=A_n\/Z(A_n)$ is an amalgamated product of two finite cyclic groups or of a finite cyclic and an infinite cyclic group, depending on the parity of $n$. In both cases, there is a tree model for the classifying space for proper actions of the commutator modulo $H$. Denote $\\Comm_{A_n}[H]\/H$ by $\\overline{A_n}$ for brevity.\n\nWe consider first the case $n=2k+1$.\nHere $\\overline{A_{n}}=S*L$, with $S=\\langle b(ab)^k Z(A_{n})\\rangle \\cong C_2$ and $L=\\langle ab Z(A_{n}) \\rangle \\cong C_{n}$.\nThe Bass-Serre tree has two orbits of vertices, with stabilizers conjugated to $S$ or $L$, and one free orbit of edges.\nThen, by Lemma \\ref{1dim}\n\n\\begin{equation}\n\\label{MVodd}\nH_1^{Fin}(\\overline{A_n},\\pi^{-1} M)\\hookrightarrow \\pi^{-1}M(\\overline{A_{n}}\/\\{1\\})\\stackrel{f}{\\rightarrow}\n \\end{equation}\n$$\\stackrel{f}{\\rightarrow} \\pi^{-1}M(\\overline{A_{n}}\/S)\\oplus \\pi^{-1}M(\\overline{A_{n}}\/L)\\twoheadrightarrow H_0^{Fin}(\\overline{A_{n}},\\pi^{-1}M).\n$$\n Hence, we obtain that $$H_0^{Fin}(\\overline{A_{n}},\\pi^{-1}M)=(\\pi^{-1}M(\\overline{A_{n}}\/S)\\oplus \\pi^{-1}M(\\overline{A_{n}}\/L))\/\\textrm{Im }f$$ and $$H_1^{Fin}(\\overline{A_{n}},\\pi^{-1}M)=\\textrm{Ker }f.$$\n Observe that the two components of $f$ are induced by the images of the projections $\\overline{A_{n}}\/\\{1\\}\\rightarrow \\overline{A_{n}}\/S$ and $\\overline{A_{n}}\/\\{1\\}\\rightarrow \\overline{A_{n}}\/L$ by the functor $\\pi^{-1}M$. 
The case (ii) follows by applying Proposition \\ref{induced}.\n\n\n\n\nConsider now the case $n=2k$.\nIn this situation $\\overline{A_{n}}=A_{n}\/Z(A_{n})$ is a free product $G=S\\ast L$, with $S=\\langle b Z(A_n) \\rangle \\simeq C_{\\infty}$ and $L=\\langle ab Z(A_n) \\rangle\\simeq C_n$.\nWe denote by $s$ the element $b Z(A_n)$.\nThe Bass-Serre tree for this group has\none orbit of vertices with stabilizers conjugate to $L$ and one orbit of edges with trivial stabilizers.\n\nThe exact sequence of Lemma \\ref{1dim} has the following form:\n\\begin{equation}\n\\label{MVeven}\nH_1^{Fin}(\\overline{A_n},\\pi^{-1}M)\\hookrightarrow \\pi^{-1}M(\\overline{A_{n}}\/\\{1\\}) \\stackrel{f}{\\rightarrow} \\pi^{-1}M(\\overline{A_{n}}\/L) \\twoheadrightarrow H_0^{Fin}(\\overline{A_{n}},\\pi^{-1}M).\n\\end{equation}\nThe function $f$ is induced by $g\\{1\\}\\in \\overline{A_{n}}\/\\{1\\} \\mapsto gsL- gL$ by the functor $\\pi^{-1}M$, and hence $f$ is trivial.\n\nHence, we deduce that $$H_1^{Fin}(\\overline{A_{n}},\\pi^{-1}M)= \\pi^{-1}M(\\overline{A_{n}}\/\\{1\\})$$ and $$H_0^{Fin}(\\overline{A_{n}},\\pi^{-1}M)=\\pi^{-1}M(\\overline{A_{n}}\/L).$$\nThe result now follows by applying Proposition \\ref{induced}.\n\\end{proof}\n\nObserve that, taking into account that $\\overline{A_{n}}$ is one-relator when $n$ is even, this computation agrees with the result of Corollary 3.23 in \\cite{MiVa03}, in the case of $M=R_\\mathbb{C}$, the complex representation ring.\n\n\n\\subsection{ Coefficients in $K$-theory groups}\n\\label{Ktheory}\nFrom the point of view of the Farrell-Jones conjecture, the case of interest is that of coefficients in the $K$-theory $K_q(R)$ of the group ring. We now describe all the homology groups of the exact sequence of Proposition \\ref{mainMV} with coefficients in this module, except the groups $H^{vc}(A_n,K_q(R[-]))$, which will be studied in the next section.\n\nWe begin by recalling the following definition.\n\\begin{defn}\\label{defn:regular}\nA {\\it regular} ring is a\n commutative noetherian ring such that in the localization at every prime ideal, the Krull dimension of the maximal ideal is equal to the cardinal of a minimal set of generators.\n\\end{defn}\nExamples of regular rings include fields (of dimension zero) and Dedekind domains. If $R$ is regular then so is the polynomial ring $R[x]$, with dimension one greater than that of $R$.\n\n\n\n Before stating our results, we also need to recall the \\emph{Bass-Heller-Swan decomposition} in $K$-theory, which permits one to decompose the $K$-theory of $R[\\Z]$. For thorough approaches to algebraic $K$-theory the reader is referred to \\cite{Bas68}, \\cite{Wei13} or \\cite{Luc21}.\n\n\\begin{thm}[\\cite{Sri91}, Theorem 9.8]\n\\label{BHS}\nGiven a ring $R$ and $q\\in\\Z$, there exists an isomorphism $$K_q(R[\\Z])\\simeq K_q(R)\\oplus K_{q-1}(R)\\oplus NK_q(R)\\oplus NK_q(R),$$ which is natural in the ring $R$.\n\\end{thm}\n\n{The additional terms $NK_q(R)$ are called the Nil-terms, and $NK_q(R)$ is defined as the kernel of the homomorphism induced in $K_q$ by the homomorphism $R[t]\\rightarrow R$ which sends $t$ to $1$. These terms vanish for a regular ring $R$, see \\cite[Section 9]{Sri91}.}\n\n{In our computations it will be important to understand the endomorphism $ind_n$ of $K_q(R[\\Z])$ induced by multiplication by $n$ in $\\Z$. 
The references for the sequel are \\cite[Section 2]{HaLu12}, \\cite{Sti82} and \\cite{Wei81}.}\n\nAccording to Bass-Heller-Swan-decomposition, $ind_n$ can be seen as a homomorphism\n$$ind_n\\colon K_q(R)\\oplus K_{q-1}(R)\\oplus NK_q(R)\\oplus NK_q(R)\\rightarrow K_q(R)\\oplus K_{q-1}(R)\\oplus NK_q(R)\\oplus NK_q(R).$$\n\\noindent{Here, by naturality of the decomposition, the image of $ind_n|_{K_q(R)}$ lies inside $K_q(R)$, the image of $ind_n|_{K_{q-1}(R)}$ lies inside $K_{q-1}(R)$ and analogously for the Nil-terms. Now, the restriction of $ind_n$ to $K_q(R)$ is the identity and the restriction of $ind_n$ to $K_{q-1}(R)$ is multiplication by $n$. By Farrell's trick \\cite{Far77}, $ind_n$ admits a transfer $res_n$ such that $res_n\\circ ind_n$ is multiplication by $n$.\nThis implies the following:}\n\n{\\begin{prop} \\label{kernel}\nIn the previous notation, the kernel of $ind_n$ is isomorphic to a direct sum $T_1(K_q(R))\\oplus T_2(K_q(R))$, where $T_1(K_q(R))$ is the $n$-torsion subgroup of $K_{q-1}(R)$ and $T_2(K_q(R))$ is a subgroup of the $n$-torsion subgroup of $NK_q(R)\\oplus NK_q(R).$\n\\end{prop}}\n\n{It should be remarked that the restriction of $ind_n|_{NK_q(R)}$ (sometimes call the \\emph{Frobenius map} in the literature) is related to the action of big Witt vectors in $NK_q(R)$, and is hard to describe in general. Anyhow, the previous proposition will be important when computing the Bredon homology of Artin groups of dihedral type.}\n\n\n\nThe following propositions\n{particularize} our previous results when the coefficient module is $K$-theory.\n\n For the sake of clarity, we maintain the notation from Proposition \\ref{mainMV}, although we are aware that sometimes can be found redundant.\n\n\\begin{prop}\n\\label{Anordinary}\n\nLet us consider an Artin group $A_n$ of dihedral type, $q\\in\\mathbb{Z}$, $R$ a ring. Then we have:\n\n\\begin{itemize}\n\n\\item $H_0^{Fin}(A_n,K_q(R[-]))=K_q(R)$.\n\n\\item $H_1^{Fin}(A_n,K_q(R[-]))=K_q(R)\\oplus K_q(R)$ if $n$ is even\n\n\\item $H_1^{Fin}(A_n,K_q(R[-]))=K_q(R)$ if $n$ is odd.\n\n\\item $H_2^{Fin}(A_n,K_q(R[-]))=K_q(R)$ if $n$ is even\n\\item $H_2^{Fin}(A_n,K_q(R[-]))=0$ if $n$ is odd.\n\n\\item $H_i^{Fin}(A_n,K_q(R[-]))=0$ if $i\\geq 3$.\n\n\\end{itemize}\n\n\\end{prop}\n\n\\begin{proof}\n\nAs $A_n$ is torsion-free, Bredon homology with respect to the family of finite groups is in fact ordinary homology. The proposition is then obtained by applying Universal Coefficient Theorem \\cite[Theorem 3A.3]{Hat02} to the ordinary homology groups of $A_n$ (Proposition \\ref{OrdHom}), taking into account that the latter are torsion-free and that the $K$-theory groups are abelian.\n\\end{proof}\n\nWe consider now {the} Bredon homology of the commensurators.\n\n\\begin{prop}\n\\label{FinComm}\n\nLet us consider an Artin group $A_n$ of dihedral type, $q\\in\\mathbb{Z}$, $R$ a ring. Let $\\Comm_{A_n}[H]$ be the commensurator of a virtually cyclic group in $A_n$. Then:\n\n\\begin{itemize}\n\n\n\\item If $\\Comm_{A_n}[H]=\\mathbb{Z}\\oplus\\mathbb{Z}$, then\n\n$$H_i^{Fin\\cap \\Comm_{A_n}[H]}(\\Comm_{A_n}[H],K_q(R[-]))=\\begin{cases} K_q(R) & \\text{ if $i=0,2$}\\\\\nK_q(R)\\oplus K_q(R) & \\text{ if i =1}\\\\\n0 & \\text{otherwise}. 
\\end{cases}$$\n\n\n\\item If $\\Comm_{A_n}[H]=A_n$, then $$H_i^{Fin\\cap \\Comm_{A_n}[H]}(\\Comm_{A_n}[H],K_q(R[-]))=H_i^{Fin}(A_n,K_q(R))$$ and this case was described in the previous proposition.\n\n\n\\end{itemize}\n\n\\end{prop}\n\n\\begin{proof}\n\nGiven a commensurator $\\textrm{Comm}_{A_n}[H]$, its Bredon homology with respect to the family $Fin\\cap \\textrm{Comm}_{A_n}[H]$ is its ordinary homology. Then, {as in the previous proposition}, the result follows by applying the Universal Coefficient Theorem to the ordinary homology groups of the commensurators.\n\\end{proof}\n\nWe now undertake the remaining case. In item (3) we follow the notation of Proposition \\ref{kernel}.\nMoreover, we use the following notation.\n\\begin{nt}\\label{not: CKR}\nFor $n$ odd, we denote by $C(K_q(R))$ the cokernel of the homomorphism $$\\tilde{f}\\colon K_q(R[\\Z])\\to K_q(R[\\Z])\\oplus K_q(R[\\Z])$$ induced by\n$f\\colon R[\\Z]\\rightarrow R[\\Z]\\oplus R[\\Z]$ induced in the first component by multiplication by 2 and in the second by multiplication by $n=2k+1$.\n\\end{nt}\nBy Bass-Heller-Swan decomposition, $C(K_q(R))$ this is a quotient of $K_q(R)\\oplus K_q(R)\\bigoplus(\\oplus_{i=1}^2 K_{q-1}(R))\\bigoplus (\\oplus_{i=1}^4 NK_q(R))$. Moreover, $\\tilde{f}$ restricts to the component corresponding to $K_q(R)$ of the Bass-Heller-Swan decomposition of $K_q(R[\\Z])$ to the diagonal map to $K_q(R)\\oplus K_q(R)$, a component of $K_q(R[\\Z])\\oplus K_q(R[\\Z])$.\nThus, $C(K_q(R))$ can be viewed as a quotient of $K_q(R)\\bigoplus(\\oplus_{i=1}^2 K_{q-1}(R))\\bigoplus (\\oplus_{i=1}^4NK_q(R))$ that can be identified in many cases, see Section \\ref{Sect:concrete}.\n\n\n\n\\begin{prop}\n\\label{FHcomm}\nLet $A_n$ be an Artin group of dihedral type, $q\\in\\mathbb{Z}$, $R$ a ring.\nConsider for every non-trivial virtually cyclic $H\\leqslant A_n$ and the family $\\mathcal{F}[H]$. Then:\n\n\n\n\\begin{enumerate}\n\n\n\\item If $i\\geq 2$, then $$H_i^{\\mathcal{F}[H]}(\\Comm_{A_n}[H],K_q(R[-]))=0.$$\n\n\\item If $H\\cap Z(A_n)$ is trivial, then $$H_0^{\\mathcal{F}[H]}(\\Comm_{A_n}[H],K_q(R[-]))=H_1^{\\mathcal{F}[H]}(\\Comm_{A_n}[H],K_q(R[-]))=K_q(R[\\mathbb{Z}]).$$\n\n\\item If $n$ is odd and $\\Comm_{A_n}[H]=A_n$, then $$H_0^{\\mathcal{F}[H]}(\\Comm_{A_n}[H],K_q(R[-]))=C(K_q(R))$$ and $$H_1^{\\mathcal{F}[H]}(\\Comm_{A_n}[H],K_q(R[-]))=T_1(K_q(R))\\oplus T_2(K_q(R)).$$\n\n\\item If $n$ is even and $\\Comm_{A_n}[H]=A_n$, then $$H_0^{\\mathcal{F}[H]}(\\Comm_{A_n}[H],K_q(R[-]))=H_1^{\\mathcal{F}[H]}(\\Comm_{A_n}[H],K_q(R[-]))=K_q(R[\\mathbb{Z}]).$$\n\n\\end{enumerate}\n\n\\end{prop}\n\n\n\\begin{proof}\n\nWe will check every item separately.\n\n\\begin{enumerate}\n\n\n\n\\item It follows straightforward from Corollary \\ref{zerohomology}.\n\n\\item The claim follows from item (i) in Proposition \\ref{homology0and1}, taking into account that $H$ is infinite cyclic an then $K_q(R[H])=K_q(R[\\mathbb{Z}])$.\n\n\\item Let $n=2k+1$. 
Consider the homomorphism $f\\colon M(A_{n}\/Z(A_n))\\to M(A_n\/\\langle b(ab)^k\\rangle ) \\oplus M(A_n\/\\langle ab \\rangle)$ defined in item (2) of Proposition \\ref{homology0and1}.\nTaking $M(A_n\/-)=K_q(R[-])$, we obtain a homomorphism $f_K:K_q(R[Z(A_n)])\\to K_q(R[\\langle b(ab)^k\\rangle])\\oplus K_q(R[\\langle ab \\rangle])$.\nObserve that the two components of $f_K$ are respectively induced by the inclusions $Z(A_n)\\hookrightarrow \\langle b(ab)^k\\rangle$ and $Z(A_n)\\hookrightarrow \\langle ab \\rangle$, which are both inclusions $\\Z\\hookrightarrow \\Z$ given respectively by multiplication by 2 and multiplication by $n$.\nThen, $f_K$ can be seen as the homomorphism $K_q(R[\\Z])\\rightarrow K_q(R[\\Z])\\oplus K_q(R[\\Z])$ induced by each multiplication in the corresponding component.\nAccording to Proposition \\ref{kernel}, the kernel of the first component of this homomorphism is equal to $T_1(K_q(R))\\oplus T_2(K_q(R))$, while its cokernel is $C(K_q(R))$ by definition.\nThe result now follows from item (2) in Proposition \\ref{homology0and1}.\n\n\n\n\\item In this case $A_n=\\Comm_{A_n}[H]$. The claim follows from item (3) in Proposition \\ref{homology0and1}, taking into account that $H$ is infinite cyclic and then $K_q(R[H])=K_q(R[\\mathbb{Z}])$.\n\\end{enumerate}\n\\end{proof}\n\n\\subsection{Understanding the homomorphisms of Proposition \\ref{mainMV}}\n\\label{morphism}\nIn our way to describe the Bredon homology of $A_n$ with respect to the family of virtually cyclic groups, we should describe to some extent the homomorphisms that appear in the Mayer-Vietoris sequence of Proposition \\ref{mainMV}.\n\n In the following we will use without explicit mention the previous three propositions, which identify the terms of the exact sequence. We also maintain the name of the homomorphisms in the sequence.\nNote that the superscript of the homomorphism $g^k_i$ specifies the degree of the homology in the source of $g^k_i$.\nMoreover, when we need to refer to the $j$-th component of $g_i^k$, we will write $g_{ij}^k$.\nFor example, $g_{22}^1$ is the second component of the homomorphism $g_2^1$ defined over the first homology group.\n\nTo prove our results, we need to analyze in detail the following homomorphism\n\\begin{equation}\\label{g2}\n\\bigoplus_{[H]}H_i^{Fin\\cap \\Comm_{A_n}[H]}(\\Comm_{A_n}[H],M) \\stackrel{g_2^i}{\\rightarrow} \\end{equation}\n$$ \\stackrel{g_2^i}{\\rightarrow}\\left(\\bigoplus_{[H]}H_i^{\\mathcal{F}[H]}(\\Comm_{A_n}[H],M)\\right)\\oplus H_i^{Fin}(A_n,M).\n$$\n\n\nThe homomorphism $g_{21}^i$ is induced by the vertical left arrow $\\sqcup_{[H]\\in I}\\mathrm{id}_{A_n}\\times_{\\Comm_{A_n}[H]}f_{[H]}$ of L\\\"{u}ck-Weiermann push-out \\eqref{eq:Luck-Weiermann} below (see also Theorem \\ref{maintheorem}).\nIn turn, the homomorphism $g_{22}^i$ is induced by the inclusion in the upper horizontal arrow of the push-out \\eqref{eq:Luck-Weiermann}\n\\begin{equation}\n\\label{eq:Luck-Weiermann}\n\\xymatrix{ \\coprod_{[H]\\in I}A_n\\times_{\\Comm_{A_n}[H]}\\uE \\Comm_{A_n}[H] \\ar[r]^{\\hspace{2cm} i} \\ar[d]^{\\coprod_{[H]\\in I}id_{A_n}\\times_{\\Comm_{A_n}[H]}f_{[H]}} & \\uE A_n \\ar[d] \\\\\n\\coprod_{[H]\\in I}A_n\\times_{\\Comm_{A_n}[H]}E_{{\\mathcal{F}} [H]}\\Comm_{A_n}[H] \\ar[r] & X. }\n\\end{equation}\n\n\n\n\nWe will decompose the homomorphism $g_2^i$ of Equation \\eqref{g2} into homomorphisms $g_{2[H]}^i$ which are the restriction of $g_2^i$ to the factor of the domain corresponding to $[H]$. 
We will write\n\n\\begin{equation}\n\\label{eq:g}\ng_{2[H]}^i\\colon H_i^{Fin\\cap \\Comm_{A_n}[H]}(\\Comm_{A_n}[H],M)\\to H_i^{\\mathcal{F}[H]}(\\Comm_{A_n}[H],M)\\oplus H_i^{Fin}(A_n,M).\n\\end{equation}\nNote that we are making a slight abuse of notation, as the real codomain of $g_{2[H]}^i$ is the same as {that of} $g_2^i$, but we have chosen to write only the subgroup where the image of $g_{2[H]}^i$ lies.\nMoreover, when needed, we will further decompose $g^i_{2[H]}$ into $g^i_{21[H]}\\oplus g^i_{22[H]}$ indicating the different factors of the image of $g^i_{2[H]}$. Observe that this notation is coherent with the previous one.\n\n\nAs all the commensurators are torsion-free, $H_i^{Fin\\cap \\Comm_{A_n}[H]}(\\Comm_{A_n}[H],M)$ is ordinary homology with coefficients in $M(\\Comm_{A_n}[H]\/\\{1\\})$.\n\n\n\n\nWe now analyze the homomorphism $g_2^i$ on the case $M=K_q(R[-])$. Before that, we introduce the following notation.\n\\begin{nt}\nWe will denote\n$N_q^{[H]} =K_{q-1}(R)\\oplus NK_q(R)\\oplus NK_q(R)$, where the superindex means that this group is associated to a concrete commensurability class $[H]$.\n\\end{nt}\nWe now will give more detailed descriptions of the homomorphism $g_2^i$ and the cokernels introduced above.\n\n\\subsubsection{{\\bf The homomorphism $g_2^2$ when $n$ odd:}}\n\\label{NqH}\nIn this case we have that the codomain of $g_2^2$ is $(\\bigoplus H_2^{\\mathcal{F}[H]}(\\Comm_{A_n}[H],M))\\oplus H_2^{Fin}(A_n,M)=\\{0\\}$, and therefore for all commensurability classes of infinite cyclic subgroups $[H]$ the homomorphisms $g_{2[H]}^2$ are trivial.\n\n\\subsubsection{{\\bf The homomorphism $g_2^2$ when $n$ even:}}\nIn this case we have that the codomain of $g_2^2$ is $(\\bigoplus H_2^{\\mathcal{F}[H]}(\\Comm_{A_n}[H],M))\\oplus H_2^{Fin}(A_n,M)=(\\bigoplus_{[H]}\\{0\\})\\oplus K_q(R)$.\nMoreover, by Proposition \\ref{Anordinary} and Proposition \\ref{FinComm},\nthe domain of all $g_{2[H]}^2$ are the same, {and} we have\n$$g_{2[H]}^2 \\colon K_q(R)\\to \\{0\\}\\oplus K_q(R).$$\nNote that $g_{22[H]}^2$ is induced by the inclusion given by the upper arrow in the push-out.\nWhen $H=Z(A_n)$, we have that $\\Comm_{A_n}[H]=A_n$ and therefore this inclusion is the identity.\nSince $g_{22[Z(A_n)]}^2$ is surjective, we have that $g_2^2$ is surjective.\n\n\n\n\n\\subsubsection{{\\bf The homomorphism $g_2^1$ when $n$ is odd:}}\n\\label{g21}\n{Before we describe this case, we need to make some considerations.\nRecall that we assume that $H$ is normal in $\\textrm{Comm}_{A_n}[H]$.\nObserve that in the models described in Section \\ref{HiF[H]} for $\\underline{E}(\\textrm{Comm}_{A_n} [H]\/H)$, the stabilizers of the edges are trivial for every $H$.\nThis means that if we consider the space $\\underline{E}(\\textrm{Comm}_{A_n} [H]\/H)$ as a model for $E_{{\\mathcal{F}} [H]}\\Comm_{A_n}[H]$ (see Proposition \\ref{induced}) the stabilizers of the edges are always isomorphic to $H$.\nOn the other hand, in any model of $\\underline{E}\\textrm{Comm}_{A_n} [H]$ the stabilizer of the edges should be trivial, as the action of $\\textrm{Comm}_{A_n} [H]$ is free.\nThen, the homomorphism induced by $f_{[H]}:\\underline{E}\\textrm{Comm}_{A_n} [H]\\rightarrow E_{{\\mathcal{F}} [H]}\\Comm_{A_n}[H]$ in the first chain group of the Bredon complex with coefficients in a module $M$ takes every copy of $M(\\textrm{Comm}_{A_n} [H]\/{1})$ to a copy of $M(\\textrm{Comm}_{A_n}[H]\/H)$ with the homomorphism induced by the inclusion of the trivial group in $H$. 
In particular, if $M=K_q(R[-])$ for some $q$, the corresponding homomorphism $K_q(R)\\rightarrow K_q(R[\\mathbb{Z}])$ is given by the inclusion of $K_q(R)$ in the corresponding piece of the Bass-Heller-Swan decomposition of $K_q(R[\\mathbb{Z}])$. This fact will be very useful in the sequel}.\n\nIn the following we use the notation of Proposition \\ref{kernel} when needed.\nWe have that $$g_{2[Z(A_n)]}^1\\colon K_q(R)\\to T_1(K_q(R))\\oplus T_2(K_q(R))\\oplus K_q(R)$$ and for $H$ nontrivial, $[H]\\neq [Z(A_n)]$\n$$g_{2[H]}^1\\colon K_q(R)\\oplus K_q(R)\\to K_q(R[\\Z]) \\oplus K_q(R).$$\n\n\n\n\nNote that there are infinitely many commensurability classes of infinity cyclic subgroups different from the $[Z(A_n)]$ and hence infinitely many terms of this kind.\nWe now examine the cases $[H]\\neq [Z(A_n)]$ and $[H]=[Z(A_n)]$ separately.\n\nIf $[H]=[Z(A_n)]$ then we have $\\Comm_{A_n}[H]=A_n$.\n\n In order to describe $g^{1}_{21[Z(A_n)]}$, consider the homomorphism $$C_1^{Fin}(\\underline{E}\\textrm{Comm}_{A_n}[H],K_q)\\rightarrow C_1^{{\\mathcal{F}} [H]}(E_{{\\mathcal{F}} [H]}\\Comm_{A_n}[H],K_q)$$ at the level of Bredon chains that induces $g^{1}_{21[Z(A_n)]}$ in homology. According to the previous considerations, this homomorphism is given by the inclusion $i:K_q(R)\\hookrightarrow K_q(R[H])=K_q(R[\\Z])$ given by Bass-Heller-Swan decomposition. But by item (3) of Proposition \\ref{FHcomm}, the image of this homomorphism is trivial in $H_1^{{\\mathcal{F}} [H]}(E_{{\\mathcal{F}} [H]}\\Comm_{A_n}[H],K_q)$, and hence $g^{1}_{21[Z(A_n)]}$ is also trivial. In turn, $g^{1}_{22[Z(A_n)]}$ is the identity.\n\n\n\nIf $[H]\\neq[Z(A_n)]$ then $\\Comm_{A_n}[H]=\\Z^2$, and $\\Comm_{A_n}[H]\/H$ is isomorphic to $\\Z$.\nIn this case $g_{21 [H]}^1$ is given by {a} homomorphism $K_q(R)\\oplus K_q(R)\\rightarrow K_q(R[\\Z])$, where we assume that the first component of the domain corresponds to $H$ and the second to $Z(A_n)$. As the homomorphism $g_{21 [H]}^1$ is induced in homology by the quotient homomorphism $\\textrm{Comm}_{A_n}[H]\\rightarrow \\textrm{Comm}_{A_n}[H]\/H$, the previous results imply that the first component of $g_{21 [H]}^1$ is trivial, while the second, which corresponds to the center, identifies the copy of $K_q(R)$ in the Bass-Heller-Swan decomposition of $K_q(R[\\Z])$.\n\n\n\n On the other hand, $g_{22[H]}^1\\colon (\\Z\\oplus\\Z)\\otimes K_q(R)\\rightarrow \\Z\\otimes K_q(R)$ is defined by the abelianization of the inclusion $H_1(\\Comm_{A_n}[H])\\rightarrow H_1(A_n)$ in the first component of the tensor product and by the identity in the second.\n\nNow since the image of $g^1_{2[Z(A_n)]}$ is precisely given by the copy of $K_q(R)$ that corresponds to $H_1(A_n,K_q)$, the previous computations imply that the cokernel of $g^1_2$ is then equal to $(\\bigoplus_{[H]\\neq [Z(A_n)]}N_q^{[H]})\\oplus T_1(K_q(R))\\oplus T_2(K_q(R))$.\n\n\\subsubsection{ {\\bf The homomorphism $g_2^1$ when $n$ is even:}}\n\n\nThis homomomorphism is defined in equation \\eqref{eq:g} and, according to Propositions {\\ref{Anordinary}, \\ref{FinComm} and \\ref{FHcomm}}, its first component is given by\n$$g_{2[H]}^1 \\colon K_q(R)\\oplus K_q(R)\\to K_q(R[\\Z]) \\oplus ( K_q(R)\\oplus K_q(R))$$\nfor every commensurability class $[H]$. Let us describe this component with more detail.\n\n\n\n\nWe consider first the case $[H] \\neq [Z(A_n)]$. 
Here, the same argument as in the odd case proves that the component of $g_{21[H]}^1$ given by the center is the inclusion $K_q(R)\\hookrightarrow K_q(R[\\Z])$ via Bass-Heller-Swan decomposition, and the other component is trivial.\n\nAlso when $[H]\\neq [Z(A_n)]$ the homomorphism $g_{22[H]}^1$ is identified (via Proposition \\ref{FinComm} and the Universal Coefficient Theorem) with a homomorphism $(\\Z\\oplus \\Z) \\otimes K_q(R)\\rightarrow (\\Z\\oplus\\Z)\\otimes K_q(R)$, which comes, as above, from tensoring with $K_q(R)$ the homomomorphism $H_1(\\Comm_{A_n}[H])\\rightarrow H_1(A_n)$ given by abelianization of the inclusion of the commensurator in $A_n$.\n\nNow we consider $[H]=[Z(A_n)]$. As in the previous case, the homomorphism $g_{21[H]}^1$ can be described as:\n$$H_1^{Fin}(A_n,K_q(-))\\to H_1^{{\\mathcal{F}} [H]}(A_n,\\pi^{-1} K_q(-)) \\cong H_1^{Fin}(A_n\/Z(A_n),\\pi^{-1} K_q(-)).$$\nNow recall from Proposition \\ref{homology0and1} that\n$$H_1^{Fin}(A_n\/Z(A_n),\\pi^{-1} K_q(-))=\\pi^{-1}K_q(R[\\{1\\}])=K_q(R[Z(A_n)])=K_q(R[\\Z]),$$\nwhere $Z(A_n)$ is interpreted here as the stabilizer of the unique $A_n$-class of edges in the model of $E_{{\\mathcal{F}} [H]}A_n$ described in Section \\ref{HiF[H]}.\nOn the other hand, the two copies of $K_q(R)$ in $H_1(A_n,K_q(R))$ come from taking values of the module $K_q(R[-])$ on the trivial group, interpreted as the stabilizer of two different $A_n$-classes of edges in a model of $EA_n$. Again taking into account our previous considerations about stabilizers, the two components of $g_{21[Z(A_n)]}^1 \\colon K_q(R)\\oplus K_q(R)\\to K_q(R[\\Z])$ induce inclusion of the $K_0(R)$ via Bass-Heller-Swan decomposition. On the other hand, it is clear that $g_{22[Z(A_n)]}^1$ is the identity.\n\nAs $H_1(A_n)$ is a free abelian group of rank 2 generated by the images of $a$ and $b$ under abelianization, we may assume that the two copies of $K_q(R)$ in the image of $g_{2[H]}^1$ correspond respectively to these two copies of $\\Z$, after tensoring with $K_q(R)$. Now observe that if $H=\\langle a\\rangle$, the image of the restriction of $g_{22[H]}^1$ to the homology of its commensurator is exactly the first of the two copies of $K_q(R)$, while the image of the restriction of $g_{22[H]}^1$ to the homology of the commensurator of $H=\\langle b\\rangle$ is the other copy. As the restrictions of $g_{21[H]}^1$ to the homology of these commensurators are trivial, we obtain that $\\{0\\}\\oplus K_q(R)\\oplus K_q(R)$ lies in the image of $g_{2[H]}^1$. In fact, the description of $g_{21[H]}^1$ for $H=Z(A_n)$ implies that the copy of $K_q(R)$ inside $K_q(R[\\Z])$ is also in the image, and now it is easy to conclude that the image of $g_{2[H]}^1$ is in fact $(\\bigoplus_{[H]}) K_q(R))\\oplus K_q(R)\\oplus K_q(R)$, corresponding the big direct sum to the copies of $K_q(R)$ included in each copy of $K_q(R[\\Z])$, and the remaining two copies corresponding to the ordinary homology of $A_n$. In particular, we have that $\\textrm{coker } g_2^1=\\bigoplus_{[H]} N_q^{[H]}.$\n\n\n\n\n\n\\subsubsection{{\\bf The homomorphism $g_2^0$ for every $n$:}}\n\nSimilar considerations to those of the beginning of Section \\ref{g21} hold here.\nIf $x$ is a vertex of $\\underline{E}\\textrm{Comm}_{A_n}[H]$ such that the stabilizer of $f_{[H]}(x)$ is infinite cyclic (here $f_{[H]}$ is the function of \\eqref{eq:Luck-Weiermann}), then the induced map $K_q(R)\\rightarrow K_q(R[\\Z])$ induces the injection in the correspondent component of Bass-Heller-Swan decomposition. 
This is always the case except when $n$ is odd, $H=Z(A_n)$ and $f_{[H]}(x)$ has the shape $gC_{\\infty}$ if we consider $\\underline{E}(\\textrm{Comm}_{A_n} [H]\/H)$ as a $\\textrm{Comm}_{A_n} [H]\/H$-complex.\nThis will be enough to describe $g_2^0$ to the extent we need.\n\nWe begin describing $g_{2[H]}^0$ according to the different prossibilities of $[H]$ and $n$.\n\nWhen $H\\neq Z(A_n)$, the results of Propositions \\ref{Anordinary}, \\ref{FinComm} and \\ref{FHcomm} imply that $g_{2[H]}^0$ is defined in the following way:\n$$g_{2[H]}^0\\colon K_q(R)\\to K_q(R[\\Z])\\oplus K_q(R).$$\nBy the previous considerations, the homomorphism $g_{21[H]}^0$ identifies $K_q(R)$ as the corresponding direct summand of $K_q(R[\\mathbb{Z}])$ in the Bass-Heller-Swan decomposition, while $g^{0}_{22[H]}$ is induced by the inclusion $\\textrm{Comm}_{A_n} [H]\\hookrightarrow A_n$, counts the number of connected components of the classifying space, and then is the identity.\n\nWhen $n$ is even and $H=Z(A_n)$, we have that $H_0^{{\\mathcal{F}} [H]}(\\textrm{Comm}_{A_n} [H],K_q(-))=K_q(R[\\Z])$ by Proposition \\ref{FHcomm}, and this copy of $\\Z$ corresponds to a stabilizer which is isomorphic to $C_n$, a cyclic group of order $n$. Then, by the previous considerations, the homomorphism $g_{2[H]}^0$ behaves as in the case $n$ even and $H\\neq Z(A_n)$.\n\nIn the case $n$ odd and $H=Z(A_n)$, again by Proposition \\ref{FHcomm} we have that $ H_0^{{\\mathcal{F}} [H]}(\\textrm{Comm}_{A_n} [H],K_q(-))=C(K_q(R))$, where $C(K_q(R))$ was defined in Notation \\ref{not: CKR}.\nThen $g_{21[H]}^0$ is defined as a homomorphism\n$$g_{2[H]}^0\\colon K_q(R)\\to C(K_q(R))\\oplus K_q(R).$$\nRemark from the definition of $C(K_q(R))$ that this group is the direct sum of $K_q(R)$ with quotients of $\\oplus_{i=1}^2 K_{q-1}(R)$ and $\\oplus_{i=1}^4 NK_q(R)$,\nand that this copy of $K_q(R)$ comes from the identification of the respective copies of $K_q(R)$ that appear in the Bass-Heller-Swan decomposition of $K_q(R[\\langle b(ab)^k\\rangle])$ and $K_q(R[\\langle ab\\rangle])$.\nThese copies correspond respectively to the stabilizers $C_2$ and $C_{2k+1}$ in $\\underline{E}(\\textrm{Comm}_{A_n} [H]\/H)$ (as a model for the $\\textrm{Comm}_{A_n} [H]\/H$-action). Then, the previous considerations about stabilizers imply that $g_{21[H]}^0$ maps this $K_0(R)$ isomorphically to the mentioned copy of itself inside $C(K_q(R))$.\nIn turn, $g_{22[H]}^0$ is again the identity, arguing as in the even case.\n\n\nWith the previous information, we can describe $\\mathrm{coker}g_2^0$.\n\nNow if $n$ is even, observe that the source of $g_2^0$ is $\\oplus_{[H]} K_q(R)$ and the codomain is $(\\oplus_{[H]}K_q(R[\\Z]))\\oplus K_q(R)$.\nMoreover, the image of every $g_{2[H]}^0$ consists in a copy of $K_q(R)$ inside $K_q(R[\\Z])$ (again the copy that appears in the Bass-Heller-Swan decomposition), and the copy of $K_q(R)$ that corresponds to $H_0^{Fin}(A_n,M)$, which is fixed for any choice of $H$. Hence, the cokernel of the homomorphism $g_2^0$ is isomorphic to the quotient of a direct sum of copies of $K_q(R[\\Z])$ (indexed by $[H]$) by the identification of all the copies $K_q(R)$ which are the images of the homomorphisms $g^0_{22[H]}$. 
Observe that the copies of $K_q(R[\\Z])$ are ``glued\" by the copy of $K_q(R)$ that corresponds to $H_0^{Fin}(A_n,M)$, and hence we have $$\\mathrm{ coker } g_2^0=(\\bigoplus_{[H]} N_q^{[H]})\\oplus K_q(R).$$\n\nWhen $n$ is odd, we only need to take into account that in the case of $H=Z(A_n)$ the role of $K_q(R[\\Z])$ in the codomain is played by $C(K_q(R))$.\nThen, using the same argument as in the previous case and denoting by $\\overline{C}(K_q(R))$ the quotient of $C(K_q(R))$ under the copy of $K_q(R)$ in the Bass-Heller-Swan decomposition, we obtain that: $$\\mathrm{ coker } g_2^0=(\\bigoplus_{[H]\\neq [Z(A_n)]} N_q^{[H]})\\oplus K_q(R)\\oplus\\overline{C}(K_q(R)).$$\n\nObserve that $g_2^0$ is a monomorphism for every $n$, as the inclusions of $K_q(R)$ in $K_q(R[\\Z])$ and $C(K_q(R))$ are so.\n\n\\vspace{0.5cm}\n\n\nNow we can describe the Bredon homology of $A_n$ with respect to the family of virtually cyclic groups.\nObserve that the (co)kernels of the statement have been previously described, and that the groups $N_q^{[H]}$ were defined in Section \\ref{NqH}.\n\n\n\n\\begin{thm}\n\\label{Thm:Bredon}\nLet $A_n$ be an Artin group of dihedral type. In the previous notation, we have the following:\n\\begin{enumerate}\n \\setlength{\\itemindent}{-2em}\n\\item $H_i^{vc}(A_n,K_q (R[-]))=\\{0\\}$ for $i\\geq 4$.\n\\item $H_3^{vc}(A_n,K_q (R[-]))=\\begin{cases}\\bigoplus_{[H]\\neq [Z(A_n)]} K_q(R) & \\text{$n$ odd}\\\\\n\\ker g_2^2 & \\text{$n$ even.}\n\\end{cases}$\n\n\\item $H_2^{vc}(A_n,K_q (R[-]))= \\ker g_2^1$.\n\n\\item $H_1^{vc}(A_n,K_q (R[-]))= \\emph{coker } g_2^1= \\begin{cases}(\\bigoplus_{[H]\\neq [Z(A_n)]}N_q^{[H]})\\oplus T_1(K_q(R))\\oplus T_2(K_q(R)) & \\text{$n$ odd}\\\\\n\\bigoplus_{[H]} N_q^{[H]}\n & \\text{$n$ even.}\n\\end{cases}$%\n\\item $H_0^{vc}(A_n,K_q (R[-]))= \\emph{ coker } g_2^0= \\begin{cases}(\\bigoplus_{[H]\\neq [Z(A_n)]} N_q^{[H]})\\oplus K_q(R)\\oplus\\overline{C}(K_qR) & \\text{$n$ odd}\\\\\n(\\bigoplus_{[H]} N_q^{[H]})\\oplus K_q(R)\n & \\text{$n$ even.}\n \\end{cases}$%\n\\end{enumerate}\n\\end{thm}\n\n\\begin{proof}\n\nThe proof is based on the sequence of Proposition \\ref{mainMV} and the previous homological computations. We will check the five items in a separate way.\n\n\\begin{enumerate}\n\\item By Proposition \\ref{Anordinary}, Proposition \\ref{FinComm} and Proposition \\ref{FHcomm} the sequence of Proposition \\ref{mainMV} is identically trivial to the left of $g_3^3$.\n\n\\item As the exact sequence of Proposition \\ref{mainMV} is identically trivial to the left of $g_3^3$, the map $g_3^3$ is trivial and by exactness, $H_3^{vc}(A_n, K_q(R[-]))$ is isomorphic to the kernel of $g_2^2$. 
This completes the even case.\n\n\nFor the case $n$ odd, we claim that $g_1^3$ is an isomorphism.\nIndeed, from Proposition \\ref{Anordinary} and Proposition \\ref{FHcomm}, the term $(\\bigoplus H_2^{\\mathcal{F}[H]}(\\Comm_{A_n}[H],M))\\oplus H_2^{Fin}(A_n,M)=\\{0\\}$ and in particular $\\textrm{Im}(g_2^2)=\\{0\\}$.\nSince $g_1^3$ is injective, the claim follows.\nBy Proposition \\ref{FinComm}, the specific description of the odd case follows.\n\n\\item There are two different arguments depending if $n$ is odd or even.\n\nFor $n$ odd, as the term $(\\bigoplus H_2^{\\mathcal{F}[H]}(\\Comm_{A_n}[H],M))\\oplus H_2^{Fin}(A_n,M)$ is trivial, the statement is a direct consequence of the exactness of the sequence of Proposition \\ref{mainMV}.\n\nFor $n$ even, we have seen in the discussion above about $g_2^2$ that this map is surjective, and hence $g_3^2$ is the trivial map.\nThis implies that $H_2^{vc}(A_n, K_q(R[-]))$ is the image of $g_1^2$, or equivalently, the kernel of $g_2^1$.\n\n\\item The previous description of the homomorphisms in the Mayer-Vietoris sequence proves that $g_2^0$ is a monomorphism, and hence $g_3^1$ is surjective and $H_1^{vc}(A_n,K_q( R[-]))$ the cokernel of $g_2^1$, which has been described above in terms of the summands $N_q^{[H]}$.\n\n\\item Since the Mayer-Vietoris sequence {ends} at $H_0^{vc}(A_n ,K_q( R[-]))$ this term is equal to the image of $g_3^0$ which is equal to the cokernel of $g_2^0$ that was described above.\n\\end{enumerate}\n\\end{proof}\n\n\nAs said in the introduction, this theorem opens the door to concrete computations of Bredon homology of $A_n$ with respect to the family of virtually cyclic groups, provided there is available information about the $K$-theory of the coefficient ring. In particular, when $R$ is a regular ring the absence of Nil-terms and negative K-theory groups make the calculations easier and more precise.\nFor instance, we have\n\\begin{cor}\\label{cor:K_0}\nLet $A_n$, $n>2$ be an Artin group of dihedral type. Let $R$ be a regular ring. Then $K_0(RA_n)=K_0(R)$.\n\\end{cor}\n\\begin{proof}\nAs $R$ is regular, $K_{i}(R)$ vanishes for $i<0$ and also the Nil-Terms of the Bass-Heller-Swan decomposition.\nIn particular, by Theorem \\ref{Thm:Bredon}, $H_0^{vc}(A_n, K_0(R[-]))= K_0(R)$.\nMoreover, as $K_{i}(R)$ vanishes for $i<0$ we see by Theorem \\ref{Thm:Bredon}, that $H_j^{vc}(A_n,K_i(R[-]))=\\{0\\}$ for $i<0$.\nThen the $E_2$-page of the Atiyah-Hirzebruch spectral sequence is concentrated in the non-negative part of the 0th, 1st, 2nd and 3rd columns, and $E_{\\infty}^{0,0}=E_2^{0,0}= K_0(R)$.\n\\end{proof}\n\nIn particular, this implies that every finitely dominated $CW$-complex whose fundamental group is $A_n$ has the homotopy type of a finite $CW$-complex.\n\nIn the following section we compute these Bredon homology groups for different choices of the ring, including some non-regular ones.\n\n\\section{Computations of $H_i^{vc}(A_n,K_q (R[-]))$ for several coefficient rings}\n\\label{Sect:concrete}\n\nIn this section we use Theorem \\ref{Thm:Bredon} to describe $H_i^{vc}(A_n,K_q (R[-]))$ for some instances of the ring $R$, both regular ($\\mathbb{Z}, \\mathbb{F}_q$) and non-regular ($\\mathbb{Z}[\\mathbb{Z}\/{\\bf 2}], \\mathbb{Z}[\\mathbb{Z}\/{\\bf 2}\\times \\mathbb{Z}\/{\\bf 2}]$ and $\\mathbb{Z}[\\mathbb{Z}\/{\\bf 4}]$).\nWe recall that all these groups give information about the $E^2$-term of the corresponding Atiyah spectral sequence. 
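Concretely, and with the indexing conventions of \\cite{Luc21}, the $E^2$-term of this spectral sequence reads\n$$E^2_{p,q}=H_p^{vc}(A_n,K_q (R[-])),$$\nconverging to $H^{A_n}_{p+q}({\\underline{\\underline{E}}} A_n,\\mathbf{K} (R))$, which agrees with $K_{p+q}(R A_n)$ whenever the Farrell-Jones conjecture holds for $A_n$; we recall this only as a guide for the examples below, and refer to \\cite{Luc21} for the precise construction. 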
We point out that in the regular cases many groups can be computed, because the $K$-theory of $\\Z$ is nearly known and the $K$-theory of $\\mathbb{F}_q$ is known (see the corresponding examples below).\nIn the non-regular framework, by contrast, it is very difficult to find concrete descriptions of $K_q(R)$ for $q>1$, or of the corresponding Nil-terms.\n\nRecall that the groups $H_i^{vc}(A_n,K_q (R[-]))$ are trivial for $i\\geq 4$, any $q$ and any ring $R$.\n\n\\begin{ex}\n\\label{KZ}\n\nFirst we compute the Bredon homology of $A_n$ with respect to the family of virtually cyclic subgroups, taking as coefficients $K_q(\\Z [-])$, for $q=0,1,2$.\n\nWe need in our computations the lower algebraic $K$-theory groups of the integers. The groups $K_q(\\Z)$ are known for all $q\\leq 7$ and all $q\\geq 8$ such that $q \\not\\equiv 0 \\mod 4$, see \\cite[page 2]{Wei05}.\nFor example, the first values of $K_q(\\Z)$ are given in the following table:\n\\begin{center}\\begin{tabular}{c|c|c|c|c|c|c|c|c|c}\n\\hline\n$q$ & $<0$ &0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\\\\n\\hline\n$K_q(\\Z)$ & 0 & $\\Z$ & $\\Z\/{\\bf 2}$ & $\\Z\/{\\bf 2}$ & $\\Z\/{\\bf 48}$ & 0 & $\\Z$ & 0 & $\\Z\/{\\bf 240}$ \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\nObserve that, as $\\Z$ is regular, the Nil-terms in the Bass-Heller-Swan decomposition are trivial, and then for every $i\\in \\mathbb{N}$, $K_i(\\Z[\\Z])=K_i(\\Z)\\oplus K_{i-1}(\\Z)$ and $N_q^{[H]}=K_{q-1}(\\Z)$. Moreover, also by regularity, $K_i(\\Z)=0$ if $i\\leq -1$.\n\nTaking into account all these considerations, the previous theorem implies the following.\n\nFor $q=0$, we have:\n\n\\begin{enumerate}\n\n\\item $H_3^{vc}(A_n,K_0 (\\Z[-]))\\simeq H_2^{vc}(A_n,K_0 (\\Z[-])) \\simeq \\bigoplus_{\\aleph_0} \\Z$.\n\\item {$H_1^{vc}(A_n,K_0 (\\Z[-]))=0$.}\n\\item $H_0^{vc}(A_n,K_0 (\\Z[-]))=\\Z$.\n\n\\end{enumerate}\n\nNow for $q=1$,\n\n\\begin{enumerate}\n\n\n\n\\item $H_3^{vc}(A_n,K_1 (\\Z[-]))\\simeq H_2^{vc}(A_n,K_1 (\\Z[-])) \\simeq \\bigoplus_{\\aleph_0} \\Z \/{\\bf 2}$.\n\\item {$H_1^{vc}(A_n,K_1 (\\Z[-]))=\\bigoplus_{\\aleph_0} \\Z$}.\n\\item {$H_0^{vc}(A_n,K_1 (\\Z[-]))=(\\bigoplus_{\\aleph_0} \\Z)\\oplus \\Z \/{\\bf 2}$}.\n\n\\end{enumerate}\n\nAnd finally, for $q=2$, all these groups are isomorphic to $\\bigoplus_{\\aleph_0} \\Z \/{\\bf 2}$.\n\n\nLet us briefly explain these computations.\nWe consider first $q=0$.\nFor $H_3^{vc}(A_n,K_0 (\\Z[-]))$, the odd case is immediate.\nIn the even case, $H_3^{vc}(A_n,K_0 (\\Z[-]))$ is the kernel of a homomorphism $\\bigoplus_{\\aleph_0} \\Z\\rightarrow \\Z$, and then isomorphic to $\\bigoplus_{\\aleph_0} \\Z$.\nThis easy argument will be frequently used in the examples of this section without explicit mention.\n\nWe now deal with $H_2^{vc}(A_n,K_0 (\\Z[-]))$.\nWe only deal with the odd case, as the even one is very similar.\nFirst observe that the kernel of $g_2^1$ is isomorphic to $\\bigoplus_{\\aleph_0} \\Z$.\nMore precisely, taking into account the description of Section \\ref{g21} and the values of the $K_i(\\Z)$, $g_2^1$ (for $n$ odd) is defined in the following way:\n\n$$g_2^1: (\\bigoplus_{[H]\\neq [Z(A_n)]} \\Z^2)\\oplus \\Z\\rightarrow (\\bigoplus_{[H]\\neq [Z(A_n)]} \\Z)\\oplus \\Z.$$\n\nHere the kernel of $g_{21[H]}^1$ is isomorphic to $\\Z$ for every $[H]\\neq [Z(A_n)]$.\nAs there are infinitely many such commensurability classes, we conclude that the kernel is also an infinite direct sum, and then $H_2^{vc}(A_n,K_0 (\\Z[-]))=\\bigoplus_{\\aleph_0} \\Z$.\n\nObserve that, as $K_{-1}(\\Z)=0$ and $\\Z$ is regular, 
$N_0^{[H]}=T_i(K_0\\Z)=\\overline{C}(K_0\\Z)=0$.\nNow the values of $H_1^{vc}(A_n,K_0 (\\Z[-]))$ and $H_0^{vc}(A_n,K_0 (\\Z[-]))$ are easily deduced from items (4) and (5) of Theorem \\ref{Thm:Bredon}.\n\nNow take $q=1$. For $H_3^{vc}(A_n,K_1 (\\Z[-]))$ and $H_2^{vc}(A_n,K_1 (\\Z[-]))$ it is argued as in the previous case, taking into account that every subgroup of an $\\mathbb{F}_2$-vector space is again an $\\mathbb{F}_2$-vector space.\nNow observe that $N_1^{[H]}=K_0(\\Z)=\\Z$ by the Bass-Heller-Swan decomposition, and regularity and the fact that $K_0(\\Z)=\\Z$ is torsion-free imply that $T_1(K_1\\Z)=0$, $T_2(K_1\\Z)=0$ and $\\overline{C}(K_1\\Z)=\\Z$.\nThe values of $H_1^{vc}(A_n,K_1 (\\Z[-]))$ and $H_0^{vc}(A_n,K_1 (\\Z[-]))$ follow again from items (4) and (5) of Theorem \\ref{Thm:Bredon}.\n\nFinally, for $q=2$, the values of the homology are immediately implied by regularity and the fact that $K_2(\\Z)=K_1(\\Z)=\\Z \/{\\bf 2}$.\n\n\\end{ex}\n\n\n\\begin{ex}\n\nNow we will compute the groups $H_i^{vc}(A_n,K_q (\\mathbb{F}_2[-]))$, for $0\\leq q\\leq 3$. First, the following table (see \\cite{Qui73}) includes the algebraic $K$-groups that are necessary in our computations:\n\n\\begin{center}\\begin{tabular}{c|c|c|c|c|c}\n\\hline\n$q$ & $<0$ & 0 & 1 & 2 & 3 \\\\\n\\hline\n$K_q(\\mathbb{F}_2)$ & 0 & $\\Z$ & 0 & 0 & $\\Z\/{\\bf 3}$ \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\n\nAs $\\mathbb{F}_2$ is regular, the same considerations about the Nil-terms and the negative $K$-groups apply also in this case. Hence, again as a direct consequence of Theorem \\ref{Thm:Bredon} we have the following results. For $q=0$ and every $i\\in\\mathbb{N}$, $H_i^{vc}(A_n,K_0 (\\mathbb{F}_2[-]))=H_i^{vc}(A_n,K_0 (\\Z[-]))$, because $K_0(\\Z)=K_0(\\mathbb{F}_2)$ and then $K_0 (\\mathbb{F}_2[-])=K_0 (\\Z[-])$ because both have the same Bass-Heller-Swan decomposition.\n\nFor $q=1$, we have $H_3^{vc}(A_n,K_1 (\\mathbb{F}_2[-]))=H_2^{vc}(A_n,K_1 (\\mathbb{F}_2[-]))=0$, as $K_1(\\mathbb{F}_2)=0$. 
On the other hand, as {$N_1^{[H]}=\\Z$} for every $H$, $H_1^{vc}(A_n,K_1 (\\mathbb{F}_2[-]))=H_0^{vc}(A_n,K_1 (\\mathbb{F}_2[-]))=\\bigoplus_{\\aleph_0}\\Z$.\n\n\nFor $q=2$, the triviality of $K_2(\\mathbb{F}_2)$ and $K_1(\\mathbb{F}_2)$ implies the triviality of $H_i^{vc}(A_n,K_2 (\\mathbb{F}_2[-]))$ for every $i$.\n\nFinally, for $q=3$, we have $H_2^{vc}(A_n,K_3 (\\mathbb{F}_2[-]))\\simeq H_3^{vc}(A_n,K_3 (\\mathbb{F}_2[-]))=\\bigoplus_{\\aleph_0}\\Z \/{\\bf 3}$.\nAs the groups $N_3^{[H]}$ are trivial and so is $K_2(\\mathbb{F}_2)$, we obtain that $H_1^{vc}(A_n,K_3 (\\mathbb{F}_2[-]))$ is trivial, and $H_0^{vc}(A_n,K_3 (\\mathbb{F}_2[-]))=\\Z \/{\\bf 3}$.\n\\end{ex}\n\n\n\nWe continue with some non-regular examples, namely the rings $\\Z [\\Z \/{\\bf 2}]$, $\\Z [\\Z \/{\\bf 2}\\times \\Z \/{\\bf 2}]$ and $\\Z [\\Z \/{\\bf 4}]$, which we will respectively denote by $R_1$, $R_2$ and $R_3$.\nWe will compute the groups $H_i^{vc}(A_n,K_q (R_j[-]))$, for $0\\leq i\\leq 3$, $q=0,1$ and $1\\leq j\\leq 3$.\nIn order to do this we will need the values of their lower algebraic $K$-theory groups, as well as the Nil groups.\nAll these groups are displayed in the following table:\n\n\\begin{center}\\begin{tabular}{c|c|c|c|c|c}\n\\hline\n& $K_1$ & $K_0$ & $K_{-1}$ & $NK_0$ & $NK_1$ \\\\\n\\hline\n$\\Z [\\Z \/{\\bf 2}]$ & $(\\Z \/{\\bf 2})^2$ & $\\Z$ & 0 & 0 & 0 \\\\\n\\hline\n$\\Z [\\Z \/{\\bf 2}\\times \\Z \/{\\bf 2}]$ & $(\\Z \/{\\bf 2})^3$ & $\\Z$ & $\\Z^r$ & $\\bigoplus_{\\aleph_0} \\Z\/{\\bf 2}$ & $\\bigoplus_{\\aleph_0} \\Z\/{\\bf 2} $\\\\\n\\hline\n$\\Z [\\Z \/{\\bf 4}]$ & $\\Z \/{\\bf 2}\\times \\Z \/{\\bf 4}$ & $\\Z$ & $\\Z^s$ & $\\bigoplus_{\\aleph_0} \\Z\/{\\bf 2}$&$ \\bigoplus_{\\aleph_0} \\Z\/{\\bf 2}$ \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\nLet us briefly explain the values of the table. By work of Oliver \\cite[Theorem 14.1-2]{Oli88}, the kernels $SK_1(R_j)$ of the determinant maps are trivial, and hence (see for example \\cite{Ste78}) $K_1(R_j)$ is the group of units of $R_j$. Now the values of $K_1$ follow from a theorem of Higman (\\cite{Hig40}, see also \\cite[II.4.1]{She78}). In turn, by \\cite[Proposition 6]{Cas73}, the reduced $K_0$ of these three rings is trivial, and then \\cite[Lemma 2.18]{Luc21} implies that $K_0(R_j)=\\Z$ for every $j$. The values of the third column follow from work of Carter \\cite[Theorem 1]{Car80}, where the figures $r$ and $s$ are positive integers that depend on the Schur indices. Finally, the Nil-terms in the last two columns were computed by Weibel in \\cite{Wei09}.\n\nWe are now ready to describe the homology of the dihedral Artin groups with respect to the family of virtually cyclic groups, with coefficients in the $K$-theory $K_q$ of these group rings, $q=0,1$. 
As before, our main tool is Theorem \\ref{Thm:Bredon}.\n\n\n\n\n\\begin{ex}\nWe start with $R_1=\\Z [\\Z \/{\\bf 2}]$.\nWhen $q=0$, $H^{vc}_3(A_n,K_0(R_1[-]))=H^{vc}_2(A_n,K_0(R_1[-]))=\\bigoplus_{\\aleph_0} \\Z $.\nTaking into account that $N_0^{[H]}=0$ for every $H$ (because $K_{-1}(R_1)$ is trivial, and also the Nil-terms), we obtain that $H^{vc}_1(A_n,K_0(R_1[-]))=0$.\nOn the other hand, $H^{vc}_0(A_n,K_0(R_1[-]))=\\Z$ as well.\n\nIf $q=1$, $H^{vc}_3(A_n,K_1(R_1[-]))=H^{vc}_2(A_n,K_1(R_1[-]))=\\bigoplus_{\\aleph_0} \\Z \/{\\bf 2} $.\nMoreover, as $K_0(R_1)=\\Z$ is torsion-free, the corresponding $T_1(K_1(R_1))$ is trivial, and we have $H^{vc}_1(A_n,K_1(R_1[-]))=\\bigoplus_{\\aleph_0}\\Z$.\nFinally, taking into account that $\\overline{C}(K_1(R_1))=\\Z$ in this case, $H^{vc}_0(A_n,K_1(R_1[-]))=\\Z \/{\\bf 2}\\oplus \\Z \/{\\bf 2}\\oplus(\\bigoplus_{\\aleph_0} \\Z)$.\n\n\\end{ex}\n\n\\begin{ex}\nWe continue by considering $R_2=\\Z [\\Z \/{\\bf 2}\\times \\Z \/{\\bf 2}]$.\nNow $H^{vc}_3(A_n,K_0(R_2[-]))=H^{vc}_2(A_n,K_0(R_2[-]))=\\bigoplus_{\\aleph_0} \\Z $, exactly as in the previous example. We have $N_0^{[H]}=K_{-1}(R_2)\\oplus NK_0(R_2)\\oplus NK_0(R_2)=\\Z^r\\oplus (\\bigoplus_{\\aleph_0}\\Z \/{\\bf 2})$ and then $H^{vc}_1(A_n,K_0(R_2[-]))=(\\bigoplus_{\\aleph_0} \\Z)\\oplus (\\bigoplus_{\\aleph_0} \\Z \/{\\bf 2})$.\n Observe that the extra term $T_2(K_0(R_2))$ is an $\\mathbb{F}_2$-vector space of at most countable dimension, and then it is included in the previous direct sum.\n Finally, $H^{vc}_0(A_n,K_0(R_2[-]))=(\\bigoplus_{\\aleph_0} \\Z)\\oplus (\\bigoplus_{\\aleph_0} \\Z \/{\\bf 2})$ as in the previous case, taking into account that $\\overline{C}(K_0(R_2))$ is a direct sum of free abelian groups and an $\\mathbb{F}_2$-vector space, both of at most countable dimension.\n\nWhen $q=1$, $H^{vc}_3(A_n,K_1(R_2[-]))=H^{vc}_2(A_n,K_1(R_2[-]))=\\bigoplus_{\\aleph_0} \\Z \/{\\bf 2}$, because $K_1(R_2)=(\\Z \/{\\bf 2})^3$.\nNow $N_1^{[H]}=\\Z\\oplus \\bigoplus_{\\aleph_0}\\Z \/{\\bf 2}$, and then, by arguments similar to the previous case, $H^{vc}_1(A_n,K_1(R_2[-]))=(\\bigoplus_{\\aleph_0} \\Z)\\oplus (\\bigoplus_{\\aleph_0} \\Z \/{\\bf 2})$.\nSimilarly $H^{vc}_0(A_n,K_1(R_2[-]))=(\\bigoplus_{\\aleph_0} \\Z)\\oplus (\\bigoplus_{\\aleph_0} \\Z \/{\\bf 2})$.\n\n\\end{ex}\n\n\\begin{ex}\nTo conclude, we consider $R_3=\\Z [\\Z \/{\\bf 4}]$.\nAs the groups $K_0$, $NK_0$ and $NK_1$ are all isomorphic as abelian groups to their counterparts in the previous example\nand $K_{-1}$ is also free abelian, $H^{vc}_3(A_n,K_0(R_3[-]))=H^{vc}_2(A_n,K_0(R_3[-]))=\\bigoplus_{\\aleph_0} \\Z $, $H^{vc}_1(A_n,K_0(R_3[-]))=(\\bigoplus_{\\aleph_0} \\Z)\\oplus (\\bigoplus_{\\aleph_0} \\Z \/{\\bf 2})$, and $H^{vc}_0(A_n,K_0(R_3[-]))=(\\bigoplus_{\\aleph_0} \\Z)\\oplus (\\bigoplus_{\\aleph_0} \\Z \/{\\bf 2})$.\n\nFor $q=1$, it is clear that\n$H^{vc}_3(A_n,K_1(R_3[-]))=(\\bigoplus_{\\aleph_0} \\Z \/{\\bf 2})\\oplus (\\bigoplus_{\\aleph_0} \\Z \/{\\bf 4})$.\nNow, the analysis of $g_2^1$ in Section \\ref{g21} guarantees that $H^{vc}_2(A_n,K_1(R_3[-]))$ should contain a subgroup isomorphic to $(\\bigoplus_{\\aleph_0} \\Z \/{\\bf 2})\\oplus (\\bigoplus_{\\aleph_0} \\Z \/{\\bf 4})$, and then be isomorphic to it, as the source of $g_2^1$ is also isomorphic to $(\\bigoplus_{\\aleph_0} \\Z \/{\\bf 2})\\oplus (\\bigoplus_{\\aleph_0} \\Z \/{\\bf 4})$.\nWe have $N_1^{[H]}=K_{0}(R_3)\\oplus NK_1(R_3)\\oplus NK_1(R_3)=\\Z\\oplus (\\bigoplus_{\\aleph_0}\\Z \/{\\bf 2})$.\nBy an analogous reasoning to the case of $R_2$, 
$H^{vc}_1(A_n,K_1(R_3[-]))=(\\bigoplus_{\\aleph_0} \\Z)\\oplus (\\bigoplus_{\\aleph_0} \\Z \/{\\bf 2})$.\nFinally, $H^{vc}_0(A_n,K_1(R_3[-]))=(\\bigoplus_{\\aleph_0} \\Z)\\oplus (\\bigoplus_{\\aleph_0} \\Z \/{\\bf 2})\\oplus \\Z \/{\\bf 4}$, with the extra term corresponding to $K_1(R_3)$.\n\n\\end{ex}\n\n\\begin{rem}\nObserve that the proposed method permits computations of the Bredon homology (with respect to the family of virtually cyclic groups) of $A_n$ with respect to the $K$-theory of any ring for which there is some knowledge of the algebraic $K$-groups.\nWe remark that the computation of the lower algebraic $K$-theory groups is a hot topic nowadays (see for example \\cite{LaOr07}, \\cite{HiJu2021} or \\cite{GJM18}), so it seems possible that good knowledge about the $E^2$-term of the Atiyah-Hirzebruch spectral sequence is achieved in these cases, even for more general families of Artin groups.\n\n\\end{rem}\n\n\n\n\n\nWhen the ring $R$ is regular, it is possible to take a shortcut in order to compute the left-hand side of the Farrell-Jones conjecture. Observe that Theorem 0.1 in \\cite{LS16} establishes a splitting $$H^G({\\underline{\\underline{E}}} G,\\mathbf{K} (R))\\simeq H^G(\\underline{E} G,\\mathbf{K} (R))\\oplus H^G({\\underline{\\underline{E}}} G, \\underline{E} G, \\mathbf{K} (R))$$ for every group $G$ and ring $R$.\nNow if $R$ is regular and $G$ is torsion-free, the second term of the direct sum vanishes \\cite[Proposition 2.6]{LR05}. Likewise, when $G$ is torsion-free $\\underline{E} G=EG$, the classical universal space for principal $G$-bundles, and then $H^G(\\underline{E} G,\\mathbf{K}(R))=H(BG, \\mathbf{K}(R))$, the latter being ordinary homology. Hence, in the case $G=A_n$, it is enough to compute $H(BA_n, \\mathbf{K}(R))$ to obtain the left-hand side of the Farrell-Jones conjecture for these groups. When the coefficients $\\mathbf{K}(R)$ are known, and taking into account that the Artin groups of dihedral type are one-relator, Lemma 16.21 in \\cite{Luc21} provides an accurate description of these homology groups, with case i) of the lemma corresponding to $n$ even and case ii) to $n$ odd. For example, when $R=\\mathbb{F}_p$ for $p$ prime, the classical results of \\cite{Qui73} provide a complete knowledge of the groups $H(BA_n, \\mathbf{K}(R))$, while for $R=\\mathbb{Z}$ much information is available (\\cite[page 2]{Wei05}). When $R$ is not regular, however, this strategy does not work, as in this case the second summand of the splitting does not vanish in general. In this context, we expect that our results on Bredon homology of $A_n$ (and in particular the last examples of this section) can shed some light on the groups $H^{A_n}({\\underline{\\underline{E}}} A_n,\\mathbf{K} (R))$, via appropriate computations in the equivariant Atiyah-Hirzebruch spectral sequence. Observe that, according to Theorem \\ref{Thm:Bredon}, this spectral sequence has four columns. This fact indicates that the sequence collapses at the $E_4$-page at the latest, and hence this page should provide the knowledge of the Farrell-Jones groups. The analysis of the differentials, as well as the subsequent description of the $E_3$ and $E_4$-pages, seems a difficult and interesting future line of research.\n\n\n\n\\noindent{\\textbf{{Acknowledgments.}}}\n\nWe warmly thank D. Juan-Pineda, W. L\\\"{u}ck, N. Petrosyan, L.J. S\\'anchez-Salda\\~{n}a, V. Srinivasan and J. 
Stienstra and an anonymous referee for their useful comments.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Introduction}\n\nEntanglement entropy is a non-local observable which measures entanglement between two subsystems of a quantum system.\nIt has many applications in studies of phenomena in quantum gravity, quantum information, condensed matter and high energy physics.\nParticularly, entanglement entropy in the context of the gauge\/gravity duality is aimed to shed some light on understanding of quantum gravity into the bulk \\cite{Rangamani:2016dms}.\n\nFollowing the holographic prescription, the entanglement entropy between a subsystem (region) $A \\in\\mathbb{R}^{d}$ that has a $d-1$-dimensional boundary $\\partial A$ and a remaining part $B$ can be calculated by Ryu-Takayanagi formula \\cite{Ryu:2006bv,Ryu:2006ef,Hubeny:2007xt}\n\\begin{eqnarray}\\label{1.1}\nS = \\frac{\\text{Area}(\\gamma_{A})}{4G^{d+2}_{N}},\n\\end{eqnarray}\nwhere $\\gamma_{A}$ is the minimal $d$-dimensional surface in $AdS_{d+2}$ space whose boundary coincides with the boundary of the region $A$ ($\\partial A = \\partial \\gamma_{A}$), $G^{d+2}_{N}$ is $d +2$-dimensional Newton constant.\n\nFor the classic case with $AdS$ on the gravity side, which geometry is not supported by any scalar field,\n and conformal theory of QFT side the area of the surface $\\gamma_{A}$ is defined through the induced metric by the relation\n\\begin{equation}\\label{1.1a}\nA = \\int d^{d}\\sigma \\sqrt{|\\det{G_{\\alpha\\beta}}|},\n\\end{equation}\nwhere $G_{\\alpha\\beta}=g_{MN}\\partial_{\\alpha}X^{M}\\partial_{\\beta}X^{N}$ is the induced metric of $\\gamma_{A}$ and $g_{MN}$ is the metric of the background. Important examples of $AdS$ spacetimes include near-horizon geometries of $p$-branes.\nIn the form (\\ref{1.1a}) it can be applied to studies of entanglement entropy for non-dilatonic branes, namely D3, M2 and M5 branes \\cite{Quijada:2017zif}.\n\nThe generalization of the entangled functional (\\ref{1.1}) with (\\ref{1.1a}) for branes with non-conformal boundaries reads\n\\begin{eqnarray}\\label{RTncbr}\nS = \\int d^{8}\\sigma \\frac{1}{4G^{10}_{N}} \\sqrt{|\\det G_{ind}|}e^{-2\\phi},\n\\end{eqnarray}\nwhere $\\phi$ is the dilaton.\nThe holographic entanglement entropy for configurations on D2 and NS five-branes was calculated in the original work \\cite{Ryu:2006ef},\non D3 and D4 branes in \\cite{Klebanov:2007km,Pakman:2008ui,Arean:2008az}, on D1-D5 brane intersection in \\cite{Asplund:2011cq}.\n The dilaton destroys the scale symmetry, \nbut we still can detect a certain field theory on the boundaries of the branes and discuss a holographic picture.\nFor example, \nfor NS5 brane in the long distances of the theory is governed by the (2, 0) SCFT for IIA theory and the IR free SYM with sixteen supercharges for IIB, while\nthe short distance behavior leads to a linear dilaton geometry \\cite{Aharony:1998ub} that can be described through the so called Little String Theory $\\mathcal{N}=(2,0)$ and $\\mathcal{N}=(1,1)$ on the Type IIA and Type IIB NS5 branes respectively.\n\n\n\n\nIn this work we aim at studying T-duality aspects of entanglement entropy for field theories living on NS five branes, including the exotic branes $5_2^r$ with $r=0,1,2,3,4$. For the NS5 brane the decoupling limit is known to be LST, which is a 6-dimensional theory describing dynamics of string-like degrees of freedom which do not have gravitational modes in their spectrum. 
In all other respects they exhibit essentially stringy behaviour, such as a Hagedorn temperature and T-duality of the spectrum \\cite{Losev:1997hx, Kutasov:2001uf, Aharony:1999ks}. This is due to the fact that, in contrast to D-branes, the decoupling limit for the NS branes does not involve taking $\\alpha'\\to0$, which preserves stringy properties on the world-volume. The origin of T-duality in LST is the simple observation that a compactified NS5 brane transforms into itself under T-duality along a world-volume direction. In the language of LST this becomes the T-duality symmetry of the 6d theory with one compact direction, the direct analogue of that of the 10d string theory.\n\nIn addition, however, one may wonder what the properties of the theory are under T-duality transformations in the transverse directions, i.e. those which change the brane, say from the NS5 brane to the KK5-monopole. By simple counting of degrees of freedom one concludes that the theory does not change under such a transformation, i.e. the Type IIA\/B NS5 brane carries the same world-volume theory as the Type IIB\/A Kaluza-Klein monopole \\cite{Sen:1997js}. Continuing this logic one concludes that the theory should not change along the whole T-duality orbit.\n\\begin{equation}\n\\begin{aligned}\n5_2^0(A\/B) && \\longleftrightarrow && 5_2^1(B\/A)\\longleftrightarrow && 5_2^2(A\/B)\\longleftrightarrow && 5_2^3(B\/A)\\longleftrightarrow && 5_2^4(A\/B)\n\\end{aligned}\n\\end{equation}\nHere we use the notations for the branes of \\cite{Obers:1998fb} (see also \\cite{deBoer:2012ma} for more on that), and the last three are exotic. The fact that the corresponding world-volume field theories do not change under T-duality trivially follows from the T-duality invariant world-volume effective action for these branes presented in \\cite{Blair:2017hhy}. This is a single action for the whole orbit, which reduces to the action for a given representative upon removing half of the scalar fields living on the brane (geometric or dual coordinates). Since from the world-volume point of view these are just scalar fields moving in a dynamical background, replacing one by its dual does not change anything for the world-volume theory. \n\n\nHowever, applying the Ryu-Takayanagi prescription for geometric entropy to the background of, say, the Kaluza-Klein monopole, one gets an answer which is different from the one for the NS5 brane, which clearly breaks the T-duality invariance. In this paper we show that the reason is that, in its geometric and straightforward form, this prescription does not take into account the dependence on the winding direction of the localized Kaluza-Klein monopole. Indeed, in \\cite{Tong:2002rq, Harvey:2005ab,Kimura:2013fda,Kimura:2018hph} it has been shown that instanton corrections coming from the 2d sigma-model describing the KK5 background change the geometry in such a way that the fields start depending on a winding mode. This corrects the throat behaviour of the KK5-monopole, making it the same as that of the NS5 brane. In \\cite{Jensen:2011jna, Berman:2014jsa, Bakhmatov:2016kfn} it has been shown that this has a simple explanation in terms of Double Field Theory: to perform a T-duality transformation along a direction $z$ one replaces $z$ by its dual $\\tilde{z}$ in all expressions. The same is true for producing exotic backgrounds, and the corresponding instanton interpretation has been presented in \\cite{Kimura:2013zva}. 
In this work we consider the invariant action of \\cite{Blair:2017hhy} and propose an algorithm to calculate entanglement entropy for theories living on branes with non-trivial dynamics in doubled space.\n\nThis paper is structures as follows. In Section \\ref{geom} we present a short technical review of how the geometric entanglement entropy is calculated and explicitly show that the RT formula gives different results when applying to NS five-brane backgrounds belonging to the same T-duality orbit. In Section \\ref{dft} we turn to invariant dynamics governed by the action of \\cite{Blair:2017hhy}, shortly review how one obtains different action from the invariant one, and describe the algorithm which produces an invariant answer for entanglement entropy. In addition we comment on the geometric meaning of the expression, which is an important and subtle point due to lack of the notions of integration, distance and area in doubled geometry.\n\n\n\\section{Geometric entanglement entropy}\n\\label{geom}\n\n\n\n\nThe usual choice of areas which carry entangled states, which significantly simplifies calculations, is the infinite strip set-up. For that one considers a surface in the space transverse to a brane one which the field theory lives (shaded on Fig. \\ref{embed}). This surface is the boundary for the minimal surface, which tends to curve closer to the brane due to the transverse geometry. For D-branes this surface is identified with the AdS conformal boundary. The geometric formula of Ryu and Takayanagi gives entanglement entropy of states in the region A and B on the picture.\n\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=10cm]{embed.png}\n\\caption{Configuration of the embedding}\n\\label{embed}\n\\begin{tikzpicture}[overlay]\n\\node at (-0.1,9.5) (r) {$r$};\n\\node at (-1.8,3.8) (ra) {$r_a$};\n\\node at (-4,2) (X25) {$X^{2,\\dots,5}$};\n\\node at (5,3.8) (X1) {$X^1$};\n\\node at (1.5,8.3) (A) {$A$};\n\\node at (3.7,8) (B1) {$B$};\n\\node at (-0.8,8.5) (B2) {$B$};\n\\node[rotate=-11] at (1,3) (br) {brane};\n\\end{tikzpicture}\n\\end{figure}\n\n\nIn this section the standard formula for calculation of geometric entanglement entropy is applied to the standard NS5 brane and to the KK5-monopole and exotic $5_2^2$ brane. Due to the special circle already for the KK5 background one gets expressions very different from that for the NS5 background. From this we conclude that one must develop a different algorithm and a different understanding of the RT expression to properly capture transformations along the NS five-brane T-duality orbit.\n\n\n\\subsection{Non-conformal theories: geometric five-branes}\n\\label{direct_KK}\n\nWhen turning to NS five branes one encounters 6d theories which describe string-like degrees of freedom which do not have gravitational excitation in their spectrum, the so-called Little String Theory. In the field theory limit these drop to non-conformal field theories since the corresponding brane backgrounds contain non-trivial dilaton and are not asymptotically AdS. 
However, for these one can also define entanglement entropy using the Ryu-Takayanagi conjecture (\\ref{RTncbr}) and write\n\\begin{equation}\nS=\\int_{\\S} d^5 \\s e^{-2\\f} \\sqrt{\\det G_{\\alpha\\b}},\n\\end{equation}\nwhere $\\{\\s^\\alpha\\}$ with $\\alpha=1,\\dots,5$ are coordinates on the space-like surface $\\S$ and $G_{\\alpha\\b}=\\dt_\\alpha X^\\m \\dt_\\b X^\\nu G_{\\m\\nu}$ is the induced metric on the surface. For our purposes we choose the simplest shape for the surface $\\S$, generated by an infinite strip. The theory for which the entanglement entropy is calculated lives on a surface parallel to the NS5 brane placed at some $r=r_b$. Entanglement is assumed for the states defined in the interior $A$ and exterior $B$ regions of the grey surface on Fig.\\ref{embed}. According to the conjecture the entropy is equal to the area of the minimal surface $\\S$ whose boundary satisfies $\\dt \\S=\\dt A$.\n\nEmbedding of the brane and of the surface is given by\n\\begin{equation}\n\\begin{aligned}\n& && 0 && 1 && 2 && 3 && 4 && 5 && r && \\q_1 && \\q_2 && \\f \\\\\n\\hline\n& NS5 &&\\times &&\\times&&\\times&&\\times&&\\times&&\\times&& \\\\\n& \\S && &&\\bullet&& L && L && L && L && \\bullet\n\\end{aligned}\n\\end{equation}\nwhere $\\times$ denotes the world-volume directions. The surface $\\S$ extends from $-L$ to $L$ in the directions denoted by $L$ above, while it is curved in the directions denoted by bullets. I.e. one can choose coordinates and embedding functions for the surface as follows\n\\begin{equation}\n\\begin{aligned}\nX^1&=X(r),\\\\\nX^{2,\\dots,5}&=\\s^{2,\\dots,5},\\\\\nr&=\\s^1.\n\\end{aligned}\n\\end{equation}\n\nBackground for the NS5 brane is given by\n\\begin{equation}\n\\begin{aligned}\nds^2&=\\h_{rs}dx^rdx^s+H(dr^2+r^2\\d\\W_{3}^2),\\\\\n\\mathcal{H}&=dB,\\\\\ne^{-2(\\f-\\f_0)}&=H(r)^{-1},\n\\end{aligned}\n\\end{equation}\nwith the harmonic function $H(r)=1+h\/r^2$. Hence one writes for the entropy\n\\begin{equation}\n\\begin{aligned}\\label{RT-ns5}\nS_{NS5}=16L^4 \\int dr H(r)^{-1}\\sqrt{H(r)+X'(r)^2}.\n\\end{aligned}\n\\end{equation}\nThe usual minimisation procedure (spelled out below) implies that the embedding function $X(r)$ should satisfy\n\\begin{equation}\n\\label{Xemb}\nX_{NS5}'(r)=\\pm\\frac{H(r_a)^{1\/2}H(r)}{\\sqrt{H(r_a)^2-H(r)^2}},\n\\end{equation}\nwhere we used the condition that $dr\/dX=0$ at $r=r_a$, which basically means that $r_a$ is the turning point for the surface $\\S$. Note that one has to set $r>r_a$ to keep the expression in the square root positive, which means that the turning point is closer to the brane than the surface $r=r_b$ on which the field theory is defined. This is the usual configuration for the AdS\/CFT correspondence and hence the initial setup of Fig.\\ref{embed}.\n\nOne can apply the same procedure to the worldvolume theory of the KK-monopole, which for the Type IIA(B) monopole is the same as for the Type IIB(A) NS5 brane. 
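Before doing so, it may help to record the intermediate step behind \eqref{Xemb}; this is the standard first-integral argument and uses only the definitions above. Since the integrand in \eqref{RT-ns5} does not depend on $X(r)$ itself, the momentum conjugate to $X$ is constant along the surface,
\begin{equation}
\frac{\partial}{\partial X'}\left(H(r)^{-1}\sqrt{H(r)+X'(r)^2}\right)=\frac{X'(r)}{H(r)\sqrt{H(r)+X'(r)^2}}=\mathrm{const}.
\end{equation}
Imposing the turning-point condition $dr/dX=0$ at $r=r_a$ fixes the constant, and solving the resulting algebraic relation for $X'(r)$ produces an expression of the form \eqref{Xemb}, which is indeed singular at $r=r_a$. The same first-integral logic applies to the other backgrounds considered below. We now return to the monopole.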
Background geometry is given by the following configuration\n\\begin{equation}\n\\begin{aligned}\nds^2&=\\h_{rs}dx^rdx^s+H^{-1}(d\\tilde{z}+A_idy^i)^2+H\\d_{ij}dy^i dy^j,\\\\\nB&=0,\\\\\ne^{-2(\\f-\\f_0)}&=1.\n\\end{aligned}\n\\end{equation}\nHere $\\tilde{z}$ is the normal geometric coordinate used to measure distances in space-time, however it is dual to the coordinate $z$ of the corresponding NS5 background. Note that the harmonic function is smeared $H=1+h\/r$.\n\nRepeating the same calculation as above one gets for the entropy and for the embedding function $X_{KK5}(r)$\n\\begin{equation}\n\\begin{aligned}\nS_{KK5}&=16L^4 \\int dr \\sqrt{H(r)+X'(r)^2},\\\\\nX_{KK5}'(r)&=\\pm C\\sqrt{H(r)}, \\quad C=\\mbox{const}.\n\\end{aligned}\n\\end{equation}\nOne first notices that the crucial difference with the previous case, that is $dr\/dX=0$ at $r=0$, i.e. on the brane itself, while for the NS5 brane background the turning point is at some $r_a\\neq 0$. This can be understood in terms of the short distance behaviour of NS5 branes and KK5 monopole. As it has been shown in \\cite{Tong:2002rq,Gauntlett:1992nn} the former is the version of the H-monopole (which is the proper T-dual of the KK monopole) localized due to instanton corrections. However, the localization breaks isometry along the compact circle of H-monopole and one observes a throat behaviour at short distances. \n\nTo cure the near-brane behaviour of the KK5-monopole background one also considers instanton corrections \\cite{Harvey:2005ab}. Only in this case one may expect result for the entropy which reproduce those for the NS5 brane. Such corrections however deform the background by introducing a non-trivial dependence on string winding coordinates, which requires double field theory to consistently address the issue, as in \\cite{Jensen:2011jna,Bakhmatov:2016kfn}. \n\n\n\n\n\n\n\\subsection{Exotic five-branes}\n\\label{direct_exotic}\n\nHence, the answer for the entropy which one obtains for the theory living on the KK monopole is different from that for the NS5 brane. The important point here is that although T-duality exchanges IIA and IIB branes the theories living on the NS5A(B) and the KK5B(A) are the same and the entropy should not change. When going further along the T-duality orbit towards exotic branes the situation does not get better. Smearing the KK5 background along $y_3$ and T-dualizing one arrives at the exotic $5_2^2$-brane with background given by \\cite{deBoer:2012ma}\n\\begin{eqnarray}\nds^{2} &=& \\eta_{rs}dx^{r}dx^{s} + HK^{-1}\\left(d\\tilde{z}^{2} + d\\tilde{y}^{2}_{3}\\right) + H \\delta_{\\alpha\\beta}dy^{\\alpha}dy^{\\beta}, \\\\\nB& = & h\\theta K^{-1} d\\tilde{z}\\wedge d\\tilde{y}_{3}, \\\\\ne^{-2(\\phi - \\phi_{0})} &=& HK^{-1},\\\\\nK &= & H^{2} + (h\\theta)^2.\n\\end{eqnarray}\nHere the harmonic function is further smeared $H(r)=1+h \\log r$ and does not behave well at space infinity\n\nThis background is globally well-defined only up to a monodromy around the brane, hence the non-geometric properties of the background. 
Naively applying the above procedure one obtains\n\\begin{equation}\n\\begin{aligned}\nS_{5_2^2}&=16L^4\\int d r \\frac} \\def\\dfr{\\dfrac} \\def\\dt{\\partial{H(r)\\sqrt{H(r)+X'(r)^2}}{H(r)^2+h^2 \\q^2},\\\\\nX_{5_2^2}'&=\\pm\\frac} \\def\\dfr{\\dfrac} \\def\\dt{\\partial{C \\sqrt{H(r)}(H(r)^2+h^2 \\q^2)}{\\sqrt{H(r)^2-C^2 (H(r)^2+h^2 \\q^2)^2}}.\n\\end{aligned}\n\\end{equation} \nThe embedding function $X_{5_2^2}(r)$ delivering extremum to $S_{5_2^2}$ is apparently not well-defined and moreover it explicitly depends on $\\q$. Hence, the entropy also depends explicitly on the coordinate $\\q$ respecting the monodromy property of the background.\n\nOn the other hand, the worldvolume theory on $5_2^2$-brane should not differ from that of the KK-monopole or NS5 brane (with proper replacement of Type IIA with Type IIB). To perform calculation of entanglement entropy for such theories which respect T-duality we use the T-duality covariant action of \\cite{Blair:2017hhy} for the 5-brane orbit. It suggests that the worldvolume theory is the same irrespective of the choice of the brane (equivalently, the section constraint or orientation in the doubled space) upon the proper exchange of the worldvolume scalars $X^\\m$ with their duals $\\tilde{X}_{\\m}$. \n\n\n\n\\section{Entanglement entropy in DFT}\n\\label{dft}\n\nDouble Field Theory being a T-duality covariant formulation of supergavity (string theory) allows to consider the whole T-duality orbit instead of a single representative. In this section we propose a deformation of the geometric prescription for entanglement entropy and embed the expression for entropy itself into the DFT framework. Let us start with brief description of how NS five-branes are embedded into doubled space.\n\n\\subsection{Embedding of NS five-branes in doubled space}\n\nIn \\cite{Blair:2017hhy} it was shown in details how one can construct a T-duality covariant action for NS five-branes. The covariancy here is understood in the following way: one has a single expression which is written in terms of DFT (covariant) fields and which reproduces the effective action for the NS5B-brane, KK5A monopole and exotic branes $5_2^2$B, $5_2^3$A, $5_2^4$B. The full action smartly chooses these frames depending on which symmetries of the doubled spaces are eventually realized on the world-volume. 
Let us briefly describe the process focusing only on the NS-NS sector and only on the DBI part of the action, which is given by\n\\begin{equation}\n\\label{full_5}\nS_{NS,DBI}[Y(\\x)]=\\int_V d^6 \\x e^{-2d}\\sqrt{\\det h_{ab}}\\sqrt{\\displaystyle -\\det\\Big(g_{\\m\\nu} \\def\\x{\\xi} \\def\\p{\\pi} \\def\\vp{\\varpi} \\def\\r{\\rho}\\dt_\\alpha} \\def\\b{\\beta} \\def\\g{\\gamma} \\def\\d{\\delta} \\def\\e{\\epsilon{X^\\m} \\dt_\\b X^\\nu} \\def\\x{\\xi} \\def\\p{\\pi} \\def\\vp{\\varpi} \\def\\r{\\rho+ \\mathcal{H}_{MN}\\hat{D}_\\alpha} \\def\\b{\\beta} \\def\\g{\\gamma} \\def\\d{\\delta} \\def\\e{\\epsilon Y^M \\hat{D}_\\b Y^N\\Big)},\n\\end{equation}\nwhere we introduce\n\\begin{equation}\n\\begin{aligned}\nh_{ab}&=k_a^M k_b^N \\mathcal{H}_{MN},\\\\\n\\hat{D}_\\alpha} \\def\\b{\\beta} \\def\\g{\\gamma} \\def\\d{\\delta} \\def\\e{\\epsilon Y^M&=\\hat{\\dt}_\\alpha} \\def\\b{\\beta} \\def\\g{\\gamma} \\def\\d{\\delta} \\def\\e{\\epsilon Y^M+\\dt_\\alpha} \\def\\b{\\beta} \\def\\g{\\gamma} \\def\\d{\\delta} \\def\\e{\\epsilon X^\\m A_\\m{}^M,\\\\\n\\hat{\\dt}_\\alpha} \\def\\b{\\beta} \\def\\g{\\gamma} \\def\\d{\\delta} \\def\\e{\\epsilon Y^M&=\\dt_\\alpha} \\def\\b{\\beta} \\def\\g{\\gamma} \\def\\d{\\delta} \\def\\e{\\epsilon Y^M-(h^{-1})^{ab}k_a^M k_b^N\\mathcal{H}_{NP}\\dt_\\alpha} \\def\\b{\\beta} \\def\\g{\\gamma} \\def\\d{\\delta} \\def\\e{\\epsilon Y^P,\\\\\n\\mathcal{H}_{MN}&=\n\\begin{bmatrix}\nG_{mn}-B_{m}{}^k B_{kn} & B_{n}{}^q\\\\\nB_m{}^p & G^{pq}\n\\end{bmatrix}.\n\\end{aligned}\n\\end{equation}\nHere the full space-time is split into the part parallel to the five-brane, labelled by the indices $\\m,\\nu} \\def\\x{\\xi} \\def\\p{\\pi} \\def\\vp{\\varpi} \\def\\r{\\rho=\\{0,5\\}$, and the part transverse to the branes, which is doubled and labelled by $M,N,P,Q=\\{6,7,8,9,\\tilde{6},\\tilde{7},\\tilde{8},\\tilde{9}\\}$. The vector fields $A_\\m{}^M$ result from the Kaluza-Klein decomposition of the full 10D theory \n\\begin{equation}\nA_\\m{}^M=\n\\begin{bmatrix}\nA_\\m{}^m \\\\\n-B_{\\m m}\n\\end{bmatrix}.\n\\end{equation}\nThe integration is performed over world-volume of the brane which is parametrized by six coordinates $\\{\\x^\\alpha} \\def\\b{\\beta} \\def\\g{\\gamma} \\def\\d{\\delta} \\def\\e{\\epsilon\\}$. The hatted derivative $\\hat{\\dt}_\\alpha} \\def\\b{\\beta} \\def\\g{\\gamma} \\def\\d{\\delta} \\def\\e{\\epsilon$ contains a projector part and is designed in such a way as to always remove half of the fields $Y^M$ from the action. Upon adding the action for DFT fields this results in field configurations which do not depend on half of DFT coordinates and hence is a worldvolume realization of the section constraint. Finally, the choice of the section frame and hence a representative of the T-duality orbit is done by choosing the particular form of the vectors $k_a{}^M$, which must satisfy the following algebraic section constrain\n\\begin{equation}\nk_a^Mk_b^N\\h_{MN}=0,\n\\end{equation}\nwhere $\\h_{MN}$ is the usual O(4,4) invariant tensor \n\\begin{equation}\n\\h_{MN}=\\begin{bmatrix}\n0 & 1 \\\\\n1 & 0\n\\end{bmatrix}\n\\end{equation}\nand the indices $a,b={1,4}$ enumerate the Killing vectors. 
The reason why we call these vectors Killing will be clear in a moment.\n\nFor the O(4,4) configuration there exist five inequivalent solutions of the algebraic section constrain, each of which corresponds to the branes $5_2^r$ with $r=0,1,2,3,4$ showing the number of quadratic direction in the mass of the corresponding 3D BPS state (see \\cite{deBoer:2012ma} for more detailed description of these notations). Here we list five representative solutions\n\\begin{equation}\n\\label{Kill}\n\\begin{aligned}\nNS5=5_2^0: && k_a^M&=(0,0,0,0;\\tilde{k}_{a 1}, \\tilde{k}_{a 2},\\tilde{k}_{a 3},\\tilde{k}_{a 4}),\\\\\nKK5=5_2^1: && k_a^M&=(0,0,0,k_{a}^{4}; \\tilde{k}_{a 1},\\tilde{k}_{a 2},\\tilde{k}_{a 3}, 0),\\\\\nQ=5_2^2: && k_a^M&=(0,0,k_{a}^{3},k_{a}^{4};\\tilde{k}_{a 1},\\tilde{k}_{a 2},0,0),\\\\\nR=5_2^3: && k_a^M&=(0,k_{a}^{2},k_{a}^{3},k_{a}^{4};\\tilde{k}_{a 1},0,0,0),\\\\\nR'=5_2^4: && k_a^M&=(k_{a}^1,k_{a}^{2},k_{a}^{3},k_{a}^{4};0,0,0,0).\n\\end{aligned}\n\\end{equation}\nFor example, for the NS5 brane case, which is the first line above, one chooses all vectors $k_a^M$ to be along the dual coordinates. Substituting this back into the action one checks that all fields $Y_m$ drop from the expression rendering field configurations independent on the corresponding DFT coordinates. This is due to\n\\begin{equation}\n\\hat{\\dt}_\\alpha} \\def\\b{\\beta} \\def\\g{\\gamma} \\def\\d{\\delta} \\def\\e{\\epsilon Y_m=B_{mn}\\dt_\\alpha} \\def\\b{\\beta} \\def\\g{\\gamma} \\def\\d{\\delta} \\def\\e{\\epsilon Y^n,\n\\end{equation}\nwhere $B_{mn}$ is the usual Kalb-Ramond two-form gauge field. The same is true for all other configurations up to the R'-brane which is a co-dimension-0 object from the point of view of the conventional supergravity.\n\nTo obtain explicit expression for the background fields for a fixed choice of the Killing vectors, one considers the full action with the embedding given by Dirac delta functions $\\d^{(8)}(\\mathbb{X}^M-Y^M(\\x))$, where $\\mathbb{X}^M=(x^m,\\tilde{x}_m)$ are the coordinates of DFT. The reparametrization invariance of the world-volume is fixed as usual as\n\\begin{equation}\n\\label{gauge_fix0}\n\\begin{aligned}\nX^\\alpha} \\def\\b{\\beta} \\def\\g{\\gamma} \\def\\d{\\delta} \\def\\e{\\epsilon&=\\x^\\alpha} \\def\\b{\\beta} \\def\\g{\\gamma} \\def\\d{\\delta} \\def\\e{\\epsilon.\n\\end{aligned}\n\\end{equation}\nConsider for example the KK-monopole, which is the second line above, where the fields $Y_{1,2,3}$ and $Y^4$ drop from the action meaning that the field configurations as functions are of the form $H=H(x^1,x^2,x^3,\\tilde{x}_4)$. This is interpreted as a functional dependence of the background on three geometric coordinates $x^{1,2,3}$ and one non-geometric (dual or winding) coordinate $\\tilde{x}_4$. This is due to an additional piece of information fixed in the DFT action, where one always understands $\\mathbb{X}^m=x^m$ as geometric coordinates, i.e. those used to measure space distances, and $\\mathbb{X}_m=\\tilde{x}_m$ as their non-geometric duals. 
Without this fixing one will just count each brane four more times obtaining the same backgrounds but with different names for the same physical coordinates.\n\nSuch dependence of exotic backgrounds (starting from the KK monopole) on dual coordinates has been shown for the DFT monopole in \\cite{Bakhmatov:2016kfn} and will be important for our further discussion.\n\n\n\n\\subsection{Invariant entropy and minimal surface}\n\nThe main feature of the effective action \\eqref{full_5} is that it does not depend on the choice of the T-duality frame and describes dynamics of all five-branes dual to NS5 brane. Since the world-volume theory does not change when switching from (Type IIB) NS5 brane to (Type IIA) KK5-monopole, the corresponding entanglement entropy should not change as well. One can conjectures the following deformation of the Ryu-Takayanagi formula which provides such invariant description:\n\\begin{equation}\n\\label{inv_entr}\nS_5=\\int_\\S d^5 \\s e^{-2d}\\sqrt{\\det h_{ab}}\\sqrt{\\displaystyle -\\det\\Big(g_{\\m\\nu} \\def\\x{\\xi} \\def\\p{\\pi} \\def\\vp{\\varpi} \\def\\r{\\rho}\\dt_\\alpha} \\def\\b{\\beta} \\def\\g{\\gamma} \\def\\d{\\delta} \\def\\e{\\epsilon{X^\\m} \\dt_\\b X^\\nu} \\def\\x{\\xi} \\def\\p{\\pi} \\def\\vp{\\varpi} \\def\\r{\\rho+ \\mathcal{H}_{MN}\\hat{D}_\\alpha} \\def\\b{\\beta} \\def\\g{\\gamma} \\def\\d{\\delta} \\def\\e{\\epsilon Y^M \\hat{D}_\\b Y^N\\Big)},\n\\end{equation}\nwhere the notations are the same as before. In a moment we will explicitly show that this expression gives the usual RT formula whose minimization gives the geometric entanglement entropy for the NS5 brane case. For other representatives of the orbit one gets a deformation of the formula, however the integral itself does not distinguish between the allowed choices of the duality frame.\n\nBefore that it is important to discuss the meaning of the integration and of the surface $\\S$ here. Going back to the effective action \\eqref{full_5} one notes that the integration there is performed over the world-volume $V$ parametrized by $\\s^\\alpha} \\def\\b{\\beta} \\def\\g{\\gamma} \\def\\d{\\delta} \\def\\e{\\epsilon$, which is a usual geometric manifold with properly defined integration measure. On this manifold one defines $6+(4+4)$ fields $\\{X^\\m(\\x),Y^M(\\x)\\}$, which are identified with coordinates in the space-time and the doubled coordinates of the O(4,4) DFT. The crucial point here is that without such identification, these fields do not carry the meaning of coordinates on a doubled space and hence one is not actually doing doubled geometry, and rather works with a number of fields. For more discussion on this see \\cite{Blair:2017hhy}.\n\nAlthough the expressions \\eqref{full_5} and \\eqref{inv_entr} look almost the same, there is fundamental difference between them. While in action one varies with respect to the background fields keeping the embedding fixes, for the entropy the background is fixed by our choice of the brane and variation goes with respect to the embedding. The latter is defined by identification of the surface coordinates $\\{\\s^\\alpha} \\def\\b{\\beta} \\def\\g{\\gamma} \\def\\d{\\delta} \\def\\e{\\epsilon\\}$ with the fields $X^\\m,Y^M$, which define the (doubled) space-time dependence of the background. 
Since, the harmonic function depends only on a singlet combination $r$, a natural choice of the embedding is\n\\begin{equation}\n\\label{gauge_fix}\n\\begin{aligned}\nX^{2,3,4,5}&=\\s^{2,3,4,5},\\\\\n\\s^1&=r,\n\\end{aligned}\n\\end{equation}\nand the remaining field $X^1=X(\\s^1)$ is a function delivering minimum to the expression. Here the particular form of the field $r$ depends on the choice of the background and reads\n\\begin{equation}\n\\begin{aligned}\nNS5=5_2^0: && r^2&=(Y^1)^2+(Y^2)^2+(Y^3)^2+(Y^4)^2,\\\\\nKK5=5_2^1: && r^2&=(Y^1)^2+(Y^2)^2+(Y^3)^2+(\\tilde{Y}_4)^2,\\\\\nQ=5_2^2: && r^2&=(Y^1)^2+(Y^2)^2+(\\tilde{Y}_3)^2+(\\tilde{Y}_4)^2,\\\\\nR=5_2^3: && r^2&=(Y^1)^2+(\\tilde{Y}_2)^2+(\\tilde{Y}_3)^2+(\\tilde{Y}_4)^2,\\\\\nR'=5_2^4: && r^2&=(\\tilde{Y}_1)^2+(\\tilde{Y}_2)^2+(\\tilde{Y}_3)^2+(\\tilde{Y}_4)^2.\n\\end{aligned}\n\\end{equation}\nThese follow from solutions of the equations of motion for the full action $S_{DFT}+S_{brane}$ which boil down to Poisson equation with delta source whose solution is the harmonic function $H=H(r)$ with $r$ given by the above expression. The number of dual coordinates entering the dependence of the fields is equal to the number of special circles.\n\nThe gauge fixing conditions \\eqref{gauge_fix} can be understood as a proper embedding of the surface $\\S$ in the doubled 5+(4+4)-dimensional space. This is similar to the way how the magnetic charge for these branes has been calculated in \\cite{Bakhmatov:2016kfn}, however now the integration remains proper integration over a conventional manifold with conventional measure. The structure of the doubled space shows up only at the level of the Killing vectors and of the interaction between the effective action and the full DFT action. Before that, the integration does not distinguish between $Y^m$ and $\\tilde{Y}_m$, as it should be since the corresponding world-volume theories do not feel this as well. The integration is then performed in $\\s^{2,3,4,5}\\in [-L,L]$ for some large $L$ and from the points $X'=0$ in the $\\s^1$ direction. This is what is usually called the rectangular strip area, which is the simplest to perform calculations. In principle, one may choose a different embedding which will correspond to a different area inside the world-volume theory.\n\nLet us postpone the discussion, of how this process is seen from the point of view of the world-volume theory, to the Discussion section and now move to explicit examples to show invariance of the expression.\n\n\n\\subsection{Explicit examples}\n\n\nLet us start with the T-duality frame which corresponds to NS5 brane, which fixes the Killing vectors to be\n\\begin{equation}\nk_a^M=(0;\\tilde{k}_{am}).\n\\end{equation}\nThen the matrix $h_{ab}$ becomes $h_{ab}=\\tilde{k}_{am}\\tilde{k}_{bn}g^{mn}$ and one has \n\\begin{equation}\n\\det{h_{ab}}=|\\tilde{k}|^2 g^{-1},\n\\end{equation}\nwhere $|\\tilde{k}|=\\det\\tilde{k}_{am}$ and $g=\\det G_{mn}$. The inverse of the matrix $h_{ab}$ is then\n\\begin{equation}\n(h^{-1})^{ab}=(\\tilde{k}^{-1})^{am}(\\tilde{k}^{-1})^{bn}G_{mn},\n\\end{equation}\nwhere $(\\tilde{k}^{-1})$ is the inverse of $\\tilde{k}_{am}$ understood simply as a $4\\times 4$ matrix. 
Hence, for derivatives of the fields $Y^M$ we have\n\\begin{equation}\n\\begin{aligned}\n\\hat{\\dt}_\\alpha} \\def\\b{\\beta} \\def\\g{\\gamma} \\def\\d{\\delta} \\def\\e{\\epsilon Y^m&=\\dt_\\alpha} \\def\\b{\\beta} \\def\\g{\\gamma} \\def\\d{\\delta} \\def\\e{\\epsilon Y^m,\\\\\n\\hat{\\dt}_\\alpha} \\def\\b{\\beta} \\def\\g{\\gamma} \\def\\d{\\delta} \\def\\e{\\epsilon \\tilde{Y}_m&=\\dt \\tilde{Y}_m-(h^{-1})^{ab}\\tilde{k}_{am}\\tilde{k}_{an}\\mathcal{H}^n{}_P\\dt_\\alpha} \\def\\b{\\beta} \\def\\g{\\gamma} \\def\\d{\\delta} \\def\\e{\\epsilon Y^p=B_{mn}\\dt_\\alpha} \\def\\b{\\beta} \\def\\g{\\gamma} \\def\\d{\\delta} \\def\\e{\\epsilon Y^n.\n\\end{aligned}\n\\end{equation}\nWith this in hands it is easy to show that\n\\begin{equation}\n\\mathcal{H}_{MN}\\hat{D}_\\alpha} \\def\\b{\\beta} \\def\\g{\\gamma} \\def\\d{\\delta} \\def\\e{\\epsilon Y^M \\hat{D}_\\b Y^N= G_{mn}\\dt_\\alpha} \\def\\b{\\beta} \\def\\g{\\gamma} \\def\\d{\\delta} \\def\\e{\\epsilon Y^m \\dt_\\b Y^n,\n\\end{equation}\nwhere we used the fact that $A_{\\m}{}^M=0$ for the chosen embedding of the brane. \n\nFinally, substituting all this into the expression for the entropy \\eqref{inv_entr} one obtains\n\\begin{equation}\n\\label{Einv_NS5}\n\\begin{aligned}\nS_{NS5}&=\\int_\\S d^5 \\s e^{-2\\f}\\sqrt{G}|\\tilde{k}|\\frac} \\def\\dfr{\\dfrac} \\def\\dt{\\partial{1}{\\sqrt{G}}\\sqrt{-\\displaystyle \\Big(g_{\\m\\nu} \\def\\x{\\xi} \\def\\p{\\pi} \\def\\vp{\\varpi} \\def\\r{\\rho}\\dt_\\alpha} \\def\\b{\\beta} \\def\\g{\\gamma} \\def\\d{\\delta} \\def\\e{\\epsilon X^\\m \\dt_\\b X^\\nu} \\def\\x{\\xi} \\def\\p{\\pi} \\def\\vp{\\varpi} \\def\\r{\\rho+G_{mn}\\dt_\\alpha} \\def\\b{\\beta} \\def\\g{\\gamma} \\def\\d{\\delta} \\def\\e{\\epsilon Y^m \\dt_\\b Y^n\\Big)}\\\\\n&=|\\tilde{k}|\\int_\\S d^5 \\s e^{-2\\f}\\sqrt{-\\displaystyle \\Big(g_{\\m\\nu} \\def\\x{\\xi} \\def\\p{\\pi} \\def\\vp{\\varpi} \\def\\r{\\rho}\\dt_\\alpha} \\def\\b{\\beta} \\def\\g{\\gamma} \\def\\d{\\delta} \\def\\e{\\epsilon X^\\m \\dt_\\b X^\\nu} \\def\\x{\\xi} \\def\\p{\\pi} \\def\\vp{\\varpi} \\def\\r{\\rho+G_{mn}\\dt_\\alpha} \\def\\b{\\beta} \\def\\g{\\gamma} \\def\\d{\\delta} \\def\\e{\\epsilon Y^m \\dt_\\b Y^n\\Big)},\n\\end{aligned}\n\\end{equation}\nwhich is the conventional expression for the geometric entanglement entropy of Ryu and Takayanagi (for the chosen embedding, i.e. $g_{\\m m}=0$).\n\nThe same calculation can be repeated for the KK5-monopole. One starts with the following Killing vectors\n\\begin{equation}\nk_a{}^M=(0,k_4^m;\\tilde{k}_{e m}),\n\\end{equation}\nwhere $e,f,g,h=1,2,3$. And the direction $4$ is identified with the Taub-NUT direction (the special circle of the monopole). For further convenience it is natural to choose such basis for the vectors $k_a^M$ where $\\tilde{k}_{e 4}=0$ and $k_4^i$=0. 
Then the matrix $h_{ab}=k_a{}^Mk_b{}^N\\mathcal{H}_{MN}$ becomes\n\\begin{equation}\n\\begin{aligned}\nh_{ef}&=\\tilde{k}_{e m}\\tilde{k}_{fn}g^{mn}=\\tilde{k}_{e i}\\tilde{k}_{f j}G^{ij},\\\\\nh_{e4}&=0,\\\\\nh_{44}&=k_4^4k_4^4 G_{44},\n\\end{aligned}\n\\end{equation}\nand $\\det h_{ab}=|\\tilde{k}|^2 (k_4^4)^2 g^{-1} G_{44}$, where $g=\\det g_{ij}$ is determinant of the 3-dimensional part of the metric $G_{mn}$ defined as\n\\begin{equation}\n\\begin{aligned}\nG_{ij}&=g_{ij}+A_iA_j G_{44}, && & G_{i4}&=A_i G_{44},\\\\\nG^{ij}&=g^{ij}, && & G^{i4}&=-A_{i4}G_{44},\\\\\nG_{44}&=H^{-1}, && & G^{44}&=\\frac} \\def\\dfr{\\dfrac} \\def\\dt{\\partial{1}{G_{44}}+A_i A_i G_{44}.\n\\end{aligned}\n\\end{equation}\nFollowing the same procedure as before it is straightforward to obtain the following expression for derivatives of the fields $Y^M$\n\\begin{equation}\n\\begin{aligned}\n\\hat{\\dt}_\\alpha} \\def\\b{\\beta} \\def\\g{\\gamma} \\def\\d{\\delta} \\def\\e{\\epsilon Y^i&= \\dt_\\alpha} \\def\\b{\\beta} \\def\\g{\\gamma} \\def\\d{\\delta} \\def\\e{\\epsilon Y^i, && &\\hat{\\dt}_\\alpha} \\def\\b{\\beta} \\def\\g{\\gamma} \\def\\d{\\delta} \\def\\e{\\epsilon Y^4&=-A_i \\dt_\\alpha} \\def\\b{\\beta} \\def\\g{\\gamma} \\def\\d{\\delta} \\def\\e{\\epsilon Y^i\\\\\n\\hat{\\dt}_\\alpha} \\def\\b{\\beta} \\def\\g{\\gamma} \\def\\d{\\delta} \\def\\e{\\epsilon \\tilde{Y}_i&=A_i \\dt_a \\tilde{Y}_4, && &\\hat{\\dt} \\tilde{Y}_4&=\\dt_\\alpha} \\def\\b{\\beta} \\def\\g{\\gamma} \\def\\d{\\delta} \\def\\e{\\epsilon \\tilde{Y}_4.\n\\end{aligned}\n\\end{equation}\nThe crucial difference between NS5 brane and KK5-monopole here is that for the former one is left only with the fields $Y^i$, which upon embedding into the full DFT action are identified with the usual geometric coordinates. In contrast, for KK5-monopole after projection one has the fields $\\{Y^i,\\tilde{Y}_4\\}$ which results in dependence of the background fields on the corresponding dual (winding) coordinate $\\tilde{x}_4$. 
This behaviour has been observed in \\cite{Harvey:2005ab,Jensen:2011jna,Bakhmatov:2016kfn} for KK5 and in \\cite{Kimura:2013zva} for the exotic $5_2^2$-brane.\n\nFinally, collecting all these pieces together one arrives at the following expression for entanglement entropy of the world-volume theory on (localized) Kaluza-Klein monopole\n\\begin{equation}\n\\label{Einv_KK55}\n\\begin{aligned}\nS_{KK5}&=|\\tilde{k}||k_4^4|\\int_\\S d^5 \\s e^{-2\\f} G_{44}\\times\\\\\n&\\sqrt{-\\displaystyle \\det\\Big[g_{\\m\\nu} \\def\\x{\\xi} \\def\\p{\\pi} \\def\\vp{\\varpi} \\def\\r{\\rho}\\dt_\\alpha} \\def\\b{\\beta} \\def\\g{\\gamma} \\def\\d{\\delta} \\def\\e{\\epsilon X^\\m \\dt_\\b X^\\nu} \\def\\x{\\xi} \\def\\p{\\pi} \\def\\vp{\\varpi} \\def\\r{\\rho + \\big(G_{ij}-G_{44}A_iA_j\\big)\\dt_\\alpha} \\def\\b{\\beta} \\def\\g{\\gamma} \\def\\d{\\delta} \\def\\e{\\epsilon Y^i \\dt_\\b Y^j +\\big(G^{44}-G^{ij}A_iA_j\\big)\\dt_\\alpha} \\def\\b{\\beta} \\def\\g{\\gamma} \\def\\d{\\delta} \\def\\e{\\epsilon \\tilde{Y}_4 \\dt_\\b \\tilde{Y}_4\\Big]}\\\\\n&=|\\tilde{k}||k_4^4|\\int_\\S d^5 \\s e^{-2\\f}G_{44}\\sqrt{-\\displaystyle \\det\\Big[g_{\\m\\nu} \\def\\x{\\xi} \\def\\p{\\pi} \\def\\vp{\\varpi} \\def\\r{\\rho}\\dt_\\alpha} \\def\\b{\\beta} \\def\\g{\\gamma} \\def\\d{\\delta} \\def\\e{\\epsilon X^\\m \\dt_\\b X^\\nu} \\def\\x{\\xi} \\def\\p{\\pi} \\def\\vp{\\varpi} \\def\\r{\\rho + H\\big(\\d_{ij}\\dt_\\alpha} \\def\\b{\\beta} \\def\\g{\\gamma} \\def\\d{\\delta} \\def\\e{\\epsilon Y^i \\dt_\\b Y^j +\\dt_\\alpha} \\def\\b{\\beta} \\def\\g{\\gamma} \\def\\d{\\delta} \\def\\e{\\epsilon \\tilde{Y}_4 \\dt_\\b \\tilde{Y}_4\\big)\\Big]}.\n\\end{aligned}\n\\end{equation}\nWhere the last line is obtained by substituting the explicit background of KK5-monopole inside the square root. Taking into account that for the monopole one has $e^{-2\\f}=1$ and $G_{44}=H^{-1}$ the second line reproduces precisely the expression \\eqref{Einv_NS5} up to replacement $Y^4 \\to \\tilde{Y}_4$. Note however, that talking about world-volume dynamics and field theories on the branes one does not distinguish between fields $Y^m$ and their duals $\\tilde{Y}_m$. The only difference is that the latter see the background T-dual to the background seen by the former. This is a trivial consequence of the above considerations.\n\nNow, for exotic branes $5_2^r$ with $r=2,3,4$ the story is precisely the same and the algorithm is the following: fix the Killing vectors as in \\eqref{Kill}, calculate $h_{ab}$ and hatted derivatives $\\hat{\\dt}_\\alpha} \\def\\b{\\beta} \\def\\g{\\gamma} \\def\\d{\\delta} \\def\\e{\\epsilon$, substitute everything in \\eqref{inv_entr}. The result will always be \\eqref{Einv_NS5} with the corresponding replacement of the fields $Y^m$ by their duals. Hence the name ``invariant entropy''. We postpone speculations on the physical and geometrical meaning of this procedure to the next section.\n\n\n\n\n\\section{Discussion}\n\nIn this letter we propose a T-duality invariant generalization of the Ryu-Takayanagi formula for geometric entanglement entropy for the case of NS five-branes $5_2^r$ ($r=0,\\dots,4$), with tension proportional to $g_s^{-2}$. The result is the expression \\eqref{inv_entr} which is based on the same ideas as the effective action \\eqref{full_5} for these branes. In particular, to choose a representative brane from the orbit one must specify Killing vectors, which satisfy the so-called algebraic section constraint. 
The choice which gives the effective action of NS5B-brane also reproduces the RT-formula for entanglement entropy of $\\mathcal{N}=(1,1)$ Little String Theory living on this brane.\n\nWe check, that the same expression gives always the same result irrespective of which representative is chosen. This is in consistency with the fact, that e.g. the world-volume theory for the KK5A-brane is also $\\mathcal{N}=(1,1)$ LST and hence the entropy should be the same. As we show in Section \\ref{direct_KK} this is in contrast with the direct application of the Ryu-Takayanagi formula, which gives different results. \n\nOn the level of world-volume scalar fields $Y^M$ transition between orbit representatives (say NS5B and KK5A) is just replacement of a field $Y^m$ by its duality partner $\\tilde{Y}_m$. Although this has crucial impact on DFT and supergravity solutions changing the background, the world-volume theory has no way to see that, and hence it is always the same. For this reason, as the carrier of the $\\mathcal{N}=(1,1)$ LST in Type IIA string theory one should consider the localized Kaluza-Klein monopole rather than the smeared one \\cite{Harvey:2005ab}. The former is a deformation of the latter by instanton corrections, and is already exotic since its harmonic function depends on one dual coordinate \\cite{Jensen:2011jna,Bakhmatov:2016kfn}. The same is true for other exotic branes, which should also be localized.\n\nThe apparent issue that needs clarification is the following. For a theory on a Dp-brane one has apparent geometric picture, where the theory lives on a timelike surface at some $r\\neq 0$ in the transverse space. For AdS\/CFT correspondence one literally takes the conformal boundary of the anti-de-Sitter space. To calculate entanglement entropy geometrically one chooses a region $A$ on this surface and a surface $\\S$ in the transverse space of the brane such that $\\dt \\S =\\dt A$, and calculates its area in the given background. \n\nIn the case in question one cannot develop such simple geometric picture, and moreover one cannot do this already for the NS5 brane. Indeed, the corresponding geometry does not drop into AdS and the corresponding field theory is not conformal. However, the $6D$ field theory associated with the brane can be as well put at any $\\r$ in the transverse space, and the choice corresponds to the RG flow and one can still calculate area properly. The procedure described here suggests the following:\n\\begin{itemize}\n\\item start with a $5_2^r$-brane with any $r\\in\\{0,1,2,3,4\\}$ and its world-volume theory described by the doubled amount of scalar fields $\\Phi} \\def\\C{\\Chi} \\def\\Y{\\Psi} \\def\\W{\\Omega^M=(\\Phi} \\def\\C{\\Chi} \\def\\Y{\\Psi} \\def\\W{\\Omega^m,\\tilde{\\Phi} \\def\\C{\\Chi} \\def\\Y{\\Psi} \\def\\W{\\Omega}_m)$ half of which is projected out by the algebraic section constraint;\n\\item choose a region $A$ in the space of the theory with boundary $\\dt A$;\n\\item consider a surface $\\S$ with boundary $\\dt\\S$ parametrized by some coordinates $\\s^\\alpha} \\def\\b{\\beta} \\def\\g{\\gamma} \\def\\d{\\delta} \\def\\e{\\epsilon$;\n\\item this surface carries a doubled amount of scalar fields $\\{Y^M\\}$ with boundary conditions $Y^M\\big|_{\\dt\\S}=\\Phi} \\def\\C{\\Chi} \\def\\Y{\\Psi} \\def\\W{\\Omega^a\\big|_{\\dt A}$;\n\\item minimize the functional \\eqref{inv_entr}.\n\\end{itemize}\nThe theory in the first item here just descents from the full invariant effective action \\eqref{full_5}. 
The boundary condition is needed to identify the scalar fields living on the artificial surface $\\S$ with the actual fields of the theory. For the conventional geometric picture this is done automatically by the embedding functions, where both the theory and the surface live in a single geometric background. Apparently, this procedure trivially reproduces the conventional geometric calculation, and the only messages here are the following:\n\\begin{itemize}\n\\item to calculate entanglement entropy for the $\\mathcal{N}=(1,1)$ and $\\mathcal{N}=(2,0)$ $6D$ theories one may use equivalently any of the representative of the T-duality orbit;\n\\item to get the correct result one must take into account proper localization of the backgrounds in the dual space.\n\\end{itemize}\n\n\n\nAn interesting further direction of research is to generalize the expression to the case of M5-brane which belongs to the same orbit as the $5^3$-brane under U-duality group. One then still works with Little String Theory and 6D, however the invariant expression will be different. One can also consider D-branes in DFT, which can also be non-geometric, i.e. localized in the dual space. The corresponding effective action will be presented in the forthcoming paper \\cite{axel-eric-fabio} and investigation of the corresponding world-volume theories and their entanglement entropy we reserve for future work.\n\n\n\n\n\\section*{Acknowledgements} The authors are grateful to the Istanbul center of mathematical sciences and Bogazici University for hospitality during initial stages of this project. ETM would like to thank for hospitality Bogolyubov laboratory, JINR, Dubna. The work of ETM was supported by the Russian state grant Goszadanie 3.9904.2017\/8.9 and by the Alexander von Humboldt return fellowship and partially by the program of competitive growth of Kazan Federal University. \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Abstract}\n\nWe present the discovery of only the second radio-selected, z $\\sim$ 6 quasar. We identified SDSS J222843.54+011032.2 (z=5.95) by matching the optical detections of the deep Sloan Digital Sky Survey (SDSS) Stripe 82 with their radio counterparts in the Stripe82 VLA Survey. We also matched the Canadian-France-Hawaiian Telescope Legacy Survey Wide (CFHTLS Wide) with the Faint Images of the Radio Sky at Twenty-cm (FIRST) survey but have yet to find any z $\\sim$ 6 quasars in this survey area. The discovered quasar is optically-faint, $z = 22.3$ and M$_{1450}$ $\\sim$ -24.5, but radio-bright, with a flux density of f$_{1.4GHz, peak}$ = 0.31mJy and a radio-loudness of R $\\sim$ 1100 (where R $\\equiv$ $f_{5GHz}\/f_{2500}$). The $i-z$ color of the discovered quasar places it outside the color selection criteria for existing optical surveys. We conclude by discussing the need for deeper wide-area radio surveys in the context of high-redshift quasars. \n \n\\section{Introduction}\n\\label{sec:intro}\n\nHigh-redshift quasars (z $\\sim$ 6) which are powered by supermassive black holes (SMBH's) provide a window into the early universe, and, unsurprisingly, the study of these rare objects has grown in the decade since their discovery (\\citealp{2000AJ....120.1167F}; \\citealp{2001AJ....122.2833F}). These distant SMBH's are important in the study of galaxy evolution and the epoch of re-ionization. 
The strong correlation observed between the velocity dispersion of the stars in a galaxy and the mass of its central SMBH indicates a linkage between the mass of the SMBH and the evolution of the galaxy \\citep{2000ApJ...539L...9F}. This relation may be explained by a ``quasar phase\" of the SMBH, where significant accretion is coupled with feedback into the stellar formation processes of the host galaxy \\citep{ 2005ApJ...625L..71H}. Since nearly every galaxy is thought to have a central SMBH, active or not, high-redshift quasars are an important piece of the puzzle in understanding early galaxy evolution. \n\nThe spectra of distant quasars probe the intergalactic medium (IGM) and can be used to study the re-ionization epoch along the line of sight (\\citealp{2001AJ....122.2850B}; \\citealp{2003AJ....126....1W}; \\citealp{2006AJ....132..117F}). The presence of the Gunn-Peterson trough \\citep{1965ApJ...142.1633G} can indicate the neutral hydrogen fraction and act as a flag marking the re-ionization epoch. There is evidence that the fraction of neutral hydrogen by volume increases an order of magnitude from z $=$ 5.7 to z $=$ 6.4 (\\citealp{2006AJ....132..117F}; \\citealp{2010ApJ...714..834C}). This increase may mark the end of the re-ionization epoch, placing a high importance on any new discoveries of z $>$ 5.7 quasars, especially those which are radio-loud. Radio-loud quasars are valuable as probes of the early IGM because the coming generation of radio telescopes (EVLA, ALMA, SKA, LOFAR, etc.) will use them to measure the properties of neutral hydrogen through 21 cm absorption studies (\\citealp{2004NewAR..48.1029C}; \\citealp{2010HiA....15..312K}).\n\nDespite large area surveys designed to find high-redshift quasars, only about 60 ($z > 5.7$) quasars have been found so far (\\citealp{2000AJ....120.1167F}; \\citealp{2007AJ....134.2435W}; \\citealp{2008AJ....135.1057J}). These surveys all find quasars using similar methods that rely on red optical colors ($(i - z)_{AB}$ $>$ 2) and blue near-IR colors ($(z - J)_{AB}$ $<$ 1). The red optical color of high-z quasars is a consequence of the Ly$\\alpha$ break moving through and out of the blueward optical bands. The blue near-IR color cut is used to distinguish high-redshift quasars from the largest contaminant in these surveys: cool dwarf stars. An alternative method to separate high-redshift quasars from cool dwarf stars is to require a radio detection, as very few stars are radio-bright (\\citealp{kimball}). This method only finds $\\sim$5$\\%$ of quasars (3 of $\\sim$60 z $\\sim$ 6 quasars have been detected at $>$1mJy in the radio), but it can be used to select quasars with colors that fail the usual optical\/near-IR selection. Previously, one radio-selected z $\\sim$ 6 quasar was found by \\citet{2006ApJ...652..157M} in a mere 4 ${deg}^2$ search area in the NOAO Deep Wide-Field Survey (NDWFS). This quasar was bright enough to be discovered by other surveys; however, its red near-IR colors placed it in the color space occupied by cool dwarf stars, and hence it was missed by typical color-color selection methods. \n\nThis work presents the discovery of the second radio-selected, z $\\sim$ 6 quasar. 
We used two different survey combinations to search for radio-selected z $>$ 5.7 quasars: the Canadian-France-Hawaiian Telescope Legacy Survey Wide (CFHTLS Wide \\footnote[6]{http:\/\/www.cfht.hawaii.edu\/Science\/CFHTLS\/}) matched to the Faint Images of the Radio Sky at Twenty-cm (FIRST - \\citealp{1995ApJ...450..559B}) survey, and the deep Sloan Digital Sky Survey (SDSS Stripe 82 - \\citealp{2009ApJS..182..543A}) matched with the Stripe82 VLA Survey (Hodge et al. 2011). We discovered the quasar in Stripe 82 with the deeper radio data of the Stripe82 VLA Survey. \n\n In \\S 3 we describe our candidate selection methods and expected number of discoveries. In \\S 4, we discuss the observations of SDSS J222843.53+011032.0 (SDSS J2228+0110; z $=$ 5.95). In \\S 5 we discuss the implications of further studies of high redshift radio-selected quasars. All magnitudes are AB unless stated otherwise. This paper assumes a flat cosmological model with $\\Omega_{m} = 0.28$, $\\Omega_{\\Lambda} = 0.72$, and $H_{0} = 70$ km s$^{-1}$ Mpc$^{-1}$. \n\n\\section{Candidate Selection}\n\n\\subsection{CFHTLS Wide}\n\nCFHTLS Wide is an intermediate depth and area survey covering 171 deg$^2$ with $\\sim$130 deg$^2$ publicly available at the time of this work (release T0005). The typical integration times are 4300s in $i$ and 3600s in $z$, reaching an average depth of 24.5 in i and 23.8 in z at the 80$\\%$ completeness limit determined through simulation\\footnote[7]{http:\/\/terapix.iap.fr\/cplt\/oldSite\/Descart\/CFHTLS-T0005-Release.pdf}. \n\n\\subsection{FIRST}\n\nFIRST is a 20 cm survey over $\\sim$10,000 deg$^2$. With a sensitivity threshold of 1 mJy, it achieves a source density of $\\sim$90 deg$^{-2}$. The FIRST survey covers the entirety of the CFHTLS Wide survey, and the typical astrometric accuracy between the two surveys is $<$ 0.5''. \n\n\\subsection{Stripe 82}\n\nDuring the months when the primary SDSS area was not observable, SDSS repeatedly observed a strip of sky along the Galactic Equator known as Stripe 82. This patch of sky is 300 deg$^2$ and spans roughly 20$^{h}$ $<$ RA $<$ 4$^{h}$ and -1.5$^{\\circ}$ $<$ Dec $<$ 1.5$^{\\circ}$. The resulting co-additions (\\citealp{2009ApJS..182..543A}) go two magnitudes deeper than the typical SDSS images and reach 23.3 in $i$ and 22.5 in $z$ at the 95$\\%$ repeatable detection limit. \n\n\\subsection{Stripe 82 VLA Survey}\n\nThis 1.4 GHz survey was conducted with the Very Large Array (VLA) in A-configuration and has an angular resolution of 1.8'' (Program ID AR646 and AR685). It achieves a median rms noise of 52 $\\mu$Jy beam$^{-1}$ over 92 deg$^{2}$ (\\citealp{hodge2011}), making it the deepest 1.4 GHz survey to cover that much sky. A catalog of 17,969 isolated radio components, for an overall source density of $\\sim$195 sources deg$^{-2}$, is publicly available. The astrometric accuracy of the data is excellent, with an rms scatter of 0.25'' in both right ascension and declination when matched to SDSS's Stripe 82.\n\n\\subsection{Selection Method}\n\nHigh-redshift quasars are typically selected based on their very red optical colors: $(i - z)_{AB} > 1.5$. Cool dwarf stars also have red optical colors, so in an effort to reduce contamination, surveys such as SDSS require a blue near-IR color (see Figure 1). The optical color probes the Ly$\\alpha$ break while the near-IR color probes the quasar continuum as well as strong emission features. 
Dust reddening can affect the color of the quasar continuum and displace a fraction of the high-z quasars outside of the typical selection criterion. An alternative to a blue near-IR color cut is radio-selection, which reduces the contamination from cool dwarf stars to nearly zero but will only be able to recover $\\sim$5\\% of high-z quasars. \n\nWe combined optical and radio data through catalog matching. Counterparts between catalogs were defined using a matching radius of 0.6'' for Stripe 82 and 1'' for CFHTLS Wide, which was a compromise between completeness and reliability. We used a method similar to that of \\citet{2002ApJS..143....1M} to estimate completeness and reliability of the matches as a function of radius. A matching radius of 1'' for FIRST-CFHTLS wide results in a completeness of 83\\% and a reliability of 94\\%. A matching radius of 0.6'' for Stripe 82-Stripe 82 VLA results in a completeness of 91\\% and a reliability of 99\\%. If there were multiple optical sources for a single radio source, then only the closest match was used. After matching the two catalogs, two photometric cuts were applied to select the initial $z > 6$ quasar candidates: $(i -z)_{AB} > 1.5$ and $z_{AB} < 23.8$ for CFHTLS Wide, and $(i -z)_{AB} > 1.7$ \\footnote[8]{The color conversion between SDSS and CFHT for typical z $\\sim$ 6 quasars is $(i-z)_{SDSS} > (i-z)_{CFHT} + 0.20$; http:\/\/www.cadc.hia.nrc.gc.ca\/megapipe\/docs\/filters.html} and $z_{AB} < 22.5$ for Stripe 82. The candidates went through another series of cuts which required that the \\textit{u}, \\textit{g}, and \\textit{r} band fluxes be below the 3$\\sigma$ detection limit. Also, a visual inspection of the \\textit{i} and \\textit{z} band images was conducted to ensure there were no cosmic rays or bad pixels contaminating the photometry. The remaining candidates were then checked against known sources as to not repeat observations. One of the candidates was a previously found z$=$6.21 quasar by \\citet{2010AJ....139..906W}, CFHQS J1429+5447, and is the third radio-loud, z $\\sim$ 6 quasar to be identified. CFHQS J1429+5447 is the radio-brightest z $\\sim$ 6 quasar yet to be found with f$_{1.4GHz, peak}$ = 2.93 mJy and has the highest radio-loudness value of R $\\sim$ 3200, where R $\\equiv$ $f_{5 GHz} \/ f_{2500\\AA}$. After all of the cuts, there were 29 remaining candidates in CFHTLS Wide, i.e. $\\sim$ 0.2 per deg$^2$, and 27 in Stripe 82, giving $\\sim$ 0.3 per deg$^2$. \n\nOur search area was previously mined by other high-z quasar searches and thus our candidate list has some overlap with those selection methods. In the case of CFHTLS Wide, 3 of our 29 candidates satisfy the criteria set forth by \\citet{2009AJ....137.3541W}, $(i -z)_{AB} > 2.0$ and $z_{AB} < 23.0$ or 10$\\sigma$ $z_{AB}$ limit for the field (private communication). Only 3 of the 27 candidates in Stripe 82 would have been selected by \\citet{2009AJ....138..305J} while the other 24 candidates were either too blue, $(i -z)_{AB} < 2.2$, or too faint, $z_{AB} > 21.8$ for their completeness-limited selection. 
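The selection just described reduces to a cross-match followed by simple photometric cuts and can be summarized in a few lines of code. The sketch below is only an illustration of that logic (it is not the pipeline actually used for this work, and the array names are placeholders): each radio component is paired with its closest optical counterpart, the pair is kept if the separation is below the matching radius, and the Stripe 82 cuts quoted above are then applied; for CFHTLS Wide only the matching radius and the cut values change.
\begin{verbatim}
# Illustrative sketch of the radio-optical candidate selection
# (not the actual survey pipeline; column names are placeholders).
import numpy as np

def select_candidates(opt, radio, match_radius=0.6, iz_min=1.7, z_max=22.5):
    """opt, radio: dicts of 1-D arrays; coordinates in deg, magnitudes in AB.

    opt   needs: ra, dec, i, z, u_snr, g_snr, r_snr
    radio needs: ra, dec
    Returns indices of optical sources passing all cuts.
    """
    keep = set()
    for ra, dec in zip(radio["ra"], radio["dec"]):
        # Small-angle separations in arcsec (adequate for a ~1'' radius).
        dra = (opt["ra"] - ra) * np.cos(np.radians(dec)) * 3600.0
        ddec = (opt["dec"] - dec) * 3600.0
        sep = np.hypot(dra, ddec)
        j = int(np.argmin(sep))          # keep only the closest optical match
        if sep[j] > match_radius:
            continue
        red = (opt["i"][j] - opt["z"][j]) > iz_min
        bright = opt["z"][j] < z_max
        # u, g and r must all lie below the 3-sigma detection limit.
        dropout = (opt["u_snr"][j] < 3 and opt["g_snr"][j] < 3
                   and opt["r_snr"][j] < 3)
        if red and bright and dropout:
            keep.add(j)
    return sorted(keep)
\end{verbatim}
The remaining steps described in the text (visual inspection of the $i$- and $z$-band images and the cross-check against known sources) are not automated in this sketch.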
\n\nTo properly estimate how many quasars our radio-selection should find, we had to first estimate the completeness of our method as a function of redshift and rest-frame absolute magnitude, $M_{1450}$\\footnote[9]{$M_{1450}$ was calculated using the measured z-band magnitudes and assuming the \\citet{2001AJ....122..549V} spectrum corrected for the effective Gunn-Peterson optical depth due to Ly$\\alpha$ and Ly$\\beta$ absorption (\\citealp{2006AJ....132..117F})}. The completeness of our optical selection was estimated through simulation. A relation between redshift and $i-z$ color was calculated using the median color track of z $\\sim$ 3 quasars from SDSS redshifted from z$=$5.5 to z$=$6.7 and corrected for the effective Gunn-Peterson optical depth due to Ly$\\alpha$ and Ly$\\beta$ absorption (\\citealp{2006AJ....132..117F}). This track is very similar to that calculated by \\citet{2009AJ....137.3541W} as shown in Figure 1. A ``measured'' $i-z$ color was drawn randomly from a gaussian distribution with a mean $i-z$ color taken from the median quasar color track and a sigma calculated using both the standard deviation of the median quasar color track and the median error as a function of the $i$ and $z$ magnitudes. This was done hundreds of times for a grid of (z, $M_{1450}$), and the completeness was estimated by the fraction recovered by our selection method. An example of our completeness as a function of redshift and absolute magnitude is shown in Figure 2. The simulation of completeness only took into account the optical selection method and not the radio. The radio-loud ($>$1mJy) fraction of quasars at 0 $<$ z $<$ 5 in SDSS is $\\sim$10\\%, but is strongly dependent on redshift and optical luminosity (\\citealp{jiang2007}). We adopt a radio-loud fraction of 5\\% for z $\\sim$ 6 and apply this fraction to estimate the number of radio-selected quasars expected. \n\nUsing the luminosity function for z $=$ 6 quasars calculated by \\citet{2010AJ....139..906W}, we are able to estimate the expected number of quasars in our search, N, \\begin{equation}\nN = A * \\iint \\ \\Phi(z,M_{1450}) \\, V_{c}(z) \\,\\, p(z,M_{1450}) \\, \\mathrm{d}z \\, \\mathrm{d}M_{1450}. \n\\end{equation}\nThe comoving volume, $V_{c}(z)$, is corrected for the completeness, $p(z,M_{1450})$, to form an effective volume. The luminosity function, $\\Phi(z,M_{1450})$, is in the form of a double power law, and we used the best fit parameters from \\citet{2010AJ....139..906W}. Our search area is represented by A, in units of steradians. From the calculation, we expect 0.7 radio-selected quasars in Stripe 82 and 3.2 in CFHTLS Wide. We find more candidates than the expected number of quasars for several reasons. Firstly, some of our candidates are expected to be lower redshift, radio-loud AGN that share a red $i-z$ color with high-z quasars due to a strong Balmer break or dust extincted continuum. Secondly, a few of our candidates may be spurious matches where the optical source is not really associated with the radio emission. Thirdly, the use of a constant radio-loud fraction may not be appropriate as it has been observed to be a function of optical luminosity which could change our number estimates as much as a factor of 2 (\\citealp{jiang2007}). Lastly, the best fit parameters of the luminosity function from \\citet{2010AJ....139..906W} have large errors and can change the expected number of quasars by a factor of 3 or 4. 
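For concreteness, the expected number $N$ defined above can be evaluated by direct quadrature. The sketch below is our own illustration rather than the exact calculation performed for this work: the double power-law parameters are only representative of the best-fit values of \citet{2010AJ....139..906W} and should be checked against that paper, the mild density-evolution factor is an assumption, and the completeness is a placeholder for the simulated $p(z,M_{1450})$ described above.
\begin{verbatim}
# Sketch of the expected-number integral for radio-selected z ~ 6 quasars.
# All parameter values are placeholders / representative only (see text).
import numpy as np
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.28)

def lum_func(z, M, phi_star=1.14e-8, M_star=-25.13, alpha=-1.50, beta=-2.81):
    """Double power-law QLF [Mpc^-3 mag^-1]; parameter values roughly
    follow Willott et al. (2010) and must be checked before real use."""
    dpl = phi_star / (10**(0.4*(alpha + 1)*(M - M_star))
                      + 10**(0.4*(beta + 1)*(M - M_star)))
    return dpl * 10**(-0.47*(z - 6.0))   # assumed density evolution

def completeness(z, M):
    """Placeholder for the simulated selection completeness p(z, M1450)."""
    return 0.8 if (5.8 < z < 6.5 and M < -24.0) else 0.0

def expected_number(area_deg2, radio_loud_frac=0.05,
                    zlim=(5.7, 6.7), Mlim=(-28.0, -24.0), n=80):
    area_sr = area_deg2 * (np.pi / 180.0)**2
    zs = np.linspace(zlim[0], zlim[1], n)
    Ms = np.linspace(Mlim[0], Mlim[1], n)
    # dV_c/dz/dOmega in Mpc^3 per steradian per unit redshift
    dVdz = cosmo.differential_comoving_volume(zs).value
    grid = np.array([[lum_func(z, M) * completeness(z, M) for M in Ms]
                     for z in zs])
    N_all = area_sr * np.trapz(np.trapz(grid * dVdz[:, None], Ms, axis=1), zs)
    return radio_loud_frac * N_all

print(expected_number(92.0))    # Stripe 82 VLA survey area
print(expected_number(130.0))   # publicly available CFHTLS Wide area
\end{verbatim}
Replacing the placeholder completeness by the simulated $p(z,M_{1450})$ and the representative parameters by the actual best-fit values is what yields the numbers quoted above.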
\n\n\\section{Observations}\n\n\\subsection{Optical and Radio}\n\nThe quasar, SDSS J2228+0110, was selected for followup using the point spread function (PSF) magnitudes from the co-added imaging catalog of Stripe 82 (\\citealp{2009ApJS..182..543A}). It was found near our $z$-band magnitude limit at $z$ $=$ 22.28 and near our $i-z$ color limit at $i-z$ $=$ 1.81 which would have been too faint and too blue to be selected by \\citet{2009AJ....138..305J}. It is one of the faintest quasars found to date, with M$_{1450}$ $=$ -24.53. The quasar overlaps the UKIRT Infrared Deep Sky Survey (UKIDSS) sky coverage, but is undetected in the $J$-band placing an upper limit at $z-J \\le 1.4$\\footnote[10]{UKIDSS Large Area Survey detection limit is $J_{AB} = 20.9$, http:\/\/www.ukidss.org\/surveys\/surveys.html}.\n\nThe quasar was detected in the radio in the Stripe82 VLA Survey, which has a detection limit of 0.30 mJy. SDSS J2228+0110 has a measured peak flux density of 0.31 mJy, just above the detection limit, and it is only the fourth z $\\sim$ 6 quasar discovered with a flux density f$_{1.4GHz} >$ 0.3 mJy. The faint optical luminosity and relatively high radio luminosity, $L_{5 GHz} = $ 2.55 $\\times$ 10$^{32}$, makes this one of the most radio-loud z $\\sim$ 6 quasars ever found, with a radio-loudness of R $\\sim$ 1100. There is no discernible morphology from the optical or radio images given that the discovered quasar was at the limits of detection in both wavelengths (see Figure 3).\n\n\\subsection{Quasar Spectrum}\n\nFrom thirty-five candidates observed in June and December 2010 (sixteen from CFHTLS Wide and nineteen from Stripe 82 of which most remain unidentified with featureless continua), we discovered one quasar at z=5.95, SDSS J2228+0110 (see Table 1). The discovery spectrum of SDSS J2228+0110 was taken using the Keck I telescope with the Low Resolution Imaging Spectrometer (LRIS; \\citealp{1995PASP..107..375O}). The spectrum included four exposures of 900 seconds each and a 1\" long slit. It was taken with a 600\/10000 grating on the red camera, resulting in a dispersion of $\\sim$0.8$\\AA$ per pixel. The conditions were fair when the spectrum was taken, with $\\sim$1'' seeing at a high air mass of 1.5. \n\nThe reduction of the spectrum was done in a standard way. Bias frames were obtained and combined with the overscan bias, then subtracted off the science image. A flat field correction was applied using flat field frames taken during the observing run. A specialized routine was used for cosmic ray rejection which recognized and rejected cosmic rays based on their shape. The spectrum of our quasar was extracted using optimal variance weighting through the IRAF task apall. The wavelength was calibrated using arc line lamps. The standard star used for flux calibration was Feige 34\\footnote[11]{The calibration flux was obtained from the Space Telescope standard star flux catalog} in combination with a custom Mauna Kea extinction curve as a function of wavelength. A lower resolution spectrum, $\\sim$8$\\AA$ per pixel, was created using inverse sky-variance weighting. The spectrum clearly identifies the source as a z=5.95 quasar with a large continuum break blueward of a strong emission line marked as Ly$\\alpha$ (see Figure 4).\n\nThe only strong emission feature in the discovery spectrum is Ly$\\alpha$. The noise from sky emission lines resulted in a low signal to noise continuum, making it difficult to claim the detection of other, weaker emission lines. 
However, the redshift calculated from Ly$\\alpha$ does seem consistent with possible detections of other common emission features such as NV and Si IV. The wavelength coverage of the spectrum was not large enough to detect Ly$\\beta$ or OVI. \n\nWe measured the rest-frame equivalent width (EW) and full width at half maximum (FWHM) of Ly$\\alpha$ for SDSS J2228+0110. The wavelength coverage for this spectrum was too small to fit the continuum slope; instead, we assumed it to be a power law with slope $\\alpha$ = -0.5 (f$_{\\nu} \\propto {\\nu}^{\\alpha}$). The normalization of the power law was fit through a chi-squared minimization to the continuum redward of 1280$\\AA$. We fit the Ly$\\alpha$ profile with a Gaussian on its red side, and we assumed the line to be symmetric to account for the Ly$\\alpha$ forest on the blue side. We find a rest-frame FWHM of 7.68 $\\AA$, which is equivalent to a velocity of 1,890 km\/s, making it a narrow Ly$\\alpha$ but not abnormal. The rest-frame EW is 21.9 $\\AA$ and is at the low end, but well within the range of Ly$\\alpha$ strengths of other z $\\sim$ 6 quasars.\n\n\\section{Discussion}\n\nHigh-redshift quasar searches have been popular over the last decade. Large scale efforts have revealed that these objects are quite rare and quite difficult to find (\\citealp{2000AJ....120.1167F}; \\citealp{2007AJ....134.2435W}; \\citealp{2008AJ....135.1057J}). Many of these surveys use optical data to select candidates and rely on very red optical colors. Unfortunately, high-z quasars share the same red optical color space with cool dwarf stars. In an effort to reduce contamination, a blue near-IR cut is used in the selection method to help separate the two populations. This technique successfully increases efficiency; however, it reduces the completeness of the quasar sample by an unknown amount. Our incompleteness was estimated by redshifting low-z quasar spectra to high-z and tracking their colors. This track indicates that a large ``blue'' quasar population exists at high-z, and that optical surveys are selecting a significant sample of the population. However, it is an open question as to whether low-z quasars are truly like their high-z counterparts. \n\n\\citet{2006ApJ...652..157M} discovered, at the time, the highest redshift radio-loud quasar (FIRST J1427+3312) using radio selection in a search area of only 4 deg$^2$. The quasar was bright enough in the optical to be detected in other surveys, but its red near-IR color would have prevented its selection. This rare discovery in such a small area hints that there may be a larger population of ``red'' quasars than predicted. If this is true, current estimates for the quasar number density at z $\\sim$ 6 are too low. \n \nRadio-selection offers an alternative method for the selection of high-z quasars. Requiring a radio detection is just as efficient in removing contaminants as a blue near-IR color cut; however, radio-selection also allows for the detection of ``red'' quasars, possibly from dust reddening. A likely cause of dust reddening is the circumnuclear cocoon that is thought to encase relatively young quasars (\\citealp{ 2005ApJ...625L..71H}). As the quasar ages, the dust cocoon may be blown away by energy released from accretion at near-Eddington luminosities. The universe is less than one billion years old at z $\\sim$ 6, and it would not be unreasonable that the fraction of quasars in this dust cocoon phase is higher at higher redshift. 
\n \nOf the seven z $\\sim$ 6 quasars with $z-J >$ 0.8, three are radio-loud. It is not unusual that radio-loud quasars are redder than their radio-quiet counterparts, as this effect is also seen at lower redshift (\\citealp{2003AJ....126..706W}, \\citealp{2009AJ....138.1925M}). However, when lower redshift (z $\\sim$ 3) quasars are redshifted to z $=$ 6 and placed in the same color space as high-z quasars, the radio-loud quasars at higher redshift tend to have redder $z-J$ colors than those at lower redshift (see Figure 5). This is a very intriguing result, albeit one in the small-number regime, as it might suggest that radio-loud quasars are intrinsically redder at higher redshift or have higher quantities of dust. The result should also be taken with caution, as there is a substantial selection effect for z $\\sim$ 3 quasars in SDSS (\\citealp{richards}). The selection efficiency at z $\\sim$ 3 is $\\sim$50$\\%$ and biased towards redder $u-g$ colors; however, radio-selected z $\\sim$ 3 quasars do not seem to show the same color bias (\\citealp{worseck}). This comparison of radio-loud\/radio-selected z $\\sim$ 3 quasars redshifted to z $=$ 6 with observed radio-loud z $\\sim$ 6 quasars should therefore be free of a color selection effect and serve as a viable comparison across cosmic time. \n\nAlthough dust reddening of the quasar continuum may contribute to a significant ``red'' quasar population, it would also cause high levels of extinction in the detection bands. This extinction can be greater than three magnitudes for E(B-V) = 0.1 using a typical SMC reddening law (\\citealp{prevot}), placing even some of the brightest z $\\sim$ 6 quasars below the detection limit of wide-area surveys like CFHT and SDSS. A more likely cause of a significant ``red'' quasar population is the strength of Ly$\\alpha$. A strong Ly$\\alpha$ line leads to a ``bluer'' population of high-z quasars in the $z-J$ color, while a weak Ly$\\alpha$ line leads to a ``redder'' population of z $\\sim$ 6 quasars. A second factor that can lead to a ``red'' quasar population is the absorption blueward of Ly$\\alpha$ from neutral hydrogen in the intergalactic medium, which can lead to a ``red'' $i-z$ color for stronger absorption (see Figure 6).\n\nOur search for radio-selected quasars is ongoing. We plan to observe more candidates and to obtain a deeper optical spectrum of SDSS J2228+0110. In an effort to measure the $z-J$ color of radio-selected z $\\sim$ 6 quasars, we also plan to follow up on our current discovery and any new discoveries with near-IR imaging. It would be interesting to use the discovered z $\\sim$ 6 quasar to constrain the luminosity function of high-z quasars outside of the typical optical selection criteria; however, our survey is not complete. \n \nThe greatest limitation to radio-selection is the depth of the radio data. Only $\\sim$5\\% of z $\\sim$ 6 quasars are detected in FIRST. Even a medium-depth radio survey like the Stripe82 VLA Survey does not reach the median quasar radio luminosity. With a deep and wide-area radio survey, questions about the existence of a large ``red'' high-z quasar population, as well as the mechanism that might cause the reddening, could finally be answered. \n\n{\\bf Acknowledgements}: Some of the data presented herein were obtained at the W.M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. 
The Observatory was made possible by the generous financial support of the W.M. Keck Foundation.\nBased on observations obtained with MegaPrime\/MegaCam, a joint project of CFHT and CEA\/DAPNIA, at the Canada-France-Hawaii Telescope (CFHT) which is operated by the National Research Council (NRC) of Canada, the Institut National des Science de l'Univers of the Centre National de la Recherche Scientifique (CNRS) of France, and the University of Hawaii. This work is based in part on data products produced at TERAPIX and the Canadian Astronomy Data Centre as part of the Canada-France-Hawaii Telescope Legacy Survey, a collaborative project of NRC and CNRS. \nThe authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Mauna Kea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.\nGRZ acknowledges NRAO Grant GSSP 09-0010. The work by RHB was partly performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.\n\n\\bibliographystyle{apj}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{#1} \\vspace{0mm}}\n\\newcommand{\\SubSection}[1]{\\vspace{-1mm} \\subsection{#1} \\vspace{-0mm}}\n\\newcommand{\\SubSubSection}[1]{\\vspace{-1mm} \\subsubsection{#1} \\vspace{-1mm}}\n\n\\newcommand\\Mark[1]{\\textsuperscript#1}\n\n\\iccvfinalcopy\n\n\\def\\iccvPaperID{10568}\n\\def\\mbox{\\tt\\raisebox{-.5ex}{\\symbol{126}}}{\\mbox{\\tt\\raisebox{-.5ex}{\\symbol{126}}}}\n\n\n\\begin{document}\n\t\n\\begin{textblock*}{\\textwidth}(0cm,0cm)\n\t\\large\\noindent{\\copyright~2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting\/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.}\n\\end{textblock*}\n\\thispagestyle{empty}\t\n\n\n\\title{\\textbf{Full-Velocity Radar Returns by Radar-Camera Fusion}}\n\n\\author{\nYunfei Long\\Mark{1}, Daniel Morris\\Mark{1}, Xiaoming Liu\\Mark{1}, \\\\\nMarcos Castro\\Mark{2},\nPunarjay Chakravarty\\Mark{2},\nand Praveen Narayanan\\Mark{2} \\\\ \n\\Mark{1}Michigan State University, \\Mark{2}Ford Motor Company \\\\\n{\\tt\\small \\{longyunf,dmorris,liuxm\\}@msu.edu},\n{\\tt\\small \\{mgerard8,pchakra5,pnaray11\\}@ford.com}\n}\n\\date{}\n\n\n\\maketitle\n\\pagenumbering{arabic}\n\n\\begin{abstract}\nA distinctive feature of Doppler radar is the measurement of velocity in the radial direction for radar points. \nHowever, the missing tangential velocity component hampers object velocity estimation as well as temporal integration of radar sweeps in dynamic scenes. \nRecognizing that fusing camera with radar provides complementary information to radar, in this paper we present a closed-form solution for the point-wise, full-velocity estimate of Doppler returns using the corresponding optical flow from camera images. \nAdditionally, we address the association problem between radar returns and camera images with a neural network that is trained to estimate radar-camera correspondences. 
\nExperimental results on the nuScenes dataset verify the validity of the method and show significant improvements over the state-of-the-art in velocity estimation and accumulation of radar points. \n\\end{abstract}\n\n\\section{Introduction}\nRadar is a mainstream automotive 3D sensor, and along with LiDAR and camera, is used in perception systems for driving assistance and autonomous driving~\\cite{stanislas2015characterisation, li2020lidar,m3d-rpn-monocular-3d-region-proposal-network-for-object-detection}. \nUnlike LiDAR, radar has been widely installed on existing vehicles due to its relatively low cost and small sensor size, which makes it an easy fit into various vehicles without changing their appearance. Thus, advances in radar vision systems have potential to make immediate impact on vehicle safety. Recently, with the release of a couple of autonomous driving datasets with radar data included,~\\emph{e.g.}, Oxford Radar RobotCar~\\cite{barnes2020oxford} and nuScenes~\\cite{caesar2020nuscenes}, there is great interest in the community to explore how to leverage radar data in various vision tasks such as object detection~\\cite{nabati2021centerfusion, yang2020radarnet}.\n\n\n\\begin{figure}[t!]\n \\captionsetup{font=small}\n\t\\centering\n\t\\begin{subfigure}[b]{\\linewidth}\n\t\t\\centering\n\t\t\\includegraphics[width=0.9\\textwidth]{.\/figures\/indeterminacy.pdf}\n\t\t\\vspace{-2mm}\n\t\t\\caption{$ $}\n\t\t\\label{fig:y equals x}\n\t\\end{subfigure}\n\t\\hfill\n\t\\begin{subfigure}[b]{0.2\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=\\textwidth]{.\/figures\/flow1}\n\t\t\\caption{$ $}\n\t\t\\label{fig:three sin x}\n\t\\end{subfigure}\n\t\\hfill\n\t\\begin{subfigure}[b]{0.22\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=\\textwidth]{.\/figures\/ex_motion.pdf}\n\t\t\\caption{$ $}\n\t\t\\label{fig:five over x}\n\t\\end{subfigure}\n\t\\vspace{-2mm}\n\t\\caption{\\small (a) Full motion cannot be determined with a single sensor: all motions ending on the blue dashed line (\\emph{i.e.},~blue dashed arrows) map to the same optical flow and all motions terminated on the red dashed line (\\emph{i.e.},~red dashed arrows) fit the same radial motion. However, with a radar-camera pair, the full motion can be uniquely decided: only the motion drawn in black satisfies both optical flow and radial motion. (b) Optical flow in the camera-image and (c) a bird's-eye view of the observed vehicle. This shows measured radar points with radial velocity (red), our predicted point-wise, full velocity (black), and ground truth full velocity of the vehicle (green).}\n\t\\vspace{-3mm}\n\t\\label{Figure:indeterminacy}\n\\end{figure}\n\nIn addition to measuring 3D positions, radar has the special capability of obtaining radial velocity of returned points based on the Doppler effect. \nThis extra capability is a significant advantage over other 3D sensors like LiDAR, enabling, for instance, instantaneous moving object detection. \nHowever, due to the inherently ambiguous mapping from radial velocity to full velocity, using radial velocity directly to account for the real movement of radar points is inadequate and sometimes misleading. \nHere, the full velocity denotes the actual velocity of radar points in 2D or 3D space. \nWhile radial velocity can well approximate full velocity when a point is moving away from or towards the radar, these two can be very different when the point is moving in the non-radial directions. 
\nAn extreme case occurs for objects moving tangentially as these will have zero radial velocity regardless of target speed. Therefore, acquiring point-wise full velocity instead of radial velocity is crucial to reliably sense the motion of surrounding objects.\n\nApart from measuring the velocity of objects, another important application of point-wise velocity is the accumulation of radar points. \nRadar returns from a single frame are much sparser than LiDAR in both azimuth and elevation,~\\emph{e.g.}, typically LiDAR has an azimuth resolution $10\\times$ higher than radar~\\cite{yang2020radarnet}.\nThus, it is often essential to accumulate multiple prior radar frames to acquire sufficiently dense point clouds for downstream tasks,~\\emph{e.g.}, object detection~\\cite{nobis2019deep, chadwick2019distant, chang2020spatial}. \nTo align radar frames, in addition to compensating egomotion, we shall consider the motion of moving points in consecutive frames, which can be estimated by point-wise velocity and time of movement. \nAs the radial velocity does not reflect the true motion, it is desirable to have point-wise full velocity for point accumulation.\n\n\nTo solve the aforementioned dilemma of radial velocity, we propose to estimate point-wise full velocity of radar returns by fusing radar with a RGB camera. \nSpecifically, we derive a closed-form solution to infer point-wise full velocity from radial velocity as well as associated projected image motion obtained from optical flow. As shown in Fig.~\\ref{Figure:indeterminacy}, constraints imposed by optical flow resolve the ambiguities of radial-full velocity mapping and lead to a unique and closed-form solution for full velocity. \nOur method can be considered as a way to enhance raw radar measurement by upgrading point-wise radial velocity to full velocity, laying the groundwork for improving radar-related tasks, ~\\emph{e.g.}, velocity estimation, point accumulation and object detection. \n\nMoreover, a prerequisite for our closed-form solution is the association between moving radar points and image pixels. \nTo enable a reliable association, we train a neural network to predict radar-camera correspondences as well as discerning occluded radar points. \nExperimental results demonstrate that the proposed method improves point-wise velocity estimates and their use for object velocity estimation and radar point accumulation.\n\nIn summary, the main contributions of this work are:\n\\begin{itemize}\n \\item We define a novel research task for radar-camera perception systems,~\\emph{i.e.}, estimating point-wise full velocity of radar returns by fusing radar and camera.\n \n \\item We propose a novel closed-form solution to infer full radar-return velocity by leveraging the radial velocity of radar points, optical flow of images, and the learned association between radar points and image pixels.\n \n \\item We demonstrate state-of-the-art (SoTA) performance in object velocity estimation, radar point accumulation, and 3D object localization.\n \n\\end{itemize}\n\n\n\\section{Related Works}\n\n\\Paragraph{Application of Radar in Vision}\nRadar data differs from LiDAR data in various aspects~\\cite{brodeski2019deep}. 
\nIn addition to the popular point representation (also named radar target~\\cite{palffy2020cnn}), an analogy to LiDAR points, there are other radar data representations containing more raw measurements, {\\it e.g.}, range-azimuth image and spectrograms, which have been applied in tasks such as activity classification~\\cite{seyfiouglu2018deep}, detection~\\cite{lim2019radar}, and pose estimation~\\cite{roos2016reliable}. \nOur method is based on radar points, with the format available in the nuScenes dataset~\\cite{caesar2020nuscenes}. \n\nThe characteristics of radar have been explored to complement other sensors. \nThe Doppler velocity of radar points is used to distinguish moving targets. \nFor example, RSS-Net~\\cite{kaul2020rss} uses radial velocity as a motion cue for image semantic segmentation. Chadwick~\\emph{et al.}~\\cite{chadwick2019distant} use radial velocity to detect distant moving vehicles---difficult to detect with only images. \nFritsche~\\emph{et al.}~\\cite{fritsche2017fusion} combine radar with LiDAR for measurement under poor visibility. \nWith a longer detection range than LiDAR, radar is also deployed with LiDAR to better detect far objects~\\cite{yang2020radarnet}.\n\nThe {\\it sparsity} of radar makes it difficult to directly apply well-developed techniques for LiDAR on radar~\\cite{lim2019radar, nabati2021centerfusion}. \nFor example, Danzer~\\emph{et al.}~\\cite{danzer20192d} adopt PointNets~\\cite{qi2017pointnet} on radar points for 2D car detection, while sparsity limits it to large objects like cars. \nSimilar to LiDAR-camera depth completion~\\cite{depth-completion-with-twin-surface-extrapolation-at-occlusion-boundaries,depth-coefficients-for-depth-completion}, Long~\\emph{et al.}~\\cite{long2021radar} develop radar-camera depth completion by learning a probabilistic mapping from radar returns to images.\nTo obtain denser radar points, Lombacher~\\emph{et al.}~\\cite{lombacher2016potential} use occupancy grid~\\cite{elfes1989using} to accumulate radar frames.\nYet, the method assumes a static scene and cannot cope with moving objects. \nRadar points are projected on images and represented as regions near projected points, such as vertical bars~\\cite{nobis2019deep} and circles~\\cite{chadwick2019distant, chang2020spatial}, to account for uncertainty of projection due to measurement error. \nWhile accumulating radar frames is desirable, without reliably compensating object motion, these methods need to carefully decide the number of frames to trade off between the gain in accumulation and loss in accuracy due to delay~\\cite{nobis2019deep}. \nOur estimated point-wise velocity can compensate object motion and realize more accurate accumulation.\n\n\n\\Paragraph{Velocity Estimation in Perception Systems}\nResearchers have used monocular videos~\\cite{kinematic-3d-object-detection-in-monocular-video} or radial velocity of radar points to estimate {\\it object-wise} velocity. With only radar data of a single frame, Kellner~\\emph{et al.}~\\cite{kellner2013instantaneous, kellner2014instantaneous} compute full velocity of moving vehicles from radial velocities and azimuth angles of at least two radar hits. \nHowever, for a robust solution, the method requires that 1) radar captures more radar hits on each object, 2) radar points have significantly different azimuth angles and 3) object points are clustered before velocity estimation~\\cite{kellner2013instantaneous, schlichenmaier2019clustering, scheiner2019multi}. 
\nObviously due to sparsity of radar in a single frame, it is difficult to obtain at least two radar hits on distant vehicles, let alone objects of smaller sizes. \nAlso, it is common that radar points on the same object, {\\it e.g.}, a distant or small object, have similar azimuth. \n\n\\begin{figure*}[t!]\n \\captionsetup{font=small}\n\t\\begin{center}\n\t\t\\includegraphics[width=\\linewidth]{figures\/diagram.pdf}\n\t\\end{center}\n\t\\vspace{-6mm}\n\t\\caption{\\small \\textbf{Full velocity estimation and learning to associate radar points to camera pixels}. (a) A 3D point, $\\bm{p}$, is observed by a camera at $B$. A short interval, $\\Delta t$, later, the point has moved by $\\dot{\\bm{m}}\\Delta t$ to $\\bm{q}$ while the camera has moved by $\\dot{\\bm{c}}\\Delta t$ to $A$. At the same time, the radar measures both the position of $\\bm{q}$ and the radial speed $\\dot{r}$, which is the radial component of $\\dot{\\bm{m}}$. Using radial speed $\\dot{r}$ and the associated optical flow of $\\bm{q}$ in images, we derive a closed-form equation (denoted as $\\bm{f}()$) to estimate $\\bm{q}$'s full velocity $\\dot{\\bm{m}}$. (b) As the closed-form solution requires point-wise association of two sensors, we train a Radar-2-Pixel (R2P) network to take a multi-channel input and predict the association probabilities for pixels within a neighborhood of the raw projection (white dot) obtained via known pose $\\prescript{A}{R}\\T$. A pixel with the highest probability (yellow arrow) is deemed as the associated pixel of a radar point. To obtain labels for training R2P, our label generation module uses $\\bm{f}()$ to compute velocities of all neighboring pixels, then calculates velocity error $E_m$ by using the ground truth velocity $\\dot{\\bm{m}}_{GT}$, and finally obtains association probabilities of these neighbors based on $E_m$.}\n\t\\label{fig:diagram_association}\n\t\t\\vspace{-3mm}\n\\end{figure*}\n\n\nRecognizing the density and accuracy limitation of radar, researchers fuse radar with other sensors, {\\it e.g.}, LiDAR and camera, for object-wise velocity estimation. \nSpecifically, existing techniques~\\cite{zhao2019object,wu2020deep,li2020deep} for images or LiDAR are employed to obtain preliminary detections. \nRadar data, including radial velocity, once associated with the initial detections, are used as additional cues to predict full velocities of objects. \nFor instance, in RadarNet~\\cite{yang2020radarnet} \ntemporal point clouds of radar and LiDAR, modeled as voxels, are used to acquire initial detections and their motions. \nObject motion direction is used to resolve the ambiguities in radar-point association by\nback-projecting their radial velocities on the motion direction. \nYet, a sequence of LiDAR frames is required to obtain the initial detection and motion estimation.\n\nCenterFusion~\\cite{nabati2021centerfusion} integrates radar with camera for object-wise velocity estimation. \nWell-developed image-based detector is applied to extract preliminary boxes.\nAfter associating radar points with detections, the method combines radar data, radial velocity and depth, with image features within detected regions to regress a full velocity per detection. \nHowever, without a closed-form solution, the mapping from radial to full velocity needs to be learned from a great number of labeled data. \nIn contrast, we present a point-wise {\\it closed-form solution} for full-velocity estimation of radar points, without performing object detection. 
\nTo our knowledge, there is no prior method able to perform point-wise full-velocity estimation for radar returns.\n\n\n\\section{Proposed Method}\n\nWe consider the case of a camera and radar rigidly attached to a moving platform, {\\it e.g.}, a vehicle, observing moving objects in the environment. In this section we develop equations relating optical flow measurements in the camera to position and velocity measurements made by the radar. \n\n\\subsection{Physical Configuration and Notation}\n\n\nThe physical configuration of our camera and radar measurements is illustrated in Fig.~\\ref{fig:diagram_association}(a). Three coordinate systems are shown: $A$ and $B$ specifying camera poses and $R$ specifying a radar pose. The camera at $B$ observes a 3D point $\\bm{p}$. A short interval later, $\\Delta t$, the point has moved to $\\bm{q}$, the camera to $A$ and the radar to $R$, and both the camera and radar observe the target point $\\bm{q}$. These 3D points are specified by $4$-dim homogeneous vectors, and when needed, a left-superscript specifies the coordinate system in which it is specified, {\\it e.g.}, $\\prescript{A}{}\\bm{q}$ indicates a point relative to a coordinate system $A$. The target velocity, $\\dot{\\bm{m}}$, and camera velocity $\\dot{\\bm{c}}$ are specified by $3$-dim vectors, again optionally with a left superscript to specify a coordinate system. \n\nCoordinate transformations, containing both a rotation and translation, are specified by $4\\times 4$ matrices, such as $\\prescript{B}{A}{\\bm{T}}$, which transforms points from the left-subscript coordinate system to the left-superscript coordinate system. In this case we transform a point from $A$ to $B$ with:\n\\begin{equation}\n \\prescript{B}{}\\bm{q} = \\prescript{B}{A}{\\bm{T}} \\: \\prescript{A}{}\\bm{q}.\n\\end{equation}\nOnly the rotational component of these transformations is needed to transform velocities. For example, $\\prescript{A}{}\\dot{\\bm{m}}$ is transformed to $\\prescript{B}{}\\dot{\\bm{m}}$ by the $3\\times 3$ rotation matrix $\\prescript{B}{A}{\\bm{R}}$:\n\\begin{equation}\n \\prescript{B}{}\\dot{\\bm{m}} = \\prescript{B}{A}{\\bm{R}} \\: \\prescript{A}{}\\dot{\\bm{m}}.\n\\end{equation}\n\nA vector with a right subscript, {\\it e.g.}, ${\\bm{p}_i}$, indicates the $i$'th element of ${\\bm{p}}$, while a right subscript of ``1:3'' puts the first $3$ elements in a $3$-dim vector. \nFor a matrix, the right subscript indicates the row. Thus $\\prescript{B}{A}{\\bm{R}_i}$ is a $1\\times 3$ row vector containing its $i$-th row. A right superscript ``${}^\\mathsf{T}$'' is a matrix transpose.\n\nThe projections of points $\\bm{p}$ and $\\bm{q}$ are specified in either undistorted raw pixel coordinates, {\\it e.g.}, $(x_q,y_q)$ or their normalized image coordinates $(u_q,v_q)$ given by:\n\\begin{equation}\n u_q = (x_q-c_x) \/ f_x, \\hspace{0.5cm} v_q = (y_q-c_y) \/ f_y.\n\\end{equation}\nHere $c_x,c_y,f_x,f_y$ are intrinsic camera parameters, while the right subscript of the pixel refers to the point being projected. Vectors for 3D points can be expressed in terms of the normalized image coordinates:\n\\begin{align}\n\\prescript{A}{}\\qv = \\begin{pmatrix} u_qd_q\\\\ v_qd_q\\\\ d_q\\\\ 1 \\end{pmatrix} \n\\;\\; \\text{and} \\;\\;\n\\prescript{B}{}\\pv = \\begin{pmatrix} u_pd_p\\\\ v_pd_p\\\\ d_p\\\\ 1 \\end{pmatrix}.\n\\label{eq:qa}\n\\end{align}\nHere $d_q$ and $d_p$ are depths of points $\\prescript{A}{}\\qv$ and $\\prescript{B}{}\\pv$ respectively. 
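As a small illustration of this notation, the sketch below converts raw pixel coordinates to normalized image coordinates and assembles the corresponding homogeneous point from a depth value; the variable names are generic placeholders.
\\begin{verbatim}
import numpy as np

def normalized_coords(x, y, fx, fy, cx, cy):
    # Undistorted raw pixel coordinates -> normalized image coordinates.
    return (x - cx) / fx, (y - cy) / fy

def homogeneous_point(u, v, depth):
    # 4-dim homogeneous point from normalized coordinates and depth.
    return np.array([u * depth, v * depth, depth, 1.0])
\\end{verbatim}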
\n\nWe assume dense optical flow is available that maps target pixel coordinates observed in $A$ to $B$ as follows:\n\\begin{equation}\n \\text{Flow}\\left( (u_q,v_q) \\right) \\rightarrow (u_p,v_p).\n \\label{eq:flow}\n\\end{equation}\nFurther, we assume the following are known: camera motion, $\\prescript{B}{A}\\T$, relative radar pose, $\\prescript{A}{R}\\T$, and intrinsic parameters.\n\n\\subsection{Full-Velocity Radar Returns}\n\nThe Doppler velocity measured by a radar is just one component of the three-component, full-velocity vector of an object point. Here our goal is to leverage optical flow from a synchronized camera to augment radar and estimate this full-velocity vector for each radar return. \n\n\\subsubsection{Relationship of Full Velocity to Radial Velocity}\n\nThe target motion from $\\bm{p}$ to $\\bm{q}$ is modeled as constant velocity, $\\dot{\\bm{m}}$, over time $\\Delta t$, such that \n\\begin{equation}\n \\dot{\\bm{m}}=\\frac{\\bm{q}_{1:3}-\\bm{p}_{1:3}}{\\Delta t}. \n \\label{eq:velocity}\n\\end{equation}\nOur goal is to estimate the full target velocity, $\\dot{\\bm{m}}$. Radar provides an estimate of the target position, $\\bm{q}$, but not the previous target location $\\bm{p}$. Radar also provides the signed radial speed, $\\dot{r}$, which is one component of $\\dot{\\bm{m}}$. In the nuScenes dataset $\\dot{r}$ is given by:\n\\begin{equation}\n \\dot{r} = \\hat{\\rv}^\\mathsf{T}\\dot{\\bm{m}}.\n \\label{eq:radial}\n\\end{equation}\nHere $\\hat{\\rv}$ is the unit-norm vector along the direction to the target $\\prescript{R}{}\\qv$. Note that this equation is coordinate-invariant, and could be equally written in $A$ using $\\prescript{A}{}\\hat{\\rv}$ and $\\prescript{A}{}\\mvdot$. Now Eq.~\\eqref{eq:radial} is actually the egomotion-corrected Doppler speed. The raw Doppler speed, $\\dot{r}_{raw}$, is the radial component of the \\emph{relative} velocity between target and sensor, $\\dot{\\bm{m}}-\\dot{\\bm{c}}$, and this constraint is given by:\n\\begin{equation}\n \\dot{r}_{raw} = \\hat{\\rv}^\\mathsf{T}(\\dot{\\bm{m}}-\\dot{\\bm{c}}),\n \\label{eq:radial_relative}\n\\end{equation}\nwhere $\\dot{\\bm{c}}$ is the known ego-velocity. Either Eq.~\\eqref{eq:radial} or \\eqref{eq:radial_relative} can be used in our formulation, depending on whether $\\dot{r}$ or $\\dot{r}_{raw}$ is available from the radar.\n\n\n\\subsubsection{Relationship of Full Velocity to Optical Flow}\n\nIn solving the velocity constraints, we first identify the known variables. The radar measures $\\prescript{R}{}\\qv$, and transforming this we obtain $\\prescript{A}{}\\qv=\\prescript{A}{R}\\bm{T}\\:\\prescript{R}{}\\qv$ which contains $d_q$ as the third component. Image coordinates $(u_q,v_q)$ are obtained by projection, and using optical flow in Eq.~\\eqref{eq:flow}, we can also obtain the $(u_p,v_p)$ components of $\\prescript{B}{}\\pv$. The key parameter we do not know from this is the depth, $d_p$, in $B$.\n\nNext we eliminate this unknown depth from our constraints. Eq.~\\eqref{eq:velocity} can be rearranged and each component expressed in frame $B$:\n\\begin{equation}\n \\prescript{B}{}\\pv_{1:3} = \\prescript{B}{}\\qv_{1:3} - \\prescript{B}{A}\\R\\: \\prescript{A}{}\\mvdot\\Delta t,\n \\label{eq:pvB2}\n\\end{equation}\nwhere the second term on the right is the transformation of the target motion into $B$ coordinates. 
The third row of this equation is an expression for $d_p$:\n\\begin{equation}\n d_p = \\prescript{B}{}\\qv_3 - \\prescript{B}{A}\\R_3\\: \\prescript{A}{}\\mvdot\\Delta t.\n \\label{eq:dp1}\n\\end{equation}\nSubstituting this for $d_p$, and the components of $\\prescript{B}{}\\pv$ from Eq.~\\eqref{eq:qa}, into the first two rows of Eq.~\\eqref{eq:pvB2}, we obtain\n\\begin{eqnarray}\n\\begin{bmatrix}\nu_p(\\prescript{B}{}\\qv_3 - \\prescript{B}{A}\\R_3\\: \\prescript{A}{}\\mvdot\\Delta t) \\\\\nv_p(\\prescript{B}{}\\qv_3 - \\prescript{B}{A}\\R_3\\: \\prescript{A}{}\\mvdot\\Delta t) \\\\\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n\\prescript{B}{}\\qv_1 - \\prescript{B}{A}\\R_1\\: \\prescript{A}{}\\mvdot\\Delta t \\\\\n\\prescript{B}{}\\qv_2 - \\prescript{B}{A}\\R_2\\: \\prescript{A}{}\\mvdot\\Delta t \\\\\n\\end{bmatrix},\n\\label{eq:dp2}\n\\end{eqnarray}\nand rearrange to give two constraints on the full velocity:\n\\begin{eqnarray}\n\\begin{bmatrix}\n\\prescript{B}{A}\\R_1 - u_p\\prescript{B}{A}\\R_3 \\\\\n\\prescript{B}{A}\\R_2 - v_p\\prescript{B}{A}\\R_3 \\\\\n\\end{bmatrix}\n\\prescript{A}{}\\mvdot\n=\n\\begin{bmatrix}\n\\left(\\prescript{B}{}\\qv_1 - u_p\\prescript{B}{}\\qv_3\\right)\/\\Delta t \\\\\n\\left(\\prescript{B}{}\\qv_2 - v_p\\prescript{B}{}\\qv_3\\right)\/\\Delta t \\\\\n\\end{bmatrix}.\n\\label{eq:dp3}\n\\end{eqnarray}\n\n\\subsubsection{Full-Velocity Solution}\n\nWe obtain three constraints on the full velocity, $\\prescript{A}{}\\mvdot$, from Eq.~\\eqref{eq:dp3} and by converting Eq.~\\eqref{eq:radial} to $A$ coordinates. Combining these we obtain: \n\\begin{eqnarray}\n\\begin{bmatrix}\n\\prescript{B}{A}\\R_1 - u_p\\prescript{B}{A}\\R_3 \\\\\n\\prescript{B}{A}\\R_2 - v_p\\prescript{B}{A}\\R_3 \\\\\n\\prescript{A}{}\\hat{\\rv}^\\mathsf{T} \\\\\n\\end{bmatrix}\n\\prescript{A}{}\\mvdot\n=\n\\begin{bmatrix}\n\\left(\\prescript{B}{}\\qv_1 - u_p\\prescript{B}{}\\qv_3\\right)\/\\Delta t \\\\\n\\left(\\prescript{B}{}\\qv_2 - v_p\\prescript{B}{}\\qv_3\\right)\/\\Delta t \\\\\n\\dot{r} \\\\\n\\end{bmatrix}.\n\\label{eq:dp4}\n\\end{eqnarray}\nThen inverting the $3\\times 3$ coefficient of $\\prescript{A}{}\\mvdot$ gives a closed form solution for the full velocity:\n\\begin{eqnarray}\n\\prescript{A}{}\\mvdot =\n\\begin{bmatrix}\n\\prescript{B}{A}{\\bm{R}_1} - u_p \\prescript{B}{A}{\\bm{R}_3}\\\\\n\\prescript{B}{A}{\\bm{R}_2} - v_p \\prescript{B}{A}{\\bm{R}_3}\\\\\n\\prescript{A}{}\\hat{\\rv}^\\mathsf{T}\\\\ \n\\end{bmatrix}^{-1}\n\\begin{bmatrix}\n\\left(\\prescript{B}{}{\\bm{q}}_1 - u_p \\prescript{B}{}{\\bm{q}_3} \\right) \/ \\Delta t\\\\\n\\left(\\prescript{B}{}{\\bm{q}}_2 - v_p \\prescript{B}{}{\\bm{q}_3} \\right) \/ \\Delta t\\\\\n\\dot{r} \\\\ \n\\end{bmatrix}\n\\label{eq:full_v}.\n\\end{eqnarray}\n\n\n\\begin{figure}[t!]\n \\captionsetup{font=small}\n\t\\centering\n\t\\scalebox{1}{\n\t\t\\begin{tabular}{@{}c@{}c@{}c@{}c@{}}\n\t\t\t\\includegraphics[width=1.4 in]{.\/figures\/flow2} &\n\t\t\t\\includegraphics[width=1.4 in]{.\/figures\/bev1.pdf} \\vspace{-1mm}\\\\\n\t\t\t\\footnotesize{ (a) } & \\footnotesize{ (b) } \\\\\n\t\t\t\\includegraphics[width=1.4 in]{.\/figures\/04_ev} &\n\t\t\t\\includegraphics[width=1.4 in]{.\/figures\/02_ev} \\vspace{-1mm}\\\\\n\t\t\t \\footnotesize{ (c) } & \\footnotesize{ (d) } \\\\\n\t\\end{tabular} }\n\t\\vspace{-2mm}\n\t\\caption{\\small (a) Optical flow; (b) Bird's-eye view of GT bounding box, radial velocity (red) and GT velocity (green); (c) and (d) show $E_m$, computed by using Eq.~(\\ref{eq:e_v}), for two radar projections (white square) over 
$41\\times41$ pixel regions, respectively. For radar hits reflected from the vehicle, $E_m$ is small for neighboring pixels on the car and large on the background.}\n\t\\label{Figure:label}\\vspace{-3mm}\n\\end{figure}\n\n\n\nRecall in Fig.~\\ref{Figure:indeterminacy}(a) the red\/blue dashed lines show the velocity constraints from radar\/flow. The solution of Eq.~\\eqref{eq:full_v} is the full velocity that is consistent with both constraints. \nWe note that this can handle moving sensors, although Fig.~\\ref{Figure:indeterminacy}(a) shows the case of a stationary camera for simplicity. Further, if we set $\\Delta t<0$, Eq.~\\eqref{eq:full_v} also applies to the case that the point shifts from $\\bm{q}$ to $\\bm{p}$ as the camera moves from $A$ to $B$. And one limitation is that Eq.~\\eqref{eq:full_v} cannot estimate full velocity for radar points occluded in the camera view, although we can typically identify those occlusions.\n\n\n\\SubSection{Image Pixels and Radar Points Association}\n\nOur solution for point-wise velocity in Eq.~(\\ref{eq:full_v}) assumes that we know the pixel coordinates $(u_q,v_q)$ of the radar-detected point, $\\prescript{R}{}\\bm{q}$.\nIt appears straightforward to obtain this pixel correspondence by projecting a radar point onto the image using the known radar-image coordinate transformation, $\\prescript{A}{R}\\bm{T}$. \nWe refer to this corresponding pixel as ``raw projection''. \nHowever, there are a number of reasons why raw projection of radar points into an image is inaccurate. Radar beam-width typically subtends a few degrees and is large relative to a pixel, resulting in low resolution target location in both azimuth and elevation. Also, a radar displaced from a camera can often see behind an object, as viewed by the camera, and when these returns are projected onto an image they incorrectly appear to correspond to the foreground occluding object. Using flow from an occluder or an incorrectly associated object pixel may result in incorrect full-velocity estimation. To address these issues with raw projection, we train a neural network model, termed Radar-2-Pixel (R2P) network, to estimate associated radar pixels in the neighborhood of raw projection and identify occluded radar points.\nSimilar models have been applied to image segmentation~\\cite{kampffmeyer2018connnet} and radar depth enhancement~\\cite{long2021radar}.\n\n\n\\SubSubSection{Model Structure}\nOur method estimates association probabilities (ranging from $0$ to $1$) between a moving radar point and a set of pixels in the neighborhood of its raw projection. \nThe R2P network is an encoder-decoder structure with inputs and outputs of image resolution. \nStored in $8$ channels, the input data include image, radar depth map (with depth on raw projections) and optical flow. \nThe output has $N$ channels, representing predicted association probability for $N$ pixel neighbors. \nThe association between the radar point, $\\prescript{A}{}\\qv$, and the $k$-th neighbor of raw projection $(x,y)$ is stored in $A(x,y,k)$, where $k=1,2,...,N$.\n\n\n\\SubSubSection{Ground Truth Velocity of Moving Radar Points}\n\\label{sec:GT_velocity}\nThe nuScenes~\\cite{caesar2020nuscenes} provides the GT (ground truth) velocity of object bounding boxes. 
\nWe associate radar hits on an object with its labeled bounding box, and assign the velocity of the box to its associated radar points.\nThe association is determined based on two criteria:\n1) in radar coordinates, the distance between a radar point and the associated box is smaller than a threshold $T_d$; \nand 2) the percentage error between the radial velocity of a radar point and the radial component of the velocity of the associated box is smaller than a threshold $T_p$.\n\n\\SubSubSection{Generating Association Labels}\nWe can project a radar point expressed in corresponding camera coordinates, $\\prescript{A}{}\\qv$, to pixel coordinates $(u_q,v_q)$, but as mentioned before, often this image pixel does not correspond to the radar return. Our proposed solution is to search in a neighboring region around $(u_q,v_q)$ for a pixel whose motion is consistent with the radar return. This neighborhood search is shown in Fig.~\\ref{fig:diagram_association}. If a pixel is found, then we correct the 3D radar location $\\prescript{A}{}\\qv$ to be consistent with this pixel; otherwise we mark this radar return as occluded.\n\nWe learn this radar-to-pixel association and correction by training the R2P network. \nWe generate the true association score between a radar point and a pixel according to the compatibility between the true velocity and the optical flow at that pixel: high compatibility indicates high association. \nTo quantify the compatibility, assuming a pixel is associated with a radar point, we compute a hypothetical full velocity for the radar point by using the optical flow of that pixel according to Eq.~\\eqref{eq:full_v}. \nThe flow is considered compatible if the hypothetical velocity is close to the GT velocity. \nSpecifically, the hypothetical velocity can be computed as\n\\begin{equation}\n\\resizebox{.88\\hsize}{!}{\n$\\prescript{A}{}\\mvdot_{est}(x,y,k) = \\bm{f}\\left(\\breve{u}_q,\\breve{v}_q,\\breve{u}_p,\\breve{v}_p, d_q, \\dot{r}, \\prescript{B}{A}\\T, \\prescript{A}{R}\\T \\right),$\n}\n\\label{eq:fvel}\n\\end{equation}\nwhere $k=1,\\cdots,N$, $\\bm{f}(\\cdot)$ is the function to solve full velocity via Eq.~\\eqref{eq:full_v}, and $(x,y)$ is the raw projection of the radar point.\nNote that $\\breve{u}_q=u_q\\left[x+\\Delta x(k), y+\\Delta y(k)\\right]$, $\\breve{v}_q$ is defined similarly, \nand $[\\Delta x(k),\\Delta y(k)]$ is the coordinate offset from the raw projection to the $k$-th neighbor. Using flow, Eq.~\\eqref{eq:flow}, we obtain $(\\breve{u}_p,\\breve{v}_p)$ from $(\\breve{u}_q,\\breve{v}_q)$.\n\nSecond, we calculate the $L_2$ norm of the error between $\\prescript{A}{}\\mvdot_{est}(x,y,k)$ and the ground truth velocity $\\prescript{A}{}\\mvdot_{GT}(x,y)$ by\n\\begin{equation}\nE_m(x,y,k)= \\lVert \\prescript{A}{}\\mvdot_{est}(x,y,k) - \\prescript{A}{}\\mvdot_{GT}(x,y) \\rVert_2\n\\label{eq:e_v}.\n\\end{equation}\nFig.~\\ref{Figure:label} shows examples of $E_m$ for two radar hits on a car.\n\nFinally, we transform $E_m$ to an association score with\n\\begin{equation}\nL(x,y,k)= e^{-\\frac{E^2_m(x,y,k)}{c}}\n\\label{eq:prob},\n\\end{equation}\nwhere $L$ is used as a label for the association probability between a radar point and its $k$-th neighbor. Note that $L$ increases with decreasing $E_m$, and $c$ is a parameter adjusting the tolerance of velocity errors when converting errors to association. 
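To make the label generation concrete, the following Python sketch implements the closed-form solution $\\bm{f}(\\cdot)$ of Eq.~\\eqref{eq:full_v} and converts per-neighbor velocity errors into association labels via Eqs.~\\eqref{eq:e_v} and \\eqref{eq:prob}. It is a minimal illustration with placeholder variable names and without the optical-flow lookup or the loop over radar points; it is not the actual training code.
\\begin{verbatim}
import numpy as np

def full_velocity(u_p, v_p, q_B, r_hat_A, r_dot, R_BA, dt):
    # Closed-form full velocity in camera frame A.
    #   u_p, v_p : normalized image coordinates of the point in frame B
    #   q_B      : 3-vector, target position expressed in frame B
    #   r_hat_A  : unit radial direction to the target in frame A
    #   r_dot    : egomotion-corrected radial (Doppler) speed
    #   R_BA     : 3x3 rotation taking frame-A vectors to frame B
    #   dt       : time between the two camera frames
    A = np.vstack([R_BA[0] - u_p * R_BA[2],
                   R_BA[1] - v_p * R_BA[2],
                   r_hat_A])
    b = np.array([(q_B[0] - u_p * q_B[2]) / dt,
                  (q_B[1] - v_p * q_B[2]) / dt,
                  r_dot])
    return np.linalg.solve(A, b)

def association_labels(vel_est, vel_gt, c=0.36):
    # vel_est: (N, 3) hypothetical velocities, one per neighboring pixel;
    # vel_gt : (3,) ground-truth velocity of the radar point.
    E_m = np.linalg.norm(vel_est - vel_gt, axis=1)  # velocity error
    return np.exp(-E_m**2 / c)                      # association label L
\\end{verbatim}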
\nWe use the cross entropy loss to train the model.\n\n\n\n\n\\SubSubSection{Estimate Association and Identify Occlusion}\nWith a trained model, we can estimate the association probability between a radar point and the $N$ pixels around its raw projection $(x,y)$, {\\it i.e.}, $A(x,y,k)$.\nAmong the $N$ neighbors, the radar return velocity may be compatible with a number of pixels, and we select the pixel with the maximum association, $A_{max}$, as the neighbor ID $k_{max}$:\n\\begin{equation}\nk_{max}= \\underset{k}{\\arg\\max}[A(x,y,k)]\n\\label{eq:k_max}.\n\\end{equation}\n If $A_{max}$ is equal to or larger than a threshold $T_a$, we estimate the associated pixel as $\\left[x+\\Delta x(k_{max}), y+\\Delta y(k_{max})\\right]$. \nOtherwise, there are no associated pixels in the neighborhood, and an occlusion is identified.\n\n\\section{Experimental Results}\n\\subsection{Comparison of Point-wise Full Velocity}\n\nTo the best of our knowledge, there is no existing method that estimates {\\it point-wise} full velocity for radar returns. \nThus, we use the point-wise radial velocity from raw radar returns as the baseline to compare with our estimation. \nWe extract data from the nuScenes Object Detection Dataset~\\cite{caesar2020nuscenes}, with $6432$, $632$, and $2041$ samples in the training, validation and testing sets, respectively. \nEach sample consists of a radar scan and two images for optical flow computation, {\\it i.e.}, one image synchronized with the radar and the other a neighboring image frame. \nThe optical flow is computed by the RAFT model~\\cite{teed2020raft} pre-trained on KITTI~\\cite{geiger2013vision}. \nThe R2P network is a U-Net~\\cite{ronneberger2015u, morris2018pyramid} with five levels of resolution and $64$ channels for intermediate filters. \nThe neighborhood skips every other pixel, and its size (in pixels) is (left: $4$, right: $4$, top: $10$, bottom: $4$); an example of the neighborhood is illustrated in Fig.~\\ref{fig:diagram_association}(b). \nThe threshold of association scores $T_a$ is $0.3$. Parameters associating radar points with GT bounding boxes are set to $T_d=0.5$m and $T_p=20\\%$. \nParameter $c$ in Eq.~\\eqref{eq:prob} is $0.36$.\nTo obtain GT point-wise velocity, based on the criteria in Sec.~\\ref{sec:GT_velocity}, we first associate moving radar points with GT detection boxes, whose GT velocity is then assigned to the associated points. The GT velocity of bounding boxes is estimated from the GT center positions and timestamps of neighboring frames. \n\nTab.~\\ref{tab:baseline} shows the average velocity error for moving points. The proposed method achieves substantially more accurate velocity estimation than the baseline. \nFor instance, the error of our tangential component is only $21\\%$ of that of the baseline.\nWe also have a much smaller standard deviation, indicating more stable estimates. In addition, we list in Tab.~\\ref{tab:baseline} the velocity error of our method using the raw radar projection for radar-camera association. 
Results show that, compared with using raw projection, using R2P network achieves higher estimation accuracy.\nFig.~\\ref{Figure:pv} illustrates qualitative results of our point-wise velocity estimation.\n\n\\begin{figure*}[t]\n\t\\centering\n\t\\captionsetup{font=small}\n\t\\vspace{-2mm} \n\t\\scalebox{1.02}{\n\t\t\\begin{tabular}{@{}c@{}c@{}c@{}c@{}}\n\t\t\t\\includegraphics[height=1.09in]{.\/figures\/fig_v\/Figure_1} &\n\t\t\t\\includegraphics[height=1.16in]{.\/figures\/fig_v\/01_v_flow} &\n\t\t\t\\includegraphics[height=1.1in]{.\/figures\/fig_v\/01_v_aff} &\n\t\t\t\\includegraphics[height=1.1in]{.\/figures\/fig_v\/01_v_bev} \\vspace{-2mm} \\\\\n\t\t\t\\includegraphics[height=1.1in]{.\/figures\/fig_v\/Figure_2} &\n\t\t\t\\includegraphics[height=1.12in]{.\/figures\/fig_v\/02_v_flow} &\n\t\t\t\\includegraphics[height=1.1in]{.\/figures\/fig_v\/02_v_aff} &\n\t\t\t\\includegraphics[height=1.1in]{.\/figures\/fig_v\/02_v_bev} \\vspace{-3mm} \\\\\n\t\t\t\\includegraphics[height=1.1in]{.\/figures\/fig_v\/Figure_3} &\n\t\t\t\\includegraphics[height=1.15in]{.\/figures\/fig_v\/03_v_flow} &\n\t\t\t\\includegraphics[height=1.1in]{.\/figures\/fig_v\/03_v_aff} &\n\t\t\t\\includegraphics[height=1.1in]{.\/figures\/fig_v\/03_v_bev} \\vspace{-2mm} \\\\\n\t\t\t\\footnotesize{(a)} & \\footnotesize{ (b)} & \\footnotesize{ (c)} & \\footnotesize{ (d)} \\vspace{-2mm}\\\\\n\t \\end{tabular} }\n\t\\caption{\\small Visualization of point-wise velocity estimation: (a) depth of all measured radar returns as well as flow, (b) optical flow in the white box region, (c) association scores around the selected radar projections as well as predicted mapping from raw radar projections to image pixels (yellow arrow) and (d) radial velocity (red), estimated full velocity (black) and GT velocity (green) in bird's-eye view.}\n\t\\label{Figure:pv}\n\\end{figure*}\n\n\n\\begin{table}[t!]\n\t\\captionsetup{font=small}\n\t\\begin{center}\t\n\n\t\t\\scalebox{0.75}{\n\t\t\t\\begin{tabular}{|c|c|c|c|}\n\t\t\t\t\\hline\n\t\t\t\tMean Error (STD) & Ours & Ours & Baseline \\\\ \n\t\t\t\t(m\/s) & (R2P Network) & (Raw Projection) & \\\\\n\t\t\t\t\\hline\n\t\t\t\tFull Velocity \t \t& $\\mathbf{0.433}\\; (\\mathbf{0.608})$ & $0.577\\; (1.010)$ & $1.599\\; (2.054)$ \\\\\n\t\t\t\t\\hline\n\t\t\t\tTangential Comp. & $\\mathbf{0.322}\\; (\\mathbf{0.610})$ & $0.472\\; (1.024)$ & $1.536\\; (2.083)$ \\\\\n\t\t\t\t\\hline\n\t\t\t\tRadial Comp. \t& $0.205\\; (0.196)$ & $0.205\\; (0.196)$ & $0.205\\; (0.196)$ \\\\\n\t\t\t\t\\hline\n\t\t\t\\end{tabular}\n\t\t}\n\t\\end{center}\n\t\\vspace{-5mm}\n\t\\caption{\\small Comparison of point-wise velocity error of our methods and the baseline (raw radial velocity).}\n\t\\label{tab:baseline}\n\t\\vspace{-2mm}\n\\end{table}\n\n\n\\subsection{Comparison of Object-wise Velocity}\nAlthough there are no existing methods for point-wise velocity estimation for radar, a related work, CenterFusion~\\cite{nabati2021centerfusion}, estimates {\\it object-wise} full velocity via object detection with image and radar inputs. \nTo fairly compare with CenterFusion, we convert our point-wise velocity to object-wise velocity. \nSpecifically, we use the average velocity of radar points associated with the same detected box as our estimate of object velocity. \nPoints are associated with detected boxes according to distance. 
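A minimal sketch of this point-to-object conversion is given below; the distance criterion (here, distance to the detected box center with a placeholder threshold) and the input formats are illustrative assumptions only.
\\begin{verbatim}
import numpy as np

def object_velocity(box_center, points, velocities, max_dist=1.0):
    # Average the estimated full velocities of radar points lying within
    # max_dist (placeholder threshold) of the detected box center.
    d = np.linalg.norm(points - box_center, axis=1)
    sel = d < max_dist
    return velocities[sel].mean(axis=0) if sel.any() else None
\\end{verbatim}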
\nNote the point-wise velocity to object-wise velocity conversion is straightforward for comparison purposes, and there would be more advanced approaches to integrate point-wise full velocities in a detection network, which is beyond the scope of this work. \nTab.~\\ref{tab:det} shows that with our estimated full velocity, the velocity estimation for objects is significantly improved.\n\n\\begin{table}[t!]\n\t\\captionsetup{font=small}\n\t\\begin{center}\t\n\t\t\\scalebox{0.9}{\n\t\t\t\\begin{tabular}{|c|c|}\n\t\t\t\t\\hline\n\t\t\t\tMethods & Error (m\/s) \\\\ \n\t\t\t\t\\hline\n\t\t\t\tOurs \t & $\\mathbf{0.451}$ \\\\\n\t\t\t\tCenterFusion~\\cite{nabati2021centerfusion} & $0.826$ \\\\\n\t\t\t\t\\hline\n\t\t\t\\end{tabular}\n\t\t}\n\t\\end{center}\n\t\\vspace{-5mm}\n\t\\caption{\\small Comparison of object-wise velocity errors. For a fair comparison we inherit the same set of detected objects from~\\cite{nabati2021centerfusion}.}\n\t\\label{tab:det}\t\\vspace{-3mm}\n\\end{table}\n\n\n\\subsection{Radar Point Accumulation}\nAccumulating radar points over time can overcome the sparsity of radar hits acquired in a single sweep, achieving dense point cloud for objects and thus allowing techniques designed for processing LiDAR points to be applicable for radar. \nThe point-wise velocity estimate makes it possible to compensate the motion of dynamic objects appearing in a temporal sequence of measurements for accumulation. \nSpecifically, for a moving radar point (with estimated velocity $\\dot{\\bm{m}}$) in a previous frame $i$ captured at time $t_i$, its motion from $t_i$ to the time at the current frame, $t_0$, can be compensated by,\n\\begin{equation}\n\\bm{p_0}= \\bm{p_i} + \\dot{\\bm{m}}(t_0 - t_i),\n\\label{eq:motion_compensate}\n\\end{equation}\nwhere $\\bm{p_i}$ and $\\bm{p_0}$ are the radar point coordinates at $t_i$ and $t_0$ in radar coordinates of $t_i$. \nThen $\\bm{p_0}$ is transformed to current radar coordinates by known egomotion from $t_i$ to $t_0$.\n\n\\begin{figure}[t!]\n \\captionsetup{font=small}\n\t\\begin{center}\n\t\t\\includegraphics[width=0.8\\linewidth]{figures\/error_curve}\n\t\\end{center}\n\t\\vspace{-5mm}\n\t\\caption{\\small Error comparison when accumulating radar points from increasing number of frames. The lines represent mean error and shaded area $\\pm0.1\\times$ STD. Our full velocity based accumulation outperforms the ones with radial velocity, or no compensation.}\n\t\\label{fig:curve}\\vspace{-3mm}\n\\end{figure}\n\n\\Paragraph{Qualitative results}\nFig.~\\ref{Figure:acc} shows accumulated points of moving vehicles in radar coordinates. \nFor comparison, we show accumulated radar points compensated by our estimated full velocity, compensated with radial velocity (baseline) and without motion compensation. 
Compared with the baseline and no motion compensation, our accumulated points are more consistent with the GT bounding boxes.\n\n\n\\begin{figure*}[t!]\n\t\\centering\n\t\\captionsetup{font=small}\n\t\\scalebox{1}{\n\t\t\\begin{tabular}{@{}c@{}c@{}c@{}c@{}c@{}}\n\t\t\t\\includegraphics[width=1.2in]{.\/figures\/fig_acc\/02_acc_im} &\n\t\t\t\\includegraphics[width=1.2in]{.\/figures\/fig_acc\/02_acc_one}&\n\t\t\t\\includegraphics[width=1.2in]{.\/figures\/fig_acc\/02_acc_no} &\n\t\t\t\\includegraphics[width=1.2in]{.\/figures\/fig_acc\/02_acc_old}&\n\t\t\t\\includegraphics[width=1.2in]{.\/figures\/fig_acc\/02_acc_our}\\vspace{-2mm}\\\\\n\t\t\t\\includegraphics[width=1.2in]{.\/figures\/fig_acc\/03_acc_im} &\n\t\t\t\\includegraphics[width=1.2in]{.\/figures\/fig_acc\/03_acc_one}&\n\t\t\t\\includegraphics[width=1.2in]{.\/figures\/fig_acc\/03_acc_no} &\n\t\t\t\\includegraphics[width=1.2in]{.\/figures\/fig_acc\/03_acc_old}&\n\t\t\t\\includegraphics[width=1.2in]{.\/figures\/fig_acc\/03_acc_our}\\vspace{-2mm}\\\\\n\t\t\t\\includegraphics[width=1.2in]{.\/figures\/fig_acc\/04_acc_im} &\n\t\t\t\\includegraphics[width=1.2in]{.\/figures\/fig_acc\/04_acc_one}&\n\t\t\t\\includegraphics[width=1.2in]{.\/figures\/fig_acc\/04_acc_no} &\n\t\t\t\\includegraphics[width=1.2in]{.\/figures\/fig_acc\/04_acc_old}&\n\t\t\t\\includegraphics[width=1.2in]{.\/figures\/fig_acc\/04_acc_our}\\vspace{-2mm}\\\\\n\t\t\t\\includegraphics[width=1.2in]{.\/figures\/fig_acc\/05_acc_im} &\n\t\t\t\\includegraphics[width=1.2in]{.\/figures\/fig_acc\/05_acc_one}&\n\t\t\t\\includegraphics[width=1.2in]{.\/figures\/fig_acc\/05_acc_no} &\n\t\t\t\\includegraphics[width=1.2in]{.\/figures\/fig_acc\/05_acc_old}&\n\t\t\t\\includegraphics[width=1.2in]{.\/figures\/fig_acc\/05_acc_our}\\vspace{-2mm}\\\\\n\t\t\t\\footnotesize{(a)} & \\footnotesize{ (b)} & \\footnotesize{ (c)} & \\footnotesize{ (d)} & \\footnotesize{ (e)}\\\\\n\t\\end{tabular} }\n\t\\vspace{-2mm}\n\t\\caption{\\small Moving radar points are plotted with point-wise radial (red) and full (black) velocity, including image with bounding box (a), single-frame radar points in bird's-eye view (b), accumulated radar points from $20$ frames without motion compensation (c), with radial velocity based compensation (d), and with our full-velocity based compensation (e).\n\t Our accumulated points are tightly surrounding the bounding box, which will benefit downstream tasks such as pose estimation and object detection.}\n\t\\label{Figure:acc}\t\\vspace{-1mm}\n\\end{figure*}\n\n\n\n\\Paragraph{Quantitative results}\nTo quantitatively evaluate the accuracy of radar point accumulation, we use the mean distance from accumulated points (of up to $25$ frames) to their corresponding GT boxes as the accumulation error. \nThis distance for points inside the box is zero, and outside it is the distance from the radar point to the closest point on the box's boundary. \nIn Fig.~\\ref{fig:curve}, we compare the accumulation for our method, the baseline and accumulation without motion compensation. \nWhile error increases with the number of frames for all methods, our method has the lowest rate of error escalation. 
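The motion compensation used for these accumulation experiments can be sketched as follows; the egomotion transforms and per-point velocity estimates are assumed given, and this is an illustration of Eq.~\\eqref{eq:motion_compensate} rather than the evaluation code behind Fig.~\\ref{fig:curve}.
\\begin{verbatim}
import numpy as np

def accumulate_frames(frames, t0, ego_transform):
    # frames: list of (t_i, points_i, velocities_i); points and velocities
    #         are (N, 3) arrays in the radar coordinates of frame i.
    # t0: timestamp of the current frame.
    # ego_transform(t_i): known 4x4 transform from the radar coordinates
    #         of frame i to the current radar coordinates.
    accumulated = []
    for t_i, pts, vels in frames:
        # Compensate object motion with the estimated full velocity:
        # p_0 = p_i + m_dot * (t_0 - t_i).
        pts_t0 = pts + vels * (t0 - t_i)
        # Transform into current radar coordinates with known egomotion.
        pts_h = np.hstack([pts_t0, np.ones((len(pts_t0), 1))])
        accumulated.append((ego_transform(t_i) @ pts_h.T).T[:, :3])
    return np.vstack(accumulated)
\\end{verbatim}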
\n\n\n\\begin{table}[t!]\n \\captionsetup{font=small}\n\t\\begin{center}\t\n\t\t\\scalebox{0.85}{\n\t\t\t\\begin{tabular}{|c|c|c|}\n\t\t\t\t\\hline\n\t\t\t\t \t\t Metric\t& Ours & Baseline \\\\ \n\t\t\t\t\\hline\t\t\t\t\n\t\t\t\t\tCenter Error (m) $\\downarrow$ & $\\mathbf{0.834}$ & $0.997$ \\\\\t\t\t\n\t\t\t\t\tOrientation Error (degree) $\\downarrow$ & $\\mathbf{6.873}$ & $7.517$ \\\\\n\t\t\t\t\tIoU $\\uparrow$ \t\t\t\t\t\t& $\\mathbf{0.546}$ & $0.462$ \\\\\n\t\t\t\t\\hline\n\t\t\t\\end{tabular}\n\t\t}\n\t\\end{center}\n\t\\vspace{-4mm}\n\t\\caption{\\small Comparison of pose estimation performance: average error in center and orientation as well as Intersection over Union (IoU), by using BoxNet~\\cite{nezhadarya2019boxnet} on radar points accumulated using our velocity and the radial velocity as a baseline.}\n\t\\label{tab:iou}\t\\vspace{-3mm}\n\\end{table}\n\n\\Paragraph{Application of pose estimation}\nTo demonstrate the utility of accumulated radar points for downstream applications, we apply a pose estimation method, {\\it i.e.}, BoxNet~\\cite{nezhadarya2019boxnet}, on the accumulated 2D radar points via our full velocity and radial velocity (baseline), respectively. BoxNet takes pre-segmented 2D point clouds of an object as input and predicts a 2D bounding box with parameters as center position, length, width and orientation. \nWe use accumulated radar points of $5702$, $559$ and $2001$ moving vehicles with corresponding GT bounding boxes as training, validation and testing data, respectively. Tab.~\\ref{tab:iou} shows our accumulated radar achieves higher accuracy than the baseline.\n\n\n\\Section{Conclusion}\nA drawback of Doppler radar has been that it provides only the radial component of velocity, which limits its utility in object velocity estimation, motion prediction and radar return accumulation. This paper addresses this drawback by presenting a closed-form solution to the full velocity of radar returns. It leverages optical flow constraints to upgrade radial velocity into full velocity. As part of this work, we use GT bounding-box velocities to supervise a network that predicts association corrections for the raw radar projections. We experimentally verify the effectiveness of our method and demonstrate its application on motion compensation for integrating radar sweeps over time.\n\nThis method developed here may apply to additional modalities such as full-velocity estimation from Doppler LiDAR and cameras. \n\n\\vspace{1mm}\n\\noindent\\textbf{Acknowledgement}\nThis work was supported by the Ford-MSU Alliance.\n\n\n{\\small\n\\bibliographystyle{ieee_fullname}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nTopology freezing or fixing are important issues in quantum field theory, in particular in QCD. For example, when simulating chirally symmetric overlap quarks, the corresponding algorithms do not allow transitions between different topological sectors, i.e.\\ topological charge is fixed (cf.\\ e.g.\\ \\cite{Aoki:2008tq,Aoki:2012pma}). Also when using other quark discretizations, e.g.\\ Wilson fermions, topology freezing is expected at lattice spacings $a \\lesssim 0.05 \\, \\textrm{fm}$, which are nowadays still fine, but realistic \\cite{Luscher:2011kk,Schaefer:2012tq}. There are also applications, where one might fix topology on purpose. For example, when using a mixed action setup with light overlap valence and Wilson sea quarks, approximate zero modes in the valence sector are not compensated by the sea. 
The consequence is an ill-behaved continuum limit \\cite{Cichy:2010ta,Cichy:2012vg}. A possible solution to overcome this problem is to restrict computations to a single topological sector, either by sorting the generated gauge link configurations with respect to their topological charge or by directly employing so-called topology fixing actions (cf.\\ e.g.\\ \\cite{Fukaya:2005cw,Bietenholz:2005rd,Bruckmann:2009cv}).\n\nIn view of these issues it is important to develop methods, which allow to obtain physically meaningful results (i.e.\\ results corresponding to unfixed topology) from fixed topology simulations. The starting point for our work are calculations from the seminal papers \\cite{Brower:2003yx,Aoki:2007ka}. We extend these calculations by including all terms proportional to $1\/V^2$ and $1\/V^3$. We apply the resulting equations to a quantum mechanical particle on a circle, to the Schwinger model and to SU(2) Yang-Mills theory and determine ``hadron masses'' at unfixed topology from fixed topology computations and simulations (for related exploratory studies in the Schwinger model and the $O(2)$ and $O(3)$ non-linear Sigma model cf.\\ \\cite{Bietenholz:2011ey,Bietenholz:2012sh,Bautista:2014tba}).\n\nPart of this work has already been published \\cite{Dromard:2013wja,Dromard:2014wja,Czaban:2013haa}.\n\n\n\n\n\\section{\\label{SEC1}Hadron masses from fixed topology simulations}\n\n\n\\subsection{\\label{SEC11}Two-point correlation functions at fixed topology}\n\nThe partition function and the two-point correlation function of a hadron creation operator $O$ at fixed topological charge $Q$ and finite spacetime volume $V$ are given by\n{\\small\n\\begin{equation}\n\\begin{aligned}\n & Z_{Q,V} \\equiv \\int DA \\, D\\psi \\, D\\bar{\\psi}\\, \\delta_{Q,Q[A]} e^{-S_E[A,\\bar{\\psi},\\psi]} \\\\\n & C_{Q,V}(t) \\equiv \\frac{1}{Z_{Q,V}} \\int DA \\, D\\psi \\, D\\bar{\\psi} \\, \\delta_{Q,Q[A]} O^\\dagger(t) O(0) e^{-S_E[A,\\bar{\\psi},\\psi]} .\n\\end{aligned}\n\\end{equation}\n}Using a saddle point approximation the correlation function has been expanded in \\cite{Brower:2003yx} according to\n{\\small\n\\begin{equation}\n\\label{EQN673} C_{Q,V}(t) = \\alpha(0) \\exp\\bigg(-M_H(0) t - \\frac{M^{(2)}_H(0) t}{2\\mathcal{E}_2 V} \\bigg(1- \\frac{ Q^2}{\\mathcal{E}_2 V}\\bigg)\\bigg) + \\mathcal{O}\\bigg(\\frac{1}{ V^2} \\bigg) ,\n\\end{equation}\n}where $\\alpha(0)$ is a constant, $M_H(\\theta)$ the hadron mass at vacuum angle $\\theta$, $\\mathcal{E}_k \\equiv e_0^{(k)}(\\theta)|_{\\theta=0}$ ($\\mathcal{E}_2 = \\chi_t$, the topological susceptibility) and $e_0$ is the vacuum energy density. 
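\nPurely as an illustration of (\\ref{EQN673}), the leading-order fixed-topology correlator and the corresponding effective mass can be evaluated numerically; the following Python snippet uses made-up parameter values and is only a sketch, not part of our analysis code.
\\begin{verbatim}\nimport numpy as np\n\ndef correlator_fixed_Q(t, alpha0, MH0, MH2, chi_t, V, Q):\n    # leading 1\/V form of the fixed-topology correlator\n    shift = MH2 * t \/ (2.0 * chi_t * V) * (1.0 - Q**2 \/ (chi_t * V))\n    return alpha0 * np.exp(-MH0 * t - shift)\n\ndef effective_mass(C, dt=1.0):\n    # M_eff(t) = -d ln C(t) \/ dt, via a finite difference\n    return -np.diff(np.log(C)) \/ dt\n\n# illustrative values: M_H(0)=0.4, second derivative 2.0, chi_t*V=12, Q=1\nt = np.arange(0, 20)\nC = correlator_fixed_Q(t, 1.0, 0.4, 2.0, 0.006, 2000.0, 1)\nprint(effective_mass(C))\n\\end{verbatim}\nAt this order the effective mass is a constant, shifted away from $M_H(0)$ by the $1\/V$ correction; higher orders and the exact correlator introduce additional $t$-dependence.\n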
In \\cite{Dromard:2014wja} we have extended this calculation by including all terms proportional to $1\/V^2$ and $1\/V^3$,\n{\\small{\n\\begin{equation}\n\\label{EQN674} \\begin{aligned}\n & C_{Q,V}(t) = \\alpha(0) \\exp\\bigg(-M_H(0) t - \\frac{x_2}{2\\mathcal{E}_2 V} - \\bigg(\\frac{x_4 - 2 (\\mathcal{E}_4\/\\mathcal{E}_2) x_2 - 2 x_2^2-4x_2Q^2}{8(\\mathcal{E}_2 V)^2} \\bigg) \\\\\n & \\hspace{0.6cm} - \\bigg(\\frac{16 (\\mathcal{E}_4\/\\mathcal{E}_2)^2 x_2 + x_6 - 3 (\\mathcal{E}_6\/\\mathcal{E}_2) x_2 - 8 (\\mathcal{E}_4\/\\mathcal{E}_2) x_4 - 12 x_2 x_4 + 18 (\\mathcal{E}_4\/\\mathcal{E}_2) x_2^2 + 8 x_2^3}{48(\\mathcal{E}_2 V)^3} \\\\\n & \\hspace{1.2cm} - \\frac{x_4 - 3 (\\mathcal{E}_4\/\\mathcal{E}_2) x_2 - 2 x_2^2}{4(\\mathcal{E}_2 V)^3} Q^2\\bigg)\\bigg) + \\mathcal{O}\\bigg(\\frac{1}{(\\mathcal{E}_2 V)^4} \\, , \\, \\frac{1}{(\\mathcal{E}_2 V)^4} Q^2 \\, , \\, \\frac{1}{(\\mathcal{E}_2 V)^4} Q^4\\bigg) ,\n\\end{aligned}\n\\end{equation}\n}where $x_n \\equiv M^{(n)}_H (0) t+ \\beta^{(n)}(0)$ (for the definition of $\\beta^{(n)}$ cf.\\ \\cite{Dromard:2014wja}). The expansions (\\ref{EQN673}) and (\\ref{EQN674}) are rather accurate approximations, if the following conditions are fulfilled: \\vspace{0.1cm}\n\\\\\\textbf{(C1)} $\\phantom{xxx} 1 \/ \\mathcal{E}_2 V \\ll 1 \\quad , \\quad |Q| \/ \\mathcal{E}_2 V \\ll 1$. \\vspace{0.1cm}\n\\\\\\textbf{(C2)} $\\phantom{xxx} |x_2| = |M_H^{(2)}(0) t + \\beta^{(2)}(0)| \\lesssim 1$. \\vspace{0.1cm}\n\\\\\\textbf{(C3)} $\\phantom{xxx} m_\\pi(\\theta) L \\gtrsim 3 \\ldots 5 \\gg 1$ $\\ \\ $ ($m_\\pi$: pion mass, $L$: periodic spatial extension). \\vspace{0.1cm}\n\\\\\\textbf{(C4)} $\\phantom{xxx} (M_H^\\ast(\\theta) - M_H(\\theta)) t \\gg 1 \\quad , \\quad M_H(\\theta) (T-2 t) \\gg 1$. \\vspace{0.1cm}\n\nNote that the effective mass at fixed topology, defined in the usual way,\n{\\small\n\\begin{equation}\n\\label{eq:MQ} M^\\textrm{eff}_{Q,V}(t) \\equiv -\\frac{1}{C_{Q,V}(t)}\\frac{d C_{Q,V}(t)}{dt} ,\n\\end{equation}\n}exhibits severe deviations from a constant behavior at large temporal separations $t$ \\cite{Dromard:2014wja}, which is in contrast to ordinary quantum field theory at unfixed topology.\n\n\n\\subsection{\\label{SEC13}Extracting hadron masses}\n\nA straightforward method to determine physical hadron masses (i.e.\\ hadron masses at unfixed topology) from fixed topology simulations is to fit either (\\ref{EQN673}) or (\\ref{EQN674}) to two-point correlation functions computed at fixed topology. Among the results of the fit are then the hadron mass at unfixed topology $M_H(0)$ and the topological susceptibility $\\mathcal{E}_2 = \\chi_t$. A similar method is to first determine hadron masses $M_{Q,V}$ at fixed topological charge $Q$ and spacetime volume $V$ and then use equations based on (\\ref{EQN673}) or (\\ref{EQN674}) to determine $M_H(0)$ and $\\mathcal{E}_2 = \\chi_t$. For a detailed discussion cf.\\ \\cite{Dromard:2014wja}}.\n\n\n\n\n\\section{\\label{SEC2}A quantum mechanical particle on a circle at fixed topology}\n\nFor a first test of the methods mentioned in section~\\ref{SEC13} we decided for a simple toy model, a quantum mechanical particle on a circle in a square well potential. This model shares some important features with QCD, e.g.\\ the existence of topological charge and the symmetry $+\\theta \\leftrightarrow -\\theta$. Moreover, it can be solved numerically up to arbitrary precision. 
We determine $M_H(0)$ (which is the energy difference between the ground state and the first excitation) and $\\chi_t$ from fixed topology two-point correlation functions as outlined in section~\\ref{SEC13}. We compare the $1\/V$ expansion from \\cite{Brower:2003yx} (eq.\\ (\\ref{EQN673})) and our $1\/V^3$ version (eq.\\ (\\ref{EQN674})). We find rather accurate results for $M_H(0)$ and $\\chi_t$ (cf.\\ Table~\\ref{TAB001}). Note that the relative errors for both $M_H(0)$ and $\\chi_t$ are smaller, when using the $1\/V^3$ version (\\ref{EQN674}). For details cf.\\ \\cite{Dromard:2013wja,Dromard:2014wja}.\n\n\\begin{table}[htb]\n\\begin{center}\n\\small{\n\\begin{tabular}{|c|c|c|c|c|c|}\n\\hline \n & expansion & $\\hat{M}_H(0)$ & error& $\\hat{\\chi}_t$ & error \\tabularnewline\n\\hline \n\\hline \n\\multirow{2}{*}{$\\frac{|Q|}{\\chi_{t}V} \\leq 0.5$} & (\\ref{EQN674}), hep-lat\/0302005 & $0.40702$ & $0.029\\%$& $0.00629$ & $2.5\\%$\\tabularnewline\n\\cline{2-6} \n & (\\ref{EQN674}) & $0.40706$ & $0.019\\%$ & $0.00633$ & $1.9\\%$ \\tabularnewline\n\\hline \n\\end{tabular}\n}\n\n\\caption{\\label{TAB001}$M_H(0)$ and $\\chi_t$ from fixed topology two-point correlation functions; ``error'' denotes relative differences to the exact results $\\hat{M}_{H} = 0.40714$ and $\\hat{\\chi}_t = 0.00645$ at unfixed topology.}\n\\end{center}\n\\end{table}\n\n\n\n\n\\section{The Schwinger model at fixed topology}\n\n\nThe Schwinger model, defined by the Lagrangian\n{\\small\n\\begin{equation}\n\\mathcal{L}(\\psi,\\bar{\\psi},A_{\\mu}) \\equiv \\bar{\\psi} (\\gamma_\\mu (\\partial_\\mu + i g A_\\mu) + m) \\psi + \\frac{1}{2} F_{\\mu \\nu} F_{\\mu \\nu} ,\n\\end{equation}\n}also shares certain features with QCD, most prominently confinement. Furthermore, simulations are computationally inexpensive, because there are only $2$ spacetime dimensions.\n\nWe have studied the ``pion'' mass $m_\\pi$ and the static quark-antiquark potential ${\\mathcal V}_{q\\bar{q}}$ for various separations. Results are summarized in Table~\\ref{TAB002}. In the first line (``fixed top.'') results obtained from two-point correlation functions at fixed topology (as outlined in section~\\ref{SEC13}) are listed. In the second line (``unfixed top.'') they are compared to results from standard lattice simulations, where gauge link configurations from all topological sectors are taken into account. One can observe agreement demonstrating that one can obtain correct and accurate physical results from fixed topology simulations. For details cf.\\ \\cite{Czaban:2013haa}.\n\n\\begin{table}[htb]\n\\begin{center}\n{\\small\n\\begin{tabular}{|c|c|c|c|c|c|c|}\n\\hline\n & $m_\\pi a$ & ${\\mathcal V}_{q\\bar{q}}(1a) a$ & ${\\mathcal V}_{q\\bar{q}}(2a) a$ & ${\\mathcal V}_{q\\bar{q}}(3a) a$ & ${\\mathcal V}_{q\\bar{q}}(4a) a$ \\\\ \\hline\\hline\nfixed top.\\ & 0.2747(2) & 0.12551(4) & 0.2247(2) & 0.3005(3) & 0.3581(7) \\\\ \\hline\nunfixed top.\\ & 0.2743(3) & 0.12551(4) & 0.2247(2) & 0.3008(4) & 0.3577(9) \\\\ \\hline\n\\end{tabular}\n}\n\n\\caption{\\label{TAB002}Comparison of results obtained from computations at fixed and at unfixed topology.}\n\\end{center}\n\\end{table}\n\n\n\n\n\\section{SU(2) Yang-Mills theory at fixed topology}\n\nCurrently we perform fixed topology studies of SU(2) Yang-Mills theory,\n{\\small\n\\begin{equation}\n\\mathcal{L}(A_\\mu) \\equiv \\frac{1}{4} F_{\\mu \\nu}^a F_{\\mu \\nu}^a ,\n\\end{equation}\n}which is expected to be rather similar to QCD. 
Again we explore the static quark-antiquark potential for various separations.\n\nThe left plot in Fig.~\\ref{FIG342} shows that there is a significant discrepancy between the potential from computations restricted to a single topological sector and corresponding results obtained at unfixed topology. The plot, therefore, underlines the necessity of a method to extract physical results from fixed topology computations.\n\nIn the right plot of Fig.~\\ref{FIG342} we compare the static potential obtained from Wilson loops at fixed topology (as outlined in section~\\ref{SEC13}) and from standard lattice simulations, where gauge link configurations from all topological sectors are taken into account. As for the Schwinger model, one can observe excellent agreement demonstrating again that one can obtain correct and accurate physical results from fixed topology simulations.\n\nDetails regarding our study of Yang-Mills theory at fixed topology will be published in the near future.\n\n\\begin{figure}[htb]\n\\input{FIGSU001.pstex_t}\n\\caption{\\label{FIG342} \\textbf{(left)} ${\\mathcal V}_{q\\bar{q}}(6a)$ for different topological sectors $Q = 0, 1, 2, 3$ for spacetime volume $V\/a^4 = 16^4$. \\textbf{(right)} Comparison of potential results obtained from computations at fixed and at unfixed topology.}\n\n\\end{figure}\n\n\n\n\n\\section{Conclusions and outlook}\n\nWe have extended relations from the literature \\cite{Brower:2003yx,Aoki:2007ka} relating two-point correlation functions at fixed topology to physical hadron masses (i.e.\\ hadron masses at unfixed topology). We have successfully applied our resulting equations to various models. We plan to test the same methods for QCD in the near future, where hadron masses obtained from different topological sectors also exhibit clear differences (for an example cf.\\ \\cite{Galletly:2006hq}, where the pion mass has been computed in various topological charge sectors).\n\n\n\n\n\\section*{Acknowledgments}\n\nWe thank Wolfgang Bietenholz, Krzysztof Cichy, Dennis Dietrich, Gregorio Herdoiza, Karl Jansen and Andreas Wipf for discussions. We acknowledge support by the Emmy Noether Programme of the DFG (German Research Foundation), grant WA 3000\/1-1. 
This work was supported in part by the Helmholtz International Center for FAIR within the framework of the LOEWE program launched by the State of Hesse.\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzelbd b/data_all_eng_slimpj/shuffled/split2/finalzzelbd new file mode 100644 index 0000000000000000000000000000000000000000..3d0aea4592f4d9994cdc9b8b744d536d8205a0ce --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzelbd @@ -0,0 +1,5 @@ +{"text":"\\section*{Abstract.}%\n \\else\n \\small\n \\quotation\n\t\\noindent{\\bfseriesAbstract.}%\n \\fi}\n {\\if@twocolumn\\else\\endquotation\\fi}\n\\makeatother\n\\setcounter{secnumdepth}{2}\n\n\\title{\\Large \\bf Placing Green Bridges Optimally, with a Multivariate~Analysis}\n\\author{Till Fluschnik\\footnote{Supported by DFG, project TORE (NI\/369-18).} \\and Leon Kellerhals}\n\\date{\\small Technische Universit\u00e4t Berlin, Faculty~IV,\\\\ Algorithmics and Computational Complexity, Germany.\\\\\\texttt{\\{till.fluschnik,leon.kellerhals\\}@tu-berlin.de}}\n\n\\begin{document}\n\n\\maketitle\n\n\\begin{abstract}\nWe study the problem of placing wildlife crossings, \nsuch as green bridges,\nover human-made obstacles to challenge habitat fragmentation.\nThe main task\nherein is,\ngiven\na graph describing habitats or routes of wildlife animals and possibilities of building green bridges,\nto find a low-cost placement of green bridges that connects the habitats.\nWe develop different problem models for this task\nand study them from a computational complexity and parameterized algorithmics perspective.\n\n\\medskip\n\\noindent\n\\emph{Keywords.}\nwildlife crossings,\ncomputational complexity,\nNP-hardness,\nparameterized algorithmics,\nconnected subgraphs.\n\\end{abstract}\n\n\\section{Introduction}\n\\label{sec:intro}\n\nSustainability \nis\nan enormous concern \nimpacting today's\npolitics,\neconomy,\nand industry.\nAccordingly,\nsustainability sciences are well-established by now.\nYet,\nthe \\emph{interdisciplinary} scientific field ``computational sustainability''~\\cite{Gomes09a},\nwhich combines practical and theoretical computer science with sustainability sciences,\nis quite young.\nFor instance,\nthe Institute for Computational Sustainability \nat Cornell University was founded in 2008,\nthe 1st International Conference on Computational Sustainability \n(CompSust'09) \ntook place in 2009,\nand\nspecial tracks on\ncomputational sustainability and AI\nwere established\nin 2011 (AAAI) and 2013 (IJCAI).\nThis work\ncontributes to\ncomputational sustainability:\nWe model problems of elaborately placing wildlife crossings\nand give complexity-theoretical and algorithmic analysis for each.\nWildlife crossings are constructions (mostly bridges or tunnels) that allow wildlife animals to safely cross human-made transportation lines (mostly roads). \nWe will refer to wildlife crossings as~\\emph{green bridges}.\n\nHuijser~\\textsl{et~al.}~\\cite{HuijserMHKCSA08}\ngive an extensive report on wildlife-vehicle collisions.\nThey identify several endangered animal species suffering from high road mortality\nand estimate the annual cost associated with wildlife-vehicle collisions with about 8~billion US dollars.\nWildlife fencing with wildlife crossings\ncan reduce collisions by over 80\\%~\\cite{HuijserMHKCSA08},\nenables populations to sustain~\\cite{SawayaKC14},\nand are thereby among the most cost-effective~\\cite{HuijserDCAM09}. 
\nThe implementation,\nthough,\nis an important problem:\n\\begin{quote}\nThe location, type, and dimensions of wildlife\ncrossing structures must be carefully planned with regard to the species and surrounding\nlandscape.\\ \n[...]\\ \nIn addition, different species use different habitats, influencing their\nmovements and where they want to cross the road.\n~\\cite[p.\\;\\!16]{HuijserMHKCSA08}\n\\end{quote}\nIn addition,\nit is pointed out that data about wildlife habitats \nis basic for mitigation plans,\nyet challenging to obtain~\\cite{ZhengLY19}.\nIn this work,\nour main problem is placing green bridges\nat low cost and under several variants of habitat-connectivity requirements,\nthereby inherently modeling different availabilities of data on habitats.\nThe problem is hence the following:\n\\textsl{\nGiven a graph describing habitats of wildlife animals and possibilities of building green bridges,\nfind a low-cost placement of green bridges that sufficiently connects habitats.\n}\nIn particular,\nwe comparatively \nstudy in terms of computational complexity and parameterized algorithmics\nthe following three different (families of) decision problems.%\n\\footnote{The ($d$-th power) graph $G^d$\nof~$G$ \ncontains edge~$\\{v,w\\}$ if and only if~$\\dist_G(v,w)\\leq d$.}\n\\decprobX{$\\Pi$ Green Bridges Placement{} ($\\Pi$ \\prob{GBP}{})}\n{An undirected graph~$G=(V,E)$,\na set~$\\mathcal{H}=\\{V_1,\\dots,V_r\\}$ of habitats where~$V_i\\subseteq V$ for all~$i\\in\\set{r}$, \nand~$k\\in\\mathbb{N}_0$.}\n{Is there an edge set~$F\\subseteq E$ with~$|F|\\leq k$ such that for every~$i\\in\\set{r}$, \nit holds that\n\n\\smallskip\n\\noindent\n{\n\\setlength{\\tabcolsep}{7pt}\n\\begin{tabular}{@{\\hspace{1em}}rlll@{}}\n $\\Pi\\equiv{}$\\prob{$d$-Reach}: & $G[F]^d[V_i]$ is connected? & (\\cref{prob:rgbp}) & (\\cref{sec:rgbp}) \\\\\n $\\Pi\\equiv{}$\\prob{$d$-Closed}: & $G[F]^d[V_i]$ is a clique? & (\\cref{prob:cgbp}) & (\\cref{sec:cgbp})\\\\\n $\\Pi\\equiv{}$\\prob{$d$-Diam(eter)}: & $\\diam(G[F][V_i])\\leq d$? 
& (\\cref{prob:dgbp}) & (\\cref{sec:dgbp})\n\\end{tabular}\n}}\n\n\\noindent\nIn words:\n\\RgbpAcr{} seeks to connect each habitat \nsuch\nthat \nevery patch has some other patch at short distance.\n\\CgbpAcr{} seeks to connect each habitat such that any two habitat's patches are\nat short distance.\nFinally,\n\\DgbpAcr{} seeks to connect each habitat such that the habitat forms a connected component of low diameter.\n\\cref{fig:relationship}\ndepicts a relationship between the problems in terms of \nKarp reductions.\n\\begin{figure}[t]\\centering\n \\begin{tikzpicture}\n \\def1.1{1}\n \\def0.9{1}\n \\node (con) at (0,-1.25*0.9)[]{\\hyperref[prob:congbp]{\\prob{Connect \\GBP}}};\n \\node (r) at (-3*1.1,-1*0.9)[]{\\hyperref[prob:rgbp]{\\prob{Reach \\prob{GBP}{}}}};\n \\node (c) at (3*1.1,-1*0.9)[]{\\hyperref[prob:cgbp]{\\prob{Closed \\prob{GBP}{}}}};\n \n \\newcommandx{\\AredB}[3][1=$\\geq_P$]{\n \\draw[draw=none] (#2) to node[midway,sloped]{#1}(#3);\n }\n \\AredB{con}{r};\n \\AredB[$\\leq_P$]{con}{c};\n \n \\node (r1) at (-4*1.1,-2*0.9)[]{\\RgbpAcr[1]{}};\n \\node (c1) at (4*1.1,-2*0.9)[]{\\CgbpAcr[1]{}};\n \n \\AredB[$\\leq_P$]{r1}{r};\n \\AredB{c1}{c};\n \n \\node (d) at (-1.25*1.1,-1.75*0.9)[]{\\hyperref[prob:dgbp]{\\prob{Diam \\prob{GBP}{}}}};\n \\node (d1) at (1.25*1.1,-2*0.9)[]{\\DgbpAcr[1]{}};\n \n \\AredB[$\\leq_P$]{r1}{d}\n \\AredB{d1}{d}\n \\AredB[$\\equiv_P$]{d1}{c1}\n \\end{tikzpicture} \n \\caption{Polynomial-time many-one reducibility directly derived from problem definitions.}\n \\label{fig:relationship}\n\\end{figure}\n\n\\paragraph{Our contributions.}\nOur results are summarized in \\cref{tab:results}.\n\\begin{table}[t]\n \\caption{Overview of our results. \n \\cocl{NP}-c., P, and K \n stand for \n \\cocl{NP}-complete,\n ``polynomial-size'',\n and ``problem kernel'',\n respectively.\n \\tss{a}(even on planar graphs or if~$\\Delta=4$)\n \\tss{b}(even on bipartite graphs with~$\\Delta=4$ or graphs of diameter four.)\n \\tss{c}(even if~$r=1$ or if~$r=2$ and~$\\Delta=4$)\n \\tss{d}(even on bipartite graphs of diameter three and~$r=1$, \n \\emph{but} linear-time solvable when~$r+\\Delta$ is constant)\n \\tss{e}(admits a linear-size problem kernel if~$\\Delta$ is constant)\n \\tss{f}(linear-time solvable when~$r+\\Delta$ is constant) \n \\tss{$\\dagger$}(no polynomial problem kernel unless~\\ensuremath{\\NP\\subseteq \\coNP\/\\poly}, but an~$\\O(k^3)$-vertex kernel on planar graphs)\n }\n \\label{tab:results}\n \\centering\n \\setlength{\\tabcolsep}{3.25pt}\n \\begin{tabular}{@{}p{0.12\\textwidth}l|p{0.13\\textwidth}|p{0.17\\textwidth}p{0.11\\textwidth}p{0.19\\textwidth}|p{0.1\\textwidth}@{}}\\toprule\n Problem & & Comput. & \\multicolumn{3}{p{0.46\\textwidth}|}{\n Parameterized Algorithmics} & Ref.\\\\\n ($\\Pi$~\\prob{GBP}) & & Complex. & $k$ & $r$ & $k+r$ &\n \\\\ \\midrule\\midrule\n \\multirow{3}{*}{\\makecell{\\prob{$d$-Reach} \\\\ \\tref{sec:rgbp}}} & $d=1$ & \\cocl{NP}-c.~\\tss{a} & $O(k4^{k})$ K & \\emph{open} & $O(rk+k^2)$ PK & \\tref{ssec:1rgbp}\\\\\n & $d=2$ & \\cocl{NP}-c.\\tss{b} & $O(2^{4k})$ K\\tss{$\\dagger$} & p-\\cocl{NP}-h. \\tss{c} & FPT\\tss{$\\dagger$} & \\tref{ssec:2rgbp} \\\\\n & $d\\geq 3$ & \\cocl{NP}-c. & \\cocl{XP}, \\W{1}-h. & p-\\cocl{NP}-h. & \\cocl{XP}, \\W{1}-h. & \\tref{ssec:3rgbp}\\\\\n \\midrule\n \\multirow{3}{*}{\\makecell{\\prob{$d$-Closed} \\\\ \\tref{sec:cgbp}}} & $d=1$ & Lin.~time & --- & --- & --- & \\tref{sec:cgbp} \\\\\n & $d=2$ & \\cocl{NP}-c. 
\\tss{d} & $O(2^{4k})$ K\\tss{$\\dagger$} & p-\\cocl{NP}-h.\\tss{e} & FPT\\tss{$\\dagger$} & \\tref{ssec:2cgbp}\\\\\n & $d\\geq 3$ & \\cocl{NP}-c. & \\cocl{XP}, \\W{1}-h. & p-\\cocl{NP}-h.\\tss{e} & \\cocl{XP}, \\W{1}-h. & \\tref{ssec:3cgbp}\\\\\n \\midrule\n \\multirow{3}{*}{\\makecell{\\prob{$d$-Diam} \\\\ \\tref{sec:dgbp}}} & $d=1$ & Lin.~time & --- & --- & --- & \\tref{sec:dgbp}\\\\\n & $d=2$ & \\cocl{NP}-c. \\tss{f} & $2k$-vertex K & p-\\cocl{NP}-h. & $O(rk+k^2)$ PK & \\tref{ssec:2dgbp}\\\\\n & $d\\geq 3$ & \\cocl{NP}-c. & $2k$-vertex K & p-\\cocl{NP}-h. & $O(rk+k^2)$ PK & \\tref{ssec:3dgbp}\n \\\\\\arrayrulecolor{black} \\bottomrule\n \\end{tabular}\n\\end{table}\nWe settle the classic complexity and parameterized complexity (regarding the number~$k$ of green bridges and the number~$r$ of habitats)\nof the three problems.\nWhile~\\RgbpAcr{} is \n(surprisingly) \nalready \\cocl{NP}-hard for~$d=1$ on planar or maximum degree~$\\Delta=4$ graphs,\n\\CgbpAcr{} and \\DgbpAcr{} become~\\cocl{NP}-hard for~$d\\geq 2$,\nbut admit an $(r+\\Delta)^{\\O(1)}$-sized problem kernel \nand thus are linear time solvable if~$r+\\Delta$ is constant.\nExcept for~\\RgbpAcr[1]{},\nwe proved all variants to be para-\\cocl{NP}-hard regarding~$r$.\n\\RgbpAcr{} and \\CgbpAcr{}\nare fixed-parameter tractable regarding~$k$ when~$d\\leq 2$,\nbut become~\\W{1}-hard (yet~\\cocl{XP}) regarding~$k$ and~$k+r$ when~$d>2$.\nAdditionally,\nwe prove that\n\\RgbpAcr{} admits an $rd$-approximation in~$\\O(mn+rnd)$ time.\n\n\\paragraph{Further related work.}\nOur problems deal with\nfinding (small) spanning connected subgraphs\nobeying some (connectivity) constraints,\nand thus can be seen as \nnetwork design problems~\\cite{KerivinM05}.\nMost related to our problems are \nSteiner multigraph problems~\\cite{RicheyP86}\nwith an algorithmic study~\\cite{Gassner10,LaiGSMCM11}\n(also in the context of wildlife corridor construction).\nRequiring small diameter \nappears also in the context of spanning trees~\\cite{RaviSMRR96}\nand Steiner forests~\\cite{DingQ20}.\nA weighted version of\n\\DgbpAcr[4]{}\nis proven to be \\cocl{NP}-hard with two different weights~\\cite{Plesnik81}.\nAs to wildlife crossing placement,\nmodels and approaches different to ours are studied~\\cite{LoraammD16,DownsHLAKO14}.\n\n\\decprob{\\prob{Connected \\gbp}{} (\\prob{Connect \\GBP}{})}{congbp}\n{An undirected graph~$G=(V,E)$,\na set~$\\mathcal{H}=\\{V_1,\\dots,V_r\\}$ of habitats \nwhere~$V_i\\subseteq V$ for all~$i\\in\\set{r}$, \nand an integer~$k\\in\\mathbb{N}_0$.}\n{Is there a subset~$F\\subseteq E$ with~$|F|\\leq k$ such that\nfor every~$i\\in\\set{r}$\nit holds that in~$G[F]$ exists a connected component containing~$V_i$?\n}\n\n\\noindent\n\\prob{Connect \\GBP}{}\nwith edge costs\nis also known as \\prob{Steiner Forest}~\\cite{Gassner10}\nand generalizes the well-known \\cocl{NP}-hard \\prob{Steiner Tree} problem. 
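\nFor concreteness, checking whether a candidate edge set~$F$ is feasible for \\prob{Connect \\GBP}{} amounts to verifying that every habitat lies inside a single connected component of~$G[F]$. The following union-find sketch (illustrative Python, not part of our results) makes this condition explicit.
\\begin{verbatim}\ndef is_connect_gbp_solution(edges_F, habitats):\n    # edges_F : candidate green bridges, given as pairs (u, v)\n    # habitats: iterable of vertex sets V_1, ..., V_r\n    parent = {}\n    def find(v):\n        parent.setdefault(v, v)\n        while parent[v] != v:\n            parent[v] = parent[parent[v]]  # path halving\n            v = parent[v]\n        return v\n    def union(u, v):\n        parent[find(u)] = find(v)\n    for u, v in edges_F:\n        union(u, v)\n    # every habitat must collapse to (at most) one union-find root\n    return all(len({find(v) for v in Vi}) <= 1 for Vi in habitats)\n\\end{verbatim}\nAny other linear-time connectivity test would do equally well; the sketch only serves to make the feasibility condition concrete before we turn to the hardness of finding such an~$F$.\n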
\nGassner~\\cite{Gassner10} proved \\prob{Steiner Forest} to be \\cocl{NP}-hard \neven if every so-called terminal net contains two vertices,\nif the graph is planar and has treewidth three,\nand if there are two differrent edge costs,\neach being upper-bounded linearly in the instance size.\nIt follows that \\prob{Connect \\GBP}{} is also \\cocl{NP}-hard in this case.\nBateni~\\textsl{et~al.}~\\cite{BateniHM11} proved that \\prob{Steiner Forest} is polynomial-time solvable on treewidth-two graphs \nand admits approximation schemes on planar and bounded-treewidth graphs.\n\nFrom a modeling perspective,\nsolutions are valid in which any habitat\nis scattered and \na habitat's patch is far away from all the others;\nthus\nanimals may need to take long walks through areas outside of their habitats.\nWith our models we avoid solutions with this property.\n\n\\section{Preliminaries}\n\\label{sec:prelims}\n\nLet~$\\mathbb{N}$ and~$\\mathbb{N}_0$ be the natural numbers with and without zero,\nrespectively.\nWe use basic definitions from graph theory~\\cite{Diestel} and\nparameterized algorithmics~\\cite{cygan2015parameterized}.\n\n\\paragraph*{Graph Theory.}\n\nLet~$G=(V,E)$ be an undirected graph with vertex set~$V$ and edge set~$E\\subseteq \\binom{V}{2}$.\nWe also denote by~$V(G)$ and~$E(G)$ the vertices and edges of~$G$,\nrespectively.\nFor~$F\\subseteq E$ let~$V(F)\\ensuremath{\\coloneqq} \\{v\\in V\\mid \\exists e\\in F :\\: v\\in e\\}$ and\n$G[F]\\ensuremath{\\coloneqq} (V(F),F)$.\nA path~$P$ is a graph with~$V(P) \\ensuremath{\\coloneqq} \\{v_1, \\ldots, v_n\\}$ and~$E(P) \\ensuremath{\\coloneqq} \\{\\{v_i,v_{i+1}\\} \\mid 1 \\le i < n \\}$.\nThe length of the path~$P$ is $|E(P)|$.\nThe distance~$\\dist_G(v,w)$ between vertices~$v,w \\in V(G)$ is the length of the shortest path between~$v$ and~$w$ in~$G$.\nThe diameter~$\\diam(G)$ is the length of longest shortest path over all vertex pairs.\nFor~$p\\in \\mathbb{N}$,\nthe graph~$G^p$ is the $p$-th power of~$G$ \ncontaining the vertex set~$V$ and edge set~$\\{\\{v,w\\}\\in \\binom{V}{2}\\mid \\dist_G(v,w)\\leq p\\}$.\nLet~$N_G(v)\\ensuremath{\\coloneqq} \\{w\\in V\\mid \\{v,w\\}\\in E\\}$ be the (open) neighborhood of~$v$,\nand $N_G[v]\\ensuremath{\\coloneqq} N_G(v)\\cup\\{v\\}$ be the closed neighborhood of~$v$.\nFor~$p\\in\\mathbb{N}$,\nlet~$N_G^p(v)\\ensuremath{\\coloneqq} \\{w\\in V\\mid \\{v,w\\}\\in E(G^p)\\}$ be the (open) $p$-neighborhood of~$v$,\nand $N_G^p[v]\\ensuremath{\\coloneqq} N_G^p(v)\\cup\\{v\\}$ be the closed $p$-neighborhood of~$v$.\nTwo vertices~$v,w\\in V$ are called twins if~$N_G(v)=N_G(w)$.\nThe (vertex) degree~$\\deg_G(v)\\ensuremath{\\coloneqq} |N_G(v)|$ of~$v$ is the number if its neighbors.\nThe maximum degree~$\\Delta(G)\\ensuremath{\\coloneqq} \\max_{v\\in V}\\deg_G(v)$ is the maximum over all (vertex) degrees.\n\n\\section{Connecting Habitats with a Patch at Short Reach}\n\\label{sec:rgbp}\n\nThe following problem ensures that any habitat patch can reach the other patches via patches of the same habitat and short strolls over ``foreign'' ground.\n\n\\decprob{\\RgbpTsc{} (\\RgbpAcr{})}{rgbp}\n{An undirected graph~$G=(V,E)$,\na set~$\\mathcal{H}=\\{V_1,\\dots,V_r\\}$ of habitats where~$V_i\\subseteq V$ for all~$i\\in\\set{r}$, \nand an integer~$k\\in\\mathbb{N}_0$.}\n{Is there a subset~$F\\subseteq E$ with~$|F|\\leq k$ such that\nfor every~$i\\in\\set{r}$\nit holds that~$G[F]^d[V_i]$ is connected?\n}\n\n\\noindent\nNote that if~$d$ is part of the input,\nthen~$\\prob{Connect \\GBP}{} \\leq \\prob{Reach 
\\prob{GBP}{}}$.\nWe will prove the following.\n\n\\begin{theorem}\n\\label{thm:rgbp}\n \\RgbpAcr{} is\n \\begin{compactenum}[(i)]\n \\item if~$d=1$, \\cocl{NP}-hard even on planar graphs or graphs with maximum degree four;\n \\item if~$d=2$, \\cocl{NP}-hard even on graphs with maximum degree four and~$r=2$ or graphs with diameter four and~$r=1$, and in~\\cocl{FPT}{} regarding~$k$;\n \\item if~$d\\geq 3$, \\cocl{NP}-hard and~\\W{1}-hard regarding~$k+r$.\n \\end{compactenum}\n Moreover, \\RgbpAcr{} admits an~$rd$-approximation of the minimum number of green bridges in~$\\O(mn+rnd)$ time.\n\\end{theorem}\n\n\\subsection{Approximation}\nThe approximation algorithm computes for every habitat~$V_i$ a spanning tree in~$G^d[V_i]$,\nand adds the edges of the corresponding paths to the solution~$F$.\nEach of the spanning trees then is a~$d$-approximation for just the one habitat,\nhence the union of the spanning trees is an~$rd$-approximation for all habitats.\n\n\\begin{lemma}\n \\label{lem:d-approx}\n For~$r=1$,\n \\RgbpAcr{} admits a~$d$-approximation \n of the minimum number of green bridges\n in~$\\O(mn)$ time.\n\\end{lemma}\n\n\\begin{proof}\n We start off by computing in~$\\O(mn)$ time the graph~$H \\ensuremath{\\coloneqq} G^d$ as well as for every edge~$e = \\{u, v\\} \\in E(H)$ the corresponding path~$P_e$ from~$u$ to~$v$ of length at most~$d$ in~$G$.\n If~$H[V_1]$ is not connected, then return~\\textnormal{\\texttt{no}}{}.\n If not,\n then compute a minimum spanning tree~$T \\subseteq H[V_1]$ in~$\\O(n \\log n)$ time.\n For each edge~$e = \\{u, v\\} \\in E(T)$ compute in~$\\O(m)$ time the corresponding path~$P_e \\subseteq G$ from~$u$ to~$v$ of length at most~$d$.\n Finally, return the set~$F \\ensuremath{\\coloneqq} \\bigcup_{e \\in E(T)} E(P_e)$, computable in~$\\O(m)$ time.\n Clearly, $G[F]^d[V_1]$ is connected.\n As a minimum solution~$F^*$ has at least~$|V_1|-1$ edges,\n and every of the paths~$P_e$ consists of at most~$d$ edges,\n \\[\n |F| = |\\bigcup_{e\\in E(T)} E(P_e)| \\le \\sum_{e \\in E(T)} E(P_e) \\le (|V_1|-1)\\cdot d \\le d|F^*|.\\qedhere\n \\]\n\\end{proof}\n\nWith the~$d$-approximation algorithm for a habitat at hand, we present our~$rd$-approximation algorithm.\n\n\\begin{proposition}\n\t\\RgbpAcr{} admits an~$rd$-approximation \n\tof the minimum number of green bridges \n\tin $\\O(mn + rnd)$ time.\n\\end{proposition}\n\\begin{proof}\n\tWe initially compute the shortest paths between all vertex pairs in~$G$ in~$O(mn)$ time.\n\tWe obtain the graph~$H \\ensuremath{\\coloneqq} G^d$ as a byproduct.\n\tIf for some~$i \\in \\set{r}$, $H[V_i]$ is not connected,\n\tthen return~\\textnormal{\\texttt{no}}{}.\n\tIf not, then \n\tcompute for each~$i \\in \\set{r}$ a spanning tree~$T_i$ of~$H[V_i]$, or return~\\textnormal{\\texttt{no}}{} if~$H[V_i]$ is not connected.\n\tLet~$F_i \\subseteq E(G)$ be the edge set corresponding to~$T_i$ as in the proof of \\cref{lem:d-approx}.\n\tAs~$G[F_i]^d[V_i]$ is connected,\n\t$F \\ensuremath{\\coloneqq} \\bigcup_{i=1}^r F_i$ is a solution.\n\n\tNote that each of the~$r$ spanning trees~$T_i$ contain at most~$n$ edges,\n\tand for each of these edges~$e \\in F_i$ we can determine the corresponding paths~$P_e \\subseteq G$\n\tof length at most~$d$ in~$\\O(d)$ time.\n\tWe obtain an overall running time of~$\\O(mn + rnd)$.\n\n\tAs for the approximation ratio, let~$F^*$ be a minimum solution, and for every~$i \\in \\set{r}$ let~$F^*_i \\in E(G)$ be a minimum-size edge set such that~$G[F^*_i]^d[V_i]$ is connected.\n\tAs~$|F^*| \\ge \\max_{i\\in\\set{r}} 
|F^*_i|$,\n\twe have\n\t\\[\n\t\t|F| \\le \\sum_{i=1}^r |F_i| \\le \\sum_{i=1}^r d|F^*_i| \\le r \\cdot d|F^*|. \\qedhere\n\t\\]\n\\end{proof}\n\n\\subsection{When a next habitat is directly reachable (\\texorpdfstring{\\boldmath{$d=1$}}{d=1})}\n\\label{ssec:1rgbp}\n\nSetting~$d=1$ may reflect perfect knowledge about the habitats.\nIn this case,\nwe want that in~$G[F]$, \neach habitat~$V_i$ forms a connected component.\n\n\\begin{proposition}\n \\label{prop:1rgbp}\n \\RgbpAcr[1]{} is \\cocl{NP}-hard even on graphs of maximum degree four\n and on series-parallel graphs.\n\\end{proposition}\nWe remark that series-parallel graphs are also planar.\nWe first present the construction for showing hardness for graphs of degree at most four.\n\n\\begin{construction}\n \\label{constr:1rgbp}\n For an instance~$\\mathcal{I}=(G, k)$ of \\prob{3-Regular Vertex Cover} with~$G=(V, E)$,\n $V=\\set{n}$,\n and~$E=\\{e_1,\\dots,e_m\\}$,\n construct an instance~$\\mathcal{I}'\\ensuremath{\\coloneqq} (G',\\mathcal{H},k')$\n where\n $\\mathcal{H}\\ensuremath{\\coloneqq} \\mathcal{V}\\cup\\mathcal{W}\\cup\\mathcal{Z}$, \n $\\mathcal{V}\\ensuremath{\\coloneqq} \\{V_1,\\dots,V_n\\}$,\n $\\mathcal{W}\\ensuremath{\\coloneqq} \\{W_1,\\dots,W_n\\}$,\n $\\mathcal{Z}\\ensuremath{\\coloneqq} \\{Z_1,\\dots,Z_m\\}$,\n and~$k'\\ensuremath{\\coloneqq} 4m+k$, as follows\n (see~\\cref{fig:1rgbp} for an illustration).\n \\begin{figure}[t]\n \\centering\n \\begin{tikzpicture}\n \\contourlength{0.09em}\n\n \\def1.1{1}\n \\def0.9{1}\n \\def0.75{0.75}\n \\def1.5*\\yr{1}\n \\def25{15}\n \\def155{165}\n \\tikzstyle{xnode}=[circle,fill,scale=0.6,draw,color=cyan!50!green]\n \\tikzstyle{xnodex}=[label={[xshift=-0.4175*1.1 cm]0:$\\cdots$},color=cyan!50!green]\n \\tikzstyle{xnodef}=[circle,scale=0.6,draw,color=cyan!50!green,fill=white]\n \\tikzstyle{xedge}=[thick,-,color=cyan!50!green]\n \\tikzstyle{xedgem}=[,-,color=cyan!50!green]\n \\tikzstyle{xedgef}=[very thick,-,color=cyan!50!green]\n \n \\foreach \\x\/\\y in {1\/xnode,2\/xnodex,3\/xnode,4\/xnodex,5\/xnode,6\/xnodex,7\/xnode}{\n \\node (a\\x) at (\\x*0.75*1.1-5*1.1,0)[\\y]{};\n \\node (b\\x) at (\\x*0.75*1.1-5*1.1,-0.9*1.5*\\yr)[\\y]{};\n }\n \\draw[xedge,white] (a1) to node[midway]{\\color{cyan!50!green}$e_1$}(b1);\n \\draw[xedge,white] (a3) to node[midway]{\\color{cyan!50!green}$e_s$}(b3);\n \\draw[xedge,white] (a5) to node[midway]{\\color{cyan!50!green}$e_t$}(b5);\n \\draw[xedge,white] (a7) to node[midway]{\\color{cyan!50!green}$e_m$}(b7);\n\n \\foreach \\x\/\\y in {1\/xnodef,2\/xnodex,3\/xnodef,4\/xnodex,5\/xnodef,6\/xnodex,7\/xnodef}{\n \\node (x\\x) at (9*0.75*1.1+\\x*0.75*1.1-5*1.1,0)[\\y]{};\n \\node (y\\x) at (9*0.75*1.1+\\x*0.75*1.1-5*1.1,-0.9*1.5*\\yr)[\\y]{};\n \\draw[xedge] (x\\x) to (y\\x);\n }\n \\draw[xedge] (x1) to node[midway,left]{$1$}(y1);\n \\draw[xedge] (x3) to node[midway,left]{$i$}(y3);\n \\draw[xedge] (x5) to node[midway,left]{$j$}(y5);\n \\draw[xedge] (x7) to node[midway,left]{$n$}(y7);\n\n \\foreach \\x in {2,4,6}{\n \\node at (a\\x.north east)[anchor=south,rotate=25]{$\\cdots$};\n \\node at (b\\x.south east)[anchor=north,rotate=-25]{$\\cdots$};\n }\n \\foreach \\x in {4}{\n \\node at (x\\x.north west)[anchor=south,rotate=-25]{$\\cdots$};\n \\node at (y\\x.south west)[anchor=north,rotate=25]{$\\cdots$};\n }\n \\foreach \\x\/\\y in {1\/7,1\/6,7\/2,7\/4}{\n \\draw[xedgem,color=red!50!black] (a\\x) to [out=25,in=155](x\\y);\n \\draw[xedgem,color=red!50!black] (b\\x) to [out=-25,in=-155](y\\y);\n }\n \\foreach \\x\/\\y in {3\/3,3\/5,5\/1,5\/3}{\n 
\\draw[xedgef,color=red!50!black] (a\\x) to [out=25,in=155](x\\y);\n \\draw[xedgef,color=red!50!black] (b\\x) to [out=-25,in=-155](y\\y);\n }\n \n \\newcommand{\\ltb}[3]{\n \\contourlength{0.09em}\n \\node at (#1)[label={[align=center,font=\\scriptsize,color=black]\\contour*{white}{#2}},label={[align=center,font=\\scriptsize,color=black]-90:\\contour*{white}{#3}}]{};\n }\n \\ltb{a5}{~~~$\\in V_1,V_i,Z_t$}{};\n \\ltb{b5}{}{~~~$\\in W_1,W_i,Z_t$}{};\n \n \\ltb{a3}{$\\in V_i,V_j,Z_s$~~~}{};\n \\ltb{b3}{}{$\\in W_i,W_j,Z_s$~~~}{};\n \n \\ltb{x3}{$\\in V_i,Z_s,Z_t$~~~}{};\n \\ltb{y3}{}{$\\in W_i,Z_s,Z_t$~~~}{};\n \\ltb{x5}{$\\in V_j,Z_s$}{};\n \\ltb{y5}{}{$\\in W_j,Z_s$}{};\n \n \\end{tikzpicture}\n \\caption{Illustration to~\\cref{constr:1rgbp} for~\\RgbpAcr[1]{}. \n Here,\n e.g.,~$e_s=\\{i,j\\}$ and~$e_t=\\{1,i\\}$.\n Every solution (if existent) contains all red-colored edges~(\\Cref{obs:1rgbp}).}\n \\label{fig:1rgbp}\n \\end{figure}\n Construct vertex sets~$V_E\\ensuremath{\\coloneqq} \\{x_i,y_i\\mid e_i\\in E\\}$ \n and~$V_G\\ensuremath{\\coloneqq} \\{v_i,w_i\\mid i\\in V\\}$.\n Next,\n construct edge sets~$E^*\\ensuremath{\\coloneqq} \\bigcup_{i\\in V} \\{\\{v_i,x_j\\},\\{w_i,y_j\\}\\mid i\\in e_j\\}$\n and~$E'\\ensuremath{\\coloneqq} \\{\\{v_i,w_i\\}\\mid i\\in V\\}\\cup E^*$.\n Finally, \n construct habitats~$V_i\\ensuremath{\\coloneqq} \\{v_i\\}\\cup\\bigcup_{i\\in e_j} \\{x_j\\}$ and\n $W_i\\ensuremath{\\coloneqq} \\{w_i\\}\\cup\\bigcup_{i\\in e_j} \\{y_j\\}$ \n for every~$i\\in\\set{n}$,\n and~$Z_j\\ensuremath{\\coloneqq} \\{x_j,y_j\\}\\cup \\bigcup_{i\\in e_j} \\{v_i,w_i\\}$ for every~$j\\in\\set{m}$.\n\\end{construction}\n\n\\begin{observation}\n \\label{obs:1rgbp}\n Let~$\\mathcal{I}'$ be a \\textnormal{\\texttt{yes}}-instance.\n Then\n every solution~$F$ contains all edges in~$E^*$.\n\\end{observation}\n\n\\begin{proof}\n Observe that by construction,\n for every~$S\\in\\mathcal{V}\\cup\\mathcal{W}$,\n $G[S]$ is a star with center in~$V_G$.\n Hence,\n all edges in~$G[S]$ must be contained in every solution.\n Since~$E^* = \\bigcup_{S\\in\\mathcal{V}\\cup\\mathcal{W}} E(G[S])$,\n the claim follows.\n\\end{proof}\n\n\\begin{lemma}\n \\label{lem:1rgbp}\n Let~$\\mathcal{I}'$ be the instance obtained from an instance~$\\mathcal{I}$ using~\\cref{constr:1rgbp}.\n Then,\n $\\mathcal{I}$ is a \\textnormal{\\texttt{yes}}-instance if and only if~$\\mathcal{I}'$ is a \\textnormal{\\texttt{yes}}-instance.\n\\end{lemma}\n\n\\begin{proof}\n $(\\Rightarrow)\\quad${}\n Let~$S\\subseteq V$ be a vertex cover of~$G$ of size~$k$.\n We claim that~$F\\ensuremath{\\coloneqq} E^*\\cup \\bigcup_{i\\in S}\\{\\{x_i,y_i\\}\\}$\n is a solution to~$\\mathcal{I}'$.\n Note that~$|F|=4m+k$.\n Observe that~$G[F][T]$ is connected for every~$T\\in\\mathcal{V}\\cup\\mathcal{W}$.\n Suppose that there is~$Z_\\ell$ such that~$G[F][Z_\\ell]$\n is not connected.\n Let~$e_\\ell=\\{i,j\\}$.\n Since~$E^*\\subseteq F$,\n none of~$\\{v_i,w_i\\}$ and~$\\{v_j,w_j\\}$ is contained in~$F$.\n It follows that~$\\{i,j\\}\\cap S=\\emptyset$,\n contradicting the fact that~$S$ is a vertex cover.\n \n $(\\Leftarrow)\\quad${}\n Let~$F$ be a solution to~$\\mathcal{I}'$.\n We know that~$E^*\\subseteq V$.\n We claim that~$S\\ensuremath{\\coloneqq} \\{i\\in V\\mid \\{x_i,y_i\\}\\in F\\}$\n is a vertex cover of~$G$.\n Note that~$|S|\\leq k$.\n Suppose not,\n that is,\n there is an~$e_\\ell=\\{i,j\\}$ with~$\\{i,j\\}\\cap S=\\emptyset$.\n Then,\n $G[F][Z_\\ell]$ is not connected,\n a contradiction.\n\\end{proof}\n\nThe construction for showing 
hardness for series-parallel graphs is very similar:\nWe replace the edges in~$E^*$ by two stars.\n\n\\begin{construction}\n \\label{constr:1rgbp-planar}\n For an instance~$\\mathcal{I}=(G,k)$ of~\\prob{3-Regular Vertex Cover}\n with~$G=(V,E)$, and~$V=\\set{n}$ and~$E=\\{e_1,\\dots,e_m\\}$,\n construct an instance~$\\mathcal{I}'\\ensuremath{\\coloneqq} (G',\\mathcal{H},k')$ with\n $\\mathcal{H}=\\{S, T, Z_1,\\dots,Z_m\\}$\n and~$k'\\ensuremath{\\coloneqq} 2m+2n+k$ as follows\n (see~\\cref{fig:1rgbp} for an illustration).\n\\begin{figure}[t]\n \\centering\n \\begin{tikzpicture}\n \\contourlength{0.09em}\n\n \\def1.1{1}\n \\def0.9{1}\n \\def0.75{0.75}\n \\def1.5*\\yr{1}\n \\def25{25}\n \\def155{155}\n \\tikzstyle{xnode}=[circle,fill,scale=0.6,draw,color=cyan!50!green]\n \\tikzstyle{xnodex}=[label={[xshift=-0.55*1.1 cm]0:$\\cdots$},color=cyan!50!green]\n \\tikzstyle{xnodef}=[circle,scale=0.6,draw,color=cyan!50!green,fill=white]\n \\tikzstyle{xedge}=[thick,-,color=cyan!50!green]\n\n \\foreach \\x\/\\y in {1\/xnode,2\/xnodex,3\/xnode,4\/xnodex,5\/xnode,6\/xnodex,7\/xnode}{\n \\node (a\\x) at (\\x*0.75*1.1,0)[\\y]{};\n \\node (b\\x) at (\\x*0.75*1.1,-0.9*1.5*\\yr)[\\y]{};\n }\n \\draw[xedge,white] (a1) to node[midway]{\\color{cyan!50!green}$e_1$}(b1);\n \\draw[xedge,white] (a3) to node[midway]{\\color{cyan!50!green}$e_p$}(b3);\n \\draw[xedge,white] (a5) to node[midway]{\\color{cyan!50!green}$e_q$}(b5);\n \\draw[xedge,white] (a7) to node[midway]{\\color{cyan!50!green}$e_m$}(b7);\n\n \\foreach \\x\/\\y in {1\/xnodef,2\/xnodex,3\/xnodef,4\/xnodex,5\/xnodef,6\/xnodex,7\/xnodef}{\n \\node (x\\x) at (9*0.75*1.1+\\x*0.75*1.1,0)[\\y]{};\n \\node (y\\x) at (9*0.75*1.1+\\x*0.75*1.1,-0.9*1.5*\\yr)[\\y]{};\n \\draw[xedge] (x\\x) to (y\\x);\n }\n \\draw[xedge] (x1) to node[midway,left]{$1$}(y1);\n \\draw[xedge] (x3) to node[midway,left]{$i$}(y3);\n \\draw[xedge] (x5) to node[midway,left]{$j$}(y5);\n \\draw[xedge] (x7) to node[midway,left]{$n$}(y7);\n\n \n \\foreach \\x in {2,4,6}{\n \\node at (a\\x.north east)[anchor=south,rotate=25]{$\\cdots$};\n \\node at (b\\x.south east)[anchor=north,rotate=-25]{$\\cdots$};\n }\n \\foreach \\x in {4}{\n \\node at (x\\x.north west)[anchor=south,rotate=-25]{$\\cdots$};\n \\node at (y\\x.south west)[anchor=north,rotate=25]{$\\cdots$};\n }\n \\node (a) at (8.5*0.75*1.1,1*0.9)[xnode]{};\n \\node (b) at (8.5*0.75*1.1,-2*0.9)[xnode]{};\n \\foreach \\x\/\\y in {1,...,7}{\n \\draw[xedge,color=red!50!black] (a\\x) to (a);\n \\draw[xedge,color=red!50!black] (x\\x) to (a);\n \\draw[xedge,color=red!50!black] (b\\x) to (b);\n \\draw[xedge,color=red!50!black] (y\\x) to (b);\n }\n \n \\newcommand{\\ltb}[3]{\n \\node at (#1)[label={[align=center,font=\\scriptsize,color=black]#2},label={[align=center,font=\\scriptsize,color=black]-90:#3}]{};\n }\n \\ltb{a3}{\\contour*{white}{$\\in Z_p,S$~~~}}{};\n \\ltb{b3}{}{\\contour*{white}{$\\in Z_p,T$~~~}}{};\n \n \\ltb{a5}{\\contour*{white}{~~~$\\in Z_q,S$}}{};\n \\ltb{b5}{}{\\contour*{white}{~~~$\\in Z_q,T$}}{};\n \n \\ltb{x3}{\\contour*{white}{$\\in Z_p,Z_q,S$}}{};\n \\ltb{y3}{}{\\contour*{white}{$\\in Z_p,Z_q,T$}}{};\n \\ltb{x5}{\\contour*{white}{$\\in Z_q,S$}}{};\n \\ltb{y5}{}{\\contour*{white}{$\\in Z_q,T$}}{};\n \n \\ltb{a}{\\contour*{white}{$\\in S,Z_1,\\dots,Z_m$}}{$s$};\n \\ltb{b}{$t$}{\\contour*{white}{$\\in T,Z_1,\\dots,Z_m$}}{};\n \n \\end{tikzpicture}\n \\label{fig:1rgbp-planar}\n \\caption{Illustration to~\\cref{constr:1rgbp-planar} for \\RgbpAcr[1]{} on planar series-parallel graphs. 
\n In this example,\n there are e.g.~$e_p=\\{1,i\\}$ and~$e_q=\\{i,j\\}$.\n In case of a \\textnormal{\\texttt{yes}}-instance,\n the red-colored edges are in every solution~(\\cref{obs:1rgbp-planar}).\n ($k=2n+2m+k$)}\n \\end{figure}\n \n Add to~$G'$ the vertex sets~$V_E\\ensuremath{\\coloneqq} \\{x_j,y_j\\mid e_j\\in E\\}$,\n $V_G\\ensuremath{\\coloneqq} \\{v_i,w_i\\mid i\\in V\\}$,\n and~$V_C\\ensuremath{\\coloneqq} \\{s, t\\}$,\n and the edge sets\n $E_V\\ensuremath{\\coloneqq} \\{\\{v_i,w_i\\}\\mid i\\in V\\}$\n and~$E^*\\ensuremath{\\coloneqq} \\{\\{s, v_i\\}, \\{t, w_i\\} \\mid i \\in V\\} \\cup \\{\\{s, x_j\\}, \\{t, y_j\\} \\mid e_j \\in E\\}$.\n Finally,\n let~$S\\ensuremath{\\coloneqq} \\{s\\}\\cup\\bigcup_{i \\in V} \\{v_i\\} \\cup \\bigcup_{e_j\\in E} \\{x_j\\}$,\n let~$T\\ensuremath{\\coloneqq} \\{t\\}\\cup\\bigcup_{i \\in V} \\{w_i\\} \\cup \\bigcup_{e_j\\in E} \\{y_j\\}$,\n and for~$e_j \\in E$ let~$Z_j\\ensuremath{\\coloneqq}\\{x_j,y_j\\}\\cup\\bigcup_{i\\in e_j}\\{v_i, w_i\\}\\cup\\{s,t\\}$.\n\\end{construction}\n\n\\begin{observation}\n\tThe graph~$G'$ constructed in \\cref{constr:1rgbp-planar} is planar and series-parallel.\n\\end{observation}\n\n\\begin{observation}\n \\label{obs:1rgbp-planar}\n Let~$\\mathcal{I}'$ be a \\textnormal{\\texttt{yes}}-instance.\n Then\n every solution~$F$ contains all edges in~$E^*$.\n\\end{observation}\n\n\\begin{proof}\n Observe that,\n by construction,\n $G[S]$ is a star with center~$s$\n and~$G[T]$ is a star with center~$t$.\n Hence,\n all edges in~$G[S]$ and in~$G[T]$ are contained in every solution.\n Since~$E^* = E(G[S]) \\cup E(G[T])$,\n the claim follows.\n\\end{proof}\n\n\n\\begin{lemma}\n \\label{lem:1rgbp-planar}\n Let~$\\mathcal{I}'$ be the instance obtained from an instance~$\\mathcal{I}$ using~\\cref{constr:1rgbp-planar}.\n Then,\n $\\mathcal{I}$ is a \\textnormal{\\texttt{yes}}-instance if and only if~$\\mathcal{I}'$ is a \\textnormal{\\texttt{yes}}-instance.\n\\end{lemma}\n\n\\begin{proof}\n $(\\Rightarrow)\\quad${}\n Let~$S\\subseteq V$ be a vertex cover of~$G$ of size~$k$.\n We claim that~$F\\ensuremath{\\coloneqq} E^*\\cup \\bigcup_{i\\in S}\\{\\{x_i,y_i\\}\\}$\n is a solution to~$\\mathcal{I}'$.\n Note that~$|F|=2m+2n+k$.\n Observe that~$G[F][S]$ and~$G[F][T]$ are connected.\n Suppose that there is~$Z_\\ell$ such that~$G[F][Z_\\ell]$\n is not connected.\n Let~$e_\\ell=\\{i,j\\}$.\n Since~$E^*\\subseteq F$,\n none of~$\\{v_i,w_i\\}$ and~$\\{v_j,w_j\\}$ is contained in~$F$.\n It follows that~$\\{i,j\\}\\cap S=\\emptyset$,\n contradicting the fact that~$S$ is a vertex cover.\n \n $(\\Leftarrow)\\quad${}\n Let~$F$ be a solution to~$\\mathcal{I}'$.\n We know that~$E^*\\subseteq V$.\n We claim that~$S\\ensuremath{\\coloneqq} \\{i\\in V\\mid \\{x_i,y_i\\}\\in F\\}$\n is a vertex cover of~$G$.\n Note that~$|S|\\leq k$.\n Suppose not,\n that is,\n there is an~$e_\\ell=\\{i,j\\}$ with~$\\{i,j\\}\\cap S=\\emptyset$.\n Then,\n $G[F][Z_\\ell]$ is not connected,\n a contradiction.\n\\end{proof}\n\n\\subsection{One hop between habitat patches (\\texorpdfstring{\\boldmath{$d=2$}}{d=2})}\n\\label{ssec:2rgbp}\n\nWe prove that~\\RgbpAcr[2]{}\nis already~\\cocl{NP}-complete even if there are two habitats and the graph has maximum degree four,\nor if there is only one habitat.\n\n\\begin{proposition}\n \\label{prop:2rgbp}\n \\RgbpAcr{} with~$d\\geq 2$ is \\cocl{NP}-complete \n even if\n (i) $r=2$ and~$\\Delta\\leq 4$\n or\n (ii) $r=1$ and the input graph has diameter~$2d$.\n\\end{proposition}\n\n\\noindent\nFor the sake of presentation,\nwe 
prove~\\cref{prop:2rgbp}(i) for~$d=2$.\nAfterwards,\nwe briefly explain how to adapt the proof for~$d>2$ and for \\cref{prop:2rgbp}(ii).\n\n\\begin{construction}\n \\label{constr:2rgbp}\n For an instance~$\\mathcal{I}=(G, k)$ of \\prob{3-Regular Vertex Cover} with~$G=(V, E)$ and~$V=\\set{n}$\n construct an instance of~\\RgbpAcr[2]{}\n with graph~$G'=(V',E')$, \n habitat sets~$V_1$ and~$V_2$,\n and integer~$k'\\ensuremath{\\coloneqq} |E|+(n-1)+k$ as follows \n (see~\\cref{fig:2rgbp}(a) for an illustration).\n \\begin{figure}[t]\n \\centering\n \\begin{tikzpicture}\n\n \\def1.1{0.67}\n \\def0.9{0.7}\n \\def1.5*\\yr{1.5*0.9}\n \\def0.125{0.125}\n \\tikzstyle{xnode}=[circle,fill,scale=0.6,draw,color=cyan!50!green]\n \\tikzstyle{xnodef}=[circle,scale=0.6,draw,color=cyan!50!green,fill=white]\n \\tikzstyle{xnodex}=[label={[xshift=-0.64*1.1 cm]0:$\\cdots$},color=cyan!50!green]\n \\tikzstyle{xedge}=[thick,-,color=cyan!50!green]\n\n \\node at (-4.25*1.1,1*0.9)[]{(a)};\n \n \\foreach \\x\/\\y in {1\/xnode,2\/xnodex,3\/xnode,4\/xnodex,5\/xnode,6\/xnodex,7\/xnode,8\/xnodex,9\/xnode}{\n \\node (e\\x) at (\\x*1.1-5*1.1,0)[\\y]{};\n }\n \\draw[dotted,rounded corners,draw] ($(e1.south west)-(0.125,0.125)$) rectangle ($(e9.north east)+(0.125,0.125)$);\n \\foreach \\x\/\\y in {1\/xnodef,2\/xnodex,3\/xnodef,4\/xnodex,5\/xnodef,6\/xnodex,7\/xnodef}{\n \\node (v\\x) at (\\x*1.1-4*1.1,-1*1.5*\\yr)[\\y]{};\n }\n \\draw[dotted,rounded corners,draw] ($(v1.south west)-(0.125,0.125)$) rectangle ($(v7.north east)+(0.125,0.125)$);\n \\foreach \\x\/\\y in {1\/xnode,2\/xnodex,3\/xnode,4\/xnodex,5\/xnode,6\/xnodex,7\/xnode}{\n \\node (z\\x) at (\\x*1.1-4*1.1,-2*1.5*\\yr)[\\y]{};\n }\n \\draw[dotted,rounded corners,draw] ($(z1.south west)-(0.125,0.125)$) rectangle ($(z7.north east)+(0.125,0.125)$);\n \\foreach \\x\/\\y in {3\/3,3\/5,5\/3,5\/2,7\/5,7\/6}{\n \\draw[xedge] (e\\x) to (v\\y);\n }\n \\foreach \\x\/\\y in {2\/1,8\/7}{\n \\draw[draw=none] (e\\x) to node[midway]{$\\vdots$}(v\\y);\n }\n \\foreach \\x in {1,...,7}{\n \\draw[xedge] (z\\x) to (v\\x);\n }\n \\foreach \\x\/\\y in {1\/2,2\/3,3\/4,4\/5,5\/6,6\/7}{\n \\draw[xedge] (z\\x) to (z\\y);\n }\n \n \\contourlength{0.09em}\n \\newcommand{\\ltb}[3]{\n \\node at (#1)[label={[align=center,font=\\footnotesize,color=black]#2},label={[align=center,font=\\scriptsize,color=black]-90:\\contour*{white}{#3}}]{};\n }\n \n \\ltb{e3}{\\footnotesize$e=$\\\\\\footnotesize$\\{i,j\\}$}{$\\in V_1$};\n \\ltb{e5}{\\footnotesize$e'=$\\\\\\footnotesize$\\{i,j'\\}$}{$\\in V_1$};\n \\ltb{e7}{\\footnotesize$e''=$\\\\\\footnotesize$\\{i',j\\}$}{$\\in V_1$};\n \n \\ltb{z3}{\\contour*{white}{$x_i$}}{$\\in V_1,V_2$};\n \\ltb{z5}{\\contour*{white}{$x_j$}}{$\\in V_1,V_2$};\n \n \\ltb{v3}{$v_i$}{};\n \\ltb{v5}{$v_j$}{};\n \\end{tikzpicture}\n \\hfill\n \\begin{tikzpicture}\n\n \\def1.1{0.67}\n \\def0.9{0.7}\n \\def1.5*\\yr{1.5*0.9}\n \\def0.125{0.125}\n \\tikzstyle{xnode}=[circle,fill,scale=0.6,draw,color=cyan!50!green]\n \\tikzstyle{xnodef}=[circle,scale=0.6,draw,color=cyan!50!green,fill=white]\n \\tikzstyle{xnodex}=[label={[xshift=-0.64*1.1 cm]0:$\\cdots$},color=cyan!50!green]\n \\tikzstyle{xedge}=[thick,-,color=cyan!50!green]\n\n \\node at (-4.25*1.1,1*0.9)[]{(b)};\n \n \\foreach \\x\/\\y in {1\/xnode,2\/xnodex,3\/xnode,4\/xnodex,5\/xnode,6\/xnodex,7\/xnode,8\/xnodex,9\/xnode}{\n \\node (e\\x) at (\\x*1.1-5*1.1,0)[\\y]{};\n }\n \\draw[dotted,rounded corners,draw] ($(e1.south west)-(0.125,0.125)$) rectangle ($(e9.north east)+(0.125,0.125)$);\n \\foreach \\x\/\\y in 
{1\/xnodef,2\/xnodex,3\/xnodef,4\/xnodex,5\/xnodef,6\/xnodex,7\/xnodef}{\n \\node (v\\x) at (\\x*1.1-4*1.1,-1*1.5*\\yr)[\\y]{};\n }\n \\draw[dotted,rounded corners,draw] ($(v1.south west)-(0.125,0.125)$) rectangle ($(v7.north east)+(0.125,0.125)$);\n \\node (x) at (0*1.1,-2*1.5*\\yr)[xnodef]{};\n\n\n \\foreach \\x\/\\y in {3\/3,3\/5,5\/3,5\/2,7\/5,7\/6}{\n \\draw[xedge] (e\\x) to (v\\y);\n }\n \\foreach \\x\/\\y in {2\/1,8\/7}{\n \\draw[draw=none] (e\\x) to node[midway]{$\\vdots$}(v\\y);\n }\n \\foreach \\x in {1,...,7}{\n \\draw[xedge] (x) to (v\\x);\n }\n \n \\contourlength{0.09em}\n \\newcommand{\\ltb}[3]{\n \\node at (#1)[label={[align=center,font=\\footnotesize,color=black]#2},label={[align=center,font=\\scriptsize,color=black]-90:\\contour*{white}{#3}}]{};\n }\n \n \\ltb{e3}{\\footnotesize$e=$\\\\\\footnotesize$\\{i,j\\}$}{$\\in V_1$};\n \\ltb{e5}{\\footnotesize$e'=$\\\\\\footnotesize$\\{i,j'\\}$}{$\\in V_1$};\n \\ltb{e7}{\\footnotesize$e''=$\\\\\\footnotesize$\\{i',j\\}$}{$\\in V_1$};\n \n \\ltb{x}{\\contour*{white}{$x$}}{$\\in V_1$};\n \n \\ltb{v3}{$v_i$}{};\n \\ltb{v5}{$v_j$}{};\n \\end{tikzpicture}\n \\caption{Illustration for~\\RgbpAcr[2]{} with \n (a) $r=2$ and~$\\Delta=4$ ($k'=m+(n-1)+k$) and\n (b) $r=1$ ($k'=m+k$).}\n \\label{fig:2rgbp}\n\\end{figure}\n\n Add the vertex set~$V_E\\ensuremath{\\coloneqq} \\{v_e\\mid e\\in E\\}$\n and add~$v_e$ with $e=\\{i,j\\}\\in E$ to habitat~$V_1$.\n Next,\n add the vertex sets~$V_G=\\{v_i\\mid i\\in V\\}$,\n and connect each~$v_i$ with all edge-vertices corresponding to an edge incident with~$i$,\n i.e.,\n add the edge set~$E_G\\ensuremath{\\coloneqq} \\bigcup_{i\\in V}\\{\\{v_i,v_e\\}\\mid i\\in e\\}$.\n Next,\n add the vertex set~$V_X\\ensuremath{\\coloneqq} \\{x_i\\mid i\\in V\\}$,\n connect each~$x_i$ with~$v_i$,\n and add~$x_i$ to~$V_1,V_2$.\n Finally,\n add the edge set~$\\{\\{x_i,x_{i+1}\\}\\mid i\\in\\set{n-1}\\}$.\n\\end{construction}\n\n\\begin{observation}\n \\label{obs:2rgbp}\n Let~$\\mathcal{I}=(G,k)$ be an instance of \\prob{3-Regular Vertex Cover} and let~$\\mathcal{I}'=(G',\\{V_1,V_2\\},k')$ be the instance obtained from~$\\mathcal{I}$ using~\\cref{constr:2rgbp}.\n If~$\\mathcal{I}'$ is a \\textnormal{\\texttt{yes}}-instance,\n then every solution contains all edges in~$G[V_X]$.\n\\end{observation}\n\n\\begin{proof}\n Suppose not,\n and let~$F$ be a solution without some edge~$\\{x_i,x_{i+1}\\}$.\n Note that in~$G-\\{\\{x_i, x_{i+1}\\}\\}$, the distance between~$x_i$ and~$x_{i+1}$ is at least four;\n thus~$G[F]^2[V_X] = G[F]^2[V_2]$ is not be connected.\n A contradiction.\n\\end{proof}\n\n\\begin{lemma}\n \\label{lem:2rgbp:edinc}\n Let~$\\mathcal{I}=(G,k)$ be an instance of \\prob{3-Regular Vertex Cover} and let~$\\mathcal{I}'=(G',f,k')$ be the instance obtained from~$\\mathcal{I}$ using~\\cref{constr:2rgbp}.\n If~$\\mathcal{I}'$ is a \\textnormal{\\texttt{yes}}-instance,\n then there is a solution~$F\\subseteq E(G')$ such that~$\\deg_{G'[F]}(v_e)=1$ for all~$e\\in E(G)$.\n\\end{lemma}\n\n\\begin{proof}\n Clearly, in every solution,\n we have~$\\deg_{G'[F]}(v_e)\\geq 1$.\n Let~$F$ be a minimum solution with a minimum number of edges incident to vertices in~$\\{v_e \\mid e\\in E\\}$.\n Suppose that there is at least one~$e=\\{i,j\\}\\in E$ such that~$\\deg_{G'[F]}(v_e)=2$, that is, $\\{v_e, v_i\\}, \\{v_e, v_j\\} \\in F$.\n Since~$F$ is a solution, there is a path~$P$ in~$G'[F]$ from~$v_e$ to some~$x_i$.\n Let~$\\{v_e,v_i\\}$ be the first edge on this path.\n Let~$F'\\ensuremath{\\coloneqq} (F 
\\setminus\\{v_e,v_j\\})\\cup\\{v_j,x_j\\}$.\n We claim that~$F'$ is a solution,\n yielding a contradiction to the fact that~$F$ is a solution with a minimum number of edges incident with vertices in~$V_E$.\n\n Only a vertex~$v_{e'}$ can be disconnected from any~$V_X$ by removing~$\\{v_e,v_j\\}$ from~$F$.\n This vertex cannot be on the path~$P$,\n and hence is connected to~$v_e$ via edge~$\\{v_e,v_j\\}$.\n Since now edge~$\\{v_j,x_j\\}$ is present,\n $v_{e'}$ is again connected to~$V_X$.\n\\end{proof}\n\n\\begin{lemma}\n \\label{lem:2rgbp:cor}\n Let~$\\mathcal{I}=(G,k)$ be an instance of \\prob{3-Regular Vertex Cover} and let~$\\mathcal{I}'=(G',\\{V_1,V_2\\},k')$ be the instance obtained from~$\\mathcal{I}$ using~\\cref{constr:2rgbp}.\n Then~$\\mathcal{I}$ is a \\textnormal{\\texttt{yes}}-instance if and only if~$\\mathcal{I}'$ is a \\textnormal{\\texttt{yes}}-instance.\n\\end{lemma}\n\n\\begin{proof}\n $(\\Rightarrow)\\quad${}\n Let~$S\\subseteq V$ be a vertex cover of size~$k$ in~$G$.\n We construct a solution~$F\\subseteq E'$ as follows.\n Let~$F_X=\\bigcup_{i=1}^{n-1} \\{\\{x_i,x_{i+1}\\}\\}$\n and~$F_V\\ensuremath{\\coloneqq} \\{\\{v_i,x_i\\}\\mid i\\in S\\}$.\n We define the auxiliary function~$g\\colon E\\to V'$ with~$g(\\{i,j\\})=v_{\\min(\\{i,j\\}\\cap S)}$.\n Let~$F_E\\ensuremath{\\coloneqq} \\bigcup_{e=\\{i,j\\}\\in E} \\{v_e,g(e)\\}$.\n Let~$F\\ensuremath{\\coloneqq} F_X\\cup F_V\\cup F_E$. \n Note that~$|F|=|F_X|+|F_V|+|F_E|\\leq |E|+(n-1)+k = k'$.\n Moreover, \n every~$v_e\\in V_E$ is connected to~$x_i$ via a path~$(v_e,v_i,x_i)$, \n where~$i\\in (e\\cap S)$.\n Finally,\n observe that~$G'[F][V_X]$ is connected.\n \n $(\\Leftarrow)\\quad${}\n Let~$\\mathcal{I}'$ be a \\textnormal{\\texttt{yes}}-instance.\n Due to \\cref{lem:2rgbp:edinc} there\n is a solution~$F\\subseteq E'$ such that $\\deg_{G'[F]}(v_e)=1$ for all~$e\\in E$.\n Due to the observation,\n we know that~$\\bigcup_{i=1}^{n-1} \\{\\{x_i,x_{i+1}\\}\\}\\subseteq F$.\n Let~$S\\ensuremath{\\coloneqq} \\{i\\in V\\mid \\{v_i,x_i\\}\\in F\\}$.\n We claim that~$S$ is a vertex cover.\n Suppose not,\n that is,\n there is an edge~$e\\in E$ such that~$e\\cap S=\\emptyset$.\n That means that the unique neighbor of~$v_e$,\n say~$v_i$,\n is not adjacent with~$x_i$ in~$G'[F]$.\n Since $\\deg_{G'[F]}(v_e)=1$ for all~$e\\in E$,\n $N_{G'[F]}[v_i]\\neq G'[F]$ forms a connected component in~$G'[F]^2$.\n Hence,\n $F$ is not a solution.\n A contradiction.\n\\end{proof}\n\n\\begin{remark}\n \\begin{inparaenum}[(i)]\n \\item To make the reduction work for~$d\\geq 3$,\n it is enough to subdivide each edge~$\\{v_e,v_i\\}$\n $(d-2)$ times and set~$k'\\ensuremath{\\coloneqq} (d-1)m+(n-1)+k$.\n \\item If we contract all~$x_i$,\n set~$V_2=\\emptyset$ \n (i.e., only one habitat remains),\n and set~$k'\\ensuremath{\\coloneqq} (d-1)m+k$,\n then the reduction is still valid\n (see~\\cref{fig:2rgbp}(b) for an illustration).\n \\end{inparaenum}\n Thus,\n \\cref{prop:2rgbp}(ii) follows.\n\\end{remark}\n\n\\noindent\n\\cref{prop:2rgbp}\nleaves $k$ unbounded.\nThis leads to the following.\n\n\\subsubsection{Parameterizing with~\\texorpdfstring{\\boldmath{$k$}}{k}.}\n\nWe show that \\RgbpAcr[2]{} admits a problem kernel of size exponential in~$k$.\n\\begin{proposition}\n \\label{prop:rgbp:kernel}\n \\RgbpAcr[2]{} admits a problem kernel with at most $2k+\\binom{2k}{k}$ vertices,\n at most $\\binom{2k}{2}+k\\binom{2k}{k}$ edges,\n and at most~$2^{2k}$ habitats.\n\\end{proposition}\n\n\\noindent\nLet~$\\bar{V}\\ensuremath{\\coloneqq} V\\setminus 
\\bigcup_{V'\\in\\mathcal{H}} V'$ for graph~$G=(V,E)$ and habitat set~$\\mathcal{H}=\\{V_1,\\dots,V_r\\}$.\nThe following reduction rules are immediate.\n\n\\begin{rrule}\n \\label{rr:immediate}\n \\begin{inparaenum}[(i)]\n \\item If~$|V_i|=1$ some~$i$,\n delete~$V_i$.\n \\item If a vertex in~$\\bar{V}$ is of degree at most one,\n delete it.\n \\item If there is an~$i\\in\\set{r}$ with~$|V_i|>1$ and an~$v\\in V_i$ of degree zero,\n return a trivial \\textnormal{\\texttt{no}}-instance.\n \\item If there is~$v\\in V\\setminus\\bar{V}$ of degree at most one,\n delete it (also from~$V_1,\\dots,V_r$),\n and set~$k\\ensuremath{\\coloneqq} k-1$.\n \\end{inparaenum}\n\\end{rrule}\n\n\\noindent\nClearly, \n$k$ edges can connect at most~$2k$ vertices;\nthus we obtain the following.\n\n\\begin{rrule}\n \\label{rr:few-habitat-vertices}\n If~$|V\\setminus\\bar{V}|>2k$,\n then return a trivial \\textnormal{\\texttt{no}}{}-instance.\n\\end{rrule}\n\n\\noindent\nSo we have at most~$2k$ vertices in habitats.\nNext, we upper-bound the number of non-habitat vertices.\nNo minimal solution has edges between two such vertices.\n\n\\begin{rrule}\n \\label{rr:no-bar-edges}\n If there is an edge~$e \\in E$ with $e\\subseteq \\bar{V}$,\n then delete~$e$.\n\\end{rrule}\n\n\\noindent\nMoreover,\nno minimum solution connects through non-habitat twins.\n\n\\begin{rrule}\n \\label{rr:no-twins}\n If~$N(v)\\subseteq N(w)$ for distinct~$v,w\\in \\bar{V}$,\n then delete~$v$.\n\\end{rrule}\n\n\\noindent\nWe still need to bound the number of vertices in~$\\bar V$.\nFor an $n$-element set~$S$ let~$\\mathcal F \\subseteq 2^{S}$ be a family of subsets such that\nfor every~$A, B \\in \\mathcal F$ we have~$A \\not\\subseteq B$.\nThen~$|\\mathcal F| \\le \\binom{n}{\\lfloor n\/2 \\rfloor}$ by Sperner's Theorem.\nHence,\nafter \napplying the reduction rules,\nwe get an instance with at most~$2k+\\binom{2k}{k}$ vertices \nand~$\\binom{2k}{2}+2k\\binom{2k}{k}$ edges.\n\n\\begin{rrule}\n\\label{rr:habitats}\nIf $V_i=V_j$ for distinct~$i,j\\in\\set{r}$,\nthen delete~$V_j$.\n\\end{rrule}\n\n\\noindent\nIt follows that we can safely assume that~$r\\leq 2^{2k}$.\nThus,\n\\cref{prop:rgbp:kernel} follows.\nUnfortunately,\nimproving the problem kernel \nto polynomial-size appears unlikely.\n\n\\begin{proposition}\n \\label{prop:rgbp:nopk}\n Unless~\\ensuremath{\\NP\\subseteq \\coNP\/\\poly},\n \\RgbpAcr{} for~$d\\geq 2$ admits no problem kernel of size~$k^{\\O(1)}$,\n even if~$r\\geq 1$ is constant.\n\\end{proposition}\n\n\\noindent\nWe will give a linear parameteric transformation from the following problem:\n\n\\decprob{Set Cover (SC)}{setcover}\n{A universe~$U$, a set~$\\mathcal{F}\\subseteq 2^U$ of subsets of~$U$, and an integer~$k$.}\n{Is there~$\\mathcal{F}'\\subset\\mathcal{F}$ with~$|\\mathcal{F}'|\\leq k$ such that~$\\bigcup_{F\\in\\mathcal{F}'} F=U$?}\n\n\\noindent\nThe construction is basically the same as for \\cref{prop:2rgbp}(ii).\nNote that \\prob{Set Cover} admits no problem kernel of size polynomial in~$|U|+k$,\nunless~$\\ensuremath{\\NP\\subseteq \\coNP\/\\poly}$~\\cite{DomLS14}.\n\n\n\\begin{proof}\n Let~$\\mathcal{I}=(U,\\mathcal{F},k)$ be an instance of~\\prob{Set Cover},\n with~$U=\\{u_1,\\dots,u_n\\}$.\n Construct an instance~$\\mathcal{I}'\\ensuremath{\\coloneqq} (G,V_1,k')$ of \\RgbpAcr[2]{} with~$k'=|U|+k$ as follows \n (see~\\cref{fig:2rgbp:nopk}).\n \\begin{figure}[t]\n \\centering\n \\begin{tikzpicture}\n\n \\def1.1{1.1}\n \\def0.9{1}\n \\def1.5*\\yr{1.5*0.9}\n \\def0.125{0.125}\n 
\\tikzstyle{xnode}=[circle,fill,scale=0.6,draw,color=cyan!50!green]\n \\tikzstyle{xnodef}=[circle,scale=0.6,draw,color=cyan!50!green,fill=white]\n \\tikzstyle{xnodex}=[label={[xshift=-0.4*1.1 cm]0:$\\cdots$},color=cyan!50!green]\n \\tikzstyle{xedge}=[thick,-,color=cyan!50!green]\n\n \\foreach \\x\/\\y in {1\/xnode,2\/xnodex,3\/xnode,4\/xnodex,5\/xnode,6\/xnodex,7\/xnode,8\/xnodex,9\/xnode}{\n \\node (e\\x) at (\\x*1.1-5*1.1,-1*1.5*\\yr)[\\y]{};\n }\n \\draw[dotted,rounded corners,draw] ($(e1.south west)-(0.125,0.125)$) rectangle ($(e9.north east)+(0.125,0.125)$);\n \\foreach \\x\/\\y in {1\/xnodef,2\/xnodex,3\/xnodef,4\/xnodex,5\/xnodef,6\/xnodex,7\/xnodef}{\n \\node (v\\x) at (\\x*1.1-4*1.1,0*1.5*\\yr)[\\y]{};\n }\n \\draw[dotted,rounded corners,draw] ($(v1.south west)-(0.125,0.125)$) rectangle ($(v7.north east)+(0.125,0.125)$);\n \\node (x) at (0*1.1,-2*1.5*\\yr)[xnodef]{};\n\n\n \\foreach \\x\/\\y in {3\/2,3\/3,3\/5,5\/1,5\/3,5\/5,5\/7,7\/4,7\/5,7\/6}{\n \\draw[xedge] (e\\x) to (v\\y);\n }\n \\foreach \\x\/\\y in {2\/1,8\/7}{\n \\draw[draw=none] (e\\x) to node[midway]{$\\vdots$}(v\\y);\n }\n \\foreach \\x in {1,...,9}{\n \\draw[xedge] (x) to (e\\x);\n }\n \n \\newcommand{\\ltb}[3]{\n \\contourlength{0.09em}\n \\node at (#1)[label={[align=center,font=\\footnotesize,color=black]\\contour*{white}{#2}},label={[align=center,font=\\scriptsize,color=black]-90:\\contour*{white}{#3}}]{};\n }\n \n \\ltb{e3}{$v_{F'}$}{};\n \\ltb{e5}{$v_F$}{};\n \\ltb{e7}{$v_{F''}$}{};\n \n \\ltb{x}{$x$}{$\\in V_1$};\n \n \\ltb{v1}{$u_1$}{$\\in V_1$};\n \\ltb{v7}{$u_n$}{$\\in V_1$};\n \\ltb{v3}{$u_i$}{$\\in V_1$};\n \\ltb{v5}{$u_j$}{$\\in V_1$};\n \n \\node[right =of v7,xshift=-1.1*0.5 cm]{$V_U$};\n \\node[right =of e9,xshift=-1.1*0.5 cm]{$V_\\mathcal{F}$};\n \\end{tikzpicture}\n \\caption{Illustration for the construction in the proof of~\\cref{prop:rgbp:nopk} for~\\RgbpAcr[2]{} with~$r=1$. 
\n In this example, \n $U=\\{u_1,\\dots,u_n\\}$ and\n we have $\\{u_1,u_i,u_j,u_n\\}= F\\in\\mathcal{F}$.}\n \\label{fig:2rgbp:nopk}\n \\end{figure}\n Let~$G$ be initially empty.\n Add the vertex set~$V_U\\ensuremath{\\coloneqq} U$,\n the vertex set~$V_\\mathcal{F}\\ensuremath{\\coloneqq} \\{v_F\\mid F\\in\\mathcal{F}\\}$,\n and the vertex~$x$.\n Set~$V_1\\ensuremath{\\coloneqq} V_U\\cup\\{x\\}$.\n Make each vertex in~$V_\\mathcal{F}$ adjacent with~$x$.\n Finally,\n for each~$F\\in\\mathcal{F}$,\n add the edge set~$\\{\\{v_i,v_F\\}\\mid u_i\\in F\\}$.\n \n The proof that~$\\mathcal{I}$ is a \\textnormal{\\texttt{yes}}-instance if and only if~$\\mathcal{I}'$ is a \\textnormal{\\texttt{yes}}-instance \n is analogous with the correctness proof for \\cref{prop:2rgbp}(ii).\n \n Since \\prob{Set Cover} admits no problem kernel of size polynomial in~$|U|+k$,\n unless~$\\ensuremath{\\NP\\subseteq \\coNP\/\\poly}$~\\cite{DomLS14},\n neither does~\\RgbpAcr[2]{} when parameterized by~$k'=|U|+k$.\n\\end{proof}\n\n\\noindent\n\\cref{prop:rgbp:nopk} holds for general graphs.\nIn fact, \nfor planar graphs,\nthe above reduction rules allow for \nan~$\\O(k^3)$-vertex kernel.\n\n\\begin{proposition}\n\t\\label{prop:rgbp:planar}\t\n\t\\RgbpAcr[2]{} on planar graphs admits a problem kernel with~$\\O(k^3)$ vertices and edges and at most~$2^{2k}$ habitats.\n\\end{proposition}\n\n\\begin{observation}\n \\label{obs:2rgbpplanar}\n\tSuppose all reduction rules were applied exhaustively.\n\tThen\n\t\\begin{inparaenum}[(i)]\n\t\\item there are at most~$k$ vertices of degree two in~$\\bar V$, and\n\t\\item there are at most~$3\\binom{2k}{3}$ vertices of dergee at least three in~$\\bar V$.\n\t\\end{inparaenum}\n\\end{observation}\n\n\\begin{proof}\n\t\\begin{inparaenum}[\\itshape (i)]\n\t\\item By \\cref{rr:no-bar-edges,rr:few-habitat-vertices,rr:no-twins},\n\t\tevery degree-two vertex in~$\\bar V$ has a pairwise different pair of neighbors in~$V \\setminus \\bar V$.\n\t\tIf there are more than~$2\\binom{2k}{2}$ degree-two vertices in~$\\bar V$,\n\t\tthen one of the reduction rules was not applied exhaustively.\n\n\t\\item Any three vertices~$u,v,w\\in V$ of a planar graph share at most two neighbors,\n\t\tthat is,\n\t\t$|N(u)\\cap N(v)\\cap N(w)| \\le 2$.\n\t\tSuppose there are more than~$3\\binom{2k}{3}$ vertices in~$\\bar V$ of degree at least three.\n\t\tThen,\n\t\tby \\cref{rr:no-bar-edges,rr:few-habitat-vertices,rr:no-twins},\n\t\tthere are three vertices~$u,v,w\\in\\bar V$ such that~$|N(u)\\cap N(v)\\cap N(w)| \\ge 3$,\n\t\ta contradiction to~$G$ being planar.\n\t\\end{inparaenum}\n\\end{proof}\n\n\\noindent\nAs~$|V\\setminus \\bar V| \\le 2k$ and we deleted all degree-one vertices,\n\\cref{prop:rgbp:planar} follows.\n\n\\subsection{At least two hops between habitat patches (\\texorpdfstring{\\boldmath{$d\\ge 3$}}{d\u2a7e3})}\n\\label{ssec:3rgbp}\n\nIf the data is more sparse,\nthat is,\nthe observed habitats to connect are rather scattered,\nthen the problem becomes significantly harder to solve from the parameterized complexity point of view.\n\n\\begin{proposition}\n \\label{prop:3rgbp}\n \\RgbpAcr{} with~$d\\geq 3$ is \\cocl{NP}-complete and \n \\W{1}-hard when parameterized by~$k+r$.\n\\end{proposition}\n\n\\noindent\nWe give the construction for~$d$ being odd.\nAfterwards,\nwe explain how to adapt the reduction to~$d$ being even.\n\n\\begin{construction}\n \\label{constr:3rgbp}\n Let~$(G)$ with~$G=(U^1,\\dots,U^k,E)$ be an instance of \\prob{Multicolored Clique}\n where~$G[U^i]$ forms an independent set 
for every~$i\\in\\set{k}$.\n Assume without loss of generality that~$U^i=\\{u^i_1,\\dots,u^i_{|V^i|}\\}$.\n Construct the instance~$(G',V_1,\\dots,V_{\\binom{k}{2}},k')$ with~$k\\ensuremath{\\coloneqq} \\frac{(d-1)}{2}k+\\binom{k}{2}$ as follows\n (see~\\cref{fig:3rgbp} for an illustration).\n \\begin{figure}[t]\n \\centering\n \\begin{tikzpicture}\n\n \\usetikzlibrary{calc}\n \\def1.1{1}\n \\def0.9{0.775}\n \\def0.125{0.175}\n \\tikzstyle{xnode}=[circle,fill,scale=0.6,draw,color=cyan!50!green]\n \\tikzstyle{xedge}=[thick,-,color=cyan!50!green]\n \\tikzstyle{xnodef}=[circle,scale=0.6,draw,color=cyan!50!green,fill=white]\n\n \\newcommand{\\xpart}[6]{\n \\begin{scope}[xshift=#5*1.1 cm,yshift=#6*0.9 cm,rotate around={#4:(0,0)}]\n \\node (u#1) at (0,2*0.9)[xnode]{};\n \\node (v#11) at (-1.5*1.1,0*0.9)[xnodef]{};\n \\node (v#12) at (-0.75*1.1,0*0.9)[rotate=#4]{$\\cdots$};\n \\node (v#13) at (0*1.1,0*0.9)[xnodef]{};\n \\node (v#14) at (0.75*1.1,0*0.9)[rotate=#4]{$\\cdots$};\n \\node (v#15) at (1.5*1.1,0*0.9)[xnodef]{};\n \n\n \\foreach \\x in {1,...,5}{\n \\draw[xedge] (u#1) to (v#1\\x);\n \\draw[dash pattern=on \\pgflinewidth off 8pt, fill=white,thick, line width=3pt,color=cyan!50!green] (u#1) to (v#1\\x);\n }\n \\draw[densely dotted,rounded corners] ($(v#11)-(0.125,0.125)$) rectangle ($(v#15)+(0.125,0.125)$);\n \\end{scope}\n \\node at (v#15)[label={[]#3}]{};\n \\node at (u#1)[label={[]#2}]{};\n }\n \n \\xpart{1}{90:$\\in V_\\ell$ if~$i\\in g^{-1}(\\ell)$}{90:$U^i$}{45}{-3}{2};\n \\xpart{2}{-90:$\\in\\bigcap_{\\ell: j\\in g^{-1}(\\ell)} V_\\ell$}{180:$U^j$}{135}{-3}{-2};\n \\xpart{3}{-90:$\\in\\bigcap_{\\ell: j'\\in g^{-1}(\\ell)} V_\\ell$}{-90:$U^{j'}$}{225}{3}{-2};\n \\xpart{4}{90:$\\in\\bigcap_{\\ell: i'\\in g^{-1}(\\ell)} V_\\ell$}{0:$U^{i'}$}{315}{3}{2};\n\n \\node (top) at (0,3.2*0.9)[scale=2]{$\\cdots$};\n \\node (bot) at (0,-3.2*0.9)[scale=2]{$\\cdots$};\n \\draw[xedge] (v13) to (v25) to (v33) to (v45) to (v13) to (v33);\n \\draw[xedge] (v25) to (v45);\n \\foreach \\x\/\\y in {1\/3,2\/5,3\/3,4\/5}{\n \\draw[xedge] (v\\x\\y) to (top);\n \\draw[xedge] (v\\x\\y) to (bot);\n }\n \\draw[draw=none] (v11) to node[midway,below,sloped,color=red]{$(d-1)\/2$ edges}(u1);\n\n \\end{tikzpicture}\n \\caption{Illustration to~\\cref{constr:3rgbp} for~\\RgbpAcr{} for~$d\\geq 3$.}\n \\label{fig:3rgbp}\n \\end{figure}\n\n Let~$g\\colon \\binom{\\set{k}}{2}\\to \\set{\\binom{k}{2}}$ be a bijective function.\n Let~$G'$ be initially~$G$.\n For each~$i\\in\\set{k}$,\n add a vertex~$v_i$ to~$G'$,\n add~$v_i$ to each habitat~$V_\\ell$ with~$i\\in g^{-1}(\\ell)$,\n and connect~$v_i$ with~$u^i_j$ for each~$j\\in\\set{|u^i_{|U^i|}}$ via a path with~$\\frac{d-1}{2}$ edges,\n where~$v_i$ and~$u_i^j$ are the endpoints of the path.\n\\end{construction}\n\n\\begin{remark}\n For every even~$d\\geq 4$,\n we can adapt the reduction for~$d-1$:\n at the end of the construction,\n subdivide each edge between two vertices that are in the original graph~$G$.\n\\end{remark}\n\n\\begin{observation}\n \\label{obs:3rgbp:smallhabs}\n In the obtained instance,\n for every~$\\ell\\in\\set{\\binom{k}{2}}$,\n it holds that, \n $V_\\ell=\\{v_i,v_j\\}$ where~$\\{i,j\\}=g^{-1}(\\ell)$,\n and for every~$i,j\\in\\set{k}$,\n $i\\neq j$,\n it holds that~$\\{\\ell'\\mid \\{v_i,v_j\\}\\subseteq V_{\\ell'}\\}=\\{\\ell\\}$ with~$\\ell=g(\\{i,j\\})$.\n\\end{observation}\n\n\\begin{observation}\n \\label{obs:3rgbp:exone}\n If the obtained instance is a \\textnormal{\\texttt{yes}}-instance,\n then in every minimal solution~$F$,\n for 
every~$i\\in\\set{k}$ there is exactly one~$u^i_j$ in~$G[F]$.\n\\end{observation}\n\n\\begin{proof}\n Note that each~$v_i$ must be connected with at least one vertex from~$U^i$ in~$G[F]$.\n Thus,\n $|G[F]\\cap U^i|\\geq 1$.\n Moreover,\n from each~$i,j\\in\\set{k}$,\n $i\\neq j$,\n $F$~must contain an edge between~$U^i$ and~$U^j$,\n since~$\\dist_{G'}(v_i,u)+\\dist_{G'}(v_j,u')\\geq d-1$ for every~$u\\in U^i$, \n $u'\\in U^j$.\n Since additionally~$k= \\frac{(d-1)}{2}k+\\binom{k}{2}$,\n it follows that~$v_i$ can not be connected with two vertices from~$U^i$ in~$G[F][U^i\\cup \\{v_i\\}]$.\n Hence,\n if there are two vertices~$u,u'\\in U^i\\cap F$,\n with~$u$ being connected to~$v_i$ in~$G[F][U^i\\cup \\{v_i\\}]$,\n then~$u'$ is not part of an $v_a$-$v_b$~path in~$G[F]$ of length at most~$d$\n for every~$a,b\\in\\set{k}$.\n It follows that~$F$ is not minimal.\n\\end{proof}\n\n\n\\begin{lemma}\n \\label{lem:3rgbp}\n Let~$\\mathcal{I}=(G)$ with~$G=(U^1,\\dots,U^k,E)$ be an instance of \\prob{Multicolored Clique} and let~$\\mathcal{I}'=(G',\\mathcal{H},k')$ be the instance obtained from~$\\mathcal{I}$ using~\\cref{constr:3rgbp}.\n Then~$\\mathcal{I}$ is a \\textnormal{\\texttt{yes}}-instance if and only if~$\\mathcal{I}'$ is a \\textnormal{\\texttt{yes}}-instance.\n\\end{lemma}\n\n\\begin{proof}\n $(\\Rightarrow)\\quad${} \n Let~$W\\subseteq V(G)$ be a multicolored clique.\n Let~$F$ contain~$\\binom{W}{2}$ and all edges of a path from~$v_i$ to~$U^i\\cap W$.\n We claim that~$F$ is a solution.\n Note that~$|F|=\\binom{k}{2}+k\\frac{d-1}{2}$.\n Since~$V_\\ell$ is of size two for all~$\\ell\\in\\set{\\binom{k}{2}}$ (\\cref{obs:3rgbp:smallhabs}),\n we only need to show that~$v_i,v_j$ with~$\\{i,j\\}=g^{-1}(\\ell)$\n is connected by a path of length at most~$d$.\n We know that~$v_i$ is connected to some~$u^i_x$ by a path of length~$(d-1)\/2$,\n which is adjacent to some~$u^j_y$, \n which is connected to~$v_j$ by a path of length~$(d-1)\/2$.\n Thus, \n $v_i$ and~$v_j$ are of distance~$d$.\n \n $(\\Leftarrow)\\quad${}\n Let~$F$ be a solution.\n Note that~$|F|=\\binom{k}{2}+k\\frac{d-1}{2}$.\n We claim that~$W\\ensuremath{\\coloneqq} V(G'[F])\\cap V(G)$ is a multicolored clique.\n First, \n observe that~$|W|=k$ since for every~$v_i$ there is exactly one~$u^i_{\\ell_i}$ in~$G'[F]$ (\\cref{obs:3rgbp:exone}).\n Suppose that~$W$ is not a multicolored clique,\n that is,\n there are~$U^i$ and~$U^j$ such that there is no edge in~$F$ between them.\n Then~$v_i$ and~$v_j$ are of distance larger than~$d$ in~$G[F]$,\n contradicting that~$F$ is a solution.\n\\end{proof}\n\n\\section{Connecting Habitats at Short Pairwise Distance}\n\\label{sec:cgbp}\n\nIn the next problem, \nwe require short pairwise reachability.\n\n\\decprob{\\CgbpTsc{} (\\CgbpAcr{})}{cgbp}\n{An undirected graph~$G=(V,E)$,\na set~$\\mathcal H = \\{V_1, \\dots, V_r\\}$ of habitats where~$V_i\\subseteq V$ for all~$i\\in\\set{r}$,\nand~$k \\in \\mathbb{N}_0$.}\n{Is there a subset~$F\\subseteq E$ with~$|F|\\leq k$ such that\nfor every~$i\\in\\set{r}$\nit holds that~$G[F]^d[V_i]$ is a clique?\n}\n\n\\noindent\nNote that if $G[F]^d[V_i]$ is a clique, \nthen~$\\dist_{G[F]}(v,w) \\leq d$ for all~$v,w\\in V_i$.\nNote that~\\CgbpAcr[2]{} is an unweighted variant of the 2NET problem~\\cite{DahlJ04}.\n\n\\begin{theorem}\n \\CgbpAcr{} is,\n \\begin{compactenum}[(i)]\n \\item if~$d=1$, linear-time solvable;\n \\item if~$d=2$, \\cocl{NP}-hard even on bipartite graphs of diameter three and~$r=1$, \n and in~\\cocl{FPT}{} regarding~$k$;\n \\item if~$d\\geq 3$, 
\\cocl{NP}-hard and~\\W{1}-hard regarding~$k$ even if~$r=1$.\n \\end{compactenum}\n\\end{theorem}\n\n\n\\noindent\nFor~$d=1$,\nthe problem is solvable in linear time:\nCheck whether each habitat induces a clique.\nIf so,\ncheck if the union of the cliques is small enough.\n\n\\begin{observation}\n \\label{obs:1cgbp}\n \\CgbpAcr[1]{} is solvable in linear time.\n\\end{observation}\n\n\\begin{proof}\n We employ the following algorithm:\n For each~$i\\in\\set{r}$,\n let~$G_i := G[V_i]$ and return~\\textnormal{\\texttt{no}}{} if~$G_i$ is not a clique.\n Finally,\n return~\\textnormal{\\texttt{yes}}{} if~$|\\bigcup_{i=1}^r E(G_i)|\\leq k$,\n and~\\textnormal{\\texttt{no}}{} otherwise.\n Clearly,\n if the algorithm returns \\textnormal{\\texttt{yes}},\n then~$\\mathcal{I}$ is \\textnormal{\\texttt{yes}}-instance.\n Conversely,\n let~$\\mathcal{I}$ be a \\textnormal{\\texttt{yes}}-instance\n and let~$F'$ be a solution to~$\\mathcal{I}$.\n We know that for every~$i\\in\\set{r}$,\n and any two vertices~$v,w\\in V_i$,\n edge~$\\{v,w\\}$ must be in~$F'$.\n It follows that\n $\\bigcup_{i=1}^r E(G_i)\\subseteq F'$.\n Thus,\n $|\\bigcup_{i=1}^r E(G_i)|\\leq |F'|\\leq k$ \n and the algorithm correctly returns~\\textnormal{\\texttt{yes}}.\n\\end{proof}\n\n\\subsection{When each part is just two steps away (\\texorpdfstring{\\boldmath{$d=2$}}{d=2})}\n\\label{ssec:2cgbp}\n\nFor~$d=2$,\n\\CgbpAcr{}\nbecomes \\cocl{NP}-hard already on quite restrictive inputs.\n\n\\begin{proposition}\n \\label{prop:2cgbp}\n \\CgbpAcr[2]{} is \\cocl{NP}-complete,\n even\n if~$r=1$ and\n the input graph is bipartite and of diameter three.\n\\end{proposition}\n\n\\begin{construction}\n \\label{constr:2cgbp}\n Let~$\\mathcal{I}=(G,k)$ with~$G=(V,E)$ be an instance of \\prob{Vertex Cover},\n and assume without loss of generality that~$V=\\set{n}$.\n Construct an instance of~\\CgbpAcr[2]{}\n with graph~$G'=(V',E')$, \n habitat~$V_1$,\n and integer~$k'\\ensuremath{\\coloneqq} 2|E|+k+3$ as follows\n (see~\\cref{fig:2cgbp} for an illustration).\n \\begin{figure}[t]\n \\centering\n \\begin{tikzpicture}\n \\def1.1{1.1}\n \\def0.9{0.7}\n \\def1.5*\\yr{1.5*0.9}\n \\def0.125{0.125}\n \\tikzstyle{xnode}=[circle,fill,scale=0.6,draw,color=cyan!50!green]\n \\tikzstyle{xnodex}=[label={[xshift=-0.4*1.1 cm]0:$\\cdots$},color=cyan!50!green]\n \\tikzstyle{xedge}=[thick,-,color=cyan!50!green]\n \\tikzstyle{xnodef}=[circle,scale=0.6,draw,color=cyan!50!green,fill=white]\n \n \\node (xp) at (0,1*1.5*\\yr)[xnodef]{};\n \\node (x) at (0*1.1,2*1.5*\\yr)[xnode]{};\n \n \\foreach \\x\/\\y in {1\/xnode,2\/xnodex,3\/xnode,4\/xnodex,5\/xnode,6\/xnodex,7\/xnode,8\/xnodex,9\/xnode}{\n \\node (e\\x) at (\\x*1.1-5*1.1,0)[\\y]{};\n }\n \\draw[dotted,rounded corners,draw] ($(e1.south west)-(0.125,0.125)$) rectangle ($(e9.north east)+(0.125,0.125)$);\n \\foreach \\x\/\\y in {1\/xnodef,2\/xnodex,3\/xnodef,4\/xnodex,5\/xnodef,6\/xnodex,7\/xnodef}{\n \\node (v\\x) at (\\x*1.1-4*1.1,-1*1.5*\\yr)[\\y]{};\n }\n \\draw[dotted,rounded corners,draw] ($(v1.south west)-(0.125,0.125)$) rectangle ($(v7.north east)+(0.125,0.125)$);\n \\node (z) at (0,-2*1.5*\\yr)[xnode]{};\n\n\n \\foreach \\x\/\\y in {3\/3,3\/5,5\/3,5\/2,7\/5,7\/6}{\n \\draw[xedge] (e\\x) to (v\\y);\n }\n \\foreach \\x\/\\y in {2\/1,8\/7}{\n \\draw[draw=none] (e\\x) to node[midway]{$\\vdots$}(v\\y);\n }\n\n \\foreach \\x in {1,...,7}{\n \\draw[xedge] (z) to (v\\x);\n }\n \n \\foreach \\x in {1,...,9}{\n \\draw[xedge] (e\\x) to (xp);\n }\n \\draw[xedge] (xp) to (x);\n \n \\node[right =of e9](y)[xnodef]{};\n 
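    % Caution: the TikZ node names differ from the printed labels assigned further below.
    % Node (x) is labelled y, node (xp) is labelled y', node (z) is labelled x, and the node (y)
    % created above is labelled z, the inner vertex of the length-two path between the vertices
    % printed as x and y.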
\\draw[xedge] (z) to [out=0,in=-90](y);\n \\draw[xedge] (y) to [out=90,in=0](x);\n \n \\newcommand{\\dlabel}[3]{\n \\node at (#1)[label={[align=center,font=\\footnotesize,color=black]#2},\n label={[align=center,font=\\scriptsize,color=black]-90:#3}]{};\n }\n \\newcommand{\\ltb}[3]{\n \\contourlength{0.09em}\n \\node at (#1)[label={[align=center,font=\\footnotesize,color=black]\\contour*{white}{#2}},label={[align=center,font=\\scriptsize,color=black]-90:\\contour*{white}{#3}}]{};\n }\n\n\n \n \\ltb{xp}{$y'$}{}\n \\ltb{x}{$y$}{$\\in V_1$}\n \n \\ltb{e3}{$e=\\{i,j\\}$}{$\\in V_1$};\n \\ltb{e5}{$e'=\\{i,j'\\}$}{$\\in V_1$};\n \\ltb{e7}{$e''=\\{i',j\\}$}{$\\in V_1$};\n\n \\ltb{v3}{$v_i$}{};\n \\ltb{v5}{$v_j$}{};\n \n \\ltb{z}{$x$}{$\\in V_1$};\n \\dlabel{y}{180:$z$}{};\n \n \\end{tikzpicture}\n \\caption{Illustration to~\\cref{constr:2cgbp} for~\\CgbpAcr[2]{}.}\n \\label{fig:2cgbp}\n \\end{figure}\n\n To construct~$G'$ and~$V_1$,\n add the vertex set~$V_E\\ensuremath{\\coloneqq} \\{v_e\\mid e\\in E\\}$ and add~$V_E$ to~$V_1$.\n Add two designated vertices~$y'$ and~$y$,\n add~$y$ to~$V_1$,\n and make~$y'$ adjacent with~$y$ and all vertices in~$V_E$.\n Add a designated vertex~$x$,\n add~$x$ to~$V_1$,\n and introduce a path of length two from~$x$ to~$y$ (call the inner vertex~$z$).\n Add the vertex set~$V_G\\ensuremath{\\coloneqq} \\{v_i\\mid i\\in V\\}$,\n and make each~$v_i$ adjacent with~$x$ \n and all edge-vertices corresponding to an edge incident with~$i$,\n i.e.,\n add the edge set~$E_G\\ensuremath{\\coloneqq} \\bigcup_{i\\in V}\\{\\{v_i,v_e\\}\\mid i\\in e\\}$.\n\\end{construction}\n\n\\begin{observation}\n \\label{obs:2cgbp}\n Let~$\\mathcal{I}=(G,k)$ be an instance of \\prob{Vertex Cover} and let~$\\mathcal{I}'=(G',\\{V_1\\},k')$ be the instance obtained from~$\\mathcal{I}$ using~\\cref{constr:2cgbp}.\n If~$\\mathcal{I}'$ is a \\textnormal{\\texttt{yes}}-instance,\n then for every solution~$F\\subseteq E(G')$ \n it holds that~$\\{\\{y,y'\\},\\{y,z\\},\\{z,x\\}\\}\\cup \\{\\{y',v_e\\}\\mid e\\in E(G)\\}\\subseteq F$.\n\\end{observation}\n\n\\begin{lemma}\n \\label{lem:2cgbp:edinc}\n Let~$\\mathcal{I}=(G,k)$ be an instance of \\prob{Vertex Cover} and let~$\\mathcal{I}'=(G',\\{V_1\\},k')$ be the instance obtained from~$\\mathcal{I}$ using~\\cref{constr:2cgbp}.\n If~$\\mathcal{I}'$ is a \\textnormal{\\texttt{yes}}-instance,\n then there is a solution~$F\\subseteq E(G')$ such that~$|N_{G'[F]}(v_e)\\cap V_G|=1$ for all~$e\\in E(G)$.\n\\end{lemma}\n\n\\begin{proof}\n Note that in every solution,\n clearly we have~$|N_{G'[F]}(v_e)\\cap V_G|\\geq 1$.\n Suppose there is a minimal solution~$F$ such that there is at least one~$e=\\{i,j\\}\\in E$ such that~$|N_{G'[F]}(v_e)\\cap V_G|=2$.\n Let~$F$ be a solution with a minimum number of edges incident to vertices in~$V_E$.\n \n Since~$\\dist_{G'[F]}(v_e,x)= 2$,\n at least one of the edges~$\\{v_i,x_i\\}$ or~$\\{v_j,x_j\\}$ are in~$F$.\n If both are present\n then we can remove one of the edges~$\\{v_e,v_i\\}$ or~$\\{v_e,v_j\\}$ to~$e$ to obtain a solution of smaller size.\n This yields a contradiction.\n \n Otherwise,\n assume there is exactly one edge, say~$\\{v_e,v_i\\}$, \n contained in~$F$.\n Then exchanging~$\\{v_e,v_j\\}$ with~$\\{v_j,x\\}$ yields a solution with a lower number of edges incident to vertices in~$V_E$.\n A contradiction.\n\\end{proof}\n\n\n\\begin{lemma}\n \\label{lem:2cgbp:cor}\n Let~$\\mathcal{I}=(G,k)$ be an instance of \\prob{Vertex Cover} and let~$\\mathcal{I}'=(G',\\{V_1\\},k')$ be the instance obtained from~$\\mathcal{I}$ 
using~\\cref{constr:2cgbp}.\n Then~$\\mathcal{I}$ is a \\textnormal{\\texttt{yes}}-instance if and only if~$\\mathcal{I}'$ is a \\textnormal{\\texttt{yes}}-instance.\n\\end{lemma}\n\n\\begin{proof}\n $(\\Rightarrow)\\quad${}\n Let~$W\\subseteq V$ be a vertex cover of size at most~$k$ in~$G$.\n We construct a solution~$F\\subseteq E'$ as follows.\n Let~$F'$ denote the set of all edges required due to~\\cref{obs:2cgbp}.\n Let~$F_V\\ensuremath{\\coloneqq} \\{\\{v_i,x\\}\\mid i\\in W\\}$.\n We define the auxiliary function~$g\\colon E\\to V'$ with~$g(\\{i,j\\})=v_{\\min(\\{i,j\\}\\cap W)}$.\n Let~$F_E\\ensuremath{\\coloneqq} \\bigcup_{e=\\{i,j\\}\\in E} \\{v_e,g(e)\\}$.\n Let~$F\\ensuremath{\\coloneqq} F'\\cup F_V\\cup F_E$. \n Note that~$|F|=|F'|+|F_V|+|F_E|\\leq |E|+3+|E|+k = k'$.\n Moreover, \n every~$v_e\\in V'$ is connected to~$x$ via a path~$(v_e,v_i,z)$, \n for some~$i\\in (e\\cap W)$,\n of length two.\n Thus all vertex pairs in~$V_1$ are at distance at most two.\n \n $(\\Leftarrow)\\quad${}\n Let~$\\mathcal{I}'$ be a \\textnormal{\\texttt{yes}}-instance.\n Due to~\\cref{lem:2cgbp:edinc},\n there\n is a solution~$F\\subseteq E'$ such that~$\\deg_{G'[F]}(v_e)=1$ for all~$e\\in E$.\n Let~$W\\ensuremath{\\coloneqq} \\{i\\in V\\mid \\{v_i,x\\}\\in F\\}$.\n We claim that~$W$ is a vertex cover.\n Suppose not,\n that is,\n there is an edge~$e\\in E$ such that~$e\\cap W=\\emptyset$.\n That means that the unique neighbor of~$v_e$,\n say~$v_i$,\n is not adjacent with~$x$ in~$G'[F]$.\n Then,\n $v_e$ is not connected with~$x$ in~$G'[F]^2$,\n and hence~$F$ is no solution,\n a contradiction.\n\\end{proof}\n\n\\subsubsection{Graphs of constant maximum degree.}\n\n\\RgbpAcr[2]{} is \\cocl{NP}-hard if the number~$r$ of habitats and the maximum degree~$\\Delta$ are constant \n(\\cref{prop:2rgbp}).\n\\CgbpAcr[2]{} is linear-time solvable in this~case:\n\n\\begin{proposition}\n \\label{prop:cgbpdelta}\n \\CgbpAcr{} admits an~$\\O(r\\Delta(\\Delta-1)^{3d\/2})$-sized problem kernel computable in~$\\O(r(n+m))$ time.\n\\end{proposition}\n\n\\begin{proof}\n\tLet~$\\mathcal{I} = (G, \\mathcal{H}, k)$ be an instance of \\CgbpAcr{}.\n\tFor every~$i\\in\\set{r}$, fix a vertex~$u_i\\in V_i$.\n\tWe assume that we have $V_i \\subseteq N_G^d[u_i]$ for all~$i\\in\\set{r}$, otherwise~$\\mathcal{I}$ is a \\textnormal{\\texttt{no}}-instance.\n\tNow let~$W_i = N_G^{\\lceil 3d\/2\\rceil}[u_i]$ and let~$G' \\ensuremath{\\coloneqq} G[\\bigcup_{i=1}^r W_i]$.\n\tNote that~$G'$ contains at most~$r\\Delta(\\Delta - 1)^{\\lceil 3d\/2\\rceil}$ vertices and can be computed by~$r$ breadth-first searches.\n\tWe claim that~$G'$ contains every path of length at most~$d$ between every two vertices~$v, w\\in V_i$, for every~$i \\in \\set{r}$.\n\tRecall that an edge set~$F \\subseteq E$ is a solution if and only if for every~$i\\in\\set{r}$ and for every~$v, w\\in V_i$,\n\tthe graph $G[F]$ contains a path of length at most~$d$ from~$v$ to~$w$.\n\tAs by our claim~$G'$ contains any such path,\n\tthis implies that~$\\mathcal{I}$ is a \\textnormal{\\texttt{yes}}-instance if and only if~$\\mathcal{I}'\\ensuremath{\\coloneqq}(G',\\mathcal{H},k)$ is a \\textnormal{\\texttt{yes}}-instance (note that~$V_i \\subseteq V(G')$ for every~$i \\in \\set{r}$).\n\n\tAssuming that~$V_i \\subseteq N_G^d[u_i]$,\n\t$G[W_i]$ contains all paths of length at most~$d$ between~$u_i$ and any~$v\\in V_i$.\n\tSo let~$v, w \\in V_i$ be two vertices, both distinct from~$u_i$.\n\tAs $v, w \\in N^d_G[u_i]$ and~$W_i=N^{\\lceil 3d\/2\\rceil}_G[u_i]$,\n\tthe subgraph $G[W_i]$ 
contains all vertices in~$N^{\\lceil d\/2\\rceil}_G[v]$ and~$N^{\\lceil d\/2\\rceil}_G[w]$.\n\tConsider now a path of length at most~$d$ between~$v$ and~$w$.\n\tSuppose it contains a vertex~$x \\in V(G) \\setminus (N^{\\lceil d\/2\\rceil}_G[v]\\cup N^{\\lceil d\/2\\rceil}_G[w])$.\n\tThen~$\\dist_G(v, x) + \\dist_G(w, x) > 2 \\lceil d\/2 \\rceil \\ge d$, a contradiction to~$x$ being on a path from~$v$ to~$w$ of length at most~$d$.\n\tThe claim follows.\n\\end{proof}\n\n\\subsubsection{Parameterizing with~\\texorpdfstring{\\boldmath{$k$}}{k}.}\n\nAll the reduction rules that worked for~\\RgbpAcr[2]{}\nalso work for~\\CgbpAcr[2]{}.\nIt thus follows that~\\CgbpAcr[2]{} admits a problem kernel of size exponentially in~$k$,\nand hence,\n\\CgbpAcr[2]{} is~\\cocl{FPT}.\nAs with~\\RgbpAcr[2]{},\nthe problem kernel presumably cannot be much improved.\nFor this, we combine the constructions for~\\cref{prop:rgbp:nopk,prop:2cgbp}\n(see \\cref{fig:2rgbp:nopk,fig:2cgbp}).\n\n\\begin{corollary}\n \\CgbpAcr[2]{} admits a problem kernel of size exponentially in~$k$\n and,\n unless~\\ensuremath{\\NP\\subseteq \\coNP\/\\poly},\n none of size polynomial in~$k$,\n even if~$r=1$.\n\\end{corollary}\n\n\\subsection{When reaching each part is a voyage (\\texorpdfstring{\\boldmath{$d=3$}}{d\u2a7e3})}\n\\label{ssec:3cgbp}\n\nFor~$d\\geq 3$,\nthe problem\nis\n\\W{1}-hard regarding the number~$k$ of green bridges,\neven for one habitat.\nThe reduction is similar to the one for \\cref{prop:3rgbp}.\n\n\\begin{proposition}\n \\label{prop:3cgbp}\n \\CgbpAcr{} with~$d\\geq 3$ is \\cocl{NP}-complete and~\\W{1}-hard when parameterized by the number~$k$,\n even if $r=1$.\n\\end{proposition}\n\n\\begin{proof}\n Let~$\\mathcal{I}=(G)$ with~$G=(U^1,\\dots,U^k,E)$ be an instance of \\prob{Multicolored Clique}.\n Apply\n \\cref{constr:3rgbp}\n to obtain instance~$\\mathcal{I}''=(G',\\{V_1,\\dots,V_{\\binom{k}{2}}\\},k')$\n (recall that~$k'=\\frac{d-1}{2}k+\\binom{k}{2}$).\n Let~$\\mathcal{I}'=(G',\\{V_1'\\},k')$ with~$V_1'\\ensuremath{\\coloneqq} \\bigcup_{i=1}^{\\binom{k}{2}} V_i=\\{v_1,\\dots,v_k\\}$ be the finally obtained instance of~\\CgbpAcr{}.\n We claim that~$\\mathcal{I}$ is a \\textnormal{\\texttt{yes}}-instance if and only if~$\\mathcal{I}'$ is a \\textnormal{\\texttt{yes}}-instance.\n \n $(\\Rightarrow)\\quad${}\n Let~$C$ be a multicolored clique in~$G$.\n Let~$z_i\\ensuremath{\\coloneqq} V(C)\\cap U^i$.\n We claim that~$F$,\n consisting of the edges of each shortest path from~$v_i$ to~$z_i$ and the edge set~$E(C)$,\n is a solution to~$\\mathcal{I}'$.\n Note that~$|F|=k'$.\n Moreover,\n for any two~$i,j\\in\\set{k}$,\n we have that~$v_i$ and~$v_j$ are of distance~$2\\frac{d-1}{2}+1=d$.\n Hence,\n $F$ is a solution.\n \n $(\\Leftarrow)\\quad${}\n Let~$F$ be a solution to~$\\mathcal{I}$.\n Since~$F$ must contain a path from~$v_i$ to some~$z_i\\in U^i$\n for every~$i\\in\\set{k}$,\n there are at most~$\\binom{k}{2}$ edges left to connect.\n Let~$Z\\ensuremath{\\coloneqq} \\{z_1,\\dots,z_k\\}$ be the vertices such that~$v_i$ is connected with~$z_i$ in~$G[F][U^i]$.\n Since~$d\\geq \\dist_{G'[F]}(v_i,v_j) = \\dist_{G'[F]}(v_i,z_i)+\\dist_{G'[F]}(z_i,z_j)+\\dist_{G'[F]}(z_j,v_j)$ and~$d-1=\\dist_{G'[F]}(v_i,z_i)+\\dist_{G'[F]}(z_j,v_j)$,\n it follows that~$\\dist_{G'[F]}(z_i,z_j)=1$.\n Thus,\n $G[Z]$ forms a mulicolored clique.\n\\end{proof}\n\n\\section{Connecting Habitats at Small Diameter}\n\\label{sec:dgbp}\n\nLastly,\nwe consider requiring short pairwise reachability\nin \\RgbpAcr[1]{}.\n\n\\decprob{\\DgbpTsc{} 
(\\DgbpAcr{})}{dgbp}\n{An undirected graph~$G=(V,E)$,\na set~$\\mathcal{H}=\\{V_1,\\dots,V_r\\}$ of habitats where~$V_i\\subseteq V$ for all~$i\\in\\set{r}$, \nand an integer~$k\\in\\mathbb{N}_0$.}\n{Is there a subset~$F\\subseteq E$ with~$|F|\\leq k$ such that\nfor every~$i\\in\\set{r}$\nit holds that~$G[F][V_i]$ has diameter~$d$?\n}\n\n\\noindent\nIn particular,\n$G[F][V_i]$ is connected.\nNote that\n\\RgbpAcr[1]{} reduces to~\\prob{Diam \\prob{GBP}} \n(where $d$ is part of the input and then set to the number of vertices in the input instance's graph).\nWe have the following.\n\n\\begin{theorem}\n \\label{thm:dgbp}\n \\DgbpAcr{} is\n \\begin{compactenum}[(i)]\n \\item if~$d=1$, solvable in linear time;\\label{thm:dgbp:i}\n \\item if~$d=2$, \\cocl{NP}-hard even if~$r=3$;\n \\item if~$d=3$, \\cocl{NP}-hard even if~$r=2$.\n \\end{compactenum}\n Moreover,\n \\DgbpAcr{} admits a problem kernel with at most~$2k$ vertices and at most~$2^{2k}$ habitats.\n\\end{theorem}\n\n\\noindent\n\\DgbpAcr[1]{} is equivalent to \\CgbpAcr[1]{},\nwhich is linear-time solvable by \\cref{obs:1cgbp}.\nThus, \n\\cref{thm:dgbp}\\eqref{thm:dgbp:i} follows.\nApplying \\cref{rr:few-habitat-vertices,rr:habitats} and deleting all non-habitat vertices yields the problem kernel.\n\n\\subsection{Over at most one patch to every other (\\texorpdfstring{\\boldmath{$d=2$}}{d=2})}\n\\label{ssec:2dgbp}\n\n\\DgbpAcr[2]{} turns out to be \\cocl{NP}-hard even for three habitats.\n\n\n\n\\begin{proposition}\n \\label{prop:2dgbp}\n \\DgbpAcr[2]{} is \\cocl{NP}-hard even if~$r=3$.\n\\end{proposition}\n\n\\begin{construction}\n \\label{constr:2dgbp}\n Let~$\\mathcal{I}=(G,k)$ with~$G=(V,E)$ be an instance of~\\prob{Vertex Cover}\n and assume without loss of generality that~$V=\\{1,\\dots,n\\}$ and~$E=\\{e_1,\\dots,e_m\\}$.\n Construct an instance~$\\mathcal{I}'\\ensuremath{\\coloneqq} (G',\\{V_1,V_2,V_3\\},k')$\n with~$k'=2m+2n+k+4$ as follows\n (see~\\cref{fig:2dgbp} for an illustration).\n\\begin{figure}[t]\n \\centering\n \\begin{tikzpicture}\n\n \\def1.1{1.1}\n \\def0.9{0.9}\n \\def1.5*\\yr{1.5*0.9}\n \\def0.125{0.125}\n \\tikzstyle{xnode}=[circle,fill,scale=0.6,draw,color=cyan!50!green]\n \\tikzstyle{xnodef}=[circle,scale=0.6,draw,color=cyan!50!green,fill=white]\n \\tikzstyle{xnodex}=[label={[xshift=-0.4*1.1 cm]0:$\\cdots$},color=cyan!50!green]\n \\tikzstyle{xedge}=[thick,-,color=cyan!50!green]\n\n \\foreach \\x\/\\y in {1\/xnode,2\/xnodex,3\/xnode,4\/xnodex,5\/xnode,6\/xnodex,7\/xnode,8\/xnodex,9\/xnode}{\n \\node (e\\x) at (\\x*1.1-5*1.1,0)[\\y]{};\n }\n \\draw[dotted,rounded corners,draw] ($(e1.south west)-(0.125,0.125)$) rectangle ($(e9.north east)+(0.125,0.125)$);\n \\foreach \\x\/\\y in {1\/xnodef,2\/xnodex,3\/xnodef,4\/xnodex,5\/xnodef,6\/xnodex,7\/xnodef}{\n \\node (v\\x) at (\\x*1.1-4*1.1,-1*1.5*\\yr)[\\y]{};\n }\n \\draw[dotted,rounded corners,draw] ($(v1.south west)-(0.125,0.125)$) rectangle ($(v7.north east)+(0.125,0.125)$);\n \\node (x) at (0*1.1,-2*1.5*\\yr)[xnodef]{};\n \\node (z) at (5*1.1,-0.5*1.5*\\yr)[xnodef]{};\n \\node (zp) at (6*1.1,-0.5*1.5*\\yr)[xnodef]{};\n \\node (y) at (5*1.1,-1.5*1.5*\\yr)[xnodef]{};\n \\node (yp) at (6*1.1,-1.5*1.5*\\yr)[xnodef]{};\n \n\n \\foreach \\x\/\\y in {3\/3,3\/5,5\/3,5\/2,7\/5,7\/6}{\n \\draw[xedge] (e\\x) to (v\\y);\n }\n \\foreach \\x in {1,...,9}{\n \\draw[xedge] (e\\x) to [out=-15,in=175](z);\n }\n \\foreach \\x\/\\y in {2\/1,8\/7}{\n \\draw[draw=none] (e\\x) to node[midway]{$\\vdots$}(v\\y);\n }\n \\foreach \\x in {1,...,7}{\n \\draw[xedge] (v\\x) to 
[out=15,in=185](z);\n \\draw[xedge] (v\\x) to [out=-15,in=180](y);\n }\n \\foreach \\x in {1,...,7}{\n \\draw[xedge] (x) to (v\\x);\n }\n \\draw[xedge] (x) to (y);\n \\draw[xedge] (y) to (z);\n \\draw[xedge] (y) to (yp);\n \\draw[xedge] (z) to (zp);\n\n \\newcommand{\\ltb}[3]{\n \\contourlength{0.09em}\n \\node at (#1)[label={[align=center,font=\\footnotesize,color=black]\\contour*{white}{#2}},label={[align=center,font=\\scriptsize,color=black]-90:\\contour*{white}{#3}}]{};\n }\n \n \\ltb{e3}{$e=\\{i,j\\}$}{$\\in V_1,V_2$};\n \\ltb{e5}{$e'=\\{i,j'\\}$}{$\\in V_1,V_2$};\n \\ltb{e7}{$e''=\\{i',j\\}$}{$\\in V_1,V_2$};\n \n \\ltb{x}{$x$}{$\\in V_1, V_3$};\n \\ltb{y}{$y$}{$\\in V_1,V_3$};\n \\ltb{z}{$z$}{$\\in V_1,V_2,V_3$};\n \\ltb{yp}{$y'$}{$\\in V_3$};\n \\ltb{zp}{$z'$}{$\\in V_2$};\n \n \\ltb{v3}{$v_i$}{$\\in V_1,V_2,V_3$};\n \\ltb{v5}{$v_j$}{$\\in V_1,V_2,V_3$};\n \\end{tikzpicture}\n \\caption{Illustration for~\\DgbpAcr[2]{} with~$r=3$. ($k'=2m+2n+k+4$)}\n \\label{fig:2dgbp}\n\\end{figure}\n Add the vertex sets~$V_E\\ensuremath{\\coloneqq} \\{v_e\\mid e\\in E\\}$ and~$V_G=\\{v_i\\mid i\\in V\\}$,\n as well as the vertices~$x,y,y',z,z'$.\n Add all vertices except for~$y'$ and~$z'$ to~$V_1$.\n Let~$V_2\\ensuremath{\\coloneqq} V_E\\cup V_G\\cup \\{z,z'\\}$\n and\n $V_3\\ensuremath{\\coloneqq} V_G\\cup \\{x,z,y,y'\\}$.\n Next,\n for each~$e=\\{i,j\\}\\in E$,\n connect~$v_e$ with~$v_i$, \n $v_j$, \n and~$z$.\n For each~$i\\in V$,\n connect~$v_i$ with~$x$, \n $y$, \n and~$z$.\n Lastly, \n add the edge set~$E^*\\ensuremath{\\coloneqq}\\{\\{x,y\\},\\{y,y'\\},\\{z,z'\\},\\{z,y\\}\\}$ to~$E'$.\n Let~$E_y \\ensuremath{\\coloneqq} \\{\\{y,v_i\\} \\mid i \\in V\\}$,\n $E_{E}\\ensuremath{\\coloneqq}\\{\\{v_e,z\\} \\mid e \\in E\\}$,\n and~$E_{V}\\ensuremath{\\coloneqq}\\{\\{v_i,z\\}\\mid i \\in V\\}$.\n\\end{construction}\n\n\\begin{observation}\n \\label{obs:2dgbp}\n Let~$\\mathcal{I}'$ be the instance obtained from some instance~$\\mathcal{I}$ using~\\cref{constr:2dgbp}.\n If~$\\mathcal{I}'$ is a \\textnormal{\\texttt{yes}}-instance,\n then every solution~$F$ for~$\\mathcal{I}'$ contains\n (i) $\\{\\{v,z\\}\\mid v\\in N_{G'}(z)\\}$ and \n (ii) $\\{\\{v,y\\}\\mid v\\in N_{G'}(y)\\}$.\n\\end{observation}\n\n\\begin{proof}\n Clearly,\n $\\{z,z'\\},\\{y,y'\\}\\in F$.\n Every path of length at most two from~$z'$ to any vertex in~$V_E\\cup V_G$ passes~$z$.\n Hence,\n each edge in~$\\{\\{v,z\\}\\mid v\\in V_G\\cup V_E\\}$ is contained in~$F$.\n This proves~(i).\n The proof for~(ii) is analogous.\n\\end{proof}\n\n\\noindent\nNote that~$|\\{\\{v,z\\}\\mid v\\in N_{G'}(z)\\}\\cup\\{\\{v,y\\}\\mid v\\in N_{G'}(y)\\}|=m+2n+4$.\n\n\\begin{lemma}\n \\label{lem:2dgbp}\n Let~$\\mathcal{I}'$ be the instance obtained from some instance~$\\mathcal{I}$ using~\\cref{constr:2dgbp}.\n Then,\n $\\mathcal{I}$ is a \\textnormal{\\texttt{yes}}-instance if and only if~$\\mathcal{I}'$ is a \\textnormal{\\texttt{yes}}-instance.\n\\end{lemma}\n\n\\begin{proof}\n $(\\Rightarrow)\\quad${}\n Let~$S\\subseteq V$ be a vertex cover of size~$k$.\n Let~$F'$ denote the set of all edges required to be in a solution due to \\cref{obs:2dgbp}.\n Let~$F_V\\ensuremath{\\coloneqq} \\{\\{v_i,x\\}\\mid i\\in S\\}$.\n We define the auxiliary function~$g\\colon E\\to V'$ with~$g(\\{i,j\\})=v_{\\min(\\{i,j\\}\\cap S)}$.\n Let~$F_E\\ensuremath{\\coloneqq} \\bigcup_{e\\in E} \\{\\{v_e,g(e)\\}\\}$.\n Let~$F\\ensuremath{\\coloneqq} F'\\cup F_V\\cup F_E$. 
\n Note that~$|F|=|F'|+|F_V|+|F_E|\\leq (m+2n+4)+k+m = k'$.\n Observe that~$G'[F][V_2]$ and~$G'[F][V_3]$ are connected and of diameter two.\n Next consider~$G'[F][V_1]$.\n We claim that for all~$e \\in E$, $\\dist_{G'[F][V_1]}(x,v_e)=2$.\n By construction,\n $\\dist_{G'[F][V_1]}(x,v_e)>1$.\n Suppose that there is~$v_e$ with~$e=\\{i,j\\}$ and~$\\dist_{G'[F][V_1]}(x,v_e)>2$.\n Then there is no path~$(x,v,v_e)$ with~$v\\in\\{v_i,v_j\\}$.\n Then~$\\{i,j\\}\\cap S=\\emptyset$,\n contradicting the fact that~$S$ is a vertex cover.\n \n $(\\Leftarrow)\\quad${} \n Let~$F$ be a solution to~$\\mathcal{I}'$.\n Let~$F'$ be the set of edges mentioned in \\cref{obs:2dgbp};\n so~$F' \\subseteq F$.\n Observe that in~$G'-V_G$,\n the distance of~$x$ to any~$v_e\\in V_E$ is larger than two.\n Hence,\n for each~$v_e$,\n there is a path~$(v_e,v,x)$ in~$G'[F][V_1]$ with~$v\\in V_G$.\n We claim that~$S\\ensuremath{\\coloneqq} \\{i\\in V\\mid \\{v_i,x\\}\\in F\\}$ is a vertex cover for~$G$ of size at most~$k$.\n Suppose not,\n that is,\n there is an edge~$e=\\{i,j\\}$ with~$e\\cap S=\\emptyset$.\n This contradict the fact that there is a path~$(v_e,v,x)$ in~$G'[F][V_1]$ with~$v\\in V_G$.\n It remains to show that~$|S|\\leq k$.\n As~$F$ contains an edge~$\\{v_e,v\\}$ with~$v \\in V_G$ for every~$e \\in E$,\n $|S| = |F\\cap \\{\\{v_i,x\\} \\mid i\\in V\\}| \\le k'-(|F'|+m) = k$,\n and the claim follows.\n\\end{proof}\n\n\\subsection{Over at most two patches to every other (\\texorpdfstring{\\boldmath{$d\\ge 3$}}{d\u2a7e3})}\n\\label{ssec:3dgbp}\n\n\\DgbpAcr[3]{} turns out to be \\cocl{NP}-hard even for two habitats.\n\n\\begin{proposition}\n \\label{prop:3dgbp}\n \\DgbpAcr[3]{} is \\cocl{NP}-hard even if~$r=2$.\n\\end{proposition}\n\n\\begin{construction}\n \\label{constr:3dgbp}\n Let~$\\mathcal{I}=(G,k)$ with~$G=(V,E)$ be an instance of~\\prob{Vertex Cover}.\n Construct an instance~$\\mathcal{I}'\\ensuremath{\\coloneqq} (G',\\{V_1,V_2\\},k')$\n with~$k'=2m+n+k+4$ as follows\n (see~\\cref{fig:3dgbp} for an illustration).\n \\begin{figure}[t]\n \\centering\n \\begin{tikzpicture}\n\n \\def1.1{1.1}\n \\def0.9{0.9}\n \\def1.5*\\yr{1.5*0.9}\n \\def0.125{0.125}\n \\tikzstyle{xnode}=[circle,fill,scale=0.6,draw,color=cyan!50!green]\n \\tikzstyle{xnodef}=[circle,scale=0.6,draw,color=cyan!50!green,fill=white]\n \\tikzstyle{xnodex}=[label={[xshift=-0.4*1.1 cm]0:$\\cdots$},color=cyan!50!green]\n \\tikzstyle{xedge}=[thick,-,color=cyan!50!green]\n\n \\foreach \\x\/\\y in {1\/xnode,2\/xnodex,3\/xnode,4\/xnodex,5\/xnode,6\/xnodex,7\/xnode,8\/xnodex,9\/xnode}{\n \\node (e\\x) at (\\x*1.1-5*1.1,0)[\\y]{};\n }\n \\draw[dotted,rounded corners,draw] ($(e1.south west)-(0.125,0.125)$) rectangle ($(e9.north east)+(0.125,0.125)$);\n \\foreach \\x\/\\y in {1\/xnodef,2\/xnodex,3\/xnodef,4\/xnodex,5\/xnodef,6\/xnodex,7\/xnodef}{\n \\node (v\\x) at (\\x*1.1-4*1.1,-1*1.5*\\yr)[\\y]{};\n }\n \\draw[dotted,rounded corners,draw] ($(v1.south west)-(0.125,0.125)$) rectangle ($(v7.north east)+(0.125,0.125)$);\n \n \\node (x) at (0*1.1,-2*1.5*\\yr)[xnodef]{};\n \\node (xp) at (6*1.1,-2*1.5*\\yr)[xnodef]{};\n\n \\node (y) at (5*1.1,-1.5*1.5*\\yr)[xnodef]{};\n \\node (yp) at (6*1.1,-1*1.5*\\yr)[xnodef]{};\n \\node (z) at (5*1.1,-0.5*1.5*\\yr)[xnodef]{};\n\n \\foreach \\x\/\\y in {3\/3,3\/5,5\/3,5\/2,7\/5,7\/6}{\n \\draw[xedge] (e\\x) to (v\\y);\n }\n \\foreach \\x in {1,...,9}{\n \\draw[xedge] (e\\x) to [out=-15,in=180](z);\n }\n \\draw[xedge] (z) to (yp);\n \\foreach \\x\/\\y in {2\/1,8\/7}{\n \\draw[draw=none] (e\\x) to node[midway]{$\\vdots$}(v\\y);\n 
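    % the draw=none edges above carry only the \vdots placeholders standing in for the omitted vertices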
}\n \\foreach \\x in {1,...,7}{\n \\draw[xedge] (v\\x) to [out=-15,in=180](y);\n }\n \\draw[xedge] (yp) to (y);\n \\foreach \\x in {1,...,7}{\n \\draw[xedge] (x) to (v\\x);\n }\n \\draw[xedge] (x) \n to (xp) to (yp);\n \n \\contourlength{0.09em}\n \\newcommand{\\ltb}[3]{\n \\node at (#1)[label={[align=center,font=\\footnotesize,color=black]\\contour*{white}{#2}},label={[align=center,font=\\scriptsize,color=black]-90:\\contour*{white}{#3}}]{};\n }\n \n \\ltb{e3}{$e=\\{i,j\\}$}{$\\in V_1$};\n \\ltb{e5}{$e'=\\{i,j'\\}$}{$\\in V_1$};\n \\ltb{e7}{$e''=\\{i',j\\}$}{$\\in V_1$};\n \n \\ltb{x}{$x$}{$\\in V_1$};\n \\ltb{xp}{$x'$}{$\\in V_1$};\n \\ltb{z}{$z$}{$\\in V_1$};\n \\ltb{y}{$y$}{$\\in V_1,V_2$};\n \\ltb{yp}{$y'$}{$\\in V_1,V_2$};\n \n \\ltb{v3}{$v_i$}{$\\in V_1,V_2$};\n \\ltb{v5}{$v_j$}{$\\in V_1,V_2$};\n \\end{tikzpicture}\n \\caption{Illustration for~\\DgbpAcr[3]{} with~$r=2$. ($k'=2m+n+k+4$)}\n \\label{fig:3dgbp}\n \\end{figure}\n Add the vertex sets~$V_E\\ensuremath{\\coloneqq} \\{v_e\\mid e\\in E\\}$,\n $V_G\\ensuremath{\\coloneqq}\\{v_i\\mid i\\in V\\}$,\n and~$V^*\\ensuremath{\\coloneqq} \\{x,x',y,y',z\\}$.\n Add all vertices to~$V_1$,\n and all vertices in~$V_G$ as well as~$y$ and~$y'$ to~$V_2$.\n Next,\n for each~$e=\\{i,j\\}\\in E$,\n connect~$v_e$ with~$v_i$, $v_j$, and~$z$.\n Denote by~$E_z$ the set of edges incident with~$z$.\n For each~$i\\in V$,\n connect~$v_i$ with~$x$ and~$y$.\n Denote by~$E_y$ the set of edges incident with~$y$.\n Finally,\n add the edge set~$E^*\\ensuremath{\\coloneqq}\\{\\{x,x'\\},\\{y,y'\\},\\{z,y'\\},\\{y',x'\\}\\}$.\n\\end{construction}\n\n\\begin{observation}\n \\label{obs:3dgbp-aux}\n Let~$\\mathcal{I}'$ be the instance obtained from some instance~$\\mathcal{I}$ using~\\cref{constr:3dgbp}.\n Then every solution~$F$ for~$\\mathcal{I}'$ (if there is one) contains the set of edges~$F'\\ensuremath{\\coloneqq} E_y \\cup E_z \\cup E^*$.\n Further, $\\dist_{G[F']}(u, v) \\le 3$ for every~$u, v \\in V^*$ and for every~$u \\in V_G \\cup V_E$ and~$v \\in \\{y, y', z, x'\\}$.\n\\end{observation}\n\n\\begin{proof}\n Let~$\\mathcal{I}'$ be a \\textnormal{\\texttt{yes}}-instance and\n let~$F$ be a solution.\n Since~$G[V_2]$ is a tree and~$E_y = E(G[V_2])$,\n we have~$E_y \\subseteq F$.\n If for some~$e \\in E$, $\\{v_e, z\\} \\notin F$, \n then~$\\dist_{G[F]}(v_e, z) > 3$.\n If~$\\{z,y'\\}\\not\\in F$, \n then~$\\dist_{G[F]}(z, y') > 3$.\n Thus,\n $E_z \\subseteq F$.\n If~$\\{y',x'\\}\\not\\in F$, \n then~$\\dist_{G[F]}(z, x') > 3$.\n If~$\\{x,x'\\}\\not\\in F$, \n then~$\\dist_{G[F]}(x, x') > 3$.\n Thus,\n $E^* \\subseteq F$.\n The distances claimed for~$G[F']$ are immediate.\n\\end{proof}\n\n\\begin{lemma}\n Let~$\\mathcal{I}'$ be the instance obtained from an instance~$\\mathcal{I}$ using \\cref{constr:3dgbp}.\n Then~$\\mathcal{I}$ is a \\textnormal{\\texttt{yes}}-instance if and only if~$\\mathcal{I}'$ is a \\textnormal{\\texttt{yes}}-instance.\n\\end{lemma}\n\\begin{proof}\n $(\\Rightarrow)\\quad${}\n Let~$S \\subseteq V$ be a vertex cover of~$G$ of size~$k$.\n We construct a solution~$F \\subseteq E'$ as follows.\n Let~$g \\colon E \\to V'$ be an auxiliary function with~$g(\\{i, j\\}) = v_{\\min(\\{i, j\\}\\cap S)}$.\n Let~$F_V\\ensuremath{\\coloneqq}\\{\\{v_i, x\\} \\mid i \\in S\\}$\n and let~$F_E\\ensuremath{\\coloneqq}\\{\\{v_e, g(e)\\} \\mid e \\in E\\}$.\n Then we set~$F\\ensuremath{\\coloneqq} F' \\cup F_V \\cup F_E$, \n where~$F'$ is as defined in \\cref{obs:3dgbp-aux}.\n Note that~$|F| = n + m + 4 + k + m = k'$. 
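    (The summands can be read off \cref{constr:3dgbp}: $|E_y|=n$, $|E_z|=m$, and $|E^*|=4$ give
    $|F'|=n+m+4$, while $|F_V|=|S|=k$ and $|F_E|=|E|=m$; these edge sets are pairwise disjoint,
    so indeed $|F|=n+m+4+k+m=2m+n+k+4=k'$.)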
\n Since~$S$ is a vertex cover, \n every vertex~$v_e \\in V_E$ is adjacent to a vertex~$v_i$, $i \\in S$, \n which in turn is adjacent to~$x$ in~$G[F]$ due to~$F_V$.\n Further, as every~$v_i$, $i \\in V \\setminus S$, \n has distance two to every vertex~$v_j$, $j \\in S$, \n we have~$\\dist_{G[F]}(v_i, v_e) = \\dist_{G[F]}(v_i, x) \\le 3$ for every~$v_e \\in V_E$.\n Hence, $F$ is a solution for~$\\mathcal{I}'$.\n \n $(\\Leftarrow)\\quad${}\n Let~$F \\subseteq E'$ be a solution.\n Observe that in~$G$ the only path of length at most three from a vertex in~$V_E$ to~$x$ is via a vertex in~$V_G$ (and thus of length exactly two).\n Hence, in~$G[F]$, for every~$v_e \\in V_E$ there exists a~$v_i \\in V_G$ such that~$\\{v_e, v_i\\}, \\{v_i, x\\} \\in F$.\n It follows that\n $G[F]$ contains at least~$m$ edges between~$V_E$ and~$V_I$ and hence at most~$k$ edges between~$V_I$ and~$x$.\n We claim that~$S\\ensuremath{\\coloneqq}\\{i \\in V \\mid \\{v_i, x\\} \\in G[F]\\}$ is a vertex cover in~$G$.\n First note that~$|S|\\leq |F|-(|F'|+m)=k$,\n where~$F' \\subseteq F$ is the set of edges from~\\cref{obs:3dgbp-aux}.\n Suppose not,\n that is,\n $S$ is no vertex cover and hence there is an edge~$e\\in E$ with~$e\\cap S=\\emptyset$.\n This contradicts the fact that~$\\dist_{G[F][V_1]}(v_e,x)\\leq 3$.\n\\end{proof}\n\n\\section{Conclusion, Discussion, and Outlook}\n\nWe modeled the problem of placing wildlife crossings\nwith three different problem (families):\n\\RgbpAcr{},\n\\CgbpAcr{},\nand~\\DgbpAcr{}.\nWe studied the practically desired cases~$d=1$ and $d=2$, \nas well as the cases~$d\\ge3$.\nFor all three problems,\nwe settled the classic as well as the parameterized complexity \n(regarding the number~$k$ of wildlife crossings and the number~$r$ of habitats),\nexcept for the parameterized complexity of \\RgbpAcr{} regarding~$r$.\n\n\\paragraph{Discussion.}\nWe derived an intriguing interrelation of\nconnection requirements,\ndata quality, \nand computational and parameterized complexity.\nWhile each problem admits its individual complexity fingerprint,\neach of them depends highly on the value of~$d$,\nthe level of the respective connectivity constraint.\nThis value can reflect the quality of the given data,\nsince naturally we assume that habitats are connected.\nThe worse the data, \nthe stronger are the relaxations according to the connectivity of habitats,\nand thus the larger is the value of~$d$.\nOur results show that having very small ($d=2$) data gaps already leads to the problems becoming \\cocl{NP}-hard,\nand that even larger gaps ($d\\geq 3$) yield \\W{1}-hardness (when parameterized by~$k$).\nHence, knowledge about habitats, connections, and data quality\ndecide which problem models can be applied,\nthus influencing the computation power required to determine an optimal placement of wildlife crossings.\nFor instance,\nfor larger networks,\nwe recommend to ensure data quality such that one of our proposed problems for~$d\\leq 2$ becomes applicable.\nThis\nin turn\nemphasizes the importance of careful habitat recognition.\n\nIn our models,\nwe neglected that different positions possibly lead to \\emph{different} costs of building bridges \n(i.e.,\\ edge~costs).\nThis neglect is justified when differentiating between types of bridges (and thus their costs)\nis not necessary \n(e.g., if the habitat's species share preferred types of green bridges,\nand the underlying human-made transportation lines are homogeneous).\nIn other scenarios,\nadditionally considering these costs may be beneficial for 
decision-making.\n\n\n\\paragraph{Outlook and open problems.}\nFor a final version,\nwe plan to continue our study with \napproximation and (refined) data reduction for our three problems,\nas well as planar input graphs,\nand to settle \\RgbpAcr[1]{}'s complexity regarding~$r$. \nNote that we obtained an~$\\O(rd)$-approximation for~\\RgbpAcr{},\nwhich does not directly transfer to the other two problems.\nFPT approximations may be lucrative.\nFor small~$d\\geq 2$,\nall problems allow for problems kernels where the number of vertices only depends on~$k$.\nIf more effective preprocessing is possible,\nthen data reduction on the habitats is required.\nIf the underlying street network is planar,\nthen the input graphs to our problems can be seen as their planar dual.\nThus, \ninput graphs may be planar in the applications.\n\nMoreover,\ninteresting directions for future work are,\nfor instance,\ndistinguishing types of green bridges to place,\ntaking into account possible movement directions within habitats (connectivity in directed graphs),\nidentifying real-world driven problem parameters leading to tractability,\nor the problem of maintaining and servicing green bridges over time under a possible seasonal change of wildlife habitats \n(temporal graph modeling could fit well).\n\n\\bigskip \n\n{\n \\let\\clearpage\\relax\n \\renewcommand{\\url}[1]{\\href{#1}{$\\ExternalLink$}}\n ","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe study of special knots in contact three manifolds provided great insight\ninto the geometry and topology of three manifolds. In particular, the study\nof Legendrian knots (ones tangent to the contact planes) has been useful\nin distinguishing homotopic contact structures on $T^3$ \\cite{k} and\nhomology spheres \\cite{am}. Moreover, Rudolph \\cite{r} has shown\nthat invariants of Legendrian knots can be useful in understanding\nslicing properties of knots. The first example of the use of knot \ntheory in contact topology was in the work of Bennequin. In \\cite{be}\nBennequin used transversal knots (ones transversal to the contact\nplanes) to show that $\\hbox{$\\mathbb R$} ^3$ has exotic contact structures. This was the\ngenesis of Eliashberg's insightful tight versus overtwisted dichotomy in three dimensional\ncontact geometry.\n\nIn addition to its importance in the understanding of contact geometry, the study\nof transversal and Legendrian knots is quite interesting in its own right.\nQuestions concerning transversal and Legendrian knots have most prominently appeared\nin \\cite{el:knots} and Kirby's problem list \\cite{kirby}.\nCurrently there are very few general theorems concerning the classification\nof these knots. In \\cite{el:knots}, Eliashberg classified transversal unknots\nin terms of their self-linking number. In \\cite{ef}, Legendrian unknots\nwere similarly classified. In this paper we will extend this classification \nto positive transversal torus knots\\footnote{By ``positive transversal torus knot'' we mean a positive (right handed) \n\ttorus\n\tknot that is transversal to a contact structure.}. In particular we prove:\n\\begin{ttm}\n\tPositive transversal \n\ttorus knots\n\tare transversely isotopic if and only if they have the same topological knot\n\ttype and the same self-linking number.\n\\end{ttm}\nIn the process of proving this result we will examine {\\em transversal stabilization}.\nThis is a simple method for creating one transversal knot from another. 
By \nshowing that all positive transversal torus knots whose self-linking number is less than \nmaximal come from this stabilization process we are able to reduce the above theorem\nto the classification of positive transversal torus knots with maximal self-linking number.\nStabilization also provides a general way to approach the classification problem for \nother knot types. For example, we can reprove Eliashberg's classification\nof transversal unknots using stabilization ideas and basic contact topology.\n\nIt is widely believed that the\nself-linking number is not a complete invariant for transversal knots. However,\nas of the writing of this paper, there is no known knot type whose transversal realizations\nare not determined by their self-linking number. For Legendrian knots, in contrast,\nEliashberg and Hofer (currently unpublished)\nand Chekanov \\cite{chek} have produced examples of Legendrian knots that are not determined\nby their corresponding invariants.\n\nIn Section~\\ref{basic} we review some standard facts concerning contact geometry\non three manifolds. In Section~\\ref{sec:main} we prove our main theorem modulo some\ndetails concerning the characteristic foliations on tori which are proved in\nSection~\\ref{tori} and some results on stabilizations proved in Section~\\ref{stab}.\nIn the last section we discuss some open questions.\n\n\\rk{Acknowledgments}\nThe author gratefully acknowledges the support of an NSF \nPost-Doctoral Fellowship (DMS--9705949) and Stanford University.\nConversations with Y Eliashberg and E Giroux were helpful\nin preparing this paper.\n\n\n\\section{Contact structures in three dimensions}\\label{basic}\n\nWe begin by recalling some basic facts from contact topology. For a more detailed\nintroduction, see \\cite{a:etall, gi:survey}.\nRecall an orientable plane field $\\xi$ is a \\dfn{contact structure} on a three manifold\nif $\\xi=\\hbox{ker } \\alpha$ where $\\alpha$ is a nondegenerate 1--form for which\n$\\alpha\\wedge d\\alpha\\not=0$. Note $d\\alpha$ induces an orientation on $\\xi$.\nTwo contact structures are called \\dfn{contactomorphic} if there is a diffeomorphism\ntaking one of the plane fields to the other.\nA contact structure $\\xi$ induces a singular\nfoliation on a surface $\\Sigma$ by integrating the singular line field\n$\\xi\\cap T\\Sigma$. This is called the \\dfn{characteristic foliation} and is denoted\n$\\Sigma_\\xi$. Generically, the singularities are elliptic (if local degree is 1)\nor hyperbolic (if the local degree is $-1$). If $\\Sigma$ is oriented then the singularities\nalso have a sign. A singularity is positive (respectively negative) if the orientations\non $\\xi$ and $T\\Sigma$ agree (respectively disagree) at the singularity. \n\n\\begin{lem}[Elimination Lemma \\cite{gi:convex}]\\label{lem:elimination}\n\tLet $\\Sigma$ be a surface in a contact $3$--manifold $(M,\\xi)$. 
Assume\n\tthat $p$ is an elliptic and $q$ is a hyperbolic singular point in \n\t$\\Sigma_\\xi$, they both have the same sign and there is a leaf $\\gamma$\n\tin the characteristic foliation $\\Sigma_\\xi$ that connects $p$ to $q$.\n\tThen there is a $C^0$--small isotopy $\\phi\\colon\\thinspace \\Sigma\\times[0,1]\\to M$ such that\n\t$\\phi_0$ is the inclusion map, $\\phi_t$ is fixed on $\\gamma$ and outside any \n\t(arbitrarily small) pre-assigned\n\tneighborhood $U$ of $\\gamma$ and $\\Sigma'=\\phi_1(\\Sigma)$ has no singularities\n\tinside $U$.\n\\end{lem}\n\nIt is important to note that after the above cancellation there is a curve in the \ncharacteristic foliation on which the singularities had previously sat. \nIn the case of positive\nsingularities this curve will consist of the (closure of the) stable manifolds of the hyperbolic point\nand {\\em any} arc leaving the elliptic point (see \\cite{ef, e:dis}), and similarly for the negative singularity case.\nOne may also reverse this process and add a canceling pair of singularities along \na leaf in the characteristic foliation. It is also important to note:\n\n\\begin{lem}\n\tThe germ of the contact structure $\\xi$ along a surface $\\Sigma$ is determined \n\tby $\\Sigma_\\xi$.\n\\end{lem}\n\nNow recall that a contact structure $\\xi$ on $M$ is called \\dfn{tight} if no disk embedded in \n$M$ contains a limit cycle in its characteristic foliation, otherwise it is called \n\\dfn{overtwisted}. The standard contact structure on $S^3$,\ninduced from the complex tangencies to $S^3=\\partial B^4$ where $B^4$ is the\nunit 4--ball in $\\hbox{$\\mathbb C$} ^2$, is tight.\n\nA closed curve $\\gamma\\colon\\thinspace S^{1}\\to M$ in a contact manifold $(M,\\xi)$ is called \n\\dfn{transversal} if $\\gamma'(t)$ is transverse to $\\xi_{\\gamma(t)}$ \nfor all $t\\in S^{1}$. Notice a transversal curve can be \n\\dfn{positive} or \\dfn{negative} according as $\\gamma'(t)$ agrees with \nthe co-orientation of $\\xi$ or not. We will restrict our attention to \npositive transversal knots (thus in this paper ``transversal'' means \n``positive transversal''). It can be shown that any curve can be made \ntransversal by a $C^{0}$ small isotopy. It will be useful to note:\n\n\\begin{lem}[See \\cite{el:knots}]\\label{extend}\n\tIf $\\psi_t\\colon\\thinspace S^1\\to M$ is a transversal isotopy, then there\n\tis a contact isotopy $f_t\\colon\\thinspace M\\to M$ such that $f_t\\circ \\psi_0=\\psi_t$.\n\\end{lem}\n\nGiven a transverse knot $\\gamma$ in $(M,\\xi)$ that bounds a surface $\\Sigma$\nwe define the \\dfn{self-linking number}, $l(\\gamma)$, of $\\gamma$ as follows: \ntake a nonvanishing vector field $v$ in $\\xi\\vert_{\\gamma}$ that \nextends to a nonvanishing vector field in $\\xi\\vert_{\\Sigma}$ and let $\\gamma'$\nbe $\\gamma$ slightly pushed along $v$. Define\n$$l(\\gamma,\\Sigma)=I(\\gamma',\\Sigma),$$\nwhere $I(\\,\\cdot\\,,\\,\\cdot\\,)$ is the oriented intersection number.\nThere is a nice relationship between $l(\\gamma,\\Sigma)$ and the \nsingularities of the characteristic foliation of $\\Sigma$. Let \n$d_{\\pm}=e_{\\pm}-h_{\\pm}$ where $e_{\\pm}$ and $h_{\\pm}$ are the \nnumber of $\\pm$ elliptic and hyperbolic points in the characteristic \nfoliation $\\Sigma_{\\xi}$ of $\\Sigma$, respectively. 
In \\cite{be} it \nwas shown that\n\\begin{equation}\n\tl=d_{-}-d_{+}.\n\\end{equation}\nWhen $\\xi$ is a {\\em tight} contact structure and $\\Sigma$ is a \n{\\em disk},\nEliashberg \\cite{el:twenty} has shown, using the elimination lemma, how to \neliminate all the positive hyperbolic and negative elliptic points \nfrom $\\Sigma_{\\xi}$. Thus in a \ntight contact structure when $\\gamma$ is an unknot $l(\\gamma,\\Sigma)$ is always negative.\nMore generally one can show (see \\cite{be, el:twenty}) that\n\\begin{equation}\\label{lbound}\n\tl(\\gamma)\\leq -\\chi(\\Sigma),\n\\end{equation}\nwhere $\\Sigma$ is a Seifert surface for $\\gamma$ and $\\chi(\\Sigma)$ is its Euler number.\n\nAny odd negative integer can be realized as the self-linking number for some \ntransversal unknot.\nThe first general result concerning the classification of transversal knots\nwas the following:\n\n\\begin{thm}[Eliashberg \\cite{el:knots}]\\label{unknots}\n\tTwo transversal unknots are transversely isotopic if and only if \n\tthey have the same self-linking number.\n\\end{thm}\n\nLet $\\mathcal{T}$ be the transversal isotopy classes of transversal knots in $S^3$ with its unique\ntight contact structure. Let $\\mathcal{K}$ be the isotopy classes of knots in $S^3$.\nGiven a transversal knot $\\gamma\\in \\mathcal{T}$ we have two pieces of information: its knot type \n$[\\gamma]\\in\\mathcal{K}$ and its self-linking number $l(\\gamma)\\in\\hbox{$\\mathbb Z$} $. Define \n\\begin{equation}\\label{tmap}\n\t\\phi\\colon\\thinspace \\mathcal{T}\\to\\mathcal{K}\\times \\hbox{$\\mathbb Z$} : \\gamma\\mapsto ([\\gamma],l(\\gamma)).\n\\end{equation}\nThe main questions concerning transversal knots can be phrased in terms of the image\nof this map and preimages of points. In particular the above results say that $\\phi$\nis onto \n$$U=[\\hbox{unknot}]\\times\\{\\hbox{negative odd integers}\\}$$ and $\\phi$ is one-to-one\non $\\phi^{-1}(U)$.\n\nWe will also need to consider Legendrian knots. A knot $\\gamma$ is a \\dfn{Legendrian knot} if\nit is tangent to $\\xi$. The contact structure $\\xi$ defines a canonical framing on a Legendrian\nknot $\\gamma$. If $\\gamma$ is null homologous we may associate a number to this framing which\nwe call the \\dfn{Thurston--Bennequin invariant} of $\\gamma$ and denote it $\\hbox{tb}(\\gamma)$.\nIf we let $\\Sigma$ be the surface exhibiting the null homology of $\\gamma$ then we may trivialize\n$\\xi$ over $\\Sigma$ and use this trivialization to measure the rotation of $\\gamma'(t)$ around\n$\\gamma$. This number $r(\\gamma)$ is called the \\dfn{rotation number} of $\\gamma$. Note that\nthe rotation number depends on an orientation on $\\gamma$. From an oriented \nLegendrian knot $\\gamma$ one can obtain canonical positive and negative transversal knots\n$\\gamma_\\pm$ by pushing $\\gamma$ by vector fields tangent to $\\xi$ but transverse to\n$\\gamma'(t)$. One may compute\n\\begin{equation}\n\tl(\\gamma_\\pm)=\\hbox{tb}(\\gamma)\\mp r(\\gamma).\n\\end{equation}\nThis observation combined with \\eqn{lbound} implies\n\\begin{equation}\\label{tb-bound}\n\t\\hbox{tb}(\\gamma)+|r(\\gamma)|\\leq -\\chi(\\Sigma).\n\\end{equation}\nConsider an oriented (nonsingular) foliation $\\mathcal{F}$ on a torus $T$. The foliation is \nsaid to have a \\dfn{Reeb component} if two oppositely oriented periodic orbits cobound an\nannulus containing no other periodic orbits. \n\n\\begin{lem}\\label{curveinT}\n\tConsider a torus $T$ in a contact three manifold $(M,\\xi)$. 
If the characteristic foliation\n\ton $T$ is nonsingular and contains no Reeb components then \n\tany closed curve on $T$ may be isotoped to be transversal to $T_\\xi$ or into a leaf\n\tof $T_\\xi$. Moreover there is at most one homology class in $H_1(T)$ that can\n\tbe realized by a leaf of $T_\\xi$. \n\\end{lem}\n\nNow let $\\xi$ be a tight contact structure on a solid torus $S$ with nonsingular characteristic\nfoliation on it boundary $T=\\partial S$. It is easy to arrange for $T_\\xi$ to have no Reeb \ncomponents \\cite{ml}.\nSince $\\xi$ is tight the lemma above implies the meridian $\\mu$\ncan be made transversal to $T_\\xi$. We say $S$ has self-linking number $l$ if $l=l(\\mu)$\n(ie, the self-linking number of $S$ is the self-linking number of its meridian).\n\n\\begin{thm}[Makar--Limanov \\cite{ml}]\\label{solid_tori}\n\tAny two tight contact structures on $S$ which induce the same nonsingular\n\tfoliation on the boundary and have self-linking number $-1$ are contactomorphic.\n\\end{thm}\n\n\n\\section{Positive transversal torus knots}\\label{sec:main}\n\nLet $U$ be an unknot in a 3--manifold $M$, $D$ an embedded disk that it bounds\nand $V$ a tubular neighborhood of $U$. The boundary $T$ of $V$ is an embedded torus in $M$,\nwe call such a torus a \\dfn{standardly embedded torus}. \nLet $\\mu$ be the unique curve on $T$ that bounds a disk in $V$ and\n$\\lambda=D\\cap V$. Orient $\\mu$ arbitrarily and \nthen orient $\\lambda$ so that $\\mu, \\lambda$ form a positive basis for $H_1(T)$ where \n$T$ is oriented as the boundary of $V$. Up to homotopy any curve in \n$T$ can be written as $p\\mu + q\\lambda$, we shall denote this curve by $K_{(p,q)}$. \nIf $p$ and $q$ are relatively prime\nthen $K_{(p,q)}$ is called a {\\em $(p,q)$--torus knot.} If $pq>0$ we say $K(p,q)$ is a \n\\dfn{positive} torus knot otherwise we call it \\dfn{negative}. One\nmay easily compute that the Seifert surface of minimal genus for $K_{(p,q)}$\nhas Euler number $|p|+|q|-|pq|$. Thus for a transversal torus knot Equation~\\ref{lbound} implies \n\\begin{equation}\n\tl(K_{(p,q)})\\leq -|p|-|q|+|pq|.\n\\end{equation}\nIn fact, if $\\overline{l}_{(p,q)}$ denotes the maximal self-linking number for a \ntransversal $K_{(p,q)}$ then one may easily check that\n\\begin{equation}\n\t\\overline{l}_{(p,q)}=-p-q+pq,\n\\end{equation}\nif $p,q>0$, ie, for a positive torus knot. (Note: for a positive transversal torus knot \nLemma~\\ref{basis}\nsays we have $p,q>0$ not just $pq>0$.) From the symmetries involved in the definition of\na torus knot we may assume that $p>q$, which we do throughout the rest of the paper.\nWe now state our main theorem.\n\n\\begin{thm}\\label{main}\n\tPositive transversal torus knots in a tight contact structure\n\tare determined up to transversal isotopy by \n\ttheir knot type and their self-linking number.\n\\end{thm}\n\n\\begin{rem}\n\t{\\em We may restate this theorem by saying\n\tthe map $\\phi$ defined in equation \\ref{tmap} is one-to-one when restricted to\n\t$$(\\hbox{pr}\\circ \\phi)^{-1}(\\hbox{positive torus knots})$$ \n\t(here $\\hbox{pr}\\colon\\thinspace \\mathcal{K}\\times \\hbox{$\\mathbb Z$} \\to \\mathcal{K}$ is projection). 
\n\tMoreover, the image of $\\phi$ restricted to the above set is \n\t$G=\\cup_{(p,q)} K_{(p,q)}\\times N(p,q)$ where the union\n\tis taken over relatively prime positive $p$ and $q$, and \n\t$N(p,q)$ is the set of odd integers less than or equal to $-p-q+pq$.}\n\\end{rem}\n\nWe first prove the following auxiliary result:\n\n\\begin{prop}\\label{aux}\n\tTwo positive transversal $(p,q)$--torus knots $K$ and $K'$ in a tight contact\n\tstructure with maximal self-linking number (ie, $l(K)=l(K')=\\overline{l}_{(p,q)}$) \n\tare transversally isotopic.\n\\end{prop}\n\n\\proof\nLet $T$ and $T'$ be tori standardly embedded in $M$ on which $K$ and $K'$, \nrespectively, sit.\n\n\\begin{lem}\\label{nonsingular}\n\tIf the self-linking number of $K$ is maximal then $T$ may be\n\tisotoped relative to $K$ so that the characteristic\n\tfoliation on $T$ is nonsingular.\n\\end{lem}\n\nThis lemma and the next are proved in the following section.\n\n\\begin{lem}\\label{isotopic}\n\tTwo transversal knots on a torus $T$ with nonsingular characteristic foliation\n\tthat are homologous are transversally isotopic, except possibly when there is a closed leaf\n\tin the foliation isotopic to the transversal knots.\n\\end{lem}\n\nOur strategy is to isotop $T$ onto $T'$, keeping $K$ and $K'$ transverse to $\\xi$, \nso that $K$ and $K'$ are\nhomologous, and thus transversally isotopic. We now show that $T$\ncan be isotoped into a standard form keeping $K$ transverse (and similarly for\n$K'$ and $T'$ without further mention). Let $V$ be \nthe solid torus that $T$ bounds (recall we are choosing $V$ so that $p>q$). \nLet $D_\\mu$ and $D_\\lambda$ be the disk that $\\mu$ and $\\lambda$\nrespective bound. Now observe:\n\n\\begin{lem}\\label{basis}\n\tWe may take $\\mu$ and $\\lambda$ to be positive transversal curves and with this\n\torientation $\\mu,\\lambda$ form a positive basis for $T=\\partial V$.\n\\end{lem}\n\n\\proof\nClearly we may take $\\mu$ and $\\lambda$ to be positive transversal knots, for if we could\nnot then \\lm{curveinT} implies that we may isotop one of them to a closed leaf in $T_\\xi$ contradicting\nthe tightness of $\\xi$. Thus we are left to see that \n$\\mu, \\lambda$ is a positive basis. Assume this is not the case.\nBy isotoping $T$ slightly we may assume that $T_\\xi$ has closed leaf\n(indeed if $T_\\xi$ does not already have a closed leaf then the isotopy will give an \nintervals worth of rotation numbers, and hence\nsome rational rotation numbers, for the return map\ninduced on $\\mu$ by $T_\\xi$). Let $C$ be one of these closed leaves and let $n=\\lambda\\cdot C$\nand $m=\\mu\\cdot C$. Note $n$ and $m$ are both positive since $\\mu$ and $\\lambda$ are\npositive transversal knots. Since $\\mu, \\lambda$ is not a positive basis $C$ is an $(n,m)$--torus knot. \nIn particular $C$ is a positive torus knot. Moreover, the framing on $C$ induced by $\\xi$ is\nthe same as the framing induced by $T$. Thus $\\hbox{tb}(C)=mn$ contradicting \\eqn{tb-bound}.\nSo $\\mu,\\lambda$ must be a positive basis for $T$.\n\\endproof\n\nNow let $m=l(\\mu)$ and $l=l(\\lambda)$ and recall $m,l\\leq -1$.\n\n\\begin{lem}\\label{l-formula}\n\tIf $\\gamma$ is a transversal $(p,q)$ knot on $T$ (with nonsingular \n\tcharacteristic foliation) then \n\t\\begin{equation}\\label{eqn:l-formula}\n\t\tl(\\gamma)=pm+ql+pq.\n\t\\end{equation}\n\\end{lem} \n\n\\proof\nLet $v$ be a section of $\\xi$ over an open 3--ball containing $T$ and its meridional and longitudinal\ndisks. 
If $C$ is a curve on $T$ then define\n$f(C)$ to be the framing of $\\xi$ over $C$ induced by $v$ relative to the framing of $\\xi$ \nover $C$ induced by $T$. Note $f$ descends to a map on $H_1(T)$ and $f(A+B)=f(A)+f(B)$\nwhere $A,B\\in H_1(T)$. One easily computes $f(\\mu)=m$ and $f(\\lambda)=l$. Thus \n$f(p\\mu+q\\lambda)=pm+ql$. Now for a transversal curve $C$ on $T$ the normal bundle to $C$ can be\nidentified with $\\xi$ thus $f(C)$ differs\nfrom $l(C)$ by the framing induced on $C$ by $T$ relative to the framing induced\non $C$ by its Seifert surface. So $l(C)=f(C)+pq=pm+ql+pq$. \n\\endproof\n\nThus since $K$ has maximal self-linking number we must have $m=l=-1$. \nNow by \\tm{solid_tori} we may find\na contactomorphism from $V$ to $S_f=\\{(r,\\theta,\\phi)\\in \\hbox{$\\mathbb R$} ^2\\times S^1|\nr\\leq f(\\theta,\\phi)\\}$ for some positive function $f\\colon\\thinspace T^2\\to \\hbox{$\\mathbb R$} $, with the \nstandard tight contact structure $\\hbox{ker}(d\\phi+r^2\\, d\\theta)$. \n\nClearly $T=\\partial S_f$ may be isotoped to $S_\\epsilon=\\{(r,\\theta,\\phi)\\in \\hbox{$\\mathbb R$} ^2\\times S^1|\nr<\\epsilon\\}$ for arbitrarily small $\\epsilon>0$. We now show this isotopy may be done\nkeeping our knot $K$ transverse to the characteristic foliation. To a foliation on $\\partial S_f$ we may\nassociate a real valued rotation number $r(S_f)$ for the return map on $\\mu$ induced\nby $(\\partial S_f)_\\xi$ (see \\cite{ml}). \nFor a standardly embedded torus this number must be negative since if not then some\nnearby torus would have a positive $(r,s)$ torus knot as a closed leaf in its characteristic\nfoliation violating the Bennequin inequality (as in the proof of Lemma~\\ref{basis}). \nSo as we isotop $\\partial S_f$ to\n$\\partial S_\\epsilon$ we may keep our positive torus knot transverse to the characteristic foliation\nby Lemma~\\ref{curveinT} (since closed leaves in $(\\partial S_f)_\\xi$ have slope $r(S_f)$ and\n$K$ has positive slope). \nThus we assume that the solid torus $V$ is contactomorphic to $S_\\epsilon$.\nIf $C$ is the core of $V (=S_\\epsilon)$ then it is a transversal unknot with self-linking \n$l(\\lambda)=-1$.\n\nFinally, let $V$ and $V'$ be the solid tori associated to the torus knots $K$ and $K'$ and\nlet $C$ and $C'$ be the cores of $V$ and $V'$. Now since $C$ and $C'$ are unknots with the\nsame self-linking number they are transversely isotopic. Thus we may think of $V$ and $V'$ as\nneighborhoods of the same transverse curve $C=C'$. From above, $V$ and $V'$ may both be shrunk \nto be arbitrarily small neighborhoods of $C$ keeping $K$ and $K'$ transverse to $\\xi$. Hence\nwe may assume that $V$ and $V'$ both sit in a neighborhood of $C$ which is contactomorphic\nto, say, $S_c$ (using the notation from the previous paragraph). By shrinking $V$ and $V'$ \nfurther we may assume they are the tori $S_\\epsilon$ and $S_{\\epsilon'}$ inside $S_c$\nfor some $\\epsilon$ and $\\epsilon'$. Note that this is not immediately obvious but follows\nfrom the fact that a contactomorphism from the standard model $S_f$ for, say, $V$ to \n$V\\subset S_c$ may be constructed to take a neighborhood of the core of $S_f$ to a\nneighborhood of the core of $S_c$.\nThis allows us to finally conclude that we may isotop\n$V$ so that $V=V'$. 
Now since $K$ and $K'$ represent the same homology class on $\\partial V$ and\nthey are both transverse to the foliation we may use Lemma~\\ref{isotopic} to transversely \nisotop $K$ to $K'$.\n\\endproof\n\nA transversal knot $K$ is called a \\dfn{stabilization} of a transversal knot $C$ if\n$K=\\alpha\\cup A$, $C=\\alpha\\cup A'$ and $A\\cup A'$ cobound a disk\nwith only positive elliptic and negative hyperbolic singularities (eg Figure~\\ref{fig:stab}). \n\\begin{figure}[ht]\n\t{\n\\epsfysize=2in\\centerline{\\relabelbox\\small\n\\epsfbox{stab.eps}\n\\relabel {A}{$A$}\n\\relabel {A'}{$A'$}\n\\adjustrelabel <-4pt,-1pt> {e}{$e_+$}\n\\relabel {h}{$h_-$}\n\\endrelabelbox}}\n\t\\caption{Stabilization disk}\n\t\\label{fig:stab}\n\\end{figure}\nWe say $K$ is obtained from $C$ by a \\dfn{single stabilization} if $K$ is a stabilization\nof $C$ and $l(K)=l(C)-2$ (ie, the disk that $A\\cup A'$ cobound is the one shown in \nFigure~\\ref{fig:stab}).\nThe key observation concerning stabilizations is the following:\n\n\\begin{thm}\\label{tm-stab}\n\tIf the transversal knots $K$ and $K'$ are single stabilizations of transversal knots\n\t$C$ and $C'$ then $K$ is transversely isotopic to $K'$ if $C$ is transversely \n\tisotopic to $C'$.\n\\end{thm}\n\nThis theorem will be proved in Section~\\ref{stab}. The proof of \\tm{main} is completed by an inductive\nargument using the following observation.\n\n\\begin{lem}\\label{canstab}\n\tIf $K$ is a positive transversal $(p,q)$--torus knot and $l(K)<\\overline{l}_{(p,q)}$ then $K$\n\tis a single stabilization of a $(p,q)$--torus knot with larger self-linking number.\n\\end{lem}\n\nThe proof of this lemma will be given in the next section following the proof of \\lm{nonsingular}.\n\n\\section{Characteristic foliations on tori}\\label{tori}\n\nIn this section we prove various results stated in Section~\\ref{sec:main} related to \nfoliations on tori.\nLet $T$ be a standardly embedded torus in $M^3$ and $K$ a positive $(p,q)$--torus\nknot on $T$ that is transverse to a tight contact structure $\\xi$. We are now ready to prove:\n\n\\proof[\\bf Lemma~\\ref{nonsingular}]\n{\\sl\n\tIf the self-linking number of $K$ is maximal then $T$ may be\n\tisotoped relative to $K$ so that the characteristic\n\tfoliation on $T$ is nonsingular.\n}\n\n\\proof\nBegin by isotoping $T$ relative to $K$ so that the number of singularities \nin $T_\\xi$ is minimal. Any singularities that are left must occur in pairs:\na positive (negative) hyperbolic $h$ and elliptic $e$ point connected by a stable\n(unstable) manifold $c$. Moreover, since $h$ and $e$ cannot be canceled without\nmoving $K$ we must have $c\\cap K\\not=\\emptyset$.\n\nNow $T\\setminus K$ is an annulus $A$ with the characteristic foliation flowing\nout of one boundary component and flowing in the other. Let $c'$ be the component\nof $c$ connected to $h$ in $A$. \nWe can have no periodic orbits in $A$ since such an orbit\nwould be a Legendrian $(p,q)$--torus knot with Thurston--Bennequin invariant\n$pq$ contradicting \\eqn{tb-bound}.\nThus the other stable (unstable) manifold $c''$ of $h$ will have to enter (exit) $A$ through the same\nboundary component. The manifolds $c'$ and $c''$ separate off a disk $D$ from $A$.\nWe may use $D\\subset T$ to push the arc $K\\cap D$ across $D$ to obtain another\ntransverse $(p,q)$--torus knot $K'$. It is not hard to show that\n$K$ is a stabilization of $K'$. In particular $l(K')>l(K)$, contradicting\nthe maximality of $l(K)$. 
Thus we could have not have had any singularities \nleft after our initial isotopy.\n\\endproof\n\nThe above proof provides some insight into \\lm{canstab}. Recall:\n\n\\proof[{\\bf \\lm{canstab}}]\n{\\sl\n\tIf $K$ is a positive transversal $(p,q)$--torus knot with and $l(K)<\\overline{l}_{(p,q)}$ then $K$\n\tis a single stabilization of a $(p,q)$--torus knot with larger self-linking number.\n}\n\n\\proof\nWe begin by noting that if $K$ is a stabilization of another transversal knot then it\nis also a single stabilization of some transversal knot. Thus we just demonstrate that \n$K$ is a stabilization of some transversal knot.\n\nFrom the above proof it is clear that if we cannot eliminate all the singularities in\nthe characteristic foliation of the torus $T$ on which $K$ sits then there is a disk on \nthe torus which exhibits $K$ as a stabilization. \n\nIf we can remove all the singularities from $T$ then by \\lm{l-formula} we know\nthat the self-linking number of, say, the meridian $\\mu$ is less than $-1$. Thus $\\mu$ bounds a \ndisk $D_\\mu$ containing only positive elliptic and at least one negative hyperbolic singularity. \nTo form a positive transversal torus knot $K''$ we can take $p$ copies of the meridian $\\mu$ and \n$q$ copies of the longitude $\\lambda$\nand ``add'' them (ie, resolve all the intersection points keeping the curve \ntransverse to the characteristic foliation). \nThis will produce a transversal knot on $T$ isotopic to $K$ thus transversely\nisotopic. Moreover, we may use the graph of singularities on $D_\\mu$ to show that $K''$, and hence $K$, \nis a stabilization. \n\\endproof\n\nWe end this section by establishing (a more general version of)\nLemma~\\ref{isotopic}.\n\\begin{lem}\n\tSuppose that $\\mathcal{F}$ is a nonsingular foliation on a torus\n\t$T$ and $\\gamma$ and $\\gamma'$ are two simple closed curves on $T.$\n\tIf $\\gamma$ and $\\gamma'$ are homologous and transverse to $\\mathcal{F}$\n\tthen they are isotopic through simple closed curves transverse to \n\t$\\mathcal{F},$ except possibly if $\\mathcal{F}$ has a closed leaf \n\tisotopic to $\\gamma.$\n\\end{lem}\n\n\\proof\nWe first note that if $\\gamma$ and $\\gamma'$ are disjoint and there are not closed\nleaves isotopic to them then the annulus that they cobound will provide the desired transverse\nisotopy. Thus we are left to show that we can make $\\gamma$ and $\\gamma'$ disjoint.\nWe begin by isotoping them so they intersect transversely. Now assume we have transversely \nisotoped them so that the number of their intersection points is minimal. We wish to show \nthis number is zero. Suppose not, then there are an even number of intersection points\n(since homologically their intersection is zero). \n\nUsing a standard innermost arc argument we may find a disk $D\\subset T$ such that\n$\\partial D$ consists of two arcs, one a subarc of $\\gamma$ the other a subarc of $\\gamma'$.\nWe can use the disk $D$ to guide a transverse isotopy of $\\gamma'$ that will decrease the\nnumber of intersections of $\\gamma$ and $\\gamma'$ contradicting our assumption of minimality.\nTo see this, note that the local orientability of the foliation implies that we can define\na winding number of $\\mathcal{F}$ around $\\partial D$. Moreover since $\\partial D$ is contractible\nand the foliation is nonsingular this winding number must be zero. 
Thus the foliation on \n$D$ must be diffeomorphic to the one shown in Figure~\\ref{fig:diskfol} where the desired isotopy is apparent.\n\\begin{figure}[ht]\n\t{\\epsfysize=2in\\centerline{\\epsfbox{diskfol.eps}}}\n\t\\caption{Foliation on $D$}\n\t\\label{fig:diskfol}\n\\end{figure}\n\\endproof\n\n\n\\section{Stabilizations of transversal knots}\\label{stab}\n\nThe main goal of this section is to prove \\tm{tm-stab}:\n\n\\proof[{\\bf \\tm{tm-stab}}]\n{\\sl\n\tIf the transversal knots $K$ and $K'$ are single stabilizations of transversal knots\n\t$C$ and $C'$ then $K$ is transversely isotopic to $K'$ if $C$ is transversely \n\tisotopic to $C'$.\n}\n\n\\proof\nSince $C$ and $C'$ are transversely isotopic we can assume that $C=C'$. \nLet $D$ and $D'$ be the disks that exhibit $K$ and $K'$ as stabilizations of\n$C$. Let $e,h$ and $e',h'$ be the elliptic\/hyperbolic pairs on $D$ and $D'$.\nFinally, let $\\alpha$ and $\\alpha'$ be the Legendrian arcs formed by the (closure of the) \nunion of stable manifolds of $h$ and $h'$. Using the characteristic foliation \non $D$ we may transversely isotop $K\\setminus C$ to lie arbitrarily close to $\\alpha$\n(and similarly for $K'$ and $\\alpha'$). We are thus done by the following simple lemmas.\n\n\\begin{lem}\n\tThere is a contact isotopy preserving $C$ taking $\\alpha\\cap C$ to \n\t$\\alpha'\\cap C$.\n\\end{lem} \n\nWorking in a standard model for a transverse curve\nthis lemma is quite simple to establish.\nThus we may assume that $\\alpha$ and $\\alpha'$ both touch $C$ at the same point.\n\\begin{lem}\n\tThere is a contact isotopy preserving $C$ taking $\\alpha$ to $\\alpha'$.\n\\end{lem}\nOnce again one can use a Darboux chart to check this lemma (for some details see \\cite{ef}).\n\\begin{lem}\n\tAny two single stabilizations of $C$ along a fixed Legendrian arc are transversely \n\tisotopic.\n\\end{lem}\nWith this lemma our proof of Theorem~\\ref{tm-stab} is complete.\n\\endproof\n\nWe now observe that using \\tm{tm-stab} we may reprove Eliashberg's result concerning \ntransversal unknots. The reader should note that this ``new proof'' is largely just\na reordering\/rewording of Eliashberg's proof.\n\n\\begin{thm}\n\tTwo transversal unknots are transversally isotopic if and only if they\n\thave the same self-linking number.\n\\end{thm}\n\n\\proof\nUsing \\tm{tm-stab} we only need to prove that two transversal unknots with self-linking\nnumber $-1$ are transversally isotopic, since by looking at the characteristic foliation\non a Seifert disk it is clear that a transversal unknot with self-linking number less\nthan $-1$ is a single stabilization of another unknot. But given a transversal unknot with self-linking\nnumber $-1$ we may find a disk that it bounds with precisely one positive elliptic \nsingularity in its characteristic foliation. Using the characteristic foliation on the disk \nthe unknot may be transversely isotoped into an arbitrarily small neighborhood of the elliptic \npoint. Thus given two such knots we may now find a contact isotopy of taking the elliptic point on\none of the Seifert disks to the elliptic point on the other. Since the Seifert disks are tangent\nat their respective elliptic points we may arrange that they agree in a neighborhood of the \nelliptic points. Now by shrinking the Seifert disks more we may assume that both unknots sit on the\nsame disk. 
It is now a simple matter to transversely isotop one unknot to the other.\n\\qed\n\n\\section{Concluding remarks and questions}\n\nWe would like to note that many of the techniques in this paper work for negative torus knots\nas well (though the proofs above do not always indicate this). There are two places where we cannot\nmake the above proofs work for negative torus knots, they are:\n\\begin{itemize}\n\\item From Equation~\\ref{eqn:l-formula} we cannot conclude that the self-linking numbers of\t\n\t$\\mu$ and $\\lambda$ are $-1$ when $l(K_{(p,q)})$ is maximal as we could for positive torus\n\tknots.\n\\item We cannot always conclude that a negative torus knot with self-linking less than maximal\n\tis a stabilization.\n\\end{itemize}\nDespite these difficulties we conjecture that negative torus knots are also determined by their\nself-linking number.\n\nLet $S=S^1\\times D^2$ and let $K$ be a $(p,q)$--curve on\nthe boundary of $S$. Now if $C$ is a null homologous knot in a three manifold $M$ then \nlet $f\\colon\\thinspace S\\to N$ be a diffeomorphism from $S$ to a neighborhood $N$ of $C$ in $M$ taking $S^1\\times\\{\\hbox{point}\\}$\nto a longitude for $C$. We now define the \\dfn{$(p,q)$--cable of $C$} to be the knot $f(K)$. \n\\begin{quest}\\label{conj}\n\tIf $\\mathcal{C}$ is the class of topological knots whose transversal realizations\n\tare determined up to transversal isotopy by their self-linking number, then is\n\t$\\mathcal{C}$ closed under cablings?\n\\end{quest}\nEliashberg's \nTheorem~\\ref{unknots} says that the unknot $U$ is in $\\mathcal{C}$. Our main Theorem~\\ref{main} says\nthat any positive cable of the unknot is in $\\mathcal{C}$. \nThis provides the first bit of evidence that the answer to the question might be YES, at least for\n``suitably positive'' cablings.\n\nGiven a knot type one might hope,\nusing the observation on stabilizations in this paper, to prove that transversal knots in this\nknot type are determined by their self-linking number as follows: First establishing that there is a \nunique transversal knot in this knot type with maximal self-linking number. Then showing that\nany transversal knot in this knot type that does not have maximal self-linking number is a stabilization.\nThe second part of this program is of independent interest so we ask the\nfollowing question:\n\\begin{quest}\n\tAre all transversal knots not realizing the maximal self-linking number of their knot type\n\tstabilizations of other transversal knots?\n\\end{quest}\nIt would be somewhat surprising if the answer to this question is YES in complete generality\nbut understanding when the answer is YES and when and why it is NO should provide insight\ninto the structure of transversal knots.\n\nWe end by mentioning that the techniques in this paper also seem to shed light on Legendrian\ntorus knots. It seems quite likely that their isotopy class may be determined by their \nThurston--Bennequin invariant\nand rotation number. We hope to return to this question in a future paper.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Interfacial Structures of Different Classical Water Force Fields}\nTo compare the interfacial structures of classical water force fields, we simulated the liquid-vapor interfaces of SPC\/E \\cite{Berendsen:1987gc}, TIP4P\/2005 \\cite{abascal2005general}, and TIP5P \\cite{Mahoney2000_SM} waters in NVT ensembles. Each system has 1944 water molecules in a slab geometry with dimensions of 5.0 nm $\\times$ 5.0 nm $\\times$ 3.0 nm. 
The system was equilibrated at $T=298 \\,\\text{K}$ and Particle Mesh Ewald was used to handle the long-range part of electrostatic interactions. The SHAKE and SETTLE algorithms were used to constrain the geometry of water. The simulations were performed using the LAMMPS package \\cite{Plimpton:1995fc} for SPC\/E and TIP4P and the GROMACS package \\cite{pronk2013gromacs} for TIP5P. Following the same procedure in Ref. \\onlinecite{Willard:2010da}, instantaneous liquid interface was constructed for each configuration of the generated statistics. Density profile, $\\rho(a)$, and orientational distribution, $P(\\cos\\theta_1,\\cos\\theta_2 | a)$, were computed according to Eqs. (7) and (11) in Ref. \\onlinecite{Willard:2010da}. Figure \\ref{fig:S1} shows the density profiles along with the reduced orientational distributions as defined in Eq. (5) of the main text. Both $\\rho(a)$ and $P(\\cos\\theta_\\text{OH}|a)$ exhibit the qualitatively same structural characteristics for three different force fields of water. \n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width = 4.0 in]{FigS1}\n\\caption{(a) Density profiles computed from the atomistic simulations of SPC\/E, TIP4P, and TIP5P. They are normalized by the bulk density, $\\rho_\\mathrm{b}$. (b) Orientational distributions, $P(\\cos\\theta_\\text{OH}|a)$, for SPC\/E (top), TIP4P (middle), and TIP5P (bottom). Color shading indicates the probability density.}\n\\label{fig:S1}\n\\end{figure}\n\nThe orientational polarization, $\\langle \\mu_{\\mathbf{\\hat{n}}}(a) \\rangle$, and polarizability, $\\langle (\\delta \\mu_{\\mathbf{\\hat{n}}}(a) )^2 \\rangle$, were computed according to Eqs.~(6) and (7) of the main text, and their interfacial profiles are plotted in Fig. \\ref{fig:S2}. The plots show that these interfacial properties also share the qualitatively same feature among three different force fields of water. It is notable that while there exists some quantitative difference in the scale of the polarization, the polarizability shows the almost identical trend across the force fields.\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width = 6.4 in]{FigS2_v2}\n\\caption{Interfacial polarization, $\\langle \\mu_{\\mathbf{\\hat{n}}}(a) \\rangle$, and polarizability, $\\langle (\\delta \\mu_{\\mathbf{\\hat{n}}}(a) )^2 \\rangle$, computed from the atomistic simulations of SPC\/E, TIP4P, and TIP5P.}\n\\label{fig:S2}\n\\end{figure}\n\n\\section{Optimization for the Hydrogen Bond Energy Parameter}\nTo determine the optimal model parameter, $\\epsilon_\\text{w}$, we compute the Kullback-Leibler divergence \\cite{Kullback1951},\n\\begin{equation}\n\\Gamma(\\epsilon_\\text{w}) = \\int d a \\int d (\\cos\\theta_\\text{OH}) \\, P_\\text{ref} (\\cos\\theta_\\text{OH} | a) \\ln \\left[ \\frac{ P_\\text{ref} (\\cos\\theta_\\text{OH} | a)}{P (\\cos\\theta_\\text{OH} | a,\\epsilon_\\text{w})} \\right],\n\\tag{S.1}\n\\label{gamma}\n\\end{equation}\nwhere $P_\\text{ref}(\\cos\\theta_\\text{OH} | a)$ and $P (\\cos\\theta_\\text{OH} | a,\\epsilon_\\text{w})$ are the reduced orientational distributions obtained from atomistic simulation and our mean-field model, respectively. This quantity measures how far the probability distribution of our model deviates from the reference given $\\epsilon_\\text{w}$. 
As indicated above, it becomes a function of $\\epsilon_\\text{w}$ and thus we choose the parameter that minimizes the \\emph{fitness} function, \n\\begin{equation}\n\\epsilon_\\text{w}^* = \\argmin_{\\epsilon_\\text{w}}\\{\\Gamma(\\epsilon_\\text{w})\\},\n\\tag{S.2}\n\\end{equation}\nas the effective hydrogen bond energy.\n\n\n\\section{Fluctuations in hydrogen bond geometry}\nWe quantify the distortions in hydrogen bond geometry and the associated energetics by analyzing the atomistic simulation results as follows.\nNotably, here relatively simple algorithms for quantifying various aspects of molecular geometry translate into complicated mathematical expressions. \nLet $\\mathbf{v}_{k}^{(i)} = \\vec{r}_k^{(i)} - \\vec{r}_\\text{O}^{(i)}$, where $\\vec{r}_\\text{O}^{(i)}$ is the position of the oxygen of the $i$th water molecule and $\\vec{r}_k^{(i)}$ is the position of the $k$th bonding site on it ($k=1,2$ indicate the hydrogens and $k=3,4$ indicate the lone pairs for TIP5P water). \nThen $\\mathbf{v}_{k}^{(i)}$ represents the direction of ideal hydrogen bonding coordination through the $k$th site of the $i$th molecule.\nFor the $j$th molecule neighboring the $i$th, its deviation angle from the ideal coordination to the $i$th molecule is given by\n\\begin{equation}\n\\phi_j^{(i)} = \\min_k \\left\\{\\cos^{-1}\\!\\left( \\frac{\\mathbf{v}_{k}^{(i)} \\cdot \\mathbf{b}_j^{(i)} }{\\left|\\mathbf{v}_{k}^{(i)}\\right| \\left|\\mathbf{b}_j^{(i)} \\right|}\\right) \\right\\},\n\\label{angle_phi}\n\\tag{S.3}\n\\end{equation}\nwhere $\\mathbf{b}_j^{(i)} = \\vec{r}_\\text{O}^{(j)}- \\vec{r}_\\text{O}^{(i)}$ represents the hydrogen bond vector of the $i$th molecule to the $j$th. \nThe corresponding index,\n\\begin{equation}\ny_j^{(i)} = \\argmin_k \\left\\{\\cos^{-1}\\!\\left( \\frac{\\mathbf{v}_{k}^{(i)} \\cdot \\mathbf{b}_j^{(i)} }{\\left|\\mathbf{v}_{k}^{(i)}\\right| \\left|\\mathbf{b}_j^{(i)} \\right|} \\right) \\right\\},\n\\tag{S.4}\n\\label{b_label}\n\\end{equation}\nindicates which ideal boding direction $\\mathbf{b}_j^{(i)}$ is distorted from and thus whether it belongs to donor or acceptor hydrogen bond. 
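For concreteness, Eqs.~(S.3)--(S.4) can be evaluated with a few lines of NumPy. The sketch below is illustrative only: the function and variable names are ours rather than those of the analysis scripts used for this work, and the minimum-image convention required for periodic simulation boxes is omitted.
\begin{verbatim}
import numpy as np

def deviation_angle_and_site(r_O_i, sites_i, r_O_j):
    """Evaluate Eqs. (S.3)-(S.4) for one neighbour j of molecule i.

    r_O_i   : (3,)  oxygen position of molecule i
    sites_i : (4,3) positions of the two H and two lone-pair sites of i
              (k = 1,2 donate, k = 3,4 accept, as for TIP5P)
    r_O_j   : (3,)  oxygen position of the neighbouring molecule j
    Returns (phi_j_i in radians, y_j_i in {1,2,3,4}).
    """
    v = sites_i - r_O_i                   # ideal bonding directions v_k^(i)
    b = r_O_j - r_O_i                     # hydrogen bond vector b_j^(i)
    cosines = (v @ b) / (np.linalg.norm(v, axis=1) * np.linalg.norm(b))
    angles = np.arccos(np.clip(cosines, -1.0, 1.0))
    k = int(np.argmin(angles))            # index of the closest ideal direction
    return angles[k], k + 1
\end{verbatim}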
\nThe inter-bond angle between the $j$th and $k$th molecules with respect to the $i$th is given by\n\\begin{equation}\n\\psi_{jk}^{(i)} = \\cos^{-1}\\!\\left( \\frac{\\mathbf{b}_j^{(i)} \\cdot \\mathbf{b}_k^{(i)} }{\\left|\\mathbf{b}_j^{(i)}\\right| \\left|\\mathbf{b}_k^{(i)}\\right|} \\right).\n\\tag{S.5}\n\\label{angle_psi}\n\\end{equation}\nThen the probability distribution of inter-bond angle at given distance $a$ is computed as,\n\\begin{equation}\nP_{\\alpha\\gamma}(\\psi|a) = \\frac{\\displaystyle \\left< \\sum_i \\sum_{j \\neq i} \\sum_{k \\neq i > j} \\delta(\\psi_{jk}^{(i)} - \\psi)\\delta(a^{(i)} - a) \\Theta_\\text{hyd}\\!\\left(\\mathbf{b}_j^{(i)} \\right) \\Theta_\\text{hyd}\\!\\left(\\mathbf{b}_k^{(i)} \\right) \\Phi_\\alpha \\!\\left(y_j^{(i)} \\right) \\Phi_\\gamma \\!\\left(y_k^{(i)} \\right) \\right>}{\\sin\\psi\\displaystyle\\left< \\sum_i \\sum_{j \\neq i} \\sum_{k \\neq i > j} \\delta(a^{(i)} - a)\\Theta_\\text{hyd}\\!\\left(\\mathbf{b}_j^{(i)} \\right) \\Theta_\\text{hyd}\\!\\left(\\mathbf{b}_k^{(i)} \\right) \\Phi_\\alpha \\!\\left(y_j^{(i)} \\right) \\Phi_\\gamma \\!\\left(y_k^{(i)} \\right)\\right>}~,\n\\tag{S.6}\n\\end{equation}\nwhere $a^{(i)}$ is the interfacial depth of the $i$th molecule,\n\\begin{equation}\n\\Theta_\\text{hyd}(\\vec{r}) = H(|\\vec{r}| - 2.4 \\,\\mathrm{\\AA})H(3.2 \\,\\mathrm{\\AA} - |\\vec{r}|)\n\\tag{S.7}\n\\end{equation} \nselects the molecules only in the first hydration shell of the $i$th molecule using the Heaviside step function, $H(x)$, and \n\\begin{equation}\n\\Phi_\\alpha \\left(x \\right) = \\left\\{ \\begin{array}{ll}\n\t\tH\\left(2.5 - x \\right), & \\quad \\textrm{if $\\alpha$ is Donor}, \\\\\n \tH\\left(x - 2.5 \\right), & \\quad \\textrm{if $\\alpha$ is Acceptor}, \\end{array} \\right.\n\\tag{S.8}\t\n\\end{equation}\nselects the neighboring molecule of specific bond type, $\\alpha$. Here the geometric factor, $\\sin\\psi$, corrects the bias coming from the variation of solid angle. \nFig. 3(b) of the main text shows the plots of $P_{\\alpha\\gamma}(\\psi|a)$ with $a = 10 \\,\\mathrm{\\AA}$ and $a = 0 \\,\\mathrm{\\AA}$ for the bulk and interface respectively ($0.1 \\,\\mathrm{\\AA}$ was used for the binning width of histogram).\nFor the plots in Fig. 4(a) of the main text, the distribution is integrated such that $P_{\\alpha\\gamma}(\\psi < 60^{\\circ}|a) = \\int_0^{60^{\\circ}} P_{\\alpha\\gamma}(\\psi|a)\\sin\\psi d\\psi$.\n\nComputing $P_\\text{sqz}(a)$ needs to specify the certain type of defect among the configurations of $\\psi < 60 \n\\,\\text{deg}$. There is the other type of defect than the squeezed triangular one, which is known as the intermediate of the water reorientation \\cite{Laage2006_SM}. \nThis type of defect has bifurcated hydrogen bonds through one site of the molecule such that $y_j^{(i)} = y_k^{(i)}$. 
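A minimal sketch of how such neighbour pairs can be flagged is given below; it assumes the hydration-shell filter of Eq.~(S.7) has already been applied and reuses the site index of Eq.~(S.4), with illustrative (not original) names.
\begin{verbatim}
import numpy as np

def classify_pair(b_j, b_k, y_j, y_k, psi_max_deg=60.0):
    """Classify a neighbour pair (j, k) of a tagged molecule i.

    b_j, b_k : (3,) hydrogen bond vectors b_j^(i), b_k^(i)
    y_j, y_k : site indices from Eq. (S.4); sites 1,2 donate and 3,4 accept
    Returns (pair type, psi in degrees, bifurcated flag, squeezed flag).
    """
    cos_psi = np.dot(b_j, b_k) / (np.linalg.norm(b_j) * np.linalg.norm(b_k))
    psi = np.degrees(np.arccos(np.clip(cos_psi, -1.0, 1.0)))  # Eq. (S.5)
    kind = ''.join(sorted('D' if y <= 2 else 'A' for y in (y_j, y_k)))
    bifurcated = (y_j == y_k)             # both bonds through the same site
    squeezed = (psi < psi_max_deg) and not bifurcated
    return kind, psi, bifurcated, squeezed
\end{verbatim}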
\nBy excluding such cases, we can compute the probability to observe a squeezed configuration of two donor bonds as,\n\\begin{equation}\nP_\\text{sqz,DD}(a) = \\frac{\\displaystyle \\left< \\sum_i \\sum_{j \\neq i} \\sum_{k \\neq i > j} H\\!\\left(60^{\\circ} - \\psi_{jk}^{(i)} \\right)\\delta(a^{(i)} - a) \\Theta_\\text{hyd}\\!\\left(\\mathbf{b}_j^{(i)} \\right) \\Theta_\\text{hyd}\\!\\left(\\mathbf{b}_k^{(i)} \\right) \\delta \\!\\left(y_j^{(i)}y_k^{(i)} - 2 \\right) \\right>}{\\displaystyle\\left< \\sum_i \\sum_{j \\neq i} \\sum_{k \\neq i > j} \\delta(a^{(i)} - a)\\Theta_\\text{hyd}\\!\\left(\\mathbf{b}_j^{(i)} \\right) \\Theta_\\text{hyd}\\!\\left(\\mathbf{b}_k^{(i)} \\right) \\Phi_\\text{D} \\!\\left(y_j^{(i)} \\right) \\Phi_\\text{D} \\!\\left(y_k^{(i)} \\right)\\right>}~.\n\\tag{S.9}\n\\end{equation}\n\nThe average direct interaction energy for a hydrogen bond pair is computed in the bulk phase as a function of the deviation angle, $\\phi$, such that\n\\begin{equation}\nu_\\alpha (\\phi) = \\frac{\\displaystyle \\left< \\sum_i \\sum_{j \\neq i} U_{ij} \\,\\delta(\\phi_j^{(i)} - \\phi ) H\\!\\left(a^{(i)} - a_b\\right)\\Theta_\\text{hyd}\\!\\left(\\mathbf{b}_j^{(i)} \\right) \\Phi_\\alpha \\!\\left(y_j^{(i)} \\right) \\right>}{\\displaystyle \\left< \\sum_i \\sum_{j \\neq i} \\delta(\\phi_j^{(i)} - \\phi ) H\\!\\left(a^{(i)} - a_b\\right) \\Theta_\\text{hyd}\\!\\left(\\mathbf{b}_j^{(i)} \\right) \\Phi_\\alpha \\!\\left(y_j^{(i)} \\right) \\right>}~,\n\\tag{S.10}\n\\end{equation}\nwhere $U_{ij}$ is the pair potential energy between the $i$th and $j$th molecules and $a_b = 10 \\,\\mathrm{\\AA}$. The plots are given in Fig. 3(c) of the main text, normalized by the average bulk hydrogen bond energy where we used $E_\\text{HB} = 9.0 \\,k_B T$ (see the next section below for more details on this).\nSimilarly, the average direct interaction energy between two neighbors of a tagged molecule is computed as,\n\\begin{equation}\nv_{\\alpha\\gamma} (\\psi) = \\frac{\\displaystyle \\left< \\sum_i \\sum_{j \\neq i} \\sum_{k \\neq i > j} U_{jk} \\,\\delta( \\psi_{jk}^{(i)} - \\psi ) H\\!\\left(a^{(i)} - a_b\\right) \\Theta_\\text{hyd}\\!\\left(\\mathbf{b}_j^{(i)} \\right) \\Theta_\\text{hyd}\\!\\left(\\mathbf{b}_k^{(i)} \\right) \\Phi_\\alpha \\!\\left(y_j^{(i)} \\right) \\Phi_\\gamma \\!\\left(y_k^{(i)} \\right) \\right>}{\\displaystyle \\left< \\sum_i \\sum_{j \\neq i}\\sum_{k \\neq i > j} \\delta (\\psi_{jk}^{(i)} - \\psi ) H\\!\\left(a^{(i)} - a_b\\right) \\Theta_\\text{hyd}\\!\\left(\\mathbf{b}_j^{(i)} \\right) \\Theta_\\text{hyd}\\!\\left(\\mathbf{b}_k^{(i)} \\right) \\Phi_\\alpha \\!\\left(y_j^{(i)} \\right) \\Phi_\\gamma \\!\\left(y_k^{(i)} \\right) \\right>}~.\n\\tag{S.11}\n\\end{equation}\n\n\\section{Implementation of the Three-body Fluctuation model} \nFollowing the notations used in the previous section, let $\\{\\mathbf{\\hat{v}}_1, \\mathbf{\\hat{v}}_2, \\mathbf{\\hat{v}}_3, \\mathbf{\\hat{v}}_4\\}$ be the unit vectors of ideal hydrogen bonding directions through the hydrogens and lone pairs of a probe water molecule of given orientation $\\vec{\\kappa}$.\nWe sample the hydrogen bond vectors, $\\{\\mathbf{b}_1, \\mathbf{b}_2, \\mathbf{b}_3, \\mathbf{b}_4 \\}$, each of which is within a certain solid angle around $\\mathbf{\\hat{v}}_i$ such that $\\mathbf{b}_i \\cdot \\mathbf{\\hat{v}}_i = |\\mathbf{b}_i| \\cos\\phi_i$ where the value for $\\cos\\phi_i$ is drawn from the uniform random distribution of [$\\cos 70^{\\circ}$, 1]. 
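A minimal sketch of this draw for a single bond vector is shown below. It assumes that the azimuthal angle about $\mathbf{\hat{v}}_i$ is sampled uniformly on $[0,2\pi)$, which is not stated explicitly above, and that the bond length is fixed to $d_\text{HB}$ as imposed below; the names are illustrative.
\begin{verbatim}
import numpy as np

def sample_bond_vector(v_hat, d_HB, rng=np.random.default_rng(),
                       cos_cut=np.cos(np.radians(70.0))):
    """Draw one bond vector of length d_HB inside the 70-degree cone
    around the (unit) ideal bonding direction v_hat."""
    cos_phi = rng.uniform(cos_cut, 1.0)        # cos(phi_i) uniform on [cos 70 deg, 1]
    sin_phi = np.sqrt(1.0 - cos_phi**2)
    alpha = rng.uniform(0.0, 2.0 * np.pi)      # azimuth (assumed uniform)
    helper = np.array([1.0, 0.0, 0.0])
    if abs(np.dot(helper, v_hat)) > 0.9:       # avoid a near-parallel helper axis
        helper = np.array([0.0, 1.0, 0.0])
    e1 = np.cross(v_hat, helper)
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(v_hat, e1)                   # (e1, e2, v_hat) orthonormal frame
    return d_HB * (cos_phi * v_hat
                   + sin_phi * (np.cos(alpha) * e1 + np.sin(alpha) * e2))
\end{verbatim}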
\nEach hydrogen bond vector is assigned a directionality of either donor or acceptor based on the proximity to the bonding sites (see Eq.~(\\ref{b_label})). \nThe six inter-bond angles, $\\psi_{ij}$, are calculated based on Eq.~(\\ref{angle_psi}). \nIf the bond vectors are too close with one another, \\emph{i.e.} $\\psi_{ij} < 44^{\\circ }$, they are not taken into account for computing $P(\\vec{\\kappa}|a)$ since there is almost no statistics below that in the atomistic simulation. \nHence $P(\\vec{\\kappa}|a)$ is computed as,\n\\begin{equation}\nP(\\vec{\\kappa} \\vert a) = \\int \\prod_{i=1}^{4} \\left[d\\mathbf{b}_i \\delta (|\\mathbf{b}_i| - d_\\text{HB})H\\!\\left(\\frac{\\mathbf{b}_i}{|\\mathbf{b}_i|} \\cdot \\mathbf{\\hat{v}}_i - \\cos 70^{\\circ} \\right) \\prod_{j > i} H\\!\\left(\\cos44^{\\circ} - \\frac{\\mathbf{b}_i \\cdot \\mathbf{b}_j}{|\\mathbf{b}_i||\\mathbf{b}_j|} \\right)\\right] \\frac{\\left \\langle e^{-\\beta E(\\vec{\\kappa},a,\\{n_k\\})} \\right\\rangle_\\mathrm{b}}{Z(a)},\n\\tag{S.12}\n\\label{eq:dist1}\n\\end{equation}\nwhere $E(\\vec{\\kappa},a,\\{n_k\\})$ follows the Eq.~(8) of the main text. \nHere we impose the same constraint on the length of hydrogen bond vectors as that in the rigid tetrahedral model. \nImplementing the fluctuations in $|\\mathbf{b}_i|$ provokes more details about the energetics, $\\tilde{u}_\\alpha$ and $\\tilde{v}_{\\alpha\\gamma}$, such as their dependence on both lengths and angles of the hydrogen bond vectors, which we have not detailed so far in this model. \n\nThe energy functions, $\\tilde{u}_\\alpha(\\phi)$ and $\\tilde{v}_{\\alpha\\gamma}(\\psi)$, are rescaled from the atomistic simulation data of $u_\\alpha(\\phi)$ and $v_{\\alpha\\gamma}(\\psi)$. \nAdditionally, we parametrize $\\tilde{u}_\\alpha(\\phi)$ by tuning the maximum value of $u_\\alpha(\\phi)$ such that\n\\begin{equation}\n\\tilde{u}_\\alpha(\\phi) = \\left\\{ \\begin{array}{ll}\n\t\t\\displaystyle\\left\\{ \\big[u_\\alpha(\\phi) - u_\\alpha(0) \\big]\\frac{u_\\alpha^* - u_\\alpha(0)}{u_{\\alpha,\\text{max}} - u_\\alpha(0)} + u_\\alpha(0) \\right\\}\\frac{|\\epsilon_\\text{w}|}{E_\\text{HB}}, & \\quad\\quad \\textrm{if $\\phi \\le \\phi_c$},\\;\\;\\quad\\;\\;\\;\\; \\\\\\\\\n \t\\displaystyle0, & \\quad\\quad \\textrm{if $\\phi > \\phi_c$}, \\end{array} \\right.\\\\\n\\tag{S.13}\n\\label{eq:u_alpha}\n\\end{equation}\nwhere $\\phi_c = 72^{\\circ}$, $u_{\\alpha,\\text{max}} = \\max_{\\phi < \\phi_c}\\left\\{u_\\alpha(\\phi)\\right\\}$, and $u_\\alpha^*$ is the parameter that sets the new maximum (in the original scale of $u_\\alpha$). \nHere the factor of $|\\epsilon_\\text{w}|\/E_\\text{HB}$ rescales the functions in units of the effective hydrogen bond energy of our model.\nWe observed that the behavior of $\\langle \\mu_{\\mathbf{\\hat{n}}}(a) \\rangle$ is largely sensitive to $\\tilde{u}_\\alpha(\\phi)$, and thus we optimized the parameters, $u_\\alpha^*$ and $\\epsilon_\\text{w}$, for the result expected from atomistic simulations.\nFor the results presented in Fig. 2 of the main text, we used $\\epsilon_\\text{w} = -5.0 \\,k_B T$, $u_\\text{D}^* = +0.2 \\,k_B T$, and $u_\\text{A}^* = -3.7 \\,k_B T$. \nWe found that the optimized $\\tilde{u}_\\text{A}(\\phi)$ is quite different from $u_\\text{A}(\\phi)$ but more like the corresponding energy computed near the interface (see Fig. \\ref{fig:S3}). \n\\begin{figure}[t!]\n\\centering\n\\includegraphics[width = 3.5 in]{FigS3_v2}\n\\caption{$\\tilde{u}_\\alpha(\\phi)$ compared with $u_\\alpha(\\phi)$ in the same scale. 
Solid lines are the simulation data of $u_\\alpha(\\phi)\/E_\\text{HB}$, as originally shown in Fig.~3(c) of the main text, and circles indicate the rescaled energy functions that are optimized for the accurate trend of $\\langle \\mu_{\\mathbf{\\hat{n}}}(a) \\rangle$. Gray and blue color correspond to $\\alpha = \\mathrm{D}$ and $\\alpha = \\mathrm{A}$, respectively. The blue dashed line corresponds to $u_\\text{A}(\\phi|a=1\\,\\mathrm{\\AA})$, \\emph{i.e.}, the average direct interaction energy for a hydrogen bond pair at $a = 1\\,\\mathrm{\\AA}$.}\n\\label{fig:S3}\n\\end{figure}\nFor $\\tilde{v}_{\\alpha\\gamma}(\\psi)$, we simply take the values from the atomistic simulation data and rescale them in units of $\\epsilon_\\text{w}$, that is,\n\\begin{equation}\n\\tilde{v}_{\\alpha\\gamma} (\\psi) = v_{\\alpha\\gamma}(\\psi)\\frac{|\\epsilon_\\text{w}|}{E_\\text{HB}}~.\n\\tag{S.14}\n\\end{equation}\n\nAs given in Eq.~(8) of the main text, $\\tilde{v}_{\\alpha\\gamma}(\\psi)$ is combined with the auxiliary function, $\\lambda(a_i,a_j,\\psi_{ij})$, in order to accounts for the interface-specific stability of hydrogen bond defects.\nThis auxiliary function represents the energetic cost for the defects to pay based on the hydrogen bonding status of the $i$th or $j$th hydrogen bond partner. \nAssuming that this penalty is imposed on the one that donates hydrogen, we describe this function in terms of the average number and energy of hydrogen bonds through donor sites, denoted by $N_\\text{D} (a)$ and $E_\\text{D} (a)$ respectively. \nSpecifically it is given by,\n\\begin{equation}\n\\lambda_{\\alpha\\gamma} (a_i,a_j,\\psi) = \\left\\{ \\begin{array}{ll}\n\t\t\\displaystyle \\frac{N_\\text{D} (\\bar{a}_{\\alpha\\gamma})}{2}E_\\text{D} (\\bar{a}_{\\alpha\\gamma})\\frac{|\\epsilon_\\text{w}|}{E_\\text{HB}}, & \\quad \\textrm{if $\\psi \\le \\psi_{\\alpha\\gamma}$}, \\\\\\\\\n\t\t\\displaystyle \\left[ 1 - \\frac{ v_{\\alpha\\gamma}(\\psi) - v_{\\alpha\\gamma}( \\psi_{\\alpha\\gamma})}{v_{\\alpha\\gamma}(\\psi_c) - v_{\\alpha\\gamma}(\\psi_{\\alpha\\gamma}) } \\right] \\lambda_{\\alpha\\gamma}(a_i,a_j,\\psi_{\\alpha\\gamma}), & \\quad \\textrm{if $\\psi_{\\alpha\\gamma} < \\psi < \\psi_c$}, \\\\\\\\\n \t\\displaystyle 0, & \\quad \\textrm{if $\\psi \\ge \\psi_c$}, \\end{array} \\right.\n\\tag{S.15}\n\\end{equation}\nwhere $\\psi_{\\alpha\\gamma} = \\argmin_{\\psi}\\left\\{v_{\\alpha\\gamma}(\\psi)\\right\\}$, $\\psi_c = \\argmax_{\\psi}\\left\\{v_\\text{AA}(\\psi)\\right\\}$, and\n\\begin{equation}\n\\bar{a}_{\\alpha\\gamma} (a_i,a_j) = \\left\\{ \\begin{array}{ll}\n\t\t\\displaystyle \\min \\{a_i, a_j \\}, & \\quad \\textrm{if $\\alpha = \\gamma$},\\\\\\\\\n \t\\displaystyle \\Phi_\\text{D}\\!\\left(y_i \\right)a_i + \\Phi_\\text{D}\\!\\left(y_j \\right)a_j, & \\quad \\textrm{if $\\alpha \\ne \\gamma$}. 
\\end{array} \\right.\n\\tag{S.16}\n\\end{equation}\nHere we let the penalty taken by the hydrogen bond partner located closer to the interface, but we make the exception for donor-acceptor bond pairs based on the typical structure of cyclic water trimer \\cite{keutsch2003water}.\n$N_\\text{D}(a)$ and $E_\\text{D}(a)$ are computed from atomistic simulation as, \n\\begin{equation}\nN_\\text{D} (a) = \\frac{\\displaystyle \\left< \\sum_i \\sum_{j \\neq i} \\delta(a^{(i)} - a) H\\!\\left(30^{\\circ} - \\phi_j^{(i)} \\right) H\\!\\left(3.5 \\,\\mathrm{\\AA} - \\left\\vert \\mathbf{b}_j^{(i)}\\right\\vert \\right) \\Phi_\\text{D}\\!\\left(y_j^{(i)} \\right) \\right>}{\\displaystyle \\left< \\sum_i \\delta\\!\\left(a^{(i)} - a \\right) \\right>}~,\n\\tag{S.17}\n\\end{equation}\nand\n\\begin{equation}\nE_\\text{D} (a) = \\frac{\\displaystyle \\left< \\sum_i \\sum_{j \\neq i} U_{ij} \\, \\delta(a^{(i)} - a) H\\!\\left(30^{\\circ} - \\phi_j^{(i)} \\right) H\\!\\left(3.5 \\,\\mathrm{\\AA} - \\left\\vert \\mathbf{b}_j^{(i)}\\right\\vert \\right) \\Phi_\\text{D}\\!\\left(y_j^{(i)} \\right) \\right>}{\\displaystyle \\left< \\sum_i \\sum_{j \\neq i} \\delta(a^{(i)} - a) H\\!\\left(30^{\\circ} - \\phi_j^{(i)} \\right) H\\!\\left(3.5 \\,\\mathrm{\\AA} - \\left\\vert \\mathbf{b}_j^{(i)}\\right\\vert \\right) \\Phi_\\text{D}\\!\\left(y_j^{(i)} \\right) \\right>}~,\n\\tag{S.18}\n\\end{equation}\nwhere the definition of a good hydrogen bond follows the one by Luzar and Chandler \\cite{luzar1996effect}.\n\\begin{figure}[t!]\n\\centering\n\\includegraphics[width = 6.4 in]{FigS4_v2}\n\\caption{(a) Interfacial profiles of average number and energy of hydrogen bonds through donor sites, rendered in orange and blue lines respectively. (b) Illustration of a three-body interaction term implemented in our model. Solid lines show the different energetic preferences for highly distorted configurations depending on the interfacial depth of hydrogen bond partner. Dashed line corresponds to $\\tilde{v}_\\text{AA}$ without the attenuation by $\\lambda_\\text{AA}$.}\n\\label{fig:S4}\n\\end{figure}\nAs illustrated in Fig.~\\ref{fig:S4}, they change dramatically within the first $2\\,\\mathrm{\\AA}$ of the interfacial region such that the effect of three-body interaction also becomes significant in that region. \nHere we take the value of $E_\\text{HB}$ from $E_\\text{D}(a)$ by setting $E_\\text{HB} = |E_\\text{D}(a_b)| = 8.45 \\,k_B T = 20.9 \\,\\mathrm{kJ\/mol}$ at $T = 298 \\text{ K}$.\n\n\nIn order to evaluate $P(\\vec{\\kappa}|a)$, we obtain an approximate analytic expression for $\\left< e^{-\\beta E(\\vec{\\kappa},a,\\{n_k\\})} \\right>_\\text{b}$ in Eq.~(\\ref{eq:dist1}).\nHere we made the same assumption as that of the rigid tetrahedral model, such that $\\langle n_i n_j \\rangle \\approx \\langle n_i \\rangle \\langle n_j \\rangle $ for $i \\ne j$\n\\footnote{\nThis is definitely a rough approximation since it neglects the density correlation between hydrogen bond partners even in the squeezed configurations. However, more accurate treatment for the density correlation provokes again the distance dependence of the energy functions which we have not detailed so far in this model. \n}. 
\nWithin this approximation, we can write\n\\begin{align*}\n\\left< e^{-\\beta E(\\vec{\\kappa},a,\\{n_k\\})}\\right>_\\text{b} &= \\prod_{i=1}^4 \\langle n_i \\rangle_\\text{b} e^{-\\beta \\tilde{u}_\\alpha(\\phi_i)} \\prod_{j > i}^{4} e^{-\\beta \\tilde{v}_{\\alpha\\gamma}(\\psi_{ij})} + \\sum_{k=1}^4 \\left[ 1 - \\langle n_k \\rangle_\\text{b}\\right] \\prod_{i \\neq k}^4 \\langle n_i \\rangle_\\text{b} e^{-\\beta \\tilde{u}_\\alpha(\\phi_i)} \\prod_{j\\neq k > i}^4 e^{-\\beta \\tilde{v}_{\\alpha\\gamma}(\\psi_{ij})} \\nonumber\\\\ &\\quad+ \\sum_{i = 1}^4\\sum_{j > i}^4 \\langle n_i \\rangle_\\text{b}\\langle n_j \\rangle_\\text{b} \\left[1 - \\langle n_k \\rangle_\\text{b} \\right] \\left[1 - \\langle n_l \\rangle_\\text{b} \\right] e^{-\\beta \\left[\\tilde{u}_\\alpha(\\phi_i) + \\tilde{u}_\\alpha(\\phi_j) + \\tilde{v}_{\\alpha\\gamma}(\\psi_{ij}) \\right]} \\nonumber\\\\ &\\quad\\quad+ \\sum_{k=1}^4 \\langle n_k \\rangle_\\text{b} e^{-\\beta \\tilde{u}_\\alpha(\\phi_k)} \\prod_{i \\neq k}^4 \\left[1 - \\langle n_i \\rangle_\\text{b}\\right] + \\prod_{i=1}^4 \\left[1 - \\langle n_i \\rangle_\\text{b}\\right]~,\\nonumber\n\\tag{S.19}\n\\end{align*}\nwhere $\\langle n_i \\rangle_\\text{b} = P_\\text{HB} (a_i) = \\rho(a_i)\/2\\rho_b$ and the dummy indices, $k$ and $l$, in the third term are the numbers among $\\{1,2,3,4\\}$ such that $i \\ne j \\ne k \\ne l$. \nThe resulting reduced probability distribution, $P(\\cos\\theta_\\text{OH}|a)$, is given in Fig.~\\ref{fig:S5}.\nAlthough its qualitative feature is still the same as the result from the rigid tetrahedral model, its details are closer to that from the atomistic simulation of TIP5P water (especially at $a < 3 \\,\\mathrm{\\AA}$).\n\\begin{figure}[t!]\n\\centering\n\\includegraphics[width = 3.4 in]{FigS5_v2}\n\\caption{Orientational distributions, $P(\\cos\\theta_\\text{OH}|a)$, computed from (a) the three-body fluctuation model and (b) the rigid tetrahedral model for the TIP5P force field. Color shading indicates the probability density.}\n\\label{fig:S5}\n\\end{figure}\n\n\\section{Application of mean-field models to SPC\/E force field}\nDespite sharing similar bulk hydrogen bonding structures, different classical water models can yield non-trivial differences in interfacial structure. This is highlighted to some extent in Figs.~\\ref{fig:S1} and \\ref{fig:S2}. As Fig.~\\ref{fig:S2} illustrates, different water models exhibit similar trends in $\\langle \\mu_{\\mathbf{\\hat{n}}}(a) \\rangle$ and $\\langle (\\delta \\mu_{\\mathbf{\\hat{n}}}(a) )^2 \\rangle$, but they differ in their quantitative characteristics. To evaluate the ability of our mean field model to capture these differences have also applied our model to the SPC\/E force field. To do this, we have followed the same procedure described herein but using the data from molecular dynamics simulations of SPC\/E water rather than TIP5P. For the SPC\/E-parameterized mean field model we have found similarly good agreement in reproducing $P(\\cos\\theta_\\text{OH}|a)$, as illustrated in Fig.~\\ref{fig:S6}, but as Fig.~\\ref{fig:S7} illustrates, the agreement to $\\langle \\mu_{\\mathbf{\\hat{n}}}(a) \\rangle$ and $\\langle (\\delta \\mu_{\\mathbf{\\hat{n}}}(a) )^2 \\rangle$, is not as strong for SPC\/E as it is for TIP5P. 
\n\\begin{figure}[t!]\n\\centering\n\\includegraphics[width = 3.0 in]{FigS6}\n\\caption{$P(\\cos\\theta_\\text{OH}|a)$ computed from (a) the atomistic simulation with SPC\/E water and (b) the rigid tetrahedral model optimized for the SPC\/E force field ($\\epsilon_\\text{w}^* = -1.8\\,k_B T = -4.46 \\text{ kJ\/mol}$).}\n\\label{fig:S6}\n\\end{figure}\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width = 5.0 in]{FigS7}\n\\caption{Interfacial polarization, $\\langle \\mu_{\\mathbf{\\hat{n}}}(a) \\rangle$, and polarizability, $\\langle (\\delta \\mu_{\\mathbf{\\hat{n}}}(a) )^2 \\rangle$, computed from the three-body fluctuation model parametrized for the SPC\/E force field, in comparison to the molecular dynamics simulation results.}\n\\label{fig:S7}\n\\end{figure}\n\nWe understand that the difference in the ability of our model to reproduce dipolar polarization\/polarizability between SPC\/E and TIP5P arises due to the differences in the geometric tendencies inherent to these force fields. The TIP5P force field is built upon a tetrahedral charge scaffold, so non-ideal hydrogen bond structures are more naturally described in terms of their deviations from this scaffold. On the other hand, SPC\/E is built upon a triangular charge scaffold (albeit with a tetrahedral bond angle) so non-ideal hydrogen bond structures are less well represented in term of deviation from a tetrahedral scaffold. We speculate that we could improve quantitative accuracy of our model in reproducing SPC\/E results by modifying the details of the underlying geometry of our mean field model.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Introduction}\n\\label{sec:introduction}\n\nHeat exchanger network synthesis (HENS) \nminimizes cost and improves energy recovery in chemical processes \\citep{biegleretal:1997, smith:2000, eliaetal:2010, balibanetal:2012}. \nHENS exploits excess heat by integrating process hot and cold streams and improves energy efficiency by reducing utility usage \\citep{floudas:1987, naess:1988, furman:2002, escobar-et-trierweiler:2013}. \n\\citet{floudas:2012} review the critical role of heat integration for energy systems producing liquid transportation fuels \\citep{niziolek:2015}.\nOther important applications of HENS include: refrigeration systems \\citep{shelton:1986}, batch semi-continuous processes \\citep{zhao:1998,castro:2015} and water utilization systems \\citep{bagakewicz:2002}.\n\n\n\nHeat exchanger network design is a mixed-integer nonlinear optimization (MINLP) problem \\citep{yee:1990, Ciric:1991, papalexandri-pistikopoulos:1994,hasan-etal:2010}. \\cite{mistry:2016} recently showed that expressions incorporating logarithmic mean temperature difference, i.e.\\ the nonlinear nature of heat exchange, may be reformulated to decrease the number of nonconvex nonlinear terms in the optimization problem. But HENS remains a difficult MINLP with many nonconvex nonlinearities. One way to generate good HENS solutions is to use the so-called \\emph{sequential method} \\citep{furman:2002}.\nThe sequential method decomposes the original HENS MINLP into three tasks: (i) minimizing utility cost, (ii) minimizing the number of matches, and (iii) minimizing the investment cost. 
\nThe method optimizes the three mathematical models sequentially with: (i) a linear program (LP) \\citep{cerdaetwesterberg:1983, papoulias:1983}, (ii) a mixed-integer linear program (MILP) \\citep{cerda:1983, papoulias:1983}, and (iii) a nonlinear program (NLP) \\citep{floudas:1986}.\nThe sequential method may not return the global solution of the original MINLP, but solutions generated with the sequential method are practically useful.\n\nThis paper investigates the \\emph{minimum number of matches} problem \\citep{floudas}, the computational bottleneck of the sequential method. \nThe minimum number of matches problem is a strongly $\\mathcal{NP}$-hard MILP \\citep{furman:2001}.\nMathematical symmetry in the problem structure combinatorially increases the possible stream configurations and deteriorates the performance of exact, tree-based algorithms \\citep{kouyialis:2016}. \n\nBecause state-of-the-art approaches cannot solve the minimum number of matches problem to global optimality for moderately-sized instances \\citep{chen:2015}, engineers develop experience-motivated heuristics \\citep{hindmarsh:1983,cerdaetwesterberg:1983}. {\\cite{hindmarsh:1983} highlight the importance of generating good solutions quickly: a design engineer may want to actively interact with a good minimum number of matches solution and consider changing the utility usage as a result of the MILP outcome.}\n\\citet{furman:2004} propose a collection of approximation algorithms, i.e.\\ heuristics with performance guarantees, for the minimum number of matches problem by exploiting the LP relaxation of an MILP formulation.\n\\citet{furman:2004} present a unified worst-case analysis of their algorithms' performance guarantees and show a non-constant approximation ratio scaling with the number of temperature intervals.\nThey also prove a constant performance guarantee for the single temperature interval problem.\n\nThe standard MILP formulations for the minimum number of matches contain big-M constraints, i.e.\\ the on\/off switches associated with weak continuous relaxations of MILP.\nBoth optimization-based heuristics and exact state-of-the-art methods for solving minimum number of matches problem are highly affected by the big-M parameter.\nTrivial methods for computing the big-M parameters are typically adopted, but \\citet{gundersen:1997} propose a tighter way of computing the big-M parameters.\n\nThis manuscript develops new heuristics and provably efficient approximation algorithms for the minimum number of matches problem. These methods have guaranteed solution quality and efficient run-time bounds. \nIn the sequential method, many possible stream configurations are required to evaluate the minimum overall cost \\citep{floudas}, so a complementary contribution of this work is a heuristic methodology for producing multiple solutions efficiently.\nWe classify the heuristics based on their algorithmic nature into three categories: (i) relaxation rounding, (ii) water filling, and (iii) greedy packing.\n\n{\nThe relaxation rounding heuristics we consider are (i) Fractional LP Rounding (FLPR), (ii) Lagrangian Relaxation Rounding (LRR), and (iii) Covering Relaxation Rounding (CRR).\nThe water-filling heuristics are (i) Water-Filling Greedy (WFG), and (ii) Water-Filling MILP (WFM). 
\nFinally, the greedy packing heuristics are (i) Largest Heat Match LP-based (LHM-LP), (ii) Largest Heat Match Greedy (LHM), (iii) Largest Fraction Match (LFM), and (iv) Shortest Stream (SS).\nMajor ingredients of these heuristics are adaptations of single temperature interval algorithms and maximum heat computations with match restrictions.\nWe propose (i) a novel MILP formulation, and (ii) an improved greedy approximation algorithm for the single temperature interval problem.\nFurthermore, we present (i) a greedy algorithm computing maximum heat between two streams and their corresponding big-M parameter, (ii) an LP computing the maximum heat in a single temperature interval using a subset of matches, and (iii) an extended maximum heat LP using a subset of matches on multiple temperature intervals.\n}\n\nThe manuscript proceeds as follows: \nSection \\ref{sec:preliminaries} formally defines the minimum number of matches problem and discusses mathematical models.\nSection \\ref{sec:heuristics_performance_guarantees} discusses computational complexity and introduces a new $\\mathcal{NP}$-hardness reduction of the minimum number of matches problem from bin packing. \nSection \\ref{sec:single_temperature_interval} \nfocusses on the single temperature interval problem.\nSection \\ref{sec:max_heat} explores computing the maximum heat exchanged between the streams with match restrictions.\nSections \\ref{sec:relaxation_rounding} - \\ref{sec:greedy_packing} present our heuristics for the minimum number of matches problem based on: (i) relaxation rounding, (ii) water filling, and (iii) greedy packing, respectively, as well as new theoretical performance guarantees.\nSection \\ref{sec:results} evaluates experimentally the heuristics and discusses numerical results.\nSections \\ref{sec:discussion} and \\ref{sec:conclusion} discuss the manuscript contributions and conclude the paper.\n\n\n\n\\begin{comment}\n\\begin{table}[h]\n\n\\scriptsize\n\\begin{center}\n\\begin{adjustbox}{center}\n\\begin{tabular}{ | c | c c c | c c | c c c c| } \n\\hline\n& \\multicolumn{3}{|c|}{\\textbf{Relaxation Rounding}} & \\multicolumn{2}{c}{\\textbf{Water Filling}} & \\multicolumn{4}{|c|}{\\textbf{Greedy Packing}} \\\\\n& FLPR & LRR & CRR & WFM & WFG & LHM & LFM & LHM-LP & SS \\\\\n& (\\ref{Subsection:FLPR}) & (\\ref{Subsection:LRR}) & (\\ref{Subsection:CRR}) & (\\ref{sec:water_filling}) & (\\ref{sec:water_filling}) & (\\ref{Subsection:Largest_Heat_Match}) & (\\ref{Subsection:LFM}) & (\\ref{Subsection:Largest_Heat_Match}) & (\\ref{Subsection:SS}) \\\\\n\\hline\n\\textbf{Single Temperature Interval Problem} & & & & & & & & & \\\\ \nMILP Model (\\ref{Sec:SingleTemperatureIntervalProblem-MILP}) & & & & \\checkmark & & & & & \\\\ \nApproximation Algorithm (\\ref{Sec:SingleTemperatureIntervalProblem-Approximation}) & & & & & \\checkmark & & & & \\\\ \n\\hline\n\\textbf{Maximum Heat Computations} & & & & & & & & & \\\\\nTwo Streams, Big-M Parameter (\\ref{Sec:MaxHeat_2Streams}) & \\checkmark & \\checkmark & \\checkmark & & & \\checkmark & \\checkmark & & \\checkmark \\\\ \nSingle Temperature Interval (\\ref{Sec:MaxHeat_SingleInterval}) & & & & \\checkmark & \\checkmark & & & & \\\\ \nMultiple Temperature Intervals (\\ref{Sec:MaxHeat_MultipleIntervals}) & & & \\checkmark & & & & & \\checkmark & \\\\ \n\\hline\n\\end{tabular}\n\\end{adjustbox}\n\\end{center}\n\\caption{Table indicating the single temperature interval problem and maximum heat with match restrictions components used by each heuristic. 
Each element is associated with a section number. This table may be used as a roadmap to the paper.}\n\\label{Table:Heuristic_Components}\n\\end{table}\t\n\\end{comment}\n\n\n \n\n\\section{Minimum Number of Matches for Heat Exchanger Network Synthesis}\n\\label{sec:preliminaries}\nThis section defines the minimum number of matches problem and presents the standard transportation and transshipment MILP models. \nTable \\ref{tbl:notation} contains the notation.\n\n\\singlespacing\n\\begin{longtable}{l l l}\n\\caption{Nomenclature}\\\\\n\\toprule\nName & Description \\\\\n\\midrule\n{\\bf Cardinalities} & \\\\\n$n$ & Number of hot streams \\\\\n$m$ & Number of cold streams \\\\\n$k$ & Number of temperature intervals\\\\\n$v$ & Number of matches (objective value) \\\\\n\\midrule\n{\\bf Indices} \\\\\n$i\\in H$ & Hot stream\\\\\n$j\\in C$ & Cold stream\\\\\n$s,t,u \\in T$ & Temperature interval\\\\\n$b\\in B$ & Bin (single temperature interval problem) \\\\\n\\midrule\n{\\bf Sets} & & \\\\\n$H$, $C$ & Hot, cold streams \\\\\n$T$ & Temperature intervals \\\\\n$M$ & Set of matches (subset of $H\\times C$) \\\\\n$C_i(M), H_j(M)$ & Cold, hot streams matched with $i\\in H$, $j\\in C$ in $M$ \\\\\n$B$ & Bins (single temperature interval problem) \\\\\n$A(M)$ & Set of valid quadruples $(i,s,j,t)$ with respect to a set $M$ of matches \\\\\n$A_u(M)$ & Set of quadruples $(i,s,j,t)\\in A(M)$ with $s\\leq uT_{\\text{in},j}^{CS}$).\nEvery hot stream $i$ and cold stream $j$ are associated flow rate heat capacities $FCp_i$ and $FCp_j$, respectively.\nMinimum heat recovery approach temperature $\\Delta T_{\\min}$ relates the hot and cold stream temperature axes.\nA hot utility $i$ in a set $HU$ and a cold utility $j$ in a set $CU$ may be purchased at a cost, e.g.\\ with unitary costs $\\kappa_i^{HU}$ and $\\kappa_j^{CU}$.\nLike the streams, the utilities have inlet and outlet temperatures $T_{\\text{in},i}^{HU}$, $T_{\\text{out},i}^{HU},T_{\\text{in},j}^{CU}$ and $T_{\\text{out},j}^{CU}$.\nThe first step in a sequential approach to HENS minimizes the utility cost and thereby specifies the heat each utility introduces in the network.\nThe next step minimizes the number of matches.\n\\ref{App:Minimum_Utility_Cost} discusses the transition from the minimizing utility cost to minimizing the number of matches.\nAfter this transition, each utility may, without loss of generality, be treated as a stream.\n}\n\nThe minimum number of matches problem posits a set of \\emph{hot process streams} to be cooled and a set of \\emph{cold process streams} to be heated.\nEach stream is associated with an initial and a target temperature. \nThis set of temperatures defines a collection of \\emph{temperature intervals}. \nEach hot stream exports (or supplies) heat in each temperature interval between its initial and target temperatures.\nSimilarly, each cold stream receives (or demands) heat in each temperature interval between its initial and target temperatures.\n\\ref{App:Minimum_Utility_Cost} formally defines the temperature range partitioning.\nHeat may flow from a hot to a cold stream in the same or a lower temperature interval, but not in a higher one.\nIn each temperature interval, the \\emph{residual heat} descends to lower temperature intervals.\nA zero heat residual is a \\emph{pinch point}.\nA pinch point restricts the maximum energy integration and divides the network into subnetworks. 
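\n\nTo make this interval bookkeeping concrete, the following minimal Python sketch builds a small, entirely hypothetical instance (two hot streams, two cold streams and three temperature intervals), checks heat conservation and computes the residual heat descending out of each temperature interval, flagging pinch points; the data and variable names are illustrative only and are not taken from any test set used later in the paper.\n\\begin{verbatim}\n# Hypothetical instance: interval 0 is the hottest, interval 2 the coldest.\n# sigma[i][s]: heat supplied by hot stream i in temperature interval s.\n# delta[j][t]: heat demanded by cold stream j in temperature interval t.\nsigma = {'H1': [60.0, 40.0, 0.0], 'H2': [0.0, 30.0, 20.0]}\ndelta = {'C1': [50.0, 50.0, 0.0], 'C2': [0.0, 30.0, 20.0]}\nk = 3\n\ntotal_supply = sum(sum(row) for row in sigma.values())\ntotal_demand = sum(sum(row) for row in delta.values())\nassert abs(total_supply - total_demand) < 1e-9  # heat conservation\n\n# Residual heat descending out of interval u: supply minus demand in intervals 0..u.\nfor u in range(k - 1):\n    supplied = sum(sigma[i][s] for i in sigma for s in range(u + 1))\n    demanded = sum(delta[j][t] for j in delta for t in range(u + 1))\n    residual = supplied - demanded\n    tag = '(pinch point)' if abs(residual) < 1e-9 else ''\n    print('residual descending from interval', u + 1, '=', residual, tag)\n\\end{verbatim}\nA negative residual would indicate an infeasible instance, because heat cannot ascend to a hotter temperature interval.\n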
\n\nA problem instance consists of a set $H=\\{1,2,\\ldots,n\\}$ of hot streams, a set $C=\\{1,2,\\ldots,m\\}$ of cold streams, and a set $T=\\{1,2,\\ldots,k\\}$ of temperature intervals. \nHot stream $i\\in H$ has heat supply $\\sigma_{i,s}$ in temperature interval $s\\in T$ and cold stream $j\\in C$ has heat demand $\\delta_{j,t}$ in temperature interval $t\\in T$.\nHeat conservation is satisfied, i.e.\\ $\\sum_{i\\in H}\\sum_{s\\in T}\\sigma_{i,s} = \\sum_{j\\in C}\\sum_{t\\in T}\\delta_{j,t}$.\nWe denote by $h_i=\\sum_{s\\in T}\\sigma_{i,s}$ the total heat supply of hot stream $i\\in H$ and by $c_j=\\sum_{t\\in T}\\delta_{j,t}$ the total heat demand of cold stream $j\\in C$.\n\nA feasible solution specifies a way to transfer the hot streams' heat supply to the cold streams, i.e.\\ an amount $q_{i,s,j,t}$ of heat exchanged between hot stream $i\\in H$ in temperature interval $s\\in T$ and cold stream $j\\in C$ in temperature interval $t\\in T$.\nHeat may only flow to the same or a lower temperature interval, i.e.\\ $q_{i,s,j,t}=0$, for each $i\\in H$, $j\\in C$ and $s,t\\in T$ such that $s>t$.\nA hot stream $i\\in H$ and a cold stream $j\\in C$ are \\emph{matched} if there is a positive amount of heat exchanged between them, i.e.\\ $\\sum_{s,t\\in T}q_{i,s,j,t}>0$.\nThe objective is to find a feasible solution minimizing the number of matched pairs $(i,j)$.\n\n\n\n\\subsection{Mathematical Models}\nThe transportation and transshipment models formulate the minimum number of matches as a mixed-integer linear program (MILP).\n\n\\paragraph{Transportation Model \\citep{cerda:1983}} \nAs illustrated in Figure \\ref{Fig:transportation}, the transportation model represents heat as a commodity transported from supply nodes to destination nodes.\nFor each hot stream $i\\in H$, there is a set of supply nodes, one for each temperature interval $s\\in T$ with $\\sigma_{i,s}>0$.\nFor each cold stream $j\\in C$, there is a set of demand nodes, one for each temperature interval $t\\in T$ with $\\delta_{j,t}>0$.\nThere is an arc between the supply node $(i,s)$ and the destination node $(j,t)$ if $s\\leq t$, for each $i\\in H$, $j\\in C$ and $s,t\\in T$.\n\nIn the MILP formulation, variable $q_{i,s,j,t}$ specifies the heat transferred from hot stream $i\\in H$ in temperature interval $s\\in T$ to cold stream $j\\in C$ in temperature interval $t\\in T$. \nBinary variable $y_{i,j}$ indicates whether streams $i\\in H$ and $j\\in C$ are matched or not. 
\nParameter $U_{i,j}$ is a big-M parameter bounding the amount of heat exchanged between every pair of hot stream $i\\in H$ and cold stream $j\\in C$, e.g.\\ $U_{i,j}=\\min\\{h_i,c_j\\}$.\nThe problem is formulated:\n{\\allowdisplaybreaks\n\\begin{align}\n\\text{min} & \\sum_{i \\in H}\\sum_{j \\in C} y_{i,j} \\label{TransportationMIP_Eq:ObjMinMatches} \\\\ \n& \\sum_{j\\in C}\\sum_{t\\in T} q_{i,s,j,t} = \\sigma_{i,s} & i\\in H, s\\in T \\label{TransportationMIP_Eq:HotStreamConservation}\\\\\n& \\sum_{i\\in H}\\sum_{s\\in T} q_{i,s,j,t} = \\delta_{j,t} & j\\in C, t\\in T \\label{TransportationMIP_Eq:ColdStreamConservation}\\\\\n& \\sum_{s,t\\in T} q_{i,s,j,t}\\leq U_{i,j}\\cdot y_{i,j} & i\\in H, j\\in C \\label{TransportationMIP_Eq:BigM_Constraint}\\\\\n& q_{i,s,j,t} = 0 & i\\in H, j\\in C, s,t\\in T: s> t \\label{TransportationMIP_Eq:ThermoConstraint} \\\\\n& y_{i,j} \\in \\{0,\\,1\\},q_{i,s,j,t}\\geq 0 & i\\in H, j\\in C,\\; s,t\\in T \\label{TransportationMIP_Eq:IntegralityConstraints}\n\\end{align}\n}\nExpression (\\ref{TransportationMIP_Eq:ObjMinMatches}), the objective function, minimizes the number of matches.\nEquations (\\ref{TransportationMIP_Eq:HotStreamConservation}) and (\\ref{TransportationMIP_Eq:ColdStreamConservation}) ensure heat conservation.\nEquations (\\ref{TransportationMIP_Eq:BigM_Constraint}) enforce a match between a hot and a cold stream if they exchange a positive amount of heat.\nEquations (\\ref{TransportationMIP_Eq:BigM_Constraint}) are \\emph{big-M constraints}.\nEquations (\\ref{TransportationMIP_Eq:ThermoConstraint}) ensure that no heat flows to a hotter temperature.\n\n{\n\nThe transportation model may be reduced by removing redundant variables and constraints.\nSpecifically, a mathematically-equivalent \\emph{reduced transportation MILP model} removes: (i) all variables $q_{i,s,j,t}$ with $s> t$ and (ii) Equations (\\ref{TransportationMIP_Eq:ThermoConstraint}).\nBut modern commercial MILP solvers may detect redundant variables constrained to a fixed value and exploit this information to their benefit. Table \\ref{Table:Transportation_Models_Comparison} shows that the aggregate performance of CPLEX and Gurobi is unaffected by the redundant constraints and variables.\n\n}\n\n\\begin{figure*}[t!]\n\\centering\n\n\\begin{subfigure}[t]{0.45\\textwidth}\n\\centering\n\\includegraphics{transportation_graph.eps}\n\\vspace*{-1cm}\n\\caption{ Transportation Model}\n\\label{Fig:transportation}\n\\end{subfigure}\n\\begin{subfigure}[t]{0.45\\textwidth}\n\\centering\n\\includegraphics{transshipment_graph.eps}\n\\vspace*{-1cm}\n\\caption{Transshipment Model}\n\\label{Fig:transshipment}\n\\end{subfigure}\n\\vspace*{-0.5cm}\n\\caption{\nIn the transportation model \\citep{cerda:1983}, each hot stream $i$ supplies $\\sigma_{i,t}$ units of heat in temperature interval $t$ which can be received, in the same or a lower temperature interval, by a cold stream $j$ which demands $\\delta_{j,t}$ units of heat in $t$. 
\nIn the transshipment model \\citep{papoulias:1983}, there are also intermediate nodes transferring residual heat to a lower temperature interval.\nThis figure is adapted from \\citet{furman:2004}.\n}\n\\end{figure*}\n\n\n\\paragraph{Transshipment Model \\citep{papoulias:1983}} \nAs illustrated in Figure \\ref{Fig:transshipment}, the transshipment formulation transfers heat from hot streams to cold streams via intermediate transshipment nodes.\nIn each temperature interval, the heat entering a transshipment node either transfers to a cold stream in the same temperature interval or it descends to the transshipment node of the subsequent temperature interval as residual heat.\n\nBinary variable $y_{i,j}$ is 1 if hot stream $i\\in H$ is matched with cold stream $j\\in C$ and 0 otherwise.\nVariable $q_{i,j,t}$ represents the heat received by cold stream $j\\in C$ in temperature interval $t\\in T$ originally exported by hot stream $i\\in H$.\nVariable $r_{i,s}$ represents the residual heat of hot stream $i\\in H$ that descends from temperature interval $s$ to temperature interval $s+1$. \nParameter $U_{i,j}$ is a big-M parameter bounding the heat exchanged between hot stream $i\\in H$ and cold stream $j\\in C$, e.g.\\ $U_{i,j}=\\min\\{h_i,c_j\\}$.\nThe problem is formulated:\n{\\allowdisplaybreaks\n\\begin{align}\n\\text{min} & \\sum_{i\\in H}\\sum_{j \\in C} y_{i,j} \\label{TransshipmentMIP_Eq:ObjMinMatches} \\\\ \n& \\sum_{j \\in C} q_{i,j,s} + r_{i,s} = \\sigma_{i,s} + r_{i,s-1} & i\\in H, s\\in T \\label{TransshipmentMIP_Eq:HotStreamConservation} \\\\\n& r_{i,k} = 0 & i\\in H \\label{TransshipmentMIP_Eq:HeatConservation} \\\\\n& \\sum_{i\\in H} q_{i,j,t} = \\delta_{j,t} & j\\in C, t \\in T \\label{TransshipmentMIP_Eq:ColdStreamConservation} \\\\ \n& \\sum_{t\\in T} q_{i,j,t}\\leq U_{i,j}\\cdot y_{i,j} & i\\in H, j\\in C \\label{TransshipmentMIP_Eq:BigM_Constraint} \\\\\n& y_{i,j}\\in \\{0,1\\},\\; q_{i,j,t}, r_{i,s}\\geq 0 & i\\in H, j\\in C, s,t\\in T\n\\end{align}\n}\nExpression (\\ref{TransshipmentMIP_Eq:ObjMinMatches}) minimizes the number of matches. 
\nEquations (\\ref{TransshipmentMIP_Eq:HotStreamConservation})-(\\ref{TransshipmentMIP_Eq:ColdStreamConservation}) enforce heat conservation.\nEquation (\\ref{TransshipmentMIP_Eq:BigM_Constraint}) allows positive heat exchange between hot stream $i\\in H$ and cold stream $j\\in C$ only if $(i,j)$ are matched.\n\n\n\\section{Heuristics with Performance Guarantees}\n\\label{sec:heuristics_performance_guarantees}\n\n\\subsection{Computational Complexity}\n\\label{sec:computational_complexity}\n\nWe briefly introduce $\\mathcal{NP}$-completeness and basic computational complexity classes \\citep{arora:2009,papadimitriou:1994}.\nA \\emph{polynomial algorithm} produces a solution for a computational problem with a running time polynomial to the size of the problem instance.\nThere exist problems which admit a polynomial-time algorithm and others which do not.\nThere is also the class of \\emph{$\\mathcal{NP}$-complete problems} for which we do not know whether they admit a polynomial algorithm or not.\nThe question of whether $\\mathcal{NP}$-complete problems admit a polynomial algorithm is known as the $\\mathcal{P}=\\mathcal{NP}$ question.\nIn general, it is conjectured that $\\mathcal{P}\\neq \\mathcal{NP}$, i.e.\\ $\\mathcal{NP}$-complete problems are not solvable in polynomial time.\nAn optimization problem is \\emph{$\\mathcal{NP}$-hard} if its decision version is $\\mathcal{NP}$-complete.\nA computational problem is \\emph{strongly $\\mathcal{NP}$-hard} if it remains $\\mathcal{NP}$-hard when all parameters are bounded by a polynomial to the size of the instance.\n\n\nThe minimum number of matches problem is known to be strongly $\\mathcal{NP}$-hard, even in the special case of a single temperature interval.\n\\citet{furman:2004} propose an $\\mathcal{NP}$-hardness reduction from the well-known 3-Partition problem, i.e.\\ they show that the minimum number of matches problem has difficulty equivalent to the 3-Partition problem.\n\\ref{App:NP_harndess} presents an alternative $\\mathcal{NP}$-hardness reduction from the bin packing problem. 
This alternative setting of the minimum number of matches problem gives new insight into the packing nature of the problem.\nA major contribution of this paper is to design efficient, greedy heuristics motivated by packing.\n\n\\begin{theorem}\n\\label{thm:NP_hardness}\nThere exists an $\\mathcal{NP}$-hardness reduction from bin packing to the minimum number of matches problem with a single temperature interval.\n\\end{theorem}\n\\begin{omitted_proof}\nSee \\ref{App:NP_harndess}.\n\\end{omitted_proof}\n\n\n\\subsection{Approximation Algorithms}\n\\label{sec:approximation_algorithms}\n\nA heuristic with a performance guarantee is usually called an \\emph{approximation algorithm} \\citep{vazirani:2001,williamson:2011}.\nUnless $\\mathcal{P}=\\mathcal{NP}$, there is no polynomial algorithm solving an $\\mathcal{NP}$-hard problem.\nAn approximation algorithm is a polynomial algorithm producing a near-optimal solution to an optimization problem.\nFormally, consider an optimization problem, without loss of generality minimization, and a polynomial Algorithm $A$ for solving it (not necessarily to global optimality).\nFor each problem instance $I$, let $C_A(I)$ and $C_{OPT}(I)$ be the algorithm's objective value and the optimal objective value, respectively.\nAlgorithm $A$ is $\\rho$-approximate if, for every problem instance $I$, it holds that:\n\\begin{equation*}\nC_A(I)\\leq \\rho\\cdot C_{OPT}(I).\n\\end{equation*}\nThat is, a $\\rho$-approximation algorithm computes, in polynomial time, a solution with an objective value at most $\\rho$ times the optimal objective value.\nThe value $\\rho$ is the \\emph{approximation ratio} of Algorithm $A$. \nTo prove a $\\rho$-approximation ratio, we proceed as depicted in Figure \\ref{Fig:ApproximationAlgorithm}.\nFor each problem instance, we compute analytically a lower bound $C_{LB}(I)$ on the optimal objective value, i.e.\\ $C_{LB}(I) \\leq C_{OPT}(I)$, and we show that the algorithm's objective value is at most $\\rho$ times the lower bound, i.e.\\ $C_A(I)\\leq \\rho \\cdot C_{LB}(I)$.\nThe ratio of a $\\rho$-approximation algorithm is \\emph{tight} if the algorithm is not $(\\rho-\\epsilon)$-approximate for any $\\epsilon>0$. \nAn algorithm is $O(f(n))$-approximate, respectively $\\Omega(f(n))$-approximate, where $f(n)$ is a function of an input parameter $n$, if its approximation ratio is asymptotically at most, respectively at least, $f(n)$.\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics{approximation_algorithm.eps}\n\\caption{Analysis of an Approximation Algorithm}\n\\label{Fig:ApproximationAlgorithm}\n\\end{center}\n\\end{figure}\n\nApproximation algorithms have been developed for two problem classes relevant to process systems engineering: heat exchanger networks \\citep{furman:2004} and pooling \\citep{dey2015analysis}. 
Table \\ref{Table:Heuristics_Performance_Guarantees} lists performance guarantees for the minimum number of matches problem; most are new to this manuscript.\n\n\\begin{table}[t]\n\\small\n\\begin{adjustbox}{center} \n\\begin{tabular}{ | l l c c c | }\n\\hline\n\\textbf{Heuristic} & \\textbf{Abbrev.} & \\textbf{Section} & \\textbf{Performance Guarantee} & \\textbf{Running Time} \\\\ \n\\hline\n\\multicolumn{5}{|l|} {\\textbf{Single Temperature Interval Problem}} \\\\ \nSimple Greedy & SG & \\ref{Sec:SingleTemperatureIntervalProblem-Approximation} & 2$^{\\dagger}$ (tight) & $O(nm)$ \\\\\nImproved Greedy & IG & \\ref{Sec:SingleTemperatureIntervalProblem-Approximation} & 1.5 (tight) & $O(nm)$ \\\\\n\\hline\n\\multicolumn{5}{|l|} {\\textbf{Relaxation Rounding Heuristics}} \\\\ \nFractional LP Rounding & FLPR & \\ref{Subsection:FLPR} & $O(k)^{\\dagger}$, $O(U_{\\max})$, $\\Omega(n)$ & 1 LP \\\\\nLagrangian Relaxation Rounding & LRR & \\ref{Subsection:LRR} & & 2 LPs \\\\\nCovering Relaxation Rounding & CRR & \\ref{Subsection:CRR} & & $O(nm)$ ILPs \\\\\n\\hline\n\\multicolumn{5}{|l|} {\\textbf{Water Filling Heuristics}} \\\\ \nWater Filling MILP & WFM & \\ref{sec:water_filling} \\& \\ref{Sec:SingleTemperatureIntervalProblem-MILP} & $O(k)^{\\dagger}$, $\\Omega(k)$ & $O(k)$ MILPs \\\\ \nWater Filling Greedy & WFG & \\ref{sec:water_filling} \\& \\ref{Sec:SingleTemperatureIntervalProblem-Approximation} & $O(k)^{\\dagger}$, $\\Omega(k)$ & $O(n m k)$ \\\\\n\\hline\n\\multicolumn{5}{|l|} {\\textbf{Greedy Packing Heuristics}} \\\\ \nLargest Heat Match LP-based & LHM-LP & \\ref{Subsection:Largest_Heat_Match} & $O(\\log n + \\log (h_{\\max}\/\\epsilon))$ & $O(n^2m^2)$ LPs \\\\\nLargest Heat Match Greedy & LHM & \\ref{Subsection:Largest_Heat_Match} & & $O(n^2 m^2 k)$ \\\\\nLargest Fraction Match & LFM & \\ref{Subsection:LFM} & & $O(n^2 m^2 k)$ \\\\\nShortest Stream & SS & \\ref{Subsection:SS} & & $O(n m k)$ \\\\\n\\hline\n\\end{tabular}\n\\end{adjustbox} \n\\caption{Performance guarantees for the minimum number of matches problem. The performance guarantees marked ${\\dagger}$ are from \\cite{furman:2004}; all others are new to this manuscript.}\n\\label{Table:Heuristics_Performance_Guarantees}\n\\end{table}\n\n\n\n\\section{Single Temperature Interval Problem}\n\\label{sec:single_temperature_interval}\n\nThis section proposes efficient algorithms for the single temperature interval problem.\nUsing graph theoretic properties, we obtain: (i) a novel, efficiently solvable MILP formulation without big-M constraints and (ii) an improved 3\/2-approximation algorithm. \n{Of course, the single temperature interval problem is not immediately applicable to the minimum number of matches problem with multiple temperature intervals. But designing efficient approximation algorithms for the single temperature interval is the first, essential step before considering multiple temperature intervals. Additionally, the water filling heuristics introduced in Section \\ref{sec:water_filling} repeatedly solve the single temperature interval problem.}\n\nIn the single temperature interval problem, a feasible solution can be represented as a bipartite graph $G=(H\\cup C, M)$ in which there is a node for each hot stream $i\\in H$, a node for each cold stream $j\\in C$ and the set $M\\subseteq H\\times C$ specifies the matches. \n\\ref{App:Single_Temperature_Interval} shows the existence of an optimal solution whose graph $G$ does not contain any cycle. 
\nA connected graph without cycles is a \\emph{tree}, so $G$ is a forest consisting of trees.\n\\ref{App:Single_Temperature_Interval} also shows that the number $v$ of edges in $G$, i.e.\\ the number of matches, is related to the number $\\ell$ of trees with the equality $v=n+m-\\ell$.\nSince $n$ and $m$ are input parameters, minimizing the number of matches in a single temperature interval is equivalent to finding a solution whose graph consists of a maximal number $\\ell$ of trees.\n\n\\subsection{Novel MILP Formulation}\n\\label{Sec:SingleTemperatureIntervalProblem-MILP}\n\nWe propose a novel MILP formulation for the single temperature interval problem.\nIn an optimal solution without cycles, there can be at most $\\min\\{n,m\\}$ trees.\nFrom a packing perspective, we assume that there are $\\min\\{n,m\\}$ available bins and each stream is placed into exactly one bin.\nIf a bin is non-empty, then its content corresponds to a tree of the graph.\nThe objective is to find a feasible solution with a maximum number of bins. \n\nTo formulate the problem as an MILP, we define the set $B=\\{1,2,\\ldots,\\min\\{n,m\\}\\}$ of available bins.\nBinary variable $x_b$ is 0 if bin $b\\in B$ is empty and 1, otherwise. \nA binary variable $w_{i,b}$ indicates whether hot stream $i\\in H$ is placed into bin $b\\in B$.\nSimilarly, a binary variable $z_{j,b}$ specifies whether cold stream $j\\in C$ is placed into bin $b\\in B$. \nThen, the minimum number of matches problem can be formulated:\n{\\allowdisplaybreaks\n\\begin{align}\n\\text{max} & \\sum_{b\\in B} x_b \\label{Eq:SingleMIP_MaxBins} \\\\\n& x_b \\leq \\sum_{i\\in H} w_{i,b} & b\\in B \\label{Eq:SingleMIP_HotBinUsage} \\\\ \n& x_b \\leq \\sum_{j\\in C} z_{j,b} & b\\in B \\label{Eq:SingleMIP_ColdBinUsage} \\\\ \n& \\sum_{b\\in B} w_{i,b} = 1 & i\\in H \\label{Eq:SingleMIP_HotAssignment} \\\\\n& \\sum_{b\\in B} z_{j,b} = 1 & j\\in C \\label{Eq:SingleMIP_ColdAssignment} \\\\\n& \\sum_{i\\in H} w_{i,b} \\cdot h_i = \\sum_{j\\in C} z_{j,b} \\cdot c_j & b\\in B \\label{Eq:SingleMIP_BinHeatConservation} \\\\\n& x_b, w_{i,b}, z_{j,b}\\in\\{0,1\\} & b\\in B, i\\in H, j\\in C \\label{Eq:SingleMIP_Integrality}\n\\end{align}\n}\nExpression (\\ref{Eq:SingleMIP_MaxBins}), the objective function, maximizes the number of bins. \nEquations (\\ref{Eq:SingleMIP_HotBinUsage}) and (\\ref{Eq:SingleMIP_ColdBinUsage}) ensure that a bin is used if there is at least one stream in it.\nEquations (\\ref{Eq:SingleMIP_HotAssignment}) and (\\ref{Eq:SingleMIP_ColdAssignment}) enforce that each stream is assigned to exactly one bin.\nFinally, Eqs.\\ (\\ref{Eq:SingleMIP_BinHeatConservation}) ensure the heat conservation of each bin. \nNote that, unlike the transportation and transshipment models, Eqs.\\ (\\ref{Eq:SingleMIP_MaxBins})-(\\ref{Eq:SingleMIP_BinHeatConservation}) do not use a big-M parameter. {\n\\ref{App:Water_Filling} formulates the single temperature interval problem \\emph{without} heat conservation. 
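\nBefore turning to that relaxed variant, a minimal implementation sketch of Eqs.\\ (\\ref{Eq:SingleMIP_MaxBins})-(\\ref{Eq:SingleMIP_Integrality}) may help readers who wish to experiment with the bin-based model; the three hot streams, two cold streams and the use of the open-source PuLP modeling library are illustrative assumptions and not part of the formulation.\n\\begin{verbatim}\nimport pulp\n\n# Hypothetical single-interval heat loads (total supply equals total demand).\nh = {'H1': 50.0, 'H2': 30.0, 'H3': 20.0}\nc = {'C1': 50.0, 'C2': 50.0}\nB = list(range(min(len(h), len(c))))  # candidate bins\n\nprob = pulp.LpProblem('single_interval_bins', pulp.LpMaximize)\nx = pulp.LpVariable.dicts('x', B, cat='Binary')             # bin b is non-empty\nw = pulp.LpVariable.dicts('w', (list(h), B), cat='Binary')  # hot i placed in bin b\nz = pulp.LpVariable.dicts('z', (list(c), B), cat='Binary')  # cold j placed in bin b\n\nprob += pulp.lpSum([x[b] for b in B])  # maximize the number of non-empty bins\nfor b in B:\n    prob += x[b] <= pulp.lpSum([w[i][b] for i in h])  # used only if it has a hot stream\n    prob += x[b] <= pulp.lpSum([z[j][b] for j in c])  # used only if it has a cold stream\n    prob += pulp.lpSum([h[i] * w[i][b] for i in h]) == pulp.lpSum([c[j] * z[j][b] for j in c])\nfor i in h:\n    prob += pulp.lpSum([w[i][b] for b in B]) == 1     # each hot stream in exactly one bin\nfor j in c:\n    prob += pulp.lpSum([z[j][b] for b in B]) == 1     # each cold stream in exactly one bin\n\nprob.solve()\nbins = int(round(sum(x[b].value() for b in B)))\nprint('trees:', bins, 'matches:', len(h) + len(c) - bins)\n\\end{verbatim}\nFor these data two bins can be filled, so $v=n+m-\\ell=3+2-2=3$ matches suffice.\n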
Eqs.\\ (\\ref{Eq:SingleMIPwithoutConservation_MaxBins})-(\\ref{Eq:SingleMIPwithoutConservation_Integrality}) are similar to Eqs.\\ (\\ref{Eq:SingleMIP_MaxBins})-(\\ref{Eq:SingleMIP_Integrality}) except (i) they drop constraints (\\ref{Eq:SingleMIP_HotBinUsage}) and (ii) equalities (\\ref{Eq:SingleMIP_HotAssignment}) \\& (\\ref{Eq:SingleMIP_BinHeatConservation}) become inequalities (\\ref{Eq:SingleMIPwithoutConservation_HotAssignment}) \\& (\\ref{Eq:SingleMIPwithoutConservation_BinHeatConservation}).\n}\n\n\\subsection{Improved Approximation Algorithm}\n\\label{Sec:SingleTemperatureIntervalProblem-Approximation}\n\n\\citet{furman:2004} propose a greedy 2-approxi\\-mation algorithm for the minimum number of matches problem in a single temperature interval.\nWe show that their analysis is tight. We also propose an improved, tight $1.5$-approximation algorithm by prioritizing matches with equal heat loads and exploiting graph theoretic properties.\n\nThe simple greedy (SG) algorithm considers the hot and the cold streams in non-increasing heat load order \\citep{furman:2004}.\nInitially, the first hot stream is matched to the first cold stream and an amount $\\min\\{h_1, c_1\\}$ of heat is transferred between them.\nWithout loss of generality $h_1 > c_1$, which implies that an amount $h_1 - c_1$ of heat load remains to be transferred from $h_1$ to the remaining cold streams.\nSubsequently, the algorithm matches $h_1$ to $c_2$, by transferring $\\min\\{h_1 - c_1, c_2\\}$ heat. \nThe same procedure repeats with the other streams until all remaining heat load is transferred.\n\n\n\n\\begin{algorithm}[t] \\nonumber\n\\caption[Simple Greedy (SG)]{Simple Greedy (SG), developed by \\cite{furman:2004}, is applicable to one temperature interval only.}\n\\begin{algorithmic}[1]\n\\State Sort the streams so that $h_1\\geq h_2\\geq \\ldots\\geq h_n$ and $c_1\\geq c_2\\geq \\ldots \\geq c_m$.\n\\State Set $i = 1$ and $j = 1$.\n\\While {there is remaining heat load to be transferred}\n\\State Transfer $q_{i,j}=\\min\\{h_i, c_j\\}$ \n\\State Set $h_i = h_i - q_{i,j}$ and $c_j = c_j - q_{i,j}$\n\\State \\textbf{if} $h_i = 0$, \\textbf{then} set $i = i+1$\n\\State \\textbf{if} $c_j = 0$, \\textbf{then} set $j = j+1$\n\\EndWhile\n\\end{algorithmic}\n\\label{Alg:SimpleGreedy}\n\\end{algorithm}\n\n\n\n\\citet{furman:2004} show that Algorithm SG is 2-approximate for one temperature interval.\nOur new result in Theorem \\ref{thm:greedy} shows that this ratio is tight.\n\n\\begin{theorem}\n\\label{thm:greedy}\nAlgorithm SG achieves an approximation ratio of 2 for the single temperature interval problem and it is tight.\n\\end{theorem}\n\\begin{omitted_proof}\nSee \\ref{App:Single_Temperature_Interval}.\n\\end{omitted_proof}\n\n\\medskip\n\nAlgorithm IG improves Algorithm SG by: (i) matching the pairs of hot and cold streams with equal heat loads and (ii) using the acyclic property in the graph representation of an optimal solution. {\nIn practice, hot and cold process streams are unlikely to have equal supplies and demands of heat, so discussing equal heat loads is largely a thought experiment. But the updated analysis allows us to claim an improved performance bound on Algorithm SG. 
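\nA compact Python sketch of the two procedures may make the bookkeeping explicit: the equal-load pre-matching step of Algorithm IG followed by the greedy pass of Algorithm SG. The instance at the end is hypothetical and the code is only an illustration of Algorithms \\ref{Alg:SimpleGreedy} and \\ref{Alg:ImprovedGreedy}, not the implementation evaluated in the computational study.\n\\begin{verbatim}\ndef simple_greedy(h, c):\n    # Algorithm SG: consider heat loads in non-increasing order and match greedily.\n    hot = sorted(h.items(), key=lambda kv: -kv[1])\n    cold = sorted(c.items(), key=lambda kv: -kv[1])\n    rem_h = [load for _, load in hot]\n    rem_c = [load for _, load in cold]\n    matches, i, j = [], 0, 0\n    while i < len(hot) and j < len(cold):\n        q = min(rem_h[i], rem_c[j])\n        matches.append((hot[i][0], cold[j][0], q))\n        rem_h[i] -= q\n        rem_c[j] -= q\n        if rem_h[i] <= 1e-9:\n            i += 1\n        if rem_c[j] <= 1e-9:\n            j += 1\n    return matches\n\ndef improved_greedy(h, c):\n    # Algorithm IG: first match streams with exactly equal heat loads, then run SG.\n    h, c, matches = dict(h), dict(c), []\n    for i in list(h):\n        for j in list(c):\n            if j in c and abs(h[i] - c[j]) < 1e-9:\n                matches.append((i, j, c[j]))\n                del h[i], c[j]\n                break\n    return matches + simple_greedy(h, c)\n\n# Hypothetical single-interval instance (total supply equals total demand).\nprint(improved_greedy({'H1': 5.0, 'H2': 3.0, 'H3': 2.0}, {'C1': 5.0, 'C2': 5.0}))\n\\end{verbatim}\nOn this instance the pre-matching pairs H1 with C1 and the greedy pass closes the remaining demand with two further matches, which is optimal here. 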
Additionally, the notion of matching roughly equivalent supplies and demands inspires the Section \\ref{Subsection:LFM} \\emph{Largest Fraction Match First} heuristic.\n}\n\n\n\\begin{algorithm}[t] \\nonumber\n\\caption[Improved Greedy (IG)]{Improved Greedy (IG) is applicable to one temperature interval only.}\n\\label{alg:impgreedy}\n\\begin{algorithmic}[1]\n\\For {each pair of hot stream $i$ and cold stream $j$ s.t. $h_i=c_j$}\n\\State Transfer $h_i$ units of heat load (also equal to $c_j$) between them and remove both streams.\n\\EndFor\n\\State Run Algorithm SG with respect to the remaining streams.\n\\end{algorithmic}\n\\label{Alg:ImprovedGreedy}\n\\end{algorithm}\n\n\n\n\\medskip\n\n\\begin{theorem}\\label{thm:impgreedy}\nAlgorithm IG achieves an approximation ratio of 1.5 for the single temperature interval problem and it is tight.\n\\end{theorem}\n\\begin{omitted_proof}\nSee \\ref{App:Single_Temperature_Interval}.\n\\end{omitted_proof}\n\n\\medskip\n\n\n\n\\section{Maximum Heat Computations with Match Restrictions}\n\\label{sec:max_heat}\nThis section discusses computing the maximum heat that can be feasibly exchanged in a minimum number of matches instance. Section \\ref{Sec:MaxHeat_2Streams} discusses the special case of two streams and thereby reduces the value of the big-M parameter $U_{i,j}$. \nSections \\ref{Sec:MaxHeat_SingleInterval} \\& \\ref{Sec:MaxHeat_MultipleIntervals} generalize Section \\ref{Sec:MaxHeat_2Streams} from two streams to an arbitrary subset of candidate matches. Section \\ref{Sec:MaxHeat_SingleInterval} is limited to a restricted subset of matches in a single temperature interval. Section \\ref{Sec:MaxHeat_MultipleIntervals} calculates the maximum heat that can be feasibly exchanged in the most general case of multiple temperature intervals.\nThese maximum heat computations are an essential ingredient of our heuristic methods and aim at using each match in the most profitable way. They also determine whether a minimum number of matches instance is feasible.\n\n\\subsection{Two Streams and Big-M Parameter Computation}\n\\label{Sec:MaxHeat_2Streams}\n\nA common way of computing the big-M parameters is setting $U_{i,j}=\\min\\{h_i,c_j\\}$ for each $i\\in H$ and $j\\in C$. 
\\citet{gundersen:1997} propose a better method for calculating the big-M parameter.\nOur novel Greedy Algorithm MHG (Maximum Heat Greedy) obtains tighter $U_{i,j}$ bounds than either the trivial bounds or the \\citet{gundersen:1997} bounds by exploiting the transshipment model structure.\n\nGiven hot stream $i$ and cold stream $j$, Algorithm MHG computes the maximum amount of heat that can be feasibly exchanged between $i$ and $j$ in any feasible solution.\nAlgorithm MHG is tight in the sense that there is always a feasible solution where streams $i$ and $j$ exchange exactly $U_{i,j}$ units of heat.\nNote that, in addition to $U_{i,j}$, the algorithm computes a value $q_{i,s,j,t}$ of the heat exchanged between each hot stream $i\\in H$ in temperature interval $s\\in T$ and each cold stream $j\\in C$ in temperature interval $t\\in T$, so that $\\sum_{s,t\\in T}q_{i,s,j,t}=U_{i,j}$.\nThese $q_{i,s,j,t}$ values are required by greedy packing heuristics in Section \\ref{sec:greedy_packing}.\n\nAlgorithm \\ref{Alg:MaximumHeat} is a pseudocode of Algorithm MHG.\nThe correctness, i.e.\\ the maximality of the heat exchanged between $i$ and $j$, is a corollary of the well known maximum flow - minimum cut theorem.\nInitially, the procedure transfers the maximum amount of heat across the same temperature interval; $q_{i,u,s,u}=\\min\\{\\sigma_{i,u},\\delta_{j,u}\\}$ for each $u\\in T$.\nThe remaining heat is transferred greedily in a top down manner, with respect to the temperature intervals, by accounting heat residual capacities.\nFor each temperature interval $u\\in T$, the heat residual capacity $R_u=\\sum_{i=1}^n\\sum_{s=1}^u\\sigma_{i,s} - \\sum_{j=1}^m\\sum_{t=1}^u\\delta_{j,t}$ imposes an upper bound on the amount of heat that may descend from temperature intervals $1,2,\\ldots,u$ to temperature intervals $u+1,u+2,\\ldots,k$.\n\n\\begin{algorithm}[t] \\nonumber\n\\caption{Maximum Heat Greedy (MHG)}\n\\textbf{Input:} Hot stream $i\\in H$ and cold stream $j\\in C$ \n\\begin{algorithmic}[1]\n\\State $\\vec{q} \\leftarrow \\vec{0}$\n\\For {{$u=1,2,\\ldots,k-1$}}\n\\State {$R_u=\\sum_{i=1}^n\\sum_{s=1}^u\\sigma_{i,s} - \\sum_{j=1}^m\\sum_{t=1}^u\\delta_{j,t}$}\n\\EndFor\n\\For {$u=1,2,\\ldots,k$}\n\\State $q_{i,u,j,u}\\leftarrow \\min\\{\\sigma_{i,u},\\delta_{j,u}\\}$\n\\State $\\sigma_{i,u}\\leftarrow \\sigma_{i,u}-q_{i,u,j,u}$\n\\State $\\delta_{j,u}\\leftarrow \\delta_{j,u}-q_{i,u,j,u}$\n\\EndFor\n\\For {$s=1,2,\\ldots,k-1$}\n\\For {$t=s+1,s+2,\\ldots,k$}\n\\State $q_{i,s,j,t}=\\min\\{\\sigma_{i,s},\\delta_{j,t},\\min_{s\\leq u\\leq t-1}\\{R_u\\}\\}$\n\\State $\\sigma_{i,s}\\leftarrow \\sigma_{i,s}-q_{i,s,j,t}$\n\\State $\\delta_{j,t}\\leftarrow \\delta_{j,t}-q_{i,s,j,t}$\n\\For {$u=s,s+1,s+2,\\ldots,t-1$}\n\\State $R_u\\leftarrow R_u-q_{i,s,j,t}$\n\\EndFor\n\\EndFor\n\\EndFor\n\\State Return $\\vec{q}$\n\\end{algorithmic}\n\\label{Alg:MaximumHeat}\n\\end{algorithm}\n\n\\subsection{Single Temperature Interval}\n\\label{Sec:MaxHeat_SingleInterval}\n\nGiven an instance of the single temperature interval problem and a subset $M$ of matches, the maximum amount of heat that can be feasibly exchanged between the streams using only the matches in $M$ can be computed by solving \\ref{EquationSet:SingleMaxHeatLP_initial_stattement}.\n{Like the single temperature interval algorithms of Section \\ref{sec:single_temperature_interval}, \\ref{EquationSet:SingleMaxHeatLP_initial_stattement} is not directly applicable to a minimum number of matches problem with multiple temperature intervals. 
But \\ref{EquationSet:SingleMaxHeatLP_initial_stattement} is an important part of our water filling heuristics.}\nFor simplicity, \\ref{EquationSet:SingleMaxHeatLP_initial_stattement} drops\ntemperature interval indices for variables $q_{i,j}$.\n{\\allowdisplaybreaks\n\\begin{align}\n\\tag{MaxHeatLP}\n\\label{EquationSet:SingleMaxHeatLP_initial_stattement}\n\\begin{aligned}\n\\text{max} & \\sum_{(i,j)\\in M} q_{i,j} \\\\\n& \\sum_{j\\in C} q_{i,j} \\leq h_i & i\\in H \\\\\n& \\sum_{i\\in H} q_{i,j} \\leq c_j & j\\in C \\\\\n& q_{i,j} \\geq 0 & i\\in H, j\\in C\n\\end{aligned}\n\\end{align}\n}\n\n\\subsection{Multiple Temperature Intervals}\n\\label{Sec:MaxHeat_MultipleIntervals}\n\nMaximizing the heat exchanged through a subset of matches across multiple temperature intervals can solved with an LP that generalizes \\ref{EquationSet:SingleMaxHeatLP_initial_stattement}. \nThe generalized LP must satisfy the additional requirement that, after removing a maximum heat exchange, the remaining instance is feasible.\nFeasibility is achieved using residual capacity constraints which are essential for the efficiency of greedy packing heuristics (see Section \\ref{Subsection:MonotonicGreedyHeuristics}).\n\nGiven a set $M$ of matches, let $A(M)$ be the set of quadruples $(i,s,j,t)$ such that a positive amount of heat can be feasibly transferred via the transportation arc with endpoints the nodes $(i,s)$ and $(j,t)$.\nThe set $A(M)$ does not contain any quadruple $(i,s,j,t)$ with: (i) $s>t$, (ii) $\\sigma_{i,s}=0$, (iii) $\\delta_{j,t}=0$, or (iv) $(i,j)\\not\\in M$. \nLet $V^H(M)$ and $V^C(M)$ be the set of transportation vertices $(i,s)$ and $(j,t)$, respectively, that appear in $A(M)$.\nSimilarly, given two fixed vertices $(i,s)\\in V^H(M)$ and $(j,t)\\in V^C(M)$, we define the sets $V_{i,s}^C(M)$ and $V_{j,t}^H(M)$ of their respective neighbors in $A(M)$.\n\nConsider a temperature interval $u\\in T$.\nWe define by $A_u(M)\\subseteq A(M)$ the subset of quadruples with $s\\leq u< t$, for $u\\in T$. \nThe total heat transferred via the arcs in $A_u(M)$ must be upper bounded by $R_u=\\sum_{i=1}^n\\sum_{s=1}^u\\sigma_{i,s} - \\sum_{j=1}^m\\sum_{t=1}^u\\delta_{j,t}$.\nFurthermore, $A(M)$ eliminates any quadruple $(i,s,j,t)$ with $R_u=0$, for some $s\\leq u0$}\n\\State $y_{i,j}\\leftarrow 1$\n\\Else\n\\State $y_{i,j}\\leftarrow 0$\n\\EndIf\n\\EndFor\n\\State Return $(\\vec{y},\\vec{q})$\n\\end{algorithmic}\n\\label{Alg:FLPR}\n\\end{algorithm}\n\n\nAn inherent drawback of the \\citet{furman:2004} approach is the existence of optimal fractional solutions with unnecessary matches.\nTheorem \\ref{Thm:FLPR_negative} shows that Algorithm FLPR performance is bad in the worst case, even for instances with a single temperature interval. 
\nThe proof, given in \\ref{App:Relaxation_Rounding}, can be extended so that unnecessary matches occur across multiple temperature intervals.\n\\begin{theorem}\n\\label{Thm:FLPR_negative}\nAlgorithm FLPR is $\\Omega(n)$-approximate.\n\\end{theorem}\n\\begin{omitted_proof}\nSee \\ref{App:Relaxation_Rounding}.\n\\end{omitted_proof}\n\n\\medskip\n\nConsider an optimal fractional solution to \\ref{EquationSet:FracLP} and suppose that $M\\subseteq H\\times C$ is the set of pairs of streams exchanging a positive amount of heat.\nFor each $(i,j)\\in M$, denote by $L_{i,j}$ the heat exchanged between hot stream $i$ and cold stream $j$.\nWe define: \n\\begin{equation*}\n\\phi(M) = \\min_{(i,j)\\in M} \\left\\{ \\frac{L_{i,j}}{U_{i,j}} \\right\\}\n\\end{equation*}\nas the \\emph{filling ratio}, which corresponds to the minimum portion of an upper bound $U_{i,j}$ filled with the heat $L_{i,j}$, for some match $(i,j)$.\nGiven an optimal fractional solution with filling ratio $\\phi(M)$, Theorem \\ref{Thm:FLPR_positive} obtains a $1\/\\phi(M)$-approximation ratio for FLPR.\n\n\\begin{theorem}\n\\label{Thm:FLPR_positive}\nGiven an optimal fractional solution with a set $M$ of matches and filling ratio $\\phi(M)$, FLPR produces a $\\left(1\/\\phi(M) \\right)$-approximate integral solution.\n\\end{theorem}\n\\begin{omitted_proof}\nSee \\ref{App:Relaxation_Rounding}.\n\\end{omitted_proof}\n\n\\medskip\n\nIn the case where all heat supplies and demands are integers, the integrality of the minimum cost flow polytope and Theorem \\ref{Thm:FLPR_positive} imply that FLPR is $U_{\\max}$-approximate, where $U_{\\max}=\\max_{(i,j)\\in H\\times C}\\{U_{i,j}\\}$ is the biggest big-M parameter. \n{A corollary of the $L_{i,j} \/ U_{i,j}$ ratio is that a fractional solution transferring heat $L_{i,j}$ close to capacity $U_{i,j}$ corresponds to a good integral solution. For example, if the optimal fractional solution satisfies $L_{i,j}>0.5 \\cdot U_{i,j}$, for every used match $(i,j)$ such that $L_{i,j} \\neq 0$, then FLPR gives a 2-approximate integral solution. 
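\nThe computation behind this observation is elementary; a small Python fragment with hypothetical heat loads $L_{i,j}$ and upper bounds $U_{i,j}$ for the matches used by a fractional solution reads:\n\\begin{verbatim}\ndef filling_ratio(L, U):\n    # L[(i, j)]: heat carried by match (i, j) in the fractional solution.\n    # U[(i, j)]: upper bound on the heat that (i, j) can feasibly exchange.\n    return min(L[m] / U[m] for m in L if L[m] > 0)\n\nL = {('H1', 'C1'): 80.0, ('H1', 'C2'): 20.0, ('H2', 'C2'): 60.0}\nU = {('H1', 'C1'): 100.0, ('H1', 'C2'): 50.0, ('H2', 'C2'): 60.0}\nphi = filling_ratio(L, U)\nprint('phi =', phi, '; implied FLPR approximation factor <=', 1.0 / phi)\n\\end{verbatim}\nFor these illustrative numbers $\\phi(M)=0.4$, so Theorem \\ref{Thm:FLPR_positive} guarantees that rounding such an optimal fractional solution yields at most $2.5$ times the minimum number of matches.\n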
\nFinally, branch-and-cut repeatedly solves the fractional problem, so our new bound proves the big-M parameter's relevance for exact methods.}\nBecause performance guarantee of FLPR scales with the big-M parameters $U_{i,j}$, we improve the heuristic performance by computing a small big-M parameter $U_{i,j}$ using Algorithm MHG in Section \\ref{Sec:MaxHeat_2Streams}.\n\n\\subsection{Lagrangian Relaxation Rounding}\n\\label{Subsection:LRR}\n\n\\cite{furman:2004} design efficient heuristics for the minimum number of matches problem by applying the method of Lagrangian relaxation and relaxing the big-M constraints.\nThis approach generalizes Algorithm FLPR by approximating the fractional cost of every possible match $(i,j)\\in H\\times C$ and solving an appropriate LP using these costs.\nWe present the LP and revisit different ways of approximating the fractional match costs.\n\nIn a feasible solution, the fractional cost $\\lambda_{i,j}$ of a match $(i,j)$ is the cost incurred per unit of heat transferred via $(i,j)$.\nIn particular, \n\\begin{equation*}\n\\lambda_{i,j}=\n\\left\\{\n\t\\begin{array}{ll}\n\t\t1\/L_{i,j}, & \\mbox{if $L_{i,j}>0$, and} \\\\\n\t\t0, & \\mbox{if $L_{i,j}=0$}\n\t\\end{array}\n\\right.\n\\end{equation*}\nwhere $L_{i,j}$ is the heat exchanged via $(i,j)$.\nThen, the number of matches can be expressed as $\\sum_{i,s,j,t}\\lambda_{i,j}\\cdot q_{i,s,j,t}$.\n\\citet{furman:2004} propose a collection of heuristics computing a single cost value for each match $(i,j)$ and constructing a minimum cost solution.\nThis solution is rounded to a feasible integral solution equivalently to FLPR.\n\nGiven a cost vector $\\vec{\\lambda}$ of the matches, a minimum cost solution is obtained by solving:\n\\begin{align}\n\\tag{CostLP}\n\\label{EquationSet:CostLP}\n\\begin{aligned}\n\\text{min} & \\sum_{i \\in H} \\sum_{j \\in C} \\sum_{s,t\\in T} \\lambda_{i,j} \\cdot q_{i,s,j,t} \\\\ \n& \\sum_{j\\in C}\\sum_{t\\in T} q_{i,s,j,t} = \\sigma_{i,s} & i\\in H, s\\in T \\\\\n& \\sum_{i\\in H}\\sum_{s\\in T} q_{i,s,j,t} = \\delta_{j,t} & j\\in C, t\\in T \\\\\n& q_{i,s,j,t}\\geq 0 & i\\in H, j\\in C,\\; s,t\\in T \n\\end{aligned}\n\\end{align}\n\nA challenge in Lagrangian relaxation rounding is computing a cost $\\lambda_{i,j}$ for each hot stream $i\\in H$ and cold stream $j\\in C$.\nWe revisit and generalize policies for selecting costs.\n\n\\paragraph{Cost Policy 1 (Maximum Heat)} \nMatches that exchange large amounts of heat incur low fractional cost.\nThis observation motivates selecting $\\lambda_{i,j}= 1\/U_{i,j}$, for each $(i,j)\\in H\\times C$, where $U_{i,j}$ is an upper bound on the heat that can be feasibly exchanged between $i$ and $j$.\nIn this case, Lagrangian relaxation rounding is equivalent to FLPR (Algorithm \\ref{Alg:FLPR}).\n\n\\paragraph{Cost Policy 2 (Bounds on the Number of Matches)}\nThis cost selection policy uses lower bounds $\\alpha_i$ and $\\beta_j$ on the number of matches of hot stream $i\\in H$ and cold stream $j\\in C$, respectively, in an optimal solution.\nGiven such lower bounds, at least $\\alpha_i$ cost is incurred for the $h_i$ heat units of $i$ and at least $\\beta_j$ cost is incurred for the $c_j$ units of $j$.\nOn average, each heat unit of $i$ is exchanged with cost at least ${\\alpha_i}\/{h_i}$ and each heat unit of $j$ is exchanged with cost at least ${\\beta_j}\/{c_j}$.\nSo, the fractional cost of each match $(i,j)\\in H\\times C$ can be approximated by setting $\\lambda_{i,j}={\\alpha_i}\/{h_i}$, $\\lambda_{i,j}={\\beta_j}\/{c_j}$ or 
$\\lambda_{i,j}=\\frac{1}{2}(\\frac{\\alpha_i}{h_i} + \\frac{\\beta_j}{c_j})$.\n\n\\citet{furman:2004} use lower bounds $\\alpha_i=1$ and $\\beta_j=1$, for each $i\\in H$ and $j\\in C$.\nWe show that, for any choice of lower bounds $\\alpha_i$ and $\\beta_j$, this cost policy for selecting $\\lambda_{i,j}$ is not effective. Even when $\\alpha_i$ and $\\beta_j$ are tighter than 1, all feasible solutions of \\ref{EquationSet:CostLP} attain the same cost. \nConsider any feasible solution $(\\vec{y},\\vec{q})$ and the fractional cost $\\lambda_{i,j}=\\alpha_i \/ h_i$ for each $(i,j)\\in H\\times C$.\nThen the cost of $(\\vec{y},\\vec{q})$ in \\ref{EquationSet:CostLP} is:\n\\begin{equation*}\n\\sum_{i\\in H} \\sum_{j\\in C} \\sum_{s,t\\in T} \\lambda_{i,j} \\cdot q_{i,s,j,t} = \n\\sum_{i\\in H} \\sum_{j\\in C} \\sum_{s,t\\in T} \\frac{\\alpha_i}{h_i} \\cdot q_{i,s,j,t} = \n\\sum_{i\\in H} \\alpha_i.\n\\end{equation*}\nSince every feasible solution in (\\ref{EquationSet:CostLP}) has cost $\\sum_{i\\in H} \\alpha_i$, Lagrangian relaxation rounding returns an arbitrary solution.\nSimilarly, if $\\lambda_{i,j}={\\beta_j}\/{c_j}$ for $(i,j)\\in H\\times C$, every feasible solution has cost $\\sum_{j\\in C}\\beta_j$.\nIf $\\lambda_{i,j}=\\frac{1}{2}(\\frac{\\alpha_i}{h_i} + \\frac{\\beta_j}{c_j})$, all feasible solutions have the same cost $1\/2 \\cdot (\\sum_{i\\in H} \\alpha_i + \\sum_{j\\in C}\\beta_j)$.\n\n\n\\paragraph{Cost Policy 3 (Existing Solution)}\nThis method of computing costs uses an existing solution.\nThe main idea is to use the actual fractional costs for the solution's matches and a non-zero cost for every unmatched stream pair. \nA minimum cost solution with respect to these costs may improve the initial solution.\nSuppose that $M$ is the set of matches in the initial solution and let $L_{i,j}$ be the heat exchanged via $(i,j) \\in M$.\nFurthermore, let $U_{i,j}$ be an upper bound on the heat exchanged between $i$ and $j$ in any feasible solution.\nThen, a possible selection of costs is $\\lambda_{i,j}= 1\/L_{i,j}$ if $(i,j)\\in M$, and $\\lambda_{i,j}=1\/U_{i,j}$ otherwise.\n\n\n\\subsection{Covering Relaxation Rounding}\n\\label{Subsection:CRR}\n\nThis section proposes a novel covering relaxation rounding heuristic for the minimum number of matches problem.\nThe efficiency of Algorithm FLPR depends on lower bounding the unit cost of the heat transferred via each match. \nThe goal of the covering relaxation is to use these costs and lower bound the number of matches on a stream-to-stream basis by relaxing heat conservation.\nThe heuristic constructs a feasible integral solution by successively solving instances of the covering relaxation.\n\nConsider a feasible MILP solution and suppose that $M$ is the set of matches.\nFor each hot stream $i\\in H$ and cold stream $j\\in C$, denote by $C_i(M)$ and $H_j(M)$ the subsets of cold and hot streams matched with $i$ and $j$, respectively, in $M$.\nMoreover, let $U_{i,j}$ be an upper bound on the heat that can be feasibly exchanged between $i\\in H$ and $j\\in C$.\nSince the solution is feasible, it must be true that $\\sum_{j\\in C_i(M)}U_{i,j}\\geq h_i$ and $\\sum_{i\\in H_j(M)}U_{i,j}\\geq c_j$.\nThese inequalities are necessary, though not sufficient, feasibility conditions. 
\nBy minimizing the number of matches while ensuring these conditions, we obtain a covering relaxation:\n\\begin{align}\n\\tag{CoverMILP}\n\\label{EquationSet:CoverMILP}\n\\begin{aligned}\n\\text{min} & \\sum_{i \\in H}\\sum_{j \\in C} y_{i,j} \\\\ \n& \\sum_{j\\in C} y_{i,j}\\cdot U_{i,j} \\geq h_i & i\\in H \\\\\n& \\sum_{i\\in H} y_{i,j}\\cdot U_{i,j} \\geq c_j & j\\in C \\\\\n& y_{i,j}\\in\\{0,1\\} & i\\in H, j\\in C\n\\end{aligned}\n\\end{align}\nIn certain cases, the matches of an optimal solution to \\ref{EquationSet:CoverMILP} overlap well with the matches in a near-optimal solution for the original problem. \nOur new Covering Relaxation Rounding (CRR) heuristic for the minimum number of matches problem successively solves instances of the covering relaxation \\ref{EquationSet:CoverMILP}.\nThe heuristic chooses new matches iteratively until it terminates with a feasible set $M$ of matches. \nIn the first iteration, Algorithm CRR constructs a feasible solution for the covering relaxation and adds the chosen matches in $M$.\nThen, Algorithm CRR computes the maximum heat that can be feasibly exchanged using the matches in $M$ and stores the computed heat exchanges in $\\vec{q}$.\nIn the second iteration, the heuristic performs same steps with respect to the smaller updated instance $(\\vec{\\sigma}',\\vec{\\delta}')$, where $\\sigma_{i,s}'=\\sigma_{i,s}-\\sum_{j,t}q_{i,s,j,t}$ and $\\delta_{j,t}'=\\delta_{j,t}-\\sum_{i,s}q_{i,s,j,t}$.\nThe heuristic terminates when all heat is exchanged.\n\nAlgorithm \\ref{Alg:CRR} is a pseudocode of heuristic CRR.\nProcedure $CoveringRelaxation(\\vec{\\sigma},\\vec{\\delta})$ produces an optimal subset of matches for the instance of the covering relaxation in which the heat supplies and demands are specified by the vectors $\\vec{\\sigma}$ and $\\vec{\\delta}$, respectively.\nProcedure $MHLP(\\vec{\\sigma},\\vec{\\delta},M)$ (LP-based Maximum Heat) computes the maximum amount of heat that can be feasibly exchanged by using only the matches in $M$ and is based on solving the LP in Section \\ref{Sec:MaxHeat_MultipleIntervals}.\n\n\\begin{algorithm}[t] \\nonumber\n\\caption{Covering Relaxation Rounding (CRR)}\n\\begin{algorithmic}[1]\n\\State $M\\leftarrow \\emptyset$\n\\State $\\vec{q} \\leftarrow \\vec{0}$\n\\State $r\\leftarrow \\sum_{i\\in H}h_i$\n\\While {$r>0$}\n\\State For each $i\\in H$ and $s\\in T$, set $\\sigma_{i,s}' \\leftarrow \\sigma_{i,s} - \\sum_{j\\in C} \\sum_{t\\in T} q_{i,s,j,t}$\n\\State For each $j\\in C$ and $t\\in T$, set $\\delta_{j,t}' \\leftarrow \\delta_{j,t} - \\sum_{i\\in H} \\sum_{s\\in T} q_{i,s,j,t}$\n\\State $M'\\leftarrow CoveringRelaxation(\\vec{\\sigma}',\\vec{\\delta}')$\n\\Comment{{(\\ref{EquationSet:CoverMILP}) solving, Section \\ref{Subsection:CRR}}}\n\\State $M\\leftarrow M\\cup M'$\n\\State $\\vec{q}\\leftarrow MHLP(\\vec{\\sigma},\\vec{\\delta},M')$\n{\\Comment{Equations (\\ref{Eq:MaxHeatLP_Objective}) - (\\ref{Eq:MaxHeatLP_Positiveness}) LP solving, Section \\ref{Sec:MaxHeat_MultipleIntervals}}}\n\\State $r\\leftarrow \\sum_{i\\in H}h_i-\\sum_{i\\in H}\\sum_{j\\in C}\\sum_{s,t\\in T} q_{i,s,j,t}$\n\\EndWhile\n\\end{algorithmic}\n\\label{Alg:CRR}\n\\end{algorithm}\n\n\n\\section{Water Filling Heuristics}\n\\label{sec:water_filling}\n\nThis section introduces \\emph{water filling heuristics} for the minimum number of matches problem.\nThese heuristics produce a solution iteratively by exchanging the heat in each temperature interval, in a top down manner. 
\nThe water filling heuristics use, in each iteration,\nan efficient algorithm for the single temperature interval problem \n(see Section \\ref{sec:single_temperature_interval}).\n\n\nFigure \\ref{Fig:Water_Filling} shows the main idea of a \\emph{water filling heuristic} for the minimum number of matches problem with multiple temperature intervals.\nThe problem is solved iteratively in a top-down manner, from the highest to the lowest temperature interval.\nEach iteration produces a solution for one temperature interval.\nThe main components of a water filling heuristic are: (i) a maximum heat procedure which reuses matches from previous iterations and (ii) an efficient single temperature interval algorithm. \n\n\\begin{figure*}[t!]\n \\centering\n\n \\begin{subfigure}[t]{0.5\\textwidth}\n \\centering\n\t\\includegraphics{water_filling_all.eps}\n\t\\caption{Top Down Temperature Interval Structure}\n\t\\label{Fig:Water_Filling_Temperature_Intervals}\n\t\\end{subfigure}\n\t\\begin{subfigure}[t]{0.45\\textwidth}\n\t\\centering\n\n\t\\includegraphics{water_filling_one.eps}\n\n\t\\caption{Excess Heat Descending}\n\t\\label{Fig:Water_Filling_Excess_Heat}\n\t\\end{subfigure}\n\\caption{A water filling heuristic computes a solution by exploiting the top down temperature interval structure and moving from the higher to the lower temperature interval.\nIn each temperature interval $t$, the heuristic isolates the streams with positive heat at $t$, it matches them and descends the excess heat to the next interval which is sequentially solved. \n\\label{Fig:Water_Filling}}\n\n\\end{figure*}\n\n\nGiven a set $M$ of matches and an instance $(\\vec{\\sigma_t},\\vec{\\delta}_t)$ of the problem in the single temperature interval $t$, the procedure $MHS(\\vec{\\sigma}_t,\\vec{\\delta}_t,M)$ (Maximum Heat for Single temperature interval) computes the maximum heat that can be exchanged between the streams in $t$ using only the matches in $M$.\nAt a given temperature interval $t$, the $MHS$ procedure solves the LP in Section \\ref{Sec:MaxHeat_SingleInterval}.\nThe procedure $SingleTemperatureInterval(\\vec{\\sigma}_t,\\vec{\\delta}_t)$ produces an efficient solution for the single temperature interval problem with a minimum number of matches and total heat to satisfy one cold stream. $SingleTemperatureInterval(\\vec{\\sigma}_t,\\vec{\\delta}_t)$ either: (i) solves the MILP exactly (Water Filling MILP-based or WFM) or (ii) applies the improved greedy approximation Algorithm IG in Section \\ref{sec:single_temperature_interval} (Water Filling Greedy or WFG). \nBoth water filling heuristics solve instances of the single temperature interval problem in which there is no heat conservation, i.e.\\ the heat supplied by the hot streams is greater or equal than the heat demanded by the cold streams. 
\nThe exact WFM uses the MILP proposed in Eqs.\\ (\\ref{Eq:SingleMIPwithoutConservation_MaxBins}) - (\\ref{Eq:SingleMIPwithoutConservation_Integrality}) of \\ref{App:Water_Filling}.\nThe greedy heuristic WFG adapts Algorithm IG by terminating when the entire heat demanded by the cold streams has been transferred.\nAfter addressing the single temperature interval, the excess heat descends to the next temperature interval.\nAlgorithm \\ref{Alg:Water_Filling} represents our water filling approach in pseudocode.\n{Figure \\ref{Figure:Water_Filling_Ingredients} shows the main components of water filling heuristics.}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics{water_filling_ingredients.eps}\n\\caption{\nWater filling heuristics solve the temperature intervals serially in a top-down manner and keep composition feasible. \nThe main components are (i) a maximum heat computation re-using higher temperature interval matches, (ii) a single temperature interval problem algorithm, and (iii) excess heat descending between consecutive temperature intervals. \nHeuristic WFM uses the \\ref{App:Water_Filling} MILP formulation for solving the single temperature interval problem, while heuristic WFG uses the Section \\ref{Sec:SingleTemperatureIntervalProblem-Approximation} Algorithm IG.}\n\\label{Figure:Water_Filling_Ingredients}\n\\end{center}\n\\end{figure}\n\n\\begin{algorithm}[t] \\nonumber\n\\caption{Water Filling (WF)}\\label{alg:water_filling}\n\\begin{algorithmic}[1]\n\\State $M \\leftarrow \\emptyset$\n\\State $\\vec{q} \\leftarrow \\vec{0}$\n\\For {$t=1,2,\\ldots,k$}\n\\If {$t\\neq 1$}\n\\State $\\vec{q}\\;' \\leftarrow MHS(\\vec{\\sigma}_t,\\vec{\\delta}_t,M)$\n{\\Comment{(\\ref{EquationSet:SingleMaxHeatLP_initial_stattement}) solve, Section \\ref{Sec:MaxHeat_SingleInterval}}}\n\\State $\\vec{q} \\leftarrow \\vec{q} + \\vec{q}\\;'$\n\\State For each $i\\in H$, set $\\sigma_{i,t} \\leftarrow \\sigma_{i,t} - \\sum_{j\\in C} \\sum_{t\\in T} q_{i,j,t}'$\n\\State For each $j\\in C$, set $\\delta_{j,t} \\leftarrow \\delta_{j,t} - \\sum_{i\\in H} \\sum_{s\\in T} q_{i,j,t}'$\n\\EndIf\n\\State $(M',\\vec{q}\\;') \\leftarrow SingleTemperatureInterval(\\vec{\\sigma}_t,\\vec{\\delta}_t)$\n{\\Comment{Eqs (\\ref{Eq:SingleMIPwithoutConservation_MaxBins} - \\ref{Eq:SingleMIPwithoutConservation_Integrality}) or Alg IG, Sec \\ref{Sec:SingleTemperatureIntervalProblem-Approximation}}}\n\\State $M \\leftarrow M\\cup M'$\n\\State $\\vec{q} \\leftarrow \\vec{q} + \\vec{q}\\;'$\n\\If {$t\\neq k$}\n\\For {$i\\in H$}\n\\State $\\vec{\\sigma}_{i,t+1} \\leftarrow \\vec{\\sigma}_{i,t+1} + (\\vec{\\sigma}_{i,t} - \\sum_jq_{i,j,t})$ (excess heat descending)\n\\EndFor\n\\EndIf\n\\EndFor\n\\end{algorithmic}\n\\label{Alg:Water_Filling}\n\\end{algorithm}\n\n\n\n\\begin{theorem}\n\\label{Thm:WaterFillingRatio}\nAlgorithms WFG and WFM are $\\Omega(k)$-approximate.\n\\end{theorem}\n\\begin{omitted_proof}\nSee \\ref{App:Water_Filling}.\n\\end{omitted_proof}\n\n\n\n\\section{Greedy Packing Heuristics}\n\\label{sec:greedy_packing}\n\nThis section proposes greedy heuristics motivated by the packing nature of the minimum number of matches problem.\nEach greedy packing heuristic starts from an infeasible solution with zero heat transferred between the streams and iterates towards feasibility by greedily selecting matches.\nThe two main ingredients of such a heuristic are: (i) a match selection policy and (ii) a heat exchange policy for transferring heat via the matches.\nSection \\ref{Subsection:MonotonicGreedyHeuristics} observes 
that a greedy heuristic has a poor worst-case performance if heat residual capacities are not considered.\nSections \\ref{Subsection:Largest_Heat_Match} - \\ref{Subsection:SS} formally define the greedy heuristics: (i) Largest Heat Match First, (ii) Largest Fraction Match First, and (iii) Shortest Stream First. \n{Figure \\ref{Figure:Greedy_Packing_Ingredients} shows the main components of greedy packing heuristics.}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics{greedy_packing_ingredients.eps}\n\\caption{\nGreedy packing heuristics select matches iteratively one by one. \nThe main components of greedy packing heuristics are (i) a heat exchange policy, and (ii) a match selection policy.\nGreedy packing heuristics apply these policies with respect to all unmatched stream pairs, in each iteration.\nOptions for the heat exchange policy include \\emph{dynamic heat exchange}, which solves the Section \\ref{Sec:MaxHeat_MultipleIntervals} maximum heat LP, and \\emph{static heat exchange}, which uses the Section \\ref{Sec:MaxHeat_2Streams} greedy algorithm.\nOnce the heat exchange policy has been applied for every unmatched pair of streams, a match selection policy chooses the new match, e.g.\\\n(i) with the largest heat (LHM), (ii) with the largest fraction (LFM), or (iii) of the shortest stream (SS).}\n\\label{Figure:Greedy_Packing_Ingredients}\n\\end{center}\n\\end{figure}\n\n\\subsection{A Pathological Example and Heat Residual Capacities}\n\\label{Subsection:MonotonicGreedyHeuristics}\n\nA greedy match selection heuristic is efficient if it performs a small number of iterations and chooses matches exchanging a large heat load in each iteration.\nOur greedy heuristics perform large moves towards feasibility by choosing good matches in terms of: (i) heat and (ii) stream fraction. \nAn efficient greedy heuristic should also be monotonic in the sense that every chosen match achieves a strictly positive increase in the covered instance size.\n\nThe example in Figure \\ref{Fig:non_monotonic_heuristic} shows the pathological behavior of non-monotonic greedy heuristics.\nThe instance consists of 3 hot streams, 3 cold streams and 3 temperature intervals.\nHot stream $i\\in H$ has heat supply $\\sigma_{i,s}=1$ for $s=i$ and no supply in any other temperature interval.\nCold stream $j\\in C$ has heat demand $\\delta_{j,t}=1$ for $t=j$ and no demand in any other temperature interval.\nConsider the heuristic which selects a match that may exchange the maximum amount of heat in each iteration.\nThe matches $(h_1,c_2)$ and $(h_2,c_3)$ constitute the initial selections.\nIn the subsequent iteration, no match increases the heat that can be feasibly exchanged between the streams and the heuristic chooses unnecessary matches.\n\n\\begin{figure}[t]\n\n\\begin{center}\n\\includegraphics{non_monotonic_heuristic.eps}\n\\caption{A bad example of a non-monotonic heuristic. 
If a heuristic begins by matching $h_1$ with $c_2$ and $h_2$ with $c_3$, then many unnecessary matches might be required to end up with a feasible solution.}\n\\label{Fig:non_monotonic_heuristic}\n\\end{center}\n\n\\end{figure}\n\nA sufficient condition enforcing strictly monotonic behavior and avoiding the above pathology, is for each algorithm iteration to satisfy the heat residual capacities.\nAs depicted in Figure \\ref{Fig:greedy_decomposition}, a greedy heuristic maintains a set $M$ of selected matches together with a decomposition of the original instance $I$ into two instances $I^A$ and $I^B$.\nIf $I=(H,C,T,\\vec{\\sigma},\\vec{\\delta})$, then it holds that $I^A=(H,C,T,\\vec{\\sigma}^A,\\vec{\\delta}^A)$ and $I^B=(H,C,T,\\vec{\\sigma}^B,\\vec{\\delta}^B)$, where $\\mathbf{\\sigma}=\\vec{\\sigma}^A+\\vec{\\sigma}^B$ and $\\vec{\\delta}=\\vec{\\delta}^A+\\vec{\\delta}^B$. \nThe set $M$ corresponds to a feasible solution for $I^A$ and the instance $I^B$ remains to be solved.\nIn particular, $I^A$ is obtained by computing a maximal amount of heat exchanged by using the matches in $M$ and $I^B$ is the remaining part of $I$.\nInitially, $I^A$ is empty and $I^B$ is exactly the original instance $I$.\nA selection of a match increases the total heat exchanged in $I^A$ and reduces it in $I^B$.\n\\ref{App:Greedy_Packing} observes that a greedy heuristic is monotonic if $I^B$ is feasible in each iteration. \nFurthermore, $I^B$ is feasible if and only if $I^A$ satisfies the heat residual capacities $R_u = \\sum_{i\\in H}\\sum_{s=1}^u\\sigma_{i,s} - \\sum_{j\\in C}\\sum_{t=1}^u\\delta_{j,t}$, for $u\\in T$.\n\n\\begin{figure}[t]\n\n\\begin{center}\n\\includegraphics{greedy_packing_decomposition.eps}\n\\vspace{-20pt}\n\\caption{Decomposition of a greedy packing heuristic. The problem instance $I$ is the union of the instance $I_A$ already solved by the heuristic and the instance $I_B$ that remains to be solved.}\n\\label{Fig:greedy_decomposition}\n\\end{center}\n\n\\end{figure}\n\n\\subsection{Largest Heat Match First}\n\\label{Subsection:Largest_Heat_Match}\n\nOur Largest Heat Match First heuristics arise from the idea that the matches should individually carry large amounts of heat in a near optimal solution.\nSuppose that $Q_v$ is the maximum heat that may be transferred between the streams using only a number $v$ of matches.\nThen, minimizing the number of matches is expressed as $\\min\\{v:Q_v\\geq \\sum_{i=1}^nh_i\\}$.\nThis observation motivates the greedy packing heuristic which selects matches iteratively until it ends up with a feasible set $M$ of matches exchanging $\\sum_{i=1}^nh_i$ units of heat.\nIn each iteration, the heuristic chooses a match maximizing the additional heat exchanged. 
\nOur two variants of largest heat matches heuristics are: (i) LP-based Largest Heat Match (LHM-LP) and (ii) Greedy Largest Heat Match (LHM).\n\n\nHeuristic LHM-LP uses the $MHLP(M)$ (LP-based Maximum Heat) procedure to compute the maximum heat that can be transferred between the streams using only the matches in the set $M$.\nThis procedure is repeated $O(nm)$ times in each iteration, once for every candidate match, and solves an LP incorporating the proposed heat residual capacities.\nAlgorithm \\ref{Alg:LHM-LP} is an LHM-LP heuristic using the LP in Section \\ref{Sec:MaxHeat_MultipleIntervals}.\nThe algorithm maintains a set $M$ of chosen matches and selects a new match $(i',j')$ to maximize $MHLP(M\\cup(i',j'))$.\n\n\\begin{algorithm}[t] \\nonumber\n\\caption{Largest Heat Match First LP-based (LHM-LP)}\n\\begin{algorithmic}[1]\n\\State $M\\leftarrow\\emptyset$\n\\State $r\\leftarrow \\sum_{i\\in H}h_i$\n\\While {$r>0$}\n\\State $(i',j')\\leftarrow\\arg\\max_{(i,j)\\in H\\times C\\setminus M} \\{MHLP(M\\cup\\{(i,j)\\})\\}$\n{\\Comment{Eqs (\\ref{Eq:MaxHeatLP_Objective} - \\ref{Eq:MaxHeatLP_Positiveness}), Sec \\ref{Sec:MaxHeat_MultipleIntervals}}}\n\\State $M\\leftarrow M\\cup \\{(i',j')\\}$\n\\State $r\\leftarrow\\sum_{i\\in H}h_i-MHLP(M)$\n\\EndWhile\n\\State Return $M$\n\\end{algorithmic}\n\\label{Alg:LHM-LP}\n\\end{algorithm}\n\n\\begin{theorem}\n\\label{Thm:GreedyPackingRatio}\nAlgorithm LHM-LP is $O(\\log n + \\log \\frac{h_{\\max}}{\\epsilon})$-approximate, where $\\epsilon$ is the required precision.\n\\end{theorem}\n\\begin{omitted_proof}\nSee \\ref{App:Greedy_Packing}.\n\\end{omitted_proof}\n\n\\medskip\n\nLHM-LP heuristic is polynomial-time in the worst case.\nThe $i$-th iteration solves $nm-i+1$ LP instances which sums to solving a total of $\\sum_{i=1}^{nm}(nm-i+1)=O(n^2m^2)$ LP instances in the worst case.\nHowever, for large instances, the algorithm is time consuming because of this iterative LP solving.\nSo, we also propose an alternative, time-efficient greedy approach. \nThe new heuristic version builds a solution by selecting matches and deciding the heat exchanges, without modifying them in subsequent iterations.\n\nThe new approach for implementing the heuristic, that we call LHM, requires the $MHG(\\vec{\\sigma},\\vec{\\delta},i,j)$ procedure. 
\nGiven an instance $(\\vec{\\sigma},\\vec{\\delta})$ of the problem, it computes the maximum heat that can be feasibly exchanged between hot stream $i\\in H$ and cold stream $j\\in C$, as defined in Section \\ref{Sec:MaxHeat_2Streams}.\nThe procedure also computes a corresponding value $q_{i,s,j,t}$ of heat exchanged between $i\\in H$ in temperature interval $s\\in T$ and $j\\in C$ in temperature interval $t\\in T$.\nLHM maintains a set $M$ of currently chosen matches together with their respective vector $\\vec{q}$ of heat exchanges.\nIn each iteration, it selects the match $(i',j')$ and heat exchanges $q'$ between $i'$ and $j'$ so that the value $MHG(\\vec{\\sigma},\\vec{\\delta},i',j')$ is maximum.\nAlgorithm \\ref{Alg:LHM} is a pseudocode of this heuristic.\n\n\n\\begin{algorithm}[t] \\nonumber\n\\caption{Largest Heat Match First Greedy (LHM)}\n\\begin{algorithmic}[1]\n\\State $M \\leftarrow \\emptyset$\n\\State $\\vec{q} \\leftarrow \\vec{0}$\n\\State $r \\leftarrow \\sum_{i\\in H}h_i$\n\\While {$r > 0$}\n\\State $(i', j', \\vec{q}\\;') \\leftarrow \\arg \\max_{(i,j)\\in H\\times C\\setminus M} \\{MHG(\\vec{\\sigma},\\vec{\\delta},i,j)\\}$\n{\\Comment{Algorithm MHG, Section \\ref{Sec:MaxHeat_2Streams}}}\n\\State $M \\leftarrow M \\cup \\{(i',j')\\}$\n\\State $\\vec{q} \\leftarrow \\vec{q} + \\vec{q}\\;'$\n\\State For each $s\\in T$, set $\\sigma_{i',s} \\leftarrow \\sigma_{i',s} - \\sum_{t\\in T} q_{i',s,j',t}'$\n\\State For each $t\\in T$, set $\\delta_{j',t} \\leftarrow \\delta_{j',t} - \\sum_{s\\in T} q_{i',s,j',t}'$\n\\State $r \\leftarrow r - \\sum_{s,t\\in T}q_{i',s,j',t}'$\n\\EndWhile\n\\State Return $M$\n\\end{algorithmic}\n\\label{Alg:LHM}\n\\end{algorithm}\n\n\n\\subsection{Largest Fraction Match First}\n\\label{Subsection:LFM}\n\nThe heuristic \\emph{Largest Fraction Match First} (LFM) exploits the bipartite nature of the problem by employing matches which exchange large fractions of the stream heats.\nConsider a feasible solution with a set $M$ of matches.\nEvery match $(i,j)\\in M$ covers a fraction $\\sum_{s,t\\in T}\\frac{q_{i,s,j,t}}{h_i}$ of hot stream $i\\in H$ and a fraction $\\sum_{s,t\\in T}\\frac{q_{i,s,j,t}}{c_j}$ of cold stream $j\\in C$.\nThe total covered fraction of all streams is equal to $\\sum_{(i,j)\\in M} \\sum_{s,t\\in T} \\left( \\frac{q_{i,s,j,t}}{h_i} + \\frac{q_{i,s,j,t}}{c_j}\\right)=n+m$.\nSuppose that $F_v$ is the maximum amount of total stream fraction that can be covered using no more than $v$ matches.\nThen, minimizing the number of matches is expressed as $\\min\\{v:F_v\\geq n+m\\}$.\nBased on this observation, the main idea of LFM heuristic is to construct iteratively a feasible set of matches, by selecting the match covering the largest fraction of streams, in each iteration.\nThat is, LFM prioritizes proportional matches in a way that high heat hot streams are matched with high heat cold streams and low heat hot streams with low heat cold streams. 
\nIn this sense, it generalizes the idea of Algorithm IG for the single temperature interval problem (see Section \\ref{sec:single_temperature_interval}), according to which it is beneficial to match streams of (roughly) equal heat.\n\nAn alternative that would be similar to LHM-LP is an\nLFM heuristic \nwith an $MFLP(M)$ (LP-based Maximum Fraction) procedure computing the maximum fraction of streams that can be covered using only a given set $M$ of matches.\nLike the LHM-LP heuristic, this procedure would be based on solving an LP (see \\ref{App:Greedy_Packing}), except that the objective function maximizes the total stream fraction.\nThe LFM heuristic can be also modified to attain more efficient running times using Algorithm $MHG$, as defined in Section \\ref{Sec:MaxHeat_2Streams}. \nIn each iteration, the heuristic selects the match $(i,j)$ with the highest value $\\frac{U_{i,j}'}{h_i}+\\frac{U_{i,j}'}{c_j}$, where $U_{i,j}'$ is the maximum heat that can be feasibly exchanged between $i$ and $j$ in the remaining instance.\n\n\n\n\\subsection{Smallest Stream Heuristic}\n\\label{Subsection:SS}\n\nSubsequently, we propose \\emph{Smallest Stream First} (SS) heuristic based on greedy match selection, which also incorporates stream priorities so that a stream is involved in a small number of matches.\nLet $\\alpha_i$ and $\\beta_j$ be the number of matches of hot stream $i\\in H$ and cold stream $j\\in C$, respectively.\nMinimizing the number of matches problem is expressed as $\\min\\{\\sum_{i\\in H}\\alpha_i\\}$, or equivalently $\\min\\{\\sum_{j\\in C}\\beta_j\\}$.\nBased on this observation, we investigate heuristics that specify a certain order of the hot streams and match them one by one, using individually a small number of matches.\nSuch a heuristic requires: (i) a stream ordering strategy and (ii) a match selection strategy.\nTo reduce the number of matches of small hot streams, heuristic SS uses the order $h_1\\leq h_2\\leq \\ldots\\leq h_n$. \n\nIn each iteration, the next stream is matched with a low number of cold streams using a greedy match selection strategy; we use greedy LHM heuristic.\nObserve that SS heuristic is more efficient in terms of running time compared to the other greedy packing heuristics, because it solves a subproblem with only one hot stream in each iteration. \nAlgorithm \\ref{Alg:ShortestStream} is a pseudocode of SS heuristic. 
\nNote that other variants of ordered stream heuristics may be obtained in a similar way.\nThe heuristic uses the $MHG$ algorithm in Section \\ref{Sec:MaxHeat_2Streams}.\n\n\\begin{algorithm}[t] \\nonumber\n\\caption{Smallest Steam First (SS)}\n\\begin{algorithmic}[1]\n\\State Sort the hot streams in non-decreasing order of their heat loads, i.e.\\ $h_1\\leq h_2\\leq\\ldots\\leq h_n$.\n\\State $M \\leftarrow \\emptyset$\n\\State $\\vec{q}\\leftarrow \\vec{0}$\n\\For {$i\\in H$}\n\\State $r\\leftarrow h_i$\n\\While {$r>0$}\n\\State $(i, j', \\vec{q}\\;') \\leftarrow \\arg \\max_{j\\in C} \\{MHG(\\vec{\\sigma},\\vec{\\delta},i,j)\\}$\n{\\Comment{Algorithm MHG, Section \\ref{Sec:MaxHeat_2Streams}}}\n\\State $M\\leftarrow M\\cup \\{(i,j')\\}$\n\\State $\\vec{q} \\leftarrow \\vec{q} + \\vec{q}\\;'$\n\\State For each $s\\in T$, set $\\sigma_{i,s} \\leftarrow \\sigma_{i,s} - \\sum_{t\\in T} q_{i,s,j',t}'$\n\\State For each $t\\in T$, set $\\delta_{j',t} \\leftarrow \\delta_{j',t} - \\sum_{s\\in T} q_{i,s,j',t}'$\n\\State $r \\leftarrow r - \\sum_{s,t\\in T}q_{i',s,j',t}'$\n\\EndWhile\n\\EndFor\n\\State Return $M$\n\\end{algorithmic}\n\\label{Alg:ShortestStream}\n\\end{algorithm}\n\n\n\n\\section{Numerical Results}\n\\label{sec:results}\n\nThis section evaluates the proposed heuristics on three test sets. \nSection \\ref{Section:Benchmark_Instances} provides information on system specifications and benchmark instances.\nSection \\ref{Section:Exact_Experiments} presents computational results of exact methods and shows that commercial, state-of-the-art approaches have difficult solving moderately-sized instances \nto global optimality.\nSection \\ref{Section:Heuristic_Experiments} evaluates experimentally the heuristic methods and compares the obtained results with those reported by \\citet{furman:2004}.\nAll result tables are provided in \\ref{App:Experimental_Results}.\n{\\cite{source_code} provide test cases and source code for the paper's computational experiments.}\n\n\\subsection{System Specification and Benchmark Instances}\n\\label{Section:Benchmark_Instances}\n\nAll computations are run on an Intel Core i7-4790 CPU 3.60GHz with 15.6 GB RAM running 64-bit Ubuntu 14.04.\nCPLEX 12.6.3 and Gurobi 6.5.2 solve the minimum number of matches problem exactly. \nThe mathematical optimization models and heuristics are implemented in Python 2.7.6 and Pyomo 4.4.1 \\citep{hart:2011, hart:2012}. \n\nWe use problem instances from two existing test sets \\citep{furman:2004, chen:2015}. {We also generate two collections of larger test cases. The smaller of the two sets uses work of \\citet{grossman:2017}. The larger of the two sets was created using our own random generation method.}\nAn instance of general heat exchanger network design consists of streams and utilities with inlet, outlet temperatures, flow rate heat capacities and other parameters.\n\\ref{App:Minimum_Utility_Cost} shows how a minimum number of matches instances arises from the original instance of general heat exchanger network design.\n \nThe \\emph{\\cite{furman:2000} test set} consists of test cases from the engineering literature. \nTable \\ref{Table:Problem_Sizes} reports bibliographic information on the origin of these test cases. 
\nWe manually digitize this data set and make it publicly available for the first time \\citep{source_code}.\nTable \\ref{Table:Problem_Sizes} lists the 26 problem instance names and information on their sizes.\nThe total number streams and temperature intervals varies from 6 to 38 and from 5 to 32, respectively. \nTable \\ref{Table:Problem_Sizes} also lists the number of binary and continuous variables as well as the number of constraints in the transshipment MILP formulation.\n\nThe \\emph{\\cite{minlp,chen:2015} test set} consists of 10 problem instances.\nThese instances are classified into two categories depending on whether they consist of balanced or unbalanced streams. \nTest cases with balanced streams have flowrate heat capacities in the same order of magnitude,\nwhile test cases with unbalanced streams have dissimilar flowrate heat capacities spanning several orders of magnitude.\nThe sizes of these instances range from 10 to 42 streams and from 12 to 35 temperature intervals. \nTable \\ref{Table:Problem_Sizes} reports more information on the size of each test case.\n\n\\emph{The \\cite{grossman:2017} test set} is generated randomly.\nThe inlet, outlet temperatures of these instances are fixed while the values of flowrate heat capacities are generated randomly with fixed seeds.\nThis test set contains 12 moderately challenging problems (see Table \\ref{Table:Problem_Sizes}) with a classification into balanced and unbalanced instances, similarly to the \\cite{minlp,chen:2015} test set. \nThe smallest problem involves 27 streams and 23 temperature intervals while the largest one consists of 43 streams and 37 temperature intervals.\n\n{\\emph{The Large Scale test set} is generated randomly. These instances have 80 hot streams, 80 cold streams, 1 hot utility and 1 cold utility.\nFor each hot stream $i\\in HS$, the inlet temperature $T_{\\text{in},i}^{HS}$ is chosen uniformly at random in the interval $(30,400]$. \nThen, the outlet temperature $T_{\\text{out},i}^{HS}$ is selected uniformly at random in the interval $[30, T_{\\text{in},i}^{HS})$.\nAnalogously, for each cold stream $j\\in CS$, the outlet temperature $T_{\\text{out},j}^{CS}$ is chosen uniformly at random in the interval $(20,400]$.\nNext, the inlet temperature $T_{\\text{in},j}^{CS}$ is chosen uniformly at random in the interval $[20,T_{\\text{out},j}^{CS})$.\nThe flow rate heat capacities $FCp_i$ and $FCp_j$ of hot stream $i$ and cold stream $j$ are chosen as floating numbers with two decimal digits in the interval $[0,15]$.\nThe hot utility has inlet temperature $T_{\\text{in}}^{HU}=500$, outlet temperature $T_{\\text{out}}^{HS}=499$, and cost $\\kappa^{HU}=80$.\nThe cold utility has inlet temperature $T_{\\text{in}}^{CU}=20$, outlet temperature $T_{\\text{out}}^{CU}=21$, and cost $\\kappa^{CU}=20$.\nThe minimum heat recovery approach temperature is $\\Delta T_{\\min}=10$.}\n\n\n\\subsection{Exact Methods} \n\\label{Section:Exact_Experiments}\n\nWe evaluate exact methods using state-of-the-art commercial approaches. \nFor each problem instance, CPLEX and Gurobi solve the Section \\ref{sec:preliminaries} transportation and transshipment models.\nBased on the difficulty of each test set, we set a time limit for each solver run as follows: (i) 1800 seconds for the \\cite{furman:2000} test set, (ii) 7200 seconds for the \\cite{minlp,chen:2015} test set, and (iii) 14400 seconds for the \\cite{grossman:2017} {and large scale} test sets. 
\nIn each solver run, we set absolute gap 0.99, relative gap $4\\%$, and maximum number of threads 1.\n\nTable \\ref{Table:Exact_Methods} reports the best found objective value, CPU time and relative gap, for each solver run.\nObserve that state-of-the-art approaches cannot, in general, solve moderately-sized problems with 30-40 streams to global optimality.\nFor example, none of the test cases in the \\cite{grossman:2017} {or large scale} test sets is solved to global optimality within the specified time limit. \nTable \\ref{Table:Comparisons} contains the results reported by \\citet{furman:2004} using CPLEX 7.0 with 7 hour time limit. \nCPLEX 7.0 fails to solve 4 instances to global optimality.\nInterestingly, CPLEX 12.6.3 still cannot solve 3 of these 4 instances with a 1.5 hour timeout.\n\nTheoretically, the transshipment MILP is better than the transportation MILP because the former has asymptotically fewer variables.\nThis observation is validated experimentally with the exception of very few instances, e.g.\\ \\texttt{balanced10}, in which the transportation model computes a better solution within the time limit.\nCPLEX and Gurobi are comparable and neither dominates the other.\nInstances with balanced streams are harder to solve, which highlights the difficulty introduced by symmetry, see \\cite{kouyialis:2016}. \n{\nThe preceding numerical analysis refers to the extended transportation MILP.\nTable \\ref{Table:Transportation_Models_Comparison} compares solver performance to the reduced transportation MILP, i.e.\\ a formulation removing redundant variables $q_{i,s,j,t}$ with $s> t$ and Equations (\\ref{TransportationMIP_Eq:ThermoConstraint}). Note that modern versions of CPLEX and Gurobi show effectively no difference between the two formulations.\n}\n \n\\subsection{Heuristic Methods} \n\\label{Section:Heuristic_Experiments}\n\n{\nWe implement the proposed heuristics using Python and develop the LP models with Pyomo \\citep{hart:2011,hart:2012}.\nWe use CPLEX 12.6.3 with default settings to solve all LP models within the heuristic methods. \\cite{source_code} make the source code available. The following discussion covers the 48 problems with 43 streams or fewer. Section \\ref{Section:Large_Scale} discusses the 3 examples with 160 streams each.}\n\nThe difficulty of solving the minimum number of matches problem to global optimality motivates the design of heuristic methods and approximation algorithms with proven performance guarantees.\nTables \\ref{Table:Heuristic_Upper_Bounds} and \\ref{Table:Heuristic_CPU_Times} contain the computed objective value and CPU times, respectively, of the heuristics for all test cases. \nFor the challenging \\cite{minlp,chen:2015} and \\cite{grossman:2017} test sets, heuristic LHM-LP always produces the best solution.\nThe LHM-LP running time is significantly higher compared to all heuristics due to the iterative LP solving, despite the fact that it is guaranteed to be polynomial in the worst case.\nAlternatively, heuristic SS produces the second best heuristic result with very efficient running times in the \\cite{minlp,chen:2015} and \\cite{grossman:2017} test sets. 
\nFigure \\ref{Fig:Boxplot_Performance_Ratios} depicts the performance ratio of the proposed heuristics using a box and whisker plot, where the computed objective value is normalized with the one found by CPLEX for the transshipment MILP.\nFigure \\ref{Fig:Boxplot_CPU_Times} shows a box and whisker plot of the CPU times of all heuristics in $\\log_{10}$ scale normalized by the minimum CPU time for each test case.\nFigure \\ref{Fig:Line_Chart} shows a line chart verifying that our greedy packing approach produces better solutions than the relaxation rounding and water filling ones. \n\n\\begin{figure}[!ht]\n\\begin{center}\n\\includegraphics[scale=0.6]{performance_ratios_boxplot.eps}\n\\end{center}\n\\vspace{-20pt}\n\\caption{Box and whisker diagram of {48} heuristic performance ratios, i.e.\\ computed solution \/ best known solution {for the problems with 43 streams or fewer.}}\n\\label{Fig:Boxplot_Performance_Ratios}\n\\end{figure}\n\n\\begin{figure}[!ht]\n\\begin{center}\n\\includegraphics[scale=0.6]{elapsed_times_boxplot.eps}\n\\end{center}\n\\vspace{-20pt}\n\\caption{Box and whisker diagram of {48} CPU times ($\\log_{10}$ scale) normalized by the minimum CPU time for each test case.}\n\\label{Fig:Boxplot_CPU_Times}\n\\end{figure}\n\n\\begin{figure}[!ht] \n\\begin{center}\n\\includegraphics[scale=0.6]{heuristic_comparison_line_chart.eps}\n\\end{center}\n\\vspace{-20pt}\n\\caption{Line chart comparing the performance ratio, i.e.\\ computed solution \/ best known solution, of the best computed result by heuristic methods: relaxation rounding, water filling, and greedy packing. {This graph applies to the 48 problems with 43 streams or fewer.}}\n\\label{Fig:Line_Chart}\n\\end{figure}\n\nTable \\ref{Table:Comparisons} contains the heuristic results reported by \\citet{furman:2004} and the ones obtained with our improved version of the FLPR, LRR, and WFG heuristics of \\cite{furman:2004}.\nOur versions of FLPR, LRR, and WFG perform better for the \\cite{furman:2004} test set because of our new Algorithm MHG for tightening the big-M parameters.\nFor example, out of the 26 instances, our version of FLPR performs strictly better than the \\cite{furman:2004} version 20 times and worse only once (\\texttt{10sp1}). To further explore the effect of the big-M parameter, Table \\ref{Table:BigM_Comparisons} shows how different computations for the big-M parameter change the FLPR and LRR performance. Table \\ref{Table:BigM_Comparisons} also demonstrates the importance of the big-M parameter on the transportation MILP fractional relaxation quality. \n\nIn particular, Table \\ref{Table:BigM_Comparisons} compares the three big-M computation methods discussed in Section \\ref{Sec:MaxHeat_2Streams}: (i) the trivial bounds, (ii) the \\citet{gundersen:1997} method, and (iii) our greedy Algorithm MHG.\nOur greedy maximum heat algorithm dominates the other approaches for computing the big-M parameters. Algorithm MHG also outperforms the other two big-M computation methods by finding smaller feasible solutions via both Fractional LP Rounding and Lagrangian Relaxation Rounding. In the 48 test cases, Algorithm MHG produces the best FLPR and LRR feasible solutions in 46 and 43 test cases, respectively. Algorithm MHG is strictly best for 33 FLPR and 32 LRR test cases. 
Finally, Algorithm MHG achieves the tightest fractional MILP relaxation for all test instances.\n\nFigure \\ref{Fig:Boxplot_Performance_Ratios} and Table \\ref{Table:Heuristic_Upper_Bounds} show that our\nnew CRR heuristic is competitive with the other relaxation rounding heuristics, performing as well or better than FLPR or LRR in 19 of the 48 test cases and strictly outperforming both FLPR and LRR in 8 test cases.\nAlthough CRR solves a sequence of MILPs, Figure \\ref{Fig:Boxplot_CPU_Times} and Table \\ref{Table:Heuristic_CPU_Times} show that its running time is efficient compared to the other relaxation rounding heuristics.\n\nOur water filling heuristics are equivalent to or better than \\citet{furman:2004} for 25 of their 26 test set instances (all except \\texttt{7sp2}). \nIn particular, our Algorithm WFG is strictly better than their WFG in 18 of 26 instances and is worse in just one. This improvement stems from the new 1.5-approximation algorithm for the single temperature interval problem (see Section \\ref{Sec:SingleTemperatureIntervalProblem-Approximation}).\nThe novel Algorithm WFM is competitive with Algorithm WFG and produces equivalent or better feasible solutions for 37 of the 48 test cases. \nIn particular, WFM has a better performance ratio than WFG (see Figure \\ref{Fig:Boxplot_Performance_Ratios}) and WFM is strictly better than WFG in all but 1 of the \\citet{grossman:2017} instances.\nThe strength of WFM highlights the importance of our new MILP formulation in Eqs.\\ (\\ref{Eq:SingleMIP_MaxBins})-(\\ref{Eq:SingleMIP_Integrality}).\nAt each iteration, WFM solves an MILP without big-M constraints and therefore has a running time in the same order of magnitude as its greedy counterpart WFG (see Figure \\ref{Fig:Boxplot_CPU_Times}).\n\nIn summary, our heuristics obtained via the relaxation rounding and water filling methods improve the corresponding ones proposed by \\citet{furman:2004}.\nFurthermore, greedy packing heuristics achieve even better results in more than $90\\%$ of the test cases. \n\n{\n\n\\subsection{Larger Scale Instances}\n\\label{Section:Large_Scale}\n\nAlthough CPLEX and Gurobi do not converge to global optimality for many of the \n\\cite{furman:2000}, \\cite{minlp,chen:2015}, and \\cite{grossman:2017} instances,\nthe solvers produce the best heuristic solutions in all test cases.\nBut the literature instances are only moderately sized. We expect that the heuristic performance improves relative to the exact approaches as the problem sizes increase.\nTowards a more complete numerical analysis, we randomly generate 3 larger scale instances with 160 streams each.\n\nFor larger problems, the running time may be important to a design engineer \\citep{hindmarsh:1983}.\nWe apply the least time consuming heuristic of each type for solving the larger scale instances, i.e.\\ apply relaxation rounding heuristic FLPR, water filling heuristic WFG, and greedy packing heuristic SS.\nWe also solve the transshipment model using CPLEX 12.6.3 with a 4h timeout. The results are in Table \\ref{Table:Large_Scale_Results}.\n\nFor these instances, greedy packing SS computes a better solution than the relaxation rounding FLPR heuristic or the water filling WFG heuristic, but SS has larger running time.\nIn instance \\texttt{large-scale1}, greedy packing SS computes 218, a better solution than the CPLEX value 219. 
Moreover, the CPLEX heuristic spent the first 1hr of computation time at solution 257 (18\\% worse than the solution SS obtains in 10 minutes) and the next 2hr of computation time at solution 235 (8\\% worse than the solution SS obtains in 10 minutes). Any design engineer wishing to interact with the results would be frustrated by these times.\n\nIn instance \\texttt{large-scale2}, CPLEX computes a slightly better solution (239) than the SS heuristic (242).\nBut the good CPLEX solution is computed slightly before the 4h timeout. For more than 3.5hr, the best CPLEX heuristic is 273 (13\\% worse than the solution SS obtains in 10 minutes).\nFinally, in instance \\texttt{large-scale0}, CPLEX computes a significantly better solution (175) than the SS heuristic (233).\nBut CPLEX computes the good solution after 2h and the incumbent is similar to the greedy packing SS solution for the first 2 hours.\nThese findings demonstrate that greedy packing approaches are particularly useful when transitioning to larger scale instances.\n\nNote that we could additionally study approaches to improve the heuristic performance of CPLEX, e.g.\\ by changing CPLEX parameters or using a parallel version of CPLEX. But the point of this paper is to develop a deep understanding of a very important problem that consistently arises in process systems engineering \\citep{floudas:2012}.\n}\n\n\\section{Discussion of Manuscript Contributions}\n\\label{sec:discussion}\n\nThis section reflects on this paper's contributions and situates the work with respect to existing literature. We begin in Section \\ref{sec:single_temperature_interval} by designing efficient heuristics for the minimum number of matches problem with the special case of a single temperature interval.\nInitially, we show that the 2 performance guarantee by \\citet{furman:2004} is tight.\nUsing graph theoretic properties, we propose a new MILP formulation for the single temperature interval problem which does not contain any big-M constraints. We also develop an improved, tight, greedy 1.5-approximation algorithm which prioritizes stream matches with equal heat loads. Apart from the its independent interest, solving the single temperature interval problem is a major ingredient of water filling heuristics.\n\n\nThe multiple temperature interval problem requires big-M parameters. We reduce these parameters in Section \\ref{sec:max_heat} by\ncomputing the maximum amount of heat transfer with match restrictions.\nInitially, we present a greedy algorithm for exchanging the maximum amount of heat between two streams.\nThis algorithm computes tighter big-M parameters than \\citet{gundersen:1997}.\nWe also propose LP-based ways for computing the maximum exchanged heat using only a subset of the available matches.\nMaximum heat computations are fundamental ingredients of our heuristic methods and detect the overall problem feasibility. This paper emphasizes how tighter big-M parameters improve heuristics with performance guarantees, but notice that improving the big-M parameters will also tend to improve exact methods.\n\nSection \\ref{sec:relaxation_rounding} further investigates the \\emph{relaxation rounding heuristics} of \\cite{furman:2004}. 
\n\\citet{furman:2004} propose a heuristic for the minimum number of matches problem based on rounding the LP relaxation of the transportation MILP formulation\n(\\emph{Fractional LP Rounding (FLPR)}).\nInitially, we formulate the LP relaxation as a minimum cost flow problem showing that it can be solved with network flow techniques which are more efficient than generic linear programming.\nWe derive a negative performance guarantee showing that FLPR has poor performance in the worst case.\nWe also prove a new positive performance guarantee for FLPR indicating that its worst-case performance may be improved with tighter big-M parameters.\nExperimental evaluation shows that the performance of FLPR improves with our tighter algorithm for computing big-M parameters.\nMotivated by the method of Lagrangian Relaxation, \\citet{furman:2004} proposed an approach generalizing FLPR by approximating the cost of the heat transferred via each match.\nWe revisit possible policies for approximating the cost of each match.\nInterestingly, we show that this approach can be used as a generic method for potentially improving a solution of the minimum number of matches problem.\nHeuristic \\emph{Lagrangian Relaxation Rounding (LRR)} aims to improve the solution of FLPR in this way.\nFinally, we propose a new heuristic, namely \\emph{Covering Relaxation Rounding (CRR)}, that successively solves instances of a new covering relaxation which also requires big-M parameters.\n\nSection \\ref{sec:water_filling} defines \\emph{water filling heuristics} as a class of heuristics solving the minimum number of matches problem in a top-down manner, i.e.\\ from highest to lowest temperature interval.\n\\citet{cerdaetwesterberg:1983} and \\citet{furman:2004} have solution methods based on water filling.\nWe improve these heuristics by developing novel, efficient ways for solving the single temperature interval problem.\nFor example, heuristics \\emph{MILP-based Water Filling (WFM)} and \\emph{Greedy Water Filling (WFG)} incorporate the new MILP formulation (Eqs.\\ \\ref{Eq:SingleMIP_MaxBins}-\\ref{Eq:SingleMIP_Integrality}) and greedy Algorithm IG, respectively.\nWith appropriate LP, we further improve water filling heuristics by reusing in each iteration matches selected in previous iterations.\n\\citet{furman:2004} showed a performance guarantee scaling with the number of temperature intervals. \nWe show that this performance guarantee is asymptotically tight for water filling heuristics. 
\n\nSection \\ref{sec:greedy_packing} develops a new \\emph{greedy packing approach} for designing efficient heuristics for the minimum the number of matches problem motivated by the packing nature of the problem.\nGreedy packing requires feasibility conditions which may be interpreted as a decomposition method analogous to pinch point decomposition, see\n\\cite{hindmarsh:1983}.\nSimilarly to \\cite{cerdaetwesterberg:1983}, stream ordering affects the efficiency of greedy packing heuristics.\nBased on the feasibility conditions, the LP in Eqs.\\ (\\ref{Eq:MaxHeatLP_Objective})-(\\ref{Eq:MaxHeatLP_Positiveness}) selects matches carrying a large amount of heat and incurring low unitary cost for exchanging heat.\nHeuristic \\emph{LP-based Largest Heat Match (LHM-LM)} selects matches greedily by solving instances of this LP.\nUsing a standard packing argument, we obtain a new logarithmic performance guarantee.\nLHM-LP has a polynomial worst-case running time but is experimentally time-consuming due to the repeated LP solving.\nWe propose three other greedy packing heuristic variants which improve the running time at the price of solution quality.\nThese other variants are based on different time-efficient strategies for selecting good matches.\nHeuristic \\emph{Largest Heat Match (LHM)} selects matches exchanging high heat in a pairwise manner.\nHeuristic \\emph{Largest Fraction Match (LFM)} is inspired by the idea of our greedy approximation algorithm for the single temperature interval problem which prioritizes roughly equal matches.\nHeuristic \\emph{Smallest Stream First (SS)} is inspired by the idea of the tick-off heuristic \\citep{hindmarsh:1983} and produces matches in a stream to stream basis, where a hot stream is ticked-off by being matched with a small number of cold streams. \n\nFinally, Section \\ref{sec:results} \nshows numerically that our new way of computing the big-M parameters, our improved algorithms for the single temperature interval, and the other enhancements improve the performance of relaxation rounding and water-filling heuristics.\nThe numerical results also show that our novel greedy packing heuristics {typically find better feasible solutions than} relaxation rounding and water-filling ones. {But the tradeoff is that the relaxation rounding and water filling algorithms achieve very efficient run times.}\n\n\\section{Conclusion}\n\\label{sec:conclusion}\n\nIn his PhD thesis, Professor Floudas showed that, given a solution to the minimum number of matches problem, he could solve a nonlinear optimization problem designing effective heat recovery networks. But the sequential HENS method cannot guarantee that promising minimum number of matches solutions will be optimal (or even feasible!) to Professor Floudas' nonlinear optimization problem. Since the nonlinear optimization problem is relatively easy to solve, we propose generating many good candidate solutions to the minimum number of matches problem. This manuscript develops nine heuristics with performance guarantees to the minimum number of matches problem. Each of the nine heuristics is either novel or provably the best in its class. Beyond approximation algorithms, our work has interesting implications for solving the minimum number of matches problem exactly, e.g.\\ the analysis into reducing big-M parameters {or the possibility of quickly generating good primal feasible solutions}. 
\\\\[-4pt]\n\n\\noindent\n\\textbf{Acknowledgments} \\\\[2pt]\n\\noindent\nWe gratefully acknowledge support from EPSRC EP\/P008739\/1, an EPSRC DTP to G.K., and a Royal Academy of Engineering Research Fellowship to R.M.\n\n\n\\bibliographystyle{ijocv081}\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe success of the manufacturing industry based on the revolutionary innovations in various technologies in hardware and software \\cite{8666558}. The conception of the Internet of Things (IoT) has already become a part of modern factories for automatizing of large-scale processes~\\cite{Microsoft2019}. \n\nManufacturing sector brings many different stakeholders together. This industry has mostly common fraud problems comparing to any sector. The significant rate (56\\%) of respondents reported the issues related to vendor or\nprocurement fraud~\\cite{kroll}. For some companies developing know-how products, intellectual property infringement could be fatal for the whole business. Not surprisingly, these issues make companies look for new ways to avoid them~\\cite{deloitte2014}.\n\nBlockchain technology, which stays behind Bitcoin, is nowadays a hype technology\\cite{8704309}. Its development could be revolutionized like the appearance of the Internet at the beginning of 90th's. Blockchain provides a secure mechanism for achieving an independent trusted between business partners, excluding intermediaries. \nThe rest of the paper is organized as follows. Section \\ref{sec:research:method} discusses information about research methods. In Section \\ref{sec:results} illustrates the results of searching and screening for relevant papers for this research, while section \\ref{sec:analysis} answers the research questions using these results. Section \\ref{sec:conlcusion} concludes the paper.\n\n\n\n\n\\subsection{Overview of Blockchain}\\label{sec:background}\nBlockchain is a new and successful combination of existing technologies. The technical concept of Blockchain describes how data are distributed saved across the user systems using cryptography algorithm.\nBlockchain organized logically centralized and organizationally decentralized. As a result, the Blockchain represents a distributed database that maintains an ever-expanding list of decentralized transaction, events, or records in hash form. The data is maintained in a distributed register (Distributed Ledger Technology) and all participants have a copy of the entire register. In this distributed approach, the data is grouped into individual blocks that are linked together to ensure the chronological order and immutable data integrity of the entire data set~\\cite{8715005}. \n\nThe innovation of Blockchain technology is that existing approaches have been successfully put together. Following approaches are the main components of Blockchain:\n\\begin{itemize}\n\\item \\textbf{Peer-to-peer network:} In this peer-to-peer network (P2P network), communication runs without a central point. All participants or nodes are connected to each other and communicate with each other at the same level. Since the nodes are equal to each other, or can use services and make them available at the same time, there is no classic client-server structure\\cite{8704309}.\n\\item \\textbf{Cryptography:} With the help of methods from cryptography, the distributed register is protected against manipulation and abuse. 
This enables traceability, data integrity and authentication of the data source\\cite{LI2018133}.\n\\item \\textbf{Consensus mechanism:} The consensus mechanism defines the criteria that provide evidence of permission to create new blocks (mining). To reach a consensus, various consensus algorithms have been developed\\cite{Innerbichler:2018:FBA:3211933.3211953}.\n\\end{itemize}\n\nDue to the different uses of Blockchain technology, there are different variations on how the Blockchain is constructed.\nIn a \\textit{public Blockchain}, there are no restrictions on who can see the public data and validate the transactions\\cite{LI2018133}. Furthermore, the Blockchain data may be encrypted and understandable only to the authorized user. In the case of a \\textit{private Blockchain}, the completed consortium of those who access the Blockchain and are allowed to validate transactions is predefined~\\cite{8560153}. In a \\textit{permissionless Blockchain}, there are no restrictions on the identity of the participants who are allowed to conduct the transactions. In a \\textit{permissioned Blockchain}, the user group that can execute the transactions and generate new blocks is predefined and known~\\cite{8678753}.\n\\subsection{Smart Contracts}\nIn addition to a decentralized database system for transactions, Blockchain technology is also a platform for the automation of processes, regulations and organizational principles.\nSmart contracts are new, intelligent forms of contracts. These are to be understood as peer-to-peer applications, which are distributed with the underlying Blockchain technology, such as Bitcoin, Ethereum or Hyperledger. Hyperledger also uses the term \"chaincode\"~\\cite{Wang:2019:SSB:3302505.3310086}.\n\nThe smart contracts enable the determination of the conditions that lead to certain decisions by the data provided; the automatic processing of contracts; the permanent and real-time monitoring of contract terms and the automatic enforcement of the rights of contractors. The smart contracts provide not only the information about the Blockchain network and the distribution of data, but also the business logic~\\cite{Baumung2018135}.\n\nThese P2P applications can be programmed, stored in the Blockchain and executed in there. Therefore, they have the same advantages as the Blockchain itself. In Bitcoin, the smart contracts are created in the form of scripts. In order to simplify the development of smart contracts for Ethereum platform, a new specific programming language \"Solidity\" has been developed~\\cite{Baumung2019456}. Compared to the scripting language in Bitcoin, where many program constructs, such as loops, are missing, Solidity is a higher-level, more abstract language that resembles the JavaScript language. The chain codes are written in various high-level languages, such as Java or Go, and during execution, access is made to the data stored in the Blockchain, or to read out the existing information and store new ones. These scripts are stored in the Blockchain at a particular address. This address is determined when the integration of smart contract into the Blockchain is decided. When an event prescribed in the contract has occurred, a transaction is sent to this address. The distributed virtual machine executes the code of the script using the data sent with the transaction.\n\\subsection{State of Research on Blockchain}\nYli-Huumo et. 
al~\\cite{Yli_Huumo_2016} aim to understand the current research state of Blockchain technology, its technical challenges and limitations. This systematic review illustrates an sharply increasing number of publications each year beginning from 2012. It shows a growing interest in Blockchain technology. \nSwan~\\cite{swan2015blockchain} identified seven technical challenges of Blockchain for the future. Modern Blockchain implementations have to ensure security, throughput, size and bandwidth, performance, usability, data integrity and scalability. Being public Blockchains the \\textit{throughput} in the Bitcoin and Ethereum networks is from 10tps to 100tps (transactions per second). For example, VISA Payment System proceeds 2,000tps. But a permissioned Blockchain Hyperledger Fabric overcomes these challenges~\\cite{Alharby2017}. In order to achieve adequate security in the Bitcoin network validation of transaction takes roughly 10 minutes \\textit{(latency)}. In February 2016 the size of Bitcoin register was 50,000 MB. Current \\textit{size} of Bitcoin is 1 MB. This is the serious limitation of \\textit{bandwidths} for Blockchain, which should be solved to increase amount of transaction handled by register. The 51\\% attach on Blockchain network is still significant security issue. If the majority of the network will be controlled by hackers, it will be possible to manipulate Blockchain. Issue of \\textit{waster resources} is caused by Proof-of-Work effort in the mining process mainly in Bitcoin, which required huge amounts of energy. But there are other consensus algorithms, like Proof-Of-Stake, which are energy friendly. \\textit{Usability} problems resulting from difficulty of using Bitcoin API~\\cite{Yli_Huumo_2016}. \\textit{Versioning, hard forks, multiple chains} refers to a small chain with a small number of nodes, where a possibility of 51\\% attach is higher. Another issue become possible when chains are split for administrative or versioning purposes.\n\\begin{figure*}[htbp]\n\\centerline{\\includegraphics[width=\\textwidth,height=4.5cm]{Mapping-Process.png}}\n\\caption{The Systematic Mapping Process~\\cite{Petersen:2008:SMS:2227115.2227123}.}\n\\label{fig:001}\n\\end{figure*}\n\\section{Research method}\\label{sec:research:method}\nA systematic mapping study was selected to identify and classify primary studies to provide a systematic overview on the topics of industrial manufacturing and Blockchain. Petersen et al.~\\cite{Petersen:2008:SMS:2227115.2227123} presented the guidelines for systematic mapping study, which we followed to conduct this study.\n\nThe process for the systematic mapping study falls into a five-phase process as depicted in Figure \\ref{fig:001}: (1) Define research questions; (2) Search for primary studies; (3) Identify inclusion and exclusion criteria and screen primary studies based on these criteria; (4) Classify primary studies; (5) Mapping the data.\n\\subsection{Research questions}\nThe first step in systematic mapping study is the definition of the research questions. The purpose of this research is to classify current research and identify pertinent themes which relate directly to Blockchain technologies in manufacturing. This leads to the following research questions (RQs): \\\\\n\n\\textbf{RQ1:} \\textit{What are the problems between stakeholders in the manufacturing\nindustry? 
} \\\\\nRationale: The intention of this question is to identify current gaps in relationships of interested parties in manufacturing industry.\\\\\n\n\\textbf{RQ2:} \\textit{What are the data to secure in manufacturing process?} \\\\\nRationale: This question aims to identify the data, which should be insured during the whole process.\\\\\n\n\\textbf{RQ3:} \\textit{What are the use cases of Blockchain technology for manufacturing industry?} \\\\\nRationale: The intention of this question is to figure out the possible usage of Blockchain in manufacturing area.\\\\\n\n\\textbf{RQ4:} \\textit{What Blockchain frameworks are suitable for the scenario \"Assignment of production orders to an external manufacturer\"?} \\\\\nRationale: With this question, we investigate the existing Blockchain solutions to find appropriate framework. \\\\\n\\subsection{Search Strategy}\nThe search strategy is key to ensure a good starting point for the identification of studies and ultimately for the actual outcome of the study. An extensive and broad set of primary studies was needed to answer the research questions. The most popular academic databases in the domain of software engineering were selected to be used in this systematic mapping to search for potentially relevant papers:\n\\begin{itemize}\n\\item ACM Digital Library\\footnote{http:\/\/dl.acm.org}\n\\item IEEE Xplore Digital Library\\footnote{http:\/\/ieeexplore.ieee.org}\n\\item Scopus\\footnote{https:\/\/www.scopus.com}\n\\item Science Direct\\footnote{https:\/\/www.sciencedirect.com}\n\\end{itemize}\n\nFinding possibly relevant publications to answer the research questions requires creating an appropriate search clause. We chose the terms \"Blockchain\" and \"Manufacturing industry\" for this study as the main search keyword core, it focuses on Blockchain technology, manufacturing, production processes.\nThe final search strings were extended with alternative synonyms for main keywords. The term \"distributed ledger\" is a basic technology for \"blockchain\". We considered papers mentioning distributed manufacturing, manufacturing execution, programmable logic controller\" and included them into the search clause.\nRegarding the keywords for the search, after some exploratory searches using different combination of keywords, the researchers jointly established the final string to be used in the search for papers in the databases. Search terms with similar meanings were grouped in the same group and combined using the OR logical operator. 
To perform automatic searches in the selected digital libraries, the AND logical operator were used between combined terms of different groups, depicted in Table \\ref{tab:01}:\n\\begin{table}[htbp]\n\\centering\n\\label{tab:01}\n\\caption{Searches in databases.}\n\\begin{tabular}{|l|l|}\n\\hline\n\\rowcolor[HTML]{EFEFEF} \n\\textbf{Database} & \\textbf{Search}\\\\ \n\\hline\nACM & \\begin{tabular}[c]{@{}l@{}}(+(\"Blockchain\" \"Distributed Ledger\") +\\\\(\"Manufacturing Execution\" \"Programmable Logic \\\\ Controller\" \"Manufacturing\" \"Distributed \\\\manufacturing\")\\end{tabular} \\\\ \\hline\nIEEE & \\begin{tabular}[c]{@{}l@{}}('Blockchain' OR 'Distributed Ledger') AND \\\\ ('Industrial Control' OR 'Manufacturing Execution' \\\\ OR 'Programmable Logic Controller' OR \\\\'Manufacturing industry' OR 'Distributed \\\\manufacturing')\\end{tabular} \\\\ \\hline\nScopus & \\begin{tabular}[c]{@{}l@{}}ALL ((\"Blockchain\" OR \"Distributed Ledger\") AND\\\\ (\"Industrial Control\" OR \"Manufacturing Execution\" \\\\ OR \"Programmable Logic Controller\") OR \\\\ (\"Manufacturing industry\" OR \"Distributed\\\\ manufacturing\"))\\end{tabular} \\\\ \\hline\nScience Direct & \\begin{tabular}[c]{@{}l@{}}(\"Blockchain\" OR \"Distributed Ledger\") AND \\\\ (\"Industrial Control\" OR \"Manufacturing Execution\" \\\\ OR \"Programmable Logic Controller\" OR \\\\ \"Manufacturing\" OR \"Distributed manufacturing\")\\end{tabular} \\\\ \\hline\n\\end{tabular}\n\\end{table}\n\nThe search string was applied to title, abstract, full-text and keywords, and limited to journal papers written in English. The search was performed at the beginning of 2016. A total of 258 papers were retrieved from the different databases, which are focusing on research regarding information technologies, as on 21st May 2019, 10.30 am (CEST) and displayed in Table \\ref{tab:studies}. \n\\begin{table}[htbp]\n\\label{tab:studies}\n\\caption{Number of studies per database.}\n\\centering\n\\begin{tabular}{|l|l|l|l|l|}\n\\hline\n\\rowcolor[HTML]{EFEFEF} \n\\textbf{Database} & \\textbf{\\begin{tabular}[c]{@{}l@{}}Search\\\\ results\\end{tabular}} & \\textbf{\\begin{tabular}[c]{@{}l@{}}Final\\\\ results\\end{tabular}} & \\textbf{\\begin{tabular}[c]{@{}l@{}}\\% final papers\\\\ from search results\\end{tabular}} \\\\ \\hline\nACM & 24 & 4 & 1.6 \\% \\\\ \\hline\nIEEE & 61 & 12 & 4.7 \\% \\\\ \\hline\nScopus & 159 & 6 & 2.3 \\% \\\\ \\hline\nScience Direct & 14 & 6 & 2.3 \\% \\\\ \\hline\n\\textbf{Total} & \\textbf{258} & \\textbf{28} & \\\\ \\hline\n\\end{tabular}\n\\end{table}\n\n\\subsection{Screening for relevant papers}\nThis step is to exclude all research papers that are irrelevant to the research questions. To accomplish this step, We followed the screening approach described by Yli-Huumo et al.~\\cite{Yli_Huumo_2016} and defined inclusion\/exclusion criteria.\\\\\n\n\\textbf{Inclusion criteria} \n\nTo be considered for inclusion in the study, the research being evaluated had to originate from an academic source, such as a journal or conference, and clearly show its contribution was focused on applying Blockchain in manufacturing. Studies are accessible electronically. The paper title includes \"Blockchain\".\\\\\n\n\\textbf{Exclusion criteria}\n\nFor those publications that passed the inclusion criteria, two filters were applied to reduce the publications to only those that were deemed to be directly aligned with the focus of the study. 
These filters are described as follows:\n\\begin{itemize}\n\\item Study focuses on financial sector\n\\item Study contains \"Bitcoin\" or \"Cryptocurrency\"\n\\end{itemize}\nIt was decided to apply exclusion criteria on titles, keywords, abstracts and full-text. The final inclusion or exclusion could be decided based on the reading of publication's full text. Thus, this phase only eliminates publications clearly not within this study's scope and publications failing on formal requirements (such as duplicated papers). In detail, we identified 230 publications (about 90\\% of all papers) outside this study's scope: 20 duplicates, 156 papers accordingly with inclusion\/exclusion criteria and other based on abstract reading.\nThe search process is summarized in Figure \\ref{fig:sum}.\\\\\n\n\\begin{tikzpicture}\n[node distance=.6cm,\nstart chain=going below,]\n \\node (step0) [punktchain ] {Applying search on databases};\n \\begin{scope}[start branch=venstre,\n every join\/.style={->, thick, shorten <=1pt}, ]\n \\node[punktchain, on chain=going right, join=by {->}]\n (result0){Results = 258};\n \\end{scope}\n \\node (step1) [punktchain ] {Remove duplicates};\n \\begin{scope}[start branch=venstre,\n every join\/.style={->, thick, shorten <=1pt}, ]\n \\node[punktchain, on chain=going right, join=by {->}]\n (result1){Results = 238};\n \\end{scope}\n \\node[punktchain] (step2) {Apply inclusion\/exclusion};\n \\begin{scope}[start branch=hoejre1,]\n every join\/.style={->, thick, shorten <=1pt}, ]\n \\node [punktchain, on chain=going right, join=by {->}] \n (result2) {Results = 94};\n \\end{scope}\n \\node[punktchain] (step3) {Exclusion based on abstract};\n \\begin{scope}[start branch=hoejre1,]\n every join\/.style={->, thick, shorten <=1pt}, ]\n \\node [punktchain, on chain=going right, join=by {->}] \n (result3) {Results = 36};\n \\end{scope}\n \\node[punktchain] (step4) {Exclusion based on full reading};\n \\begin{scope}[start branch=hoejre1,]\n every join\/.style={->, thick, shorten <=1pt}, ]\n \\node [punktchain, on chain=going right, join=by {->}]\n (result4) {Results = 32};\n \\end{scope}\n \\node[punktchain] (step5) {Final primary papers};\n \\begin{scope}[start branch=hoejre1,]\n every join\/.style={->, thick, shorten <=1pt}, ]\n \\node [punktchain, on chain=going right, join=by {->}]\n (result5) {Results = 28};\n \\end{scope}\n \\draw[|-,-|,->, thick,] (result0.south) |-+(0,-1em)-| (step1.north);\n \\draw[|-,-|,->, thick,] (result1.south) |-+(0,-1em)-| (step2.north);\n \\draw[|-,-|,->, thick,] (result2.south) |-+(0,-1em)-| (step3.north);\n \\draw[|-,-|,->, thick,] (result3.south) |-+(0,-1em)-| (step4.north);\n \\draw[|-,-|,->, thick,] (result4.south) |-+(0,-1em)-| (step5.north);\n \\end{tikzpicture}\n \\begin{figure}[htbp]\n\\caption{Search and Selection Process of the Papers.}\n\\label{fig:sum}\n\\end{figure}\n\\subsection{Key-wording using Abstracts}\nYli-Huumo et al.~\\cite{Yli_Huumo_2016} proposed the key-wording technique to classify all relevant research papers. We first went thought the abstract of each paper to identify the most important keywords and the key contribution. This purpose used to classify papers under different categories. In some cases where it was difficult to classify a paper using\nits abstract, we looked through its introduction and conclusion. 
After classifying all papers, we read the papers and made changes to the classification when necessary.\n\\subsection{Data extraction and mapping process}\nThe bubble plot was designed to collect all the information needed to address the research questions of this mapping study. During the process of data extraction, we recorded major terms to Excel, which helped me to generate categories and proceed quality analysis. These data items embrace the main goals of papers. We used both qualitative and quantitative synthesis methods.\n\\\\\n\nThe final search process results 28 papers performed at the beginning of 2017. Their distribution over databases and percentage from all search results are illustrated in Table \\ref{tab:studies}.\\\\\n\\section{Study Results}\\label{sec:results}\nThis section demonstrates the findings from those the data was extracted regarding use cases of Blockchain in manufacturing industry, research type and attributes of Blockchain technology.\n\\begin{figure}[htbp]\n\\centerline{\\includegraphics[width=4.25cm,height=4.25cm]{Countries.png}}\n\\caption{Continent wise distribution of studies.}\n\\label{fig:country}\n\\end{figure} \n\n\\textbf{Top Countries and continents}\n\nGeographic distribution of the selected primary papers is shown in Fig. \\ref{fig:country}. The top continent was Europe with 14 studies being conducted there. Asia was second with 9 studies followed by America with 5 studies. China and Germany contributed towards 6 and 3 studies respectively. The rest of the countries had two or less papers published. It shows, that Blockchain technology has attracted attention worldwide. \\\\\n\n\\textbf{Year Wise Distribution}\n\nFigure \\ref{year} depicts the distribution of the papers from 2017 to 2019 year. Literature related to Blockchain in manufacturing industry has increased enormously in the last 2 years. Due to increasing papers of applying Blockchain in industrial context it expected to save this trend for next years. \\\\\n\\begin{figure}[htbp]\n\\centerline{\\includegraphics[width=6cm,height=4cm]{Year.png}}\n\\caption{Publication year of the selected primary papers.}\n\\label{year}\n\\end{figure} \n\n\\textbf{Classification of research.}\n\nAll of the publications in the study were classified by considering the following\ncriteria: (1) Use case of Blockchain, (2) Research facet and (3) Blockchain facet.\n\\subsection{Use case of Blockchain:}\nIn order to answer RQ3, we classified the publications in the study under five dimensions. These dimensions describe different use cases of Blockchain in manufacturing industry on the current state of research in the area. The use cases of Blockchain in industrial manufacturing are follows: \\\\\n\n\\textbf{Secure transfer of order data.} This use case describes how the production orders can be assigned to an external manufacturer and securely transmitted between different systems. It enables mutual interaction between the producer and the customer~\\cite{8250199}.\\\\\n\n\\textbf{Product data storing.} The data can be intercepted during transmission from the user's computer to the cloud systems. In these use cases the focus is on secured storing of products data in Blockchain.\\\\\n\n\\textbf{Supply chain, Process traceability.} Creating and distributing of goods can span over multiple locations, hundreds of stages etc. 
This use case aims to provide the ability to trace processes in the supply chain, from the procurement of raw materials to production~\\cite{8711819}.\\\\\n\n\\textbf{Prevention of fraud, Protection of Intellectual Property (IP).} The main point of this use case is to prove the origin of products and to prevent manipulation by providing an indelible and traceable record of changes.\\\\\n\n\\textbf{Industrial IoT (IoT), Automation.} This use case illustrates how Blockchain can be used for integration with industrial IoT in an automation context. \\\\\n\nFigure \\ref{trend} shows the number of publications in which each of the use cases above is described, year by year. This figure illustrates that the most researched topic is supply chain and process traceability (19 papers). Less researched (5 papers) is the scenario of how product data could be stored in the Blockchain. All use cases except \"Secure transfer of order data\" show a trend of increasing interest. \n\\begin{figure}[htbp]\n\\centerline{\\includegraphics[width=9cm,height=5cm]{Trend.png}}\n\\caption{Publication year of the identified use cases.}\n\\label{trend}\n\\end{figure} \n\n\\subsection{Research Facet:}\nThis facet, inspired by~\\cite{Petersen:2008:SMS:2227115.2227123}, classifies publications according to the type of research they contribute. (1) A review paper summarizes the current state of understanding of a topic. (2) A conceptual paper addresses a question that cannot be answered simply by getting more factual information. (3) A solution proposal includes an illustration or example of a solution to a particular problem. (4) Implementation research provides a prototypical development of a solution. (5) A case study provides an up-close, in-depth, and detailed examination of a subject of study.\n\n\\begin{figure*}[htbp]\n\\centerline{\\includegraphics[width=\\textwidth,height=11cm]{SM.png}}\n\\caption{Visualisation of a Systematic Map in the Form of a Bubble Plot}\n\\label{fig:007}\n\\end{figure*}\n\\subsection{Blockchain Facet:}\nTo answer RQ4, this facet classifies the publications along Blockchain attributes such as framework (Ethereum, Hyperledger or other frameworks), type of Blockchain (public, private, permissioned) and whether the paper focuses on a consensus algorithm.\n\nThe results of the mapping process are summarized in Figure \\ref{fig:007} in the form of a bubble plot showing the frequencies. This visualization gives a quick overview of the publications in each category.\n\n\\section{Analysis}\\label{sec:analysis}\nThis section discusses the study results and answers the research questions that we defined in Section \\ref{sec:background}.\\\\ \n\n\\textbf{RQ1: What are the problems between stakeholders in manufacturing industry?}\\\\\\\\\nThe manufacturing industry is facing security incidents due to competition~\\cite{Wang:2019:SSB:3302505.3310086}, data sharing with third-party companies~\\cite{8678753}, and operational inefficiencies, losses and costs~\\cite{8626103}. The issue of limited trust is one of the complications in the industry~\\cite{Geiger:2019:PTD:3297280.3297546,Innerbichler:2018:FBA:3211933.3211953,8704309}. In~\\cite{Pinheiro2019331} the authors illustrate the dependency of industrial companies on a Trusted Third Party (TTP). This is caused by the closed source code of the programs used in the manufacturing industry~\\cite{8678753}. Moreover, it is necessary to differentiate between original parts and counterfeit products~\\cite{Holland2018,BANERJEE201869}. 
The outsourcing of production orders~\\cite{Baumung2019456} leads to limited flexibility and considerable organizational effort.\n\\\\ \\\\\n\\textbf{RQ2: What are the data to secure in manufacturing process?} \\\\\\\\\nFor some businesses data exchange is a key success factor, and we found several types of data that should be protected between stakeholders: \n\\begin{itemize}\n\\item Computer-Aided Design (CAD) file for design and technical documentation~\\cite{Holland2018},~\\cite{Papakostas2019}\n\\item Material specification~\\cite{Mondragon20181300,Papakostas2019}\n\\item Order details~\\cite{Baumung2019456} and product recipe~\\cite{WESTERKAMP2019}\n\\item Machine data~\\cite{Geiger:2019:PTD:3297280.3297546,ANGRISH20181180} (measurement data~\\cite{Wang:2019:SSB:3302505.3310086} and configuration~\\cite{Mondragon20181300})\n\\item Process values~\\cite{Geiger:2019:PTD:3297280.3297546,MANDOLLA2019134} and process state~\\cite{LI2018133,8704309,8621042}\n\\end{itemize}\n\nThe selection of data to be protected depends on the specific scenario and application area of Blockchain. \\\\ \n\\textbf{RQ3: What are the use cases of Blockchain technology for manufacturing industry?} \\\\ \\\\\nThere are various use cases of Blockchain technology in industrial manufacturing. In this study we identified 5 papers (17\\% of all papers) describing the use case \"Secure transfer of order data\" in the form of a solution proposal. All of these papers provide a prototypical implementation. Only 4 papers (14\\%) are relevant for storing product data in a distributed ledger. According to our findings, a significant part of the papers (around 43\\%) illustrate a solution proposal for supply chain and process traceability, and 9 of them (32\\%) demonstrate implementation examples. Fraud prevention and IP Protection are considered in 2 papers (around 7\\%), and both of these papers implement this use case. The last use case, \"IoT, Automation\", is described in 5 papers and covers all types of research equally.\nThe application of Blockchain is not limited to the use cases above. \\\\ \\\\\n\\textbf{RQ4: What Blockchain frameworks are suitable for the scenario \"Assignment of production orders to an external manufacturer\"?} \\\\ \\\\\nThis scenario is implemented mostly on the Ethereum Blockchain (in 4 papers)~\\cite{Baumung2018135,Baumung2019456,Papakostas2019,ANGRISH20181180}. In~\\cite{Baumung2019456} an example of using Hyperledger Fabric was illustrated, and in~\\cite{LI2018133} Multichain was used. A public Blockchain was used in 2 papers~\\cite{Baumung2018135},~\\cite{Baumung2019456}, in 1 research work the authors used a private Blockchain~\\cite{Papakostas2019}, and in 4 papers a consortium or permissioned Blockchain was chosen~\\cite{LI2018133,Baumung2018135,Baumung2019456,ANGRISH20181180}. Some papers describe several types of Blockchain for the same use case. This means that this use case can be implemented on different Blockchain networks and does not require a specific framework. \n\\section{Conclusion}\\label{sec:conlcusion}\nIn the coming years it is expected that the manufacturing sector will benefit from the use of Blockchain technology. In order to identify opportunities for the integration of Blockchain in industrial processes, this research was carried out in the form of a systematic mapping study. 
After conducting the SMS and analysing the literature, a total of 28 primary papers were extracted from 4 different scientific databases, published mainly in journals and conference proceedings, and classified into different facets. We have covered the time period of 2017-2019 and have classified the papers under different dimensions. We grouped these issues into five use cases, namely, secure transfer of order data, product data storing, supply chain and process traceability, fraud prevention and IP protection, IoT and automation. \n\nIn this study we found that the majority of papers describe the case \"supply chain and process traceability\" as a solution proposal. There are considerably fewer findings regarding the assignment of production orders to an external manufacturer. This demonstrates the relative lack of research on this scenario, which requires a lot more research effort. As we found, the latter use case can be implemented using different frameworks. For example, this case could be implemented based on a permissioned Blockchain using Hyperledger Fabric\\footnote{\\url{www.hyperledger.org}}. However, all frameworks need to be evaluated in order to select the most suitable solution for the use case. \n\nThe results of this mapping study apply only to the selected research databases and may help researchers to get an overview of the status of Blockchain in the manufacturing industry and highlight the research gaps. \n\nOur line of future research aims precisely to implement our own prototype of the use case \"Assignment of production orders to an external manufacturer\" to demonstrate the benefits of using Blockchain in factory automation. Additionally, we are going to extend the literature survey to include other databases like SpringerLink and we will apply snowballing to ensure that the search is as comprehensive as possible. \n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzhgko b/data_all_eng_slimpj/shuffled/split2/finalzzhgko new file mode 100644 index 0000000000000000000000000000000000000000..ffbd3ecbf364b3e13d02c43df21b5c40dd44231d --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzhgko @@ -0,0 +1,5 @@ +{"text":"\\section*{Introduction}\nIn recent years unstable periodic orbits have\nbeen shown to be an effective tool in the description of \ndeterministic dynamical systems of low intrinsic dimension\\cite{CHAOS92}, \nin diagnosing deterministic chaos in noisy biological \nsystems\\cite{Moss94}, and many other applications. \nThe theory has been successfully applied to low dimensional ordinary \ndifferential equations (deterministic chaos) and linear partial differential \nequations (semiclassical quantization). It is an open question whether\n the theory has anything to say about nonlinear partial differential \nequations (hydrodynamics, field theory). 
\nIn this paper we show that \n the periodic orbit theory can be used to describe \nspatially extended systems by applying it to the Kuramoto-Sivashinsky\nequation\\cite{Kur,Siv}.\n\nIn what follows we shall refer to a periodic solution \n as a ``cycle'', and to the closure of the union of all \nperiodic solutions as the ``invariant set''.\nPeriodic solutions are important because they form the {skeleton}\nof the invariant set\\cite{cycprl,AACI}, with\ncycles ordered {hierarchically}; short cycles give good\napproximations to the invariant set, longer cycles refinements.\nErrors due to neglecting long cycles can be bounded, and for nice hyperbolic\nsystems they fall off exponentially or even superexponentially \nwith the cutoff cycle length\\cite{Rugh92,CRR93}.\nFurthermore, cycles are {structurally robust} as for smooth flows\neigenvalues of short cycles vary slowly with smooth parameter changes,\nshort cycles can be accurately extracted from experimental\nor numerical data, and\nglobal averages (such as \ncorrelation exponents, escape rates \nand other ``thermodynamic\" averages) can be efficiently computed from short\ncycles by means of {cycle expansions}.\n\nWhile the role of periodic solutions in elucidating the asymptotics of\nordinary differential equations was already appreciated by\nPoincar\\'e\\cite{poincare}, allegedly Hopf\\cite{Hopf42}\nand, more demonstrably, Spiegel and\ncollaborators\\cite{MS66,BMS71,EAS87} have argued that the asymptotics of\npartial differential equations should also\nbe described in terms of recurrent spatiotemporal patterns.\nPictorially, dynamics drives a given spatially extended system through a\nrepertoire of unstable patterns; as we watch \na given ``turbulent'' system evolve, \nevery so often we catch a glimpse of a familiar pattern. \nFor any finite spatial resolution,\nthe system follows approximately for a finite time \na pattern belonging to a finite \nalphabet of admissible patterns, and the long term dynamics can be thought\nof as a walk through the space of such patterns,\njust as chaotic dynamics with a low dimensional\nattractor can be thought of as a succession of nearly periodic (but\nunstable) motions.\n\n\\section{Kuramoto-Sivashinsky system}\n\nWe offer here a modest implementation of the above program on\na prototype spatially extended dynamical system defined by\nthe Kuramoto-Sivashinsky equation\\cite{Kur,Siv}\n\\begin{equation}\nu_t=(u^2)_x-u_{xx}-\\nu u_{xxxx} \n\\label{ks}\n\\end{equation}\nwhich arises as a model amplitude equation for interfacial instabilities\nin a variety of contexts - see e.g. \\refref{KNS90}.\nHere $t \\geq 0$ is the time, $x \\in [0,2\\pi]$ is the space coordinate,\nand $\\nu$ is a fourth-order ``viscosity'' damping parameter.\nThe subscripts $x$ and $t$ denote the partial derivatives with respect to \n$x$ and $t$. \nWe take the Kuramoto-Sivashinsky system because it is one of the\nsimplest physically interesting spatially extended nonlinear systems,\nbut in the present context the \ninterpretation of the equation, or the equation itself is not the most \nimportant ingredient;\nthe approach should be applicable to a wide class of \nspatially extended nonlinear systems. 
The salient feature of such \npartial differential equations is that\nfor any finite value of $\\nu$ their asymptotics is in principle \ndescribable by a\n{\\em finite} set of ``inertial manifold''\n ordinary differential equations\\cite{Foias88}.\n\nThe program of studying unstable solutions in this context\nwas initiated by Goren, Eckmann and Procaccia\\cite{GEP}\nwho have used a 2-unstable modes truncation of \nthe Kuramoto-Sivashinsky equation\nto study the dynamics connecting coexisting unstable \n{\\em temporally stationary} solutions.\nWe shall study here\nunstable {\\em spatiotemporally periodic} solutions of the {\\em full}\nKuramoto-Sivashinsky system.\nOur main result is that in the limit of weak turbulence or \n``spatiotemporal chaos'', we can determine hierarchically and exhaustively\ncycles of longer and longer periods, and apply this data \nto the evaluation of global averages. \n\nThe function $u(x,t)=u(x+2\\pi,t)$ is assumed periodic on \nthe $x \\in [0,2\\pi]$ interval. \nAs $u(x,t)$ has compact support, the standard\nstrategy is to expand it in a discrete spatial Fourier series: \n\\begin{equation}\n u(x,t)= \\sum_{k=-\\infty}^{+ \\infty} b_k(t) \\e^{\\i k x}\n\\, . \n\\label{fseries}\n\\end{equation}\nSince $u(x,t)$ is real, $b_k=b_{-k}^*$.\nSubstituting (\\ref{fseries}) into (\\ref{ks}) yields \nthe infinite ladder of evolution equations for the Fourier coefficients $b_k$: \n\\begin{equation}\n\\dot{b}_k=(k^2-\\nu k^4)b_k +\\i k \\sum_{m=-\\infty}^{\\infty}\nb_m b_{k-m}\n\\,. \n\\label{expanfull}\n\\end{equation}\nAs $\\dot{b}_0=0$, the average (the mean drift) of the solution is \nan integral of motion. In what follows we shall assume that this average is \nzero, $\\int \\d x \\, u(x,t) =0$. \n\nThe coefficients $b_k$ are in general complex functions of time. \nWe can simplify the system (\\ref{expanfull}) further by assuming \nthat $b_k$ are pure imaginary, $b_k= \\i a_k$, \nwhere $a_k$ are real. \nAs we shall see below, this picks out the\nsubspace of odd solutions $u(x,t)=-u(-x,t)$, with \nthe evolution equations \n\\begin{equation}\n\\dot{a}_k=(k^2-\\nu k^4)a_k - k \\sum_{m=-\\infty}^{\\infty} a_m a_{k-m} \n\\,.\n\\label{expan}\n\\end{equation}\nWe shall determine \nthe periodic solutions in the space of Fourier coefficients,\nand then reconstitute from them the unstable spatiotemporally \nperiodic solutions of\n\\refeq{ks}. \n\nThe trivial solution $u(x,t)=0$ is a fixed point of (\\ref{ks}). From \n(\\ref{expan}) it follows that the $|k|<1\/ \\sqrt{\\nu}$ \nlong wavelength modes of this fixed point \nare linearly unstable, and the \n$|k|>1\/ \\sqrt{\\nu}$ short wavelength modes are stable. \nFor $\\nu > 1$, $u(x,t)=0$ is the globally attractive stable fixed point;\nstarting with $\\nu =1$ the solutions go through a rich sequence of\nbifurcations, studied e.g. in \\refref{KNS90}. \nDetailed knowledge of the parameter dependence of bifurcations sequences\nis not needed for our\npurposes; we shall take $\\sqrt{\\nu}$ sufficiently small so that the\ndynamics can be spatiotemporally chaotic, but not so small that we would be\noverwhelmed by too many short wavelength modes needed in order to accurately\nrepresent the dynamics.\n\nThe growth of the unstable long wavelengths (low $|k|$) excites\nthe short wavelengths \nthrough the nonlinear term in (\\ref{expan}). \nThe excitations thus transferred are dissipated by the strongly damped \nshort wavelengths, and a sort of ``chaotic equilibrium'' \ncan emerge. 
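\n\nFor concreteness, the truncated system (\\ref{expan}) takes only a few lines of code. The sketch below (Python with NumPy) is purely illustrative and is not the routine used for the computations reported here; it uses $a_0=0$, the relation $a_{-k}=-a_k$ (which follows from $b_{-k}=b_k^*$), and the simplest truncation, $a_k=0$ for $k>N$:\n\\begin{verbatim}\nimport numpy as np\n\ndef ks_rhs(a, nu):\n    # a[k-1] holds a_k for k = 1..N; a_0 = 0 and a_{-k} = -a_k\n    N = len(a)\n    def mode(j):\n        if j == 0 or abs(j) > N:\n            return 0.0\n        return a[j-1] if j > 0 else -a[-j-1]\n    adot = np.empty(N)\n    for k in range(1, N+1):\n        conv = sum(mode(m)*mode(k-m) for m in range(-N, N+1))\n        adot[k-1] = (k**2 - nu*k**4)*a[k-1] - k*conv\n    return adot\n\\end{verbatim}\nAny standard adaptive integrator can then be used to advance $a(t)$ in time.\n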
The very short wavelengths $|k| \\gg 1 \/ \\sqrt{\\nu}$ \nwill remain small \nfor all times, but the intermediate wavelengths of order\n$|k| \\sim 1 \/ \\sqrt{\\nu}$ \nwill play an important role in maintaining the dynamical equilibrium. \nAs the damping parameter decreases, the solutions increasingly take on \n Burgers type shock front\ncharacter which is poorly represented by the Fourier basis, and many\nhigher harmonics need to be kept\\cite{KNS90,GEP} in truncations of\n(\\ref{expan}).\nHence, while one may truncate the high modes in the expansion (\\ref{expan}), \ncare has to be exercised to ensure that no\nmodes essential to the dynamics are chopped away.\n\nBefore proceeding with the calculations, we take into account\nthe symmetries of the solutions and describe our criterion for reliable\ntruncations of the infinite ladder of \nordinary differential equations (\\ref{expan}).\n\n\\section{Symmetry decomposition}\nAs usual, the first step in analysis of such dynamical flows\nis to restrict the dynamics to a Poincar\\'e section. We\nshall fix the Poincar\\'e section to be the hyperplane\n$a_1=0$. We integrate (\\ref{expan}) with the initial\n conditions \n$a_1=0$, and arbitrary values of the coordinates $a_2, \\ldots, a_N$, where \n$N$ is the truncation order. When $a_1$ becomes \n$0$ the next time, the coordinates $a_2, \\ldots, a_N$ are mapped \ninto $(a_2', \\ldots a_N')=P(a_2, \\ldots, a_N)$, where $P$ is the Poincar\\'e \nmap. $P$ defines a mapping of a $N-1$ dimensional hyperplane into itself. \nUnder successive iterations of $P$, any trajectory\napproaches the attractor ${\\cal A}$, which itself is an invariant \nset under $P$. \n\nA trajectory of \n (\\ref{expan}) can cross the plane $a_1=0$ in two possible ways: \n either when \n$\\dot{a_1}>0$ (``up'' intersection) \nor when $\\dot{a_1}<0$ (``down'' intersection),\n with the ``down'' and ``up'' crossings \nalternating. \nIt then makes sense to define the Poincar\\'e map $P$ as a transition between, \nsay, ``up'' and ``up'' crossing. \nWith Poincar\\'e section defined as the ``up-up'' transition, \nit is natural to define a ``down-up'' transition map $\\Theta$. Since \n$\\Theta$ describes the transition from down to up (or up to down) state, \nthe map $\\Theta^2$ describes the transition up-down-up, that is \n$\\Theta^2=P$.\n\nConsider the spatial flip and\nshift symmetry operations $Ru(x)=u(-x)$, $Su(x)=u(x+\\pi)$. \n The latter symmetry reflects the invariance under\nthe shift $u(x,t) \\rightarrow u(x+ \\pi,t)$, and is a particular case of the \ntranslational invariance of the Kuramoto-Sivashinsky equation (\\ref{ks}). \nIn the Fourier modes decomposition (\\ref{expan}) this \nsymmetry acts as\n$S: a_{2k} \\rightarrow a_{2k}, a_{2k+1} \\rightarrow -a_{2k+1}$. \nRelations $R^2=S^2=1$\ninduce decomposition of the space of solutions into 4 invariant \nsubspaces\\cite{KNS90};\nthe above restriction to $b_k= \\i a_k$ amounts to specializing\nto a subspace of odd solutions $u(x,t)=-u(-x,t)$.\n\nNow, with the help of the symmetry $S$ \nthe whole attractor ${\\cal A}_{tot}$ can be \ndecomposed into two pieces: ${\\cal A}_{tot}={\\cal A}_0 \\cup S \n{\\cal A}_0 $ for some set ${\\cal A}_0$.\n It can happen that the set ${\\cal A}_0$\n (the symmetrically decomposed attractor) \ncan be decomposed even further by the action of the map $\\Theta$. In this \ncase the attractor will consist of four disjoint sets: \n ${\\cal A}_{tot}={\\cal A} \\cup S {\\cal A} \\cup \\Theta {\\cal A} \n \\cup \\Theta S {\\cal A} $. 
As we shall see, \nthis decomposition \nis not always possible, since sometimes $ {\\cal A}$ overlaps with \n$\\Theta S{\\cal A} $ (in this case $\\Theta {\\cal A}$ will also overlap with \n$S {\\cal A} $). \nWe shall carry out our calculations in the regime where \nthe decomposition into four disjoint pieces \nis possible. In this case the set $ {\\cal A}$ can be taken as\nthe fundamental \ndomain of the Poincar{\\'e} map, with $S {\\cal A} $, \n$\\Theta {\\cal A} $ and $\\Theta S {\\cal A} $ its images under the \n$S$ and $\\Theta$ mappings.\n\nThis reduction of the dynamics to the fundamental domain is particularly\nuseful in periodic orbit calculations, as it simplifies symbolic dynamics\nand improves the convergence of cycle expansions\\cite{CEsym}.\n\n\\section{Fourier modes truncations}\n \nWhen we simulate the equation (\\ref{expan}) on a computer, we have \nto truncate the ladder of equations to a finite length $N$, i.e., set \n $a_k=0$ for $k>N$. \n$N$ has to be sufficiently large that no harmonics \n$a_k$ important for the dynamics with $k>N$ \nare truncated. On the other hand,\ncomputation time increases dramatically with the increase of $N$:\nsince we will be evaluating the stability matrices for the flow,\nthe computation time will grow at least as $N^2$. \n\n\nAdding an extra dimension to a truncation of the system (\\ref{expan})\nintroduces a small\nperturbation, and this can (and often will) \nthrow the system into a totally different asymptotic state. \nA chaotic attractor for $N=15$ can become a period three \nwindow for $N=16$, and so on. \nIf we compute, for example, the Lyapunov exponent\n$\\lambda(\\nu,N)$ for the strange attractor of the \nsystem (\\ref{expan}), there is no reason to \nexpect $\\lambda(\\nu,N)$ to smoothly converge to the limit \nvalue $\\lambda(\\nu,\\infty)$ as $N \\rightarrow \\infty$. \nThe situation is different in the periodic windows, \nwhere the system is structurally stable, and it makes sense to compute \n Lyapunov exponents, escape rates, etc. for the \n{\\em repeller}, i.e. the closure of the set of all \n{\\em unstable} periodic orbits. \nHere the power of cycle expansions comes in: \nto compute quantities on the repeller by direct averaging methods is \ngenerally more difficult, because the motion quickly collapses to the \nstable cycle. \n\n\\begin{figure}\n\\centerline{\\epsfig{file=feig16bm.ps,width=8cm}}\n\\caption[]{\nFeigenbaum tree for coordinate $a_6$, $N=16$ Fourier modes \ntruncation of (\\ref{expan}). The two upper arrows indicate \nthe values of damping parameter that we use in our \nnumerical investigations; $\\nu=0.029910$ \n(chaotic) and $\\nu=0.029924$ (period-3 window). The lower arrow indicates\nthe kink where the invariant set ${\\cal A}$ \nstarts to overlap with $\\Theta S {\\cal A}$.\nTruncation to $N=17$ modes yields a similar figure, with values for \nspecific bifurcation points shifted by $\\sim 10^{-5}$ with respect to the \n$N=16$ values. The choice of the coordinate $a_6$ is arbitrary;\nprojected down to any coordinate, the tree is qualitatively the same.\n}\n\\label{feig16}\n\\end{figure}\n\n\nWe have found that the minimum value of $N$ to get any chaotic behavior at all \nwas $N=9$. However, the dynamics for the $N=9$ truncated system is rather\ndifferent from the full system dynamics, and therefore we have performed our\nnumerical calculations for $N=15$, $N=16$ and $N=17$. \n\\refFig{feig16} is a representative plot of the Feigenbaum \ntree for the Poincar\\'e map $P$. 
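\n\nNumerically, the section crossings are found by integrating (\\ref{expan}) and keeping the passages through $a_1=0$ with $\\dot{a}_1>0$. A minimal sketch of this bookkeeping (ours, reusing the right-hand-side routine from the previous sketch; it is not the code behind the figures shown here) reads:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.integrate import solve_ivp\n\ndef section_a6(nu, a0, n_cross=1000, t_settle=200.0):\n    # a_6 coordinate at successive 'up' crossings of the a_1 = 0 hyperplane\n    rhs = lambda t, a: ks_rhs(a, nu)\n    up = lambda t, a: a[0]      # the section a_1 = 0\n    up.direction = 1.0          # keep only crossings with increasing a_1\n    # let the trajectory settle onto the attractor first\n    a = solve_ivp(rhs, (0.0, t_settle), a0, rtol=1e-9, atol=1e-12).y[:, -1]\n    out = []\n    while len(out) < n_cross:\n        sol = solve_ivp(rhs, (0.0, 50.0), a, events=up, rtol=1e-9, atol=1e-12)\n        out += [ae[5] for ae in sol.y_events[0]]   # a_6 is component 5\n        a = sol.y[:, -1]\n    return np.array(out[:n_cross])\n\\end{verbatim}\n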
\nTo obtain this figure, we took a random \ninitial point, iterated it for a some time to let it settle on the\nattractor and then plotted the $a_6$ coordinate of the next 1000 intersections\nwith the Poincar{\\'e} section.\nRepeating this for different values of the damping parameter $\\nu$, one can \nobtain a picture of the attractor as a function of $\\nu$.\nFor an intermediate range of values of\n$\\nu$, the dynamics exhibits a rich variety of behaviours, such\nas strange attractors, stable limit cycles, and so on.\nThe Feigenbaum trees for different values of $N$ resemble \neach other, but the precise \nvalues of $\\nu$ corresponding the various bifurcations \ndepend on the order of truncation $N$. \n\nBased on the observed numerical similarity\nbetween the Feigenbaum trees for $N=16$ and $N=17$ (cf. \\reffig{feig16}),\n we choose $N=16$ as a reasonable cutoff \nand will use only this truncation throughout the remainder of this \npaper. We will\nexamine two values of the damping parameter: $\\nu=0.029910$, \nfor which\nthe system is chaotic, and $\\nu=0.029924$, for which the system has a\nstable period-3 cycle.\nIn our numerical work we use both the pseudospectral\\cite{Laurette} and the \n$4$-th order variable-step Runge-Kutta integration routines\\cite{NAG};\ntheir results are in satisfactory agreement. As will be seen below, the\ngood control of symbolic dynamics guarantees that we do not miss\nany short periodic orbits generated by the bifurcation sequence indicated\nby the Feigenbaum tree of \\reffig{feig16}. However, even though\nwe are fairly sure that for this parameter value we have all\nshort periodic orbits, the possibility that\nother sets of periodic solutions exist somewhere else in the\nphase space has not been excluded.\n\n\n\\begin{figure}\n\\centerline{\\epsfig{file=solution123.ps,width=8cm}\n\t \\epsfig{file=solution124.ps,width=8cm}}\n\\caption[]{Projections of a typical 16-dimensional trajectory onto \n\t\tdifferent 3-dimensional subspaces, coordinates \n\t\t(a) $\\{a_1, a_2, a_3\\}$,\n\t\t(b) $\\{a_1, a_2, a_4\\}$. $N=16$ Fourier modes truncation with\n\t\t $\\nu=0.029910$.}\n\\label{plot123}\n\\end{figure}\n\n\nThe problem with such high dimensional truncations of (\\ref{expan})\nis that the dynamics is difficult to visualize. We can \nexamine its projections onto any three axes\n$a_i,a_j,a_k$, as in \\reffig{plot123}\nor, alternatively, study \na return map for a given coordinate\n$a_k \\rightarrow a_k' = P_k(a_2, \\ldots, a_{N})$\nas the one plotted in \\reffig{returnmap}.\nThe full return map is $(N-1)$-dimensional\n${\\bf a} \\rightarrow {\\bf P}(a_2, \\dots, a_{N})={\\bf a}'$\nand single-valued, and for the values of $\\nu$ used here\nthe attractor is essentially 1-dimensional,\nbut its projection into the $\\{a_k,P_k(a_2,\\ldots,a_{N})\\}$ plane \ncan be multi-valued and self-intersecting.\nOne can imagine a situation where no\n``good'' projection is possible,\nthat is, any projection onto any two-dimensional\nplane is a multiple-valued function.\nThe question is how to treat such a map?\n \n\n\\begin{figure}\n\\centerline{\\epsfig{file=return6.ps,width=8cm}}\n\\caption[]{The attractor of the system \\refeq{expan}, plotted as the $a_6$\ncomponent of the $a_1=0$ \nPoincar\\'e section return map, 10,000 Poincar\\'e section returns of\na typical trajectory. 
Indicated are the periodic points $\\overline{0}$, \n$\\overline{1}$ and $\\overline{01}$; as this is an arbitrary projection of\nthe invariant set, they exhibit no good spatial ordering.\n$N=16$ Fourier modes truncation with $\\nu=0.029910$. \n}\n\\label{returnmap}\n\\end{figure}\n\n\\section{One-dimensional visualization of the dynamics} \n\nWe now describe an \napproach which simplifies matters a lot by reducing the \nmap to an approximate one-dimensional map. The multiple-valuedness in \n\\reffig{returnmap} arises from the fact that the return map \nis a 2-dimensional\nprojection of a convoluted \n1-dimensional curve embedded into a high-dimensional space.\nWe shall show that it is possible to find an {\\em intrinsic} \nparametrization $s$ along the unstable manifold, \nsuch that the map $s \\rightarrow f(s)$ induced by the\nfull $d$-dimensional flow is approximately {\\em $1$-dimensional}. \nStrictly speaking, the attractor on \\reffig{returnmap} has a certain \nthickness transverse to it, but the contraction in the transverse\ndirections is so strong that the invariant set is effectively\n$1$-dimensional. \n\nSuppose we already have determined some of the shorter cycles for our\nsystem, i.e. \nthe fixed points of the Poincar\\'e map and its iterates. This is \naccomplished relatively easily \nby checking a trajectory of a random initial point for close returns\nand then using these as initial guesses for a cycle search algorithm.\nWe now assume that the invariant set can be approximated by a curve\npassing close to all periodic points,\nand determine the order of periodic points along such curve.\nThis is done in the following way: there exists a fixed point which\nis not connected to the attractor (the point \n$\\overline{0}$ in \\reffig{returnmap}) - we\nchoose this fixed point \nas the starting point and assign it number $1$. \nPoint number $2$ is the periodic point in the sample\nwhich is closest (in the full space) \nto this fixed point, and the $n$-th point is determined as the point \nwhich has the minimum distance from the point number $n-1$ among \nall the periodic points which have not yet been enumerated. \nProceeding this way, we order all the periodic points that we have found\nso far.\n\nSince all periodic points belong to cycles, \ntheir images are known and are simply the successive periodic points\nalong the cycle. We use this fact to recursively construct\na $1$-dimensional mapping $s_i \\rightarrow f(s_i)$. We approximate\nparametrization length $s$ along the invariant set \nby computing the Euclidean inter-distances between the\nsuccessive periodic points in the full dynamical\nspace,\n$s_1=0, s_2=\\| {\\bf a}_2-{\\bf a}_1 \\|, s_i-s_{i-1}=\\| \n{\\bf a}_i-{\\bf a}_{i-1} \\| $. \nThe $i$-th cycle point $s_i$ is mapped onto its image\n$s_{\\sigma{i}} = f(s_i)$, \nwhere $\\sigma i$ denotes the label of the next periodic point \nin the cycle.\nWe can now find longer \nperiodic orbits of the 1-dimensional map $f$ by standard methods such\nas inverse iteration, and guess the location of \nthe corresponding points in the full $N$-dimensional\nspace by interpolating between the nearest known periodic points.\nThese will not be exact periodic orbits of the full system,\nbut are very useful as good starting guesses in a search for the exact\nperiodic orbits. Iteratively, more and more periodic orbits can be computed. 
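\n\nThe ordering just described is straightforward to implement. The sketch below (ours, for illustration only) takes the periodic points found so far, together with the index of each point's image under the Poincar\\'e map, and returns the pairs $(s_i,f(s_i))$:\n\\begin{verbatim}\nimport numpy as np\n\ndef one_dim_map(points, image):\n    # points[i] : periodic point in the full N-dimensional space\n    # image[i]  : index of its image under the Poincare map\n    # points[0] : the fixed point chosen as the starting point\n    n = len(points)\n    order, left = [0], set(range(1, n))\n    while left:\n        last = points[order[-1]]\n        nxt = min(left, key=lambda j: np.linalg.norm(points[j] - last))\n        order.append(nxt)\n        left.remove(nxt)\n    # cumulative Euclidean inter-distances give the parametrization s\n    s = np.zeros(n)\n    for i in range(1, n):\n        s[i] = s[i-1] + np.linalg.norm(points[order[i]] - points[order[i-1]])\n    pos = {idx: s[i] for i, idx in enumerate(order)}\n    s_i  = np.array([pos[i] for i in range(n)])\n    f_si = np.array([pos[image[i]] for i in range(n)])\n    return s_i, f_si\n\\end{verbatim}\nPlotting $f(s_i)$ against $s_i$ yields the approximate one-dimensional return map used in what follows.\n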
\nWhile it only pays to refine the 1-dimensional parametrization \nuntil the density\nof the periodic points become so high that the width of the attractor\nbecomes noticeable, the 1-dimensional map continues to provide good\ninitial guesses to longer periodic orbits.\nMore sophisticated methods are needed only if \nhigh accuracy around the folding region of $f(s)$ is required\nin order to distinguish between long cycles.\n\n\\begin{figure}\n\\centerline{\\epsfig{file=unfolded.ps,width=8cm}}\n\\caption[]{The return map $s_{n+1} = f(s_n)$ constructed\nfrom the images of periodic points. The diamonds were obtained by \nusing $34$ periodic points, the dots were obtained by using $240$ periodic \npoints. \nWe have indicated the periodic points $\\overline{0}$, \n$\\overline{1}$ and $\\overline{01}$. \nNote that the transverse fractal structure of the map shows when \nthe number of points is increased. \n$N=16$ Fourier modes truncation with $\\nu=0.029910$. }\n\\label{unfolded}\n\\end{figure}\nFor the values of $\\nu$ we are working with, the attractor consists \nof four disjoint sets, the fundamental domain ${\\cal A}$ and \nits images under the maps $S$ and $\\Theta$. \nIn this case the approximate return map \n$s \\rightarrow f(s)$ is unimodal.\nThe corresponding map on the symmetric \npart of the attractor, $S \\Theta {\\cal A} $, is likewise unimodal,\nand turned $180$ degrees around the origin. For the values of $\\nu$ we\nwork with the\ntwo maps do not interact and their domains are separate. \nHowever, if the value of the damping parameter $\\nu$ is decreased\nsufficiently, the domains of the\nmaps join and together they form a connected invariant set\ndescribed by a bimodal map\\cite{Glbtsky94}.\nThis joining of the fundamental domain ${\\cal A}$ and \nits symmetric image $\\Theta S {\\cal A} $ is visible \nin \\reffig{feig16} at\n$\\nu \\simeq 0.0299$, where the range of the $a_6$ coordinate \nincreases discontinuously.\n\n\\begin{figure}\n\\centerline{\\epsfig{file=timeceiling.ps,width=8cm}}\n\\caption[]{The return time $T(s)$ as a function of the parameter $s$,\nevaluated on the periodic points, as in \\reffig{unfolded}, with\nthe diamonds obtained by \n$34$ periodic points and the dots by $240$ periodic points. \nThe fine structure is due to the fractal structure of the attractor.\n}\n\\label{rettime}\n\\end{figure}\n\nWe use the unimodal map $s \\rightarrow f(s)$ to construct binary symbolic \ndynamics for the system in the usual way: assign the symbol '0' to points\nto the left of the maximum, '1' to the points to the right. \nIn the period-3 window with the stable cycle $\\overline{001}$, \nthe pruning rules are very easy: except for the stable $\\overline{001}$ cycle\nand the $\\overline{0}$ fixed point (both disjoint from the invariant set) \ntwo 0's in a row are forbidden. \nIn this case it is convenient to redefine the alphabet by\ndenoting the symbol pair $01$ by $a$ and the symbol $1$ by $b$.\nThis renders the symbolic dynamics of the points on the repeller\ncomplete binary: all sequences of\nthe letters $a$ and $b$ are admissible.\n\nA flow in $N$ dimensions \ncan be reduced to an $(N-1)$-dimensional \nreturn map by suspension on a Poincar\\'e section\nprovided that the Poincar\\'e return map is \nsupplemented by a ``time ceiling function''\\cite{bowen} which accounts\nfor a variation in the section return times. 
\nHence we also determine the return time $T(s_i)$ \nfor each periodic point $i$, and use those to construct recursively\nthe periodic orbit approximations to the time ceiling function,\n\\reffig{rettime}.\nThe mean Poincar\\'e section return time \nis of order $\\overline{T} \\approx .88$. \n\n\n\\subsection{Numerical results}\n\nWe have found all cycles \nup to topological length 10 (the redefined topological length in the case of\nthe period-3 window), 92 cycles in the chaotic regime and 228 \nin the period-3 window,\nby using the 1-dimensional parametrization $f(s)$ to find initial guesses for\nperiodic points of the full $N=16$ Fourier modes truncation \nand then determining the cycles by\na multi-shooting Newton routine. It is worth\nnoting that the effectiveness of using the \n$1$-dimensional $f(s)$ approximation to the dynamics\nto determine initial guesses is\nsuch that for a typical cycle it takes only 2-3 Newton iterations to find\nthe cycle with an accuracy of $10^{-10}$.\n\n\\begin{table}\n\\caption[]{All cycles up to topological length\n5 for the $N=16$ Fourier modes truncation of the Kuramoto-Sivashinsky equation \n(\\ref{expan}),\ndamping parameter \n$ \\nu =0.029910$ (chaotic attractor) and $\\nu=0.029924$ (period-3\nwindow), their itineraries, periods, \nthe first four stability eigenvalues. For the chaotic attractor\npruning shows up at the \ntopological length $4$; $\\overline{0001}$ \nand $\\overline{0011}$ cycles are pruned. \nThe deviation from unity of $\\Lambda_2$, the eigenvalue along the flow, \nis an indication of the accuracy of the numerical integration. \nFor the period-3 window we also give the itineraries in the redefined alphabet\nwhere $a=1$ and $b=10$.}\n\\begin{indented}\n{\\small \n\\lineup\n\\item[]\\begin{tabular}{@{}lllllll}\n\\br\n$p$ & & $T_p$ & $\\Lambda_1$ & $\\Lambda_2-1$ & $\\Lambda_3$ & $\\Lambda_4$ \\\\ \\mr\n\\multicolumn{7}{l}{Chaotic, $\\nu=0.029910$} \\\\ \\mr\n0 & & 0.897653 & \\03.298183 & 5$\\cdot10^{-12}$ \n& \\-2.793085$\\cdot 10^{-3}$ & \\-2.793085$\\cdot 10^{-3}$ \\\\\n1 & & 0.870729 & \\0\\-2.014326 & \\-5$\\cdot10^{-12}$ \n & 6.579608$\\cdot 10^{-3}$ & \\-3.653655$\\cdot 10^{-4}$ \\\\\n10 & & 1.751810 & \\0\\-3.801854 & 8$\\cdot10^{-12}$ &\n \\-3.892045$\\cdot 10^{-5}$ & 2.576621$\\cdot 10^{-7}$ \\\\\n100 & & 2.639954 &\\0\\-4.852486 & 1$\\cdot10^{-11}$\n & 3.044730$\\cdot 10^{-7}$ &\\-3.297996$\\cdot 10^{-10}$ \\\\\n110 & & 2.632544 & \\06.062332 & 2$\\cdot10^{-11}$ &\n \\-2.721273$\\cdot 10^{-7}$ & \\-1.961928$\\cdot 10^{-10}$ \\\\ \n1000 & & - & - & - & - & - \\\\\n1100 & & - & - & - & - & - \\\\\n1110 & & 3.497622 & \\-14.76756 & 2$\\cdot10^{-11}$ &\n \\-1.629532$\\cdot 10^{-9}$ & 6.041192$\\cdot 10^{-14}$ \\\\\n10100 & & 4.393973 & 19.64397 & 2$\\cdot10^{-11}$ & \n\\-1.083266$\\cdot 10^{-11}$ & 3.796396$\\cdot 10^{-15}$ \\\\\n11100 & & 4.391976 & \\-18.93979 & 2$\\cdot10^{-11}$ & \n 1.162713$\\cdot 10^{-11}$ &\\-1.247149$\\cdot 10^{-14}$ \\\\\n11010 & & 4.380100 & \\-26.11626 & 2$\\cdot10^{-11}$ & \n 1.005397$\\cdot 10^{-11}$ & 8.161650$\\cdot 10^{-15}$ \\\\\n11110 & & 4.370895 & 28.53133 & 2$\\cdot10^{-11}$ & \n1.706568$\\cdot 10^{-11}$ & 1.706568$\\cdot 10^{-14}$ \\\\ \\mr\n\\multicolumn{7}{l}{Period-3 window, $\\nu=0.029924$} \\\\ \\mr\n 0 & & 0.897809 & \\03.185997 & 7$\\cdot10^{-13}$ & \n\\-2.772435$\\cdot10^{-3}$ & \\-2.772435$\\cdot10^{-3}$ \\\\ \n 1 & $a$ & 0.871737 & \\0\\-1.914257 & 5$\\cdot10^{-13}$ &\n 6.913449$\\cdot10^{-3}$ & \\-3.676167$\\cdot10^{-4}$ \\\\ \n 10 & $b$ & 1.752821 & \\0\\-3.250080 & 
1$\\cdot10^{-12}$ &\n\\-4.563478$\\cdot10^{-5}$ & 2.468647$\\cdot10^{-7}$ \\\\ \n 100 & & 2.638794 & \\0\\-0.315134 & \\-4$\\cdot10^{-13}$ &\n 4.821809$\\cdot10^{-6}$ & \\-2.576341$\\cdot10^{-10}$ \\\\ \n 110 & $ab$ & 2.636903 & \\02.263744 & 3$\\cdot10^{-12}$ &\n\\-6.923648$\\cdot10^{-7}$ & \\-2.251226$\\cdot10^{-10}$ \\\\ \n 1110 & $aab$ & 3.500743 & \\-10.87103 & 2$\\cdot10^{-12}$ &\n\\-2.198314$\\cdot10^{-9}$ & 3.302367$\\cdot10^{-14}$ \\\\ \n 11010 & $abb$ & 4.382927 & \\-15.84102 & 2$\\cdot10^{-12}$ &\n 1.656690$\\cdot10^{-11}$ & 1.388232$\\cdot10^{-14}$ \\\\ \n 11110 & $aaab$ & 4.375712 & 18.52766 & 3$\\cdot10^{-12}$ &\n\\-1.604898$\\cdot10^{-11}$ & 2.831886$\\cdot10^{-14}$ \\\\ \\br\n\\end{tabular} \n}\n\\end{indented}\n\\label{t_orbits}\n\\end{table}\n\n\\begin{figure}\n\\centerline{\\epsfig{file=eigenvalues.ps,width=8cm}}\n\\caption[]{Lyapunov exponents $\\lambda_k$ versus $k$ for the \nperiodic orbit $\\overline{1}$ compared with the stability eigenvalues \nof the $u(x,t)=0$ stationary solution $k^2- \\nu k^4$. \n$\\lambda_k$ for $k \\geq 8$ fall below the numerical accuracy of integration \nand are not meaningful. \n$N=16$ Fourier modes, $\\nu=0.029924$, chaotic regime.\n}\n\\label{eigenvalues}\n\\end{figure}\n\nIn \\reftab{t_orbits} we list the periodic orbits \nto topological length 5 found by our method.\nThe value of $\\Lambda_2$ serves as an indication of the \naccuracy of our numerics, as $\\Lambda_2$ corresponds to the marginal\neigenvalue along the periodic orbit, strictly equal to $1$.\nAll cycles seem to have \nreal eigenvalues (to within the numerical accuracy) except for the \n$\\overline{0}$-cycle \nwhich has a pair of complex eigenvalues, $\\Lambda_3$ and $\\Lambda_4$.\nWe therefore do not list the corresponding imaginary parts of the eigenvalues. \nTo illustrate the rapid contraction in the nonleading eigendirections\nwe plot all the eigenvalues of the \n$\\overline{1}$-cycle in \\reffig{eigenvalues}. \nAs the length of the orbit increases, the magnitude of\ncontracting eigenvalues falls quickly \nbellow the attainable numerical numerical \naccuracy $\\approx 10^{-16}$ and our numerical results for \n$\\Lambda_k$ are not meaningful for $ k \\geq 8$.\n\nHaving determined the periodic solutions $p$ in the Fourier modes space, \nwe now go back to the configuration space and plot the corresponding\nspatiotemporally periodic solutions $u_p(x,t)$: they are the\nrepertoire of the recurrent spatiotemporal patterns that Hopf wanted to\nsee in turbulent dynamics.\nDifferent spatiotemporally periodic solutions are qualitatively\nvery much alike but still different, as a closer inspection reveals. \nIn \\reffig{orbit0fig} we plot \n$u_0(x,t)$ corresponding to the Fourier space $\\overline{0}$-cycle. \nOther solutions, plotted in the configuration space, exhibit the same\noverall gross structure. For this reason we find it more informative\nto plot the difference $u_0(x,t'T_0)-u_p(x,t''T_p\/n_p)$\nrather than $u_p(x,t)$ itself. \nHere $p$ labels a given prime (non-repeating) cycle, \n$n_p$ is the topological cycle length, $T_p$ its period,\nand the time is rescaled to make this difference periodic in \ntime: $t'=t \/T_0$ and $t''=n_p t\/T_p$, so that $t''$ ranges from $0$ to $n_p$. 
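\n\nThe passage from the Fourier modes back to the configuration space is elementary: with $b_k=\\i a_k$ and $a_{-k}=-a_k$, the series (\\ref{fseries}) collapses to a sine series, $u(x,t)=-2\\sum_{k\\geq 1} a_k(t)\\sin kx$, so a periodic orbit stored as a time sequence of mode vectors is converted to $u_p(x,t)$ by a single matrix multiplication. A minimal sketch (ours):\n\\begin{verbatim}\nimport numpy as np\n\ndef u_from_modes(a_t, x):\n    # a_t : array of shape (n_times, N) holding a_k(t), k = 1..N\n    # x   : spatial grid points in [0, 2*pi]\n    k = np.arange(1, a_t.shape[-1] + 1)\n    return -2.0 * a_t @ np.sin(np.outer(k, x))\n\\end{verbatim}\n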
\n$u_0(x,t'T_0)-u_1(x,t''T_1)$ is given in \\reffig{diff1fig}, and\n$u_0(x,t'T_0)-u_{01}(x,t''T_{01}\/2)$ in \\reffig{diff01fig}.\n\n\\begin{figure}\n\\centerline{\\epsfig{file=orbit0.ps,width=8cm}}\n\\caption[]{Spatiotemporally periodic solution $u_0(x,t)$.\nWe have divided $x$ by $\\pi$ and plotted only the $x>0$ part, since we work in\nthe subspace of the odd solutions, $u(x,t)=-u(-x,t)$. \n$N=16$ Fourier modes truncation with $\\nu=0.029910$.\n\t }\n\\label{orbit0fig}\n\\end{figure}\n\n\\begin{figure}\n\\centerline{\\epsfig{file=diff1.ps,width=8cm}}\n\\caption[]{The difference between the two shortest period\n\t spatiotemporally periodic solutions \n\t $u_0(x,t'T_0)$ and $u_1(x,t''T_1)$.\n\t }\n\\label{diff1fig}\n\\end{figure}\n\n\\begin{figure}\n\\centerline{\\epsfig{file=diff01.ps,width=8cm}}\n\\caption[]{The difference between solution $u_0(x,t'T_0)$ repeated\n\t twice and the $n_p=2$ period spatiotemporally periodic solution\n $u_{01}(x,t''T_{01}\/2)$.\n }\n\\label{diff01fig}\n\\end{figure}\n\n\\section{Global averaging: periodic orbits in action} \n\nThe above investigation of the Kuramoto-Sivashinsky\nsystem demonstrates that it is possible to\nconstruct recursively and exhaustively\na hierarchy of spatiotemporally periodic unstable solutions\nof a spatially extended nonlinear system.\n\nNow we turn to the central issue of this paper; qualitatively, these\nsolutions are indeed an implementation of Hopf's program, but how\nis this information to be used quantitatively? This is precisely\nwhat the periodic orbit theory is about;\nit offers machinery that puts together\nthe topological and the quantitative information about individual\nsolutions, such as their periods and stabilities, into predictions \nabout measurable global averages, such as the Lyapunov exponents,\ncorrelation functions, and so on. The proper tool for\ncomputing such global characterizations of the dynamics are the trace\nand determinant formulas of the periodic orbit theory.\n\nWe shall briefly summarize the aspects of the\nperiodic orbit theory relevant to the present application;\nfor a complete exposition of the theory \nthe reader is referred to \\refref{cycl_book}. The key idea is to\nreplace a time average $\\Phi^t (x)\/t$\nof an ``observable\" $\\phi$ measured along\na dynamical trajectory $x(t) = f^t(x)$\n\\[\n\\Phi^t (x) = \\int_0^t \\d\\tau \\, \\phi(x(\\tau))\n\\]\nby the spatial average $\\left<\\e^{\\beta \\Phi^t}\\right>$\nof the quantity $\\e^{\\beta \\Phi^t (x)}$. Here $\\beta$ is\na dummy variable, used to recover the desired expectation value\n$\\left<\\phi\\right> = \\lim_{t\\to\\infty} \\left<\\Phi^t\/t \\right>$ \nby taking $\\frac{\\d}{\\d\\beta}$ derivatives\nof $\\left<\\e^{\\beta \\Phi^t}\\right>$ and then setting $\\beta=0$.\nFor large $t$ the average $\\left<\\e^{\\beta \\Phi^t}\\right>$ behaves as\nthe trace \n\\begin{equation}\n{\\rm tr}\\, {\\cal L}^{t}\n = \\sum_{p} \\period{p} \\sum_{r=1}^\\infty\n { \\e^{r \\beta \\Phi_p} \\over \\oneMinJ{r} }\n \\prpgtr{t-r \\period{p}}\n\\ee{tr-L1}\nof the evolution operator\n\\[\n{\\cal L}^t (x,y) = \\delta(y-f^{t}(x))\\e^{\\beta \\Phi^t(x)} .\n\\]\nand is dominated by its largest eigenvalue $\\e^{ts(\\beta)}$.\n\nThe trace formula \\refeq{tr-L1} has an intuitive geometrical interpretation.\nThe sums in \\refeq{tr-L1}\nare over prime periodic orbits $p$ \nand their repeats $r$, $T_p$ are their periods,\nand ${\\bf J_p}$ are their stability matrices. 
\nPrime cycles partition the dynamical space into closed tubes of\nlength $\\period{p}$ and thickness\n$\\left|\\det({\\bf 1}-{\\bf J}_p)\\right|^{-1}$, \nand the trace picks up a periodic orbit contribution only when the\ntime $t$ equals a prime period or its repeat, hence the time\ndelta function $\\prpgtr{t- r\\period{p}}$. Finally,\n$\\e^{\\beta \\Phi_p }$\nis the mean value of\n$ \\e^{\\beta \\Phi^t(x)}$\nevaluated on this part of dynamical space, so the trace formula is \nthe average of $\\left< \\e^{\\beta \\Phi^t}\\right>$ expressed as a partition\nof the space of solutions into a repertoire of spatiotemporally\nperiodic solutions, each weighted by its stability, i.e. likelihood of its\noccurrence in a long time evolution of the system.\n\nIn applications of the periodic orbit theory \nthe related Fredholm determinant\n\\begin{equation}\nF(\\beta,s)=\n\\exp \\left ( \n{ \\displaystyle - \\sum_p \\sum_{r=1}^{\\infty} z ^{n_p r} \n\\frac{\\displaystyle \\e^{r( \\beta \\Phi_p -s T_p) } }\n{ \\displaystyle \nr \\left | \\det \\left ( {\\bf 1}- {\\bf J}_p^r \\right ) \\right | } } \\right )\n\\label{fredholm} \n\\end{equation}\nhas better convergence as a function of\nthe maximal cycle length truncation, so that is the function whose\nleading zero $F(\\beta,s(\\beta))=0$ we determine here in order\nto evaluate the leading eigenvalue $s(\\beta)$. \n\nThe dummy variable $z$ in \\refeq{fredholm} keeps track of \nthe topological lengths $n_p$ (number of the Poincar\\'e section crossings),\nand is used to expand $F$ as a series in\n$z$. If we know all cycles up to topological length $l$ we truncate $F$\nto $l$-th order polynomial:\n\\begin{equation}\nF(\\beta,s)=1-\\sum_{1}^{l} c_k z^k + (\\mbox{remainder})\n\\label{Fred_exp}\n\\end{equation}\nand set $z=1$. The general theory\\cite{Ruelle,Rugh92,CRR93} then \nguarantees that\nfor a hyperbolic dynamical system the coefficients $c_k$ fall\noff in magnitude exponentially or faster with increasing $k$.\nWe now calculate the leading eigenvalue $s(\\beta)$ \nby determining the smallest zero of $F(\\beta,s)$,\nand check the convergence of this estimate by studying it as a \nfunction of the maximal cycle length truncation $l$. \nIf the flow conserves all trajectories, the leading eigenvalue\nmust satisfy $s(0)=0$; if the invariant set is repelling, the\nleading eigenvalue yields $\\gamma= -s(0)$, the escape rate from the repeller.\nOnce the leading eigenvalue is determined we can\ncalculate the desired average $\\left<\\phi \\right>$ using formula\\cite{AACI}:\n\\begin{equation}\n\\left<\\phi \\right> =\n\\left. -\\frac{\\partial s}{\\partial \\beta}\\right|_{\\beta=0} =\n\\left. -\\frac{\\partial F}{\\partial \\beta} \n \\left\/\n \\frac{ \\partial F}{\\partial s }\n \\right. \\right|_{\\beta=0 \\atop s=s(0)}.\n\\label{cyc_aver}\n\\end{equation}\nFor example, if we take as our ``observable'' $\\log |\\Lambda_{1}^t(x)|$,\nthe largest eigenvalue of the linearized stability of the flow,\n$\\Phi_p$ will be\n$\\log |\\Lambda_{1,p}|$ where $\\Lambda_{1,p}$ is the largest eigenvalue\nof stability matrix of the cycle $p$, and the above \nformula yields the Lyapunov exponent $\\left<\\lambda\\right>$.\n \nBoth the numerator and the denominator in \\refeq{cyc_aver} have a cycle\nexpansion analogous to \\refeq{Fred_exp} (cf. 
\\refref{cycl_book}), \nand the same periodic orbit data suffices for their evaluation.\n\nConceptually the most\nimportant lesson of the periodic orbit theory \nis that the spatiotemporally periodic\nsolutions are {\\em not} to be thought of as eigenmodes to be used as a linear\nbasis for expressing solutions of the equations of motion - as the equations\nare nonlinear, the periodic solutions are in no sense additive.\nNevertheless, the trace formulas and determinants of the periodic\norbit theory give a precise prescription for how \nto systematically explore the repertoire of admissible spatiotemporal patterns,\nand how to put them together in order to predict measurable observables. \n\n\\subsection{Numerical results}\n\nOne of the objectives of a theory of turbulence is\nto predict measurable global averages over turbulent flows, such as\nvelocity-velocity correlations and transport coefficients. While in\nprinciple the periodic orbit averaging formulas should be applicable\nto such averages, with the present parameter values\nwe are far from any strongly turbulent regime, and here we shall\nrestrict ourselves to the simplest tests of chaotic dynamics: we shall\ntest the theory by evaluating Lyapunov exponents\nand escape rates.\n\n\\begin{figure}\n\\centerline{\\epsfig{file=coeff.ps,width=8cm}}\n\\caption[]{$\\log_{10}$ of the coefficients $|c_k|$ in the cycle expansion \n\\refeq{Fred_exp} of $F(0,0)$ versus $k$ \nfor the period-$3$ window case (crosses) and the chaotic case\n(diamonds). $N=16$ Fourier modes truncation. }\n\\label{coeffval}\n\\end{figure}\n\nWe compute the periodic orbits, escape rates and Lyapunov exponents \nboth for the period-$3$ window and a chaotic regime.\nIn the case of period-$3$ window the complete symbolics dynamics\nand grammar rules\nare known and good convergence of cycle expansions is expected \nboth for the escape rate from the repeller\nand the Lyapunov exponent. Parenthetically, the stable\nperiod-3 orbit is separated from the rest of the invariant set\nby its immediate basin of attraction window, and its eigenvalues \nbear no immediate relation to the escape rate and the Lyapunov\nexponent of the repelling set.\n\nIn the case of a generic ``strange attractor'',\nthe convergence is not expected to be nearly as good, \nsince in this case there exist no finite description of the symbolic dynamics. \nFor closed systems (no escape) $\\gamma=0$ and $F(0,0)=0$. \nThe discrepancy of the value $F(0,0)$ from $0$ for a closed system \nallows us to estimate the accuracy of \nfinite cycle length approximations to the \nFredholm determinant. \n\nThe analytic properties of the Fredholm determinant are illustrated by \nthe decay rate of the coefficients $c_k$ as a function of $k$ in the expansion \n\\refeq{Fred_exp}.\nIf the complete symbolic dynamics is known and the system is hyperbolic, the \ndecay of $c_k$ should be superexponential\\cite{Rugh92}. \nThis is illustrated in \\reffig{coeffval}, where\nwe plot the coefficients $c_k$ for the $16$-dimensional \nsystem for the chaotic case and for the period-$3$ window. \nWe can clearly see the \nsuperexponential decay for the period-$3$ \nwindow case and at best exponential decay \nfor the chaotic case. \n\nOur results are presented in \\reftab{t_16_chaotic}. One \nobserves that when the symbolic dynamics is known \n(period-$3$ window), the convergence is much better than\nin the generic case, in accordance with the periodic orbit theory\nexpectations. 
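\n\nTo make the bookkeeping concrete, the evaluation of the truncated expansion (\\ref{Fred_exp}) from a list of prime cycles can be sketched as follows. The code is illustrative only and is not the program used for the numbers quoted here; the list Lam is assumed to contain the transverse stability eigenvalues of a cycle, with the marginal unit eigenvalue along the flow omitted, and the bracket handed to the root finder must straddle the leading zero:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.optimize import brentq\n\ndef F_trunc(cycles, beta, s, lmax):\n    # cycles: list of dicts with keys 'n' (topological length), 'T' (period),\n    # 'Phi' (e.g. log|Lambda_1|) and 'Lam' (transverse stability eigenvalues)\n    Q = np.zeros(lmax + 1)\n    for c in cycles:\n        r = 1\n        while c['n'] * r <= lmax:\n            det = np.prod([abs(1.0 - lam**r) for lam in c['Lam']])\n            Q[c['n'] * r] += np.exp(r*(beta*c['Phi'] - s*c['T'])) \/ (r * det)\n            r += 1\n    # expand F = exp(-Q) as a polynomial in z, order by order, then set z = 1\n    F = np.zeros(lmax + 1); F[0] = 1.0\n    for n in range(1, lmax + 1):\n        F[n] = sum(k * (-Q[k]) * F[n - k] for k in range(1, n + 1)) \/ n\n    return F.sum()\n\ndef leading_zero(cycles, beta, lmax, s_lo=-1.0, s_hi=2.0):\n    # smallest zero s(beta) of the truncated F; gamma = -s(0) is the escape rate\n    return brentq(lambda s: F_trunc(cycles, beta, s, lmax), s_lo, s_hi)\n\ndef lyapunov(cycles, lmax, db=1e-4):\n    # leading Lyapunov exponent from the slope of s(beta) at beta = 0,\n    # with Phi_p = log|Lambda_1| of each cycle; cf. the averaging formula in the text\n    return (leading_zero(cycles, db, lmax) - leading_zero(cycles, -db, lmax)) \/ (2*db)\n\\end{verbatim}\nFed with cycle data such as that of \\reftab{t_orbits} and its longer-cycle extension, a routine of this kind suffices to produce entries like those of \\reftab{t_16_chaotic}.\n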
\n\n\\begin{table}\n\\caption[]{\nThe escape rate $\\gamma$ and the leading Lyapunov exponent\nas a function of the cycle expansion truncation $n_{max}$ \nfor the $N=16$ Fourier modes truncation, chaotic regime \n$(\\nu=0.029910$) and period-3 window $(\\nu=0.029924)$. In the \nperiod-3 window the Fredholm determinant starts converging only\nfor $n_{max}>4$; for $n_{max}=4$ it has no real zero at all.\nA numerical simulation \nestimate for the Lyapunov exponent in the chaotic regime is given in the \nlast line; for this parameter value the escape rate, $\\gamma$,\nshould strictly equal zero.}\n\\begin{indented}\n{\\small \n\\lineup\n\\item[]\\begin{tabular}{lllll} \\br\n & \\multicolumn{2}{l}{chaotic} & \n \\multicolumn{2}{l}{period-3 window} \\\\ \\mr\n$n_{max}$ & $\\gamma$ & $\\lambda_1$ & $\\gamma$ & $\\lambda_1$ \\\\ \\mr\n 1 & & & 0.428143 & 0.703010 \\\\ \\hline\n 2 & 0.441948 & 0.981267 & \\-0.187882 & 0.430485 \\\\ \\hline\n 3 & 0.080117 & 0.765050 & \\-0.049325 & 0.469350 \\\\ \\hline\n 4 & 0.148583 & 0.703072 & & \\\\ \\hline\n 5 & 0.068513 & 0.727498 & 1.072468 & 0.585506 \\\\ \\hline\n 6 & 0.027724 & 0.699907 & 0.078008 & 0.547005 \\\\ \\hline\n 7 & 0.035137 & 0.693852 & 0.088132 & 0.598977 \\\\ \\hline\n 8 & 0.007104 & 0.675529 & 0.090425 & 0.631551 \\\\ \\hline\n 9 & 0.021066 & 0.673144 & 0.090101 & 0.618160 \\\\ \\hline\n10 & 0.007367 & 0.646233 & 0.090065 & 0.621271 \\\\ \\hline\nnumer. & & 0.629 & & \\\\ \\br\n\\end{tabular}\n}\n\\end{indented}\n\\label{table16_chaotic}\n\\label{t_16_chaotic}\n\\end{table}\n\n\\section{Summary}\n\nHopf's proposal for a theory of turbulence was, as we understand it, to think \nof turbulence as a sequence of near recurrences of a repertoire of unstable \nspatiotemporal patterns. Hopf's \nproposal is in its spirit very different from most ideas that animate\ncurrent turbulence research. \nIt is distinct from\nthe Landau quasiperiodic picture of turbulence as a sum of \ninfinite number of incommensurate frequencies, with dynamics taking place on a \nlarge-dimensional torus.\nIt is not the\nKolmogorov's 1941 homogeneous turbulence with no \ncoherent structures fixing the length scale, here all the action is \nin specific coherent structures.\nIt is emphatically {\\em not} universal; spatiotemporally periodic solutions \nare specific to the particular set of equations and boundary conditions.\nAnd it is {\\em not} probabilistic; everything is fixed by the deterministic\ndynamics with no probabilistic assumptions on the velocity distributions \nor external stochastic forcing. \n\nOur investigation of the Kuramoto-Sivashinsky system is a\nstep in the direction of implementing Hopf's program. \nWe have constructed \na complete and exhaustive hierarchy of spatiotemporally periodic solutions \nof spatially extended nonlinear system and applied the periodic orbit theory \nto evaluation of global averages for such system. Conceptually the most \nimportant lesson of this theory is that \nthe unstable spatiotemporally periodic\nsolutions serve to explore systematically the\nrepertoire of admissible spatiotemporal patterns, with the trace\nand determinant formulas and their cycle expansions being the proper tools for\nextraction of quantitative predictions from the periodic orbits data.\n\nWe have applied the\ntheory to a low dimensional attractor, not larger than the Lorenz's original \nstrange attractor\\cite{Lorenz}. 
As our aim was to solve the given equations \naccurately, we were forced to work with a high dimensional\nFourier modes truncations, and we succeeded in determining the\nperiodic orbits for flows of much higher dimension than in\nprevious applications of the periodic orbit theory. As something new, we \nhave developed an intrinsic parametrization of the invariant set that\nprovided the key to finding the periodic orbits. \n\nIn practice, the method \nof averaging by means of periodic orbits produced best results when \nthe complete symbolic dynamics was known. \nFor generic parameter values we cannot claim that the periodic\norbit approach is\ncomputationally superior to a direct numerical simulation. \nA program to find periodic orbits up to length 10 for one value of \nthe damping parameter $\\nu$ requires a day of CPU on a fast workstation, \nmuch longer than the time used in the direct numerical\nsimulations. \n\nThe parameter $\\nu$ values that we work with correspond to\nthe weakest nontrivial ``turbulence'', and it is an open \nquestion to what extent the approach remains implementable as the system goes \nmore turbulent. Our hope is that the unstable structures captured so far \ncan be adiabatically tracked to the ``intermediate turbulence'' regime, \nand still remain sufficiently representative of the space of admissible \npatterns to allow meaningful estimates of global averages. \nAs long as no effective coordinatization of the ``inertial\nmanifold'' exists and we rely on the spatial Fourier decomposition, the\npresent approach is \nbound to fail in the ``strong turbulence'' $\\nu \\rightarrow 0$ limit,\nwhere the dominant structures are Burgers-type shocks \nand truncations of the spatial Fourier modes \nexpansions are increasingly uncontrollable. \n\n\\ack\nWe are grateful to L. Tuckerman for patient instruction,\nE.A. Spiegel,\nG. Goren, \nR. Zeitek,\nand \nI. Procaccia\nfor inspiring conversations, P. Dahlqvist for a critical\nreading of an early version of the paper, and E. Bogomolny for the\ncatchy but all too ephemeral title for the paper.\n\n\n\\section*{References}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe connection between deep neural network and the ordinary differential equation has been studied and discussed in different recent works [1,2,3,4,5,6]. It has been shown that residual networks such as ResNet [7] and recurrent neural network decoders can be modeled as a discretization of a continuous ODE model. An ODE-based model and its relation to the residual network can be shown as follows:\n\\begin{equation} \\label{Eq1}\n\\begin{aligned}\n \\ \\ \\text L_{t_1} =\\text L_{t_0} + f(\\text L_{t_0},\\theta) \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ (ResNet)\n\\end{aligned}\n\\end{equation}\n\\begin{equation} \\label{Eq2}\n\\begin{aligned}\n\\text L_{t_1} =\\text L_{t_0} + \\int_{t_0}^{t_1} f(\\text L_{t},\\theta)\\ dt \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ (ODE)\n\\end{aligned}\n\\end{equation}\nwhere $\\text L_{t_0}, \\text L_{t_1}$ are the residual block input and output. $f$ represents the network-defined nonlinear operator which preserves the dimensionality of $\\text L_{t_0}$ and $\\theta$ represents the network weights. The defined ODE ($\\frac{d\\text L}{dt} = f(\\text L_{t},\\theta)$) is described in terms of its solution in $t=t_1$. 
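\n\nThe correspondence between Eq~\\ref{Eq1} and Eq~\\ref{Eq2} can be made concrete with a toy example. The sketch below is only an illustration (the operator $f$ is a stand-in for the network-defined operator, not the architecture used in this work) and anticipates the Euler discretization discussed next:\n\\begin{verbatim}\nimport numpy as np\n\ndef f(L, theta):\n    # any dimensionality-preserving nonlinear map; here a toy tanh layer\n    return np.tanh(theta @ L)\n\ndef residual_block(L, theta):\n    # Eq. 1: one ResNet block\n    return L + f(L, theta)\n\ndef ode_forward(L, theta, t0=0.0, t1=1.0, n_steps=100):\n    # Eq. 2 approximated by forward Euler; with n_steps = 1 and a unit step\n    # this reduces exactly to residual_block\n    h = (t1 - t0) \/ n_steps\n    for _ in range(n_steps):\n        L = L + h * f(L, theta)\n    return L\n\\end{verbatim}\n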
\nThe forward step of the ODE Euler discretization is as follows:\n\begin{equation} \label{Eq3}\n\begin{aligned}\n\text L_{t_{0}+h} =\text L_{t_0} + hf(\text L_{t_0},\theta)\n\end{aligned}\n\end{equation}\nIt can be observed that a single forward step of Eq~\ref{Eq3} is equivalent to the formulation of the residual block. Therefore, the ODE discretization model can lead to different ODE-inspired network architectures. In this paper, we present an ODE-based deep network for MRI reconstruction which extends the conventional reconstruction framework to its data-adaptive variant using the ODE-based network and provides an end-to-end reconstruction scheme. We compare the reconstruction performance of our method to reconstruction methods based on the standard UNet network [8] and a Residual network. \n\section{Method}\nThe discretized version of the MR imaging model is given by\n\begin{equation} \label{Eq4}\n\begin{aligned}\n\text d=\text E \text x + \text n.\n\end{aligned}\n\end{equation}\nwhere $\text x$ contains the samples of the unknown MR image, and $\text d$ is the undersampled k-space data. $\text E =\text {FS}$ is an encoding matrix, and $\text F$ is an undersampled Fourier operator. $\text S$ is a matrix representing the sensitivity maps of the coils, and $\text n$ is noise. Assuming that the interchannel noise covariance has been whitened, the reconstruction relies on the least-squares approach:\n\begin{equation} \label{Eq5}\n\begin{aligned}\n\hat{\text x} =\underset{\text x}{ argmin} \ \|\text d-\text E\text x\|_{2}^{2}\n\end{aligned}\n\end{equation}\nThe ODE-based reconstruction framework we used for solving Eq~\ref{Eq5} is shown in Fig~\ref{fig1}. \\\nFor a conventional neural network, we minimize the loss function ($l$) over a set of training pairs and search for the weights ($\theta$) that minimize that loss function:\n\begin{equation} \label{Eq6}\n\begin{aligned}\n\underset{\theta}{ minimize} \ \frac{1}{M} \sum\limits_{i=1}^M l(L(\theta;x_i,y_i)) + \text R(\theta)\n\end{aligned}\n\end{equation}\nwhere $(x_i,y_i)$ is the $i$-th training pair (input and ground truth), $R$ is a regularization operator and $M$ is the number of training pairs. The loss function depends implicitly on $\theta$. This optimization is usually solved through Stochastic Gradient Descent (SGD), with backpropagation used to compute the gradient of $L$ with respect to $\theta$. In our ODE-based network, besides the states $\text L$, the network weights also change with respect to time. 
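\nAs a purely illustrative sketch of the correspondence between Eq~\ref{Eq1} and Eq~\ref{Eq3} (this is not the network used in this work; the operator $f$, the dimensions and the step counts below are arbitrary choices made only for illustration), a residual block can be written as a single forward Euler step of the underlying ODE, and refining the discretization simply chains several such steps:\n\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\n\ndef f(L, theta):\n    # toy dimensionality-preserving nonlinear operator standing in for\n    # the network-defined f(L_t, theta) of Eq. (1)-(2)\n    W1, W2 = theta\n    return np.tanh(W2 @ np.tanh(W1 @ L))\n\ndef residual_block(L, theta):\n    # Eq. (1): L_{t1} = L_{t0} + f(L_{t0}, theta)\n    return L + f(L, theta)\n\ndef ode_block_euler(L, theta, n_steps=1, t0=0.0, t1=1.0):\n    # forward Euler discretization of Eq. (2); one step of size h = t1 - t0\n    # reduces to the residual block, cf. Eq. (3)\n    h = (t1 - t0) / n_steps\n    for _ in range(n_steps):\n        L = L + h * f(L, theta)\n    return L\n\nd = 8\ntheta = (rng.standard_normal((d, d)), rng.standard_normal((d, d)))\nL0 = rng.standard_normal(d)\n\nassert np.allclose(residual_block(L0, theta), ode_block_euler(L0, theta, n_steps=1))\nprint(ode_block_euler(L0, theta, n_steps=16))  # finer discretization of the same ODE\n\end{verbatim}\nIn this sketch the weights are held fixed; in the ODE-based network considered here the weights themselves evolve in time, as described next.\n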
In this case, we need to solve the following constrained optimization problem:\n\begin{equation} \label{Eq7}\n\begin{aligned}\n\underset{p,\theta}{ minimize} \ \frac{1}{M} \sum\limits_{i=1}^M l(L_{t_1};x_i,y_i) + \text R(p,\theta)\n\end{aligned}\n\end{equation}\n\begin{equation} \label{Eq8}\n\begin{aligned}\n\frac{d\text L}{dt} = f(\text L_{t},\theta_{t})\n\end{aligned}\n\end{equation}\n\begin{equation} \label{Eq9}\n\begin{aligned}\n\frac{d\theta}{dt} = w(\theta_{t},p)\n\end{aligned}\n\end{equation}\nwhere $\theta_t$ is parameterized by the learnable dynamics\n\begin{equation} \label{Eq10}\n\begin{aligned}\n\theta_{t_1} =\theta_{t_0} + \int_{t_0}^{t_1} w(\theta_{t},p)\ dt\n\end{aligned}\n\end{equation}\nwhere $w$ is a nonlinear operator responsible for the network weight dynamics and $p$ denotes the parameters of $w$.\nFollowing [5], we also augment the state space and solve the ODE flow as follows, so that the learned ODE representation is not constrained to preserve the topology of the input space:\n\begin{equation} \label{Eq11}\n\begin{aligned}\n\frac{d}{dt} \begin{bmatrix}\n \text L\\\n a\n\end{bmatrix} = f(\begin{bmatrix}\n \text L_{t}\\\n a_{t}\n\end{bmatrix},\theta_{t})\n\end{aligned}\n\end{equation}\nwhere $a_{0}=0$. We use the discretize-then-optimize method [4,6] to calculate the gradients for backpropagating through the ODE layers. Figure \ref{fig2} shows the proposed ODE-based deep network. Five residual blocks have been used in our method (N=5).\n\section{Results and Discussion}\nIn our experiments, we tested our method on our MPRAGE brain datasets. Data from ten volunteers, with a total of 750 brain images, were used as the training set. Images from fifteen different volunteers were used as the testing set. The sensitivity maps were computed from a block of size 24x24 using the ESPIRiT [9] method. Reconstruction results with an undersampling factor of 2x2 for the different approaches are shown in Fig \ref{fig3}. ResidualNet includes the same number of residual blocks as our proposed method (without ODE layers).\nTable \ref{table1} shows that our method consistently has higher Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM) values compared to the reconstructions using the other two networks. In conclusion, an ODE-based deep network for MRI reconstruction is proposed. It enables the rapid acquisition of MR images with improved image quality. 
The proposed ODE-based network can be easily adopted by unrolled optimization schemes for better MRI reconstruction accuracy.\n\n\n\\begin{figure}\n\\begin{floatrow}\n\\ffigbox{%\n \\includegraphics[scale=0.37]{fig2.jpg\n }{%\n \\caption{The reconstruction framework}%\n \\label{fig1}}\n\\capbtabbox{%\n\\begin{tabular}{rllll}\n\\hline\n\\multicolumn{3}{c} {Brain Dataset} \\\\\n\\cline{2-3} \n\\cline{4-5} \nMethod & PSNR & SSIM \\\\\n\\hline\nProposed & $54.5\\pm1.37$ & $0.99\\pm0.0063$ \\\\\nUNet & $52.4\\pm1.54$ & $0.98\\pm0.0075$ \\\\\nResidualNet & $50.1\\pm1.65$ & $0.978\\pm0.0097$ \\\\\n\\hline\n\\end{tabular}\n}\n{\n\\caption{PSNR and SSIM variations on Brain dataset}%\n\\label{table1}\n}\n\\end{floatrow}\n\\end{figure}\n\n\\begin{figure}\n \\centering\n \\centerline{\\includegraphics[scale=0.5]{fig1.jpg}}\n \\caption{The proposed ODE-based deep network.}\n \\label{fig2}\n\\end{figure}\n\\begin{figure}\n \\centering\n \\centerline{\\includegraphics[scale=0.15]{fig3.jpg}}\n \\caption{First row (left to right): Reference image using fully sampled data, ResidualNet reconstruction, UNet reconstruction, and our reconstruction all with undersampling factor of 2x2. Second row includes error maps correspond to each reconstruction results for comparison.}\n \\label{fig3}\n\\end{figure}\n\\newpage\n\n\\subsubsection*{Acknowledgments}\n\nThis research was supported in part by NIH grants R01 NS079788, R01 EB019483, R01 DK100404, R44 MH086984, IDDRC U54 HD090255, and by a research grant from the Boston Children's Hospital Translational Research Program.\n\n\n\\section*{References}\n\n[1] Haber, E. and Ruthotto, L., \"Stable architectures for deep neural networks,\" Inverse Problems, 34(1), 014004 (2017).\n\n[2] Ruthotto, L. and Haber, E., \"Deep neural networks motivated by partial differential equations,\" arXiv preprint arXiv:1804.04272 (2018).\n\n[3] Chen, T. Q., Rubanova, Y., Bettencourt, J., and Duvenaud, D. K., \"Neural ordinary differential equations,\" in [Advances in neural information processing systems], 6571\u20136583 (2018).\n\n[4] Gholami, A., Keutzer, K., and Biros, G., \"Anode: Unconditionally accurate memory-efficient gradients for neural odes,\" arXiv preprint arXiv:1902.10298 (2019).\n\n[5] Dupont, E., Doucet, A., and Teh, Y. W., \"Augmented neural odes,\" arXiv preprint arXiv:1904.01681 (2019).\n\n[6] Zhang, T., Yao, Z., Gholami, A., Keutzer, K., Gonzalez, J., Biros, G., and Mahoney, M., \"Anodev2: A coupled neural ode evolution framework,\" arXiv preprint arXiv:1906.04596 (2019).\n\n[7] He, K., Zhang, X., Ren, S., and Sun, J., \"Deep residual learning for image recognition,\" in [Proceedings of the IEEE conference on computer vision and pattern recognition], 770\u2013778 (2016).\n\n[8] Ronneberger, O., Fischer, P., and Brox, T., \"U-net: Convolutional networks for biomedical image segmentation,\" in [International Conference on Medical image computing and computer-assisted intervention], 234\u2013241, Springer (2015).\n\n[9] M. Uecker, P. Lai, M. J. Murphy, P. Virtue, M. Elad, J. M.Pauly, S. S. Vasanawala, and M. Lustig, \"Espirit an eigenvalue approach to autocalibrating parallel mri: where sense meets grappa,\" Magnetic resonance in medicine, 71(3), 990\u20131001 (2014).\n\n\\end{document}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe duration of particle collisions is an interesting and important aspect of\ngeneral scattering theory which is in a sense complementary to the energy\nrepresentation ordinarily used. 
A collision is characterized in this case by\nthe time delay of particles in the region of interaction. Wigner \\cite{W-55}\nwas the first who considered the time on which a monochromatic particle with\ngiven\nangular momentum is delayed during its elastic scattering. He established\nthe connection of this time delay to the energy derivative of the scattering\nphase shift. The sharper the energy dependence of the phase shift is the\nlonger is the time delay.\n\nLater on Smith \\cite{S-60} extended the time delay concept on many channel\nproblems introducing the time delay matrix\n\\begin{equation}\\label{q-matr}\n Q^{ab}(E) = -i\\hbar\\left\\{\\frac{d}{d\\,\\varepsilon}\\,\n \\sum_{c}S^{ac}(E\\!+\\!\\frac{\\varepsilon}{2})\n S^{\\ast\\,cb}(E\\!-\\!\\frac{\\varepsilon}{2})\n \\right\\}_{\\varepsilon=0}\\,\\,,\n\\end{equation}\nin the channel space. Here $S$ is the scattering matrix and the summation\nindex $c$ runs over all the $M$ open scattering channels. The matrix $Q$ is\nhermitian; its diagonal element $Q^{cc}$ coincides with the mean duration of\ncollision (time delay) in the $c$-th entrance channel. Generally speaking,\nthe delays are different in different channels $c$. Taking the trace of the\nSmith matrix, one arrives at the simple weighted-mean characteristic\n\\begin{equation}\\label{q1}\nQ(E) = \\frac{1}{M}\\,\\sum_{c} Q^{cc}\n = -\\frac{i}{M}\\,\\frac{d}{dE}\\,\\ln \\det S(E)\n\\end{equation}\nof the duration of collisions. Eq. (\\ref{q1}) is just the many-channel\nversion of the well-known simple Wigner formula. (Here and below we set\n$\\hbar=1$.)\n\nThe time delay turns out to be an especially pertinent concept for the chaotic\nresonance scattering encountered in atomic, molecular and nuclear physics\n\\cite{L-77, LW-91}, as well as in the scattering of electromagnetic\nmicrowaves \\cite{Sr-91, GHLLRRSW-92, SS-92} in resonance billiard-like\ncavities. The quantity $Q(E)$, being closely connected to the complex energy\nspectrum of resonance states, shows in its energy dependence strong\nfluctuations around a smooth regular variation. The two kinds of variation on\ndifferent energy scales are naturally decomposed\n\\begin{equation}\\label{atd}\nQ(E)=\\langle Q(E)\\rangle+Q_{fl}(E)\\,\\,,\n\\end{equation}\nwith an energy or ensemble averaging. By this, the slow energy dependence of\ntime delay is revealed whereas the two-point autocorrelation function\n\\begin{equation}\\label{dcf}\nC_Q(E,\\varepsilon) =\n\\langle Q_{fl}(E\\!+\\!\\frac{\\varepsilon}{2})\nQ_{fl}(E\\!-\\!\\frac{\\varepsilon}{2})\\rangle=\n\\langle Q(E\\!+\\!\\frac{\\varepsilon}{2})\nQ(E\\!-\\!\\frac{\\varepsilon}{2})\\rangle -\n\\langle Q(E\\!+\\!\\frac{\\varepsilon}{2})\\rangle\n\\langle Q(E\\!-\\!\\frac{\\varepsilon}{2})\\rangle\n\\end{equation}\nis used to characterize the time delay fluctuations.\n\nTo the best of our knowledge, the first consideration of these fluctuations\nhas been made numerically as well as analytically in \\cite{WJ-89} and\n\\cite{SW-92} in the framework of rather peculiar model of resonance elastic\nquantum scattering on a leaky surface of constant negative curvature. The\nnoteworthy feature of this model is that the poles of the scattering\namplitude turn out to correspond to zeros of the famous Riemann\n$\\zeta$-function. The real parts of the poles are therefore supposed\n\\cite{Mo-73} to be chaotically distributed similar to the eigenvalues of the\nGaussian Unitary Ensemble whereas all their imaginary parts (the widths of\nresonances) are the same. 
The latter very specific property partly deprives\nthe model of its interest since actual single-channel widths are known to\nexhibit quite large fluctuations \\cite{Po-65}.\n\nThe width fluctuations are suppressed when many channels are open. In this\ncase semiclassical approximation can be as a rule expected to be valid. The\nsemiclassical analysis of the time delay in terms of closed periodic orbits\nis given in \\cite{Ec-93}. It is in particular emphasized there that only the\ntail of the correlation function (\\ref{dcf}) corresponding to the very large\nvalues of $\\varepsilon$ can immediately be related to the (short) periodic\norbits. Quite opposite, the central peak near the point $\\varepsilon=0$ is\nformed as a result of a strong interference of many orbits. Therefore, its\nwidth describing the long-time asymptotic behaviour of the Fourier transform\nhas no direct connection to the classical escape rate and has rather to be\ncalculated on the pure quantum ground. This is in line with the results of\nthe analysis \\cite{GR-89} of distribution of the resonance widths in the\nthree discs scattering problem.\n\nIt is now generally acknowledged that the random matrix theory \\cite{Me-67}\nrepresents a suitable and reliable foundation for description of local\nproperties of dynamical quantum chaos \\cite{BG-84}. We therefore use below a\nrandom matrix model of chaotic scattering to calculate the time delay\nautocorrelation function. We suppose as usual that the number $N$ of\nresonances is asymptotically large and use the powerful supersymmetry\ntechnique \\cite{E-83}\nfirst applied to chaotic scattering problems in \\cite{VWZ-85}. The number $M$\nof the (statistically equivalent) scattering channels\ncan be small or large or can even scale with the number of resonance states.\nOne can treat the latter two cases \\cite{LW-91, HILSS-92, LSSS-94} as a\n\"semiclassical limit\" in the matrix model. We show here that the time-delay\nlocal fluctuations are governed, similar to those of the $S$-matrix\n\\cite{LSSS-94}, by the gap between the real axis and the upper edge of the\ndistribution of resonance energies in the complex energy plane. We compare\nthis result with that obtained in the framework of the periodic orbit\napproach.\n\nIn the next section our statistical matrix model is briefly presented.\nThe connections of average time delay with the resonance spectrum and S-matrix\nfluctuations are elucidated in sec.~3. After a short description in sec.~4\nof the supersymmetry method which we use the main analytical results for the\ntime delay correlation function are given and discussed in detail in sec.~5.\nSome numerical results shedding additional light upon properties of the time\ndelay correlations are gathered in sec. 6. We close with a brief summary\nin sec.~7.\n\n\\section{The Resonance Matrix Model}\nAccording to the general scattering theory, the evolution of the $N$-level\nunstable system formed on intermediate stage of a resonance collision is\ndescribed \\cite{MW-69, KNO-69, SZ-89} by the effective Hamiltonian\n\\begin{equation}\\label{hamil}\n{\\cal H} = H - i\\gamma\\, W,\\; \\; \\;\\; W = VV^{T}\\,\\,.\n\\end{equation}\nThe Hamiltonian (\\ref{hamil}) acts within the intrinsic $N$-dimensional space\nbut acquires, due to the elimination of continuum variables, an antihermitian\npart. 
The hermitian matrix $H$ is the internal Hamiltonian with a discrete\nspectrum whereas the rectangular $N\\times M$ matrix $V$ consists of the\namplitudes $V_m^c$ of transitions between $N$ internal and $M$ channel\nstates. These amplitudes are real in T-invariant theory, so that the matrix\n$W$, similar to $H$, is real and symmetric. As usual, we neglect the smooth\nenergy dependence of $V$ and $W$. The dimensionless parameter $\\gamma$\ncharacterizes the strength of the coupling of the internal motion to the\ncontinuum.\n\nThe poles of the resonance scattering matrix in the complex energy plane\nare those of the Green's function \\cite{MW-69, KNO-69, SZ-89}\n\\begin{equation}\\label{green}\n{\\cal G}(E) = (E-{\\cal H})^{-1}\\,\\,.\n\\end{equation}\nThey coincide with the eigenvalues ${\\cal E}_n=E_n-\\frac{i}{2}\\Gamma_n$ of\nthe effective Hamiltonian ${\\cal H}$ with $E_n$ and $\\Gamma_n$ being the\nenergy and width of $n$-th resonance state. It what follows, the properties\nof the spectrum of complex energies ${\\cal E}_n$ play the crucial role.\n\nThe intrinsic chaoticity of the internal motion of long-lived intermediate\nsystem manifests itself by chaotic fluctuations in resonance scattering and\ndemands a statistical consideration. Therefore the random matrix approach\nextending\nthe well-known \\cite{Po-65, Me-67} description of chaotic bounded systems\nhas been worked out in \\cite{W-84, VWZ-85, SZ-89}. It is usually\nassumed that the hermitian part $H$ of the effective Hamiltonian belongs to\nthe Gaussian Orthogonal Ensemble (GOE),\n\\begin{equation}\\label{goe}\n\\langle H_{nm} \\rangle = 0,\\ \\ \\ \\langle H_{nm}H_{n'm'} \\rangle =\n\\frac{\\lambda^2}{N}(\\delta_{nn'}\\delta_{mm'}+\\delta_{nm'}\\delta_{mn'})\\,\\,.\n\\end{equation}\nIn the limit $N\\rightarrow\\infty$ eigenvalues of $H$ are situated in the\ninterval $[-2\\lambda,2\\lambda]$ with the density given by Wigner's\nsemicircle law. Following \\cite{SZ-89}, we suggest the transition amplitudes\n$V_n^c$ also to be statistically independent Gaussian variables,\n\\begin{equation}\\label{rand}\n\\langle V^a_n \\rangle = 0,\\, \\, \\,\n\\langle V^a_nV^b_m \\rangle = \\frac{\\lambda}{N}\\delta^{ab}\\delta_{nm}\\,\\,.\n\\end{equation}\nWe will use below the ensemble (\\ref{goe},\\ref{rand}) to calculate the\naverage quantities defined in (\\ref{atd},\\ref{dcf}).\n\n\\section{Time Delay and Resonance Spectrum}\n\nSince we have neglected a smooth energy dependence of the effective\nHamiltonian (\\ref{hamil}),\nthe poles ${\\cal E}_n$ in the lower part of the complex energy\nplane are the only singularities of the resonance scattering matrix. Due to\nthe unitarity condition their complex conjugates ${\\cal E}_n^*$ serve as\n$S$-matrix's zeros. These two conditions result in the representation\n\\begin{equation}\\label{det_s}\n\\det\\,S(E) = \\prod_{n} \\frac{E-{\\cal E}^{\\ast}_{n}}{E-{\\cal E}_{n}}\\,\\,.\n\\end{equation}\nSubstituting eq.(\\ref{det_s}) in eq.(\\ref{q1}), we come to the important\nconnection\n\\begin{equation}\\label{q2}\n Q(E) = -2\\,\\mbox{Im}\\,\\left\\{\\frac{1}{M}\\mbox{tr\\,} {\\cal G(E)}\\right\\}\n = \\frac{1}{M}\\,\\sum_n \\frac{\\Gamma_n}{(E-E_n)^2+\\frac{1}{4}\\Gamma_n^2}\n\\end{equation}\nbetween the time delay and the trace of the Green's function (\\ref{green}) of\nthe intermediate unstable system. The time delay is entirely determined by\nthe spectrum of complex energies of this system. The collision duration\ndirectly reflects the statistical properties of resonances. 
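\nThe connection (\ref{q2}) is straightforward to probe numerically. The following minimal sketch (the matrix sizes $N$, $M$, the coupling $\gamma$ and the energy grid are arbitrary illustrative choices, not the values used for the results of this paper) draws one member of the ensemble (\ref{goe}), (\ref{rand}), forms the effective Hamiltonian (\ref{hamil}) and evaluates the time delay directly from the complex eigenvalues ${\cal E}_n=E_n-\frac{i}{2}\Gamma_n$; note that only these poles enter the result:\n\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(1)\nN, M, lam, gamma = 200, 10, 1.0, 0.5   # illustrative sizes and coupling\n\n# GOE internal Hamiltonian: off-diagonal variance lam^2/N,\n# diagonal variance 2*lam^2/N\nA = rng.standard_normal((N, N)) * lam * np.sqrt(2.0 / N)\nH = (A + A.T) / 2.0\n\n# real Gaussian transition amplitudes with variance lam/N\nV = rng.standard_normal((N, M)) * np.sqrt(lam / N)\n\n# effective Hamiltonian H - i*gamma*V V^T and its complex eigenvalues\nH_eff = H - 1j * gamma * (V @ V.T)\npoles = np.linalg.eigvals(H_eff)\nE_n, Gamma_n = poles.real, -2.0 * poles.imag\n\ndef time_delay(E):\n    # pole sum for Q(E): (1/M) sum_n Gamma_n / ((E - E_n)^2 + Gamma_n^2/4)\n    return np.sum(Gamma_n / ((E - E_n) ** 2 + 0.25 * Gamma_n ** 2)) / M\n\nE_grid = np.linspace(-0.5, 0.5, 201)\nprint(np.mean([time_delay(E) for E in E_grid]))\n\end{verbatim}\n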
This is in\ncontrast to the scattering amplitudes $S^{cc'}$ which explicitly depend also\non the transition amplitudes $V_n^c$.\n\nThe ensemble averaging of eq.(\\ref{q2}) gives\n\\begin{equation}\\label{qav}\n \\langle Q(E) \\rangle = \\frac{2}{m\\lambda}\\,\\mbox{Re}\\,{\\sl g}(E)\n\\end{equation}\nwhere $m < 1$ is the ratio $M\/N$ and the function\n\\begin{equation}\\label{gav}\n {\\sl g}(E) = i\\lambda\\frac{1}{N}\\,\\langle \\mbox{tr\\,}\\,{\\cal G}(E)\\rangle\n\\end{equation}\nsatisfies the cubic equation \\cite{LSSS-94}\n\\begin{equation}\\label{qeq}\n {\\sl g}(E)-\\frac{1}{{\\sl g}(E)}+\\frac{m\\gamma}{1+\\gamma{\\sl g}(E)}-\n i\\frac{E}{\\lambda}=0\\,\\,.\n\\end{equation}\nThe (unique) solution with a positive real part has to be chosen. It can be\nseen from the consideration given in \\cite{LSSS-94} that this real part is\nclose to $\\frac{\\lambda}{N}\\pi\\rho(E)$ with $\\rho(E)$ being the projection on\nthe real energy axis near the scattering energy $E$ of the density of\nresonance levels in the complex energy plane.\n\nOn the other hand, averaging eq.(\\ref{q-matr}) directly, we express\n$\\langle Q \\rangle$ in terms of the two-point $S$-matrix correlation function\n\\cite{VWZ-85, LSSS-94}. In the limit of a large number of statistically\nequivalent channels, $M\\gg 1$, scaling with the number of resonances $N$\n\\begin{equation}\\label{q-ss}\n\\langle Q \\rangle =\n-i\\frac{dC_S(\\varepsilon)}{d\\varepsilon}\\Bigg|_{\\varepsilon=0} +\ni\\frac{d{\\cal T}(\\varepsilon)}{d\\varepsilon}\\Bigg|_{\\varepsilon=0}\\,\\,.\n\\end{equation}\nHere \\cite{LSSS-94}\n\\begin{equation}\\label{scor}\n C_S(\\varepsilon) = \\frac{i\\Gamma(\\varepsilon)}{\\varepsilon +\n i\\Gamma(\\varepsilon)}\\,{\\cal T}(\\varepsilon) \\equiv\n K(\\varepsilon)\\,{\\cal T}(\\varepsilon)\n\\end{equation}\nwith the two smooth functions defined by\n\\begin{equation}\\label{G,T}\n \\Gamma(\\varepsilon) = \\frac{m}{2}\\lambda\\,\n \\frac{{\\cal T}(\\varepsilon)}{{\\sl g}(\\varepsilon\/2)}\\;\\;\\;,\n \\;\\;\\;{\\cal T}(\\varepsilon) = \\frac{4\\gamma{\\sl g}(\\varepsilon\/2)}\n {\\left[1+\\gamma{\\sl g}(\\varepsilon\/2)\\right]^2}\n\\end{equation}\nand we set $E=0$ for the sake of simplicity. The quantity\n\\begin{equation}\\label{trc}\nC_S(0) = {\\cal T}(0)\\equiv T\n\\end{equation}\ncoincides with the transmission coefficient $T=1-|\\langle S\\rangle|^2$. With\neq.(\\ref{scor}) taken into account we obtain from (\\ref{q-ss})\n\\begin{equation}\\label{qav2}\n\\langle Q \\rangle =\n-iT\\frac{dK(\\varepsilon)}{d\\varepsilon}\\Bigg|_{\\varepsilon=0}\n= \\frac{T}{\\Gamma_0}\\,\\,,\n\\end{equation}\nwhere we have designated $\\Gamma(0)$ as $\\Gamma_0$.\n\nAs long as the typical values of the quantity $\\Gamma(\\varepsilon)$ are small\nas compared to the parameter $\\lambda$ characterizing the scale of the smooth\n$\\varepsilon$-dependence of the function ${\\cal T}(\\varepsilon)$, the two\nfactors on the r.h.s. of eq.(\\ref{scor}) have quite different energy scales.\nOnly the first fast varying factor $K(\\varepsilon)$ describes the local\nfluctuations whereas the second one corresponds to the joint influence of all\nresonances giving rise to the processes with a very short duration. The\nlatter came out from eq.(\\ref{qav2}). 
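\nFor a quick numerical check of these relations, the cubic equation (\ref{qeq}) can be solved directly. The fragment below is only a sketch with arbitrary illustrative values of $m$, $\gamma$ and $\lambda$; as prescribed above, the root with positive real part is selected:\n\begin{verbatim}\nimport numpy as np\n\ndef g_of_E(E, m, gamma, lam=1.0):\n    # the cubic equation for g, g - 1/g + m*gamma/(1 + gamma*g) - i*E/lam = 0,\n    # multiplied through by g*(1 + gamma*g) and written as a polynomial in g\n    coeffs = [gamma,\n              1.0 - 1j * gamma * E / lam,\n              gamma * (m - 1.0) - 1j * E / lam,\n              -1.0]\n    roots = np.roots(coeffs)\n    return roots[np.argmax(roots.real)]   # branch with positive real part\n\nm, gamma, lam = 0.1, 0.5, 1.0             # illustrative parameters\ng0 = g_of_E(0.0, m, gamma, lam)\n\nT = 4.0 * gamma * g0 / (1.0 + gamma * g0) ** 2   # transmission coefficient T(0)\nGamma0 = 0.5 * m * lam * T / g0                   # Gamma(0)\nprint(2.0 / (m * lam) * g0.real)                  # <Q> from the Re g expression\nprint((T / Gamma0).real)                          # <Q> = T / Gamma_0\n\end{verbatim}\nThe two printed numbers coincide, in agreement with eqs.(\ref{qav}) and (\ref{qav2}).\n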
The average time delay of a\nnon-monochromatic spatially small wave packet caused by the formation of a\nlong-lived intermediate state \\cite{L-77, DHM-92} is determined just by the\nfactor $K(\\varepsilon)$ \\cite{LSSS-94}\n\\begin{equation}\\label{dstd}\n\\langle \\tau \\rangle =\n-i\\frac{dK(\\varepsilon)}{d\\varepsilon}\\Bigg|_{\\varepsilon=0}\n= \\Gamma_0^{-1}\\,\\,.\n\\end{equation}\nThis implies the connection \\cite{DHM-92, LSSS-94}\n\\begin{equation}\\label{std}\n\\langle\\tau\\rangle = \\langle Q\\rangle\/T =\n\\frac{2N}{\\lambda MT}\\,{\\sl g}(0)\\approx\\frac{2\\pi\\rho}{MT}\\,\\,.\n\\end{equation}\n\n\\section{The Supersymmetry Method}\n\nNow we calculate the correlation function (\\ref{dcf}). Taking into account\nthe relation (\\ref{q2}), one can cast eq.(\\ref{dcf}) into the form\n\\begin{equation}\\label{qq2}\nC_Q(E,\\varepsilon)= \\frac{2}{M^2}\\,\\mbox{Re} \\left\\{\n\\langle \\mbox{tr\\,}{\\cal G}(E\\!+\\!\\frac{\\varepsilon}{2})\n\\mbox{tr\\,}{\\cal G}^{\\dagger}(E\\!-\\!\\frac{\\varepsilon}{2}) \\rangle\n- \\langle \\mbox{tr\\,}{\\cal G}(E\\!+\\!\\frac{\\varepsilon}{2}) \\rangle\n\\langle \\mbox{tr\\,}{\\cal G^{\\dagger}}(E\\!-\\!\\frac{\\varepsilon}{2})\n\\rangle \\right\\}\\,\\,.\n\\end{equation}\nWe also define the normalized quantity\n\\begin{equation}\\label{Nqq}\nK_Q(E,\\varepsilon)=\\frac{C_Q(E,\\varepsilon)}{{\\langle Q(E)\\rangle}^2}\\,\\,.\n\\end{equation}\nThe terms containing two Green's functions with poles at the same side from\nthe real energy axis are omitted in (\\ref{qq2}). We will briefly return to\nthis point later.\n\nIn the limit $\\gamma=0$, when the system gets closed, the correlation function\n(\\ref{qq2}) becomes proportional to the GOE density-density correlation which\nconsists \\cite{Me-67} of the singular term $\\delta(\\pi\\rho\\varepsilon)$ and\nDyson's smooth function $-Y_2(\\pi\\rho\\varepsilon)$. Coupling to the continuum\nleads to appearing of a new energy scale caused by the decay processes. This\nscale is defined \\cite{LSSS-94} by the quantity $\\Gamma(\\varepsilon)$ from\neq.(\\ref{G,T}). One can anticipate a qualitative changing of the correlation\nfunction to occur on this scale. For larger distances the influence of the\nantihermitian part should fade away and the asymptotics of $C_Q$ for\n$\\varepsilon\\rightarrow\\infty$ is expected to coincide with that of the\nDyson's function $-Y_2(\\pi\\rho\\varepsilon)$.\n\nTo perform the ensemble averaging in (\\ref{qq2}) we use the modification\nworked out in \\cite{LSSS-94} of the supersymmetry technique \\cite{VWZ-85}.\nUsing the integral representation of Green's function as a multivariate\nGaussian integral over commuting and anticommuting variables, one gains the\npossibility to accomplish the averaging exactly. With the help of the Fourier\ntransformation in the supermatrix space the integration over initial\nauxiliary supervectors is then carried out. 
Going along this line, one finally\narrives at\n\\begin{equation}\\label{grgr}\n \\langle \\mbox{tr\\,}{\\cal G}(E\\!+\\!\\frac{\\varepsilon}{2})\\,\n \\mbox{tr\\,}{\\cal G^{\\dagger}}(E\\!-\\!\\frac{\\varepsilon}{2}) \\rangle =\n -\\frac{N^2}{4}\\langle\\mbox{str\\,}(\\sigma \\eta_1)\\,\\mbox{str\\,}\n (\\sigma \\eta_2) \\rangle_{{\\cal L}}\\,\\,.\n\\end{equation}\nHere the shorthand $\\langle\\ldots\\rangle_{{\\cal L}}$ is used to denote\nthe integral\n\\begin{equation}\\label{shand}\n \\langle \\ldots \\rangle_{{\\cal L}} =\n \\int\\!d[\\sigma]\\,d[\\hat\\sigma]\\,\\exp \\{\n - N{\\cal L}(\\sigma,\\hat\\sigma) \\} (\\ldots)\n\\end{equation}\nover two $8\\times8$ supermatrices $\\sigma$ and $\\hat\\sigma$ with the\nmeasure defined by the Lagrangian \\cite{LSSS-94}\n\\begin{equation}\\label{Lg}\n {\\cal L}(\\sigma,\\hat\\sigma) = \\frac{1}{4}\\,\\mbox{str\\,}\\sigma^2\n - \\frac{i}{2}E\\,\\mbox{str\\,}\\sigma -\n \\frac{i}{2}\\mbox{str\\,}(\\sigma\\hat\\sigma) +\n \\frac{1}{2}\\mbox{str\\,}\\ln(\\hat\\sigma) + \\frac{m}{2}\\,\\mbox{str\\,} \\ln\n (1\\!+\\!\\gamma\\sigma \\eta) - \\frac{i}{4}\\varepsilon\\,\\mbox{str\\,}(\\sigma\n \\eta)\\,\\,. \\end{equation} The diagonal supermatrices appearing above are\nequal to \\[\\eta=\\mbox{diag}(1,1,-1,-1,1,1,-1,-1)\\]\n\\[\\eta_1=\\mbox{diag}(1,1,0,0,-1,-1,0,0)\\;\\;\\;\n\\eta_2=\\mbox{diag}(0,0,1,1,0,0,-1,-1)\\,\\,. \\]\nHere we have set the GOE parameter $\\lambda$ equal to one.\n\nThe supermatrix $\\sigma$ can be decomposed in the following way \\cite{VWZ-85}\n\\begin{equation}\n\\sigma=T_0\\,\\sigma_R\\,T_0^{-1}\n\\end{equation}\nwhere $T_0$ is a transformation from a non-compact manifold whereas the matrix\n$\\sigma_R$ is\ndiagonalized by transformations from a compact one. This implies a\ncorresponding decomposition of the integrals on the r.h.s. of (\\ref{shand})\n\\begin{equation}\\label{z4}\n \\langle \\ldots \\rangle_{{\\cal L}} =\n \\int\\!{\\cal F}(\\sigma_R)\\,d[\\sigma_R]\\,d[\\hat\\sigma]\n \\exp\\{-N{\\cal L}_R(\\sigma_R,\\hat\\sigma) \\}\n \\int\\!d\\mu\\exp\\{-N{\\cal L}_{\\mu}(\\sigma_R,T_0)\\}(\\ldots)\\,\\,.\n\\end{equation}\nThe Berezinian ${\\cal F}(\\sigma_R)$ depends only on the eigenvalues of\n$\\sigma_R$; $d\\mu$ is the invariant measure of the manifold of non-compact\ntransformations $T_0$. At last, the Lagrangian (\\ref{Lg}) is splitted into\n two parts, ${\\cal L}_R$ and ${\\cal L}_{\\mu}$, given by\n\\begin{equation}\\label{dLg}\n\\begin{array}{l}\n {\\cal L}_R(\\sigma,\\hat\\sigma) = \\frac{1}{4}\\mbox{str\\,}\\sigma_R^2 -\n \\frac{i}{2}E\\,\\mbox{str\\,}\\sigma_R -\n \\frac{i}{2}\\mbox{str\\,}(\\sigma_R\\hat\\sigma) +\n \\frac{1}{2}\\mbox{str\\,}\\ln(\\hat\\sigma)\\,\\,, \\\\ \\\\ {\\cal\n L}_{\\mu}(\\sigma_R,T_0) = -\\frac{i}{4}\\varepsilon\\,\\mbox{str\\,}(\\sigma_R\n T_0^{-1}\\eta T_0) + \\frac{m}{2}\\,\\mbox{str\\,}\\ln(1\\!+\\!\\gamma\\sigma_R\n T_0^{-1}\\eta T_0)\\,\\,. \\end{array} \\end{equation} Only the second part\n${\\cal L}_{\\mu}$ depends on the non-compact variables. The first one\n${\\cal L}_R$ is invariant under a transformation by\n$T_0$ since it is fully absorbed by an appropriate transformation of\n$\\hat\\sigma$. One can easily verify that the corresponding Berezinian is\nequal to unity.\n\nSince the number of resonances $N\\rightarrow \\infty$, the integrations over\n$\\sigma_R$ and $\\hat\\sigma$ can be carried out in the saddle-point\napproximation. At the same time, one has to integrate exactly over\nnon-compact variables as long as the number of channels $M$ is finite\n($m=0$). 
The saddle-point approximation becomes valid for the latter\nintegration when the number $M$ also tends to infinity ($m$ is finite). We\nwill consider both cases mentioned. To simplify formulae we restrict\nour further consideration to the center of the GOE spectrum $E=0$.\n\n\\section{Time Delay Correlation Function}\n\nLet us first consider collisions with a fixed number of channels $M$. The\nlogarithmic term in ${\\cal L}_{\\mu}$ being proportional to the small ratio\n$m$ does not influence then the saddle-point equations in the\n$(\\sigma_R,\\hat\\sigma)$-sector. In particular, the term in (\\ref{qeq})\ncontaining this ratio has to be omitted. The saddle-point equations are\ntrivially solved in this case and at the point $E=0$\n\\begin{equation}\\label{sps1}\n\\hat\\sigma=-i\\sigma_R^{-1}\\;\\;,\\;\\;\\sigma_R=\\eta\\,\\,.\n\\end{equation}\nWith integrations over $\\sigma_R$ and $\\hat\\sigma$ being done, the\ncorrelation function (\\ref{dcf}) reduces to the integral\n\\begin{equation}\\label{Cqq}\n K_Q(\\varepsilon)=2\\,\\mbox{Re}\\int\\!d\\mu\\,\n \\mbox{str\\,}(\\kappa\\alpha_1)\\,\\mbox{str\\,}(\\kappa\\alpha_2)\\,\n \\exp\\Bigl\\{ \\frac{i}{2}\\pi\\rho\\varepsilon\\,\\mbox{str\\,}\\alpha_1\n-\\frac{M}{2}\\mbox{str\\,}\\ln(1\\!+\\!\\frac{1}{2}T\\alpha_1) \\Bigr\\}\n\\end{equation}\nover the invariant measure of the non-compact manifold of $T_0$-matrices.\nHere $\\alpha_{1,2}$ are the $4\\times 4$ supermatrices defined in\n\\cite{VWZ-85}, the supermatrix $\\kappa=\\mbox{diag}(1,1,-1,-1)$ and\n\\begin{equation}\\label{trc0}\nT=\\frac{4\\gamma}{(1+\\gamma)^2}\n\\end{equation}\nis the transmission coefficient (\\ref{trc}) calculated in the limit of $m=0$.\n\nThe further calculations go along the line described in details in\n\\cite{VWZ-85} and lead to the result\n\\[K_Q(\\varepsilon)=\\frac{1}{4}\\,\\int\\limits_0^1\\!d\\lambda_0\\!\n\\int\\limits_0^{\\infty}\\!d\\lambda_1\\!\\int\\limits_0^{\\infty}\\!d\\lambda_2\\,\n\\mu(\\lambda_0,\\lambda_1,\\lambda_2)(2\\lambda_0+\\!\\lambda_1\\!+\\!\\lambda_2\\!)^2\\,\n\\mbox{cos}\\{\\pi\\rho\\varepsilon(2\\lambda_0+\\!\\lambda_1\\!+\\!\\lambda_2\\!)\\}\\]\n\\begin{equation}\\label{K-f}\n\\times\\left[ \\frac{(1\\!-\\!T\\lambda_0)^2}\n{(1\\!+\\!T\\lambda_1)(1\\!+\\!T\\lambda_2) }\\right]^{M\/2}\n\\end{equation}\nwhere\n\\[\\mu(\\lambda_0,\\lambda_1,\\lambda_2)=\n\\frac{(1\\!-\\!\\lambda_0)\\lambda_0|\\lambda_1-\\lambda_2|}\n{[(1+\\lambda_1)\\lambda_1(1+\\lambda_2)\\lambda_2]^{1\/2}\n(\\lambda_0\\!+\\!\\lambda_1)^2(\\lambda_0\\!+\\!\\lambda_2)^2 }\\,\\,.\\]\n\nThe dependence of the function $K_Q$ on openness of the unstable system is\nfully contained in the last factor in (\\ref{K-f}). If at least one of the\nquantities $M$ or $T$ is equal to zero the threefold integral reduces to\nthe single one \\cite{E-83}\n\\begin{equation}\\label{d-d}\nK_Q^{(0)}(\\varepsilon)=\\int\\limits_0^2\\!dt\\,\nt\\left(1-\\frac{1}{2}\\ln(t\\!+\\!1)\\right)\n\\mbox{cos}(\\pi\\rho\\varepsilon t)+\n\\int\\limits_2^{\\infty}\\!dt\\,\n\\left(2-\\frac{t}{2}\\ln\\frac{t\\!+\\!1}{t\\!-\\!1}\\right)\n\\mbox{cos}(\\pi\\rho\\varepsilon t)\n\\end{equation}\n\\[=\\delta(\\pi\\rho\\varepsilon)-Y_2(\\pi\\rho\\varepsilon)\\]\nwhich is just the normalized GOE density-density correlation function.\n\nGenerally speaking, the threefold integral in (\\ref{K-f}) can be investigated\nfor arbitrary number of channels $M$ only numerically using the methods\ndeveloped in \\cite{V-86} (see the next section). However, this integral can\nbe simplified if $M$ becomes large enough. 
Let the number $M$ grow still\nkeeping the ratio $m=0$ and the product $MT=2\\pi\\rho\\Gamma_W$ (compare with\n(\\ref{std})) fixed. The quantity $\\Gamma_W$ is just the limiting value of\n$\\Gamma_0$ with $T$ and ${\\sl g}$ calculated in the limit $m=0$. It coincides\nwith the well-known semiclassical Weisskopf estimate \\cite{BW-79} of the\ncorrelation length of Ericson fluctuations. Then\n\\begin{equation}\\label{t-exp}\n\\left[\\frac{(1\\!-\\!T\\lambda_0)^2}\n{(1\\!+\\!T\\lambda_1)(1\\!+\\!T\\lambda_2)}\\right]^{M\/2}\n\\rightarrow\\exp\\{-\\pi\\rho\\Gamma_W\\,\n(2\\lambda_0+\\!\\lambda_1\\!+\\!\\lambda_2\\!)\\},\n\\end{equation}\nand one obtains similar to eq.(\\ref{d-d})\n\\[K_Q(\\varepsilon)=\\int\\limits_0^2\\!dt\\,t e^{(-\\pi\\rho\\Gamma_W t)}\\,\n\\left(1-\\frac{1}{2}\\ln(t\\!+\\!1)\\right)\n\\mbox{cos}(\\pi\\rho\\varepsilon t)\\]\n\\begin{equation}\\label{lM}\n+\\int\\limits_2^{\\infty}\\!dt\\,e^{(-\\pi\\rho\\Gamma_W t)}\\,\n\\left(2-\\frac{t}{2}\\ln\\frac{t\\!+\\!1}{t\\!-\\!1}\\right)\n\\mbox{cos}(\\pi\\rho\\varepsilon t)\\,\\,.\n\\end{equation}\nThis is in close analogy with the consideration of the S-matrix correlation\nfunction made in \\cite{V-86}.\n\nA new convergency factor appeared in the integrals in (\\ref{lM}) as compared\nto (\\ref{d-d}) where only the oscillating cosine cuts the integral in the\nregion\nof asymptotically large $t$. This makes the function $K_Q$ finite for all\nvalues of $\\varepsilon$ including zero, so that the $\\delta$-function is now\nsmeared out. The behaviour of $K_Q(\\varepsilon)$ is quite different in the\nregions $\\varepsilon\\ll\\Gamma_W$ and $\\varepsilon\\gg \\Gamma_W$. In the first\none it is determined by decays and therefore is sensitive to the coupling to\nthe continuum. Quite opposite, for large $\\varepsilon$ the behaviour becomes\nuniversal since the GOE fluctuations described by the Dyson's function $Y_2$\nare restored. 
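\nThe large-$M$ expression (\ref{lM}) is also convenient for direct numerical evaluation. A minimal sketch (with arbitrary illustrative values of $\rho$ and $\Gamma_W$, and standard quadrature rather than any scheme used for the figures below) is:\n\begin{verbatim}\nimport numpy as np\nfrom scipy.integrate import quad\n\ndef K_Q(eps, rho=1.0, Gamma_W=1.0):\n    # numerical evaluation of the two integrals of the large-M correlation\n    # function, with the exponential convergence factor exp(-pi*rho*Gamma_W*t)\n    a = np.pi * rho\n    f1 = lambda t: t * np.exp(-a * Gamma_W * t) * (1.0 - 0.5 * np.log(t + 1.0)) * np.cos(a * eps * t)\n    f2 = lambda t: np.exp(-a * Gamma_W * t) * (2.0 - 0.5 * t * np.log((t + 1.0) / (t - 1.0))) * np.cos(a * eps * t)\n    I1, _ = quad(f1, 0.0, 2.0)\n    I2, _ = quad(f2, 2.0, np.inf)\n    return I1 + I2\n\nfor eps in (0.0, 0.5, 1.0, 2.0, 5.0):\n    print(eps, K_Q(eps))\n\end{verbatim}\nIn such a sketch the small-$\varepsilon$ peak is controlled by $\Gamma_W$, while for $\pi\rho\varepsilon\gg 1$ the curve approaches the $-Y_2$ tail of the closed system.\n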
It is perfectly reasonable since an open system cannot be\ndistinguished from a closed one during a small time $t\\ll \\Gamma_W^{-1}$.\n\nThe first $\\gamma$-sensitive domain is widened when the width $\\Gamma_W$\ngrows.\nIn the case of small $\\rho\\Gamma_W\\ll 1$ (isolated resonances) it is natural\nto set aside the contribution of asymptotics of the integrand presenting\n(\\ref{lM}) in the form\n\\begin{equation}\\label{isol}\n K_Q(\\varepsilon) = \\frac{1}{\\pi\\rho}\\,\\frac{\\Gamma_W}\n{(\\varepsilon^2\\!+\\!\\Gamma_W^2)}\\, +\n\\end{equation}\n\\[\\int\\limits_0^2\\!dt\\,e^{-\\pi\\rho\\Gamma_W t}\\,\n\\left(t-\\frac{t}{2}\\ln(t\\!+\\!1)-1\\right)\n\\mbox{cos}(\\pi\\rho\\varepsilon t) +\n\\int\\limits_2^{\\infty}\\!dt\\,e^{-\\pi\\rho\\Gamma_W t}\\,\n\\left(1-\\frac{t}{2}\\ln\\frac{t\\!+\\!1}{t\\!-\\!1}\\right)\n\\mbox{cos}(\\pi\\rho\\varepsilon t)\\,\\,.\\]\nThe Lorentzian contribution with the width $\\Gamma_W$ directly traced to\nthe GOE $\\delta$-function dominates in the domain $\\varepsilon\\mathrel{\\mathpalette\\fun <}\\Gamma_W$.\nThe sum of the integrals in the second line is negative for all values of\n$\\varepsilon$ and approaches asymptotically the function $Y_2$ from above.\nWe thus come to the conclusion that the correlation function vanishes at some\nintermediate point $\\varepsilon_0$ which can be estimated as\n\\begin{equation}\\label{isoz}\n\\varepsilon_0\\simeq \\sqrt{\\frac{\\Gamma_W}{\\pi\\rho}}\n\\end{equation}\nusing the condition\n\\[\\frac{1}{\\pi\\rho}\\,\\frac{\\Gamma_W}{(\\varepsilon_0^2\\!+\\!\\Gamma_W^2)}\n\\sim |Y_2(\\rho\\varepsilon_0)|\\sim 1\\,\\,.\\]\n\nThe regime of strongly overlapping resonances, $\\rho\\Gamma_W\\gg 1$, is the\nmost interesting. In this case the\nmain contribution in $K_Q$ comes from the region of small $t$. Therefore, the\nsecond integral in (\\ref{lM}) can be neglected. Dropping then out the small\nlogarithmic term in the first integral and extending its upper limit to\ninfinity, we arrive at\n\\begin{equation}\\label{overl}\nK_Q(\\varepsilon)\\approx\\int\\limits_0^\\infty\\!dt\\,t e^{(-\\pi\\rho\\Gamma_W t)}\\,\n\\mbox{cos}(\\pi\\rho\\varepsilon t) =\n\\frac{1}{\\pi^2\\rho^2}\\frac{\\Gamma_W^2-\\varepsilon^2}\n{(\\varepsilon^2+\\Gamma_W^2)^2}\\,\\,.\n\\end{equation}\nCorrections to this result are of higher order with respect to the parameter\n$(\\rho\\Gamma_W)^{-1}$. The function (\\ref{overl}) is not a Lorentzian at all.\nDecreasing quadratically in a small vicinity of the point $\\varepsilon=0$, it\ndeviates subsequently from a Lorentzian, becomes zero at the point\n$\\varepsilon=\\Gamma_W$, reaches a negative minimum and approaches at last\nzero from below. Just the correlation function of such a form with $\\Gamma_W$\nsubstituted by the classical escape rate was conjectured in \\cite{Ec-93}\nas the limiting classical expression following from the periodic orbit\npicture. However, there is no room for the classical escape rate in the\nmatrix models considered here. One can see that the found form has in fact\nquantum grounds.\n\nOne should return to the exact expressions (\\ref{z4},\\ref{dLg}) if the ratio\n$m$ is finite. The resonances strongly overlap in this case. The\nsaddle-point is now found to be\n\\begin{equation}\\label{sps2}\nT_0=1\\;\\,,\\;\\, \\hat\\sigma=-i\\sigma_R^{-1}\\;\\,,\n\\;\\, \\sigma_R={\\sl g}(\\varepsilon\/2)\\,\\eta\\,\\,,\n\\end{equation}\nwhere ${\\sl g}$ is the solution chosen in sec. 5 of the cubic equation\n(\\ref{qeq}). 
The sequential saddle-point integrations over\n$\\sigma_R,\\hat\\sigma$ and then over the non-compact manifold result in the\nexpression\n\\begin{equation}\\label{Cqqm}\nK_Q(\\varepsilon)=-\\frac{4}{M^2T^2}\\,\\mbox{Re}\\,\n\\frac{\\Gamma_0^2}{\\left[\\varepsilon+i\\Gamma(\\varepsilon)\\right]^2}\n\\end{equation}\nwhere the function $\\Gamma(\\varepsilon)$ defined in (\\ref{G,T}) is just the\none appearing when the $S$-matrix fluctuations are considered \\cite{LSSS-94}.\n\nThe explicit dependence on $\\varepsilon$ gives rise to a sharp variation of\nthe correlation function (\\ref{Cqqm}) in the vicinity of zero if the typical\nvalues $|\\Gamma(\\varepsilon)|\\ll 1$ (see eq.(\\ref{scor}) and the discussion\nbelow). As long as the ratio $m$ is small, the quantity $\\Gamma(\\varepsilon)$\nis small indeed and we can neglect its smooth $\\varepsilon$-dependence for\nall $\\varepsilon\\mathrel{\\mathpalette\\fun <}\\Gamma_0\\approx\\Gamma_W$. Eq.(\\ref{Cqqm}) is equivalent\nto eq.~(\\ref{overl}) within this domain. The asymptotic behaviour for large\n$\\varepsilon$ also does not change since $\\Gamma(\\varepsilon)$ remains\nrestricted for all $\\varepsilon$. A small difference can appear only for\nintermediate values of $\\varepsilon$.\n\nHowever, for larger values of $m$ the deviation can become noticeable even\nnear the point $\\varepsilon=0$. In this case the next term in the power\nexpansion\n\\begin{equation}\\label{pex}\n\\Gamma(\\varepsilon)\\approx\\Gamma_0+\\Gamma_0'\\,\\varepsilon\n\\end{equation}\nwith respect to the smooth $\\varepsilon$-dependence should be taken into\naccount \\cite{LSSS-94}. Because of the smoothness, the derivative $\\Gamma_0'$\nis small. One can see from eq.(\\ref{qeq}) that this derivative is pure\nimaginary. The form (\\ref{overl}) is now reproduced again for sufficiently\nsmall $\\varepsilon$ ,\n\\begin{equation}\\label{Cqqmap}\nK_Q(\\varepsilon) = \\frac{4\\Gamma_g^2}{M^2T^2}\\,\n\\frac{\\Gamma_g^2-\\varepsilon^2}{(\\varepsilon^2+\\Gamma_g^2)^2 }\\,\\,,\n\\end{equation}\nwith\n\\begin{equation}\\label{Gg}\n\\Gamma_g=\\frac{\\Gamma_0}{1+i\\Gamma_0'}\\,\\,.\n\\end{equation}\nIt has been proven in \\cite{LSSS-94} that $\\Gamma_g$, playing the role of the\ncorrelation length of the Ericson fluctuations, coincides with the gap between\nthe distribution of resonance energies in the complex energy plane and the\nreal energy axis. Therefore we come to the conclusion that the properties of\nfluctuations both of the $S$-matrix and time delay are described by the same\nquantity, the gap $\\Gamma_g$, rather than the classical escape rate.\n\nUntil now we neglected the \"one-sided\" contribution\n\\[\\widetilde{C}_Q(\\varepsilon)=\\langle Q\\rangle^2\\,\n\\widetilde{K}_Q(\\varepsilon)=\\]\n\\begin{equation}\\label{tqq2}\n-\\frac{2}{M^2}\\,\\mbox{Re}\\left\\{\\langle\\mbox{tr\\,}{\\cal G}(\\frac{\\varepsilon}\n{2})\\mbox{tr\\,}{\\cal G}(-\\frac{\\varepsilon}{2})\\rangle\n- \\langle \\mbox{tr\\,}{\\cal G}(\\frac{\\varepsilon}{2})\\rangle\n\\langle\\mbox{tr\\,}{\\cal G}(-\\frac{\\varepsilon}{2})\n\\rangle \\right\\}\n\\end{equation}\nto the correlation function (\\ref{dcf}). As long as $m=0$, this contribution\nis of higher order in the parameter $N^{-1}$. However, this is not the case\nwhen the ratio $M\/N$ is finite. So one has to calculate (\\ref{tqq2})\nexplicitly. The well-known replica method \\cite{EA-75} turns out to be\nsufficient for the latter purpose. 
Dropping here the corresponding rather\ncumbersome\nexpressions we only note that the function $\\widetilde{K}_Q(\\varepsilon)$ is\nentirely expressed in terms of the slowly varying ${\\sl\ng}(\\frac{\\varepsilon}{2})$ and varies slowly itself. It has got no pronounced\nresonance behaviour around the point $\\varepsilon=0$ and constitutes a\nsmooth background for the correlation function. Its value at the point\n$\\varepsilon=0$ is approximately equal to\n\\[\\widetilde{K}_Q(0)\\approx - \\frac{1}{8N^2}\\]\nso that\n\\[\\Bigg|\\widetilde K_Q(0)\/{K_Q(0)}\\Bigg|\n\\approx\\frac{1}{2}\\,\\left(\\frac{\\pi\\rho\\Gamma_0}{2N}\\right)^2\\,\\,.\\]\nThe ratio is small under the condition\n\\begin{equation}\\label{con}\n\\pi\\rho\\Gamma_0\\ll N \\quad\\mbox{or}\\quad \\Gamma_0\\ll 1\n\\end{equation}\nimplying a clear-cut distinction of the local and global scales\n\\cite{LSSS-94}. Such a scale separation is necessary for matrix models to\nbe valid so far as the fluctuations are concerned.\n\nThe obtained form of the $\\varepsilon$-dependence of the many-channel\ncorrelation function $C_Q$ is close to that found in \\cite{SW-92} for the\nGutzwiller's model of single-channel chaotic scattering on a space of\nnegative curvature. The same values of all resonance widths and the\noutcoming possibility for resonances to overlap are two specific features\nof the model which are in fact in strong disagreement with properties of\nthe resonance spectra represented by matrix models. In particular, the\nsingle-channel resonances cannot overlap at all in the latter models\n\\cite{SZ-89} and their widths fluctuate strongly. That is why our result\nfor $M=1$ (see below) differs noticeably from the correlation function\nof ref.\\cite{SW-92}. The situation changes when the number of channels is\nlarge. The width fluctuations diminish with the number $M$ of channels\ngrowing. Since the time delay depends, according to (\\ref{q2}), only on\nproperties of the complex energies of resonances and not on the number of\nchannels directly, the correlation functions become similar in the two\nquite different cases compared.\n\nIt is worthy to note that the resonances overlapping strongly suppress the\ntime delay fluctuations. Indeed, eq.(\\ref{isol}) gives for isolated\nresonances\n\\[K_Q(0)=\\frac{1}{\\pi\\rho\\Gamma_W}\\gg 1\\]\nwhereas\n\\[K_Q(0)=\\frac{1}{\\pi^2\\rho^2\\Gamma_W^2}\\ll 1\\]\nwhen they overlap. The duration of a collision thus becomes a good definite\nquantity in the \"quasiclassical\" limit.\n\n\\section{Numerical results}\nExcepting a few limiting cases considered above, further analytical study of\n(\\ref{K-f}) is not possible and one has to use numerical methods. However,\nthe threefold integral as it stands does not suit for numerical computation.\nA very convenient substitution of the integration variables has been proposed\nin \\cite{V-86} to overcome all difficulties appearing. Following this author\nwe reduce the expression (\\ref{K-f}) to the Fourier integral\n\\begin{equation}\\label{fourier}\n K_Q(\\varepsilon) =\n \\int\\limits_0^{\\infty}\\!dt\\, F(t)\\cos(\\pi\\rho\\varepsilon t)\n\\end{equation}\nwith the Fourier transform $F(t)$ given by a double integral of a smooth\nfunction quite convenient for the numerical work. 
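\nOne convenient way to evaluate (\ref{fourier}) numerically is to tabulate $F(t)$ on a finite grid and carry out the cosine transform by quadrature. The fragment below illustrates only this last step; the profile used for $F(t)$ is a toy stand-in chosen to keep the example self-contained, since the actual $F(t)$ is given by the double integral mentioned above:\n\begin{verbatim}\nimport numpy as np\n\ndef toy_F(t):\n    # purely illustrative stand-in for F(t); the true F(t) follows from the\n    # double integral discussed in the text\n    return t * np.exp(-t)\n\ndef K_Q_from_F(eps, rho=1.0, t_max=60.0, n=60000):\n    # discretized cosine transform of the tabulated F(t) on a finite grid\n    t = np.linspace(0.0, t_max, n)\n    dt = t[1] - t[0]\n    return np.sum(toy_F(t) * np.cos(np.pi * rho * eps * t)) * dt\n\nfor eps in (0.0, 0.5, 1.0, 2.0):\n    print(eps, K_Q_from_F(eps))\n\end{verbatim}\n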
The asymptotic behaviour\nof $F(t)$ can be easily found explicitly\n\\begin{equation}\\label{F-asymp}\n F(t) \\sim \\left\\{ \\begin{array}{ll} t\n &\\ \\ \\mbox{for}\\ t\\ll 1 \\\\ (1+Tt)^{-M\/2} &\\ \\\n \\mbox{for}\\ t\\gg 1 \\end{array} \\right.\\,\\,.\n\\end{equation}\n\nFor a closed system ($T=0$) the Fourier transform $F(t)$ tends to unity in\nthe large-$t$ asymptotics. This results in the $\\delta$-term in the GOE\ndensity-density correlation. A singularity still survives even for an open\nsystem with one or two decay channels. The asymptotics (\\ref{F-asymp})\nimplies square root or logarithmic divergences correspondingly at the\npoint $\\varepsilon=0$ in these two cases.\n\nIn Fig. 1 the function $K_Q(x)$ versus $x=\\rho\\varepsilon$ is plotted for the\ncase of a single open channel. The singular behaviour near zero as well as\nGOE-like asymptotics are shown. The dashed line represents the Dyson's\nfunction $-Y_2(\\pi x)$. The calculation was made for the value $\\gamma=1$;\nonly some small domain around zero is sensitive to the choice of $\\gamma$.\nThe correlation function Fig.1 has little in common with that found in\n\\cite{SW-92}. This discrepancy is due to the strong fluctuations of\nsingle-channel widths in our model in contrast to identical widths of all\nresonances in Gutzwiller's one.\n\nFor $M>2$ the quantity $K_Q(0)$ is finite and the correlation function\napproaches, as the number of channels grows, the asymptotics given by\n(\\ref{lM}). The Fig. 2 demonstrates this for the ratio\n$K_Q(\\varepsilon)\/K_Q(0)$ in the\ncase of overlapping resonances. In asymptotic regime (\\ref{overl}) such a\nratio is an universal function of the only variable $\\varepsilon\/\\Gamma_W$.\nOne can see how the exact result (\\ref{K-f}) gets more and more close to this\nuniversal behaviour.\n\nThe Lorentzian peak should dominate the ratio\n$K_Q(\\varepsilon)\/K_Q(0)$ in the domain\n$\\varepsilon\/\\Gamma_W\\mathrel{\\mathpalette\\fun <} (\\pi\\rho\\Gamma_W)^{-\\frac{1}{2}}\\gg 1$ when\nresonances are isolated (see (\\ref{isoz})). Fig. 3 demonstrates this for two\nvalues of coupling constant $\\gamma$.\n\nAs it has been mentioned above, the function $K_Q(\\varepsilon)$ vanishes at\nsome point $\\varepsilon_0$. The position of this point as the function of\nthe number of channels $M$ at several fixed values of $\\gamma$ is shown in\nFig. 4 for three different values of the coupling constant $\\gamma$. It is\nclearly seen that the square root dependence for isolated resonances (see\n(\\ref{isoz})) is replaced by the linear one for overlapping ones.\n\n\n\n\\section{Summary.}\nIn this paper we have considered the fluctuations of the characteristic time\nof collisions in the framework of a random matrix model of resonance chaotic\nscattering. These fluctuations are entirely due to the fluctuations of the\nspectrum of complex resonance energies. We calculate analytically the time\ndelay correlation function and investigate its properties analytically and\nnumerically for different values of the number of channels and the strength\nof the coupling to the continuum. For any values of these parameters this\nfunction is far from being a Lorentzian. In particular, it vanishes at some\npoint which plays the role of the characteristic correlation length of the\nfluctuations. 
In the \"quasiclassical\" limit of a large number of strongly\noverlapping resonances this length is given, similar to that of the S-matrix\nfluctuations, by the gap between the upper edge of the distribution of\ncomplex energies of resonances and the real energy axis. We do not expect\nthat this quantity may be connected to the escape rate appearing in the\nclassical theory of chaotic scattering. The latter has been conjectured in\n\\cite{Sm-91} to be the semiclassical limit for the correlation length in\nchaotic scattering.\n\n\\begin{center}\n{\\large\\bf Acknowledgements}\n\\end{center}\nWe are grateful to F.Izrailev for his permanent interest to this work.\nFinancial support by the Deutsche Forschungsgemeinschaft through the SFB 237\nis acknowledged. For two of us (V.V.S. and D.V.S.) the research described in\nthis publication was made possible in part by Grant No RB7000 from the\nInternational Science Foundation.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\nCrowd counting aims to count the number of people in a crowded scene where as density estimation aims to map an input crowd image to it's corresponding density map which indicates the number of people per pixel present in the image (as illustrated in Fig. \\ref{fig:task_illustration}) and the two problems have been jointly addressed by researchers. The problem of crowd counting and density estimation is of paramount importance and it is essential for building higher level cognitive abilities in crowded scenarios such as crowd monitoring \\cite{chan2008privacy} and scene understanding \\cite{shao2015deeply,zhou2012understanding}. Crowd analysis has attracted significant attention from researchers in the recent past due to a variety of reasons. Exponential growth in the world population and the resulting urbanization has led to an increased number of activities such as sporting events, political rallies, public demonstrations etc. (shown in Fig. \\ref{fig:crowd_scenes}), thereby resulting in more frequent crowd gatherings in the recent years. In such scenarios, it is essential to analyze crowd behavior for better management, safety and security. \n\nLike any other computer vision problem, crowd analysis comes with many challenges such as occlusions, high clutter, non-uniform distribution of people, non-uniform illumination, intra-scene and inter-scene variations in appearance, scale and perspective making the problem extremely difficult. Some of these challenges are illustrated in Fig. \\ref{fig:crowd_scenes}. The complexity of the problem together with the wide range of applications for crowd analysis has led to an increased focus by researchers in the recent past. \n\n\\begin{figure}[t]\n\\begin{center}\n\\begin{minipage}{0.49\\linewidth}\n\\includegraphics[width=\\linewidth]{Fig1}\n\\vskip-6pt\n\\captionof*{figure}{(a)}\n\\end{minipage}%\n\\hfill\n\\begin{minipage}{0.49\\linewidth}\n\\includegraphics[width=\\linewidth]{Fig2}\n\\vskip-6pt\n\\captionof*{figure}{(b)}\n\\end{minipage}\n\\vskip-10pt\n\\captionof{figure}{Illustration of density map estimation. 
(a) Input image (b) Corresponding density map with count.}\n\label{fig:task_illustration}\n\end{center}\n\n\end{figure}\n\nCrowd analysis is an inherently inter-disciplinary research topic: researchers from different communities (such as sociology \cite{moussaid2010walking,blumer1951collective}, psychology \cite{aveni1977not}, physics \cite{castellano2009statistical,1971HendersonStatistics}, biology \cite{parrish1999complexity,zhang2010collective}, computer vision and public safety) have addressed the issue from different viewpoints. Crowd analysis has a variety of critical applications of an inter-disciplinary nature:\n\n\noindent \textit{Safety monitoring}: The widespread usage of video surveillance cameras for security and safety purposes in places such as sports stadiums, tourist spots, shopping malls and airports has enabled easier monitoring of crowds in such scenarios. However, traditional surveillance algorithms may break down as they are unable to process high density crowds due to limitations in their design. In such scenarios, we can leverage the results of algorithms specially designed for crowd analysis related tasks such as behavior analysis \cite{saxena2008crowd,ko2008survey}, congestion analysis \cite{zhou2015learning,huang2015congestion}, anomaly detection \cite{li2014anomaly,chaker2017social} and event detection \cite{benabbas2010motion}. \n\n\n\noindent \textit{Disaster management}: Many scenarios involving crowd gatherings such as sports events, music concerts, public demonstrations and political rallies face the risk of crowd related disasters such as stampedes, which can be life threatening. In such cases, crowd analysis can be used as an effective tool for early overcrowding detection and appropriate management of the crowd, and hence the eventual aversion of a disaster \cite{abdelghany2014modeling,almeida2013crowd}. \n\n\noindent \textit{Design of public spaces}: Crowd analysis of existing public spaces such as airport terminals, train stations, shopping malls and other public buildings \cite{chow2008waiting,sime1995crowd} can reveal important design shortcomings from a crowd safety and convenience point of view. These studies can be used for the design of public spaces that are optimized for better safety and crowd movement \cite{lu2016study,al2013crowd}. 
These mathematical models can be further used for simulation of crowd phenomena for various applications such as computer games, inserting visual effects in film scenes and designing evacuation plans \\cite{gustafson2016mure,perez2016task}.\n\n\\noindent \\textit{Forensic search}: Crowd analysis can be used to search for suspects and victims in events such as bombing, shooting or accidents in large gatherings. Traditional face detection and recognition algorithms can be speeded up using crowd analysis techniques which are more adept at handling such scenarios \\cite{klontz2013case,barr2014effectiveness}.\n\n\n\n\nThese variety of applications has motivated researchers across various fields to develop sophisticated methods for crowd analysis and related tasks such as counting \\citep{chan2008privacy,chan2009bayesian,chen2012feature,idrees2013multi,chan2012counting,skaug2016end,ge2009marked,idrees2013multi}, density estimation \\citep{lempitsky2010learning,chen2013cumulative,zhang2016single,zhang2015cross,pham2015count,wang2016fast,boominathan2016crowdnet}, segmentation \\citep{kang2014fully,dong2007fast}, behaviour analysis \\citep{bandini2014towards,shao2014scene,cheng2014recognizing,zhou2012understanding,zhou2015learning,yi2016l0}, tracking \\citep{rodriguez2011density,zhu2014crowd}, scene understanding \\citep{shao2015deeply,zhou2012understanding} and anomaly detection \\citep{mahadevan2010anomaly,li2014anomaly}. Among these, crowd counting and density estimation are a set of fundamental tasks and they form basic building blocks for various other applications discussed earlier. Additionally, methods developed for crowd counting can be easily extended to counting tasks in other fields such as cell microscopy \\cite{wang2016fast,walach2016learning,lempitsky2010learning,chen2012feature}, vehicle counting \\cite{onoro2016towards}, environmental survey \\cite{french2015convolutional,zhan2008crowd}, etc.\n\n\\begin{figure}[t]\n\\begin{center}\n\\begin{minipage}{0.49\\linewidth}\n\\includegraphics[width=\\linewidth]{Fig3}\n\\vskip-6pt \\captionof*{figure}{(a)}\n\\end{minipage}%\n\\hfill\n\\begin{minipage}{0.49\\linewidth}\n\\includegraphics[width=\\linewidth]{Fig4}\n\\vskip-6pt \\captionof*{figure}{(b)}\n\\end{minipage}\n\\end{center}\n\n\\begin{center}\n\\begin{minipage}{0.49\\linewidth}\n\\includegraphics[width=\\linewidth]{Fig5}\n\\vskip-6pt \\captionof*{figure}{(c)}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{0.49\\linewidth}\n\\includegraphics[width=\\linewidth]{Fig6}\n\\vskip-6pt \\captionof*{figure}{(d)}\n\\end{minipage}\n\\vskip-10pt \\captionof{figure}{Illustration of various crowded scenes and the associated challenges. (a) Parade (b) Musical concert (c) Public demonstration (d) Sports stadium. High clutter, overlapping of subjects, variation in scale and perspective can be observed across images.}\n\\label{fig:crowd_scenes}\n\\end{center}\n\\end{figure}\n\nOver the last few years, researchers have attempted to address the issue of crowd counting and density estimation using a variety of approaches such as detection-based counting, clustering-based counting and regression-based counting \\cite{loy2013crowd}. The initial work on regression-based methods mainly use hand-crafted features and the more recent works use Convolutional Neural Network (CNN) based approaches. 
The CNN-based approaches have demonstrated significant improvements over previous hand-crafted feature-based methods, thus, motivating more researchers to explore CNN-based approaches further for related crowd analysis problems. In this paper, we review various single image crowd counting and density estimation methods with a specific focus on recent CNN-based approaches. \n\n\nResearchers have attempted to provide a comprehensive survey and evaluation of existing techniques for various aspects of crowd analysis \\citep{zhan2008crowd,ferryman2014performance,junior2010crowd,li2015crowded,zitouni2016advances}. Zhan \\textit{et al. } \\cite{zhan2008crowd} and Junior \\textit{et al. } \\cite{junior2010crowd} were among the first ones to study and review existing methods for general crowd analysis. Li \\textit{et al. } \\cite{li2015crowded} surveyed different methods for crowded scene analysis tasks such as crowd motion pattern learning, crowd behavior, activity analysis and anomaly detection in crowds. More recently, Zitouni \\textit{et al. } \\cite{zitouni2016advances} evaluated existing methods across different research disciplines by inferring key statistical evidence from existing literature and provided suggestions towards the general aspects of techniques rather than any specific algorithm. While these works focussed on the general aspects of crowd analysis, researchers have studied in detail crowd counting and density estimation methods specifically \\cite{loy2013crowd,saleh2015recent,ryan2015evaluation}. Loy \\textit{et al. } \\cite{loy2013crowd} provided a detailed description and comparison of video imagery-based crowd counting and evaluation of different methods using the same protocol. They also analyzed each processing module to identify potential bottlenecks to provide new directions for further research. In another work, Ryan \\textit{et al. } \\cite{ryan2015evaluation} presented an evaluation of regression-based methods for crowd counting across multiple datasets and provided a detailed analysis of performance of various hand-crafted features. Recently, Saleh \\textit{et al. } \\cite{saleh2015recent} surveyed two main approaches which are direct approach (i.e., object based target detection) and indirect approach (e.g. pixel-based, texture-based, and corner points based analysis). \n\n\nThough existing surveys analyze various methods for crowd analysis and counting, they however cover only traditional methods that use hand-crafted features and do not take into account the recent advancements driven primarily by CNN-based approaches \\cite{shao2015deeply,hu2016dense,zhao2016crossing,boominathan2016crowdnet,skaug2016end,walach2016learning,arteta2016counting,wang2015deep,zhang2016single,zhang2015cross,onoro2016towards,shao2016slicing}\nand creation of new challenging crowd datasets \\cite{zhang2016data,zhang2015cross,zhang2016single}. While CNN-based approaches have achieved drastically lower error rates, the creation of new datasets has enabled learning of more generalized models. To keep up with the rapidly advancing research in crowd counting, we believe it is necessary to analyze these methods in detail in order to understand the trends. Hence, in this paper, we provide a survey of recent state-of-the-art CNN-based approaches for crowd counting and density estimation for single images. 
\n\n\n\nRest of the paper is organized as follows: Section \\ref{sec:review_traditional} briefly reviews the traditional crowd counting and density estimation approaches with an emphasis on the most recent methods. This is followed by a detailed survey on CNN-based methods along with a discussion on their merits and drawbacks in Section \\ref{sec:survey_cnn}. In Section \\ref{sec:datasets_and_results}, recently published challenging datasets for crowd counting are discussed in detail along with results of the state-of-the-art methods. We discuss several promising avenues for achieving further progress in Section \\ref{sec:future_research}. Finally, concluding remarks are made in Section \\ref{sec:conclusion}.\n\n\\section{Review of traditional approaches}\n\\label{sec:review_traditional}\nVarious approaches have been proposed to tackle the problem of crowd counting in images \\cite{idrees2013multi,chen2013cumulative,lempitsky2010learning,zhang2015cross,zhang2016single} and videos \\cite{brostow2006unsupervised,ge2009marked,rodriguez2011density,chen2015person}. \nLoy \\textit{et al. } \\cite{loy2013crowd} broadly classified traditional crowd counting methods based on the approach into the following categories: (1) Detection-based approaches, (2) Regression-based approaches, and (3) Density estimation-based approaches. \n\nSince the focus of this work is on CNN-based approaches, in this section, we briefly review the detection and regression-based approaches using hand-crafted features for the sake of completeness. In addition, we present a review of the recent traditional methods \\cite{idrees2013multi,lempitsky2010learning,pham2015count,wang2016fast,xu2016crowd} that have not been analyzed in earlier surveys. \n\n\\subsection{Detection-based approaches}\nMost of the initial research was focussed on detection style framework, where a sliding window detector is used to detect people in the scene \\cite{dollar2012pedestrian} and this information is used to count the number of people \\cite{li2008estimating}. Detection is usually performed either in the monolithic style or parts-based detection. Monolithic detection approaches \\cite{dalal2005histograms,leibe2005pedestrian,tuzel2008pedestrian,enzweiler2009monocular} typically are traditional pedestrian detection methods which train a classifier using features (such as Haar wavelets \\cite{viola2004robust}, histogram oriented gradients \\cite{dalal2005histograms}, edgelet \\cite{wu2005detection} and shapelet \\cite{sabzmeydani2007detecting}) extracted from a full body. Various learning approaches such as Support Vector Machines, boosting \\cite{viola2005detecting} and random forest \\cite{gall2011hough} have been used with varying degree of success. Though successful in low density crowd scenes, these methods are adversely affected by the presence of high density crowds. Researchers have attempted to address this issue by adopting part-based detection methods \\cite{felzenszwalb2010object,lin2001estimation,wu2007detection}, where one constructs boosted classifiers for specific body parts such as\nthe head and shoulder to estimate the people counts in a designated area \\cite{li2008estimating}. In another approach using shape learning, Zhao et al. \\cite{zhao2008segmentation} modelled humans using 3D shapes composed of ellipsoids, and employed a stochastic process to estimate the number and shape configuration that best explains a given foreground mask in a scene. 
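\n\nAs a concrete illustration of the monolithic detection-and-count pipeline described at the beginning of this subsection, the following minimal sketch counts people by running OpenCV's default HOG pedestrian detector over an image; the OpenCV calls are the standard people-detection API, but the window stride and pyramid scale are illustrative choices, and such a pipeline is viable only for low density scenes:\n\\begin{verbatim}\n# Minimal sketch: monolithic detection-based counting with the\n# default OpenCV HOG pedestrian detector (low-density scenes only).\nimport cv2\n\ndef count_by_detection(image_path):\n    hog = cv2.HOGDescriptor()\n    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())\n    image = cv2.imread(image_path)\n    # Sliding-window detection over an image pyramid; the stride and\n    # scale below are illustrative rather than tuned values.\n    boxes, scores = hog.detectMultiScale(image, winStride=(8, 8),\n                                         scale=1.05)\n    # The estimated crowd count is the number of accepted detections.\n    return len(boxes)\n\\end{verbatim}\nAs the works surveyed above point out, the count obtained this way degrades quickly once occlusion and density increase, which is what motivated the part-based and shape-based variants discussed in this subsection.\n\n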
Ge and Collins \\cite{ge2009marked} further extended the idea by using flexible and practical shape models.\n\n\\subsection{Regression-based approaches}\nThough parts-based and shape-based detectors were used to mitigate the issues of occlusion, these methods were not successful in the presence of extremely dense crowds and\nhigh background clutter. To overcome these issues, researchers attempted to count by regression where they learn a mapping between features extracted from local image patches to their counts \\cite{chan2009bayesian,ryan2009crowd,chen2012feature}. By counting using regression, these methods avoid dependency on learning detectors which is a relatively complex task. These methods have two major components: low-level feature extraction and regression modelling. A variety of features such as foreground features, edge features, texture and gradient features have been used for encoding low-level information. Foreground features are extracted from foreground segments in a video using standard background subtraction techniques. Blob-based holistic features such as area, perimeter, perimeter-area ration, etc. have demonstrated encouraging results \\cite{chan2008privacy,chen2012feature,ryan2009crowd}. While these methods capture global properties of the scene, local features such as edges and texture\/gradient features such as local binary pattern (LBP), histogram oriented gradients (HOG), gray level co-occurrence matrices (GLCM) have been \nused to further improve the results. Once these global and local features are extracted, different regression techniques such as linear regression \\cite{paragios2001mrf}, piecewise linear regression \\cite{chan2008privacy}, ridge regression \\cite{chen2012feature}, Gaussian process regression and neural network \\cite{marana1998efficacy} are used to learn a mapping from low-level feature to the crowd count.\n\nIn a recent approach, Idrees \\textit{et al. } \\cite{idrees2013multi} identified that no single feature or detection method is reliable enough to provide sufficient information for accurate counting in the presence of high density crowds due to various reasons such as low resolution, severe occlusion, foreshortening and perspective. Additionally, they observed that there exists a spatial relationship that can be used to constrain the count estimates in neighboring local regions. With these observations in mind, they proposed to extract features using different methods that capture different information. By treating densely packed crowds of individuals as irregular and non-homogeneous texture, they employed Fourier analysis along with head detections and SIFT interest-point based counting in local neighborhoods. The count estimates from this localized multi-scale analysis are then aggregated subject to global consistency constraints. The three sources, i.e., Fourier, interest points and head detection are then combined with their respective confidences and counts at localized patches are computed independently. These local counts are then globally constrained in a multi-scale Markov Random Field (MRF) framework to get an estimate of count for the entire image. The authors also introduced an annotated dataset (UCF\\textunderscore CC\\textunderscore 50) of 50 images containing 64000 humans.\n\nChen \\textit{et al. } \\cite{chen2013cumulative} introduced a novel cumulative attribute concept for learning a regression model when only sparse and imbalanced data are available. 
Considering that the challenges of inconsistent features along with sparse and imbalanced (encountered during learning a regression function) are related, cumulative attribute-based representation for learning a regression model is proposed. Specifically, features extracted from sparse and imbalanced image samples are mapped onto a cumulative attribute space. The method is based on the notion of discriminative attributes used for addressing sparse training data. This method is inherently capable of handling imbalanced data.\n\n\\subsection{Density estimation-based approaches}\nWhile the earlier methods were successful in addressing the issues of occlusion and clutter, most of them ignored important spatial information as they were regressing on the global count. In contrast, Lempitsky \\textit{et al. } \\cite{lempitsky2010learning} proposed to learn a linear mapping between local patch features and corresponding object density maps, thereby incorporating spatial information in the learning process. In doing so, they avoided the hard task of learning to detect and localize individual object instances by introducing a new approach of estimating image density whose integral over any region in the density map gives the count of objects within that region. The problem of learning density maps is formulated as a minimization of a regularized risk quadratic cost function. A new loss function appropriate for learning density maps is introduced. The entire problem is posed as a convex optimization task which they solve using cutting-plane optimization.\n\nObserving that it is difficult to learn a linear mapping, Pham \\textit{et al. } \\cite{pham2015count} proposed to learn a non-linear mapping between local patch features and density maps. They used random forest regression from multiple image patches to vote for densities of multiple target objects to learn a non-linear mapping. In addition, they tackled the problem of large variation in appearance and shape between crowded image patches and non-crowded ones by proposing a crowdedness prior and they trained two different forests corresponding to this prior. Furthermore, they were able to successfully speed up the estimation process for real-time performance by proposing an effective forest reduction that uses permutation of decision trees. Apart from achieving real-time performance, another advantage of their method is that it requires relatively less memory to build and store the forest.\n\nSimilar to the above approach, Wang and Zou \\cite{wang2016fast} identified that though existing methods are effective, they were inefficient from computational complexity point of view. To this effect, they proposed a fast method for density estimation based on subspace learning. Instead of learning a mapping between dense features and their corresponding density maps, they learned to compute the embedding of each subspace formed by image patches. Essentially, they exploited the relationship between images and their corresponding density maps in the respective feature spaces. The feature space of image patches are clustered and examples of each subspace are collected to learn its embedding. Their assumption that local image patches and their corresponding density maps share similar local geometry enables them to learn locally linear embedding using which the density map of an image patch can be estimated by preserving the geometry. 
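For reference, the density-map formulation that underlies these approaches \\cite{lempitsky2010learning} can be written, in our notation and up to boundary effects, as\n\\[\nD_I(p)=\\sum_{h\\in H_I}\\mathcal{N}\\big(p;\\,h,\\,\\sigma^{2}\\big),\\qquad \\sum_{p\\in I}D_I(p)\\approx|H_I|,\n\\]\nwhere $H_I$ is the set of annotated head locations in image $I$ and $\\mathcal{N}(p;h,\\sigma^{2})$ is a normalized 2D Gaussian kernel centred at $h$; summing an estimated density map over any region therefore yields the estimated count for that region.\n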
Since, implementing locally linear embedding (LLE) is time-consuming, they divided the feature spaces of image patches and their counterpart density maps into subspaces, and computed the embedding of each subspace formed by image patches. The density map of input patch is then estimated by simple classification and mapping with the corresponding embedding matrix. \n\nIn a more recent approach, Xu and Qiu \\cite{xu2016crowd} observed that the existing crowd density estimation methods used a smaller set of features thereby limiting their ability to perform better. Inspired by the ability of high-dimensional features in other domains such as face recognition, they proposed to boost the performances of crowd density estimation by using a much extensive and richer set of features. However, since the regression techniques used by earlier methods (based on Gaussian process\nregression or Ridge regression) are computationally complex and are unable to process very high-dimensional features, they used random forest as the regression model whose tree structure is intrinsically fast and scalable. Unlike traditional approaches to random forest construction, they embedded random projection in the tree nodes to combat the curse of dimensionality and to introduce randomness in the tree construction.\n\n\\begin{figure}[t]\n\\begin{center}\n\\begin{minipage}{0.95\\linewidth}\n\\includegraphics[width=\\linewidth]{Fig7}\n\\end{minipage}%\n\\vskip-10pt\n\\captionof{figure}{Categorization of existing CNN-based approaches.}\n\\label{fig:classification}\n\\end{center}\n\\end{figure}\n\n\n\n\n\n\\section{CNN-based methods}\n\\label{sec:survey_cnn}\nThe success of CNNs in numerous computer vision tasks has inspired researchers to exploit their abilities for learning non-linear functions from crowd images to their corresponding density maps or corresponding counts. A variety of CNN-based methods have been proposed in the literature. We broadly categorize these methods based on property of the networks and training approach as shown in Fig. \\ref{fig:classification}. Based on the property of the networks, we classify the approaches into the following categories:\n\\begin{itemize}[noitemsep]\n\\item \\textbf{Basic CNNs}: Approaches that involve basic CNN layers in their networks fall into this category. These methods are amongst initial deep learning approaches for crowd counting and density estimation.\n\\item \\textbf{Scale-aware models}: The basic CNN-based approaches evolved into more sophisticated models that were robust to variations in scale. This robustness is achieved through different techniques such as multi-column or multi-resolution architectures.\n\\item \\textbf{Context-aware models}: Another set of approaches attempted to incorporate local and global contextual information present in the image into the CNN framework for achieving lower estimation errors. \n\\item \\textbf{Multi-task frameworks}: Motivated by the success of multi-task learning for various computer vision tasks, various approaches have been developed to combine crowd counting and estimation along with other tasks such as foreground-background subtraction and crowd velocity estimation.\n\\end{itemize}\n\nIn an yet another categorization, we classify the CNN-based approaches based on the inference methodology into the following two categories:\n\\begin{itemize}[noitemsep]\n\\item \\textbf{Patch-based inference}: In this approach, the CNNs are trained using patches cropped from the input images. Different methods use different crop sizes. 
During the prediction phase, a sliding window is run over the test image, predictions are obtained for each window, and these are finally aggregated to obtain the total count in the image. \n\\item \\textbf{Whole image-based inference}: Methods in this category perform whole-image based inference and thereby avoid computationally expensive sliding windows. \n\\end{itemize}\nTable~\\ref{tab:classification} presents a categorization of various CNN-based crowd counting methods based on their network property and inference process. \n\n\n\\begin{table}[htp!]\n\\centering\n\\caption{Categorization of existing CNN-based approaches.}\n\\label{tab:classification}\n\\resizebox{0.99\\linewidth}{!}{%\n\\begin{tabular}{|l|l|l|}\n\\hline\n & \\multicolumn{2}{c|}{Category} \\\\ \\hline\nMethod & \\begin{tabular}[c]{@{}l@{}}Network \\\\ property\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}Inference\\\\ process\\end{tabular} \\\\ \\hline\nFu \\textit{et al. } \\cite{fu2015fast} & Basic & Patch-based \\\\ \\hline\nWang \\textit{et al. } \\cite{wang2015deep} & Basic & Patch-based \\\\ \\hline\nZhang \\textit{et al. } \\cite{zhang2015cross} & Multi-task & Patch-based \\\\ \\hline\nBoominathan \\textit{et al. } \\cite{boominathan2016crowdnet} & Scale-aware & Patch-based \\\\ \\hline\nZhang \\textit{et al. } \\cite{zhang2016single} & Scale-aware & Whole image-based \\\\ \\hline\nWalach and Wolf \\cite{walach2016learning} & Basic & Patch-based \\\\ \\hline\nOnoro \\textit{et al. } \\cite{onoro2016towards} & Scale-aware & Patch-based \\\\ \\hline\nShang \\textit{et al. } \\cite{skaug2016end} & Context-aware & Whole image-based \\\\ \\hline\nSheng \\textit{et al. } \\cite{sheng2016crowd} & Context-aware & Whole image-based \\\\ \\hline\nKumagai \\textit{et al. } \\cite{kumagai2017mixture} & Scale-aware & Patch-based \\\\ \\hline\nMarsden \\textit{et al. } \\cite{marsden2016fully} & Scale-aware & Whole image-based \\\\ \\hline\nMundhenk \\textit{et al. } \\cite{mundhenk2016large} & Basic & Patch-based \\\\ \\hline\nArteta \\textit{et al. } \\cite{arteta2016counting} & Multi-task & Patch-based \\\\ \\hline\nZhao \\textit{et al. } \\cite{zhao2016crossing} & Multi-task & Patch-based \\\\ \\hline\nSindagi \\textit{et al. } \\cite{sindagi2017cnnbased} & Multi-task & Whole image-based \\\\ \\hline\nSam \\textit{et al. } \\cite{sam2017switching} & Scale-aware & Patch-based \\\\ \\hline\nKang \\textit{et al. } \\cite{kang2017beyond} & Basic & Patch-based \\\\ \\hline\n\\end{tabular}\n}\n\\end{table}\n\n\\subsection{Survey of CNN-based methods}\nIn this section, we review various CNN-based crowd counting and density estimation methods along with their merits and drawbacks.\n\nWang \\textit{et al. } \\cite{wang2015deep} and Fu \\textit{et al. } \\cite{fu2015fast} were among the first ones to apply CNNs for the task of crowd density estimation. Wang \\textit{et al. } proposed an end-to-end deep CNN regression model for counting people from images of extremely dense crowds. They adopted the AlexNet network \\cite{krizhevsky2012imagenet} in their architecture, where the final fully connected layer of 4096 neurons is replaced with a single neuron layer for predicting the count. In addition, in order to reduce false responses caused by background objects such as buildings and trees in the images, the training data is augmented with additional negative samples whose ground truth count is set as zero. In a different approach, Fu \\textit{et al. 
} proposed to classify the image into one of the five classes: very high density, high density, medium density, low density and very low density instead of estimating density maps. Multi-stage ConvNet from the works of Sermanet \\textit{et al. } \\cite{sermanet2012convolutional} was adopted for better shift, scale and distortion invariance. In addition, they used a cascade of two classifiers to achieve boosting in which the first one specifically samples misclassified images whereas the second one reclassifies rejected samples. \n\nZhang \\textit{et al. } \\cite{zhang2015cross} analyzed existing methods to identify that their performance reduces drastically when applied to a new scene that is different from the training dataset. To overcome this issue, they proposed to learn a mapping from images to crowd counts and to adapt this mapping to new target scenes for cross-scene counting. To achieve this, they first learned their network by alternatively training on two objective functions: crowd count and density estimation which are related objectives. By alternatively optimizing over these objective functions one is able to obtain better local optima. In order to adapt this network to a new scene, the network is fine-tuned using training samples that are similar to the target scene. It is important to note that the network is adapted to new target scenes without any extra label information. The overview of their approach is shown in Fig. \\ref{fig:cross_scene}. Also, in contrast to earlier methods that use the sum of Gaussian kernels centered on the locations of objects, a new method for generating ground truth density map is proposed that incorporates perspective information. In doing so, the network is able to perform perspective normalization thereby achieving robustness to scale and perspective variations. Additionally, they introduced a new dataset for the purpose of evaluating cross-scene crowd counting. The network is evaluated for cross-scene crowd counting as well as single scene crowd counting and superior results are demonstrated for both scenarios.\n\n\n\\begin{figure}[htp!]\n\\begin{center}\n\\begin{minipage}{1\\linewidth}\n\\includegraphics[width=\\linewidth]{Fig8}\n\\end{minipage}%\n\\vskip-8pt\n\\captionof{figure}{Overview of cross scene crowd counting proposed by Zhang \\textit{et al. } \\cite{zhang2015cross}.}\n\\label{fig:cross_scene}\n\\end{center}\n\\end{figure}\n\n\nInspired by the success of cross-scene crowd counting \\cite{zhang2015cross}, Walach and Wolf \\cite{walach2016learning} performed layered boosting and selective sampling. Layered boosting involves iteratively adding CNN layers to the model such that every new layer is trained to estimate the residual error of the earlier prediction. For instance, after the first CNN layer is trained, the second CNN layer is trained on the difference between the estimation and ground truth. This layered boosting approach is based on the notion of Gradient Boosting Machines (GBM) \\cite{friedman2001greedy} which are a subset of powerful ensemble techniques. An overview of their boosting approach is presented in Fig. \\ref{fig:learning_to_count_arch}. The other contribution made by the authors is the use of sample selection algorithm to improve the training process by reducing the effect of low quality samples such as trivial samples or outliers. According to the authors, the samples that are correctly classified early on are trivial samples. 
Presenting such samples for training even after the networks have learned to classify them tends to introduce bias in the network for such samples, thereby affecting its generalization performance. Another source of training inefficiency is the presence of outliers such as mislabeled samples. Apart from affecting the network's performance, these samples increase the training time. To overcome this issue, such samples are eliminated out of the training process for a number of epochs. The authors demonstrated that their method reduces the count estimation error by 20\\% to 30\\% over existing state-of-the-art methods at that time on different datasets. \n\\begin{figure}[t]\n\\begin{center}\n\\begin{minipage}{0.7\\linewidth}\n\\includegraphics[width=1\\linewidth]{Fig9}\n\\end{minipage}%\n\\vskip-8pt\n\\captionof{figure}{Overview of learning to count using boosting by Walach and Wolf \\cite{walach2016learning}.}\n\\label{fig:learning_to_count_arch}\n\\end{center}\n\\end{figure}\n\n\nIn contrast to the above methods that use patch-based training, Shang \\textit{et al. } \\cite{skaug2016end} proposed an end-to-end count estimation method using CNNs (Fig. \\ref{fig:end_to_end_arch}). Instead of dividing the image into patches, their method takes the entire image as input and directly outputs the final crowd count. As a result, computations on overlapping regions are shared by combining multiple stages of processing leading to a reduction of complexity. The network simultaneously learns to estimate local counts and can be viewed as learning a patch level counting model which enables faster training. By doing so, contextual information is incorporated into the network, enabling it to ignore background noises and achieve better performance. The network is composed of three parts: (1) Pre-trained GoogLeNet model \\cite{szegedy2015going}, (2) Long-short time memory (LSTM) decoders for local count, and (3) Fully connected layers for the final count. The network takes an image as input and computes high-dimensional CNN feature maps using the GoogleNet network. Local blocks in these high-dimensional features are decoded into local count using a LSTM unit. A set of fully connected layers after the LSTM unit map the local counts into global count. The two counting objectives are jointly optimized during training.\n\n\\begin{figure}[htp!]\n\\begin{center}\n\\begin{minipage}{1\\linewidth}\n\\includegraphics[width=\\linewidth]{Fig10}\n\\end{minipage}%\n\\vskip-8pt\n\\captionof{figure}{Overview of the end-to-end counting method proposed by Shang \\textit{et al. } \\cite{skaug2016end}. GoogLeNet is used to compute high-dimensional features which are further decoded into local counts using LSTM units. }\n\\label{fig:end_to_end_arch}\n\\end{center}\n\\end{figure}\n\nIn an effort to capture semantic information in the image, Boominathan \\textit{et al. } \\cite{boominathan2016crowdnet} combined deep and shallow fully convolutional networks to predict the density map for a given crowd image. The combination of two networks enables one to build a model robust to non-uniform scaling of crowd and variations in perspective. Furthermore, an extensive augmentation of the training dataset is performed in two ways. Patches from the multi-scale image representation are sampled to make the system robust to scale variations. Fig. \\ref{fig:boominathan_arch} shows overview of this method. 
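\n\nThe general deep-plus-shallow fusion idea can be sketched as follows; this is a schematic PyTorch illustration in which the layer and channel sizes are made-up values for exposition rather than the actual CrowdNet configuration of \\cite{boominathan2016crowdnet}:\n\\begin{verbatim}\n# Sketch of fusing a deep and a shallow column into one density-map\n# predictor; channel and kernel sizes are illustrative only.\nimport torch\nimport torch.nn as nn\n\nclass DeepShallowCounter(nn.Module):\n    def __init__(self):\n        super().__init__()\n        # Deep column: small filters, more layers.\n        self.deep = nn.Sequential(\n            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),\n            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),\n            nn.MaxPool2d(2),\n            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU())\n        # Shallow column: larger filters, fewer layers, same stride.\n        self.shallow = nn.Sequential(\n            nn.Conv2d(3, 24, 9, padding=4), nn.ReLU(),\n            nn.MaxPool2d(2),\n            nn.Conv2d(24, 24, 7, padding=3), nn.ReLU())\n        # A 1x1 convolution fuses the concatenated feature maps into a\n        # single-channel density map.\n        self.fuse = nn.Conv2d(128 + 24, 1, 1)\n\n    def forward(self, x):\n        d, s = self.deep(x), self.shallow(x)\n        return self.fuse(torch.cat([d, s], dim=1))\n\n# The predicted count is the integral (sum) of the density map:\n# count = DeepShallowCounter()(img).sum()\n\\end{verbatim}\nThe key design choice in such a sketch is that both columns operate at the same output stride, so their feature maps can be concatenated before the final fusion layer.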
\n\n\\begin{figure}[htp!]\n\\begin{center}\n\\begin{minipage}{1\\linewidth}\n\\includegraphics[width=\\linewidth]{Fig11}\n\\end{minipage}%\n\\vskip-8pt\n\\captionof{figure}{Overview of counting method proposed by Boominathan \\textit{et al. } \\cite{boominathan2016crowdnet}. A deep network is used in combination with a shallow network to address scale variations across images. }\n\\label{fig:boominathan_arch}\n\\end{center}\n\\end{figure}\n\nIn another approach, Zhang \\textit{et al. } \\cite{zhang2016single} proposed a multi-column based architecture (MCNN) for images with arbitrary crowd density and arbitrary perspective. Inspired by the success of multi-column networks for image recognition \\cite{ciregan2012multi}, the proposed method ensures robustness to large variation in object scales by constructing a network that comprises of three columns corresponding to filters with receptive fields of different sizes (large, medium, small) as shown in Fig. \\ref{fig:single_image_arch}. These different columns are designed to cater to different object scales present in the images. Additionally, a new method for generating ground truth crowd density maps is proposed. In contrast to existing methods that either use sum of Gaussian kernels with a fixed variance or perspective maps, Zhang \\textit{et al. } proposed to take into account perspective distortion by estimating spread parameter of the Gaussian kernel based on the size of the head of each person within the image. However, it is impractical to estimate head sizes and their underlying relationship with density maps. Instead they used an important property observed in high density crowd images that the head size is related to distance between the centers of two neighboring persons. The spread parameter for each person is data-adaptively determined based on its average distance to its neighbors. Note that the ground truth density maps created using this technique incorporate distortion information without the use of perspective maps. Finally, considering that existing crowd counting datasets do not cater to all the challenging situations encountered in real world scenarios, a new ShanghaiTech crowd datasets is constructed. This new dataset includes 1198 images with about 330,000 annotated heads. \n\n\n\n\\begin{figure}[htp!]\n\\begin{center}\n\\begin{minipage}{1\\linewidth}\n\\includegraphics[width=\\linewidth]{Fig12}\n\\end{minipage}%\n\\vskip-8pt\n\\captionof{figure}{Overview of single image crowd counting via multi-column network by Zhang \\textit{et al. } \\cite{zhang2016single}.}\n\\label{fig:single_image_arch}\n\\end{center}\n\\end{figure}\n \n\n\nSimilar to the above approach, Onoro and Sastre \\citep{onoro2016towards} developed a scale aware counting model called Hydra CNN that is able to estimate object densities in a variety of crowded scenarios without any explicit geometric information of the scene. First, a deep fully-convolutional neural network (which they call as Counting CNN) with six convolutional layers is employed. Motivated by the observation of earlier work \\cite{zhang2015cross,loy2013crowd} that incorporating perspective information for geometric correction of the input features results in better accuracy, geometric information is incorporated into the Counting CNN (CCNN). To this effect, they developed Hydra CNN that learns a multi-scale non-linear regression model. As shown in Fig. \\ref{fig:hydra_cnn_arch} the network consists of 3 heads and a body with each head learning features for a particular scale. 
Each head of the Hydra-CNN is constructed using the CCNN model whose outputs are concatenated and fed to the body. The body consists of a set of two fully-connected layers followed by a rectified linear unit (ReLu), a dropout layer and a final fully connected layer to estimate the object density map. While the different heads extract image descriptors at different scales, the body learns a high-dimensional representation that fuses the multi-scale information provided by the heads. This network design of Hydra CNN is inspired by the work of Li et al. \\cite{li2015visual}. Finally, the network is trained with pyramid of image patches extracted at multiple scales. The authors demonstrated through their experiments that the Hydra CNN is able to perform successfully in scenarios and datasets with significant variations in the scene. \n\n\n\\begin{figure}[htp!]\n\\begin{center}\n\\begin{minipage}{1\\linewidth}\n\\includegraphics[width=\\linewidth]{Fig13}\n\\includegraphics[width=\\linewidth]{Fig14}\n\\end{minipage}%\n\\vskip-10pt\n\\captionof{figure}{Overview of Hydra-CNN by Onoro \\textit{et al. } \\citep{onoro2016towards}.}\n\\label{fig:hydra_cnn_arch}\n\\end{center}\n\\end{figure}\n\n\nInstead of training all regressors of a multi-column network \\cite{zhang2016single} on all the input patches, Sam \\textit{et al. } \\cite{sam2017switching} argue that better performance is obtained by training regressors with a particular set of training patches by leveraging variation of crowd density within an image. To this end, they proposed a switching CNN that cleverly selects an optimal regressor suited for a particular input patch. As shown in Fig. \\ref{fig:switching_cnn}, the proposed network consists of multiple independent regressors similar to multi-column network \\cite{zhang2016single} with different receptive fields and a switch classifier. The switch classifier is trained to select the optimal regressor for a particular input patch. Independent CNN crowd density regressors are trained on patches sampled from a grid in a given crowd scene. The switch classifier and the independent regressors are alternatively trained. The authors describe multiple stages of training their network. First, the independent regressors are pretrained on image patches to minimize the Euclidean distance between the estimated density map and ground truth. This is followed by a differential training stage where, the count error is factored in to improve the counting performance by back-propagating a regressor with the minimum count error for a given training patch. After training the multiple regressors, a switch classifier based on VGG-16 architecture \\cite{simonyan2014very} is trained to select an optimal regressor for accurate counting. Finally, the switch classifier and CNN regressors are co-adapted in the coupled training stage.\n\n\n\\begin{figure}[htp!]\n\\begin{center}\n\\begin{minipage}{1\\linewidth}\n\\includegraphics[width=\\linewidth]{Fig32}\n\\end{minipage}%\n\\vskip-8pt\n\\captionof{figure}{Overview of Switching CNN by Sam \\textit{et al. } \\cite{sam2017switching}.}\n\\label{fig:switching_cnn}\n\\end{center}\n\\end{figure}\n \n\nWhile the above methods concentrated on incorporating scale information in the network, Sheng \\textit{et al. } in \\cite{sheng2016crowd} proposed to integrate semantic information by learning locality-aware feature sets. 
Noting that earlier methods that use hand-crafted features ignored key semantic and spatial information, the authors proposed a new image representation which incorporates semantic attributes as well as spatial cues to improve the discriminative power of feature representations. They defined semantic attributes at the pixel level and learned semantic feature maps via deep CNN. The spatial information in the image is encoded using locality-aware features in the semantic attribute feature map space. The locality-aware features (LAF) are built on the idea of spatial pyramids on neighboring patches thereby encoding spatial context and local information. The local descriptors from adjacent cells are then encoded into image representations using weighted VLAD encoding method. \n\n\n\n\nSimilar to \\cite{zhang2016single,onoro2016towards}, Kumagai \\textit{et al. } \\cite{kumagai2017mixture}, based on the observation that a single predictor is insufficient to appropriately predict the count in the presence of large appearance changes, proposed a Mixture of CNNs (MoCNN) that are specialized to a different scene appearances. As shown in Fig. \\ref{fig:mixture_cnn_arch}, the architecture consists of a mixture of expert CNNs and a gating CNN that adaptively selects the appropriate CNN among the experts according to the appearance of the input image. For prediction, the expert CNNs predict crowd count in the image while the gating CNN predicts appropriate probabilities for each of the expert CNNs. These probabilities are further used as weighting factors to compute the weighted average of the counts predicted by all the expert CNNs.\n\n\n\\begin{figure}[htp!]\n\\begin{center}\n\\begin{minipage}{.8\\linewidth}\n\\includegraphics[width=\\linewidth]{Fig15}\n\\end{minipage}%\n\\vskip-8pt\n\\captionof{figure}{Overview of MoC (Mixture of CNN) for crowd counting by Kumagai \\textit{et al. } \\cite{kumagai2017mixture}.}\n\\label{fig:mixture_cnn_arch}\n\\end{center}\n\\end{figure}\n\n\n\n\nMotivated by the success of scale aware models \\cite{zhang2016single,onoro2016towards}, Marsden \\textit{et al. } \\cite{marsden2016fully} proposed to incorporate scale into the models with much less number of model parameters. Observing that the earlier scale aware models \\cite{zhang2016single,onoro2016towards} are difficult to optimize and are computationally complex, Marsden \\textit{et al. } \\cite{marsden2016fully} proposed a single column fully convolutional network where the scale information is incorporated into the model using a simple yet effective multi-scale averaging step during prediction without any increase in the model parameters. The method addresses the issues of scale and perspective changes by feeding multiple scales of test image into the network during prediction phase. The crowd count is estimated for each scale and the final count is obtained by taking an average of all the estimates. Additionally, a new training set augmentation scheme is developed to reduce redundancy among the training samples. In contrast to the earlier methods that use randomly cropped patches with high degree of overlap, the training set in this work is constructed using the four image quadrants as well as their horizontal flips ensuring no overlap. This technique avoids potential overfit when the network is continuously exposed to the same set of pixels during training, thereby improving the generalization performance of the network. 
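A minimal sketch of this multi-scale averaging step at prediction time is given below; the particular scales and the \\texttt{model.predict} interface are illustrative assumptions rather than the exact configuration used in \\cite{marsden2016fully}:\n\\begin{verbatim}\n# Sketch of multi-scale averaging at prediction time: resize the test\n# image to several scales, estimate a count per scale, and average.\nimport numpy as np\nimport cv2\n\ndef multi_scale_count(model, image, scales=(0.5, 0.75, 1.0, 1.25, 1.5)):\n    estimates = []\n    for s in scales:\n        resized = cv2.resize(image, None, fx=s, fy=s)\n        density = model.predict(resized)  # assumed: returns a density map\n        estimates.append(density.sum())   # count = sum of the density map\n    return float(np.mean(estimates))\n\\end{verbatim}\nSince only the input is rescaled, this step adds inference cost but no extra model parameters.\n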
In addition, the generalization performance of the proposed method is studied by measuring cross dataset performance. \n\n\n\\begin{figure}[htp!]\n\\begin{center}\n\\begin{minipage}{0.8\\linewidth}\n\\includegraphics[width=\\linewidth]{Fig16}\n\\end{minipage}%\n\\vskip-8pt\n\\captionof{figure}{Overview of Fully Convolutional Network for crowd counting by Marsden \\textit{et al. } \\cite{marsden2016fully}.}\n\\label{fig:congested_arch}\n\\end{center}\n\\end{figure}\n\n\nInspired by the superior results achieved by simultaneous learning of related tasks \\cite{ranjan2016hyperface,yu2017iprivacy}, Sindagi \\textit{et al. } \\cite{sindagi2017cnnbased} and Marsden et al. \\cite{marsden2017resnetcrowd} explored multi-task learning to boost individual task performance. Marsden et al. \\cite{marsden2017resnetcrowd} proposed a Resnet-18 \\cite{he2016deep} based architecture for simultaneous crowd counting, violent behaviour detection and crowd density level classification. The network consists of initial 5 convolutional layers of Resnet18 including batch normalisation layers and skip connections form the primary module. The convolutional layers are followed by a set of task specific layers. Finally, sum of all the losses corresponding to different tasks is minimized. Additionally, the authors constructed a new 100 image dataset specifically designed for multi-task learning of crowd count and behaviour. In a different approach, Sindagi \\textit{et al. } \\cite{sindagi2017cnnbased} proposed a cascaded CNN architecture to incorporate learning of a high-level prior to boost the density estimation performance. Inspired by \\cite{chen2016cascaded}, the proposed network simultaneously learns to classify the crowd count into various density levels and estimate density map (as shown in Fig. \\ref{fig:cascaded_mtcnn}). Classifying crowd count into various levels is equivalent to coarsely estimating the total count in the image thereby incorporating a high-level prior into the density estimation network. This enables the layers in the network to learn globally relevant discriminative features. Additionally, in contrast to most recent work, they make use of transposed convolutional layers to generate high resolution density maps.\n\n\n\\begin{figure}[htp!]\n\\begin{center}\n\\begin{minipage}{1\\linewidth}\n\\includegraphics[width=\\linewidth]{Fig33}\n\\end{minipage}%\n\\vskip-8pt\n\\captionof{figure}{Overview of Cascaded Multi-task CNN by Sindagi \\textit{et al. } \\cite{sindagi2017cnnbased}.}\n\\label{fig:cascaded_mtcnn}\n\\end{center}\n\\end{figure}\n\n\n\nIn a recent work, Kang \\textit{et al. } \\cite{kang2017beyond} explored maps generated by density estimation methods for the purpose of various crowd analysis tasks such as counting, detection and tracking. They performed a detailed analysis of the effect of using full-resolution density maps on the performance of these tasks. They demonstrated through their experiments that full resolution density maps improved the performance of localization tasks such as detection and tracking. Two different approaches are considered for generating full-resolution maps. In the first approach, a sliding window based CNN regressor is used for pixel-wise density prediction. In the second approach, Fully Convolutional Networks \\cite{long2015fully} along with skip connections are used to learning a non-linear mapping between input image and the corresponding density map.\n\n\n\n\nIn a slightly different application context of counting, Mundhenk \\textit{et al. 
} \\cite{mundhenk2016large} and Arteta \\textit{et al. } \\cite{arteta2016counting} proposed to count different types of objects such as cars and penguins respectively. Mundhenk \\textit{et al. } \\cite{mundhenk2016large} addressed the problem of automated counting of automobiles from satellite\/aerial platforms. Their primary contribution is the creation of a large diverse set of cars from overhead images. Along with the large dataset, they present a deep CNN-based network to recognize the number of cars in patches. The network is trained in a classification setting where the output of the network is a class that is indicative of the number of objects in the input image. Also, they incorporated contextual information by including additional regions around the cars in the training patches. Three different networks based on AlexNet \\cite{krizhevsky2012imagenet}, GoogLeNet \\cite{szegedy2015going} and ResNet \\cite{he2016deep} with Inception are evaluated. For a different application of counting penguins in images, Arteta \\textit{et al. } \\cite{arteta2016counting} proposed a deep multi-task architecture for accurate counting even in the presence of labeling errors. The network is trained in a multi-task setting where, the tasks of foreground-background subtraction and uncertainty estimation along with counting are jointly learned. The authors demonstrated that the joint learning especially helps in learning a counting model that is robust to labeling errors. Additionally, they exploited scale variations and count variability across the annotations to incorporate scale information of the object and prediction of annotation difficulty respectively into the model. The network was evaluated on a newly created Penguin dataset. \n\n\nZhao \\textit{et al. } addressed a higher level cognitive task of counting people that cross a line in \\citep{zhao2016crossing}. Though the task is a video-based application, it comprises of a CNN-based model that is trained with pixel-level supervision maps similar to single image crowd density estimation methods, making it a relevant approach to be included in this article. Their method consists of a two-phase training scheme (as shown in Fig. \\ref{fig:crossing_line_arch}) that decomposes original counting problem into two sub-problems: estimating crowd density map and crowd velocity map where the two tasks share the initial set of layers enabling them to learn more effectively. The estimated crowd density and crowd velocity maps are then multiplied element-wise to generate the crowd counting maps. Additionally, they contributed a large-scale dataset for evaluating crossing-line crowd counting algorithms, which includes 5 different scenes, 3,100 annotated frames and 5,900 annotated pedestrians. \n\n\n\\begin{figure}[htp!]\n\\begin{center}\n\\begin{minipage}{1\\linewidth}\n\\includegraphics[width=\\linewidth]{Fig17}\n\\end{minipage}%\n\\vskip-8pt\n\\captionof{figure}{Overview of the method proposed by Zhao \\textit{et al. } \\cite{zhao2016crossing} for counting people crossing a line.}\n\\label{fig:crossing_line_arch}\n\\end{center}\n\\end{figure}\n\n\\section{Discussion}\n\nWith a variety of methods discussed in Section \\ref{sec:survey_cnn}, we analyze various advantages and disadvantages of the broad approaches followed by these methods in this section. \n\nZhang \\textit{et al. 
} \\cite{zhang2015cross} were among the first ones to address the problem of adapting models to new unlabelled datasets using a simple and effective method based on finding similar patches across datasets. However, their method is heavily dependent on accurate perspective maps which may not be necessarily available for all the datasets. Additionally, the use of 72$\\times$72 sized patches for training and evaluation ignores global context which is necessary for accurate estimation of count. Walach \\textit{et al. } \\cite{walach2016learning} successfully addressed training inefficiencies in earlier methods using a layered boosting approach and a simple sample selection method. However, similar to Zhang \\textit{et al. } \\cite{zhang2015cross}, their method involves patch-based training and evaluation resulting in loss of global context information along with inefficiency during evaluation due to the use of a sliding window approach. Additionally, these methods tend to ignore scale variance among the dataset assuming that their models will implicitly learn the invariance.\n\nIn an effort to explicitly model scale invariance, several methods involving combination of networks were proposed (\\cite{zhang2016single,onoro2016towards,sam2017switching,kumagai2017mixture,boominathan2016crowdnet}). \nWhile these methods demonstrated significant improvements in the performance using multiple column networks and a combination of deep and shallow networks, the invariance achieved is limited by the number of columns present in the network and receptive field sizes which are chosen based on the scales present in the dataset. Additionally, these methods do not explicitly model global context information which is crucial for a task such as crowd counting. In a different approach, Marsden \\textit{et al. } \\cite{marsden2016fully} attempt to address the scale issue by performing a multi-scale averaging during the prediction phase. While being simple and effective, it results in an inefficient inference stage. Additionally, these methods do not explicitly encode global context present in an image which can be crucial for improving the count performance. To this end, few approaches model local and global context \\cite{sheng2016crowd,skaug2016end} by considering key spatial and semantic information present in the image. \n\nIn an entirely different approach, few methods \\cite{marsden2017resnetcrowd,sindagi2017cnnbased} take advantage of multi-task learning and incorporate high-level priors into the network. For instance, Sindagi \\textit{et al. } \\cite{sindagi2017cnnbased} simultaneously learn density estimation and a high-level prior in the form of crowd count classification. While they demonstrated high performance gain by learning an additional task of crowd density level classification, the number of density levels is dataset dependent and it needs to be carefully chosen based on the density levels present in the dataset. \n\n\n\n\n\n\n\n\\section{Datasets and results}\n\\label{sec:datasets_and_results}\nA variety of datasets have been created over the last few years driving researchers to create models with better generalization abilities. While the earlier datasets usually contain low density crowd images, the most recent ones focus on high density crowd thus posing numerous challenges such as scale variations, clutter and severe occlusion. The creation of these large scale datasets has motivated recent approaches to develop methods that cater to such challenges. 
In this section, we review five key datasets \\cite{chan2008privacy,chen2012feature,idrees2013multi,zhang2015cross,zhang2016single} followed by a discussion on the results of CNN-based approaches and recent traditional methods that were not included in the earlier surveys. \n\n\\begin{table*}[t!]\n\\caption{Summary of various datasets.}\n\\begin{center}\n\\vskip-15pt\\begin{tabular}{|l|c|c|c|c|c|c|}\n\\hline\nDataset & No. of images & Resolution & Min & Ave & Max & Total count \\\\\n\\hline\nUCSD \\cite{chan2008privacy} & 2000 & 158$\\times$238 & 11 & 25 & 46 & 49,885\\\\\n\\hline\nMall \\cite{chen2012feature} & 2000 & 320$\\times$240 & 13 & - & 53 & 62,325\\\\\n\\hline\nUCF\\textunderscore CC\\textunderscore 50 \\cite{idrees2013multi} & 50 & Varied & 94 & 1279 & 4543 & 63,974\\\\\n\\hline\nWorldExpo '10 \\cite{zhang2016data,zhang2015cross} & 3980 & 576$\\times$720 & 1 & 50 & 253 & 199,923\\\\\n\\hline\nShanghaiTech Part A \\cite{zhang2016single} & 482 & Varied & 33 & 501 & 3139 & 241,677\\\\\n\\hline\nShanghaiTech Part B \\cite{zhang2016single} & 716 & 768$\\times$1024 & 9 & 123 & 578 & 88,488\\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\label{tab:datasetsummary}\n\\end{table*}\n\n\\begin{figure*}[ht!]\n\\begin{center}\n\\begin{minipage}{0.163\\linewidth}\n\\includegraphics[width=\\linewidth]{Fig18}\n\\captionof*{figure}{(a)}\n\\end{minipage}%\n\\hfill\n\\begin{minipage}{0.163\\linewidth}\n\\includegraphics[width=\\linewidth]{Fig19}\n\\captionof*{figure}{(b)}\n\\end{minipage}\n\\begin{minipage}{0.163\\linewidth}\n\\includegraphics[width=\\linewidth]{Fig20}\n\\captionof*{figure}{(c)}\n\\end{minipage}\n\\begin{minipage}{0.163\\linewidth}\n\\includegraphics[width=\\linewidth]{Fig21}\n\\captionof*{figure}{(d)}\n\\end{minipage}\n\\begin{minipage}{0.163\\linewidth}\n\\includegraphics[width=\\linewidth]{Fig22}\n\\captionof*{figure}{(e)}\n\\end{minipage}\n\\begin{minipage}{0.163\\linewidth}\n\\includegraphics[width=\\linewidth]{Fig23}\n\\captionof*{figure}{(f)}\n\\end{minipage}\n\\end{center}\n\\vskip-10pt \\captionof{figure}{Sample images from various datasets. (a) UCSD \\cite{chan2008privacy} (b) Mall \\cite{chen2012feature} (c) UCF\\textunderscore CC\\textunderscore 50 \\cite{idrees2013multi} (d) WorldExpo '10 \\cite{zhang2015cross} (e) Shanghai Tech Part A \\cite{zhang2016single} (f) Shanghai Tech Part B \\cite{zhang2016single}. It can be observed that in the case of the UCSD and Mall datasets, the images come from the same video sequence, providing no variation in perspective across images.}\n\\label{fig:dataset}\n\\end{figure*}\n\n\n\\subsection{Datasets}\n\\textbf{UCSD dataset}: The UCSD dataset \\cite{chan2008privacy} was among the first datasets to be created for counting people. The dataset was collected from a video camera at a pedestrian walkway. The dataset consists of 2000 frames of size 238$\\times$158 from a video sequence along with ground truth annotations of each pedestrian in every fifth frame. For the rest of the frames, linear interpolation is used to create the annotations. A region-of-interest is also provided to ignore unnecessary moving objects such as trees. The dataset contains a total of 49,885 pedestrian instances and it is split into training and test sets. While the training set contains frames with indices 600 to 1399, the test set contains the remaining 1200 images. This dataset has relatively low density crowds with an average of around 15 people in a frame and, since the dataset was collected from a single location, there is no variation in the scene perspective across images. 
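\n\nThe interpolation used to obtain annotations for the unlabelled frames can be sketched as follows; matching of pedestrian identities across the two annotated frames is assumed here for simplicity, and the array layout is our own convention:\n\\begin{verbatim}\n# Sketch of linearly interpolating head annotations between two\n# annotated frames (UCSD annotates every fifth frame).\n# pos_a, pos_b: (N, 2) arrays with matched pedestrian positions.\nimport numpy as np\n\ndef interpolate_annotations(pos_a, pos_b, num_between):\n    frames = []\n    for k in range(1, num_between + 1):\n        t = k / (num_between + 1)\n        frames.append((1.0 - t) * pos_a + t * pos_b)\n    return frames  # list of (N, 2) arrays for the in-between frames\n\n# Example: annotations at frames 0 and 5 yield interpolated positions\n# for frames 1-4 via interpolate_annotations(pos_0, pos_5, 4).\n\\end{verbatim}\nThe per-frame count can then be taken as the number of interpolated positions that fall inside the provided region-of-interest.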
\n\\linebreak\n\n\n\\noindent \\textbf{Mall dataset}: Considering little variation in the scene type in the UCSD dataset, Chen \\textit{et al. } in \\cite{chen2012feature} collected a new Mall dataset with diverse illumination conditions and crowd densities. The dataset was collected using a surveillance camera installed in a shopping mall. Along with having various density levels, it also has different activity patterns (static and moving crowds). Additionally, the scene contained in the dataset has severe perspective distortion resulting in large variations in size and appearance of objects. The dataset also presents the challenge of severe occlusions caused by the scene objects, e.g.stall, indoor plants along the walking path. The video sequence in the dataset consists of 2000 frames of size 320$\\times$240 with 6000 instances of labelled pedestrians. The first 800 frames are used for training and the remaining 1200 frames are used for evaluation. In comparison to the UCSD dataset, the Mall dataset has relatively higher crowd density images. However, both the datasets do not have any variation in the scene perspective across images since they are a part of a single continuous video sequence. \n\\linebreak\n\n\\noindent \\textbf{UCF\\textunderscore CC\\textunderscore 50 dataset}: The UCF\\textunderscore CC\\textunderscore 50 \\cite{idrees2013multi} is the first truly challenging dataset constructed to include a wide range of densities and diverse scenes with varying perspective distortion. The dataset was created from publicly available web images. In order to capture diversity in the scene types, the authors collected images with different tags such as concerts, protests, stadiums and marathons. It contains a total of 50 images of varying resolutions with an average of 1280 individuals per image. A total of 63075 individuals were labelled in the entire dataset. The number of individuals varies from 94 to 4543 indicating a large variation across the images. The only drawback of this dataset is that only a limited number of images are available for training and evaluation. Considering the low number of images, the authors defined a cross-validation protocol for training and testing their approach where the dataset was divided into sets of 10 and a five fold cross-validation is performed. The challenges posed by this dataset are so enormous that even the results of recent CNN-based state-of-the-art approaches on this dataset are far from optimal.\n\\linebreak\n\n\\noindent \\textbf{WorldExpo '10 dataset}: Since some of the earlier approaches and datasets focussed primarily on single scene counting, Zhang \\textit{et al. } \\cite{zhang2015cross} introduced a dataset for the purpose of cross-scene crowd counting. The authors attempted to perform a data-driven cross-scene crowd counting for which they collected a new large-scale dataset that includes 1132 annotated video sequences captured by 108 surveillance cameras, all from Shanghai 2010 WorldExpo event. Large diversity in the scene types is ensured by collecting videos from cameras having disjoint bird views. The dataset consists of a total of 3980 frames of size 576 $\\times$ 720 with 199923 labelled pedestrians. The dataset is split into two parts: training set consisting of 1,127 one-minute long video sequences from 103 scenes and test set consisting of 5 one-hour long video sequences from 5 different scenes. Each test scene consists of 120 labelled frames with the crowd count varying from 1 to 220. 
Though an attempt is made to capture diverse scenes with varying density levels, the diversity is limited to only 5 scenes in the test set and the maximum crowd count is limited to 220. Hence, the dataset is not sufficient enough for evaluating approaches designed for extremely dense crowds in a variety of scenes. \n\\linebreak\n\n\n\\begin{table*}[htp!]\n\\centering\n\\caption{Comparison of results on various datasets. The CNN-based approaches provide significant improvements over traditional approaches that rely on hand-crafted representations. Further, among the CNN-based methods, scale aware and context aware approaches tend to achieve lower count error.}\n\\label{tab:results}\n\\resizebox{0.70\\textwidth}{!}{%\n\\begin{tabular}{|c|l|c|c|c|c|c|c|c|c|c|c|c|c|}\n\\hline\n\\multicolumn{1}{|l|}{} & Dataset & \\multicolumn{2}{c|}{UCSD} & \\multicolumn{2}{c|}{Mall} & \\multicolumn{2}{c|}{UCF CC 50} & \\multicolumn{2}{c|}{\\begin{tabular}[c]{@{}c@{}}WorldExpo\\\\ '10\\end{tabular}} & \\multicolumn{2}{c|}{\\begin{tabular}[c]{@{}c@{}}Shanghai\\\\ Tech-A\\end{tabular}} & \\multicolumn{2}{c|}{\\begin{tabular}[c]{@{}c@{}}Shanghai\\\\ Tech-B\\end{tabular}} \\\\ \\hline\n\\multicolumn{1}{|l|}{\\begin{tabular}[c]{@{}l@{}}Approach\\\\ type\\end{tabular}} & Method & MAE & MSE & MAE & MSE & MAE & MSE & MAE & MSE & MAE & MSE & MAE & MSE \\\\ \\hline\n\\multirow{5}{*}{\\rotatebox[origin=c]{90}{\\parbox[c]{4.5cm}{\\centering Traditional approaches}}} & \\begin{tabular}[c]{@{}l@{}}Multi-source multi-scale\\\\ Idrees \\textit{et al. } \\cite{idrees2013multi}\\end{tabular} & & & & & 468.0 & 590.3 & & & & & & \\\\ \\cline{2-14} \n \n & \\begin{tabular}[c]{@{}l@{}}Cumulative Attributes\\\\ Chen \\textit{et al. } \\cite{chen2013cumulative}\\end{tabular} & 2.07 & 6.86 & 3.43 & 17.07 & & & & & & & & \\\\ \\cline{2-14} \n & \\begin{tabular}[c]{@{}l@{}}Density learning\\\\ Lempitsky \\textit{et al. } \\cite{lempitsky2010learning}\\end{tabular} & 1.7 & & & & 493.4 & 487.1 & & & & & & \\\\ \\cline{2-14} \n & \\begin{tabular}[c]{@{}l@{}}Count forest\\\\ Pham \\textit{et al. } \\cite{pham2015count}\\end{tabular} & 1.61 & 4.40 & 2.5 & 10.0 & & & & & & & & \\\\ \\cline{2-14} \n & \\begin{tabular}[c]{@{}l@{}}Exemplar density\\\\ Wang \\textit{et al. } \\cite{wang2016fast}\\end{tabular} & 1.98 & 1.82 & 2.74 & {\\underline \\textbf{2.10}} & & & & & & & & \\\\ \\cline{2-14} \n & \\begin{tabular}[c]{@{}l@{}}Random projection forest\\\\ Xu \\textit{et al. } \\cite{xu2016crowd}\\end{tabular} & 1.90 & 6.01 & 3.22 & 15.5 & & & & & & & & \\\\ \\hline\n\\multirow{7}{*}{\\rotatebox[origin=c]{90}{\\parbox[c]{9.5cm}{\\centering CNN-based approaches}}} & \\begin{tabular}[c]{@{}l@{}}Cross-scene\\\\ Zhang \\textit{et al. } \\cite{zhang2015cross}\\end{tabular} & 1.60 & 3.31 & & & 467.0 & 498.5 & 12.9 & & 181.8 & 277.7 & 32.0 & 49.8 \\\\ \\cline{2-14} \n & \\begin{tabular}[c]{@{}l@{}}Deep + shallow\\\\ Boominathan \\textit{et al. } \\cite{boominathan2016crowdnet}\\end{tabular} & & & & & 452.5 & & & & & & & \\\\ \\cline{2-14} \n & \\begin{tabular}[c]{@{}l@{}}M-CNN\\\\ Zhang \\textit{et al. } \\cite{zhang2016single}\\end{tabular} & {\\underline \\textbf{1.07}} & {\\underline \\textbf{1.35}} & & & 377.6 & 509.1 & 11.6 & & 110.2 & 173.2 & 26.4 & 41.3 \\\\ \\cline{2-14} \n & \\begin{tabular}[c]{@{}l@{}}CNN-boosting\\\\ Walach and Wolf \\cite{walach2016learning}\\end{tabular} & 1.10 & & {\\underline \\textbf{2.01}} & & 364.4 & & & & & & & \\\\ \\cline{2-14} \n & \\begin{tabular}[c]{@{}l@{}}Hydra-CNN\\\\ Onoro \\textit{et al. 
} \\cite{onoro2016towards}\\end{tabular} & & & & & 333.7 & 425.2 & & & & & & \\\\ \\cline{2-14} \n & \\begin{tabular}[c]{@{}l@{}}Joint local \\& global count\\\\ Shang \\textit{et al. } \\cite{skaug2016end}\\end{tabular} & & & & & {\\underline \\textbf{270.3}} & & 11.7 & & & & & \\\\ \\cline{2-14} \n \n \n & \\begin{tabular}[c]{@{}l@{}}MoCNN \\\\ Kumagai \\textit{et al. } \\cite{kumagai2017mixture}\\end{tabular} & & & 2.75 & 13.4 & 361.7 & 493.3 & & & & & & \n \\\\ \\cline{2-14} \n \n & \\begin{tabular}[c]{@{}l@{}}FCN \\\\ Marsden \\textit{et al. } \\cite{marsden2016fully}\\end{tabular} & & & & & 338.6 & 424.5 & & & 126.5 & 173.5 & 23.76 & 33.12\n \\\\ \\cline{2-14} \n \n & \\begin{tabular}[c]{@{}l@{}}CNN-pixel \\\\ Kang \\textit{et al. } \\cite{kang2017beyond}\\end{tabular} & 1.12 & 2.06 & & & 406.2 & 404.0 & 13.4 & & & & & \n \\\\ \\cline{2-14} \n \n & \\begin{tabular}[c]{@{}l@{}}Weighted V-LAD\\\\ Sheng \\textit{et al. } \\cite{sheng2016crowd}\\end{tabular} & 2.86 & 13.0 & 2.41 & 9.12 & & & & & & & & \n \\\\ \\cline{2-14}\n \n & \\begin{tabular}[c]{@{}l@{}}Cascaded-MTL\\\\ Sindagi \\textit{et al. } \\cite{sindagi2017cnnbased}\\end{tabular} & & & & & 322.8 & {\\underline \\textbf{341.4 }} & & & 101.3 & 152.4 & {\\underline \\textbf{20.0}} & {\\underline \\textbf{31.1}}\n \\\\ \\cline{2-14} \n \n \n \n & \\begin{tabular}[c]{@{}l@{}}Switching-CNN\\\\ Sam \\textit{et al. } \\cite{sam2017switching}\\end{tabular} & 1.62 & 2.10 & & & 318.1 & 439.2 & {\\underline \\textbf{9.4}} & & {\\underline \\textbf{90.4} } & {\\underline \\textbf{ 135.0}} & 21.6 & 33.4 \\\\ \\hline\n \n \n \n \n\\end{tabular}%\n}\n\\end{table*}\n\n\n\\noindent \\textbf{Shanghai Tech dataset}: Zhang \\textit{et al. } \\cite{zhang2016single} introduced a new large-scale crowd counting dataset consisting of 1198 images with 330,165 annotated heads. The dataset is among the largest ones in terms of the number of annotated people and it contains two parts: Part A and Part B. Part A consists of 482 images that are randomly chosen from the Internet whereas Part B consists of images taken from the streets of metropolitan areas in Shanghai. Part A has considerably larger density images as compared to Part B. Both the parts are further divided into training and evaluation sets. The training and test of Part A has 300 and 182 images, respectively, whereas that of Part B has 400 and 316 images, respectively. The dataset successfully attempts to create a challenging dataset with diverse scene types and varying density levels. However, the number of images for various density levels are not uniform making the training and evaluation biased towards low density levels. Nevertheless, the complexities present in this dataset such as varying scales and perspective distortion has created new opportunities for more complex CNN network designs. \n\nSample images from the five datasets are shown in Fig. \\ref{fig:dataset}. The datasets are also summarized in Table \\ref{tab:datasetsummary}. It can be observed that the UCSD and the Mall dataset have relatively low density images and typically focus on single scene type. In contrast, the other datasets have significant variations in the density levels along with different perspectives across images. \n\n\\subsection{Discussion on results}\nResults of the recent traditional approaches along with CNN-based methods are tabulated in Table \\ref{tab:results}. The count estimation errors are reported directly from the respective original works. 
The following standard metrics are used to compare different methods:\n\\begin{equation}\nMAE = \\frac{1}{N}\\sum_{i=1}^{N}|y_i-y'_i|,\n\\end{equation}\n\\begin{equation}\nMSE = \\sqrt{\\frac{1}{N}\\sum_{i=1}^{N}|y_i-y'_i|^2},\n\\end{equation}\nwhere MAE is mean absolute error, MSE is mean squared error, $N$ is the number of test samples, $y_i$ is the ground truth count and $y'_i$ is the estimated count corresponding to the $i^{th}$ sample. We make the following observations regarding the results:\n\n\\begin{itemize}[noitemsep]\n\\item In general, CNN-based methods outperform the traditional approaches across all datasets. \n\\item While the CNN-based methods are especially effective in large density crowds with a diverse scene conditions, the traditional approaches suffer from high error rates in such scenarios. \n\\item Among the CNN-based methods, most performance improvement is achieved by scale-aware and context-aware models. It can be observed from Table \\ref{tab:results} that a reduction in count error is largely driven by the increase in the complexity of CNN models (due to addition of context and scale information). \n\\item While the multi-column CNN architecture \\cite{zhang2016single} achieves the state-of-the-art results on 3 datasets: UCSD, WorldExpo '10 and ShanghaiTech, the CNN-boosting approach by \\cite{walach2016learning} achieves the best results on the Mall dataset. The best results on the UCF\\textunderscore CC\\textunderscore 50 dataset are achieved by joint local and global count approach \\cite{skaug2016end} and Hydra-CNN \\cite{onoro2016towards}. \n\\item The work in \\cite{walach2016learning} suggests that layered boosting can achieve performances that are comparable to scale aware models. \n\\item The improvements obtained by selective sampling in \\cite{wang2015deep} and \\cite{walach2016learning} suggests that it helps to obtain unbiased performance. \n\\item Whole image-based methods such as Zhang \\textit{et al. } \\cite{zhang2016single} and Shang \\textit{et al. } \\cite{skaug2016end} are less computationally complex from the prediction point of view and they have proved to achieve better results over patch-based techniques. \n\\item Finally, techniques such as layered boosting and selective sampling \\cite{onoro2016towards,wang2016fast} not only improve the estimation error but also reduce the training time significantly. \n\\end{itemize}\n\n\n\n\n\n\n\\begin{figure}[htp!]\n\\begin{center}\n\\begin{minipage}{0.33\\linewidth}\n\\includegraphics[width=\\linewidth]{Fig24}\n\\end{minipage}%\n\\hfill\n\\begin{minipage}{0.33\\linewidth}\n\\includegraphics[width=\\linewidth]{Fig25}\n\\end{minipage}\n\\begin{minipage}{0.33\\linewidth}\n\\includegraphics[width=\\linewidth]{Fig26}\n\\end{minipage}\n\\end{center}\n\n\\begin{center}\n\\begin{minipage}{0.33\\linewidth}\n\\includegraphics[width=\\linewidth]{Fig27}\n\\captionof*{figure}{(a)}\n\\end{minipage}%\n\\hfill\n\\begin{minipage}{0.33\\linewidth}\n\\includegraphics[width=\\linewidth]{Fig28}\n\\captionof*{figure}{(b)}\n\\end{minipage}\n\\begin{minipage}{0.33\\linewidth}\n\\includegraphics[width=\\linewidth]{Fig29}\n\\captionof*{figure}{(c)}\n\\end{minipage}\n\\end{center}\n\\vskip-10pt \\captionof{figure}{Results of Zhang \\textit{et al. } \\cite{zhang2016single} on ShanghaiTech dataset. (a) Input image(b) Ground-truth density map (c) Estimated density maps. 
It can be observed that though the method is able to accurately estimate the crowd count, the estimated density maps are of poor quality.}\n\label{fig:quality}\n\end{figure}\n\n\n\begin{figure}[htp!]\n\begin{center}\n\begin{minipage}{1\linewidth}\n\includegraphics[width=0.47\linewidth]{Fig30}\n\includegraphics[width=0.47\linewidth]{Fig31}\n\end{minipage}%\n\vskip-8pt\n\captionof{figure}{Distribution of crowd counts in ShanghaiTech dataset. It can be observed that the dataset is highly imbalanced. }\n\label{fig:shanghaitech}\n\end{center}\n\end{figure}\n\n\section{Future research directions}\n\label{sec:future_research}\n\nBased on the analysis of various methods and results from Section \ref{sec:survey_cnn} and \ref{sec:datasets_and_results} and the trend of other developments in computer vision, we believe that CNN-based deeper architectures will dominate further research in the field of crowd counting and density estimation. We make the following observations regarding future trends in research on crowd counting:\n\n\begin{enumerate}[noitemsep]\n\item Given the requirement of large datasets for training deep networks, collection of large scale datasets (especially for extremely dense crowds) is essential. Though many datasets exist currently, only one of them (the UCF\textunderscore CC\textunderscore 50 \cite{idrees2013multi}) caters to large density crowds. However, the size of the dataset is too small for training deeper networks. Though Shanghai Tech \cite{zhang2016single} attempts to capture large density crowds, the number of images per density level is non-uniform, with a large number of images available for low density levels and very few samples for high density levels (as shown in Fig. \ref{fig:shanghaitech}). \n\n\item Considering the difficulty of training deep networks for new scenes, it would be important to explore how to leverage models trained on existing sources. Most of the existing methods retrain their models on a new scene, which is impractical in real world scenarios as it would be expensive to obtain annotations for every new scene. Zhang \textit{et al. } \cite{zhang2015cross} attempted to address this issue by performing a data-driven training without the need of labelled data for new scenes. In another approach, Liu \textit{et al. } \cite{liu2015bayesian} considered the problem of transfer learning for crowd counting. A model adaptation technique for a Gaussian process counting model was introduced. Considering the source model as a prior and the target dataset as a set of observations, the components are combined into a predictive distribution that captures information in both the source and target datasets. However, the idea of transfer learning or domain adaptation \cite{VMP_SPM_DA_2015} for crowd scenes is relatively unexplored and is a nascent area of research.\n\n\item Most crowd counting and density estimation methods have been designed for and evaluated on either single images or videos alone. Combining the techniques developed separately for these two settings is a non-trivial task. Development of low-latency methods that can operate in real-time for counting people in crowds from videos is another interesting problem to be addressed in the future. \n\n\n\n\item Another key issue ignored by earlier research is the quality of the estimated crowd density maps. Many existing CNN-based approaches have a number of max-pooling layers in their networks compelling them to regress on down-sampled density maps. 
Also, most methods optimize over traditional Euclidean loss which is known to have certain disadvantages \\cite{johnson2016perceptual1}. Regressing on down-sampled density maps using Euclidean loss results in low quality density maps. Fig. \\ref{fig:quality} demonstrates the results obtained using the state-of-the-art method \\cite{zhang2016single}. It can be observed that though accurate count estimates are obtained, the quality of the density maps is poor. As a result, these poor quality maps adversely affect other higher level cognition tasks which depend on them. Recent work on style-transfer \\cite{zhang17ICCV}, image de-raining \\cite{zhang2017image} and image-to-image translation \\cite{pix2pix2016} have demonstrated promising results from the use of additional loss functions such as adversarial loss and perceptual loss. In principle, density estimation can be considered as an image-to-image translation problem and it would be interesting to see the effect of these recent loss functions. Generating high quality density maps along with low count estimation error would be another important issue to be addressed in the future.\n\n\n\n\n\n\\item Finally, considering advancements by scale-aware \\cite{zhang2016single,onoro2016towards} and context-aware models \\cite{skaug2016end}, we believe designing networks to incorporate additional contextual and scale information will enable further progress. \n\n\n\\end{enumerate}\n\n\\section{Conclusion}\n\\label{sec:conclusion}\nThis article presented an overview of recent advances in CNN-based methods for crowd counting and density estimation. In particular, we summarized various methods for crowd counting into traditional approaches (that use hand-crafted features) and CNN-based approaches. The CNN-based approaches are further categorized based on the training process and the network property. Obviously all the literature on crowd counting cannot be covered, hence, we have chosen a representative subset of the latest approaches for a detailed analysis and review. We also reviewed the results demonstrated by various traditional and CNN-based approaches to conclude that CNN-based methods are more adept at handling large density crowds with variations in object scales and scene perspective. Additionally, we observed that incorporating scale and contextual information in the CNN-based methods drastically improves the estimation error. Finally, we identified some of the most compelling challenges and issues that confront research in crowd counting and density estimation using computer vision and machine leaning approaches. \n\n\\vspace{5pt}\n\\noindent \\textbf{Acknowledgement}\nThis work was supported by US Office of Naval Research (ONR) Grant YIP N00014-16-1-3134.\n\n\n\\bibliographystyle{model2-names}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction to dibosons}\nProcesses involving multiple massive bosons probe right at the\ncore of the SM, i.e. its electro-weak symmetries and resulting\ngauge boson structure. Our current understanding requires some -- yet\nundetected --\nmechanism to regularize processes like $W_L W_L \\rightarrow W_L W_L$ at the\n$O(1)$ TeV scale in order to maintain unitarity \\cite{Chanowitz:2004gk}.\nThis hints at an experimental no-lose situation where we either will\nfind the scalar\nresponsible for the Higgs mechanism, or find new interactions\ninteracting strongly with the longitudinal gauge modes. 
If none of these cases are seen, we will be obliged to go back to the sub-TeV region and ask ourselves\nwhat it is that we don't understand at the fundamental level.\n\nOur lack of knowledge concerning the underlying electro-weak symmetry\nbreaking (EWSB) mechanism of the SM desperately calls for experimental input to\nguide us. Clearly, as indicated by the discussion above, diboson\nscattering is likely to provide the ultimate EWSB exploration tool.\nUnfortunately, the cross-section for diboson scattering is expected to\nbe unobservable at the Tevatron compared to the LHC \cite{Butterworth:2002tt}.\nAnother way to get a handle on the EWSB is to reconstruct massive triple gauge\nbosons, but here too the expected Tevatron cross-section is beyond reach.\nIt is primarily for inclusive dibosons that we are currently gaining\nsensitivity to anomalous gauge interactions, as new phase-space regions\nopen up with the increase of integrated Tevatron luminosity.\n\nLEP 2 puts stringent limits on\nthe anomalous triple gauge couplings (ATGC) \cite{Bruneliere:2004ab}, which\nare the lowest-order general effective theory operator couplings related\nto the diboson final state. However, the LEP 2 energy scale\n$\sqrt{\hat{s}} \sim 200$ GeV is sufficiently small compared to the new\nphysics scale $\Lambda$ that higher order operators can be neglected. At\nthe Tevatron this is no longer the case, since $O(1)$ TeV resonances can\nbe produced on-shell and a form factor must be\nintroduced in order to maintain unitarity of the model \cite{Hagiwara:1989mx}.\nThe form factor is equivalent to an infinite series of higher order operators\nand implies a non-linear extension of the linear model commonly used\nat LEP. For this reason, the quoted limits on the couplings refer to\nslightly different assumptions at the two accelerators. One should also be\naware that Tevatron measurements are in many ways complementary to\nthose made at LEP; e.g. new, unexpected\non-shell dynamics may kick in at higher energies and with different\nproduction mechanisms.\n\n\section{Tevatron measurements}\nAs can be seen from table \ref{tab:diboxs}, it is only recently that\nmassive diboson signals became statistically significant at hadron colliders.\n\begin{table}[t]\n\caption{New(**) and recent diboson measurements at the Tevatron. The\nphoton($\gamma$) has a transverse momentum cut of 7(8) GeV for the\nCDF(D0) measurements. 
All cross-sections are within the theoretical\nexpectations.\n\label{tab:diboxs}}\n\vspace{0.4cm}\n\begin{center}\n\begin{tabular}{|c|c|c|c|c|}\n\hline\n& \multicolumn{2}{|c|}{CDF} & \multicolumn{2}{|c|}{D0} \\\n\hline\nDiboson & Events & $\sigma$[pb] & Events & $\sigma$[pb] \\\n\hline\n$W(l\nu)\gamma \times BR(l\nu)$ & 208(200\/pb)& $18.1 \pm 3.1$ &\n141(162\/pb) & $14.8 \pm 1.9$ \\\n$Z(ll)\gamma \times BR(ll)$ & 66 (200\/pb)& $4.6 \pm 0.6$ &\n244(320\/pb) & $4.2 \pm 0.5$ \\\n$W(l\nu)W(l\nu)$ & 12(200\/pb) &\n\begin{minipage}{1in}\n$14.6^{+5.8}_{-5.1}$(stat) $^{+1.8}_{-3.0}$(sys) $\pm 0.9$(lum)\n\end{minipage}\n& 17(252\/pb) &\n\begin{minipage}{1in}\n$13.8^{+4.3}_{-3.8}$(stat) $^{+1.2}_{-0.9}$(sys) $\pm 0.9$(lum)\n\end{minipage}\n\\\n$W(l\nu)(W(jj)+Z(jj))$ & n.a.(350\/pb) & $<36$ (**) & & \\\n$W(l\nu)Z(ll)$ & n.a.(825\/pb)& $<6.4$ (**) & n.a.(320\/pb) & $<13.3$ \\\n\hline\n\end{tabular}\n\end{center}\n\end{table}\nThe measured cross-sections decrease with the diboson mass, and the first\nprocess to have been firmly established during the last two years is WW to\nleptons. This is very different from LEP 2, where thousands of WW pairs have\nbeen observed in many decay channels, both leptonic and purely hadronic\nmodes. However, constraints on new physics are dominated by high\nenergy scales, and the inclusive cross-section is not the most efficient\nobservable at the Tevatron for anomalous gauge interactions. A closer\nlook at the anomalous cross-section contributions \cite{Hagiwara:1989mx}\nreveals that they, apart from a boson angular dependence, scale with\nthe parton center-of-mass energy ($\sqrt{\hat{s}}$). Thus the very high energy\ntail of the boson transverse momentum turns out to be an extremely sensitive and\nrobust observable for ATGC.\n\nIn the following subsections I will report on two brand new measurements\nof WW and WZ dibosons from CDF. It is interesting to note that even though the\nprocesses themselves are just below the threshold of observation, they\nhave significant constraining power for ATGC.\n\n\subsection{The process $WZ \rightarrow l\nu ll$}\n\n\begin{figure}\n\begin{center}\n\psfig{figure=feyn_wz.eps,width=12cm}\n\caption{Tree level graphs for WZ.\n\label{fig:wz}}\n\end{center}\n\end{figure}\n\nThe WZ process, see figure \ref{fig:wz}, is particularly interesting among\nthe dibosons.\nSince it could not be produced at tree level at LEP, we have so far no\ndirect experimental measurement of the WWZ vertex. Furthermore,\nthe absence of interference with other triple gauge vertices at tree level\nmakes this channel unique, since it can provide an unambiguous\nhandle on the WWZ couplings in case of any observed anomaly that needs to be\ndisentangled in, e.g., the WW production.\n\nA new CDF search for WZ to leptons has been made using data\nequivalent to 825\/pb of integrated luminosity of proton-antiproton collisions\nat $\sqrt{s}=1.96$ TeV. The events are triggered by a lepton (isolated\nelectron or muon) with at least 20 GeV transverse momentum. Two more leptons\nare then required with at least 10 GeV transverse momentum. A neutrino-like\nsignature is selected by requiring at least 25 GeV of missing transverse\nenergy, see figure \ref{fig:wzmet}. 
One Z is selected by one opposite sign lepton pair within the\ndilepton mass $76 < M_{ll} < 106$ GeV, see figure \\ref{fig:wzmll},\nand ZZ events are vetoed by removing\nevents with tracks that falls within $76 < M_{trk,l} < 106$ GeV using the\nremaining unmatched lepton. Due to the specific multilepton signature the\nselected candidates have a relatively small expected background contamination.\nAn overview of measured and estimated events are shown in\ntable \\ref{tab:wz}. Only two candidates are observed and a limit is\nderived on the total cross-section\n$$\n\\sigma(p\\bar{p} \\rightarrow WZ)< 6.4 \\textrm{ pb (95\\% C.L.)}.\n$$\nThis is intriguingly close to the theoretical NLO value around 4 pb.\nLimits on the WWZ ATGC have not yet been extracted but we know\nfrom a recent and similar D0 measurement \\cite{Abazov:2005ys}\n(see also the corresponding entry in table 1)\nthat this analysis have the potential to provide strong constraints.\n\n\\begin{table}[t]\n\\caption{Measured and expected number of events for WZ to leptons\nat CDF using 825\/pb of data.\n\\label{tab:wz}}\n\\vspace{0.4cm}\n\\begin{center}\n\\begin{tabular}{|l|l|}\n\\hline\nProcess & Events \\\\\n\\hline\n$WZ$ & $3.72 \\pm 0.02$(stat) $\\pm 0.15$(sys) \\\\\n\\hline\n$ZZ$ & $0.50 \\pm 0.01$(stat) $\\pm 0.05$(sys) \\\\\n$Z\\gamma$ & $0.03 \\pm 0.01$(stat) $\\pm 0.01$(sys) \\\\\n$t\\bar{t}$ & $0.05 \\pm 0.01$(stat) $\\pm 0.01$(sys) \\\\\n$Z+jets$ & $0.34 \\pm 0.07$(stat) $^{+0.16}_{-0.10}$(sys) \\\\\n\\hline\nTotal background: & $0.92 \\pm 0.07$(stat) $\\pm 0.15$(sys) \\\\\n\\hline\nTotal data: & 2 \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\\begin{figure}[p]\n\\begin{center}\n\\psfig{figure=h_Met_noMetCut_wsf.eps,width=13cm}\n\\caption{Missing transverse energy in the WZ events. The cut for the signal\nregion is also shown.\n\\label{fig:wzmet}}\n\\end{center}\n\\end{figure}\n\n\\begin{figure}[p]\n\\begin{center}\n\\psfig{figure=trileptons_Mll_Met.eps,width=13cm}\n\\caption{Scatter plot in the missing transverse energy and dilepton mass\nplane for WZ. Data is compared to expectation.\n\\label{fig:wzmll}}\n\\end{center}\n\\end{figure}\n\n\\subsection{The process $WW+WZ \\rightarrow l\\nu jj$}\n\nSemi-leptonic decay modes of WW events have yet not been observed at hadron\ncolliders, see figure \\ref{fig:ww} where one of the bosons decays into quarks\nand the other one into leptons.\n\\begin{figure}\n\\begin{center}\n\\psfig{figure=feyn_ww.eps,width=13cm}\n\\caption{Tree level and LO Higgs graphs for WW.\n\\label{fig:ww}}\n\\end{center}\n\\end{figure}\nThe reconstruction is challenging due\nto the limited dijet mass resolution of about 10\\% and the W+jets background\nwhich is $O(100)$ times larger than the signal after event preselection\nat the Tevatron. Note that events from diagrams in figure \\ref{fig:wz} (replace\nthe leptonic Z decay with quarks) cannot be excluded, hence they are included\nin the signal. Still, the hadronic decay modes are interesting since\nthey have larger event yield than the pure leptonic modes and they have\nthe important ability to reconstruct the W transverse momentum which we\nknow is a sensitive handle on the ATGC.\n\nIn order to constrain the ATGC and to verify the SM rate\na new CDF search for WW+WZ to leptons, missing transverse energy and jets\nhas been made using data equivalent to 350\/pb of integrated luminosity\nof proton-antiproton collisions at $\\sqrt{s}=1.96$ TeV. 
The events are\ntriggered by an isolated lepton with 25 GeV transverse energy (20 GeV\ntransverse momentum for muons). An inclusive set of leptonic W decays is\nselected by requiring the missing transverse energy to be at least 25 GeV.\nThe event selection then proceeds by requiring two jets with at least\n15 GeV transverse energy and with a dijet mass in the range $32 < M_{jj} <\n184$ GeV. An additional cut rejecting events with a W transverse mass below 25 GeV is applied\nto reduce the multijet background. Proximity cuts are also applied between the\nlepton and the jets. However, care must be taken not to reject narrow jets, since\nthe interesting high transverse momentum bosons have narrow jets due to the\nboost. The expected number of events is shown in table \ref{tab:ww}.\n\n\begin{table}[t]\n\caption{Measured and expected number of events for WW+WZ to leptons,\nmissing transverse energy and jets at CDF using 350\/pb of data.\n\label{tab:ww}}\n\vspace{0.4cm}\n\begin{center}\n\begin{tabular}{|l|c|r|}\n\hline\nProcess & Uncertainty & Events \\\n\hline\n$WW$ & 15\% & 142.0 \\\n$WZ$ & 15\% & 18.2 \\\n\hline\n$W+jets$ & 20\% & 6261.0 \\\n$Multijets$ & 40\% & 263.4 \\\n$W(\tau)+jets$ & 20\% & 171.0 \\\n$Z+jets$ & 20\% & 154.0 \\\n$t\bar{t}$ & 25\% & 171.6 \\\n$t(t-ch)$ & 25\% & 14.4 \\\n$t(s-ch)$ & 25\% & 8.2 \\\n\hline\n\end{tabular}\n\end{center}\n\end{table}\n\nSince the theoretical uncertainty on the W+jets background is huge, we extract\nthe WW+WZ signal by assuming the SM WW\/WZ ratio and fitting the expected\nsignal and background dijet mass shapes to data, see figure \ref{fig:wwxsfit}.\nThe fit gives 109 signal events with a statistical uncertainty of 110 events.\nSystematic uncertainties contribute an additional 54 events. From this\nwe estimate the upper limit on the WW+WZ cross-section to be\n$$\n\sigma(p\bar{p} \rightarrow WW+WZ) < 36 \textrm{ pb}.\n$$\nThe SM expectation is 16 pb.\n\nThe ATGC are extracted from the signal region defined to be the\ndijet mass window $560$ without loss\nof generality, then the one-soliton solution can be expressed in the\nfollowing parametric form\n\begin{equation} \label{CSP1solitona}\nq=\frac{\alpha_1}{|\alpha_1|}\frac{2p_{1R}}{|p_1|^2}e^{\mathrm{i} \eta_{1I}}\n\mbox{sech} \left(\eta_{1R}+\eta_{10}\right) \,,\n\end{equation}\n\begin{equation} \label{CSP1solitonb}\nx=y-\frac{2p_{1R}}{|p_1|^2}\left(\tanh\n\left(\eta_{1R}+\eta_{10}\right)+1\right)\,, \quad t=-s\,,\n\end{equation}\nwhere\n\begin{equation}\n\eta_{1R}=p_{1R}y+\frac{p_{1R}}{|p_1|^2}s , \quad \eta_{1I}=p_{1I} y-\frac{p_{1I}}{|p_1|^2}s \,,\quad \eta_{10}=\ln \frac{|\alpha_1||p_1|^2}{4p_{1R}}\,.\n\end{equation}\n\nEq. (\ref{CSP1solitona}) represents an envelope soliton of amplitude $2p_{1R}\/|p_1|^2$ and phase $\eta_{1I}$. To analyze the properties of the\none-soliton solution, we calculate\n\begin{equation}\n\frac{\partial x}{\partial y} = 1- \frac{2p^2_{1R}}{|p_1|^2} {\mbox{sech}}^2(\eta_{1R}+\eta_{10})\,.\n\end{equation}\nTherefore, $\partial x\/\partial y \to 1$ as $y \to \pm \infty$. Moreover, it\nattains a minimum value of $({p^2_{1I}-p^2_{1R}})\/({p^2_{1I}+p^2_{1R}})$\nat the peak point of the envelope soliton where $\eta_{1R}+\eta_{10}=0$. 
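Explicitly, this minimum value follows by setting the $\mbox{sech}$ factor to one in the expression above and using $|p_1|^2=p^2_{1R}+p^2_{1I}$:\n\begin{equation}\n\left.\frac{\partial x}{\partial y}\right|_{\eta_{1R}+\eta_{10}=0}=1-\frac{2p^2_{1R}}{|p_1|^2}=\frac{p^2_{1I}-p^2_{1R}}{p^2_{1I}+p^2_{1R}}\,.\n\end{equation}\n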
Since ${\partial |q|}\/{\partial x}=\frac{\partial |q|\/\partial y}{\partial\nx\/\partial y}$, we can classify this one-soliton solution as follows:\n\n\begin{itemize}\n\item \textbf{smooth soliton:} when $|p_{1R}| < |p_{1I}|$, ${\partial x}\/{\partial y}$ is always positive, which leads to a smooth envelope soliton\nsimilar to the envelope soliton for the nonlinear Schr{\\"o}dinger equation.\nAn example with $p_1=1+1.5\mathrm{i}$ is illustrated in Fig. 1 (a).\n\n\item \textbf{loop soliton:} when $|p_{1R}| > |p_{1I}|$, the minimum value\nof ${\partial x}\/{\partial y}$ at the peak point of the soliton becomes\nnegative. In view of the fact that $\partial x\/\partial y \to 1$ as $y \to\n\pm \infty$, ${\partial x}\/{\partial y}$ has two zeros, one on each side of the\npeak of the envelope soliton. Moreover, ${\partial x}\/{\partial y}< 0$\nbetween these two zeros. This leads to a loop soliton for the envelope of $q$.\nAn example is shown in Fig. 1 (b) with $p_1=1+0.5\mathrm{i}$.\n\n\item \textbf{cuspon soliton:} when $|p_{1R}| = |p_{1I}|$, ${\partial x}\/{\partial y}$ has a minimum value of zero at $\eta_{1R}+\eta_{10}=0$, which\nmakes the derivative of the envelope $|q|$ with respect to $x$ go to\ninfinity at the peak point. Thus, we have a cuspon envelope soliton, which\nis illustrated in Fig. 1 (c) with $p_1=1+\mathrm{i}$.\n\end{itemize}\n\begin{figure}[htbp]\n\centerline{\n\includegraphics[scale=0.35]{onesolitona1b1p5.eps}\quad\n\includegraphics[scale=0.35]{onesolitona1b0p5.eps}} \kern-0.3\textwidth\n\hbox to\n\textwidth{\hss(a)\kern16em\hss(b)\kern11em} \kern+0.3\textwidth\n\centerline{\n\includegraphics[scale=0.35]{onesolitona1b1.eps}} \kern-0.3\textwidth\n\hbox to\n\textwidth{\hss(c)\kern21em} \kern+0.3\textwidth\n\caption{Envelope soliton for the complex short pulse equation (\protect\ref{CSP}), solid line: $Re(q)$, dashed line: $|q|$; (a) smooth soliton with $p_1=1+1.5\mathrm{i}$, (b) loop soliton with $p_1=1+0.5\mathrm{i}$, (c)\ncuspon soliton with $p_1=1+\mathrm{i}$.}\n\label{figure:cspe1soliton}\n\end{figure}\n\n\begin{remark}\nThe one-soliton solution to the short pulse equation (\ref{SPE}) is of\nloop-type, which lacks physical meaning in the context of nonlinear optics.\nHowever, the one-soliton solution to the complex short pulse equation (\ref{CSP}) is of breather-type, which does have a physical meaning for an optical pulse.\n\end{remark}\n\n\begin{remark}\nWhen $|p_{1R}| <|p_{1I}|$, there is no singularity in the one-soliton solution.\nMoreover, since $\eta_{1R}$ is associated with the width of the envelope\nsoliton and $\eta_{1I}$ with the phase, it is obvious that this\nnonsingular envelope soliton can only contain a few optical cycles. This\nproperty coincides with the fact that the complex short pulse equation is\nderived for the purpose of describing ultra-short pulse propagation. 
When \n|p_{1R}| =|p_{1I}|$, the soliton becomes cuspon-like one, which agrees with\nthe results in \\cite{Bandelow} derived from a bidirectional model.\n\\end{remark}\n\n\\subsubsection{Two-soliton solution}\n\nBased on the $N$-soliton solution of the complex short pulse equation from \n\\ref{CSP_sl1})--(\\ref{CSP_sl2}), the tau-functions for two-soliton solution\ncan be expanded for $N=2$\n\\begin{eqnarray}\n&&f=\\mathrm{Pf}(a_{1},a_{2},a_{3},a_{4},b_{1},b_{2},b_{3},b_{4}) \\nonumber\n\\\\\n&&\\quad =1+a_{1\\bar{1}}e^{\\eta _{1}+\\bar{\\eta}_{1}}+a_{1\\bar{2}}e^{\\eta _{1}\n\\bar{\\eta}_{2}}+a_{2\\bar{1}}e^{\\eta _{2}+\\bar{\\eta}_{1}}+a_{2\\bar{2}}e^{\\eta\n_{2}+\\bar{\\eta _{2}}} \\nonumber \\\\\n&&\\qquad +|P_{12}|^{2}\\left( a_{1\\bar{1}}a_{2\\bar{2}}P_{1\\bar{2}}P_{2\\bar{1\n}-a_{1\\bar{2}}a_{2\\bar{1}}P_{1\\bar{1}}P_{2\\bar{2}}\\right) e^{\\eta _{1}+\\eta\n_{2}+\\bar{\\eta}_{1}+\\bar{\\eta}_{2}}\\,,\n\\end{eqnarray}\n\n\\begin{eqnarray}\n&&g=\\mathrm{Pf}(d_{0},\\beta\n_{1},a_{1},a_{2},a_{3},a_{4},b_{1},b_{2},b_{3},b_{4}) \\nonumber \\\\\n&&\\quad =\\alpha _{1}e^{\\eta _{1}}+\\alpha _{2}e^{\\eta _{2}}+P_{12}\\left(\n\\alpha _{1}P_{1\\bar{1}}a_{2\\bar{1}}-\\alpha _{2}P_{2\\bar{1}}a_{1\\bar{1\n}\\right) e^{\\eta _{1}+\\eta _{2}+\\bar{\\eta}_{1}} \\nonumber \\\\\n&&\\qquad +P_{12}\\left( \\alpha _{1}P_{1\\bar{2}}a_{2\\bar{2}}-\\alpha _{2}P_{\n\\bar{2}}a_{1\\bar{2}}\\right) e^{\\eta _{1}+\\eta _{2}+\\bar{\\eta}_{2}}\\,,\n\\end{eqnarray\nwhere\n\\begin{equation}\nP_{ij}=\\frac{p_{i}-p_{j}}{p_{i}+p_{j}}\\,,\\quad P_{i\\bar{j}}=\\frac{p_{i}-\\bar\np}_{j}}{p_{i}+\\bar{p}_{j}}\\,,\\quad a_{i\\bar{j}}=\\frac{\\alpha _{i}\\bar{\\alpha\n_{j}(p_{i}\\bar{p}_{j})^{2}}{4(p_{i}+\\bar{p}_{j})^{2}}\\,,\n\\end{equation\nand $\\eta _{j}=p_{j}y+p_{j}^{-1}s$, $\\bar{\\eta}_{j}=\\bar{p}_{j}y+\\bar{p\n_{j}^{-1}s$.\n\n\\begin{figure}[htbp]\n\\centerline{\n\\includegraphics[scale=0.35]{CSPE_2soliton_ct.eps}\\quad\n\\includegraphics[scale=0.35]{CSPE_2soliton_elastic.eps}} \\kern-0.31\n\\textwidth \\hbox to\n\\textwidth{\\hss(a)\\kern4em\\hss(b)\\kern2em} \\kern+0.315\\textwidth\n\\caption{Two-soliton solution to the complex short pulse equation (a)\ncontour plot; (b) profiles at $t=-80$, $80$.}\n\\label{f:1com2soliton}\n\\end{figure}\nTo avoid the singularity of the envelope solitons, the conditions $|p_{1R}|<\n|p_{1I}|$ and $|p_{2R}|< |p_{2I}|$ need to be satisfied.\nWhen two solitons stay apart, the amplitude of each soliton is of $2|p_{iR}|\/|p_{i}|^2$, and the velocity is of $-1\/|p_i|^2$ in the $ys$-coordinate system. Therefore, the soliton of larger velocity will catch up with and collide with the soliton of smaller velocity if it is initially located on the left. Furthermore, the\ncollision is elastic, and there is no change in shape and amplitude of\nsolitons except a phase shift. In Fig. 2, we illustrate\nthe contour plot for the collision of two solitons (a), as well as the\nprofiles (b) before and after the collision. The parameters are taken as \n\\alpha_1=\\alpha_2=1.0$, $p_1=1+1.2\\mathrm{i}$ and $p_2=1+2\\mathrm{i}$.\n\nSince the velocity of single envelope soliton is $-1\/|p_i|^2$ in the $ys\n-coordinate system, a bound state can be formed under the condition of \n|p_1|^2 = |p_2|^2$ if two solitons stay close enough and move with the same\nvelocity. Such a bound state is shown in Fig. 
3 for\nparameters chosen as $\\alpha_1=\\alpha_2=1.0$, $p_1=1.3+1.8193\\mathrm{i}$, \np_2=1+2\\mathrm{i}$.\nIt is interesting that the envelope of the bound state oscillates\nperiodically as it moves along $x$-axis.\n\\begin{figure}[htbp]\n\\centerline{\n\\includegraphics[scale=0.35]{1com_bd3d.eps}\\quad\n\\includegraphics[scale=0.35]{1com_bdprofile.eps}} \\kern-0.315\\textwidth\n\\hbox to\n\\textwidth{\\hss(a)\\kern4em\\hss(b)\\kern2em} \\kern+0.315\\textwidth\n\\caption{Bound state to the complex short pulse equation: (a) 3D plot (b)\nprofiles at $t=-100$, $40$.}\n\\label{f:1comboundstate}\n\\end{figure}\n\\subsection{Bilinear equations and $N$-soliton solutions to the coupled\ncomplex short pulse equation}\n\n\\begin{proposition}\nThe coupled complex short pulse equation is derived from bilinear equations\n\\begin{equation} \\label{CCSPE_bilinear1}\nD_sD_y f \\cdot g_i =fg_i, \\quad i=1,2 \\,,\n\\end{equation}\n\\begin{equation} \\label{CCSPE_bilinear2}\nD^2_s f \\cdot f =\\frac{1}{2} \\left(|g_1|^2+|g_2|^2\\right)\\,,\n\\end{equation}\nby dependent variable transformation\n\\begin{equation} \\label{CCSPE_vartrf}\nq_1=\\frac{g_1}{f}, \\quad q_2=\\frac{g_2}{f}\\,,\n\\end{equation}\nand hodograph transformation\n\\begin{equation}\nx = y -2(\\ln f)_s\\,, \\quad t=-s \\,, \\label{CCSP_hodograph}\n\\end{equation}\n\\end{proposition}\n\n\\begin{proof}\nDividing both sides of Eqs. (\\ref{CCSPE_bilinear1})--(\\ref{CCSPE_bilinear2})\nby $f^2$, we have\n\\begin{equation}\n\\left(\\frac{g_i}{f} \\right)_{sy} + 2\\frac{g_i}{f} \\left( \\ln f\\right)_{sy} =\n\\frac{g_i}{f}\\,, \\label{CCSP_BL2a}\n\\end{equation}\n\\begin{equation}\n\\left( \\ln f\\right)_{ss} =\\frac{1}{4} \\left( \\frac{|g_1|^2}{f^2}+\\frac\n|g_2|^2}{f^2} \\right)\\,. \\label{CCSP_BL2b}\n\\end{equation}\n\nFrom dependent variable and hodograph transformations (\\ref{CCSPE_vartrf})--\n\\ref{CCSP_hodograph}), we obtain\n\\[\n\\frac{\\partial x}{\\partial s} = -2(\\ln f)_{ss} = -\\frac 12\n\\left(|q_1|^2+|q_2|^2 \\right)\\,, \\qquad \\frac{\\partial x}{\\partial y} =\n1-2(\\ln f)_{sy}\\,,\n\\]\nwhich implies\n\\begin{equation} \\label{CCSP_BL3}\n{\\partial_y} = \\rho^{-1} {\\partial_x}\\,, \\qquad {\\partial_s} = -{\\partial_t}\n- \\frac 12 \\left(|q_1|^2+|q_2|^2 \\right) {\\partial_x}\\,\n\\end{equation}\nby letting $1-2(\\ln f)_{sy} = \\rho^{-1}$.\n\nWith the use of (\\ref{CCSP_BL3}), Eq. (\\ref{CCSP_BL2a}) can be recast into\n\\begin{equation} \\label{CCSP_BL4}\n\\rho \\left(\\frac{g_i}{f} \\right)_{sy} = \\frac{g_i}{f}\\,, \\quad i=1,2\\,,\n\\end{equation}\nwhich can be further converted into\n\\begin{equation} \\label{CCSPE_alt}\n\\partial_x \\left(-\\partial_t - \\frac 12 (|q_1|^2+|q_2|^2) \\partial_x\n\\right)q_i = q_i\\,, \\quad i=1,2\\,.\n\\end{equation}\nEq. 
(\\ref{CCSPE_alt}) is, obviously, equivalent to the coupled complex short\npulse equation (\\ref{CCSP1})--(\\ref{CCSP2}).\n\\end{proof}\n\n\n$N$-soliton solution for the coupled complex short pulse equation is given\nin a similar way as the complex short pulse equation by the following\ntheorem.\n\n\n\\begin{theorem}\nThe coupled complex short pulse equation admits the following $N$-soliton\nsolution\n\\[\nq_i=\\frac{g_i}{f}, \\quad x = y -2(\\ln f)_s\\,, \\quad t=-s \\,,\n\\]\nwhere $f$, $g_i$ are pfaffians defined as\n\\begin{eqnarray} \\label{CCSPE_Nsoliton1}\nf &=& \\mathrm{Pf} (a_1, \\cdots, a_{2N}, b_1, \\cdots, b_{2N})\\,, \\\\\ng_i &=& \\mathrm{Pf} (d_0, \\beta_{i}, a_1, \\cdots, a_{2N}, b_1, \\cdots,\nb_{2N})\\,, \\label{CCSPE_Nsoliton2}\n\\end{eqnarray}\nand the elements of the pfaffians are determined as\n\\begin{equation} \\label{NCSPE_pf1}\n\\mathrm{Pf}(a_j,a_k)= \\frac{p_j-p_k}{p_j+p_k} e^{\\eta_j+\\eta_k}\\,, \\quad\n\\mathrm{Pf}(a_j,b_k)=\\delta_{j,k}\\,,\n\\end{equation}\n\\begin{equation} \\label{NCSPE_pf2}\n\\mathrm{Pf}(b_j,b_k)=\\frac 14 \\frac{\\sum^2_{i=1} \\alpha^{(i)}_j\n\\alpha^{(i)}_k }{p^{-2}_j-p^{-2}_{k}} \\delta_{\\mu+1, \\nu}\\,, \\quad \\mathrm{P\n}(d_l,a_k)= p_k^{l} e^{\\eta_k}\\,,\n\\end{equation}\n\n\\begin{equation} \\label{NCSPE_pf4}\n\\mathrm{Pf}(b_j,\\beta_i)=\\alpha^{(i)}_j \\delta_{\\mu, i}\\,,\\quad \\mathrm{Pf\n(d_0,b_j) =\\mathrm{Pf}(d_0,\\beta_i) = \\mathrm{Pf}(a_j,\\beta_i)=0\\,.\n\\end{equation}\nHere $\\mu=index(b_j)$, $\\nu=index(b_k)$, $\\eta_j=p_j y + p_j^{-1} s +\n\\eta_{j,0}$ which satisfying $p_{j+N}=\\bar{p}_j$, $\\alpha_{j+N}=\\bar{\\alpha\n_{j}$.\n\\end{theorem}\n\nThe proof of the Theorem is given in the Appendix. In the subsequent\nsection, based on the $N$-soliton solution of coupled complex short pulse\nequation, we will investigate the dynamics of one- and two-solitons in\ndetails.\n\n\\begin{remark}\nThrough the transformations\n\\begin{equation} \\label{NCSPE_trfs}\nx = y -2(\\ln f)_s\\,, \\quad t=-s \\,, \\quad q_i=\\frac{g_i}{f}\\,,\n\\end{equation}\nthe vector complex short pulse equation (\\ref{NCSPE}) can be decomposed into\nthe following bilinear equations\n\\begin{equation} \\label{NCSPE_bilinear1}\nD_sD_y f \\cdot g_i =fg_i, \\quad i=1, \\cdots, n \\,,\n\\end{equation}\n\\begin{equation} \\label{NCSPE_bilinear2}\nD^2_s f \\cdot f =\\frac{1}{2} \\left(\\sum^n_{i=1}|g_i|^2\\right)\\,.\n\\end{equation}\nThe parametric form of $N$-soliton solution in terms of pfaffians to the\nvector complex short pulse equation (\\ref{NCSPE}) can be given in a very\nsimilar from as to to the coupled complex short pulse equation. 
Here, we\nomit the details and will report the results later on.\n\\end{remark}\n\n\\section{Dynamics of solitons to the coupled complex short pulse equation}\n\\subsection{One-soliton solution}\nThe tau-functions for one-soliton solution to the coupled complex short\npulse equation are obtained from (\\ref{CCSPE_Nsoliton1})--(\\re\n{CCSPE_Nsoliton2}) for $N=1$\n\\begin{equation}\nf = -1-\\frac 14 \\frac {\\sum_{i=1}^2|\\alpha^{(i)}_1|^2(p_1\\bar{p}_1)^2}{(p_1\n\\bar{p}_1)^2} e^{\\eta_1+\\bar{\\eta}_1} \\,,\n\\end{equation}\n\\begin{equation}\ng_1 = -\\alpha^{(1)}_1 e^{\\eta_1}\\,, \\quad g_2 = -\\alpha^{(2)}_1 e^{\\eta_1}\\,.\n\\end{equation}\n\nLet $p_{1}=p_{1R}+\\mathrm{i}p_{1I}$, the one-soliton solution can be\nexpressed in the following parametric form\n\\begin{equation}\n\\left(\n\\begin{array}{c}\nq_{1} \\\\\nq_{2\n\\end{array\n\\right) =\\left(\n\\begin{array}{c}\nA_{1} \\\\\nA_{2\n\\end{array\n\\right) \\frac{2p_{1R}}{|p_{1}|^{2}}e^{\\mathrm{i}\\eta _{1I}}{\\mbox{sech}\n\\left( \\eta _{1R}+\\eta _{10}\\right) \\,, \\label{1soliton_ay}\n\\end{equation\n\\begin{equation}\nx=y-\\frac{2p_{1R}}{|p_{1}|^{2}}\\left( \\tanh (\\eta _{1R}+\\eta _{10})+1\\right)\n\\,,\\quad t=-s\\,, \\label{CCSP1solitonb}\n\\end{equation\nwhere\n\\begin{equation}\n\\eta _{1R}=p_{1R}\\left( y+\\frac{1}{|p_{1}|^{2}}s\\right) ,\\quad \\eta\n_{1I}=p_{1I}\\left( y-\\frac{1}{|p_{1}|^{2}}s\\right) \\,,\n\\end{equation\n\\begin{equation}\nA_{i}=\\frac{\\alpha _{1}^{(i)}}{\\sqrt{\\sum_{i=1}^{2}|\\alpha _{1}^{(i)}|^2}\n\\,,\\quad \\eta_{10}=\\ln \\frac{\\sqrt{\\sum_{i=1}^{2}|\\alpha _{1}^{(i)}|^2\n|p_{1}|^{2}}{4|p_{1R}|}\\,.\n\\end{equation\nThe amplitudes of the single soliton in each component are ${2|A_{1}|p_{1R}}\n{|p_{1}|^{2}}$ and ${2|A_{2}|p_{1R}}\/{|p_{1}|^{2}}$, respectively. Note that\n$|A_{1}|^{2}+|A_{2}|^{2}=1$. Same as the analysis for one-soliton solution\nof complex short pulse equation, if $|p_{1R}|<|p_{1I}|$, the envelope for\none-soliton in each of the component is smooth, whereas, if \n|p_{1R}|>|p_{1I}| $, it becomes a loop (multi-valued) soliton, if \n|p_{1R}|=|p_{1I}|$, it is a cuspon.\n\n\\subsection{Soliton interactions}\nTwo-soliton solution for coupled complex short pulse equation is obtained\nfrom (\\ref{CCSPE_Nsoliton1})--(\\ref{CCSPE_Nsoliton2}) for $N=2$. 
By\nexpanding the pfaffians, the tau-functions for two-soliton solution are\nexpressed by\n\\begin{eqnarray}\n&&f=1+e^{\\eta _{1}+\\bar{\\eta}_{1}+r_{1\\bar{1}}}+e^{\\eta _{1}+\\bar{\\eta\n_{2}+r_{1\\bar{2}}}+e^{\\eta _{2}+\\bar{\\eta}_{1}+r_{2\\bar{1}}}+e^{\\eta _{2}\n\\bar{\\eta}_{2}+r_{2\\bar{2}}} \\nonumber \\\\\n&&\\qquad +|P_{12}|^{2}|P_{1\\bar{2}}|^{2}P_{1\\bar{1}}P_{2\\bar{2}}\\left( B_{\n\\bar{1}}B_{2\\bar{2}}-B_{2\\bar{1}}B_{1\\bar{2}}\\right) e^{\\eta _{1}+\\eta _{2}\n\\bar{\\eta}_{1}+\\bar{\\eta}_{2}}\\,,\n\\end{eqnarray}\n\n\\begin{eqnarray}\n&& g_1= \\alpha^{(1)}_{1} e^{\\eta_1} + \\alpha^{(1)}_2 e^{\\eta_2} +P_{12} P_{\n\\bar{1}} P_{2\\bar{1}} \\left( \\alpha^{(1)}_2 B_{1\\bar{1}} - \\alpha^{(1)}_1\nB_{2\\bar{1}} \\right) e^{\\eta_1+\\eta_2+\\bar{\\eta}_1} \\nonumber \\\\\n&& \\qquad +P_{12} P_{1\\bar{2}} P_{2\\bar{2}} \\left( \\alpha^{(1)}_2 B_{1\\bar{2\n} - \\alpha^{(1)}_1 B_{2\\bar{2}} \\right) e^{\\eta_1+\\eta_2+\\bar{\\eta}_2} \\,,\n\\end{eqnarray}\n\n\\begin{eqnarray}\n&&g_{2}=\\alpha _{1}^{(2)}e^{\\eta _{1}}+\\alpha _{2}^{(2)}e^{\\eta\n_{2}}+P_{12}P_{1\\bar{1}}P_{2\\bar{1}}\\left( \\alpha _{2}^{(2)}B_{1\\bar{1\n}-\\alpha _{1}^{(2)}B_{2\\bar{1}}\\right) e^{\\eta _{1}+\\eta _{2}+\\bar{\\eta}_{1}}\n\\nonumber \\\\\n&&\\qquad +P_{12}P_{1\\bar{2}}P_{2\\bar{2}}\\left( \\alpha _{2}^{(2)}B_{1\\bar{2\n}-\\alpha _{1}^{(2)}B_{2\\bar{2}}\\right) e^{\\eta _{1}+\\eta _{2}+\\bar{\\eta\n_{2}}\\,,\n\\end{eqnarray\nwhere\n\\[\nP_{ij}=\\frac{p_{i}-p_{j}}{p_{i}+p_{j}}\\,,\\quad P_{i\\bar{j}}=\\frac{p_{i}-\\bar\np}_{j}}{p_{i}+\\bar{p}_{j}}\\,,\n\\\n\\[\nB_{i\\bar{j}}=\\frac{\\alpha _{i}^{(1)}\\bar{\\alpha}_{j}^{(1)}+\\alpha _{i}^{(2)\n\\bar{\\alpha}_{j}^{(2)}}{4(p_{i}^{-2}-\\bar{p}_{j}^{-2})}\\,,\\quad e^{r_{i\\bar{\n}}}=\\frac{\\alpha _{i}^{(1)}\\bar{\\alpha}_{j}^{(1)}+\\alpha _{i}^{(2)}\\bar\n\\alpha}_{j}^{(2)}}{4(p_{i}^{-1}+\\bar{p}_{j}^{-1})^2}\\,.\n\\\nand $\\eta _{j}=p_{j}y+p_{j}^{-1}s$, $p_{3}=\\bar{p}_{1}$, $p_{4}=\\bar{p}_{2}\n, thus, $\\eta _{3}=\\bar{\\eta}_{1}$, $\\eta _{4}=\\bar{\\eta}_{2}$.\n\nNext, we investigate the asymptotic behavior of two-soliton solution. To\nthis end, we assume $p_{1R}>p_{2R}>0$, $p_{1R}\/|p_{1}|^{2}>p_{2R}\/|p_{2}|^{2}\n$ without loss of generality. For the above choice of parameters, we have\n(i) $\\eta _{1R}\\approx 0$, $\\eta _{2R}\\rightarrow \\mp \\infty $ as \nt\\rightarrow \\mp \\infty $ for soliton 1 and (ii) $\\eta _{2R}\\approx 0$, \n\\eta _{2R}\\rightarrow \\pm \\infty $ as $t\\rightarrow \\mp \\infty $ for soliton\n2. 
This leads to the following asymptotic forms for two-soliton solution.\n\\newline\n(i) Before collision ($t\\rightarrow -\\infty $) \\newline\nSoliton 1 ($\\eta _{1R}\\approx 0$, $\\eta _{2R}\\rightarrow -\\infty $):\n\\begin{eqnarray}\n\\left(\n\\begin{array}{c}\nq_{1} \\\\\nq_{2\n\\end{array\n\\right) &\\rightarrow &\\left(\n\\begin{array}{c}\n\\alpha _{1}^{(1)} \\\\\n\\alpha _{1}^{(2)\n\\end{array\n\\right) \\frac{e^{\\eta _{1}}}{1+e^{\\eta _{1}+\\bar{\\eta}_{1}+r_{1\\bar{1}}}}\\,,\n\\nonumber \\label{soliton1_aybf} \\\\\n&\\rightarrow &\\left(\n\\begin{array}{c}\nA_{1}^{1-} \\\\\nA_{2}^{1-\n\\end{array\n\\right) \\frac{2p_{1R}}{|p_{1}|^{2}}e^{i\\eta _{1I}}{\\mbox{sech}}\\left( \\eta\n_{1R}+\\frac{r_{1\\bar{1}}}{2}\\right) \\,,\n\\end{eqnarray\nwhere\n\\begin{equation}\n\\left(\n\\begin{array}{c}\nA_{1}^{1-} \\\\\nA_{2}^{1-\n\\end{array\n\\right) =\\left(\n\\begin{array}{c}\n\\alpha _{1}^{(1)} \\\\\n\\alpha _{1}^{(2)\n\\end{array\n\\right) \\frac{1}{\\sqrt{|\\alpha _{1}^{(1)}|^{2}+|\\alpha _{1}^{(2)}|^{2}}}\\,.\n\\end{equation}\n\nSoliton 2 ($\\eta_{2R} \\approx 0$, $\\eta_{1R} \\to \\infty$):\n\\begin{equation} \\label{soliton2_aybf}\n\\left\n\\begin{array}{c}\nq_1 \\\\\nq_\n\\end{array\n\\right) \\to \\left\n\\begin{array}{c}\nA^{2-}_1 \\\\\nA^{2-}_\n\\end{array\n\\right)\\frac{2p_{2R}}{|p_{2}|^2} e^{i\\eta_{2I}} {\\mbox{sech}}\n\\left(\\eta_{2R}+\\frac {r_{1\\bar{1}2\\bar{2}}-r_{1\\bar{1}}}{2} \\right)\\,,\n\\end{equation}\nwhere\n\\begin{equation}\n\\left\n\\begin{array}{c}\nA^{2-}_1 \\\\\nA^{2-}_\n\\end{array\n\\right) = \\left\n\\begin{array}{c}\ne^{r^{(1)}_{1\\bar{1}2}} \\\\\ne^{r^{(2)}_{1\\bar{1}2}\n\\end{array\n\\right) \\frac{e^{-(r_{1\\bar{1}2\\bar{2}}+r_{1\\bar{1}}-r_{2\\bar{2}})\/{2}}} \n\\sqrt{|\\alpha^{(1)}_2|^2+|\\alpha^{(2)}_2|^2}} \\,,\n\\end{equation}\nwith\n\\begin{equation}\ne^{r^{(i)}_{1\\bar{1}2}}= P_{12} P_{1\\bar{1}} P_{2\\bar{1}} \\left(\n\\alpha^{(i)}_2 B_{1\\bar{1}} - \\alpha^{(i)}_1 B_{2\\bar{1}} \\right)\\,, \\quad\n(i=1,2)\n\\end{equation}\n\\begin{equation}\ne^{r_{1\\bar{1}2\\bar{2}}}=|P_{12}|^2 |P_{1\\bar{2}}|^2 P_{1\\bar{1}} P_{2\\bar{2\n} \\left(B_{1\\bar{1}} B_{2\\bar{2}} -B_{2\\bar{1}}B_{1\\bar{2}}\\right)\\,.\n\\end{equation}\n\\newline\nAfter collision ($t \\to \\infty$) \\newline\nSoliton 1 ($\\eta_{1R} \\approx 0$, $\\eta_{2R} \\to \\infty$):\n\n\\begin{equation}\n\\left(\n\\begin{array}{c}\nq_{1} \\\\\nq_{2\n\\end{array\n\\right) \\rightarrow \\left(\n\\begin{array}{c}\nA_{1}^{1+} \\\\\nA_{2}^{1+\n\\end{array\n\\right) \\frac{2p_{1R}}{|p_{1}|^{2}}e^{i\\eta _{1I}}{\\mbox{sech}}\\left( \\eta\n_{2R}+\\frac{r_{1\\bar{1}2\\bar{2}}-r_{2\\bar{2}}}{2}\\right) \\,,\n\\label{soliton2_ayafter}\n\\end{equation\nwhere\n\\begin{equation}\n\\left(\n\\begin{array}{c}\nA_{1}^{1+} \\\\\nA_{2}^{1+\n\\end{array\n\\right) =\\left(\n\\begin{array}{c}\ne^{r_{12\\bar{1}}^{(1)}} \\\\\ne^{r_{12\\bar{1}}^{(2)}\n\\end{array\n\\right) \\frac{e^{-(r_{1\\bar{1}2\\bar{2}}-r_{1\\bar{1}}+r_{2\\bar{2}})\/{2}}}\n\\sqrt{|\\alpha _{1}^{(1)}|^{2}+|\\alpha _{1}^{(2)}|^{2}}}\\,,\n\\end{equation\nwith\n\\begin{equation}\ne^{r_{12\\bar{1}}^{(i)}}=P_{12}P_{1\\bar{2}}P_{2\\bar{2}}\\left( \\alpha\n_{2}^{(i)}B_{1\\bar{2}}-\\alpha _{1}^{(i)}B_{2\\bar{2}}\\right) \\,,\\quad\n(i=1,2)\\,.\n\\end{equation}\n\nSoliton 2 ($\\eta _{2R}\\approx 0$, $\\eta _{1R}\\rightarrow -\\infty $):\n\\begin{equation}\n\\left(\n\\begin{array}{c}\nq_{1} \\\\\nq_{2\n\\end{array\n\\right) \\rightarrow \\left(\n\\begin{array}{c}\nA_{1}^{2+} \\\\\nA_{2}^{2+\n\\end{array\n\\right) \\frac{2p_{2R}}{|p_{2}|^{2}}e^{i\\eta 
_{2I}}{\\mbox{sech}}\\left( \\eta\n_{2R}+\\frac{r_{2\\bar{2}}}{2}\\right) \\,, \\label{soliton2_ayaf}\n\\end{equation\nwhere\n\\begin{equation}\n\\left(\n\\begin{array}{c}\nA_{1}^{2+} \\\\\nA_{2}^{2+\n\\end{array\n\\right) =\\left(\n\\begin{array}{c}\n\\alpha _{2}^{(1)} \\\\\n\\alpha _{2}^{(2)\n\\end{array\n\\right) \\frac{1}{\\sqrt{|\\alpha _{2}^{(1)}|^{2}+|\\alpha _{2}^{(2)}|^{2}}}\\,.\n\\end{equation}\n\nSimilar to the analysis for the CNLS equations \\cit\n{Lakshmanan1997,Lakshmanan2001,Lakshmanan2003}, the change in the amplitude\nof each of the solitons in each component can be obtained by introducing the\ntransition matrix $T^k_j$ by $A_j^{k+}= T_j^k A_j^{k-}$, $j,k=1,2$. The\nelements of transition matrix is obtained from the above asymptotic analysis\nas\n\\begin{equation} \\label{trasition_matrix1}\nT_j^1=\\left(\\frac{P_{12}P_{1\\bar{2}}}{\\bar{P}_{12}\\bar{P}_{1\\bar{2}}}\n\\right)^{1\/2} \\frac{1}{\\sqrt{1-\\lambda_1 \\lambda_2}} \\left(1-\\lambda_2\\frac\n\\alpha_2^{(j)}}{\\alpha_1^{(j)}} \\right)\\,, \\quad j=1,2\\,,\n\\end{equation}\n\\begin{equation} \\label{trasition_matrix2}\nT_j^2=\\left(\\frac{\\bar{P}_{12}P_{1\\bar{2}}}{P_{12}\\bar{P}_{1\\bar{2}}}\n\\right)^{1\/2} \\sqrt{1-\\lambda_1 \\lambda_2} \\left(1-\\lambda_1\\frac\n\\alpha_1^{(j)}}{\\alpha_2^{(j)}} \\right)^{-1}\\,, \\quad j=1,2\\,,\n\\end{equation}\nwhere $\\lambda_1=B_{2\\bar{1}}\/B_{1\\bar{1}}$, $\\lambda_2=B_{1\\bar{2}}\/B_{\n\\bar{2}}$.\n\nTherefore, in general, there is an exchange of energies between two components of two solitons\nafter the collision.\nAn example is shown in Fig. 4 for the\nparameters taken as follows\n$p_{1}=1+1.2\\mathrm{i}$, $p_{2}=1+2\\mathrm{i}$, $\\alpha^{(1)}_{1}\n\\alpha^{(2)}_{1}=1.0$, $\\alpha^{(1)} _{2}=2.0$, $\\alpha^{(2)}_{2}=1.0$.\n\\begin{figure}[tbph]\n\\centerline{\n\\includegraphics[scale=0.35]{Inelastic_ct_q1.eps}\\quad\n\\includegraphics[scale=0.35]{Inelastic_ct_q2.eps}} \\kern-0.315\\textwidth\n\\hbox to\n\\textwidth{\\hss(a)\\kern6.5em\\hss(b)\\kern-1.5em} \\kern+0.315\\textwidth\n\\centerline{\n\\includegraphics[scale=0.35]{Inelastic_q1.eps}\\quad\n\\includegraphics[scale=0.35]{Inelastic_q2.eps}} \\kern-0.315\\textwidth\n\\hbox to\n\\textwidth{\\hss(c)\\kern6.5em\\hss(d)\\kern-1.5em} \\kern+0.315\\textwidth\n\\caption{Inelastic collision in coupled complex short\npulse equation. (a)-(b): contour plot; (c)-(d): profiles before and after the collision.}\n\\label{f:inelastic1}\n\\end{figure}\nHowever, only for the special case\n\\begin{equation}\n\\frac{\\alpha _{1}^{(1)}}{\\alpha _{2}^{(1)}}=\\frac{\\alpha _{1}^{(2)}}{\\alpha\n_{2}^{(2)}}\\,,\n\\end{equation\nthere is no energy exchange between two compoents of solitons\nafter the collision. An example is shown in Fig. 
5 for the\nparameters\n$p_{1}=1+1.2\\mathrm{i}$,\\ $p_{2}=1+2\\mathrm{i}$,\\ $\\alpha^{(1)}_{1}\n\\alpha^{(2)}_{1}=1.0$, \\ $\\alpha^{(1)}_{2}=\\alpha^{(2)}_{2}=1.0$.\n\\begin{figure}[htbp]\n\\centerline{\n\\includegraphics[scale=0.35]{Elastic_ct_q1.eps}\\quad\n\\includegraphics[scale=0.35]{Elastic_ct_q2.eps}} \\kern-0.315\\textwidth\n\\hbox to\n\\textwidth{\\hss(a)\\kern6.5em\\hss(b)\\kern-1.5em} \\kern+0.315\\textwidth\n\\centerline{\n\\includegraphics[scale=0.35]{Elastic_q1.eps}\\quad\n\\includegraphics[scale=0.35]{Elastic_q2.eps}} \\kern-0.315\\textwidth\n\\hbox to\n\\textwidth{\\hss(c)\\kern6.5em\\hss(d)\\kern-1.5em} \\kern+0.315\\textwidth\n\\caption{Elastic collision in coupled complex short pulse equation.}\n\\label{f:elastic}\n\\end{figure}\n\nIt is interesting to note that if we just change the parameters in previous\ntwo examples as $\\alpha^{(1)}_{2}=0$, $\\alpha^{(2)}_{2}=1.0$,\nthe energy of one soliton is concentrated in component $q_2$ before the collision. However, component $q_1$ gains some energy after the collision.\nSuch an example is shown in Fig. 6.\n\n\\begin{figure}[htbp]\n\\centerline{\n\\includegraphics[scale=0.35]{Inelastic2_ct_q1.eps}\\quad\n\\includegraphics[scale=0.35]{Inelastic2_ct_q2.eps}} \\kern-0.315\\textwidth\n\\hbox to\n\\textwidth{\\hss(a)\\kern6.5em\\hss(b)\\kern-1.5em} \\kern+0.315\\textwidth\n\\centerline{\n\\includegraphics[scale=0.35]{Inelastic2_q1.eps}\\quad\n\\includegraphics[scale=0.35]{Inelastic2_q2.eps}} \\kern-0.315\\textwidth\n\\hbox to\n\\textwidth{\\hss(c)\\kern6.5em\\hss(d)\\kern-1.5em} \\kern+0.315\\textwidth\n\\caption{Inelastic collision in coupled\ncomplex short pulse equation for $p_{1}=1+1.2{\\rm i}$, $p_{2}=1+2{\\rm i}$, $\\alpha^{(1)}_{1}=\\alpha^{(2)}_{1}=1.0$, $\\alpha^{(1)}_{2}=0$, $\\alpha^{(2)}_{2}=1.0$. (a)-(b): contour plot; (c)-(d): profiles before and after the collision.}\n\\label{f:inelastic2}\n\\end{figure}\nOn the other hand, if we change the parameters as $\\alpha^{(1)}_{2}=1.0$, \n\\alpha^{(2)}_{2}=0$, then the energy of one soliton, which are distributed between two components before the\ncollision is concentrated into one component $q_2$ after the collision.\n The example is\nshown in Fig. 7.\n\\begin{figure}[htbp]\n\\centerline{\n\\includegraphics[scale=0.35]{Inelastic3_ct_q1.eps}\\quad\n\\includegraphics[scale=0.35]{Inelastic3_ct_q2.eps}} \\kern-0.315\\textwidth\n\\hbox to\n\\textwidth{\\hss(a)\\kern6.5em\\hss(b)\\kern-1.5em} \\kern+0.315\\textwidth\n\\centerline{\n\\includegraphics[scale=0.35]{Inelastic3_q1.eps}\\quad\n\\includegraphics[scale=0.35]{Inelastic3_q2.eps}} \\kern-0.315\\textwidth\n\\hbox to\n\\textwidth{\\hss(c)\\kern6.5em\\hss(d)\\kern-1.5em} \\kern+0.315\\textwidth\n\\caption{Inelastic collision in coupled\ncomplex short pulse equation for $p_{1}=1+1.2\\mathrm{i}$, $p_{2}=1+2\\mathrm{\n}$, $\\protect\\alpha^{(1)}_{1}=\\alpha^{(2)}_{1}=1.0$, \\ $\\alpha^{(1)}_{2}=1.0$, $\\alpha^{(2)}_{2}=0$. (a)-(b): contour plot; (c)-(d): profiles before and after the collision.}\n\\label{f:inelastic3}\n\\end{figure}\n\\section{Concluding Remarks}\nIn this paper, we proposed a complex short pulse equation and its\ntwo-component generalization. Both of the equations can be used to model\nthe propagation of ultra-short pulses in optical fibers. We have shown their\nintegrability by finding the Lax pairs and infinite numbers of conservation\nlaws. Furthermore, multi-soliton solutions are constructed via Hirota's\nbilinear method. 
In particular, one-soliton solution for the CSP equation is\nan envelope soliton with a few optical cycles under certain condition, which\nperfectly match the requirement for the ultra-short pulses.\nThe $N$-solution for complex short pulse equation and its\ntwo-component generalization is a benchmark for the study of soliton\ninteractions in ultra-short pulses propagation in optical fibers. It is expected that these analytical\nsolutions can be confirmed from experiments.\n\nSimilar to our previous results for the integrable discretizations of the short pulse equation \\cite{SPE_discrete1}, how to construct integrable discretizations of the CSP and coupled CSP equations and how to\napply them for the numerical simulations is also an interesting topic to be\nstudied. It is obviously beyond the scope of the present paper, we\nare to report the results on this aspect in a forthcoming paper.\n\n\n\n\n\n\n\\apptitle\n\\section{}\n\\appeqn\n\\textbf{Proof of Theorem 4.2}\n\n\\begin{proof}\nFirst we define\n\\[\n(b_j, \\bar{\\beta}_1)= \\bar{\\alpha}_j\\delta_{\\mu,1} \\,, \\quad (b_j, \\bar{\\bet\n}_2)= \\bar{\\alpha}_j\\delta_{\\mu,2}\\,,\n\\]\nwhere $index(b_j)=\\mu$ , then from the fact\n\\[\n\\mathrm{Pf}(\\bar{a}_j,a_k)= \\mathrm{Pf} (a_{N+j},a_{N+k})\\,, \\mathrm{Pf}\n\\bar{b}_j,b_k)= \\mathrm{Pf} (b_{N+j},b_{N+k})\\,,\n\\]\nwe obtain\n\\[\n\\bar{f}=f\\,, \\quad \\bar{g}= \\mathrm{Pf} (d_0, \\bar{\\beta}_1, a_1, \\cdots,\na_{2N}, b_1, \\cdots, b_{2N})\\,.\n\\]\nSince\n\\[\n\\frac{\\partial} {\\partial y} \\mathrm{Pf} (a_j,a_k)= (p_j -\np_k)e^{\\eta_j+\\eta_k} = \\mathrm{Pf} (d_0, d_1, a_j,a_k)\\,,\n\\]\n\n\\[\n\\frac{\\partial} {\\partial s} \\mathrm{Pf} (a_j,a_k)= (p^{-1}_k - p^{-1}_j)\ne^{\\eta_j+\\eta_k} = \\mathrm{Pf} (d_{-1}, d_0, a_j,a_k)\\,,\n\\]\n\n\\[\n\\frac{\\partial^2} {\\partial s^2} \\mathrm{Pf} (a_j,a_k)= (p^{-2}_k -\np^{-2}_j) e^{\\eta_j+\\eta_k} = \\mathrm{Pf} (d_{-2}, d_0, a_j,a_k)\\,,\n\\]\n\\[\n\\frac{\\partial^2} {\\partial y \\partial s}\\mathrm{Pf} (a_j,a_k)= (p_jp^{-1}_k\n- p_k p^{-1}_j) e^{\\eta_j+\\eta_k} = \\mathrm{Pf} (d_{-1}, d_1, a_i,a_j)\\,,\n\\]\nwe then have\n\\[\n\\frac{\\partial f} {\\partial y} = \\mathrm{Pf} (d_0, d_1, \\cdots)\\,,\n\\]\n\n\\[\n\\frac{\\partial f} {\\partial s} = \\mathrm{Pf} (d_{-1}, d_0, \\cdots)\\,,\n\\]\n\n\\[\n\\frac{\\partial^2 f} {\\partial s^2} = \\mathrm{Pf} (d_{-2}, d_0, \\cdots)\\,,\n\\]\n\n\\[\n\\frac{\\partial^2 f} {\\partial y \\partial s} = \\mathrm{Pf} (d_{-1}, d_1,\n\\cdots)\\,.\n\\]\nHere $\\mathrm{Pf} (d_0, d_1, a_1, \\cdots, a_{2N}, b_1, \\cdots, b_{2N})$ is\nabbreviated by $\\mathrm{Pf} (d_0, d_1, \\cdots)$, so as other similar\npfaffians.\n\nFurthermore, it can be shown\n\\begin{eqnarray*}\n&& \\frac{\\partial g} {\\partial y} = \\frac{\\partial} {\\partial y} \\left\n\\sum_{j=1}^{2N} (-1)^{j} \\mathrm{Pf} (d_0, a_j) \\mathrm{Pf} (\\beta_1, \\cdots\n,\\hat{a}_j, \\cdots)\\right] \\\\\n&& =\\sum_{j=1}^{2N} (-1)^{j} \\left[ \\left( {\\partial_y} \\mathrm{Pf} (d_0,\na_j) \\right) \\mathrm{Pf} (\\beta_1, \\cdots ,\\hat{a}_j, \\cdots) + \\mathrm{Pf}\n(d_0, a_j) {\\partial_y} \\mathrm{Pf} (\\beta_1, \\cdots ,\\hat{a}_j, \\cdots)\n\\right] \\\\\n&& =\\sum_{j=1}^{2N} (-1)^{j} \\left[ \\mathrm{Pf} (d_1, a_j) \\mathrm{Pf}\n(\\beta_1, \\cdots ,\\hat{a}_j, \\cdots) + \\mathrm{Pf} (d_0, a_j) \\mathrm{Pf}\n(\\beta_1, d_0, d_1, \\cdots ,\\hat{a}_j, \\cdots) \\right] \\\\\n&& = \\mathrm{Pf} ( d_1, \\beta_1, \\cdots)+ \\mathrm{Pf} ( d_0, \\beta_1, d_0,\nd_1, \\cdots) \\\\\n&& = \\mathrm{Pf} (d_1, \\beta_1, 
\\cdots)\\,.\n\\end{eqnarray*}\n\nHere $\\hat{a}_j$ means that the index $j$ is omitted. Similarly, we can show\n\\[\n\\frac{\\partial g} {\\partial s} = \\mathrm{Pf} (d_{-1}, \\beta_1, \\cdots)\\,,\n\\]\n\n\\begin{eqnarray*}\n&& \\frac{\\partial^2 g} {\\partial y \\partial s} = \\frac{\\partial} {\\partial y}\n\\left[\\sum_{j=1}^{2N} (-1)^{j} \\mathrm{Pf} (d_{-1}, a_j) \\mathrm{Pf}\n(\\beta_1, \\cdots ,\\hat{a}_j, \\cdots)\\right] \\\\\n&& =\\sum_{j=1}^{2N} (-1)^{j} \\left[ \\left( {\\partial_y} \\mathrm{Pf} (d_{-1},\na_j) \\right) \\mathrm{Pf} (\\beta_1, \\cdots ,\\hat{a}_j, \\cdots) + \\mathrm{Pf}\n(d_{-1}, a_j) {\\partial_y} \\mathrm{Pf} (\\beta_1, \\cdots ,\\hat{a}_j, \\cdots)\n\\right] \\\\\n&& =\\sum_{j=1}^{2N} (-1)^{j} \\left[ \\mathrm{Pf} (d_0, a_j) \\mathrm{Pf}\n(\\beta_1, \\cdots ,\\hat{a}_j, \\cdots) + \\mathrm{Pf} (d_{-1}, a_j) \\mathrm{Pf}\n(\\beta_1, d_0, d_1, \\cdots ,\\hat{a}_j, \\cdots) \\right] \\\\\n&& = \\mathrm{Pf} (d_0, \\beta_1, \\cdots)+ \\mathrm{Pf} (d_{-1}, \\beta_1, d_0,\nd_1, \\cdots)\\,.\n\\end{eqnarray*}\n\nAn algebraic identity of pfaffian \\cite{Hirota}\n\\begin{eqnarray*}\n&& \\mathrm{Pf} (d_{-1}, \\beta_1, d_0, d_1, \\cdots) \\mathrm{Pf} (\\cdots)=\n\\mathrm{Pf} (d_{-1}, d_0, \\cdots) \\mathrm{Pf} (d_1, \\beta_1, \\cdots) \\\\\n&& \\quad - \\mathrm{Pf} (d_{-1}, d_1, \\cdots) \\mathrm{Pf} (d_0, \\beta_1,\n\\cdots) + \\mathrm{Pf} (d_{-1}, \\beta_1, \\cdots) \\mathrm{Pf} (d_0, d_1,\n\\cdots)\\,,\n\\end{eqnarray*}\nimplies\n\\[\n( {\\partial_s} {\\partial_y} g-g) \\times f = {\\partial_s} f \\times {\\partial_\n} g - {\\partial_s} {\\partial_y} f \\times g + {\\partial_s} g \\times \n\\partial_y} f \\,.\n\\]\nTherefore, the first bilinear equation is approved.\n\nThe second bilinear equation can be proved in the same way by Iwao and\nHirota \\cite{IwaoHirota}.\n\\begin{eqnarray}\n&& \\frac{\\partial^2 f} {\\partial s^2} \\times 0 - \\frac{\\partial f} {\\partial\ns} \\frac{\\partial f} {\\partial s} \\nonumber \\\\\n&& = \\mathrm{Pf} (d_{-2}, d_0, \\cdots) \\mathrm{Pf} (d_{0}, d_0, \\cdots) -\n\\mathrm{Pf} (d_{-1}, d_0, \\cdots) \\mathrm{Pf} (d_{-1}, d_0, \\cdots)\n\\nonumber \\\\\n&& = \\sum_{i=1}^{2N} (-1)^i \\mathrm{Pf} (d_{-2}, a_i) \\mathrm{Pf} (d_0,\n\\cdots, \\hat{a}_i, \\cdots) \\sum_{j=1}^{2N} (-1)^j \\mathrm{Pf} (d_{0}, a_j)\n\\mathrm{Pf} (d_0, \\cdots, \\hat{a}_j, \\cdots) \\nonumber \\\\\n&& -\\sum_{i=1}^{2N} (-1)^i \\mathrm{Pf} (d_{-1}, a_i) \\mathrm{Pf} (d_0,\n\\cdots, \\hat{a}_i, \\cdots) \\sum_{j=1}^{2N} (-1)^j \\mathrm{Pf} (d_{-1}, a_j)\n\\mathrm{Pf} (d_0, \\cdots, \\hat{a}_j, \\cdots) \\nonumber \\\\\n&& =\\sum_{i,j=1}^{2N} (-1)^{i+j} \\left[ \\mathrm{Pf} (d_{-2}, a_i) \\mathrm{Pf}\n(d_{0}, a_j) -\\mathrm{Pf} (d_{-1}, a_i) \\mathrm{Pf} (d_{-1}, a_j) \\right]\n\\nonumber \\\\\n&& \\quad \\times \\mathrm{Pf} (d_0, \\cdots, \\hat{a}_i, \\cdots) \\mathrm{Pf}\n(d_0, \\cdots, \\hat{a}_j, \\cdots) \\nonumber \\\\\n&&=\\sum_{i,j=1}^{2N} (-1)^{i+j+1} \\left[p_i^{-2} + p_i^{-1}p_j^{-1} \\right]\n\\mathrm{Pf} (a_i, a_j) \\mathrm{Pf} (d_0, \\cdots, \\hat{a}_i, \\cdots) \\mathrm\nPf} (d_0, \\cdots, \\hat{a}_j, \\cdots) \\nonumber\n\\end{eqnarray}\n\nThe summation over the second term within the bracket vanishes due to the\nfact that\n\\begin{eqnarray*}\n&& \\sum_{i,j=1}^{2N} (-1)^{i+j+1} p_i^{-1}p_j^{-1} \\mathrm{Pf} (a_i, a_j)\n\\mathrm{Pf} (d_0, \\cdots, \\hat{a}_i, \\cdots) \\mathrm{Pf} (d_0, \\cdots, \\hat{\n}_j, \\cdots) \\\\\n&& = \\sum_{j,i=1}^{2N} (-1)^{j+i+1} p_j^{-1}p_i^{-1} \\mathrm{Pf} (a_j, a_i)\n\\mathrm{Pf} (d_0, \\cdots, \\hat{a}_j, 
\\cdots) \\mathrm{Pf} (d_0, \\cdots, \\hat{\n}_i, \\cdots) \\\\\n&& = -\\sum_{i,j=1}^{2N} (-1)^{i+j+1} p_i^{-1}p_j^{-1} \\mathrm{Pf} (a_i, a_j)\n\\mathrm{Pf} (d_0, \\cdots, \\hat{a}_i, \\cdots) \\mathrm{Pf} (d_0, \\cdots, \\hat{\n}_j, \\cdots)\\,.\n\\end{eqnarray*}\nTherefore,\n\\begin{eqnarray}\n&& - \\frac{\\partial f} {\\partial s} \\frac{\\partial f} {\\partial s}\n=\\sum_{i,j=1}^{2N} (-1)^{i+j+1} p_i^{-2} \\mathrm{Pf} (a_i, a_j) \\mathrm{Pf}\n(d_0, \\cdots, \\hat{a}_i, \\cdots) \\mathrm{Pf} (d_0, \\cdots, \\hat{a}_j, \\cdots)\n\\nonumber \\\\\n&&=\\sum_{i=1}^{2N} (-1)^{i+1} p_i^{-2} \\mathrm{Pf} (d_0, \\cdots, \\hat{a}_i,\n\\cdots) \\left[ \\sum_{j=1}^{2N} (-1)^{j} \\mathrm{Pf} (a_i, a_j) \\mathrm{Pf}\n(d_0, \\cdots, \\hat{a}_j, \\cdots) \\right] \\nonumber \\label{CSP1_proof5}\n\\end{eqnarray}\nFurther, we note that the following identity can be substituted into the\nterm within bracket\n\\begin{eqnarray*}\n&& \\sum_{j=1}^{2N} (-1)^{j} \\mathrm{Pf} (a_i, a_j) \\mathrm{Pf} (d_0, \\cdots,\n\\hat{a}_j, \\cdots) \\\\\n&& = \\mathrm{Pf} (d_{0}, a_i) \\mathrm{Pf} (\\cdots) + (-1)^{i+1} \\mathrm{Pf}\n(d_0, \\cdots, \\hat{b}_i, \\cdots)\\,\n\\end{eqnarray*}\nwhich is obtained from the expansion of the following vanishing pfaffian \n\\mathrm{Pf} (a_i, d_0, \\cdots)$ on $a_i$. Consequently, we have\n\\begin{eqnarray}\n&& - \\frac{\\partial f} {\\partial s} \\frac{\\partial f} {\\partial s} =\n\\nonumber \\\\\n&& \\sum_{i=1}^{2N} (-1)^{i+1} p_i^{-2} \\mathrm{Pf} (d_0, \\cdots, \\hat{a}_i,\n\\cdots) \\left[\\mathrm{Pf} (d_{0}, a_i) \\mathrm{Pf} (\\cdots) + (-1)^{i+1}\n\\mathrm{Pf} (d_0, \\cdots, \\hat{b}_i, \\cdots)\\right]\\,, \\nonumber \\\\\n&& = -\\mathrm{Pf} (\\cdots) \\mathrm{Pf} (d_{-2}, d_0, \\cdots)+\n\\sum_{i=1}^{2N} p_i^{-2} \\mathrm{Pf} (d_0, \\cdots, \\hat{a}_i, \\cdots)\n\\mathrm{Pf} (d_0, \\cdots, \\hat{b}_i, \\cdots)\\,,\n\\end{eqnarray}\nwhich can be rewritten as\n\\begin{equation}\n\\frac{\\partial^2 f} {\\partial s^2} f- \\frac{\\partial f} {\\partial s} \\frac\n\\partial f} {\\partial s}= \\sum_{i=1}^{2N} p_i^{-2} \\mathrm{Pf} (d_0, \\cdots,\n\\hat{a}_i, \\cdots) \\mathrm{Pf} (d_0, \\cdots, \\hat{b}_i, \\cdots)\\,.\n\\end{equation}\n\nNow, we work on the r.h.s of the second bilinear equation.\n\\begin{eqnarray} \\label{CSP1_proof1}\n&& \\frac 12 |g|^2 = \\frac 12 \\mathrm{Pf} (d_0,\\beta_1, \\cdots) \\mathrm{Pf}\n(d_0, \\bar{\\beta}_1, \\cdots) \\nonumber \\\\\n&& = \\frac 12 \\sum_{i,j}^{2N} (-1)^{i+j}\\mathrm{Pf} (b_i, \\beta_1) \\mathrm{P\n} (d_0, \\cdots,\\hat{b}_i, \\cdots) \\mathrm{Pf} (b_j,\\bar{\\beta}_1) \\mathrm{Pf}\n(d_0, \\cdots,\\hat{b}_j, \\cdots) \\nonumber \\\\\n&& = \\frac 14 \\sum_{i,j}^{2N} (-1)^{i+j} (\\alpha_i \\bar{\\alpha}_j) \\mathrm{P\n} (d_0, \\cdots,\\hat{b}_i, \\cdots) \\mathrm{Pf} (d_0, \\cdots,\\hat{b}_j, \\cdots)\n\\nonumber \\\\\n&& = \\sum_{i,j}^{2N} (-1)^{i+j} \\left(p_i^{-2}-p_{j}^{-2}\\right) \\mathrm{Pf}\n(b_i,b_j)\\mathrm{Pf}(d_0, \\cdots,\\hat{b}_i, \\cdots) \\mathrm{Pf} (d_0, \\cdots\n\\hat{b}_j, \\cdots) \\nonumber \\\\\n\\end{eqnarray}\nNext, the expansion of the vanishing pfaffian $\\mathrm{Pf} (b_i, d_0,\n\\cdots) $ on $b_i$ yields\n\\begin{equation}\n\\sum_{j=1}^{2N} (-1)^{i+j}\\mathrm{Pf} (b_i, b_j) \\mathrm{Pf} (d_0, \\cdots,\n\\hat{b}_j, \\cdots) = \\mathrm{Pf} (d_0, \\cdots,\\hat{a}_i, \\cdots)\\,,\n\\end{equation}\nwhich subsequently leads to\n\\begin{eqnarray}\n&& \\sum_{i,j}^{2N} (-1)^{i+j} p_i^{-2} \\mathrm{Pf} (b_i,b_j)\\mathrm{Pf}(d_0,\n\\cdots,\\hat{b}_i, \\cdots) \\mathrm{Pf} (d_0, \\cdots,\\hat{b}_j, 
\\cdots)\n\\nonumber \\\\\n&& = \\sum_{i}^{2N} p_i^{-2} \\mathrm{Pf} (d_0, \\cdots,\\hat{a}_i, \\cdots)\n\\mathrm{Pf} (d_0, \\cdots,\\hat{b}_i, \\cdots)\\,. \\label{CSP1_proof2}\n\\end{eqnarray}\nSimilarly, we can show that\n\\begin{eqnarray}\n&& -\\sum_{i,j}^{2N} (-1)^{i+j} p_j^{-2} \\mathrm{Pf} (b_i,b_j)\\mathrm{Pf\n(d_0, \\cdots,\\hat{b}_i, \\cdots) \\mathrm{Pf} (d_0, \\cdots,\\hat{b}_j, \\cdots)\n\\nonumber \\\\\n&& = \\sum_{j}^{2N} p_j^{-2} \\mathrm{Pf} (d_0, \\cdots,\\hat{a}_j, \\cdots)\n\\mathrm{Pf} (d_0, \\cdots,\\hat{b}_j, \\cdots)\\,. \\label{CSP1_proof3}\n\\end{eqnarray}\nSubstituting Eqs. (\\ref{CSP1_proof2})--(\\ref{CSP1_proof2}) into Eq. (\\re\n{CSP1_proof1}), we arrive at\n\\begin{equation} \\label{CSP1_proof4}\n\\frac 12 |g|^2= 2\\sum_{i}^{2N} p_i^{-2} \\mathrm{Pf} (d_0, \\cdots,\\hat{a}_i,\n\\cdots) \\mathrm{Pf} (d_0, \\cdots,\\hat{b}_i, \\cdots)\\,.\n\\end{equation}\nConsequently we have\n\\begin{equation}\n2\\frac{\\partial^2 f} {\\partial s^2} f- 2\\frac{\\partial f} {\\partial s} \\frac\n\\partial f} {\\partial s}= \\frac 12 |g|^2\\,,\n\\end{equation}\nwhich is nothing but the second bilinear equation. Therefore, the proof is\ncomplete.\n\\end{proof}\n\n\\textbf{The proof of Theorem 4.6}\n\n\\begin{proof}\nThe proof of the first bilinear equation can be done exactly in the same way\nas for the complex short pulse equation. In what follows, we prove the\nsecond equation by starting from the r.h.s of this equation. Because\n\\[\n\\bar{g}_1= \\mathrm{Pf} (d_0, \\bar{\\beta}_1, a_1, \\cdots, a_{2N}, b_1,\n\\cdots, b_{2N})\\,,\n\\]\n\\[\n\\bar{g}_2= \\mathrm{Pf} (d_0, \\bar{\\beta}_2, a_1, \\cdots, a_{2N}, b_1,\n\\cdots, b_{2N})\\,,\n\\]\nthe r.h.s of the bilinear equation turns out to be\n\\begin{eqnarray} \\label{CCSP1_proof1}\n&& \\frac 12 \\left( g_{1} \\bar{g}_{1} + g_{2} \\bar{g}_{2} \\right) \\nonumber\n\\\\\n&& = \\frac 12 \\sum^2_{k=1} \\sum_{i,j}^{2N} (-1)^{i+j}\\mathrm{Pf} (b_i,\n\\beta_k) \\mathrm{Pf} (d_0, \\cdots,\\hat{b}_i, \\cdots) \\mathrm{Pf} (b_j,\\bar\n\\beta}_k) \\mathrm{Pf} (d_0, \\cdots,\\hat{b}_j, \\cdots) \\nonumber \\\\\n&& = \\frac 14 \\sum_{i,j}^{2N} (-1)^{i+j} \\sum^2_{k=1}(\\alpha^{(k)}_i \\bar\n\\alpha}^{(k)}_j) \\mathrm{Pf} (d_0, \\cdots,\\hat{b}_i, \\cdots) \\mathrm{Pf}\n(d_0, \\cdots,\\hat{b}_j, \\cdots) \\nonumber \\\\\n&& = \\sum_{i,j}^{2N} (-1)^{i+j} \\left(p_i^{-2}-p_{j}^{-2}\\right) \\mathrm{Pf}\n(b_i,b_j)\\mathrm{Pf}(d_0, \\cdots,\\hat{b}_i, \\cdots) \\mathrm{Pf} (d_0, \\cdots\n\\hat{b}_j, \\cdots) \\nonumber \\\\\n\\end{eqnarray}\nSimilar to the complex short pulse equation, we can show\n\n\\begin{equation} \\label{CCSP1_proof4}\n\\frac 12 \\left(|g_{1}|^2 + |g_{2}|^2 \\right) = 2\\sum_{i}^{2N} p_i^{-2}\n\\mathrm{Pf} (d_0, \\cdots,\\hat{a}_i, \\cdots) \\mathrm{Pf} (d_0, \\cdots,\\hat{b\n_i, \\cdots)\\,.\n\\end{equation}\nRegarding the r.h.s of the bilinear equation, exactly the same as the proof\nof the Theorem 4.2, we have\n\\begin{equation} \\label{CCSP1_proof5}\n\\frac{\\partial^2 f} {\\partial s^2} f- \\frac{\\partial f} {\\partial s} \\frac\n\\partial f} {\\partial s} =\\sum_{i}^{2N} p_i^{-2} \\mathrm{Pf} (d_0, \\cdots\n\\hat{a}_i, \\cdots) \\mathrm{Pf} (d_0, \\cdots,\\hat{b}_i, \\cdots)\\,.\n\\end{equation}\nTherefore the second bilinear equation is proved.\n\\end{proof}\n\n\\thank\n\\section{}\nThe author is grateful for the useful discussions with Dr. Yasuhiro Ohta\n(Kobe University) and Dr. Kenichi Maruno at Waseda University. This work is partially supported by the National Natural Science Foundation of China (No. 
11428102).\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nWith the exponential growth of Internet usage, online users massively publish textual content on online media. For instance, a micro-blogging website, Twitter, allows users to post their content in 140-characters length. A popular social media like Facebook allows users to interact and share content in their communities, as known as ``Friends''. An electronic commercial website, Amazon, allows users to ask questions on their interested items and give reviews on their purchased products. While these textual data have been broadly studied in various research areas (e.g. automatic text summarization, information retrieval, information extraction, etc.), online debate domain, which recently becomes popular among Internet users, has not yet largely explored. For this reason, there are no sufficient resources of annotated debate data available for conducting research in this genre. This motivates us to explore online debate data. \n\nIn this paper, we collected and annotated debate data for an automatic summarization task. There are 11 debate topics collected. Each topic consists of different number of debate comments. In total, there are 341 debate comments collected, accounting for 2518 sentences. In order to annotate online debate data, we developed a web-based system which simply runs on web browsers. We designed the user interface for non-technical users. When participants logged into the system, a debate topic and a comment which is split to a list of consecutive sentences were shown at a time. The annotators were asked to select salient sentences from each comment which summarize it. The number of salient sentences chosen from each comment is controlled by a compression rate of 20\\% which is automatically calculated by the web-based system. For instance, Table \\ref{table_annotation} shows a debate comment to be annotated by an annotator. Based on the compression rate of 20\\%, the annotator needs to choose 1 sentence that summarizes the comment. This compression rate was also used in \\cite{Neto2002ATS} and \\cite{Morris199217}. In total, we obtained 5 sets of annotated debate data. Each set of data consists of 341 comments with total 519 annotated salient sentences. \n\n\nInter-annotator agreement in terms of Cohen's Kappa and Krippendorff's alpha are 0.28 and 0.27 respectively. For social media data such low agreements have been also reported by related work. For instance, \\cite{Mitrat} reports Kappa scores between 0.20 and 0.50 for human constructed newswire summaries. \\cite{Liu:2008:CRH:1557690.1557747} reports again Kappa scores between 0.10 and 0.35 for the conversation transcripts. Our agreement scores are based on strict conditions where agreement is achieved when annotators have selected exact the same sentences. However, such condition does not consider syntactically different sentences bearing the same semantic meaning. Thus we also experimented with a more relaxed version that is based on semantic similarity between sentences. We regard two sentences as identical when their semantic similarity is above a threshold. Our results revealed that after applying such an approach the averaged Cohen's Kappa and Krippendorff's alpha increase to 35.71\\% and 48.15\\% respectively. \n\n\n\n\nFinally we report our results of automatic debate data summarization. We implemented an extractive text summarization system that extracts salience sentences from user comments. 
Among the features the most contributing ones are sentence position, debate titles, and cosine similarity of the debate title words and sentences. \n\n\nThe paper is structured as follows. First we describe the nature of our online debate data. In Section \\ref{data_annotation} we discuss the procedures of data annotation and discuss our experiments with semantic similarity applied on inter-annotator agreement computation. In Section \\ref{experiment_salient}, we present our first results on automatically performing debate data summarization. We conclude in Section \\ref{conclusion}.\n\n\n\n\n\\begin{table}[ht]\n\\begin{flushleft}\n\\begin{framed}\n\\noindent\\textbf{Task 02: Is global warming fictitious?}\\\\\n\\emph{$[1]$} I do not think global warming is fictitious.\\\\\n\\emph{$[2]$} I understand a lot of people do not trust every source and they need solid proof.\\\\\n\\emph{$[3]$} However, if you look around us the proof is everywhere.\\\\\n\\emph{$[4]$} It began when the seasons started getting harsh and the water levels were rising.\\\\\n\\emph{$[5]$} I do not need to go and see the ice caps melting to know the water levels are rising and the weather is changing.\\\\\n\\emph{$[6]$} I believe global warming is true, and we should try and preserve as much of the Earth as possible.\n\\end{framed}\n\\end{flushleft}\n\\caption{Examples of the debate data to be annotated.}\\label{table_annotation} \n\\end{table}\n\n\n\n\n\n\\begin{table}[ht]\n\\begin{flushleft}\n\\begin{framed}\n\\textbf{Example 1: Propositions from the proponents} \n\\\\ - Global warming is real.\n\\\\ - Global warming is an undisputed scientific fact. \n\\\\ - Global warming is most definitely not a figment of anyone's imagination, because the proof is all around us.\n\\\\ - I believe that global warming is not fictitious, based on the observational and comparative evidence that is currently presented to us.\n\\vskip 0.2in\n\\textbf{Example 2: Propositions from the opponents} \n\\\\ - Global warming is bull crap.\n\\\\ - Global Warming isn't a problem at all.\n\\\\ - Just a way for the government to tax people on more things by saying their trying to save energy.\n\\\\ - Yes, global warming is a myth, because they have not really proven the science behind it. \n\\\\ \n\\end{framed}\n\\end{flushleft}\n\\caption{Examples of Paraphrased Arguments.}\\label{table_pargument} \n\\end{table}\n\\FloatBarrier\n\n\n\\section{Online Debate Data and Their Nature} \\label{nature_debate}\n\nThe nature of online debate is different from other domains. It gives opportunities to users to discuss ideological debates in which users can choose a stance of a debate, express their opinions to support their stance, and oppose other stances. To conduct our experiments we collected debate data from the Debate discussion forum.\\footnote{http:\/\/www.debate.org} The data are related to an issue of the existence of global warming. In the data, there are two main opposing sides of the arguments. A side of proponents believes in the existence of global warming and the other side, the opponents, says that global warming is not true. When the proponents and the opponents express their sentiments, opinions, and evidences to support their propositions, the arguments between them arise. Moreover, when the arguments are referred across the conversation in the forum, they are frequently paraphrased. Table \\ref{table_pargument} illustrates examples of the arguments being paraphrased. Sentences expressing related meaning are written in different context. 
\n\n\n\\section{Annotation Procedures} \\label{data_annotation}\nIn this paper, we collected and annotated debate data for an automatic summarization task. There are 11 debate topics collected. Each topic consists of a different number of debate comments as shown in Table \\ref{table_stats_corpus}. The annotation was guided through a web-based application. The application was designed for non-technical users. When participants logged in to the system, a debate topic and a comment which is split to a list of sentences were shown at a time. The annotators were given a guideline to read and select salient sentences that summarize the comments. From each comment we allowed the participants to select only 20\\% of the comment sentences. These 20\\% of the sentences are treated as the summary of the shown comment. In the annotation task, all comments in the 11 debate topics were annotated. We recruited 22 participants: 10 males and 12 participants to annotate salient sentences. The participants' backgrounds were those who are fluent in English and aged above 18 years old. We aimed to have 5 annotations sets for each debate topic. Due to a limited number of annotators and a long list of comments to be annotated in each debate topic, 11 participants were asked to complete more than one debate topic, but were not allowed to annotate the same debate topics in which they had done before. In total, 55 annotation sets were derived: 11 debate topics and each with 5 annotation sets. Each annotation set consists of 341 comments with total 519 annotated salient sentences.\\footnote{This dataset can be downloaded at https:\/\/goo.gl\/3aicDN.}\n\n\\begin{table}[ht]\n\\centering\n\\begin{tabular}{@{}clccc@{}}\n\\toprule\n\\textbf{Topic ID} & \\multicolumn{1}{c}{\\textbf{Debate Topics}} & \\textbf{Comments} & \\textbf{Sentences} & \\textbf{Words} \\\\ \\midrule\n01 & Is global warming a myth? & 18 & 128 & 2701 \\\\\n02 & Is global warming fictitious? & 28 & 173 & 3346 \\\\\n03 & Is the global climate change man made? & 10 & 47 & 1112 \\\\\n04 & Is global climate change man-made? & 103 & 665 & 12054 \\\\\n05 & Is climate change man-made? & 9 & 46 & 773 \\\\\n06 & Do you believe in global warming? & 21 & 224 & 3538 \\\\\n07 & Does global warming exist? & 68 & 534 & 9178 \\\\\n08 & \\begin{tabular}[c]{@{}l@{}}Can someone prove that climate \\\\ change is real (yes) or fake (no)?\\end{tabular} & 8 & 49 & 1127 \\\\\n09 & Is global warming real? & 51 & 434 & 6749 \\\\\n10 & Is global warming true? & 5 & 26 & 375 \\\\\n11 & \\begin{tabular}[c]{@{}l@{}}Is global warming real (yes) or just a bunch \\\\ of scientist going to extremes (no)?\\end{tabular} & 20 & 192 & 2988 \\\\\\midrule\n\\textbf{} & \\multicolumn{1}{r}{\\textbf{Average}} & \\textbf{31} & \\textbf{229} & \\textbf{3995} \\\\\n\\textbf{} & \\multicolumn{1}{r}{\\textbf{Total}} & \\textbf{341} & \\textbf{2518} & \\textbf{43941} \\\\ \\bottomrule\n\\end{tabular}\n\\caption{Statistical information of the online debate corpus.}\n\\label{table_stats_corpus}\n\\end{table}\n\n\n\n\\subsection{Inter-Annotator Agreement}\n\nIn order to compute inter-annotator agreement between the annotators we calculated the averaged Cohen's Kappa and Krippendorff's alpha with a distant metric, Measuring Agreement on Set-valued Items metric (MASI). The scores of averaged Cohen's Kappa and Krippendorff's alpha are 0.28 and 0.27 respectively. 
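A minimal sketch of how these coefficients can be computed with NLTK is given below; the data layout (one triple per annotator and comment, with the selected sentence indices stored as a frozenset) and the variable names are assumptions made for illustration rather than the exact scripts used in this work.

\\begin{verbatim}
from nltk.metrics.agreement import AnnotationTask
from nltk.metrics.distance import masi_distance

# One record per (annotator, comment): the label is the set of sentence
# indices that the annotator marked as salient in that comment.
triples = [
    ("annotator_1", "comment_042", frozenset({0, 3})),
    ("annotator_2", "comment_042", frozenset({0})),
    # ... one triple for every annotator/comment pair
]

task = AnnotationTask(data=triples, distance=masi_distance)
print("Krippendorff's alpha (MASI):", task.alpha())
print("Averaged Cohen's kappa:", task.kappa())
\\end{verbatim}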
According to the scale of \\cite{krippendorff-2004}, our alpha did neither accomplish the reliability scale of 0.80, nor the marginal scales between 0.667 and 0.80. Likewise, our Cohen's Kappa only achieved the agreement level of \\emph{fair agreement}, as defined by \\cite{Landis77}. However, such low agreement scores are also reported by others who aimed creating gold standard summaries from news texts or conversational data \\cite{Mitrat} \\cite{Liu:2008:CRH:1557690.1557747} .\n\n\nOur analysis shows that the low agreement is caused by different preferences of annotators in the selection of salient sentences. As shown in Table \\ref{table_pargument} the sentences are syntactically different but bear the same semantic meaning. In a summarization task with a compression threshold, such situation causes the annotators to select one of the sentences but not all. Depending on each annotator's preference the selection leads to different set of salient sentences. To address this we relaxed the agreement computation by treating sentences equal when they are semantically similar. We outline details in the following section.\n\n\n\n\n\\subsection{Relaxed Inter-Annotator Agreement}\n\nWhen an annotator selects a sentence, other annotators might select other sentences expressing similar meaning. In this experiment, we aim to detect sentences that are semantically similar by applying Doc2Vec from the Gensim package \\cite{rehurek_lrec}. Doc2Vec model simultaneously learns the representation of words in sentences and the labels of the sentences. The labels are numbers or chunks of text which are used to uniquely identify each sentence. We used the debate data and a richer collections of sentences related to climate change to train the Doc2Vec model. In total, there are 10,920 sentences used as the training set. \n\nTo measure how two sentences are semantically referring to the same content, we used a function provided in the package to calculate cosine similarity scores among sentences. A cosine similarity score of 1 means that the two sentences are semantically equal and 0 is when it is opposite the case. In the experiment, we manually investigated pairs of sentences at different threshold values and found that the approach is stable at the threshold level above 0.44. The example below shows a pair of sentences obtained at 0.44 level. \\\\\n\n\n\\indent \\textbf{S1: }\\emph{Humans are emitting carbon from our cars, planes and factories, which is a heat trapping particle.}\\\\\n\\indent \\textbf{S2: }\\emph{So there is no doubt that carbon is a heat trapping particle, there is no doubt that our actions are emitting carbon into the air, and there is no doubt that the amount of carbon is increasing.}\\\\\n\n\nIn the pair, the two sentences mention the same topic (i.e. \\emph{carbon emission}) and express the idea in the same context. We used the threshold 0.44 to re-compute the agreement scores. By applying the semantic approach, the inter-annotator agreement scores of Cohen's Kappa and Krippendorff's alpha increase from 0.28 to 35.71\\% and from 0.27 to 48.15\\% respectively. The inter-annotator agreement results are illustrated in Table \\ref{iaa}. Note that, in the calculation of the agreement, we incremented the threshold by 0.02. 
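For reference, a condensed sketch of the similarity computation is shown below; the preprocessing and hyper-parameters are assumptions for illustration, and \\texttt{training\\_sentences} stands for the 10,920 sentences described above.

\\begin{verbatim}
import numpy as np
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# training_sentences: the debate sentences plus the additional
# climate-change sentences used to train the Doc2Vec model.
corpus = [TaggedDocument(words=s.lower().split(), tags=[i])
          for i, s in enumerate(training_sentences)]
model = Doc2Vec(corpus, vector_size=100, min_count=2, epochs=40)

def similarity(s1, s2):
    v1 = model.infer_vector(s1.lower().split())
    v2 = model.infer_vector(s2.lower().split())
    return float(np.dot(v1, v2) /
                 (np.linalg.norm(v1) * np.linalg.norm(v2)))

# Two annotated sentences are treated as the same selection
# when similarity(s1, s2) >= 0.44.
\\end{verbatim}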
Only particular thresholds are shown in the table due to the limited space.\n\n\n\n\n\n\\begin{table}[ht]\n\\centering\n\\begin{tabular}{@{}lcccc@{}}\n\\toprule\n\\multicolumn{1}{c}{\\textbf{Trial}} & \\textbf{\\begin{tabular}[c]{@{}c@{}}Threshold\\\\ ($\\ge$)\\end{tabular}} & \\textbf{$\\kappa$} & &\\textbf{$\\alpha$} \\\\ \\midrule\n\\multicolumn{1}{c}{Before} & & 0.28 && 0.27 \\\\\\midrule\n\\multicolumn{1}{c}{After} & 0.00 & 0.81 && 0.83 \\\\ \n & 0.10 & 0.62 & & 0.65 \\\\\n & 0.20 & 0.46 & & 0.50 \\\\\n & 0.30 & 0.40 & & 0.43 \\\\\n & 0.40 & 0.39 & & 0.41 \\\\\n & 0.42 & 0.38 & & 0.41 \\\\\n & \\textbf{0.44} & \\textbf{0.38} & & \\textbf{0.40} \\\\\n & 0.46 & 0.38 & & 0.40 \\\\\n & 0.48 & 0.38 & & 0.40 \\\\\n & 0.50 & 0.38 & & 0.40 \\\\\n & 0.60 & 0.38 & & 0.40 \\\\\n & 0.70 & 0.38 & & 0.40 \\\\\n & 0.80 & 0.38 & & 0.40 \\\\\n & 0.90 & 0.38 & & 0.40 \\\\\n & 1.00 & 0.38 & & 0.40 \\\\\\bottomrule\n\\end{tabular}\n\\caption{Inter-Annotator Agreement before and after applying the semantic similarity approach.}\n\\label{iaa}\n\\end{table}\n\n\n\n\n\n\\section{Automatic Salient Sentence Selection} \\label{experiment_salient} \n\\subsection{Support Vector Regression Model}\n\n In this experiment, we work on extractive summarization problem and aim to select sentences that are deemed important or that summarize the information mentioned in debate comments. Additionally, we aim to investigate the keys features which play the important roles in the summarization of the debate data. We view this salient sentence selection as a regression task. A regression score for each sentence is ranged between 1 to 5. It is derived by the number annotators selected that sentence divided by the number of all annotators. In this experiment, a popular machine learning package which is available in Python, called Scikit-learn \\cite{scikitLearn} is used to build a support vector regression model. We defined 8 different features and the support vector regression model combines the features for scoring sentences in each debate comment. From each comment, sentences with the highest regression scores are considered the most salient ones. \n\n\\subsection{Feature Definition}\n\\begin{enumerate}\n \\item \\textbf{Sentence Position (SP).}\nSentence position correlates with the important information in text \\cite{Baxendale,EdmundsonRatingSummary,Goldstein}. In general, humans are likely to mention the first topic in the earlier sentence and they express more information about it in the later sentences. We prove this claim by conducting a small experiment to investigate which sentence positions frequently contain salient sentences. From our annotated data, the majority votes of the sentences are significantly at the first three positions (approximately 60\\%), shaping the assumption that the first three sentences are considered as containing salient pieces of information. Equation \\ref{eq_sentence_position} shows the calculation of the score obtained by the sentence position feature. \\\\\n \\begin{equation}\t \\label{eq_sentence_position}\n SP=\\left\\{\n \\begin{array}{@{}ll@{}}\n \\frac{1}{sentence \\; position}, & \\text{if}\\ position <4 \\\\\n 0, & \\text{otherwise}\n \\end{array}\\right.\n \\end{equation} \n \n \n\\item \\textbf{Debate Titles (TT).}\nIn writing, a writer tends to repeat the title words in a document. For this reason, a sentence containing title words is likely to contain important information. We collected 11 debate titles as shown in Table \\ref{table_stats_corpus}. 
In our experiment, a sentence is considered as important when it contains mutual words as in debate titles. Equation \\ref{eq_titleword} shows the calculation of the score by this feature. \\\\\n \\begin{equation} \\label{eq_titleword}\n TT = \\frac{\\; number \\; of \\; title \\; words \\; in \\; sentence}{number \\; of \\; words \\;in \\;debate \\;titles}\n \\end{equation}\n \n \n \\item \\textbf{Sentence Length (SL).} \nSentence length also indicates the importance of sentence based on the assumption that either very short or very long sentences are unlikely to be included in the summary. Equation \\ref{eq_sentencelength} is used in the process of extracting salient sentences from debate comments. \\\\\n \\begin{equation} \\label{eq_sentencelength}\n SL = \\frac{\\; number \\; of \\; words \\; in \\; a \\; sentence}{number \\; of \\; words\\; in \\; the \\; longest \\; sentence}\n \\end{equation}\t\n\n\n \\item \\textbf{Conjunctive Adverbs (CJ).}\nOne possible feature that helps identify salient sentence is to determine conjunctive adverbs in sentences. Conjunctive adverbs were proved that they support cohesive structure of writing. For instance, ``the conjunctive adverb \\emph{moreover} has been used mostly in the essays which lead to a conclusion that it is one of the best accepted linkers in the academic writing process.\" \\cite{januliene2015use}. The NLTK POS Tagger\\footnote{http:\/\/www.nltk.org\/api\/nltk.tag.html} was used to determine conjunctive adverbs in our data. \\\\\n\n \\item \\textbf{Cosine Similarity.}\nCosine similarity has been used extensively in Information Retrieval, especially in the vector space model. Documents will be ranked according to the similarity of the given query. Equation \\ref{cosinesim} illustrates the equation of cosine similarity, where: \\emph{q} and \\emph{d} are n-dimensional vectors \\cite{Manning:1999:FSN:311445}. Cosine similarity is one of our features that is used to find similarity between two textual units. The following features are computed by applying cosine similarity. \n\n \\begin{equation} \\label{cosinesim}\t\n cos(q,d) = \\frac{\\sum\\limits_{i=1}^n q_{i} d_{i}}{\\sqrt{\\sum\\limits_{i=1}^n q^2_{i}}\\sqrt{\\sum\\limits_{i=1}^n d^2_{i}}} \n \\end{equation}\n\n \\begin{enumerate}\n \\item \\textbf{Cosine similarity of debate title words and sentences (COS\\_TTS).} For each sentence in debate comments we compute its cosine similarity score with the title words. This is based on the assumption that a sentence containing title words is deemed as important. \\\\\n \n \\item \\textbf{Cosine similarity of climate change terms and sentences (COS\\_CCTS)}. The climate change terms were collected from news media about climate change. We calculate cosine similarity between the terms and sentences. In total, there are 300 most frequent terms relating to location, person, organization, and chemical compounds.\\\\\n \n \\item \\textbf{Cosine similarity of topic signatures and sentences (COS\\_TPS).} Topic signatures play an important role in automatic text summarization and information retrieval. It helps identify the presence of complex concepts or the importance in text. In a process of determining topic signatures, words appearing occasionally in the input text but rarely in other text are considered as topic signatures. They are determined by an automatic predefined threshold which indicates descriptive information. 
Topic signatures are generated by comparing with pre-classified text on the same topic using a concept of likelihood ratio \\cite{nenkova-mckeown-2011,Lin:2000:AAT:990820.990892}, $\\lambda$ presented by \\cite{Dunning1993}. It is a statistical approach which calculates a likelihood of a word. For each word in the input, the likelihood of word occurrence is calculated in pre-classified text collection. Another likelihood values of the same word is calculated and compared in another out-of-topic collection. The word, on the topic-text collection that has higher likelihood value than the out-of-topic collection, is regarded as topic signature of a topic. Otherwise the word is ignored. \\\\ \n \\end{enumerate}\n \\item \\textbf{Semantic Similarity of Sentence and Debate Titles (COS\\_STT).} Since the aforementioned features do not semantically capture the meaning of context, we create this feature for such purpose. We compare each sentence to the list of debate titles based on the assumption that forum users are likely to repeat debate titles in their comments. Thus, we compare each sentence to the titles and then calculate the semantic similarity score by using Doc2Vec \\cite{rehurek_lrec}. \n\\end{enumerate}\n\n\n\n\n\n\n\\begin{table}[ht]\n\\centering\n\\scalebox{0.85}{\n\\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}\n\\hline\n\n\n\\textbf{ROUGE-N} & \\textbf{CB} & \\textbf{CJ} & \\textbf{COS\\_CCT} & \\textbf{COS\\_TTS} & \\textbf{COS\\_TPS} & \\textbf{SL} & \\textbf{SP} & \\textbf{COS\\_STT} & \\textbf{TT} \\\\ \\hline\n\\textbf{R-1} & 0.4773 & 0.4988 & 0.3389 & 0.5630 & 0.3907 & 0.4307 & \\textbf{0.6124} & 0.4304 & 0.5407 \\\\ \\hline\n\\textbf{R-2} & 0.3981 & 0.4346 & 0.2558 & 0.5076 & 0.2986 & 0.3550 & \\textbf{0.5375} & 0.3561 & 0.4693 \\\\ \\hline\n\\textbf{R-SU4} & 0.3783 & 0.4147 & 0.2340 & 0.4780 & 0.2699 & 0.3335 & \\textbf{0.4871} & 0.3340 & 0.4303 \\\\ \\hline\n\\end{tabular}}\n\\caption{ROUGE scores after applying Doc2Vec to the salient sentence selection.}\\label{table_rouge_scores}\n\\end{table}\n\n\n\n\n\n\n\n\n\\begin{table}[ht]\n\\centering\n\\scalebox{0.85}{\n\\begin{tabular}{lcccccc}\n\\hline\n\\multicolumn{1}{|c|}{\\multirow{2}{*}{\\textbf{Comparison Pairs}}} & \\multicolumn{2}{c|}{\\textbf{ROUGE-1}} & \\multicolumn{2}{c|}{\\textbf{ROUGE-2}} & \\multicolumn{2}{c|}{\\textbf{ROUGE SU4}} \\\\ \\cline{2-7} \n\\multicolumn{1}{|c|}{} & \\multicolumn{1}{c|}{\\textbf{Z}} & \\multicolumn{1}{c|}{\\textbf{\\begin{tabular}[c]{@{}c@{}}Asymp. Sig.\\\\ (2-tailed)\\end{tabular}}} & \\multicolumn{1}{c|}{\\textbf{Z}} & \\multicolumn{1}{c|}{\\textbf{\\begin{tabular}[c]{@{}c@{}}Asymp. Sig.\\\\ (2-tailed)\\end{tabular}}} & \\multicolumn{1}{c|}{\\textbf{Z}} & \\multicolumn{1}{c|}{\\textbf{\\begin{tabular}[c]{@{}c@{}}Asymp. 
Sig.\\\\ (2-tailed)\\end{tabular}}} \\\\ \\hline\n\\multicolumn{1}{|l|}{SP VS CB} & \\multicolumn{1}{c|}{$-4.246^b$} & \\multicolumn{1}{c|}{0*} & \\multicolumn{1}{c|}{$-3.962^b$} & \\multicolumn{1}{c|}{0*} & \\multicolumn{1}{c|}{$-3.044^b$} & \\multicolumn{1}{c|}{0.002} \\\\ \\hline\n\\multicolumn{1}{|l|}{SP VS CJ} & \\multicolumn{1}{c|}{$-3.570^b$} & \\multicolumn{1}{c|}{0*} & \\multicolumn{1}{c|}{$-3.090^b$} & \\multicolumn{1}{c|}{0.002} & \\multicolumn{1}{c|}{$-2.192^b$} & \\multicolumn{1}{c|}{0.028} \\\\ \\hline\n\\multicolumn{1}{|l|}{SP VS COS\\_CCTS} & \\multicolumn{1}{c|}{$-6.792^b$} & \\multicolumn{1}{c|}{0*} & \\multicolumn{1}{c|}{$-6.511^b$} & \\multicolumn{1}{c|}{0*} & \\multicolumn{1}{c|}{$-6.117^b$} & \\multicolumn{1}{c|}{0*} \\\\ \\hline\n\\multicolumn{1}{|l|}{SP VS COS\\_TTS} & \\multicolumn{1}{c|}{$-1.307^b$} & \\multicolumn{1}{c|}{0.191} & \\multicolumn{1}{c|}{$-.789^b$} & \\multicolumn{1}{c|}{0.43} & \\multicolumn{1}{c|}{$-.215^b$} & \\multicolumn{1}{c|}{0.83} \\\\ \\hline\n\\multicolumn{1}{|l|}{SP VS COS\\_TPS} & \\multicolumn{1}{c|}{$-6.728^b$} & \\multicolumn{1}{c|}{0*} & \\multicolumn{1}{c|}{$-6.663^b$} & \\multicolumn{1}{c|}{0*} & \\multicolumn{1}{c|}{$-6.384^b$} & \\multicolumn{1}{c|}{0*} \\\\ \\hline\n\\multicolumn{1}{|l|}{SP VS SL} & \\multicolumn{1}{c|}{$-4.958^b$} & \\multicolumn{1}{c|}{0*} & \\multicolumn{1}{c|}{$-4.789^b$} & \\multicolumn{1}{c|}{0*} & \\multicolumn{1}{c|}{$-4.110^b$} & \\multicolumn{1}{c|}{0*} \\\\ \\hline\n\\multicolumn{1}{|l|}{SP VS COS\\_STT} & \\multicolumn{1}{c|}{$-4.546^c$} & \\multicolumn{1}{c|}{0*} & \\multicolumn{1}{c|}{$-4.322^c$} & \\multicolumn{1}{c|}{0*} & \\multicolumn{1}{c|}{$-3.671^c$} & \\multicolumn{1}{c|}{0*} \\\\ \\hline\n\\multicolumn{1}{|l|}{SP VS TT} & \\multicolumn{1}{c|}{$-3.360^c$} & \\multicolumn{1}{c|}{0.001*} & \\multicolumn{1}{c|}{$-2.744^c$} & \\multicolumn{1}{c|}{0.006} & \\multicolumn{1}{c|}{$-2.641^c$} & \\multicolumn{1}{c|}{0.008} \\\\ \\hline\n\\multicolumn{7}{l}{a) Wilcoxon Signed Ranks Test.} \\\\\n\\multicolumn{7}{l}{b) Based on negative ranks.} \\\\\n\\multicolumn{7}{l}{c) Based on positive ranks.}\n\\end{tabular}}\n\\caption{The statistical information of comparing sentence position and other features after applying Doc2Vec.}\n\\label{table_sig_position}\n\\end{table}\n\n\n\n\n\\subsection{Results} \\label{results}\nIn order to evaluate the system summaries against the reference summaries, we apply ROUGE-N evaluation metrics. We report ROUGE-1 (unigram), ROUGE-2 (bi-grams) and ROUGE-SU4 (skip-bigram with maximum gap length of 4). The ROUGE scores as shown in Table \\ref{table_rouge_scores} indicate that sentence position feature outperforms other features. The least performing feature is the cosine similarity of climate change terms and sentences feature.\n\n\nTo measure the statistical significance of the ROUGE scores generated by the features, we calculated a pairwise Wilcoxon signed-rank test with Bonferroni correction. We report the significance p = .0013 level of significance after the correct is applied. Our results indicate that there is statistically significance among the features. Table \\ref{table_sig_position} illustrates the statistical information of comparing sentence position and other features. The star indicates that there is a statistical significance difference between each comparison pair. \n\n\n\n\n\n\n\\section{Conclusion} \\label{conclusion}\nIn this paper we worked on an annotation task for a new annotated dataset, online debate data. 
We have manually collected reference summaries for comments given to global warming topics. The data consists of 341 comments with total 519 annotated salient sentences. We have performed five annotation sets on this data so that in total we have 5 X 519 annotated salient sentences. We also implemented an extractive text summarization system on this debate data. Our results revealed that the key feature that plays the most important role in the selection salient sentences is sentence position. Other useful features are debate title words feature, and cosine similarity of debate title words and sentences feature. \n\nIn future work, we aim to investigate further features for the summarization purposes. We also plan to integrate stance information so that summaries with pro-contra sides can be generated.\n\n\\section*{Acknowledgments}\nThis work was partially supported by the UK EPSRC Grant No. EP\/I004327\/1, the European Union under Grant Agreements No. 611233 PHEME, and the authors would like to thank Bankok University of their support. \n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nDeep learning methods are quite successful in many fields such as image analytics and natural language processing.\nDeep learning uses several stacked layers of neural networks, which are optimised using loss functions such as cross entropy with stochastic gradient descent.\nAn example is presented in eq.\\ref{eq:empirical-risk}, where\n$\\alpha$ represents the possible configurations of the learning machine, $z_{i}$ is a set of examples $i = 1, ..., l$ and $\\mathcal{Q}$ is a loss function.\nMinimisation of the empirical risk equals minimising eq.~\\ref{eq:empirical-risk}.\n\n\\begin{equation}\nR_{emp}(\\alpha) = \\frac{1}{l} \\sum_{i=1}^l \\mathcal{Q}(z_{i},\\alpha)\n\\label{eq:empirical-risk}\n\\end{equation}\n\nWe could consider several ways in which to reduce the risk and several advances have been done in improving stochastic gradient descent~\\cite{kingma2014adam}.\nNeural networks have a large number of neurons, which implies that they have a large capacity able of modeling a large set of problems and several sets of network weight values could be identified during learning that have minimal empirical risk, which might have different generalisation capabilities.\nTheir capacity in terms of VC-dimension is large due to the large number of neurons, even though it is finite.\n\nAs indicated in equation~\\ref{eq:risk-generalization}, the bound on the generalisation performance of a trained model depends on the performance on the training set $R_{emp}(\\alpha_{l})$, the VC-dimension $h$ of the classifier and the size of the number of examples used as training data.\n\n\\begin{equation}\nR(\\alpha_{l}) \\leq R_{emp}(\\alpha_{l}) + \\dfrac{B \\mathcal{E} (l) }{2} (1+ \\sqrt{1 + \\dfrac{4 R_{emp} (\\alpha_{l}) }{B \\mathcal{E} (l)}}) \n\\label{eq:risk-generalization}\n\\end{equation}\n\n\\begin{equation}\n \\mathcal{E} (l) = 4 \\dfrac{h (ln \\dfrac{2l}{h}+1)- ln \\dfrac{\\eta}{4}}{l}\n\\end{equation}\n\nAssuming that the empirical risk on the training set is the same for several functions, in order to improve the predictive performance, the function with a lower VC-dimension or the one trained with more data should have a lower risk on the performance on an unseen test set.\n\nFor controlling the VC-dimension of a function, constraining the effective VC-dimension has been proposed~\\cite{vapnik1998statistical}.\nAmong existing work, regularisation is a way to 
constrain the search to functions that satisfy certain desirable properties.\nLinear models have relied on Tikhonov regularisation~\\citep{tikhonov1977solutions}.\nNeural networks are composed of a set of non-linearities, thus regularisation such as $L_2$ has been applied~\\cite{elsayed2018large}.\nThere have been several other proposals, including recent work on regularising models using knowledge~\\citep{roychowdhury2021regularizing}.\n\nThe structural risk minimisation framework~\\cite{vapnik1998statistical} intends to control parameters that minimise the VC-dimension and offers certain guarantees about the performance of the trained model.\nA good example of a learning algorithm that implements structural risk minimisation is the support vector machine (SVM)~\\citep{vapnik2013nature}.\nThe SVM is a large margin classifier that aims at separating the classes by posing a constrained optimisation problem, whose solution builds on the Karush-Kuhn-Tucker approach that generalises the method of Lagrange multipliers.\nThe vectors from the training set that lie on the margin are named the support vectors.\nConsider a training set defined by pairs $\\{x_i, y_i\\}$, where $x_i$ is a vector in $R^n$ and $y_i \\in \\{1,-1\\}$ defines the class of the instance, for $i=1,...,l$.\nWith the margin defined by $1\/\\lVert w \\rVert$, where $w \\in R^n$ and $b \\in R^1$ is the bias term, the constrained optimisation problem for SVMs is defined as:\n\n\\begin{equation}\n\\begin{aligned}\n\\min & & \\frac{1}{2}\\lVert w \\rVert^2 \\\\\n\\\\\ns.t. & & y_i (w x_i + b) \\geq 1\n\\end{aligned}\n\\end{equation}\n\nUsing Lagrange multipliers $\\alpha = \\{ \\alpha_1,..., \\alpha_l \\}$ with $\\alpha \\geq 0$ and $\\sum_{i=1}^l \\alpha_i y_i = 0$, the separating hyperplane has the formulation:\n\n\\begin{equation}\n f(x,\\alpha) = \\sum_{i=1}^l y_i \\alpha_i (x_i*x) + b\n\\label{eq:svm-hyperplane}\n\\end{equation}\n\nEq.~\\ref{eq:svm-hyperplane} depends on the inner product of $x_i$ and $x$. The elements of the training set that lie on the margin are the support vectors.\n\nIf the data is not linearly separable in input space, a mapping into a feature space in which the data is linearly separable and an inner product exists provides a solution. 
For instance, in eq.~\\ref{eq:mapping-svm} function $z$ maps the input space into a feature space.\nAs we can see, what is important is the calculation of the inner product and not the dimensionality of the space.\n\n\\begin{equation}\n f(x,\\alpha) = \\sum_{i=1}^l y_i \\alpha_i (z(x_i)*z(x)) + b\n\\label{eq:mapping-svm}\n\\end{equation}\n\nFrom the formulation of SVMs, the inner product between vectors is what is needed to define the solution to the large margin classifier.\nUsing Mercer's theorem allows using kernels to calculate the inner product in a Hilbert space.\nEq.~\\ref{eq:kernel-svm} shows how the SVM formulation would be defined using kernel $K$.\n\n\\begin{equation}\n f(x,\\alpha) = \\sum_{i=1}^l y_i \\alpha_i K(x_i,x) + b\n\\label{eq:kernel-svm}\n\\end{equation}\n\nUsing kernels in this way allows working in high-dimensional feature spaces without having to work in the high-dimensional space explicitly, this is known as the \\textit{kernel trick}.\nSpecialised kernels have been developed to use SVMs in several problem types.\n\n\nIn this paper, we explore using non-linear functions similar to deep neural networks as mapping functions from input space to feature space.\nWe show examples of the expected performance based on structural risk minimisation principles.\nWe evaluate the proposed method on several data sets and baseline methods.\nWhen the training data is small, the proposed method largely improves over the baseline methods.\nAs expected, this improvement is reduced when more training data is provided.\n\nThe code used in this experiments is available from the following GitHub repository:\\\\\\url{https:\/\/github.com\/ajjimeno\/nn-hyperplane-bounds}\n\n\\section{Methods}\n\nIn this section, we define a large margin linear classifier and we introduce the structural risk minimisation principle.\nThen, we provide a way to define a large margin classifier using a feature space defined by a set of non-linear functions.\nThese non-linear functions are the equivalent of a deep neural network.\n\n\\subsection{Large margin classifier}\n\n\n\nThere are several kernels that have been developed over time that turn the input space into a feature space that captures relations among the features, from image analytics (e.g.~\\cite{szeliski2010computer,camps2006composite}) to text analytics (e.g. string kernel~\\cite{lodhi2002text}).\nThe kernel trick mentioned above allows working in high dimensional feature spaces without the cost of working directly in the feature space.\nThis is achieved by using the kernel to calculate the dot product in feature space without having to map the instances into the feature space.\nOn the other hand, it is difficult to design a kernel that will work well with all sorts of data when compared to the recent success of deep learning.\n\n\nDeep neural network seem to be effective at approximating many different functions, thus it is interesting to map our input feature space into a feature space in which an optimal hyperplane could be identified.\nThis means, using a neural network $z$ as the mapping function between the input space and the feature space, so the optimisation problem should consider now $z(x_i) \\in R^m$ instead of $x_i$ and now the constraints look like $y_i (w z(x_i) + b) \\geq 1$. 
Here $w \\in R^m$.\n\nOne problem is that current neural networks have a large number of parameters, which they need in order to be effective on the tasks where they are currently successful.\nThis also implies that they have a large capacity, or VC-dimension.\nIn the next sections, we explore how to search for a mapping that has better generalisation properties. \n\n\n\\subsection{Hyperplane bounds}\n\nIn this section, we explore properties of the separating hyperplane and what constraints are needed to identify a configuration of the neural network used as the mapping function that has better generalisation properties.\n\nIf we consider a training set $\\{Y,X\\}$, as defined above, the following inequality holds true for a vector $w_0$ and a margin $\\rho_0$,\n\n\\begin{equation}\n \\min_{(y,x) \\in \\{Y,X\\}} \\frac{y(w_0 x)}{|w_0|} \\geq \\rho_0\n\\end{equation}\n\nwhich assumes that the training set is separable by a hyperplane with margin $\\rho_0$; under this assumption the following theorem holds true.\n\n\n\\paragraph{Novikoff theorem}\n\nGiven an infinite sequence of training examples $u_i = (x_i, y_i)$ with elements satisfying the inequality $|x_i| < D$, there is a hyperplane with coefficients $w_0$ that separates the training examples correctly and satisfies the conditions above.\nUsing an iterative procedure, e.g. stochastic gradient descent, to construct such a hyperplane takes at most\n\n\\begin{equation}\n M = [\\frac{D^2}{\\rho^2_0}]\n\\end{equation}\n\ncorrections.\n\nAs a follow-up theorem, we accept that for the algorithm constructing hyperplanes in the regime that separates the training data without error, the following bound on the error rate is valid\n\n\\begin{equation}\n ER(w_l) \\leq \\frac{E[\\frac{D^2_l}{\\rho^2_l}]}{l+1}\n\\end{equation}\n\nwhere $[\\frac{D^2_l}{\\rho^2_l}]$ is estimated from the training data.\n\nThese two theorems already provide a bound on the error of the separating hyperplane, which relies on parameters that can be estimated from the training data.\nIn the following two sections, we show theorems for the bounds on the VC-dimension of the separating hyperplane and properties of the optimal separating hyperplane that will be used to define the optimisation problem for the neural network mapping function. 
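Before doing so, the two bounds above can be read with a toy numerical example; all values below are assumed for illustration and are not taken from the experiments reported later.

\\begin{verbatim}
# Assumed toy values: training vectors inside a ball of radius D,
# a separating hyperplane with margin rho, and l training examples.
D, rho, l = 5.0, 0.5, 10000

corrections = (D / rho) ** 2            # Novikoff: at most [D^2 / rho^2] updates
error_bound = (D / rho) ** 2 / (l + 1)  # bound on the expected error rate

print(corrections, error_bound)         # 100.0 and roughly 0.01
\\end{verbatim}

In other words, a smaller ratio between the radius of the data and the margin, or a larger number of training examples, tightens the bound.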
\n\n\\subsection{Bounds on the VC-dimension for \\texorpdfstring{$\\Delta$-margin}{TEXT} separating hyperplanes}\n\nIn this section, we explore theorems that define bounds on the VC-dimension of the separating hyperplane.\n\n\\paragraph{Theorem} Let vectors $x \\in X$ belong to a sphere of radius $R$, the set of $\\Delta$-margin separating hyperplanes has the following VC-dimension h bounded by the inequality\n\n\\begin{equation}\n h \\leq min([\\frac{R^2}{\\Delta^2}], n)+1 \n\\end{equation}\n\n\\paragraph{Corollary} With probability $1-\\eta$ the probability that a test example will not be separated correctly by the $\\Delta$-margin hyperplane has the bound\n\n\\begin{equation}\n P_{error}\\leq \\frac{m}{l} + \\frac{\\xi}{2}(1 + \\sqrt{1+\\frac{4m}{l\\xi}})\n\\end{equation}\n\nwhere \n\n\\begin{equation}\n \\xi = 4 \\frac{h(ln \\frac{2l}{h} + 1) -ln \\frac{\\eta}{4}}{l}\n\\end{equation}\n\nwhere $m$ is the number of examples not separated correctly by this $\\Delta$-margin hyperplane, $h$ is the bound in the VC-dimension, where a good generalisation is dependent on $\\Delta$.\n\nSo, we have already a bound on the VC-dimension of the separating hyperplane.\nIn the next section, we present the idea of identifying the optimal hyperplane, which links the hyperplane optimisation problem with structural risk minimisation.\n\n\\subsection{Optimal hyperplane properties}\n\nThe optimal hyperplane is the one that separates the training examples from classes $y={1,-1}$ with the maximal margin.\nIt has been previously shown that the optimal hyperplane is unique~\\cite{vapnik1998statistical}.\nThe optimal hyperplane has some interesting properties that are relevant to our work.\nOne of them is that the generalisation ability of the optimal hyperplane is better than the general bounds obtained for methods that minimise the empirical risk.\nLet's define the set $X = {x_1, ..., x_l}$ in space $R^n$, where we have our training examples. Within the elements of $X$ that have the following property:\n\n\\begin{equation}\n \\inf_{x \\in X} | w x + b | = 1\n\\end{equation}\n\nThese elements $x \\in X$ are the essential support vectors and are on the margin.\nHaving defined the essential support vectors, we define the number of essential support vectors $K_l$\n\n\\begin{equation}\n K_l = K((x_1, y_1), ... (x_l, y_l))\n\\end{equation}\n\nAnd the maximum norm $D_l$ of the essential support vectors.\n\n\\begin{equation}\nD_l = D((x_1, y_1), ... (x_l, y_l)) = \\max_i |x_i|\n\\end{equation}\n\nBased on the definitions above for the essential support vectors, the following properties have been proved for the optimal hyperplane.\nThe following inequality $K_l \\leq n$ holds true, which implies that the number of essential support vectors is smaller than the dimensionality of the elements of $X$ and $w$.\nLet $ER(\\alpha_l)$ be defined as the expectation of the probability of error of the optimal hyperplane defined using the training data, then the following inequalities hold for the optimal hyperplane considering the values estimated on the essential support vectors. 
\n\n\n\n\\begin{equation}\n ER(\\alpha_l) \\leq \\frac{EK_{l+1}}{l+1}\n\\end{equation}\n\n\nAdditionally, considering an optimal hyperplane passing through the origin:\n\n\\begin{equation}\n ER(\\alpha_l) \\leq \\frac{E(\\frac{D_{l+1}}{\\rho_{l+1}})^2}{l+1}\n \\label{eq:novikoff-error-expectation}\n\\end{equation}\n\nCombining the two previous inequalities, we obtain a bound on the expectation of the probability of error, which is bounded by the number of examples in the training data but as well by the number of essential support vectors and the relation between the ball in which the support vectors are and the margin, as shown below\n\n\\begin{equation}\n ER(\\alpha_l) \\leq \\frac{E \\min(K_l, (\\frac{D_{l+1}}{\\rho_{l+1}})^2)}{l+1}\n\\end{equation}\n\nLeave-one-out error has been used as an unbiased estimator to prove the bounds on the optimal hyperplane~\\citep{luntz1969estimation}.\n\n\\begin{equation}\n E\\frac{\\mathcal{L}(z_1,...,z_{l+1})}{l+1}=ER(\\alpha_l)\n\\end{equation}\n\nFirst, the number of errors by leave-one-out does not exceed the number of support vectors~\\citep{vapnik2000bounds}.\nIf a vector $x_i$ is not an essential support vector, then there is an expansion of the vector $\\phi_0$ that defines the optimal hyperplane that does not contain the vector $x_i$.\nThe optimal hyperplane is unique, so removing this vector from the training set does not change it.\nLeave-one-out recognizes correctly all the vectors that are not in the set of essential support vectors.\nThe number $\\mathcal{L}(z_1,...,z_{l+1})$ of errors in leave-one-out does not exceed $\\mathcal{K}_{l+1}$, which implies\n\n\n\\begin{equation}\n ER(\\alpha_l) = \\frac{E \\mathcal{L}(z_1,...,z_{l+1})}{l+1} \\leq \\frac{EK_{l+1}}{l+1}\n\\end{equation}\n\nTo prove~eq.~\\ref{eq:novikoff-error-expectation}, the number of errors in leave-one-out does not exceed the number of corrections M to find the optimal hyperplane, as defined by Novikoff's theorem presented above.\n\n\n\n\n\n\n\n\n\n\\subsection{Mapping into feature space using neural networks}\n\nUp to this point, the formulations rely on an input space defined by the training data instances $x_1, ..., x_l$ in a given space $R^n$.\nIf a hyperplane separating the data instances into classes $y=\\{1,-1\\}$ does not exist in input space, the instances are mapped into a feature space $R^m$.\n\n\\begin{equation}\n z(x) : R^n \\mapsto R^m\n\\end{equation}\n\nIn the case of support vector machines, the~\\textit{kernel trick} is used to calculate the dot product using a kernel in feature space without having to do the explicit mapping, which allows working with feature spaces of larger dimensions, even infinite dimensions.\nKernels have been designed to map the input space into separable feature spaces.\n\nOnce the kernel is designed, there is a point in the feature space for each training instance.\nThe properties described for the optimal hyperplane mentioned in the previous sections would still apply but in this case, these properties would be applied to the generated feature space.\nSpecific formulations are applied to profit from the kernel trick mentioned above.\n\nDeveloping the best kernel for a specific problem has shown to be a difficult task and multiple kernels have appeared for different tasks.\nRecently, neural networks have shown that they can learn a classifier with relatively less effort, despite the increase in computational power required to train these classifiers.\nNeural networks in a sense map the input space to another space using a 
concatenation of linear operators followed by a non-linearity (e.g. sigmoid, RELU, ...), as shown in equation~\\ref{eq:neural-network}.\n\n\\begin{equation}\nz(x) = \\sigma f_{k}(\\sigma f_{k-1}(... \\sigma f_1(x))))\n\\label{eq:neural-network}\n\\end{equation}\n\nEach function $f \\in {f_1, ..., f_k}$ will be a derivative of $f(x) = Wx+B$, where $W$ and $B$ are matrices of weights and biases.\nThese are parameters that need to be optimised for each function.\nStochastic gradient descent is typically used to optimize such functions, thus we prepare the optimisation to use stochastic gradient descent.\n\nRevisiting the classification that we intend to optimise and considering the mapping function $z$, we obtain eq.~\\ref{eq:neural-network-svm}.\nThe parameters to optimize are the vector $w$, the bias $b$ and the weights and biases of the neural network $z$.\nThe dimension of the weight vector $w$ is defined by $z$.\n\n\\begin{equation}\ny_i(w z(x_i) + b) \\geq 1\n\\label{eq:neural-network-svm}\n\\end{equation}\n\n\n\n\n\n\n\n\n\n\n\nIn a support vector machine in feature space, a way to compare different kernels is to measure the maximum radius $D_l$ in which the support vectors fit in and multiply it by $\\lVert w_{l} \\rVert ^2$ as in $[ D_l^2 \\lVert W_{l} \\rVert^2 ]$.\nConsidering that $D_l^2 \\lVert W_{l} \\rVert^2 = \\frac{D_l^2}{\\rho_l^2}$, the lower is this ration, the lower is the probability of error, which is considered as a mean to control the VC-dimension of the separating hyperplane for the setup defined in this work.\n\n\n\n\n\n\\subsection{Optimisation}\n\nTo find the values of the parameters of the system, we use stochastic gradient descent.\nOther approaches, such as Lagrange multipliers are not applicable to neural networks.\nOn the other hand, using stochastic gradient descent guarantees finding a local optima, which hopefully is an approximation with reasonable generalisation performance.\n\nFor optimisation, we use AdamW~\\citep{loshchilov2017decoupled} with eps=1e-08 and weight decay set to 0.1 (which already implies using an $L_{2}$ regularisation) and betas=(0.9, 0.999).\n\nWe have adapted~\\citep{zhang2004solving} using large margin loss.\nWe use the modified Huber loss, which has nicer properties, even though other large margin losses could be explored.\n\nModified Huber loss~\\citep{zhang2004solving}\n\\begin{equation}\n l(y)=\n \\begin{cases}\n -4*h*y & \\text{if }h*y <= -1\\\\\n (1-h*y)^2 & \\text{if } -1 < h*y <= 1\\\\\n 0 & \\text{if } h*y > 1\n \\end{cases}\n\\end{equation}\n\nGradient of the modified Huber loss:\n\n\\begin{equation}\n \\frac{\\partial l}{\\partial w_i}=\n \\begin{cases}\n -4*h*x_i & \\text{if }h*y <= -1\\\\\n -2* (1-h*y)*h*x_i & \\text{if } -1 < h*y <= 1\\\\\n 0 & \\text{if } h*y > 1\n \\end{cases}\n\\end{equation}\n\nWe need to estimate $w$ and $z$ subject to the constraints above.\n$z$ will be defined using a neural network and the size of the feature space derived from $z$ will define the size of the vector $w$.\n$\\lVert w \\rVert$ will be minimised and so will be the $\\lVert z(x) \\rVert$ of the vectors.\nThe loss function of the optimisation problem is defined as follows\n\n\\begin{equation}\n\\mathcal{L}(x_1, ..., x_l)_{inv}= \\mathcal{L} (x_1, ..., x_l) + \\alpha \\lVert w \\rVert^2 + \\beta \\sum^l_{i=1} \\lVert z(x_i) \\rVert^2\n\\end{equation}\n\nThe development above is prepared for binary classification.\nIn a multi class setting, as many classifiers $y \\in \\{1, -1\\}$ as classes are trained and during prediction, the 
classifier with the maximum value is returned as prediction as shown in equation \\ref{eq:argmax-multi-class}.\n\n\\begin{equation}\n \\arg \\max_{c \\in 1..n} \\{ f_{1}(x), ..., f_{n}(x) \\}\n\\label{eq:argmax-multi-class}\n\\end{equation}\n\nThe proposed method works in the multi class setting, but it has the advantage that it might be able of deciding when an instance does not belong to any class if all the classifier functions predict the -1 class.\n\n\n\n\n\n\\section{Results}\n\nWe have evaluated the proposed method using the MNIST and CIFAR 10 data sets.\nThe different methods have been evaluated using several algorithms.\n$\\alpha$ and $\\beta$ of the loss function have been set to the same value which is the best set up during the experiments.\nWe evaluate the both losses and the performance of dropout~\\citep{srivastava2014dropout} and augmentation based on affine transformations.\n\n\\subsection{MNIST}\n\nMNIST is a collection of hand written digits from 0 to 9.\nThe training set has a total of 60k examples while the test set contains a total of 10k examples.\nAll images are 28x28 with black background pixels and different white level pixels to define the numbers using just one channel.\nImages were normalised using a mean of 0.1307 and standard deviation of 0.3081.\n\nWe have used LeNet as base neural network, which has been adapted to be used in our approach.\nIn order to use LeNet in our approach, we have considered the last layer as the margin weights $w$, which defines each each one of the 10 MNIST classes functions.\nThe rest of the network has been considered as the function $z(x_i)$.\nThis means that both the vector $w$ and $z(x_i)$ belong to $R^{84}$.\nIf the parameters $\\alpha$ and $\\beta$ are set to zero, we are effectively using LeNet.\n\nTable~\\ref{tab:mnist-results-percentage} shows the results of different experimental setups, which includes dropout $p=0.5$, augmentation and a combination of several configurations.\nAugmentation was done using random affine transformations with a maximum rotation of 20 degrees, translation maximum of 0.1 on the x and y axis and scale between 0.9 and 1.1, as well brightness and contrast were randomly altered with a maximum change of 0.2.\nWe have used several partitions of the training set to simulate training the different configurations with data sets of several sizes, which evaluates as well the impact of the number of training examples in addition to the bounded VC-dimension of the separating hyperplane.\nExperiments have been run 5 times per configuration and results show the average and the standard deviation.\n\n\n\n\\begin{sidewaystable}\n\\begin{tabular}{l|c|c|c|c|c|c|c|c}\n\\hline\nMethod&1&5&10&20&40&60&80&100 \\\\\n\\hline\nce&91.52$\\pm$0.28&97.20$\\pm$0.16&98.07$\\pm$0.10&98.58$\\pm$0.10&99.04$\\pm$0.10&99.20$\\pm$0.06&99.25$\\pm$0.05&99.20$\\pm$0.03 \\\\\nce+do&94.36$\\pm$0.22&97.78$\\pm$0.09&98.54$\\pm$0.07&98.81$\\pm$0.07&99.17$\\pm$0.03&99.22$\\pm$0.03&99.32$\\pm$0.03&99.37$\\pm$0.04 \\\\\nce+lm-0.001&94.89$\\pm$0.23&97.95$\\pm$0.10&98.42$\\pm$0.09&98.61$\\pm$0.08&99.05$\\pm$0.10&99.01$\\pm$0.03&99.22$\\pm$0.07&99.25$\\pm$0.08 \\\\\nce+lm-0.001+do&95.12$\\pm$0.17&97.84$\\pm$0.15&98.53$\\pm$0.08&98.77$\\pm$0.04&98.98$\\pm$0.06&99.13$\\pm$0.02&99.22$\\pm$0.05&99.19$\\pm$0.04 \\\\\n\\hline\nce+aug&95.82$\\pm$0.17&98.37$\\pm$0.09&98.94$\\pm$0.09&99.23$\\pm$0.08&99.36$\\pm$0.04&99.40$\\pm$0.05&99.52$\\pm$0.04&99.50$\\pm$0.02 
\\\\\nce+aug+do&95.18$\\pm$0.25&97.96$\\pm$0.10&98.56$\\pm$0.07&98.89$\\pm$0.11&99.11$\\pm$0.07&99.09$\\pm$0.06&99.16$\\pm$0.05&99.22$\\pm$0.06 \\\\\nce+aug+lm-0.001&96.82$\\pm$0.17&98.68$\\pm$0.12&99.08$\\pm$0.09&99.24$\\pm$0.06&99.37$\\pm$0.05&99.41$\\pm$0.04&99.47$\\pm$0.06&99.53$\\pm$0.01 \\\\\nce+aug+lm-0.001+do&95.57$\\pm$0.36&97.91$\\pm$0.09&98.55$\\pm$0.08&98.78$\\pm$0.06&98.95$\\pm$0.06&98.99$\\pm$0.04&99.05$\\pm$0.06&99.16$\\pm$0.05 \\\\\n\\hline\n\\hline\nmh&91.66$\\pm$0.48&97.22$\\pm$0.22&98.03$\\pm$0.15&98.49$\\pm$0.08&98.99$\\pm$0.08&99.13$\\pm$0.03&99.16$\\pm$0.03&99.27$\\pm$0.06 \\\\\nmh+do&94.75$\\pm$0.32&97.99$\\pm$0.10&98.51$\\pm$0.04&98.82$\\pm$0.05&99.16$\\pm$0.05&99.22$\\pm$0.08&99.33$\\pm$0.04&99.36$\\pm$0.08 \\\\\nmh+lm-0.001&94.17$\\pm$0.25&97.53$\\pm$0.12&98.26$\\pm$0.13&98.57$\\pm$0.10&98.89$\\pm$0.04&98.97$\\pm$0.07&99.05$\\pm$0.05&99.23$\\pm$0.04 \\\\\nmh+lm-0.001+do&95.03$\\pm$0.31&97.95$\\pm$0.13&98.50$\\pm$0.13&98.79$\\pm$0.07&99.01$\\pm$0.09&99.14$\\pm$0.03&99.14$\\pm$0.07&99.12$\\pm$0.08 \\\\\n\\hline\nmh+aug&96.20$\\pm$0.18&98.45$\\pm$0.05&98.96$\\pm$0.03&99.22$\\pm$0.04&99.40$\\pm$0.06&99.44$\\pm$0.02&99.53$\\pm$0.04&99.51$\\pm$0.04 \\\\\nmh+aug+do&95.25$\\pm$0.45&98.08$\\pm$0.13&98.60$\\pm$0.07&98.76$\\pm$0.07&99.01$\\pm$0.04&99.11$\\pm$0.06&99.19$\\pm$0.03&99.22$\\pm$0.04 \\\\\nmh+aug+lm-0.001&96.53$\\pm$0.12&98.62$\\pm$0.13&99.05$\\pm$0.08&99.19$\\pm$0.06&99.41$\\pm$0.02&99.46$\\pm$0.04&99.46$\\pm$0.06&99.50$\\pm$0.01 \\\\\nmh+aug+lm-0.001+do&95.14$\\pm$0.43&98.03$\\pm$0.06&98.52$\\pm$0.09&98.78$\\pm$0.12&98.90$\\pm$0.09&99.00$\\pm$0.06&99.09$\\pm$0.09&98.99$\\pm$0.06 \\\\\n\\hline\n\\end{tabular}\n\\caption{MNIST results using LeNet using cross-entropy (ce) vs Modified Huber Loss (ml), dropout (do), hyperplane bound loss factor (lm) and augmented (aug)}\n\\label{tab:mnist-results-percentage}\n\\end{sidewaystable}\n\n\\subsection{CIFAR10}\n\nCIFAR10 is a data set of 32x32 images with 10 classes of objects with 3 channels. There a total of 60k training images and 10k testing images, with an equal split between images.\nEach image channel was normalised using a mean of 0.5 and standard deviation of 0.5.\n\nFor CIFAR10, we have considered the vgg19~\\citep{simonyan14vgg19} network.\nThe last layer is considered the margin weights $w$ and the output of the rest of the network as the function $z(x_i)$, which belong to $R^{4096}$. 
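\n\nTo make this setup concrete, the following is a minimal PyTorch-style sketch (not the exact experimental code) of how a network is split into the feature map $z$ and the margin layer $w$, and how the modified Huber loss is combined with the $\\alpha \\lVert w \\rVert^2$ and $\\beta \\lVert z(x_i) \\rVert^2$ penalties; the toy backbone, feature dimension and hyper-parameter values are placeholders, and averaging the penalties over the mini-batch is one possible design choice.\n\n\\begin{verbatim}\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nin_dim, feat_dim, num_classes = 3 * 32 * 32, 84, 10   # toy sizes; 84 (LeNet) or 4096 (vgg19)\n# backbone plays the role of z(.); a toy stand-in for LeNet/vgg19 without the last layer\nbackbone = nn.Sequential(nn.Flatten(), nn.Linear(in_dim, feat_dim), nn.ReLU())\nmargin = nn.Linear(feat_dim, num_classes)              # margin weights w and bias b\n\ndef modified_huber(scores, labels):\n    # one-vs-rest targets h in {-1,+1}; scores are the per-class margins w z(x) + b\n    h = F.one_hot(labels, num_classes).float() * 2.0 - 1.0\n    m = h * scores\n    return torch.where(m <= -1.0, -4.0 * m,\n                       torch.where(m <= 1.0, (1.0 - m) ** 2,\n                                   torch.zeros_like(m))).sum(dim=1).mean()\n\ndef objective(x, labels, alpha=1e-3, beta=1e-3):\n    feats = backbone(x)                                # z(x_i)\n    scores = margin(feats)                             # w z(x_i) + b, one score per class\n    return (modified_huber(scores, labels)\n            + alpha * margin.weight.pow(2).sum()       # alpha ||w||^2\n            + beta * feats.pow(2).sum(dim=1).mean())   # beta ||z(x_i)||^2, batch average\n\noptimiser = torch.optim.AdamW(list(backbone.parameters()) + list(margin.parameters()),\n                              lr=1e-3, eps=1e-8, weight_decay=0.1, betas=(0.9, 0.999))\n\\end{verbatim}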
\n\n\n\\begin{sidewaystable}\n\\begin{tabular}{l|c|c|c|c|c|c|c|c}\n\\hline\nMethod&1&5&10&20&40&60&80&100 \\\\\n\\hline\nce&28.77$\\pm$1.74&43.60$\\pm$0.76&51.51$\\pm$0.90&60.98$\\pm$0.61&70.99$\\pm$0.64&76.23$\\pm$0.49&78.98$\\pm$0.60&81.16$\\pm$0.33 \\\\\nce+do&29.92$\\pm$1.22&43.54$\\pm$0.94&52.05$\\pm$0.75&61.52$\\pm$0.81&71.11$\\pm$0.68&75.99$\\pm$0.68&79.15$\\pm$0.56&80.96$\\pm$0.37 \\\\\nce+lm-1e-05&29.73$\\pm$1.57&44.01$\\pm$0.80&51.54$\\pm$0.98&61.11$\\pm$0.91&70.92$\\pm$0.89&76.34$\\pm$0.28&79.68$\\pm$0.48&81.23$\\pm$0.21 \\\\\nce+lm-1e-05+do&30.55$\\pm$1.46&43.21$\\pm$1.36&51.37$\\pm$1.28&61.15$\\pm$0.73&71.16$\\pm$0.24&76.20$\\pm$0.55&79.64$\\pm$0.58&81.45$\\pm$0.56 \\\\\n\\hline\nce+aug&32.50$\\pm$1.44&49.81$\\pm$1.00&61.55$\\pm$0.65&70.58$\\pm$0.59&79.18$\\pm$0.32&83.03$\\pm$0.19&85.47$\\pm$0.16&87.09$\\pm$0.16 \\\\\nce+aug+do&32.77$\\pm$1.53&51.53$\\pm$0.92&61.38$\\pm$0.59&71.00$\\pm$0.48&79.22$\\pm$0.32&82.91$\\pm$0.29&85.34$\\pm$0.27&87.22$\\pm$0.19 \\\\\nce+aug+lm-1e-05&32.53$\\pm$1.56&50.03$\\pm$1.02&60.85$\\pm$0.73&70.81$\\pm$0.27&79.38$\\pm$0.38&83.21$\\pm$0.31&85.25$\\pm$0.22&87.12$\\pm$0.14 \\\\\nce+aug+lm-1e-05+do&33.31$\\pm$1.27&50.94$\\pm$1.93&61.29$\\pm$0.63&71.07$\\pm$0.64&79.50$\\pm$0.39&83.14$\\pm$0.32&85.66$\\pm$0.29&87.38$\\pm$0.14 \\\\\n\\hline\n\\hline\nmh&30.26$\\pm$1.59&45.11$\\pm$1.40&54.14$\\pm$0.93&63.39$\\pm$0.49&71.93$\\pm$0.71&76.83$\\pm$0.64&79.49$\\pm$0.48&81.58$\\pm$0.20 \\\\\nmh+do&30.63$\\pm$1.43&46.16$\\pm$1.01&54.66$\\pm$0.99&63.70$\\pm$0.94&71.99$\\pm$0.22&76.51$\\pm$0.54&79.13$\\pm$0.46&81.54$\\pm$0.32 \\\\\nmh+lm-1e-05&30.72$\\pm$1.35&45.22$\\pm$0.96&53.80$\\pm$1.19&63.46$\\pm$0.80&71.89$\\pm$0.59&76.78$\\pm$0.52&79.35$\\pm$0.56&81.14$\\pm$0.37 \\\\\nmh+lm-1e-05+do&31.47$\\pm$1.22&45.71$\\pm$0.96&53.98$\\pm$0.77&62.86$\\pm$0.82&71.56$\\pm$0.67&76.43$\\pm$0.50&79.29$\\pm$0.27&81.22$\\pm$0.51 \\\\\n\\hline\nmh+aug&37.25$\\pm$0.84&54.38$\\pm$0.42&63.54$\\pm$0.57&71.64$\\pm$0.45&79.66$\\pm$0.43&82.98$\\pm$0.40&85.25$\\pm$0.27&87.10$\\pm$0.36 \\\\\nmh+aug+do&36.37$\\pm$0.35&54.66$\\pm$0.66&63.11$\\pm$0.69&72.06$\\pm$0.53&79.46$\\pm$0.29&83.06$\\pm$0.40&85.32$\\pm$0.18&87.03$\\pm$0.21 \\\\\nmh+aug+lm-1e-05&36.91$\\pm$0.98&54.54$\\pm$0.76&63.53$\\pm$1.00&71.35$\\pm$0.51&79.49$\\pm$0.41&82.90$\\pm$0.37&85.36$\\pm$0.21&86.94$\\pm$0.23 \\\\\nmh+aug+lm-1e-05+do&36.30$\\pm$1.34&54.61$\\pm$0.70&63.34$\\pm$0.60&72.08$\\pm$0.37&79.44$\\pm$0.40&83.14$\\pm$0.21&85.45$\\pm$0.34&87.09$\\pm$0.16 \\\\\n\\hline\n\\end{tabular}\n\\caption{CIFAR10 results using vgg19 cross-entropy (ce) vs Modified Huber Loss (ml), dropout (do), hyperplane bound loss factor (lm) and augmented (aug).}\n\\label{tab:cifar-results-percentage}\n\\end{sidewaystable}\n\n\n\\section{Discussion}\n\nWe have evaluated the proposed approach using two well-known image classification data sets and compared againt several baselines.\nThe results are compared against several baselines, which includes dropout and augmentation using affine transformations.\nIn addition to the modified Huber loss, we compare as well the performance of cross entropy loss, which is typically used in deep learning.\n\nWe have mentioned at the beginning that there are two factors that are relevant for the risk of generalisation of machine learning algorithms.\nOne is the VC-dimension, which we have tried to influence in this work.\nThe second one is additional training data.\nProviding additional training data has been simulated in two ways.\nThe first one is by selecting portions of the training data available from 
the training sets.\nThe second one uses data augmentation based on affine transformations, which generates new examples from the portion of the data set used for training.\n\nWe observe that the proposed method shows a strong performance improvement when less training data is used.\nWe also observe that providing additional training data affects performance on the test set positively, and that this can be combined with the proposed method.\nDropout performed well when no modification or data augmentation was used.\n\nWhen considering the values of $D_l^2$ and $\\lVert w_l \\rVert^2$ in the loss function, changes in the size of the training set seem to have limited impact. The values for the margin are very similar, while the squared radius $D_l^2$ of the support vectors is around 1.\nWhen these terms are included in the loss function, their values become more stable across training runs; when they are not, the values tend to change significantly between experiments.\n\n\\section{Related work}\n\nDeep learning has achieved tremendous success in many practical tasks, but the neural networks used in deep learning have a high number of parameters, which implies a high VC-dimension.\nThese networks are optimised to reduce the empirical risk, and large data sets help to reduce the associated generalisation risk.\n\nNeural networks are trained using stochastic gradient descent, which in general only guarantees reaching a local optimum.\nThere is recent work in which neural networks are studied within kernel theory, which has provided insight into the global optimisation of neural networks~\\citep{du2019gradient} and into the convergence of wide neural networks through Neural Tangent Kernels~\\citep{jacot2018neural}.\nThat line of research complements what we have studied in this work and also suggests directions for further research.\nIt could, moreover, be used to define approximation functions that combine well-known properties of kernels with the adaptability of neural networks. 
\n\nCompared as well to what we propose in this work from the point of view of regularisation, there have been several means of regularisation of neural networks that include $L^2$ regularisation and several variants where different layers are regularised independently.\nTransfer learning is a way to pre-train the neural networks using similar data to the training data, which has been quite successful in recent proposed systems to improve the capability of existing systems.\nIn addition, knowledge has been proposed as means of regularisation~\\citep{borghesi2020improving,roychowdhury2021regularizing}, which would reuse existing resources and reduce the need for training data.\n\nIn this work, we have used hyperplane bounds to study how this contribute to find a better hyperplane using neural networks as feature space mapper.\nThere are several recent directions that would provide directions to improve the definition of functions that have better guarantees in terms of optimisation.\n\n\\section{Conclusions and future work}\n\nImprove over the baseline, which is most significant when a small portion of the training data is used.\nWhen more training data is made available, the improvement is reduced, which is an expected behaviour as shown by the bounds of the VC-dimension for $\\Delta$-margin separating hyperplanes and formulation of the empirical risk.\n\nWe have considered well known networks for the experiments, based on convolution neural networks, it might be interesting to explore additional network configurations to understand the impact of the network architecture with the proposed approach.\nThere has been as well recent work to better understand the optimisation and behaviour of neural networks that we would like to explore as future work.\n\n\\section{Acknowledgements}\n\nThis research was undertaken using the LIEF HPC-GPGPU Facility hosted at the University of Melbourne. This Facility was established with the assistance of LIEF Grant LE170100200.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{sec1}\n\\label{sec0}\nThe Lieb--Thirring inequalities~\\cite{LT} give estimates for $\\gamma$-moments\nof the negative eigenvalues of the Schr\\\"odinger operator $-\\Delta-V$ in $L_2(\\mathbb{R}^d)$,\nwhere $V=V(x)\\ge0$:\n\\begin{equation}\\label{LT}\n\\sum_{\\lambda_i\\le0}|\\lambda_i|^\\gamma\\le\\mathrm{L}_{\\gamma,d}\n \\int_{\\mathbb{R}^d} V(x)^{\\gamma+ d\/2}dx.\n\\end{equation}\nIn the case $\\gamma=1$ estimate~\\eqref{LT} is equivalent to the dual\ninequality\n\\begin{equation}\\label{orth}\n \\int_{\\mathbb{R}^d} \\rho(x)^{1+2\/d}dx\\le\\mathrm{k}_d\\sum_{j=1}^N\\|\\nabla\\psi_j\\|^2,\n\\end{equation}\nwhere $\\rho(x)$ is as in \\eqref{rho}, and $\\{\\psi_j\\}_{j=1}^N\\in H^1(\\mathbb{R}^d)$\nis an arbitrary orthonormal system. Furthermore the (sharp) constants $\\mathrm{k}_d$ and\n $\\mathrm{L}_{1,d}$ satisfy\n\\begin{equation}\\label{kL}\n \\mathrm{k}_d=(2\/d)(1+d\/2)^{1+2\/d}\\mathrm{L}_{1,d}^{2\/d}.\n\\end{equation}\n\nSharp constants in \\eqref{LT} were found in~\\cite{Lap-Weid} for $\\gamma\\ge 3\/2$,\nwhile for a long time the best available estimates for $1\\le\\gamma<3\/2$ were those found in\n\\cite{D-L-L}. 
Very recently an important improvement in the area was made\nin~\\cite{Frank-Nam}, where the original idea of~\\cite{Rumin2}\nwas developed and extended in a substantial way.\n\n\nInequality \\eqref{orth} plays an important role in the theory of the\n Navier--Stokes equations\n\\cite{Lieb, B-V, T}, where the constant $ \\mathrm{k}_2$ enters the estimates\nof the fractal dimension of the global attractors\nof the Navier--Stokes system in various two-dimensional formulations.\n(In the three-dimensional case the corresponding results are of a conditional character.)\n\nAlong with the problem in a bounded domain\n$\\Omega\\subset\\mathbb{R}^2$ with Dirichlet boundary conditions\nthe Navier--Stokes system is also studied with periodic boundary conditions,\nthat is, on a two-dimensional torus. In this case for the system to be dissipative\none has to impose the zero mean condition on the components of the velocity vector\nover the torus.\n\nAnother physically relevant model is the Navier--Stokes system on the sphere.\nIn this case the system is dissipative without extra orthogonality conditions.\nHowever, if we want to study the system in the form of the scalar vorticity equation, then the\nscalar stream function of a divergence free vector field is defined up\nto an additive constant, and without loss of generality we can (and always) assume that\nthe integral of the stream function over the sphere vanishes.\n\n\nWe can formulate our main result as follows.\n\n\n\n\n\\begin{theorem}\\label{Th:1}\nLet ${M}$ denote either $\\mathbb{S}^2$ or $\\mathbb{T}^2$,\nand let $\\dot H^1({M})$ be the Sobolev space of functions with\nmean value zero.\nLet $\\{\\psi_j\\}_{j=1}^N \\in\\dot H^1({M})$ be an orthonormal family\nin $L_2({M})$. Then\n\\begin{equation}\\label{rho}\n\\rho(x):=\\sum_{j=1}^N|\\psi_j(x)|^2\n\\end{equation}\nsatisfies the inequality\n\\begin{equation}\\label{M}\n\\int_{{M}}\\rho(x)^2d{M}\\le\\mathrm{k}\\sum_{j=1}^N\\|\\nabla\\psi_j\\|^2,\n\\end{equation}\nwhere\n$$\n\\mathrm{k}\\le\\frac{3\\pi}{32}=0.2945\\dots\\,.\n$$\n\\end{theorem}\n\n\\begin{corollary} Setting $N=1$ and $\\psi=\\varphi\/\\|\\varphi\\|$ we obtain\nthe interpolation inequality which is often called the Ladyzhenskaya\ninequality (in the context of the Navier--Stokes equations) or the\nKeller--Lieb--Thirring one-bound-state inequality (in the context of the spectral theory):\n$$\n\\|\\varphi\\|_{L_4}^4\\le \\mathrm{k}_{\\mathrm{Lad}}\\|\\varphi\\|^2\\|\\nabla\\varphi\\|^2,\n\\qquad \\mathrm{k}_{\\mathrm{Lad}}\\le\\mathrm{k}_{\\mathrm{LT}}.\n$$\n\\end{corollary}\n\\begin{remark}\n{\\rm\nThe previous estimate of the Lieb--Thirring constant\non $\\mathbb{T}^2$ and $\\mathbb{S}^2$ obtained in~\\cite{Zel-Il-Lap2019} and \\cite{I-L-AA} by\nmeans of the discrete version of the method of \\cite{Rumin2} was:\n$$\n\\mathrm{k}\\le\\frac{3}{2\\pi}=0.477\\,.\n$$\n}\n\\end{remark}\n\\begin{remark}\n{\\rm\nIn all cases $M=\\mathbb{S}^2, \\mathbb{T}^2$, or $\\mathbb{R}^2$ the Lieb--Thirring constant\nsatisfies the (semiclassical) lower bound\n$$\n0.1591\\dots=\\frac1{2\\pi}\\le\\mathrm{k}_{\\mathrm{LT}}.\\\n$$\nIn $\\mathbb{R}^2$ the sharp value of $\\mathrm{k}_{\\mathrm{Lad}}$ was found in\n\\cite{Weinstein} by the numerical solution of the corresponding Euler--Lagrange equation\n$$\n\\mathrm{k}_{\\mathrm{Lad}}=\\frac1{\\pi\\cdot 1.8622\\dots}=0.1709\\dots\\,,\n$$\nwhile the best to date closed form estimate for this constant was obtained in\n\\cite{Nasibov}\n$$\n\\mathrm{k}_{\\mathrm{Lad}}\\le\\frac{16}{27\\pi}=0.188\\dots,\n$$\nsee 
also \\cite[Theorem 8.5]{Lieb--Loss} where the equivalent result is obtained for the inequality\nin the additive form.\n}\n\\end{remark}\n\n\n\n\n\n\n\\setcounter{equation}{0}\n\\section{Lieb--Thirring inequalities on $\\mathbb{S}^2$ }\\label{sec2}\n\nWe begin with the case of a sphere and first consider the scalar\ncase. We recall the basic facts concerning the spectrum of the\nscalar Laplace operator $\\Delta=\\div\\nabla$ on the sphere\n$\\mathbb{S}^{2}$:\n\\begin{equation}\\label{harmonics}\n-\\Delta Y_n^k=n(n+1) Y_n^k,\\quad\nk=1,\\dots,2n+1,\\quad n=0,1,2,\\dots.\n\\end{equation}\nHere the $Y_n^k$ are the orthonormal real-valued spherical\nharmonics and each eigenvalue $\\Lambda_n:=n(n+1)$ has multiplicity $2n+1$.\n\n\nThe following identity is essential in what\nfollows \\cite{S-W}: for any $s\\in\\mathbb{S}^{2}$\n\\begin{equation}\\label{identity}\n\\sum_{k=1}^{2n+1}Y_n^k(s)^2=\\frac{2n+1}{4\\pi}.\n\\end{equation}\n\n\n\n\n\n\n\n\\begin{theorem}\\label{Th:S2}\nLet $\\{\\psi_j\\}_{j=1}^N\\in H^1(\\mathbb{S}^2)$ be an orthonormal family of scalar functions\nwith zero average: $\\int_{\\mathbb{S}^2}\\psi_j(s)dS=0$. Then $\\rho(s):=\\sum_{j=1}^N|\\psi_j(s)|^2$\nsatisfies the inequality\n\\begin{equation}\\label{LTS2}\n\\int_{\\mathbb{S}^2}\\rho(s)^2dS\\le\\frac{3\\pi}{32}\\sum_{j=1}^N\\|\\nabla\\psi_j\\|^2.\n\\end{equation}\n\\end{theorem}\n\\begin{proof}\nWe use the discrete version of the recent important far-going improvement \\cite{Frank-Nam}\nof the approach of \\cite{Rumin2}.\n\nLet $f$ be a smooth non-negative function on $\\mathbb{R}^+$ with\n\\begin{equation}\\label{f}\n\\int_0^\\infty f(t)^2dt=1,\n\\end{equation}\nand therefore for any $a>0$\n\\begin{equation}\\label{fa}\na=\\int_0^\\infty f(E\/a)^2dE.\n\\end{equation}\nExpanding a function $\\psi$ with $\\int_{\\mathbb{S}^2}\\psi(s)dS=0$ in spherical harmonics\n$$\n\\psi(s)=\\sum_{n=1}^\\infty\\sum_{k=1}^{2n+1}\\psi_n^kY_n^k(s),\\qquad\n\\psi_n^k=\\int_{\\mathbb{S}^2}\\psi(s)Y_n^k(s)dS=(\\psi,Y_n^k)\n$$\nand observing that the summation starts with $n=1$ we see using\n\\eqref{fa} that\n\\begin{equation}\\label{chain}\n\\aligned\n\\|\\nabla\\psi\\|^2=\\int_{\\mathbb{S}^2}|\\nabla\\psi(s)|^2dS=\n\\sum_{n=1}^\\infty n(n+1)\\sum_{k=1}^{2n+1}|\\psi_n^k|^2=\\\\=\n\\int_0^\\infty \\sum_{n=1}^\\infty f\\biggl(\\frac E{ n(n+1)}\\biggr)^2\\,\\sum_{k=1}^{2n+1}|\\psi_n^k|^2dE=\\\\=\n\\int_0^\\infty\\int_{\\mathbb{S}^2}|\\psi^E(s)|^2dSdE=\n\\int_{\\mathbb{S}^2}\\int_0^\\infty|\\psi^E(s)|^2dEdS,\n\\endaligned\n\\end{equation}\nwhere\n$$\n\\psi^E(s)=\\sum_{n=1}^\\infty\\sum_{k=1}^{2n+1}f\\biggl(\\frac E{ n(n+1)}\\biggr)\n\\psi_n^k\\,Y_n^k(s).\n$$\nReturning to the family $\\{\\psi_j\\}_{j=1}^N$ we have for any $\\varepsilon>0$\n$$\n\\aligned\n\\rho(s)&=\\sum_{j=1}^N|\\psi_j(s)|^2=\\\\&=\\sum_{j=1}^N|\\psi^E_j(s)|^2\n+2\\sum_{j=1}^N\\psi^E_j(s)(\\psi_j(s)-\\psi_j^E(s))+\\sum_{j=1}^N|\\psi_j(s)-\\psi^E_j(s)|^2\\le\\\\&\\le\n(1+\\varepsilon)\\sum_{j=1}^N|\\psi^E_j(s)|^2+\n(1+\\varepsilon^{-1})\\sum_{j=1}^N|\\psi_j(s)-\\psi^E_j(s)|^2.\n\\endaligned\n$$\nFor each term in the second sum we have\n$$\n\\psi(s)-\\psi^E(s)=\\sum_{n=1}^\\infty\\sum_{k=1}^{2n+1}\\psi_n^k\\biggl(1-f\\biggl(\\frac E{ n(n+1)}\\biggr)\\biggr)\n\\,Y_n^k(s)=\\bigl(\\psi(\\cdot),\\chi^E(\\cdot,s)\\bigr),\n$$\nwhere\n$$\n\\chi^E(s',s)=\\sum_{n=1}^\\infty\\sum_{k=1}^{2n+1}\\biggl(1-f\\biggl(\\frac E{ n(n+1)}\\biggr)\\biggr)\n\\,Y_n^k(s')Y_n^k(s).\n$$\nSince the $\\psi_j$'s are orthonormal, we have by Bessel's 
inequality\n$$\n\\sum_{j=1}^N|\\psi_j(s)-\\psi^E_j(s)|^2=\\sum_{j=1}^N\\bigl(\\psi_j(\\cdot),\\chi^E(\\cdot,s)\\bigr)^2\n\\le\\|\\chi^E(\\cdot,s)\\|^2,\n$$\nwhere in view of \\eqref{identity} $\\|\\chi^E(\\cdot,s)\\|^2$, in fact, is independent of $s$:\n\\begin{equation}\\label{indeps}\n\\aligned\n\\|\\chi^E(\\cdot,s)\\|^2=\n\\sum_{n=1}^\\infty\\sum_{k=1}^{2n+1}\\biggl(1-f\\biggl(\\frac E{ n(n+1)}\\biggr)\\biggr)^2\nY_n^k(s)^2=\\\\=\\frac1{4\\pi}\\sum_{n=1}^\\infty(2n+1)\\biggl(1-f\\biggl(\\frac E{ n(n+1)}\\biggr)\\biggr)^2.\n\\endaligned\n\\end{equation}\n\nWe now specify the choice of $f$ by setting (see~\\cite{Frank-Nam}, \\cite{lthbook})\n\\begin{equation}\\label{choice}\nf(t)=\\frac1{1+\\mu t^2},\\qquad\\\n\\mu=\\frac{\\pi^2}{16}\\,.\n\\end{equation}\nThe function $f$ so chosen solves the minimization problem\n$$\n\\aligned\n\\int_{\\mathbb{R}^2}\\left(1-f(1\/|\\xi|^2)\\right)^2d\\xi=&\\pi\\int_0^\\infty(1-f(t))^2t^{-2}dt\\to\\min\\\\\n\\text{under condition}\\quad&\\int_0^\\infty f(t)^2dt=1,\n\\endaligned\n$$\nand the above integral over $\\mathbb{R}^2$ corresponds to the series\non the right-hand side in \\eqref{indeps} (see also~\\eqref{tor}).\n\nWe first observe that \\eqref{f} is satisfied and secondly,\nin view of the estimate for the series in the Appendix\n\\begin{equation}\\label{series1}\n\\aligned\n\\|\\chi^E(\\cdot,s)\\|^2=\\frac1{4\\pi}\\sum_{n=1}^\\infty\\frac\n{(2n+1)}{\\biggl({1+\\left(\\frac1{\\sqrt{\\mu}E}n(n+1)\\right)^2}\\biggr)^2}<\\\\<\n\\frac1{4\\pi}\\sqrt{\\mu}E\\int_0^\\infty\\frac{dt}{(1+t^2)^2}=\n\\frac1{4\\pi}\\sqrt{\\mu}E\\frac\\pi4=\\frac\\pi{64}E=:AE\\,.\n\\endaligned\n\\end{equation}\nHence\n\\begin{equation}\\label{epseps}\n\\rho(s)\\le(1+\\varepsilon)\\sum_{j=1}^N|\\psi^E_j(s)|^2+\n(1+\\varepsilon^{-1})AE.\n\\end{equation}\nOptimizing with respect to $\\varepsilon$ we obtain\n$$\n\\rho(s)\\le\\left(\\sqrt{\\sum_{j=1}^N|\\psi^E_j(s)|^2}+\\sqrt{AE}\\right)^2,\n$$\nwhich gives that\n$$\n\\sum_{j=1}^N|\\psi^E_j(s)|^2\\ge\\left(\\sqrt{\\rho(s)}-\\sqrt{AE}\\right)^2_+.\n$$\nSumming equalities \\eqref{chain} from $j=1$ to $N$ we\nobtain\n$$\n\\aligned\n&\\sum_{j=1}^N\\|\\nabla\\psi_j\\|^2=\\int_{\\mathbb{S}^2}\\int_0^\\infty\n\\sum_{j=1}^N|\\psi_j^E(s)|^2dEdS\\ge\\\\&\n\\int_{\\mathbb{S}^2}\\int_0^\\infty\\left(\\sqrt{\\rho(s)}-\\sqrt{AE}\\right)^2_+dEdS=\n\\frac1{6A}\\int_{\\mathbb{S}^2}\\rho(s)^2dS=\\frac{32}{3\\pi}\\int_{\\mathbb{S}^2}\\rho(s)^2dS.\n\\endaligned\n$$\nThe proof is complete.\n\\end{proof}\n\\begin{remark}\\label{R:semi}\n{\\rm\nThe constant $\\mathrm{k}$ in the theorem satisfies the (semiclassical) lower bound\n\\begin{equation}\\label{lb}\nk\\ge\\frac1{2\\pi}\\,,\n\\end{equation}\nwhich can easily be proved in our particular case of $\\mathbb{S}^2$. 
In fact, we\ntake for the orthonormal family the eigenfunctions $Y_n^k$ with $n=1,\\dots,N-1$,\nand $k=1,\\dots,2n+1$, so that\n$$\n\\sum_{n=1}^{N-1}(2n+1)=N^2-1\\ \\ \\text{and}\\ \\ \\sum_{n=1}^{N-1}(2n+1)n(n+1)=\\frac12N^2(N^2-1),\n$$\nthen \\eqref{M} and the Cauchy inequality give \\eqref{lb}, since\n$$\n(N^2-1)^2=\\left(\\int_{\\mathbb{S}^2}\\rho(s)dS\\right)^2\\le\n4\\pi\\|\\rho\\|^2\\le\n2\\pi \\mathrm{k}N^2(N^2-1).\n$$\n\n}\n\\end{remark}\n\n\n\\subsection{The vector case}\n\nThe vector case is similar, and the key identity~\\eqref{identity} is replaced by\nvector analogue\n(see \\cite{I93}): for any $s\\in\\mathbb{S}^{2}$\n\\begin{equation}\\label{identity-vec}\n\\sum_{k=1}^{2n+1}|\\nabla Y_n^k(s)|^2=n(n+1)\\frac{2n+1}{4\\pi}.\n\\end{equation}\n\n\n\n\nIn the vector case by the Laplace operator\nacting on (tangent) vector\nfields on $\\mathbb{S}^2$ we mean the Laplace--de Rham\noperator $-d\\delta-\\delta d$ identifying $1$-forms and\nvectors. Then for a two-dimensional manifold\n(not necessarily $\\mathbb{S}^2$) we have\n\\cite{I93}\n\\begin{equation}\\label{vecLap}\n\\mathbf{\\Delta} u=\\nabla\\div u-\\mathop\\mathrm{rot}\\rot u,\n\\end{equation}\nwhere the operators $\\nabla=\\mathop\\mathrm{grad}$ and $\\div$ have the\nconventional meaning. The operator $\\mathop\\mathrm{rot}$ of a vector $u$ is a\nscalar and for a scalar $\\psi$,\n$\\mathop\\mathrm{rot}\\psi$ is a vector:\n\\begin{equation}\\label{divrot}\n\\mathop\\mathrm{rot} u:=\\div(u^\\perp),\\qquad\n\\mathop\\mathrm{rot}\\psi:=\\nabla^\\perp\\psi,\n\\end{equation}\nwhere in the local frame $u^\\perp=(u_2,-u_1)$.\n\n Integrating by parts\nwe obtain\n\\begin{equation}\\label{byparts}\n(-\\mathbf{\\Delta} u,u)=\\|\\mathop\\mathrm{rot} u\\|^2+\\|\\div u\\|^2.\n\\end{equation}\n\n\nThe vector Laplacian has a complete in $L_2(T\\mathbb{S}^2)$ orthonormal basis\nof vector eigenfunctions: corresponding\nto the eigenvalue\n$\\Lambda_n=n(n+1)$, where $n=1,2,\\dots$, there are two families of $2n+1$\northonormal vector-valued eigenfunctions $w_n^k(s)$ and $v_n^k(s)$\n\\begin{equation}\\label{bases}\n\\aligned\nw_n^k(s)&=(n(n+1))^{-1\/2}\\,\\nabla^\\perp Y_n^k(s),\\ -\\mathbf{\\Delta}w_n^k=n(n+1)w_n^k,\\\n\\div w_n^k=0;\\\\\nv_n^k(s)&=(n(n+1))^{-1\/2}\\,\\nabla Y_n^k(s),\\ \\ -\\mathbf{\\Delta}v_n^k=n(n+1)v_n^k,\\ \\mathop\\mathrm{rot} v_n^k=0,\n\\endaligned\n\\end{equation}\nwhere\n$k=1,\\dots,2n+1$, and~(\\ref{identity-vec}) gives the\nfollowing important identities: for any $s\\in\\mathbb{S}^2$\n\\begin{equation}\\label{id-vec}\n\\sum_{k=1}^{2n+1}|w_n^k(s)|^2=\\frac{2n+1}{4\\pi},\\qquad\n\\sum_{k=1}^{2n+1}|v_n^k(s)|^2=\\frac{2n+1}{4\\pi}.\n\\end{equation}\nWe finally observe that $-\\mathbf{\\Delta}$ is strictly\npositive $-\\mathbf{\\Delta}\\ge \\Lambda_1I=2I.$\n\n\n\n\\begin{theorem}\\label{Th:LT-vec}\nLet $\\{u_j\\}_{j=1}^N\\in H^1(T\\mathbb{S}^2)$\nbe an orthonormal family of vector fields in $L^2(T\\mathbb{S}^2)$. 
Then\n\\begin{equation}\\label{orthvec}\n\\int_{\\mathbb{S}^2}\\rho(s)^2dS\\le\n \\frac{3\\pi}{16}\\sum_{j=1}^N(\\|\\mathop\\mathrm{rot} u_j\\|^2\n+\\|\\div u_j\\|^2),\n\\end{equation}\nwhere $\\rho(s)=\\sum_{j=1}^N|u_j(s)|^2$.\nIf, in addition, $\\div u_j=0$ $($or $\\mathop\\mathrm{rot} u_j=0$$)$,\nthen\n\\begin{equation}\\label{orthvecsol}\n\\int_{\\mathbb{S}^2}\\rho(s)^2dS\\le\\frac{3\\pi}{32}\\cdot\n\\begin{cases}\\displaystyle\n\\sum_{j=1}^N\\|\\mathop\\mathrm{rot} u_j\\|^2,\n\\quad \\ \\div u_j=0,\n\\\\\\displaystyle\n\\sum_{j=1}^N\\|\\div u_j\\|^2,\n\\quad \\mathop\\mathrm{rot} u_j=0.\n\\end{cases}\n\\end{equation}\n\\end{theorem}\n\\begin{proof} We prove the first inequality in~\\eqref{orthvecsol},\nthe proof of the second is similar. Expanding a vector function\n$u$ with $\\div u=0$ in the basis $w_n^k$\n$$\nu(s)=\\sum_{n=1}^\\infty\\sum_{k=1}^{2n+1} u_n^kw_n^k(s),\\qquad\nu_n^k=(u,w_n^k),\n$$\nwe have instead of \\eqref{chain}\n\\begin{equation}\\label{chain1}\n\\aligned\n\\|\\mathop\\mathrm{rot} u\\|^2&=\n\\sum_{n=1}^\\infty n(n+1)\\sum_{k=1}^{2n+1}|u_n^k|^2=\\\\&=\n\\int_0^\\infty \\sum_{n=1}^\\infty f\\biggl(\\frac E{ n(n+1)}\\biggr)^2\\,\\sum_{k=1}^{2n+1}|u_n^k|^2dE=\n\\int_{\\mathbb{S}^2}\\int_0^\\infty|u^E(s)|^2dEdS,\n\\endaligned\n\\end{equation}\nwhere\n$$\nu^E(s)=\\sum_{n=1}^\\infty\\sum_{k=1}^{2n+1}f\\biggl(\\frac E{ n(n+1)}\\biggr)\nu_n^k\\,w_n^k(s).\n$$\nAs before\n$$\n\\aligned\n\\rho(s)\\le\n(1+\\varepsilon)\\sum_{j=1}^N|u^E_j(s)|^2+\n(1+\\varepsilon^{-1})\\sum_{j=1}^N|u_j(s)-u^E_j(s)|^2.\n\\endaligned\n$$\nWe now imbed $\\mathbb{S}^2$ into $\\mathbb{R}^3$\nin the natural way and use the standard basis $\\{e_1,e_2,e_3\\}$\nand the scalar product $\\langle \\cdot,\\cdot\\rangle$ in $\\mathbb{R}^3$.\nThen we see that\n$$\n\\aligned\n\\langle u(s)-u^E(s),e_1\\rangle=\\\\\\sum_{n=1}^\\infty\\sum_{k=1}^{2n+1}u_n^k\\biggl(1-f\\biggl(\\frac E{ n(n+1)}\\biggr)\\biggr)\n\\,\\langle w_n^k(s),e_1\\rangle=\\bigl(u(\\cdot),\\chi^E_1(\\cdot,s)\\bigr),\n\\endaligned\n$$\nwhere the vector function\n$$\n\\chi^E_1(s',s)=\\sum_{n=1}^\\infty\\sum_{k=1}^{2n+1}\\biggl(1-f\\biggl(\\frac E{ n(n+1)}\\biggr)\\biggr)\n\\,w_n^k(s')\\langle w_n^k(s),e_1\\rangle.\n$$\nBy orthonormality and Bessel's inequality\n$$\n\\aligned\n\\sum_{j=1}^N|u_j(s)-u^E_j(s)|^2=\\sum_{j=1}^N\\sum_{l=1}^3|\\langle u_j(s)-u^E_j(s),e_l\\rangle|^2=\\\\\n=\\sum_{l=1}^3\\sum_{j=1}^N\\bigl(u_j(\\cdot),\\chi^E_l(\\cdot,s)\\bigr)^2\n\\le\\sum_{l=1}^3\\|\\chi_l^E(\\cdot,s)\\|^2.\n\\endaligned\n$$\nHowever, in view of~\\eqref{id-vec}, the right hand side is again independent of $s$\n$$\n\\aligned\n\\sum_{l=1}^3\\|\\chi^E_l(\\cdot,s)\\|^2&=\n\\sum_{n=1}^\\infty\\biggl(1-f\\biggl(\\frac E{ n(n+1)}\\biggr)\\biggr)^2\\sum_{k=1}^{2n+1}\\sum_{l=1}^3\n|\\langle w_n^k(s),e_l\\rangle|^2=\\\\&=\n\\sum_{n=1}^\\infty\\biggl(1-f\\biggl(\\frac E{ n(n+1)}\\biggr)\\biggr)^2\\sum_{k=1}^{2n+1}\n| w_n^k(s)|^2=\\\\&=\\frac1{4\\pi}\\sum_{n=1}^\\infty(2n+1)\\biggl(1-f\\biggl(\\frac E{ n(n+1)}\\biggr)\\biggr)^2,\n\\endaligned\n$$\nand we complete the proof in exactly the same way as we have\ndone in the proof of Theorem~\\ref{Th:S2} after \\eqref{indeps}.\nFinally, in the proof of inequality~\\eqref{orthvec} both families\nof vector eigenfunctions \\eqref{bases} play equal roles,\nand the constant is increased by the factor of two.\n\\end{proof}\n\nThis, however, does not happen for a single vector function.\n\n\\begin{corollary}\\label{C:vec}\nLet $u\\in H^1(T\\mathbb{S}^2)$. 
Then\n\\begin{equation}\\label{vecLad}\n\\|u\\|^4_{L_4}\\le \\vec{\\mathrm{k}}_\\mathrm{Lad}\n\\|u\\|^2\\left(\\|\\mathop\\mathrm{rot} u\\|^2+\\|\\div u\\|^2\\right),\\qquad\n\\vec{\\mathrm{k}}_\\mathrm{Lad}\\le\\frac{3\\pi}{32}\\,.\n\\end{equation}\n\\end{corollary}\n\\begin{proof}\nThe proof is based on the equivalence\n\\eqref{LT}$_{\\gamma=1}\\Leftrightarrow$\\eqref{orth} with equality\nfor the constants~\\eqref{kL} and the fact that the eigenvalues of\nthe vector Schr\\\"odinger operator on $\\mathbb{S}^2$\n\\begin{equation}\\label{vecSchr}\nAv=-\\mathbf{\\Delta} v-Vv\n\\end{equation}\nhave even multiplicities as the following equality implies\n(see~\\eqref{vecLap}, \\eqref{divrot})\n$$\n\\mathbf{\\Delta}(v^\\perp)=(\\mathbf{\\Delta} v)^\\perp.\n$$\nNow let $u$ in \\eqref{vecLad} be normalized, $\\|u\\|=1$, let\n$V(s)=\\alpha|u(s)|^2$, $\\alpha>0$, and let $E$ be the lowest\neigenvalue of~\\eqref{vecSchr}. If $E<0$, then since $E$ is counted\nat least twice in the sum $\\sum_{\\lambda_j\\le0}\\lambda_j$, it\nfollows that\n\\begin{equation}\\label{E}\nE\\ge \\frac12\\sum_{\\lambda_j\\le0}\\lambda_j\\ge-\\frac12\\mathrm{L_1}\\int_{\\mathbb{S}^2}V(s)^2dS=\n-\\alpha^2\\frac12\\mathrm{L_1}\\|u\\|_{L_4}^4,\n\\end{equation}\nwhere the second inequality is~\\eqref{LT} with\n$$\n\\mathrm{L_1}\\le \\frac14\\cdot\\frac{3\\pi}{16},\n$$\nin view of \\eqref{kL} and \\eqref{orthvec}. If $E\\ge0$, then\n\\eqref{E} also formally holds.\n\nNext, by the variational principle\n\\begin{equation}\\label{Eless}\n\\aligned\nE\\le(Au,u)=\\|\\mathop\\mathrm{rot} u\\|^2+\\|\\div u\\|^2-\\int_{\\mathbb{S}^2}V(s)|u(s)|^2dS=\\\\\n\\|\\mathop\\mathrm{rot} u\\|^2+\\|\\div u\\|^2-\\alpha\\|u\\|^4_{L_4}.\n\\endaligned\n\\end{equation}\nCombining \\eqref{E} and \\eqref{Eless} and setting optimal\n$\\alpha=1\/\\mathrm{L}_1$ we finally obtain\n$$\n\\|u\\|_{L_4}^4\\le 2\\mathrm{L}_1\\left(\\|\\mathop\\mathrm{rot} u\\|^2+\\|\\div u\\|^2\\right)\\le\n\\frac{3\\pi}{32}\\left(\\|\\mathop\\mathrm{rot} u\\|^2+\\|\\div u\\|^2\\right).\n$$\n\\end{proof}\n\n\n\\begin{remark}\n{\\rm\nIt is worth pointing out that\n and for any domain on the sphere\n$\\Omega\\subseteq\\mathbb{S} ^2$ and an orthonormal family\n$\\{u_j\\}_{j=1}^N\\in H^1_0(\\Omega,T\\mathbb{S}^2)$ extension by zero\nshows that the corresponding Lieb--Thirring constants are uniformly\nbounded by the constants on the whole sphere whose estimates were\nfound in Theorem~\\ref{Th:LT-vec} and Corollary~\\ref{C:vec}. }\n\\end{remark}\n\n\n\\setcounter{equation}{0}\n\\section{Lieb--Thirring inequalities on $\\mathbb{T}^2$ }\\label{sec3}\nWe now prove Theorem~\\ref{Th:1} for the 2D torus. 
We first consider the torus with equal periods\nand without loss of generality\nwe set $\\mathbb{T}^2=[0,2\\pi]^2$.\n\\begin{proof}[Proof of Theorem~\\ref{Th:1} for $\\mathbb{T}^2$]\nWe use the Fourier series\n$$\n\\psi(x)=\\frac1{2\\pi}\\sum_{k\\in\\mathbb{Z}^2_0}\\psi_k e^{ik\\cdot x},\\qquad\n\\psi_k=\\frac1{2\\pi}\\int_{\\mathbb{T}^2}\\psi(x)e^{-ik\\cdot x}dx,\\quad\n\\mathbb{Z}^2_0=\\mathbb{Z}^2\\setminus\\{0,0\\},\n$$\nso that\n$$\n\\|\\psi\\|^2=\\sum_{k\\in\\mathbb{Z}^2_0}|\\psi_k|^2,\n\\qquad\n\\|\\nabla\\psi\\|^2=\\sum_{k\\in\\mathbb{Z}^2_0}|k|^2|\\psi_k|^2.\n$$\nThen as before we have\n$$\n\\|\\nabla\\psi\\|^2=\n\\int_0^\\infty \\sum_{k\\in\\mathbb{Z}^2_0} f\\biggl(\\frac E{|k|^2}\\biggr)^2|\\psi_k|^2dE=\n\\int_{\\mathbb{T}^2}\\int_0^\\infty|\\psi^E(x)|^2dEdx,\n$$\nwhere\n$$\n\\psi^E(x)=\\frac1{2\\pi}\\sum_{k\\in\\mathbb{Z}^2_0} f\\biggl(\\frac E{|k|^2}\\biggr)\\psi_ke^{ik\\cdot x},\n$$\nand therefore\n$$\n\\psi(x)-\\psi^E(x)=\n\\frac1{2\\pi}\\sum_{k\\in\\mathbb{Z}^2_0}\n\\left(1-f\\biggl(\\frac E{|k|^2}\\biggr)\\right)\\psi_ke^{ik\\cdot x}=\n(\\psi(\\cdot), \\chi^E(\\cdot,x)),\n$$\nwhere\n$$\n\\chi^E(x',x)=\\frac1{2\\pi}\n\\sum_{k\\in\\mathbb{Z}^2_0}\n\\left(1-f\\biggl(\\frac E{|k|^2}\\biggr)\\right)e^{ik\\cdot x'}e^{-ik\\cdot x}.\n$$\nWith the choice of $f$ given in \\eqref{choice} and setting $a=\\sqrt{\\mu}E$ below, we have\n\\begin{equation}\\label{tor}\n\\aligned\n\\|\\chi^E(\\cdot,x)\\|^2=\\frac1{4\\pi^2}\\sum_{k\\in\\mathbb{Z}^2_0}\n\\left(1-f\\biggl(\\frac E{|k|^2}\\biggr)\\right)^2=\\\\=\n\\frac1{4\\pi^2}\\sum_{k\\in\\mathbb{Z}^2_0}\\frac1{\\left(\\left(\\frac{|k|}{\\sqrt{a}}\\right)^4+1\\right)^2}\n<\\frac a{16}=\\frac\\pi{64}E=:AE,\n\\endaligned\n\\end{equation}\nwhere the key inequality for series is proved in the Appendix.\n\nAt this point we can complete the proof as in Theorem~\\ref{Th:S2}.\n\\end{proof}\n\n\\subsection{Elongated torus.}\nWe now briefly discuss the Lieb--Thirring constant on a 2D torus\nwith aspect ratio $\\alpha$. Since the Lieb--Thirring constant\ndepends only on $\\alpha$, we consider the torus $\\mathbb{T}^2_\\alpha=[0,2\\pi\/\\alpha]\\times\n[0,2\\pi]$. Furthermore, it suffices to consider the case $\\alpha\\le1$, since otherwise\nwe merely interchange the periods.\n\n\\begin{theorem}\\label{Th:alpha}\nThe Lieb--Thirring constant on the elongated torus $\\mathbb{T}^2_\\alpha$\nsatisfies the bound\n\\begin{equation}\\label{alpha}\n\\mathrm{k}_\\mathrm{LT}(\\mathbb{T}^2_\\alpha)\\le\\frac1\\alpha\\frac{3\\pi}{32}\\\n\\text{as}\\ \\alpha\\to0.\n\\end{equation}\n\\end{theorem}\n\\begin{proof} We shall prove \\eqref{alpha} under an additional technical assumption that\n$k=1\/\\alpha\\in\\mathbb{N}$. Given the orthonormal family\n$\\{\\psi_j\\}_{j=1}^N \\in\\dot H^1(\\mathbb{T}^2_\\alpha)$, we extend each $\\psi_j$\n by periodicity in the $x_2$ direction $k$ times, multiply the result by $\\sqrt{\\alpha}$\n and denote the resulting function defined on the square\n torus $\\mathbb{T}^2=[0,2\\pi k]^2$ by $\\widetilde\\psi_j$. 
Then the family\n $\\{\\widetilde\\psi_j\\}_{j=1}^N$ is orthonormal in $L_2(\\mathbb{T}^2)$ and\n for $\\rho_{\\widetilde\\psi}(x)=\\sum_{j=1}^N|\\widetilde\\psi_j(x)|^2$\n and $\\rho_{\\psi}(x)=\\sum_{j=1}^N|\\psi_j(x)|^2$ it holds\n $$\n \\int_{\\mathbb{T}^2}\\rho_{\\widetilde\\psi}(x)^2dx=\n \\alpha\\int_{\\mathbb{T}^2_\\alpha}\\rho_{\\psi}(x)^2dx,\\qquad\n \\int_{\\mathbb{T}^2}|\\nabla\\widetilde\\psi_j(x)|^2dx=\n \\int_{\\mathbb{T}^2_\\alpha}|\\nabla\\psi_j(x)|^2dx,\n $$\n which gives \\eqref{alpha}.\n \\end{proof}\n \\begin{remark}\n{\\rm\nThe rate of growth $1\/\\alpha$ of the Lieb--Thirring constant is sharp\nas $\\alpha\\to0$. To see this we set $N=1$ and consider a function\non $\\mathbb{T}^2_\\alpha$ depending on the long coordinate $x_1$ only.\nFor example, let $\\psi(x_1,x_2)=\\sin(2\\pi\\alpha x_1)$.\nThen $\\|\\psi\\|^4_{L_4}\\sim 1\/\\alpha$ $(=\\frac{3\\pi^2}{2\\alpha})$,\n$\\|\\psi\\|^2_{L_2}\\sim 1\/\\alpha$ $(=\\frac{2\\pi^2}{\\alpha})$,\n$\\|\\nabla\\psi\\|^2_{L_2}\\sim \\alpha$ $(=2\\pi^2\\alpha)$.\nTherefore $\\mathrm{k}_\\mathrm{LT}(\\mathbb{T}^2_\\alpha)\\succeq 1\/\\alpha$\n$(\\ge\\frac1\\alpha\\frac3{8\\pi^2})$.\n}\n\\end{remark}\n\\begin{remark}\n{\\rm\nThe orthogonal complement to the subspace of functions depending only on the long coordinate\n$x_1$ consists of functions $\\psi(x_1,x_2)$ with mean value\nzero with respect to the short coordinate $x_2$:\n\\begin{equation}\\label{T2cond}\n\\int_0^{2\\pi}\\psi(x_1,x_2)dx_2=0\\quad\\forall x_1\\in[0,2\\pi\/\\alpha].\n\\end{equation}\nThe Lieb--Thirring constant on this subspace is bounded uniformly\nwith respect to $\\alpha$ as $\\alpha\\to0$. The similar\nresult holds for the multidimensional torus with different periods.\nSee \\cite{I-L-MS} for the details.\n}\n\\end{remark}\n\n\\begin{remark}\n{\\rm The lifting argument of \\cite{Lap-Weid} was used in\n\\cite{I-L-MS} to derive the Lieb--Thirring inequalities on the\nmultidimensional with pointwise orthogonality condition of the type\n\\eqref{T2cond}. It is not clear how to use the lifting argument\nin the case of a global (and weaker) orthogonality contition\n$\\int_{\\mathbb{T}^d}\\psi(x)dx=0$.\n\nFinally, we do not know whether the lifting argument\ncan in some form be used for the Lieb--Thirring inequalities\non the sphere, say, when going over from $\\mathbb{S}^{d-1}$ to\n$\\mathbb{S}^{d}$. }\n\\end{remark}\n\n\n\n\n\\setcounter{equation}{0}\n\\section{Appendix. Estimates of the series}\\label{sec4}\n\n\\subsection*{Estimate for the sphere.} The series estimated in \\eqref{series1} is precisely of the type\n\\begin{equation}\\label{G}\nG(\\nu):=\\sum_{n=1}^\\infty(2n+1)g\\left(\\nu\\, n(n+1)\\right),\n\\end{equation}\nwhere $g$ is sufficiently smooth and sufficiently fast decays at\ninfinity. We need to find the asymptotic behavior of $G(\\nu)$ as\n$\\nu\\to0$. This has been done in~\\cite{IZ} where the following\nresult was proved.\n\\begin{lemma}\\label{L:E-M}\nThe following asymptotic expansion holds as $\\nu\\to0$:\n\\begin{equation}\\label{as2}\nG(\\nu)=\\frac1{\\nu}\\int_0^\\infty g(t)dt-\\frac23g(0)-\n\\frac1{15}\\nu g'(0)+\\frac4{315}\\nu^2g''(0)+\nO(\\nu^3).\n\\end{equation}\n\\end{lemma}\n\nThe series in \\eqref{series1} is of the form~\\eqref{G}\nwith\n$$\ng(t)=\\frac1{(1+t^2)^2}, \\qquad \\nu=\\frac1{\\sqrt{\\mu}E},\n$$\nso that $g(0)=1$, $g'(0)=0$, $g''(0)=-4$ and $\\int_0^\\infty g(t)dt=\\pi\/4$. 
Therefore\n\\eqref{as2} gives\n$$\n\\sum_{n=1}^\\infty\\frac\n{(2n+1)}{\\biggl({1+\\left(\\frac{n(n+1)}a\\right)^2}\\biggr)^2}=\n a\\frac\\pi 4-\\frac23-\\frac{16}{315}a^{-2}+O(a^{-3}),\\ \\text{as}\\ a\\to\\infty,\n$$\nwhich shows that\ninequality \\eqref{series1}\nclearly holds for large energies $E>E_0$. The proof of inequality \\eqref{series1} for\nall $E\\in[0,\\infty)$ amounts to showing that the inequality\n$$\nH_{\\mathbb{S}^2}(a):=\\frac4\\pi\\,a^3\\sum_{n=1}^\\infty\\frac{2n+1}{\\bigl(\\bigl(n(n+1)\\bigr)^2+a^2\\bigr)^2}\n\\,<\\,1, \\quad a=\\sqrt{\\mu}E=\\frac\\pi 4E\n$$\nholds on a \\emph{finite} interval $ a\\in[0,a_0]$.\nThe value of $a_0$ (say, $20$) can be specified similarly to \\cite{I12JST}.\nFurthermore, the sum of the series\ncan be found in an explicit form in terms of the (digamma) $\\psi$-function.\nThe function $H_{\\mathbb{S}^2}(a)$ and the third-order remainder term are shown in Fig.~\\ref{fig:S2}.\n\\begin{figure}[htb]\n\\centerline{\\psfig{file=S2.eps,width=7.5cm,height=6cm,angle=0}\n\\psfig{file=remainderS2.eps,width=7.5cm,height=6cm,angle=0}}\n\\caption{The graph of $H_{\\mathbb{S}^2}(a)$ is on the left;\nthe remainder term $\\left(H_{\\mathbb{S}^2}(a)-1-\\frac8{3\\pi a}\\right)\\cdot a^3$\nis shown on the right, the horizontal red line is $-64\/(315\\pi)=-0.064$.}\n\\label{fig:S2}\n\\end{figure}\n\\subsection*{Estimate for the torus.}\n\n\\begin{lemma}\\label{L:Poisson}\nThe following asymptotic expansion holds as $a\\to\\infty$:\n\\begin{equation}\\label{at2}\n\\sum_{k\\in\\mathbb{Z}^2_0}\\frac1{\\left(\\left(\\frac{|k|}{\\sqrt{a}}\\right)^4+1\\right)^2}\n=\\frac{\\pi^2}4 a-1+O(e^{-C\\sqrt{a}}).\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nWe use the Poisson summation formula\n(see, e.\\,g., \\cite{S-W}):\n$$\n\\sum_{m\\in\\mathbb{Z}^d}f(m\/\\nu)=\n(2\\pi)^{d\/2}\\nu^d\n\\sum_{m\\in\\mathbb{Z}^d}\\widehat{f}(2\\pi m \\nu),\n$$\nwhere\n$\\widehat{f}(\\xi)=(2\\pi)^{-d\/2}\\int_{\\mathbb{R}^d}\nf(x)e^{-i\\xi x}dx$.\n\nTaking into account that the term with $k=(0,0)$ is missing in\n\\eqref{at2}, the Poisson summation formula gives\n\\begin{equation}\\label{expsmall}\n\\aligned\n\\sum_{k\\in\\mathbb{Z}^2_0}\\frac1{\\left(\\left(\\frac{|k|}{\\sqrt{a}}\\right)^4+1\\right)^2}+1=\\\\=\na\\int_{\\mathbb{R}^2}\\frac{dxdy}{((x^2+y^2)^2+1)^2}+2\\pi a\\sum_{k\\in\\mathbb{Z}^2_0}\n\\widehat h(2\\pi\\sqrt{a}|k|)=\\frac{\\pi^2}4a+O(e^{-C\\sqrt{a}}),\n\\endaligned\n\\end{equation}\nwhere $h(x,y)=\\frac1{((x^2+y^2)^2+1)^2}$ is analytic\nand therefore its Fourier transform\n\\begin{equation}\\label{Fourier}\n\\widehat h(\\xi)=\\frac1{2\\pi}\\int_{\\mathbb{R}^2}\ne^{-ix\\xi_1-iy\\xi_2}h(x,y)dxdy\n\\end{equation}\nis exponentially decaying.\n\\end{proof}\n\nThe graph of the function\n\\begin{equation}\\label{lessthen1}\nH_{\\mathbb{T}^2}(a):=\\frac4{\\pi^2}\\,a^3\\sum_{k\\in\\mathbb{Z}^2_0}\\frac{1}{\\bigl(|k|^4+a^2\\bigr)^2}\n\\,<\\,1,\n\\end{equation}\nand the exponentially small remainder term\n$$\nR(a)=2\\pi a\\sum_{k\\in\\mathbb{Z}^2_0}\n\\widehat h(2\\pi\\sqrt{a}|k|)\n$$\nare shown in Fig.~\\ref{fig:T2}\n\\begin{figure}[htb]\n\\centerline{\\psfig{file=T2.eps,width=7.5cm,height=6cm,angle=0}\n\\psfig{file=remainderT2.eps,width=7.5cm,height=6cm,angle=0}}\n\\caption{The function $H_{\\mathbb{T}^2}(a)$ is shown on the left;\nthe exponentially small term $R(a)$ is shown on the right.}\n\\label{fig:T2}\n\\end{figure}\n\nWe now give an explicit estimate for the exponentially small remainder term in~\\eqref{expsmall}.\nThe function $h(z)$ is analytic in the domain 
$\\Omega\\subset\\mathbb{C}^2$:\n$\\Omega=\\{z\\in \\mathbb{C}^2,\\ |\\mathrm{Im} z_1|<\\frac12,\\ |\\mathrm{Im} z_2|<\\frac12\\}$.\nIn fact, the equation\n$$\n(x+i\\alpha)^2+(y+i\\alpha)^2=\\pm i\n$$\nhas real solutions $x$ and $y$ only for $\\alpha\\ge\\frac12$.\n\nFor $F(x,y,\\alpha)=((x+i\\alpha)^2+(y+i\\alpha)^2)^2+1$ we have\n$$\n\\aligned\n&\\mathrm{Re}\\,F=\n(x^2+y^2)^2-8\\alpha^2(x^2+y^2+xy)+4\\alpha^4+1\\ge t^2-12\\alpha^2t+4\\alpha^4+1,\\\\\n&\\mathrm{Im}\\,F=\n4a(x+y)(x^2+y^2-2\\alpha^2),\\quad|\\mathrm{Im}\\,F|\\le4\\sqrt{2}\\alpha t^{1\/2}|t-2\\alpha^2|,\n\\endaligned\n$$\nwhere $t:=x^2+y^2$. Next, by a direct inspection we verify that\nfor $t\\ge0$\n$$\n\\aligned\n|F^2|\\ge\n (\\mathrm{Re}\\,F)^2-(\\mathrm{Im}\\,F)^2=\\\\\n(t^2-12\\alpha^2t+4\\alpha^4+1)^2-32\\alpha^2t(t-2a^2)^2>\\frac1b(t^4+1),\n\\endaligned\n$$\nwhere $\\alpha=4.6^{-1}$ and $b=4.75$.\nThis gives that for $|\\mathrm{Im} z_1|\\le\\alpha,\\ |\\mathrm{Im} z_2|\\le\\alpha$\n$$\n|h(x+i\\alpha,y+i\\alpha)|\\le\\frac b{(x^2+y^2)^4+1}\\,.\n$$\nBy the Cauchy integral theorem we can shift the $x$ and $y$ integration in \\eqref{Fourier}\nin the complex plane by $\\pm i\\alpha$ (depending on the sign of $\\xi_1$ and $\\xi_2$) and find that\n$$\n|\\widehat h(\\xi)|\\!\\le\\!\\frac b{2\\pi}e^{-(|\\xi_1|+|\\xi_2|)\\alpha}\\!\\int_{\\mathbb{R}^2}\n\\frac{dxdy}{(x^2+y^2)^4+1}\\!=e^{-(|\\xi_1|+|\\xi_2|)\\alpha}\\frac{b\\pi\\sqrt{2}}8\\le\ne^{-\\alpha|\\xi|}\\frac{b\\pi\\sqrt{2}}8\\,.\n$$\n\nWe write the numbers $|k|^2$ over the lattice $\\mathbb{Z}^2_0$\nin non-decreasing order and denote them by $\\{\\lambda_j\\}^\\infty_{j=1}$.\nUsing that $\\lambda_j\\ge j\/4$ (see \\cite{I-L-AA}) and setting\n$L:=\\frac{\\alpha\\pi\\sqrt{a}}2$ and $A:=\\frac{\\pi^2\\sqrt{2}ab}4$ we\nestimate the series in \\eqref{expsmall} as follows\n$$\n\\aligned\n|R(a)|\\le\n2\\pi a\\sum_{k\\in\\mathbb{Z}^2_0}\n|\\widehat h(2\\pi\\sqrt{a}|k|)|=2\\pi a\\sum_{j=1}^\\infty\n|\\widehat h(2\\pi\\sqrt{a}\\lambda_j^{1\/2})|\\le\\\\\nA\\sum_{j=1}^\\infty\ne^{-2\\pi\\alpha\\sqrt{a}\\lambda_j^{1\/2}}\\le\nA\\sum_{j=1}^\\infty\ne^{-2Lj^{1\/2}}=Ae^{-L}\\sum_{j=1}^\\infty e^{-L(2j^{1\/2}-1)}\\le\\\\\nAe^{-L}\\sum_{j=1}^\\infty e^{-Lj^{1\/2}}<\nAe^{-L}\\int_0^\\infty e^{-L\\sqrt{x}}dx=\nAe^{-L}\\frac2{L^2}=\\frac{2^{3\/2}b}{\\alpha^2}e^{-\\frac{\\alpha\\pi\\sqrt{a}}2}\\,.\n\\endaligned\n$$\nInequality~\\eqref{lessthen1} holds if $R(a)<1$. The above estimates show that\n$|R(a)|<1$ for\n$$\na>\\left[\\frac2{\\alpha\\pi}\\log\\left(\\frac{2^{3\/2}b}{\\alpha^2}\\right)\\right]^2=\n273.8\\,.\n$$\n\nA more optimistic estimate follows from the fact that $h(x)$ is radial and therefore\nso is its Fourier transform\n $$\\widehat h(\\xi)=\\int_0^\\infty J_0(|\\xi|r)h(r)rdr,\n$$\nwhere $J_0$ is the Bessel function. The latter integral is expressed in terms of the\nMeijer G-function and satisfies $|\\widehat h(\\xi)|\\left[\\frac4\\pi\\log\\frac{64}\\pi\\right]^2=14.73\\,.\n$$\n\n\n\\subsection*{Acknowledgements}\\label{SS:Acknow}\nThe work of A.\\,I. and S.\\,Z. is supported in part by the Russian\nScience Foundation grant 19-71-30004 (sections 1,2). Research of A.\\,L. is\nsupported by the Russian Science Foundation grant 19-71-30002 (sections 3,4).\n\n\n\n\n ","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:Introduction}\n\nPartial least squares (PLS) is a regularized regression technique developed by \\citet{Wol84} to deal with collinearities in the regressor matrix. 
It is an iterative algorithm where the covariance between response and regressor is maximized at each step, see \\citet{Hel88} for a detailed description. Regularization in the PLS algorithm is obtained by stopping the iteration process early.\n\nSeveral studies showed that partial least squares algorithm is competitive with other regression methods such as ridge regression and principal component regression and it needs generally fewer iterations than the latter to achieve comparable estimation and prediction, see, e.g., \\citet{FrankFriedman} and \\citet{Krae07b}. For an overview of further properties of PLS we refer to \\citet{Ros06}.\n\nReproducing kernel Hilbert spaces (RKHS) have a long history in probability and statistics \\citep[see e.g.][]{Berlinet}. \nHere we focus on the supervised kernel based learning approach for the solution of non-parametric regression problems. RKHS methods are both computationally and theoretically attractive, due to the kernel trick \\citep{Sch98} and the representer theorem \\citep{Wah99} as well as its generalization \\citep{Sch01}. Within the reproducing kernel Hilbert space framework one can adapt linear regularized regression techniques like ridge regression and principal component regression to a non-parametric setting, see \\citet{Sau98} and \\citet{Ros00a}, respectively. We refer to \\citet{bSch} for more details on the kernel based learning approach.\n\nKernel PLS was introduced in \\citet{Ros01} who reformulated the algorithm presented in \\citet{Lin93}. The relationship to kernel conjugate gradient (KCG) methods was highlighted in \\citet{Blan10a}. It can be seen in \\citet{Hanke} that conjugate gradient methods are well suited for handling ill-posed problems, as they arise in kernel learning, see, e.g., \\citet{Vit06}.\n\n\\citet{Ros03} investigated the performance of kernel partial least squares (KPLS) for non-linear discriminant analysis.\n\\citet{Blan10a} proved the consistency of KPLS when the algorithm is stopped early without giving convergence rates. \n\n\\citet{Cap07} showed that kernel ridge regression (KRR) attains optimal probabilistic rates of convergence for independent and identically distributed data, using a source and a polynomial effective dimensionality condition. A generalization of these results to a wider class of effective dimensionality conditions and extension to kernel principal component regression can be found in \\citet{Dicker17}.\n\nFor a variant of KCG \\citet{Blan10b} obtained probabilistic convergence rates for independent identically distributed data. The pointed explicitly out that their approach and results are not directly applicable to KPLS.\n\nWe study of the convergence of the kernel partial least squares estimator to the true regression function when the algorithm is stopped early. \nSimilar to \\citet{Blan10b} we derive explicit probabilistic convergence rates. In contrast to previously cited works on kernel regression our input data are not independent and identically distributed but rather stationary time series. We derive probabilistic convergence results that can be applied for arbitrary temporal dependence structures, given that certain concentration inequalities for these data hold.\nThe derived convergence rates depend not only on the complexity of the target function and of the data mapped into the kernel space, but also on the persistence of the dependence in the data. 
In the stationary setting we prove that the short range dependence still leads to optimal rates, but if the dependence is more persistent, the rates become slower.\n\n\\section{Kernel Partial Least Squares}\n\\label{sec:problem}\nConsider the non-parametric regression problem \n\\begin{equation}\n\\label{eq:model}\n\ty_t = f^\\ast(X_t) + \\varepsilon_t,~~t \\in {\\mathbb Z}.\n\\end{equation}\nHere $\\{X_t\\}_{t \\in {\\mathbb Z}}$ is a $d$-dimensional, $d \\in {\\mathbb N}$, stationary time series on a probability space $(\\Omega,\\mathcal A,\\mathrm{P})$ and $\\{\\varepsilon_t\\}_{t \\in {\\mathbb Z}}$ is an independent and identically distributed sequence of real valued random variables with expectation zero and variance $\\sigma^2 > 0$ that is independent of $\\{X_t\\}_{t \\in {\\mathbb Z}}$. Let $X$ be a random vector that is independent of $\\{X_t\\}_{t \\in {\\mathbb Z}}$ and $\\{\\varepsilon_t\\}_{t \\in {\\mathbb Z}}$ with the same distribution as $X_0$. The target function we seek to estimate is $f^\\ast \\in \\mathcal L^2\\left(\\Prob^{X} \\right)$.\n\nFor the purpose of supervised learning assume that we have a training sample $\\{(X_t,y_t)\\}_{t=1}^n$ for some $n \\in {\\mathbb N}$. In the following we introduce some basic notation for the kernel based learning approach.\n\nDefine with $(\\mathcal{H},\\langle \\cdot,\\cdot\\rangle_\\mathcal H)$ the RKHS of functions on ${\\mathbb R}^d$ with reproducing kernel $k:{\\mathbb R}^d \\times {\\mathbb R}^d \\rightarrow {\\mathbb R}$, i.e., it holds\n\\begin{equation}\n\\label{eq:rep.property}\n\tg(x) = \\langle g, k(\\cdot,x) \\rangle_\\mathcal H, ~~x \\in {\\mathbb R}^d, g\\in \\mathcal H.\n\\end{equation} \n\nThe corresponding inner product and norm in $\\mathcal H$ is denoted by $\\langle \\cdot,\\cdot \\rangle_\\mathcal H$ and $\\|\\cdot\\|_\\mathcal H$, respectively. We refer to \\citet{Berlinet} for examples of Hilbert spaces and their reproducing kernels. In the following we deal with reproducing kernel Hilbert spaces which fulfill the following, rather standard, conditions:\n\\begin{enumerate}[label={(K\\arabic*})]\n\\item \\label{con:k1}\n$\\mathcal H$ is separable,\n\\item \\label{con:k2}\nThere exists a $\\kappa>0$ such that $|k(x,y)| \\leq \\kappa$ for all $x,y \\in {\\mathbb R}^d$ and $k$ is measurable.\n\\end{enumerate}\nUnder \\ref{con:k1} the Hilbert-Schmidt norm $\\|\\cdot\\|_{\\mathrm{HS}}$ for operators mapping from $\\mathcal H$ to $\\mathcal H$ is well defined.\nIf condition \\ref{con:k2} holds, all functions in $\\mathcal H$ are bounded, see \\citet{Berlinet}, chapter 2. \nThe conditions are satisfied for a variety of popular kernels, e.g., Gaussian or triangular. \n\nThe main principle of RKHS methods is the mapping of the data $X_t$ into $\\mathcal H$ via the feature maps $\\phi_t = k(\\cdot,X_t)$, $t=1,\\dots,n$. This mapping can be done implicitly by using the kernel trick $\\langle \\phi_t,\\phi_s \\rangle_\\mathcal H = k(X_t,X_s)$ and thus only the $n \\times n$ dimensional kernel matrix $K_n = n^{-1}[k(X_t,X_s)]_{t,s=1}^n$ is needed in the computations. 
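\n\nFor instance, with a Gaussian kernel, which satisfies the two conditions above, the sample kernel matrix can be formed directly from the observations; the short illustrative sketch below uses a placeholder bandwidth and is not tied to any particular data set.\n\n\\begin{verbatim}\nimport numpy as np\n\ndef gaussian_kernel(x, y, sigma=1.0):\n    # bounded, measurable kernel; sigma is a placeholder bandwidth\n    return np.exp(-np.sum((np.asarray(x) - np.asarray(y)) ** 2) / (2.0 * sigma ** 2))\n\ndef kernel_matrix(X):\n    # K_n = n^{-1} [k(X_t, X_s)]_{t,s=1}^n, one observation per row of X\n    n = X.shape[0]\n    K = np.array([[gaussian_kernel(X[t], X[s]) for s in range(n)] for t in range(n)])\n    return K / n\n\\end{verbatim}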
Then the task for RKHS methods is to find coefficients $\\alpha_1,\\dots,\\alpha_n$ such that $f_\\alpha = \\sum_{t=1}^n \\alpha_t \\phi_t$ is an adequate approximation of $f^\\ast$ in $\\mathcal H$, measured in the $\\mathcal L^2\\left(\\Prob^{X} \\right)$ norm $\\|\\cdot\\|_2$.\n\nThere are a variety of different approaches to estimate the coefficients $\\alpha_1,\\dots,\\alpha_n$, including kernel ridge regression, kernel principal component regression and, of course, kernel partial least squares. The latter method was introduced by \\citet{Ros01} and is the focus of the current work.\n\nIt was shown by \\citet{Krae07b} that the KPLS algorithm solves\n\\begin{equation}\n\\label{eq:kpls.optim}\n\t\\widehat{\\alpha}_i = \\arg\\min\\limits_{v \\in \\mathcal K_i(K_n,y)} \\|y - K_n v\\|^2,~~i=1,\\dots,n,\n\\end{equation}\nwith $y=(y_1,\\dots,y_n)^{ \\mathrm{\\scriptscriptstyle T} }$. Here $\\mathcal K_i(K_n,y) = \\mathrm{span}\\left\\{\n\ty,K_n y, K_n^2y,\\dots,K_n^{i-1}y\n\\right\\}$, $i=1,\\dots,n$, is the $i$th order Krylov space with respect to $K_n$ and $y$ and $\\| \\cdot\\|$ denotes the Euclidean norm rescaled by $n^{-1}$. The dimension $i$ of the Krylov space is the regularization parameter for KPLS.\n\nWe will introduce several operators that will be crucial for our further analysis. Fist define two integral operators: the kernel integral operator $T^\\ast:\\mathcal L^2\\left(\\Prob^{X} \\right) \\rightarrow \\mathcal H, g \\mapsto\\mathrm{E}\\{k(\\cdot,X) g(X)\\}$ and the change of space operator $T:\\mathcal H \\rightarrow \\mathcal L^2\\left(\\Prob^{X} \\right), g \\mapsto g$, which is well defined if \\ref{con:k2} holds.\nIt is easy to see that $T, T^\\ast$ are adjoint, i.e., for $u \\in \\mathcal H$ and $v \\in \\mathcal L^2\\left(\\Prob^{X} \\right)$ it holds $\\langle T^\\ast v, u \\rangle_\\mathcal H = \\langle v, T u \\rangle_2$ with $\\langle\\cdot,\\cdot \\rangle_2$ being the inner product in $\\mathcal L^2\\left(\\Prob^{X} \\right)$. \n\nThe sample analogues of $T,T^\\ast$ are $T_n:\\mathcal H \\rightarrow {\\mathbb R}^n, g \\mapsto \\{g(X_1),\\dots,g(X_n)\\}^{ \\mathrm{\\scriptscriptstyle T} }$ and $T_n^\\ast:{\\mathbb R}^n \\rightarrow \\mathcal H, (v_1,\\dots,v_n)^{ \\mathrm{\\scriptscriptstyle T} } \\mapsto n^{-1} \\sum_{t=1}^n v_t k(\\cdot,X_t)$, respectively. Both operators are adjoint with respect to \nthe rescaled Euclidean product $\\langle u,v\\rangle = n^{-1} u^{ \\mathrm{\\scriptscriptstyle T} } v$, $u,v \\in {\\mathbb R}^d$\n\nFinally, we define the sample kernel covariance operator $S_n = T^\\ast_n T_n:\\mathcal H \\rightarrow \\mathcal H$ and the population kernel covariance operator $S = T^\\ast T:\\mathcal H \\rightarrow \\mathcal H$. Note that it holds $K_n = T_n T_n^\\ast$. Under \\ref{con:k1} and \\ref{con:k2} $S$ is a self-adjoint compact operator with operator norm $\\|S\\|_{\\mathcal L} \\leq \\kappa$, see \\citet{Cap07}.\n\nWith this notation we can restate (\\ref{eq:kpls.optim}) for the function $f_\\alpha$ \n\\begin{equation}\n\\label{eq:func.representation}\nf_{\\widehat{\\alpha}_i} = \\arg \\min_{g \\in \\mathcal K_i(S_n,T_n^\\ast y )} \\|y- \\{g(X_1),\\dots,g(X_n)\\}^{ \\mathrm{\\scriptscriptstyle T} }\\|^2=\\arg \\min_{g \\in \\mathcal K_i(S_n,T_n^\\ast y )}\\|y-T_ng\\|^2.\n\\end{equation}\nHence, we are looking for functions that minimize the squared distance to $y$ constrained to a sequence of Krylov spaces. 
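\n\nAs a purely computational illustration, the minimiser in (\\ref{eq:kpls.optim}) can be obtained by explicitly building the Krylov basis and solving the induced least squares problem; by the result of \\citet{Krae07b} cited above this coincides, in exact arithmetic, with the iteratively computed kernel PLS solution, although the sketch below is meant to fix ideas rather than to serve as a numerically robust implementation.\n\n\\begin{verbatim}\nimport numpy as np\n\ndef kpls_alpha(K, y, i):\n    # alpha = argmin over v in the i-th Krylov space of ||y - K v||^2;\n    # the n^{-1} rescaling of the Euclidean norm does not change the minimiser\n    n = len(y)\n    V = np.empty((n, i))\n    v = np.asarray(y, dtype=float).copy()\n    for j in range(i):                    # Krylov basis y, K y, ..., K^{i-1} y\n        V[:, j] = v\n        v = K @ v\n    Q, _ = np.linalg.qr(V)                # orthonormalise for numerical stability\n    c, *_ = np.linalg.lstsq(K @ Q, y, rcond=None)\n    return Q @ c                          # coefficients in the original basis\n\n# fitted values on the training sample are K @ kpls_alpha(K, y, i); the Krylov\n# dimension i acts as the regularisation parameter (early stopping)\n\\end{verbatim}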
\n\nIn the literature of ill-posed problems it is well known that without further conditions on the target function $f^\\ast$ the convergence rate of the conjugate gradient algorithm can be arbitrarily slow, see \\citet{Hanke}, chapter 3.2. One common a-priori assumption on the regression function $f^\\ast$ is \na source condition:\n\\begin{enumerate}[label={(S)}]\n\\item\n\\label{eq:source}\nThere exist $r \\geq 0$, $R>0$ and $u \\in \\mathcal L^2\\left(\\Prob^{X} \\right)$ such that $f^\\ast = (T T^\\ast)^{r} u$ and $\\|u\\|_2 \\leq R$.\n\\end{enumerate}\n\nIf $r \\geq 1\/2$, then the target function $f^\\ast \\in \\mathcal L^2\\left(\\Prob^{X} \\right)$ coincides almost surely with a function $f \\in \\mathcal H$ and we can write $f^\\ast = T f$, see \\citet{Cuc02}.\nWith this the kernel partial least squares estimator $f_{\\widehat{\\alpha}_i}$ estimates the correct target function, not only its best approximation in $\\mathcal{H}$. This case is known as the inner case. \n\nThe situation with $r<1\/2$ is referred to as the outer case. Under additional assumptions, e.g., the availability of additional unlabeled data, it is still possible that an estimator of $f^\\ast$ converges to the true target function in $\\mathcal L^2\\left(\\Prob^{X} \\right)$ norm with optimal rates (with respect to the number $n$ of labeled data points). See \\citet{Vit06} for a detailed description of this semi-supervised approach for kernel ridge regression in the independent and identically distributed case. We do not treat the case $r<1\/2$ in this work.\n\nA source conditions is often interpreted as an abstract smootheness condition. This can be seen as follows.\nLet $\\eta_1 \\geq \\eta_2 \\geq \\dots$ be the eigenvalues and $\\psi_1,\\psi_2,\\dots$ the corresponding eigenfunctions of the compact operator $S$. \nThen it is easy to see that the source condition \\ref{eq:source} is equivalent to $f = \\sum_{j=1}^\\infty b_j \\psi_j \\in \\mathcal L^2\\left(\\Prob^{X} \\right)$ with $b_j$ such that $\\sum_{j=1}^\\infty \\eta_j^{-2(r+1\/2)} b_j^2 < \\infty$. Hence, the higher $r$ is chosen the faster the sequence $\\{b_j\\}_{j=1}^\\infty$ must converge to zero. Therefore, the sets of functions for which source conditions hold are nested, i.e., the larger $r$ is the smaller the corresponding set will be. The set with $r=1\/2$ is the largest one and corresponds to a zero smoothness condition, i.e., $\\sum_{j=1}^\\infty \\eta_j^{-2} b_j^2 < \\infty$, which is equivalent to $f \\in \\mathcal H$. For more details we refer to \\citet{Dicker17}.\n\n\\section{Consistency of Kernel Partial Least Squares}\n\\label{sec:kpls.convergence}\nThe KCG algorithm as described by \\citet{Blan10b} is consistent when stopped early and convergence rates can be obtained when a source condition \\ref{eq:source} holds. Here we will proof the same property for KPLS. Early stopping in this context means that we stop the algorithm at some $a=a(n) \\leq n$ and consider the estimator $f_{\\widehat{\\alpha}_a}$ for $f^\\ast$.\n\nThe difference between KCG and KPLS is the norm which is optimized. 
The kernel conjugate gradient algorithm studied in \\citet{Blan10b} estimates the coefficients $\\alpha \\in {\\mathbb R}^n$ of $f_\\alpha$ via $\\widehat{\\alpha}_i^{CG} = \\arg\\min_{v \\in \\mathcal K_i(K_n,y)} \\langle y- K_n v, K_n(y- K_n v)\\rangle$.\nIt is easy to see that this optimization problem can be rewritten for the function $f_\\alpha$ as \n\\[\n\\min_{g \\in \\mathcal K_i(S_n,T_n^\\ast y )} \\|T_n^\\ast y-S_ng\\|_\\mathcal H^2=\t\\min_{g \\in \\mathcal K_i(S_n,T_n^\\ast y )} \\|T_n^\\ast\\left(y-T_ng\\right)\\|_\\mathcal H^2,\n\\] \ncompared to (\\ref{eq:func.representation}) for KPLS. Thus, KCG obtains the least squares approximation $g$ in the $\\mathcal H$-norm for the normal equation $T_n^\\ast y = T^\\ast _nT_n g$ and KPLS finds a function that minimizes the residual sum of squares. In both methods the solutions are restricted to functions $g \\in \\mathcal K_i(S_n,T_n^\\ast y )$.\n\nAn advantage of the kernel conjugate gradient estimator is that concentration inequalities can be established for both $T_n^\\ast y$ and $S_n$ and applied directly as the optimization function contains both quantities. The stopping index for the regularization can be chosen by a discrepancy principle as ${a^\\ast} = \\min\\{1\\leq i \\leq n: \\|S_n f_{\\widehat{\\alpha}_i^{CG}} - T_n^\\ast y\\| \\leq \\Lambda_n\\}$ with $\\Lambda_n$ being a threshold sequence that goes to zero as $n$ increases.\n\nOn the other hand, the function to be optimized for KPLS contains only $y$ and $T_n g = \\{g(X_1),\\dots,g(X_n)\\}^{ \\mathrm{\\scriptscriptstyle T} }$ for which statistical properties are not readily available. Thus, we need to find a way to apply the concentration inequalities for $T_n^\\ast y$ and $S_n$ to this slightly different problem. This leads to complications in the proof of consistency and a rather different and more technical stopping rule for choosing the optimal regularization parameter $a^\\ast$ is used, as can be seen in Theorem \\ref{th:kpls}. 
This stopping rule has its origin in \\citet{Hanke}.\\\\\n\n In the following $\\|\\cdot\\|_{\\cal{L}}$ denotes the operator norm and $\\|\\cdot\\|_{HS}$ is the Hilbert-Schmidt norm.\n\n\\begin{theorem}\n\\label{th:kpls}\nAssume that conditions \\ref{con:k1}, \\ref{con:k2}, \\ref{eq:source} hold with $r \\geq 3\/2$ and there are constants $C_\\delta(\\nu),C_\\epsilon(\\nu)>0$ and a sequence $\\{\\gamma_n\\}_{n \\in {\\mathbb N}} \\subset [0,\\infty)$, $\\gamma_n \\rightarrow 0$, such that we have for $\\nu \\in (0,1]$\n\\begin{align*}\n\t\\mathrm{P}\\left(\n\t\\|S_n-S\\|_{\\mathcal L} \\leq C_\\delta(\\nu) \\gamma_n\n\t\\right) &\\geq 1 - \\nu\/2,\\\\\n\t\\mathrm{P}\\left(\n\t\\|T_n^\\ast y - S f\\|_\\mathcal H \\leq C_\\epsilon(\\nu) \\gamma_n\n\t\\right) &\\geq 1- \\nu\/2.\n\\end{align*}\nDefine the stopping index $a^\\ast$ by\n\\begin{equation}\n\\label{eq:stopping}\na^\\ast = \\min\\left\\{1 \\leq a \\leq n: \\sum_{i=0}^a \\|S_n f_{\\widehat{\\alpha}_i} - T_n^\\ast y\\|^{-2}_\\mathcal H \\geq (C \\gamma_n)^{-2}\n\t\\right\\},\n\\end{equation}\nwith $C = C_\\epsilon(\\nu) + \\kappa^{r-1\/2}(r+1\/2) R \\{1 + C_\\delta(\\nu)\\}$.\n\nThen it holds with probability at least $1-\\nu$ that\n\\begin{align*}\n\t\\|f_{\\widehat{\\alpha}_{a^\\ast}} - f^\\ast\\|_2 &= O\\left\\{\\gamma_n^{2r\/(2r+1)}\\right\\},\\\\\n\t\\|f_{\\widehat{\\alpha}_{a^\\ast}} - f\\|_\\mathcal H &= O\\left\\{\\gamma_n^{(2r-1)\/(2r+1)}\\right\\},\n\\end{align*}\nwith $f^\\ast = T f$.\n\\end{theorem}\n\n\nIt can be shown that the stopping rule (\\ref{eq:stopping}) always determines a finite index, i.e., the set the minimum is taken over is not empty, see \\citet{Hanke}, chapter 4.3.\n\nThe theorem yields two convergence results, one in the $\\mathcal H$-norm and one in the $\\mathcal L^2\\left(\\Prob^{X} \\right)$-norm. It holds that $\\|v\\|_2 = \\|S^{1\/2}v\\|_\\mathcal H$. These are the endpoints of a continuum of norms $\\|v\\|_\\beta = \\|S^\\beta v\\|_\\mathcal H$, $\\beta \\in [0,1\/2]$ that were considered in \\citet{Nemirovskii86} for the derivation of convergence rates for KCG algorithms in a deterministic setting. \n\nThe convergence rate of the kernel partial least squares estimator depends crucially on the sequence $\\gamma_n$ and the source parameter $r$. If $\\gamma_n = O(n^{-1\/2})$, this yields the same convergence rate as Theorem 2.1 of \\citet{Blan10b} for kernel conjugate gradient or \\citet{Vito05} for kernel ridge regression with independent and identically distributed data. For stationary Gaussian time series we will derive concentration inequalities in the next section and obtain convergence rates depending on the source parameter $r$ and the range of dependence. Note that Theorem \\ref{th:kpls} is rather general and it can be applied to any kind of dependence structure, as long as the necessary concentration inequalities can be established.\n\nThe next theorem derives faster convergence rates under assumptions on the effective dimensionality of operator $S$, which is defined as $d_\\lambda = \\mathrm{tr}\\{(S+\\lambda)^{-1} S\\}$. The concept of effective dimensionality was introduced in \\citet{Zho02} to get sharp error bounds for general learning problems considered there. If $\\mathcal H$ is a finite dimensional space it was shown in \\citet{Zho02} that $d_\\lambda \\leq \\mathrm{dim}(\\mathcal H)$. For infinite dimensional spaces it describes the complexity of the interactions between data and reproducing kernel. 
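Although $d_\lambda$ is defined through the unknown operator $S$, a simple plug-in surrogate can be computed from the spectrum of the empirical kernel matrix. The sketch below merely illustrates the definition; substituting the eigenvalues of $K_n/n$ for those of $S$ is our own shortcut and plays no role in the theoretical analysis.
\begin{verbatim}
import numpy as np

def effective_dimension(K, lam):
    # Plug-in version of d_lambda = tr{(S + lambda)^{-1} S}, with the
    # eigenvalues of K_n / n used in place of those of S.
    mu = np.linalg.eigvalsh(K / K.shape[0])
    mu = np.clip(mu, 0.0, None)    # remove small negative round-off values
    return float(np.sum(mu / (mu + lam)))
\end{verbatim}
Evaluating this quantity on a decreasing grid of $\lambda$ gives a quick impression of whether the eigenvalue decay is closer to the polynomial or to the logarithmic regime considered below.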
\n\nIf $d_\\lambda = O(\\lambda^{-s})$ for some $s \\in (0,1]$, \\citet{Cap07} showed that the order optimal convergence rates $n^{-r\/(2r+s)}$ are attained for KRR with independent and identically distributed data.\n\nThe effective dimensionality clearly depends on the behaviour of eigenvalues of $S$. If these converge sufficiently fast to zero, nearly parametric rates of convergence can be achieved for reproducing kernel Hilbert space methods, see, e.g., \\citet{Dicker17}. In particular, the behaviour of $d_\\lambda$ around zero is of interest, since it determines how ill-conditioned the operator $(S+\\lambda)^{-1}$ becomes. In the following theorem we set $\\lambda = \\lambda_n$ for a sequence $\\{\\lambda_n\\}_{n\\in {\\mathbb N}} \\subset (0,\\infty)$ that converges to zero. \n\n\\begin{theorem}\n\\label{th:kpls2}\nAssume that conditions \\ref{con:k1}, \\ref{con:k2}, \\ref{eq:source} hold with $r \\geq 1\/2$ and that the effective dimensionality $d_\\lambda$ is known. Additionally, there are constants $C_\\delta(\\nu),C_\\epsilon(\\nu),C_\\psi\n>0$ and a sequence $\\{\\gamma_n\\}_{n \\in {\\mathbb N}} \\subset [0,\\infty)$, $\\gamma_n \\rightarrow 0$, such that for $\\nu \\in (0,1]$ and $n$ sufficiently large\n\\begin{align*}\n\t\\mathrm{P}\\left\\{\n\t\\|S_n-S\\|_{\\mathcal L} \\leq C_\\delta(\\nu) \\gamma_n\n\t\\right\\} &\\geq 1 - \\nu\/3,\\\\\n\t\\mathrm{P}\\left\\{\n\t\\|(S+\\lambda_n)^{-1\/2}(T_n^\\ast y - S f)\\|_\\mathcal H \\leq C_\\epsilon(\\nu) \\sqrt{d_{\\lambda_n}}\\gamma_n\n\t\\right\\} &\\geq 1- \\nu\/3,\\\\\n\t\\mathrm{P}\\left\\{\n\t\\|(S+\\lambda_n)^{1\/2}(S_n+\\lambda_n)^{-1\/2}\\|_{\\mathcal L} \\leq C_\\psi\n\t\\right\\} &\\geq 1 - \\nu\/3,\n\\end{align*}\nHere $\\{\\lambda_n\\}_{n \\in {\\mathbb N}} \\subset (0,\\infty)$ is a sequence converging to zero such that for $n$ large enough \n\\begin{equation}\n\t\\label{eq:lambda.inequ}\n\t\\gamma_n \\leq \\lambda_n^{r-1\/2}.\n\\end{equation}\n\nTake $\\zeta_n = \\max\\{\\sqrt{\\lambda_n d_{\\lambda_n}} \\gamma_n, \\lambda_n^{r+1\/2}\\}$ \nDefine the stopping index $a^\\ast$ by\n\\begin{equation}\n\\label{eq:stopping2}\na^\\ast = \\min\\left\\{1 \\leq a \\leq n: \\sum_{i=0}^a \\|S_n f_{\\widehat{\\alpha}_i} - T_n^\\ast y\\|^{-2}_\\mathcal H \\geq (C \\zeta_n)^{-2}\\right\\},\n\\end{equation}\nwith $C=4 R \\max\\{1, C_\\psi^2,(r-1\/2)\\kappa^{r-3\/2} C_\\delta(\\nu),2^{-1\/2}R^{-1} C_\\psi C_\\epsilon(\\nu)\\}$.\n\nThen it holds with probability at least $1-\\nu$ that\n\\begin{align*}\n\t\\|f_{\\widehat{\\alpha}_{a^\\ast}} - f^\\ast\\|_2 &= O\\left\\{\\lambda_n^{-1\/2}\\zeta_n\\right\\},\\\\\n\\|f_{\\widehat{\\alpha}_{a^\\ast}} - f\\|_\\mathcal H &= O\\left\\{\\lambda_n^{-1}\\zeta_n\\right\\},\n\\end{align*}\nwith $f^\\ast = T f$.\n\\end{theorem}\nThe condition (\\ref{eq:lambda.inequ}) holds trivially for $r=1\/2$ as $\\gamma_n$ converges to zero. For $r >1\/2$ the sequence $\\lambda_n$ must not converge to zero arbitrarily fast. \n\nIn its general form Theorem \\ref{th:kpls2} does not give immediate insight in the probabilistic convergence rates of the kernel partial least squares estimator. Therefore, we state two corollaries, where the function $d_\\lambda$ is specified. 
In both corollaries we explicitly state the choice of the sequence $\\lambda_n$ that yield the corresponding rates.\n\n\\begin{corollary}\n\\label{cor:pol.ed}\nAssume that there exists $s \\in (0,1]$ such that $\n\td_\\lambda = O(\\lambda^{-s})\n$ for $\\lambda \\rightarrow 0$.\nThen under conditions of Theorem \\ref{th:kpls2} with $\\lambda_n = \\gamma_n^{2\/(2r + s)}$ it holds with probability at least $1-\\nu$ that\n\\begin{align*}\n\t\\|f_{\\widehat{\\alpha}_{a^\\ast}} - f^\\ast\\|_2 &= O\\left\\{\\gamma_n^{2r\/(2r+s)}\\right\\}.\n\\end{align*}\n\\end{corollary}\nPolynomial decay of the effective dimensionality $d_\\lambda = \\mathrm{tr}\\{(S+\\lambda)^{-1}S\\}$ occurs if the eigenvalues of $S$ also decay polynomially fast, that is, $\\mu_i = c_s i^{-1\/s}$ for $s \\in (0,1]$, since in this case $d_\\lambda = \\sum\\limits_{i=1}^\\infty \\{1+\\lambda\/c_s i^{1\/s}\\}^{-1} = O(\\lambda^{-s})$. This holds, for example, for the Sobolev kernel $k(x,y) = \\min(x,y)$, $x,y \\in [0,1]$ and data that are uniformly distributed on $[0,1]$, see \\citet{Ras14}.\n\nIf $\\gamma_n = n^{-1\/2}$, then the KPLS estimator converges in the $\\mathcal L^2\\left(\\Prob^{X} \\right)$-norm with a rate of $n^{-r\/(2r+s)}$. This rate is shown to be optimal in \\citet{Cap07} for KRR with independent identically distributed data.\n\nNote that the rate obtained in Theorem \\ref{th:kpls} corresponds to $\\gamma_n^{-2r\/(2r+s)}$ with $s=1$, i.e., the worst case rate with respect to the parameter $s \\in (0,1]$.\n\nIn the next corollary to Theorem \\ref{th:kpls2} we assume that the effective dimensionality behaves in a logarithmic fashion.\n\n\\begin{corollary}\n\\label{cor:log.ed}\nLet $d_\\lambda = O\\{\\log(1+a\/\\lambda)\\}$ for $\\lambda \\rightarrow 0$ and $a>0$. Then under the conditions of Theorem \\ref{th:kpls2} with $\\lambda_n = \\gamma_n^2 \\log\\{ \\gamma_n^{-2}\\}$ and $r=1\/2$ it holds with probability at least $1-\\nu$ that\n\\begin{align*}\n\t\\|f_{\\widehat{\\alpha}_{a^\\ast}} - f^\\ast\\|_2 &= O\\left\\{\\gamma_n \\log(1\/2 \\gamma_n^{-2})\\right\\}.\n\\end{align*}\n\\end{corollary}\nThe effective dimensionality takes the special form considered in this corollary, for example, when the eigenvalues of $S$ decay exponentially fast. This holds, for example, if the data are Gaussian and the Gaussian kernel is used, see Section \\ref{sec:gauss.kern}.\nIf $\\gamma_n =O(n^{-1\/2})$, then the convergence rate is of order $O\\{n^{-1}\\log(n)\\}$, which are nearly parametric. It is noteworthy that the source condition only impacts the choice of the sequence $\\lambda_n$, not the convergence rates of the estimator in the $\\mathcal L^2\\left(\\Prob^{X} \\right)$-norm. Therefore, we stated the corollary for $r=1\/2$, which is a minimal smoothness condition on $f^\\ast$, i.e., that $f^\\ast = T f$ almost surely for an $f \\in \\mathcal H$.\n\nThe rates obtained in Corollaries \\ref{cor:pol.ed} and \\ref{cor:log.ed} were derived in \\citet{Dicker17} for kernel ridge regression and kernel principal component regression under the assumption of independent and identically distributed data.\n\n\\section{Concentration Inequalities for Gaussian Time Series}\n\\label{sec:concentration}\nCrucial assumptions of Theorem \\ref{th:kpls} and \\ref{th:kpls2} are the concentration inequalities\nfor $S_n$ and $T_n^\\ast y$ and convergence of the sequence $\\{\\gamma_n\\}_{n \\in {\\mathbb N}}$. Here we establish such inequalities in a Gaussian setting for stationary time series. 
At the end of this section we will state explicit convergence rates for $f_{\\widehat{\\alpha}_{a^\\ast}}$ that depend not only on the source parameter $r \\geq 1\/2$ and the effective dimensionality $d_\\lambda$, but also on the persistence of the dependence in the data.\n\nThe Gaussian setting is summarized in the following assumptions\n\\begin{enumerate}[label={(D\\arabic*})]\n\\item\n\\label{D1}\n$(X_h,X_0)^{ \\mathrm{\\scriptscriptstyle T} } \\sim \\mathcal{N}_{2d}(0,\\Sigma_h)$, $h=1,\\dots,n-1$, with \n\\[\n \\Sigma_h = \\left[\n\\begin{matrix}\n\t\\tau_0 & \\tau_h\\\\\n\t\\tau_h & \\tau_0\n\\end{matrix}\n\\right] \\otimes \\Sigma.\n\\]\nHere $\\Sigma \\in {\\mathbb R}^{d \\times d}$ and $V=[\\tau_{|i-j|}]_{i,j=1}^{n} \\in {\\mathbb R}^{n \\times n}$ are positive definite, symmetric matrices and $\\otimes$ denotes the Kronecker product between matrices. Furthermore $X_0 \\sim \\mathcal{N}_d(0,\\tau_0 \\Sigma)$.\n\\item\n\\label{D2}\nFor the autocorrelation function $\\rho_h = \\tau^{-1}_0\\tau_h$ there exists a $q>0$ such that $|\\rho_h| \\leq (h+1)^{-q}$ for $h =0,\\dots,n-1$.\n\\end{enumerate}\nCondition \\ref{D1} is a separability condition for the covariance matrices $\\Sigma_h$, $h = 0,\\dots,n-1$. Due to \\ref{D1} the effects (on the covariance) over time and between the different variables can be treated separately. Under condition \\ref{D2} it is easy to see that from $q>1$ follows the absolute summability of the autocorrelation function $\\rho$ and thus $\\{X_t\\}_{t \\in {\\mathbb Z}}$ is a short memory process. Stationary short memory processes keep many of the properties of independent and identically distributed data, see, e.g., \\citet{bBrock}.\n\nOn the other hand $q \\in (0,1]$ yields a long memory process, see, e.g., Definition 3.1.2 in \\citet{Giraitis}. Examples of long memory processes are the fractional Gaussian noise with an autocorrelation function that behaves like $(h+1)^{-2(1-H)}$, with $H \\in [0,1)$ being the Hurst coefficient. Stationary long memory processes exhibit dependencies between observations that are more persistent and many statistical results that hold for independent and identically distributed data turn out to be false, see \\citet{Samoro} for details.\n\nThe next theorem gives concentration inequalities for both estimators $S_n$ and $T_n^\\ast y$ in a Gaussian setting with convergence rates depending on the parameter $q>0$. These inequalities are the ones needed in Theorem \\ref{th:kpls} and Theorem \\ref{th:kpls2}. 
Recall that $d_\\lambda = \\mathrm{tr}\\{(S+\\lambda)^{-1}S\\}$ denotes the effective dimensionality of $S$.\n\n\\begin{theorem}\n\\label{th:conc.equality}\n(i) Define $\\mathrm{d}\\mu_h(x,y) = \\mathrm{d}\\mathrm{P}^{X_h,X_0}(x,y) - \\mathrm{d} \\mathrm{P}^{X_0}(x)\\mathrm{d} \\mathrm{P}^{X_0}(y)$.\nUnder Assumptions \\ref{con:k1} and \\ref{con:k2} it holds for $\\nu \\in (0,1]$ with probability at least $1-\\nu$ that\n\\begin{align*}\n\\|S_n - S\\|^2_{\\mathcal L} &\\leq\n\\frac{2 \\nu^{-1}}{n^2}\\sum\\limits_{h=1}^{n-1} (n-h) \\int\\limits_{{\\mathbb R}^{2d}} k^2(x,y) \\mathrm{d}\\mu_h(x,y) + \\frac{\\nu^{-1}}{n} \\left\\{\n\t\\mathrm{E} k^2(X_0,X_0) - \\|S\\|^2_{\\mathrm{HS}}\n\\right\\},\\\\\n\\|T_n^\\ast y - S f\\|^2_\\mathcal H &\\leq\n\\frac{2\\nu^{-1}}{n^2}\\sum\\limits_{h=1}^{n-1} (n-h) \\int\\limits_{{\\mathbb R}^{2d}} k(x,y)f(x)f(y)\\mathrm{d}\\mu_h(x,y)\\\\\n&+ \\frac{\\nu^{-1}}{n} \\left[\n\t\\mathrm{E} \\left\\{k(X_0,X_0) f^2(X_0)\\right\\} - \\| S f\\|^2_\\mathcal H + \\sigma^2 \\mathrm{E}\\{ k(X_0,X_0)\\}\n\\right].\n\\end{align*}\n\n\n(ii) Assume that additionally to \\ref{con:k1}, \\ref{con:k2} also \\ref{D1}, \\ref{D2} for $q >0$ are fulfilled. Denote $M=\\sup_{x \\in {\\mathbb R}^d} |f(x)|$.\n\nThen there exists a constant $C(q)>0$ such that \n\\begin{align*}\n\\|S_n - S\\|_\\mathcal L\n\t&\\leq \n\t \\nu^{-1\/2} \\{\\gamma_n^2(q) \\kappa C_\\gamma\t\n\t\t+ n^{-1}(\\kappa^2-\\|S\\|_\\mathrm{HS}^2)\\}^{1\/2},\\\\\n \\|T_n^\\ast y - S f\\|_\\mathcal H\n\t&\\leq \n\t\\nu^{-1\/2} \\left[\\gamma_n^2(q) M C_\\gamma \n\t\t+ n^{-1}\\left\\{\n\t\t\\kappa (M + \\sigma^2) - \\| S f\\|^2_\\mathcal H\t\\right\\}\\right]^{1\/2}\n\t\t,\n\\end{align*}\nfor $C_\\gamma = C(q)\\{(2\\pi)^d \\mathrm{det}(\\Sigma)\\}^{-1\/2} \\kappa d^{1\/2}(1-4^{-q})^{-1\/4(d+2)}$. The function $\\gamma_n(q)$, $q>0$, is defined as\n\\[\n\t\\gamma_n(q) = \\left\\{\n\t\t\t\t\\begin{array}{clc}\n\t\t\t n^{-1\/2} &,& q>1\\\\\n\t\tn^{-1\/2} \\log(1\/2n)\t&,& q=1\\\\\n\t\t\tn^{-q\/2} \n\t\t\t&,& q \\in (0,1).\n\t\t\\end{array}\n\t\\right.\n\\]\n\n(iii) Let \\ref{con:k1}, \\ref{con:k2} and \\ref{eq:source} hold. Let $\\gamma_n(q)$ be the function as defined in (ii). Then there exists a constant $\\tilde{C}_\\epsilon>0$ such that it holds with probability at least $1-\\nu$ for $\\lambda>0$ that\n\\[\n \\|(S+\\lambda)^{-1\/2}(T_n^\\ast y - S_n f\\|_\\mathcal H \\leq \\nu^{-1\/2} \\tilde{C}_\\epsilon \\sigma \\sqrt{d_\\lambda}\\gamma_n(q).\n\\] \n\n(iv) Let \\ref{con:k1}, \\ref{con:k2}, \\ref{eq:source}, \\ref{D1} and \\ref{D2} hold. Let $\\lambda_n^{-1\/2}d_{\\lambda_n}^{1\/2} \\gamma_n(q) \\rightarrow 0$ for a sequence $\\lambda_n \\rightarrow 0$ and $\\gamma_n(q)$ the function defined in (ii). Then there exists an $n_0 = n_0(\\nu,q) \\in {\\mathbb N}$ such that with probability at least $1-\\nu$ we have for all $n \\geq n_0$\n\\[\\|(S+\\lambda_n)^{1\/2}(S_n+\\lambda_n)^{-1\/2}\\|_\\mathcal L \\leq \\sqrt{2}.\n\\]\n\\end{theorem}\n\nThe first part of the theorem is general and can be used to derive concentration inequalities not only in the Gaussian setting and is of interest in itself. The convergence rate is controlled by the sums appearing on the right hand side.\n If these sums are of $O(n)$ then the mean squared error of both $S_n$ and $T_n^\\ast y$ will converge to zero with a rate of $n^{-1}$. 
On the other hand, if the sums are of order $O(n^{2-q})$ for some $q\\in (0,1)$, the mean squared errors will converge with the reduced rate $n^{-q}$.\n\nThe second part derives explicit concentration inequalities in the Gaussian setting described by \\ref{D1} and \\ref{D2} with rates depending on the range of the dependence measured by $q>0$. These inequalities appear in Theorem \\ref{th:kpls}.\n\nParts (iii) and (iv) give the additional probabilistic bounds needed to apply Theorem \\ref{th:kpls2}. The condition $\\lambda_n^{-1\/2} d^{1\/2}_{\\lambda_n} \\gamma_n(q) \\rightarrow 0$ in Theorem \\ref{th:conc.equality} (iv) is fulfilled in the settings of Corollary \\ref{cor:pol.ed} and Corollary \\ref{cor:log.ed}.\n\n Theorem \\ref{th:kpls}, Corollary \\ref{cor:pol.ed}, Corollary \\ref{cor:log.ed} and Theorem \\ref{th:conc.equality} together imply\n\\begin{corollary}\n\\label{cor:convergence}\nLet the conditions of Theorem \\ref{th:kpls2} and \\ref{D1}, \\ref{D2} hold.\n\n(i) Assume that there exists $s\\in (0,1]$ such that $d_\\lambda = O(\\lambda^{-s})$ for $\\lambda \\rightarrow 0$. Then with probability at least $1-\\nu$\n\t\\[\n\t\t\\|f_{\\widehat{\\alpha}_{a^\\ast}} - f^\\ast\\|_2 = \\left\\{\n\t\t\t\\begin{array}{cc}\n\t\t\t\tO\\{n^{-r\/(2r+s)}\\}, & q>1,\\\\\n\t\t\t\tO\\{n^{-q r\/(2r+s)}\\}, & q \\in (0,1).\n\t\t\t\\end{array}\n\t\t\t\\right.\n\t\\]\nIf instead of conditions of Theorem \\ref{th:kpls2}, conditions of Theorem \\ref{th:kpls} are assumed, then the convergence rates above have $s=1$. \n\t\n(ii) Assume that there exists $a>0$ such that $d_\\lambda = O\\{\\log(1+a\/\\lambda)\\}$ for $\\lambda \\rightarrow 0$ and $r=1\/2$. Then with probability at least $1-\\nu$\n\t\\[\n\t\t\\|f_{\\widehat{\\alpha}_{a^\\ast}} - f^\\ast\\|_2 = \\left\\{\n\t\t\t\\begin{array}{cc}\n\t\t\t\tO\\{n^{-1\/2}\\log(1\/2 n)\\}, & q>1,\\\\\n\t\t\t\tO\\{n^{-q\/2} \\log(1\/2 n^q)\\}, & q \\in (0,1).\n\t\t\t\\end{array}\n\t\t\t\\right.\n\t\\]\n\\end{corollary}\nHence, for $q>1$ the kernel partial least squares algorithm achieves the same rates as if the data were independent and identically distributed. For $q \\in (0,1)$ the convergence rates become substantially slower, highlighting that dependence structures that persist over a long time can influence the convergence rates of the algorithm.\n\n\\section{Source condition and effective dimensionality for Gaussian kernels}\n\\label{sec:gauss.kern}\nThe source condition \\ref{eq:source} and the effective dimensionality $d_\\lambda$ are of great importance in the convergence rates derived in previous sections. Here we investigate these conditions for the reproducing kernel Hilbert space corresponding to the Gaussian kernel $k(x,y) = \\exp(-l\\|x-y\\|^2)$, $x,y \\in {\\mathbb R}^d$, $l>0$, for $d=1$. Hence, the space $\\mathcal H$ is the space of all analytic functions that decay exponentially fast, see \\citet{Steinwart05anexplicit}.\n\nWe also impose the normality conditions \\ref{D1} and \\ref{D2} on $\\{X_t\\}_{t\\in {\\mathbb Z}}$, where now $\\sigma^2_x =\\Sigma \\in {\\mathbb R}$ due to $d=1$. The following proposition derives a more explicit representation for $f \\in \\mathcal H$.\n\n\\begin{proposition}\n\\label{prop:source}\nAssume that \\ref{con:k1},\\ref{con:k2} and \\ref{eq:source} hold for $r \\geq 1\/2$. Let $d=1$, $X_0 \\sim \\mathcal{N}(0,\\sigma^2_x), \\sigma^2_x > 0$ and consider the Gaussian kernel $k(x,y) = \\exp\\{-l (x-y)^2\\}$ for $x,y \\in {\\mathbb R}$, $l>0$. 
Then $f$ can be expressed for $\\mu = r - 1\/2 \\in {\\mathbb N}$\n via $f(x) = \\sum_{i=1}^\\infty c_i L_\\mu(x,z_i)$ for fixed $\\{z_i\\}_{i=1}^\\infty,\\{c_i\\}_{i=1}^\\infty \\subset {\\mathbb R}$ such that $\\sum_{i,j=1}^\\infty c_i c_j k(z_i,z_j) \\leq R^2$, $R>0$. Here we have for $x,z \\in {\\mathbb R}$ \n\t\\begin{align*}\n\tL_\\mu(x,z) &= \\exp\n\t\t\\left[\n\t\t\t-1\/2 \\left\\{\n\t\t\t\t\\frac{\\det(\\Lambda)(x^2+z^2)-2 l^{\\mu+1} x z}{\\det(\\Lambda_{1:\\mu})}\n\t\t\t\\right\\}\n\t\t\\right],\n\t\t\\end{align*}\n\twith $\\Lambda \\in {\\mathbb R}^{(\\mu+1)\\times (\\mu+1)}$ being a tridiagonal matrix with elements \n\t\\[\n\t\t\\Lambda_{i,j} = \\left\\{\n\t\\begin{array}{cll}\n\t\t\\sigma^{-2}_x + 2 l &, & i=j<\\mu+1\\\\\n\t\tl &, & i=j=\\mu+1\\\\\n\t\t-l &, & |i-j| = 1\\\\\n\t\t0&, & else\n\t\\end{array}\\right.\n\t\\]\n\tfor $i,j=1,\\dots,\\mu+1$ and $\\Lambda_{1:\\mu}$ is the $\\mu \\times \\mu$-dimensional sub-matrix of $\\Lambda$ including the fist $\\mu$ columns and rows.\n\t\n\tConversely any function $f^\\ast = T f$ with $f$ of the above form fulfills a source condition $\\ref{eq:source}$ with $r = \\mu + 1\/2$, $\\mu \\in {\\mathbb N}$.\n\\end{proposition}\nHence if we fix an $r \\geq 1\/2$ with $r-1\/2 \\in {\\mathbb N}$ this theorem gives us a way to construct functions $f \\in \\mathcal H$ with $f^\\ast = Tf$ that fulfill \\ref{eq:source}.\n\nThe next proposition derives the effective dimensionality $d_\\lambda$ in this setting:\n\\begin{proposition}\n\\label{prop:ed}\nLet $d=1$, $X_0 \\sim \\mathcal{N}(0,\\sigma^2_x)$ for some $\\sigma^2_x>0$ and consider the Gaussian kernel $k(x,y) = \\exp\\{-l(x-y)^2\\}$, $x,y \\in {\\mathbb R}$, $l>0$. \n\nThen there is a constant $D>0$ such that it holds for any $\\lambda \\in (0,1]$\n\\[\n\td_\\lambda = \\mathrm{tr}\\{(S+\\lambda)^{-1} S\\} \\leq \tD\\log(1+a\/\\lambda),\n\\]\nwith $a =\\sqrt{2}(1+\\beta+\\sqrt{1+\\beta})^{-1\/2} $, $\\beta = 4 l \\sigma^2_x$.\n\\end{proposition}\n\nWith the latter result Corollary \\ref{cor:log.ed} is applicable and we expect convergence rates for the kernel partial least squares algorithm of order $O\\{\\gamma_n \\log(1\/2 \\gamma_n^{-2})\\}$ for a sequence $\\{\\gamma_n\\}_n$ as in Theorem \\ref{th:kpls2}.\n\n\n\\section{Simulations}\n\\label{sec:simulations}\nTo validate the theoretical results of the previous sections we conducted a simulation study. The reproducing kernel Hilbert space is chosen to correspond to the Gaussian kernel $k(x,y) = \\exp(-l\\|x-y\\|^2)$, $x,y \\in {\\mathbb R}^d$, $l=2$, for $d=1$.\n\nThe source parameter is taken $r=4.5$ and we consider the function \n\\[\n\tf(x) = 4.37^{-1}\\{3 {L}_4(x,-4) - 2 {L}_4(x,3)+ 1.5 {L}_4(x,9)\\}, ~~ x \\in {\\mathbb R}.\n\t\\] \n\tThe normalization constant is chosen such that $f$ takes values in $[-0.35,0.65]$ and $L_4$ is the exponential function given in Proposition \\ref{prop:source}. The function $f$ is shown in Figure \\ref{fig:func}.\n\\begin{figure}\n \\begin{center}\n \t\\includegraphics[width=.5\\linewidth]{f_plot}\n \t\\caption{\\label{fig:func}\n\t\tThe function $f$ evaluated on $[-7.5,7.5]$ (black) and one realisation of the noisy data $y = f(x) + \\varepsilon$ (grey). \t\t\n \t\t }\n \\end{center}\n\\end{figure}\n\nIn condition \\ref{D1} we set $\\sigma^2_x =\\Sigma = 4$. For the matrix $V^2 = [\\tau_{|i-j|}]_{i,j=1}^n \\in {\\mathbb R}^{n \\times n}$ we choose three different structures for $n\\in\\{200,400,1000\\}$. In the first setting $\\tau_h = \\mathbb{I}(h=0)$, which corresponds to independent data. 
The second setting with $\\tau_h = 0.9^{-h}$ implies an autoregressive process of order one. Finally, the third setting with $\\tau_h = (1+h)^{-q}$, $q=1\/4$, $h =0,\\dots,n-1$ leads to the long range dependent case. \n\nIn a Monte Carlo simulation with $M=1000$ repetitions the time series $\\{X_t^{(j)}\\}_{t=1}^n$ are generated via $X^{(j)} = V N^{(j)}$ with $N^{(j)} \\sim \\mathcal{N}_n(0,\\sigma^2 I_n)$, $j=1,\\dots,M$, where $I_n$ is the $n \\times n$-dimensional identity matrix. \n\nThe residuals $\\varepsilon_1^{(j)},\\dots,\\varepsilon_n^{(j)}$ are generated as independent standard normally distributed random variables and independent of $\\{X_t^{(j)}\\}_{t=1}^n$ . The response is defined as $y_t^{(j)} = f(X_t^{(j)}) + \\eta\\, \\varepsilon_t^{(j)}$, $t=1,\\dots,n$, $j=1,\\dots,M$, with $\\eta = 1\/16$.\n\nThe kernel partial least squares and kernel conjugate gradient algorithms are run for each sample $\\{(X_t^{(j)},y_t^{(j)})^{ \\mathrm{\\scriptscriptstyle T} }\\}_{t=1}^n$, $j=1,\\dots,M$, with a maximum of $40$ iteration steps. We denote the estimated coefficients with $\\widehat{\\alpha}_1^{(j,m)},\\dots,\\widehat{\\alpha}_{40}^{(j,m)}$, $j=1,\\dots,M$, with $m=CG$ meaning that the kernel conjugate gradient algorithm was employed and $m=PLS$ that kernel partial least squares was used to estimate $\\alpha_1,\\dots,\\alpha_n$. \n\nThe squared error in the $\\mathcal L^2\\left(\\Prob^{X} \\right)$-norm is calculated via\n\\[\n\t\\widehat{e}_{n,\\tau}^{(j,m)} = \\min\\limits_{a=1,\\dots,40} \\left[ \\frac{1}{\\sqrt{2\\pi\\sigma_x^2}}\\int\\limits_{-\\infty}^\\infty \\left\\{f_{\\widehat{\\alpha}_a^{(j,m)}}(x) - f(x) \\right\\}^2 \\exp\n\t\\left(-\\frac{1}{2\\sigma_x^2} x^2 \\right) \\mathrm{d} x \\right],\n\\]\nfor $j=1,\\dots,M$, $n = 200,400,\\dots,1000$ and $m \\in \\{CG, PLS\\}$.\n\nThe results of the Monte-Carlo simulations are depicted in the boxplots of Figure \\ref{fig:box}.\n\\begin{figure}\n \\begin{center}\n\t \t\\begin{minipage}{.33\\textwidth}\n\t \t\t\\centering\n\t \t\t\\includegraphics[width=.98\\linewidth]{MSE_box_1}\n\t\t\\end{minipage}%\n\t\t\\begin{minipage}{.33\\textwidth}\n\t \t\t\\centering\n\t \t\t\\includegraphics[width=.98\\linewidth]{MSE_box_2}\n\t\t\\end{minipage}%\n\t\t\\begin{minipage}{.33\\textwidth}\n\t \t\t\\centering\n\t \t\t\\includegraphics[width=.98\\linewidth]{MSE_box_3}\n\t\t\\end{minipage}\n\t\t\\caption{\\label{fig:box}\n\t\t\tBoxplots of the $\\mathcal L^2\\left(\\Prob^{X} \\right)$-errors $\\{\\widehat{e}_{n,\\tau}^{(j,m)}\\}_{j=1}^{M}$ of kernel partial least squares (left side of each panel) and kernel conjugate gradient (right side of each panel) for different autocovariance functions $\\tau$ and $n = 200,400,1000$. On the left is $\\tau_h = \\mathbb{I}(h=0)$, in the middle $\\tau_h = 0.9^{-h}$ and on \n\t\t\tthe right $\\tau_h = (h+1)^{-1\/4}$.\n\t\t}\n\t\\end{center}\n\\end{figure}\nFor kernel partial least squares (left panels) one observes that independent and autoregressive dependent data have roughly the same convergence rates, although the latter have a somewhat higher error. In contrast, the long range dependent data show slower convergence with the larger interquartile range, supporting the theoretical results of Corollary \\ref{cor:convergence}.\n\nThe $\\mathcal L^2\\left(\\Prob^{X} \\right)$-error of kernel conjugate gradient estimators is generally slightly higher than that of kernel partial least squares. 
Nonetheless, both of them have a similar behaviour.\n\nWe also investigated the the stopping indices $a=1,\\dots,40$ for which the errors $\\widehat{e}_{n,\\tau}^{(j,m)}$ were attained. These are shown in Figure \\ref{fig:a} for independent and identically distributed data.\n\\begin{figure}\n \\begin{center}\n \t\\begin{minipage}{.45\\textwidth}\n\t\\center \t\n \t\\includegraphics[width=.98\\linewidth]{MSE_sim_a_200}\n \t\\end{minipage}\n \t\\begin{minipage}{.45\\textwidth}\n\t\\center \t\n \t\\includegraphics[width=.98\\linewidth]{MSE_sim_a_1000}\n \t\\end{minipage}\n \t\\caption{\\label{fig:a}\n \t\tBoxplots of the optimal indices $a\\in\\{1,\\dots,40\\}$ for which the $\\mathcal L^2\\left(\\Prob^{X} \\right)$-errors $\\{\\widehat{e}_{n,\\tau}^{(j,m)}\\}_{j=1}^{M}$ were attained. Kernel partial least squares is on the left of each panel and kernel conjugate gradient on the right. On the left is $n = 200$, on the right $n=1000$. The data were assumed to be independent and identically distributed.\n \t}\n \\end{center}\n\\end{figure}\nIt can be seen that the optimal indices for both algorithms have a rather similar behaviour. Kernel conjugate gradient stops slightly later, but overall the differences seem negligible.\n\nFigure \\ref{fig:lines} shows the mean (over $j$) of the estimated $\\mathcal L^2\\left(\\Prob^{X} \\right)$ errors $\\{\\widehat{e}_{n,\\tau}^{(j,m)}\\}_{j=1}^{M}$ for different $n$, $\\tau$ and $m \\in \\{CG,PLS\\}$. The errors were multiplied by $n\/\\log(n)$ to illustrate the convergence rates. According to Proposition \\ref{prop:ed} and Corollary \\ref{cor:convergence} (ii) we expect the rates for the independent and autoregressive cases to be $n^{-1}\\log(n)$, which is verified by the fact that the solid black and grey lines are roughly constant.\nFor the long range dependent case we expect worse convergence rates which are also illustrated by the divergence of the dashed black line.\n\\begin{figure}\n \\begin{center}\n \t\\begin{minipage}{.45\\textwidth}\n\t\\center \t\n \t\\includegraphics[width=.98\\linewidth]{MSE_sim_pls_mean}\n \t\\end{minipage}\n \t\\begin{minipage}{.45\\textwidth}\n\t\\center \t\n \t\\includegraphics[width=.98\\linewidth]{MSE_sim_cg_mean}\n \t\\end{minipage}\n \t\\caption{\\label{fig:lines}\n \t\tMean of the $\\mathcal L^2\\left(\\Prob^{X} \\right)$-errors $\\{\\widehat{e}_{n,\\tau}^{(j,m)}\\}_{j=1}^{M}$ of kernel partial least squares (left) and kernel conjugate gradient (right) for $n = 200,400,\\dots,1000$ multiplied by $n\/\\log(n)$. The solid black line is for $\\tau_h = \\mathbb{I}(h=0)$, the grey line for $\\tau_h = 0.9^{-h}$ and the dashed black line for $\\tau_h = (h+1)^{-1\/4}$.\n \t}\n \\end{center}\n\\end{figure}\n\n\\section{Application to Molecular Dynamics Simulations}\n\\label{sec:protein}\nThe collective motions of protein atoms are responsible for its biological function and molecular dynamics simulations is a popular tool to explore this \\citep{Henz07}. \n\nTypically, the $p \\in {\\mathbb N}$ backbone atoms of a protein are considered for the analysis with the relevant dynamics happening in time frames of nanoseconds. Although the dynamics are available exactly, the high dimensionality of the data and large number of observations can be cumbersome for regression analysis, e.g., due to the high collinearity in the columns of the covariates matrix. 
Many function-dynamic relationships are also non-linear \\citep{Hub09}.\nA further complication is the fact that the motions of different backbone atoms are highly correlated, making additive non-parametric models for the target function $f^\\ast$ less suitable. \n\nWe consider T4 Lysozyme (T4L) of the bacteriophage T4, a protein responsible for the hydrolisis of 1,4-beta-linkages in peptidoglycans and chitodextrins from bacterial cell walls.\nThe number of available observations is $n=4601$ and T4L consists of $p=486$ backbone atoms.\n\nDenote with $A_{t,i} \\in {\\mathbb R}^3$ the $i$-th atom, $i=1,\\dots,p$, at time $t=1,\\dots,n$ and $c_i \\in {\\mathbb R}^3$ the $i$-th atom in the (apo) crystal structure of T4L. A usual representation of the protein in a regression setting is the Cartesian one, i.e., we take as the covariate $X_t = (A_{1,t}^{ \\mathrm{\\scriptscriptstyle T} },\\dots,A^{ \\mathrm{\\scriptscriptstyle T} }_{p,t})^{ \\mathrm{\\scriptscriptstyle T} }$, $t=1,\\dots,n$, see \\citet{Bro83}.\nThe functional quantity to predict is the root mean square deviation of the protein configuration $X_t$ at time $t = 1,\\dots,n$ from the (apo) crystal structure $C = (c_1^{ \\mathrm{\\scriptscriptstyle T} },\\dots,c_d^{ \\mathrm{\\scriptscriptstyle T} })^{ \\mathrm{\\scriptscriptstyle T} }$, i.e.,\n\\[\n\ty_t = \\left\\{\n\t\tp^{-1} \\sum\\limits_{i=1}^{p} \\|X_{i,t}-C_i\\|^2\n\t\\right\\}^{1\/2}.\n\\] \nThis nonlinear function was previously considered in \\citet{Hub09}, where it was established that linear models are insufficient for the prediction.\n\nFigure \\ref{fig:sample.series} shows the time series corresponding to $X_{t,1}$ (i.e., the first coordinate of the first atom of T4L) on the left and the functional quantity $y_t$ on the right. These plots reveal certain persistent dependence over time.\n\n\\begin{figure}\n \\begin{center}\n\t \t\\begin{minipage}{.48\\textwidth}\n\t \t\t\\centering\n\t \t\t\\includegraphics[width=.98\\linewidth]{T4L_X1}\n\t\t\\end{minipage}%\n\t\t\\begin{minipage}{.48\\textwidth}\n\t \t\t\\centering\n\t \t\t\\includegraphics[width=.98\\linewidth]{T4L_Y}\n\t\t\\end{minipage}\n\t\t\\caption{\\label{fig:sample.series}\n\t\t\tTime series of $X_{t,1}$, i.e., the first coordinate of the first atom T4L consists of (left) and the root mean squared deviation $y_t$ between the protein configuration at time $t$ and the (apo) crystal structure.\n\t\t}\n\t\\end{center}\n\\end{figure}\n\n\nFitting autoregressive moving average models of order $(3,2)$ ($ARMA(3,2)$) to $y_t$ and $ARMA(5,2)$ to $X_{t,1}$ shows that the smallest root of their respective characteristic polynomial is close to one ($1.009$ for $y_t$ and $1.003$ for $X_{t,1}$), highlighting that we are on the border of stationarity, see, e.g., \\citet{bBrock}.\n\nFigure \\ref{fig:acf} depicts the autocorrelation functions of $X_{t,1}$ and $y_t$, the theoretical autocorrelation function of the corresponding autoregressive moving average process and $\\rho_h \\propto (h+1)^{-q}$ for $q=0.134$ for $X_{t,1}$ and $q=0.066$ for $y_t$. 
The latter, as highlighted in Section \\ref{sec:concentration}, is an autocorrelation function for a stationary long range dependent process.\n\\begin{figure}\n \\begin{center}\n\t \t\\begin{minipage}{.48\\textwidth}\n\t \t\t\\centering\n\t \t\t\\includegraphics[width=.98\\linewidth]{T4L_acf_X}\n\t\t\\end{minipage}%\n\t\t\\begin{minipage}{.48\\textwidth}\n\t \t\t\\centering\n\t \t\t\\includegraphics[width=.98\\linewidth]{T4L_acf_y}\n\t\t\\end{minipage}\n\t\t\\caption{\\label{fig:acf}\n\t\t\tAutocorrelation \n\t\t\tplots of $X_{t,1}$ (left) and $y_t$ (right). The estimated autocorrelation function is grey, the theoretical one of a fitted $ARMA(3,2)$ process is solid black and $\\rho_h \\propto (h+1)^{-q}$ for a suitable choice of $q>0$ is dashed black.\n\t\t}\n\t\\end{center}\n\\end{figure}\nThese plots suggest that $X_{t,1}$ and $y_t$ follow some long-range stationary process.\n\nWe apply kernel partial least squares to this data set with the Gaussian kernel $k(x,y) = \\exp(-l \\|x-y\\|^2)$, $x,y \\in {\\mathbb R}^{3p}$, $l>0$. The function $f$ we aim to estimate is a distance between protein configurations, so using a distance based kernel seems reasonable. Moreover, we also investigated the impact of other bounded kernels such as triangular and Epanechnikov and obtained similar results. The first $25\\%$ of the data form a training set to calculate the kernel partial least squares estimator and the remaining data are used for testing.\n\nThe parameter $l>0$ is calculated via cross validation on the training set. \nIn our evaluation we obtained $l = 0.0189$.\n\nFigure \\ref{fig:prot} compares the observed response in the test set with the prediction on the test set obtained by kernel partial least squares, kernel principal component regression and linear partial least squares.\n\\begin{figure}[t!]\n \\begin{center}\n\t \t\\begin{minipage}{.48\\textwidth}\n\t \t\t\\centering\n\t \t\t\\includegraphics[width=.98\\linewidth]{T4L_pred_cor}\n\t\t\\end{minipage}%\n\t\t\\begin{minipage}{.48\\textwidth}\n\t \t\t\\centering\n\t \t\t\\includegraphics[width=.98\\linewidth]{T4L_pred_rss}\n\t\t\\end{minipage}\n\t\t\\caption{\\label{fig:prot}\n\t\t\tCorrelation (left) and residual sum of squares (right) between predicted values and the observed response on the test set depending on the number of used components for kernel partial least squares (solid black), partial least squares (grey) and kernel principal component regression (dashed black).\n\t\t}\n\t\\end{center}\n\\end{figure}\nApparently, kernel partial least squares show the best performance and the kernel principal components algorithm is able to achieve comparable prediction with more components only. Obviously, linear partial least squares can not cope with the non-linearity of the problem. \n\nThis application highlights that kernel partial least squares still delivers a robust prediction even when the dependence in the data is more persistent, if enough observations are available.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzisfq b/data_all_eng_slimpj/shuffled/split2/finalzzisfq new file mode 100644 index 0000000000000000000000000000000000000000..64b30348ac53f89b6fb19ca728eda91a9d8f3faa --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzisfq @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\label{intro}\nEvery atom heavier than lithium has been processed by stars. 
\nElements of the $\\alpha$ process, iron peak, and \n{\\em r} process have different formation sites, \nand therefore, understanding the distribution of \nthese elements in nearby stars can lead to a \nbetter understanding of the Galaxy's \nchemical enrichment history.\nThe {\\em r}-process\\ site is the least well understood.\nEuropium is our choice for an {\\em r}-process\\ investigation\nfor two reasons:\\ 96\\% of Galactic europium is \nformed through the {\\em r}-process\\ \\citep{burris_2000}, \nand it has several strong lines in the \nvisible portion of the electromagnetic spectrum.\nIn order to provide insight into the Galactic\nenrichment history, however, europium measurements\nin very large stellar samples will be needed.\n\nThe study of {\\em r}-process\\ elements is well established in metal-poor \nstars \\citep[e.g.,][]{sneden_2008,lai_2008,frebel_2007,\nsimmerer_2004,johnson_2001},\nwhere {\\em r}-process\\ abundances probe early Galactic history. \nA handful of studies have measured europium in \nsolar-metallicity stars \n\\citep[e.g.,][]{reddy_2006,bensby_2005,koch_2002,woolf_1995}, \nbut have done so in samples of 50--200 stars.\nWe aim to extend such work to a substantial sample of \n1000 stars at solar metallicity, using spectra\ncollected by the California and Caltech Planet Search\n(CCPS).\nSuch a large sample will require automated analysis.\n\nOur goal in this paper is to establish a new, \nautomated abundance fitting method based on Spectroscopy Made Easy\n\\citep[SME;][]{valenti_1996}, an LTE spectral synthesis code \ndiscussed in further detail in \\S\\,\\ref{abund}. \nThe analysis builds on the framework of the \n\\citealt{valenti_2005} (hereafter VF05) Spectroscopic \nProperties of Cool Stars (SPOCS) catalog.\nOur method will yield consistent results across large stellar \nsamples, generating smaller errors than previous analyses.\n\nWe begin here with 41\\ stars that\nhave been examined in the literature, and we include \nthree europium lines:\\ 4129\\,\\AA, 4205\\,\\AA, and 6645\\,\\AA.\nOur stellar observations are detailed in \\S\\,\\ref{obs}.\nWe describe the details of our europium \nabundance measurement algorithm in \\S\\,\\ref{abund}, \ncompare our values with existing europium literature\nin \\S\\,\\ref{comp},\nand summarize the results of the study in \n\\S\\,\\ref{summary}.\n\n\\section{Observations}\n\\label{obs}\n\nThe spectra of the 41\\ stars included in this study were taken with the HIRES echelle spectrograph \\citep{vogt_1994} on the 10-m Keck I telescope in Hawaii. \nThe spectra date from January 2004 to September 2008 and have resolution $R \\sim 50\\,000$ and signal-to-noise ratio (S\/N) of $\\sim$\\,$160$ near \\mbox{4200\\,\\AA}, where the two strongest Eu~{\\sc ii}\\ lines included in this study are located. \nThe spectra have the same resolution but S\/N~$\\sim 340$ at \\mbox{6645\\,\\AA}, where a third, weaker Eu~{\\sc ii}\\ line is located. \n\nThe spectra were originally obtained by the CCPS with the intention of detecting exoplanets. \nThe CCPS uses the same spectrometer alignment each night and employs the HIRES exposure meter \\citep{kibrick_2006}, so the stellar observations are extremely consistent, even across years of data collection. 
\nFor a more complete description of the CCPS and its goals, see \\citealt{marcy_2008}.\nIt should be noted that the iodine cell Doppler technique \\citep{marcy_1992} imprints molecular iodine lines only between the wavelengths of 5000 and 6400\\,\\AA, leaving the regions of interest for this study iodine free.\n\n\\subsection{Stellar Sample}\n\\label{stellsamp}\nFor this study we choose to focus on a subset of \nstars that have been analyzed in other abundance studies. \nWe compare our measurements with those from \n\\citealt{woolf_1995,simmerer_2004,bensby_2005}; and \n\\citealt{delpeloso_2005a}. We select CCPS target stars that \nhave temperature, gravity, mass, and metallicity measurements \nin the SPOCS catalog (VF05). \n\nOur initial stellar sample consisted of 44 objects that were \nmembers both of the aforementioned studies and the SPOCS catalog, \nand that were observed by the CCPS on the Keck I telescope.\nThree stars that otherwise fit our criteria, but for which we\ncould only measure the europium abundance in one line, were \nremoved from the sample.\nThey were HD\\,64090, HD\\,188510, and HIP\\,89215. \nWhile stellar europium abundances may be successfully \nmeasured from a single line, \nthe goal of this study is to establish the robustness \nof our method, \nand so we deem it necessary to determine the \neuropium abundance in at least two lines to \ninclude a star in our analysis here.\nWe describe our criteria for rejecting a fit in \n\\S\\,\\ref{stellar_abund}.\n\nThe 41\\ stars included in this study have \n$3.4 < V < 10.0$,\n$0.50 < B-V < 0.92$, and\n$5 < d < 75\\unit{pc}$ ({\\em Hipparcos}; \\citealt{hipparcos}).\nVF05 determine stellar properties using SME,\nso it is reasonable to adopt their values for our SME europium\nanalysis.\nBased on the VF05 analysis, our stars have\nmetallicity $-1.4 < \\textrm{[M\/H]} < 0.4$,\neffective temperature $4940 < T_{\\rm eff} < 6230$, and\ngravity $3.8 < \\log{g} < 4.8$.\nIt should be noted that throughout this study we use the \n[M\/H]\\ parameter for a star's metallicity, rather than \nthe iron abundance [Fe\/H]. 
\nOur [M\/H]\\ value is taken from VF05, where it \nis an independent model parameter that adjusts the\nrelative abundance of all metals together.\nIt is not an average of individual metal \nabundances.\nThe full list of stellar properties, from the {\\em Hipparcos} and\nSPOCS catalogs, appears in Table \\ref{starsinfo_table}.\n\n\\begin{deluxetable*}{rrrrrrrrrc}\n\\tablecaption{Stellar Data.\\label{starsinfo_table}}\n\\tablewidth{0pt}\n\\tablehead{\n \\multicolumn{3}{c}{{Identification}}\n & \\colhead{$V$\\tablenotemark{a}}\n & \\colhead{$B-V$\\tablenotemark{a}}\n & \\colhead{$d$\\tablenotemark{a}}\n & \\colhead{$T_{\\rm eff}$\\tablenotemark{b}}\n & \\multirow{2}{*}{[M\/H]\\tablenotemark{b}}\n & \\colhead{$\\log{g}$\\tablenotemark{b}}\n & \\multirow{2}{*}{Ref.\\tablenotemark{c}} \\\\\n \\colhead{HD}\n & \\colhead{HR}\n & \\colhead{HIP}\n & \\colhead{(mag)}\n & \\colhead{(mag)}\n & \\colhead{(pc)}\n & \\colhead{(K)}\n & \n & \\colhead{(cgs)}\n & \n}\n\\startdata \n 3795 & 173 & 3185 & 6.14 & 0.72 & 28.6 & 5369 & $ -0.41$ & 4.16 & 1 \\\\\n 4614 & 219 & 3821 & 3.46 & 0.59 & 6.0 & 5941 & $ -0.17$ & 4.44 & 4 \\\\\n 6734 & \\ldots & 5315 & 6.44 & 0.85 & 46.4 & 5067 & $ -0.28$ & 3.81 & 1 \\\\\n 9562 & 448 & 7276 & 5.75 & 0.64 & 29.7 & 5939 & $ 0.19$ & 4.13 & 1, 2, 4 \\\\\n 9826 & 458 & 7513 & 4.10 & 0.54 & 13.5 & 6213 & $ 0.12$ & 4.25 & 4 \\\\\n 14412 & 683 & 10798 & 6.33 & 0.72 & 12.7 & 5374 & $ -0.45$ & 4.69 & 1 \\\\\n 15335 & 720 & 11548 & 5.89 & 0.59 & 30.8 & 5891 & $ -0.20$ & 4.07 & 4 \\\\\n 16397 & \\ldots & 12306 & 7.36 & 0.58 & 35.9 & 5788 & $ -0.35$ & 4.50 & 1 \\\\\n 22879 & \\ldots & 17147 & 6.68 & 0.55 & 24.3 & 5688 & $ -0.76$ & 4.41 & 1, 2, 4 \\\\\n 23249 & 1136 & 17378 & 3.52 & 0.92 & 9.0 & 5095 & $ 0.03$ & 3.98 & 1 \\\\\n 23439 & \\ldots & 17666 & 7.67 & 0.80 & 24.5 & 5070 & $ -0.73$ & 4.71 & 3 \\\\\n 30649 & \\ldots & 22596 & 6.94 & 0.59 & 29.9 & 5778 & $ -0.33$ & 4.44 & 4 \\\\\n 34411 & 1729 & 24813 & 4.69 & 0.63 & 12.6 & 5911 & $ 0.09$ & 4.37 & 4 \\\\\n 43947 & \\ldots & 30067 & 6.61 & 0.56 & 27.5 & 5933 & $ -0.28$ & 4.37 & 2 \\\\\n 45184 & 2318 & 30503 & 6.37 & 0.63 & 22.0 & 5810 & $ 0.03$ & 4.37 & 1 \\\\\n 48938 & 2493 & 32322 & 6.43 & 0.55 & 26.6 & 5937 & $ -0.39$ & 4.31 & 4 \\\\\n 84737 & 3881 & 48113 & 5.08 & 0.62 & 18.4 & 5960 & $ 0.14$ & 4.24 & 4 \\\\\n 86728 & 3951 & 49081 & 5.37 & 0.68 & 14.9 & 5700 & $ 0.11$ & 4.29 & 4 \\\\\n 102365 & 4523 & 57443 & 4.89 & 0.66 & 9.2 & 5630 & $ -0.26$ & 4.57 & 2 \\\\\n \\ldots & \\ldots & 57450 & 9.91 & 0.58 & 73.5 & 5272 & $ -1.42$ & 4.30 & 3 \\\\\n 103095 & 4550 & 57939 & 6.42 & 0.75 & 9.2 & 4950 & $ -1.16$ & 4.65 & 3 \\\\\n 109358 & 4785 & 61317 & 4.24 & 0.59 & 8.4 & 5930 & $ -0.10$ & 4.44 & 4 \\\\\n 115617 & 5019 & 64924 & 4.74 & 0.71 & 8.5 & 5571 & $ 0.09$ & 4.47 & 4 \\\\\n 131117 & 5542 & 72772 & 6.30 & 0.61 & 40.0 & 5973 & $ 0.10$ & 4.06 & 2 \\\\\n 144585 & \\ldots & 78955 & 6.32 & 0.66 & 28.9 & 5854 & $ 0.25$ & 4.33 & 1 \\\\\n 156365 & \\ldots & 84636 & 6.59 & 0.65 & 47.2 & 5856 & $ 0.24$ & 4.09 & 1 \\\\\n 157214 & 6458 & 84862 & 5.38 & 0.62 & 14.4 & 5697 & $ -0.15$ & 4.50 & 4 \\\\\n 157347 & 6465 & 85042 & 6.28 & 0.68 & 19.5 & 5714 & $ 0.03$ & 4.50 & 1 \\\\\n 166435 & \\ldots & 88945 & 6.84 & 0.63 & 25.2 & 5843 & $ 0.01$ & 4.44 & 1 \\\\\n 169830 & 6907 & 90485 & 5.90 & 0.52 & 36.3 & 6221 & $ 0.08$ & 4.06 & 1 \\\\\n 172051 & 6998 & 91438 & 5.85 & 0.67 & 13.0 & 5564 & $ -0.24$ & 4.50 & 1 \\\\\n 176377 & \\ldots & 93185 & 6.80 & 0.61 & 23.4 & 5788 & $ -0.23$ & 4.40 & 1 \\\\\n 179949 & 7291 & 94645 & 6.25 & 0.55 & 27.0 & 6168 & $ 
0.11$ & 4.34 & 1 \\\\\n 182572 & 7373 & 95447 & 5.17 & 0.76 & 15.1 & 5656 & $ 0.36$ & 4.32 & 1 \\\\\n 190360 & 7670 & 98767 & 5.73 & 0.75 & 15.9 & 5552 & $ 0.19$ & 4.38 & 1 \\\\\n 193901 & \\ldots & 100568 & 8.65 & 0.55 & 43.7 & 5408 & $ -1.19$ & 4.14 & 3 \\\\\n 199960 & 8041 & 103682 & 6.21 & 0.64 & 26.5 & 5962 & $ 0.24$ & 4.31 & 1 \\\\\n 210277 & \\ldots & 109378 & 6.54 & 0.77 & 21.3 & 5555 & $ 0.20$ & 4.49 & 1 \\\\\n 217107 & 8734 & 113421 & 6.17 & 0.74 & 19.7 & 5704 & $ 0.27$ & 4.54 & 1 \\\\\n 222368 & 8969 & 116771 & 4.13 & 0.51 & 13.8 & 6204 & $ -0.08$ & 4.18 & 4 \\\\\n 224383 & \\ldots & 118115 & 7.89 & 0.64 & 47.7 & 5754 & $ -0.06$ & 4.31 & 1 \\\\\n\\enddata\n\\tablenotetext{a}{$V$-magnitude, color index, and parallax-based distance \nfrom the {\\em Hipparcos} catalogue \\citep{hipparcos}.}\n\\tablenotetext{b}{Stellar parameters previously published in VF05.}\n\\tablenotetext{c}{Star included for comparison to the following \nworks:~1---\\citealt{bensby_2005};\n2---\\citealt{delpeloso_2005a};\n3---\\citealt{simmerer_2004};\n4---\\citealt{woolf_1995}}\n\\end{deluxetable*}\n\n\n\\subsection{Co-adding Data}\n\\label{data}\nThe nature of the radial velocity planet search dictates that most stars \nwill have multiple, and in some cases, dozens of, observations. \nTo take advantage of the multiple exposures, we carefully \nco-add the reduced echelle spectra where possible. \n\nThe co-adding procedure is as follows:\\ a 2000-pixel region \n(approximately half the full order width) near \nthe middle of each order is cross-correlated, order by order, \nwith one observation arbitrarily designated as standard. \nThe pixel shifts are then examined and a linear trend as a function of \norder number is fit to the pixel shifts. \nFor any order whose pixel shift falls more than 0.4 pixels from the linear \ntrend, the value predicted by the linear trend is substituted. \nThis method corrects outlying pixel shift values, which often proved to\nbe one of a handful of problematic orders where the echelle blaze\nfunction shape created a false cross-correlation peak.\n\nEach spectral order is adjusted by its appropriate fractional pixel\nshift before all the newly aligned spectra are added together.\nIn order to accurately add spectra that have been shifted by \nnon-integer pixel amounts, we use a linear interpolation between \npixel values to artificially (and temporarily) increase \nthe sampling density by a factor of 20.\nAfter the co-adding, the resultant high-S\/N \nmulti-observation spectrum is sampled back down to its \noriginal spacing. \n\nThe number of observations used per star is recorded in the $N_{obs}$ \ncolumn of Table \\ref{starsvals_table}. \nThe co-adding proved particularly beneficial when fitting the relatively \nweak Eu~{\\sc ii}\\ line at 6645\\,\\AA. 
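The co-adding procedure lends itself to a compact implementation. The sketch below is schematic only: it assumes that each reduced observation is stored as an array of shape (order, pixel), uses a simple discrete cross-correlation, and omits details of the actual pipeline such as weighting, edge handling, and cosmic-ray rejection; all function names are ours.
\begin{verbatim}
import numpy as np

def xcorr_shift(ref, obs, half=1000, max_lag=15):
    # Pixel shift s (obs[x] ~ ref[x - s]) that maximises the cross-correlation
    # of a 2000-pixel chunk near the middle of the order.
    mid = len(ref) // 2
    a = ref[mid - half:mid + half] - ref[mid - half:mid + half].mean()
    b = obs[mid - half:mid + half] - obs[mid - half:mid + half].mean()
    lags = np.arange(-max_lag, max_lag + 1)
    cc = [float(np.sum(a * np.roll(b, -s))) for s in lags]
    return float(lags[int(np.argmax(cc))])

def coadd(spectra, outlier=0.4, oversample=20):
    # Align every order of every observation to the first observation, then sum.
    ref = np.asarray(spectra[0], dtype=float)
    total = ref.copy()
    n_orders, n_pix = ref.shape
    orders, pix = np.arange(n_orders), np.arange(n_pix)
    for obs in spectra[1:]:
        obs = np.asarray(obs, dtype=float)
        raw = np.array([xcorr_shift(ref[m], obs[m]) for m in orders])
        trend = np.polyval(np.polyfit(orders, raw, 1), orders)
        shifts = np.where(np.abs(raw - trend) > outlier, trend, raw)  # fix outliers
        fine = np.arange(0, n_pix, 1.0 / oversample)   # 20x oversampled pixel grid
        for m in orders:
            fine_flux = np.interp(fine + shifts[m], pix, obs[m])
            total[m] += fine_flux[::oversample]        # back to the native sampling
    return total
\end{verbatim}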
\nTwo sample stellar spectra, from a star with 1 observation\nand from a star with 17 observations, are given in Figure \n\\ref{coadding_works} as evidence of the S\/N advantages \nof this procedure.\n\n\\begin{deluxetable}{rrrrrrr}\n\\tablecaption{Stellar abundance values.\\label{starsvals_table}}\n\\normalsize\n\\tablehead{\n \\multirow{2}{*}{Name\\tablenotemark{a}} \n & \\multirow{2}{*}{$N_{obs}$}\n & \\colhead{4129 \\AA}\n & \\colhead{4205 \\AA}\n & \\colhead{6645 \\AA}\n & \\colhead{Weighted} \\\\\n & \n & \\colhead{[Eu\/H]\\tablenotemark{b}}\n & \\colhead{[Eu\/H]\\tablenotemark{b}}\n & \\colhead{[Eu\/H]\\tablenotemark{b}}\n & \\colhead{Average\\tablenotemark{c}}\n}\n \n\\startdata\n 3795 & 2 & $0.07$ & $0.06$ & $0.13$ &$0.07 \\pm 0.03$ \\\\\n 4614 & 2 & $-0.17$ & $-0.19$ & $-0.24$ &$-0.18 \\pm 0.02$ \\\\\n 6734 & 6 & $-0.02$ & $-0.06$ & $0.01$ &$-0.03 \\pm 0.03$ \\\\\n 9562 & 3 & $0.11$ & $0.14$ & $0.08$ &$0.12 \\pm 0.02$ \\\\\n 9826 & 1 & $0.10$ & $0.13$ & $0.09$ &$0.12 \\pm 0.02$ \\\\\n 14412 & 70 & $-0.28$ & $-0.15$ & $-0.16$ &$-0.24 \\pm 0.06$ \\\\\n 15335 & 2 & $-0.13$ & $-0.10$ & $-0.08$ &$-0.12 \\pm 0.03$ \\\\\n 16397 & 1 & $-0.21$ & $-0.20$ & $-0.22$ &$-0.21 \\pm 0.02$ \\\\\n 22879 & 27 & $-0.55$ & $-0.50$ & $-0.57$ &$-0.54 \\pm 0.03$ \\\\\n 23249 & 34 & $0.05$ & $0.26$ & $0.24$ &$0.17 \\pm 0.11$ \\\\\n 23439 & 24 & $-0.43$ & $-0.36$ & $-0.47$ &$-0.38 \\pm 0.03$ \\\\\n 30649 & 3 & $-0.13$ & $-0.11$ & $-0.13$ &$-0.12 \\pm 0.02$ \\\\\n 34411 & 70 & $0.12$ & $0.13$ & $0.11$ &$0.13 \\pm 0.02$ \\\\\n 43947 & 6 & $-0.21$ & $-0.19$ & $-0.21$ &$-0.20 \\pm 0.02$ \\\\\n 45184 & 95 & $0.01$ & $-0.03$ & $0.02$ &$0.00 \\pm 0.02$ \\\\\n 48938 & 2 & $-0.30$ & $-0.28$ & $-0.41$ &$-0.30 \\pm 0.03$ \\\\\n 84737 & 7 & $0.14$ & $0.14$ & $0.14$ &$0.14 \\pm 0.02$ \\\\\n 86728 & 46 & $0.06$ & $0.11$ & $0.10$ &$0.09 \\pm 0.03$ \\\\\n 102365 & 12 & $-0.13$ & $-0.08$ & $-0.14$ &$-0.11 \\pm 0.03$ \\\\\n HIP57450 & 1 & $-1.10$ & $-1.03$ &$\\leq -0.95$ &$-1.06 \\pm 0.04$ \\\\\n 103095 & 9 & \\ldots & $-0.37$ & $-0.38$ &$-0.37 \\pm 0.02$ \\\\\n 109358 & 47 & $-0.12$ & $-0.16$ & $-0.12$ &$-0.14 \\pm 0.03$ \\\\\n 115617 & 165 & $0.05$ & $0.02$ & $0.09$ &$0.04 \\pm 0.03$ \\\\\n 131117 & 2 & $0.10$ & $0.09$ & $0.12$ &$0.10 \\pm 0.02$ \\\\\n 144585 & 17 & $0.22$ & $0.25$ & $0.22$ &$0.23 \\pm 0.03$ \\\\\n 156365 & 1 & $0.11$ & $0.20$ & $0.14$ &$0.13 \\pm 0.04$ \\\\\n 157214 & 24 & $0.05$ & $0.06$ & $0.07$ &$0.06 \\pm 0.02$ \\\\\n 157347 & 77 & $0.09$ & $0.12$ & $0.13$ &$0.10 \\pm 0.02$ \\\\\n 166435 & 1 & $-0.05$ & $-0.17$ & $0.04$ &$-0.07 \\pm 0.06$ \\\\\n 169830 & 8 & $0.02$ & $0.05$ & $0.05$ &$0.03 \\pm 0.02$ \\\\\n 172051 & 37 & $-0.19$ & $-0.15$ & $-0.12$ &$-0.17 \\pm 0.03$ \\\\\n 176377 & 56 & $-0.20$ & $-0.21$ & $-0.23$ &$-0.20 \\pm 0.02$ \\\\\n 179949 & 1 & $0.05$ & $0.02$ & $0.05$ &$0.04 \\pm 0.03$ \\\\\n 182572 & 56 & $0.25$ & $0.41$ & $0.36$ &$0.31 \\pm 0.08$ \\\\\n 190360 & 73 & $0.17$ & $0.29$ & $0.30$ &$0.24 \\pm 0.07$ \\\\\n 193901 & 2 & $-0.92$ & $-0.91$ & $-0.98$ &$-0.91 \\pm 0.02$ \\\\\n 199960 & 3 & $0.14$ & $0.19$ & $0.19$ &$0.16 \\pm 0.03$ \\\\\n 210277 & 74 & $0.20$ & $0.34$ & $0.32$ &$0.27 \\pm 0.07$ \\\\\n 217107 & 37 & $0.20$ & $0.43$ & $0.33$ &$0.29 \\pm 0.11$ \\\\\n 222368 & 2 & $0.08$ & $0.09$ & $0.08$ &$0.08 \\pm 0.02$ \\\\\n 224383 & 2 & $0.00$ & $-0.01$ & $0.04$ &$0.00 \\pm 0.02$ \\\\\n Vesta & 3 & $-0.04$ & $0.00$ & $0.00$ &$-0.03 \\pm 0.03$ \\\\\n\\enddata\n\\tablenotetext{a}{All names are HD numbers unless otherwise indicated.}\n\\tablenotetext{b}{For a given star, [Eu\/H] = 
$\\log{\\epsilon\\left(\\textrm{Eu}\\right)} - \\log{\\epsilon\\left(\\textrm{Eu}\\right)_{\\odot}}$, where \\mbox{$\\log{\\epsilon\\left(X\\right)} = \\log_{10}{\\left(N_{X}\/N_{H}\\right)}$.}}\n\\tablenotetext{c}{The weight of each line in the average is based on 50 Monte Carlo trials in each Eu~{\\sc ii}~line. We have adopted an error floor of $0.02\\unit{dex}$, added in quadrature to the errors determined by our Monte Carlo procedure. See \\S\\ref{errors} for a more complete description of the weighted average and associated uncertainty.}\n\\end{deluxetable}\n\n\\begin{figure} \n\\includegraphics[width=\\columnwidth]{f1.eps}\n\\figcaption[Co-added Stellar Spectra.]{The three Eu~{\\sc ii}\\ lines considered in this paper, \n plotted for two different stars to demonstrate the advantage to be gained\n from co-adding multiple observations of the same star. \n Note that the rightmost plots have a different ordinate axis scaling than \n the other four panels.\n The top three panels, showing HD\\,156365, include 1 spectrum, while the bottom \n three panels, showing HD\\,144585, include 17. \n The two stars have approximately the same metallicity and\n effective temperature. \n The advantage from co-adding is most profound\n in the weak 6645-\\AA\\ line.\n \\label{coadding_works}}\n\\end{figure}\n\n\\section{Abundance Measurements}\n\\label{abund}\n\nWe use the SME suite of routines for our spectral synthesis, both for fine-tuning line lists based on the solar spectrum (\\S\\,\\ref{linelists}) and for measuring europium in each star (\\S\\,\\ref{stellar_abund}).\nSME is an LTE spectral synthesis code based on the \\citealt{kurucz_1992} grid of stellar atmosphere models. \nIn brief, to produce a synthetic spectrum, SME interpolates between the atmosphere models, calculates the continuous opacity, computes the radiative transfer, and then applies line broadening, which is governed by macroturbulence, stellar rotation, and instrumental profile. \nConsult \\citealt{valenti_1996} and VF05 for a more in-depth description of SME's inner workings.\n\nThroughout this study we use SME only to compute synthetic spectra.\nAll fitting is done in specialized routines of our own design, external to SME.\n\nIn this section we outline our technique for measuring europium abundances in the set of 41\\ stars included in this work. \nIn broad strokes, we first determine the atomic parameters of our spectral lines by fitting the solar spectrum (\\S\\,\\ref{linelists}). \nThen, we use that line list to measure the europium abundance in three transitions (4129\\,\\AA, 4205\\,\\AA, and 6645\\,\\AA); a weighted average of the three transitions determines a star's final europium value (\\S\\,\\ref{stellar_abund}). Finally, we estimate our uncertainties by adding artificial noise to our data in a series of Monte Carlo trials (\\S\\,\\ref{errors}).\nWe also here include notes on individual lines (\\S\\,\\ref{individual_lines}).\n\n\\subsection{Line Lists}\n\\label{linelists}\nWe use relatively broad spectral segments in our europium \nanalysis.\nThe regions centered on the Eu~{\\sc ii}\\ 4129\\,\\AA\\ and \n6645\\,\\AA\\ lines are 5\\,\\AA\\ wide, and the region centered\non the Eu~{\\sc ii}\\ 4205\\,\\AA\\ line is 8\\,\\AA\\ wide.\nWe find it necessary to use such broad spectral segments\nin order to fit a robust and consistent continuum in \nthe crowded blue regions.\n\nLine lists are initially drawn from the Vienna Astrophysics Line \nDatabase (VALD; \\citealt{piskunov_1995,kupka_1999}). 
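Before describing the line lists in detail, we note that the final combination of the three per-line measurements into the values of Table \ref{starsvals_table} can be written compactly. The sketch below assumes a standard inverse-variance weighting, with each line's uncertainty taken from the scatter of its Monte Carlo trials and the $0.02\unit{dex}$ floor added in quadrature; the weighting actually used is described in \S\,\ref{errors} and may differ in detail.
\begin{verbatim}
import numpy as np

def combine_lines(line_abund, mc_sigma, floor=0.02):
    # line_abund : best-fit [Eu/H] of each line (4129, 4205, 6645 A)
    # mc_sigma   : per-line scatter of the Monte Carlo trials (dex)
    # floor      : error floor added in quadrature (dex)
    a = np.asarray(line_abund, dtype=float)
    s = np.sqrt(np.asarray(mc_sigma, dtype=float) ** 2 + floor ** 2)
    w = 1.0 / s ** 2                      # inverse-variance weights (assumed)
    mean = np.sum(w * a) / np.sum(w)
    err = 1.0 / np.sqrt(np.sum(w))
    return mean, err

# example call with illustrative numbers:
# combine_lines([0.10, 0.12, 0.08], [0.02, 0.03, 0.06])
\end{verbatim}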
\nThe VALD line lists, in the regions surrounding all three \nEu~{\\sc ii}\\ transitions, make extensive use of Kurucz line lists. \n\nWe apply the original VALD line list to an observed solar spectrum in\norder to determine the list's completeness and to adjust line parameters\nas needed.\nIn the blue, we use the disk-center solar spectrum from \n\\citealt{wallace_1998} with the following global parameters: \n$T_{\\rm eff}$~$ = 5770\\unit{K}$, \n$\\log{g} = 4.44$, \n[M\/H]~$ = 0$, \nmicroturbulence $v_{mic} = 1.0\\unit{km s$^{-1}$}$, \nmacroturbulence $v_{mac} = 3.6\\unit{km s$^{-1}$}$,\nrotational velocity $v\\sin{i} = 0\\unit{km s$^{-1}$}$, \nand radial velocity $v_{rad} = 0\\unit{km s$^{-1}$}$. \nThese are the same solar parameters adopted in VF05.\nIn the red, we find the \\citealt{wallace_1998} solar atlas \nto have insufficient S\/N to accurately determine\nthe atomic parameters.\nAt 6645\\,\\AA, therefore, we instead compare our \nline list to the disk-integrated NSO solar spectrum \n\\citep{kurucz_1984}, \nadjusting $v\\sin{i}$ to $1.6\\unit{km s$^{-1}$}$ because\nthe full solar disk has more substantial rotational \nbroadening.\n\nWe find that adjustments to the oscillator strength \n($\\log{\\textrm{\\em gf}}$) and van der Waals broadening ($\\Gamma_{6}$) parameters \nare required for the strongest lines in a given \nwavelength segment, even far from the Eu~{\\sc ii}\\ line of \ninterest.\nFor example, the Fe~{\\sc i}\\ line at 4202\\,\\AA\\ has an equivalent \nwidth $W=326\\unit{m\\AA}$ in the Sun \nand significantly affects \nthe continuum of the 4205-\\AA\\ region.\nThe $\\log{\\textrm{\\em gf}}$\\ parameter controls the line depth while the\n$\\Gamma_{6}$\\ parameter controls the line shape, so in general\nthe two parameters are orthogonal.\n\nWe used the Kurucz $\\log{\\textrm{\\em gf}}$\\ values provided \nby VALD where possible, but adjustments\nwere necessary where line depths were poorly fit.\nFor $\\Gamma_{6}$, VALD returns the \\citealt{barklem_2000} \nparameters for beryllium through barium ($Z$ of 4--56),\nbut has no $\\Gamma_{6}$\\ values above $Z=56$.\nWe therefore find it necessary to fit $\\Gamma_{6}$\\ in \nspecies heavier than barium and in deep features\nnot fit well by VALD values.\nWe take particular care to determine the appropriate \n$\\log{\\textrm{\\em gf}}$\\ and $\\Gamma_{6}$\\ parameters for all lines \nadjacent to the Eu~{\\sc ii}\\ line of interest. 
\nWe find the best value for these parameters with the SME \nsynthesizer by performing a $\\chi^2$\\ minimization against \nthe solar spectrum on each parameter.\n\nWhere the VALD line list is insufficient, we add line data \nfrom \\citealt{moore_1966}, \nNIST (whose lists are based on a variety of sources) and \nC.~Sneden (private communication).\nWe also add CH and CN molecular lines based on values\nobtained from the Kurucz molecular line list web \nsite.\\footnote{\\tt http:\/\/kurucz.harvard.edu\/LINELISTS\/LINESMOL\/}\nIn the 4129-\\AA\\ region, we find it necessary to add \nthree artificial iron lines in order to match the solar spectrum.\nWe follow here the precedent of \\citealt{delpeloso_2005a},\nthough we find our fit requires the lines to have \nslightly different wavelengths and $\\log{\\textrm{\\em gf}}$\\ values.\nThe complete line list for Eu~{\\sc ii}\\ at 4129\\,\\AA\\ appears in\nTable \\ref{4129_nso_table}, \nfor 4205\\,\\AA\\ in Table \\ref{4205_nso_table}, and\nfor 6645\\,\\AA\\ in Table \\ref{6645_nso_table}.\nThe corresponding plots of the regions in these tables appear in\nFigures \\ref{eu4129_nso}, \\ref{eu4205_nso}, and \\ref{eu6645_nso}.\n\n\\begin{figure} \n\\includegraphics[width=\\columnwidth]{f2.eps}\n\\figcaption[Eu 4129\\,\\AA\\ in NSO.]{The 4129-\\AA\\ Eu~{\\sc ii}\\ line in the solar spectrum. \n Individual lines are annotated and are listed in Table \\ref{4129_nso_table}.\n The hyperfine components (see \\S\\,\\ref{linelists}) appear in the inset plot, \n which is aligned with the wavelength scale of the main plot.\n The relative strengths of the 32 hyperfine components \\citep{ivans_2006}\n are plotted on a linear scale in the inset; the top half of the inset \n contains the components from the $^{151}$Eu isotope while the\n $^{153}$Eu isotope components appear on the bottom.\n The gray box indicates the portion of the spectrum used to calculate \n $\\chi^2$\\ during the abundance fitting step (see \\S\\,\\ref{stellar_abund}).\n The cross-hatched region indicates a portion of the spectrum \n used to fit a continuum. \n This plot represents a subset of the spectral region used in our \n analysis; the full region is 5\\,\\AA\\ wide and contains three additional \n continuum fitting regions.\n \\label{eu4129_nso}}\n\\end{figure}\n\n\\begin{figure} \n\\includegraphics[width=\\columnwidth]{f3.eps}\n\\figcaption[Eu 4205\\,\\AA\\ in solar atlas.]{The 4205-\\AA\\ Eu~{\\sc ii}\\ line in the solar spectrum. \n Individual lines are annotated and are listed in Table \\ref{4205_nso_table}. \n The hyperfine components (see \\S\\,\\ref{linelists}) appear in the inset plot, \n which is aligned with the wavelength scale of the main plot.\n The relative strengths of the 30 hyperfine components \\citep{ivans_2006}\n are plotted on a linear scale in the inset; the top half of the inset \n contains the components from the $^{151}$Eu isotope while the\n $^{153}$Eu isotope components appear on the bottom.\n The gray box indicates the portion of the spectrum used to calculate \n $\\chi^2$\\ during the abundance fitting step (see \\S\\,\\ref{stellar_abund}). \n The cross-hatched region indicates a portion of the spectrum \n used to fit a continuum. 
This plot represents a subset of the \n spectral region used in our analysis; the full region is 8\\,\\AA\\ wide \n and contains six additional continuum fitting regions.\n \\label{eu4205_nso}}\n\\end{figure}\n \n\\begin{deluxetable*}{ccccccc}[!htp]\n\\tablecaption{Line list for the region near Eu~{\\sc ii}\\ at 4129\\,\\AA.\\label{4129_nso_table}}\n\\tablehead{\n \\colhead{$\\lambda$} \n & \\multirow{2}{*}{Element}\n & \\colhead{Lower Level}\n & \\multicolumn{2}{c}{{$\\log{\\textrm{\\em gf}}$}} \n & \\multicolumn{2}{c}{{$\\Gamma_{6}$}}\\\\\n \\colhead{(\\AA)}\n & \n & \\colhead{(eV)}\n & \\colhead{solar fit} \n & \\colhead{VALD\\tablenotemark{a}}\n & \\colhead{solar fit}\n & \\colhead{VALD\\tablenotemark{b}}\n}\n\n\\startdata\n 4129.147 & Pr {\\sc ii} & 1.039 & $-0.100$ & $-0.100$ & $-7.454$ & \\ldots \\\\\n 4129.159 & Cr {\\sc i} & 3.013 & $-1.948$ & $-1.948$ & $-6.964$ & $-7.362$ \\\\\n 4129.159 & Ti {\\sc ii} & 1.893 & $-2.300$ & $-1.730$ & $-6.900$ & $-7.908$ \\\\\n 4129.166 & Ti {\\sc i} & 2.318 & $-0.200$ & $-0.231$ & $-6.900$ & $-7.572$ \\\\\n 4129.174 & Ce {\\sc ii} & 0.740 & $-3.000$ & $-0.901$ & $-7.493$ & \\ldots \\\\\n 4129.182\\tablenotemark{c} &\n Cr {\\sc i} & 2.914 & $-0.100$ & \\ldots & $-8.300$ & \\ldots \\\\\n 4129.220 & Fe {\\sc i} & 3.417 & $-3.500$ & $-2.030$ & $-6.857$ & $-7.255$ \\\\\n 4129.220 & Sm {\\sc ii} & 0.248 & $-1.123$ & $-1.123$ & $-7.536$ & \\ldots \\\\\n 4129.425 & Dy {\\sc ii} & 0.538 & $-0.522$ & $-0.522$ & $-7.554$ & \\ldots \\\\\n 4129.426 & Nb {\\sc i} & 0.086 & $-0.780$ & $-0.780$ & $-7.462$ & \\ldots \\\\\n 4129.458 & Fe {\\sc i} & 3.396 & $-1.950$ & $-1.970$ & $-6.863$ & $-7.206$ \\\\\n 4129.522\\tablenotemark{d} &\n Fe {\\sc i} & 3.140 & $-3.497$ & \\ldots & $-6.873$ & \\ldots \\\\\n 4129.643 & Ti {\\sc i} & 2.239 & $-1.987$ & $-1.987$ & $-7.529$ & $-7.529$ \\\\\n 4129.705 & Eu {\\sc ii} & 0.000 & $+0.260$ & $+0.173$ & $-7.174$ & \\ldots \\\\\n 4129.817 & Co {\\sc i} & 3.812 & $-1.808$ & $-1.808$ & $-6.099$ & $-7.782$ \\\\\n 4129.837 & Nd {\\sc ii} & 2.024 & $-0.543$ & $-0.543$ & $-6.237$ & \\ldots \\\\\n 4129.959\\tablenotemark{d} &\n Fe {\\sc i} & 2.670 & $-3.139$ & \\ldots & $-7.322$ & \\ldots \\\\\n 4130.035 & Fe {\\sc i} & 1.557 & $-3.900$ & $-4.345$ & $-7.885$ & $-7.826$ \\\\\n 4130.036 & Fe {\\sc i} & 3.111 & $-2.350$ & $-2.636$ & $-8.026$ & $-7.857$ \\\\\n 4130.073 & Cr {\\sc i} & 2.914 & $-1.971$ & $-1.971$ & $-6.929$ & $-7.349$ \\\\\n 4130.122\\tablenotemark{e} &\n V {\\sc i} & 1.218 & $-1.000$ & $-3.142$ & $-7.060$ & $-7.800$ \\\\\n 4130.233 & Mn {\\sc i} & 2.920 & $-2.400$ & $-3.309$ & $-7.900$ & $-7.784$ \\\\\n 4130.315 & V {\\sc i} & 2.269 & $-0.300$ & $-0.607$ & $-7.187$ & $-7.585$ \\\\\n 4130.364 & Gd {\\sc ii} & 0.731 & $+0.177$ & $-0.090$ & $-6.608$ & \\ldots \\\\\n 4130.452 & Cr {\\sc i} & 2.914 & $-1.099$ & $-2.751$ & $-6.805$ & $-7.348$ \\\\\n\\enddata\n\\tablenotetext{a}{All line data except \\gamsix\\ from Kurucz databases via VALD (unless otherwise noted).}\n\\tablenotetext{b}{Van der Waals parameters where available from \\citealt{barklem_2000} via VALD (unless otherwise noted).}\n\\tablenotetext{c}{Identification from \\citealt{moore_1966}, \\loggf\\ and \\gamsix\\ from \\chisq\\ minimization.}\n\\tablenotetext{d}{Artificial iron lines included after \\citealt{delpeloso_2005a}.}\n\\tablenotetext{e}{Identification and parameters from Kurucz line lists hosted by the University of Hannover:\\newline \\texttt{http:\/\/www.pmp.uni-hannover.de\/cgi-bin\/ssi\/test\/kurucz\/sekur.html}.}\n\\end{deluxetable*}\n\n\\begin{figure} 
\n\\includegraphics[width=\\columnwidth]{f4.eps}\n\\figcaption[Eu 6645\\,\\AA\\ in solar atlas.]{The 6645-\\AA\\ Eu~{\\sc ii}\\ line in the solar \n spectrum. Note that the ordinate axis is scaled differently than in Figures \n \\ref{eu4129_nso} and \\ref{eu4205_nso}.\n Individual lines are annotated and are listed in Table \\ref{6645_nso_table}.\n The hyperfine components (see \\S\\,\\ref{linelists}) appear in the inset plot, \n which is aligned with the wavelength scale of the main plot.\n The relative strengths of the 30 hyperfine components \\citep{ivans_2006}\n are plotted on a linear scale in the inset; the top half of the inset \n contains the components from the $^{151}$Eu isotope while the\n $^{153}$Eu isotope components appear on the bottom.\n The gray box indicates the portion of the spectrum used to calculate \n $\\chi^2$\\ during the abundance fitting step (see \\S\\,\\ref{stellar_abund}). \n The two cross-hatched regions indicate the portion of the spectrum used \n to fit a continuum.\n This plot represents a subset of the spectral region used in our analysis; \n the full region is 5\\,\\AA\\ wide and contains four additional continuum fitting regions.\n \\label{eu6645_nso}}\n\\end{figure}\n\nHyperfine splitting is the dominant broadening mechanism for the\neuropium spectral lines. \nThe interaction between the nuclear spin and the atom's \nangular momentum vector causes energy level splitting\nin atoms with odd atomic numbers (europium is $Z=63$). \nThe effect is particularly pronounced in rare earth \nelements.\nThe 4129-\\AA\\ and 4205-\\AA\\ lines, for example, have FWHMs \nof 1.5\\,\\AA, due largely to hyperfine structure\n(but not isotope splitting---see insets of Figures \n\\ref{eu4129_nso}--\\ref{eu6645_nso}).\nSince the relative strengths of the hyperfine\ncomponents are constant without regard to\ntemperature, pressure, or magnetic field \\citep{abt_1952},\nthe components measured in laboratory settings\ncan be applied to stellar spectra.\n\nLike other spectral fitting packages,\nSME has no built-in treatment of hyperfine \nstructure. \nWe therefore convert a single europium line into \nits constituent hyperfine components and include them \nas separate entries in the star's line list. \nThe relative strengths and wavelength offsets of the \nhyperfine components come from \\citealt{ivans_2006}, \nwhich bases the values on an FTS laboratory analysis. \nFollowing the procedure of \\citealt{ivans_2006}, \nwe assume a solar system \ncomposition for the relative abundance of the two \neuropium isotopes ($^{151}$Eu at 47.8\\% and \n$^{153}$Eu at 52.2\\%, from \\citealt{rosman_1998}).\nWe divide the Eu~{\\sc ii}\\ $\\log{\\textrm{\\em gf}}$\\ values listed in \nTables \\ref{4129_nso_table}, \\ref{4205_nso_table}, \nand \\ref{6645_nso_table} amongst the \nhyperfine components according to their relative strengths. \nAll other attributes of the Eu~{\\sc ii}\\ lines remain the same in \nthe creation of the hyperfine lines. 
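\nSchematically (the notation $s_{i,k}$ and $f_{i}$ is introduced here only for clarity and is not that of \\citealt{ivans_2006}), if $s_{i,k}$ denotes the normalized relative strength of the $k$-th component of isotope $i$ and $f_{151}=0.478$, $f_{153}=0.522$ are the isotopic fractions, then each hyperfine entry receives\n\\begin{equation}\ngf_{i,k} = f_{i}\\,s_{i,k}\\,gf_{\\rm tot}, \\qquad \\sum_{k} s_{i,k} = 1,\n\\end{equation}\nso that the summed strength of all components recovers the total $\\log{\\textrm{\\em gf}}$\\ value of the unsplit Eu~{\\sc ii}\\ line.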
\nThe relative strengths of the hyperfine components are displayed \nin the inset plots in the solar spectrum Figures\n\\ref{eu4129_nso}, \\ref{eu4205_nso}, and \\ref{eu6645_nso}.\n\n\\begin{deluxetable*}{ccccccc}\n\\tablecolumns{7}\n\\tablecaption{Line list for the region near Eu~{\\sc ii}\\ at 4205\\,\\AA.\\label{4205_nso_table}}\n\\tablewidth{0pt}\n\\scriptsize\n\\tablehead{\n \\colhead{$\\lambda$} \n & \\multirow{2}{*}{Element}\n & \\colhead{Lower Level}\n & \\multicolumn{2}{c}{$\\log{\\textrm{\\em gf}}$}\n & \\multicolumn{2}{c}{$\\Gamma_{6}$} \\\\\n \\colhead{(\\AA)}\n & \n & \\colhead{(eV)}\n & \\colhead{solar fit} \n & \\colhead{VALD\\tablenotemark{a}} \n & \\colhead{solar fit}\n & \\colhead{VALD\\tablenotemark{b}} \n}\n\n\\startdata\n 4204.695 & Y {\\sc ii} & 0.000 & $-1.800$ & $-1.760$ & $-7.700$ & \\ldots \\\\\n 4204.717 & Ce {\\sc ii} & 0.792 & $-0.963$ & $-0.963$ & $-7.000$ & \\ldots \\\\\n 4204.759\\tablenotemark{c} &\n CH & 0.519 & $-1.140$ & $-1.158$ & $-8.900$ & \\ldots \\\\\n 4204.771\\tablenotemark{c} &\n CH & 0.520 & $-1.900$ & $-1.135$ & $-8.500$ & \\ldots \\\\\n 4204.801 & Sm {\\sc ii} & 0.378 & $-1.771$ & $-1.771$ & $-6.738$ & \\ldots \\\\\n 4204.831\\tablenotemark{c} &\n CH & 0.520 & $-3.360$ & $-3.360$ & $-7.699$ & \\ldots \\\\\n 4204.858 & Gd {\\sc ii} & 0.522 & $-0.668$ & $-0.668$ & $-6.787$ & \\ldots \\\\\n 4204.990 & Cr {\\sc i} & 4.616 & $-1.457$ & $-1.457$ & $-6.658$ & $-7.852$ \\\\\n 4205.000 & Fe {\\sc i} & 4.220 & $-1.900$ & $-2.150$ & $-7.000$ & $-7.548$ \\\\\n 4205.038 & V {\\sc ii} & 1.686 & $-1.850$ & $-1.875$ & $-6.800$ & $-7.913$ \\\\\n 4205.042 & Eu {\\sc ii} & 0.000 & $+0.250$ & $+0.120$ & $-6.800$ & \\ldots \\\\\n 4205.084 & V {\\sc ii} & 2.036 & $-1.100$ & $-1.300$ & $-6.900$ & $-7.956$ \\\\\n 4205.098 & Fe {\\sc i} & 2.559 & $-4.900$ & $-4.900$ & $-6.671$ & $-7.865$ \\\\\n 4205.107 & Cr {\\sc i} & 4.532 & $-1.160$ & $-1.160$ & $-6.582$ & $-7.776$ \\\\\n 4205.163 & Ce {\\sc ii} & 1.212 & $-0.653$ & $-0.653$ & $-6.672$ & \\ldots \\\\\n 4205.253 & Nd {\\sc ii} & 0.680 & $-0.992$ & $-0.992$ & $-6.699$ & \\ldots \\\\\n 4205.303 & Nb {\\sc i} & 0.049 & $-0.850$ & $-0.850$ & $-6.677$ & \\ldots \\\\\n 4205.381 & Mn {\\sc ii} & 1.809 & $-3.300$ & $-3.376$ & $-6.800$ & $-8.001$ \\\\\n 4205.402\\tablenotemark{c} &\n CH & 0.488 & $-2.300$ & $-3.960$ & $-8.000$ & \\ldots \\\\\n 4205.427\\tablenotemark{c} &\n CH & 1.019 & $-2.300$ & $-1.130$ & $-8.000$ & \\ldots \\\\\n 4205.491\\tablenotemark{c} &\n CH & 1.019 & $-3.463$ & $-3.463$ & $-8.000$ & \\ldots \\\\\n 4205.533\\tablenotemark{c} &\n CH & 1.019 & $-1.800$ & $-1.149$ & $-8.000$ & \\ldots \\\\\n 4205.538 & Fe {\\sc i} & 3.417 & $-1.100$ & $-1.435$ & $-7.800$ & $-7.224$ \\\\\n\\enddata\n\\tablenotetext{a}{All line data except \\gamsix\\ from Kurucz databases via VALD (unless otherwise noted).}\n\\tablenotetext{b}{Van der Waals parameters where available from \\citealt{barklem_2000} via VALD (unless otherwise noted).}\n\\tablenotetext{c}{All molecular line data (except \\gamsix, from \\chisq\\ minimization) from Kurucz web site:\\ \\texttt{http:\/\/kurucz.harvard.edu\/LINELISTS\/LINESMOL\/}.}\n\\end{deluxetable*}\n\n\\begin{deluxetable*}{ccccccc}\n\\tablecolumns{7}\n\\tablecaption{Line list for the region near Eu~{\\sc ii}\\ at 6645\\,\\AA.\\label{6645_nso_table}}\n\\tablewidth{0pt}\n\\scriptsize\n\\tablehead{\n \\colhead{$\\lambda$} \n & \\multirow{2}{*}{Element}\n & \\colhead{Lower Level}\n & \\multicolumn{2}{c}{$\\log{\\textrm{\\em gf}}$}\n & \\multicolumn{2}{c}{$\\Gamma_{6}$} \\\\\n \\colhead{(\\AA)}\n & 
\n & \\colhead{(eV)}\n & \\colhead{solar fit} \n & \\colhead{VALD\\tablenotemark{a}} \n & \\colhead{solar fit}\n & \\colhead{VALD\\tablenotemark{b}} \n}\n\n\\startdata\n 6644.320\\tablenotemark{c} &\n CN & 0.805 & $-1.456$ & $-2.258$ & $-7.695$ & \\ldots \\\\\n 6644.415\\tablenotemark{d} &\n La {\\sc i} & 0.131 & $-1.330$ & $-2.070$ & $-8.000$ & \\ldots \\\\\n 6645.111 & Eu {\\sc ii} & 1.380 & $+0.219$ & $+0.205$ & $-7.218$ & \\ldots \\\\\n 6645.210 & Si {\\sc i} & 6.083 & $-2.510$ & $-2.120$ & $-7.118$ & \\ldots \\\\\n 6645.372 & Fe {\\sc i} & 4.386 & $-2.759$ & $-3.536$ & $-6.780$ & $-7.808$ \\\\\n\\enddata\n\\tablenotetext{a}{All line data except \\gamsix\\ from Kurucz databases via VALD (unless otherwise noted).}\n\\tablenotetext{b}{Van der Waals parameters where available from \\citealt{barklem_2000} via VALD (unless otherwise noted).}\n\\tablenotetext{c}{All molecular line data (except \\gamsix, from \\chisq\\ minimization) from Kurucz web site:\\ \\texttt{http:\/\/kurucz.harvard.edu\/LINELISTS\/LINESMOL\/}.}\n\\tablenotetext{d}{Identification and parameters from Kurucz line lists hosted by the University of Hannover:\\newline \\texttt{http:\/\/www.pmp.uni-hannover.de\/cgi-bin\/ssi\/test\/kurucz\/sekur.html}.}\n\\end{deluxetable*}\n\n\\subsection{Europium Abundances}\n\\label{stellar_abund}\nIn order to measure the europium abundances in our selected stars, \nwe begin with the co-added spectra described in \\S\\,\\ref{data}. \nWe do a preliminary continuum fit using the SME routines,\nwhich follow the VF05 procedure:\\ deep features are filled \nin with a median value from neighboring spectral orders, \nthen a sixth-order polynomial is fit to the region of interest.\nThe built-in procedure creates a flat, normalized\ncontinuum, though we find it necessary to fine-tune the \ncontinuum normalization in the course of our europium \nfitting.\n\nFrom the VF05 SPOCS catalog we take \n$T_{\\rm eff}$, $\\log{g}$, [M\/H], $v\\sin{i}$, $v_{mac}$, and $v_{mic}$\n(fixed at 0.85\\unit{km s$^{-1}$}) \nfor each star we consider. \nIn general, the global parameters from VF05 agree very well with\nthe values adopted in the studies we compare to here. \n(Most of the literature values fall within the 2-$\\sigma$ errors\nquoted in VF05.) The VF05 catalog is one of the largest and most \nreliable sources of stellar properties determined to date; \n\\citealt{haywood_2006}, for example, finds the VF05 $T_{\\rm eff}$\\ and [M\/H]\\\nto be in good agreement with other reliable measurements.\n\nFor most element abundances we use a scaled solar\nsystem composition, shifting the \n\\citealt{grevesse_1998} solar abundances by \nthe star's [M\/H].\nThe exceptions to this rule are \nsodium, silicon, titanium, iron, and nickel,\nwhich VF05 measured individually. \nThose $\\alpha$ and iron-peak elements are\ntherefore treated independently of the na\\\"ive scaled-solar\nadjustment.\nIt is possible that a more explicit treatment of iron-peak\nand $\\alpha$ elements would improve our europium measurement\naccuracy. \nFor the present study, however, we deem individual \nabundance analysis (apart from europium) unnecessary.\nWe hold fixed the abundances of all elements other than \neuropium in the subsequent analysis.\n\nWe fit for the europium abundance by iterating three \n$\\chi^2$-minimization routines that solve for the \nwavelength alignment, spectrum continuum, and \neuropium abundance.\nA summary of each routine follows:\n\\begin{enumerate}\n\\item Wavelength. 
\nThe pixel scale is pre-determined from\nthe thorium lamp calibration taken each night.\nA first estimate of the rest frame wavelengths comes from\na cross-correlation of the full spectral segment with the \nsolar spectrum, a built-in functionality of SME.\nWe then use a 2-\\AA\\ region immediately surrounding \nthe Eu~{\\sc ii}\\ line of interest to perform a \n$\\chi^2$\\ minimization between the modeled stellar \natmosphere and the spectral data, thus solving for\nthe wavelength scale alignment as precisely as \npossible in the Eu~{\\sc ii}\\ region.\n\\item Continuum.\nWe fit a quadratic function across the points designated \nin the solar spectrum as continuum-fitting points \n(the cross-hatch regions in \nFigures \\ref{eu4129_nso}, \\ref{eu4205_nso},\nand \\ref{eu6645_nso}).\nWe adjust the quadratic continuum function vertically\nto require that 1--2\\% of spectral points \nin the full spectral segment are above unity, \nthereby ensuring that all spectra are scaled identically.\n\\item Abundance.\nWe perform a $\\chi^2$\\ minimization \nadjusting only the abundance of europium.\nWe begin with the solar abundance value scaled\nby the star's metallicity, then search \n$1.0\\unit{dex}$\nof europium abundance space to find the \nbest-fit value.\nIn minimizing the $\\chi^2$\\ statistic, \nwe calculate the residuals between the data \nand the fit in a limited region around the \nEu~{\\sc ii}\\ line (the gray regions in Figures \n\\ref{eu4129_nso}, \\ref{eu4205_nso}, and \n\\ref{eu6645_nso}).\n\\end{enumerate}\n\\noindent The wavelength alignment, spectrum continuum, and europium abundance \nfitting routines are run in that order and iterated \nuntil a stable solution is reached.\nIn most cases a stable solution requires only one or \ntwo iterations.\nThe abundances for each line in each star as \ndetermined by this process are listed in Table \n\\ref{starsvals_table}.\n\nAfter running our automatic europium fitting algorithm \non all 41\\ stars,\nwe examine each line in each star by eye to \nconfirm that the fit is successful. \nIn a few of the metal-poor 4129-\\AA\\ fits, the blended lines\nthat encroach on Eu~{\\sc ii}\\ were poorly enough fit with the\nVF05 SPOCS values that the europium value was \nunconvincing. \nIn those cases, the 4129-\\AA\\ value is\nomitted from Table \\ref{starsvals_table} and does not\ncontribute to the average.\nSee Figure \\ref{keep} for a comparison of a rejected\n4129-\\AA\\ feature and a robust 4129-\\AA\\ fit.\nSimilarly, in a few metal-poor stars the\n6645-\\AA\\ line is too weak to be seen in the noise, \nand hence the output of the fitting routine serves only\nas an upper limit on the europium abundance.\nSee Figure \\ref{keep} for a comparison of a measureable \n6645-\\AA\\ feature and a feature that provides an \nupper limit.\nIn the cases where the 6645-\\AA\\ line is an upper limit\nonly, it is listed as such in Table \\ref{starsvals_table}\nand does not contribute to the average. \n\n\\begin{figure} \n\\includegraphics[width=\\columnwidth]{f5.eps}\n\\figcaption[Bad fits vs.\\ good fits in comparable stars.]{Bad fits (top) \n versus good fits (bottom) in comparable stars. \n In the upper left panel, the SPOCS stellar properties fit the lines adjacent to\n Eu~{\\sc ii}\\ poorly enough that [Eu\/H]\\ is unreliable. That line is removed from further \n analysis.\n In the upper right panel, the 6645-\\AA\\ line is buried in the noise, meaning\n the fit represents an upper limit to the europium abundance. 
The abundance\n upper limit is noted in Table \\ref{starsvals_table}, but does not contribute\n to the overall [Eu\/H]\\ measurement in the star.\n The lower two panels include fits to stars with similar characteristics \n to the stars in the upper panels, but where the 4129-\\AA\\ and 6645-\\AA\\\n fits were more successful.\n \\label{keep}}\n\\end{figure}\n\nIf both 4129\\,\\AA\\ and 6645\\,\\AA\\ proved problematic we removed the\nstar from our analysis entirely. \nThe three stars for which this was the case (listed in\n\\S\\,\\ref{stellsamp}) are omitted from \nTables \\ref{starsinfo_table} and \\ref{starsvals_table}.\nSince all of the rejected stars were\nmetal poor, we conclude that our \nfitting routine is most robust at solar metallicity, \nand becomes less reliable at [M\/H]~$< -1$. \nTemperature may also play a role, as one of the\nrejected stars, HD\\,64090, has $T_{\\rm eff} = 7300\\unit{K}$; \nVF05 determined SME to be reliable between 4800 and \n$6500\\unit{K}$.\n\nFor each star we calculate a weighted average europium \nabundance value based on the three (or, in some \ncases, two) spectral lines.\nWeighting the average is important because for stars with\nrelatively few observations (e.g., HD\\,156365 in Figure \n\\ref{coadding_works}), the weak line at 6645\\,\\AA\\ should \nbe weighted significantly less than the more robust blue \nlines. \nStars with a larger number of observations \nand higher S\/N spectra\n(e.g., HD\\,144585 in Figure \\ref{coadding_works})\nshould have the 6645-\\AA\\ line weighted more strongly.\nIn order to determine the relative weights of the three \nspectral lines, we tested the robustness of our fit by \nadding artificial noise to the spectra. \nWe describe that process in \\S\\,\\ref{errors}.\n\n\\subsection{Error Analysis}\n\\label{errors}\nWe begin our error analysis by adding to the data \nGaussian-distributed random noise with a standard \ndeviation set by the photon noise at each pixel.\nWe then fit the europium line again, using the \nsame iterative $\\chi^2$-minimization process described \nin \\S\\,\\ref{stellar_abund}, and repeat the process \n50 times.\nThe standard deviation of the 50 Monte Carlo \ntrials determines the relative weights of the \nlines in the average listed in \nTable \\ref{starsvals_table}, with lower \nstandard deviation lines (corresponding \nto a more robust fit) weighted more \nstrongly.\n\nAs expected, the results of the Monte Carlo trials show \nthat the larger the photon noise in an observation, the \nless that line is to be weighted. \nWe derive a linear relation between \nthe Poisson uncertainty of an observation\nand its relative weight, based on the Monte Carlo results.\nBecause the Monte Carlo trials are CPU intensive, we plan\nto use that relation to determine \nrelative weights in the future.\n\nWe estimate the uncertainty of our europium\nabundances to be the sum of the squares of the \nresiduals for all three lines, where \nwe assign the residuals the same relative\nweights we applied when calculating the average.\nWe also impose an error floor of $0.02\\unit{dex}$,\nadded to the uncertainty in quadrature.\n\nThe error floor comes from a comparison of\nabundance values from the individual lines\n(Figure \\ref{line_compare}, discussed in more \ndetail in \\S\\,\\ref{compare_lines});\nerror bars of $0.03\\unit{dex}$ on each measurement\nwould make the reduced $\\sqrt{\\chi^2}$~$=1$. 
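\nSchematically, for three lines of comparable weight, a per-line error of $0.03\\unit{dex}$ propagates into the weighted mean as\n\\begin{equation}\n\\sigma_{\\rm avg} \\simeq \\frac{0.03\\unit{dex}}{\\sqrt{3}} \\approx 0.02\\unit{dex}.\n\\end{equation}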
\nA minimum error of $0.03\\unit{dex}$ on each \neuropium measurement\ntranslates into an uncertainty of $0.02\\unit{dex}$ \nfor the final averaged value.\nThe $0.02\\unit{dex}$ error floor is included in the \nuncertainty quoted in Table \\ref{starsvals_table}.\n\n\\begin{figure} \n\\includegraphics[width=\\columnwidth]{f6.eps}\n\\figcaption[Comparing three Eu~{\\sc ii}\\ lines.]{A comparison of the Eu~{\\sc ii}\\ values \n measured in the 41\\ stars of this study, as listed in Table \n \\ref{starsvals_table}. \n Vesta is marked with a plus symbol.\n The solid line represents a 1:1 correlation; the dashed line is a \n best fit to the data.\n Points are omitted where the fit was poor in one of the Eu~{\\sc ii}\\ lines. \n Upper limits for the 6645-\\AA\\ line are marked with arrows.\n See \\S\\,\\ref{compare_lines} for a discussion of the quality of the fit.\n \\label{line_compare}}\n\\end{figure}\n\nAs an initial test of the robustness of our fitting routine,\nwe measure the europium abundance in a spectrum of Vesta,\nwhich serves as a solar proxy.\nThe values are listed in the last row of\nTable \\ref{starsvals_table}.\nThe three Vesta europium abundances have a standard\ndeviation of $0.022\\unit{dex}$, indicating that \nsystematic errors ($\\sim$\\,$0.03\\unit{dex}$ for Vesta,\nits offset from the solar value) may be a significant\nportion of the error budget.\n\nFor the moment we absorb any systematic error with \nour random error estimates.\nThe full sample of 1000 stars, the analysis \nof which will follow this work, will\nallow a far more thorough investigation of \nthe dependence of our results on various model \nparameters ($T_{\\rm eff}$, $\\log{g}$, etc.)\\ than can be \naccomplished here.\nTherefore, we delay a substantive discussion of \nsystematics until \nwe have more europium abundance measurements in hand,\nthough we touch on it again in \\S\\,\\ref{compare_lines}\nand \\S\\,\\ref{compare_lit}.\nIt is important to note that since most of the CCPS stars \nare similar to the Sun, and we are treating each star\nidentically, our results will be internally consistent.\n\n\\subsection{Notes on Individual Lines}\n\\label{individual_lines}\n\n\\subsubsection{Europium 4129\\,\\AA}\n\\label{abund:eu4129}\nThe europium line at 4129\\,\\AA\\ is the result of a \nresonance transition of Eu~{\\sc ii}, and is a strong, \nrelatively clean line. \nIt provides the most reliable measurement of \neuropium abundance in a star. \nOur fit to the Eu~{\\sc ii}\\ line at 4129\\,\\AA\\ in the \nsolar spectrum (described in \\S\\,\\ref{linelists}) \nappears as Figure \\ref{eu4129_nso}, with the 32\nhyperfine components (16 each from $^{151}$Eu \nand $^{153}$Eu; \\citealt{ivans_2006}) \nrepresented in the Figure \n\\ref{eu4129_nso} inset.\n\nIn the course of stellar fitting, the europium \nabundance is determined from the 4129\\,\\AA\\ line \nin all 41\\ stars except HD\\,103095. 
\nThat star is quite metal poor ([M\/H]~$= -1.16$) and cool \n($T_{\\rm eff} = 4950\\unit{K}$), factors that likely contribute to the \npoor fit in the lines adjacent to the Eu~{\\sc ii}\\ line of \ninterest (see Figure \\ref{keep}).\n\n\\subsubsection{Europium 4205\\,\\AA}\n\\label{abund:eu4205}\nThe Eu~{\\sc ii}\\ line at 4205\\,\\AA\\ is the other fine-structure\ncomponent of the resonance transition responsible\nfor the line at 4129\\,\\AA.\nThough it is as strong as the Eu~{\\sc ii}\\ line at 4129\\,\\AA, \ncontamination from embedded lines (see Figure \\ref{eu4205_nso})\nhas the potential to make the 4205-\\AA\\ line less reliable. \nHowever, it is useful as a comparison line for the results\nfrom the 4129-\\AA\\ Eu~{\\sc ii}\\ line.\nOur solar spectrum fit at 4205\\,\\AA\\ appears in \nFigure \\ref{eu4205_nso}, with the \n30 hyperfine components (15 each from $^{151}$Eu and $^{153}$Eu;\n\\citealt{ivans_2006}) \nin the inset.\nDespite its blended nature, the fit appears sound\nin all 41\\ stars.\n\n\\subsubsection{Europium 6645\\,\\AA}\n\\label{abund:eu6645}\nThe Eu~{\\sc ii}\\ line at 6645\\,\\AA\\ is weaker than the lines in the blue, but\nit is also relatively unblended, making it worthwhile to fit wherever \npossible. \nOur solar spectrum fit at 6645\\,\\AA\\ appears in \nFigure \\ref{eu6645_nso},\nwith the inset plot showing the 30 hyperfine components (15\neach from $^{151}$Eu and $^{153}$Eu; \\citealt{ivans_2006}). \nIn HIP\\,57450, which is metal poor ([M\/H]~$= -1.42$) and has only\none observation, the 6645-\\AA\\ line was lost in the noise\n(see Figure \\ref{keep}), and\nonly the 4129-\\AA\\ and 4205-\\AA\\ lines contribute to the final\neuropium abundance; the 6645-\\AA\\ line provides an upper limit only.\n\n\n\\section{Results}\n\\label{comp}\n\\subsection{Comparison of Individual Lines}\n\\label{compare_lines}\nWe compare our results from the three Eu~{\\sc ii}\\ lines in\nFigure \\ref{line_compare}.\nThe Vesta abundances, included as a solar \nproxy, are consistent with the stellar \nabundances to within $0.03\\unit{dex}$.\nBy assigning a measurement uncertainty\nof $0.03\\unit{dex}$ to each line, the \nreduced $\\sqrt{\\chi^2}$\\ for each of the three plots is unity.\nThis measurement uncertainty is the source of the\nerror floor discussed in \\S\\,\\ref{errors}.\n\n\\begin{figure*} \n\\centering\n\\includegraphics[width=0.80\\textwidth]{f7.eps}\n\\figcaption[Comparing our Eu~{\\sc ii}\\ values to others'.]{A comparison of the \n  final Eu~{\\sc ii}\\ values from this work with literature measurements. \n  The solid line represents a 1:1 correlation. \n  The horizontal error bars represent the uncertainties as quoted in \n  the source literature. \n  The vertical error bars come from our analysis as described in \\S\\,\\ref{errors}. \n  Two stars, HD\\,9562 and HD\\,22879 (with [Eu\/H]\\ of $+0.12$ and $-0.54$, \n  respectively), were included in the \\citealt{bensby_2005}, \n  \\citealt{delpeloso_2005a}, and \\citealt{woolf_1995} analyses, so those \n  two stars are represented by three points along the abscissa at the \n  same ordinate value. \n  See Table \\ref{starsinfo_table} for the full list of the europium \n  abundances plotted here, and see \\S\\,\\ref{compare_lit} for a discussion\n  of the quality of our fit based on this plot. 
\n \\label{me_vs_them}}\n\\end{figure*}\n\nCalculating the best-fit line for each \ncomparison plot (the dashed lines in\nFigure \\ref{line_compare}) lends insight\ninto possible systematic trends in our analysis.\nIn the blue, the two Eu~{\\sc ii}\\ lines (4129\\,\\AA\\ and\n4205\\,\\AA) have no apparent linear systematic \ntrend:\\ the slope of the best-fit line is 1.03.\nThe red Eu~{\\sc ii}\\ line (6645\\,\\AA), however, exhibits\na minor systematic trend:\\ the best-fit line\nin the red has a slope of 1.10 relative to the blue,\ni.e., a 10\\% stretching of the blue abundance values \nabout [Eu\/H]~$= 0$ roughly reproduces the red \nabundance values.\nSince this systematic trend is absorbed by the\n$0.03\\unit{dex}$ error bars assigned to each point \n(necessary to make reduced $\\sqrt{\\chi^2}$\\ unity even when\ncomparing the non-systematic blue lines),\nwe make no attempt to correct this systematic\ntrend here.\n\nBetween the relatively low measurement uncertainty\nneeded to achieve reduced $\\sqrt{\\chi^2}$\\ of unity and the minor\nsystematic trend that only appears in the red, \nwe conclude that the europium abundance values \nderived from the Eu~{\\sc ii}\\ lines at 4129\\,\\AA, \n4205\\,\\AA, and 6645\\,\\AA\\ are consistent with one \nanother.\n\n\\subsection{Comparison with Literature Europium Measurements}\n\\label{compare_lit}\nIn Figure \\ref{me_vs_them}, we compare our \nfinal europium abundance measurements to the\nliterature values.\nThe agreement is quite good.\nThe literature values, plotted with the error\nbars quoted in the original studies, appear\nas the abscissa.\nOur europium values, plotted with the error\nbars calculated in \\S\\,\\ref{errors} and listed\nin Table \\ref{starsvals_table}, appear\nas the ordinate.\nThe solid line represents a 1:1 correlation.\nComparing the points to the 1:1 line,\nthe reduced $\\sqrt{\\chi^2}$~$=\\rchisqtot$.\nThat value is dominated by\nHD\\,103095, the \\citealt{simmerer_2004} data \npoint with the smallest error bars, and \nomitting it makes\nthe reduced $\\sqrt{\\chi^2}$~$=\\rchisqsm$. \n\nAdopting the global values ($T_{\\rm eff}$, [M\/H], $\\log{g}$)\nused by the comparison studies (instead of the VF05 \nvalues) drops the reduced $\\sqrt{\\chi^2}$\\ to \\rchisqlit, indicating\nthat some of the scatter in Figure \\ref{me_vs_them}\nis from the choice of stellar parameters.\nWe emphasize that the VF05 stellar parameters are \nreliable; we calculate reduced $\\sqrt{\\chi^2}$\\ using\nthe comparison studies' values in an attempt to \nseparate how much of the disagreement in Figure \n\\ref{me_vs_them} is from the europium abundance\ntechnique and how much is from the parameters adopted.\n\nOmitting the outlier mentioned at the beginning\nof this section, we examine \nthe data in Figure \\ref{me_vs_them} to \nsearch for systematics in our results relative \nto the literature values. 
\nWe consider the effects of a global offset in\nour europium values and a linear trend\nwith europium abundance, finding that systematic\noffsets of $\\sim$\\,$0.1\\unit{dex}$ and linear trends\nof $\\sim$\\,$40\\%$ are needed\nto make reduced $\\sqrt{\\chi^2}$\\ climb to 2.\nWe conclude that our error bars have sufficiently\ncharacterized our uncertainties.\n\nOverall, we find reduced $\\sqrt{\\chi^2}$\\ to be dominated by the \npoints at [Eu\/H]~$< -0.5$, consistent with\nour conclusion in \\S\\,\\ref{stellar_abund} that\nour results are most reliable near solar \nmetallicities, although\nthe larger error bars on the \\citealt{woolf_1995}\npoints make the correlation very forgiving near\n[Eu\/H]~$=0$. \nOur spectra have very high S\/N:\\ $\\sim$\\,160 \nat 4200\\,\\AA\\ \nin a single observation, enhanced substantially\nby our co-adding procedure (\\S\\,\\ref{data}).\nThe automated\nSME synthesis treats all stars consistently,\nwhich is especially important for line blends in the \ncrowded blue regions.\nFor these reasons we believe that our europium abundance\ntechnique is accurate and robust and\nthat our smaller error bars are warranted.\n\nWe conclude that\nour abundance measurements are consistent\nwith previous studies, which is not surprising\nsince most abundance techniques rely on the same\nKurucz stellar atmosphere models.\nBecause the majority of the points in\nFigure \\ref{me_vs_them} fall near the\n1:1 correlation, we also conclude that\nnear [Eu\/H]~$=0$ our errors are \n$\\sim$\\,$0.03\\unit{dex}$, though at\n[Eu\/H]~$< -0.5$, the errors may be as high as \n$0.1\\unit{dex}$.\n\n\n\\section{Summary}\n\\label{summary}\nWe have established that our method for measuring europium in \nsolar-metallicity stars using SME is sound.\nThe resolution and S\/N of the Keck HIRES spectra are \nsufficiently high to fit the Eu~{\\sc ii}\\ lines in question. \nThe values obtained from the \nthree europium lines are self-consistent, and our final \naveraged europium values for each of the 41\\ stars in this study\nare consistent with the literature values for those stars.\n\nBy employing SME to calculate our synthetic spectra,\nwe are self-consistently modeling all the lines in the regions\nof interest. 
\nAny blending from neighboring lines is treated consistently\nfrom star to star, adding robustness to our europium determination.\nUsing SME has the added benefit of allowing us to \nadopt the stellar parameters from the SPOCS catalog.\nOur automated procedure ensures all stars are treated consistently.\n\nHaving established a new method for measuring stellar \neuropium abundances, we intend to apply our technique to \n1000 F, G, and K stars from the Keck CCPS survey.\nOur analysis of europium in these stars will represent the \nlargest and most consistent set of europium measurements in\nsolar-metallicity stars to date, and will provide\ninsight into the question of the {\\em r}-process\\ formation\nsite and the enrichment history of the Galaxy.\n\n\n\\acknowledgements\nThe author is indebted to\nGeoffrey W.~Marcy, \nChristopher Sneden,\nDebra A.~Fischer, \nJeff A.~Valenti, \nAnna~Frebel,\nJames W.~Truran,\nand Taft E.~Armandroff\nfor productive and enlightening conversations about the\nprogress of this work.\nParticular thanks are extended to\nChristopher Sneden,\nGeoffrey W.~Marcy, and\nLouis-Benoit Desroches\nfor their thoughtful comments on this paper.\nThe author is also grateful to her fellow observers\nwho collected the Keck HIRES data used\nhere:\\ Geoffrey W.~Marcy, \nDebra A.~Fischer,\nJason T.~Wright, \nJohn Asher Johnson,\nAndrew W.~Howard, \nChris McCarthy,\nSuneet Upadhyay,\nR.~Paul Butler,\nSteven S.~Vogt,\nEugenio Rivera,\nand Joshua Winn.\nWe gratefully acknowledge the dedication of the\nstaff at Keck Observatory, particularly Grant Hill\nand Scott Dahm for their HIRES support.\nThis research has made use of \nthe SIMBAD database, operated at CDS, Strasbourg, France; \nthe Vienna Atomic Line Database; \nthe Kurucz Atomic and Molecular Line Databases; \nthe NIST Atomic Spectra Database;\nand NASA's Astrophysics Data System Bibliographic Services.\nThe author extends thanks to those of Hawaiian \nancestry on whose sacred mountain of Mauna Kea we\nare privileged to be guests. 
\nWithout their generous hospitality, the Keck observations\npresented here would not have been possible.\n\n\\bibliographystyle{apj}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{0pt}{8pt plus 2pt minus 1pt}{6pt plus 2pt minus 1pt}\n\n\\usepackage{balance}\n\n\\usepackage[scaled]{beramono}\n\\usepackage{listings}\n\n\\lstset{\n language=Python,\n showstringspaces=false,\n formfeed=\\newpage,\n tabsize=4,\n commentstyle=\\itshape,\n basicstyle=\\ttfamily,\n breaklines=true,\n morekeywords={models, lambda, forms}\n}\n\n\\usepackage{color}\n\\definecolor{Orange}{rgb}{0.9,0.5,0}\n\\definecolor{NavyBlue}{rgb}{0.1, 0.4, 0.8}\n\\definecolor{Magenta}{rgb}{0.8, 0.1, 0.6}\n\\definecolor{Red}{rgb}{1, 0, 0}\n\\newcommand{\\marco}[1]{\\textcolor{Magenta}{\\textbf{[By Marco: #1]}}}\n\\newcommand{\\gloria}[1]{\\textcolor{blue}{\\textbf{[By Gloria: #1]}}}\n\\newcommand{\\bruno}[1]{\\textcolor{Red}{\\textbf{[By Bruno: #1]}}}\n\\newcommand{\\yahui}[1]{\\textcolor{Orange}{\\textbf{[By Yahui: #1]}}}\n\n\\newcommand{${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}}{${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}}\n\\newcommand{$\\pmb{\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}}{$\\pmb{\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}}\n\n\n\n\n\n\\usepackage[belowskip=-0.3em,aboveskip=2pt]{caption}\n\\begin{document}\n\\fancyhead{}\n\\title{Gesture-to-Gesture Translation in the Wild via Category-Independent Conditional Maps}\n\n\\author{Yahui Liu}\n\\email{yahui.liu@unitn.it}\n\\affiliation[obeypunctuation=true]{\\institution{University of Trento}, \\city{Trento}, \\country{Italy}}\n\n\\author{Marco De Nadai}\n\\email{denadai@fbk.eu}\n\\orcid{0000-0001-8466-3933}\n\\affiliation[obeypunctuation=true]{\\institution{FBK}, \\city{Trento}, \\country{Italy}}\n\n\\author{Gloria Zen}\n\\email{gloria.zen@unitn.it}\n\\affiliation[obeypunctuation=true]{\\institution{University of Trento}, \\city{Trento}, \\country{Italy}}\n\n\\author{Nicu Sebe}\n\\email{niculae.sebe@unitn.it}\n\\affiliation[obeypunctuation=true]{\\institution{University of Trento}, \\city{Trento}, \\country{Italy}}\n\n\\author{Bruno Lepri}\n\\email{lepri@fbk.eu}\n\\affiliation[obeypunctuation=true]{\\institution{FBK}, \\city{Trento}, \\country{Italy}}\n\n\n\n\n\\renewcommand{\\shortauthors}{Liu, et al.}\n\\renewcommand{\\shorttitle}{Gesture-to-Gesture Translation in the Wild}\n\n\n\\begin{abstract}\n\\begin{sloppypar}\nRecent works have shown Generative Adversarial Networks (GANs) to be particularly effective in image-to-image translations.\nHowever, in tasks such as body pose and hand gesture translation, existing methods usually require precise annotations, e.g. key-points or skeletons, which are time-consuming to draw. 
\nIn this work, we propose a novel GAN architecture that decouples the required annotations into a category label - that specifies the gesture type - and a simple-to-draw category-independent conditional map - that expresses the location, rotation and size of the hand gesture.\nOur architecture synthesizes the target gesture while preserving the background context, thus effectively dealing with gesture translation \\textit{in the wild}.\nTo this aim, we use an attention module and a rolling guidance approach, which loops the generated images back into the network and produces higher quality images compared to competing works.\nThus, our GAN learns to generate new images from simple annotations without requiring key-points or skeleton labels.\nResults on two public datasets show that our method outperforms state of the art approaches both quantitatively and qualitatively.\nTo the best of our knowledge, no work so far has addressed the gesture-to-gesture translation \\emph{in the wild} by requiring user-friendly annotations.\n\n\\end{sloppypar}\n\n\\end{abstract}\n\n\\begin{CCSXML}\n\n\n10010147.10010178.10010224<\/concept_id>\nComputing methodologies~Computer vision<\/concept_desc>\n500<\/concept_significance>\n<\/concept>\n\n10010147.10010257<\/concept_id>\nComputing methodologies~Machine learning<\/concept_desc>\n300<\/concept_significance>\n<\/concept>\n<\/ccs2012>\n\\end{CCSXML}\n\n\\ccsdesc[500]{Computing methodologies~Computer vision}\n\\ccsdesc[300]{Computing methodologies~Machine learning}\n\n\\keywords{GANs, image translation, hand gesture}\n\n\n\n\n\\maketitle\n\n\\begin{figure}[t]\n \\includegraphics[width=1\\columnwidth]{figures\/teaser2.pdf}\n \n \\caption\n Our proposal decouples the \\emph{category label} that specifies the gesture type (e.g., gesture \"5\" or \"7\") from the \\emph{conditional map} (i.e. triangle) that controls the location, orientation and size of the target gesture.\n Existing works require a detailed conditional map (e.g. skeleton) that is gesture-dependent. \n In this example, we show that our method significantly lowers the drawing effort and expertise required by users. Our method can generate multiple output images with the same map for multiple gesture categories. \n \n }\n \\label{fig:teaser}\n \\vspace{-0.3em}\n\\end{figure}\n\n\\section{Introduction}\n\\begin{sloppypar}\nPhoto-editing software, fashion, and retail markets would enormously benefit from the possibility of modifying an image through a simple user input describing the changes to make (e.g. change the hand gesture of the person in the picture from ``open hand'' to ``ok'').\nHowever, despite the significant advances of Generative Adversarial Networks (GANs)~\\cite{goodfellow2014generative,radford2015unsupervised,arjovsky2017wasserstein,gulrajani2017improved}, the generation of images \\emph{in the wild} without precise annotations (e.g. hand skeleton) is still an open problem.\nPrevious literature on image-to-image translation has relied either on pixel-to-pixel mappings~\\cite{isola2017image,zhu2017unpaired,yang2018crossing} or precise annotations to localize the instance to be manipulated, such as segmentation masks~\\cite{mo2018instagan}, key-points~\\cite{siarohin2018deformable}, skeletons~\\cite{tang2018gesturegan} and facial landmarks~\\cite{sanchez2018triple}. 
\nHowever, obtaining such annotations is not trivial.\nOn one hand, automatic methods for key-points extraction~\\cite{cao2017realtime,simon2017hand} may fail or the reference gesture image\/video~\\cite{siarohin2018animating} may not be available. On the other hand, drawing such annotations is complicated, time-consuming, and their quality directly affects the performance of the network. \n\nMoreover, existing methods often focus on foreground content, e.g., the target gesture or facial expression, generating blurred and imprecise backgrounds~\\cite{siarohin2018deformable,ma2017pose,reed2016learning}.\nThese methods are well suited to cases where the images\nshare fixed or similar spatial structures, such as facial expression datasets~\\cite{liu2015deep,fabian2016emotionet}.\nInstead, in image-to-image translations \\textit{in the wild}, both the foreground and background between the source image and target image can vary a lot~\\cite{tang2018gesturegan}. \n\nIn this paper, we propose a novel method, named ${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}, which requires a simple-to-draw annotation, such as a triangle, and focuses on the challenging task of hand gesture-to-gesture translation \\emph{in the wild}.\nIn general, annotations such as key-points or skeletons are category-dependent since they provide four types of information at the same time, namely \\textit{category} (e.g., gesture ``5\"), \\textit{location}, \\textit{scale} and \\textit{orientation} of the hand gesture.\nInstead, our ${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}~decouples the \\textit{category} from the \\textit{location}-\\textit{scale}-\\textit{orientation} information.\nUsing a category-independent conditional map significantly lowers the annotation cost, allowing the generation of multiple target images while requiring users to draw only a single map. \nIn this work, we refer to ``annotations'' as the user effort to draw the desired target gesture also at deploy time, besides the effort needed for generating the training data for our model. \nThe intuition of our approach is depicted in Figure~\\ref{fig:teaser}.\nFurthermore, we propose a novel architecture that uses an attention module and a rolling guidance approach to perform gesture-to-gesture translation \\textit{in the wild}.\nOur research yields three main contributions:\n\\begin{itemize}\n    \\item \\emph{Decoupled conditional map and category.} We designed a general architecture for gesture-to-gesture translations that separately encodes the category label (e.g., gesture ``5\") and the category-independent conditional map. \n    This allows performing several image translations with the same conditional map.\n    \\item \\emph{Rolling guidance and attention mask.} We propose a novel rolling guidance approach that allows generating higher quality output images by feeding the generated image back to the input as an additional condition to be considered. \n    \n    \n    Also, ${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}~learns, in an unsupervised manner, an attention mask that preserves the details shared between the input and target image.\n    \n    \n    \\item \\emph{Simple-to-draw conditional maps.} \n    We propose a triangle conditional map as the simplest and minimal necessary user-provided condition for gesture-to-gesture translation.\n    To the best of our knowledge, no work so far has addressed the gesture-to-gesture translation task \\emph{in the wild} by requiring user-friendly annotations. 
\n    Furthermore, we assess the performance of our method with different shapes, such as boundary and skeleton maps.\n    \n    \n    Finally, we enrich two public datasets with different conditional maps for each gesture image, specifically based on triangles and boundaries.\n\\end{itemize}\n\\end{sloppypar}\n\n\n\n\n\n\n\n\n\n\n\\begin{figure*}[ht!]\n    \\centering\n    \\includegraphics[width=0.95\\linewidth]{figures\/diagram.pdf}\n    \\vspace{-1em}\n    \\caption{Overview of $\\pmb{\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}, which translates hand gestures \\emph{in the wild} by separately encoding the input image and the target gesture attributes, including the category label (e.g., gesture 3) and the category-independent conditional map. We include an unsupervised attention mask to preserve the details shared between the input image and the target image. Specifically, we feed the first reconstructed image back to the input conditions encoding module to improve the quality of the output image.}\n    \\label{fig:ourGAN}\n    \n\\end{figure*}\n\n\n\\section{Related work}\nRecently, there have been many works on Generative Adversarial Networks (GANs) and, particularly, on conditional GANs for the task of image-to-image translation~\\cite{isola2017image,zhu2017unpaired,zhu2017toward,choi2018stargan,pumarola2018ganimation}. \nA significant line of these works has focused on translation tasks where the input and target images are spatially aligned, as in the case of style transfer~\\cite{zhu2017unpaired,johnson2016perceptual,isola2017image}, emotional content transfer~\\cite{emotionGAN18} or \nimage inpainting~\\cite{zhang2018semantic}.\nIn general, these works aim to preserve the main image content while presenting it in various styles.\nAnother line of works has tackled the more challenging task of image translation where the target object is spatially unaligned with respect to the original location, shape or size of the original input object.\nThis is the case of image-to-image translation tasks like facial expression~\\cite{sanchez2018triple,geng20193d}, \nhuman pose~\\cite{ma2017pose,siarohin2018deformable} or \nhand gesture translation~\\cite{tang2018gesturegan}.\nTo this aim, methods usually require geometry information as guidance to guarantee where and how to edit visual patterns corresponding to the image attributes,\nsuch as key-points~\\cite{ma2017pose,siarohin2018animating,ma2018disentangled}, skeletons~\\cite{tang2018gesturegan}, object segmentation masks~\\cite{mo2018instagan}, facial landmarks~\\cite{sanchez2018triple}, action units~\\cite{pumarola2018ganimation} or 3D models~\\cite{geng20193d}. \n\n\n GANimation~\\cite{pumarola2018ganimation} learns an attention mask to preserve the background content of the source image for the spatially aligned translation of facial expressions. \n InstaGAN~\\cite{mo2018instagan} performs multi-instance domain-to-domain image translation by requiring as input precise segmentation masks for the target objects.\n Pose Guided Person Generation Network (PG$^2$)~\\cite{ma2017pose} proposes a two-stage generation framework to refine the output image given a reference image and a target pose.\n MonkeyNet~\\cite{siarohin2018animating} generates a conditioned video transferring body movements from a target driving video to a reference appearance image. 
The target key-points are obtained automatically using state-of-the-art detectors.\n\nRelatively few works have considered the challenging task of image translation \\textit{in the wild}, where both the foreground content and the background context undergo significant variation~\\cite{tang2018gesturegan}. \nIn these cases, not only do the networks learn to synthesize the target object or attribute, but they also have to correctly locate it in the image while the pixels depicting the content are preserved from the original image.\nGestureGAN~\\cite{tang2018gesturegan} proposes a novel color loss to improve the output image quality and to produce sharper results. However, this approach does not aim to separately encode the foreground content and the background context.\n\nFurthermore, the large majority of image-to-image translation works focus on one-to-one domain mapping. \nRecently, efficient solutions have been proposed to address multi-domain translation~\\cite{choi2018stargan,geng20193d}. In particular, StarGAN~\\cite{choi2018stargan} proposes the use of multiple mask vectors to generalize on different datasets and domain labels. 3DMM~\\cite{geng20193d} decomposes an image into shape and texture spaces, and relies on an identity and target coefficients to generate images in multiple domains. They, however, focus on multi-domain face attributes where neither the background quality nor the misalignment is considered.\n\nOur work falls within the category of multi-domain and image-to-image translation \\emph{in the wild}. Still, rather than asking the user for expensive annotations, we propose a novel user-friendly annotation strategy.\nTo the best of our knowledge, no works so far have investigated other approaches for gesture translation that also reduce the annotation effort.\n\n\n\\section{Our approach}\nThe goal of this work is to perform gesture translation \\emph{in the wild}, conditioned on the user-provided gesture attributes, such as the desired category, location, scale and orientation. \nSpecifically, we seek to estimate the mapping $\\mathcal{M}$: $(\\mathbf{I}_X, \\mathbf{S}_Y, C_Y) \\rightarrow \\mathbf{I}_Y$ that translates an input image $\\mathbf{I}_X$ into an output image $\\mathbf{I}_Y$, conditioned on a hand gesture category $C_Y$ and a simple-to-draw conditional map $\\mathbf{S}_Y$, which encodes the desired location, scale and orientation of the gesture.\nThe generated image is discriminated through a Conditional Critic discriminator~\\cite{pumarola2018ganimation} that judges the photo-realism of the generated image and assesses the category of the generated gesture (e.g. gesture \"5\"). \nFurthermore, in order to efficiently deal with the gesture translation \\emph{in the wild}, we propose a rolling guidance approach and an attention mask module. \nFigure~\\ref{fig:ourGAN} depicts the architecture of our approach. In this section we further explain the details of the architecture and the loss functions used to train our framework.\n\n\\subsection{Network Architecture}\n\\label{sec:network}\n\n\n\\noindent \\textbf{Generator.} \nOur Generator $G$ takes as input the conditional image $\\textbf{I}_X$, the category label $C_Y$ and the conditional map $\\textbf{S}_Y$\nand it outputs the translated image $\\textbf{I}_Y$.\nThe original image and the target gesture conditions are encoded\nthrough the encoders $E_1$ and $E_2$, respectively, and then concatenated and provided to $F_{res}$. 
Then, the output features of $E_2$ and $F_{res}$ are concatenated and provided to $F_{dec}$. \nSimilarly to~\\cite{pumarola2018ganimation}, in the decoder we learn to predict in an unsupervised manner the approximated image $\\widetilde{\\textbf{I}}_Y$ and an \\textit{Attention Mask} $\\mathbf{A} \\in [0,1.0]^{{H\\times W}}$. \nSince both the background and foreground between the input and target images can vary a lot, the learned attention mask $\\mathbf{A}$ tends to preserve the shared part between $\\mathbf{I}_X$ and $\\mathbf{I}_Y$. \nPixels in $\\mathbf{A}$ with higher value indicate to preserve the corresponding pixels from $\\mathbf{I}_X$, otherwise from $\\widetilde{\\mathbf{I}}_Y$.\nThe final generated image is thus obtained as:\n\\begin{equation}\n\\widehat{\\mathbf{I}}_Y = \\mathbf{A}*\\mathbf{I}_X + (1-\\mathbf{A})*\\widetilde{\\mathbf{I}}_Y\n\\label{eq:attention-mask}\n\\end{equation}\nFurthermore, the generated image $\\mathbf{I}_Y$ is rolled back as an input condition and concatenated together with the category label $C_Y$ and the conditional map $\\textbf{S}_Y$. More details are provided in Section~\\ref{Sec:rolling-guidance}. \n\n\\medskip\n\\noindent \\textbf{Discriminator.} Our Discriminator $D$ takes as input the generated image $\\widehat{\\textbf{I}}_Y$ and its conditional map $\\textbf{S}_Y$. \nThe outputs of $D$ consist of two parts: $D_{cat}$ predicts the category label \nand $D_{prob}$ classifies whether local image patches are real or fake. \nAs in~\\cite{pumarola2018ganimation}, D$_{prob}$ is based on PatchGANs~\\cite{isola2017image}.\nThe conditional map $\\textbf{S}_Y$ is needed by $D$ to verify that the generated gesture has also the right location, scale and orientation.\n\nWe refer to the Supplementary Material for additional details in the architecture.\n\n\\subsection{Objective Formulation}\n\\label{sec:loss}\n\nThe loss function of ${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}\\ is composed of four main components, namely \\emph{GAN Loss} that pushes the generated image distribution to the distribution of source images; \\emph{Reconstruction loss} that forces the generator to reconstruct the source and target image; \\emph{Category Label loss} that allows to properly classify the generated image into hand gesture classes; and \\emph{Total Variation loss} that indirectly learns the attention mask in an unsupervised fashion.\n\n\\medskip\n\\noindent \\textbf{GAN Loss.} To generate images with the same distribution of the source images,\nwe adopt an adversarial loss: \n\\begin{equation}\n\\begin{aligned}\n\\mathcal{L}_{GAN} = & \\mathbb{E}_{\\mathbf{I}_Y, \\mathbf{S}_Y\\sim \\mathbb{P}}[\\log D_{prob}(\\mathbf{I}_Y, \\mathbf{S}_Y)] + \\\\ & \\mathbb{E}_{\\mathbf{I}_X,\\mathbf{S}_Y, C_Y\\sim \\mathbb{P}}[\\log (1- D_{prob}(G(\\mathbf{I}_X, \\mathbf{S}_Y, C_Y), \\mathbf{S}_Y))] \\\\\n\\end{aligned}\n\\end{equation}\nwhere $\\mathbb{P}$ is the data distribution of the hand gesture images in the dataset, and\nwhere $G$ generates an image $G(\\mathbf{I}_X, \\mathbf{S}_Y, C_Y)$ conditioned on both the input image $\\mathbf{I}_X$, the conditional map $\\mathbf{S}_Y$, and the target category $C_Y$, while $D$ tries to distinguish between real and fake images. 
In this paper, we refer to the term $D_{prob}(\\mathbf{I}, \\mathbf{S})$ as a probability distribution over sources given by $D$.\n\n\\medskip\n\\noindent \\textbf{Reconstruction Loss.} The adversarial loss does not guarantee that the generated image is consistent with both the target conditional map $\\mathbf{S}$ and category $C$.\nThus, we first apply a \\emph{forward reconstruction loss} that ties together the target image $\\mathbf{I}_Y$ with its target conditional map $\\mathbf{S}_Y$ and category $C_Y$:\n\\begin{equation}\n \\mathcal{L}_{rec} = \\|G(\\mathbf{I}_X, \\mathbf{S}_Y, C_Y) - \\mathbf{I}_Y\\|_1\n\\end{equation}\nThen, instead of using perceptual features (e.g. extracted from VGG~\\cite{very2015simonyan} networks) to force the model to reconstruct the source image, we propose a simplified \\emph{self-reconstruction (identity) loss}:\n\\begin{equation}\n \\mathcal{L}_{idt} = \\|G(\\mathbf{I}_X, \\mathbf{S}_X, C_X) - \\mathbf{I}_X \\|_1\n\\end{equation}\nwhere $\\mathbf{S}_X$ is the conditional map of the source image and $C_X$ the category label of the source image.\nFinally, we apply the \\emph{cycle consistency loss}~\\cite{zhu2017unpaired,kim2017learning} to reconstruct the original image from the generated one:\n\\begin{equation}\n \\mathcal{L}_{cyc}=\\|G(G(\\mathbf{I}_X, \\mathbf{S}_Y, C_Y), \\mathbf{S}_X, C_X) - \\mathbf{I}_X\\|_1\n\\end{equation}\nNote that we apply the cycle reconstruction loss only in one direction to reduce computation, i.e., A-to-B-to-A, since a translation pair based on two images A and B may be sampled either as A-to-B or as B-to-A\nduring the training.\n\n\\medskip\n\\noindent \\textbf{Category Label loss.} We enforce the generator to render realistic samples that have to be correctly classified to the hand gesture expressed by the input category label.\nSimilarly to StarGAN~\\cite{choi2018stargan}, we split the \\emph{Category Label loss} in two terms: a gesture classification loss of the real image $\\mathbf{I}_Y$ used to optimize $D$, and a gesture classification loss of the generated image $\\hat{\\mathbf{I}}_Y$, used to optimize $G$.\nSpecifically:\n\\begin{equation}\n \\mathcal{L}_{cls} = \\mathbb{E}_{\\mathbf{I}_Y, C_Y}[- \\log D_{cat}(C_Y | \\mathbf{I}_Y, \\mathbf{S}_Y)]\n\\end{equation}\n\\begin{equation}\n\\mathcal{L}_{\\hat{cls}} = \\mathbb{E}_{\\hat{\\mathbf{I}}_Y, C_Y}[- \\log D_{cat}(C_Y | \\hat{\\mathbf{I}}_Y, \\mathbf{S}_Y)]\n\\end{equation}\nwhere $D_{cat}(C_Y | \\mathbf{I}_Y, \\mathbf{S}_Y)$ and $D_{cat}(C_Y | \\hat{\\mathbf{I}}_Y,\\mathbf{S}_Y)$ represent a probability distribution over the categories of hand gestures respectively in the real and generated images.\nIn other words, these losses allow to generate images that can be correctly classified as the target hand gesture category.\n\n\\medskip\n\\noindent \\textbf{Total Variation loss}. \nTo prevent the final generated image having artifacts, we use a Total Variation Regularization, $f_{tv}$,\nas in GANimation~\\cite{pumarola2018ganimation}.\nHowever, differently from them, we calculate $f_{tv}$ over the approximated image $\\widetilde{\\mathbf{I}}_Y$ instead of the attention mask $\\mathbf{A}$, \nthus allowing to freely explore the shared pixels between the source and target images. 
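\nIn code, this choice amounts to penalizing the differences between neighbouring pixels of $\\widetilde{\\mathbf{I}}_Y$ while leaving $\\mathbf{A}$ unconstrained; a minimal PyTorch-style sketch of this smoothness term and of the composition in Eq.~(\\ref{eq:attention-mask}) is shown below (function and variable names are purely illustrative and do not refer to our released implementation).\n\\begin{lstlisting}\nimport torch\n\ndef tv_penalty(img):\n    # Squared differences between vertically and horizontally\n    # adjacent pixels of a (B, C, H, W) tensor, up to the\n    # normalization over images used in the definition below.\n    dh = img[:, :, 1:, :] - img[:, :, :-1, :]\n    dw = img[:, :, :, 1:] - img[:, :, :, :-1]\n    return ((dh ** 2).sum(dim=(1, 2, 3)).mean()\n            + (dw ** 2).sum(dim=(1, 2, 3)).mean())\n\ndef compose(I_x, I_tilde, A):\n    # Attention-based composition: pixels with a high mask value\n    # are copied from the source image, the others are taken\n    # from the approximated image predicted by the decoder.\n    return A * I_x + (1.0 - A) * I_tilde\n\n# The smoothness penalty acts on I_tilde, not on the mask A:\n# loss_tv = tv_penalty(I_tilde_idt) + tv_penalty(I_tilde_fwd)\n\\end{lstlisting}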
\nThe total variation loss is applied both to the \\textit{forward reconstruction} and \\textit{self-reconstruction} and is formulated as:\n\\begin{equation}\n \\mathcal{L}_{tv} = f_{tv}(G_C(\\mathbf{I}_X, \\mathbf{S}_X, C_X))+f_{tv}(G_C(\\mathbf{I}_X, \\mathbf{S}_Y, C_Y))\n\\end{equation}\nwhere $G_C(\\cdot)$ denotes the approximated image $\\widetilde{\\mathbf{I}}$ produced by the generator before the attention-based composition.\nThe total variation regularization $f_{tv}$ is defined as:\n\\begin{dmath}\n f_{tv}(\\mathbf{I})=\n \\mathbb{E}_{\\mathbf{I}} \\left[\\sum_{i,j}^{H-1,W-1}[(\\mathbf{I}_{i+1, j} - \\mathbf{I}_{i,j})^2 + (\\mathbf{I}_{i, j+1} - \\mathbf{I}_{i,j})^2]\\right]\n\\end{dmath}\nwhere $\\mathbf{I}_{i,j}$ is the entry $i,j$ of the image matrix $\\mathbf{I}$.\n\n\n\\medskip\n\\noindent \\textbf{Total loss}. The final objective function used to optimize $G$ and $D$ is formulated as follows:\n\\begin{dmath}\n\\mathcal{L}_{D} = \\lambda_{D}\\mathcal{L}_{GAN} + \\lambda_{cls}\\mathcal{L}_{cls}\n\\label{eq:loss_D}\n\\end{dmath}\n\\begin{dmath}\n\\mathcal{L}_{G} = \\lambda_{G}\\mathcal{L}_{GAN} + \\lambda_{rec}\\mathcal{L}_{rec} + \\lambda_{idt}\\mathcal{L}_{idt} + \\lambda_{cyc}\\mathcal{L}_{cyc} + \\lambda_{cls}\\mathcal{L}_{\\hat{cls}} + \\lambda_{tv}\\mathcal{L}_{tv}\n\\label{eq:loss_G}\n\\end{dmath}\nwhere $\\lambda_{D}$, $\\lambda_{G}$, $\\lambda_{rec}$, $\\lambda_{idt}$, $\\lambda_{cyc}$, $\\lambda_{cls}$, and $\\lambda_{tv}$ are hyper-parameters that control the relative importance of each loss term.\n\n\n\\subsection{Rolling Guidance}\n\\label{Sec:rolling-guidance}\n\\begin{sloppypar}\nWhile the total variation loss $\\mathcal{L}_{tv}$ also enforces the approximated images $\\widetilde{\\mathbf{I}}_Y$ to be smooth, the source and target images might contain edges and details that have to be preserved. \nMoreover, the $E_1$ and $E_2$ encoders mostly focus on the gesture, failing to learn important details of the context, which might result in blurred images.\nInspired by previous works~\\cite{mosinska2018beyond,ma2017pose, zhang2014rolling}, we propose a Rolling Guidance approach to refine the generated image in a two-stage process. \nFirst, the network generates an initial version $\\widehat{\\mathbf{I}}_Y$ from input ($\\mathbf{I}_X$, $\\mathbf{S}_Y$, $C_Y$). \nThen, $\\widehat{\\mathbf{I}}_Y$ is fed back to $E_2$. Thus, the network generates a refined version of $\\widehat{\\mathbf{I}}_Y$ from input ($\\mathbf{I}_X$, $\\mathbf{S}_Y$, $C_Y$, $\\widehat{\\mathbf{I}}_Y$).\nNote that some existing approaches, such as PG$^2$~\\cite{ma2017pose}, feed the initial generated image back, concatenated with the source input, to learn a difference map that refines the results. However, images with a considerable variation in both the foreground and background between source and target images might result in an ill-posed problem. Gesture-to-gesture translation in the wild exhibits this issue. For this reason, in ${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}, we feed the generated image back to $E_2$, which refines the generated image and, at the same time, learns the condition-related features of the target gesture. This results in better generalization and significantly improves the results.\n\\end{sloppypar}\n\n\\section{Experiments}\nWe compare our model with state-of-the-art techniques on two hand gesture datasets.\nFirst, we evaluate our model quantitatively through various widely used metrics that compare the generated image with the ground truth. 
\nThen, we also evaluate our results qualitatively through a perceptual user study.\nWe released the resulted dataset and annotations, source code and trained models are available at: \\url{https:\/\/github.com\/yhlleo\/TriangleGAN}.\n\n\\subsection{Datasets}\nThe NTU Hand Gesture dataset~\\cite{ren2013robust} is a collection of 1,000 RGB-D images recorded with a Kinect sensor. It includes 10 gestures repeated 10 times by 10 subjects under a cluttered background. Image resolution is 640x480. The Creative Senz3D dataset~\\cite{memo2018head} is a collection of 1,320 images, where 11 gestures are repeated 30 times by 4 subjects. \nImage resolution is 640x480.\n\n\\medskip\n\\noindent \\textbf{Experimental setting.}\nWe consider two dataset setups in our experimental evaluation.\n\n\\medskip\n\\noindent \\textit{Normal} uses the same setup as GestureGAN~\\cite{tang2018gesturegan}, in order to directly compare with the state of the art.\nGestureGAN authors used only a subset of the datasets (acquired by OpenPose~\\cite{cao2018openpose}): 647 images out of 1,000 for NTU Hand Gesture and 494 out of 1,320 for Creative Senz3d. It shows that the state-of-the-art detector OpenPose~\\cite{cao2018openpose} fails at detecting key-points for about 50\\% of the images.\nThe resulting number of training and test data pairs obtained are respectively the following:\n21,153 and 8,087 for NTU Hand Gesture; \n30,704 and 11,234 for Creative Senz3D.\nThese numbers are different from those reported in~\\cite{tang2018gesturegan} since we here report only the unique pairs without considering flipping and A-to-B reverse ordering.\n\n\n\\medskip\n\\noindent \\textit{Challenging} pushes the limits of our model by ensuring that all the translation pairs \"A-to-B\" to a specific image \"B\" are included either in the train or in the test, a condition not ensured by the \\emph{normal} setting and, thus, by state of the art.\nAs a consequence, the model here generates multi-domain images without previous knowledge about it.\nWe randomly select for the training and test data the following number of pairs: \n22,050 and 13,500 for NTU Hand Gesture;\n138,864 and 16,500 for the Creative Senz3D. \n\n\n\n\n\\medskip\n\\noindent \\textbf{Conditional Maps.}\nWe consider three possible shapes of the conditional map to prove the effectiveness and generality of our method. Sample images are reported in Figure~\\ref{fig:maps}.\n\n\\medskip\n\\noindent \\textit{Triangle Map.}\nIn this type of annotation, the user has to provide an oriented triangle which outlines the size, base and orientation of the hand palm.\nThis conditional map is easy to draw, as it is possible to provide a simple interface where users can draw a triangle simply specifying the three delimiting points of the triangle, plus its base.\nMoreover, the triangle conditional map is category independent, as it does not contain any information about the gesture.\nWe annotated all images for both datasets with the corresponding triangle conditional maps.\n\n\\medskip\n\\noindent \\textit{Boundary Map.}\nIn the boundary map annotation, the user has to draw the contour of the desired target gesture. This type of annotation is weakly category dependent, since from the conditional map (see Figure~\\ref{fig:maps} in the center) it may be possible to infer the target gesture category. 
However, as this shape is easy to draw, it may be a valid alternative to the skeleton and triangle maps.\nWe annotated all 1,320 images for the NTU Hand dataset with the corresponding boundary annotations.\n\n\\medskip\n\\noindent \\textit{Skeleton Map.}\nIn the skeleton map, the user is required to draw either a complicated skeleton of the hand gesture or the exact position of the hand gesture key-points.\nHowever, when the target gesture image is available, it is sometimes possible to draw them automatically. \nAs in~\\cite{tang2018gesturegan}, we obtain the skeleton conditional maps by connecting the key-points obtained through OpenPose, a state of the art hand key-points detector~\\cite{simon2017hand, cao2018openpose}.\nIn the case of \\textit{normal} experimental setting, the hand pose is detected for all the 647 and 494 images of the two datasets. Instead, in the case of \\textit{challenging} experimental setting, the key-points could not be obtained for over half of the image set. \nTo this reason, the skeleton map is considered only in the \\textit{normal} experimental setting.\nThis conditional map is hard to draw, and strongly dependent on the category of hand gesture.\n\n\\begin{figure}[ht]\n\t\\centering\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.16\\columnwidth]{figures\/maps\/S2.jpg}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.16\\columnwidth]{figures\/maps\/S.png}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.16\\columnwidth]{figures\/maps\/B2.jpg}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.16\\columnwidth]{figures\/maps\/B.png}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.16\\columnwidth]{figures\/maps\/T2.jpg}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.16\\columnwidth]{figures\/maps\/T.png}}\n\t\\caption{The three considered shapes of the conditional map, sorted by user drawing effort: from (left) the most difficult to (right) the easiest to draw.}\n\t\\label{fig:maps}\n\\end{figure}\n\n\n\\subsection{Evaluation}\n\\label{Sec:evaluation}\n\n\\textbf{Baseline models.}\nAs our baseline models we select the state of the art for hand gesture translation \\emph{in the wild} GestureGAN~\\cite{tang2018gesturegan}. \nAlso, we adopt StarGAN~\\cite{choi2018stargan}, GANimation~\\cite{pumarola2018ganimation}, and PG$^2$~\\cite{ma2017pose} as they showed impressive results on multi-domain image-to-image translation. \nBoth StarGAN and GANimation learn to use attribute vectors to transfer facial images from one expression to another one. GestureGAN learns to transfer hand gestures via category-dependent skeleton maps.\n\n\\medskip\n\\noindent \\textbf{Evaluation metrics.}\nWe quantitatively evaluate our method performance using two metrics that measure the quality of generated images, namely Peak Signal-to-Noise Ratio (PSNR) and Fr\u00e9chet Inception Distance (FID)~\\cite{NIPS2017_7240}, and the F1-score, which measures whether the generated images depict a consistent category label. \nMoreover, to be comparable with GestureGAN~\\cite{tang2018gesturegan}, we employ the Mean Squared Error (MSE) between pixels and the Inception Score (IS)~\\cite{gulrajani2017improved}.\nHowever, these two metrics were indicated as highly unstable~\\cite{barratt2018note,borji2019pros}\nand they are not strictly related to the quality of generated images. To this reason, we report their results on a separate table and do not further discuss them. 
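As a side note on annotation cost, the \\textit{Triangle Map} described above can be rasterized from the three user-provided points with a few lines of code; the following OpenCV sketch is only illustrative, and the grey-level convention used to mark the base edge is an assumption.
\\begin{verbatim}
import numpy as np
import cv2

def triangle_map(points, base_idx=0, height=480, width=640):
    """Rasterize an oriented triangle conditional map.

    points   -- three (x, y) vertices outlining the hand palm
    base_idx -- index of the vertex that starts the base edge
    """
    canvas = np.zeros((height, width), dtype=np.uint8)
    pts = np.asarray(points, dtype=np.int32)
    cv2.fillConvexPoly(canvas, pts, color=128)
    # Draw the base edge brighter so that the orientation is unambiguous.
    a = tuple(int(v) for v in pts[base_idx])
    b = tuple(int(v) for v in pts[(base_idx + 1) % 3])
    cv2.line(canvas, a, b, color=255, thickness=5)
    return canvas
\\end{verbatim}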
\n\n\\medskip\n\\noindent \\textit{PSNR.} It compares two images through their MSE and the maximal pixel intensity ($MAX_I = 255$). It is defined as: $PSNR = 20 \\log_{10}(\\frac{MAX_I}{\\sqrt{MSE}})$.\n\n\n\\medskip\n\\noindent \\textit{FID.} It is defined as the distance between two Gaussians with means and covariances $(\\mu_x, \\Sigma_x)$ and $(\\mu_y, \\Sigma_y)$: $FID(x,y) = || \\mu_x- \\mu_y ||^2_2 + Tr(\\Sigma_x + \\Sigma_y - 2(\\Sigma_x\\Sigma_y)^{1\/2})$, where the two Gaussians are defined in the feature space of the Inception model.\n\n\\medskip\n\\noindent \\textit{F1.} The F1-score for binary classifiers is defined as $F_1 = (2 p r)\/(p+r)$, where $p$ and $r$ are the precision and recall. For multi-class classifiers, it can be defined as the sum of \nthe F1-scores of each class, weighted by the percentage of labels belonging to the class.\nThe resulting measure ranges from 0 to 1, where 1 means that all the classes are correctly classified and 0 the opposite. Thus, we use the F1-score to evaluate the category consistency of our generated images. To compute the F1-score, we train a network to recognize hand gestures. Further details are provided in the Implementation Details.\n\n\n\\medskip\n\\noindent \\textit{Perceptual User study.} \nWe run a ``fake'' vs ``real'' perceptual user study following the same protocol as~\\cite{tang2018gesturegan,yang2018crossing}. Users are shown a pair of images, the original and the generated image, and they are asked to select the fake one. The images are displayed for only 1 second before the user can provide his or her choice. The image pairs are randomly shuffled in order to avoid introducing bias in the results. \nOverall, we collected perceptual annotations from 12 users and each user was asked to vote for 98 image comparisons. Specifically, 12 image translation pairs were selected for each dataset and experimental setting. 
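The three quantitative metrics above (PSNR, FID and the weighted F1-score) can be sketched as follows with NumPy, SciPy and scikit-learn; the extraction of Inception features needed for FID is abstracted away and left as an input, which is an assumption of this sketch.
\\begin{verbatim}
import numpy as np
from scipy import linalg
from sklearn.metrics import f1_score

def psnr(img_a, img_b, max_i=255.0):
    """Peak Signal-to-Noise Ratio between two images of equal size."""
    mse = np.mean((img_a.astype(np.float64) - img_b.astype(np.float64)) ** 2)
    return 20.0 * np.log10(max_i / np.sqrt(mse))

def fid(feat_x, feat_y):
    """Frechet Inception Distance from two sets of Inception features
    (one row per image)."""
    mu_x, mu_y = feat_x.mean(axis=0), feat_y.mean(axis=0)
    sig_x = np.cov(feat_x, rowvar=False)
    sig_y = np.cov(feat_y, rowvar=False)
    covmean = linalg.sqrtm(sig_x.dot(sig_y))
    if np.iscomplexobj(covmean):          # discard tiny imaginary parts
        covmean = covmean.real
    return np.sum((mu_x - mu_y) ** 2) + np.trace(sig_x + sig_y - 2.0 * covmean)

def weighted_f1(y_true, y_pred):
    """Multi-class F1-score, weighted by the support of each class."""
    return f1_score(y_true, y_pred, average="weighted")
\\end{verbatim}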
\n\n\n\n\\begin{table}[t]\n\t\\centering\n\t\\caption{Quantitative comparison for the gesture-to-gesture translation task in \\textit{normal} experimental setting, using the same evaluation metrics as GestureGAN~\\cite{tang2018gesturegan}.}\n\n\t\\resizebox{0.99\\linewidth}{!}{ \n\t\\begin{tabular}{@{}l rrr rrr@{}}\n\t\\toprule\n\t\t\\textbf{Model} & \n\t\t\\multicolumn{3}{c}{\\textbf{NTU Hand Gesture~\\cite{ren2013robust}}} & \\multicolumn{3}{c}{\\textbf{Creative Senz3D~\\cite{memo2018head}}} \\\\\n\t\t\\cmidrule(r{4pt}){2-4} \\cmidrule(l{4pt}){5-7}\n\t\t& MSE & PSNR & IS & MSE & PSNR & IS\n\t\t\\\\\n\t\t\\midrule\n\t\tPG$^2$~\\cite{ma2017pose} & 116.10 & 28.24 & 2.42 & 199.44 & 26.51 & 3.37\n\t\t\\\\\n\t\tYan \\emph{et al.}~\\cite{yan2017skeleton} & 118.12 & 28.02 & 2.49 & 175.86 & 26.95 & 3.33\n\t\t\\\\\n\t\tMa \\emph{et al.}~\\cite{ma2018disentangled} & 113.78 & 30.65 & 2.45 & 183.65 & 26.95 & 3.38\n\t\t\\\\\n\t\tPoseGAN \\emph{et al.}~\\cite{siarohin2018deformable} & 113.65 & 29.55 & 2.40 & 176.35 & 27.30 & 3.21\n\t\t\\\\\n\t\tGestureGAN~\\cite{tang2018gesturegan} & 105.73 & 32.61 & \\textbf{2.55} & 169.92 & 27.97 & \\textbf{3.41}\n\t\t\\\\\n\t\t${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}~- no rolling &\n\t\t31.80 & 33.57 & 1.92 &\n\t\t46.79 & 31.98 & 2.31\n\t\t\\\\\n\t\t${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN} &\n\t\t\\textbf{15.76} & \\textbf{36.51} & 2.00 &\n\t\t\\textbf{21.73} & \\textbf{35.39} & 2.34\n\t\t\\\\\n\t\t\\bottomrule\n\t\\end{tabular}\n\t}\n\t\\label{tab:QuantitativeResults_normal}\n\\end{table}\n\n\n\n\n\n\n\\begin{table*}[t]\n\t\\centering\n\t\\caption{Quantitative results for the gesture-to-gesture translation task for the two experimental settings.}\n\n\t\\begin{tabularx}{\\textwidth}{@{}Xccrrrrrrrrrrrrrrrrrrr@{}}\n\t\\toprule\n\t\t\\textbf{Model} & \\textbf{Experimental setting} & \\textbf{Easy to draw map} & \\phantom{a} &\n\t\t\\multicolumn{3}{c}{\\textbf{NTU Hand Gesture~\\cite{ren2013robust}}} & \\phantom{a} & \\multicolumn{3}{c}{\\textbf{Creative Senz3D~\\cite{memo2018head}}} \\\\\n\t\t\\cmidrule{5-7} \\cmidrule{9-11}\n\t\t& & && PSNR & FID & F1 && PSNR & FID & F1\n\t\t\\\\\n\t\t\\midrule\n\t\tGANimation~\\cite{pumarola2018ganimation} & \\multirow{6}{*}{\\textit{Normal}} & \\multirow{6}{*}{$\\times$} && 10.03 & 440.45 & 0.08 && 11.11 & 402.99 & 0.23 \n\t\t\\\\\n\t\tStarGAN~\\cite{choi2018stargan} &&&& 17.79 & 98.84 & 0.09 && 11.44 & 137.88 & 0.07 \n\t\t\\\\\n\t\tPG$^2$~\\cite{siarohin2018deformable} &&&& 21.71 & 66.83 & 0.15 && 21.78 & 122.08 & 0.68\n\t\t\\\\\n\t\tGestureGAN~\\cite{tang2018gesturegan} &&&& 34.24 & 15.06 & \\textbf{0.93} & & 28.65 & 54.00 & 0.96\n\t\t\\\\\n\t\t${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}\\ - no rolling &&&&\n\t\t34.07 & 20.29 & 0.90 &&\n\t\t31.98 & 58.28 & 0.88\n\t\t\\\\\n\t\t${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN} &&&&\n\t\t\\textbf{36.51} & \\textbf{13.73} & \\textbf{0.93} &&\n\t\t\\textbf{35.39} & \\textbf{32.04} & \\textbf{0.99}\n\t\t\\\\\n\t\t\\midrule\n\t\t\\midrule\n\t\tPG$^2$~\\cite{ma2017pose} & \\multirow{5}{*}{\\textit{Challenging}} & \\multirow{5}{*}{\\checkmark} && 21.94 & 135.32 & 0.10 && 18.30 & 265.73 & 0.10\n\t\t\\\\\n\t\tGestureGAN~\\cite{tang2018gesturegan} && && 25.60 & 94.10 & 0.38 & & 18.69 & 254.37 & 0.34\n\t\t\\\\\n\t\tGestureGAN$^\\dag$ && && 27.32 & 59.79 & 0.43 & & 22.24 & 192.77 & 0.38\n\t\t\\\\\n\t\t${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}\\ - no rolling &&& & 27.14 & 61.14 & 0.43 && 22.77 & 134.54 & 
0.33\n\t\t\\\\\n\t\t${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN} &&&& \\textbf{28.11} & \\textbf{47.88} & \\textbf{0.61} && \\textbf{23.11} & \\textbf{101.22} & \\textbf{0.58}\n\t\t\\\\\n\t\t\\bottomrule\n\t\\end{tabularx}\n\t\\label{tab:QuantitativeResults_both}\n\\end{table*}\n\\begin{sloppypar}\n\n\n\n\\medskip\n\\noindent \\textbf{Implementation Details.}\nInspired by previous methods~\\cite{zhu2017unpaired,choi2018stargan}, both $E_1$ and $E_2$ are composed of two convolutional layers with the stride size of two for downsampling, $F_{res}$ refers to six residual blocks~\\cite{he2016deep}, and $F_{dec}$ is composed of two transposed convolutional layers with the stride size of two for upsampling. \nWe train our model using Adam~\\cite{kingma2014adam} with $\\beta_1=0.5$ and $\\beta_2=0.999$ and batch size 4. We use an $n$-dimensional one-hot vector to represent the category label ($n=10$ for NTU dataset and $n=11$ for Senz3D Dataset). For data augmentation we flip the images horizontally with a probability of 0.5 and we reverse the ``A-to-B\" direction with a probability of 0.5. The initial learning rate is set to $0.0002$. We train for 20 epochs and linearly decay the rate to zero over the last 10 epochs. To reduce model oscillation~\\cite{goodfellow2016nips}, we follow previous works~\\cite{shrivastava2017learning,zhu2017unpaired} and update the discriminators using a history of generated images rather than the ones produced by the latest generators. \nWe use instance normalization~\\cite{ulyanov2016instance} for the generator $G$.\nFor all the experiments, the weight coefficients for the loss term in Eq.~\\ref{eq:loss_D} and Eq.~\\ref{eq:loss_G} are set to $\\lambda_{D} = 1$, $\\lambda_{G} = 2$,\n$\\lambda_{cls} = 1$, $\\lambda_{rec}=100$, $\\lambda_{idt}=10$, $\\lambda_{cyc}=10$ and $\\lambda_{tv}=1e-5$.\nBaseline models are optimized using the same settings described in the respective articles.\nWe used the source code released by the authors for all competing works, except for GestureGAN, which was implemented from scratch following the description of the original article~\\cite{tang2018gesturegan}.\n${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}~is implemented using the deep learning framework PyTorch.\n\nTo compute the F1-score, we train a network on hand gesture recognition using Inception v3~\\cite{szegedy2016rethinking} network fine tuned on the NTU Hand Gesture and Creative Senz3D datasets. \nThe network achieves F1-score 0.93 and 0.99 on Creative Senz3D and NTU Hand Gesture test sets, respectively. 
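A short sketch of the optimization schedule described above; reading the 20-epoch schedule as 10 constant-rate epochs followed by a 10-epoch linear decay is our interpretation of the text, and the placeholder generator stands in for the full $E_1$\/$E_2$\/decoder network.
\\begin{verbatim}
import torch
import torch.nn as nn

# Placeholder generator; the real model is the E1/E2/decoder network.
G = nn.Conv2d(3, 3, kernel_size=3, padding=1)

# Settings listed in the text: Adam(0.5, 0.999), lr 2e-4, batch size 4,
# 20 epochs with a linear decay to zero over the last 10 epochs.
N_EPOCHS, N_DECAY, LR = 20, 10, 2e-4
optimizer_G = torch.optim.Adam(G.parameters(), lr=LR, betas=(0.5, 0.999))

def lr_lambda(epoch):
    # 1.0 during the constant phase, then a linear ramp towards zero.
    return 1.0 - max(0, epoch - (N_EPOCHS - N_DECAY)) / float(N_DECAY)

scheduler_G = torch.optim.lr_scheduler.LambdaLR(optimizer_G, lr_lambda=lr_lambda)

for epoch in range(N_EPOCHS):
    # ... one pass over the training pairs (horizontal flip and A-to-B
    # reversal each applied with probability 0.5) ...
    scheduler_G.step()
\\end{verbatim}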
Additional details on the training can be found in the supplementary materials.\n\n\n\\end{sloppypar}\n\n\n\n\n\n\n\n\n\n\\begin{figure*}[ht]\n\t\\centering\n\t\\subfloat[Source]{%\n\t\t\\includegraphics[width=0.12\\textwidth]{figures\/qualitative\/P1-G1-3-AB-P1-G5-6_real_A.jpeg}}\\hfill\n\t\\subfloat[Target skeleton]{%\n\t\t\\includegraphics[width=0.12\\textwidth]{figures\/qualitative\/P1-G1-3-AB-P1-G5-6_cond_B.jpeg}}\\hfill\n\t\\subfloat[Ground truth]{%\n\t\t\\includegraphics[width=0.12\\textwidth]{figures\/qualitative\/P1-G1-3-AB-P1-G5-6_real_B.jpeg}}\\hfill\n\t\\subfloat[GANimation~\\cite{pumarola2018ganimation}]{%\n\t\t\\includegraphics[width=0.12\\textwidth]{figures\/qualitative\/P1-G1-3-AB-P1-G5-6_fake_B_masked.jpeg}}\\hfill\n\t\\subfloat[StarGAN~\\cite{choi2018stargan}]{%\n\t\t\\includegraphics[width=0.12\\textwidth]{figures\/qualitative\/P1-G1-3-AB-P1-G5-6_fake_B.jpeg}}\\hfill\n\t\\subfloat[PG$^2$~\\cite{ma2017pose}]{%\n\t\t\\includegraphics[width=0.12\\textwidth]{figures\/qualitative\/P1-G1-3-AB-P1-G5-6_fake_B2.jpeg}}\\hfill\n\t\\subfloat[GestureGAN~\\cite{tang2018gesturegan}]{%\n\t\t\\includegraphics[width=0.12\\textwidth]{figures\/qualitative\/P1-G1-3-AB-P1-G5-6_fake_B_gest.jpeg}}\\hfill\n\t\\subfloat[$\\pmb{\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN} ]{%\n\t\t\\includegraphics[width=0.12\\textwidth]{figures\/qualitative\/P1-G1-3-AB-P1-G5-6_fake_B2_masked_roll.jpeg}}\n\n\t%\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \\\\\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.12\\textwidth]{figures\/qualitative\/S1-G10-6-color-AB-S1-G7-18-color_real_A.jpeg}\n\t\t}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.12\\textwidth]{figures\/qualitative\/S1-G10-6-color-AB-S1-G7-18-color_cond_B.jpeg}}\\hfill\n\t\\subfloat{%\n\t\\includegraphics[width=0.12\\textwidth]{figures\/qualitative\/S1-G10-6-color-AB-S1-G7-18-color_real_B.jpeg}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.12\\textwidth]{figures\/qualitative\/S1-G10-6-color-AB-S1-G7-18-color_fake_B_masked.jpeg}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.12\\textwidth]{figures\/qualitative\/S1-G10-6-color-AB-S1-G7-18-color_fake_B.jpeg}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.12\\textwidth]{figures\/qualitative\/S1-G10-6-color-AB-S1-G7-18-color_fake_B2_pg2.jpeg}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.12\\textwidth]{figures\/qualitative\/S1-G10-6-color-AB-S1-G7-18-color_fake_B_gesture.jpeg}}\\hfill\n\t\\subfloat{\n\t\t\\includegraphics[width=0.12\\textwidth]{figures\/qualitative\/S1-G10-6-color-AB-S1-G7-18-color_fake_B2_masked_rolling.jpeg}}\n\t\\\\\n\t\\subfloat\n\t\t\\includegraphics[width=0.12\\textwidth]{figures\/qualitative\/S2-G4-9-color-AB-S2-G8-4-color_real_A.jpeg}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.12\\textwidth]{figures\/qualitative\/S2-G4-9-color-AB-S2-G8-4-color_cond_B.jpeg}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.12\\textwidth]{figures\/qualitative\/S2-G4-9-color-AB-S2-G8-4-color_real_B.jpeg}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.12\\textwidth]{figures\/qualitative\/S2-G4-9-color-AB-S2-G8-4-color_fake_B_masked.jpeg}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.12\\textwidth]{figures\/qualitative\/S2-G4-9-color-AB-S2-G8-4-color_fake_B.jpeg}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.12\\textwidth]{figures\/qualitative\/S2-G4-9-color-AB-S2-G8-4-color_fake_B2.jpeg}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.12\\textwidth]{figures\/qualitative\/S2-G4-9-color-AB-S2-G8-4-color_fake_B_gesture.jpe
g}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.12\\textwidth]{figures\/qualitative\/S2-G4-9-color-AB-S2-G8-4-color_fake_B2_masked_roll.jpeg}}\n\n\t\\caption{Qualitative comparison between ${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}~and competing works in the \\textit{normal} experimental setting. NTU Hand Gesture dataset (top two rows) and Creative Senz3D (bottom two rows).}\n\t\\label{fig:res_qual_normal}\n\\end{figure*}\n\n\\begin{figure}[t]\n\t\\centering\n\t\\subfloat[Source]{%\n\t\t\\includegraphics[width=0.192\\columnwidth]{figures\/qualitative\/P1-G2-6-AB-P1-G9-8_real_A.jpeg}}\\hfill\n\t\\subfloat[Target]{%\n\t\t\\includegraphics[width=0.192\\columnwidth]{figures\/qualitative\/P1-G2-6-AB-P1-G9-8_real_B.jpeg}}\\hfill\n\t\\subfloat[PG$^2$]{%\n\t\t\\includegraphics[width=0.192\\columnwidth]{figures\/qualitative\/P1-G2-6-AB-P1-G9-8_diff_map.jpeg}}\\hfill\n\t\\subfloat[GANimation]{%\n\t\t\\includegraphics[width=0.192\\columnwidth]{figures\/qualitative\/S1-G1-16-color-AB-S1-G4-26-color_fake_B_mask.png}}\\hfill\n\t\\subfloat[$\\pmb{\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}]{%\n\t\t\\includegraphics[width=0.192\\columnwidth]{figures\/qualitative\/P1-G2-6-AB-P1-G9-8_fake_B2_mask.jpeg}}\n\t\\\\\n\t%\n\n\n\n\n\n\n\n\n\n\n\n\t%\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.192\\columnwidth]{figures\/qualitative\/S1-G1-16-color-AB-S1-G4-26-color_real_A.jpeg}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.192\\columnwidth]{figures\/qualitative\/S1-G1-16-color-AB-S1-G4-26-color_real_B.jpeg}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.192\\columnwidth]{figures\/qualitative\/S1-G1-16-color-AB-S1-G4-26-color_diff_map.jpeg}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.192\\columnwidth]{figures\/qualitative\/S1-G1-16-color-AB-S1-G4-26-color_fake_B_mask.png}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.192\\columnwidth]{figures\/qualitative\/S1-G1-16-color-AB-S1-G4-26-color_fake_B2_mask.jpeg}}\n\n\t%\n\n\n\n\n\n\n\n\n\n\n\n\t\\caption{Masks computed by the various state of the art methods. 
Specifically, PG$^2$ computes a difference map that is noisy, GANimation fails at computing the attention mask, while $\\pmb{\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}~computes the attention of the pixels that stay constant from source to target images.}\n\t\\label{fig:attention}\n\\end{figure}\n\n\n\n\n\n\n\n\\begin{figure*}[ht]\n\t\\centering\n\t\\begin{minipage}{.88\\textwidth}\n\t\\centering\n\t\\subfloat[Source]{%\n\t\t\\includegraphics[width=0.137\\textwidth]{figures\/qualitative\/P1-G2-6-AB-P1-G9-9_real_A.jpeg}}\\hfill\n\t\\subfloat[Target triangle]{%\n\t\t\\includegraphics[width=0.137\\textwidth]{figures\/qualitative\/P1-G2-6-AB-P1-G9-9_cond_B.jpeg}}\\hfill\n\t\\subfloat[Category label]{%\n\t\t\\squarecat{\"9\"}}\\hfill\n\t\\subfloat[Ground truth]{%\n\t\t\\includegraphics[width=0.137\\textwidth]{figures\/qualitative\/P1-G2-6-AB-P1-G9-9_real_B.jpeg}}\\hfill\n\t\\subfloat[PG$^2$]{%\n\t\t\\includegraphics[width=0.137\\textwidth]{figures\/qualitative\/P1-G2-6-AB-P1-G9-9_fake_B1.jpeg}}\\hfill\n\t\\subfloat[GestureGAN]{%\n\t\t\\includegraphics[width=0.137\\textwidth]{figures\/qualitative\/P1-G2-6-AB-P1-G9-9_fake_B.jpeg}}\\hfill\n\t\\subfloat[$\\pmb{\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}]{%\n\t\t\\includegraphics[width=0.137\\textwidth]{figures\/qualitative\/P1-G2-6-AB-P1-G9-9_fake_B2_masked.jpeg}} \n\t\t\\\\\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.137\\textwidth]{figures\/qualitative\/tria\/S1-G1-16-color-AB-S1-G4-26-color_real_A.jpeg}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.137\\textwidth]{figures\/qualitative\/tria\/S1-G1-16-color-AB-S1-G4-26-color_cond_B.jpeg}}\\hfill\n\t\\subfloat{%\n\t\t\\squarecat{\"4\"}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.137\\textwidth]{figures\/qualitative\/tria\/S1-G1-16-color-AB-S1-G4-26-color_real_B.jpeg}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.137\\textwidth]{figures\/qualitative\/tria\/S1-G1-16-color-AB-S1-G4-26-color_fake_B2.jpeg}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.137\\textwidth]{figures\/qualitative\/tria\/S1-G1-16-color-AB-S1-G4-26-color_fake_B.jpeg}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.137\\textwidth]{figures\/qualitative\/tria\/S1-G1-16-color-AB-S1-G4-26-color_fake_B2_masked.jpeg}}\n\t\\end{minipage}\n\n\t\\caption{Qualitative comparison between ${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}~and competing works in the \\textit{challenging} experimental setting. NTU Hand Gesture dataset (top row) and Creative Senz3D (bottom row).\n\t}\n\t\\label{fig:res_qual_challenging}\n\t\n\\end{figure*}\n\n\n\n\n\n\\section{Results}\n\\noindent \\textbf{Quantitative Results.} \nWe begin by directly comparing ${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}\\ with the same experimental protocol and metrics used by our most similar competitor, GestureGAN~\\cite{tang2018gesturegan}.\nTable~\\ref{tab:QuantitativeResults_normal} shows that ${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}\\ performs better than all competing works in terms of MSE and PSNR, especially when the rolling guidance is employed. In terms of IS GestureGAN performs better.\nHowever, the MSE and the IS are not directly related to the quality of the generated images. 
The MSE is indeed a metric of pixel difference, while the low reliability of the Inception Score is well known~\\cite{barratt2018note}.\n\nFor this reason, we compare our method with competing works using the PSNR, F1-score and the FID score, both in \\textit{normal} and \\textit{challenging} experimental settings. These metrics compare the diversity, quality and hand-gesture consistency of the generated images.\nTable~\\ref{tab:QuantitativeResults_both} shows that ${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}\\ outperforms all competing works, in both the experimental settings and for all the metrics.\nIn the \\emph{normal} setting, compared to GestureGAN, we achieve higher PSNR (36.51 vs 34.24 and 35.39 vs 28.65), F1-score (0.99 vs 0.96) and lower FID (13.73 vs 15.06 and 32.04 vs 54.00) in both datasets. \nOther methods perform particularly poor in terms of F1-score. \nFor example, GANimation and StarGAN perform near randomness in the F1-score (random baseline $\\sim 0.09$), while ${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}~ achieves near-perfect performance in Creative Senz3D ($0.99$) and NTU Hand Gesture ($0.93$).\nThese outcomes might be related to the fact that GANimation, StarGAN and PG$^2$ are not designed for image translation \\emph{in the wild}.\n\n${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}\\ significantly outperforms the competing methods in the \\emph{challenging} setting, where we enforce a stricter division of training and test gestures, and we use the 100\\% of the data, differently from GestureGAN's settings.\nIn particular, the FID score reduces by 49\\% and 60\\% from GestureGAN in NTU Hand Gesture and Creative Senz3D, respectively. \nIn terms of F1-score, the result improves by 61\\% and 71\\% in NTU Hand Gesture and Creative Senz3D, respectively. \nThe PSNR improves by 10\\% and 24\\% in NTU Hand Gesture and Creative Senz3D, respectively. We note that the rolling guidance applied to GestureGAN (denoted in Table~\\ref{tab:QuantitativeResults_both} as GestureGAN$^\\dag$) improves the original FID results by 36\\% and 24\\% in NTU Hand Gesture and Creative Senz3D, respectively. \n\n\nAltogether, the quantitative results show that ${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}\\ outperforms the state of the art, both in the \\emph{normal} and \\emph{challenging} setting. \n\n\\begin{sloppypar}\n\\medskip\n\\noindent \\textbf{Qualitative Results.} Figure~\\ref{fig:res_qual_normal} shows some randomly selected gesture-to-gesture translations in the \\textit{normal} experimental setting. \nBoth GestureGAN~\\cite{tang2018gesturegan} and ${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}\\ produce sharper output results while the output gestures from PG$^2$~\\cite{ma2017pose} are very blurry. ${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}\\ also produces a better define sharper background than GestureGAN. \nStarGAN and GANimation, however, fail to produce gestures from the provided conditional map.\n\nWe further inspect the reason behind the poor results of PG$^2$ and GANimation. \nThese methods focus on multi-domain translation and are specifically tailored to the task of facial expression translation and to the cases where the input and target objects are aligned. \nFigure~\\ref{fig:attention} depicts two sample cases of the difference and attention masks generated by these methods. It can be seen that PG$^2$ fails at finding the difference mask, especially in the NTU Hand Gesture dataset (top row). 
Similarly, GANimation generates an empty attention mask, which might also be the cause of its poor results in both Figure~\\ref{fig:res_qual_normal} and Table~\\ref{tab:QuantitativeResults_both}. \n${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}, instead, learns to focus on the differences between the source and target images.\n\nFigure~\\ref{fig:res_qual_challenging} shows the results in the \\textit{challenging} setting. \n${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}\\ generates sharper images with recognizable hand gestures, while other methods such as GestureGAN fails at it. \nThis result is in line with the F1-scores reported in Table~\\ref{tab:QuantitativeResults_both} (bottom), which range within 0.10 and 0.38 for the competing works, and within 0.58 and 0.61 in case of ${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}. \nBoth qualitative and quantitative results confirm that state-of-the-art methods are not adapted to perform gesture translation in the challenging settings, i.e. where a user-friendly category independent conditional map is provided, and where the network is asked to translate to an unseen gesture for a given user.\nWe refer to the Supplementary Material for additional qualitative Figures.\n\\end{sloppypar}\n\n\n\n\n\\begin{figure}[!htp]\n\t\\centering\n\t\\subfloat[Source]{%\n\t\t\\includegraphics[width=0.192\\columnwidth]{figures\/qualitative\/diversity\/S2-G6-4-color-S2-G9-5-color-1_real_A.jpeg}}\\hfill\n\t\\subfloat[Target map]{%\n\t\t\\includegraphics[width=0.192\\columnwidth]{figures\/qualitative\/diversity\/S2-G6-4-color-S2-G9-5-color-1_cond_B.jpeg}}\\hfill\n\t\\subfloat[\"7\"]{%\n\t\t\\includegraphics[width=0.192\\columnwidth]{figures\/qualitative\/diversity\/S2-G6-4-color-S2-G9-5-color-7_fake_B2_masked.jpeg}}\\hfill\n\t\\subfloat[\"9\"]{%\n\t\t\\includegraphics[width=0.192\\columnwidth]{figures\/qualitative\/diversity\/S2-G6-4-color-S2-G9-5-color-9_fake_B2_masked.jpeg}}\\hfill\n\t\\subfloat[\"11\"]{%\n\t\t\\includegraphics[width=0.192\\columnwidth]{figures\/qualitative\/diversity\/S2-G6-4-color-S2-G9-5-color-11_fake_B2_masked.jpeg}}\n\t\\\\ \n\t%\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.192\\columnwidth]{figures\/qualitative\/diversity\/BOS2-G6-4-color-S2-G9-5-color-1_real_A.jpeg}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.192\\columnwidth]{figures\/qualitative\/diversity\/BOS2-G6-4-color-S2-G9-5-color-1_cond_B.jpeg}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.192\\columnwidth]{figures\/qualitative\/diversity\/BOS2-G6-4-color-S2-G9-5-color-7_fake_B2_masked.jpeg}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.192\\columnwidth]{figures\/qualitative\/diversity\/BOS2-G6-4-color-S2-G9-5-color-9_fake_B2_masked.jpeg}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.192\\columnwidth]{figures\/qualitative\/diversity\/BOS2-G6-4-color-S2-G9-5-color-7_fake_B2_masked.jpeg}}\n\t\\\\ 
\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.192\\columnwidth]{figures\/qualitative\/diversity\/SKS2-G6-4-color-S2-G9-5-color-1_real_A.jpeg}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.192\\columnwidth]{figures\/qualitative\/diversity\/SKS2-G6-4-color-S2-G9-5-color-1_cond_B.jpeg}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.192\\columnwidth]{figures\/qualitative\/diversity\/SKS2-G6-4-color-S2-G9-5-color-7_fake_B2_masked.jpeg}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.192\\columnwidth]{figures\/qualitative\/diversity\/SKS2-G6-4-color-S2-G9-5-color-9_fake_B2_masked.jpeg}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.192\\columnwidth]{figures\/qualitative\/diversity\/SKS2-G6-4-color-S2-G9-5-color-11_fake_B2_masked.jpeg}}\n\n\t\\caption{${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}~decouples the conditional map from the category label of the hand gesture. The same conditional map can be used with different category labels to generate multiple images. \n\t}\n\t\\label{fig:diversity}\n\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\renewcommand{\\tabcolsep}{1pt}\n\t\\subfloat{\\footnotesize\\begin{tabular}{>{\\centering\\arraybackslash}p{0.39\\columnwidth}>{\\centering\\arraybackslash}p{0.58\\columnwidth}}\n \\textbf{Source} & \\textbf{Conditional maps and generated images} \\\\ \n \\end{tabular}}\\\\\n\t%\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \\subfloat{%\n\t\t\\includegraphics[width=0.39\\columnwidth,height=0.396\\columnwidth]{figures\/qualitative\/shift\/S1-G1-21-color-S1-G7-8-color-7-a_real_A.jpeg}}\\hfill\n\t\\subfloat{\\begin{minipage}{0.192\\columnwidth}%\n\t\t\t\\setlength{\\lineskip}{3pt}%\n\t\t\t\\includegraphics[width=\\columnwidth]{figures\/qualitative\/shift\/S1-G1-21-color-S1-G7-8-color-7-13_cond_B.jpeg}\\\\\n\t\t\t\\includegraphics[width=\\columnwidth]{figures\/qualitative\/shift\/S1-G1-21-color-S1-G7-8-color-7-13_fake_B2_masked.jpeg}\n\t\\end{minipage}}\\hfill\n\t\\subfloat{\\begin{minipage}{0.192\\columnwidth}%\n\t\t\t\\setlength{\\lineskip}{3pt}%\n\t\t\t\\includegraphics[width=\\columnwidth]{figures\/qualitative\/shift\/SKS1-G1-21-color-S1-G7-8-color-7-6_cond_B.jpeg}\\\\\n\t\t\t\\includegraphics[width=\\columnwidth]{figures\/qualitative\/shift\/SKS1-G1-21-color-S1-G7-8-color-7-6_fake_B2_masked.jpeg}\n\t\\end{minipage}}\\hfill\n\t\\subfloat{\\begin{minipage}{0.192\\columnwidth}%\n\t\t\t\\setlength{\\lineskip}{3pt}%\n\t\t\t\\includegraphics[width=\\columnwidth]{figures\/qualitative\/shift\/SKS1-G1-21-color-S1-G7-8-color-7-7_cond_B.jpeg}\\\\\n\t\t\t\\includegraphics[width=\\columnwidth]{figures\/qualitative\/shift\/SKS1-G1-21-color-S1-G7-8-color-7-7_fake_B2_masked.jpeg}\n\t\\end{minipage}}\n\n\t\\caption\n\t${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}~generates images with the same conditional map that is rotated, shifted and resized.\n\t}\n\t\\label{fig:diversity2}\n\n\\end{figure}\n\n\n\n\n\n\n\n\n\n\n\n\\medskip\n\\noindent \\textbf{Diversity Study.} ${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}\\ decouples the desired hand gesture category, specified through a class number, from its location, scale and orientation, which are specified by a conditional map.\nThis means that users can use the same conditional map with different hand gesture categories to generate multiple images.\nFigure~\\ref{fig:diversity} shows that ${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}\\ can generate three different distinguishable hand gestures by using the same \\emph{triangle} conditional map and different 
category numbers.\nInstead, when using non category-independent shapes are used (e.g. boundary or skeleton), ${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}\\ fails to synthesize several hand gestures categories from the same conditional map. \nAs mentioned before, \\emph{Boundary} maps are weakly category dependent, as their shape might suggest the type of gesture, while \\emph{Skeleton} and \\emph{Key-points} maps are category dependent. \n\nFurthermore, we test the performance of ${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}\\ with the same source image and category, but with different conditional maps. \nFor this test, we manually draw three triangle conditional maps with arbitrary size, location and orientation.\nFigure~\\ref{fig:diversity2} shows that our method faithfully synthesizes the target gestures in all cases.\nAltogether, we show that users can generate hand gestures \\emph{in the wild} with much less effort than state-of-the-art models, that require complex annotations that are dependent on the specific hand gesture users want to generate. \n\n\n\n\n\\begin{table}[t]\n \\caption{Perceptual user study. Percentage of times, on average, when the translated images are selected as ``real'' by users, in the ``fake'' vs. ``real'' comparison.}\n \n \\centering\n \\small\n \\begin{tabularx}{\\columnwidth}{@{}X rr rrr@{}}\n \\toprule \n \\textbf{Model} & \\multicolumn{2}{c}{\\textbf{NTU Hand Gesture~\\cite{ren2013robust}}} & \n \\multicolumn{2}{c}{\\textbf{Creative Senz3D~\\cite{memo2018head}}} \\\\\n \\cmidrule(r{4pt}){2-3} \\cmidrule(l{4pt}){4-5}\n & \\textit{Normal} & \\textit{Challenging} & \\textit{Normal} & \\textit{Challenging} \\\\\n \\midrule\n GestureGAN & 36.54\\% & 3.21\\% & 3.85\\% & 0.64\\% \\\\\n ${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN} & \\textbf{44.32\\%} & \\textbf{7.05\\%} & \\textbf{16.03\\%} & \\textbf{3.85\\%} \\\\\n \\bottomrule\n \\end{tabularx}\n \\label{tab:userstudy}\n\\end{table}\n\n\\medskip\n\\noindent \\textbf{Perception User Study.}\nTable~\\ref{tab:userstudy} shows the outcome of the perceptual user study, where we report the percentage of times when the translated image wins against the original target image in the real vs fake comparison, i.e when the original image is selected as fake. It can be seen that ${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}\\ outperforms GestureGAN~\\cite{tang2018gesturegan} in both experimental settings. 
\nThis means that ${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}\\ generates higher quality images that can be mistaken with the real ones at higher rates than competitors.\n\n\n\\begin{table}[t]\n\t\\centering\n\t\\small\n\t\\caption{Performance degradation of $\\pmb{\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}\\ by removing the rolling guidance approach.\n\t}\n\n\t\\begin{tabularx}{\\columnwidth}{@{}X c rrr@{}}\n\t\\toprule\n\t\t\\textbf{Conditional map} & \\textbf{Category} & \n\t\t\\multicolumn{3}{c}{\\textbf{Creative Senz3D~\\cite{memo2018head}}} \\\\ \n\t\t\\cmidrule{3-5} \n\t\t& \\textbf{independent} & PSNR & FID & F1 \n\t\t\\\\\n\t\t\\midrule\n\t\t\\emph{Triangle} & \\checkmark & 1.53\\% \\textcolor{red}{\\textdownarrow} & 24.77\\% \\textcolor{red}{\\textuparrow} & 75.76\\% \\textcolor{red}{\\textdownarrow}\n\t\t\\\\\n\t\t\\emph{Boundary} & $\\sim$ &\n\t\t 1.84\\% \\textcolor{red}{\\textdownarrow} & 35.37\\% \\textcolor{red}{\\textuparrow} & 58.06\\% \\textcolor{red}{\\textdownarrow}\n\t\t\\\\\n\t\t\\emph{Skeleton} & $\\times$ &\n\t\t 10.66\\% \\textcolor{red}{\\textdownarrow} & 81.90\\% \\textcolor{red}{\\textuparrow} & 12.50\\% \\textcolor{red}{\\textdownarrow}\n\t\t\\\\\n\t\t\\bottomrule\n\t\\end{tabularx}\n\t\\label{tab:ablation}\n\t\\vspace{-0.3cm}\n\\end{table}\n\n\n\\medskip\n\\noindent \\textbf{Ablation study.} We further investigate the effect of the rolling guidance approach by removing this component from ${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}. \nIn the quantitative results, Table~\\ref{tab:QuantitativeResults_both} it can be seen that the rolling guidance allows to significantly improves the quality of the generated images.\nIn Table~\\ref{tab:ablation} we report the degradation of our method performance without rolling guidance, on Creative Senz3D dataset, for the three types of annotation maps. \nSpecifically, we observe 24.77\\%, 35.37\\% and 81.90\\% worse (increased) FID score for the \\emph{Triangle}, \\emph{Boundary} and \\emph{Skeleton} maps, respectively. \nWhile the F1-score decreases by 75.76\\% and 58.06\\% on the \\emph{Triangle} and \\emph{Boundary} maps respectively, for \\emph{Skeleton} maps it decreases by 12.50\\%. In terms of PSNR the degradation is less significant for the \\emph{Triangle} and \\emph{Boundary}, but higher (10.66\\%) for \\emph{Skeleton} maps.\nFinally, we observed that a single rolling is sufficient to improve the generated results, while ${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}\\ does not benefit from additional rolling iterations.\n\n\n\n\n\n\n\n\\section{Conclusion}\nWe have presented a novel GAN architecture for gesture-to-gesture translation \\textit{in the wild}. Our model decouples the conditional input into a category label and an easy-to-draw conditional map.\nThe proposed attention module and rolling guidance approach allow generating sharper images \\textit{in the wild}, faithfully generating the target gesture while preserving the background context. \nExperiments on two public datasets have shown that ${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}\\ outperforms state of the art both quantitatively and qualitatively.\nMoreover, it allows the use of simple-to-draw and category-independent conditional maps, such as triangles.\nThis significantly reduces the annotation effort both at drawing time and allowing to use a single map for multiple gesture translations. 
The proposed framework is not limited to category labels, but can also be used with embeddings, learned from the data, that express the gesture characteristics. In future work, users could easily provide a reference image instead of a label, and translate the original image to the target gesture expressed by the reference image.\n\nOur approach is especially important when the target image, and thus the target conditional map, is not available, which is the typical scenario in photo-editing software. \n\n\n\n\n\n\\begin{acks}\nWe gratefully acknowledge NVIDIA Corporation for the donation of the Titan X GPUs and Fondazione Caritro for supporting the SMARTourism project.\n\\end{acks}\n\n\n\\bibliographystyle{ACM-Reference-Format}\n\\balance\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nStudying regions close to the Galactic plane in the optical is difficult both due to dust obscuration and source confusion. It was only recently that Feast et al. (2014) reported the discovery of five classical Cepheid variables at distances of 13 - 22 kpc from the Galactic center, towards the Galactic bulge, that may be associated with the flared atomic hydrogen disk of our Galaxy. Two classical Cepheid variables at 11 kpc close to the plane of the Milky Way have been recently uncovered from VVV data (Dekany et al. 2015), indicating an underlying young star cluster. Searches for dwarf galaxies in the optical have primarily targeted high latitudes (McConnachie 2012). The Sagittarius (Sgr) dwarf galaxy is the closest known dwarf galaxy to the plane, at a latitude of $b = -14^\\circ$ (Ibata et al. 1994). The dearth of Milky Way satellites at low latitudes (Mateo 1998; McConnachie 2012) is underscored by simulations that suggest that there may be massive, nearly dark satellites that have not yet been discovered (Boylan-Kolchin et al. 2011). Not only dwarf galaxies, but even bright spiral galaxies are not easily seen if they are hidden behind the obscuring column of dust and gas of the Galactic disk (Kraan-Korteweg et al. 1994).\n\nMining data from deep infrared surveys of the Galactic plane may well uncover new dwarf galaxies and halo sub-structure. This would alleviate several outstanding problems in near-field cosmology. The \"missing satellites problem\", or the overabundance of dwarf galaxies in cosmological simulations relative to the number of observed dwarf galaxies in and around the Local Group (Klypin et al. 1999), and the \"too big to fail problem\", wherein there are too few massive satellites in the Milky Way relative to cosmological simulations (Boylan-Kolchin et al. 2011), are two such outstanding problems. Yet another is the ostensibly anisotropic distribution of the Milky Way satellites (Kroupa et al. 2005). These discrepancies may be resolved by a more complete inventory of the structure of our Galaxy at low latitudes. \n\nWe have searched for distant stars close to the Galactic plane using near-infrared data from the ESO Public survey VISTA Variables of the Via Lactea (VVV) (Minniti et al. 2011; S12), targeting the VVV disk area, which covers Galactic longitudes $-65.3^\\circ < l <\n-10^\\circ$ within Galactic latitudes $-2.25^\\circ < b < +2.25^\\circ$. 
The VVV survey is an ongoing 5-band photometric survey in the Z (0.87$~\\mu\\rm m$), Y (1.02$~\\mu\\rm m$), \nJ (1.25 $~\\mu\\rm m$), H (1.64 $~\\mu\\rm m$) and $K_{s}$ (2.14 $~\\mu\\rm m$) bands (S12),\nand is multi-epoch in the $K_{s}$ band, with approximately 30-40 epochs per star across the VVV disk area at the time of writing. \nIn \\S \\ref{sec:results}, we review the methods we used to identify Cepheid variables, and present the distance and extinction values. We discuss possible interpretations and conclude in \\S \\ref{sec:conclusion}.\n\n\\section{Results \\& Analysis}\n\\label{sec:results}\n\nThe infrared photometry is from the VVV survey, which is based on aperture photometry\ncomputed on the individual tile images (S12).\nEach of the sources was observed with a median exposure time of $16$ s\nper pixel, depending on the position in the tile (each exposure is $8$ s\nlong, and most of the area in a tile is a combination of two\npointings). The limiting magnitude of the VVV data using aperture photometry\nis $K_{s} \\sim 18.0$ mag in most fields (S12). \nA particular pointing is called a ``tile'', covers $\\sim1.64$ square\ndegree, and contains approximately $10^{6}$ stars. \nAs a preliminary search, we examined the disk area of the VVV survey by applying color cuts that correspond to distant ($D > 60~\\rm kpc$) red-clump\nstars. Red-clump stars have been shown to be good distance indicators (Alves 2000; Paczynski \\& Stanek 1998).\nGiven the mean values of intrinsic near-infrared colors for red-clump stars in the Milky Way disk and the Cardelli et al. (1989) extinction law, we used the distance modulus noted in Minniti et al. (2011), which gives a color cut of $1.5<(J-K_{\\rm s})<1.8$ and\n$K_{s} > 17.6$ (which corresponds to distances in excess of $\\sim$ 60 kpc). Using this color cut, we saw an excess of distant red-clump stars \nat $l \\sim -27^{\\circ}$. We defer a detailed analysis of the red-clump stars and other stellar populations to a future paper.\n\nWe carried out a search for variable stars, restricting our search to faint variables, with mean $K_{s} > 15$ mag, and periods greater than three days. We examined the variability data in five tiles close to Galactic longitude $l \\sim -27^\\circ$ and searched six comparison tiles at other locations in the VVV disk area. We found four Cepheid variables at $l \\sim -27^\\circ$ at an average distance of $\\sim$ 90 kpc,\nand none in the other tiles. The survey strategy ensures that the tiles in the VVV disk area have similar number of observations and limiting magnitude (S12). While the control on the cadence is limited (Saito et al. 2013), we have checked that there is no significant difference in the cadence for the $l \\sim -27^\\circ$ tiles relative to the rest of the disk area, i.e., the region at $l \\sim -27^\\circ$ is not unique in terms of the way it was observed.\n\nIn identifying Cepheids, we employed\nseveral successive tests. The first two tests are based on the statistical significance of the highest peak in the Lomb-Scargle periodogram, and the uncertainty of the period, respectively. These two tests ensure that we have identified sources of a given pulsation period, and that there is a small uncertainty in the period that we derive. For the final cut, we quantitatively assess the shapes of the light curves by calculating the Fourier parameters of the sources, as well as the skewness and acuteness parameters, and visually inspect the light curves. 
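The first two of these tests can be sketched as follows (the next paragraphs give the full criteria); astropy's LombScargle is used here for the periodogram and its false-alarm probability, which may differ in detail from the implementation used for the actual search, and the light-curve arrays are assumed inputs.
\\begin{verbatim}
import numpy as np
from astropy.timeseries import LombScargle

P_MIN, P_MAX = 3.0, 50.0   # period search range in days

def best_period(t, ks):
    """Highest periodogram peak and its significance (1 - false alarm prob.)."""
    ls = LombScargle(t, ks)
    freq, power = ls.autopower(minimum_frequency=1.0 / P_MAX,
                               maximum_frequency=1.0 / P_MIN)
    i = np.argmax(power)
    return 1.0 / freq[i], 1.0 - ls.false_alarm_probability(power[i])

def bootstrap_period(t, ks, ks_err, n_boot=1000, seed=None):
    """Parametric bootstrap: perturb the magnitudes with Gaussian errors and
    collect the distribution of recovered periods."""
    rng = np.random.default_rng(seed)
    periods = np.array([best_period(t, rng.normal(ks, ks_err))[0]
                        for _ in range(n_boot)])
    return periods.mean(), periods.std()
\\end{verbatim}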
\n\nA given tile is searched using the Lomb-Scargle algorithm (Lomb 1976; Scargle 1982), and periodograms\nare constructed for every source. The statistical significance of the amplitude of the largest peak in the periodogram (Scargle 1982)\ncorresponds to a false alarm probability $p_{0}$. Claiming the detection of the signal if the amplitude\nexceeds the threshold value, one can expect to be wrong a fraction $p_{0}$ of the time. Alternately, the statistical significance\nlevel of the detection is $1-p_{0}$, and this quantity is listed in Table 1.\nFor the first test, we require $\\bar{K_{s}} >$ 15 mag\nto search for faint variables, and set the minimum and maximum period range in our variability search\nbetween 3 - 50 days. To pass the first test, sources have to satisfy the following conditions:\n1) the period corresponding to the maximum in the Lomb-Scargle periodogram is greater\nthan three days, 2) the maximum in the Lomb-Scargle periodogram exceeds 90-th pericentile \nfor the significance level, 3) if there are other maxima in the periodogram that are at 90-th percentile\nor higher, the periods corresponding to these maxima must differ by a factor of two or less. \nThe last condition amounts to requiring a clean periodogram without spuriously large multiple peaks.\n\nIn the second test, we assess the quality of the light curves of the variables that pass the tests above with a\nparametric bootstrap. Assuming a Gaussian distribution of errors, we sample the distribution one thousand times\nto derive the distribution of periods for each source, which is similar to prior work (Klein et al. 2012; Klein et al. 2014)\non RR Lyrae stars.\nIf the mean of the period distribution agrees to within 20 \\% of the period calculated from the raw data, and \nif the mean of the period distribution $\\pm$ the standard deviation still exceeds three days, we consider the\nperiod distribution to be sufficiently well constrained. The goal of this second cut is to select sources that have a small uncertainty in the derived period, given the photometric errors. \nThe Lomb-Scargle algorithm allows us to derive the period and its statistical signficance, but not the uncertainty in the derived period. \nIf the width of the histogram that gives the distribution of periods from the bootstrap calculation is narrow, the uncertainty in the derived period is low. For the sources that pass the above tests, we fit the light curves with a Fourier series (Kovacs \\& Kupi 2007).\n The Fourier parameters are similar to the light curves of classical Cepheids for $P \\sim 3 - 15$ days observed in the K-band (Persson et al. 2004; Bhardwaj et al. 2015). The light curves of Cepheid variables in the optical are different in shape and amplitude from the light curves of Cepheid variables in the K-band (Matsunaga et al. 2013; Bhardwaj et al. 2015), and our comparison here is to the observed light curves of classical Cepheid variables in the K-band. The Cepheids we list here pass all of the automated and visual checks. Because of this multi-tiered, conservative approach, the light curve analysis is time consuming, but allows us to derive accurate distances. It is worth noting that in addition to lower extinctions in the infrared relative to the optical, another advantage of infrared photometry of Cepheids is that it is minimally affected by metallicity variations (Bono et al. 2010; Freedman et al. 
2010).\n\nThe tiles close to longitude $l \\sim -27^\\circ$ produce a significantly larger number of variables\nthat pass the first of our tests than the other six tiles we examined (at $l = -15^{\\circ},-29^{\\circ}, -35^{\\circ}, -40^{\\circ}, -50^{\\circ}, -65^{\\circ}$). Figure 2 of S12 depicts the VVV survey area. The number of sources in tiles d027, d065, d103, and d141 \nthat are centered at $l \\sim -27 ^\\circ$ and extend upwards in latitude (S12), produce $\\sim 100-200$ sources\nthat pass our first cut. In contrast, the average number that pass the first cut from the comparison tiles is $\\sim$ 60. If we consider \nthis background number to be the mean of a Poisson distribution, and randomly sample a Poisson distribution with this mean value, values in excess of 100 are above 5-$\\sigma$, i.e., they are statistically extremely unlikely to occur by chance. Figure \\ref{f:comb} shows the number of sources that pass the first of our tests as a function of longitude (the value at $l \\sim -27 ^\\circ$ is an average over latitude), as well as a function of the total number of variable stars in the tile. While the number of sources that pass the first of our tests has some correlation with the number of variable stars (which is not unexpected), the region at $l \\sim -27 ^\\circ$ is a clear outlier. In some tiles centered at $l \\sim -27 ^\\circ$, there were a significant number that passed our automated and visual analysis of the light curves, but did not pass our visual inspection of the images, due to the possibility of spikes or blending. \nThe number of Cepheid variables that we report here from the final cut is very likely an underestimate. \n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.5]{comb.png}\n\\caption{The number of sources that pass the first of our tests (requiring statistical significance greater than 90-th percentile, $P >$ 3 days, $K_{s} > 15~\\rm mag$) is shown as a function of longitude in the top panel, and as a function of the total number of variable stars in the tile (normalized to one million) in the bottom panel. The error bars in the top-panel is from Poisson noise. \n\\label{f:comb}}\n\\end{center}\n\\end{figure}\n\nThe phase folded light curves of the Cepheid variables, which show a clear resemblance to each other, and corresponding images are shown in Figure 1. Table 1 summarizes the derived distances and other parameters for the Cepheids. To estimate the dust extinction from the excess color, we use the quasi-simultaneous\nsingle epoch VVV measurements in the $J$, $H$ and $K_{s}$ bands ($\\sim$ 190 s between each band). The near-infrared amplitudes of \nclassical Cepheids are relatively small (Persson et al. 2004) and as such an estimate of the\ndust extinction from the single epoch measurements of the colour should be sufficient.\nThe average extinction-corrected $(J-K_{s})$ colours of the Cepheids is $\\sim$ 0.4, which is\nconsistent with the colors of short-period classical Cepheids (Persson et al. 
2004) in the LMC.\n\n\\begin{table*}\n\\centering\n \\caption{Data For Individual Cepheids and Derived Parameters}\n\n \\begin{tabular}{@{}lccccccc@{}}\n \\hline\n\nVVV ID & $l~\\rm (deg)$ & $b~\\rm (deg)$ & D (kpc) & P (day) & $\\bar{K_{s}}$ & Significance Level \\\\\n\\hline\n\nVVVJ162559.36-522234.0 & -27.5971 & -2.23686 & 92 & 3.42 & 16.04 & 91 \\% \\\\\nVVV J162328.18-513230.4 & -27.2729 & -1.37557 & 100 & 4.19 & 16.12 & 93 \\% \\\\\nVVVJ162119.39-520233.3 & -27.8621 & -1.49382 & 71 & 5.69 & 15.1 & 97 \\% \\\\\nVVV J161542.47-494439.0 & -26.8882 & 0.768427 & 93 & 13.9 & 15.6 & 98 \\% \\\\\n\n\n\n\\hline\n\n\\end{tabular} \n\n\\small {VVV ID, Galactic longitude ($l$) and latitude ($b$), $D$ is the distance from the sun, $P$ is the pulsation period, $\\bar{K_{s}}$ is the mean $K_{s}$-band magnitude, the last column is the significance level of the highest peak in the Lomb-Scargle periodogram. }\n\n\n\n\\end{table*} \n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.65]{Figure1Cep.png}\n\\caption{$JHK_{\\rm s}$ false color\nimage of the Cepheid variables, with phase folded $K_{\\rm s}$-band\nlight curves. All fields are $30\"\\times30\"$, oriented in Galactic\ncoordinates. The VVV ID and period are also listed in Table 1, along\nwith the Galactic latitude, longitude, distances and average $K_{\\rm\ns}$-band magnitude. The four light curves have a clear resemblance to each other, and a quantitative assessment of their shapes shows they are similar to those of $K_{s}$-band light curves of classical Cepheids.\n\\label{f:cep}}\n\\end{center}\n\\end{figure}\n\nWe adopt the period-luminosity relations of classical Cepheids in the LMC (Matsunaga et al. 2011), with a LMC distance modulus\nof 18.5 mag and interstellar extinction value of $A_{K_{s}} = 0.02$ mag for the LMC\ndirection. This gives the distance modulus $\\mu$ for a Cepheid with pulsation\nperiod $P$ (Feast et al. 2014):\n\n\\begin{equation}\n\\mu= K_{s} - A_{K_{s}} + 3.284~\\rm log(P) + 2.383 \\; ,\n\\end{equation}\nwhere $A_{Ks}$ is the extinction in the $K_{s}$ band, which we can express in terms\nof the colour excess:\n\n\\begin{equation}\nA_{Ks} = 0.6822 E(J - K_{s}) \\; ,\n\\end{equation}\nwhere $E(J-K_{s}) = (M_{J} - M_{K_{s}})_{\\rm obs} - (M_{J} - M_{K_{s}})_{\\rm int} $ is the difference between the observed and intrinsic colors,\nand we adopt the period-luminosity relations of classical Cepheids in the LMC (Matsunaga et al. 2011) and the Cardelli et al. (1989) extinction law. The single-epoch colors and extinctions in the $K_{s}$ and $J$ bands,\nalong with the extinction corrected colors are listed in Table 2. Using extinction values from dust maps derived\nfrom far-IR colors (Schlegel et al. 1998) leads to slightly larger values close to the plane of the Galaxy along these lines of sight, as well as recent work that is based on spectra from the Sloan Digital Sky Survey (Schlafly \\& Finkbeiner 2011). If we consider the standard deviation of these three values to be the uncertainty in the dust extinction, and include the photometric errors and uncertainty in the period distribution to derive the uncertainties in the distance, on average this gives a distance uncertainty of $\\sim$ 20 \\%, where the dominant term is the uncertainty in the extinction. The dust extinction in this area (Schlegel et al. 1998) is not unusual, i.e., this area is neither a high or low region of dust extinction. 
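The two relations above (the period-luminosity relation and the colour-excess extinction) translate directly into a short distance calculation; the intrinsic $(J-K_{s})$ colour adopted below, and the use of the mean $K_{s}$ magnitude from Table 1 together with the single-epoch colour excess, follow the description in the text but are stated here as explicit assumptions of the sketch.
\\begin{verbatim}
import numpy as np

def extinction_aks(j, ks, intrinsic_jk=0.4):
    """A_Ks from the single-epoch colour excess; intrinsic_jk is the
    assumed intrinsic (J - Ks) colour of a short-period classical Cepheid."""
    return 0.6822 * ((j - ks) - intrinsic_jk)

def cepheid_distance_kpc(mean_ks, a_ks, period_days):
    """Distance from the Ks-band period-luminosity relation given above."""
    mu = mean_ks - a_ks + 3.284 * np.log10(period_days) + 2.383
    return 10.0 ** (mu / 5.0 + 1.0) / 1000.0   # modulus -> distance in kpc

# First Cepheid of Tables 1 and 2: mean Ks = 16.04, A_Ks = 0.348, P = 3.42 d
print(cepheid_distance_kpc(16.04, 0.348, 3.42))   # ~92 kpc, as in Table 1
\\end{verbatim}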
\n\n\\begin{table*}\n\\centering\n \\caption{Photometry Data And Extinction}\n\n\n \\begin{tabular}{@{}lccccccc@{}}\n \\hline\n\nVVV ID & J & H & $K_{s}$ & $A_{K_{s}} $ & $A_{J}$ & $(J-K_{s})_{\\rm corr}$ \\\\ \n\\hline\n\n\nVVVJ162559.36-522234.0 & 17.078 & 16.429 & 16.175 & 0.348 & 0.83 & 0.42 \\\\\nVVV J162328.18-513230.4 & 17.88 & 17.09 & 16.7 & 0.53 & 1.25 & 0.44 \\\\\nVVVJ162119.39-520233.3 & 16.416 & 15.414 & 14.96 & 0.7 & 1.68 & 0.48 \\\\\nVVV J161542.47-494439.0 & 18.71 & 16.69 & 15.45 & 1.89 & 4.52 & 0.6 \\\\\n\n\\hline\n\n\n\n\\end{tabular}\n\\small {\\center{Single epoch VVV photometry in the $J, H$ and $K_{s}$ bands, with extinction values derived from the color excess assuming the Cardelli et al. (1989) extinction law, along with the extinction corrected $(J-K_{s})$ color.}\n}\n\n\\end{table*}\n\nShort-period ($\\sim$ day) close contact eclipsing binaries like W Ursa Majoris stars can mimic the sinusoidal light curves of RR Lyrae stars, which have periods of $\\sim$ day (Rucinski 1993), but is less of a concern for long-period variables. To ensure that the shapes of the light curves are quantitatively similar to classical Cepheids, we compute their Fourier parameters, as well as the skewness and acuteness parameter, and visually inspect all the light curves.\nWe have computed the Fourier parameters of our sources (Figure \\ref{f:fourier}), following Kovacs \\& Kupi (2007). Out to fourth-order the Fourier series is expressed as:\n\n\\begin{equation}\nm(t) = A_{0} + \\sum_{i=1}^{4} A_{i} \\rm cos\\left(2\\pi i t\/P + \\phi_{i} \\right)\\; ,\n\\end{equation}\nwhere $m(t)$ is the light curve, $P$ is the period, and $A_{i}$ and $\\phi_{i}$ are the amplitudes and phases respectively.\nThe top panel shows $R_{21} = A_{2}\/A_{1}$ and $\\phi_{21} = \\phi_{2} - 2 \\phi_{1}$, and the bottom panel shows $R_{31} = A_{3}\/A_{1}$ and $\\phi_{31} = \\phi_{3} - 3 \\phi_{1}$. Eclipsing binaries have $\\phi_{21}$ and $\\phi_{31}$ values close to 2$\\pi$ or zero, which reflects their symmetric variations (Matsunaga et al. 2013). Thus, the Fourier parameters of our sources indicate that they are not eclipsing binaries. Bhardwaj et al. (2015) provides a compilation of the Fourier parameters of a large number of Galactic and LMC Type I Cepheids across a range of wavelengths. This work shows the differences in the shape of the light curve in the K-band relative to the I-band, as well as differences between the K-band light curves of Cepheids in the Galaxy and in the LMC. We have overplotted in Figure \\ref{f:fourier} the Fourier parameters of Type II Cepheids in the Milky Way (Matsunaga et al. 2013). These Type II Cepheids tend to have lower $\\phi_{31}$ values than classical, Type I Cepheids, but there is scatter in the Fourier parameters derived from K-band light curves.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.35]{fourierall.png}\n\\caption{The Fourier parameters (defined by Eqn 3 and corresponding text) are plotted vs period. We have plotted here the Fourier parameters derived from K-band light curves of Type I, classical Cepheids in the Milky Way (marked \"MW\"), Type I classical Cepheids from the LMC (marked \"LMC\") (Bhardwaj et al. 2015), the Cepheid variables discovered at $\\sim$ 90 kpc (marked \"this paper\"), eclipsing binaries (Matsunaga et al. 2013) (marked \"EB\"), and Type II Cepheids (Matsunaga et al. 
2013) (marked \"Type II\").\n\\label{f:fourier}}\n\\end{center}\n\\end{figure}\n\nThe shape of the light curve can be further quantified by the skewness ($S_{k}$) and acuteness ($A_{c}$) parameters:\n\\begin{equation}\nS_{k}=\\frac{1}{\\phi_{\\rm rb}} -1, ~~~~ \\phi_{\\rm rb} = \\phi_{\\rm max}-\\phi_{\\rm min},~~~~ A_{c} = \\frac{1} {\\phi_{\\rm fw}} -1 \\; ,\n\\end{equation}\nwhere $\\phi_{\\rm max}$ and $\\phi_{\\rm min}$ are the phases corresponding to the minimum and maximum of the rising branch, and $\\phi_{\\rm rb}$ is therefore the phase duration of the rising branch, and $\\phi_{\\rm fw}$ is the full width at half maximum of the light curve. Bhardwaj et al. (2015) demonstrated that the skewness parameter derived from I-band light curves is significantly higher than K-band light curves. The average skewness parameter of our sources is $\\sim$ 0.63 and the average acuteness parameter is $\\sim$ 0.8, which is comparable to classical Cepheids observed in the K-band (Bhardwaj et al. 2015). \n\n\\section{Discussion \\& Conclusion}\n\\label{sec:conclusion}\n\nBy employing a series of successive tests to determine the periods of variable stars, the uncertainty in their periods, and a quantitative assessment of the light curve shape, we have found four Cepheid variables within an angular extent of one degree centered at Galactic longitude of $l = -27.4^\\circ$ and Galactic latitude of $b = -1.08 ^\\circ$, at an average distance of 90 kpc. These successive tests are not satisfied at any of the other locations where we searched for Cepheid variables. Spectroscopic observations would be useful to confirm the spectral type and determine a radial velocity. Type II Cepheids that are part of the Galactic halo are not expected to be clustered within a degree, which is what we see here. \nType II Cepheids that are part of a dwarf galaxy can be clustered. There are many more Type I, classical Cepheids than Type II Cepheids; the OGLE survey has detected 3361 Type I, classical Cepheids in the LMC, and 197 Type II Cepheids (Soszynski et al. 2008; Soszynki et al. 2008a). Unless this object is as massive and extended as the LMC, one would expect that these sources are more likely to be Type I rather than Type II Cepheids. If they are Type II Cepheids, they would be at an average distance of $\\sim$ 50 kpc (Matsunaga et al. 2013), and such a concentration of Type II Cepheids (which are very rare) is unexpected beyond the edge of the Galactic disk. Therefore, on the basis of the Fourier parameters, skewness and acuteness parameters, and their angular concentration, we conclude these sources are Type I Cepheids.\n\nEarlier work (Chakrabarti \\& Blitz 2009) predicted that the observed perturbations in the atomic hydrogen disk of our Galaxy (Levine, Blitz \\& Heiles 2006)\nare due to a recent (300 Myr years ago) interaction with a dwarf satellite galaxy that is one-hundredth the mass of our Galaxy,\ncurrently at a distance of 90 kpc from the Galactic center, close to the plane, and within\nGalactic longitudes of $-50^{\\circ} < l < -10^{\\circ}$ (Chakrabarti \\& Blitz 2011). \nThis methodology was applied to spiral galaxies with known, tidally dominant optical companions to provide a proof of principle of the method (Chakrabarti et al. 2011). \n\nThere are no known dwarf galaxies that have\ntidal debris at this location. \nThe tidal debris of the Sgr dwarf does not not extend to within $\\sim$\ntwenty-five degrees of Galactic longitude of $l=-27^\\circ$ (Carlin et al. 
2012), and the Magellanic stream does not extend to within $\\sim$ 40 degrees of this region (Putman et al. 2003). \nThe Canis Major overdensity was identified as an excess of M-giant stars from the Two Micron All Sky Survey (2MASS)\nat $(l,b) = (-120^\\circ,-8^\\circ)$, at a distance of $\\sim$ 7 kpc from the Sun (Martin et al. 2004).\nIts proximity to the Milky Way indicates that this overdensity is also unlikely to be associated with the\nCepheids we report here.\n\nThese are the most distant Cepheid variables close to the plane of our Galaxy discovered to date. The fact that the Cepheids that we detect are at an average distance of 90 kpc, highly clustered in angle (within one degree) and in distance (within 20 \\% of the mean value of 90 kpc), is difficult to explain without invoking the hypothesis of these stars being associated with a dwarf galaxy, which may be more extended in latitude than can be determined from the VVV survey alone. Constraining the structure of this object should be possible with future deeper observations.\n\n\\bigskip\n\\bigskip\n\n\\acknowledgements\nWe gratefully acknowledge use of data from the ESO Public Survey\nprogramme ID 179.B-2002 taken with the VISTA Telescope, and data\nproducts from the Cambridge Astronomical Survey Unit. R.K.S.\nacknowledges support from CNPq\/Brazil through projects 310636\/2013-2\nand 481468\/2013-7. F.G. acknowledges support from CONICYT-PCHA Mag\\'{i}ster National 2014-22141509.\nS.C. thanks M. Feast, B. Madore, C.F. McKee, J. Scargle, and G. Kovacs for helpful discussions.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n{The presence of renewable generations in power systems, especially solar generations, has increased rapidly in recent decades. Reference \\cite{SolarMart} reports that the global capacity of photovoltaic (PV) installment reached 634 GW in 2019. The solar capacity in 2019 has grown nearly 400 times since 2000. California Independent System Operator (ISO) estimates that the renewable energy generations will contribute 50\\% power supplies by 2030 in California \\cite{CAISO}. \n\nThe wide deployment of renewable generations\ndecreases greenhouse gas emissions, however, but also brings great challenges to the reliability and resiliency of existing power systems. For example, at the substation level, the measurements of power consumptions are the net loads that contain different types of loads. The solar generation is invisible to the power system operator and thus is behind-the-meter (BTM). {Because of the stochastic nature and high volatility of renewable generations, the accurate estimation of generated energy is challenging.}\nEnergy disaggregation at the substation level (EDS)\naims to disaggregate each individual load\\footnote{{Generation is considered as a negative load in this paper.}} from aggregate measurement. The accurate information for load consumption are crucial for power system planning and operations, such as hosting capacity evaluation \\cite{AMZ18,WDW19}, demand response and load dispatching \\cite{MB20,XGL20} and load forecasting \\cite{WZCK18,SZWS19}.\n \n \n {Energy disaggregation problem at the household level (EDH) has been extensively studied, see, e.g., \n\\cite{KBNY10,KDM16,CZWHFJ19,HSLS16},\nalso under the terminology \\textit{non-intrusive load monitoring (NILM)} \\cite{KBNY10,KDM16,CZWHFJ19,HSLS16,H92,GAM15,MSW16}}. 
The electric appliances are typically single-state or multi-state devices, and the patterns of their power consumption are usually repeatable. The general procedure for EDH methods is to first collect historical power consumption for each individual appliance and learn patterns from these well-annotated data. Then EDH methods disaggregate the power consumption of each appliance from the aggregate data based on these patterns. In comparison, obtaining historical power consumption for each individual load at the substation level is more difficult, as the measurements at the substation level are highly aggregated from different types of loads. Even though the operator has information about the load types attached to a substation, whether a certain load is consuming\/generating energy in a certain time interval is not readily known. One example is the BTM solar generation.\n{Thus, the measurements at the substation usually contain multiple loads and are partially labeled. \n It is more challenging to learn distinctive load profiles\nunder this situation than from measurements on individual loads. Moreover, the volatility of load and renewable generation often leads to significant estimation errors. \nHowever, to the best of our knowledge, there is no work that provides a confidence measure of the energy disaggregation results.\n\n{ This paper summarizes our recent results\nfor solving these two challenges. Given partially labeled training data, our work~\\cite{LYWW20} proposes a deterministic dictionary learning-based method to learn load patterns and disaggregate the aggregate measurements into individual loads in real-time.\n\nNote that \\cite{LYWW20} is a deterministic approach and therefore is unable to provide a confidence measure of the estimation results. To estimate the reliability of the disaggregation results,\nin \\cite{YW21}, we propose a probabilistic energy disaggregation approach based on Bayesian dictionary learning. }\n\n\n{The contributions of this paper are threefold: (1) We summarize our works~\\cite{LYWW20} and \\cite{YW21} for solving the ``partial label'' problem and modeling the uncertainty. (2) We compare these two methods and two other existing works in the experiments. (3) We provide more testing cases for these two methods in this paper. }\n\n\n{The remainder of this paper is organized as follows. Section II explains our partial label formulation. Section\nIII discusses our proposed deterministic approach to solve the issue of partial labels and introduces our proposed Bayesian method for modeling the uncertainty of disaggregation results. Section IV summarizes this paper.}\n\n\n\n\n\n\n\n\n\n\n\\section{Problem Formulation}\n\n{A substation is connected to $C$ ($C>1$) types of loads in total.\nLet $\\bm{x}\\in \\mathbb{R}^P$ denote the aggregate measurement with window length $P$. Let a binary vector $\\bm{y}= [y^1, y^2,..., y^C] \\in \\{0,1\\}^C$ denote the load existence in $\\bm{x}$. For example,\nwhen $C=3$, $\\bm{y}=[0,0,1]^T$ means that only load 3 exists in $\\bm{x}$.}\n\n{In our paper \\cite{LYWW20}, we propose a ``partial label formulation'' where the operator only knows partial entries in $\\bm{y}$. The partial labels can be obtained by designing a load detector for each load separately \\cite{HCQ19,ZG16} or from engineering experience. As described in \\cite{LYWW20}, annotating partial labels has a lower cost in manpower and communication burden than annotating all the labels. 
Moreover, if a detector fails to identify some loads \\cite{HD18}, we can only obtain partial labels. \n\n\nLet $\\bar{\\bm{X}}=[\\bar{\\bm{x}}_1,\\bar{\\bm{x}}_2,...,\\bar{\\bm{x}}_N ] \\in \\mathbb{R}^{P \\times N}$ denote $N$ measurements. $\\bar{\\bm{x}}_i$ denotes the data at the $i$th time window. $\\bm{y}_i \\in \\{0,1\\}^C$ denotes the labels in $\\bar{\\bm{x}}_i$. Let label matrix $\\bm{Y}=[\\bm{y}_1, \\bm{y}_2,...,\\bm{y}_N]$ denote all the labels in $\\bar{\\bm{X}}$. Let $\\Omega$ denote the indices of known entries in $\\bm{Y}$. $\\bm{Y}_\\Omega$ denotes all the known partial labels. In the above example, if one only knows $\\bar{\\bm{x}}$ contains load 3 and does not know whether the other two loads exist or not, then the corresponding $\\bm{Y}_\\Omega$ is $[?, ?, 1]^T$ where $?$ denotes one does not know the corresponding load exists or not.\n\nFig. 1 illustrates our partial label formulation. The aggregate data are aggregated from two industrial loads and one solar generation. Each subfigure shows patterns of aggregate data and the corresponding individual loads at the same time interval. {In all these four cases, the label is $[?, ?, 1]^T$, indicating that load 3 always exists, while the existence of loads 1 and 2 is unknown. \n\n Given training dataset $\\bar{\\bm{X}}$, the corresponding partial label matrix $\\bm{Y}_\\Omega$ and an aggregate measurement $\\hat{\\bm{x}}\\in \\mathbb{R}^P$, the objective of this paper is to: (1) learn distinctive patterns of individual loads from $\\bar{\\bm{X}}$ and disaggregate\n $\\hat{\\bm{x}}$, and (2) characterize the uncertainty of the disaggregation results.\n\n}\n\n\n \n \n \n \n\n \\begin{figure}[!ht] \n \\centering\n \t\\includegraphics[width=0.45\\textwidth]{.\/Figures\/PartialLabel.pdf} \n \t\\caption{{An example of partial label formulation. There are three load types in the aggregate data. All four aggregate data have the same partial label $[?,?,1]^T$. The aggregate data contain (a) all three loads; (b) load 1 and 3; (c) load 2 and 3; (d) only load 3.} \\cite{LYWW20}} \\label{fig1}\n \\end{figure}\n \n \n \n \n\\section{Methodology}\n\n{In this section, we present our two model-free approaches based on deterministic dictionary learning \\cite{LYWW20} and Bayesian dictionary learning \\cite{YW21}, respectively.\n\\subsection{Deterministic Energy Disaggregation}\n\nTo learn patterns of each individual load from the given training data $\\bar{X}$, we formulate\na deterministic dictionary learning problem,\n \\begin{align} \n\\min_{ A, D } f(A,D) &=\\lVert \\bar{X}- \\Sigma_{i = 1}^C D_i A_i \\rVert_F^2+\\Sigma_{i = 1}^C \\lambda_i \\Sigma_{j: i \\notin y_j } \\lVert A_i^{j} \\rVert \\nonumber\\\\\n & + \\lambda_D \\text{Tr}(D\\Theta D^\\intercal) \\label{eqn:dictionary} \\\\\n\\text{s.t. }& \\lVert d^{m } \\rVert_2 \\leq 1, d^{m} \\geq 0, m=1, \\cdots, K \\label{cons} \\\\\n& c_i A_i \\geq 0, \\forall i \\label{eqn:positive} \n\\end{align} \nwhere $D_i\\in \\mathbb{R}^{P\\times K_i}$ denotes the dictionary for load $i$, and $A_i \\in \\mathbb{R}^{K_i \\times N}$ denotes the corresponding coefficients of load $i$. $A_i^j$ is the $j$th column in $A_i$. $A=[A_1; A_2; \\cdots; A_C]\\in \\mathbb{R}^{K \\times N}$ is the matrix that contains all coefficients. $D = \\begin{bmatrix}\nD_1, D_2, \\cdots, D_C\n\\end{bmatrix} \\in \\mathbb{R}^{P \\times K }$ is the matrix that contains all dictionaries. $d^m$ is the $m$th column in $D$. $K = \\Sigma_{i=1}^C K_i $. 
$\\text{Tr}(\\cdot)$ is the trace operator, and $D^\\intercal$ represents the transpose matrix of $D$. $\\lambda_i$ and $\\lambda_D$ are pre-defined hyper-parameters. }\n\n \\begin{figure}[!ht] \n \\centering\n \t\\includegraphics[width=0.48\\textwidth]{.\/Figures\/Dictionary1.pdf} \n \t\\caption{ {The dictionary representation in Fig. \\ref{fig1}.\n The coefficients\t$A_1$ and $A_2$ are column-sparse. \\cite{LYWW20}}}\n \t\\label{diction}\n \\end{figure}\n\n \\begin{figure*}[!ht] \n \\centering\n \t\\includegraphics[width=0.8\\textwidth]{.\/Figures\/figure1.pdf} \n \n \\caption{{An illustrative framework of the proposed Bayesian method. In the training stage, our method learns the posterior distribution of atom labels, coefficients and load labels from the training data. In the testing stage, the method samples learned distributions of the dictionary and learns the posterior distribution for atom labels, coefficients and load labels, respectively, for test data. \\cite{YW21}}}\n \t\\label{fig3}\n \n \\label{fig_example}\n \\end{figure*}\n \n{The first term $\\lVert \\bar{X}- \\Sigma_{i = 1}^C D_i A_i \\rVert_F^2$ is the standard reconstruction error in dictionary learning. It measures the reconstruction error between the original data and the learned dictionaries and coefficients. $\\sum_{j: i \\notin y_j} \\lVert A_i^{j} \\rVert$ is the column sparsity constraint. The motivation of using this regularization is illustrated in Fig. \\ref{diction}, which shows the dictionary representation of Fig. \\ref{fig1}.\nBecause the training data only have partial label 3, load 1 and load 2 may not exist in the training data. Therefore, we impose the column sparsity on $A_1$ and $A_2$ to promote the group sparsity of $A_1$ and $A_2$. \n \nThe incoherence term $\\text{Tr} (D\\Theta D^\\intercal) $ is defined as\n\n\\begin{align}\\label{eqn:trace}\n\\text{Tr} (D\\Theta D^\\intercal) & = \\Sigma_{m = 1}^{K} \\Sigma_{p =1}^{K } \\theta_{mp} (d^m)^\\intercal d^{p}.\n\\end{align} \nThe $(m,p)$th entry $\\theta_{m p}$ in the weight matrix $\\Theta \\in R^{K \\times K}$ is $0$ if $d^m$ and $d^p$ are in the same dictionary and $1$ otherwise. The incoherence term promotes a discriminative dictionary such that $D_i$ and $D_j$ are as different as possible. The discriminative dictionaries are able to enhance the disaggregation performance.}\n\n\n\n\n \n {Given an aggregate test data $\\hat{x}$, we aim to disaggregate the aggregate measurement into individual load $c$, denoted by $\\hat{x}^c$. The objective function in the testing stage can be written as\n\\begin{align} \\label{w} \n \\min_{w \\in \\mathbb{R}^q } & \\lVert \\hat{x} - \\hat{D} \\tilde{A} w \\rVert_2 + \\mu \\lVert w \\rVert_1, \n \\end{align} \nwhere we select a submatrix $\\tilde{A}=[\\tilde{A}_1;\\cdots; \\tilde{A}_n] \\in \\mathbb{R}^{K \\times q}$ from $\\hat{A}$. $\\hat{D}$, $\\hat{A}$ is the solution by solving the objective function \\eqref{eqn:dictionary}. $\\mu$ is a pre-defined hyper-parameter. The intuition is that some load combinations are repetitive in the training data. We can select some representative combinations and disaggregate the aggregate measurement with respect to these combinations to improve the disaggregation accuracy. Let $\\hat{w}$ be the solution to \\eqref{w}, then the estimated load for load $c$ is $\\hat{x}^c= \\hat{D}_i \\tilde{A}_i \\hat{w}$. 
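\n\nFor concreteness, a minimal numerical sketch of the testing-stage problem \\eqref{w} is given below. This sketch is illustrative rather than the exact solver used in \\cite{LYWW20}: it assumes the learned dictionary $\\hat{D}$ and the selected submatrix $\\tilde{A}$ are available as arrays, replaces the unsquared residual norm in \\eqref{w} with the common squared-loss variant, and runs a plain proximal-gradient (ISTA) loop.\n\n\\begin{verbatim}\nimport numpy as np\n\ndef disaggregate(x_hat, D_hat, A_tilde, mu=0.1, iters=500):\n    # min_w 0.5*||x_hat - D_hat A_tilde w||_2^2 + mu*||w||_1\n    M = D_hat @ A_tilde                     # columns = selected load combinations\n    step = 1.0 / np.linalg.norm(M, 2) ** 2  # 1/L, L = Lipschitz constant of the gradient\n    w = np.zeros(M.shape[1])\n    for _ in range(iters):\n        z = w - step * (M.T @ (M @ w - x_hat))                   # gradient step\n        w = np.sign(z) * np.maximum(np.abs(z) - step * mu, 0.0)  # soft-thresholding\n    return w\n\\end{verbatim}\n\nThe estimate for load $c$ is then assembled as $\\hat{x}^c=\\hat{D}_c \\tilde{A}_c \\hat{w}$ from the row block of $\\tilde{A}$ associated with $\\hat{D}_c$, as described above.\n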
}\n\n\n\n \n \\subsection{Bayesian Energy Disaggregation}\n \n{ In \\cite{YW21}, we propose a Bayesian method to deal with partial label data and provide the confidence measure of our disaggregation results. An overall framework is shown in Fig. \\ref{fig3}. Given the training data $\\bar{X}$ and partial labels $\\bm{Y}_\\Omega$, the proposed Bayesian method learns the posterior distribution of dictionaries and coefficients in the training stage. At the testing stage, the method learns the distributions of coefficients based on the learned distributions of dictionaries. The distribution of $\\hat{x}^c$ is then computed, where $\\hat{x}^c$ is the estimated power consumption of load $c$. The mean of the distribution of $\\hat{x}^c$ is used as the estimation of the load $c$ and the covariance is computed to measure the uncertainty. \n\n\nThe proposed method is based on a hierarchical probabilistic model. The prior distribution of the aggregate data ${x}_i$ can be written as \n\n \\begin{equation} \\label{equation1}\n\\bar{x}_i = \\sum_{c=1}^{C}{D}_{c} \\bm{\\omega}_i^c+\\bm{\\epsilon}_i \n\\end{equation}\n\\begin{equation}\\label{equation2}\n\\bm{\\omega}_i^c = (\\bm{z}_i^c \\odot \\bm{s}_i^c)y_i^c\n\\end{equation}\nfor all $i=1,2,3,...,N$, $c=1,2,3,...,C$, \nwhere $\\bm{\\omega}_i^c \\in \\mathbb{R}^{K_c}$ is the coefficients for $D_c$, and $\\bm{\\epsilon}_i$ is the measurement noise. {In \\eqref{equation2}, $\\odot$ represents the element-wise product.} Let $\\bm{d}^c_k$ denote the $k$th column in the dictionary ${D}_{c}$. $\\bm{d}^c_k$ is sampled from a multivariate Gaussian distribution $\\mathcal{N}(\\bold{0},\\frac{1}{\\lambda_d} \\bm{I}_P)$, where $\\lambda_d$ is a pre-defined scalar, and $\\bm{I}_P$ is an identity matrix with size $P\\times P$. The noise $\\bm{\\epsilon}_i$ is sampled from Gaussian $\\mathcal{N}(\\bold{0},\\frac{1}{\\gamma_\\epsilon} \\bm{I}_P)$. One can see from (\\ref{equation2}) that $\\bm{\\omega}_i^c$ is the element-wise product of $\\bm{z}_i^c$ and $\\bm{s}_i^c$ and then multiplied by $y_i^c$. $y_i^c$ is a binary variable sampled from a Bernoulli distribution and $y_i^c=1$ indicates that load $c$ exists in ${x}_i$. $\\bm{z}_i^c$ is a binary vector. Let $\\bm{z}_{ik}^c$ denote the $k$th entry of $\\bm{z}_i^c$. $\\bm{z}_{ik}^c=1$ indicates $d_k^c$ is used to represent ${x}_i$ and 0 otherwise. $\\bm{z}_{ik}^c$ is sampled from the Bernoulli distribution. Note that the Bayesian method is able to infer the actual dictionary size $K_c$ by gradually pruning the dictionary size based on $\\bm{z}_i^c$ in the training stage. Therefore, the Bayesian method is not sensitive to the selection of initial dictionary size.\n{$\\bm{s}_i^c$ is sampled from $\\mathcal{N}(\\bold{0},\\frac{1}{\\gamma_s^c} \\bm{I}_{Kc})$. We put Gamma priors on $\\gamma_s^c$ and $\\gamma_\\epsilon$, respectively. { The Gamma priors are conjugate priors of the Gaussian distribution. If conjugate priors are selected, we can derive the analytical solution of the posterior distribution in the variational inference, which simplifies the updating process.} Note that our model selects conjugate priors to simplify the updating process.} \n\n Let $\\bm{\\Theta}$ denote all the latent variables. Given $\\bar{X}$ and partial labels $\\bm{Y}_\\Omega$, the objective is to obtain the posterior $P(\\bm{\\Theta}, \\bm{Y}_{\\bar{\\Omega}} |\\bm{X}, \\bm{Y}_\\Omega)$. 
From the Bayes theorem, \n\n\\begin{equation}\\label{eqn:bayes}\n P(\\bm{\\Theta}, \\bm{Y}_{\\bar{\\Omega}} |\\bar{X}, \\bm{Y}_\\Omega)=\\frac{P(\\bm{\\Theta}, \\bar{X}, \\bm{Y})}{P(\\bar{X},\\bm{Y}_{\\Omega})}\n \\end{equation}\nBecause computing \\eqref{eqn:bayes} directly is intractable, we use Gibbs sampling \\cite{I12} to compute the posterior distribution. Gibbs sampling sequentially samples from the conditional probability of one variable in $\\bm{\\Theta}$ and $\\bm{Y}_{\\bar{\\Omega}}$ while keeping all other variables fixed. These conditional distributions have closed-form expressions because of the conjugate priors, which leads to an efficient updating process. \n\n\nIn the testing stage, given the aggregate test data $\\hat{\\bm{x}}$, the goal of our approach is to estimate $\\hat{{x}}^c$. A similar probabilistic model for $\\hat{\\bm{x}}$ and $\\hat{{x}}^c$ is described as: \n\n\n\\begin{equation} \\label{xhat}\n \\hat{{x}} = \\sum_{c=1}^{C}{D}_{c} (\\bm{\\hat{z}^c} \\odot \\bm{\\hat{s}^c})\\hat{y}^c+\\hat{\\bm{\\epsilon}}\\\\\n\\end{equation}\n \n\\begin{equation}\\label{eqn:hatxc}\n {\\hat{{x}}^c} = {D}_c (\\bm{\\hat{z}^c} \\odot \\bm{\\hat{s}^c})\\hat{y}^c+\\frac{\\hat{\\bm{\\epsilon}}}{C}.\n \\end{equation}\nfor all $c=1,...,C$, $k=1,...,K_c$.\n\nThe dictionary atom $d_k^c$ is sampled from learned distribution $p({d_k^c}|\\bm{X},\\bm{Y}_\\Omega)$ in the training stage. We also assume that $\\hat{y}^c$ and $\\hat{\\bm{z}}^c$ are sampled from Bernoulli distributions. $\\hat{\\bm{s}}^c$ is sampled from $\\mathcal{N}(\\bold{0},\\frac{1}{\\gamma_s^c} \\bm{I}_{K_c})$ and $\\hat{\\bm{\\epsilon}}$ is sampled from $\\mathcal{N}(\\bold{0},\\frac{1}{\\hat{\\gamma}_{\\epsilon}} \\bm{I}_P)$. Gibbs sampling is also employed for computing probabilistic distributions of $\\hat{y}^c$, $\\hat{\\bm{z}}^c$, $\\hat{\\bm{s}}^c$, and $\\hat{\\gamma}_\\epsilon$. \n\n\n{The per-iteration computational complexity of the Bayesian offline training is $\\mathcal{O}(CK_cPN)$. The per-iteration computational complexity of the online testing is $\\mathcal{O}(CK_cP)$. Thus, the computational complexity scales linearly with respect to the number of loads. }\n\n \\subsection{Uncertainty Modeling}\n Equipped with all learned posterior distributions, we then estimate the distribution of $\\hat{x}^c$. However, it is intractable to obtain the explicit expression for the distribution of $\\hat{x}^c$. Monte-Carlo integration \\cite{PBJ12} is employed to approximately compute the predictive mean and predictive variance.\n\n \n \nDefine\n\\begin{equation}\nf (\\bm{\\Psi}) = {D}_c (\\hat{\\bm{z}}^c \\odot \\hat{\\bm{s}}^c)\\hat{y}^c\n\\end{equation}\nwhere $\\bm{\\Psi}=\\{{D}_c, \\hat{\\bm{z}}^c, \\hat{\\bm{s}}^c, {\\hat{y}^c}, \\hat{\\gamma}_\\epsilon\\}$.\nThe predictive mean of ${\\hat{{x}}^c}$ is computed by\n\\begin{equation} \\label{predictmean}\n\\begin{split}\nE[{\\hat{{x}}^c}]\n &\\approx\\frac{1}{L}\\sum_{l=1}^{l=L} f (\\bm{\\Psi}^l)\n\\end{split}\n\\end{equation}\nwhere $L$ is the number of Monte-Carlo samples. {More Monte-Carlo samples increase the estimation accuracy, at the cost of higher computational burden. Our experiments show that $50$ Monte-Carlo samples suffice to provide accurate estimations of the predictive mean and the predictive variance.} $\\bm{\\Psi}^l$ is sampled from the learned distributions of variables in $\\bm{\\Psi}$. $E[{\\hat{{x}}^c}]$ is then used as the estimation of the power consumption of load C. 
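\n\nA schematic of the Monte-Carlo estimate in \\eqref{predictmean} is sketched below. It is illustrative only and shows just the averaging structure: \\texttt{sample\\_posterior()} is a placeholder standing in for one joint draw of $\\{{D}_{c}, \\hat{\\bm{z}}^c, \\hat{\\bm{s}}^c, \\hat{y}^c\\}$ produced by the Gibbs sampler described above.\n\n\\begin{verbatim}\nimport numpy as np\n\ndef predictive_mean(sample_posterior, L=50):\n    # sample_posterior() -> (D_c, z_c, s_c, y_c): one joint posterior draw (placeholder)\n    draws = []\n    for _ in range(L):\n        D_c, z_c, s_c, y_c = sample_posterior()\n        draws.append(D_c @ (z_c * s_c) * y_c)    # f(Psi^l) as defined above\n    F = np.stack(draws)                          # shape (L, P)\n    return F.mean(axis=0), F    # E[x_c]; draws can be reused for the covariance\n\\end{verbatim}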
\n\nThe predictive covariance is approximated by\n\n\\begin{equation} \\label{predictvariance}\n\\begin{split}\n \\textrm{Var}[{\\hat{{x}}^c}] \n =&E[{\\hat{x}^c}{\\hat{x}^c}{}^T]-{E[{\\hat{x}^c}]}{E[{\\hat{x}^c}]}^T \\\\\n \\approx &\\frac{\\bm{I}_P}{LC}\\sum_{l=1}^{l=L} \\frac{1}{\\hat{\\gamma}_{\\epsilon}^{l}} +\\frac{1}{L}\\sum_{l=1}^{l=L} f (\\bm{\\Psi}_l){ f (\\bm{\\Psi}^l)}^T \\\\ -&(\\frac{1}{L}\\sum_{l=1}^{l=L} f (\\bm{\\Psi}^l))(\\frac{1}{L}\\sum_{l=1}^{l=L} {f (\\bm{\\Psi}^l)}^T) \n\\end{split}\n\\end{equation}\n\n\n\\noindent Let $\\sigma_i$ ($i=1,..., P$) denote all the singular values of $\\textrm{Var}[{\\hat{{x}}^c}]$.\n\nThe uncertainty index $U_c$ for individual load $c$ and the uncertainty index $U_{\\textrm{all}}$ for total estimated loads are computed as\n\\begin{equation}\\label{Uncertaintyindex1}\nU_c = \\Sigma_{i = 1}^P \\sigma_i\n\\end{equation}\n\\begin{equation}\\label{Uncertaintyindex2}\nU_{\\textrm{all}} = \\Sigma_{c = 1}^C U_c\n\\end{equation}\nThe intuition is that a large variance indicates higher uncertainty of the estimation. The uncertainty index is able to characterize the confidence level of disaggregation results. \n\n }\n\n \n\\section{Numerical Experiment}\n{ The performance of the proposed methods is evaluated on a partially labeled dataset. The dataset contains two industry loads and one solar generation. $N=360$ training samples and $M=300$ testing samples are generated. Even though the generated training samples contain up to three loads, each sample is annotated with only one label. The testing samples also contain up to three loads and have no label. In the following experiments, $\\gamma$ represents the percentage of the training data that measure individual loads. For example, $\\gamma=50\\%$ denotes that $50\\%$ training data labeled as load $c$ contain pure load $c$ and the remaining $50\\%$ data contain other loads. \n\n} \n\n\n{ \\subsubsection{Error Metrics} Several metrics are employed to compute the disaggregation error. The standard Root Mean Square Error (RMSE) \\cite{RQFKT17, PGWRA12} is defined as, \n\\begin{align}\n\\text{RMSE}_c & = \\sqrt{\\frac{\\Sigma_{i=1}^{M} \\lVert \\hat{x}^c_i -x^c_i \\rVert_2^2}{P \\times M}. } \n\\end{align}\nwhere $\\hat{x}^c_i, x^c_i \\in \\mathbb{R}^P$ are the estimated and the ground-truth load $c$ in the $i$th testing sample, respectively.\n \nA new Total Error Rate (TER) is proposed to compute the disaggregation error of all the loads as follows,\n\\begin{align}\n\\text{TER} = & \\frac{ \\Sigma_{i = 1}^{M} \\Sigma_{c =1}^C \\min( \\lVert \\hat{x}^c_i - x_i^c \\rVert_1 , \\lVert {x}^c_i \\rVert_1 ) }{ \\Sigma_{i= 1}^{M} \\Sigma_{c = 1}^C \\lVert x^c_i \\rVert_1 } \n\\end{align} \n\n \n \n \n\nThe Weighted Root Mean Square Error (WRMSE) is proposed to take the uncertainty index into account. The weighted average disaggregation error is computed as, \n\n\n\\begin{equation}\\label{eqn:WRMSE}\n{\\text{WRMSE}_c = \\sqrt{\\frac{\\Sigma_{i=1}^{M} \\frac{\\lVert \\hat{x}^c_i -{x}^c_i \\rVert_2^2}{U_c(\\hat{x}^c_i)} }{P\\Sigma_{i=1}^{M} \\frac{1}{U_c(\\hat{x}^c_i)} } } }\n\\end{equation}\n\n\\noindent where $U_c(\\hat{x}^c_i)$ denotes the uncertainty index of $\\hat{x}^c_i$. A larger $U_c(\\hat{x}^c_i)$ represents a less reliable estimation. If the estimated loads with higher disaggregation errors are accompanied by larger uncertainty indices, the RMSE$_c$ could be much larger than WRMSE$_c$. 
The scenario that RMSE$_c$ is much larger than WRMSE$_c$ indicates that the unreliable estimation results are correctly flagged by higher uncertainty indices.\n \n \n\\subsubsection{Methods} Our deterministic EDS method in \\cite{LYWW20} is abbreviated as ``D-EDS.'' Our Bayesian EDS method in \\cite{YW21} is abbreviated as ``B-EDS.'' Two other existing methods are employed for comparison. The work in \\cite{KBNY10} that is based on discriminative sparse coding is abbreviated as ``DDSC,'' and the work \\cite{RQFKT17} based on sum-to-k matrix factorization is abbreviated as ``sum-to-k''. Because we set the Monte-Carlo samples $L=50$ in our method B-EDS, then D-EDS, DDSC and sum-to-k are averaged over 50 runs for\na fair comparison. The comparisons of disaggregation performance of B-EDS, D-EDS, DDSC and sum-to-k are shown in Table \\ref{table1}. $\\gamma=70\\%$. Note that all the existing works such as DDSC and sum-to-k methods require fully labeled data to obtain accurate estimation.\nDirectly applying the existing methods\nto partially labeled data leads to a low disaggregation accuracy. The proposed two approaches B-EDS and D-EDS are designed for partially labeled data and can achieve state-of-the-art disaggregation performance.\nBetween these two methods, the disaggregation accuracy of B-EDS is slightly better. Moreover, one can see from Table I that the WRMSE$_c$ is much smaller than the corresponding RMSE$_c$. As we discussed above, this means that those estimations with larger disaggregation errors also have large uncertainty indices. This validates the effectiveness of applying the proposed uncertainty index to measure the reliability of the disaggregation results. \n\n\nThe major advantage of B-EDS over D-EDS is that B-EDS is able to measure the confidence level of disaggregation results from the uncertainty index. We provide five case studies to verify the performance of uncertainty modeling of B-EDS.\n\n\n\\begin{itemize}\n\\item Case 1: we select test data from the testing datasets in Table I and this test data contains three types of loads.\n\n\\item Case 2: the test data is as same as the data in Case 1 with an additional Gaussian noise $\\mathcal{N}(0,4^2)$ in each entry. \n\n\\item Case 3: the test data is as same as the data in Case 1 with an additional Gaussian noise $\\mathcal{N}(0,6^2)$ in each entry. \n\n\\item Case 4: the test data only contains one solar generation, but the pattern of solar generation is different from the solar patterns in the training data.\n\n\\item Case 5: the test data contains the same load 1 and 2 as those in Case 1, and as well as a solar generation with a pattern different from the solar patterns in the training data. \n\\end{itemize}\n\n\nFigs.~\\ref{fig_4} and ~\\ref{fig_5} show the disaggregation performance of D-EDS on Case1 and Case 4. \nThe aggregate measurement is shown in (a), and the disaggregation results are shown in (b)-(d) in both figures. In Case 1, the disaggregation results by D-EDS follow the actual load pattern. In Case 4, the disaggregation result of the solar generation does not follow the actual solar pattern because it is different from the learned pattern from the training data. In both cases, the disaggregation results contain some errors. That motivates using the Bayesian approach to compute a probabilistic distribution of load consumption rather than computing one deterministic estimation. 
\n\n\n{Fig.~\\ref{fig_uncertainty} shows the \n disaggregation performance of B-EDS on these five cases, and Table II shows the corresponding uncertainty index.\n Each subfigure in Fig.~\\ref{fig_uncertainty} plots the ground-truth load, the estimated load and the $99.7\\%$ confidence interval of the estimated load. One can see\n that in Cases 1-3, although there are some errors in the disaggregation results, the ground-truth loads are within the confidence interval. Moreover, the estimation errors increase slightly when the noise levels increase. Correspondingly, Table II shows that the total uncertainty indices in Case 1-3 also increase as the noise level increases, which indicates the effectiveness of using the uncertainty index to characterize the uncertainty in the estimation. In Case 4 and Case 5, because the patterns of solar generation are far from the patterns in the training data, the ground-truth load consumption may not fall inside the confidence interval (especially Case 5).\n The uncertainty indices in Table II also increase significantly, indicating that the estimated results are less reliable in these cases.\n Therefore,\n the users can use the uncertainty index to evaluate the accuracy of the disaggregation results.\nTable II also compares the TER of B-EDS and D-EDS, and B-EDS has a smaller disaggregation error of D-EDS.}\n\n\\begin{figure*}[!ht] \n \\centering\n \\subfigure[]\t{\\includegraphics[width=0.24\\textwidth]{.\/Figures\/netFig1.png}}\n \\subfigure[]{\t\\includegraphics[width=0.24\\textwidth]{.\/Figures\/Case21Fig1.png}}\n \n \\subfigure[]{\t\\includegraphics[width=0.24\\textwidth]{.\/Figures\/Case21Fig2.png}}\n\\subfigure[]{\t\\includegraphics[width=0.24\\textwidth]{.\/Figures\/Case21Fig3.png}}\n \t\\caption{ {Disaggregation performance of D-EDS on Case 1.(a) Net load. (b) The ground-truth and disaggregated load 1.(c) The ground-truth and disaggregated load 2. (d) The ground-truth and disaggregated solar.} } \\label{fig_4}\n \\end{figure*}\n \n\\begin{figure*}[!ht] \n \\centering\n \\subfigure[]\t{\\includegraphics[width=0.24\\textwidth]{.\/Figures\/nettFig1.png}}\n \\subfigure[]{\t\\includegraphics[width=0.24\\textwidth]{.\/Figures\/Case23Fig1.png}}\n \n \\subfigure[]{\t\\includegraphics[width=0.24\\textwidth]{.\/Figures\/Case23Fig2.png}}\n\\subfigure[]{\t\\includegraphics[width=0.24\\textwidth]{.\/Figures\/Case23Fig3.png}}\n \t\\caption{ {Disaggregation performance of D-EDS on Case 4.(a) Net load. (b) The ground-truth and disaggregated load 1.(c) The ground-truth and disaggregated load 2. 
(d) The ground-truth and disaggregated solar.} } \\label{fig_5}\n \\end{figure*}\n \n \n \\begin{table}[]\n\\centering\n\\caption{Comparison between B-EDS, D-EDS, sum-to-k and DDSC methods on disaggregation accuracy}\n\\label{table1}\n\\begin{tabular}{lllll}\n \\hline & B-EDS & D-EDS & sum-to-k & DDSC \\\\\n \\hline RMSE$_1$ & 6.20 & 6.62 & 13.17 & 22.77 \\\\\n \\hline RMSE$_2$ & 5.19 & 6.34 & 11.35 & 23.86 \\\\\n \\hline RMSE$_3$ & 5.82 & 4.65 & 10.70 & 13.49 \\\\\n \\hline WRMSE$_1$ & 0.16 & - & - & - \\\\\n \\hline WRMSE$_2$ & 0.13 & - & - & - \\\\\n \\hline WRMSE$_3$ & 0.13 & - & - & - \\\\\n \\hline TER & 8.97\\% & 9.95\\% & 20.61\\% & 37.12\\% \\\\\n \\hline \n\\end{tabular}\n\\end{table}\n\n\n\n} \n\n\\begin{figure*} [h!]\n \\centering\n \\subfigure[]{\\includegraphics[width=0.18\\textwidth]{Figures\/Case4Fig1.png}} \n \\addtocounter{subfigure}{+2}\n \\subfigure[]{\\includegraphics[width=0.18\\textwidth]{.\/Figures\/Case5Fig1.png}} \n \\addtocounter{subfigure}{+2}\n \\subfigure[]{\\includegraphics[width=0.18\\textwidth]{.\/Figures\/Case3Fig1.png}}\n \\addtocounter{subfigure}{+2}\n \\subfigure[]{\\includegraphics[width=0.18\\textwidth]{.\/Figures\/Case10Fig1.png}}\n \\addtocounter{subfigure}{+2}\n \\subfigure[]{\\includegraphics[width=0.184\\textwidth]{.\/Figures\/Case13Fig1.png}}\n\n \\addtocounter{subfigure}{-12}\n \\subfigure[]{\\includegraphics[width=0.18\\textwidth]{Figures\/Case4Fig2.png}} \n \\addtocounter{subfigure}{+2}\n \\subfigure[]{\\includegraphics[width=0.18\\textwidth]{.\/Figures\/Case5Fig2.png}} \n \\addtocounter{subfigure}{+2}\n \\subfigure[]{\\includegraphics[width=0.18\\textwidth]{.\/Figures\/Case3Fig2.png}}\n \\addtocounter{subfigure}{+2}\n \\subfigure[]{\\includegraphics[width=0.18\\textwidth]{.\/Figures\/Case10Fig2.png}}\n \\addtocounter{subfigure}{+2}\n \\subfigure[]{\\includegraphics[width=0.184\\textwidth]{.\/Figures\/Case13Fig2.png}}\n \\addtocounter{subfigure}{-12}\n \\subfigure[]{\\includegraphics[width=0.18\\textwidth]{Figures\/Case4Fig3.png}} \n \\addtocounter{subfigure}{+2}\n \\subfigure[]{\\includegraphics[width=0.18\\textwidth]{.\/Figures\/Case5Fig3.png}} \n \\addtocounter{subfigure}{+2}\n \\subfigure[]{\\includegraphics[width=0.18\\textwidth]{.\/Figures\/Case3Fig3.png}}\n \\addtocounter{subfigure}{+2}\n \\subfigure[]{\\includegraphics[width=0.18\\textwidth]{.\/Figures\/Case10Fig3.png}}\n \\addtocounter{subfigure}{+2}\n \\subfigure[]{\\includegraphics[width=0.184\\textwidth]{.\/Figures\/Case13Fig3.png}}\n \t\\caption{The disaggregation performance of B-EDS on Cases 1-5. Each subfigure shows the ground-truth load, disaggregated load and the corresponding confidence interval. (a)-(c): the disaggregation results for Case 1. (d)-(f): the disaggregation results for Case 2 where the test data contains Gaussian noise $\\mathcal{N}(0,4^2)$. (g)-(i): the disaggregation results for Case 2 where the test data contains Gaussian noise $\\mathcal{N}(0,6^2)$. (j)-(l): the disaggregation results for Case 4 where the test data is a solar generation and its pattern is different from the training data. (m)-(o) the disaggregation results for Case 5 where the test data contains three loads, and the pattern of solar generation is different from the training data. 
}\\label{fig_uncertainty}\n \n\\end{figure*}\n\n\n\\begin{table}[ht!]\n\\centering\n\\caption{The uncertainty indices and disaggregation accuracy on five testing cases}\n\\label{table_uncertainty}\n\\begin{tabular*}{\\hsize}{@{}@{\\extracolsep{\\fill}}c|ccccc@{}}\n\\hline & Case 1 & Case 2 & Case 3 & Case 4 & Case 5 \\\\\n\\hline U$_1$ &\t243.72\t&\t280.35\t&\t201.52\t&\t0.058\t&\t160.89\t\\\\\n\\hline U$_2$&\t116.07\t&\t101.24\t&\t215.23\t&\t0.060\t&\t394.41\t\\\\\n\\hline U$_3$ &\t249.89\t&\t257.78\t&\t287.23\t&\t703.48\t&440.26\t\\\\\n\\hline U$_{\\textrm{all}}$ &609.69\t&\t639.37\t&\t703.98\t&\t703.60\t&788.44\t\\\\\n\\hline B-EDS TER &\t4.77\\%\t&\t5.10\\%\t&\t7.00\\%\t&\t6.77\\%\t&\t12.97\\%\t\\\\\n\\hline D-EDS TER &\t7.19\\%\t&\t8.86\\%\t&\t11.60\\%\t&\t11.01\\%\t&\t16.45\\%\t\\\\\n\\hline \n\\end{tabular*}\n\\end{table}\n\n\n\n\nThe Bayesian method B-EDS has slightly better disaggregation performance than\nthe deterministic approach D-EDS. The major advantage of the Bayesian approach is to measure the confidence level of the disaggregation results. However, the deterministic approach\nis much more computationally efficient than the Bayesian method. {In Table I, the B-EDS requires around 50 seconds for the offline training and 4 seconds for each testing sample. In comparison, the D-EDS requires around 15 seconds for the offline training, and 0.9 seconds for each testing sample. } If users want to know the reliability of the estimation results, the Bayesian method should be selected. In contrast, if users need to disaggregate the aggregate measurement with limited computational resources in real-time, the deterministic approach is a better option. \n\n\n\\section{Conclusion}\n{\n\n\n\n\nEnergy disaggregation at substations with BTM solar generations has drawn increasing attention.\nAccurate energy disaggregation results are crucial for power system planning and operations. However, collecting training data with full labels at the substation level is challenging. Therefore, \nwe propose the concept of partially labeled data which is applicable in practice and significantly reduces the burden of annotating data. This paper summarizes two new load disaggregation approaches.\nBoth the deterministic approach and the Bayesian approach can achieve accurate disaggregation results on partially labeled data. Moreover,\nan uncertainty index is proposed to measure the reliability of the disaggregation results.\nTo the best of our knowledge, this is the first work to provide the uncertainty measure for the energy disaggregation problem.}\n\n\n\n\n\\section*{Acknowledgment}\n\n\nThis work was supported in part by the NSF grant \\# 1932196, AFOSR FA9550-20-1-0122, and ARO W911NF-21-1-0255.\n\n\n\n\n\n\n\n\n\n\n\n\\bibliographystyle{IEEEtran} \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\nIn quantum computers, quantum bits (qubits) \\cite{nielsen2011quantum, scherer2019mathematics} are the elementary information carriers. In such a computer, quantum gates \\cite{nielsen2011quantum, scherer2019mathematics, deutsch1989quantum} can manipulate arbitrary multi-partite quantum states \\cite{raussendorf2001one} including arbitrary superposition of the computational basis states, which are frequently also entangled. Thus the logic gates of quantum computation are considerably more varied than the logic gates of classical computation. 
In addition, a quantum computer can solve problems exponentially faster than any classical computer \\cite{deutsch1992rapid}, because exploiting the superposition principle and entanglement allows the computer to manipulate and store more bits of information than a classical computer can.\n\nIn this paper we present a theoretical approach to realizing Hadamard and controlled-NOT (C-NOT) quantum logic gates which form a universal set for quantum computation \\cite{barenco1995elementary, shi2002both, boykin2000new}.\nThe important discovery and proof of a conserved excitation number operator of the AJC Hamiltonian \\cite{omolo2017conserved} now means that dynamics generated by the AJC Hamiltonian is exactly solvable, as demonstrated in the polariton and anti-polariton qubit (photospin qubit) models in \\cite{omolo2017polariton, omolo2019photospins}. The reformulation developed in \\cite{omolo2017conserved, omolo2017polariton, omolo2019photospins} drastically simplifies exact solutions of the AJC model, which we shall apply in the present work.\n\nWe define the quantum C-NOT gate as that which effects the unitary operation on two qubits which, in a chosen orthonormal basis in $\\mathbb{C}^2$, gives the C-NOT operation\n\n\\begin{equation}\n|a\\rangle|b\\rangle\\rightarrow|a\\rangle|a\\oplus b\\rangle\n\\label{eq:theqcnot}\n\\end{equation}\nwhere $|a\\rangle$ is the control qubit, $|b\\rangle$ is the target qubit and $\\oplus$ indicates addition modulo 2 \\cite{barenco1995conditional, scherer2019mathematics, nielsen2011quantum}. The C-NOT gate transforms superposition into entanglement and thus acts as a measurement gate \\cite{nielsen2011quantum, scherer2019mathematics,barenco1995conditional} fundamental in performing algorithms in quantum computers \\cite{knill2002introduction}. Transformation back to a separable state (product state) is realized by applying the C-NOT gate again. In this case, it is used to implement a Bell measurement on the two qubits \\cite{braunstein1992maximal}. \n\nWe note here that the JC model has been applied extensively in implementing C-NOT and Hadamard gate operations. Domokos \\textit{et al} (1995) \\cite{domokos1995simple} showed that, using induced transitions between dressed states, it is possible to implement a C-NOT gate in which a cavity containing at most one photon is the control qubit and the atom is the target qubit. Later, Vitali, D. \\textit{et al} (2001) \\cite{vitali2001quantum} proposed a scheme for implementing a C-NOT gate between two distinct but identical cavities, acting as control and target qubits respectively. By passing an atom prepared initially in the ground state consecutively between the two cavities, a C-NOT ($cavity\\rightarrow{atom}$) and a C-NOT ($atom\\rightarrow{cavity}$) are realised with the respective classical fields S. Saif, F. \\textit{et al} (2001) \\cite{saif2007engineering} presented a study of quantum computing by engineering non-local quantum universal gates based on the interaction of a two-level atom with two modes of the electromagnetic field in a high-Q superconducting cavity. The two-level atom acted as the control qubit and the two-mode electromagnetic field served as the target qubit. In this letter, we apply an approach similar to that in \\cite{saif2007engineering} where we implement a quantum C-NOT gate operation between two cavities defined in a two-dimensional Hilbert space spanned by the state vectors $|\\mu_1\\rangle=|1_A,0_B\\rangle$ and $|\\mu_2\\rangle=|0_A,1_B\\rangle$ as target qubits. 
Here $|\\mu_1\\rangle$ expresses the presence of one photon in mode A, when there is no photon in mode B, and $|\\mu_2\\rangle$ indicates that mode A is in vacuum state and one photon is present in mode B. The control qubit in this respect is a two-level atom. The important difference with the approach used in \\cite{saif2007engineering} is the model, i.e, while the initial absolute atom-field ground state $|g,0\\rangle$ in the AJC interaction is affected by atom-cavity coupling, the ground state $|g,0\\rangle$ in the JC model \\cite{saif2007engineering} is not affected by atom-cavity coupling. A similar result was determined independently in \\cite{omolo2019photospins}. Further, with precise choice of interaction time in the AJC qubit state transition operations defined in the AJC qubit sub-space spanned by normalised but non-orthogonal basics qubit state vectors \\cite{omolo2017polariton, omolo2019photospins}, C-NOT gate operations are realized between the two cavities.\n\nThe Hadamard gate also known as the Walsh-Hadamard gate is a single qubit gate \\cite{scherer2019mathematics, nielsen2011quantum}. The Hadamard transformation is defined as \n \n\\begin{equation}\n\\hat{H}=\\frac{\\hat{\\sigma}_\\textit{x}+\\hat{\\sigma}_\\textit{z}}{\\sqrt{2}}\n\\label{eq:mathhadamard}\n\\end{equation}\nwhere it transforms atomic computational basis states $|e\\rangle(|0\\rangle)$, $|g\\rangle(|1\\rangle)$ into diagonal basis states according to\n\\begin{eqnarray}\n\\hat{H}|e\\rangle\\rightarrow\\frac{|e\\rangle+|g\\rangle}{\\sqrt{2}}\\quad;\\quad\\hat{H}|g\\rangle\\rightarrow\\frac{|e\\rangle-|g\\rangle}{\\sqrt{2}}\\nonumber\\\\\n\\hat{H}|0\\rangle\\rightarrow\\frac{|0\\rangle+|1\\rangle}{\\sqrt{2}}\\quad;\\quad\\hat{H}|1\\rangle\\rightarrow\\frac{|0\\rangle-|1\\rangle}{\\sqrt{2}}\n\\label{eq:mathhadamard1}\n\\end{eqnarray}\n Vitali, D. \\textit{et al} (2001)\\cite{vitali2001quantum} showed that one qubit operations can be implemented on qubits represented by two internal atomic states because it amounts to applying suitable Rabi pulses. He demonstrated that the most practical solution on implementing one qubit operations on two Fock states is sending the atoms through the cavity. If the atom inside the cavity undergoes a $\\frac{\\pi}{2}$ pulse one realizes a Hadamard-phase gate. Saif, F. \\textit{et al} (2001) \\cite{saif2007engineering} also showed that it is possible to realise Hadamard operation by a controlled interaction between a two-mode high Q electromagnetic cavity field and a two-level atom. In his approach, the two-level atom is the control qubit, whereas the target qubit is made up of two modes of cavity field. Precision of the gate operations is realised by precise selection of interaction times of the two-level atom with the cavity mode. In this paper, we show that Hadamard operation in the AJC interaction is possible for a specified initial atomic state by setting a specific sum frequency and photon number in the anti-Jaynes-Cummings qubit state transition operation \\cite{omolo2017polariton, omolo2019photospins}, noting that the interaction components of the anti-Jaynes-Cummings Hamiltonian generates state transitions.\n\nThe content of this paper is therefore summarised as follows. Section \\ref{sec:model} presents an overview of the theoretical model. In sections \\ref{sec:qcgate} and \\ref{sec:hadgatelogic} respectively, implementation of a quantum C-NOT and Hadamard gates in the AJC interaction are presented. Finally section \\ref{sec:conclusion} contains the conclusion. 
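\n\nFor later reference, we note that in the atomic basis $\\{|e\\rangle,|g\\rangle\\}$ (equivalently $\\{|0\\rangle,|1\\rangle\\}$) the transformation in eq.~\\eqref{eq:mathhadamard} has the familiar matrix form\n\\[\n\\hat{H}=\\frac{1}{\\sqrt{2}}\\left(\\begin{array}{cc} 1 & 1 \\\\ 1 & -1 \\end{array}\\right) ,\n\\]\nfrom which the mappings in eq.~\\eqref{eq:mathhadamard1} and the properties $\\hat{H}^{\\dagger}=\\hat{H}$, $\\hat{H}^2=\\hat{I}$ follow directly.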
\n\\section{The model}\n\\label{sec:model}\nThe quantum Rabi model of a quantized electromagnetic field mode interacting with a two-level atom is generated by the Hamiltonian \\cite{omolo2017conserved} \n\n\\begin{equation}\n\\hat{H}_R=\\frac{1}{2}\\hbar\\omega(\\hat{a}^{\\dagger}\\hat{a}+\\hat{a}\\hat{a}^{\\dagger})+\\hbar\\omega_0\\hat{s}_z + \\hbar\\lambda(\\hat{a}+\\hat{a}^{\\dagger})(\\hat{s}_++\\hat{s}_-)\n\\label{eq:rabi1}\n\\end{equation}\nnoting that the free field mode Hamiltonian is expressed in normal and anti-normal order form $\\frac{1}{2}\\hbar\\omega(\\hat{a}^{\\dagger}\\hat{a}+\\hat{a}\\hat{a}^{\\dagger})$.\nHere, $\\omega\\hspace{1mm},\\hspace{1mm}\\hat{a}\\hspace{1mm},\\hspace{1mm}\\hat{a}^{\\dagger}$ are quantized field mode angular frequency, annihilation and creation operators, while $\\omega_0\\hspace{1mm},\\hspace{1mm}\\hat{s}_z\\hspace{1mm},\\hspace{1mm}\\hat{s}_+\\hspace{1mm},\\hspace{1mm}\\hat{s}_-$ are atomic state transition angular frequency and operators. The Rabi Hamiltonian in eq.~\\eqref{eq:rabi1} is expressed in a symmetrized two-component form \\cite{omolo2017conserved, omolo2017polariton, omolo2019photospins}\n\n\\begin{equation}\n\\hat{H}_R=\\frac{1}{2}(\\hat{H}+\\hat{\\overline{H}})\n\\label{eq:rabi2}\n\\end{equation}\nwhere $\\hat{H}$ is the standard JC Hamiltonian interpreted as a polariton qubit Hamiltonian expressed in the form \\cite{omolo2017conserved}\n\n\\begin{eqnarray}\n\\hat{H}&=&\\hbar\\omega\\hat{N}+2\\hbar\\lambda\\hat{A}-\\frac{1}{2}\\hbar\\omega\\quad;\\quad\\hat{N}=\\hat{a}^{\\dagger}\\hat{a}+\\hat{s}_+\\hat{s}_- \\nonumber\\\\\n\\hat{A}&=&\\alpha\\hat{s}_z+\\hat{a}\\hat{s}_++\\hat{a}^{\\dagger}\\hat{s}_-\\quad;\\quad\\alpha=\\frac{\\omega_0-\\omega}{2\\lambda}\n\\label{eq:pham1}\n\\end{eqnarray}\nwhile $\\hat{\\overline{H}}$ is the AJC Hamiltonian interpreted as an anti-polariton qubit Hamiltonian in the form \\cite{omolo2017conserved}\n\n\\begin{eqnarray}\n\\hat{\\overline{H}}&=&\\hbar\\omega\\hat{\\overline{N}}+2\\hbar\\lambda\\hat{\\overline{A}}-\\frac{1}{2}\\hbar\\omega\\quad;\\quad\\hat{\\overline{N}}=\\hat{a}\\hat{a}^{\\dagger}+\\hat{s}_-\\hat{s}_+\\nonumber\\\\\\hat{\\overline{A}}&=&\\overline{\\alpha}\\hat{s}_z+\\hat{a}\\hat{s}_-+\\hat{a}^{\\dagger}\\hat{s}_+\\quad;\\quad\\overline{\\alpha}=\\frac{\\omega_0+\\omega}{2\\lambda}\n\\label{eq:antpham1}\n\\end{eqnarray}\nIn eqs.~\\eqref{eq:pham1} and \\eqref{eq:antpham1}, $\\hat{N}$, $\\hat{\\overline{N}}$ and $\\hat{A}$, $\\hat{\\overline{A}}$ are the respective polariton and anti-polariton qubit conserved excitation numbers and state transition operators.\n\nFollowing the physical property established in \\cite{omolo2019photospins}, that for the field mode in an initial vacuum state only an atom entering the cavity in an initial excited state $|e\\rangle$ couples to the rotating positive frequency field component in the JC interaction mechanism, while only an atom entering the cavity in an initial ground state $|g\\rangle$ couples to the anti-rotating negative frequency field component in an AJC interaction mechanism, we generally take the atom to be in an initial excited state $|e\\rangle$ in the JC model and in an initial ground state $|g\\rangle$ in the AJC model. 
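\n\nBefore turning to the dynamics, it is worth checking the decomposition in eq.~\\eqref{eq:rabi2} explicitly; this is only a restatement of algebra already contained in eqs.~\\eqref{eq:pham1} and \\eqref{eq:antpham1}. Adding the two qubit Hamiltonians gives\n\\[\n\\hat{H}+\\hat{\\overline{H}}=\\hbar\\omega(\\hat{a}^{\\dagger}\\hat{a}+\\hat{a}\\hat{a}^{\\dagger})+\\hbar\\omega(\\hat{s}_+\\hat{s}_-+\\hat{s}_-\\hat{s}_+)+2\\hbar\\lambda(\\alpha+\\overline{\\alpha})\\hat{s}_z+2\\hbar\\lambda(\\hat{a}+\\hat{a}^{\\dagger})(\\hat{s}_++\\hat{s}_-)-\\hbar\\omega\n\\]\nand, using $\\hat{s}_+\\hat{s}_-+\\hat{s}_-\\hat{s}_+=1$ for the two-level atom together with $\\alpha+\\overline{\\alpha}=\\omega_0\/\\lambda$, one half of this sum reproduces $\\hat{H}_R$ in eq.~\\eqref{eq:rabi1}.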
\n\nConsidering the AJC dynamics, applying the state transition operator $\\hat{\\overline{A}}$ from eq.~\\eqref{eq:antpham1} to the initial atom-field \\textit{n}-photon ground state vector $|g,n\\rangle$, the basic qubit state vectors $|\\psi_{gn}\\rangle$ and $|\\overline{\\phi}_{gn}\\rangle$ are determined in the form (\\textit{n}=0,1,2,....) \\cite{omolo2019photospins}\n\n\\begin{equation}\n|\\psi_{gn}\\rangle=|g,n\\rangle\\quad;\\quad|\\overline{\\phi}_{gn}\\rangle=-\\overline{c}_{gn}|g,n\\rangle+\\overline{s}_{gn}|e,n+1\\rangle\n\\label{eq:entsuptate}\n\\end{equation}\nwith dimensionless interaction parameters $\\overline{c}_{gn}$, $\\overline{s}_{gn}$ and Rabi frequency $\\overline{R}_{gn}$ defined as\n\n\\begin{eqnarray}\n\\overline{c}_{gn}&=&\\frac{\\overline{\\delta}}{2\\overline{R}_{gn}}\\quad;\\quad\\overline{s}_{gn}=\\frac{2\\lambda\\sqrt{n+1}}{\\overline{R}_{gn}}\\quad;\\quad\\overline{R}_{gn}=2\\lambda{\\overline{A}_{gn}}\\nonumber\\\\\n\\overline{A}_{gn}&=&\\sqrt{(n+1)+\\frac{\\overline{\\delta}^2}{16\\lambda^2}}\\quad;\\quad\\overline{\\delta}=\\omega_0+\\omega\n\\label{eq:parameters}\n\\end{eqnarray}\nwhere we have introduced sum frequency $\\overline{\\delta}=\\omega_0+\\omega$ to redefine $\\overline{\\alpha}$ in eq.~\\eqref{eq:antpham1}.\n\nThe qubit state vectors in eq.~\\eqref{eq:entsuptate} satisfy the qubit state transition algebraic operations\n\n\\begin{equation}\n\\hat{\\overline{A}}|\\psi_{gn}\\rangle=\\overline{A}_{gn}|\\overline{\\phi}_{gn}\\rangle\\quad;\\quad\\hat{\\overline{A}}|\\overline{\\phi}_{gn}\\rangle=\\overline{A}_{gn}|\\psi_{gn}\\rangle\n\\label{eq:traans}\n\\end{equation}\nIn the AJC qubit subspace spanned by normalized but non-orthogonal basic qubit state vectors $|\\psi_{gn}\\rangle$\\hspace{1mm},\\hspace{1mm} $|\\overline{\\phi}_{gn}\\rangle$ the basic qubit state transition operator $\\hat{\\overline{\\varepsilon}}_g$ and identity operator $\\hat{\\overline{I}}_g$ are introduced according to the definitions \\cite{omolo2019photospins}\n\n\\begin{equation}\n\\hat{\\overline{\\varepsilon}}_g=\\frac{\\hat{\\overline{A}}}{\\overline{A}_{gn}}\\quad;\\quad\\hat{\\overline{I}}_g=\\frac{\\hat{\\overline{A}}^2}{\\overline{A}_{gn}^2}\\quad\\Rightarrow\\quad\\hat{\\overline{I}}_g=\\hat{\\overline{\\varepsilon}}_g^2\n\\label{eq:anttransop1}\n\\end{equation}\nwhich on substituting into eq.~\\eqref{eq:traans} generates the basic qubit state transition algebraic operations\n\n\\begin{eqnarray}\n\\hat{\\overline{\\varepsilon}}_g|\\psi_{gn}\\rangle&=&|\\overline{\\phi}_{gn}\\rangle\\quad;\\quad\\hat{\\overline{\\varepsilon}}_g|\\overline{\\phi}_{gn}\\rangle=|\\psi_{gn}\\rangle\\nonumber\\\\\\hat{\\overline{I}}_g|\\psi_{gn}\\rangle&=&|\\psi_{gn}\\rangle\\quad;\\quad\\hat{\\overline{I}}_g|\\overline{\\phi}_g\\rangle=|\\overline{\\phi}_g\\rangle\n\\label{eq:algop11}\n\\end{eqnarray}\nThe algebraic properties \\hspace{0.5mm} $\\hat{\\overline{\\varepsilon}}^{2k}=\\hat{\\overline{I}}_g$ \\hspace{0.5mm} and \\hspace{0.5mm} $\\hat{\\overline{\\varepsilon}}^{2k+1}=\\hat{\\overline{\\varepsilon}}_g$ \\hspace{0.5mm}easily gives the final property \\cite{omolo2019photospins}\n\n\\begin{equation}\ne^{-i\\theta\\hat{\\overline{\\varepsilon}}_g}=\\cos(\\theta)\\hat{\\overline{I}}_g-i\\sin(\\theta)\\hat{\\overline{\\varepsilon}}_g\n\\label{eq:antialgprop}\n\\end{equation}\nwhich is useful in evaluating time-evolution operators.\n\nThe AJC qubit Hamiltonian defined within the qubit subspace spanned by the basic qubit state vectors $|\\psi_{gn}\\rangle$ , 
$|\\overline{\\phi}_{gn}\\rangle$ is then expressed in terms of the basic qubit state transition operators $\\hat{\\overline{\\varepsilon}}_g$, $\\hat{\\overline{I}}_g$ in the form \\cite{omolo2019photospins}\n\n\\begin{equation}\n\\hat{\\overline{H}}_g=\\hbar\\omega(n+\\frac{3}{2})\\hat{\\overline{I}}_g+\\hbar\\overline{R}_{gn}\\hat{\\overline{\\varepsilon}}_g\n\\label{eq:antijch2}\n\\end{equation}\n\\section{Quantum C-NOT gate operations}\n\\label{sec:qcgate}\n\nIn order to realise a C-NOT quantum gate operation in this context, we take a two-level atom as the control qubit, which is defined in a two-dimensional Hilbert space with $|e\\rangle$ and $|g\\rangle$ as basis vectors, where $|e\\rangle$ expresses the excited state of the two-level atom and $|g\\rangle$ indicates the ground state. Two non-degenerate, orthogonally polarized cavity modes $C_A$ and $C_B$ make up the target qubit. The target qubit is defined in a two-dimensional Hilbert space spanned by the state vector $|\\mu_1\\rangle=|1_A,0_B\\rangle$, which expresses the presence of one photon in mode A when there is no photon in mode B, and the state vector $|\\mu_2\\rangle=|0_A,1_B\\rangle$, which indicates that mode A is in the vacuum state and one photon is present in mode B.\n\nWith reference to the AJC qubit state transition operator in eq.~\\eqref{eq:antialgprop}, let us first consider the case when an atom in the ground state $|g\\rangle$ enters an electromagnetic cavity with mode A in the vacuum state and a single photon in mode B. The atom couples to the anti-rotating negative frequency component of the field mode, undergoing an AJC qubit state transition. After the atom interacts with mode A for a time $t=\\frac{\\pi}{\\overline{R}_{g0}}$, equal to half the Rabi oscillation time, the driving field is modulated such that $\\overline{R}_{g0}t=2\\lambda\\overline{A}_{g0}t=\\pi$. Redefining \\cite{omolo2019photospins}\n\n\\begin{equation}\n\\overline{\\alpha}=\\frac{\\overline{\\delta}}{2\\lambda}=\\frac{\\omega_0-\\omega+2\\omega}{2\\lambda}=\\frac{\\delta}{2\\lambda}+\\frac{\\omega}{\\lambda}=\\alpha+\\frac{\\omega}{\\lambda}\n\\end{equation}\nand considering a resonance case where $\\delta=\\omega_0-\\omega=0$ with $\\lambda\\gg\\omega$, $\\overline{\\alpha}$ becomes very small and the rotation angle is $\\theta=\\lambda{t}=\\frac{\\pi}{2}$, since $\\overline{A}_{g0}=1$ as determined from eq.~\\eqref{eq:parameters}. The evolution of this interaction, determined by applying the AJC qubit state transition operation in eq.~\\eqref{eq:antialgprop} and noting the definitions of $\\hat{\\overline{I}}_g$ and $\\hat{\\overline{\\varepsilon}}_g$ \\citep{omolo2019photospins} in eq.~\\eqref{eq:anttransop1}, is of the form\n\n\\begin{equation}\ne^{-i\\theta\\hat{\\overline{\\varepsilon}}_g}|g,0_A\\rangle=\\cos(\\theta)|g,0_A\\rangle-i\\sin(\\theta)|e,1_A\\rangle\n\\label{eq:modeA}\n\\end{equation}\nwhich reduces to\n\n\\begin{equation}\n|g,0_A\\rangle\\rightarrow-i|e,1_A\\rangle\n\\label{eq:flip1}\n\\end{equation}\nWe observe that the atom interacted with mode A and completed half of the Rabi oscillation; as a result, it contributed a photon to mode A and evolved to the excited state $|e\\rangle$. 
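\n\nTo make the action of eq.~\\eqref{eq:antialgprop} more explicit, note that in the resonance limit considered here ($\\overline{\\alpha}\\approx0$, so that $|\\overline{\\phi}_{g0}\\rangle\\approx|e,1_A\\rangle$) the pair $\\{|g,0_A\\rangle,|e,1_A\\rangle\\}$ forms an approximate qubit basis in which $\\hat{\\overline{I}}_g$ and $\\hat{\\overline{\\varepsilon}}_g$ are represented by the identity and the NOT (Pauli-$X$) matrices respectively. A sketch of eq.~\\eqref{eq:modeA} at $\\theta=\\frac{\\pi}{2}$ in this basis then reads\n\n\\begin{equation*}\ne^{-i\\frac{\\pi}{2}\\hat{\\overline{\\varepsilon}}_g}\\approx\\cos\\left(\\frac{\\pi}{2}\\right)\\left(\\begin{array}{cc}1&0\\\\0&1\\end{array}\\right)-i\\sin\\left(\\frac{\\pi}{2}\\right)\\left(\\begin{array}{cc}0&1\\\\1&0\\end{array}\\right)=\\left(\\begin{array}{cc}0&-i\\\\-i&0\\end{array}\\right)\n\\end{equation*}\nso that the column vector $(1,0)^{T}$ representing $|g,0_A\\rangle$ is mapped to $-i(0,1)^{T}$, in agreement with eq.~\\eqref{eq:flip1}.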
Now, after the interaction time, it enters mode B containing a single photon, interacting with the cavity mode as follows\n\n\\begin{equation}\n-ie^{i\\theta\\hat{\\overline{\\varepsilon}}_e}|e,1_B\\rangle=-i\\cos(\\theta)|e,1_B\\rangle+\\sin(\\theta)|g,0_B\\rangle\n\\label{eq:modeB}\n\\end{equation}\nAfter an interaction with mode B for a time $t_1=2t$ such that $t_1=\\frac{\\pi(\\overline{R}_{g0}+\\overline{R}_{e1})}{\\overline{R}_{g0}\\overline{R}_{e1}}$, the driving field is modulated such that $\\theta=\\left(\\frac{\\overline{R}_{g0}\\overline{R}_{e1}}{\\overline{R}_{g0}+\\overline{R}_{e1}}\\right)t=\\frac{\\pi}{2}$ with $\\overline{R}_{g0}=2\\lambda{\\overline{A}_{g0}}=2\\lambda$ since $\\overline{A}_{g0}=1$, and $\\overline{R}_{e1}=2\\lambda{\\overline{A}_{e1}}=2\\lambda$ since $\\overline{A}_{e1}=1$. Therefore, $\\lambda{t}=\\frac{\\pi}{2}$. The form of eq.~\\eqref{eq:modeB} results in the evolution\n\n\\begin{equation}\n-i|e,1_B\\rangle\\rightarrow|g,0_B\\rangle\n\\label{eq:flip2}\n\\end{equation}\nThe result in eq.~\\eqref{eq:flip2} shows that the atom evolves to the ground state and absorbs the photon initially in mode B. Therefore, the atom clearly performs a swapping of the electromagnetic field between the two field modes through this controlled interaction.\n\nWhen the atom in the ground state $|g\\rangle$ enters the electromagnetic cavity containing a single photon in mode A and mode B in the vacuum state, the atom and the field interact as follows\n\n\\begin{equation}\ne^{-i\\theta\\hat{\\overline{\\varepsilon}}_g}|g,0_B\\rangle=\\cos(\\theta)|g,0_B\\rangle-\\sin(\\theta)|e,1_B\\rangle\n\\label{eq:modeB2}\n\\end{equation}\nAfter an interaction with field mode B for a time $t=\\frac{\\pi}{\\overline{R}_{g0}}$, equal to half the Rabi oscillation time, the driving field is modulated such that $\\overline{R}_{g0}t=\\pi$, with $\\overline{R}_{g0}=2\\lambda{\\overline{A}}_{g0}=2\\lambda$ since $\\overline{A}_{g0}=1$. Therefore the rotation angle is $\\theta=\\lambda{t}=\\frac{\\pi}{2}$. The form of eq.~\\eqref{eq:modeB2} results in the evolution\n\n\\begin{equation}\n|g,0_B\\rangle\\rightarrow-|e,1_B\\rangle\n\\label{eq:flip3}\n\\end{equation}\nThe atom then enters mode A containing one photon and interacts as follows\n\n\\begin{equation}\n-e^{i\\theta\\hat{\\overline{\\varepsilon}}_e}|e,1_A\\rangle=-\\cos(\\theta)|e,1_A\\rangle-i\\sin(\\theta)|g,0_A\\rangle\n\\label{eq:modeA2}\n\\end{equation}\nAfter an interaction with the cavity mode for a time $t_1=2t$ such that $t_1=\\frac{\\pi(\\overline{R}_{e1}+\\overline{R}_{g0})}{\\overline{R}_{e1}\\overline{R}_{g0}}$, we obtain a driving field modulation $\\theta=\\left(\\frac{\\overline{R}_{e1}\\overline{R}_{g0}}{\\overline{R}_{e1}+\\overline{R}_{g0}}\\right)t=\\frac{\\pi}{2}$, with $\\overline{R}_{e1}=2\\lambda{\\overline{A}_{e1}}=2\\lambda$ since $\\overline{A}_{e1}=1$ and $\\overline{R}_{g0}=2\\lambda{\\overline{A}_{g0}}=2\\lambda$ since $\\overline{A}_{g0}=1$. Therefore $\\theta=\\lambda{t}=\\frac{\\pi}{2}$. 
The form of eq.~\\eqref{eq:modeA2} results in the evolution\n\n\\begin{equation}\n|e,1_A\\rangle\\rightarrow{i}|g,0_A\\rangle\n\\label{eq:flip4}\n\\end{equation}\nThis shows that the atom evolves to the ground state and performs a field swapping by absorbing the photon in mode A.\n\nWhen the atom in the excited state $|e\\rangle$ enters mode A in the vacuum state, that is, target qubit $|\\mu_2\\rangle$, the atom propagates as a free wave without coupling to the field mode in the vacuum state $|0\\rangle$ \\cite{omolo2019photospins}, leaving the cavity without altering the state of the cavity-field mode. A similar observation is made when the atom enters cavity B in the vacuum state for the case of target qubit $|\\mu_1\\rangle$.\n\nFrom the results obtained, we conclude that the target qubit made up of the electromagnetic field remains unchanged if the control qubit, that is, the two-level atom, is initially in the excited state $|e\\rangle$, while when the atom is in the ground state $|g\\rangle$, the cavity states $|0\\rangle$ and $|1\\rangle$ flip. We shall refer to this gate as the AJC C-NOT $(\\mathrm{atom}\\rightarrow\\mathrm{cavity})$.\n\n\\subsection{Probability of success of the C-NOT gate}\n\\label{sec:cnotgate}\nThe success probability for the C-NOT gate is given by\n\n\\begin{equation}\nP_s=1-(\\sin^2(\\theta_A)+\\cos^2(\\theta_A)\\sin^2(\\theta_B))\n\\label{eq:success1}\n\\end{equation}\nIn terms of the Rabi frequencies we write eq.~\\eqref{eq:success1} as\n\n\\begin{equation}\nP_s=1-(\\sin^2(\\overline{R}_A\\Delta{t_A})+\\cos^2(\\overline{R}_A\\Delta{t_A})\\sin^2(\\overline{R}_B\\Delta{t_B}))\n\\label{eq:success2}\n\\end{equation}\nFor the case in which the atom is in the ground state $|g\\rangle$ and enters an electromagnetic cavity with mode A in the vacuum state and a single photon in field mode B,\n\n\\begin{equation*}\n\\overline{R}_A=\\overline{R}_{g0}=2\\lambda{\\overline{A}_{g0}}=2\\lambda\n\\end{equation*}\n\n\\begin{equation*}\n\\Delta{t_A}=\\frac{\\pi}{\\overline{R}_A}=\\frac{\\pi}{\\overline{R}_{g0}}=\\frac{\\pi}{2\\lambda}\n\\end{equation*}\n\n\\begin{equation*}\n\\overline{R}_B=\\overline{R}_{e1}=2\\lambda{\\overline{A}_{e1}}=2\\lambda\n\\end{equation*}\n\n\\begin{equation}\n\\Delta{t_B}=\\frac{\\pi}{2}\\frac{(\\overline{R}_A+\\overline{R}_B)}{\\overline{R}_A\\overline{R}_B}=\\frac{\\pi}{2}\\frac{(\\overline{R}_{g0}+\\overline{R}_{e1})}{\\overline{R}_{g0}\\overline{R}_{e1}}=\\frac{\\pi}{2\\lambda}\n\\label{eq:first}\n\\end{equation}\nSubstituting eq.~\\eqref{eq:first} into eq.~\\eqref{eq:success2} we obtain\n\n\\begin{equation}\nP_s=1-(\\sin^2(\\pi)+\\cos^2(\\pi)\\sin^2(\\pi))=1\n\\label{eq:success3}\n\\end{equation}\nwhich is a unit probability of success.\n\nWhen the atom in the ground state $|g\\rangle$ enters an electromagnetic cavity containing a single photon in mode A, and mode B in the vacuum 
state\n\n\\begin{equation*}\n\\overline{R}_A=\\overline{R}_{e1}=2\\lambda{\\overline{A}_{e1}}=2\\lambda\n\\end{equation*}\n\n\\begin{equation*}\n\\Delta{t_A}=\\frac{\\pi}{2}\\frac{(\\overline{R}_A+\\overline{R}_B)}{\\overline{R}_A\\overline{R}_B}=\\frac{\\pi}{2}\\frac{(\\overline{R}_{e1}+\\overline{R}_{g0})}{\\overline{R}_{e1}\\overline{R}_{g0}}=\\frac{\\pi}{2\\lambda}\n\\end{equation*}\n\n\\begin{equation*}\n\\overline{R}_B=\\overline{R}_{g0}=2\\lambda{\\overline{A}_{g0}}=2\\lambda\n\\end{equation*}\n\n\\begin{equation}\n\\Delta{t_B}=\\frac{\\pi}{\\overline{R}_B}=\\frac{\\pi}{\\overline{R}_{g0}}=\\frac{\\pi}{2\\lambda}\n\\label{eq:second}\n\\end{equation}\nSubstituting eq.~\\eqref{eq:second} into eq.~\\eqref{eq:success2} we obtain\n\n\\begin{equation}\nP_s=1-(\\sin^2(\\pi)+\\cos^2(\\pi)\\sin^2(\\pi))=1\n\\label{eq:success4}\n\\end{equation}\nwhich is again a unit probability of success.\n\nWe observe that the success probabilities depend mainly upon the precise selection of the interaction times of the two-level atom with the successive cavity modes.\n\n\\section{Hadamard logic gate}\n\\label{sec:hadgatelogic}\n\nTo realise a Hadamard operation in the AJC interaction, we apply the qubit state transition operation in eq.~\\eqref{eq:anttransop1} and the general form in \\cite{omolo2017polariton, omolo2019photospins}. In this respect, we define the Hadamard operation at the sum frequency $\\overline{\\delta}=4\\lambda$ and $n=0$, specified for an initial atomic state $|g\\rangle$, as\n\n\\begin{equation}\n\\hat{\\overline{\\varepsilon}}_g=\\frac{1}{\\overline{A}_{g0}}\\left(2\\hat{s}_z+\\hat{a}\\hat{s}_-+\\hat{a}^\\dagger\\hat{s}_+\\right)\\hspace{5mm};\\hspace{5mm}\\overline{A}_{g0}=\\sqrt{2}\n\\end{equation}\nThe initial atomic state $|g\\rangle$ is rotated to\n\n\\begin{equation}\n|g\\rangle\\rightarrow\\frac{1}{\\sqrt{2}}(|e\\rangle-|g\\rangle)\n\\label{eq:hadop1}\n\\end{equation}\nSimilarly, the Hadamard operation at the sum frequency $\\overline{\\delta}=4\\lambda$ and $n=1$, specified for an initial atomic state $|e\\rangle$, is defined as \\citep{omolo2019photospins}\n\\begin{equation}\n\\hat{\\overline{\\varepsilon}}_e=\\frac{1}{\\overline{A}_{e1}}\\left(2\\hat{s}_z+\\hat{a}\\hat{s}_-+\\hat{a}^\\dagger\\hat{s}_+\\right)\\hspace{5mm};\\hspace{5mm}\\overline{A}_{e1}=\\sqrt{2}\n\\end{equation}\nThe initial atomic state $|e\\rangle$ is rotated to\n\n\\begin{equation}\n|e\\rangle\\rightarrow\\frac{1}{\\sqrt{2}}(|e\\rangle+|g\\rangle)\n\\label{eq:hadop2}\n\\end{equation}\nThe Hadamard transformations in eqs.~\\eqref{eq:hadop1} and \\eqref{eq:hadop2} realised in the AJC interaction process (AJC model) agree precisely with the standard definition in eq.~\\eqref{eq:mathhadamard1}.\n\\section{Conclusion}\n\\label{sec:conclusion}\nIn this paper we have shown how to implement quantum C-NOT and Hadamard gates in the anti-Jaynes-Cummings interaction mechanism. We obtained ideal unit probabilities of success due to the precise selection of interaction times during the C-NOT gate operations. We also realised efficient Hadamard operations through the application of the respective AJC qubit state transitions. 
\n\\section*{Acknowledgment}\n\nWe thank Maseno University, Department of Physics and Materials Science for providing a conducive environment to do this work.\n\n \\bibliographystyle{apsrev}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzitkr b/data_all_eng_slimpj/shuffled/split2/finalzzitkr new file mode 100644 index 0000000000000000000000000000000000000000..786db9b3866e7d258c621faf51eacb52cf4362f3 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzitkr @@ -0,0 +1,5 @@ +{"text":"\\section{introduction}\n\nThe quantum information theory in the relativistic framework has\nreceived considerable attention due to its theoretical importance\nand practical application \\cite{Peres,Boschi,Pan}. Especially, more\nand more efforts have been expended on the study of quantum\nentanglement in a relativistic setting because people consider the\nentanglement to be a major resource for quantum information tasks\nsuch as quantum teleportation, quantum computation and so on\n\\cite{Bouwmeester}. With the intention of studying the entanglement\nbetween accelerated observers, the fidelity of teleportation between\ntwo parties in relative uniform acceleration was discussed by Alsing\n\\emph{et al.} \\cite{Alsing-Milburn,Alsing-McMahon-Milburn}. Xian-Hui\nGe \\emph{et al.} extended the gravitational field of the\nteleportation to the four and higher dimensional spacetimes, and\neven explicitly discussed what effects the shape of the cavity in\nwhich particles are confined has on the teleportation in a black\nhole spacetime \\cite{Ge-Shen,Ge-Kim}. In order to further\ninvestigate the observer-dependent character of the entanglement,\nFuentes-Schuller \\emph{et al.} analyzed the entanglement between two\nmodes of a non-interacting massless scalar field when one of the\nobservers describing the state is uniformly accelerated\n\\cite{Schuller-Mann}. And then Alsing \\emph{et al.} calculated the\nentanglement between two modes of a free Dirac field described by\nrelatively accelerated parties in a flat spacetime\n\\cite{Alsing-Mann}. Their results \\cite{Schuller-Mann,Alsing-Mann}\nalso showed that the different type of field will have a\nqualitatively different effect on the degradation of entanglement\nproduced by the Unruh effect \\cite{Davies,unruh}. More recently, Ahn\n\\emph{et al.} extended the investigation to the entanglement of a\ntwo-mode squeezed state in Riemannian spacetime \\cite{Ahn-Kim}, Yi\nLing \\emph{et al.} discussed the entanglement of electromagnetic\nfield in noninertial reference frames \\cite{Ling}, and Adesso\n\\emph{et al.} investigated the distribution of entanglement between\nmodes of a free scalar field from the perspective of observers in\nuniform acceleration \\cite{Adesso}.\n\nAs a further step along this line, we will provide an analysis of\nthe entanglement for the scalar field in the spacetime of a most\ngeneral, static and asymptotically flat black hole with spherical\nsymmetry. It seems to be an interesting study to consider the\ninfluences of the Hawking effect\n\\cite{Hawking-1,Hawking-2,Hawking-3} on the quantum entangled states\nand show how the Hawking temperature will change the properties of\nthe entanglement and teleportation. 
Choosing a generically entangled\nstate as the initially entangled state for two observers in the flat\nregion of this black hole, we will also try to see what effects the\nuncertain entangled state will have on the degradation of\nentanglement in our scheme due to the presence of an arbitrary state\nparameter. Our scheme proposes that the two observers, Alice and\nBob, share an initially entangled state at the same initial point in\nflat Minkowski spacetime before the black hole is formed. After the\ncoincidence of Alice and Bob, Alice stays stationary at the\nasymptotically flat region, while Bob falls in toward the mass and\nthen hovers outside of it. Once Bob is safely hovering outside of\nthe object at some constant acceleration, let it collapse to form a\nblack hole. By Birkhoff's theorem \\cite{Birkhoff} this won't change\nthe metric outside of the black hole and therefore won't change\nBob's acceleration. Thus, Bob's detector registers only thermally\nexcited particles due to the Hawking effect \\cite{unruh-1,unruh-2}.\nIn order to investigate the teleportation between two modes of a\nscalar field as detected by the two observers, we assume that Alice\nand Bob each hold an optical cavity which is small and perfect for\nthe teleportation in the black hole spacetime. Just as suggested by\nRefs. \\cite{Alsing-Milburn,Alsing-McMahon-Milburn}, we further\nsuppose that each cavity supports two orthogonal modes, with the\nsame frequency, which are each excited to a single photon Fock state\nat the coincidence point for Alice and Bob. Different from the\nstandard teleportation protocol, our scheme assumes that Bob hovers\noutside of the object before it collapses, and turns on his detector\nafter the formation of the black hole. Then, Bob can check to see\nwhether any thermal photons have been excited in his local cavity\nusing the non-absorbing detector.\n\nThe organization of this paper is as follows. In Sec. 2 we discuss\nthe vacuum structure of the background spacetime and the Hawking\neffect for the scalar particles as experienced by the observer\noutside the black hole. In Sec. 3 we analyze the effects of the\nHawking temperature on the entanglement between the modes for the\ndifferent state parameter. In Sec. 4 we describe the process of the\nteleportation between Alice and Bob, and calculate the fidelity of\nteleportation. We summarize and discuss our conclusions in the last\nsection.\n\n\\section{Vacuum structure and Hawking Radiation of scalar field}\n\nIt is well known that the spherically symmetric line element of a\nstatic and asymptotically flat black hole such as Schwarzschild\nblack hole, Reissner-Nordstr\\\"{o}m black hole \\cite{Chandrasekhar},\nGarfinkle-Horowitz-Strominger dilaton black hole \\cite{Horowitz},\nCasadio-Fabbri-Mazzacurati (CFM) brane black hole \\cite{Casadio} and\nso on can be written in the form\n\\begin{eqnarray}\\label{metric}\nds^2=f(r)dt^{2}-\\frac{1}{h(r)}dr^{2}-R^{2}(r)(d\\theta^{2}+\\sin\\theta^{2}d\\varphi^{2}),\n\\end{eqnarray}\nwhere the functions $f(r)$ and $h(r)$ vanish at the event horizon\n$r=r_{+}$ of the black hole. Throughout this paper we use\n$G=c=\\hbar=\\kappa_{B}=1$. It is obvious that the surface gravity of\nthe event horizon is determined by\n$\\kappa=\\sqrt{f'(r_{+})h'(r_{+})}\/2$. 
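For instance, in the Schwarzschild case one has $f(r)=h(r)=1-\\frac{2M}{r}$ and $R(r)=r$, where $M$ is the black hole mass, so that the event horizon is at $r_{+}=2M$ and\n\\begin{eqnarray}\n\\kappa=\\frac{1}{2}\\sqrt{f'(r_{+})h'(r_{+})}=\\frac{1}{2}f'(r_{+})=\\frac{1}{4M},\\nonumber\n\\end{eqnarray}\nwhich corresponds to the familiar Hawking temperature $T=\\frac{\\kappa}{2\\pi}=\\frac{1}{8\\pi M}$ in the units adopted here. 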
Defining the tortoise coordinates $r_{*}$ as $dr_{*}=dr\/\\sqrt{f(r)h(r)}$, we can rewrite the metric (\\ref{metric}) as\n\\begin{eqnarray}\\label{new metric}\nds^2=f(r)(dt^{2}-dr_{*}^{2})-R^{2}(r)(d\\theta^{2}+\\sin\\theta^{2}d\\varphi^{2}).\n\\end{eqnarray}\n\nThe massless scalar field $\\psi$ satisfies the Klein-Gordon equation\n\\begin{eqnarray}\n\\label{K-G Equation}\\frac{1}{\\sqrt{-g}}\\frac{{\\partial}}{\\partial x^{\\mu}} \\left(\\sqrt{-g}g^{\\mu\\nu}\\frac{\\partial\\psi}{\\partial x^{\\nu}}\\right)=0.\n\\end{eqnarray}\nAfter expressing the normal mode solution as \\cite{unruh,D-R}\n\\begin{eqnarray}\n\\psi_{\\omega lm}=\\frac{1}{R(r)}\\chi_{\\omega l}(r)Y_{lm}(\\theta,\\varphi)e^{-i\\omega t},\n\\end{eqnarray}\nwe can easily get the radial equation\n\\begin{eqnarray}\\label{radial equation}\n\\frac{d^{2}\\chi_{\\omega l}}{dr_{*}^{2}}+[\\omega^{2}-V(r)]\\chi_{\\omega l}=0,\n\\end{eqnarray}\nwith\n\\begin{eqnarray}\nV(r)=\\frac{\\sqrt{f(r)h(r)}}{R(r)}\\frac{d}{dr}\\left[\\sqrt{f(r)h(r)}\\frac{d R(r)}{dr}\\right]+\\frac{l(l+1)f(r)}{R^{2}(r)},\n\\end{eqnarray}\nwhere $Y_{lm}(\\theta,\\varphi)$ is a scalar spherical harmonic on the unit two-sphere. Solving Eq. (\\ref{radial equation}) near the event horizon, we obtain the incoming wave function which is analytic everywhere in the spacetime manifold \\cite{D-R}\n\\begin{eqnarray}\n\\psi_{in,\\omega lm}=e^{-i\\omega v}Y_{lm}(\\theta,\\varphi),\n\\end{eqnarray}\nand the outgoing wave functions for the inside and outside regions of the event horizon\n\\begin{eqnarray}\\label{inside mode}\n\\psi_{out,\\omega lm}(r<r_{+})=e^{-i\\omega u}Y_{lm}(\\theta,\\varphi),\n\\end{eqnarray}\n\\begin{eqnarray}\\label{outside mode}\n\\psi_{out,\\omega lm}(r>r_{+})=e^{-i\\omega u}Y_{lm}(\\theta,\\varphi),\n\\end{eqnarray}\nwhere $v=t+r_{*}$ and $u=t-r_{*}$. Eqs. (\\ref{inside mode}) and (\\ref{outside mode}) are analytic inside and outside the event horizon respectively, so they form a complete orthogonal family. In second-quantizing the field $\\Phi_{out}$ in the exterior of the black hole we can expand it as follows \\cite{unruh}\n\\begin{eqnarray}\\label{First expand}\n&&\\Phi_{out}=\\sum_{lm}\\int d\\omega[b_{in,\\omega lm}\\psi_{out,\\omega lm}(r<r_{+})+b^{\\dag}_{in,\\omega lm}\\psi^{*}_{out,\\omega lm}(r<r_{+})\\nonumber\\\\\n&&+b_{out,\\omega lm}\\psi_{out,\\omega lm}(r>r_{+})+b^{\\dag}_{out,\\omega lm}\\psi^{*}_{out,\\omega lm}(r>r_{+})],\n\\end{eqnarray}\nwhere $b_{in,\\omega lm}$ and $b^{\\dag}_{in,\\omega lm}$ are the annihilation and creation operators acting on the vacuum of the interior region of the black hole, and $b_{out,\\omega lm}$ and $b^{\\dag}_{out,\\omega lm}$ are the annihilation and creation operators acting on the vacuum of the exterior region respectively. Thus, the Fock vacuum state can be defined as\n\\begin{eqnarray}\\label{dilaton vacuum}\nb_{in,\\omega lm}|0\\rangle_{in}=b_{out,\\omega lm}|0\\rangle_{out}=0.\n\\end{eqnarray}\n\nIntroducing the generalized light-like Kruskal coordinates \\cite{D-R,Sannan,Zhao,Birrell}\n\\begin{eqnarray}\n&&U=-\\frac{1}{\\kappa}e^{-\\kappa u},\\quad V=\\frac{1}{\\kappa}e^{\\kappa v},\\quad {\\rm if\\quad r>r_{+}};\\nonumber\\\\\n&&U=\\frac{1}{\\kappa}e^{-\\kappa u},\\quad V=\\frac{1}{\\kappa}e^{\\kappa v},\\quad {\\rm if\\quad r<r_{+}},\n\\end{eqnarray}\nwe can make an analytic continuation of Eqs. (\\ref{inside mode}) and (\\ref{outside mode}) and obtain a complete basis of positive energy modes which are analytic across the event horizon \\cite{D-R}\n\\begin{eqnarray}\n&&e^{\\frac{\\pi\\omega}{2\\kappa}}\\psi_{out,\\omega lm}(r>r_{+})+e^{-\\frac{\\pi\\omega}{2\\kappa}}\\psi^{*}_{out,\\omega lm}(r<r_{+}),\\nonumber\\\\\n&&e^{-\\frac{\\pi\\omega}{2\\kappa}}\\psi^{*}_{out,\\omega lm}(r>r_{+})+e^{\\frac{\\pi\\omega}{2\\kappa}}\\psi_{out,\\omega lm}(r