diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzxjn" "b/data_all_eng_slimpj/shuffled/split2/finalzxjn" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzxjn" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nA quandle is a set $X$ together with a binary operation\n$*:X\\times X\\to X$ satisfying certain conditions (see definition\non example \\ref{exrack} below), it generalizes the operation\nof conjugation on a group, but also is an algebraic structure that behaves well with respect to\nReidemeister moves, so it is very useful for defining knot\/links invariants. Knot theorists\nhave defined a cohomology theory for quandles (see \\cite{CJKS}\nand \\cite{tCES}) in such a way that 2-cocycles give rise to knot invariants by means of \nthe so-called state-sum procedure. \nBiquandles are generalizations of quandles in the sense that quandles give rise to solutions\nof the Yang-Baxter equation by setting $\\sigma(x,y):=(y,x*y)$. For biquandles there is also a cohomology theory and state-sum procedure for producing knot\/links invariants\n(see \\cite{CES}).\n\nIn this work, for a set theoretical solution of the Yang-Baxter equation $(X,\\sigma)$, we\ndefine a d.g. algebra $B=B(X,\\sigma)$, containing the semigroup algebra\n$A=k\\{X\\}\/\\langle xy=zt : \\sigma(x,y)=(z,t)\\rangle$,\n such that $k\\otimes_AB\\otimes_Ak$ and $\\mathrm{Hom}_{A-A}(B,k)$ are respectively\nthe standard homology and cohomology complexes attached to general set theoretical\nsolutions of the Yang-Baxter equation. \nWe prove that this d.g. algebra has a natural structure of d.g. {\\em bialgebra}\n(Theorem \\ref{teobialg}). Also, depending on properties of the solution $(X,\\sigma)$\n(square free, quandle type, biquandle, involutive,...) this d.g. bialgebra $B$ has \nnatural (d.g. bialgebra) quotients, giving rise to the standard\nsub-complexes computing quandle cohomology (as sub-complex of rack homology),\n biquandle cohomology, etc.\n\nAs a first consequence of our construction, we give a very simple and purely algebraic proof\nof the existence of a cup product in cohomology. This was known for rack cohomology\n(see \\cite{Cl}), the proof was based on topological methods, but it was unknown for biquandles\nor general solutions of the Yang-Baxter equation.\nA second consequence is the existence of a comparison map between Yang-Baxter (co)homology\nand Hochschild (co)homology of the semigroup algebra $A$. Looking carefully this comparison map\nwe prove that it factors through a complex of \"size\" $A\\otimes \\mathfrak{B}\\otimes A$, where\n$\\mathfrak{B}$ is the Nichols algebra associated to the solution $(X,-\\sigma)$.\nThis result leads to new questions, for instance when $(X,\\sigma)$ is\n involutive (that is $\\sigma^2=\\mathrm{Id}$) and the characteristic\nis zero we show that this complex is acyclic (Proposition \\ref{propinvo}), we wander if\n this is true in any other characteristic, and for non necessarily involutive solutions.\n\n\n\n{\\bf Acknowledgements:}\nThe first author wishes to thank Dominique Manchon for fruitful discussion during a visit to\n Laboratoire de math\\'ematiques de l'Universit\\'e Blaise Pascal where a preliminary version of \nthe bialgebra $B$ for racks came up. 
He also wants to thank Dennis Sullivan for a very pleasant\nstay in Stony Brook, where the contents of this work were discussed in detail, in particular\nthe role of Proposition \\ref{fnormal} in the whole construction.\n\n\\subsection{Basic definitions}\nA set theoretical solution of the Yang-Baxter equation (YBeq) is a pair $(X, \\sigma)$ where $\\sigma: X\\times X\\rightarrow X\\times X$\nis a bijection satisfying \n\\[\n (\\mathrm{Id}\\times\\sigma)(\\sigma\\times \\mathrm{Id})(\\mathrm{Id}\\times\\sigma)=(\\sigma\\times \\mathrm{Id})(\\mathrm{Id}\\times\\sigma)(\\sigma\\times \\mathrm{Id}):X\\times X\\times X\\rightarrow X\\times X\\times X\n\\]\n\nIf $X=V$ is a $k$-vector space and $\\sigma$ is a linear bijective map satisfying the YBeq,\nthen it is called a braiding on $V$.\n\n\\begin{ex}\\label{exrack} A set $X$ with a binary operation $\\triangleleft:X\\times X\\rightarrow X$ is called a rack if \n\\begin{itemize}\n \\item $-\\triangleleft x:X\\rightarrow X$ is a bijection $\\forall x\\in X$ and \n \\item $(x\\triangleleft y)\\triangleleft z=(x\\triangleleft z)\\triangleleft (y\\triangleleft z)$ $\\forall x,y,z \\in X$. \n\\end{itemize}\n\n$x\\triangleleft y $ is usually denoted by $x^y$.\n\nIf $X$ also satisfies $x\\triangleleft x=x$ for all $x$, then $X$ is called a {\\em quandle}.\n\nAn important example of a rack is $X=G$ a group, with $x\\triangleleft y=y^{-1}xy$.\n\nIf $(X,\\triangleleft)$ is a rack, then \\[\n \\sigma(x,y)=(y, x\\triangleleft y)\n \\]\n is a set theoretical solution of the YBeq.\n\\end{ex} \n\n Let $M=M_X$ be the monoid freely generated by $X$ with relations \\[\n xy=zt\n \\]\n$\\forall x,y,z,t$ such that $\\sigma(x,y)=(z,t)$. Denote by $G_X$ the group with the same generators and relations.\nFor example, when \n$\\sigma= \\text{flip}$ then $M=\\mathbb{N}_0^{(X)}$ and $G_X=\\mathbb{Z}^{(X)}$. If $\\sigma=\\mathrm{Id}$ then $M$ is the free (non-abelian)\nmonoid on $X$. If $\\sigma$ comes from a rack $(X,\\triangleleft)$ then $M$ is the monoid with relations \n$xy=y(x\\triangleleft y)$ and $G_X$ is the group with relations $x\\triangleleft y=y^{-1}xy$.\n\n\n\\section{A d.g. bialgebra associated to $(X,\\sigma)$}\nLet $k$ be a commutative ring with 1. 
\nFix $X$ a set, and $\\sigma:X\\times X\\to X\\times X$ a solution of the YBeq.\nDenote $A_\\sigma(X)$, or simply $A$ if $X$ and $\\sigma$ are understood,\n the quotient of the free $k$ algebra on generators $X$\nmodulo the ideal generated by elements of the form $xy-zt$ whenever $\\sigma(x,y)=(z,t)$:\n\\[\nA:=k\\langle X\\rangle\/\\langle xy-zt : x,y\\in X,\\ (z,t)=\\sigma(x,y)\\rangle=k[M]\n\\]\nIt can be easily seen that\n$A$ is a $k$-bialgebra declaring $x$ to be grouplike for any $x\\in X$,\n since $A$ agrees with the semigroup-algebra on $M$ (the monoid\nfreely generated by $X$ with relations $xy\\sim zt$).\nIf one considers $G_X$, the group freely generated by\n$X$ with relations $xy=zt$, then $k[G_X]$ is the (non commutative) localization of $A$, where one has inverted the\nelements of $X$.\nAn example of $A$-bimodule that will be used later, which is actually a $k[G_X]$-module, is\n $k$ with $A$-action determined on generators by\n\\[\nx\\lambda y=\\lambda, \\ \\forall x,y\\in X,\\ \\lambda\\in k\n\\]\nWe define $B(X,\\sigma)$ (also denoted by $B$) the algebra freely generated \nby three copies of $X$, denoted $x$, $e_x$ and $x'$,\nwith relations as follows:\nwhenever $\\sigma(x,y)=(z,t)$ we have\n\\begin{itemize}\n\\item $ xy\\sim zt$ , $xy'\\sim z't$, $x'y'\\sim z't'$\n\\item $ xe_{y}\\sim e_zt$, $ e_xy'\\sim z'e_{t}$\n\\end{itemize}\nSince the relations are homogeneous, $B$ is a graded algebra declaring\n \\[\n |x|=|x'|=0,\\ \\ |e_x|=1\n \\]\n\n\\begin{teo}\\label{teobialg}\n The algebra $B$ admits the structure of a differential graded bialgebra, with $d$ the unique superderivation satisfying\n \\[\n d(x)=d(x')=0,\\ \\\n d(e_x)=x-x'\n \\]\nand comultiplication determined by \n\\[\n\\Delta(x)=x\\otimes x,\\\n\\Delta(x')=x'\\otimes x',\\\n\\Delta(e_x)=x'\\otimes e_x+e_x\\otimes x\n\\]\n\\end{teo}\nBy differential graded bialgebra we mean that the differential is both a derivation with respect to multiplication, and \ncoderivation with respect to comultiplication.\n\\begin{proof}\nIn order to see that $d$ is well-defined as super derivation, one must check that the relations\n are compatible with $d$. The first relations\nare easier since\n\\[\nd(xy-zt)=\nd(x)y+xd(y)-d(z)t-zd(t)=0+0-0-0=0\n\\]\nand similar for the others\n(this implies that $d$ is $A$-linear and $A'$-linear). 
For the rest of the relations:\n\\[\nd(xe_{y}-e_zt)=\nxd(e_y)-d(e_z)t\n=\nx(y-y')-(z-z')t\n\\]\n\\[\n=xy-zt-(xy'-z't)=0\n\\]\n\\[\n d(e_xy'-z'e_{t})\n=(x-x')y'-z'(t-t')\n=xy'-z't -(x'y'-z't')=0\n\\]\nIt is clear now that $d^2=0$ since $d^2$ vanishes on generators.\nIn order to see that $\\Delta$ is well defined, we compute\n\n\\[\n \\Delta(xe_y-e_zt)\n =\n(x\\otimes x)( y'\\otimes e_y+e_y\\otimes y)\n-(z'\\otimes e_z+e_z\\otimes z)(t\\otimes t)\n\\]\n\\[=\nxy'\\otimes xe_y+xe_y\\otimes xy\n-z't\\otimes e_zt-e_zt\\otimes zt\n\\]\nand using the relations we get\n\\[=\nxy'\\otimes xe_y+xe_y\\otimes xy\n-xy'\\otimes xe_y-xe_y\\otimes xy=0\n\\]\nsimilarly, for the remaining relation $e_xy'\\sim z'e_t$,\n\\[\n \\Delta(e_xy'-z'e_t)\n =\n(x'\\otimes e_x+e_x\\otimes x)(y'\\otimes y')\n-(z'\\otimes z')(t'\\otimes e_t+e_t\\otimes t)\n\\]\n\\[\n=\nx'y'\\otimes e_xy'+e_xy'\\otimes xy'\n-z't'\\otimes z'e_t-z'e_t\\otimes z't\n\\]\nand using the relations $x'y'\\sim z't'$, $e_xy'\\sim z'e_t$ and $xy'\\sim z't$ we get\n\\[=\nx'y'\\otimes e_xy'+e_xy'\\otimes xy'\n-x'y'\\otimes e_xy'-e_xy'\\otimes xy'=0\n\\]\nThis proves that $B$ is a bialgebra, and $d$ is (by construction) a derivation.\nLet us see that it is also a coderivation:\n\\[\n(d\\otimes 1+1\\otimes d)(\\Delta(x))=\n(d\\otimes 1+1\\otimes d)(x\\otimes x)=0=\\Delta(0)=\\Delta(dx)\n\\]\nthe computation for $x'$ is the same. For $e_x$:\n\\[\n(d\\otimes 1+1\\otimes d)(\\Delta(e_x))=\n(d\\otimes 1+1\\otimes d)(x'\\otimes e_x+e_x\\otimes x)\n\\]\n\\[=\nx'\\otimes (x-x')+(x-x')\\otimes x\n=\nx'\\otimes x-x'\\otimes x'+x\\otimes x\n-x'\\otimes x\n\\]\n\\[=\n-x'\\otimes x'+x\\otimes x\n=\\Delta(x-x')=\\Delta(de_x)\n\\]\n \\end{proof}\n \\begin{rem}\n $\\Delta$ is coassociative.\n \\end{rem}\n\nFor a particular element of the form\n$b=e_{x_1}\\dots e_{x_n}$, the formula for $d(b)$ can be computed as follows:\n\\[ \nd(e_{x_1}\\dots e_{x_n})=\\sum_{i=1}^{n} (-1)^{i+1}e_{x_1}\\dots e_{x_{i-1}}d(e_{x_i}) e_{x_{i+1}}\\dots e_{x_n}\n\\]\n\\[\n=\\sum_{i=1}^{n} (-1)^{i+1}e_{x_1}\\dots e_{x_{i-1}}(x_i-x'_i)e_{x_{i+1}}\\dots e_{x_n}\n\\]\n\\[=\n\\overbrace{\\sum_{i=1}^{n} (-1)^{i+1}e_{x_1}\\dots e_{x_{i-1}}x_ie_{x_{i+1}}\\dots e_{x_n}}^{I}\n -\\overbrace{\\sum_{i=1}^{n} (-1)^{i+1}e_{x_1}\\dots e_{x_{i-1}}x'_ie_{x_{i+1}}\\dots e_{x_n}}^{II}\n\\]\nIf one wants to write it in a normal form (say, every $x$ on the right, every $x'$ on the left,\nand the $e_x$'s in the middle), then one should use the relations in $B$: this might be a very\ncomplicated formula, depending on the braiding. We give examples in some particular cases.\nLet us denote $\\sigma(x,y)=(\\sigma^1\\!(x,y), \\sigma^2(x,y))$. 
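For instance, for the braiding coming from a rack as in Example \\ref{exrack}, $\\sigma(x,y)=(y,x\\triangleleft y)$, the two components are simply\n\\[\n\\sigma^1\\!(x,y)=y,\\qquad \\sigma^2(x,y)=x\\triangleleft y=x^y,\n\\]\nwhile for the flip $\\sigma(x,y)=(y,x)$ one has $\\sigma^1\\!(x,y)=y$ and $\\sigma^2(x,y)=x$.\n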
\n\n\\begin{comment}\nUsing the relations in $B$ one has\n\\[\nI=\\sum^n_{i=1}(-1)^{i+1}e_{x_1}\\dots e_{x_{i-1}}e_{y_{i+1}^1}\\dots e_{y_n^1}y_{n,i}^2\n\\]\nwhere \n\\[\n\\begin{array}{rcl}\ny_{i+1,i}&=&(\\sigma^1\\!(x_i, x_{i+1}), \\sigma^2(x_i, x_{i+1}))\\\\\n y_{i+2,i}&=& (\\sigma^1\\!(y_{i+1,i}^2, x_{i+2}), \\sigma^2(y_{i+1,i}^2, x_{i+2}))\\\\\n y_{i+3,i}&=&(\\sigma^1\\!(y_{i+2,i}^2,x_{i+3}),\\sigma^2(y_{i+2,i}^2,x_{i+3}))\\\\\n\\vdots&&\\vdots\\\\\n y_{n,i}&=&(\\sigma^1\\!(y_{n-1,i}^2,x_n),\\sigma^2(y_{n-1,i}^2,x_n))\n\\end{array}\n\\]\nand similarly\n\\[\nII=\\sum^n_{i=1}(-1)^{i+1}(z_{1,i}^1)'e_{z_{1,i}^2}\\dots e_{z_{i-2,i}^2}e_{z_{i-1,i}^2}e_{x_i+1}\\dots e_{x_n}\n\\]\n\nwhere\n\n\\[\\begin{array}{rcl}\n z_{i-1,i}&=&(\\sigma^1\\!(x_{i-1},x_i),\\sigma^2(x_{i-1},x_i))\n\\\\\n z_{i-2,i}&=&(\\sigma^1\\!(x_{i-2},z_{i-1,i}^1),\\sigma^2(x_{i-2},z_{i-1,i}^1))\n\\\\\n\\vdots&&\\vdots\n\\\\\nz_{1,i}&=&(\\sigma^1\\!(x_1,z_{2,i}^1),\\sigma^2(x_1,z_{2,i}^1))\n\\end{array}\\]\n\n\\[\n \\partial f(x_1,\\dots,x_n)=f(d(e_{x_1}\\dots e_{x_n}))=\\]\n\\[\n \\sum^n_{i=1}(-1)^{i+1}\\left(f(x_1,\\dots, x_{i-1},y_{i+1,i}^1,\\dots,\n y_{n,i}^1)y_{n,i}^2-(z_{1,i}^1)'f(z_{1,i}^2,\\dots,z_{i-1,i}^2,x_{i+1},\\dots, x_n)\\right)\n \\]\n\n\\end{comment}\n \n \n \\begin{ex} In low degrees we have\n \\begin{itemize}\n \\item $d(e_x)=x-x'$\n \\item $d(e_xe_y)=(e_zt-e_xy)-(x'e_y-z'e_t)$, where as usual $\\sigma(x,y)=(z,t)$.\n \\item $d(e_{x_1}e_{x_2}e_{x_3})=A_I-A_{II}$\n where\n \n $A_I=e_{\\sigma^1\\!(x_1,x_2)}e_{\\sigma^1\\!(\\sigma^2(x_1,x_2),x_3)}\\sigma^2(\\sigma^2(x_1,x_2),x_3)-e_{x_1}e_{\\sigma^1\\!(x_2,x_3)}\n \\sigma^2(x_2,x_3)+e_{x_1}e_{x_2}x_3$\n \n $A_{II}= x_1'e_{x_2}e_{x_3}-\\sigma^1\\!(x_1,x_2)'e_{\\sigma^2(x_1,x_2)}e_{x_3}+\n \\sigma^1\\!(x_1,\\sigma^1\\!(x_2,x_3))'e_{\\sigma^2(x_1,\\sigma^1\\!(x_2,x_3))}e_{\\sigma^2(x_2,x_3)}$\n \n In particular, if $f:B\\to k$ is an $A$-$A'$ linear map, then\n \\[\nf(d(e_{x_1}e_{x_2}e_{x_3}))= \nf(e_{\\sigma^1\\!(x_1,x_2)}e_{\\sigma^1\\!(\\sigma^2(x_1,x_2),x_3)})\n-f(e_{x_1}e_{\\sigma^1\\!(x_2,x_3)})+f(e_{x_1}e_{x_2})\n\\]\\[-f(e_{x_2}e_{x_3})+f(e_{\\sigma^2(x_1,x_2)}e_{x_3})-\nf(e_{\\sigma^2(x_1,\\sigma^1\\!(x_2,x_3))}e_{\\sigma^2(x_2,x_3)})\n\\]\nErasing the $e$'s we notice the relation with the cohomological complex given in \\cite{CES},\nsee Theorem \\ref{teocomplejo} below.\n\n \\end{itemize}\nIf $X$ is a rack and $\\sigma$ the braiding defined by $\\sigma(x,y)=(y,x\\triangleleft y)=(x,x^y)$, then:\n \\begin{itemize}\n \\item $d(e_x)=x-x'$\n \\item $d(e_xe_y)=\n( e_{y}x^{y}\n- e_{x}y)\n-(x'e_{y}-y'e_{x^y})$\n \\item $d(e_xe_ye_z)=\ne_xe_yz\n-e_xe_zy^z\n+e_ye_zx^{yz}\n-x'e_ye_z\n+y'e_{x^y}e_z\n-z'e_{x^z}e_{y^z}$.\n\\item In general, expressions I and II are\n\\[\n I=\\sum_{i=1}^{n} (-1)^{i+1} e_{x_1}\\dots e_{x_{i-1}}e_{x_{i+1}}\\dots e_{x_n}x_{i}^{x_{i+1}\\dots x_{n}}\n\\]\n\\[\n II=\\sum_{i=1}^{n} (-1)^{i+1}x'_ie_{x_1^{x_i}}\\dots e_{x_{i-1}^{x_i}}e_{x_{i+1}}\\dots e_{x_n}\n\\]\n\n\nthen\n\n\\[\n \\partial f(x_1,\\dots,x_n)=f(d(e_{x_1}\\dots e_{x_n}))=\\]\n\\[\n \\sum_{i=1}^{n} (-1)^{i+1} \\left(f(x_1,\\dots, x_{i-1},x_{i+1},\\dots, x_n)x_{i}^{x_{i+1}\\dots x_{n}}-x'_if({x_1}^{x_i},\\dots\n , x_{i-1}^{x_i},x_{i+1},\\dots, x_n)\\right)\n \\]\n \nLet us consider $k\\otimes_{k[M']} B\\otimes_{k[M]} k$\nthen $d$ represents the canonical differential of rack homology and \n$\\partial f (e_{x_1}\\dots e_{x_n})=f(d(e_{x_1}\\dots e_{x_n}))$ gives the traditional rack cohomology structure. 
\n\nIn particular, taking trivial coefficients:\n \\[\n \\partial f(x_1,\\dots,x_n)=f(d(e_{x_1}\\dots e_{x_n}))=\\]\n\\[\n \\sum_{i=1}^{n} (-1)^{i+1} \\left(f(x_1,\\dots, x_{i-1},x_{i+1},\\dots, x_n)-f({x_1}^{x_i},\\dots, x_{i-1}^{x_i},x_{i+1}\\dots, x_n)\\right)\n\\]\n\n \\end{itemize}\n\n\\end{ex}\n\n\n \n \\begin{teo}\\label{teocomplejo}\nTaking in $k$ the trivial $A'$-$A$-bimodule, the complexes associated to set theoretical Yang-Baxter solutions\ndefined in \\cite{CES} can be recovered as\n\\[\n (C_\\bullet(X,\\sigma), \\partial)\\simeq (k\\otimes_{A'} B_\\bullet \\otimes_{A} k, \\partial=id_k\\otimes_{A'} d\\otimes_{A}id_k)\n\\]\n\n\\[\n (C^\\bullet(X,\\sigma), \\partial^*)\\simeq (\\mathrm{Hom}_{A'-A}(B, k), \\partial^*=d^*)\n\\]\n \\end{teo}\nIn the proof of the theorem we will assume first Proposition \\ref{fnormal}\nthat says that one has a left $A'$-linear and right $A$-linear\nisomorphism: \n\\[B\\cong A'\\otimes TE\\otimes A\\]\nwhere $A'=TX'\/(x'y'=z't': \\sigma(x,y)=(z,t))$ and $A=TX\/(xy=zt: \\sigma(x,y)=(z,t))$.\n We will prove \n Proposition \\ref{fnormal} later.\n\n \\begin{proof}\n\nIn this setting every expression in $x,x',e_x$, using the relations defining $B$, can be written as\n$x'_{i_1}\\cdots x'_{i_n} e_{x_1}\\cdots e_{x_k} x_{j_1}\\cdots x_{j_l}$, tensorizing leaves the expression \n$$1\\otimes e_{x_1} \\cdots e_{x_k}\\otimes 1$$\nThis shows that $T=k\\otimes_{k[M']} B\\otimes_{k[M]} k\\simeq T\\{e_x\\}_{x\\in X}$, where $\\simeq$ means isomorphism of $k$-modules. \nThis also induces isomorphisms of complexes\n\n\\[\n (C_\\bullet(X, \\sigma), \\partial)\\simeq (k\\otimes_{A'} B_\\bullet \\otimes_{A} k, \\partial= id_k\\otimes_{A'} d\\otimes_{A}id_k)\n\\]\n\n\\[\n (C^\\bullet(X, \\sigma), \\partial^*)\\simeq (\\mathrm{Hom}_{A'-A}(B, k), d^*)\n\\]\n \\end{proof}\n \nNow we will prove Proposition \\ref{fnormal}:\nCall\n$Y=\\langle x,x',e_x\\rangle_{x\\in X}$ the free monoid in $X$ with unit 1, $k\\langle Y\\rangle$ the $k$ algebra associated to $Y$.\nLets define\n$w_1=xy'$, $w_2=xe_y$ and $w_3=e_xy'$.\nLet $S=\\{r_1,r_2,r_3\\}$ be the reduction system defined as follows: $r_i:k\\langle Y\\rangle\\rightarrow k\\langle Y\\rangle$ the families of \n$k$-module endomorphisms\nsuch that $r_i$ fix all elements except \n\n$r_1(xy')=z't$,\\ \\ $r_2(xe_y)=e_zt$ and \n$r_3(e_xy')=z'e_t$. \n\nNote that $S$ has more than 3 elements, each $r_i$ is a family of reductions.\n\\\n\n\n\\begin{defi}\nA reduction $r_i$ {\\em acts trivially} on an element $a$ if $w_i$ does not apear in $a$, ie: $Aw_iB$ apears with coefficient 0.\n\\end{defi}\n\nFollowing \\cite{B}, $a\\in k\\langle Y\\rangle$ is called {\\em irreducible} if $Aw_iB$ does not appear for $i\\in\\{1,2,3\\}$. Call\n$k_{irr}\\langle Y\\rangle$ the $k$ submodule of irreducible elements of $k\\langle Y\\rangle$.\n A finite sequence of reductions is called {\\em final} in $a$ if $r_{i_n}\\circ \\dots \\circ r_{i_1}(a)\\in k_{irr}(Y)$.\nAn element $a\\in k\\langle Y\\rangle$ is called {\\em reduction-finite} if for every sequence of reductions $r_{i_n}$ acts trivially on\n $r_{i_{n-1}}\\circ \\dots \\circ r_{i_1}(a)$ for sufficiently large $n$.\n If a is reduction-finite, then any maximal sequence of reductions, such that each $r_{i_j}$\n acts nontrivially on $r_{i_{(j-1)}}\\dots r_{i_1}(a)$, will be finite, and hence a final \nsequence. 
It follows that the reduction-finite elements form \na $k$-submodule of $k\\langle Y\\rangle$. \nAn element $a\\in k\\langle Y\\rangle$ is called {\\em reduction-unique} if it is reduction-finite and its image under every finite\n sequence of reductions is the same.\n This common value will be denoted $r_s(a)$.\n \n\n\\begin{defi}\n Given a monomial $a\\in k\\langle Y \\rangle$ we define the disorder degree of \n $a$, $\\hbox{disdeg}(a)=\\sum_{i=1}^{n_x}rp_i+\\sum_{j=1}^{n_{x'}}lp_j$, where \n $rp_i$ is the position of the $i$-th letter from $X$ counting from right to left, and $lp_j$ is the position of the $j$-th letter from $X'$ \n counting from left to right.\n \n If $a=\\sum_{i=1}^{n} k_i a_i$ where the $a_i$ are monomials in letters of $X,X', e_X$ and $k_i\\in k\\setminus\\{0\\} $,\n \\[\\hbox{disdeg}(a):=\\sum_{i=1}^{n}\\hbox{disdeg}(a_i)\\]\n\\end{defi}\n\n\n\n\\begin{ex}\\begin{itemize}\n \\item $\\hbox{disdeg}(x_1e_{y_1}x_2z'_1x_3z'_2)=(2+4+6)+(4+6)=22$\n \\item $\\hbox{disdeg}(xe_yz')=3+3=6$ and $\\hbox{disdeg}(x'e_yz)=1+1=2$\n \\item $\\hbox{disdeg}(\\prod_{i=1}^{n}x'_i\\prod_{i=1}^{m}e_{y_i}\\prod^{k}_{i=1}z_i)=\\frac{n(n+1)}{2}+\\frac{k(k+1)}{2}$\n \\end{itemize}\n\n \n \\end{ex}\n\n The reduction $r_1$ lowers the disorder degree by two, and the reductions $r_2$ and $r_3$ lower it by one. \n\\begin{rem}\\begin{itemize}\n\\item $k_{irr}(Y)=\\{\\sum A'e_B C: A' \\ \\hbox{word in}\\ X', e_B \\ \\hbox{word in}\\ e_x, C \\ \\hbox{word in}\\ X\\}$.\n\\item $k_{irr}\\simeq TX'\\otimes TE\\otimes TX$\n\\end{itemize}\n\\end{rem}\n\nTake for example $a=xe_yz'$; there are two possible sequences of final reductions: $r_3\\circ r_1\\circ r_2$ or $r_2\\circ r_1\\circ r_3$.\nThe results are $A'e_B C$ and $D'e_E F$ respectively, where \n\n$A=\\sigma^{(1)}\\left(\\sigma^{(1)}(x,y),\\sigma^{(1)}(\\sigma^{(2)}(x,y),z)\\right)$\n\n$B=\\sigma^{(2)}\\left(\\sigma^{(1)}(x,y),\\sigma^{(1)}(\\sigma^{(2)}(x,y),z)\\right)$\n\n$C=\\sigma^{(2)}\\left(\\sigma^{(2)}(x,y),z\\right)$\n\n$D=\\sigma^{(1)}\\left(x,\\sigma^{(1)}(y,z)\\right)$\n\n$E=\\sigma^{(1)}\\left(\\sigma^{(2)}(x,\\sigma^{(1)}(y,z)),\\sigma^{(2)}(y,z)\\right)$\n\n$F=\\sigma^{(2)}\\left(\\sigma^{(2)}(x,\\sigma^{(1)}(y,z)),\\sigma^{(2)}(y,z)\\right)$\n\nWe have $A=D$, $B=E$ and $C=F$ as $\\sigma$ is a solution of the YBeq, hence \\\\\n$r_3\\circ r_1\\circ r_2(xe_yz')=r_2\\circ r_1\\circ r_3(xe_yz')$.\n\n\nA monomial $a$ in $k\\langle Y\\rangle$ is said to have an {\\em overlap ambiguity} of $S$ if $a=ABCDE$\nsuch that $w_i=BC$ and $w_j=CD$. We shall say the \noverlap ambiguity is {\\em resolvable} if there exist compositions of \nreductions, $r,r'$ such that $r(Ar_i(BC)DE)=r'(ABr_j(CD)E)$.\nNotice that it is enough to take $r=r_s$ and $r'=r_s$.\n\\begin{rem}\nIn our case, there is only one type of overlap ambiguity, and it is the one we resolved previously.\n\\end{rem}\n\\begin{proof}\n There is no rule with $x'$ on the left nor a rule with $x$ on the right, so there will be no overlap ambiguity involving the family $r_1$. \nThere is only one type of ambiguity involving the reductions $r_2$ and $r_3$.\n \\end{proof}\n\n\nNotice that $r_s$ is a projector and $I=\\langle xy'-z't,xe_y-e_zt,e_xy'-z'e_t\\rangle$ is trivially included in the kernel. \nWe claim that the inclusion is actually an equality:\n\\begin{proof}\n As $r_s$ is a projector, an element $a\\in \\ker r_s$ can be written as $a=b-r_s(b)$ with $b\\in k\\langle Y\\rangle$. 
It is enough to prove it for monomials $b$.\n \\begin{itemize}\n \\item if $a=0$ the result follows trivially.\n \\item if not, then take a monomial $b$ where at least one of the products $xy'$, $xe_y$ or $e_xy'$ appear. Lets suppose $b$ has a factor $xy'$\n (the rest of the cases are analogous). \n \n $b=Axy'B$ where $A$ or $B$ may be empty words. $r_1(b)=Ar_1(xy')B=Az'tB$. Now we can rewrite:\n \n $b-r_s(b)=\\underbrace{Axy'B-Az'tB}_{\\in I}+Az'tB-r_s(b)$. \n As $r_1$ lowers $\\hbox{disdeg}$ in two, we have $\\hbox{disdeg}(Az'tB-r_s(b))<\\hbox{disdeg}(b-r_s(b))$ then in a finite number of steps we get $b=\\sum^{N}_{k=1} i_k$\n where $i_k\\in I$. It follows that $b\\in I$. \n \\end{itemize}\n \n\\end{proof}\n\\begin{coro} $r_s$ induces a $k$-linear isomorphism: \n\\[k\\langle Y\\rangle\/\\langle xy'-z't,xe_y-e_zt,e_xy'-z'e_t\\rangle\\rightarrow TX'\\otimes TE\\otimes TX \\]\n\\end{coro}\n\nReturning to our bialgebra, taking quotients we obtain the following proposition:\n\n\\begin{prop}\\label{fnormal}\n$B\\simeq \\left(TX'\/(x'y'=z't')\\right)\\otimes TE\\otimes \\left(TX\/(xy=zt)\\right)$\n\\end{prop}\n\nNotice that $\\overline{x_1\\dots x_n}=\\overline{\\prod \\beta_m\\circ \\dots \\circ\\beta_1(x_1,\\dots, x_n)}$ where $\\beta_i=\\sigma^{\\pm 1}_{j_i}$,\nanalogously with $\\overline{x'_1\\dots x'_n}$.\n\n\nThis ends the proof of Theorem \\ref{teocomplejo}.\n\n\n\n\\begin{ex}\n \\item If the coeficients are trivial, $f\\in C^1(X,k)$ and we identify\n $C^1(X,k)=k^X$, then\n \\[\n (\\partial f)(x,y)=f(d(e_xe_y))=-f(x)-f(y)+f(z)+f(t)\n \\]\nwhere as usual $\\sigma(x,y)=(z,t)$ \n(If instead of considering $\\mathrm{Hom}_{A'-A}$, we consider $\\mathrm{Hom}_{A-A'}$ then \n $(\\partial f)(x,y)=f(d(e_xe_y))=f(x)+f(y)-f(z)-f(t)$ but with $\\sigma(z,t)=(x,y)$).\n\n \\item Again with trivial coefficients, and $\\Phi\\in C^2(X,k)\\cong k^{X^2}$, then\n \\[\n (\\partial \\Phi)(x,y,z)=\\Phi (d(e_xe_ye_z))=\\Phi\\left(\\overbrace{xe_ye_z}^{I}-\\overbrace{x'e_ye_z}^{II}-\\overbrace{e_xye_z}^{III}+\n \\overbrace{e_xy'e_z}^{IV}+\\overbrace{e_xe_yz}^{V}-\\overbrace{e_xe_yz'}^{VI} \\right) \\]\n \n If considering $\\mathrm{Hom}_{A'-A}$ then,using the relations defining $B$, the terms $I,III,IV$ and $VI$ changes leaving\n \\[\n \\partial \\Phi(x,y,z)=\\Phi (\\sigma^1\\!(x,y),\\sigma^1\\!(\\sigma^2(x,y),z))-\\Phi(y,z)-\\Phi(x,\\sigma^1\\!(y,z))+\\]\n \\[\n \\Phi(\\sigma^2(x,y),z)+\\Phi(x,y)-\\Phi(\\sigma^2(x,\\sigma^1\\!(y,z)),\\sigma^2(y,z))\n \\]\n\n \n \n \n \n \\item If $M$ is a $k[T]$-module (notice that $T$ need not to be invertible as in \\cite{tCES}) then\n $M$ can be viewed as an $A'-A$-bimodule via\n \\[\n x' \\cdot m=m\n,\\ \\ m\\cdot x=Tm\n \\]\n The actions are compatible with the relations defining $B$:\n \n \\[\n (m\\cdot x)\\cdot y =T^2 m \\ ,\\ \\ (m\\cdot z)\\cdot t =T^2 m \\]\n and \n \\[\n x'\\cdot (y'\\cdot m)=m \\ , \\ \\ z'\\cdot (t'\\cdot m)= m \n \\]\nUsing these coefficients we get twisted cohomology as in \\cite{tCES} but for general YB solutions.\n\n\n\n If one takes the special case of $(X,\\sigma)$ being a rack, namely $\\sigma(x,y)=(y,x\\triangleleft y)$, then the general formula\n gives\n \\[\n \\partial f(x_1,\\dots,x_n)=f(d(e_{x_1}\\dots e_{x_n}))=\\]\n\\[\n \\sum_{i=1}^{n} (-1)^{i+1} \\left(Tf(x_1,\\dots, x_{i-1},x_{i+1},\\dots, x_n)-f({x_1}^{x_i},\\dots, x_{i-1}^{x_i},x_{i+1},\\dots, x_n)\\right)\n \\]\n that agree with the differential of the twisted cohomology defined in \\cite{tCES}.\n\n\\end{ex}\n\n \n \n\\begin{rem} \nIf $c(x\\otimes 
y)=f(x,y)\\sigma^1\\!(x,y)\\otimes \\sigma^2(x,y)$, then $c$ is a solution of YBeq if and only if $f$ is a 2-cocycle.\n\\end{rem}\n\n\\[\n c_1\\circ c_2\\circ c_1(x\\otimes y\\otimes z)\n\\]\n\n\\[\n =a\\overbrace{\\sigma^1\\!\\left(\\sigma^1\\!(x,y),\\sigma^1\\!(\\sigma^2(x,y),z)\\right)\\otimes \\sigma^2\\left(\\sigma^1\\!(x,y),\\sigma^1\\!(\\sigma^2(x,y),z\\right)\\otimes \\sigma^2\\left(\\sigma^2(x,y),z)\\right)}^{I}\n\\]\nwhere\n\\[\n a=f(x,y)f\\left(\\sigma^2(x,y),z\\right)f\\left(\\sigma^1\\!(x,y),\\sigma^1\\!(\\sigma^2(x,y),z)\\right)\n\\]\n\n\\[\n c_2\\circ c_1 \\circ c_2 (x\\otimes y\\otimes z)\n\\]\n\n\\[\n b\\overbrace{\\sigma^1\\!(x,\\sigma^1\\!(y,z))\\otimes \\sigma^1\\!\\left(\\sigma^2(x,\\sigma^1\\!(y,z)),\\sigma^2(y,z)\\right)\\otimes \\sigma^2\\left(\\sigma^2(x,\\sigma^1\\!(y,z),\\sigma^2(y,z))\\right)}^{II}\n\\]\nwhere\n\\[b= f(y,z)f\\left(x,\\sigma^1\\!(y,z)\\right)f\\left(\\sigma^2(x,\\sigma^1\\!(y,z)),\\sigma^2(y,z)\\right)\n\\]\n Writing YBeq with this notation leaves:\n\\begin{equation}\\label{trenza-trenza}\n\\sigma \\ is\\ a\\ braid \\Leftrightarrow I=II\n\\end{equation}\n\n\nTake $f$ a two-cocycle, then \n\n\\[\n 0=\\partial f(x,y,z)=f(d(e_xe_ye_z))=f((x-x')e_ye_z-e_x(y-y')e_z+e_xe_y(z-z'))\n \\]\n\n is equivalent to the following equality\n \\[\n f(xe_ye_z)+f(e_xy'e_z)+f(e_xe_yz)=f(x'e_ye_z)+f(e_xye_z)+f(e_xe_yz')\n \\]\nusing the relations defining $B$ we obtain\n\n\\[\n f\\left(e_{\\sigma^1\\!(x,y)}e_{\\sigma^1\\!(\\sigma^2(x,y),z)}\\sigma^2(\\sigma^2(x,y)z)\\right)+f\\left(\\sigma^1\\!(x,y)'e_{\\sigma^2(x,y)}e_z\\right)\n +f\\left(e_xe_yz\\right)\n \\]\n\\[\n =f\\left(x'e_ye_z\\right)+f\\left(e_xe_{\\sigma^1\\!(y,z)}\\sigma^2(y,z)\\right)+\n f\\left(\\sigma^1\\!(x,\\sigma^1\\!(y,z))'e_{\\sigma^2(x,\\sigma^1\\!(y,z))}e_{\\sigma^2(y,z)}\\right)\n\\]\n\nIf $G$ is an abelian multiplicative group and $f:X\\times X\\rightarrow (G, \\cdotp)$ then the previous formula says \n\\[\n f\\left(e_{\\sigma^1\\!(x,y)}e_{\\sigma^1\\!(\\sigma^2(x,y),z)}\\sigma^2(\\sigma^2(x,y)z)\\right)f\\left(\\sigma^1\\!(x,y)'e_{\\sigma^2(x,y)}e_z\\right)\n f\\left(e_xe_yz\\right)\n\\]\n\\[\n =f\\left(x'e_ye_z\\right)f\\left(e_xe_{\\sigma^1\\!(y,z)}\\sigma^2(y,z)\\right)\n f\\left(\\sigma^1\\!(x,\\sigma^1\\!(y,z))'e_{\\sigma^2(x,\\sigma^1\\!(y,z))}e_{\\sigma^2(y,z)}\\right)\n\\]\nwhich is exactly the condition $a=b$.\n\n\nNotice that if the action is trivial, then the equation above simplifies giving\n\\begin{equation}\\label{2-cocycle}\n f\\!\\left(e_{\\sigma^1\\!(x,y)}e_{\\sigma^1\\! (\\sigma^2(x,y),z)} \\right)\\! \nf \\!\\left(e_{\\sigma^2(x,y)}e_z\\right)\\!\n f\\!\\left(e_xe_y\\right)\\!\n \\newline\n =f\\!\\left(e_ye_z\\right)\\!f\\!\\left(e_xe_{\\sigma^1\\!(y,z)}\\right)\\!\n f\\!\\left(e_{\\sigma^2(x,\\sigma^1\\!(y,z))}e_{\\sigma^2(y,z)}\\right)\n\\end{equation}\nwhich is precisely the formula on \\cite{CES} for Yang-Baxter 2-cocycles \n (with $R_1$ and $R_2$ instead of $\\sigma^1$ and $\\sigma^2$).\n\n\n\n\n\\section{1st application: multiplicative structure\non cohomology}\n\n\n\\begin{prop} $\\Delta$ induces an associative product in $\\mathrm{Hom}_{A'-A}(B,k)$ (the graded Hom).\n\\end{prop}\n \\begin{proof} It is clear that $\\Delta$ induces an associative product on\n $\\mathrm{Hom}_{k}(B,k)$ (the graded Hom), and\n $\\mathrm{Hom}_{A'-A}(B,k)\\subset\\mathrm{Hom}_k(B,k)$ is a $k$-submodule. We will show that it is in fact a subalgebra.\n\n Consider the $A'$-$A$ diagonal structure on $B\\otimes B$\n(i.e. 
$x_1'.(b\\otimes b').x_2=x_1'bx_2\\otimes x_1'b'x_2$)\nand denote $B\\otimes^DB$ the $k$-module $B\\otimes B$ considered as $A'-A$-bimodule in this diagonal way.\nWe claim that \n$\\Delta: B\\rightarrow B\\otimes^D B$ is a morphism of $A'-A$-modules:\n\\[\n \\Delta(x_1'yx_2)=x_1'yx_2\\otimes x_1'yx_2=x_1'(y\\otimes y)x_2\n\\]\nsame with $y'$, and with $e_x$:\n\\[\n \\Delta(x_1'e_yx_2)=(x_1'\\otimes x_1')(y'\\otimes e_y+e_y\\otimes y)(x_2\\otimes x_2)=x'_1\\Delta(e_y)x_2\n\\]\n\nDualizing $\\Delta$ one gets:\n\\[\n\\Delta^*:\\mathrm{Hom}_{A'-A}(B\\otimes^DB, k)\\rightarrow \\mathrm{Hom}_{A'-A}(B,k)\n\\]\nconsider the natural map\n$$\\iota:\\mathrm{Hom}_k(B,k)\\otimes \\mathrm{Hom}_k(B,k)\\rightarrow \\mathrm{Hom}_k(B\\otimes B, k)$$\n$$\\iota(f\\otimes g)(b_1\\otimes b_2)=f(b_1)g(b_2)$$\nand denote $\\iota|$ by\n\\[\n \\iota|=\\iota|_{\\mathrm{Hom}_{A'-A}(B,k)\\otimes \\mathrm{Hom}_{A'-A}(B,k)}\n\\]\nLet us see that \n\\[\n Im (\\iota|)\\subset \\mathrm{Hom}_{A'-A}(B\\otimes B, k)\\subset \\mathrm{Hom}_k(B\\otimes B, k)\n\\]\nIf $f,g: B\\rightarrow k$ are two $A'-A$-module morphisms (recall $k$ has trivial actions, i.e. $x'\\lambda=\\lambda$ and $\\lambda x=x$),\nthen\n\\[\n \\iota(f\\otimes g)(x'(b_1\\otimes b_2))=f(x'b_1)g(x'b_2)=(x'f(b_1))(x'g(b_2))\\]\n \\[=f(b_1)g(b_2)=x'\\iota (f\\otimes g)(b_1\\otimes b_2)\n\\]\n\\[\n \\iota(f\\otimes g)((b_1\\otimes b_2)x)=f(b_1x)g(b_2x)=(f(b_1)x)(g(b_2)x)\n \\]\n \\[=(f(b_1)g(b_2))x=\\iota(f\\otimes g)(b_1\\otimes b_2)x\n\\]\nSo, it is possible to compose $\\iota|$ and $\\Delta$, and obtain in this way an associative multiplication in $\\mathrm{Hom}_{A'-A}(B,k)$.\n\\end{proof}\n\n\n\nNow we will describe several natural quotients of $B$, each of them\ngive rise to a subcomplex of the cohomological complex of $X$\nwith trivial coefficients that are not only subcomplexes but also subalgebras;\nin particular they are associative algebras.\n\n\\subsection{Square free case}\nA solution $(X,\\sigma)$ of YBeq satisfying $\\sigma(x,x)=(x,x)\\forall x\\in X$ is called {\\em square free}. \nFor instance, if $X$ is a rack, then this condition is equivalent to $X$ being a quandle.\n\nIn the square free situation,\nnamely when $X$ is such that $\\sigma (x,x)=(x, x)$ for all $x$, we add the condition $e_xe_x\\sim 0$.\n \n\n If $(X,\\sigma)$ is a square-free solution of the YBeq, let us denote\n $sf$ the two sided ideal of $B$ generated by $\\{e_xe_{x}\\}_{x\\in X}$.\n\\begin{prop}\n\\label{propsf}\n$sf$ is a differential Hopf ideal. More precisely, \n\\[\nd(e_xe_{x})=0\n\\hbox{ and }\n\\Delta(e_xe_{x})=x'x'\\otimes e_xe_{x}+\ne_xe_{x}\\otimes xx.\\]\n\\end{prop}\nIn particular $B\/sf$ is a differential graded bialgebra.\nWe may identify \\\\\n$\\mathrm{Hom}_{A'A}(B\/sf,k)\\subset \n\\mathrm{Hom}_{A'A}(B,k)$ as the elements $f$ such that $f(\\dots,x,x,\\dots)=0$. 
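As a quick check of the first claim in Proposition \\ref{propsf} (a sketch using only the defining relations of $B$): since $\\sigma(x,x)=(x,x)$, the relations give $xe_x\\sim e_xx$ and $e_xx'\\sim x'e_x$, hence\n\\[\nd(e_xe_x)=(x-x')e_x-e_x(x-x')=(xe_x-e_xx)-(x'e_x-e_xx')=0.\n\\]\n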
If $X$ is a quandle, this\nconstruction leads to the quandle-complex.\nWe have $\\mathrm{Hom}_{A'A}(B\/sf,k)\\subset \n\\mathrm{Hom}_{A'A}(B,k)$ is not only a subcomplex, but also a subalgebra.\n \n\n\n\\subsection{Biquandles}\n\nIn \\cite{KR}, a generalization of quandles is proposed (we recall it with different notation), \na solution $(X,\\sigma)$ is called\nnon-degenerated, or {\\em birack} if in addition, \n\\begin{enumerate}\n \\item for any $x,z\\in X$ there exists a unique $y$ such that $\\sigma^1\\!(x,y)=z$, (if this is the case, $\\sigma^1\\!$ is called {\\em left invertible}),\n \\item for any $y,t\\in X$ there exists a unique $x$ such that $\\sigma^2(x,y)=t$, (if this is the case, $\\sigma^2$ is called {\\em right invertible}),\n\\end{enumerate}\nA birack is called {\\em biquandle} if, given $x_0\\in X$, there exists a unique $y_0\\in X$ such that\n$\\sigma(x_0,y_0)=(x_0,y_0)$. In other words, if there exists a bijective map $s:X\\to X$ such\nthat\n\\[\n\\{(x,y):\\sigma(x,y)=(x,y)\\}=\n\\{(x,s(x)): x\\in X\\}\n\\]\n\n\\begin{rem}\n Every quandle solution is a biquandle, moreover, given a rack $(X,\\triangleleft)$, then\n $\\sigma(x,y)=(y,x\\triangleleft y)$ is a biquandle if and only if $(X,\\triangleleft)$ is a quandle. \n\\end{rem}\n\nIf $(X,\\sigma)$ is a biquandle, for all $x\\in X$ we add in $B$ the relation $e_{x}e_{s(x)}\\sim 0$. \nLet us denote $bQ$ the two sided ideal of $B$ generated by $\\{e_xe_{sx}\\}_{x\\in X}$.\n\\begin{prop}\n\n\n\n\n\n\\label{propbQ}\n$bQ$ is a differential Hopf ideal. More precisely, \n$d(e_xe_{sx})=0$ and\n$\\Delta(e_xe_{sx})=x's(x)'\\otimes e_xe_{sx}+\ne_xe_{sx}\\otimes xs(x)$.\n\\end{prop}\nIn particular $B\/bQ$ is a differential graded bialgebra.\nWe may identify\n\\[\n\\mathrm{Hom}_{A'A}(B\/bQ,k)\\cong \n\\{f\\in \\mathrm{Hom}_{A'A}(B,k) : f(\\dots,x,s(x),\\dots)=0\\} \n\\subset \n\\mathrm{Hom}_{A'A}(B,k)\\]\nIn \\cite{CES}, the condition $f(\\dots,x_0,s(x_0),\\dots)=0$ is\n called the {\\em type 1 condition}. A consequence of the above proposition is that\n $\\mathrm{Hom}_{A'A}(B\/bQ,k)\\subset \n\\mathrm{Hom}_{A'A}(B,k)$ is not only a subcomplex, but also a subalgebra.\nBefore proving this proposition we will review some other similar constructions.\n\n \n\\subsection{Identity case}\nThe two cases above may be generalized in the following way:\n\nConsider $S\\subseteq X\\times X$ a subset of elements verifying\n$\\sigma(x,y)=(x,y)$ for all $(x,y)\\in S$. \nDefine $idS$ the two sided ideal of $B$ given by $idS=\\langle e_xe_y\/ (x,y)\\in S\\rangle$.\n\n\\begin{prop}\n\\label{propidS}\n$idS$ is a differential Hopf ideal. More precisely, \n$d(e_xe_{y})=0$ for all $(x,y)\\in S$ and\n$\\Delta(e_xe_{y})=x'y'\\otimes e_xe_{y}+\ne_xe_{y}\\otimes xy$.\n\\end{prop}\nIn particular $B\/idS$ is a differential graded bialgebra.\\\\\nIf one identifies $\\mathrm{Hom}_{A'A}(B\/sf,k)\\subset \n\\mathrm{Hom}_{A'A}(B,k)$ as the elements $f$ such that\n\\[ f(\\dots,x,y,\\dots)=0 \\ \\forall (x,y)\\in S\\]\nWe have that $\\mathrm{Hom}_{A'A}(B\/idS,k)\\subset \n\\mathrm{Hom}_{A'A}(B,k)$ is not only a subcomplex, but also a subalgebra.\n \n \\subsection{Flip case}\n Consider the condition $e_xe_y+e_ye_x\\sim 0$ for all pairs such that $\\sigma(x,y)=(y,x)$. For such a pair $(x,y)$ we have \n the equations $xy=yx$, $xy'=y'x$, $x'y'=y'x'$ and $xe_y=e_yx$. Note that there is no equation for $e_xe_y$. 
\n The two sided ideal $D=\\langle e_xe_y+e_ye_x:\\sigma(x,y)=(y,x)\\rangle$ is a differential Hopf ideal.\n\nMoreover, the following generalization is still valid:\n\n\\subsection{Involutive case}\n\nAssume $\\sigma^2(x,y)=(x, y)$ for all $(x,y)\\in X\\times X$. This case is called {\\em involutive} in \\cite{ETS}.\nDefine $Invo$ the two sided ideal of $B$ given by $Invo=\\langle e_xe_y+e_ze_t : (x,y)\\in X\\times X,\\ \\sigma(x,y)=(z,t)\\rangle$.\n\n\\begin{prop}\n\\label{propInvo}\n$Invo$ is a differential Hopf ideal. More precisely, \n$d(e_xe_{y}+e_ze_t)=0$ for all $(x,y)\\in X\\times X$ (with $(z,t)=\\sigma(x,y)$) and\nif $\\omega=e_xe_y+e_ze_t$ then\n$\\Delta(\\omega)=x'y'\\otimes \\omega+\\omega \\otimes xy$.\n\\end{prop}\nIn particular $B\/Invo$ is a differential graded bialgebra.\nIf one identifies \\\\ \n$\\mathrm{Hom}_{A'A}(B\/Invo,k)\\subset \n\\mathrm{Hom}_{A'A}(B,k)$ then\n$\\mathrm{Hom}_{A'A}(B\/Invo,k)\\subset \\mathrm{Hom}_{A'A}(B,k)$ is not only a subcomplex, but also a subalgebra.\n \n\\begin{conj}\n$B\/Invo$ is acyclic in positive degrees.\n\\end{conj}\n\n\\begin{ex} If $\\sigma=\\text{flip}$ and $X=\\{x_1,\\dots,x_n\\}$ then $A=k[x_1,\\dots,x_n]=SV$, the symmetric algebra on \n$V=\\oplus _{x\\in X}kx$. In this case\n$(B\/Invo,d)\\cong (S(V)\\otimes\\Lambda V\\otimes S(V),d)$ gives the Koszul resolution of $S(V)$ as an\n$S(V)$-bimodule.\n\\end{ex}\n\n\\begin{ex} If $\\sigma=\\mathrm{Id}$, $X=\\{x_1,\\dots,x_n\\}$\nand $V=\\oplus _{x\\in X}kx$, then $A=TV$ is the tensor algebra.\nIf $\\frac12\\in k$, then\n$(B\/Invo,d)\\cong TV\\otimes (k\\oplus V)\\otimes TV$ gives the Koszul resolution\nof $TV$ as a $TV$-bimodule. Notice that we don't really need $\\frac12\\in k$:\none could replace $Invo=\\langle e_xe_y+e_xe_y:(x,y)\\in X\\times X\\rangle$ by\n $idXX=\\langle e_xe_y:(x,y)\\in X\\times X\\rangle$.\n \\end{ex}\n\n\nThe conjecture above, besides these examples, is supported by the\nfollowing result:\n\\begin{prop}\\label{propinvo}\nIf $\\mathbb{Q}\\subseteq k$, then $B\/Invo$ is acyclic in positive degrees.\n\\end{prop}\n\\begin{proof}\nIn $B\/Invo$ one can define $h$ as the unique (super)derivation such that\n\\[\n h(e_x)=0,\\ h(x)=e_x,\\ h(x')=-e_x\n\\]\nLet us see that $h$ is well defined:\n\\[\n h(xy-zt)=e_xy+xe_y-e_zt-ze_t=0\n\\]\n\\[\n h(xy'-z't)=e_xy'-xe_y+e_zt-z'e_t=0\n\\]\n\\[\n h(x'y'-z't')=-e_xy'-x'e_y+e_zt'+z'e_t=0\n\\]\n\\[\n h(xe_y-e_zt)=e_xe_y+e_ze_t=0\n\\]\nNotice that in particular the next equation shows\n that $h$ is not well defined in $B$: the right hand side is nonzero in $B$, although it belongs to $Invo$.\n\\[\n h(e_xy'-z'e_t)=e_xe_y+e_ze_t=0\n\\]\n\\[\n h(zt'-x'y)=e_zt'-ze_t+e_xy-x'e_y=0\n\\]\n\\[\nh(ze_t-e_xy)= e_ze_t+e_xe_y=0\n\\]\n\\[\n h(e_zt'-x'e_y)=e_ze_t+e_xe_y=0\n\\]\n\\[\n h(e_xe_y+e_ze_t)=0\n\\]\nSince the (super)commutator of (super)derivations is again a (super)derivation,\nwe have that $[h, d]=hd+dh$ is also a derivation. \n Computing on generators: \n\\[\n(hd+dh)(e_x)=2e_x,\\ (hd+dh)(x)=x-x', \\ (hd+dh)(x')=x'-x\n\\]\nor equivalently\n\\[\n(hd+dh)(e_x)=2e_x,\\ (hd+dh)(x+x')=0, \\ (hd+dh)(x-x')=2(x-x')\n\\]\nOne can also easily see that $B\/Invo$ is generated by\n$e_x,x_{\\pm}$, where $x_\\pm=x\\pm x'$, and that their\nrelations are homogeneous. 
We see that $hd+dh$ is nothing but the Euler\nderivation with respect to the grading defined by\n\\[\n\\deg e_x=2,\\\n\\deg x_+=0,\\\n\\deg x_-=2,\\]\nWe conclude automatically that the homology vanish for positive degrees of the $e_x$'s\n(and similarly for the $x_-$'s).\n\\end{proof}\n\n Next, we generalize Propositions \n \\ref{propsf}, \\ref{propbQ}, \n \\ref{propidS} and \\ref{propInvo}.\n \n\\subsection{Braids of order $N$}\n \n Let $(x_0,y_0)\\in X\\times X$ such that \n $\\sigma^N(x_0,y_0)=(x_0,y_0)$ for some $N\\geq 1$.\n If $N=1$ we have the ''identity case'' and all subcases, if $N=2$ we have the \n''involutive case''.\nDenote\n \\[\n (x_i,y_i):=\\sigma^i(x_0,y_0) \\ 1\\leq i \\leq N-1 \n \\]\n Notice that the following relations hold in $B$:\n\\begin{itemize}\n\\item[$\\star$] $x_{N-1}y_{N-1}\\sim x_0y_0$, \\ $x_{N-1}y'_{N-1}\\sim x'_0y_0$, \\ $x'_{N-1}y'_{N-1}=x'_0y'_0$\n\\item[$\\star$] $ x_{N-1}e_{y_{N-1}}\\sim e_{x_0}y_0$, \\ $ e_{x_{N-1}}y'_{N-1}\\sim x'_0e_{y_0}$\n\\end{itemize}\n and for $1\\leq i \\leq N-1$:\n \\begin{itemize}\n\\item[$\\star$] $x_{i-1}y_{i-1}\\sim x_iy_i$, \\ $x_{i-1}y'_{i-1}\\sim x'_iy_i$, \\ $x'_{i-1}y'_{i-1}=x'_iy'_i$\n\\item[$\\star$] $ x_{i-1}e_{y_{i-1}}\\sim e_{x_i}y_i$, \\ $ e_{x_{i-1}}y'_{i-1}\\sim x'_ie_{y_i}$\n\\end{itemize} \nTake $\\omega=\\sum_{i=0}^{N-1}e_{x_i}e_{y_i}$, then we claim that \n\\[\nd\\omega=0\\]\nand \n\\[\\Delta\\omega\n=x_0y_0\\otimes\\omega+\\omega\\otimes x'_0y'_0\\]\nFor that, we compute\n\\[\n d(\\omega)=\\sum_{i=0}^{N-1}(x_i-x'_i)e_{y_i}-e_{x_i}(y_i-y'_i)=\n\\]\n\\[\n \\sum_{i=0}^{N-1}(x_ie_{y_i}-e_{x_i}y_i)- \\sum_{i=0}^{N-1} (x'_ie_{y_i}-e_{x_i}y'_i)=0\n\\]\nFor the comultiplication,\nwe recall that \n\\[\\Delta(ab)=\\Delta(a)\\Delta(b)\\]\n where the product on the right hand side is defined using the Koszul sign rule:\n \\[\n (a_1\\otimes a_2)(b_1\\otimes b_2)=(-1)^{|a_2||b_1|}a_1b_1\\otimes a_2b_2\n \\]\nSo, in this case we have\n\\[\n \\Delta(\\omega)=\\sum_{i=0}^{N-1}\\Delta (e_{x_i}e_{y_i})=\n\\]\n\\[\n\\sum_{i=0}^{N-1}(x'_iy'_i\\otimes e_{x_i}e_{y_i}-x'_ie_{y_i}\\otimes e_{x_i}y_i+ e_{x_i}y'_i\\otimes x_ie_{y_i}+e_{x_i}e_{y_i}\\otimes x_iy_i)\n\\]\nthe middle terms cancel telescopically, giving\n\\[\n=\\sum_{i=0}^{N-1}(x'_iy'_i\\otimes e_{x_i}e_{y_i}+e_{x_i}e_{y_i}\\otimes x_iy_i)\n\\]\nand the relation $x_iy_i\\sim x_{i+1}y_{i+1}$ gives\n\\[\n=x'_0y'_0\\otimes( \\sum_{i=0}^{N-1}e_{x_i}e_{y_i})+\n(\\sum_{i=0}^{n-1}e_{x_i}e_{y_i})\\otimes x_0y_0\n\\]\n\\[\n=x'_0y'_0\\otimes \\omega+\n\\omega\\otimes x_0y_0\n\\]\nThen the two-sided ideal of $B$ generated by $\\omega$ is a Hopf ideal.\nIf instead of a single $\\omega$ we have several $\\omega_1,\\dots \\omega_n$, we simply remark that\nthe sum of differential Hopf ideals is also a differential Hopf ideal.\n\n\\begin{rem} If X, is finite then for every $(x_0,y_0)$ there exists $N>0$ such that\n$\\sigma^N(x_0,y_0)=(x_0,y_0)$.\n\\end{rem}\n\n\\begin{rem}\nLet us suppose $(x_0,y_0)\\in X\\times X$ is such that $\\sigma^N(x_0,y_0)=(x_0,y_0)$ and $u\\in X$ an arbitrary element.\nConsider the element \n\\[\n((\\mathrm{Id} \\times \\sigma)( \\sigma \\times \\mathrm{Id}) (u,x_0,y_0)=(\\widetilde x_0,\\widetilde y_0,u'')\n\\]\ngraphically\n\\[\n \\xymatrix{\n u\\ar[rd]&x\\ar[ld]&y\\ar[d]\\\\\n \\widetilde x\\ar[d]&u'\\ar[rd]&y\\ar[ld]\\\\\n \\widetilde x&\\widetilde y&\\widetilde u'' \n }\n\\]\nthen $\\sigma^N(\\widetilde x_0,\\widetilde y_0)=(\\widetilde x_0,\\widetilde y_0)$.\n\\end{rem}\n\\begin{proof}\\[\n (\\sigma^N \\times id)(\\widetilde x_0,\\widetilde 
y_0,u'')=(\\sigma^N\\times id)(id\\times \\sigma)(\\sigma \\times id)(u,x_0,y_0)=\n \\]\n\\[\n (\\sigma^{N-1}\\times id)(\\sigma\\times id)(id\\times \\sigma)(\\sigma \\times id)(u,x_0,y_0)=\n\\]\nusing YBeq\n\\[\n (\\sigma^{N-1}\\times id)(id\\times \\sigma)(\\sigma \\times id)(id\\times \\sigma)(u,x_0,y_0)=\n\\]\n\nrepeating the procedure $N-1$ times leaves\n\\[\n(id\\times \\sigma)(\\sigma \\times id)(id\\times \\sigma^N)(u,x_0,y_0)=(id\\times \\sigma)(\\sigma \\times id )(u,x_0,y_0)=(\\widetilde x_0,\\widetilde y_0,u'')\n\\]\n\n\\end{proof}\n\n\n\n\\section{$2^{nd}$ application: Comparison with Hochschild cohomology}\n\n$B$ is a differential graded algebra, and on each degree $n$\nit is isomorphic to $A\\otimes (TV)_n\\otimes A$, where $V=\\oplus_{x\\in X}ke_x$.\nIn particular $B_n$\nis free as $A^e$-module. We \nhave {\\em for free} the existence of a comparison map\n\\[\n\\xymatrix@-2ex{\n\\cdots\\ar[r]&B_n\\ar[r]\\ar@{=}[d]&\\cdots\\ar[r]&B_2\\ar[r]^d\\ar@{=}[d]&B_1\\ar[r]^d\\ar@{=}[d]&\\ar@{=}[d]B_0\\\\\n\\cdots\\ar[r]&A'(TX)_n A\\ar@{=}[d] ^{\\cong}\\ar[r]&\\cdots\\ar[r] &\\oplus_{x,y\\in X}A'e_xe_y A\\ar@{=} ^{\\cong}[d]\\ar[r]^d&\\oplus_{x\\in X} A'e_x A\\ar[r]^d\\ar@{=}[d]&A' A\\ar@{=}[d] ^{\\cong}\\\\\n\\cdots\\ar[r]&A\\otimes V^{\\otimes n}\\otimes A\\ar[d]^{\\widetilde\\mathrm{Id}}\\ar[r]&\\cdots\\ar[r] &A\\otimes V^{\\otimes 2}\\otimes A\\ar[d]^{\\widetilde\\mathrm{Id}}\\ar[r]^{d_2}& A\\otimes V\\otimes A\\ar[d]^{\\widetilde\\mathrm{Id}}\\ar[r]^{d_1}&A\\otimes A\\ar[d]^{\\mathrm{Id}}\\ar[r]^m& A\\ar[d]^{\\mathrm{Id}}\\ar[r]&0\\\\\n\\cdots\\ar[r]&A\\otimes A^{\\otimes n}\\otimes A\\ar[r]& \\cdots\\ar[r] &A\\otimes A^{\\otimes 2}\\otimes A\\ar[r]^{b'}& A\\otimes A\\otimes A\\ar[r]^{b'}&A\\otimes A\\ar[r]^m& A\\ar[r]&0\\\\\n}\\]\n\n\\begin{coro}\nFor all $A$-bimodule $M$, there exists natural maps\n\n\\[\n\\widetilde\\mathrm{Id}_*: H^{YB}_\\bullet(X,M)\\to H_\\bullet(A,M)\n\\]\n\\[\n\\widetilde\\mathrm{Id}^*: H^\\bullet(A,M)\\to H_{YB}^\\bullet(X,M)\n\\]\nthat are the identity in degree zero and 1.\n\\end{coro}\n\n\n\nMoreover, one can choose an explicit map with extra properties. For that we recall some definitions: there is a set theoretical section to the canonical projection from the\nBraid group to the symmetric group\n\\[\n\\xymatrix{\n\\mathbb{B}_n\\ar@{->>}[r]& \\mathbb{S}_n\\ar@\/_\/@{..>}[l]\n}\n\\]\n\\[\n\\xymatrix{\nT_s:=\\sigma_{i_1}\\dots\\sigma_{i_k}\n&\n s=\\tau_{i_1}\\dots \\tau_{i_k} \\ar@{|->}[l]\n}\n\\]\nwhere \n\\begin{itemize}\n \\item $\\tau\\in S_n$ are transpositions of neighboring elements $i$ and $i+1$, \nso-called simple transpositions,\n \\item $\\sigma_i$ are the corresponding generators of $\\mathbb{B}_n$,\n \\item $\\tau_{i_{1}}\\dots \\tau_{i_{k}}$ is one of the shortest words representing $s$.\n\\end{itemize}\nThis inclusion factorizes trough \n\\[\n \\mathbb{S}_n\\hookrightarrow \\mathbb{B}_n^+\\hookrightarrow \\mathbb{B}_n\n \\]\n It is a set inclusion not preserving the monoid structure.\n \n\\begin{defi}\nThe permutation sets\n\\[\n \\mathrm{Sh}_{p_1,\\dots,p_k}:=\\left\\{ s\\in \\mathbb{S}_{p_1+\\dots+p_k}\/s(1)<\\dots}[r]&\n a_1\\otimes(x_1\\shuffle_{-\\sigma} \\cdots \\shuffle_{-\\sigma} x_n)\\otimes a_2}\n\\]\nis a chain map lifting the identify. 
Moreover, \n$\\widetilde\\mathrm{Id}:B\\to (A\\otimes TA\\otimes A,b')$ is a differential graded algebra\nmap, where in $TA$ the product is $\\shuffle_{-\\sigma}$, and\nin $A\\otimes TA\\otimes A$ the multiplicative structure\n is not the usual tensor product algebra, but the braided one.\n In particular, this map factors through\n $A\\otimes \\mathfrak{B}\\otimes A$, where $\\mathfrak{B}$ is the Nichols algebra\n associated to the braiding $\\sigma'(x\\otimes y)= - z\\otimes t$, where\n $x,y\\in X$ and $\\sigma(x,y)=(z,t)$. \n\\end{teo}\n\n\n\\begin{rem}The Nichols algebra\n$\\mathfrak{B}$ is the quotient of $TV$ by the ideal generated by (skew)primitives that\nare not in $V$, so the result above explains the good behavior\nof the ideals $invo$, $idS$,\nor in general the ideal generated by\nelements\nof the form\n $\\omega=\\sum_{i=0}^{N-1}e_{x_i}e_{y_i}$ where $\\sigma(x_i,y_i)=(x_{i+1},y_{i+1})$ and $\\sigma^N(x_0,y_0)=(x_0,y_0)$.\n It would be interesting to know the properties of $A\\otimes\\mathfrak{B}\\otimes A$\n as a differential object, since it appears to be a candidate of\n Koszul-type resolution for the semigroup algebra $A$\n (or similarly the group algebra $k[G_X]$).\n \\end{rem}\n\nThe rest of the paper is devoted to the proof of \\ref{teoid}. Most of the Lemmas\nare \"folklore\" but we include them for completeness. The interested reader can\nlook at \\cite{Lebed2} and references therein.\n\n\\begin{lem}\\label{AC monoid}\n Let $\\sigma$ be a braid in the braided (sub)category that contains two associative algebras $A$ and $C$, meaning there \n exists\n bijective functions \n \\[\n\\sigma_A:A\\otimes A\\to A\\otimes A,\\\n\\sigma_C:C\\otimes C\\to C\\otimes C,\\\n\\sigma_{C,A}:C\\otimes A\\to A\\otimes C\\]\nsuch that \n\\[\n\\sigma_*(1,-)=(-,1)\\hbox{ and } \\sigma_*(-,1)=(1,-) \\ \\hbox{ for } *\\in \\{A,C;C,A\\} \n\\]\n\\[\n \\sigma_{C,A}\\circ (1\\otimes m_A)=(m_A\\otimes 1)(1\\otimes \\sigma_{C,A})(\\sigma_{C,A}\\otimes 1)\n\\]\n and \n\\[\\sigma_{C,A}\\circ ( m_C\\otimes 1)=(1\\otimes m_C)(\\sigma_{C,A}\\otimes 1)(1\\otimes \\sigma_{C,A})\\]\nDiagrammatically\n\\\n\\xymatrix{\nC\\ar[d]&A\\ar[rd]^{\\!\\!\\!m_A}&&A\\ar[ld]\\\\\n\\ar[rrd]^{\\!\\!\\!\\!\\!\\!\\! \\sigma_{C,A}}&&A\\ar[lld]&\\\\\nA&&C&\n}\n\\xymatrix{\n\\\\\n\\\\\n&=^{[*]}&\\\\}\n\\xymatrix{\nC\\ar[rrd]^{\\!\\!\\!\\!\\sigma_{C,A}}&&A\\ar[lld]&A\\ar[d]\\\\\nA\\ar[d]&&C\\ar[rd]&A\\ar[ld]\\\\\nA\\ar[rd]&&A\\ar[ld]&C\\ar[d]\\\\\n&A&&C\n}\n\\]\nand\n\\[\n\\xymatrix{\nC\\ar[rd]^{\\ \\ m_C}&&C\\ar[ld]&A\\ar[d]\\\\\n&C\\ar[rrd]^{\\!\\!\\!\\!\\!\\!\\!\\sigma_{C,A}}&&A\\ar[lld]\\\\\n&A&&C\n}\n\\xymatrix{\n\\\\\n\\\\\n&=^{[**]}&\\\\}\n\\xymatrix{\nC\\ar[d]&C\\ar[rrd]&&A\\ar[lld]\\\\\nC\\ar[rd]&A\\ar[ld]&&C\\ar[d]\\\\\nA\\ar[d]&C\\ar[rd]&&C\\ar[ld]\\\\\nA&&C&\n}\n\\]\nAssume that they\nsatisfy the braid equation with any combination of $\\sigma_A,\\sigma_C$ or $\\sigma_{A,C}$.\nThen, $A\\otimes_\\sigma C=A\\otimes C$ with product defined by\n \\[\n(m_A\\otimes m_C)\\circ(\\mathrm{Id}_A\\otimes \\sigma_{C,A}\\otimes\\mathrm{Id}_C)\\colon\n(A\\otimes C)\\otimes (A\\otimes C)\\to A\\otimes C\n\\]\nis an associative algebra. 
In diagram:\n\\[\n\\xymatrix{\nA\\ar[d]&&C\\ar[rd]^{\\!\\!\\!\\sigma}&A\\ar[ld]&&C\\ar[d]\\\\\nA\\ar[rd]^{\\ \\ m_A}&&A\\ar[ld]&C\\ar[rd]^{\\ \\ m_C}&&C\\ar[ld]\\\\\n&A&&&C&\n}\\]\n \n\\end{lem}\n\\begin{proof}\n\n\n\nTake $m\\circ (1\\otimes m)((a_1\\otimes c_2)\\otimes ((a_2\\otimes c_2)\\otimes (a_3\\otimes c_3))$\n use $[*]$, associativity in $A$, associativity in $C$ then $[**]$ and the result follows.\n\\end{proof}\n\n\n\n\n\n\\begin{lem}\nLet $M$ be the monoid freely generated by $X$\nmodule the relation $xy=zt$ where $\\sigma(x,y)=(z,t)$, then,\n$\\sigma:X\\times X\\to X\\times X$ naturally extends to a braiding in $M$ and verifies \n\n\n\n\\[\n\\xymatrix{\nM\\ar[rd]^{\\ \\ \\ m}& &M\\ar[ld]&M\\ar[d]^\\mathrm{Id}\\\\\n &M\\ar[rrd]^\\sigma&&M\\ar[lld]\\\\\n&M&&M\n} \n\\xymatrix{\n\\\\\n\\\\\n&=&\\\\\n} \n\\xymatrix{\nM\\ar[d]^{\\mathrm{Id}} &M\\ar[rrd]^\\sigma&&M\\ar[lld]\\\\\n M\\ar[rd]^{\\!\\!\\sigma}&M\\ar[ld]&&M\\ar[d]\\\\\nM\\ar[d]&M\\ar[rd]^{\\ \\ \\ m}&&M\\ar[ld]\\\\\nM&&M} \n\\]\n\n\\[\n\\xymatrix{\nM\\ar[d]&M\\ar[rd] &&M\\ar[ld]_{m}\\\\\n M \\ar[rrd]^\\sigma&&M\\ar[lld]\\\\\n M&&M\n} \n\\xymatrix{\n\\\\\n\\\\\n&=&\\\\\n} \n\\xymatrix{\nM\\ar[rd]^{\\!\\!\\sigma} &M\\ar[ld]&&M\\ar[d]\\\\\n M\\ar[d]&M\\ar[rrd]^\\sigma&&M\\ar[ld]\\\\\nM\\ar[rd]^m&&M\\ar[ld]&M\\ar[d]^\\mathrm{Id}\\\\\n&M&&M} \n\\]\n\\end{lem}\n\n\\begin{proof}\n It is enough to prove that the extension mentioned before is well defined in the quotient. Inductively, it will be enough to\n see that\n $\\sigma(axyb,c)=\\sigma(aztb,c)$ and $\\sigma(c,axyb)=\\sigma(c,aztb)$ where \n$ \\sigma(x,y)=(z,t)$, and this follows\n immediately from the braid equation:\n\n A diagram for the first equation is the following: \n \\[\n\\xymatrix{\na\\ar[d]&x\\ar[rd]&y\\ar[ld]&b\\ar[rd]&c\\ar[ld]\\\\\n\\ar[d]&z\\ar[d]&t\\ar[rd]&\\ar[ld]&\\ar[d]\\\\\n\\ar[d]&\\ar[rd]&\\ar[ld]&\\ar[d]&\\ar[d]\\\\\n\\ar[rd]&\\ar[ld]&\\ar[d]&\\ar[d]&\\ar[d]\\\\\n&&\\alpha&\\beta&} \n\\xymatrix{\n\\\\\n\\\\\n&=&\\\\\n} \n\\xymatrix{\na\\ar[d]&x\\ar[d]&y\\ar[d]&b\\ar[rd]&c\\ar[ld]\\\\\n\\ar[d]&\\ar[d]&\\ar[rd]&\\ar[ld]&\\ar[d]\\\\\n\\ar[d]&\\ar[rd]&\\ar[ld]&\\ar[d]&\\ar[d]\\\\\n\\ar[rd]&\\ar[ld]&\\ar[rd]&\\ar[ld]&\\ar[d]\\\\\n&&\\alpha^*&\\beta^*&\n } \n\\]\n \n As $\\alpha\\beta=\\alpha^*\\beta^*$ the result follows.\n \n \n\\end{proof}\n\n\n\n\n\n\\begin{lem}\n\n$m\\circ\\sigma=m$, diagrammatically:\n\\[\n\\xymatrix{\nM\\ar[rrd]&&M\\ar[lld]\\\\\nM \\ar[rd]^{\\ \\ \\ m} &&M\\ar[ld]\\\\\n& M } \n\\xymatrix{\n\\\\\n\\\\\n&=&\\\\\n} \n\\xymatrix{\nM\\ar[rd]^{\\ \\ \\ m}&&M\\ar[ld]\\\\\n& M\\ar[d]^\\mathrm{Id}\\\\\n& M\n } \n\\]\n\n\\end{lem}\n\n\\begin{proof} Using successively that $m\\circ \\sigma_i=m$, we have: \n\\[m\\circ \\sigma(x_1\\dots x_n, y_1\\dots y_k)=m\\left((\\sigma_k\\dots \\sigma_1)\\dots(\\sigma_{n+k-1}\\dots \\sigma_n)_{(x_1\\dots x_ny_1\\dots y_k)}\\right)\\]\n\\[=m\\left((\\sigma_{k-1}\\dots \\sigma_1)\\dots(\\sigma_{n+k-1}\\dots \\sigma_n)_{(x_1\\dots x_ny_1\\dots y_k)}\\right)=\\dots\\newline\\]\n\\[\n=m(x_1\\dots x_n,y_1\\dots y_k) \n\\]\n\\end{proof}\n\n\\begin{coro}\nIf one considers $A=k[M]$,\nthen the algebra $A$ verifies all diagrams in previous lemmas.\n \\end{coro}\n\n\\begin{lem}\nIf $T=(TA, \\shuffle_\\sigma)$ there are bijective functions \n\\[\n\\sigma_{T,A}:=\\sigma|_{T\\otimes A}: T\\otimes A\\rightarrow A\\otimes T \n\\]\n\\[\n\\sigma_{A,T}:=\\sigma|_{A\\otimes T}: A\\otimes T\\rightarrow T\\otimes A \n\\]\nthat verifies the hypothesis of Lemma \\ref{AC monoid}, and the same for \n $(TA, 
\\shuffle_{-\\sigma})$.\n\\end{lem}\n\\begin{coro}\n $A\\otimes (TA, \\shuffle_{-\\sigma})\\otimes A$ is an algebra.\n\\end{coro}\n\n\\begin{proof}\n Use \\ref{AC monoid} twice and the result follows.\n\\end{proof}\n\n\\begin{coro}\\label{btp}\nTaking $A=k[M]$, then the standard resolution of $A$ as $A$-bimodule has a natural algebra structure\n defining the braided tensorial product as follows: \n\\[\nA\\otimes TA\\otimes A=\nA\\otimes_\\sigma(T^cA,\\shuffle_{-\\sigma})\\otimes_\\sigma A\\]\n\\end{coro}\nRecall the differential of the standard resolution\nis defined as\n$b':A^{\\otimes n+1}\\to A^{\\otimes n}$\n\\[ b'(a_0\\otimes\\dots\\otimes a_n)= \\sum_{i=0}^{n-1}(-1)^{i}a_0\\otimes \\dots\\otimes a_ia_{i+1}\\otimes\\dots\\otimes a_n\\]\nfor all $n\\geq 2$.\nIf $A$ is a commutative algebra then the Hochschild resolution\nis an algebra viewed as $\\oplus_{n\\geq 2}A^{\\otimes n}=A\\otimes TA\\otimes A$, with right and left \n$A$-bilinear extension of the shuffle product on $TA$, and $b'$ is a (super) derivation with\n respect to that product (see for instance Prop. 4.2.2 \\cite{L}). \nIn the braided-commutative case we have the analogous result:\n\\begin{lem}\n$b'$ is a derivation with respect to the product mentioned in Corollary \\ref{btp}.\n\\end{lem}\n\\begin{proof}\nRecall the commutative proof \nas in Prop. 4.2.2 \\cite{L}. \nDenote $*$ the product\n \\[\n(a_0\\otimes\\dots\\otimes a_{p+1} )*(b_0\\otimes\\dots\\otimes b_{q+1})=\na_0b_0\\otimes((a_1\\dots\\otimes a_{p} )\\shuffle (b_1\\otimes\\dots\\otimes b_{q}))\\otimes a_{p+1}b_{q+1}\n\\]\nSince $\\oplus_{n\\geq 2}A^{\\otimes n}=A\\otimes TA\\otimes A$\nis generated by $A\\otimes A$ and $1\\otimes TA\\otimes 1$, we check on generators.\nFor $a\\otimes b\\in A\\otimes A$, $b'(a\\otimes b)=0$, in particular, it satisfies Leibnitz\nrule for elements in $A\\otimes A$.\n Also, $b'$ is $A$-linear on the left, and right-linear on the right, so\n\\[\n b'\\big((a_0\\otimes a_{n+1})*(1\\otimes a_1\\otimes \\cdots \\otimes a_n\\otimes 1)\\big)=\nb'(a_0\\otimes a_1\\otimes\\cdots\\otimes a_n\\otimes a_{n+1})\n\\]\n\\[=\na_0b'(1\\otimes a_1\\otimes\\cdots\\otimes a_n\\otimes 1)a_{n+1}\n=(a_0\\otimes a_{n+1})*b'(1\\otimes a_1\\otimes\\cdots\\otimes a_n\\otimes 1)\n\\]\n\\[\n=0+(a_0\\otimes a_{n+1})*b'(1\\otimes a_1\\otimes\\cdots\\otimes a_n\\otimes 1)\n \\]\\[\n=b'(a_0\\otimes a_{n+1})*(1\\otimes a_1\\otimes\\cdots\\otimes a_n\\otimes 1)\n+(a_0\\otimes a_{n+1})*b'(1\\otimes a_1\\otimes\\cdots\\otimes a_n\\otimes 1)\n\\]\nNow consider $(1\\otimes a_1\\otimes \\dots\\otimes a_{p}\\otimes 1 )*(1\\otimes b_1\\otimes\\dots\\otimes b_{q}\\otimes 1)$,\nit is a sum\nof terms where two consecutive tensor terms can be of the form\n$(a_i,a_{i+1})$, or $(b_j,b_{j+1})$, or $(a_i,b_j)$ or $(b_j,a_i)$.\nWhen one computes $b'$, multiplication of two consecutive tensor factors will give,\nrespectively, terms of the form\n\\[ \\cdots\\otimes a_ia_{i+1}\\otimes \\cdots, \\ \\cdots\\otimes b_jb_{j+1}\\otimes\\cdots,\\ \\cdots\\otimes a_ib_j\\otimes\\cdots,\n\\cdots \\otimes b_ja_i\\otimes\\cdots\n\\]\nThe first type of terms will recover $b'((1\\otimes a_1\\otimes\\cdots\\otimes a_n\\otimes 1))*\n(1\\otimes b_1\\otimes\\cdots\\otimes b_q\\otimes 1)$ and the second type of terms will recover\n$\\pm (1\\otimes a_1\\otimes\\cdots\\otimes a_n\\otimes 1)*b'((1\\otimes b_1\\otimes\\cdots\\otimes b_q\\otimes 1))$.\nOn the other hand, the difference between the third and forth type of terms is just\na single trasposition so they have different signs, while 
$a_ib_j=b_ja_i$ because the \nalgebra is commutative, if one take the {\\em signed} shuffle then they cancel each other.\n\n\nIn the {\\em braided} shuffle product, the summands are indexed by the same set\nof shuffles, so we have the same type of terms, that is, when computing $b'$ of \na (signed) shuffle product, one may do the product of two elements in coming form the \nfirst factor, two elements of the second factor. \nor a mixed term. For the mixed terms, they will have the form\n\\[\n\\cdots\\otimes A_iB_j \\otimes\\cdots \\hbox{, or }\n\\cdots\\otimes \\sigma^1(A_i,B_j)\\sigma^2(A_i,B_j)\\otimes\\cdots\n \\]\nAs in the algebra $A$ we have $A_iB_j=\\sigma^1(A_i,B_j)\\sigma^2(A_i,B_j)$ then this \nterms will cancel leaving only the terms corresponding to \n $b'(1\\otimes a_1\\otimes\\cdots\\otimes a_p \\otimes 1)\\shuffle_{-\\sigma} (1\\otimes b_1\\otimes\\cdots\\otimes b_q\\otimes )$\nand $\\pm(1\\otimes a_1\\otimes\\cdots\\otimes a_p\\otimes 1 )\\shuffle_{-\\sigma} b'(1\\otimes b_1\\otimes\\cdots\\otimes b_q\\otimes 1)$\nrespectively.\n\n\n\n\\end{proof}\n\n\n\n\n\\begin{coro} There exists a comparison morphism \n$f:(B,d)\\to (A\\otimes TA\\otimes A,b')$ which is a differential graded algebra morphism, $f(d)=b'(f)$,\n simply defining it on $e_x$ ($x\\in X$)\n and verifying $f(x'-x)=b'(f(e_x))$.\n\\end{coro}\n\\begin{proof}\nDefine $f$ on $e_x$, extend $k$-linearly to $V$, multiplicatively to $TV$, and $A'$-$A$ linearly to\n$A'\\otimes TV\\otimes A=B$. In order to see that $f$ commutes with the differential, by $A'$-$A$-linearity\nit suffices to check on $TV$, but since $f$ is multiplicative on $TV$ it is enough to check on $V$, and by $k$-linearity we check on basis, that is, we only need $f(de_x)=b'f(e_x)$.\n\\end{proof}\n\n\n\\begin{coro}\n$f|_{TX}$ is the quantum symmetrizer map, and therefore \n$\\mathrm{Ker}(f)\\cap TX\\subset B$ defines the Nichol's ideal \nassociated to $-\\sigma$.\n\\end{coro}\n\\begin{proof}\n\\[\n f(e_{x_1}\\cdots e_{x_n})=f(e_{x_1})*\\cdots *f(e_{x_n})=(1\\otimes x_1\\otimes 1)*\\cdots *(1\\otimes x_n \\otimes 1)=1\\otimes(x_1\\shuffle \\cdots \\shuffle x_n)\\otimes 1\n\\]\n\n\\end{proof}\n\nThe previous corollary explains why $\\mathrm{Ker}(\\mathrm{Id}-\\sigma)\\subset B_2$\ngives a Hopf ideal and also ends the proof of Theorem \\ref{teoid}.\n\n\\begin{question}\n$Im(f)=A\\otimes \\mathfrak{B}\\otimes A$ is a resolution of $A$ as a $A$-bimodule? namely,\nis $(A\\otimes\\mathfrak{B}\\otimes A,d)$ acyclic?\n\\end{question}\nThis is the case for involutive solutions in characteristic zero, but\n also for $\\sigma=$flip in any characteristic, and $\\sigma=\\mathrm{Id}$ (notice this $\\mathrm{Id}$-case \ngives the Koszul resolution for the tensor algebra). If the answer to that question is yes,\nand $\\mathfrak{B}$ is finite dimensional then $A$ have necessarily finite global dimension. \nAnother interesting question is how to relate \ngenerators for the relations defining $\\mathfrak{B}$ and cohomology classes for $X$.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe competition between thermal fluctuations, pinning and\ninteractions between vortices leads to many novel physical\nphenomena in type-II high-temperature\nsuperconductors~\\cite{blatter}. 
Examples include the melting of the\nAbrikosov flux-lattice into an entangled vortex-liquid~\\cite{ns} and\nthe proposed existence of low temperature Bose-glass~\\cite{nv},\nvortex glass~\\cite{fisher_glass} and Bragg glass~\\cite{nat_sch}\nphases.\n\nMany experimental probes have been used to study these phenomena.\nThey include decoration, transport and magnetization measurements,\nneutron scattering, electron microscopy, electron holography and\nHall probe microscopes. More recently it has become possible to\nmanipulate single vortices, for example using magnetic force\nmicroscopy (MFM)~\\cite{Wadas92}. These can, in principle, measure\ndirectly many microscopic properties which have been up to now under\ndebate or assumed. The possibility of performing such experiments is\nsimilar in spirit to single molecule experiments on motor proteins,\nDNA, and RNA which have opened a window on phenomena inaccessible\nvia traditional bulk biochemistry experiments~\\cite{singlemol}.\n\nIn this spirit Olson-Reichhardt and Hastings~\\cite{ORH04} have\nproposed using MFM to wind two vortices around each other. Such an\nexperiment allows direct probing of the energetic barrier for two\nvortices to cut through each other. A high barrier for flux lines\ncrossing has important consequences for the dynamics of the\nentangled vortex phase.\n\nIn this paper we introduce and study several experiments in which a\nsingle vortex is depinned from extended defects using, for example,\nMFM. A brief account of the results can be found in\nRef.~[\\onlinecite{knp}]. First we consider a setup where MFM is used\nto pull an isolated vortex bound to common extended defects such as\na columnar pin, screw dislocation, or a twin plane in the presence\nof point disorder. Using a scaling argument, supported by numerical\nand rigorous analytical results, we derive the displacement of the\nvortex as a function of the force exerted by the tip of a magnetic\nforce microscope. We focus on the behavior near the depinning\ntransition and consider an arbitrary dimension $d$. We argue that\nthe transition can be characterized by a universal critical\nexponent, which depends {\\it only on the dimensionality of the\ndefect}. We show that unzipping experiments from a twin plane\ndirectly measures the free-energy fluctuations of a vortex in the\npresence of point disorder in $d=1+1$ dimensions. To the best of our\nknowledge, there is only one, indirect, measurement of this\nimportant quantity in Ref.~[\\onlinecite{Bolle}]. The form of the\nphase diagram in the force temperature plane is also analyzed in\ndifferent dimensions. Related results apply when a tilted magnetic\nfield is used to tear away vortex lines in the presence of point\ndisorder, which was not considered in earlier work on clean\nsystems.~\\cite{hatano}. Furthermore, we show that a possible\nexperimental application of the scaling argument is a direct measurement of the vortex line tension in an unzipping\nexperiment. As we will show in this paper, in a system of finite size, the displacement of the flux line at the transition\ndepends only on the critical force exerted on the flux line by the\nMFM tip, the flux line tension and the sample thickness. Thus\nunzipping experiments can provide valuable information on the\nmicroscopic properties of flux lines.\n\nNext we consider a setup where a single vortex is pulled out of a\nplane with many vortices. 
It is known that the large-scale behavior\nof vortices in a plane is characterized by a single dimensionless\nnumber, often referred to as the Luttinger liquid parameter due to\nan analogy with bosons in $d=1+1$ dimensions. We show that\nexperiments which unzip a single vortex out of the plane can be used\nto directly probe the Luttinger liquid parameter. We also discuss\nthe effects of disorder both within the defect and in the bulk with the same setup.\n\n\n\\section{Unzipping a vortex from a defect}\n\\label{Sec2}\n\n\\subsection{Review of clean case}\n\\label{sectioncleancase}\n\nWe begin by considering the unzipping of a vortex from an extended\ndefect in a clean sample. For a columnar defect the system is\ndepicted in Fig.~\\ref{fig:clean_unzip}. At the top of the sample the\nMFM applies a constant force ${\\bf f}$ which pulls the vortex away\nfrom the defect. We assume that at the bottom of the sample the\nvortex is always bound to the defect at a specific location. This\nassumption will not influence the results since below the unzipping\ntransition the flux line is unaffected by the boundary conditions at\nthe far end of the sample.\n\n\\begin{figure}[ht]\n\\center\n\\includegraphics[width=8cm]{Clean_unzip.eps}\n\\caption{A MFM tip applies a constant force ${\\bf f}$ which pulls the\nvortex away from the defect. The configuration of the vortex is\nrepresented by ${\\bf r}(\\tau)$. We assume throughout that the vortex\nis always bound to the defect at the bottom of the sample so that\n${\\bf r}(\\tau=0)=0$. }\\label{fig:clean_unzip}\n\\end{figure}\n\nIn the absence of external force, and for an external field aligned\nwith the defect, the appropriate energy for a given configuration\n${\\bf r}(\\tau)$ of the vortex is given by \\cite{blatter}:\n\\begin{equation}\nF_0\\!\\!=\\!\\!\\!\\int_0^L \\!\\!\\!\\!d \\tau \\left[ \\frac{\\gamma}{2}\n(\\partial_\\tau {\\bf r}(\\tau))^2+V({\\bf r}(\\tau)) \\right] .\n\\label{f0}\n\\end{equation}\nHere $\\gamma$ is the line tension and $L$ is the length of the\nsample along the $\\tau$ direction. The vector ${\\bf r}(\\tau)$\nrepresents the configuration of the vortex in the $d$ dimensional\nspace and $V({\\bf r})$ is a short-ranged attractive\npotential describing the $d'$-dimensional extended defect (in\nFig.~\\ref{fig:clean_unzip} $d=3$ and $d'=1$). The effect of the the\nexternal force, exerted by the MFM, can be incorporated by adding to\nthe free energy the contribution\n\\begin{equation}\nF_1=-{\\bf f}\\cdot {\\bf r(L)}=-\\int_0^L {\\bf f}\\cdot\n\\partial_\\tau \\bf r(\\tau)\\,d\\tau\n\\label{eq:unzipfe}\n\\end{equation}\nwhere we have used ${\\bf r}(\\tau=0)={\\bf 0}$. Here ${\\bf f}$\nstands for the local force exerted by the MFM in the transverse\ndirection. The free energy of a given configuration of the vortex\nis given by\n\\begin{equation}\nF({\\bf r})=F_0({\\bf r})+F_1({\\bf r}) \\;.\n\\end{equation}\nThe problem, as stated, has been studied first in the context of\nvortices in the presence of a tilted magnetic field~\\cite{hatano}\nand the results have been applied to the related problem of DNA\nunzipping~\\cite{Lubensky}.\n\nWe note that a similar setup can be\nachieved by using a transverse magnetic field instead of the\nexternal force. 
See Fig.~(\\ref{fig:clean_unzip_mag}).\n\\begin{figure}[ht]\n\\center\n\\includegraphics[width=8cm]{Clean_unzip_mag.eps}\n\\caption{Same as in Fig.~\\ref{fig:clean_unzip} but with a transverse\nmagnetic field instead of the MFM force tearing the flux line away from a\ndefect.}\\label{fig:clean_unzip_mag}\n\\end{figure}\nIndeed in the free energy (\\ref{eq:unzipfe}) the external force\ncouples to the slope of the flux line $\\partial_\\tau {\\bf r}$ in the\nsame way as the external magnetic field does~\\cite{hatano}. The only\ndifference between the two setups is that there are now equal and\nopposite forces acting on the top and bottom ends of the sample.\nHowever this difference is important only in short samples, where\nthe two ends of the flux line are not independent from each other.\n\nIn this paper we focus on the thermal average of distance of the tip\nof the vortex from the extended defect $\\langle x_m (\\tau=L)\\rangle$. This quantity\nis related to the thermal average of the length of the vortex that is unzipped from the\ndefect, $\\langle \\tau_m \\rangle$, through $\\langle x_m \\rangle = f\n\\langle \\tau_m \\rangle \/ \\gamma$. Here and throughout the paper\n$\\langle \\ldots \\rangle$ denotes a thermal average while an overbar\ndenotes an average over realizations of the disorder.\n\nAs stated above the universal behavior of $\\langle \\tau_m \\rangle$\n(or equivalently $\\langle x_m \\rangle$) within this disorder-free model have been\nderived previously. Here we sketch two approaches which will be\ngeneralized to samples with quenched disorder in the rest of the paper.\n\nIn the first approach, instead of directly summing over\nconfigurations of the vortex we perform the sum in two parts by\ndividing the vortex into bound and unbound segments. The unbound\nsegment starts at the point where the vortex departs the defect\nwithout ever returning to hit it again up to the top of the sample.\nUsing Eq.~(\\ref{eq:unzipfe}) it is straightforward to integrate over\nvortex configurations to obtain for the partition function of the\nunzipped segment\n\\begin{eqnarray}\nZ_u(\\tau_m)&=&\\int {\\cal D}{\\bf r}(\\tau)\\,\\mathrm\ne^{-\\beta\\int_0^{\\tau_m}\\! d \\tau \\left[ \\frac{\\gamma}{2}\n(\\partial_\\tau {\\bf r}(\\tau))^2-{\\bf\nf}\\cdot\\partial_\\tau{\\bf r(\\tau)}\\right]} \\nonumber \\\\\n&\\propto& \\mathrm e^{\\tau_m \\beta f^2 \/2\\gamma} \\;,\n\\label{free_unzip1}\n\\end{eqnarray}\nso that the free energy associated with this conditional partition function is\n\\begin{equation}\n{\\cal F}_u(\\tau_m)=-\\beta^{-1}\\ln Z_u(\\tau_m)= - f^2 \\tau_m\/\n2\\gamma \\;,\n\\label{free_unzip}\n\\end{equation}\nwhere $\\beta$ is the inverse temperature. Henceforth in this paper we\nset $\\beta=1$, which can be always achieved by appropriate rescaling\nof the energy units. Even though the above sum also runs over\nconfigurations which return to the defect it is easy to verify that\nthese configurations give rise to exponentially small correction in\nthe $\\tau_m$. Equation~(\\ref{free_unzip}) implies that as the force,\n${\\bf f}$, increases the free energy density of the unzipped portion\nof the vortex decreases. In contrast, the free energy density of the\nbound part is, clearly, independent of the force and given by ${\\cal\nF}_b(\\tau_m)=V_0(L-\\tau_m)$, where $V_0$ is the free energy per unit\nlength of a bound vortex and $L$ is the length of the sample along\nthe defect. 
The vortex will be unzipped when $f=f_c=\\sqrt{2 \\gamma\n|V_0|}$ such that the free-energy densities of the bound and\nunzipped states are equal.\n\nIn this representation the total free energy of the vortex is\ngiven by\n\\begin{equation}\n{\\cal F}(\\tau_m)={\\cal F}_u(\\tau_m)+{\\cal F}_b(\\tau_m)\\;.\n\\end{equation}\nThe unconstrained partition function of the model is given by\n\\begin{equation}\nZ=\\int_0^L d\\tau_m e^{-(f_c^2-f^2)\\tau_m\/2 \\gamma} \\;.\n\\end{equation}\nSince both results are independent of the dimensionality of the\ndefect (columnar or planar) near the transition one always finds\nin the $L \\to \\infty$ limit\n\\begin{equation}\n\\langle \\tau_m \\rangle \\sim \\frac{1}{(f_c-f)^\\nu} \\;,\n\\label{eq:clean}\n\\end{equation}\nwith $\\nu=1$. Note, that it can easily be seen that approaching the\ntransition from above the average length of the vortex which is\nbound to the defect, $\\langle (L-\\tau_m) \\rangle$, diverges in the\nsame manner~\\cite{hatano}.\n\nAn alternative approach, which will also be useful in this paper,\nuses the mapping of the problem to the physics of a fictitious\nquantum particle \\cite{NelsonBook}. The contribution of the external\nfield ${\\bf f}$ to the free energy now manifests itself as an\nimaginary vector potential acting on the particle in $d-1$\ndimensions (with the $\\tau$ axis acting as a time direction).\nExplicitly, using the standard conversion from path-integrals (see\nRef.~[\\onlinecite{hatano}] for details) one finds that the problem\ncan be described in terms of a non-Hermitian Hamiltonian:\n\\begin{equation}\n{\\cal H}={1\\over 2\\gamma} {\\bf p}^2 -{i\\over \\gamma} {\\bf f} \\cdot\n{\\bf p} +V({\\bf r}) \\;,\n\\label{H}\n\\end{equation}\nwhere ${\\bf p}=\\frac{1}{i} \\vec{\\nabla}$ is the momentum operator.\nIn this language the vortex is bound to the defect as long as there\nis a bound state in the Hamiltonian. As mentioned above $i {\\bf f}$\nis equivalent to a constant imaginary vector potential. This analogy\nmakes it apparent that solutions of the non-Hermitian problem can be\nrelated to those of the Hermitian Hamiltonian (where one sets ${\\bf\nf}=0$) by an imaginary gauge-transformation~\\cite{hatano}. In\nparticular the left $\\psi^L_n({\\bf r},{\\bf f})$ and the right\n$\\psi^R_n({\\bf r},{\\bf f})$ eigenfunctions of the non-Hermitian\nproblem can be obtained from those of the Hermitian problem,\n$\\psi_n({\\bf r},{\\bf f}={\\bf 0})$, using\n\\begin{eqnarray}\n\\psi^R_n({\\bf r},{\\bf f}) &=& {\\cal U} \\psi_n({\\bf r},{\\bf\nf}={\\bf 0}) \\nonumber \\\\\n\\psi^L_n({\\bf r},{\\bf f}) &=& \\psi_n({\\bf r},{\\bf f}={\\bf 0})\n{\\cal U}^{-1} \\;, \\label{eq:gauge1}\n\\end{eqnarray}\nwhere\n\\begin{equation}\n{\\cal U}=\\mathrm e^{{\\bf f} \\cdot {\\bf r}}= \\mathrm e^{f x}\\; ;\n\\;\\;\\;\\;\\; {\\cal U}^{-1}=\\mathrm e^{-{\\bf f} \\cdot {\\bf r}}=\\mathrm\ne^{-f x} \\;.\n\\label{eq:gauge2}\n\\end{equation}\nThe universal behavior of $\\tau_m$ at the transition,\nEq.~(\\ref{eq:clean}), was obtained in Ref.~[\\onlinecite{hatano}] by noting\nthat\n\\begin{equation}\n\\langle\\tau_m\\rangle\\propto \\langle x_m\\rangle ={\\int x\\psi^R_n({\\bf\nr}) d{\\bf r}\\over \\int \\psi^R_n({\\bf r}) d{\\bf r}}\\propto{1\\over\n{f_c-f}}, \\label{tau_m}\n\\end{equation}\nwhere $\\psi^R_n({\\bf r})\\propto \\mathrm e^{-f_c r}$ at long $r$. 
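To make the last proportionality explicit, note that the bound state of the Hermitian (${\\bf f}={\\bf 0}$) problem decays as\n$\\psi_n({\\bf r},{\\bf f}={\\bf 0})\\propto \\mathrm e^{-f_c x}$ at large distance $x$ from the defect, so that by\nEqs.~(\\ref{eq:gauge1}) and (\\ref{eq:gauge2}) the right eigenfunction behaves as $\\psi^R_n\\propto \\mathrm e^{-(f_c-f)x}$\nalong the pulling direction. The integrals in Eq.~(\\ref{tau_m}), dominated by this direction, then give\n\\[\n\\langle x_m\\rangle\\simeq {\\int_0^\\infty x\\,\\mathrm e^{-(f_c-f)x}\\,dx \\over \\int_0^\\infty \\mathrm e^{-(f_c-f)x}\\,dx}={1\\over f_c-f}\\;.\n\\]\n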
We\nnote that the imaginary gauge transformation is justified only at\n$f<f_c$, where the transformed bound state remains normalizable.\n\n\\subsection{Scaling argument and the phase diagram}\n\\label{scalingphasediagram}\n\nIn the presence of point disorder the free energy of the vortex at fixed unzipping length $\\tau_m$ acquires,\nin addition to the deterministic contribution $a(f_c-f)\\tau_m$ of the clean case (with $a$ a constant),\nrandom contributions from the bound and from the unzipped parts of the flux line. For a columnar defect\n($d'=1$) the bound segment simply accumulates the point disorder along the pin, so that its free energy\nfluctuates as $\\delta {\\cal F}_{b}\\propto \\tau_m^{1\/2}$. For $d'>1$ this result\nis modified and one can use known results for the free energy of a\ndirected path in a random medium (see Eq.~(\\ref{eq:fefluct})), which\nleads to $\\delta {\\cal F}_{b}\\propto \\tau_m^{\\omega(d')}$, where\n$d'$ is the dimensionality of the defect \\cite{Karreview}. Finally,\nthere is a contribution to the free energy fluctuations from the\ninteraction of the unzipped part of the vortex with the bulk point\ndisorder, $\\delta {\\cal F}_{u}$. This contribution behaves similarly\nto $\\delta \\mathcal F_b$ with a different bulk exponent: $\\delta\n{\\cal F}_{u}\\propto \\tau_m^{\\omega(d)}$, where $d>d'$ is the\ndimensionality of the sample. Collecting all three terms gives:\n\\begin{equation}\n{\\cal F}(\\tau_m)=a(f_c-f)\\tau_m +\\delta\\mathcal F_{b}(\\tau_m)\n+\\delta\\mathcal F_u(\\tau_m)\\;.\n\\label{f}\n\\end{equation}\nAs discussed above, $\\omega(d)$ has been studied extensively in the\npast and it is well known that $\\omega(d')>\\omega(d)$ for any\n$d'<d$. Near the transition the fluctuations of the bound part therefore dominate, and balancing the\nfirst term of Eq.~(\\ref{f}) against $\\delta\\mathcal F_b$ gives\n\\begin{equation}\n\\overline{\\langle \\tau_m \\rangle} \\sim {1\\over (f_c-f)^{\\nu}},\n\\label{nu}\n\\end{equation}\nwith\n\\begin{equation}\n\\nu={1\\over 1-\\omega(d')},\n\\label{valuenu}\n\\end{equation}\ni.e.\\ $\\nu=2$ for a columnar pin ($\\omega=1\/2$) and $\\nu=3\/2$ for a twin plane ($\\omega=1\/3$), instead of the\nclean value $\\nu=1$.\n\nWhether weak bulk disorder can unbind the vortex from the defect altogether (even at zero force) is controlled\nby the scaling dimension of the defect potential,\n$\\varepsilon=1-(d-d')\\zeta(d)-\\omega(d)$, where $\\zeta(d)$ is the wandering exponent of a free flux line in the\nbulk. When $\\varepsilon<0$ a weak defect potential is irrelevant, while when $\\varepsilon>0$ it is relevant, i.e.\\ long excursions are\nenergetically costly.\n\nIn $d=3$, numerical simulations indicate that\n$\\zeta \\approx 0.6$, which gives for the planar defect ($d^\\prime=2$)\n$\\varepsilon \\approx 1\/8$ and for the columnar pin ($d^\\prime=1$)\n$\\varepsilon \\approx -1\/2$. Therefore, a weak twin plane is always relevant and\nthe vortex is always bound to it. However, a weak columnar pin is\nirrelevant and then one expects an unbinding transition. In $d=2$,\nwhere there can be no twin plane, $\\zeta=2\/3$ and the columnar\ndefect is found to be marginal. As argued in Ref.~[\\onlinecite{HwaNatter}]\nit is in fact marginally {\\it relevant}.\n\nTo summarize this discussion: for columnar defects in 3 dimensional samples we\nexpect there is a critical strength of the bulk disorder beyond which the\nflux line spontaneously unzips even at zero force. In contrast, for a planar defect in\n3 dimensions and for columnar defects in two-dimensional superconductors we expect\nthat for any strength of the disorder there is a finite non-zero\nvalue of the force needed to unzip the vortex.\n\nNext, we will check the scaling (\\ref{nu}) and the anticipated\nlocalization \/ delocalization behavior for a number of different\nsituations using both analytical methods based on the replica trick\nand numerical simulations.\n\n\n\\subsection{Unzipping from a disordered columnar pin without excursions.}\n\\label{replica}\n\nWe start our quantitative analysis from the simplest situation,\nwhere one can get exact analytical results. Namely, we consider\nunzipping from a 1D pin with disorder localized only on the pin.\nAdditionally we neglect all excursions of the vortex line from the\npin except for the unzipped region. This problem then becomes\nidentical to DNA unzipping. In Ref.~[\\onlinecite{Lubensky}] the\nauthors analyzed this problem using a Fokker-Planck approach and\nindeed derived $\\nu=2$ near the unzipping transition. Here we show\nhow the same problem can be solved using the replica trick. The\nsolution was sketched in Ref.~[\\onlinecite{kp}]. Here we review the\nderivation for completeness and provide additional details.\n\nIgnoring excursions of the bound part of the flux line into the bulk\ngives the free energy a particularly simple form. 
We again write it\nas a sum over the contributions from the bound and unbound segments.\nThe bound segment contribution is given by ${\\cal\nF}_b(\\tau_m)=V_0(L-\\tau_m)+\\int_{\\tau_m}^L d \\tau_m' U(\\tau_m')$,\nwhere $V_0<0$ is the mean value of the attractive potential, $L$ is\nthe length of the columnar defect which is assumed to be very large,\nand $U(\\tau_m)$ is a random Gaussian uncorrelated potential with\nzero mean satisfying\n$\\overline{U(\\tau_{m_1})U(\\tau_{m_2})}=\\Delta\\delta(\\tau_{m_1}-\\tau_{m_2})$.\nThe contribution from the unzipped part takes the same form as in\nthe clean case (see Eq. (\\ref{free_unzip})). Collecting the two\nterms gives:\n\\begin{equation}\n\\mathcal F(\\tau_m)=\\epsilon \\tau_m+\\int_{\\tau_m}^L d \\tau_m'\nU(\\tau_m').\n\\label{fz}\n\\end{equation}\nAs before we work in units where $k_B T=1$. In the equation\nabove the deviation from the unzipping transition is measured by\n$\\epsilon=(f_c^2-f^2)\/2\\gamma$, where $f$ is the force applied to\nthe end of the flux line and $f_c=\\sqrt{2\\gamma |V_0|}$ is the\ncritical force. In Eq.~(\\ref{fz}) we dropped an unimportant constant\nadditive term $V_0 L$.\n\nThe statistical properties of the unzipping transition can be\nobtained by considering $n$ replicas of the partition function $Z(\\tau)=\\exp(-\\mathcal\nF(\\tau))$~\\cite{edwards anderson}:\n\\begin{equation}\n\\overline{Z^n}=\\int_0^L d\\tau_1\\ldots\\int_0^L\nd\\tau_n\\,\\overline{\\exp\\left(-\\sum_{\\alpha=1}^n \\mathcal\nF(\\tau_\\alpha)\\right)},\n\\label{Z_n}\n\\end{equation}\nwhere the overbar denotes averaging over point disorder. The\naveraging procedure can be easily done for a positive integer $n$.\nWe eventually wish to take the limit $n \\to 0$. First we order the\ncoordinates $\\tau_j$, where the $j^{th}$ replica unbinds from the\npin, according to $0\\leq \\tau_1\\leq \\tau_{2}\\leq\\dots\\leq \\tau_n$.\nThen for $\\tau\\in[0,\\tau_1)$ there are no replicas bound to the\ncolumnar pin, for $\\tau \\in[\\tau_1,\\tau_2)$ there is one replica on\nthe pin, until finally for $L \\geq \\tau\\geq \\tau_n$ all $n$ replicas\nare bound to the pin. Using this observation and explicitly\naveraging over the point disorder in Eq.~(\\ref{Z_n}) we arrive at:\n\\begin{equation}\n\\overline{Z^n}\\!=n!\\!\\int\\limits_0^L d\\tau_1\\dots\\!\\!\\int\\limits_{\\tau_{n-1}}^L\n\\!\\!d\\tau_n\\exp\\!\\left[-\\!\\sum\\limits_{j=1}^n\\! \\epsilon \\tau_j+\n{\\Delta\\over 2} j^2(\\tau_{j+1}-\\tau_{j})\\right],\n\\label{tuam2}\n\\end{equation}\nwhere we use the convention $\\tau_{n+1}=L$. The integral above is\nstraightforward to evaluate in the $L \\to \\infty$ limit so that\n\\begin{eqnarray}\n&&\\overline{Z^n}=\\mathrm e^{n^2L\\Delta\/2}{1\\over \\epsilon_n^n}\\prod_{j=1}^n\n{1\\over 1-\\kappa_n j} \\nonumber\n\\\\\n&&=\\mathrm\ne^{n^2L\\Delta\/2}\\left({2\\over\\Delta}\\right)^n\n{\\Gamma(1\/\\kappa_n-n)\\over\\Gamma(1\/\\kappa_n)}\n\\;, \\phantom{XXX} \\label{eq:partunzip}\n\\end{eqnarray}\nwhere $\\epsilon_n=\\epsilon+\\Delta n$ and\n$\\kappa_n=\\Delta\/2\\epsilon_n$. The exponential prefactor is an\nunimportant overall contribution of the whole columnar pin while the\nrest of the expression is the ($L$ independent) contribution from\nthe unzipped region. 
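As a simple consistency check of Eq.~(\\ref{eq:partunzip}), for a single replica ($n=1$) the integral in Eq.~(\\ref{tuam2}) gives, in the $L\\to\\infty$ limit,\n\\[\n\\overline{Z}=\\int_0^L d\\tau_1\\,\\mathrm e^{-\\epsilon\\tau_1+{\\Delta\\over 2}(L-\\tau_1)}\\simeq {\\mathrm e^{L\\Delta\/2}\\over \\epsilon+\\Delta\/2}\\;,\n\\]\nwhich coincides with $\\mathrm e^{L\\Delta\/2}\\,\\epsilon_1^{-1}(1-\\kappa_1)^{-1}$, since $\\epsilon_1(1-\\kappa_1)=\\epsilon+\\Delta\/2$.\n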
Interestingly the restricted partition\nfunctions for the unbinding problem from a hard wall (with no\nexternal force) and for the unzipping from a 1 dimensional pin are identical\nand thus there is equivalence between the two problems (see\nRef.~[\\onlinecite{kp}] for more details.)\n\nThe disorder-averaged free energy is given by the limit\n$\\overline{\\mathcal F}=-\\lim_{n \\to 0}\n(\\overline{Z^n}-1)\/n$~[\\onlinecite{edwards anderson}]. With the help\nof Eq.~(\\ref{eq:partunzip}) one obtains\n\\begin{equation}\n\\overline{\\mathcal F}=\\ln (\\epsilon \\kappa) + \\Psi(1\/\\kappa),\n\\label{free_en}\n\\end{equation}\nwhere $\\Psi(x)$ is the digamma function and\n$\\kappa=\\Delta\/2\\epsilon$. The unzipping transition occurs at\n$\\epsilon=0$ or equivalently at $\\kappa \\to \\infty$. The expression\n(\\ref{free_en}) is identical to the one found in\nRef.~[\\onlinecite{oper}] using a Fokker-Planck equation approach,\nsupporting the validity of the analytic continuation in $n$ for this\nparticular application of the replica calculation.\n\nIt is easy to see that this free energy yields\n\\begin{equation}\n\\overline{\\langle \\tau_m\\rangle}={\\partial \\overline{\\mathcal F}\\over\n\\partial\\epsilon}={1 \\over \\kappa\\epsilon}\\Psi^{(1)}(1\/\\kappa),\n\\label{zav}\n\\end{equation}\nwhere $\\Psi^{(n)}(x)$ stands for the $n$-th derivative of the\ndigamma function. The expression above predicts a crossover from\n$\\overline{\\langle \\tau_m\\rangle}\\approx 1\/\\epsilon$ for $\\kappa\\ll\n1$ (far from the transition) to $\\overline{\\langle\n\\tau_m\\rangle}\\approx\\kappa\/\\epsilon=\\Delta\/\\epsilon^2$ for\n$\\kappa\\gg 1$ (close to the transition) similarly to the unzipping\nfrom the wall problem analyzed above. Also, it is easy to check that\n\\begin{equation}\nw=\\overline{\\langle \\tau_m^2 \\rangle - \\langle \\tau_m\n\\rangle^2}={\\partial^2 \\overline{\\mathcal F}\\over \\partial\\epsilon^2}=-{1 \\over\n(\\kappa\\epsilon)^2}\\Psi^{(2)}(1\/\\kappa). \\label{fav}\n\\end{equation}\nHere there is a crossover from $w \\approx 1\/\\epsilon^2$ for $\\kappa\n\\ll 1$ to $w \\approx 2 \\kappa\/\\epsilon^2=\\Delta\/\\epsilon^3$ for\n$\\kappa\\gg 1$. As has been noted in the context of DNA unzipping\n\\cite{Lubensky} $\\sqrt{w}\/\\overline{\\langle \\tau_m\\rangle}$ changes\nfrom being of order unity for the weakly disordered $\\kappa \\ll 1$ case to\n$\\sim \\epsilon^{1\/2}$ for $\\kappa \\gg 1$. Thus for $\\kappa \\gg 1$,\nclose to the unzipping transition, thermal fluctuations become\nnegligible and one can work in the zero temperature limit.\n\nThe simplicity of the problem also allows finding the higher moments\nof the distribution. Here we evaluate the second moment, which gives\nthe width of the distribution of $\\overline{\\langle \\tau_m\\rangle}$\ndue to different disorder realizations. Note that since the order of\naveraging over thermal fluctuations and disorder is important this\nquantity can not be extracted directly from Eq. (\\ref{fav}). To\nproceed we consider the generating function, ${\\cal W}_n(\\epsilon_j)$ defined by\n\\begin{equation}\n{\\cal W}_n(\\epsilon_j)=\\int\\limits_{0}^L\nd\\tau_1\\ldots\\int\\limits_{\\tau_{n-1}}^L d\\tau_n\\,\\mathrm\ne^{-\\sum\\limits_{j=1}^n \\epsilon_j\\tau_j+\\Delta\/2\nj^2(\\tau_{j+1}-\\tau_j)}\\!\\!. 
\\nonumber \\label{zm1}\n\\end{equation}\nThe second (and similarly the higher) moments can be found by\ndifferentiating ${\\cal W}_n$ with respect to $\\epsilon_j$:\n\\begin{equation}\n\\overline{\\langle \\tau_m^2\\rangle}=\\lim_{n\\to 0} \\left. {1\\over\n{\\cal W}_n(\\epsilon_j)}\\,{1\\over n}\\sum_{j=1}^n {\\partial^2 {\\cal\nW}_n(\\epsilon_j)\\over\\partial\n\\epsilon_j^2}\\right|_{\\epsilon_j=\\epsilon}. \\label{zm3}\n\\end{equation}\nUpon evaluating the integral, we find\n\\begin{equation}\n{\\cal W}_n(\\epsilon_j)=\\prod_{j=1}^n {1\\over\n\\sum_{k=1}^j\\epsilon_k\\,-\\,\\Delta j^2\/2} \\label{zm2}\n\\end{equation}\nand correspondingly\n\\begin{equation}\n\\overline {\\langle \\tau_m^2\\rangle}={1\\over \\epsilon^2}\\lim_{n\\to\n0}{1\\over n}\\sum_{j=1}^n {2\\over 1-\\kappa j}\\sum_{k=j}^n {1\\over k\n(1-\\kappa k)}.\n\\end{equation}\nThis double sum can be calculated using a trick similar to the one\ndescribed in Ref.~[\\onlinecite{Kardar}]:\n\\begin{eqnarray}\n\\overline{\\langle \\tau_m^2\\rangle}&=&{2\\kappa^2\\over \\epsilon\n^2}\\int\\!\\!\\!\\!\\!\\!\\int\\limits_{\\!\\!\\!\\!x>y>0}\\!\\!\\!\\! dx dy\n{1\\over \\mathrm e^{\\kappa x}-1}{y\\,\\mathrm e^{-y}\\over \\mathrm\ne^{\\kappa y }-1}\\left[ \\mathrm e^{\\kappa y}+\\mathrm e^{2y}\\mathrm\ne^{\\kappa\nx-x}\\right]\\nonumber\\\\\n&-&{4\\over \\kappa\n\\epsilon^2}\\Psi^{(1)}(1\/\\kappa)\\left(C+\\Psi(1\/\\kappa)\\right),\n\\label{z2}\n\\end{eqnarray}\nwhere $C\\approx 0.577$ is Euler's constant. In the limit of weak\ndisorder or high temperature $\\kappa\\ll 1$, not surprisingly, we get\n$\\overline{\\langle \\tau_m^2\\rangle }\\approx 2\/ \\epsilon^2$, which\nagrees with the Poissonian statistics of $\\tau_m$ with an average\ngiven by $\\overline{\\langle \\tau_m \\rangle}=1\/\\epsilon$. In the\nopposite limit $\\kappa\\gg 1$ one finds $\\overline{\\langle\n\\tau_m^2\\rangle }=4\\kappa^2\/ \\epsilon^2$. Note that\n$\\overline{\\langle \\tau_m\\rangle}=\\kappa\/\\epsilon$, thus the\nrelative width of the distribution ($\\delta \\tau_m\/\\overline{\\langle\n\\tau_m\\rangle}$), defined as the ratio of the variance of the\nunzipping length $\\tau_m$ to its mean is larger by a factor of\n$\\sqrt{3}$ than that in the high temperature regime. The\ndistribution thus becomes superpoissonian at large $\\kappa$. In\nfact, in the limit $\\kappa\\to\\infty$ one can derive the full\ndistribution function $P_{\\kappa\\to\\infty}(\\tau_m)$ using extreme\nvalue statistics~\\cite{Lubensky, ledoussal}:\n\\begin{equation}\n{\\cal P}_{\\kappa\\to\\infty}(\\tau_m)\\approx {\\epsilon\/ \\kappa}\\,\nG(\\tau_m\\,\\epsilon\/\\kappa)\n\\end{equation}\nwith\n\\begin{equation}\nG(x)={1\\over\\sqrt{\\pi x}}\\,\\mathrm e^{-x\/4}-{1\\over 2}{\\rm\nerfc}(\\sqrt{x}\/2),\n\\end{equation}\nwhere ${\\rm erfc}(x)$ is the complimentary error function. It is\neasy to check that this distribution indeed reproduces correct\nexpressions for the mean and the variance. We emphasize that while\nthe thermal fluctuations of the unzipping length become negligible\nnear the transition, the fluctuations due to different realizations\nof point disorder are enhanced and lead to a wider-than-Poissonian\ndistribution of $\\tau_m$.\n\nTo check these results and uncover subtleties that might\narise in experiments, we performed direct numerical simulations of\nthe partition function of the free energy (\\ref{fz}). 
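To make this procedure concrete, a minimal sketch of such a direct evaluation is given below (for illustration only; this is not the code used for the figures, and the parameters are placeholders). It sums the Boltzmann weights of the discretized free energy $F(m)=\\epsilon m+\\sum_{l>m}U(l)$ over the unzipping length $m$, averages over disorder realizations, and compares the result with the replica prediction (\\ref{zav}):\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.special import polygamma\n\ndef thermal_moments(eps, U0, L, rng):\n    # One disorder realization: thermal <tau_m> and <tau_m^2> for the\n    # discretized free energy F(m) = eps*m + sum_{l>m} U(l).  The constant\n    # sum_l U(l) is dropped; it shifts all F(m) uniformly and cancels\n    # in thermal averages.\n    U = rng.uniform(-U0, U0, size=L)\n    m = np.arange(L + 1)\n    F = eps * m - np.concatenate(([0.0], np.cumsum(U)))\n    w = np.exp(-(F - F.min()))        # Boltzmann weights, k_B T = 1\n    Z = w.sum()\n    return (m * w).sum() / Z, (m**2 * w).sum() / Z\n\n# Placeholder parameters (not those quoted in the text)\neps, U0, L, nsamples = 0.01, 0.3, 20000, 2000\nrng = np.random.default_rng(1)\nt1, t2 = np.array([thermal_moments(eps, U0, L, rng)\n                   for _ in range(nsamples)]).T\n\nDelta = U0**2 / 3.0\nkappa = Delta / (2.0 * eps)\nprint("disorder-averaged <tau_m>:", t1.mean())\nprint("replica prediction (zav) :", polygamma(1, 1.0 / kappa) / (kappa * eps))\nprint("relative width           :",\n      np.sqrt(t2.mean() - t1.mean()**2) / t1.mean())\n\\end{verbatim}\n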
Specifically, we considered a discrete version of the problem where the\npartition function is\n\\begin{equation}\nZ=\\sum_{m=0}^{L} \\mathrm e^{-\\epsilon m+\\sum_{l=1}^{m} U(l)}.\n\\end{equation}\nHere $U(l)$ is the random potential uniformly distributed in the\ninterval $[-U_0,U_0]$ so that the disorder variance is\n$\\Delta=\\overline{U^2(l)}=U_0^2\/3$. For the simulations we choose\n$\\epsilon=\\ln(1.2)-0.18\\approx 0.00232$ and $U_0=0.3$, which gives\n$\\Delta=0.03$, $\\kappa\\approx 6.46$ and, according to both\nEq.~(\\ref{zav}) and numerical simulations, $\\overline{\\langle\n\\tau_m\\rangle}\\approx 2860$. Then we computed $\\delta\n\\tau_m\/\\overline{\\langle \\tau_m\\rangle}$ using both Eq.~(\\ref{z2})\nand numerical simulations. For the chosen parameters\nequation~(\\ref{z2}) gives $\\delta \\tau_m\/\\overline{\\langle\n\\tau_m\\rangle}\\approx 1.68$, while the numerical simulations yield\n$\\delta \\tau_m\/\\overline{\\langle \\tau_m\\rangle}\\approx 1.67$.\nClearly the results are very close to each other and the small\ndiscrepancy can be attributed to the discretization error. In\nFig.~\\ref{fig_var} we plot the dependence of\n$\\delta\\tau_m\/\\overline{\\langle \\tau_m\\rangle}$ on the system size.\n\\begin{figure}[h]\n\\center\n\\includegraphics[width=9cm]{distrib1d6.eps}\n\\caption{Dependence of the relative width of the distribution\n$\\delta \\tau_m\/\\overline{\\langle \\tau_m\\rangle}$ on the system size.\nSymbols correspond to the actual data, the solid line is a guide\nto the eye, and the dashed line corresponds to the replica result in\nthe thermodynamic limit.}\\label{fig_var}\n\\end{figure}\nIt is clear from the figure that in the thermodynamic limit\n$L\\to\\infty$ the replica result is in excellent agreement with the\nnumerical simulations. We mention that numerical simulations of\n$\\delta \\tau_m$ show very strong finite size effects. Therefore one\nhas to go to very large $L\\gtrsim 50 \\overline{\\langle\n\\tau_m\\rangle}$ in order to approach the thermodynamic limit for the\nwidth of the distribution.\n\nDepending on the system, the quantity $\\overline{\\langle\n\\tau_m^2\\rangle}$ is not always experimentally accessible. For\nexample, in unzipping experiments it is easier to measure the\nthermal average, $\\langle \\tau_m\\rangle$, in each experimental run.\nWe note that this quantity has sample-to-sample fluctuations only\ndue to the presence of disorder. Then the variance of the\ndistribution will be characterized by $\\overline{\\langle\n\\tau_m\\rangle^2}$. The difference between the two expectation values\nis given by $w$ found in Eq.~(\\ref{fav}). Defining $(\\delta\n\\tau_m^{T})^2=\\overline{\\langle \\tau_m\\rangle^2}-\\overline{\\langle\n\\tau_m\\rangle}^{\\,2}$ and using Eqs.~(\\ref{z2}) and (\\ref{fav}) we\nfind that $\\delta \\tau_m^T\/\\overline{\\langle \\tau_m\\rangle}\\approx\n\\sqrt{\\kappa\/2}$ in the weak disorder limit ($\\kappa\\ll 1$) and\n$\\delta \\tau_m^T\/\\overline{\\langle \\tau_m\\rangle}\\approx\n\\sqrt{3}-1\/(\\sqrt{3}\\kappa)$ in the opposite limit $\\kappa\\gg 1$. We\nplot both $\\delta \\tau_m^T$ and $\\delta \\tau_m$ versus the disorder\nparameter $\\kappa$ in Fig.~\\ref{fig_dz}.\n\\begin{figure}[h]\n\\center\n\\includegraphics[width=9cm]{width.eps}\n\\caption{Dependence of the relative width of the\ndistribution on the disorder parameter $\\kappa$. The two curves\ncorrespond to different averaging over temperature and disorder (see\ntext for details). 
The horizontal line at $\\sqrt{3}$ denotes the\nasymptotic value of both $\\delta \\tau_m$ and $\\delta \\tau_m^T$ at\n$\\kappa\\to\\infty$}\\label{fig_dz}\n\\end{figure}\nThe same issue of importance of the order of thermal and disorder\naveraging appears in the calculation of the higher moments of\n$\\tau_m$, becoming irrelevant only in the limit ($\\kappa\\to\\infty$),\nwhich effectively corresponds to the zero temperature case.\n\nBefore concluding this section let us make a few remarks about the\nrebinding transition, i.e., the rezipping that occurs with decreasing force. One can consider a similar setup with a lower\nend of the flux line fixed at the bottom of the columnar pin and the\ntop end is pulled away from the pin with a force $f$. However, now\nwe will be interested in $f>f_c$. Then clearly most of the flux line\nwill be unzipped from the pin except for a portion near the bottom\nend. If $f$ is very large, the length of the bound segment $\\tilde\n\\tau_m$ near the sample boundary is small. However as $f$ decreases and approaches $f_c$ from\nabove, the length of this segment increases and finally diverges at\nthe transition. This rebinding transition can be described in a\nsimilar spirit to the unbinding. For example instead of the free\nenergy (\\ref{fz}) one has to deal with\n\\begin{equation}\n\\mathcal F(\\tilde \\tau_m)=|\\epsilon|\n\\tilde\\tau_m+\\int_{0}^{\\tilde\\tau_m} d \\tau_m' U(\\tau_m').\n\\label{fzr}\n\\end{equation}\nAs we already noted the free energies (\\ref{fz}) and (\\ref{fzr}) are\nequivalent up to an unimportant constant equal to the total disorder\npotential of the pin: $\\int_0^L d\\tau_m' U(\\tau_m')$. We conclude that the unbinding and rebinding\ntransitions for a single flux line on a disordered columnar pin are\nidentical. In other words, statistical properties of $\\tau_m$ for a\ngiven $f=f_c-\\delta f$ are identical to those of $\\tilde\\tau_m$ for\n$f=f_c+\\delta f$.\n\n\n\n\\subsection{Unzipping from a planar defect without excursions.}\n\\label{replica_2D}\n\nWe now generalize the ideas of the previous section to the more\ncomplicated problem of unzipping of a single flux line from a\ndisordered twin plane. As before we ignore excursions out of the\nplane for the bound part of the flux line. Let us consider the\nrebinding transition first. That is we assume that $f$ is slightly\ngreater than $f_c$ and we study the statistics of the\nbound part of the flux line. We again assume that the flux line is\npinned at the bottom of the plane ($\\tau=0$) and unbinds for $\\tau$ larger than some\n$\\tilde\\tau_m$.\n\nThe point disorder potential now depends on the two coordinates\n$\\tau$ and $z$ spanning the twin plane. Using Eq. (\\ref{free_unzip}) the\npartition function reads:\n\\begin{eqnarray}\n&&Z=\\int_0^L d\\tilde\\tau_m \\int Dz(\\tau^\\prime)\n\\exp\\biggl[-{f^2\\over\n2\\gamma}\\tilde\\tau_m-V\\tilde\\tau_m\\nonumber\\\\\n&&~~~-\\beta\\int_0^{\\tilde\\tau_m} d\\tau^\\prime \\left({\\gamma\\over\n2}\\left({dz\\over d\\tau^\\prime}\\right)^2+\n\\mu(\\tau^\\prime,z^\\prime)\\right)\\Biggr],\n\\end{eqnarray}\nwhere $V<0$ is the mean attractive potential of the twin plane and\nwe have dropped the unimportant $L$-dependent factors. As before, we\nassume a Gaussian random noise with zero mean and\n\\begin{equation}\n\\overline{\\mu(\\tau_1,z_1)\\mu(\\tau_2,z_2)}=\\sigma\n\\delta(\\tau_1-\\tau_2)\\delta(z_1-z_2).\n\\end{equation}\nWe also introduce $\\epsilon=-f^2\/(2\\gamma)-V$. Note that for the\nrebinding transition $\\epsilon<0$. 
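Averaging the replicated Boltzmann weight over this Gaussian disorder generates an attractive contact interaction between the replicas: schematically, for trajectories $z_\\alpha(\\tau)$,\n\\[\n\\overline{\\exp\\Big(-\\sum_\\alpha\\int d\\tau\\,\\mu\\big(\\tau,z_\\alpha(\\tau)\\big)\\Big)}=\\exp\\Big({\\sigma\\over 2}\\int d\\tau \\sum_{\\alpha,\\beta}\\delta\\big(z_\\alpha(\\tau)-z_\\beta(\\tau)\\big)\\Big)\\;,\n\\]\nwhere the $\\alpha\\neq\\beta$ terms produce the pairwise attraction $-\\sigma\\sum_{\\beta<\\gamma}\\delta(z_\\beta-z_\\gamma)$ appearing in the Hamiltonian $\\mathcal H_\\alpha$ below, while the diagonal terms give the constant $-\\sigma\\alpha\/2$ shift.\n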
After replicating the partition\nfunction and averaging over point disorder we find\n\\begin{eqnarray}\n&&\\overline{Z^n}=n!\\int\\limits_{0}^L\nd\\tilde\\tau_n\\int\\limits_{\\tilde\\tau_n}^{L}d\\tilde\\tau_{n-1}\\ldots\n\\int\\limits_{\\tilde\\tau_2}^L d\\tilde\\tau_1\\int\nDz_1(\\tau_1^\\prime)\\dots Dz_n(\\tau_n^\\prime)\\nonumber\\\\\n&&~~~~~~~\\exp\\left[\\sum_{\\alpha=1}^n\n\\epsilon\\tilde\\tau_\\alpha+\\!\\!\\!\\int\\limits_{\\tilde\\tau_{\\alpha+1}}^{\\tilde\\tau_{\\alpha}}\n\\!\\!\\!d\\tau_\\alpha^\\prime \\mathcal\nL_{\\alpha}[z_1(\\tau_1^\\prime),\\ldots,\nz_\\alpha(\\tau_\\alpha^\\prime)]\\right],\n\\end{eqnarray}\nwhere we define $\\tilde\\tau_{n+1}\\equiv 0$ and $\\mathcal L_\\alpha$\nis the Euclidean Lagrangian corresponding to the Hamiltonian\n($\\mathcal H_\\alpha$) of $\\alpha$ interacting particles~\\cite{Kardar}:\n\\begin{equation}\n\\mathcal H_\\alpha=-{\\sigma\\over 2}\\alpha-{1\\over\n2\\gamma}\\sum_{\\beta=1}^{\\alpha} {\\partial^2\\over\\partial\nz_\\beta^2}-\\sigma\\sum_{1\\leq\\beta<\\gamma\\leq\\alpha}\n\\delta(z_\\beta-z_\\gamma).\n\\end{equation}\nClose to the rebinding transition, we anticipate\n$\\tilde\\tau_m\\to\\infty$ and thus the mean separation between the\nrebinding times of different replicas $\\tilde\\tau_\\alpha$ and\n$\\tilde\\tau_{\\alpha-1}$ diverges. Therefore the contribution to the\npartition function coming from integration over $\\tau_\\alpha$ will\nbe dominated by the ground state of configurations with $\\alpha$\nreplicas. In this case we can significantly simplify the partition\nfunction and evaluate it analytically:\n\\begin{eqnarray}\n&&\\overline{Z^n}=n!\\int\\limits_{0}^L\nd\\tilde\\tau_n\\int\\limits_{\\tilde\\tau_n}^{L}d\\tilde\\tau_{n-1}\\ldots\n\\int\\limits_{\\tilde\\tau_2}^L d\\tilde\\tau_1\\\\\n&&\\exp\\left[\\sum_{\\alpha=1}^n \\epsilon\\tilde\\tau_\\alpha+(\\mathcal\nE_{\\alpha}-\\mathcal E_{\\alpha-1})\n\\tilde\\tau_{\\alpha})\\right].\\nonumber\n\\end{eqnarray}\nHere $\\mathcal E_\\alpha$ is the ground state energy of $\\mathcal\nH_{\\alpha}$ with a subtracted term linear in $\\alpha$, that just\nrenormalizes $f_c$. Close to the transition $\\epsilon$ is linear\nin the difference $f-f_c$. 
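Indeed, in terms of the clean critical force $f_c=\\sqrt{2\\gamma |V|}$ (which the disorder-induced term linear in $\\alpha$ mentioned above only renormalizes),\n\\[\n\\epsilon=-{f^2\\over 2\\gamma}-V={f_c^2-f^2\\over 2\\gamma}\\simeq {f_c\\over\\gamma}\\,(f_c-f)\n\\]\nnear the transition, and $\\epsilon<0$ corresponds to $f>f_c$.\n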
The energy, $\\mathcal E_\\alpha$, was\ncomputed in Ref.~[\\onlinecite{Kardar}]:\n\\begin{equation}\n\\mathcal E_\\alpha=-{\\sigma^2\\gamma\\over\n12}\\alpha^3=-\\xi\\alpha^3.\n\\end{equation}\nUpon integrating over $\\tilde\\tau_\\alpha$ one obtains\n\\begin{equation}\n\\overline{Z^n}=n!\\prod_{\\alpha=1}^n {1\\over\n|\\epsilon|\\alpha-\\xi\\alpha^3}\\to\\prod_{\\alpha=1}^n {1\\over\n|\\epsilon|-\\xi\\alpha^2}\n\\label{Z_n0}\n\\end{equation}\nThe product above can be reexpressed in terms of $\\Gamma$-functions,\nwhich in turn allows for a straightforward analytic continuation to\n$n\\to 0$:\n\\begin{equation}\n\\overline{Z^n} ={1\\over\n\\xi^n}{1\\over1+n{\\sqrt{\\xi}\\over\\sqrt{|\\epsilon|}}}\n{\\Gamma\\left({\\sqrt{|\\epsilon|}\\over\\sqrt{\\xi}}-n\\right)\\over\n\\Gamma\\left({\\sqrt{|\\epsilon|}\\over\\sqrt{\\xi}}+n\\right)}.\n\\label{Z_n1}\n\\end{equation}\nUsing this expression we obtain the free energy and the mean\nlength of the localized segment:\n\\begin{equation}\n\\mathcal F=-\\lim_{n\\to 0}{\\overline{Z^n}-1\\over n}=\\ln\n\\xi+{\\sqrt{\\xi}\\over\\sqrt{|\\epsilon|}}+2\\Psi\\left({\\sqrt{|\\epsilon|}\\over\n\\sqrt{\\xi}}\\right),\n\\label{f_2d}\n\\end{equation}\n\\begin{equation}\n\\overline{\\langle \\tilde\\tau_m \\rangle}={\\partial \\mathcal\nF\\over\\partial |\\epsilon|}=-{\\sqrt{\\xi}\\over\n2|\\epsilon|^{3\/2}}+{1\\over\n\\sqrt{|\\epsilon|\\xi}}\\Psi^{(1)}\\left({\\sqrt{|\\epsilon|}\\over\n\\sqrt{\\xi}}\\right)\n\\label{tau_2d}\n\\end{equation}\nwhere as before $\\Psi^{(n)}(x)$ stands for the $n$th derivative of\nthe digamma function. This expression has the asymptotic behaviors:\n\\begin{eqnarray}\n&&\\overline{\\langle \\tilde\\tau_m \\rangle}\\to\n{1\\over\\epsilon}\\qquad\\quad~ \\xi\\ll\n|\\epsilon|\\nonumber\\\\\n&&\\overline{\\langle \\tilde\\tau_m \\rangle}\\to {\\sqrt{\\xi}\\over\n2|\\epsilon|^{3\/2}} \\quad \\xi\\gg|\\epsilon|.\n\\end{eqnarray}\nThis scaling confirms the crossover between exponents $\\nu=1$ and\n$\\nu=3\/2$ for the rebinding transition to a two-dimensional\ndisordered plane predicted by the simple scaling argument leading to\nEq.~(\\ref{valuenu}).\n\nIn a similar way one can also consider an unzipping transition with\n$f \\leq f_c$. One finds an expression for the partition function\nwhich is identical to (\\ref{Z_n0}) with the substitution\n$\\xi\\to-\\xi$. Note however, that the analytic continuation of the\nproduct (\\ref{Z_n1}) results in a complex partition function and\nhence a complex free energy. It thus appears that the analytic\ncontinuation of the product (\\ref{Z_n0}) to noninteger values of $n$\nis not unique. One can always multiply it by any periodic function\nof $n$, which is equal to unity when the argument is integer. While\nwe were able to find some real-valued analytic continuations of\n$\\overline{Z^n}$ to negative values of $\\xi$, these continuations\ndid not lead to physically sensible results.\n\nBecause of the ambiguity of the analytic continuation and some\napproximations used to derive Eqs.~(\\ref{f_2d}) and (\\ref{tau_2d})\nwe also performed numerical simulations for the vortex unzipping\nfrom a disordered twin plane.\n\nFor numerical simulations we are using the lattice version of the\nmodel, where in each step along the $\\tau$ direction the vortex can\neither move to the left or the right one lattice spacing. Note that\nbecause we neglect excursions the vortex motion occurs strictly\nwithin the plane until the vortex is unbound. 
Then the restricted\npartition function for the bound part of the flux line, $Z(x,\\tau)$,\nwhich sums over the weights of all path leading to $x,\\tau$,\nstarting at $x=0,\\tau=0$ satisfies the recursion\nrelation~\\cite{Kardar}\n\\begin{eqnarray}\n&& Z(x,\\tau+1)=e^{\\mu(x,\\tau+1)}\\big[J Z(x-1,\\tau) +J\nZ(x+1,\\tau)\\nonumber\\\\\n&&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+(1-2J) Z(x,\\tau)\\big].\n\\label{eqz1}\n\\end{eqnarray}\nWe assume that $\\mu(x,\\tau)$ is uniformly distributed in the\ninterval $[-U_0,U_0]$ implying as before the variance $\\sigma=U_0^2\/3$. The\nvariable $J$ controls the line tension. In the continuum limit $J\\ll\n1$ and $U_0\\ll 1$ the equation (\\ref{eqz1}) reduces to the\nSchr\\\"odinger equation:\n\\begin{equation}\n{\\partial Z\\over \\partial\\tau}=-\\mathcal H Z(x,\\tau)\n\\label{Z_tauu}\n\\end{equation}\nwith the Hamiltonian given by Eq.~(\\ref{H}) with $\\gamma=2J$ and\n$f=0$ (there is no force acting on the flux line within the\nplane). We note that even if the parameters of the discrete model are not\nsmall we still expect that Eq.~(\\ref{Z_tauu}) remains valid at long\nlength and time scales. However, the relation between the\nmicroscopic parameters of the discrete model and the parameters of\nthe effective coarse-grained Hamiltonian (\\ref{H}) is more\ncomplicated.\n\nIn our simulations we evaluated numerically the free energy of the\nbound part of the vortex line for each realization of point disorder\nand used the analytical expression for the free energy of the\nunbound part, for which point disorder can be neglected. The latter\nis given by Eq.~(\\ref{free_unzip}). This free energy is controlled\nby a single parameter $f^2\/(2\\gamma)$. Use of the analytic result\n(\\ref{free_unzip}) significantly simplifies calculations of\n$\\overline{\\langle \\tau_m\\rangle}$ and allows us to perform large\nscale simulations.\n\nFirst we verify the scaling (\\ref{nu}) with $\\nu=3\/2$ at the\nunzipping transition. To do this we perform standard finite size\nscaling procedure.\n\\begin{figure}[ht]\n\\center\n\\includegraphics[width=9cm]{scaling_2d_6.eps}\n\\caption{Ratio of the unzipping length\n$\\overline{\\langle\\tau_m\\rangle}$ to the system size $L$ as a\nfunction of $f^2\/2\\gamma$ for different system sizes. Here $f$ is\nthe external force and $\\gamma$ is the line tension of the vortex\n(see Eqs.~(\\ref{free_unzip1} and (\\ref{free_unzip}))). According to\nthe scaling relation (\\ref{scaling1}) the crossing point corresponds\nto the unzipping transition. In simulations the parameters of the\nmicroscopic model (\\ref{eqz1}) are chosen to be $J=0.2$, $U_0=2$}\n\\label{fig3}\n\\end{figure}\nIn Fig.~\\ref{fig3} we show dependence of the ratio\n$\\overline{\\langle\\tau_m\\rangle}\/L$ on the parameter $f^2\/(2\\gamma)$\nfor four different sizes. As we expect from the scaling relation\n(\\ref{scaling1}) the three curves intersect at the same point\ncorresponding to the unzipping transition ($g_0\\approx 0.7$). Once\nwe determine the crossing point corresponding to the critical force\n$f_c$ we can verify the scaling relation (\\ref{scaling1}) with\n$\\nu=3\/2$.\n\\begin{figure}[ht]\n\\center\n\\includegraphics[width=9cm]{scaling_2d_1.eps}\n\\caption{Data collapse of $\\overline{\\langle\\tau_m\\rangle}\/L$ as a\nfunction of $\\epsilon L^{1\/\\nu}$ with the exponent $\\nu=3\/2$ for two\ndifferent system sizes (see Eq.~(\\ref{scaling1})). The parameters of\nthe model are the same as in Fig.~\\ref{fig3}. 
The inset shows\nthe derivative of $\\overline{\\langle\\tau_m\\rangle}$ with respect to\n$\\epsilon$ for $L=12800$. Clearly the scaling function is asymmetric\nwith respect to $\\epsilon\\to -\\epsilon$. Thus the unbinding and\nrebinding transitions are not equivalent.}\n\\label{fig:collapse2D}\n\\end{figure}\nIn Fig.~\\ref{fig:collapse2D} we plot\n$\\overline{\\langle\\tau_m\\rangle}\/L$ versus the scaling parameter\n$\\epsilon L^{1\/\\nu}$ (see Eq.~(\\ref{scaling1})) with $\\nu=3\/2$ for\ntwo different system sizes. Clearly the data collapse is nearly\nperfect, which confirms the validity of the scaling (\\ref{scaling})\nwith $\\nu=3\/2$ for the unzipping of a flux line from a twin plane.\nThe inset shows the derivative of $\\overline{\\langle\\tau_m\\rangle}$\nwith respect to $\\epsilon$. Clearly this derivative is asymmetric\nwith respect to $\\epsilon\\to -\\epsilon$, implying that there is no\nsymmetry between the unbinding and rebinding transitions. This is\ncontrary to the unzipping from a columnar pin with no excursions,\nwhere such a symmetry does exist.\n\nNext we turn to verifying the analytic prediction for\n$\\overline{\\langle\\tau_m\\rangle}$, Eq.~(\\ref{tau_2d}). As we argued\nabove, the parameter $\\xi$ describing the disorder strength can be\neasily extracted from the microscopic parameters of the model only in\nthe continuum limit $U_0\\ll 1$, $J\\ll 1$. Unfortunately, it is not\npossible to do simulations directly in the continuum limit ($J\\ll 1$\nand $U_0\\ll 1$). Indeed, as Eq.~(\\ref{tau_2d}) suggests, in order to\nsee the scaling exponent $\\nu=3\/2$ one needs to go to length scales much\nlarger than $1\/\\xi$, where $\\xi=\\sigma^2 J\/12=U_0^4 J\/36$. If\n$J\\ll 1$ and especially $U_0\\ll 1$ then one has to simulate\nextremely large system sizes, with $L$ larger than $10^7$ for\n$U_0=0.1$ and $J=0.1$. Therefore we perform simulations in the regime\nwhere $J$ and especially $U_0$ are appreciable. We then\nregard $\\xi$ as a fitting parameter of the model which should be\nroughly equal to $U_0^4 J\/36$.\n\\begin{figure}[ht]\n\\center\n\\includegraphics[width=9cm]{scaling_2d_5.eps}\n\\caption{Dependence of the length of the part of the flux line bound\nto the twin plane, $L-\\overline{\\langle\\tau_m\\rangle}$, on $\\epsilon$\nfor the rebinding transition. Different curves correspond to\ndifferent system sizes. The solid black line is the best single-parameter\nfit using Eq.~(\\ref{tau_2d}) with $\\xi$ being the\nfitting parameter.}\n\\label{fig:replica1}\n\\end{figure}\nIn Fig.~\\ref{fig:replica1} we show the numerically obtained dependence\nof the rezipping length $L-\\overline{\\langle\\tau_m\\rangle}$ on the\ndetuning parameter $\\epsilon$ for different system sizes. The solid\nblack line is the best single-parameter fit to the data using the\nanalytic expression (\\ref{tau_2d}). The fitting parameter $\\xi$\nfound from the simulations is $\\xi \\approx 0.036$, while the continuum\nestimate $U_0^4 J\/36$ gives $\\xi \\approx 0.089$, which is very\nreasonable given that this estimate is valid only at $U_0\\ll 1$. We\nalso performed similar simulations for $U_0=1.5$ and got a very good\nfit with (\\ref{tau_2d}) for $\\xi=0.018$, while the continuum\nestimate gives $\\xi \\approx 0.028$. We thus see that indeed as\n$U_0$ decreases the fitting parameter $\\xi$ becomes closer to the\ncontinuum expression.\n\nWhile we were not able to derive a closed analytic expression for\n$\\overline{\\langle\\tau_m\\rangle}$ for the unbinding transition, we\nperformed numerical simulations. 
As the inset in\nFig.~\\ref{fig:collapse2D} suggests, the transition is highly\nasymmetric. In fact this asymmetry persists in the thermodynamic\nlimit.\n\\begin{figure}[ht]\n\\center\n\\includegraphics[width=9cm]{scaling_2d_7.eps}\n\\caption{Comparison of the dependences of\n$L-\\overline{\\langle\\tau_m\\rangle}$ for the rebinding transition and\n$\\overline{\\langle\\tau_m\\rangle}$ for the unbinding transition on\n$|\\epsilon|$. We used the parameters of Fig.~\\ref{fig3} with\n$L=51200$. The finite size effects are negligible on the scale of\nthe graph. Both curves interpolate between $1\/|\\epsilon|$ dependence\nat $|\\epsilon|\\gg \\xi$ and $C\/|\\epsilon|^{3\/2}$ at $|\\epsilon|\\ll\n\\xi$. However, the prefactor $C$ for the unbinding transition is\nabout three times larger than for the rebinding.}\n\\label{fig:unbind_rebind}\n\\end{figure}\nIn Fig.~\\ref{fig:unbind_rebind} we plot\n$L-\\overline{\\langle\\tau_m\\rangle}$ for the rebinding transition and\n$\\overline{\\langle\\tau_m\\rangle}$ for the unbinding versus\n$|\\epsilon|$. Both curves interpolate between a $1\/|\\epsilon|$\ndependence at weak disorder ($|\\epsilon|\\gg \\xi$) and a\n$C\/|\\epsilon|^{3\/2}$ dependence at strong disorder ($|\\epsilon|\\ll\n\\xi$). However, the prefactor $C$ in front of $1\/|\\epsilon|^{3\/2}$\nis larger for the unzipping transition.\n\n\\subsection{Unzipping from a hard wall}\n\\label{Bethe_ansatz}\n\nAs the next step we consider unzipping from an attractive hard wall\nin $d=1+1$ dimensions with point disorder in the bulk. Our method\nis a straightforward generalization of the Bethe ansatz solution\nfound by Kardar in the absence of the external force~\\cite{Kardar}.\nThe system is illustrated in Fig.~\\ref{Bethe}. Here the potential\nexperienced by the flux line, $V(x)$, has a short-ranged attractive\npart and an impenetrable core at $x=0$. While the scaling argument\nis unchanged in this case, this problem has the merit of being\nexactly solvable within the replica approach. Since most details of\nthe calculation are identical to those presented in\nRef.~[\\onlinecite{Kardar}], here we only outline the solution.\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[scale=0.7]{unzipwall.eps}\n\\caption{\\label{Bethe} An illustration of the setup considered in\nthe Bethe Ansatz calculation. The flux line is restricted to the\nhalf plane and a MFM tip is acting on it at the top of the sample.}\n\\end{figure}\nAfter replicating the free energy Eq.~(\\ref{f0dis}) along with the\ncontribution from the external field Eq.~(\\ref{eq:unzipfe}) and\naveraging over the disorder, the replicated sum over all path weights connecting points $(0,0)$ and $(x,\\tau)$,\n$\\overline{Z^n(x,\\tau)}$, can be calculated from\n\\begin{equation}\n\\partial_\\tau \\overline{Z^n(x,\\tau)} = -{\\cal H} \\overline{Z^n(x,\\tau)} \\;,\n\\end{equation}\nwith the initial condition $\\overline{Z^n(x,0)}=\\delta(x)$. The\nreplicated system describes $n$ {\\it attractively interacting\nbosons} with a non-Hermitian Hamiltonian ${\\cal H}$ given by\n\\begin{eqnarray}\n{\\cal H}&=& \\sum_{\\alpha=1}^n \\left[ -\\frac{1}{2\\gamma}\n \\partial^2_{x_\\alpha}-f\n\\partial_{x_\\alpha}+ V(x_\\alpha) \\right]\\nonumber\n\\\\&-&\\sigma \\sum_{\\alpha < \\beta}\n\\delta(x_\\alpha-x_\\beta) -\\frac{1}{2} \\sigma n \\;.\n\\end{eqnarray}\n\nIn Ref.~[\\onlinecite{Kardar}] the problem was solved for $f=0$ using\nthe Bethe Ansatz. 
The boundary conditions were that the ground state\nwave function should vanish at large $x$ and should decay as\n$\\exp(-\\lambda x)$ for the particle closest to the wall. One then\nfinds that for the permutation ${\\bf P}$ of particles such that\n$0<x_{P1}<x_{P2}<\\dots<x_{Pn}$ the ground state wave function takes the form\n$\\Psi_{f=0}=\\exp\\left(-\\sum_{\\alpha=1}^n \\kappa_\\alpha x_{P\\alpha}\\right)$\nwith $\\kappa_\\alpha=\\lambda+2(\\alpha-1)\\kappa$, where the parameter $\\kappa$ is set by the strength\n$\\sigma$ of the point disorder. In the replica limit $n\\to 0$ the flux line is bound to the wall for weak\ndisorder ($\\kappa<\\lambda$), while for strong disorder ($\\kappa > \\lambda$) it is unbound.\n\nThe ground state wave function for the {\\it non-zero} value of the force\ncan be obtained by noting that the non-Hermitian term acts like an\nimaginary vector potential. In particular, it can be gauged away when\nthe vortices are bound to the wall as discussed in Sec.\n\\ref{sectioncleancase} (see Eqs.~(\\ref{eq:gauge1}) and\n(\\ref{eq:gauge2})). This imaginary gauge transformation gives\n\\begin{equation}\n\\Psi_{f}=\\Psi_{f=0}\\exp\\left( \\sum_{\\alpha=1}^n fx_\\alpha \\right)\n\\;,\n\\end{equation}\nwhich implies that the solution is\n\\begin{equation}\n\\Psi_{f}=\\exp \\left(-\\sum_{\\alpha=1}^n \\tilde{\\kappa}_\\alpha x_{P\n\\alpha}\\right) \\;,\n\\end{equation}\nwith $\\tilde{\\kappa}_\\alpha = \\lambda+2(\\alpha-1)\\kappa-f$. The\neffect of the force is simply to shift all the $\\kappa_\\alpha$'s by a\nconstant. The average localization length (which satisfies near the\ntransition $\\langle x _m\\rangle \\simeq f_c \\langle \\tau_m \\rangle \/\n\\gamma$) is then given by\n\\begin{equation}\n\\langle x_m \\rangle={1\\over \\tilde Z_n n}\\int_0^\\infty\n\\prod_{j=1}^n dx_j \\left[\\sum_{j=1}^n x_j\\right]\\,\\Psi_f(x_j),\n\\label{eq:Kardarresult}\n\\end{equation}\nwhere $\\tilde Z_n=\\int_0^\\infty \\prod_{j=1}^n dx_j \\Psi_f(x_j)$.\nNote that the normalization factor $\\tilde Z_n$ in the equation\nabove is formally equivalent to the partition function (\\ref{tuam2})\nfor the unzipping from a columnar pin without excursions if we\nidentify $\\lambda-\\kappa-f$ with $\\epsilon$ and $\\kappa$ with\n$\\Delta\/2$. This equivalence implies that $\\langle x_m\\rangle$ for\nthe unzipping from a hard wall has the same statistical properties\nas $\\overline{\\langle\\tau_m\\rangle}$ for the unbinding from a\ncolumnar pin (for more details see Ref.~[\\onlinecite{kp}]). In particular, the\nunzipping problem has a crossover from $\\langle x_m\\rangle \\sim\n1\/(f_c-f)$ for $\\lambda-f\\gg\\kappa$ to $\\langle x_m\\rangle \\sim 1\/\n(f_c-f)^{3\/2}$ in the opposite limit.\n\nThis example confirms another prediction of\nthe simple scaling argument: the critical exponents for the\nunbinding transition are determined only by the dimensionality of\nthe defect even if the disorder is also present in the bulk of the system.\n\n\\subsection{Unzipping from a columnar pin with excursions into the bulk.}\n\\label{sec:numerics}\n\nIn this section we consider a setup similar to that of Sec.~\\ref{replica},\nnamely unzipping from a columnar defect in $d=1+1$ dimensions, but\nallowing excursions of the flux line into the bulk (see\nFig.~\\ref{fig:unzip_1D}).\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=9cm]{unzip.eps}\n\\caption{A setup illustrating unzipping from a\ncolumnar pin in $d=1+1$ dimensions with excursions into the bulk.}\n\\label{fig:unzip_1D}\n\\end{figure}\nUnfortunately there is no analytic solution available for this\nproblem. Therefore we present only numerical results. As in\nSec.~\\ref{replica_2D} we consider a lattice version of the model\nwhere in each step along the $\\tau$ direction the vortex can either\nmove to the left or to the right by one lattice spacing. The attractive\npotential was placed at $x=0$. 
The restricted partition function of\nthis model, $Z(x,\\tau)$, which sums over the weights of all path\nleading to $x,\\tau$, starting at $x=0,\\tau=0$ satisfies the\nrecursion relation~\\cite{Kardar}:\n\\begin{eqnarray}\n&&Z(x,\\tau+1)=\\delta_{x,0}(e^{V}-1)Z(0,\\tau) \\nonumber\\\\\n&&+e^{\\mu(x,\\tau+1)}\\left[J e^f Z(x-1,\\tau) +J\ne^{-f}Z(x+1,\\tau)\\right]. \\label{eqz}\n\\end{eqnarray}\nSimilarly to Eq.~(\\ref{eqz1}) we assume that $\\mu(x,\\tau)$ is\nuniformly distributed in the interval $[-U_0,U_0]$ implying the\nvariance $\\sigma=U_0^2\/3$. The variable $J$ controls the line\ntension, $V$ is the attractive pinning potential, and $f$ is proportional\nto the external force. In the continuum limit $J\\ll 1$, $f\\ll 1$,\nand $U_0\\ll 1$, equation (\\ref{eqz}) reduces to the Schr\\\"odinger\nequation:\n\\begin{equation}\n{\\partial Z\\over \\partial\\tau}=-\\mathcal H Z(x,\\tau)\n\\label{Z_tau}\n\\end{equation}\nwith the Hamiltonian given by Eq.~(\\ref{H}) with $\\gamma=2J$.\n\nFor the simulations we have chosen particular values of $J=0.1$ and\n$V=0.1$. As before we work in units such that $k_B T=1$. In the\nresults described below the partition function was evaluated for\neach variance of the disorder for several systems of finite width\n$w=2L_x$ averaging over the time-like direction (typically $\\tau\n\\simeq 10^6$ ``time'' steps) with the initial condition $Z(0,0)=1$\nand $Z(x,0)=0$ for $x \\neq 0$.\n\nTo analyze the numerics we performed a finite size scaling analysis. In the spirit of Eq.~(\\ref{nu}), in the vicinity of the transition we\nexpect the scaling form (compare Eq.~(\\ref{scaling1})):\n\\begin{equation}\n\\overline{\\langle\\tau_m\\rangle}=L_x \\Phi\\left[L_x(f_c-f)^\\nu\\right],\n\\label{scaling}\n\\end{equation}\nwhere $\\Phi$ is some scaling function.\nBased on the results of previous sections we anticipate a smooth\ninterpolation between scaling exponents $\\nu=1$ and $\\nu=2$ with either\nincreasing $L_x$ or increasing strength of disorder at fixed $L_x$.\nTo perform the finite size scaling we obtain for each value of $L_x$\na value for the exponent $\\nu$ from the best collapse of the\nnumerical data of two systems sizes $L_x$ and $L_x\/2$. In\nFig.~\\ref{fig1} we plot $1\/\\nu$ as a function of the system size\n$L_x$. As can be seen the data is consistent with $\\nu$ saturating\nat $\\nu=2$ for large systems. The crossover to $\\nu=2$ is much more\nrapid if the point disorder is enhanced near the columnar pin (see\nthe inset in Fig.~\\ref{fig1}), as might be expected for damage\ntracks created by heavy ion radiation.\n\\begin{figure}\n\\center\n\\includegraphics[width=8.5cm]{scaling_1dv2.eps}\n\\caption{Effective exponent $1\/\\nu$ versus $L_x$ for a fixed\nstrength of point disorder $\\sigma=0.03$. The results are\nconsistent with the general argument that this exponent should\nsaturate at $\\nu=2$ as $L_x\\to\\infty$. The inset shows the same\nexponent vs $\\sigma_c$, the variance of additional point disorder\nplaced directly on the columnar pin extracted from two system\nsizes $L_x=600$ and $L_x=1200$. It appears that $\\nu\\to 2$ as\n$\\sigma_c$ increases.} \\label{fig1}\n\\end{figure}\n\nNext, we test the behavior of the critical force as the\ndisorder strength is increased. According to our discussion in Sec.\n\\ref{scalingphasediagram}, we anticipate that in the absence of an\nexternal force the flux line is always bound to the pin in $1+1$\ndimensions. 
This is in contrast with the problem of unzipping from\nthe wall discussed in the previous section, where there is a\ncritical strength of the disorder, $\\sigma_c$, which leads to an\nunbinding transition for $f=0$. Note that the existence of a critical value\nof the disorder is a direct consequence (see discussion in Sec.\n\\ref{scalingphasediagram}) of the excursions of the vortex from the\ndefect which, as argued above, do not modify the critical behavior\nof the unzipping transition. The existence of a critical value of\nthe disorder is therefore strongly dependent on the dimensionality\nof the problem.\n\nIn numerical simulations for each strength of disorder we determine\nthe critical force plotting the ratio\n$\\overline{\\langle\\tau_m\\rangle}\/L_x$ for two different sizes $L_x$\nand using the scaling relation~(\\ref{scaling}). Note that this ratio\ndoes not depend on $L_x$ at $f=f_c$ (see also the discussion in\nSec.~\\ref{replica_2D}). We checked that this is indeed the case.\nUpon repeating this procedure for different disorder strengths we\nobtain the dependence $f_c(U_0)$ which is plotted in\nFig.~\\ref{fig8}.\n\\begin{figure}[ht]\n\\hspace{0.5cm}\n\\includegraphics[bb=1cm 1cm 20cm 25cm, scale=0.38, angle=90]{crit_f.eps}\n\\caption{Critical force for unzipping from a columnar defect in\n$1+1$ dimensions as a function of the disorder\nstrength.}\\label{fig8}\n\\end{figure}\nThe graph suggests that there is no unbinding transition at zero\ntilt at any strength of disorder consistent with the scaling\nargument presented in Sec.~\\ref{scalingphasediagram} and those of\nRef.~[\\onlinecite{HwaNatter}]. We point out that the strongest disorder\nshown in the graph $U_0=0.9$ required samples quite extended in the\ntime-like direction, $L_\\tau\\approx 10^8$.\n\n\n\\section{Unzipping a Luttinger liquid}\n\\label{sec:Lutunzip}\n\nWe now turn to consider the effect of interactions on the unzipping\nof single vortices. To do this we study a system where the vortices\nare preferentially bound to a thin two-dimensional slab which is\nembedded in a three-dimensional sample so that the density of\nvortices in the slab is much higher than in the bulk.\nExperimentally, this setup could be achieved using, for example, a\ntwin plane in YBCO or by inserting a thin plane with a reduced lower\ncritical field $H_{c1}$ (with, for example, molecular beam epitaxy)\ninto a bulk superconductor. The scenario we analyze is one where a\nMFM is used to pull a single vortex out of the two-dimensional slab\n(see Fig. \\ref{fig9}). The physics of the vortices confined to two\ndimensions is well understood and is analogous to a spinless Luttinger\nliquid of bosons (see, e.g. Ref.~[\\onlinecite{AHNS}]).\n\nAs we show below the dependence of the displacement of the vortex\nfrom the two-dimensional slab on the force exerted by the MFM\ndepends on the physics of the two-dimensional vortex liquid which\nresides in the slab. Specifically, the critical properties of the\nunbinding transition depend on the ``Luttinger liquid parameter''\nwhich controls the large-distance behavior of the vortex liquid. The\nexperimental setup can thus be used to probe the two-dimensional\nphysics of the vortices in the slab.\n\n\\begin{figure}[ht]\n\\includegraphics[scale=0.6]{UnzipLuttinger.eps}\n\\caption{Possible experimental setup for studying unzipping from\nLuttinger Liquid. A MFM is used to pull a single vortex out of a\nplane where the vortices are confined. 
The measured quantity is the\ndistance of the pulled vortex from the confining plane as a function\nof the force $f$. }\\label{fig9}\n\\end{figure}\n\\subsection{Two-dimensional vortex liquids}\n\nThe physics of vortices in two dimensions is very well understood.\nThe vortices form a one-dimensional array located at position\n$x_i(\\tau)$. The density profile of the vortices is then given by\n\\begin{equation}\n n(x,\\tau)=\\sum_j \\delta \\left[ x-x_j(\\tau)\\right] \\;,\n\\end{equation}\nwhere $x$ and $\\tau$ denote transverse and longitudinal coordinates\nwith respect to the vortices and $i$ is an index labeling the\nvortices. By changing variable into the phonon displacement field\n$u_j$ through $x_j(\\tau)=a\\left[j+u_j(\\tau)\\right]$, where $a$ is\nthe mean distance between vortex lines the free-energy of a\nparticular configuration can be written as:\n\\begin{equation}\n {\\cal F}_0=\\frac{a^2}{2} \\int dx d\\tau \\left[ c_{11}\n (\\partial_x u)^2 + c_{44} (\\partial_\\tau u)^2\\right] \\;.\n\\end{equation}\nHere $c_{11}$ and $c_{44}$ are the compressional and the tilt moduli\nrespectively. After rescaling the variables $x$ and $\\tau$ according\nto\n\\begin{equation}\n x \\to x \\left(\\frac{c_{11}}{c_{44}}\\right)^{1\/4} \\;\\; ,\n \\; \\tau \\to \\tau \\left(\\frac{c_{44}}{c_{11}}\\right)^{1\/4} \\;,\n\\end{equation}\nthe free energy takes the isotropic form\n\\begin{equation}\n {\\cal F}_0=\\frac{A}{2}\\int dx d\\tau\n \\left[ (\\partial_x u)^2 + (\\partial_\\tau u)^2\\right]\n\\end{equation}\nwith $A=a^2\\sqrt{c_{11}c_{44}}$. The partition function is then\ngiven by the functional integral\n\\begin{equation}\n Z=\\int D u(x,\\tau) e^{-S} \\;,\n\\end{equation}\nwith $S=S_0={\\cal F}_0\/T$. In the limit of large sample sizes in the\n``timelike'' direction one can regard $Z$ as the zero temperature\npartition function of interacting bosons~\\cite{AHNS}. In this\nlanguage the imaginary time action can be written as\n\\begin{equation}\n S_0=\\frac{\\pi}{2g}\\int dx d\\tau\n \\left[ (\\partial_x u)^2 + (\\partial_\\tau u)^2\\right] \\;.\n \\label{freeaction}\n\\end{equation}\nHere we set $\\hbar=1$ and identified the Luttinger-liquid parameter,\n$g$, as\n\\begin{equation}\n g=\\frac{\\pi T}{A} \\;.\n \\label{Lutpara}\n\\end{equation}\nThe Luttinger-liquid parameter controls the long-distance properties\nof the model. For vortices $g$ it is a\ndimensionless combination of the compressional and tilt moduli, the\ndensity of vortices and temperature.\n\nVarious properties of Luttinger liquids are well understood. For\nexample, the correlation function for the density fluctuations\n$\\delta n(x,\\tau)=n(x,\\tau)-n_0$, where $n_0=1\/a$ is the mean\ndensity, obeys\n\\begin{equation}\n \\langle \\delta n(x,\\tau) \\delta n(0,0) \\rangle \\simeq\n \\frac{\\cos\\left( 2 \\pi n_0 x \\right)}{(x^2+\\tau^2)^g} \\;.\n\\end{equation}\nThere is quasi long-range order in the system and the envelope of\nthe density correlation function decays as a power law with the exponent\ndepending only on $g$. As we show below, $g$ can be probed by\nunzipping a single vortex out of a plane which contains a $(1+1)$-dimensional vortex liquid.\n\nIn what follows we also consider the case where there is point\ndisorder present in the sample. The behavior will be strongly influenced by the behavior of the vortices\nin two dimensions in the presence of disorder. This problem has been\nstudied in some detail in the past (see e.g. Ref.~[\\onlinecite{pkn}] and\nreferences therein). 
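As a simple illustration of Eq.~(\ref{Lutpara}), the short sketch below evaluates the Luttinger-liquid parameter from the elastic moduli and the mean line spacing; the numerical values are arbitrary placeholders rather than estimates for any particular material.
\begin{verbatim}
import numpy as np

def luttinger_g(T, a, c11, c44):
    # g = pi*T/A with A = a^2*sqrt(c11*c44), cf. Eq. (Lutpara); k_B = 1
    A = a**2 * np.sqrt(c11 * c44)
    return np.pi * T / A

g = luttinger_g(T=1.0, a=1.0, c11=4.0, c44=9.0)   # ~0.52 here
# the envelope of the density correlations then decays as (x^2+tau^2)^(-g)
\end{verbatim}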
Here we briefly review features which will be\nimportant in analyzing the unzipping problem. The most relevant (in\nthe renormalization group sense) contributions to the action from\nthe point disorder is\n\\begin{equation}\n S_{PD}=2\\int dx d\\tau R(x,\\tau)\n \\cos \\left[2 \\pi u(x,\\tau) +\\beta(x,\\tau) \\right] \\;,\n\\end{equation}\nwhere positive (negative) $R$ implies a repulsive (attractive)\npotential between the vortices and the quenched random disorder. We assume, for simplicity, that $\\beta(x,\\tau)$ is\ndistributed uniformly between $0$ and $2 \\pi$ and $R(x,\\tau)$ has a\nan uncorrelated Gaussian distribution with the variance $\\Delta_0$:\n\\begin{equation}\n \\overline{R(x_1,\\tau_1)R(x_2,\\tau_2)}=\n \\Delta_0 \\delta(x_1-x_2)\\delta(\\tau_1-\\tau_2) \\;,\n\\end{equation}\nwhere the overbar, as before, represents averaging over disorder.\n\nTo analyze the disordered problem, similar to the single vortex case,\nwe use the replica trick. Then the replicated noninteracting part of\nthe action becomes\n\\begin{equation}\n S_0=\\frac{\\pi}{2g} \\sum_{\\alpha,\\beta} \\int\n \\int dx d\\tau \\left[ \\frac{\\partial u_\\alpha}{\\partial \\tau}\n \\frac{\\partial u_\\beta}{\\partial \\tau} +\\frac{\\partial u_\\alpha}{\\partial x}\n \\frac{\\partial u_\\beta}{\\partial x} \\right] \\left[ \\delta_{\\alpha,\\beta}\n - \\frac{\\kappa}{g} \\right] \\;.\n\\end{equation}\nHere $u_\\alpha(x,\\tau)$ is the replicated phonon field and $\\kappa$\nis an off-diagonal coupling which is zero in the bare model but is\ngenerated by the disorder. It plays the role of a quenched random\n``chemical potential'' which is coupled to the first derivative of\nthe phonon field $u$. The replica indices, $\\alpha$ and $\\beta$ run\nfrom $1$ to $n$ and at the end of the calculation one takes the\nlimit $n \\to 0$. After replication the contribution from the point\ndisorder becomes\n\\begin{equation}\n S_{PD}=-\\Delta_0 \\sum_{\\alpha,\\beta} \\int \\int dx d\\tau\n \\cos 2 \\pi \\left[ u_\\alpha (x,\\tau) - u_\\beta (x,\\tau) \\right] \\;.\n\\end{equation}\nThe combined action can be treated within the renormalization group\nusing a perturbation series near $g=1$ where a phase transition\nbetween a vortex liquid and a vortex glass\noccurs~\\cite{fisher_v_glass}. By continuously eliminating degrees of\nfreedom depending on frequency and momentum within the shell\n$\\Lambda - \\delta \\Lambda < \\sqrt{\\omega^2+q^2} < \\Lambda$, one\nobtains the following renormalization group equations~\\cite{Cardy,\npkn}\n\\begin{eqnarray}\n \\frac{dg}{dl}&=&0 \\\\\n \\frac{d \\Delta}{dl}&=&2(1-g) \\Delta - 2 C \\Delta^2 \\\\\n \\frac{d \\kappa}{dl}&=&C^2 \\Delta^2\n\\end{eqnarray}\nHere $l$ is the flow parameter $\\Lambda(l)=\\Lambda e^{-l}$. $C$ is\na non-universal constant which depends on the cutoff $\\Lambda$. The\nequations are subject to the initial conditions $\\kappa(l=0)=0$ and\n$\\Delta(l=0)=\\Delta_0$. Note that the Luttinger liquid parameter is\nnot renormalized. Analyzing the flow equations it has been shown\nthat in the vortex liquid phase ($g>1$) the correlation of the\ndensity fluctuation behaves in the vortex liquid phase as\n\\begin{equation}\n \\langle \\delta n(x,\\tau) \\delta n(0,0) \\rangle\n \\simeq \\frac{1}{(x^2+\\tau^2)^{g+\\tilde{\\kappa}\/2}} \\;,\n\\end{equation}\nwhere $\\tilde{\\kappa}$ is a nonuniversal exponent. 
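For orientation, the flow equations above are easily integrated numerically; the minimal sketch below uses a forward Euler step with placeholder values for $g$, $C$ and $\Delta_0$ (the latter two are non-universal). For $g>1$ the point-disorder coupling $\Delta$ flows to zero while $\kappa$ saturates at a finite value $\kappa(\infty)$, the quantity that enters the free-energy fluctuations discussed below.
\begin{verbatim}
g, C = 1.2, 1.0                  # placeholder values (C is non-universal)
Delta, kappa = 0.05, 0.0         # Delta(l=0) = Delta_0, kappa(l=0) = 0
dl, n_steps = 1e-3, 20000

for _ in range(n_steps):
    dDelta = (2.0 * (1.0 - g) * Delta - 2.0 * C * Delta**2) * dl
    dkappa = (C**2 * Delta**2) * dl
    Delta += dDelta
    kappa += dkappa
# for g > 1, Delta flows to zero while kappa saturates at kappa(infinity)
\end{verbatim}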
In the glass\nphase ($g<1$) correlations decay faster than a power law, with\n\\begin{equation}\n \\langle \\delta n(x,\\tau) \\delta n(0,0) \\rangle\n \\simeq \\exp \\left( -(1-g)^2 \\ln^2 \\sqrt{x^2+\\tau^2}\\right) \\;.\n\\end{equation}\n\nIn what follows we consider a setup in which a two dimensional array\nof vortices, whose properties have been described above, is embedded\nin a three dimensional bulk sample. As shown below when a single vortex\nis unzipped into the bulk in a clean sample the critical properties\nof the unzipping transition yield information on the properties of\nthe two dimensional vortex liquid. In particular, they provide a\ndirect measure of the Luttinger-liquid parameter. In the same setup\nin a disordered sample we will show that the critical properties of\nthe unzipping transition will be modified. In particular, they can\nyield information on the on the three-dimension wandering exponent\nof a single vortex in a disordered sample.\n\n\n\\subsection{Unzipping a Luttinger liquid: The clean case}\nConsider first an experiment where an attractive two-dimensional\npotential holds vortices confined to it. A MFM then pulls a {\\it\nsingle} vortex out of the plane (see Fig. \\ref{fig9}). We assume\nthroughout that the density of vortices in the three dimensional bulk\nis so small that we can neglect interactions between the vortex that\nis pulled out of the sample and vortices in the three dimensional\nbulk. In this subsection only the clean case (no point disorder) will be studied.\n\nWe assume the MFM exerts a force ${\\bf f}=f \\hat{x}$. As in the\nunzipping experiments discussed above we expect that for large\nforces $f>f_c$ the vortex will be completely pulled out of the two\ndimensional slab. Similar to the case of the unzipping of a single\nvortex we write the free energy of the vortex as a sum of two\ncontributions. The first, ${\\cal F}_u(\\tau_m)$, arises from the part\nof the vortex that is outside the two dimensional slab. The second\n${\\cal F}_b(\\tau_m)$ is the change in the free-energy of the\nvortices that remain inside the two dimension slab. As before\n$\\tau_m$ is the length along the $\\tau$ direction which is unbound\nfrom the two-dimensional slab. The free-energy of the unzipped part\nis clearly identical to that calculated in Eq.~\\ref{free_unzip} or\nexplicitly\n\\begin{equation}\n {\\cal F}_u(\\tau_m)= - f^2 \\tau_m\/ 2\\gamma \\;.\n \\label{eq:unzupfeagain}\n\\end{equation}\n\nThe calculation of the free-energy, ${\\cal F}_b(\\tau_m)$, is\nsomewhat more involved. Clearly there is a linear contributions due\nto the length $\\tau_m$ removed from the attractive potential of the\nslab. However, in addition there is an extra contribution from the\nenergy of the dislocation, ${\\cal F}_d(\\tau_m)$, (see Fig.\n\\ref{fig9}) created in the two dimensional vortex array. This\ncontribution to the free-energy, as we show below, is {\\it\nnon-linear} and controlled by the Luttinger liquid parameter $g$.\nThis non-linearity results, near the unzipping transition, in a\nsensitivity of the critical properties to the value of $g$.\n\nWe leave the details of the calculation of the dislocation energy to\nAppendix~\\ref{App:dislocation} and present here only the key steps\nof derivation.\n\nIn order to satisfy boundary conditions near the interface one can\nuse the method of images (see Fig.~(\\ref{fig11})). The free energy\nof this dislocation pair can be calculated by standard methods (see\ndetails in Appendix~\\ref{App:dislocation}). 
In particular, at large\n$\\tau_m$ it behaves logarithmically (see e.g. Ref.~[\\onlinecite{chakin}]):\n\\begin{equation}\n {\\cal F}_d=\\frac{T}{4g} \\ln(\\tau_m\/a_0),\n \\label{free_en_dis}\n\\end{equation}\nwhere $a_0$ is the short range cutoff of the order of the distance\nbetween flux lines. We note that the free energy of the dislocation\nnear the interface (\\ref{free_en_dis}) is one half of the free\nenergy of a dislocation pair.\n\nWith the energy of the dislocation in hand we can now analyze the\nproperties of the unzipped length near the transition using the\nmethods used for analyzing the single vortex unzipping experiments.\nThe contributions to the free energy are from the unzipped part of\nthe vortex and the energy of the dislocation. Collecting all the\nrelevant terms, near the transition the free energy is given by\n\\begin{equation}\n {\\cal F}(\\tau_m)={\\cal F}_u(\\tau_m)+{\\cal F}_b(\\tau_m)=\n \\epsilon\\tau_m+\\frac{T}{4g}\\ln(\\tau_m\/a_0)\\;.\n\\end{equation} \nThe probability of finding a certain value of $\\tau_m$ is then given\nby\n\\begin{equation}\n P(\\tau_m) \\propto e^{-F(\\tau_m)\/T}=\\frac{C}{\\tau_m^{1\/(4g)}}e^{-\\epsilon\\tau_m}.,\n\\end{equation}\nwhere $C$ is the normalization constant. At the transition\n$\\epsilon=0$ the distribution becomes a pure power law in $\\tau_m$.\nTherefore, the average value of $\\tau_m$ is very sensitive to the\nvalue of $g$. In particular, for $g>1\/4$ (i.e. for weakly\ninteracting flux lines) the behavior of $\\langle\\tau_m\\rangle$ near\nthe transition is identical to that of a single vortex in the\nabsence of interactions with other vortices\n\\begin{equation}\n \\langle \\tau_m \\rangle \\sim {1\\over\\epsilon} \\;.\n\\end{equation}\nIn contrast, for $1\/8 < g < 1\/4$ (stronger interactions) there is a\ncontinuously varying exponent governing the transition\n\\begin{equation}\n\\langle\\tau_m\\rangle\\sim {1\\over \\epsilon^{2-1\/4g}} \\;.\n\\end{equation}\nAnd finally, for $g<1\/8$ (strongly interacting flux lines) we find\nthat $\\langle \\tau_m\\rangle$ does not diverge near the transition.\nNote that even though in this regime the mean displacement remains\nconstant at the transition the higher moments of $\\tau_m$ diverge\nand are thus sensitive to $\\epsilon$. The reason for this is at the\ntransition the distribution of $\\tau_m$ is a power law.\n\n\n\\subsection{Unzipping from a twin plane with point disorder}\n\\label{3c}\n\nWe now consider the problem of unzipping a vortex from a\nplane with many vortices in the presence of disorder. In the spirit\nof the treatments presented in this paper, one needs to calculate the\nfree-energy of the unzipped part of the vortex ${\\cal F}_u(\\tau_m)$,\nthe free-energy of the bound part of the vortex ${\\cal F}_b(\\tau_m)$\nand the {\\it fluctuations} in both quantities averaged over\nrealizations of disorder. This can be done perturbatively near $g=1$. We again relegate details of the derivation of\nthe dislocation energy to\nAppendix~\\ref{App:dislocation1}. One conclusion from our\ncalculations is that the mean free energy of the dislocation near\nthe boundary is not affected by the disorder and is given by\nEq.~(\\ref{free_en_dis}). 
Another important conclusion is that the\nfluctuations of the free energy also depend logarithmically on\n$\\tau_m$:\n\\begin{equation}\n \\overline{\\delta {\\cal F}^2_d(\\tau_m)}= T^2\\frac{\\kappa(\\infty)}{8g^2} \\ln(\\tau_m\/a_0)\n\\end{equation}\nfor $g>1$ and\n\\begin{equation}\n \\overline{\\delta {\\cal F}^2_d(\\tau_m)}=T^2\\frac{(1-g)^2}{4} \\ln^2(\\tau_m\/a_0)\n\\end{equation}\nfor $g<1$.\n\\begin{figure}[ht]\n\\includegraphics[scale=0.6]{UnzipLuttingerdis.eps}\n\\caption{Possible experimental setup for studying unzipping from\nLuttinger Liquid in the presence of disorder.}\\label{fig10}\n\\end{figure}\nWe note that in the case of many flux lines there is a weak\nlogarithmic dependence of free energy fluctuations on $\\tau_m$ as\nopposed to strong power law dependence in the case of a single flux\nline (compare Eq.~(\\ref{eq:fefluct})). This somewhat surprising\nresult is a consequence of the screening of strong power-law\nfluctuations by other flux lines. We note that if the pinning of\nflux lines by disorder is extremely strong so that tearing a single\nflux line does not affect positions of other lines in the duration\nof experiment, we are back to the single flux line physics and\n$\\overline{\\delta \\mathcal F_d^2}\\propto \\tau_m$.\n\nTo complete the analysis, we need to consider the free-energy\ncontribution from the unzipped part. Of particular of importance are\nthe free-energy fluctuations due to the disorder in the bulk of the\nsample. As discussed in Sec. \\ref{Sec2}, in a three dimensional\nsample these grow as $\\delta {\\cal F}_u \\propto m^{\\omega(3)}$ with\n$\\omega(3) \\simeq 0.22$. This contribution grows much quicker than\nthe contribution from the fluctuations in the free-energy of the\ndislocation. Therefore following the ideas of Sec. \\ref{Sec2} the\ntotal free-energy is given by\n\\begin{equation}\n{\\cal F}(m)=a(f_c-f)\\tau_m -b\\tau_m^{\\omega(3)}\\;.\n\\label{fplane}\n\\end{equation}\nwhere $a$ and $b$ are positive constants. Minimizing Eq. (\\ref{fplane}) gives for the critical properties in\nthis case\n\\begin{equation}\n s \\sim \\frac{1}{(f_c-f)^{1.28}} \\;.\n\\end{equation}\nThus screening disorder fluctuations in the plain by other flux\nlines effectively enhances the role of disorder in the bulk. As the\nresult these unzipping experiments can serve as a probe of the\nthree-dimensional anomalous wandering exponent.\n\n\\acknowledgements\n\nYK was supported by the Israel Science Foundation and thanks the\nBoston University visitors program for hospitality. YK and DRN were\nsupported by the Israel-US Binational Science Foundation. Research\nby DRN was also supported by the National Science Foundation,\nthrough grant DMR 0231631 and through the Harvard Materials Research\nScience and Engineering center through grant DMR0213805. AP was\nsupported by AFOSR YIP.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Learning with Adversarial Contexts}\n\\label{sec:adversarial}\n\n\nIn this section, we focus on the adversarial contexts case, where at time $t\\in [T]$, the contexts $x_{t,1},\\cdots,x_{t,K}$ can be arbitrarily chosen by an adversary who observes all past contexts and rewards, with $\\|x_{t,a}\\|_2\\le 1$ for any $a\\in [K]$. 
\nWe first state the main results in Section~\\ref{subsec:adversarial_main}, which\ncharacterize the upper and lower bounds of the regret.\nWe then give a UCB-based algorithm in the sequential batch setting in Section \\ref{subsec:adversarial_UCB} and describe several important aspects of the algorithm, including a variant that is used for theoretical bound purposes. \nNext, in Section \\ref{subsec.UCB_upperbound}, we show that the proposed sequential batching algorithm achieves the regret upper bound in Theorem \\ref{thm.adversarial} . \nFinally, we prove the regret lower bound in Section \\ref{subsec.adversarial} and therefore establish that the previous upper bound is close to be tight. \n\n\\subsection{Main Results}\\label{subsec:adversarial_main}\n\\begin{theorem}\\label{thm.adversarial}\n\tLet $T$, $M$ and $d$ be the learning horizon, number of batches and each context's dimension, respectively. Denote by $\\mathsf{polylog}(T)$ all the poly-logarithmic factors in $T$.\n\t\\begin{enumerate}\n\t\t\\item Under Assumption~\\ref{aspn.TKd}, there exists a sequential batch learning algorithm \\textbf{Alg}= $({\\mathcal{T}}, \\pi)$, where ${\\mathcal{T}}$ is a uniform grid defined by $t_m = \\lfloor \\frac{mT}{M}\\rfloor$ and $\\pi$ is explicitly defined in Section \\ref{subsec:adversarial_UCB},\n\t\tsuch that:\n\t\t\\begin{align*}\n\t\t\\sup_{\\theta^\\star: \\|\\theta^\\star\\|_2\\le 1} \\mathbb{E}_{\\theta^\\star}[R_T\\left(\\textbf{Alg}\\right)] \\le \\mathsf{polylog}(T)\\cdot \\left(\\sqrt{dT} + \\frac{dT}{M}\\right).\n\t\t\\end{align*}\n\t\t\\item Conversely, for $K=2$ and any sequential batch learning algorithm, we have:\n\t\t\\begin{align*}\n\t\t\\sup_{\\theta^\\star: \\|\\theta^\\star\\|_2\\le 1} \\mathbb{E}_{\\theta^\\star}[R_T\\left(\\textbf{Alg}\\right)] \\ge c\\cdot \\left(\\sqrt{dT} + \\left(\\frac{T\\sqrt{d}}{M}\\wedge \\frac{T}{\\sqrt{M}}\\right)\\right),\n\t\t\\end{align*}\n\t\twhere $c>0$ is a universal constant independent of $(T,M,d)$. \n\t\\end{enumerate}\n\\end{theorem}\n\nOur subsequent analysis easily gives high-probability regret upper bounds. However, for simplicity and to highlight more clearly the matching between the upper and lower bounds, \nwe stick with presenting results on expected regret.\nTheorem \\ref{thm.adversarial} shows a polynomial dependence of the regret on the number of batches $M$ under adversarial contexts, and the following corollary is immediate. \n\\begin{corollary}\\label{cor.adversarial}\n\tUnder adversarial contexts, $\\Theta(\\sqrt{dT})$ batches achieve the fully online regret $\\tilde{\\Theta}(\\sqrt{dT})$. \n\\end{corollary}\n\nAccording to Corollary \\ref{cor.adversarial}, $T$ batches are not necessary to achieve the fully online performance under adversarial contexts: $\\Theta(\\sqrt{Td})$ batches suffice. Since we are \\text{not} in the high-dimensional regime (per Assumption~\\ref{aspn.TKd}, $d \\le \\sqrt{T}$), the number of batches needed without any performance suffering is at most $O(T^{0.75})$, a sizable reduction from $O(T)$. Further, in the low-dimensional regime (i.e. when $d$ is a constant), only $O(\\sqrt{T})$ batches are needed to achieve fully online performance.\nNevertheless, $O(\\sqrt{dT})$ can still be a fairly large number. \nIn particular, if only a constant number of batches are available, then the regret is linear. The lower bound indicates that not much better can be done in the adversarial contexts. 
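As a concrete illustration of these bounds, with the uniform grid the upper bound $\mathsf{polylog}(T)\cdot(\sqrt{dT}+dT/M)$ is dominated by its first term precisely when $M\gtrsim \sqrt{dT}$; for instance, for $T=10^6$ and $d=10$, about $\lceil\sqrt{dT}\rceil = 3163$ batches already match the fully online rate up to polylogarithmic factors, whereas with a bounded number of batches the $dT/M$ term, and by the lower bound the regret itself, remains linear in $T$.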
\nThis is because the power of the adversary under adversarial contexts is too strong when the learner only has a few batches: the adversary may simply pick any batch and choose all contexts anterior to this batch to be orthogonal with the contexts within this batch, such that the learner can learn nothing about the rewards in any given batch. \n\n\n\n\\subsection{A Sequential Batch UCB Algorithm}\\label{subsec:adversarial_UCB}\nThe overall idea of the algorithm is that, at the end of every batch, the learner computes an estimate $\\hat{\\theta}$ of the unknown parameter $\\theta^\\star$ via ridge regression as well as a confidence set that contains $\\theta^\\star$ with high probability. Then, whenever the learner enters a new batch, at each time $t$ he simply picks the action with the largest upper confidence bound. Finally, we choose the uniform grid, i.e., $t_m = \\lfloor \\frac{mT}{M}\\rfloor$ for each $m\\in [M]$. The algorithm is formally illustrated in Algorithm \\ref{algo.ucb}.\n\n\\begin{algorithm}[h!]\n\t\\DontPrintSemicolon \n\t\\SetAlgoLined\n\t\\BlankLine\n\t\\caption{Sequential Batch UCB (SBUCB) \\label{algo.ucb}}\n\t\\textbf{Input:} time horizon $T$; context dimension $d$; number of batches $M$; tuning parameter $\\gamma>0$.\n\t\n\t\\textbf{Grid choice:} ${\\mathcal{T}} = \\{t_1,\\cdots,t_M\\}$ with $t_m = \\lfloor \\frac{mT}{M}\\rfloor$. \n\t\n\t\\textbf{Initialization:} $A_0 = I_d\\in \\mathbb{R}^{d\\times d}$, $\\hat{\\theta}_0={\\bf 0}\\in \\mathbb{R}^d$, $t_0 = 0$.\n\t\n\t\\For{$m \\gets 1$ \\KwTo $M$}{\n\t\t\t\\For{$t\\gets t_{m-1}+1$ \\KwTo $t_m$}{\n\t\t\t\tChoose $a_t = \\arg\\max_{a\\in [K]} x_{t,a}^\\top \\hat{\\theta}_{m-1} + \\gamma\\sqrt{x_{t,a}^\\top A^{-1}_{m-1} x_{t,a}}$ (break ties arbitrarily). \\\\\n\t\t\t}\n\t\tReceive rewards in the $m$-th batch: $\\{r_{t,a_t}\\}_{t_{m-1}+1 \\le t \\le t_m}$. \n\t\t\n\t\t$A_m = A_{m-1} + \\sum_{t=t_{m-1}+1}^{t_m} x_{t,a_t}x_{t,a_t}^\\top$. \\\\\n\t\t$\\hat{\\theta}_m = A^{-1}_m\\sum_{t=t_{m-1}+1}^{t_m} r_{t,a_t}x_{t,a_t}$.\n\t}\n\\end{algorithm}\n\n\\begin{remark}\nNote that when $M=T$ (i.e. the fully online setting), Algorithm~\\ref{algo.ucb} degenerates to the standard LinUCB algorithm in~\\cite{chu2011contextual}.\n\\end{remark}\t\n\nTo analyze the sequential batch UCB algorithm, we need to first show that the constructed confidence bound is feasible. By applying \\cite[Lemma 1]{chu2011contextual} to our setting, we immediately obtain the following concentration result that the estimated $\\hat{\\theta}_{m-1}$ is close to the true $\\theta^\\star$: \n\n\\begin{lemma}\\label{lemma.concentration}\n\tFix any $\\delta > 0$.\n\tFor each $m\\in [M]$, if for a fixed sequence of selected contexts $\\{x_{t,a_t}\\}_{t\\in [t_m]}$ up to time $t_m$, the (random) rewards $\\{r_{t,a_t}\\}_{t\\in [t_m]}$ are independent, then for each $t \\in [t_{m-1}+1, t_m]$,\n\twith probability at least $1-\\frac{\\delta}{T}$, the following holds for all $a\\in [K]$:\n\t\\begin{align*}\n\t|x_{t,a}^\\top (\\hat{\\theta}_{m-1} - \\theta^\\star)| \\le \\left(1+\\sqrt{\\frac{1}{2}\\log\\left(\\frac{2KT}{\\delta}\\right)}\\right)\\sqrt{x_{t,a}^\\top A_{m-1}^{-1}x_{t,a}}.\n\t\\end{align*}\n\\end{lemma}\n\n\\begin{remark}\nLemma~\\ref{lemma.concentration} rests on an important conditional independence assumption of the rewards $\\{r_{t,a_t}\\}_{t\\in [t_m]}$. 
However, this assumption\ndoes not hold in the vanilla version of the algorithm as given in Algorithm~\\ref{algo.ucb}.\nThis is because a future selected action $a_t$ and hence the chosen context $x_{t,a_t}$ depends on \nthe previous rewards. Consequently, by conditioning on $x_{t,a_t}$, previous rewards, say $r_{\\tau_1}, r_{\\tau_2}$ ($\\tau_1, \\tau_2 < t$), can become dependent. Note the somewhat subtle issue here on\nthe dependence of the rewards: when conditioning on $x_{t,a_t}$, the corresponding reward $r_t$ becomes independent of all the past rewards $\\{r_\\tau\\}_{\\tau < t}$. Despite this, when a future $x_{t^\\prime, a_{t^\\prime}}$ is revealed ($t^\\prime > t$), these rewards (i.e. $r_t$ and all the rewards prior to $r_t$) become coupled again: what was known about $r_t$ now reveals information about the previous rewards $\\{r_\\tau\\}_{\\tau < t}$, because $r_t$ itself would not determine the selection of $x_{t^\\prime, a_{t^\\prime}}$:\nall those rewards have influence over $x_{t^\\prime, a_{t^\\prime}}$. Consequently, a complicated dependence structure is thus created when conditioning on $\\{x_{t,a_t}\\}_{t\\in [t_m]}$.\n\nThis lack of independence issue will be handled with a master algorithm variant of Algorithm~\\ref{algo.ucb} discussed in the next subsection. \nUsing the master algorithm to decouple dependencies is a standard technique in contextual bandits that was first developed in~\\cite{auer2002using}. Subsequently, it has been used for the same purpose in~\\cite{chu2011contextual, li2017provably}, among others. We will describe how to adapt the master algorithm in our current sequential batch learning setting next. We end this subsection by pointing out that, strictly speaking, our regret upper bound is achieved only by this master algorithm, rather than Algorithm~\\ref{algo.ucb}. However, we take the conventional view that the master algorithm is purely used as a theoretical construct (to resolve the dependence issue) rather than a practical algorithm that should actually be deployed in practice. In practice, Algorithm~\\ref{algo.ucb} should be used instead. For that reason, we discuss the master algorithm only in the proof.\n\\end{remark}\n\n\\subsection{Regret Analysis for Upper bound}\\label{subsec.UCB_upperbound}\n\nWe start with a simple fact from linear algebra that will be useful later.\n\n\\begin{lemma}\\cite[Lemma 11]{auer2002using}\\label{lemma.eigenvalue}\n\tLet $A$ be a symmetric matrix such that $I_d\\preceq A$, and $x\\in \\mathbb{R}^d$ be a vector satisfying $\\|x\\|_2\\le 1$. Then the eigenvalues $\\lambda_1,\\cdots,\\lambda_d$ of $A$ and the eigenvalues $\\nu_1,\\cdots,\\nu_d$ of $A+xx^\\top$ can be rearranged in a way such that $\\lambda_i\\le \\nu_i$ for all $i\\in [d]$, and\n\t\\begin{align*}\n\t\\mathsf{Tr}(A^{-1}xx^\\top) \\le 10\\sum_{j=1}^d \\frac{\\nu_j - \\lambda_j}{\\lambda_j}. \n\t\\end{align*}\n\\end{lemma}\n\nWe next establish a key technical lemma that will be used in establishing our regret upper bound. \n\\begin{lemma}\\label{lemma.trace_sum}\nDefine $X_m = \\sum_{t=t_{m-1}+1}^{t_m} x_{t,a_t}x_{t,a_t}^\\top$. We have:\n\\begin{align*}\n\\sum_{m=1}^M \\sqrt{ \\mathsf{Tr}(A_{m-1}^{-1} X_m)} \\le \\sqrt{10}\\log(T+1)\\cdot \\left(\\sqrt{Md} + d\\sqrt{\\frac{T}{M}} \\right). \n\\end{align*}\n\\end{lemma} \n\n\\begin{proof}\nWe start by noting that with the above notation, we have $A_m=A_{m-1}+X_m$ for any $m\\in [M]$ with $A_0=I_d$. 
\nApplying Lemma \\ref{lemma.eigenvalue} repeatedly, we may rearrange the eigenvalues $\\lambda_{m,1},\\cdots,\\lambda_{m,d}$ of $A_m$ in such a way that $\\lambda_{m-1,j}\\le \\lambda_{m,j}$ for all $m\\in [M], j\\in [d]$, and \n\\begin{align}\\label{eq.reduction_eigenvalue}\n\\sum_{m=1}^M \\sqrt{ \\mathsf{Tr}(A_{m-1}^{-1} X_m)} \\le \\sqrt{10}\\cdot \\sum_{m=1}^M \\sqrt{\\sum_{j=1}^d \\frac{\\lambda_{m,j} - \\lambda_{m-1,j}}{\\lambda_{m-1,j}}}. \n\\end{align}\nNote that $\\lambda_{0,j}=1$ for all $j\\in [d]$. \nNote further that $ \\lambda_{M,j}\\le 1+T, \\forall j \\in [d]$, which follows from the \nfact that $z^\\top (A_M) z = z^\\top (I_d + \\sum_{t=1}^T x_{t,a_t}x_{t,a_t}^\\top ) z = \\|z\\|_2^2 + \\sum_{t=1}^T \\|z^\\top x_{t,a_t}\\|^2_2 \\le (T+1) \\|z\\|_2^2$, since $\\|x_{t,a_t}\\|_2 \\le 1$.\nConsequently, every eigenvalue of $A_M$ must be bounded by $T+1$.\n\nUtilizing the above two pieces of information on $\\lambda_{0,j}$ and $ \\lambda_{M,j}$, we then have the following:\n\\begin{align}\n\\sum_{m=1}^M \\sqrt{\\sum_{j=1}^d \\frac{\\lambda_{m,j} - \\lambda_{m-1,j}}{\\lambda_{m,j}}} &\\le \\sqrt{M\\sum_{m=1}^M\\sum_{j=1}^d \\frac{\\lambda_{m,j} - \\lambda_{m-1,j}}{\\lambda_{m,j}} }\n =\\sqrt{M \\sum_{j=1}^d \\sum_{m=0}^{M-1} \\frac{\\lambda_{m+1,j} - \\lambda_{m,j}}{\\lambda_{m+1,j}} } \\nonumber \\\\\n&\\le \\sqrt{M\\sum_{j=1}^d \\int_{\\lambda_{0,j}}^{\\lambda_{M,j}} \\frac{dx}{x} } \n= \\sqrt{M\\sum_{j=1}^d \\log \\lambda_{M,j}}\n\\le \\sqrt{Md\\log(T+1)}, \\label{eq.inequality_1}\n\\end{align}\nwhere the first inequality follows from $(\\sum_{i=1}^n x_i)^2 \\le n \\sum_{i=1}^n x_i^2$, for any real numbers\n$x_1, \\dots, x_n$.\n\n\nWe now look at the difference between Equation~\\eqref{eq.reduction_eigenvalue} and Equation~\\eqref{eq.inequality_1} and have:\n\\begin{align*}\n\\sqrt{\\sum_{j=1}^d \\frac{\\lambda_{m,j} - \\lambda_{m-1,j}}{\\lambda_{m-1,j}}} - \\sqrt{\\sum_{j=1}^d \\frac{\\lambda_{m,j} - \\lambda_{m-1,j}}{\\lambda_{m,j}}} &\\stepa{\\le} \\frac{\\sum_{j=1}^d \\frac{(\\lambda_{m,j}-\\lambda_{m-1,j})^2}{\\lambda_{m,j}\\lambda_{m-1,j} }}{\\sqrt{\\sum_{j=1}^d \\frac{\\lambda_{m,j} - \\lambda_{m-1,j}}{\\lambda_{m-1,j}}}}\\\\\n & = \\frac{\\sum_{j=1}^d \\frac{(\\lambda_{m,j}-\\lambda_{m-1,j})^{1\/2}}{\\lambda_{m-1,j}^{1\/2} } \\cdot \n\\frac{(\\lambda_{m,j}-\\lambda_{m-1,j})^{3\/2}}{\\lambda_{m,j}\\lambda_{m-1,j}^{1\/2}}}{\\sqrt{\\sum_{j=1}^d \\frac{\\lambda_{m,j} - \\lambda_{m-1,j}}{\\lambda_{m-1,j}}}}\\\\\n &\\stepb{\\le} \n \\frac{\\sqrt{\\sum_{j=1}^d \\frac{(\\lambda_{m,j}-\\lambda_{m-1,j})}{\\lambda_{m-1,j} }} \\cdot \n \\sqrt{\\sum_{j=1}^d\t\\frac{(\\lambda_{m,j}-\\lambda_{m-1,j})^{3}}{\\lambda_{m,j}^2\\lambda_{m-1,j}}}}{\\sqrt{\\sum_{j=1}^d \\frac{\\lambda_{m,j} - \\lambda_{m-1,j}}{\\lambda_{m-1,j}}}}\\\\\n &= \\sqrt{\\sum_{j=1}^d \\frac{(\\lambda_{m,j}-\\lambda_{m-1,j})^3}{\\lambda_{m-1,j}\\lambda_{m,j}^2} }, \n\\end{align*}\nwhere step (a) follows from the basic inequality $\\sqrt{a}-\\sqrt{b}\\le (a-b)\/\\sqrt{a}$ for $a\\ge b\\ge 0$, and step (b) is due to Cauchy--Schwartz. 
\n\n\nNote further that $\\lambda_{m,j}-\\lambda_{m-1,j}\\le \\mathsf{Tr}(X_m) = \\sum_{t=t_{m-1}+1}^{t_m} \\|x_{t,a_t}\\|_2^2\\le t_m - t_{m-1} = \\frac{T}{M}$, we therefore have:\n\\begin{align}\n&\\sum_{m=1}^M \\left(\\sqrt{\\sum_{j=1}^d \\frac{\\lambda_{m,j} - \\lambda_{m-1,j}}{\\lambda_{m-1,j}}} - \\sqrt{\\sum_{j=1}^d \\frac{\\lambda_{m,j} - \\lambda_{m-1,j}}{\\lambda_{m,j}}} \\right) \\le \\sum_{m=1}^M \\sqrt{\\sum_{j=1}^d \\frac{(\\lambda_{m,j}-\\lambda_{m-1,j})^3}{\\lambda_{m-1,j}\\lambda_{m,j}^2} } \\nonumber \\\\\n& \\le \\sum_{m=1}^M \\sqrt{\\sum_{j=1}^d \\frac{(\\lambda_{m,j}-\\lambda_{m-1,j})^2}{\\lambda_{m,j}^2 }}\\cdot \\sqrt{\\frac{T}{M}}\n\\le \\sum_{m=1}^M \\sum_{j=1}^d \\frac{\\lambda_{m,j}-\\lambda_{m-1,j}}{\\lambda_{m,j} }\\cdot \\sqrt{\\frac{T}{M}} \\nonumber \\\\\n&\\le \\sqrt{\\frac{T}{M}}\\sum_{j=1}^d \\int_{\\lambda_{0,j}}^{\\lambda_{M,j}} \\frac{dx}{x} \\nonumber \\\\\n&\\le d\\sqrt{\\frac{T}{M}}\\log(T+1) \\label{eq.inequality_2},\n\\end{align}\nwhere the second inequality follows from the fact that\n$\\lambda_{m-1,j}\\ge \\lambda_{0,j}=1$ for any $m\\in [M]$.\nNow combining \\eqref{eq.reduction_eigenvalue}, \\eqref{eq.inequality_1} and \\eqref{eq.inequality_2} completes the proof. \n\\end{proof}\n\nWe are now ready to prove the regret upper bound stated in Theorem~\\ref{thm.adversarial}.\n\\begin{proof}[Proof of Statement 1 in Theorem~\\ref{thm.adversarial}]\n\\begin{enumerate}\n\\item[]\n\\item \\textbf{Regret bound under conditional independence assumption.}\n\nFor a given $\\delta > 0$, set the hyper-parameter $\\gamma$ in Algorithm~\\ref{algo.ucb} to be $1+\\sqrt{\\frac{1}{2}\\log(\\frac{2KT}{\\delta})}$ for the entire proof.\nUnder the conditional independence assumption in Lemma~\\ref{lemma.concentration},\nby a simple union bound over all $t \\in [T]$, we have with probability at least $1 - \\delta$, the following event holds:\n\\begin{align*}\n\\forall m \\in [M], \\forall t \\in [t_{m-1}+1, t_m], \\forall a\\in [K], \\quad |x_{t,a}^\\top (\\hat{\\theta}_{m-1} - \\theta^\\star)| \\le \\gamma\\sqrt{x_{t,a}^\\top A_{m-1}^{-1}x_{t,a}}.\n\\end{align*}\nOn this high probability event (with probability $1 - \\delta$), we can bound the regret as follows:\n\\begin{align}\nR_T(\\textbf{Alg}) &= \\sum_{t=1}^T \\left( \\max_{a\\in [K]}x_{t,a}^\\top \\theta^\\star - x_{t,a_t}^\\top \\theta^\\star \\right)\n= \\sum_{m=1}^M \\sum_{t=t_{m-1}+1}^{t_m} \\left( \\max_{a\\in [K]}x_{t,a}^\\top \\theta^\\star - x_{t,a_t}^\\top \\theta^\\star \\right) \\nonumber \\\\\n& \\le \\sum_{m=1}^M \\sum_{t=t_{m-1}+1}^{t_m} \\left( \\max_{a\\in [K]} \\Big(x_{t,a}^\\top \\hat{\\theta}_{m-1} + \\gamma\\sqrt{x_{t,a}^\\top A_{m-1}^{-1}x_{t,a}}\\Big) - x_{t,a_t}^\\top \\theta^\\star \\right) \\nonumber \\\\\n& = \\sum_{m=1}^M \\sum_{t=t_{m-1}+1}^{t_m} \\left( x_{t,a_t}^\\top \\hat{\\theta}_{m-1} + \\gamma\\sqrt{x_{t,a_t}^\\top A_{m-1}^{-1}x_{t,a_t}} - x_{t,a_t}^\\top \\theta^\\star \\right) \\nonumber \\\\\n& = \\sum_{m=1}^M \\sum_{t=t_{m-1}+1}^{t_m} \\left( x_{t,a_t}^\\top (\\hat{\\theta}_{m-1} - \\theta^\\star) + \\gamma\\sqrt{x_{t,a_t}^\\top A_{m-1}^{-1}x_{t,a_t}} \\right) \\nonumber\\\\\n& \\le \n\\sum_{m=1}^M \\sum_{t=t_{m-1}+1}^{t_m} 2\\gamma\\sqrt{x_{t,a_t}^\\top A_{m-1}^{-1}x_{t,a_t}} = 2\\gamma\\cdot \\sum_{m=1}^M \\sum_{t=t_{m-1}+1}^{t_m} 1\\cdot \\sqrt{x_{t,a_t}^\\top A_{m-1}^{-1} x_{t,a_t}} \\nonumber \\\\ \\label{eq.regret_ucb}\n&\\le 2\\gamma\\sqrt{\\frac{T}{M}}\\cdot \\sum_{m=1}^M \\sqrt{\\sum_{t=t_{m-1} +1}^{t_m} x_{t,a_t}^\\top A_{m-1}^{-1} x_{t,a_t}} = \n 
2\\gamma\\sqrt{\\frac{T}{M}}\\cdot \\sum_{m=1}^M \\sqrt{ \\mathsf{Tr}(A_{m-1}^{-1} X_m)}, \n\\end{align}\nwhere the inequality in~\\eqref{eq.regret_ucb} follows from Cauchy--Schwartz and the choice of a uniform grid (without loss of generality we assume that $T\/M$ is an integer).\n\nNext, setting $\\delta = \\frac{1}{T}$ (and hence resulting in $\\gamma = 1+\\sqrt{\\frac{1}{2}\\log\\left(2KT^2\\right)}$) and applying Lemma~\\ref{lemma.trace_sum} to the upper bound in~\\eqref{eq.regret_ucb}, we immediately obtain that again on this high-probability event:\n\\begin{align}\nR_T(\\textbf{Alg}) &\\le 2\\sqrt{10}\\left(\\sqrt{\\frac{1}{2}\\log\\left(2KT^2\\right)}+1\\right)\\log(T+1)\\sqrt{\\frac{T}{M}}\\left(\\sqrt{Md} + d\\sqrt{\\frac{T}{M}} \\right) \\nonumber\\\\ \n&\\label{eq.x}= \\mathsf{polylog}(T)\\cdot (\\sqrt{dT} + \\frac{dT}{M}).\n\\end{align}\nConsequently, taking the expectation of $R_T(\\textbf{Alg})$ yields the same bound as in Equation~\\eqref{eq.x}, since with probability at most $\\frac{1}{T}$, the total regret over the entire horizon is at most $T$ (each time accumulates at most a regret of $1$ by the normalization assumption).\nSince the regret bound is independent of $\\theta^\\star$, it \nimmediately follows that \n$\\sup_{\\theta^\\star: \\|\\theta^\\star\\|_2\\le 1} \\mathbb{E}_{\\theta^\\star}[R_T(\\pi)]\n\\le \\mathsf{polylog}(T)\\cdot (\\sqrt{dT} + \\frac{dT}{M}).$ \n\n\n\\item \\textbf{Building a Master algorithm that satisfies conditional independence}\n\nTo complete the proof, we need to validate the conditional independence assumption in Lemma~\\ref{lemma.concentration}. Since the length of the confidence intervals does not depend on the random rewards, this task can be done by using a master algorithm SupSBUCB (Algorithm~\\ref{algo:SupSBUCB}), which runs in $O(\\log T)$ stages at each time step $t$ similar to \\cite{auer2002using}, which is subsequently adopted in the linear contextual bandits setting~\\cite{chu2011contextual} and then in the generalized linear contextual bandits setting~\\cite{li2017provably} for the same purpose of meeting the conditional independence assumption. 
\nNote that SupSBUCB is responsible for selecting the actions $a_t$ and it does so by calling BaseSBUCB (Algorithm~\\ref{algo:BaseSBUCB}), which merely performs regression.\nThis master-base algorithm pair has by now become a standard trick to get around the conditional dependency in the vanilla UCB algorithm for a variety of contextual bandits problems (by sacrificing at most $O(\\log T)$ regret).\n\n\\begin{algorithm}[!h]\n\t\\DontPrintSemicolon \n\t\\SetAlgoLined\n\t\\BlankLine\n\t\\caption{SupSBUCB \\label{algo:SupSBUCB}}\n \\textbf{Inputs}: $T, M \\in \\mathbb{Z}_{++}$, Grid ${\\mathcal{T}}=\\{t_1,t_2,\\cdots,t_M\\}$.\n \n $S\\leftarrow \\log(T), \\Psi_1^s \\leftarrow \\emptyset$ for all $s\\in[S]$\n \n\t\\For{$m=1,2,\\cdots,M$}{\n\t\tInitialize $\\Psi_{m+1}^{s^{\\prime}}\\leftarrow \\Psi_{m}^{s^{\\prime}}$ for all $s^{\\prime} \\in [S]$.\n\t\t\n\t \\For{$t= t_{m-1}+1, \\dots, t_m$}{\n\t $s \\leftarrow 1$ and $\\hat{A}_1 \\leftarrow [K]$\n\t\t\n\t\t\\textbf{Repeat:}\n\t\t\n\n\t\t\n\t Use BaseSBUCB with $\\Psi_{m}^s$ to compute $\\theta_m^s$ and $A_m^s$\n\t \n\t For all $a \\in \\hat{A}_s$, compute $w^s_{t,a} =\\gamma \\sqrt{x_{t,a}^T (A_{m}^s)^{-1}x_{t,a}}$, $\\hat{r}_{t,a}^s = \\langle \\theta_m^s, x_{t,a} \\rangle$ \n\t\t\n\t\t\\textbf{(a)} If $w^s_{t,a}\\leq 1\/\\sqrt{T}$ for all $a\\in \\hat{A}_s$,\n\t\tchoose $a_t = \\arg\\max_{a\\in \\hat{A}_s}\\left( \\hat{r}_{t,a}^s+w^s_{t,a}\\right)$. \n\t\t\n\t\t\n\t\t\\textbf{(b)} Else if $w^s_{t,a}\\leq 2^{-s}$ for all $a \\in \\hat{A}_s$,\n\t\t$\\hat{A}_{s+1}\\leftarrow \\{a\\in \\hat{A}_s \\,\\,\\vert\\,\\,\\hat{r}_{t,a}^s+w^s_{t,a}\\geq \\max_{a^{\\prime}\\in \\hat{A}_s}(\\hat{r}_{t,a^{\\prime}}^s+w^s_{t,a^{\\prime}})-2^{1-s}\\}$, \n\t\t\n\t\t\\quad $s \\leftarrow s+1$.\n\t\t\n\t\t\\textbf{(c)} Else choose any $a_t \\in \\hat{A}_{s}$ such that $w_{t,a_t}^s >2^{-s}$, Update\n\t $\\Phi_{m+1}^{s} \\leftarrow\n\t\t\\Phi_{m+1}^{s} \\cup \\{t\\}.$\n\t\n\t\t\n\t\t\\textbf{Until}{\\quad an action $a_t$ is found.}\n\t}\n}\n\\end{algorithm}\n\n\\begin{algorithm}[!h]\n\t\\DontPrintSemicolon \n\t\\SetAlgoLined\n\t\\BlankLine\n\t\\caption{BaseSBUCB \\label{algo:BaseSBUCB}}\n\t\n\t\t\\textbf{Input}: $\\Psi_m$.\n\t\n\t\t$A_m = I_d+\\sum_{\\tau \\in \\Psi_m}x_{t,a_{\\tau}}x^{\\prime}_{t,a_{\\tau}}$\n\t\t\n\t\t$c_m = \\sum_{\\tau \\in \\Psi_m} r_{\\tau,a_{\\tau}}x_{\\tau,a_{\\tau}}$\n\t\n\t\t$\\theta_m = A_m^{-1} c_m$\n\t\t\n\t\t\\textbf{Return} $(\\theta_m, A_m)$.\n\\end{algorithm}\n\n\n\nMore specifically, the master algorithm developed by~\\cite{auer2002finite} has the following structure: each time step is divided into at most $\\log T$ stages. At the beginning of each stage $s$, the learner computes the confidence interval using only the previous contexts designated as belonging to that stage and selects any action whose confidence interval has a large length (exceeding some threshold). If all actions has a small confidence interval, then we end this stage, observe the rewards of the given contexts and move on to the next stage with a smaller threshold on the length of the confidence interval. In other words, conditional independence is obtained by successive manual masking and revealing of certain information. One can intuitively think of each stage $s$ as a color, and each time step $t$ is colored using one of the $\\log T$ colors (if colored at all). When computing confidence intervals and performing regression, only previous contexts that have the same color are used, instead of all previous contexts. 
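As a complement to the pseudocode, the following minimal Python sketch illustrates the per-batch bookkeeping of the plain SBUCB rule of Algorithm~\ref{algo.ucb} (not of the SupSBUCB construction used in this proof). The \texttt{contexts}/\texttt{rewards} interface is purely illustrative, and the ridge-regression response vector is accumulated over all past batches.
\begin{verbatim}
import numpy as np

def sbucb(contexts, rewards, T, M, gamma):
    """Sketch of SBUCB with the uniform grid t_m = floor(m*T/M).

    contexts[t] is a (K, d) array of the K contexts at time t;
    rewards(t, a) returns the realized reward r_{t,a}, queried only
    after the batch containing t has ended (delayed batch feedback).
    In the analysis gamma is set to 1 + sqrt(0.5*log(2*K*T^2)).
    """
    d = contexts[0].shape[1]
    A = np.eye(d)                 # A_0 = I_d
    b = np.zeros(d)               # accumulated sum of r_{t,a_t} x_{t,a_t}
    theta = np.zeros(d)           # hat(theta)_0 = 0
    grid = [int(np.floor(m * T / M)) for m in range(M + 1)]

    actions = []
    for m in range(1, M + 1):
        A_inv = np.linalg.inv(A)  # uses only information up to batch m-1
        chosen = []
        for t in range(grid[m - 1], grid[m]):
            X = contexts[t]
            width = np.sqrt(np.einsum('kd,de,ke->k', X, A_inv, X))
            a_t = int(np.argmax(X @ theta + gamma * width))
            chosen.append((t, a_t))
            actions.append(a_t)
        for t, a_t in chosen:     # batch ends: rewards observed, estimate updated
            x_sel = contexts[t][a_t]
            A += np.outer(x_sel, x_sel)
            b += rewards(t, a_t) * x_sel
        theta = np.linalg.solve(A, b)
    return actions
\end{verbatim}
Rewards are consumed only at batch boundaries, which mimics the delayed feedback of the sequential batch setting.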
\n\nAdapting this algorithm to the sequential batch setting is not difficult: we merely keep track\nof the sets $\\Psi_{m}^s$ ($s \\in [\\log T]$) per each batch $m$ (rather than per each time step $t$ as in the fully online learning case). Note that we are still coloring each time step $t$, the difference here lies in the frequency at which we are running BaseSBUCB to compute the confidence bounds and rewards. \nDue to great similarity we omit the details here and refer to \\cite[Section 4.3]{auer2002using}. In particular, by establishing similar results to \\cite[Lemma 15, Lemma 16]{auer2002using}, it is straightforward to show that the regret of the master algorithm SupSBUCB here is enlarged at most by a multiplicative factor of $O(\\log T)$, which leads to the upper bound in Theorem \\ref{thm.adversarial}. \n\\end{enumerate}\n\\end{proof}\n\n\n\n\n\n\\subsection{Regret Analysis for Lower bound}\\label{subsec.adversarial}\nIn this section, we establish the regret lower bound and show that for any fixed grid ${\\mathcal{T}}=\\{t_1,\\cdots,t_M\\}$ and any learner's policy on this grid, there exists an adversary who can make the learner's regret at least $\\Omega(\\sqrt{Td}+(T\\sqrt{d}\/M \\wedge T\/\\sqrt{M}))$ even if $K=2$. Since the lower bound $\\Omega(\\sqrt{Td})$ has been proved in \\cite{chu2011contextual} even in the fully online case, it remains to show the lower bound $\\Omega(T\\sqrt{d}\/M \\wedge T\/\\sqrt{M})$. Note that in the fully online case, the lower bound $\\Omega(\\sqrt{Td})$ given in \\cite{chu2011contextual} is obtained under the same assumption $d^2 \\le T$ as in Assumption~\\ref{aspn.TKd}.\n\n\\begin{proof}[Proof of Statement 2 in Theorem~\\ref{thm.adversarial}]\nFirst we consider the case where $M\\ge d\/2$, and without loss of generality we may assume that $d'=d\/2$ is an integer (if $d$ is odd, then we can take $d^\\prime = \\frac{d-1}{2}$ and modify the subsequent procedure only slightly). By an averaging argument, there must be $d'$ batches $\\{i_1,i_2,\\cdots,i_{d'}\\}\\subset [M]$ such that\n\\begin{align}\\label{eq.large_batch}\n\\sum_{k=1}^{d'} \\left(t_{i_k} - t_{i_k-1} \\right) \\ge \\frac{d'T}{M}. \n\\end{align}\nNow $\\theta^*$ is chosen as follows: Flip $d'$ independent fair coins to obtain $U_1,\\cdots,U_{d'}\\in \\{1,2\\}$, and set $\\theta^\\star = (\\theta_1,\\cdots,\\theta_d)$ with\n$\\theta_{2k-1} = \\frac{1}{\\sqrt{d'}}\\mathbbm{1}(U_k = 1), \\theta_{2k} = \\frac{1}{\\sqrt{d'}}\\mathbbm{1}(U_k = 2), \\forall k\\in [d']$.\n(If $d$ is odd, then the last component $\\theta_d$ is set to $0$.)\n\n\nNote that $\\theta^\\star$ is a random variable and clearly $\\|\\theta^\\star\\|_2=1$ (surely). Next the contexts are generated in the following manner: for $t\\in (t_{m-1},t_{m}]$, if $m=i_k$ for some $k\\in [d']$, set $x_{t,1}=e_{2k-1}, x_{t,2}=e_{2k}$, where $e_j$ is the $j$-th basis vector in $\\mathbb{R}^d$; otherwise, set $x_{t,1}=x_{t,2}={\\bf 0}$. \n\nNow we analyze the regret of the learner under this environment. Clearly, for any $k\\in [d']$, the learner has no information about whether $(\\theta_{2k-1}, \\theta_{2k}) = (1\/\\sqrt{d'},0)$ or $(0,1\/\\sqrt{d'})$ before entering the $i_k$-th batch, while an incorrect action incurs an instantenous regret $1\/\\sqrt{d'}$. 
Consequently, averaged over all possible coin flips $(U_1,\\cdots,U_{d'})\\in \\{1,2\\}^{d'}$, the expected regret is at least:\n\\begin{align*}\n\\frac{1}{2}\\sum_{k=1}^{d'} \\frac{t_{i_k} - t_{i_{k-1}}}{\\sqrt{d'}} \\ge \\frac{1}{2\\sqrt{2}}\\cdot \\frac{T\\sqrt{d}}{M}\n\\end{align*}\ndue to \\eqref{eq.large_batch}, establishing the lower bound $\\Omega\\left(\\frac{T\\sqrt{d}}{M}\\right)$ when $M\\ge d\/2$.\n\nNext, in the case where $M < d\/2$, choose $d^\\prime = M$.\nHere, we obviously have $\\sum_{k=1}^{d'} \\left(t_{i_k} - t_{i_k-1} \\right) = T.$ \nIn this case, again flip $d'$ independent fair coins to obtain $U_1,\\cdots,U_{d'}\\in \\{1,2\\}$, and set $\\theta^\\star = (\\theta_1,\\cdots,\\theta_d)$ with\n$\\theta_{2k-1} = \\frac{1}{\\sqrt{d'}}\\mathbbm{1}(U_k = 1), \\theta_{2k} = \\frac{1}{\\sqrt{d'}}\\mathbbm{1}(U_k = 2), \\forall k\\in [d']$.\nSet all remaining components of $\\theta$ to $0$.\nThe contexts are generated as follows: for $t\\in (t_{m-1},t_{m}], 1\\le m \\le M$, set $x_{t,1}=e_{2m-1}, x_{t,2}=e_{2m}$.\nIn this case, we again average over all possible coin flips $(U_1,\\cdots,U_{d'})\\in \\{1,2\\}^{d'}$, and the expected regret is at least:\n\\begin{align*}\n\\frac{1}{2}\\sum_{m=1}^{M} \\frac{t_{m} - t_{m-1}}{\\sqrt{d'}} = \\frac{1}{2}\\cdot \\frac{T}{\\sqrt{M}}\n\\end{align*}\n\nCombining the above two cases yields a lower bound of $ \\Omega\\left(\\frac{T\\sqrt{d}}{M}\\wedge \\frac{T}{\\sqrt{M}}\\right)$.\n\\end{proof}\n\n\\section{Definitions and Auxiliary Results}\\label{appendix.auxiliary}\n\n\\begin{definition}\nLet $(\\mathcal{X}, \\mathcal{F})$ be a measurable space and $P$, $Q$\nbe two probability measures on $(\\mathcal{X}, \\mathcal{F})$. \n\\begin{enumerate}\n\t\\item The total-variation distance between $P$ and $Q$ is defined as:\n\t$$ \\mathsf{TV}(P,Q) = \\sup_{A \\in \\mathcal{A}} |P(A) - Q(A)|.$$\n\t\\item The KL-divergence between $P$ and $Q$ is:\n\t\\begin{equation*}\n\tD_{\\text{\\rm KL}}(P\\|Q) = \\begin{cases}\n\t\\int \\log \\frac{dP}{dQ} dP \\text{\\quad if $P << Q$} \\\\\n\t+\\infty \\text{\\quad otherwise}\n\t\\end{cases}\n\t\\end{equation*}\n\\end{enumerate}\n\n\n\\end{definition}\n\\begin{lemma}\\cite[Lemma 2.6]{Tsybakov2008}\\label{lemma.TV_KL}\n\tLet $P$ and $Q$ be any two probability measures on the same measurable space. Then\n\t\\begin{align*}\n\t1- \\mathsf{TV}(P,Q) \\ge \\frac{1}{2}\\exp\\left(-D_{\\text{\\rm KL}}(P\\|Q)\\right). \n\t\\end{align*}\n\\end{lemma}\n\n\\begin{lemma}\\cite[Theorem 6.1]{wainwright2019high}\n\t\\label{lemma.wishart}\nLet $x_1,x_2,\\cdots,x_n\\sim {\\mathcal{N}}(0,I_d)$ be i.i.d. random vectors. Then for any $\\delta>0$, \n\\begin{align*}\n\\mathbb{P}\\left(\\sigma_{\\max}\\left(\\frac{1}{n}\\sum_{i=1}^n x_ix_i^\\top\\right) \\ge 1+\\sqrt{\\frac{d}{n}}+\\delta \\right) \\le \\exp\\left(-\\frac{n\\delta^2}{2}\\right),\n\\end{align*}\nwhere $\\sigma_{\\max}(A)$ denotes the largest singular value of $A$. \n\\end{lemma}\n\n\n\\section{Proof of Main Lemmas}\n\\subsection{Proof of Lemma \\ref{lemma.equator}}\nLet $y_{t,a} = \\Sigma^{-1\/2}x_{t,a}$, then each $y_{t,a}$ is marginally distributed as ${\\mathcal{N}}(0,I_d)$. 
Define\n\\begin{align*}\nB \\triangleq \\frac{1}{t_m-t_{m-1}}\\sum_{t=t_{m-1}+1}^{t_m} y_{t,a_t}y_{t,a_t}^\\top.\n\\end{align*}\n\nRecall that $a_t = \\arg\\max_{a\\in [K]} x_{t,a}^\\top \\hat{\\theta} = \\arg\\max_{a\\in [K]} y_{t,a}^\\top (\\Sigma^{1\/2}\\hat{\\theta})$ for any $t\\in [t_{m-1}+1,t_m]$, and $\\hat{\\theta}$ is an estimate of $\\theta^\\star$ that is independent of all contexts in the current batch $ [t_{m-1}+1,t_m]$. By rotational invariance of ${\\mathcal{N}}(0,I_d)$, we can without loss of generality assume $\\Sigma^{1\/2}\\hat{\\theta}=ce_d$ for some $c>0$. Consequently, each $y_{t,a_t}$ follows the distribution\n$\\mu_t = {\\mathcal{N}}(0,1) \\otimes \\cdots \\otimes {\\mathcal{N}}(0,1) \\otimes \\nu_t,$\nwhere $\\nu_t$ is the probability distribution of $\\max_{a\\in [K]} Z_{t,a}$, where each $Z_{t,a}$ is a standard Gaussian and the $Z_{t,a}$'s can be correlated across different $a$'s. \n\nNow for $y=(y_1,y_2,\\cdots,y_d)\\sim \\mu_t$ and any unit vector $u\\in \\mathbb{R}^d$, we show that there exist numerical constants $c_1,c_2>0$ independent of $(d,K)$ such that\n\\begin{align}\\label{eq.large_prob_fixed_u}\n\\mathbb{P}\\left(|y^\\top u| \\ge c_1\\right) \\ge c_2.\n\\end{align}\nTo establish \\eqref{eq.large_prob_fixed_u}, we distinguish into two cases. If $|u_d|<\\frac{1}{2}$, using the fact that $\\mathbb{P}(|{\\mathcal{N}}(0,1)+t|\\ge c)$ is minimized at $t=0$ for any fixed $c>0$, we conclude that\n\\begin{align*}\n\\mathbb{P}\\left(|y^\\top u| \\ge c_1 \\right) \\ge \\mathbb{P}\\left( \\left| \\sum_{i=1}^{d-1}y_iu_i \\right| \\ge c_1 \\right) = \\mathbb{P}(|{\\mathcal{N}}(0,1-u_d^2)|\\ge c_1) \\ge \\mathbb{P}\\left(\\left|{\\mathcal{N}}(0,\\frac{3}{4})\\right|\\ge c_1\\right)\n\\end{align*}\nis lower bounded by some positive constant. If $|u_d|\\ge \\frac{1}{2}$, we have\n\\begin{align*}\n\\mathbb{P}\\left(|y^\\top u| \\ge c_1 \\right)\\ge \\frac{1}{2}\\mathbb{P}\\left(|u_dy_d| \\ge c_1 \\right) \\ge \\frac{1}{2}\\mathbb{P}\\left(|y_d| \\ge 2c_1 \\right) \\ge \\frac{1}{2}\\mathbb{P}\\left(Z_{t,1}\\ge 2c_1\\right) = \\frac{1}{2}\\mathbb{P}({\\mathcal{N}}(0,1)\\ge 2c_1),\n\\end{align*}\nwhich is again lower bounded by a numerical constant. Hence the proof of \\eqref{eq.large_prob_fixed_u} is completed. \n\nBased on \\eqref{eq.large_prob_fixed_u} and the deterministic inequality\n\\begin{align*}\nu^\\top\\cdot \\left(\\frac{1}{t_m-t_{m-1}}\\sum_{t=t_{m-1}+1}^{t_m} y_{t,a_t}y_{t,a_t}^\\top\\right)\\cdot u \\ge \\frac{c_1^2}{t_m-t_{m-1}}\\sum_{t=t_{m-1}+1}^{t_m} \\mathbbm{1}\\left(|y_{t,a_t}^\\top u| \\ge c_1 \\right),\n\\end{align*}\nthe Chernoff inequality yields that for any unit vector $u\\in \\mathbb{R}^d$, we have\n\\begin{align}\\label{eq.concentration_fixed_u}\n\\mathbb{P}\\left( u^\\top B u \\ge \\frac{c_1^2c_2}{2}\\right) \\ge 1 - e^{-c_3(t_m-t_{m-1})},\n\\end{align}\nwhere $c_3>0$ is some numerical constant. \n\nNext we prove an upper bound of $\\lambda_{\\max}(B)$, i.e., the largest eigenvalue of $B$. Since $(a+b)(a+b)^\\top \\preceq 2(aa^\\top + bb^\\top)$ for any vectors $a,b\\in\\mathbb{R}^d$, for $y_t\\sim \\mu_t$ we have\n\\begin{align*}\ny_ty_t^\\top \\preceq 2(v_tv_t^\\top + w_tw_t^\\top), \n\\end{align*}\nwhere $v_t=(v_{t,1},\\cdots,v_{t,d-1},0)$ with $v_{t,i}\\sim{\\mathcal{N}}(0,1)$, and $w_t=(0,\\cdots,0,w_{t,d})$ with $w_{t,d}\\sim \\nu_t$. By concentration of Wishart matrices (cf. 
Lemma \\ref{lemma.wishart}), with probability at least $1-e^{-\\Omega(t_m-t_{m-1})}$, \n\\begin{align*}\n\\lambda_{\\max}\\left(\\frac{1}{t_m-t_{m-1}}\\sum_{t=t_{m-1}+1}^{t_m} v_tv_t^\\top \\right) \\le c_4\n\\end{align*}\nholds for some numerical constant $c_4>0$. For the second term, since $w_{t,d}\\sim \\nu_t$ is the maximum of $K$ arbitrary ${\\mathcal{N}}(0,1)$ random variables, the Gaussian tail and the union bound imply that $|w_{t,d}|\\le \\sqrt{c_5\\log(KT)}$ with probability at least $1-O(T^{-5})$. Hence, with probability at least $1 - O(T^{-4})$, we have\n\\begin{align*}\n\\lambda_{\\max}\\left(\\frac{1}{t_m-t_{m-1}}\\sum_{t=t_{m-1}+1}^{t_m} w_tw_t^\\top \\right) = \\frac{1}{t_m-t_{m-1}}\\sum_{t=t_{m-1}+1}^{t_m} w_{t,d}^2 \\le c_5\\log (KT). \n\\end{align*}\nCombining all the previous results, and using $\\lambda_{\\max}(A+B)\\le \\lambda_{\\max}(A)+\\lambda_{\\max}(B)$ for symmetric matrices $A,B$, we conclude that with probability at least $1-e^{-\\Omega(t_m-t_{m-1})} - O(T^{-4})$, we have\n\\begin{align}\\label{eq.lambda_max}\n\\lambda_{\\max}(B) \\le c_6\\log (KT)\n\\end{align}\nholds for some numerical constant $c_6>0$. \n\nFinally, we are ready to prove a lower bound on $\\lambda_{\\min}(B)$ via an $\\varepsilon$-net argument. Let ${\\mathcal{N}}_d(\\varepsilon)$ be an $\\varepsilon$-net of the unit ball in $\\mathbb{R}^d$ (both in $\\ell_2$ norm) with cardinality at most $(1+\\frac{2}{\\varepsilon})^d$. Standard $\\varepsilon$-net techniques (cf. \\cite[Section 2.3.1]{tao2012topics}) give\n\\begin{align*}\n\\min_{u: \\|u\\|_2=1} u^\\top Bu \\ge \\min_{u\\in {\\mathcal{N}}_d(\\varepsilon)} u^\\top Bu - 2\\varepsilon\\lambda_{\\max}(B).\n\\end{align*}\nHence, choosing $\\varepsilon = \\frac{c_1^2c_2}{8c_6\\log (KT)}$ and combining \\eqref{eq.concentration_fixed_u}, \\eqref{eq.lambda_max} and the union bound over ${\\mathcal{N}}_d(\\varepsilon)$ gives\n\\begin{align*}\n\\mathbb{P}\\left(\\lambda_{\\min}(B) \\ge \\frac{c_1^2c_2}{4}\\right) \\ge 1 - e^{O(d\\log\\log (KT)) - \\Omega(t_m-t_{m-1})} - O(T^{-4}). \n\\end{align*}\nBy noting that $t_m - t_{m-1} = \\Omega(d\\sqrt{T})$ due to the choice of the grid in \\eqref{eq.minimax_grid}, the parameter $a$ in \\eqref{eq.a}, and the assumption $M=O(\\log\\log T)$, we conclude that $\\lambda_{\\min}(B) \\ge c_7$ for some numerical constant $c_7>0$ with probability at least $1 - O(T^{-4})$. The proof is completed by noting that\n$$\n\\frac{1}{t_m - t_{m-1}}\\sum_{t = t_{m-1}+1}^{t_m} x_{t,a_t}x_{t,a_t}^\\top = \\Sigma^{1\/2} B \\Sigma^{1\/2} \\succeq \\Sigma^{1\/2} (c_7I_d) \\Sigma^{1\/2}= c_7\\Sigma\n$$\nwhenever $\\lambda_{\\min}(B) \\ge c_7$ and the assumption $\\lambda_{\\min}(\\Sigma)\\ge \\kappa\/d$. \n\\qed\n\n\\subsection{Proof of Lemma \\ref{lemma.Q}}\nLet $v_1,\\cdots,v_d$ be an orthonormal basis of $\\mathbb{R}^d$ with $v_1=u_t$. By rotational invariance of the uniform distribution on spheres, we have $(v_1^\\top \\theta, v_2^\\top \\theta, \\cdots, v_d^\\top \\theta)\\sim \\mathsf{Unif}(\\Delta\\mathbb{S}^{d-1})$ under $Q_0$. Now recall that\n\\begin{align*}\n\\frac{dQ_1}{dQ_0}(\\theta) = \\frac{r_t\\jiao{v_1,\\theta}_+}{Z_0}, \\qquad \\frac{dQ_2}{dQ_0}(\\theta) = \\frac{r_t\\jiao{v_1,\\theta}_-}{Z_0}, \n\\end{align*}\nwe conclude that if $\\theta' = \\theta - 2(v_1^\\top \\theta)v_1$, we have\n\\begin{align*}\n\\frac{dQ_1}{dQ_0}(\\theta) = \\frac{dQ_2}{dQ_0}(\\theta'). \n\\end{align*}\nAs a result, it is equivalent to have $\\theta\\sim Q_1$ or $\\theta' = \\theta - 2(v_1^\\top \\theta)v_1\\sim Q_2$. 
\n\nFor the identity \\eqref{eq.Z_0}, recall that the density of $\\theta=(\\theta_1,\\cdots,\\theta_d)\\sim \\mathsf{Unif}(\\Delta\\mathbb{S}^{d-1})$ is \n\\begin{align}\\label{eq.uniform_density}\nf(\\theta) = f(\\theta_2,\\cdots,\\theta_d) = \\left(\\frac{d\\pi^{d\/2}\\Delta^{d-1}}{\\Gamma(\\frac{d}{2}+1)}\\right)^{-1}\\frac{2\\Delta}{\\sqrt{\\Delta^2 - \\theta_2^2 - \\cdots - \\theta_d^2}}\\cdot \\mathbbm{1}\\left(\\sum_{i=2}^d \\theta_i^2 \\le \\Delta^2\\right), \n\\end{align}\nwhere $\\Gamma(t)=\\int_0^\\infty x^{t-1}e^{-x}dx$ is the Gamma function. Hence, by rotational invariance, we have\n\\begin{align*}\nZ_0 &= \\frac{r_t}{2}\\mathbb{E}_{Q_0}[|\\theta_1|] = r_t\\Delta\\cdot \\int_{\\sum_{i=2}^d \\theta_i^2 \\le \\Delta^2} \\left(\\frac{d\\pi^{d\/2}\\Delta^{d-1}}{\\Gamma(\\frac{d}{2}+1)}\\right)^{-1} d\\theta_2\\cdots d\\theta_d \\\\\n&= r_t\\Delta\\left(\\frac{d\\pi^{d\/2}\\Delta^{d-1}}{\\Gamma(\\frac{d}{2}+1)}\\right)^{-1}\\cdot \\frac{\\Delta^{d-1} \\pi^{\\frac{d-1}{2}}}{\\Gamma(\\frac{d-1}{2}+1)} = r_t\\Delta\\cdot\\begin{cases}\n\\frac{2^d}{\\pi d}\\binom{d}{d\/2}^{-1}, & \\text{if }d\\text{ is even} \\\\\n\\frac{1}{2^d}\\binom{d-1}{(d-1)\/2}, & \\text{if }d\\text{ is odd}\n\\end{cases}. \n\\end{align*}\n\nUsing Stirling's approximation $\\sqrt{2\\pi n}(\\frac{n}{e})^n\\le n!\\le e\\sqrt{n}(\\frac{n}{e})^{n}$ for any $n\\ge 1$, we have\n\\begin{align}\\label{eq.combinatorics}\n\\frac{2}{e^2}\\sqrt{\\frac{\\pi}{n}}\\le \\frac{1}{2^{2n}}\\binom{2n}{n} \\le \\frac{e}{\\pi\\sqrt{2n}}\n\\end{align}\nfor all $n\\ge 1$, and the rest of \\eqref{eq.Z_0} follows from \\eqref{eq.combinatorics}. \n\nAs for the second moment in \\eqref{eq.second_moment}, we use the spherical coordinates \n\\begin{align*}\n\\left\\{\n\\begin{array}{lr}\n\\theta_2 = r\\cos\\varphi_1, \\\\\n\\theta_3 = r\\sin\\varphi_1\\cos\\varphi_2, \\\\\n\\vdots \\\\\n\\theta_{d-1} = r\\sin\\varphi_1\\sin\\varphi_2\\cdots\\sin\\varphi_{d-3}\\cos\\varphi_{d-2}, \\\\\n\\theta_d = r\\sin\\varphi_1\\sin\\varphi_2\\cdots\\sin\\varphi_{d-3}\\sin\\varphi_{d-2}.\n\\end{array}\n\\right.\n\\end{align*}\nto obtain\n\\begin{align*}\n\\mathbb{E}_{Q_1}[(v_1^\\top \\theta)^2] &= \\frac{r_t}{2Z_0}\\cdot \\mathbb{E}_{Q_0}[|\\theta_1|^3] \\\\\n&= \\frac{r_t\\Delta}{Z_0}\\cdot \\int_{\\sum_{i=2}^d \\theta_i^2 \\le \\Delta^2} \\left(\\frac{d\\pi^{d\/2}\\Delta^{d-1}}{\\Gamma(\\frac{d}{2}+1)}\\right)^{-1} (\\Delta^2-\\theta_2^2-\\cdots-\\theta_d^2) d\\theta_2\\cdots d\\theta_d \\\\\n&= \\frac{r_t\\Delta}{Z_0}\\cdot \\int_0^\\Delta \\int_0^\\pi \\cdots \\int_0^\\pi \\int_0^{2\\pi} \\left(\\frac{d\\pi^{d\/2}\\Delta^{d-1}}{\\Gamma(\\frac{d}{2}+1)}\\right)^{-1} (\\Delta^2-r^2) \\\\\n&\\qquad \\cdot r^{d-2}\\sin^{d-3}(\\varphi_1) \\sin^{d-4}(\\varphi_2)\\cdots \\sin(\\varphi_{d-3}) drd\\varphi_1\\cdots d\\varphi_{d-2} \\\\\n&= \\frac{r_t\\Delta}{Z_0}\\left(\\frac{d\\pi^{d\/2}\\Delta^{d-1}}{\\Gamma(\\frac{d}{2}+1)}\\right)^{-1}\\cdot \\frac{2\\Delta^{d+1}}{d^2-1}\\cdot \\frac{\\Gamma(\\frac{d-2}{2})\\Gamma(\\frac{1}{2})}{\\Gamma(\\frac{d-1}{2})}\\cdot \\frac{\\Gamma(\\frac{d-3}{2})\\Gamma(\\frac{1}{2})}{\\Gamma(\\frac{d-2}{2})}\\cdot \\cdots \\cdot \\frac{\\Gamma(1)\\Gamma(\\frac{1}{2})}{\\Gamma(\\frac{3}{2})}\\cdot 2\\pi \\\\\n&= \\frac{r_t\\Delta}{Z_0}\\left(\\frac{d\\pi^{d\/2}\\Delta^{d-1}}{\\Gamma(\\frac{d}{2}+1)}\\right)^{-1}\\cdot \\frac{2\\Delta^{d+1}}{d^2-1}\\cdot\\frac{2\\pi^{\\frac{d-1}{2}}}{\\Gamma(\\frac{d-1}{2})} \\\\\n&= \\frac{2\\Delta^2}{d+1}. 
\n\\end{align*}\\qed\n\n\\section{Problem Formulation}\n\\label{sec:problem_formulation}\nWe introduce the problem of sequential batch learning on finite-action linear contextual bandits.\n\n\\subsection{Notation}\nWe start by fixing some notation that will be used throughout the paper. For a positive integer $n$, let $[n]\\triangleq\\{1,\\cdots,n\\}$. For real numbers $a,b$, let $a\\wedge b\\triangleq \\min\\{a,b\\}$. For a vector $v$, let $v^\\top$ and $\\|v\\|_2$ be the transpose and $\\ell_2$ norm of $v$, respectively. For square matrices $A,B$, let $\\mathsf{Tr}(A)$ be the trace of $A$, and let $A\\preceq B$ denote that the difference $B-A$ is symmetric and positive semi-definite. We adopt the standard asymptotic notations: for two non-negative sequences $\\{a_n\\}$ and $\\{b_n\\}$, let $a_n=O(b_n)$ iff $\\limsup_{n\\to\\infty} a_n\/b_n<\\infty$, $a_n=\\Omega(b_n)$ iff $b_n=O(a_n)$, and $a_n=\\Theta(b_n)$ iff $a_n=O(b_n)$ and $b_n=O(a_n)$. We also write $\\tilde{O}(\\cdot), \\tilde{\\Omega}(\\cdot)$ and $\\tilde{\\Theta}(\\cdot)$ to denote the respective meanings within multiplicative logarithmic factors in $n$. For probability measures $P$ and $Q$, let $P\\otimes Q$ be the product measure with marginals $P$ and $Q$. If measures $P$ and $Q$ are defined on the same probability space, we denote by $\\mathsf{TV}(P,Q) = \\frac{1}{2}\\int |dP-dQ| $ and $ D_{\\text{KL}}(P\\|Q) = \\int dP\\log\\frac{dP}{dQ}$ the total variation distance and Kullback--Leibler (KL) divergences between $P$ and $Q$, respectively. \n\n\n\\subsection{Decision Procedure and Reward Structures}\nLet $T$ be the time horizon of the problem. At the beginning of each time $t\\in [T]$, the decision maker observes a set of $K$ $d$-dimensional feature vectors (i.e. contexts) $\\{x_{t,a} \\mid a \\in [K]\\} \\subseteq \\mathbb{R}^d$ corresponding to the $t$-th unit.\nIf the decision maker selects action $a \\in [K]$, then a reward $r_{t,a} \\in \\mathbb{R}$ corresponding to time $t$ is incurred (although not necessarily immediately observed).\nWe assume the mean reward is linear: that is, there exists an underlying (but unknown) parameter\n$\\theta^\\star$ such that \n $$r_{t,a} = x_{t,a}^\\top \\theta^\\star + \\xi_t,$$\n where $\\{\\xi_t\\}_{t=0}^{\\infty}$ is a sequence of zero-mean independent sub-Gaussian random variables with a uniform upper bound on the sub-Gaussian constants. Without loss of generality and for notational simplicity,\n we assume each $\\xi_t$ is $1$-sub-Gaussian: $\\mathbf{E}[e^{\\lambda \\xi_t}] \\le e^{\\lambda^2\/2}, \\forall t, \\forall \\lambda \\in \\mathbb{R}$.\n Further, without loss of generality (via normalization), we assume $\\|\\theta^\\star\\|_2\\le 1$.\n We denote by $a_t$ and $r_{t, a_t}$ the action chosen and the reward obtained at time $t$, respectively. \n Note that both are random variables; in particular, $a_t$ is random either because the action is randomly selected based on the contexts $\\{x_{t,a} \\mid a \\in [K]\\}$ or because the contexts $\\{x_{t,a} \\mid a \\in [K]\\}$\n are random, or both.\n\nAs there are different (but equivalent) formulations of contextual bandits, we briefly discuss the meaning of the above abstract quantities and how they arise in practice. 
In general, at each round $t$, an individual characterized by $v_t$ (a list of characteristics associated with that individual) becomes available.\nWhen the decision maker decides to apply action $a_t$ to this individual,\n a reward $y_t(v_t, a_t)$, which depends (stochastically) on both $v_t$ and $a_t$, is obtained. In practice, for both modelling and computational reasons, one often first featurizes the individual characteristics and the actions.\nIn particular, with sufficient generality, one assumes $\\mathbf{E}[y_t(v_t, a_t) \\mid v_t, a_t] = g_{\\theta} (\\phi(v_t, a_t))$, \nwhere $g_{\\theta}(\\cdot)$ is the parametrized mean reward function and $\\phi(v_t, a_t)$ extracts the features from the given raw individual characteristics $v_t$ and action $a_t$. In the above formulation,\nas is standard in the literature, we assume the feature map $\\phi(\\cdot)$ is known and given and $x_{t,a} = \\phi(v_t, a)$. Consequently, we directly assume access to contexts $\\{x_{t,a} \\mid a \\in [K]\\}$.\nNote that the linear contextual bandits setting then corresponds to $g_{\\theta}(\\cdot)$ is linear.\n\n\n\\subsection{Sequential Batch Learning}\nIn the standard online learning setting, the decision maker immediately observes the reward $r_{t, a_t}$ after selecting action $a_t$ at time $t$. Consequently, in selecting $a_t$, the decision maker can base his decision on all the past contexts $\\{x_{\\tau,a} \\mid a \\in [K], \\tau\\le t\\}$ and all the past rewards $\\{r_{\\tau,a_\\tau} \\mid \\tau\\le t-1\\}$. \n\nIn constrast, we consider a \\textit{sequential batch learning} setting, where the decision maker is only allowed to partition the $T$ units into (at most) $M$ batches, and the reward corresponding to each unit in a batch can only be observed at the end of the batch. More specifically, given a maximum batch size $M$, the decision maker needs to choose a sequential batch learning algorithm \\textbf{Alg} that has the following two components:\n\\begin{enumerate}\n\t\\item A \\emph{grid} ${\\mathcal{T}}=\\{t_1,t_2,\\cdots,t_M\\}$, with $0 = t_0 < t_10$, where $\\lambda_{\\min}(\\Sigma), \\lambda_{\\max}(\\Sigma)$ denote the smallest and the largest eigenvalues of $\\Sigma$, respectively. \n \\end{assumption}\n \nThe upper bound $ \\lambda_{\\max}(\\Sigma) \\le 1\/d$ in Assumption \\ref{assumption:cov} ensures that $\\mathbb{E}\\|x_{t,a}\\|_2^2\\le 1$, and therefore the stochastic contexts share the similar constraint with the previous adversarial contexts. The lower bound $\\lambda_{\\min}(\\Sigma) \\ge \\kappa\/d$ ensures that each stochastic context is approximately distributed as an isotropic Gaussian random vector, with a bounded condition number no less than $\\kappa^{-1}$. We assume that $\\kappa>0$ is a fixed constant (say $0.1$) and will not optimize the dependence on $\\kappa$. \n \nThe next theorem presents tight regret bounds for the stochastic contexts case.\n\n\\begin{theorem}\\label{thm.stochastic}\n\tLet $T$, $M=O(\\log\\log T)$ and $d$ be the learning horizon, number of batches and each context's dimension, respectively. 
Denote by $\\mathsf{polylog}(T)$ all the poly-logarithmic factors in $T$.\n\t\\begin{enumerate}\n\t\\item \n\tUnder Assumptions \\ref{aspn.TKd} and \\ref{assumption:cov}, there exists a sequential batch learning algorithm \\textbf{Alg}= $({\\mathcal{T}}, \\pi)$ (explicitly defined in Section \\ref{subsec.pure-exp}) such that:\n\t\\begin{align*}\n\t\\sup_{\\theta^\\star: \\|\\theta^\\star\\|_2\\le 1} \\mathbb{E}_{\\theta^\\star}[R_T(\\pi)] \\le \\mathsf{polylog}(T)\\cdot \\sqrt{\\frac{dT}{\\kappa}}\\left(\\frac{T}{d^2}\\right)^{\\frac{1}{2(2^M-1)}}.\n\t\\end{align*}\n\t\\item\n\tConversely, even when $K=2$ and contexts $x_{t,a}\\sim {\\mathcal{N}}(0,I_d\/d)$ are independent over all $a\\in [K], t\\in [T]$, for any $M\\le T$ and any sequential batch learning algorithm, we have:\n\t\\begin{align*}\n\t\\sup_{\\theta^\\star: \\|\\theta^\\star\\|_2\\le 1} \\mathbb{E}_{\\theta^\\star}[R_T(\\pi)] \\ge c\\cdot \\sqrt{dT}\\left(\\frac{T}{d^2}\\right)^{\\frac{1}{2(2^M-1)}},\n\t\\end{align*}\n\twhere $c>0$ is a numerical constant independent of $(T,M,d)$. \n\\end{enumerate}\n\\end{theorem}\n\nTheorem \\ref{thm.stochastic} completely characterizes the minimax regret for the sequential batch learning problem in linear contextual bandits with stochastic contexts, and shows a doubly exponential dependence of the optimal regret on the number of batches $M$. The following corollary is immediate. \n\\begin{corollary}\\label{cor.stochastic}\n\tUnder stochastic contexts, it is necessary and sufficient to have $\\Theta(\\log\\log (T\/d^2))$ batches to achieve the fully online regret $\\tilde{\\Theta}(\\sqrt{dT})$. \n\\end{corollary}\n\nIn contrast to Corollary \\ref{cor.adversarial}, the above corollary shows that a much smaller number of batches are capable of achieving the fully online performance, which suits better for many practical scenarios. Note that for smaller number of batches, Theorem \\ref{thm.stochastic} also gives the tight regrets within logarithmic factors, e.g., the optimal regret is $\\tilde{\\Theta}(Td^{-1\/2})$ when $M=1$, is $\\tilde{\\Theta}(T^{2\/3}d^{1\/6})$ when $M=2$, is $\\tilde{\\Theta}(T^{4\/7}d^{5\/14})$ when $M=3$, and so on. \n\n\n\\subsection{A Sequential Batch Pure-Exploitation Algorithm}\\label{subsec.pure-exp}\n\nIn contrast to the adversarial contexts, under stochastic contexts the decision maker enjoys the advantage that he can choose to learn the unknown parameter $\\theta^\\star$ from any desired direction. In other words, the exploration of the learner is no longer subject to the adversary's restrictions, and strikingly, making decisions based on the best possible inference of $\\theta^\\star$ is already sufficient.\n\n\\begin{algorithm}[h!]\n\t\\DontPrintSemicolon \n\t\\SetAlgoLined\n\t\\BlankLine\n\t\\caption{Sequential Batch Pure-exploitation\t\\label{algo.pure-exp}}\n\t\\textbf{Input:} Time horizon $T$; context dimension $d$; number of batches $M$. \\\\\n\t\\textbf{Set} $a = \\Theta\\left( \\sqrt{T}\\cdot \\left(\\frac{T}{d^2}\\right)^{\\frac{1}{2(2^M-1)}} \\right)$\\\\\n\t\\textbf{Grid choice}: ${\\mathcal{T}} = \\{t_1,\\cdots,t_M\\}$, with $t_1 = ad, \\quad t_m = \\lfloor a\\sqrt{t_{m-1}} \\rfloor, m=2,3,\\cdots,M,.$\\\\\n\t\\textbf{Initialization:} $A = {\\bf 0}\\in \\mathbb{R}^{d\\times d}$, $\\hat{\\theta}={\\bf 0}\\in \\mathbb{R}^d$\\;\n\t\\For{$m \\gets 1$ \\KwTo $M$}{\n\n\t\t\t\\For{$t\\gets t_{m-1}+1$ \\KwTo $t_m$}{\n\t\t\t\tchoose $a_t = \\arg\\max_{a\\in [K]} x_{t,a}^\\top \\hat{\\theta}$ (break ties arbitrarily). \\\\\n\t\t\t\treceive reward $r_{t,a_t}$. 
\n\t\t\t}\n\t\t}\n\t\t$A\\gets A + \\sum_{t=t_{m-1}+1}^{t_m} x_{t,a_t}x_{t,a_t}^\\top$. \\\\\n\t\t$\\hat{\\theta} \\gets A^{-1}\\sum_{t=t_{m-1}+1}^{t_m} r_{t,a_t}x_{t,a_t}$.\n\\end{algorithm}\n\nThe algorithm we use in this setting is quite simple (see Algorithm~\\ref{algo.pure-exp}). Specifically, under a particularly chosen grid ${\\mathcal{T}}=\\{t_1,t_2,\\cdots,t_M\\}$, the learner, at the beginning of each batch, uses the least squares estimate $\\hat{\\theta}$ of $\\theta^\\star$ based on the data in the previous batches, and then simply selects the action $a\\in [K]$ which maximizes the estimated reward $x_{t,a}^\\top \\hat{\\theta}$ for any time $t$ in this batch. Then at the end of each batch, the learner updates his estimate $\\hat{\\theta}$ of $\\theta^\\star$ based on the new observations from the current batch. \n\nHow do we select the grid ${\\mathcal{T}}$? Intuitively, in order to minimize overall regret, we must ensure that the regret incurred on each batch is not too large, because the overall regret is dominated by the batch that has the largest regret. Guided by this observation, we can see intuitively an optimal way of selecting the grid must ensure that each batch's regret is the same (at least orderwise in terms of the dependence of $T$ and $d$): for otherwise, there is a way of reducing the regret order in one batch and increasing the regret order in the other and the sum of the two will still have smaller regret order than before (which is dominated by the batch that has larger regret order). As we shall see later, the following grid choice satisfies this equal-regret-across-batches requirement:\n\\begin{align}\\label{eq.minimax_grid}\nt_1 = ad, \\quad t_m = \\lfloor a\\sqrt{t_{m-1}} \\rfloor, \\qquad m=2,3,\\cdots,M,\n\\end{align}\nwhere the parameter $a = \\Theta\\left( \\sqrt{T}\\cdot \\left(\\frac{T}{d^2}\\right)^{\\frac{1}{2(2^M-1)}} \\right)$ is chosen so that $t_M=T$. \n\n\n\n\n\\subsection{Regret Analysis for Upper bound}\\label{subsec.stochastic_upperbound}\nWe now turn to establishing the upper bound in Theorem \\ref{thm.stochastic}. \nWe again execute a two-step program. First, we prove that Algorithm \\ref{algo.pure-exp} with the grid ${\\mathcal{T}}=\\{t_1,\\cdots,t_M\\}$ in \\eqref{eq.minimax_grid} attains the regret upper bound in Theorem \\ref{thm.stochastic}, assuming the conditional independence assumption (cf. Lemma \\ref{lemma.difference}) holds. Second, similar to the master algorithm in the previous section, we then modify Algorithm \\ref{algo.pure-exp} slightly to validate this condition. One thing to note here is that, unlike in the adversarial contexts case, here the modification is much simpler, as we shall see later.\n\nWe start by establishing that the least squares estimator $\\hat{\\theta}$ is close to the true parameter $\\theta^\\star$ at the beginning of every batch with high probability. By the theory of least squares, this would be obvious if the chosen contexts $x_{t,a_t}$ were i.i.d. Gaussian. However, since the action $a_t$ depends on all contexts $(x_{t,a})_{a\\in [K]}$ available at time $t$, the probability distribution of $x_{t,a_t}$ may be far from isotropic. Consequently, a priori, there might be one or more directions in the context space that were never chosen, hence yielding inaccurate estimation of $\\theta^\\star$ along that (or those) direction(s). 
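As a purely illustrative sanity check (outside the analysis), this potential degeneracy can also be probed numerically; the self-contained Python sketch below draws contexts i.i.d. from ${\\mathcal{N}}(0,I_d\/d)$, selects actions greedily with respect to an arbitrary fixed estimate, and reports the extreme eigenvalues of the normalized design matrix. All names in the snippet are ours and purely illustrative; it is not part of Algorithm~\\ref{algo.pure-exp}.
\\begin{verbatim}
# Sketch: conditioning of the design matrix under greedy action selection,
# with the K contexts of each round drawn i.i.d. from N(0, I_d/d).
import numpy as np

rng = np.random.default_rng(0)
d, K, n = 10, 2, 5000
theta_hat = rng.standard_normal(d)                # an arbitrary fixed estimate

B = np.zeros((d, d))
for _ in range(n):
    X = rng.standard_normal((K, d)) / np.sqrt(d)  # contexts of the K arms
    x = X[np.argmax(X @ theta_hat)]               # greedy action
    B += np.outer(x, x)
B /= n

eigs = np.linalg.eigvalsh(B)                      # ascending order
print(eigs[0], eigs[-1], 1.0 / d)
\\end{verbatim}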
However, as we shall see next, this is not a concern: we establish that the matrix formed by the selected contexts is reasonably well-conditioned, despite being selected in a greedy fashion.\n\n\n\\begin{lemma}\\label{lemma.equator}\n\tFor each $m\\in [M]$, with probability at least $1-O(T^{-4})$ we have\n\t\\begin{align*}\n\t\\lambda_{\\min}\\left(\\sum_{t=t_{m-1}+1}^{t_m} x_{t,a_t}x_{t,a_t}^\\top \\right) \\ge c\\cdot \\frac{\\kappa(t_m-t_{m-1})}{d},\n\t\\end{align*}\n\twhere $c>0$ is a numerical constant independent of $(K,T,d,m,\\kappa)$. \n\\end{lemma}\n\nThe proof of the above lemma is a bit long and hence deferred to the appendix. Based on Lemma \\ref{lemma.equator}, we are ready to show that the least squares estimator $\\hat{\\theta}$ is close to the true parameter $\\theta^\\star$ with high probability. For $m\\in [M]$, let $\\hat{\\theta}_m$ be the estimate at the end of the $m$-th batch, and $A_m = \\sum_{t=1}^{t_m} x_{t,a_t}x_{t,a_t}^\\top$ be the regression matrix. \n\\begin{lemma}\\label{lemma.difference}\n\tFor each $m\\in [M]$, if the rewards $\\{r_{t,a_t}\\}_{t\\in [t_m]}$ up to time $t_m$ are mutually independent given the selected contexts $\\{x_{t,a_t}\\}_{t\\in [t_m]}$, then with probability at least $1-O(T^{-3})$,\n\t\\begin{align*}\n\t\\|\\hat{\\theta}_m - \\theta^\\star\\|_2 \\le Cd\\cdot \\sqrt{\\frac{\\log T}{\\kappa t_m}}\n\t\\end{align*}\n\tfor a numerical constant $C>0$ independent of $(K,T,d,m,\\kappa)$. \n\\end{lemma}\n\\begin{proof}{Proof.}\n\tBy the standard algebra of linear regression, we have:\n\t\\begin{align*}\n\t\\hat{\\theta}_m - \\theta^\\star = A_m^{-1}\\sum_{t=1}^{t_m}x_{t,a_t} (r_{t,a_t} - x_{t,a_t}^\\top \\theta^\\star). \n\t\\end{align*}\n\tHence, conditioned on the contexts $\\{x_{t,a_t}\\}_{t\\in [t_m]}$, the noise terms $r_{t,a_t} - x_{t,a_t}^\\top \\theta^\\star$ are independent by the assumption, and each noise term $r_{t,a_t} - x_{t,a_t}^\\top \\theta^\\star$ is $1$-sub-Gaussian.\n\t\n\tNext, we show that the random vector $\\hat{\\theta}_m - \\theta^\\star$ is $\\sigma^2$-sub-Gaussian conditioned on the contexts with $\\sigma^2 = \\lambda_{\\min}(A_m)^{-1}$. To see this, we start by recalling that a centered (i.e. zero-mean) random vector $V$ is $v$-sub-Gaussian if the scalar random variable $\\langle V, u\\rangle$ is $v$-sub-Gaussian for any unit vector $u$.\n\tConsequently, taking any unit vector $u \\in \\mathbb{R}^d$, we have:\n\t$$\\langle \\hat{\\theta}_m - \\theta^\\star , u \\rangle = \\langle A_m^{-1}\\sum_{t=1}^{t_m}x_{t,a_t} (r_{t,a_t} - x_{t,a_t}^\\top \\theta^\\star), u\\rangle = \\sum_{t=1}^{t_m} u^T A_m^{-1}x_{t,a_t} (r_{t,a_t} - x_{t,a_t}^\\top \\theta^\\star).$$\n\tSince each term in the summand is $(u^T A_m^{-1}x_{t,a_t})^2$-sub-Gaussian, and since all of them are independent (after being conditioned on $\\{x_{t,a_t}\\}_{t\\in [t_m]}$), their sum is also sub-Gaussian with the sub-Gaussian constant equal to the sum of the sub-Gaussian constants:\n\t\\begin{align*}\n\t&\\sum_{t=1}^{t_m} (u^T A_m^{-1}x_{t,a_t})^2= \\sum_{t=1}^{t_m} u^T A_m^{-1}x_{t,a_t} x_{t,a_t}^T A_m^{-1} u\n\t= u^T A_m^{-1}\\big( \\sum_{t=1}^{t_m} x_{t,a_t} x_{t,a_t}^T \\big) A_m^{-1} u \\\\\n\t&= u^T A_m^{-1}A_m A_m^{-1} u = u^T A_m^{-1} u \\le \\lambda_{\\max}(A_m^{-1}) = \\lambda_{\\min}(A_m)^{-1}. 
\n\t\\end{align*}\n\tSince the above inequality holds for any unit vector $u$, choosing $\\sigma^2 = \\lambda_{\\min}(A_m)^{-1}$\n\testablishes the claim.\n\t\n\t\n\tProceeding further, by Lemma \\ref{lemma.equator}, we have for each $m\\in [M]$, with probability at least $1-O(T^{-4})$ \n\t$\\lambda_{\\min}\\left(\\sum_{t=t_{m-1}+1}^{t_m} x_{t,a_t}x_{t,a_t}^\\top \\right) \\ge c\\cdot \\frac{\\kappa(t_m-t_{m-1})}{d}$. Consequently, by a union bound over all $M$ (which is at most $T$),\n\twe have with probability at least $1-O(T^{-3})$, $\\lambda_{\\min}\\left(\\sum_{t=t_{m-1}+1}^{t_m} x_{t,a_t}x_{t,a_t}^\\top \\right) \\ge c\\cdot \\frac{\\kappa(t_m-t_{m-1})}{d}$ for all $m \\in [M]$.\n\tSince $\\lambda_{\\min}(X+Y)\\ge \\lambda_{\\min}(X)+\\lambda_{\\min}(Y)$ for any symmetric matrices $X,Y$, \n\tit then follows that with probability at least $1-O(T^{-3})$:\n\t\\begin{align*}\n\t\\lambda_{\\min}(A_m) = \\lambda_{\\min}\\left(\\sum_{l=1}^m \\sum_{t=t_{l-1}+1}^{t_l} x_{t,a_t}x_{t,a_t}^\\top \\right) \\ge \\sum_{l=1}^m \\lambda_{\\min}\\left(\\sum_{t=t_{l-1}+1}^{t_l} x_{t,a_t}x_{t,a_t}^\\top \\right) \\ge \\frac{c\\kappa t_m}{d}.\n\t\\end{align*}\n\t\n\tFinally, since $\\hat{\\theta}_m - \\theta^\\star$ is a $\\frac{d}{c\\kappa t_m}$-sub-Gaussian random vector, $\\|\\hat{\\theta}_m - \\theta^\\star\\|_2^2 $\n\tis a sub-exponential random variable.\n\tTherefore, conditioned on the above event for the stochastic contexts, the sub-exponential concentration gives the claimed upper bound on $\\|\\hat{\\theta}_m - \\theta^\\star\\|_2$ with a further probability at least $1 - O(T^{-3})$ over the random noises. Finally, taking a union bound to complete the proof. \n\\end{proof}\n\nLemma~\\ref{lemma.difference} shows that given the conditional independence assumption, the estimator $\\hat{\\theta}$ given by pure exploitation essentially achieves the rate-optimal estimation of $\\theta^\\star$ even if one purely explores. This now positions us well to prove the upper bound of Theorem \\ref{thm.stochastic}. Of course, bear in mind that when using Algorithm~\\ref{algo.pure-exp}, the conditional independence assumption does not hold, for the choice of future contexts depends on the rewards in the previous batches. Therefore, we will use sample splitting to build another master algorithm to gain independence at the cost of the sample size reduction by a multiplicative factor of $M$ (recall that $M = O(\\log\\log T)$). The following proof implements these two steps; note that in this setting, the master algorithm is entirely different from and much simpler than the one given in the adversarial case.\n\n\\begin{proof}[Proof of Statement 1 in Theorem~\\ref{thm.stochastic}]\n\\begin{enumerate}\n\\item[]\n\\item \\textbf{Regret bound under conditional independence assumption.}\n\n Consider the $m$-th batch with any $m\\ge 2$, and any time point $t$ inside this batch. By the definition of $a_t$, we have $x_{t,a_t}^\\top \\hat{\\theta}_{m-1}\\ge x_{t,a}^\\top \\hat{\\theta}_{m-1}$ for any $a\\in [K]$. Consequently, \n\\begin{align*}\n\\max_{a\\in [K]} (x_{t,a} - x_{t,a_t})^\\top \\theta^\\star &\\le \\max_{a\\in [K]} (x_{t,a} - x_{t,a_t})^\\top (\\theta^\\star - \\hat{\\theta}_{m-1}) \\\\\n&\\le \\max_{a,a'\\in [K]} (x_{t,a} - x_{t,a'})^\\top (\\theta^\\star - \\hat{\\theta}_{m-1}) \\\\\n&\\le 2\\max_{a\\in [K]} |x_{t,a}^\\top (\\theta^\\star - \\hat{\\theta}_{m-1})|. \n\\end{align*}\nFor fixed $a\\in [K]$, marginally we have $x_{t,a}\\sim {\\mathcal{N}}(0,\\Sigma)$ independent of $\\hat{\\theta}_{m-1}$. 
Therefore, conditioning on the previous contexts and rewards, we have $x_{t,a}^\\top (\\theta^\\star - \\hat{\\theta}_{m-1})\\sim {\\mathcal{N}}(0,\\sigma^2)$ with\n$$\n\\sigma^2 = (\\theta^\\star - \\hat{\\theta}_{m-1})^\\top \\Sigma (\\theta^\\star - \\hat{\\theta}_{m-1}) \\le \\frac{\\|\\theta^\\star - \\hat{\\theta}_{m-1}\\|_2^2}{d}\n$$\nby Assumption \\ref{assumption:cov}. By a union bound over $a\\in [K]$, with probability at least $1-O(T^{-3})$ over the randomness in the current batch we have\n\\begin{align*}\n\\max_{a\\in [K]} (x_{t,a} - x_{t,a_t})^\\top \\theta^\\star \\le 2\\max_{a\\in [K]} |x_{t,a}^\\top (\\theta^\\star - \\hat{\\theta}_{m-1})| = O\\left(\\|\\theta^\\star - \\hat{\\theta}_{m-1} \\|_2 \\cdot \\sqrt{\\frac{\\log(KT)}{d}}\\right). \n\\end{align*}\nApplying Lemma \\ref{lemma.difference} and another union bound, there exists some numerical constant $C'>0$ such that with probability at least $1-O(T^{-3})$, the instanteous regret at time $t$ is at most\n\\begin{align*}\n\\max_{a\\in [K]} (x_{t,a} - x_{t,a_t})^\\top \\theta^\\star \\le C'\\sqrt{\\log(KT)\\log T}\\cdot \\sqrt{\\frac{d}{\\kappa t_{m-1}}}. \n\\end{align*}\nNow taking the union bound over $t\\in [T]$, the total regret incurred after the first batch is at most\n\\begin{align}\\label{eq.later_batch}\n\\sum_{m=2}^M C'\\sqrt{\\log(KT)\\log T}\\cdot t_m\\sqrt{\\frac{d}{\\kappa t_{m-1}}} \\le C'\\sqrt{\\frac{\\log(KT)\\log T}{\\kappa}}M\\cdot a\\sqrt{d}\n\\end{align}\nwith probability at least $1-O(T^{-2})$, where the inequality is due to the choice of the grid in \\eqref{eq.minimax_grid}. \n\nAs for the first batch, the instanteous regret at any time point $t$ is at most the maximum of $K$ Gaussian random variables ${\\mathcal{N}}(0,(\\theta^\\star)^\\top \\Sigma \\theta^\\star)$. Since $\\|\\theta^\\star\\|_2\\le 1$ and $\\lambda_{\\max}(\\Sigma)\\le 1\/d$, we conclude that the instanteous regret is at most $C''\\sqrt{\\log(KT)\/d}$ for some constant $C''>0$ with probability at least $1-O(T^{-3})$. Now by a union bound over $t\\in [t_1]$, with probability at least $1-O(T^{-2})$ the total regret in the first batch is at most\n\\begin{align}\\label{eq.first_batch}\nC''\\sqrt{\\log(KT)\/d}\\cdot t_1 = C''\\sqrt{\\log(KT)}\\cdot a\\sqrt{d}. \n\\end{align}\n\nNow combining \\eqref{eq.later_batch}, \\eqref{eq.first_batch} and the choice of $a$ in Algorithm~\\ref{algo.pure-exp} gives the desired regret bound in Theorem \\ref{thm.stochastic} with high probability (note that $M=O(\\log\\log T)$), and consequently in expectation. \n\n\\item \\textbf{Building a Master algorithm that satisfies conditional independence}\n\\begin{algorithm}[h!]\n\t\\DontPrintSemicolon \n\t\\SetAlgoLined\n\t\\BlankLine\n\t\\caption{Batched Pure-exploitation (with sample splitting)\t\\label{algo.sample_splitting}}\n\t\\textbf{Input:} Time horizon $T$; context dimension $d$; number of batches $M$; grid ${\\mathcal{T}} = \\{t_1,\\cdots,t_M\\}$ same as in Algorithm~\\ref{algo.pure-exp}.\\;\n\t\\textbf{Initialization:} Partition each batch into $M$ intervals evenly, i.e., $(t_m,t_{m+1}]=\\cup_{j=1}^M T_m^{(j)}$. \\;\n\t\\For{$m \\gets 1$ \\KwTo $M$}{\n\t\t\\If{$m=1$}{\n\t\t\tchoose $a_t = 1$ and receives reward $r_{t,a_t}$ for any $t\\in [1,t_1]$. \n\t\t}\n\t\t\\Else{\n\t\t\t\\For{$t\\gets t_{m-1}+1$ \\KwTo $t_m$}{\n\t\t\t\tchoose $a_t = \\arg\\max_{a\\in [K]} x_{t,a}^\\top \\hat{\\theta}_{m-1}$ (break ties arbitrarily). \\\\\n\t\t\t\treceive reward $r_{t,a_t}$. 
\n\t\t\t}\n\t\t}\n\t\t$T^{(m)} \\gets \\cup_{m'=1}^m T_{m'}^{(m)}.$\\\\\n\t\t$A_m\\gets \\sum_{t\\in T^{(m)}} x_{t,a_t}x_{t,a_t}^\\top$. \\\\\n\t\t$\\hat{\\theta}_m \\gets A_m^{-1}\\sum_{t\\in T^{(m)}} r_{t,a_t}x_{t,a_t}$.\n\t}\n\t\\textbf{Output: resulting policy $\\pi=(a_1,\\cdots,a_T)$}.\n\\end{algorithm}\n\nWe start by proposing a sample splitting based master algorithm (see Algorithm~\\ref{algo.sample_splitting}) that ensures that when restricting to the subset of observations used for constructing $\\hat{\\theta}$, the rewards are conditionally independent given the contexts. \nThe key modification in Algorithm \\ref{algo.sample_splitting} lies in the computation of the estimator $\\hat{\\theta}_{m}$ after the first $m$ batches. Specifically, instead of using all past contexts and rewards before $t_m$, we only use the past observations inside the time frame $T^{(m)}\\subsetneq [t_m]$ to construct the estimator. The key property of the time frames is the disjointness, i.e., $T^{(1)},\\cdots,T^{(M)}$ are pairwise disjoint. Then the following lemma shows that the conditional independence condition holds within each time frame $T^{(m)}$. \n\n\n\\begin{lemma}\\label{lemma.cond_indep}\n\tFor each $m\\in [M]$, the rewards $\\{r_{t,a_t}\\}_{t\\in T^{(m)}}$ are mutually independent conditioning on the selected contexts $\\{x_{t,a_t}\\}_{t\\in T^{(m)}}$. \n\\end{lemma}\n\\begin{proof}{Proof.}\n\tFor $t\\in T^{(m)}$, the action $a_t$ only depends on the contexts $\\{x_{t,a}\\}_{a\\in [K]}$ at time $t$ and the past estimators $\\hat{\\theta}_1, \\cdots, \\hat{\\theta}_{m-1}$. However, for any $m'\\in [m-1]$, the estimator $\\hat{\\theta}_{m'}$ only depends on the contexts $x_{\\tau,a_\\tau}$ and rewards $r_{\\tau,a_\\tau}$ with $\\tau\\in T^{(m')}$. Repeating the same arguments for the action $a_\\tau$ with $\\tau\\in T^{(m')}$, we conclude that $a_t$ only depends on the contexts $\\{x_{\\tau,a}\\}_{a\\in [K],\\tau\\in \\cup_{m'\\le m-1} T^{(m')}\\cup \\{t\\}}$ and rewards $\\{r_{\\tau,a_\\tau} \\}_{\\tau\\in \\cup_{m'\\le m-1} T^{(m')}}$. Consequently, by the disjointness of $T^{(m)}$ and $\\cup_{m'\\le m-1} T^{(m')}$, the desired conditional independence holds. \n\\end{proof}\n\nBy Lemma \\ref{lemma.cond_indep}, the conditional independence condition of Lemma \\ref{lemma.difference} holds for Algorithm \\ref{algo.sample_splitting}. Moreover, the sample splitting in Algorithm \\ref{algo.sample_splitting} reduces the sample size by a multiplicative factor at most $M$ at each round, and $M=O(\\log\\log T)$, therefore all proofs in Section \\ref{subsec.pure-exp} continue to hold with a multiplicative penalty at most doubly logarithmic in $T$. As a result, Algorithm \\ref{algo.sample_splitting} achieves the regret upper bound in Theorem \\ref{thm.stochastic}. \n\\end{enumerate}\n\\end{proof}\n\n\n\\subsection{Lower bound}\\label{subsec.stochastic_lower}\nIn this section we prove the minimax lower bound of the regret under stochastic contexts for $K=2$. \nThe lower bound argument for the stochastic context case is quite involved and we start by establishing the following key lemma. \n\\begin{lemma}\\label{lemma.lower_bound}\n\tFor any fixed grid $0=t_0 1. \n\t\\end{align*}\n\tCombining \\eqref{eq.bayesian} and \\eqref{eq.target} completes the proof of Lemma \\ref{lemma.lower_bound}. 
\n\\end{proof}\n\nWe are now ready to put everything together and complete the proof of the lower bound.\n\n\\begin{proof}[Proof of Statement 2 in Theorem~\\ref{thm.stochastic}]\nFor any fixed grid ${\\mathcal{T}}=\\{t_1,\\cdots,t_M\\}$, define $s=\\min\\{m\\in [M]: t_m\\ge d^{2} \\}$, which always exists due to our assumption that $T\\ge d^2$. Now choosing some candidates of $\\Delta \\in \\{1, \\frac{d}{\\sqrt{t_s}}, \\frac{d}{\\sqrt{t_{s+1}}}, \\cdots, \\frac{d}{\\sqrt{T}}\\} \\subset [0,1]$ in Lemma \\ref{lemma.lower_bound} gives\n\\begin{align}\\label{eq.minimax}\n\\sup_{\\theta^\\star: \\|\\theta^\\star\\|_2 \\le 1} \\mathbb{E}[R_T(\\pi)] \\ge c\\cdot \\max\\left\\{\\frac{t_s}{\\sqrt{d}}, t_{s+1}\\sqrt{\\frac{d}{t_s}}, t_{s+2}\\sqrt{\\frac{d}{t_{s+1}}},\\cdots, T\\sqrt{\\frac{d}{t_{M-1}}} \\right\\}\n\\end{align}\nfor some numerical constant $c>0$. After some algebra, the right-hand side of \\eqref{eq.minimax} may be further lower bounded by\n\\begin{align*}\n\\sup_{\\theta^\\star: \\|\\theta^\\star\\|_2 \\le 1} \\mathbb{E}[R_T(\\pi)] \\ge c\\sqrt{dT}\\cdot \\left(\\frac{T}{d^2}\\right)^{\\frac{1}{2(2^{M-s+1}-1)}} \\ge c\\sqrt{dT}\\cdot \\left(\\frac{T}{d^2}\\right)^{\\frac{1}{2(2^{M}-1)}}. \n\\end{align*}\n\\end{proof}\n\n\\section{Problem-Dependent Regret Bounds}\\label{sec:gap}\n\nThe regret bounds given in the previous two sections are problem-independent regret bounds (also known as gap-independent regret bounds in the bandits literature): they do not depend on the underlying parameters of the probability distribution. When the contexts are stochastic, under certain ``margin\" conditions, we can also consider problem-dependent regret bounds that can result in sharper bounds than those problem-independent ones. When the number of contexts is small (e.g., $K=2$), there could be a large margin between the performance of the optimal context and any sub-optimal contexts if $\\|\\theta^\\star\\|_2$ is bounded away from zero, raising the possibility that a problem-dependent regret bound sometimes better than the worst-case regret $\\Theta(\\sqrt{dT})$ could be obtained in sequential batch learning. The next theorem characterizes this. \n\n\\begin{theorem}\\label{thm.problem-dependent}\n\tAssume $K=2$, and let $T$, $M=O(\\log T)$, $d$ be the learning horizon, number of batches and the dimension of each context respectively. Denote by $\\mathsf{polylog}(T)$ all the poly-logarithmic factors in $T$. Assume without loss of generality $\\|\\theta^*\\|_2 > 0$. \n\t\\begin{enumerate}\n\t\t\\item \n\t\tUnder Assumptions \\ref{aspn.TKd} and \\ref{assumption:cov}, there exists a sequential batch learning algorithm \\textbf{Alg}= $({\\mathcal{T}}, \\pi)$ (explicitly defined below ) that achieves the following regret:\n\t\t\\begin{align*}\n\t \\mathbb{E}_{\\theta^\\star}[R_T(\\pi)] \\le \\mathsf{polylog}(T)\\cdot \\frac{(d\/\\kappa)^{3\/2}}{\\|\\theta^\\star\\|_2} \\left(\\frac{T}{d^2}\\right)^{\\frac{1}{M}}.\n\t\t\\end{align*}\n\t\t\\item\n\t\tConversely, when the contexts $x_{t,a}\\sim {\\mathcal{N}}(0,I_d\/d)$ are independent over all $a\\in [K], t\\in [T]$, for any $M\\le T$ and any sequential batch learning algorithm, we have:\n\t\t\\begin{align*}\n\t\t\\sup_{\\theta^\\star: \\|\\theta^\\star\\|_2\\le 1} \\|\\theta^\\star\\|_2\\cdot \\mathbb{E}_{\\theta^\\star}[R_T(\\pi)] \\ge c\\cdot d^{3\/2} \\left(\\frac{T}{d^2}\\right)^{\\frac{1}{M}}, \n\t\t\\end{align*}\n\t\twhere $c>0$ is a numerical constant independent of $(T,M,d)$. 
\n\t\\end{enumerate}\n\\end{theorem}\n\\begin{corollary}\nIn this setting, it is necessary and sufficient to have $\\Theta(\\log(T\/d^2))$ batches to achieve the optimal problem-dependent regret $\\tilde{\\Theta}(d^{3\/2} \/ \\|\\theta^\\star\\|_2)$. \nHere we are not aiming to get the tightest dependence on $\\log T$ (note that $\\tilde{\\Theta}(\\cdot)$ hides polylog factors). \n\\end{corollary}\n\nNote that the dependence on $T$ is significantly better than $\\sqrt{T}$ in the problem-dependent bound, showing that a large $\\|\\theta^\\star\\|_2$ makes learning simpler. We remark that although the problem-dependent regret in Theorem \\ref{thm.problem-dependent} only holds for $K=2$, the generalization to a generic $K$ is straightforward. Moreover, the margin between the optimal context and the sub-optimal context shrinks quickly as $K$ gets larger, and therefore the margin-based problem-dependent bound is not that useful compared with the worst-case regret bound in Theorem \\ref{thm.stochastic} for large $K$. \n\n\\subsection{Proof of the Upper Bound in Theorem \\ref{thm.problem-dependent}}\nThe sequential batch learning algorithm which achieves the claimed upper bound is exactly the batched pure-exploitation algorithm with sample splitting shown in Algorithm \\ref{algo.sample_splitting}, with a different choice of the grid: we consider a geometric grid ${\\mathcal{T}}' = \\{t_1', t_2', \\cdots, t_M'\\}$ with\n\\begin{align*}\nt_1' = bd^2, \\qquad t_m' = \\lfloor bt_{m-1}' \\rfloor, \\quad m=2,3,\\cdots,M,\n\\end{align*}\nwhere $b = \\Theta((T\/d^2)^{1\/M})$ so that $t_M' = T$. Next we show that with the above choice of the grid, Algorithm \\ref{algo.sample_splitting} attains the regret upper bound in Theorem \\ref{thm.problem-dependent}. \n\nConsider the $m$-th batch with any $m\\ge 2$, and any time point $t$ inside this batch. Define $v_t = x_{t,1} - x_{t,2}$, then our algorithm chooses the wrong arm if and only if $v_t^\\top \\theta^\\star$ and $v_t^\\top \\hat{\\theta}_{m-1}$ have different signs. Hence, the instantenous regret at time $t$ is\n\\begin{align*}\nv_t^\\top \\theta^\\star \\cdot \\mathbbm{1}(v_t^\\top \\theta^\\star \\ge 0, v_t^\\top \\hat{\\theta}_{m-1} \\le 0) - v_t^\\top \\theta^\\star \\cdot \\mathbbm{1}(v_t^\\top \\theta^\\star \\le 0, v_t^\\top \\hat{\\theta}_{m-1} \\ge 0),\n\\end{align*}\nand by the symmetry of $v_t\\sim {\\mathcal{N}}(0,2\\Sigma)$, it holds that\n\\begin{align*}\n\\mathbb{E}\\left[\\max_{a\\in \\{1,2\\}} (x_{t,a} - x_{t,a_t})^\\top \\theta^\\star \\right] = 2\\mathbb{E}\\left[v_t^\\top \\theta^\\star\\cdot \\mathbbm{1}(v_t^\\top \\theta^\\star \\ge 0, v_t^\\top \\hat{\\theta}_{m-1} \\le 0) \\right]. \n\\end{align*}\nSet $\\delta = \\sqrt{d\\log T\/(\\kappa t_{m-1}')}$, and partition the non-negative axis $\\mathbb{R}_+$ into $\\bigcup_{i=0}^\\infty [i\\delta, (i+1)\\delta)$. 
Using this partition gives\n\\begin{align}\n&\\mathbb{E}\\left[v_t^\\top \\theta^\\star\\cdot \\mathbbm{1}(v_t^\\top \\theta^\\star \\ge 0, v_t^\\top \\hat{\\theta}_{m-1} \\le 0)\\cdot \\mathbbm{1}(\\|v_t\\|_2 \\le \\sqrt{10\\log T}) \\right] \\nonumber\\\\\n&= \\sum_{i=0}^\\infty \\mathbb{E}\\left[v_t^\\top \\theta^\\star\\cdot \\mathbbm{1}(v_t^\\top \\theta^\\star \\in [i\\delta, (i+1)\\delta), v_t^\\top \\hat{\\theta}_{m-1} \\le 0)\\cdot \\mathbbm{1}(\\|v_t\\|_2 \\le \\sqrt{10\\log T}) \\right] \\nonumber\\\\\n&\\le \\sum_{i=0}^\\infty (i+1)\\delta\\cdot \\mathbb{P}\\left(v_t^\\top \\theta^\\star \\in [i\\delta, (i+1)\\delta), v_t^\\top \\hat{\\theta}_{m-1} \\le 0,\\|v_t\\|_2 \\le \\sqrt{10\\log T} \\right) \\nonumber\\\\\n&\\le \\sum_{i=0}^\\infty (i+1)\\delta\\cdot \\mathbb{P}\\left(v_t^\\top \\theta^\\star \\in [i\\delta, (i+1)\\delta), v_t^\\top (\\theta^\\star - \\hat{\\theta}_{m-1}) \\ge i\\delta,\\|v_t\\|_2 \\le \\sqrt{10\\log T} \\right) \\nonumber\\\\\n&\\le \\sum_{i=0}^\\infty (i+1)\\delta\\cdot \\mathbb{P}\\left(v_t^\\top \\theta^\\star \\in [i\\delta, (i+1)\\delta) \\right)\\cdot \\mathbb{P}\\left( v_t^\\top (\\theta^\\star - \\hat{\\theta}_{m-1}) \\ge i\\delta \\big| v_t^\\top \\theta^\\star \\in [i\\delta, (i+1)\\delta), \\|v_t\\|_2 \\le \\sqrt{10\\log T}\\right). \\label{eq.partition}\n\\end{align}\n\nWe deal with each term in \\eqref{eq.partition} separately. For $\\mathbb{P}\\left(v_t^\\top \\theta^\\star \\in [i\\delta, (i+1)\\delta) \\right)$, note that $v_t^\\top\\theta^\\star$ is a normal random variable with variance $(\\theta^\\star)^\\top \\Sigma \\theta^\\star \\ge \\lambda_{\\min}(\\Sigma)\\|\\theta^\\star\\|_2^2\\ge \\kappa\\|\\theta^\\star\\|_2^2\/d$, thus the probability density of this random variable is upper bounded by $\\sqrt{d\/2\\pi \\kappa}\/\\|\\theta^\\star\\|_2$ everywhere. Therefore, \n\\begin{align}\\label{eq.anticoncentration}\n\\mathbb{P}\\left(v_t^\\top \\theta^\\star \\in [i\\delta, (i+1)\\delta) \\right) \\le \\delta\\cdot \\frac{\\sqrt{d}}{\\sqrt{2\\pi\\kappa}\\|\\theta^\\star\\|_2}. \n\\end{align}\nFor the second term of \\eqref{eq.partition}, the proof of Lemma \\ref{lemma.difference} shows that the random vector $\\theta^\\star - \\hat{\\theta}_{m-1}\\in \\mathbb{R}^d$ is $d\/(c\\kappa t_{m-1}')$-subGaussian for some absolute constant $c>0$, and is also independent of $v_t$. Hence, conditioning on $\\|v_t\\|_2\\le \\sqrt{10\\log T}$, the random variable $v_t^\\top(\\theta^\\star - \\hat{\\theta}_{m-1})$ is also subGaussian with parameter $\\|v_t\\|_2^2d\/(c\\kappa t_{m-1})\\le 10d\\log T\/(c\\kappa t_{m-1}')$. Consequently, subGaussian concentration gives\n\\begin{align}\\label{eq.concentration}\n\\mathbb{P}\\left( v_t^\\top (\\theta^\\star - \\hat{\\theta}_{m-1}) \\ge i\\delta \\big| v_t^\\top \\theta^\\star \\in [i\\delta, (i+1)\\delta), \\|v_t\\|_2 \\le \\sqrt{10\\log T}\\right) \\le \\exp\\left(-\\frac{c\\kappa i^2\\delta^2t_{m-1}'}{20d\\log T}\\right). \n\\end{align}\n\nCombining \\eqref{eq.partition}, \\eqref{eq.anticoncentration}, \\eqref{eq.concentration} and the choice of $\\delta$, we conclude that\n\\begin{align*}\n\\mathbb{E}\\left[v_t^\\top \\theta^\\star\\cdot \\mathbbm{1}(v_t^\\top \\theta^\\star \\ge 0, v_t^\\top \\hat{\\theta}_{m-1} \\le 0)\\cdot \\mathbbm{1}(\\|v_t\\|_2 \\le \\sqrt{10\\log T}) \\right] &\\le \\frac{d^{3\/2}\\log T}{\\sqrt{2\\pi \\kappa^3}t_{m-1}'\\|\\theta^\\star\\|_2} \\sum_{i=0}^\\infty (i+1)e^{-ci^2\/20} \\\\\n&\\le C\\cdot \\frac{d^{3\/2}\\log T}{\\kappa^{3\/2}t_{m-1}'\\|\\theta^\\star\\|_2}. 
\n\\end{align*}\nMoreover, since $v_t^\\top \\theta^\\star \\le 2$ almost surely and $\\mathbb{P}(\\|v_t\\|_2\\ge \\sqrt{10\\log T})\\le T^{-5}$, we also have\n\\begin{align*}\n\\mathbb{E}\\left[v_t^\\top \\theta^\\star\\cdot \\mathbbm{1}(v_t^\\top \\theta^\\star \\ge 0, v_t^\\top \\hat{\\theta}_{m-1} \\le 0)\\cdot \\mathbbm{1}(\\|v_t\\|_2 > \\sqrt{10\\log T}) \\right] \\le 2T^{-5}. \n\\end{align*}\nTherefore, by the choice of the grid, the expected total regret in the $m$-th batch is at most\n\\begin{align*}\n\\left(C\\cdot \\frac{d^{3\/2}\\log T}{\\kappa^{3\/2}t_{m-1}'\\|\\theta^\\star\\|_2} + 2T^{-5}\\right)\\cdot t_m' = O\\left(\\frac{d^{3\/2}\\log T}{\\kappa^{3\/2}\\|\\theta^\\star\\|_2}\\cdot \\left(\\frac{T}{d^2}\\right)^{1\/M} \\right). \n\\end{align*}\n\nThe first batch is handled in the same way as the upper bound proof of Theorem \\ref{thm.stochastic}. Specifically, the expected total regret in the first batch is \n\\begin{align*}\nO\\left( t_1'\\cdot \\sqrt{\\frac{\\log T}{d}} \\right) = O\\left(\\sqrt{d^3\\log T}\\left(\\frac{T}{d^2}\\right)^{\\frac{1}{M}}\\right) = O\\left(\\frac{\\sqrt{d^3\\log T}}{\\kappa^{3\/2}\\|\\theta^\\star\\|_2}\\left(\\frac{T}{d^2}\\right)^{\\frac{1}{M}} \\right).\n\\end{align*}\nFinally summing up all batches $m=1,2,\\cdots,M$ completes the proof. \n\n\\subsection{Proof of the lower bound in Theorem \\ref{thm.problem-dependent}} The proof is entirely analogous to the lower bound proof of Theorem \\ref{thm.stochastic}. First we observe that by Lemma \\ref{lemma.lower_bound}, for any $\\Delta \\in [0,1]$ and fixed grid ${\\mathcal{T}} = \\{t_1, t_2, \\cdots, t_M\\}$ we have\n\\begin{align*}\n\\inf_\\pi \\sup_{\\theta^\\star: \\|\\theta^\\star\\|_2\\le 1} \\|\\theta^\\star\\|_2\\cdot \\mathbb{E}_{\\theta^\\star}[R_T(\\pi)] &\\ge \\Delta\\cdot \\inf_\\pi \\sup_{\\theta^\\star: \\Delta\\le \\|\\theta^\\star\\|_2\\le 1} \\mathbb{E}_{\\theta^\\star}[R_T(\\pi)] \\\\\n&\\ge \\Delta^2 \\cdot\\sum_{m=1}^M \\frac{t_m-t_{m-1}}{10\\sqrt{d}}\\exp\\left(-\\frac{16t_{m-1}\\Delta^2}{d^2}\\right). \n\\end{align*}\nNow define $s = \\min\\{m\\in [M]: t_m \\ge d^2 \\}$, which always exists due to the assumption $T\\ge d^2$. Choosing $\\Delta \\in \\{1,d^2\/t_s,d^2\/t_{s+1},\\cdots,d^2\/t_M\\} \\subseteq [0,1]$ in the above inequality gives\n\\begin{align*}\n\\inf_\\pi \\sup_{\\theta^\\star: \\|\\theta^\\star\\|_2\\le 1} \\|\\theta^\\star\\|_2\\cdot \\mathbb{E}_{\\theta^\\star}[R_T(\\pi)] \\ge c\\cdot \\max\\left\\{\\frac{t_s}{\\sqrt{d}}, \\frac{d^{3\/2}t_{s+1}}{t_s}, \\frac{d^{3\/2}t_{s+2}}{t_{s+1}},\\cdots, \\frac{d^{3\/2}T}{t_{M-1}} \\right\\}\n\\end{align*}\nfor some absolute constant $c>0$. Finally, applying $\\max\\{a_1,\\cdots,a_n\\} \\ge \\sqrt[n]{a_1a_2\\cdots a_n}$ gives\n\\begin{align*}\n\\inf_{\\pi} \\sup_{\\theta^\\star: \\|\\theta^\\star\\|_2\\le 1} \\|\\theta^\\star\\|_2\\cdot \\mathbb{E}_{\\theta^\\star}[R_T(\\pi)] \\ge c\\cdot d^{3\/2}\\left(\\frac{T}{d^2}\\right)^{\\frac{1}{M-s+1}} \\ge c\\cdot d^{3\/2}\\left(\\frac{T}{d^2}\\right)^{\\frac{1}{M}},\n\\end{align*}\nas claimed. \n\n\n\\section{Conclusion}\n\nAs we have shown in this paper, sequential batch learning provides an interesting and nontrivial departure from the traditional online learning setting where feedback is immediately observed and incorporated into making the next decision. We studied sequential batch learning in the linear contextual bandits setting and provided an in-depth inquiry into the algorithms and theoretical performance. 
An important insight here is that the nature of the contexts-adversarial or stochastic--has a significant impact on the optimal achievable performance, as well as the algorithms that would achieve the minimax optimal regret bounds.\n\nSeveral questions immediately suggest themselves.\nFirst, in the stochastic context setting, our current regret upper bound\ndepends heavily on the Gaussian assumption of the contexts. It would be interesting to see how far we can move beyond the Gaussian family. \nIt would be unlikely that the same result holds for any distribution and hence, characterizing a (hopefully large) class of distributions under which the same tight bounds are achievable would be interesting.\nAnother direction would be to look at more complex reward structures that go beyond linear bandits and see to what extent can the current set of results be generalized. We leave them for future work. \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Datasets and their Templates}\n\\label{sec:appendix:analysis}\n\n\\subsection{Division of Crowdsourcing Instructions into Minimal Tasks}\n\\label{sec:appendix:division:screenshots}\nFig.~\\ref{fig:subtsakdivision} shows an example of how a task is divided into multiple subtasks for the MC-TACO dataset. MC-TACO has five categories (Event Duration, Event Frequency etc.). Each category contributes to 2 subtasks one for question generation and one for answer generation.\n\n\n\n\n\n\\paragraph{Number of tasks in each dataset.} \nFig.~\\ref{fig:no. of subtasks} illustrates how the number of steps in the data creation process varies across the 6 datasets. QASC and MC-TACO contain a relatively higher number of steps in the data creation process in comparison to DROP, Quoref, CosmosQA, and Winogrande. \n\n\n\\begin{figure}[H]\n\\centering\n \\includegraphics[scale=0.28,trim=2cm 4cm 0cm 3cm]{figures\/Number_of_subtasks.pdf}\n \\caption{Variations in the number of subtasks}\n \\label{fig:no. of subtasks}\n\\end{figure}\n\n\\subsection{Analysis of Crowdsourcing Templates}\nWe analyzed crowdsourcing templates of 6 datasets: CosmosQA~\\cite{huang2019cosmos}, \nDROP~\\cite{dua2019drop}, \nMC-TACO~\\cite{zhou2019going}, \nQASC~\\cite{khot2020qasc}, \nQuoref~\\cite{dasigi2019quoref}, and\nWinogrande~\\cite{sakaguchi2020winogrande}. Our intention behind the analysis is to identify similarities and differences across templates and subsequently decide regarding the collection of more templates.\n\\label{appendix:analysis:templates}\n\n\n\\paragraph{Size of the instructions.} We observe significant variation in size across the 6 datasets (Fig.~\\ref{fig:size inst}). In the case of QASC, the instruction size associated with each step of the data creation process is very high, whereas for Winogrande, it is exactly the opposite-- instruction size associated with each step of the data creation process is very low. Instead, the size of the common instruction (i.e., the instruction preceding the first step of the data creation process) is high in Winogrande; this is also seen for DROP. The major mode of instruction varies across datasets. Examples and instructions associated with each step of data creation respectively take up the majority of space in Quoref and CosmosQA. 
MC-TACO relies on examples to explain the crowdsourcing task, while Winogrande and QASC depend mostly on the common instructions and on the instructions associated with each step of the data creation process, respectively, to explain the task to the crowdworker.\n\n\\paragraph{The number of positive\/negative examples.} \nVariation in the occurrence of \\textsc{Positive} and \\textsc{Negative} Examples across datasets is illustrated in Fig.~\\ref{fig:no. of examples}. Only Winogrande provides an equal number of \\textsc{Positive} and \\textsc{Negative} Examples. \nQASC instructions do not contain any \\textsc{Negative} Examples. \nOverall, DROP instructions contain a relatively higher number of examples than the other datasets.\n\n\\begin{figure}[H]\n\\centering\n \\includegraphics[width=0.96\\columnwidth ]{figures\/example_num.png}\n \\caption{Variation in the number of positive and negative examples}\n\\label{fig:no. of examples}\n\\end{figure}\n\n\n\n\n\\begin{figure}[H]\n \\centering\n \\includegraphics[width=0.96\\columnwidth]{figures\/instruction_size.pdf}\n \\caption{Variation in the number of sentences in the crowdsourcing instructions across datasets}\n \\label{fig:size inst}\n\\end{figure}\n\n\\paragraph{Presence of reasons\/suggestions in examples.} All datasets except QASC contain both \\textsc{Positive} and \\textsc{Negative} Examples. \nHowever, Quoref is the only dataset to provide \\textsc{Reasons} for all the \\textsc{Positive} and \\textsc{Negative} Examples. There are explanations associated with each of the \\textsc{Negative} Examples, but the presence of explanations associated with \\textsc{Positive} Examples varies across datasets. Finally, Quoref is the only dataset to provide \\textsc{Suggestions} along with the \\textsc{Reasons} associated with the \\textsc{Negative} Examples.\n\n\\begin{comment}\n\\paragraph{Dimensions of Input and Output:}The input dimension of a step is defined as the number of previous step outputs that are fed as input. Parallely, the output dimension of a step is the number of distinct outputs the model needs to produce in that step-- for example, if a model has to generate both a question and an answer in a step, the output dimension will be 2. CosmosQA and QASC have relatively high dimensional instances, whereas Quoref and MC-TACO have relatively low dimensional instances.\n\\end{comment}\n\n\\begin{figure}\n\\centering\n \\includegraphics[width=0.5\\textwidth,trim=0cm 0cm 0cm 0cm]{figures\/sub-task.pdf}\n \\caption{\n Dividing a data creation task into multiple subtasks for the MC-TACO dataset. \n }\n \\label{fig:subtsakdivision}\n\\end{figure}\n\n\n\n\n\\subsection{Qualitative Analysis}\n\\paragraph{Writing Style.} There is significant variation in writing style across the datasets, even among those datasets \nthat have a common objective (e.g., DROP, Quoref and QASC). \nDROP instructions say \\textit{\"There is an AI running in the background which will also try to answer the question. You won't be able to submit the question if the AI gives the same response.\"} The writing style in Quoref, however, is different: \\textit{\"We also want you to avoid questions that can be answered correctly by someone without actually understanding the paragraph. ...\"} \n\n\\paragraph{Information.} We observe that the instructions of one dataset sometimes contain information that is also relevant to several other datasets, even though those datasets do not include similar information in their own instructions.
For example, Quoref, DROP and CosmosQA are datasets that are all based on reading comprehension tasks. CosmosQA contains a step in the data creation process asking users to skip passages containing inappropriate or offensive content. This information is also relevant to Quoref and DROP, but is not mentioned in their respective instructions.\n\n\n\n\n\\begin{figure}[t]\n \\centering\n \n \\includegraphics[scale=0.36,trim=0.1cm 0.1cm 0.1cm 0.1cm]{figures\/Task_specification.pdf}\n \n \\caption{Variation in Task Specification: Quoref contains a single line instruction whereas the CosomosQA contains a detailed instruction. QASC on the other hand, contains examples along with instruction.}\n \\label{fig:task_specification}\n\\end{figure}\n\n\n\n\\paragraph{Hardness.} In a typical crowdsourcing task, certain tasks may be harder than the others, often these are the core tasks, e.g.: question generation, adversarial data creation, etc. Additional information, especially in the form of tips is always helpful in solving these hard tasks. Figure~\\ref{fig:task_specification} illustrates that the task of question generation is stated differently in Quoref, CosmosQA, and QASC. QASC mentions an easy and detailed way to create questions, whereas CosmosQA mentions several different attributes of a good quality question. Knowing about the CosmosQA and QASC question generation processes may help with data creation for Quoref and other such question generation tasks, where less additional information is provided regarding question creation. \n\n\n\n\n\n\\subsection{Data Curation Effort}\n\\label{appendix:subsect:curation}\nTable \\ref{tab:datacuration} shows the effort distribution in the data curation process of \\textsc{Natural Instructions}{}. Step-8 which involves parsing instances is the main bottleneck in the data curation process. Table \\ref{tab:structure} shows the detailed structure of tasks in \\textsc{Natural Instructions}{}. Fig.~\\ref{fig:examplesfull} shows examples of four different tasks in \\textsc{Natural Instructions}{}.\n\n\\begin{table}[h]\n \\centering\n \\footnotesize\n \n \\begin{tabular}{m{0.5cm}p{4.5cm}p{1.5cm}}\n \\toprule\n step & task & time per task \\\\ \n \\midrule\n 1 & Identify crowdsourced dataset and engage with their authors. & 20-30 mins \\\\\n \n 2 & Go through the template and understand the task. & 10-15 mins \\\\ \n 3 & Manually fill fields in the schema with content from the template. & 30-45 mins \\\\ \n 4 & Iterate over the instructions to ensure their clarity while eliminating the repeated content. Fix writing issue in examples, also typos etc. \n \n & 2-3 hrs\\\\ \n 5 & Create negative examples if not present. Add the missing explanations to the examples. & 1-2 hrs \\\\ \n \n 6 & Extract the input\/output instances from raw crowdsourcing annotations. & 0.5-24 hrs \\\\ \n \n 7 & Final inspections of the data to verify the data quality \n \n & 0.25- 2hrs \\\\\n \\midrule\n & Overall & 6-34 hrs\\\\\n \\bottomrule\n \\end{tabular}\n \n \\caption{Steps taken to curate each task in \\textsc{Natural Instructions}{} and their estimated times.\n \n }\n \\label{tab:datacuration}\n\\end{table}\n\n\n\n\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[scale=0.75,trim=0.7cm 0.5cm 0.5cm 1.5cm]{figures\/examples_detailed.pdf}\n \\caption{\n Examples from \\textsc{Natural Instructions}{}. \n Each task follows the schema provided in Fig.~\\ref{fig:schema_plate}. 
\n }\n \\label{fig:examplesfull}\n\\end{figure*}\n\n\n\\begin{table*}\n \\centering\n \\small\n \\begin{adjustbox}{max width=\\textwidth}\n \\begin{tabular}{llcc}\n \\toprule\n task id & title& source dataset & task category\\\\\n \\midrule\n1 & task001\\_quoref\\_question\\_generation & Quoref & Question Generation \\\\\n2 & task002\\_quoref\\_answer\\_generation & Quoref & Answer Generation \\\\\n\\midrule \n3 & task003\\_mctaco\\_question\\_generation\\_event\\_duration & MC-TACO & Question Generation \\\\\n4 & task004\\_mctaco\\_answer\\_generation\\_event\\_duration & MC-TACO & Answer Generation \\\\\n5 & task005\\_mctaco\\_wrong\\_answer\\_generation\\_event\\_duration & MC-TACO & Incorrect Answer Generation \\\\\n6 & task006\\_mctaco\\_question\\_generation\\_transient\\_stationary & MC-TACO & Question Generation \\\\\n7 & task007\\_mctaco\\_answer\\_generation\\_transient\\_stationary & MC-TACO & Answer Generation \\\\\n8 & task008\\_mctaco\\_wrong\\_answer\\_generation\\_transient\\_stationary & MC-TACO & Incorrect Answer Generation \\\\\n9 & task009\\_mctaco\\_question\\_generation\\_event\\_ordering & MC-TACO & Question Generation \\\\\n10 & task010\\_mctaco\\_answer\\_generation\\_event\\_ordering & MC-TACO & Answer Generation \\\\\n11 & task011\\_mctaco\\_wrong\\_answer\\_generation\\_event\\_ordering & MC-TACO & Incorrect Answer Generation \\\\\n12 & task012\\_mctaco\\_question\\_generation\\_absolute\\_timepoint & MC-TACO & Question Generation \\\\\n13 & task013\\_mctaco\\_answer\\_generation\\_absolute\\_timepoint & MC-TACO & Answer Generation \\\\\n14 & task014\\_mctaco\\_wrong\\_answer\\_generation\\_absolute\\_timepoint & MC-TACO & Incorrect Answer Generation \\\\\n15 & task015\\_mctaco\\_question\\_generation\\_frequency & MC-TACO & Question Generation \\\\\n16 & task016\\_mctaco\\_answer\\_generation\\_frequency & MC-TACO & Answer Generation \\\\\n17 & task017\\_mctaco\\_wrong\\_answer\\_generation\\_frequency & MC-TACO & Incorrect Answer Generation \\\\\n18 & task018\\_mctaco\\_temporal\\_reasoning\\_presence & MC-TACO & Classification \\\\\n19 & task019\\_mctaco\\_temporal\\_reasoning\\_category & MC-TACO & Classification \\\\\n20 & task020\\_mctaco\\_span\\_based\\_question & MC-TACO & Classification \\\\\n21 & task021\\_mctaco\\_grammatical\\_logical & MC-TACO & Classification \\\\\n\\midrule \n22 & task022\\_cosmosqa\\_passage\\_inappropriate\\_binary & Cosmosqa & Classification \\\\\n23 & task023\\_cosmosqa\\_question\\_generation & Cosmosqa & Question Generation \\\\\n24 & task024\\_cosmosqa\\_answer\\_generation & Cosmosqa & Answer Generation \\\\\n25 & task025\\_cosmosqa\\_incorrect\\_answer\\_generation & Cosmosqa & Incorrect Answer Generation \\\\\n\\midrule \n26 & task026\\_drop\\_question\\_generation & DROP & Question Generation \\\\\n27 & task027\\_drop\\_answer\\_type\\_generation & DROP & Classification \\\\\n28 & task028\\_drop\\_answer\\_generation & DROP & Answer Generation \\\\\n\\midrule \n29 & task029\\_winogrande\\_full\\_object & Winogrande & Minimal Text Modification \\\\\n30 & task030\\_winogrande\\_full\\_person & Winogrande & Minimal Text Modification \\\\\n31 & task031\\_winogrande\\_question\\_generation\\_object & Winogrande & Question Generation \\\\\n32 & task032\\_winogrande\\_question\\_generation\\_person & Winogrande & Question Generation \\\\\n33 & task033\\_winogrande\\_answer\\_generation & Winogrande & Answer Generation \\\\\n34 & task034\\_winogrande\\_question\\_modification\\_object & Winogrande & Minimal Text Modification 
\\\\\n35 & task035\\_winogrande\\_question\\_modification\\_person & Winogrande & Minimal Text Modification \\\\\n\\midrule \n36 & task036\\_qasc\\_topic\\_word\\_to\\_generate\\_related\\_fact & QASC & Minimal Text Modification \\\\\n37 & task037\\_qasc\\_generate\\_related\\_fact & QASC & Minimal Text Modification \\\\\n38 & task038\\_qasc\\_combined\\_fact & QASC & Minimal Text Modification \\\\\n39 & task039\\_qasc\\_find\\_overlapping\\_words & QASC & Verification \\\\\n40 & task040\\_qasc\\_question\\_generation & QASC & Question Generation \\\\\n41 & task041\\_qasc\\_answer\\_generation & QASC & Answer Generation \\\\\n42 & task042\\_qasc\\_incorrect\\_option\\_generation & QASC & Incorrect Answer Generation \\\\\n\\midrule \n43 & task043\\_essential\\_terms\\_answering\\_incomplete\\_questions & Essential Terms & Answer Generation \\\\\n44 & task044\\_essential\\_terms\\_identifying\\_essential\\_words & Essential Terms & Verification \\\\\n\\midrule \n45 & task045\\_miscellaneous\\_sentence\\_paraphrasing & Miscellaneous & Minimal Text Modification \\\\\n46 & task046\\_miscellaenous\\_question\\_typing & Miscellaenous & Classification \\\\\n47 & task047\\_miscellaenous\\_answering\\_science\\_questions & Miscellaenous & Answer Generation \\\\\n\\midrule \n48 & task048\\_multirc\\_question\\_generation & MultiRC & Question Generation \\\\\n49 & task049\\_multirc\\_questions\\_needed\\_to\\_answer & MultiRC & Classification \\\\\n50 & task050\\_multirc\\_answerability & MultiRC & Classification \\\\\n51 & task051\\_multirc\\_correct\\_answer\\_single\\_sentence & MultiRC & Answer Generation \\\\\n52 & task052\\_multirc\\_identify\\_bad\\_question & MultiRC & Classification \\\\\n53 & task053\\_multirc\\_correct\\_bad\\_question & MultiRC & Minimal Text Modification \\\\\n54 & task054\\_multirc\\_write\\_correct\\_answer & MultiRC & Answer Generation \\\\\n55 & task055\\_multirc\\_write\\_incorrect\\_answer & MultiRC & Incorrect Answer Generation \\\\\n56 & task056\\_multirc\\_classify\\_correct\\_answer & MultiRC & Classification \\\\\n57 & task057\\_multirc\\_classify\\_incorrect\\_answer & MultiRC & Classification \\\\\n58 & task058\\_multirc\\_question\\_answering & MultiRC & Answer Generation \\\\\n\\midrule \n59 & task059\\_ropes\\_story\\_generation & ROPES & Minimal Text Modification \\\\\n60 & task060\\_ropes\\_question\\_generation & ROPES & Question Generation \\\\\n61 & task061\\_ropes\\_answer\\_generation & ROPES & Answer Generation \\\\\n \\bottomrule\n \\end{tabular}\n \\end{adjustbox}\n \\caption{Detailed set of tasks included in \\textsc{Natural Instructions}{}}\n \\label{tab:structure}\n\\end{table*}\n\n\n\\clearpage\n\\onecolumn\n\n\\changed{\n\\subsection{Qualitative Comparison to PromptSource}\n\\label{subsec:promptsource}\nWe provide a comparison between our proposed dataset and PromptSource~\\cite{sanh2021multitask}. \nPromptSource tasks are mainly focused on the common NLP downstream tasks (such as question-answering, coreference, NLI, etc). \nHowever, since we create tasks from various steps (including the intermediate steps) in a data creation process, our instructions contain a broader variety of tasks. For example, tasks for chaining facts (task 38; Table~\\ref{tab:structure}), question typing (task 27; Table~\\ref{tab:structure}) or detecting inappropriate content (task 22; Table~\\ref{tab:structure}) are unique additions in \\textsc{Natural Instructions}{}. 
\nAdditionally, since our instructions were originally written by various researchers and targeted at crowdworkers, they are elaborate and contain the complete definition of each task. \nThis is somewhat evident from the observation that GPT3 leads to higher performance on our instructions (Table~\\ref{tab:prompt:source:gpt3:eval}). \nLast but not least, since we represent the instructions in a structured format, we are able to ablate various elements of the instructions (definition, negative\/positive examples, etc.) and empirically quantify their contributions (\\S\\ref{sec:experiments}). \n}\n\n\\begin{table}[h]\n \\centering\n \\small\n \\begin{tabular}{clcc}\n \\toprule \n Task & Model & PromptSource & \\textsc{Natural Instructions}{} \\\\\n \\midrule\n \\multirow{2}{*}{ Quoref QA (002) } & GPT3-Instruct & 43 & {\\bf 47} \\\\\n & GPT3 & 2 & {\\bf 13} \\\\\n \\multirow{2}{*}{ DROP QA (028) } & GPT3-Instruct & 6 & {\\bf 10} \\\\\n & GPT3 & 2 & {\\bf 3} \\\\\n \\bottomrule\n \\end{tabular}\n \\caption{ \n Comparing zero-shot performance of GPT3 on our instructions vs. PromptSource. \n The instructions curated in this work, despite being lengthier, lead to higher performance. \n \n }\n \\label{tab:prompt:source:gpt3:eval}\n\\end{table}\n\n\\begin{table*}[h]\n \\centering\n \n \n \\includegraphics[scale=0.88,trim=1.4cm 13.4cm 1.2cm 1.85cm,clip=true]{figures\/comparisonWithPromptSource-3.pdf}\n \\caption{Qualitative comparison of the task instructions for several shared tasks among \\textsc{Natural Instructions}{} and PromptSource~\\cite{sanh2021multitask}.}\n \\label{tab:prompt:source}\n\\end{table*}\n\n\\twocolumn\n\\clearpage\n\n\\section{Building Baselines for \\textsc{Natural Instructions}{}}\nIn this section, we provide several details on the baselines included in our work. \n\n\\subsection{Encoding of the instructions}\n\\label{appendix:subsect:encoding}\n\nAccording to our schema (\\S\\ref{subsec:schema}), each instruction $I_t$ for the $t$-th task is a set that contains the following fields:\n$$\nI_t = \\setOf{ \n \\I{t}{title}, \n \\I{t}{def.}, \n \\I{t}{avoid}, \n \\I{t}{emph.}, \n \\I{t}{prompt},\n \\I{t}{pos. ex.},\n \\I{t}{neg. ex.}\n }\n$$\n\n\nTo feed the instances to LMs, we first encode them into plain text. \nLet $enc(I, x)$ define a function that maps a given instruction $I$ and input instance $x$ to plain text. \nEvidently, there are many choices for this function. 
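For concreteness, the snippet below gives a minimal sketch of one possible implementation of $enc(I, x)$ in Python; the dictionary keys and formatting details are illustrative stand-ins for our schema fields rather than the exact code used in our experiments.
\\begin{verbatim}
# A minimal sketch of enc(I, x): serialize selected instruction fields
# and the input instance into a single plain-text prompt.
# Field names are hypothetical stand-ins for the schema elements.
def encode(instruction, x, num_pos_examples=2):
    parts = [
        "Definition: " + instruction["definition"],
        "Prompt: " + instruction["prompt"],
    ]
    for ex in instruction["positive_examples"][:num_pos_examples]:
        parts.append("Positive Example- input: " + ex["input"]
                     + " output: " + ex["output"]
                     + " reason: " + ex["reason"])
    parts.append("input: " + x)
    parts.append("output:")
    newline = chr(10)  # join the fields into the final prompt string
    return newline.join(parts)
\\end{verbatim}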
\nIn our study, we consider the following encodings: \n\n\\paragraph{\\textsc{No-instructions} encoding.}\nThis encoding is the conventional paradigm where no instructions exist: \n\n\\begin{equation} \\label{eq1}\n \\small\n \\begin{split}\n enc(I_t, x) := & \\mathtt{\\small input:} \\; x \\\\ \n & \\mathtt{\\small output:} \\textnormal{''}\n \\end{split}\n\\end{equation}\n\n\n\\paragraph{\\textsc{prompt} encoding.}\nIn this encoding, we append the prompt message before the input:\n\n\\begin{equation} \\label{eq2}\n \\small\n \\begin{split}\n enc(I_t, x) := & \\mathtt{\\small Prompt:} \\; \\I{t}{prompt} \\\\ \n & \\mathtt{\\small input:} \\; x \\\\ \n & \\mathtt{\\small output:} \\textnormal{''}\n \\end{split}\n\\end{equation}\n\n\\paragraph{\\textsc{Prompt + Definition} encoding.}\nIn this encoding, the prompt message and the task definition appear before the input: \n\\begin{equation} \\label{eq3}\n \\small\n \\begin{split}\n enc(I_t, x) := & \\textnormal{``}\\mathtt{\\small Definition:} \\; \\I{t}{def.} \\\\ \n & \\mathtt{\\small Prompt:} \\; \\I{t}{prompt} \\\\ \n & \\mathtt{\\small input:} \\; x \\\\\n & \\mathtt{\\small output:} \\textnormal{''}\n \\end{split}\n\\end{equation}\nIntuitively, this encoding is more informative and more complex than ``prompt'' encoding. \n\n\\paragraph{\\textsc{Full Instructions} encoding.}\nThis encoding contains all the instruction content: \n\\begin{equation} \n \\label{eq4}\n \\small\n \\begin{split}\n enc(I_t, x) := \n & \\textnormal{``}\\mathtt{\\small Definition:} \\; \\I{t}{def.} \\\\ \n & \\mathtt{\\small Prompt:} \\; \\I{t}{prompt} \\\\ \n & \\mathtt{\\small Things \\; to \\; Avoid:} \\; \\I{t}{avoid.} \\\\ \n & \\mathtt{\\small Emphasis \\& Caution:} \\; \\I{t}{emph.} \\\\ \n & \\textnormal{``}\\mathtt{\\small Negative Example1-} \\\\ \n & \\hspace{0.7cm} \\mathtt{\\small input:} \\; \\I{t}{pos. ex.}\\mathtt{(input)} \\\\ \n & \\hspace{0.7cm} \\mathtt{\\small output:} \\; \\I{t}{pos. ex.}\\mathtt{(output)} \\\\ \n & \\hspace{0.7cm} \\mathtt{\\small reason:} \\; \\I{t}{pos. ex.}\\mathtt{(reason)} \\\\ \n & \\mathtt{\\small Negative Example2-} \\\\ \n & \\hspace{0.7cm} \\mathtt{\\small \\hdots } \\\\ \n & \\textnormal{``}\\mathtt{\\small Positive Example1-} \\\\ \n & \\hspace{0.7cm} \\mathtt{\\small input:} \\; \\I{t}{pos. ex.}\\mathtt{(input)} \\\\ \n & \\hspace{0.7cm} \\mathtt{\\small output:} \\; \\I{t}{pos. ex.}\\mathtt{(output)} \\\\ \n & \\hspace{0.7cm} \\mathtt{\\small reason:} \\; \\I{t}{pos. ex.}\\mathtt{(reason)} \\\\ \n & \\mathtt{\\small Positive Example2-} \\\\ \n & \\hspace{0.7cm} \\mathtt{\\small \\hdots } \\\\ \n & \\mathtt{\\small input:} \\; x \\\\ \n & \\mathtt{\\small output:} \\textnormal{''}\n \\end{split}\n\\end{equation}\n\nwhere $enc_{\\textnormal{ex}} (I_t)$ is an alternating encoding positive and negative examples. We include as many examples as possible, before exceeding the input limit. \n\n\\begin{comment}\n\\newcommand{\\mathrel{+}=}{\\mathrel{+}=}\n\\begin{equation*} \n \\begin{split}\n & \\mathtt{for \\; (p, n) \\; in \\; zip(} \\I{t}{pos. ex.}, \\I{t}{neg. 
ex.} \\mathtt{):} \\\\ \n & \\hspace{0.5cm} enc_{\\textnormal{ex}} (I_t) \\mathrel{+}= \\\\ \n & \\hspace{0.99cm} \\textnormal{``}\\mathtt{\\small Positive Example-} \\\\ \n & \\hspace{0.99cm} \\mathtt{\\small input:} \\; \\mathtt{p}_{\\textnormal{\\tiny input}} \\; \\mathtt{\\small output:} \\; \\mathtt{p}_{\\textnormal{\\tiny output}} \\\\ \n & \\hspace{0.99cm} \\mathtt{\\small reason:} \\; \\mathtt{p}_{\\textnormal{\\tiny reason}} \\\\ \n & \\hspace{0.99cm} \\mathtt{\\small Negative Example-} \\\\ \n & \\hspace{0.99cm} \\mathtt{\\small input:} \\; \\mathtt{n}_{\\textnormal{\\tiny input}} \\; \\mathtt{\\small output:} \\; \\mathtt{n}_{\\textnormal{\\tiny output}} \\\\ \n & \\hspace{0.99cm} \\mathtt{\\small reason:} \\; \\mathtt{n}_{\\textnormal{\\tiny reason}} \\;\n \\mathtt{\\small suggestion:} \\; \\mathtt{n}_{\\textnormal{\\tiny sugg.}} \n \\textnormal{''} \n \\end{split}\n\\end{equation*}\n\\end{comment}\n\n\\paragraph{\\textsc{Positive Examples} encoding.}\nThis encoding contains only positive examples of the subtask (no task description, etc). \n\n\\begin{equation} \n \\label{eq5}\n \\small\n \\begin{split}\n enc(I_t, x) := \n \n & \\hspace{0.7cm} \\mathtt{\\small input:} \\; \\I{t}{pos. ex.}\\mathtt{(input)} \\\\ \n & \\hspace{0.7cm} \\mathtt{\\small output:} \\; \\I{t}{pos. ex.}\\mathtt{(output)} \\\\ \n \n \n & \\hspace{0.7cm} \\mathtt{\\small \\hdots } \\\\ \n & \\mathtt{\\small input:} \\; x \\\\ \n & \\mathtt{\\small output:} \\textnormal{''}\n \\end{split}\n\\end{equation}\nSuch example-only have been used in several recent studies in the field~\\cite{zhao2021calibrate}. \n\n\\clearpage\n\\onecolumn\n\n\\twocolumn\n\n\\section{Analysis on Baseline Results}\n\\label{sec:appendix:banalysis}\n\n\n\n\\changed{\n\\subsection{Comparison to Raw Instructions}\n\\label{subsec:efratlevycomparison}\nWe seek to understand the value of breaking the tasks into sub-tasks and mapping them into our proposed schema (\\S\\ref{sec:mapping}). \nWe compute performance of raw instructions (first sub-task of four datasets), \nin the same vein as \n\\citep{efrat2020turking}'s setup. \nWe compare this to our \\textsc{Full Instruction - neg examples} encoding. \nThe results in Table~\\ref{tab:comparison:raw:instructions} indicate that GPT3 leads to higher performance with our encoding (2nd row) compared to raw instructions (first row). \nWeak performance of LMs on raw instructions aligns with \\citep{efrat2020turking}'s finding that ``language model performs poorly''. \n\n\\newcolumntype{R}[2]{%\n >{\\adjustbox{angle=#1,lap=\\width-(#2)}\\bgroup}%\n l%\n <{\\egroup}%\n}\n\\newcommand*\\rot{\\multicolumn{1}{R{30}{1em}}\n\n\n\\begin{table}[h]\n \\small\n \\centering\n \\begin{tabular}{ccccc}\n \\toprule\n & \\rot{Quoref} & \\rot{MCTaco} & \\rot{CosmosQA} & \\rot{QASC} \\\\\n \\midrule\n \\makecell{raw instructions} & 12.5 & 5.00 & 6.9 & 3.7 \\\\\n \\makecell{our schema} & 25.8 & 42.6 & 17.7 & 51.3 \\\\ \n \\bottomrule\n \\end{tabular}\n \\caption{Comparing GPT3 performance on raw crowdsourcing instructions vs. our encoding. All numbers are ROUGE-L.} \n \\label{tab:comparison:raw:instructions}\n\\end{table}\n\nThis might be partly due to the verbose language of the raw instructions: \nthe average length of the raw instructions is $2.5k$ tokens, in comparison to $950$ tokens for our encoding. \nWhile repetition often helps human understanding, concise instructions seem to be more effective for computers. 
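As a reference point for the numbers above (and for the ROUGE-L scores reported throughout the paper), the following is a minimal sketch of how such scores can be computed with the publicly available \\texttt{rouge\\_score} Python package; this illustrates one common implementation choice and is not necessarily identical to the exact evaluation script used here.
\\begin{verbatim}
# Sketch: computing ROUGE-L with the rouge_score package.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)

def rouge_l(prediction, references):
    # Best match against any of the gold references.
    return max(scorer.score(ref, prediction)["rougeL"].fmeasure
               for ref in references)

preds = ["about two hours"]
golds = [["two hours", "around 2 hours"]]
score = sum(rouge_l(p, g) for p, g in zip(preds, golds)) / len(preds)
print(round(100 * score, 1))
\\end{verbatim}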
\n}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\begin{comment}\n\\subsection{An Ablation Study of Instructional Elements}\n\\label{sec:ablation:study}\nWe conduct an ablation study with GPT3 on 3 distinct tasks (answer generation from Winogrande; question generation from QASC; verifying temporal reasoning category of a given question from MC-TACO). \nTable~\\ref{tab:ablation:subset} (top) shows the effect of eliminating various fields in the encoding while Table~\\ref{tab:ablation:subset} (bottom) indicates the gains from adding each field. \nThe overall observation is that GPT3 benefits the most from \\emph{positive examples}, mildly from \\emph{definition}, and deteriorates with \\emph{negative examples}. \nWe hypothesize it is easier for GPT3 to mimic the patterns in positive examples while utilizing \\emph{negative examples} requires deeper understanding. \n\n\n\n\n\n\\begin{table}[h]\n \\centering\n \n \n \n \n \n \\includegraphics[scale=0.73,trim=8.7cm 5.7cm 2cm 1.9cm]{figures\/ablation-subset.pdf}\n \n \\caption{An ablation study of the different fields included in \\textsc{Natural Instructions}{} based on GPT3. This model benefits the most from \\textsc{positive} examples and the least from \\textsc{negative} examples. \n }\n \\label{tab:ablation:subset}\n\\end{table}\n\\newpage\n\\end{comment}\n\n\n\n\\begin{comment}\n\\begin{table}[ht]\n \\centering\n \\small\n \\resizebox{\\columnwidth}{!}{\n \\begin{tabular}{lcc}\n \\toprule\n error type & GPT3 & BART \\\\\n \\midrule\n \n \n \n \n does not follow instruction and generate an invalid question & 14 & 8\\\\\n \n generates a nonsensical\/vague question & 4 & 47\\\\\n copies the given fact or a subset of it & 8 & 3 \\\\\n explains the question after generating it & 6 & 0\\\\\n generates a yes\/no question & 12 & 4\\\\\n generates candidate answers as output &4 & 0\\\\\n generates questions whose answer does not exist &4 &3\\\\\n \\makecell[l]{generates generic questions independent\\\\ of the given context} &6 &0\\\\\n \\bottomrule\n \\end{tabular}\n }\n \\caption{\n Percentage of errors on QASC QG task (\\S\\ref{sec:error:analysis}). \n The numbers do not sum to 100 since the error types are not mutually exclusive. \n \n }\n \\label{Tab: Error Analysis}\n\\end{table}\n\\end{comment}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\begin{comment}\n\\begin{table*}[t]\n \\small \n \\centering\n \\resizebox{\\linewidth}{!}{\n\\begin{tabular}{lcccccc|l||cccccc|l}\n\\toprule\n& \\multicolumn{7}{c}{BART} & \\multicolumn{7}{c}{GPT3} \\\\\n\\cmidrule(r){2-8} \\cmidrule(r){9-15} \n task category \u2192 & QG & AG & CF & IAG & MM & VF & avg & QG & AG & CF & IAG & MM & VF & avg \\\\\n\\midrule\n\\textsc{No Instruction} & 26 & 6 & 0 & 21 & 33 & 7 & 13 & - & - & - & - & - & - & - \\\\\n\\midrule\n\\textsc{prompt} & 27 & 22 & 7 & 22 & 34 & \\textbf{9} & 20 & 33 & 32 & 14 & 13 & \\textbf{73} & 16 & 30 \\\\\n{\\ \\ \\ +\\textsc{definition}} & 35 & 24 & 50 & \\textbf{25} & 36 & 7 & 30$\\uparrow$ (+50) & 36 & 35 & 40 & 14 & 70 & 16 & 35$\\uparrow$ (+17)\\\\\n{ \\ \\ \\ +\\textsc{things to avoid}} & 33 & 24 & 4 & 24 & \\textbf{58} & \\textbf{9} & 25$\\uparrow$ (+25) & 28 & 33 & 11 & 16 & 68 & 14 & 28$\\downarrow$ (-7) \\\\\n{\\ \\ \\ +\\textsc{emphasis}} & 38 & 23 & 16 & \\textbf{26} & 49 & 3 & 26$\\uparrow$ (+30) & 29 & 28 & 18 & 16 & 72 & 16 & 30 \\\\\n{\\ \\ \\ +\\textsc{pos. examp.}} & 53 & 22 & 14 & \\textbf{25} & 17 & 7 & 23$\\uparrow$ (+15) & \\textbf{43} & 49 & 29 & 21 & 70 & \\textbf{36} & 41$\\uparrow$ (+37) \\\\\n{\\ \\ \\ +\\textsc{definition+pos. 
examp.}} & 51 & 23 & \\textbf{56} & \\textbf{25} & 37 & 6 & 33$\\uparrow$ (+65) & \\textbf{43} & 50 & \\textbf{45} & \\textbf{23} & 70 & 32 & \\textbf{44}$\\uparrow$(+47) \\\\\n{\\ \\ \\ +\\textsc{pos, neg ex+ explan.}} & 50 & 21 & 27 & 25 & 50 & 7 & 30 $\\uparrow$ (+50) & 32 & 19 & 8 & 12 & 61 & 13 & 24$\\downarrow$(-20) \\\\\n\\textsc{pos. examp.} & \\textbf{55} & 6 & 18 & \\textbf{25} & 8 & 6 & 20 & 30 & 32 & 15 & 16 & 68 & 23 & 31$\\uparrow$(+3) \\\\\n\\midrule\n\\textsc{Full Instruction} & 46 & 25 & 52 & 25 & 35 & 7 & 32$\\uparrow$ (+60) & 33 & 18 & 8 & 12 & 60 & 11 & 24$\\downarrow$(-20) \\\\\n{\\ \\ \\ -\\textsc{ examples }} & 40 & 24 & 36 & 25 & 55 & 8 & 31$\\uparrow$ (+55) & 31 & 34 & 39 & 14 & 69 & 13 & 33$\\uparrow$(+10) \\\\\n{\\ \\ \\ - \\textsc{neg. examp.}} & 52 & \\textbf{30} & 50 & \\textbf{25} & 47 & 8 & \\textbf{35}$\\uparrow$ (+75) & \\textbf{43} & \\textbf{54} & 44 & 21 & 70 & 32 & \\textbf{44}$\\uparrow$(+47) \\\\\n\\bottomrule\n\\end{tabular}\n}\n \\caption{\n Full BART and GPT3 results with various input encodings for different task categories, under random split (\\S\\ref{subsec:split}).\n Both models show improved results when encoded with instructions, comparing relative gains indicated in the `avg' columns (in percentage compared to \\textsc{prompt} encoding.)\n Category names: QG: Question Generation, AG: Answer Generation, CF: Classification, IAG: Incorrect Answer Generation, MM: Minimal Text Modification, VF: Verification.\n \n }\n \\label{tab:random:splitfull2}\n\\end{table*}\n\\end{comment}\n\n\n\n\\begin{comment}\n\\begin{figure*}\n \\centering\n \\begin{subfigure}[b]{0.4\\textwidth}\n \\caption{GPT3}\n \\includegraphics[scale=0.62,trim=0cm 0cm 0cm 0.1cm ]{figures\/gains-categories-gpt.pdf}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.4\\textwidth}\n \\caption{BART}\n \\includegraphics[scale=0.62,trim=0cm 0cm 0cm 0.1cm ]{figures\/gains-categories-bart.pdf}\n \\end{subfigure}\n \n \\caption{\n GPT3 and BART were evaluated with various encoding and various categories of tasks. \n The benefit of instructions to the models depends on the semantics of the task. For instance, for GPT3 (left) \\emph{minimal text modification} category benefits a lot, while the benefits to \\emph{verification} tasks are minimal. \n }\n \\label{fig:gains:per:categories}\n\\end{figure*}\n\\end{comment}\n\n\\begin{comment}\n\\subsection{Generalization vs. number of positive examples}\n\\label{subsection:numberofpositiveexamples}\nFig.~\\ref{fig:GPTexample} and \\ref{fig:BARTexample} illustrates the performance variation of models with respect to the number of examples. Clearly, addition of examples is not helping GPT3 and BART. Note that model performance with just the prompt+definition encoding is 35 in case of GPT3 and 30 in case of BART. This may suggest that the effort in creating many examples can be utilized to improve other aspects of Instruction. Another demerit of larger number of examples is that they increases input token size which increases the API usage cost in case of GPT3 and training time and higher memory usage in case of BART.\n\\end{comment}\n\n\\clearpage\n\n\\begin{comment}\n\\onecolumn\n\n\\subsection{User Study to Find Important Task-Specific Instruction Fields}\n\\label{subsec:appendix:user:study}\nWe ask our quality assessment annotators to also specify which instruction fields help them understand the task and answer prompts. 
For each of the 12 tasks in our evaluation set, we ask: \\textit{Which instruction field helps you the most to understand the task and answer questions and why? Remember, on removing this field significant major information should get lost.} We compile these results category-wise, and present them in Table \\ref{Tab: User Study}. In particular, there are two tasks Classification (CF) and Minimal Text Modification (MM) for which humans find only a single instruction field to be important. We find that models also find the same fields to be most important, as evinced in Table \\S\\ref{tab:random:splitfull}), where the performance of models with these fields is higher than the rest. Interestingly, this is similar to the patterns observed in the model performance (Table \\S\\ref{tab:random:splitfull}).\n\n\\begin{figure*}[]\n \\centering\n \\includegraphics[scale=0.55,trim=0.7cm 1cm 0.1cm 0cm]{figures\/example_variation_GPT.pdf}\n \\caption{GPT3 performance as a function of the number of examples in its encoding. The number of examples is limited by three upperbounds: 3, 10 and 70. This shows that addition of examples is not helping GPT3.\n }\n \\label{fig:GPTexample}\n\\end{figure*}\n\\begin{figure*}\n \\centering\n \\includegraphics[scale=0.55,trim=0.7cm 1cm 0.1cm 0cm]{figures\/example_variation_BART.pdf}\n \\caption{BART performance as a function of the number of examples in its encoding. The number of examples is limited by two upperbounds: 3 and 10. This shows that addition of examples is not helping BART. Since BART's maximum token size is 1024, it can not fit a lot examples unlike GPT3, so we did not experiment further with larger number of examples.\n }\n \\label{fig:BARTexample}\n\\end{figure*}\n\\end{comment}\n\n\n\n\\begin{comment}\n\\begin{table*}\n \\centering\n \n \\includegraphics[scale=0.65,trim=1.8cm 6.5cm 0cm 3cm]{figures\/ablation-polished.pdf}\n \n \\caption{Detailed results of the encoding ablation performed on three distinct subtasks.}\n \\label{tab:ablation:all}\n\\end{table*}\n\\end{comment}\n\n\\begin{comment}\n\\begin{table*}\n \\centering\n \\includegraphics[scale=0.65,trim=1.7cm 8.9cm 0cm 2cm]{figures\/results_table_det.pdf}\n \n \\caption{\n Empirical results \\textsc{Natural Instructions}{}. \n The best numbers among the four encodings are indicated with \\textbf{bold}. \n The first row is {\\color{gray} grayed out} since it is our oracle upperbound.\n }\n \\label{tab:results:detail}\n\\end{table*}\n\\end{comment}\n\\subsection{Evaluating Generalization Across Tasks} \n\n\n\n\\begin{comment}\n\\paragraph{\\textsc{Natural Instructions}{} tasks splits.}\nTo evaluate generalization across subtasks, we divide \\textsc{Natural Instructions}{} into two collections: \n(i) \\emph{evaluation} tasks \\task{eval} (35 subtasks) for test and \n(ii) \\emph{non-evaluation} tasks \\task{non-eval} (26 subtasks) for training.\\footnote{The tasks are enumerated in the appendix.} \nIn making this collection, we ensure that the tasks included in \\task{eval} accept a relatively reliable automatic evaluation. For example, those tasks that have restricted answer space (like classification tasks) or those that have several gold output references. 
The end tasks of the source datasets which are typically the answer generation tasks are often included in the evaluation set.\nAdditionally, we ensure to have at least one representative subtask from each of the semantic categories (\\S\\ref{sec:mapping}) in the \\emph{non-evaluation} collection\nHowever, tasks within categories are very different from each other. For instance, creation of DROP questions requires understanding of numerical reasoning and reading comprehension, whereas creation of Winogrande questions requires understanding of co-reference resolution and the requirement of the task to create twin question and answer pairs.\n\n\\daniel{\n emphasize that not any two tasks are exactly the same. Every two QG tasks are different (e.g., MC-TACO vs CosmosQA). \n}\n\\paragraph{Evaluation.}\nWe formulate three evaluation settings with different supervision types available to a model (Table~\\ref{tab:supervision:types}). \nIn `task-specific' setting, a model is supervised with the training instances of the evaluation task -- similar to the conventional setup. \nIn `few-shot' setting, a model only observes a few examples of the evaluation task.\\footnote{\n We use ``few-shot'' to refer to \\emph{any setup with a small number of labeled examples},\n \n regardless of whether these examples are used for fine-tuning or inference-time conditioning (no gradient updates). \n}\nIn `generalization' setting, a model does not observe any instances from the evaluation task.\n\\end{comment}\n\n\n\n\\begin{comment}\n\\paragraph {``no-instructions''} encoding.\nThis encoding is the conventional paradigm where no instructions exist, except the input instance (Eq.~\\ref{}). \n\n\\paragraph{\\emph{``prompt''} encoding.}\nIn this encoding, we append the prompt message before the input instance. \n\n\\paragraph{\\emph{``prompt + definition''} encoding.}\nIn this encoding, the prompt message and the task \\emph{definition} appear before the input instance.\nIntuitively, this encoding is more informative and more complex than \\emph{``prompt''} only encoding. \n\n\\paragraph{\\emph{``all instructions''} encoding.}\nThis encoding contains all the instruction content.\nWe include as many examples as possible, before exceeding the token limit of LMs. \n\n\\paragraph{\\emph{``positive examples''} encoding.}\nThis encoding contains only positive examples from the task instructions. \nSuch example-only encodings have been used in several recent studies in prompting LMs~\\cite{zhao2021calibrate}. \n\n\n\\end{comment}\n\n\\begin{comment}\n\\begin{table}[h]\n \\centering\n \\small\n \\resizebox{\\columnwidth}{!}{\n \\begin{tabular}{L{1.9cm}cL{3.9cm}}\n \\toprule\n Setup & Evaluation & Supervision \\\\\n \\midrule\n \n task-specific & $T \\in$ \\task{eval} & all the instances of $T$ \\\\ \n \\cmidrule(r){1-3} \n few-shot & $T \\in$ \\task{eval} & \n \n instructions of $T$ \\\\ \n \\cmidrule(r){1-3} \n generalization & $T \\in$ \\task{eval} & instructions+ instances of \\task{non-eval} tasks + instructions of $T$ \\\\ \n \n \n \\bottomrule\n \\end{tabular}\n }\n \\caption{\n Different modes of supervision considered in this work, when evaluating a model on the instances of a fixed task $T \\in$ \\task{eval}. 
\n }\n \\label{tab:supervision:types}\n\\end{table}\n\\end{comment}\n\n\\begin{comment}\n\\section{Evaluating Language Models to Address \\textsc{Natural Instructions}{}} \n\nWe use generative language models BART~\\cite{lewis2019bart} and GPT-3~\\cite{brown2020language} to address tasks in \\textsc{Natural Instructions}{}. Here, we describe how we encode instructions and instances into plain text and feed them into generative language models (\\S \\ref{subsect:encoding}). We then describe the model details (\\S \\ref{subsec:models}). \nWe then explain how we use language models to encode instruction (\\S \\ref{subsect:encoding}). \\daniel{to be updated}\n\\end{comment}\n\n\\begin{comment}\n\\paragraph{The benefit from instructions heavily depends on the task at hand.}\nFigure~\\ref{fig:gains:per:categories} shows the performance of our models on our task categories, broken down into several coarse input encodings. \nSimilar to our previous observations, \\emph{all instructions} encoding \\emph{typically} performs better than other encodings. \nHowever, these gains are not uniform across task categories. \n\n\n\n\\end{comment}\n\n\\begin{comment}\n\\paragraph{Task-specific BART (oracle upper-bound estimate).}\nWe train BART on input\/output instances of each task (no instructions) and evaluate on the same task. \nThis is the conventional setup where the model is fine-tuned to solve the task only, without any instructions involved.\nSuch a model, by design, won't generalize across different tasks since it is specialized to each subtask. \nHowever, the numbers elicited from this can be viewed as the upper-bounds for each task (i.e., how well can BART perform, if it were to be trained on many instances of this particular task). \n\n\\end{comment}\n\n\\begin{comment}\n\\subsection{Task-specific Calibration of GPT-3}\n\\label{subsec:calibration}\n\nWe transform inputs (Instructions) to make it more understandable for GPT-3. We have a human-in-the loop setup to perform calibration. We do calibration in two steps (i) we develop various calibration procedure by experimenting with various type of prompts and identifying the types of prompt which help model follow instructions better. This is done on the non-eval split. (ii) we employ one of the calibration procedures by looking at few samples of the end-task in the eval split.\n\\end{comment}\n\n\\begin{comment}\n\\paragraph{A Data Creation Toolbox:} \n\\textsc{Natural Instructions}{} covers various skills beyond question generation and answer generation, such as sentence paraphrasing, verification of whether a question is in a specified category, etc that are frequently used during NLP dataset creation. A successful model trained on \\textit{Natural Instructions} will be a toolbox for dataset creation. Our intuition behind the focus on dataset creation is that any NLP task can be expressed as a step in the data creation \n\\end{comment}\n\n\\section{Introduction}\n\nWe have witnessed great progress in solving many NLP datasets through fine-tuning pre-trained language models (LMs)~\\cite{peters2018deep,brown2020language}. \nMore recent studies show tremendous promise in generalization \\emph{within} the set of observed tasks through multi-task training and unified encoding~\\cite{khashabi2020unifiedqa,aghajanyan2021muppet}. 
\nHowever, cross-task generalization -- \\emph{generalization} to \\emph{unseen} tasks -- has generally remained under-explored.\nFor example, can we supervise a model with instances of grammar checking or question answering tasks, yet expect it to solve a different task like question typing (Fig.\\ref{fig:teaster}).\nEvidently, humans are capable of such generalizations; an average human can follow natural language \\emph{instructions} to solve a variety of problems, as evident by the success of crowdsourcing platforms (also argued in~\\citet{efrat2020turking}). In this paper, we study if models can generalize to \\emph{unseen} tasks given their \ncrowdsourcing instructions (Fig.\\ref{fig:teaster}). \n\n\n\n\\begin{figure}[t]\n \\centering\n \n \n \n \n \n \n \\includegraphics[scale=0.9, trim=0.75cm 0.8cm 0cm 1.0cm,clip=false]{figures\/teaser1-2.pdf}\n \n \\caption{\n We construct the \\textsc{Natural Instructions}{} dataset from crowdsourcing instructions and instances of different NLP datasets. We study if models can learn from {\\emph{\\color{blue} seen}} tasks and generalize to {\\emph{\\color{red} unseen}} tasks given their natural crowdsourcing instructions. \n \n \n \n \n \n \n \n \n }\n \\label{fig:teaster}\n\\end{figure}\n\n\n\n\\begin{figure*}\n \\centering\n \\begin{subfigure}[b]{0.49\\textwidth}\n \\centering\n \n \n \n \\small\n \\resizebox{\\linewidth}{!}{\n \\begin{tabular}{ccc}\n \\toprule \n Task & \\makecell{Instance-Level\\\\Generalization} & \\makecell{Task-Level\\\\Generalization} \\\\\n \\midrule \n \\makecell{Training\\\\data} & $X^{\\color{darkgreen} \\text{train}}, Y^{\\color{darkgreen} \\text{train}}$ & \\makecell{$(I_t, X_t^{{\\color{darkgreen} \\text{train}}}, Y_t^{{\\color{darkgreen} \\text{train}}})$ \\\\ $t \\in \\text{\\task{\\color{blue} seen}} $ \\\\ } \\\\ \n \\midrule \n Evaluation & \\makecell{ $x \\rightarrow y$ \\vspace{0.2cm} \\\\ where: \\\\ $(x, y) \\in (X^{ \\color{purple} \\text{test}}, Y^{ \\color{purple} \\text{test}})$ \\vspace{0.3cm} } & \\makecell{$(x, I_t) \\rightarrow y$ \\vspace{0.2cm} \\\\ where: \\\\ $(x, y) \\in (X_t^{ {\\color{purple} \\text{test}}}, Y_t^{{\\color{purple} \\text{test}}})$ \\\\ $t \\in$ \\task{\\color{red} unseen} } \\\\ \n \\bottomrule\n \\end{tabular}\n }\n \\caption{\n A comparison of \\emph{task} vs \\emph{instance}-level generalization \n $I_t$, $X_t$ and $Y_t$ indicate natural language instructions, input, and output sets respectively for task $t$.\n \n In the conventional setup, training and evaluation are done on the instances of the same task. \n However, in task-level generalization, a model is expected to generalize to {\\color{red} unseen} tasks, where \\task{\\color{red} unseen} $\\cap$ \\task{\\color{blue} seen}$ = \\emptyset $. \n }\n \\label{tab:comparison}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.49\\textwidth}\n \\centering\n \n \n \n \\includegraphics[scale=0.64,trim=0.2cm 0cm 0cm 1cm,clip=false]{figures\/fig5scaleup-4.pdf}\n \\caption{BART evaluation on {\\emph{unseen}} tasks ($y$-axis is perf. on \\task{unseen}) when supervised with {\\emph{seen}} tasks ($x$-axis is $|$\\task{seen}$|$). \n \n \\changed{\n A model using {\\color{purple}instructions} ($I_t$) consistently improves with more observed tasks. In contrast, models with {\\color{orange} no access to the instructions} show no sign of improved generalization. 
\n }\n \\changed{Details in \\S\\ref{subsec:supervision:size:experiment}.}\n }\n \\label{fig:scaling:tasks}\n \\end{subfigure}\n \\caption{The formal definition of generalization to unseen tasks (a) and a summary of its empirical outcome (b). }\n \n\\end{figure*}\n\n\n\n\nWe build \\textsc{Natural Instructions}, a dataset consisting of {\\it natural} crowdsourcing instructions for various tasks and their instances. \nTraining on {\\it seen} tasks $\\text{\\task{\\color{blue} seen}}$ in our dataset, we build a model that learns to follow natural instructions that define a task and perform tasks (i.e., mapping input to output).\nTesting on \\emph{unseen} tasks \\text{\\task{\\color{red} unseen}}, we evaluate if the model can perform {\\it unseen} tasks solely from their instructions and without any task-specific labeled data (Table~\\ref{tab:comparison}; right). \nIn contrast to the instance-level generalization (Table~\\ref{tab:comparison}; left), our model uses instruction as additional input, and evaluations are done on tasks that were not observed in the training stage. \n\n\\changed{\nWe compile \\textsc{Natural Instructions}{} from task instructions written by researchers for crowdsourcing existing NLP datasets. \nSuch crowdsourcing instructions often elaborate a variety of details about how a task should (and should not) be done. \nTo provide a systematic study of various elements of crowdsourcing instructions, we map them\n}\nto a unified {\\it schema} to cover the most important elements of task descriptions --- such as definition, constraints, positive and negative examples. \nWe collect tasks in \\textsc{Natural Instructions}{} as minimal stand-alone steps provided to crowdworkers to complete a downstream NLP task. \nFor example, tasks collected from \n\\changed{QASC~\\cite{khot2020qasc} include sub-tasks about generating topic words or combining facts, as well as answering multi-hop questions. \nTherefore our dataset not only contains typical downstream tasks in NLP, but also the intermediate subtasks that are not well-represented in the common benchmarks. \n}\nThe unified schema and the collection of minimal subtasks enable training LMs that can generalize across different tasks by learning from instructions.\nIn total, our dataset consists of 61 distinct NLP tasks and $193k$ instances.\n\n\nOur experimental results indicate that LMs learn to leverage natural language instructions as they show improved generalization to new\ntasks. \nFor example, a BART~\\cite{lewis2019bart} achieves a 19\\% gain in terms of cross-task generalization compared to a model not using instructions\n(\\S\\ref{sec:experiments}). \nImportantly, LMs can generalize better to unseen tasks if they observe more tasks in training (Fig.\\ref{fig:scaling:tasks}). \nThis upward trajectory suggests the potential for stronger cross-task generalizable models upon scaling up the diversity of tasks represented in a meta-dataset of task instructions. \nDespite the benefits of instructions, we observe a sizable gap between models' generalization and their estimated upperbounds (\\ref{subsec:task-specific}), encouraging the community to work on this challenging problem. 
\n\n\n\n\\vspace{.1cm}\n\\noindent\\textbf{Contributions:} In summary, the contributions of this work are as follows: \n(a) we introduce \\textsc{Natural Instructions}{}, a dataset of human-authored instructions curated from existing well-known datasets mapped to a unified schema, providing training and evaluation data for learning from instructions;\n(b) we build models that can encode instructions and show: \n(b.1) the benefit of cross-task generalization by leveraging instructions; \n(b.2) the importance of different elements of instructions in the performance; \n(b.3) noteworthy headroom for improvement on our benchmark, which hopefully will motivate further work in this direction. \n\n\\input{related}\n\n\n\\changed{\n\\section{Defining Cross-Task Generalization}\n\\label{subsec:input:output}\nHere we formally define the problem setup for generalization across tasks. \nEach task $t$ consists of input\/output instances $(X_t, Y_t)$ and is described in terms of its natural language instructions $I_t$. \n\n\n\\vspace{-.2cm}\n\\paragraph{Task-specific models.}\nStandard supervised learning algorithms use task-specific labeled instances to learn a mapping from input $x$ to output $y$: $M(x)=y$ for $(x,y)\\in (X_t^{\\text{train}}, Y_t^{\\text{train}})$ and is evaluated on the test instances of the same (or similar) task $(X_t^{\\text{test}}, Y_t^{\\text{test}})$. We refer to this as the \\emph{instance-level} generalization (Table~\\ref{tab:comparison}; left).\n\n\\vspace{-.2cm}\n\\paragraph{Cross-task models.} \nIn this setup, the goal is to learn a model $M$ that at inference obtains the output $y$ given the input $x$ and the task instruction $I_t$: $M(I_t, x) = y, \\; \\mbox{for} \\ (x,y)\\in (X_t, Y_t)$.\nIn contrast to the task-specific models, no task-specific training data is used to learn the mapping $M$. We collect \\textsc{Natural Instructions}\\ (\\S\\ref{sec:construction:natural:instructions}) to study this question: can a model be trained to follow instructions via training tasks \\task{seen} and be generalized to follow instructions for a task $t' \\in$ \\task{unseen}. \nWe refer to this as a \\emph{task}-level generalization (Table~\\ref{tab:comparison}; right). \n}\n\n\\section{\\textsc{Natural Instructions}{}}\n\\label{sec:construction:natural:instructions}\n\n\\textsc{Natural Instructions}{} consists of instructions that describe a task (e.g., question answering) and instances of that task (e.g., answers extracted for a given question). \nFig.\\ref{fig:examples} shows an example instruction for the task of `generating questions that require an understanding of event duration' accompanied with positive and negative examples \nthat contextualize the task. \nHere we introduce a schema for representing instructions (\\S\\ref{subsec:schema}) and then describe how existing datasets (their crowdsourcing templates) are mapped into our schema (\\S\\ref{sec:mapping}).\n\n\n\n\n\n\\begin{figure}[t]\n \\centering\n \n \n \\includegraphics[scale=0.70, trim=0.45cm 0.8cm 0cm 0.99cm]{figures\/examples_detailed_two.pdf}\n \\caption{\n An example from our dataset. \n Note that it follows the schema provided in Fig.\\ref{fig:schema_plate}. 
See Fig~.\\ref{fig:examplesfull} for more examples.\n }\n \\label{fig:examples}\n\\end{figure}\n\n\\subsection{Instruction Schema}\n\\label{subsec:schema}\n\n\nInstructions used in crowdsourcing various datasets, are written by distinct authors for different purposes, and they are different in a variety of ways (see Appendix~\\ref{appendix:analysis:templates} for their differences.) We introduce a unified schema (Fig.\\ref{fig:schema_plate}) to consistently represent these diverse forms of instructions. \nOur instruction schema is the result of our pilot study conducted on a subset of datasets. Below we describe the ingredients of this schema: \n\n\\begin{figure}[t]\n\\centering\n \\includegraphics[width=0.97\\columnwidth,trim=0.35cm 0.8cm 0.5cm 1cm]{figures\/schema-2.pdf}\n \\caption{The schema used for representing instruction in \\textsc{Natural Instructions}{} (\\S\\ref{subsec:schema}), shown in plate notation.\n }\n \\label{fig:schema_plate}\n\\end{figure}\n\n\n\\begin{itemize}[noitemsep,topsep=0pt,parsep=3pt,leftmargin=0.3cm]\n \n \\item \\underline{\\textsc{Title}} provides a high-level description of a task and its associated skill (such as question generation, answer generation).\n \\item \\underline{\\textsc{Prompt}} is a single sentence command that often appears before the input instance and connects it to the instructions.\n \\item \\underline{\\textsc{Definition}} provides the core detailed instructions for a task. \n \n \n \\item \\underline{\\textsc{Things to Avoid}} contain instructions regarding undesirable annotations that must be avoided. These help to define the scope of a task and the space of acceptable responses. \n \n \\item \\underline{\\textsc{Emphasis and Caution}} are short, but important statements highlighted in the crowdsourcing templates which were intended to be emphasized or warned against.\n \\item \\underline{\\textsc{Positive Examples}} contain inputs\/outputs similar to the input given to a worker\/system and its expected output, helping crowdworkers better understand a task~\\cite{ali1981use}. \n \\item \\underline{\\textsc{Negative Examples}} contain inputs\/outputs to emphasize \\textsc{Things to Avoid} by providing examples that must not be produced. \n \n \n \n \\item \\underline{\\textsc{Reason}} provides explanations behind why an example is positive or negative.\n \\item \\underline{\\textsc{Suggestion}} contains suggestions on how a negative example could be modified to turn it into a positive example. \n\\end{itemize}\n\n The next section describes the process of mapping the raw instructions (designed for crowdworkers) to our instruction schema. \n\n\n\n\n\n\n\n\n\n\\subsection{Constructing \\textsc{Natural Instructions}} \n\\label{sec:mapping}\n\n\n\n\n\n\\subsubsection{Collecting Data}\n\\label{sec:datacollection}\n\\paragraph{Collecting raw instructions and instances.} \n We use existing, widely adopted NLP benchmarks that are collected via crowdsourcing platforms and hence, come with crowdsourcing templates. \n In the first step, we identified several datasets and engaged with their authors to get their crowdsourcing templates and raw data. 
\nThis yields the following datasets: \nCosmosQA~\\cite{huang2019cosmos}, \nDROP~\\cite{dua2019drop}, \nEssential-Terms~\\cite{khashabi2017learning}, \nMCTACO~\\cite{zhou2019going}, \nMultiRC~\\cite{khashabi2018looking}, \nQASC~\\cite{khot2020qasc}, \nQuoref~\\cite{dasigi2019quoref}, ROPES~\\cite{lin2019reasoning} and\nWinogrande~\\cite{sakaguchi2020winogrande}.\\footnote{\n We only focus on textual instructions and avoid datasets that involve visual or auditory steps, mostly focusing on QA datasets that were available to the authors. \n} \n \n\\vspace{-.2cm}\n\\paragraph{Splitting crowdsourcing instructions into minimal tasks.} \nAlmost all the crowdworking instructions include sequences of steps to guide crowdworkers in creating task instances.\nFor example, QASC and MCTACO include 7 and 19 steps in the data creation process, respectively. \nWe divide crowdsourcing instructions into their underlying steps and generate multiple subtasks that are minimal and standalone.\\footnote{\n We eliminate tasks that involve model-in-the-loop. \n} Table~\\ref{tab:sample:tasks} shows subtasks extracted for Quoref and QASC. For example, the main task in Quoref is to answer a question given a context paragraph, but the crowdsourcing template consists of two sub-tasks of {\\it question generation} and {\\it answer generation} with their separate instructions. This process results in a more consistent definition of tasks, enabling a successful mapping of instructions into our schema, in contrast to the work of \\citet{efrat2020turking} that uses crowdsourcing instructions as-is. \n\n\n\\begin{table}\n \\centering\n \\footnotesize \n \\begin{tabular}{ll}\n \\toprule\n source dataset & task \\\\\n \\midrule\n \\multirow{2}{*}{\\makecell{Quoref\\\\ \\cite{dasigi2019quoref} }} & question generation \\\\ \n & answer generation \\\\ \n \\midrule\n \\multirow{6}{*}{\\makecell{QASC\\\\ \\cite{khot2020qasc}} } & topic word generation \\\\\n & fact generation \\\\ \n & combining facts \\\\ \n & question generation \\\\ \n & answer generation \\\\ \n & incorrect answer generation \\\\ \n \\bottomrule\n \\end{tabular}\n \\caption{\n Examples of the datasets and the tasks formed from them. \n The extracted tasks are independent annotation assignments in the crowdsourcing templates of the datasets. \n The complete list is in Table~\\ref{tab:structure} in Appendix. \n }\n \\label{tab:sample:tasks}\n\\end{table}\n\n\\begin{table}\n \\footnotesize \n \\begin{tabular}{lcc}\n \\toprule\n category & \\# of tasks & \\# of instances \\\\\n \\midrule\n {question generation} & 13 & 38$k$ \\\\ \n {answer generation} & 16 & 53$k$ \\\\ \n {classification} & 12 & 36$k$ \\\\ \n {incorrect answer generation} & 8 & 18$k$ \\\\ \n {minimal modification} & 10 & 39$k$ \\\\ \n {verification} & 2 & 9$k$ \\\\ \n \\midrule\n Total & 61 & 193$k$ \\\\\n \\bottomrule\n \\end{tabular}\n \\caption{Task categories and their statistics. \n \n }\n \\label{tab:taskcategories}\n\\end{table}\n\n\nIn total, there are 61 tasks, which are categorized into 6 semantic categories (Table~\\ref{tab:taskcategories}). \nWe assigned these broad categories to the tasks to understand their collective behavior in the experiments. \nIt is noteworthy that, despite the apparent resemblance of the tasks included in the same category, \nany pair of tasks are distinct. \nFor example, while \\emph{question generation} is part of Quoref, CosmosQA, and QASC, each has its own separate variant of the question generation task (see Fig.\\ref{fig:task_specification} in Appendix). 
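To make the structure of the collected data concrete, the abridged sketch below shows how a single task, together with its instruction fields and instances, can be represented; the field values are paraphrased illustrations (loosely modeled on an event-duration question generation task) rather than verbatim dataset content, and the exact serialization format of the released data may differ.
\\begin{verbatim}
# Abridged, illustrative record for one task (values are paraphrased).
task = {
    "title": "Writing questions on event duration",
    "prompt": "Ask a question on event duration for the given sentence.",
    "definition": "Write a question that asks how long an event lasts ...",
    "things_to_avoid": "Do not ask about durations stated in the sentence.",
    "emphasis_and_caution": "Questions may have several acceptable answers.",
    "positive_examples": [
        {"input": "Sentence: Jack played basketball after school.",
         "output": "How long did Jack play basketball?",
         "reason": "It asks about the duration of an event."}],
    "negative_examples": [
        {"input": "Sentence: He rested for two hours.",
         "output": "How long did he rest?",
         "reason": "The duration is stated explicitly in the sentence.",
         "suggestion": "Ask about an event whose duration is implicit."}],
    "instances": [
        {"input": "Sentence: She stopped by the store before dinner.",
         "output": ["How long did she spend at the store?"]}],
}
\\end{verbatim}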
\n\n\n\n\n\\subsubsection{Mapping Raw Instructions to Schema } \n\\label{subsec:maptoschema}\n We manually fill in the fields of our instruction schema with the content from the crowdsourcing instructions.\n For instance, parts of the raw instructions that are highlighted for emphasis are incorporated as part of our \\emph{emphasis\/caution} field. \nThe modifications suggested in this step were applied by one author and were verified by another author.\\footnote{On average, the process of data curation for each task takes around 5 hrs-34 hrs (details in Appendix; Table~\\ref{tab:datacuration}).} \n\n\\vspace{-.2cm}\n\\paragraph{Improving description quality and consistency.}\n We edit raw instructions to ensure their quality. Particularly, we fix writing issues (typos, ambiguities, etc.) and redact repetitions. \n While repetition often helps in augmenting human understanding, short and concise instructions are often more effective for computers due to their limited attention span~\\cite{beltagy2020longformer}. \n \n\\vspace{-.2cm}\n\\paragraph{Augmenting examples and reasons.}\n \n There is a large variance in the number of examples provided in the raw instructions. Instructions often include more positive examples, or some instructions do not include any negative examples (e.g., QASC). \nWhenever possible, we add negative examples such that each task has at least two negative examples. \nFurthermore, not all raw instructions contain \\textsc{reasons} or \\textsc{suggestions} for each of their examples. For example, positive examples are usually not accompanied by explanations, and most datasets do not include suggestions.\nWe add them, wherever such information is missing in the instructions. \n\n\\vspace{-.2cm}\n\\paragraph{Collecting input\/output instances for subtasks.} \nMost of our tasks are the intermediate steps in the crowdsourcing process. \n Therefore, to extract input\/output instances for each task, we need to parse the raw annotations of crowdworkers for every step. Since each dataset stores its annotations in a slightly different format, extracting and unifying such intermediate annotations can be non-trivial. \n \n \n \n \n \n\\vspace{-.2cm}\n\\paragraph{Verification.} \n\\changed{\nAn annotator verified the quality of the resulting data in consultation with dataset authors. \nThe annotator iterated on the authors' feedback (avg of 3 iters) until they were \nsatisfied. \n}\n\n\\vspace{-.2cm}\n\\paragraph{Quality assessment.}\nWe ask independent human annotators to answer 240 random instances (20 instances from 12 random tasks, used later for our evaluation~\\S\\ref{subsec:split}). \nThe subsequent evaluation of the human-generated responses results in more than 96\\% accuracy, which indicates that humans can effortlessly understand and execute our instructions. \n\n\n\n\n\n\n\n\n\n\n\\subsubsection{\\textsc{Natural Instructions}\\ Statistics}\n\\label{subsec:dataset:statistics}\n\nIn summary, \\textsc{Natural Instructions}\\ consists of subtasks each with a set of instructions and input\/output instances (Fig.\\ref{fig:examples} and \\ref{fig:schema_plate}). The complete list of instructions is included in the appendix. In total, the dataset includes 61 tasks and 193$k$ instances.\nTable~\\ref{tab:taskcategories} shows data statistics for each task category.\\footnote{We limit the number of instances in each task to $6.5k$ to avoid massive instance imbalance.} On average, instructions contain 4.9 positive examples and 2.2 negative examples. 
\nThe longest element of instructions is usually \\textsc{Definitions} with 65.5 tokens and the shortest is \\textsc{title} with 8.3 tokens (more statistics in Table~\\ref{tab:schemastat}).\n\n\\begin{table}[ht]\n \\centering\n \\small\n \\begin{tabular}{lc}\n \\toprule\n statistic & value \\\\ \n \\midrule\n ``title'' length & 8.3 tokens \\\\ \n ``prompt'' length & 12.6 tokens \\\\ \n ``definition'' length & 65.5 tokens \\\\ \n ``things to avoid'' length & 24.1 tokens\\\\ \n ``emphasis\/caution'' length & 45.0 tokens\\\\\n ``reason'' length & 24.9 tokens\\\\ \n ``suggestion'' length & 19.6 tokens\\\\ \n \n \n \n num of positive examples & 4.9 \\\\ \n num of negative examples & 2.2 \\\\ \n \n \\bottomrule\n \\end{tabular}\n \\caption{\n Statistics of \\textsc{Natural Instructions}{}\n \n}\n \\label{tab:schemastat}\n\\end{table}\n\n\n\n\\section{Problem Setup and Models }\n\\label{subsec:setup}\n\\changed{\nHere we define different cross-task generalization settings (\\S \\ref{subsec:split}) and the models (\\S\\ref{subsec:models}). \n}\n\n\\begin{comment}\n\\subsection{Learning Tasks From Instructions} \n\\label{subsec:input:output}\nEvery task $t$ in \\textsc{Natural Instructions}{} consists of an instruction $I_t$ and a set of input and output instances $D_t=\\{(x,y)|x \\in X_t,y \\in Y_t\\}.$\n\\vspace{-.2cm}\n\\paragraph{Task-specific models.} Standard supervised learning uses task-specific training instances to train a model that learns a mapping between input and output: $M(x)=y$ for $(x,y)\\in D_t$. \n\n\\vspace{-.2cm}\n\\paragraph{Learning from instructions.} In this setup, the goal is to learn a model $M$ that at inference obtains the output $y$ given the input $x$ and the task instruction $I_t$: $M(I_t, x) = y, \\; \\mbox{for} \\ (x,y)\\in D_t$.\n\nIn contrast to the task-specific models, no task-specific training data is used to learn the mapping $M$. Instead, we use \\textsc{Natural Instructions}\\ to study this question: can a model be trained to follow instructions via training tasks \\task{seen} and be generalized to follow instructions for a task $t' \\in$ \\task{unseen}. \nWe study various generalization settings with different splits of the tasks. \n\n\\end{comment}\n\n\\subsection{Task Splits and Generalizations Types}\n\\label{subsec:split}\n\n\n\n\n\\paragraph{Random split.}\nThis setup follows the common practice in benchmarking NLP models with random data splits. Here, two tasks from each task category (Table~\\ref{tab:taskcategories}) in \\textsc{Natural Instructions}{} are randomly selected for evaluation, and the rest of the tasks are used for training. This leads to 12 tasks in \\task{unseen} and 49 tasks in \\task{seen}.\\footnote{Those tasks that do not accept a relatively reliable automatic evaluation are excluded from \\task{unseen}. } \n\n\n\\paragraph{Leave-one-out generalization.}\nTo better understand the nature of cross-task generalization, we study more restrictive settings of dividing training and evaluation tasks. \n\n\\noindent \\ul{leave-one-category}: evaluates how well a model generalizes to a task category if it is trained on others -- no task of that category is in \\task{seen}. \n\n\n\\noindent \\ul{leave-one-dataset}: evaluates how well a model can generalize to all tasks in a particular dataset if it is trained on all other tasks -- no task of that dataset is in \\task{seen}.\nThis split prevents any leakage across tasks that belong to the same source datasets. 
\n\n\\noindent \\underline{leave-one-task}: evaluates how well a model can learn a single task by training on all other tasks. \\\\\n\n\n\n\n\\subsection{Models}\n\\label{subsec:models}\nWe build models using pre-trained LMs with encoder-decoder architectures BART~\\cite{lewis2019bart} for fine-tuning and GPT3~\\cite{brown2020language} for few-shot experiments. \n \\paragraph{Encoding instructions and instances.} \n For every problem setup, we map a given instruction $I_t$ and an input instance $x$ into a textual format and decode an output $y$ and obtain $enc(I_t, x)$. \nThis encoding function is then fed to an encoder-decoder model to predict $y$: $M:enc(I_t, x) \\rightarrow y$. \n\n\n\\begin{figure}\n\\centering\n\\begin{boxedminipage}{\\columnwidth}\n\\begin{equation*} \n \\small\n \\begin{split}\n & \\mathtt{\\small Prompt:} \\; \\I{t}{prompt} \\\\ \n & \\mathtt{\\small Definition:} \\; \\I{t}{Definition} \\\\ \n & \\mathtt{\\small Things \\; to \\; Avoid:} \\; \\I{t}{avoid.} \\\\ \n & \\mathtt{\\small Emphasis \\& Caution:} \\; \\I{t}{emph.} \\\\ \n & \\textnormal{}\\mathtt{\\small Negative Example1-} \\\\ \n & \\hspace{0.7cm} \\mathtt{\\small input:} \\; \\I{t}{pos. ex.}, \\mathtt{\\small output:} \\; \\I{t}{pos. ex.},\n \\mathtt{\\small reason:} \\; \\I{t}{pos. ex.} \\\\ \n \n & \\mathtt{\\small Positive Example1-} \\\\ \n & \\hspace{0.7cm} \\mathtt{\\small input:} \\; \\I{t}{pos. ex.}, \n \\mathtt{\\small output:} \\; \\I{t}{pos. ex.} \\mathtt{\\small reason:} \\; \\I{t}{pos. ex. } \\\\ \n \n & \\mathtt{\\small input:} \\; x, \\mathtt{\\small output:} \\textnormal{''}\n \\end{split}\n\\end{equation*}\n\\end{boxedminipage}\n\\caption{Encoding instruction $I_t$, where $I_t^c$ refers to the text of a component $c$ in the instruction schema.}\n\\label{fig:encoding}\n\\end{figure}\nEncoding instances follows a standard NLP paradigm of mapping an input instance to text. \nEach instruction $I_t$ consists of multiple elements as described in our instruction schema (\\S\\ref{subsec:schema}). Here, we map each element of the instruction to a textual format and append it before the input instance. Fig.\\ref{fig:encoding} shows how we encode the full instruction. \n\nTo study the impact of each instruction element for cross-task generalization, we compare these encodings: (1) \\textsc{prompt}, (2) \\textsc{pos. examples}, (3) \\textsc{prompt + definition}, (4) \\textsc{prompt + things to avoid}, (5) \\textsc{prompt + emphasis} , (6) \\textsc{prompt + pos. examples}, (7) \\textsc{prompt + + definition + pos. examples}, and (8) \\textsc{Full instruction}.\n\\changed{\n Each of these (e.g., \\textsc{prompt} and \\textsc{pos. examples}) correspond to prompting setups in the recent literature~\\cite{scao2021many,lu2021fantastically}. 
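Since these variants differ only in which schema elements are prepended to the input, they can all be produced by a single routine that selects a subset of fields (see also Appendix~\\ref{appendix:subsect:encoding}); the sketch below illustrates the idea, with variant names and field keys that are illustrative assumptions rather than a description of our exact code.
\\begin{verbatim}
# Sketch: assembling encoding variants by selecting schema fields.
VARIANTS = {
    "prompt":            ["prompt"],
    "prompt+definition": ["definition", "prompt"],
    "full_instructions": ["definition", "prompt", "avoid", "emphasis"],
}
LABELS = {"prompt": "Prompt", "definition": "Definition",
          "avoid": "Things to Avoid", "emphasis": "Emphasis & Caution"}

def encode_variant(instruction, x, variant, examples=""):
    newline = chr(10)  # line separator
    header = newline.join(LABELS[f] + ": " + instruction[f]
                          for f in VARIANTS[variant])
    return header + newline + examples + "input: " + x + newline + "output:"
\\end{verbatim}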
\n} \n\n \\begin{table*}[t]\n \\small \n \\centering\n \n \\begin{tabular}{clcccc}\n \\toprule\n \n \n \n \\makecell{model \u2193} & \\makecell{evaluation set \\task{unseen} \u2192} & \\makecell{random split\\\\of tasks} &\\makecell{leave-one-\\\\category (QG)} & \\makecell{leave-one-\\\\dataset (QASC)} & \\makecell{leave-one-\\\\task (QASC QG)} \\\\\n \\cmidrule(lr){1-1} \\cmidrule(lr){2-2} \\cmidrule(lr){3-3} \\cmidrule(lr){4-4} \\cmidrule(lr){5-5} \\cmidrule(lr){6-6} \n \n \n \\multirow{2}{*}{\\makecell{BART (fine-Tuned)} } & \\textsc{No instructions} & 13 & 6 & 37 & 20 \\\\\n & \\textsc{Full instructions} & \\textbf{32} & \\textbf{17} & \\textbf{51} & \\textbf{56} \\\\\n \\midrule\n GPT3 (not fine-tuned)& \\textsc{Full instructions} & 24 & 33 & 22 & 33 \\\\\n \\bottomrule\n \\end{tabular}\n \n \\caption{Cross-task generalization of BART under various splits (\\S\\ref{subsec:split}).\n \n Fine-tuned BART shows improved performance when provided with instructions. \n It also archives better performance than GPT3, despite being over $1k$ times smaller. \n \n \\changed{All numbers are ROUGE-L. }\n }\n \\label{tab:bart:generalization:all:splits}\n\\end{table*}\n\n\\vspace{-.2cm}\n\\paragraph{BART.}\n\\label{sec:bart}\nWe use BART (base)~\\cite{lewis2019bart} which allows us to fine-tune its model parameters. \nThis is an encoder-decoder architecture with $140m$ parameters.\nFor each setup, the input is encoded using different instruction elements, trained on all \\task{seen} tasks, and evaluated on \\task{unseen} (\\S\\ref{subsec:split}). \n\n\\vspace{-.2cm}\n\\paragraph{GPT3.}\nAs a comparison, we evaluate\nGPT3~\\cite{brown2020language} which is a $175B$ parameter autoregressive LM ($\\times1.2k$ larger than BART) and has shown promising results in mimicking demonstrations provided in its prompt.\nWe cannot fine-tune the parameters of this massive model and use it as-is \nunder its default setting on the evaluation tasks in \\task{unseen} (\\S\\ref{subsec:split}) using the encoding introduced earlier. \n\n\\begin{comment}\n \\begin{table*}[t]\n \\small \n \\centering\n \\resizebox{\\linewidth}{!}{\n \\begin{tabular}{cl|c|cc|cc|cc}\n \\toprule\n model \u2193 & & random split & \\multicolumn{6}{c}{leave-one-$x$ split} \\\\\n \n \\cmidrule(lr){3-3} \\cmidrule(lr){4-9}\n & & &\\multicolumn{2}{c}{$x =$ category} & \\multicolumn{2}{c}{$x =$ dataset} & \\multicolumn{2}{c}{$x =$ task} \\\\\n \\cmidrule(lr){4-5} \\cmidrule(lr){6-7} \\cmidrule(lr){8-9} \n & evaluation set \\task{unseen} \u2192 & \\makecell{ALL} &\\makecell{AG} & \\makecell{QG} & \\makecell{QASC} & \\makecell{Quoref} & \\makecell{Winogrande AG } & \\makecell{QASC QG } \\\\\n \\midrule\n \n \n \n \\multirow{4}{*}{\\makecell{\\cha{BART-Fine-Tuned}} } & \\textsc{No instructions} & 13 & 11 & 6 & 37 & 10 & 11 & 20 \\\\\n \\cmidrule(lr){2-9}\n & \\textsc{prompt+definition} & 30 &18 & 10 & 43 & \\textbf{39} & 11 & 22 \\\\\n & \\textsc{prompt+pos. examp.} & 23 &18 & \\textbf{20} & 47 & 33 & 16 & 55 \\\\\n \n \n & \\textsc{Full instructions} & \\textbf{32} &\\textbf{19} & 17 & \\textbf{51} & 37 & \\textbf{19} & \\textbf{56} \\\\\n \\midrule\n \n \\cha{GPT3-Few-Shot}& \\textsc{Full instructions} & 24 & 18 & 33 & 22 & 24 & 10 & 33 \\\\\n \\bottomrule\n \\end{tabular}\n }\n \\caption{BART generalization under various leave-one-out splits (\\S\\ref{subsec:split}). Encoding instructions improve cross-task generalization across all settings. \n \\changed{All numbers are ROUGE-L. 
}\n }\n \\label{tab:bart:generalization}\n\\end{table*}\n\\end{comment}\n\n\n\n\n\n\n\n\n\n\\input{maintable}\n\n\n\n\n\\section{Experiments}\n\\label{sec:experiments}\n\\vspace{-.1cm}\n\\paragraph{Evaluation metrics.}\nWe treat all of our tasks as text generation problems and evaluate them with \nautomated evaluation metrics for text generation. \nIn particular, we use \nROUGE-L~\\cite{lin2004rouge} to automatically evaluate the generated outputs.\\footnote{\nOur experiments show that other metrics, e.g., BLEURT~\\cite{sellam2020bleurt}, are also correlated with ROUGE-L, which has also been used in generative QA tasks.\n}\n\n\n\\vspace{-.2cm}\n\\paragraph{Implementation details.}\nFor BART, our models are trained for 3 epochs with a learning rate of 5e-5 for a given training split and input encoding. For GPT3, we use the {\\texttt{davinci-instruct}} engine and produce outputs with greedy decoding, \ngenerating up to a maximum of 16 tokens (the default value). We use the default stop condition, which is 2 newline tokens.\\footnote{The relevant code is available at: \n\\url{https:\/\/github.com\/allenai\/natural-instructions-v1}\n}\n\n\n\\subsection{Generalization Under Various Task Splits}\n\\label{sec:gen:various:splits}\n\n\\changed{\nTable~\\ref{tab:bart:generalization:all:splits} reports the results of the BART model trained and evaluated with various task splits (\\S\\ref{subsec:split})}. \nFor comparison, we evaluate GPT3, which uses no fine-tuning, unlike BART, which is fine-tuned on the \\task{seen} tasks.\nThe first column corresponds to the random split of tasks, while the remaining columns report cross-task generalization results of the BART model under \n\\changed{\nleave-one-$x$\n}\nsplits (\\S\\ref{subsec:split}). \nFor \n\\changed{\n$x =$ \\ul{category},}\nthe tasks in the \\emph{question-generation} category are held out during training. \nFor \n\\changed{\n$x =$ \\ul{dataset},}\nthe tasks that were extracted from the \\emph{QASC} dataset were excluded from training. \nFor \n\\changed{\n$x =$ \\ul{task},}\nwe train a model on all tasks, except the \\emph{QASC question generation} task, which is used for evaluation. \n\n\n\n\\vspace{-.2cm}\n\\paragraph{Instructions benefit cross-task generalization.} \nThe results indicate that BART benefits from instructions in generalizing to new tasks, regardless of task splits. \nFor example, under the random split, the model using \\textsc{Full Instructions} results in +19\\% gains over a model that is not using instructions. \n This is particularly interesting for the \nleave-one-\\ul{category}-out split\nsince the trained model can generalize to the tasks of a particular semantic category, without being exposed to it. \nIn comparison to GPT3, the fine-tuned BART model that utilizes instructions achieves stronger performance despite being $\\times 1k$ smaller than GPT3. \nFor example, a BART model using \\textsc{Full Instructions} achieves 8\\% higher performance than GPT3 under the random split of tasks. \n\n\nNote that the absolute values in leave-one-category are lower due to the difficulty of this setup compared to, for example, the random split setup. 
\nWhile all settings involve evaluating on tasks not seen during training, the leave-one-category setting enforces more dissimilarity among training and evaluation tasks.\n\n\n\\subsection{Generalization Under Instruction Encoding and Task Categories}\nTable~\\ref{tab:random:splitfull2} reports the results of the BART model \nfor different encodings of instruction elements (\\S\\ref{subsec:models}) and for different task categories.\nThe table shows that encoding more elements of the instructions generally achieves better results than just using \\textsc{prompt} or \\textsc{positive examples}. \nIt additionally shows that the benefit of the instruction elements seems to depend on the target task category. \nWe observe that the \\emph{question-generation} (QG) tasks benefit the most from \\textsc{positive examples}, whereas in \\emph{classification} (CF),\n\\textsc{positive examples} are of little help. We hypothesize that this is because it is easier to mimic question-generation based on a few examples, whereas it is difficult to define classes via a few examples, where \\textsc{definition} can be more helpful. \nThe models show little improvement in \\emph{verification} (VF). \nWe hypothesize these tasks are inherently more difficult, partially because of their distinctness from the rest of the tasks in the dataset. \nWe hope future work along this line will study a wider variety of tasks and will improve our understanding of such failure cases. \n\n\\subsection{Generalization vs. Number of Seen Tasks}\n\\label{subsec:supervision:size:experiment}\nFig.~\\ref{fig:scaling:tasks} compares the impact of the number of seen tasks for cross-task generalization. \nFor supervision, we randomly sample a few tasks as \\task{seen} and evaluate on 6 tasks (one from each category). \n(Each point in the figure is averaged over 5 random subsamples.) \nThe results\nshow that with \\textsc{no-instruction} encoding there is no tangible value in observing more tasks. \nIn contrast, the generalization of the models that encode instructions improves with observing more tasks. \nThis is an exciting observation since it suggests that scaling up our dataset to more tasks may lead to stronger instruction-following systems. \n\n\n\n\n\n\\subsection{Analyses} \\label{subsec:task-specific}\n\n\n\\paragraph{Upperbound: Task-specific Models.}\nFor each task, we obtain a task-specific model (\\S~\\ref{subsec:input:output}) by training BART separately on each task's annotated training data. We evaluate these task-specific models to obtain a loose estimate of \\emph{upperbounds} for each task. \nOn average, task-specific models score 66\\%, which is considerably higher than our models' best generalization (32\\%; Table~\\ref{tab:bart:generalization:all:splits}).\nThis indicates that { there is considerable room for improving generalization-based models} that use instructions. \n\n\\begin{comment}\n\\begin{table}\n \\centering\n \n \\footnotesize\n \\resizebox{\\columnwidth}{!}{\n \\begin{tabular}{lcc}\n \\toprule\n error type & GPT3 & BART \\\\\n \\midrule\n \n \n \n \n \n \n generates a nonsensical\/vague question & 4 & 47\\\\\n \n explains the question after generating it & 6 & 0\\\\\n generates a yes\/no question & 12 & 4\\\\\n \n \n \\makecell[l]{generates generic questions independent\\\\ of the given context} &6 &0\\\\\n \\bottomrule\n \\end{tabular}\n }\n \\caption{\n Percentage of errors on QASC QG task. \n The numbers do not sum to 100 since the error types are not mutually exclusive. 
\n \n }\n \\label{tab_error_analysis_main_text}\n\\end{table}\n\\subsection{Error Analysis}\nTable~\\ref{tab_error_analysis_main_text} shows the breakdown of most common error types for the QASC question generation task by analyzing 30 errors (more error analyses can be found in Appendix~\\ref{sec:error:analysis}; Table~\\ref{Tab: Error Analysis}).\n\\end{comment}\n\n\\begin{table}\n \\centering\n \\small\n \\resizebox{0.99\\linewidth}{!}{\n \\begin{tabular}{llcc}\n \n \\toprule\n Model \u2193 & Split \u2193 & \\makecell{w\/ neg.\\\\examples} & \\makecell{w\/o neg.\\\\examples} \\\\\n \\midrule\n \\multirow{5}{*}{BART} & random & 32 & {\\bf 35} \\\\ \n & leave-one-$x$ \\\\ \n & \\ $\\drsh x=$ category (AG) & 19 & {\\bf 21} \\\\ \n & \\ $\\drsh x=$ dataset (Quoref) & 37 & 37 \\\\ \n & \\ $\\drsh x=$ task (QASC QG) & 56 & {\\bf 57} \\\\ \n \\midrule\n GPT3 & - & 24 & {\\bf 44} \\\\ \n \\bottomrule\n \\end{tabular}\n }\n \\caption{\n Effect of excluding negative examples from \\textsc{Full Instruction} encoding. Negative instructions are surprisingly difficult for the models to learn from. \n }\n \\label{tab:negative:examples}\n\\end{table}\n\n\n\n\\paragraph{Impact of Negative Examples.}\nCrowdsourcing instructions often include negative examples to exemplify undesirable responses. \nWe study how negative examples in instructions affect cross-task generalization. \nOur cases study (Table~\\ref{tab:negative:examples}) indicates that the models work better \\emph{without} (w\/o) negative examples, \ncontrary to the previously-observed benefits of other instructional elements (e.g., definition, positive examples). \nThis is aligned with the previous studies ~\\cite{xuan2020hard,lin2003bootstrapped} that discuss the challenges of learning from negative examples.\nInterestingly, GPT3's drop (44 vs 24) is more significant than BART (35 vs 32), showing that BART can partly recover through the training step. \n\n\\begin{table*}[ht]\n \\centering\n \\resizebox{0.78\\textwidth}{!}{\n \\footnotesize\n \\begin{tabular}{p{4.5cm}p{3.5cm}p{6.5cm}}\n \\toprule\n Category & Helpful Fields & Explanation \\\\\n \\midrule\n Question Generation (QG) & 1. \\textsc{Definition} & - Provides a holistic picture of the task.\\\\\n & 2. \\textsc{Emphasis \\& Caution} & - Provides key information for solving the task.\\\\\n & 3. \\textsc{Positive Examples} & - This gives an idea of what is expected in the output.\\\\\n & 4. \\textsc{Negative Examples} & - Good to know the common mistakes people do.\\\\ \n \\midrule\n Answer Generation (AG) & \\textsc{1. Prompt} & - It limits the exploration space to question spans.\\\\\n & \\textsc{2. Definition} & - Provides a general understanding of the task. \\\\\n & \\textsc{3. Positive Examples} & - Reason field is very helpful.\\\\\n \\midrule\n Classification (CF) & \\textsc{1. Definition} & - The task is unclear without this field.\\\\\n \\midrule\n Incorrect Answer Generation (IAG) & \\textsc{1. Definition} & - Helps understand the utility of such a task.\\\\\n & \\textsc{2. Emphasis \\& Caution} & - Source of some useful shortcuts.\\\\\n & \\textsc{3. Positive Examples} & - Helps in understanding the type of questions asked.\\\\\n \\midrule\n Minimal Text Modification (MM) & \\textsc{1. Things to Avoid} & - Provides critical information.\\\\\n \\midrule\n Verification (VF) & \\textsc{1. Definition} & - Makes the task easy to understand.\\\\\n & \\textsc{2. Things to avoid} & - Contains useful tips required for this task.\\\\\n & \\textsc{3. 
Positive Examples} & - Exemplifies task understanding.\\\\ \n & \\textsc{4. Negative examples} & - Helps avoid potential mistakes.\\\\\n \\bottomrule\n \\end{tabular}\n }\n \\caption{Results of humans' perceived importance of instruction elements. Our annotators, for example, find \\textsc{Definition} and \\textsc{Thing to Avoid} to be helpful for \\textit{Classification} and \\textit{Minimal Text Modification} tasks, respectively.}\n \\label{Tab:User:Study}\n\\end{table*}\n\n\n\\paragraph{Error Analysis.}\nWe randomly sample 30 erroneous predictions of our fine-tuned BART on 3 distinct tasks (Winogrande answer generation; QASC question generation; MC-TACO incorrect answer generation). We categorize the errors into common patterns (Table~\\ref{Tab: Error Analysis}).\n\\begin{table}[ht]\n \\centering\n \\small\n \n \\begin{tabular}{lc}\n \\toprule\n error type & BART \\\\\n \\midrule\n {\\color{brown} \\textit{Generates a nonsensical\/vague question}} & 47\\\\\n {\\color{brown}\\textit{Generate an invalid question}} & 8\\\\\n {\\color{brown}\\textit{Generates a yes\/no question}} & 4\\\\\n {\\color{brown}\\textit{Copies the given fact or a subset of it}} & 3 \\\\\n {\\color{brown}\\textit{Generates unanswerable questions}} &3\\\\\n \\bottomrule\n \\end{tabular}\n \n \\caption{\n Percentage of errors on QASC QG task. \n The numbers do not sum to 100 since the error types are not mutually exclusive.\n \n }\n \\label{Tab: Error Analysis}\n\\end{table}\n\nUnlike GPT3 which generally suffers from generating redundant content irrelevant to instructions, our BART model provides more control, however, it can fail to generate proper output. \nHere are several erroneous predictions from our model: \n\n\n\\egbox{\n\\fontsize{8pt}{10pt}\\selectfont\n {\\color{brown}\\textit{Generates a nonsensical\/vague question}}\\\\\n \\textbf{BART input:}[...]input: Fact: graduations are used for measuring the volume of substances. output: ?\\\\\n \\textbf{BART output:} How many substances are there in volume?\n}\n\n\\egbox{\n\\fontsize{8pt}{10pt}\\selectfont\n {\\color{brown}\\textit{Copies the given fact or a subset of it}}\\\\\n \\textbf{BART input:}[...] input: Fact: If someone is not breathing, begin with ventilation. output: ? \\\\\n \\textbf{BART output:} If someone is breathing, begin with ventilation.\n}\n\n\n\\egbox{\n\\fontsize{8pt}{10pt}\\selectfont\n {\\color{brown}\\textit{Generates a yes\/no question }}\\\\\n \\textbf{BART input:}[...] input: Fact: Lack of saliva hinders the break down of food into components the body can absorb. output: ?\n \\textbf{BART output:} Does saliva hinders the break down of food into components the body can absorb?\n}\n\n\\paragraph{Perceived Impact of Instruction Elements.}\nWe survey human annotators to find out the value of instruction elements to humans.\nExcept for the negative examples which were shown to be difficult for models, we observe similar trends between humans' perceived value of those elements (Table~\\ref{Tab:User:Study}) and their contributions to the model performance (Table~\\ref{tab:random:splitfull2}). \nFor example, humans viewed \\textsc{Definition} and \\textsc{Things to Avoid} as necessary fields for \\emph{classification} and \\emph{minimal text modification} categories, respectively, which is compatible with our empirical observations (e.g., \\textsc{prompt + definition} has the highest score on CF category in Table~\\ref{tab:random:splitfull2}). 
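\n\n\\paragraph{Evaluation sketch.}\nFor concreteness, the ROUGE-L scoring behind the numbers reported in this section can be sketched as follows. This is a minimal illustration assuming the \\texttt{rouge-score} package; the data handling shown here is illustrative rather than the exact released evaluation scripts (see the repository referenced in \\S\\ref{sec:experiments}).\n\\begin{verbatim}\nfrom rouge_score import rouge_scorer\nfrom statistics import mean\n\n# One scorer instance, reused for every prediction.\nscorer = rouge_scorer.RougeScorer(['rougeL'])\n\ndef rouge_l(prediction, references):\n    # With several gold outputs per instance, keep the best match\n    # (a common convention; shown here only for illustration).\n    return max(scorer.score(ref, prediction)['rougeL'].fmeasure\n               for ref in references)\n\ndef evaluate_task(predictions, gold_references):\n    # predictions: list of generated strings;\n    # gold_references: list of lists of reference strings.\n    return 100.0 * mean(rouge_l(p, refs)\n                        for p, refs in zip(predictions, gold_references))\n\n# Toy usage:\nprint(evaluate_task(['the cat sat on the mat'], [['a cat sat on the mat']]))\n\\end{verbatim}\nIn this sketch, scores are averaged over the instances of each evaluation task and rescaled to $[0,100]$, mirroring the way the numbers in the tables above are reported.\n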
\n\n\n\n\n\\section{Conclusion}\n\\label{sec:discussion}\nIn this paper, we studied the goal of building models that generalize to new tasks by encoding and understanding crowdsourcing instructions. \nWe introduced \\textsc{Natural Instructions}{}, which is built from existing crowdsourced datasets and enables building such models and evaluating them systematically. \nTo the best of our knowledge, this is the first work to show the benefit of instructions towards improved cross-task generalization. \nAdditionally, we observe that our proposed task leaves considerable room for improvement, which we believe \nwill bring more attention to building stronger models that can generalize to a wider range of tasks. \n\n\n\\begin{comment}\n\\vspace{-.2cm}\n\\paragraph{Future extensions.}\n The observations made in \\S\\ref{subsec:supervision:size:experiment} indicate that there are likely benefits to repeating our study with a larger set of datasets. \n We hope the future work expands our work with a larger and broader range of tasks. \n \n We use automatic evaluation, in order to facilitate the replicability of the follow-up work on \\textsc{Natural Instructions}{}. Admitting limitations of automatic evaluations, we hope future work will provide an easy-to-reproduce human evaluation for the tasks studied here, based on the recent proposals for streamlining human evaluation of text generation models~\\cite{khashabi2021genie}. \n\\end{comment}\n\n\n\n\n\n\n\\section*{Acknowledgements}\nWe thank OpenAI for providing access to the GPT3 API, \nauthors who generously shared their dataset templates with us, Matt Peters and Nicholas Lourie for helpful input, the Beaker team for their support with experiments, and the anonymous reviewers for their helpful feedback. \nThe support of DARPA SAIL-ON, DARPA CHESS program, NSF IIS-2044660, ONR N00014-18-1-2826,\nand Paul G. Allen Foundation is gratefully acknowledged.\n\n\n\n\n\n\\section{Related Work}\n\\label{sec:related:work}\n\\begin{comment}\n\\paragraph{Instructions in NLP applications.} \\hanna{if no space, you can cut this paragraph}\nPrior work has studied ``instructions'' in various niches, such as\nrobotic instructions~\\cite{shridhar2020alfred, stepputtis2020language}, \ndatabases~\\cite{kim2020natural}, \nprogramming~\\cite{lin2018nl2bash,shao2020chartdialogs}, \\emph{inter alia}. \nSuch {instructions} are inherently different from ours, as they are intended to be mapped to pre-defined symbolic forms (e.g., SQL commands). \nConversely, our instructions describe general NLP tasks (no underlying grammar) for measuring task-level generalization. \n\\end{comment}\n\n\\changed{\n\\vspace{-.2cm} \\paragraph{Learning from instructions.}\nThere is recent literature on the extent to which models follow language instructions~\\cite{hase2021can,ye2021zero,Gupta2021TowardsGP,Zhong2021AdaptingLM}.\nFor example, \\citet{efrat2020turking} examine if language models can follow crowdsourcing instructions with no further training. In contrast, our work pursues a fundamentally different goal: creating a dataset of crowdsourcing instructions and task instances and formulating cross-task generalization by training models on seen tasks and measuring generalization to the remaining unseen ones.\n\\citet{weller-etal-2020-learning} construct a crowdsourced dataset with short question-like task descriptions. \nCompared to this work, our instructions are longer, more complex and natural since they were used to collect datasets through crowdsourcing. 
\n\nPromptSource and FLAN~\\cite{wei2021finetuned,sanh2021multitask} are two concurrent works that pursue a similar goal as ours. \nA key difference between our work to these works is in terms of data collection strategy. \nOur work uses natural instructions created by NLP researchers before the dataset instances were created by crowd workers, and hence it contains the complete definition of each task (definition, things to avoid, negative examples, etc.). \nOn the other hand, instructions in the concurrent work are collected retroactively based on the already-available task instances. \nOur {\\it natural} instructions enable evaluating models on how they learn tasks given different elements of task descriptions. (See \\S\\ref{subsec:promptsource} for further comparisons.) \nNevertheless, we believe that all these approaches to constructing instructions and task categories are complementary and the community will benefit from considering both towards solving the challenging problem of cross-task generalization.\n\n\\vspace{-.2cm}\\paragraph{Prompt engineering.}\nConstructing effective discrete prompts for language models to perform NLP tasks is an active area of research~\\cite{schick2020few,reynolds2021prompt,liu2021pre}. \nSuch prompts are often extremely short and may not include a complete definition of complex tasks. \nIn contrast, our instructions encode detailed instructions as they were used to collect the datasets. \nMoreover, the goals are different:\nMost prompt-engineering approaches seek prompts with higher performance on a particular task, \ntypically through assumptions about their target task which make them non-trivial to generalize to any other task. \nHowever, our introduced meta dataset enables the measurement of generalization to unseen tasks. \n\n\n\\vspace{-.2cm}\\paragraph{Beyond standard multi-task learning.}\nMulti-task learning is a long-standing goal for AI~\\cite{caruana1997multitask} and has led to successful models that can support a wider range of tasks\n~\\cite{mccann2018natural,raffel2020exploring,khashabi2020unifiedqa,mishra2020towards,aghajanyan2021muppet,ye2021crossfit}.\nMost of the conventional setups in the multi-tasking literature evaluate on instances that belong to the tasks that are seen, i.e., their labeled instances were observed during training (1st column of Table~\\ref{tab:comparison}). \nWe augment this setup by \nintroducing natural language instructions which enable our models to bridge to tasks that were not seen during training. \n}\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Lambda Function and Motivations}\n\nThe existence of the core decomposition with Peano quotient for planar compacta \\cite{LLY-2019} enables us to associate to each compact set $K\\subset\\hat{\\bbC}$ a map $\\lambda_K:\\hat{\\bbC}\\rightarrow\\bbN\\cup\\{\\infty\\}$, called the {\\em lambda function of $K$}. This function sends all points $x\\notin K$ to zero and may take a positive value for some $x\\in K$. It ``quantifies'' certain aspects of the topological structure of $K$, that is more or less related to the property of being locally connected. In particular, a continuum $K\\subset\\hat{\\bbC}$ is locally connected if and only if $\\lambda_K(x)=0$ for all $x\\in\\hat{\\mathbb{C}}$. 
On the other hand, if a continuum $K\\subset\\hat{\\bbC}$ is not locally connected at $x\\in K$ then $\\lambda_K(x)\\ge1$; but the converse is not necessarily true.\n\nThe quantification in terms of lambda function allows us to carry out a new analysis of the topology of $K$, by computing or estimating $\\lambda_K(x)$ for specific choices of $x\\in K$. In the current paper, we will investigate an interesting phenomenon that was firstly revealed in a fundamental result by Marie Torhorst, as one of the three highlights of \\cite{Torhorst}. This result is often referred to as Torhorst Theorem \\cite[p.106, (2.2)]{Whyburn42} and reads as follows.\n\\begin{theorem*}[{\\bf Torhorst Theorem}]\nThe boundary $F$ of every complementary domain $R$ of a locally connected continuum $M\\subset\\hat{\\mathbb{C}}$ is itself a locally connected continuum.\n\\end{theorem*}\n\nWe will obtain an inequality that includes the Torhorst Theorem as a simple case.\nThe inequality is about the lambda function $\\lambda_K$. The function $\\lambda_K$ is based on the core decomposition of $K$ with Peano quotient \\cite{LLY-2019}, which is motivated by some open questions in \\cite{Curry10} and extends two earlier models of polynomial Julia sets developed in \\cite{BCO11,BCO13}. Those models, briefly called BCO models, provide efficient ways (1) to describe the topology of unshielded compacta, like polynomial Julia sets, and (2) to obtain specific factor systems for polynomials restricted to the Julia set. The BCO models are special cases of a more general model, working well for all planar compacta, that associates natural factor systems to the dynamics of rational functions \\cite{LLY-2019,LYY-2020}.\n\nRecall that a {\\bf Peano continuum} means the image of $[0,1]$ under a continuous map. By Hahn-Mazurkiewicz-Sierpi\\'nski Theorem\n\\cite[p.256, \\S 50, II, Theorem 2]{Kuratowski68}, a continuum is locally connected if and only if it is a Peano continuum. On the other hand,\na {\\bf Peano compactum} is defined to be a compactum having locally connected components such that for any constant $C>0$ at most finitely many of its components are of diameter greater than $C$. Therefore, the Cantor ternary set is a Peano compactum and a Peano continuum is just a Peano compactum that is connected. Concerning how such a definition arises from the discussions of BCO models, we refer to \\cite[Theorems 1-3]{LLY-2019}.\n\n\n\nGiven a compactum $K\\subset\\hat{\\mathbb{C}}$, there exists an upper semi-continuous decomposition of $K$ into sub-continua, denoted as $\\Dc_K^{PC}$, such that (1) the quotient space is a Peano compactum and (2) $\\Dc_K^{PC}$ refines every other such decomposition of $K$ \\cite[Theorem 7]{LLY-2019}.\nWe call $\\Dc_K^{PC}$ the core decomposition of $K$ with Peano quotient. The hyperspace $\\Dc_K^{PC}$ under quotient topology is called the Peano model of $K$. Every $d\\in\\Dc_K^{PC}$ is called an {\\bf atom} of $K$, or an {\\bf order-one atom}, or an atom of order $1$. Every atom of an order-one atom is called an {\\bf order-two atom}, and so on. 
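For instance, one may check that if $\\mathcal{C}\\subset[0,1]$ is the Cantor ternary set and $K=\\{c+t{\\bf i}:\\ c\\in\\mathcal{C},\\ 0\\le t\\le 1\\}$, then the atoms of $K$ are exactly the vertical segments, namely\n\\[\n\\Dc_K^{PC}=\\left\\{[c,c+{\\bf i}]:\\ c\\in\\mathcal{C}\\right\\},\n\\]\nso that the Peano model of $K$ is homeomorphic to $\\mathcal{C}$; since each of these atoms is an arc, all the order-two atoms of $K$ are singletons. 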
Note that a compactum such as the pseudo-arc or Cantor's Teepee may have a non-degenerate atom of order $\\infty$.\n\nConsidering the atoms of a compactum $K\\subset\\hat{\\mathbb{C}}$ as its structural units, we summarize the results obtained in \\cite[Theorem 7]{LLY-2019} and \\cite[Theorem 1.1]{LYY-2020} in the following way.\n\\begin{theorem*}[{\\bf Theory of Atoms}\nEvery compactum $K\\subset\\hat{\\mathbb{C}}$ is made up of atoms; all its atoms are sub-continua of $K$ and they form an upper semi-continuous decomposition, with its quotient space being a Peano compactum, that refines every other such decomposition; moreover, for any finite-to-one open map $f:\\hat{\\bbC}\\rightarrow\\hat{\\bbC}$ and any atom $d$ of $K$, each component of $f^{-1}(d)$ is an atom of $f^{-1}(K)$.\n\\end{theorem*}\n\nUsing the hierarchy formed by {\\bf atoms of atoms}, we introduce the lambda function.\n\\begin{definition*}[{\\bf Lambda Function}]\nGiven a compactum $K\\subset\\hat{\\bbC}$. Let $\\lambda_K(x)=0$ for $x\\notin K$. Let $\\lambda_K(x)=m-1$ for any $x\\in K$, if there is a smallest integer $m\\ge1$ such that $\\{x\\}$ is an order-$m$ atom of $K$. If such an integer $m$ does not exist, we put $\\lambda_K(x)=\\infty$.\n\\end{definition*}\nWhen little is known about the topology of $K$, it is difficult to completely determine the values of $\\lambda_K$. On the other hand, the level sets $\\lambda_K^{-1}(n)(n\\ge0)$ are ``computable'' for typical choices of $K$. In such circumstances, the lambda function $\\lambda_K$ is useful in describing certain aspects of the topology of $K$. For instance, one may check the following observations: (1) a compact set $K\\subset\\hat{\\bbC}$ is a Peano compactum if and only if $\\lambda_K(x)=0$ everywhere; (2) if $K=\\left\\{t+\\left(\\sin\\frac1t\\right){\\bf i}: 00$ its complement has at most finitely many components of diameter $>\\varepsilon$. Unfortunately, the condition of $E$-compactum alone is still not sufficient for the {\\bf Lambda Equality} $\\tilde{\\lambda}_K=\\lambda_K$. See Examples \\ref{E-compactum} and \\ref{finite-comp}.\n\nThe theorem below gives three conditions under which the Lambda Equality holds.\n\n\\begin{main-theorem}\\label{equality-case}\nGiven a compactum $K\\subset\\hat{\\mathbb{C}}$, the Lambda Equality $\\tilde{\\lambda}_K=\\lambda_K$ holds if one of the following conditions is satisfied:\n\n(i) $K$ is an $E$-compactum such that the envelope function $\\tilde{\\lambda}_K(x)$ vanishes everywhere;\n\n(ii) $K$ is an $E$-compactum whose complementary components have disjoint closures.\n\n(iii) $K$ is a partially unshielded compactum.\n\\end{main-theorem}\n\n\\begin{rem}\nIn (i) and (ii) of Theorem \\ref{equality-case}, the assumption that $K$ is an $E$-compactum can not be removed. We may set $K=[0,1]\\!\\times\\![0,{\\bf i}]\\setminus\\left(\\bigcup\\limits_1^\\infty R_n\\right)$, with $R_n=\\left(\\frac{1}{3n},\\frac{2}{3n}\\right)\\times\\left(\\frac13{\\bf i},\\frac23{\\bf i}\\right)$. If $W=\\hat{\\mathbb{C}}\\setminus[0,1]\\!\\times\\![0,{\\bf i}]$ then $\\partial W\\cap\\partial R_n=\\emptyset$ for all $n\\ge1$ and $\\partial R_n\\cap \\partial R_m=\\emptyset$ for $n\\ne m$. See Figure \\ref{non-E} for a simple depiction of $K$.\nThe continuum $K$ is not an $E$-continuum but it satisfies the other assumptions in (i) and (ii) of Theorem \\ref{equality-case}. It has exactly one non-degenerate atom, the segment $\\left[\\frac13{\\bf i},\\frac23{\\bf i}\\right]$. 
Thus $\\lambda_K(x)-\\tilde{\\lambda}_K(x)=\\left\\{\\begin{array}{ll}1& x\\in\\left[\\frac13{\\bf i},\\frac23{\\bf i}\\right]\\\\ 0&\\text{otherwise}.\\end{array}\\right.$\n\n\\begin{figure}[ht]\n\\vskip -0.75cm\n\\begin{center}\n\\begin{tikzpicture}[x=5cm,y=5cm,scale=0.618]\n\\fill[gray!20,thick] (0,0) -- (0,1) -- (1,1) -- (1,0) -- (0,0);\n\\draw[gray,thick] (0,0) -- (0,1) -- (1,1) -- (1,0) -- (0,0);\n\\draw[black, ultra thick] (0,1\/3) -- (0,2\/3);\n\n\\foreach \\j in {1,...,3}\n{\n \\fill[white] (1\/3^\\j,1\/3) -- (2\/3^\\j,1\/3) -- (2\/3^\\j,2\/3) -- (1\/3^\\j,2\/3) --(1\/3^\\j,1\/3);\n \\draw[gray, thick] (1\/3^\\j,1\/3) -- (2\/3^\\j,1\/3) -- (2\/3^\\j,2\/3) -- (1\/3^\\j,2\/3) --(1\/3^\\j,1\/3);\n}\n\\foreach \\j in {1,...,6}\n{\\fill[gray] (1\/60,0.32+0.05*\\j) circle(0.3ex);\n}\n\n\\draw(0,0.02) node[left]{$0$};\n\\draw(1,0.02) node[right]{$1$};\n\\draw(0,0.98) node[left]{${\\bf i}$};\n\n\\end{tikzpicture}\n\\end{center}\n\\vskip -0.95cm\n\\caption{A depiction for $K$ and some of the rectangles.}\\label{non-E}\n\\vskip -0.25cm\n\\end{figure}\n\\end{rem}\n\nNote that the Lambda Equality may not hold for an $E$-compactum $K\\subset\\hat{\\mathbb{C}}$, even if it has finitely many complementary components. See Example \\ref{finite-comp}. Also notice that the Lambda Equality under condition (i) implies the theorem below. This extends Whyburn's Theorem \\cite[p.113, (4.4)]{Whyburn42}, which says that {\\em an $E$-continuum is a Peano continuum if and only if the boundary of any of its complementary components is a Peano continuum}.\n\\begin{theorem*}[{Extended Whyburn's Theorem}] An $E$-compactum is a Peano compactum if and only if the boundary of any of its complementary components is a Peano compactum.\n\\end{theorem*}\n\n\nTheorem \\ref{lambda_inequality} addresses how $\\lambda_K$ and $\\lambda_L$ are related when $L$ lies on the boundary of a component of $\\hat{\\bbC}\\setminus K$. There are other choices of planar compacta $K\\supset L$ so that $\\lambda_K$ and $\\lambda_L$ are intrinsically related. A typical situation happens, if the common part of $\\overline{K\\setminus L}$ and $L$ is a finite set.\n\n\\begin{main-theorem}\\label{gluing_lemma}\nIf $K\\supset L$ are planar compacta such that $\\overline{K\\setminus L}$ intersects $L$ at finitely many points then $\\lambda_K(x)=\\max\\left\\{\\lambda_{\\overline{K\\setminus L}}(x),\\lambda_L(x)\\right\\}$ for all $x$. \\end{main-theorem}\n\nSetting $A=\\overline{K\\setminus L}$, we can infer that that $\\lambda_K(x)$ coincides with $\\lambda_A(x)$ for $x\\in A\\setminus L$ and with $\\lambda_L(x)$ for $x\\in L\\setminus A$, equals $\\max\\left\\{\\lambda_{A}(x),\\lambda_L(x)\\right\\}$ for $x\\in A\\cap L$, and vanishes for every $x\\notin(A\\cup L)$. Therefore, we have.\n\n\\begin{theorem*}[{Gluing Lemma for Lambda Functions}]\\label{gluing_lemma_1}\nIf in addition $\\lambda_A(x)=\\lambda_L(x)$ for all $x\\in A\\cap L$ then $\\lambda_K(x)=\\lambda_{A\\cup L}$ may be obtained by gluing $\\lambda_A$ and $\\lambda_L$, in the sense that\n\\begin{equation}\\label{form-1}\n\\lambda_{A\\cup L}(x)=\\left\\{\\begin{array}{ll}\\lambda_A(x)& x\\in A\\\\ \\lambda_L(x)& x\\in L\\\\ 0& {otherwise.}\\end{array}\\right.\n\\end{equation}\n\\end{theorem*}\n\\begin{rem}\nThe formation in Equation (\\ref{form-1}) is similar to the one illustrated in the well known gluing lemma for continuous maps. 
See for instance \\cite[p.69, Theorem (4.6)]{Armstrong}.\nIn certain situations, Theorem \\ref{gluing_lemma} helps us to analyze questions concerning local connectedness of polynomial Julia sets. See Question \\ref{small_julia}. However,\nthe case that $A\\cap L$ is an infinite set is more involved. In Theorem \\ref{baby_M}, we will extend Theorem 4 to such a case under additional assumptions. This extension allows one to choose $K$ to be the Mandelbrot set and $L$ the closure of a hyperbolic component. For concrete choices of $A$ and $L$ so that Equation \\ref{form-1} does not hold, we refer to Examples \\ref{cantor_combs}, \\ref{brooms} and \\ref{cup-fs}.\n\\end{rem}\n\n\n\n\n\n\n\n\nThe other parts of this paper are arranged as follows. Section \\ref{proof-c} is devoted to the proofs for Theorems \\ref{compare_atoms} to \\ref{lambda_inequality}. Section \\ref{equality} gives a proof for Theorem \\ref{equality-case}. In Section \\ref{glue} we firstly prove Theorem \\ref{gluing_lemma} and then continue to establish Theorem \\ref{baby_M}.\nSection \\ref{examples} gives examples.\n\n\n\n\n\n\n\\section{The Lambda Inequality}\\label{proof-c}\n\nIn this section we prove Theorems \\ref{compare_atoms} and \\ref{lambda_inequality}.\n\nWe will study relations on compacta $K\\subset\\hat{\\mathbb{C}}$. Such a relation is considered as a subset of the product space $K\\times K$ and is said to be {\\bf closed} if it is closed in $K\\times K$. Given a relation $\\Rc$ on $K$, we call $\\Rc[x]=\\{y\\in K: \\ (x,y)\\in\\Rc\\}$ the fiber of $\\Rc$ at $x$. We mostly consider closed relations $\\Rc$ that are reflexive and symmetric, so that for all $x,y\\in K$ we have (1) $x\\in \\Rc[x]$ and (2) $x\\in\\Rc[y]$ if and only if $y\\in\\Rc[x]$. For such a relation, the {\\em iterated relation} $\\Rc^2$ is defined naturally so that\n$\\displaystyle y\\in\\Rc^2[x]$ if and only if there exist $z\\in K$ with $(x,z), (z,y)\\in \\Rc$.\n\nRecall that the {\\bf Sch\\\"onflies relation} on a planar compactum $K$ is a reflexive symmetric relation. Under this relation, two points $x_1, x_2$ are related provided that either $x_1=x_2$ or there exist two disjoint Jordan curves $J_i\\ni x_i$ such that $\\overline{U}\\cap K$ has infinitely many components $P_n$, intersecting $J_1$ and $J_2$ both, whose limit under Hausdorff distance contains $\\{x_1,x_2\\}$. Here $U$ is the component of \\ $\\hat{\\bbC}\\setminus(J_1\\cup J_2)$ with $\\partial U=J_1 \\cup J_2$.\n\nGiven a compactum $K\\subset\\hat{\\bbC}$, denote by $R_K$ the Sch\\\"onflies relation on $K$ and by $\\overline{R_K}$ the closure of $R_K$. We also call $\\overline{R_K}$ the {\\bf closed Sch\\\"onflies relation}.\nLet $\\Dc_K$ be the finest upper semi-continuous decompositions of $K$ into sub-continua that splits none of the fibers $R_K[x]$. Then $\\Dc_K$ coincides with $\\Dc_K^{PC}$, the core decomposition of $K$ with Peano quotient \\cite[Theorem 7]{LLY-2019}. Therefore the elements of $\\mathcal{D}_K$ are the (order-one) atoms of $K$.\n\nEvery fiber $\\overline{R_K}[x]$ is a continuum. See\n\\cite[Theorem 1.4]{LYY-2020}. However, the compactness and connectedness of the fibers of $R_K$ remain open.\nMoreover, in order that a point $y\\ne x$ lies in $\\overline{R_K}[x]$ it is necessary and sufficient that for small enough $r>0$ the difference $K\\setminus(B_r(x)\\cup B_r(y))$ has infinitely many components that intersect each of the circles $\\partial B_r(x)$ and $\\partial B_r(y)$. 
See \\cite[Theorem 1.3]{LYY-2020}.\nThe lemma below relates the fibers of $\\overline{R_K}^2$ to those of $\\overline{R_L}$, where $L$ is a compact subset of $K$ satisfying certain properties.\n\n\\begin{lemma}\\label{key-lemma}\nGiven a compactum $K\\subset\\hat{\\mathbb{C}}$ and a component $U$ of \\ $\\hat{\\bbC}\\setminus K$. If $L\\subset\\partial U$ is compact then $\\overline{R_L}[x]\\subset \\overline{R_K}^2[x]$ for any $x\\in L$.\n\\end{lemma}\n\\begin{proof}\nTo obtain the containment $\\overline{R_L}[x]\\subset \\overline{R_K}^2[x]$ for any given $x\\in L$, we may fix an arbitrary point $y\\in\\overline{R_L}[x]\\setminus\\{x\\}$ and consider the annulus $A_n=\\hat{\\mathbb{C}}\\setminus\\left(B_{1\/n}(x)\\cup B_{1\/n}(y)\\right)$, for any integer $n\\ge1$ such that $\\overline{B_{1\/n}(x)}\\bigcap\\overline{B_{1\/n}(y)}=\\emptyset$. Here $B_{1\/n}(x)$ and $B_{1\/n}(y)$ are open disks with radius $1\/n$ under spherical distance, that are respectively centered at $x$ and $y$. By \\cite[Theorem 1.3]{LYY-2020}, $A_n\\cap L$ has infinitely many components intersecting both $\\partial B_{1\/n}(x)$ and $\\partial B_{1\/n}(y)$. So we can find an infinite sequence $\\{P_i\\}$ of such components that converge to some continuum $P_\\infty$ under Hausdorff metric.\n\nSince $x,y\\in L\\subset\\partial U$, we can find an open arc $\\alpha_0\\subset U$ that connects a point on $\\partial B_{1\/n}(x)$ to one on $\\partial B_{1\/n}(y)$. Going to an appropriate sub-arc, if necessary, we may assume that $\\alpha\\subset A_n$. Then, we may slightly thicken the closed arc $\\overline{\\alpha}$ and obtain a topological disc $\\alpha^*\\subset A_n$, satisfying $\\alpha^*\\cap K=\\emptyset$. From this we see that $\\overline{A_n\\setminus\\alpha^*}$ is homeomorphic to $[0,1]^2$. We will obtain the following.\n\n\n{\\bf Claim}. $P_\\infty$ contains two points $u_n\\in \\partial B_{1\/n}(x), v_n\\in \\partial B_{1\/n}(y)$ with $v_n\\in\\overline{R_K}^2[u_n]$.\n\nThe flexibility of the large enough integers $n$ ensures that $\\lim\\limits_nu_n=x$ and $\\lim\\limits_nv_n=y$. Since $\\overline{R_K}^2$ is a closed relation, we surely obtain $y\\in \\overline{R_K}^2[x]$. This completes our proof. Thus, the remaining issue is to verify the above claim.\n\nAs $\\overline{A_n\\setminus\\alpha^*}$ is a topological disc, we consider it to be the unit square $[0,1]^2$. Moreover, we may represent by $[0,1]\\times\\{1\\}$ the arc $l_1=\\overline{A_n\\setminus\\alpha^*}\\cap\\partial B_n(x)$ and by $[0,1]\\times\\{0\\}$ the arc $l_2=\\overline{A_n\\setminus\\alpha^*}\\cap\\partial B_n(y)$. Fix any point $z$ in $P_\\infty\\cap(0,1)^2$. 
For any $r>0$ that is small, let $W_r$ denote the open rectangle centered at $z$ with diameter $r$.\nSince $P_i\\rightarrow P_\\infty$ under Hausdorff distance we may assume that every $P_i$ intersects $W_r$ and lies in $[0,1]^2$, which from now on represents $\\overline{A_n\\setminus\\alpha^*}$.\nSee Figure \\ref{key}.\n\\begin{figure}[ht]\n\\vskip -0.5cm\n\\center{\n\\begin{tikzpicture}[scale=0.8,x=1.618cm, y=0.618cm]\n\\draw(-2,0)--(-2,7);\n\\draw(-2,0)--(7,0)node[below]{$l_2\\subset \\partial B_n(y)$};\n\\draw(-2,7)--(7,7)node[above]{$l_1\\subset \\partial B_n(x)$};\n\\draw(7,0)--(7,7);\n\\draw(2,0)--(2,7)node[above]{$P_\\infty$};\n\\fill(2,3.5)circle(2pt);\n\\draw[blue,thick](-1,2)--(5,2) -- (5,5) -- (-1,5) -- (-1,2);\n\\draw(2,3.5) node[left]{$W_r\\ni z$};\n\\draw(3,0)--(3,7)node[above]{$P_{2i+1}$};\n\\draw(4.5,0)--(4.5,7)node[above]{$P_{2i-1}$};\n\\draw[dashed](3.75,0)--(3.75,7);\n\\draw (3.75,0)node[below]{$P_{2i}$};\n\\draw(3.75,3) node[left]{$a_i$};\n\\fill(3.75,3)circle(2pt);\n\\draw(4,4) node[right]{$b_i$};\n\\fill(4,4)circle(2pt);\n\\draw[red](4.25,5)--(4.25,7); \\draw(4.3,6) node[left]{$\\beta_i$};\n\\draw[red](4.25,2)--(4.25,0); \\draw(4.3,1) node[left]{$\\beta_i$};\n\\end{tikzpicture}\n}\\vskip -0.75cm\n\\caption{Relative locations of $z,l_1,l_2,W_r, P_{2i-1}, P_{2i}, P_{2i+1}$ and $a_i, b_i$.}\\label{key}\n\\vskip -0.25cm\n\\end{figure}\n\nRecall that $[0,1]^2\\setminus P_\\infty$ has two components, one containing $\\{1\\}\\times[0,1]$ and the other $\\{0\\}\\times[0,1]$. One of these components contains infinitely many $P_i$. Without losing generality we may assume that every $P_i$ lies in the one containing $\\{1\\}\\times[0,1]$, denoted $V$. Thus $P_i$ can be connected to $\\{1\\}\\times[0,1]$ by an arc in $[0,1]^2$ that does not intersect $P_\\infty$. Moreover, rename $P_i(i\\ge1)$ so that every $P_i$ can be connected to $\\{1\\}\\times[0,1]$ by an arc in $[0,1]^2$ that does not intersect $P_j$ for $j\\ge i+1$. Therefore, each $P_i$ is ``to the right of'' $P_{i+1}$.\n\n\nFor all $i\\ge1$ let $V_i$ be the unique component of $\\hat{\\bbC}\\setminus\\left(P_{2i-1}\\cup P_{2i+1}\\cup l_1\\cup l_2\\right)$ whose boundary intersects each of $l_1$, $l_2$, $P_{2i-1}$ and $P_{2i+1}$. Then $P_{2i}\\subset \\overline{V_i}$ for $i\\ge1$. For the previously given point $z$ in $P_\\infty\\cap(0,1)^2$, we can find for each $i\\ge1$ a point $a_i\\in P_{2i}\\cap W_r$ such that $\\lim\\limits_{i\\rightarrow\\infty}a_i=z$. Since $P_{2i}\\subset L\\subset \\partial U$, we further find a point $b_i\\in (W_r\\cap V_i\\cap U)$ for every $i\\ge1$, such that the distance between $a_i$ and $b_i$ converges to zero as $i\\rightarrow\\infty$. Check Figure \\ref{key} for relative locations of $a_i\\in P_{2i+1}$ and $b_i\\in(W_r\\cap V_i\\cap U)$.\n\nNow, we may find arcs $\\alpha_i\\subset U$ for each $i\\ge1$ that starts from a fixed point $b_0\\in U$ and ends at $b_i$. Let $c_i$ be the last point on $\\alpha_i$ that leaves $\\partial[0,1]^2$. Let $d_i$ be the first point on $\\alpha_i$ after $c_i$ at which $\\alpha_i$ intersects $\\partial W_r$. Clearly, we have $c_i\\in(l_1\\cup l_2)$. Let $\\beta_i$ be the sub-arc of $\\alpha_i$ from $c_i$ to $d_i$. Check Figure \\ref{key} for a rough depiction of two possible locations for $\\beta_i$. 
Then $\\beta_i$ and $\\beta_j$ for $i\\ne j$ are contained in distinct components of $\\mathcal{A}_r\\setminus L$, where $\\mathcal{A}_r=[0,1]^2\\setminus W_r$ is topologically a closed annulus.\nSince $L\\subset K$ and $K\\cap U=\\emptyset$, the arcs $\\beta_i$ and $\\beta_j$ for $i\\ne j$ are contained in distinct components of $\\mathcal{A}_r\\setminus K$.\n\nLet $x_n$ be the only point on $l_1\\cap P_\\infty$ such that the right piece of $l_1\\setminus\\{x_n\\}$ does not intersect $P_\\infty$. Let $y_n$ be the point on $l_2\\cap P_\\infty$ such that the right piece of $l_2\\setminus\\{y_n\\}$ does not intersect $P_\\infty$. The sequence $\\{c_i\\}$ then has a limit point in $\\{x_n,y_n\\}$. We may assume that $z_r=\\lim\\limits_{i\\rightarrow\\infty}d_i$ for some point $z_r\\in\\partial W_r$. Since $\\partial[0,1]^2$ and $\\partial W_r$ are disjoint Jordan curves, from the choices of $x_n, y_n$ and $z_r$ we can infer that either $(x_n,z_r)\\in R_K$ or $(y_n,z_r)\\in R_K$. The flexibility of $r>0$ then leads to the inclusion $z\\in\\left(\\overline{R_K}[x_n]\\cup\\overline{R_K}[y_n]\\right)$.\n\nNow consider the two closed sets $E_n=P_\\infty\\cap \\left(\\overline{R_K}[x_n]\\cup l_1\\right)$ and $F_n=P_\\infty\\cap \\left(\\overline{R_K}[y_n]\\cup l_2\\right)$, which satisfy $P_\\infty=E_n\\cup F_n$. From the connectedness of $P_\\infty$ we see that $E_n\\cap F_n\\ne\\emptyset$. Clearly, each point $w\\in (E_n\\cap F_n)$ necessarily falls into one of the following cases:\n\\begin{itemize}\n\\item[(1)] $w$ lies in $l_1\\subset\\partial B_{1\/n}(x)$ and belongs to $\\overline{R_K}[y_n]$,\n\\item[(2)] $w$ lies in $l_2\\subset\\partial B_{1\/n}(y)$ and belongs to $\\overline{R_K}[x_n]$,\n\\item[(3)] $w\\notin(l_1\\cup l_2)$ and it lies in $\\overline{R_K}[x_n]\\cap\\overline{R_K}[y_n]\\cap(0,1)^2$.\n\\end{itemize}\nIn case (1) we set $u_n=w,v_n=y_n$; in case (2) we set $u_n=x_n, v_n=w$; in case (3) we set $u_n=x_n, v_n=y_n$. Then, in cases (1) and (2) we have $v_n\\in\\overline{R_K}[u_n]\\subset\\overline{R_K}^2[u_n]$; and in case (3) we will have $v_n\\in\\overline{R_K}^2[u_n]$. This verifies the claim and completes our proof.\n\\end{proof}\n\nWith Lemma \\ref{key-lemma}, we are well prepared to prove Theorems \\ref{compare_atoms} and \\ref{lambda_inequality} as follows.\n\n\\begin{proof}[{\\bf Proof for Theorem \\ref{compare_atoms}}]\nSince $U$ is also a complementary component of $\\partial U$, we only verify that every atom of $L$ is contained in a single atom of $K$.\n\nTo this end, let $\\Dc_L^\\#$ consist of all those continua that are each a component of $d^*\\cap L$ for some $d^*\\in\\Dc_K$. By \\cite[p.44, Theorem 3.21]{Nadler92} and \\cite[p.278, Lemma 13.2]{Nadler92}, we see that $\\Dc_L^\\#$ is an upper semi-continuous decomposition of $L$. As every fiber of $\\overline{R_K}^2$ is entirely contained in a single element of $\\Dc_K$, by Lemma \\ref{key-lemma} we know that every fiber $\\overline{R_L}[z]$ is entirely contained in a single element of $\\Dc_L^\\#$. This implies that $\\Dc_L^\\#$ is refined by $\\Dc_L$. In other words, every atom of $L$ is entirely contained in a single atom of $K$.\n\\end{proof}\n\n\\begin{proof}[{\\bf Proof for Theorem \\ref{lambda_inequality}}]\nTo obtain $\\lambda_L(x)\\le\\lambda_K(x)$ for all $x$, we only need to consider the points $x\\in L$. With no loss of generality, we may assume that $\\lambda_K(x)=m-1$ for some integer $m\\ge1$. 
That is to say, there exist strictly decreasing continua $d_1^*\\supset d_2^*\\supset\\cdots\\supset d_{m}^*=\\{x\\}$ such that $d_1^*$ is an atom of $K$ and $d_{i+1}^*$ an atom of $d_i^*$ for $1\\le i\\le m-1$. Here we may have $m=1$. By Theorem \\ref{compare_atoms}, the atom of $L$ containing $x$, denoted as $d_1$, is a subset of $d_1^*$. Since $d_1\\subset d_1^*$ also satisfies the assumptions of Theorem \\ref{compare_atoms}, we can infer that the atom of $d_1$ containing $x$, denoted as $d_2$, is a subset of $d_2^*$. Repeating the same argument for $m$ times, we obtain for $1\\le i\\le m$ an order-$i$ atom $d_i$ of $L$ with $d_i\\subset d_i^*$. Here we have $d_{m}=\\{x\\}$ and hence $\\lambda_L(x)\\le m=\\lambda_K(x)$.\n\\end{proof}\n\\begin{rem}\nIn the proof for Theorem \\ref{lambda_inequality}, we know that $U$ is a component of $\\hat{\\bbC}\\setminus K$ and $L\\subset\\partial U$. Therefore, in the same way we can show that $\\lambda_L(x)\\le\\lambda_{\\partial U}(x)$ for all $x$. From this we can infer that\n$\\sup\\limits_U\\lambda_{\\partial U}(x)=\\tilde{\\lambda}_K(x)$ for all $x\\in\\hat{\\mathbb{C}}$.\n\\end{rem}\n\n\n\n\n\\section{On Lambda Equalities}\\label{equality}\n\nWe prove Theorem \\ref{equality-case}, establishing three equalities in terms of the lambda function. Two of these equalities are for $E$-compacta. The other one is for partially unshielded compacta.\n\nGiven an $E$-compactum $K\\subset\\hat{\\mathbb{C}}$ with complementary components $U_1,U_2,\\ldots$, so that the diameters $\\delta(U_i)$ either form a finite sequence of an infinite one converging to zero. The Torhorst Inequality requires that $\\sup\\limits_i\\lambda_{\\partial U_i}(x)\\le\\lambda_K(x)$ for all $x\\in\\hat{\\mathbb{C}}$ and for all $i\\ge1$. Since $\\lambda_K(x)=\\tilde{\\lambda}_K(x)=0$ for all $x\\in K^o\\cup\\left(\\bigcup\\limits_iU_i\\right)$, we only need to consider the points on $\\partial K$, which may not equal $\\bigcup\\limits_i\\partial U_i$.\n\nLemma \\ref{bridging_lemma} follows from \\cite[Lemma 3.3]{LLY-2019} and is useful when we prove Lemma \\ref{trivial-fiber}.\n\\begin{lemma}\\label{bridging_lemma}\nIf $A\\subset\\hat{\\bbC}$ is a closed topological annulus and $K\\subset\\hat{\\bbC}$ a compactum then the following statements are equivalent: (1) $A\\cap K$ has infinitely many components intersecting each of the two components of $\\partial A$; (2) $A\\setminus K$ has infinitely many components intersecting each of the two components of $\\partial A$.\n\\end{lemma}\n\n\n\n\\begin{lemma}\\label{trivial-fiber}\nGiven an $E$-compactum $K\\subset\\hat{\\mathbb{C}}$ with complementary components $U_1,U_2,\\ldots$. If $\\overline{R_K}[x]$ contains a point $y\\ne x$ then $y\\in\\overline{R_{\\partial U_i}}[x]$ for some $i$.\n\\end{lemma}\n\\begin{proof}\nLet $\\rho(x,y)$ be the spherical distance between $x$ and $y$. For each $n\\ge2$ let $B_n(x)$ and $B_n(y)$ be the open disks of radius $2^{-n}\\rho(x,y)$ that are centered at $x$ and $y$ respectively. Then $A_n=\\hat{\\mathbb{C}}\\setminus\\left(B_n(x)\\cup B_n(y)\\right)$ is a topological annulus. By \\cite[Theorem 1.3]{LYY-2020}, the intersection $A_n\\cap K$ has infinitely many components that intersect $\\partial B_n(x)$ and $\\partial B_n(y)$ both. By Lemma \\ref{bridging_lemma}, the difference $A_n\\setminus K$ has infinitely many components, say $\\{P^n_j: j\\ge1\\}$, that intersect $\\partial B_n(x)$ and $\\partial B_n(y)$ both. 
Since the diameters of those $P^n_j$ are no less than $\\rho(x,y)\/2$ and since we assume $K$ to be an $E$-compactum, there is an integer $i(n)$ such that $U_{i(n)}$ contains infinitely many of those $P^n_j$. Here all those $P^n_j$ that are contained in $U_{i(n)}$ are each a component of $A_n\\cap U_{i(n)}$.\n\nNow, choose a subsequence $\\{Q^n_k: k\\ge1\\}$ of $\\{P^n_j: j\\ge1\\}$, with $Q_k^n\\subset U_{i(n)}$, such that $\\overline{Q^n_k}$ converges under Hausdorff distance to a continuum $M_n$. Then $M_n$ is a subset of $\\partial U_{i(n)}$ and intersects $\\partial B_n(x)$ and $\\partial B_n(y)$ both. Fixing any $a_n$ in $M_n\\cap \\partial B_n(x)$ and $b_n$ in $M_n\\cap \\partial B_n(y)$, we will have $(a_n,b_n)\\in R_{\\partial U_{i(n)}}$. Since $K$ is an $E$-compactum, there are infinitely many integers $n$ such that $i(n)$ takes the same value, say $i$. Therefore, we have two infinite sequences $\\{c_n\\}\\subset\\{a_n\\}$ and $\\{d_n\\}\\subset \\{b_n\\}$, with $c_n,d_n\\in\\partial U_i$, such that $(c_n,d_n)\\in R_{\\partial U_i}$ for all $n\\ge2$. Since $\\lim\\limits_{n\\rightarrow\\infty}c_n=x$ and $\\lim\\limits_{n\\rightarrow\\infty}d_n=y$, we readily have $(x,y)\\in\\overline{R_{\\partial U_i}}$, or equivalently $y\\in\\overline{R_{\\partial U_i}}[x] $.\n\\end{proof}\n\nNow we are well prepared to prove parts (i) and (ii) of Theorem \\ref{equality-case}, whose results are respectively included in the next two propositions.\n\n\\begin{proposition}\\label{equality-case-1}\nIf $K$ is an $E$-compactum such that $\\tilde{\\lambda}_K(x)=0$ for all $x\\in\\hat{\\mathbb{C}}$ then $\\lambda_K(x)$ vanishes everywhere.\n\\end{proposition}\n\\begin{proof}\nAs $\\tilde{\\lambda}_K(x)$ vanishes everywhere, all the relations $\\overline{R_{\\partial U_i}}$ are trivial, in the sense that the fibers $\\overline{R_{\\partial U_i}}[x]$ are each a singleton for all $i$ and all $x\\in\\partial U_i$. Combing this with the conclusion of Lemma \\ref{trivial-fiber}, we can infer that the fiber $\\overline{R_K}[x]=\\{x\\}$ for all $x\\in K$. From this, we see that every atom of $K$ is a singleton and that $\\lambda_K(x)=0$ for all $x$.\n\\end{proof}\n\n\n\\begin{proposition}\\label{equality-case-2}\nGiven an $E$-compactum $K$. If $\\partial U_i\\cap\\partial U_j=\\emptyset$ for $i\\ne j$ then $\\lambda_K=\\tilde{\\lambda}_K$.\n\\end{proposition}\n\\begin{proof}\nLet $\\mathcal{D}_i$ denote the core decomposition of $\\partial U_i$. Since we assume that $\\partial U_i\\cap\\partial U_j=\\emptyset$ for $i\\ne j$, the collection\n$\\displaystyle \\mathcal{D}_K^*:=\\left(\\bigcup\\limits_i\\mathcal{D}_i\\right)\\cup\\left\\{\\{x\\}: x\\in K\\setminus\\left(\\bigcup\\limits_i\\partial U_i\\right)\\right\\}$\nis a partition that divides $K$ into sub-continua. It suffices to show that $\\Dc_K^*$ is the core decomposition of $K$.\n\nRecall that $\\Dc_K$ is the finest monotone decomposition such that every fiber of $\\overline{R_K}$ is contained in a single element of $\\Dc_K$. By Lemma \\ref{key-lemma}, we know that $\\Dc_K$ is refined by $\\Dc_K^*$. On the other hand, since $K$ is an $E$-compactum and since $\\partial U_i\\cap\\partial U_j=\\emptyset$ for $i\\ne j$, we can use Lemma \\ref{trivial-fiber} to infer that every fiber of $\\overline{R_K}$ is contained in a single element of $\\Dc^*_K$. 
Therefore, we only need to verify that $\\mathcal{D}^*_K$ is upper semi-continuous, which then indicates that $\\Dc_K^*$ is a monotone decomposition hence is refined by $\\Dc_K$.\n\nIn other words, we need to verify that the equivalence $\\sim$ determined by the partition $\\Dc_K^*$ is closed as a subset of $K\\times K$. To this end, we consider an arbitrary sequence $\\{(x_n,y_n): n\\ge1\\}$ in $K\\times K$ with $\\lim\\limits_{n\\rightarrow\\infty}(x_n,y_n)=(x,y)$ such that $x_n\\sim y_n$ for all $n\\ge1$. There are two possibilities: either $x=y$ or $x\\ne y$. In the first case, we have $(x,y)=(x,x)$, which is surely an element of $\\sim$. In the second, the assumption that $K$ is an $E$-compactum implies that there is some $U_i$ such that $\\{x_n,y_n\\}\\subset\\partial U_i$ for infinitely many $n\\ge1$. Consequently, the subset $\\{x,y\\}$ is contained in a single element of $\\Dc_{i}$, which is a sub-collection of $\\Dc_K^*$. That is to say, we have $x\\sim y$. This ends our proof.\n\\end{proof}\n\n\nThe arguments in the above proof actually imply the following.\n\\begin{theo}\\label{equal-cd}\nGiven an $E$-compactum $K$. If $\\partial U_i\\cap\\partial U_j=\\emptyset$ for $i\\ne j$ then every atom of $K$ is either an atom of some $\\partial U_i$ or a singleton $\\{x\\}$ with $x\\in K\\setminus\\left(\\bigcup_i\\partial U_i\\right)$.\n\\end{theo}\n\n\nNow we go on to consider partially unshielded compacta and obtain Theorem \\ref{equality-case}(iii).\n\n\\begin{deff}\\label{part-unshielded}\nLet $L\\subset\\hat{\\mathbb{C}}$ be an unshielded compactum, which equals the boundary $\\partial U$ of one of its complementary components $U$. A compactum $K$ formed by the union of $L$ with some complementary components of $L$ other than $U$ is called a {\\bf partially unshielded compactum} determined by $L$.\n\\end{deff}\n\nIn order to find typical examples, one may set $L$ to be a polynomial Julia set, $U$ the unbounded Fatou component, and $K$ the union of $L$ and some bounded Fatou components. The next proposition discusses the relation between the atoms of any given compactum $L\\subset\\hat{\\mathbb{C}}$ and those of a compactum $K$, where $K$ is the union of $L$ with some (not all) components of $\\hat{\\bbC}\\setminus L$.\n\n\n\\begin{proposition}\\label{useful}\nGiven a planar compactum $L\\subset \\hat{\\mathbb{C}}$ and a family $\\{U_\\alpha:\\ \\alpha\\in I\\}$ of components of \\ $\\hat{\\mathbb{C}}\\!\\setminus\\!L$. If $\\displaystyle K=L\\cup\\left(\\bigcup_{\\alpha\\in I}U_\\alpha\\right)$ then $\\overline{R_K}$ is a subset of $\\{(z,z):\\ z\\in K\\!\\setminus\\!L\\}\\cup\\overline{R_L}$. Consequently, every atom of $K$ is either a singleton lying in $K\\setminus L$ or a sub-continuum of an atom of $L$.\n\\end{proposition}\n\n\\begin{proof}\nSince $\\displaystyle K=L\\cup\\left(\\bigcup_{\\alpha\\in I}U_\\alpha\\right)$, every point $z\\in (K\\setminus L)$ lies in some $U_\\alpha$. Thus the atom of $K$ containing $z$ is exactly the singleton $\\{z\\}$. From this it readily follows that every atom $d^*$ of $K$ that intersects $L$ is a sub-continuum of $L$. So we have $\\overline{R_K}=\\{(z,z):\\ z\\in K\\!\\setminus\\!L\\}\\cup\\left(\\overline{R_K}\\cap L^2\\right)$. 
Therefore, we only need to show that $\\left(\\overline{R_K}\\cap L^2\\right)\\subset \\overline{R_L}$.\n\nIndeed, if on the contrary there were some $(x,y)\\in \\overline{R_K}\\cap L^2$ not belonging to $\\overline{R_L}$ then, for any small enough number $r>0$, the difference $L\\setminus (B_r(x)\\cup B_r(y))$ would have finitely many components intersecting $\\partial B_r(x)$ and $\\partial B_r(y)$ both. Let $A_r=\\hat{\\mathbb{C}}\\setminus (B_r(x)\\cup B_r(y))$. By Lemma \\ref{bridging_lemma}, $A_r\\setminus L$ has at most finitely many components that intersect $\\partial B_r(x)$ and $\\partial B_r(y)$ both. As we assume that $\\displaystyle K=L\\cup\\left(\\bigcup_{\\alpha\\in I}U_\\alpha\\right)$, every component of $A_r\\setminus K$ is also a component of $A_r\\setminus L$. Thus $A_r\\setminus K$ has at most finitely many components that intersect both $\\partial B_r(x)$ and $\\partial B_r(y)$. In other words, we have $(x,y)\\notin\\overline{R_K}$. This is absurd since we assume that $(x,y)\\in \\overline{R_K}$.\n\\end{proof}\n\n\nThere are other basic facts concerning an unshielded compactum $L$ and a partially unshielded compactum $K$ determined by $L$. Firstly, every interior point of $K$ lies in some complementary component of $L$; secondly, every boundary point of $K$ lies in $L$. Thus we always have $\\partial K=L$; moreover, every atom of $K$ that intersects the interior $K^o$ is necessarily a singleton. Therefore, in order to determine the atoms of $K$ we only need to consider those of $L$.\n\n\\begin{theo}\\label{part-2}\nLet $L\\subset\\hat{\\mathbb{C}}$ be an unshielded compactum. Let $K$ be a partially unshielded compactum determined by $L$. Then every atom of $L$ is also an atom of $K$ and we have $\\Dc_K=\\Dc_L\\cup\\{\\{x\\}: x\\in K\\setminus L\\}$. Consequently, $\\tilde{\\lambda}_K(x)=\\lambda_K(x)$ for all $x\\in\\hat{\\mathbb{C}}$.\n\\end{theo}\n\n\\begin{proof}\nAs $L$ is unshielded, there is a component $U$ of $\\hat{\\mathbb{C}}\\setminus L$ with $L=\\partial U$. By Lemma \\ref{key-lemma}, every atom of $L$ lies in a single atom of $K$. By Lemma \\ref{useful}, every atom of $K$ intersecting $L$ is contained in a single atom of $L$. Thus every atom of $L$ is also an atom of $K$. As any singleton $\\{x\\}$ with $x\\in K^o= K\\setminus L$ is an atom of $K$, we have $\\Dc_K=\\Dc_L\\cup\\{\\{x\\}: x\\in K\\setminus L\\}$. This indicates the Lambda Equality $\\tilde{\\lambda}_K=\\lambda_K$.\n\\end{proof}\n\n\\begin{rem}\\label{why_partially_unshielded}\nTheorem \\ref{part-2} gives a result that is slightly stronger than Theorem \\ref{equality-case}(iii). In particular, for any full compactum $K$ we have $\\mathcal{D}_{\\partial K}\\subset\\mathcal{D}_K$. Therefore, a full compactum $K$ is a Peano compactum if and only if the boundary $\\partial K$ is. In particular, if $G\\subset\\hat{\\mathbb{C}}$ is a simply connected bounded domain then $\\partial G$ is locally connected if and only if $K=\\hat{\\mathbb{C}}\\setminus G$ is locally connected, or equivalently when $K$ is a Peano continuum. This basic fact has been well known, see for instance the items (iii) and (iv) of \\cite[p.20, Theorem 2.1]{Pom92}. Now, it is extended to a quantitative version in Theorem \\ref{part-2}. 
This extension applies to an arbitrary full continuum, that may or may not be locally connected.\n\\end{rem}\n\n\n\\section{The Gluing Lemma for Lambda Functions}\\label{glue}\n\nWe will follow the philosophy of the well known gluing lemma for continuous maps.\nSee for instance \\cite[p.69, Theorem (4.6)]{Armstrong} for the simple case and \\cite[p.70, Theorem (4.8)]{Armstrong} for the general setting. Our aim is to prove Theorem \\ref{gluing_lemma}, which deals with the lambda functions $\\lambda_K,\\lambda_L$ for planar compacta $K\\supset L$ such that $A=\\overline{K\\setminus L}$ intersects $L$ at finitely many points $x_1,\\ldots,x_n$. In Theorem \\ref{baby_M}, we further extend Theorem \\ref{gluing_lemma} to the case that $A\\cap L$ is a countably infinite set, under additional assumptions. Notice that when $A\\cap L$ is an infinite set Theorem \\ref{gluing_lemma} may not hold. See Examples \\ref{cantor_combs}, \\ref{brooms} and \\ref{cup-fs}.\n\n\\begin{proof}[{\\bf Proof for Theorem \\ref{gluing_lemma}}]\nFor $1\\le i\\le n$, denote by $d_i^1$ the order-$1$ atom of $A$ that contains $x_i$. Similarly, denote by $e_i^1$ the atom of $L$ that contains $x_i$.\nLet $K_1=A_1\\cup L_1$, where $\\displaystyle A_1=\\bigcup_id_i^1$ and $\\displaystyle L_1=\\bigcup_ie_i^1$. Then $K_1$ has finitely many components. Let $\\Ec_1$ be the collection of these components.\n\nBy \\cite[Theorem 1.3]{LYY-2020}, a point $y\\ne x$ lies in $\\overline{R_K}[x]$\nif and only if $K\\setminus(B_r(x)\\cup B_r(y))$ has infinitely many components that intersect both $\\partial B_r(x)$ and $\\partial B_r(y)$ for small enough $r>0$. Because of this, we can directly check that $\\overline{R_K}=\\overline{R_A}\\cup\\overline{R_L}$. Here $\\overline{R_K},\\overline{R_A},\\overline{R_L}$ are respectively the closed Sch\\\"onflies relations on $K,A$ and $L$. Let \\[\n\\Dc_1=\\left(\\Dc_L\\setminus\\left\\{e_1^1,\\ldots,e_n^1\\right\\}\\right)\\cup\n\\left(\\Dc_A\\setminus\\left\\{d_1^1,\\ldots,d_n^1\\right\\}\\right)\\cup\n\\Ec_1.\\]\nThen $\\Dc_1$ is an upper semi-continuous decomposition of $K$ into subcontinua. Since $\\Dc_1$ does not split the fibers of $\\overline{R_K}$, it is refined by $\\Dc_K$, the core decomposition of $K$ with Peano quotient. On the other hand, the equality $\\overline{R_K}=\\overline{R_A}\\cup\\overline{R_L}$ indicates that $\\mathcal{D}_K$ does not split the fibers of $\\overline{R_A}$ and those of $\\overline{R_L}$. Thus each atom of $A$ lies in an atom of $K$; similarly, every atom of $L$ lies in an atom of $K$. Consequently, we have.\n\\begin{lemma}\\label{gluing_atoms_a}\n$\\Dc_K=\\Dc_1$. Thus $d\\cap A$ (or $d\\cap L$) either is empty or consists of finitely many atoms of $A$ (resp. $L$) for any atom $d$ of $K$.\n\\end{lemma}\n\n\nLemma \\ref{gluing_atoms_a} ensures that $\\displaystyle \\lambda_K(x)=\\max\\left\\{\\lambda_A(x),\\lambda_L(x)\\right\\}$ for all $x\\notin K_1$. That is to say, the equation $\\lambda_K(x)=\\max\\left\\{\\lambda_{\\overline{K\\setminus L}}(x),\\lambda_L(x)\\right\\}$ in Theorem \\ref{gluing_lemma} holds for all points $x\\notin K_1$, so that we only need to consider the points $x\\in K_1$.\n\nNotice that we have set $A=\\overline{K\\setminus L}$, $\\displaystyle A_1=\\bigcup_id_i^1$, and $\\displaystyle L_1=\\bigcup_ie_i^1$. 
We will need to verify that $\\displaystyle \\lambda_{K_1}(x)=\\max\\left\\{\\lambda_{A_1}(x),\\lambda_{L_1}(x)\\right\\}$ for all $x\\in K_1$, since for $x\\in A_1$ and $y\\in L_1$ we have\n\\[\n\\begin{array}{ccc}\n\\lambda_A(x)=\\left\\{\\begin{array}{ll} 0& \\{x\\}\\in\\Dc_A\\\\\n1+\\lambda_{A_1}(x)& otherwise\\end{array}\\right. &\\text{and}&\n\\lambda_L(y)=\\left\\{\\begin{array}{ll} 0& \\{y\\}\\in\\Dc_L\\\\\n1+\\lambda_{L_1}(y)& otherwise.\\end{array}\\right.\n\\end{array}\\]\nTo do that, we recall that $\\mathcal{D}_{A_1}$ consists of all the order-$2$ atoms of $A$ lying in $A_1$. Similarly, $\\mathcal{D}_{L_1}$ consists of all the order-$2$ atoms of $L$ lying in $L_1$. Thus we may repeat the above procedure again, replacing $A$ and $L$ by $A_1$ and $L_1$. This then gives rise to two compacta $A_2\\subset A_1$ and $L_2\\subset L_1$ such that\n$\\displaystyle \\lambda_{K_1}(x)=\\max\\left\\{\\lambda_{A_1}(x),\\lambda_{L_1}(x)\\right\\}$ for all $x\\notin K_2=A_2\\cup L_2$.\n\nWe may carry out the same procedure indefinitely and obtain two decreasing sequences of compacta: (1) $A_1\\supset A_2\\supset\\cdots$ and (2) $L_1\\supset L_2\\supset\\cdots$. Setting $K_p=A_p\\cup L_p$ for $p\\ge1$, we have the following equations:\n\\begin{equation}\n\\displaystyle \\lambda_{K_{p}}(x)=\\left\\{\\begin{array}{ll}0& \\{x\\}\\in\\Dc_{K_p}\\\\\n1+\\lambda_{K_{p+1}}(x)& otherwise\\end{array}\\right. \\quad (x\\in K_{p+1}).\n\\end{equation}\n\\begin{equation}\n\\displaystyle \\lambda_{A_{p}}(x)=\\left\\{\\begin{array}{ll}0& \\{x\\}\\in\\Dc_{A_p}\\\\\n1+\\lambda_{A_{p+1}}(x)& otherwsie\\end{array}\\right. \\quad(x\\in A_{p+1})\n\\end{equation}\n\\begin{equation}\n\\displaystyle \\lambda_{L_{p}}(x)=\\left\\{\\begin{array}{ll}0& \\{x\\}\\in\\Dc_{L_p}\\\\\n1+\\lambda_{L_{p+1}}(x)& otherwise\\end{array}\\right.\\quad (x\\in L_{p+1})\n\\end{equation}\n\\begin{equation}\n\\lambda_{K_p}(x)=\\max\\left\\{\\lambda_{A_p}(x),\\lambda_{L_p}(x)\\right\\}\\quad (x\\notin K_{p+1})\n\\end{equation}\nThere are two possibilities. In the first, we have $K_p=K_{p+1}$ for some $p\\ge1$, indicating that $K_m=K_p$ for all $m\\ge p$.\nIn such a case, we have $\\lambda_{K_p}(x)=\\max\\left\\{\\lambda_{A_p}(x),\\lambda_{L_p}(x)\\right\\}$ and hence $\\lambda_{K}(x)=\\max\\left\\{\\lambda_{A}(x),\\lambda_{L}(x)\\right\\}$.\nIn the second, we have $K_p\\ne K_{p+1}$ for all $p\\ge1$. This implies that $\\lambda_K(x)=\\max\\left\\{\\lambda_A(x),\\lambda_L(x)\\right\\}=\\infty$ holds for all $x\\in K_\\infty=\\bigcap_pK_p$ and that $\\lambda_{K}(x)=p+\\lambda_{K_p}(x)=p+\\max\\left\\{\\lambda_{A_p}(x),\\lambda_{L_p}(x)\\right\\}=\n\\max\\left\\{\\lambda_{A}(x),\\lambda_{L}(x)\\right\\}$ holds for $p\\ge1$ and $x\\notin K_p\\setminus K_{p+1}$. Here $\\displaystyle K_1\\setminus K_{\\infty}=\\bigcup_{p=1}^\\infty(K_p\\setminus K_{p+1})$. This completes our proof.\n\\end{proof}\n\nLemma \\ref{gluing_atoms_a} and Theorem \\ref{gluing_lemma} are useful, when we study $\\lambda_K$ for certain choices of planar compacta $K$. For instance, we may choose $K$ to be the Julia set of a renormalizable polynomial $f(z)=z^2+c$ and $L$ the small Julia set. For the sake of convenience, we further assume that the only critical point of $f$ is recurrent and that there is no irrationally neutral cycle. 
Then it is possible to choose a decreasing sequence of Jordan domains $\\{U_n\\}$, with $\\overline{U_{n+1}}\\subset U_n$ and $\\displaystyle L=\\bigcap_{n=1}^\\infty U_n$, such that every $K\\cap \\partial U_n$ consists of finitely many points that are periodic or pre-periodic. See for instance \\cite[section 2.2]{Jiang00}.\nFor any $n\\ge1$ we can use \\cite[Theorems 2 and 3]{Kiwi04} to infer that every singleton $\\{x\\}$ with $x\\in (K\\cap\\partial U_n)$ is an atom of $K$, hence also an atom of $L_n=K\\cap\\overline{U_n}$. Combining these with Lemma \\ref{gluing_atoms_a} and Theorem \\ref{gluing_lemma}, we further see that $\\mathcal{D}_{L_{n+1}}\\subset\\mathcal{D}_{L_n}\\subset\\mathcal{D}_K$ for all $n\\ge1$. However, we are not sure whether $\\mathcal{D}_L\\subset\\mathcal{D}_K$. Similarly, it is not clear whether $\\lambda_K(x)=\\lambda_L(x)$ holds for $x\\in L$. Therefore, we propose the following.\n\n\\begin{que}\\label{small_julia}\nLet $K=L_0\\supset L_1\\supset L_2\\supset\\cdots$ be a decreasing sequence of planar compacta such that $L_n\\cap\\overline{K\\setminus L_n}$ is a finite set for all $n\\ge1$.\nSet $L=\\bigcap_{n\\ge1}L_n$. Find conditions so that (1) $\\mathcal{D}_L\\subset\\mathcal{D}_K$ or (2) $\\lambda_K(x)=\\lambda_L(x)$ holds for all $x\\in L$.\n\\end{que}\n\nAs a response to Question \\ref{small_julia}, we turn to study the lambda functions of two planar compacta $K\\supset L$\nsuch that $K\\setminus L$ is contained in the union of at most countably many continua $P_n\\subset K$ that satisfy the following properties:\n\\begin{itemize}\n\\item[(P1)] every $P_n$ intersects $L$ at a single point $x_n$;\n\\item[(P2)] for any constant $C>0$ at most finitely many $P_n$ are of diameter greater than $C$; and\n\\item[(P3)] $P_n\\cap P_m=\\emptyset$ for $n\\ne m$.\n\\end{itemize}\nHere $P_n\\setminus\\{x_n\\}$ might be disconnected for some of the integers $n\\ge1$. Notice that there is a special situation when $K$ is the Mandelbrot set $\\M$: in order that the above properties (P1)-(P3) be satisfied, we may choose $L$ to be the closure of a hyperbolic component or a {\\bf Baby Mandelbrot set}.\n\nAs an extension of Theorem \\ref{gluing_lemma}, we will obtain the following.\n\n\\begin{theo}\\label{baby_M}\nGiven two planar compacta $K\\supset L$ that satisfy (P1) to (P3), we have \\begin{equation}\\label{baby}\n\\lambda_K(x)=\\left\\{\\begin{array}{lll} \\lambda_{P_n}(x)&x\\in P_n\\setminus\\{x_n\\}\\ {for\\ some}\\ n&({case}\\ 1)\\\\ \\lambda_L(x)& x\\in L\\setminus\\{x_n: n\\in\\mathbb{N}\\}&({case}\\ 2)\\\\ \\max\\left\\{\\lambda_L(x_n),\\lambda_{P_n}(x_n)\\right\\}& x=x_n\\ {for\\ some}\\ n&({case}\\ 3)\\\\\n0 &{otherwise}&({case}\\ 4)\\end{array}\\right. \\end{equation}\n\\end{theo}\n\nWe just need to consider the above equation for points $x\\in K$.\nTo do that, we may\ndefine an equivalence relation $\\sim$ on $\\mathbb{N}$ so that $m\\sim n$ if and only if $x_m,x_n$ are contained in the same atom of $L$. Let $\\{I_j:j\\}$ be the equivalence classes of $\\sim$.\n\nDenote by $d_n$ the atom of $P_n$ that contains $x_n$, and by $e_j$ the atom of $L$ that contains all $x_n$ with $n\\in I_j$. 
Moreover, set $e'_j=e_j\\cup\\left(\\bigcup_{n\\in I_j}d_n\\right)$ for every $j$.\n\nThen $\\{e'_j: j\\}$ is a collection of at most countably many continua that are pairwise disjoint.\nNow we consider the following upper semi-continuous decomposition of $K$:\n\\begin{equation}\\label{baby_M_partition}\n\\Dc_1=\\left(\\Dc_L\\setminus\\left\\{e_j: j\\right\\}\\right)\\cup\n\\left(\\bigcup_n\\Dc_{P_n}\\setminus\\{d_n\\}\\right)\\cup\n\\{e'_j: j\\}.\n\\end{equation}\nAll its elements are sub-continua of $K$ that do not split the fibers of $\\overline{R_K}$. So it is refined by $\\Dc_K$. On the other hand, by \\cite[Theorem 1.3]{LYY-2020}, we also have $\\overline{R_K}=\\overline{R_L}\\cup\\left(\\bigcup_{n\\ge1}\\overline{R_{P_n}}\\right)$. Thus $\\mathcal{D}_K$ does not split the fibers of $\\overline{R_L}$ and those of $\\overline{R_{P_n}}$ for all $n\\ge1$. Therefore, every atom of $L$ lies in an atom of $K$, and so does every atom of $P_n$. Consequently, we have.\n\\begin{lemma}\\label{gluing_atoms_b}\n$\\Dc_K=\\Dc_1$. Therefore, for any atom $d$ of $K$ the intersection $d\\cap L$ (or $d\\cap P_n$ for any $n\\ge1$) is either empty or a single atom of $L$ (respectively, $P_n$).\n\\end{lemma}\n\n\n\\begin{proof}[{\\bf Proof for Theorem \\ref{baby_M}}]\nClearly, $\\lambda_K(x)=\\lambda_L(x)$ for all $x$ in $L\\setminus\\left(\\bigcup_je'_j\\right)$. Similarly, $\\lambda_K(x)=\\lambda_{P_n}(x)$ for all $x$ in $P_n\\setminus d_n$. Moreover, $\\lambda_K(x_n)=\\lambda_L(x_n)=\\lambda_{P_n}(x_n)=0$ for all $x_n$ such that $\\{x_n\\}$ is an atom of $L$ and also an atom of $P_n$. Therefore, we just need to consider those $e_j'$ that are non-degenerate.\n\nLet $\\mathcal{N}_1$ be the collection of all the integers $j$ such that $e_j'$ is not a singleton.\nThen $e_{n_1}$ is a subcontinuum of $e_{n_1}'$ for any $n_1\\in\\mathcal{N}_1$, such that $e_{n_1}'\\setminus e_{n_1}$ is covered by all those $d_n$ with $n\\in I_{n_1}$. Thus the properties (P1)-(P3) are satisfied if $K$ and $L$ are replaced by $e_{n_1}'$ and $e_{n_1}$, respectively. It is then routine to check the following:\n\\begin{equation}\\label{inductive_1}\n\\lambda_K(x)=\\left\\{\\begin{array}{ll}\n\\lambda_{P_n}(x)&x\\in P_n\\setminus\\left(\\bigcup_{n_1}e_{n_1}'\\right)\\\\\n\\lambda_{L}(x)& x\\in L\\setminus\\left(\\bigcup_{n_1}e_{n_1}'\\right)\\\\\n1+\\lambda_{e_{n_1}'}(x)& x\\in e_{n_1}'\\ \\text{for\\ some}\\ n_1\\in\\mathcal{N}_1\n\\end{array}\\right.\n\\end{equation}\n\nEvery atom of $e_{n_1}'$ falls into exactly one of the following possibilities: (1) an order-two atom of $P_n$ for some $n\\in I_{n_1}$ that is disjoint from $\\{x_n\\}$, (2) an order-two atom of $L$ that is disjoint from $\\{x_n: n\\in I_{n_1}\\}$, (3) a singleton $\\{x_n\\}$ for some $n\\in I_{n_1}$, which is an order-two atom of $L$ and is also an order-two atom of $P_n$, (4) a non-singleton continuum that consists of the order-two atom of $L$ containing some $x_n$, with $n\\in I_{n_1}$, and the order-two atom of $P_n$ containing $x_n$.\n\nAn atom falling in the first three possibilities is called an atom of {\\bf pure type}.\nWe can check that $e_{n_1}'$ has at most countably many atoms that are not of pure type. Such an atom is generally denoted as $e'_{n_1n_2}$. Similarly, we can define continua $e'_{n_1n_2\\ldots n_p}$ for $p>2$. 
On the one hand, such a continuum is an order-$p$ atom of $K$; on the other, it is also an atom of $e'_{n_1n_2\\ldots n_{p-1}}$ that is not of pure type.\nBy the same arguments that were used in obtaining Equation (\\ref{inductive_1}), we can infer the following equation:\n\\begin{equation}\\label{inductive_2}\n\\lambda_K(x)=\\left\\{\\begin{array}{ll}\n\\lambda_{P_n}(x)&x\\in P_n\\setminus\\left(\\bigcup_{n_1,n_2}e_{n_1n_2}'\\right)\\\\\n\\lambda_{L}(x)& x\\in L\\setminus\\left(\\bigcup_{n_1,n_2}e_{n_1n_2}'\\right) \\\\\n2+\\lambda_{e_{n_1n_2}'}(x)& x\\in e_{n_1n_2}'\\ \\text{for\\ some}\\ n_1,n_2\n\\end{array}\\right.\n\\end{equation}\nThis equation may be extended to order-$p$ atoms $e'_{n_1n_2\\ldots n_p}$ with $p\\ge2$ in the following way:\n\\begin{equation}\\label{inductive_p}\n\\lambda_K(x)=\\left\\{\\begin{array}{ll}\n\\lambda_{P_n}(x)&x\\in P_n\\setminus\\left(\\bigcup_{n_1,\\ldots,n_p}e_{n_1\\cdots n_p}'\\right)\\\\\n\\lambda_{L}(x)& x\\in L\\setminus\\left(\\bigcup_{n_1,\\ldots,n_p}e_{n_1\\cdots n_p}'\\right) \\\\\np+\\lambda_{e_{n_1\\cdots n_p}'}(x)& x\\in e_{n_1\\cdots n_p}'\\ \\text{for\\ some}\\ n_1,\\ldots,n_p\n\\end{array}\\right.\n\\end{equation}\nNotice that Theorem \\ref{baby_M} holds for every $x\\in K$ lying in an atom of $e'_{n_1n_2\\ldots n_p}$ that is of pure type. Such a point $x$ does not lie in $e'_{n_1n_2\\ldots n_pn_{p+1}}$ for any choice of $n_{p+1}$ and hence falls into exactly one of the following possibilities:\n\\begin{itemize}\n\\item that $x\\in P_n\\setminus\\{x_n\\}$ for some $n\\ge1$ and $\\lambda_K(x)=\\lambda_{P_n}(x)\\ge p$.\n\\item that $x\\in L\\setminus\\{x_n: n\\ge1\\}$ and $\\lambda_K(x)=\\lambda_L(x)\\ge p$.\n\\item that $x=x_n$ for some $n\\ge1$ and $\\lambda_K(x)=\\max\\left\\{\\lambda_L(x),\\lambda_{P_n}(x)\\right\\}=p$.\n\\end{itemize}\nEvery other point $x\\in K$ necessarily lies in $e'_{n_1n_2\\ldots n_p}$ for infinitely many $p$. The continua $e'_{n_1n_2\\ldots n_p}$ decrease to a continuum $M_x$. There are three possibilities: either $x\\in L\\setminus\\{x_n: n\\ge1\\}$, or $x\\in P_n\\setminus\\{x_n\\}$ for some $n\\ge1$, or $x=x_n$ for some $n\\ge1$. In the first case, we have $\\lambda_K(x)=\\lambda_L(x)=\\infty$; in the second, we have $\\lambda_K(x)=\\lambda_{P_n}(x)=\\infty$; in the third, we have $\\lambda_K(x)=\\max\\left\\{\\lambda_L(x),\\lambda_{P_n}(x)\\right\\}=\\infty$. This completes our proof.\n\\end{proof}\n\n\n\\begin{rem}\nLet $K=L_0\\supset L_1\\supset L_2\\supset\\cdots$ be given as in Question \\ref{small_julia}. Also let $L=\\bigcap_{n\\ge1}L_n$. Then $L_n\\cap\\overline{K\\setminus L_n}$ is a finite set for all $n\\ge1$.\nAssume in addition that (1) every singleton $\\{x_n\\}$ is an atom of $K$ and (2) $K$ and $L$ satisfy the requirements in Theorem \\ref{baby_M}. By Lemma \\ref{gluing_atoms_a}, we see that $\\mathcal{D}_{L_{n+1}}\\subset\\mathcal{D}_{L_n}\\subset\\mathcal{D}_K$ for all $n\\ge1$; thus from Theorem \\ref{gluing_lemma} we can infer that $\\lambda_K(x)=\\lambda_{L_n}(x)$ for all $x\\in L_n$. Moreover, by Lemma \\ref{gluing_atoms_b} we have $\\mathcal{D}_L\\subset\\mathcal{D}_K$. 
Therefore, by Theorem \\ref{baby_M} we further infer that $\\lambda_K(x)=\\lambda_L(x)$ holds for all $x\\in L$.\n\\end{rem}\n\n\n\\section{Examples}\\label{examples}\n\nWe shall construct examples.\nThe beginning two provide choices of compacta $A,B\\subset\\hat{\\bbC}$ such that $\\lambda_{A\\cup B}(x)\\ne\\max\\left\\{\\lambda_A(x),\\lambda_B(x)\\right\\}$ for some $x$, although $\\lambda_A(x)=\\lambda_B(x)$ for all $x\\in A\\cap B$. In the first, $A\\cap B$ is an uncountable set; in the second, $A\\cap B$ is a countably infinite set. Therefore, the conditions of Theorem \\ref{gluing_lemma} are not satisfied.\n\n\n\\begin{exam}\\label{cantor_combs}\nLet $A\n\\{t+s{\\bf i}: t\\in\\Kc, 0\\le s\\le1\\}$, where $\\Kc$ is the Cantor ternary set. Let\n$B\n\\{t+(1+s){\\bf i}: 0\\le t\\le 1, s\\in\\Kc\\}$. Let $A_1=A\\cup B$ and $B_1=(A+1+{\\bf i})\\cup(B+1-{\\bf i})$.\nSee Figure \\ref{not_glued} for a simplified depiction of $A, B, A_1, B_1$.\n\\begin{figure}[ht]\n\\vspace{-0.05cm}\n\\begin{tabular}{ccccc}\n\\begin{tikzpicture}[x=1cm,y=1cm,scale=0.5]\n\\foreach \\i in {0,...,3}\n{\n \\draw[gray,very thick] (3*\\i\/27,0) -- (3*\\i\/27,3);\n \\draw[gray,very thick] (6\/9+3*\\i\/27,0) -- (6\/9+3*\\i\/27,3);\n \\draw[gray,very thick] (2+3*\\i\/27,0) -- (2+3*\\i\/27,3);\n \\draw[gray,very thick] (2+6\/9+3*\\i\/27,0) -- (2+6\/9+3*\\i\/27,3);\n}\n\\end{tikzpicture} \\hspace{0.25cm}\n&\n\\hspace{0.25cm}\n\\begin{tikzpicture}[x=1cm,y=1cm,scale=0.5]\n\\foreach \\i in {0,...,3}\n{\n \\draw[gray,very thick] (0,3*\\i\/27+3) -- (3,3*\\i\/27+3);\n \\draw[gray,very thick] (0,6\/9+3*\\i\/27+3) -- (3,6\/9+3*\\i\/27+3);\n \\draw[gray,very thick] (0,2+3*\\i\/27+3) -- (3,2+3*\\i\/27+3);\n \\draw[gray,very thick] (0,2+6\/9+3*\\i\/27+3) -- (3,2+6\/9+3*\\i\/27+3);\n}\n\\draw[gray,dashed] (0,0) -- (3,0)-- (3,3)-- (0,3)-- (0,0);\n\\end{tikzpicture} \\hspace{0.25cm}\n\n&\n\\hspace{0.25cm}\n\\begin{tikzpicture}[x=1cm,y=1cm,scale=0.5]\n\n\\foreach \\i in {0,...,3}\n{\n \\draw[gray,very thick] (3*\\i\/27,0) -- (3*\\i\/27,3);\n \\draw[gray,very thick] (6\/9+3*\\i\/27,0) -- (6\/9+3*\\i\/27,3);\n \\draw[gray,very thick] (2+3*\\i\/27,0) -- (2+3*\\i\/27,3);\n \\draw[gray,very thick] (2+6\/9+3*\\i\/27,0) -- (2+6\/9+3*\\i\/27,3);\n}\n\\foreach \\i in {0,...,3}\n{\n \\draw[gray,very thick] (0,3*\\i\/27+3) -- (3,3*\\i\/27+3);\n \\draw[gray,very thick] (0,6\/9+3*\\i\/27+3) -- (3,6\/9+3*\\i\/27+3);\n \\draw[gray,very thick] (0,2+3*\\i\/27+3) -- (3,2+3*\\i\/27+3);\n \\draw[gray,very thick] (0,2+6\/9+3*\\i\/27+3) -- (3,2+6\/9+3*\\i\/27+3);\n}\n\\end{tikzpicture} \\hspace{0.25cm}\n\n&\n\\hspace{0.25cm}\n\\begin{tikzpicture}[x=1cm,y=1cm,scale=0.5]\n\\foreach \\i in {0,...,3}\n{\n \\draw[gray,very thick] (3+3*\\i\/27,3) -- (3+3*\\i\/27,6);\n \\draw[gray,very thick] (3+6\/9+3*\\i\/27,3) -- (3+6\/9+3*\\i\/27,6);\n \\draw[gray,very thick] (3+2+3*\\i\/27,3) -- (3+2+3*\\i\/27,6);\n \\draw[gray,very thick] (3+2+6\/9+3*\\i\/27,3) -- (3+2+6\/9+3*\\i\/27,6);\n}\n\\foreach \\i in {0,...,3}\n{\n \\draw[gray,very thick] (3,3*\\i\/27) -- (3+3,3*\\i\/27);\n \\draw[gray,very thick] (3,6\/9+3*\\i\/27) -- (3+3,6\/9+3*\\i\/27);\n \\draw[gray,very thick] (3,2+3*\\i\/27) -- (3+3,2+3*\\i\/27);\n \\draw[gray,very thick] (3,2+6\/9+3*\\i\/27) -- (3+3,2+6\/9+3*\\i\/27);\n}\n\\end{tikzpicture} \\hspace{0.25cm}\n\n&\n\\hspace{0.25cm}\n\\begin{tikzpicture}[x=1cm,y=1cm,scale=0.5]\n\n\n\\foreach \\i in {0,...,3}\n{\n \\draw[gray,very thick] (3*\\i\/27,0) -- (3*\\i\/27,3);\n \\draw[gray,very thick] (6\/9+3*\\i\/27,0) -- (6\/9+3*\\i\/27,3);\n \\draw[gray,very thick] 
(2+3*\\i\/27,0) -- (2+3*\\i\/27,3);\n \\draw[gray,very thick] (2+6\/9+3*\\i\/27,0) -- (2+6\/9+3*\\i\/27,3);\n}\n\\foreach \\i in {0,...,3}\n{\n \\draw[gray,very thick] (0,3*\\i\/27+3) -- (3,3*\\i\/27+3);\n \\draw[gray,very thick] (0,6\/9+3*\\i\/27+3) -- (3,6\/9+3*\\i\/27+3);\n \\draw[gray,very thick] (0,2+3*\\i\/27+3) -- (3,2+3*\\i\/27+3);\n \\draw[gray,very thick] (0,2+6\/9+3*\\i\/27+3) -- (3,2+6\/9+3*\\i\/27+3);\n}\n\\foreach \\i in {0,...,3}\n{\n \\draw[gray,very thick] (3+3*\\i\/27,3) -- (3+3*\\i\/27,6);\n \\draw[gray,very thick] (3+6\/9+3*\\i\/27,3) -- (3+6\/9+3*\\i\/27,6);\n \\draw[gray,very thick] (3+2+3*\\i\/27,3) -- (3+2+3*\\i\/27,6);\n \\draw[gray,very thick] (3+2+6\/9+3*\\i\/27,3) -- (3+2+6\/9+3*\\i\/27,6);\n}\n\\foreach \\i in {0,...,3}\n{\n \\draw[gray,very thick] (3,3*\\i\/27) -- (3+3,3*\\i\/27);\n \\draw[gray,very thick] (3,6\/9+3*\\i\/27) -- (3+3,6\/9+3*\\i\/27);\n \\draw[gray,very thick] (3,2+3*\\i\/27) -- (3+3,2+3*\\i\/27);\n \\draw[gray,very thick] (3,2+6\/9+3*\\i\/27) -- (3+3,2+6\/9+3*\\i\/27);\n}\n\\end{tikzpicture}\n\\\\ $A$& $B$& $A_1=A\\cup B$& $B_1$& $A_1\\cup B_1$\\end{tabular}\n\\caption{The two compacta $A, B$ and their union.}\\label{not_glued}\n\\end{figure}\nThen\n$\\lambda_A(x)=1$ for all $x\\in A$ and vanishes otherwise; similarly, $\\lambda_B(x)=1$ for all $x\\in B$ and vanishes otherwise.\nHowever, both $A\\cap B$ and $A_1\\cap B_1$ are uncountable, thus the conditions in Theorem \\ref{gluing_lemma} are not satisfied. Moreover, we have\n\\[\\lambda_{A_1}(x)=\\lambda_{A\\cup B}(x)=\\left\\{\\begin{array}{ll}2&x\\in A\\\\ 1& B\\setminus A\\\\ 0& {otherwise}\\end{array}\\right.\\quad {and}\\quad\n\\lambda_{A_1\\cup B_1}(x)=\\left\\{\\begin{array}{ll}\\infty & x\\in (A_1\\cup B_1)\\\\ 0& {otherwise}.\\end{array}\\right.\\]\n\n\\end{exam}\n\n\n\\begin{exam}\\label{brooms}\nSet $A=\\bigcup\\limits_{n\\ge0}A_n$. Here $A_0=\\{s{\\bf i}: 0\\le s\\le1\\}$ and $A_1$ is the continuum that consists of the line $\\displaystyle\\left\\{1+t{\\bf i}: 0\\le t\\le1\\right\\}$ and all those lines connecting $1+{\\bf i}$ to $\\displaystyle\\frac{k}{k+1}$ for $k\\ge1$; moreover, for $n\\ge2$, $A_n=\\displaystyle \\left\\{2^{-n+1}t+s{\\bf i}: t+s{\\bf i}\\in A_1\\right\\}$. 
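For instance, unwinding this definition for $n=2$ (a remark we include only as a reading aid): the map $t+s{\\bf i}\\mapsto 2^{-1}t+s{\\bf i}$ halves the horizontal coordinate and keeps the vertical one, so that\n\\[\nA_2=\\left\\{\\frac{1}{2}+t{\\bf i}: 0\\le t\\le1\\right\\}\\cup\\bigcup_{k\\ge1}\\left[\\frac{1}{2}+{\\bf i},\\ \\frac{k}{2(k+1)}\\right],\n\\]\nwhere $[z,w]$ denotes the straight segment joining $z$ and $w$; the continua $A_n$ with $n\\ge3$ admit analogous descriptions.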
See Figure \\ref{broom_comb}.\n\\begin{figure}[ht]\n\\begin{center}\n\\begin{tabular}{cc}\n\\begin{tikzpicture}[x=1.618cm,y=1cm,scale=1]\n\\foreach \\i in {0,...,3}\n{\n \\draw[gray,very thick] (0,3+3*\\i\/27) -- (3,3+3*\\i\/27);\n \\draw[gray,very thick] (0,3+6\/9+3*\\i\/27) -- (3,3+6\/9+3*\\i\/27);\n \\draw[gray,very thick] (0,3+2+3*\\i\/27) -- (3,3+2+3*\\i\/27);\n \\draw[gray,very thick] (0,3+2+6\/9+3*\\i\/27) -- (3,3+2+6\/9+3*\\i\/27);\n}\n\n\\draw[gray,very thick] (0,0) --(0,3);\n\\draw[gray,very thick] (3,0) --(3,3);\n\\draw[gray,very thick] (3\/2,0) --(3\/2,3);\n\\draw[gray,very thick] (3\/4,0) --(3\/4,3);\n\\draw[gray,very thick] (3\/8,0) --(3\/8,3);\n\\draw[gray,very thick] (3\/16,0) --(3\/16,3);\n\\draw[gray,very thick] (3\/32,0) --(3\/32,3);\n\\draw[gray,very thick] (3\/64,0) --(3\/64,3);\n\\draw[gray,very thick] (3\/128,0) --(3\/128,3);\n\\draw[gray,very thick] (3\/256,0) --(3\/256,3);\n\\draw[gray,very thick] (3\/512,0) --(3\/512,3);\n\n\\foreach \\i in {2,...,7}\n{\n \\draw[gray,very thick] (3,3) -- (3-3\/\\i,0);\n}\n\\node at (2.8,0.15) {$\\ldots$};\n\n\n\\foreach \\i in {2,...,7}\n{\n \\draw[gray,very thick] (3\/2,3) -- (3\/2-1.5\/\\i,0);\n}\n\n\n \\node at (9\/16,1.75) {$\\vdots$};\n \\node at (9\/16,1.25) {$\\vdots$};\n\\node at (-0.1,0.2){$0$}; \\node at (3.1,0.2){$1$};\n\\node at (0,0){$\\cdot$}; \\node at (3,0){$\\cdot$};\n\\node at (3,3){$\\cdot$}; \\node at (3,6){$\\cdot$};\n\\node at (3.35,3.0){$1\\!+\\!{\\bf i}$}; \\node at (3.35,6.0){$1\\!+\\!2{\\bf i}$};\n\\end{tikzpicture}\n&\n\\begin{tikzpicture}[x=1.618cm,y=1cm,scale=1]\n\\foreach \\i in {0,...,3}\n{\n \\draw[gray,dashed] (0,3+3*\\i\/27) -- (3,3+3*\\i\/27);\n \\draw[gray,dashed] (0,3+6\/9+3*\\i\/27) -- (3,3+6\/9+3*\\i\/27);\n \\draw[gray,dashed] (0,3+2+3*\\i\/27) -- (3,3+2+3*\\i\/27);\n \\draw[gray,dashed] (0,3+2+6\/9+3*\\i\/27) -- (3,3+2+6\/9+3*\\i\/27);\n}\n\n\\draw[gray,very thick] (0,3) --(3,3);\n\n\\draw[gray,very thick] (0,0) --(0,3);\n\\draw[gray,very thick] (3,0) --(3,3);\n\\draw[gray,very thick] (3\/2,0) --(3\/2,3);\n\\draw[gray,very thick] (3\/4,0) --(3\/4,3);\n\\draw[gray,very thick] (3\/8,0) --(3\/8,3);\n\\draw[gray,very thick] (3\/16,0) --(3\/16,3);\n\\draw[gray,very thick] (3\/32,0) --(3\/32,3);\n\\draw[gray,very thick] (3\/64,0) --(3\/64,3);\n\\draw[gray,very thick] (3\/128,0) --(3\/128,3);\n\\draw[gray,very thick] (3\/256,0) --(3\/256,3);\n\\draw[gray,very thick] (3\/512,0) --(3\/512,3);\n\n\\node at (-0.1,0.2){$0$}; \\node at (3.1,0.2){$1$}; \\node at (3.35,3.0){$1\\!+\\!{\\bf i}$};\n\\end{tikzpicture}\\\\ $A\\cup B$ & $d$\n\\end{tabular}\n\\end{center}\n\\vskip -0.75cm\n\\caption{The compactum $A\\cup B$ and the atom $d$ of $A\\cup B$.}\\label{broom_comb}\n\\end{figure}\nFurther setting $B$ as in Example \\ref{cantor_combs}, we have $A\\cap B=\\{ {\\bf i}\\}\\cup\\left\\{2^{-n}+{\\bf i}: n\\ge0\\right\\}$.\nIf $x\\in A\\cap B$ then $\\lambda_A(x)=\\lambda_B(x)=1$. Let\n$\\displaystyle L_1=\\left\\{t+s{\\bf i}: 0\\le s\\le 1, t=0\\ {or}\\ 2^{-n}\\ {for\\ some}\\ n\\ge0\\right\\}.$ Then $d=L_1 \\cup\\ \\{t+{\\bf i}: 0\\le t\\le1\\}$ is an atom of $A\\cup B$ and is not locally connected at any $x\\in A_0$. 
Moreover, we have\n\\[\\lambda_{A}(x)=\\left\\{\\begin{array}{ll}1&x\\in L_1 \\\\ 0& {otherwise}\\end{array}\\right.\\quad {and}\\quad\n\\lambda_{A\\cup B}(x)=\\left\\{\\begin{array}{ll}2& x\\in A_0\\\\ 1&x\\in (B\\cup d)\\setminus A_0\\\\ 0& {otherwise}.\\end{array}\\right.\\]\n\\end{exam}\n\n\n\nThe next two examples are about $E$-continua $K\\subset\\hat{\\mathbb{C}}$ such that the lambda equality given in (i) or (ii) of Theorem \\ref{equality-case} does not hold.\n\n\n\\begin{exam}\\label{E-compactum}\nLet $X$ denote the square $[1,2]\\times[0,{\\mathbf i}]\\subset\\hat{\\mathbb{C}}$. Let $Y$ be an embedding of $[0,\\infty)$ whose closure $\\overline{Y}$ equals the union of $Y$ with $\\partial X$. See the left part of Figure \\ref{negative} for a simplified representation of \\ $\\overline{Y}$, which is depicted as \\tb{blue}.\n\\begin{figure}[ht]\n\\vskip -0.25cm\n\\begin{tabular}{ll}\n\\begin{tikzpicture}[x=0.2cm,y=0.2cm,scale=0.55]\n\\draw[gray,thick] (64,0) -- (64,32) -- (0,32) -- (0,0) -- (64,0);\n\n\\foreach \\j in {0,1}\n{ \\draw[gray, thick] (32,\\j*16) -- (32,16+\\j*16) -- (16,16+\\j*16) -- (16,\\j*16) --(32,\\j*16);\n}\n\\foreach \\j in {0,...,3}\n{\n \\draw[gray, thick] (16,\\j*8) -- (16,8+\\j*8) -- (8,8+\\j*8) -- (8,\\j*8) --(16,\\j*8);\n}\n\\foreach \\j in {0,...,7}\n{\n \\draw[gray, thick] (8,\\j*4) -- (8,4+\\j*4) -- (4,4+\\j*4) -- (4,\\j*4) --(8,\\j*4);\n}\n\\foreach \\j in {0,...,15}\n{\n \\draw[gray, thick] (4,\\j*2) -- (4,2+\\j*2) -- (2,2+\\j*2) -- (2,\\j*2) --(4,\\j*2);\n}\n\\foreach \\j in {0,...,31}\n{\n \\draw[gray, thick] (2,\\j*1) -- (2,1+\\j*1) -- (1,1+\\j*1) -- (1,\\j*1) --(2,\\j*1);\n}\n\\foreach \\j in {0,...,63}\n{\n \\draw[gray, thick] (1,\\j*1\/2) -- (1,1\/2+\\j*1\/2) -- (1\/2,1\/2+\\j*1\/2) -- (1\/2,\\j*1\/2) --(1,\\j*1\/2);\n}\n\n\\foreach \\j in {0,...,127}\n{\n \\draw[gray, thick] (1\/2,\\j*1\/4) -- (1\/2,1\/4+\\j*1\/4) -- (1\/4,1\/4+\\j*1\/4) -- (1\/4,\\j*1\/4) --(1\/2,\\j*1\/4);\n}\n\\foreach \\j in {0,...,255}\n{\n \\draw[gray, thick] (1\/4,\\j*1\/8) -- (1\/4,1\/8+\\j*1\/8) -- (1\/8,1\/8+\\j*1\/8) -- (1\/8,\\j*1\/8) --(1\/4,\\j*1\/8);\n}\n\n\n\\foreach \\k in {0,1}\n{\n\\draw[gray, dashed, thick] (16+1,16*\\k+1) -- (32-1,16*\\k+1) -- (32-1, 16*\\k+16-1) -- (16+1\/2,16*\\k+16-1);\n}\n\n\n\\foreach \\k in {0,1,2,3}\n{\n\\draw[gray, dashed, thick] (8+1\/2,8*\\k+1\/2) -- (16-1\/2,8*\\k+1\/2) -- (16-1\/2, 8*\\k+8-1\/2) -- (8+1\/2,8*\\k+8-1\/2);\n}\n\n\\foreach \\i in {0,1}\n{\\foreach \\j in {2,...,6}\n{\n \\draw[gray, thick] (16+\\j,\\j+16*\\i) -- (32-\\j,\\j+16*\\i) -- (32-\\j,16-\\j+16*\\i) -- (16+\\j-1,16-\\j+16*\\i)--(16+\\j-1,\\j-1+16*\\i);\n}\n}\n\n\\foreach \\i in {0,...,3}\n{\\foreach \\j in {2,...,6}\n{\n \\draw[gray] (8+\\j\/2,\\j\/2+8*\\i) -- (16-\\j\/2,\\j\/2+8*\\i) -- (16-\\j\/2,8-\\j\/2+8*\\i) --\n (8+\\j\/2-1\/2,8-\\j\/2+8*\\i)--(8+\\j\/2-1\/2,\\j\/2-1\/2+8*\\i);\n}\n}\n\\foreach \\j in {0,...,7}\n{\n \\fill(5.5,2+\\j*4)circle(1pt);\n \\fill(6,2+\\j*4)circle(1pt);\n \\fill(6.5,2+\\j*4)circle(1pt);\n}\n\\node at (0.25,-1.5){$0$};\n\\node at (32,-1.5){$1$};\n\\node at (64,-1.5){$2$};\n\\node at (0.25,33.5){${\\mathbf i}$};\n\n\\draw[blue,thick] (64,0) -- (64,32) -- (32,32) -- (32,0) -- (64,0);\n\n\\foreach \\j in {2,...,6}\n{ \\draw[blue, thick] (32+2*\\j,2*\\j) -- (64-2*\\j,2*\\j) -- (64-2*\\j,32-2*\\j) -- (32+2*\\j-2,32-2*\\j) --(32+2*\\j-2,2*\\j-2);\n}\n\n\\draw[blue, dashed, thick] (32+2,2) -- (64-2,2) -- (64-2,32-2) -- (32+1,32-2);\n\n\\end{tikzpicture}\n&\n\\begin{tikzpicture}[x=0.2cm,y=0.2cm,scale=0.55]\n\\draw[gray,thick] (64,0) -- 
(64,32) -- (0,32) -- (0,0) -- (64,0);\n\\foreach \\j in {0,1}\n{ \\draw[gray, thick] (32,\\j*16) -- (32,16+\\j*16) -- (16,16+\\j*16) -- (16,\\j*16) --(32,\\j*16);\n}\n\\foreach \\j in {0,...,3}\n{\n \\draw[gray, thick] (16,\\j*8) -- (16,8+\\j*8) -- (8,8+\\j*8) -- (8,\\j*8) --(16,\\j*8);\n}\n\\foreach \\j in {0,...,7}\n{\n \\draw[gray, thick] (8,\\j*4) -- (8,4+\\j*4) -- (4,4+\\j*4) -- (4,\\j*4) --(8,\\j*4);\n}\n\\foreach \\j in {0,...,15}\n{\n \\draw[gray, thick] (4,\\j*2) -- (4,2+\\j*2) -- (2,2+\\j*2) -- (2,\\j*2) --(4,\\j*2);\n}\n\\foreach \\j in {0,...,31}\n{\n \\draw[gray, thick] (2,\\j*1) -- (2,1+\\j*1) -- (1,1+\\j*1) -- (1,\\j*1) --(2,\\j*1);\n}\n\\foreach \\j in {0,...,63}\n{\n \\draw[gray, thick] (1,\\j*1\/2) -- (1,1\/2+\\j*1\/2) -- (1\/2,1\/2+\\j*1\/2) -- (1\/2,\\j*1\/2) --(1,\\j*1\/2);\n}\n\n\\foreach \\j in {0,...,127}\n{\n \\draw[gray, thick] (1\/2,\\j*1\/4) -- (1\/2,1\/4+\\j*1\/4) -- (1\/4,1\/4+\\j*1\/4) -- (1\/4,\\j*1\/4) --(1\/2,\\j*1\/4);\n}\n\\foreach \\j in {0,...,255}\n{\n \\draw[gray, thick] (1\/4,\\j*1\/8) -- (1\/4,1\/8+\\j*1\/8) -- (1\/8,1\/8+\\j*1\/8) -- (1\/8,\\j*1\/8) --(1\/4,\\j*1\/8);\n}\n\n\n\\node at (0.25,-1.5){$0$};\n\\node at (32,-1.5){$1$};\n\\node at (64,-1.5){$2$};\n\\node at (0.25,33.5){${\\mathbf i}$};\n\\end{tikzpicture}\n\\end{tabular}\n\\vskip -0.25cm\n\\caption{(left): the $E$-continuum $K$; (right): the only non-degenerate atom.}\\label{negative}\n\\vskip -0.25cm\n\\end{figure}\nLet $f_1(z)=\\frac{z}{2}$ and $f_2(z)=\\frac{z+{\\mathbf i}}{2}$. Let $K_0=\\overline{Y}$. For all $n\\ge1$, let $K_n=f_1\\left(K_{n-1}\\right)\\cup f_2\\left(K_{n-1}\\right)$. Then $K_0,K_1,\\ldots$ is an infinite sequence of continua converging to the segment $[0,{\\mathbf i}]$ under Hausdorff distance. Clearly,\n\\[K=\\left(\\bigcup\\limits_{n\\ge0}K_n\\right)\\cup\\{s{\\mathbf i}: 0\\le s\\le1\\}\\]\nis an $E$-continuum. See left part of Figure \\ref{negative}. Let $L_0=\\partial X$. For all $n\\ge1$, let $L_n=f_1\\left(L_{n-1}\\right)\\cup f_2\\left(L_{n-1}\\right)$. Then $L_0,L_1,\\ldots$ is an infinite sequence of continua converging to the segment $[0,{\\mathbf i}]$ under Hausdorff distance. Similarly, we see that\n\\[L=\\left(\\bigcup\\limits_{n\\ge0}L_n\\right)\\cup\\{s{\\mathbf i}: 0\\le s\\le1\\}\\]\nis also an $E$-continuum. See right part of Figure \\ref{negative}. Moreover, the continuum $K$ has exactly one atom of order $1$ that is not a singleton. This atom equals $L$. Thus we have\n\\[\n\\lambda_K(x)=\\left\\{\\begin{array}{ll}1&x\\in L\\\\ 0& {otherwise}\\end{array}\\right.\\ \\text{and}\\\n\\tilde{\\lambda}_K(x)=\\left\\{\\begin{array}{ll}\n1& x\\in L\\setminus[0,{\\mathbf i}]\\\\ 0& {otherwise}.\\end{array}\\right.\n\\]\n\\end{exam}\n\n\n\\begin{exam}\\label{finite-comp}\nLet $\\mathcal{C}$ denote Cantor's ternary set. 
Let $U_1\\subset\\hat{\\mathbb{C}}$ be the domain, not containing $\\infty$, whose boundary consists of $[0,1]\\times{\\bf i}\\mathcal{C}=\\{t+s{\\bf i}: 0\\le t\\le 1, s\\in\\mathcal{C}\\}$ and $\\partial\\left([0,\\frac43]\\times[0,{\\bf i}]\\right)$.\n\\begin{figure}[ht]\n\\vspace{-0.05cm}\n\\begin{center}\n\\begin{tabular}{ccc}\n\\begin{tikzpicture}[x=1cm,y=1cm,scale=0.618]\n\\draw[gray,very thick] (0,0) -- (3,0)-- (3,4)-- (0,4)-- (0,0);\n\\draw[gray,very thick] (3,0) -- (-1,0)-- (-1,-3)-- (3,-3)-- (3,0);\n\\draw[gray,very thick] (3,0) -- (3,-4)-- (6,-4)-- (6,0)-- (3,0);\n\\draw[gray,very thick] (3,0) -- (7,0)-- (7,3)-- (3,3)-- (3,0);\n\\foreach \\i in {0,...,3}\n{\n \\draw[gray,very thick] (3*\\i\/27,0) -- (3*\\i\/27,3);\n \\draw[gray,very thick] (6\/9+3*\\i\/27,0) -- (6\/9+3*\\i\/27,3);\n \\draw[gray,very thick] (2+3*\\i\/27,0) -- (2+3*\\i\/27,3);\n \\draw[gray,very thick] (2+6\/9+3*\\i\/27,0) -- (2+6\/9+3*\\i\/27,3);\n}\n\n\\foreach \\i in {0,...,3}\n{\n \\draw[gray,very thick] (0,3*\\i\/27-3) -- (3,3*\\i\/27-3);\n \\draw[gray,very thick] (0,6\/9+3*\\i\/27-3) -- (3,6\/9+3*\\i\/27-3);\n \\draw[gray,very thick] (0,2+3*\\i\/27-3) -- (3,2+3*\\i\/27-3);\n \\draw[gray,very thick] (0,2+6\/9+3*\\i\/27-3) -- (3,2+6\/9+3*\\i\/27-3);\n}\n\n\\foreach \\i in {0,...,3}\n{\n \\draw[gray,very thick] (3+3*\\i\/27,0) -- (3+3*\\i\/27,-3);\n \\draw[gray,very thick] (3+6\/9+3*\\i\/27,0) -- (3+6\/9+3*\\i\/27,-3);\n \\draw[gray,very thick] (3+2+3*\\i\/27,0) -- (3+2+3*\\i\/27,-3);\n \\draw[gray,very thick] (3+2+6\/9+3*\\i\/27,0) -- (3+2+6\/9+3*\\i\/27,-3);\n}\n\n\\foreach \\i in {0,...,3}\n{\n \\draw[gray,very thick] (3,3*\\i\/27) -- (3+3,3*\\i\/27);\n \\draw[gray,very thick] (3,6\/9+3*\\i\/27) -- (3+3,6\/9+3*\\i\/27);\n \\draw[gray,very thick] (3,2+3*\\i\/27) -- (3+3,2+3*\\i\/27);\n \\draw[gray,very thick] (3,2+6\/9+3*\\i\/27) -- (3+3,2+6\/9+3*\\i\/27);\n}\n\n\\draw(1.5,2.75) node[above]{$U_2$};\n\\draw(0.25,-1.5) node[left]{$U_3$};\n\\draw(4.5,-2.75) node[below]{$U_4$};\n\\draw(5.75,1.5) node[right]{$U_1$};\n\\draw(4.5,3.5) node[right]{$U_5$};\n\\end{tikzpicture}\n&&\\hskip 1.0cm\n\\begin{tikzpicture}[x=1cm,y=1cm,scale=0.618]\n\\draw[gray,dashed] (0,0) -- (3,0)-- (3,4)-- (0,4)-- (0,0);\n\\draw[gray,dashed] (3,0) -- (-1,0)-- (-1,-3)-- (3,-3)-- (3,0);\n\\draw[gray,dashed] (3,0) -- (3,-4)-- (6,-4)-- (6,0)-- (3,0);\n\\draw[gray,dashed] (3,0) -- (7,0)-- (7,3)-- (3,3)-- (3,0);\n\n\\foreach \\i in {0,...,3}\n{\n \\draw[gray,very thick] (3*\\i\/27,0) -- (3*\\i\/27,3);\n \\draw[gray,very thick] (6\/9+3*\\i\/27,0) -- (6\/9+3*\\i\/27,3);\n \\draw[gray,very thick] (2+3*\\i\/27,0) -- (2+3*\\i\/27,3);\n \\draw[gray,very thick] (2+6\/9+3*\\i\/27,0) -- (2+6\/9+3*\\i\/27,3);\n}\n\n\\foreach \\i in {0,...,3}\n{\n \\draw[gray,very thick] (0,3*\\i\/27-3) -- (3,3*\\i\/27-3);\n \\draw[gray,very thick] (0,6\/9+3*\\i\/27-3) -- (3,6\/9+3*\\i\/27-3);\n \\draw[gray,very thick] (0,2+3*\\i\/27-3) -- (3,2+3*\\i\/27-3);\n \\draw[gray,very thick] (0,2+6\/9+3*\\i\/27-3) -- (3,2+6\/9+3*\\i\/27-3);\n}\n\n\\foreach \\i in {0,...,3}\n{\n \\draw[gray,very thick] (3+3*\\i\/27,0) -- (3+3*\\i\/27,-3);\n \\draw[gray,very thick] (3+6\/9+3*\\i\/27,0) -- (3+6\/9+3*\\i\/27,-3);\n \\draw[gray,very thick] (3+2+3*\\i\/27,0) -- (3+2+3*\\i\/27,-3);\n \\draw[gray,very thick] (3+2+6\/9+3*\\i\/27,0) -- (3+2+6\/9+3*\\i\/27,-3);\n}\n\n\\foreach \\i in {0,...,3}\n{\n \\draw[gray,very thick] (3,3*\\i\/27) -- (3+3,3*\\i\/27);\n \\draw[gray,very thick] (3,6\/9+3*\\i\/27) -- (3+3,6\/9+3*\\i\/27);\n \\draw[gray,very thick] (3,2+3*\\i\/27) -- (3+3,2+3*\\i\/27);\n 
\\draw[gray,very thick] (3,2+6\/9+3*\\i\/27) -- (3+3,2+6\/9+3*\\i\/27);\n}\n\\end{tikzpicture}\n\\end{tabular}\n\\end{center}\n\\vskip -0.5cm\n\\caption{The continuum $K$ and the only non-degenerate atom $d\\in\\Dc_K$.}\\label{finite-comp-pic}\n\\end{figure}\nFor $2\\le j\\le 4$ let $U_j=f^{j-1}(U_1)$, where $f(z)={\\bf i}z$. See the left part of Figure \\ref{finite-comp-pic}. Then $K=\\bigcup_j\\partial U_j$ is a continuum whose complementary components are $U_1,\\ldots, U_4, U_5$. Here $U_5$ is the one containing $\\infty$. Moreover, the only non-degenerate atom of $K$ is\n$\\displaystyle d=\\bigcup_{j=0}^3 f^j([0,1]\\times{\\bf i}\\mathcal{C})$.\nSince the continuum $d$ has a single atom, which is itself, we have\n$\\lambda_K(x)=\\left\\{\\begin{array}{ll}\\infty& x\\in d\\\\ 0&{otherwise}.\\end{array}\\right.$\nOn the other hand, by the construction of $U_1,\\ldots, U_4$ and $U_5$, we also have\n$\\tilde{\\lambda}_K(x)=\\left\\{\\begin{array}{ll}1& x\\in d\\\\ 0&{otherwise}.\\end{array}\\right.$\nConsequently, we have $\\lambda_K(x)-\\tilde{\\lambda}_K(x)=\\left\\{\\begin{array}{ll} \\infty& x\\in d\\\\ 0& {otherwise}.\\end{array}\\right.$\n\n\\end{exam}\n\n\n\n\nWe then continue to examine planar continua $K$, trying to describe possible relations between $\\lambda_K$ and $\\lambda_{\\partial K}$. In the first example, $K$ is a Peano continuum but its boundary $\\partial K$ is a continuum that is not locally connected. Therefore, $\\lambda_{\\partial K}(x)\\ge\\lambda_K(x)$ for all $x\\in\\hat{\\mathbb{C}}$ and $\\lambda_{\\partial K}(x)>\\lambda_K(x)$ for uncountably many $x$.\n\n\\begin{exam}\\label{bd_larger}\nConsider a spiral made of broken lines, lying in the open square $W=\\{t+s{\\mathbf i}: 0< t,s<1\\}\\subset\\hat{\\mathbb{C}}$, which converges to $\\partial W$. 
See the left of Figure \\ref{spiral}\n\\begin{figure}[ht]\n\\vspace{-0.05cm}\n\\center{\\begin{tabular}{ccc}\n\\begin{tikzpicture}[x=0.2cm,y=0.2cm,scale=0.6]\n\n\\draw[blue, thick] (-16,0) -- (16,0) -- (16,32) -- (-16,32) --(-16,0);\n\\foreach \\j in {1,2}\n\\foreach \\k in {1,2}\n{\n\\draw[blue, ultra thick] (-4,-4*\\j+16) -- (4*\\j,-4*\\j+16) -- (4*\\j,4*\\j+16) --(-4*\\j-4,4*\\j+16)-- (-4*\\j-4,-4*\\j+12) --(0,-4*\\j+12);\n }\n\\draw[blue,ultra thick, dashed] (0,4) -- (13.0,4);\n\n\\draw[blue,ultra thick, dashed] (12.75,4.0) -- (12.75,27.5);\n\n\\end{tikzpicture}\n\\hspace{0.25cm}\n&\n\\hspace{0.25cm}\n\\begin{tikzpicture}[x=0.2cm,y=0.2cm,scale=0.6]\n\n\n\n\n\n\n\n\\fill[gray!80] (-16,0) -- (16,0) -- (16,32) -- (-16,32) --(-16,0);\n\\draw[blue, thick] (-16,0) -- (16,0) -- (16,32) -- (-16,32) --(-16,0);\n\\foreach \\j in {1,2}\n\\foreach \\k in {1,2}\n{\n\\draw[gray!16, ultra thick] (-4,-4*\\j+16) -- (4*\\j,-4*\\j+16) -- (4*\\j,4*\\j+16) --(-4*\\j-4,4*\\j+16)-- (-4*\\j-4,-4*\\j+12) --(0,-4*\\j+12);\n\\draw[gray!16, ultra thick] (-4,-4*\\j+16-\\k*0.45) -- (4*\\j+\\k*0.45,-4*\\j+16-\\k*0.45) -- (4*\\j+\\k*0.45,4*\\j+16+\\k*0.45) --(-4*\\j-4-\\k*0.45,4*\\j+16+\\k*0.45)-- (-4*\\j-4-\\k*0.45,-4*\\j+12-\\k*0.45) --(0,-4*\\j+12-\\k*0.45);\n }\n\n\\draw[gray!16,ultra thick, dashed] (0,4) -- (13.2,4); \\draw[gray!16,ultra thick, dashed] (0,3.55) -- (13.2,3.55); \\draw[gray!16,ultra thick, dashed] (0,3.1) -- (13.2,3.1);\n\n\n\\end{tikzpicture}\n\\hspace{0.25cm}\n&\n\\hspace{0.25cm}\n\\begin{tikzpicture}[x=0.2cm,y=0.2cm,scale=0.6]\n\\fill[gray!80] (-16,0) -- (16,0) -- (16,32) -- (-16,32) --(-16,0);\n\\draw[blue, thick] (-16,0) -- (16,0) -- (16,32) -- (-16,32) --(-16,0);\n\\foreach \\j in {1,2}\n\\foreach \\k in {1,2}\n{\n\\draw[gray!16, ultra thick] (-4,-4*\\j+16) -- (4*\\j,-4*\\j+16) -- (4*\\j,4*\\j+16) --(-4*\\j-4,4*\\j+16)-- (-4*\\j-4,-4*\\j+12) --(0,-4*\\j+12);\n\\draw[gray!16, ultra thick] (-4,-4*\\j+16-\\k*0.45) -- (4*\\j+\\k*0.45,-4*\\j+16-\\k*0.45) -- (4*\\j+\\k*0.45,4*\\j+16+\\k*0.45) --(-4*\\j-4-\\k*0.45,4*\\j+16+\\k*0.45)-- (-4*\\j-4-\\k*0.45,-4*\\j+12-\\k*0.45) --(0,-4*\\j+12-\\k*0.45);\n }\n\\draw[gray!16,ultra thick, dashed] (0,3.8) -- (13.2,3.8);\n\\draw[gray!16,ultra thick, dashed] (0,3.55) -- (13.2,3.55);\n\\draw[gray!16,ultra thick, dashed] (0,3.1) -- (13.2,3.1);\n\n\n\\foreach \\j in {1,2}\n\\foreach \\k in {2}\n{\n\\draw[blue] (-4,-4*\\j+16+0.2) -- (4*\\j-0.2,-4*\\j+16+0.2) -- (4*\\j-0.2,4*\\j+16-0.2) --(-4*\\j-4+0.2,4*\\j+16-0.2)-- (-4*\\j-4+0.2,-4*\\j+12+0.2) --(0,-4*\\j+12+0.2);\n\\draw[blue] (-4,-4*\\j+16-\\k*0.45-0.3) -- (4*\\j+\\k*0.45+0.3,-4*\\j+16-\\k*0.45-0.3) -- (4*\\j+\\k*0.45+0.3,4*\\j+16+\\k*0.45+0.3) --(-4*\\j-4-\\k*0.45-0.3,4*\\j+16+\\k*0.45+0.3)-- (-4*\\j-4-\\k*0.45-0.3,-4*\\j+12-\\k*0.45-0.3) --(0,-4*\\j+12-\\k*0.45-0.3);\n }\n\n\n\\foreach \\i in {0,1,2}\n{\n\\draw[blue] (-4+3.9*\\i,12.25) -- (-4+3.9*\\i,12-1.25);\n\\draw[blue] (3.8,13.8+3*\\i) -- (5.2,13.8+3*\\i);\n\\draw[blue] (-7.8,13.8+3*\\i) -- (-9.2,13.8+3*\\i);\n\\draw[blue] (-7.8,12-1.9*\\i) -- (-9.2,12-1.9*\\i);\n}\n\n\n\\foreach \\i in {0,...,3}\n{\n\\draw[blue] (-5.6+3.6*\\i,19.8) -- (-5.6+3.6*\\i,21.2);\n\\draw[blue] (-7.8+1.2*\\i,8.2) -- (-7.8+1.2*\\i,6.8);\n\\draw[blue] (-3+1.2*\\i,8.2) -- (-3+1.2*\\i,6.8);\n\\draw[blue] (1.8+1.0*\\i,8.2) -- (1.8+1.0*\\i,6.8);\n\\draw[blue] (4.8+1.0*\\i,8.2) -- (4.8+1.0*\\i,6.8);\n\\draw[blue] (7.8,7.8+1.0*\\i) -- (9.2,7.8+1.0*\\i);\n\\draw[blue] (7.8,11.8+1.0*\\i) -- (9.2,11.8+1.0*\\i);\n\\draw[blue] (7.8,15.8+0.8*\\i) -- (9.2,15.8+0.8*\\i);\n\\draw[blue] 
(7.8,19.2+0.8*\\i) -- (9.2,19.2+0.8*\\i);\n\\draw[blue] (7.8,22.4+0.7*\\i) -- (9.2,22.4+0.7*\\i);\n\\draw[blue] (7.8-0.8*\\i,23.8) -- (7.8-0.8*\\i,25.2);\n\\draw[blue] (4.6-0.8*\\i,23.8) -- (4.6-0.8*\\i,25.2);\n\\draw[blue] (1.4-0.8*\\i,23.8) -- (1.4-0.8*\\i,25.2);\n\\draw[blue] (-1.8-0.8*\\i,23.8) -- (-1.8-0.8*\\i,25.2);\n\\draw[blue] (-5-0.8*\\i,23.8) -- (-5-0.8*\\i,25.2);\n\\fill[blue!62](-9-\\i,24.5) circle(1.8pt);\n}\n\n\\end{tikzpicture}\n\n\\end{tabular}\n}\n\\caption{A Peano continuum $K$ whose boundary is not locally connected.}\\label{spiral}\n\\end{figure}\nWe may thicken the spiral to an embedding $h: [0,\\infty)\\times[0,1]\\rightarrow W$ of the unbounded strip $U=[0,\\infty)\\times[0,1]$. Such an embedding may be chosen appropriately, so that $h(\\partial U)$ consists of countably many segments. Then we obtain a continuum\n$K_0=\\overline{W}\\setminus h(U)$. See the middle part of Figure \\ref{spiral}. Clearly, the continuum $K_0$ is not locally connected on $\\partial W$ and is locally connected at all other points. Now, divide the thickened spiral $h(U)$ into smaller and smaller quadrilaterals, which are depicted in the right part of Figure \\ref{spiral} as small rectangles. Let $K$ be the union of $K_0$ with the newly added bars used in the above-mentioned division. Then $K$ is locally connected everywhere, hence is a Peano continuum. However, its boundary $\\partial K$ is not locally connected on $\\partial W$ and is locally connected elsewhere. Therefore, we have\n\\[\\lambda_K(x)\\equiv 0\\quad{and}\\quad\n\\lambda_{\\partial K}(x)=\\left\\{\\begin{array}{ll}1 & x\\in\\partial W\\\\\n0& x\\notin\\partial W.\\end{array}\\right.\\]\n\\end{exam}\n\n\n\n\\begin{exam}\\label{bd_smaller}\nLet the continuum $K$ be defined as in Example \\ref{bd_larger}. Let $f_j(z)=\\frac{z}{2}+\\frac{j-1}{2}{\\mathbf i}$ for $j=1,2$. For any compact set $X\\subset\\hat{\\bbC}$, put $\\Phi(X)=f_1(X)\\cup f_2(X)$. We will use the continuum $K$ and the mapping $\\Phi$ to construct a continuum $L$. 
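Before carrying out the construction, we record an elementary observation, obtained by simply unwinding the definition of $\\Phi$ and used below only as a reading aid for Figure \\ref{spiral-double}: for every compact set $X\\subset\\hat{\\bbC}$ and every $n\\ge1$,\n\\[\n\\Phi^{n}(X)=\\bigcup_{(j_1,\\ldots,j_n)\\in\\{1,2\\}^n}\\left(f_{j_1}\\circ\\cdots\\circ f_{j_n}\\right)(X),\n\\]\nand each composition $f_{j_1}\\circ\\cdots\\circ f_{j_n}$ is a similarity of ratio $2^{-n}$; thus $\\Phi^{n}(X)$ is the union of $2^{n}$ copies of $X$ scaled by the factor $2^{-n}$.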
See Figure \\ref{spiral-double}.\n\\begin{figure}[ht]\n\\vskip -0.05cm\n\\center{\n\\begin{tikzpicture}[x=0.2cm,y=0.2cm,scale=0.8]\n\\draw[gray,thick] (32,0) -- (0,0) -- (0,32) -- (32,32);\n\\draw[gray,thick] (32,0) -- (64,0) -- (64,32) -- (32,32) -- (32,0);\n\\foreach \\j in {0,1}\n{ \\draw[gray, thick] (32,16+\\j*16) -- (16,16+\\j*16) -- (16,\\j*16) --(32,\\j*16);\n}\n\\foreach \\j in {0,...,3}\n{\n \\draw[gray, thick] (16,\\j*8) -- (16,8+\\j*8) -- (8,8+\\j*8) -- (8,\\j*8) --(16,\\j*8);\n}\n\\foreach \\j in {0,...,7}\n{\n \\draw[gray, thick] (8,\\j*4) -- (8,4+\\j*4) -- (4,4+\\j*4) -- (4,\\j*4) --(8,\\j*4);\n}\n\\foreach \\j in {0,...,15}\n{\n \\draw[gray, thick] (4,\\j*2) -- (4,2+\\j*2) -- (2,2+\\j*2) -- (2,\\j*2) --(4,\\j*2);\n}\n\\foreach \\j in {0,...,31}\n{\n \\draw[gray, thick] (2,\\j*1) -- (2,1+\\j*1) -- (1,1+\\j*1) -- (1,\\j*1) --(2,\\j*1);\n}\n\\foreach \\j in {0,...,63}\n{\n \\draw[gray, thick] (1,\\j*1\/2) -- (1,1\/2+\\j*1\/2) -- (1\/2,1\/2+\\j*1\/2) -- (1\/2,\\j*1\/2) --(1,\\j*1\/2);\n}\n\n\\foreach \\j in {0,...,127}\n{\n \\draw[gray, thick] (1\/2,\\j*1\/4) -- (1\/2,1\/4+\\j*1\/4) -- (1\/4,1\/4+\\j*1\/4) -- (1\/4,\\j*1\/4) --(1\/2,\\j*1\/4);\n}\n\\foreach \\j in {0,...,255}\n{\n \\draw[gray, thick] (1\/4,\\j*1\/8) -- (1\/4,1\/8+\\j*1\/8) -- (1\/8,1\/8+\\j*1\/8) -- (1\/8,\\j*1\/8) --(1\/4,\\j*1\/8);\n}\n\n\n\n \\draw[gray, thick] (-4,-4) -- (64+4,-4) -- (64+4,32+4) --(-3,32+4)--(-3,-3);\n \\draw[gray, thick] (-3,-3) -- (64+3,-3) -- (64+3,32+2) --(-2,32+2) -- (-2,-2);\n \\draw[gray, thick] (-2,-2) -- (64+2,-2) -- (64+2,32+1) --(-1,32+1) -- (-1,-1);\n \\draw[gray, thick, dashed] (-1,-1)--(64+1,-1)--(64+1,30);\n\n\\node at (33,2) {$1$}; \\fill(32,0) circle(2pt);\n\\node at (63,2) {$2$}; \\fill(64,0) circle(2pt);\n\\node at (61,30) {$2\\!+\\!{\\mathbf i}$}; \\fill(64,32) circle(2pt);\n\n\\node at (48,16) {$\\partial K+1$};\n\\node at (24,8) {$f_1(\\partial K\\!+\\!1)$};\n\\node at (24,24) {$f_2(\\partial K\\!+\\!1)$};\n\n\\draw[gray, very thin] (12,4) -- (-12.5,8);\n\\node at (-15,10) {$f_1\\circ f_1(K\\!+\\!1)$};\n\\draw[gray, very thin] (12,12) -- (-12.5,16);\n\\node at (-15,18) {$f_1\\circ f_2(K\\!+\\!1)$};\n\n\n\\draw[gray, very thin] (12,20) -- (-12.5,24);\n\\node at (-15,26) {$f_2\\circ f_1(K\\!+\\!1)$};\n\\end{tikzpicture}\n}\\vskip -0.5cm\n\\caption{Relative locations of $\\partial K+1$, $\\Phi^{1}(\\partial K+1)$ and $\\Phi^{2}(K+1)$.}\\label{spiral-double}\n\\end{figure}\nThe continuum $L$ consists of five parts:\n\\begin{enumerate}\n\\item the segment $[0,{\\mathbf i}]=\\{s{\\bf i}: 0\\le s\\le1\\}$;\n\\item a spiral converging to the boundary of $[0,2]\\times[0,{\\mathbf i}]$;\n\\item $\\partial K+1=\\{z+1: z\\in \\partial K\\}$;\n\\item $\\Phi^{2n}(K+1)$ for all integers $n\\ge1$; and\n\\item $\\Phi^{2n-1}(\\partial K+1)$ for all integers $n\\ge1$.\n\\end{enumerate}\nOn the one hand, we can directly check that $L$ has a unique non-degenerate atom $d$, which consists of the following four parts: (1) the segment $[0,{\\mathbf i}]$; (2) the boundary of $[1,2]\\times[0,{\\mathbf i}]$, denoted as $A$; (3) $\\Phi^{2n-1}(A)$ for all integers $n\\ge1$; and (4) the boundary of $[2^{-2n},2^{-2n+1}]\\times[0,{\\mathbf i}]$ for all integers $n\\ge1$.\nOn the other hand, the boundary $\\partial L$ has a unique non-degenerate atom $d^*$, which is the union of $A$, the segment $[0,{\\mathbf i}]$, and $\\Phi^{n}(A)$ for all integers $n\\ge1$. 
See Figure \\ref{atoms} for a depiction of $d$ and $d^*$.\n\\begin{figure}[ht]\n\\vspace{-0.25cm}\n\\center{\\begin{tabular}{cc}\n\\begin{tikzpicture}[x=0.2cm,y=0.2cm,scale=0.6]\n\\draw[gray,thick] (32,0) -- (0,0) -- (0,32) -- (32,32);\n\\draw[gray,thick] (32,0) -- (64,0) -- (64,32) -- (32,32) -- (32,0);\n\n\n\\foreach \\j in {0,1}\n{ \\draw[gray, thick] (32,\\j*16) -- (16,\\j*16) -- (16,\\j*16) --(32,\\j*16);\n}\n\n\\foreach \\j in {0} \n{\n \\draw[gray, thick] (16,+\\j*8) -- (16,32+\\j*8) -- (8,32+\\j*8) -- (8,\\j*8) --(16,\\j*8);\n}\n\n\n\\foreach \\j in {0,...,7}\n{\n \\draw[gray, thick] (8,\\j*4) -- (8,\\j*4) -- (4,\\j*4) -- (4,\\j*4) --(8,\\j*4);\n}\n\n\\foreach \\j in {0} \n{\n \\draw[gray, thick] (4,\\j*2) -- (4,32+\\j*2) -- (2,32+\\j*2) -- (2,\\j*2) --(4,\\j*2);\n}\n\n\n\\foreach \\j in {0,...,31}\n{\n \\draw[gray, thick] (2,\\j*1) -- (2,\\j*1) -- (1,\\j*1) -- (1,\\j*1) --(2,\\j*1);\n}\n\n\\foreach \\j in {0} \n{\n \\draw[gray, thick] (1,\\j*1\/2) -- (1,32+\\j*1\/2) -- (1\/2,32+\\j*1\/2) -- (1\/2,\\j*1\/2) --(1,\\j*1\/2);\n}\n\n\\foreach \\j in {0,...,127}\n{\n \\draw[gray, thick] (1\/2,\\j*1\/4) -- (1\/2,1\/4+\\j*1\/4) -- (1\/4,1\/4+\\j*1\/4) -- (1\/4,\\j*1\/4) --(1\/2,\\j*1\/4);\n}\n\n\\foreach \\j in {0,...,255}\n{\n \\draw[gray, thick] (1\/4,\\j*1\/8) -- (1\/4,1\/8+\\j*1\/8) -- (1\/8,1\/8+\\j*1\/8) -- (1\/8,\\j*1\/8) --(1\/4,\\j*1\/8);\n}\n\n\\node at (34,2) {$1$}; \\fill(32,0) circle(2pt);\n\\node at (62,2) {$2$}; \\fill(64,0) circle(2pt);\n\\node at (60,30) {$2\\!+\\!{\\mathbf i}$}; \\fill(64,32) circle(2pt);\n\n\\end{tikzpicture}\n\n& \\begin{tikzpicture}[x=0.2cm,y=0.2cm,scale=0.6]\n\\draw[gray,thick] (32,0) -- (0,0) -- (0,32) -- (32,32);\n\\draw[gray,thick] (32,0) -- (64,0) -- (64,32) -- (32,32) -- (32,0);\n\\foreach \\j in {0,1}\n{ \\draw[gray, thick] (32,16+\\j*16) -- (16,16+\\j*16) -- (16,\\j*16) --(32,\\j*16);\n}\n\\foreach \\j in {0,...,3}\n{\n \\draw[gray, thick] (16,\\j*8) -- (16,8+\\j*8) -- (8,8+\\j*8) -- (8,\\j*8) --(16,\\j*8);\n}\n\\foreach \\j in {0,...,7}\n{\n \\draw[gray, thick] (8,\\j*4) -- (8,4+\\j*4) -- (4,4+\\j*4) -- (4,\\j*4) --(8,\\j*4);\n}\n\\foreach \\j in {0,...,15}\n{\n \\draw[gray, thick] (4,\\j*2) -- (4,2+\\j*2) -- (2,2+\\j*2) -- (2,\\j*2) --(4,\\j*2);\n}\n\\foreach \\j in {0,...,31}\n{\n \\draw[gray, thick] (2,\\j*1) -- (2,1+\\j*1) -- (1,1+\\j*1) -- (1,\\j*1) --(2,\\j*1);\n}\n\\foreach \\j in {0,...,63}\n{\n \\draw[gray, thick] (1,\\j*1\/2) -- (1,1\/2+\\j*1\/2) -- (1\/2,1\/2+\\j*1\/2) -- (1\/2,\\j*1\/2) --(1,\\j*1\/2);\n}\n\n\\foreach \\j in {0,...,127}\n{\n \\draw[gray, thick] (1\/2,\\j*1\/4) -- (1\/2,1\/4+\\j*1\/4) -- (1\/4,1\/4+\\j*1\/4) -- (1\/4,\\j*1\/4) --(1\/2,\\j*1\/4);\n}\n\\foreach \\j in {0,...,255}\n{\n \\draw[gray, thick] (1\/4,\\j*1\/8) -- (1\/4,1\/8+\\j*1\/8) -- (1\/8,1\/8+\\j*1\/8) -- (1\/8,\\j*1\/8) --(1\/4,\\j*1\/8);\n}\n\n\\node at (34,2) {$1$}; \\fill(32,0) circle(2pt);\n\\node at (62,2) {$2$}; \\fill(64,0) circle(2pt);\n\\node at (60,30) {$2\\!+\\!{\\mathbf i}$}; \\fill(64,32) circle(2pt);\n\n\\end{tikzpicture}\n\\end{tabular}\n}\\vskip -0.0cm\n\\caption{A depiction of $d$ and $d^*$.}\\label{atoms}\n\\end{figure}\nThe atom $d^*$ (of $\\partial L$) is a Peano continuum and contains $d$. However, the atom $d$ (of $L$) is not locally connected at points $s{\\mathbf i}$ with $0<s<1$. Therefore, {\\bf there exist planar continua $K$ with $\\lambda_K(x)>\\lambda_{\\partial K}(x)$ for at least one point $x\\in\\partial K$}.\n\n\n\nTo conclude this section, we now consider unions and intersections of specific Peano compacta in the plane. 
We will find concrete Peano continua in the plane, say $X$ and $Y$, such that $X\\cap Y$ is a continuum that is not locally connected. Notice that $X\\cup Y$ is always a Peano continuum.\n\n\\begin{exam}\\label{cap-peano}\nLet $M$ be the union of $[0,1]\\times\\{0\\}$ with the vertical segments $\\{0\\}\\times[0,1]$ and $\\{2^{-k}\\}\\times[0,1]$ for integers $k\\ge0$. Then $M$ is a continuum and is not locally connected at points on $\\{0\\}\\times(0,1]$; moreover, we have\n\\[\\lambda_M(x)=\\left\\{\\begin{array}{ll}1& x\\in \\{t{\\bf i}: 0\\le t\\le 1\\}\\\\ 0 & otherwise\\end{array}\\right.\n\\]\nWe will construct two Peano continua $X$ and $Y$ satisfying $X\\cap Y=M$. To this end, for all integers $k\\ge1$ we put\n\\[A_k=\\bigcup_{j=1}^{2^k-1}\\left[0,2^{-k+1}\\right]\\times\\left\\{j2^{-k}\\right\\}.\\]\nThen $X=M\\cup\\left(\\bigcup_kA_k\\right)$ is a Peano continuum.\n\\begin{figure}[ht]\n\\vskip -0.05cm\n\\begin{center}\n\\begin{tabular}{ccc}\n\\begin{tikzpicture}[x=1cm,y=1cm,scale=0.8]\n\\foreach \\i in {1,...,3}\n{\n\\draw[red] (0,1.296*\\i) -- (2.592,1.296*\\i);\n}\n\n\\foreach \\i in {1,...,7}\n{\n\\draw[red] (0,0.648*\\i) -- (1.296,0.648*\\i);\n}\n\n\\foreach \\i in {1,...,15}\n{\n\\draw[red] (0,0.324*\\i) -- (0.648,0.324*\\i);\n}\n\n\\foreach \\i in {1,...,31}\n{\n\\draw[red] (0,0.162*\\i) -- (0.324,0.162*\\i);\n}\n\n\\foreach \\i in {1,...,63}\n{\n\\draw[red] (0,0.081*\\i) -- (0.162,0.081*\\i);\n}\n\n\\draw[blue,thick] (0,0) -- (0,5.184);\n\\draw[blue,thick] (0,0) -- (5.184,0);\n\\draw[red] (2.592,2.592) -- (5.184,2.592);\n\n\\foreach \\i in {1,2,4,8,16,32}\n{\n\\draw[blue,thick] (5.184\/\\i,0) -- (5.184\/\\i,5.184);\n}\n\n\n\\fill[black] (5.184,0) circle (0.35ex); \\draw[purple] (5.184,0) node[right]{$1$};\n\\fill[black] (5.184,5.184) circle (0.35ex); \\draw[purple] (5.184,5.184) node[right]{$1+{\\bf i}$};\n\\fill[black] (0,0) circle (0.35ex); \\draw[purple] (0,0) node[left]{$0$};\n\\end{tikzpicture}\\hspace{0.25cm}\n&\n&\\hspace{0.25cm}\n\\begin{tikzpicture}[x=1cm,y=1cm,scale=0.8]\n\\foreach \\i in {1,...,2}\n{\n\\draw[red] (0,1.728*\\i) -- (5.184,1.728*\\i);\n}\n\n\\foreach \\i in {1,...,8}\n{\n\\draw[red] (0,0.576*\\i) -- (2.592,0.576*\\i);\n}\n\n\\foreach \\i in {1,...,26}\n{\n\\draw[red] (0,0.192*\\i) -- (1.296,0.192*\\i);\n}\n\n\\foreach \\i in {1,...,80}\n{\n\\draw[red] (0,0.064*\\i) -- (0.648,0.064*\\i);\n}\n\n\\draw[blue,thick] (0,0) -- (0,5.184);\n\\draw[blue,thick] (0,0) -- (5.184,0);\n\n\\foreach \\i in {1,2,4,8}\n{\n\\draw[blue,thick] (5.184\/\\i,0) -- (5.184\/\\i,5.184);\n}\n\n\n\\fill[black] (5.184,0) circle (0.35ex); \\draw[purple] (5.184,0) node[right]{$1$};\n\\fill[black] (5.184,5.184) circle (0.35ex); \\draw[purple] (5.184,5.184) node[right]{$1+{\\bf i}$};\n\\fill[black] (0,0) circle (0.35ex); \\draw[purple] (0,0) node[left]{$0$};\n\\end{tikzpicture}\n\\end{tabular}\n\\end{center}\n\\vskip -0.5cm\n\\caption{\\small Two Peano continua that intersect at a non-locally connected continuum.}\\label{peano-cap}\n\\end{figure}\nSee left part of Figure \\ref{peano-cap} for a rough approximate of $X$. Similarly, if for every $k\\ge1$ we set\n\\[B_k=\\bigcup_{j=1}^{3^k-1}\\left[0,2^{-k+1}\\right]\\times\\left\\{j3^{-k}\\right\\},\\]\nthen $\\displaystyle Y=M\\cup\\left(\\bigcup\\limits_kB_k\\right)$ is also a Peano continuum. See right part of Figure \\ref{peano-cap} for a rough approximate of $Y$. 
Moreover, we have $X\\cap Y=M$.\n\\end{exam}\n\nWe also find Peano compacta $X$ and $Y$ such that the union $X\\cup Y$ is not a Peano compactum, although the intersection $X\\cap Y$ is always a Peano compactum. We will use {\\em fractal squares} to construct two such compacta.\nHere a {\\bf fractal square of order $n\\ge2$} is the attractor of an iterated function system\n$\\displaystyle \\Fc_\\Dc:=\\left\\{f_d(x)=\\frac{x+d}{n}: d\\in\\Dc\\right\\}$\nfor some $\\Dc\\subset\\{0,1,\\ldots,n-1\\}^2$ which contains at least $2$ and at most $n^2-1$ elements.\nFor general theory on iterated function systems, we refer to \\cite{Hutchinson81}.\n\n\\begin{exam}\\label{cup-fs}\nLet $X$ and $Y$ be the fractal squares determined by $\\Fc_{\\Dc_X}$ and $\\Fc_{\\Dc_Y}$. Here $\\Dc_X=\\{(i,0): i=0,1,2\\}\\cup\\{(0,2)\\}$ and $\\Dc_Y=\\{(i,0): i=0,1,2\\}\\cup\\{(1,2),(2,2)\\}$. See Figure \\ref{fs-cup} for relative locations of the small squares $f_d([0,1]^2)$ with $d\\in\\Dc_X$ and $d\\in\\Dc_Y$.\n\\begin{figure}[ht]\n\\begin{center}\n\\begin{tabular}{ccc}\n\\begin{tikzpicture}[x=1cm,y=1cm,scale=0.618]\n\n\\fill[purple!30] (0,0) -- (5.184,0) -- (5.184,1.728) -- (0,1.728) -- (0,0);\n\\fill[purple!30] (0,3.456) -- (1.728,3.456) -- (1.728,5.184) -- (0,5.184) -- (0,3.456);\n\n\\foreach \\i in {0,...,3}\n{\n\\draw[gray,thick] (0,1.728*\\i) -- (5.184,1.728*\\i);\n\\draw[gray,thick] (1.728*\\i,0) -- (1.728*\\i,5.184);\n}\n\\end{tikzpicture}\n&\n\\hskip 0.5cm\n&\n\\begin{tikzpicture}[x=1cm,y=1cm,scale=0.618]\n\n\\fill[purple!30] (0,0) -- (5.184,0) -- (5.184,1.728) -- (0,1.728) -- (0,0);\n\\fill[purple!30] (1.728,3.456) -- (5.184,3.456) -- (5.184,5.184) -- (1.728,5.184) -- (1.728,3.456);\n\n\\foreach \\i in {0,...,3}\n{\n\\draw[gray,thick] (0,1.728*\\i) -- (5.184,1.728*\\i);\n\\draw[gray,thick] (1.728*\\i,0) -- (1.728*\\i,5.184);\n}\n\\end{tikzpicture}\n\\end{tabular}\n\\end{center}\n\\vskip -0.5cm\n\\caption{\\small The small squares $f_d([0,1]^2)$ for $d\\in\\Dc_X$ (left part) and for $d\\in\\Dc_Y$ (right part).}\\label{fs-cup}\n\\end{figure}\nThen $X$ and $Y$ are Peano compacta, each of which contains the interval $[0,1]$, such that $X\\cup Y$ contains all the segments $[0,1]\\times\\{\\frac{2}{3^k}\\}$ for $k\\ge1$. Moreover, $X\\cap Y$ is a Peano compactum having uncountably many components. All but one of these components are single points. The only non-degenerate component is the interval $[0,1]$. On the other hand, for all $k\\ge1$ the horizontal strip $\\bbR\\times\\left(\\frac{1}{3^k},\\frac{2}{3^k}\\right)$ is disjoint from $X\\cup Y$. This implies that $X\\cup Y$ is not a Peano compactum. Consequently, we have\n\\[\n\\begin{array}{ccc}\\lambda_{X}(x)=0 (\\forall x\\in \\hat{\\mathbb{C}}); & \\lambda_{Y}(x)= 0 (\\forall x\\in \\hat{\\mathbb{C}}); & \\lambda_{X\\cup Y}(x)=\\left\\{\\begin{array}{ll}1& x\\in [0,1]\\\\ 0 & otherwise.\\end{array}\\right.\n\\end{array}\n\\]\nNotice that $Y\\cup (X+1)$ is also a Peano compactum, although $Y\\cap (X+1)$ has uncountably many components. Thus $\\lambda_{X+1}(x)=\\lambda_{Y}(x)=\\lambda_{Y\\cup (X+1)}(x)=0$ for all $x\\in \\hat{\\mathbb{C}}$. \\end{exam}\n\n\n\n\n\n\\noindent\n{\\bf Acknowledgement}. The authors are grateful to Dr. Yi Yang at Sun Yat-sen University for valuable discussions during the period of his PhD studies.\n\n\\bibliographystyle{plain}\n