{"text":"\\section{Introduction}\nLet $\\A$ be a Banach algebra (over the complex field $\\mathbb{C}$), and $\\U$ be a Banach $\\A$-bimodule. A linear map $\\delta:\\A\\rightarrow\\U$ is called a \\textit{derivation} if $\\delta (ab)=a\\delta(b)+\\delta (a)b$ holds for all $a,b\\in\\A$. For any $x\\in\\U$, the map $id_x:\\A\\rightarrow\\U$ given by $id_x(a)=ax-xa$ is a continuous derivation called \\textit{inner derivation}. The set of all continuous derivations from $\\A$ into $\\U$ is denoted by $Z^{1}(\\A,\\U)$ while $N^{1}(\\A,\\U)$ denotes the set of all inner derivations from $\\A$ into $\\U$. $Z^{1}(\\A,\\U)$ is a linear subspace $\\mathbb{B}(\\A,\\U)$ (the space of all continuous linear maps from $\\A$ into $\\U$) and $N^{1}(\\A,\\U)$ is a linear subspace of $Z^{1}(\\A,\\U)$. We denote by $H^{1}(\\A,\\U)$, the quotient space $\\frac{Z^{1}(\\A,\\U)}{N^{1}(\\A,\\U)}$ which is called \\textit{the first cohomology group of $\\A$ with coefficients in $\\U$}. We call the the first cohomology group of $\\A$ with coefficients in $\\A$ briefly by first cohomology group of $\\A$. Derivations and cohomology group are important subjects in the study of Banach algebras. Among the most important problems related to them are these questions; Under what conditions a derivation $\\delta:\\A\\rightarrow\\U$ is continuous? Under what conditions one has $H^{1}(\\A,\\U)=(0)$? (i.e., every continuous derivation from $\\A$ into $\\U$ is inner). Also the calculation of $H^{1}(\\A,\\U)$, up to isomorphism of linear spaces, is a considerable problem.\n\\par \nThe problem of continuity of derivations is related to the subject of automatic continuity which is an important subject in mathematical analysis. Many studies have been performed in this regard and it has a long history. We may refer to \\cite{da} for more information which is a detailed source in this subject. Here we only review the most important obtained results concerning the automatic continuity of derivations. Johnson and Sinclair in \\cite{john1} have shown that every derivation on a semisimple Banach algebra is continuous. Ringrose in \\cite{rin} showed that every derivation from a $C^{*}$-algebra $\\A$ into Banach $\\A$-bimodule $\\U$ is continuous. In \\cite{chr}, Christian proved that every derivation from a nest algebra on Hilbert space $\\mathcal{H}$ into $\\mathbb{B}(\\mathcal{H})$ is continuous. Additionally, some results on automatic continuity of derivations on prime Banach algebras have been established by Villena in \\cite{vi1} and \\cite{vi2} .\n\\par \nThe study of the first cohomology group of Banach algebras is also a considerable topic which may be used to study the structure of Banach algebras. Johnson in \\cite{john2}, using the first cohomology group, has defined amenable Banach Algebras and then various types of amenability defined by using the first cohomology group. We may refer the reader to \\cite{run} for more information. Also among the interesting problems in the theory of derivations is either characterizing algebras on which every continuous derivation is inner, that is, the first cohomology group is trivial or characterizing the first cohomology group of Banach algebras up to vector spaces isomorphism. Sakai in \\cite{sa} showed that every continuous derivation on a $W^{*}$-algebra is inner. 
Kadison \cite{ka} proved that every derivation of a $C^{*}$-algebra on a Hilbert space $\mathcal{H}$ is spatial (i.e., of the form $a\mapsto ta-at$ for some $t\in \mathbb{B}(\mathcal{H})$) and, in particular, that every derivation on a von Neumann algebra is inner. Some results have also been obtained in the case of non-self-adjoint operator algebras. Christensen \cite{chr} showed that every continuous derivation from a nest algebra on $\mathcal{H}$ to itself and to $\mathbb{B}(\mathcal{H})$ is inner, and this result was later generalized in various directions, among which we may refer to \cite{li} and the references therein. Gilfeather and Smith calculated the first cohomology group of certain operator algebras called joins (\cite{gil1}, \cite{gil2}). In \cite{do}, the cohomology group of the operator algebras called seminest algebras was calculated. All of these operator algebras, as well as nest algebras, have a structure like that of triangular Banach algebras; motivated by these studies, Forrest and Marcoux \cite{for} investigated the first cohomology group of triangular Banach algebras. The first cohomology group of a triangular Banach algebra is not necessarily zero; in \cite{for} it is in fact calculated under some special conditions, and using those results various examples of Banach algebras with non-trivial cohomology are given, together with the computation of their first cohomology groups. Also in \cite{for}, some results concerning the automatic continuity of derivations on triangular Banach algebras are presented. A generalization of triangular Banach algebras is the module extension of Banach algebras, whose weak amenability was studied by Zhang \cite{zh}; subsequently, Medghalchi and Pourmahmood \cite{med} computed the first cohomology group of module extensions of Banach algebras and, using those results, gave various examples of Banach algebras with non-trivial cohomology group, computing the first cohomology group of the given examples.
\par 
Another class of Banach algebras which has been considered during the last thirty years is the class of algebras obtained by a special product called the $\theta$-Lau product. This product was first introduced by Lau \cite{lau} for a special class of Banach algebras which are pre-duals of von Neumann algebras for which the dual unit element (the unit element of the dual) is a multiplicative linear functional. Afterwards, various studies of it have been carried out. For instance, Monfared \cite{mn} investigated the structure of this product, and in \cite{gha} the amenability of Banach algebras equipped with this product was studied. For more information about this product the reader may refer to \cite{gha}, \cite{mn} and the references therein. 
\par 
Let $\mathcal{B}$ be a Banach algebra such that $\mathcal{B} =\A\oplus\U$ (as a direct sum of Banach spaces), where $\A$ is a closed subalgebra of $\mathcal{B}$ and $\U$ is a closed ideal in $\mathcal{B}$. In this case, we say that $\mathcal{B}$ is the \textit{semidirect product} of $\A$ and $\U$ and write $\mathcal{B}=\A\ltimes \U$. Semidirect product Banach algebras appear in the study of many classes of Banach algebras.
For instance, in the strong Wedderburn decomposition of a Banach algebra $\mathcal{B}$, it is assumed that $\mathcal{B}=\A\ltimes Rad\mathcal{B}$, where $Rad\mathcal{B}$ is the Jacobson radical of $\mathcal{B}$ and $\A$ is a closed subalgebra of $\mathcal{B}$ with $\A\cong \frac{\mathcal{B}}{Rad\mathcal{B}}$. As another example, in \cite{da2} the authors used the structure of semidirect products to study the amenability of measure algebras, since every measure algebra has a decomposition as a semidirect product of Banach algebras. In \cite{tho2}, Thomas investigated necessary conditions for the decomposition of a commutative Banach algebra into the semidirect product of a closed subalgebra and a principal ideal. We may also refer to \cite{ba}, \cite{ber} and \cite{wh}, where semidirect product Banach algebras are studied from different points of view. Equivalently, one may describe semidirect product Banach algebras as follows. Let $\A$ and $\U$ be Banach algebras such that $\U$ is a Banach $\A$-bimodule with compatible actions and an appropriate norm. Consider the multiplication on $\A\times\U$ given by 
\[(a,x)(b,y)=(ab,ay+xb+xy)\quad\quad ((a,x),(b,y)\in \A\times\U).\]
It can be shown that with this multiplication and the $l^{1}$-norm, $\A\times\U$ is a Banach algebra (a computation recorded at the end of this introduction) in which $\A$ is a closed subalgebra and $\U$ is a closed ideal. So in fact this Banach algebra is equivalent to $\A\ltimes\U$. By considering different module actions or algebra multiplications, it can be seen that $\A\ltimes\U$ is a generalization of direct products of Banach algebras, trivial extension Banach algebras, triangular Banach algebras and $\theta$-Lau product Banach algebras. In this paper we consider semidirect product Banach algebras as described above and study the derivations on this special product of Banach algebras. In fact, we establish various results concerning the automatic continuity of derivations on $\A\ltimes\U$ and its first cohomology group, we specialize these results to particular cases of $\A\ltimes\U$, and we obtain various examples of Banach algebras whose derivations are automatically continuous and whose first cohomology group is trivial, or whose first cohomology group we compute.
\par
This paper is organized as follows. In Section 2, we investigate the definition of semidirect product Banach algebras and show that this product is a generalization of various types of Banach algebras. In Section 3, the structure of derivations on semidirect product Banach algebras is determined and, using it, we obtain several results about the automatic continuity of derivations on these Banach algebras; we also study the decomposition of a derivation into the sum of a continuous derivation and another derivation. In Section 4, we consider the first cohomology group of semidirect product Banach algebras, compute it under several different conditions and establish various results in this context. In Section 5, we apply the results of Sections 3 and 4 to some special cases of semidirect products of Banach algebras. In fact, we investigate the automatic continuity of derivations and the first cohomology group of direct products of Banach algebras, module extension Banach algebras and $\theta$-Lau products of Banach algebras, and establish various results about the derivations on these Banach algebras.
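We close this overview by recording the elementary computation behind the claim above that the multiplication on $\A\times\U$ is associative; each matching of terms below uses a module axiom, one of the compatibility conditions stated in Section 2, or the associativity of $\U$:
\begin{eqnarray*}
((a,x)(b,y))(c,z)&=&\bigl((ab)c,\,(ab)z+(ay)c+(xb)c+(xy)c+(ay)z+(xb)z+(xy)z\bigr)\\
&=&\bigl(a(bc),\,a(bz)+a(yc)+x(bc)+x(yc)+a(yz)+x(bz)+x(yz)\bigr)\\
&=&(a,x)\bigl((b,y)(c,z)\bigr).
\end{eqnarray*}
Submultiplicativity of the $l^{1}$-norm follows similarly, since $\Vert ab\Vert +\Vert ay+xb+xy\Vert \leq (\Vert a\Vert +\Vert x\Vert )(\Vert b\Vert +\Vert y\Vert )$.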
\par 
At the end of this section we fix some notation and terminology used throughout the paper.
\par 
If $\mathcal{X}$ and $\mathcal{Y}$ are Banach spaces, for a linear map $T:\mathcal{X}\rightarrow \mathcal{Y}$, define the separating space $\mathfrak{S}(T)$ by 
\[\mathfrak{S}(T):=\{y\in \mathcal{Y}\, \mid \, \text{there is}\, \, \{x_n\}\subseteq \mathcal{X}\,\, \text{with}\, \, x_n\rightarrow 0 , \, T(x_n)\rightarrow y\} .\]
By the closed graph theorem, $T$ is continuous if and only if $\mathfrak{S}(T)=(0)$.
\par 
Let $\A$ be a Banach algebra and $\U$ be a Banach $\A$-bimodule. By $Z(\A)$ we mean the center of $\A$. Define the set $ann_{\A}\U$ by
\[ann_{\A}\U:=\{a\in\A\, \mid \, a\U=\U a=(0)\}.\]
If $\mathcal{N}$ is an $\A$-submodule of $\U$, we put 
\[(\mathcal{N}:\U)_{\A}:=\{a\in\A \, \mid \, a\U\subseteq \mathcal{N}, \, \U a\subseteq \mathcal{N}\}. \] 
It is clear that if $\mathcal{N}=(0)$, then $((0):\U)_{\A}=ann_{\A}\U$. Let $\U$ and $\mathcal{V}$ be Banach $\A$-bimodules. A linear map $\phi :\U\rightarrow \mathcal{V}$ is said to be a \textit{left $\A$-module homomorphism} if $\phi (ax)=a\phi (x)$ whenever $a\in \A$ and $x\in\U$, and a \textit{right $\A$-module homomorphism} if $\phi (xa)=\phi (x) a\quad (a\in \A,x\in\U)$. The linear map $\phi$ is called an \textit{$\A$-module homomorphism} if $\phi$ is both a left and a right $\A$-module homomorphism. The set of all continuous $\A$-module homomorphisms from $\U$ into $\mathcal{V}$ is denoted by $Hom_{\A}(\U,\mathcal{V})$. Note that if the spaces are the same, we just write $Z^{1}(\A), N^{1}(\A), H^{1}(\A), Hom_{\A}(\U)$.
\section{Semidirect products of Banach algebras}
In this section we introduce the notion of semidirect products of Banach algebras and give some properties of this concept.\\
Let $\A$ and $\U$ be Banach algebras such that $\U$ is a Banach $\A$-bimodule with
\[\parallel ax \parallel \leq \parallel a \parallel \parallel x \parallel, \quad \parallel xa \parallel \leq \parallel x \parallel \parallel a \parallel \quad\quad (a\in \A, x\in \U), \] 
and compatible actions, that is, 
\[ (a.x)y=a.(xy),\,\,\, (xy).a=x(y.a), \,\,\, (x.a)y=x(a.y) \quad\quad (a \in \A, \, x,y\in \U). \]
If we equip the set $\A \times \U$ with the usual $\mathbb{C}$-module structure, then the multiplication 
\[ (a,x)(b,y)=(ab, a.y+x.b+xy) \]
turns $\A \times \U$ into an associative algebra. 
\par 
In the sequel we also denote the module actions simply by $ax$ and $xa$.
\par 
The \textit{semidirect product} of the Banach algebras $\A$ and $\U$, denoted by $\A \ltimes \U$, is defined as the space $\A \times \U$ with the above algebra multiplication and with the norm 
\[ \parallel (a,x) \parallel = \parallel a \parallel + \parallel x \parallel . \]
The semidirect product $\A \ltimes \U$ is a Banach algebra.
\begin{rem}\label {1}
In $\A \ltimes \U$ we identify $\A \times \lbrace 0 \rbrace$ with $\A$, and $\lbrace 0 \rbrace \times \U $ with $\U$. Then $\A$ is a closed subalgebra while $\U$ is a closed ideal of $\A \ltimes \U$, and
\[ \A \ltimes \U / \U \cong \A \quad (isometric \, \, isomorphism).
\]
Indeed $\A \ltimes \U$ is equal to the direct sum of $\A$ and $\U$ as Banach spaces.
\par
Conversely, let $\mathcal{B}$ be a Banach algebra of the form $\mathcal{B}=\A \oplus \U$ as a Banach space direct sum, where $\A$ is a closed subalgebra of $\mathcal{B}$ and $\U$ is a closed ideal in $\mathcal{B}$. In this case, the product of two elements $a+x$ and $b+y$ of $\mathcal{B}=\A \oplus \U$ is given by 
\[(a+x)(b+y)=ab+(ay+xb+xy), \] where $ab\in \A$ and $ay+xb+xy\in \U$. Also the norm on $\mathcal{B}$ is equivalent to the one given by 
\[ \parallel a+x \parallel =\parallel a \parallel + \parallel x \parallel \quad\quad (a\in \A, x \in \U).\]
On the other hand, $\A$ and $\U$ are Banach algebras such that $\U$ is a Banach $\A$-bimodule with compatible actions and
\[\parallel ax \parallel \leq \parallel a \parallel \parallel x \parallel, \quad \parallel xa \parallel \leq \parallel x \parallel \parallel a \parallel \quad\quad (a\in \A, x\in \U). \] 
By the above arguments we have 
\[ \mathcal{B}\cong \A \ltimes \U \]
as an isomorphism of Banach algebras.
\end{rem}
The following examples provide various types of semidirect products of Banach algebras.
\begin{exm}
Let $\A$ be a Banach algebra. Then the unitization of $\A$, denoted by $\A^{\#}$, is in fact the semidirect product $\mathbb{C}\ltimes \A$.
\end{exm}
\begin{exm}\label{dp}
Suppose that the action of $\A$ on $\U$ is trivial, that is, $\A\U=\U\A=(0)$. Then we obtain the usual $l^{1}$-direct product of the Banach algebras $\A$ and $\U$. In this case, $\A \ltimes \U= \A \times \U$.
\end{exm}
\begin{exm}\label{me}
Let the algebra multiplication on $\U$ be trivial, that is, $\U^2=(0)$. Then $\A\ltimes \U$ is the same as the module extension (or trivial extension) of $\A$ by $\U$, which we denote by $T(\A,\U)$.
\end{exm}
\begin{exm}\label{tri}
Let $\A$ and $\mathcal{B}$ be Banach algebras and $\mathcal{M}$ be a Banach $(\A,\mathcal{B})$-bimodule. The triangular Banach algebra introduced in \cite{for} is 
$Tri(\A, \mathcal{M}, \mathcal{B}):=\begin{pmatrix}
 \A & \mathcal{M} \\
 0 &\mathcal{B} 
\end{pmatrix}$ with the usual matrix operations and the $l^1$-norm. If we turn $\mathcal{M}$ into a Banach $\A \times\mathcal{B}$-bimodule with the actions 
$$(a,b)m=am\quad ,\quad m(a,b)=mb\quad\quad ((a,b)\in \A\times\mathcal{B}, m\in \mathcal{M} )$$
($\A \times \mathcal{B}$ is considered with the $l^1$-norm), then $Tri(\A, \mathcal{M}, \mathcal{B})\cong T(\A\times\mathcal{B},\mathcal{M})$ as an isomorphism of Banach algebras. Therefore triangular Banach algebras are examples of semidirect products of Banach algebras.
\end{exm}
\begin{rem}
Let $\A$ and $\U$ be Banach algebras such that $\U$ is a Banach $\A$-bimodule with compatible actions and norm, and let $\overline{\A\U}=\U$ or $\overline{\U\A}=\U$. Let the Banach algebra $\A$ be the direct sum of its closed ideals $I_1$ and $I_2$, that is, $\A=I_1 \oplus I_2$. If $I_2\U=\U I_1=(0)$, then $\U^2=(0)$.
Indeed, if $a\in I_1$, $b\in I_2$ and $x,y\in \U$ are arbitrary, then by associativity we have $$x[(a+b)y]=[x(a+b)]y.$$
So $$x(ay)=(xb)y.$$
If $\overline{\A\U}=\U$ and we put $b=0$ in the above equation, then $xy=0$; analogously, if $\overline{\U\A}=\U$, letting $a=0$ gives $xy=0$.
\par 
By the above arguments and hypotheses, and according to Example \ref{tri}, in this case we have 
\begin{equation*}
\A\ltimes \U=T(\A,\U)\cong\begin{pmatrix}
 I_1 & \U \\
 0 &I_2
\end{pmatrix}
\end{equation*}
 as an isomorphism of Banach algebras.
\end{rem}
\begin{exm}\label{la}
Let $\A$ and $\U$ be Banach algebras and $\theta\in \Delta(\A)$, where $\Delta(\A)$ is the set of all non-zero characters of $\A$. With the following module actions, $\U$ becomes a Banach $\A$-bimodule:
$$ax=xa=\theta(a)x\quad\quad (a\in \A ,x\in \U).$$
The norm and the actions on $\U$ are compatible, and one can consider $\A\ltimes \U$ with the multiplication
$$(a,x)(b,y)=(ab,\theta(a)y+\theta(b)x+xy).$$
In this case $\A\ltimes \U$ is the $\theta$-Lau product introduced in \cite{lau}. 
\end{exm}
\begin{exm}
Let $G$ be a locally compact group. Then $M(G)=l^{1}(G)\ltimes M_{c}(G)$, where $M_{c}(G)$ is the subspace of $M(G)$ consisting of all continuous measures; that is, for $\mu\in M(G)$, $\mu\in M_c(G)$ if and only if $\mu(\{s\})=0$ for all $s\in G$. We denote the subspace of discrete measures by $M_d({G})$, which is isomorphic to $l^{1}(G)$, and 
\begin{equation*}M_{d}(G)=\{\mu =\sum{\alpha_{s}\delta_{s}}:\Vert\mu\Vert =\sum_{s\in G}{\vert \alpha_{s}\vert }<\infty \}.
\end{equation*}
Indeed $l^{1}(G)$ is a closed subalgebra of $M(G)$ and $M_c(G)$ is a closed ideal of $M(G)$. If $G$ is discrete, then $M(G)=l^{1}(G)$ and $M_{c}(G)=\{0\}$; but if $G$ is not discrete, then $M_{c}(G)\neq\{0\}$.
\end{exm}
It is possible that a Banach algebra $\mathcal{B}$ with a closed ideal $\mathcal{I}$ admits no closed subalgebra $\A$ of $\mathcal{B}$ such that $\mathcal{B}=\A\ltimes \mathcal{I}$. In the following we give an example of such a Banach algebra.
\begin{exm}
Let $\mathcal{B}:=C([0,1])$ be the Banach algebra of continuous complex-valued functions on $[0,1]$ and let $\mathcal{I}:=\{f\in \mathcal{B} :f(0)=f(1)=0\}$. Then $\mathcal{I}$ is a closed ideal of $\mathcal{B}$. If $\A $ were a closed subalgebra of $\mathcal{B}$ satisfying $\mathcal{B}=\A \oplus \mathcal{I}$ as a Banach space direct sum, then for $f\in \mathcal{I}$ and $g\in \A$ with $f(x)+g(x)=x$ for $x\in [0,1]$, we would have $g(0)=0$, $g(1)=1$ and $g-g^2\in \A\cap \mathcal{I}=(0)$. But since $g$ is continuous with $g(0)=0$ and $g(1)=1$, it attains the value $\frac{1}{2}$, so $g-g^2\neq 0$, a contradiction.
\end{exm}
Note that $\A\ltimes\U$ is commutative if and only if both $\A$ and $\U$ are commutative Banach algebras and $\U$ is a commutative $\A$-bimodule.
\par 
In the rest of this section we introduce some special maps which are used in the next sections.
\par 
For $a\in\A$ define the map $r_{a}:\U\rightarrow\U$ by $r_{a}(x)=xa-ax$. Some properties of this map are given in the following remark.
\begin{rem}\label{inn1}
For $a\in\A$, consider the map $r_{a}:\U\rightarrow\U$.
\begin{enumerate}
\item[(i)]
$r_a$ is a derivation on $\U$.
\item[(ii)]
For every $b\in\A$ and $x\in\U$,
\[r_{a}(bx)=br_{a}(x)+id_{a}(b)x \quad \text{and} \quad r_{a}(xb)=r_{a}(x)b+x id_{a}(b).\]
\item[(iii)]
For $a\in \A$, if $id_{a}=0$, then $r_a$ is an $\A$-bimodule homomorphism.
Also, if $ann_{\A}\U =(0)$ and $r_a$ is an $\A$-bimodule homomorphism, then $id_{a}=0$.
\end{enumerate}
\end{rem}
Inner derivations on $\U$ also have significant properties, used in determining the first cohomology group of $\A\ltimes\U$, which are given in the next remark.
\par 
Note that both inner derivations from $\A$ to $\U$ and inner derivations from $\U$ to $\U$ are denoted by $id_{x}$. So, in order to avoid confusion, we denote by $id_{\A , x}$ the inner derivations from $\A$ to $\U$, while $id_{\U , x}$ denotes the inner derivations from $\U$ to $\U$.
\begin{rem}\label{inn2}
For $x_0\in\U$, consider the inner derivation $id_{\U , x_0}:\U\rightarrow\U$.
\begin{enumerate}
\item[(i)]
For every $a\in\A$ and $x\in\U$, 
\[id_{\U ,x_0}(ax)=a \, id _{\U ,x_0}(x)+id _{\A , x_0}(a)x \quad \text{and} \quad id_{\U ,x_0}(xa)=id _{\U ,x_0}(x)a+x \, id _{\A, x_0}(a).\]
\item[(ii)]
For $x_{0}\in \U$, if $id_{\A, x_0}=0$, then $id_{\U , x_0}$ is an $\A$-bimodule homomorphism. If $ann_{\U}\U =(0)$ and $id_{\U , x_0}$ is an $\A$-bimodule homomorphism, then $id_{\A, x_0}=0$.
\end{enumerate}
\end{rem}
The following sets play an important role in determining the first cohomology group of $\A\ltimes\U$:
\[R_{\A}(\U):=\{r_{a}: \U\rightarrow\U \, \mid \, a\in \A\};\]
\[C_{\A}(\U):=\{r_{a} :\U\rightarrow\U \, \mid \, id_{a}=0 \, \, (a\in \A)\};\]
\[I(\U):=\{id_{\U, x}:\U\rightarrow\U \, \mid \, id_{\A, x}=0 \, \, (x\in \U)\}.\]
\indent In view of the above remarks, the set $R_{\A}(\U)$ is a linear subspace of $Z^{1}(\U)$. Also $C_{\A}(\U)$ is a linear subspace of $Hom_{\A}(\U) \cap R_{\A}(\U)$ and $I(\U)$ is a linear subspace of $Hom_{\A}(\U) \cap N^{1}(\U)$. Indeed, we have the following inclusions of linear subspaces:
\[ C_{\A}(\U)+I(\U)\subseteq Hom_{\A}(\U) \cap (R_{\A}(\U)+N^{1}(\U))\subseteq Hom_{\A}(\U) \cap Z^{1}(\U).\]
If $\A$ is commutative, then $R_{\A}(\U)= C_{\A}(\U)$, and if $\U$ is a commutative $\A$-bimodule, then $N^{1}(\U)=I(\U)$. If $\U ^{2}=(0)$, then $Z^{1} (\U)=\mathbb{B}(\U)$ and $N^{1}(\U)=I(\U )=(0)$.
\section{Derivations on $\A \ltimes \U$}
In this section we determine the structure of derivations on $\A\ltimes \U$. Using it, we obtain some results concerning the automatic continuity of derivations on $\A\ltimes \U$. We also use the results of this section to determine the first cohomology group of $\A\ltimes \U$ in the next sections.
\par 
Throughout this section we always assume that $\A$ and $\U$ are Banach algebras, where $\U$ is a Banach $\A$-bimodule with compatible actions and norm (as in Section 2). In other cases the conditions will be specified.
\par 
In the following theorem the structure of derivations on $\A\ltimes \U$ is determined.
\begin{thm}\label{asll}
Let $D:\A\ltimes \U\rightarrow \A\ltimes \U$ be a map.
Then the following conditions are equivalent.
\begin{enumerate}
\item[(i)] $D$ is a derivation.
\item[(ii)] 
\[D((a,x))=(\delta_1 (a)+\tau_1 (x),\delta_2 (a)+\tau_2 (x))\quad\quad (a\in \A,x\in \U)\]
such that 
\begin{enumerate}
\item[(a)] 
$\delta_1:\A\rightarrow\A$ is a derivation.
\item[(b)]
$\delta_2:\A\rightarrow \U$ is a derivation.
\item[(c)]
$\tau_1 :\U\rightarrow \A$ is an $\A$-bimodule homomorphism such that 
$$\tau_1(xy)=0$$
for all $x,y\in \U$.
\item[(d)]
$\tau_2 :\U\rightarrow \U$ is a linear map satisfying the following conditions for every $a\in \A$ and $x,y\in \U$:
\begin{eqnarray*}
\tau_2 (ax)&=&a\tau_2 (x)+\delta_1 (a)x+\delta_2 (a)x; \\
\tau_2 (xa)&=&\tau_2 (x)a+x\delta_1 (a)+x\delta_2 (a); \\
\tau_2(xy)&=&x\tau_1(y)+\tau_1(x)y+x\tau_2(y)+\tau_2(x)y.
\end{eqnarray*}
\end{enumerate}
\end{enumerate}
Moreover, $D$ is an inner derivation if and only if $\tau_1 =0$ and there exist $a_0\in\A$ and $x_0\in\U$ such that $\delta_1 =id_{a_0}$, $\delta_2 =id_{\A,x_0}$ and $\tau_2 =r_{a_0}+id_{\U,x_0}$.
\end{thm}
\begin{proof}
$(i)\implies (ii)$. Since $D$ is a linear map and $\A\ltimes \U$ is the direct sum of the linear spaces $\A$ and $\U$, there are linear maps $\delta_1:\A\rightarrow \A$, $\delta_2:\A\rightarrow \U$, $\tau_1 :\U\rightarrow \A$ and $\tau_2 :\U\rightarrow \U$ such that for any $(a,x)\in \A\ltimes \U$ we have 
$$D((a,x))=(\delta_1 (a)+\tau_1 (x),\delta_2 (a)+\tau_2 (x)).$$
By applying $D$ to the equality $(a,0)(b,0)=(ab,0)$ we conclude that $\delta_1$ and $\delta_2$ are derivations. Analogously, applying $D$ to the equalities 
\[ (a,0)(0,x)=(0,ax),\quad (0,x)(a,0)=(0,xa) \quad \text{and} \quad (0,x)(0,y)=(0,xy)\]
establishes the desired properties of the maps $\tau_1$ and $\tau_2$ given in parts $(c)$ and $(d)$, respectively.
\\
$(ii)\implies (i)$ is clear.
\par 
The equivalent conditions for the inner-ness of $D$ can be obtained by a straightforward calculation.
\end{proof}
In the sequel, for a derivation $D$ on $\A\ltimes \U$, we always assume that 
$$D((a,x))=(\delta_1 (a)+\tau_1 (x),\delta_2 (a)+\tau_2 (x))\quad\quad ((a,x)\in \A\ltimes \U),$$ in which the mentioned maps satisfy the conditions of the preceding theorem.
\par 
By Theorem \ref{asll}, if $D$ is an inner derivation on $\A\ltimes \U$, then $\tau_1 =0$. So the following question is of interest: for a given derivation $D$, under what conditions does $\tau_1 =0$?
By part $(ii)$-$(c)$ of Theorem \ref{asll}, if $\U^2=\U$ (or $\overline{\U^2}=\U$, if $D$ is continuous), then $\tau_1 =0$. If $\U$ has a bounded approximate identity, then by Cohen's factorization theorem we have $\U^2=\U$, and therefore in this case $\tau_1 =0$. All unital Banach algebras, $C^{*}$-algebras and group algebras have bounded approximate identities. Also, if $\U$ is a simple Banach algebra, then $\U^{2}=\U$.
\par 
 The following corollary follows from Theorem \ref{asll}.
 \begin{cor}\label{tak}
 Suppose that $\delta_1:\A\rightarrow \A$, $\delta_2:\A\rightarrow \U$, $\tau_1 :\U\rightarrow \A$ and $\tau_2 :\U\rightarrow \U$ are linear maps.
 \begin{enumerate}
 \item[(i)]
 $D:\A\ltimes\U\rightarrow \A\ltimes\U $ defined by $D((a,x))=(\delta_1(a),0)$ is a derivation if and only if $\delta_1$ is a derivation and $\delta_1 (\A)\subseteq ann_{\A}\U$.
In this case, if $\delta_1 =id_{a_0}$, where $aa_0-a_0 a\in ann_{\A}\U \, \, (a\in \A) $ and $a_0x =xa_0$ for all $x\in \U$, then $D$ is inner.
 \item[(ii)]
 $D:\A\ltimes\U\rightarrow \A\ltimes\U $ with $D((a,x))=(0,\delta_2(a))$ is a derivation if and only if $\delta_2$ is a derivation and $\delta_2 (\A)\subseteq ann_{\U}\U$. Moreover, if $\delta_2 =id_{\A, x_0}$ is inner, $ax_0 - x_0a\in ann_{\U}\U$ (for all $a\in \A$) and $x_0\in Z(\U)$, then $D$ is inner.
 \item[(iii)]
$D:\A\ltimes\U\rightarrow \A\ltimes\U $ with $D((a,x))=(\tau_1(x),0)$ is a derivation if and only if $\tau_1$ is an $\A$-bimodule homomorphism with $\tau_1(xy)=0$ and $x\tau_1(y)+\tau_1(x)y=0$ for all $x,y\in\U$. In this case $D$ is inner if and only if $\tau_1 =0$.
\item[(iv)] $D:\A\ltimes\U\rightarrow \A\ltimes\U $ with $D((a,x))=(0,\tau_2(x))$ is a derivation if and only if $\tau_2$ is a derivation and also an $\A$-bimodule homomorphism. In this case $D$ is inner if and only if $\tau_2(x)=r_{a_0}(x)+id_{\U, x_0}(x)$, where $a_0\in Z(\A)$ and $ax_0 =x_0 a$ for all $a\in \A$.
\end{enumerate}
 \end{cor}
In the following we give an example showing that the condition $\tau_1 =0$ does not hold in general.
\begin{exm}
Let $\mathcal{B}$ be a Banach algebra, let $\A:=T(\mathcal{B},\mathcal{B})$ and let $\U :=\mathcal{B}$. The Banach algebra $\U$ becomes a Banach $\A$-bimodule via the following compatible module actions:
\[(a,b)x=ax,\quad x(a,b)= xa\quad\quad ((a,b)\in \A , x\in \U).\]
Now consider the Banach algebra $T(\A,\U)$ and define the map $\tau_1:\U\rightarrow \A$ by 
$$\tau_1(x)=(0,x)\quad\quad (x\in \U).$$
Then $\tau_1\neq 0$ is an $\A$-bimodule homomorphism with $\tau_1(\U ^2)=(0)$, and for any $x,y\in \U$ we have 
$$x\tau_1(y)+\tau_1(x)y=x(0,y)+(0,x)y=0.$$
Now by Corollary \ref{tak} it can be seen that the map $D$ on $T(\A,\U)$ defined by $D(((a,b),x))=(\tau_1(x),0)$ is a derivation in which $\tau_1\neq 0$.
\end{exm}
In this example $\tau_1\neq 0$ but $x\tau_1(y)+\tau_1(x)y=0$ $ (x,y\in\U)$. The next example shows that $\tau_1$ does not satisfy the condition $x\tau_1(y)+\tau_1(x)y=0 \,\, (x,y\in\U)$ in general.
\begin{exm}
Let $\A$ be a Banach algebra, $\mathcal{C}$ be a Banach $\A$-bimodule and $\gamma :\mathcal{C}\rightarrow \A$ be a nonzero $\A$-bimodule homomorphism such that 
\[ c\gamma (c')+\gamma (c)c' =0\]
for all $c,c'\in\mathcal{C}$.
\\
Let $\U :=\A\times \mathcal{C}$. With the usual actions, $\U$ is a Banach $\A$-bimodule. Consider the multiplication on $\U$ given by
\[(x,y)(x',y')=(xx',0)\quad\quad ((x,y),(x',y')\in\U). \]
With this multiplication $\U$ is a Banach algebra whose multiplication is compatible with its module actions. Now we may consider the Banach algebra $\A\ltimes\U$. Define the maps $\tau_1:\U\rightarrow\A$ and $\tau_2:\U\rightarrow\U$ by 
\[\tau_1 ((x,y))=\gamma(y)\quad , \quad \tau_2((x,y))=(-\gamma (y),0).\]
Then $\tau_1$ and $\tau_2$ are $\A$-bimodule homomorphisms such that $\tau_1(\U ^2)=(0)$, $\tau_1\,,\tau_2\neq 0$ and 
\[0=\tau_2((x,y)(x',y'))=(x,y)\tau_1 ((x',y'))+(x,y)\tau_2((x',y'))+\tau_1 ((x,y)) (x',y')+\tau_2((x,y))(x',y').\]
By Theorem \ref{asll} it can be seen that $D=(\tau_1,\tau_2)$ is a derivation on $\A\ltimes\U$.
If we assume further that $\A$ is a unital algebra, then for any $y,y'\in\mathcal{C}$ with $\gamma (y)\neq 0$,
\begin{eqnarray*}
(0,y)\tau_1((1,y'))+\tau_1((0,y))(1,y')&=&(0,y)\gamma (y')+\gamma (y)(1,y')\\&=&(\gamma(y),y\gamma (y')+\gamma(y)y')\\&=&(\gamma (y),0)\neq 0.
\end{eqnarray*}
 \end{exm}
In the following we investigate the automatic continuity of derivations on $\A\ltimes\U$. It is clear that a derivation $D$ on $\A\ltimes \U$ is continuous if and only if the maps $\delta_1 , \delta_2 , \tau_1$ and $\tau_2$ are continuous. 
\par 
 In the next theorem we state the relation between the separating spaces of $\delta_i$ and $\tau_i$ ($i=1,2$), and then, using this theorem, we obtain some results concerning the automatic continuity of derivations on $\A\ltimes\U$.
\begin{thm}\label{joda}
Let $D$ be a derivation on $\A\ltimes\U$ such that 
$$D((a,x))=(\delta_1 (a)+\tau_1 (x),\delta_2 (a)+\tau_2 (x))\quad\quad ((a,x)\in \A\ltimes \U).$$
Then
\begin{enumerate}
\item[(i)]
$\mathfrak{S} (\tau_1)$ is an ideal in $\A$. If $\tau_2$ is continuous, then $\mathfrak{S} (\tau_1)\subseteq ann_{\A}\U$, and if $\tau_2(\U)\subseteq ann_{\U}\U$, then $\mathfrak{S} (\tau_1)\subseteq(\mathfrak{S} (\tau_2):\U)_\A$.
\item[(ii)]
$\mathfrak{S} (\tau_2)$ is an $\A$-sub-bimodule of $\U$, and if $\tau_1$ is continuous or $\tau_1 (\U)\subseteq ann_{\A}\U$, then $\mathfrak{S} (\tau_2)$ is an ideal in $\U$.
\item[(iii)]
$\mathfrak{S} (\delta_1)$ is an ideal in $\A$, and if $\delta_2$ is continuous or $\delta_2 (\A)\subseteq ann_{\U}\U$, then $\mathfrak{S} (\delta_1)\subseteq(\mathfrak{S} (\tau_2):\U)_\A$.
\item[(iv)]
$\mathfrak{S} (\delta_2)$ is an $\A$-sub-bimodule of $\U$, and if $\delta_1$ is continuous or $\delta_1 (\A)\subseteq ann_{\A}\U$, then $\mathfrak{S} (\delta_2)\subseteq(\mathfrak{S} (\tau_2):\U)_\U$.
\end{enumerate}
\end{thm}
\begin{proof}
$(i)$ Let $a\in \mathfrak{S} (\tau_1)$. Then there is a sequence $\{x_n\}$ in $\U$ such that 
$x_n\rightarrow 0$ and $ \tau_1(x_n)\rightarrow a$.
For any $b\in \A$, $bx_n\rightarrow 0$, so $\tau_1(bx_n)=b\tau_1 (x_n)\rightarrow ba $ and hence $ba\in \mathfrak{S} (\tau_1)$. Similarly, $ab\in \mathfrak{S} (\tau_1)$. Therefore $\mathfrak{S} (\tau_1)$ is an ideal in $\A$.\\
 Now for every $x\in \U$ we have 
$$\tau_2(xx_n)=x\tau_1(x_n)+x\tau_2(x_n)+\tau_1(x)x_n+\tau_2(x)x_n$$
and
$$\tau_2(x_nx)=x_n\tau_1(x)+x_n\tau_2(x)+\tau_1(x_n)x+\tau_2(x_n)x.$$
If $\tau_2$ is continuous, taking limits in the above equations gives $ax=xa=0$ ($x\in \U$). So $a\in ann_{\A}\U$, and hence $\mathfrak{S} (\tau_1)\subseteq ann_{\A}\U$.
\par 
If $\tau_2 (\U)\subseteq ann_{\U}\U$, taking limits in the above equations we get 
$$\tau_2(xx_n)\rightarrow xa\quad \text{and} \quad \tau_2(x_n x)\rightarrow ax.$$ So $ax,xa\in \mathfrak{S} (\tau_2)$ and hence $\mathfrak{S} (\tau_1)\subseteq(\mathfrak{S} (\tau_2):\U)_\A$.
\\
$(ii)$ If $x_n\rightarrow 0$ and $\tau_2(x_n)\rightarrow x$, then for any $a\in \A$ we have 
$$\tau_2(ax_n)=a\tau_2(x_n)+\delta_1(a)x_n+\delta_2(a)x_n$$
and
$$\tau_2(x_{n}a)=\tau_2(x_n)a+x_n\delta_1(a)+x_n\delta_2(a).$$
By taking limits in these equations we get 
$$\tau_2(ax_n)\rightarrow ax\quad\text{and}\quad \tau_2(x_{n}a)\rightarrow xa.$$
So $ax,xa\in \mathfrak{S} (\tau_2)$ and hence $\mathfrak{S} (\tau_2)$ is an $\A$-sub-bimodule of $\U$.
\par 
For any $y\in \U$ we have
$$\tau_2(yx_n)=\tau_1(y)x_n+\tau_2(y)x_n+y\tau_1(x_n)+y\tau_2(x_n)$$
and
$$\tau_2(x_n y)=\tau_1(x_n)y+\tau_2(x_n)y+x_n\tau_1(y)+x_n\tau_2(y).$$
If $\tau_1$ is continuous or $\tau_1(\U)\subseteq ann_{\A} \U$, by taking limits we obtain 
$$\tau_2(yx_n)\rightarrow yx\quad \text{and}\quad \tau_2(x_{n}y)\rightarrow xy.$$
Therefore $xy,yx\in \mathfrak{S} (\tau_2)$ and $\mathfrak{S} (\tau_2)$ is an ideal.
\\
$(iii)$ Let $a_n\rightarrow 0 $ and $\delta_1 (a_n)\rightarrow a$. Since $\delta_1$ is a derivation, it follows that $\mathfrak{S} (\delta_1)$ is an ideal in $\A$. For every $x\in \U$ we have 
$$\tau_2(a_nx)=a_n\tau_2(x)+\delta_1(a_n)x+\delta_2(a_n)x$$
and
$$\tau_2(xa_n)=\tau_2(x)a_n+x\delta_1(a_n)+x\delta_2(a_n).$$
If $\delta_2$ is continuous or $\delta_2(\A)\subseteq ann_{\U}\U$, taking limits in the above equations yields 
$$\tau_2(a_n x)\rightarrow ax\quad \text{and}\quad \tau_2(x a_n)\rightarrow xa.$$
Since $xa_n, a_nx\rightarrow 0$, it follows that $ax,xa\in \mathfrak{S} (\tau_2)$. So $\mathfrak{S} (\delta_1)\subseteq(\mathfrak{S} (\tau_2):\U)_\A$.
\\
$(iv)$ The proof is similar to part $(iii)$.
\end{proof}
We now give some consequences of the preceding theorem.
\begin{prop}\label{au1}
Let $D$ be a derivation on $\A\ltimes\U$ such that 
$$D((a,x))=(\delta_1 (a)+\tau_1 (x),\delta_2 (a)+\tau_2 (x))\quad\quad ((a,x)\in \A\ltimes \U).$$
Then
\begin{enumerate}
\item[(i)]
if $ann_{\A}\U =(0)$ and $\tau_2$ and $\delta_2$ are continuous, then $D$ is continuous;
\item[(ii)]
if $ann_{\U}\U =(0)$ and $\tau_2$ and $\delta_1$ are continuous, then $\delta_2$ is continuous. If, in addition, $\tau_1$ is continuous, then $D$ is continuous.
\end{enumerate}
\end{prop}
\begin{proof}
$(i)$ Since $\tau_2$ is continuous, $\mathfrak{S} (\tau_2)=(0)$. By Theorem \ref{joda}-$(i)$, from the continuity of $\tau_2$ we conclude that $\mathfrak{S} (\tau_1)\subseteq ann_{\A}\U=(0)$; thus $\tau_1$ is continuous. Moreover, since the conditions of Theorem \ref{joda}-$(iii)$ hold, we have
$$\mathfrak{S} (\delta_1)\subseteq (\mathfrak{S} (\tau_2):\U)_\A=((0):\U)_\A =ann_{\A}\U=(0).$$ Therefore $\delta_1$ is continuous, and so $D$ is continuous.
\\
$(ii)$ Since the conditions of Theorem \ref{joda}-$(iv)$ hold and $\tau_2$ is continuous, it follows that
 $$\mathfrak{S} (\delta_2)\subseteq (\mathfrak{S} (\tau_2):\U)_\U=((0):\U)_\U=ann_{\U}\U =(0).$$ So $\delta_2$ is continuous.
If $\tau_1$ is also continuous, then all of the maps are continuous and thus $D$ is continuous.
\end{proof}
If we add the condition $ann_{\A}\U=(0)$ to part $(ii)$ of the previous proposition, then, as in part $(i)$, it implies that $\tau_1$ is continuous, and in this case $D$ is continuous as well.
\par 
Using the preceding proposition, we now obtain some results concerning the automatic continuity of derivations on $\A\ltimes\U$.
\begin{cor}\label{n1}
Suppose that for every derivation $D$ on $\A\ltimes \U$ we have 
$x\tau_1 (y)+\tau_1(x)y=0$ $ (x,y\in \U)$. If $ann_{\A}\U=(0)$ and every derivation from $\U$ to $\U$ and every derivation from $\A$ to $\U$ is continuous, then every derivation on $\A\ltimes \U$ is continuous.
\end{cor}
\begin{proof}
By Theorem \ref{asll}, for any derivation $D=(\delta_1 +\tau_1,\delta_2+\tau_2)$ on $\A\ltimes \U$, the maps $\delta_2:\A \rightarrow\U$ and $\tau_2:\U\rightarrow \U$ are derivations, since we have 
$x\tau_1 (y)+\tau_1(x)y=0$ $ (x,y\in \U)$. So $\delta_2$ and $\tau_2$ are continuous by the hypothesis. Now the continuity of $D$ follows from Proposition \ref{au1}-$(i)$.
\end{proof}
\begin{cor}\label{n2}
Suppose that for any derivation $D$ on $\A\ltimes \U$ we have $x\tau_1 (y)+\tau_1(x)y=0 $ $(x,y\in \U)$. If $ann_{\U}\U=(0)$, every derivation from $\U$ to $\U$ and every derivation from $\A$ to $\A$ is continuous, and every $\A$-bimodule homomorphism from $\U$ to $\A$ is continuous, then every derivation on $\A\ltimes \U$ is continuous.
\end{cor}
\begin{proof}
Consider a derivation $D=(\delta_1 +\tau_1,\delta_2+\tau_2)$. By Theorem \ref{asll}, the maps 
 $\delta_1:\A\rightarrow \A$ and $\tau_2:\U\rightarrow \U$ are derivations and $\tau_1:\U\rightarrow \A$ is an $\A$-bimodule homomorphism. By the hypothesis, $\delta_1, \tau_1$ and $\tau_2$ are continuous. Therefore $D$ is continuous by Proposition \ref{au1}-$(ii)$.
\end{proof}
By Johnson and Sinclair's theorem \cite{john1}, every derivation on a semisimple Banach algebra is continuous, so we have the following corollary.
\begin{cor}\label{semi}
Suppose that $\A$ and $\U$ are semisimple Banach algebras such that $\U$ has a bounded approximate identity. Then every derivation on $\A\ltimes \U$ is continuous.
\end{cor}
\begin{proof}
Let $D=(\delta_1 +\tau_1,\delta_2+\tau_2)$ be a derivation on $\A\ltimes \U$. By Cohen's factorization theorem, $\U ^2 =\U$ and so $\tau_1=0$. Also, from the hypothesis we conclude that $ann_{\U}\U =(0)$. By Johnson and Sinclair's theorem \cite{john1}, every derivation on $\A$ and on $\U$ is continuous. Now by Proposition \ref{au1}-$(ii)$, every derivation on $\A\ltimes \U$ is continuous.
\end{proof}
All $C^{*}$-algebras, group algebras, measure algebras and unital simple Banach algebras are semisimple Banach algebras with bounded approximate identities.
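For instance, combining Corollary \ref{semi} with the unitization example of Section 2: if $\U$ is a $C^{*}$-algebra, then taking $\A=\mathbb{C}$, which is trivially semisimple, gives
\[\mathbb{C}\ltimes\U=\U^{\#},\]
so every derivation on the unitization of a $C^{*}$-algebra is continuous.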
\par 
In the next results we investigate some conditions under which a derivation $D$ can be expressed as the sum of two derivations, one of which is continuous.
\begin{prop}\label{taj1}
Let $D$ be a derivation on $\A\ltimes\U$ such that 
\[D((a,x))=(\delta_1 (a)+\tau_1 (x),\delta_2 (a)+\tau_2 (x))\quad\quad ((a,x)\in \A\ltimes \U).\]
Then
\begin{enumerate}
\item[(i)]
if $ann_{\U}\U =(0)$, $x\tau_1 (y)+\tau_1(x)y=0 $ $(x,y\in \U)$, and $\delta_1$ and $\tau_2$ are continuous, then $D=D_1 +D_2$, where 
$$D_1((a,x))=(\delta_1(a),\delta_2(a)+\tau_2(x))\quad\text{and}\quad D_2((a,x))=(\tau_1(x),0)$$
are derivations on $\A\ltimes\U$ and $D_1$ is continuous;
\item[(ii)]
if $ann_{\U}\U =(0)$, $\delta_1(\A)\subseteq ann_{\A}\U$ and $\tau_1, \tau_2$ are continuous, then \[D_1((a,x))=(\tau_1(x),\delta_2 (a)+\tau_2 (x))\] is a continuous derivation on $\A\ltimes \U$ and $D=D_1 +D_2$, where $D_2((a,x))=(\delta_1(a),0)$ is a derivation on $\A\ltimes \U$.
\end{enumerate}
\end{prop}
\begin{proof}
$(i)$ By Proposition \ref{au1}-$(ii)$, $\delta_2$ is continuous. By Corollary \ref{tak}, $D=D_1 +D_2$, where 
$$D_1((a,x))=(\delta_1(a),\delta_2(a)+\tau_2(x)) \quad \text{and} \quad D_2((a,x))=(\tau_1(x),0)\quad\quad ((a,x)\in \A\ltimes\U)$$
are derivations on $\A\ltimes \U$. By the assumptions and the continuity of $\delta_2$, it follows that $D_1$ is a continuous derivation.
\\
$(ii)$ By Corollary \ref{tak}, 
$$D_1((a,x))=(\tau_1(x),\delta_2 (a)+\tau_2 (x))\quad\text{and}\quad D_2((a,x))=(\delta_1(a),0)$$
are derivations on $\A\ltimes \U$. Now by the continuity of $\tau_2$, the assumption $ann_{\U}\U =(0)$, the fact that $\delta_1(\A)\subseteq ann_{\A}\U$ and Theorem \ref{joda}-$(iv)$, it follows that 
$$\mathfrak{S} (\delta_2)\subseteq (\mathfrak{S} (\tau_2):\U)_\U =((0):\U)_\U =ann_{\U}\U=(0).$$ 
Therefore $\delta_2$ is continuous and hence so is $D_1$.
\end{proof}
To prove the next proposition we need the following lemma.
\begin{lem}(\cite[Proposition 5.2.2]{da})\label{da}
Let $\mathcal{X}$, $\mathcal{Y}$ and $\mathcal{Z}$ be Banach spaces, and let $T:\mathcal{X}\rightarrow \mathcal{Y}$ be linear.
\begin{enumerate}
\item[(i)] 
Suppose that $R:\mathcal{Z}\rightarrow \mathcal{X}$ is a continuous surjective linear map. Then $ \mathfrak{S} (TR)= \mathfrak{S} (T). $
\item[(ii)]
 Suppose that $S:\mathcal{Y}\rightarrow \mathcal{Z}$ is a continuous linear map. Then $ST$ is continuous if and only if $S(\mathfrak{S} (T))=(0)$.
\end{enumerate}
\end{lem}
\begin{prop}\label{taj2}
Let $D$ be a derivation on $\A\ltimes \U$ such that 
$$D((a,x))=(\delta_1 (a)+\tau_1 (x),\delta_2 (a)+\tau_2 (x))\quad\quad ((a,x)\in \A\ltimes \U)$$
and $\delta_2(\A)\subseteq ann_{\U}\U$.
Then, under any of the following conditions, \[D_1((a,x))=(\delta_1(a)+\tau_1(x),\tau_2(x))\]
is a continuous derivation on $\A\ltimes \U$ and $D=D_1+D_2$, where $D_2((a,x))=(0,\delta_2(a))$ is a derivation on $\A\ltimes \U$:
\begin{enumerate}
\item[(i)]
$ann_{\A}\U =(0)$ and $\tau_2$ is continuous;
\item[(ii)]
$\A$ possesses a bounded right approximate identity, there is a surjective left $\A$-module homomorphism 
$\phi :\A\rightarrow \U$, and both $\delta_1$ and $\tau_1$ are continuous;
\item[(iii)]
 $\A$ possesses a bounded right approximate identity, there is an injective left $\A$-module homomorphism 
$\phi :\A\rightarrow \U$, and both $\tau_1$ and $\tau_2$ are continuous.
\end{enumerate}
\end{prop}
\begin{proof}
 From Corollary \ref{tak}-$(ii)$,
\[D_1((a,x))=(\delta_1(a)+\tau_1(x),\tau_2(x)) \quad \text{and} \quad
D_2((a,x))=(0,\delta_2(a))\quad\quad ((a,x)\in \A\ltimes \U) \]
are derivations on $\A\ltimes \U$.\\
$(i)$ Since $ann_{\A}\U =(0)$, as in Proposition \ref{au1}-$(i)$, it can be proved that $\tau_1$ is continuous. By the continuity of $\tau_2$, the assumption $ann_{\A}\U =(0)$, the fact that $\delta_2(\A)\subseteq ann_{\U}\U$ and Theorem \ref{joda}-$(iii)$, we have 
$$\mathfrak{S} (\delta_1)\subseteq (\mathfrak{S} (\tau_2):\U)_{\A}=((0):\U)_{\A}=ann_{\A}\U =(0).$$
Therefore $\delta_1$ is continuous and so is $D_1$.\par 
Before proving parts $(ii)$ and $(iii)$, we show that every left $\A$-module homomorphism $\psi :\A\rightarrow \U$ is continuous. Let $\{a_k\}$ be a sequence in $\A$ such that $a_k\rightarrow 0$. By Cohen's factorization theorem there are some $c\in \A$ and a sequence $\{b_k\}$ in $\A$ for which $b_k\rightarrow 0$ and $a_k=b_k c \, \, (k\in \mathbb{N})$. Thus $\psi (a_k)=b_k\psi (c)\rightarrow 0$ and hence $\psi$ is continuous.
\\
$(ii)$ Let $\psi :\A\rightarrow \U$ be the map $\psi = \tau_2 \circ\phi - \phi \circ\delta_1$. By the condition $\delta_2(\A)\subseteq ann_{\U}\U$ and the properties of $\delta_1$ and $\tau_2$, it can be shown that $\psi$ is a left $\A$-module homomorphism. Therefore $\psi$ and $\phi$ are continuous, and hence $\tau_2 \circ\phi$ is continuous. So $\mathfrak{S} (\tau_2 \circ\phi)=(0)$. On the other hand, by Lemma \ref{da}-$(i)$, since $\phi$ is surjective, $\mathfrak{S} (\tau_2 \circ\phi)=\mathfrak{S} (\tau_2)$. Thus $\mathfrak{S} (\tau_2)=(0)$ and therefore $\tau_2$ is continuous. So by the hypothesis $D_1$ is continuous.
\\
$(iii)$
As in the previous part, put $\psi = \tau_2 \circ\phi - \phi \circ\delta_1$, which is a left $\A$-module homomorphism. Since $\tau_2$ and $\phi$ are continuous, it follows that $\phi \circ\delta_1 $ is continuous as well. By Lemma \ref{da}-$(ii)$ we have $\phi(\mathfrak{S} (\delta_1))=(0)$. Since $\phi$ is injective, it follows that $\mathfrak{S} (\delta_1)=(0)$.
So $\delta_1$ is continuous and, by the hypothesis, $D_1$ is continuous as well.
\end{proof}
Note that Proposition \ref{taj2} also holds if ``bounded right approximate identity'' and ``left $\A$-module homomorphism'' are replaced respectively by ``bounded left approximate identity'' and ``right $\A$-module homomorphism''.
\begin{rem}\label{taj11}
In Propositions \ref{taj1} and \ref{taj2}, if we assume that $ann_{\A}\U =(0)$, then, as in Proposition \ref{au1}-$(i)$, the continuity of $\tau_2$ implies the continuity of $\tau_1$, and if we assume that $\U$ has a bounded approximate identity, we conclude that $\tau_1 =0$. Therefore either of the conditions $ann_{\A}\U =(0)$ or $\U$ having a bounded approximate identity implies that $\tau_1$ is continuous, and therefore $D_1$ is continuous.
\end{rem}
\par 
In the following example we show that the derivation $D_2$ in Propositions \ref{taj1} and \ref{taj2} can be discontinuous.
\begin{exm}
Let $\mathcal{B}$ be a Banach algebra and $d:\mathcal{B}\rightarrow \mathcal{B}$ be a discontinuous derivation.
\begin{enumerate}
\item[(i)]
Let $\A:=\mathcal{B}\times \mathcal{B}$ be the direct product of Banach algebras and let $\U :=\mathcal{B}$ become a Banach $\A$-bimodule, with compatible actions, via the following module actions:
\[ 
(a,b)x=ax \quad \text{and} \quad 
x(a,b)= xa\quad\quad ((a,b)\in \A , x\in \U).
\]
Then $(0)\times \mathcal{B}\subseteq ann_{\A}\U$.
Define the map $\delta_1: \A\rightarrow \A$ by $\delta_1((a,b))=(0,d(b))$. Then $\delta_1$ is a discontinuous derivation on $\A$ such that 
$$\delta_1 (\A)\subseteq (0)\times \mathcal{B}\subseteq ann_{\A}\U. $$
So the map $D_2: T(\A,\U)\rightarrow T(\A,\U)$ defined by $D_2(((a,b),x))=(\delta_1((a,b)),0)$ is a discontinuous derivation.
\item[(ii)]
Let $\U :=\mathcal{B}\times \mathcal{B}$ be the direct product of Banach spaces, which becomes a Banach algebra with the product 
$$(x,y)(x',y')=(xx',0)\quad \quad ((x,y),(x',y')\in \U).$$
Let $\A:=\mathcal{B}$ and turn $\U$ into a Banach $\A$-bimodule with the pointwise module actions, which are compatible with its algebraic operations.\par 
Now consider the map $\delta_2: \A\rightarrow \U$ defined by $\delta_2(a)=(0,d(a))$. Then $\delta_2$ is a discontinuous derivation such that 
$$\delta_2(\A)\subseteq (0)\times \mathcal{B}\subseteq ann_{\U}\U.$$
Hence the map $D_2: \A \ltimes \U \rightarrow \A \ltimes \U$ defined by 
$$D_2((a,(x,y)))=(0,\delta_2(a))$$
is a discontinuous derivation on $\A\ltimes \U$.
\item[(iii)]
Suppose that $\U$ is a Banach space and $T:\U\rightarrow \mathbb{C}$ is a discontinuous linear functional.
Set $\A:=T(\mathbb{C},\mathbb{C})$ and turn $\U$ into a Banach $\A$-bimodule by the actions below:
\[(a,b)x=ax \quad \text{and} \quad x(a,b)=xa \quad\quad ((a,b)\in \A, x\in \U).\]
Consider the Banach algebra $T(\A,\U)$ and the map $\tau_1:\U\rightarrow \A$ defined by 
$$\tau_1(x)=(0,T(x)).$$
Then $\tau_1$ is an $\A$-bimodule homomorphism such that 
$$x\tau_1(x')+\tau_1 (x)x'=0\quad\quad (x,x'\in \U).$$
Thus the map $D_2(((a,b),x))=(\tau_1(x),0)$ is a derivation on $T(\A,\U)$ which is discontinuous.
\end{enumerate}
\end{exm}
Note that in Proposition \ref{taj2}, if we assume that $\delta_2$ is continuous, then the derivation $D$ is continuous as well.

\section{The first cohomology group of $\A\ltimes\U$}
In this section we determine the first cohomology group of $\A\ltimes\U$ in some special cases. Throughout this section, for two linear spaces $\mathcal{X}$ and $\mathcal{Y}$, we write $ \mathcal{X} \cong \mathcal{Y}$ to indicate that the spaces are linearly isomorphic. 
\par 
From this point up to the last section we assume that every derivation $D$ on $\A\ltimes\U$ is of the form 
\[D((a,x))=(\delta_1 (a),\delta_2 (a)+\tau_2 (x))\quad\quad (a\in \A,x\in \U),\]
in which $\delta_1:\A\rightarrow\A$, $\delta_2:\A\rightarrow\U$ and $\tau_2:\U\rightarrow\U$
are derivations such that 
\[
\tau_2 (ax)=a\tau_2 (x)+\delta_1 (a)x+\delta_2 (a)x \,\, \text{and} \,\, 
\tau_2 (xa)=\tau_2 (x)a+x\delta_1 (a)+x\delta_2 (a) \quad\quad (a\in \A,x\in \U).
\]
In other words, we assume that for every derivation $D$ on $\A\ltimes\U$ we have $\tau_1=0$, where $\tau_1$ is as in Theorem \ref{asll}. 
\par 
If for every $\delta\in Z^{1}(\A)$ we have $\delta(\A)\subseteq ann_{\A}\U$, then by Corollary \ref{tak} the map $D$ given by $D((a,x))=(\delta(a),0)$ is a continuous derivation on $\A\ltimes\U$, and for every continuous derivation $D$ on $\A\ltimes\U$ the maps $D_1((a,x))=(0,\delta_2 (a)+\tau_2 (x))$ and $D_2((a,x))=(\delta_1(a),0)$ are derivations in $Z^{1}(\A\ltimes\U)$ with $D=D_1+D_2$. By these arguments, in this case,
\[Z^{1}(\A\ltimes\U)\cong \mathcal{W} \times Z^{1}(\A),\]
where $\mathcal{W}$ is the linear subspace consisting of all continuous derivations on $\A\ltimes\U$ of the form 
$D((a,x))=(0,\delta_2 (a)+\tau_2 (x))$, where $\delta_2\in Z^{1}(\A,\U)$, $\tau_2\in Z^{1}(\U)$ and 
\[\tau_2 (ax)=a\tau_2 (x)+\delta_2 (a)x \,\, \text{and} \,\,\tau_2 (xa)=\tau_2 (x)a+x\delta_2 (a)\quad\quad (a\in\A,x\in\U).\]
\begin{thm}\label{11}
Suppose that for every derivation $\delta\in Z^{1}(\A)$ we have $\delta(\A)\subseteq ann_{\A}\U$.
If $H^{1}(\A,\U)=(0)$, then \[H^{1}(\A\ltimes\U)\cong \frac{Z^{1}(\A)\times [Hom_{\A}(\U)\cap Z^{1}(\U)]}{\mathcal{E}},\]
where $\mathcal{E}$ is the following linear subspace of $Z^{1}(\A)\times [Hom_{\A}(\U)\cap Z^{1}(\U)]$:
\[\mathcal{E}=\{(id_{a},r_{a}+id_{\U,x})\, \mid \, a\in\A,\, x\in\U, \, id_{\A , x}=0\}.\]
\end{thm}
\begin{proof}
Define the map \[\Phi:Z^{1}(\A)\times [Hom_{\A}(\U)\cap Z^{1}(\U)]\rightarrow H^{1}(\A\ltimes\U)\]
by $\Phi ((\delta,\tau))=[D_{\delta,\tau}]$,
where $D_{\delta,\tau} ((a,x))=(\delta (a),\tau (x))$ and $[D_{\delta,\tau}]$ denotes the equivalence class of $D_{\delta,\tau}$ in $H^{1}(\A\ltimes\U)$. By Corollary \ref{tak}, the maps $D_1:\A\ltimes\U\rightarrow \A\ltimes\U$ and $D_2:\A\ltimes\U\rightarrow\A\ltimes\U$ given respectively by $D_1((a,x))=(0,\tau (x))$ and $D_2((a,x))=(\delta(a),0)$ are continuous derivations on $\A\ltimes\U$, so $\Phi$ is well-defined. Clearly $\Phi$ is linear. Let $D\in Z^{1}(\A\ltimes\U)$. By the hypothesis and the discussion before the theorem, $D=D_1+D_2$, where $D_1((a,x))=(0,\delta_2(a)+\tau_2(x))$ and $D_2((a,x))=(\delta_1(a),0)$ are continuous derivations on $\A\ltimes\U$ with $\delta_1\in Z^{1}(\A)$, $\delta_2\in Z^{1}(\A,\U)$ and $\tau_2\in Z^{1}(\U)$. Since $H^{1}(\A,\U)=(0)$, there is some $x_0\in \U$ such that $\delta_2 =id_{\A,x_0}$. Define the map $\tau:\U\rightarrow \U $ by $\tau=\tau_2 - id_{\U,x_0}$. By the properties of $\tau_2$ and Remark \ref{inn2}, it follows that $\tau\in Hom_{\A}(\U)\cap Z^{1}(\U)$. Now we have
\[
D((a,x))-D_{\delta_1 ,\tau} ((a,x))=(0,id_{\A,x_0}(a)+id_{\U,x_0}(x))= id_{(0,x_0)}((a,x)) \quad \quad ((a,x)\in\A\ltimes\U).\]
So $[D]=[D_{\delta_1,\tau}]$ and hence $\Phi ((\delta_1,\tau))=[D_{\delta_1 ,\tau}]=[D]$. Thus $\Phi$ is surjective.
\par
It can easily be checked that $\mathcal{E}$ is a linear subspace, and if $(\delta,\tau)\in ker \Phi$, then $D_{\delta,\tau}$ is an inner derivation on $\A\ltimes\U$. So for some $(a_0,x_0)\in \A\ltimes\U$, 
\[D_{\delta,\tau}((a,x))=id_{(a_0,x_0)}((a,x))=(id_{a_0}(a),id_{\A,x_0}(a)+r_{a_0}(x)+id_{\U,x_0}(x)),\]
which implies that \[\delta =id_{a_0}, \quad \tau =r_{a_0}+id_{\U,x_0}\quad \text {and} \quad id_{\A,x_0}=0.\] Hence $(\delta,\tau)\in \mathcal{E}$. Conversely, suppose that $(\delta,\tau)\in \mathcal{E}$. Then for some $a_0\in\A$ and $x_0\in\U$ we have $\delta =id_{a_0}$ and $ \tau =r_{a_0}+id_{\U,x_0}$, where $id_{\A,x_0}=0$. So 
\[
D_{\delta,\tau}((a,x))=(id_{a_0}(a),r_{a_0}(x)+id_{\U ,x_0}(x))
= id_{(a_0,x_0)}((a,x)).\]
Thus $D_{\delta,\tau}\in N^{1}(\A\ltimes\U)$ and hence $(\delta,\tau)\in ker\Phi$. So $\mathcal{E}=ker\Phi$. Therefore, by the above arguments, we have 
\[H^{1}(\A\ltimes\U)\cong \frac{Z^{1}(\A)\times [Hom_{\A}(\U)\cap Z^{1}(\U)]}{\mathcal{E}}.\]
\end{proof}
\begin{cor}
Suppose that for every $\delta\in Z^{1}(\A)$ we have $\delta (\A)\subseteq ann_{\A}\U$. If $H^{1}(\A,\U)=(0)$ and $H^{1}(\A\ltimes\U)=(0)$, then $H^{1}(\A)=(0)$ and $\frac{Hom_{\A}(\U)\cap Z^{1}(\U)}{C_{\A}(\U)+I(\U)}=(0).$
\end{cor}
\begin{proof}
Let $\tau\in Hom_{\A}(\U)\cap Z^{1}(\U)$ and $\delta\in Z^{1}(\A)$.
Since $\delta (\A)\subseteq ann_{\A}\U$, it follows that 
\[ (0,\tau)\,,\, (\delta,0)\in Z^{1}(\A)\times [Hom_{\A}(\U)\cap Z^{1}(\U)].\]
By the hypothesis and the preceding theorem, there exist $a_0,a_0'\in\A$ and $x_0,x_0'\in\U$ with $id_{\A,x_0}=id_{\A,x_0'}=0$ such that \[(\delta,0)=(id_{a_0},r_{a_0}+id_{\U,x_0})\quad ,\quad (0,\tau)=(id_{a_0'},r_{a_0'}+id_{\U,x_0'}).\]
Hence $\delta=id_{a_0}$, and so $H^{1}(\A)=(0)$. Moreover, $\tau=r_{a_0'}+id_{\U,x_0'}$, where $id_{a_0'}=0$. Thus $\tau\in C_{\A}(\U)+I(\U)$.
 \end{proof}
 If $\A$ is a Banach algebra with $ann_{\A}\A=(0)$ and $\delta\in Z^{1}(\A)$ with $\delta\neq 0$, then $\delta(\A)\not\subseteq ann_{\A}\A =(0)$. So in this case the condition $\delta (\A)\subseteq ann_{\A}\A$ is not satisfied on $\A\ltimes\A$ for every derivation $\delta\in Z^{1}(\A)$, which shows that this condition does not hold in general. In the following we give an example of Banach algebras satisfying the conditions of Theorem \ref{11}.
 \begin{exm}
Let $\A$ be a semisimple commutative Banach algebra. By Thomas' theorem \cite{tho}, we have $Z^{1}(\A)=(0)$. So in this case, for every $\delta\in Z^{1}(\A)$ and every Banach $\A$-bimodule $\U$, $\delta (\A)\subseteq ann_{\A}\U$ (in fact $\delta=0$). In particular, for a semisimple commutative Banach algebra $\A$, if $\U$ is a Banach $\A$-bimodule such that $H^{1}(\A,\U)=(0)$ and $\A\ltimes\U$ satisfies the conditions of Theorem \ref{11}, then
 \[H^{1}(\A\ltimes\U)\cong \frac{Hom_{\A}(\U)\cap Z^{1}(\U)}{R_{\A}(\U)+ I(\U)}.\]
 \end{exm}
 Note that if $\A$ is a commutative Banach algebra with $H^{1}(\A)=(0)$ and $H^{1}(\A,\U)=(0)$, then the above example applies again.
 \par 
We continue by characterizing the first cohomology group of $\A\ltimes\U$ in another case.
\par 
If for each $\delta_2\in Z^{1}(\A,\U)$ we have $\delta_2(\A)\subseteq ann_{\U}\U$, then by Corollary \ref{tak} the map $D ((a,x))=(0,\delta_2(a))$ is a continuous derivation on $\A\ltimes\U$, and for each derivation $D\in Z^{1}(\A\ltimes\U)$ the maps $D_1 ((a,x))=(\delta_1 (a),\tau_2(x))$ and $D_2((a,x))=(0,\delta_2(a))$ are derivations in $Z^{1}(\A\ltimes\U)$ with $D=D_1+D_2$. So, by this argument, in this case
 \[Z^{1}(\A\ltimes\U)\cong Z^{1}(\A ,\U)\times \mathcal{W},\]
 where $\mathcal{W}$ is the linear subspace of all continuous derivations on $\A\ltimes\U$ of the form $D_1((a,x))=(\delta_1 (a),\tau_2(x))$ with $\delta_1\in Z^{1}(\A)\,,\tau_2\in Z^{1}(\U)$ and 
 \[\tau_2 (ax)=a\tau_2 (x)+\delta_1 (a)x\quad \text{and} \quad \tau_2 (xa)=\tau_2 (x)a+x\delta_1 (a)\quad\quad ((a,x)\in \A\ltimes\U).\]
 \begin{thm}\label{22}
 Suppose that for each derivation $\delta\in Z^{1}(\A,\U)$ we have $\delta(\A)\subseteq ann_{\U}\U$.
If $H^{1}(\A)=(0)$, then 
 \[H^{1}(\A\ltimes\U)\cong \frac{Z^{1}(\A,\U)\times [Hom_{\A}(\U)\cap Z^{1}(\U)]}{\mathcal{F}},\]
 where $\mathcal{F}$ is the following linear subspace of $Z^{1}(\A,\U)\times [Hom_{\A}(\U)\cap Z^{1}(\U)]$:
 \[\mathcal{F}=\{ (id_{\A,x},r_{a}+id_{\U, x})\, \mid \, x\in\U,\, a\in\A, \, id_{a}=0 \}.\]
 \end{thm}
 \begin{proof}
 Define the map 
 \[\Phi: Z^{1}(\A,\U)\times [Hom_{\A}(\U)\cap Z^{1}(\U)]\rightarrow H^{1}(\A\ltimes\U)\]
 by $\Phi ((\delta,\tau))=[D_{\delta,\tau}]$,
 where $D_{\delta,\tau}((a,x))=(0,\delta(a)+\tau(x))$
 is a continuous derivation on $\A\ltimes\U$. Clearly $\Phi$ is a well-defined linear map. If 
 $D\in Z^{1}(\A\ltimes\U)$, then $D=D_1+D_2$, where $D_1((a,x))=(\delta_1 (a),\tau_2(x))$ and $D_2((a,x))=(0,\delta_2(a))$ are continuous derivations on $\A\ltimes\U$ with $\delta_1\in Z^{1}(\A)$, $\delta_2\in Z^{1}(\A,\U)$ and $\tau_2\in Z^{1}(\U)$. Since $H^{1}(\A)=(0)$, there is some $a_0\in \A$ such that $\delta_1=id_{a_0}$. Consider the map $\tau=\tau_2-r_{a_0}$ on $\U$. We have 
 $\tau\in Hom_{\A}(\U)\cap Z^{1}(\U)$. Also
 \[D((a,x))-D_{\delta_2 ,\tau}((a,x))=id_{(a_0,0)}((a,x)).\]
 So $[D]=[D_{\delta_2 ,\tau}]$ and hence $\Phi ((\delta_2,\tau))=[D_{\delta_2,\tau}]=[D]$.
 Therefore $\Phi$ is surjective. A straightforward argument shows that $\mathcal{F}$ is a linear subspace and $ker\Phi =\mathcal{F}$. This establishes the desired vector space isomorphism. 
 \end{proof}
 \begin{cor}\label{222}
 Suppose that for any derivation $\delta\in Z^{1}(\A,\U)$ we have $\delta(\A)\subseteq ann_{\U}\U$. If 
 $H^{1}(\A)=(0)$ and $H^{1}(\A\ltimes\U)=(0)$, then $H^{1}(\A,\U)=(0)$ and 
 $\frac{Hom_{\A}(\U)\cap Z^{1}(\U)}{C_{\A}(\U)+I(\U)}=(0).$
 \end{cor}
 \begin{proof}
 Let $\delta\in Z^{1}(\A,\U)$ and $\tau\in Hom_{\A}(\U)\cap Z^{1}(\U)$. By the hypothesis, 
$(\delta,0),(0,\tau)\in Z^{1}(\A,\U)\times [Hom_{\A}(\U)\cap Z^{1}(\U)]$. Again by the hypothesis and the preceding theorem, there are $x_0,x_0'\in\U$ and $a_0,a_0'\in \A$ with $id_{a_0}=id_{a_0'}=0$ such that 
\[(\delta,0)=(id_{\A,x_0},r_{a_0}+id_{\U,x_0})\quad , \quad (0,\tau)=(id_{\A,x_0'},r_{a_0'}+id_{\U,x_0'}).\]
Now the result follows from these equalities.
 \end{proof}
If $\A$ is a Banach algebra with $ann_{\A}\A =(0)$ and $\delta\in Z^{1}(\A)$ with $\delta\neq 0$, then, putting $\U =\A$, we have $\delta(\A)\not\subseteq ann_{\U}\U =ann_{\A}\A =(0)$. So in this case the condition $\delta(\A)\subseteq ann_{\U}\U$ is not satisfied on $\A\ltimes\U$ for every derivation $\delta\in Z^{1}(\A,\U)$. This shows that this condition does not hold in general. In the following we give an example of Banach algebras satisfying the conditions of Theorem \ref{22}.
\begin{exm}
If $\A$ is a Banach algebra and $\U$ is a commutative Banach $\A$-bimodule such that $H^{1}(\A,\U)=(0)$, then $Z^{1}(\A,\U)=(0)$. So in this case, for every $\delta\in Z^{1}(\A,\U)$, $\delta(\A)\subseteq ann_{\U}\U$ (in fact $\delta =0$).
\\begin{exm}\nIf $\\A$ is a Banach algebra and $\\U$ is a commutative Banach $\\A$-bimodule such that $H^{1}(\\A,\\U)=(0)$, then $Z^{1}(\\A,\\U)=(0)$ (indeed, commutativity of the bimodule gives $N^{1}(\\A,\\U)=(0)$). So in this case, for every $\\delta\\in Z^{1}(\\A,\\U)$, $\\delta(\\A)\\subseteq ann_{\\U}\\U$ (in fact $\\delta =0$). In this case, if $H^{1}(\\A)=(0)$ and $\\A\\ltimes\\U$ satisfies the conditions of Theorem \\ref{22}, then\n \\[ H^{1}(\\A\\ltimes\\U)\\cong \\frac{Hom_{\\A}(\\U)\\cap Z^{1}(\\U)}{C_{\\A}(\\U)+I(\\U)}.\\]\n\\end{exm}\n Note that if $\\A$ is a super amenable Banach algebra and $\\U$ is a commutative Banach $\\A$-bimodule, then the above example applies.\n \\par \n Next we consider the case in which, for every derivation $\\delta_1\\in Z^{1}(\\A)$ and $\\delta_2\\in Z^{1}(\\A,\\U)$, we have $\\delta_1(\\A)\\subseteq ann_{\\A}\\U$ and $\\delta_2(\\A)\\subseteq ann_{\\U}\\U$. In fact, under these conditions on $\\A\\ltimes\\U$, for every $\\delta_1\\in Z^{1}(\\A)$ and $\\delta_2\\in Z^{1}(\\A,\\U)$,\n \\[\\delta_1(a)x+\\delta_2(a)x=0\\quad \\text {and} \\quad x\\delta_1 (a)+x\\delta_2 (a)=0 \\quad\\quad (a\\in \\A,\\ x\\in \\U).\\]\nIn this case, every derivation $D\\in Z^{1}(\\A\\ltimes\\U)$ can be written as $D=D_1+D_2$ where \n \\[D_{1}((a,x))=(\\delta_1 (a),\\delta_2(a))\\quad \\text{and} \\quad D_{2}((a,x))=(0,\\tau_2(x))\\]\nare continuous derivations on $\\A\\ltimes\\U$ and $\\tau_2\\in Hom_{\\A}(\\U)\\cap Z^{1}(\\U)$. In this case, we conclude that \n$$R_{\\A}(\\U)+N^{1}(\\U)\\subseteq Hom_{\\A}(\\U)\\cap Z^{1}(\\U).$$ \n\\begin{thm}\\label{33}\n Suppose that for every $\\delta_1\\in Z^{1}(\\A)$ and $\\delta_2\\in Z^{1}(\\A,\\U)$ we have \\[\\delta_1(\\A)\\subseteq ann_{\\A}\\U \\quad \\text{and} \\quad \\delta_{2}(\\A)\\subseteq ann _{\\U}\\U.\\]\nSuppose further that $\\frac{Hom_{\\A}(\\U)\\cap Z^{1}(\\U)}{R_{\\A}(\\U)+ N^{1}(\\U)}=(0)$. Then \n \\[H^{1}(\\A\\ltimes\\U)\\cong \\frac{Z^{1}(\\A)\\times Z^{1}(\\A,\\U)}{\\mathcal{K}},\\]\n where $\\mathcal{K}$ is the following linear subspace of $Z^{1}(\\A)\\times Z^{1}(\\A,\\U)$:\n \\[\\mathcal{K}=\\{(id_{a},id_{\\A,x})\\, \\mid \\, a\\in \\A,\\ x\\in \\U,\\ r_{a}+id_{\\U,x}=0\\}.\\]\n \\end{thm}\n \\begin{proof}\n Define the map \n \\[\\Phi: Z^{1}(\\A)\\times Z^{1}(\\A,\\U)\\rightarrow H^{1}(\\A\\ltimes\\U)\\]\n by $\\Phi ((\\delta_1,\\delta_2))=[D_{\\delta_1 , \\delta_2}]$,\n where $D_{\\delta_1 , \\delta_2} ((a,x))=(\\delta_1 (a),\\delta_2(a))$ is a continuous derivation on $\\A\\ltimes\\U$. The map $\\Phi$ is well-defined and linear. If $D\\in Z^{1}(\\A\\ltimes\\U)$, then $D((a,x))=(\\delta_1 (a),\\delta_2(a)+\\tau_2(x))$. By the hypothesis, $\\tau_2=r_{a}+id_{\\U,x}$ for some $a\\in\\A$ and $x\\in\\U$. Define the derivations $d_1\\in Z^{1}(\\A)$ and $d_2\\in Z^{1}(\\A,\\U)$ by $d_1=\\delta_1 -id_{a}$ and $d_2=\\delta_2 -id_{\\A,x}$ respectively. Then $D-D_{d_1 ,d_2}=id_{(a ,x )}$. So $\\Phi ((d_1,d_2))=[D_{d_1 , d_2}]=[D]$.\nThus $\\Phi$ is surjective. It can be easily seen that $ker \\Phi =\\mathcal{K}$. So we have the desired vector space isomorphism.\n \\end{proof}
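\nThe identification $ker\\Phi =\\mathcal{K}$ can again be read off from the formula for inner derivations of $\\A\\ltimes\\U$ recorded after Corollary \\ref{222}; a short check, included here only to make the proof self-contained, shows that\n\\[ D_{\\delta_1,\\delta_2}=id_{(a_0,x_0)}\\quad \\Longleftrightarrow\\quad \\delta_1=id_{a_0},\\ \\ \\delta_2=id_{\\A,x_0}\\ \\ \\text{and}\\ \\ r_{a_0}+id_{\\U,x_0}=0,\\]\nsince the $\\U$-component of $id_{(a_0,x_0)}((b,y))$ equals $id_{\\A,x_0}(b)+(r_{a_0}+id_{\\U,x_0})(y)$, which must coincide with $\\delta_2(b)$ for all $b\\in\\A$ and $y\\in\\U$.\n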
\\begin{cor}\\label{333}\n Suppose that for every $\\delta_1\\in Z^{1}(\\A)$ and $\\delta_2\\in Z^{1}(\\A,\\U)$ we have \\[\\delta_1(\\A)\\subseteq ann_{\\A}\\U \\quad \\text{and} \\quad \\delta_2(\\A)\\subseteq ann _{\\U}\\U.\\] Suppose further that \n $\\frac{Hom_{\\A}(\\U)\\cap Z^{1}(\\U)}{{R_{\\A}(\\U)+ N^{1}(\\U)}}=(0)$. If $H^{1}(\\A\\ltimes\\U)=(0)$, then $H^{1}(\\A)=(0)$ and $H^{1}(\\A,\\U)=(0)$.\n \\end{cor}\n \\begin{proof}\n Let $\\delta_1\\in Z^{1}(\\A)$ and $\\delta_2\\in Z^{1}(\\A,\\U)$. By the hypothesis \n \\[(\\delta_1 , 0)\\,,\\, (0,\\delta_2)\\in Z^{1}(\\A)\\times Z^{1}(\\A,\\U).\\]\n Again by the hypothesis and the preceding theorem, there exist elements $(id_{a_0},id_{\\A,x_0})\\,,\\,(id_{a_1},id_{\\A,x_1})\\in \\mathcal{K}$ such that \n $(\\delta_1 , 0)=(id_{a_0},id_{\\A,x_0})$ and $(0,\\delta_2)=(id_{a_1},id_{\\A,x_1})$, and so $\\delta_1$ and $\\delta_2$ are inner.\n \\end{proof} \nIf $H^{1}(\\U)=(0)$, then since $Hom_{\\A}(\\U)\\cap Z^{1}(\\U)\\subseteq Z^{1}(\\U)= N^{1}(\\U)\\subseteq R_{\\A}(\\U)+ N^{1}(\\U)$, it follows that \n $\\frac{Hom_{\\A}(\\U)\\cap Z^{1}(\\U)}{{R_{\\A}(\\U)+ N^{1}(\\U)}}=(0)$. So if $H^{1}(\\U)=(0)$, the quotient hypothesis of Theorem \\ref{33} and Corollary \\ref{333} is automatically satisfied. \n \\par \n Let $\\A$ be a Banach algebra with $ann_{\\A}\\A =(0)$ and $\\delta_1 ,\\delta_2\\in Z^{1}(\\A)$ such that $\\delta_1 +\\delta_2 \\neq 0$. If we put $\\tau =\\delta_1+\\delta_2$ and define the linear map $D$ on $\\A\\ltimes\\A$ by \n \\[D((a,x))=(\\delta_1 (a),\\delta_2(a)+\\tau(x))\\quad\\quad ((a,x)\\in \\A\\ltimes\\A),\\]\n then $D\\in Z^{1}(\\A\\ltimes\\A )$. But for $a, x\\in\\A$, the equation $\\delta_1 (a)x+\\delta_2 (a)x=0$\n is not necessarily satisfied, so this example need not fulfil the conditions of Theorem \\ref{33}. \n \\par \nThe last case we consider is as follows.\n \\begin{thm}\\label{44}\n Let $H^{1}(\\A)=(0)$ and $H^{1}(\\A,\\U)=(0)$. Then \n \\[H^{1}(\\A\\ltimes\\U)\\cong \\frac{Hom_{\\A}(\\U)\\cap Z^{1}(\\U)}{C_{\\A}(\\U)+I(\\U)}.\\]\n\\end{thm} \n\\begin{proof}\nDefine the map \n\\[\\Phi : Hom_{\\A}(\\U)\\cap Z^{1}(\\U)\\rightarrow H^{1}(\\A\\ltimes\\U)\\]\nby $\\Phi(\\tau)=[D_{\\tau}]$ where $D_{\\tau}((a,x))=(0,\\tau (x))$ is a continuous derivation on $\\A\\ltimes\\U$. The map $\\Phi$ is well-defined and linear. If $D\\in Z^{1}(\\A\\ltimes\\U )$, then $D((a,x))=(\\delta_1 (a),\\delta_2 (a)+\\tau_2(x))$. By the hypotheses, $\\delta_1 =id _{a}$ and $\\delta_2 =id_{\\A, x}$ for some $a\\in \\A$ and $x\\in \\U$. Define the map $\\tau:\\U\\rightarrow \\U$ by $\\tau=\\tau_2-r_{a}-id_{\\U, x}$. Then $\\tau\\in Hom_{\\A}(\\U)\\cap Z^{1}(\\U)$ and $D-D_{\\tau}=id_{(a , x)}$. Hence $\\Phi (\\tau)=[D_\\tau]=[D]$. Thus $\\Phi$ is surjective. By Corollary \\ref{tak}-$(iv)$, $ker \\Phi =C_{\\A}(\\U)+I(\\U)$. So the desired vector space isomorphism is established.\n\\end{proof}\nThe following corollary follows immediately from the preceding theorem.\n\\begin{cor}\nIf $H^{1}(\\A)=(0),H^{1}(\\A,\\U)=(0)$ and $H^{1}(\\A\\ltimes\\U)=(0)$, then for every $\\tau\\in Hom_{\\A}(\\U)\\cap Z^{1}(\\U)$ there are some $r_{a}\\in C_{\\A}(\\U)$ and $id_{\\U , x}\\in I(\\U)$ such that $\\tau =r_{a}+id_{\\U,x}$.\n\\end{cor}\nIn the following an example of Banach algebras satisfying the conditions of Theorem \\ref{44} is given.\n\\begin{exm}\nIf $\\A$ is a weakly amenable commutative Banach algebra, then for any commutative Banach $\\A$-bimodule $\\mathcal{X}$ we have $H^{1}(\\A,\\mathcal{X})=(0)$. So in this case, if $\\U$ is a commutative Banach $\\A$-bimodule satisfying the conditions of Theorem \\ref{44}, then $Z^{1}(\\A)=H^{1}(\\A)=(0)$ and $Z^{1}(\\A,\\U)=H^{1}(\\A,\\U)=(0)$ (commutativity makes all inner derivations vanish, so $Z^{1}=H^{1}$ in both cases), and hence \n\\[H^{1}(\\A\\ltimes\\U)\\cong \\frac{Hom_{\\A}(\\U)\\cap Z^{1}(\\U)}{R_{\\A}(\\U)+ N^{1}(\\U)}\\]\n(note that here $C_{\\A}(\\U)=R_{\\A}(\\U)$ and $I(\\U)=N^{1}(\\U)$, since $Z^{1}(\\A)=(0)$ and $Z^{1}(\\A,\\U)=(0)$).\n\\end{exm}\nThis example could also be derived from Theorem \\ref{22}.
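\nAs a simple special case, take $\\U =\\A$ in the last example, with $\\A$ weakly amenable and commutative. Then (a short computation recorded here only for illustration)\n\\[H^{1}(\\A\\ltimes\\A)\\cong \\frac{Hom_{\\A}(\\A)\\cap Z^{1}(\\A)}{R_{\\A}(\\A)+ N^{1}(\\A)}=\\frac{Hom_{\\A}(\\A)\\cap (0)}{(0)}=(0),\\]\nsince weak amenability of the commutative algebra $\\A$ gives $Z^{1}(\\A)=(0)$, while commutativity alone gives $R_{\\A}(\\A)=N^{1}(\\A)=(0)$.\n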
\\section{Applications}\nIn this section we investigate applications of the previous sections and give some examples.\n\\subsection*{Direct product of Banach algebras}\nLet $\\A$ and $\\U$ be Banach algebras. With trivial module actions $\\A\\U=\\U\\A =(0)$, as we saw in Example \\ref{dp}, $\\A\\ltimes\\U=\\A\\times\\U$, where $\\A\\times\\U$ is the $l^{1}$-direct product of the Banach algebras $\\A$ and $\\U$. In this case, $ann_{\\A}\\U =\\A$ and so for every derivation $\\delta:\\A\\rightarrow \\A$ we have $\\delta(\\A)\\subseteq ann_{\\A}\\U$. Also in this case, $R_{\\A}(\\U)=(0)$ and $Hom_{\\A}(\\U)=\\mathbb{B}(\\U)$. The following proposition follows from Theorem \\ref{asll}.\n\\begin{prop}\\label{dpd}\nLet $\\A$ and $\\U$ be Banach algebras and $D:\\A\\times\\U\\rightarrow\\A\\times\\U$ be a map. The following are equivalent.\n\\begin{enumerate}\n\\item[(i)]\n$D$ is a derivation.\n\\item[(ii)]\n\\[D((a,x))=(\\delta_1(a)+\\tau_1(x),\\delta_2(a)+\\tau_2(x))\\quad \\quad ((a,x)\\in\\A\\times\\U)\\]\nsuch that $\\delta_1:\\A\\rightarrow\\A,\\tau_2:\\U\\rightarrow\\U$ are derivations and $\\tau_1:\\U\\rightarrow\\A$ and $\\delta_2:\\A\\rightarrow\\U$ are linear maps satisfying the following conditions:\n\\[\\tau_1(\\U)\\subseteq ann_{\\A}\\A , \\,\\, \\delta_2(\\A)\\subseteq ann_{\\U}\\U, \\,\\, \\tau_1(xy)=0, \\, \\text{and} \\,\\, \\delta_2(ab)=0 \\quad \\quad(a,b\\in\\A , x,y\\in \\U).\\]\n\\end{enumerate}\nMoreover, if $ann_{\\U}\\U=(0)$ or $\\A^{2}=\\A$, then $\\delta_2=0$; if $ann_{\\A}\\A=(0)$ or $\\U ^{2}=\\U$, then $\\tau_1=0$.\n\\end{prop}\nBy this proposition it is clear that if $\\A$ or $\\U$ has a bounded approximate identity, then $\\delta_2=0$ and $\\tau_1=0$. Hence in this case every derivation $D$ on $\\A\\times\\U$ is of the form $D((a,x))=(\\delta(a),\\tau(x))$ where $\\delta$ and $\\tau$ are derivations on $\\A$ and $\\U$, respectively.\n\\begin{rem}\\label{d1}\nIf $\\A$ and $\\U$ are Banach algebras, then for any derivations $\\delta:\\A\\rightarrow\\A$ and $\\tau:\\U\\rightarrow\\U$ the maps $D_1$ and $D_2$ on $\\A\\times\\U$ given by \n\\[D_{1}((a,x))=(\\delta (a),0)\\quad \\text{and} \\quad D_2((a,x))=(0,\\tau (x)),\\]\nare derivations. Consequently, if every derivation on $\\A\\times\\U$ is continuous, then every derivation on $\\A$ and every derivation on $\\U$ is continuous.\n\\par \nConversely, suppose that $\\A$ or $\\U$ has a bounded approximate identity. Then, by the above arguments, if every derivation on $\\A$ and every derivation on $\\U$ is continuous, every derivation on $\\A\\times\\U$ is continuous.\n\\end{rem}
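\nThe first assertion of Remark \\ref{d1} is a one-line computation, recorded here for completeness. With the coordinatewise product $(a,x)(b,y)=(ab,xy)$ that the trivial actions induce, any componentwise map $D((a,x))=(\\delta(a),\\tau(x))$ built from derivations $\\delta$ and $\\tau$ satisfies\n\\[ D((a,x)(b,y))=(\\delta(ab),\\tau(xy))=(a\\delta(b)+\\delta(a)b\\,,\\ x\\tau(y)+\\tau(x)y)=(a,x)D((b,y))+D((a,x))(b,y),\\]\nand taking $\\tau=0$ or $\\delta=0$ gives $D_1$ and $D_2$.\n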
\\begin{rem}\\label{d2}\nLet $\\A$ and $\\U$ be Banach algebras and $D\\in Z^{1}(\\A\\times\\U)$. If ($ann_{\\U}\\U =(0)$ or $\\overline{\\A^{2}}=\\A$) and ($ann_{\\A}\\A =(0)$ or $\\overline{\\U ^{2}}=\\U$), then in the representation of $D$ as in the preceding proposition, $\\delta_2=0$ and $\\tau_1=0$. So $D((a,x))=(\\delta(a),\\tau(x))$ where $\\delta \\in Z^{1}(\\A)$ and $\\tau\\in Z^{1}(\\U)$. In this case we can conclude\n\\[ Z^{1}(\\A\\times\\U)\\cong Z^{1}(\\A)\\times Z^{1}(\\U) \\quad \\text{and} \\quad N^{1}(\\A\\times\\U)\\cong N^{1}(\\A)\\times N^{1}(\\U).\\]\nMoreover\n\\[ H^{1}(\\A\\times\\U)\\cong H^{1}(\\A)\\times H^{1}(\\U).\\]\nIn particular, if $\\A$ or $\\U$ has a bounded approximate identity, then the above observation holds.\n\\par \nIn the case of the isomorphism of the first cohomology groups, Theorem \\ref{11} allows one to weaken the conditions. Indeed $\\A\\times\\U$ with $\\overline{\\U ^{2}}=\\U$ or $ann_{\\A}\\A =(0)$ satisfies the conditions of Theorem \\ref{11}. Since $Hom_{\\A}(\\U)=\\mathbb{B}(\\U)$ and $R_{\\A}(\\U)=(0)$, in this case \n\\[H^{1}(\\A\\times\\U)\\cong H^{1}(\\A)\\times H^{1}(\\U)\\]\n(in fact, in this case $\\mathcal{E}=N^{1}(\\A)\\times N^{1}(\\U)$).\n\\par \nIf $\\overline{\\A ^{2}}=\\A$ or $ann_{\\U}\\U =(0)$, by symmetry we obtain the same result again.\n\\end{rem}\nAs the next example confirms, it is not necessarily true that for any derivation on $\\A\\times\\U$ we have $\\tau_1 =0$ or $\\delta_2 =0$ in its decomposition. The example is taken from \\cite{ess}.\n \\begin{exm}\n Let $\\A$ be a Banach algebra with $ann_{\\A}\\A\\neq (0)$. Put $\\U:=ann_{\\A}\\A$. Then $\\U$ is a closed subalgebra of $\\A$. Define the map $D$ on $\\A\\times\\U$ by $D((a,x))=(x,0)$. Then $D$ is a derivation on $\\A\\times\\U$ such that in its representation the map $\\tau_1:\\U\\rightarrow\\A$ is given by $\\tau_1(x)=x$, so that $\\tau_1\\neq 0$. \n \\end{exm}\nLet $\\A$ and $\\U$ be Banach algebras and $\\alpha:\\A\\rightarrow \\U$ be a continuous algebra homomorphism with $\\Vert \\alpha\\Vert \\leq 1$. Then the following module actions turn $\\U$ into a Banach $\\A$-bimodule with compatible actions and norm:\n\\[\nax=\\alpha(a)x \\quad \\text{and} \\quad\nxa=x\\alpha(a)\\quad\\quad (a\\in \\A,x\\in \\U).\\]\nIn this case we can consider $\\A\\ltimes \\U$ with the multiplication given by \n$$(a,x)(b,y)=(ab,\\alpha(a) y+x\\alpha(b)+xy).$$\nWe denote this Banach algebra by $\\A\\ltimes_{\\alpha} \\U$; it was introduced in \\cite{bh}. In the next proposition we see that $\\A\\ltimes_{\\alpha} \\U$ is isomorphic as a Banach algebra to the direct product $\\A\\times \\U$.\n\\begin{prop}\\label{dal}\nLet $\\A$, $\\U$ be Banach algebras and $\\alpha:\\A\\rightarrow \\U$ be a continuous algebra homomorphism with $\\Vert \\alpha\\Vert \\leq 1$. Consider the Banach algebra $\\A\\ltimes_{\\alpha} \\U$ as above. Then $\\A\\ltimes_{\\alpha} \\U$ is isomorphic as a Banach algebra to $\\A\\times \\U$.\n\\end{prop}\n\\begin{proof}\nDefine $\\theta:\\A\\times \\U\\rightarrow\\A\\ltimes_{\\alpha} \\U$ by $\\theta((a,x))=(a, x-\\alpha(a))$ for $(a,x)\\in \\A\\times \\U$. The map $\\theta$ is linear and continuous. Also\n\\[\\theta((a,x)(b,y))=\\theta((ab,xy))=(ab,xy-\\alpha(ab))=(a,x-\\alpha(a))(b,y-\\alpha(b))=\\theta((a,x))\\theta((b,y)),\\]\nfor any $(a,x),(b,y)\\in \\A\\times \\U$. Moreover, $\\theta$ is bijective, with inverse $(a,x)\\mapsto (a,x+\\alpha(a))$. Hence $\\theta$ is a continuous algebra isomorphism.\n\\end{proof}\n\\begin{rem}\nLet $\\A$, $\\U$ and $\\alpha$ be as in Proposition \\ref{dal}. By Remarks \\ref{d1}, \\ref{d2} and Proposition \\ref{dal} we have the following.\n\\par \nIf every derivation on $\\A\\ltimes_{\\alpha}\\U$ is continuous, then every derivation on $\\A$ and every derivation on $\\U$ is continuous. If $\\A$ or $\\U$ has a bounded approximate identity and every derivation on $\\A$ and every derivation on $\\U$ is continuous, then every derivation on $\\A\\ltimes_{\\alpha}\\U$ is continuous. Also if $\\overline{\\U ^{2}}=\\U$ or $ann_{\\A}\\A =(0)$ (respectively, $\\overline{\\A ^{2}}=\\A$ or $ann_{\\U}\\U =(0)$), then $H^{1}(\\A\\ltimes_{\\alpha}\\U)\\cong H^{1}(\\A)\\times H^{1}(\\U)$.\n\\end{rem}\nIf $\\U$ is a Banach algebra and $\\A$ is a closed subalgebra of it, then the embedding map $i:\\A\\rightarrow\\U$ is clearly a continuous algebra homomorphism, and hence we can consider $\\A\\ltimes_{i} \\U$, which is isomorphic as a Banach algebra to $\\A\\times \\U$ by Proposition \\ref{dal}.
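\nBehind these statements lies the standard transport of derivations along an isomorphism; we record it as a sketch, since it is used implicitly below. If $\\theta:\\A\\times\\U\\rightarrow\\A\\ltimes_{\\alpha}\\U$ is the isomorphism of Proposition \\ref{dal}, then for a linear map $D$,\n\\[ D\\in Z^{1}(\\A\\times\\U)\\quad \\Longleftrightarrow\\quad \\theta\\circ D\\circ\\theta^{-1}\\in Z^{1}(\\A\\ltimes_{\\alpha}\\U),\\]\nand $\\theta\\circ id_{z}\\circ\\theta^{-1}=id_{\\theta(z)}$, so inner derivations correspond to inner derivations; continuity is preserved in both directions because $\\theta$ and $\\theta^{-1}$ are bounded.\n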
So by the above remark we have the following examples.\n\\begin{exm}\n\\begin{enumerate}\n\\item[(i)] If $\\A$ is a semisimple Banach algebra with a bounded approximate identity, then every derivation on $\\A\\ltimes_{i}\\A$ is continuous. \n\\item[(ii)] Consider the group algebra $L^{1}(G)$ as a closed ideal in the measure algebra $M(G)$. Every derivation on $M(G)$ and every derivation on $L^{1}(G)$ is continuous and $M(G)$ is unital. So every derivation on $L^{1}(G)\\ltimes_{i} M(G)$ is continuous.\n\\item[(iii)] Let $\\mathcal{H}$ be a Hilbert space and $\\mathcal{N}$ be a complete nest in $\\mathcal{H}$. The associated nest algebra $Alg\\mathcal{N}$ is a unital closed subalgebra of $\\mathbb{B}(\\mathcal{H})$. By \\cite{chr} every derivation on $Alg\\mathcal{N}$ is continuous. Also $\\mathbb{B}(\\mathcal{H})$ is a unital $C^{*}$-algebra. Hence every derivation on $Alg\\mathcal{N}\\ltimes_{i} Alg\\mathcal{N}$ and $Alg\\mathcal{N}\\ltimes_{i} \\mathbb{B}(\\mathcal{H})$ is continuous.\n\\end{enumerate}\n\\end{exm}\n\\begin{exm}\n\\begin{enumerate}\n\\item[(i)] Let $\\A$ be a weakly amenable commutative Banach algebra. Since $H^{1}(\\A)=(0)$ and $\\overline{\\A ^{2}}=\\A$, it follows that $H^{1}(\\A\\ltimes_{i}\\A)=(0)$.\n\\item[(ii)]\nSakai showed in \\cite{sa} that every continuous derivation on a $W^{*}$-algebra is inner. Every von Neumann algebra is a $W^{*}$-algebra and is unital. Let $\\A$ be a von Neumann algebra on a Hilbert space $\\mathcal{H}$. Hence $H^{1}(\\A\\ltimes_{i}\\A)=(0)$ and $H^{1}(\\A\\ltimes_{i}\\mathbb{B} (\\mathcal{H}))=(0)$.\n\\item[(iii)]\nLet $\\mathcal{H}$ be a Hilbert space and $\\mathcal{N}$ be a complete nest in $\\mathcal{H}$. In \\cite{chr}, Christensen proved that $H^{1}(Alg\\mathcal{N})=(0)$. Also $\\mathbb{B}(\\mathcal{H})$ is a von Neumann algebra. Hence \\[ H^{1}(Alg\\mathcal{N}\\ltimes_{i} Alg\\mathcal{N})=(0) \\quad \\text{ and } \\quad H^{1}(Alg\\mathcal{N}\\ltimes_{i} \\mathbb{B}(\\mathcal{H}))=(0).\\]\n\\end{enumerate}\n\\end{exm}\n\\subsection*{Module extension Banach algebras}\nLet $\\A$ be a Banach algebra and $\\U$ be a Banach $\\A$-bimodule. With the trivial product $\\U ^{2}=(0)$, as we saw in Example \\ref{me}, $\\A\\ltimes\\U$ is the same as the module extension of $\\A$ by $\\U$, namely $T(\\A,\\U)$. In this case, $ann_{\\U}\\U =\\U$ and so for every derivation $\\delta:\\A\\rightarrow\\U$ we have $\\delta(\\A)\\subseteq ann_{\\U}\\U$.
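\nBefore stating the structure result, it is instructive to see where its compatibility conditions come from; the following computation is a routine sketch using only the product $(a,x)(b,y)=(ab,ay+xb)$ of $T(\\A,\\U)$. Writing $D((a,x))=(\\delta_1(a)+\\tau_1(x),\\delta_2(a)+\\tau_2(x))$ and applying the derivation identity to the product $(a,0)(0,x)=(0,ax)$ gives\n\\[ (\\tau_1(ax),\\tau_2(ax))=(a\\tau_1(x)\\,,\\ a\\tau_2(x)+\\delta_1(a)x),\\]\nso $\\tau_1$ is a left $\\A$-module map and $\\tau_2(ax)=a\\tau_2(x)+\\delta_1(a)x$; the product $(0,x)(a,0)$ gives the right-handed analogues, and $(0,x)(0,y)=(0,0)$ gives $x\\tau_1(y)+\\tau_1(x)y=0$.\n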
Combining these observations with Theorem \\ref{asll} and Corollary \\ref{tak}, we have the following proposition on derivations on $T(\\A,\\U)$.\n\\begin{prop}\\label{ttd}\n Let $D:T(\\A,\\U)\\rightarrow T(\\A,\\U)$ be a linear map such that \n \\[D((a,x))=(\\delta_1(a)+\\tau_1(x),\\delta_2(a)+\\tau_2(x))\\quad\\quad ((a,x)\\in T(\\A,\\U)).\\]\n The following are equivalent.\n \\begin{enumerate}\n \\item[(i)]\n $D$ is a derivation. \n \\item[(ii)]\n $D=D_1+D_2$ where $D_1((a,x))=(\\delta_1(a)+\\tau_1(x),\\tau_2(x))$ and $D_{2}((a,x))=(0,\\delta_2(a))$ are derivations on $T(\\A,\\U)$.\n \\item[(iii)]\n $\\delta_1:\\A\\rightarrow \\A$, $\\delta_2:\\A\\rightarrow \\U$ are derivations, $\\tau_{2}:\\U\\rightarrow \\U$ is a linear map such that \n \\[\\tau_2 (ax)=a\\tau_2 (x)+\\delta_1 (a)x\\quad \\text{and} \\quad \\tau_2 (xa)=\\tau_2 (x)a+x\\delta_1(a)\\quad \\quad (a\\in\\A,x\\in\\U),\\] \n and $\\tau_1:\\U\\rightarrow \\A$ is an $\\A$-bimodule homomorphism such that $x\\tau_1(y)+\\tau_1(x)y=0$ $ (x,y\\in\\U)$.\n \\end{enumerate}\n Moreover, $D$ is an inner derivation if and only if $\\delta_1 ,\\delta_2$ are inner derivations, $\\tau_1 =0$ and, whenever $\\delta_1 =id_{a}$ and $ \\delta_2 =id_{\\A,x}$, $\\tau_2 =r_{a}$.\n \\end{prop}\n Proposition 2.2 of \\cite{med} is a consequence of this proposition. \n\\begin{rem} \\label{r0} \n By Proposition \\ref{ttd}, a linear map $\\delta:\\A\\rightarrow\\U$ is a derivation if and only if the linear map $D((a,x))=(0,\\delta (a))$ on $T(\\A,\\U)$ is a derivation. So every derivation $\\delta:\\A\\rightarrow\\U$ is continuous if every derivation on $T(\\A,\\U)$ is continuous.\n \\end{rem} \n \\begin{prop}\\label{tau}\nSuppose that there are $\\A$-bimodule homomorphisms $\\phi:\\A\\rightarrow\\U$ and $\\psi:\\U\\rightarrow\\A$ such that $\\phi \\circ\\psi = I_{\\U}$ ($I_\\U$ is the identity map on $\\U$). If every derivation on $T(\\A,\\U)$ is continuous, then every derivation on $\\A$ is continuous.\n\\end{prop}\n\\begin{proof}\nLet $\\delta$ be a derivation on $\\A$. Define the map $\\tau:\\U\\rightarrow\\U$ by $\\tau =\\phi \\circ\\delta \\circ\\psi$.\nThen for every $a\\in\\A,x\\in\\U$,\n\\[\\tau(ax)=a\\tau(x)+\\delta(a)x\\quad \\text{and} \\quad \\tau(xa)=\\tau(x)a+x\\delta(a).\\]\nSo the map $D:T(\\A,\\U)\\rightarrow T(\\A,\\U)$ defined by $D((a,x))=(\\delta(a),\\tau(x))$ is a derivation, which is continuous by the hypothesis. Thus $\\delta$ is continuous.\n\\end{proof}\nIn the previous proposition, the assumed existence of $\\A$-bimodule homomorphisms $\\phi:\\A\\rightarrow\\U$ and $\\psi:\\U\\rightarrow\\A$ with $\\phi \\circ \\psi =I_{\\U}$ is equivalent to the existence of a subbimodule $\\mathcal{V}$ of $\\A$ such that $\\A=\\U\\oplus \\mathcal{V}$ as a direct sum of $\\A$-bimodules. (In fact, in this case $\\U$ and $\\mathcal{V}$ are ideals of $\\A$.)
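\nThe computation behind the proof of Proposition \\ref{tau} is short enough to record here (a sketch, using only that $\\phi$ and $\\psi$ are $\\A$-bimodule homomorphisms with $\\phi\\circ\\psi=I_{\\U}$):\n\\[ \\tau(ax)=\\phi(\\delta(\\psi(ax)))=\\phi(\\delta(a\\psi(x)))=\\phi\\big(a\\delta(\\psi(x))+\\delta(a)\\psi(x)\\big)=a\\tau(x)+\\delta(a)\\phi(\\psi(x))=a\\tau(x)+\\delta(a)x,\\]\nand the identity for $\\tau(xa)$ follows symmetrically.\n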
Since every derivation $\\delta:\\A\\rightarrow\\U$ satisfies $\\delta(\\A)\\subseteq ann_{\\U}\\U$, in the cases stated in Proposition \\ref{taj2} one can express any derivation $D$ on $T(\\A,\\U)$ as the sum of two derivations, one of which is continuous. Also Proposition \\ref{au1}-$(i)$ holds in the case of module extension Banach algebras. \n\\begin{rem}\\label{r1}\nIf $\\A$ is a Banach algebra and $\\mathcal{I}$ is a closed ideal in it, then $\\frac{\\A}{\\mathcal{I}}$ is a Banach $\\A$-bimodule and so we can consider $T(\\A, \\frac{\\A}{\\mathcal{I}})$. Suppose that $\\A$ possesses a bounded right (or left) approximate identity and that every derivation on $\\A$ and every derivation from $\\A$ to $\\frac{\\A}{\\mathcal{I}}$ is continuous. Let $D$ be a derivation on $T(\\A, \\frac{\\A}{\\mathcal{I}})$ with the structure described in Proposition \\ref{ttd}. Then $\\tau_1:\\frac{\\A}{\\mathcal{I}}\\rightarrow \\A$ is an $\\A$-bimodule homomorphism. Since $\\A$ has a bounded right (left) approximate identity, so does $\\frac{\\A}{\\mathcal{I}}$. Hence $\\tau_1$ is continuous. Now from Proposition \\ref{taj2}-$(ii)$ it follows that $D$ is continuous. Hence in this case any derivation on $T(\\A, \\frac{\\A}{\\mathcal{I}})$ is continuous.\n\\end{rem}\n\\begin{rem}\\label{r2}\nIf $\\mathcal{I}$ is a closed ideal in a Banach algebra $\\A$ and $\\delta:\\A\\rightarrow\\A$ is a derivation such that $\\delta(\\mathcal{I})\\subseteq \\mathcal{I}$, then the map $\\tau: \\frac{\\A}{\\mathcal{I}}\\rightarrow \\frac{\\A}{\\mathcal{I}}$ defined by $\\tau(a+\\mathcal{I})=\\delta(a)+\\mathcal{I}$ is well-defined and linear, and \n\\[\\tau (a(x+\\mathcal{I}))=a\\tau (x+\\mathcal{I})+\\delta (a)(x+\\mathcal{I})\\quad,\\quad \\tau((x+\\mathcal{I})a)=\\tau(x+\\mathcal{I})a+(x+\\mathcal{I})\\delta(a).\\]\n Therefore the map $D$ on $T(\\A, \\frac{\\A}{\\mathcal{I}})$ defined by $D((a,x+\\mathcal{I}))=(\\delta (a),\\tau(x+\\mathcal{I}))$ is a derivation. So if every derivation on $T(\\A, \\frac{\\A}{\\mathcal{I}})$ is continuous, then every derivation $\\delta:\\A\\rightarrow \\A $ with $\\delta(\\mathcal{I})\\subseteq \\mathcal{I}$ is continuous.\n\\end{rem}\n Taking $\\mathcal{I}=(0)$ in Remarks \\ref{r1} and \\ref{r2}, we have the following corollary.\n \\begin{cor}\n Let $\\A$ be a Banach algebra.\n\\begin{enumerate}\n\\item[(i)] If $\\A$ has a bounded right (left) approximate identity and every derivation on $\\A$ is continuous, then any derivation on $T(\\A, \\A)$ is continuous. \n\\item[(ii)] If every derivation on $T(\\A, \\A)$ is continuous, then any derivation on $\\A$ is continuous.\n\\end{enumerate} \n \\end{cor}\nPart (ii) of this corollary could also be derived from Remark \\ref{r0} or Proposition \\ref{tau}.\n\\par \nWe continue by giving some consequences of Proposition \\ref{taj2} in the case of module extension Banach algebras.\n\\begin{cor}\\label{trs}\nLet $\\A$ be a semisimple Banach algebra with a bounded approximate identity and let $\\U$ be a Banach $\\A$-bimodule with $ann_{\\A}\\U=(0)$. If there exists a surjective left $\\A$-module homomorphism $\\phi:\\A\\rightarrow\\U$ and every derivation from $\\A$ into $\\U$ is continuous, then every derivation on $T(\\A,\\U)$ is continuous.\n\\end{cor}\n\\begin{proof}\nLet $D$ be a derivation on $T(\\A,\\U)$ with the structure described in Proposition \\ref{ttd}. Since $\\A$ is semisimple, every derivation from $\\A$ into $\\A$ is continuous. Now by Proposition \\ref{taj2}-$(ii)$ and Remark \\ref{taj11}, it follows that every derivation on $T(\\A,\\U)$ is continuous.\n\\end{proof}\nRingrose in \\cite{rin} proved that every derivation from a $C^{*}$-algebra into a Banach bimodule is continuous. So we have the following example, which satisfies the conditions of the above corollary.\n\\begin{exm}\nLet $\\A$ be a $C^{*}$-algebra and $\\U$ be a Banach $\\A$-bimodule with $ann_{\\A}\\U=(0)$, and suppose that there exists a surjective left $\\A$-module homomorphism $\\phi:\\A\\rightarrow\\U$. Then $\\A$ is a semisimple Banach algebra with a bounded approximate identity, so by \\cite{rin} and Corollary \\ref{trs}, any derivation on $T(\\A,\\U)$ is continuous. \n\\end{exm}\nAn element $p$ in an algebra $\\A$ is called an \\textit{idempotent} if $p^{2}=p$.\n\\begin{cor}\nLet $\\A$ be a prime Banach algebra with a non-trivial idempotent\n$p$ (i.e.\\ $p \\neq 0$) such that $\\A p$ is finite dimensional. Then every derivation\non $\\A$ is continuous.\n\\end{cor}\n\\begin{proof}\nLet $\\U:=\\A p$. Then $\\U$ is a closed left ideal in $\\A$. Make $\\U$ into a Banach\n$\\A$-bimodule as follows:\n\\[ xa=0 \\quad \\quad (x\\in \\A p, a\\in \\A), \\]\nand the left multiplication is the usual multiplication of $\\A$.
So we can consider $T(\\A,\\U)$ in this case. Since $\\A$ is prime, it follows that $ann_{\\A}\\U=(0)$. Let $\\delta:\\A \\rightarrow \\A$ be a derivation. Define the map $\\tau:\\U \\rightarrow \\U$ by $\\tau(ap)=\\delta(ap)p$ $(a\\in \\A)$. The map $\\tau$ is well-defined and linear. Also\n\\[\\tau (ax)=a\\tau (x)+\\delta (a)x\\quad \\text{and} \\quad \\tau (xa)=\\tau (x)a+x\\delta(a)\\quad \\quad (a\\in\\A,x\\in\\U).\\] \nSince $\\U$ is finite dimensional, it follows that $\\tau$ is continuous. By Proposition \\ref{ttd}, the mapping $D:T(\\A,\\U)\\rightarrow T(\\A,\\U)$ defined by $D((a,x))=(\\delta(a),\\tau(x))$ is a derivation. Now the conditions of Proposition \\ref{taj2}-$(i)$ hold for $D$, and hence $D$ is continuous. Therefore $\\delta$ is continuous.\n\\end{proof}\n Now we investigate the first cohomology group of $T(\\A ,\\U)$. \n \\par \n In the module extension $T(\\A ,\\U)$, since $\\U ^{2}=(0)$, every derivation $\\delta:\\A\\rightarrow\\U$ satisfies $\\delta (\\A)\\subseteq ann_{\\U}\\U$; moreover $Z^{1}(\\U)=\\mathbb{B}(\\U)$ and $N^{1}(\\U)=(0)$. Hence we may conclude the following proposition from Theorem \\ref{22}. \n \\begin{prop}\\label{cte}\nConsider the module extension $T(\\A,\\U)$ of a Banach algebra $\\A$ and a Banach $\\A$-bimodule $\\U$. Suppose that $H^{1}(\\A)=(0)$ and that the only continuous $\\A$-bimodule homomorphism $T:\\U\\rightarrow\\A$ satisfying $T(x)y+xT(y)=0$ is $T=0$. Then \n\\[H^{1}(T(\\A ,\\U))\\cong H^{1}(\\A ,\\U)\\times \\frac{Hom_{\\A}(\\U)}{C_{\\A}(\\U)}.\\]\n \\end{prop} \nIn fact, with the assumptions of the previous proposition, in Theorem \\ref{22} we have $\\mathcal{F}=N^{1}(\\A,\\U)\\times C_{\\A}(\\U)$. Proposition \\ref{cte} is the same as Theorem 2.5 of \\cite{med}. So it can be said that Theorem \\ref{22} is a generalization of Theorem 2.5 in \\cite{med}. \n\\begin{rem}\n \\begin{enumerate}\n \\item[(i)]\n If $\\U$ is a closed ideal of a Banach algebra $\\A$ such that $\\overline{\\U^{2}}=\\U$, then for any continuous $\\A$-bimodule homomorphism $T:\\U\\rightarrow\\A$ with $T(x)y+xT(y)=0\\, (x,y\\in\\U)$ we have $T(xy)=0$, since $2T(xy)=T(x)y+xT(y)=0$. Since $\\overline{\\U^{2}}=\\U$ and $T$ is continuous, it follows that $T=0$. Hence in this case, for any $D\\in Z^{1}(T(\\A ,\\U))$, in its representation we have $\\tau_1=0$. Also in this case, for any $r_a \\in C_{\\A}(\\U)$, since $a\\in Z(\\A)$, we get $r_{a}=0$. Therefore, if $H^{1}(\\A)=(0)$, then in this case Proposition \\ref{cte} gives \n \\[H^{1}(T(\\A ,\\U))\\cong H^{1}(\\A,\\U)\\times Hom_{\\A}(\\U).\\]\n \\item[(ii)]\n If a Banach algebra $\\A$ has no nonzero nilpotent elements, $\\U$ is a Banach $\\A$-bimodule and $T:\\U\\rightarrow\\A$ is an $\\A$-bimodule homomorphism such that $T(x)y+xT(y)=0\\, (x,y\\in\\U)$, then $2T(x)T(y)=0\\, (x,y\\in \\U)$. Thus for any $x\\in \\U$, $T(x)^{2}=0$ and by the hypothesis $T(x)=0$. So in this case, for any derivation on $T(\\A ,\\U)$, in its structure given by Proposition \\ref{ttd} we have $\\tau_1 =0$.\n\\end{enumerate}\n \\end{rem} \n \\begin{exm}\n Let $\\A$ be a weakly amenable commutative Banach algebra and let $\\U$ be a commutative Banach $\\A$-bimodule such that the only continuous $\\A$-bimodule homomorphism $T:\\U\\rightarrow\\A$ with $T(x)y+xT(y)=0\\, (x,y\\in\\U)$ is $T=0$. Then $H^{1}(\\A)=(0)$, $H^{1}(\\A,\\U)=(0)$ and $C_{\\A}(\\U)=(0)$, and thus \n \\[H^{1}(T(\\A ,\\U))\\cong Hom_{\\A}(\\U).\\]\n \\end{exm}\nVarious examples of trivial extensions of Banach algebras, together with the computation of their first cohomology groups, are given in \\cite{med}.
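\nBy way of illustration (a special case not taken from \\cite{med}, but in the same spirit), let $\\A=\\mathbb{C}$ and let $\\U$ be any non-zero Banach space, regarded as a commutative Banach $\\mathbb{C}$-bimodule. Every bimodule homomorphism $T:\\U\\rightarrow\\mathbb{C}$ with $T(x)y+xT(y)=0$ vanishes: taking $y=x$ gives $2T(x)x=0$, and if $T(x)\\neq 0$ then $T(x)y=-T(y)x$ would make every $y$ a scalar multiple of $x$, forcing $\\dim\\U=1$ and again $T(x)x=0$, so $T(x)=0$. Since $Hom_{\\mathbb{C}}(\\U)=\\mathbb{B}(\\U)$ and $C_{\\mathbb{C}}(\\U)=(0)$, the example above yields\n \\[H^{1}(T(\\mathbb{C},\\U))\\cong \\mathbb{B}(\\U)\\neq (0),\\]\na simple source of Banach algebras with non-trivial first cohomology group.\n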
\n\\par \nIf $\\delta\\in Z^{1}(\\A,\\U)$, then the map $D_{\\delta}:T(\\A,\\U)\\rightarrow T(\\A,\\U)$ defined by $D_{\\delta}((a,x))=(0,\\delta(a))$ is a continuous derivation. Now we may define the linear map $\\Phi :Z^{1}(\\A,\\U)\\rightarrow H^{1}(T(\\A ,\\U))$ by $\\Phi (\\delta)=[D_{\\delta}]$. By noting that $\\delta$ is inner if and only if $D_{\\delta}$ is inner (by Corollary \\ref{tak}), it can be seen that $ker\\Phi= N^{1}(\\A ,\\U)$. Thus $H^{1}(\\A ,\\U)$ is isomorphic to some subspace of $H^{1}(T(\\A ,\\U))$. So we have the following corollary. \n \\begin{cor}\n Let $\\A$ be a Banach algebra and $\\U$ be a Banach $\\A$-bimodule. Then there is a linear isomorphism from \n $H^{1}(\\A ,\\U)$ onto a subspace of $H^{1}(T(\\A ,\\U))$.\n \\end{cor}\n By the above corollary, $H^{1}(T(\\A ,\\U))=(0)$ implies $H^{1}(\\A ,\\U)=(0)$. In particular, if $H^{1}(T(\\A ,\\A))=(0)$, then $H^{1}(\\A)=(0)$. We can also obtain this result from the fact that any derivation $\\delta:\\A\\rightarrow\\A$ gives rise to a derivation $D:T(\\A,\\A)\\rightarrow T(\\A,\\A)$ given by $D((a,x))=(\\delta(a),\\delta(x))$ (by Remark \\ref{r2}); so if $D$ is inner, then $\\delta$ is inner.\n\n \\subsection*{$\\theta$-Lau products of Banach algebras} \nIn this subsection we assume that $0\\neq \\theta\\in \\Delta (\\A)$ and that $\\U$ is a Banach algebra. With the module actions given in Example \\ref{la} we turn $\\U$ into a Banach $\\A$-bimodule with compatible actions and norm; when necessary we denote this module by $ \\U_{\\theta}$. Note that $ann_{\\A}\\U_{\\theta} =ker \\theta$. Consider $\\A\\ltimes\\U$ and denote it by $\\A\\ltimes_{\\theta}\\U$, which is called the $\\theta$-Lau product. In the remainder of this section $\\A\\ltimes_{\\theta}\\U$ is always understood in this sense. \n\\par \nThe following proposition, obtained from Theorem \\ref{asll}, characterizes the structure of derivations on $\\A\\ltimes_{\\theta}\\U$.\n \\begin{prop}\\label{lau-der}\nLet $D:\\A\\ltimes_{\\theta} \\U\\rightarrow \\A \\ltimes_{\\theta}\\U$ be a map. Then the following conditions are equivalent.\n\\begin{enumerate}\n\\item[(i)] $D$ is a derivation.\n\\item[(ii)] \n\\[D((a,x))=(\\delta_1 (a)+\\tau_1 (x),\\delta_2 (a)+\\tau_2 (x))\\quad\\quad (a\\in \\A,x\\in \\U)\\]\nsuch that \n\\begin{enumerate}\n\\item[(a)]\n$\\delta_1 :\\A\\rightarrow\\A, \\delta_2 :\\A\\rightarrow\\U$ are derivations such that \n\\[\\theta (\\delta_1 (a))x+\\delta_2(a)x=0\\quad \\text{and}\\quad \\theta (\\delta_1 (a))x+x\\delta_2(a)=0\\quad (a\\in\\A,x\\in\\U).\\]\n\\item[(b)]\n$\\tau_1:\\U\\rightarrow \\A$ is an $\\A$-bimodule homomorphism such that $\\tau_1(xy)=0\\quad (x,y\\in\\U)$.\n\\item[(c)]\n$\\tau_2:\\U\\rightarrow \\U$ is a linear map such that \n\\[\\tau_2(xy)=\\theta (\\tau_1(y))x+\\theta (\\tau_1(x))y+x\\tau_2(y)+\\tau_2(x)y\\quad \\quad (x,y\\in\\U).\\]\n\\end{enumerate}\n\\end{enumerate}\nMoreover, $D$ is inner if and only if there are $a\\in\\A$ and $x\\in\\U$ such that $\\tau_1 =0, \\delta_2 =0, \\delta_1 =id_{a}$ and $\\tau_2 =id_{\\U,x}$.\n\\end{prop}\nBy the above proposition, for a derivation $D$ on $\\A\\ltimes_{\\theta}\\U$ we have \n\\[\\delta_2 (\\A)\\subseteq Z(\\U), \\quad \\theta (a)\\tau_1 (x) =a\\tau_1(x)=\\tau_1(x)a\\]\nand so $\\tau_1(\\U)\\subseteq Z(\\A)$. Also $x\\tau_1 (y)+\\tau_1 (x)y =0$ for all $x,y\\in \\U$ if and only if $\\tau_1 (\\U)\\subseteq ker \\theta$. Additionally, $\\delta_1 (\\A)\\subseteq ann_{\\A}\\U =ker\\theta$ if and only if $\\delta_2 (\\A)\\subseteq ann_{\\U}\\U$.
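\nThe criterion for $\\tau_1$ is immediate from the module actions of $\\U_{\\theta}$ (under which both actions of $a$ are multiplication by $\\theta(a)$); we record the short check:\n\\[ x\\tau_1(y)+\\tau_1(x)y=\\theta(\\tau_1(y))x+\\theta(\\tau_1(x))y \\quad\\quad (x,y\\in\\U),\\]\nso if $\\tau_1(\\U)\\subseteq ker\\theta$ the left-hand side vanishes, while conversely taking $y=x\\neq 0$ gives $2\\theta(\\tau_1(x))x=0$ and hence $\\theta (\\tau_1(x))=0$.\n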
\\begin{rem} \\label{cla}\nIf $\\A$ is a commutative Banach algebra, then by Thomas' theorem \\cite{tho}, every derivation $\\delta:\\A\\rightarrow\\A$ satisfies $\\delta(\\A)\\subseteq rad (\\A)$. So in this case, for any derivation $D$ on $\\A\\ltimes_{\\theta}\\U$ we always have $\\delta_1 (\\A)\\subseteq rad (\\A) \\subseteq ker\\theta=ann_{\\A}\\U $ and hence $\\delta_2 (\\A)\\subseteq ann_{\\U}\\U$. Also $D=D_1+D_2+D_{3}$, where $D_1((a,x))=(\\delta_1(a),0)$, $D_{2}((a,x))=(0,\\delta_2(a))$ and $D_3((a,x))=(\\tau_1(x),\\tau_2(x))$ are derivations on $\\A\\ltimes_{\\theta}\\U$. In this case, by Corollary \\ref{tak}-$(i)$, for every derivation $\\delta:\\A\\rightarrow\\A$ the map $D$ on $\\A\\ltimes_{\\theta}\\U$ defined by $D((a,x))=(\\delta(a),0)$ is a derivation.\n\\end{rem}\nIn what follows we always assume that every derivation $D\\in Z^{1}(\\A\\ltimes_{\\theta}\\U)$ has $\\tau_1 =0$ in its representation, and we study the derivations under this condition. In this case, $\\tau_2$ is then always a derivation on $\\U$. \n\\par \n The results on the automatic continuity of derivations established in section 3 obviously hold in the special case of the $\\theta$-Lau product as well. Now we are ready to state some results concerning the automatic continuity of derivations on $\\A\\ltimes_{\\theta}\\U$.\n\\par \nBy the definition of the $\\theta$-Lau product and Corollary \\ref{tak}-$(iv)$, a linear map $\\tau:\\U\\rightarrow\\U$ is a derivation if and only if the linear map $D((a,x))=(0,\\tau(x))$ on $\\A\\ltimes_{\\theta}\\U$ is a derivation. From this observation the following corollary is clear.\n\\begin{cor}\nIf every derivation on $\\A\\ltimes_{\\theta}\\U$ is continuous, then every derivation on $\\U$ is continuous. In particular, if every derivation on $\\A\\ltimes_{\\theta}\\A$ is continuous, then every derivation on $\\A$ is continuous.\n\\end{cor}\nIf $\\A$ and $\\U$ are semisimple and $\\U$ has a bounded approximate identity, then by Proposition \\ref{semi} every derivation on $\\A\\ltimes_{\\theta}\\U$ is continuous. In particular, if $\\A$ is a semisimple Banach algebra with a bounded approximate identity, then all derivations on $\\A\\ltimes_{\\theta}\\A$ are continuous. In the case of $C^{*}$-algebras we can drop the semisimplicity of $\\U$. In fact, by Ringrose's result \\cite{rin}, Proposition \\ref{lau-der} and the preceding corollary we have the next corollary.\n\\begin{cor}\nLet $\\A$ be a $C^{*}$-algebra. Then every derivation on $\\A\\ltimes_{\\theta}\\U$ is continuous if and only if every derivation on $\\U$ is continuous.\n\\end{cor}\nFrom Proposition \\ref{taj1}-$(ii)$ and Remark \\ref{cla} we have the following corollary.\n\\begin{cor}\nLet $\\A$ be a commutative Banach algebra and $\\U$ a semisimple Banach algebra. Then every derivation $D$ on $ \\A\\ltimes_{\\theta}\\U $ is of the form $D=D_1+D_2$, where $D_1 ((a,x))=(0,\\delta_2 (a)+\\tau_2(x))$ is a continuous derivation on $\\A\\ltimes_{\\theta}\\U$ and $D_2 ((a,x))=(\\delta_1 (a),0)$ is a derivation on $\\A\\ltimes_{\\theta}\\U$. In particular, in this case, if every derivation on $\\A$ is continuous, then every derivation on $\\A\\ltimes_{\\theta}\\U$ is continuous.\n\\end{cor}\nNote that if $\\A$ is commutative, then $\\U$ is a commutative $\\A$-bimodule, and hence the sets of all left $\\A$-module homomorphisms, all right $\\A$-module homomorphisms and all $\\A$-module homomorphisms coincide.
Now by Proposition \\ref{taj2} and Remark \\ref{cla} we obtain the next corollary.\n\\begin{cor}\nLet $\\A$ be a commutative Banach algebra which has a bounded approximate identity. Then any derivation $D$ on $\\A\\ltimes_{\\theta}\\U$ is of the form $D=D_1+D_2$, where $D_1 ((a,x))=(\\delta_1 (a),\\tau_2(x))$ and $D_2 ((a,x))=(0,\\delta_2(a))$ are derivations on $\\A\\ltimes_{\\theta}\\U$, and under any of the following conditions $D_1$ is continuous.\n\\begin{enumerate}\n\\item[(i)]\nThere is a surjective $\\A$-module homomorphism $\\phi:\\A\\rightarrow\\U$ and $\\delta_1$ is continuous.\n\\item[(ii)]\nThere is an injective $\\A$-module homomorphism $\\phi:\\A\\rightarrow\\U$ and $\\tau_2$ is continuous.\n\\end{enumerate}\n\\end{cor}\nIn the continuation we assume that every continuous derivation $D$ on $\\A\\ltimes_{\\theta}\\U$ has $\\tau_1 =0$ in its representation. By the definition of $\\A\\ltimes_{\\theta}\\U$, in this case, $Hom_{\\A}(\\U)=\\mathbb{B}(\\U), N^{1}(\\A,\\U)=(0), Z^{1}(\\A,\\U)=H^{1}(\\A,\\U), R_{\\A}(\\U)=C_{\\A}(\\U)=(0)$ and $N^{1}(\\U)=I(\\U)$. \n\\begin{rem}\\label{pri}\nFor any derivation $\\delta\\in Z^{1}(\\A)$, we have $\\delta(\\A)\\subseteq ker \\theta=ann_{\\A}\\U $: Sinclair's theorem \\cite{sinc} implies that $\\delta(P)\\subseteq P$ for any primitive ideal $P$ of $\\A$, and applying this to the primitive ideal $P=ker\\theta$, the derivation $\\delta$ induces a derivation on $\\A / ker\\theta\\cong\\mathbb{C}$, which must vanish, so $\\theta\\circ\\delta =0$. So in this case, for any derivation $D\\in Z^{1}(\\A\\ltimes_{\\theta}\\U)$ we always have $\\delta_1 (\\A)\\subseteq ker \\theta =ann_{\\A}\\U $ and hence $\\delta_2 (\\A)\\subseteq ann_{\\U}\\U$. Also $D=D_1+D_2+D_{3}$, where $D_1((a,x))=(\\delta_1(a),0)$, $D_{2}((a,x))=(0,\\delta_2(a))$ and $D_3((a,x))=(\\tau_1(x),\\tau_2(x))$ are in $Z^{1}(\\A\\ltimes_{\\theta}\\U)$. \n\\end{rem}\nNote that it is not necessarily true that $\\delta\\in Z^{1}(\\A,\\U)$ implies $\\delta(\\A)\\subseteq ann_{\\U}\\U$, as the next example shows.\n\\begin{exm}\nAssume that $G$ is a non-discrete locally compact abelian group. In \\cite{br}, it has been shown that there is a nonzero continuous point derivation $d$ at a nonzero character $\\theta$ on $M(G)$. Now consider $M(G)\\ltimes_{\\theta} \\mathbb{C}$. Every derivation from $M(G)$ into $\\mathbb{C}_\\theta$ is a point derivation at $\\theta$. It is clear that $ann_{\\mathbb{C}}\\mathbb{C} =(0)$. But $d\\in Z^{1}(M(G),\\mathbb{C}_\\theta)$ is a nonzero derivation, so $d(M(G))\\not\\subseteq ann_{\\mathbb{C}}\\mathbb{C} =(0)$.\n\\end{exm}\n\\par \nNow we determine the first cohomology group of $\\A\\ltimes_{\\theta}\\U$ in some other cases.\n\\par \nThe linear space $\\mathcal{E}$ of Theorem \\ref{11} in the case of $\\A\\ltimes_{\\theta}\\U$ is $\\mathcal{E}=N^{1}(\\A)\\times N^{1}(\\U)$. Therefore by Theorem \\ref{11} and Remark \\ref{pri} we have the next proposition.\n\\begin{prop}\\label{a1}\nIf $H^{1}(\\A,\\U)=(0)$, then \n\\[H^{1}(\\A\\ltimes_{\\theta}\\U)\\cong H^{1}(\\A)\\times H^{1}(\\U).\\]\n\\end{prop}\n\\begin{cor}\nLet $\\A$ be a Banach algebra with $H^{1}(\\A,\\A_{\\theta})=(0)$ and $\\overline{\\A^{2}}=\\A$. Then $H^{1}(\\A \\ltimes_{\\theta}\\A)=(0)$ if and only if $H^{1}(\\A)=(0)$.\n\\end{cor}\n\\par \nThe linear space $\\mathcal{F}$ of Theorem \\ref{22} in the case of $\\A\\ltimes_{\\theta}\\U$ is $\\mathcal{F}=(0)\\times N^{1}(\\U)$.
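\nIndeed, since both actions of $\\A$ on $\\U_{\\theta}$ are multiplication by $\\theta(a)$, a short computation (recorded here for convenience) gives\n\\[ id_{\\A,x}(a)=ax-xa=\\theta(a)x-\\theta(a)x=0 \\quad \\text{and} \\quad r_{a}(y)=ya-ay=\\theta(a)y-\\theta(a)y=0,\\]\nso in the pairs constituting $\\mathcal{F}$ only the inner derivations $id_{\\U,x}$ survive, which yields $\\mathcal{F}=(0)\\times N^{1}(\\U)$.\n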
So we have the following proposition.\n\\begin{prop}\nSuppose that $\\delta(\\A)\\subseteq ann_{\\U}\\U$ for every $\\delta\\in Z^{1}(\\A,\\U)$. If $H^{1}(\\A)=(0)$, then \n\\[H^{1}(\\A\\ltimes_{\\theta}\\U)\\cong H^{1}(\\A,\\U)\\times H^{1}(\\U).\\]\n\\end{prop}\n The linear space $\\mathcal{K}$ of Theorem \\ref{33} in the case of $\\A\\ltimes_{\\theta}\\U$ is $\\mathcal{K}=N^{1}(\\A)\\times (0)$. Therefore by Theorem \\ref{33} we have the next proposition.\n \\begin{prop}\\label{prop8}\nSuppose that $ \\delta(\\A)\\subseteq ann _{\\U}\\U$ for every $\\delta\\in Z^{1}(\\A,\\U)$.\n If $H^{1}(\\U)=(0)$, then \n \\[H^{1}(\\A\\ltimes_{\\theta}\\U)\\cong H^{1}(\\A)\\times H^{1}(\\A,\\U).\\]\n\\end{prop}\nThe following proposition follows directly from Theorem \\ref{44} and the properties of $\\A\\ltimes_{\\theta}\\U$.\n\\begin{prop}\\label{prop10}\nSuppose that $H^{1}(\\A)=(0)$ and $H^{1}(\\A,\\U)=(0)$. Then\n\\[H^{1}(\\A\\ltimes_{\\theta}\\U)\\cong H^{1}(\\U).\\] \n\\end{prop}\n\\begin{rem}\\label{akh}\nFrom Proposition \\ref{prop10} it follows that if $H^{1}(\\A)=(0), H^{1}(\\A,\\U)=(0)$ and $H^{1}(\\U)=(0)$, then $H^{1}(\\A\\ltimes_{\\theta}\\U)=(0)$. Under some conditions the converse is also true. That is, if for any $\\delta\\in Z^{1}(\\A,\\U)$ we have $\\delta (\\A)\\subseteq ann_{\\U}\\U$ and $H^{1}(\\A\\ltimes_{\\theta}\\U)=(0)$, then $H^{1}(\\A)=(0), H^{1}(\\A,\\U)=(0)$ and $H^{1}(\\U)=(0)$. This does not follow from the above propositions, so we prove it directly as follows. By the hypotheses, for any $\\delta_1\\in Z^{1}(\\A)$, $\\delta_2\\in Z^{1}(\\A,\\U)$ and $\\tau\\in Z^{1}(\\U)$, each of the maps $D:\\A\\ltimes_{\\theta}\\U\\rightarrow \\A\\ltimes_{\\theta}\\U$ given by $D((a,x))=(\\delta_1(a),0)$, $D((a,x))=(0,\\delta_2(a))$ and $D((a,x))=(0,\\tau(x))$ is a continuous derivation. If $H^{1}(\\A\\ltimes_{\\theta}\\U)=(0)$, then $\\delta_2=0$ while $\\delta_1$ and $\\tau$ are inner, which gives the claim.\n\\end{rem}\n \\begin{exm}\nLet $\\A$ be a weakly amenable commutative Banach algebra and $\\overline{\\U^{2}}=\\U$. Then $Z^{1}(\\A)=(0)$ and $Z^{1}(\\A,\\U)=(0)$, and from Remark \\ref{akh} we have $H^{1}(\\A\\ltimes_{\\theta}\\U)=(0)$ if and only if $H^{1}(\\U)=(0)$.\n \\end{exm}\n In particular, if $\\A$ is a weakly amenable commutative Banach algebra, the above example implies that $H^{1}(\\A\\ltimes_{\\theta}\\A)=(0)$. Note that for a weakly amenable Banach algebra $\\A$ the equality $\\overline{\\A^{2}}=\\A$ always holds.\n\n\n\n\\bibliographystyle{plain}\n
\\label{phorho} \n\\end{equation}\nThe reaction (\\ref{phorho}) can be regarded as a fluctuation of the\nvirtual photon into a quark-antiquark pair (in partonic language),\nor an off-shell vector meson (in Vector Meson Dominance model), which then scatters\noff the target nucleon resulting in the production of an on-shell vector meson.\nAt high energies this is predominantly a diffractive process and plays\nan important role in the investigation of Pomeron exchange and its\ninterpretation in terms of multiple gluon exchange.\n\nMost of the presently available information on the spin structure \nof reaction (\\ref{phorho})\nstems from the $\\rho ^0$ spin density matrix elements,\nwhich are obtained from the analysis of angular distributions\nof $\\rho ^0$ production and decay \\cite{Schilling73}. Experimental results\non $\\rho ^0$ spin density matrix elements \ncome from various experiments \\cite{NMC,E665-1,ZEUS,H1,HERMES00} including\nthe preliminary results from COMPASS \\cite{sandacz}.\n \nThe emerging picture of the spin structure of the considered process \nis the following. At low photon virtuality $Q^2$ the cross section by transverse virtual photons\n$\\sigma _T$\ndominates, while the relative contribution of the cross section by longitudinal\nphotons $\\sigma _L$ \nrapidly increases with $Q^2$. At $Q^2$ of about 2~(GeV\/{\\it c})$^2$ both\ncomponents become comparable and at a larger $Q^2$ the contribution of\n$\\sigma _L$ becomes dominant and continues to grow, although\nat lower rate than at low $Q^2$. \nApproximately, the so called $s$-channel helicity\nconservation (SCHC) is valid, i.e.\\ the helicity of the vector meson is the same\nas the helicity of the parent virtual photon. The data indicate that \nthe process can be described approximately by the \nexchange in the $t$-channel of an object with natural \nparity $P$.\nSmall deviations from SCHC are observed, also at the highest energies, whose\norigin is still to be understood. \nAn interesting suggestion was made in Ref.~\\cite{igivanov} that at high energies\nthe magnitudes of various helicity amplitudes for the reaction (\\ref{phorho})\nmay shed a light on the spin-orbital momentum structure of the vector meson. 
\n\nA complementary information can be obtained from measurements of the double\nspin cross section asymmetry, when the information on both the beam\nand target polarisation is used.\nThe asymmetry is defined as\n\\begin{equation}\n A_{1}^{\\rho} = \n \\frac{\n \\sigma_{1\/2} - \\sigma_{3\/2}\n }{\n \\sigma_{1\/2} + \\sigma_{3\/2}}\n ,\n \\label{A1def}\n\\end{equation}\nwhere $\\sigma_{1\/2 (3\/2)}$ stands for the cross sections of the reaction (\\ref{phorho})\nand the subscripts denote the total virtual photon--nucleon angular momentum\ncomponent along the virtual photon\ndirection.\nIn the following we will also use the asymmetry\n\\ALL\\\nwhich is defined for reaction (\\ref{murho}) as the asymmetry of\nmuon--nucleon cross sections for antiparallel and parallel beam and\ntarget longitudinal spin orientations.\n\n\nIn the Regge approach \\cite{manaenkov} the longitudinal double spin asymmetry\n$A_1^{\\rho }$ can arise due \nto the interference of amplitudes for exchange in the $t$-channel of Reggeons with natural parity\n(Pomeron, $\\rho $, $\\omega $, $f$, $A_2$ ) with amplitudes for Reggeons with \nunnatural parity \n($\\pi , A_1$).\nNo significant asymmetry is expected \nwhen only a non-perturbative \nPomeron is exchanged because it has small spin-dependent couplings as found from \nhadron-nucleon data for cross sections and polarisations.\n\nSimilarly, in the approach of Fraas \\cite{fraas76}, \nassuming approximate validity of \nSCHC, the spin asymmetry $A_1^{\\rho }$ arises from the interference between\nparts of the helicity amplitudes for transverse photons corresponding\nto the natural and unnatural parity exchanges\nin the $t$ channel. While a measurable asymmetry can arise even from a small\ncontribution of the unnatural parity exchange, the latter may remain\nunmeasurable in the cross sections.\nA significant unnatural-parity contribution may indicate an exchange \nof certain Reggeons like $\\pi$, $A_{1}$ or in\npartonic terms an exchange of $q\\bar{q}$ pairs.\n\nIn the same reference a theoretical prediction for $A_1^{\\rho}$ was\npresented, which is based on the description of forward exclusive $\\rho^{0}$\nleptoproduction and inclusive inelastic lepton-nucleon\nscattering by the off-diagonal Generalised Vector Meson Dominance (GVMD) model,\napplied to the case of polarised lepton--nucleon scattering.\nAt the values of Bjorken variable $x < 0.2$, with additional assumptions\n\\cite{HERMES01}, \\Aor\\ can be related \nto the $A_{1}$ asymmetry for inclusive inelastic lepton scattering at the same\n\\xBj\\ as\n\\begin{equation}\n A_1^{\\rho } = \\frac{2 A_1}{1 + (A_1)^2} . \\label{A1-Fraas}\n\\end{equation} \nThis prediction is consistent with the HERMES results for both the proton\nand deuteron targets, although with rather large errors.\n\nIn perturbative QCD, there exists a general proof of factorisation \\cite{fact}\nfor exclusive vector meson production by longitudinal photons. \nIt allows a decomposition of the full amplitude for reaction (\\ref{phorho})\ninto three components:\na hard scattering amplitude for the exchange of quarks or gluons,\na distribution amplitude for the meson and \nthe non-perturbative description of the target nucleon in terms of\nthe generalised parton distributions (GPDs), which are related to the internal\nstructure\nof the nucleon.\nNo similar proof of factorisation exists for transverse virtual photons, and as a consequence\nthe interpretation of $A_{1}^{\\rho}$ in perturbative QCD is not possible\nat leading twist. 
However, a model including higher twist effects proposed\nby Martin et al. \\cite{mrt} describes the behaviour of both $\\sigma_{L}$\nas well as of $\\sigma _T$ reasonably well. An extension of this model by Ryskin\n\\cite{Ryskin} for the\nspin dependent cross sections allows to relate\n\\Aor\\ to the spin dependent GPDs of gluons and quarks in the nucleon.\nThe applicability of this model is limited to the range $Q^2 \\geq 4~(\\mathrm{GeV}\/c)^2$.\nMore recently another pQCD-inspired model involving GPDs has been proposed by Goloskokov and Kroll \n\\cite{krgo,gokr}. The non-leading twist asymmetry $A_{LL}$ results from the \ninterference between the dominant GPD $H_g$ and the helicity-dependent GPD\n$\\tilde{H}_g$. The asymmetry is estimated to be of the order \n$ k_T^2 \\tilde{H}_g \/ (Q^2 H_g )$,\nwhere $k_T$ is the transverse momentum of the quark and the antiquark.\n\nUp to now little experimental information has been available on the\ndouble spin asymmetries for exclusive leptoproduction of vector mesons. \nThe first observation of a non-zero asymmetry $A_1^{\\rho }$ in polarised\nelectron--proton deep-inelastic scattering was reported by the HERMES experiment \n\\cite{HERMES01}.\nIn the deep inelastic region $(0.8 < Q^2 < 3~(\\mathrm{GeV}\/c)^2)$\nthe measured asymmetry is equal to 0.23 $\\pm$ 0.14 (stat) $\\pm$ 0.02 (syst)\n\\cite{HERMES03},\nwith little dependence on the kinematical variables. In contrast, for the `quasi-real photoproduction' data, with \n$\\langle Q^2\\rangle = 0.13~(\\mathrm{GeV}\/c)^2$, the asymmetry for the proton target is consistent with zero.\nOn the other hand the measured asymmetry $A_1^{\\rho }$ for the polarised deuteron target and the asymmetry $A_1^{\\phi }$ for exclusive production of $\\phi $\nmeson \neither on polarised protons or deuterons\n are consistent with zero both in the deep inelastic and in the \nquasi-real photoproduction regions\n \\cite{HERMES03}.\n\nThe HERMES result indicating a non-zero $A_1^{\\rho }$ for the proton\ntarget differs from the unpublished result of similar measurements by \nthe SMC experiment \\cite{ATrip-1} at comparable values of $Q^2$\\ but at\nabout three times higher values of the photon-nucleon centre of mass energy $W$, i.e.\\ at smaller \\xBj. The SMC measurements of \n\\ALL\\ \nin several bins of $Q^2$ are consistent with zero\nfor both proton and deuteron targets. \n\n\\section{The experimental set-up}\n \\label{Sec_exper}\n\nThe experiment \\cite{setup} was performed with\nthe high intensity positive muon beam from the CERN M2 beam line.\nThe $\\mu ^{+}$ beam intensity is $2 \\cdot 10^8$ per spill of 4.8 s with a cycle time of\n16.8~s. The average beam energy is 160~GeV and the momentum spread is $\\sigma _{p}\/p = 0.05$.\nThe momentum of each beam muon is measured upstream of the experimental area in a beam\nmomentum station consisting of several planes of scintillator strips or scintillating fibres\nwith a dipole magnet in between. The precision of the momentum determination is typically\n$\\Delta p\/p \\leq 0.003$. The $\\mu ^{+}$ beam is naturally polarised by the weak decays \nof the parent\nhadrons. The polarisation of the muon varies with its energy and the \naverage polarisation is $-0.76$.\n\nThe beam traverses the two cells of the polarised target, \neach 60~cm long, 3~cm in diameter and separated by 10~cm, which are placed one after the other. 
\nThe target cells are filled with $^{6}$LiD which is used as polarised deuteron target material\nand is longitudinally polarised by dynamic nuclear polarisation (DNP).\nThe two cells are polarised in opposite directions so that data from both spin directions\nare recorded at the same time. The typical values of polarisation are about 0.50.\nA mixture of liquid $^3$He\nand $^4$He, used to refrigerate the target, and a small amount of heavier nuclei are also\npresent in the target. \nThe spin directions in the two target cells are reversed every 8 hours by rotating\nthe direction of the magnetic field in the target. \nIn this way fluxes and acceptances cancel in the calculation of spin asymmetries, provided that\nthe ratio of acceptances of the two cells remains unchanged after the reversal. \n\nThe COMPASS spectrometer is designed to reconstruct the scattered muons\nand the produced hadrons in wide momentum and angular ranges.\nIt is divided in two stages with two dipole magnets, SM1 and SM2. The first magnet, \nSM1, accepts charged particles of momenta larger than 0.4~GeV\/{\\it c}, and the second one, SM2, those\nlarger than 4~GeV\/{\\it c}. The angular acceptance of the spectrometer is limited by\nthe aperture of the polarised target magnet. For the upstream\nend of the target it is $\\pm 70$~mrad.\n\nTo match the expected particle flux at various locations in the spectrometer, COMPASS\nuses various tracking detectors. Small-angle tracking is provided by stations of\nscintillating fibres, silicon detectors, micromesh gaseous chambers and gas electron\nmultiplier chambers. Large-angle tracking devices are multiwire proportional chambers,\ndrift chambers and straw detectors. Muons are identified in large-area mini drift \ntubes and drift tubes placed downstream of hadron absorbers. Hadrons are detected by\ntwo large iron-scintillator sampling calorimeters installed in front of the absorbers\nand shielded to avoid electromagnetic contamination.\nThe identification of charged particles is possible with a RICH detector, although\nin this paper we have not utilised the information from the RICH.\n\nThe data recording system is activated by various triggers indicating the presence\nof a scattered muon and\/or an energy deposited by hadrons\nin the calorimeters. In addition to the inclusive trigger, in which the scattered muon\nis identified by coincidence signals in the trigger hodoscopes, several semi-inclusive\ntriggers were used. They select events fulfilling the requirement to detect\nthe scattered muon together with the energy deposited in the hadron calorimeters exceeding\na given threshold. In 2003 the acceptance was further extended towards high $Q^2$\nvalues by the addition of a standalone calorimetric trigger in which no condition \nis set for the scattered muon. \nThe COMPASS trigger system allows us to cover a wide range of $Q^2$, from quasi-real\nphotoproduction to deep inelastic interactions.\n \nA more detailed description of the COMPASS apparatus can be found in Ref.~\\cite{setup}\n\n\\section{Event sample}\n \\label{Sec_sample}\n\nFor the present analysis the whole data sample taken in 2002 and 2003 with the \nlongitudinally polarised target is used. For an event to be accepted for further analysis it is required to originate in the target, have a reconstructed \nbeam track, a\nscattered muon track, and only two additional tracks \nof oppositely charged hadrons associated to the primary vertex. 
\nThe fluxes of beam muons passing through each target cell are equalised using\nappropriate cuts on the position and angle of the beam tracks.\n\nThe charged\npion mass hypothesis is assigned to each hadron track and the invariant mass of two\npions, $m_{\\pi \\pi}$, calculated. A cut on the invariant mass of two pions, $0.5 < m_{\\pi \\pi} < 1~\\mathrm{GeV}\/c^2$, is applied to select\nthe $\\rho ^0$.\nAs slow recoil target particles are not detected, in order to select\nexclusive events we use the cut on the missing energy, $-2.5 < E_{miss} < 2.5~\\mathrm{GeV}$, and on the transverse momentum of $\\rho^0$ with respect to the direction of\nvirtual photon, $p_t^2 < 0.5~(\\mathrm{GeV}\/c)^2$. Here $E_{miss} = (M^{2}_{X} - M^{2}_{p})\/2M_{p}$, where $M_X$ is the missing mass of the unobserved recoiling\nsystem and $M_p$ is the proton mass.\nCoherent interactions on the\ntarget nuclei are removed by a cut $p_t^2 > 0.15~(\\rm{GeV}\/c)^2$. \nTo avoid large corrections for acceptance and misidentification of\nevents, additional cuts \n$\\nu > 30~\\mathrm{GeV}$ and $E_{\\mu'} > 20~\\mathrm{GeV}$ are applied.\n\n\nThe distributions of $m_{\\pi \\pi}$, $E_{miss}$ and $p_t^2$ are shown in \nFig.~\\ref{hadplots}. Each plot is obtained applying all cuts except\nthose corresponding to the displayed variable. \nOn the left top panel of Fig.~\\ref{hadplots} a clear peak of the \\rn\\ resonance, centred at 770~MeV\/$c^2$, is visible on the top of the small contribution of\nbackground of the non-resonant \\pip\\pin\\ pairs. Also the skewing of the resonance peak towards smaller values\nof \\mpipi, due to an interference with the non-resonant background, is noticeable. A small bump below 0.4~GeV\/$c^2$\nis due to assignment of the charged pion mass to the kaons from decays of $\\phi$ mesons.\nThe mass cuts eliminate the non-resonant background outside of the \\rn\\ peak, as well as the contribution of $\\phi$\nmesons.\n\n\\begin{figure}[t]\n \\begin{center}\n \\epsfig{figure=mpipi_0203.eps,height=5cm,width=7.5cm}\n \\epsfig{figure=Emiss_0203.eps,height=5cm,width=7.5cm}\n \\epsfig{figure=ptsq_0203.eps,height=5cm,width=7.5cm}\n \\caption{Distributions of \\mpipi\\ (top left), \\Emi\\ (top right) \nand \\pts\\ (bottom) for the exclusive sample. \nThe arrows show cuts imposed on each variable to define the final sample.}\n \\label{hadplots}\n \\end{center}\n\\end{figure}\n \nOn the right top panel of the figure the peak at $E_{miss} \\approx 0$ is the signal of \nexclusive \\rn\\ production. The width of the peak, $\\sigma \\approx 1.1~\\mathrm{GeV}$,\nis due to the spectrometer resolution. Non-exclusive events, where in addition to the recoil nucleon other undetected hadrons are produced, appear at $E_{miss} > 0$.\nDue to the finite resolution, however, they are not resolved from the exclusive peak.\nThis background\nconsists of two components: the double-diffractive events where additionally to \\rn\\ an excited nucleon state is produced in the\nnucleon vertex of reaction (\\ref{phorho}), and events with semi-inclusive \\rn\\ production, in which other\nhadrons are produced but escape detection. \n\nThe $p_t^2$ distribution shown on the bottom panel of the figure\nindicates a contribution from coherent production on target \nnuclei at small $p_t^2$ values. 
A three-exponential fit to this distribution \nwas performed, which indicates also \na contribution of non-exclusive background increasing\nwith $p_t^2$.\nTherefore to select the sample of exclusive incoherent \\rn\\ production, \nthe aforementioned $p_t^2$ cuts, indicated by arrows, were applied. \n\nAfter all selections the final sample consists of about 2.44 million events.\nThe distributions of $Q^2$, \\xBj\\ and $W$ are shown in Fig.~\\ref{kinplots}.\nThe data cover a wide range in $Q^2$\\ and \\xBj\\ which extends towards the small values \nby almost two orders of magnitude compared to the similar studies reported in \nRef.~\\cite{HERMES03}. The sharp edge of the $W$ distribution at the low $W$ values\nis a consequence of the cut applied on $\\nu $. For this sample \n$\\langle W \\rangle$ is equal to 10.2~GeV and $\\langle p_{t}^{2}\n\\rangle = 0.27(\\mathrm{GeV}\/c)^2$.\n\n\\begin{figure}[t]\n \\begin{center}\n \\epsfig{figure=Qsq_0203_incoh_liny.eps,height=5cm,width=7.5cm}\n \\epsfig{figure=Qsq_0203_incoh_logy.eps,height=5cm,width=7.5cm}\n \\epsfig{figure=xBj_0203_incoh.eps,height=5cm,width=7.5cm}\n \\epsfig{figure=Wene_0203_incoh.eps,height=5cm,width=7.5cm}\n \n \\caption{Distributions of the kinematical variables for the final sample: $Q^2$\\ with linear and\n logarithmic vertical axis scale (top left and right panels respectively), \\xBj\\ (bottom left), and the energy\n $W$ (bottom right).}\n \\label{kinplots}\n \\end{center}\n\\end{figure}\n\n\\section{Extraction of asymmetry \\Aor}\n \\label{Sec_extract} \n\nThe cross section asymmetry $A_{LL} = \n(\\sigma_{\\uparrow \\downarrow} - \\sigma_{\\uparrow \\uparrow})\/\n(\\sigma_{\\uparrow \\downarrow} + \\sigma_{\\uparrow \\uparrow})$ for reaction\n(\\ref{murho}) , for antiparallel ($\\uparrow \\downarrow $) and parallel \n($\\uparrow \\uparrow $) spins of\nthe incoming muon and the target nucleon, is related to the virtual-photon\nnucleon asymmetry $A_{1}^{\\rho}$ by\n\\begin{equation}\n A_{LL} = D \\left( A_{1}^{\\rho} + \\eta A_{2}^{\\rho} \\right) , \\label{A1rALL}\n\\end{equation}\nwhere the factors $D$ and $\\eta $ depend on the event kinematics and \n$A_{2}^{\\rho}$ is related to\nthe interference cross section for exclusive production by \nlongitudinal and transverse virtual photons. \nAs the presented results extend into the range of very small $Q^2$, \nthe exact formulae for the depolarisation factor $D$ and kinematical \nfactor $\\eta$ \\cite{JKir}\nare used without neglecting terms proportional to the lepton mass squared $m^2$.\nThe depolarisation factor is given by\n\\begin{equation}\n D(y, Q^{2}) = \n \\frac{\n y \\left[ (1 + \\gamma^{2} y\/2) (2 - y) - \n 2 y^{2} m^{2} \/ Q^{2} \\right]\n }{\n y^{2} (1 - 2 m^{2} \/ Q^{2}) (1 + \\gamma^{2}) + \n 2 (1 + R) (1 - y - \\gamma^{2} y^{2}\/4)\n }, \\label{depf}\n\\end{equation}\nwhere $R = \\sigma_{L} \/ \\sigma_{T}$, $\\sigma_{L(T)}$ is the cross section for reaction (\\ref{phorho})\ninitiated by longitudinally (transversely) polarised virtual photons, the fraction of the muon energy lost $y = \\nu \/E_{\\mu }$ and $\\gamma^{2} = Q^{2} \/ \\nu^{2}$.\nThe kinematical factor $\\eta (y, Q^{2})$ is the same as for the inclusive \nasymmetry.\n\nThe asymmetry $A_{2}^{\\rho}$ obeys the positivity limit $A_{2}^{\\rho} < \\sqrt{R}$, analogous to the one for the inclusive case.\nFor $Q^2 \\leq 0.1~(\\mathrm{GeV}\/c)^2$ the ratio $R$ for the reaction (\\ref{phorho}) is\nsmall, cf.\\ Fig.~\\ref{R-pict}, and the positivity limit constrains $A_{2}^{\\rho}$ to small values. 
Although for larger $Q^2$\ the ratio $R$ for the process (\ref{phorho}) increases
with $Q^2$, because of the small values of $\eta $ the product $\eta \sqrt{R}$ is small in the whole $Q^2$\ range of our sample.
Therefore the second term in Eq.~\ref{A1rALL} can be neglected, so that
\begin{equation}
 A_{1}^{\rho} \simeq \frac{1}{D} A_{LL},
\end{equation}
and the effect of this approximation is included in the systematic uncertainty of \Aor.
\begin{figure}[thb]
\begin{center}
 \epsfig{file=RLT.eps,height=5cm,width=7.4cm}
 \caption{The ratio $R= \sigma_L/\sigma_T$ as a function of $Q^2$\ measured in the E665 experiment.
 The curve is a fit to the data described in the text.}
 \label{R-pict}
 \end{center}
\end{figure}

The number of events $N_i$ collected from a given target cell in a given time interval
is related to the spin-independent cross section $\bar{\sigma}$ for reaction (\ref{phorho})
and to the asymmetry $A_1^{\rho}$ by
\begin{equation}
N_i = a_i \phi _i n_i \bar{\sigma } (1+P_B P_T f D A_1^{\rho } ),
\label{nevents}
\end{equation}
where $P_B$ and $P_T$ are the beam and target polarisations, $\phi _i$ is the incoming
muon flux, $a_i$ the acceptance for the target cell, $n_i$ the corresponding number of target nucleons, and $f$ the target dilution factor.
The asymmetry is extracted from the data sets taken before and after a reversal of the
target spin directions. The four relations of Eq.~\ref{nevents}, corresponding to the two
cells ($u$ and $d$) and the two spin orientations (1 and 2), lead to a second-order
equation in \Aor\ for the ratio $(N_{u,1}N_{d,2}/N_{d,1}N_{u,2})$. Here the fluxes cancel out,
as well as the acceptances if the ratio of acceptances for the two cells is the same before
and after the reversal \cite{SMClong}. In order to minimise the statistical error,
all quantities used in the asymmetry calculation are evaluated event by event with the
weight factor $w=P_BfD$. The polarisation of the beam muon, $P_B$, is obtained from
a simulation of the beam line and parameterised as a function of the beam momentum. The target polarisation is not
included in the event weight $w$ because it may vary in time and generate false
asymmetries.
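To make the weighting explicit, the following schematic NumPy sketch assembles the per-event weight $w=P_BfD$, implementing Eq.~\ref{depf} together with the parameterisation of $R(Q^2)$ introduced below. It is an illustration only, with our own variable names, and not the analysis code.
\begin{verbatim}
import numpy as np

M_MU = 0.10566  # muon mass in GeV/c^2

def depolarisation(y, q2, nu, r, m=M_MU):
    """Depolarisation factor D(y, Q^2), keeping the lepton-mass
    terms, with gamma^2 = Q^2 / nu^2."""
    gamma2 = q2 / nu**2
    num = y * ((1 + gamma2 * y / 2) * (2 - y) - 2 * y**2 * m**2 / q2)
    den = (y**2 * (1 - 2 * m**2 / q2) * (1 + gamma2)
           + 2 * (1 + r) * (1 - y - gamma2 * y**2 / 4))
    return num / den

def event_weight(p_beam, f_dilution, y, q2, nu):
    """Event weight w = P_B f D; the target polarisation P_T is
    deliberately not included in the weight (see text)."""
    r = 0.66 * q2**0.61  # R(Q^2) parameterisation discussed below
    return p_beam * f_dilution * depolarisation(y, q2, nu, r)
\end{verbatim}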
An average $P_T$ is used for each target cell and each spin orientation.

 The ratio $R$, which enters the formula for $D$ and strongly depends on $Q^2$\ for reaction (\ref{phorho}), was calculated on an event-by-event basis
using the parameterisation
\begin{equation}
 R(Q^{2}) =
 a_{0} (Q^{2})^{a_{1}} ,
\end{equation}
with $a_{0} = 0.66 \pm 0.05$ and $a_{1} = 0.61 \pm 0.09$.
The parameterisation was obtained by the Fermilab E665 experiment from a fit
to their $R$ measurements for exclusive $\rho ^0$ muoproduction on protons \cite{E665-1}.
These are shown in Fig.~\ref{R-pict} together with the fitted $Q^2$-dependence.
The preliminary COMPASS results on $R$ for incoherent exclusive $\rho ^0$
production on the nucleon \cite{sandacz}, which cover a broader kinematic region in $Q^2$,
agree reasonably well with this parameterisation.
The uncertainties of $a_{0}$ and $a_{1}$ are included in the
systematic error of $A_{1}^{\rho}$.

The dilution factor $f$ gives the fraction of events of reaction (\ref{phorho})
originating from nucleons in polarised deuterons inside the target material.
It is calculated event-by-event using the formula
\begin{equation}
 f = C_1 \cdot f_{0} = C_1 \cdot
 \frac{
 n_{\rm D}
 }{
 n_{\rm D} + \Sigma_{A} n_{\rm A}
 (\tilde{\sigma}_{\rm A} / \tilde{\sigma}_{\rm D})
 }.
 \vspace*{3mm}
\end{equation}
Here $n_{\rm D}$ and $n_{\rm A}$ denote the numbers of nucleons in the deuteron and in a nucleus of atomic mass $A$
in the target, and
$\tilde{\sigma}_{\rm D}$ and $\tilde{\sigma}_{\rm A}$ are the cross sections
 per nucleon for reaction (\ref{phorho}) occurring on the deuteron and on the nucleus of atomic mass
$A$, respectively. The sum runs over all nuclei present in the COMPASS target.
The factor $C_1$ takes into account that there are two polarised deuterons in
the $^6$LiD molecule,
as the $^6$Li nucleus is, in a first approximation, composed of a deuteron and
an $\alpha $ particle.

The measurements of $\tilde{\sigma}_{\rm A} / \tilde{\sigma}_{\rm D}$
for incoherent exclusive $\rho ^0$ production come from the NMC \cite{NMC}, E665 \cite{E665-2} and early experiments on \rn\ photoproduction \cite{BPSY}. They were fitted in Ref.~\cite{ATrip-2}
with the formula:
\begin{equation}
 \tilde{\sigma}_{\rm A} =
 \sigma_{\rm p} \cdot A^{\alpha(Q^{2}) - 1} , \hspace*{1cm}
 {\rm with}\ \ \alpha(Q^{2}) - 1 = -\frac{1}{3} \exp\{-Q^{2}/Q^{2}_{0}\},
 \label{alpha}
\end{equation}
where $\sigma_{\rm p}$ is the cross section for reaction (\ref{phorho})
on the free proton.
The value of the fitted parameter $Q^{2}_{0}$ is equal to $9 \pm 3~(\mathrm{GeV}/c)^2$.
The measured values of the parameter $\alpha $ and the fitted curve $\alpha (Q^2)$
are shown on the left panel of
Fig.~\ref{alpha-fig} taken from Ref.~\cite{ATrip-2}.
On the right panel of the figure the average value of $f$ is plotted for
the various $Q^2$ bins used in the present analysis. The values of $f$
are equal to about 0.36 in most of the $Q^2$ range, rising to about 0.38 at the
highest $Q^2$.

\begin{figure}[t]
\begin{center}
 \epsfig{file=alpha.eps,height=5cm,width=7.5cm}
 \epsfig{file=f_Qsq_0203.eps,height=5cm,width=7.5cm}
 \caption{(Left) Parameter $\alpha$ of Eq.~\ref{alpha} as a function of $Q^2$\ (from Ref.~\cite{ATrip-2}). The experimental points and the fitted curve are shown. See text for details.
(Right) The dilution factor $f$ as a function of $Q^2$.}
 \label{alpha-fig}
 \end{center}
\end{figure}

 The radiative corrections (RC) have been neglected in the present analysis, in particular in the calculation of $f$, because
they are expected to be small for reaction (\ref{murho}). They were evaluated \cite{KKurek} to be
of the order of 6\% for the NMC exclusive \rn\ production analysis.
The small values of RC are mainly due to the requirement of event exclusivity via cuts on \Emi\ and $p_t^2$,
which largely suppress the dominant external photon radiation.
The internal (infrared and virtual) RC were
estimated in Ref.~\cite{KKurek} to be of the order of 2\%.

\section{Systematic errors}
 \label{Sec_syst}

The main systematic uncertainty of \Aor\ comes from an estimate of possible false asymmetries.
In order to improve the accuracy of this estimate,
in addition to the standard sample of incoherent events, a second sample
was selected by changing the $p_t^2$ cuts to
\begin{equation}
 0 < p_{t}^{2} < 0.5~(\mathrm{GeV}/c)^{2}, \label{ptsext}
\end{equation} while keeping all the remaining selections and cuts the same
as for the `incoherent sample'.
In the following it will be
 referred to as the `extended $p_t^2$ sample'.
Such an extension of the \pts\ range allows one to obtain a sample which is
about
five times larger than the incoherent sample.
However, in addition to incoherent events such a sample contains a large fraction of events originating from coherent \rn\ production.
Therefore, for the estimate of the dilution factor $f$ a different nuclear dependence of the
exclusive cross section was used, applicable to the sum of coherent and incoherent
cross sections \cite{NMC}.
The physics asymmetries \Aor\ for both samples are consistent within statistical errors.

Possible false experimental asymmetries were searched for by modifying the selection
of data sets used for the asymmetry calculation. The grouping of the data into
configurations with opposite target polarisation was varied from large samples,
covering at most two weeks of data taking, to about 100 small samples, taken in time
intervals of the order of 16 hours.
A statistical test was performed on the distributions of asymmetries obtained
from these small samples. In each of the $Q^2$\ and \xBj\ bins the dispersion of the values of
$A_{1}^{\rho}$ around their mean agrees with the statistical error.
Time-dependent effects which would lead to a broadening of these distributions
were thus not observed. Allowing the dispersion of $A_{1}^{\rho}$ to vary within its
two standard deviations, we
obtain for each bin an upper bound for the systematic error arising from
time-dependent effects,
\begin{equation}
 \sigma_{\rm falseA,tdep} <
 0.56~\sigma_{\rm stat}.
\end{equation}
Here $\sigma _{\rm stat}$ is the statistical error on $A_{1}^{\rho}$ for the extended
$p_t^2$ sample.
The uncertainty on the estimates of possible false asymmetries due to time-dependent
effects is the dominant contribution to the total systematic error in most of the kinematical
region.

Asymmetries for configurations where spin effects cancel out were calculated to
check the cancellation of effects due to fluxes and acceptances. They were found compatible
with zero within the statistical errors.
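The statistical test described above can be sketched as follows. This is our own minimal NumPy illustration, assuming arrays of per-sample asymmetries and their statistical errors; it is not the code used in the analysis.
\begin{verbatim}
import numpy as np

def dispersion_test(a, sigma):
    """Given asymmetries `a` from small time-ordered samples and
    their statistical errors `sigma`, return the dispersion of the
    normalised residuals (pulls) around the weighted mean; a value
    compatible with 1 indicates no broadening from time-dependent
    effects."""
    w = 1.0 / sigma**2
    mean = np.sum(w * a) / np.sum(w)
    pulls = (a - mean) / sigma
    return np.std(pulls, ddof=1)
\end{verbatim}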
Asymmetries obtained with different
settings of the microwave (MW) frequency, used for dynamic nuclear polarisation (DNP), were compared in order to test possible
effects related to the orientation of the target magnetic field.
The results for the extended $p_t^2$ sample tend to show that there is a small
difference between the asymmetries for the two MW configurations.
However, because the numbers of events in the data samples taken with each MW setting
are approximately balanced, the effect of this difference on \Aor\ is
negligible for the total sample.

The systematic error on $A_{1}^{\rho}$ also contains an overall scale uncertainty of
6.5\% due to the uncertainties on $P_B$ and $P_T$. The uncertainty of the
parameterisation of $R(Q^2)$
affects the depolarisation factor $D$. The uncertainty of the dilution factor
$f$ is mostly due to the uncertainty of the parameter $\alpha (Q^2)$ which takes into account
nuclear effects in incoherent $\rho ^0$ production.
The neglect of the $A_2^{\rho }$ term mainly affects the highest bins of $Q^2$\ and \xBj.

Another source of systematic errors is the contribution of
non-exclusive background to our sample.
This background originates from two sources:
the first one is the production of
\rn\ accompanied by the dissociation of the target nucleon,
and the second one is the production of \rn\ in inclusive scattering.
In order to evaluate the amount of background in the sample of exclusive
events it is necessary to
determine the $E_{miss}$ dependence of the non-exclusive background
in the region under the exclusive peak (cf.\ Fig.~\ref{hadplots}).
For this purpose complete Monte Carlo simulations of the experiment were
used, with events
generated by either the PYTHIA 6.2 or LEPTO 6.5.1 generators.
Events generated with LEPTO come only from deep inelastic scattering and cover the range $Q^2 > 0.5~(\mathrm{GeV}/c)^2$.
Those generated with PYTHIA cover the whole kinematical range of the experiment and include exclusive production of vector mesons and processes with diffractive excitation of the target nucleon or the vector meson, in addition to inelastic
production.

The generated MC events were reconstructed
and selected for the analysis using the same procedure as for the data.
In each bin of $Q^2$\ the $E_{miss}$ distribution for the MC was
normalised to the corresponding one for the data
in the range of large $E_{miss} > 7.5~\mathrm{GeV}$. Then the normalised MC
distribution was used to estimate the number of background events under the exclusive peak in the data.
The fraction of background events in the sample of
incoherent exclusive $\rho ^0$ production was estimated to be about $0.12 \pm 0.06$
in most of the kinematical range, except in the largest $Q^2$\ region, where it is about
$0.24 \pm 0.12$.
The large uncertainties of these fractions reflect the differences between
estimates from LEPTO and PYTHIA in the region where they overlap. In
the case of PYTHIA the
uncertainties on the cross sections for diffractive photo- and
electroproduction of vector mesons also contribute.
For events generated with PYTHIA the $E_{miss}$
distributions for the various physics processes could be studied separately.
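The normalisation procedure just described can be summarised by the following schematic sketch, given purely for illustration with hypothetical variable names; it is not the actual analysis code.
\begin{verbatim}
import numpy as np

def background_fraction(emiss_data, emiss_mc,
                        peak=(-2.5, 2.5), sideband=7.5):
    """Scale the MC E_miss distribution to the data at large E_miss
    and use the scaled MC to estimate the fraction of background
    events under the exclusive peak."""
    scale = np.sum(emiss_data > sideband) / np.sum(emiss_mc > sideband)
    n_bkg = scale * np.sum((emiss_mc > peak[0]) & (emiss_mc < peak[1]))
    n_data = np.sum((emiss_data > peak[0]) & (emiss_data < peak[1]))
    return n_bkg / n_data
\end{verbatim}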
It was found
that events of \rn\ production with an
excitation of the target nucleon into $N^*$ resonances
of small mass, $M < 2~\mathrm{GeV}/c^2$, cannot be resolved from the exclusive
peak and were therefore not included
in the estimates of the number of background events.

An estimate of the asymmetry \Aor\ for the background was obtained using a non-exclusive sample,
which was selected with the standard cuts used in this analysis,
except for the cut on $E_{miss}$, which was modified to $E_{miss} > 2.5$~GeV. In different high-$E_{miss}$ bins \Aor\ for this sample was
found compatible with zero.

Because no indication of a non-zero \Aor\ for the background was found,
and also due to the large uncertainty of the estimated amount of background
in the exclusive sample, no background corrections were made.
Instead, the effect of the background was treated as a source
of systematic error. Its contribution
to the total systematic error was not significant in most of the kinematical
range, except for the highest $Q^2$ and $x$.

The total systematic error on \Aor\ was obtained as the quadratic sum of the errors from
all the sources discussed above.
Its values for each $Q^2$\ and \xBj\ bin are given
in Tables~\ref{binsq2-1} and \ref{binsx-1}. The total systematic error
amounts to about 40\% of the
statistical error for most of the kinematical range. Both errors become
comparable in the highest bin of $Q^2$.

\section{Results}
 \label{Sec_resu}

The COMPASS results on $A^{\rho}_1$ are shown as a function of $Q^2$\ and \xBj\
in Fig.~\ref{A1-Com} and listed in
Tables~\ref{binsq2-1} and \ref{binsx-1}.
The statistical errors are represented by vertical bars and the total systematic
errors by shaded bands.

\begin{figure}[ht]
\begin{center}
 \epsfig{file=A1_Qsq_incoh_cons_0203.eps,height=5cm,width=7.5cm}
 \epsfig{file=A1_xBj_incoh_cons_0203.eps,height=5cm,width=7.5cm}
 \caption{\Aor\ as a function of $Q^2$\ (left) and \xBj\ (right) from the present analysis.
Error bars correspond to statistical errors, while the bands at the bottom represent the systematic errors.}
 \label{A1-Com}
 \end{center}
\end{figure}
\begin{table}[ht]
 \caption{Asymmetry $A_1^{\rho }$ as a function of $Q^2$.
Both the statistical errors (first) and the total systematic errors (second) are listed.}
 \label{binsq2-1}
 \small
 \begin{center}
 \begin{tabular}{||c|c|c|c|c||}
 \hline
 \hline
 \raisebox{0mm}[4mm][2mm]{$Q^2$\ range}\ \ &
 $\langle Q^{2} \rangle$ [$(\mathrm{GeV}/c)^2$] & $\langle x \rangle$ &
 $\langle \nu \rangle$ [GeV] & $A_1^{\rho }$ \\ \hline
 \hline
 \raisebox{0mm}[4mm][2mm] \ \ $0.0004 - 0.005~$ & \ \ \ 0.0031\ \ \ & $4.0 \cdot 10^{-5}$ & 42.8 & $-0.030 \pm 0.045 \pm 0.014$\\
 \hline
 \raisebox{0mm}[4mm][2mm] \ \ $0.005 - 0.010$ & ~0.0074 & $8.4 \cdot 10^{-5}$ & 49.9 & $~~0.048 \pm 0.038 \pm 0.013$\\
 \hline
 \raisebox{0mm}[4mm][2mm] \ \ $0.010 - 0.025$ & 0.017 & $1.8 \cdot 10^{-4}$ & 55.6 & $~~0.063 \pm 0.026 \pm 0.014$\\
 \hline
 \raisebox{0mm}[4mm][2mm] \ \ $0.025 - 0.050$ & 0.036 & $3.7 \cdot 10^{-4}$ & 59.9 & $-0.035 \pm 0.027 \pm 0.009$\\
 \hline
 \raisebox{0mm}[4mm][2mm] \ \ $0.05 - 0.10$ & 0.072 & $7.1 \cdot 10^{-4}$ & 62.0 & $-0.010 \pm 0.028 \pm 0.008$\\
 \hline
 \raisebox{0mm}[4mm][2mm] \ \ $0.10 - 0.25$ & 0.16~ & 0.0016 & 62.3 & $-0.019 \pm 0.029 \pm 0.009$\\
 \hline
 \raisebox{0mm}[4mm][2mm] \ \ $0.25 - 0.50$ & 0.35~ & 0.0036 & 60.3 & $~~0.016 \pm 0.045 \pm 0.014$\\
 \hline
 \raisebox{0mm}[4mm][2mm] \ \ $0.5 - 1~~$ & 0.69~ & 0.0074 & 58.6 & $~~0.141 \pm 0.069 \pm 0.030$\\
 \hline
 \raisebox{0mm}[4mm][2mm] \ \ $1 - 4$ & 1.7~~ & 0.018~~ & 59.7 & $~~0.000 \pm 0.098 \pm 0.035$\\
 \hline
 \raisebox{0mm}[4mm][2mm] \ \ $~4 - 50$ & 6.8~~ & 0.075~~ & 55.9 & $-0.85 \pm 0.50 \pm 0.39 $\\
 \hline
 \hline
 \end{tabular}
 \end{center}
 \normalsize
\end{table}
\begin{table}[ht]
 \caption{Asymmetry $A_1^{\rho }$ as a function of \xBj.
Both the statistical errors (first) and the total systematic errors (second) are listed.}
 \label{binsx-1}
 \small
 \begin{center}
 \begin{tabular}{||c|c|c|c|c||}
 \hline
 \hline
 \raisebox{0mm}[4mm][2mm]{\xBj\ range}\ \ & $\langle x \rangle$ & $\langle Q^{2} \rangle$ [$(\mathrm{GeV}/c)^2$] &
 $\langle \nu \rangle$ [GeV] & $A_1^{\rho }$ \\ \hline
 \hline
 \raisebox{0mm}[4mm][2mm] \ \ $8 \cdot 10^{-6} - 1 \cdot 10^{-4}$ & $5.8 \cdot 10^{-5}$ & ~0.0058 & 51.7 & $~~0.035 \pm 0.026 \pm 0.011$ \\
 \hline
 \raisebox{0mm}[4mm][2mm] \ \ $1 \cdot 10^{-4} - 2.5 \cdot 10^{-4}$ & $1.7 \cdot 10^{-4}$ & 0.019 & 59.7 & $~~0.036 \pm 0.024 \pm 0.010$\\
 \hline
 \raisebox{0mm}[4mm][2mm] \ \ $2.5 \cdot 10^{-4} - 5 \cdot 10^{-4}$ & $3.6 \cdot 10^{-4}$ & 0.041 & 61.3 & $-0.039 \pm 0.027 \pm 0.012$\\
 \hline
 \raisebox{0mm}[4mm][2mm] \ \ $5 \cdot 10^{-4} - 0.001$ & $7.1 \cdot 10^{-4}$ & 0.082 & 60.8 & $-0.010 \pm 0.030 \pm 0.010$\\
 \hline
 \raisebox{0mm}[4mm][2mm] \ \ $0.001 - 0.002$ & 0.0014 & 0.16~~ & 58.6 & $-0.005 \pm 0.036 \pm 0.013$ \\
 \hline
 \raisebox{0mm}[4mm][2mm] \ \ $0.002 - 0.004$ & 0.0028 & 0.29~~ & 54.8 & $ ~~0.032 \pm 0.050 \pm 0.019$ \\
 \hline
 \raisebox{0mm}[4mm][2mm] \ \ $0.004 - 0.01~~$ & 0.0062 & 0.59~~ & 50.7 & $ ~~0.019 \pm 0.069 \pm 0.026$ \\
 \hline
 \raisebox{0mm}[4mm][2mm] \ \ $0.01 - 0.025$ & 0.015~~ & 1.3~~~~ & 47.5 & $-0.03 \pm 0.14 \pm 0.06$ \\
 \hline
 \raisebox{0mm}[4mm][2mm] \ \ $0.025 - 0.8~~~~$ & 0.049~~ & 3.9~~~~ & 43.8 & $-0.27 \pm 0.38 \pm 0.19$ \\
 \hline
 \hline
 \end{tabular}
 \end{center}
 \normalsize
\end{table}

The wide range in $Q^2$
covers four orders of magnitude from $3 \cdot 10^{-3}$ to 7~$(\mathrm{GeV}/c)^2$. The domain in \xBj, which is strongly correlated
with $Q^2$, varies from $5 \cdot 10^{-5}$ to about 0.05 (see Tables for more details).
For the whole kinematical range the \Aor\
asymmetry measured by COMPASS is consistent with zero. As discussed in the introduction,
this indicates that the role of unnatural parity exchanges,
like $\pi$- or $A_{1}$-Reggeon exchange,
is small in that kinematical domain, which is to be expected if diffraction
is the dominant process for reaction (\ref{phorho}).

In Fig.~\ref{A1-Com-Her} the COMPASS results are compared to the HERMES results on $A^{\rho}_1$ obtained
on a deuteron target \cite{HERMES03}.
Note that the lowest $Q^2$\ and \xBj\ HERMES points, referred to as `quasi-photoproduction', come from measurements where the kinematics of the small-angle scattered
electron was not measured but estimated from an MC simulation. This is in contrast to COMPASS,
where the scattered muon kinematics is measured even at the smallest $Q^2$.
\begin{figure}[ht]
\begin{center}
 \epsfig{file=A1_Qsq_incoh_cons_0203_wHermes.eps,height=5cm,width=7.5cm}
 \epsfig{file=A1_xBj_incoh_cons_0203_wHermes.eps,height=5cm,width=7.5cm}
 \caption{\Aor\ as a function of $Q^2$\ (left) and \xBj\ (right) from the present analysis (circles)
compared to HERMES results on the deuteron target (triangles). For the COMPASS results, inner bars represent statistical errors, while the outer bars correspond to the total error.
 For the HERMES results, vertical bars represent the quadratic sum of statistical and systematic errors.
The curve represents the prediction explained in the text.}
 \label{A1-Com-Her}
\end{center}
\end{figure}

The results from both experiments are consistent within errors.
The kinematical range covered by the present analysis extends further towards small
values of \xBj\ and $Q^2$\ by almost two orders of
magnitude. In each of the two experiments $A^{\rho}_1$ is measured
at a different average $W$, equal to about 10~GeV for
COMPASS and 5~GeV for HERMES. Thus, no significant $W$ dependence
is observed for $A^{\rho}_1$ on an isoscalar nucleon target.

The \xBj\ dependence of the measured \Aor\ is compared in Fig.~\ref{A1-Com-Her} to the prediction given by Eq.~\ref{A1-Fraas}, which
relates $A_1^{\rho}$ to the asymmetry $A_{1}$ for inclusive inelastic
lepton-nucleon scattering. To produce the curve,
the inclusive asymmetry $A_{1}$ was parameterised as
$A_1(x) = (x^{\alpha } - \gamma^{\alpha }) \cdot (1- e^{-\beta x})$, where
$\alpha = 1.158 \pm 0.024$, $\beta = 125.1 \pm 115.7$ and $\gamma = 0.0180 \pm 0.0038$.
The values of the parameters have been obtained
from a fit of $A_{1}(x)$ to the world data
from polarised deuteron targets \cite{smc,e143,e155_d,smc_lowx,hermes_new,compass_a1_recent}, including the COMPASS measurements at very
low $Q^2$ and \xBj\ \cite{compass_a1_lowq2}. Within the present accuracy the results on \Aor\ are consistent with this prediction.

In the highest $Q^2$\ bin, $\langle Q^{2} \rangle = 6.8~(\mathrm{GeV}/c)^2$, which lies in the kinematical domain of applicability of pQCD-inspired models relating
the asymmetry to the spin-dependent
GPDs for gluons and quarks (cf.\ Introduction),
one can observe a hint of a possible nonzero asymmetry, although with a
large error.
It should be noted that in Ref.~\cite{ATrip-1} a negative value of $A_{LL}$, different from zero by about 2 standard deviations,
was reported at $\langle Q^{2} \rangle = 7.7~(\mathrm{GeV}/c)^2$.
At COMPASS, including the data
taken with the longitudinally polarised deuteron target in 2004 and 2006 will result in an
increase of
statistics by a factor of about three compared to the present paper,
and thus may help to clarify the issue.

For the whole $Q^2$ range, future COMPASS data, to be taken with the polarised proton target, would be very valuable for checking whether the role of flavour-blind
exchanges is indeed dominant, as expected for the Pomeron-mediated process.

\section{Summary}
 \label{Sec_summ}

The longitudinal double spin asymmetry \Aor\ for the diffractive muoproduction of the \rn\ meson, $\mu + N \rightarrow
\mu + N + \rho$, has been measured by scattering longitudinally polarised muons off longitudinally polarised deuterons from the
$^6$LiD target and selecting incoherent exclusive $\rho^0$ production.
The presented results for the COMPASS 2002 and 2003 data cover a range of energy $W$ from about 7 to 15~GeV.

The $Q^2$\ and \xBj\ dependence of \Aor\ is presented in a wide kinematical range,
$3 \cdot 10^{-3} \leq Q^{2} \leq 7~(\mathrm{GeV}/c)^2$ and $5 \cdot 10^{-5} \leq x \leq 0.05$.
These results extend the range in $Q^2$\ and \xBj\ downwards by two orders of magnitude
with respect to the existing data from HERMES.

The asymmetry $A^{\rho}_1$ is compatible with zero in the whole \xBj\ and
$Q^2$\ range.
This may indicate that the role of unnatural parity exchanges like $\pi$- or $A_{1}$-Reggeon exchange is
small in that kinematical domain.
The \xBj\ dependence of the measured \Aor\ is consistent with the prediction of Ref.~\cite{HERMES01}, which relates $A^{\rho}_1$
to the asymmetry $A_{1}$ for inclusive inelastic lepton--nucleon scattering.

\section{Acknowledgements}
 \label{Sec_ack}

We gratefully acknowledge the support of the CERN management and staff and
the skill and effort of the technicians of our collaborating institutes.
Special thanks are due to V. Anosov and V. Pesaro for their support during the
installation and the running of the experiment. This work was made possible by
the financial support of our funding agencies.

\section{Introduction}
Many tasks in natural language processing, computational biology, reinforcement learning, and time series analysis rely on learning
with sequential data, i.e.\ estimating functions defined over sequences of observations from training data.
Weighted finite automata~(WFAs) and recurrent neural networks~(RNNs) are two powerful and flexible classes of models which can efficiently represent such functions.
On the one hand, WFAs are tractable, they encompass a wide range of machine learning models~(they can for example compute any probability distribution defined by a hidden Markov
model~(HMM)~\cite{denis2008rational} and can model the transition and observation behavior of
partially observable Markov decision processes~\cite{thon2015links}) and they offer appealing theoretical guarantees. In particular,
the so-called
\emph{spectral
methods} for learning HMMs~\cite{hsu2009spectral}, WFAs~\cite{bailly2009grammatical,balle2014spectral} and related models~\cite{glaude2016pac,boots2011closing}
provide an alternative to Expectation-Maximization based algorithms that is both computationally efficient and
consistent.
On the other hand, RNNs are
remarkably expressive models --- they can represent any computable function~\cite{siegelmann1992computational} --- and they have successfully
tackled many practical problems in speech and audio recognition~\cite{graves2013speech,mikolov2011extensions,gers2000learning}, but
their theoretical analysis is difficult. Even though recent work provides interesting results on their
expressive power~\cite{khrulkov2018expressive,yu2017long} as well as alternative training algorithms coming with learning guarantees~\cite{sedghi2016training},
the theoretical understanding of RNNs is still limited.

\footnotetext{\footnotemark[1] Mila \footnotemark[2] Université de Montréal \footnotemark[3] McGill University}

\renewcommand*{\thefootnote}{\arabic{footnote}}

In this work, we bridge a gap between these two classes of models by unraveling a fundamental connection between WFAs and second-order RNNs~(2-RNNs):
\textit{when considering input sequences of discrete symbols, 2-RNNs with linear activation functions and WFAs are one and the same}, i.e.\ they are expressively
equivalent and there exists a one-to-one mapping between the two classes~(moreover, this mapping preserves model sizes). While connections between
finite state machines~(e.g.\ deterministic finite automata) and recurrent neural networks have been noticed and investigated in the past~(see e.g.\ \cite{giles1992learning,omlin1996constructing}), to the
best of our knowledge this is the first time that such a \rev{rigorous} equivalence between linear 2-RNNs and \emph{weighted} automata is explicitly formalized.
\n\\rev{More precisely, we pinpoint exactly the class of recurrent neural architectures to which weighted automata are equivalent, namely second-order RNNs with\nlinear activation functions.}\nThis result naturally leads to the observation that linear 2-RNNs are a natural generalization of WFAs~(which take sequences of \\emph{discrete} observations as\ninputs) to sequences of \\emph{continuous vectors}, and raises the question of whether the spectral learning algorithm for WFAs can be extended to linear 2-RNNs. \nThe second contribution of this paper is to show that the answer is in the positive: building upon the spectral learning algorithm for vector-valued WFAs introduced\nrecently in~\\cite{rabusseau2017multitask}, \\emph{we propose the first provable learning algorithm for second-order RNNs with linear activation functions}.\nOur learning algorithm relies on estimating sub-blocks of the so-called Hankel tensor, from which the parameters of a 2-linear RNN can be recovered\nusing basic linear algebra operations. One of the key technical difficulties in designing this algorithm resides in estimating\nthese sub-blocks from training data where the inputs are sequences of \\emph{continuous} vectors. \nWe leverage multilinear properties of linear 2-RNNs and the fact that the Hankel sub-blocks can be reshaped into higher-order tensors of low tensor train rank~(a\nresult we believe is of independent interest) to perform this estimation efficiently using matrix sensing and tensor recovery techniques.\nAs a proof of concept, we validate our theoretical findings in a simulation study on toy examples where we experimentally compare \ndifferent recovery methods and investigate the robustness of our algorithm to noise and rank mis-specification. \\rev{We also show that refining the estimator returned\nby our algorithm using stochastic gradient descent can lead to significant improvements.}\n\n\\rev{\n\\paragraph{Summary of contributions.} We formalize a \\emph{strict equivalence between weighted automata and second-order RNNs with linear activation\nfunctions}~(Section~\\ref{sec:WFAs.and.2RNNs}), showing that linear 2-RNNs can be seen as a natural extension of (vector-valued) weighted automata for input sequences of \\emph{continuous} vectors. We then\npropose a \\emph{consistent learning algorithm for linear 2-RNNs}~(Section~\\ref{sec:Spectral.learning.of.2RNNs}).\nThe relevance of our contributions can be seen from two perspectives.\nFirst, while learning feed-forward neural networks with linear activation functions is a trivial task (it reduces to linear or reduced-rank regression), this\nis not at all the case for recurrent architectures with linear activation functions; to the best of our knowledge, our algorithm is the \\emph{first consistent learning algorithm\nfor the class of functions computed by linear second-order recurrent networks}. Second, from the perspective of learning weighted automata, we propose a natural extension of WFAs to continuous inputs and \\emph{our learning algorithm addresses the long-standing limitation of the spectral learning method to discrete inputs}.\n}\n\n\\paragraph{Related work.}\nCombining the spectral learning algorithm for WFAs with matrix completion techniques~(a problem which is closely related to matrix sensing) has\nbeen theoretically investigated in~\\cite{balle2012spectral}. 
An extension of probabilistic transducers to continuous inputs~(along with a spectral learning algorithm) has been proposed in~\cite{recasens2013spectral}.
The connections between tensors and RNNs have been previously leveraged to study the expressive power of RNNs in~\cite{khrulkov2018expressive}
and to achieve model compression in~\cite{yu2017long,yang2017tensor,tjandra2017compressing}.
Exploring relationships between RNNs and automata has recently received a renewed interest~\cite{peng2018rational,chen2018recurrent,li2018nonlinear}. In particular, such connections have been explored for interpretability purposes~\cite{weiss2018extracting,ayache2018explaining} and the ability of RNNs to learn classes of formal languages
has been investigated in~\cite{avcu2017subregular}. \rev{Connections between the tensor train decomposition and WFAs have been previously noticed in~\cite{critch2013algebraic,critch2014algebraic,rabusseau2016thesis}.}
The predictive state RNN model introduced in~\cite{downey2017predictive} is closely related to 2-RNNs and the authors propose
to use the spectral learning algorithm for predictive state representations to initialize a gradient-based algorithm; their approach however comes without
theoretical guarantees. Lastly, a provable algorithm for RNNs relying on the tensor method of moments has been proposed in~\cite{sedghi2016training} but
it is limited to first-order RNNs with quadratic activation functions~(which do not encompass linear 2-RNNs).

\emph{The proofs of the results given in the paper can be found in the supplementary material.}

\section{Preliminaries}\label{sec:prelim}
In this section, we first present basic notions of tensor algebra before introducing second-order recurrent neural
networks,
weighted finite automata and the spectral learning algorithm.
We start by introducing some notation.
For any integer $k$ we use $[k]$ to denote the set of integers from $1$ to $k$. We use
$\lceil l \rceil$ to denote the smallest integer greater than or equal to $l$.
For any set $\Scal$, we denote by $\Scal^*=\bigcup_{k\in\Nbb}\Scal^k$ the set of all
finite-length sequences of elements of $\Scal$~(in particular,
$\Sigma^*$ will denote the set of strings on a finite alphabet $\Sigma$).
We use lower case bold letters for vectors (e.g.\ $\vec{v} \in \Rbb^{d_1}$),
upper case bold letters for matrices (e.g.\ $\mat{M} \in \Rbb^{d_1 \times d_2}$) and
bold calligraphic letters for higher order tensors (e.g.\ $\ten{T} \in \Rbb^{d_1
\times d_2 \times d_3}$). We use $\ten_i$ to denote the $i$th canonical basis
vector of $\Rbb^d$~(where the dimension $d$ will always be clear from the context).
The $d\times d$ identity matrix will be written as $\mat{I}_d$.
The $i$th row (resp. column) of a matrix $\mat{M}$ will be denoted by
$\mat{M}_{i,:}$ (resp. $\mat{M}_{:,i}$). This notation is extended to
slices of a tensor in the straightforward way.
If $\vec{v} \in \Rbb^{d_1}$ and $\vec{v}' \in \Rbb^{d_2}$, we use $\vec{v} \otimes \vec{v}' \in \Rbb^{d_1
\cdot d_2}$ to denote the Kronecker product between vectors, and its
straightforward extension to matrices and tensors.
Given a matrix $\mat{M} \in \Rbb^{d_1 \times d_2}$, we use $\vectorize{\mat{M}} \in \Rbb^{d_1
\cdot d_2}$ to denote the column vector obtained by concatenating the columns of
$\mat{M}$.
The inverse of $\mat{M}$ is denoted by $\mat{M}^{-1}$, its Moore-Penrose pseudo-inverse
by $\mat{M}^\dagger$, and the transpose of its inverse by $\mat{M}^{-\top}$; the Frobenius norm
is denoted by $\norm{\mat{M}}_F$ and the nuclear norm by $\norm{\mat{M}}_*$.

\paragraph{Tensors.}
We first recall basic definitions of tensor algebra; more details can be found
in~\cite{Kolda09}.
A \emph{tensor} $\ten{T}\in \Rbb^{d_1\times\cdots \times d_p}$ can simply be seen
as a multidimensional array $(\ten{T}_{i_1,\cdots,i_p}\ : \ i_n\in [d_n], n\in [p])$. The
\emph{mode-$n$} fibers of $\ten{T}$ are the vectors obtained by fixing all
indices except the $n$th one, e.g.\ $\ten{T}_{:,i_2,\cdots,i_p}\in\Rbb^{d_1}$.
The \emph{$n$th mode matricization} of $\ten{T}$ is the matrix having the
mode-$n$ fibers of $\ten{T}$ for columns and is denoted by
$\tenmat{T}{n}\in \Rbb^{d_n\times d_1\cdots d_{n-1}d_{n+1}\cdots d_p}$.
The vectorization of a tensor is defined by $\vectorize{\ten{T}}=\vectorize{\tenmat{T}{1}}$.
In the following $\ten{T}$ always denotes a tensor of size $d_1\times\cdots \times d_p$.

The \emph{mode-$n$ matrix product} of the tensor $\ten{T}$ and a matrix
$\mat{X}\in\Rbb^{m\times d_n}$ is a tensor denoted by $\ten{T}\ttm{n}\mat{X}$. It is
of size $d_1\times\cdots \times d_{n-1}\times m \times d_{n+1}\times
\cdots \times d_p$ and is defined by the relation
$\ten{Y} = \ten{T}\ttm{n}\mat{X} \Leftrightarrow \tenmat{Y}{n} = \mat{X}\tenmat{T}{n}$.
The \emph{mode-$n$ vector product} of the tensor $\ten{T}$ and a vector
$\vec{v}\in\Rbb^{d_n}$ is a tensor defined by $\ten{T}\ttv{n}\vec{v} = \ten{T}\ttm{n}\vec{v}^\top
\in \Rbb^{d_1\times\cdots \times d_{n-1}\times d_{n+1}\times
\cdots \times d_p}$.
It is easy to check that the $n$-mode product satisfies $(\ten{T}\ttm{n}\mat{A})\ttm{n}\mat{B} = \ten{T}\ttm{n}\mat{BA}$
where we assume compatible dimensions of the tensor $\ten{T}$ and
the matrices $\mat{A}$ and $\mat{B}$.

Given strictly positive integers $n_1,\cdots, n_k$ satisfying
$\sum_i n_i = p$, we use the notation $\tenmatgen{\ten{T}}{n_1,n_2,\cdots,n_k}$ to denote the $k$th order tensor
obtained by reshaping $\ten{T}$ into a tensor\footnote{Note that the specific ordering used to perform matricization, vectorization
and such a reshaping is not relevant as long as it is consistent across all operations.} of size
$(\prod_{i_1=1}^{n_1} d_{i_1}) \times (\prod_{i_2=1}^{n_2} d_{n_1 + i_2}) \times \cdots \times (\prod_{i_k=1}^{n_k} d_{n_1+\cdots+n_{k-1} + i_k})$.
In particular we have $\tenmatgen{\ten{T}}{p} = \vectorize{\ten{T}}$ and $\tenmatgen{\ten{T}}{1,p-1} = \tenmat{\ten{T}}{1}$.

A rank $R$ \emph{tensor train (TT) decomposition}~\cite{oseledets2011tensor} of a tensor
$\ten{T}\in\Rbb^{d_1\times \cdots\times d_p}$ consists in factorizing $\ten{T}$ into the product of $p$ core tensors
$\ten{G}_1\in\Rbb^{d_1\times R},\ten{G}_2\in\Rbb^{R\times d_2\times R},
\cdots, \ten{G}_{p-1}\in\Rbb^{R\times d_{p-1} \times R},
\ten{G}_p \in \Rbb^{R\times d_p}$, and is defined\footnote{The classical definition of the TT-decomposition allows the rank $R$ to be different
for each mode, but this definition is sufficient for the purpose of this paper.} by
\begin{align*}
\ten{T}_{i_1,\cdots,i_p} =
(\ten{G}_1)_{i_1,:}(\ten{G}_2)_{:,i_2,:}\cdots
(\ten{G}_{p-1})_{:,i_{p-1},:}(\ten{G}_p)_{:,i_p}
\end{align*}
for all indices $i_1\in[d_1],\cdots,i_p\in[d_p]$; we will use the notation $\ten{T} = \TT{\ten{G}_1,\cdots,\ten{G}_p}$
to denote such a decomposition. A tensor network representation of this decomposition is shown in Figure~\ref{fig:tn.TT}.
While the problem of finding the best approximation of TT-rank $R$
of a given tensor is NP-hard~\cite{hillar2013most},
a quasi-optimal SVD-based compression algorithm~(TT-SVD) has been proposed
in~\cite{oseledets2011tensor}.
It is worth mentioning that the TT decomposition is invariant under change of basis:
for any invertible matrix $\mat{M}$ and any core tensors $\ten{G}_1,\ten{G}_2,\cdots,\ten{G}_p$, we have
$\TT{\ten{G}_1,\cdots,\ten{G}_p} = \TT{\ten{G}_1\ttm{2}\mat{M}^{-\top},\ten{G}_2\ttm{1}\mat{M}\ttm{3}\mat{M}^{-\top},\cdots,
\ten{G}_{p-1}\ttm{1}\mat{M}\ttm{3}\mat{M}^{-\top},\ten{G}_p\ttm{1}\mat{M}}$.

\begin{figure}
\begin{center}
\resizebox{0.45\textwidth}{0.08\textwidth}{%
\begin{tikzpicture}
	\input{tikz_tensor_networks}
	\node[tensor](G1){$\ten{G}_1$};
	\node[draw = none,below=0.8cm of G1](G11){};

	\node[tensor,right = 1cm of G1](G2){$\ten{G}_2$};
	\node[draw=none,below=0.8cm of G2](G22){};

	\node[tensor,right = 1cm of G2](G3){$\ten{G}_3$};
	\node[draw=none,below=0.8cm of G3](G32){};

	\node[tensor,right = 1cm of G3](G4){$\ten{G}_4$};
	\node[draw=none,below=0.8cm of G4](G41){};

	\node[tensor,left=3cm of G1](T){$\ten{T}$};
	\node[draw=none,below left = 0.2cm and 1cm of T](T1){};
	\node[draw=none,below left = 0.8cm and 0.2cm of T](T2){};
	\node[draw=none,below right = 0.8cm and 0.2cm of T](T3){};
	\node[draw=none,below right = 0.2cm and 1cm of T](T4){};
	\node[draw=none,right=1.5cm of T](eq){$=$};

	\edgeports{T}{1}{above left}{T1}{}{}{$d_1$};
	\edgeports{T}{2}{below right}{T2}{}{}{$d_2$};
	\edgeports{T}{3}{below right=-0.1cm and 0.1cm}{T3}{}{}{$d_3$};
	\edgeports{T}{4}{above right}{T4}{}{}{$d_4$};

	\edgeports{G1}{1}{below left = -0.1cm and 0.01cm }{G11}{}{}{$d_1$};

	\edgeports{G2}{1}{below left}{G1}{2}{below right}{$R$};
	\edgeports{G2}{2}{below left = -0.1cm and 0.01cm }{G22}{}{}{$d_2$};

	\edgeports{G3}{1}{below left}{G2}{3}{below right}{$R$};
	\edgeports{G3}{2}{below left = -0.1cm and 0.01cm }{G32}{}{}{$d_3$};

	\edgeports{G4}{1}{below left}{G3}{3}{below right}{$R$};
	\edgeports{G4}{2}{below left = -0.1cm and 0.01cm }{G41}{}{}{$d_4$};

\end{tikzpicture}
}%
\end{center}
\caption{Tensor network representation of a rank $R$ tensor train decomposition~(nodes represent tensors and an edge between two nodes
represents a contraction between the corresponding modes of the two tensors).}
\label{fig:tn.TT}
\end{figure}

\paragraph{Second-order RNNs.}
A \emph{second-order recurrent neural network} (2-RNN)~\cite{giles1990higher,pollack1991induction,lee1986machine}\footnote{Second-order recurrent architectures have also been successfully used more recently, see e.g.\ \cite{sutskever2011generating} and \cite{wu2016multiplicative}.} with $n$ hidden units can be defined as a tuple
$M=(\vec{h}_0,\Aten,\vecs{\vvsinfsymbol})$ where $\vec{h}_0\in\Rbb^n$ is the initial state, $\Aten\in\Rbb^{ n\times d \times n }$ is the transition tensor, and
$\vecs{\vvsinfsymbol}\in \Rbb^{p\times n}$ is the output matrix,
with $d$ and $p$ being the input and output
dimensions respectively.
A 2-RNN maps any sequence of inputs $\vec{x}_1,\cdots,\vec{x}_k\in\Rbb^d$ to
a sequence of outputs $\vec{y}_1,\cdots,\vec{y}_k\in\Rbb^p$ defined for any $ t=1,\cdots,k$ by
\begin{equation}\label{eq:2RNN.definition}
\vec{y}_t = z_2(\vecs{\vvsinfsymbol}\vec{h}_t) \text{ with }\vec{h}_t = z_1(\Aten\ttv{1}\vec{h}_{t-1}\ttv{2}\vec{x}_t)
\end{equation}
where $z_1:\Rbb^n\to\Rbb^n$ and $z_2:\Rbb^p\to\Rbb^p$ are activation functions.
Alternatively, one can think of a 2-RNN as computing
a function $f_M:(\Rbb^d)^*\to\Rbb^p$ mapping each input sequence $\vec{x}_1,\cdots,\vec{x}_k$ to the corresponding final output $\vec{y}_k$.
While $z_1$ and $z_2$ are usually non-linear
component-wise functions, we consider in this paper the case where both $z_1$ and $z_2$ are the identity, and we refer to
the resulting model as a \emph{linear 2-RNN}.
For a linear 2-RNN $M$, the function $f_M$ is multilinear in the sense that, for any integer $l$, its restriction to the domain $(\Rbb^d)^l$ is
multilinear. Another useful observation is that linear 2-RNNs are invariant under change of basis: for any invertible matrix
$\P$, the linear 2-RNN $\tilde{M}=(\P^{-\top}\vec{h}_0,\Aten\ttm{1}\P\ttm{3}\P^{-\top},\vecs{\vvsinfsymbol}\P^{\top})$ is such that $f_{\tilde{M}}=f_M$. A linear 2-RNN $M$ with $n$ states is called \emph{minimal} if its number of hidden units is minimal~(i.e.\ any linear 2-RNN computing $f_M$ has
at least $n$ hidden units).

\paragraph{Weighted automata and spectral learning.} \emph{Vector-valued weighted finite automata}~(vv-WFAs) have
been introduced in~\cite{rabusseau2017multitask} as a natural generalization of weighted automata from scalar-valued functions
to vector-valued ones. A $p$-dimensional vv-WFA with $n$ states is a tuple $A=\vvwa$
where $\vecs{\szerosymbol}\in\Rbb^n$ is the initial weights vector, $\vecs{\vvsinfsymbol}\in\Rbb^{p\times n}$ is the matrix of final weights,
and $\mat{A}^\sigma\in\Rbb^{n\times n}$ is the transition matrix for each symbol $\sigma$ in a finite alphabet $\Sigma$.
A vv-WFA $A$ computes a function $f_A:\Sigma^*\to\Rbb^p$ defined by
$$f_A(x) =\vecs{\vvsinfsymbol} (\mat{A}^{x_1}\mat{A}^{x_2}\cdots\mat{A}^{x_k})^\top\vecs{\szerosymbol} $$
for each word $x=x_1x_2\cdots x_k\in\Sigma^*$. We call a vv-WFA \emph{minimal} if its number of states
is minimal. Given a function $f:\Sigma^*\to \Rbb^p$ we denote by $\rank(f)$ the number of states of a minimal vv-WFA computing $f$~(which is
set to $\infty$ if $f$ cannot be computed by a vv-WFA).

The spectral learning algorithm for vv-WFAs relies on the following fundamental theorem relating
the rank of a function $f:\Sigma^*\to\Rbb^d$ to its Hankel tensor $\ten{H} \in \Rbb^{\Sigma^*\times\Sigma^*\times p}$, which is defined
by $\ten{H}_{u,v,:} = f(uv)$ for all $u,v\in\Sigma^*$.
\begin{theorem}[\cite{rabusseau2017multitask}]
\label{thm:fliess-vvWFA}
Let $f:\Sigma^*\to\Rbb^d$ and let $\ten{H}$ be its Hankel tensor. Then $\rank(f) = \rank(\tenmat{H}{1})$.
\end{theorem}
The vv-WFA learning algorithm leverages the fact that the proof of this theorem is constructive: one can recover a vv-WFA computing $f$
from any low rank factorization of $\tenmat{H}{1}$.
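To fix ideas, the following minimal NumPy sketch (our own illustration, not part of the original development) evaluates a linear 2-RNN according to Eq.~\ref{eq:2RNN.definition} with identity activations, and shows how a vv-WFA can be evaluated the same way on one-hot inputs.
\begin{verbatim}
import numpy as np

def linear_2rnn_output(h0, A, Omega, xs):
    """f_M(x_1,...,x_k) for a linear 2-RNN M = (h0, A, Omega);
    h0: (n,), A: (n, d, n), Omega: (p, n)."""
    h = h0
    for x in xs:
        # h_t = A x_1 h_{t-1} x_2 x_t  (mode-1 and mode-2 products)
        h = np.einsum('idj,i,d->j', A, h, x)
    return Omega @ h

# A vv-WFA (alpha, {A^sigma}, Omega) is evaluated by stacking its
# transition matrices as A[:, sigma, :] = A^sigma and feeding one-hot
# vectors:  f_A(s_1...s_k) = linear_2rnn_output(
#     alpha, A, Omega, [np.eye(d)[s] for s in word])
\end{verbatim}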
In practice, a finite sub-block $\ten{H}_{\Pcal,\Scal} \in \Rbb^{\Pcal\times \Scal\times p}$ of the Hankel tensor is used
to recover the vv-WFA, where $\Pcal,\Scal\subset\Sigma^*$ are finite sets of prefixes and suffixes forming a \emph{complete basis} for $f$, i.e.\ such that
$\rank(\tenmatpar{\ten{H}_{\Pcal,\Scal}}{1}) = \rank(\tenmat{H}{1})$. More details can be found
in~\cite{rabusseau2017multitask}.

\section{A Fundamental Relation between WFAs and Linear 2-RNNs}\label{sec:WFAs.and.2RNNs}

We start by unraveling a fundamental connection between vv-WFAs and linear 2-RNNs: vv-WFAs and
linear 2-RNNs are expressively equivalent for representing functions defined over sequences of
discrete symbols. \rev{Moreover, both models have the same capacity in the sense that there is a direct
correspondence between the hidden units of a linear 2-RNN and the states of a vv-WFA computing the same function}. More formally, we have the following theorem.
\begin{theorem}\label{thm:2RNN-vvWFA}
Any function that can be computed by a vv-WFA with $n$ states can be computed by a linear 2-RNN with $n$ hidden units.
Conversely, any function that can be computed by a linear 2-RNN with $n$ hidden units on sequences of one-hot vectors~(i.e.\ canonical basis
vectors) can be computed by a WFA with $n$ states.

More precisely, the WFA $A=\vvwa$ with $n$ states and the linear 2-RNN $M=(\vecs{\szerosymbol},\Aten,\vecs{\vvsinfsymbol})$ with
$n$ hidden units, where $\Aten\in\Rbb^{n\times \Sigma \times n}$ is defined by $\Aten_{:,\sigma,:}=\mat{A}^\sigma$ for all $\sigma\in\Sigma$, are
such that
$f_A(\sigma_1\sigma_2\cdots\sigma_k) = f_M(\vec{x}_1,\vec{x}_2,\cdots,\vec{x}_k)$ for all sequences of input symbols $\sigma_1,\cdots,\sigma_k\in\Sigma$,
where for each $i\in[k]$ the input vector $\vec{x}_i\in\Rbb^\Sigma$ is
the one-hot encoding of the symbol $\sigma_i$.
\end{theorem}

This result first implies that linear 2-RNNs defined over sequences of discrete symbols~(using one-hot encoding) \emph{can be provably learned using the spectral
learning algorithm for WFAs/vv-WFAs}; indeed, these algorithms have been proved to return consistent estimators.
\rev{Let us stress again that, contrary to the case of feed-forward architectures, learning recurrent networks with linear activation functions is not a trivial task.}
Furthermore, Theorem~\ref{thm:2RNN-vvWFA} reveals that linear 2-RNNs are a natural generalization of classical weighted automata to functions
defined over sequences of continuous vectors~(instead of discrete symbols). This naturally raises the question of whether the spectral learning algorithms
for WFAs and vv-WFAs can be extended to the general setting of linear 2-RNNs; we show that the answer is in the positive in the next section.

\section{Spectral Learning of Linear 2-RNNs}\label{sec:Spectral.learning.of.2RNNs}
In this section, we extend the learning algorithm for vv-WFAs to linear 2-RNNs, \rev{thus at the same time addressing the limitation of the spectral learning algorithm to discrete inputs and providing the first consistent learning algorithm for linear second-order RNNs.}

\subsection{Recovering 2-RNNs from Hankel Tensors}\label{subsec:SL-2RNN}
We first present an identifiability result showing how one can recover a linear 2-RNN computing a function $f:(\Rbb^d)^*\to \Rbb^p$ from observable tensors extracted from some Hankel tensor
associated with $f$.
Intuitively, we obtain this result by reducing the problem to the one of learning a vv-WFA. This is done by considering the restriction of
$f$ to canonical basis vectors; loosely speaking, since the domain of this restricted function is isomorphic to $[d]^*$, this allows us to fall back onto the
setting of sequences of discrete symbols.

Given a function $f:(\Rbb^d)^*\to\Rbb^p$, we define its Hankel tensor $\ten{H}_f\in \Rbb^{[d]^* \times [d]^* \times p}$ by
$$(\ten{H}_f)_{i_1\cdots i_s, j_1\cdots j_t,:} = f(\ten_{i_1},\cdots,\ten_{i_s},\ten_{j_1},\cdots,\ten_{j_t}),$$
for all $i_1,\cdots,i_s,j_1,\cdots,j_t\in [d]$, which is infinite in two
of its modes. It is easy to see that $\ten{H}_f$ is also the Hankel tensor associated with the function $\tilde{f}:[d]^* \to \Rbb^p$ mapping any
sequence $i_1i_2\cdots i_k\in[d]^*$ to $f(\ten_{i_1},\cdots,\ten_{i_k})$. Moreover, in the special case where $f$ can be computed by a linear 2-RNN, one
can use the multilinearity of $f$ to show that $f(\vec{x}_1,\cdots,\vec{x}_k) = \sum_{i_1,\cdots,i_k = 1}^d (\vec{x}_1)_{i_1}\cdots(\vec{x}_k)_{i_k} \tilde{f}(i_1\cdots i_k)$,
\rev{giving us some intuition on how one could} learn $f$ by learning a vv-WFA computing $\tilde{f}$ using the spectral learning algorithm.
That is, given a large enough sub-block $\ten{H}_{\Pcal,\Scal}\in \Rbb^{\Pcal\times \Scal\times p}$ of $\ten{H}_f$ for some prefix and suffix sets $\Pcal,\Scal\subseteq [d]^*$,
one should be able to recover a vv-WFA computing $\tilde{f}$ and consequently a linear 2-RNN computing $f$ using Theorem~\ref{thm:2RNN-vvWFA}.
\rev{Before devoting the remainder of this section to formalizing this intuition~(leading to Theorem~\ref{thm:2RNN-SL}), it is worth
observing that while this approach is sound, it is not realistic since it requires observing entries of the Hankel tensor $\ten{H}_f$, which implies having access to input/output examples where the inputs are \emph{sequences of canonical basis vectors}; this issue will be discussed in more detail and addressed in the
next section.}

\emph{For the sake of clarity, we present the learning algorithm for the particular case where there exists an $L$ such that
the prefix and suffix sets consisting of all sequences of length $L$, that is $\Pcal = \Scal =
[d]^L$, form a complete basis for $\tilde{f}$}~\rev{(i.e.\ the sub-block $\ten{H}_{\Pcal,\Scal}\in\Rbb^{[d]^L\times [d]^L\times p}$ of the Hankel tensor $\ten{H}_f$ is
such that $\rank(\tenmatpar{\ten{H}_{\Pcal,\Scal}}{1}) = \rank(\tenmatpar{\ten{H}_f}{1})$)}. This assumption allows us to present all the key elements of the algorithm in a simpler way; the technical details
needed to lift this assumption are given in the supplementary material.

For any integer $l$, we define the finite tensor
$\ten{H}^{(l)}_f\in \Rbb^{ d\times \cdots \times d\times p}$ of order $l+1$ by
$$ (\ten{H}^{(l)}_f)_{i_1,\cdots,i_l,:} = f(\ten_{i_1},\cdots,\ten_{i_l}) \ \ \ \text{for all } i_1,\cdots,i_l\in [d].$$
Observe that for any integer $l$, the tensor $\ten{H}^{(l)}_f$ can be obtained by reshaping a finite sub-block of the Hankel tensor $\ten{H}_f$.
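As a concrete illustration of this definition (ours, assuming query access to $f$; it is exponential in $l$ and only meant to make the construction explicit), the finite tensor $\ten{H}^{(l)}_f$ can be materialized by evaluating $f$ on all length-$l$ sequences of canonical basis vectors:
\begin{verbatim}
import numpy as np
from itertools import product

def hankel_block(f, d, p, l):
    """H^(l)_f[i_1,...,i_l,:] = f(e_{i_1},...,e_{i_l})."""
    H = np.zeros([d] * l + [p])
    I = np.eye(d)
    for idx in product(range(d), repeat=l):
        H[idx] = f([I[i] for i in idx])
    return H
\end{verbatim}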
When $f$ is computed by a linear $2$-RNN, we have the useful property that, for any integer $l$,
\begin{equation}\label{eq:hankel.evaluation}
 f(\vec{x}_1,\cdots,\vec{x}_l) = \ten{H}^{(l)}_f \ttv{1} \vec{x}_1 \ttv{2} \cdots \ttv{l} \vec{x}_l
\end{equation}
for any sequence of inputs $\vec{x}_1,\cdots,\vec{x}_l\in\Rbb^d$~(which can be shown using the
multilinearity of $f$). Another fundamental property of the tensors $\ten{H}^{(l)}_f$ is that they are of low tensor train rank. Indeed, for any $l$, one can check that
$\ten{H}^{(l)}_f = \TT{\Aten\ttv{1}\vecs{\szerosymbol}, \underbrace{\Aten, \cdots, \Aten}_{l-1\text{ times}}, \vecs{\vvsinfsymbol}^\top}$~(the tensor network representation of this
decomposition is shown in Figure~\ref{fig:Hl.TT.rank}).
This property will be particularly relevant to the learning algorithm we design in the following section, but it is also a fundamental relation
that deserves some attention on its own: it implies in particular that, beyond the classical relation between the rank of the Hankel matrix $\H_f$ and the number of states of
a minimal WFA computing $f$, the Hankel matrix possesses a deeper structure intrinsically connecting weighted automata to the tensor
train decomposition.
\rev{We now state the main result of this section, showing that a (minimal) linear 2-RNN computing a function $f$ can be exactly recovered from sub-blocks of the Hankel tensor $\ten{H}_f$.}

\begin{figure}
\begin{center}
\resizebox{0.45\textwidth}{0.08\textwidth}{%
\begin{tikzpicture}
	\input{tikz_tensor_networks}
	\node[tensor](G1){$\vecs{\szerosymbol}$};
	\node[draw=none,left of = G1](H){$\ten{H}^{(4)}_f=\ \ \ $};
	\node[tensor,right = 1cm of G1](G2){$\ten{A}$};
	\node[draw=none,below=0.8cm of G2](G22){};

	\node[tensor,right = 1cm of G2](G3){$\ten{A}$};
	\node[draw=none,below=0.8cm of G3](G32){};

	\node[tensor,right = 1cm of G3](G4){$\ten{A}$};
	\node[draw=none,below=0.8cm of G4](G42){};

	\node[tensor,right = 1cm of G4](G4-){$\ten{A}$};
	\node[draw=none,below=0.8cm of G4-](G42-){};

	\node[tensor,right = 1cm of G4-](G5){$\vecs{\vvsinfsymbol}$};
	\node[draw=none,below=0.8cm of G5](G51){};

	\edgeports{G2}{1}{below left = 0.01cm }{G1}{1}{below right}{$n$};
	\edgeports{G2}{2}{below left = -0.1cm and 0.01cm }{G22}{}{}{$d$};

	\edgeports{G3}{1}{below left}{G2}{3}{below right}{$n$};
	\edgeports{G3}{2}{below left = -0.1cm and 0.01cm }{G32}{}{}{$d$};

	\edgeports{G4}{1}{below left}{G3}{3}{below right}{$n$};
	\edgeports{G4}{2}{below left = -0.1cm and 0.01cm }{G42}{}{}{$d$};

	\edgeports{G4-}{1}{below left}{G4}{3}{below right}{$n$};
	\edgeports{G4-}{2}{below left = -0.1cm and 0.01cm }{G42-}{}{}{$d$};

	\edgeports{G5}{1}{below left}{G4-}{3}{below right}{$n$};
	\edgeports{G5}{2}{below left = -0.1cm and 0.01cm }{G51}{}{}{$p$};

\end{tikzpicture}
}%
\end{center}
\caption{Tensor network representation of the TT decomposition of the Hankel tensor $\ten{H}^{(4)}_f$ induced
by a linear $2$-RNN $(\vecs{\szerosymbol},\Aten,\vecs{\vvsinfsymbol})$.}
\label{fig:Hl.TT.rank}
\end{figure}

\begin{theorem}\label{thm:2RNN-SL}
Let $f:(\Rbb^d)^*\to \Rbb^p$ be a function computed by a minimal linear $2$-RNN with $n$ hidden units and let
$L$ be an integer such that $\rank(\tenmatgen{\ten{H}^{(2L)}_f}{L,L+1}) = n$.

Then, for any $\P\in\Rbb^{d^L\times n}$ and $\S\in\Rbb^{n\times d^Lp}$
such that $\\tenmatgen{\\ten{H}^{(2L)}_f}{L,L+1} = \\P\\S$, the\nlinear 2-RNN $M=(\\vecs{\\szerosymbol},\\Aten,\\vecs{\\vvsinfsymbol})$ defined by\n\n$$\\vecs{\\szerosymbol} = (\\S^\\dagger)^\\top\\tenmatgen{\\ten{H}^{(L)}_f}{L+1}, \\ \\ \\ \\ \\vecs{\\vvsinfsymbol}^\\top = \\P^\\dagger\\tenmatgen{\\ten{H}^{(L)}_f}{L,1}\n$$\n$$\\Aten = (\\tenmatgen{\\ten{H}^{(2L+1)}_f}{L,1,L+1})\\ttm{1}\\P^\\dagger\\ttm{3}(\\S^\\dagger)^\\top$$\nis a minimal linear $2$-RNN computing $f$.\n\\end{theorem}\nFirst observe that such an integer $L$ exists under the assumption that $\\Pcal = \\Scal = \n[d]^L$ forms a complete basis for $\\tilde{f}$.\nIt is also worth mentioning that a necessary condition for $\\rank(\\tenmatgen{\\ten{H}^{(2L)}_f}{L,L+1}) = n$ is that\n$d^L\\geq n$, i.e.\\ $L$ must be of the order $\\log_d(n)$.\n\n\n\n\n\n\n\n\n\n\n\\subsection{Hankel Tensors Recovery from Linear Measurements}\\label{subsec:Hankel.tensor.recovery}\n\n\n\nWe showed in the previous section that, given the Hankel tensors $\\ten{H}^{(L)}_f$, $\\ten{H}^{(2L)}_f$ and $\\ten{H}^{(2L+1)}_f$, one can recover \na linear 2-RNN computing $f$ if it exists. This first implies that the class of functions that can be computed by linear 2-RNNs is learnable in Angluin's\nexact learning model~\\cite{angluin1988queries} where one has access to an oracle that can answer membership queries~(e.g.\\ \\textit{what is the value computed by the target $f$ \non~$(\\vec{x}_1,\\cdots,\\vec{x}_k)$?}) and equivalence queries~(e.g.\\ \\textit{is my current hypothesis $h$ equal to the target $f$?}). While this fundamental result is \nof significant theoretical interest, assuming access to such an oracle is unrealistic. In this section, we show that a stronger learnability result can be obtained in a more realistic setting,\nwhere we only\nassume access to randomly generated input\/output examples $((\\vec{x}^{(i)}_1,\\vec{x}_2^{(i)},\\cdots,\\vec{x}_l^{(i)}),\\vec{y}^{(i)})\\in(\\Rbb^d)^*\\times\\Rbb^p$ where $\\vec{y}^{(i)} = f(\\vec{x}^{(i)}_1,\\vec{x}_2^{(i)},\\cdots,\\vec{x}_l^{(i)})$.\n\n\nThe key observation is that such an input\/output example $((\\vec{x}^{(i)}_1,\\vec{x}_2^{(i)},\\cdots,\\vec{x}_l^{(i)}),\\vec{y}^{(i)})$ can be seen as a \\emph{linear measurement} of the \nHankel tensor $\\ten{H}^{(l)}$. Indeed, we have\n\\begin{align*}\n\\vec{y}^{(i)} &= f(\\vec{x}^{(i)}_1,\\vec{x}_2^{(i)},\\cdots,\\vec{x}_l^{(i)}) = \\ten{H}^{(l)}_f \\ttv{1} \\vec{x}_1 \\ttv{2} \\cdots \\ttv{l} \\vec{x}_l \\\\\n&= \n\\tenmatgen{\\ten{H}^{(l)}}{l,1}^\\top \\vec{x}^{(i)}\n\\end{align*}\nwhere $\\vec{x}^{(i)} = \\vec{x}^{(i)}_1\\otimes\\cdots \\otimes \\vec{x}_{l}^{(i)}\\in\\Rbb^{d^l}$. Hence, by regrouping $N$ output examples $\\vec{y}^{(i)}$ into\nthe matrix $\\Ymat\\in\\Rbb^{N\\times p}$ and the corresponding input vectors $\\vec{x}^{(i)}$ into the matrix $\\mat{X}\\in\\Rbb^{N\\times d^l}$,\none can recover $\\ten{H}^{(l)}$ by solving the linear system $\\Ymat = \\mat{X}\\tenmatgen{\\ten{H}^{(l)}}{l,1}$, which has a unique\nsolution whenever $\\mat{X}$ is of full column rank. \nThis naturally leads to the following theorem, whose proof relies on the fact that\n$\\mat{X}$ will be of full column rank whenever $N\\geq d^l$ and the components of each $\\vec{x}^{(i)}_j$ for $j\\in[l],i\\in[N]$\nare drawn independently from a continuous distribution over $\\Rbb^{d}$~(w.r.t. 
the Lebesgue measure).\n\n
\\begin{theorem}\\label{thm:learning-2RNN}\nLet $(\\vec{h}_0,\\Aten,\\vecs{\\vvsinfsymbol})$ be a minimal linear 2-RNN with $n$ hidden units computing a function $f:(\\Rbb^d)^*\\to \\Rbb^p$, and let $L$ be an integer\\footnote{Note that the theorem can be adapted if such an integer $L$ does not exist~(see supplementary material).}\nsuch that $\\rank(\\tenmatgen{\\ten{H}^{(2L)}_f}{L,L+1}) = n$.\nSuppose we have access to three datasets\n$D_l = \\{((\\vec{x}^{(i)}_1,\\vec{x}_2^{(i)},\\cdots,\\vec{x}_l^{(i)}),\\vec{y}^{(i)}) \\}_{i=1}^{N_l}\\subset(\\Rbb^d)^l\\times \\Rbb^p$ for $l\\in\\{L,2L,2L+1\\}$ \nwhere the entries of each $\\vec{x}^{(i)}_j$ are drawn independently from the standard normal distribution and where each\n$\\vec{y}^{(i)} = f(\\vec{x}^{(i)}_1,\\vec{x}_2^{(i)},\\cdots,\\vec{x}_l^{(i)})$.\n\nThen, if $N_l \\geq d^l$ for $l = L,\\ 2L,\\ 2L+1$,\nthe linear 2-RNN $M$ returned by Algorithm~\\ref{alg:2RNN-SL} with the least-squares method satisfies $f_M = f$ with probability one.\n\\end{theorem}\n\n
\\begin{algorithm}[tb]\n \\caption{\\texttt{2RNN-SL}: Spectral Learning of linear 2-RNNs }\n \\label{alg:2RNN-SL}\n\\begin{algorithmic}[1]\n \\REQUIRE Three training datasets $D_L,D_{2L},D_{2L+1}$ with input sequences of length $L$, $2L$ and $2L+1$ respectively, a \\texttt{recovery\\_method}, rank $R$ and learning rate $\\gamma$~(for IHT\/TIHT).\n \n \\FOR{$l\\in\\{L,2L,2L+1\\}$}\n \\STATE\\label{alg.firstline.forloop} Use $D_l = \\{((\\vec{x}^{(i)}_1,\\vec{x}_2^{(i)},\\cdots,\\vec{x}_l^{(i)}),\\vec{y}^{(i)}) \\}_{i=1}^{N_l}\\subset(\\Rbb^d)^l\\times \\Rbb^p$ to build $\\mat{X}\\in\\Rbb^{N_l\\times d^l}$ with rows $\\vec{x}^{(i)}_1\\otimes\\vec{x}_2^{(i)}\\otimes\\cdots\\otimes\\vec{x}_l^{(i)}$ for $i\\in[N_l]$ and $\\Ymat\\in\\Rbb^{N_l\\times p}$ with rows $\\vec{y}^{(i)}$ for $i\\in[N_l]$.\n \\IF{\\texttt{recovery\\_method} = \"Least-Squares\"}\n \\STATE\\label{alg.line.lst-sq} $\\ten{H}^{(l)} = \\displaystyle\\argmin_{\\ten{T}\\in \\Rbb^{d\\times\\cdots\\times d\\times p}} \\norm{\\mat{X}\\tenmatgen{\\ten{T}}{l,1} - \\Ymat}_F^2$.\n \\ELSIF{\\texttt{recovery\\_method} = \"Nuclear Norm\"}\n \\STATE $\\ten{H}^{(l)} = \\displaystyle\\argmin_{\\ten{T}\\in \\Rbb^{d\\times\\cdots\\times d\\times p}} \\norm{\\tenmatgen{\\ten{T}}{\\ceil{l\/2},l-\\ceil{l\/2} + 1}}_*$ subject to $\\mat{X} \\tenmatgen{\\ten{T}}{l,1} = \\Ymat$.\n \\label{alg.line.nucnorm}\n \\ELSIF{\\texttt{recovery\\_method} = \"(T)IHT\"}\n \\STATE Initialize $\\ten{H}^{(l)} \\in \\Rbb^{d\\times\\cdots\\times d\\times p}$ to $\\mat{0}$.\n \\REPEAT\\label{alg.line.iht.start}\n \\STATE\\label{alg.line.iht.gradient} $\\tenmatgen{\\ten{H}^{(l)}}{l,1} = \\tenmatgen{\\ten{H}^{(l)} }{l,1} + \\gamma\\mat{X}^\\top(\\Ymat - \\mat{X}\\tenmatgen{\\ten{H}^{(l)} }{l,1})$\n \n \\STATE $\\ten{H}^{(l)} = \\texttt{project}(\\ten{H}^{(l)},R)$~(using either SVD for IHT or TT-SVD for TIHT)\n \n \\UNTIL{convergence}\\label{alg.line.iht.end}\n \\ENDIF\\label{alg.lastline.forloop} \n \n \\ENDFOR\n \\STATE\\label{alg.line.svd} Let $\\tenmatgen{\\ten{H}^{(2L)}}{L,L+1} = \\P\\S$ be a rank $R$ factorization.\n \\STATE Return the linear 2-RNN $(\\vecs{\\szerosymbol},\\Aten,\\vecs{\\vvsinfsymbol})$ where \n \\begin{align*}\n \\vecs{\\szerosymbol}\\ &= (\\S^\\dagger)^\\top\\tenmatgen{\\ten{H}^{(L)}}{L+1},\\ \\ \\ \\ \\vecs{\\vvsinfsymbol}^\\top = \\P^\\dagger\\tenmatgen{\\ten{H}^{(L)}}{L,1}\\\\\n \\Aten\\ &= (\\tenmatgen{\\ten{H}^{(2L+1)}}{L,1,L+1})\\ttm{1}\\P^\\dagger\\ttm{3}(\\S^\\dagger)^\\top\n\\end{align*} 
\n\\end{algorithmic}\n\\end{algorithm}\n\n
A few remarks on this theorem are in order. The first observation is that the three\ndatasets $D_L$, $D_{2L}$ and $D_{2L+1}$ can be drawn independently or not~(e.g.\\ the sequences in $D_{L}$ can\nbe prefixes of the sequences in $D_{2L}$, but this is not necessary). In particular, the result still holds when the datasets $D_l$ are constructed from a single dataset \n$S =\\{((\\vec{x}^{(i)}_1,\\vec{x}_2^{(i)},\\cdots,\\vec{x}_T^{(i)}),(\\vec{y}^{(i)}_1,\\vec{y}^{(i)}_2,\\cdots,\\vec{y}^{(i)}_T)) \\}_{i=1}^{N}$\nof input\/output sequences with $T\\geq 2L+1$, where $\\vec{y}^{(i)}_t = f(\\vec{x}^{(i)}_1,\\vec{x}_2^{(i)},\\cdots,\\vec{x}_t^{(i)})$ for any $t\\in[T]$.\nObserve that having access to such input\/output training sequences is not an unrealistic assumption: for example, when training RNNs for\nlanguage modeling, the output $\\vec{y}_t$ is the conditional probability vector of the next symbol, and for classification tasks the output is\nthe one-hot encoded label for all time steps. Lastly, when the outputs $\\vec{y}^{(i)}$ are noisy, one can minimize the least-squares objective\n$\\norm{\\Ymat - \\mat{X}\\tenmatgen{\\ten{H}^{(l)}}{l,1}}^2_F$ to approximate the Hankel tensors; we will empirically evaluate this approach \nin Section~\\ref{sec:xp} and we defer its theoretical analysis in the noisy setting to future work.\n\n
\\subsection{\\rev{Leveraging the low rank structure of the Hankel tensors}}\nWhile the least-squares method is sufficient to obtain the theoretical guarantees of Theorem~\\ref{thm:learning-2RNN}, it does not leverage\nthe low rank structure of the Hankel tensors $\\ten{H}^{(L)}$, $\\ten{H}^{(2L)}$ and $\\ten{H}^{(2L+1)}$. We now propose three alternative recovery\nmethods to leverage this structure, whose sample efficiency will be assessed in a simulation study in Section~\\ref{sec:xp}~(deriving improved sample\ncomplexity guarantees using these methods is left for future work). In the noiseless setting, we first propose to replace solving the linear \nsystem $\\Ymat = \\mat{X}\\tenmatgen{\\ten{H}^{(l)}}{l,1}$ with a nuclear norm minimization problem~(see line~\\ref{alg.line.nucnorm} of Algorithm~\\ref{alg:2RNN-SL}), thus leveraging the\nfact that $\\tenmatgen{\\ten{H}^{(l)}}{\\ceil{l\/2},l-\\ceil{l\/2} + 1}$ is potentially of low matrix rank. We also propose to use iterative hard thresholding~(IHT)~\\cite{jain2010guaranteed}\nand its tensor counterpart TIHT~\\cite{rauhut2017low}, which are based on the classical projected gradient descent algorithm and have been shown to be\nrobust to noise in practice. These two methods are implemented in lines~\\ref{alg.line.iht.start}-\\ref{alg.line.iht.end} of Algorithm~\\ref{alg:2RNN-SL}. There,\nthe \\texttt{project} method either projects $\\tenmatgen{\\ten{H}^{(l)}}{\\ceil{l\/2},l-\\ceil{l\/2} + 1}$ onto the manifold of rank $R$ matrices \nusing truncated SVD~(IHT) or projects $\\ten{H}^{(l)}$ onto the manifold of tensors with TT-rank $R$~(TIHT). 
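\n\nTo make the recovery step more concrete, the following sketch illustrates the least-squares estimation of the Hankel tensors~(line~\\ref{alg.line.lst-sq} of Algorithm~\\ref{alg:2RNN-SL}) and the matrix version of the iterative hard thresholding loop~(lines~\\ref{alg.line.iht.start}-\\ref{alg.line.iht.end}) in plain NumPy. It is only meant as a minimal illustration of these two recovery methods: the function names, step size and iteration count are purely illustrative, and the snippet makes no attempt at being an optimized implementation.\n\\begin{verbatim}\nimport numpy as np\n\ndef build_design_matrix(inputs):\n    # inputs: list of N sequences, each a list of l vectors in R^d.\n    # Row i of X is the Kronecker product of the vectors of sequence i.\n    rows = []\n    for seq in inputs:\n        row = seq[0]\n        for x in seq[1:]:\n            row = np.kron(row, x)\n        rows.append(row)\n    return np.stack(rows)                    # shape (N, d**l)\n\ndef hankel_least_squares(inputs, Y, d, l, p):\n    # Solves min_T || X <T>_(l,1) - Y ||_F^2; the solution is unique\n    # (and equal to the true Hankel tensor) when X has full column rank.\n    X = build_design_matrix(inputs)\n    H, _, _, _ = np.linalg.lstsq(X, Y, rcond=None)   # (d**l, p)\n    return H.reshape((d,) * l + (p,))\n\ndef hankel_iht(inputs, Y, d, l, p, R, gamma=1e-3, iters=500):\n    # Gradient step on the (l,1) flattening, then hard thresholding of\n    # the (ceil(l\/2), l - ceil(l\/2) + 1) matricization to rank R by SVD.\n    X = build_design_matrix(inputs)\n    H = np.zeros((d ** l, p))\n    n_rows = d ** ((l + 1) \/\/ 2)             # d ** ceil(l\/2)\n    for _ in range(iters):\n        H = H + gamma * X.T @ (Y - X @ H)\n        M = H.reshape(n_rows, -1)\n        U, s, Vt = np.linalg.svd(M, full_matrices=False)\n        M = (U[:, :R] * s[:R]) @ Vt[:R]      # rank R truncation\n        H = M.reshape(d ** l, p)\n    return H.reshape((d,) * l + (p,))\n\\end{verbatim}\nCalling \\texttt{hankel\\_least\\_squares} once for each $l\\in\\{L,2L,2L+1\\}$ on the datasets of Theorem~\\ref{thm:learning-2RNN} yields the three Hankel estimates that are fed to the factorization step~(line~\\ref{alg.line.svd} of Algorithm~\\ref{alg:2RNN-SL}).\n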
\n\n\\input{xp_figures.tex}\n\n\n
\\rev{\nThe low rank structure of the Hankel tensors can also be leveraged to improve the scalability of the learning algorithm.\nOne can check that the computational complexity of Algorithm~\\ref{alg:2RNN-SL} is exponential in the maximum sequence length: indeed,\nbuilding the matrix $\\mat{X}$ in line~\\ref{alg.firstline.forloop} is already in $\\bigo{N_l d^l}$, where $l$ is in turn equal to $L,\\ 2L$ and $2L+1$.\nFocusing on the TIHT recovery method, a careful analysis shows that the computational complexity of the algorithm\nis in \n$$\\bigo{d^{2L+1}\\left(p(TN+R) +R^2\\right) + TL\\max(p,d)^{2L+3}}, $$\nwhere $N=\\max(N_L,N_{2L},N_{2L+1})$ and $T$ is the number of iterations of the loop on line~\\ref{alg.line.iht.start}.\nThus, in its present form, our approach cannot scale to high dimensional inputs and long sequences. However, one can leverage\nthe low tensor train rank structure of the Hankel tensors to circumvent this issue:\nby storing \nboth the estimates of the Hankel tensors $\\ten{H}^{(l)}$ and the matrices $\\mat{X}$ in TT format~(with decompositions of ranks $R$ and $N$ respectively),\nall the operations needed to implement Algorithm~\\ref{alg:2RNN-SL} with the TIHT recovery method can be performed in time $\\bigo{T(N+R)^3(Ld + p)}$~(more details can be found in the supplementary\nmaterial). By leveraging the tensor train structure, one can thus lift the dependency on $d^{2L+1}$ by paying the price of an increased cubic complexity \nin the number of examples $N$ and the number of states $R$. While the\ndependency on the number of states is not a major issue~($R$ should be negligible w.r.t. $N$), the dependency on $N^3$ can quickly become prohibitive for realistic application scenarios. \nFortunately, this issue can be\ndealt with by using mini-batches of training data for the gradient updates on line~\\ref{alg.line.iht.gradient} instead of the whole dataset $D_l$, in which case the overall complexity\nof Algorithm~\\ref{alg:2RNN-SL} becomes $\\bigo{T(M+R)^3(Ld + p)}$ where $M$ is the mini-batch size~(the overall algorithm in TT format is summarized in Algorithm~\\ref{alg:2RNN-SL-TT} in the supplementary material).\n}\n\n
\\input{xp}\n\n\n
\\section{Conclusion and Future Directions}\n\nWe proposed the first provable learning algorithm for second-order RNNs with linear activation functions:\nwe showed that linear 2-RNNs are a natural extension of vv-WFAs to the setting of input sequences of \\emph{continuous vectors}~(rather than\ndiscrete symbols) and we extended the vv-WFA spectral learning\nalgorithm to this setting. We believe that the results presented in this paper open a number of exciting and promising research directions from both the\ntheoretical and practical perspectives. We first plan to use the spectral learning estimate as a starting point for\ngradient-based methods to train non-linear 2-RNNs. More precisely, linear 2-RNNs can be thought of as 2-RNNs using LeakyReLU activation functions with negative slope $1$; therefore, one could use \na linear 2-RNN as initialization before gradually reducing the negative slope parameter during training. The extension of the spectral method to linear 2-RNNs also\nopens the door to scaling up the classical spectral algorithm to problems with large discrete alphabets~(which is a known caveat of the spectral algorithm for WFAs) since\nit allows one to use low dimensional embeddings of large vocabularies~(using e.g.\\ word2vec or latent semantic analysis). 
From the theoretical perspective, we plan on \nderiving learning guarantees for linear 2-RNNs in the noisy setting~(e.g.\\ using the PAC learnability framework). Even though it is intuitive that such guarantees should hold~(given\nthe continuity of all operations used in our algorithm), we believe that such an analysis may entail results of independent interest. In particular, analogously to the\nmatrix case studied in~\\cite{cai2015rop}, obtaining rate-optimal convergence rates for the recovery of the low TT-rank Hankel tensors from rank one measurements is an interesting\ndirection; such a result could for example allow one to improve the generalization bounds provided in~\\cite{balle2012spectral} for spectral learning of \ngeneral WFAs. \n\n
\\newpage\n\\subsubsection*{Acknowledgements} This work was done while G. Rabusseau was an IVADO postdoctoral scholar at McGill University. \n\\bibliographystyle{plain}\n\n
\\section*{\\centering \\LARGE Connecting Weighted Automata and Recurrent Neural Networks through Spectral Learning \\\\ \\vspace*{0.3cm}(Supplementary Material)}\n\n
\\section{Proofs}\n\\subsection{Proof of Theorem~\\ref{thm:2RNN-vvWFA}}\n\\begin{theorem*}\nAny function that can be computed by a vv-WFA with $n$ states can be computed by a linear 2-RNN with $n$ hidden units.\nConversely, any function that can be computed by a linear 2-RNN with $n$ hidden units on sequences of one-hot vectors~(i.e.\\ canonical basis \nvectors) can be computed by a vv-WFA with $n$ states.\n\n\nMore precisely, the vv-WFA $A=\\vvwa$ with $n$ states and the linear 2-RNN $M=(\\vecs{\\szerosymbol},\\Aten,\\vecs{\\vvsinfsymbol})$ with\n$n$ hidden units, where $\\Aten\\in\\Rbb^{n\\times \\Sigma \\times n}$ is defined by $\\Aten_{:,\\sigma,:}=\\mat{A}^\\sigma$ for all $\\sigma\\in\\Sigma$, are\nsuch that\n$f_A(\\sigma_1\\sigma_2\\cdots\\sigma_k) = f_M(\\vec{x}_1,\\vec{x}_2,\\cdots,\\vec{x}_k)$ for all sequences of input symbols $\\sigma_1,\\cdots,\\sigma_k\\in\\Sigma$,\nwhere for each $i\\in[k]$ the input vector $\\vec{x}_i\\in\\Rbb^\\Sigma$ is\nthe one-hot encoding of the symbol $\\sigma_i$.\n\\end{theorem*}\n\\begin{proof}\nWe first show by induction on $k$ that, for any sequence $\\sigma_1\\cdots\\sigma_k\\in\\Sigma^*$, the hidden state $\\vec{h}_k$ computed by $M$~(see\nEq.~\\eqref{eq:2RNN.definition})\non the corresponding one-hot encoded sequence $\\vec{x}_1,\\cdots,\\vec{x}_k\\in\\Rbb^d$\nsatisfies $\\vec{h}_k = (\\mat{A}^{\\sigma_1}\\cdots\\mat{A}^{\\sigma_k})^\\top\\vecs{\\szerosymbol}$. The case $k=0$\nis immediate. Suppose the result is true for sequences of length up to $k$. One can easily check that $\\Aten\\ttv{2}\\vec{x}_i = \\mat{A}^{\\sigma_i}$\nfor any index $i$. 
Using the induction hypothesis it then follows that\n\\begin{align*}\n\\vec{h}_{k+1} &= \\Aten \\ttv{1}\\vec{h}_k \\ttv{2} \\vec{x}_{k+1} = \\mat{A}^{\\sigma_{k+1}}\\ttv{1} \\vec{h}_k = (\\mat{A}^{\\sigma_{k+1}})^\\top \\vec{h}_k\\\\\n&= (\\mat{A}^{\\sigma_{k+1}})^\\top (\\mat{A}^{\\sigma_1}\\cdots\\mat{A}^{\\sigma_k})^\\top\\vecs{\\szerosymbol} = (\\mat{A}^{\\sigma_1}\\cdots\\mat{A}^{\\sigma_{k+1}})^\\top\\vecs{\\szerosymbol} .\n\\end{align*} \nTo conclude, we thus have\n\\begin{equation*}\nf_M(\\vec{x}_1,\\vec{x}_2,\\cdots,\\vec{x}_k) = \\vecs{\\vvsinfsymbol}\\vec{h}_{k} = \\vecs{\\vvsinfsymbol}(\\mat{A}^{\\sigma_1}\\cdots\\mat{A}^{\\sigma_{k}})^\\top\\vecs{\\szerosymbol} = f_A(\\sigma_1\\sigma_2\\cdots\\sigma_k).\\qedhere\n\\end{equation*}\n\\end{proof}\n\n
\\subsection{Proof of Theorem~\\ref{thm:2RNN-SL}}\n\n
\\begin{theorem*}\nLet $f:(\\Rbb^d)^*\\to \\Rbb^p$ be a function computed by a minimal linear $2$-RNN with $n$ hidden units and let\n$L$ be an integer such that $\\rank(\\tenmatgen{\\ten{H}^{(2L)}_f}{L,L+1}) = n$.\n\nThen, for any $\\P\\in\\Rbb^{d^L\\times n}$ and $\\S\\in\\Rbb^{n\\times d^Lp}$ such that $\\tenmatgen{\\ten{H}^{(2L)}_f}{L,L+1} = \\P\\S$, the\nlinear 2-RNN $M=(\\vecs{\\szerosymbol},\\Aten,\\vecs{\\vvsinfsymbol})$ defined by\n$$\\vecs{\\szerosymbol} = (\\S^\\dagger)^\\top\\tenmatgen{\\ten{H}^{(L)}_f}{L+1},\\ \\ \\ \\ \\Aten = (\\tenmatgen{\\ten{H}^{(2L+1)}_f}{L,1,L+1})\\ttm{1}\\P^\\dagger\\ttm{3}(\\S^\\dagger)^\\top,\\ \\ \\ \\ \n\\vecs{\\vvsinfsymbol}^\\top = \\P^\\dagger\\tenmatgen{\\ten{H}^{(L)}_f}{L,1}$$\nis a minimal linear $2$-RNN computing $f$.\n\\end{theorem*}\n\\begin{proof}\nLet $\\P\\in\\Rbb^{d^L\\times n}$ and $\\S\\in\\Rbb^{n\\times d^Lp}$ be such that $\\tenmatgen{\\ten{H}^{(2L)}_f}{L,L+1} = \\P\\S$, and let $(\\vecs{\\szerosymbol}^\\star,\\Aten^\\star,\\vecs{\\vvsinfsymbol}^\\star)$ be a minimal linear $2$-RNN with $n$ hidden units computing $f$.\nDefine the tensors \n$$\\Pten^\\star = \\TT{\\Aten^\\star\\ttv{1}\\vecs{\\szerosymbol}^\\star, \\underbrace{\\Aten^\\star, \\cdots, \\Aten^\\star}_{L-1\\text{ times}}, \\mat{I}_n}\\in\\Rbb^{d\\times\\cdots\\times d\\times n}\\ \\ \\ \\ \n\\text{ and }\\ \\ \\ \\ \n\\Sten^\\star = \\TT{\\mat{I}_n,\\underbrace{\\Aten^\\star, \\cdots, \\Aten^\\star}_{L\\text{ times}}, (\\vecs{\\vvsinfsymbol}^\\star)^\\top}\\in\\Rbb^{n\\times d\\times\\cdots\\times d\\times p}$$ \nof order $L+1$ and $L+2$ respectively, and let $\\P^\\star = \\tenmatgen{\\Pten^\\star}{L,1} \\in\\Rbb^{d^L\\times n}$ \nand $\\S^\\star = \\tenmatgen{\\Sten^\\star}{1,L+1} \\in\\Rbb^{n\\times d^Lp}$. Using the identity $\\ten{H}^{(j)}_f = \\TT{\\Aten^\\star\\ttv{1}\\vecs{\\szerosymbol}^\\star, \\underbrace{\\Aten^\\star, \\cdots, \\Aten^\\star}_{j-1\\text{ times}}, (\\vecs{\\vvsinfsymbol}^\\star)^\\top}$ for\nany $j$, one can easily check the following identities:\n\\begin{gather*}\n\\tenmatgen{\\ten{H}^{(2L)}_f}{L,L+1} = \\P^\\star\\S^\\star,\\ \\ \\ \\ \\tenmatgen{\\ten{H}^{(2L+1)}_f}{L,1,L+1}= \\Aten^\\star \\ttm{1} \\P^\\star \\ttm{3} (\\S^\\star)^\\top,\\\\\n\\tenmatgen{\\ten{H}^{(L)}_f}{L,1} = \\P^\\star(\\vecs{\\vvsinfsymbol}^\\star)^\\top, \\ \\ \\ \\ \\ \\ \\ \n\\tenmatgen{\\ten{H}^{(L)}_f}{L+1} = (\\S^\\star)^\\top\\vecs{\\szerosymbol}^\\star.\n\\end{gather*}\n\nLet $\\mat{M} = \\P^\\dagger\\P^\\star$. We will show that $\\vecs{\\szerosymbol} = \\mat{M}^{-\\top}\\vecs{\\szerosymbol}^\\star$, $\\Aten = \\Aten^\\star \\ttm{1}\\mat{M}\\ttm{3}\\mat{M}^{-\\top}$ and\n$\\vecs{\\vvsinfsymbol}^\\top = \\mat{M}(\\vecs{\\vvsinfsymbol}^\\star)^\\top$, which will entail the result since linear 2-RNNs are invariant under change of basis~(see Section~\\ref{sec:prelim}). First observe that $\\mat{M}^{-1} = \\S^\\star\\S^\\dagger$. 
Indeed,\nwe have\n$\\P^\\dagger\\P^\\star\\S^\\star\\S^\\dagger = \\P^\\dagger\\tenmatgen{\\ten{H}^{(2L)}_f}{L,L+1}\\S^\\dagger = \\P^\\dagger\\P\\S\\S^\\dagger = \\mat{I}$\nwhere we used the fact that $\\P$~(resp. $\\S$) is of full column rank~(resp. row rank) for the last equality. \n\n
The following derivations then follow from basic tensor algebra:\n\\begin{align*}\n\\vecs{\\szerosymbol} \n&= \n(\\S^\\dagger)^\\top\\tenmatgen{\\ten{H}^{(L)}_f}{L+1} \n=\n(\\S^\\dagger)^\\top (\\S^\\star)^\\top\\vecs{\\szerosymbol}^\\star\n=\n(\\S^\\star\\S^\\dagger)^\\top\\vecs{\\szerosymbol}^\\star\n=\n\\mat{M}^{-\\top}\\vecs{\\szerosymbol}^\\star,\\\\\n\\ \\\\\n\\Aten \n&= \n(\\tenmatgen{\\ten{H}^{(2L+1)}_f}{L,1,L+1})\\ttm{1}\\P^\\dagger\\ttm{3}(\\S^\\dagger)^\\top\\\\\n&=\n(\\Aten^\\star \\ttm{1} \\P^\\star \\ttm{3} (\\S^\\star)^\\top)\\ttm{1}\\P^\\dagger\\ttm{3}(\\S^\\dagger)^\\top\\\\\n&=\n\\Aten^\\star \\ttm{1} \\P^\\dagger\\P^\\star \\ttm{3} (\\S^\\star\\S^\\dagger)^\\top = \\Aten^\\star \\ttm{1}\\mat{M}\\ttm{3}\\mat{M}^{-\\top},\\\\\n\\ \\\\\n\\vecs{\\vvsinfsymbol}^\\top \n&= \n\\P^\\dagger\\tenmatgen{\\ten{H}^{(L)}_f}{L,1}\n=\n\\P^\\dagger\\P^\\star(\\vecs{\\vvsinfsymbol}^\\star)^\\top\n=\n\\mat{M}(\\vecs{\\vvsinfsymbol}^\\star)^\\top,\n\\end{align*}\nwhich concludes the proof.\n\\end{proof}\n\n
\\subsection{Proof of Theorem~\\ref{thm:learning-2RNN}}\n\n
\\begin{theorem*}\nLet $(\\vec{h}_0,\\Aten,\\vecs{\\vvsinfsymbol})$ be a minimal linear 2-RNN with $n$ hidden units computing a function $f:(\\Rbb^d)^*\\to \\Rbb^p$, and let $L$ be an integer\\footnote{Note that the theorem can be adapted if such an integer $L$ does not exist~(see Section~\\ref{app:lift}).}\nsuch that $\\rank(\\tenmatgen{\\ten{H}^{(2L)}_f}{L,L+1}) = n$.\n\nSuppose we have access to three datasets\n$D_l = \\{((\\vec{x}^{(i)}_1,\\vec{x}_2^{(i)},\\cdots,\\vec{x}_l^{(i)}),\\vec{y}^{(i)}) \\}_{i=1}^{N_l}\\subset(\\Rbb^d)^l\\times \\Rbb^p$ for $l\\in\\{L,2L,2L+1\\}$ \nwhere the entries of each $\\vec{x}^{(i)}_j$ are drawn independently from the standard normal distribution and where each\n$\\vec{y}^{(i)} = f(\\vec{x}^{(i)}_1,\\vec{x}_2^{(i)},\\cdots,\\vec{x}_l^{(i)})$.\n\nThen, whenever $N_l \\geq d^l$ for each $l\\in\\{L,2L,2L+1\\}$,\nthe linear 2-RNN $M$ returned by Algorithm~\\ref{alg:2RNN-SL} with the least-squares method satisfies $f_M = f$ with probability one.\n\\end{theorem*}\n\\begin{proof}\nWe just need to show for each $l\\in \\{L,2L,2L+1\\}$ that, under the hypotheses of the theorem, the Hankel tensors $\\hat{\\ten{H}}^{(l)}$ computed in line~\\ref{alg.line.lst-sq} of\nAlgorithm~\\ref{alg:2RNN-SL} are equal to the true Hankel tensors $\\ten{H}^{(l)}$ with probability one. Recall that these tensors are computed by solving the least-squares\nproblem\n$$\\hat{\\ten{H}}^{(l)} = \\argmin_{\\ten{T}\\in \\Rbb^{d\\times\\cdots\\times d\\times p}} \\norm{\\mat{X}\\tenmatgen{\\ten{T}}{l,1} - \\Ymat}_F^2$$\nwhere $\\mat{X}\\in\\Rbb^{N_l\\times d^l}$ is the matrix with rows $\\vec{x}^{(i)}_1\\otimes\\vec{x}_2^{(i)}\\otimes\\cdots\\otimes\\vec{x}_l^{(i)}$ for each $i\\in[N_l]$. Since $\\mat{X}\\tenmatgen{\\ten{H}^{(l)}}{l,1} = \\Ymat$ and since the solution\nof the least-squares problem is unique as soon as $\\mat{X}$ is of full column rank, we just need to show that this is the case with probability one\nwhen the entries of the vectors $\\vec{x}^{(i)}_j$ are drawn at random from a standard normal distribution. 
The result will then directly follow\nby applying Theorem~\\ref{thm:2RNN-SL}.\n\n
We will show that the set \n$$\\Scal = \\{ \\{(\\vec{x}_1^{(i)},\\cdots, \\vec{x}_l^{(i)})\\}_{i=1}^{N_l} \\mid\\ \\dim(\\mathrm{span}(\\{ \\vec{x}^{(i)}_1\\otimes\\vec{x}_2^{(i)}\\otimes\\cdots\\otimes\\vec{x}_l^{(i)} \\}_{i=1}^{N_l})) < d^l\\} $$\nhas Lebesgue measure $0$ in $((\\Rbb^d)^{l})^{N_l}\\simeq \\Rbb^{dlN_l}$ as soon as $N_l \\geq d^l$, which will imply that it has probability $0$ under any continuous probability distribution, hence\nthe result. For any $S=\\{(\\vec{x}_1^{(i)},\\cdots, \\vec{x}_l^{(i)})\\}_{i=1}^{N_l}$, we denote by $\\mat{X}_S\\in\\Rbb^{N_l\\times d^l}$ the matrix with rows $\\vec{x}^{(i)}_1\\otimes\\vec{x}_2^{(i)}\\otimes\\cdots\\otimes\\vec{x}_l^{(i)}$.\nOne can easily check that $S\\in\\Scal$ if and only if $\\mat{X}_S$ is of rank strictly less than $d^l$, which is equivalent to the determinant of \n$\\mat{X}_S^\\top\\mat{X}_S$ being equal to $0$. Since this determinant is a polynomial in the entries of the vectors $\\vec{x}_j^{(i)}$, $\\Scal$ is an algebraic\nsubvariety of $\\Rbb^{dlN_l}$.\nIt is then easy to check that the polynomial $\\det(\\mat{X}_S^\\top\\mat{X}_S)$ is not identically zero when $N_l \\geq d^l$. Indeed, \nit suffices to choose the vectors $\\vec{x}_j^{(i)}$ such that the family $(\\vec{x}^{(i)}_1\\otimes\\vec{x}_2^{(i)}\\otimes\\cdots\\otimes\\vec{x}_l^{(i)})_{i=1}^{N_l}$ spans the whole space \n$\\Rbb^{d^l}$~(which is possible since the $N_l\\geq d^l$ elements of this family can be chosen arbitrarily). \nIn conclusion, $\\Scal$ is a proper algebraic subvariety of $\\Rbb^{dlN_l}$ and hence has Lebesgue\nmeasure zero~\\cite[Section 2.6.5]{federer2014geometric}.\n\\end{proof}\n\n
\\section{Lifting the simplifying assumption}\\label{app:lift}\nWe now show how all our results still hold when there does not exist an $L$ such that $\\rank(\\tenmatgen{\\ten{H}^{(2L)}_f}{L,L+1}) = n$.\nRecall that this simplifying assumption followed from assuming that the sets $\\Pcal=\\Scal=[d]^L$ form a complete basis for the function\n$\\tilde{f}:[d]^*\\to \\Rbb^p$ defined by $\\tilde{f}(i_1i_2\\cdots i_k) = f(\\vec{e}_{i_1},\\vec{e}_{i_2},\\cdots,\\vec{e}_{i_k})$, where $\\vec{e}_i$ denotes the $i$th vector of the canonical basis of $\\Rbb^d$. \nWe first show that there always exists an integer $L$ such that $\\Pcal=\\Scal=\\cup_{i\\leq L} [d]^i$ forms a complete basis for $\\tilde{f}$.\n Let $M = (\\vecs{\\szerosymbol}^\\star,\\Aten^\\star,\\vecs{\\vvsinfsymbol}^\\star)$ be a linear 2-RNN with $n$ hidden units computing\n$f$~(i.e.\\ such that $f_M=f$). It follows from Theorem~\\ref{thm:2RNN-vvWFA} and from the discussion at the beginning of Section~\\ref{subsec:SL-2RNN}\nthat there exists a vv-WFA computing $\\tilde{f}$ and it is easy to check that $\\rank(\\tilde{f}) = n$. This implies $\\rank(\\tenmatpar{\\ten{H}_f}{1}) = n$ by\nTheorem~\\ref{thm:fliess-vvWFA}. Since $\\Pcal=\\Scal=\\cup_{i\\leq l} [d]^i$ converges to $[d]^*$ as $l$ grows to infinity, there exists an $L$ such that \nthe finite sub-block $\\tilde{\\ten{H}}_f \\in \\Rbb^{\\Pcal\\times\\Scal\\times p}$ of $\\ten{H}_f\\in \\Rbb^{[d]^*\\times[d]^*\\times p}$\nsatisfies $\\rank(\\tenmatpar{\\tilde{\\ten{H}}_f}{1}) = n$, i.e.\\ such that\n$\\Pcal=\\Scal=\\cup_{i\\leq L} [d]^i$ forms a complete basis for $\\tilde{f}$. 
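\n\nIn practice, a suitable $L$~(and the number of hidden units $n$) can be found by inspecting the numerical rank of the flattenings of the estimated Hankel tensors. In the same illustrative NumPy style as before~(the tolerance and all function names are ours, and the snippet is only a minimal sketch, not an optimized implementation), this rank check and the parameter extraction of Theorem~\\ref{thm:2RNN-SL} can be written as follows.\n\\begin{verbatim}\nimport numpy as np\n\ndef hankel_numerical_rank(H_2L, d, L, p, tol=1e-8):\n    # Numerical rank of the (L, L+1) matricization of H^(2L); when the\n    # rank condition rank(<H^(2L)>_(L,L+1)) = n holds for this L, it\n    # equals the number of hidden units of a minimal linear 2-RNN.\n    s = np.linalg.svd(H_2L.reshape(d ** L, d ** L * p), compute_uv=False)\n    return int(np.sum(s > tol * s[0]))\n\ndef extract_2rnn(H_L, H_2L, H_2L1, d, L, p, R):\n    # Rank-R factorization <H^(2L)>_(L,L+1) = P S via truncated SVD,\n    # then recovery of (h0, A, Omega^T) through pseudo-inverses.\n    U, s, Vt = np.linalg.svd(H_2L.reshape(d ** L, d ** L * p),\n                             full_matrices=False)\n    P = U[:, :R] * s[:R]                       # (d^L, R)\n    S = Vt[:R]                                 # (R, d^L * p)\n    P_pinv = np.linalg.pinv(P)                 # (R, d^L)\n    S_pinv = np.linalg.pinv(S)                 # (d^L * p, R)\n    h0 = S_pinv.T @ H_L.reshape(d ** L * p)    # (R,)\n    Omega_T = P_pinv @ H_L.reshape(d ** L, p)  # (R, p)\n    # A = <H^(2L+1)>_(L,1,L+1) x_1 P^+ x_3 (S^+)^T, as a (R, d, R) tensor.\n    H3 = H_2L1.reshape(d ** L, d, d ** L * p)\n    A = np.einsum('iaj,ri,js->ras', H3, P_pinv, S_pinv)\n    return h0, A, Omega_T\n\\end{verbatim}\nUnder the assumptions of Theorem~\\ref{thm:2RNN-SL}, the triple returned by \\texttt{extract\\_2rnn}~(applied to the true Hankel tensors with $R=n$) is, up to a change of basis, a minimal linear 2-RNN computing $f$.\n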
\n\nNow consider the finite sub-blocks $\\tilde{\\ten{H}}^{+}_f\\in \\Rbb^{\\Pcal \\times [d] \\times \\Scal \\times p}$ and $\\tilde{\\H}^{-}_f\\in \\Rbb^{\\Pcal \\times p}$ of $\\ten{H}_f$ defined by\n$$(\\tilde{\\ten{H}}^{+}_f)_{u,i,v,:}=\\tilde{f}(uiv)\\ \\ \\ \\text{ and }\\ \\ \\ (\\tilde{\\H}^{-}_f)_{u,:}=\\tilde{f}(u)$$\nfor any $u\\in \\Pcal=\\Scal$ and any $i\\in [d]$. One can check that Theorem~\\ref{thm:2RNN-SL} holds by replacing \\emph{mutatis mutandis}\n$\\tenmatgen{\\ten{H}^{(2L)}_f}{L,L+1}$ by $\\tenmatpar{\\tilde{\\ten{H}}_f}{1}$, $\\tenmatgen{\\ten{H}^{(2L+1)}_f}{L,1,L+1}$ by $\\tilde{\\ten{H}}^{+}_f$, $\\tenmatgen{\\ten{H}^{(L)}_f}{L,1}$ by $\\tilde{\\H}^{-}_f$\nand $\\tenmatgen{\\ten{H}^{(L)}_f}{L+1}$ by $\\vectorize{\\tilde{\\H}^{-}_f}$.\n\n
To conclude, it suffices to observe that both $\\tilde{\\ten{H}}^{+}_f$ and $\\tilde{\\H}^{-}_f$ can be constructed\nfrom the entries of the tensors $\\ten{H}^{(l)}$ for $1\\leq l \\leq 2L+1$, which can be recovered~(or estimated in the noisy setting) using\nthe techniques described in Section~\\ref{subsec:Hankel.tensor.recovery}~(corresponding to lines \\ref{alg.firstline.forloop}-\\ref{alg.lastline.forloop} of\nAlgorithm~\\ref{alg:2RNN-SL}).\n\n
We thus showed that linear 2-RNNs can be provably learned even when there does not exist an $L$ such that $\\rank(\\tenmatgen{\\ten{H}^{(2L)}_f}{L,L+1}) = n$. In this\nsetting, one needs to estimate enough of the tensors $\\ten{H}^{(l)}$ to reconstruct a complete sub-block $\\tilde{\\ten{H}}_f$ of the Hankel tensor \n$\\ten{H}_f$~(along with the corresponding tensor $\\tilde{\\ten{H}}^{+}_f$ and matrix $\\tilde{\\H}^{-}_f$)\nand recover the linear 2-RNN by applying Theorem~\\ref{thm:2RNN-SL}. In addition, one needs to have access to sufficiently large datasets \n$D_l$ for each $l\\in [2L+1]$ rather than only the three datasets mentioned in Theorem~\\ref{thm:learning-2RNN}. However, the data requirement remains the\nsame in the case where each of the datasets $D_l$ is constructed from a single training dataset \n$S =\\{((\\vec{x}^{(i)}_1,\\vec{x}_2^{(i)},\\cdots,\\vec{x}_T^{(i)}),(\\vec{y}^{(i)}_1,\\vec{y}^{(i)}_2,\\cdots,\\vec{y}^{(i)}_T)) \\}_{i=1}^{N}$\nof input\/output sequences.\n\n
\\section{Leveraging the tensor train structure for computational efficiency}\nThe overall learning algorithm using the TIHT recovery method in TT format is summarized in Algorithm~\\ref{alg:2RNN-SL-TT}. The key ingredients to improve the complexity of Algorithm~\\ref{alg:2RNN-SL} are (i) to estimate the gradient using mini-batches of data and (ii) to directly use the TT format to represent and perform operations on the tensors $\\ten{H}^{(l)}$ and the tensors $\\Xten^{(l)}\\in\\Rbb^{M\\times d\\times \\cdots \\times d}$ defined by\n\\begin{equation}\\label{eq:Xten}\n \\Xten^{(l)}_{i,:,\\cdots,:}= \\vec{x}^{(i)}_1\\otimes\\vec{x}_2^{(i)}\\otimes\\cdots\\otimes\\vec{x}_l^{(i)}\\ \\ \\ \\text{for }i\\in[M]\n\\end{equation}\nwhere $M$ is the size of a mini-batch of training data~($\\ten{H}^{(l)}$ is of TT-rank $R$ by design and it can easily be shown that $\\Xten^{(l)}$ is of TT-rank at most $M$, cf. Eq.~\\eqref{eq:XtenTT}). \nThen, all the operations of the algorithm can be expressed in terms of these tensors and performed efficiently in TT format.\nMore precisely, the products and sums needed to compute the gradient update on line~\\ref{algtt.line.iht.gradient} can be performed in~$\\bigo{(R+M)^2(ld+p)+(R+M)^3d}$. 
After the gradient update, the tensor $\\ten{H}^{(l)}$ has TT-rank at most $M+R$ but can be efficiently projected back to a tensor of TT-rank $R$ using the tensor train rounding operation~\\cite{oseledets2011tensor} in $\\bigo{(R+M)^3(ld+p)}$~(which is the operation dominating the complexity of the whole algorithm).\nThe subsequent operations on line~\\ref{line.algtt.return} can be performed efficiently in the TT format in~$\\bigo{R^3d+R^2p}$~(using the method described in~\\cite{klus2018tensor} to compute the pseudo-inverses of the matrices $\\P$ and $\\S$). \nThe overall complexity of Algorithm~\\ref{alg:2RNN-SL-TT} is thus in~$\\bigo{T(R+M)^3(Ld+p)}$ where $T$ is the number of iterations of the inner loop.\n\n
\\begin{algorithm}\n \\caption{\\texttt{2RNN-SL-TT}: Spectral Learning of linear 2-RNNs \\textbf{in tensor train format}}\n \\label{alg:2RNN-SL-TT}\n\\begin{algorithmic}[1]\n \\REQUIRE Three training datasets $D_L,D_{2L},D_{2L+1}$ with input sequences of length $L$, $2L$ and $2L+1$ respectively, rank $R$, learning rate $\\gamma$ and\n mini-batch size $M$.\n \n \\FOR{$l\\in\\{L,2L,2L+1\\}$}\n \\STATE Initialize all cores of the rank $R$ TT-decomposition $\\ten{H}^{(l)} = \\TT{\\Gten^{(l)}_1,\\cdots,\\Gten^{(l)}_{l+1}} \\in \\Rbb^{d\\times\\cdots\\times d\\times p}$ to $\\mat{0}$.\\\\\n \\/\\/ \\emph{Note that all the updates of $\\ten{H}^{(l)}$ stated below are in effect applied directly to the core tensors $\\Gten^{(l)}_k$, i.e. the tensor $\\ten{H}^{(l)}$ is never \n explicitly constructed.}\n \n \\REPEAT\\label{algtt.line.iht.start}\n \\STATE Subsample a mini-batch $$\\{((\\vec{x}^{(i)}_1,\\vec{x}_2^{(i)},\\cdots,\\vec{x}_l^{(i)}),\\vec{y}^{(i)}) \\}_{i=1}^{M}\\subset(\\Rbb^d)^l\\times \\Rbb^p$$ of size $M$ from $D_l$.\n \\STATE Compute the rank $M$ TT-decomposition of the tensor $\\Xten=\\Xten^{(l)}$~(defined in Eq.~\\eqref{eq:Xten}), which is given by\n \\begin{equation}\\label{eq:XtenTT}\n \\Xten = \\TT{\\mat{I}_M, \\Aten_1,\\cdots,\\Aten_l} \\text{ where the cores are defined by } (\\Aten_k)_{i,:,j}=\\delta_{ij}\\vec{x}^{(i)}_k \\ \\ \\text{ and } \\ \\ (\\Aten_l)_{i,:} = \\vec{x}^{(i)}_l\n \\end{equation}\n for all $1\\leq k < l$.