{"text":"\\section{Introduction}\n\nContinuous first-order logic is a generalization of first-order logic suitable for studying metric structures, which are mathematical structures with an underlying complete metric and with uniformly continuous functions and $\\mathbb{R}$-valued predicates. A rich class of metric structures arise from Banach spaces and expansions thereof. An active area of research in continuous logic is the characterization of inseparably categorical continuous theories.\nFor a general introduction to continuous logic, see \\cite{MTFMS}.\n\n\nIn the present work we will consider expansions of Banach spaces. We introduce the notion of an indiscernible subspace. An indiscernible subspace is a subspace in which types of tuples of elements only depend on their quantifier-free type in the reduct consisting of only the metric and the constant $\\mathbf{0}$. Similarly to indiscernible sequences, indiscernible subspaces are always consistent with a Banach theory (with no stability assumption, see Theorem \\ref{thm:exist}), but are not always present in every model. We will show that an indiscernible subspace always takes the form of an isometrically embedded real Hilbert space wherein the type of any tuple only depends on its quantifier-free type in the Hilbert space. The notion of an indiscernible subspace is of independent interest in the model theory of Banach and Hilbert structures, and in particular here we use it to improve the results of Shelah and Usvyatsov in the context of types in the full language (as opposed to $\\Delta$-types). Specifically, in this context we give a shorter proof of Shelah and Usvyatsov's main result \\cite[Prop.\\ 4.13]{SHELAH2019106738}, we improve their result on the strong uniqueness of Morley sequences in minimal wide types \\cite[Prop.\\ 4.12]{SHELAH2019106738}, and we expand on their commentary on the ``induced structure'' of the span of a Morley sequence in a minimal wide type \\cite[Rem.\\ 5.6]{SHELAH2019106738}. This more restricted case is what is relevant to inseparably categorical Banach theories, so our work is applicable to the problem of their characterization.\n\nFinally, we present some relevant counterexamples and in particular we resolve (in the negative) the question of Shelah and Usvyatsov presented at the end of Section 5 of \\cite{SHELAH2019106738}, in which they ask whether or not the span of a Morley sequence in a minimal wide type is always a type-definable set.\n\n\\subsection{Background}\n\nFor $K \\in \\{\\mathbb{R}, \\mathbb{C}\\}$, we think of a $K$-Banach space $X$ as being a metric structure $\\mathfrak{X}$ whose underlying set is the closed unit ball $B(X)$ of $X$ with metric $d(x,y) = \\left\\lVert x - y \\right\\rVert$.\\footnote{For another equivalent approach, see \\cite{MTFMS}, which encodes Banach structures as many-sorted metric structures with balls of various radii as different sorts.} This structure is taken to have for each tuple $\\bar{a} \\in K$ an $|\\bar{a}|$-ary predicate $s_{\\bar{a}}(\\bar{x}) = \\left\\lVert \\sum_{i<|\\bar{a}|} a_i x_i \\right\\rVert$, although we will always write this in the more standard form.\n Note that we evaluate this in $X$ even if $\\sum_{i<|\\bar{a}|} a_i x_i$ is not actually an element of the structure $\\mathfrak{X}$. 
For convenience, we will also have a constant for the zero vector, $\\mathbf{0}$, and an $n$-ary function $\\sigma_{\\bar{a}}(\\bar{x})$ such that $\\sigma_{\\bar{a}}(\\bar{x}) = \\sum_{i<|\\bar{a}|} a_i x_i$ if it is in $B(X)$ and $\\sigma_{\\bar{a}}(\\bar{x}) = \\frac{\\sum_{i<|\\bar{a}|} a_i x_i}{\\left\\lVert \\sum_{i<|\\bar{a}|} a_i x_i \\right\\rVert}$ otherwise. If $|a|\\leq 1$, we will write $ax$ for $\\sigma_{a}(x)$. Note that while this is an uncountable language, it is interdefinable with a countable reduct of it (restricting attention to rational elements of $K$). These structures capture the typical meaning of the ultraproduct of Banach spaces. As is common, we will conflate $X$ and the metric structure $\\mathfrak{X}$ in which we have encoded $X$.\n\n\n\\begin{defn}\nA \\emph{Banach (or Hilbert) structure} is a metric structure which is the expansion of a Banach (or Hilbert) space. A \\emph{Banach (or Hilbert) theory} is the theory of such a structure. The adjectives \\emph{real} and \\emph{complex} refer to the scalar field $K$.\n\\end{defn}\n\n$C^{\\ast}$- and other Banach algebras are commonly studied examples of Banach structures that are not just Banach spaces.\n\n\n\n\n\n\n\nA central problem in continuous logic is the characterization of inseparably categorical countable theories, that is to say countable theories with a unique model in each uncountable density character. The analog of Morley's theorem was shown in continuous logic via related formalisms \\cite{ben-yaacov_2005, Shelah2011}, but no satisfactory analog of the Baldwin-Lachlan theorem or its precise structural characterization of uncountably categorical discrete theories in terms of strongly minimal sets is known. Some progress in the specific case of Banach theories has been made in \\cite{SHELAH2019106738}, in which Shelah and Usvyatsov introduce the notion of a wide type and the notion of a minimal wide type, which they argue is the correct analog of strongly minimal types in the context of inseparably categorical Banach theories.\n\n\\begin{defn}\nA type $p$ in a Banach theory is \\emph{wide} if its set of realizations consistently contain the unit sphere of an infinite dimensional real subspace.\n\nA type is \\emph{minimal wide} if it is wide and has a unique wide extension to every set of parameters.\n\\end{defn}\nIn \\cite{SHELAH2019106738}, Shelah and Usvyatsov were able to show that every Banach theory has wide complete types using the following classical concentration of measure results of Dvoretzky and Milman, which Shelah and Usvyatsov refer to as the Dvoretzky-Milman theorem.\n\n\\begin{fact}[Dvoretzky-Milman theorem] \\label{fact:DM-thm}\nLet $(X,\\left\\lVert \\cdot \\right\\rVert)$ be an infinite dimensional real Banach space with unit sphere $S$ and let $f:S \\rightarrow \\mathbb{R}$ be a uniformly continuous function. 
For any $k<\\omega$ and $\\varepsilon > 0$, there exists a $k$-dimensional subspace $Y \\subset X$ and a Euclidean norm\\footnote{A norm $\\vertiii{\\cdot}$ is \\emph{Euclidean} if it satisfies the parallelogram law, $2\\vertiii{a}^2 + 2 \\vertiii{b}^2 = \\vertiii{a+b}^2 + \\vertiii{a-b}^2,$ or equivalently if it is induced by an inner product.} $\\vertiii{\\cdot}$ on $Y$ such that for any $a,b \\in S\\cap Y$, we have $\\vertiii{a} \\leq \\left\\lVert a\\right\\rVert \\leq (1 + \\varepsilon)\\vertiii{a}$ and $|f(a) - f(b)| < \\varepsilon$.\\footnote{Fact \\ref{fact:DM-thm} without $f$ is (a form of) Dvoretsky's theorem.}\n\\end{fact}\n\n\nShelah and Usvyatsov showed that in a stable Banach theory every wide type has a minimal wide extension (possibly over a larger set of parameters) and that every Morley sequence in a minimal wide type is an orthonormal basis of a subspace isometric to a real Hilbert space. Furthermore, they showed that in an inseparably categorical Banach theory, every inseparable model is prime over a countable set of parameters and a Morley sequence in some minimal wide type, analogously to how a model of a discrete uncountably categorical theory is always prime over some finite set of parameters and a Morley sequence in some strongly minimal type.\n\nThe key ingredient to our present work is the following result, due to Milman.\n It extends the Dvoretzky-Milman theorem in a manner analogous to the extension of the pigeonhole principle by Ramsey's theorem.\\footnote{The original Dvoretzky-Milman result is often compared to Ramsey's theorem, such as when Gromov coined the term \\emph{the Ramsey-Dvoretzky-Milman phenomenon} \\cite{gromov1983}, but in the context of Fact \\ref{fact:main} it is hard not to think of the $n=1$ case as being analogous to the pigeonhole principle and the $n>1$ cases as being analogous to Ramsey's theorem.}\n\n\\begin{defn}\\label{defn:main-defn}\nLet $(X,\\left\\lVert \\cdot \\right\\rVert)$ be a Banach space. If $a_0,a_1,\\dots,a_{n-1}$ and $b_0,b_1,\\dots,\\allowbreak b_{n-1}$ are ordered $n$-tuples of elements of $X$, we say that $\\bar{a}$ and $\\bar{b}$ are \\emph{congruent} if $\\left\\lVert a_i - a_j\\right\\rVert=\\left\\lVert b_i - b_j \\right\\rVert$ for all $ i,j \\leq n$, where we take $a_{n}=b_{n}=\\mathbf{0}$. We will write this as $\\bar{a} \\cong \\bar{b}$.\n\\end{defn}\n\n\\begin{fact}[\\cite{zbMATH03376472}, Thm.\\ 3] \\label{fact:main}\nLet $S^\\infty$ be the unit sphere of a separable infinite dimensional real Hilbert space ${H}$ and let $f:(S^\\infty)^n \\rightarrow \\mathbb{R}$ be a uniformly continuous function. For any $\\varepsilon>0$ and any $k<\\omega$ there exists a $k$-dimensional subspace $V$ of $H$ such that for any $a_0,a_1,\\dots,a_{n-1},b_0,b_1,\\dots,b_{n-1}\\in S^\\infty$ with $\\bar{a} \\cong \\bar{b}$, $|f(\\bar{a})-f(\\bar{b})| < \\varepsilon$.\n\\end{fact}\n\nNote that the analogous result for inseparable Hilbert spaces follows immediately, by restricting attention to a separable infinite dimensional subspace. 
Also note that by using Dvoretsky's theorem and an easy compactness argument, Fact \\ref{fact:main} can be generalized to arbitrary infinite dimensional Banach spaces.\n\n\\subsection{Connection to Extreme Amenability}\n\n A modern proof of Fact \\ref{fact:main} would go through the extreme amenability of the unitary group of an infinite dimensional Hilbert space endowed with the strong operator topology, or in other words the fact that any continuous action of this group on a compact Hausdorff space has a fixed point, which was originally shown in \\cite{10.2307\/2374298}. This connection is unsurprising. It is well known that the extreme amenability of $\\mathrm{Aut}(\\mathbb{Q})$ (endowed with the topology of pointwise convergence) can be understood as a restatement of Ramsey's theorem. It is possible to use this to give a high brow proof of the existence of indiscernible sequences in any first-order theory $T$:\n \n \\begin{proof}\nFix a first-order theory $T$. Let $Q$ be a family of variables indexed by the rational numbers. The natural action of $\\mathrm{Aut}(\\mathbb{Q})$ on $S_Q(T)$, the Stone space of types over $T$ in the variables $Q$, is continuous and so by extreme amenability has a fixed point. A fixed point of this action is precisely the same thing as the type of a $\\mathbb{Q}$-indexed indiscernible sequence over $T$, and so we get that there are models of $T$ with indiscernible sequences.\n \\end{proof}\n \n \n A similar proof of the existence of indiscernible subspaces in Banach theories (Theorem \\ref{thm:exist}) is possible, but requires an argument that the analog of $S_Q(T)$ is non-empty (which follows from Dvoretzky's theorem) and also requires more delicate bookkeeping to define the analog of $S_Q(T)$ and to show that the action of the unitary group of a separable Hilbert space is continuous. In the end this is more technical than a proof using Fact \\ref{fact:main} directly.\n\n\n\n\\section{Indiscernible Subspaces} \\label{sec:ind-subsp}\n\n\n\n\\begin{defn}\n Let $T$ be a Banach {theory}. Let $\\mathfrak{M}\\models T$ and let $A\\subseteq \\mathfrak{M}$ be some set of parameters.\n An \\emph{indiscernible subspace over $A$} is a real subspace $V$ of $\\mathfrak{M}$ such that for any $n<\\omega$ and any $n$-tuples $\\bar{b},\\bar{c} \\in V$, $\\bar{b} \\equiv_A \\bar{c}$ if and only if $\\bar{b} \\cong \\bar{c}$.\n\n{\\sloppy If $p$ is a type over $A$, then $V$ is an \\emph{indiscernible subspace in $p$ (over $A$)} if it is an indiscernible subspace over $A$ and $b\\models p$ for all $b\\in V$ with $\\left\\lVert b \\right\\rVert = 1$.}\n\n\n\\end{defn}\n\nNote that an indiscernible subspace is a real subspace even if $T$ is a complex Banach theory. Also note that an indiscernible subspace in $p$ is not literally contained in the realizations of $p$, but rather has its unit sphere contained in the realizations of $p$. It might be more accurate to talk about ``indiscernible spheres,'' but we find the subspace terminology more familiar.\n\nIndiscernible subspaces are very metrically regular.\n\n\\begin{prop}\nSuppose $V$ is an indiscernible subspace in some Banach structure. 
Then $V$ is isometric to a real Hilbert space.\n\nIn particular, a real subspace $V$ of a Banach structure is indiscernible over $A$ if and only if it is isometric to a real Hilbert space and for every $n<\\omega$ and every pair of $n$-tuples $\\bar{b},\\bar{c}\\in V$, $\\bar{b}\\equiv_A\\bar{c}$ if and only if for all $i,j<n$, $\\left<b_i,b_j\\right> = \\left<c_i,c_j\\right>$.\n\\end{prop}\n\\begin{proof}\nFor any real Banach space $W$, if $\\dim W \\leq 1$, then $W$ is necessarily isometric to a real Hilbert space. If $\\dim V \\geq 2$, let $V_0$ be a $2$-dimensional subspace of $V$. A subspace of an indiscernible subspace is automatically an indiscernible subspace, so $V_0$ is indiscernible. For any two distinct unit vectors $a$ and $b$, indiscernibility implies that for any $r,s\\in \\mathbb{R}$, $\\left\\lVert r a + s b\\right\\rVert = \\left\\lVert s a + r b\\right\\rVert$, hence the unique linear map that switches $a$ and $b$ fixes $\\left\\lVert \\cdot \\right \\rVert$. This implies that the automorphism group of $(V_0, \\left\\lVert \\cdot \\right \\rVert)$ is transitive on the $\\left\\lVert \\cdot \\right\\rVert$-unit circle. By John's theorem on maximal ellipsoids \\cite{MR0030135}, the unit ball of $\\left\\lVert \\cdot \\right \\rVert$ must be an ellipse, so $\\left\\lVert \\cdot \\right \\rVert$ is a Euclidean norm.\n\nThus every $2$-dimensional real subspace of $V$ is Euclidean and so $(V,\\left\\lVert \\cdot \\right \\rVert)$ satisfies the parallelogram law and is therefore a real Hilbert space.\n\nThe `in particular' statement follows from the fact that in a real Hilbert subspace of a Banach space, the polarization identity \\cite[Prop.\\ 14.1.2]{Blanchard2002} defines the inner product in terms of a particular quantifier-free formula:\n\\begin{equation*}\n\\left<x,y\\right> = \\frac{1}{4}\\left( \\left\\lVert x + y \\right\\rVert ^2 - \\left\\lVert x - y \\right\\rVert^2 \\right).\\footnotemark \\qedhere\n\\end{equation*}\n\\end{proof}\n\\footnotetext{There is also a polarization identity for the complex inner product: $${\\left<x,y\\right>_{\\mathbb{C}} = \\frac{1}{4}\\left( \\left\\lVert x + y \\right\\rVert ^2 - \\left\\lVert x - y \\right\\rVert^2 + i\\left\\lVert x - iy \\right\\rVert^2 - i \\left\\lVert x + iy \\right\\rVert^2 \\right).}$$}\n\n\n\\subsection{Existence of Indiscernible Subspaces}\n\n\nAs mentioned in \\cite[Cor.\\ 3.9]{SHELAH2019106738}, it follows from Dvoretzky's theorem that if $p$ is a wide type and $\\mathfrak{M}$ is a sufficiently saturated model, then $p(\\mathfrak{M})$ contains the unit sphere of an infinite dimensional subspace isometric to a Hilbert space. We refine this by showing that, in fact, an indiscernible subspace can be found. \n\n\\begin{thm} \\label{thm:exist}\nLet $A$ be a set of parameters in a Banach {theory} $T$ and let $p$ be a wide type over $A$. For any $\\kappa$, there is $\\mathfrak{M} \\models T$ and a subspace $V\\subseteq \\mathfrak{M}$ of dimension $\\kappa$ such that $V$ is an indiscernible subspace in $p$ over $A$. 
In particular, any $\\aleph_0 + \\kappa+|A|$-saturated $\\mathfrak{M}$ will have such a subspace.\n\\end{thm}\n\\begin{proof}\nFor any set $\\Delta$ of $A$-formulas, call a subspace $V$ of a model $\\mathfrak{N}$ of $T_A$ \\emph{$\\Delta$-indiscernible in $p$} if every unit vector in $V$ models $p$ and for any $n<\\omega$ and any formula $\\varphi \\in \\Delta$ of arity $n$ and any $n$-tuples $\\bar{b},\\bar{c} \\in V$ with $\\bar{b} \\cong \\bar{c}$, we have $\\mathfrak{N}\\models \\varphi(\\bar{b}) = \\varphi(\\bar{c})$.\n\nSince $p$ is wide, there is a model $\\mathfrak{N}\\models T$ containing an infinite dimensional subspace $W$ isometric to a real Hilbert space such that for all $b\\in W$ with $\\left\\lVert b \\right\\rVert = 1$, $b\\models p$. This is an infinite dimensional $\\varnothing$-indiscernible subspace in $p$.\n\nNow for any finite set of $A$-formulas $\\Delta$ and formula $\\varphi$, assume that we've shown that there is a model $\\mathfrak{N}\\models T$ containing an infinite dimensional $\\Delta$-indiscernible subspace $V$ in $p$ over $A$. We want to show that there is a $\\Delta \\cup \\{\\varphi\\}$-indiscernible subspace in $V$. By Fact \\ref{fact:main}, for every $k<\\omega$ there is a $k$-dimensional subspace $W_{k}\\subseteq V$ such that for any unit vectors $b_0,\\dots,b_{\\ell -1},c_0,\\dots,c_{\\ell-1}$ in $W_{k}$ with $\\bar{b}\\cong\\bar{c}$, we have that $|\\varphi^{\\mathfrak{N}}(\\bar{b})-\\varphi^{\\mathfrak{N}}(\\bar{c})| < 2^{-k}$. If we let $\\mathfrak{N}_k = (\\mathfrak{N},W_k)$ where we've expanded the language by a fresh predicate symbol $D$ such that $D^{\\mathfrak{N}_k}(x)=d(x,W_k)$, then an ultraproduct of the sequence $\\mathfrak{N}_k$ will be a structure $(\\mathfrak{N}_\\omega,W_\\omega)$ in which $W_\\omega$ is an infinite dimensional Hilbert space.\n\n\\emph{Claim:} $W_\\omega$ is $\\Delta\\cup\\{\\varphi\\}$-indiscernible in $p$.\n\n\\emph{Proof of claim.} Fix an $m$-ary formula $\\psi \\in \\Delta \\cup \\{\\varphi\\}$ and let $f(k)=0$ if $\\psi \\in \\Delta$ and $f(k)=2^{-k}$ if $\\psi = \\varphi$. For any $k \\geq 2m$, fix $b_0,\\dots,b_{m-1},c_0,\\dots,c_{m-1}$ in the unit ball of $W_k$; there is a $2m$-dimensional subspace $W^\\prime \\subseteq W_k$ containing $\\bar{b},\\bar{c}$. By compactness of $B(W^\\prime)^m$ (where $B(X)$ is the unit ball of $X$), we have that for any $\\varepsilon > 0$ there is a $\\delta(\\varepsilon) > 0$ such that if $|\\left<b_i,b_j\\right> - \\left<c_i,c_j\\right>| < \\delta(\\varepsilon)$ for all $i,j < m$ then $|\\psi^{\\mathfrak{N}}(\\bar{b})-\\psi^{\\mathfrak{N}}(\\bar{c})| \\leq f(k) + \\varepsilon$. Note that we can take the function $\\delta$ to only depend on $\\psi$, specifically its arity and modulus of continuity, and not on $k$, since $B(W^\\prime)^m$ is always isometric to $B(\\mathbb{R}^{2m})^m$. Therefore, in the ultraproduct we will have $(\\forall i,j<m)\\ |\\left<b_i,b_j\\right> - \\left<c_i,c_j\\right>| < \\delta(\\varepsilon) \\Rightarrow |\\psi^{\\mathfrak{N}_\\omega}(\\bar{b})-\\psi^{\\mathfrak{N}_\\omega}(\\bar{c})| \\leq \\varepsilon$ and thus $\\bar{b}\\cong \\bar{c} \\Rightarrow \\psi^{\\mathfrak{N}_\\omega}(\\bar{b}) = \\psi^{\\mathfrak{N}_\\omega}(\\bar{c})$, as required. \\hfill $\\qed_{\\textit{Claim}}$\n\nNow for each finite set of $A$-formulas we've shown that there's a structure $(\\mathfrak{M}_\\Delta,V_\\Delta)$ (where, again, $V_\\Delta$ is the set defined by the new predicate symbol $D$) such that $\\mathfrak{M}_\\Delta \\models T_A$ and $V_\\Delta$ is an infinite dimensional $\\Delta$-indiscernible subspace in $p$. 
By taking an ultraproduct with an appropriate ultrafilter we get a structure $(\\mathfrak{M},V)$ where $\\mathfrak{M}\\models T_A$ and $V$ is an infinite dimensional subspace. $V$ is an indiscernible subspace in $p$ over $A$ by the same argument as in the claim.\n\nFinally note that by compactness we can take $V$ to have arbitrarily large dimension and that any subspace of an indiscernible subspace in $p$ over $A$ is an indiscernible subspace in $p$ over $A$, so we get the required result.\n\\end{proof}\n\n\nTogether with the fact that wide types always exist in Banach theories with infinite dimensional models \\cite[Thm.\\ 3.7]{SHELAH2019106738}, we get a corollary.\n\n\\begin{cor} \\label{cor:ind-subsp}\nEvery Banach {theory} with infinite dimensional models has an infinite dimensional indiscernible subspace in some model. In particular, every such theory has an infinite indiscernible set, namely any orthonormal basis of an infinite dimensional indiscernible subspace.\n\\end{cor}\n\n\n\\section{Minimal{ }Wide Types}\n\n\nCompare the following Theorem \\ref{thm:main} with this fact in discrete logic: If $p$ is a minimal type (i.e.\\ $p$ has a unique global non-algebraic extension), then an infinite sequence of realizations of $p$ is a Morley sequence in $p$ if and only if it is an indiscernible sequence.\n\nHere we are using the definition of Morley sequence for (possibly unstable) $A$-invariant types: Let $p$ be a global $A$-invariant type, and let $B\\supseteq A$ be some set of parameters. A sequence $\\{c_i\\}_{i< \\kappa}$ is a \\emph{Morley sequence in $p$ over $B$} if for all $i< \\kappa$, $\\mathrm{tp}(c_i\/Bc_{<i})$ is the restriction of $p$ to $Bc_{<i}$.\n\nRecall that a linear map $T:X\\rightarrow Y$ between Banach spaces is an \\emph{isomorphism} if it is a continuous bijection. This is enough to imply that $T$ is invertible and that both $T$ and $T^{-1}$ are Lipschitz. An analog of Dvoretzky's theorem for $k \\geq \\omega$ would imply that every sufficiently large Banach space has an infinite dimensional subspace isomorphic to Hilbert space, which is known to be false. Here we will see a specific example of this.\n\n The following is a well known result in Banach space {theory} (for a proof see the comment after Proposition 2.a.2 in \\cite{Lindenstrauss1996}).\n\n\\begin{fact} \\label{fact:no-no}\nFor any distinct $X,Y \\in \\{\\ell_p: 1\\leq p < \\infty\\} \\cup \\{c_0\\}$, no subspace of $X$ is isomorphic to $Y$.\n\\end{fact}\n\nNote that, whereas Corollary \\ref{cor:ind-subsp} says that every Banach theory is consistent with the partial type of an indiscernible subspace, the following corollary says that this type can sometimes be omitted in arbitrarily large models (contrast this with the fact that the existence of an Erd\\\"os cardinal implies that you can find indiscernible sequences in any sufficiently large structure in a countable language \\cite[Thm.\\ 9.3]{Kanamori2003}).\n\n\\begin{cor} \\label{cor:no-no-cor}\nFor $p \\in [1,\\infty) \\setminus \\{2\\}$, there are arbitrarily large models of $\\mathrm{Th}(\\ell_p)$ that do not contain any infinite dimensional subspaces isomorphic to a Hilbert space. \n\\end{cor}\n\\begin{proof}\nFix $p \\in [1,\\infty) \\setminus \\{2\\}$ and $\\kappa \\geq \\aleph_0$.\n Let $\\ell_p(\\kappa)$ be the Banach space of functions $f:\\kappa \\rightarrow \\mathbb{R}$ such that $\\sum_{i<\\kappa} |f(i)|^p < \\infty$. 
Note that $\\ell_p(\\kappa) \\equiv \\ell_p$.\\footnote{To see this, we can find an elementary sub-structure of $\\ell_p(\\kappa)$ that is isomorphic to $\\ell_p$: Let $\\mathfrak{L}_0$ be a separable elementary sub-structure of $\\ell_p(\\kappa)$. For each $i<\\omega$, given $\\mathfrak{L}_i$, let $B_i$ be the set of all $f \\in \\ell_p(\\kappa)$ that are the indicator function of a singleton $\\{i\\}$ for some $i$ in the support of some element of $\\mathfrak{L}_i$. $B_i$ is countable. Let $\\mathfrak{L}_{i+1}$ be a separable elementary sub-structure of $\\ell_p(\\kappa)$ containing $\\mathfrak{L}_i\\cup B_i$. $\\overline{\\bigcup_{i<\\omega}\\mathfrak{L}_{i+1}}$ is equal to the span of $\\bigcup_{i<\\omega} B_i$ and so is a separable elementary sub-structure of $\\ell_p(\\kappa)$ isomorphic to $\\ell_p$.}\n Pick a subspace $V \\subseteq \\ell_p(\\kappa)$. If $V$ is isomorphic to a Hilbert space, then any separable $V_0 \\subseteq V$ will also be isomorphic to a Hilbert space. There exists a countable set $A \\subseteq \\kappa$ such that $V_0 \\subseteq \\ell_p(A) \\subseteq \\ell_p(\\kappa)$. By Fact \\ref{fact:no-no}, $V_0$ is not isomorphic to a Hilbert space, which is a contradiction. Thus no such $V$ can exist.\n\\end{proof}\n\n\n\nEven assuming we start with a Hilbert space we do not get an analog of the infinitary pigeonhole principle (i.e.\\ a generalization of Fact \\ref{fact:DM-thm}). The discussion by H\\'ajeck and Mat\\v ej in \\cite[after Thm.\\ 1]{Hajek2018} of a result of Maurey \\cite{Maurey1995} implies that there is a Hilbert theory $T$ with a unary predicate $P$ such that for some $\\varepsilon>0$ there are arbitrarily large models $\\mathfrak{M}$ of $T$ such that for any infinite dimensional subspace $V \\subseteq \\mathfrak{M}$ there are unit vectors $a,b\\in V$ with $|P^{\\mathfrak{M}}(a)-P^{\\mathfrak{M}}(b)| \\geq \\varepsilon$.\n\nStability of a theory often has the effect of making Ramsey phenomena more prevalent in its models, so there is a natural question as to whether anything similar will happen here. Recall that a function $f:S(X)\\rightarrow \\mathbb{R}$ on the unit sphere $S(X)$ of a Banach space $X$ is \\emph{oscillation stable} if for every infinite dimensional subspace $Y \\subseteq X$ and every $\\varepsilon>0$ there is an infinite dimensional subspace $Z \\subseteq Y$ such that for any $a,b\\in S(Z)$, $|f(a)-f(b)|\\leq \\varepsilon$.\n\n\n\\begin{quest}\nDoes (model theoretic) stability imply oscillation stability? That is to say, if $T$ is a stable Banach theory, is every unary formula oscillation stable on models of $T$?\n\\end{quest}\n\n\\subsection{The (Type-)Definability of Indiscernible Subspaces and Complex Banach Structures} \\label{subsec:comp}\n\nA central question in the study of inseparably categorical Banach space theories is the degree of definability of the `minimal Hilbert space' that controls a given inseparable model of the theory. Results of Henson and Raynaud in \\cite{HensonRaynaud} imply that in general the Hilbert space may not be definable. In \\cite{SHELAH2019106738}, Shelah and Usvyatsov ask whether or not the Hilbert space can be taken to be type-definable or a zeroset. In Example \\ref{ex:no-def} we present a simple, but hopefully clarifying, example showing that this is slightly too much to ask.\n\nIt is somewhat uncomfortable that even in complex Hilbert structures we are only thinking about \\emph{real} indiscernible subspaces rather than \\emph{complex} indiscernible subspaces. 
One problem is that Ramsey-Dvoretzky-Milman phenomena only deal with real subspaces in general. The other problem is that Definition \\ref{defn:main-defn} is incompatible with complex structure:\n\n\\begin{prop} \\label{prop:no-comp}\nLet $T$ be a complex Banach theory. Let $V$ be an indiscernible subspace in some model of $T$. For any non-zero $a\\in V$ and $\\lambda \\in \\mathbb{C} \\setminus \\{0\\}$, if $\\lambda a \\in V$, then $\\lambda \\in \\mathbb{R}$.\n\\end{prop}\n\\begin{proof}\nAssume that for some non-zero vector $a$, both $a$ and $ia$ are in $V$. We have that $(a,ia)\\equiv(ia,a)$, but $(a,ia)\\models d(ix,y)=0$ and $(ia,a)\\not\\models d(ix,y)=0$, which contradicts indiscernibility. Therefore we cannot have that both $a$ and $ia$ are in $V$. The same statement for $a$ and $\\lambda a$ with $\\lambda \\in \\mathbb{C}\\setminus \\mathbb{R}$ follows immediately, since $a,\\lambda a \\in V \\Rightarrow ia \\in V$. \n\\end{proof}\n\nIn the case of complex Hilbert space and other Hilbert spaces with a unitary Lie group action, this is the reason that indiscernible subspaces can fail to be type-definable. We will explicitly give the simplest example of this.\n\n\\begin{ex} \\label{ex:no-def}\nLet $T$ be the theory of an infinite dimensional complex Hilbert space and let $\\mathfrak{C}$ be the monster model of $T$. $T$ is inseparably categorical, but for any partial type $\\Sigma$ over any small set of parameters $A$, $\\Sigma(\\mathfrak{C})$ is not an infinite dimensional indiscernible subspace (over $\\varnothing$).\n\n\\end{ex}\n\\begin{proof}\n$T$ is clearly inseparably categorical by the same reasoning that the theory of real infinite dimensional Hilbert spaces is inseparably categorical (being an infinite dimensional complex Hilbert space is first-order and there is a unique infinite dimensional complex Hilbert space of each infinite density character). \n\nIf $\\Sigma(\\mathfrak{C})$ is not an infinite dimensional subspace of $\\mathfrak{C}$, then we are done, so assume that $\\Sigma(\\mathfrak{C})$ is an infinite dimensional subspace of $\\mathfrak{C}$. Let $\\mathfrak{N}$ be a small model containing $A$. Since $\\mathfrak{N}$ is a subspace of $\\mathfrak{C}$, $\\Sigma(\\mathfrak{N}) = \\Sigma(\\mathfrak{C})\\cap \\mathfrak{N}$ is a subspace of $\\mathfrak{N}$. Let $v \\in \\Sigma(\\mathfrak{C})\\setminus \\Sigma(\\mathfrak{N})$. This implies that $v\\in \\mathfrak{C} \\setminus \\mathfrak{N}$, so we can write $v$ as $v_\\parallel+ v_\\perp$, where $v_\\parallel$ is the orthogonal projection of $v$ onto $\\mathfrak{N}$ and $v_\\perp$ is complex orthogonal to $\\mathfrak{N}$. Necessarily we have that $v_\\perp \\neq 0$. Let $\\mathfrak{N}^\\perp$ be the orthocomplement of $\\mathfrak{N}$ in $\\mathfrak{C}$. If we write elements of $\\mathfrak{C}$ as $(x,y)$ with $x\\in \\mathfrak{N}$ and $y\\in \\mathfrak{N}^\\perp$, then the maps $(x,y)\\mapsto (x,-y)$, $(x,y)\\mapsto (x,iy)$, and $(x,y)\\mapsto(x,-iy)$ are automorphisms of $\\mathfrak{C}$ fixing $\\mathfrak{N}$. Therefore $(v_\\parallel + v_\\perp) \\equiv_{\\mathfrak{N}} (v_\\parallel - v_\\perp) \\equiv_{\\mathfrak{N}} (v_\\parallel + iv_\\perp) \\equiv_{\\mathfrak{N}} (v_\\parallel -iv_\\perp)$, so we must have that $(v_\\parallel - v_\\perp),(v_\\parallel + iv_\\perp),( v_\\parallel - iv_\\perp) \\in \\Sigma(\\mathfrak{C})$ as well. Since $\\Sigma(\\mathfrak{C})$ is a subspace, we have that $v_\\perp \\in \\Sigma(\\mathfrak{C})$ and $iv_\\perp \\in \\Sigma(\\mathfrak{C})$. 
Thus by Proposition \\ref{prop:no-comp} $\\Sigma(\\mathfrak{C})$ is not an indiscernible subspace over $\\varnothing$.\n\\end{proof}\n\nThis example is a special case of this more general construction: If $G$ is a compact Lie group with an irreducible unitary representation on $\\mathbb{R}^n$ for some $n$ (i.e.\\ the group action is transitive on the unit sphere), then we can extend this action to $\\ell_2$ by taking the Hilbert space direct sum of countably many copies of the irreducible unitary representation of $G$, and we can think of this as a structure by adding function symbols for the elements of $G$.\nThe theory of this structure will be totally categorical and satisfy the conclusion of Example \\ref{ex:no-def}.\n\nExample \\ref{ex:no-def} is analogous to the fact that in many strongly minimal theories the set of generic elements in a model is not itself a basis\/Morley sequence. The immediate response would be to ask the question of whether or not the unit sphere of the complex linear span (or more generally the `$G$-linear span,' i.e.\\ the linear span of $G\\cdot V$) of the indiscernible subspace in a minimal{ }wide type agrees with the set of realizations of that minimal{ }wide type, but this can overshoot:\n\n\\begin{ex} \\label{ex:bad-comp}\nConsider the structure whose universe is (the unit ball of) $\\ell_2 \\oplus \\ell_2$ (where we are taking $\\ell_2$ as a real Hilbert space), with a complex action $(x,y)\\mapsto (-y,x)$ and orthogonal projections $P_0$ and $P_1$ for the sets $\\ell_2 \\oplus \\{\\mathbf{0}\\}$ and $\\{\\mathbf{0}\\} \\oplus \\ell_2$, respectively. Let $T$ be the theory of this structure. This is a totally categorical complex Hilbert structure, but for any complete type $p$ and $\\mathfrak{M}\\models T $, $p(\\mathfrak{M})$ does not contain the unit sphere of a non-trivial complex subspace.\n\\end{ex}\n\\begin{proof}\n$T$ is bi-interpretable with a real Hilbert space, so it is totally categorical. For any complete type $p$, there are unique values of $\\left\\lVert P_0(x) \\right\\rVert$ and $\\left\\lVert P_1(x) \\right\\rVert$ that are consistent with $p$, so the set of realizations of $p$ in any model cannot contain $\\{\\lambda a\\}_{\\lambda \\in \\mathrm{U}(1)}$ for $a$, a unit vector, and $\\mathrm{U}(1) \\subset \\mathbb{C}$, the set of unit complex numbers. \n\\end{proof}\n\nThe issue, of course, being that, while we declared by fiat that this is a complex Hilbert structure, the expanded structure does not respect the complex structure.\n\nSo, on the one hand, Example \\ref{ex:bad-comp} shows that in general the unit sphere of the complex span won't be contained in the minimal{ }wide type. On the other hand, a priori the set of realizations of the minimal{ }wide type could contain more than just the unit sphere of the complex span, such as if we have an $\\mathrm{SU}(n)$ action. 
The complex (or $G$-linear) span of a set is of course part of the algebraic closure of the set in question, so this suggests a small refinement of the original question of Shelah and Usvyatsov:\n\n\\begin{quest}\nIf $T$ is an inseparably categorical Banach {theory}, $p$ is a minimal{ }wide type, and $\\mathfrak{M}$ is a model of $T$ which is prime over an indiscernible subspace $V$ in $p$, does it follow that $p(\\mathfrak{M})$ is the unit sphere of a subspace contained in the algebraic closure of $V$?\n\\end{quest}\n\nThis would be analogous to the statement that if $p$ is a strongly minimal type in an uncountably categorical discrete theory and $\\mathfrak{M}$ is a model prime over a Morley sequence $I$ in $p$, then $p(\\mathfrak{M})\\subseteq \\mathrm{acl}(I)$.\n\n\n\\subsection{Non-Minimal Wide Types}\n\nThe following example shows, unsurprisingly, that Theorem \\ref{thm:main} does not hold for non-minimal{ }wide types.\n\n\\begin{ex}\nLet $T$ be the theory of (the unit ball of) the infinite Hilbert space sum $\\ell_2 \\oplus \\ell_2 \\oplus \\dots$, where we add a predicate $D$ that is the distance to $S^\\infty \\sqcup S^\\infty \\sqcup \\dots$, where $S^\\infty$ is the unit sphere of the corresponding copy of $\\ell_2$. This theory is $\\omega$-stable. The partial type $\\{D = 0\\}$ has a unique global non-forking extension $p$ that is wide, but the unit sphere of the linear span of any Morley sequence in $p$ is not contained in $p(\\mathfrak{C})$.\n\\end{ex}\n\\begin{proof}\nThis follows from the fact that on $D$ the equivalence relation `$x$ and $y$ are contained in a common unit sphere' is definable by a formula, namely \\[E(x,y) = \\inf_{z,w \\in D}(d(x,z)\\dotdiv 1) + (d(z,w)\\dotdiv 1) + (d(w,y)\\dotdiv 1),\\]\nwhere $a \\dotdiv b = \\max\\{a-b,0\\}$. If $x,y$ are in the same sphere, then let $S$ be a great circle passing through $x$ and $y$ and choose $z$ and $w$ evenly spaced along the shorter path of $S$. It will always hold that $d(x,z),d(z,w),d(w,y) \\leq 1$, so we will have $E(x,y)=0$. On the other hand, if $x$ and $y$ are in different spheres, then $E(x,y)= \\sqrt{2} -1$.\n\nTherefore a Morley sequence in $p$ is just any sequence of elements of $D$ which are pairwise non-$E$-equivalent and the unit sphere of the span of any such set is clearly not contained in $D$.\n\\end{proof}\n\n\n\n\\bibliographystyle{abbrv}\n\n\\section{Introduction}\n\nRecurrent neural networks (RNNs) and their variants like long short-term memory (LSTM) have been empirically proven to be quite successful in structured prediction applications such as machine translation \\cite{nips-sutskever:14}, Chatbot \\cite{arxiv-vinyals:15}, parsing \\cite{emnlp-ballesteros:16}, summarization \\cite{emnlp-rash:15} and image caption generation \\cite{cvpr-vinyals:15} due to their capability to bridge arbitrary time lags. \nBy far the most popular strategy to train RNNs is via the maximum likelihood principle, which consists of maximizing the probability of the next word in the sequence given the current (recurrent) state and the previous ground truth word (also known as \\emph{teacher forcing}). \nAt inference time, the ground truth previous words are unknown and are then replaced by words predicted by the model itself.\n\n\\begin{figure}[t]\n \\small\n \\centering\n \\includegraphics[width=7.5cm]{sequence.pdf}\n \\caption{An example training sentence. 
For each time step, there is a single ground truth word (highlighted in grey) and there are multiple semantically and syntactically similar words (highlighted in orange). In our approach, the weighted embedding of those words at the time step $t$ is fed into the network at the next time step. Such a design makes it possible for the model to explore more feasible combinations (e.g. ``I get few red grapes'' or ``We have some yellow peaches''), and can be considered as an approximation to the beam search in a less computationally demanding way.}\n \\label{fig:sequence}\n\\end{figure}\n\nThe models trained by such a teacher forcing strategy suffer from at least three drawbacks. \nFirst, the discrepancy between training and inference, called \\emph{exposure bias} \\cite{iclr-ranzato:16}, can yield errors because the model is only exposed to the distribution of training data, instead of its own predictions at inference. \nAs a result the errors can accumulate quickly along the sequence to be generated. \nSecond, the training loss for RNNs is usually defined at the word-level using maximum likelihood estimation (MLE), while their performances are typically evaluated using discrete and non-differentiable sequence-level metrics, such as BLEU \\cite{acl-papineni:02} or ROUGE \\cite{acl-lin:04}. \nWe call it \\emph{evaluation bias}. \nThird, the whole training process is not \\emph{fully differentiable} because the chosen input at each time step hinders the gradients from back-propagating to all previous states in an end-to-end manner. \nAlthough Goodfellow et al \\shortcite{book-goodfellow:16} pointed out that the underlying graphical model of these RNNs is a complete graph, and no conditional independence assumption is made to simplify the sequence prediction, the individual output probabilities might still be conditioned on the last few predictions if we use the teacher forcing regimen \\cite{iclr-leblond:18}. \nNote that the chain rule can no longer be applied once an incorrect prediction is replaced with its ground truth, which prevents the model from capturing long-term dependencies and recovering the probability of the whole sequence.\n\nMany approaches have been proposed to deal with the first two drawbacks of training RNNs by scheduled sampling \\cite{bengio2015scheduled}, adversarial domain adaptation \\cite{nips-goyal:16}, reinforcement learning \\cite{iclr-ranzato:16,iclr-bahdanau:17}, or learning to search (L2S) \\cite{emnlp-wiseman:16,iclr-leblond:18} while little attention has been given to tackling the last one. \nA fully differentiable solution is capable of back-propagating the gradients through the entire sequence, which alleviates the discrepancy between training and inference by feeding RNNs with the same form of inputs, and gently bridges the gap between the training loss defined at each word and the evaluation metrics derived from the whole sequence. \nThanks to the chain rule for differentiation, the training loss of a differentiable solution is indeed a sequence-level loss that involves the joint probability of words in a sequence. \n\nWe propose a fully differentiable training algorithm for RNNs to address the issues discussed above. \nThe key idea is that at time step $t+1$, the network takes as input a ``bundle'' of predicted words at the previous time step $t$ instead of a single ground truth. 
\nIntuitively, the input words at each time step need to be similar in terms of semantics and syntax, and they can replace each other without harming the structure of the sentence (see Figure \\ref{fig:sequence}).\nThe mixture of the similar words can be represented by a convex hull formed by their representations, which could be viewed as a regularization of the input to recurrent neural networks.\nThis design makes it possible for the model to explore more feasible combinations (possibly unseen sequences), and can be interpreted as a computationally efficient approximation to beam search.\n\nWe also want the number of input words to vary across time steps, unlike the end-to-end algorithm proposed by Ranzato et al \\shortcite{iclr-ranzato:16}, in which they propagated as input the predicted top-$k$ words, and the value of $k$ is fixed in advance.\nAlthough different values of $k$ can be tested to determine the optimal one, the number of proper words that can be used heavily depends on the contexts in which they occur. \nFor a given $k$, we could introduce unnecessary noise if more words are taken as input, whereas we may prevent the model from exploring possible sequences if fewer words are involved. \nIn our architecture, an attention mechanism \\cite{nips-Vaswani:17} is applied to select candidate words, and their weighted embeddings are fed into the network according to their attention scores. \nSmoothing the input in this way makes the whole process trainable and differentiable using standard back-propagation. \n\n\n\\section{Related Work}\n\nUnlike Conditional Random Fields \\cite{icml-Lafferty:01} and other models that assume independence between outputs at different time steps, RNNs and their variants are capable of representing the probability distribution of sequences in a relatively general form. However, the most popular strategy for training RNNs, known as ``teacher forcing'', takes the ground truth as input at each time step, and makes the later predictions partly conditioned on those inputs being fed back into the model. Such a training strategy impairs their ability to learn rich distributions over entire sequences. Although some training methods, such as scheduled sampling \\cite{bengio2015scheduled} and ``professor forcing'' \\cite{nips-goyal:16}, have been proposed to encourage the states of recurrent networks to be as similar as possible during training and inference over multiple time steps, a neural network using greedy predictions (top-$1$) via the arguments of the maxima (abbreviated argmax) is still not fully differentiable. Discrete categorical variables are involved in those greedy predictions, and become an obstacle to the efficient computation of parameter gradients.\n\nPerhaps the most natural and na\\\"{i}ve approach towards differentiable training is that at time step $t + 1$ we take as input all the words, with their contributions weighted by their scores, instead of the ground truth word, where the scores are the predicted distribution over words from the previous time step $t$. However, the output distribution at each time step is normally not sparse enough, and thus the input may be blurred by a large number of words semantically far from the ground truth. A simple remedy would be the end-to-end algorithm proposed by Ranzato et al \\shortcite{iclr-ranzato:16}, in which they propagated as input the top $k$ words predicted at the previous time step. The $k$ largest scoring words are weighted by their re-normalized scores (summing to one). 
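\nAs a rough illustration of this top-$k$ scheme (a sketch of ours rather than the original implementation; the function and tensor names are hypothetical), the resulting soft input can be computed as:\n\\begin{verbatim}\nimport torch\n\ndef topk_soft_input(p, embedding_matrix, k):\n    # keep the k most likely words and re-normalize their scores to sum to one\n    scores, idx = p.topk(k, dim=-1)                  # (batch, k)\n    weights = scores \/ scores.sum(dim=-1, keepdim=True)\n    chosen = embedding_matrix[idx]                   # (batch, k, dim)\n    return (weights.unsqueeze(-1) * chosen).sum(dim=-2)  # weighted embedding\n\\end{verbatim}\n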
Although different values of $k$ can be tested to choose the optimal one, the value of $k$ is fixed at each time step. It would be better to learn to determine a particular $k$ for each time step, depending on the specific context. For a fixed $k$, we would introduce unnecessary noise if too many words are involved, whereas we may prevent the model from exploring possible sequences if too few words are chosen.\n\nJang et al \\shortcite{jang2017categorical} proposed the Gumbel-Softmax, which can make the output distributions over words more sparse by approximating discrete one-hot categorical distributions with their continuous analogues. Such an approximate sampling mechanism for categorical variables can be integrated into RNNs, and trained using standard back-propagation by replacing categorical samples with Gumbel-Softmax ones. Theoretically, as the temperature $\\tau$ of the Gumbel-Softmax distribution approaches $0$, samples from this distribution will become one-hot. In practice, they begin with a high temperature and anneal to a small but non-zero one. The temperature, however, cannot be too close to $0$ (usually $\\tau \\ge 0.5$) because the variance of the gradients will be very large, which may result in gradient explosion.\n\nIdeas coming from learning to search \\cite{emnlp-wiseman:16,iclr-leblond:18} or reinforcement learning, such as REINFORCE \\cite{iclr-ranzato:16} and ACTOR-CRITIC \\cite{iclr-bahdanau:17}, have been used to derive training losses that are more closely related to the sequence-level evaluation metrics that we actually want to optimize. Those approaches side-step the issues associated with the discrete nature of the optimization by not requiring losses to be differentiable. While those approaches appear to be well suited to tackle the training problem of RNNs, they suffer from a very large action space, which makes them difficult to learn or search efficiently and thus slow to train, especially for natural language processing tasks that normally have a large vocabulary. We reduce the effect of search error by pursuing a few next-word candidates at each time step in a pseudo-parallel way. We note that those ideas and ours are complementary to each other and can be combined to improve performance further, although the extent of the improvements will be task dependent.\n\n\\section{Methodology}\n\nWe propose a fully differentiable method to train a sequence model, which allows the gradients to back-propagate through the entire sequence. \nFollowing \\citep{zhang2016generating,jang2017categorical,goyal2017differentiable}, the network takes as input the weighted sum of word embeddings at each time step, where the weights reflect how likely they are to be chosen at the previous step. \nThe following discussion is based on a general architecture that can be viewed as a variational encoder-decoder with an attention mechanism \\citep{bahdanau2014neural, kingma2014auto-encoding, bowman2016generating}. 
\nThe teacher forcing method, denoted as VAE-RNNSearch, is implemented with the above architecture, while we extend this architecture with a lexicon-based memory (Figure \\ref{fig:reg-model} shows the details, where the margin relaxed component should be ignored at present), called \\textbf{W}ord \\textbf{E}mbedding \\textbf{A}s \\textbf{M}emory (WEAM).\n\nGiven a source sentence $s_{1:M}$, $s_i \\in \\mathcal{V}^s$ where $\\mathcal{V}^s$ is the vocabulary of the source language, a multi-layer BiLSTM encoder $f^s(\\cdot)$ is used to compute a high-level contextual word representation $e_i \\in \\mathbb{R}^d$ for each word $s_i$, where $d$ is the dimensionality of the embedded vector space,\n\\begin{equation} \\small\n e_{1:M} = f^s(s_{1:M})\n\\end{equation}\nwhich are also the entries to be attended to by the decoder. \nThe final state of the BiLSTM is used to compute the mean and the log variance of the latent Gaussian distribution $\\mathcal{N}(\\mu, \\sigma^2)$ in VAE with a linear layer. \n\nIn the decoding process, a seed $c$ sampled from the latent Gaussian distribution is fed into a multi-layer LSTM decoder $f^t(\\cdot)$ with multi-hop attention. \nThe decoder aims at predicting the distributions $p_{1:N}$ of target words $t_{1:N}$ conditioned on the seed $c$ and the encoder output $e_{1:M}$,\n\\begin{equation} \\small\n\\label{eq:decoder}\n p_{1:N}=f^t(c, e_{1:M})\n\\end{equation}\nwhere each $p_j\\in [0, 1]^{|\\mathcal{V}^t|}$ is the probability distribution of the $j$-th target word, and $\\mathcal{V}^t$ is the vocabulary of the target language.\nThe words with the maximum probability are the predictions $\\tilde{t}_{1:N}$.\n\nSpecifically, we explain the decoding process in a recurrent manner. \nFirst of all, the word embedding matrix $\\mathcal{M} \\in \\mathbb{R}^{|\\mathcal{V}^t| \\times d}$ is considered as a memory.\nThe recurrent function $g(\\cdot)$ is a multi-layer LSTM cell incorporated with multi-hop attention over encoder output $e_{1:M}$. \nAt timestamp $j$, the recurrent function $g(\\cdot)$ outputs a hidden representation $h_{j+1}$ given the previous state $h_{j}$ and the current word embedding $v_j$, and the hidden state is further used to compute the likelihood of the predictions\n\\begin{equation} \\small\n h_{j+1} = g(h_j, v_j)\n\\end{equation}\n\\begin{equation} \\small\n p_{j+1} = \\text{softmax}(\\mathcal{M}h_{j+1}) \n\\end{equation}\nFor the starting timestamp, $h_0$ is defined as the seed $c$, and $v_0$ is the word embedding of the end-of-sentence token. \nDuring the training process, $v_j$ used in VAE-RNNSearch is the ground truth word embedding $\\mathcal{M}(t_j)$ and that in WEAM is approximated by attention over word embeddings. \nThe attention score here could be obtained by reusing the predicted distribution of the next word, namely $p_j(\\cdot)$. 
\nThus the input word embedding $v_j$ is approximated by\n\\begin{equation} \\small\n\\label{eq:embed_atten}\n v_j = p_j\\mathcal{M} = \\sum_{w\\in\\mathcal{V}^t}{p_j(w)\\mathcal{M}(w)}\n\\end{equation}\nFrom now on, we use ground truth words to denote words fed into the network and target words to denote objective ones for the purpose of distinguishing their functions, although they refer to the same sequence.\n\nFinally, after the entire sequence is generated, the objective function of WEAM is computed from the predicted probability $p_j(\\cdot)$ and the target sequence $t_{1:N}$ as well as the latent Gaussian distribution, \n\\begin{equation} \\small\n\\begin{split}\n\\label{eq:loss}\n L(\\theta) = & -\\sum_{j=1..N}{\\log p_j(t_j)} \\\\\n & + D_{KL}(\\mathcal{N}(\\mu, \\sigma^2) || \\mathcal{N}(0,1))\n\\end{split}\n\\end{equation}\nIn Equation \\ref{eq:loss}, the first term is the cross-entropy between the predicted probability and the target one, and the second term is the KL divergence between the approximate latent Gaussian distribution and a prior normal distribution. \nEquipped with WEAM, the training process of text generation is fully differentiable.\n\nAt inference time, instead of using the attention-weighted embedding in Equation \\ref{eq:embed_atten}, we found it better to feed an exact word embedding into the neural network at each timestamp. \nAlthough the word embedding memory regularizes $v_{j+1}$ to lie in a convex hull of word embeddings, compared with $h_j$ (Figure \\ref{fig:reg-weam}), there is still a difference from the representations of genuine words in the vocabulary, which is a finite set.\nThe bias could lead to minor mistakes, which accumulate as the prediction process goes on and are potentially harmful to future predictions.\nThus feeding an exact word embedding at inference would be helpful in generating a sequence by regularizing the input embedding and erasing potential mistakes.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=7.5cm]{model.pdf}\n \\caption{\\small{M-WEAM architecture. It is based on a variational encoder-decoder with an attention mechanism. (a) performs an attention operation over all the word embeddings to produce their probabilities (reflecting how likely they are to be chosen); (b) retrieves the words according to their probabilities (masked to make the distribution sparse). By applying a margin relaxed method, we rule out the words with probabilities lower than a threshold, and the remaining values are normalized to estimate the distribution of input words at the next time step.}}\n \\label{fig:reg-model}\n\\end{figure}\n\n\\subsection{Sparse Word Distributions}\n\nAs discussed above, feeding an exact word embedding at inference is helpful, which is equivalent to using a sparse distribution $p_j(\\cdot)$.\nHowever, the WEAM model suffers from the long tail effect caused by the non-sparse $p_j(\\cdot)$.\nMost words in the vocabulary are predicted with a very small but non-negligible probability, leading to a noisy $v_j$.\n\nA widely-used method to obtain a sparse distribution is rescaling the logits before presenting them to the final-layer classifier, or equivalently\n\\begin{equation} \\small\n\\label{eq:rescaled-logit}\n \\tilde{p}_j(k) = p_j(k)^{1\/\\tau}\n\\end{equation}\nwhere $\\tau$ is a rescaling factor decaying from one to zero \\citep{zhang2016generating, jang2017categorical}. 
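\nAs a concrete illustration (a sketch under our own naming, not the authors' released code), the attention-weighted input of Equation \\ref{eq:embed_atten} together with the rescaling above can be written in a few lines of PyTorch-style code:\n\\begin{verbatim}\nimport torch\n\ndef soft_input(logits, embedding_matrix, tau=1.0):\n    # distribution over the vocabulary, sharpened by the factor 1\/tau\n    p = torch.softmax(logits \/ tau, dim=-1)\n    # v_j = p_j M: convex combination of all word embeddings\n    v = p @ embedding_matrix                 # (batch, dim)\n    return v, p\n\\end{verbatim}\nDuring training $v$ replaces the ground truth embedding as the next input, whereas at inference the embedding of the argmax word is fed instead, as described above.\n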
\nIdeally, when $\\tau$ converges to zero, the distribution $\\tilde{p}_j(\\cdot)$ converges to a categorical one.\nHowever, in practice, $\\tau$ cannot be set to a small value, which could lead to gradient explosion. \nThus an alternative strategy is setting a lowerbound for $\\tau$, which is empirically $0.5$ in \\citep{jang2017categorical}. \nThis value of $\\tau$ is far from the theoretically optimal value (i.e.\\ $0$), so a truly sparse categorical distribution can hardly be achieved in practice. \n\n\\begin{figure}[t]\n \\centering\n \\begin{subfigure}{0.23\\textwidth}\n \\includegraphics[width=\\textwidth]{fullspace.png}\n \\subcaption{WEAM}\n \\label{fig:reg-weam}\n \\end{subfigure}\n \\begin{subfigure}{0.23\\textwidth}\n \\includegraphics[width=\\textwidth]{subspace.png}\n \\subcaption{M-WEAM}\n \\label{fig:reg-rweam}\n \\end{subfigure}\n \\caption{The difference between (a) WEAM and (b) M-WEAM in the way they form the subspace in a convex hull defined by all the word embeddings, which can be viewed as a kind of regularization. Blue points represent all the words in a vocabulary, whereas yellow ones are the unmasked words in M-WEAM. The simplex is the convex hull defined by the unmasked words chosen by the results of the lexicon-based attention.}\n\\label{fig:reg}\n\\end{figure}\n\nWe propose a margin relaxed method to make a sparse distribution by restricting $v_j$ to a relatively smaller subspace (see Figure \\ref{fig:reg-rweam}). \nThe architecture of the M-WEAM model is shown in Figure \\ref{fig:reg-model}. \nGiven a distribution $p_j(\\cdot)$, its sparse version $\\tilde{p}_j(\\cdot)$ is computed by ruling out the words with low probability,\n\\begin{equation} \\small\n \\tilde{p}_j(w) = \\frac{p_j(w)mask_j(w)}{\\sum_{w'\\in\\mathcal{V}^t}{p_j(w')mask_j(w')}}\n\\end{equation}\n\\begin{equation} \\small\n mask_j(w) = \\begin{cases} 1 & p_j(w) > \\eta_j \\\\\n 0 & \\text{otherwise}\n \\end{cases}\n\\end{equation}\nwhere $\\eta_j$ is a threshold and could be determined in various ways.\nIts motivation is to alleviate the long tail effect by masking the ``long tail'' entries with close-to-zero probability.\nA possible choice is to define the threshold $\\eta_j$ as \n\\begin{equation} \\small\n\\label{eq:margin}\n \\eta_j=e^{-\\epsilon}\\max_{k}p_j(k)\n\\end{equation}\nwhere $\\epsilon$ is a margin in log space.\nAs $\\epsilon$ converges to $0$, $\\tilde{p}_j(\\cdot)$ converges to a categorical distribution as well.\nHowever, we do not intend to anneal $\\epsilon$ to zero, aiming to keep a mixture of the meanings of multiple semantically and syntactically similar words in the training process.\nIn vector space, this mixture is represented as the convex hull shown in Figure \\ref{fig:reg-rweam}.\nAll the words forming this convex hull are reasonable predictions, and their representations are optimized in the training.\nIf $\\epsilon$ becomes very small, there is only one word, namely one point, to be optimized, leading to slower convergence.\nBesides, an input representation $v_j$ generated from a subspace would be more robust than one assigned to only one point, preventing the model from overfitting to a specific point.\nThe optimization over a subspace could be considered as a balance between that over a point and over the whole space in Figure \\ref{fig:reg-weam}.\n\nThe re-estimated probability $\\tilde{p}(\\cdot)$ can also be used in the objective function of Equation \\ref{eq:loss}, following the ``enough is enough'' principle.\nNote that the training objective is to discriminate a target word from the others in the vocabulary by increasing the margin between them.\n
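\nA minimal sketch of this margin relaxed masking (with our own function and tensor names, assuming a probability tensor of shape (batch, vocabulary)) is:\n\\begin{verbatim}\nimport math\nimport torch\n\ndef margin_relaxed_probs(p, eps=1.0):\n    # mask_j(w) = 1 iff p_j(w) > exp(-eps) * max_k p_j(k)\n    threshold = math.exp(-eps) * p.max(dim=-1, keepdim=True).values\n    masked = p * (p > threshold).float()\n    # re-normalized sparse distribution, used both as attention\n    # weights for the input embedding and in the loss\n    return masked \/ masked.sum(dim=-1, keepdim=True)\n\\end{verbatim}\n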
As training progresses, the margin becomes larger, implying that the target word can easily be picked out from the vocabulary. \nContinuing to optimize the margin between the target word and all the others would be useless, and easily leads to overfitting.\nThe ``enough is enough'' principle means that a word requires no further optimization when its likelihood is much lower than that of the target one. \nThus the re-estimated probability $\\tilde{p}(\\cdot)$ is used in the loss function to prevent the words with low probability from over-optimization.\n\nWe denote the vanilla non-sparse version as WEAM, the one rescaling logits in Equation \\ref{eq:rescaled-logit} as R-WEAM, and the proposed margin relaxed one as M-WEAM. Note that R-WEAM is slightly different from that in \\citep{zhang2016generating, jang2017categorical} since the word classifier and the word embeddings are tied \\cite{inan2017tying}. \n\n\\subsection{Warm-up Strategy}\n\nAt the beginning of training, the words generated by WEAM models are almost random. \nConditioned on such randomly predicted words, the model fails to benefit from further training.\nIn order to alleviate this problem, the models are warmed up using the teacher forcing strategy. We present two warming up techniques to train WEAM-family models.\n\n\\subsubsection{Scheduled Sampling}\n\nScheduled sampling is a widely-used way to warm up WEAM-family models in training \\citep{bengio2015scheduled}.\nWhether to use the embedding of a ground truth word or its approximation in Equation \\ref{eq:embed_atten} at the next timestamp is randomly selected as:\n\\begin{equation} \\small \\label{eq:sample-i}\n\\begin{split}\n v_{j+1} & =\\begin{cases}\n \\mathcal{M}(w_{j+1}), & t=0 \\\\\n \\sum_{w\\in\\mathcal{V}^t}{p_{j+1}(w)\\mathcal{M}(w)}, & t=1\n \\end{cases} \\\\\n t & \\sim \\mathcal{B}(\\text{steps}\/\\text{max\\_steps})\n\\end{split}\n\\end{equation}\nwhere $t$ is sampled from a Bernoulli distribution $\\mathcal{B}(\\cdot)$ whose expectation has a positive correlation with training progress.\nFrom a training perspective, the probability of choosing the approximated word embedding increases linearly from zero to one.\n\n\\subsubsection{Threshold}\n\nMasking words with low probability may cause a new problem: a target word may be masked and lose its opportunity for optimization.\nOnce a target word is ruled out of training, it may be ruled out forever because of its low probability, leading to bad performance and slow convergence.\nAn alternative way to solve this problem is to compute the threshold from the probability of the target word, and we determine the threshold in a probabilistic way as:\n\\begin{equation} \\small \\label{eq:gold-margin}\n\\begin{split}\n \\eta_j & =\\begin{cases}\n e^{-\\epsilon} p_j(w_{j+1}), & r=0 \\\\\n e^{-\\epsilon} \\max_{k}p_j(k), & r=1\n \\end{cases} \\\\\n r & \\sim \\mathcal{B}(\\max(\\text{steps}\/\\text{max\\_steps}, \\xi))\n\\end{split}\n\\end{equation}\nwhere $r$ is similar to $t$ with lowerbound $\\xi$.\nAdopting a lowerbound $\\xi$ also leads to an attractive property: a prediction and its respective target word are replaceable with each other, and thus the model is expected to capture the phenomenon of synonyms.\n
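\nAs a rough illustration of this warm-up (our own naming; a sketch rather than the authors' implementation), the threshold of Equation \\ref{eq:gold-margin} can be computed as:\n\\begin{verbatim}\nimport math\nimport random\n\ndef warmup_threshold(p_row, target_prob, eps, step, max_steps, xi=0.5):\n    # with probability max(step\/max_steps, xi) anchor the margin on the\n    # model's own best prediction, otherwise on the target word\n    if random.random() < max(step \/ max_steps, xi):\n        anchor = max(p_row)        # max_k p_j(k)\n    else:\n        anchor = target_prob       # p_j(w_{j+1})\n    return math.exp(-eps) * anchor\n\\end{verbatim}\n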
\n\\section{Experiments}\n\nWe conducted three sets of experiments to demonstrate the effectiveness of M-WEAM on various tasks, including machine translation, text summarization and dialogue generation. \n\nWe employed almost the same settings for all the tasks. \nA wrapped recurrent block was used for both the encoder and the decoder, defined as a sequence of LSTM, dropout, residual connection and layer normalization. \nA bi-directional LSTM was used in the encoder, while a uni-directional LSTM was used in the decoder.\nThe multi-hop attention component was also wrapped in a similar way.\nWe set the dimensionality of the vector space to $256$, the number of layers of the recurrent block to $3$, the dropout rate to $0.3$, the lower bound of the margin $\\epsilon$ to $1$, and $\\xi$ to $0.5$.\nThe models were optimized by Adam \\cite{kingma2014adam} with $\\beta_1=0.9, \\beta_2=0.999 $ and $\\text{eps}=10^{-8}$. \nThe learning rate was scheduled following \\cite{feng2018neural}, and its maximum was set to $0.001$.\nLabel smoothing was also used, with a smoothing rate of $0.1$. \nSeparate vocabularies were constructed for the source and target domains on the translation and dialogue tasks, whereas a shared vocabulary was used for the summarization task.\nAll the models were implemented with PyTorch and trained on a single NVIDIA Titan Xp GPU.\n\n\n\\subsection{Machine Translation}\n\n\\begin{table}[b]\n \\centering\n \\begin{tabular}{l|c}\n \\toprule\n Model & BLEU \\\\\n \\midrule\n MIXER & $20.73$ \\\\\n BSO & $23.83$ \\\\\n $\\alpha$-soft annealing & $20.60$ \\\\\n Actor-Critic & $27.49$ \\\\\n SEARNN & $28.20$ \\\\\n NPMT & $28.57$ \\\\\n \\midrule\n VAE-RNNSearch & $28.97$ \\\\\n WEAM & $27.84$ \\\\\n R-WEAM & $28.13$\\\\\n \\midrule\n M-WEAM & \\textbf{29.17} \\\\\n \\bottomrule\n \\end{tabular}\n \\caption{Results on IWSLT14 German-English.}\n \\label{tab:de-en}\n\\end{table}\n\nThe machine translation task was evaluated on two datasets, IWSLT14 German to English (De-En) \\citep{cettolo2014report} and IWSLT15 English to Vietnamese (En-Vi) \\citep{cettolo2015iwslt}. \nThe datasets were preprocessed largely following \\cite{huang2017towards, feng2018neural}. \nFor IWSLT14 De-En, the dataset contains roughly 153K training sentences, 7K development sentences and 7K testing sentences. 
\nWords whose frequency of occurrence is lower than 3 were replaced by an ``UNK'' token.\nFor IWSLT15 En-Vi, the dataset contains 133K translation pairs in the training set, 1,553 pairs in the validation set (TED tst2012) and 1,268 pairs in the test set (TED tst2013).\nSimilarly to De-En, words with a frequency of occurrence lower than 5 were replaced by the ``UNK'' token.\n\n\\begin{table}[t]\n \\centering\n \\begin{tabular}{l|c}\n \\toprule\n Model & BLEU \\\\\n \\midrule\n Hard Monotonic & $23.00$ \\\\\n NPMT & $26.91$ \\\\\n \\midrule\n VAE-RNNSearch & $28.66$ \\\\\n WEAM & $27.14$ \\\\\n R-WEAM & $26.79$ \\\\\n \\midrule\n M-WEAM & \\textbf{28.70} \\\\\n \\bottomrule\n \\end{tabular}\n \\caption{Results on IWSLT15 English-Vietnamese.}\n \\label{tab:en-vi}\n\\end{table}\n\nOn IWSLT14 De-En, we compared M-WEAM against VAE-RNNSearch \\citep{bowman2016generating}, WEAM, R-WEAM \\citep{zhang2016generating}, MIXER \\citep{iclr-ranzato:16}, BSO \\citep{emnlp-wiseman:16}, Actor-Critic \\citep{iclr-bahdanau:17}, SEARNN \\citep{iclr-leblond:18}, $\\alpha$-soft annealing \\cite{goyal2017differentiable}, and NPMT \\citep{huang2017towards}.\nOn IWSLT15 En-Vi, we compared M-WEAM against VAE-RNNSearch \\citep{bowman2016generating}, WEAM, R-WEAM \\citep{zhang2016generating}, Hard Monotonic \\citep{raffel2017online} and NPMT \\citep{huang2017towards}.\n\n\nTable \\ref{tab:de-en} and Table \\ref{tab:en-vi} display the results on the German-English and English-Vietnamese translation tasks, respectively. \nIt can be seen that M-WEAM achieves decent results on both tasks.\nCompared with the other differentiable models, M-WEAM obtains a notable gain of over $1$ BLEU point, outperforming WEAM and R-WEAM on both tasks. \nM-WEAM also achieves results comparable to those of its teacher forcing counterpart.\nBesides, M-WEAM beats previous SOTA RNN-based models by a significant margin.\n\nInspired by \\cite{keskar2016large}, where the neighborhood of the minima is explored to measure a model's sensitivity, we conducted a similar experiment on IWSLT14 De-En to show the robustness of VAE-RNNSearch and the WEAM family models. \nWe tested the models by injecting noise into the input embeddings. \nThe noise was sampled from a zero-mean Gaussian distribution with the standard deviation increasing from 0 to 1. \nAs the noise increases, models with sharper local minima should suffer more from performance degradation. \n\nFigure \\ref{fig:sensitivity} shows how the performance of the compared models changes against noise. \nThe M-WEAM model surpasses the teacher forcing model by nearly 3 BLEU points when the standard deviation reaches 1. \nIt thus can be inferred that M-WEAM converges to a flatter minimum than VAE-RNNSearch does.\nBesides, among all models, WEAM shows the strongest anti-noise ability, because it has been trained with massive noise due to the long tail effect. \nIn general, M-WEAM achieves a balance between performance and robustness.\n
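The noise-injection procedure itself is straightforward; the following sketch is only an illustration (the helper name \\texttt{add\\_embedding\\_noise} and the dummy tensor shapes are placeholders, not part of the experimental code).
\\begin{verbatim}
import torch

def add_embedding_noise(embeddings, sigma):
    # embeddings: (batch, seq_len, d) input word embeddings
    # zero-mean Gaussian noise with standard deviation sigma
    return embeddings + sigma * torch.randn_like(embeddings)

# sweep the noise level from 0 to 1, re-decoding and re-scoring BLEU at each level
for sigma in (0.0, 0.25, 0.5, 0.75, 1.0):
    noisy = add_embedding_noise(torch.zeros(2, 5, 256), sigma)  # dummy batch
\\end{verbatim}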
\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=7.5cm]{noise.pdf}\n \\caption{Sensitivity test with different Gaussian noise levels. The $x$-axis is the standard deviation of the Gaussian noise.}\n \\label{fig:sensitivity}\n\\end{figure}\n\n\n\n\\subsection{Abstractive Summarization}\n\nWe evaluated the models on the Gigaword benchmark, and applied the same data preprocessing as \\citep{rush2015neural, chopra2016abstractive}.\nThe results are reported with F1 ROUGE scores, including ROUGE-1, ROUGE-2 and ROUGE-L.\n\nThe results of ABS \\citep{rush2015neural}, ABS+ \\citep{rush2015neural}, Feats \\citep{nallapati2016abstractive}, RAS-LSTM \\citep{chopra2016abstractive} and RAS-Elman \\citep{chopra2016abstractive} are extracted from the numbers reported by their authors. We implemented VAE-RNNSearch \\citep{bowman2016generating} as well as WEAM, and report their results using our implementations. The results of ABS and ABS+ with greedy search are unavailable, and thus we report those with beam search instead. We are aware that SEASS \\citep{zhou2017selective} and DRGD \\citep{li2017deep} are two recent studies in which higher ROUGE scores were reported on the Gigaword dataset. \nTheir results are not listed because those models are specifically tailored for the abstractive summarization task. SEASS is equipped with a selective network for salient language units, and DRGD uses a recurrent latent variable to capture the summary structure, while we focus on the general framework and training algorithm for sequence-to-sequence tasks. \n\n\n\\begin{table}[t]\n \\centering\n \\begin{tabular}{l|c|c|c}\n \\toprule\n Model & R-1 & R-2 & R-L \\\\\n \\midrule\n ABS$^*$ & $29.55$ & $11.32$ & $26.42$ \\\\\n ABS+$^*$ & $29.78$ & $11.89$ & $26.97$ \\\\\n Feats & $32.67$ & $15.59$ & $30.64$ \\\\\n RAS-LSTM & $31.71$ & $13.63$ & $29.31$ \\\\\n RAS-Elman & $33.10$ & $14.45$ & $30.25$ \\\\\n \\midrule\n VAE-RNNSearch & $\\textbf{33.99}$ & $15.72$ & $\\textbf{31.67}$ \\\\\n WEAM & $33.09$ & $15.05$ & $30.79$ \\\\\n R-WEAM & $32.35$ & $14.52$ & $30.10$ \\\\\n \\midrule\n M-WEAM & $33.87$ & $\\textbf{15.78}$ & $31.52$ \\\\\n \\bottomrule\n \\end{tabular}\n \\caption{Results on the Gigaword dataset. The models indicated with $^*$ use beam search to generate summaries whereas the others use greedy search instead.}\n \\label{tab:gigaword}\n\\end{table}\n\nTable \\ref{tab:gigaword} shows the results on the Gigaword test set. \nAmong the fully differentiable WEAM family models, M-WEAM achieves the best performance, outperforming WEAM and R-WEAM by at least $0.7$ points on all ROUGE metrics.\nCompared with VAE-RNNSearch, M-WEAM achieves comparable results on all ROUGE metrics. \nIt is worth noting that although M-WEAM performs slightly worse than VAE-RNNSearch on ROUGE-1, it does better on ROUGE-2, which shows that M-WEAM is able to recover longer patterns, thanks to its fully differentiable property.\n\n\\subsection{Dialogue}\n\nFinally, we evaluated the models on the task of single-turn dialogue, and used a subset of the STC-weibo corpus \\cite{shang2015neural} as the dataset, where Weibo is a popular Twitter-like microblogging service in China. \nThe complete corpus consists of roughly $4.4$M conversational post-response pairs.\nWe removed the sentences containing more than $10$ words and created a dataset with $907,809$ post-response pairs for fast training and testing.\nThe dataset was randomly split into training, validation, and testing sets, which contain $899$K, $10$K and $10$K pairs, respectively. 
\nThe vocabulary was formed by the most frequent $40$K words in the training data.\n\nThe BLEU score is a widely used metric in dialogue evaluation to measure response relevance, whereas distinct-1\/2 is used to measure response diversity. \nAs is shown in Table \\ref{tab:stc}, the M-WEAM model achieves the highest performance in terms of BLEU, but performs worse than the other models on the distinct metrics. \nUnlike in machine translation, a response can be taken as an acceptable answer to many posts (or questions), and the dataset contains a number of one-to-many cases. In order to recover the probability of entire sequences, the fully differentiable model with improved training efficiency tends to generate frequently-used sequences and $n$-grams, which hurts the diversity of the generated responses. We leave this issue for future research. \nThe high relevance but low diversity of the responses generated by M-WEAM show that the proposed M-WEAM model is more desirable for tasks that require generating exact sequences.\n\n\\begin{table}[t] \n \\small\n \\centering\n \\begin{tabular}{l|c|c|c}\n \\toprule\n Model & BLEU & distinct-1 & distinct-2 \\\\\n \\midrule\n VAE-RNNSearch & $5.95$ & $2.39$ & $19.01$ \\\\\n WEAM & $5.67$ & \\textbf{2.50} & $19.52$ \\\\\n R-WEAM & $5.73$ & $2.49$ & \\textbf{19.65} \\\\\n \\midrule\n M-WEAM & \\textbf{6.11} & $2.39$ & $16.31$ \\\\\n \\bottomrule\n \\end{tabular}\n \\normalsize\n \\caption{Automatic evaluation results on the STC-weibo dataset. The best results are highlighted in bold font.}\n \\label{tab:stc}\n\\end{table}\n\n\n\\section{Conclusion}\n\nWe proposed a fully differentiable training algorithm for RNNs to alleviate the discrepancy between training and inference (exposure bias), and to bridge the gap between the training loss defined at each word and the evaluation metrics derived from the whole sequence (evaluation bias). \nThe fully differentiable property is achieved by feeding the network at each step with a ``bundle'' of words carrying similar meanings instead of a single ground truth word. \nIn our solution, the network is allowed to take a different number of words as input at each time step, depending on the context. \nWe may have more candidate words at some positions, and fewer at others, when trying to generate a sentence. \nExperiments on machine translation, abstractive summarization, and open-domain dialogue response generation tasks showed that the proposed architecture and training algorithm achieved the best or comparable performance, especially on BLEU and ROUGE-2, metrics defined at the sequence level, reflecting an enhanced ability to capture long-term dependencies and to recover the probability of the whole sequence. \nThe trained networks are also empirically shown to be more robust to noise, and consistently perform better than the other competitors across different noise levels.\n\n\n\n\\section{Introduction}\\label{intro}\\setcounter{equation}{0}\n\t\n\tMany physical processes, such as harvesting, natural disasters, shocks, etc., cause abrupt changes in their states at certain time instants. These sudden changes occur over a negligible time period and are described in the form of instantaneous impulses. The theory of instantaneous impulsive systems has remarkable applications in several areas of science and engineering, for example, population dynamics, ecology, network control systems with scheduling protocols, etc. (cf. \\cite{NE,AM,TY}, etc.). 
However, certain dynamics of evolution processes in pharmacotherapy cannot be modeled by instantaneous impulsive dynamical systems, for example, in hemodynamical equilibrium of a person, introduction of insulin into the bloodstream and the consequent absorption of the body are gradual processes and stay active for a finite time interval. Thus, we cannot describe this situation via instantaneous impulsive systems. Therefore, Hern\\'andez et. al. \\cite{EHD} introduced a new class of impulses termed as non-instantaneous impulses, which starts at an arbitrary fixed point and stays active on a finite time interval and they established the existence of solutions for such a class of impulsive differential equations. Later, Wang et. al. \\cite{JRW,JRY}, extended this model to two general classes of impulsive differential equations, which are very important in the study of dynamics of evolutionary processes in pharmacotherapy. For more details on the theory of non-instantaneous impulsive systems, we refer the interested readers to \\cite{YTJ,JMF}, etc, and the references therein. In addition to this, there are various real world phenomena, for example, neural network, inferred grinding models, ecological models, heat conduction in materials with fading memory etc. In these phenomena, the current state of a system is influenced by the past states. The dynamics of such processes are characterized by delay differential equations (finite or infinite), see for example, \\cite{XFL,LA,JW}, etc. In the application point of view, the functional evolution systems with state-dependent delays are more prevalent and adequate, see for instance, \\cite{AO,CNF}, etc, and the references therein. \n\t\n\tThe concept of controllability plays a vital role in the study of control systems. Controllability (exact or approximate) refers that the solution of a control system can steer from an arbitrary initial state to a desired final state by using some control function. In the infinite dimensional setting, in comparison with exactly controllable systems, approximate controllable systems are more extensive and have wide range of applications, cf. \\cite{M,TRR,TR,EZ}, etc. In the past two decades, the problem of approximate controllability of various kinds of systems (in Hilbert and Banach spaces) such as impulsive differential equations, functional differential equations, stochastic systems, Sobolev type evolution systems, etc, is extensively studied with the help of fixed point approach and produced excellent results, see for instance, \\cite{SM,SAM,FUX,AGK,M,ER}, etc. \n\t\n\tOn the other hand, fractional differential equations (FDEs), which involve fractional derivatives of the form $\\frac{d^{\\alpha}}{dt^{\\alpha}}$, where $\\alpha>0$ is not necessarily an integer, attained great importance due to their ability to model complex phenomena. They naturally appear in diffusion processes, electrical engineering, viscoelasticity, control theory of dynamical systems, quantum mechanics, biological sciences, electromagnetic theory, signal processing, finance, economics, and many other fields (cf. \\cite{HlR,AAK,VM,IPF,TPJ,YAR,MST}, etc). A comprehensive study on fractional calculus and FDEs are available in \\cite{AAK,IPF}, etc. For the past few decades, FDEs in infinite dimensions seek incredible attention of many researchers and eminent contributions have been made both in theory as well as in applications. 
Several authors studied the existence and approximate controllability results for fractional order systems in Hilbert spaces, see for instance, \\cite{SNS,NIZ,SRY,JWY}, etc, and the references therein. \n\t\n\tThe study of approximate controllability of the fractional order control systems in Banach spaces has not got much attention in the literature. In \\cite{NMI}, Mahmudov developed sufficient conditions for the approximate controllability of the Sobolev type fractional evolution equations with Caputo derivative in separable reflexive Banach spaces using the Schauder fixed point theorem. Later in \\cite{MIN}, he studied the approximate controllability of the fractional neutral evolution systems by taking infinite delay using Krasnoselkii's fixed-point theorem. After that Chalishazar et. al. \\cite{DNC} extended his work by considering instantaneous impulses and examined the approximate controllability in Banach spaces. \n\t\n\tThe articles \\cite{PCY,FZM,RS,RGR}, etc claimed the approximate controllability of the factional order systems in general Banach spaces using resolvent operator condition. But, the resolvent operator defined in these works is valid only if the state space is a Hilbert space, whose dual is identified by the space itself (see, the resolvent operator definition in the expression \\eqref{2.1} and Remark \\ref{rem2.8}). Moreover, many papers deal with the fractional order impulsive systems with delays, see for instance, \\cite{ZT,ZH,YZO}, etc. In these works, the characterization of norm or seminorm defined in the phase space involves uniform norm, but the choice of such a norm or seminorm is not suitable in the impulsive case, for counter examples and more details, we refer the interested readers to \\cite{GU} (a detailed discussion on this problem is also available in \\cite{SAMT}). Many articles considered the fractional order impulsive systems with non-instantaneous impulses (cf. \\cite{RMS,Ad,Zyf,Zy} etc.). In theses works, the concept of the mild solution defined for the considered system is not realistic, a counter example and appropriate definition of the mild solution discussed in \\cite{Fe,JRY} (see Definition \\ref{def2.6} for the mild solution definition for system under our consideration). One of the main aims of this work is to resolve these issues.\n\t\n\tRecently, a few papers have been reported on the approximate controllability of the non-instantaneous impulsive systems with and without delays in Hilbert spaces, (cf. \\cite{RMS,SKS,SAJ}, etc). Dhayal et. al. \\cite{RMS} formulated the approximate controllability results for a class of fractional order non-instantaneous impulsive stochastic differential equations driven by fractional Brownian motion. In \\cite{SKS}, Kumar and Abdal derived sufficient conditions for the approximate controllability of non-instantaneous impulsive fractional semilinear measure driven control systems with infinite delay. Liu and his co-authors, in \\cite{SAJ}, investigated the approximate controllability of fractional differential equation with non-instantaneous impulses via iterative learning control scheme. 
To the best of our knowledge, no work has been reported on the approximate controllability of fractional order non-instantaneous impulsive systems with state-dependent delay in Banach spaces.\n\t\n\tMotivated by the above facts, in this work, we derive sufficient conditions for the approximate controllability of fractional order non-instantaneous impulsive functional evolution equations with state-dependent delay in separable reflexive Banach spaces. Moreover, we properly define the resolvent operator in Banach spaces, which plays a crucial role in obtaining the aforementioned results (see the expression \\eqref{2.1} below). Proper motivation for the construction of the different forms of feedback controls used for fractional order semilinear systems in the literature is also provided in this work (see Remarks \\ref{rem3.6} and \\ref{rem4.4} below). Furthermore, our paper modifies the phase space characterization to incorporate Guedda's observations in \\cite{GU}, by replacing the uniform norm on the phase space by an integral norm for the impulsive differential equations (see Example \\ref{exm2.8}). \n\t\n\tWe consider the following fractional order non-instantaneous impulsive functional evolution equation with state-dependent delay: \n\t\\begin{equation}\\label{1.1}\n\t\\left\\{\n\t\\begin{aligned}\n\t^C\\mathrm{D}_{0,t}^{\\alpha}x(t)&=\\mathrm{A}x(t)+\\mathrm{B}u(t)+f(t,x_{\\rho(t, x_t)}), \\ t\\in\\bigcup_{k=0}^{m} (\\tau_k, t_{k+1}]\\subset J=[0,T], \\\\\n\tx(t)&=h_k(t, x(t_k^{-})),\\ t\\in(t_k, \\tau_k], \\ k=1,\\dots,m, \\\\\n\tx_{0}&=\\psi\\in \\mathfrak{B},\n\t\\end{aligned}\n\t\\right.\n\t\\end{equation}\n\twhere \\begin{itemize} \\item $^C\\mathrm{D}_{0,t}^{\\alpha}$ denotes the derivative in the Caputo sense of order $\\alpha$ with $\\frac{1}{2}<\\alpha<1$, \\item the operator $\\mathrm{A}:\\mathrm{D(A)}\\subset \\mathbb{X}\\to\\mathbb{X}$ is the infinitesimal generator of a $\\mathrm{C}_0$-semigroup $ \\mathcal{T}(t) $ on a separable reflexive Banach space $ \\mathbb{X}$ (having a strictly convex dual $\\mathbb{X}^*$), \\item the linear operator $\\mathrm{B}:\\mathbb{U}\\to\\mathbb{X}$ is bounded with $\\left\\|\\mathrm{B}\\right\\|_{\\mathcal{L}(\\mathbb{U},\\mathbb{X})}= \\tilde{M}$ and the control function $u\\in \\mathrm{L}^{2}(J;\\mathbb{U})$, where $\\mathbb{U}$ is a separable Hilbert space, \\item the function $ f:J\\times \\mathfrak{B}\\rightarrow \\mathbb{X} $, where $\\mathfrak{B}$ is a phase space, which will be specified in the subsequent sections, \\item for $k=1,\\ldots,m$, the functions $h_k:[t_k, \\tau_k]\\times\\mathbb{X}\\to\\mathbb{X}$ represent the non-instantaneous impulses and the fixed points $\\tau_k$ and $t_k$ satisfy $0=t_0=\\tau_0<t_1\\leq\\tau_1<t_2\\leq\\cdots<t_m\\leq\\tau_m<t_{m+1}=T$.\n\t\\end{itemize}\n\n\t\\section{Preliminaries}\\setcounter{equation}{0}\n\t\\begin{Def}\n\t\tThe \\emph{Riemann-Liouville fractional integral} of a function $f:[a,b]\\to\\mathbb{R}$ of order $q>0$ is defined as\n\t\t\\begin{align*}\n\t\tI_{a}^{q}f(t):=\\frac{1}{\\Gamma(q)}\\int_{a}^{t}\\frac{f(s)}{(t-s)^{1-q}}\\mathrm{d}s,\\ \\mbox{ for a.e. } \\ t\\in[a,b],\n\t\t\\end{align*}\n\t\twhere $f\\in\\mathrm{L}^1([a,b];\\mathbb{R})$ and $\\Gamma(\\alpha)=\\int_{0}^{\\infty}t^{\\alpha-1}e^{-t}\\mathrm{d}t$ is the Euler gamma function.\n\t\\end{Def}\n\t\\begin{Def}\n\t\tThe \\emph{Riemann-Liouville fractional derivative} of a function $f:[a,b]\\to\\mathbb{R}$ of order $q>0$ is given as \n\t\t\\begin{align*}\n\t\t^L\\mathrm{D}_{a,t}^{q}f(t):=\\frac{1}{\\Gamma(n-q)}\\frac{d^n}{dt^n}\\int_{a}^{t}(t-s)^{n-q-1}f(s)\\mathrm{d}s,\\ \\mbox{ for a.e. 
}\\ t\\in[a,b],\n\t\t\\end{align*}\n\t\twith $n-1< q0$ is defined as \n\t\t\\begin{align*}\n\t\t^C\\mathrm{D}_{a,t}^{q}f(t):=\\ ^L\\mathrm{D}_{a,t}^{q}\\left[f(t)-\\sum_{p=1}^{n-1}\\frac{f^{(p)}(a)}{p!}(x-a)^p\\right],\\ \\mbox{ for a.e. } \\ t\\in[a,b].\n\t\t\\end{align*}\n\t\\end{Def}\n\n\n\n\t\\begin{rem}[\\cite{IPF}]\n\t\tIf a function $f\\in\\mathrm{AC}^n([a,b];\\mathbb{R})$, then the Caputo fractional derivative of $f$ can be written as\n\t\t\\begin{align*}\n\t\t^C\\mathrm{D}_{a,t}^{q}f(t)=\\frac{1}{\\Gamma(n-q)}\\int_{a}^{t}(t-s)^{n-q-1}f^{(n)}(s)\\mathrm{d}s,\\ \\mbox{ for a.e. }\\ \\ t \\in[a,b], \\ n-1< q0$, then the operators $\\mathcal{T}_{q}(t)$ and $\\widehat{\\mathcal{T}}_{q}(t)$ are also compact for $t>0$. \n\t\t\\end{enumerate}\n\t\\end{lem}\n\tLet us define the set \n\t\\begin{align*} \n\t\\mathrm{PC}(J;\\mathbb{X})&:=\\big\\{x:J \\rightarrow \\mathbb{X} : x\\vert_{t\\in I_k}\\in\\mathrm{C}(I_k;\\mathbb{X}),\\ I_k:=(t_k, t_{k+1}],\\ k=0,1,\\ldots,m \\ \\mbox{ and }\\ x(t_k^+)\\\\&\\qquad \\qquad \\mbox{ and }\\ x(t_k^-)\\ \\mbox{ exist for each }\\ k=1,\\ldots,m, \\ \\mbox{ and satisfy }\\ x(t_k)=x(t_k^-)\\big\\}, \n\t\\end{align*}\n\tendowed with the norm $\\left\\|x\\right\\|_{\\mathrm{PC}(J;\\mathbb{X})}:=\\sup\\limits_{t\\in J}\\left\\|x\\right\\|_{\\mathbb{X}}$.\n\t\n\tWe now introduce the concept of mild solution for the system \\eqref{1.1} (cf. \\cite{JRY}). \n\t\\begin{Def}[Mild solution]\\label{def2.6}\n\t\tA function $x(\\cdot;\\psi,u):(-\\infty, T]\\to\\mathbb{X}$ is said to be a \\emph{mild solution} of \\eqref{1.1}, if $x_0=\\psi\\in\\mathfrak{B}$ and $x\\vert_{J}\\in\\mathrm{PC}(J;\\mathbb{X})$ and satisfies the following:\n\t\t\\begin{equation}\\label{2.2}\n\t\tx(t)=\n\t\t\\begin{dcases}\n\t\t\\mathcal{T}_{\\alpha}(t)\\psi(0)+\\int_{0}^{t}(t-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(t-s)\\left[\\mathrm{B}u(s)+f(s,x_{\\rho(s, x_s)})\\right]\\mathrm{d}s,\\ t\\in[0, t_1],\\\\\n\t\th_k(t, x(t_k^-)),\\ t\\in(t_k, \\tau_k],\\ k=1,\\ldots,m,\\\\\n\t\t\\mathcal{T}_{\\alpha}(t-\\tau_k)h_k(\\tau_k, x(t_k^-))-\\int_{0}^{\\tau_k}(\\tau_k-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(\\tau_k-s)\\left[\\mathrm{B}u(s)+f(s,x_{\\rho(s, x_s)})\\right]\\mathrm{d}s\\\\\\quad+\\int_{0}^{t}(t-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(t-s)\\left[\\mathrm{B}u(s)+f(s,x_{\\rho(s, x_s)})\\right]\\mathrm{d}s,\\ t\\in(\\tau_k,t_{k+1}],\\ k=1,\\ldots,m.\n\t\t\\end{dcases}\n\t\t\\end{equation}\n\t\\end{Def}\n\n\t\\subsection{Phase space}\n\tWe now provide the definition of phase space $\\mathfrak{B}$ introduced in \\cite{HY}, and suitably modify to incorporate the impulsive systems (cf. \\cite{VOj}). The linear space $\\mathfrak{B}$ equipped with the seminorm $\\left\\|\\cdot\\right\\|_{\\mathfrak{B}}$, consisting of all functions from $(-\\infty, 0]$ into $\\mathbb{X}$ and satisfying the following axioms:\n\t\\begin{enumerate}\n\t\t\\item [(A1)] If $x: (-\\infty, T]\\rightarrow \\mathbb{X}$ such that $x_{0}\\in \\mathfrak{B}$ and $x|_{J}\\in \\mathrm{PC}(J;\\mathbb{X})$. 
Then the following conditions hold:\n\t\t\\begin{itemize}\n\t\t\t\\item [(i)] $x_{t}\\in\\mathfrak{B}$ for $t\\in J$.\n\t\t\t\\item [(ii)] $\\left\\|x_{t}\\right\\|_{\\mathfrak{B}}\\leq \\Lambda(t)\\sup\\{\\left\\|x(s)\\right\\|_{\\mathbb{X}}: 0\n\t\t\t\\leq s\\leq t\\}+\\Upsilon(t)\\left\\|x_{0}\\right\\|_{\\mathfrak{B}},$ for $t\\in J$, where $\\Lambda, \\Upsilon:[0, \\infty)\\rightarrow [0, \\infty)$ are independent of $x$, the function $\\Lambda(\\cdot)$ is strictly positive and continuous, $\\Upsilon(\\cdot)$ is locally bounded.\n\t\t\n\t\t\\end{itemize}\n\t\t\\item [(A2)] The space $\\mathfrak{B}$ is complete. \n\t\\end{enumerate} \n\tFor any $\\psi\\in \\mathfrak{B}$, the function $\\psi_{t}, \\ t\\leq 0,$ defined as $\\psi_{t}(\\theta)=\\psi(t+\\theta),\\ \\theta \\in (-\\infty, 0].$ Then for any function $x(\\cdot)$ satisfying the axiom (A1) with $x_{0}=\\psi$, we can extend the mapping $t\\mapsto x_{t}$ by setting $x_{t}=\\psi_{t}, \\ t\\leq0$, to the whole interval $(-\\infty, T]$. Moreover, let us introduce a set \n\t\\begin{align*}\n\t\\mathcal{Q}(\\rho^-)&=\\{\\rho(s, \\varphi):\\ \\rho(s, \\varphi)\\leq 0, \\mbox{ for } (s, \\varphi)\\in J \\times \\mathfrak{B}\\}.\n\t\\end{align*} \n\tAssume that the function $t\\mapsto \\psi_{t}$, defined from $\\mathcal{Q}(\\rho^-)$ into $\\mathfrak{B}$ is continuous, and there exists a continuous and bounded function $\\varTheta^{\\psi}: \\mathcal{Q}(\\rho^-)\\rightarrow (0, \\infty)$ such that \n\t\\begin{align*}\n\t\\left\\|\\psi_{t}\\right\\|_{\\mathfrak{B}}&\\leq \\varTheta^{\\psi}(t)\\left\\|\\psi\\right\\|_{\\mathfrak{B}}.\n\t\\end{align*}\n\t\\begin{lem}[\\cite{HY}]{\\label{lem2.7}}\n\t\tLet $x:(-\\infty, T]\\rightarrow \\mathbb{X}$ be a function such that $x_0=\\psi$ and $x\\vert_J\\in\\mathrm{PC}(J;\\mathbb{X})$. Then \n\t\t\\begin{align*}\n\t\t\\left\\|x_s\\right\\|_{\\mathfrak{B}}&\\leq H_{1}\\left\\|\\psi\\right\\|_{\\mathfrak{B}}+H_{2}\\sup \\big\\{ \\left\\|x(\\theta)\\right \\|_{\\mathbb{X}}:\\theta \\in [0, \\max \\{0, s\\}] \\big\\}, \\ s\\in \\mathcal{Q}(\\rho^-) \\cup J,\n\t\t\\end{align*}\n\t\twhere $$H_{1}= \\sup\\limits_{t\\in \\mathcal{Q}(\\rho^-)} \\Theta^{\\psi}(t) + \\sup\\limits_{t\\in J}\\Upsilon(t), \\;\\; H_{2}= \\sup\\limits_{t\\in J}\\Lambda(t).$$ \n\t\\end{lem}\n\t\\begin{Ex}\\label{exm2.8}\n\t\tLet us take $\\mathfrak{B}=\\mathrm{PC}_{r}\\times\\mathrm{L}^p_h(\\mathbb{X}), r\\ge0, 1\\le p<\\infty$, which consists of all functions $\\psi:(-\\infty,0]\\to\\mathbb{X}$ such that $\\psi\\vert_{[-r,0]}\\in \\mathrm{PC}([-r,0];\\mathbb{X}),$ Lebesgue measurable on $(-\\infty,-r)$ and $h\\|\\psi(\\cdot)\\|^p_{\\mathbb{X}}$ is Lebesgue integrable on $(-\\infty,-r]$. The seminorm in $\\mathfrak{B}$ is defined as \n\t\t\\begin{align}\n\t\t\\label{Bnorm}\\left\\|\\psi\\right\\|_{\\mathcal{P}}:=\\int_{-r}^{0}\\left\\|\\psi(\\theta)\\right\\|_\\mathbb{X}\\mathrm{d}\\theta+\\left(\\int_{-\\infty}^{-r}h(\\theta)\\left\\|\\psi(\\theta)\\right\\|^p_{\\mathbb{X}}\\mathrm{d}\\theta\\right)^{\\frac{1}{p}},\n\t\t\\end{align}\n\t\twhere the function $h:(-\\infty, 0]\\to\\mathbb{R}^+$ is locally bounded and Lebesgue integrable. Moreover, there exists a locally bounded function $H:(-\\infty,0]\\to\\mathbb{R}^+$ such that $h(t+\\theta)\\le H(t)h(\\theta),$ for all $t\\le0$ and $\\theta\\in(-\\infty,0)\\backslash \\mathcal{O}_t$ where $\\mathcal{O}_t\\subseteq(-\\infty,0)$ is a set with Lebesgue measure zero. 
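\t\tFor instance, a standard and purely illustrative choice (the exponent $\\gamma$ is not fixed anywhere in this paper) is $h(\\theta)=e^{\\gamma\\theta}$ for some $\\gamma>0$: this $h$ is locally bounded and Lebesgue integrable on $(-\\infty,-r]$, and $h(t+\\theta)=e^{\\gamma t}h(\\theta)$ for all $t\\le0$ and $\\theta\\in(-\\infty,0)$, so that one may take $H(t)=e^{\\gamma t}$. With this choice and $p=2$, the seminorm \\eqref{Bnorm} becomes\n\t\t\\begin{align*}\n\t\t\\left\\|\\psi\\right\\|_{\\mathcal{P}}=\\int_{-r}^{0}\\left\\|\\psi(\\theta)\\right\\|_{\\mathbb{X}}\\mathrm{d}\\theta+\\left(\\int_{-\\infty}^{-r}e^{\\gamma\\theta}\\left\\|\\psi(\\theta)\\right\\|^{2}_{\\mathbb{X}}\\mathrm{d}\\theta\\right)^{\\frac{1}{2}}.\n\t\t\\end{align*}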
\n\t\\end{Ex}\n\n\t\\subsection{Resolvent operator and assumptions}\n\tTo discuss the approximate controllability of the system \\eqref{1.1}, we first define the following operators:\n\t\\begin{equation}\\label{2.1}\n\t\\left\\{\n\t\\begin{aligned}\n\tL_0^Tu&:=\\int^{T}_{0}(T-t)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(T-t)\\mathrm{B}u(t)\\mathrm{d}t,\\\\\n\t\\Phi_{0}^{T}&:=\\int^{T}_{0}(T-t)^{2(\\alpha-1)}\\widehat{\\mathcal{T}}_{\\alpha}(T-t)\\mathrm{B}\\mathrm{B}^*\\widehat{\\mathcal{T}}_{\\alpha}(T-t)^*\\mathrm{d}t,\\\\\n\t\\mathcal{R}(\\lambda,\\Phi_{0}^{T})&:=(\\lambda \\mathrm{I}+\\Phi_{0}^{T}\\mathcal{J})^{-1},\\ \\lambda > 0,\n\t\\end{aligned}\n\t\\right.\n\t\\end{equation}\n\twhere $\\mathrm{B}^{*}$ and $\\widehat{\\mathcal{T}}_{\\alpha}(t)^*$ denote the adjoint operators of $\\mathrm{B}$ and $\\widehat{\\mathcal{T}}_{\\alpha}(t),$ respectively. It is immediate that the operator $L_0^T$ is linear and bounded for $\\frac{1}{2}<\\alpha<1$. Moreover, the map $\\mathcal{J} : \\mathbb{X} \\rightarrow 2^{\\mathbb{X}^*}$ stands for the duality mapping, which is defined as \n\t\\begin{align*}\n\t\\mathcal{J}[x]&=\\{x^* \\in \\mathbb{X}^* : \\langle x, x^* \\rangle=\\left\\|x\\right\\|_{\\mathbb{X}}^2= \\left\\|x^*\\right\\|_{\\mathbb{X}^*}^2 \\}, \\mbox{ for all } x\\in \\mathbb{X},\n\t\\end{align*}\n\twhere $\\langle \\cdot, \\cdot \\rangle $ represents a duality pairing between $\\mathbb{X}$ and $\\mathbb{X}^*$. Since the space $\\mathbb{X}$ is a reflexive Banach space, then $\\mathbb{X}^*$ becomes strictly convex (see, \\cite{AA}), which implies that the mapping $\\mathcal{J}$ is bijective, strictly monotonic and demicontinuous, that is, $$x_k\\to x\\ \\mbox{ in }\\ \\mathbb{X}\\ \\mbox{ implies }\\ \\mathcal{J}[x_k] \\xrightharpoonup{w} \\mathcal{J}[x]\\ \\mbox{ in } \\ \\mathbb{X}^*\\ \\mbox{ as }\\ k\\to\\infty.$$ Moreover, the inverse mapping $\\mathcal{J}^{-1}:\\mathbb{X}^*\\to\\mathbb{X}$ is also duality mapping.\n\t\\begin{rem}\\label{rem2.8}\n\t\tIf $\\mathbb{X}$ is a separable Hilbert space (identified with its own dual), then the resolvent operator is defined as $\\mathcal{R}(\\lambda,\\Phi_{0}^{T}):=(\\lambda \\mathrm{I}+\\Phi_{0}^{T})^{-1},\\ \\lambda > 0$.\n\t\\end{rem}\n\t\\begin{lem}[Lemma 2.2 \\cite{M}]\\label{lem2.9}\n\t\tFor every $h\\in\\mathbb{X}$ and $\\lambda>0$, the equation\n\t\t\\begin{align}\\label{2.4}\\lambda z_{\\lambda}+\\Phi_{0}^{T}\\mathcal{J}[z_{\\lambda}]=\\lambda h,\\end{align}\n\t\thas a unique solution $z_{\\lambda}(h)=\\lambda(\\lambda \\mathrm{I}+\\Phi_{0}^{T}\\mathcal{J})^{-1}(h)=\\lambda\\mathcal{R}(\\lambda,\\Phi_{0}^{T})(h)$ and \\begin{align}\\label{2.5}\n\t\t\\left\\|z_{\\lambda}(h)\\right\\|_{\\mathbb{X}}=\\left\\|\\mathcal{J}[z_{\\lambda}(h)]\\right\\|_{\\mathbb{X}^*}\\leq\\left\\|h\\right\\|_{\\mathbb{X}}.\n\t\t\\end{align}\n\t\t\\begin{proof}\n\t\t\tSince the non-negative operator $\\Phi_0^T$ is linear and bounded for $\\frac{1}{2}<\\alpha<1$, then proceeding similar way as in the proof of Lemma 2.2 \\cite{M}, one can obtain the results.\n\t\t\\end{proof}\n\t\\end{lem}\n\t\n\t\n\t\n\t\\begin{Def}[\\cite{FUX}]\n\t\tThe system \\eqref{1.1} is said to be \\emph{approximately controllable} on $ J $, for any initial function $\\psi\\in\\mathfrak{B}$, if the closure of reachable set is whole space $\\mathbb{X}$, that is, $\\overline{\\mathfrak{R}(T,\\psi)}=\\mathbb{X},$ where the reachable set is defined as \\begin{align*}\\mathfrak{R}(T,\\psi) = \\{x(T;\\psi,u): u(\\cdot) \\in 
\\mathrm{L}^{2}(J;\\mathbb{U})\\}.\\end{align*}\n\t\\end{Def}\n\tWe impose the following assumptions to investigate the approximate controllability of the system \\eqref{1.1}:\n\t\\begin{Ass}\\label{as2.1} \n\t\t\\begin{enumerate}\n\t\t\t\\item [\\textit{($H0$)}] For every $h\\in\\mathbb{X}$, $z_\\lambda=z_{\\lambda}(h)=\\lambda\\mathcal{R}(\\lambda,\\Phi_{0}^{T})(h) \\rightarrow 0$ as $\\lambda\\downarrow 0$ in strong topology, where $z_{\\lambda}(h)$ is a solution of the equation \\eqref{2.4}.\n\t\t\t\\item[\\textit{(H1)}] The $\\mathrm{C}_0$-semigroup of bounded linear operator $\\mathcal{T}(t)$ is compact for $t>0$ with bound $M\\geq 1,$ such that $\\|\\mathcal{T}(t)\\|_{\\mathcal{L(\\mathbb{X})}}\\leq M$.\n\t\t\t\\item [\\textit{(H2)}] \n\t\t\t\\begin{enumerate} \n\t\t\t\t\\item [(i)] Let $x:(-\\infty, T]\\rightarrow \\mathbb{X}$ be such that $x_0=\\psi$ and $x|_{J}\\in \\mathrm{PC}(J;\\mathbb{X}).$ The function $t\\mapsto f(t, x_{\\rho(t,x_{t})}) $ is strongly measurable on $J$ and the function $f(t,\\cdot): \\mathfrak{B}\\rightarrow \\mathbb{X}$ is continuous for a.e. $t\\in J$. Also, the map $t\\mapsto f(s,x_{t})$ is continuous on $\\mathcal{Q}(\\rho^-) \\cup J,$ for every $s\\in J$. \n\t\t\t\t\\item [(ii)] For each positive integer $r$, there exists a constant $\\alpha_1\\in[0,\\alpha]$ and a function $\\gamma_{r}\\in \\mathrm{L}^{\\frac{1}{\\alpha_1}}(J;\\mathbb{R^{+}})$, such that $$ \\sup_{\\left\\| \\psi\\right\\|_{\\mathfrak{B}}\\leq r} \\left\\|f(t, \\psi)\\right\\|_{\\mathbb{X}}\\leq\\gamma_{r}(t), \\mbox{ for a.e.} \\ t \\in J \\mbox{ and } \\psi\\in \\mathfrak{B},$$ with\n\t\t\t\t$$ \\liminf_{r \\rightarrow \\infty } \\frac {\\left\\|\\gamma_r\\right\\|_{\\mathrm{L}^{\\frac{1}{\\alpha_1}}(J;\\mathbb{R^+})}}{r} = \\beta< \\infty. $$ \n\t\t\t\\end{enumerate}\n\t\t\t\\item [\\textit{(H3)}] The impulses $ h_k:[t_k,\\tau_k]\\times\\mathbb{X}\\to\\mathbb{X}$, for $k=1,\\dots,m$, are such that \n\t\t\t\\begin{itemize}\n\t\t\t\t\\item [(i)]The impulses $h_k(\\cdot,x):[t_k,\\tau_k]\\to\\mathbb{X}$ are continuous for each $x\\in\\mathbb{X}$. \n\t\t\t\t\\item [(ii)] Each $h_k(t,\\cdot):\\mathbb{X}\\to\\mathbb{X}$ is completely continuous, for all $t\\in[t_k,\\tau_k]$.\n\t\t\t\t\\item[(iii)] $\\left\\|h_k(t,x) \\right\\|_{\\mathbb{X}} \\leq l_k, \\mbox{ for each } t\\in [t_k, \\tau_k] \\mbox{ and } x \\in \\mathbb{X},$ where $l_k$'s are positive constants.\n\t\t\t\\end{itemize} \n\t\t\\end{enumerate}\n\t\\end{Ass}\n\tThe following version of the discrete Gronwall-Bellman lemma (cf. \\cite{Ch}) is used in the sequel. \n\t\\begin{lem}\\label{lem2.13}\n\tIf $\\{f_n\\}_{n=0}^{\\infty}, \\{g_n\\}_{n=0}^{\\infty}$ and $\\{w_n\\}_{n=0}^{\\infty}$ are non-negative sequences and $$f_n\\le g_n+\\sum_{k=0}^{n-1}w_kf_k,\\ \\text{ for }\\ n\\geq 0,$$ then $$f_n\\le g_n+\\sum_{k=0}^{n-1}g_kw_k\\exp\\left(\\sum_{j=k+1}^{n-1}g_j\\right),\\text{ for }\\ n\\geq 0.$$\n\t\\end{lem}\n\t\n\n\t\\section{Linear Control Problem} \\label{Linear}\\setcounter{equation}{0}\n\tThe present section is devoted for discussing the approximate controllability of the fractional order linear control problem corresponding to \\eqref{1.1}. 
To establish this result, we first obtain the existence of an optimal control by minimizing the cost functional given by\n\t\\begin{equation}\\label{3.1}\n\t\\mathcal{G}(x,u)=\\left\\|x(T)-x_{T}\\right\\|^{2}_{\\mathbb{X}}+\\lambda\\int^{T}_{0}\\left\\|u(t)\\right\\|^{2}_{\\mathbb{U}}\\mathrm{d}t,\n\t\\end{equation}\n\twhere $x(\\cdot)$ is the solution of the linear control system:\n\t\\begin{equation}\\label{3.2}\n\t\\left\\{\n\t\\begin{aligned}\n\t^CD_{0,t}^{\\alpha}x(t)&= \\mathrm{A}x(t)+\\mathrm{B}u(t),\\ t\\in J,\\\\\n\tx(0)&=\\zeta,\n\t\\end{aligned}\n\t\\right.\n\t\\end{equation}\n\twith the control $u\\in \\mathbb{U}$, $x_{T}\\in \\mathbb{X}$ and $\\lambda >0$. Since $\\mathrm{B}u\\in\\mathrm{L}^1(J;\\mathbb{X})$, the system \\eqref{3.2} has a unique mild solution $x\\in \\mathrm{C}(J;\\mathbb{X}) $ given by (see Corollary 2.2, Chapter 4, \\cite{P} and Lemma 4.68, Chapter 4, \\cite{Y})\n\t\\begin{align*}\n\tx(t)= \\mathcal{T}_{\\alpha}(t)\\zeta+\\int_{0}^{t}(t-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(t-s)\\mathrm{B}u(s)\\mathrm{d}s,\n\t\\end{align*}\n\tfor any $u\\in\\mathscr{U}_{\\mathrm{ad}}=\\mathrm{L}^2(J;\\mathbb{U})$ (admissible control class). Next, we define the \\emph{admissible class} $\\mathscr{A}_{\\mathrm{ad}}$ for the system \\eqref{3.2} as\n\t\\begin{align*}\n\t\\mathscr{A}_{\\mathrm{ad}}:=\\big\\{(x,u) :x\\mbox{ is \\mbox{the unique mild solution} of }\\eqref{3.2} \\mbox{ with the control }u\\in\\mathscr{U}_{\\mathrm{ad}}\\big\\}.\n\t\\end{align*}\n\tFor any given control $u\\in\\mathscr{U}_{\\mathrm{ad}}$, the system \\eqref{3.2} has a unique mild solution, which ensures that the set $\\mathscr{A}_{\\mathrm{ad}}$ is nonempty. By using the definition of the cost functional, we can formulate the optimal control problem as:\n\t\\begin{align}\\label{3.3}\n\t\\min_{ (x,u) \\in \\mathscr{A}_{\\mathrm{ad}}} \\mathcal{G}(x,u).\n\t\\end{align}\n\tIn the next theorem, we show the existence of an optimal pair for the problem \\eqref{3.3}.\n\t\\begin{theorem}[Existence of an optimal pair]\\label{optimal}\n\t\tFor a given $\\zeta\\in\\mathbb{X}$ and fixed $\\frac{1}{2}<\\alpha<1$, there exists a unique optimal pair $(x^0,u^0)\\in\\mathscr{A}_{\\mathrm{ad}}$ for the problem \\eqref{3.3}.\n\t\\end{theorem}\n\t\\begin{proof}\n\t\tLet us assume $$\\mathcal{G} := \\inf \\limits _{u \\in \\mathscr{U}_{\\mathrm{ad}}}\\mathcal{G}(x,u).$$ Since $0\\leq \\mathcal{G} < +\\infty$, there exists a minimizing sequence $\\{u^n\\}_{n=1}^{\\infty} \\in \\mathscr{U}_{\\mathrm{ad}}$ such that $$\\lim_{n\\to\\infty}\\mathcal{G}(x^n,u^n) = \\mathcal{G},$$ where $x^n(\\cdot)$ is the unique mild solution of the system \\eqref{3.2}, corresponding to the control $u^n(\\cdot),$ for each $n\\in\\mathbb{N}$ with $x^n(0)=\\zeta$.\tNote that $x^n(\\cdot)$ satisfies\n\t\t\\begin{align}\\label{3.4}\n\t\tx^n(t)&=\\mathcal{T}_{\\alpha}(t)\\zeta+\\int_{0}^{t}(t-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(t-s)\\mathrm{B}u^n(s)\\mathrm{d}s,\n\t\t\\end{align} \n\t\tfor $t\\in J$. Since $0\\in\\mathscr{U}_{\\mathrm{ad}}$, without loss of generality, we may assume that $\\mathcal{G}(x^n,u^n) \\leq \\mathcal{G}(x,0)$, where $(x,0)\\in\\mathscr{A}_{\\mathrm{ad}}$. 
Using the definition of $\\mathcal{G}(\\cdot,\\cdot)$, we easily get\n\t\t\\begin{align}\\label{35}\n\t\t\\left\\|x^n(T)-x_{T}\\right\\|^{2}_{\\mathbb{X}}+\\lambda\\int^{T}_{0}\\left\\|u^n(t)\\right\\|^{2}_{\\mathbb{U}}\\mathrm{d}t\\leq \\left\\|x(T)-x_{T}\\right\\|^{2}_{\\mathbb{X}}\\leq 2\\left(\\|x(T)\\|_{\\mathbb{X}}^2+\\|x_T\\|_{\\mathbb{X}}^2\\right)<+\\infty.\n\t\t\\end{align}\n\t\tFrom the above estimate, it is clear that, there exists a large $L>0$ (independent of $n$), such that \n\t\t\\begin{align}\\label{3.5}\\int_0^T \\|u^n(t)\\|^2_{\\mathbb{U}} \\mathrm{d} t \\leq L < +\\infty .\\end{align}\n\t\tUsing the expression \\eqref{3.4}, we compute\n\t\t\\begin{align*}\n\t\t\\left\\|x^n(t)\\right\\|_{\\mathbb{X}}&\\le\\left\\|\\mathcal{T}_{\\alpha}(t)\\zeta\\right\\|_{\\mathbb{X}}+\\left\\|\\int_{0}^{t}(t-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(t-s)\\mathrm{B}u^n(s)\\mathrm{d}s\\right\\|_{\\mathbb{X}}\\nonumber\\\\ &\\le\\left\\|\\mathcal{T}_{\\alpha}(t)\\right\\|_{\\mathcal{L}(\\mathbb{X})}\\left\\|\\zeta\\right\\|_{\\mathbb{X}}+\\int_{0}^{t}(t-s)^{\\alpha-1}\\|\\widehat{\\mathcal{T}}_{\\alpha}(t-s)\\|_{\\mathcal{L}(\\mathbb{X})}\\left\\|\\mathrm{B}\\right\\|_{\\mathcal{L}(\\mathbb{U},\\mathbb{X})}\\left\\|u^n(s)\\right\\|_{\\mathbb{U}}\\mathrm{d}s\\nonumber\\\\\n\t\t&\\le M\\left\\|\\zeta\\right\\|_{\\mathbb{X}}+\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\int_{0}^{t}(t-s)^{\\alpha-1}\\left\\|u^n(s)\\right\\|_{\\mathbb{U}}\\mathrm{d}s\n\t\t\\nonumber\\\\&\\le M\\left\\|\\zeta\\right\\|_{\\mathbb{X}}+\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\frac{T^{2\\alpha-1}}{2\\alpha-1}\\left(\\int_{0}^{t}\\left\\|u^n(s)\\right\\|_{\\mathbb{U}}^2\\mathrm{d}s\\right)^{\\frac{1}{2}}\\nonumber\\\\&\\le M\\left\\|\\zeta\\right\\|_{\\mathbb{X}}+\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\frac{T^{2\\alpha-1}L^{\\frac{1}{2}}}{2\\alpha-1}<+\\infty,\n\t\t\\end{align*}\n\t\tfor all $t\\in J$ and $\\frac{1}{2}<\\alpha<1$. Since we know that $\\mathrm{L}^{2}(J;\\mathbb{X})$ is reflexive, then by applying the Banach-Alaoglu theorem, we can find a subsequence $\\{x^{n_k}\\}_{k=1}^{\\infty}$ of $\\{x^n\\}_{n=1}^{\\infty}$ such that \n\t\t\\begin{align}\\label{3.6}\n\t\tx^{n_k}\\xrightharpoonup{w}x^0\\ \\mbox{ in }\\ \\mathrm{L}^{2}(J;\\mathbb{X}), \\ \\mbox{ as }\\ k\\to\\infty. \n\t\t\\end{align}\n\t\tFrom the estimate \\eqref{3.5}, we also infer that the sequence $\\{u^n\\}_{n=1}^{\\infty}$ is uniformly bounded in the space $\\mathrm{L}^2(J;\\mathbb{U})$. Further, by using the Banach-Alaoglu theorem, there exists a subsequence, say, $\\{u^{n_k}\\}_{k=1}^{\\infty}$ of $\\{u^n\\}_{n=1}^{\\infty}$ such that \n\t\t\\begin{align*}\n\t\tu^{n_k}\\xrightharpoonup{w}u^0\\ \\mbox{ in }\\ \\mathrm{L}^2(J;\\mathbb{U})=\\mathscr{U}_{\\mathrm{ad}}, \\ \\mbox{ as }\\ k\\to\\infty. 
\n\t\t\\end{align*}\n\t\tSince $\\mathrm{B}$ is a bounded linear operator from $\\mathbb{U}$ to $\\mathbb{X}$, then we have \n\t\t\\begin{align}\\label{3.7}\n\t\t\\mathrm{B}\tu^{n_k}\\xrightharpoonup{w}\\mathrm{B}u^0\\ \\mbox{ in }\\ \\mathrm{L}^2(J;\\mathbb{X}),\\ \\mbox{ as }\\ k\\to\\infty.\n\t\t\\end{align}\n\t\tMoreover, by using the above convergences together with the compactness of the operator $(\\mathrm{Q}f)(\\cdot) =\\int_{0}^{\\cdot}(\\cdot-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(\\cdot-s) f(s)\\mathrm{d}s:\\mathrm{L}^2(J;\\mathbb{X})\\rightarrow \\mathrm{C}(J;\\mathbb{X}) $ (see Lemma \\ref{lem2.12} below), we obtain\n\t\t\\begin{align*}\n\t\t\\left\\|\\int_{0}^{t}(t-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(t-s)\\mathrm{B}u^{n_k}(s)\\mathrm{d}s-\\int_{0}^{t}(t-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(t-s)\\mathrm{B}u(s)\\mathrm{d}s\\right\\|_{\\mathbb{X}}\\to0,\\ \\mbox{ as }\\ k\\to\\infty,\n\t\t\\end{align*}\n\t\tfor all $t\\in J$. We now estimate \n\t\t\\begin{align}\\label{3.8}\n\t\t\\left\\|x^{n_k}(t)-x^*(t)\\right\\|_{\\mathbb{X}}&=\\left\\|\\int_{0}^{t}(t-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(t-s)\\mathrm{B}u^{n_k}(s)\\mathrm{d}s-\\int_{0}^{t}(t-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(t-s)\\mathrm{B}u^{0}(s)\\mathrm{d}s\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\to 0,\\ \\mbox{ as } \\ k\\to\\infty, \\ \\mbox{for all }\\ t\\in J,\n\t\t\\end{align}\n\t\twhere \n\t\t\\begin{align*}\n\t\tx^*(t)=\\mathcal{T}_{\\alpha}(t)\\zeta+\\int_{0}^{t}(t-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(t-s)\\mathrm{B}u^{0}(s)\\mathrm{d}s,\\ t\\in J.\n\t\t\\end{align*}\n\t\tIt is clear by the above expression, the function $x^*\\in \\mathrm{C}(J;\\mathbb{X})$ is the unique mild solution of the equation \\eqref{3.2} with the control $u^{0}\\in\\mathscr{U}_{\\mathrm{ad}}$. Since the weak limit is unique, then by combining the convergences \\eqref{3.6} and \\eqref{3.8}, we obtain $x^*(t)=x^0(t),$ for all $t\\in J$. Hence, the function $x^0$ is the unique mild solution of the system \\eqref{3.2} with the control $u^{0}\\in\\mathscr{U}_{\\mathrm{ad}}$ and also the whole sequence $x^n\\to x^0\\in\\mathrm{C}(J;\\mathbb{X})$. Consequently, we have $(x^0,u^0)\\in\\mathscr{A}_{\\mathrm{ad}}$.\n\t\t\n\t\tIt remains to show that the functional $\\mathcal{G}(\\cdot,\\cdot)$ attains its minimum at $(x^0,u^0)$, that is, \\emph{$\\mathcal{G}=\\mathcal{G}(x^0,u^0)$}. Since the cost functional $\\mathcal{G}(\\cdot,\\cdot)$ given in \\eqref{3.1} is continuous and convex (see Proposition III.1.6 and III.1.10, \\cite{EI}) on $\\mathrm{L}^2(J;\\mathbb{X}) \\times \\mathrm{L}^2(J;\\mathbb{U})$, it follows that $\\mathcal{G}(\\cdot,\\cdot)$ is weakly lower semi-continuous (Proposition II.4.5, \\cite{EI}). That is, for a sequence \n\t\t$$(x^n,u^n)\\xrightharpoonup{w}(x^0,u^0)\\ \\mbox{ in }\\ \\mathrm{L}^2(J;\\mathbb{X}) \\times \\mathrm{L}^2(J;\\mathbb{U}),\\ \\mbox{ as }\\ n\\to\\infty,$$\n\t\twe have \n\t\t\\begin{align*}\n\t\t\\mathcal{G}(x^0,u^0) \\leq \\liminf \\limits _{n\\rightarrow \\infty} \\mathcal{G}(x^n,u^n).\n\t\t\\end{align*}\n\t\tHence, we obtain \n\t\t\\begin{align*}\\mathcal{G} \\leq \\mathcal{G}(x^0,u^0) \\leq \\liminf \\limits _{n\\rightarrow \\infty} \\mathcal{G}(x^n,u^n)= \\lim \\limits _{n\\rightarrow \\infty} \\mathcal{G}(x^n,u^n) = \\mathcal{G},\\end{align*}\n\t\tand thus $(x^0,u^0)$ is a minimizer of the problem \\eqref{3.3}. 
Note that the cost functional given in \\eqref{3.1} is convex, the constraint \\eqref{3.2} is linear and the class $\\mathscr{U}_{\\mathrm{ad}}=\\mathrm{L}^2(J;\\mathbb{U})$ is convex, then the optimal control obtained above is unique.\n\t\\end{proof}\n\t\n\t\n\tIn the following lemma, we prove the compactness of the operator $(\\mathrm{Q}f)(\\cdot) =\\int_{0}^{\\cdot}(\\cdot-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(\\cdot-s) f(s)\\mathrm{d}s:\\mathrm{L}^2(J;\\mathbb{X})\\rightarrow \\mathrm{C}(J;\\mathbb{X}) ,\\ \\mbox{ for }\\ \\frac{1}{2}<\\alpha<1,$ where we assume that $\\mathbb{X}$ is a general Banach space. The case of $\\alpha=1$ is available in Lemma 3.2, \\cite{JYONG}. \n\t\\begin{lem}\\label{lem2.12}\n\t\tSuppose that Assumptions (H1) holds. Let the operator $\\mathrm{Q}:\\mathrm{L}^{2}(J;\\mathbb{X})\\rightarrow \\mathrm{C}(J;\\mathbb{X})$ be defined as\n\t\t\\begin{align}\n\t\t(\\mathrm{Q}\\psi)(t)= \\int^{t}_{0}(t-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(t-s)\\psi(s)\\mathrm{d}s, \\ t\\in J,\\ \\frac{1}{2}<\\alpha<1.\n\t\t\\end{align}\n\t\tThen the operator $\\mathrm{Q}$ is compact.\n\t\\end{lem}\n\t\\begin{proof}\n\t\tWe prove that $\\mathrm{Q}$ is a compact operator by using the infinite-dimensional version of Arzel\\'a-Ascoli theorem (see, Theorem 3.7, Chapter 2, \\cite{JYONG}). Let a closed and bounded ball $\\mathcal{B}_R$ in $\\mathrm{L}^{2}(J;\\mathbb{X})$ be defined as \n\t\t\\begin{align*}\n\t\t\\mathcal{B}_R=\\left\\{\\psi\\in \\mathrm{L}^{2}(J;\\mathbb{X}):\\left\\|\\psi\\right\\|_{\\mathrm{L}^{2}(J;\\mathbb{X})}\\leq R\\right\\}.\n\t\t\\end{align*}\n\t\tFor $s_1,s_2\\in J$ ($s_10$, which implies that the operator $\\widehat{\\mathcal{T}}_{\\alpha}(t)$ is continuous under the uniform operator topology (see Theorem 3.2, Chapter 2, \\cite{P}). Hence, using the arbitrariness of $\\epsilon$ and continuity of $\\widehat{\\mathcal{T}}_{\\alpha}(t)$ in the uniform operator topology, the right hand side of the expression \\eqref{2.8} converges to zero as $|s_2-s_1| \\rightarrow 0$. Thus, $\\mathrm{Q}\\mathcal{B}_R$ is equicontinuous on $\\mathrm{L}^{2}(J;\\mathbb{X})$. \n\t\t\n\t\tNext, we show that $\\mathrm{V}(t):= \\left\\{(\\mathrm{Q}\\psi)(t):\\psi\\in \\mathcal{B}_R\\right\\},$ for all $t\\in J$ is relatively compact. For $t=0$, it is easy to check that the set $\\mathrm{V}(t)$ is relatively compact in $\\mathbb{X}$. Let us take $ 00$, we define\n\t\t\\begin{align*}\n\t\t(\\mathrm{Q}^{\\eta,\\delta}\\psi)(t)&=\\alpha\\int_{0}^{t-\\eta}\\int_{\\delta}^{\\infty}\\xi(t-s)^{\\alpha-1}\\varphi_{\\alpha}(\\xi)\\mathcal{T}(t^{\\alpha}\\xi)\\psi(s)\\mathrm{d}\\xi\\mathrm{d}s\\nonumber\\\\&=\\mathcal{T}(\\eta^{\\alpha}\\delta)\\alpha\\int_{0}^{t-\\eta}\\int_{\\delta}^{\\infty}\\xi(t-s)^{\\alpha-1}\\varphi_{\\alpha}(\\xi)\\mathcal{T}(t^{\\alpha}\\xi-\\eta^{\\alpha}\\delta)\\psi(s)\\mathrm{d}\\xi\\mathrm{d}s.\n\t\t\\end{align*}\n\t\tSince the operator $\\mathcal{T}(\\cdot)$ is compact, the set $\\mathrm{V}_{\\eta,\\delta}(t)=\\{(\\mathrm{Q}^{\\eta,\\delta}_\\lambda \\psi)(t):\\psi\\in \\mathcal{B}_R\\}$ is relatively compact in $\\mathbb{X}$. Hence, there exist a finite $ x_{i}$'s, for $i=1,\\dots, n $ in $ \\mathbb{X} $ such that \n\t\t\\begin{align*}\n\t\t\\mathrm{V}_{\\eta,\\delta}(t) \\subset \\bigcup_{i=1}^{n}\\mathcal{S}(x_i, \\varepsilon\/2),\n\t\t\\end{align*}\n\t\tfor some $\\varepsilon>0$, where $\\mathcal{S}(x_i, \\varepsilon\/2)$ is an open ball centered at $x_i$ and of radius $\\varepsilon\/2$. 
Let us choose $\\delta>0$ and $\\eta>0$ such that \n\t\t\\begin{align*}\n\t\t\\left\\|(\\mathrm{Q}\\psi)(t)-(\\mathrm{Q}^{\\eta,\\delta}\\psi)(t)\\right\\|_{\\mathbb{X}}&\\le\\alpha\\left\\|\\int_{0}^{t}\\int_{0}^{\\delta}\\xi(t-s)^{\\alpha-1}\\varphi_{\\alpha}(\\xi)\\mathcal{T}(t^{\\alpha}\\xi-\\eta^{\\alpha}\\delta)\\psi(s)\\mathrm{d}\\xi\\mathrm{d}s\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\quad+\\alpha\\left\\|\\int_{t-\\eta}^{t}\\int_{\\delta}^{\\infty}\\xi(t-s)^{\\alpha-1}\\varphi_{\\alpha}(\\xi)\\mathcal{T}(t^{\\alpha}\\xi-\\eta^{\\alpha}\\delta)\\psi(s)\\mathrm{d}\\xi\\mathrm{d}s\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\le\\frac{MR\\alpha t^{2\\alpha-1}}{2\\alpha-1}\\int_{0}^{\\delta}\\xi\\varphi(\\xi)\\mathrm{d}\\xi+\\frac{MR\\alpha}{\\Gamma(1+\\alpha)}\\frac{\\eta^{2\\alpha-1}}{2\\alpha-1}\\le\\frac{\\varepsilon}{2}.\n\t\t\\end{align*}\n\t\tConsequently $$ \\mathrm{V}(t)\\subset \\bigcup_{i=1}^{n}\\mathcal{S}(x_i, \\varepsilon ). $$ Thus, for each $t\\in J$, the set $\\mathrm{V}(t)$ is relatively compact in $ \\mathbb{X}$. Then by invoking the Arzela-Ascoli theorem, we conclude that the operator $\\mathrm{Q}$ is compact.\n\t\\end{proof}\n\tSince $\\mathbb{X}^*$ is strictly convex, the norm $\\|\\cdot\\|_{\\mathbb{X}}$ is Gateaux differentiable (cf. Fact 8.12, \\cite{MFb}). Furthermore, every separable Banach space admits an equivalent Gateaux differentiable norm (cf. Theorem 8.13, \\cite{MFb}). Since $\\mathcal{J}$ is single-valued, the Gateaux derivative of $\\phi(x)=\\frac{1}{2}\\|x\\|_{\\mathbb{X}}^2$ is the duality map, that is, $$\\langle\\partial_x\\phi(x),y\\rangle=\\lim_{\\varepsilon \\to 0}\\frac{\\phi(x+\\varepsilon y)-\\phi(x)}{\\varepsilon}=\\frac{1}{2}\\frac{\\mathrm{d}}{\\mathrm{d}\\varepsilon}\\|x+\\varepsilon y\\|_{\\mathbb{X}}^2\\Big|_{\\varepsilon=0}=\\langle\\mathcal{J}[x],y\\rangle,$$ for $y\\in\\mathbb{X}$, where $\\partial_x\\phi(x)$ denotes the Gateaux derivative of $\\phi$ at $x\\in\\mathbb{X}$. In fact, since $\\mathbb{U}$ is a separable Hilbert space (identified with its own dual), by Theorem 8.24, \\cite{MFb}, we infer that $\\mathbb{U}$ admits a Fr\\'echet differentiable norm. The explicit expression of the optimal control ${u}$ in the feedback form is obtained in the following lemma: \n\t\n\t\\begin{lem}\\label{lem3.1}\n\t\tLet $u$ be the optimal control satisfying \\eqref{3.3} and minimizing the cost functional \\eqref{3.1}. Then $u$ is given by \n\t\t\\begin{align*}\n\t\tu(t)=(T-t)^{\\alpha-1}\\mathrm{B}^*\\widehat{\\mathcal{T}}_{\\alpha}(T-t)^*\\mathcal{J}\\left[\\mathcal{R}(\\lambda,\\Phi_{0}^T)p(x(\\cdot))\\right],\\ t\\in [0, T),\\ \\lambda>0,\\ \\frac{1}{2}<\\alpha<1, \n\t\t\\end{align*}\n\t\twith\n\t\t\\begin{align*}\n\t\tp(x(\\cdot))=x_{T}-\\mathcal{T}_{\\alpha}(T)\\zeta.\n\t\t\\end{align*}\n\t\\end{lem}\n\t\\begin{proof}\n\t\tLet us first consider the functional \n\t\t\\begin{align*}\n\t\t\\mathcal{I}(\\varepsilon)=\\mathcal{G}(x_{u+\\varepsilon w},u+\\varepsilon w),\n\t\t\\end{align*}\n\t\twhere $(x,u)$ is the optimal solution of \\eqref{3.3} and $w\\in \\mathrm{L}^{2}(J;\\mathbb{U})$. Also the function $x_{u+\\varepsilon w}$ is the unique mild solution of \\eqref{3.2} corresponding to the control $u+\\varepsilon w$. Then, it is immediate that \n\t\t\\begin{align*}\n\t\tx_{u+\\varepsilon w}(t)= \\mathcal{T}_{\\alpha}(t)\\zeta+\\int_{0}^{t}(t-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(t-s)\\mathrm{B}(u+\\varepsilon w)(s)\\mathrm{d}s.\n\t\t\\end{align*}\n\t\tIt is clear $\\varepsilon=0$ is the critical point of $\\mathcal{I}(\\varepsilon)$. 
We now evaluate the first variation of the cost functional $\\mathcal{G}$ (defined in \\eqref{3.1}) as\n\t\t\\begin{align*}\n\t\t\\frac{\\mathrm{d}}{\\mathrm{d}\\varepsilon}\\mathcal{I}(\\varepsilon)\\Big|_{\\varepsilon=0}&=\\frac{\\mathrm{d}}{\\mathrm{d}\\varepsilon}\\bigg[\\left\\|x_{u+\\varepsilon w}(T)-x_{T}\\right\\|^{2}_{\\mathbb{X}}+\\lambda\\int^{T}_{0}\\left\\|u(t)+\\varepsilon w(t)\\right\\|^{2}_{\\mathbb{U}}\\mathrm{d}t\\bigg]_{\\varepsilon=0}\\nonumber\\\\\n\t\t&=2\\bigg[\\langle \\mathcal{J}(x_{u+\\varepsilon w}(T)-x_{T}), \\frac{\\mathrm{d}}{\\mathrm{d}\\varepsilon}(x_{u+\\varepsilon w}(T)-x_{T})\\rangle\\nonumber\\\\&\\qquad +2\\lambda\\int^{T}_{0}(u(t)+\\varepsilon w(t),\\frac{\\mathrm{d}}{\\mathrm{d}\\varepsilon}(u(t)+\\varepsilon w(t)))\\mathrm{d}t\\bigg]_{\\varepsilon=0}\\nonumber\\\\\n\t\t&=2\\left\\langle\\mathcal{J}(x(T)-x_T),\\int_0^T(T-t)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(T-t)\\mathrm{B}w(t)\\mathrm{d}t \\right\\rangle+2\\lambda\\int_0^T(u(t),w(t))\\mathrm{d} t. \n\t\t\\end{align*}\n\t\tBy taking the first variation of the cost functional is zero, we deduce that\n\t\t\\begin{align}\\label{3.9}\n\t\t0&=\\left\\langle\\mathcal{J}(x(T)-x_T),\\int_0^T(T-t)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(T-t)\\mathrm{B}w(t)\\mathrm{d}t\\right\\rangle+\\lambda\\int_0^T(u(t),w(t))\\mathrm{d} t\\nonumber\\\\&=\\int_0^T(T-t)^{\\alpha-1}\\left\\langle\\mathcal{J}(x(T)-x_T),\\widehat{\\mathcal{T}}_{\\alpha}(T-t)\\mathrm{B}w(t) \\right\\rangle\\mathrm{d}t+\\lambda\\int_0^T(u(t),w(t))\\mathrm{d} t\\nonumber\\\\&= \\int_0^T\\left((T-t)^{\\alpha-1}\\mathrm{B}^*\\widehat{\\mathcal{T}}_{\\alpha}(T-t)^*\\mathcal{J}(x(T)-x_T)+\\lambda u(t),w(t) \\right)\\mathrm{d}t,\n\t\t\\end{align}\n\t\twhere $(\\cdot,\\cdot)$ is the inner product in the Hilbert space $\\mathbb{U}$. Since $w\\in \\mathrm{L}^{2}(J;\\mathbb{U})$ is an arbitrary element (one can choose $w$ to be $(T-t)^{\\alpha-1}\\mathrm{B}^*\\widehat{\\mathcal{T}}_{\\alpha}(T-t)^*\\mathcal{J}(x(T)-x_T)+\\lambda u(t)$), it follows that the optimal control is given by\n\t\t\\begin{align}\\label{3.10}\n\t\tu(t)&= -\\lambda^{-1}(T-t)^{\\alpha-1}\\mathrm{B}^*\\widehat{\\mathcal{T}}_{\\alpha}(T-t)^*\\mathcal{J}(x(T)-x_T),\n\t\t\\end{align}\n\t\tfor a.e. $t\\in [0,T]$. Since by the relations \\eqref{3.9} and \\eqref{3.10}, it is clear that $u\\in\\mathrm{C}([0,T);\\mathbb{U})$. 
Using the above expression of the control, we find\n\t\n\t\t\\begin{align}\\label{3.11}\n\t\tx(T)&=\\mathcal{T}_{\\alpha}(T)\\zeta-\\int^{T}_{0}\\lambda^{-1}(T-s)^{2(\\alpha-1)}\\widehat{\\mathcal{T}}_{\\alpha}(T-s)\\mathrm{B}\\mathrm{B}^*\\widehat{\\mathcal{T}}_{\\alpha}(T-s)^*\\mathcal{J}(x(T)-x_T)\\mathrm{d}s\\nonumber\\\\\n\t\t&= \\mathcal{T}_{\\alpha}(T)\\zeta-\\lambda^{-1}\\Phi_{0}^T\\mathcal{J}\\left[x(T)-x_{T}\\right].\n\t\t\\end{align}\n\t\tLet us assume\n\t\t\\begin{align}\\label{3.12}\n\t\tp(x(\\cdot)):=x_{T}-\\mathcal{T}_{\\alpha}(T)\\zeta.\n\t\t\\end{align}\n\t\tCombining \\eqref{3.11} and \\eqref{3.12}, we have\n\t\t\\begin{align}\\label{3.13}\n\t\tx(T)-x_{T}&=-p(x(\\cdot))-\\lambda^{-1}\\Phi_{0}^T\\mathcal{J}\\left[x(T)-x_{T}\\right].\n\t\t\\end{align}\n\t\tFrom \\eqref{3.13}, one can easily deduce that \n\t\t\\begin{align}\\label{3.15}\n\t\tx(T)-x_T=-\\lambda\\mathrm{I}(\\lambda\\mathrm{I}+\\Phi_0^T\\mathcal{J})^{-1}p(x(\\cdot))=-\\lambda\\mathcal{R}(\\lambda,\\Phi_0^T)p(x(\\cdot)).\n\t\t\\end{align}\n\t\tFinally, from \\eqref{3.10}, we get the expression for optimal control as\n\t\t\\begin{align*}\n\t\tu(t)=(T-t)^{\\alpha-1}\\mathrm{B}^*\\widehat{\\mathcal{T}}_{\\alpha}(T-t)^*\\mathcal{J}\\left[\\mathcal{R}(\\lambda,\\Phi_{0}^T)p(x(\\cdot))\\right],\\ \\mbox{ for } \\ t\\in [0,T),\n\t\t\\end{align*}\n\t\twhich completes the proof. \n\t\\end{proof}\n\tNext, we examine the approximate controllability of the linear control system \\eqref{3.2} through the following lemma. \n\t\\begin{lem}\\label{lem4.2}\n\t\tThe linear control system \\eqref{3.2} is approximately controllable on $J$ if and only if Assumption (H0) holds. \n\t\\end{lem}\n\tA proof of the above lemma can be obtained by proceeding similarly as in the proof of Theorem 3.2, \\cite{SM}.\n\t\\begin{rem}\\label{rem3.4}\n\t\tIf Assumption (\\textit{H0}) holds, then by Theorem 2.3, \\cite{M}, we know that the operator $\\Phi_{0}^{T}$ is positive and vice versa. The positivity of $\\Phi_{0}^{T}$ is equivalent to $$ \\langle x^*, \\Phi_{0}^{T}x^*\\rangle=0\\Rightarrow x^*=0.$$ We know that \n\t\t\\begin{align}\n\t\t\\langle x^*, \\Phi_{0}^{T}x^*\\rangle =\\int_0^T\\left\\|(T-t)^{\\alpha-1}\\mathrm{B}^*\\widehat{\\mathcal{T}}_\\alpha(T-t)^*x^*\\right\\|_{\\mathbb{X}^*}^2\\mathrm{d}t.\n\t\t\\end{align}\n\t\tBy the above fact and Lemma \\ref{lem4.2}, we infer that the approximate controllability of the linear system \\eqref{3.2} is equivalent to the condition $$\\mathrm{B}^*\\widehat{\\mathcal{T}}_\\alpha(T-t)^*x^*=0,\\ 0\\le t0,\\ \\frac{1}{2}<\\alpha<1, \n\t\t\\end{align*}\n\t\twith\n\t\t$\tp(x(\\cdot))=x_{T}-\\mathcal{T}_{\\alpha}(T)\\zeta,$ and \n\t\t\\begin{equation}\\label{3.20}\n\t\t\\Phi_{0}^{T}=\\int^{T}_{0}(T-t)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(T-t)\\mathrm{B}\\mathrm{B}^*\\widehat{\\mathcal{T}}_{\\alpha}(T-t)^*\\mathrm{d}t. \n\t\t\\end{equation}\n\t\\end{rem}\n\t\n\t\\section{Approximate Controllability of the Semilinear Impulsive System} \\label{semilinear}\\setcounter{equation}{0}\n\tThe purpose of this section is to investigate the approximate controllability of the fractional order semilinear impulsive system \\eqref{1.1}. 
In order to acquire sufficient conditions on approximate controllability, we first show that for $\\lambda>0$ and $x_T\\in\\mathbb{X}$, there exists a mild solution of the system \\eqref{1.1} with the control function defined as \n\t\\begin{align}\\label{C}\n\tu^\\alpha_{\\lambda}(t)&=\\sum_{k=0}^{m}u^\\alpha_{k,\\lambda}(t)\\chi_{[\\tau_k, t_{k+1})}(t), \\ t\\in J,\\ \\frac{1}{2}<\\alpha<1,\n\t\\end{align}\n\twhere \n\t\\begin{align*}\n\tu^\\alpha_{k,\\lambda}(t)&=(t_{k+1}-t)^{\\alpha-1}\\mathrm{B}^*\\widehat{\\mathcal{T}}_{\\alpha}(t_{k+1}-t)^*\\mathcal{J}\\left[\\mathcal{R}(\\lambda,\\Phi_{\\tau_k}^{t_{k+1}})p_k(x(\\cdot))\\right],\n\t\\end{align*}\n\tfor $t\\in [\\tau_k, t_{k+1}),k=0,1,\\ldots,m$, with\n\t\\begin{align*}\n\tp_0(x(\\cdot))&=\\zeta_{0}-\\mathcal{T}_{\\alpha}(t_1)\\psi(0)-\\int^{t_1}_{0}(t_1-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(t_1-s)f(s,\\tilde{x}_{\\rho(s,\\tilde{x}_s)})\\mathrm{d}s,\\nonumber\\\\\n\tp_k(x(\\cdot))&=\\zeta_{k}-\\mathcal{T}_{\\alpha}(t_{k+1}-\\tau_k)h_k(\\tau_k,\\tilde{x}(t_k^-))\\\\&\\quad+\\int_{0}^{\\tau_k}(\\tau_k-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(\\tau_k-s)\\left[f(s,\\tilde{x}_{\\rho(s,\\tilde{x}_s)})+\\mathrm{B}\\sum_{j=0}^{k-1}u^\\alpha_{j,\\lambda}(s)\\chi_{[\\tau_j, t_{j+1})}(s)\\right]\\mathrm{d}s \\\\&\\quad-\\int_{0}^{t_{k+1}}(t_{k+1}-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(t_{k+1}-s)f(s,\\tilde{x}_{\\rho(s,\\tilde{x}_s)})\\mathrm{d}s\t\\\\&\\quad-\\int_{0}^{\\tau_k}(t_{k+1}-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(t_{k+1}-s)\\mathrm{B}\\sum_{j=0}^{k-1}u^\\alpha_{j,\\lambda}(s)\\chi_{[\\tau_j, t_{j+1})}(s)\\mathrm{d}s, \\ k=1,\\ldots,m,\n\n\t\\end{align*} \n\tand $\\tilde{x}:(-\\infty,T]\\rightarrow\\mathbb{X}$ such that $\\tilde{x}(t)=\\psi(t), \\ t\\in(-\\infty,0]\\ \\tilde{x}(t)=x(t),\\ t\\in J=[0,T],$ and $\\zeta_{k}\\in \\mathbb{X}$ for $k=0,1,\\ldots,m$.\n\t\\begin{rem}\n\t\tSince the operator $\\Phi_{\\tau_k}^{t_{k+1}},$ for each $k=0,\\ldots,m,$ is non-negative, linear and bounded for $\\frac{1}{2}<\\alpha<1$, Lemma \\ref{lem2.9} is also valid for each $\\Phi_{\\tau_k}^{t_{k+1}},$ for $k=0,\\ldots,m$.\n\t\\end{rem}\n\tThe following theorem provides the existence of mild solution of the system \\eqref{1.1} with the control \\eqref{C}. \n\t\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\t\\begin{theorem}\\label{thm4.3}\n\t\tLet Assumptions (H1)-(H3) hold true. Then for every $ \\lambda>0 $ and fixed $\\zeta_{k}\\in\\mathbb{X},$ for $k=0,1,\\ldots,m$, the system \\eqref{1.1} with the control \\eqref{C} has at least one mild solution on $J$, provided \n\t\t\\begin{align}\\label{cnd}\n\t\\frac{MH_{2}\\alpha\\beta}{\\Gamma(1+\\alpha)}\\frac{2T^{\\alpha-\\alpha_1}}{\\mu^{1-\\alpha_1}}\\left\\{1+\\frac{(m+1)(m+2)\\tilde{R}}{2}+\\frac{m(m+1)\\tilde{R}^2}{2}\\sum_{j=0}^{m-1}e^{\\frac{(m+j)(m-j-1)\\tilde{R}}{2}}\\right\\}<1,\n\t\t\\end{align}\n\t\twhere $\\tilde{R}=\\left(\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\right)^{2}\\frac{2T^{2\\alpha-1}}{\\lambda(2\\alpha-1)}$ and $\\mu=\\frac{\\alpha-\\alpha_1}{1-\\alpha_1}$.\n\t\\end{theorem}\n\t\\begin{proof}\n\t\tLet us take a set $\\mathrm{E}:=\\{x\\in\\mathrm{PC}(J;\\mathbb{X}) : x(0)=\\psi(0)\\}$ with the norm $\\left\\|\\cdot\\right\\|_{\\mathrm{PC}(J;\\mathbb{X})}$. 
For each $r>0$, we consider a set $\\mathrm{E}_{r}=\\{x\\in\\mathrm{E} : \\left\\|x\\right\\|_{\\mathrm{PC}(J;\\mathbb{X})}\\le r\\}$.\n\t\t\n\t\tFor $\\lambda>0$, let us define an operator $F_{\\lambda}:\\mathrm{E}\\to\\mathrm{E}$ such that\n\t\t\\begin{eqnarray}\\label{2}\n\t\t(F_{\\lambda}x)(t)=\\left\\{\n\t\t\\begin{aligned}\n\t\t&\\mathcal{T}_{\\alpha}(t)\\psi(0)+\\int_{0}^{t}(t-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(t-s)\\left[\\mathrm{B}u^{\\alpha}_{\\lambda}(s)+f(s,\\tilde{x}_{\\rho(s, \\tilde{x}_s)})\\right]\\mathrm{d}s,\\ t\\in[0, t_1],\\\\\n\t\t&\th_k(t, \\tilde{x}(t_k^-)),\\ t\\in(t_k, \\tau_k],\\ k=1,\\ldots,m,\\\\\n\t\t&\\mathcal{T}_{\\alpha}(t-\\tau_k)h_k(\\tau_k,\\tilde{x}(t_k^-))-\\int_{0}^{\\tau_k}(\\tau_k-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(\\tau_k-s)\\left[\\mathrm{B}u^{\\alpha}_{\\lambda}(s)+f(s,\\tilde{x}_{\\rho(s, \\tilde{x}_s)})\\right]\\mathrm{d}s\\\\&\\quad+\\int_{0}^{t}(t-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(t-s) \\left[\\mathrm{B}u^{\\alpha}_{\\lambda}(s)+f(s,\\tilde{x}_{\n\t\t\t\\rho(s, \\tilde{x}_s)})\\right]\\mathrm{d}s,\\ t\\in(\\tau_k,t_{k+1}],\\ k=1,\\ldots,m,\n\t\t\\end{aligned}\n\t\t\\right.\n\t\t\\end{eqnarray}\n\t\twhere $u^{\\alpha}_{\\lambda}$ is given in \\eqref{C}. From the definition of $ F_{\\lambda}$, we infer that the system $\\eqref{1.1}$ has a mild solution, if the operator $ F_{\\lambda}$ has a fixed point. We divide the proof of the fact that the operator $F_{\\lambda}$ has a fixed point in the following steps. \n\t\t\\vskip 0.1in \n\t\t\\noindent\\textbf{Step (1): } \\emph{$ F_{\\lambda}(\\mathrm{E}_r)\\subset \\mathrm{E}_r,$ for some $ r $}. On the contrary, let us suppose that our claim is not true. Then for any $\\lambda>0$ and for all $r>0$, there exists $x^r\\in \\mathrm{E}_r,$ such that $\\left\\|(F_{\\lambda}x^r)(t)\\right\\|_\\mathbb{X}>r,$ for some $t\\in J$, where $t$ may depend upon $r$. 
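For the reader's convenience, we record two elementary bounds that are used repeatedly in the estimates below (both are valid because $\\frac{1}{2}<\\alpha<1$ and $\\alpha_1\\in[0,\\frac{1}{2})$, so that $2\\alpha-1>0$ and $\\mu=\\frac{\\alpha-\\alpha_1}{1-\\alpha_1}>0$): for any $0\\le a<b\\le T$,\n\t\t\\begin{align*}\n\t\t\\int_{a}^{b}(b-s)^{2(\\alpha-1)}\\mathrm{d}s=\\frac{(b-a)^{2\\alpha-1}}{2\\alpha-1}\\le\\frac{T^{2\\alpha-1}}{2\\alpha-1}\\quad\\mbox{ and }\\quad\\left(\\int_{a}^{b}(b-s)^{\\frac{\\alpha-1}{1-\\alpha_1}}\\mathrm{d}s\\right)^{1-\\alpha_1}=\\frac{(b-a)^{\\alpha-\\alpha_1}}{\\mu^{1-\\alpha_1}}\\le\\frac{T^{\\alpha-\\alpha_1}}{\\mu^{1-\\alpha_1}}.\n\t\t\\end{align*}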
\n\t\tFirst, by using Assumption \\ref{as2.1}, we estimate \n\t\t\\begin{align}\\label{4.4}\n\t\t\\left\\|p_0(x(\\cdot))\\right\\|_{\\mathbb{X}}&\\le\\left\\|\\zeta_0\\right\\|_{\\mathbb{X}}+\\left\\|\\mathcal{T}_{\\alpha}(t_1)\\psi(0)\\right\\|_{\\mathbb{X}}+\\int_{0}^{t_1}(t_1-s)^{\\alpha-1}\\left\\|\\widehat{\\mathcal{T}}_{\\alpha}(t_1-s)f(s,\\tilde{x}_{\\rho(s,\\tilde{x}_s)})\\right\\|_{\\mathbb{X}}\\mathrm{d}s\\nonumber\\\\&\\le\\left\\|\\zeta_0\\right\\|_{\\mathbb{X}}+M\\left\\|\\psi(0)\\right\\|_{\\mathbb{X}}+\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\int_{0}^{t_1}(t_1-s)^{\\alpha-1}\\gamma_{r'}(s)\\mathrm{d}s\\nonumber\\\\&\\le\\left\\|\\zeta_0\\right\\|_{\\mathbb{X}}+M\\left\\|\\psi(0)\\right\\|_{\\mathbb{X}}+\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\left(\\int_{0}^{t_1}(t_1-s)^{\\frac{\\alpha-1}{1-\\alpha_1}}\\mathrm{d}s\\right)^{1-\\alpha_1}\\left(\\int_{0}^{t_1}(\\gamma_{r'}(s))^{\\frac{1}{\\alpha_1}}\\mathrm{d}s\\right)^{\\alpha_1}\\nonumber\\\\&\\le\\left\\|\\zeta_0\\right\\|_{\\mathbb{X}}+M\\left\\|\\psi(0)\\right\\|_{\\mathbb{X}}+\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\frac{t_{1}^{\\alpha-\\alpha_1}}{\\mu^{1-\\alpha_1}}\\left\\|\\gamma_{r'}\\right\\|_{\\mathcal{L}^{\\frac{1}{\\alpha_1}}([0,t_1];\\mathbb{R^{+}})}\\nonumber\\\\&\\le\\left\\|\\zeta_0\\right\\|_{\\mathbb{X}}+M\\left\\|\\psi(0)\\right\\|_{\\mathbb{X}}+\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\frac{T^{\\alpha-\\alpha_1}}{\\mu^{1-\\alpha_1}}\\left\\|\\gamma_{r'}\\right\\|_{\\mathcal{L}^{\\frac{1}{\\alpha_1}}(J;\\mathbb{R^{+}})}\\nonumber\\\\&\\le\\left\\|\\zeta_0\\right\\|_{\\mathbb{X}}+M\\left\\|\\psi(0)\\right\\|_{\\mathbb{X}}+\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\frac{2T^{\\alpha-\\alpha_1}}{\\mu^{1-\\alpha_1}}\\left\\|\\gamma_{r'}\\right\\|_{\\mathcal{L}^{\\frac{1}{\\alpha_1}}(J;\\mathbb{R^{+}})}=N_0,\n\t\t\\end{align} \n\t\twhere $\\mu=\\frac{\\alpha-\\alpha_1}{1-\\alpha_1}$ and $r'=H_{1}\\left\\|\\psi\\right\\|_{\\mathfrak{B}}+H_{2}r$.\tFurther, using Assumption \\ref{as2.1}, we estimate \n\t\t\\begin{align*}\n\t\t&\\left\\|p_k(x(\\cdot))\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\le\\left\\|\\zeta_k\\right\\|_{\\mathbb{X}}+\\left\\|\\mathcal{T}_{\\alpha}(t-\\tau_k)h_k(\\tau_k,\\tilde{x}(t_k^-))\\right\\|_{\\mathbb{X}}+\\int_{0}^{\\tau_k}(\\tau_k-s)^{\\alpha-1}\\left\\|\\widehat{\\mathcal{T}}_{\\alpha}(\\tau_k-s)f(s,\\tilde{x}_{\\rho(s,\\tilde{x}_s)})\\right\\|_{\\mathbb{X}}\\mathrm{d}s\\nonumber\\\\&\\quad+\\int_{0}^{t_{k+1}}(t_{k+1}-s)^{\\alpha-1}\\left\\|\\widehat{\\mathcal{T}}_{\\alpha}(t_{k+1}-s)f(s,\\tilde{x}_{\\rho(s,\\tilde{x}_s)})\\right\\|_{\\mathbb{X}}\\mathrm{d}s\\nonumber\\\\&\\quad+\\int_{0}^{\\tau_k}(\\tau_k-s)^{\\alpha-1}\\left\\|\\widehat{\\mathcal{T}}_{\\alpha}(\\tau_k-s)\\mathrm{B}\\sum_{j=0}^{k-1}u^\\alpha_{j,\\lambda}(s)\\chi_{[\\tau_j,t_{j+1})}(s)\\right\\|_{\\mathbb{X}}\\mathrm{d}s\\nonumber\\\\&\\quad+\\int_{0}^{\\tau_k}(t_{k+1}-s)^{\\alpha-1}\\left\\|\\widehat{\\mathcal{T}}_{\\alpha}(t_{k+1}-s)\\mathrm{B}\\sum_{j=0}^{k-1}u^\\alpha_{j,\\lambda}(s)\\chi_{[\\tau_j, 
t_{j+1})}(s)\\right\\|_{\\mathbb{X}}\\mathrm{d}s\\nonumber\\\\&\\le\\left\\|\\zeta_k\\right\\|_{\\mathbb{X}}+Ml_k+\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\frac{\\tau_k^{\\alpha-\\alpha_1}}{\\mu^{1-\\alpha_1}}\\left\\|\\gamma_{r'}\\right\\|_{\\mathcal{L}^{\\frac{1}{\\alpha_1}}([0,\\tau_k];\\mathbb{R^{+}})}+\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\frac{t_{k+1}^{\\alpha-\\alpha_1}}{\\mu^{1-\\alpha_1}}\\left\\|\\gamma_{r'}\\right\\|_{\\mathcal{L}^{\\frac{1}{\\alpha_1}}([0,t_{k+1}];\\mathbb{R^{+}})}\\nonumber\\\\&\\quad+\\frac{1}{\\lambda}\\left(\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\right)^{2}\\sum_{j=0}^{k-1}\\left\\|p_j(x(\\cdot))\\right\\|_{\\mathbb{X}}\\int_{\\tau_j}^{t_{j+1}}(\\tau_k-s)^{\\alpha-1}(t_{j+1}-s)^{\\alpha-1}\\mathrm{d}s\\nonumber\\\\&\\quad+\\frac{1}{\\lambda}\\left(\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\right)^{2}\\sum_{j=0}^{k-1}\\left\\|p_j(x(\\cdot))\\right\\|_{\\mathbb{X}}\\int_{\\tau_j}^{t_{j+1}}(t_{k+1}-s)^{\\alpha-1}(t_{j+1}-s)^{\\alpha-1}\\mathrm{d}s\\nonumber\\\\&\\le\\left\\|\\zeta_k\\right\\|_{\\mathbb{X}}+Ml_k+\\frac{2M\\alpha}{\\Gamma(1+\\alpha)}\\frac{T^{\\alpha-\\alpha_1}}{\\mu^{1-\\alpha_1}}\\left\\|\\gamma_{r'}\\right\\|_{\\mathcal{L}^{\\frac{1}{\\alpha_1}}(J;\\mathbb{R^{+}})}\\nonumber\\\\&\\quad+\\frac{2}{\\lambda}\\left(\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\right)^{2}\\sum_{j=0}^{k-1}\\left\\|p_j(x(\\cdot))\\right\\|_{\\mathbb{X}}\\int_{\\tau_j}^{t_{j+1}}(t_{j+1}-s)^{2(\\alpha-1)}\\mathrm{d}s\\nonumber\\\\&\\le N_k+\\frac{2}{\\lambda}\\left(\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\right)^{2}\\sum_{j=0}^{k-1}\\left\\|p_j(x(\\cdot))\\right\\|_{\\mathbb{X}}\\frac{(t_{j+1}-\\tau_j)^{2\\alpha-1}}{2\\alpha-1}\\nonumber\\\\&\\le N_k+\\frac{2T^{2\\alpha-1}}{\\lambda(2\\alpha-1)}\\left(\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\right)^{2}\\sum_{j=0}^{k-1}\\left\\|p_j(x(\\cdot))\\right\\|_{\\mathbb{X}}\n\t\t\\nonumber\\\\&= N_k+\\tilde{R}\\sum_{j=0}^{k-1}\\left\\|p_j(x(\\cdot))\\right\\|_{\\mathbb{X}},\n\t\t\\end{align*}\n\t\twhere $\\tilde{R}=\\left(\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\right)^{2}\\frac{2T^{2\\alpha-1}}{\\lambda(2\\alpha-1)}$ and $N_k=\\left\\|\\zeta_k\\right\\|_{\\mathbb{X}}+Ml_k+\\frac{2T^{\\alpha-\\alpha_1}}{\\mu^{1-\\alpha_1}}\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\left\\|\\gamma_{r'}\\right\\|_{\\mathrm{L}^{\\frac{1}{\\alpha_1}}(J;\\mathbb{R^+})}$ for $k=1,\\ldots,m.$ Applying the discrete Gronwall-Bellman lemma (Lemma \\ref{lem2.13}), we obtain \n\t\t\\begin{align}\\label{4.5}\n\t\t\\left\\|p_k(x(\\cdot))\\right\\|_{\\mathbb{X}}\\le&N_k+\\tilde{R}\\sum_{j=0}^{k-1}N_je^{\\frac{(k+j)(k-j-1)\\tilde{R}}{2}}=C_k, \\ \\mbox{for}\\ k=1,\\ldots,m.\n\t\t\\end{align}\n\t\tTaking $t\\in[0,t_1]$ and using the relation \\eqref{2.5}, Lemma \\ref{lem2.5} and Assumption \\ref{as2.1} \\textit{(H1)}-\\textit{(H3)}, we compute\n\t\t\\begin{align}\\label{4.21}\n\t\tr&<\\left\\|(F_{\\lambda}x^r)(t)\\right\\|_\\mathbb{X}\\nonumber\\\\&=\\left\\|\\mathcal{T}_{\\alpha}(t)\\psi(0)+\\int_{0}^{t}(t-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(t-s)\\left[\\mathrm{B}u^{\\alpha}_{\\lambda}(s)+f(s,\\tilde{x}_{\\rho(s, \\tilde{x}_s)})\\right]\\mathrm{d}s\\right\\|_{\\mathbb{X}} 
\\nonumber\\\\&\\le\\left\\|\\mathcal{T}_{\\alpha}(t)\\psi(0)\\right\\|_{\\mathbb{X}}+\\int_0^t(t-s)^{\\alpha-1}\\left\\|\\widehat{\\mathcal{T}}_{\\alpha}(t-s)\\mathrm{B}u^{\\alpha}_{\\lambda}(s)\\right\\|_{\\mathbb{X}}\\mathrm{d}s\\nonumber\\\\&\\quad+\\int_0^t(t-s)^{\\alpha-1}\\left\\|\\widehat{\\mathcal{T}}_{\\alpha}(t-s)f(s,\\tilde{x}_{\\rho(s,\\tilde{x}_s)})\\right\\|_{\\mathbb{X}}\\mathrm{d}s\\nonumber\\\\&\\le M\\left\\|\\psi(0)\\right\\|_{\\mathbb{X}}+\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\!\\!\\int_0^t\\!\\!(t-s)^{\\alpha-1}\\left\\|u^{\\alpha}_{\\lambda}(s)\\right\\|_{\\mathbb{U}}\\mathrm{d}s+\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\!\\!\\int_0^t\\!\\!(t-s)^{\\alpha-1}\\left\\|f(s,\\tilde{x}_{\\rho(s,\\tilde{x}_s)})\\right\\|_{\\mathbb{X}}\\mathrm{d}s\\nonumber\\\\&\\le M\\left\\|\\psi(0)\\right\\|_{\\mathbb{X}}+\\frac{1}{\\lambda}\\left(\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\right)^{2}\\left\\|p_0(x(\\cdot))\\right\\|_{\\mathbb{X}}\\int_{0}^{t}(t-s)^{\\alpha-1}(t_1-s)^{\\alpha-1}\\mathrm{d}s\\nonumber\\\\&\\quad+\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\int_0^t(t-s)^{\\alpha-1}\\gamma_{r'}(s)\\mathrm{d}s\\nonumber\\\\&\\le M\\left\\|\\psi(0)\\right\\|_{\\mathbb{X}}+\\left(\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\right)^{2}\\frac{t^{2\\alpha-1}}{\\lambda(2\\alpha-1)}\\left\\|p_0(x(\\cdot))\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\quad+\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\left(\\int_0^t(t-s)^{\\frac{\\alpha-1}{1-\\alpha_1}}\\mathrm{d}s\\right)^{1-\\alpha_1}\\left\\|\\gamma_{r'}\\right\\|_{\\mathrm{L}^{\\frac{1}{\\alpha_1}}([0,t];\\mathbb{R^{+}})}\\nonumber\\\\&\\le M\\left\\|\\psi(0)\\right\\|_{\\mathbb{X}}+\\left(\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\right)^{2}\\frac{T^{2\\alpha-1}}{\\lambda(2\\alpha-1)}\\left\\|p_0(x(\\cdot))\\right\\|_{\\mathbb{X}}+\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\frac{T^{\\alpha-\\alpha_1}}{\\mu^{1-\\alpha_1}}\\left\\|\\gamma_{r'}\\right\\|_{\\mathrm{L}^{\\frac{1}{\\alpha_1}}(J;\\mathbb{R^+})}\\nonumber\\\\&\\le M\\left\\|\\psi(0)\\right\\|_{\\mathbb{X}}+\\left(\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\right)^{2}\\frac{2T^{2\\alpha-1}}{\\lambda(2\\alpha-1)}\\sum_{j=0}^{m}\\left\\|p_j(x(\\cdot))\\right\\|_{\\mathbb{X}}+\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\frac{2T^{\\alpha-\\alpha_1}}{\\mu^{1-\\alpha_1}}\\left\\|\\gamma_{r'}\\right\\|_{\\mathrm{L}^{\\frac{1}{\\alpha_1}}(J;\\mathbb{R^+})}\\nonumber\\\\&\\le M\\left\\|\\psi(0)\\right\\|_{\\mathbb{X}}+\\tilde{R}\\sum_{j=0}^{m}C_j+\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\frac{2T^{\\alpha-\\alpha_1}}{\\mu^{1-\\alpha_1}}\\left\\|\\gamma_{r'}\\right\\|_{\\mathrm{L}^{\\frac{1}{\\alpha_1}}(J;\\mathbb{R^+})},\n\t\t\\end{align}\n\t\twhere $C_0=N_0$. 
For $t\\in(t_k,\\tau_k],\\ k=1,\\ldots,m$, we obtain \n\t\t\\begin{align}\\label{4.22}\n\t\tr<\\left\\|(F_{\\lambda}x^r)(t)\\right\\|_\\mathbb{X}&\\le\\left\\|h_k(t, \\tilde{x}(t_k^-))\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\le l_k\\le l_k+\\tilde{R}\\sum_{j=0}^{m}C_j+\\frac{2T^{\\alpha-\\alpha_1}}{\\mu^{1-\\alpha_1}}\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\left\\|\\gamma_{r'}\\right\\|_{\\mathrm{L}^{\\frac{1}{\\alpha_1}}(J;\\mathbb{R^+})}.\n\t\t\\end{align}\n\t\tTaking $t\\in(\\tau_k,t_{k+1}], \\ k=1,\\dots,m$, we evaluate\n\t\t\\begin{align}\\label{4.23}\n\t\tr&<\\left\\|(F_{\\lambda}x^r)(t)\\right\\|_\\mathbb{X}\\nonumber\\\\&=\\bigg\\|\\mathcal{T}_{\\alpha}(t-\\tau_k)h_k(\\tau_k,\\tilde{x}(t_k^-))-\\int_{0}^{\\tau_k}(\\tau_k-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(\\tau_k-s)\\left[\\mathrm{B}u^{\\alpha}_{\\lambda}(s)+f(s,\\tilde{x}_{\\rho(s,\\tilde{x}_s)})\\right]\\mathrm{d}s\\nonumber\\\\&\\quad+\\int_{0}^{t}(t-s)^{\\alpha-1}\\widehat {\\mathcal{T}}_{\\alpha}(t-s)\\left[\\mathrm{B}u^{\\alpha}_{\\lambda}(s)+f(s,\\tilde{x}_{\\rho(s,\\tilde{x}_s)})\\right]\\mathrm{d}s\\bigg\\|_{\\mathbb{X}}\\nonumber\\\\&\\le\\left\\|\\mathcal{T}_{\\alpha} (t)h_k(\\tau_k, \\tilde{x}(t_k^-))\\right\\|_{\\mathbb{X}}+\\int_{0}^{\\tau_k}(\\tau_k-s)^{\\alpha-1}\\left\\|\\widehat{\\mathcal{T}}_{\\alpha}(\\tau_k-s)\\mathrm{B}u^{\\alpha}_{\\lambda}(s)\n\t\t\\right\\|_{\\mathbb{X}}\\mathrm{d}s\\nonumber\\\\&\\quad+\\int_{0}^{\\tau_k}(\\tau_k-s)^{\\alpha-1}\\left\\|\\widehat{\\mathcal{T}}_{\\alpha}(\\tau_k-s)f(s,\\tilde{x}_{\\rho(s,\\tilde{x}_s)})\\right\\|_{\\mathbb{X}}\\mathrm{d}s+\\int_{0}^t(t-s)^{\\alpha-1}\\left\\|\\widehat{\\mathcal{T}}_{\\alpha}(t-s)\\mathrm{B}u^{\\alpha}_{\\lambda}(s)\\right\\|_{\\mathbb{X}}\\mathrm{d}s\\nonumber\\\\&\\quad +\\int_{0}^t(t-s)^{\\alpha-1}\\left\\|\\widehat{\\mathcal{T}}_{\\alpha}(t-s)f(s,\\tilde{x}_{\\rho(s, \\tilde{x}_s)})\\right\\|_{\\mathbb{X}}\\mathrm{d}s\n\t\t\\nonumber\\\\&\\le Ml_k+\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\int_{0}^{\\tau_k}(\\tau_k-s)^{\\alpha-1}\\left\\|u^{\\alpha}_{\\lambda}(s)\\right\\|_{\\mathbb{U}}\\mathrm{d}s+\\frac{M\\alpha}\n\t\t{\\Gamma(1+\\alpha)}\\int_{0}^{\\tau_k}(\\tau_k-s)^{\\alpha-1}\\left\\|f(s,\\tilde{x}_{\\rho(s, \\tilde{x}_s)})\\right\\|_{\\mathbb{X}}\\mathrm{d}s\\nonumber\\\\&\\quad+\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\int_{0}^{t}(t-s)^{\\alpha-1}\\left\\|u^{\\alpha}_{\\lambda}(s)\\right\\|_{\\mathbb{U}}\\mathrm{d}s+\\frac{M\\alpha}\n\t\t{\\Gamma(1+\\alpha)}\\int_{0}^{t}(t-s)^{\\alpha-1}\\left\\|f(s,\\tilde{x}_{\\rho(s, \\tilde{x}_s)})\\right\\|_{\\mathbb{X}}\\mathrm{d}s\\nonumber\\\\&\\le 
Ml_k+\\frac{1}{\\lambda}\\left(\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\right)^{2}\\sum_{j=0}^{k-1}\\left\\|p_j(x(\\cdot))\\right\\|_{\\mathbb{X}}\\int_{\\tau_j}^{t_{j+1}}(\\tau_k-s)^{\\alpha-1}(t_{j+1}-s)^{\\alpha-1}\\mathrm{d}s\\nonumber\\\\&\\quad+\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\left(\\int_{0}^{\\tau_k}(\\tau_k-s)^{\\frac{\\alpha-1}{1-\\alpha_1}}\\mathrm{d}s\\right)^{1-\\alpha_1}\\left\\|\\gamma_{r'}\\right\\|_{\\mathcal{L}^{\\frac{1}{\\alpha_1}}([0,\\tau_k];\\mathbb{R^{+}})}\\nonumber\\\\&\\quad+\\frac{1}{\\lambda}\\left(\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\right)^{2}\\Bigg[\\sum_{j=0}^{k-1}\\left\\|p_j(x(\\cdot))\\right\\|_{\\mathbb{X}}\\int_{\\tau_j}^{t_{j+1}}(t-s)^{\\alpha-1}(t_{j+1}-s)^{\\alpha-1}\\mathrm{d}s\\nonumber\\\\&\\qquad+\\left\\|p_k(x(\\cdot))\\right\\|_{\\mathbb{X}}\\int_{\\tau_k}^{t}(t-s)^{\\alpha-1}(t_{k+1}-s)^{\\alpha-1}\\mathrm{d}s\\Bigg]+\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\left(\\int_{0}^{t}(t-s)^{\\frac{\\alpha-1}{1-\\alpha_1}}\\mathrm{d}s\\right)^{1-\\alpha_1}\\left\\|\\gamma_{r'}\\right\\|_{\\mathcal{L}^{\\frac{1}{\\alpha_1}}([0,t];\\mathbb{R^{+}})}\\nonumber\\\\&\\le Ml_k+\\frac{2}{\\lambda}\\left(\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\right)^{2}\\sum_{j=0}^{k-1}\\left\\|p_j(x(\\cdot))\\right\\|_{\\mathbb{X}}\\int_{\\tau_j}^{t_{j+1}}(\\tau_k-s)^{\\alpha-1}(t_{j+1}-s)^{\\alpha-1}\\mathrm{d}s\\nonumber\\\\&\\quad+\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\left(\\int_{0}^{\\tau_k}(\\tau_k-s)^{\\frac{\\alpha-1}{1-\\alpha_1}}\\mathrm{d}s\\right)^{1-\\alpha_1}\\left\\|\\gamma_{r'}\\right\\|_{\\mathcal{L}^{\\frac{1}{\\alpha_1}}([0,\\tau_k];\\mathbb{R^{+}})}\\nonumber\\\\&\\quad+\\frac{1}{\\lambda}\\left(\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\right)^{2}\\left\\|p_k(x(\\cdot))\\right\\|_{\\mathbb{X}}\\int_{\\tau_k}^{t}(t-s)^{\\alpha-1}(t_{k+1}-s)^{\\alpha-1}\\mathrm{d}s\\nonumber\\\\&\\quad+\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\left(\\int_{0}^{t}(t-s)^{\\frac{\\alpha-1}{1-\\alpha_1}}\\mathrm{d}s\\right)^{1-\\alpha_1}\\left\\|\\gamma_{r'}\\right\\|_{\\mathcal{L}^{\\frac{1}{\\alpha_1}}([0,t];\\mathbb{R^{+}})}\\nonumber\\\\&\\le Ml_k+\\frac{2}{\\lambda}\\left(\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\right)^{2}\\sum_{j=0}^{k-1}\\left\\|p_j(x(\\cdot))\\right\\|_{\\mathbb{X}}\\int_{\\tau_j}^{t_{j+1}}(t_{j+1}-s)^{2(\\alpha-1)}\\mathrm{d}s\\nonumber\\\\&\\quad+\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\left(\\int_{0}^{\\tau_k}(\\tau_k-s)^{\\frac{\\alpha-1}{1-\\alpha_1}}\\mathrm{d}s\\right)^{1-\\alpha_1}\\left\\|\\gamma_{r'}\\right\\|_{\\mathcal{L}^{\\frac{1}{\\alpha_1}}([0,\\tau_k];\\mathbb{R^{+}})}\\nonumber\\\\&\\quad+\\frac{1}{\\lambda}\\left(\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\right)^{2}\\left\\|p_k(x(\\cdot))\\right\\|_{\\mathbb{X}}\\int_{\\tau_k}^{t}(t-s)^{2(\\alpha-1)}\\mathrm{d}s+\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\left(\\int_{0}^{t}(t-s)^{\\frac{\\alpha-1}{1-\\alpha_1}}\\mathrm{d}s\\right)^{1-\\alpha_1}\\left\\|\\gamma_{r'}\\right\\|_{\\mathcal{L}^{\\frac{1}{\\alpha_1}}([0,t];\\mathbb{R^{+}})}\\nonumber\\\\&\\le 
Ml_k+\\frac{2}{\\lambda}\\left(\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\right)^{2}\\sum_{j=0}^{k-1}\\left\\|p_j(x(\\cdot))\\right\\|_{\\mathbb{X}}\\frac{(t_{j+1}-\\tau_j)^{2\\alpha-1}}{2\\alpha-1}\\nonumber\\\\&\\quad+\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\left(\\int_{0}^{\\tau_k}(\\tau_k-s)^{\\frac{\\alpha-1}{1-\\alpha_1}}\\mathrm{d}s\\right)^{1-\\alpha_1}\\left\\|\\gamma_{r'}\\right\\|_{\\mathcal{L}^{\\frac{1}{\\alpha_1}}([0,\\tau_k];\\mathbb{R^{+}})}\\nonumber\\\\&\\quad+\\frac{1}{\\lambda}\\left(\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\right)^{2}\\left\\|p_k(x(\\cdot))\\right\\|_{\\mathbb{X}}\\frac{(t-\\tau_k)^{2\\alpha-1}}{2\\alpha-1}+\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\left(\\int_{0}^{t}(t-s)^{\\frac{\\alpha-1}{1-\\alpha_1}}\\mathrm{d}s\\right)^{1-\\alpha_1}\\left\\|\\gamma_{r'}\\right\\|_{\\mathcal{L}^{\\frac{1}{\\alpha_1}}([0,t];\\mathbb{R^{+}})}\\nonumber\\\\&\\le Ml_k+\\frac{2T^{2\\alpha-1}}{\\lambda(2\\alpha-1)}\\left(\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\right)^{2}\\sum_{j=0}^{k-1}\\left\\|p_j(x(\\cdot))\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\quad+\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\left(\\int_{0}^{\\tau_k}(\\tau_k-s)^{\\frac{\\alpha-1}{1-\\alpha_1}}\\mathrm{d}s\\right)^{1-\\alpha_1}\\left\\|\\gamma_{r'}\\right\\|_{\\mathcal{L}^{\\frac{1}{\\alpha_1}}([0,\\tau_k];\\mathbb{R^{+}})}+\\frac{2T^{2\\alpha-1}}{\\lambda(2\\alpha-1)}\\left(\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\right)^{2}\\left\\|p_k(x(\\cdot))\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\quad+\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\left(\\int_{0}^{t}(t-s)^{\\frac{\\alpha-1}{1-\\alpha_1}}\\mathrm{d}s\\right)^{1-\\alpha_1}\\left\\|\\gamma_{r'}\\right\\|_{\\mathcal{L}^{\\frac{1}{\\alpha_1}}([0,t];\\mathbb{R^{+}})}\\nonumber\\\\&\\le Ml_k+\\tilde{R}\\sum_{j=0}^{k}C_j+\\frac{2T^{\\alpha-\\alpha_1}}{\\mu^{1-\\alpha_1}}\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\left\\|\\gamma_{r'}\\right\\|_{\\mathrm{L}^{\\frac{1}{\\alpha_1}}(J;\\mathbb{R^+})}\\nonumber\\\\&\\le Ml_k+\\tilde{R}\\sum_{j=0}^{m}C_j+\\frac{2T^{\\alpha-\\alpha_1}}{\\mu^{1-\\alpha_1}}\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\left\\|\\gamma_{r'}\\right\\|_{\\mathrm{L}^{\\frac{1}{\\alpha_1}}(J;\\mathbb{R^+})}.\n\t\t\\end{align}\n\t\tUsing Assumption \\ref{as2.1} (\\textit{H2})(ii), we easily obtain\n\t\t\\begin{align*}\n\t\t\\liminf_{r \\rightarrow \\infty }\\frac {\\left\\|\\gamma_{r'}\\right\\|_{\\mathrm{L}^{\\frac{1}{\\alpha_1}}(J;\\mathbb{R^+})}}{r}&=\\liminf_{r \\rightarrow \\infty } \\left (\\frac {\\left\\|\\gamma_{r'}\\right\\|_{\\mathrm{L}^{\\frac{1}{\\alpha_1}}(J;\\mathbb{R^+})}}{r'}\\times\\frac{r'}{r}\\right)=H_2\\beta.\n\t\t\\end{align*}\n\t\tThus, dividing by $r$ in expressions \\eqref{4.21}, \\eqref{4.22}, \\eqref{4.23} and then passing $r\\to\\infty$, we obtain\n\t\t\\begin{align*}\n\t\t\\frac{MH_{2}\\alpha\\beta}{\\Gamma(1+\\alpha)}\\frac{2T^{\\alpha-\\alpha_1}}{\\mu^{1-\\alpha_1}}\\left\\{1+\\frac{(m+1)(m+2)\\tilde{R}}{2}+\\frac{m(m+1)\\tilde{R}^2}{2}\\sum_{j=0}^{m-1}e^{\\frac{(m+j)(m-j-1)\\tilde{R}}{2}}\\right\\}>1,\n\t\t\\end{align*}\n\t\twhich is a contradiction to \\eqref{cnd}. Hence, for some $ r>0$, $F_{\\lambda}(\\mathrm{E}_{r})\\subset \\mathrm{E}_{r}.$\n\t\t\\vskip 0.1in \n\t\t\\noindent\\textbf{Step (2): } \\emph{The operator $ F_{\\lambda}$ is continuous}. 
To achieve this goal, we consider a sequence $\\{{x}^n\\}^\\infty_{n=1}\\subseteq \\mathrm{E}_r$ such that ${x}^n\\rightarrow {x}\\mbox{ in }{\\mathrm{E}_r},$ that is,\n\t\t$$\\lim\\limits_{n\\rightarrow \\infty}\\left\\|x^n-x\\right\\|_{\\mathrm{PC}(J;\\mathbb{X})}=0.$$\n\t\tFrom Lemma \\ref{lem2.7}, we infer that\n\t\t\\begin{align*}\n\t\t\\left\\|\\tilde{x_{s}^n}- \\tilde{x_{s}}\\right\\|_{\\mathfrak{B}}&\\leq H_{2}\\sup\\limits_{\\theta\\in J}\\left\\|\\tilde{x^{n}}(\\theta)-\\tilde{x}(\\theta)\\right\\|_{\\mathbb{X}}=H_{2}\\left\\|x^{n}-x\\right\\|_{\\mathrm{PC}(J;\\mathbb{X})}\\rightarrow 0 \\ \\mbox{ as } \\ n\\rightarrow\\infty,\n\t\t\\end{align*}\n\t\tfor all $s\\in\\mathcal{Q}(\\rho^-)\\cup J$. Since $\\rho(s, \\tilde{x_s^k})\\in \\mathcal{Q}(\\rho^-)\\cup J,$ for all $k\\in\\mathbb{N}$, then we conclude that\n\t\t\\begin{align*}\n\t\t\\left\\|\\tilde{x^n}_{\\rho(s,\\tilde{x_s^k})}-\\tilde{x}_{\\rho(s,\\tilde{x_s^k})}\\right\\|_{\\mathfrak{B}}\\rightarrow 0 \\ \\mbox{ as } \\ n\\rightarrow\\infty, \\ \\mbox{ for all }\\ s\\in J\\ \\mbox{ and }\\ k\\in\\mathbb{N}.\n\t\t\\end{align*}\n\t\tIn particular, we choose $k=n$ and use the above convergence together with Assumption \\ref{as2.1} \\textit{(H1)} to obtain\n\t\t\\begin{align}\\label{4.25}\n\t\t\\left\\|f(s, \\tilde{x^n}_{\\rho(s,\\tilde{x_s^n})})-f(s,\\tilde{x}_{\\rho(s,\\tilde{x_s})})\\right\\|_{\\mathbb{X}}&\\leq\\left\\|f(s, \\tilde{x^n}_{\\rho(s,\\tilde{x_s^n})})-f(s,\\tilde{x}_{\\rho(s,\\tilde{x_s^n})})\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\quad+\\left\\|f(s, \\tilde{x}_{\\rho(s,\\tilde{x_s^n})})-f(s,\\tilde{x}_{\\rho(s,\\tilde{x_s})})\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\to 0\\ \\mbox{ as }\\ n\\to\\infty, \\mbox{ uniformly for } \\ s\\in J. \n\t\t\\end{align}\n\t\tFrom the above convergence and the dominated convergence theorem, we evaluate\n\t\t\\begin{align}\\label{4.26}\n\t\t\\left\\|p_0(x^n(\\cdot))-p_0(x(\\cdot))\\right\\|_{\\mathbb{X}}&\\le\\left\\|\\int^{t_1}_{0}(t_1-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(t_1-s)\\left[f(s,\\tilde{x^n}_{\\rho(s,\\tilde{x^n_s})})-f(s,\\tilde{x}_{\\rho(s,\\tilde{x_s})})\\right]\\mathrm{d}s\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\le\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\int^{t_1}_{0}(t_1-s)^{\\alpha-1}\\left\\|f(s,\\tilde{x^n}_{\\rho(s,\\tilde{x^n_s})})-f(s,\\tilde{x}_{\\rho(s,\\tilde{x_s})})\\right\\|_{\\mathbb{X}}\\mathrm{d}s\\nonumber\\\\&\\to 0\\ \\mbox{ as }\\ n\\to\\infty.\n\t\t\\end{align}\n\t\tUsing the convergences \\eqref{4.26} and the relation \\eqref{2.5}, we calculate\n\t\t\\begin{align*}\n\t\t\\left\\|\\mathcal{R}(\\lambda,\\Phi_{0}^{t_{1}})p_0(x^{n}(\\cdot))-\\mathcal{R}(\\lambda,\\Phi_{0}^{t_{1}})p_0(x(\\cdot))\\right\\|_{\\mathbb{X}}&=\\frac{1}{\\lambda}\\left\\|\\lambda\\mathcal{R}(\\lambda,\\Phi_{\\tau_k}^{t_{k+1}})\\left(p_0(x^{n}(\\cdot))-p_0(x(\\cdot))\\right)\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\leq\\frac{1}{\\lambda}\\left\\|p_0(x^{n}(\\cdot))-p_0(x(\\cdot))\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\to 0 \\ \\mbox{ as } \\ n \\to \\infty.\n\t\t\\end{align*}\n\t\tSince the mapping $\\mathcal{J}:\\mathbb{X}\\to\\mathbb{X}^{*}$ is demicontinuous, it is easy to obtain\n\t\t\\begin{align}\\label{4.2}\n\t\t\\mathcal{J}\\left[\\mathcal{R}(\\lambda,\\Phi_{\\tau_k}^{t_{k+1}})p_0(x^{n}(\\cdot))\\right]\\xrightharpoonup{w}\\mathcal{J}\\left[\\mathcal{R}(\\lambda,\\Phi_{\\tau_k}^{t_{k+1}})p_0(x(\\cdot))\\right] \\ \\mbox{ as } \\ n\\to\\infty \\ \\mbox{ in }\\ \\mathbb{X}^{*}.\n\t\t\\end{align}\n\t\tFrom Lemma \\ref{lem2.5}, we infer that the operator 
$\\widehat{\\mathcal{T}}_{\\alpha}(t)$ is compact for $t>0$. Therefore, the operator $\\widehat{\\mathcal{T}}_{\\alpha}(t)^*$ is also compact for $t>0$. Hence, by using the compactness of this operator together with the weak convergence \\eqref{4.2}, one can obtain\n\t\t\\begin{align}\\label{4.1}\n\t\t&\\left\\|u^{n,\\alpha}_{0,\\lambda}(t)-u_{0,\\lambda}^{\\alpha}(t)\\right||_{\\mathbb{U}}\\nonumber\\\\&\\le\\left\\|(t_{1}-t)^{\\alpha-1}\\mathrm{B}^*\\widehat{\\mathcal{T}}_{\\alpha}(t_{1}-t)^*\\left[\\mathcal{J}\\left[\\mathcal{R}(\\lambda,\\Phi_{0}^{t_{1}})p_0(x^{n}(\\cdot))\\right]-\\mathcal{J}\\left[\\mathcal{R}(\\lambda,\\Phi_{0}^{t_{1}})p_0k(x(\\cdot))\\right]\\right]\\right\\|_{\\mathbb{U}}\\nonumber\\\\&\\le(t_{1}-t)^{\\alpha-1}\\tilde{M}\\left\\|\\widehat{\\mathcal{T}}_{\\alpha}(t_{1}-t)^*\\left[\\mathcal{J}\\left[\\mathcal{R}(\\lambda,\\Phi_{0}^{t_{1}})p_0(x^{n}(\\cdot))\\right]-\\mathcal{J}\\left[\\mathcal{R}(\\lambda,\\Phi_{0}^{t_{1}})p_0(x(\\cdot))\\right]\\right]\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\to 0\\ \\mbox{ as }\\ n\\to\\infty, \\ \\mbox{ for each }\\ t\\in[0,t_{1}).\n\t\t\\end{align}\n\t\tSimilarly, for $k=1$, we compute\n\t\t\\begin{align}\\label{4.27}\n\t\t\\left\\|p_1(x^n(\\cdot))-p_1(x(\\cdot))\\right\\|_{\\mathbb{X}}&\\le\\left\\|\\mathcal{T}_{\\alpha}(t_{2}-\\tau_1)\\left[h_1(\\tau_1,\\tilde{x^n}(t_1^-))-h_1(\\tau_1,\\tilde{x}(t_1^-))\\right]\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\quad+\\left\\|\\int_{0}^{\\tau_{1}}(\\tau_{1}-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(\\tau_{1}-s)\\left[f(s,\\tilde{x^n}_{\\rho(s,\\tilde{x^n_s})})-f(s,\\tilde{x}_{\\rho(s,\\tilde{x}_s)})\\right]\\mathrm{d}s\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\quad+\\left\\|\\int_{0}^{t_{2}}(t_{2}-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(t_{2}-s)\\left[f(s,\\tilde{x^n}_{\\rho(s,\\tilde{x^n_s})})-f(s,\\tilde{x}_{\\rho(s,\\tilde{x}_s)})\\right]\\mathrm{d}s\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\quad+\\left\\|\\int_{0}^{\\tau_1}(\\tau_1-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(\\tau_1-s)\\mathrm{B}\\left[u^{n,\\alpha}_{0,\\lambda}(s)-u^{\\alpha}_{0,\\lambda}(s)\\right]\\mathrm{d}s\\right\\|_{\\mathbb{X}} \\nonumber\\\\&\\quad+\\left\\|\\int_{0}^{\\tau_1}(t_2-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(t_2-s)\\mathrm{B}\\left[u^{n,\\alpha}_{0,\\lambda}(s)-u^{\\alpha}_{0,\\lambda}(s)\\right]\\mathrm{d}s\\right\\|_{\\mathbb{X}} \\nonumber\\\\&\\le M\\left\\|h_1(\\tau_1,\\tilde{x^n}(t_1^-))-h_1(\\tau_1,\\tilde{x}(t_1^-))\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\quad+\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\int_{0}^{\\tau_{1}}(\\tau_{1}-s)^{\\alpha-1}\\left\\|f(s,\\tilde{x^n}_{\\rho(s,\\tilde{x^n_s})})-f(s,\\tilde{x}_{\\rho(s,\\tilde{x}_s)})\\right\\|_{\\mathbb{X}}\\mathrm{d}s\\nonumber\\\\&\\quad+\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\int_{0}^{t_{2}}(t_{2}-s)^{\\alpha-1}\\left\\|f(s,\\tilde{x^n}_{\\rho(s,\\tilde{x^n_s})})-f(s,\\tilde{x}_{\\rho(s,\\tilde{x}_s)})\\right\\|_{\\mathbb{X}}\\mathrm{d}s\\nonumber\\\\&\\quad+\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\int_{0}^{t_1}(\\tau_1-s)^{\\alpha-1}\\left\\|u^{n,\\alpha}_{0,\\lambda}(s)-u^{\\alpha}_{0,\\lambda}(s)\\right\\|_{\\mathbb{U}}\\mathrm{d}s\\nonumber\\\\&\\quad+\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\int_{0}^{t_1}(t_2-s)^{\\alpha-1}\\left\\|u^{n,\\alpha}_{0,\\lambda}(s)-u^{\\alpha}_{0,\\lambda}(s)\\right\\|_{\\mathbb{U}}\\mathrm{d}s\\nonumber\\\\&\\to0\\ \\mbox{ as }\\ n\\to\\infty, \n\t\t\\end{align}\n\t\twhere we used Assumption \\ref{as2.1} (\\textit{H3}), convergences \\eqref{4.25}, \\eqref{4.1} and the 
dominated convergence theorem. Moreover, similar to the convergence \\eqref{4.1}, one can obtain\n\t\t\\begin{align*}\n\t\t&\\left\\|u^{n,\\alpha}_{1,\\lambda}(t)-u_{1,\\lambda}^{\\alpha}(t)\\right||_{\\mathbb{U}}\\to 0\\ \\mbox{ as }\\ n\\to\\infty, \\ \\mbox{ for each }\\ t\\in[\\tau_1,t_{2}).\n\t\t\\end{align*}\n\t\tFurther, applying a similar analogy as above for $k=2,\\ldots,m,$ one can compute \n\t\t\\begin{align*}\n\t\t&\\left\\|u^{n,\\alpha}_{k,\\lambda}(t)-u_{k,\\lambda}^{\\alpha}(t)\\right||_{\\mathbb{U}}\\to 0\\ \\mbox{ as }\\ n\\to\\infty, \\mbox{ for each }\\ t\\in[\\tau_k,t_{k+1}), k=2,\\ldots,m.\n\t\t\\end{align*} \n\t\tTherefore, we have\n\t\t\\begin{align}\\label{4.29}\n\t\t\\left\\|u^{n,\\alpha}_{\\lambda}(t)-u_{\\lambda}^{\\alpha}(t)\\right||_{\\mathbb{U}}\\to 0\\ \\mbox{ as }\\ n\\to\\infty,\\ \\mbox{ for each }\\ t\\in[\\tau_k,t_{k+1}), k=0,1,\\ldots,m.\n\t\t\\end{align}\n\t\t\n\t\n\t\n\t\n\t\n\t\n\t\n\t\n\t\n\t\n\t\n\t\n\t\n\t\tUsing the convergences \\eqref{4.25}, \\eqref{4.29} and the dominated convergence theorem, we arrive at \n\t\t\\begin{align*}\n\t\t\\left\\|(F_{\\lambda}x^n)(t)-(F_{\\lambda}x)(t)\\right\\|_{\\mathbb{X}}&\\le\\int_{0}^{t}(t-s)^{\\alpha-1}\\left\\|\\widehat{\\mathcal{T}}_{\\alpha}(t-s)\\mathrm{B}\\left[u^{n,\\alpha}_{\\lambda}(s)-u_{\\lambda}^{\\alpha}(s)\\right]\\right\\|_{\\mathbb{X}}\\mathrm{d}s\\nonumber\\\\&\\quad+\\int_{0}^{t}(t-s)^{\\alpha-1}\\left\\|\\widehat{\\mathcal{T}}_{\\alpha}(t-s)\\left[f(s,\\tilde{x^n}_{\\rho(s,\\tilde{x^n_s})})-f(s,\\tilde{x}_{\\rho(s,\\tilde{x}_s)})\\right]\\right\\|_{\\mathbb{X}}\\mathrm{d}s\\nonumber\\\\&\\le\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\int_{0}^{t}(t-s)^{\\alpha-1}\\left\\|u^{n,\\alpha}_{\\lambda}(s)-u_{\\lambda}^{\\alpha}(s)\\right\\|_{\\mathbb{U}}\\mathrm{d}s\\nonumber\\\\&\\quad+\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\int_{0}^{t}(t-s)^{\\alpha-1}\\left\\|f(s,\\tilde{x^n}_{\\rho(s, \\tilde{x^n_s})})-f(s,\\tilde{x}_{\\rho(s, \\tilde{x}_s)})\\right\\|_{\\mathbb{X}}\\mathrm{d}s\\nonumber\\\\&\\to 0 \\ \\mbox{ as }\\ n\\to\\infty, \\ \\mbox{ for }\\ t\\in[0,t_1].\n\t\t\\end{align*}\n\t\tSimilarly, for $t\\in(\\tau_k,t_{k+1}],\\ k=1,\\ldots,m$, we deduce that \n\t\t\\begin{align*}\n\t\t\\left\\|(F_{\\lambda}x^n)(t)-(F_{\\lambda}x)(t)\\right\\|_{\\mathbb{X}}&\\le\\left\\|\\mathcal{T}_{\\alpha}(t-\\tau_k)\\left[h_k(\\tau_k,\\tilde{x^n}(t_k^-))-h_k(\\tau_k,\\tilde{x}(t_k^-))\\right]\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\quad+\\int_{0}^{\\tau_k}(\\tau_k-s)^{\\alpha-1}\\left\\|\\widehat{\\mathcal{T}}_{\\alpha}(\\tau_k-s)\\left[f(s,\\tilde{x^n}_{\\rho(s, \\tilde{x^n_s})})-f(s,\\tilde{x}_{\\rho(s, \\tilde{x}_s)})\\right]\\right\\|_{\\mathbb{X}}\\mathrm{d}s\\nonumber\\\\&\\quad+\\int_{0}^{\\tau_k}(\\tau_1-s)^{\\alpha-1}\\left\\|\\widehat{\\mathcal{T}}_{\\alpha}(\\tau_1-s)\\mathrm{B}\\left[u^{n,\\alpha}_{\\lambda}(s)-u_{\\lambda}^{\\alpha}(s)\\right]\\right\\|_{\\mathbb{X}}\\mathrm{d}s\\nonumber\\\\&\\quad+\\int_{0}^{t}(t-s)^{\\alpha-1}\\left\\|\\widehat{\\mathcal{T}}_{\\alpha}(t-s)\\mathrm{B}\\left[u^{n,\\alpha}_{\\lambda}(s)-u_{\\lambda}^{\\alpha}(s)\\right]\\right\\|_{\\mathbb{X}}\\mathrm{d}s\\nonumber\\\\&\\quad+\\int_{0}^{t}(t-s)^{\\alpha-1}\\left\\|\\widehat{\\mathcal{T}}_{\\alpha}(t-s)\\left[f(s,\\tilde{x^n}_{\\rho(s, \\tilde{x^n_s})})-f(s,\\tilde{x}_{\\rho(s, \\tilde{x}_s)})\\right]\\right\\|_{\\mathbb{X}}\\mathrm{d}s\\nonumber\\\\&\\le 
M\\left\\|h_k(\\tau_k,\\tilde{x^n}(t_k^-))-h_k(\\tau_k,\\tilde{x}(t_k^-))\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\quad+\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\int_{0}^{\\tau_k}(\\tau_k-s)^{\\alpha-1}\\left\\|f(s,\\tilde{x^n}_{\\rho(s, \\tilde{x^n_s})})-f(s,\\tilde{x}_{\\rho(s, \\tilde{x}_s)})\\right\\|_{\\mathbb{X}}\\mathrm{d}s\\nonumber\\\\&\\quad+ \\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\int_{0}^{\\tau_k}(\\tau_k-s)^{\\alpha-1}\\left\\|u^{n,\\alpha}_{\\lambda}(s)-u_{\\lambda}^{\\alpha}(s)\\right\\|_{\\mathbb{U}}\\mathrm{d}s\\nonumber\\\\&\\quad+\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\int_{0}^{t}(t-s)^{\\alpha-1}\\left\\|u^{n,\\alpha}_{\\lambda}(s)-u_{\\lambda}^{\\alpha}(s)\\right\\|_{\\mathbb{U}}\\mathrm{d}s\\nonumber\\\\&\\quad+\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\int_{0}^{t}(t-s)^{\\alpha-1}\\left\\|f(s,\\tilde{x^n}_{\\rho(s, \\tilde{x^n_s})})-f(s,\\tilde{x}_{\\rho(s, \\tilde{x}_s)})\\right\\|_{\\mathbb{X}}\\mathrm{d}s\\nonumber\\\\&\\to 0 \\ \\mbox{ as }\\ n\\to\\infty.\n\t\t\\end{align*}\n\t\tMoreover, for $t\\in(t_k, \\tau_k],\\ k=1,\\ldots,m$, applying Assumption \\ref{as2.1} (\\textit{H3}), we obtain\n\t\t\\begin{align*}\n\t\t\\left\\|(F_{\\lambda}x^n)(t)-(F_{\\lambda}x)(t)\\right\\|_{\\mathbb{X}}&\\le\\left\\|h_k(t,\\tilde{x^n}(t_k^-))-h_k(t,\\tilde{x}(t_k^-))\\right\\|_{\\mathbb{X}}\\to 0 \\ \\mbox{ as }\\ n\\to\\infty.\n\t\t\\end{align*}\n\t\tHence, it follows that $F_{\\lambda}$ is continuous.\n\t\t\\vskip 0.1in \n\t\t\\noindent\\textbf{Step (3): } \\emph{$ F_{\\lambda}$ is a compact operator.} In order to prove this claim, we use the well-known Ascoli-Arzela theorem. According to the infinite-dimensional version of the Ascoli-Arzela theorem (see, Theorem 3.7, Chapter 2, \\cite{JYONG}), it is enough to show that \n\t\t\\begin{itemize}\n\t\t\t\\item [(i)] the image of $\\mathrm{E}_r$ under $F_{\\lambda}$ is uniformly bounded (which is proved in Step I),\n\t\t\t\\item [(ii)] the image of $\\mathrm{E}_r$ under $F_{\\lambda}$ is equicontinuous,\n\t\t\t\\item [(iii)] for an arbitrary $t\\in J$, the set $\\mathrm{V}(t)=\\{(F_\\lambda x)(t):x\\in \\mathrm{E}_r\\}$ is relatively compact.\n\t\t\\end{itemize}\n\t\t\n\t\t\n\t\tFirst, we claim that the image of $\\mathrm{E}_r$ under $F_{\\lambda}$ is equicontinuous. 
For $s_1,s_2\\in[0,t_1]$ with $s_1<s_2$, and similarly on the remaining subintervals, the equicontinuity of the image of $\\mathrm{E}_r$ under $F_{\\lambda}$ can be verified by estimating $\\left\\|(F_{\\lambda}x)(s_2)-(F_{\\lambda}x)(s_1)\\right\\|_{\\mathbb{X}}$ with the help of the bounds obtained in Step (1) together with the continuity properties of $\\mathcal{T}_{\\alpha}(\\cdot)$ and $\\widehat{\\mathcal{T}}_{\\alpha}(\\cdot)$; we omit the details. Next, we show that, for each $t\\in J$, the set $\\mathrm{V}(t)$ is relatively compact. First, let $t\\in[0,t_1]$. For given $\\eta$ with $0<\\eta<t$ and any $\\delta>0$, we define\n\t\t\\begin{align*}\n\t&\t(F_{\\lambda}^{\\eta,\\delta}x)(t)\\nonumber\\\\&=\\int_{\\delta}^{\\infty}\\varphi_{\\alpha}(\\xi)\\mathcal{T}(t^{\\alpha}\\xi)\\psi(0)\\mathrm{d}\\xi +\\alpha\\int_{0}^{t-\\eta}\\int_{\\delta}^{\\infty}\\xi(t-s)^{\\alpha-1}\\varphi_{\\alpha}(\\xi)\\mathcal{T}((t-s)^{\\alpha}\\xi)\\mathrm{B}u^{\\alpha}_{\\lambda}(s)\\mathrm{d}\\xi\\mathrm{d}s\\nonumber\\\\&\\quad+\\alpha\\int_{0}^{t-\\eta}\\int_{\\delta}^{\\infty}\\xi(t-s)^{\\alpha-1}\\varphi_{\\alpha}(\\xi)\\mathcal{T}((t-s)^{\\alpha}\\xi)f(s,\\tilde{x}_{\\rho(s,\\tilde{x}_s)})\\mathrm{d}\\xi\\mathrm{d}s\\nonumber\\\\&=\\mathcal{T}(\\eta^{\\alpha}\\delta)\\bigg[\\int_{\\delta}^{\\infty}\\varphi_{\\alpha}(\\xi)\\mathcal{T}(t^{\\alpha}\\xi-\\eta^{\\alpha}\\delta)\\psi(0)\\mathrm{d}\\xi\\nonumber\\\\&\\qquad\\qquad+\\alpha\\int_{0}^{t-\\eta}\\int_{\\delta}^{\\infty}\\xi(t-s)^{\\alpha-1}\\varphi_{\\alpha}(\\xi)\\mathcal{T}((t-s)^{\\alpha}\\xi-\\eta^{\\alpha}\\delta)\\mathrm{B}u^{\\alpha}_{\\lambda}(s)\\mathrm{d}\\xi\\mathrm{d}s\\nonumber\\\\&\\qquad\\qquad+\\alpha\\int_{0}^{t-\\eta}\\int_{\\delta}^{\\infty}\\xi(t-s)^{\\alpha-1}\\varphi_{\\alpha}(\\xi)\\mathcal{T}((t-s)^{\\alpha}\\xi-\\eta^{\\alpha}\\delta)f(s,\\tilde{x}_{\\rho(s,\\tilde{x}_s)})\\mathrm{d}\\xi\\mathrm{d}s\\bigg]\\nonumber\\\\&=\\mathcal{T}(\\eta^{\\alpha}\\delta)y(t,\\eta,\\delta),\n\t\t\\end{align*}\n\t\twhere $y(t,\\eta,\\delta)$ denotes the term appearing inside the brackets. Using Assumption \\ref{as2.1}, one can calculate\n\t\t\\begin{align*}\n\t\t\\left\\|y(t,\\eta,\\delta)\\right\\|_{\\mathbb{X}}&\\le\\int_{\\delta}^{\\infty}\\varphi_{\\alpha}(\\xi)\\left\\|\\mathcal{T}(t^{\\alpha}\\xi-\\eta^{\\alpha}\\delta)\\psi(0)\\right\\|_{\\mathbb{X}}\\mathrm{d}\\xi\\nonumber\\\\&\\quad+\\alpha\\int_{0}^{t-\\eta}\\int_{\\delta}^{\\infty}\\xi(t-s)^{\\alpha-1}\\varphi_{\\alpha}(\\xi)\\left\\|\\mathcal{T}((t-s)^{\\alpha}\\xi-\\eta^{\\alpha}\\delta)\\mathrm{B}u^{\\alpha}_{\\lambda}(s)\\right\\|_{\\mathbb{X}}\\mathrm{d}\\xi\\mathrm{d}s\\nonumber\\\\&\\quad+\\alpha\\int_{0}^{t-\\eta}\\int_{\\delta}^{\\infty}\\xi(t-s)^{\\alpha-1}\\varphi_{\\alpha}(\\xi)\\left\\|\\mathcal{T}((t-s)^{\\alpha}\\xi-\\eta^{\\alpha}\\delta)f(s,\\tilde{x}_{\\rho(s,\\tilde{x}_s)})\\right\\|_{\\mathbb{X}}\\mathrm{d}\\xi\\mathrm{d}s\\nonumber\\\\&\\le\n\t\tM\\left\\|\\psi(0)\\right\\|_{\\mathbb{X}}+\\frac{N_0}{\\lambda}\\left(\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\right)^{2}\\frac{t^{2\\alpha-1}-\\eta^{2\\alpha-1}}{2\\alpha-1}\\nonumber\\\\&\\quad+\\frac{M\\alpha}{\\Gamma(1+\\alpha)}\\left(\\frac{t^\\mu-\\eta^\\mu}{\\mu}\\right)^{1-\\alpha_1}\\left(\\int_{0}^{t-\\eta}(\\gamma_{r'}(s))^{\\frac{1}{\\alpha_1}}\\mathrm{d}s\\right)^{\\alpha_1}<+\\infty, \\ \\mbox{ for }\\ t\\in[0,t_1].\n\t\t\\end{align*}\n\t\tThe compactness of the operator $\\mathcal{T}(\\eta^{\\alpha}\\delta)$ implies that the set $\\mathrm{V}_{\\eta,\\delta}(t)=\\{(F^{\\eta,\\delta}_\\lambda x)(t):x\\in \\mathrm{E}_r\\}$ is relatively compact in $\\mathbb{X}$. Hence, there exist finitely many points $x_{1},\\ldots,x_{n}\\in\\mathbb{X}$ such that \n\t\t\\begin{align*}\n\t\t\\mathrm{V}_{\\eta,\\delta}(t) \\subset \\bigcup_{i=1}^{n}\\mathcal{S}(x_i, \\varepsilon/2),\n\t\t\\end{align*}\n\t\twhere $\\varepsilon>0$ is arbitrary but fixed. 
Let us choose $\\delta>0$ and $\\eta>0$ such that \n\t\t\\begin{align*}\n\t\t&\\left\\|(F_{\\lambda}x)(t)-(F_{\\lambda}^{\\eta,\\delta}x)(t)\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\le\\left\\|\\int_{0}^{\\delta}\\varphi_{\\alpha}(\\xi)\\mathcal{T}(t^{\\alpha}\\xi)\\psi(0)\\mathrm{d}\\xi\\right\\|_{\\mathbb{X}}+\\alpha\\left\\|\\int_{0}^{t}\\int_{0}^{\\delta}\\xi(t-s)^{\\alpha-1}\\varphi_{\\alpha}(\\xi)\\mathcal{T}((t-s)^{\\alpha}\\xi)\\mathrm{B}u^{\\alpha}_{\\lambda}(s)\\mathrm{d}\\xi\\mathrm{d}s\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\quad+\\alpha\\left\\|\\int_{t-\\eta}^{t}\\int_{\\delta}^{\\infty}\\xi(t-s)^{\\alpha-1}\\varphi_{\\alpha}(\\xi)\\mathcal{T}((t-s)^{\\alpha}\\xi)\\mathrm{B}u^{\\alpha}_{\\lambda}(s)\\mathrm{d}\\xi\\mathrm{d}s\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\quad+\\alpha\\left\\|\\int_{0}^{t}\\int_{0}^{\\delta}\\xi(t-s)^{\\alpha-1}\\varphi_{\\alpha}(\\xi)\\mathcal{T}((t-s)^{\\alpha}\\xi)f(s,\\tilde{x}_{\\rho(s,\\tilde{x}_s)})\\mathrm{d}\\xi\\mathrm{d}s\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\quad+\\alpha\\left\\|\\int_{t-\\eta}^{t}\\int_{\\delta}^{\\infty}\\xi(t-s)^{\\alpha-1}\\varphi_{\\alpha}(\\xi)\\mathcal{T}((t-s)^{\\alpha}\\xi)f(s,\\tilde{x}_{\\rho(s,\\tilde{x}_s)})\\mathrm{d}\\xi\\mathrm{d}s\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\le M\\left\\|\\psi(0)\\right\\|_{\\mathbb{X}}\\int_{0}^{\\delta}\\varphi_{\\alpha}(\\xi)\\mathrm{d}\\xi+\\frac{M^2\\tilde{M}^2N_{0}\\alpha t^{2\\alpha-1}}{\\lambda(2\\alpha-1)(\\Gamma(\\alpha+1))}\\int_{0}^{\\delta}\\xi\\varphi_{\\alpha}(\\xi)\\mathrm{d}\\xi\n\t\t\\nonumber\\\\&\\quad+\\frac{N_0}{\\lambda}\\left(\\frac{M\\tilde{M}\\alpha}{\\Gamma(1+\\alpha)}\\right)^{2}\\int_{t-\\eta}^{t}(t-s)^{\\alpha-1}(t_1-s)^{\\alpha-1}\\mathrm{d}s\\nonumber\\\\&\\quad+\\frac{M\\alpha t^{\\alpha-\\alpha_1}}{\\mu^{1-\\alpha_1}}\\left\\|\\gamma_{r'}\\right\\|_{\\mathrm{L}^{\\frac{1}{\\alpha_1}}(J;\\mathbb{R^+})}\\int_{0}^{\\delta}\\xi\\varphi_{\\alpha}(\\xi)\\mathrm{d}\\xi+\\frac{M\\alpha}{(\\Gamma(1+\\alpha))^2}\\int_{t-\\eta}^{t}(t-s)^{\\alpha-1}\\gamma_{r'}(s)\\mathrm{d}s\\nonumber\\\\&\\le\\frac{\\varepsilon}{2}.\n\t\t\\end{align*}\n\t\tConsequently $$\\mathrm{V}(t)\\subset \\bigcup_{i=1}^{n}\\mathcal{S}(x_i, \\varepsilon ).$$\n\t\tThus, for each $t\\in [0,t_1]$, the set $\\mathrm{V}(t)$ is relatively compact in $ \\mathbb{X}$. 
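In more detail, for every $x\\in\\mathrm{E}_r$ there is an index $i\\in\\{1,\\ldots,n\\}$ such that $(F_{\\lambda}^{\\eta,\\delta}x)(t)\\in\\mathcal{S}(x_i,\\varepsilon/2)$, and hence\n\t\t\\begin{align*}\n\t\t\\left\\|(F_{\\lambda}x)(t)-x_i\\right\\|_{\\mathbb{X}}\\le\\left\\|(F_{\\lambda}x)(t)-(F_{\\lambda}^{\\eta,\\delta}x)(t)\\right\\|_{\\mathbb{X}}+\\left\\|(F_{\\lambda}^{\\eta,\\delta}x)(t)-x_i\\right\\|_{\\mathbb{X}}\\le\\frac{\\varepsilon}{2}+\\frac{\\varepsilon}{2}=\\varepsilon,\n\t\t\\end{align*}\n\t\twhich justifies the covering $\\mathrm{V}(t)\\subset\\bigcup_{i=1}^{n}\\mathcal{S}(x_i,\\varepsilon)$ used above. 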
Next, we take $ t\\in(\\tau_k,t_{k+1}],$ for $k=1,\\ldots,m$ and for given $\\eta$ with $ 0<\\eta<\\min\\{t-\\tau_k, \\tau_k\\}$ and any $\\delta>0$, we define\n\t\t\\begin{align*}\n\t\t&(F_{\\lambda}^{\\eta,\\delta}x)(t)\\nonumber\\\\&=\\int_{\\delta}^{\\infty}\\varphi_{\\alpha}(\\xi)\\mathcal{T}((t-\\tau_k)^{\\alpha}\\xi)h_k(\\tau_k,\\tilde{x}(t_k^-))\\mathrm{d}\\xi \\nonumber\\\\&\\quad-\\alpha\\int_{0}^{\\tau_k-\\eta}\\int_{\\delta}^{\\infty}\\xi(\\tau_k-s)^{\\alpha-1}\\varphi_{\\alpha}(\\xi)\\mathcal{T}((\\tau_k-s)^{\\alpha}\\xi)\\left[\\mathrm{B}u^{\\alpha}_{\\lambda}(s)+(s,\\tilde{x}_{\\rho(s,\\tilde{x}_s)})\\right]\\mathrm{d}\\xi\\mathrm{d}s\\nonumber\\\\&\\quad+\\alpha\\int_{0}^{t-\\eta}\\int_{\\delta}^{\\infty}\\xi(t-s)^{\\alpha-1}\\varphi_{\\alpha}(\\xi)\\mathcal{T}((t-s)^{\\alpha}\\xi)\\left[\\mathrm{B}u^{\\alpha}_{\\lambda}(s)+(s,\\tilde{x}_{\\rho(s,\\tilde{x}_s)})\\right]\\mathrm{d}\\xi\\mathrm{d}s\\nonumber\\\\&=\\mathcal{T}(\\eta^{\\alpha}\\delta)\\bigg[\\int_{\\delta}^{\\infty}\\varphi_{\\alpha}(\\xi)\\mathcal{T}((t-\\tau_k)^{\\alpha}\\xi-\\eta^{\\alpha}\\delta)h_k(\\tau_k,\\tilde{x}(t_k^-))\\mathrm{d}\\xi\\nonumber\\\\&\\quad+\\alpha\\int_{0}^{\\tau_k-\\eta}\\int_{\\delta}^{\\infty}\\xi(\\tau_k-s)^{\\alpha-1}\\varphi_{\\alpha}(\\xi)\\mathcal{T}((\\tau_k-s)^{\\alpha}\\xi-\\eta^{\\alpha}\\delta)\\left[\\mathrm{B}u^{\\alpha}_{\\lambda}(s)+(s,\\tilde{x}_{\\rho(s,\\tilde{x}_s)})\\right]\\mathrm{d}\\xi\\mathrm{d}s\\nonumber\\\\&\\quad+\\alpha\\int_{0}^{t-\\eta}\\int_{\\delta}^{\\infty}\\xi(t-s)^{\\alpha-1}\\varphi_{\\alpha}(\\xi)\\mathcal{T}((t-s)^{\\alpha}\\xi-\\eta^{\\alpha}\\delta)\\left[\\mathrm{B}u^{\\alpha}_{\\lambda}(s)+(s,\\tilde{x}_{\\rho(s,\\tilde{x}_s)})\\right]\\mathrm{d}\\xi\\mathrm{d}s\\bigg].\n\t\t\\end{align*}\n\t\tProceeding similarly for the case $t\\in[0,t_1]$, one can prove that the set $\\mathrm{V}(t),$ for $t\\in(\\tau_k,t_{k+1}],\\ k=1,\\ldots,m$ is relatively compact in $ \\mathbb{X}$. Moreover for $t\\in(t_k,\\tau_k], k=1\\ldots,m$, the fact that the set $\\mathrm{V}(t)$ is relative compact follows by the compactness of the impulses $h_k,$ for $k=1,\\ldots,m$. Therefore, the set $\\mathrm{V}(t)=\\{(F_\\lambda x)(t):x\\in \\mathrm{E}_r\\},$ for each $t\\in J$ is relatively compact in $\\mathbb{X}$.\n\t\t\n\t\tHence, by invoking the Arzela-Ascoli theorem, we conclude that the operator $ F_{\\lambda}$ is compact. Then \\emph{Schauder's fixed point theorem} yields that the operator $F_{\\lambda}$ has a fixed point in $\\mathrm{E}_{r}$, which is a mild solution of the system \\eqref{1.1}.\n\t\\end{proof}\n\t\n\tIn order to prove the approximate controllability of the system \\eqref{1.1}, we replace the assumption (\\textit{H2}) by the following stronger assumption: \n\t\\begin{enumerate}\\label{as}\n\t\t\\item [\\textit{(H4)}] The function $ f: J \\times \\mathfrak{B}_{\\alpha} \\rightarrow \\mathbb{X} $ satisfies the assumption (\\textit{$H2$})(i) and there exists a function $ \\gamma\\in \\mathrm{L}^{\\frac{1}{\\alpha_1}}(J;\\mathbb{R}^+)$ with $\\alpha_1\\in[0,\\frac{1}{2})$ such that $$ \\|f(t,\\psi)\\|_{\\mathbb{X}}\\leq \\gamma(t),\\ \\text{ for all }\\ (t,\\psi) \\in J \\times \\mathfrak{B}_{\\alpha}. $$ \n\t\\end{enumerate}\n\t\\begin{theorem}\\label{thm4.4}\n\t\tSuppose that Assumptions (H0)-(H1), (H3)-(H4) and the condition \\eqref{cnd} of Theorem \\ref{thm4.3} are satisfied. 
Then the system \\eqref{1.1} is approximately controllable.\n\t\\end{theorem}\n\t\\begin{proof}\n\t\tBy using Theorem \\ref{thm4.3}, we infer that for every $\\lambda>0$ and $\\zeta_k\\in \\mathbb{X}$ for $k=0,1,\\ldots m$, there exists a mild solution $x^{\\lambda}\\in\\mathrm{E}_r$ such that\n\t\t\\begin{equation}\\label{M}\n\t\tx^{\\lambda}(t)=\\begin{dcases}\n\t\t\\mathcal{T}_{\\alpha}(t)\\psi(0)+\\int_{0}^{t}(t-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(t-s)\\left[\\mathrm{B}u^{\\alpha}_{\\lambda}(s)+f(s,\\tilde{x^\\lambda}_{\\rho(s, \\tilde{x^\\lambda}_s)})\\right]\\mathrm{d}s,t\\in[0, t_1],\\\\\n\t\th_k(t, \\tilde{x^\\lambda}(t_k^-)),t\\in(t_k, \\tau_k],\\ k=1,\\ldots,m,\\\\\n\t\t\\mathcal{T}_{\\alpha}(t-\\tau_k)h_k(\\tau_k,\\tilde{x^\\lambda}(t_k^-))-\\int_{0}^{\\tau_k}(\\tau_k-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(\\tau_k-s)\\left[\\mathrm{B}u^{\\alpha}_{\\lambda}(s)+f(s,\\tilde{x^\\lambda}_{\\rho(s,\\tilde{x^\\lambda}_s)})\\right]\\mathrm{d}s\\\\\\quad+\\int_{0}^{t}(t-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(t-s)\\left[\\mathrm{B}u^{\\alpha}_{\\lambda}(s)+f(s,\\tilde{x^\\lambda}_{\\rho(s, \\tilde{x^\\lambda}_s)})\\right]\\mathrm{d}s,\\\\ \\qquad\\qquad\\qquad t\\in(\\tau_k,t_{k+1}],\\ k=1,\\ldots,m,\n\t\t\\end{dcases}\n\t\t\\end{equation}\n\t\twith the control defined in \\eqref{C}. Next, we estimate\n\t\t\\begin{align}\\label{4.35}\n\t\tx^{\\lambda}(T)&=\\mathcal{T}_{\\alpha}(T-\\tau_m)h_{m}(\\tau_m,\\tilde{x^\\lambda}(t_m^-))\\nonumber\\\\&\\quad-\\int_{0}^{\\tau_m}(\\tau_m-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(\\tau_m-s)\\left[\\mathrm{B}u^{\\alpha}_{\\lambda}(s)+f(s,\\tilde{x^\\lambda}_{\\rho(s,\\tilde{x^\\lambda}_s)})\\right]\\mathrm{d}s\\nonumber\\\\&\\quad+\\int_{0}^{T}(T-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(T-s)\\left[\\mathrm{B}u^{\\alpha}_{\\lambda}(s)+f(s,\\tilde{x^\\lambda}_{\\rho(s,\\tilde{x^\\lambda}_s)})\\right]\\mathrm{d}s\\nonumber\\\\&=\\mathcal{T}_{\\alpha}(T-\\tau_m)h_{m}(\\tau_m,\\tilde{x^\\lambda}(t_m^-))\\nonumber\\\\&\\quad-\\int_{0}^{\\tau_m}(\\tau_m-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(\\tau_m-s)\\left[\\mathrm{B}u^{\\alpha}_{\\lambda}(s)+f(s,\\tilde{x^\\lambda}_{\\rho(s,\\tilde{x^\\lambda}_s)})\\right]\\mathrm{d}s\\nonumber\\\\&\\quad+\\int_{0}^{T}(T-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(T-s)f(s,\\tilde{x^\\lambda}_{\\rho(s,\\tilde{x^\\lambda}_s)})+\\int_{0}^{\\tau_m}(T-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(T-s)\\mathrm{B}u^{\\alpha}_{\\lambda}(s)\\mathrm{d}s\\nonumber\\\\&\\quad+\\int_{\\tau_m}^{T}(T-s)^{2(\\alpha-1)}\\widehat{\\mathcal{T}}_{\\alpha}(T-s)\\mathrm{B}\\mathrm{B}^*\\widehat{\\mathcal{T}}_{\\alpha}(T-s)^*\\mathcal{J}\\left[\\mathcal{R}(\\lambda,\\Phi_{\\tau_m}^{T})p_m(x^\\lambda(\\cdot))\\right]\\mathrm{d}s\\nonumber\\\\&=\\mathcal{T}_{\\alpha}(T-\\tau_m)h_{m}(\\tau_m,\\tilde{x^\\lambda}(t_m^-))\\nonumber\\\\&\\quad-\\int_{0}^{\\tau_m}(\\tau_m-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(\\tau_m-s)\\left[\\mathrm{B}u^{\\alpha}_{\\lambda}(s)+f(s,\\tilde{x^\\lambda}_{\\rho(s,\\tilde{x^\\lambda}_s)})\\right]\\mathrm{d}s\\nonumber\\\\&\\quad+\\int_{0}^{T}(T-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(T-s)f(s,\\tilde{x^\\lambda}_{\\rho(s,\\tilde{x^\\lambda}_s)})+\\int_{0}^{\\tau_m}(T-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(T-s)\\mathrm{B}u^{\\alpha}_{\\lambda}(s)\\mathrm{d}s\\nonumber\\\\&\\quad+\\Phi_{\\tau_m}^{T}\\mathcal{J}\\left[\\mathcal{R}(\\lambda,\\Phi_{\\tau_m}^{T})p_m(x^\\lambda(\\cdot))\\right]\\nonumber\\\\&=\\zeta_m-\\lambda\\mathcal{R}(\\lambda
,\\Phi_{\\tau_m}^{T})p_m(x^\\lambda(\\cdot)).\n\t\t\\end{align}\n\t\tMoreover, the fact that $x^{\\lambda}\\in\\mathrm{E}_r$ implies that, for each $t\\in J$, the sequence $x^{\\lambda}(t)$ is bounded in $\\mathbb{X}$. Then by the Banach-Alaoglu theorem, we can find a subsequence, still denoted as $ x^{\\lambda},$ such that \n\t\t\\begin{align*}\n\t\tx^\\lambda(t)\\xrightharpoonup{w}z(t) \\ \\mbox{ in }\\ \\mathbb{X} \\ \\ \\mbox{as}\\ \\ \\lambda\\to0^+,\\ t\\in J.\n\t\t\\end{align*}\n\t\tUsing the condition (\\textit{$H3$}) of Assumption \\ref{as2.1}, we obtain\n\t\t\\begin{align}\\label{4.19}\n\t\th_m(t,x^\\lambda(t_m^-))\\to h_m(t,z(t_m^-)) \\ \\mbox{ in }\\ \\mathbb{X} \\ \\ \\mbox{as}\\ \\ \\lambda\\to0^+.\n\t\t\\end{align}\n\t\tFurthermore, by using Assumption \\textit{(H4)}, we get\n\t\t\\begin{align}\n\t\t\\int_{s_1}^{s_2}\\left\\|f(s,\\tilde{x^{\\lambda}}_{\\rho(s,\\tilde{x^{\\lambda}}_s)})\\right\\|_{\\mathbb{X}}^{2}\\mathrm{d}s&\\le \\int_{s_1}^{s_2}\\gamma^2(s)\\mathrm{d} s\\leq \\left(\\int_{s_1}^{s_2}\\gamma^{\\frac{1}{\\alpha_1}}(s)\\mathrm{d}s\\right)^{2\\alpha_1}(s_2-s_1)^{1-2\\alpha_1}<+\\infty, \\nonumber\n\t\t\\end{align}\n\t\tfor any $ s_1,s_2\\in[0,T]$ with $s_1<s_2$. Hence, the set $\\{f(\\cdot, \\tilde{x^{\\lambda}}_{\\rho(\\cdot,\\tilde{x^{\\lambda}}_{\\cdot})}): \\lambda>0\\}$ is bounded in $ \\mathrm{L}^2([s_1,s_2]; \\mathbb{X})$. By an application of the Banach-Alaoglu theorem, we can find a subsequence still denoted as $ \\{f(\\cdot, \\tilde{x^{\\lambda}}_{\\rho(s,\\tilde{x^{\\lambda}}_s)}): \\lambda > 0 \\}$ such that \n\t\t\\begin{align}\\label{4.36}\n\t\tf(\\cdot, \\tilde{x^{\\lambda}}_{\\rho(s,\\tilde{x^{\\lambda}}_s)})\\xrightharpoonup{w}f(\\cdot) \\ \\mbox{ in }\\ \\mathrm{L}^2([s_1,s_2];\\mathbb{X}).\n\t\t\\end{align}\n\t\tWe now calculate \n\t\t\\begin{align}\n\t\t&\\int_{0}^{\\tau_m}\\left\\|u^{\\alpha}_{\\lambda}(s)\\right\\|^2_{\\mathbb{U}}\\mathrm{d}s\\nonumber\\\\&=\\int_{0}^{t_1}\\left\\|u^{\\alpha}_{\\lambda}(s)\\right\\|^2_{\\mathbb{U}}\\mathrm{d}s+\\int_{t_1}^{\\tau_1}\\left\\|u^{\\alpha}_{\\lambda}(s)\\right\\|^2_{\\mathbb{U}}\\mathrm{d}s+\\int_{\\tau_1}^{t_2}\\left\\|u^{\\alpha}_{\\lambda}(s)\\right\\|^2_{\\mathbb{U}}\\mathrm{d}s+\\cdots+\\int_{t_m}^{\\tau_m}\\left\\|u^{\\alpha}_{\\lambda}(s)\\right\\|^2_{\\mathbb{U}}\\mathrm{d}s\\nonumber\\\\&=\\int_{0}^{t_1}\\left\\|u^{\\alpha}_{0,\\lambda}(s)\\right\\|^2_{\\mathbb{U}}\\mathrm{d}s+\\int_{\\tau_1}^{t_2}\\left\\|u^{\\alpha}_{1,\\lambda}(s)\\right\\|^2_{\\mathbb{U}}\\mathrm{d}s+\\cdots+\\int_{\\tau_{m-1}}^{t_m}\\left\\|u^{\\alpha}_{m-1,\\lambda}(s)\\right\\|^2_{\\mathbb{U}}\\mathrm{d}s\\nonumber\\\\&\\le\\left(\\frac{M\\tilde{M}\\alpha}{\\lambda\\Gamma(1+\\alpha)}\\right)^{2}\\frac{T^{2\\alpha-1}}{2\\alpha-1}\\sum_{j=0}^{m-1}C_j^2=C,\n\t\t\\end{align}\n\t\twhere the constants $C_k$, for $k=1,\\ldots,m-1$, are the same as in \\eqref{4.5} and $C_0=N_0$ is given in \\eqref{4.4}. Moreover, the above estimate ensures that the sequence $\\{u^\\alpha_{\\lambda}(\\cdot): \\lambda >0\\}$ is bounded in $ \\mathrm{L}^2([0,\\tau_m]; \\mathbb{U})$. 
Further, by the Banach-Alaoglu theorem, we can find a subsequence, still denoted as $\\{u^\\alpha_{\\lambda}(\\cdot): \\lambda >0\\}$ such that \n\t\t\\begin{align}\\label{4}\n\t\tu^\\alpha_{\\lambda}(\\cdot)\\xrightharpoonup{w}u^\\alpha(\\cdot) \\ \\mbox{ in }\\ \\mathrm{L}^2([0,\\tau_m];\\mathbb{U}).\n\t\t\\end{align}\n\t\tNext, we compute\n\t\t\\begin{align}\\label{4.37}\n\t\t\\left\\|p_{m}(x^{\\lambda}(\\cdot))-\\omega\\right\\|_{\\mathbb{X}}&\\le\\left\\|\\mathcal{T}_{\\alpha}(T-\\tau_m)(h_k(\\tau_m,\\tilde{x^\\lambda}(t_m^-))-h_k(\\tau_m,z(t_m^-)))\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\quad+\\left\\|\\int_{0}^{\\tau_m}(\\tau_m-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(\\tau_m-s)\\mathrm{B}\\left[u^\\alpha_{\\lambda}(s)-u^{\\alpha}(s)\\right]\\mathrm{d}s\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\quad+\\left\\|\\int_{0}^{\\tau_m}(\\tau_m-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(\\tau_m-s)\\left[f(s,\\tilde{x^\\lambda}_{\\rho(s,\\tilde{x^\\lambda}_s)})-f(s)\\right]\\mathrm{d}s\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\quad+\\left\\|\\int_{0}^{\\tau_m}(T-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(T-s)\\mathrm{B}\\left[u^\\alpha_{\\lambda}(s)-u^{\\alpha}(s)\\right]\\mathrm{d}s\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\quad+\\left\\|\\int_{0}^{T}(T-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(T-s)\\left[f(s,\\tilde{x^\\lambda}_{\\rho(s,\\tilde{x^\\lambda}_s)})-f(s)\\right]\\mathrm{d}s\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\le\\left\\|\\mathcal{T}_{\\alpha}(T-\\tau_m)(h_k(\\tau_m,x^\\lambda(t_m^-))-h_k(\\tau_m,z(t_m^-)))\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\quad+\\left\\|\\int_{0}^{\\tau_m}(\\tau_m-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(\\tau_m-s)\\mathrm{B}\\left[u^\\alpha_{\\lambda}(s)-u^{\\alpha}(s)\\right]\\mathrm{d}s\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\quad+\\left\\|\\int_{0}^{\\tau_m}(\\tau_m-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(\\tau_m-s)\\left[f(s,\\tilde{x^\\lambda}_{\\rho(s,\\tilde{x^\\lambda}_s)})-f(s)\\right]\\mathrm{d}s\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\quad+\\frac{T^{2\\alpha-1}-(T-\\tau_m)^{2\\alpha-1}}{2\\alpha-1}\\left(\\int_{0}^{\\tau_m}\\left\\|\\widehat{\\mathcal{T}}_{\\alpha}(T-s)\\mathrm{B}\\left[u^\\alpha_{\\lambda}(s)-u^{\\alpha}(s)\\right]\\right\\|^2_{\\mathbb{X}}\\mathrm{d}s\\right)^{\\frac{1}{2}}\\nonumber\\\\&\\quad+\\left\\|\\int_{0}^{T}(T-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(T-s)\\left[f(s,\\tilde{x^\\lambda}_{\\rho(s,\\tilde{x^\\lambda}_s)})-f(s)\\right]\\mathrm{d}s\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\quad\\to 0\\ \\mbox{as}\\ \\lambda\\to0^+, \n\t\t\\end{align}\n\t\twhere \n\t\t\\begin{align*}\n\t\t\\omega &=\\zeta_m-\\mathcal{T}_{\\alpha}(T-\\tau_m)h_m(\\tau_m,z(t_m^-))+\\int_{0}^{\\tau_m}(\\tau_m-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(\\tau_m-s)\\left[\\mathrm{B}u^{\\alpha}(s)+f(s)\\right]\\mathrm{d}s\\nonumber\\\\&\\quad-\\int_{0}^{\\tau_m}(T-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(T-s)\\mathrm{B}u^{\\alpha}(s)\\mathrm{d}s-\\int_{0}^{T}(T-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(T-s)f(s)\\mathrm{d}s.\n\t\t\\end{align*}\n\t\tHere, we used the convergences \\eqref{4.19},\\eqref{4.36},\\eqref{4}, the dominated convergence theorem and the compactness of the operator $f(\\cdot)\\to\\int_{0}^{\\cdot}(\\cdot-s)^{\\alpha-1}\\widehat{\\mathcal{T}}_{\\alpha}(\\cdot-s) f(s)\\mathrm{d}s:\\mathrm{L}^2(J;\\mathbb{X})\\rightarrow \\mathrm{C}(J;\\mathbb{X})$, (see Lemma \\ref{lem2.12}).\n\t\t\tFinally, by using the equality \\eqref{4.35}, we evaluate 
\n\t\t\\begin{align}\n\t\t\\left\\|x^{\\lambda}(T)-\\zeta_m\\right\\|_{\\mathbb{X}}&\\le\\left\\|\\lambda\\mathcal{R}(\\lambda,\\Phi_{\\tau_m}^{T})p_m(x(\\cdot))\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\le\\left\\|\\lambda\\mathcal{R}(\\lambda,\\Phi_{\\tau_m}^{T})(p_m(x(\\cdot))-\\omega)\\right\\|_{\\mathbb{X}}+\\left\\|\\lambda\\mathcal{R}(\\lambda,\\Phi_{\\tau_m}^{T})\\omega\\right\\|_{\\mathbb{X}}\\nonumber\\\\&\\le\\left\\|\\lambda\\mathcal{R}(\\lambda,\\Phi_{\\tau_m}^{T})\\right\\|_{\\mathcal{L}(\\mathbb{X})}\\left\\|p_m(x(\\cdot))-\\omega\\right\\|_{\\mathbb{X}}+\\left\\|\\lambda\\mathcal{R}(\\lambda,\\Phi_{\\tau_m}^{T})\\omega\\right\\|_{\\mathbb{X}}.\n\t\t\\end{align}\n\t\tUsing the above inequality, \\eqref{4.37} and Assumption \\ref{as2.1} (\\textit{H0}), we obtain\n\t\t\\begin{align*}\n\t\t\\left\\|x^{\\lambda}(T)-\\zeta_m\\right\\|_{\\mathbb{X}}\\to0,\\ \\mbox{ as }\\ \\lambda\\to0^+,\n\t\t\\end{align*}\n\t\twhich ensures that the system \\eqref{1.1} is approximately controllable on $J$.\n\t\\end{proof}\n\t\\begin{rem}\\label{rem4.4}\n\t\tThe works \\cite{PCY,NIZ,SRY}, etc considered a different kind of control for the fractional order semilinear problems. If one follows Remark \\ref{rem3.6}, the controllability operator defined in \\eqref{2.1} changes to \\eqref{3.20}\n\t\tand $u^\\alpha_{k,\\lambda}(\\cdot)$ appearing in the control defined in \\eqref{C} takes the form \n\t\t\\begin{align*}\n\t\tu^\\alpha_{k,\\lambda}(t)&=\\mathrm{B}^*\\widehat{\\mathcal{T}}_{\\alpha}(t_{k+1}-t)^*\\mathcal{J}\\left[\\mathcal{R}(\\lambda,\\Phi_{\\tau_k}^{t_{k+1}})p_k(x(\\cdot))\\right],\n\t\t\\end{align*}\n\t\tfor $t\\in [\\tau_k, t_{k+1}),k=0,1,\\ldots,m$. The control provided in \\eqref{C} is motivated from the linear regulator problem with the cost functional defined in \\eqref{3.1}. The proof of Theorems \\ref{thm4.3} and \\ref{thm4.4} follows in a similar way with some obvious modifications in the calculations. \n\t\\end{rem}\n\t\\section{Application}\\label{application}\\setcounter{equation}{0}\n\tIn this section, we discuss a concrete example to verify the results developed in previous sections. \n\t\n\t\\begin{Ex}\\label{ex1} Let us take the following fractional heat equation with non-instantaneous impulses and delay:\n\t\t\\begin{equation}\\label{ex}\n\t\t\\left\\{\n\t\t\\begin{aligned}\n\t\t\\frac{\\partial^\\alpha z(t,\\xi)}{\\partial t^\\alpha}&=\\frac{\\partial^2z(t,\\xi)}{\\partial \\xi^2}+\\eta(t,\\xi)+\\int_{-\\infty}^{t}b(s-t)z(s-\\sigma(\\|z(t)\\|),\\xi)\\mathrm{d}s, \\\\&\\qquad \\qquad \\ t\\in\\bigcup_{k=0}^{m} (\\tau_k, t_{k+1}]\\subset J=[0,T], \\ \\xi\\in[0,\\pi], \\\\\n\t\tz(t,\\xi)&=h_k(t,z(t_k^-,\\xi)),\\ t\\in(t_k,\\tau_k],\\ k=1,\\ldots, m,\\ \\xi\\in[0,\\pi],\\\\\n\t\tz(t,0)&=0=z(t,\\pi), \\ t\\in [0, 1], \\\\\n\t\tz(\\theta,\\xi)&=\\psi(\\theta,\\xi), \\ \\xi\\in[0,\\pi], \\ \\theta\\leq0.\n\t\t\\end{aligned}\n\t\t\\right.\n\t\t\\end{equation}\n\t\twhere the function $\\eta:[0,1]\\times[0,\\pi]\\to[0,\\pi]$ is continuous in $t$ and the functions $\\sigma:[0,\\infty)\\to[0,\\infty)$ are also continuous.\n\t\\end{Ex}\n\t\\vskip 0.1 cm\n\t\\noindent\\textbf{Step 1:} \\emph{$\\mathrm{C}_0$-semigroup and phase space:} \n\tLet $\\mathbb{X}_p= \\mathrm{L}^{p}([0,\\pi];\\mathbb{R})$ with $p\\in[2,\\infty)$, and $\\mathbb{U}=\\mathrm{L}^{2}([0,\\pi];\\mathbb{R})$. Note that $\\mathbb{X}_p$ is separable and reflexive with strictly convex dual $\\mathbb{X}_p^*=\\mathrm{L}^{\\frac{p}{p-1}}([0,\\pi];\\mathbb{R})$ and $\\mathbb{U}$ is separable. 
We define the linear operator $\\mathrm{A}_p:\\mathrm{D}(\\mathrm{A}_p)\\subset\\mathbb{X}_p\\to\\mathbb{X}_p$ as\n\t\\begin{align*}\n\t\\mathrm{A}_pg(\\xi)= g''(\\xi),\n\t\\end{align*}\n\twhere $\\mathrm{D}(\\mathrm{A}_p)= \\mathrm{W}^{2,p}([0,\\pi];\\mathbb{R})\\cap\\mathrm{W}_0^{1,p}([0,\\pi];\\mathbb{R})$. Since we know that $\\mathrm{C}_0^{\\infty}([0,\\pi];\\mathbb{R})\\subset\\mathrm{D}(\\mathrm{A}_p)$ and hence $\\mathrm{D}(\\mathrm{A}_p)$ is dense in $\\mathbb{X}_p$ and one can easily verify that the operator $\\mathrm{A}_p$ is closed. Next, we consider the following Sturm-Liouville system:\n\t\\begin{equation}\\label{59}\n\t\\left\\{\n\t\\begin{aligned}\n\t\\left(\\lambda\\mathrm{I}-\\mathrm{A}_p\\right)g(\\xi)&=l(\\xi), \\ 0<\\xi<\\pi,\\\\\n\tg(0)=g(\\pi)&=0.\n\t\\end{aligned}\n\t\\right.\n\t\\end{equation}\n\tOne can easily rewrite the above system as \n\t\\begin{align}\\label{511}\n\t\\left(\\lambda\\mathrm{I}-\\Delta\\right)g(\\xi)&=l(\\xi),\n\t\\end{align}\n\twhere $\\Delta g(\\xi)=g''(\\xi)$. Multiplying both sides of \\eqref{511} by $g|g|^{p-2}$ and then integrating over $[0,\\pi]$, we obtain\n\t\\begin{align}\\label{536}\n\t\\lambda\\int_0^{\\pi}|g(\\xi)|^p\\mathrm{d}\\xi&+(p-1)\\int_0^{\\pi}|g(\\xi)|^{p-2}|f'(\\xi)|^2\\mathrm{d}\\xi=\\int_0^{\\pi}l(\\xi)g(\\xi)|g(\\xi)|^{p-2}\\mathrm{d}\\xi.\n\t\\end{align}\n\n\n\n\n\n\n\n\n\n\n\n\n\tApplying H\\\"older's inequality, we get\n\t\\begin{align*}\n\t\\lambda\\int_0^{\\pi}|g(\\xi)|^p\\mathrm{d}\\xi&\\leq \\left(\\int_0^{\\pi}|g(\\xi)|^p\\mathrm{d}\\xi\\right)^{\\frac{p-1}{p}}\\left(\\int_0^{\\pi}|l(\\xi)|^p\\mathrm{d}\\xi\\right)^{\\frac{1}{p}}.\n\t\\end{align*}\n\tThus, we have \n\t\\begin{align*}\n\t\\|\\mathcal{R}(\\lambda,\\mathrm{A}_p)l\\|_{\\mathrm{L}^p}=\\|g\\|_{\\mathrm{L}^p}\\leq\\frac{1}{\\lambda}\\|l\\|_{\\mathrm{L}^p},\n\t\\end{align*}\n\tso that we obtain \n\t\\begin{align}\n\t\\|\\mathcal{R}(\\lambda,\\mathrm{A}_p\\|_{\\mathcal{L}(\\mathrm{L}^p)}\\leq\\frac{1}{\\lambda}.\n\t\\end{align}\n\tHence, by applying the Hille-Yosida theorem, we obtain that the operator $\\mathrm{A}_p$ generate a strongly continuous semigroup $\\{\\mathcal{T}_p(t):t\\ge0\\}$ of bounded linear operators. \n\t\n\tMoreover, the infinitesimal generator $\\mathrm{A}_p$ and the semigroup $\\mathcal{T}_p(t)$ can be written as\n\t\\begin{align}\n\t\\mathrm{A}_pg&= \\sum_{n=1}^{\\infty}-n^{2}\\langle g, w_{n} \\rangle w_{n},\\ g\\in \\mathrm{D}(\\mathrm{A}_p),\\nonumber\\\\\n\t\\mathcal{T}_p(t)g&= \\sum_{n=1}^{\\infty}\\exp\\left(-n^2t\\right)\\langle g, w_{n} \\rangle w_{n},\\ g\\in\\mathbb{X}_p,\n\t\\end{align}\n\twhere, $w_n(\\xi)=\\sqrt{\\frac{2}{\\pi}}\\sin(n\\xi)$ are the normalized eigenfunctions corresponding to the eigenvalues $\\lambda_n=-n^2\\ (n\\in\\mathbb{N})$ of the operator $\\mathrm{A}_p$ and $\\langle g,w_n\\rangle :=\\int_0^{\\pi}g(\\xi)w_n(\\xi)\\mathrm{d}\\xi$. Further, the resolvent operator $\\mathcal{R}(\\lambda,\\mathrm{A}_p)$ is compact (see \\cite{MTM} for more details). Therefore, the generated semigroup $\\mathcal{T}_p(t)$ is compact for $t>0$. 
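As a side note, the spectral representation above lends itself to a direct numerical check. The following minimal Python sketch evaluates the truncated expansion $\\mathcal{T}_p(t)g\\approx\\sum_{n=1}^{N}e^{-n^2t}\\langle g,w_n\\rangle w_n$ on a uniform grid; the truncation order $N$, the grid resolution, and the test function $g$ are arbitrary illustrative choices and are not taken from the text.
\\begin{verbatim}
import numpy as np

# Dirichlet eigenfunctions w_n(xi) = sqrt(2/pi) sin(n xi) on [0, pi] and the
# truncated semigroup T(t) g = sum_{n<=N} exp(-n^2 t) <g, w_n> w_n.
def semigroup(g_vals, xi, t, N=50):
    out = np.zeros_like(g_vals)
    for n in range(1, N + 1):
        w_n = np.sqrt(2.0 / np.pi) * np.sin(n * xi)
        coeff = np.trapz(g_vals * w_n, xi)        # <g, w_n>
        out += np.exp(-n**2 * t) * coeff * w_n
    return out

xi = np.linspace(0.0, np.pi, 401)
g = xi * (np.pi - xi)          # smooth test function with g(0) = g(pi) = 0

for t in (0.0, 0.1, 1.0):
    # The sup norm decays rapidly in t, reflecting the factors exp(-n^2 t).
    print(t, np.max(np.abs(semigroup(g, xi, t))))
\\end{verbatim}
For $t>0$ the factors $e^{-n^2t}$ decay rapidly, so a modest truncation order already approximates $\\mathcal{T}_p(t)$ well; this rapid decay is the numerical counterpart of the smoothing and compactness properties noted above.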
Thus, the condition (\\textit{H1}) of Assumption \\ref{as2.1} holds.\n\t\n\tWe now define the following operators\n\t\\begin{align}\n\t\\mathcal{T}_{\\alpha,p}(t)g&=\\int_{0}^{\\infty}\\varphi_{\\alpha}(\\xi)\\mathcal{T}_p(t^{\\alpha}\\xi)g(\\xi)\\mathrm{d}\\xi=\\int_{0}^{\\infty}\\varphi_{\\alpha}(\\xi)\\sum_{n=1}^{\\infty}\\exp\\left(-n^2t^\\alpha\\xi\\right)\\langle g, w_{n} \\rangle w_{n}(\\xi)\\mathrm{d}\\xi.\\\\\n\t\\widehat{\\mathcal{T}}_{\\alpha,p}(t)g&=\\alpha\\int_{0}^{\\infty}\\xi\\varphi_{\\alpha}(\\xi)\\mathcal{T}_p(t^{\\alpha}\\xi)g(\\xi)\\mathrm{d}\\xi=\\alpha\\int_{0}^{\\infty}\\xi\\varphi_{\\alpha}(\\xi)\\sum_{n=1}^{\\infty}\\exp\\left(-n^2t^\\alpha\\xi\\right)\\langle g, w_{n} \\rangle w_{n}(\\xi)\\mathrm{d}\\xi, \\label{5.7}\n\t\\end{align}\n\tfor all $g\\in\\mathbb{X}_p$. \n\t\n\tLet us take $\\mathfrak{B}=\\mathrm{PC}_{0}\\times\\mathrm{L}^1_h(\\mathbb{X})$ with $h(\\theta)=e^{\\nu\\theta}$, for some $\\nu>0$ (see Example \\ref{exm2.8}). Proceeding similar arguments as in section 5, \\cite{SSM}, one can verify that the space $\\mathfrak{B}=\\mathrm{PC}_{0}\\times\\mathrm{L}^1_h(\\mathbb{X})$ is a phase space, which satisfies the axioms (A1) and (A2) with $\\Lambda(t) =\\int_{-t}^{-0}h(\\theta)\\mathrm{d}\\theta$ and $\\Upsilon(t)= H(-t)$.\n\tWe define $K:=\\sup\\limits_{\\theta\\in(-\\infty,0]}\\frac{|b(-\\theta)|}{h(\\theta)}$.\n\t\\vskip 0.1 cm \n\t\\noindent\\textbf{Step 2:} \\emph{Abstract formulation and approximate controllability.}\n\tLet us define $$x(t)(\\xi):=z(t,\\xi),\\ \\mbox{ for }\\ t\\in J\\ \\mbox{ and }\\ \\xi\\in[0,\\pi],$$ and the bounded linear operator $\\mathrm{B}:\\mathbb{U}\\to\\mathbb{X}_p$ as $$\\mathrm{B}u(t)(\\xi):=\\eta(t,\\xi)=\\int_{0}^{\\pi}K(\\zeta,\\xi)u(t)(\\zeta)\\mathrm{d}\\zeta, \\ t\\in J,\\ \\xi\\in [0,\\pi],$$ where $K\\in\\mathrm{C}([0,\\pi]\\times[0,\\pi];\\mathbb{R})$ with $K(\\zeta,\\xi)=K(\\xi,\\zeta),$ for all $\\zeta,\\xi\\in [0,\\pi]$. We assume that the operator $\\mathrm{B}$ is one-one.\n\tLet us estimate \n\t\\begin{align*}\n\t\\left\\|\\mathrm{B}u(t)\\right\\|_{\\mathbb{X}_p}^p=\\int_{0}^{\\pi}\\left|\\int_{0}^{\\pi}K(\\zeta,\\xi)u(t)(\\zeta)\\mathrm{d}\\zeta\\right|^p\\mathrm{d}\\xi.\n\t\\end{align*}\n\tApplying the Cauchy-Schwarz inequality, we have\n\t\\begin{align*}\n\t\\left\\|\\mathrm{B}u(t)\\right\\|_{\\mathbb{X}_p}^p&\\le\\int_{0}^{\\pi}\\left[\\left(\\int_{0}^{\\pi}|K(\\zeta,\\xi)|^2\\mathrm{d}\\zeta\\right)^{\\frac{1}{2}}\\left(\\int_{0}^{\\pi}|u(t)(\\zeta)|^2\\mathrm{d}\\zeta\\right)^{\\frac{1}{2}}\\right]^{p}\\mathrm{d}\\xi\\\\&=\\left(\\int_{0}^{\\pi}|u(t)(\\zeta)|^2\\mathrm{d}\\zeta\\right)^{\\frac{p}{2}}\\int_{0}^{\\pi}\\left(\\int_{0}^{\\pi}|K(\\zeta,\\xi)|^2\\mathrm{d}\\zeta\\right)^{\\frac{p}{2}}\\mathrm{d}\\xi.\n\t\\end{align*}\n\tSince the kernel $K(\\cdot,\\cdot)$ is continuous, we arrive at\n\t\\begin{align*}\n\t\\left\\|\\mathrm{B}u(t)\\right\\|_{\\mathbb{X}_p}\\le C\\left\\|u(t)\\right\\|_{\\mathbb{U}},\n\t\\end{align*}\n\tso that we get \n\t$\t\\left\\|\\mathrm{B}\\right\\|_{\\mathcal{L}(\\mathbb{U};\\mathbb{X}_p)}\\le C.$\n\tHence, the operator $\\mathrm{B}$ is bounded. Moreover, the symmetry of the kernel implies that the operator $\\mathrm{B}=\\mathrm{B}^*$ (self adjoint). 
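To make the boundedness estimate concrete, the following sketch discretises the integral operator $\\mathrm{B}$ on a grid, checks the symmetry of the resulting matrix, and verifies the bound $\\left\\|\\mathrm{B}u\\right\\|_{\\mathbb{X}_p}\\le C\\left\\|u\\right\\|_{\\mathbb{U}}$ with the constant $C=\\big(\\int_0^{\\pi}\\big(\\int_0^{\\pi}|K(\\zeta,\\xi)|^2\\mathrm{d}\\zeta\\big)^{p\\/2}\\mathrm{d}\\xi\\big)^{1\\/p}$ read off from the Cauchy-Schwarz step above. The kernel in the code anticipates the symmetric polynomial example given just below in the text; the value of $p$, the grid size, and the random test functions are illustrative choices.
\\begin{verbatim}
import numpy as np

p = 4                                  # any p in [2, infinity)
M = 400                                # grid points on [0, pi]
xi = np.linspace(0.0, np.pi, M)
dxi = xi[1] - xi[0]

# Symmetric continuous kernel K(zeta, xi) = 1 + zeta^2 + xi^2.
K = 1.0 + np.add.outer(xi**2, xi**2)

def B(u):
    # (B u)(xi) = int_0^pi K(zeta, xi) u(zeta) dzeta, via a Riemann sum.
    return (K * u[:, None]).sum(axis=0) * dxi

def Lq_norm(f, q):
    return ((np.abs(f)**q).sum() * dxi)**(1.0 / q)

# Constant from the Cauchy-Schwarz estimate in the text.
I_xi = (K**2).sum(axis=0) * dxi        # int_0^pi |K(zeta, xi)|^2 dzeta
C = ((I_xi**(p / 2.0)).sum() * dxi)**(1.0 / p)

print("symmetric kernel (B = B*):", np.allclose(K, K.T))
rng = np.random.default_rng(0)
for _ in range(5):
    u = rng.standard_normal(M)
    print(Lq_norm(B(u), p) <= C * Lq_norm(u, 2) * (1 + 1e-8))
\\end{verbatim}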
For example, one can take $K(\\xi,\\zeta)=1+\\xi^2+\\zeta^2,\\ \\mbox{for all}\\ \\xi, \\zeta\\in [0,\\pi]$.\n\tThe function $\\psi:(-\\infty,0]\\rightarrow\\mathbb{X}$ is given as\n\t\\begin{align}\n\t\\nonumber \\psi(t)(\\xi)=\\psi(t,\\xi),\\ \\xi\\in[0,\\pi].\n\t\\end{align}\t \n\tNext, the functions $f, \\rho:J\\times \\mathfrak{B}\\to\\mathbb{X}$ are defined as\n\t\\begin{align}\n\t\\nonumber f(t,\\psi)\\xi&:=\\int_{-\\infty}^{0}b(-\\theta)\\psi(\\theta,\\xi)\\mathrm{d}\\theta,\\\\\n\t\\nonumber\\rho(t,\\psi):&=t-\\sigma(\\|\\psi(0)\\|_{\\mathbb{X}}),\n\t\\end{align}\t\n\tfor $\\xi\\in[0,\\pi]$. Clearly, $f$ is continuous and uniformly bounded by $K$. These facts guarantee that the function $f$ satisfied the condition \\textit{$(H2)$} of Assumption \\ref{as2.1} and the condition \\textit{$(H4)$}.\n\t\n\tMoreover, the impulse functions $h_k:[t_k,\\tau_k]\\times\\mathbb{X}\\to\\mathbb{X},$ for $k=1,\\ldots,m,$ are defined as \n\t\\begin{align*}\n\th_k(t,x)\\xi:=\\int_{0}^{\\pi}\\rho_k(t,\\xi,z)\\cos^2(x(t_k^-)z)\\mathrm{d}z, \\ \\mbox{ for }\\ t\\in(t_k,\\tau_k],\n\t\\end{align*}\n\twhere, $\\rho_k\\in\\mathrm{C}([0,1]\\times[0,\\pi]^2;\\mathbb{R})$. It is easy to verify that the impulses $h_k,$ for $k=1,\\ldots,m,$ satisfy the condition \\textit{$(H3)$} of Assumption \\ref{as2.1}.\n\t\n\t\n\tThe system \\eqref{ex} can be transformed into the abstract form \\eqref{1.1} by using the above substitutions and it satisfies Assumption \\ref{as2.1} \\textit{$(H1)$-$(H3)$} and Assumption (\\textit{$H4$}). Moreover, it remains to verify that the associated linear system of the equation \\eqref{1.1} is approximately controllable. In order to prove this, we consider\n\t$$(T-t)^{\\alpha-1}\\mathrm{B}^*\\widehat{\\mathcal{T}}_{\\alpha,p}(T-t)^*x^*=0,\\ \\mbox{ for any}\\ x^*\\in\\mathbb{X}^*,\\ 0\\le t + \\ensuremath{\\,\\text{d}}\\Phi_{+2}^< \\, .\n\\end{align}\nThe ordered part $\\ensuremath{\\,\\text{d}}\\Phi_{+2}^<$ corresponds to the region accessible to strongly-ordered shower paths $\\ensuremath{t_0} > t > t'$, whereas the unordered part $\\ensuremath{\\,\\text{d}}\\Phi_{+2}^>$ is inaccessible to strongly-ordered showers because of the larger intermediate scale $\\ensuremath{t_0} > t' > t$. We will use V\\protect\\scalebox{0.8}{INCIA}\\xspace's sector criterion, cf.\\ sec.~3.3 in \\cite{Brooks:2020upa}, to distinguish between the two, cf.~\\cref{subsec:2to4}.\n\nIn order to be able to match the NNLO calculation with the shower, the shower needs to incorporate virtual corrections to ordinary $2\\to 3$ branchings as well as new $2\\to 4$ branchings, accounting for the simultaneous emission of two particles. These new shower terms correspond to the real-virtual and double-real corrections in the NNLO calculation. 
In addition, we need to incorporate the corresponding parton-shower counterterms.\nWe start by defining the two-particle NLO Sudakov as \\cite{Li:2016yez}\n\\begin{multline}\n \\Delta_2^\\mathrm{NLO}(\\ensuremath{t_0},t) \\\\\n = \\exp\\Bigg\\{-\\int^{\\ensuremath{t_0}}_{t}\\ensuremath{\\,\\text{d}}\\Phi_{+1}\\, {\\mathrm{A}}_{2\\mapsto3}^{(0)}(\\Phi_{+1}) w^\\mathrm{NLO}_{2\\mapsto3}(\\Phi_2,\\Phi_{+1}) \\Bigg\\} \\\\\n \\times \\exp\\Bigg\\{-\\int^{\\ensuremath{t_0}}_{t}\\ensuremath{\\,\\text{d}}\\Phi_{+2}^>\\, {\\mathrm{A}}_{2\\mapsto 4}^{(0)}(\\Phi_{+2})w^\\mathrm{LO}_{2\\mapsto4}(\\Phi_2,\\Phi_{+2}) \\Bigg\\} \\, ,\n\\label{eq:nloSudakov}\n\\end{multline}\nwhere we have introduced the $2\\mapsto4$ LO matrix-element correction factor,\n\\begin{equation}\n w^\\mathrm{LO}_{2\\mapsto4}(\\Phi_2,\\Phi_{+2}) = \\frac{{\\mathrm{R\\kern-0.15em R}}(\\Phi_2,\\Phi_{+2})}{{\\mathrm{A}}^{(0)}_{2\\mapsto4}(\\Phi_{+2}){\\mathrm{B}}(\\Phi_2)}\n\\label{eq:LOMEC2to4}\n\\end{equation}\nand the $2\\mapsto3$ NLO matrix-element correction factor $w^\\mathrm{NLO}_{2\\mapsto3}(\\Phi_{+1})$, which we write in terms of a second order correction to the LO $2\\mapsto 3$ MEC in \\cref{eq:LOMEC2to3},\n\\begin{multline}\n w^\\mathrm{NLO}_{2\\mapsto3}(\\Phi_2,\\Phi_{+1}) = w^\\mathrm{LO}_{2\\mapsto3}(\\Phi_2,\\Phi_{+1})\\\\\n \\times\\big(1+\\tilde{w}^\\mathrm{FO}_{2\\mapsto3}(\\Phi_2,\\Phi_{+1})\n +\\tilde{w}^\\mathrm{PS}_{2\\mapsto3}(\\Phi_2)\\big)\\,.\n\\label{eq:NLOMEC2to3}\n\\end{multline}\nThe coefficients $\\tilde{w}$ are given by matching the $\\Order{\\ensuremath{\\alpha_{\\mathrm{s}}}^2}$ terms\nin the expansion of the truncated shower approximation to the fixed-order result \nin \\cref{eq:expvalNNLO} \\cite{Li:2016yez,Hartgring:2013jma}.\nWe find the fixed-order contribution\n\\begin{multline}\n \\tilde{w}^\\mathrm{FO}_{2\\mapsto3}(\\Phi_2,\\Phi_{+1})=\\\\\n \\frac{{\\mathrm{R\\kern-0.15emV}}(\\Phi_2,\\Phi_{+1})}{{\\mathrm{R}}(\\Phi_2,\\Phi_{+1})} + \\int^{t}_{0}\\ensuremath{\\,\\text{d}}\\Phi_{+1}^\\prime\\, \\frac{{\\mathrm{R\\kern-0.15em R}}(\\Phi_2,\\Phi_{+1},\\Phi_{+1}^\\prime)}{{\\mathrm{R}}(\\Phi_2,\\Phi_{+1})} \\\\\n -\\left(\\frac{{\\mathrm{V}}(\\Phi_2)}{{\\mathrm{B}}(\\Phi_2)}+\\int^{\\ensuremath{t_0}}_{0}\\ensuremath{\\,\\text{d}}\\Phi_{+1}'\\,\\frac{{\\mathrm{R}}(\\Phi_2,\\Phi_{+1}')}{{\\mathrm{B}}(\\Phi_2)}\\right)\\;,\n\\label{eq:VMEC2to3}\n\\end{multline}\nand the second-order parton-shower matching term\n\\begin{multline}\n \\tilde{w}^\\mathrm{PS}_{2\\mapsto3}(\\Phi_2)\n = \\frac{\\ensuremath{\\alpha_{\\mathrm{s}}}}{2\\pi}\\ln\\frac{\\kappa^2\\mu_{\\rm S}^2}{\\ensuremath{\\mu}_\\mathrm{R}^2}\\\\\n + \\int^{\\ensuremath{t_0}}_{t}\\ensuremath{\\,\\text{d}}\\Phi_{+1}'\\, {\\mathrm{A}}_{2\\mapsto3}^{(0)}(\\Phi_{+1}')w^\\mathrm{LO}_{2\\mapsto3}(\\Phi_2,\\Phi_{+1}')\\;.\n\\end{multline}\nThe factor $\\kappa$ is a constant and $\\mu_\\mathrm{S}^2$ is the parton-shower renormalisation scale.\nThe two are conventionally chosen such that the logarithmic structure of eq.~\\eqref{eq:VMEC2to3}\nis reproduced, which leads to $\\mu_\\mathrm{S}=p_\\perp$ and $\\kappa^2=\\exp\\{K\/\\beta_0\\}$, with $K$ \nthe two-loop cusp anomalous dimension~\\cite{Kodaira:1981nh,Davies:1984hs,Davies:1984sp,Catani:1988vd}. 
\nThis is known as the CMW scheme~\\cite{Catani:1990rr}.\n\nNote that in \\cref{eq:nloSudakov}, the integral over ${\\mathrm{A}}_{2\\mapsto 4}^{(0)}$ is defined over \nthe range $[t, \\ensuremath{t_0}]$, since the ``ordered'' contribution $t'\\, {\\mathrm{A}}_{2\\mapsto4}^{(0)}(\\Phi_{+2})w^\\mathrm{LO}_{2\\mapsto4} (\\Phi_2,\\Phi_{+2})O(\\Phi_2,\\Phi_{+2}) \\nonumber\n\\end{align}\nand our final NNLO+PS matching formula takes the simple form:\n\\begin{equation}\n \\avg{O}_{\\mathrm{NNLO}+\\mathrm{PS}} = \\int \\ensuremath{\\,\\text{d}} \\Phi_2\\, {\\mathrm{B}}(\\Phi_2) k_\\mathrm{NNLO}(\\Phi_2) {\\mathcal{S}}_2(\\ensuremath{t_0},O) \\, .\n\\label{eq:nnlops}\n\\end{equation}\n\nWhen expanding the truncated shower operator ${\\mathcal{S}}_2$ in \\cref{eq:nnlops} up to order $\\ensuremath{\\alpha_{\\mathrm{s}}}^2$, NNLO accuracy is recovered for the observable $O(\\Phi_2)$, while $O(\\Phi_3)$ and $O(\\Phi_4)$ achieve NLO and LO accuracy, respectively.\nThis is true, because the combination of the iterated $2\\mapsto 3\\mapsto 4$ and the direct $2\\mapsto 4$ contributions to \\cref{eq:showerOpNLO} yields the correct double-real correction ${\\mathrm{R\\kern-0.15em R}}$ in \\cref{eq:expvalNNLO} by means of the LO MEC factors in \\cref{eq:LOMEC2to3,eq:LOMEC3to4,eq:LOMEC2to4}. Moreover the NLO correction \\cref{eq:NLOMEC2to3} recovers the correct real and real-virtual corrections ${\\mathrm{R}}$ and ${\\mathrm{R\\kern-0.15emV}}$ in \\cref{eq:expvalNNLO} by means of \\cref{eq:LOMEC2to3} and \\cref{eq:VMEC2to3}.\n\n\\begin{comment}\n\\todo{SH: I think the following paragraph is not needed anymore. We could comment earlier on when we describe the counterterms. One really does not have a choice if one wants to avoid a large logarithm in the NLO weight.}\nWe want to close this section by elaborating upon the renormalisation scales in our calculation. While the fixed-order calculation is renormalised at the scale of the hard process, $\\ensuremath{\\mu}_\\mathrm{R}^2$, the scales in the real-radiation contributions are dictated by the parton-shower resummation, meaning that the strong coupling is evaluated at the emission scales $t \\equiv p_\\perp^2$.\nThis is reflected in our calculation above by evaluating all antenna functions at the shower scales,\n\\begin{equation}\n {\\mathrm{A}}^{(\\ell)}_{n\\mapsto n+m}(\\Phi_{+m}) \\equiv {\\mathrm{A}}^{(\\ell)}_{n\\mapsto n+m}(\\Phi_{+m}; p_{\\perp,n+m}^2) \\, ,\n\\end{equation}\nwhereas all matrix-element correction factors and the Born-local (N)NLO weights are evaluated at the renormalisation scale of the hard process,\n\\begin{align}\n w^\\mathrm{MEC}_{n\\mapsto n+m}(\\Phi_n,\\Phi_{+m}) &\\equiv w^\\mathrm{MEC}_{n\\mapsto n+m}(\\Phi_n,\\Phi_{+m}; \\ensuremath{\\mu}_\\mathrm{R}^2) \\, , \\\\\n k_\\mathrm{(N)NLO}(\\Phi_2) &\\equiv k_\\mathrm{(N)NLO}(\\Phi_2; \\ensuremath{\\mu}_\\mathrm{R}^2) \\, .\n\\end{align}\nThis allows the calculation to be reorganised in a way that all logarithms containing scale hierarchies can be reabsorbed in a multiplicative factor.\n\\end{comment}\n\n\\section{Numerical Implementation} \\label{sec:implementation}\nIn this section, we want to present all necessary components of an implementation of our NNLO matching strategy. 
These are:\n\\begin{itemize}\n \\item a framework to calculate the Born-local NNLO $K$-factors in Eq.~\\eqref{eq:born_local_nnlo_kfactor}\n \\item a shower filling the strongly-ordered \\cite{Brooks:2020bhi} and unordered \\cite{Li:2016yez} regions of the single- and double-emission phase space\n \\item tree-level MECs in strongly-ordered \\cite{Fischer:2017yja} and unordered \\cite{Giele:2011cb} shower paths\n \\item NLO MECs in the first emission \\cite{Hartgring:2013jma}\n\\end{itemize}\nWith the exception of the first point, (process-dependent) implementations of these components existed in previous V\\protect\\scalebox{0.8}{INCIA}\\xspace versions (not necessarily simultaneously), and have been described in detail in the various references.\nWe have (re-)implemented all components in a semi-automated~\\footnote{Semi-automated here refers to the fact that antenna subtraction terms are explicitly implemented for each class of processes.} fashion in the V\\protect\\scalebox{0.8}{INCIA}\\xspace antenna shower in P\\protect\\scalebox{0.8}{YTHIA}\\xspace 8.3. We access loop matrix elements via a novel M\\protect\\scalebox{0.8}{CFM}\\xspace \\cite{Campbell:1999ah,Campbell:2011bn,Campbell:2015qma,Campbell:2019dru} interface presented in \\cite{Campbell:2021vlt} and tree-level matrix elements via a new run-time interface \\cite{ComixInterface} to the C\\protect\\scalebox{0.8}{OMIX}\\xspace matrix element generator \\cite{Gleisberg:2008fv} in S\\protect\\scalebox{0.8}{HERPA}\\xspace \\cite{Gleisberg:2008ta,Sherpa:2019gpd}.\n\nOur NNLO matching algorithm can be summarised in the following steps:\n\\begin{enumerate}\n \\item[1.] Generate a phase space point according to the Born cross section ${\\mathrm{B}}(\\Phi_2)$.\n \\item[2.] Calculate the Born-local NNLO factor $k_\\mathrm{NNLO}(\\Phi_2)$ and reweight the phase space point by the result. \n \\item[3.] Let the phase-space maximum given by the invariant mass of the two Born partons define the starting scale for the shower, $t_\\mathrm{now} = t_0(\\Phi_2)$.\n \\item[4.] Starting from the current shower scale, $t_\\mathrm{now}$, let the $2\\mapsto 3$ and $2\\mapsto 4$ showers compete for the highest branching scale. \n \\item[5.] Update the current shower scale to be that of the winning branching, $t_\\mathrm{now} = \\mathrm{max}(t_{2\\mapsto 3},t_{2\\mapsto 4})$.\n \\item[6a.] If the winning branching is a $2\\mapsto 3$ branching, calculate the accept probability including the NLO MEC $w^\\mathrm{NLO}_{2\\mapsto3}$. \n \\begin{itemize}\n \\item If rejected, continue from step 4.\n \\item If accepted, continue with a LO shower from the resulting three-particle configuration, starting from $t_\\mathrm{now}$ and including the LO MEC $w^\\mathrm{LO}_{3\\mapsto 4}$ when calculating accept probabilities for the $3\\mapsto4$ step. \n \\end{itemize}\n When a $3\\mapsto 4$ branching is accepted (or the shower cutoff scale is reached), continue with step 7.\n \\item[6b.] If the winning branching is a $2\\mapsto 4$ branching, calculate the accept probability including the LO MEC $w^\\mathrm{LO}_{2\\mapsto4}$. \n \\begin{itemize}\n \\item If rejected, continue from step 4. \n \\item If accepted, continue with step 7.\n \\end{itemize}\n \\item[7.] Continue with a standard (possibly uncorrected) shower from the resulting four-particle configuration, starting from $t_\\mathrm{now}$. 
\n\\end{enumerate}\nIt should be emphasised that the matrix-element correction factors make this algorithm independent of the splitting kernels (i.e.\\ antenna functions in our case) up to the matched order and the shower merely acts as an efficient Sudakov-weighted phase-space generator. Hence, if the algorithm is stopped after step 6, an NNLO-matched result is obtained, which can be showered by any other parton shower, just as is the case for P\\protect\\scalebox{0.8}{OWHEG}\\xspace NLO matching. Note, that there remains a dependence on the ordering variable, which has to be properly accounted for.\n\n\\subsection{NNLO Kinematics}\\label{subsec:kinematics}\nFor both, the unordered shower contributions and the Born-local NNLO weight, new kinematic maps are needed to reflect their direct $2\\mapsto 4$, i.e.\\ unordered or double-unresolved, nature. We utilise that the $n$-particle phase space measure\nmay be factorised into the product of a $2\\mapsto 3$ antenna phase space and the $n-1$-particle phase space measure, as well as into the product of a $2\\mapsto 4$ antenna phase space and the $n-2$-particle phase space.\nThis allows us to write the $2\\mapsto 4$ antenna phase space as the product of two $2\\mapsto 3$ antenna phase spaces,\n\\begin{multline}\n \\ensuremath{\\,\\text{d}} \\Phi_{+2} (p_I+p_K; p_i, p_{j_1}, p_{j_2}, p_{k}) \\\\\n = \\ensuremath{\\,\\text{d}} \\Phi_{+1} (p_I+p_K; \\hat{p}_i, \\hat{p}_{j}, p_{k}) \\\\\n \\times \\ensuremath{\\,\\text{d}} \\Phi_{+1} (\\hat{p}_i+\\hat{p}_j; p_i, p_{j_1}, p_{j_2}) \\, ,\n\\label{eq:PS2to4}\n\\end{multline}\ncorresponding to the kinematic mapping\n\\begin{equation}\n p_I + p_K = \\hat{p}_i + \\hat{p}_j + p_k = p_i + p_{j_1} + p_{j_2} + p_k \\, ,\n\\end{equation}\neffectively representing a tripole map \\cite{Gehrmann-DeRidder:2003pne}. In line with the phase space factorisation, the kinematic mapping is then constructed as an iteration of two on-shell $2\\mapsto 3$ antenna maps given in sec.~2.3 in \\cite{Brooks:2020upa}. \n\nWe have tested the validity of our kinematic maps by comparing V\\protect\\scalebox{0.8}{INCIA}\\xspace's phase-space mappings (double-gluon emission and gluon-emission-plus-splitting) to a flat sampling via R\\protect\\scalebox{0.8}{AMBO}\\xspace.\n\n\\subsection{Unordered Shower Contributions}\\label{subsec:2to4}\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=0.4\\textwidth]{main205-1.pdf}\n \\includegraphics[width=0.4\\textwidth]{main205-2.pdf}\n \\includegraphics[width=0.4\\textwidth]{main205-3.pdf}\n \\includegraphics[width=0.4\\textwidth]{main205-4.pdf}\n \\caption{Ratio of the evolution variable of the four-parton and three-parton configuration $\\log(p_{\\perp,4}^2\/p_{\\perp,3}^2)$ in $e^+e^-\\to 4j$. The region $> 0$ corresponds to unordered contributions not reached by strongly-ordered showers.}\n \\label{fig:orderingZDecays}\n\\end{figure*}\n\nAn important part of our proposal is the inclusion of double-unresolved radiation in the shower evolution.\nTo this end, we employ the sector-antenna framework \\cite{Brooks:2020upa} and amend it by direct $2\\mapsto 4$ branchings as described in \\cite{Li:2016yez}. 
\nIn the sector-shower approach, each branching is restricted to the region in phase space where it minimises the resolution variable, defined for final-state clusterings by\n\\begin{equation}\n Q^2_{\\mathrm{res},j} = \\begin{cases} \\frac{s_{ij}s_{jk}}{s_{IK}} & \\text{if } j \\text{ is a gluon} \\\\ s_{ij} \\sqrt{\\frac{s_{jk}}{s_{IK}}} & \\text{if } (i,j) \\text{ is a quark-antiquark pair} \\end{cases}\n\\label{eq:resVar}\n\\end{equation}\nThis is achieved by a ``sectorisation'' of phase space according the partition of unity,\n\\begin{equation}\n 1 = \\sum\\limits_j \\Theta^\\mathrm{sct}_{j\/IK} = \\sum\\limits_j \\theta\\left(\\min\\limits_{i}\\left\\{Q^2_{\\mathrm{res},i}\\right\\} - Q^2_{\\mathrm{res},j}\\right) \\, ,\n\\label{eq:sectorVetoLO}\n\\end{equation}\nwhich is implemented in the shower evolution as an explicit veto for each trial branching.\nSince only a single branching kernel contributes per colour-ordered phase space point, sector antenna functions have to incorporate the full singularity structure associated with the respective sector. At LO, this amounts to including both the full single-collinear and single-soft limits in the antenna function. The full set of V\\protect\\scalebox{0.8}{INCIA}\\xspace's LO sector antenna functions is collected in \\cite{Brooks:2020upa}.\n\nBy construction, the default sector shower generates only strongly-ordered sequences\\footnote{This is different to virtually any other strongly-ordered shower, where recoil effects introduce unordered sequences. Such phase space points are vetoed in a sector shower.}, as the sector veto ensures that each emission is the softest (or most-collinear) in the post-branching configuration. The inclusion of direct $2\\mapsto 4$ branchings (which look unordered from an iterated $2\\mapsto 3$ point of view) in the sector shower is facilitated by extending the sector decomposition in \\cref{eq:sectorVetoLO} by an ordering criterion,\n\\begin{align}\n 1 &= \\sum\\limits_j \\left[\\Theta^<_{j\/IK}\\Theta^\\mathrm{sct}_{j\/IK} + \\Theta^>_{j\/IK}\\Theta^\\mathrm{sct}_{j\/IK}\\right]\\\\\n &= \\underbrace{\\sum\\limits_j \\theta\\left(\\hat{p}_{\\perp,\\hat{j}}^2 - p_{\\perp,j}^2\\right)\\Theta^\\mathrm{sct}_{j\/IK}}_{2\\mapsto 3 \\text{ (strongly ordered)}} + \\underbrace{\\sum\\limits_j \\theta\\left(p_{\\perp,j}^2 - \\hat{p}_{\\perp,\\hat{j}}^2\\right)\\Theta^\\mathrm{sct}_{j\/IK}}_{2\\mapsto4 \\text{ (unordered)}} \\nonumber \n\\end{align}\nwhere $p_\\perp^2$ denotes V\\protect\\scalebox{0.8}{INCIA}\\xspace's transverse-momentum ordering variable and hatted variables denote the intermediate node in a sequence $IL \\mapsto \\hat{i} \\hat{j} \\hat{\\ell} \\mapsto i j k \\ell$. Here, the scales $p_\\perp^2$ and $\\hat p_\\perp^2$ are uniquely defined by the ordering variable of the sector-shower emission, i.e., that emission which minimises \\cref{eq:resVar}. Direct $2\\mapsto 4$ emissions are thus restricted to the unordered region of the double-emission phase space, denoted as $\\ensuremath{\\,\\text{d}}\\Phi_{+2}^>$ in \\cref{eq:nloSudakov} and defined as\n\\begin{equation}\n \\ensuremath{\\,\\text{d}}\\Phi_{+2}^> = \\sum\\limits_{j} \\Theta^>_{j\/IK} \\Theta^\\mathrm{sct}_{j\/IK} \\ensuremath{\\,\\text{d}} \\Phi_{+2}^j \\, .\n\\end{equation}\n\nFor $2\\to 4$ emissions off quark-antiquark and gluon-gluon antennae, we use the double-real antenna functions in \\cite{GehrmannDeRidder:2004tv,GehrmannDeRidder:2005aw,GehrmannDeRidder:2005cm}. 
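As an illustration, the following sketch evaluates the sector resolution variable of \\cref{eq:resVar} for each candidate clustering in a colour-ordered final state and returns the clustering that owns the phase-space point, i.e., the one selected by the sector veto in \\cref{eq:sectorVetoLO}. It assumes massless final-state momenta with $s_{ab}=2\\,p_a\\cdot p_b$ and $s_{IK}=(p_i+p_j+p_k)^2$; the toy momenta and the list of candidate clusterings are purely illustrative.
\\begin{verbatim}
import numpy as np

def mdot(p, q):
    # Minkowski product with metric (+,-,-,-) for four-vectors [E, px, py, pz].
    return p[0]*q[0] - p[1]*q[1] - p[2]*q[2] - p[3]*q[3]

def q2_res(p_i, p_j, p_k, j_is_gluon=True):
    # Sector resolution variable for clustering particle j in the (i, j, k) triplet.
    s_ij = 2.0 * mdot(p_i, p_j)
    s_jk = 2.0 * mdot(p_j, p_k)
    s_IK = mdot(p_i + p_j + p_k, p_i + p_j + p_k)
    if j_is_gluon:
        return s_ij * s_jk / s_IK
    return s_ij * np.sqrt(s_jk / s_IK)   # (i, j) a quark-antiquark pair

def sector_owner(momenta, clusterings):
    # clusterings: list of (i, j, k, j_is_gluon); return index minimising Q2_res.
    vals = [q2_res(momenta[i], momenta[j], momenta[k], g)
            for (i, j, k, g) in clusterings]
    return int(np.argmin(vals)), vals

# Toy massless, momentum-conserving q g g qbar configuration (illustrative only).
q    = np.array([30.0,  30.0,   0.0, 0.0])
g1   = np.array([20.0,   0.0,  20.0, 0.0])
g2   = np.array([20.0,   0.0, -20.0, 0.0])
qbar = np.array([30.0, -30.0,   0.0, 0.0])
momenta = [q, g1, g2, qbar]

# Either gluon may be the emission inside its neighbouring antenna.
clusterings = [(0, 1, 2, True), (1, 2, 3, True)]
print(sector_owner(momenta, clusterings))
\\end{verbatim}
In the shower itself this comparison is implemented as an explicit veto on every trial branching, so that exactly one branching kernel contributes per colour-ordered phase-space point.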
We note that NLO quark-gluon antenna functions appear in the Standard Model at lowest order for three final-state particles and are hence not of interest for our test case of $e^+e^-\\to jj$. We wish to point out, however, that the NLO quark-gluon antenna functions in \\cite{GehrmannDeRidder:2005hi,GehrmannDeRidder:2005cm} contain spurious singularities which have to be removed before a shower implementation is possible.\n\nAs a validation, we show in \\cref{fig:orderingZDecays} the ratio of the four-jet to three-jet evolution variable for $e^+e^- \\to 4j$ at $\\sqrt{s} = 240~\\giga e\\volt$. To focus on the perturbative realm, the shower evolution is constrained to the region between $\\ensuremath{t_0} = s$ and $\\ensuremath{t_\\mathrm{c}} = (5~\\giga e\\volt)^2$. The region $>0$ corresponds to the unordered part of phase space to which strongly-ordered showers cannot contribute. Due to the use of sector showers, there is a sharp cut-off at the boundary between the ordered and unordered region, as the sector criterion ensures that the last emission is always the softest and therefore, no recoil effects can spoil the strong ordering of the shower. As expected, the inclusion of direct $2\\to 4$ branchings gives access to the unordered parts of phase space, a crucial element of our matching method.\n\n\\subsection{LO Matrix-Element Corrections}\nIn order for the shower expansion to match the fixed-order calculation, we need (iterated) $2\\mapsto 3$ tree-level MECs and (direct) $2\\mapsto4$ tree-level MECs. Both take a particularly simple form in the sector-antenna framework, as will be shown below.\n\nAt leading-colour, tree-level MECs to the ordered sector shower can be constructed as \\cite{LopezVillarejo:2011ap,Fischer:2017yja}\n\\begin{align*}\n w_{2\\mapsto 3,i}^{\\mathrm{LO},\\mathrm{LC}}(\\Phi_2,\\Phi_{+1})\n &= \\frac{{\\mathrm{R}}^\\mathrm{LC}_{i}(\\Phi_2,\\Phi_{+1})}{\\sum_j \\Theta^\\mathrm{sct}_{j\/IK} A^\\mathrm{sct}_{j\/IK}(p_i,p_j,p_k){\\mathrm{B}}(\\Phi_2)} \\, ,\\\\\n w_{3\\mapsto 4,i}^{\\mathrm{LO},\\mathrm{LC}}(\\Phi_3,\\Phi_{+1})\n &= \\frac{{\\mathrm{R\\kern-0.15em R}}^\\mathrm{LC}_{i}(\\Phi_3,\\Phi_{+1})}{\\sum_j \\Theta^\\mathrm{sct}_{j\/IK} A^\\mathrm{sct}_{j\/IK}(p_i,p_j,p_k){\\mathrm{R}}^\\mathrm{LC}_i(\\Phi_3)} \\, ,\n\\end{align*}\nwhere\n\\begin{align*}\n {\\mathrm{B}}(\\Phi_2) &= \\abs{{\\mathcal{M}}_2^{(0)}(p_1,p_2)}^2 \\, , \\\\\n {\\mathrm{R}}^\\mathrm{LC}_{i}(\\Phi_3) &= \\abs{{\\mathcal{M}}_3^{(0)}(\\sigma_i\\{p_1,p_2,p_3\\})}^2 \\, , \\\\\n {\\mathrm{R\\kern-0.15em R}}^\\mathrm{LC}_{i}(\\Phi_4) &= \\abs{{\\mathcal{M}}_4^{(0)}(\\sigma_i\\{p_1,p_2,p_3,p_4\\})}^2 \\, ,\n\\end{align*}\ndenote squared leading-colour colour-ordered amplitudes with the index $i$ denoting the respective permutation $\\sigma_i$ (the number of permutations depends on the process). 
The sector veto $\\Theta^\\mathrm{sct}_{j\/IK}$ ensures that only the most singular term contributes in the denominators, rendering the fraction exceptionally simple.\n\nDirect $2\\mapsto 4$ branchings can be corrected in an analogous way, replacing the sum over $2\\mapsto3$ antenna functions with a sum of $2\\mapsto4$ ones,\n\\begin{multline*}\n w_{2\\mapsto 4,i}^{\\mathrm{LO},\\mathrm{LC}}(\\Phi_2,\\Phi_{+2})\n \\\\\n = \\frac{{\\mathrm{R\\kern-0.15em R}}^\\mathrm{LC}_{i}(\\Phi_2,\\Phi_{+2})}{\\sum_{\\{j,k\\}} \\Theta^\\mathrm{sct}_{jk\/IL} A^\\mathrm{sct}_{jk\/IL}(p_i,p_j,p_k,p_\\ell){\\mathrm{B}}(\\Phi_2)} \\, ,\n\\end{multline*}\n\nThe full-colour matrix element can be recovered on average by multiplication with a full-colour to leading-colour-summed matrix-element weight,\n\\begin{align}\n w_{2\\mapsto 3,i}^{\\mathrm{LO}} &= w_{2\\mapsto 3,i}^{\\mathrm{LO},\\mathrm{LC}}\n \\times \\frac{{\\mathrm{R}}(\\Phi_2,\\Phi_{+1})}{\\sum_j {\\mathrm{R}}^\\mathrm{LC}_{j}(\\Phi_2,\\Phi_{+1})} \\, , \\\\\n w_{3\\mapsto 4,i}^{\\mathrm{LO}} &= w_{3\\mapsto 4,i}^{\\mathrm{LO},\\mathrm{LC}}\n \\times \\frac{{\\mathrm{R\\kern-0.15em R}}(\\Phi_3,\\Phi_{+1})}{\\sum_j {\\mathrm{R\\kern-0.15em R}}^\\mathrm{LC}_{j}(\\Phi_3,\\Phi_{+1})} \\, , \\\\\n w_{2\\mapsto 4,i}^{\\mathrm{LO}} &= w_{2\\mapsto 4,i}^{\\mathrm{LO},\\mathrm{LC}}\n \\times \\frac{{\\mathrm{R\\kern-0.15em R}}(\\Phi_2,\\Phi_{+2})}{\\sum_j {\\mathrm{R\\kern-0.15em R}}^\\mathrm{LC}_{j}(\\Phi_2,\\Phi_{+2})} \\, .\n\\end{align}\n\nFor gluon splittings, multiple histories contribute even in the sector shower, because all permutations of quark lines have to be taken into account. To ensure that the MEC factors remain finite for final states with multiple quark pairs, an additional quark-projection factor has to be included. 
Since we only deal with a maximum of two quark pairs, it is given by\n\\begin{equation}\n \\rho_j = \\frac{A^\\mathrm{sct}_{j_q\/g_IX_K}(\\bar q_i, q_j, X_k)}{\\sum_{j}A^\\mathrm{sct}_{j_q\/g_IX_K}(\\bar q_i, q_j, X_k)}\n\\end{equation}\nfor $2\\to 3$ branchings and\n\\begin{equation}\n \\rho_j = \\frac{A^\\mathrm{sct}_{j_qk_{\\bar q}\/X_IY_L}(X_i, q_j,\\bar q_k, Y_\\ell)}{\\sum_{j}A^\\mathrm{sct}_{j_qk_{\\bar q}\/X_IY_L}(X_i, q_j, \\bar q_k, Y_\\ell)}\n\\end{equation}\nfor $2\\mapsto 4$ branchings.\n\n\\subsection{NLO Matrix-Element Corrections}\nMaking the antenna subtraction terms explicit, the fixed-order correction to the NLO matrix-element correction \\cref{eq:NLOMEC2to3} reads\n\\begin{align}\n &\\tilde{w}^\\mathrm{FO}_{2\\mapsto3}(\\Phi_2,\\Phi_{+1}) = \\frac{{\\mathrm{R\\kern-0.15emV}}(\\Phi_2,\\Phi_{+1})}{{\\mathrm{R}}(\\Phi_2,\\Phi_{+1})} + \\frac{{\\mathrm{I}}^\\mathrm{NLO}(\\Phi_2,\\Phi_{+1})}{{\\mathrm{R}}(\\Phi_2,\\Phi_{+1})}\n \\label{eq:VMEC2to3Numerical} \\\\\n &\\, + \\int^{t}_{0}\\ensuremath{\\,\\text{d}}\\Phi_{+1}^\\prime\\, \\left[ \\frac{{\\mathrm{R\\kern-0.15em R}}(\\Phi_2,\\Phi_{+1},\\Phi_{+1}^\\prime)}{{\\mathrm{R}}(\\Phi_2,\\Phi_{+1})} - \\frac{{\\mathrm{S}}^\\mathrm{NLO}(\\Phi_2,\\Phi_{+1},\\Phi_{+1}^\\prime)}{{\\mathrm{R}}(\\Phi_2,\\Phi_{+1})} \\right] \\nonumber\\\\\n &\\, - \\Bigg(\\frac{{\\mathrm{V}}(\\Phi_2)}{{\\mathrm{B}}(\\Phi_2)} + \\frac{{\\mathrm{I}}^\\mathrm{NLO}(\\Phi_2)}{{\\mathrm{B}}(\\Phi_2)} \\nonumber \\\\\n &\\qquad + \\int^{\\ensuremath{t_0}}_{0}\\ensuremath{\\,\\text{d}}\\Phi_{+1}'\\, \\left[\\frac{{\\mathrm{R}}(\\Phi_2,\\Phi_{+1}^\\prime)}{{\\mathrm{B}}(\\Phi_2)} - \\frac{{\\mathrm{S}}^\\mathrm{NLO}(\\Phi_2,\\Phi_{+1}^\\prime)}{{\\mathrm{B}}(\\Phi_2)} \\right]\\Bigg)\\, , \\nonumber\n\\end{align}\nwith the differential NLO antenna subtraction terms ${\\mathrm{S}}^\\mathrm{NLO}(\\Phi_2,\\Phi_{+1}^\\prime)$, ${\\mathrm{S}}^\\mathrm{NLO}(\\Phi_2,\\Phi_{+1},\\Phi_{+1}^\\prime)$ and their integrated counterparts ${\\mathrm{I}}^\\mathrm{NLO}_{{\\mathrm{S}}}(\\Phi_2)$, ${\\mathrm{I}}^\\mathrm{NLO}_{{\\mathrm{S}}}(\\Phi_2,\\Phi_{+1})$ cf.~\\cref{eq:expvalNNLO,eq:subtTermsNLO1jet}.\nBased on the argument of the last subsection, we construct the full-colour NLO matrix-element correction as\n\\begin{align}\n w^\\mathrm{NLO}_{2\\mapsto 3,i}(\\Phi_2,\\Phi_{+1}) &= w^{\\mathrm{LO},\\mathrm{LC}}_{2\\mapsto3,i}(\\Phi_2,\\Phi_{+1}) \\frac{{\\mathrm{R}}(\\Phi_2,\\Phi_{+1})}{\\sum_j {\\mathrm{R}}^\\mathrm{LC}_{j}(\\Phi_2,\\Phi_{+1})} \\nonumber \\\\ \n &\\quad \\times (1 + \\tilde{w}^\\text{FO}_{2\\mapsto 3}(\\Phi_2,\\Phi_{+1}) + \\tilde{w}^\\text{PS}_{2\\mapsto 3}(\\Phi_2)) \\, .\n\\end{align}\n\nThe integration over the radiation phase spaces denoted $\\Phi_{+1}^\\prime$ in \\cref{eq:VMEC2to3Numerical} is done numerically, utilising antenna kinematics to map $3$-parton configurations to $4$-parton configurations (similarly for $2$-parton configurations). This phase-space generation approach will be described in detail in the next subsection in the context of the NNLO Born weight.\nNote that the radiation phase space $\\Phi_{+1}$ in \\cref{eq:VMEC2to3Numerical} is generated by the shower.\n\n\\subsection{NNLO Born Weight}\nThe Born-local NNLO weight can be calculated numerically using a ``forward-branching'' phase-space generation approach \\cite{Frixione:2007vw,Hoche:2010pf,Alioli:2010xd,Giele:2011tm,Figy:2018imt}, which has previously been applied to unweighted NLO event generation, using Catani-Seymour dipole subtraction \\cite{Campbell:2012cz}. 
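The bracketed combinations in \\cref{eq:VMEC2to3Numerical} are numerically tractable precisely because the real correction and its antenna subtraction term share the same singular limits, so their difference can be integrated by plain Monte Carlo sampling. The following toy example illustrates the mechanism with a one-dimensional stand-in for an unresolved limit; the integrand is purely illustrative and unrelated to the actual antenna functions.
\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def real(x):
    # Toy "real-emission" integrand, divergent like 1/x as x -> 0.
    return (1.0 + x + 0.5 * x**2) / x

def subtraction(x):
    # Toy "subtraction term" with the same 1/x behaviour.
    return 1.0 / x

# Monte Carlo estimate of int_0^1 [real - subtraction] dx. The difference is
# finite point by point (= 1 + 0.5 x), so uniform sampling converges without
# any special treatment of the x -> 0 region; exact value is 1.25.
x = rng.uniform(0.0, 1.0, 200_000)
w = real(x) - subtraction(x)
print(w.mean(), "+/-", w.std() / np.sqrt(len(x)))
\\end{verbatim}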
The application to NNLO corrections to $e^+e^- \\to 2j$ using antenna subtraction has been outlined in \\cite{Weinzierl:2006ij}.\n\nGiven a Born phase space point, the real-radiation phase space is generated by uniformly sampling the shower variables $(t,\\zeta,\\phi)$ for each antenna, which represent integration channels in this context. As for the shower evolution, every phase space point is restricted to the sector in which the emission(s) correspond to the most-singular clusterings. \nThe momenta of the Born$+1j$ point are constructed according to the same kinematic map as the shower uses, summarised in sec.~2.3 in \\cite{Brooks:2020bhi}. Since antenna functions are azimuthally averaged, they do not cancel spin-correlations in collinear gluon branchings locally. To obtain a point-wise pole cancellation, the subtracted real correction ${\\mathrm{R}}-{\\mathrm{S}}$ can be evaluated on two correlated phase space points,\n\\begin{equation*}\n \\left\\{ \\left(t,\\zeta,\\phi\\right), \\left(t,\\zeta,\\phi+\\uppi\/2\\right)\\right\\} \\,\n\\end{equation*}\nwhich cancels the collinear spin correlation exactly, as it is proportional to $\\cos(2\\phi)$.\nTo obtain double-real radiation phase space points for the subtracted double-real correction ${\\mathrm{R\\kern-0.15em R}}-{\\mathrm{S}}$, this procedure can be iterated, yielding four angular-correlated phase space points which cancel spin correlations in double single-collinear and triple-collinear limits.\nDue to the bijective nature of the sector-antenna framework, each $3$- or $4$-particle phase-space point obtained in this way can be mapped back uniquely to its $2$-particle origin, making the NNLO weight exactly Born-local. For $e^+e^- \\to 2j$ this procedure is identical to the one in \\cite{Weinzierl:2006ij}.\n\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=0.4\\textwidth]{spiketest-nnlo-tc.pdf}\n \\includegraphics[width=0.4\\textwidth]{spiketest-nnlo-ds.pdf}\\\\\n \\includegraphics[width=0.4\\textwidth]{nnlotest-134.pdf}\n \\includegraphics[width=0.4\\textwidth]{nnlotest-243.pdf}\n \\caption{Test of the convergence of the double-real subtraction term ${\\mathrm{S}}(\\Phi_{2},\\Phi_{+2},O)$ in \\cref{eq:SNNLORR} in $e^+e^-\\to q g g \\bar q$. \\textsl{Top row}: progression of weight distributions from $x=10^{-2}$ to $x=10^{-4}$ in the triple-collinear limit ($s_{134}\/s_{1234} < x$) and double-soft limit ($s_{134}s_{234}\/s_{1234}^2 < x$). \\textsl{Bottom row}: trajectories $x\\cdot s_{134}$, $x\\cdot s_{234}$, $x\\to 0$ approaching the two triple collinear limits. Phase space points are not azimuthally averaged.}\n \\label{fig:subtractionNNLO}\n\\end{figure*}\n\nWe have implemented the NNLO antenna subtraction terms for processes with two massless final-state jets, cf.~e.g.~\\cite{GehrmannDeRidder:2004tv}, in V\\protect\\scalebox{0.8}{INCIA}\\xspace in a semi-automated fashion. \nAs a validation, we illustrate the convergence of the double-real radiation subtraction term \\cref{eq:SNNLORR} in the triple-collinear and double-soft limits for the process $e^+e^-\\to q g g \\bar q$ in \\cref{fig:subtractionNNLO}. 
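The effect of the correlated pair of phase-space points can be checked in isolation: for any weight of the form $w(\\phi)=a+b\\cos(2\\phi)$, averaging over $\\phi$ and $\\phi+\\uppi\\/2$ removes the azimuthal term exactly. The sketch below demonstrates this with arbitrary coefficients; it is a toy check of the averaging trick only, not of the subtraction terms themselves.
\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

def toy_weight(phi, a=2.0, b=0.7):
    # Stand-in for a collinear emission weight with a cos(2 phi) spin correlation.
    return a + b * np.cos(2.0 * phi)

phi = rng.uniform(0.0, 2.0 * np.pi, 10)
single = toy_weight(phi)                                          # varies with phi
paired = 0.5 * (toy_weight(phi) + toy_weight(phi + 0.5 * np.pi))  # cos(2 phi) cancels

print(single)   # spread between a - b and a + b
print(paired)   # equal to a for every point, independent of phi
\\end{verbatim}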
Phase space points are sampled according to the kinematic map in \\cref{subsec:kinematics} and we do not make use of the azimuthal averaging alluded to above.\n\nIt should be noted that a numerical calculation of the Born-local NNLO weight is not necessary for colour-singlet decays, as the inclusive $K$-factors are well known from analytical calculations, cf.~e.g.~\\cite{Chetyrkin:1996ela,GehrmannDeRidder:2004tv} for $Z\\to q\\bar q$ (with massless quarks), \\cite{Gorishnii:1990zu,Chetyrkin:1996sr,Baikov:2005rw,DelDuca:2015zqa} for $H \\to b\\bar b$ (with massless $b$s), and \\cite{Chetyrkin:1997iv,GehrmannDeRidder:2005aw} for $H\\to gg$ (in the Higgs effective theory).\n\n\\section{Conclusions and Outlook} \\label{sec:conclusions}\nWe have presented a technique to match final-state parton showers fully-differentially to next-to-next-to-leading order calculations in processes with two final-state jets. To our knowledge, this is the first method of its kind.\n\nWe have outlined a full-fledged numerical implementation in the V\\protect\\scalebox{0.8}{INCIA}\\xspace antenna shower in the P\\protect\\scalebox{0.8}{YTHIA}\\xspace 8.3 event generator. Phenomenological studies employing our strategy will be presented in separate works.\n\nWe want to close by noting that, while we here focused on the simplest case of two massless final-state jets, the use of the NNLO antenna subtraction formalism facilitates its adaption to more complicated processes such as $e^+ e^- \\to t\\bar t$ or $e^+e^-\\to 3j$. Considering the latter, spurious singularities in the quark-gluon NNLO antenna subtraction terms need to be removed before exponentiation in the shower.\nFor future work, an extension of our method to processes with coloured initial states can be envisioned, given the applicability of the NNLO antenna subtraction to hadronic collisions.\n\n\\section*{Acknowledgements} \nWe thank Aude Gehrmann-de Ridder and Thomas Gehrmann for providing us with FORM files of their antenna functions.\nWe thank Philip Ilten for the development of a general matrix-element generator interface for P\\protect\\scalebox{0.8}{YTHIA}\\xspace 8.3, which allowed us to interface C\\protect\\scalebox{0.8}{OMIX}\\xspace in this work.\nCTP is supported by the Monash Graduate Scholarship, the Monash International Postgraduate Research Scholarship, and the J.L.~William~Scholarship.\nHTL is supported by the U.S. Department of Energy under Contract No. DE-AC02-06CH11357 and the National Science Foundation under Grant No. NSF-1740142.\nThis research was supported by Fermi National Accelerator Laboratory (Fermilab), a U.S. Department of Energy, Office of Science, HEP User Facility. Fermilab is managed by Fermi Research Alliance, LLC (FRA), acting under Contract No. DE-AC02-07CH11359. \nThis work was further partly funded by the Australian Research Council via Discovery Project DP170100708 \u2014 \"Emergent Phenomena in Quantum Chromodynamics\".\nThis work was also supported in part by the European Union's Horizon 2020 research and innovation programme under the Marie Sk\\l{}odowska-Curie grant agreement No 722104 \u2013 MCnetITN3.\n\n\\bibliographystyle{elsarticle-num}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} {"text":"\\section{Introduction}\nNext generation 5G cellular systems will encompass frequencies from around 500 MHz all the way to around 100 GHz. 
For the development of new 5G systems to operate in bands above 6 GHz, there is a need for accurate radio propagation models for these bands which are not fully modeled by existing channel models below 6 GHz, as previous generations were designed and evaluated for operation at frequencies only as high as 6 GHz. One important example is the recently developed 3D-Indoor Hotspot (InH) channel model~\\cite{3GPP36873}. This paper is a summary of key results provided in a much more detailed white paper by the authors found at the link in~\\cite{5GSIG}, in addition to a 3GPP-style outdoor contribution in~\\cite{5GSIG_VTC16}. The 3GPP 3D channel model provides additional flexibility for the elevation dimension, thereby allowing modeling two dimensional antenna systems, such as those that are expected in next generation system deployments. It is important for future system design to develop a new channel model that will be validated for operation at higher frequencies (e.g., up to 100 GHz) and that will allow accurate performance evaluation of possible future technical specifications in indoor environments. Furthermore, the new models should be consistent with the models below 6 GHz. In some cases, the requirements may call for deviations from the modeling parameters or methodology of the existing models, but these deviations should be kept to a bare minimum and only introduced when necessary for supporting the 5G simulation use cases.\n\nThere are many existing and ongoing campaign efforts worldwide targeting 5G channel measurements and modeling. They include METIS2020~\\cite{METIS2015}, COST2100\/COST~\\cite{COST2100}, IC1004~\\cite{COSTic1004}, ETSI mmWave~\\cite{ETSI2015}, NIST 5G mmWave Channel Model Alliance~\\cite{NIST}, MiWEBA~\\cite{MiWEBA2014}, mmMagic~\\cite{mmMagic}, and NYU WIRELESS~\\cite{Rap13a,Rap15a,Rap15b,Mac15a}. METIS2020, for instance, has focused on 5G technologies and has contributed extensive studies in terms of channel modelling. Their target requirements include a wide range of frequency bands (up to 86 GHz), very large bandwidths (hundreds of MHz), fully three dimensional and accurate polarization modelling, spherical wave modelling, and high spatial resolution. The METIS channel models consist of a map-based model, stochastic model, and a hybrid model which can meet requirement of flexibility and scalability.\n\nThe COST2100 channel model is a geometry-based stochastic channel model (GSCM) that can reproduce the stochastic properties of multiple-input\/multiple output (MIMO) channels over time, frequency, and space. On the other hand, the 5G mmWave Channel Model Alliance is newly established and will establish guidelines for measurement calibration and methodology, modeling methodology, as well as parameterization in various environments and a database for channel measurement campaigns. NYU WIRELESS has conducted and published extensive urban propagation measurements at 28, 38, 60, and 73 GHz for both outdoor and indoor channels, and has created large-scale and small-scale channel models and concepts of \\emph{time cluster spatial lobes} (TCSL) to model multiple multipath time clusters that are seen to arrive in particular directions~\\cite{Rap15a,Rap13a,Rap15b,Samimi15a,Samimi15b,Samimi15c,Nie13a}.\n\nThis paper presents a brief overview of the indoor channel properties for bands up to 100 GHz based on extensive measurements and results across a multitude of bands. 
In addition we present a preliminary set of channel parameters suitable for indoor 5G simulations that are capable of capturing the main properties and trends.\n\n\\section{Requirements For New Channel Model}\n\nThe requirements of the new channel model that will support 5G operation across frequency bands up to 100 GHz are outlined below:\n\\begin{enumerate}\n\\item The new channel model should preferably be based on the existing 3GPP 3D channel model~\\cite{3GPP36873} but with extensions to cater for additional 5G modeling requirements and scenarios, for example:\n\t\\begin{enumerate}\n\t\t\\item Antenna arrays, especially at higher-frequency millimeter-wave bands, will very likely be 2D and dual-polarized both at the access point (AP) and at the user equipment (UE) and will hence need properly-modeled azimuth and elevation angles of departure and arrival of multipath components.\n\t\t\\item Individual antenna elements will have antenna radiation patterns in azimuth and elevation and may require separate modeling for directional performance gains. Furthermore, polarization properties of the multipath components need to be accurately accounted for in the model. \n\t\\end{enumerate}\n\\item The new channel model must accommodate a wide frequency range up to 100 GHz. The joint propagation characteristics over different frequency bands will need to be evaluated for multi-band operation, e.g., low-band and high-band carrier aggregation configurations. \n\\item The new channel model must support large channel bandwidths (up to 2 GHz), where:\n\t\\begin{enumerate}\n\t\t\\item The individual channel bandwidths may be in the range of 100 MHz to 2 GHz and may support carrier aggregation.\n\t\t\\item The operating channels may be spread across an assigned range of several GHz.\n\t\\end{enumerate}\n\\item The new channel model must support a range of large antenna arrays, in particular:\n\t\\begin{enumerate}\n\t\t\\item Some large antenna arrays will have very high directivity with angular resolution of the channel down to around 1.0 degree.\n\t\t\\item 5G will consist of different array types, e.g., linear, planar, cylindrical and spherical arrays, with arbitrary polarization.\n\t\t\\item The array manifold vector can change significantly when the bandwidth is large relative to the carrier frequency. As such, the wideband array manifold assumption is not valid and new modeling techniques may be required. It may be preferable, for example, to model departure\/arrival angles with delays across the array and follow a spherical wave assumption instead of the usual plane wave assumption.\n\t\\end{enumerate}\n\\item The new channel model must accommodate mobility, in particular (for outdoor models, although mentioned here for consistency):\n\t\\begin{enumerate}\n\t\t\\item The channel model structure should be suitable for mobility up to 350 km\/hr.\n\t\t\\item The channel model structure should be suitable for small-scale mobility and rotation of both ends of the link in order to support scenarios such as device to device (D2D) or vehicle to vehicle (V2V).\n\t\\end{enumerate}\n\\item The new channel model must ensure spatial\/temporal\/frequency consistency, in particular:\n\t\\begin{enumerate}\n\t\t\\item The model should provide spatial\/temporal\/frequency consistencies which may be characterized, for example, via spatial consistence, inter-site correlation, and correlation among frequency bands. 
\n\t\t\\item The model should also ensure that the channel states, such as line-of-sight (LOS)\/non-LOS (NLOS) for outdoor\/indoor locations, the second order statistics of the channel, and the channel realizations change smoothly as a function of time, antenna position, and\/or frequency in all propagation scenarios. \n\t\t\\item The spatial\/temporal\/frequency consistencies should be supported for simulations where the channel consistency impacts the results (e.g. massive MIMO, mobility and beam tracking, etc.). Such support could possibly be optional for simpler studies.\n\t\\end{enumerate}\n\\item The new channel model must be of practical computational complexity, in particular:\n\t\\begin{enumerate}\n\t\t\\item The model should be suitable for implementation in single-link simulation tools and in multi-cell, multi-link radio network simulation tools. Computational complexity and memory requirements should not be excessive. The 3GPP 3D channel model~\\cite{3GPP36873} is seen, for instance, as a sufficiently accurate model for its purposes, with an acceptable level of complexity. Accuracy may be provided by including additional modeling details with reasonable complexity to support the greater channel bandwidths, and spatial and temporal resolutions and spatial\/temporal\/frequency consistency, required for millimeter-wave modeling.\n\t\t\\item The introduction of a new modeling methodology (e.g. Map based model) may significantly complicate the channel generation mechanism and thus substantially increase the implementation complexity of the system-level simulator. Furthermore, if one applies a completely different modeling methodology for frequencies above 6 GHz, it would be difficult to have meaningful comparative system evaluations for bands up to 100 GHz.\n\t\\end{enumerate}\n\\end{enumerate}\n\n\\section{Indoor Deployment Scenarios - Indoor (InH): Open and Closed Office, Shopping Mall}\nThe indoor scenario includes open and closed offices, corridors within offices and shopping malls as examples. The typical office environment has open cubicle areas, walled offices, open areas, corridors, etc., where the partition walls are composed of a variety of materials like sheetrock, poured concrete, glass, cinder block, etc. For the office environment, APs are generally mounted at a height of 2-3 m either on the ceilings or walls, with UEs at heights between 1.2 and 1.5 m. Shopping malls are generally 2-5 stories high and often include an open area (``atrium\"). In the shopping-mall environment, APs are generally mounted at a height of approximately 3 m on the walls or ceilings of the corridors and shops, with UEs at heights between 1.2 and 1.5 m. The density of APs may range from one per floor to one per room, depending on the frequency band and output power. A typical indoor office and shopping mall scenario are shown in Figures~\\ref{fig:InHsc} and~\\ref{fig:SMsc}, respectively. \n\\begin{figure}[b!]\n\t\\centering\n\t\\includegraphics[width = 3.7in]{InHsc.eps}\n\t\\caption{Typical Indoor Office.}\n\t\\label{fig:InHsc}\n\\end{figure}\n\\begin{figure}[b!]\n\t\\centering\n\t\\includegraphics[width = 3.7in]{SMsc.eps}\n\t\\caption{Indoor Shopping Malls.}\n\t\\label{fig:SMsc}\n\\end{figure}\n\n\\section{Characteristics of the InH Channel from 6 GHz to 100 GHz}\nMeasurements over a wide range of frequencies have been performed by the co-authors of this paper. 
In the following sections we outline the main observations per scenario with some comparisons to the existing 3GPP models below 6 GHz (e.g.~\\cite{3GPP36873}).\n\nIn LOS conditions, multiple reflections from walls, floor, and ceiling give rise to waveguiding. Measurements in both office and shopping mall scenarios show that path loss exponents, based on a 1 m free space reference distance, are typically below 2 in LOS conditions, leading to more favorable path loss than predicted by Friis' free space path loss formula. The strength of the waveguiding effect is variable and the path loss exponent appears to increase very slightly with increasing frequency, possibly due to the relation between the wavelength and surface roughness.\n\nMeasurements of the small-scale channel properties such as angular spread and delay spread have shown remarkable similarities between channels over a very wide frequency range. It appears that the main multipath components are present at all frequencies, though with some variation in amplitude.\n\nRecent work shows that polarization discrimination ranges between 15 and 25 dB for indoor millimeter wave channels~\\cite{Karttunen15a}, with greater polarization discrimination at 73 GHz than at 28 GHz~\\cite{Mac15a}.\n\n\\section{Penetration Inside Buildings}\nMeasurements have been reported for penetration loss for various materials at 2.5, 28, and 60 GHz for indoor scenarios~\\cite{Rap15a,Rap13a,Anderson02a,Zhao13a}, although not all materials were measured at the same frequencies. For easy comparison, walls and drywalls were lumped together into a common dataset, and different types of clear glass were lumped together into a common dataset, with normalized penetration losses shown in Figure~\\ref{fig:NYUpenetration}. It was observed that clear glass has widely varying attenuation (20 dB\\/cm at 2.5 GHz, 3.5 dB\\/cm at 28 GHz, and 11.3 dB\\/cm at 60 GHz). For mesh glass, penetration loss was observed to increase as a function of frequency (24.1 dB\\/cm at 2.5 GHz and 31.9 dB\\/cm at 60 GHz), and a similar trend was observed for whiteboards, whose penetration loss also increased with frequency. At 28 GHz, indoor tinted glass resulted in a penetration loss of 24.5 dB\\/cm. Walls showed very little attenuation per cm of distance at 28 GHz (less than 1 dB\\/cm)~\\cite{5GSIG}. Furthermore, a simple parabolic model as a function of frequency for low-loss and high-loss building penetration is given in~\\cite{5GSIG_VTC16}.\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width = 3.7in]{NYUpenetration.eps}\n\t\\caption{Normalized material penetration losses at 2.5 GHz, 28 GHz, and 60 GHz from indoor measurements; common types of glass and walls were lumped together into common datasets~\\cite{Rap15b,Anderson02a,Zhao13a}.}\n\t\\label{fig:NYUpenetration}\n\\end{figure}\n\n\\section{Path loss, Shadow Fading, LOS, and Blockage Modeling}\n\\subsection{LOS Probability}\nThe definition of LOS used in this paper is discussed in this sub-section together with other LOS models. The LOS state is determined by a map-based approach, i.e., by considering the transmitter (AP) and receiver (UE) positions and whether any buildings or walls block the direct path between the AP and the UE. The impact of objects not represented in the map, such as chairs, desks, and office furniture, is modelled separately using shadowing\\/blocking terms. An attractive feature of this LOS definition is that it is frequency independent, as only walls are considered in the definition. 
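A minimal sketch of such a map-based LOS determination is given below: the direct AP-UE link is tested for intersection against a list of two-dimensional wall segments and the LOS state is declared accordingly. The floor plan, wall coordinates, and terminal positions are illustrative placeholders; in a full implementation the segments would be taken from the actual building map, and blocking by unmapped objects would enter through the separate shadowing terms mentioned above.
\\begin{verbatim}
def _orient(a, b, c):
    # Twice the signed area of the triangle (a, b, c).
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p1, p2, q1, q2):
    # True if segment p1-p2 properly crosses segment q1-q2.
    d1, d2 = _orient(q1, q2, p1), _orient(q1, q2, p2)
    d3, d4 = _orient(p1, p2, q1), _orient(p1, p2, q2)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def is_los(ap, ue, walls):
    # Map-based LOS state: True if no wall blocks the direct AP-UE path.
    return not any(segments_intersect(ap, ue, w0, w1) for (w0, w1) in walls)

# Illustrative 20 m x 10 m office with one internal partition wall
# that leaves a 3 m opening at the top.
walls = [((0, 0), (20, 0)), ((20, 0), (20, 10)),
         ((20, 10), (0, 10)), ((0, 10), (0, 0)),
         ((10, 0), (10, 7))]
ap = (2.0, 5.0)
print(is_los(ap, (8.0, 5.0), walls))    # True: same side of the partition
print(is_los(ap, (15.0, 2.0), walls))   # False: blocked by the partition
\\end{verbatim}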
\n\nSince the 3GPP 3D model~\\cite{3GPP36873} does not include an indoor scenario for LOS-probability, and the indoor hotspot scenario in e.g. the IMT advanced model~\\cite{ITU-M.2135-1} differs from the office scenario considered in this paper, an investigation on the LOS probability for indoor office has been conducted based on ray-tracing simulations. Different styles of indoor office environments were investigated, including open-plan office with cubical area, closed-plan office with corridor and meeting room, and also a hybrid-plan office with both open and closed areas. It has been verified that the following model fits the propagation in indoor office environment the best, of the three models evaluated:\n\\begin{equation}\\label{eq1}\nP_{LOS} = \\begin{cases}\n1, & d\\leq1.2\\text{ m}\\\\\n\\exp(-(d-1.2)\/4.7), & 1.210\\text{ m}\\end{cases}\\end{equation*}} & \\parbox{6.5cm}{\\begin{equation*}\\label{eq5} P_{LOS} = \\begin{cases} 1, & d\\leq1\\text{ m}\\\\ \\exp(-(d-1)\/9.4), & d>1\\text{ m}\\end{cases}\\end{equation*}} & 0.0572 \\\\ \\hline\n\t\t\t\\scriptsize WINNER II model (A1)\t& \\parbox{6.7cm}{\\begin{equation*}\\tiny\\label{eq6} P_{LOS} = \\begin{cases} 1, & d\\leq2.5\\text{ m}\\\\ 1-0.9(1-(1.24-0.61\\log_{10}(d))^3)^{1\/3}, & d>2.5\\text{ m}\\end{cases}\\end{equation*}} & \\parbox{6.7cm}{\\begin{equation*}\\tiny\\label{eq7} P_{LOS} = \\begin{cases} 1, & d\\leq2.6\\text{ m}\\\\ 1-0.9(1-(1.16-0.4\\log_{10}(d))^3)^{1\/3}, & d>2.6\\text{ m}\\end{cases}\\end{equation*}} & 0.0473 \\\\ \\hline\n\t\t\t\\scriptsize New Model\t& N\/A & \\parbox{7.5cm}{\\begin{equation*}\\label{eq8}\n\t\t\tP_{LOS} = \\begin{cases} 1, & d\\leq1.2\\text{ m}\\\\ \\exp(-(d-1.2)\/4.7), & 1.2\\text{ m}d_{BP}\n\t\\end{cases}$}\n\t\\end{equation}\n\\end{figure*}\n\\begin{figure*}\n\t\\begin{equation}\\label{eq12}\n\t\\resizebox{0.94\\hsize}{!}{$%\n\t\t\\mathrm{PL}_{Dual}^{CIF}(f,d)[\\mathrm{dB}] = \\begin{cases} \\textrm{FSPL}(f,\\text{1 m})+10n_1\\left(1+b_1\\left(\\frac{f-f_0}{f_0}\\right)\\right)\\log_{10}\\left(\\frac{d}{\\text{1 m}}\\right), & 1\\text{ m}d_{BP}\n\t\t\\end{cases}$}\n\t\\end{equation}\n\t\\hrulefill\n\n\t\\vspace*{4pt}\n\\end{figure*}\n\nIn the CI PL model, only a single parameter, the path loss exponent (PLE), needs to be determined through optimization to minimize the SF standard deviation over the measured PL data set~\\cite{Rap15b,Sun15a,Sun16a}. In the CI PL model there is an anchor point that ties path loss to the FSPL at 1 m, which captures frequency-dependency of the path loss, and establishes a uniform standard to which all measurements and model parameters may be referred. In the CIF model there are 2 optimization parameters ($n$ and $b$), and since it is an extension of the CI model, it also uses a 1 m free-space close-in reference distance path loss anchor. In the ABG PL model there are three parameters which need to be optimized to minimize the standard deviation (SF) over the data set~\\cite{Mac15a,Sun16a}. Closed form expressions for optimization of the model parameters for the CI, CIF, and ABG path loss models are given in~\\cite{Mac15a}, where it was shown that indoor channels experience an increase in the PLE value as the frequency increases, whereas the PLE is not very frequency dependent in outdoor UMa or UMi scenarios~\\cite{Mac15a,Rap15b,Sun15a,Sun16a,Thomas16a}. 
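To make these parameterisations concrete, the sketch below evaluates the single-slope CI, CIF, and ABG expressions with the 1 m free-space anchor $\\mathrm{FSPL}(f,\\text{1 m})=20\\log_{10}(4\\pi f\\/c)$, using as example values the InH office parameters listed in Table~\\ref{tbl:InHPL} ($n$=1.73 for LOS; $n$=3.19, $b$=0.06, $f_0$=24.2 GHz for the CIF single-slope NLOS fit; $\\alpha$=3.83, $\\beta$=17.30, $\\gamma$=2.49 for the ABG NLOS fit). The assumption that the ABG frequency term takes $f$ in GHz follows the cited references and should be checked against them; shadow fading is not included.
\\begin{verbatim}
import numpy as np

C_LIGHT = 299_792_458.0                  # m/s

def fspl_1m_db(f_hz):
    # Free-space path loss at the 1 m reference distance, in dB.
    return 20.0 * np.log10(4.0 * np.pi * f_hz / C_LIGHT)

def pl_ci(f_hz, d_m, n):
    # Close-in (CI) free-space reference distance model, single slope.
    return fspl_1m_db(f_hz) + 10.0 * n * np.log10(d_m)

def pl_cif(f_hz, d_m, n, b, f0_hz):
    # CI model with a frequency-weighted slope (CIF).
    slope = n * (1.0 + b * (f_hz - f0_hz) / f0_hz)
    return fspl_1m_db(f_hz) + 10.0 * slope * np.log10(d_m)

def pl_abg(f_ghz, d_m, alpha, beta, gamma):
    # Alpha-beta-gamma (ABG) model; beta is the floating offset in dB.
    return 10.0 * alpha * np.log10(d_m) + beta + 10.0 * gamma * np.log10(f_ghz)

# Example: indoor office link at 28 GHz with a 10 m AP-UE separation.
f, d = 28e9, 10.0
print("CI,  LOS :", pl_ci(f, d, n=1.73))
print("CIF, NLOS:", pl_cif(f, d, n=3.19, b=0.06, f0_hz=24.2e9))
print("ABG, NLOS:", pl_abg(28.0, d, alpha=3.83, beta=17.30, gamma=2.49))
\\end{verbatim}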

Another important issue related to path loss is shadow fading. For InH, both the distance dependency and the frequency dependency of shadow fading were investigated for indoor office and shopping mall environments. For the LOS propagation condition, the frequency and distance dependency is weak, but for the NLOS propagation condition it is more apparent, as indicated in Table 7 of~\cite{5GSIG}. The resulting path loss and shadow fading parameters for the InH scenarios are summarized in Table~\ref{tbl:InHPL}.

\begin{table}
	\caption{InH Indoor Office and Shopping Mall Path Loss Model Parameters for LOS and NLOS.}\label{tbl:InHPL}
	\centering
	\renewcommand{\arraystretch}{1.6}
	\scalebox{0.82}{
		\fontsize{8}{8}\selectfont
		\begin{tabular}{|p{3cm}|p{3cm}|p{3cm}|}
			\hline
			Scenario & CI/CIF Model Parameters & ABG Model Parameters \\ \hline \hline
			InH-Indoor-Office-LOS & $n$=1.73, $\sigma_{\mathrm{SF}}$=3.02 dB & N/A \\ \hline
			InH-Indoor-Office-NLOS single slope (FFS) & $n$=3.19, $b$=0.06, $f_0$=24.2 GHz, $\sigma_{\mathrm{SF}}$=8.29 dB & $\alpha$=3.83, $\beta$=17.30, $\gamma$=2.49, $\sigma_{\mathrm{SF}}$=8.03 dB \\ \hline
			InH-Indoor-Office-NLOS dual slope & $n_1$=2.51, $b_1$=0.12, $f_0$=24.1 GHz, $n_2$=4.25, $b_2$=0.04, $d_{BP}$=7.8 m, $\sigma_{\mathrm{SF}}$=7.65 dB & $\alpha_1$=1.7, $\beta_1$=33.0, $\gamma$=2.49, $d_{BP}$=6.90 m, $\alpha_2$=4.17, $\sigma_{\mathrm{SF}}$=7.78 dB \\ \hline
			InH-Shopping Malls-LOS & $n$=1.73, $\sigma_{\mathrm{SF}}$=2.01 dB & N/A \\ \hline
			InH-Shopping Malls-NLOS single slope (FFS) & $n$=2.59, $b$=0.01, $f_0$=39.5 GHz, $\sigma_{\mathrm{SF}}$=7.40 dB & $\alpha$=3.21, $\beta$=18.09, $\gamma$=2.24, $\sigma_{\mathrm{SF}}$=6.97 dB \\ \hline
			InH-Shopping Malls-NLOS dual slope & $n_1$=2.43, $b_1$=0.01, $f_0$=39.5 GHz, $n_2$=8.36, $b_2$=0.39, $d_{BP}$=110 m, $\sigma_{\mathrm{SF}}$=6.26 dB & $\alpha_1$=2.9, $\beta_1$=22.17, $\gamma$=2.24, $d_{BP}$=147.0 m, $\alpha_2$=11.47, $\sigma_{\mathrm{SF}}$=6.36 dB \\ \hline
		\end{tabular}}
\end{table}

\section{Fast Fading Modeling}
For InH scenarios, an investigation of fast fading modelling has been conducted based on both measurements and ray-tracing. Both indoor office and shopping mall environments have been investigated at frequencies including 2.9 GHz, 3.5 GHz, 6 GHz, 14 GHz, 15 GHz, 20 GHz, 28 GHz, 29 GHz, 60 GHz, and 73 GHz. Preliminary analyses of large-scale channel characteristics have been summarized in~\cite{5GSIG}. Although it is still too early to apply these results to the full frequency range up to 100 GHz, the preliminary investigations have provided insight into the differences introduced by the greatly extended frequency range, and in particular illustrate the frequency dependency of the large-scale channel characteristics across the measured range.

\section{Conclusion}
The basis for this paper is the open literature in combination with recent and ongoing propagation channel measurements performed by a majority of the co-authors of this paper, some of which are as yet unpublished. The InH propagation models differ somewhat from the outdoor UMi and UMa models in that the indoor channels are more frequency dependent than outdoor channels, leading to the frequency-dependent ABG and CIF NLOS path loss models.
In LOS conditions, waveguiding effects were observed at all measured frequencies, leading to path loss exponents below the theoretical free space value of $n=2$. The preceding tables give an overview of these recent measurement activities in different frequency bands and scenarios; further information is provided in~\cite{5GSIG}.
\section*{Acknowledgment}
The authors would like to thank Jianhua Zhang\textsuperscript{b} and Yi Zheng\textsuperscript{c}, who are also contributing authors of this manuscript and of the 5G white paper~\cite{5GSIG}.

\bibliographystyle{IEEEtran}