\\section{Introduction}\nA sequence $\\{f_n\\}_{n=1}^\\infty$ in a separable Hilbert space $\\mathcal{H}$ is called a frame for the Hilbert space $\\mathcal{H},$ if there exist positive constants $A,~ B >0$ such that\n\\begin{align}\\label{0} A\\| f\\|^2_\\mathcal{H} \\leq \\sum_{n=1}^\\infty |\\langle f, f_n \\rangle|^2 \\leq B\\|f\\|^2_\\mathcal{H},~~~ \\mbox{for all}~~ f\\in \\mathcal{H}\n \\end{align} The positive constants $A$ and $B$ are called the lower and upper frame bounds of the frame, respectively. The inequality in (\\ref{0}) is called the frame inequality of the frame. The frame is called a tight frame if $A=B$ and is called a normalized tight frame if $A=B=1.$ If $\\{f_n\\}_{n=1}^\\infty$ is a frame for $\\mathcal{H}$, then the following operators are associated with it.\n \\begin{enumerate}[(a)]\n \\item The pre-frame operator $T: l^2(\\mathbb{N}) \\longrightarrow \\mathcal{H}$ is defined as $T\\{c_n\\}_{n=1}^\\infty= \\sum\\limits_{n=1}^\\infty c_nf_n,~~\\{c_n\\}_{n=1}^\\infty \\in l^2(\\mathbb{N}).$\n \\item The analysis operator $T^*: \\mathcal{H} \\longrightarrow l^2(\\mathbb{N}),~ T^*f= \\{\\langle f, f_k\\rangle\\}_{k=1}^\\infty,~~~ f \\in \\mathcal{H}.$\n \\item The frame operator $S=TT^*: \\mathcal{H} \\longrightarrow \\mathcal{H},~~ Sf= \\sum\\limits_{k=1}^\\infty \\langle f, f_k \\rangle f_k, ~~f \\in \\mathcal{H}.$ The frame operator $S$ is bounded, linear and invertible on $\\mathcal{H}$. 
Thus, a frame for $\\mathcal{H}$ allows each vector in $\\mathcal{H}$ to be written as a linear combination of the elements in the frame, but the linear independence between the elements is not required; i.e., for each vector $f \\in \\mathcal{H}$ we have\n \\begin{eqnarray*}\n f=SS^{-1}f= \\sum\\limits_{k=1}^\\infty \\langle S^{-1}f, f_k \\rangle f_k.\n \\end{eqnarray*}\n \\end{enumerate}\nFrames in Hilbert spaces were introduced by Duffin and Schaeffer \\cite{8} in 1952, while addressing some deep problems in non-harmonic Fourier series. Frames were reintroduced by Daubechies, Grossmann and Meyer \\cite{DGM} three decades later, in 1986, and were widely studied after this seminal work. Frames are a generalization of orthonormal bases. The\nmain property of frames which makes them useful is their redundancy. Frames now play an important role not only in pure mathematics but also in applied mathematics. Representation\nof signals using frames is advantageous over basis expansions in\na variety of practical applications in science and engineering. In\nparticular, frames are widely used in sampling theory \\cite{ab, es}, wavelet theory \\cite{id}, signal processing \\cite{pg}, image processing \\cite{ma}, pseudo-differential operators \\cite{pdo},\nfilter banks \\cite{cf}, quantum computing \\cite{qc}, wireless sensor networks \\cite{ws}, coding theory \\cite{ct}, geometry \\cite{SV1, SV2} and so on.\n Feichtinger and Gr\\\"{o}chenig \\cite{10} extended the notion of frames to Banach spaces and defined the notion of atomic decomposition.\n Gr\\\"{o}chenig \\cite{13} introduced a more general concept for Banach spaces called a Banach frame. Banach frames and atomic decompositions were further studied in \\cite{6,13,14}. Casazza, Christensen and Stoeva \\cite{4} studied $\\mathcal{X}_d$-frames and $\\mathcal{X}_d$-Bessel sequences in Banach spaces. 
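For a concrete illustration of the redundancy of frames in the Hilbert space setting (a standard example, included here only for exposition), consider the three vectors $f_1=(0,1),~ f_2=(-\\frac{\\sqrt{3}}{2},-\\frac{1}{2}),~ f_3=(\\frac{\\sqrt{3}}{2},-\\frac{1}{2})$ in $\\mathbb{R}^2$. A direct computation gives\n\\begin{eqnarray*}\n\\sum\\limits_{k=1}^{3} |\\langle f, f_k \\rangle|^2=\\frac{3}{2}\\|f\\|^2,~~~ \\mbox{for all}~~ f\\in \\mathbb{R}^2,\n\\end{eqnarray*}\nso $\\{f_1,f_2,f_3\\}$ is a tight frame for $\\mathbb{R}^2$ with $A=B=\\frac{3}{2}$, although the three vectors are linearly dependent and hence do not form a basis.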
Shah \\cite{SS} defined and studied approximative $\\mathcal{X}_d$-frames and approximative $\\mathcal{X}_d$-Bessel sequences. He gave the following definition.\n \\begin{defn}\\cite{SS}\nA sequence $ \\{h_{n,i} \\}\\underset{n \\in \\mathbb{N}}{_{i=1,2,3,...,m_n}} \\subseteq \\mathcal{X}^*,$ where $\\{m_n\\}$ is an increasing sequence of positive integers, is called an approximative \\emph{$\\mathcal{X}_d$-frame} for $\\mathcal{X}$ if\n\\begin{enumerate}[(a)]\n\\item $\\{h_{n,i}(x) \\}\\underset{n \\in \\mathbb{N}}{_{i=1,2,3,...,m_n}} \\in \\mathcal{X}_d$, for all $ x\\in \\mathcal{X}.$\n\\item There exist constants $A$ and $B$ with $0 < A \\leq B <\\infty$ such that\n\\begin{eqnarray}\\label{XD}\n A\\|x \\|_\\mathcal{X}~\\leq \\| \\{h_{n,i}(x) \\}\\underset{n \\in \\mathbb{N}}{_{i=1,2,3,...,m_n}} \\|_{\\mathcal{X}_d}~\\leq B \\|x \\|_\\mathcal{X},~\\text{for all} \\ x\\in \\mathcal{X}.\n\\end{eqnarray}\n\\end{enumerate}\n\\end{defn}\n\\noindent The constants $A$ and $B$ are called approximative \\emph{$\\mathcal{X}_d$-frame} bounds.\nIf at least (a) and the upper bound condition in (\\ref{XD}) are satisfied, then $\\{h_{n,i} \\}\\underset{n \\in \\mathbb{N}}{_{i=1,2,3,...,m_n}}$ is called an approximative \\emph{$\\mathcal{X}_d$-Bessel sequence} for $\\mathcal{X}.$\\\\\nOne may note that if $\\{f_n\\}$ is an $\\mathcal{X}_d$-frame for $\\mathcal{X},$ then for $h_{n,i} =f_i,~i=1,2,3,...,n;~ n \\in \\mathbb{N}$ (so that $m_n=n$), $\\{h_{n,i} \\}\\underset{n \\in \\mathbb{N}}{_{i=1,2,3,...,m_n}}$ is an approximative $\\mathcal{X}_d$-frame for $\\mathcal{X}.$ Also, note that if $\\{f_n\\}$ is an $\\mathcal{X}_d$-Bessel sequence for $\\mathcal{X},$ then for $h_{n,i} =f_i,~i=1,2,3,...,n;~ n \\in \\mathbb{N}$, $\\{h_{n,i} \\}\\underset{n \\in \\mathbb{N}}{_{i=1,2,3,...,m_n}}$ is an approximative $\\mathcal{X}_d$-Bessel sequence for $\\mathcal{X}.$ The\nbounded linear operator $U: \\mathcal{X} \\rightarrow \\mathcal{X}_d$ given by\n\\begin{eqnarray}\nU(x)=\\{h_{n,i}(x) 
\\}\\underset{n \\in \\mathbb{N}}{_{i=1,2,3,...,m_n}}, x \\in \\mathcal{X}\n\\end{eqnarray}\n is called the \\emph{analysis operator} associated to the approximative $\\mathcal{X}_d$-Bessel sequence $\\{h_{n,i} \\}\\underset{n \\in \\mathbb{N}}{_{i=1,2,3,...,m_n}}$.\n If $\\{h_{n,i} \\}\\underset{n \\in \\mathbb{N}}{_{i=1,2,3,...,m_n}}$ is an approximative \\emph{$\\mathcal{X}_d$-frame} for $\\mathcal{X}$ and there exists a bounded linear operator $S:\\mathcal{X}_d\\longrightarrow \\mathcal{X}$ such that $S(\\{h_{n,i}(x) \\}\\underset{n \\in \\mathbb{N}}{_{i=1,2,3,...,m_n}})=x,$~ for all $x\\in \\mathcal{X}$, then $(\\{h_{n,i} \\}\\underset{n \\in \\mathbb{N}}{_{i=1,2,3,...,m_n}},S)$ is called an approximative \\emph{Banach frame} for $\\mathcal{X}$ with respect to $\\mathcal{X}_d$.\\\\\n Atomic systems for an operator $K$ in Hilbert spaces were introduced by Gavruta \\cite{12}. Xiao et al. \\cite{21} discussed the relationship between $K$-frames and ordinary frames in Hilbert spaces. Poumai and Jahan \\cite{KS} introduced atomic systems for operators in Banach spaces. Frames for operators in Banach spaces were further studied in \\cite{R1, R3}.\\\\\n \\textbf{Outline of the paper.}\n In the present paper, we define approximative atomic systems for an operator $K$ in a Banach space and prove some results on the existence of an approximative atomic system for $K$. We also define approximative families of local atoms for subspaces and give a characterization of the approximative family of local atoms for a subspace. 
Also, we discuss methods to construct an approximative atomic system for an operator $K$ from an approximative Bessel sequence and an approximative $\\mathcal{X}_d$-Bessel sequence.\\\\\n\\\\\\indent Throughout this paper, $\\mathcal{X}$ will denote a Banach space over the scalar field $\\mathbb{K}$ ($\\mathbb{R}$ or $\\mathbb{C}$), $\\mathcal{X}^*$ the dual space of $\\mathcal{X}$, $\\mathcal{X}_d$ a BK-space and $\\mathcal{X}_d^*$ the dual of $\\mathcal{X}_d$, and we will assume that $\\mathcal{X}_d$ has a sequence of canonical unit vectors as a basis. By $\\{h_{n,i}\\}$ we mean a sequence of coefficient functionals arising from a row-finite matrix. $L(\\mathcal{X})$ will denote the set of all bounded linear operators from $\\mathcal{X}$ into $\\mathcal{X}$. For $T\\in L(\\mathcal{X})$, $T^{*}$ denotes the adjoint of $T$. $\\pi:\\mathcal{X} \\longrightarrow \\mathcal{X}^{**}$ is the canonical embedding of $\\mathcal{X}$ into $\\mathcal{X}^{**}$. A sequence space $S$ is called a \\emph{BK-space} if it is a Banach space and the co-ordinate functionals are continuous on $S;$ that is, the relations $x_n= \\{ {\\alpha_j}^{(n)}\\}$, $x=\\{ \\alpha_j\\} \\in S$, $\\lim\\limits_{n \\longrightarrow \\infty}x_n=x$ imply $\\lim\\limits_{n \\longrightarrow \\infty}\\alpha_j^{(n)}=\\alpha_j ~~~(j=1,2,3,...).$\n\n\\section{Preliminaries}\n Gavruta \\cite{12} introduced the notions of a $K$-frame and an atomic system for an operator $K$ in a Hilbert space. She gave the following definition.\n\n\\begin{defn} \\cite{12} Let $\\mathcal{H}$ be a Hilbert space, $K\\in L(\\mathcal{H})$ and $\\lbrace x_n\\rbrace\\subseteq \\mathcal{H}$. 
Then \\\\\n(a) $\\lbrace x_n\\rbrace$ is called a \\emph{K-frame} for $\\mathcal{H}$ if there exist constants $A,B>0$ such that\n\\begin{eqnarray*}\nA\\Vert K^*x\\Vert^2\\leq\\sum\\limits_{n=1}^{\\infty}\\vert\\langle x,x_n\\rangle\\vert^2\\leq B\\Vert x\\Vert^2, \\ \\text{for all} \\ x\\in \\mathcal{H}.\n\\end{eqnarray*}\n(b) $\\lbrace x_n\\rbrace$ is called an \\emph{atomic system for $K$} if\n\\begin{enumerate}[(i)]\n\\item the series $\\sum\\limits_{n=1}^{\\infty}c_nx_n$ converges for all $c=\\lbrace c_n\\rbrace\\in l^2$;\n\\item there exists $C>0$ such that for every $x\\in \\mathcal{H}$ there exists $\\lbrace a_n\\rbrace\\in l^2$ such that $\\Vert \\lbrace a_n\\rbrace\\Vert_{l^2}\\leq C\\Vert x\\Vert$ and $K(x)=\\sum\\limits_{n=1}^{\\infty}a_nx_n.$\n\\end{enumerate}\n\\end{defn}\n Shah \\cite{SS} defined and studied approximative $K$-atomic decompositions and approximative $\\mathcal{X}_d$-frames in Banach spaces. For further studies related to this concept one may refer to \\cite{SH, n}.\n\nNext, we give some results which we will use throughout this manuscript.\n\\begin{defn}\n\\cite{16} Let $T\\in L(\\mathcal{X})$. We say that an operator $S\\in L(\\mathcal{X})$ is a pseudoinverse of $T$ if $TST=T$. Also, $S\\in L(\\mathcal{X})$ is a generalized inverse of $T$ if $TST=T$ and $STS=S$.\n\\end{defn}\n\\begin{lem}\\cite{22}\\label{PI} Let $\\mathcal{X}$, $\\mathcal{Y}$ be Banach spaces and $T:\\mathcal{X} \\longrightarrow \\mathcal{Y}$ be a bounded linear operator. 
Then the following statements are equivalent:\n\\begin{enumerate}\n\\item There exist two continuous projection operators $P:\\mathcal{X}\\rightarrow \\mathcal{X}$ and $Q:\\mathcal{Y} \\rightarrow \\mathcal{Y}$ such that\n\\begin{eqnarray}\\label{1E1}\nP(\\mathcal{X})=\\ker T \\quad \\text{and} \\quad Q(\\mathcal{Y})=T(\\mathcal{X}).\n\\end{eqnarray}\n\\item There exist closed subspaces $W$ and $Z$ such that $\\mathcal{X}=\\ker T\\oplus W$ and $\\mathcal{Y}=T(\\mathcal{X})\\oplus Z.$\n\\item $T$ has a pseudoinverse operator $T^\\dag$.\n\\end{enumerate}\nIf two continuous projection operators $P:\\mathcal{X} \\rightarrow \\mathcal{X}$ and $Q:\\mathcal{Y} \\rightarrow \\mathcal{Y}$ satisfy (\\ref{1E1}), then there exists a pseudoinverse operator $T^\\dag$ of $T$ such that \\begin{equation*}\nT^\\dag T=I_{\\mathcal{X}}-P \\ \\text{and} \\ TT^\\dag=Q,\n\\end{equation*}\nwhere $I_{\\mathcal{X}}$ is the identity operator on $\\mathcal{X}$.\n\\end{lem}\n\\begin{lem}\\label{5}\n\\cite{2,19} Let $\\mathcal{X}$ be a Banach space. If $T\\in L(\\mathcal{X})$ has a generalized inverse $S\\in L(\\mathcal{X})$, then $TS$ and $ST$ are projections with $TS(\\mathcal{X})=T(\\mathcal{X})$ and $ST(\\mathcal{X})=S(\\mathcal{X}).$\n\\end{lem}\n\n\\begin{lem}\\label{6}\n\\cite{15} Let $\\mathcal{X}$ and $\\mathcal{Y}$ be Banach spaces. Let $L:\\mathcal{X} \\rightarrow \\mathcal{X}$ and $L_1:\\mathcal{X} \\rightarrow \\mathcal{Y}$ be linear operators. Then the following conditions are equivalent:\n\\begin{enumerate}[(a)]\n\\item $L=L_2L_1$, for some continuous linear operator $L_2:L_1(\\mathcal{X})\\rightarrow \\mathcal{X}$.\n\\item $\\Vert L(x)\\Vert\\leq k\\Vert L_1(x)\\Vert$, for some $k\\geq 0$ and for all $x\\in \\mathcal{X}$.\n\\item $Range(L^*)\\subseteq Range(L_1^{*})$.\n\\end{enumerate}\n\\end{lem}\n\\begin{lem}\\label{7}\n\\cite{1} Assume $T\\in B(\\mathcal{X},\\mathcal{Y})$. 
If $S\\in B(\\mathcal{X},\\mathcal{Z})$ with $Range(S^*)\\subseteq Range(T^*)$ and $\\overline{Range(T)}$ is complemented, then there\nexists $V\\in B(\\mathcal{Y},\\mathcal{Z})$ such that $S = VT.$\n\\end{lem}\n\\begin{lem}\\label{8}\n\\cite{4} Let $\\mathcal{X}_d$ be a BK-space for which the canonical unit vectors $\\lbrace e_n\\rbrace$ form a Schauder basis. Then the space $\\mathcal{Y}_d=\\lbrace\\lbrace h(e_n)\\rbrace\\vert h\\in \\mathcal{X}_d^*\\rbrace$ with norm $\\Vert\\lbrace h(e_n)\\rbrace\\Vert_{\\mathcal{Y}_d}=\\Vert h\\Vert_{\\mathcal{X}_d^*}$ is a BK-space isometrically isomorphic to $\\mathcal{X}_d^*$. Also, every continuous linear functional $\\Phi$ on $\\mathcal{X}_d$ has the form\n$\\Phi\\lbrace c_n\\rbrace=\\sum\\limits_{n=1}^{\\infty}c_nd_n,$\nwhere $\\lbrace d_n\\rbrace\\in \\mathcal{Y}_d$ is uniquely determined by $d_n=\\Phi(e_n)$, and\n$\\Vert \\Phi\\Vert=\\Vert\\lbrace \\Phi(e_n)\\rbrace\\Vert_{\\mathcal{Y}_d}$.\n\\end{lem}\nTerekhin \\cite{20} introduced and studied frames in Banach spaces.\n\\begin{defn}\n\\cite{20} Let $\\mathcal{X}$ be a Banach space and $\\mathcal{X}_d$ be a BK-space with the sequence of canonical unit vectors $\\lbrace e_n\\rbrace$ as a basis. Let $\\mathcal{Y}_d$ be the sequence space defined in Lemma \\ref{8}. 
A sequence $\\lbrace x_n \\rbrace_{n=1}^{\\infty}\\subseteq \\mathcal{X}$ is called a frame with respect to $\\mathcal{X}_d$ if\n\\begin{enumerate}[(a)]\n\\item $\\lbrace f(x_n)\\rbrace\\in \\mathcal{Y}_d,$~ for all $f\\in \\mathcal{X}^*$,\n\\item there exist constants $A$ and $B$ with $0<A\\leq B<\\infty$ such that\n\\begin{eqnarray*}\nA\\Vert f\\Vert_{\\mathcal{X}^*}\\leq\\Vert\\lbrace f(x_n)\\rbrace\\Vert_{\\mathcal{Y}_d}\\leq B\\Vert f\\Vert_{\\mathcal{X}^*}, \\ \\text{for all} \\ f\\in \\mathcal{X}^*.\n\\end{eqnarray*}\n\\end{enumerate}\n\\end{defn}\n\\noindent We record some observations.\\\\~(III) If $\\lbrace x_n\\rbrace\\subseteq \\mathcal{X}$ is an approximative atomic system for $K$ with associated approximative $\\mathcal{X}_d$-Bessel sequence $\\lbrace h_{n,i}\\rbrace$, then there exist constants $C,D>0$ such that\n\\begin{eqnarray*}\nC\\Vert K(x)\\Vert\\leq\\Vert\\lbrace h_{n,i}(x)\\rbrace\\Vert, \\ \\text{for all} \\ x\\in \\mathcal{X},\n\\end{eqnarray*} and\n\\begin{eqnarray*}\nD\\Vert K^*(h)\\Vert\\leq\\Vert\\lbrace h(x_n)\\rbrace\\Vert, \\ \\text{for all} \\ h\\in \\mathcal{X}^*.\n\\end{eqnarray*}\n\\\\~(IV)\n$\\lbrace x_n\\rbrace\\subseteq \\mathcal{X}$ is a Bessel sequence for $\\mathcal{X}$ with respect to $\\mathcal{X}_d$ if and only if there exists a bounded linear operator $T$ from $\\mathcal{X}_d$ into $\\mathcal{X}$ for which $T\\lbrace h_n\\rbrace=\\sum\\limits_{n=1}^{\\infty}h_nx_n$, $\\lbrace h_n\\rbrace\\in \\mathcal{X}_d.$ Recall that $T$ is called the \\emph{synthesis operator} associated with the Bessel sequence $\\{x_n\\},$ and the bounded linear operator $R:\\mathcal{X}^*\\rightarrow \\mathcal{Y}_d$ given by\n\\begin{eqnarray*}\nR(h)=\\lbrace h(x_n)\\rbrace, \\ \\text{for} \\ h\\in \\mathcal{X}^*,\n\\end{eqnarray*}\nis called the \\emph{analysis operator} of the Bessel sequence $\\lbrace x_n\\rbrace$. Also observe that from Lemma \\ref{8}, $\\mathcal{Y}_d$ is isometrically isomorphic to $\\mathcal{X}_d^*$. Let $J_d:\\mathcal{X}_d^*\\rightarrow \\mathcal{Y}_d$ denote an isometric isomorphism from $\\mathcal{X}_d^*$ onto $\\mathcal{Y}_d$.\nNote that the synthesis operator $T$ need not be onto. Indeed, let $\\mathcal{X}_d=\\mathcal{X}=l_1$ and let $\\lbrace e_n\\rbrace$ be the sequence of canonical unit vectors as a basis of $\\mathcal{X}$. Take $x_n=e_{n+1}$, for $n \\in \\mathbb{N}$. 
Let $h=\\lbrace h_n\\rbrace_{n=1}^\\infty\\in \\mathcal{X}^*=l^\\infty$. Then $\\lbrace h(x_n)\\rbrace\\in l^\\infty$ and $\\Vert\\lbrace h(x_n)\\rbrace\\Vert_{l^\\infty}\\leq A\\Vert h\\Vert_{l^\\infty}$, where $A>0$ is some constant. Thus, $\\lbrace x_n\\rbrace$ is a Bessel sequence for $\\mathcal{X}$. But $T:\\mathcal{X}_d\\rightarrow \\mathcal{X}$ given by $T(e_n)=x_n$, for $n \\in \\mathbb{N}$, is a bounded linear operator which is not onto.\n\\\\\n$\\lbrace x_n\\rbrace\\subseteq \\mathcal{X}$ is a frame if and only if there exists a bounded linear operator $T$ from $\\mathcal{X}_d$ onto $\\mathcal{X}$ for which $T\\lbrace c_n\\rbrace=\\sum\\limits_{n=1}^{\\infty}c_nx_n$, $\\lbrace c_n\\rbrace\\in \\mathcal{X}_d$.\n\nIndeed, from the frame inequality, $T^*$ is one-one and the range of $T^*$ is closed in $\\mathcal{X}_d^*$. So, by [\\cite{17}, page 103], $T$ is onto.\\\\\n\\indent Conversely, if $T$ is onto, then by [\\cite{17}, page 103], $T^*$ is one-one and the range of $T^*$ is closed. Also, by [\\cite{9}, page 487], there exists a constant $D>0$ such that\n\\begin{eqnarray*}\n\\Vert h\\Vert\\leq D\\Vert T^*(h)\\Vert=D\\Vert \\lbrace h(x_n)\\rbrace\\Vert,\\ \\text{for all} \\ h\\in \\mathcal{X}^*.\n\\end{eqnarray*}\n\nIn the following result, we construct an approximative family of local atoms for $K(\\mathcal{X})$ and an approximative atomic decomposition for $[K(\\mathcal{X})]^*$ from a given approximative atomic system for a bounded linear operator $K$.\n\\begin{thm}\\label{T1}\nLet $\\lbrace x_n\\rbrace$ be an approximative atomic system for $K$ in $\\mathcal{X}$ and suppose $K$ has a pseudoinverse $K^\\dagger$. 
Then, $\\lbrace x_n\\rbrace$ is an approximative family of local atoms for $K(\\mathcal{X})$ in $\\mathcal{X}$.\nMoreover, if $\\mathcal{X}_d^*$ has a sequence of canonical unit vectors $\\lbrace e_n^*\\rbrace$ as a basis, then there exists an approximative $\\mathcal{X}_d$-Bessel sequence $\\{h_{n,i}\\}\\underset{n \\in \\mathbb{N}}{_{i=1,2,3,...,m_n}}\\subset \\mathcal{X}^*$ for $K(\\mathcal{X})$ such that $(h_{n,i},\\pi(x_n))$ is an approximative atomic decomposition for $[K(\\mathcal{X})]^*$ with respect to $\\mathcal{X}_d^*$.\n\\end{thm}\n\\begin{proof}\nSince $\\lbrace x_n\\rbrace$ is an approximative atomic system for $K$ in $\\mathcal{X}$ and $K$ has a pseudoinverse $K^\\dagger$, $KK^\\dagger$ is a projection from $\\mathcal{X}$ onto $K(\\mathcal{X})$ and\n$KK^\\dagger(x)=x, \\ \\text{for all} \\ x\\in K(\\mathcal{X}).$\nThus, $KK^\\dagger\\vert_{K(\\mathcal{X})}=I_{K(\\mathcal{X})}$. Let $\\lbrace h_{n,i}\\rbrace$ be its associated approximative $\\mathcal{X}_d$-Bessel sequence with bound $C$. Take $f_{n,i}=(K^\\dagger\\vert_{K(\\mathcal{X})})^*(h_{n,i})$, for $i=1,2,3,...,m_n;~ n\\in\\mathbb{N},$ and\nlet $x\\in K(\\mathcal{X})$. Then, we compute\n\\begin{eqnarray*}\nx&=&K(K^\\dagger\\vert_{K(\\mathcal{X})}(x))\n\\\\\n&=&\\lim\\limits_{n \\rightarrow \\infty}\\sum\\limits_{i=1}^{m_n}h_{n,i}(K^\\dagger\\vert_{K(\\mathcal{X})}(x))x_i\n\\\\\n&=&\\lim\\limits_{n \\rightarrow \\infty}\\sum\\limits_{i=1}^{m_n}((K^\\dagger\\vert_{K(\\mathcal{X})})^*h_{n,i})(x)x_i\n\\\\\n&=&\\lim\\limits_{n \\rightarrow \\infty}\\sum\\limits_{i=1}^{m_n}f_{n,i}(x)x_i.\n\\end{eqnarray*}\n\\indent Now, we will show that $\\lbrace f_{n,i}\\rbrace$ is an approximative Bessel sequence for $K(\\mathcal{X})$ with respect to $\\mathcal{X}_d$. 
Let $x\\in K(\\mathcal{X}).$ Then we have\n\\begin{eqnarray*}\n\\lbrace f_{n,i}(x)\\rbrace=\\lbrace h_{n,i}(K^\\dagger\\vert_{K(\\mathcal{X})}(x))\\rbrace\\in \\mathcal{X}_d, \\ \\text{for all} \\ x\\in K(\\mathcal{X})\n\\end{eqnarray*}\nand\n\\begin{eqnarray*}\n\\Vert\\lbrace f_{n,i}(x)\\rbrace\\Vert&=&\\Vert\\lbrace h_{n,i}(K^\\dagger\\vert_{K(\\mathcal{X})}(x))\\rbrace\\Vert\\leq C\\Vert K^\\dagger\\vert_{K(\\mathcal{X})}(x)\\Vert\n\\\\\n&\\leq&C\\Vert K^\\dagger\\Vert\\Vert x\\Vert, \\ \\text{for all} \\ x\\in K(\\mathcal{X}).\n\\end{eqnarray*}\nAlso, $(K^\\dagger\\vert_{K(\\mathcal{X})})^* K^*=I_{[K(\\mathcal{X})]^*}$. Let $h \\in [K(\\mathcal{X})]^*$. Then\n\\begin{eqnarray*}\n\\Vert h\\Vert&=&\\Vert(K^\\dagger\\vert_{K(\\mathcal{X})})^* K^*(h)\\Vert\\leq \\Vert K^\\dagger\\Vert\\Vert K^*(h)\\Vert\n\\\\\n&=&\\Vert K^\\dagger\\Vert\\sup\\limits_{x\\in K(\\mathcal{X}),\\Vert x\\Vert=1}\\vert h(\\lim\\limits_{n \\rightarrow \\infty}\\sum\\limits_{i=1}^{m_n} h_{n,i}(x)x_i)\\vert\n\\\\\n&\\leq& C\\Vert K^\\dagger\\Vert\\Vert \\lbrace h(x_i)\\rbrace\\Vert.\n\\end{eqnarray*}\nLet $h\\in [K(\\mathcal{X})]^*.$ Then, for $N\\in \\mathbb{N},$ we have\n\\begin{eqnarray*}\n\\Vert h-\\sum\\limits_{i=1}^{m_N}h(x_i)h_{N,i}\\Vert&=&\\sup\\limits_{x\\in K(\\mathcal{X}),\\Vert x\\Vert=1}\\vert h(x)- \\sum\\limits_{i=1}^{m_N}h(x_i)h_{N,i}(x)\\vert\n\\\\\n&\\leq&C\\Vert K^\\dagger\\Vert\\Vert \\sum\\limits_{i=m_N+1}^{\\infty}h(x_i)e_i^*\\Vert\n\\\\\n&\\rightarrow& 0 \\ \\text{as} \\ N \\rightarrow \\infty.\n\\end{eqnarray*}\n\\indent Thus, $h=\\lim\\limits_{n \\rightarrow \\infty}\\sum\\limits_{i=1}^{m_n}h(x_i)h_{n,i}$, for all $h\\in [K(\\mathcal{X})]^*$.\n\\end{proof}\nIf $\\lbrace x_n\\rbrace$ is an approximative atomic 
system for $K$ and $K$ has a pseudoinverse $K^\\dagger$, then for $h_{n,i}=f_i$, $\\lbrace x_n\\rbrace$ is a frame as well as an approximative family of local atoms for $K(\\mathcal{X})$.\\\\\nThe following theorem gives a necessary condition for the existence of an approximative atomic system for a bounded linear operator $K$.\n\\begin{thm}\\label{T2}\nIf $\\lbrace x_n\\rbrace$ is an approximative atomic system for $K$, then there exists a bounded linear operator $T: \\mathcal{X}_d\\rightarrow \\mathcal{X}$ such that $T(e_n)=x_n$, $n\\in\\mathbb{N}$, and $RangeK^{**}\\subseteq RangeT^{**}$, where $\\lbrace e_n\\rbrace$ is the sequence of canonical unit vectors as a basis of $\\mathcal{X}_d$.\n\\end{thm}\n\\begin{proof}\nSince $\\lbrace x_n\\rbrace$ is an approximative atomic system for $K$, $T:\\mathcal{X}_d\\rightarrow \\mathcal{X}$ given by $T(\\lbrace h_{n,i}\\rbrace) =\\lim\\limits_{n \\rightarrow \\infty}\\sum\\limits_{i=1}^{m_n}h_{n,i}x_i$ is a well-defined bounded linear operator such that $T(e_i)=x_i$, $i\\in\\mathbb{N}$. 
Also, there exists an approximative $\\mathcal{X}_d$-Bessel sequence $\\lbrace h_{n,i}\\rbrace$ for $\\mathcal{X}$ with bound $B$ such that\n\\begin{eqnarray*}\nK(x)=\\lim\\limits_{n \\rightarrow \\infty}\\sum\\limits_{i=1}^{m_n}h_{n,i}(x)x_i, \\ \\text{for all} \\ x\\in \\mathcal{X}.\n\\end{eqnarray*}\nThen, for $h\\in \\mathcal{X}^{*}$, we have\n\\begin{eqnarray*}\n\\Vert K^*(h)\\Vert&=&\\sup\\limits_{x\\in \\mathcal{X},\\Vert x\\Vert=1}\\vert K^*h(x)\\vert=\\sup\\limits_{x\\in \\mathcal{X},\\Vert x\\Vert=1}\\vert\\lim\\limits_{n \\rightarrow \\infty}\\sum\\limits_{i=1}^{m_n}h_{n,i}(x)h(x_i)\\vert\n\\\\\n&\\leq&B\\Vert \\lbrace h(x_i)\\rbrace\\Vert=B\\Vert T^*(h)\\Vert.\n\\end{eqnarray*}\nHence, by Lemma \\ref{7}, we have $RangeK^{**}\\subseteq RangeT^{**}$.\n\\end{proof}\nGavruta \\cite{12} proved that, for a separable Hilbert space $\\mathcal{H}$, a sequence $\\lbrace x_n\\rbrace\\subseteq \\mathcal{H}$ is an atomic system for $K$ if and only if there exists a bounded linear operator $L:l^2\\rightarrow \\mathcal{H}$ such that $L(e_n)=x_n$ and $Range(K)\\subseteq Range(L)$, where $\\lbrace e_n\\rbrace$ is an orthonormal basis for $l^2$. Towards the converse of Theorem \\ref{T2}, we have the following theorem.\n\\begin{thm}\nLet $\\mathcal{X}_d$ be reflexive, let $\\mathcal{X}_d^*$ have a sequence of canonical unit vectors $\\lbrace e_n^*\\rbrace$ as a basis and let $\\lbrace x_n\\rbrace$ be a sequence in $\\mathcal{X}$. Let $K\\in L(\\mathcal{X})$ and let $T:\\mathcal{X}_d\\rightarrow \\mathcal{X}$ be a bounded linear operator with $T(e_n)=x_n,~n \\in \\mathbb{N}$, such that $RangeK^{**}\\subseteq RangeT^{**}$ and $\\overline{T^*(\\mathcal{X}^*)}$ is a complemented subspace of $\\mathcal{X}_d^*$. 
Then $\\lbrace x_n\\rbrace$ is an approximative atomic system for $K$.\n\\end{thm}\n\\begin{proof}\nThe bounded linear operator $T:\\mathcal{X}_d\\rightarrow \\mathcal{X}$ is given by\n\\begin{eqnarray*}\nT(\\lbrace \\alpha_n\\rbrace)=\\sum\\limits_{n=1}^{\\infty}\\alpha_nx_n,~~\\lbrace \\alpha_n\\rbrace \\in \\mathcal{X}_d.\n\\end{eqnarray*}\nSo, by Observation (IV), $\\lbrace x_n\\rbrace$ is a Bessel sequence with bound, say, $B$. Since $RangeK^{**}\\subseteq RangeT^{**}$ and $\\overline{T^*(\\mathcal{X}^*)}$ is a complemented subspace of $\\mathcal{X}_d^*$, by Lemma \\ref{7}, there exists a bounded linear operator $\\theta:\\mathcal{X}_d^*\\rightarrow \\mathcal{X}^*$ such that $K^*=\\theta T^*$.\n Take $h_{n,i}=\\theta(e_i^*)$, for $i=1,2,3,...,m_n;~ n\\in\\mathbb{N}$. Then, for $x\\in \\mathcal{X}$, we get\n\\begin{eqnarray*}\n\\theta^*(\\pi(x))(e_i^*)=\\pi(x)(\\theta(e_i^*))=h_{n,i}(x),~ i=1,2,3,...,m_n;~ n\\in\\mathbb{N}.\n\\end{eqnarray*}\nThis gives $\\lbrace h_{n,i}(x)\\rbrace=\\theta^*(\\pi(x))\\in \\mathcal{X}_d, \\ \\text{for all} \\ x\\in \\mathcal{X}$\nand $\\Vert\\lbrace h_{n,i}(x)\\rbrace\\Vert=\\Vert\\theta^*(\\pi(x))\\Vert\\leq\\Vert \\theta\\Vert\\Vert x\\Vert, \\ \\text{for all} \\ x\\in \\mathcal{X}.$\nAlso, for $h\\in \\mathcal{X}^*,$ we have\n\\begin{eqnarray*}\nK^*(h)=\\theta(\\lbrace h(x_n)\\rbrace)=\\theta(\\sum\\limits_{n=1}^{\\infty}h(x_n)e_n^*)\n&=&\\lim\\limits_{n \\rightarrow \\infty}\\sum\\limits_{i=1}^{m_n}h(x_i)h_{n,i},\n\\end{eqnarray*} and for $x\\in \\mathcal{X}$ and $n\\in \\mathbb{N},$ we compute\n\\begin{eqnarray*}\n\\Vert K(x)-\\sum\\limits_{i=1}^{m_n}h_{n,i}(x)x_i\\Vert&=&\\sup\\limits_{h\\in \\mathcal{X}^*, \\Vert h\\Vert=1}\\vert h(K(x))-\\sum\\limits_{i=1}^{m_n}h_{n,i}(x)h(x_i)\\vert\n\\\\\n&=&\\sup\\limits_{h\\in \\mathcal{X}^*, \\Vert h\\Vert=1}\\vert 
\\sum\\limits_{i=m_n+1}^{\\infty}h_{n,i}(x)h(x_i)\\vert\n\\\\\n&\\leq&B\\Vert \\sum\\limits_{i=m_n+1}^{\\infty}h_{n,i}(x)e_i\\Vert\\rightarrow 0 \\ \\text{as} \\ n\\rightarrow\\infty.\n\\end{eqnarray*}\n\\indent Hence, $K(x)=\\lim\\limits_{n \\rightarrow \\infty}\\sum\\limits_{i=1}^{m_n}h_{n,i}(x)x_i$, for all $x\\in \\mathcal{X}$.\n\\end{proof}\nThe following theorem gives a complete characterization of approximative families of local atoms for a closed subspace of a Banach space.\n\\begin{thm}\nLet $\\lbrace x_n\\rbrace$ be a Bessel sequence for $\\mathcal{X}$ with respect to $\\mathcal{X}_d$, let $\\mathcal{X}_d^*$ have a sequence of canonical unit vectors $\\lbrace e_n^*\\rbrace$ as a basis and let $M$ be a closed subspace of $\\mathcal{X}$. Suppose there exists a projection $P$ from $\\mathcal{X}$ onto $M$. Then, the following statements are equivalent:\n\\begin{enumerate}[(a)]\n\\item $\\lbrace x_n\\rbrace$ is an approximative family of local atoms for $M$.\n\\item $\\lbrace x_n\\rbrace$ is an approximative atomic system for $P$.\n\\item There exists a bounded linear operator $U:M \\rightarrow \\mathcal{X}_d$ such that $TUP=P$, where $T$ is the synthesis operator of the Bessel sequence $\\lbrace x_n\\rbrace$.\n\\end{enumerate}\n\\end{thm}\n\\begin{proof}\n$(a)\\Rightarrow(b)$ Since $\\lbrace x_n\\rbrace$ is an approximative family of local atoms for $M$, there exists an approximative $\\mathcal{X}_d$-Bessel sequence $\\{h_{n,i}\\}\\underset{n \\in \\mathbb{N}}{_{i=1,2,3,...,m_n}}$ for $M$ such that\n\\begin{eqnarray*}\nx=\\lim\\limits_{n \\rightarrow \\infty}\\sum\\limits_{i=1}^{m_n}h_{n,i}(x)x_i, \\ \\text{for all} \\ x\\in M.\n\\end{eqnarray*}\nNow, for each $y\\in \\mathcal{X}$, $P(y)\\in M$ and\n\\begin{eqnarray*}\nP(y)=\\lim\\limits_{n \\rightarrow \\infty}\\sum\\limits_{i=1}^{m_n}h_{n,i}(P(y))x_i=\\lim\\limits_{n \\rightarrow \\infty}\\sum\\limits_{i=1}^{m_n}(P^*(h_{n,i}))(y)x_i, \\ \\text{for all} \\ y\\in \\mathcal{X}.\n\\end{eqnarray*}\n\\indent Clearly $\\lbrace 
P^*(h_{n,i})\\rbrace$ is an approximative $\\mathcal{X}_d$-Bessel sequence for $\\mathcal{X}$. Hence, $\\lbrace x_n\\rbrace$ is an approximative atomic system for $P$.\n\\\\\n$(b)\\Rightarrow (a)$ Since $\\{x_n\\}$ is an approximative atomic system for $P$ and $x=P(x)$, for all $x\\in M$, the proof follows.\n\\\\\n$(a)\\Rightarrow (c)$ Since $\\{x_n\\}$ is an approximative family of local atoms for $M$, there exists an approximative $\\mathcal{X}_d$-Bessel sequence $\\{h_{n,i}\\}\\underset{n \\in \\mathbb{N}}{_{i=1,2,3,...,m_n}}$ for $M$ such that\n\\begin{eqnarray*}\nP(x)=\\lim\\limits_{n \\rightarrow \\infty}\\sum\\limits_{i=1}^{m_n}h_{n,i}(P(x))x_i, \\ \\text{for all} \\ x\\in \\mathcal{X}.\n\\end{eqnarray*}\nLet $U:M\\rightarrow \\mathcal{X}_d$ be the analysis operator for $\\lbrace h_{n,i}\\rbrace$ given by $U(P(x))=\\lbrace h_{n,i}(P(x))\\rbrace,~ x\\in \\mathcal{X}.$ Also, $\\lbrace h_{n,i}(P(x))\\rbrace\\in \\mathcal{X}_d$, for all $x\\in \\mathcal{X}$. Hence\n\\begin{eqnarray*}\nP(x)&=&\\lim\\limits_{n \\rightarrow \\infty}\\sum\\limits_{i=1}^{m_n}h_{n,i}(P(x))x_i=T(\\lbrace h_{n,i}(P(x))\\rbrace)\n\\\\\n&=&TUP(x), \\ \\text{for all} \\ x\\in \\mathcal{X}.\n\\end{eqnarray*}\\\\\n$(c)\\Rightarrow (a)$ Suppose $U:M\\rightarrow \\mathcal{X}_d$ is a bounded linear operator such that $TUP=P$. Take $h_{n,i}=U^*(e_i^*)$, for $i=1,2,3,...,m_n;~ n\\in\\mathbb{N}$, and let $x\\in M$. Then $h_{n,i}(x)=U^*(e_i^*)(x)=e_i^*(U(x)).$\nTherefore, $\\lbrace h_{n,i}(x)\\rbrace=U(x)\\in \\mathcal{X}_d$, for all $x\\in M$ and\n\\begin{eqnarray*}\n\\Vert \\lbrace h_{n,i}(x)\\rbrace\\Vert=\\Vert U(x)\\Vert\\leq\\Vert U\\Vert\\Vert x\\Vert, \\ \\text{for all} \\ x\\in M.\n\\end{eqnarray*}\nAlso, for each $x \\in M,$ there exists $y\\in \\mathcal{X}$ such that $x=P(y)$. 
Moreover, $\\lbrace h_{n,i}(P(y))\\rbrace\\in \\mathcal{X}_d$ for all $y\\in \\mathcal{X}$ and\n\\begin{eqnarray*}\nx=TUP(y)=T(\\lbrace h_{n,i}(P(y))\\rbrace)=\\lim\\limits_{n \\rightarrow \\infty}\\sum\\limits_{i=1}^{m_n}h_{n,i}(P(y))x_i.\n\\end{eqnarray*}\n\\end{proof}\nNext, we shall show that if $\\lbrace x_n\\rbrace$ is an approximative atomic system for $K$ with $K=I_\\mathcal{X}$, then every complemented subspace of $\\mathcal{X}$ has an approximative family of local atoms; moreover, if $h_{n,i}=f_i$, then $\\{x_n\\}$ is an atomic system for $K$ and every complemented subspace of $\\mathcal{X}$ has a family of local atoms.\n\\begin{thm}\nLet $\\lbrace x_n\\rbrace$ be a frame for $\\mathcal{X}$ with respect to $\\mathcal{X}_d$. If there exists an approximative $\\mathcal{X}_d$-Bessel sequence $\\lbrace h_{n,i}\\rbrace$ such that\n\\begin{eqnarray*}\nx=\\lim\\limits_{n \\rightarrow \\infty}\\sum\\limits_{i=1}^{m_n}h_{n,i}(x)x_i, \\ \\text{for all} \\ x\\in \\mathcal{X},\n\\end{eqnarray*}\nthen every complemented subspace of $\\mathcal{X}$ has an approximative family of local atoms.\n\\end{thm}\n\\begin{proof}\nLet $\\{x_n\\}$ be a frame for $\\mathcal{X}$ and $T$ be the synthesis operator of $\\{x_n\\}$. Then, by the given hypothesis, there exists an approximative $\\mathcal{X}_d$-Bessel sequence $\\lbrace h_{n,i}\\rbrace$ such that\n\\begin{eqnarray*}\nx=\\lim\\limits_{n \\rightarrow \\infty}\\sum\\limits_{i=1}^{m_n}h_{n,i}(x)x_i, \\ \\text{for all} \\ x\\in \\mathcal{X}.\n\\end{eqnarray*}\nLet $S$ be the analysis operator of $\\lbrace h_{n,i}\\rbrace$. Note that $I_\\mathcal{X}=TS$. Let $M$ be a complemented subspace of $\\mathcal{X}$ and $P$ be the projection from $\\mathcal{X}$ onto $M$. Take $y_n=P(x_n)$, for $n\\in \\mathbb{N}$. 
Define $T_1:\\mathcal{X}_d\\rightarrow M$ by $T_1=P\\circ T$.\nThen clearly $T_1$ is a bounded linear operator from $\\mathcal{X}_d$ onto $M$ such that\n\\begin{eqnarray*}\nT_1(\\lbrace c_n\\rbrace)=P(\\sum\\limits_{n=1}^{\\infty}c_nx_n)=\\sum\\limits_{n=1}^{\\infty}c_ny_n,~~ \\lbrace c_n\\rbrace\\in \\mathcal{X}_d.\n\\end{eqnarray*}\nThus, $\\lbrace y_n\\rbrace$ is a frame for $M$ and a Bessel sequence for $\\mathcal{X}$ with respect to $\\mathcal{X}_d$. Let $N=S(M)$ and let $\\alpha\\in \\mathcal{X}_d$. Then, $T_1(\\alpha)=x,$ for some $x\\in M.$ Now take $\\alpha_1=S(x).$ Then, $\\alpha_1\\in N.$ Also, if $\\alpha_0=\\alpha-\\alpha_1,$ then\n\\begin{eqnarray*}\nT_1(\\alpha_0)=T_1(\\alpha)-T_1(\\alpha_1)\n=x-P(x)=0.\n\\end{eqnarray*}\nThis gives $\\alpha_0 \\in \\ker T_1.$ Thus $\\alpha=\\alpha_0+\\alpha_1\\in \\ker T_1+N$. Now, let $\\lambda\\in N\\cap \\ker T_1$. Then $T_1(\\lambda)=0$ and there exists $x_1\\in M$ such that $\\lambda=S(x_1)$. Therefore\n\\begin{eqnarray*}\nx_1=P(x_1)=PTS(x_1)=T_1(\\lambda)=0.\n\\end{eqnarray*}\nThus, $\\lambda=0$ and so $N\\cap \\ker T_1=\\lbrace 0\\rbrace$. Therefore $\\mathcal{X}_d=N\\oplus \\ker T_1$. Now, by Lemma \\ref{PI}, $T_1$ has a pseudoinverse $T_1^\\dagger$. Moreover, $T_1T_1^\\dagger$ is a projection from $M$ onto $T_1(\\mathcal{X}_d)=M$. This gives $T_1T_1^\\dagger=I_M$. Let $\\lbrace z_n\\rbrace$ be the sequence of coordinate functionals on $\\mathcal{X}_d$. Take $f_{n,i}=(T_1^\\dagger)^*(z_i)$, for $i=1,2,3,...,m_n;~ n\\in \\mathbb{N}$, and let $x\\in M$. 
Then\n\\begin{eqnarray*}\nf_{n,i}(x)=(T_1^\\dagger)^*(z_n)(x)=z_n(T_1^\\dagger(x)), \\ \\text{for} \\ n\\in\\mathbb{N}, \\ x\\in M.\n\\end{eqnarray*}\nTherefore $\\lbrace f_{n,i}(x)\\rbrace=T_1^\\dagger(x)\\in \\mathcal{X}_d$, for all $x\\in M$ and\n\\begin{eqnarray*}\n\\Vert\\lbrace f_{n,i}(x)\\rbrace\\Vert=\\Vert T_1^\\dagger(x)\\Vert\\leq\\Vert T_1^\\dagger\\Vert\\Vert x\\Vert, \\ \\text{for all} \\ x\\in M.\n\\end{eqnarray*}\nThus $\\lbrace f_{n,i}\\rbrace$ is an approximative $\\mathcal{X}_d$-Bessel sequence for $M$. Therefore, we compute\n\\begin{eqnarray*}\nx=T_1T_1^\\dagger(x)=T_1(\\lbrace f_{n,i}(x)\\rbrace)=\\lim\\limits_{n \\rightarrow \\infty}\\sum\\limits_{i=1}^{m_n}f_{n,i}(x)y_i, \\ \\text{for all} \\ x\\in M.\n\\end{eqnarray*}\n\\end{proof}\nLet $\\lbrace x_n\\rbrace$ be a Bessel sequence for $\\mathcal{X}$ with respect to $\\mathcal{X}_d$ and let $T$ be its associated synthesis operator. Also, let $\\lbrace h_{n,i}\\rbrace$ be an approximative $\\mathcal{X}_d$-Bessel sequence for $\\mathcal{X}$ with associated analysis operator $S$. Then $TS: \\mathcal{X} \\rightarrow \\mathcal{X}$ is an operator such that\n\\begin{eqnarray*}\nTS(x)=\\lim\\limits_{n \\rightarrow \\infty}\\sum\\limits_{i=1}^{m_n}h_{n,i}(x)x_i, \\ \\text{for all} \\ x\\in \\mathcal{X}.\n\\end{eqnarray*}\nNote that $\\lbrace x_n\\rbrace$ is an approximative atomic system for $K$ in $\\mathcal{X}$ with its associated approximative $\\mathcal{X}_d$-Bessel sequence $\\lbrace h_{n,i}\\rbrace$ if and only if\n\\begin{eqnarray}\\label{E2}\nTS=K.\n\\end{eqnarray} Also, one may observe that, in the case $\\{h_{n,i}\\}=\\{f_i\\}$, $\\{x_n\\}$ is an atomic system for $K$ in $\\mathcal{X}$ with its associated $\\mathcal{X}_d$-Bessel sequence $\\{f_i\\}$.\\\\\nThe following theorem gives a method to construct an approximative atomic system for $K$ from a given Bessel sequence.\n\\begin{thm}\nLet $\\mathcal{X}_d^*$ have a sequence of canonical unit vectors $\\lbrace e_n^*\\rbrace$ as a basis.
Let $\\lbrace x_n\\rbrace$ be a Bessel sequence for $\\mathcal{X}$ with respect to $\\mathcal{X}_d$ and $T:\\mathcal{X}_d\\rightarrow \\mathcal{X}$ be its synthesis operator such that $T$ has a pseudo-inverse $T^\\dagger$. Let $K\\in L(\\mathcal{X})$ with $K(\\mathcal{X})\\subseteq T(\\mathcal{X}_d)$. Then $\\mathcal{X}$ has an approximative atomic system for $K.$\n\\end{thm}\n\\begin{proof}\nSince $T$ has a pseudo-inverse $T^\\dagger$, by Lemma \\ref{5}, $TT^\\dagger$ is a projection from $\\mathcal{X}$ onto $T(\\mathcal{X}_d)$. Define $S:\\mathcal{X}\\rightarrow \\mathcal{X}_d$ as\n\\begin{eqnarray}\\label{E3}\nS=T^\\dagger K+W-T^\\dagger TW,\n\\end{eqnarray}\nwhere $W:\\mathcal{X} \\rightarrow \\mathcal{X}_d$ is a bounded linear operator. Then, we compute\n\\begin{eqnarray*}\nTS&=&T(T^\\dagger K+W-T^\\dagger TW)\n\\\\\n&=&TT^\\dagger K+TW-TT^\\dagger TW\n\\\\\n&=&TT^\\dagger K=K.\n\\end{eqnarray*}\nNote that $T^*=J_d^{-1}\\circ R$ and $\\lbrace J_de_n^*\\rbrace$ is a basis of $\\mathcal{Y}_d$.\n Taking $h_{n,i}=S^*(e_n^*)$, $f_{n,i}=(T^\\dagger)^*(e_n^*)$ and $g_n=W^*(e_n^*)$, for $n\\in \\mathbb{N},$ we compute\n\\begin{eqnarray*}\nT^*(T^\\dagger)^*e_n^*=T^*(f_{n,i})=J_d(R(f_{n,i}))=\\lim\\limits_{n \\rightarrow \\infty}\\sum\\limits_{i=1}^{m_n}f_{n,i}(x_i)e_i^*\n\\end{eqnarray*}\nand\n\\begin{eqnarray*}\nh_{n,i}&=&K^*(T^\\dagger)^*(e_n^*)+W^*(e_n^*)-W^*T^*(T^\\dagger)^*(e_n^*)\n\\\\\n&=&K^*f_{n,i}+g_n-\\lim\\limits_{n \\rightarrow \\infty}\\sum\\limits_{i=1}^{m_n}f_{n,i}(x_i)g_i, \\ \\text{for all} \\ n\\in \\mathbb{N}.\n\\end{eqnarray*}\nAlso, for $x\\in \\mathcal{X}$, we have\n\\begin{eqnarray*}\nh_{n,i}(x)=S^*(e_n^*)(x)=e_n^*(S(x)), \\ \\text{for all} \\ n\\in \\mathbb{N}.\n\\end{eqnarray*}\nTherefore $\\lbrace h_{n,i}(x)\\rbrace=S(x)\\in \\mathcal{X}_d$, for all $x \\in \\mathcal{X}$.
Also,\n\\begin{eqnarray*}\n\\Vert\\lbrace h_{n,i}(x)\\rbrace\\Vert&=&\\Vert S(x)\\Vert\n\\\\\n&\\leq&\\Vert T^\\dagger K+W-T^\\dagger TW\\Vert\\Vert x\\Vert,~x \\in \\mathcal{X}.\n\\end{eqnarray*}\nThus, $\\lbrace h_{n,i}\\rbrace$ is an approximative $\\mathcal{X}_d$-Bessel sequence and\n\\begin{eqnarray*}\n\\lim\\limits_{n \\rightarrow \\infty}\\sum\\limits_{i=1}^{m_n}h_{n,i}(x)x_i&=&T(\\lbrace h_{n,i}(x)\\rbrace)=T(\\lbrace S^*(e_n^*)(x)\\rbrace)\n\\\\\n&=&T(\\lbrace e_n^*(S(x))\\rbrace)=TS(x)\n\\\\\n&=&K(x), \\ x\\in \\mathcal{X}.\n\\end{eqnarray*}\nHence, $\\lbrace x_n\\rbrace$ is an approximative atomic system for $K$.\n\\end{proof}\nNote that (\\ref{E3}) is the general formula for all operators $S$ satisfying the equality (\\ref{E2}). As a result of this, let $U_\\circ$ be a linear operator such that $TU_\\circ=K$ and take $W=U_\\circ$ on the right-hand side of (\\ref{E3}); then\n\\begin{eqnarray*}\nT^\\dagger K+U_\\circ-T^\\dagger TU_\\circ=T^\\dagger K+U_\\circ-T^\\dagger K=U_\\circ.\n\\end{eqnarray*}\n\\indent The following theorem shows that we can construct an approximative atomic system for $K$ from a given approximative $\\mathcal{X}_d$-Bessel sequence.\n\\begin{thm}\n Let $\\mathcal{X}_d$ be a BK-space, let $\\lbrace e_n\\rbrace$ be the sequence of canonical unit vectors, a basis of $\\mathcal{X}_d$, and let $\\mathcal{X}_d^*$ have the sequence of canonical unit vectors $\\lbrace e_n^*\\rbrace$ as a basis. Let $\\lbrace h_{n,i}\\rbrace$ be an approximative $\\mathcal{X}_d$-Bessel sequence for $\\mathcal{X}$ with analysis operator $S:\\mathcal{X} \\rightarrow \\mathcal{X}_d$, and suppose that $S$ has a pseudo-inverse $S^\\dagger$. Let $K\\in L(\\mathcal{X})$ and $K^*(\\mathcal{X}^*)=[h_{n,i}]$.
Then, $\\mathcal{X}$ has an approximative atomic system for $K.$\n\\end{thm}\n\\begin{proof} By hypothesis, $\\lbrace h_{n,i}\\rbrace$ is an approximative $\\mathcal{X}_d$-Bessel sequence for $\\mathcal{X}$ whose analysis operator $S:\\mathcal{X} \\rightarrow \\mathcal{X}_d$ has a pseudo-inverse $S^\\dagger.$\nDefine $T:\\mathcal{X}_d \\rightarrow \\mathcal{X}$ by\n\\begin{eqnarray}\\label{E4}\nT=KS^\\dagger +W(I-SS^\\dagger),\n\\end{eqnarray}\nwhere $W:\\mathcal{X}_d\\rightarrow \\mathcal{X}$ is a bounded linear operator. Then, we compute\n\\begin{eqnarray*}\nTS&=&(KS^\\dagger +W(I-SS^\\dagger))S\n\\\\\n&=&KS^\\dagger S+WS-WSS^\\dagger S\n\\\\\n&=&KS^\\dagger S.\n\\end{eqnarray*}\nAlso, for $x\\in \\mathcal{X}$, $S^*:\\mathcal{X}_d^*\\rightarrow \\mathcal{X}^*$ is given by\n\\begin{eqnarray*}\nS^*(e_n^*)(x)=e_n^*(S(x))=h_{n,i}(x), \\ n\\in \\mathbb{N}.\n\\end{eqnarray*}\nSo, we have\n\\begin{eqnarray*}\n[h_{n,i}]=[S^*(e_n^*)]=S^*(\\mathcal{X}_d^*)=K^*(\\mathcal{X}^*).\n\\end{eqnarray*}\nSince $S$ has a pseudo-inverse $S^\\dagger$, $S^*$ has a pseudo-inverse $(S^\\dagger)^*$. So,\n\\begin{eqnarray*}\nS^*(S^\\dagger)^*S^*(\\mathcal{X}_d^*)=S^*(\\mathcal{X}_d^*)=K^*(\\mathcal{X}^*).\n\\end{eqnarray*}\nThus, we conclude that $S^*(S^\\dagger)^*K^*=K^*$. Therefore, $TS=KS^\\dagger S=K$.\nLet $x_n=T(e_n)$, $y_n=S^\\dagger(e_n)$ and $l_n=W(e_n)$, for $n\\in \\mathbb{N}$. Then\n\\begin{eqnarray*}\nx_n&=&KS^\\dagger(e_n)+W(I-SS^\\dagger)(e_n)\n\\\\\n&=&K(y_n)+l_n-\\lim\\limits_{n \\rightarrow \\infty}\\sum\\limits_{i=1}^{m_n}h_{n,i}(y_i)l_i, ~~n\\in \\mathbb{N}.\n\\end{eqnarray*}\nAlso, $T: \\mathcal{X}_d\\rightarrow \\mathcal{X}$ is a bounded linear operator given by\n\\begin{eqnarray*}\nT(\\lbrace h_{n,i}\\rbrace)=\\lim\\limits_{n \\rightarrow \\infty}\\sum\\limits_{i=1}^{m_n}h_{n,i} x_i,~~ \\lbrace h_{n,i}\\rbrace\\in \\mathcal{X}_d.\n\\end{eqnarray*}\nBy Observation III, $\\lbrace x_i\\rbrace$ is a Bessel sequence.
Thus, we get\n\\begin{eqnarray*}\nK(x)=TS(x)=T(\\lim\\limits_{n \\rightarrow \\infty}\\sum\\limits_{i=1}^{m_n}h_{n,i}(x)e_i)\n=\\lim\\limits_{n \\rightarrow \\infty}\\sum\\limits_{i=1}^{m_n}h_{n,i}(x)x_i, \\ x \\in \\mathcal{X}.\n\\end{eqnarray*}\n\\indent Hence $\\lbrace x_n\\rbrace$ is an approximative atomic system for $K$.\n\\end{proof}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Introduction}\n\nIt is expected that much of the dynamics of the small-amplitude solutions of a partial differential equation (PDE), including their stability or instability, is dictated by the study of a problem linearized about a trivial solution, say $u=0$. In this article, we focus specifically on the spectral stability of periodic traveling-wave solutions of Hamiltonian PDEs as they bifurcate away from a trivial solution. Our work follows earlier ideas of MacKay \\cite{mackay} and MacKay and Saffman \\cite{mackaysaffman}. We start from an autonomous Hamiltonian system of PDEs \\cite{arnold}, {\\em i.e.},\n\n\\begin{equation}\\la{ham}\nu_t=J \\dd{H}{u}.\n\\end{equation}\n\n\\noindent \\sloppypar Here and throughout, indices involving $x$ or $t$ denote partial derivatives. Further, $u=(u_1(x,t), \\ldots, u_M(x,t))^T$ is an $M$-dimensional vector function defined in a suitable function space, and $J$ is a Poisson operator \\cite{arnold, arnoldnovikov}. More details and examples are given below. Finally, $H=\\int_D {\\cal H}(u,u_x, \\ldots)dx$ is the Hamiltonian, whose density ${\\cal H}$ depends on $u$ and its spatial derivatives, defined for $x\\in D$.
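As a concrete instance of the form \\rf{ham} (an illustrative sketch only; the choice of equation, grid size, and test profile below are ours, not taken from the examples treated later), the KdV equation in the sign convention $u_t=6uu_x+u_{xxx}$ is obtained from $M=1$, $J=\\partial_x$ and $H=\\int_D(u^3-\\frac{1}{2}u_x^2)dx$, since $\\delta H\/\\delta u=3u^2+u_{xx}$. The identity can be checked numerically with spectral differentiation:

```python
import numpy as np

# Check u_t = d/dx (dH/du) for KdV with H = int (u^3 - u_x^2/2) dx,
# where dH/du = 3 u^2 + u_xx, on a 2*pi-periodic grid.
N = 256
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
k = np.fft.fftfreq(N, d=1.0 / N)  # integer wavenumbers on [0, 2*pi)

def deriv(f, order=1):
    """Spectral derivative of a periodic grid function."""
    return np.real(np.fft.ifft((1j * k) ** order * np.fft.fft(f)))

u = np.cos(x) + 0.3 * np.sin(2 * x)      # arbitrary smooth test profile
lhs = deriv(3 * u**2 + deriv(u, 2))      # d/dx of the variational derivative
rhs = 6 * u * deriv(u) + deriv(u, 3)     # 6 u u_x + u_xxx
assert np.allclose(lhs, rhs)
```

Any smooth periodic profile works here; the check holds to round-off because spectral derivatives of trigonometric polynomials are exact.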
We consider only the stability of periodic solutions; thus $D$ is any interval of length $L$, the period.\nNote that for some of our examples ${\\cal H}$ will depend on spatial derivatives of $u$ of arbitrary order.\n\nTo investigate the stability of traveling wave solutions of this system, we reformulate \\rf{ham} in a frame moving with speed $c$, using the transformation $\\hat x=x-ct$, $\\hat t=t$, and considering solutions $u(\\hat x, \\hat t)=U(\\hat x)$ (subsequently omitting hats). This leads to\n\n\\begin{equation}\\la{hamc}\nu_t-c u_x=J \\dd{H}{u} ~~\\iff~~u_t=J \\dd{H_c}{u},\n\\end{equation}\n\n\\noindent for a modified Hamiltonian $H_c$. Traveling wave solutions are solutions of the ordinary differential system\n\n\\begin{equation}\\la{travham}\n-c U_x=J \\dd{H}{U} ~~\\iff~~0=J \\dd{H_c}{U}.\n\\end{equation}\n\n\\noindent Thus if $J$ is invertible, traveling waves are stationary points of the Hamiltonian $H_c$. The system~\\rf{travham} typically has the zero (trivial) solution for a range of $c$ values. The small-amplitude solutions whose stability we investigate bifurcate away from these trivial zero-amplitude solutions at special values of the speed parameter $c$, as is schematically shown in Fig.~\\ref{fig:bif}. Our goal is to determine to what extent anything can be said about the stability of the small-amplitude solutions (with amplitudes in the shaded regions of Fig.~\\ref{fig:bif}) from knowledge of the zero-amplitude solutions at the bifurcation point. An outline of the steps in this process is as follows.\n\n\\begin{figure}[tb]\n\\def\\svgwidth{5in}\n\\centerline{\\hspace*{0.3in}\\input{bif.pdf_tex}}\n\\caption{\\la{fig:bif} A cartoon of the bifurcation structure of the traveling waves for a third-order ($M=3$) system: solution branches bifurcate away from the trivial zero-amplitude solution at specific values of the traveling wave speed $c$.\n }\n\\end{figure}\n\n\\begin{enumerate}\n\n\\item {\\bf Quadratic Hamiltonian}.
A linear system of equations is obtained by linearizing the system \\rf{travham} around the zero solution: let $u=\\epsilon v+o(\\epsilon)$ and omit terms of order $o(\\epsilon)$. Alternatively, if $J$ is independent of $u$ and its spatial derivatives, one may expand the Hamiltonian $H_c$ as a function of $\\epsilon$ and retain its quadratic terms. The resulting Hamiltonian $H_c^0$ of the linearized system is the starting point for the next steps.\n\n\\item {\\bf Dispersion relation}. The linearized system has constant coefficients and is easily solved using Fourier analysis. The dispersion relation $F(\\omega, k)=0$ governs the time dependence of the solutions. It is obtained by investigating solutions whose spatial and temporal dependence is proportional to $\\exp(ikx-i \\omega t)$. Here $F(\\omega,k)$ is a polynomial of degree $M$ in $\\omega$. It is a fundamental assumption of our approach that all solutions $\\omega_j(k)$ ($j=1, \\ldots, M$) of $F(\\omega,k)=0$ are real for $k\\in \\mathbb{R}$. The dispersion relation can be expressed entirely in terms of the coefficients appearing in the quadratic Hamiltonian $H_c^0$. For periodic systems of period $L$, the values of $k$ are restricted to be of the form $2\\pi N\/L$, $N\\in \\mathbb{Z}$.\n\n\\item {\\bf Bifurcation branches}. The values of the phase speed $c_j=\\omega_j\/k$ for which nontrivial solutions bifurcate away from the zero-amplitude solution are determined by the condition that the zero solution is {\\em not} the unique solution to the Fourier transformed problem. In effect, this is the classical bifurcation condition that a Jacobian is singular. This simple calculation determines the bifurcation branch starting points explicitly in terms of the different solutions to the dispersion relation. In what follows, we follow the first branch, starting at $c_1$, without loss of generality.\n\n It is assumed that only a single non-trivial bifurcation branch emanates from a bifurcation point.
Although more general cases can be incorporated, we do not consider them here. Further, we fix the period of the solutions on the bifurcation branch (usually to $2\\pi$). Other choices can be made. Instead of varying the amplitude as a function of the speed for fixed period, one could fix the speed and vary the period, {\\em etc.} The methods presented can be redone for those scenarios in a straightforward fashion.\n\n\\item {\\bf Stability spectrum}. The spectrum of the linear operator determining the spectral stability of the zero solution at the bifurcation point on the first branch is calculated. Since this spectral problem has constant coefficients, this calculation can be done explicitly. Again, this is done entirely in terms of the dispersion relation of the problem. Using a Floquet decomposition (see \\cite{deconinckkutz1, kapitulapromislow}), the spectrum is obtained as a collection of point spectra, parameterized by the Floquet exponent $\\mu\\in (-\\pi\/L, \\pi\/L]$. Due to the reality of the branches of the dispersion relation, the spectrum is confined to the imaginary axis. In other words, the zero-amplitude solutions are spectrally stable. The use of the Floquet decomposition allows for the inclusion of perturbations that are not necessarily periodic with period $L$. Instead, the perturbations may be quasiperiodic with two incommensurate periods, subharmonic (periodic, but with period an integer multiple of $L$), or spatially localized \\cite{deconinckkutz1, haraguskapitula, kapitulapromislow}.\n\n\\item {\\bf Collision condition}. Given the explicit expression for individual eigenvalues $\\lambda$, it is easy to find the conditions under which eigenvalues corresponding to different parameters (Floquet exponent, branch number of the dispersion relation, {\\em etc}.) coincide on the imaginary axis. This is referred to as the {\\em collision condition}.
Once again, it is given entirely in terms of the dispersion relation.\n\n It is a consequence of the Floquet theorem \\cite{coddingtonlevinson} that collisions need to be considered only for spectral elements corresponding to the same value of the Floquet exponent since the subspaces of eigenfunctions for a fixed Floquet exponent are invariant under the flow of the linearized equation.\n\n\\item {\\bf Krein signature}. Having obtained the stability spectrum at the starting point of the bifurcation branches, we wish to know how the spectrum evolves as we move up a bifurcation branch. One tool to investigate this is the Krein signature \\cite{kollarmiller, krein1, krein2, mackay, meiss}. In essence, the Krein signature of an eigenvalue is the sign of the Hamiltonian of the linearized system evaluated on the eigenspace of the eigenvalue. Different characterizations are given below. If two imaginary eigenvalues of the same signature collide as a parameter changes, their collision does not result in them leaving the imaginary axis. Thus the collision of such eigenvalues does not result in the creation of unstable modes. In other words, it is a necessary condition for collisions to lead to instability that the Krein signature of the colliding eigenvalues is different. This scenario is illustrated in Fig.~\\ref{fig:krein}. That figure also illustrates the quadrifold symmetry of the stability spectrum of the solution of a Hamiltonian system: for each eigenvalue $\\lambda\\in \\mathbb{C}$, $\\lambda^*$, $-\\lambda$ and $-\\lambda^*$ are also eigenvalues. Here $\\lambda^*$ denotes the complex conjugate of $\\lambda$. 
It should be noted that the occurrence of a collision is required for eigenvalues to leave the imaginary axis, due to the quadrifold symmetry of the spectrum.\n\n\\begin{figure}[tb]\n\\def\\svgwidth{6in}\n\\centerline{\\hspace*{1in}\\input{krein.pdf_tex}}\n\\caption{\\la{fig:krein} Colliding eigenvalues in the complex plane as a parameter is increased. On the left, two eigenvalues are moving towards each other on the positive imaginary axis, accompanied by a complex conjugate pair on the negative imaginary axis. In the middle, the eigenvalues in each pair have collided. On the right, a Hamiltonian Hopf bifurcation occurs: the collided eigenvalues separate, leaving the imaginary axis (implying that the two Krein signatures were different).\n }\n\\end{figure}\n\n Thus we calculate the Krein signature of any coinciding eigenvalues, obtained in Step~5. If these Krein signatures are equal, the eigenvalues will remain on the imaginary axis as the amplitude is increased. Otherwise, the eigenvalues may leave the imaginary axis, through a so-called Hamiltonian Hopf bifurcation \\cite{vandermeer}, resulting in instability. Thus we establish a necessary condition for the instability of periodic solutions of small amplitude. The Krein signature condition cannot be expressed entirely in terms of the dispersion relation, and the coefficients of $H^0_c$ are required as well. Please refer to the next two sections for details.\n\n\\end{enumerate}\n\nAlthough all calculations are done for the zero-amplitude solutions at the starting point of a bifurcation branch, the continuous dependence of the stability spectrum on the parameters in the problem \\cite{hislop}, including the velocity of the traveling wave or the amplitude of the solutions, guarantees that the stability conclusions obtained persist for solutions of small amplitude.
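The collision scenario of Fig.~\\ref{fig:krein} is already visible in a four-dimensional toy model (a sketch of our choosing, not one of the examples treated later): two oscillator modes of equal frequency, coupled through a small term in the Hamiltonian. Computing the signature of an eigenvalue $\\lambda$ with eigenvector $v$ as the sign of $v^*Hv$, equal signatures keep the collided eigenvalues on the imaginary axis, while opposite signatures produce a Hamiltonian Hopf bifurcation:

```python
import numpy as np

# Linear Hamiltonian system x' = J H x with x = (q1, q2, p1, p2), canonical J.
J = np.block([[np.zeros((2, 2)), np.eye(2)], [-np.eye(2), np.zeros((2, 2))]])

def eigenvalues(sign2, eps):
    """Two unit-frequency oscillator modes; the second carries energy of sign
    `sign2`; `eps` couples q1 and q2 in the Hamiltonian."""
    H = np.diag([1.0, sign2, 1.0, sign2])
    H[0, 1] = H[1, 0] = eps
    return np.linalg.eigvals(J @ H)

# Equal Krein signatures: the collision at +/- i is harmless.
lam_same = eigenvalues(+1.0, 0.1)
assert np.max(np.abs(lam_same.real)) < 1e-8

# Opposite signatures: Hamiltonian Hopf bifurcation, eigenvalues leave the axis.
lam_opp = eigenvalues(-1.0, 0.1)
assert np.max(lam_opp.real) > 0.01

# Krein signatures at eps = 0: sign of v* H0 v for the eigenvectors at lambda = +i.
H0 = np.diag([1.0, -1.0, 1.0, -1.0])
v1 = np.array([1, 0, 1j, 0])   # mode 1: q1 = 1, p1 = i
v2 = np.array([0, 1, 0, -1j])  # mode 2: q2 = 1, p2 = -i
for v in (v1, v2):
    assert np.allclose(J @ H0 @ v, 1j * v)  # both are eigenvectors for +i
s1 = np.real(np.conj(v1) @ H0 @ v1)
s2 = np.real(np.conj(v2) @ H0 @ v2)
assert s1 > 0 > s2  # opposite signatures, so the collision may create instability
```

The specific frequencies and coupling strength are arbitrary; only the relative signs of the two modes' energies matter for the outcome.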
Thus meaningful conclusions about solutions in the shaded regions of the bifurcation branches of Fig.~\\ref{fig:bif} are reached, despite the fact that we are unable to say anything about the size of the maximal amplitude for which these conclusions are valid.\n\n\\vspace*{0.1in}\n\n{\\bf Remarks.}\n\n\\begin{itemize}\n\n\\item Throughout this introduction, and in fact throughout much of this manuscript, our emphasis is on generality and usability as opposed to rigor. As a consequence some of the statements made above are necessarily vague: more precise statements would limit the generality aimed for. Within the context of more specific examples, more precision may be possible. Along the same lines, we have limited ourselves to the most generic case of two eigenvalues colliding on the imaginary axis. Lastly, since we are assuming an initial ({\\em i.e.}, for a starting value of 0 for the amplitude parameter) situation that is neutrally stable, our only interest is in collisions on the imaginary axis. If eigenvalues are present off the imaginary axis for the zero-amplitude solution, such solutions are already spectrally unstable, and the continuous dependence of the spectrum on its parameters \\cite{hislop} guarantees instability for solutions of small amplitude as a consequence.\n\n\\item If eigenvalues collide at the origin, their Krein signature is zero (this follows immediately from its definition) and no conclusion can be drawn about whether or not the colliding eigenvalues leave the imaginary axis. Because of this, the methods discussed here allow only for the study of so-called {\\em high-frequency instabilities}. Indeed, if eigenvalues collide away from the origin, the unstable perturbations not only grow in magnitude exponentially, but also display an oscillation of frequency $\\omega$ that is equal to the non-zero imaginary part of the colliding eigenvalues. In this sense {\\em high frequency} is more accurately described as non-zero frequency.
We use the high-frequency name to distinguish the instabilities investigated here from the modulational instability \\cite{zakharovostrovsky}, which is a consequence of eigenvalues colliding at the origin. Further, as seen in the examples below, often the presence of one high-frequency instability is accompanied by a sequence of such instabilities with increasing frequencies tending to infinity.\n\n The study of the modulational instability requires a different set of techniques, see for instance \\cite{benjaminperiodic, whitham1, zakharovostrovsky} for different classical approaches. More recently, a general framework was developed by Bronski, Johnson and others (see {\\em e.g.} \\cite{bronskijohnson1, hurjohnson}). Collisions at the origin are common, since any Lie symmetry of the underlying problem gives rise to a zero eigenvalue in the stability problem \\cite{gss1, kapitulapromislow}. For this same reason, such collisions typically involve more than two eigenvalues (for instance, the original Benjamin-Feir instability involves four \\cite{benjaminperiodic, bridgesmielke}), which is one of the reasons why their treatment is often complicated: its analysis is not only technical but also involves tedious calculations, as exemplified recently in a paper by Hur and Johnson \\cite{hurjohnson} on the modulational instability for periodic solutions of the Whitham equation.\n\n\\item Following the Floquet decomposition, we use eigenfunctions given by a single Fourier mode since the linear stability problem for the zero-amplitude solutions has constant coefficients. Lastly, the calculation of the Krein signature involves only the finite-dimensional eigenspace of the eigenvalue under consideration. Thus, in essence, all the calculations done in this paper are finite dimensional, as they are in \\cite{mackaysaffman}, for instance. 
As was noted there, for equations with real-valued solutions, it is necessary to consider the eigenfunctions corresponding to Floquet exponent $\\mu$ and $-\\mu$ simultaneously, since the eigenfunctions with $-\\mu$ are the complex conjugates of those with $\\mu$.\n\n\\item We call an equation dispersive if all branches of its dispersion relation $\\omega(k)$ are real valued for $k\\in \\mathbb{R}$. It is easy to see that there exist linear, constant-coefficient Hamiltonian systems that are not dispersive. An explicit example is\n \\begin{equation}\n \\left\\{\n \\begin{array}{rcl}\n q_t&=&q_{xx},\\\\\n p_t&=&-p_{xx}.\n \\end{array}\n \\right.\n \\end{equation}\nThis (admittedly bizarre) system is Hamiltonian with canonical Poisson structure\n\\begin{equation}\\la{canonicalJ}\nJ=\\left(\n\\begin{array}{rr}\n0 & 1\\\\\n-1 & 0\n\\end{array}\n\\right),\n\\end{equation}\nand Hamiltonian $H=-\\int_0^{2\\pi} q_x p_x dx$. The branches of its dispersion relation are given by $\\omega_{1,2}(k)=\\mp i k^2$.\n\nIt is an interesting question whether there exist dispersive systems that are not Hamiltonian. One may be tempted to consider an example like\n\\begin{equation}\n \\left\\{\n \\begin{array}{rcl}\n q_t&=&a q_{x},\\\\\n p_t&=&b p_{x}.\n \\end{array}\n \\right.\n \\end{equation}\nFor $a, b \\in \\mathbb{R}$ this system is dispersive ($\\omega_1=-a k$, $\\omega_2=-b k$), but it is not Hamiltonian with canonical structure. However, allowing for the noncanonical structure\n\\begin{equation}\nJ=\\left(\n\\begin{array}{rr}\n\\partial_x & 0\\\\\n0 & \\partial_x\n\\end{array}\n\\right),\n\\end{equation}\nthe system is easily found to have the Hamiltonian $H=\\frac{1}{2}\\int_0^{2\\pi}(a q^2+b p^2)dx$. We conjecture that no systems exist that are dispersive but not Hamiltonian. 
As demonstrated by the example above, the question is difficult to analyze, since different forms of the Poisson operator have to be allowed.\n\n\\item The investigation of the Hamiltonian structure of the zero-amplitude solution follows earlier work by Zakharov, Kuznetsov, and others. A review is available in \\cite{zakharovmusher}. The stability of the trivial solution is also investigated there, with stability criteria given entirely in terms of the branches of the dispersion relation. Using canonical perturbation theory, Hamiltonians containing higher-than-quadratic terms are considered. This is used to consider the impact of nonlinear effects on the dynamics of the trivial solution. At this point, connections to resonant interaction theory \\cite{benneyRIT, hammackhendersonRIT, phillipsRIT} become apparent, as they will in what follows. Although the physical reasoning leading to resonant interaction theory and that leading to the criteria established below is different, it is clear that connections exist. \n\n\\end{itemize}\n\n\\section{Motivating example}\\la{sec:whitham}\n\nOur investigations began with the study of the so-called Whitham equation \\cite{WhithamEq}, \\cite[page 368]{whitham}. The equation is usually posed on the whole line, for which the equation satisfied by $u(x,t)$ is\n\n\\begin{equation}\\la{whitham}\nu_t+N(u)+\\int_{-\\infty}^\\infty K(x-y)u_y(y,t) dy=0.\n\\end{equation}\n\n\\noindent The term $N(u)$ denotes the collection of all nonlinear terms in the equation. It is assumed that $\\lim_{\\epsilon\\rightarrow 0}N(\\epsilon u)\/\\epsilon=0$. The last term encodes the dispersion relation of the linearized Whitham equation. The kernel $K(x)$ is the inverse Fourier transform of the phase speed $c(k)$, where $c(k)=\\omega(k)\/k$, with $\\omega(k)$ the dispersion relation. Here $c(k)$ is assumed to be real valued and nonsingular for $k\\in \\mathbb{R}$. 
Thus\n\n\\begin{equation}\nK(x)=\\frac{1}{2\\pi}\\Xint-_{-\\infty}^\\infty c(k)e^{ikx}dk,\n\\end{equation}\n\n\\noindent where $\\Xint-$ denotes the principal value integral. Depending on $c(k)$, this equation may have to be interpreted in a distributions sense \\cite{EhrnstromKalish, stakgold}. Letting $u\\sim \\exp(ikx-\\omega(k) t)$, it is a straightforward calculation to see that the dispersion relation of the linear Whitham equation is $\\omega(k)$. In fact, the linear Whitham equation is easily seen to be a rewrite of the linear evolution equation \\cite{as}\n\n\\begin{equation}\\la{whithamasform}\nu_t=-i \\omega(-i \\partial_x)u,\n\\end{equation}\n\n\\noindent where $\\omega(-i \\partial_x)$ is a linear operator with odd symbol $\\omega(k)$: $\\omega(k)=-\\omega(-k)$, the dispersion relation considered. Indeed, letting $\\omega(-i \\partial_x)$ act on\n\n\\begin{equation}\nu(x,t)=\\frac{1}{2\\pi}\\int_{-\\infty}^\\infty e^{ikx}\\hat u(k,t)dk,\n\\end{equation}\n\n\\noindent and replacing $\\hat u(k,t)$ by\n\n\\begin{equation}\n\\hat u(k,t)=\\int_{-\\infty}^\\infty e^{-ik y} u(y,t) dy,\n\\end{equation}\n\n\\noindent the linear part of \\rf{whitham} is obtained after one integration by parts. We restrict our considerations to odd dispersion relations, to ensure the reality of the Whitham equation.\n\nOne of Whitham's reasons for writing down the Whitham equation \\cite{WhithamEq, whitham} was to describe waves in shallow water (leading to the inclusion of a KdV-type nonlinearity $N(u)\\sim u u_x$) that feel the full dispersive response of the one-dimensional water wave problem (without surface tension), for which\n\n\\begin{equation}\\la{wwdispersion}\n\\omega^2(k)=gk\\tanh(kh),\n\\end{equation}\n\n\\noindent with $g$ the acceleration of gravity and water depth $h$. It is common to choose $c(k)=\\omega_1(k)\/k>0$ in \\rf{whitham}, so that $\\omega_1(k)$ is the root of \\rf{wwdispersion} with the same sign as $k$. 
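For the dispersion relation \\rf{wwdispersion}, this choice gives the phase speed $c(k)=\\sqrt{g\\tanh(kh)\/k}$, which is real valued, even, nonsingular (with $c(k)\\to\\sqrt{gh}$ as $k\\to0$), and maximal in the long-wave limit. A brief numerical sanity check of these properties (the normalization $g=h=1$ is an arbitrary choice):

```python
import numpy as np

g, h = 1.0, 1.0  # arbitrary normalization

def c(k):
    """Phase speed of gravity water waves, c(k) = sqrt(g tanh(kh)/k),
    extended continuously by c(0) = sqrt(gh)."""
    k = np.asarray(k, dtype=float)
    out = np.full_like(k, np.sqrt(g * h))
    nz = k != 0
    out[nz] = np.sqrt(g * np.tanh(k[nz] * h) / k[nz])
    return out

ks = np.linspace(-50, 50, 2001)
cs = c(ks)
assert np.all(np.isfinite(cs))                  # real valued and nonsingular
assert np.allclose(cs, c(-ks))                  # even, since omega_1 is odd
assert np.all(cs <= np.sqrt(g * h) + 1e-12)     # the long-wave limit is maximal
```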
In what follows, we refer to this choice as the Whitham equation. The stability of periodic traveling wave solutions of the Whitham equation has received some attention recently. Notable are \\cite{ehrnstromgroveswahlen}, where the focus is on solitary waves, and \\cite{hurjohnson}, where the modulational instability of small-amplitude periodic solutions is emphasized, although the main result for the high-frequency instabilities discussed in Example~\\ref{ex:whitham} is included as well. Most recently, the spectral stability of periodic solutions of the Whitham equation was examined in \\cite{SanfordKodamaCarterKalisch}. The goal of considering an equation like \\rf{whitham} as opposed to the Korteweg-de~Vries equation or other simpler models is to capture as much of the dynamics of the full water wave problem as possible, without having to cope with the main difficulties imparted by the Euler water wave problem \\cite{vandenboek} ({\\em e.g.}, it is a nonlinear free boundary-value problem, the computation of its traveling-wave solutions is a nontrivial task, {\\em etc}). One of the important aspects of the dynamics of a nonlinear problem is the (in)stability of its traveling wave solutions. It was shown explicitly in \\cite{deconinckoliveras1} that periodic traveling wave solutions of the one-dimensional Euler water wave problem are spectrally unstable for all possible values of their parameters $h$, $g$, amplitude, and wave period. The nature of the instabilities depends on the value of these parameters. As is well known, waves in deep water are susceptible to the Benjamin-Feir or modulational instability (see \\cite{zakharovostrovsky} for a review). In addition, waves in both deep and shallow water of all non-zero amplitudes are unstable with respect to high-frequency perturbations: these are perturbations whose growth rates do not have a small imaginary part, resulting in oscillatory behavior in time, independent of the spatial behavior of the unstable modes. 
The work in \\cite{SanfordKodamaCarterKalisch} does not reveal any high-frequency instabilities for solutions of small amplitude in water that is shallow, in the context of the Whitham equation. Thus an important aspect of the Euler water wave dynamics is absent from \\rf{whitham}. We will provide an analytical indication that the Whitham equation misses the presence of these instabilities, while explaining why they are missed. This explanation leads to a way to address this problem.\n\nFor suitable $N(u)$, the Whitham equation \\rf{whitham} is a Hamiltonian system. In fact, for our considerations, it matters only that the linearized Whitham equation is Hamiltonian. The Lagrangian structure with the dispersion relation given by $\\omega_1(k)$ as in \\rf{wwdispersion} was already written down by Whitham in \\cite{whitham3}, from which the Hamiltonian structure easily follows. Explicitly, for the linearized Whitham equation posed on the whole line, for any odd $\\omega(k)$ we have\n\n\\begin{equation}\nH=-\\frac{1}{2}\\int_{-\\infty}^{\\infty} \\int_{-\\infty}^\\infty K(x-y) u(x)u(y)dx dy,\n\\end{equation}\n\n\\noindent with $J=\\partial_x$. Then\n\n\\begin{equation}\\la{whithamham}\nu_t=\\partial_x \\frac{\\delta H}{\\delta u}.\n\\end{equation}\n\n\\noindent If instead the linearized equation is posed with periodic boundary conditions $u(x+L,t)=u(x,t)$, it follows immediately from \\rf{whithamasform} that we have\n\n\\begin{equation}\\la{whithamper}\nu_t+\\int_{-L\/2}^{L\/2} K(x-y)u(y)dy=0,\n\\end{equation}\n\n\\noindent where we have used a Fourier series instead of a Fourier transform. Further,\n\n\\begin{equation}\nK(x)=\\frac{1}{L}\\sum_{j=-\\infty}^\\infty c(k_j)e^{ik_jx},\n\\end{equation}\n\n\\noindent and $k_j=2\\pi j\/L$, $j\\in \\mathbb{Z}$. 
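On a periodic grid, convolution with the kernel above is the same operation as multiplying the Fourier coefficients of $u$ by $c(k_j)$, which is how the linear periodic Whitham equation is evaluated in practice. A minimal sketch of this equivalence (our own discretization, assuming $g=h=1$, $L=2\\pi$, and a trapezoidal approximation of the convolution integral):

```python
import numpy as np

L, N = 2 * np.pi, 128
x = np.arange(N) * L / N
g, h = 1.0, 1.0
kj = np.fft.fftfreq(N, d=1.0 / N) * 2 * np.pi / L   # k_j = 2 pi j / L

# Phase speed c(k_j), with the continuous extension c(0) = sqrt(g h).
c = np.where(kj == 0, np.sqrt(g * h),
             np.sqrt(g * np.tanh(kj * h) / np.where(kj == 0, 1.0, kj)))

# Truncated Fourier-series kernel K(x) = (1/L) sum_j c(k_j) exp(i k_j x).
K = np.real(np.fft.ifft(c)) * N / L

u = np.cos(x) + 0.5 * np.sin(3 * x)   # arbitrary periodic test function

# Trapezoidal discretization of  int K(x - y) u(y) dy  (periodic)
conv = np.array([np.sum(K[(i - np.arange(N)) % N] * u) * L / N
                 for i in range(N)])

# The same operation applied as a Fourier multiplier
mult = np.real(np.fft.ifft(c * np.fft.fft(u)))

assert np.allclose(conv, mult)
```

The agreement is exact (to round-off) by the circular convolution theorem; this is the standard way to time-step equations such as \\rf{whithamper} spectrally.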
The Hamiltonian formulation for the periodic Whitham equation \\rf{whithamper} is also given by \\rf{whithamham}, but with\n\n\\begin{equation}\nH=-\\frac{1}{2} \\int_{-L\/2}^{L\/2}\\int_{-L\/2}^{L\/2} K(x-y) u(x) u(y) dx dy.\n\\end{equation}\n\n\\noindent In fact, a formal limit $L\\rightarrow \\infty$ immediately recovers the equation posed on the whole line. Thus the Whitham equation and its periodic solutions fit into the framework developed in this manuscript. It is one of many examples we use below. Other notable examples are the Euler water wave problem (as expected, allowing us to check our results with those of MacKay \\& Saffman \\cite{mackaysaffman}), the KdV equation, the Sine-Gordon equation, {\\em etc}. We are particularly interested in the comparison between the results for the Euler water wave problem and those for the Whitham equation.\n\nThe results of Examples~\\ref{ex:whitham} and~\\ref{ex:ww} show that the Whitham equation cannot possess the high-frequency instabilities present in the water wave problem. This leads us to propose a new model equation, a so-called Boussinesq-Whitham or bidirectional Whitham equation. This equation is shown to at least satisfy the same necessary condition for the presence of high-frequency instabilities as the water wave problem, and these high-frequency instabilities originate from the same points on the imaginary axis as they do for the Euler equations. However, it is easily seen that this equation is ill posed for solutions that do not have zero average. As such it is a poor candidate to replace the Whitham equation \\rf{whitham} as a shallow-water equation with the correct dispersive behavior.\n\n\\vspace*{0.1in}\n\n{\\bf Remark.} If $\\omega(k)$ is not odd, then the function $u$ solving the linear Whitham equation \\rf{whithamasform} is necessarily complex.
The problem is Hamiltonian:\n\n\\begin{equation}\\la{whithamcomplex}\nu_t=-i \\frac{\\delta H}{\\delta u^*},\n\\end{equation}\n\n\\noindent where $*$ denotes the complex conjugate. The Poisson structure is\n\n\\begin{equation}\nJ=\\left(\n\\begin{array}{rr}\n0 & -i\\\\\ni & 0\n\\end{array}\n\\right),\n\\end{equation}\n\n\\noindent although usually one writes only the first of the two evolution equations, omitting the equation that is the complex conjugate of \\rf{whithamcomplex}. The Hamiltonian is given by\n\n\\begin{equation}\nH=\\int_0^{2\\pi} u^* \\omega(-i\\partial_x)u dx.\n\\end{equation}\n\n\\noindent A real formulation in terms of the real and imaginary parts of $u$ is possible as well (using the canonical transformation $u=(q+i p)\/\\sqrt{2}$), resulting in a canonical Hamiltonian structure. The linear Whitham equation \\rf{whithamcomplex} is often rewritten in Fourier space using the discrete Fourier transform, due to the periodic boundary conditions. This leads to the Hamiltonian $H=\\sum_{n=-\\infty}^\\infty \\omega(n) z_n z_n^*$.\n\n\\section{Scalar Hamiltonian PDEs}\\label{sec:scalar}\n\nIn this section, we investigate the stability of $2\\pi$-periodic traveling wave solutions of Hamiltonian systems of the form\n\n\\begin{equation}\\la{scalar}\nu_t=\\partial_x \\frac{\\delta H}{\\delta u},\n\\end{equation}\n\n\\noindent where $u(x,t)$ is a scalar real-valued function. Thus $J=\\partial_x$. Since this Poisson operator is singular, all equations of this form conserve the quantity $\\int_0^{2\\pi} u dx$, which is the Casimir for this Poisson operator. Systems of this form include the Korteweg-de~Vries equation \\cite{gardner, zakharovfaddeev} and its many generalizations, the Whitham equation \\rf{whitham}, and many others. As mentioned above, our only interest is in the linearization of these equations around their trivial solution.
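The conservation of the Casimir is structural: the right-hand side of the evolution equation is an exact $x$-derivative, so the mean of $u$ cannot change. Any periodic discretization of $\partial_x$ whose columns sum to zero inherits this exactly, as the following minimal sketch illustrates (the grid data, the stand-in for $\delta H\/\delta u$, and the step size are arbitrary choices of ours):

```python
import math

# Arbitrary periodic grid data; w stands in for delta H / delta u.
N = 128
h = 2 * math.pi / N
u = [0.1 * ((n * 37) % 19) for n in range(N)]
w = [0.2 * ((n * 11) % 23) for n in range(N)]

def ddx(f):
    # Periodic centered difference: a discretization of the Poisson operator J = d/dx.
    return [(f[(n + 1) % N] - f[(n - 1) % N]) / (2 * h) for n in range(N)]

dw = ddx(w)
u_new = [u[n] + 0.01 * dw[n] for n in range(N)]   # one Euler step of u_t = w_x

# The mean of u (the Casimir) is conserved: the centered differences telescope.
mass0 = sum(u) * h
mass1 = sum(u_new) * h
assert abs(mass1 - mass0) < 1e-12
```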
We write the quadratic part $H^0$ of $H$ as\n\n\\begin{equation}\\la{h0scalar}\nH^0=-\\frac{1}{2}\\int_0^{2\\pi} \\sum_{n=0}^\\infty \\alpha_n u_{nx}^2 dx,\n\\end{equation}\n\n\\noindent where the coefficients $\\alpha_n\\in \\mathbb{R}$. As before, indices on $u$ denote partial derivatives. Specifically, $u_{nx}$ denotes $\\partial_x^{n} u$.\n\nFor most examples, the number of terms in \\rf{h0scalar} is finite, and only a few of the coefficients $\\alpha_n$ are nonzero. For the Whitham equation \\rf{whitham}, the number of nonzero terms is infinite, but convergence is easily established. Note that \\rf{h0scalar} is the most general form of a quadratic Hamiltonian depending on a single function. Indeed, a term in $H^0$ containing $u_{mx} u_{nx}$, with $m$ and $n$ positive integers, may be reduced to a squared term using integration by parts.\n\nUsing the notation \\rf{h0scalar}, the linearized equation is\n\n\\begin{equation}\\la{scalarlinear}\nu_t=-\\sum_{n=0}^\\infty (-1)^n \\alpha_n u_{(2n+1)x}.\n\\end{equation}\n\n\\noindent We proceed with the six steps outlined in the introduction.\n\n\\begin{enumerate}\n\n\\item {\\bf Quadratic Hamiltonian.} The modified Hamiltonian $H_c^0$ is given by\n\\begin{equation}\nH_c^0=\\frac{c}{2}\\int_0^{2\\pi}u^2dx-\\frac{1}{2}\\int_0^{2\\pi} \\sum_{n=0}^\\infty \\alpha_n u_{nx}^2 dx.\n\\end{equation}\n\n\\item {\\bf Dispersion relation.} For equations of the form \\rf{scalar}, the dispersion relation has only a single branch:\n\n\\begin{equation}\\la{scalardispersion}\n\\omega(k)=\\sum_{n=0}^\\infty \\alpha_n k^{2n+1}.\n\\end{equation}\n\nThe absence of even powers of $k$ in \\rf{scalardispersion} is due to our imposition that \\rf{scalarlinear} is a conservative equation, {\\em i.e.}, there is no dissipation. All integers are allowable $k$ values, since we have set the period to $2\\pi$.
The equation \\rf{scalarlinear} may be written as\n\n\\begin{equation}\nu_t=-i \\omega(-i\\partial_x)u.\n\\end{equation}\n\n\\item {\\bf Bifurcation branches.} Since \\rf{scalar} is scalar, only one branch can bifurcate away from the trivial solution. To find the corresponding value of $c$, we write \\rf{scalarlinear} in a moving frame as\n\n\\begin{equation}\\la{scalarlinearmoving}\nu_t-c u_x=i \\omega(i\\partial_x)u.\n\\end{equation}\n\n\\noindent This equation has its own dispersion relation given by\n\n\\begin{equation}\n\\Omega(k)=\\omega(k)-c k,\n\\end{equation}\n\n\\noindent \\sloppypar obtained by looking for solutions of the form $u=\\exp(ikx-i \\Omega t)$. Letting $u=\\sum_{n=-\\infty}^\\infty \\exp(i n x) \\hat u_n$, it follows that $\\partial_t \\hat u_n=-i \\Omega(n)\\hat u_n$. Thus a nonzero stationary solution may exist provided $\\Omega(N)=0$, for $N\\in \\mathbb{N}, N\\neq 0$. We have used the oddness of $\\Omega(N)$ to restrict to strictly positive values of $N$. Thus the starting point of the bifurcation branch in the (speed, amplitude)-plane is $(c,0)$, where $c$ is determined by\n\n\\begin{equation}\nc=\\frac{\\omega(N)}{N},\n\\end{equation}\n\n\\noindent for any integer $N>0$. Choosing $N>1$ implies that the fundamental period of the solutions is not $2\\pi$, but $2\\pi\/N$. In practice, we choose $N=1$. A Fourier series approximation to the explicit form of the small-amplitude solutions corresponding to this bifurcation branch may be obtained using a standard Stokes expansion \\cite{stokes, whitham}.\n\n\\item {\\bf Stability spectrum.} In order to compute the stability spectrum associated with the zero-amplitude solution at the start of the bifurcation branch, we let $u(x,t)=U(x)\\exp(\\lambda t)+$c.c., where c.c. denotes the complex conjugate of the preceding term. As usual, if any $\\lambda$ are found for which the real part is positive, the solution is spectrally unstable \\cite{kapitulapromislow}. 
All bounded eigenfunctions $U(x)$ may be represented as\n\n\\begin{equation}\\la{efscalar}\nU(x)=\\sum_{n=-\\infty}^\\infty a_n e^{i(n+\\mu)x},\n\\end{equation}\n\n\\noindent where $\\mu\\in (-1\/2,1\/2]$ is the Floquet exponent. Such a representation for $U(x)$ is valid even for solutions on the bifurcation branch of nonzero amplitude \\cite{deconinckkutz1}. Since \\rf{scalarlinearmoving} is a problem with constant coefficients, only a single term in \\rf{efscalar} is required. We obtain\n\n\\begin{equation}\\la{evscalar}\n\\lambda_n^{(\\mu)}=-i\\Omega(n+\\mu)=-i \\omega(n+\\mu)+i(n+\\mu)c, ~~n\\in \\mathbb{Z}.\n\\end{equation}\n\n\\noindent As expected, all eigenvalues are imaginary and the zero-amplitude solution is neutrally stable. For a fixed value of $\\mu$, \\rf{evscalar} gives a point set on the imaginary axis in the complex $\\lambda$ plane. As $\\mu$ is varied in $(-1\/2, 1\/2]$, these points trace out intervals on the imaginary axis. Depending on $\\omega(k)$, these intervals may cover the imaginary axis.\n\n\\item {\\bf Collision condition.} The most generic scenarios for two eigenvalues given by \\rf{evscalar} to collide are that (i) two of them are zero, and they collide at the origin, and (ii) two of them are equal, but nonzero. We ignore the first possibility, since the next step proves to be inconclusive for this case, as discussed in the introduction. The second possibility requires $\\lambda_n^{(\\mu)}=\\lambda_m^{(\\mu)}$, for some $m,n\\in \\mathbb{Z}$, $m\\neq n$, fixed $\\mu\\in (-1\/2, 1\/2]$, and $\\lambda_n^{(\\mu)}$, $\\lambda_m^{(\\mu)}\\neq 0$. This may be rewritten as\n\n \\begin{equation}\\la{scalarcollision}\n \\frac{\\omega(n+\\mu)-\\omega(m+\\mu)}{n-m}=\\frac{\\omega(N)}{N}, ~~m,n\\in \\mathbb{Z}, m\\neq n \\mbox{~and~} \\mu\\in (-1\/2, 1\/2].\n \\end{equation}\n\n\\noindent This equation has an elegant graphical interpretation: the right-hand side is fixed by the choice of $N$, fixing the bifurcation branch in Step~3. 
It represents the slope of a line through the origin and the point $(N,\\omega(N))$ in the $(k, \\omega)$ plane. The left-hand side is the slope of a line in the same plane passing through the points $(n+\\mu, \\omega(n+\\mu))$ and $(m+\\mu, \\omega(m+\\mu))$, see Fig.~\\ref{fig:scalarslope}.\n\nEven though the graph of the dispersion relation admits parallel secant lines, this is not sufficient for a solution of \\rf{scalarcollision}, as it is required that the abscissas of the two points differ by an integer. Nevertheless, the graphical interpretation can provide good intuition for solving the collision condition, which typically has to be done numerically.\n\n\\begin{figure}[tb]\n\\def\\svgwidth{5.8in}\n\\centerline{\\hspace*{0in}\\input{scalarslope.pdf_tex}}\n\\caption{\\la{fig:scalarslope} The graphical interpretation of the collision condition \\rf{scalarcollision}. The solid curve is the graph of the dispersion relation $\\omega(k)$. The slope of the dashed line in the first quadrant is the right-hand side in \\rf{scalarcollision}. The slope of the parallel dotted line is its left-hand side.\n }\n\\end{figure}\n\n\\item {\\bf Krein signature.} The Krein signature of an eigenvalue is the sign of the Hamiltonian $H^0_c$ evaluated on the eigenspace associated with the eigenvalue. We are considering two simple eigenvalues colliding, thus the eigenspace for each eigenvalue consists of multiples of the eigenfunction only. To allow for eigenfunctions of the form $a_n \\exp(i(n+\\mu)x+\\lambda_n^{(\\mu)}t)+$c.c., which are not $2\\pi$-periodic (unless $\\mu=0$), it is necessary to replace the integral in \\rf{h0scalar} with a whole-line average. More details on this process are found, for instance, in \\cite{deconinckoliveras1}.
A simple calculation shows that the contribution to $H^0_c$ from the $(n,\\mu)$ mode is proportional to the ratio of $\\Omega(n+\\mu)\/(n+\\mu)$:\n\n \\begin{equation}\\la{scalarkrein}\n H^0_c|_{(n,\\mu)}\\sim -|a_n|^2 \\frac{\\Omega(n+\\mu)}{n+\\mu}.\n \\end{equation}\n\n\\noindent Other terms are present in the Hamiltonian density, but they have zero average. The sign of this expression is the Krein signature of the eigenvalue $\\lambda_n^{(\\mu)}$. Thus a necessary condition for $\\lambda_n^{(\\mu)}$ and $\\lambda_m^{(\\mu)}$ to leave the imaginary axis for solutions of non-zero amplitude is that the signs of \\rf{scalarkrein} with $(n,\\mu)$ and $(m, \\mu)$ are different, contingent on $\\mu$, $m$ and $n$ satisfying \\rf{scalarcollision}. Explicitly, this condition is\n\n\\begin{equation}\\la{scalarkreincondition1}\n\\mbox{sign}\\left[\\frac{\\omega(n+\\mu)}{n+\\mu}-c\\right]\\neq \\mbox{sign}\\left[\\frac{\\omega(m+\\mu)}{m+\\mu}-c\\right].\n\\end{equation}\n\n\\noindent Alternatively, the product of the left-hand side and the right-hand side should be negative. Using \\rf{scalarcollision}, \\rf{scalarkreincondition1} becomes\n\n\\begin{equation}\\la{scalarkreincondition2}\n(n+\\mu)(m+\\mu)<0,\n\\end{equation}\n\n\\noindent or, provided $mn\\neq 0$, and using that $\\mu \\in (-1\/2, 1\/2]$,\n\n\\begin{equation}\\la{scalarkreincondition3}\nn m<0.\n\\end{equation}\n\n\\end{enumerate}\n\n{\\bf Remarks.}\n\n\\begin{itemize}\n\n\\item It is clear from \\rf{scalarkrein} why our methods do not lead to any conclusions about collisions of eigenvalues at the origin. If $\\lambda_n^{(\\mu)}=0$, then $\\Omega(n+\\mu)=0$, and the contribution to the Hamiltonian of such a mode vanishes. 
As a consequence, the associated Krein signature is zero.\n\n\\item When the theory of \\cite{kollarmiller} is restricted to the case of solutions of zero amplitude, so as to recover the constant coefficient stability problem, the graphical stability criterion given there coincides with the one presented here.\n\n\\end{itemize}\n\n\\vspace*{0.1in}\n\nWe conclude our general considerations of this section with the following summary.\n\n\\vspace*{0.1in}\n\nAssume that the linearization of the scalar Hamiltonian system \\rf{scalar} is dispersive ({\\em i.e.,} its dispersion relation $\\omega(k)$ is real valued for $k\\in \\mathbb{R}$). Let $N$ be a strictly positive integer.\nConsider $2\\pi\/N$-periodic traveling wave solutions of this system of sufficiently small amplitude and with velocity sufficiently close to $\\omega(N)\/N$. In order for these solutions to be spectrally unstable with respect to high-frequency instabilities as a consequence of two-eigenvalue collisions, it is necessary that there exist $n$, $m\\in \\mathbb{Z}$, $n\\neq m$, $\\mu \\in (-1\/2, 1\/2]$ for which\n\n\\begin{equation}\n\\frac{\\omega(n+\\mu)}{n+\\mu}\\neq \\frac{\\omega(N)}{N}, ~~\\frac{\\omega(m+\\mu)}{m+\\mu}\\neq \\frac{\\omega(N)}{N},\n\\end{equation}\n\n\\noindent such that\n\n\\begin{equation}\n\\frac{\\omega(n+\\mu)-\\omega(m+\\mu)}{n-m}=\\frac{\\omega(N)}{N},\n\\end{equation}\n\n\\noindent and\n\n\\begin{equation}\n(m+\\mu)(n+\\mu) <0.\n\\end{equation}\n\nNext we proceed with some examples.\n\n\\subsection{The (generalized) Korteweg-de Vries equation}\n\nWe consider the generalized KdV (gKdV) equation\n\n\\begin{equation}\\la{gkdv}\nu_t+\\sigma u^n u_x +u_{xxx}=0,\n\\end{equation}\n\n\\noindent where we restrict $n$ to integers 1 or greater. Here $\\sigma$ is a constant coefficient, chosen as convenient. Important special cases discussed below are the KdV equation ($n=1$) and the modified KdV (mKdV) equation ($n=2$).
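Before carrying out the six steps for a particular equation, note that the collision condition of the summary can be explored numerically for any dispersion relation by bisecting in $\mu$ for fixed $(n,m,N)$. A minimal sketch, using the fifth-order dispersion $\omega(k)=k^3-k^5\/4$ purely as an illustration known to admit a collision away from the origin (the helper name `collision_mu` and all tolerances are ours):

```python
def omega(k):
    # Illustrative fifth-order (Kawahara-type) dispersion relation, an assumption
    # of this sketch; replace with the dispersion relation of interest.
    return k**3 - 0.25 * k**5

def collision_mu(n, m, N=1, lo=-0.5, hi=0.5):
    """Bisect the collision condition (secant slope = omega(N)/N) for mu,
    returning a root in [lo, hi] if one is bracketed, else None."""
    F = lambda mu: (omega(n + mu) - omega(m + mu)) / (n - m) - omega(N) / N
    a, b = lo, hi
    if F(a) * F(b) > 0:
        return None
    for _ in range(80):
        c = 0.5 * (a + b)
        a, b = (a, c) if F(a) * F(c) <= 0 else (c, b)
    return 0.5 * (a + b)

mu = collision_mu(1, -2)
assert mu is not None and abs(mu - 0.2071149) < 1e-6
assert (1 + mu) * (-2 + mu) < 0    # opposite Krein signatures: instability possible
```

For this dispersion relation the colliding pair $(n,m)=(1,-2)$ satisfies the signature condition, so the necessary condition for a high-frequency instability is met.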
Many of the details below extend easily to more general nonlinearities, with the main requirement being that the linearized equation is $u_t+u_{xxx}=0$. The stability of periodic solutions of the gKdV equation has received some attention recently \\cite{bronskijohnson1, bronskijohnsonkapitula, todd, johnsonzumbrunbronski}. For the integrable cases $n=1$ and $n=2$, more detailed analysis is possible, see \\cite{bottmandeconinck, deconinckkapitula, deconincknivalamkdv}. We do not claim to add anything new to these discussions, but we wish to use this example to illustrate how the six-step process outlined in this section leads to easy conclusions before moving on to more complicated settings.\n\n\\begin{enumerate}\n\n\\item The modified Hamiltonian is given by\n\n\\begin{equation}\nH^0_c=\\frac{1}{2}\\int_0^{2\\pi} (u_x^2+c u^2)dx.\n\\end{equation}\n\n\\item The dispersion relation is\n\n\\begin{equation}\n\\omega=-k^3.\n\\end{equation}\n\n\\item Bifurcation branches in the ($c$, amplitude)-plane start at $(c,0)$, with\n\n\\begin{equation}\nc=\\frac{\\omega(k)}{k}=-k^2.\n\\end{equation}\n\n\\noindent Since we desire $2\\pi$-periodic solutions, we choose $k=1$. Any choice $k=N$, where $N$ is a non-zero integer, is allowed. Choosing $k=1$, bifurcation branches start at $(-1,0)$.\n\nFor the integrable cases $n=1$ (KdV) and $n=2$ (mKdV), these bifurcation branches may be calculated in closed form. For the KdV equation in a frame traveling with speed $c$, the $2\\pi$-periodic solutions are given by (with $\\sigma=1$)\n\n\\begin{equation}\\la{cnkdv}\nu=\\frac{12\\kappa^2 K^2(\\kappa)}{\\pi^2}{\\rm cn}^2\\left(\\frac{K(\\kappa)x}{\\pi},\\kappa\\right),\n\\end{equation}\n\n\\noindent where ${\\rm cn}$ denotes the Jacobian elliptic cosine function and $K(\\kappa)$ is the complete elliptic integral of the first kind \\cite{dlmf, dlmfbook}.
Further,\n\n\\begin{equation}\\la{ckdv}\nc(\\kappa)=\\frac{4K^2(\\kappa)}{\\pi^2}(2\\kappa^2-1),\n\\end{equation}\n\n\\noindent resulting in an explicit bifurcation curve $(c(\\kappa), 12\\kappa^2 K^2(\\kappa)\/\\pi^2)$, parameterized by the elliptic modulus $\\kappa\\in [0,1)$. This bifurcation curve is shown in Fig.~\\ref{fig:kdvbranch}a.\n\n\\begin{figure}[tb]\n\\begin{tabular}{cc}\n\\multicolumn{2}{c}{\\def\\svgwidth{3.5in}\n\\input{kdvbranch.pdf_tex}}\\\\\n\\multicolumn{2}{c}{(a)}\\\\\n~&~\\\\\n\\def\\svgwidth{2.8in}\\input{mkdvcnbranch.pdf_tex} &\n\\def\\svgwidth{2.8in}\\input{mkdvsnbranch.pdf_tex}\\\\\n(b) & (c)\n\\end{tabular}\n\\caption{\\la{fig:kdvbranch} The amplitude {\\em vs.} $c$ bifurcation plots for the traveling-wave solutions of the generalized KdV equation \\rf{gkdv}. (a) The KdV equation, $n=1$, for the cnoidal wave solutions \\rf{cnkdv}. (b) The mKdV equation, $n=2$, for the cnoidal wave solutions \\rf{mkdvcn}. Lastly, (c) shows the bifurcation plot for the snoidal wave solutions \\rf{mkdvsn} of mKdV, $n=2$.\nNote that all bifurcation branches start at $(-1,0)$, as stated above.\n }\n\\end{figure}\n\nFor the mKdV equation ($n=2$), different families of traveling-wave solutions exist \\cite{deconincknivalamkdv}. We consider two of the simplest. For $\\sigma=3$ (focusing mKdV), a family of $2\\pi$-periodic solutions is given by\n\n\\begin{equation}\\la{mkdvcn}\nu=\\frac{2\\sqrt{2}\\kappa K(\\kappa)}{\\pi}{\\rm cn}\\left(\\frac{2 K(\\kappa)x}{\\pi},\\kappa\\right),\n\\end{equation}\n\n\\noindent with $c(\\kappa)$ given by \\rf{ckdv}, resulting in an explicit bifurcation curve $(c(\\kappa), 2\\sqrt{2}\\kappa K(\\kappa)\/\\pi)$, parameterized by the elliptic modulus $\\kappa\\in [0,1)$. This bifurcation curve is shown in Fig.~\\ref{fig:kdvbranch}b.
It should be noted that a solution branch exists where the solution is expressed in terms of the Jacobian ${\\rm dn}$ function \\cite{dlmf, dlmfbook}: $u=\\sqrt{2}K(\\kappa){\\rm dn}(K(\\kappa)x\/\\pi,\\kappa)\/\\pi$, but this solution does not have a small-amplitude limit and our methods do not apply directly to it. Rather, the solutions limit to the constant solution $u=1\/\\sqrt{2}$ as $\\kappa\\rightarrow 0$. A simple transformation $v=u-1\/\\sqrt{2}$ transforms the problem to one where our methods apply.\n\nFor $\\sigma=-3$ (defocusing mKdV), a $2\\pi$-periodic solution family is\n\n\\begin{equation}\\la{mkdvsn}\nu=\\frac{2\\sqrt{2}\\kappa K(\\kappa)}{\\pi}{\\rm sn}\\left(\\frac{2 K(\\kappa)x}{\\pi},\\kappa\\right),\n\\end{equation}\n\n\\noindent with $c(\\kappa)=-4(1+\\kappa^2)K^2(\\kappa)\/\\pi^2$. Here ${\\rm sn}$ is the Jacobian elliptic sine function \\cite{dlmf, dlmfbook}, resulting in an explicit bifurcation curve $(c(\\kappa), 2\\sqrt{2}\\kappa K(\\kappa)\/\\pi)$, parameterized by the elliptic modulus $\\kappa\\in [0,1)$. This bifurcation curve is shown in Fig.~\\ref{fig:kdvbranch}c.\n\n\\item The stability spectrum is given by \\rf{evscalar}, with $\\omega(k)=-k^3$ and $c=-1$, resulting in\n\n\\begin{equation}\n\\lambda_n^{(\\mu)}=i(n+\\mu)\\left((n+\\mu)^2-1\\right).\n\\end{equation}\n\nThese eigenvalues cover the imaginary axis, as $n$ and $\\mu$ are varied. The imaginary part of this expression is displayed in Fig.~\\ref{fig:threecover}a. For the sake of comparison with Fig.~2 in \\cite{bottmandeconinck} we let $\\mu\\in [-1\/4, 1\/4)$, which implies that $n$ is any half integer. The results of Fig.~2 in \\cite{bottmandeconinck} are for elliptic modulus $\\kappa=0.8$, implying a solution of moderate amplitude.
The comparison of these two figures serves to add credence to the relevance of the results obtained using the zero-amplitude solutions at the start of the bifurcation branch.\n\n\\begin{figure}[tb]\n\\begin{tabular}{cc}\n\\def\\svgwidth{2.8in}\\input{threecover.pdf_tex} &\n\\def\\svgwidth{2.8in}\\input{kdvcollision.pdf_tex}\\\\\n(a) & (b)\n\\end{tabular}\n\\caption{\\la{fig:threecover} (a) The imaginary part of $\\lambda_n^{(\\mu)}\\in (-0.7, 0.7)$ as a function of $\\mu\\in[-1\/4, 1\/4)$. (b) The curves $\\Omega(k+n)$, for various (integer) values of $n$, illustrating that collisions occur at the origin only.\n }\n\\end{figure}\n\n\\item With $n+\\mu=k$ and $m+\\mu=k+l$, for some $l\\in \\mathbb{Z}$, the collision condition \\rf{scalarcollision} is written as\n\n\\begin{equation}\nl^2+3kl+3k^2-1=0,\n\\end{equation}\n\n\\noindent where the trivial solution $l=0$ has been discarded. This is the equation of an ellipse in the $(k,l)$ plane. It intersects lines of nonzero integer $l$ in six integer points: $\\pm (1, -2)$, $\\pm(0, 1)$, $\\pm(1, -1)$. Since for all of these, $\\Omega(k)=0$, any collisions happen only at the origin $\\lambda_n^{(\\mu)}=0$. This is also illustrated in Fig.~\\ref{fig:threecover}b.\n\n\\item The final step of our process is preempted by the results of the previous step. No Krein signature of colliding eigenvalues can be computed, since no eigenvalues collide.\n\n\\end{enumerate}\n\nSince eigenvalues do not collide away from the origin, they cannot leave the imaginary axis through such collisions, and no high-frequency instabilities occur for small-amplitude solutions of the gKdV equation. This result applies to the KdV and mKdV equations as special cases.
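The six-point calculation above is easily reproduced numerically: for each nonzero integer $l$, the real roots $k$ of $3k^2+3kl+l^2-1=0$ are the collision candidates, and each turns out to satisfy $\Omega(k)=0$. A sketch (in Python; the variable names and the search range for $l$ are ours):

```python
import math

def Omega(k):
    # Omega(k) = omega(k) - c k with omega(k) = -k^3 and c = omega(1)/1 = -1
    return -k**3 + k

# Collision condition for gKdV: l^2 + 3kl + 3k^2 - 1 = 0, l = m - n != 0.
# For each nonzero integer l, solve the quadratic 3k^2 + 3lk + (l^2 - 1) = 0.
collisions = []
for l in range(-5, 6):
    if l == 0:
        continue
    disc = 9 * l**2 - 12 * (l**2 - 1)      # discriminant = 12 - 3 l^2
    if disc < 0:
        continue                            # real roots require |l| <= 2
    for sgn in (+1, -1):
        k = (-3 * l + sgn * math.sqrt(disc)) / 6.0
        collisions.append((k, l))

# Every admissible collision sits where Omega vanishes: only at the origin.
assert all(abs(Omega(k)) < 1e-12 for k, l in collisions)
```

Only $l\in\{\pm1,\pm2\}$ yields real roots, recovering the six points listed above (the double roots at $l=\pm2$ appear twice).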
The absence of high-frequency instabilities for small amplitude solutions is consistent with the results in, for instance, \\cite{bottmandeconinck, deconinckkapitula, todd}.\n\n\\begin{comment}\n\n\\subsection{The Kawahara equation}\n\nAs a second example, consider the so-called Kawahara or super KdV (sKdV) equation \\cite{akersgao, hauptboyd, kawahara}, whose linearization is given by\n\\begin{equation}\\la{skdv}\nu_t=\\alpha u_{xxx}+\\beta u_{5x},\n\\end{equation}\n\n\\noindent We intend to show that high-frequency instabilities of small-amplitude solutions are present for suitable choices of the parameters $\\alpha$ and $\\beta$.\n\n\\begin{enumerate}\n\n\\item The modified Hamiltonian is $H^0_c=\\frac{1}{2}\\int_0^{2\\pi} (c u^2-\\alpha u_x^2 +\\beta u_{xx}^2)dx$.\n\n\\item The dispersion relation is $\\omega=\\alpha k^3-\\beta k^5$.\n\n\\item The bifurcation branch starts at $(c,0)=(\\alpha-\\beta,0)$, where we have chosen $N=1$ so that the minimal period of the solutions is $2\\pi$.\n\n\\item The elements of the stability spectrum are given by $\\lambda_n^{(\\mu)}=i(n+\\mu)(\\alpha-\\beta-\\alpha(n+\\mu)^2+\\beta(n+\\mu)^4)$.\n\n\\item It is easily seen graphically that the collision condition \\rf{scalarcollision} may be satisfied, see Fig.~\\ref{fig:superkdv}, where the values $\\alpha=1$ and $\\beta=1\/4$ were used. In fact, three distinct solutions ({\\em i.e.}, solutions giving rise to different collisions locations on the imaginary axis) exist: $(\\mu, \\lambda_n^{(\\mu)})=(0.2071149, 0.212841i)$, $(\\mu, \\lambda_n^{(\\mu)})=(0.2154767, 0.179833i)$, and $(\\mu, \\lambda_n^{(\\mu)})=(-0.3675445, 0.227684i)$. 
These collision locations are obtained with $(m,n)=(-2,1)$, $(m,n)=(-2,-1)$, and $(m,n)=(0,2)$, respectively.\n\n\\item Since $(n+\\mu)(m+\\mu)<0$ for the first and third of these collisions, the corresponding colliding eigenvalues have opposing Krein signatures, and it is possible for the eigenvalues to leave the imaginary axis, leading to high-frequency instabilities.\n\n\\begin{figure}[tb]\n\\begin{tabular}{cc}\n\\def\\svgwidth{2.8in}\n\\hspace*{0in}\\input{superkdv.pdf_tex}\n&\n\\def\\svgwidth{2.8in}\n\\hspace*{0in}\\input{kawaharacols.pdf_tex}\n\\\\\n(a)&(b)\n\\end{tabular}\n\\caption{\\la{fig:superkdv} (a) The dispersion relation for the sKdV equation \\rf{skdv}, with $\\alpha=1$, $\\beta=1\/4$. The line through the origin has slope $\\omega(1)\/1$, which represents the right-hand side of \\rf{scalarcollision}. The slope of the parallel line through the points $(m+\\mu,\\omega(m+\\mu))$, $(n+\\mu, \\omega(n+\\mu))$ represents the left-hand side. Here $m=-2$, $n=1$ and $\\mu\\approx 0.2071149219$. (b) The curves $\\Omega(k+n)$, for various (integer) values of $n$, illustrating that collisions occur also away from the origin.\n }\n\\end{figure}\n\n\\end{enumerate}\n\nThe conclusions above hold for small-amplitude solutions, and they provide only a necessary condition for the presence of high-frequency instabilities. Below, we use numerical results to indicate that for the small-amplitude solutions of the sKdV equation the condition is also sufficient, and instabilities do occur.
To our knowledge, the only previous work on the stability or instability of periodic solutions of the sKdV equation was by Haragus {\\em et al.} \\cite{haraguslombardischeel}, where the spectral stability of periodic waves of sufficiently small amplitude propagating with small velocities is shown.\n\\begin{figure}[!tb]\n\\begin{tabular}{cc}\n\\def\\svgwidth{2.8in}\n\\hspace*{0in}\\input{KawaharaProfile.pdf_tex}\n&\n\\def\\svgwidth{2.8in}\n\\hspace*{0in}\\input{KawaharaSpectrum.pdf_tex}\n\\\\\n(a)&(b)\\\\\n\\def\\svgwidth{2.8in}\n\\hspace*{0in}\\input{KawaharaBubble1ComplexLambda.pdf_tex}\n&\n\\def\\svgwidth{2.8in}\n\\hspace*{0in}\\input{KawaharaBubble2ComplexLambda.pdf_tex}\\\\\n(c)&(d)\\\\\n\\def\\svgwidth{2.8in}\n\\hspace*{0in}\\input{KawaharaBubble1MuLambda.pdf_tex}\n&\n\\def\\svgwidth{2.8in}\n\\hspace*{0in}\\input{KawaharaBubble2MuLambda.pdf_tex}\\\\\n(e)&(f)\n\\end{tabular}\n\\caption{\\la{fig:kawaharasol} (a) A $2\\pi$-periodic solution of the sKdV equation, with amplitude slightly above $10^{-4}$. (b) The numerically computed stability spectrum of this solution, with two regions corresponding to instabilities highlighted. Note the $4$-fold symmetry of the spectrum. (c) A blow-up of region~I in (b), showing that the bubble of high-frequency instabilities is centered around the value predicted by our theory. (d) Same, but for region~II. (e) The range of Floquet exponents $\\mu$ corresponding to the bubble shown in (c), again validating the theoretical zero-amplitude prediction. (f) Same as (e), but for the bubble shown in (d).\n }\n\\end{figure}\n\nUsing numerical continuation methods \\cite{allgowergeorg}, we construct small-amplitude $2\\pi$-periodic solutions of the sKdV equation, see Fig.~\\ref{fig:kawaharasol}a. The spectrum of this solution is computed using the method of \\cite{deconinckkutz1}.
It is shown in Fig.~\\ref{fig:kawaharasol}b. In order to capture the instabilities shown, an adaptive spacing on the Floquet exponent $\\mu \\in [-1\/2, 1\/2)$ has to be used, with more values chosen near the values of $\\mu$ for which collisions occur, as determined above. The Hamiltonian symmetry of the spectrum is clear in Fig.~\\ref{fig:kawaharasol}b. The regions I and II are zoomed in on, in Panels (c) and (d) respectively. A third bubble is barely discernible above region~II. The vertical extent of the bubble in Fig.~\\ref{fig:kawaharasol}d is $1.32\\,10^{-6}$. The Floquet exponents giving rise to the two bubbles with largest real part are shown together with the real part of those eigenvalues in Panels~(e) and~(f). The support of the $\\mu$ region in Fig.~\\ref{fig:kawaharasol}f is approximately $8.6\\,10^{-7}$. It should be emphasized that since there does not appear to be a modulation instability (see Fig.~\\ref{fig:kawaharasol}b), the high-frequency instabilities are the only instabilities of the small-amplitude $2\\pi$-periodic solutions.\n\n\\end{comment}\n\n\\subsection{The Whitham equation}\\la{ex:whitham}\n\nAs our second scalar example, we consider the Whitham equation \\rf{whitham}. For this example, no analytical results exist, but the work of Sanford {\\em et al.} \\cite{SanfordKodamaCarterKalisch} allows for a comparison with numerical results. Sanford {\\em et al.} do not report the presence of high-frequency instabilities for solutions of any period. Their absence has been verified by us using the same methods, see Fig.~\\ref{fig:wstuff}. Hur \\& Johnson \\cite{hurjohnson} consider periodic solutions, focusing on the modulational instability. However, they do include a Krein signature calculation of the eigenvalues of the zero-amplitude solutions, reaching the same conclusions obtained below. 
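The absence of collisions away from the origin for the Whitham equation can also be seen in a direct numerical scan of the zero-amplitude eigenvalues, anticipating the six-step analysis below. A minimal sketch (with the normalization $g=h=1$; the ranges, grid, and tolerances are arbitrary choices of ours):

```python
import math

g = h = 1.0   # normalized gravity and depth, an assumption of this sketch

def omega(k):
    # Whitham dispersion relation: omega(k) = sign(k) sqrt(g k tanh(k h))
    return math.copysign(math.sqrt(g * abs(k) * math.tanh(abs(k) * h)), k)

V = omega(1.0)            # bifurcation speed omega(1)/1

def lam(n, mu):
    # lambda_n^{(mu)} / i : the spectrum is purely imaginary, i(n+mu)V - i omega(n+mu)
    k = n + mu
    return V * k - omega(k)

# Scan for collisions lambda_n = lambda_m != 0 with n != m: none should be found.
found = []
for i in range(-200, 201):
    mu = i / 400.0        # mu in [-1/2, 1/2]
    for n in range(-8, 9):
        for m in range(n + 1, 9):
            a, b = lam(n, mu), lam(m, mu)
            if abs(a - b) < 1e-9 and abs(a) > 1e-9:
                found.append((n, m, mu))
assert found == []
```

The scan is of course not a proof; the concavity argument of Step 5 below is what rules out nonzero collisions for all $(n,m,\mu)$.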
In what follows the nonlinear term $N(u)$ does not contribute, as in the previous examples.\n\n\\begin{figure}[tb]\n\\begin{tabular}{cc}\n\\def\\svgwidth{3.2in}\\hspace*{-0.2in}\\input{whithamProfile.pdf_tex} &\n\\def\\svgwidth{3.6in}\\hspace*{-0.4in}\\input{whithamStab.pdf_tex}\\\\\n(a) & \\hspace*{-0.4in}(b)\n\\end{tabular}\n\\caption{\\la{fig:wstuff} (a) The profile of a $2\\pi$-periodic small-amplitude traveling wave solution of the Whitham equation \\rf{whitham} with $c\\approx 0.7697166847$, computed using a cosine collocation method with 128 Fourier modes, see \\cite{SanfordKodamaCarterKalisch}. (b) The stability spectrum of this solution, computed using the Fourier-Floquet-Hill method \\cite{deconinckkutz1} with $128$ modes and 2000 different values of the Floquet parameter $\\mu$. The presence of a modulational instability is clear, but no high-frequency instabilities are observed, in agreement with the theory presented. Note that the hallmark bubbles of instability were looked for far outside of the region displayed here.\n }\n\\end{figure}\n\n\\begin{enumerate}\n\n\\item The modified Hamiltonian is $$H^0_V=\\frac{V}{2}\\int_0^{2\\pi} u^2 dx-\\frac{1}{2}\\int_{-\\pi}^\\pi \\int_{-\\pi}^\\pi K(x-y)u(x) u(y) dx dy.$$\n\n\\noindent We use $V$ to denote the speed of the traveling wave, to avoid confusion with the phase speed $c(k)$ in the kernel of the Whitham equation.\n\n\\item The dispersion relation is given by $\\omega(k)=\\mbox{sign}(k)\\sqrt{gk\\tanh(kh)}$.\n\n\\item The bifurcation branch starts at $(V,0)=(\\sqrt{g \\tanh(h)},0)$, where we have chosen $N=1$ so that the minimal period of the solutions is $2\\pi$.\n\n\\item The elements of the stability spectrum are given by\n\n\\begin{equation}\\la{evwhitham}\n\\lambda_n^{(\\mu)}=i(n+\\mu)\\sqrt{g \\tanh(h)}-i \\mbox{sign}(n+\\mu)\\sqrt{g(n+\\mu)\\tanh (h(n+\\mu))}.\n\\end{equation}\n\n\\item The dispersion relation for the Whitham equation is plotted in
Fig.~\\ref{fig:wcollision}(a), together with the line through the origin with slope $\\omega(1)\/1$. Since the dispersion relation is concave down (up) in the first (third) quadrant, the condition \\rf{scalarcollision} is not satisfied. Thus collisions of eigenvalues away from the origin do not occur. This is also illustrated in Fig.~\\ref{fig:wcollision}(b), where the imaginary part of $\\lambda_n^{(\\mu)}$ is plotted for various integers $n$.\n\n\\begin{figure}[tb]\n\\begin{tabular}{cc}\n\\def\\svgwidth{2.8in}\\input{wdispersion.pdf_tex} &\n\\def\\svgwidth{2.8in}\\input{wcollision.pdf_tex}\\\\\n(a) & (b)\n\\end{tabular}\n\\caption{\\la{fig:wcollision} (a) The dispersion relation for the Whitham equation (curve), together with the line through the origin of slope $\\omega(1)\/1$, representing the right-hand side of \\rf{scalarcollision}. (b) The curves $\\Omega(k+n)$, for various (integer) values of $n$, illustrating that collisions occur at the origin only.\n }\n\\end{figure}\n\n\\item No Krein signature calculation is relevant since eigenvalues do not collide away from the origin.\n\n\\end{enumerate}\n\nWe conclude that periodic solutions of sufficiently small amplitude of the Whitham equation are not susceptible to high-frequency instabilities. This is consistent with the results presented in \\cite{SanfordKodamaCarterKalisch}, see also Fig.~\\ref{fig:wstuff}. Thus, the Whitham equation is unable to replicate the instabilities found in the shallow depth water wave problem for solutions of small amplitude, despite having a dispersion relation that is identical to one branch of the water wave dispersion relation. We return to this in the next section.\n\n\\section{Two-component Hamiltonian PDEs with canonical Poisson structure}\\la{sec:vector}\n\nWe generalize the ideas of the previous section to the setting of two-component Hamiltonian PDEs with canonical Poisson structure.
In other words, the evolution PDE can be written as\n\n\\begin{align}\\la{can}\n\\pp{}{t}\n\\left(\n\\begin{array}{c}\nq\\\\p\n\\end{array}\n\\right)&=J \\nabla H ~~\\iff~~\\left\\{\n\\begin{array}{rcl}\n\\displaystyle q_t&=&\\displaystyle \\dd{H}{p}\\\\\n\\displaystyle p_t&=&\\displaystyle -\\dd{H}{q}\n\\end{array}\n\\right.,\n\\end{align}\n\n\\noindent where the Poisson operator $J$ is given by \\rf{canonicalJ}. This Poisson operator is nonsingular, thus there are no Casimirs. Examples of systems of this form are the Nonlinear Schr\\\"odinger equation in real coordinates \\cite{bottmandeconincknivala}, the Sine-Gordon equation \\cite{faddeev, jonesmiller1, jonesmiller2}, and the water wave problem \\cite{zakharov1}. As before, our interest is in the linearization of this system around the zero-amplitude solution. The quadratic Hamiltonian corresponding to this linearization can be written as\n\n\\begin{equation}\\la{canham}\nH^0=\\int_0^{2\\pi} \\left(\n\\frac{1}{2}\\sum_{n=0}^\\infty c_n q_{nx}^2+\\frac{1}{2}\\sum_{n=0}^\\infty b_n p_{nx}^2+p\\sum_{n=0}^\\infty a_n q_{nx}\n\\right)dx,\n\\end{equation}\n\n\\noindent with $a_n$, $b_n$, $c_n\\in \\mathbb{R}$. Typically the number of terms in the sums above is finite, but an example like the water wave problem requires the possibility of an infinite number of nonzero contributing terms in the Hamiltonian. As for the Whitham equation, convergence of the resulting series is not problematic. The form \\rf{canham} is the most general form of a quadratic Hamiltonian depending on two functions $q(x,t)$ and $p(x,t)$. Indeed, any quadratic term of a form not included above is reduced to a term that is included by straightforward integration by parts.
The linearization of \rf{can} is given by

\setcounter{saveeqn}{\value{equation}}

\begin{align}\la{canlin}
q_t&=\sum_{n=0}^\infty a_n q_{nx}+\sum_{n=0}^\infty (-1)^n b_n p_{2nx},\\
p_t&=-\sum_{n=0}^\infty (-1)^n c_n q_{2nx}-\sum_{n=0}^\infty (-1)^n a_n p_{nx}.
\end{align}

\setcounter{equation}{\value{saveeqn}}

We proceed with the six-step program outlined in the introduction.

\begin{enumerate}

\item {\bf Quadratic Hamiltonian.} The modified Hamiltonian $H_c^0$ is given by

\begin{equation}
H_c^0=\int_0^{2\pi} \left(c p q_x+
\frac{1}{2}\sum_{n=0}^\infty c_n q_{nx}^2+\frac{1}{2}\sum_{n=0}^\infty b_n p_{nx}^2+p\sum_{n=0}^\infty a_n q_{nx}
\right)dx.
\end{equation}

\noindent This expression serves as a repository for the coefficients which are needed in what follows.

\item {\bf Dispersion relation.} We look for solutions to \rf{canlin} of the form $q=\hat q \exp(ikx-i\omega t)$, $p=\hat p \exp(ikx-i\omega t)$. Requiring the existence of non-trivial ({\em i.e.}, non-zero) solutions, we find that $\omega(k)$ is determined by

\begin{equation}\la{candis}
\det \left(
\begin{array}{cc}
\displaystyle i\omega+\sum_{n=0}^\infty a_n (ik)^n &
\displaystyle \sum_{n=0}^\infty b_n k^{2n}\\
\displaystyle -\sum_{n=0}^\infty c_n k^{2n} &
\displaystyle i\omega -\sum_{n=0}^\infty a_n (-1)^n (ik)^n
\end{array}
\right)=0.
\end{equation}

\noindent This is a quadratic equation for $\omega(k)$, resulting in two branches of the dispersion relation, $\omega_1(k)$ and $\omega_2(k)$. Assuming that (\ref{canlin}-b) is dispersive, $\omega_1(k)$ and $\omega_2(k)$ are real-valued for $k\in \mathbb{R}$. 
This is not easily translated into a condition on the coefficients $a_n$, $b_n$, $c_n$: reality of the coefficients alone does not guarantee that $\omega_{1,2}(k)$ are real.

\item {\bf Bifurcation branches.} Traveling wave solutions are stationary solutions of

\setcounter{saveeqn}{\value{equation}}

\begin{align}\la{linmoving}
q_t&=c q_x+\sum_{n=0}^\infty a_n q_{nx}+\sum_{n=0}^\infty (-1)^n b_n p_{2nx}=\dd{H_c^0}{p},\\
p_t&=c p_x-\sum_{n=0}^\infty (-1)^n c_n q_{2nx}-\sum_{n=0}^\infty (-1)^n a_n p_{nx}=-\dd{H_c^0}{q}.
\end{align}

\setcounter{equation}{\value{saveeqn}}

\noindent This system has the dispersion relations $\Omega_{1,2}(k)=\omega_{1,2}(k)-c k$. In Fourier space the stationary equations become

\setcounter{saveeqn}{\value{equation}}

\begin{align}
0&=ik c \hat q+\sum_{n=0}^\infty a_n (ik)^n \hat q+\sum_{n=0}^\infty (-1)^n b_n (ik)^{2n} \hat p,\\
0&=ik c \hat p-\sum_{n=0}^\infty (-1)^n c_n (ik)^{2n} \hat q-\sum_{n=0}^\infty (-1)^n a_n (ik)^n \hat p.
\end{align}

\setcounter{equation}{\value{saveeqn}}

\noindent Thus $c$ is obtained from the condition that these equations have a nontrivial solution $(\hat q, \hat p)$. This condition requires that the $2\times 2$ determinant of the system above is zero. A simple comparison with \rf{candis} gives that there are two bifurcation points given by $(\omega_1(N)/N,0)$ and $(\omega_2(N)/N,0)$. Any positive integer value of $N$ is allowed, but we usually choose $N=1$ so that the fundamental period is $2\pi$. In what follows, we examine the small-amplitude solutions starting from the branch $(c,0)=(\omega_1(N)/N,0)$, without loss of generality.

In many systems the two bifurcation branches are reflections of each other about the vertical axis. The corresponding solution profiles are identical to each other, moving to the right on one branch, moving to the left on the other. Examples are given below. 
Non-symmetric bifurcation branches cannot be excluded, however, without imposing extra assumptions on the coefficients of (\\ref{canlin}-b).\n\n\\item {\\bf Stability spectrum.} To find the stability spectrum, we let $q=Q(x) \\exp(\\lambda t)$, $p=P(x) \\exp(\\lambda t)$. Next, using Floquet's Theorem,\n\n\\begin{equation}\\la{canfloquet}\nQ=e^{i\\mu x}\\sum_{j=-\\infty}^\\infty Q_j e^{i j x}, ~~P=e^{i\\mu x}\\sum_{j=-\\infty}^\\infty P_j e^{ijx},\n\\end{equation}\n\n\\noindent with $\\mu\\in (-1\/2, 1\/2]$. Since (\\ref{linmoving}-b) has constant coefficients, it suffices to consider monochromatic waves, {\\em i.e.}, only one term of the sums in \\rf{canfloquet} is retained. It follows that $\\lambda$ satisfies \\rf{candis} with $i\\omega$ replaced by $-\\lambda+i(n+\\mu)c$. Thus\n\n\\begin{equation}\\la{canlambda}\n\\lambda_{n,l}^{(\\mu)}=i(n+\\mu)c-i\\omega_l(n+\\mu)=-i \\Omega_l(n+\\mu), ~~~~l=1,2.\n\\end{equation}\n\n\\noindent As expected, the zero solution is neutrally stable since $\\omega_{1,2}(k)$ are real, assuming dispersive equations. The stability spectrum consists of two one-parameter point sets, one for $l=1$, the other for $l=2$.\n\n\\item {\\bf Collision conditions.} Ignoring collisions at the origin, we require $\\lambda_{n_1, l_1}^{(\\mu)}=\\lambda_{n_2, l_2}^{(\\mu)}\\neq 0$ for some $n_1, n_2 \\in \\mathbb{Z}$, $\\mu\\in (-1\/2,1\/2]$, $l_1, l_2\\in \\{1,2\\}$. This gives\n\n\\begin{equation}\\la{cancollision}\n\\frac{\\omega_{l_1}(n_1+\\mu)-\\omega_{l_2}(n_2+\\mu)}{n_1-n_2}=\\frac{\\omega_1(N)}{N}.\n\\end{equation}\n\n\\noindent The right-hand side depends on $\\omega_1$ since we have chosen the first branch of the dispersion relation in Step~3. As before, this collision condition may be interpreted as a parallel secant condition, but with the additional freedom of being able to use points from both branches of the dispersion relation. 
This is illustrated in Fig.~\ref{fig:cancollision}.

\begin{figure}[tb]
\def\svgwidth{4.8in}
\centerline{\hspace*{0in}\input{canonicalcollision.pdf_tex}}
\caption{\la{fig:cancollision} The graphical interpretation of the collision condition \rf{cancollision}. The dashed curves are the graphs of the dispersion relations $\omega_1(k)$ and $\omega_2(k)$. The slope of the segment $P_1P_2$ is the right-hand side in \rf{cancollision}. The collision condition \rf{cancollision} seeks points whose abscissas are an integer apart, so that at least one of the segments $P_3P_4$, $P_3P_6$, $P_5P_4$ or $P_5P_6$ is parallel to the segment $P_1P_2$.
 }
\end{figure}

\item {\bf Krein signature.} In the setting of a system of Hamiltonian PDEs as opposed to a scalar PDE, we use a different but equivalent characterization of the Krein signature~\cite{mackay}. The Krein signature is the contribution to the Hamiltonian of the mode involved with the collision. Since our Hamiltonians are quadratic, this implies that the Krein signature of the eigenvalue $\lambda$ with eigenvector $v$ is given by

 \begin{equation}\la{kreinhessian}
 \mbox{signature}(\lambda, v)=\mbox{sign}(v^\dagger {\cal L}_c v),
 \end{equation}

\noindent where ${\cal L}_c$ is the Hessian of the Hamiltonian $H_c^0$, and $v^\dagger$ denotes the complex conjugate of the transposed vector. Since the Hessian ${\cal L}_c$ is a symmetric linear operator, the argument of the $\mbox{sign}$ in \rf{kreinhessian} is real. Recall that the linearization of the system \rf{hamc} can be written as

\begin{equation}
\partial_t\left(
\begin{array}{c}
q\\ p
\end{array}
\right)=
J {\cal L}_c
\left(
\begin{array}{c}
q\\ p
\end{array}
\right),
\end{equation}

\noindent which makes it easy to read off ${\cal L}_c$. 
For the case of (\ref{linmoving}-b),

\begin{equation}\la{hessian}
{\cal L}_c=\left(
\begin{array}{cc}
\displaystyle \sum_{n=0}^\infty c_n (-1)^n \partial_x^{2n} &
\displaystyle -c \partial_x+\sum_{n=0}^\infty a_n (-1)^n \partial_x^n \\
\displaystyle c \partial_x+\sum_{n=0}^\infty a_n \partial_x^n &
\displaystyle \sum_{n=0}^\infty b_n (-1)^n \partial_x^{2n}
\end{array}
\right).
\end{equation}

\noindent Next, the eigenvectors $v$ are given by

\begin{equation}
\left(
\begin{array}{c}
q \\ p
\end{array}
\right)=e^{\lambda t+i\mu x+i n x}\left(
\begin{array}{c}
Q_n \\ P_n
\end{array}
\right),
\end{equation}

\noindent where $(Q_n, P_n)^T$ satisfies

\begin{equation}\la{evec}
\lambda
\left(
\begin{array}{c}
\displaystyle Q_n \\ \displaystyle P_n
\end{array}
\right)=J
\hat {\cal L}_c\left(\begin{array}{c}
\displaystyle Q_n \\ \displaystyle P_n
\end{array}
\right).
\end{equation}

\noindent Here $\hat {\cal L}_c$ is the symbol of ${\cal L}_c$, {\em i.e.}, the $2\times 2$ matrix obtained by replacing $\partial_x\rightarrow i(n+\mu)$ in \rf{hessian}, where the summation index is renamed $j$ to avoid confusion with the mode index $n$:

\begin{equation}\la{symbol}
\hat {\cal L}_c=
\left(
\begin{array}{cc}
\displaystyle \sum_{j=0}^\infty c_j (n+\mu)^{2j} &
\displaystyle -i c (n+\mu)+\sum_{j=0}^\infty a_j (-1)^j \left(i(n+\mu)\right)^j \\
\displaystyle i c (n+\mu) +\sum_{j=0}^\infty a_j \left(i(n+\mu)\right)^j &
\displaystyle \sum_{j=0}^\infty b_j (n+\mu)^{2j}
\end{array}
\right).
\end{equation}

\noindent Using \rf{evec}, \rf{kreinhessian} is rewritten as

\begin{equation}
\mbox{signature}\left(\lambda_{n,l}^{(\mu)}, v_{n,l}^{(\mu)}\right)=\mbox{sign}\left(\lambda_{n,l}^{(\mu)}\det \left(
\begin{array}{cc}
Q_n & P_n\\
Q_n^* & P_n^*
\end{array}
\right)\right).
\end{equation}

\noindent The determinant is imaginary, since interchanging the rows gives the complex conjugate result. 
Since $\lambda_{n,l}^{(\mu)}$ is imaginary, the result is real and the signature is well defined. Again, it is clear that no conclusions can be drawn if $\lambda_{n,l}^{(\mu)}=0$. Since we wish to examine whether signatures are equal or opposite, we consider the product of the signatures corresponding to $\lambda_{n_1,l_1}^{(\mu)}$ and $\lambda_{n_2,l_2}^{(\mu)}$. Using \rf{symbol} we find that signatures are opposite, provided that

\begin{align}\nonumber
&\sum_{j_1=0}^\infty c_{j_1} (n_1+\mu)^{2j_1}\sum_{j_2=0}^\infty c_{j_2} (n_2+\mu)^{2j_2}
\left(\omega_{l_1}(n_1+\mu)+\sum_{j_3=0}^\infty a_{2j_3+1}(-1)^{j_3}(n_1+\mu)^{2j_3+1}\right)\times\\\la{cankrein1}
&~~~~~~~~~~~~~~~~\left(\omega_{l_2}(n_2+\mu)+\sum_{j_4=0}^\infty a_{2j_4+1}(-1)^{j_4}(n_2+\mu)^{2j_4+1}\right)<0.
\end{align}

\noindent The above condition is obtained by expressing the eigenvectors in \rf{evec} in terms of the entries of the first row of \rf{symbol}. An equivalent condition is obtained using the second row:

\begin{align}\nonumber
&\sum_{j_1=0}^\infty b_{j_1} (n_1+\mu)^{2j_1}\sum_{j_2=0}^\infty b_{j_2} (n_2+\mu)^{2j_2}
\left(\omega_{l_1}(n_1+\mu)-\sum_{j_3=0}^\infty a_{2j_3+1}(-1)^{j_3}(n_1+\mu)^{2j_3+1}\right)\times\\\la{cankrein2}
&~~~~~~~~~~~~~~~~\left(\omega_{l_2}(n_2+\mu)-\sum_{j_4=0}^\infty a_{2j_4+1}(-1)^{j_4}(n_2+\mu)^{2j_4+1}\right)<0.
\end{align}

\noindent Depending on the system at hand, the condition \rf{cankrein1} or \rf{cankrein2} may be more convenient to use.

\end{enumerate}

{\bf Remark.} An important class of systems is those for which $\omega_1(k)=-\omega_2(k)$. We refer to such systems as even systems. It follows immediately from \rf{candis} that for even systems $a_{2j+1}=0$, $j=0,1,2,\ldots$. 
The Krein conditions \rf{cankrein1} and \rf{cankrein2} simplify significantly, becoming

\begin{equation}\la{sym1}
\omega_{l_1}(n_1+\mu)\omega_{l_2}(n_2+\mu)\sum_{j_1=0}^\infty c_{j_1} (n_1+\mu)^{2j_1}\sum_{j_2=0}^\infty c_{j_2} (n_2+\mu)^{2j_2}<0,
\end{equation}

\noindent or

\begin{equation}\la{sym2}
\omega_{l_1}(n_1+\mu)\omega_{l_2}(n_2+\mu)\sum_{j_1=0}^\infty b_{j_1} (n_1+\mu)^{2j_1}\sum_{j_2=0}^\infty b_{j_2} (n_2+\mu)^{2j_2}<0.
\end{equation}


We summarize our results.

\vspace*{0.1in}

Assume that the linearization of the Hamiltonian system \rf{can} is dispersive ({\em i.e.,} its dispersion relations $\omega_1(k)$ and $\omega_2(k)$ are real valued for $k\in \mathbb{R}$). Let $N$ be a strictly positive integer.
Consider $2\pi/N$-periodic traveling wave solutions of this system of sufficiently small amplitude and with velocity sufficiently close to $\omega_1(N)/N$. In order for these solutions to be spectrally unstable with respect to high-frequency instabilities as a consequence of two-eigenvalue collisions, it is necessary that there exist $l_1$, $l_2\in \{1,2\}$, $n_1$, $n_2\in \mathbb{Z}$, $n_1\neq n_2$, $\mu \in (-1/2, 1/2]$ for which

\begin{equation}
\frac{\omega_{l_1}(n_1+\mu)}{n_1+\mu}\neq \frac{\omega_1(N)}{N}, ~~\frac{\omega_{l_2}(n_2+\mu)}{n_2+\mu}\neq \frac{\omega_1(N)}{N},
\end{equation}

\noindent such that

\begin{equation}
\frac{\omega_{l_1}(n_1+\mu)-\omega_{l_2}(n_2+\mu)}{n_1-n_2}=\frac{\omega_1(N)}{N},
\end{equation}

\noindent and \rf{cankrein1}, or equivalently, \rf{cankrein2} holds.

\vspace*{0.1in}

We proceed with examples.

\subsection{The Sine-Gordon equation}

As a first example, we consider the Sine-Gordon (SG) equation \cite{scott}:

\begin{equation}
u_{tt}-u_{xx}+\sin u=0.
\end{equation}

\noindent The stability of the periodic traveling wave solutions of this equation has been studied recently by 
Jones {\\em et al.} \\cite{jonesmiller1, jonesmiller2}. Different classes of periodic traveling wave solutions exist, but only two of those can be considered as small-amplitude perturbations of a constant background state. We consider the so-called superluminal ($c^2>1$) librational waves. The subluminal ($c^2<1$) librational waves require the use of the transformation $v=u-\\pi$ so that their small amplitude limit approaches the zero solution. We do not consider them here. The limits of the rotational waves are either soliton solutions or have increasingly larger amplitude. As such the rotational waves do not fit in the framework of this paper. An overview of the properties of these solutions as well as illuminating phase-plane plots are found in \\cite{jonesmiller1}. In contrast to \\cite{jonesmiller1, jonesmiller2}, we fix the period of our solutions, as elsewhere in this paper. This makes a comparison of the results more complicated.\n\n\\begin{enumerate}\n\n\\item {\\bf Quadratic Hamiltonian.} With $q=u$, $p=u_t$,\n\n\\begin{equation}\nH_c^0=\\int_0^{2\\pi} \\left(\nc p q_x+\\frac{1}{2}p^2+\\frac{1}{2}q^2+\\frac{1}{2}q_x^2\n\\right)dx.\n\\end{equation}\n\n\\noindent Thus $b_0=1$, $c_0=1$, $c_1=1$ are the only non-zero coefficients.\n\n\\item {\\bf Dispersion relation.} Using \\rf{candis},\n\n\\begin{equation}\n\\omega_{1,2}=\\pm \\sqrt{1+k^2}.\n\\end{equation}\n\n\\noindent These expressions are real valued for $k\\in \\mathbb{R}$, thus the SG equation is dispersive when linearized around the superluminal librational waves. 
Both branches of the dispersion relation are displayed in Fig.~\ref{fig:sgcollision}a.

\item {\bf Bifurcation branches.} With $N=1$, we obtain $c=\omega_1(1)/1=\sqrt{2}$.

\item {\bf Stability spectrum.} The stability spectrum is given by \rf{canlambda}:

\begin{equation}
\lambda_{n,l}^{(\mu)}=-i\Omega_l(n+\mu)=i(n+\mu)\sqrt{2}\mp i\sqrt{1+(n+\mu)^2},
\end{equation}

\noindent with $l=1$ ($l=2$) corresponding to the $-$ ($+$) sign. Here $n\in \mathbb{Z}$, $\mu\in (-1/2, 1/2]$.

\item {\bf Collision condition.} The collision condition \rf{cancollision} becomes

\begin{equation}
\frac{\omega_1(n_1+\mu)-\omega_2(n_2+\mu)}{n_1-n_2}=\sqrt{2}.
\end{equation}

\noindent We have chosen $\omega_{l_1}=\omega_1$ and $\omega_{l_2}=\omega_2$, since it is clear that the collision condition can only be satisfied if points from both dispersion relation branches are used. This is illustrated in Fig.~\ref{fig:sgcollision}a. In fact, many collisions occur, as is illustrated in Fig.~\ref{fig:sgcollision}b. One explicit solution is given by

\begin{equation}
n_1=3,~n_2=0,~\mu=\frac{\sqrt{10}-3}{2}\approx 0.081138830.
\end{equation}

\begin{figure}[tb]
\begin{tabular}{cc}
\def\svgwidth{2.8in}\input{sg1dispersion.pdf_tex} &
\def\svgwidth{2.8in}\input{sg1collision.pdf_tex}\\
(a) & (b)
\end{tabular}
\caption{\la{fig:sgcollision} (a) The two branches of the dispersion relation for the Sine-Gordon equation. The line segment $P_1 P_2$ has slope $\omega_1(1)/1$, representing the right-hand side of \rf{cancollision}. The slope of the parallel line segment $P_3 P_4$ represents the left-hand side of \rf{cancollision}. 
(b) The two families of curves $\Omega_1(k+n)$ (red, solid) and $\Omega_2(k+n)$ (black, dashed), for various (integer) values of $n$, illustrating that many collisions occur away from the origin.
 }
\end{figure}

\item {\bf Krein signature.} Since $\omega_2(k)=-\omega_1(k)$, we may use the conditions \rf{sym1} or \rf{sym2}. Since only one $b_j\neq 0$, \rf{sym2} is (slightly) simpler to use. We get that

\begin{equation}
\omega_1(n_1+\mu)\omega_2(n_2+\mu)<0
\end{equation}

\noindent is a necessary condition for the presence of high-frequency instabilities of small-amplitude superluminal librational solutions of the SG equation. The condition is trivially satisfied, as it was remarked in the previous step that points from both dispersion relation branches have to be used to have collisions.

\end{enumerate}

\begin{figure}[tb]
\begin{tabular}{cc}
\def\svgwidth{3.2in}\hspace*{-0.2in}\input{SGProfile.pdf_tex} &
\def\svgwidth{3.7in}\hspace*{-0.4in}\input{SGStab.pdf_tex}\\
(a) & \hspace*{-0.4in}(b)
\end{tabular}
\caption{\la{fig:sgstuff} (a) A small-amplitude $2\pi$-periodic superluminal solution of the SG equation ($c\approx 1.236084655663$). (b) A blow-up of the numerically computed stability spectrum in a neighborhood of the origin, illustrating the presence of a modulational instability, but the absence of high-frequency instabilities.
 }
\end{figure}

It follows that for the superluminal solutions of the SG equation the necessary condition for the occurrence of high-frequency instabilities is satisfied. Nevertheless, as the results of \cite{jonesmiller1, jonesmiller2} show, such instabilities do not occur. This is illustrated in Fig.~\ref{fig:sgstuff}. The left panel illustrates an exact $2\pi$-periodic superluminal solution of the SG equation with $c\approx 1.236084655663$, obtained using elliptic functions. 
Computing the stability spectrum (right panel) of the solution using the Fourier-Floquet-Hill method \cite{deconinckkutz1} with $51$ Fourier modes and $1000$ Floquet exponents shows that no high-frequency instabilities are present to within the accuracy of the numerical method. This is consistent with the results of \cite{jonesmiller1, jonesmiller2} where only the presence of a modulational instability is observed. Thus the example of this section illustrates that the necessary condition is not always sufficient.

\subsection{The water wave problem}\la{ex:ww}

As a final example, we consider the water wave problem: the problem of determining the dynamics of the surface of an incompressible, irrotational fluid under the influence of gravity. For this example, the effects of surface tension are ignored and we consider only two-dimensional fluids, {\em i.e.}, the surface is one dimensional. The Euler equations governing the dynamics are

\setcounter{saveeqn}{\value{equation}}

\begin{align}\la{eulera}
\phi_{xx}+\phi_{zz}&=0, && (x,z)\in D,\\
\phi_z&=0, && z=-h,\\
\eta_t+\eta_x \phi_x&=\phi_z,&& z=\eta(x,t),\\
\phi_t+\frac{1}{2}(\phi_x^2+\phi_z^2)+g\eta&=0, && z=\eta(x,t),
\end{align}

\setcounter{equation}{\value{saveeqn}}

\noindent where $x$ and $z$ are the horizontal and vertical coordinate, respectively, see Fig.~\ref{fig:ww}; $z=\eta(x,t)$ is the free top boundary and $\phi(x,z,t)$ is the velocity potential. Further, $g$ is the acceleration due to gravity and $h$ is the average depth of the fluid.

\begin{figure}[tb]
\centerline{\def\svgwidth{6in}\input{perww.pdf_tex}}
\caption{\la{fig:ww} The domain for the water wave problem. Here $z=0$ is the equation of the surface for flat water, $z=-h$ is the flat bottom.
 }
\end{figure}

The main goal of the water wave problem is to understand the dynamics of the free surface $\eta(x,t)$. 
Thus it is convenient to recast the problem so as to involve only surface variables. Zakharov \cite{zakharov1} showed that the water wave problem is Hamiltonian with canonical variables $\eta(x,t)$ and $\varphi(x,t)=\phi(x,\eta(x,t),t)$. In other words $\varphi(x,t)$ is the velocity potential evaluated at the surface. Following \cite{craigsulem}, the Hamiltonian is written as

\begin{equation}
H=\frac{1}{2}\int_0^{2\pi}\left(\varphi G(\eta) \varphi+g \eta^2\right)dx,
\end{equation}

\noindent where $G(\eta)$ is the Dirichlet~$\rightarrow$~Neumann operator: $G(\eta)\varphi=(1+\eta_x^2)^{1/2}\phi_n$, at $z=\eta(x,t)$. Here $\phi_n$ is the normal derivative of $\phi$. Using the water wave problem, $G(\eta)\varphi=\phi_z-\eta_x \phi_x=\eta_t$, which is the first of Hamilton's equations. The water wave problem for $\eta(x,t)$ and $\varphi(x,t)$ is

\begin{equation}\la{wweqs}
\eta_t=\dd{H}{\varphi}, ~~\varphi_t=-\dd{H}{\eta}.
\end{equation}

\begin{enumerate}

\item {\bf Quadratic Hamiltonian.} Since, for our purposes, the linearization of (\ref{wweqs}-b) in a moving frame is required, it suffices to evaluate the Dirichlet~$\rightarrow$~Neumann operator $G(\eta)$ at the flat surface $\eta=0$, resulting in

 \begin{equation}
 G(0)=-i \partial_x \tanh(-ih \partial_x).
 \end{equation}

The quadratic Hamiltonian $H_c^0$ is given by

\begin{equation}
H_c^0=c \int_0^{2\pi} \varphi \eta_x dx+\frac{1}{2}\int_0^{2\pi}\left(
\varphi \left(-i \tanh(-ih\partial_x)\right) \varphi_x+g \eta^2
\right) dx,
\end{equation}

\noindent giving rise to the linearized equations in a frame moving with velocity $c$:

\setcounter{saveeqn}{\value{equation}}

\begin{align}\la{linww}
\eta_t&=\dd{H_c^0}{\varphi}=c\eta_x-i \tanh(-ih \partial_x)\varphi_x,\\
\varphi_t&=-\dd{H_c^0}{\eta}=c\varphi_x-g\eta.
\end{align}

\setcounter{equation}{\value{saveeqn}}

\item {\bf Dispersion 
relation.} The well-known dispersion relation \cite{vandenboek} for the water wave problem is immediately recovered from the linearized system (\ref{linww}-b) with $c=0$ (no moving frame), resulting in

\begin{equation}\la{wwdisp}
\omega^2=gk \tanh(kh).
\end{equation}

\noindent Note that the right-hand side of this expression is non-negative. Thus there are two branches to the dispersion relation:

\begin{equation}\la{wwbranches}
\omega_{1,2}=\pm \mbox{sign}(k)\sqrt{gk\tanh(kh)}.
\end{equation}

\noindent Thus $\omega_1$ ($\omega_2$) corresponds to positive (negative) phase speed, independent of the sign of $k$.

\item {\bf Bifurcation branches.} Branches originate from $(c,\mbox{amplitude})=(\omega_1(1)/1,0)$ and $(c,\mbox{amplitude})=(\omega_2(1)/1,0)$. Without loss of generality, we focus on the first branch, for which the phase speed $\sqrt{g\tanh(h)}$ is positive. This allows for a straightforward comparison of our results with those for the Whitham equation, in Example~\ref{ex:whitham}.

\item {\bf Stability spectrum.} The elements of the spectrum are given by

\setcounter{saveeqn}{\value{equation}}
\begin{align}\nonumber
\lambda_{n,1}^{(\mu)}&=-i\,\Omega_1(n+\mu)\\\la{wwspectrum}
&=i(n+\mu)\sqrt{g\tanh(h)}-i\,\mbox{sign}(n+\mu)\sqrt{g(n+\mu)\tanh(h(n+\mu))},\\
\nonumber
\lambda_{n,2}^{(\mu)}&=-i\,\Omega_2(n+\mu)\\
&=i(n+\mu)\sqrt{g\tanh(h)}+i\,\mbox{sign}(n+\mu)\sqrt{g(n+\mu)\tanh(h(n+\mu))}.
\end{align}
\setcounter{equation}{\value{saveeqn}}

\noindent The $\mbox{sign}(n+\mu)$'s may be omitted in these expressions, as the same set of spectral elements is obtained.

\item {\bf Collision condition.} The condition \rf{cancollision} is easily written out explicitly, but for our purposes it suffices to plot $\Omega_1(k+n)$ and $\Omega_2(k+n)$, for different values of $n$. This is done in Fig.~\ref{fig:wwcollision}b with $g=1$ and $h=1$. 
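A collision is also easily located numerically. The sketch below (an illustrative check added here, not taken from the original text; the mode pair $n_1=2$, $n_2=0$ with $l_1=1$, $l_2=2$ and the bracket $\mu\in(0.2,0.3)$ were found by a preliminary scan) bisects $\Omega_1(2+\mu)=\Omega_2(\mu)$ for $g=1$, $h=1$, and confirms that the resulting collision is away from the origin, with $\omega$-values of opposite sign.

```python
import math

g, h = 1.0, 1.0
# omega_1(k) = sign(k) * sqrt(g k tanh(k h)); note g k tanh(k h) >= 0
w = lambda k: math.copysign(math.sqrt(g * abs(k) * math.tanh(abs(k) * h)), k)
c = w(1.0) / 1.0                       # bifurcation speed omega_1(1)/1
Omega1 = lambda k: w(k) - c * k        # Doppler-shifted branch 1
Omega2 = lambda k: -w(k) - c * k       # Doppler-shifted branch 2

# nonzero collision lambda_{2,1}^{(mu)} = lambda_{0,2}^{(mu)}
f = lambda mu: Omega1(2 + mu) - Omega2(mu)
a, b = 0.2, 0.3
assert f(a) < 0 < f(b)                 # sign change: a root lies in (a, b)
for _ in range(60):                    # bisection
    m = 0.5 * (a + b)
    a, b = (m, b) if f(m) < 0 else (a, m)
mu = 0.5 * (a + b)
assert abs(f(mu)) < 1e-12              # the collision condition is met
assert abs(Omega1(2 + mu)) > 0.1       # the collision is away from the origin
assert w(2 + mu) * (-w(mu)) < 0        # omega_{l_1} and omega_{l_2} have opposite sign
```

The final assertion anticipates the Krein signature condition \rf{sym1ww}: the two colliding modes come from different branches of the dispersion relation.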
Although only the first collision is visible in the figure (all intersection points are horizontal integer shifts of each other and correspond to the same value of $\mu$ and $\lambda_{n,j}^{(\mu)}$), it is clear from the curves shown that many collisions occur. The figure is qualitatively the same for all finite values of depth.

\begin{figure}[tb]
\begin{tabular}{cc}
\def\svgwidth{3.2in}\input{wwdispersion.pdf_tex} &
\def\svgwidth{2.8in}\input{wwcollision.pdf_tex}\\
(a) & (b)\\
\multicolumn{2}{c}{
\def\svgwidth{3.2in}\input{bubbletrack.pdf_tex}
}\\
\multicolumn{2}{c}{(c)}
\end{tabular}
\caption{\la{fig:wwcollision} (a) The two branches of the dispersion relation for the water wave problem ($g=1$, $h=1$). The line through the origin has slope $\omega_1(1)/1$, representing the right-hand side of \rf{cancollision}. (b) The two families of curves $\Omega_1(k+n)$ (red, solid) and $\Omega_2(k+n)$ (black, dashed), for various (integer) values of $n$, illustrating that many collisions occur away from the origin. (c) The origin of the high-frequency instability closest to the origin as a function of depth $h$.
 }
\end{figure}

\item {\bf Krein signature.} The conditions \rf{sym1} and \rf{sym2} become

\begin{equation}\la{sym1ww}
\omega_{l_1}(n_1+\mu)\omega_{l_2}(n_2+\mu)g^2<0,
\end{equation}

\noindent and

\begin{equation}\la{sym2ww}
\omega_{l_1}(n_1+\mu)\omega_{l_2}(n_2+\mu)\sum_{{j_1}=1}^\infty \alpha_{j_1-1}h^{2j_1-1} (n_1+\mu)^{2j_1}\sum_{{j_2}=1}^\infty \alpha_{j_2-1}h^{2j_2-1} (n_2+\mu)^{2j_2}<0,
\end{equation}

\noindent respectively. 
Here the coefficients $\\alpha_j$ are related to the Bernoulli numbers \\cite{dlmf}, as they are defined by the Taylor series\n\n\\begin{equation}\\la{tanh}\n\\tanh(z)=\\sum_{j=0}^\\infty \\alpha_j z^{2j+1}, ~~~|z|<\\pi\/2.\n\\end{equation}\n\n\\noindent Because of the finite radius of convergence of this series, \\rf{sym2ww} is only valid for small values of the wave numbers $n_1+\\mu$ and $n_2+\\mu$, but it is possible to phrase all results in terms of $\\tanh$ directly, avoiding this difficulty. For instance, using \\rf{tanh}, \\rf{sym2ww} may be rewritten as\n\n\\begin{align*}\n&&\\omega_{l_1}(n_1+\\mu)\\omega_{l_2}(n_2+\\mu)\\frac{\\omega_{\\alpha}^2(n_1+\\mu)}{g}\n\\frac{\\omega_{\\beta}^2(n_2+\\mu)}{g}&<0\\\\\n&\\Rightarrow& \\omega_{l_1}(n_1+\\mu)\\omega_{l_2}(n_2+\\mu)&<0,\n\\end{align*}\n\n\\noindent in agreement with \\rf{sym1ww}. The indices $\\alpha$, $\\beta\\in \\{0,1\\}$ are irrelevant since $\\omega_\\alpha$ and $\\omega_\\beta$ both appear squared. This serves to illustrate that for specific examples one of the two criteria \\rf{cankrein1} and \\rf{cankrein2} (or \\rf{sym1} and \\rf{sym2} for even systems) may be significantly easier to evaluate, although they are equivalent.\n\nThus all collision points are potential origins of high-frequency instabilities. It appears from the numerical results in \\cite{deconinckoliveras1} that the bubble of non-imaginary eigenvalues closest to the origin contains the high-frequency eigenvalues with the largest real part. Thus for waves in shallow water $kh<1.363$ (no Benjamin-Feir instability) \\cite{benjaminperiodic, whitham1, zakharovostrovsky}, these are the dominant instabilities. For waves in deep water ($kh>1.363$) the Benjamin-Feir instability typically dominates, although there is a range of depth in deep water where the high-frequency instabilities have a larger growth rate, see \\cite{deconinckoliveras1}. 
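The first few coefficients in \rf{tanh} are $\alpha_0=1$, $\alpha_1=-1/3$, $\alpha_2=2/15$, $\alpha_3=-17/315$, $\alpha_4=62/2835$. A short numerical sketch (added here purely for illustration, with these values hard-coded) confirms the expansion and the identity used above, namely that the sums appearing in \rf{sym2ww} reproduce $k\tanh(kh)=\omega^2(k)/g$ for small wave numbers.

```python
import math

# Taylor coefficients of tanh(z) = sum_j alpha_j z^(2j+1), |z| < pi/2
alpha = [1.0, -1/3, 2/15, -17/315, 62/2835]

z = 0.3
partial = sum(aj * z ** (2 * j + 1) for j, aj in enumerate(alpha))
assert abs(partial - math.tanh(z)) < 1e-6   # truncation error ~ alpha_5 z^11

# the truncated sum in (sym2ww) approximates k*tanh(kh) = omega^2(k)/g
g, h, k = 1.0, 1.0, 0.4
s = sum(alpha[j1 - 1] * h ** (2 * j1 - 1) * k ** (2 * j1)
        for j1 in range(1, len(alpha) + 1))
assert abs(s - k * math.tanh(k * h)) < 1e-5
```

For wave numbers beyond the radius of convergence, the $\tanh$ form of the condition must be used directly, as discussed above.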
The dependence on depth $h$ of the location on the imaginary axis from which the high-frequency bubble closest to the origin bifurcates is shown in Fig.~\ref{fig:wwcollision}(c), with $g=1$. As $h\rightarrow \infty$, the imaginary part of $\lambda\rightarrow 3/4$. This asymptote is drawn in Fig.~\ref{fig:wwcollision}(c) for reference. This figure demonstrates that for all positive values of the depth $h$, the instabilities considered are not modulational, as they do not bifurcate away from the origin as the amplitude increases.

It was remarked in Example~\ref{ex:whitham} that no collisions involving a single branch of the dispersion relation are possible, due to the concavity of the dispersion relation. As a consequence, all collisions away from the origin observed in Fig.~\ref{fig:wwcollision}b involve both branches of the dispersion relation, {\em i.e.}, they involve a solid curve and a dashed curve. This is easily seen from Fig.~\ref{fig:wwcollision}a: a chord with abscissae of the endpoints that are integers apart is easily found by sliding a chord parallel to $((0,0),(1,\omega_1(1)))$ away from it until the integer condition is met. This implies that $\omega_{l_1}(n_1+\mu)$ and $\omega_{l_2}(n_2+\mu)$ in the collision condition \rf{cancollision} have opposite sign and \rf{sym1ww} is always satisfied. Thus colliding eigenvalues of zero-amplitude water wave solutions {\em always} have opposite Krein signature. As a consequence, the necessary condition for the presence of high-frequency instabilities is met. In fact, it was observed in \cite{deconinckoliveras1} that all colliding eigenvalues give rise to bubbles of instabilities as the amplitude is increased.

Our general framework easily recovers the results of MacKay \& Saffman \cite{mackaysaffman}. 
There the set-up is for arbitrary amplitudes of the traveling wave solutions, but the results are only truly practical for the zero-amplitude case.

\end{enumerate}

{\bf Remark.} It follows from these considerations that the high-frequency instabilities present in the water wave problem are a consequence of counter-propagating waves, as no such instabilities are present in the Whitham equation \rf{whitham}. Although it is often stated that the value of the Whitham equation lies in the fact that it has the same dispersion relation as the water wave problem (see for instance \cite{whitham}), this is in fact not the case, as it contains only one branch of the dispersion relation. Thus the equation does not allow for the interaction of counter-propagating modes, and as such misses out on much of the important dynamics of the Euler equations.

\section{A Boussinesq-Whitham equation}

The goal of this section is the introduction of a model equation that has the same dispersion relation as the Euler equations (\ref{eulera}-d), at the level of heuristics that led Whitham to the model equation \rf{whitham}. In other words, we propose a bidirectional Whitham equation, so as to capture both branches of the water wave dispersion relation. We refer to this equation as the Boussinesq-Whitham (BW) equation. It is given by

\begin{equation}\la{bw}
q_{tt}=N(q)+\partial_x^2\left(\alpha q^2+\int_{-\infty}^\infty K(x-y)q(y)dy
\right),
\end{equation}

\noindent where

\begin{equation}
K(x)=\frac{1}{2\pi}\int_{-\infty}^\infty c^2(k)e^{ikx}dk,
\end{equation}

\noindent and $c^2(k)=g\tanh(kh)/k$, $\alpha>0$ for the water wave problem without surface tension. In \rf{bw}, $N(q)$ denotes the nonlinear terms, which are ignored in the remainder of this section. Since our methods focus on the analysis of zero-amplitude solutions, the sign of $\alpha$ is not relevant in what follows. 
This equation is one of many that may stake a claim to the name ``Boussinesq-Whitham equation''. Equation \rf{bw} is a ``Whithamized'' version of the standard Bad Boussinesq equation and it may be anticipated that it captures at least the small-amplitude instabilities of the water wave problem in shallow water. It should be remarked that the Bad Boussinesq equation is ill posed as an initial-value problem \cite{mckean1}, but one may hope that the inclusion of the entire water-wave dispersion relation overcomes the unbounded growth that is present due to the polynomial truncation. We return to this at the end of this section.

Before applying our method to examine the potential presence of high-frequency instabilities of small-amplitude solutions of the BW equation, we need to present its Hamiltonian structure. Further, since \rf{bw} is defined as an equation on the whole line, a periodic analogue is required, as in Section~\ref{sec:whitham}.

It is easily verified that \rf{bw} is Hamiltonian with (non-canonical) Poisson operator \cite{mckean1}

\begin{equation}\la{bwj}
J=\left(
\begin{array}{cc}
0 & \partial_x\\
\partial_x & 0
\end{array}
\right),
\end{equation}

\noindent and Hamiltonian

\begin{equation}\la{bwh}
H=\int_{-\infty}^{\infty} \left(\frac{1}{2} p^2+\frac{\alpha}{3} q^3 \right)dx+\frac{1}{2}\int_{-\infty}^{\infty} dx \int_{-\infty}^{\infty} dy\, K(x-y) q(x) q(y).
\end{equation}

\noindent Indeed, \rf{bw} can be rewritten in the form \rf{ham} with $u=(q,p)^T$.

To define a periodic version of \rf{bw}, let

\begin{equation}
K(x)=\frac{1}{L}\sum_{j=-\infty}^\infty c^2(k_j) e^{i k_j x},
\end{equation}

\noindent where $k_j=2\pi j/L$, $j\in \mathbb{Z}$. The periodic BW equation is obtained from \rf{ham}, using \rf{bwj} and \rf{bwh}, but with all $\pm$ infinities in the integration bounds replaced by $\pm L/2$, respectively. 
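The periodic kernel can be checked numerically: convolution with $K$ should act on each Fourier mode $e^{ik_mx}$ as multiplication by $c^2(k_m)$. The following sketch (an illustration with hypothetical truncation parameters, not part of the original text) verifies this for $L=2\pi$, $g=h=1$.

```python
import numpy as np

g, hdepth, L = 1.0, 1.0, 2 * np.pi

def c2(k):
    """Squared phase speed g*tanh(k h)/k, with the removable k = 0 limit g*h."""
    k = np.asarray(k, dtype=float)
    safe = np.where(k == 0, 1.0, k)
    return np.where(k == 0, g * hdepth, g * np.tanh(k * hdepth) / safe)

J, N = 32, 256                          # kernel truncation and grid size
x = np.arange(N) * L / N                # equispaced grid on [0, L)
js = np.arange(-J, J + 1)
# truncated Fourier sum K(x) = (1/L) sum_j c^2(k_j) exp(i k_j x), k_j = 2 pi j / L
K = (c2(2 * np.pi * js / L) * np.exp(1j * np.outer(x, 2 * np.pi * js / L))).sum(axis=1) / L

m = 3                                   # test mode; k_m = 2 pi m / L = m here
q = np.exp(1j * (2 * np.pi * m / L) * x)
# Riemann sum for the circular convolution int_0^L K(x - y) q(y) dy
conv = np.array([(K[(i - np.arange(N)) % N] * q).sum() * L / N for i in range(N)])
assert np.allclose(conv, c2(2 * np.pi * m / L) * q, atol=1e-10)
```

The equispaced Riemann sum is exact here because the integrand is a trigonometric polynomial of bandwidth well below the grid size.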
Since \\rf{bw} has a Poisson operator \\rf{bwj} that is different from those used in Sections~\\ref{sec:scalar} and~\\ref{sec:vector}, minor modifications to the use of the method are necessary.\n\n\\begin{enumerate}\n\n\\item {\\bf Quadratic Hamiltonian.} Ignoring the contributions of the nonlinear term, the quadratic Hamiltonian in a frame of reference moving with speed $V$ is given by\n\n\\begin{equation}\\la{bwhc}\nH_V^0=\\int_{0}^{2\\pi}\\left(V q p+\\frac{1}{2}p^2 \\right)dx+\\frac{1}{2}\\int_{0}^{2\\pi} dx \\int_{0}^{2\\pi} dy\\, K(x-y) q(x) q(y),\n\\end{equation}\n\n\\noindent where we have fixed the period of the solutions to be $L=2\\pi$. The inclusion of the first term in \\rf{bwhc} is one place where the effect of the different form for $J$ is felt, as its functional form is a direct consequence of the form of \\rf{bwj}.\n\n\\item {\\bf Dispersion Relation.} A direct calculation confirms that\n\n\\begin{equation}\n\\omega^2=gk \\tanh(kh),\n\\end{equation}\n\n\\noindent which is, by construction, identical to the dispersion relation for the full water wave problem \\rf{wwdisp}. This gives rise to two branches of the dispersion relation \\rf{wwbranches}, corresponding to right- and left-going waves.\n\n\\item {\\bf Bifurcation Branches.} Bifurcation branches for $2\\pi$-periodic solutions start at $(V_{1,2},0)$, where the phase speeds $V_{1,2}$ are given by $V_{1,2}=\\pm \\sqrt{g\\tanh(h)}$.\n\n\\item {\\bf Stability Spectrum.} The stability spectrum elements are, again by construction, identical to those for the water wave problem, given in (\\ref{wwspectrum}-b).\n\n\\item {\\bf Collision Condition.} Given that the spectral elements are identical to those for the water wave problem, the collision condition is identical too. It is displayed in Fig.~\\ref{fig:wwcollision}(a-b). Thus, collisions away from the origin occur. 
It remains to be seen whether these can result in the birth of high-frequency instabilities.\n\n\\item {\\bf Krein Signature.} As for the canonical case of Section~\\ref{sec:vector}, we use \\rf{kreinhessian}. Thus we calculate the Hessian ${\\cal L}_V$ of the Hamiltonian $H_V^0$.\n\n Let\n\n \\begin{equation}\n c^2(k)=\\sum_{j=0}^{\\infty} \\gamma_j k^{2j},\n \\end{equation}\n\n\\noindent where $\\gamma_j=gh^{2j+1}a_j$, with the coefficients $a_j$ defined in \\rf{tanh}. A direct calculation shows that the Hamiltonian \\rf{bwhc} may be rewritten as\n\n\\begin{equation}\nH_V^0=\\frac{1}{2}\\int_0^{2\\pi} \\left(p^2+2Vqp+\\sum_{j=0}^\\infty \\gamma_j q_{jx}^2\\right)dx.\n\\end{equation}\n\n\\noindent Using this form of the Hamiltonian, the calculation of the Hessian is straightforward, leading to\n\n\\begin{equation}\n{\\cal L}_V=\n\\left(\n\\begin{array}{cc}\n\\displaystyle \\sum_{j=0}^\\infty \\gamma_j (-1)^j \\partial_x^{2j} & V\\\\\\\\\nV & 1\n\\end{array}\n\\right).\n\\end{equation}\n\nNext, we compute the eigenvectors $v=(q,p)^T$. We have\n\n\\begin{equation}\n\\left(\n\\begin{array}{c}\nq\\\\\\\\p\n\\end{array}\n\\right)=e^{i \\lambda t}\n\\left(\n\\begin{array}{c}\nQ(x)\\\\\\\\P(x)\n\\end{array}\n\\right),\n\\end{equation}\n\n\\noindent where $(Q,P)^T$ satisfies\n\n\\begin{equation}\\la{bwevp}\n\\lambda \\left(\n\\begin{array}{c}\nQ\\\\\\\\P\n\\end{array}\n\\right)= \\left(\n\\begin{array}{cc}\n0 & \\partial_x\\\\\\\\\n\\partial_x & 0\n\\end{array}\n\\right) {\\cal L}_V \\left(\n\\begin{array}{c}\nQ\\\\\\\\P\n\\end{array}\n\\right).\n\\end{equation}\n\nThis is a second place where the Poisson operator $J$ plays a crucial role, as it affects the form of $v=(q,p)^T$ and thus the expression for the signature. 
One easily verifies that\n\n\\begin{equation}\n\\left(\n\\begin{array}{c}\nQ\\\\\\\\P\n\\end{array}\n\\right)=e^{i(n+\\mu)x}\\left(\n\\begin{array}{c}\ni(n+\\mu)\\\\\\\\\\lambda-i(n+\\mu)V\n\\end{array}\n\\right)\n\\end{equation}\n\n\\noindent satisfies \\rf{bwevp}.\n\nWe need to evaluate the sign of\n\n\\begin{align}\\nonumber\n&\\left(\n\\begin{array}{c}\nQ\\\\\\\\P\n\\end{array}\n\\right)^\\dagger\n{\\cal L}_V \\left(\n\\begin{array}{c}\nQ\\\\\\\\P\n\\end{array}\n\\right)\\\\\\\\\\nonumber\n&=e^{-i(n+\\mu)x}\\left(\n\\begin{array}{c}\n-i(n+\\mu) \\\\\\\\ -\\lambda+i(n+\\mu)V\n\\end{array}\n\\right)^T\n\\left(\n\\begin{array}{cc}\n\\displaystyle \\sum_{j=0}^\\infty \\gamma_j (-1)^j \\partial_x^{2j} & V\\\\\\\\\nV & 1\n\\end{array}\n\\right)\\left(\n\\begin{array}{c}\ni(n+\\mu)\\\\\\\\\\lambda-i(n+\\mu)V\n\\end{array}\n\\right)e^{i(n+\\mu)x}\\\\\\\\\\nonumber\n&=\\left(\n\\begin{array}{c}\n-i(n+\\mu) \\\\\\\\ -\\lambda+i(n+\\mu)V\n\\end{array}\n\\right)^T\n\\left(\n\\begin{array}{cc}\n\\displaystyle \\sum_{j=0}^\\infty \\gamma_j (n+\\mu)^{2j} & V\\\\\\\\\nV & 1\n\\end{array}\n\\right)\\left(\n\\begin{array}{c}\ni(n+\\mu)\\\\\\\\\\lambda-i(n+\\mu)V\n\\end{array}\n\\right)\\\\\\\\\\nonumber\n&=\\left(\n\\begin{array}{c}\n-i(n+\\mu)\\\\\\\\ i\\omega(n+\\mu)\n\\end{array}\n\\right)^T\\left(\n\\begin{array}{cc}\nc^2(n+\\mu) & V\\\\\\\\\nV & 1\n\\end{array}\n\\right)\\left(\n\\begin{array}{c}\ni(n+\\mu)\\\\\\\\-i\\omega(n+\\mu)\n\\end{array}\n\\right)\\\\\\\\\n&=2\\omega\\left(\\omega-(n+\\mu)V\\right).\n\\end{align}\n\nLet the signature associated with the first eigenvalue be the sign of $2\\omega_{j_1}(\\omega_{j_1}-(n_1+\\mu)V)$, where $\\omega_{j_1}$ is a function of $n_1+\\mu$. Similarly, for the second eigenvalue, the signature is the sign of $2\\omega_{j_2}(\\omega_{j_2}-(n_2+\\mu)V)$. 
Using the collision condition $\\lambda_{n_1,j_1}^{(\\mu)}=\\lambda_{n_2,j_2}^{(\\mu)}$, the product of these two expressions is $4\\omega_{j_1}\\omega_{j_2}(\\omega_{j_2}-(n_2+\\mu)V)^2$,\nwhich is less than zero since collisions can only occur for eigenvalues associated with opposite branches of the dispersion relation, see Fig.~\\ref{fig:wwcollision}b. It follows that, as in the water wave case, the signatures of colliding eigenvalues are always opposite, and the necessary condition for spectral instability is met. Thus, unlike the Whitham equation \\rf{whitham}, the BW model \\rf{bw} does not exclude the presence of high-frequency instabilities of small-amplitude solutions.\n\n\\end{enumerate}\n\nThe results obtained from the Krein signature calculations are confirmed by numerical results, see Fig.~\\ref{fig:bwstuff}. Panel~(a) shows a numerically computed traveling wave solution of the BW equation \\rf{bw}. This solution is computed using a cosine collocation method with 60 points, as for the Whitham equation, see Fig.~\\ref{fig:wstuff} \\cite{SanfordKodamaCarterKalisch}. For the solution plotted, $c\\approx 1.049815$. The second panel in the first row displays the spectrum computed using Hill's method with 100 modes and 20000 values of the Floquet parameter, using an interpolation of the solution profile. This panel shows the presence of a large number of apparent instabilities, most with small growth rate, in the neighborhood of the imaginary axis. The third panel shows a zoom of the region around the origin, revealing a modulational instability. This is expected, since such an instability is also present for the Whitham equation, see Section~\\ref{ex:whitham}. 
The fourth panel zooms in on the first bubble of instabilities centered on the positive imaginary axis, revealing a shape and location that is consistent with the Krein collision theory presented here.\n\n\\begin{figure}[hbt]\n\\begin{tabular}{cc}\n\\hspace*{-1.0in}\\def\\svgwidth{3.2in}\\input{bwprofile.pdf_tex} &\n\\hspace*{-1.14in}\\def\\svgwidth{4in}\\input{bwstab.pdf_tex}\\\\\\\\\n\\hspace*{-0.7in}(a) & \\hspace*{-1.5in}(b)\\\\\\\\\n\\hspace*{-0.3in}\\def\\svgwidth{4in}\\input{bwbf.pdf_tex} &\n\\hspace*{-1.14in}\\def\\svgwidth{4in}\\input{bwbubble.pdf_tex}\\\\\\\\\n\\hspace*{-0.7in}(c) & \\hspace*{-1.5in}(d)\\\\\\\\\n\\end{tabular}\n\\caption{\\la{fig:bwstuff} (a) A small-amplitude traveling wave solution of the Boussinesq-Whitham equation \\rf{bw} with $c\\approx 1.0498515$. (b) The numerically computed stability spectrum. (c) A blow-up of the stability spectrum in a neighborhood of the origin. (d) A blow-up of the stability spectrum around the first bubble of instabilities on the positive imaginary axis, which appears in (b) as a short horizontal segment immediately above the longest segment appearing horizontal. More detail is given in the main text.\n }\n\\end{figure}\n\n\\vspace*{0.1in}\n\nThe wave form displayed in Fig.~\\ref{fig:bwstuff}(a) does not have zero average, unlike the one shown in Fig.~\\ref{fig:wstuff}, for reasons explained below. Let us examine the stability of a flat-water state $q=a$ (constant), $p=0$. Thinking of the BW equation as an approximation to the water wave problem, where the flat-water state is neutrally stable (spectrum on the imaginary axis) independent of the reference level of the water, the neutral stability of this state is desired in the context of the BW system as well. 
It is easily checked, however, that the BW system is not Galilean invariant; thus the average value of the solution may be important.\n\nLinearizing the system around the flat-water state $(q,p)=(a,0)$ results in a linear system with constant coefficients, whose dispersion relation is given by $\\omega^2=2ak^2+k^2c^2(k)$. This results in two branches of the dispersion relation: $\\omega_{1,2}=\\pm k \\sqrt{c^2(k)+2a}$. It follows that if $a>0$ then both $\\omega_1$ and $\\omega_2$ are real, resulting in neutral stability, since the stability eigenvalue and the frequency $\\omega(k)$ differ by a factor of $i$. On the other hand, if $a<0$, then both $\\omega_1$ and $\\omega_2$ are imaginary for sufficiently large $k$, since $\\lim_{|k|\\rightarrow \\infty}c(k)=0$. As a consequence, the dynamics of the flat-water state with $a<0$ is not only unstable but ill-posed, as the growth rate of the instability $\\rightarrow \\infty$ as $|k|\\rightarrow \\infty$. Thus Whitham-izing the Bad Boussinesq equation and incorporating the full water wave dispersion relation does not remove the ill-posedness of the problem. Rather, it alters it: negative constant solutions experience unbounded growth, while positive constant solutions do not.\n\nIt is observed numerically that this behavior of perturbed constant solutions carries over to nonconstant solutions: solutions of negative average display the same ill-posed behavior described above, with stability spectra that have unbounded real part. In contrast, the spectra of solutions of positive average have bounded real part, as in Fig.~\\ref{fig:bwstuff}. Annoyingly, the ill-posed behavior extends to numerical solutions constructed to have zero average. Presumably this is a consequence of numerical error, as higher-accuracy numerical experiments display narrower spectra whose real part tends to infinity more slowly.\n\nWe may summarize our findings on the BW equation as follows. 
The equation was constructed as a bi-directional Whitham equation so as to truly have the same linear dispersion relation as the water wave problem. Even though the BW has a different Poisson structure than the water wave problem, we find that periodic solutions of the BW are susceptible to high-frequency instabilities originating from Krein collisions at the exact same locations on the imaginary axis as the water wave problem. On the other hand, we have not attempted to quantify whether the resulting growth rates are comparable to those for the water wave problem. Further, the illposedness of the equation for solutions of negative average is a significant strike against its potential use in applications. Nevertheless, it appears possible to design more equations like the BW equation, possessing the exact same dispersion relation as the water wave problem and with it all its high-frequency instabilities, without the equation dynamics being illposed for any important class of solutions.\n\n\\section*{Acknowledgements}\n\nWe wish to thank Richard Kollar for interesting discussions. John Carter helped our understanding of the Whitham equation and Mat Johnson was a part of our initial investigations on the Boussinesq-Whitham equation. This work was supported by the National\nScience Foundation through grant NSF-DMS-1008001 (BD) and in part by the EPSRC under grant EP\/J019569\/1 and NSERC (OT). Any opinions, findings,\nand conclusions or recommendations expressed in this material are\nthose of the authors and do not necessarily reflect the views of the\nfunding sources.\n\n\\bibliographystyle{plain}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nOur work is devoted to low-resource languages: Veps and Karelian. These languages belong to the Finno-Ugric languages of the Uralic language family. 
\nMost Uralic languages still lack full-fledged morphological analyzers and large corpora~\\cite{ref_Pirinen_2019}.\n\nIn order to avoid this trap, the researchers of the Karelian Research Centre are developing the Open corpus of Veps and Karelian languages (VepKar). Our corpus contains morphological dictionaries of the Veps language and the three supradialects of the Karelian language: Karelian Proper, Livvi-Karelian and Ludic Karelian. The developed software (corpus manager)\\footnote{See\n\\url{https:\/\/github.com\/componavt\/dictorpus}}\nand the database, including dictionaries and texts, have open licenses.\n\nAlgorithms for assigning part-of-speech tags and grammatical properties to words, without taking context into account and using manually built dictionaries, are presented in this article (see Section~\\ref{section:pos_algorithm}).\n\nThe proposed evaluation technology (see Section~\\ref{section:experiments}) makes it possible to use all 313 thousand Veps and 66 thousand Karelian words to verify the accuracy of the algorithms (Table~\\ref{tab:nwords}). Only a third of the Karelian words (28\\%) and two-thirds of the Veps words (65\\%) in the corpus texts are automatically linked to dictionary entries with all word forms (Table~\\ref{tab:nwords}). These words were used in the evaluation of the algorithms.\n\n\\begin{table}\n\\caption{Total number of words in the VepKar corpus and dictionary}\\label{tab:nwords}\n\n\\begin{tabular}{c c c r} \\toprule\nLanguage & \\multicolumn{1}{|c|}{\\specialcell{The total number\\\\\\\\\n of tokens in texts,\\\\\\\\\n $10^3$}}\n & \\multicolumn{1}{|c|}{\\specialcell{N tokens linked to\\\\\\\\\n dictionary automatically,\\\\\\\\\n $10^3$}}\n & \\specialcell{N tokens linked to\\\\\\\\lemmas having\\\\\\\\\n a complete paradigm,\\\\\\\\\n $10^3$}\\\\\\\\\n\\midrule\nVeps & 488 & 400 (82\\%) & 313 (65\\%)\\\\\\\\\n\\cmidrule{1-1}\n\\specialcell{Karelian\\\\\\\\Proper} \n & 245 & 111 (45\\%) & 69 
(28\\%)\\\\\\\\\n\\bottomrule\n\\end{tabular}\n\\end{table}\n\n\n\nLet us describe several works devoted to the development of morphological analyzers for the Veps and Karelian languages.\n\\begin{itemize}\n \\item The Giellatekno language research group works mainly on low-resource languages; the project covers about 50 languages~\\cite{ref_Moshagen_2014}. Our project has something in common with the work of Giellatekno in that (1) we work with low-resource languages, and (2) we develop software and data with open licenses.\n \n A key role in the Giellatekno infrastructure is given to formal (grammar-based) approaches in language technologies. They work with morphologically rich languages. Finite-state transducers (FSTs) are used to analyse and generate the word forms~\\cite{ref_Moshagen_2014}.\n \n \\item UralicNLP~\\cite{ref_Hamalainen_2019} is a library for processing texts and words in the Uralic languages.\n This Python library provides an interface to such Giellatekno tools as FSTs for processing morphology and constraint grammar for syntax. \n The UralicNLP library lemmatizes words in 30 Finno-Ugric languages and dialects, including the Livvi dialect of the Karelian language (\\textit{olo} is its language code).\n\\end{itemize}\n\n\n\\section{Data organization and text tagging in the VepKar corpus}\n\nAutomatic text tagging is an important area of research in corpus linguistics. 
It is required if our corpus is to be a useful resource.\n\nThe corpus manager handles the dictionary and the corpus of texts (Fig.~\\ref{fig:corpus:manager:tagging}). The texts are segmented into sentences, and the sentences are segmented into words (tokens). The dictionary includes lemmas with related meanings, word forms, and \\textbf{sets} of \\textbf{gram}matical features (in short -- \\textbf{gramsets}).\n\n\\begin{figure}\n\\includegraphics[width=0.6\\textwidth]{pix\/vepkar_structure_v7-b-en.png}\n\\caption{Data organization and text tagging in the VepKar corpus.\\newline Total values (e.g. number of words, texts) are calculated for all project languages.} \\label{fig:corpus:manager:tagging}\n\\end{figure}\n\n\nText tokens are automatically searched for in the dictionary of lemmas and word forms; this is the first stage (\\RNum{1}) of text tagging, and it is not shown in Fig.~\\ref{fig:corpus:manager:tagging}.\n\\begin{enumerate}\n\\item \\textbf{Semantic tagging}. \nFor the word forms found in the dictionary, the lemmas linked with them are selected (\\RNum{2} in Fig.~\\ref{fig:corpus:manager:tagging}), then all the meanings of the lemmas are collected (\\RNum{3}) and semantic relationships are established between the tokens and the meanings of the lemmas (marked ``not verified'') (\\RNum{4}). 
\n\nThe task of an expert linguist is to check these links and confirm their correctness, to choose the correct link from several possible ones, or to manually add a new word form, lemma or meaning.\n\n\n\\begin{tikzpicture}\n\n\\node (token) {tokens (words)};\n\\node (wf) [right=of token] {word forms};\n\\node (lemka) [right=of wf] {lemmas};\n\\node (meaning) [right=of lemka] {meanings};\n\n\\draw[<->] (token.east) -- node[above] {\\RNum{1}} (wf.west);\n\\draw[<->] (lemka.east) -- node[above] {\\RNum{3}} (meaning.west);\n\\draw[<->] (wf.east) -- node[above] {\\RNum{2}} (lemka.west);\n\\path[<->,red] (token.south) edge [bend right=13] node[above] {\\RNum{4} not verified} (meaning.south);\n\\end{tikzpicture}\n\n\nWhen the editor clicks on a token in the text, a drop-down list of lemmas with all their meanings is shown. The editor selects the correct lemma and meaning (Fig.~\\ref{fig:meaning:select}). \n\n\n\n\n\n\\begin{figure}\n \\fbox{\\includegraphics[width=1\\textwidth]{pix\/samarialaine-en-2020_Select-meaning_narrow.png}}\n\\caption[Selecting the correct lemma and meaning for a token]%\n {Vepsian and Russian parallel Bible translation\\textsuperscript{$\\ddagger$} in the corpus.\n When the editor clicks the word ``Opendai'' in the text, a menu pops up. This menu contains a list of meanings collected automatically for this token, namely: the meaning of the noun ``teacher'' (``opendai'' in Veps) and five meanings of the Veps verb ``opeta''. 
\n The noun ``opendai'' and the verb ``opeta'' have the same word form ``opendai''.\n If the editor selects one of the lemma meanings in the menu (clicks the plus sign), then the token and the correct meaning of the lemma will be connected \n ({\\color{red}\\RNum{4} stage is verified}).\n \\newline \n}\n\\line(1,0){82}\\newline\n\\small\\textsuperscript{$\\ddagger$} See full text online at VepKar: \\url{http:\/\/dictorpus.krc.karelia.ru\/en\/corpus\/text\/494}\n \n \\label{fig:meaning:select}\n\\end{figure}\n\n\n\\item \\textbf{Morphological tagging}. \nFor the word forms found in the dictionary, the gramsets linked with them are selected (\\RNum{5}) and morphological links are established (\\RNum{6}) between the tokens and the pairs ``word form -- gramset'' (Fig.~\\ref{fig:corpus:manager:tagging}). The expert's task is to choose the right gramset.\n\n\\begin{tikzpicture}\n\\node (token) {tokens (words)};\n\\node (wf) [right=of token] {word forms};\n\\node (gramset) [right=of wf] {gramsets};\n\n\\draw[<->] (token.east) -- node[above] {\\RNum{1}} (wf.west);\n\\draw[<->] (wf.east) -- node[above] {\\RNum{5}} (gramset.west);\n\\path[<->,red] (token.south) edge [bend right=13] node[above] {\\RNum{6}} (gramset.south);\n\\end{tikzpicture}\n\n\\end{enumerate}\n\n\n\n\\section{Corpus tagging peculiarities}\\label{section:corpus_peculiarities}\nIn this section, we describe why word forms containing white space and analytical forms are not taken into account in the search algorithms described below. An analytical form is a compound form consisting of auxiliary words and a main word.\n\nThe ultimate goal of our work is the morphological markup of the text, previously tokenized into words by white spaces and non-alphabetic characters (for example, brackets, punctuation marks, and numbers). Therefore, analytical forms do not have markup in the texts.\n\nAlthough we store complete paradigms in the dictionary, including analytical forms, such forms are not used in the analysis of the 
text, because each individual word is analyzed in the text, not a group of words.\n\nConsider, for example, the Karelian verb ``pageta'' (to leave, to run away). The dictionary stores not only the negative form of the indicative present first-person singular, ``en pagene'', but also the connegative (a word form used in negative clauses) of the indicative present, ``pagene'', which is involved in the construction of five of the six forms of the indicative present.\nThus, in the text the word 'en' (auxiliary verb 'ei', indicative, first-person singular) and the word 'pagene' (verb 'pageta', connegative of the indicative present) are marked separately.\n\n\n\\section{Part of speech and gramset search by analogy algorithms}\\label{section:pos_algorithm}\n\nThe proposed algorithms operate on data from a morphological dictionary. \nThe algorithms are based on the analogy hypothesis that words with the \\emph{same suffixes} are likely to have the same inflectional models and the same sets of grammatical information (part of speech, number, case, tense, etc.). \nThe \\emph{suffix} here is a final segment of a string of characters.\n\nIf the hypothesis holds, then, whenever the suffixes of new words coincide with the suffixes of dictionary words, the part of speech and other grammatical features of the new words will coincide with those of the dictionary words.\nIt should be noted that \nthe length of the suffixes is unpredictable and can differ for different pairs of words~\\cite[p.~53]{ref_Belonogov_2004}.\n\nThe POSGuess and GramGuess algorithms described below use the concept of a ``suffix'' (Fig.~\\ref{fig:kezaman_substring}), while the GramPseudoGuess algorithm uses the concept of a ``pseudo-ending'' (Fig.~\\ref{fig:huukkua_substring}).\n\n\n\\subsection{The POSGuess algorithm for part of speech tagging with a~suffix}\n\nGiven the set of words $W$, for each word in this set the part of speech is known. 
\nThe algorithm~\\ref{alg:POSGuess} finds a part of speech $\\pos_u$ for a given word~$u$ using this set. \n\n\\begin{algorithm\n \\caption{Part of speech search by a suffix (POSGuess)}\n \\label{alg:POSGuess}\n\\DontPrintSemicolon\n\\SetAlgoLined\n \\KwData{\n $P$ -- a set of part of speech (POS),\\hfill \\break\n $W = \\{ w \\; | \\; \\exists \\pos_w \\in P \\}$ -- a set of words, POS is known for each word,\\hfill \\break\n \n \n $u \\notin W$ -- the word with unknown POS,\\hfill \\break\n $len(u)$ -- the length (in characters) of the string $u$.\n }\n \\KwResult{\\[\n u_z : \n \\begin{cases}\n \n \n %\n len (u_z) \\xrightarrow[ z=2,\\ldots,len(u) ]{} \\max,\\;\n \n \\Comment{\/\/\\,Longest suffix}\\\\\n \\exists w \\in W : w = w_{\\text{prefix}} \\concat u_z\\;\n \\Comment{\/\/\\,Concatenation of strings}\n \\end{cases}\n \\]\n \\begin{align*}\n\\text{Counter} & \\left[ \\pos^k \\right] = c^k, \\; k = \\overline{1,m}, \\; \\text{where :}\\\\\n & c^k \\in \\mathbb{N}, \\; c^1 \\geq c^2 \\geq \\ldots \\geq c^m,\\\\\n & \\exists w^k_i \\in W : \n w^k_i = {w_{\\text{prefix}}}^k_i \\concat u_z \\Rightarrow\n c^k = \\vert \\pos^k_{w^k_i} \\vert, \\\\\n & \n i = \\overline{1,c^k}, \\\\\n & \\forall i : \\pos^k_{w^k_i} = \\pos^k \\in P, \\;\\;\\;\\; \n a \\ne b \\Leftrightarrow \\pos^a \\ne \\pos^b\\\\\n & \n m \\; \\text{-- the number of different POS of found words} \\; w^k_i\n \\end{align*}\n \n }\n \\BlankLine\n \n $z$ = 2 \\Comment{\/\/ The position in the string $u$}\n $z_{found} = \\text{FALSE}$ \\;\n \\BlankLine\n \\While{ $z \\leq len(u)$ and $\\neg z_{found}$ }{\n \\BlankLine\n \\Comment{\/\/ The suffix of the word $u$ from $z$-th character}\n $u_z = \\text{ substr } (u, z) \\;$\n \\BlankLine\n \\ForEach{$w \\in W$}{\n \\Comment{\/\/ If the word $w$ has the suffix $u_z$ (regular expression)\n }\n \\If{$w =\\sim \\textrm{m}\/u_z\\$\/$}{\n $\\text{Counter} \\left[ \\pos_w \\right] ++$\\\\\n $z_{found} = \\text{TRUE}$ \\Comment{\/\/ Only POS of words with this 
$u_z$ suffix will be counted. The next \"while\" loop will break, so the shorter suffix $u_{z+1}$ will be omitted.}\n }\n }\n z = z + 1\n }\n \\BlankLine\n \\Comment{\/\/ Sort the array in descending order, according to the value}\n \\verb|arsort|( $\\text{Counter} \\left[ \\; \\right]$ )\n\\end{algorithm}\n\n\\newpage\n\nIn Algorithm~\\ref{alg:POSGuess} we look in the set $W$ (line~5) for the words that have the same suffix $u_z$ as the unknown word $u$. \nFirst, we search for the longest suffix of $u$ that starts at index $z$. \nThe first substring $u_{z=2}$ starts at the second character (line 1 in Algorithm~\\ref{alg:POSGuess}), since $u_{z=1} = u$ is the whole string \n(Fig.~\\ref{fig:kezaman_substring}). \n\nThen we increment the value of $z$, decreasing the length of the substring $u_z$ in the loop, while the substring $u_z$ has non-zero length, that is, while $z \\leq len(u)$.\nIf there are words in $W$ with the same suffix, then we count the number of similar words for each part of speech and stop the search.\n\n\\hfill \\break\n\n\nFig.~\\ref{fig:kezaman_substring} illustrates the idea of Algorithm~\\ref{alg:POSGuess}: for a new word (``kezaman''), we look for a word form in the dictionary (``raman'') with the same suffix (``aman'').\n\n\nWe begin by searching the dictionary for word forms with the suffix $u_{z=2}$. If no words are found, then we look for $u_{z=3}$, and so on.\nThe longest suffix $u_{z=4}$=``aman'' with $z=4$ is found.\n\nThen we find all words with the suffix $u_{z=4}$ and count \nhow many of these words are nouns, verbs, adjectives and so on.\nThe result is written to the array $Counter[\\,]$. 
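The suffix search just described can be summarized in a short Python sketch. This is an illustration of Algorithm~\\ref{alg:POSGuess} on toy data, with our own simplified data structures; it is not the actual VepKar corpus-manager implementation:

```python
from collections import Counter

def pos_guess(u, dictionary):
    """Guess the POS of the word u from a dict {word: POS} (Algorithm 1 sketch).

    Suffixes of u are tried from the longest proper suffix downwards; as soon
    as some dictionary word ends with the current suffix, the POS tags of all
    such words are counted and returned in descending order of frequency.
    """
    for z in range(1, len(u)):                 # u[z:] is the suffix starting at index z
        suffix = u[z:]
        counter = Counter(pos for w, pos in dictionary.items()
                          if w.endswith(suffix))
        if counter:                            # longest matching suffix found: stop
            return counter.most_common()
    return []                                  # no suffix matched at all

# Toy dictionary: the noun "raman" shares the suffix "aman" with "kezaman".
print(pos_guess("kezaman", {"raman": "noun", "oman": "pronoun"}))  # [('noun', 1)]
```

On this toy dictionary the longest suffix of ``kezaman'' shared with a dictionary word is ``aman'', contributed by the noun ``raman'', so the tag \\textit{noun} is returned with count 1.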
In Fig.~\\ref{fig:kezaman_substring} the \\emph{noun} ``raman'' was found, therefore we increment the value of $Counter[noun]$.\n\n\n\\begin{figure}\n\\includegraphics[width=0.7\\textwidth]{pix\/kezaman_v05.png}\n\\caption{Veps nouns in the genitive case ``kezaman'' (``kezama'' means ``melted ground'') and ``raman'' (``rama'' means ``frame''). The word $u$ \nwith an unknown part of speech is ``kezaman''. The word $w$ from the dictionary with the known POS is ``raman''. \nThey share the common suffix $u_z$, which is ``aman''.} \\label{fig:kezaman_substring}\n\\end{figure}\n\n\n\n\n \n \n \n\\subsection{The GramGuess algorithm for gramset tagging with a suffix}\n\nThe GramGuess algorithm is exactly the same as the POSGuess algorithm, \nexcept that it is needed to search a subset of gramsets instead of parts of speech.\nThat is in the set $W$ the gramset is known for each word. \nThe gramset is a set of morphological tags (number, case, tense, etc.).\n\n\n\n\n\\subsection{The GramPseudoGuess algorithm for gramset tagging with a pseudo-ending}\n\nLet us explain the ``pseudo-ending'' used in the algorithm GramPseudoGuess.\n\nAll word forms of one lemma share a common invariant substring. This substring is a \\emph{pseudo-base} of the word (Fig.~\\ref{fig:huukkua_substring}). \nHere the pseudo-base is placed at the start of a word, it suits for the Veps and Karelian languages.\nFor example, in Fig.~\\ref{fig:huukkua_substring} the invariant substring ``huuk'' is the pseudo-base for all word forms of the lemma ``huukkua''. \nThe Karelian verb ``huukkua'' means ``to call out'', ``to holler'', ``to halloo''.\n\n\n\n\n\n\\begin{figure}\n\\includegraphics[width=0.5\\textwidth]{pix\/huukkua_v02.png}\n\\caption{Wordforms of the Karelian verb ``huukkua'' (it means ``to call out'', ``to holler'', ``to halloo''). 
All word forms have the same pseudo-base and different pseudo-endings for different set of grammatical attributes (gramsets).} \\label{fig:huukkua_substring}\n\\end{figure}\n\nGiven the set of words $W$, for each word in this set a gramset and a pseudo-ending are known. \nThe algorithm~\\ref{alg:POSPseudoGuess} finds a gramset $g_u$ for a given word~$u$ using this set.\n\n\\begin{algorithm\n \\caption{Gramset search by a pseudo-ending (GramPseudoGuess)}\n \\label{alg:POSPseudoGuess}\n\\DontPrintSemicolon\n\\SetAlgoLined\n \\KwData{\n $G$ -- a set of gramsets,\\hfill \\break\n $W = \\{ w \\; | \\; \\exists \\,g_w \\in G, \n \\exists \\,\\pend_w : w = w_{\\text{prefix}} \\concat \\pend_w \\}$ \n -- a set of words, where gramset and pseudo-ending (pend) are known for each word, \\hfill \\break\n \n \n $u \\notin W$ -- the word with unknown gramset,\\hfill \\break\n $len(u)$ -- the length (in characters) of the string $u$.\n }\n \\KwResult{\\[\n u_z : \n \\begin{cases}\n \n \n %\n len (u_z) \\xrightarrow[ z=2,\\ldots,len(u) ]{} \\max,\\;\n \n \\Comment{\/\/\\,Longest substring}\\\\\n \\exists w \\in W : \\pend_w = u_z\\;\n \\end{cases}\n \\]\n \\begin{align*}\n\\text{Counter} & \\left[ g^k \\right] = c^k, \\; k = \\overline{1,m}, \\; \\text{where :}\\\\\n & c^k \\in \\mathbb{N}, \\; c^1 \\geq c^2 \\geq \\ldots \\geq c^m,\\\\\n & \\exists w^k_i \\in W : \n \\pend_{w^k_i} = u_z \\Rightarrow\n c^k = \\vert g^k_{w^k_i} \\vert, \\\\\n & \n i = \\overline{1,c^k}, \\\\\n & \\forall i : g^k_{w^k_i} = g^k \\in G, \\;\\;\\;\\; \n a \\ne b \\Leftrightarrow g^a \\ne g^b\\\\\n & \n m \\; \\text{-- the number of different gramsets of found words} \\; w^k_i\n \\end{align*}\n \n }\n \\BlankLine\n \n $z$ = 2 \\Comment{\/\/ The position in the string $u$}\n $z_{found} = \\text{FALSE}$ \\;\n \\BlankLine\n \\While{ $z \\leq len(u)$ and $\\neg z_{found}$ }{\n \\BlankLine\n \\Comment{\/\/ The substring of the word $u$ from $z$-th character}\n $u_z = \\text{ substr } (u, z) \\;$\n 
\\BlankLine\n \\ForEach{$w \\in W$}{\n \\Comment{\/\/ If the word $w$ has the pseudo-ending $u_z$}\n \\If{$\\pend_w == u_z$}{\n $\\text{Counter} \\left[ g_w \\right] ++$\\\\\\\\\n $z_{found} = \\text{TRUE}$ \\Comment{\/\/ Only gramsets of words with the pseudo-ending $u_z$ will be counted. The next \"while\" loop will break, so the shorter $u_{z+1}$ will be omitted.}\n }\n }\n z = z + 1\n }\n \\BlankLine\n \\Comment{\/\/ Sort the array in descending order, according to the value}\n \\verb|arsort|( $\\text{Counter} \\left[ \\; \\right]$ )\n\\end{algorithm}\n\nIn Algorithm~\\ref{alg:POSPseudoGuess} we look in the set $W$ (line~5) for the words that have the same pseudo-ending $u_z$ as the unknown word $u$. \nFirst, we search for the longest substring of $u$ that starts at index $z$. \n\nThen we increment the value of $z$, decreasing the length of the substring $u_z$ in the loop, while the substring $u_z$ has non-zero length, that is, while $z \\leq len(u)$.\nIf there are words in $W$ with the same pseudo-ending, then we count the number of similar words for each gramset and stop the search.\n\n\n\\section{Experiments}\\label{section:experiments}\n\n\\subsection{Data preparation}\nLemmas and word forms from our morphological dictionary were gathered into \\emph{one set}, which serves as the search space of the part-of-speech tagging algorithm.\nThis set contains unique pairs ``word -- part of speech''.\n\nIn order to search for a gramset, we form the set consisting of (1) lemmas without inflected forms (for example, adverbs and prepositions) and (2) inflected forms (for example, of nouns and verbs).\nThis set contains unique pairs ``word -- gramset''. 
For lemmas without inflected forms the gramset is empty.\n\nWe impose constraints on the words in both sets: the strings must consist of more than two characters and must not contain white space.\nThat is, analytical forms and compound phrases have been excluded from the sets (see Section~\\ref{section:corpus_peculiarities}).\n\n\n\\subsection{Part of speech search by a suffix (POSGuess algorithm)}\n\nTo evaluate the quality of the results of the POSGuess search algorithm, the following function $\\text{eval}(\\pos^u)$ is proposed:\n\\begin{equation}\\label{eq:metric-eval}\n\\text{eval} \\left( \n \\begin{array}{@{}l@{\\thinspace}l}\n \\pos^u, \\\\\\\\\n \\text{Counter} & \\left[ \\pos^k \\right] \\rightarrow c^k,\\\\\\\\\n & \\forall k = \\overline{1,m} \\\\\\\\\n \\end{array}\n \\right) = \n \\begin{cases}\n \\multicolumn{2}{l}{\\bluecomment{The array Counter[\\,] \n does not contain the correct\n $\\pos^u$.}}\\\\\\\\\n 0, & \\pos^u \\ne \\pos^k, \\forall k = \\overline{1,m},\\\\\\\\[3mm]\n \\multicolumn{2}{l}{\\bluecomment{\\specialcell{The first several POS in the array can have \\\\\\\\ \n the same maximum frequency $c^1$; \n one of these POS is $\\pos^u$.}}}\\\\\\\\\n 1, & \\pos^u \\in \\{ \\left[ \\pos^1, \\ldots, \\pos^j \\right] : \n c^1=c^2=\\ldots=c^j, j \\leq m \\},\\\\\\\\[4mm]\n \\multicolumn{2}{l}{\\frac{c^k}{ \\sum_{k=1}^{m}c^k }, \\; \n \\exists k : \\pos^k = \\pos^u, \\;\n c^k < c^1\n }\n \\end{cases}\n\\end{equation}\n\nThis function~(\\ref{eq:metric-eval}) evaluates the result of the POSGuess algorithm against the correct part of speech $\\pos^u$.\nThe POSGuess algorithm counts the number of words similar to the word $u$ \nseparately for each part of speech \nand stores the result in the Counter array.\n\nThe Counter array is sorted in descending order, according to the value. 
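This scoring can be sketched in Python as follows (an illustrative rendering of formula~(\\ref{eq:metric-eval}) with our own function and parameter names; the input is the sorted list produced by the search algorithm):

```python
def eval_pos(correct, counted):
    """Score a sorted result [(pos, count), ...] against the correct POS.

    Returns 0 if the correct POS is absent, 1 if it is among the entries tied
    for the maximum count, and its relative frequency c^k / sum(c) otherwise.
    """
    counts = dict(counted)
    if correct not in counts:
        return 0.0
    if counts[correct] == counted[0][1]:       # ties at the top score as correct
        return 1.0
    return counts[correct] / sum(counts.values())

print(eval_pos("noun", [("verb", 3), ("noun", 1)]))  # 0.25
print(eval_pos("noun", [("noun", 2), ("verb", 2)]))  # 1.0
```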
The first element in the array is the part of speech with the maximum number of words similar to the unknown word $u$.\n\n\\num{71 091} ``word -- part of speech'' pairs for the Karelian Proper supradialect and \\num{399 260} ``word -- part of speech'' pairs for the Veps language have been used in the experiments to evaluate algorithms.\n\nDuring the experiments, two Karelian words were found for which there were no suffix matches in the dictionary. They are the word ``cap'' (English: snap; Russian: \\foreign{цап}) and the word ``štob'' (English: in order to; Russian: \\foreign{чтобы}).\nThat is, there were no Karelian words with the endings -p and -b.\nThis could be explained by the fact that these two words migrated from Russian into the Karelian language.\n\n\n\nFigure~\\ref{fig:search_POS} shows the proportion of Veps and Karelian words with correct and wrong part of speech assignment by the POSGuess algorithm. \nValues along the X axis are the values of the function $\\text{eval}(\\pos^u)$, \nsee the formula~(\\ref{eq:metric-eval}). \nThis function for evaluating the part of speech assignment takes the following values:\n\n\\begin{labeling}{eval-formula}\n\\item [0] \\num{4.7}\\% of Vepsian words and 9\\% of Karelian words ($x = 0$ in Fig.~\\ref{fig:search_POS}) were assigned the wrong part of speech. \nThat is, there is no correct part of speech in the result array $Counter[\\,]$ in the \\mbox{POSGuess} algorithm.\nThis is the first line in the formula~(\\ref{eq:metric-eval}).\n\\hfill \\break\n\n\\item [\\num{0.1} -- \\num{0.5}] \n\\num{2.92}\\% of Vepsian words and \\num{4.23}\\% of Karelian words ($x \\in [0.1 ; 0.5]$ in Fig.~\\ref{fig:search_POS}) were assigned partially correct POS tags.\nThat is, the array $Counter[\\,]$ contains the correct part of speech, but it is not at the beginning of the array.
\nThis is the last line in the formula~(\\ref{eq:metric-eval}).\n\\hfill \\break\n\n\\item [1] \n\\num{92.38}\\% of Vepsian words and \\num{86.77}\\% of Karelian words ($x = 1$ in Fig.~\\ref{fig:search_POS}) were assigned the correct part of speech. \nThe array $Counter[\\,]$ contains the correct part of speech at the beginning of the array. \n\\end{labeling}\n\n\\begin{figure}[H]\n\\includegraphics[width=1\\textwidth]{pix\/pos_search_all_langs.png}\n\\caption{\nThe proportion of Vepsian (red curve) and Karelian (blue curve) words with correct ($x=1$) and wrong ($x=0$) part of speech assignment by the POSGuess algorithm \nwith the formula~(\\ref{eq:metric-eval}).\n} \\label{fig:search_POS}\n\\end{figure}\n\n\nFigure~\\ref{fig:search_POS} shows the evaluation of the results of the POSGuess algorithm for all parts of speech together.\nTable~\\ref{tab:POS:quantity:Veps} (Veps) and Table~\\ref{tab:POS:quantity:krl} (Karelian) show the evaluation of the same results of the POSGuess algorithm, \nbut they are presented for each part of speech separately. \n\n\n\n\\begin{table}\n\\caption{Number of Vepsian words of different parts of speech \nused in the experiment. 
\nThe evaluation of results found by POSGuess algorithm by the formula~(\\ref{eq:metric-eval}) \nand fraction of results in percent, where \nthe column \\textit{0} means the fraction of words with incorrectly found POS, \\textit{1} -- the fraction of words with correct POS in the top \nof the list created by the algorithm.}\\label{tab:POS:quantity:Veps}\n\n\\setlength{\\tabcolsep}{8pt}\n\n\\begin{tabular}{ l r r r r r r r >{\\bfseries}r } \\toprule\n \n \\multicolumn{2}{c}{\\large Veps} &\n \\multicolumn{7}{c}{\\specialcell{Fraction \n of not guessed (column 0),\\\\\n partly guessed (0.1--0.5) and guessed (1) POS, \\%}} \\\\ \\cmidrule(r){3-9}\n \n POS & Words & 0 & 0.1 & 0.2 & 0.3 & 0.4 & 0.5 & \\multicolumn{1}{c}{1}\\\\ \\midrule\nVerb & \\num{93 047} & 2.12 & 0.52 & 0.55 & 0.47 & 0.36 & 0.01 & 95.97\\\\\nNoun & \\num{240 513} & 2.88 & 0.3 & 0.67 & 0.6 & 0.42 & 0.24 & 94.89\\\\\nAdjective & \\num{61 845} & 12.45 & 1.62 & 1.44 & 1.53 & 1.58 & 0.51 & 80.87\\\\\nPronoun & \\num{1 244} & 46.54 & 8.12 & 0.56 & 0.64 & 0 & 0 & 44.13\\\\\nNumeral & \\num{1 200} & 44 & 6.25 & 2.33 & 0.67 & 0.33 & 0 & 46.42\\\\\nAdverb & 650 & 64.92 & 3.08 & 2.46 & 1.23 & 0.46 & 0 & 27.85\\\\\n\\bottomrule\n\\end{tabular}\n\\end{table}\n\n\n\\begin{table}\n\\caption{Number of Karelian words of different parts of speech used in the experiment.}\\label{tab:POS:quantity:krl}\n\n\\setlength{\\tabcolsep}{8pt}\n\n\\begin{tabular}{ l r r r r r r r >{\\bfseries}r } \\toprule\n \n \\multicolumn{2}{c}{\\large Karelian} &\n \\multicolumn{7}{c}{\\specialcell{Fraction \n of not guessed (column 0),\\\\\n partly guessed (0.1--0.5) and guessed (1) POS, \\%}} \\\\ \\cmidrule(r){3-9}\n \n POS & Words & 0 & 0.1 & 0.2 & 0.3 & 0.4 & 0.5 & \\multicolumn{1}{c}{1}\\\\ \\midrule\nVerb & \\num{26 033} & 3.26 & 0.5 & 0.74 & 0.6 & 0.23 & 0.01 & 94.67\\\\\nNoun & \\num{36 908} & 5.47 & 0.38 & 1.13 & 1.08 & 0.52 & 0.04 & 91.38\\\\\nAdjective & \\num{6 596} & 35.81 & 6.66 & 4.15 & 4.56 & 2.73 & 0.38 & 45.71\\\\\nPronoun 
& 610 & 81.64 & 2.13 & 0.66 & 3.11 & 2.3 & 0 & 10.16\\\\\nNumeral & 582 & 65.81 & 1.72 & 1.03 & 0.17 & 1.03 & 0 & 30.24\\\\\nAdverb & 235 & 68.51 & 3.4 & 2.98 & 2.13 & 0 & 0 & 22.98\\\\\n\\bottomrule\n\\end{tabular}\n\\end{table}\n\n\\begin{figure}\n\\includegraphics[width=1\\textwidth]{pix\/POSdistrib.png}\n\\caption{Number of Vepsian and Karelian words of different parts of speech used in the experiment.} \\label{fig:POS:distribution}\n\\end{figure}\n\n\\subsection{Gramset search by a suffix (GramGuess algorithm) and by a pseudo-ending (GramPseudoGuess algorithm)}\n\n\\num{73 395} ``word -- gramset'' pairs for the Karelian Proper supradialect and \\num{452 790} ``word -- gramset'' pairs for the Veps language have been used in the experiments to evaluate GramGuess and GramPseudoGuess algorithms.\n\nA list of gramsets was searched for each word. The list was ordered by the number of similar words having the same gramset.\n\nFor the evaluation of the quality of results of the searching algorithms the following function $\\text{eval}(g^u)$ has been proposed:\n\\begin{equation}\\label{eq:metric-eval-gram}\n\\text{eval} \\left( \n \\begin{array}{@{}l@{\\thinspace}l}\n g^u, \\\\\n \\text{Counter} & \\left[ g^k \\right] \\rightarrow c^k,\\\\\n & \\forall k = \\overline{1,m} \\\\\n \\end{array}\n \\right) = \n \\begin{cases}\n \\multicolumn{2}{l}{\\bluecomment{The array Counter \n do not contain the correct gramset \n $g^u$.}}\\\\\n 0, & g^u \\ne g^k, \\forall k = \\overline{1,m},\\\\[3mm]\n \\multicolumn{2}{l}{\\bluecomment{\\specialcell{First several gramsets in the array can have \\\\ \n the same maximum frequency $c^1$, \n one of these gramsets is $g^u$.}}}\\\\\n 1, & g^u \\in \\{ \\left[ g^1, \\ldots, g^j \\right] : \n c^1=c^2=\\ldots=c^j, j \\leq m \\},\\\\[4mm]\n \\multicolumn{2}{l}{\\frac{c^k}{ \\sum_{k=1}^{m}c^k }, \\; \n \\exists \\, k : g^k = g^u, \\;\n c^k < c^1\n }\n \\end{cases}\n\\end{equation}\n\nThis function~(\\ref{eq:metric-eval-gram}) evaluates the 
results of the GramGuess and GramPseudoGuess algorithms against the correct gramset $g^u$.\n\n\\begin{table}\n\\caption{Evaluations of results of gramsets search for Vepsian and Karelian by GramGuess and GramPseudoGuess algorithms.\n}\\label{tab:Gram:quantity}\n\n\\setlength{\\tabcolsep}{8pt}\n\n\\begin{tabular}{ c c c c c} \\toprule\n & \\multicolumn{2}{c}{GramGuess}& \\multicolumn{2}{c}{GramPseudoGuess}\\\\ \\cmidrule(r){2-5}\n \n Evaluation & Veps & Karelian & Veps & Karelian\\\\ \\midrule\n0 & 2.53 & 5.72 & 7.9 & 9.23\\\\\n0.1 & 0.53 & 0.83 & 1.04 & 1.57\\\\\n0.2 & 0.71 & 1.16 & 1.24 & 1.37\\\\\n0.3 & 0.64 & 0.89 & 2.68 & 1.36\\\\\n0.4 & 0.2 &\t0.56 & 0.14 & 0.68\\\\\n0.5 & 0.11 & 0.09 & 0.83 & 0.43\\\\\n1 &\t\\bf{95.29} & \\bf{90.74} & \\bf{86.17} & \\bf{85.36}\\\\\n\\bottomrule\n\\end{tabular}\n\\end{table}\n\nTable~\\ref{tab:Gram:quantity} shows that the GramGuess algorithm gives better results than the GramPseudoGuess algorithm, namely: \n\\begin{labeling}{eval-formula}\n\\item [Karelian] \n90.7\\% of Karelian words were assigned a correct gramset by the GramGuess algorithm versus 85.4\\% by the GramPseudoGuess algorithm;\n\\hfill \\break\n\n\\item [Veps] \n95.3\\% of Vepsian words were assigned a correct gramset by the GramGuess algorithm versus 86.2\\% by the GramPseudoGuess algorithm.\n\\end{labeling}\n\nThis may be explained by the fact that suffixes are longer than pseudo-endings.
\nIn addition, the GramPseudoGuess algorithm is not suitable for parts of speech without inflected forms.\n\n\n\n\n\n\n\\section{Morphological analysis results}\n\n\nIn order to analyze the algorithm errors, the results of the part-of-speech algorithm POSGuess were visualized using the Graphviz program.\nPart-of-speech error transition graphs were built for the Veps language (Fig.~\\ref{fig:pos-error-graph-vep}) \nand the Karelian Proper supradialect (Fig.~\\ref{fig:pos-error-graph-krl}).\n\nLet us explain how these graphs were built.\nFor example, a thick grey vertical arrow connects adjective and noun (Fig.~\\ref{fig:pos-error-graph-krl}), and this arrow has labels of 21.6\\%, 1424 and 3.9\\%.\nThis means that the POSGuess algorithm has erroneously identified \\num{1424} Karelian adjectives as nouns.\nThis accounted for 21.6\\% \nof all Karelian adjectives \nand 3.9\\% of nouns.\nThis can be explained by the fact that \nthe same lemma (in Veps and Karelian) can be both a noun and an adjective. \nNouns and adjectives are inflected according to the same paradigm.\n\n\nThe experiment showed that there are significantly more such lemmas (noun-adjective) for the Karelian language than for the Veps language (21.6\\% versus 9.8\\% in Fig.~\\ref{fig:pos-error-graph}), although in absolute numbers Veps exceeds Karelian: 6061 versus 1424 errors of this kind.
\nThis is because the Veps dictionary is larger in the VepKar corpus.\n\n\\begin{figure}[H]\n\\centering\n\\begin{subfigure}{.5\\textwidth}\n \\centering\n \\includegraphics[width=0.95\\textwidth]{pix\/247_veps_only_max0_pos-1_0.jpg}\n \\caption{Veps language}\n \\label{fig:pos-error-graph-vep}\n\\end{subfigure}%\n\\begin{subfigure}{.5\\textwidth}\n \\centering\n\\includegraphics[width=1.0\\textwidth]{pix\/456_pos-4_20_number-krl.jpg} \\caption{Karelian Proper supradialect}\n \\label{fig:pos-error-graph-krl}\n\\end{subfigure}\n\\caption{Part-of-speech error transition graph, which reflects the results of the POSGuess algorithm.}\n\\label{fig:pos-error-graph}\n\\end{figure}\n\n\n\n\n\\section{Conclusion}\n\nThis research is devoted to the low-resource Veps and Karelian languages. \n\nAlgorithms for assigning part of speech tags and grammatical properties to words are presented in this article. \nThese algorithms use our morphological dictionaries, where the lemma, part of speech and a set of grammatical features (gramset) are known for each word form. \n\nThe algorithms are based on the analogy hypothesis that words with the same suffixes are likely to have the same inflectional models, the same part of speech and gramset.\n\nThe accuracy of these algorithms was evaluated and compared. 313 thousand Vepsian and 66 thousand Karelian words were used to verify the accuracy of these algorithms.\nSpecial functions were designed to assess the quality of the results of the developed algorithms.\n\n\n\n71,091 ``word -- part of speech'' pairs for the Karelian Proper supradialect and 399,260 ``word -- part of speech'' pairs for the Veps language have been used in the experiments to evaluate algorithms.\n86.77\\% of Karelian words \nand 92.38\\% of Vepsian words \nwere assigned a correct part of speech.
\n\n73,395 ``word -- gramset'' pairs for the Karelian Proper supradialect and 452,790 ``word -- gramset'' pairs for the Veps language have been used in the experiments to evaluate algorithms.\n90.7\\% of Karelian words \nand \n95.3\\% of Vepsian words \nwere assigned a correct gramset by our algorithm.\n\n\nIf only one correct answer is needed, then the three developed algorithms are not very useful.\nBut in our case, the task is to get an ordered list of the parts of speech and gramsets for a word and to offer this list to an expert.\nThen the expert selects the correct part of speech and gramset from the list and assigns them to the word. \nThis is semi-automatic tagging of the texts.\nThus, these algorithms are useful for our corpus.\n\n\\bibliographystyle{splncs04}\n\n\\section{Introduction}\nTransmitting a short simple pulse in a radar system requires a high-power transmitter, which is expensive and easy to intercept. If the pulse width increases, the range resolution required to detect several targets is severely degraded; therefore, the best solution is to modulate the pulse \\cite{1,2}. By performing modulation, also known as pulse compression, the bandwidth and signal-to-noise ratio (SNR) are increased and the range resolution improves \\cite{3,4,5,6,7,8}, but the high sidelobe level in the autocorrelation function (ACF) remains problematic \\cite{9,10}. Pulse compression is done using various methods such as phase coding (Barker code), amplitude weighting, linear frequency modulation (LFM), and NLFM \\cite{11,12,13}.\n\nIn the phase coding and amplitude weighting methods, due to the phase discontinuity and variable amplitude, the mismatch loss increases in the receiver \\cite{14}; therefore, the use of LFM signals has increased significantly due to their continuous phase and constant amplitude.
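As a rough illustration of pulse compression, an LFM pulse and the peak sidelobe level (PSL) of its ACF can be computed as below. This is a sketch with illustrative parameter values; the roughly $-13$ dB first sidelobe is the classical LFM figure, not a result reported here.

```python
import numpy as np

# Illustrative design parameters (assumed for this sketch)
T = 2.5e-6                 # pulse width, s
B = 100e6                  # swept bandwidth, Hz
fs = 1e9                   # sampling rate, Hz

t = np.arange(-T / 2, T / 2, 1 / fs)
x = np.exp(1j * np.pi * (B / T) * t**2)   # LFM: constant amplitude, quadratic phase

# Autocorrelation function and a crude peak sidelobe level estimate
acf = np.abs(np.correlate(x, x, mode="full"))
acf /= acf.max()
peak = int(np.argmax(acf))                # zero-lag index
cell = int(fs / B)                        # ~ one resolution cell (1/B) in samples
psl_db = 20 * np.log10(acf[:peak - cell].max())
```

For a large time-bandwidth product such as this one, `psl_db` comes out near the classical $-13$ dB LFM sidelobe level, which motivates the NLFM designs that follow.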
Although the LFM method shows clear advantages over the phase coding and amplitude weighting methods, the high sidelobe level is still problematic because smaller targets are masked by the sidelobes of bigger targets; thus, the NLFM method is used thanks to its significant decrease in peak sidelobe level (PSL) \\cite{15}.\n\nIn the NLFM method, as in the LFM method, the signal amplitude is constant, and the goal is to find an optimal phase. The stationary phase concept (SPC) is often used for the NLFM method. The stationary phase concept states that the power spectral density (PSD) at a specific frequency is relatively large if the frequency variation with respect to time is small \\cite{12}. The stationary phase method was used in \\cite{16}, resulting in a significant reduction of the PSL, while the mainlobe widens slightly, which can be neglected.\n\nIn this paper, the proposed method improves the PSL of the stationary phase method.\n\nThe remainder of the paper is organized as follows: Section 2 reviews NLFM waveform design based on the stationary phase method. NLFM waveform design based on the proposed method is explained in Section 3, in which the optimal phase is calculated first and then the convergence of the proposed method is demonstrated using mathematical analysis. Section 4 contains simulation results. Finally, Section 5 concludes the paper.\n\n\\section{NLFM Signal Design with Stationary Phase Method}\nIn the stationary phase method, the desired signal is defined as $x\\left( t \\right)$.\n\\begin{equation}\nx\\left( t \\right) = a\\left( t \\right)\\exp \\left( {j\\varphi \\left( t \\right)} \\right)\\;\\;\\;\\;\\;\\;\\; - \\frac{T}{2} \\le t \\le \\;\\frac{T}{2}\n\\end{equation}\nwhere $a\\left( t \\right)$, ${\\varphi \\left( t \\right)}$, and $T$ are the amplitude, phase, and pulse width of $x\\left( t \\right)$, respectively.
The ${f_n}$ is instantaneous frequency of $x\\left( t \\right)$ at time ${t_n}$ which is determined as follow\n\\begin{equation}\n{f_n} = \\frac{1}{{2\\pi }}{\\varphi '\\left( t_n \\right)}\n\\end{equation}\n\nIf ${X\\left( {{f_n}} \\right)}$ is Fourier transform of $x\\left( t \\right)$ in instantaneous frequency ${f_n}$ , so based on SPC, the relation between the power spectral density and frequency variation can be denoted as following equation \\cite{12}\n\\begin{equation}\n{\\left| {X\\left( {{f_n}} \\right)} \\right|^2} \\approx \\;2\\pi \\frac{{{a^2}\\left( {{t_n}} \\right)}}{{\\left| {\\varphi ''\\left( {{t_n}} \\right)} \\right|}}\n\\end{equation}\n\nEquation (3) indicates that PSD is directly proportional to $a^2\\left( t \\right)$ and it is inversely proportional to value of second order derivative of ${\\varphi \\left( t \\right)}$. In NLFM technique, signal amplitude is constant, so we consider $a\\left( t \\right) = A$, ($A$ = constant), and PSD only depends on the value of ${\\varphi ''\\left( t \\right)}$. If we estimate $X(f)$ with a function such as $Z(f)$, we can rewrite (3) in frequency domain as follow \\cite{12}\n\\begin{equation}\n\\theta ''\\left( f \\right) \\approx k{\\left| {{\\rm}Z\\left( f \\right)} \\right|^2},\\;\\;\\;k = constant\n\\end{equation}\nwhere $\\theta\\left( f \\right)$ is defined as the phase of $X(f)$. 
If $B$ is the bandwidth of $Z(f)$, $\\theta '\\left( f \\right)$ is obtained from the integral of $\\theta ''\\left( f \\right)$ that can be written as follow\n\\begin{equation}\n\\theta '\\left( f \\right) = \\mathop \\int \\limits_{-\\frac{{B}}{2}}^f {\\theta ''\\left(\\alpha\\right)} d\\alpha\n\\end{equation}\n\nThe group time delay function ${T_g}\\left( f \\right)$ is defined as following equation\n\\begin{equation}\n{T_g}\\left( f \\right) = - \\frac{1}{{2\\pi }}\\theta '\\left( f \\right)\n\\end{equation}\nIf (4) and (5) are substituted in (6), so\n\\begin{equation}\n\\frac{{d{T_g}\\left( f \\right)}}{{df}} = - \\frac{k}{{2\\pi }}{\\left| {Z\\left( f \\right)} \\right|^2}\\to{T_g}\\left( f \\right) = - \\frac{k}{{2\\pi }}\\mathop \\int \\limits_{ - \\frac{B}{2}}^f {\\left| {Z\\left( \\alpha \\right)} \\right|^2}d\\alpha + r\n\\end{equation}\nwhere $r$ is constant and independent of the frequency. Also it is calculated by using the following boundary conditions\n\\begin{equation}\n{T_g}\\left( {B\/2} \\right) = T\/2,\\;\\;{T_g}\\left( { - B\/2} \\right) = - T\/2\n\\end{equation}\n\nNow, the frequency function of time can be determined as following equation \\cite{17}\n\\begin{equation}\nf\\left( t \\right) = T_g^{ - 1}\\left( f \\right)\n\\end{equation}\n\nFinding the inverse group time delay function is not always easy; and in some cases it should be carried\nout numerically. Eventually, the phase of $x\\left( t \\right)$ can be calculated by integral of frequency function as follow\n\\begin{equation}\n \\varphi \\left( t \\right) = 2\\pi\\int\\limits_{-\\frac{{T}}{2}}^t f\\left( \\alpha \\right)d\\alpha\n\\end{equation}\n\nThis method was applied in \\cite{16}. In section 4, the results are compared against the proposed method.\n\n\\section{NLFM Signal Design with Proposed Method}\n\nOur proposed method is based on a constrained optimization problem. 
First, an initial window is considered; then, by solving a constrained optimization problem, the desired signal is found. This method is performed iteratively, and the phase obtained from the stationary phase method \\cite{16} is used as the starting point. To guarantee the convergence of the proposed method, the triangle inequality and mathematical analysis are used to demonstrate that the minimum error decreases in each iteration. Additionally, the minimum error value is positive, which implies convergence, since a positive nonincreasing sequence certainly converges.\n\n\\subsection{Optimal Phase}\nIn the proposed method, to obtain the phase of the desired signal, first an initial window such as ${W_{initial}}$ is considered, where $\\left| {Y\\left( f \\right)} \\right|$ is its square root. The difference between $\\left| {Y\\left( f \\right)} \\right|$ and the amplitude of the Fourier transform of the desired signal is defined as the error. Since the amplitude of NLFM signals is constant, our goal is to minimize the error under the constraint that the amplitude of $x\\left( t \\right)$ is unity for $| t | \\le T\/2$; therefore we try to minimize the following equation\n\\begin{equation}\n\\begin{array}{cl}\n\\min\\limits_{X(f)} \n&E = \\displaystyle\\int \\limits_{ - \\frac{B}{2}}^{\\frac{B}{2}} {\\Big| {\\left| {Y\\left( f \\right)} \\right| - \\left|X\\left( f \\right)\\right|} \\Big|^2}df\\\\\n\\text{s.t.} &\n\\begin{cases}\n| {x(t)} |^2 = 1 &| t | \\le T\/2\\\\\nx(t) = 0&| t | > T\/2\n\\end{cases}\n\\end{array}\n\\end{equation}\nIf two complex numbers are close to each other, then their amplitudes are also close together; so if the following quantity is reduced, then the error is reduced, as expressed in phase matching problems \\cite{18,19,20}\n\\begin{equation}\n\\begin{array}{cl}\n\\min\\limits_{\\theta (f),X(f)} \n&E = \\displaystyle\\int \\limits_{ - \\frac{B}{2}}^{\\frac{B}{2}} {\\Big| 
{\\left| {Y\\left( f \\right)} \\right|\\exp (j\\theta (f)) - X\\left( f \\right)} \\Big|^2}df\\\\\n\\text{s.t.} &\n\\begin{cases}\n| {x(t)} |^2 = 1 &| t | \\le T\/2\\\\\nx(t) = 0&| t | > T\/2\n\\end{cases}\n\\end{array}\n\\end{equation}\nAssume ${Y_\\theta }\\left( f \\right) = \\left| {Y\\left( f \\right)} \\right|\\exp \\left( {j\\theta \\left( f \\right)} \\right)$ and $f = kB\/(K - 1)$, then the error equation can be written in the discrete form as follow\n\\begin{equation}\nE = \\sum \\limits_{k = 0}^{K - 1} {\\left| {{Y_\\theta }\\left( k \\right) - X\\left( k \\right)} \\right|^2} \n\\end{equation}\nwhere ${X\\left( k \\right) = \\mathop \\sum \\limits_{n = 0}^{N - 1} x\\left( n \\right)\\exp \\left( { - j\\frac{{2\\pi kn}}{K}} \\right)}$.\nConsider equations in vector space and assume ${\\bf{x}} = {\\left[ {x\\left( 0 \\right),\\ x\\left( 1 \\right), \\dots,\\ x\\left( {N - 1} \\right)} \\right]^T}$ and ${{\\left[ {\\bf{W}} \\right]_{k,n}} = \\exp \\left( { - j\\frac{{2\\pi kn}}{K}} \\right)}$, so\n\\begin{equation}\n\\left[ {\\begin{array}{*{20}{c}}\n{\\begin{array}{*{20}{c}}\n{X\\left( 0 \\right)}\\\\\n\\end{array}}\\\\\n{\\begin{array}{*{20}{c}}\n \\vdots \\\\\n{X\\left( {K - 1} \\right)}\n\\end{array}}\n\\end{array}} \\right] = {\\bf{Wx}}\n\\end{equation}\nIf we assume ${{\\bf{Y}}_{\\bf{\\theta }}} = {\\left[ {{Y_\\theta }\\left( 0 \\right),\\ {Y_\\theta }\\left( 1 \\right), \\ldots.\\ {Y_\\theta }\\left( {K - 1} \\right)} \\right]^T}$, the (12) is rewritten as follow\n\\begin{equation}\n\\begin{array}{rl}\n\\min\\limits_{\\bf \\uptheta, \\bf{x}} & {\\left( {{{\\bf{Y}}_{\\bf{\\theta }}} - {\\bf{Wx}}} \\right)^H}\\left( {{{\\bf{Y}}_{\\bf{\\theta }}} - {\\bf{Wx}}} \\right)\\\\\n\\text{s.t.}&{\\left| {x\\left( n \\right)} \\right|^2} = 1,\\quad n=1,\\dots,N-1\n\\end{array}\n\\end{equation}\nUsing Lagrangian method, we solve the obtained constrained optimization problem \\cite{21}.\n\\begin{align}\nJ &= {\\left( {{{\\bf{Y}}_{\\bf{\\theta }}} - {\\bf{Wx}}} 
\\right)^H}\\left( {{{\\bf{Y}}_{\\bf{\\theta }}} - {\\bf{Wx}}} \\right) + \\displaystyle\\sum_{n=0}^{N-1}{\\lambda _n}\\;{\\left| {x\\left( n \\right)} \\right|^2} \\nonumber\\\\\n&= {{\\bf{Y}}_{\\bf{\\theta }}}^H{{\\bf{Y}}_{\\bf{\\theta }}} {-} {{\\bf{Y}}_{\\bf{\\theta }}}^H{\\bf{Wx}} {-} {{\\bf{x}}^H}{{\\bf{W}}^H}{{\\bf{Y}}_{\\bf{\\theta }}} {+} {{\\bf{x}}^H}{{\\bf{W}}^H}{\\bf{Wx}} {+} {{\\bf{x}}^H}{\\bf{\\Lambda }}{\\bf{x}}\n\\end{align\nwhere ${\\lambda _n}$ is Lagrange multiplier and ${\\bf{\\Lambda }} = \\text{diag}({\\lambda _0},\\;{\\lambda _1},\\;...,{\\lambda _{N - 1}})$. The symbol diag(.) is the diagonal matrix which the entries outside the main diagonal are all zero. Because of the orthogonality of the columns of the matrix ${\\bf{W}}$, the value of ${{\\bf{W}}^H}{\\bf{W}}$ is equal to $K{{\\bf{I}}_N}$ where ${{\\bf{I}}_N}$ is identity matrix of size $N$. We take $J$ derivative with respect to ${\\bf{x}}$.\n\\begin{gather}\n\\cfrac{{\\partial J}}{{\\partial {\\bf{x}}}} = - {({{\\bf{W}}^H}{{\\bf{Y}}_{\\bf{\\theta }}})^*} + K{{\\bf{I}}_N}{{\\bf{x}}^*} + {\\bf{\\Lambda }}{{\\bf{x}}^*} = 0\\nonumber\\\\\n\n \\Rightarrow {\\bf{x}} = {(K{{\\bf{I}}_N} + {{\\bf{\\Lambda }}^*})^{ - 1}}{{\\bf{W}}^H}{{\\bf{Y}}_{\\bf{\\theta }}}\n\\end{gather}\nwhere * denotes the complex conjugate, ${(K{{\\bf{I}}_N} + {{\\bf{\\Lambda }}^*})^{ - 1}}$ is the inverse matrix of $K{{\\bf{I}}_N} + {{\\bf{\\Lambda }}^*}$ and calculated as follow\n\\begin{equation}\n{(K{{\\bf{I}}_N} + {{\\bf{\\Lambda }}^*})^{ - 1}} = \n\\text{diag}\\left({\\frac{1}{{K + \\lambda _0^*}}},\\ \\dots,\\ {\\frac{1}{{K + \\lambda _{N-1}^*}}}\\right)\n\\end{equation}\nSince the constraints of the optimization problem are real, then the Lagrange multipliers ${\\lambda _n}$ are real and the vector ${\\bf{x}}$ is expressed as follow\n\\begin{equation}\n\\begin{bmatrix}\nx(0)\\\\\n \\vdots \\\\\nx(N - 1)\n\\end{bmatrix}=\n\\begin{bmatrix}\n\\frac{1}{{K + {\\lambda _0}}}\\displaystyle\\sum\\limits_{k = 0}^{K - 1} 
{{{\\left[ {{{\\bf{W}}^H}} \\right]}_{1,k + 1}}{Y_\\theta }\\left( k \\right)} \\\\\n \\vdots \\\\\n\\frac{1}{{K + {\\lambda _{N - 1}}}}\\displaystyle\\sum\\limits_{k = 0}^{K - 1} {{{\\left[ {{{\\bf{W}}^H}} \\right]}_{N,k + 1}}{Y_\\theta }\\left( k \\right)} \n\\end{bmatrix}\n\\end{equation}\nDue to the constraints of the problem, the values ${\\lambda _n}$ must be such that the square of the amplitude of each coefficient of ${\\bf{x}}$ is equal to one, so\n\\begin{gather}\n{\\left| {\\frac{1}{{K + {\\lambda _n}}}\\sum\\limits_{k = 0}^{K - 1} {{{\\left[ {{{\\bf{W}}^H}} \\right]}_{n+1,k + 1}}{Y_\\theta }\\left( k \\right)} } \\right|^2} = 1\\nonumber\\\\\n \\Rightarrow {\\lambda _n} = \\pm \\left| {\\sum\\limits_{k = 0}^{K - 1} {{{\\left[ {{{\\bf{W}}^H}} \\right]}_{n+1,k + 1}}{Y_\\theta }\\left( k \\right)} } \\right| - K\n\\end{gather}\nTherefore, the vector ${\\bf{x}}$ is calculated as follow.\n\\begin{equation}\n{\\bf{x}} =\n\\left[ \\begin{array}{c}\n\\frac{{\\sum\\limits_{k = 0}^{K - 1} {{{\\left[ {{{\\bf{W}}^H}} \\right]}_{1,k + 1}}{Y_\\theta }\\left( k \\right)} }}{{\\left| {\\sum\\limits_{k = 0}^{K - 1} {{{\\left[ {{{\\bf{W}}^H}} \\right]}_{1,k + 1}}{Y_\\theta }\\left( k \\right)} } \\right|}}\\\\\n \\vdots \\\\\n\\frac{{\\sum\\limits_{k = 0}^{K - 1} {{{\\left[ {{{\\bf{W}}^H}} \\right]}_{N,k + 1}}{Y_\\theta }\\left( k \\right)} }}{{\\left| {\\sum\\limits_{k = 0}^{K - 1} {{{\\left[ {{{\\bf{W}}^H}} \\right]}_{N,k + 1}}{Y_\\theta }\\left( k \\right)} } \\right|}}\n\\end{array} \\right] = {{\\bf{\\Lambda }}_1}{{\\bf{W}}^H}{{\\bf{Y}}_{\\bf{\\theta }}}\n\\end{equation}\nwhere ${{\\bf{\\Lambda }}_1}$ is a $N\\times N$ matrix as follow\n\\begin{equation}\n\\left[{{\\bf{\\Lambda }}_1}\\right]_{i,j} = \n\\begin{cases}\n{{{\\left| {\\sum\\limits_{k = 0}^{K - 1} {{{\\left[ {{{\\bf{W}}^H}} \\right]}_{i,k + 1}}{Y_\\theta }\\left( k \\right)} } \\right|^{-1}}}}, &\ni=j\\\\\n0,&i\\neq j\n\\end{cases}\n\\end{equation}\nTo achieve the desired signal, the proposed method is 
performed as an iterative algorithm; therefore, with respect to (21), in the r-th iteration the desired signal will be as follows\n\\begin{equation}\n{{\\bf{x}}^{(r)}} = {\\bf{\\Lambda }}_1^{(r - 1)}{{\\bf{W}}^H}{\\bf{Y}}_{\\bf{\\theta }}^{(r - 1)}\n\\end{equation}\n${{\\bf{\\uptheta }}^{\\left( r \\right)}}$ is the phase of $X\\left( f \\right)$ in the r-th iteration, which can be calculated as follows\n\\begin{equation}\n{{\\bf{\\uptheta }}^{\\left( r \\right)}} = {\\rm{phase}}({\\bf{W}}{{\\bf{x}}^{(r)}})\n\\end{equation}\nBy calculating the ${{\\bf{\\uptheta }}^{\\left( r \\right)}}$ value, the vector ${\\bf{Y}}_{\\bf{\\theta }}^{(r)}$ and then the matrix ${\\bf{\\Lambda }}_1^{(r)}$ are calculated.\n\\begin{equation}\n{\\bf{Y}}_{\\bf{\\theta }}^{(r)} = \\left[ {\\begin{array}{*{20}{c}}\n{\\left| {Y\\left( 0 \\right)} \\right|\\exp (j{\\theta ^{(r)}}(0))}\\\\\n \\vdots \\\\\n{\\left| {Y\\left( {K - 1} \\right)} \\right|\\exp (j{\\theta ^{(r)}}(K - 1))}\n\\end{array}} \\right]\n\\end{equation}\n\\begin{equation}\n\\left[{{\\bf{\\Lambda }}_1^{(r)}}\\right]_{i,j} = \n\\begin{cases}\n{{{\\left| {\\sum\\limits_{k = 0}^{K - 1} {{{\\left[ {{{\\bf{W}}^H}} \\right]}_{i,k + 1}}{Y_\\theta^{(r)} }\\left( k \\right)} } \\right|^{-1}}}}, &\ni=j\\\\\n0,&i\\neq j\n\\end{cases}\n\\end{equation}\nTo start the algorithm, we set ${{\\bf{\\uptheta }}^{\\left( 0 \\right)}}$ equal to the phase value obtained from the stationary phase method for the Fourier transform of the NLFM signal \\cite{16}. 
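The iteration can be sketched numerically as follows. This is a minimal sketch of the update rule, assuming the $K \times N$ DFT matrix of (14); the function name, array sizes, and number of iterations are illustrative.

```python
import numpy as np

def nlfm_iteration(Y_mag, theta0, N, iters=50):
    """Sketch of the iteration: alternate between the unit-modulus
    time-domain projection and the target spectral magnitude |Y(f)|."""
    K = len(Y_mag)
    # K x N DFT matrix W with [W]_{k,n} = exp(-j 2 pi k n / K)
    W = np.exp(-2j * np.pi * np.outer(np.arange(K), np.arange(N)) / K)
    theta = np.asarray(theta0, dtype=float)
    for _ in range(iters):
        Y_theta = Y_mag * np.exp(1j * theta)   # build Y_theta from |Y| and theta
        x = W.conj().T @ Y_theta               # W^H Y_theta
        x = x / np.abs(x)                      # unit-modulus constraint on x(n)
        theta = np.angle(W @ x)                # new spectral phase theta^(r)
    return x, theta
```

The returned `x` has constant unit amplitude by construction; it can then be scaled by the constant coefficient $A$.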
Thus, by repeating the algorithm, we obtain the desired NLFM signal, and because the amplitude of NLFM signal is constant, so we can multiply the amplitude of the obtained signal in the constant coefficient $A$.\n\n\\subsection{Convergence of the Proposed Method}\nWith respect to the obtained value for the vector ${\\bf{x}}$ in (21) substituted in (15), the minimum value of $E$ is calculated as follow.\n\\begin{equation}\n\\begin{array}{l}\n{E_{\\text{Min}}} \n = {{\\bf{Y}}_{\\bf{\\theta }}}^H\\left({{\\bf{I}}_K} - {\\bf{W}}(2{{\\bf{\\Lambda }}_1} - K{\\bf{\\Lambda }}_1^2){{\\bf{W}}^H}\\right){{\\bf{Y}}_{\\bf{\\theta }}}\n\\end{array}\n\\end{equation}\nIn (27), ${\\bf{P}} = {\\bf{W}}(2{{\\bf{\\Lambda }}_1} - K{\\bf{\\Lambda }}_1^2){{\\bf{W}}^H}$ and ${\\bf{A}} = {{\\bf{I}}_K} - {\\bf{W}}(2{{\\bf{\\Lambda }}_1} - K{\\bf{\\Lambda }}_1^2){{\\bf{W}}^H}$ are the projection and orthogonal complement matrices, respectively \\cite{22}. The minimum error in r-th iteration is as follow\n\\begin{equation}\n\\Scale[0.845]{\nE_{\\text{Min}}^{(r)} = {\\left({{\\bf{Y}}_{\\bf{\\theta }}}^{(r-1)}\\right)^H}\n\\left({{\\bf{I}}_K} - {\\bf{W}}\\left(2{\\bf{\\Lambda }}_1^{(r-1)} - K{({\\bf{\\Lambda }}_1^{(r-1)})^2}\\right){{\\bf{W}}^H}\\right)\n{{\\bf{Y}}_{\\bf{\\theta }}}^{(r-1)}}\n\\end{equation}\nFor convergence of the proposed method, the error value must be reduced with increasing of iterations. 
In other words, for the convergence of the proposed method, the following inequality should be satisfied.\n\\begin{equation}\n0 \\le E_{\\text{Min}}^{(r + 1)} \\le E_{\\text{Min}}^{(r)}\n\\end{equation}\nTo prove (29), we use (12) to represent the minimum error in r-th iteration as\n\\begin{equation}\n\\begin{array}{l}\nE_{\\text{Min}}^{^{(r)}} = \\displaystyle\\int \\limits_{ - \\frac{B}{2}}^{\\frac{B}{2}}\\Big| {\\left| {Y\\left( f \\right)} \\right|\\exp (j{\\theta ^{(r - 1)}}(f)) - {X^{(r)}}\\left( f \\right)} \\Big|^2df\n\\end{array}\n\\end{equation}\nAccording to the triangle inequality, we can write\n\\begin{equation}\n\\begin{array}{ll}\nE_{\\text{Min}}^{^{(r)}} \n& \\ge \\displaystyle\\int \\limits_{-\\frac{B}{2}}^{\\frac{B}{2}} \\left||Y( f )| - | X^{(r)}(f)| \\right|^2df\\\\\n&=\\displaystyle\\int \\limits_{-\\frac{B}{2}}^{\\frac{B}{2}}\n \\Big| {\\exp (j{\\theta ^{(r)}}(f))} \\Big|^2\n\\left||Y( f )| - | X^{(r)}(f)| \\right|^2df\\\\\n\n&= \\displaystyle\\int \\limits_{ - \\frac{B}{2}}^{\\frac{B}{2}}\\Big| {\\left| {Y\\left( f \\right)} \\right|\\exp (j{\\theta ^{(r)}}(f)) - {X^{(r)}}\\left( f \\right)} \\Big|^2df\n\\end{array}\n\n\\end{equation}\nOn the other hand, since $E_{\\text{Min}}^{(r + 1)}$ is the minimum value of error in $(r {+} 1)$-th iteration, the following equation is satisfied.\n\\begin{equation}\n\\begin{array}{ll}\nE_{\\text{Min}}^{^{(r + 1)}} &= \n\\displaystyle\\int \\limits_{ - \\frac{B}{2}}^{\\frac{B}{2}}\\Big| {\\left| {Y\\left( f \\right)} \\right|\\exp (j{\\theta ^{(r)}}(f)) - {X^{(r+1)}}\\left( f \\right)} \\Big|^2df\\\\\n&\\le\\displaystyle\\int \\limits_{ - \\frac{B}{2}}^{\\frac{B}{2}}\\Big| {\\left| {Y\\left( f \\right)} \\right|\\exp (j{\\theta ^{(r)}}(f)) - {X^{(r)}}\\left( f \\right)} \\Big|^2df\n\\end{array}\n\\end{equation}\nFrom the comparison of the two equations (31) and (32), we conclude $E_{\\text{Min}}^{^{(r)}} \\ge E_{\\text{Min}}^{^{(r + 1)}}$. 
Since we considered two arbitrary successive iterations, (29) is satisfied for any two successive iterations.\nSince $E_{\\text{Min}}^{(r)}$ is a positive nonincreasing sequence, convergence of the proposed method is guaranteed.\n\n\\section{Simulation Results}\nThe proposed method is performed for six initial windows: Raised-Cosine, Taylor, Chebyshev, Gaussian, Poisson, and Kaiser. Table 1 shows the formulas of the selected windows, the group time delay functions, and their constant parameters. As already mentioned, the group time delay function for some windows is calculated numerically. In Table 1, erf(.) and sgn(.) denote the error and sign functions, respectively. The design parameters, namely the $\\text{bandwidth ($B$)}$, pulse width ($T$), and sampling rate, are set equal to $\\text{100 MHz}$, 2.5 $\\mu$s, and 1 GHz, respectively. Fig. 1 and $\\text{Fig. 2}$ show the autocorrelation functions of the designed signals using the stationary phase method and the proposed method for the six selected windows, respectively. Also, Fig. 
3 shows the minimum error for the six selected windows in different iterations.\n\n\\begin{figure*}[!h]\n\t\\centering\n\t\\mbox{\\subfloat[]{\\label{subfig10:a} \t\\def\\big{\\includegraphics[width=0.48\\textwidth,height=6.25cm]{RC_SPC_B}}\n\t\t\t\\def\\little{\\includegraphics[height=2.4cm]{RC_SPC}}\n\t\t\t\\def\\stackalignment{r}\n\t\t\t\\topinset{\\little}{\\big}{15pt}{24pt}}}\\vspace{24pt}\n\t\\mbox{\\subfloat[]{\\label{subfig10:b} \t\\def\\big{\\includegraphics[width=0.48\\textwidth,height=6.25cm]{Taylor_SPC_B}}\n\t\t\t\\def\\little{\\includegraphics[height=2.4cm]{Taylor_SPC}}\n\t\t\t\\def\\stackalignment{r}\n\t\t\t\\topinset{\\little}{\\big}{15pt}{24pt}}}\n\t\\mbox{\\subfloat[]{\\label{subfig10:c} \t\\def\\big{\\includegraphics[width=0.48\\textwidth,height=6.25cm]{Cheb_SPC_B}}\n\t\t\t\\def\\little{\\includegraphics[height=2.4cm]{Cheb_SPC}}\n\t\t\t\\def\\stackalignment{r}\n\t\t\t\\topinset{\\little}{\\big}{15pt}{24pt}}}\\vspace{24pt}\n\t\\mbox{\\subfloat[]{\\label{subfig10:d} \t\\def\\big{\\includegraphics[width=0.48\\textwidth,height=6.25cm]{Gaussian_SPC_B}}\n\t\t\t\\def\\little{\\includegraphics[height=2.4cm]{Gaussian_SPC}}\n\t\t\t\\def\\stackalignment{r}\n\t\t\t\\topinset{\\little}{\\big}{15pt}{24pt}}}\n \\mbox{\\subfloat[]{\\label{subfig10:d} \t\\def\\big{\\includegraphics[width=0.48\\textwidth,height=6.25cm]{Poisson_SPC_B}}\n\t\t\t\\def\\little{\\includegraphics[height=2.4cm]{Poisson_SPC}}\n\t\t\t\\def\\stackalignment{r}\n\t\t\t\\topinset{\\little}{\\big}{15pt}{24pt}}}\n \\mbox{\\subfloat[]{\\label{subfig10:d} \t\\def\\big{\\includegraphics[width=0.48\\textwidth,height=6.25cm]{Kaiser_SPC_B}}\n\t\t\t\\def\\little{\\includegraphics[height=2.4cm]{Kaiser_SPC}}\n\t\t\t\\def\\stackalignment{r}\n\t\t\t\\topinset{\\little}{\\big}{15pt}{24pt}}}\n\t\\caption{The autocorrelation functions of the designed\nsignals using the stationary phase method for the six windows of Raised-Cosine, Taylor, Chebyshev, Gaussian, Poisson, and 
Kaiser.}\n\t\\label{fig:fig10}\n\\end{figure*}\n\\begin{figure*}[!h]\n\t\\centering\n\t\\mbox{\\subfloat[]{\\label{subfig10:a} \t\\def\\big{\\includegraphics[width=0.48\\textwidth,height=6.25cm]{RC_PIA_B}}\n\t\t\t\\def\\little{\\includegraphics[height=2.4cm]{RC_PIA}}\n\t\t\t\\def\\stackalignment{r}\n\t\t\t\\topinset{\\little}{\\big}{15pt}{24pt}}}\\vspace{24pt}\n\t\\mbox{\\subfloat[]{\\label{subfig10:b} \t\\def\\big{\\includegraphics[width=0.48\\textwidth,height=6.25cm]{Taylor_PIA_B}}\n\t\t\t\\def\\little{\\includegraphics[height=2.4cm]{Taylor_PIA}}\n\t\t\t\\def\\stackalignment{r}\n\t\t\t\\topinset{\\little}{\\big}{15pt}{24pt}}}\n\t\\mbox{\\subfloat[]{\\label{subfig10:c} \t\\def\\big{\\includegraphics[width=0.48\\textwidth,height=6.25cm]{Cheb_PIA_B}}\n\t\t\t\\def\\little{\\includegraphics[height=2.4cm]{Cheb_PIA}}\n\t\t\t\\def\\stackalignment{r}\n\t\t\t\\topinset{\\little}{\\big}{15pt}{24pt}}}\\vspace{24pt}\n\t\\mbox{\\subfloat[]{\\label{subfig10:d} \t\\def\\big{\\includegraphics[width=0.48\\textwidth,height=6.25cm]{Gaussian_PIA_B}}\n\t\t\t\\def\\little{\\includegraphics[height=2.4cm]{Gaussian_PIA}}\n\t\t\t\\def\\stackalignment{r}\n\t\t\t\\topinset{\\little}{\\big}{15pt}{24pt}}}\n \\mbox{\\subfloat[]{\\label{subfig10:d} \t\\def\\big{\\includegraphics[width=0.48\\textwidth,height=6.25cm]{Poisson_PIA_B}}\n\t\t\t\\def\\little{\\includegraphics[height=2.4cm]{Poisson_PIA}}\n\t\t\t\\def\\stackalignment{r}\n\t\t\t\\topinset{\\little}{\\big}{15pt}{24pt}}}\n \\mbox{\\subfloat[]{\\label{subfig10:d} \t\\def\\big{\\includegraphics[width=0.48\\textwidth,height=6.25cm]{Kaiser_PIA_B}}\n\t\t\t\\def\\little{\\includegraphics[height=2.4cm]{Kaiser_PIA}}\n\t\t\t\\def\\stackalignment{r}\n\t\t\t\\topinset{\\little}{\\big}{15pt}{24pt}}}\n\t\\caption{The autocorrelation functions of the designed\nsignals using the proposed method for the six windows of Raised-Cosine, Taylor, Chebyshev, Gaussian, Poisson, and 
Kaiser.}\n\t\\label{fig:fig1}\n\\end{figure*}\n\n\\begin{table*}[!h]\n\t\\centering\n\t\\caption{Selected Windows.}\n\\resizebox{\\textwidth}{!}{\\begin{tabular}{c c c c}\n \\toprule[1.5pt]\\\\\n\t\t\\bfseries Windows & \\bfseries Formula & \\bfseries Group Time Delay Function & \\bfseries Constant Parameters\\\\ \\\\\n \\toprule[1.5pt]\\\\\n\t\tRaised-Cosine & $w\\left( n \\right) = k + \\left( {1 - k} \\right){\\rm{co}}{{\\rm{s}}^2}\\left( {\\cfrac{{\\pi n}}{{M - 1}}} \\right),\\;\\;\\;\\;\\;\\left| n \\right| \\le \\cfrac{{M - 1}}{2}$ & $ {\\cfrac{Tf}{B} + \\cfrac{T}{{2\\pi }}\\left( {\\cfrac{{1 - k}}{{1 + k}}} \\right)\\sin \\left( {\\cfrac{{2\\pi f}}{B}} \\right)} $ & $k = 0.17$\\\\ \\\\\n\t\t\\hline\\\\\n\t\tTaylor & $\\begin{array}{l}\nw\\left( n \\right) = 1 + \\displaystyle\\sum \\limits_{m = 1}^{\\bar n - 1} {F_m}{\\rm{cos}}\\left( {\\cfrac{{2\\pi mn}}{{M - 1}}} \\right),\\;\\;\\;\\;\\;\\left| n \\right| \\le \\cfrac{{M - 1}}{2}\\\\\n{F_m} = F\\left( {m,\\bar n,\\eta } \\right)\n\\end{array}$ & ${\\cfrac{Tf}{B} + \\cfrac{T}{{2\\pi }} \\displaystyle\\sum \\limits_{m = 1}^{\\bar n - 1} \\cfrac{{{F_m}}}{m}{\\rm{sin}}\\left( {\\cfrac{{2\\pi mf}}{B}} \\right)} $ & $\\begin{array}{l} \n\t\t\\eta = 88.5\\;dB\\\\\n\t\t\\bar n = 2\n\t\t\\end{array}$ \\\\ \\\\\n\t\t\\hline \\\\\n\t\tChebyshev & $\\begin{array}{l}\nW\\left( m \\right) = \\cfrac{{\\cos \\left\\{ {M{\\rm{co}}{{\\rm{s}}^{ - 1}}\\left[ {\\beta \\cos \\left( {\\frac{{\\pi m}}{M}} \\right)} \\right]} \\right\\}}}{{\\cosh \\left[ {M{\\rm{cos}}{{\\rm{h}}^{ - 1}}\\left( \\beta \\right)} \\right]}}\\;\\;,\\;\\;\\;\\;m = 0,1,2, \\ldots ,M - 1\\\\\n\\beta = \\cosh \\left[ {\\frac{1}{M}{\\rm{cos}}{{\\rm{h}}^{ - 1}}\\left( {{{10}^\\alpha }} \\right)} \\right]\\\\\nw\\left( n \\right) = \\cfrac{1}{N} \\displaystyle\\sum \\limits_{m = 0}^{M - 1} W\\left( m \\right).{\\rm{exp}}\\left( {\\cfrac{{j2\\pi mn}}{M}} \\right),\\;\\;\\;\\;\\;\\;\\left| n \\right| \\le \\cfrac{{M - 1}}{2}\n\\end{array}$ & Calculated 
Numerically & $\\alpha = 2$ \\\\ \\\\\n\t\t\\hline\\\\\n Gaussian& $w\\left( n \\right) = \\exp \\left( { - k{{\\left( {\\cfrac{n}{{2\\left( {M - 1} \\right)}}} \\right)}^2}} \\right),\\;\\;\\;\\;\\;\\;\\left| n \\right| \\le \\cfrac{{M - 1}}{2}$ & $\\cfrac{T}{{2\\;{\\rm{erf}}\\left( {\\sqrt k \/4} \\right)}}\\ {\\rm{erf}}\\left( {\\cfrac{{f\\sqrt k }}{{2B}}} \\right)\\;$& $k = 35.51$\\\\ \\\\\n\t\t\\hline\\\\\n Poisson& $w\\left( n \\right) = \\exp \\left( { - k\\cfrac{{\\left| n \\right|}}{{\\left( {M - 1} \\right)}}} \\right),\\;\\;\\;\\;\\;\\;\\;\\;\\left| n \\right| \\le \\cfrac{{M - 1}}{2}$ & $\\cfrac{{T{\\rm{sgn}}\\left( f \\right)}}{{2\\left( {1 - \\exp \\left( { \\cfrac{-k}{2}} \\right)} \\right)}}\n\\left( {1 - \\exp \\left( {\\cfrac{{ - k\\left| f \\right|}}{B}} \\right)} \\right)$ & $k = 2.5$\\\\ \\\\\n\t\t\\hline\\\\\n\t\tKaiser & $w\\left( n \\right) \\buildrel \\Delta \\over = \\left\\{ {\\begin{array}{*{20}{c}}\n{\\cfrac{{{I_0}\\left( {\\pi \\alpha \\sqrt {1 - {{\\left( {\\frac{n}{{M\/2}}} \\right)}^2}} } \\right)}}{{{I_0}\\left( {\\pi \\alpha } \\right)}}\\;\\;\\;\\;\\;}&{\\left| n \\right| \\le \\cfrac{{M - 1}}{2}}\\\\\n0&{\\text{elsewhere}}\n\\end{array}} \\right.,\\;\\beta \\buildrel \\Delta \\over = \\pi \\alpha $ & Calculated Numerically & $\\beta = 4.5$ \\\\ \\\\\n \\bottomrule[1.5pt]\n\t\\end{tabular} }\n\\end{table*}\n\n\\begin{figure*}[!ht]\n\t\\centering\n \\includegraphics[width=\\linewidth,height=10.2cm]{Emin}\n\t\\caption{Minimum error with respect to iteration for the six windows of Raised-Cosine, Taylor, Chebyshev, Gaussian, Poisson, and Kaiser.}\n\t\\label{fig:fig11}\n\\end{figure*}\n\nTable 2 compares the results obtained from the stationary phase method and the proposed method for the PSL of the autocorrelation function. The results indicate that the average PSL reduction is about 5 dB, with the maximum reduction of 17.28 dB achieved by the Poisson window. The minimum error in Fig. 3 is calculated according to (28).
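The closed-form group time delay entries in Table 1 follow from the stationary phase principle, under which the group delay is proportional to the cumulative window area. A sketch of this check for the Raised-Cosine row, assuming the standard stationary-phase relation $t(f)=T\int_{-B/2}^{f}W(\nu)\,d\nu\big/\int_{-B/2}^{B/2}W(\nu)\,d\nu-T/2$:

```python
import math

# Raised-Cosine window on [-B/2, B/2] with the Table 1 design parameters
T, B, k = 2.5e-6, 100e6, 0.17

def W(f):
    return k + (1 - k) * math.cos(math.pi * f / B) ** 2

def trapz(g, lo, hi, n=4000):
    # simple trapezoidal quadrature
    h = (hi - lo) / n
    return h * (g(lo) / 2 + sum(g(lo + i * h) for i in range(1, n)) + g(hi) / 2)

def group_delay_numeric(f):
    # stationary-phase group delay from the cumulative window area
    return T * trapz(W, -B / 2, f) / trapz(W, -B / 2, B / 2) - T / 2

def group_delay_closed(f):
    # closed-form entry from Table 1
    return (T * f / B
            + (T / (2 * math.pi)) * ((1 - k) / (1 + k)) * math.sin(2 * math.pi * f / B))
```

The two functions agree to numerical quadrature accuracy across the band, confirming the tabulated formula.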
As mentioned in Section 3, the minimum error of the proposed method has a decreasing trend. Initially this error is large, but it then decreases, and the trend of its change becomes almost constant at high iterations.\n\n\\begin{table}[!t]\n\\centering\n\\caption{Comparison of PSL for the Stationary Phase Method (SPM) and Proposed Method (PM).}\n\\begin{tabular}{c c c c}\n \\toprule[1.5pt]\n \\multirow{2}{*}{\\bfseries Windows}& \\multicolumn{2}{c} {\\bfseries PSL (dB)} & \\multirow{2}{*}{\\bfseries Improvement (dB)} \n \\\\\n \\cmidrule(r){2-3}\n\t & \\bfseries SPM& \\bfseries PM & \\\\\n \\toprule[1.5pt] \\\\\n\tRaised-Cosine & $-33.34$ & $-37.89$ & $-4.55$ \\\\ \\\\\n\t\\hline \\\\\n\tTaylor & $-33.34$ & $-37.73$ & $-4.39$ \\\\ \\\\\n\t\\hline \\\\\n\tChebyshev & $-31.77$ & $-37.37$ & $-5.60$ \\\\ \\\\\n\t\\hline \\\\\n Gaussian & $-32.38$ & $-37.67$ & $-5.29$ \\\\ \\\\\n\t\\hline \\\\\n Poisson & $-20.39$ & $-37.67$ & $-17.28$ \\\\ \\\\\n\t\\hline \\\\\n\tKaiser & $-30.98$ & $-36.82$ & $-5.84$ \\\\ \\\\\n \\bottomrule[1.5pt]\n\\end{tabular}\n\\end{table}\n\n\\section{Conclusion}\nIn the proposed method, an NLFM signal is obtained by solving a constrained optimization problem using the Lagrangian method. With this iterative method, the PSL of the autocorrelation function is reduced by about 5 dB compared with the stationary phase method. The PSL reduction for the Poisson window is significant compared to the other windows. Using mathematical analysis, we showed that the minimum error of the proposed method has a decreasing trend, and this guarantees the convergence of the proposed method.
The results of the minimum error of six selected windows also reveal the validity of this statement.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{BISTABLE MEDIA MODEL}\n\n\\indent This work is based on a modified FitzHugh-Nagumo model,\n\\begin{eqnarray}\n u_t &=& au -u^3 - v + \\nabla^2 u, \\\\\n v_t &=& \\varepsilon(u-v)+\\delta \\nabla^2 v,\n\\end{eqnarray}\nhere variables $u$ and $v$ represent the concentrations of the activator and inhibitor, respectively, and $\\delta$ denotes the ratio of their diffusion coefficients. The small value $\\varepsilon$ characterizes the time scales of the two variables, where $v$ remains approximately constant $v_f$ on the length scale over which $u$ varies. The system described by Eqs.(1) and (2) can be excitable, Turing-Hopf, or bistable type. In this paper the parameter $a$ is chosen such that the system is bistable. The two stationary and uniform stable states are indicated by up state ($u_{+}$,$v_{+}$) and down state ($u_{-}$,$v_{-}$), respectively, and they are symmetric, $(u_{+},v_{+})$=$-(u_{-},v_{-})$. A front (interface) connects the two stable states smoothly. On decreasing $\\varepsilon$ the system follows NIB bifurcation that leads to the formation of a couple of Bloch fronts. In the followings, we define a front which jumps from down to up state ($v_{f}$$<$$0$, the planar front velocity $c_{nf}$), and a back which falls from up to down state ($v_{f}$$>$$0$, the planar back velocity $c_{nb}$). Here the front and the back correspond to the two Bloch fronts. So the image of bistable spiral wave is clear: a couple of Bloch fronts (front and back) propagating with opposite velocities enclose a spiral arm, and the front meets the back at the spiral tip at where $v_{f(b)}$$=$$0$ and $u$$=$$v$$=$$0$. 
In order to differentiate the obtained dense and sparse spiral waves, we define an order parameter $\\alpha=|\\frac{\\lambda_+-\\lambda_-}{\\lambda_++\\lambda_-}|$, where $\\lambda_+$ and $\\lambda_-$ denote the average widths of the up and down states, respectively; $\\alpha$ serves as a simple indicator of the duty ratio of the spiral wave. In the dense spiral case, $\\alpha$$=$$0$, which means that the up and down states have identical widths [as indicated in Fig. 1 (c)]. In the sparse spiral case, if $\\lambda_+$$>$$\\lambda_-$ we call it a Negative Phase Sparse Spiral [NPSS, Fig. 1 (d)]. If instead $\\lambda_+$$<$$\\lambda_-$ we call it a Positive Phase Sparse Spiral [PPSS, Fig. 1(e)-(g)].\n\n\\indent Because pattern formation in bistable media is sensitive to the initial and boundary conditions, we adopt two types of fixed initial conditions in the numerical simulations, as shown in Fig. 1 (a) and (b). In order to investigate the transformation process between the dense and sparse spirals, we use Fig. 1 (a) [Fig. 1 (b)] as the initial condition when increasing (decreasing) $\\varepsilon$, which first evolves into a dense (sparse) spiral. The boundary conditions are taken to be no-flux. The simulations are done in a two-dimensional (2D) Cartesian coordinate system with different grid sizes. A generalized Peaceman-Rachford ADI scheme is used to integrate the above model with a space step $dx$$=$$dy$$=$$0.3$ length units and a time step $dt$$=$$0.05$ time units. Unless otherwise noted, our simulations use the parameter set $a$$=$$2.0$, $\\delta$$=$$0.1$.\n\\section{Simulation results and discussion}\n\\subsection{Bifurcation scenario of bistable spiral waves}\n \\begin{figure}[htbp]\n \\begin{center}\\includegraphics[width=8cm,height=4.7cm]{fig1.eps}\n \\caption{Evolution of spiral waves and tip paths with increasing $\\varepsilon$.\n (a) symmetric and (b) asymmetric initial conditions.
(c) dense spiral, $\\varepsilon$$=$$0.3$;\n(d) NPSS, $\\varepsilon$$=$$0.33$; (e)-(g) PPSS, $\\varepsilon$$=$$0.34, 0.42, 0.4385$; (h) uniform state, $\\varepsilon$$=$$0.439$; (i) corresponding tip paths. The dashed (solid) lines in (a) and (b) show the contour lines $u$=$0$ ($v$=$0$). The dashed circle in (i), corresponding to (h), represents a rough tip trajectory of a spiral which travels outside the domain. In order to illustrate the meandering of the sparse spiral a small domain size is used: $64$$\\times$$64$ s.u.}\n \\label{1}\n \\end{center}\n\\end{figure}\n\\indent Figure 1 shows the bifurcation scenarios of spiral waves and tip trajectories with increasing $\\varepsilon$. Here, we use a small domain size $64$$\\times$$64$ s.u. in order to illustrate the boundary-induced spiral meandering simultaneously. It can be seen that, upon increasing $\\varepsilon$, the system follows the sequence dense spiral - sparse spiral - meandering sparse spiral - sparse spiral - uniform state. When $\\varepsilon$$<$$0.33$, the spiral wave is dense [Fig. 1(c)], and the spiral tip is a fixed point. When $\\varepsilon$ exceeds the critical value $\\varepsilon_{c}$$=$$0.33$ the dense spirals transform into either PPSS or NPSS. The order parameter $\\alpha$ increases with $\\varepsilon$. After this transformation the tip of the sparse spiral begins to travel and traces out a circle with primary radius $r_{1}$ as the spiral wave rotates, as indicated in Fig. 1(i). The radius $r_{1}$ increases with $\\varepsilon$ as shown in Fig. 2. When $\\varepsilon$ approaches roughly $0.404$, we observe the meandering of spiral waves. A secondary circle appears on the trajectory due to the meandering, and the primary circle (radius $r_{1}$) orbits the secondary circle (radius $r_{2}$) in the anticlockwise direction with frequency $f_{2}$. The primary circle spins about its center in the clockwise direction with frequency $f_{1}$. The radius of the secondary circle $r_{2}$ decreases upon increasing $\\varepsilon$.
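Classifying a computed pattern as dense, NPSS, or PPSS rests on the duty-ratio order parameter $\alpha$ introduced earlier. A minimal sketch of how $\alpha$ can be evaluated, assuming the $u$ field has been sampled along a one-dimensional cut and binarized by the sign of $u$ (the profiles used below are synthetic square waves, not simulation output):

```python
def duty_ratio_alpha(u_profile, dx=0.3):
    """alpha = |(lam_plus - lam_minus) / (lam_plus + lam_minus)| from a 1-D cut of u."""
    runs = {1: [], -1: []}              # widths of up (+1) and down (-1) segments
    sign = 1 if u_profile[0] > 0 else -1
    length = 0
    for u in u_profile:
        s = 1 if u > 0 else -1
        if s == sign:
            length += 1
        else:                           # segment ended: store its width
            runs[sign].append(length * dx)
            sign, length = s, 1
    lam_p = sum(runs[1]) / len(runs[1])     # average up-state width
    lam_m = sum(runs[-1]) / len(runs[-1])   # average down-state width
    return abs((lam_p - lam_m) / (lam_p + lam_m))
```

For a dense spiral the cut is a symmetric square wave and the function returns 0; a 3:1 duty ratio gives 0.5.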
When the parameter $\\varepsilon$ reaches a critical value $0.445$, the meandering of spiral waves disappears and $r_{2}$$=$$0$. If $\\varepsilon$ exceeds this value the spiral tip travels outside the domain, so one obtains a uniform state (either the up or the down state).\n\\begin{figure}[htbp]\n \\begin{center}\\includegraphics[width=7cm,height=5cm]{fig2.eps}\n \\caption{Dependence of the radius $r_{1}$ on the parameter $\\varepsilon$ for different domain sizes. The numbers in the legend denote the domain sizes.}\n \\label{2}\n\\end{center}\n\\end{figure}\n\n\\indent Figure 3 gives a two-parameter numerical bifurcation diagram of spiral waves in a finite domain. The lower line $L1$ separates the dense and sparse spirals, and it remains constant as the domain size changes. There exists a parameter range corresponding to the meandering of sparse spirals, indicated by the gray region in Fig. 3. This region changes with the domain size. If the system is large enough, for example $160$$\\times$$160$ s.u., this region reduces to a line which separates the sparse spiral and the uniform state. So the observed meandering of sparse spirals is induced by the boundary effect.\n\n\\begin{figure}[htbp]\n \\begin{center}\\includegraphics[width=7cm,height=5cm]{fig3.eps}\n \\caption{Two-parameter numerical bifurcation diagram of spiral waves. The region between lines $L1$ and $L3$ represents sparse spirals. The gray region M indicates meandering sparse spirals. The regions $D$ and $U$ denote Dense spirals and Uniform states, respectively. Domain size: $64$$\\times$$64$ s.u.}\n \\label{3}\n\\end{center}\n\\end{figure}\n\n\\begin{figure}[htbp]\n \\begin{center}\\includegraphics[width=7cm,height=5cm]{fig4.eps}\n \\caption{Phase diagram of the sparse spirals spanned by the domain size and the parameter $a$. This diagram presents the choice of NPSS or PPSS after the transformation upon increasing $\\varepsilon$. R denotes Ruleless choice. P (N) indicates PPSS (NPSS).
The dashed line roughly divides the states of the sparse spirals after the transformation into two regions, above which the choice is always Ruleless.}\n \\label{4}\n \\end{center}\n\\end{figure}\n\n\\indent As indicated in Fig. 1, the dense spiral can transit into either PPSS or NPSS upon increasing $\\varepsilon$. In order to study the choice of NPSS or PPSS after the transformation, we perform extensive simulations with different parameter sets: $\\varepsilon$, $a$, and domain sizes. For example, for the fixed parameters domain size $64$$\\times$$64$ s.u. and $a$$=$$2$, as shown in Fig. 1, we try to connect the spiral type with the parameter $\\varepsilon$. However, at different $\\varepsilon$, the dense spiral transforms into different types of sparse spirals. We find no rule for how the parameter $\\varepsilon$ determines the type of sparse spiral. Here we refer to this transformation as a Ruleless choice. Nevertheless, for some specific parameters, the dense spiral always transits into one type of sparse spiral, either PPSS or NPSS. For example, for domain size $64$$\\times$$64$ and $a$$=$$3$, the dense spiral always transits into a NPSS at different $\\varepsilon$. We give a phase diagram of the spiral types in Fig. 4, in which R represents the Ruleless choice and P (N) indicates the PPSS (NPSS). It is shown that if the domain size is large enough (roughly, the region above the dashed line) the dense spiral evolves into either PPSS or NPSS as $\\varepsilon$ changes. The transformed states PPSS and NPSS can be regarded as two stable states of the system after the bifurcation. Without forcing (for large domain sizes), the choice is ruleless due to the spontaneous symmetry breakdown. For smaller domain sizes, however, this choice tends to be unique after the transformation. It is obvious that this choice originates from boundary effects.\n\n\\indent We want to mention that Fig. 4 shows a phase diagram for different parameters.
However, given a set of parameters, the choice is deterministic. For example, given domain size $64$$\\times$$64$ s.u., $a$$=$$2.0$, and $\\varepsilon$$=$$0.33$, the initial dense spiral will transform into a NPSS as shown in Fig. 1 (d). So, by decreasing $\\varepsilon$ from $0.33$ to $0.3$ and then increasing it to $0.34$, one can turn a NPSS into a dense spiral and then into a PPSS. This process changes the duty ratio of the spiral waves. It is helpful for adjusting the duration of a cardiac action potential to prevent atrial fibrillation. \\cite{Nattel}\n\n\\indent A sparse spiral can also evolve into a dense spiral upon decreasing $\\varepsilon$. From the given asymmetric initial condition [Fig. 1(b)], the system first develops into a sparse spiral. When $\\varepsilon$$<$$0.315$ this sparse spiral further evolves into a dense spiral. We want to mention that, as shown in Fig. 1, when increasing $\\varepsilon$ the transition from dense to sparse spiral occurs at $\\varepsilon$$=$$0.33$. So this transformation is bistable with respect to $\\varepsilon$ and originates from a subcritical bifurcation. This transformation process is very different from that in excitable media, \\cite{Krinsky, Kessler} in which the transformation between sparse and dense spirals is smooth. The present phenomena are attributed to the intrinsic characteristics of the bistable system.\n\n\\subsection{Transformation details from dense spiral to sparse spiral}\n\n\\indent Because this transformation is bistable, here we focus on the details of the transformation from dense spiral to sparse spiral. When the parameter $\\varepsilon$ exceeds $\\varepsilon_{c}$, the system with the given symmetric initial condition [Fig. 1 (a)] evolves first into a dense spiral, then into a sparse spiral. We study the transformation in detail by measuring the intensity change of the variables $u$ and $v$ at a fixed point far away from the spiral tip and the domain boundary, as shown in Fig. 5.
It can be seen that the period of the sparse spiral $T_{1}$ is always larger than that of the dense spiral $T_{0}$. There exists a critical period $T_{c}$ (interval of $u_{+}$) at which the transformation from dense spiral to sparse spiral occurs. It is found that the differences between $T_{0}$ and $T_{c}$ are related to the choice of PPSS or NPSS,\n\\begin{displaymath}\n\\left\\{ \\begin{array}{ll}\nT_0 < T_c, & \\textrm{PPSS},\\\\\nT_0 > T_c, & \\textrm{NPSS}.\n\\end{array} \\right.\n\\end{displaymath}\n\\begin{figure}[htbp]\n \\begin{center}\\includegraphics[width=7cm,height=5cm]{fig5.eps}\n \\caption{Time sequences of the variables $u$ and $v$ around the transformation point for a NPSS. The solid and dashed lines show $u$ and $v$, respectively. $T_{0}$ and $T_{1}$ are the periods of the dense spiral\nand sparse spiral, respectively. $T_{c}$ indicates a critical period.}\n \\label{5}\n\\end{center}\n\\end{figure}\n\n\\indent The appearance of $T_{c}$ indicates the change in velocities of both the front and the back. For the case of small $\\varepsilon$ and $\\delta$, Eqs. (1) and (2) reduce to\n\\begin{eqnarray}\n u_t &=& au - u^3 - v_i + \\nabla^2 u, \\\\\n v &=& v_i (i = f, b),\n\\end{eqnarray}\nwhere $v_{i}$ is the value of the inhibitor at the front or the back. From Eqs.(3) and (4) we can obtain the velocity of the planar front (back) $c_{ni}$=$-\\frac{3}{\\sqrt{2}a}$$v_{i}$($i$=$f$,$b$). So the value $v_{i}$ uniquely determines the velocity of the front (back). For a\ndense spiral, the front and the back are symmetric except for an angular separation of $\\pi$ and travel at the same speed, $|v_{b}|$$=$$|v_{f}|$. The interaction between the front and the back is negligible. Undergoing a subcritical bifurcation, the symmetry between the front and the back breaks down. Here, if $|v_{f}|$$>$$|v_{b}|$, the front speeds up, so the measured critical period $T_{c}$ would be less than $T_{0}$. The accelerated wave front approaches and interacts with the back to form a NPSS.
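The planar-front speed relation $c_{ni}=-\frac{3}{\sqrt{2}a}v_i$ can be checked against a direct 1-D integration of Eq. (3) with $v$ frozen at $v_i$. A rough sketch (explicit Euler with illustrative grid and parameter values; only the magnitude of the speed is compared, to stay clear of sign conventions for the front orientation):

```python
import math

a, v_i = 2.0, -0.1          # illustrative values; predicted |c| = 3|v_i| / (sqrt(2) * a)
dx, dt, N = 0.3, 0.02, 200

# front-like initial condition between the two uniform states of a*u - u^3
u = [-math.sqrt(a)] * (N // 2) + [math.sqrt(a)] * (N - N // 2)

def step(u):
    # explicit Euler update of u_t = a*u - u^3 - v_i + u_xx, no-flux ends
    un = u[:]
    for i in range(1, N - 1):
        lap = (u[i - 1] - 2.0 * u[i] + u[i + 1]) / dx ** 2
        un[i] = u[i] + dt * (a * u[i] - u[i] ** 3 - v_i + lap)
    un[0], un[-1] = un[1], un[-2]
    return un

def front_pos(u):
    # linearly interpolated zero crossing of u
    for i in range(N - 1):
        if u[i] <= 0.0 <= u[i + 1] or u[i] >= 0.0 >= u[i + 1]:
            return (i + abs(u[i]) / (abs(u[i]) + abs(u[i + 1]))) * dx
    raise ValueError("no front in domain")

for _ in range(1000):       # relax the initial step profile (t = 20)
    u = step(u)
x1 = front_pos(u)
for _ in range(3000):       # propagate for a further t = 60
    u = step(u)
x2 = front_pos(u)

c_measured = abs(x2 - x1) / 60.0
c_theory = 3.0 * abs(v_i) / (math.sqrt(2.0) * a)
```

The measured speed agrees with the analytic value up to the discretization error of the coarse grid.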
On the contrary, if $|v_{b}|$$>$$|v_{f}|$ the back speeds up, so $T_{c}$$>$$T_{0}$ and a PPSS will appear.\n\n\\indent The velocities of both the front and the back are also restricted by the eikonal relation, which moderates the velocity difference between the front and the back induced by the symmetry breakdown,\n\\begin{equation}\\label{7}\n c_i = c_{ni}(v_i) - D\\kappa_i,\n\\end{equation}\nwhere $\\kappa_i$ indicates the local curvature. For a NPSS (PPSS) the front and the back are asymmetric and the local curvature $\\kappa_f$ of the front is larger (smaller) than the curvature $\\kappa_b$ of the back far away from the tip. So the final velocities of the front and the back become consistent, which leads to a stable spiral wave.\n\n\\indent In the simulations we find that the period difference between the dense and sparse spirals and the radius $r_{1}$ of the primary circle satisfy the following approximate relation,\n\\begin{equation}\\label{8}\n T_1 - T_0 = 8 r_1.\n\\end{equation}\n\\indent If $r_{1}$$=$$0$, $T_1$ reduces to $T_0$, which corresponds to the dense spiral. Because the velocities of both the front and the back decrease with $\\varepsilon$, $T_{0}$ and $T_{1}$ increase with $\\varepsilon$. The closer the parameter $\\varepsilon$ is to $\\varepsilon_{c}$, the longer the system needs to evolve from the initial condition to a sparse spiral. This is attributed to the critical slowing down near the bifurcation point $\\varepsilon_{c}$.\n\n\\indent In the present case, the system is symmetric and the symmetry breakdown of the front and the back originates from a subcritical bifurcation, which differs from that in Ref. 21. They obtained the dense and sparse spirals in symmetric and asymmetric bistable systems, respectively, in the limit of $\\varepsilon$\/$\\delta$$\\ll$$1$.
In that case, the symmetry breakdown originates from a saddle-node bifurcation induced by a constant term in the model.\n\\begin{figure}[htbp]\n \\begin{center}\\includegraphics[width=7cm,height=8cm]{fig6.eps}\n \\caption{Dependence of the rotation frequency of spirals on the parameter $\\varepsilon$. In (a) the filled square (circle) denotes the dense (sparse) spiral. Domain size: $64$$\\times$$64$ s.u. (b) shows the rotation frequencies of sparse spirals for different domain sizes. The curves for small domain sizes diverge when $\\varepsilon$ is larger than about $0.38$.}\n \\label{6}\n\\end{center}\n\\end{figure}\n\n\\indent Figure 6 (a) shows the dependence of the rotation frequency of dense and sparse spirals on the parameter $\\varepsilon$. The filled square (circle) denotes the dense (sparse) spiral. It can be seen that the rotation frequency of the dense (sparse) spiral varies nonlinearly and nonmonotonically (monotonically) with respect to $\\varepsilon$. The frequency of the sparse spiral is not defined for $\\varepsilon$$<$$0.33$, and is always smaller than that of the dense spiral, which is due to the rigid rotation.\n\n\\subsection{Meandering of sparse spiral induced by boundary effects}\n\n\\indent Figure 6 (b) shows the dependence of the average primary frequency $\\omega_{1}$ on the parameter $\\varepsilon$ for different domain sizes. It can be seen that for small $\\varepsilon$ these curves are superposed. But for large $\\varepsilon$ they begin to diverge. The onset of divergence shifts to larger $\\varepsilon$ with increasing domain size and marks the starting point of spiral meandering. The rotation frequency increases with decreasing domain size at larger $\\varepsilon$. This phenomenon agrees well with the experimental observation in Ref. 28. The radius $r_{1}$ of the primary circle decreases with the domain size for larger $\\varepsilon$ as shown in Fig. 2, which is attributed to the interaction between the spiral tip and the boundary.
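The two-circle tip motion described earlier (a primary circle of radius $r_1$ spinning clockwise with frequency $f_1$ while orbiting the secondary circle of radius $r_2$ anticlockwise with frequency $f_2$) can be illustrated with a simple epicycle parametrization. The numbers below are purely illustrative, not fitted values from the simulations:

```python
import math

r1, r2 = 3.0, 1.0            # primary and secondary radii (illustrative)
f1, f2 = -1.0, 0.3           # opposite signs model the opposite senses of rotation

def tip(t):
    # epicycle model: primary circle riding on the secondary circle
    x = r2 * math.cos(2 * math.pi * f2 * t) + r1 * math.cos(2 * math.pi * f1 * t)
    y = r2 * math.sin(2 * math.pi * f2 * t) + r1 * math.sin(2 * math.pi * f1 * t)
    return x, y

# sample the flower-like trajectory over a long time window
dist = [math.hypot(*tip(0.001 * k)) for k in range(100001)]
```

The tip distance from the center stays within the annulus $[\,|r_1-r_2|,\,r_1+r_2\,]$, and the petals of the flower touch the outer circle whenever the two phases align.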
In addition, near the onset, the sparse spiral begins to meander and the radius $r_{2}$ of the secondary circle tends to infinity. But due to the boundary effects, $r_{2}$ is limited so that the tip stays about $10$ s.u. away from the boundary. With increasing $\\varepsilon$, $r_{2}$ decreases dramatically and then tends to zero at a certain bifurcation point. Near this point the radius $r_{2}$ scales approximately as the square root of the distance from this bifurcation point. This means that the boundary-induced meandering originates from a supercritical bifurcation.\n\n\\indent From the simulations for different domain sizes and $\\varepsilon$ we can give an empirical condition for the meandering of sparse spirals,\n\\begin{equation}\\label{9}\n L\/\\lambda\\in(0.5, 0.7),\n\\end{equation}\nwhere $L$ is the domain size and $\\lambda$ is the wavelength of the sparse spiral. It demonstrates that meandering occurs when the wavelength of the spiral is between about $1.4$ and $2$ times the domain size.\n\n\\indent We observed only meandering sparse spirals with outward petals, even when changing the chirality of the spirals. No meandering spiral with inward petals is found in the simulations. The observed characteristics of the meandering spirals differ greatly from those observed in excitable media in Refs. 29, 30, in which a hypocycloidlike orbit transits into an epicycloidlike orbit as a control parameter changes continuously. That meandering with both outward and inward petals originates from a pair of secondary Hopf bifurcations. But in the present work, from the description of Fig. 3, it can be seen that the obtained meandering of the sparse spiral originates from the boundary effect rather than a regular meandering instability.
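The empirical window $L/\lambda\in(0.5, 0.7)$ is easy to apply as a quick screening check. A trivial helper, with the numerical bounds taken directly from the condition above:

```python
def boundary_meandering_expected(L, wavelength):
    # empirical window from the simulations: 0.5 < L / lambda < 0.7
    return 0.5 < L / wavelength < 0.7
```

For example, a 64 s.u. domain with a spiral wavelength of 100 s.u. falls inside the window ($L/\lambda = 0.64$), while a wavelength of 200 s.u. does not ($L/\lambda = 0.32$).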
By performing a linear stability analysis of spirals, B\\\"{a}r and co-workers \\cite{Bar3} observed a meandering-Hopf bifurcation (regular meandering, which contains a pair of Hopf bifurcations, as Barkley reported) and a boundary-Hopf bifurcation (a boundary-induced Hopf bifurcation, which contains only one Hopf bifurcation point). The boundary-Hopf bifurcation has two characteristics: 1) it occurs only when the domain size is smaller than a critical value; 2) it contains only one bifurcation point. Our results, as shown in Figs. 1-3, satisfy the two conditions. So the meandering spirals in our case result from a boundary-induced Hopf bifurcation. Even if regular meandering existed for bistable spirals, it could not be observed in simulations because the system tends to the uniform state when $\\varepsilon$ is large enough.\n\n\\section{CONCLUSION AND REMARKS}\n\\indent We have studied the transformation between dense and sparse spirals based on a bistable FitzHugh-Nagumo model and investigated a novel meandering of the sparse spiral with only outward petals. The dense spiral and the sparse spiral can transform into each other via a subcritical bifurcation, which differs from that in excitable media. \\cite{Krinsky, Kessler} The dense spiral can transit into either PPSS or NPSS. We can turn a PPSS into a dense spiral and then into a NPSS by decreasing $\\varepsilon$ and then increasing it again, which provides a simple method for adjusting the duration of a cardiac action potential to prevent atrial fibrillation. \\cite{Nattel} By using different domain sizes we have studied the meandering of the sparse spiral and given an empirical condition for spiral meandering induced by the boundary effects, $L$\/$\\lambda$$\\in$($0.5$, $0.7$).
The observed meandering of the sparse spiral with only outward petals originates from a Hopf bifurcation induced by the boundary effects.\n\n\\indent Although the present FitzHugh-Nagumo model is not a realistic model of a chemical reaction, it has successfully explained many pattern formations in ferrocyanide-iodate-sulfite reactions,\\cite{Li2, Hagberg0, Hagberg1, Szalai, Lee1, Lee2, Lee3} such as spirals, self-replicating spots, and labyrinthine patterns, which occur on the left of, near, and on the right of the NIB bifurcation point, respectively. In the present work we studied the transformation between dense and sparse spirals on the left of this bifurcation point. We hope that our results can be observed in a ferrocyanide-iodate-sulfite reaction.\n\n\\section{ACKNOWLEDGMENTS}\n\\indent This work is supported in part by Hong Kong Baptist University and the Hong Kong Research Grants Council. Y. F. He also acknowledges the National Natural Science Foundation of China under Grant Nos. 10775037 and 10947166, and the Research Foundation of the Education Bureau of Hebei Province, China (Grant No. 2009108).\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}