|
{"text":"\\section{\\label{Intro}Introduction}\n\nOver four decades passed since it was first shown that plasmas and\nbeam-plasma systems immersed in an external magnetic field can\nsupport travelling electromagnetic waves with specific features.\nThese waves propagate parallel to the applied magnetic field being\ncircularly polarized in a plane transverse to the direction of\npropagation. It has become conventional in the physics of\nmagnetized plasmas to call such structures waves in the whistler\nmode.\n\nAlthough the linear stability properties of the electromagnetic\nwaves in the whistler mode are relatively well studied\n\\cite{Weibel,Neufeld,Bell,Sudan}, there is a serious gap in the\nunderstanding of their nonlinear behaviour. Chen et al.\n\\cite{Wurtele} have shown that electromagnetic whistler waves can\nbe considered as complementary to the nonlinear travelling\nelectrostatic waves, known as the Bernstein-Greene-Kruskal (BGK)\nmodes \\cite{BGK}. While the BGK modes are longitudinal, the\nwhistler modes are transverse, in other words, the components of\nthe electric and magnetic field of the whistler wave parallel to\nthe external magnetic field are both zero. The study of the\nnonlinear behaviour of whistler waves has been initiated by\nTaniuti and Washimi \\cite{Taniuti}, who obtained a nonlinear\nSchr\\\"{o}dinger equation for the slowly varying amplitude (see\nalso Reference \\cite{Shukla}).\n\nThe present paper is aimed at filling the gap in the understanding\nof the nonlinear evolution of whistler waves. The method adopted\nhere is the renormalization group (RG) method \\cite{Oono,Tzenov}.\nThe basic feature of this approach is that it provides a\nconvenient and straightforward tool to obtain an adequate\ndescription of the physically essential properties of\nself-organization and formation of patterns in complex systems.\nCoherent structures which result from the nonlinear interaction\nbetween plane waves evolve on time and\/or spatial scales\ncomparatively large compared to those the fast oscillations occur.\nThe RG method can be considered as a powerful systematic procedure\nto separate the relatively slow dynamics from the fast one, which\nis of no considerable physical relevance. In a context similar to\nthat of the present paper, it has been successfully applied by one\nof the authors \\cite{Tzenov,Tzenov1} to study collective effects\nin intense charged-particle beams.\n\nThe paper is organized as follows. In the next section, we state\nthe basic equations which will be the subject of the\nrenormalization group reduction in section III. Starting from a\nsingle equation [see equation (\\ref{Waveeqallord})] for the\nelectromagnetic vector potential, we obtain a formal perturbation\nexpansion of its solution to second order. As expected, it\ncontains secular terms proportional to powers of the time variable\nwhich is the only renormalization parameter adopted in our\napproach. In section IV, the arbitrary constant amplitudes of the\nperturbation expansion are renormalized such as to eliminate the\nsecular terms. As a result, a set of equations for the\nrenormalized slowly varying amplitudes is obtained, known as the\nrenormalization group equations (RGEs). 
These equations comprise\nan infinite system of coupled nonlinear Schr\\\"{o}dinger equations.\nIn section V, the latter are analyzed in the simplest case.\nFinally, section VI is dedicated to discussion and conclusions.\n\n\\renewcommand{\\theequation}{\\thesection.\\arabic{equation}}\n\n\\setcounter{equation}{0}\n\n\\section{Formulation of the Problem and Basic Equations}\n\nPlasmas and beam-plasma systems considered in the present paper\nare assumed to be weakly collisional. Therefore, the dynamics of\nplasma species is well described by the hydrodynamic equations\ncoupled with the equations for the electromagnetic self-fields. We\nstart with the equations for plasma in an external constant\nmagnetic field ${\\bf B}_0$, which can be written as follows\n\\begin{equation}\n{\\frac {\\partial n_a} {\\partial t}} + \\nabla \\cdot {\\left( n_a\n{\\bf V}_a \\right)} = 0, \\label{Continuity}\n\\end{equation}\n\\begin{equation}\n{\\frac {{\\rm D}_a {\\bf V}_a} {{\\rm D} t}} = - {\\frac {k_B T_a}\n{m_a n_a}} \\nabla n_a + {\\frac {e q_a} {m_a}} {\\left[ {\\bf E} +\n{\\bf V}_a \\times {\\left( {\\bf B}_0 + {\\bf B} \\right)} \\right]},\n\\label{Mombalance}\n\\end{equation}\nwhere $n_a$ and ${\\bf V}_a$ are the density and the current\nvelocity of the species $a$. Furthermore, $m_a$, $q_a$ and $T_a$\nare the mass, the relative charge and the temperature,\nrespectively, while $k_B$ is the Boltzmann constant. The\nsubstantial derivative on the left-hand side of equation\n(\\ref{Mombalance}) is defined as\n\\begin{equation}\n{\\frac {{\\rm D}_a} {{\\rm D} t}} = {\\frac {\\partial} {\\partial t}}\n+ {\\bf V}_a \\cdot \\nabla. \\label{Substant}\n\\end{equation}\nThe electromagnetic self-fields ${\\bf E}$ and ${\\bf B}$ can be\nobtained in terms of the electromagnetic vector ${\\bf A}$ and\nscalar $\\varphi$ potentials according to the well-known relations\n\\begin{equation}\n{\\bf E} = - \\nabla \\varphi - {\\frac {\\partial {\\bf A}} {\\partial\nt}}, \\qquad {\\bf B} = \\nabla \\times {\\bf A}. \\label{Elmagfield}\n\\end{equation}\nThe latter satisfy the wave equations\n\\begin{equation}\n\\Box {\\bf A} = - \\mu_0 e \\sum \\limits_a n_a q_a {\\bf V}_a, \\qquad\n\\Box \\varphi = - {\\frac {e} {\\epsilon_0}} \\sum \\limits_a n_a q_a,\n\\label{Waveequatvec}\n\\end{equation}\nin the Lorentz gauge\n\\begin{equation}\n{\\frac {1} {c^2}} {\\frac {\\partial \\varphi} {\\partial t}} + \\nabla\n\\cdot {\\bf A} = 0. \\label{Lorgauge}\n\\end{equation}\nHere $\\Box$ denotes the d'Alembert operator. In what\nfollows, we consider the case of a quasineutral plasma\n\\begin{equation}\n\\sum \\limits_a n_a q_a = 0, \\label{Quasineutr}\n\\end{equation}\nin a constant external magnetic field along the $x$-axis ${\\bf\nB}_0 = {\\left( B_0, 0, 0 \\right)}$. Then, equations\n(\\ref{Continuity})--(\\ref{Lorgauge}) possess a stationary solution\n\\begin{equation}\nn_a = n_{a0} = {\\rm const}, \\quad {\\bf V}_a = 0, \\quad {\\bf A} =\n0, \\quad \\varphi = 0. \\label{Statsol}\n\\end{equation}\nThe frequency of the wave will be taken to be much higher than the\nion-cyclotron frequency. 
Therefore, we can further neglect the ion\nmotion and scale the hydrodynamic and field variables as\n\\begin{equation}\nn_e = n_0 + \\epsilon N, \\quad {\\bf V}_e = \\epsilon {\\bf V}, \\quad\n{\\bf A} \\longrightarrow \\epsilon {\\bf A}, \\quad \\varphi\n\\longrightarrow \\epsilon \\varphi, \\label{Scale}\n\\end{equation}\nwhere $\\epsilon$ is a formal small parameter introduced for\nconvenience, which will be set equal to one at the end of the\ncalculations. Thus, the basic equations to be used for the\nsubsequent analysis can be written in the form\n\\begin{equation}\n{\\frac {\\partial N} {\\partial t}} + n_0 \\nabla \\cdot {\\bf V} +\n\\epsilon \\nabla \\cdot {\\left( N {\\bf V} \\right)} = 0,\n\\label{Continuit}\n\\end{equation}\n\\begin{equation}\n{\\frac {\\partial {\\bf V}} {\\partial t}} + \\epsilon {\\bf V} \\cdot\n\\nabla {\\bf V} = - {\\frac {k_B T} {m {\\left( n_0 + \\epsilon N\n\\right)}}} \\nabla N \\nonumber\n\\end{equation}\n\\begin{equation}\n- {\\frac {e} {m}} {\\left[ {\\bf E} + {\\bf V} \\times {\\left( {\\bf\nB}_0 + \\epsilon {\\bf B} \\right)} \\right]}, \\label{Mombalanc}\n\\end{equation}\n\\begin{equation}\n\\Box {\\bf A} = \\mu_0 e {\\left( n_0 + \\epsilon N \\right)} {\\bf V},\n\\qquad {\\frac {1} {c^2}} {\\frac {\\partial \\varphi} {\\partial t}} +\n\\nabla \\cdot {\\bf A} = 0. \\label{Waveequat}\n\\end{equation}\nBefore we continue with the renormalization group reduction of the\nsystem of equations (\\ref{Continuit})--(\\ref{Waveequat}) in the\nnext section, let us assume that the actual dependence of the\nquantities $N$, ${\\bf V}$, ${\\bf A}$ and $\\varphi$ on the spatial\nvariables is represented by the expression\n\\begin{equation}\n{\\widehat{\\Psi}} = {\\widehat{\\Psi}} {\\left( {\\bf x}, {\\bf X}; t\n\\right)}, \\qquad {\\widehat{\\Psi}} = {\\left( N, {\\bf V}, {\\bf A},\n\\varphi \\right)}, \\label{Actualdep}\n\\end{equation}\nwhere ${\\bf X} = \\epsilon {\\bf x}$ is a slow spatial variable.\nThus, the only renormalization parameter left at our disposal is\nthe time $t$, which will prove extremely convenient and will\nsimplify the tedious algebra in the sequel.\n\n\\renewcommand{\\theequation}{\\thesection.\\arabic{equation}}\n\n\\setcounter{equation}{0}\n\n\\section{Renormalization Group Reduction of the Magnetohydrodynamic\nEquations}\n\nFollowing the standard procedure of the renormalization group\nmethod, we represent ${\\widehat{\\Psi}}$ as a perturbation\nexpansion\n\\begin{equation}\n{\\widehat{\\Psi}} = \\sum \\limits_{n=0}^{\\infty} \\epsilon^n\n{\\widehat{\\Psi}}_n, \\label{Perturbexp}\n\\end{equation}\nin the formal small parameter $\\epsilon$. The next step consists\nin expanding the system of hydrodynamic and field equations\n(\\ref{Continuit})--(\\ref{Waveequat}) in the small parameter\n$\\epsilon$, and obtaining their naive perturbation solution order\nby order. 
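To illustrate the bookkeeping, consider the continuity equation\n(\\ref{Continuit}). With the dependence (\\ref{Actualdep}), the\ngradient acts as $\\nabla \\to \\nabla + \\epsilon {\\widehat{\\nabla}}$,\nwhere ${\\widehat{\\nabla}} = \\partial\/\\partial {\\bf X}$ denotes the\ngradient with respect to the slow variables [cf. equation\n(\\ref{Firstorddef}) below]. Substituting the expansion\n(\\ref{Perturbexp}) and collecting powers of $\\epsilon$, the two\nlowest orders read\n\\begin{equation}\n{\\frac {\\partial N_0} {\\partial t}} + n_0 \\nabla \\cdot {\\bf V}_0 =\n0, \\qquad {\\frac {\\partial N_1} {\\partial t}} + n_0 \\nabla \\cdot\n{\\bf V}_1 = - n_0 {\\widehat{\\nabla}} \\cdot {\\bf V}_0 - \\nabla\n\\cdot {\\left( N_0 {\\bf V}_0 \\right)},\n\\end{equation}\nso that each order is driven by source terms built entirely from\nquantities determined at lower orders. 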
Note that in all orders the perturbation equations\nacquire the general form\n\\begin{equation}\n{\\frac {\\partial N_n} {\\partial t}} + n_0 \\nabla \\cdot {\\bf V}_n =\n\\alpha_n, \\label{Continuitn}\n\\end{equation}\n\\begin{equation}\n{\\frac {\\partial {\\bf V}_n} {\\partial t}} = - {\\frac {v_T^2}\n{n_0}} \\nabla N_n - {\\frac {e} {m}} {\\bf E}_n - \\omega_c {\\bf V}_n\n\\times {\\bf e}_x + {\\bf W}_n, \\label{Mombalancn}\n\\end{equation}\n\\begin{equation}\n\\Box {\\bf A}_n = \\mu_0 e n_0 {\\bf V}_n + {\\bf U}_n, \\qquad {\\frac\n{1} {c^2}} {\\frac {\\partial \\varphi_n} {\\partial t}} + \\nabla\n\\cdot {\\bf A}_n = \\beta_n, \\label{Waveequatn}\n\\end{equation}\nwhere $\\alpha_n$, $\\beta_n$, ${\\bf U}_n$ and ${\\bf W}_n$ are\nquantities that have already been determined in previous orders.\nHere\n\\begin{equation}\nv_T^2 = {\\frac {k_B T} {m}}, \\qquad \\omega_c = {\\frac {e B_0} {m}}\n\\label{Parameters}\n\\end{equation}\nare the thermal velocity of electrons and the electron-cyclotron\nfrequency, respectively, and ${\\bf e}_x = {\\left( 1, 0, 0 \\right)}$\nis the unit vector in the $x$-direction. Manipulating equations\n(\\ref{Continuitn})--(\\ref{Waveequatn}) in an obvious manner, it is\npossible to obtain a single equation for ${\\bf A}_n$. The latter\nreads\n\\begin{equation}\n\\Box {\\frac {\\partial^2 {\\bf A}_n} {\\partial t^2}} - v_T^2 \\Box\n\\nabla {\\left( \\nabla \\cdot {\\bf A}_n \\right)} + \\omega_c \\Box\n{\\frac {\\partial {\\bf A}_n} {\\partial t}} \\times {\\bf e}_x\n\\nonumber\n\\end{equation}\n\\begin{equation}\n- {\\frac {\\omega_p^2} {c^2}} {\\frac {\\partial^2 {\\bf A}_n}\n{\\partial t^2}} + \\omega_p^2 \\nabla {\\left( \\nabla \\cdot {\\bf A}_n\n\\right)} = \\mu_0 e n_0 {\\frac {\\partial {\\bf W}_n} {\\partial t}} +\n{\\frac {\\partial^2 {\\bf U}_n} {\\partial t^2}} \\nonumber\n\\end{equation}\n\\begin{equation}\n- \\mu_0 e v_T^2 \\nabla \\alpha_n - v_T^2 \\nabla {\\left( \\nabla\n\\cdot {\\bf U}_n \\right)} + \\omega_c {\\frac {\\partial {\\bf U}_n}\n{\\partial t}} \\times {\\bf e}_x + \\omega_p^2 \\nabla \\beta_n,\n\\label{Waveeqallord}\n\\end{equation}\nwhere\n\\begin{equation}\n\\omega_p^2 = {\\frac {e^2 n_0} {\\epsilon_0 m}}, \\label{Plasmafreq}\n\\end{equation}\nis the electron plasma frequency. Note that the thermal velocity\n$v_T$ as defined by equation (\\ref{Parameters}) can alternatively\nbe expressed as\n\\begin{equation}\nv_T = \\omega_p r_D, \\qquad r_D^2 = {\\frac {\\epsilon_0 k_B T} {e^2\nn_0}}, \\label{Thermvel}\n\\end{equation}\nwhere $r_D$ is the electron Debye radius. Equation\n(\\ref{Waveeqallord}) represents the starting point for the\nrenormalization group reduction, the final goal of which is to\nobtain a description of the relatively slow dynamics leading to\nthe formation of patterns and coherent structures.\n\nLet us proceed order by order. We assume that the dependence on\nthe fast spatial variables ${\\bf x} = {\\left( x, y, z \\right)}$ is\nthrough the longitudinal (parallel to the external magnetic field\n${\\bf B}_0$) $x$-coordinate only. 
At zero order, the source terms in equations\n(\\ref{Continuitn})--(\\ref{Waveequatn}) vanish, so that equation\n(\\ref{Waveeqallord}) is homogeneous and admits plane-wave\nsolutions. The solution to the zero-order perturbation equation\n(\\ref{Waveeqallord}) can be written as\n\\begin{equation}\n{\\bf A}_0 = \\sum \\limits_{k} {\\bf A}_{k}^{(0)} {\\cal A}_{k} {\\rm\ne}^{i \\psi_{k}}, \\label{Zeroordera}\n\\end{equation}\nwhere\n\\begin{equation}\n\\psi_{k} {\\left( x; t \\right)} = k x - \\omega_{k} t, \\label{Phase}\n\\end{equation}\nand ${\\cal A}_{k}$ is an infinite set of constant complex\namplitudes, which will be the subject of the renormalization\nprocedure in the sequel. Here \"constant\" means that the amplitudes\n${\\cal A}_{k}$ do not depend on the fast spatial variable $x$ or\non the time $t$; they can, however, depend on the slow spatial\nvariables ${\\bf X}$. The summation sign in equation\n(\\ref{Zeroordera}) and throughout the paper implies summation over\nthe wave number $k$ in the case where it takes discrete values, or\nintegration in the continuous case. From the dispersion equation\n\\begin{equation}\n{\\cal D} {\\left( k; \\omega_{k} \\right)} = \\omega_k^2 {\\left[\n\\omega_k^2 {\\left( \\Box_k - {\\frac {\\omega_p^2} {c^2}} \\right)}^2\n- \\omega_c^2 \\Box_k^2 \\right]} = 0, \\label{Disperequat}\n\\end{equation}\nit follows that the wave frequency $\\omega_{k}$ can be expressed\nin terms of the wave number $k$, where the Fourier-image $\\Box_k$\nof the d'Alembert operator can be written as\n\\begin{equation}\n\\Box_{k} = {\\frac {\\omega_{k}^2} {c^2}} - k^2. \\label{Dalembert}\n\\end{equation}\nMoreover, it can be verified in a straightforward manner that the\nconstant vector ${\\bf A}_{k}^{(0)}$ can be expressed as\n\\begin{equation}\n{\\bf A}_{k}^{(0)} = {\\left( 0, 1, -i {\\rm sgn} (k) \\right)},\n\\label{Constvect}\n\\end{equation}\nwhere ${\\rm sgn} (k)$ is the sign function. Details\nconcerning the derivation of the dispersion law\n(\\ref{Disperequat}) and equation (\\ref{Constvect}) can be found in\nthe Appendix. Note that equation (\\ref{Constvect}) is an\nalternative representation of the solvability condition\n(\\ref{Appsolcond}). It is important to emphasize that\n\\begin{equation}\n\\omega_{-k} = - \\omega_k, \\qquad {\\cal A}_{-k} = {\\cal\nA}_{k}^{\\ast}, \\label{Importnote}\n\\end{equation}\nwhere the asterisk denotes complex conjugation. The latter ensures\nthat the vector potential as defined by equation\n(\\ref{Zeroordera}) is a real quantity. The zero-order current\nvelocity ${\\bf V}_0$ obtained directly from the first equation\n(\\ref{Waveequatn}) can be written as\n\\begin{equation}\n{\\bf V}_0 = \\sum \\limits_{k} {\\bf V}_{k}^{(0)} {\\cal A}_{k} {\\rm\ne}^{i \\psi_{k}}, \\qquad {\\bf V}_{k}^{(0)} = {\\frac {\\Box_{k}}\n{\\mu_0 e n_0}} {\\bf A}_{k}^{(0)}. \\label{Zeroorderv}\n\\end{equation}\nIn addition, the zero-order density, scalar potential and magnetic\nfield are represented by the expressions\n\\begin{equation}\nN_0 \\equiv 0, \\qquad \\varphi_0 \\equiv 0, \\qquad {\\bf B}_0 = \\sum\n\\limits_{k} {\\bf B}_{k}^{(0)} {\\cal A}_{k} {\\rm e}^{i \\psi_{k}},\n\\label{Zeroordern}\n\\end{equation}\nwhere ${\\bf B}_0$ here denotes the zero-order part of the magnetic\nself-field (not to be confused with the external field) and\n\\begin{equation}\n{\\bf B}_{k}^{(0)} = -k {\\bf A}_{k}^{(0)} {\\rm sgn} (k) = {\\left(\n0, -k {\\rm sgn} (k), ik \\right)}. \\label{Zeroorderb}\n\\end{equation}\n\nIt has been mentioned that the first-order \"source terms\" on the\nright-hand side of equation (\\ref{Waveeqallord}) can be expressed\nvia quantities already known from zero order. 
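Before writing these source terms down explicitly, it is\ninstructive to check that the dispersion relation\n(\\ref{Disperequat}) contains the familiar whistler branch. The\nnontrivial roots of (\\ref{Disperequat}) satisfy\n\\begin{equation}\n\\omega_k {\\left( \\Box_k - {\\frac {\\omega_p^2} {c^2}} \\right)} =\n\\pm \\omega_c \\Box_k,\n\\end{equation}\nwhich, in view of (\\ref{Dalembert}), is the cold-plasma dispersion\nrelation for circularly polarized waves propagating parallel to\n${\\bf B}_0$. In the low-frequency limit $\\omega_k \\ll k c$ one has\n$\\Box_k \\approx - k^2$, and the above relation reduces to\n\\begin{equation}\n\\omega_k \\approx {\\frac {\\omega_c k^2 c^2} {\\omega_p^2 + k^2 c^2}},\n\\end{equation}\nwhich is the textbook whistler dispersion law with $\\omega_k <\n\\omega_c$. With this consistency check at hand, we return to the\nfirst-order source terms. 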
Explicitly, we have\n\\begin{equation}\n\\alpha_1 = - n_0 {\\widehat{\\nabla}} \\cdot {\\bf V}_0, \\qquad\n\\beta_1 = - {\\widehat{\\nabla}} \\cdot {\\bf A}_0,\n\\label{Firstordalp}\n\\end{equation}\n\\begin{equation}\n{\\bf U}_1 = - 2 \\nabla \\cdot {\\widehat{\\nabla}} {\\bf A}_0, \\qquad\n{\\bf W}_1 = - {\\frac {e} {m}} {\\bf V}_0 \\times {\\bf B}_0,\n\\label{Firstordu}\n\\end{equation}\nwhere the shorthand notation\n\\begin{equation}\n{\\widehat{\\nabla}} = {\\frac {\\partial} {\\partial {\\bf X}}}\n\\label{Firstorddef}\n\\end{equation}\nhas been introduced. Note that the vector ${\\bf W}_1$ representing\nthe zero-order Lorentz force has its only nonzero component along\nthe external magnetic field, that is\n\\begin{equation}\n{\\bf W}_1 = {\\bf e}_x \\sum \\limits_{k,l} \\alpha_{kl} {\\cal A}_k\n{\\cal A}_l {\\rm e}^{i {\\left( \\psi_k + \\psi_l \\right)}},\n\\label{Zeroordlf}\n\\end{equation}\nwhere\n\\begin{equation}\n\\alpha_{kl} = - {\\frac {i} {2 \\mu_0 n_0 m}} {\\left( k \\Box_l + l\n\\Box_k \\right)} {\\left[ 1 - {\\rm sgn} (k) {\\rm sgn} (l) \\right]}.\n\\label{Firstordalph}\n\\end{equation}\n\nEquation (\\ref{Waveeqallord}) now has two types of solutions. The\nfirst is a secular solution linearly dependent on the time\nvariable in the first-order approximation. As a rule, the highest\npower of the renormalization parameter appearing in the secular\nterms of the standard perturbation expansion is equal to the\ncorresponding order in the small perturbation parameter. The\nsecond solution of equation (\\ref{Waveeqallord}), arising from the\nnonlinear interaction between waves in first order, is regular.\nOmitting tedious but standard algebra, we present here only the\nresult\n\\begin{equation}\n{\\bf A}_{1} = \\sum \\limits_{k} {\\widehat{\\bf A}}_{k}^{(1)} {\\cal\nA}_{k} {\\rm e}^{i \\psi_{k}} + {\\bf e}_x \\sum \\limits_{k,l}\nA_{kl}^{(1)} {\\cal A}_{k} {\\cal A}_{l} {\\rm e}^{i {\\left( \\psi_k +\n\\psi_l \\right)}} , \\label{Firstordvps}\n\\end{equation}\nwhere\n\\begin{equation}\n{\\widehat{\\bf A}}_{k}^{(1)} = {\\left( {\\widehat{A}}_{kx}^{(1)}, t\n{\\widehat{A}}_{ky}^{(1)}, -i t {\\widehat{A}}_{ky}^{(1)} {\\rm sgn}\n(k) \\right)}. \\label{Firstordvec}\n\\end{equation}\nSome of the details of the calculations are presented in the\nAppendix. 
In explicit form, the components of the vector operator\n${\\widehat{\\bf A}}_{k}^{(1)}$ and those of the infinite matrix\n$A_{kl}^{(1)}$ are given by the expressions\n\\begin{equation}\n{\\widehat{A}}_{kx}^{(1)} = - {\\frac {i k \\beta_k} {\\gamma_k\n\\Box_k}} {\\widehat{\\nabla}}_k, \\qquad {\\widehat{\\nabla}}_k = {\\bf\nA}_{k}^{(0)} \\cdot {\\widehat{\\nabla}}, \\label{Firstordakx}\n\\end{equation}\n\\begin{equation}\n{\\widehat{A}}_{ky}^{(1)} = - {\\frac {{\\widehat{F}}_{k}} {2\n\\omega_k \\alpha_k {\\rm sgn} (k) + \\omega_c \\chi_k}},\n\\label{Firstordaky}\n\\end{equation}\n\\begin{equation}\nA_{kl}^{(1)} = {\\frac {e} {2m v_T^2}} {\\frac {\\omega_k + \\omega_l}\n{\\Box_{kl} {\\cal D}_{kl}}} {\\left( k \\Box_l + l \\Box_k \\right)}\n{\\left[ 1 - {\\rm sgn} (k) {\\rm sgn} (l) \\right]},\n\\label{Firstordaklx}\n\\end{equation}\nwhere\n\\begin{equation}\n{\\widehat{F}}_{k} = 2k \\omega_k {\\left[ \\omega_k {\\rm sgn} (k) +\n\\omega_c \\right]} {\\widehat{\\nabla}}_{X}, \\label{Firstordoper}\n\\end{equation}\n\\begin{equation}\n\\Box_{kl} = {\\frac {{\\left( \\omega_k + \\omega_l \\right)}^2} {c^2}}\n- (k+l)^2, \\label{Firstordconst}\n\\end{equation}\n\\begin{equation}\n{\\cal D}_{kl} = {\\frac {{\\left( \\omega_k + \\omega_l \\right)}^2}\n{v_T^2}} - (k+l)^2 - {\\frac {1} {r_D^2}}. \\label{Firstordconsta}\n\\end{equation}\nIn addition, the constants $\\alpha_k$, $\\beta_k$, $\\gamma_k$ and\n$\\chi_k$ entering the expressions above are given by\n\\begin{equation}\n\\alpha_k = \\Box_k + {\\frac {\\omega_k^2 - \\omega_p^2} {c^2}},\n\\qquad \\beta_k = \\Box_k - {\\frac {1} {r_D^2}}, \\label{Constantsfo}\n\\end{equation}\n\\begin{equation}\n\\gamma_k = {\\frac {\\omega_k^2} {v_T^2}} - k^2 - {\\frac {1}\n{r_D^2}}, \\qquad \\chi_k = \\Box_k + {\\frac {2 \\omega_k^2} {c^2}}.\n\\label{Constantsfor}\n\\end{equation}\nFurthermore, the first-order current velocity can be expressed as\n\\begin{equation}\n{\\bf V}_{1} = \\sum \\limits_{k} {\\widehat{\\bf V}}_{k}^{(1)} {\\cal\nA}_{k} {\\rm e}^{i \\psi_{k}} + {\\bf e}_x \\sum \\limits_{k,l}\nV_{kl}^{(1)} {\\cal A}_{k} {\\cal A}_{l} {\\rm e}^{i {\\left( \\psi_k +\n\\psi_l \\right)}}, \\label{Firstordcvel}\n\\end{equation}\nwhere\n\\begin{equation}\n{\\widehat{\\bf V}}_{k}^{(1)} = {\\left( {\\widehat{V}}_{kx}^{(1)},\n{\\widehat{V}}_{ky}^{(1)}, -i {\\widehat{V}}_{ky}^{(1)} {\\rm sgn}\n(k) \\right)}. 
\\label{Firstordvcvel}\n\\end{equation}\nThe corresponding operators and matrix coefficients can be written\nexplicitly as\n\\begin{equation}\n{\\widehat{V}}_{kx}^{(1)} = {\\frac {\\Box_k} {\\mu_0 e n_0}}\n{\\widehat{A}}_{kx}^{(1)}, \\qquad V_{kl}^{(1)} = {\\frac {\\Box_{kl}}\n{\\mu_0 e n_0}} A_{kl}^{(1)}, \\label{Currentvelx}\n\\end{equation}\n\\begin{equation}\n{\\widehat{V}}_{ky}^{(1)} = {\\frac {1} {\\mu_0 e n_0}} {\\left[ t\n\\Box_k {\\widehat{A}}_{ky}^{(1)} + 2i {\\left( {\\frac {\\omega_k}\n{c^2}} {\\widehat{A}}_{ky}^{(1)} + k {\\widehat{\\nabla}}_X \\right)}\n\\right]}. \\label{Currentvely}\n\\end{equation}\nCalculating the first-order density $N_1$ from equation\n(\\ref{Continuitn}), we obtain\n\\begin{equation}\nN_{1} = \\sum \\limits_{k} {\\widehat{N}}_{k}^{(1)} {\\cal A}_{k} {\\rm\ne}^{i \\psi_{k}} + \\sum \\limits_{k,l} N_{kl}^{(1)} {\\cal A}_{k}\n{\\cal A}_{l} {\\rm e}^{i {\\left( \\psi_k + \\psi_l \\right)}},\n\\label{Firstordden}\n\\end{equation}\n\\begin{equation}\n{\\widehat{N}}_{k}^{(1)} = {\\frac {\\Box_k} {\\mu_0 e \\omega_k}}\n{\\left( k {\\widehat{A}}_{kx}^{(1)} - i {\\widehat{\\nabla}}_k\n\\right)}, \\label{Firstorddenc}\n\\end{equation}\n\\begin{equation}\nN_{kl}^{(1)} = {\\frac {k + l} {2 \\mu_0 m v_T^2 {\\cal D}_{kl}}}\n{\\left( k \\Box_l + l \\Box_k \\right)} {\\left[ 1 - {\\rm sgn} (k)\n{\\rm sgn} (l) \\right]}. \\label{Firstorddenco}\n\\end{equation}\nAnalogously, for the first-order scalar potential $\\varphi_1$, we\nfind\n\\begin{equation}\n\\varphi_1 = \\sum \\limits_{k} {\\widehat{\\varphi}}_{k}^{(1)} {\\cal\nA}_{k} {\\rm e}^{i \\psi_{k}} + \\sum \\limits_{k,l}\n\\varphi_{kl}^{(1)} {\\cal A}_{k} {\\cal A}_{l} {\\rm e}^{i {\\left(\n\\psi_k + \\psi_l \\right)}}, \\label{Firstordscp}\n\\end{equation}\n\\begin{equation}\n{\\widehat{\\varphi}}_{k}^{(1)} = {\\frac {e} {\\epsilon_0 \\Box_k}}\n{\\widehat{N}}_{k}^{(1)} = {\\frac {c^2} {\\omega_k}} {\\left( k\n{\\widehat{A}}_{kx}^{(1)} - i {\\widehat{\\nabla}}_k \\right)},\n\\label{Firstordscpc}\n\\end{equation}\n\\begin{equation}\n\\varphi_{kl}^{(1)} = {\\frac {e c^2 (k+l)} {2 m v_T^2 \\Box_{kl}\n{\\cal D}_{kl}}} {\\left( k \\Box_l + l \\Box_k \\right)} {\\left[ 1 -\n{\\rm sgn} (k) {\\rm sgn} (l) \\right]}. \\label{Firstordscpco}\n\\end{equation}\nFinally, the first-order magnetic field is calculated to be\n\\begin{equation}\n{\\bf B}_{1} = \\sum \\limits_{k} {\\widehat{\\bf B}}_{k}^{(1)} {\\cal\nA}_{k} {\\rm e}^{i \\psi_{k}}, \\label{Firstordmagf}\n\\end{equation}\nwhere\n\\begin{equation}\n{\\widehat{\\bf B}}_{k}^{(1)} = {\\left( - i {\\rm sgn} (k)\n{\\widehat{\\nabla}}_k, {\\widehat{B}}_{ky}^{(1)}, -i\n{\\widehat{B}}_{ky}^{(1)} {\\rm sgn} (k) \\right)},\n\\label{Firstordmagfi}\n\\end{equation}\n\\begin{equation}\n{\\widehat{B}}_{ky}^{(1)} = - {\\rm sgn} (k) {\\left( t k\n{\\widehat{A}}_{ky}^{(1)} - i {\\widehat{\\nabla}}_X \\right)}.\n\\label{Firstordmafi}\n\\end{equation}\n\nA couple of interesting features of the zero- and first-order\nperturbation solutions are worth commenting on at this point.\nFirst of all, the zero-order density $N_0$ vanishes, which means\nthat no density waves are induced by the whistler eigenmodes. The\nsecond terms in the expressions for the first-order density $N_1$\nand current velocity ${\\bf V}_1$ [see equations\n(\\ref{Firstordcvel}) and (\\ref{Firstordden})] imply contributions\nfrom the nonlinear interaction between waves mediated by the\nnonlinear Lorentz force. 
It will be shown in the remainder of the paper\nthat these terms give rise to nonlinear terms in the\nrenormalization group equation and describe solitary wave\nbehaviour of the whistler mode.\n\n\\renewcommand{\\theequation}{\\thesection.\\arabic{equation}}\n\n\\setcounter{equation}{0}\n\n\\section{The Renormalization Group Equation}\n\nPassing over to the final stage of our renormalization group\nprocedure, we note that in second order the quantities ${\\bf U}_2$\nand ${\\bf W}_2$ entering the right-hand side of equation\n(\\ref{Waveeqallord}) can be written as\n\\begin{equation}\n{\\bf U}_2 = - 2 \\nabla \\cdot {\\widehat{\\nabla}} {\\bf A}_1 -\n{\\widehat{\\nabla}}^2 {\\bf A}_0 + \\mu_0 e N_1 {\\bf V}_0,\n\\label{Secondordu}\n\\end{equation}\n\\begin{equation}\n{\\bf W}_2 = {\\frac {e} {m}} {\\widehat{\\nabla}} \\varphi_1 - {\\frac\n{v_T^2} {n_0}} {\\widehat{\\nabla}} N_1 - {\\bf V}_1 \\cdot \\nabla\n{\\bf V}_0 - {\\frac {e} {m}} {\\bf V}_1 \\times {\\bf B}_0.\n\\label{Secondordw}\n\\end{equation}\nSince we are interested only in the secular terms in second order,\nappearing in the expressions for the $y$ and $z$ components of the\nelectromagnetic vector potential ${\\bf A}_2$, it suffices to\nretain only those contributions to the source vectors ${\\bf U}_2$\nand ${\\bf W}_2$ that lead to such terms in order to complete the\nrenormalization group procedure. Thus, we can write\n\\begin{equation}\n{\\bf A}_2 = \\sum \\limits_{k} {\\left( t {\\widehat{\\bf A}}_k^{(2)} +\nt^2 {\\widehat{\\bf C}}_k \\right)} {\\cal A}_{k} {\\rm e}^{i \\psi_{k}}\n\\nonumber\n\\end{equation}\n\\begin{equation}\n+ t \\sum \\limits_{k} {\\widehat{\\bf D}}_k^{(2)} {\\cal A}_{k} {\\rm\ne}^{i \\psi_{k}} + t \\sum \\limits_{k,l} {\\mathbf \\Gamma}_{kl}\n{\\left| {\\cal A}_{l} \\right|}^2 {\\cal A}_{k} {\\rm e}^{i \\psi_k}.\n\\label{Secondordvps}\n\\end{equation}\nAn important remark is in order at this point. From the\nsolvability condition (\\ref{Appsolcond}) it follows that the\ncomplex amplitude ${\\cal A}_k$ must satisfy the complex Poisson\nequation\n\\begin{equation}\n{\\widehat{\\nabla}}_k^2 {\\cal A}_k = 0. \\label{Secondordscon}\n\\end{equation}\nThe latter imposes additional restrictions on the dependence of\nthe wave amplitudes ${\\cal A}_k$ on the slow transverse\nindependent variables $Y$ and $Z$. 
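It is worth making these restrictions explicit. With the\nconvention (\\ref{Constvect}), the operator ${\\widehat{\\nabla}}_k =\n{\\bf A}_{k}^{(0)} \\cdot {\\widehat{\\nabla}}$ defined in\n(\\ref{Firstordakx}) reads ${\\widehat{\\nabla}}_k =\n{\\widehat{\\nabla}}_Y - i {\\widehat{\\nabla}}_Z {\\rm sgn} (k)$, and a\nshort calculation shows that the general solution of equation\n(\\ref{Secondordscon}) is\n\\begin{equation}\n{\\cal A}_k = f_0 {\\left( \\bar{\\zeta} \\right)} + \\zeta f_1 {\\left(\n\\bar{\\zeta} \\right)}, \\qquad \\zeta = Y + i Z {\\rm sgn} (k), \\qquad\n\\bar{\\zeta} = Y - i Z {\\rm sgn} (k),\n\\end{equation}\nwith arbitrary functions $f_0$ and $f_1$, which may also depend on\nthe slow longitudinal variable $X$. The simplest admissible\nchoice, adopted in section V, is an amplitude ${\\cal A}_k$ that\ndoes not depend on $Y$ and $Z$ at all. We now turn to the explicit\nform of the second-order coefficients. 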
Straightforward calculations\nyield (see the Appendix for details)\n\\begin{equation}\n{\\widehat{A}}_{ky}^{(2)} = - {\\frac {i {\\rm sgn} (k)} {2 \\omega_k\n\\alpha_k {\\rm sgn} (k) + \\omega_c \\chi_k}} {\\left( \\beta_k^{(2)}\n{\\widehat{A}}_{ky}^{(1) {\\bf 2}} - {\\widehat{G}}_k \\right)},\n\\label{Secondordaky}\n\\end{equation}\n\\begin{equation}\n{\\widehat{D}}_{ky}^{(2)} = {\\frac {i v_T^2 \\beta_k {\\rm sgn} (k)}\n{2 \\omega_k \\alpha_k {\\rm sgn} (k) + \\omega_c \\chi_k}} {\\left( 1 +\n{\\frac {k^2 \\beta_k} {\\gamma_k \\Box_k}} \\right)}\n{\\widehat{\\nabla}}_Y {\\widehat{\\nabla}}_k, \\label{Secondorddky}\n\\end{equation}\n\\begin{equation}\n{\\widehat{C}}_{ky} = {\\frac {1} {2}} {\\widehat{A}}_{ky}^{(1) {\\bf\n2}}, \\label{Secondordcky}\n\\end{equation}\nwhere\n\\begin{equation}\n\\beta_k^{(2)} = \\alpha_k + {\\frac {4 \\omega_k^2} {c^2}} + {\\frac\n{3 \\omega_c \\omega_k} {c^2}} {\\rm sgn} (k), \\label{Secondordcon}\n\\end{equation}\n\\begin{equation}\n{\\widehat{G}}_k = \\omega_k {\\rm sgn} (k) {\\left[ \\omega_k {\\rm\nsgn} (k) + \\omega_c \\right]} {\\widehat{\\nabla}}^2.\n\\label{Secondordoper}\n\\end{equation}\nThe matrix coefficient $\\Gamma_{kly}$ determining the nonlinear\ncontribution represented by the last term in equation\n(\\ref{Secondordvps}) reads explicitly as\n\\begin{equation}\n\\Gamma_{kly} = - {\\frac {1 - {\\rm sgn} (k) {\\rm sgn} (l)} {\\mu_0\nn_0 m v_T^2 \\omega_l {\\cal D}_{kl}}} {\\frac {i \\omega_k \\Box_l\n{\\left( k \\Box_l + l \\Box_k \\right)} {\\rm sgn} (k)} {2 \\omega_k\n\\alpha_k {\\rm sgn} (k) + \\omega_c \\chi_k}} \\nonumber\n\\end{equation}\n\\begin{equation}\n\\times {\\left[ \\omega_c {\\left( l \\omega_k - k \\omega_l \\right)}\n{\\rm sgn} (l) + (k+l) \\omega_k \\omega_l \\right]}.\n\\label{Secondordgam}\n\\end{equation}\nFollowing the standard procedure \\cite{Tzenov} of the RG method,\nwe finally obtain the desired RG equation\n\\begin{equation}\n{\\frac {\\partial {\\widetilde{\\cal A}}_k} {\\partial t}} - \\epsilon\n{\\widehat{A}}_{ky}^{(1)} {\\widetilde{\\cal A}}_k \\nonumber\n\\end{equation}\n\\begin{equation}\n= \\epsilon^2 {\\left( {\\widehat{A}}_{ky}^{(2)} +\n{\\widehat{D}}_{ky}^{(2)} \\right)} {\\widetilde{\\cal A}}_k +\n\\epsilon^2 \\sum \\limits_l \\Gamma_{kly} {\\left| {\\widetilde{\\cal\nA}}_l \\right|}^2 {\\widetilde{\\cal A}}_k, \\label{Rgroupeq}\n\\end{equation}\nwhere now ${\\widetilde{\\cal A}}_k$ is the renormalized complex\namplitude \\cite{Tzenov}. Thus, the renormalized solution for the\nelectromagnetic vector potential acquires the form\n\\begin{equation}\n{\\bf A} = \\sum \\limits_{k} {\\bf A}_{k}^{(0)} {\\widetilde{\\cal\nA}}_{k} {\\rm e}^{i \\psi_{k}}. \\label{Renormsolut}\n\\end{equation}\nAnalogously, for the electric and magnetic field of the whistler\nwave, one can obtain in a straightforward manner the following\nexpressions\n\\begin{equation}\n{\\bf B} = \\sum \\limits_{k} {\\bf B}_{k}^{(0)} {\\widetilde{\\cal\nA}}_{k} {\\rm e}^{i \\psi_{k}}, \\qquad {\\bf E} = i \\sum \\limits_{k}\n\\omega_k {\\bf A}_{k}^{(0)} {\\widetilde{\\cal A}}_{k} {\\rm e}^{i\n\\psi_{k}}. 
\\label{Renormsolb}\n\\end{equation}\n\nIt is important to mention that the plasma density remains\nunchanged ($N = 0$), in contrast to the case of electrostatic\nwaves, where the evolution of the induced electrostatic waves\nfollows the evolution of the density waves.\n\n\\renewcommand{\\theequation}{\\thesection.\\arabic{equation}}\n\n\\setcounter{equation}{0}\n\n\\section{\\label{Essent}System of Coupled Nonlinear Schr\\\"{o}dinger\nEquations}\n\nThe simplest case in which the solvability condition\n(\\ref{Secondordscon}) holds is the one in which the slow wave\namplitudes ${\\cal A}_k$ do not depend on the transverse\ncoordinates. Setting $\\epsilon = 1$ in equation (\\ref{Rgroupeq}),\nwe obtain the following system of coupled nonlinear\nSchr\\\"{o}dinger equations\n\\begin{equation}\ni {\\rm sgn} (k) {\\frac {\\partial {\\cal A}_k} {\\partial t}} + i\n\\nu_k {\\rm sgn} (k) {\\frac {\\partial {\\cal A}_k} {\\partial x}} =\n\\lambda_k {\\frac {\\partial^2 {\\cal A}_k} {\\partial x^2}} + \\sum\n\\limits_l \\mu_{kl} {\\left| {\\cal A}_l \\right|}^2 {\\cal A}_k,\n\\label{Couplednse}\n\\end{equation}\nwhere for simplicity the tilde-sign over the renormalized\namplitude has been dropped. Moreover, the coefficients $\\nu_k$,\n$\\lambda_k$ and $\\mu_{kl}$ are given by the expressions\n\\begin{equation}\n\\nu_k = {\\frac {2k \\omega_k {\\left[ \\omega_k {\\rm sgn} (k) +\n\\omega_c \\right]}} {2 \\omega_k \\alpha_k {\\rm sgn} (k) + \\omega_c\n\\chi_k}}, \\label{Coefficientnu}\n\\end{equation}\n\\begin{equation}\n\\lambda_k = {\\frac {\\omega_k {\\left[ \\omega_k {\\rm sgn} (k) +\n\\omega_c \\right]}} {2 \\omega_k \\alpha_k {\\rm sgn} (k) + \\omega_c\n\\chi_k}} \\nonumber\n\\end{equation}\n\\begin{equation}\n\\times {\\left\\{ {\\frac {4k^2 \\omega_k \\beta_k^{(2)} {\\left[\n\\omega_k {\\rm sgn} (k) + \\omega_c \\right]}} {{\\left[ 2 \\omega_k\n\\alpha_k {\\rm sgn} (k) + \\omega_c \\chi_k \\right]}^2}} - {\\rm sgn}\n(k) \\right\\}}, \\label{Coefficientla}\n\\end{equation}\n\\begin{equation}\n\\mu_{kl} = {\\frac {1 - {\\rm sgn} (k) {\\rm sgn} (l)} {\\mu_0 n_0 m\nv_T^2 \\omega_l {\\cal D}_{kl}}} {\\frac {\\omega_k \\Box_l {\\left( k\n\\Box_l + l \\Box_k \\right)}} {2 \\omega_k \\alpha_k {\\rm sgn} (k) +\n\\omega_c \\chi_k}} \\nonumber\n\\end{equation}\n\\begin{equation}\n\\times {\\left[ \\omega_c {\\left( l \\omega_k - k \\omega_l \\right)}\n{\\rm sgn} (l) + (k+l) \\omega_k \\omega_l \\right]}.\n\\label{Coefficientmu}\n\\end{equation}\nInterestingly enough, the infinite matrix of coupling coefficients\n$\\mu_{kl}$ encodes a set of selection rules. Clearly,\n\\begin{equation}\n\\mu_{kk} = 0, \\qquad \\mu_{k, -k} = 0, \\label{Selectrule1}\n\\end{equation}\nand\n\\begin{equation}\n\\mu_{kl} = 0, \\qquad {\\rm for} \\quad {\\rm sgn} (k) {\\rm sgn} (l) =\n1. \\label{Selectrule2}\n\\end{equation}\nThis means that a generic mode with a wave number $k$ cannot\ncouple with itself, nor can it couple with another mode whose wave\nnumber has the same sign. Note that this feature is a consequence\nof the vector character of the nonlinear coupling between modes\nand is due to the nonlinear Lorentz force. Therefore, for a given\nmode $k$ the simplest nontrivial reduction of the infinite system\nof coupled nonlinear Schr\\\"{o}dinger equations consists of at\nleast two coupled equations.\n\nWithout loss of generality, we can assume in what follows that the\nsign of an arbitrary mode $k$ under consideration is positive ($k\n> 0$). 
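Before constructing the simplest two-mode reduction explicitly, we\nnote that truncations of the system (\\ref{Couplednse}) are readily\nexplored numerically. The sketch below, given for illustration\nonly, integrates a pair of coupled equations of the form\n(\\ref{Couplednse}) by the standard split-step Fourier scheme; the\ncoefficients nu1, lam1, mu1, nu2, lam2 and mu2 are placeholder\nnumerical values and are not computed from the plasma parameters.\n\\begin{verbatim}\nimport numpy as np\n\n# Split-step Fourier integrator for two coupled nonlinear\n# Schroedinger equations of the form\n#   i dA\/dt + i nu dA\/dx = lam d2A\/dx2 + mu |B|^2 A\n# and the analogous equation for B.\nnx, L, dt, nsteps = 256, 50.0, 1.0e-3, 5000\nx = np.linspace(-L \/ 2, L \/ 2, nx, endpoint=False)\nq = 2 * np.pi * np.fft.fftfreq(nx, d=L \/ nx)  # spectral wave numbers\nnu1, lam1, mu1 = 0.5, 1.0, -2.0     # mode k (placeholder values)\nnu2, lam2, mu2 = -0.5, 1.0, -2.0    # mode -l (placeholder values)\n\nA = 0.8 \/ np.cosh(x)                # illustrative initial envelopes\nB = 0.8 \/ np.cosh(x - 5.0)\n\n# Linear propagators: in Fourier space dA\/dt = i(lam q^2 - nu q)A\nLA = np.exp(1j * (lam1 * q**2 - nu1 * q) * dt)\nLB = np.exp(1j * (lam2 * q**2 - nu2 * q) * dt)\n\nfor _ in range(nsteps):\n    # half-step nonlinearity (exact pointwise phase rotation)\n    An = A * np.exp(-0.5j * mu1 * np.abs(B)**2 * dt)\n    Bn = B * np.exp(-0.5j * mu2 * np.abs(A)**2 * dt)\n    # full linear step in Fourier space\n    A = np.fft.ifft(LA * np.fft.fft(An))\n    B = np.fft.ifft(LB * np.fft.fft(Bn))\n    # second half-step nonlinearity (Strang splitting)\n    An = A * np.exp(-0.5j * mu1 * np.abs(B)**2 * dt)\n    Bn = B * np.exp(-0.5j * mu2 * np.abs(A)**2 * dt)\n    A, B = An, Bn\n\nprint('max |A| =', np.abs(A).max(), 'max |B| =', np.abs(B).max())\n\\end{verbatim}\nEach substep of this scheme preserves the $L^2$ norm of both\nenvelopes exactly (up to round-off), which provides a convenient\ncheck on the integration. We now construct the two-mode reduction\nexplicitly. 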
Suppose that for a particular whistler mode with a\npositive wave number $k$ there exists a mode with wave number $-l$\nfor which the coupling coefficient $\\mu_{k, -l}$ is maximal.\nNeglecting all modes but $k$ and $-l$, we can write\n\\begin{equation}\ni {\\frac {\\partial {\\cal A}_k} {\\partial t}} + i \\nu_k {\\frac\n{\\partial {\\cal A}_k} {\\partial x}} = \\lambda_k {\\frac {\\partial^2\n{\\cal A}_k} {\\partial x^2}} + \\mu_1 {\\left| {\\cal A}_l \\right|}^2\n{\\cal A}_k, \\label{Couplednsek}\n\\end{equation}\n\\begin{equation}\ni {\\frac {\\partial {\\cal A}_l} {\\partial t}} + i \\nu_l {\\frac\n{\\partial {\\cal A}_l} {\\partial x}} = \\lambda_l {\\frac {\\partial^2\n{\\cal A}_l} {\\partial x^2}} + \\mu_2 {\\left| {\\cal A}_k \\right|}^2\n{\\cal A}_l, \\label{Couplednsel}\n\\end{equation}\nwhere\n\\begin{equation}\n\\mu_1 = {\\frac {2} {\\mu_0 n_0 m v_T^2 \\omega_l {\\cal D}_{k, -l}}}\n{\\frac {\\omega_k \\Box_l {\\left( k \\Box_l - l \\Box_k \\right)}} {2\n\\omega_k \\alpha_k + \\omega_c \\chi_k}} \\nonumber\n\\end{equation}\n\\begin{equation}\n\\times {\\left[ \\omega_c {\\left( k \\omega_l - l \\omega_k \\right)} +\n(k-l) \\omega_k \\omega_l \\right]}, \\label{Coefficientmu1}\n\\end{equation}\n\\begin{equation}\n\\mu_2 = {\\frac {2} {\\mu_0 n_0 m v_T^2 \\omega_k {\\cal D}_{k, -l}}}\n{\\frac {\\omega_l \\Box_k {\\left( k \\Box_l - l \\Box_k \\right)}} {2\n\\omega_l \\alpha_l + \\omega_c \\chi_l}} \\nonumber\n\\end{equation}\n\\begin{equation}\n\\times {\\left[ \\omega_c {\\left( k \\omega_l - l \\omega_k \\right)} +\n(k-l) \\omega_k \\omega_l \\right]}. \\label{Coefficientmu2}\n\\end{equation}\n\nThe system of coupled nonlinear Schr\\\"{o}dinger equations\n(\\ref{Couplednsek}) and (\\ref{Couplednsel}) is not integrable in\ngeneral \\cite{Manakov}. It represents an important starting point\nfor further investigations of the nonlinear dynamics and evolution\nof whistler waves in magnetized plasmas.\n\n\\renewcommand{\\theequation}{\\thesection.\\arabic{equation}}\n\n\\setcounter{equation}{0}\n\n\\section{Discussion and conclusions}\n\nWe studied the nonlinear dynamics of whistler waves in magnetized\nplasmas. Since the plasmas and beam-plasma systems considered here\nare assumed to be weakly collisional, the point of reference for\nthe analysis performed in the present paper is the system of\nhydrodynamic and field equations. We apply the renormalization\ngroup method to obtain dynamical equations for the slowly varying\namplitudes of whistler waves. As a result of the investigation\nperformed, it has been shown that the amplitudes of eigenmodes\nsatisfy an infinite system of coupled nonlinear Schr\\\"{o}dinger\nequations. In this sense, the whistler eigenmodes form a sort of\ngas of interacting quasiparticles, while the slowly varying\namplitudes can be considered as dynamical variables carrying the\nrelevant information about the system.\n\nAn important feature of our description is that whistler waves do\nnot perturb the initial uniform density of plasma electrons. The\nplasma response to the induced whistler waves consists in a\nvelocity redistribution that exactly follows the behaviour of the\nwhistlers. Another interesting peculiarity is the set of selection\nrules governing the nonlinear mode coupling. According to these\nrules, modes with the same sign do not couple, which is a direct\nconsequence of the vector character of the interaction. 
Careful\ninspection shows that the initial source of the nonlinear\ninteraction between waves in the whistler mode is the zero-order\nLorentz force [see equation (\\ref{Zeroordlf})]. Since the quantity\n${\\bf W}_1$ is proportional to ${\\bf A}_k^{(0)} \\times {\\bf\nA}_l^{(0)}$, the above-mentioned selection rules follow directly,\nsince the only case in which the cross product does not vanish is\nthe one where the modes $k$ and $l$ have opposite signs.\n\nWe believe that the results obtained in the present paper may\nhave a wide range of possible applications ranging from laboratory\nexperiments to observations of a variety of effects relevant to\nspace plasmas.\n\n\\begin{acknowledgments}\nIt is a pleasure to thank B. Baizakov for many interesting and\nuseful discussions concerning the subject of the present paper.\n\\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
|
{"text":"\\section{Introduction}\nThe detection of primordial gravitational waves (GWs) via B-mode\npolarization of Cosmic Microwave Background Radiation (CMBR) by\nBICEP2~\\cite{Ade:2014xna} has shown that the cosmic inflation\noccurred at a high scale of $10^{16}$ GeV is the most plausible\nsource of generating primordial GWs. However, more data are\nrequired to confirm the above situation. The primordial GWs can be\nimprinted in the anisotropies and polarization spectrum of CMBR\n by making the photon redshifts. The B-mode signal\nobserved by BICEP2 might contain contributions from other sources\nof vector modes and cosmic strings, in addition to tensor\nmodes~\\cite{Moss:2014cra}.\n\nThe conformal gravity\n$C^{\\mu\\nu\\rho\\sigma}C_{\\mu\\nu\\rho\\sigma}(=C^2)$ of being invariant\nunder the conformal transformation of $g_{\\mu\\nu}\\to\n\\Omega^2(x)g_{\\mu\\nu}$ has its own interests in gravity and\ncosmology. On the gravity side, it gives us a promising combination\nof $R_{\\mu\\nu}^2-R^2\/3$ up to the Gauss-Bonnet term which kills\nmassive scalar GWs when it couples to Einstein gravity (known to be\nthe Einstein-Weyl gravity)~\\cite{Lu:2011zk}.\nStelle~\\cite{Stelle:1976gc} has first introduced the quadratic\ncurvature gravity of $a(R_{\\mu\\nu}^2-R^2\/3)+b R^2$ to improve the\nperturbatively renormalizable property of Einstein gravity. In case\nof $ab\\not=0$, the renormalizability was achieved but the unitarity\nwas violated unless $a=0$, showing that the renormalizability and\nunitarity exclude to each other. Although the $a$-term of providing\nmassive GWs improves the ultraviolet divergence, it induces\nsimultaneously ghost excitations which spoil the unitarity. The\nprice one has to pay for making the theory renormalizable is the\nloss of unitarity. This issue is not resolved completely until now.\n\nHowever, the conformal gravity itself is renormalizable. Also, it\nprovides the AdS black hole solution~\\cite{Riegert:1984zz} and its\nthermodynamic properties and holography were discussed extensively\nin the literature~\\cite{Klemm:1998kf,Lu:2012xu,Grumiller:2013mxa}.\nThe authors have investigated the AdS black hole thermodynamics and\nstability in the Einstein-Weyl gravity and in the limit of the\nconformal gravity~\\cite{Myung:2013uka}.\n\nOn the cosmology side of the conformal gravity, it provides surely a\nmassive vector propagation generated during de Sitter inflation in\naddition to massive tensor ghosts when it couples to Einstein\ngravity~\\cite{Clunan:2009er,Deruelle:2010kf,Deruelle:2012xv}.\nRecently, the authors have shown that in the limit of $m^2 \\to 0$\n(keeping the conformal gravity only), the vector and tensor power\nspectra disappear. It implies that their power spectra are not\ngravitationally produced because the vector and tensor perturbations\nare decoupled from the expanding de Sitter background. This occurs\ndue to conformal invariance as a transversely massive vector has\nbeen shown in the $m^2\\to 0$ limit of the massive Maxwell theory\n($-F^2\/4+m^2A^2\/2$)~\\cite{Dimopoulos:2006ms}. We note here that\n$F^2$ is conformally invariant like $C^2$ under the transformation\nof $g_{\\mu\\nu}\\to \\Omega^2(x)g_{\\mu\\nu}$~\\cite{Myung:2014aia}.\n The\nconformal gravity implication to cosmological perturbation was first\nstudied in~\\cite{Mannheim:2011is} which might indicate that there\nexists a difference between conformal and Einstein gravities in\ntheir perturbed equations around de Sitter background. 
Even though\nhe obtained a ``degenerate fourth-order equation\" for the metric\nperturbation tensor $h_{\\mu\\nu}$ from the conformal gravity, no\nrelevant quantity was found, because he did not split\n$h_{\\mu\\nu}$ according to the SO(3) decomposition for cosmological\nperturbations. As far as we know, there is no definite computation\nof an observable like the power spectrum in the conformal gravity.\n\nIn this Letter, we will study the conformal gravity as a\nhigher-order gravity theory to compute the vector and tensor power\nspectra generated from de Sitter inflation. Taking the conformal\ninvariance of the conformal gravity seriously, we expect to obtain\nconstant power spectra for vector and tensor perturbations.\n\n\n\\section{Conformal gravity }\n\n\nLet us first consider the conformal gravity whose action is given\nby\n\\begin{equation} \\label{ECSW}\nS_{\\rm CG}=\\frac{1}{4\\kappa m^2}\\int d^4x\n\\sqrt{-g}\\Big[C^{\\mu\\nu\\rho\\sigma}C_{\\mu\\nu\\rho\\sigma}\\Big],\n\\end{equation}\nwhere the Weyl-squared term is given by\n\\begin{eqnarray}\nC^{\\mu\\nu\\rho\\sigma}C_{\\mu\\nu\\rho\\sigma}&=&2\\Big(R^{\\mu\\nu}R_{\\mu\\nu}-\\frac{1}{3}R^2\\Big)+\n(R^{\\mu\\nu\\rho\\sigma}R_{\\mu\\nu\\rho\\sigma}-4R^{\\mu\\nu}R_{\\mu\\nu}+R^2)\n\\end{eqnarray}\nwith the Weyl tensor\n\\begin{equation}\nC_{\\mu\\nu\\rho\\sigma}=R_{\\mu\\nu\\rho\\sigma}-\\frac{1}{2}\\Big(\ng_{\\mu\\rho}R_{\\nu\\sigma}-g_{\\mu\\sigma}R_{\\nu\\rho}-g_{\\nu\\rho}R_{\\mu\\sigma}+g_{\\nu\\sigma}R_{\\mu\\rho}\\Big)+\\frac{1}{6}R(g_{\\mu\\rho}g_{\\nu\\sigma}-g_{\\mu\\sigma}g_{\\nu\\rho}).\n\\end{equation}\nHere we have $\\kappa=8\\pi G=1\/M^2_{\\rm P}$, with $M_{\\rm P}$ the\nreduced Planck mass, and a mass-squared parameter $m^2$ is\nintroduced to make the action dimensionless. Greek indices run\nfrom 0 to 3 with conventions $(-+++)$, while Latin indices run\nfrom 1 to 3. Further, we note that the Weyl-squared term is\ninvariant under the conformal transformation of $g_{\\mu\\nu} \\to\n\\Omega^2(x) g_{\\mu\\nu}$.\n\nIts equation of motion takes the form\n\\begin{equation} \\label{ein-eq}\n2 \\nabla^\\rho\\nabla^\\sigma\nC_{\\mu\\rho\\nu\\sigma}+G^{\\rho\\sigma}C_{\\mu\\rho\\nu\\sigma}=0\n\\end{equation} with the Einstein tensor $G_{\\mu\\nu}$.\nA solution is de Sitter space, which, being conformally flat, has\nvanishing Weyl tensor and hence satisfies (\\ref{ein-eq})\nidentically; its curvature quantities are given by\n\\begin{equation}\n\\bar{R}_{\\mu\\nu\\rho\\sigma}=H^2(\\bar{g}_{\\mu\\rho}\\bar{g}_{\\nu\\sigma}-\\bar{g}_{\\mu\\sigma}\\bar{g}_{\\nu\\rho}),~~\\bar{R}_{\\mu\\nu}=3H^2\\bar{g}_{\\mu\\nu},~~\\bar{R}=12H^2\n\\end{equation}\nwith $H$=constant. We describe the de Sitter background explicitly\nin terms of the conformal time $\\eta$\n\\begin{eqnarray} \\label{frw}\nds^2_{\\rm dS}=\\bar{g}_{\\mu\\nu}dx^\\mu\ndx^\\nu=a(\\eta)^2\\Big[-d\\eta^2+\\delta_{ij}dx^idx^j\\Big],\n\\end{eqnarray}\nwhere the conformal scale factor is\n\\begin{eqnarray}\na(\\eta)=-\\frac{1}{H\\eta}\\to a(t)=e^{Ht}.\n\\end{eqnarray}\nHere the latter denotes the scale factor with respect to cosmic\ntime $t$.\n\nWe choose the Newtonian gauge of $B=E=0$ and $\\bar{E}_i=0$, which\nleaves $10-4=6$ degrees of freedom (DOF). In this case, the\ncosmologically perturbed metric simplifies to\n\\begin{eqnarray} \\label{so3-met}\nds^2=a(\\eta)^2\\Big[-(1+2\\Psi)d\\eta^2+2\\Psi_i d\\eta\ndx^{i}+\\Big\\{(1+2\\Phi)\\delta_{ij}+h_{ij}\\Big\\}dx^idx^j\\Big]\n\\end{eqnarray}\nwith the transverse vector $\\partial_i\\Psi^i=0$ and\ntransverse-traceless tensor $\\partial_ih^{ij}=h=0$. 
We emphasize\nthat choosing the SO(3)-perturbed metric (\\ref{so3-met}) contrasts\nsharply with the covariant approach to the cosmological conformal\ngravity~\\cite{Mannheim:2011is}.\n\nIn order to get the cosmologically perturbed equations, one first\nobtains the bilinear action and then varies it to yield the\nperturbed equations. We expand the conformal gravity action\n(\\ref{ECSW}) up to quadratic order in the perturbations of\n$\\Psi,\\Phi,\\Psi_i,$ and $h_{ij}$ around the de Sitter\nbackground~\\cite{Deruelle:2010kf}. Then, the bilinear actions for\nscalar, vector and tensor perturbations can be found as\n\\begin{eqnarray}\n&&\\hspace*{-2.3em}S_{\\rm CG}^{({\\rm S})}=\\frac{1}{3\\kappa m^2}\\int\nd^4x\\Big[\\nabla^2 (\\Psi-\\Phi)\\Big]^2,\n\\label{scalar}\\\\\n&&\\nonumber\\\\\n &&\\hspace*{-2.3em}S_{\\rm CG}^{({\\rm V})}=\\frac{1}{4\\kappa m^2}\\int\nd^4x\\Big(\\partial_i\\Psi'_{j}\\partial^{i}\\Psi'^{j}\n-\\nabla^2\\Psi_i\\nabla^2\\Psi^i\\Big),\\label{vpeq}\\\\\n&&\\nonumber\\\\\n&&\\hspace*{-2.3em} S_{\\rm CG}^{({\\rm T})}=\\frac{1}{8\\kappa m^2}\\int\nd^4x\\Big(h''_{ij}h''^{ij} -2\\partial_kh'_{ij}\\partial^{k}h'^{ij}\n+\\nabla^2h_{ij}\\nabla^2h^{ij}\\Big)\\label{hpeq}.\n\\end{eqnarray}\nVarying the actions (\\ref{vpeq}) and (\\ref{hpeq}) with respect to\n$\\Psi^{i}$ and $h^{ij}$ leads to the equations of motion for vector\nand tensor perturbations\n\\begin{eqnarray}\n&&\\Box\\Psi_i=0,\\label{veq}\\\\\n &&\\Box^2h_{ij}=0,\\label{heq}\n\\end{eqnarray}\nwhere $\\Box=d^2\/d\\eta^2-\\nabla^2$ with $\\nabla^2$ the Laplacian\noperator.\n It is worth noting that Eqs.(\\ref{veq}) and (\\ref{heq}) are\nindependent of the expanding de Sitter background in the conformal\ngravity.\n\nFinally, we would like to mention the two scalars $\\Psi$ and\n$\\Phi$. The two scalar equations are given by\n$\\nabla^2\\Psi=\\nabla^2\\Phi=0$, which implies that these are\nobviously non-propagating modes in the de Sitter background. This\nmeans that the cosmological conformal gravity describes 4 DOF of\nvector and tensor modes. Thus, hereafter we will not consider the\nscalar sector.\n\n\\section{Primordial power spectra}\nThe power spectrum is given by the two-point correlation function,\nwhich can be computed once one chooses the vacuum state\n$|0\\rangle$. It is defined by\n\\begin{equation}\n\\langle0|{\\cal F}(\\eta,\\bold{x}){\\cal\nF}(\\eta,\\bold{x}')|0\\rangle=\\int d^3\\bold{k} \\frac{{\\cal P}_{\\cal\nF}}{4\\pi k^3}e^{-i \\bold{k}\\cdot (\\bold{x}-\\bold{x}')},\n\\end{equation}\nwhere ${\\cal F}$ denotes the vector or tensor perturbation and\n$k=|\\bold{k}|$ is the wave number. In general, fluctuations are\ncreated on all length scales with wave number $k$. Cosmologically\nrelevant fluctuations start their lives inside the Hubble radius\nwhich defines the subhorizon: $k~\\gg aH~(z=-k\\eta\\gg 1)$. On the\nother hand, the comoving Hubble radius $(aH)^{-1}$ shrinks during\ninflation while the comoving wavenumber $k$ is constant. Therefore,\neventually all fluctuations exit the comoving Hubble radius which\ndefines the superhorizon: $k~\\ll aH~(z=-k\\eta\\ll 1)$.\n\nOne may compute the two-point function by taking the Bunch-Davies\nvacuum $|0\\rangle$. In the de Sitter inflation, we choose the\nsubhorizon limit of $z\\to \\infty$ to define the Bunch-Davies\nvacuum, while we choose the superhorizon limit of $z\\to 0$ to get\nthe definite form of the power spectrum which survives on\nsuperhorizon scales. 
For example, scalar and tensor fluctuations originate on\nsubhorizon scales and then propagate for a long time on\nsuperhorizon scales. This can be checked by computing their power\nspectra which are scale-invariant. Accordingly, it would be\ninteresting to check what happens when one computes the power\nspectra for vector and tensor perturbations generated from de\nSitter inflation in the framework of conformal gravity.\n\n\n\\subsection{Vector power spectrum}\n\n\nLet us consider Eq.(\\ref{veq}) for the vector perturbation and\nexpand $\\Psi_i$ in plane waves with linearly polarized states\n\\begin{eqnarray}\\label{psim}\n\\Psi_i(\\eta,{\\bf x})=\\frac{1}{(2\\pi)^{\\frac{3}{2}}}\\int d^3{\\bf\nk}\\sum_{s=1,2}p_i^{s}({\\bf k})\\Psi_{\\bf k}^{s}(\\eta)e^{i{\\bf\nk}\\cdot{\\bf x}},\n\\end{eqnarray}\nwhere $p^{1\/2}_{i}$ are linear polarization vectors with\n$p^{1\/2}_i p^{1\/2, i}=1$. Also, $\\Psi_{\\bf k}^{s}$ denote linearly\npolarized vector modes. Plugging (\\ref{psim}) into equation\n(\\ref{veq}), one finds\n\\begin{eqnarray}\\label{v0eq}\n\\Bigg[\\frac{d^2}{d\\eta^2}+k^2\\Bigg]\\Psi_{\\bf k}^s(\\eta)=0.\n\\end{eqnarray}\nIntroducing $z=-k\\eta$, Eq.(\\ref{v0eq}) takes the simple form\n\\begin{equation}\\label{v1eq}\n\\Big[\\frac{d^2}{dz^2}+1\\Big]\\Psi_{\\bf k}^{s}(z)=0\\end{equation}\nwhose positive frequency solution is given by \\begin{equation}\n\\Psi_{\\bf k}^{s}(z)\\sim e^{iz}\\end{equation} up to\nnormalization.\n\n We now calculate the vector power spectrum. For this purpose, we\ndefine a commutation relation for the vector. In the bilinear\naction (\\ref{vpeq}), the conjugate momentum for the field $\\Psi_j$\nis found to be\n\\begin{eqnarray}\\label{vconj}\n\\pi_{\\Psi}^{j}=-\\frac{1}{2\\kappa m^2}\\nabla^2\\Psi'^{j},\n\\end{eqnarray}\nwhere one observes an unusual factor $\\nabla^2$, which reflects\nthat $\\Psi_i$ is not a canonically defined vector, but one arising\nfrom the cosmological conformal gravity. The canonical quantization\nis implemented by imposing the commutation relation\n\\begin{eqnarray}\\label{vcomm}\n[\\hat{\\Psi}_{j}(\\eta,{\\bf x}),\\hat{\\pi}_{\\Psi}^{j}(\\eta,{\\bf\nx}^{\\prime})]=2i\\delta({\\bf x}-{\\bf x}^{\\prime})\n\\end{eqnarray}\nwith $\\hbar=1$.\n\n\nNow, the operator $\\hat{\\Psi}_{j}$ can be expanded in Fourier modes\nas\n\\begin{eqnarray}\\label{vex}\n\\hat{\\Psi}_{j}(\\eta,{\\bf x})=\\frac{1}{(2\\pi)^{\\frac{3}{2}}}\\int\nd^3{\\bf k}\\sum_{s=1,2}\\Big(p_{j}^{s}({\\bf k})\\hat{a}_{\\bf\nk}^{s}\\Psi_{\\bf k}^{s}(\\eta)e^{i{\\bf k}\\cdot{\\bf x}}+{\\rm h.c.}\\Big)\n\\end{eqnarray}\nand the operator $\\hat{\\pi}_{\\Psi}^{j}=\\frac{k^2}{2\\kappa\nm^2}\\hat{\\Psi}'^{j}$ can be easily obtained from (\\ref{vex}).\nPlugging (\\ref{vex}) and $\\hat{\\pi}_{\\Psi}^{j}$ into (\\ref{vcomm}),\nwe find the commutation relation and Wronskian condition as\n\\begin{eqnarray}\n&&\\hspace*{-2em}[\\hat{a}_{\\bf k}^{s},\\hat{a}_{\\bf k^{\\prime}}^{\ns^{\\prime}\\dag}]=\\delta^{ss^{\\prime}}\\delta^3({\\bf k}-{\\bf\nk}^{\\prime}),\\label{comm0}\\\\\n&&\\hspace*{-2em}\\Psi_{\\bf k}^{s}\\Big(\\frac{k^2}{2\\kappa\nm^2}\\Big)(\\Psi_{\\bf k}^{*s})^{\\prime}-{\\rm c.c.}=i \\to \\Psi_{\\bf\nk}^{s}\\frac{d\\Psi_{\\bf k}^{*s}}{dz}-{\\rm c.c.}=-\\frac{2i\\kappa\nm^2}{k^3}. 
\\label{vwcon}\n\\end{eqnarray}\nWe choose the positive frequency mode for a Bunch-Davies vacuum\n$|0\\rangle$ normalized by the Wronskian condition\n\\begin{eqnarray} \\label{vecsol}\n\\Psi_{\\bf k}^{s}(z) =\\sqrt{\\frac{\\kappa m^2}{k^3}} e^{iz}\n\\end{eqnarray}\nas the solution to (\\ref{v1eq}).\nOn the other hand, the vector power spectrum is defined by\n\\begin{eqnarray}\\label{powerv}\n\\langle0|\\hat{\\Psi}_{j}(\\eta,{\\bf x})\\hat{\\Psi}^{j}(\\eta,{\\bf\nx}')|0\\rangle=\\int d^3{\\bf k}\\frac{{\\cal P}_{\\Psi}}{4\\pi\nk^3}e^{i{\\bf k}\\cdot({\\bf x}-{\\bf x^{\\prime}})},\n\\end{eqnarray}\nwhere we take the Bunch-Davies vacuum state $|0\\rangle$ by\nimposing $\\hat{a}_{\\bf k}^{s}|0\\rangle=0$. The vector power\nspectrum ${\\cal P}_{\\Psi}$ takes the form \\begin{equation}\n\\label{vecpt}{\\cal\nP}_{\\Psi}\\equiv\\sum_{s=1,2}\\frac{k^3}{2\\pi^2}\\Big|\\Psi_{\\bf\nk}^{s}\\Big|^2.\\end{equation}\nPlugging (\\ref{vecsol}) into (\\ref{vecpt}), we find a constant\npower spectrum for the vector perturbation\n\\begin{eqnarray} \\label{vec-pow}\n{\\cal P}_{\\Psi}=\\frac{m^2}{\\pi^2 M^2_{\\rm P}}.\n\\end{eqnarray}\n\n\\subsection{Tensor power spectrum}\n\n\nWe take Eq.(\\ref{heq}) to compute the tensor power spectrum. In\nthis case, the metric tensor $h_{ij}$ can be expanded in Fourier\nmodes\n\\begin{eqnarray}\\label{hijm}\nh_{ij}(\\eta,{\\bf x})=\\frac{1}{(2\\pi)^{\\frac{3}{2}}}\\int d^3{\\bf\nk}\\sum_{s={\\rm +,\\times}}p_{ij}^{s}({\\bf k})h_{\\bf\nk}^{s}(\\eta)e^{i{\\bf k}\\cdot{\\bf x}},\n\\end{eqnarray}\nwhere $p^{s}_{ij}$ are linear polarization tensors with\n$p^{s}_{ij} p^{s,ij}=1$. Also, $h_{\\bf k}^{s}(\\eta)$ represent\nlinearly polarized tensor modes. Plugging (\\ref{hijm}) into\n(\\ref{heq}) leads to the fourth-order differential equation\n\\begin{eqnarray}\n&&(h_{\\bf k}^{s})^{''''}+2k^2(h_{\\bf k}^{s})^{''}+k^4h_{\\bf k}^{s}\n=0,\\label{heq2}\n\\end{eqnarray}\nwhich can further be rewritten in the factorized form\n\\begin{eqnarray}\n&&\\Bigg[\\frac{d^2}{d\\eta^2}+k^2\\Bigg]^2h_{\\bf k}^{s}(\\eta)=0.\n\\label{hee}\n\\end{eqnarray}\nIntroducing $z=-k\\eta$, Eq.(\\ref{hee}) takes the form of a\ndegenerate fourth-order equation\n\\begin{equation}\\label{hc0}\n\\Big[\\frac{d^2}{dz^2}+1\\Big]^2h_{\\bf k}^{s}(z)=0.\\end{equation}\nThis is the same equation as for a degenerate Pais-Uhlenbeck (PU)\noscillator, and its solution is given by\n\\begin{equation} \\label{desol}\nh_{\\bf k}^{s}(z)=\\frac{N}{2k^2}\\Big[(a_2^s+a_1^s z)e^{iz}+c.c.\\Big]\n\\end{equation}\nwith $N$ the normalization constant. After quantization, $a^s_2$\nand $a^s_1$ are promoted to operators $\\hat{a}^s_2({\\bf k})$ and\n$\\hat{a}^s_1({\\bf k})$. The presence of $z$ in $(\\cdots)$ clearly\nreflects that $h_{\\bf k}^{s}(z)$ is a solution to the degenerate\nequation (\\ref{hc0}).\nHowever, it is difficult to quantize $h_{ij}$ in the subhorizon\nregion directly because it satisfies the degenerate fourth-order\nequation (\\ref{heq}). 
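It is straightforward to verify (\\ref{desol}) directly: a single\napplication of the operator in (\\ref{hc0}) gives\n\\begin{equation}\n\\Big(\\frac{d^2}{dz^2}+1\\Big)\\Big[(a_2^s+a_1^s\nz)e^{iz}\\Big]=2ia_1^s e^{iz},\n\\end{equation}\nand a second application annihilates the right-hand side, since\n$e^{iz}$ solves the second-order equation. The term proportional\nto $a_1^s z$ is thus the characteristic secular solution of the\ndegenerate PU oscillator. 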
In order to quantize $h_{ij}$, we have to\nconsider (\\ref{heq}) as a final equation obtained by making use of\nan auxiliary tensor $\\beta_{ij}$.\n\nFor this purpose, one rewrites the fourth-order action (\\ref{hpeq})\nas a second-order action\n\\begin{equation}\nS_{\\rm AC}^{({\\rm T})}=-\\frac{1}{4 \\kappa m^2}\\int\nd^4x\\Big(\\eta^{\\mu\\nu}\\partial_\\mu \\beta_{ij}\\partial_\\nu h^{ij}\n+\\frac{1}{2} \\beta_{ij}\\beta^{ij}\\Big).\\label{ahpeq}\n\\end{equation}\nTheir equations are given by\n\\begin{equation}\n\\Box h_{ij}=\\beta_{ij},~~\\Box \\beta_{ij}=0,\n\\end{equation}\nwhich are combined to give the fourth-order tensor equation\n(\\ref{heq}). Explicitly, acting with $\\Box$ on the first equation\nleads to (\\ref{heq}) when one uses the second one. Actually, this\nis an extension of the singleton action, a fourth-order scalar\ntheory describing a dipole ghost\npair~\\cite{Myung:1999nd,Rivelles:2003jd,Kim:2013waf,Kim:2013mfa}.\nThis is related not to a non-degenerate PU oscillator and its\nquantization, but to a degenerate PU oscillator and its\nquantization~\\cite{Mannheim:2004qz,Smilga:2008pr}. The canonical\nconjugate momenta are given by\n\\begin{equation}\n\\pi_h^{ij}=\\frac{1}{4\\kappa\nm^2}\\beta'^{ij},~~\\pi_\\beta^{ij}=\\frac{1}{4\\kappa m^2}h'^{ij}.\n\\end{equation}\nAfter expanding $\\hat{h}_{ij}$ and $\\hat{\\beta}_{ij}$ in their\nFourier modes, their amplitudes at each mode are given by\n\\begin{eqnarray}\n\\label{sol1}\\hat{\\beta}^s_{\\bf k}(z)&=&iN\\Big(\\hat{a}^s_1({\\bf k})e^{iz}-\\hat{a}^{s\\dagger}_1({\\bf k})e^{-iz}\\Big),\\\\\n \\label{sol2}\\hat{h}^s_{\\bf k}(z)&=&\\frac{N}{2k^2}\\Big[\\left(\\hat{a}^s_2({\\bf k})+\\hat{a}^s_1({\\bf\n k})z\\right)e^{iz}+{\\rm h.c.}\\Big].\n \\end{eqnarray}\nNow, the canonical quantization is accomplished by imposing\nequal-time commutation relations:\n\\begin{eqnarray}\\label{comm}\n[\\hat{h}_{ij}(\\eta,{\\bf x}),\\hat{\\pi}_{h}^{ij}(\\eta,{\\bf\nx}^{\\prime})]=2i\\delta^3({\\bf x}-{\\bf\nx}^{\\prime}),~~[\\hat{\\beta}_{ij}(\\eta,{\\bf\nx}),\\hat{\\pi}_{\\beta}^{ij}(\\eta,{\\bf x}^{\\prime})]=2i\\delta^3({\\bf\nx}-{\\bf x}^{\\prime}),\n\\end{eqnarray}\nwhere the factor 2 comes from the fact that $h_{ij}$ and\n$\\beta_{ij}$ each represent 2 DOF. 
Taking (\\ref{sol1}) and\n(\\ref{sol2}) into account, the two operators $\\hat{\\beta}_{ij}$ and\n$\\hat{h}_{ij}$ are given by\n\\begin{eqnarray}\\label{hex1}\n\\hat{\\beta}_{ij}(z,{\\bf x})&=&\\frac{ N}{(2\\pi)^{\\frac{3}{2}}}\\int\nd^3{\\bf k}\\Bigg[\\sum_{s=+,\\times}\\Big(ip_{ij}^{s}({\\bf\nk})\\hat{a}_1^s({\\bf k})e^{iz}e^{i{\\bf k}\\cdot{\\bf\nx}}\\Big)+{\\rm h.c.}\\Bigg], \\\\\n\\label{hex2} \\hat{h}_{ij}(z,{\\bf\nx})&=&\\frac{1}{(2\\pi)^{\\frac{3}{2}}}\\int d^3{\\bf\nk}\\frac{N}{2k^2}\\Bigg[\\sum_{s=+,\\times}\\Big\\{p_{ij}^{s}({\\bf\nk})\\Big(\\hat{a}_2^s({\\bf k})+\\hat{a}_1^s({\\bf k})z\n\\Big)e^{iz}e^{i{\\bf k}\\cdot{\\bf x}}\\Big\\}+{\\rm h.c.}\\Bigg].\n\\end{eqnarray}\nPlugging (\\ref{hex1}) and (\\ref{hex2}) into (\\ref{comm}) determines\nthe normalization constant $N=\\sqrt{2\\kappa m^2}$ and the\ncommutation relations between $\\hat{a}_i^s({\\bf k})$ and\n$\\hat{a}^{s\\dagger}_j({\\bf k}')$ as\n \\begin{equation} \\label{scft}\n [\\hat{a}_i^s({\\bf k}), \\hat{a}^{s^{\\prime}\\dagger}_j({\\bf k}')]= 2k \\delta^{ss'}\n \\left(\n \\begin{array}{cc}\n 0 & -i \\\\\n i & 1 \\\\\n \\end{array}\n \\right)\\delta^3({\\bf k}-{\\bf k}').\n \\end{equation}\nHere the commutation relation of $ [\\hat{a}_2^s({\\bf k}),\n\\hat{a}^{s^{\\prime}\\dagger}_2({\\bf k}')]$ is determined by the\ncondition\n\\begin{equation} [\\hat{h}_{ij}(\\eta,{\\bf\nx}),\\hat{\\pi}_{\\beta}^{ij}(\\eta,{\\bf x}^{\\prime})]=0.\n\\end{equation}\nWe are ready to compute the power spectrum of the\ngravitational waves defined by\n\\begin{eqnarray}\\label{power}\n\\langle0|\\hat{h}_{ij}(\\eta,{\\bf x})\\hat{h}^{ij}(\\eta,{\\bf\nx^{\\prime}})|0\\rangle=\\int d^3k\\frac{{\\cal P}_{\\rm h}}{4\\pi\nk^3}e^{i{\\bf k}\\cdot({\\bf x}-{\\bf x^{\\prime}})}.\n\\end{eqnarray}\nHere we choose the Bunch-Davies vacuum $|0\\rangle$ by imposing\n$\\hat{a}_i^s({\\bf k})|0\\rangle=0$. The tensor power spectrum ${\\cal\nP}_{\\rm h}$ in (\\ref{power}) denotes ${\\cal P}_{\\rm\nh}\\equiv\\sum_{s={+,\\times}}{\\cal P}^s_{\\rm h}$ where $ {\\cal\nP}^s_{\\rm h}$ is given as\n\\begin{eqnarray}\n{\\cal P}^s_{\\rm h}=\\frac{k^3}{2\\pi^2}|h_{\\bf\nk}^{s}\\Big|^2=\\frac{m^2}{2\\pi^2M^2_{\\rm P}}.\n\\end{eqnarray}\nFinally, we obtain the tensor power spectrum\n\\begin{equation}\n{\\cal P}_{\\rm h}=\\frac{m^2}{\\pi^2M^2_{\\rm P}}\n\\end{equation}\nwhich corresponds to a constant power spectrum. This is the same\nform as for the vector power spectrum (\\ref{vec-pow}).\n\nOn the other hand, the power spectrum of the auxiliary tensor\n$\\beta_{ij}$ is defined by\n\\begin{equation}\n\\langle0|\\hat{\\beta}_{ij}(\\eta,{\\bf x})\\hat{\\beta}^{ij}(\\eta,{\\bf\nx^{\\prime}})|0\\rangle=\\int d^3k\\frac{{\\cal P}_{\\rm \\beta}}{4\\pi\nk^3}e^{i{\\bf k}\\cdot({\\bf x}-{\\bf x^{\\prime}})}. \\end{equation} Here\nwe obtain a vanishing power spectrum,\n\\begin{equation}\n{\\cal P}_{\\rm \\beta}=0,\n\\end{equation}\nwhen one uses the commutation relation $[\\hat{a}_1^s({\\bf k}),\n\\hat{a}^{s^{\\prime}\\dagger}_1({\\bf k}')]=0$. This is clear because\n$\\beta_{ij}$ is an auxiliary tensor introduced to lower the\nfourth-order action to a second-order action. However, it is not\nunderstood why $\\hat{h}_{ij}$ could be expanded in terms of\n$\\hat{a}_2^s({\\bf k})$ and $\\hat{a}_1^s({\\bf k})$ without\nintroducing $\\beta_{ij}$, whereas $\\beta_{ij}$ was expanded in\nterms of $\\hat{a}_1^s({\\bf k})$ alone.\n\n\\section{Discussions}\n\nWe have found the constant vector and tensor power spectra generated\nduring de Sitter inflation from conformal gravity. 
These constant\npower spectra could be understood because conformal gravity is\ninvariant under conformal (Weyl) transformations. This means that\ntheir power spectra are constant with respect to $z=-k\\eta$ since\nvector and tensor perturbations are decoupled from the expanding de\nSitter background. In other words, this is so because the bilinear\nactions (\\ref{vpeq}) and (\\ref{hpeq}) are independent of the\nconformal scale factor $a(\\eta)$ as a result of conformal\ninvariance. Contrary to Ref.~\\cite{Mannheim:2011is}, this makes it\nless interesting to investigate further the cosmological implications\nof conformal gravity.\n\n Hence, our\nanalysis implies that Einstein-Weyl gravity is more promising\nthan conformal gravity for obtaining the physical tensor power\nspectrum, because the Einstein-Hilbert term provides a coupling to the\nscale factor $a$ of the form\n$a^2(h'_{ij}h'^{ij}-\\partial_lh_{ij}\\partial^lh^{ij})$. Also, the\nsingleton Lagrangian ${\\cal\nL}_s=-\\sqrt{-\\bar{g}}(\\frac{1}{2}\\bar{g}^{\\mu\\nu}\\partial_\\mu\\phi_1\\partial_\\nu\\phi_2+\\frac{1}{2}\\phi_1^2)$\nis quite interesting because it provides two scalar equations $(\\Box\n+2aH\\partial_\\eta)\\phi_2=\\phi_1$ and $(\\Box +2aH\\partial_\\eta)\n\\phi_1=0$, which are combined to yield the degenerate fourth-order\nequation $(\\Box +2aH\\partial_\\eta)^2 \\phi_2=0$. Here we observe the\npresence of the scale factor $a$ in the perturbed equation of the\nsingleton.\n\n Consequently, the\nconformal invariance of a Lagrangian like $\\sqrt{-g}C^2$ or\n$\\sqrt{-g}F^2$ is not responsible for generating the observed\nfluctuations during inflation.\n\n\\vspace{0.25cm}\n\n {\\bf Acknowledgement}\n\n\\vspace{0.25cm}\n This work was supported by the National\nResearch Foundation of Korea (NRF) grant funded by the Korea\ngovernment (MEST) (No.2012-R1A1A2A10040499).\n\n\\newpage\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} |
|
{"text":"\\section*{Introduction}\nDeep neural networks (DNNs) are a popular machine learning technique and have shown superior performance in many scientific problems. Despite of their high prediction accuracy, DNNs are often criticized for a lack of interpretation of how changes of the input variables influence the output. Indeed, for applications in many scientific fields such as biology and medicine, understanding the statistical models described by the networks can be as important as, if not more important than, the prediction accuracy. In a DNN, because of its nonlinearity and inherent complexity, generally one should not expect a concise relationship between each input variable and the output, such as the conditional monotonicity in linear regression or logistic regression. A more realistic approach for interpreting the DNN model can be selecting a subset of variables, among all input variables, that have significant predictive power on the output, which is known as ``variable selection''. This paper considers the variable selection problem in DNNs.\n\nDuring the past decades, many methods have been proposed for this task. The variable selection methods for neural networks, similar to the ones for other machine learning techniques, can be broadly classified into three categories: filters, wrappers and embedded methods \\cite{may_review_2011-1,guyon_introduction_2003,chandrashekar_survey_2014}. Filters select variables by information theoretic criteria such as mutual information \\cite{battiti_using_1994} and partial mutual information \\cite{may_non-linear_2008}, and the selection procedure does not involve network training. In contrast, both wrappers and embedded methods are based on the training of neural networks. Wrappers wrap the training phase with a search strategy, which searches through the set, or a subset, of all possible combinations of input variables and selects the combination whose corresponding network gives the highest prediction accuracy. A number of sequential \\cite{sung1998ranking,maier1998use} and heuristic search strategies \\cite{brill_fast_1992,tong_genetic_2010,sivagaminathan_hybrid_2007} have been used. Embedded methods, unlike wrappers, select variables during the training of the network of interest. This can be done by gradually removing\/pruning weights or variables according to their importance measured in various ways (a detailed review is given in the Methods section) or by incorporating a regularization term into the loss function of the neural network to impose sparsity on the weights \\cite{grandvalet_outcomes_1999,chapados_input_2001,simila_combined_2009,scardapane_group_2017}. For a more exhaustive review of variable selection methods in neural networks, see \\cite{may_review_2011-1,zhang_neural_2000}.\n\nWhile a lot of variable selection methods have been developed for neural networks, there are still challenges that hinder them from being widely used. First and foremost, these methods lack a control on the quality of selected variables. When selecting from a large number of variables, a standard way of quality control is to calculate false discovery rate (FDR) \\cite{benjamini1995controlling} and control it at a certain level, particularly in biological and medical studies. In the context of variable selection, FDR is the (expected) proportion of false positives among all variables called significant; for example, if 20 variables are selected (called significant), and two of them are actually null, then the FDR is $2\/20=0.1$. 
However, no variable selection methods for neural networks so far have tried to estimate FDR or keep FDR under control. Second, among these methods, many were developed for specific types of networks, especially very shallow networks, and they do not work, or work inefficiently, for deeper networks. Third, many of the methods are not applicable to large datasets, on which their computational loads can be prohibitively high.\n\nIn this paper, we develop a method called SurvNet for variable selection in neural networks that overcomes these limitations. It is an embedded method that gradually removes the least relevant variables until the FDR of the remaining variables reaches a desired threshold. Figure \\ref{fig:1} is the flowchart of SurvNet. It starts by adding a set of simulated input variables called ``surrogate variables'', which will help estimate FDR, and by training a network with all variables, including both original and surrogate variables. Then it calculates the importance of each variable (original or surrogate) and eliminates the variables that are least important. When eliminating a variable, its corresponding input neuron and all outgoing connections of this neuron are removed from the network. After this, SurvNet estimates the FDR of the original variables that remain in the model. If the estimated FDR is greater than the pre-set threshold, SurvNet will go back to the step of training the (updated) network; otherwise, the elimination stops, and all remaining surrogate variables are removed before the final model is trained. Note that each updated network is trained using the values of weights in the last trained network as initial values for a ``warm start''.\n\nThere are three major novelties in this backward elimination procedure of SurvNet. First, it proposes a new measure\/score of variable importance, which works regardless of the type of problems (classification or regression), the number of output neurons (one or multiple), and the number of hidden layers (one or multiple) in neural networks. In fact, this score can be readily computed for networks with arbitrary depths and activation functions. Second, SurvNet proposes an easy and quick way of estimating FDRs. Statistical estimation of FDRs requires obtaining the null distribution of the importance scores, that is, the distribution of the scores of irrelevant variables \\cite{storey2003statistical}. This is often done by permuting the output values of samples and training multiple independent models in parallel, each of which corresponds to a permuted dataset, but the computational cost is typically unaffordable for neural networks. SurvNet proposes a distinct way. It generates a set of null variables which serve as surrogates of the (unknown) null original variables to obtain the null distribution. With the introduction of surrogate variables, an estimate of FDR can be given by a simple mathematical formula without training a large number of networks at each step. Third, instead of eliminating one variable or any pre-specified number of variables at each step, SurvNet is able to adaptively determine an appropriate number of variables to eliminate by itself. This number, expressed in a concise mathematical formula, makes the elimination highly efficient while having the FDR well controlled at the desired level. The formula includes a parameter called ``elimination rate'', which is a constant between 0 and 1 and controls the ``aggressiveness'' of elimination. 
When this parameter is chosen to be 1, the elimination is the most aggressive.\n\nPut together, SurvNet is a computationally efficient mechanism for variable selection in neural networks that needs little manual intervention. After setting the initial network structure, an FDR cutoff $\\eta ^*$ (0.1 is the most commonly used value), and an elimination rate $\\varepsilon$ (1 is often an acceptable choice), the elimination procedure will automatically determine how many and which variables to eliminate at each step and stop when the estimated FDR is no greater than $\\eta ^*$.\n\n\\section*{Data and results}\nWe applied SurvNet to digits 4's and 9's in the MNIST database (Dataset 5), a single-cell RNA-Seq dataset (Dataset 6), as well as four simulation datasets (Datasets 1 $\\sim$ 4).\n\nMNIST \\cite{lecun1998gradient} contains 60,000 training images (including 5,000 validation images) and 10,000 testing images of ten handwritten digits from 0 to 9. Each image contains $28\\times28 = 784$ pixels, which are treated as 784 input variables.\n\nSingle-cell RNA-Seq \\cite{kolodziejczyk2015technology} is a biological technique for measuring gene expression in cells. Along with other single-cell techniques, it was recognized as the ``2018 Breakthrough of the Year'' by \\textit{Science} magazine on account of its important applications in biomedical and genomic research. In single-cell RNA-Seq data, the samples are the cells, the inputs are expression levels of individual genes, and the output is the cell type. Biologically, it is often believed that the cell type is determined by a small set of genes, and thus single-cell RNA-Seq data can be a good choice to study variable selection. \n\nThe classification accuracy of SurvNet for these real data was evaluated by several criteria, including initial test loss, initial test error, final test loss and final test error. Here ``test loss'' and ``test error'' refer to the cross-entropy loss and the misclassification rate on the test data, respectively; and their ``initial'' and ``final'' values were derived by using the network with all original variables and with selected variables only, respectively. See Supplementary Materials for details about how they were calculated.\n\nFor these real datasets (Datasets 5 $\\sim$ 6), however, it is unknown which variables are truly significant. Hence we relied on simulated data to quantitatively assess the accuracy of selected variables, the most important aspect of SurvNet. Four datasets were simulated under different schemes. Datasets 1 $\\sim$ 3 were for classification and Dataset 4 was for regression.\n\nExcept for the MNIST data, each dataset was divided into a training set and a test set, with 80\\% of the samples in the training set and 20\\% in the test set, and 30\\% of training samples were further separated for validation (used to decide when to stop training, see Supplementary Materials).\n\nSurvNet was implemented in TensorFlow 1.8 \\cite{tensorflow2015-whitepaper}. For each dataset, we used a common and simple network structure with two hidden layers, which consisted of 40 and 20 nodes respectively. The ReLU activation function was used, together with a batch size of 50 and a learning rate of 0.05 (0.01 for the regression problem).\n\nIn our experiments, Datasets 1 $\\sim$ 4 were simulated 25 times, and the results of variable selection using SurvNet are averaged over these 25 simulations. 
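For concreteness, the backward elimination described above can be sketched in a few lines of Python. This is a minimal illustration rather than the released implementation: ordinary least squares is used as a stand-in for network training (the linear special case of the importance score discussed in Methods), warm starts are omitted, and the helper names (\\texttt{train}, \\texttt{scores}, \\texttt{survnet\\_select}) are ours.\n\\begin{verbatim}\nimport numpy as np\n\ndef train(X, y):\n    # stand-in for network training: ordinary least squares\n    beta, *_ = np.linalg.lstsq(X, y, rcond=None)\n    return beta\n\ndef scores(beta, X, y):\n    # S_j = beta_j^2 * mean(e_i^2): equation (2) specialized to\n    # linear regression (see Methods)\n    e = y - X @ beta\n    return beta ** 2 * np.mean(e ** 2)\n\ndef survnet_select(X, y, eta_star=0.1, eps=1.0):\n    n, p = X.shape\n    q = p                                  # as many surrogates as originals\n    Xs = np.random.permutation(X.ravel()).reshape(n, q)\n    Z = np.hstack([X, Xs])                 # original + surrogate variables\n    keep = np.arange(p + q)                # variables still in the model\n    while True:\n        beta = train(Z[:, keep], y)\n        r, r0 = keep.size, np.count_nonzero(keep >= p)\n        fdr = r0 \/ max(r - r0, 1) * p \/ q  # estimated FDR (see Methods)\n        if fdr <= eta_star:\n            break\n        m = int(np.ceil(eps * (1 - eta_star \/ fdr) * r0))\n        s = scores(beta, Z[:, keep], y)\n        keep = keep[np.argsort(s)[m:]]     # drop the m least important\n    return keep[keep < p]                  # indices of selected originals\n\\end{verbatim}\nOn a simulated regression matrix of shape $10{,}000\\times784$ as in Dataset 4, \\texttt{survnet\\_select(X, y)} returns the indices of the original variables retained once the estimated FDR falls below the cutoff.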
For Dataset 1, we demonstrate how SurvNet works step by step to look into its behavior, and we also study the influence of the elimination rate by setting $\\varepsilon$ to different values. On the other simulation datasets, the results are similar and thus are not shown.\n\n\\subsection*{Dataset 1: simulated data with independent variables}\nWe simulated a $10,000\\times784$ matrix $\\bm{X}$, with $x_{ij} \\sim {\\rm i.i.d.}\\ U(0,1)$ for $1 \\le i \\le 10,000$, $1 \\le j \\le 784$, where $U$ means uniform distribution, and treated its rows and columns as samples and variables respectively. The samples were randomly assigned into two classes $C_1$ and $C_2$ of equal size. Then $p^\\prime = 64$ variables were chosen at random and their values in one class were shifted: for each of these variables, we generated a shift value $\\delta_j \\sim U(0.1,0.3)$, with its direction having equal probability of being positive or negative. More precisely, $x_{ij} \\leftarrow x_{ij} + (2\\alpha_j-1) \\cdot \\delta_j$ for $i \\in C_1$, $j \\in \\Omega_{p^\\prime}$, where $\\alpha_j \\sim {\\rm Bernoulli}(\\frac{1}{2})$ and $\\Omega_{p^\\prime}$ was the set of $p^\\prime$ randomly chosen variables. In this way, the 784 variables were independent of each other, and the 64 variables were significant because each of them had different mean values in the two classes. This ``independent-variable differential-mean'' scheme is a very widely used simulation scheme for studying variable selection.\n\nWe ran SurvNet on this dataset with an FDR cutoff $\\eta^* = 0.1$ and an elimination rate $\\varepsilon = 1$. To demonstrate how SurvNet works step by step, Figure \\ref{fig:2}a shows, in one instance of simulation, the number of original variables and surrogate variables left at each step of a selection process as well as the corresponding estimated FDR. The number of variables to be eliminated in the subsequent step is also displayed, and notice that our algorithm was efficient: it eliminated a large number of variables at the beginning and gradually slowed down the elimination as the number of remaining variables decreased and the estimated FDR got closer to the desired value. When the estimated FDR became less than 0.1, the selection process stopped, and the final model turned out to contain all the 64 truly significant variables. On the same data, we studied the influence of elimination rate, and the results of using $\\varepsilon = 1$ and $\\varepsilon = 0.5$ are shown in Figure \\ref{fig:2}b and \\ref{fig:2}c. It is found that while a larger elimination rate led to a faster selection process with fewer steps, the number of variables left at the end of the selection was almost the same (Figure \\ref{fig:2}b). Moreover, regardless of elimination rate, our method gave an accurate estimate of FDR, and the true value of FDR was well controlled throughout the selection process (Figure \\ref{fig:2}c).\n\nThe overall performance of SurvNet under $\\eta^* = 0.1$ and $\\varepsilon = 1$ is summarized in Table \\ref{tab:1}. The test loss and test error on the model with selected variables were both less than those on the model that contains all original variables, indicating enhanced predictive power of the network. More importantly, SurvNet accurately selected the significant variables: it kept 61.92 of the 64 significant variables, along with 7.42 false positives, and the selected variables had an FDR of 0.105, which was very close to the cutoff value 0.1. 
The estimated FDR, 0.093, was also close to the actual FDR.\n\nThe results under different elimination rates ($\\varepsilon = 1$ and $\\varepsilon = 0.5$), different FDR cutoffs ($\\eta^* = 0.1$ and $\\eta^* = 0.05$), and different numbers of significant variables ($p^\\prime = 64$ and $p^\\prime = 32$) are shown in Table S1.\n\n\\subsection*{Dataset 2: simulated data with correlated variables}\nWe considered correlated variables in this simulation dataset. It is well known that variable dependence often makes FDR estimation difficult \\cite{benjamini2001control, heesen2015inequalities}, and we wondered whether SurvNet was still valid in this case. Images are perfect examples of data with correlated variables, as the value of a pixel usually highly depends on the value of its surrounding pixels. Here we used all images of digit 0 in the MNIST data and randomly assigned them into two classes, and all variables were supposed to be non-significant for classification at this time. Then we picked $p^\\prime = 64$ variables and shifted their mean values in one class in the same way we did in Dataset 1.\n\nTable \\ref{tab:1} shows the performance of SurvNet under $\\eta^* = 0.1$ and $\\varepsilon = 1$. Similar to that on Dataset 1, the test loss decreased after variable selection. The test errors before and after variable selection were both zero, possibly due to the positive correlation between pixels, which reduced the difficulty of the classification problem. Although SurvNet identified slightly fewer significant variables (59.36 of the 64 significant variables) than it did in Dataset 1, the FDR 0.107 was still very close to the desired cutoff, and its estimated value 0.094 was accurate as well. For results under different sets of parameter values, see Table S2.\n\n\\subsection*{Dataset 3: simulated data with variance-inflated variables}\nThe third simulation scheme is very challenging. Unlike in the previous two datasets, the significant variables did not differ in the mean values of the two classes; instead, they differed only in the variances. Same as in Dataset 1, we simulated a $10,000\\times784$ matrix $\\bm{X}$ whose element $x_{ij} \\sim {\\rm i.i.d.}\\ U(0,1)$ and divided the samples into two equal-size classes $C_1$ and $C_2$. But then, to make $p^\\prime=64$ randomly chosen variables significant, we let $x_{ij} \\leftarrow x_{ij} + (2\\alpha_{ij}-1) \\cdot \\delta_{ij}$ for $i \\in C_1$, $j \\in \\Omega_{p^\\prime}$, where $\\alpha_{ij} \\sim {\\rm Bernoulli}(\\frac{1}{2})$, and $\\delta_{ij} \\sim U(0.8,1)$. Note that, unlike in the first two simulation schemes, here $\\delta$ and $\\alpha$ depend on both $i$ and $j$. Thus the means of these variables remained unchanged, but their standard deviations were inflated from 0.29 to 0.95 (see Supplementary Materials for calculations). In other words, the only difference between the two classes was that the values of 64 out of 784 pixels were ``noisier''. In this case, classifiers and tests based on discrepancies in the mean values would fail. For example, the t-test identified merely 0.20 (averaged over 25 instances) of the 64 significant variables.\n\nThe results of applying SurvNet with $\\eta^* = 0.1$ and $\\varepsilon = 1$ are shown in Table \\ref{tab:1}, and the first thing to notice is the dramatic improvement of classification accuracy on the test set. 
While the test error given by the network with all 784 variables was 49.42\\%, it dropped to 0.47\\% after variable selection by SurvNet; that is, from an almost random guess to an almost perfect classification. This implies that the variable selection gives back to the DNN the ability to utilize all types of information useful for classification, which was masked by the overwhelming number of irrelevant variables. \nAmong the selected variables, 23.00 were truly significant variables, and 3.40 were false positives. Although only 36\\% of the significant variables were successfully identified, the FDR of the remaining variables, 0.114, was close to the cutoff, and the estimated FDR was acceptably accurate.\n\nWe then scrutinized the selection process of SurvNet on this dataset, and found that the reason that only a proportion of the significant variables were retained was that the initial network, which made almost random guesses, could not accurately determine the importance of variables, and thus many significant variables were removed. As the selection proceeded, the network gained higher classification accuracy and also stronger ability to distinguish the significant variables. When we used a smaller elimination rate, say $\\varepsilon = 0.5$, SurvNet was able to keep a larger proportion of significant variables (see Table S3 for details).\n\n\n\\subsection*{Dataset 4: simulated regression data}\nSuppose the data matrix is $\\bm{X}=(x_{ij})_{10,000 \\times 784}$, and each $x_{ij} \\sim U(-1,1)$. Of the 784 variables, 64 were randomly chosen as significant variables (denoted by $x_{k_j}, j=1,\\ldots,64$), and $y$ was set to be a linear combination of the $x_{k_j}$ or their nonlinear functions, plus a few interaction terms and a random error term:\n\\[\n\\begin{split}\ny_i=\\sum_{j=1}^{16} \\beta_j x_{ik_j} + \\sum_{j=17}^{32} \\beta_j \\sin x_{ik_j} + \\sum_{j=33}^{48} \\beta_j e^{x_{ik_j}} + \\sum_{j=49}^{64} \\beta_j \\max (0,x_{ik_j}) \\\\\n+ \\beta_1^\\prime x_{ik_{15}} x_{ik_{16}} + \\beta_2^\\prime x_{ik_{31}} x_{ik_{32}} + \\beta_3^\\prime x_{ik_{47}} x_{ik_{48}} + \\beta_4^\\prime x_{ik_{63}} x_{ik_{64}} + \\varepsilon_i,\n\\end{split}\n\\]\nwhere $\\beta_j=(2\\alpha_j-1) \\cdot b_j$, $\\alpha_j \\sim {\\rm Bernoulli}(\\frac{1}{2})$, $b_j \\sim U(1,3)$, $\\varepsilon_i \\sim N(0,1)$ for $i=1,\\ldots,10,000$, $j=1,\\ldots,64$, and $\\beta_1^\\prime$, $\\beta_2^\\prime$, $\\beta_3^\\prime$, $\\beta_4^\\prime$ have the same distribution as $\\beta_j$.\n\nWe ran SurvNet with $\\eta^*=0.1$ and $\\varepsilon=1$ on 25 instances of simulation and the results are reported in the format of mean $\\pm$ standard deviation. After variable selection, the test loss was reduced greatly, from $33.013\\pm27.059$ to $8.901\\pm1.988$. The number of remaining original variables was $71.16\\pm5.02$ on average, and $63.96\\pm0.20$ of the 64 significant variables were kept. The actual FDR of the selected variables was $0.097\\pm0.061$, close to the desired value 0.1, and the estimated FDR, $0.094\\pm0.004$, was accurate. The results suggest that SurvNet is highly effective for this regression dataset.\n\n\\subsection*{Dataset 5: digits 4 and 9 in MNIST}\nAfter four simulation datasets, we applied SurvNet to the MNIST data. Here we only used the images of two digits that look alike (4 and 9), as they are similar in most pixels and differ only in pixels in certain regions. 
In Figure \\ref{fig:3}a, we show two representative 4's that differ in the width of the top opening and two representative 9's that differ in the presence of a bottom hook. The four regions circled in red are likely to be most significant in differentiating 4's and 9's, especially the region in the upper middle denoting whether the top is closed or open, and the region in the lower middle denoting whether there is a hook at the bottom.\n\nFrom left to right, Figure \\ref{fig:3}b shows the pixels that were selected by SurvNet under four combinations of FDR cutoffs ($\\eta^* = 0.1$ or 0.01) and elimination rates ($\\varepsilon = 1$ or 0.5). The colors display the relative importance, defined by equation \\ref{eq:Sj_L2} (see Methods), of the selected pixels, and a darker color means greater importance. We found that different parameter settings gave quite consistent results, and they all picked out the four regions that were speculated to be significant.\n\n\\subsection*{Dataset 6: single-cell RNA-Seq data}\nChen \\textit{et al.} performed single-cell RNA-Seq analysis of the adult mouse hypothalamus and identified 45 cell types based on clustering analysis \\cite{chen_single-cell_2017}. We used 5,282 cells in two non-neuronal clusters, oligodendrocyte precursor cell (OPC) and myelinating oligodendrocyte (MO), which reflected two distinct stages of oligodendrocyte maturation. Following a standard pre-processing protocol of single-cell RNA-Seq data \\cite{hwang2018single}, we filtered out the genes whose expression could not be detected in more than 30\\% of these cells, which left 1,046 genes for further analysis, and used $\\log({\\rm TPM}+1)$ for measuring gene expression levels, where TPM stands for ``transcripts per million''.\n\nWith $\\eta^*=0.01$ and $\\varepsilon=1$, SurvNet selected 145 genes in one realization. Figure \\ref{fig:4} shows the heatmap of the expression values of these genes, in which rows are genes and columns are cells. The top banner shows the class labels for the samples. For gene expression data, the set of significant genes is typically identified by ``differential expression'' analysis, which finds differences in the mean expression levels of genes between classes. Indeed, as the heatmap shows, most genes have evidently different mean expression levels in the OPCs and MOs. However, among the 145 significant genes identified by SurvNet, 16 have log-fold-changes (logFCs) less than 1, meaning that their mean expression levels are not very different in the OPCs and MOs. In Figure \\ref{fig:4}, these genes are marked in purple on the left banner, in contrast to green for the other genes. Actually, Bartlett's test, which tests the difference in variance, indicated that 14 of these 16 genes had unequal variances in the two groups of cells (p-value < 0.05); thus, they are instances of variance-inflated variables selected by SurvNet, in addition to the ones in Dataset 3. Again, SurvNet demonstrates its ability to identify various types of significant variables, not just variables with different means.\n\nFurther, the functional interpretations of the selected genes match the biological characteristics of OPCs and MOs. 
We conducted Gene Ontology (GO) analysis using the DAVID 6.8 program \\cite{huang2009systematic,da2009bioinformatics}, and found that these genes were likely to play an important role in a number of biological processes, for example, substantia nigra development (with p-value $8.8\\times10^{-9}$, fold enrichment 29.1), nervous system development ($1.8\\times10^{-5}$, 4.8), positive regulation of dendritic spine development ($1.2\\times10^{-3}$, 19.0) and astrocyte differentiation ($3.2\\times10^{-3}$, 34.5). In particular, oligodendrocyte differentiation ($1.8\\times10^{-3}$, 16.2) defines the transition from OPCs to their mature form (MOs) \\cite{rubio2004vitro,barateiro2014temporal}, and myelination ($7.1\\times10^{-5}$, 13.8), which is the process of generating myelin and is a kind of axon ensheathment ($3.5\\times10^{-2}$, 55.2), is unique to MOs \\cite{menn2006origin,barateiro2014temporal}. Corresponding to these processes, the selected genes were also enriched for cellular components such as myelin sheath ($2.4\\times10^{-19}$, 16.2), axon ($1.2\\times10^{-5}$, 5.0) as well as internode region of axon ($2.9\\times10^{-4}$, 106.1), and molecular functions like structural constituent of myelin sheath ($3.1\\times10^{-6}$, 115.3). Besides, among the 16 selected genes whose expression levels had no obvious differences in the OPCs and MOs, \\textit{Cd9} was involved in oligodendrocyte development \\cite{terada2002tetraspanin}, and \\textit{Ckb}, \\textit{Actb}, \\textit{Tuba1a} as well as \\textit{Gpm6b} were related to myelin sheath or myelin proteolipid protein PLP \\cite{jahn2009myelin,werner2013critical}.\n\nAfter variable selection, the test loss was reduced from $4.230\\times10^{-3}$ to $3.460\\times10^{-3}$, and the test error dropped from 0.083\\% to 0.076\\% (averaged over 25 realizations).\n\n\\section*{Conclusions and discussion}\nWe have presented a largely automatic procedure for variable selection in neural networks (SurvNet). It is based on a new measure of variable importance that applies to a variety of networks, deep or shallow, for regression or classification, and with one or multiple output units. More importantly, SurvNet is the first method that estimates and controls the FDR of selected variables, which is essential for applications where the trustworthiness of variable selection is pivotal. By introducing surrogate variables, it avoids training multiple networks in parallel. SurvNet also adjusts the number of variables to eliminate at each step, and the ``warm start'' nature of backward elimination facilitates the training of networks. On multiple simulation datasets and real datasets, SurvNet has effectively identified the significant variables and given a dependable estimate of FDR.\n\nSurvNet takes advantage of modern developments in DNNs. The importance scores of input variables that are based on derivatives with respect to the inputs can be efficiently computed by functions in deep-learning packages such as TensorFlow, PyTorch, and Theano. Moreover, advances in optimization techniques and computation platforms have made the training of DNNs highly scalable. In particular, DNNs can accommodate a large number of input variables, which enables the introduction of surrogate variables.\n\nGiven a dataset, SurvNet may select different sets of significant variables in different runs owing to the randomness originating from the generation of surrogate variables and the training of networks (e.g., the random initial values of weights). 
While the former is unique to SurvNet, the latter is ubiquitous in any application of neural networks. The randomness caused by generating surrogate variables may be lowered by, for example, using a larger number of surrogate variables or assembling results from multiple runs, but this randomness should not be a major concern if it is not much larger than the inevitable randomness coming from network training. To study this, we take Dataset 5 as an example. Using $\\eta^*=0.1$ and $\\varepsilon=1$, we ran SurvNet 25 times, and found that SurvNet selected $114.16\\pm11.36$ variables; and the overlapping proportion of the selected variables in each pair of realizations was approximately 0.77. These results reflected both sources of randomness. Then we fixed the surrogate variables in each realization, and SurvNet selected $118.32\\pm7.94$ variables, with the overlapping proportion of the selected variables in each pair of realizations around 0.79. This indicates that for this dataset, the randomness brought by surrogate variables was much less than that by the training of networks. And, hopefully as reassurance, some other well-known techniques for statistical testing and variable selection, such as permutation tests and bootstrap tests (and especially, parametric bootstrap tests), also have extra randomness caused by permutations or random number generation, but they are still very widely used.\n\nNext we discuss how many surrogate variables should be generated. In all experiments in this paper, we simply set the number of surrogate variables ($q$) to be the same as the number of original variables ($p$). A larger $q$ may lower the randomness brought by the surrogate variables and thus give a more stable selection of variables and a more accurate estimate of FDR. These improvements can be noticeable and worth pursuing when the number of original variables is small. On the other hand, a larger number of surrogate variables may increase the computational load. As a rule of thumb, we recommend using $q=p$ for datasets with moderate to large sample size, and $q$ can be a few times larger than $p$ if $p$ is small and be smaller than $p$ if $p$ is very large.\n\nAlthough variable selection is critical to many real applications and is often considered one of the most fundamental problems in machine learning \\cite{trevor2009elements, tibshirani2015statistical}, it is worth noting that this task does not apply to certain problems or certain types of DNNs. As an example, for some image datasets like ImageNet \\cite{deng2009imagenet}, deep convolutional neural networks are often a good choice due to their translation invariance characteristics, as the object of interest, such as a dog, may appear at any position in an image and thus theoretically every pixel should be relevant. Also, in the area of natural language processing, where recurrent neural networks are often used, the number of input variables (i.e. the length of input sequence) is not fixed and variable selection makes little sense.\n\nThe main aim of variable selection is to identify significant variables, which may, for example, shed light on the mechanisms of biological processes or guide further experimental validation. Apart from that, an additional aim may be to improve the classification accuracy. Although we did observe an improvement of generalization accuracy on all our simulated and real datasets, such an improvement is not guaranteed even if the variable selection procedure works perfectly. 
In some datasets, apart from a set of significant variables, all other variables are almost completely irrelevant to the outcome, and variable selection may give extra power in prediction. However, in some other datasets, the relevances of variables are not polarized; there are many variables each having a very small influence on the output, but their cumulative contribution is non-negligible. For these datasets, such variables are likely to be ruled out during selection since it is hard to confidently determine their individual significance, but ignoring all of them could cause a loss of prediction power.\n\n\\section*{Methods}\n\n\\subsection*{Measures of variable importance}\n\n\\subsubsection*{Notation}\nWe use a tuple ($\\bm{x}$,$\\bm{y}$) to represent the input and the output of the network, with $\\bm{y}$ being either one-dimensional or multi-dimensional. $x_j$ denotes the $j^\\mathrm{th}$ component of $\\bm{x}$, namely the $j^\\mathrm{th}$ variable, and ($\\bm{x}^{(i)}$,$\\bm{y}^{(i)}$) ($i=1,\\ldots,n$) is the $i^\\mathrm{th}$ sample, where $n$ is the total number of samples (in the training set). Given a proper form of the loss $L(\\cdot,\\cdot)$, the loss function is $L^*=\\sum_{i=1}^n L(\\bm{y}^{(i)},f(\\bm{x}^{(i)}))$, where $f$ denotes the output function of the network. The most popular choices for $L(\\cdot,\\cdot)$ are the squared error loss for regression problems and the cross-entropy loss for classification problems.\n\n\\subsubsection*{Existing measures}\nMany statistics have been proposed to measure the importance of variables in neural networks, and they generally fall into two categories \\cite{tetko_neural_1996, steppe_feature_1997}.\n\nOne category of methods estimate the importance of $x_j$, denoted by $S_j$, based on the magnitudes of the connection weights in the network \\cite{sen_predicting_1995, yacoub_hvs:_1997, garson_interpreting_1991, nath_determining_1997, gevrey_review_2003}. A simple example is the sum of absolute values of input weights \\cite{sen_predicting_1995}, but larger values of weights in the input layer do not mean greater importance if connections in hidden layers have small weights, and a better alternative is to replace the input weights with the products of the weights on each path from this input to the output \\cite{yacoub_hvs:_1997}. These measures were developed for networks with only one hidden layer, and they are unlikely to work well for deeper networks as the outgoing weights of a neuron do not reflect its importance once the neuron is inactive (e.g., when the input of a sigmoid neuron is far from zero or the input of a ReLU neuron is negative).\n\nThe other category of methods estimate $S_j$ by the sum of influences of the input weights on the loss function, i.e. $S_j=\\sum_{k \\in \\Omega_j} \\delta L^*_k$, where $\\Omega_j$ is the set of outgoing weights from the $j^\\mathrm{th}$ input neuron, and $\\delta L^*_k$ is the increment of the loss function caused by the removal of weight $w_k$ \\cite{tetko_neural_1996}. $\\delta L^*_k$ can be approximated by a Taylor series of the loss function using first-order terms \\cite{mozer_skeletonization:_1989, karnin_simple_1990} or second-order terms \\cite{lecun_optimal_1990, cibas_variable_1994, hassibi_second_1993}. However, it is unclear why $S_j$ equals the (unweighted) sum of $\\delta L^*_k$'s.\n\nApart from these two major categories of measures, it was also proposed to use $S_j=\\frac{\\partial f}{\\partial x_j}$, i.e. 
$S_j=\\frac{\\partial y}{\\partial x_j}$, when the output $y$ is one-dimensional \\cite{dimopoulos_use_1995,dimopoulos_neural_1999}. But it is unclear how $S_j$ should be defined when there are multiple output units. Let $y_1,\\ldots,y_K$ be the output values of $K$ output units, and one definition of $S_j$ was given by $S_j=\\sum_{k=1}^{K} |\\frac{\\partial y_k}{\\partial x_j}|$ \\cite{ruck_feature_1990}. However, using this summation seems problematic in some cases, especially when $y_1,\\ldots,y_K$ are the outputs of softmax functions.\n\n\\subsubsection*{Our new measure}\nWe propose a simple and direct measure of the importance of variable $j$ based on $\\frac{\\partial L}{\\partial x_j}$, which describes how the loss changes with $x_j$. There are a few advantages of using $\\frac{\\partial L}{\\partial x_j}$. First, regardless of the structure of the network and whether the output(s) is\/are continuous or categorical, $L$ is always well defined since it is the target for the optimization\/training of the network. Thus the proposed measure is applicable to a wide variety of networks. Second, no matter how many output units there are, $L$ is always a scalar and hence $\\frac{\\partial L}{\\partial x_j}$ is always a scalar. There is no difficulty in combining effects from multiple output units. Third, $\\frac{\\partial L}{\\partial x_j}$ is easily computable with the backpropagation method, and popular frameworks\/libraries for DNN computations (e.g., TensorFlow, PyTorch and Theano) all use differentiators that efficiently compute partial derivatives (gradients) of arbitrary forms.\n\nNote that $\\frac{\\partial L}{\\partial x_j}$ is a function of the tuple ($\\bm{x}$,$\\bm{y}$), and hence it is natural to estimate it by its mean over all observations in the training set. To avoid cancellation of positive and negative values, we measure the importance of $x_j$ by the mean of absolute values\n\\begin{equation}\n\tS_j=\\frac{1}{n} \\sum_{i=1}^n |\\frac{\\partial L}{\\partial x_j}(\\bm{y}^{(i)},f(\\bm{x}^{(i)}))|,\n\t\\label{eq:Sj_L1}\n\\end{equation}\nor the mean of squares\n\\begin{equation}\n\tS_j=\\frac{1}{n} \\sum_{i=1}^n \\frac{\\partial L}{\\partial x_j}(\\bm{y}^{(i)},f(\\bm{x}^{(i)}))^2,\n\t\\label{eq:Sj_L2}\n\\end{equation}\nwhere $\\frac{\\partial L}{\\partial x_j}(\\bm{y}^{(i)},f(\\bm{x}^{(i)}))$ is the value of $\\frac{\\partial L}{\\partial x_j}$ at the $i^\\mathrm{th}$ training sample.\n\nThe importance scores given by equation \\ref{eq:Sj_L1} and equation \\ref{eq:Sj_L2} implicitly assume that all the input values have similar range, which is typically the case for DNNs, since it is common practice to standardize\/scale the variables before supplying them to the network for the sake of faster and more stable training of the network \\cite{bishop1995neural, lecun2012efficient}. If this is not the case, we suggest the score in equation \\ref{eq:Sj_L1} be multiplied by the (sample) standard deviation of $x_j$ and the score in equation \\ref{eq:Sj_L2} be multiplied by the (sample) variance of $x_j$.\n\nNote that in the case of multiple linear regression, $L = \\frac{1}{2} (y-\\hat{y})^2 = \\frac{1}{2} (y-\\sum_j \\beta_j x_j)^2$, where $y$ is a scalar response and $\\beta_j$ is the $j^\\mathrm{th}$ regression coefficient; we then have $\\frac{\\partial L}{\\partial x_j}=-(y-\\hat{y})\\beta_j$. Thus, $S_j$ is defined as $|\\beta_j| \\cdot \\frac{1}{n} \\sum_{i=1}^n |e_i|$ or $\\beta_j^2 \\cdot \\frac{1}{n} \\sum_{i=1}^n e_i^2 $ by (1) and (2) respectively, where $e_i=y^{(i)}-\\hat{y}^{(i)}$. 
Note that $S_j$ is proportional to $|\\beta_j|$ or $\\beta_j^2$ as $\\frac{1}{n} \\sum_{i=1}^n |e_i|$ and $\\frac{1}{n} \\sum_{i=1}^n e_i^2$ are constants. Therefore, both of them are reasonable measures of the contribution of the $j^\\mathrm{th}$ variable, and they are actually equivalent in this case. The meaning of $S_j$ in some other special cases, such as linear regression with multiple outputs and logistic regression with one or multiple outputs, is elaborated in Supplementary Materials.\n\nAll results in the main text were obtained using equation \\ref{eq:Sj_L2}. Results obtained using equation \\ref{eq:Sj_L1} (given in Supplementary Materials) are not significantly different.\n\n\\subsection*{Elimination procedure with FDR control}\nIn this section, we first introduce how we estimate FDR and then talk about how we use this estimate to determine the number of variables to eliminate at each step.\n\n\\subsubsection*{Introduction of surrogate variables}\nThe key to estimating FDR \\cite{storey2003statistical} is to estimate\/generate the null distribution of the test statistic. In our case, it is to obtain the distribution of the importance score $S_j$ defined by equation \\ref{eq:Sj_L2} or equation \\ref{eq:Sj_L1} for variables that are not significant. Since the network is a complicated and highly nonlinear model, a theoretical distribution that applies to various network structures and various types of data may not exist. This null distribution needs to be obtained for the network and the data in hand.\n\nHowever, it is usually unknown which variables are truly null. If we construct the null distribution by permuting the output values of the data, it seems inevitable to train multiple networks from scratch in parallel. For this reason, we propose to introduce\/add a number of variables that are known\/generated to be null. We call these variables ``surrogate null variables'' (or ``surrogate variables'' for short). These variables will be concatenated with the original variables to form a larger data matrix.\n\nTo be precise, suppose there are $p$ original variables and $n$ training samples (including validation samples). Then after we add $q$ surrogate variables, the new data matrix will be of size $n\\times(p+q)$, which combines the original $n\\times p$ data matrix $\\bm{X}$ with an $n\\times q$ data matrix for surrogate variables $\\bm{X}_s$. It is assumed that the original variables are distributed in similar ranges or have been standardized, which is a suggested pre-processing step as it benefits the training of the network, and the elements in $\\bm{X}_s$ are sampled with replacement (or without replacement when $q \\le p$) from the elements in $\\bm{X}$. As a result, the $q$ surrogate variables are null, and their importance scores give the null distribution.\n\nWe recommend $q$ to be on the same scale as $p$ (see Conclusions and Discussion for a more detailed discussion about the choice of $q$). For convenience, $q$ takes the same value as $p$ in all experiments in this paper. In this case, the elements in $\\bm{X}_s$ can be generated by permuting the elements in $\\bm{X}$.\n\nThe selection procedure of SurvNet starts with using all $p+q$ variables as inputs. Then at each step, it eliminates a number of the least important variables, including both original variables and surrogate variables. 
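Computing the importance scores of all $p+q$ inputs at each step is inexpensive. As an illustration (a sketch in modern TensorFlow, whereas the experiments above used TensorFlow 1.8 and \\texttt{tf.gradients}; the function name is ours), the score of equation \\ref{eq:Sj_L2} can be obtained as follows.\n\\begin{verbatim}\nimport tensorflow as tf\n\ndef importance_scores(model, X, y, loss_fn):\n    # S_j = mean over samples of (dL\/dx_j)^2, i.e. equation (2).\n    # loss_fn must use reduction='sum' so that row i of the gradient\n    # of the batch loss equals dL\/dx for sample i alone.\n    Xt = tf.convert_to_tensor(X, dtype=tf.float32)\n    with tf.GradientTape() as tape:\n        tape.watch(Xt)\n        loss = loss_fn(y, model(Xt))\n    g = tape.gradient(loss, Xt)        # shape: (n, number of inputs)\n    return tf.reduce_mean(tf.square(g), axis=0).numpy()\n\\end{verbatim}\nHere \\texttt{model} is any network mapping inputs to outputs and \\texttt{loss\\_fn} is, e.g., \\texttt{tf.keras.losses.CategoricalCrossentropy(reduction='sum')}; the per-sample property of the gradient assumes no batch-coupled layers such as batch normalization.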
The remaining variables are used to continue training the network, and the elimination stops once the FDR falls below the cutoff.\n\n\\subsubsection*{FDR estimation}\nWe now consider how to estimate FDR at any given time during the selection process. Suppose $r$ variables are retained in the network, among which $r_0$ are surrogate variables; then a proportion $r_0\/q$ of the surrogate (null) variables has not been eliminated yet. Accordingly, one would expect that roughly the same proportion of null original variables still exist at this time, that is, approximately $\\frac{r_0}{q} \\cdot p_0$ variables among the remaining original variables are falsely called significant, where $p_0$ is the number of null variables in the original dataset. Thus, an estimate of the FDR of the $r-r_0$ original variables is given by\n\\begin{equation}\n\t\\tilde{\\eta}=\\frac{\\frac{r_0}{q} \\cdot p_0}{r-r_0}.\n\\end{equation}\nIn practice, however, $p_0$ is unknown, and a common strategy is to replace it with its upper bound $p$ \\cite{storey2003statistical}. Hence we have the following estimated FDR,\n\\begin{equation}\n\t\\hat{\\eta}=\\frac{\\frac{r_0}{q} \\cdot p}{r-r_0}=\\frac{r_0}{r-r_0} \\cdot \\frac{p}{q}.\n\\label{eq:est_fdr}\n\\end{equation}\nApparently, when $\\hat{\\eta}$ is controlled to be no greater than a pre-specified threshold $\\eta^*$, $\\tilde{\\eta}$ is guaranteed to be no greater than $\\eta^*$ as well. When $q=p$, $\\hat{\\eta}$ can be simplified as $\\frac{r_0}{r-r_0}$.\n\n\n\\subsubsection*{Determination of the number of variables to eliminate}\nIf the estimated FDR $\\hat{\\eta}$ (given by equation \\ref{eq:est_fdr}) is less than or equal to the FDR cutoff $\\eta^*$, the variable selection procedure stops. Otherwise, the procedure proceeds, and we want to decide how many variables to eliminate among the $r$ variables that are still in the model. Let this number be $m$, and the determination of $m$ is based on the following considerations. On one hand, we expect that the elimination process is time-saving and reaches the FDR threshold quickly; on the other hand, we want to avoid eliminating too many variables at each step, in which case the FDR may fall much lower than the threshold. We have\n\\begin{thm}\n\tIf $m$ variables are further eliminated from the current model, the smallest possible estimated FDR after this step of elimination is\n\t\\begin{equation}\n\t\\min \\hat{\\eta}^{\\rm new}= (1-\\frac{m}{r_0})\\cdot \\hat{\\eta},\n\t\\label{eq:rm_new}\n\t\\end{equation}\n\twhere $r_0$ is the number of surrogate variables that are in the model before this step of elimination.\n\\end{thm}\n\\begin{proof}\n\tSuppose there are $m_0$ surrogate variables among the $m$ variables to be eliminated, $0 \\le m_0 \\le m$; then, according to equation \\ref{eq:est_fdr}, $\\hat{\\eta}$ will be updated to\n\t\\begin{equation}\n\t\\hat{\\eta}^{\\rm new}=\\frac{r_0-m_0}{r-r_0-(m-m_0)} \\cdot \\frac{p}{q}.\n\t\\end{equation}\n\tSince $\\hat{\\eta}^{\\rm new}$ is monotonically decreasing with respect to $m_0$ for any fixed $m$, we have\n\t\\begin{equation}\n\t\\min \\hat{\\eta}^{\\rm new}=\\hat{\\eta}^{\\rm new}|_{m_0=m}=\\frac{r_0-m}{r-r_0} \\cdot \\frac{p}{q}.\n\t\\label{eq:rm_new_int}\n\t\\end{equation}\n\tEquation \\ref{eq:est_fdr} indicates that $\\frac{1}{r - r_0} \\cdot \\frac{p}{q}=\\frac{\\hat{\\eta}}{r_0}$. 
Plugging it into \\ref{eq:rm_new_int}, we have\n\t\\[\n\t\\min \\hat{\\eta}^{\\rm new}=(r_0-m) \\cdot \\frac{\\hat{\\eta}}{r_0} = (1-\\frac{m}{r_0})\\cdot \\hat{\\eta}.\n\t\\]\n\\end{proof}\n\nIt follows from equation \\ref{eq:rm_new} that $\\min \\hat{\\eta}^{\\rm new} = \\eta^*$ when $m=(1-\\frac{\\eta^*}{\\hat{\\eta}}) \\cdot r_0$. Also, note that $\\min \\hat{\\eta}^{\\rm new}$ is a monotonically decreasing function of $m$. Therefore, when $m < (1-\\frac{\\eta^*}{\\hat{\\eta}}) \\cdot r_0$, $\\min \\hat{\\eta}^{\\rm new} > \\eta^*$ and thus $\\hat{\\eta}^{\\rm new} > \\eta^*$. That is,\n\\begin{cor}\n\tWhen $m < (1-\\frac{\\eta^*}{\\hat{\\eta}}) \\cdot r_0$, the estimated FDR after this step of elimination $\\hat{\\eta}^{\\rm new}$ is guaranteed to be still greater than the FDR cutoff $\\eta ^*$.\n\\end{cor}\nOn the other hand, when $m \\ge (1-\\frac{\\eta^*}{\\hat{\\eta}}) \\cdot r_0$, $\\min \\hat{\\eta}^{\\rm new} \\le \\eta^*$. That is,\n\\begin{cor}\n\tWhen $m \\ge (1-\\frac{\\eta^*}{\\hat{\\eta}}) \\cdot r_0$, the estimated FDR after this step of elimination $\\hat{\\eta}^{\\rm new}$ may reach the FDR cutoff $\\eta ^*$.\n\\end{cor}\n\nCorollary 1 says that $m$ values less than $(1-\\frac{\\eta^*}{\\hat{\\eta}}) \\cdot r_0$ are ``safe'' but the elimination will not stop after this step. Corollary 2 says that $m$ values much larger than $(1-\\frac{\\eta^*}{\\hat{\\eta}}) \\cdot r_0$ may not be ``safe'' anymore. Taking both into consideration, we choose the step size to be\n\\begin{equation}\n m=\\lceil (1-\\frac{\\eta^*}{\\hat{\\eta}}) \\cdot r_0 \\rceil,\n \\label{eq:m}\n\\end{equation}\nwhere $\\lceil \\cdot \\rceil$ denotes ``ceiling'', i.e. the smallest integer that is no less than $\\cdot$. Notice that when $\\hat{\\eta} > \\eta^*$, which is the premise of continuing to eliminate variables, $1-\\frac{\\eta^*}{\\hat{\\eta}}>0$, and $r_0>0$ as well since $\\hat{\\eta}$ is positive. Thus $m$ is ensured to be no less than 1 at each step of variable elimination.\n\nThis form of $m$ seems to be quite reasonable for the following reasons. First, if there still remain a great number of surrogate variables in the network, clearly more of them should be taken out. As $r_0$ decreases, $m$ will be smaller, and this makes sense since one should be more careful in further elimination. Second, when $\\hat{\\eta}$ is much higher than $\\eta^*$, one will naturally expect a larger $m$ so that the updated estimated FDR will approach this cutoff.\n\nWith the $m$ determined by equation \\ref{eq:m}, there is a chance that the estimated FDR will reach the cutoff in only one step. Often such a fast pace is not preferred, as removing too many inputs at a time may make the warm start of the training no longer warm. Hence we may introduce an ``elimination rate'' $\\varepsilon$, which is a constant between 0 and 1, and take\n\\begin{equation}\n m=\\lceil \\varepsilon \\cdot (1-\\frac{\\eta^*}{\\hat{\\eta}}) \\cdot r_0 \\rceil.\n\\end{equation}\n\n\n\\section*{Author Contributions}\nJ.L. conceived the study, J.L. and Z.S. proposed the methods, Z.S. implemented the methods and conducted the data analysis, Z.S. drafted the manuscript, J.L. substantively revised it.\n\n\\section*{Competing Interests statement}\nThe authors declare no competing interests.\n\n\\bibliographystyle{unsrt}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} |
|
{"text":"\\section{Introduction}\n\nOne of the fundamental results in commutative algebra is the irreducible decomposition theorem \\cite[Satz II and Satz IV]{N21} proved by Emmy Noether in 1921. In this paper she had showed that any ideal $I$ of a Noetherian ring $R$ can be expressed as a finite intersection of irreducible ideals, and the number of irreducible ideals in such an irredundant irreducible decomposition is independent of the choice of the decomposition. This number is then called the index of reducibility of $I$ and denoted by $\\mathrm{ir}_R(I)$. Although irreducible ideals belong to basic objects of commutative algebra, there are not so much papers on the study of irreducible ideals and the index of reducibility. Maybe the first important paper on irreducible ideals after Noether's work is of W. Gr\\\"{o}bner \\cite{G35} (1935). Since then there are interesting works on the index of reducibility of parameter ideals on local rings by D.G. Northcott \\cite{No57} (1957), S. Endo and M. Narita \\cite{EN64} (1964) or S. Goto and N. Suzuki \\cite{GS84} (1984). Especially, W. Heinzer, L.J. Ratliff and K. Shah propounded in a series of papers \\cite{HRS94}, \\cite{HRS95-1}, \\cite{HRS95-2}, \\cite{HRS95-3} a theory of maximal embedded components which is useful for the study of irreducible ideals. It is clear that the concepts of irreducible ideals, the index of reducibility and maximal embedded components can be extended for finitely generated modules. Then the purpose of this paper is to investigate the index of reducibility of submodules of a finitely generated $R$-module $M$ concerning its maximal embedded components as well as the behaviour of the function ir$_M(I^nM)$, where $I$ is an ideal of $R$, and to present applications of the index of reducibility for studying the structure of the module $M$. The paper is divided into 5 sections. Let $M$ be a finitely generated module over a Noetherian ring and $N$ a submodule of $M$. We present in the next section a formula to compute the index of reducibility $\\mathrm{ir}_M(N)$ by using the socle dimension of the module $(M\/N)_{\\frak p}$ for all $\\frak p \\in \\mathrm{Ass}_R(M\/N)$ (see Lemma 2.3). This formula is a generalization of a well-known result which says that $\\mathrm{ir}_M(N) = \\dim_{R\/\\frak m} \\mathrm{Soc}(M\/N)$ provided $(R, \\frak m)$ is a local ring and $\\lambda_R(M\/N) < \\infty$. Section 3 is devoted to answer the following question: When is the index of reducibility of a submodule $N$ equal to the sum of the indices of reducibility of their primary components in a given irredundant primary decomposition of $N$? It turns out here that the notion of maximal embedded components of $N$ introduced by Heinzer, Ratliff and Shah is the key for answering this question (see Theorem \\ref{T3.2}). In Section 4, we consider the index of reducibility $\\mathrm{ir}_M(I^nM)$ of powers of an ideal $I$ as a function in $n$ and show that this function is in fact a polynomial for sufficiently large $n$. Moreover, we can prove that $\\mathrm{bight}_M(I)-1$ is a lower bound and the $\\ell_M(I)-1$ is an upper bound for the degree of this polynomial (see Theorem \\ref{T4.1}), where $\\mathrm{bight}_M(I)$ is the big height and \n $\\ell_M(I)$ is the analytic spread of $M$ with respect to the ideal $I$. However, the degree of this polynomial is still mysterious to us. We can only give examples to show that these bounds are optimal. In the last section, we involve in working out some applications of the index of reducibility. 
A classical result of Northcott \\cite{No57} says that the index of reducibility of a parameter ideal in a Cohen-Macaulay local ring is dependent only on the ring and not on the choice of parameter ideals. We will generalize Northcott's result in this section and get a characterization for the Cohen-Macaulayness of a Noetherian module in terms of the index of reducibility of parameter ideals (see Theorem \\ref{T4.7}).\n\n\\section{Index of reducibility of submodules}\nThroughout this paper $R$ is a Noetherian ring and $M$ is a finitely generated $R$-module. For an $R$-module $L$, $\\lambda_R(L)$ denotes the length of $L$.\n\n\\begin{definition}\\rm A submodule $N$ of $M$ is called an {\\it irreducible submodule} if $N$ cannot be written as an intersection of two properly larger submodules of $M$. The number of irreducible components of an irredundant irreducible decomposition of $N$, which is independent of the choice of the decomposition by a result of Noether \\cite{N21}, is called the {\\it index of reducibility} of $N$ and denoted by $\\mathrm{ir}_M(N)$.\n\\end{definition}\n\n\\begin{remark}\\rm We denote by $\\mathrm{Soc}(M)$ the sum of all simple submodules of $M$. $\\mathrm{Soc}(M)$ is called the socle of $M$. If $R$ is a local ring with the unique maximal ideal $\\frak m$ and $\\frak k = R\/\\frak m$ its residue field, then it is well-known that $\\mathrm{Soc}(M) = 0:_M\\frak m$ is a $\\frak k$-vector space of finite dimension. Let $N$ be a submodule of $M$ with $\\lambda_R(M\/N) < \\infty$. Then it is easy to check that $\\mathrm{ir}_M(N) = \\lambda_R((N:\\frak m)\/N) = \\dim_{\\frak k} \\mathrm{Soc}(M\/N).$\n\\end{remark}\n\nThe following lemma presents a formula for computing the index of reducibility $\\mathrm{ir}_M(N)$ without the requirement that $R$ is local and $\\lambda_R(M\/N) < \\infty$. It should be mentioned here that the first conclusion of the lemma may be known to experts, but we could not find its proof anywhere. So, for completeness, we give a short proof of it. Moreover, from this proof we obtain immediately a second conclusion which is useful for proofs of further results in this paper. For a prime ideal $\\frak p$, we use $k(\\frak p)$ to denote the residue field $ R_{\\frak p}\/\\frak pR_{\\frak p}$ of the local ring $R_{\\frak p}$.\n\n\\begin{lemma}\\label{L2.3} Let $N$ be a submodule of $M$. Then\n $$\\mathrm{ir}_M(N) = \\sum_{\\frak p \\in \\mathrm{Ass}_R(M\/N)} \\dim_{k(\\frak p)} \\mathrm{Soc}(M\/N)_{\\frak p} .$$\nMoreover, for any $\\frak p \\in \\mathrm{Ass}_R(M\/N)$, there is a $\\frak p$-primary submodule $N(\\frak p)$ of $M$ with $\\mathrm{ir}_M(N(\\frak p)) = \\dim_{k(\\frak p)} \\mathrm{Soc}(M\/N)_{\\frak p} $ such that\n$$N = \\bigcap_{\\frak p \\in \\mathrm{Ass}_R(M\/N)}N(\\frak p)$$\nis an irredundant primary decomposition of $N$.\n\\end{lemma}\n\\begin{proof}\n Passing to the quotient $M\/N$ we may assume without any loss of generality that $N=0$. Let $\\mathrm{Ass}_R(M) = \\{ \\frak p_1,..., \\frak p_n\\}$. We set $t_i = \\dim_{k(\\frak p_i)}\\mathrm{Soc}(M_{\\frak p_i})$ and $t = t_1 + \\cdots + t_n$. Let $\\mathcal{F} = \\{\\frak p_{11},..., \\frak p_{1t_1}, \\frak p_{21},..., \\frak p_{2t_2}, ..., \\frak p_{n1},..., \\frak p_{nt_n}\\}$ be a family of prime ideals of $R$ such that $\\frak p_{i1} = \\cdots = \\frak p_{it_i} = \\frak p_i$ for all $i = 1,..., n$. Denote by $E(M)$ the injective envelope of $M$. 
Then, since the number of copies of $E(R\/\\frak p_i)$ appearing in a direct sum decomposition of $E(M)$ is exactly the $0$-th Bass number $t_i = \\dim_{k(\\frak p_i)}\\mathrm{Soc}(M_{\\frak p_i})$, we can write\n $$E(M) = \\bigoplus_{i=1}^n E(R\/\\frak p_i)^{t_i} = \\bigoplus_{\\frak p_{ij} \\in \\mathcal{F}}E(R\/\\frak p_{ij}).$$\n Let $$ \\pi_i: \\oplus_{i=1}^n E(R\/\\frak p_i)^{t_i} \\to E(R\/\\frak p_i)^{t_i} \\ \\ \\mathrm{and} \\ \\ \\pi_{ij}: \\oplus_{\\frak p_{ij} \\in \\mathcal{F}}E(R\/\\frak p_{ij}) \\to E(R\/\\frak p_{ij})$$ be the canonical projections for all $i = 1,..., n$ and $j = 1,..., t_i$, and set $N(\\frak p_i) = M \\cap \\ker \\pi_i$, $N_{ij} = M \\cap \\ker \\pi_{ij}$. Since the $E(R\/\\frak p_{ij})$ are indecomposable, the $N_{ij}$ are irreducible submodules of $M$. Then it is easy to check that $N(\\frak p_i)$ is a $\\frak p_i$-primary submodule of $M$ having an irreducible decomposition $N(\\frak p_i) = N_{i1} \\cap \\cdots \\cap N_{it_i}$ for all $i=1,\\ldots , n$. \n Moreover, because of the minimality of $E(M)$ among injective modules containing $M$, the finite intersection $$0 = N_{11} \\cap \\cdots \\cap N_{1t_1} \\cap \\cdots \\cap N_{n1} \\cap \\cdots \\cap N_{nt_n}$$ \n is an irredundant irreducible decomposition of $0$. Therefore $0 = N(\\frak p_1) \\cap \\cdots \\cap N(\\frak p_n)$ is an irredundant primary decomposition of $0$ with\n $\\mathrm{ir}_M(N(\\frak p_i)) = \\dim_{k(\\frak p_i)} \\mathrm{Soc}(M)_{\\frak p_i} $ and $\\mathrm{ir}_M(0) = \\sum_{\\frak p \\in \\mathrm{Ass}(M)} \\dim_{k(\\frak p)} \\mathrm{Soc}(M)_{\\frak p}$\n as required.\n \\end{proof}\n\n\\section{Index of reducibility of maximal embedded components}\n\nLet $N$ be a submodule of $M$ and $\\frak p \\in \\mathrm{Ass}_R(M\/N)$. We use $\\bigwedge_{\\frak p}(N)$ to denote the set of all $\\frak p$-primary submodules of $M$ which appear in an irredundant primary decomposition of $N$. We say that a $\\frak p$-primary submodule $Q$ of $M$ is a $\\frak p$-primary component of $N$ if $Q \\in \\bigwedge_{\\frak p}(N)$, and $Q$ is said to be a {\\it maximal embedded component} (or more precisely, a $\\frak p$-maximal embedded component) of $N$ if $Q$ is a maximal element in the set $\\bigwedge_{\\frak p}(N)$. It should be mentioned that the notion of maximal embedded components was first introduced for commutative rings by Heinzer, Ratliff and Shah. They proved many interesting properties of maximal embedded components in the papers \\cite{HRS94}, \\cite{HRS95-1}, \\cite{HRS95-2}, \\cite{HRS95-3} and showed that this notion is an important tool for studying irreducible ideals.\n\nWe now recall a result of Y. Yao \\cite{Y02} which will be used several times in the proof of the next theorem.\n\\begin{theorem}[Yao \\cite{Y02}, Theorem 1.1] \\label{T3.1} Let $N$ be a submodule of $M$, $\\mathrm{Ass}_R(M\/N) = \\{\\frak p_1,..., \\frak p_n\\}$ and $Q_i \\in \\bigwedge_{\\frak p_i}(N)$, $i = 1,..., n$. Then $N = Q_1 \\cap \\cdots \\cap Q_n$ is an irredundant primary decomposition of $N$.\n\\end{theorem}\nThe following theorem is the main result of this section.\n\\begin{theorem}\\label{T3.2} Let $N$ be a submodule of $M$ and $\\mathrm{Ass}_R(M\/N) = \\{\\frak p_1,..., \\frak p_n\\}$. Let $N = Q_1 \\cap \\cdots \\cap Q_n$ be an irredundant primary decomposition of $N$, where $Q_i$ is $\\frak p_i$-primary for all $i= 1, \\ldots , n$. 
Then $\\mathrm{ir}_M(N) = \\mathrm{ir}_M(Q_1) + \\cdots + \\mathrm{ir}_M(Q_n)$ if and only if $Q_i$ is a $\\frak p_i$-maximal embedded component of $N$ for all embedded associated prime ideals $\\frak p_i$ of $N$.\n \\end{theorem}\n \\begin{proof} As in the proof of Lemma \\ref{L2.3}, we may assume that $N = 0$.\\\\\n {\\it Sufficient condition}: Let $0 = Q_1 \\cap \\cdots \\cap Q_n$ be an irredundant primary decomposition of the zero submodule $0$, where $Q_i$ is maximal in $\\bigwedge_{\\frak p_i}(0)$, $i = 1,..., n$. Set $\\mathrm{ir}_M(Q_i) = t_i$ and let $Q_i = Q_{i1} \\cap \\cdots \\cap Q_{it_i}$ be an irredundant irreducible decomposition of $Q_i$. Suppose that\n $$t_1 + \\cdots + t_n = \\mathrm{ir}_M(Q_1) + \\cdots + \\mathrm{ir}_M(Q_n) > \\mathrm{ir}_M(0).$$\n Then the irreducible decomposition $0 = \\bigcap_{i,j} Q_{ij}$ is redundant, so there exist an $i \\in \\{1,..., n \\}$ and a $j \\in \\{1,..., t_i\\}$ such that\n $$Q_1 \\cap \\cdots \\cap Q_{i-1} \\cap Q_i' \\cap Q_{i+1} \\cap \\cdots \\cap Q_n \\subseteq Q_{ij},$$\n where $Q_i' = Q_{i1} \\cap \\cdots \\cap Q_{i(j-1)} \\cap Q_{i(j+1)} \\cap \\cdots \\cap Q_{it_i} \\supsetneqq Q_i$. Therefore\n $$Q_i' \\bigcap (\\cap_{k \\neq i} Q_k) = Q_i \\bigcap (\\cap_{k \\neq i} Q_k) = 0$$\n is also an irredundant primary decomposition of $0$. Hence $Q_i' \\in \\bigwedge_{\\frak p_i}(0)$, which contradicts the maximality of $Q_i$ in $\\bigwedge_{\\frak p_i}(0)$. Thus $\\mathrm{ir}_M(0) = \\mathrm{ir}_M(Q_1) + \\cdots + \\mathrm{ir}_M(Q_n)$ as required.\\\\\n {\\it Necessary condition}: Assume that $0 = Q_1 \\cap \\cdots \\cap Q_n$ is an irredundant primary decomposition of $0$ such that $\\mathrm{ir}_M(0) = \\mathrm{ir}_M(Q_1) + \\cdots + \\mathrm{ir}_M(Q_n)$. We have to prove that each $Q_i$ is maximal in $\\bigwedge_{\\frak p_i}(0)$ for all $i = 1,..., n$. Indeed, let $N_1=N(\\frak p_1),..., N_n=N(\\frak p_n)$ be primary submodules of $M$ as in Lemma \\ref{L2.3}, that is, $N_i \\in \\bigwedge_{\\frak p_i}(0)$, $0 = N_1 \\cap \\cdots \\cap N_n$ and $\\mathrm{ir}_M (0) = \\sum_{i=1}^n \\mathrm{ir}_M(N_i) = \\sum_{i=1}^n \\dim_{k(\\frak p_i)} \\mathrm{Soc}(M_{\\frak p_i})$. Then by Theorem \\ref{T3.1} we see for any $1 \\leq i \\leq n$ that\n $$0 = N_1 \\cap \\cdots \\cap N_{i-1} \\cap Q_i \\cap N_{i+1} \\cap \\cdots \\cap N_n = N_1 \\cap \\cdots \\cap N_n$$\nare two irredundant primary decompositions of $0$. Therefore\n$$\\mathrm{ir}_M(Q_i) + \\sum_{j \\neq i} \\mathrm{ir}_M(N_j) \\geq \\mathrm{ir}_M(0) = \\sum_{j =1}^n \\mathrm{ir}_M(N_j),$$\nand so $\\mathrm{ir}_M(Q_i) \\geq \\mathrm{ir}_M(N_i) = \\dim_{k(\\frak p_i)} \\mathrm{Soc}(M_{\\frak p_i})$ by Lemma \\ref{L2.3}.\\\\\nSimilarly, it follows from the two irredundant primary decompositions\n$$0 = Q_1 \\cap \\cdots \\cap Q_{i-1} \\cap N_i \\cap Q_{i+1} \\cap \\cdots \\cap Q_n = Q_1 \\cap \\cdots \\cap Q_n$$\nand the hypothesis that $\\mathrm{ir}_M(N_i) \\geq \\mathrm{ir}_M(Q_i)$. Thus we get\n$$\\mathrm{ir}_M(Q_i) = \\mathrm{ir}_M(N_i) = \\dim_{k(\\frak p_i)} \\mathrm{Soc}(M_{\\frak p_i})$$\nfor all $i = 1, ..., n$. Now, let $Q_i'$ be a maximal element of $\\bigwedge_{\\frak p_i}(0)$ with $Q_i \\subseteq Q_i'$. It remains to prove that $Q_i = Q_i'$. By localizing at $\\frak p_i$, we may assume that $R$ is a local ring with the unique maximal ideal $\\frak m = \\frak p_i$. 
Then, since $Q_i$ is an $\\frak m$-primary submodule, the equality above gives\n$$\\lambda_R((Q_i:\\frak m)\/Q_i) = \\mathrm{ir}_M(Q_i) = \\dim_{\\frak k}\\mathrm{Soc}(M) = \\lambda_R(0:_M \\frak m) = \\lambda_R \\big( (Q_i + 0:_M \\frak m)\/Q_i\\big).$$\nIt follows that $Q_i: \\frak m = Q_i + 0:_M \\frak m$. If $Q_i \\subsetneqq Q_i'$, there is an element $x \\in Q_i' \\setminus Q_i$. Then we can find a positive integer $l$ such that $\\frak m^l x \\subseteq Q_i$ but $\\frak m^{l-1} x \\nsubseteq Q_i$. Choose $y \\in \\frak m^{l-1} x \\setminus Q_i$. We see that\n$$y \\in Q_i' \\cap (Q_i : \\frak m) = Q_i' \\cap (Q_i + 0:_M \\frak m) = Q_i + (Q_i' \\cap 0:_M \\frak m).$$\nSince $0:_M \\frak m \\subseteq \\cap_{j \\neq i} Q_j$ and $Q_i' \\cap (\\cap_{j \\neq i} Q_j) = 0$ by Theorem \\ref{T3.1}, we have $Q_i' \\cap (0:_M \\frak m) = 0$. Therefore $y \\in Q_i$, contradicting the choice of $y$. Thus $Q_i = Q_i'$ and the proof is complete.\n \\end{proof}\n\nThe following characterization of maximal embedded components of $N$ in terms of the index of reducibility follows immediately from the proof of Theorem \\ref{T3.2}.\n\\begin{corollary}\\label{T3.3}\nLet $N$ be a submodule of $M$ and $\\frak p$ an embedded associated prime ideal of $N$. Then an element $Q \\in \\bigwedge_{\\frak p}(N)$ is a maximal embedded component of $N$ if and only if $\\mathrm{ir}_M(Q) = \\dim_{k(\\frak p)} \\mathrm{Soc}(M\/N)_{\\frak p}$.\n\\end{corollary}\n\nAs consequences of Theorem \\ref{T3.2}, we can recover several results on maximal embedded components proved by Heinzer, Ratliff and Shah. The following corollary is one of those results, stated for modules. For a submodule $L$ of $M$ and a prime ideal $\\frak p$, we denote by $\\mathrm{IC}_{\\frak p}(L)$ the set of all irreducible $\\frak p$-primary submodules of $M$ that appear in an irredundant irreducible decomposition of $L$, and by $\\mathrm{ir}_{\\frak p}(L)$ the number of irreducible $\\frak p$-primary components in an irredundant irreducible decomposition of $L$ (this number is well defined by Noether \\cite[Satz VII]{N21}).\n\n\\begin{corollary}[see \\cite{HRS95-3}, Theorems 2.3 and 2.7] Let $N$ be a submodule of $M$ and $\\frak p$ an embedded associated prime ideal of $N$. Then\n\\begin{enumerate}[{(i)}]\\rm\n\\item {\\it $\\mathrm{ir}_{\\frak p}(N) = \\mathrm{ir}_{\\frak p}(Q) = \\dim_{k(\\frak p)} \\mathrm{Soc}(M\/N)_{\\frak p}$ for any $\\frak p$-maximal embedded component $Q$ of $N$.}\n\\item {\\it $\\mathrm{IC}_{\\frak p}(N) = \\bigcup_Q \\mathrm{IC}_{\\frak p}(Q)$, where the submodule $Q$ in the union runs over all $\\frak p$-maximal embedded components of $N$.}\n\\end{enumerate}\n\\end{corollary}\n\n\\begin{proof}\n(i) follows immediately from the proof of Theorem \\ref{T3.2} and Corollary \\ref{T3.3}.\\\\\n(ii) Let $Q_1 \\in \\mathrm{IC}_{\\frak p}(N)$ and $t_1 = \\dim_{k(\\frak p)} \\mathrm{Soc}(M\/N)_{\\frak p}$. By the hypothesis and (i) there exists an irredundant irreducible decomposition $N= Q_{11}\\cap \\ldots \\cap Q_{1 t_1} \\cap Q_2 \\cap \\ldots \\cap Q_l$ such that $Q_{11} = Q_1$ and $Q_{11}, Q_{12}, \\ldots , Q_{1 t_1}$ are precisely the $\\frak p$-primary submodules appearing in this decomposition. Therefore $Q=Q_{11}\\cap \\ldots \\cap Q_{1 t_1}$ is a maximal embedded component of $N$ by Corollary \\ref{T3.3}, and so $Q_1\\in \\mathrm{IC}_{\\frak p}(Q)$. 
The reverse inclusion follows easily by applying Theorems \\ref{T3.1} and \\ref{T3.2}.\n\\end{proof}\n\n\\section{Index of reducibility of powers of an ideal}\nLet $I$ be an ideal of $R$. It is well known by \\cite{B79} that the set $\\mathrm{Ass}_R(M\/I^nM)$ is stable for sufficiently large $n$ (written $n\\gg 0$ for short). We will denote this stable set by $\\text{A}_M(I)$. The big height, $\\mathrm{bight}_M(I)$, of $I$ on $M$ is defined by\n$$\\mathrm{bight}_M(I)=\\max\\{\\dim_{R_{\\frak p}} M_{\\frak p}\\mid {\\frak p} \\text{ is a minimal prime ideal in } \\mathrm{Ass}_R(M\/IM)\\}.$$\nLet $G(I)=\\bigoplus\\limits_{n\\geq 0} I^n\/I^{n+1}$ be the associated graded ring of $R$ with respect to $I$ and $G_M(I)=\\bigoplus\\limits_{n\\geq 0} I^nM\/I^{n+1}M$ the associated graded $G(I)$-module of $M$ with respect to $I$. If $R$ is a local ring with the unique maximal ideal $\\frak m$, then the analytic spread $\\ell_M(I)$ of $I$ on $M$ is defined by\n$$\\ell_M(I)=\\dim_{G(I)}(G_M(I)\/{\\frak m} G_M(I)).$$\nIf $R$ is not local, the analytic spread $\\ell_M(I)$ is defined by\n$$\n\\begin{aligned}\n\\ell_M(I)=\\max\\{ \\ell_{M_{\\frak m}}(IR_{\\frak m})\\mid {\\frak m} &\\text{ is a maximal ideal and }\n\\\\&\n\\text{ there is a prime ideal }{\\frak p} \\in A_M(I) \\text{ such that } {\\frak p}\\subseteq {\\frak m}\\}.\n\\end{aligned}$$\nWe use $\\ell(I)$ to denote the analytic spread of the ideal $I$ on $R$.\nThe following theorem is the main result of this section.\n\n\\begin{theorem} \\label{T4.1} Let $I$ be an ideal of $R$. Then there exists a polynomial $\\mathrm{Ir}_{M,I}(n)$ with rational coefficients such that $\\mathrm{Ir}_{M,I}(n)=\\mathrm{ir}_M(I^nM)$ for sufficiently large $n$. Moreover, we have\n$$\\mathrm{bight}_M(I)-1\\le \\deg(\\mathrm{Ir}_{M,I}(n))\\le \\ell_M(I)-1.$$\n\\end{theorem}\n\nTo prove Theorem \\ref{T4.1}, we need the following lemma.\n\n\\begin{lemma}\\label{L4.2} Suppose that $R$ is a local ring with the unique maximal ideal $\\frak m$ and let $I$ be an ideal of $R$. Then\n\\begin{enumerate}[{(i)}]\\rm\n \\item {\\it $\\dim_{\\frak k}\\mathrm{Soc}(M\/I^nM)=\\lambda_R(I^nM:{\\frak m}\/I^nM)$ is a polynomial of degree $\\le \\ell_M(I)-1$ for $n\\gg 0$.}\n \\item {\\it Assume that $I$ is an ${\\frak m}$-primary ideal. Then $\\mathrm{ir}_M(I^nM)=\\lambda_R(I^nM:{\\frak m}\/I^nM)$ is a polynomial of degree $\\dim_RM-1$ for $n\\gg 0$.}\n \\end{enumerate}\n \\end{lemma}\n\n \\begin{proof}\n\n(i) Consider the homogeneous submodule $0:_{G_M(I)}{\\frak m}G(I)$ of $G_M(I)$. Since it is a finitely generated graded module over the standard graded ring $G(I)\/{\\frak m}G(I)$,\n$$\\lambda_R\\big((0:_{G_M(I)}{\\frak m}G(I))_n\\big)=\\lambda_R(((I^{n+1}M:{\\frak m})\\cap I^nM)\/I^{n+1}M)$$ is a polynomial for $n\\gg 0$. Using a result of P. Schenzel \\cite[Proposition 2.1]{S98}, which is stated for rings but extends easily to modules, we find a positive integer $l$ such that for all $n\\ge l$, $(0:_M{\\frak m})\\cap I^nM=0$ and\n$$I^{n+1}M:{\\frak m}= I^{n+1-l}(I^l M:{\\frak m})+ 0:_M{\\frak m}.$$\nTherefore \n$$\n\\begin{aligned}\n(I^{n+1}M:{\\frak m})\\cap I^nM&=I^{n+1-l}(I^l M:{\\frak m})+(0:_M{\\frak m})\\cap I^nM\\\\\n&=I^{n+1-l}(I^l M:{\\frak m}).\n\\end{aligned}\n$$\nHence, $\\lambda_R(I^{n+1-l}(I^l M:{\\frak m})\/I^{n+1}M)=\\lambda_R(((I^{n+1}M:{\\frak m})\\cap I^nM)\/I^{n+1}M)$ is a polynomial for $n\\gg 0$. 
It follows that\n$$\\dim_{\\frak k}\\mathrm{Soc}(M\/I^nM) =\\lambda_R((I^n M:{\\frak m})\/I^{n}M)=\\lambda_R(I^{n-l}(I^l M:{\\frak m})\/I^{n}M)+\\lambda_R(0:_M{\\frak m})$$\nis a polynomial for $n\\gg 0$, and the degree of this polynomial is equal to\n$$\\dim_{G(I)}(0:_{G_M(I)}{\\frak m}G(I))-1\\le \\dim_{G(I)}({G_M(I)}\/{\\frak m}G_M(I))-1=\\ell_M(I)-1.$$\n(ii) The second statement follows from the first one, the equality $\\ell_M(I)=\\dim_RM$ for an ${\\frak m}$-primary ideal $I$, and the fact that\n$$\n\\begin{aligned}\n\\lambda_R(I^nM\/I^{n+1}M) &=\\lambda_R(\\mathrm{Hom}_R(R\/I,I^nM\/I^{n+1}M))\\\\\n &\\le \\lambda_R(R\/I)\\lambda_R(\\mathrm{Hom}_R(R\/{\\frak m},I^nM\/I^{n+1}M)) \\le \\lambda_R(R\/I)\\mathrm{ir}_M(I^{n+1}M),\n\\end{aligned}\n$$\nwhere $\\lambda_R(I^nM\/I^{n+1}M)$ is a polynomial of degree $\\dim_RM-1$ for $n\\gg 0$.\n\\end{proof}\n\nWe are now able to prove Theorem \\ref{T4.1}.\n\n\\begin{proof}[Proof of Theorem \\ref{T4.1}] Let $\\text{A}_M(I)$ denote the stable set $\\mathrm{Ass}_R(M\/I^nM)$ for $n\\gg 0$. Then, by Lemma \\ref{L2.3} we get that\n$$\\mathrm{ir}_M(I^nM)=\\sum\\limits_{\\frak p\\in A_M(I)}\\dim_{k({\\frak p})}\\mathrm{Soc}(M\/I^nM)_{\\frak p}$$ for all $n\\gg 0$.\nBy Lemma \\ref{L4.2}(i), $\\dim_{k({\\frak p})}\\mathrm{Soc}(M\/I^nM)_{\\frak p}$ is a polynomial of degree $\\le \\ell_{M_{\\frak p}}(IR_{\\frak p})-1$ for $n\\gg0$. Therefore there exists a polynomial $\\mathrm{Ir}_{M,I}(n)$ such that $\\mathrm{Ir}_{M,I}(n)=\\mathrm{ir}_M(I^nM)$ for $n\\gg 0$ and \n$$\\mathrm{deg}(\\mathrm{Ir}_{M,I}(n))\\le \\max \\{\\ell_{M_{\\frak p}}(IR_{\\frak p})-1\\mid \\frak p \\in A_M(I)\\}\\le \\ell_M(I)-1.$$\nLet $\\mathrm{Min}(M\/IM)=\\{{\\frak p_1},\\ldots,{\\frak p_m}\\}$ be the set of all minimal associated prime ideals of $IM$. It is clear that each ${\\frak p_i}$ is also minimal in $A_M(I)$. Hence $\\bigwedge_{\\frak p_i}(I^nM)$ has only one element, say $Q_{in}$. It is easy to check that \n$$\\mathrm{ir}_M(Q_{in})={\\rm ir}_{M_{\\frak p_i}}((Q_{in})_{\\frak p_i})=\\mathrm{ir}_{M_{\\frak p_i}}(I^nM_{\\frak p_i})$$ for $i=1, \\ldots , m$. This implies by Theorem \\ref{T3.2} that\n$\\mathrm{ir}_M(I^nM)\\ge \\sum\\limits_{i=1}^m \\mathrm{ir}_{M_{\\frak p_i}}(I^nM_{\\frak p_i})$. It then follows from Lemma \\ref{L4.2}(ii) that, for $n\\gg 0$,\n$$\\mathrm{deg}(\\mathrm{Ir}_{M,I}(n))\\ge \\max\\{\\dim_{R_{\\frak p_i} }M_{\\frak p_i}-1\\mid i=1, \\ldots ,m\\}=\\mathrm{bight}_M(I)-1.$$\n\\end{proof}\n\n The following corollaries are immediate consequences of Theorem \\ref{T4.1}.\n An ideal $I$ of a local ring $R$ is called an equimultiple ideal if $\\ell(I)=\\mathrm{ht}(I)$; in this case $\\mathrm{bight}_R(I)=\\mathrm{ht}(I)$.\n\n\\begin{corollary} \\label{C4.3} Let $I$ be an ideal of $R$ satisfying $\\ell_M(I)= \\mathrm{bight}_M(I)$. Then $$\\deg(\\mathrm{Ir}_{M,I}(n))=\\ell_M(I)-1.$$\n\\end{corollary}\n\n\\begin{corollary} \\label{C4.4} Let $I$ be an equimultiple ideal of a local ring $R$ with the unique maximal ideal $\\frak m$. Then $$\\mathrm{deg}(\\mathrm{Ir}_{R,I}(n))=\\mathrm{ht}(I)-1.$$ \n\\end{corollary}\n\nApart from the corollaries above, the authors do not know how to compute the degree of the polynomial $\\mathrm{Ir}_{M,I}(n)$ exactly. It would therefore be interesting to find a formula for this degree in terms of known invariants associated to $I$ and $M$. 
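As a quick sanity check of Corollary \\ref{C4.4} in the simplest case, consider the following illustration (this example is ours and is only meant to make the statement concrete). Let $R=K[[X,Y]]$ and $I=(X)$. Then $\\mathrm{ht}(I)=\\ell(I)=1$, so $I$ is equimultiple, and $\\mathrm{Ass}(R\/I^n)=\\{(X)\\}$ for all $n\\ge 1$. Since $R_{(X)}$ is a discrete valuation ring, Lemma \\ref{L2.3} gives\n$$\\mathrm{ir}_R(I^n)=\\dim_{k((X))}\\mathrm{Soc}(R\/X^nR)_{(X)}=1$$\nfor all $n\\ge 1$, so $\\mathrm{Ir}_{R,I}(n)=1$ is indeed a polynomial of degree $0=\\mathrm{ht}(I)-1$. 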
Below we give examples showing that both bounds are sharp, but that in general neither $\\mathrm{bight}_M(I)-1$ nor $\\ell_M(I)-1$ equals $\\mathrm{deg}(\\mathrm{Ir}_{M,I}(n))$.\n\n\\begin{example} \\label{E4.5}{\\rm\n(1) Let $R=K[X,Y]$ be the polynomial ring in two variables $X$, $Y$ over a field $K$ and $I=(X^2,XY)=X(X,Y)$ an ideal of $R$. Then we have\n$$\\mathrm{bight}_R(I)=\\mathrm{ht}(I)=1, \\quad \\ell(I)=2,$$ and by Lemma \\ref{L2.3}\n$$\\mathrm{ir}_R(I^n)=\\mathrm{ir}_R(X^n(X,Y)^n)=\\mathrm{ir}_R((X,Y)^n) +1=n+1.$$\nTherefore \n$$\\mathrm{bight}_R(I)-1=0 <1= \\mathrm{deg}(\\mathrm{Ir}_{R,I}(n))=\\ell (I)-1.$$\n\n(2) Let $T=K[X_1,X_2,X_3,X_4,X_5,X_6]$ be the polynomial ring in six variables over a field $K$ and $R=T_{(X_1,\\ldots,X_6)}$ the localization of $T$ at the homogeneous maximal ideal $(X_1,\\ldots,X_6)$. Consider the monomial ideal\n$$\n\\begin{aligned}\nI&=(X_1X_2,X_2X_3,X_3X_4,X_4X_5,X_5X_6,X_6X_1)= (X_1,X_3,X_5)\\cap (X_2,X_4,X_6)\\cap \\\\\n&\\cap (X_1,X_2,X_4,X_5) \\cap (X_2,X_3,X_5,X_6)\\cap (X_3,X_4,X_6,X_1).\n\\end{aligned}\n$$\nSince the graph associated to this monomial ideal is bipartite, it follows from \\cite[Theorem 5.9]{SVV94} that\n$\\mathrm{Ass}(R\/I^n)=\\mathrm{Ass}(R\/I)=\\mathrm{Min}(R\/I)$ for all $n\\geq 1$. Therefore $\\mathrm{deg}(\\mathrm{Ir}_{R,I}(n))= \\mathrm{bight}(I)-1= 3$ by Theorem \\ref{T3.2} and Lemma \\ref{L4.2}(ii). On the other hand, by \\cite[Exercise 8.21]{HS06} we have $\\ell(I)=5$, so\n$$\\mathrm{deg}(\\mathrm{Ir}_{R,I}(n))=3< 4=\\ell(I)-1.$$\n}\n\\end{example}\n\nLet $I$ be an ideal of $R$ and $n$ a positive integer. The $n$th symbolic power $I^{(n)}$ of $I$ is defined by\n$$I^{(n)} = \\bigcap_{\\frak p \\in \\mathrm{Min}(I)}(I^nR_{\\frak p}\\cap R),$$ where ${\\rm Min} (I)$ is the set of all minimal associated prime ideals in $\\mathrm{Ass}(R\/I)$. In contrast to the function ${\\rm ir}_R(I^n)$, the function ${\\rm ir}_R(I^{(n)})$ behaves better.\n\\begin{proposition} \\label{C4.2}\nLet $I$ be an ideal of $R$. Then there exists a polynomial $p_I(n)$ with rational coefficients such that\n$p_I(n)={\\rm ir}_R (I^{(n)}) $ for sufficiently large $n$ and $${\\rm deg}(p_I(n))={\\rm bight}(I)-1.$$\n\\end{proposition}\n\\begin{proof} It should be mentioned that $\\mathrm{Ass}(R\/I^{(n)})={\\rm Min} (I)$ for every positive integer $n$. Thus, by virtue of Theorem \\ref{T3.2}, we can show as in the proof of Theorem \\ref{T4.1} that \n$${\\rm ir}_R(I^{(n)})=\\sum\\limits_{\\frak p\\in {\\rm Min}(I)}\\mathrm{ir}_{R_{\\frak p}}(I^nR_{\\frak p})$$ for all $n$. So the proposition follows from Lemma \\ref{L4.2}(ii).\n\\end{proof}\n\n\\section{Index of reducibility in Cohen-Macaulay modules}\nIn this section, we assume in addition that $R$ is a local ring with the unique maximal ideal $\\frak m$ and residue field $\\frak k = R\/\\frak m$. Let $\\frak q=(x_1,\\ldots, x_d)$ be a parameter ideal of $M$ ($d=\\dim M$). Let $H^i({\\frak q}, M)$ be the $i$-th Koszul cohomology module of $M$ with respect to $\\frak q$ and $H^i_{\\frak m}(M)$ the $i$-th local cohomology module of $M$ with respect to the maximal ideal $\\frak m$. 
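Let us also record a standard identification which will be used later without further comment (we state it only for the reader's convenience): for the parameter ideal $\\frak q=(x_1,\\ldots,x_d)$, the Koszul complex yields\n$$H^0({\\frak q}, M)\\cong 0:_M{\\frak q} \\ \\ \\mathrm{and} \\ \\ H^d({\\frak q}, M)\\cong M\/{\\frak q}M,$$\nso that in particular $\\mathrm{Soc}(H^d({\\frak q}, M))=\\mathrm{Soc}(M\/{\\frak q}M)$. 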
In order to state the next theorem, we need the following result of Goto and Sakurai \\cite[Lemma 3.12]{GS03}.\n\n\\begin{lemma}\\label{L4.6}\nThere exists a positive integer $l$ such that for all parameter ideals $\\frak q$ of $M$ contained in $\\frak m^l$, the canonical homomorphisms on socles\n$$\\mathrm{Soc}(H^i({\\frak q}, M))\\to \\mathrm{Soc}(H^i_{\\frak m}(M))$$\nare surjective for all $i$.\n \\end{lemma}\n\n\\begin{theorem} \\label{T4.7} Let $M$ be a finitely generated $R$-module of dimension $d$. Then the following conditions are equivalent:\n\\begin{enumerate}[{(i)}]\\rm\n \\item {\\it $M$ is a Cohen-Macaulay module.}\n \\item {\\it $\\mathrm{ir}_M({\\frak q}^{n+1}M)=\\dim_{\\frak k}\\mathrm{Soc}(H^d_{\\frak m}(M)) \\binom{n+d-1}{d-1}$ for all parameter ideals $\\frak q$ of $M$ and all $n \\geq 0$.}\n \\item {\\it $\\mathrm{ir}_M({\\frak q}M)=\\dim_{\\frak k} \\mathrm{Soc}(H^d_{\\frak m}(M))$ for all parameter ideals $\\frak q$ of $M$.}\n \\item {\\it There exists a parameter ideal $\\frak q$ of $M$ contained in $\\frak m^l$, where $l$ is a positive integer as in Lemma \\ref{L4.6}, such that $\\mathrm{ir}_M({\\frak q}M)=\\dim_{\\frak k}\\mathrm{Soc}(H^d_{\\frak m}(M))$.}\n \\end{enumerate}\n\\end{theorem}\n\\begin{proof}\n(i) $\\Rightarrow$ (ii) Let $\\frak q$ be a parameter ideal of $M$. Since $M$ is Cohen-Macaulay, we have a natural isomorphism of graded modules\n\n$$G_M(\\frak q)=\\bigoplus\\limits_{n\\ge 0}\\frak q^nM\/\\frak q^{n+1}M\\to (M\/\\frak q M)[T_1,\\ldots,T_d],$$ where $T_1,\\ldots,T_d$ are indeterminates.\nThis induces $R$-isomorphisms on graded parts\n$$\\frak q^nM\/\\frak q^{n+1}M\\to\\big( (M\/\\frak q M)[T_1,\\ldots,T_d]\\big)_n\\cong (M\/\\frak q M)^{\\binom{n+d-1}{d-1}}$$ for all $n\\geq 0$.\nOn the other hand, since $\\frak q$ is a parameter ideal of a Cohen-Macaulay module, $\\frak q^{n+1}M:\\frak m\\subseteq \\frak q^{n+1}M:\\frak q=\\frak q^{n}M$. It follows that\n$$\n\\begin{aligned}\n\\mathrm{ir}_M({\\frak q}^{n+1}M)&=\\lambda_R(\\frak q^{n+1}M:\\frak m\/\\frak q^{n+1}M)=\\lambda_R(0:_{\\frak q^{n}M\/\\frak q^{n+1}M}\\frak m)\\\\\n&=\\lambda_R(0:_{M\/\\frak q M}\\frak m) \\binom{n+d-1}{d-1}=\\dim_{\\frak k}(\\mathrm{Soc}(M\/\\frak q M))\\binom{n+d-1}{d-1}.\n\\end{aligned}\n$$\nSo the conclusion follows if we show that $\\dim_{\\frak k} \\mathrm{Soc}(M\/\\frak q M)=\\dim_{\\frak k} \\mathrm{Soc}(H^d_{\\frak m}(M))$. Indeed, let $\\frak q=(x_1,\\ldots, x_d)$ and $\\overline{M}= M\/x_1M$. Then, it is easy to show by induction on $d$ that\n$$\n\\begin{aligned}\n\\dim_{\\frak k} \\mathrm{Soc}(M\/\\frak q M)&=\\dim_{\\frak k} \\mathrm{Soc}(\\overline{M}\/\\frak q \\overline{M})\\\\\n&=\\dim_{\\frak k} \\mathrm{Soc}(H^{d-1}_{\\frak m}(\\overline{M})) = \\dim_{\\frak k} \\mathrm{Soc}(H^d_{\\frak m}(M)).\n\\end{aligned}\n$$\n\n(ii) $\\Rightarrow$ (iii) and (iii) $\\Rightarrow$ (iv) are trivial.\n\n(iv) $\\Rightarrow$ (i) Let $\\frak q =(x_1, \\ldots , x_d)$ be a parameter ideal of $M$ such that $\\frak q\\subseteq \\frak m^l$, where $l$ is a positive integer as in Lemma \\ref{L4.6} such that the canonical homomorphism on socles\n$$\\mathrm{Soc}(M\/\\frak q M)=\\mathrm{Soc}(H^d({\\frak q}, M))\\to \\mathrm{Soc}(H^d_{\\frak m}(M))$$\nis surjective. Consider the submodule $(\\underline{x})_M^{\\mathrm{lim}}=\\bigcup\\limits_{t\\ge 0}(x_1^{t+1},\\ldots,x_d^{t+1})M:_M(x_1\\ldots x_d)^{t}$ of $M$. This submodule is called the limit closure of the sequence $x_1,\\ldots,x_d$. 
Then \n$(\\underline{x})_M^{\\mathrm{lim}}\/\\frak q M$ is just the kernel of the canonical homomorphism $M\/\\frak q M\\to H^d_{\\frak m}(M)$ (see \\cite{CHL99}, \\cite{CQ10}). Moreover, it was proved in \\cite[Corollary 2.4]{CHL99} that the module $M$ is Cohen-Macaulay if and only if $(\\underline{x})_M^{\\mathrm{lim}}=\\frak q M$. Now assume that $\\mathrm{ir}_M({\\frak q}M)=\\dim_{\\frak k} \\mathrm{Soc}(H^d_{\\frak m}(M))$; since $\\mathrm{ir}_M({\\frak q}M)=\\dim_{\\frak k} \\mathrm{Soc}(M\/\\frak q M)$, this means that $\\dim_{\\frak k} \\mathrm{Soc}(H^d_{\\frak m}(M)) =\\dim_{\\frak k} \\mathrm{Soc}(M\/\\frak q M)$. Then it follows from the exact sequence\n$$0\\to (\\underline{x})_M^{\\mathrm{lim}}\/\\frak q M \\to M\/\\frak q M\\to H^d_{\\frak m}(M) $$\n and the choice of $l$ that the sequence\n$$0\\to \\mathrm{Soc}((\\underline{x})_M^{\\mathrm{lim}}\/\\frak q M) \\to \\mathrm{Soc}( M\/\\frak q M)\\to\\mathrm{Soc}( H^d_{\\frak m}(M))\\to 0 $$\n is a short exact sequence. Hence $\\dim_{\\frak k} \\mathrm{Soc}((\\underline{x})_M^{\\mathrm{lim}}\/\\frak q M)=0 $, and since $(\\underline{x})_M^{\\mathrm{lim}}\/\\frak q M$ is of finite length, $(\\underline{x})_M^{\\mathrm{lim}}=\\frak q M$. Therefore $M$ is a Cohen-Macaulay module.\n \\end{proof}\n It should be mentioned here that the proof of the implication (iv) $\\Rightarrow$ (i) of Theorem \\ref{T4.7} essentially follows the proof of \\cite[Theorem 2.7]{MRS08}. It is well known that a Noetherian local ring $R$ with $\\dim R =d$ is Gorenstein if and only if $R$ is Cohen-Macaulay with Cohen-Macaulay type ${\\rm r}(R)=\\dim_{\\frak k}\\mathrm{Ext}^d_R(\\frak k,R)= 1$. Therefore the following result, which is the main result of \\cite{MRS08}, is an immediate consequence of Theorem \\ref{T4.7}.\n \n \\begin{corollary}\\label{5.3}\n Let $(R, \\frak m)$ be a Noetherian local ring of dimension $d$. Then $R$ is Gorenstein if and only if there exists an irreducible parameter ideal $\\frak q$ contained in $\\frak m^l$, where $l$ is a positive integer as in Lemma \\ref{L4.6}. Moreover, if $R$ is Gorenstein, then ${\\rm ir}_R(\\frak q^{n+1}) = \\binom{n+d-1}{d-1}$ for every parameter ideal $\\frak q$ and all $n\\geq 0$. \n \\end{corollary}\n \\begin{proof}\nLet $\\frak q=(x_1,\\ldots , x_d)$ be an irreducible parameter ideal contained in $\\frak m^l$ such that the map\n $$\\mathrm{Soc}( R\/\\frak q)\\to\\mathrm{Soc}( H^d_{\\frak m}(R))$$ is surjective.\n Since $H^d_{\\frak m}(R)\\neq 0$, and hence $\\dim_{\\frak k}\\mathrm{Soc}( H^d_{\\frak m}(R))\\not= 0$, while $\\dim_{\\frak k} \\mathrm{Soc}(R\/\\frak q)=1$ by the hypothesis, we get \n $\\dim_{\\frak k} \\mathrm{Soc}( H^d_{\\frak m}(R))=1. $ This implies by Theorem \\ref{T4.7} that $R$ is a Cohen-Macaulay ring with \n $${\\rm r}(R)=\\dim_{\\frak k}\\mathrm{Ext}^d_R(\\frak k,R)=\\dim_{\\frak k} \\mathrm{Soc}(R\/\\frak q)= 1,$$ and so $R$ is Gorenstein. The last conclusion follows from Theorem \\ref{T4.7}.\n \\end{proof}\n \\begin{remark} \\rm Recently, many works have shown that the index of reducibility of parameter ideals can be used to deduce a great deal of information on the structure of some classes of modules such as Buchsbaum modules \\cite{GS03}, generalized Cohen-Macaulay modules \\cite{CT08}, \n\\cite{Q12} and sequentially Cohen-Macaulay modules \\cite{T13}.\n It follows from Theorem \\ref{T4.7} that $M$ is a Cohen-Macaulay module if and only if there exists a positive integer $l$ such that\n $\\mathrm{ir}_M({\\frak q}M)=\\dim_{\\frak k}\\mathrm{Soc}(H^d_{\\frak m}(M))$ for all parameter ideals $\\frak q$ of $M$ contained in $\\frak m^l$. 
The necessity part of this result can be extended to a large class of modules, the so-called generalized Cohen-Macaulay modules. An $R$-module $M$ of dimension $d$ is said to be a generalized Cohen-Macaulay module (see \\cite{CST78}) if $H^i_{\\frak m}(M)$ is of finite length for all $i=0,\\ldots , d-1$. We proved in \\cite[Theorem 1.1]{CT08} (see also \\cite[Corollary 4.4]{CQ11}) that if $M$ is a generalized Cohen-Macaulay module, then there exists an integer $l$ such that \n $${\\rm ir}_M(\\frak qM) = \\sum_{i=0}^d \\binom{d}{i}\\dim_{\\frak k}\\mathrm{Soc}(H^i_{\\frak m}(M))$$\n for all parameter ideals $\\frak q \\subseteq \\frak m^l$. We therefore close this paper with the following two open questions, suggested by the work above, on characterizations of the Cohen-Macaulayness and the generalized Cohen-Macaulayness in terms of the index of reducibility of parameter ideals.\n \\vskip0.1cm\n \\noindent\n {\\bf Open questions 5.5.} Let $M$ be a finitely generated module of dimension $d$ over a local ring $R$. Our questions are the following.\\\\\n 1. Is $M$ a Cohen-Macaulay module if and only if there exists a parameter ideal $\\frak q$ of $M$ such that $$\\mathrm{ir}_M({\\frak q}^{n+1}M)=\\dim_{\\frak k}\\mathrm{Soc}(H^d_{\\frak m}(M)) \\binom{n+d-1}{d-1}$$ for all $n\\geq 0$?\n \n \\noindent\n 2. Is $M$ a generalized Cohen-Macaulay module if and only if there exists a positive integer $l$ such that $${\\rm ir}_M(\\frak qM)= \\sum_{i=0}^d \\binom{d}{i}\\dim_{\\frak k}\\mathrm{Soc}(H^i_{\\frak m}(M))$$ for all parameter ideals $\\frak q \\subseteq \\frak m^l$?\n \\end{remark}\n \n\\bigskip\n\\noindent{\\bf Acknowledgments.} The authors would like to thank the anonymous referee for helpful comments on an earlier version. This paper was finished during the authors' visit to the Vietnam\nInstitute for Advanced Study in Mathematics (VIASM).\nThey would like to thank VIASM for its support and hospitality.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
|
{"text":"\\section{Introduction and Motivation}\n\nIn 1998, Ellis \\cite{el98} extended the theory of the Schur multiplier to pairs of groups. By a pair of groups $(G,N)$ we mean a group $G$ with a normal subgroup $N$ of $G$. The Schur multiplier of a pair $(G,N)$ of groups is a\nfunctorial abelian group $M(G,N)$ whose principal feature is a\nnatural exact sequence\n\\begin{eqnarray*}\n H_3(G) &\\rightarrow& H_3(G\/N) \\rightarrow M(G,N) \\rightarrow M(G)\n\\rightarrow M(G\/N)\\\\\n&\\rightarrow& N\/[N,G]\\rightarrow G^{ab} \\rightarrow (G\/N)^{ab} \\rightarrow 0\n\\end{eqnarray*}\nin which $H_3(G)$ is the third homology group of $G$ with integer coefficients.\nIn particular, if $N=G$, then $M(G,G)$ is the usual Schur multiplier $M(G)$.\n\nIt is a long-standing question whether $\\exp(M(G))$ divides $\\exp(G)$, where $\\exp(G)$ denotes the exponent of $G$.\nMacdonald and Wamsley (see \\cite{bay})\nconstructed an example of a group of exponent 4 whose Schur\nmultiplier has exponent 8, hence the conjecture is not true in\ngeneral. In 1973, Jones \\cite{jons} proved\nthat the exponent of the Schur multiplier of a finite $p$-group of\nclass $c \\geq 2$ and exponent $p^{e}$ is at most $p^{e(c-1)}$, and hence $\\exp(M(G))$ divides $\\exp(G)$ when $G$ is a $p$-group of class 2. In 1987, Lubotzky and Mann \\cite{lub} proved that $\\exp(M(G))$ divides $\\exp(G)$ when $G$ is a powerful $p$-group. A\nresult of Ellis \\cite{el2001} shows that if $G$ is a $p$-group of class $k\n\\geq 2$ and exponent $p^{e}$, then $\\exp(M(G))\\leq p^{e\\lceil k\/2\\rceil}$, where $\\lceil k\/2\\rceil$ denotes the smallest integer\n$n$ such that $n \\geq k\/2$. Moravec \\cite{mor} showed that\n$\\lceil k\/2\\rceil$ can be replaced by $2\\lfloor \\log_2{k}\\rfloor$,\nwhich is an improvement if $k\\geq 11$. He \\cite{mor} also proved that if $G$ is\na metabelian group of exponent $p$, then $\\exp(M(G))$ divides $p$.\nKayvanfar and Sanati \\cite{kay} proved that $\\exp(M(G))$ divides\n$\\exp(G)$ when $G$ is a finite $p$-group of class 3, 4 or 5 satisfying certain additional conditions. The authors \\cite{mhm} extended this result and proved that $\\exp(M(G))$ divides $\\exp(G)$ when $G$ is a finite $p$-group of class at most $p-1$.\n\nOn the other hand, Ellis \\cite{el98} proved that $\\exp(M(G,N))$ divides $|N|$ for any pair $(G,N)$ of finite groups, where $|N|$ denotes the order of $N$. A question that naturally arises is whether $\\exp(M(G,N))$ divides $\\exp(N)$ when $N$ is a proper normal subgroup of $G$. In this paper, we first present an example giving\n a negative answer to this question. We then give some conditions\n under which the exponent of $M(G,N)$ divides the exponent of $N$.\n\n In Section 2, we give an upper bound for $\\exp(M(G,N))$ in terms of $\\exp(N)$, when $(G,N)$ is a pair of finite $p$-groups such that $N$ admits a complement in $G$, and apply it to prove that if $(G,N)$ is a pair of finite $p$-groups of class at most $p-1$ (i.e., $[N, \\ _{p-1}G]=1$), then $\\exp(M(G,N))$ divides $\\exp(N)$.\nFinally, in Section 3, we show that if $(G,N)$ is a pair of finite $p$-groups and $N$ is powerfully embedded in $G$, then $\\exp(M(G,N))$ divides $\\exp(N)$.\n\n\\section{Nilpotent pairs of $p$-groups}\n\nI. D. Macdonald and J.W. Wamsley \\cite{bay} gave an example which shows that $\\exp(M(G,G))$ does not divide $\\exp(G)$ in general. The following example shows that $\\exp(M(G,N))$ does not divide $\\exp(N)$ when $N$ is a proper normal subgroup of $G$. 
\n\n\\begin{ex}\nLet $D=A >\\!\\!\\!\\! \\lhd <x_1>$, where $A=<x_2>\\times <x_3>\\times<x_4>\\times<x_5>\\cong {\\mathbf Z}_{{4}}\\times{\\mathbf Z}_{{4}}\\times{\\mathbf Z}_{{4}}\\times{\\mathbf Z}_{{2}}$ and $x_1$ is an automorphism of order 2 of $A$ acting in the following way:\n$$[x_2,x_1]=x_2^2,\\ \\ [x_3,x_1]=x_3^2,\\ \\ [x_4,x_1]=x_4^2,\\ \\ [x_5,x_1]=1.$$\nThere exists an automorphism $a$ of $D$ of order 4 acting on $D$ as follows:\n$$[x_1,a]=x_3,\\ \\ [x_2,a]=x_2^2 x_3^2 x_4^3,\\ \\ [x_3,a]=x_5,\\ \\ [x_4,a]=x_2^2,\\ \\ [x_5,a]=x_3^2.$$\nForm $N=D >\\!\\!\\! \\lhd <a>$ and put $G=N >\\!\\!\\! \\lhd <b>$, where $b^2=1$ and\n$[x_1,b]=x_2,\\ [x_2,b]=x_2^2x_4^3x_5,\\ [x_3,b]=x_4,\\ [x_4,b]=x_3^2x_4^2,\\ [x_5,b]=x_2^2x_3^2x_4^2,\\ [a,b]=x_1.$\n Moravec \\cite{mor} showed that the group $G$ is a nilpotent group of class 6 and exponent 4 with $M(G)\\cong{\\mathbf Z}_{{2}}\\times{\\mathbf Z}_{{4}}\\times{\\mathbf Z}_{{8}}$.\n Ellis \\cite{el98} proved that if $G=K >\\!\\!\\!\\! \\lhd Q$, then $M(G)\\cong M(G,K)\\oplus M(Q)$. Since $M(<b>)\\cong M({\\mathbf Z}_{{2}})$ is trivial, this implies that $M(G,N)\\cong M(G)$. Therefore $\\exp(M(G,N))=8$ does not divide $\\exp(N)=4$.\n \\end{ex}\n\nHere we first give an upper bound for the exponent of $M(G,N)$ in terms of the exponent of $N$, when $(G,N)$ is a pair of finite $p$-groups such that $N$ admits a complement in $G$. Since our proof relies on commutator calculations, we need to state the following lemmas.\n\n\\begin{lem} (\\cite{st}). Let $x_1, x_2,\n\\ldots , \\ x_r$ be any elements of a group and $\\alpha$ be a nonnegative integer. Then\n$$(x_1 x_2 ... x_r)^{\\alpha}=x_{i_1}^{\\alpha}x_{i_2}^{\\alpha} ...x_{i_r}^{\\alpha} \\upsilon_1^\n{f_1(\\alpha)}\\upsilon_2^{f_2(\\alpha)}\\cdots \\ ,$$\nwhere $\\{i_1, \\ i_2, \\ldots,\\ i_r \\} = \\{1, 2, \\ldots, r \\}$ and $\\upsilon_1,\\upsilon_2, \\cdots$ are commutators of weight at least two in the letters $x_i$, listed in ascending order, and\n\\begin{equation}\nf_i(\\alpha)=a_1{\\alpha \\choose 1}+ a_2{\\alpha \\choose 2}+\\cdots+a_{w_i}{\\alpha \\choose w_i},\n\\end{equation}\nwith $a_1, \\ldots, a_{w_i} \\in \\mathbf{Z} $, where $w_i$ is the\nweight of $\\upsilon_i$ in the elements $x_1, \\ldots, x_r$.\n\\end{lem}\n\n\\begin{lem} (\\cite{st}). Let $\\alpha$\nbe a fixed integer and $G$ be a nilpotent group of class at most\n$k$. If $b_1, \\ldots, b_r \\in G$ and $r<k$, then\n$$[b_1,\\ldots,b_{i-1},b_i^{\\alpha},b_{i+1},\\ldots,b_r]=[b_1,...,b_r]^{\\alpha}\n\\upsilon_1^{f_1(\\alpha)}\\upsilon_2^{f_2(\\alpha)}\\cdots,$$ where\n$\\upsilon_1, \\upsilon_2, \\ldots$ are commutators in $b_1, \\ldots,b_r$ of weight strictly\ngreater than $r$, and every $b_j$, $1\\leq j \\leq r$, appears in\neach commutator $\\upsilon_i$. 
The $f_i(\\alpha)$ are of the form (2.1), with $a_1, \\ldots, a_{w_i} \\in \\mathbf{Z}$, where $w_i$ is the weight of $\\upsilon_i$ (in $b_1, \\ldots, b_r$)\nminus $(r-1)$.\n\\end{lem}\n\nIt is noted by Struik \\cite{st} that the above lemma can be proved similarly if $[b_1, \\dots ,b_{i-1},b_i^{\\alpha},b_{i+1}, \\dots ,b_r]$ and $[b_1, \\dots ,b_r]$ are replaced by arbitrary commutators (that is, monomial commutators with parentheses arranged arbitrarily).\n\nTo prove the main results we require the following notions.\n\\begin{defn}\n A relative central\nextension of a pair $(G,N)$ of groups consists of a group homomorphism\n$\\sigma: M \\rightarrow G$ together with an action of $G$ on $M$\nsuch that \\\\\ni) $\\sigma (M)=N$;\\\\\nii) $\\sigma (m^g)=g^{-1} \\sigma (m)g$, for all $m \\in M, g \\in G$;\\\\\niii) $m^{\\sigma(m_1)}=m_1^{-1} m m_1$, for all $m, m_1 \\in M$;\\\\\niv) $G$ acts trivially on $\\ker \\sigma$.\n\\end{defn}\n\nLet $(G,N)$ be a pair of groups and $\\sigma: M \\rightarrow G$ be a relative central\nextension of $(G,N)$. The $G$-commutator subgroup of $M$ is defined to be\nthe subgroup $[M,G]$ generated by all the $G$-commutators $[m,g]=m^{-1}m^g ,$\nwhere $m^{g}$ denotes the action of $g$ on $m$, for all $g \\in G, m \\in M$. Also, for every positive integer $n$, we define\n$$Z_n(M,G)= \\{ m \\in M | \\ [m,g_1,g_2,\\ldots,g_n]=1 , \\ for \\ all\\ g_1, g_2, \\ldots , g_n \\in G \\},$$ in which $ [m,g_1,g_2, \\ldots,g_n]$ denotes $[\\cdots[[m,g_1],g_2], \\ldots,g_n]$. It is easy to see that $Z_n(M,G)\\leq Z_n(M)$. The subgroup $Z_1(M,G)$ is well known as the $G$-center of $M$ and is denoted by $Z(M,G)$.\n\nLet $(G,N)$ be a pair of groups and $k$ be a positive integer. We define\n$\\gamma_{k+1}(N,G)= [N, \\ _kG]$ in which $[N, \\ _kG]=[ \\cdots [[N,\\underbrace{G],G], \\ldots ,G]}_{k-times}$.\nA pair $(G,N)$ of groups is called nilpotent of class $k$ if $\\gamma_{k+1}(N,G)=1$ and $\\gamma_{k}(N,G)\\neq 1$. It is clear that any pair of finite $p$-groups is nilpotent.\n\n\\begin{defn}\n A relative central\nextension $\\sigma: N^*\\rightarrow G$ of a pair $(G,N)$ is\ncalled a covering pair if there exists a subgroup $A$ of $N^*$\nsuch that \\\\\ni) $A \\leq Z(N^*,G)\\cap [N^*,G]$;\\\\\nii) $A\\cong M(G,N)$;\\\\\niii) $N\\cong N^*\/A$.\\\n\nEllis \\cite[Theorem 5.4]{el98} proved that any pair of finite groups has at least one covering pair.\n\n\\end{defn}\n\nHereafter in this section, we suppose that $(G,N)$ is a pair of finite groups and $K$ is a complement of $N$ in $G$. Also, suppose that\n$\\sigma: N^*\\rightarrow G$ is a covering pair of $(G,N)$ with a subgroup $A$ of $N^*$ such that $A \\leq Z(N^*,G)\\cap [N^*,G]$, $A\\cong M(G,N)$ and $N\\cong N^*\/A$. Then for all $k \\in K$, the homomorphism $\\psi_k:N^* \\rightarrow N^*$ defined by $ n^* \\mapsto {n^*}^k$ is an automorphism of $N^*$, where ${n^*}^k$ is induced by the\naction of $G$ on $N^*$. Considering the homomorphism $\\psi:K \\rightarrow Aut(N^*)$ given by $\\psi(k)=\\psi_k$ for all $k \\in K$, we form the semidirect product of $N^*$\nby $K$ and denote it by $G^*=N^*K$. Then it is easy to check (see the computation below) that the subgroups $[N^*,G]$ and $Z(N^*,G)$ are contained in\n$[N^*,G^*]$ and $Z(N^*, G^*)$, respectively. 
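To verify the first of these inclusions explicitly (a short computation in the notation just introduced, recorded only for the reader's convenience), write an element $g\\in G=NK$ as $g=\\sigma(m^*)k$ with $m^*\\in N^*$ and $k\\in K$. The defining properties of a relative central extension, together with the identification $k^{-1}xk=x^{\\psi_k}$ in $G^*$, give, for every $n^*\\in N^*$,\n$$(n^*)^g=\\big((n^*)^{\\sigma(m^*)}\\big)^{k}=\\big((m^*)^{-1}n^*m^*\\big)^{\\psi_k}=(m^*k)^{-1}n^*(m^*k),$$\nthe last expression being computed in $G^*$. Hence $[n^*,g]=[n^*,m^*k]\\in [N^*,G^*]$, and the same computation shows that any element of $N^*$ fixed by the action of $G$ is centralized by every $m^*k\\in G^*$, so that $Z(N^*,G)\\leq Z(N^*, G^*)$. 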
If $\\delta : G^*\n\\rightarrow G$ is the map given by\n$\\delta(n^*k)=\\sigma(n^*)k,$ for all $n^* \\in N^*$ and $k\\in K$,\nthen $\\delta$ is an epimorphism with $\\ker \\delta=\\ker \\sigma$.\n\n\\begin{lem}\n With the above notation, let $(G,N)$ be\na nilpotent pair of finite groups of class $k$ and $\\exp(N)=p^e$.\n Then every commutator of weight $w$ ($w \\geq 2$) in $[N^*,\\ _{w-1}G^*]$ has order dividing $p^{e+m(k+1-w)}$, where $m= \\lfloor \\log_p k \\rfloor $.\n \\end{lem}\n\n\\begin{proof}\nWe use reverse induction on $w$ to prove the lemma.\nSince $(G,N)$ is nilpotent of class $k$ and $N\\cong N^*\/A$, $G\\cong G^*\/A$ and $A\\leq Z(N^*,G^*)$, we have $[N^*,\\ _{k+1}G^*]=1$. On the other hand, $\\exp(N)=p^e$ implies that $[{N^*}^{p^e},G^*]=1$. Hence the result follows for $w \\geq k+1$ by Lemma 2.2.\n Now assume that $l<k+1$ and the result is true for all $w>l$. We will prove the result for $l$. Put\n$\\alpha=p^{e+m(k+1-l)}$ with $m= \\lfloor \\log_p k \\rfloor $ and let $u=[n, x_2, \\ldots , x_l]$ be a commutator of weight $l$, where $n\\in N^*$ and $x_2, \\ldots , x_l \\in G^*$. Then by Lemma 2.2, we have\n$$ [n^{\\alpha}, x_2, \\ldots , x_l]=[n, x_2, \\ldots , x_l]^{\\alpha}\n\\upsilon_1^{f_1(\\alpha)} \\upsilon_2^{f_2(\\alpha)} \\cdots,$$\nwhere $\\upsilon_i$ is a commutator in $n, x_2, \\ldots, x_l$ of weight $w_i$ such that $l < w_i \\leq k+1 $, and $f_i(\\alpha)=a_1{\\alpha \\choose 1}+\na_2{\\alpha \\choose 2}+ \\dots +a_{k_i}{\\alpha \\choose k_i}$, where\n$k_i=w_i - l + 1 \\leq k$, for all $ i \\geq 1$.\nOne can easily check that $p^t$ divides ${p^{t+m} \\choose s}$ with $m=\\lfloor \\log_p k \\rfloor$, for any prime $p$ and any positive integers $t,s$ with $s \\leq k$; indeed, the $p$-adic valuation of ${p^{t+m} \\choose s}$ equals $t+m-\\nu_p(s)$ and $\\nu_p(s)\\leq \\lfloor \\log_p s \\rfloor\\leq m$.\nThis implies that $p^{e+m(k-l)}$ divides the $f_i(\\alpha)$'s, and so by the induction hypothesis\n$\\upsilon_i^{f_i(\\alpha)} =1$, for all $ i \\geq 1$.\nOn the other hand, it is clear that $[n^{\\alpha},x_2, \\ldots, x_l] =1$. Therefore $u^{\\alpha}=1$ and this\ncompletes the proof.\n\\end{proof}\n\n\\begin{thm}\nIf $(G,N)$ is\na nilpotent pair of finite groups of class $k$ and $N$ is a $p$-group of exponent\n$p^e$, then $\\exp([N^*,G^*])$ divides $p^{e+m(k-1)}$,\nwhere $m= \\lfloor \\log_p k \\rfloor $.\n\\end{thm}\n\n\\begin{proof}\nEvery element $g \\in [N^*,G^*]$ can be expressed as $g=y_1 y_2 \\cdots\ny_n$, where $y_i=[n_i,g_i]$ for $ n_i \\in N^*, g_i \\in G^*$. Put $\\alpha = p^{e+m(k-1)} $.\nBy Lemma 2.1, we have\n$$g^{\\alpha} = y_{i_1}^{\\alpha} y_{i_2}^{\\alpha} \\cdots y_{i_n}^{\\alpha}\n\\upsilon_1^{f_1(\\alpha)} \\upsilon_2^{f_2(\\alpha)} \\cdots,$$\nwhere $\\{i_1, i_2, \\ldots,i_n \\}=\\{1, 2, \\ldots, n \\}$ and $\\upsilon_i$ is a basic commutator of weight $w_i$ in $y_1, y_2, \\ldots, y_n$, with $2\\leq w_i \\leq k$, for all $i \\geq 1$, and $f_i(\\alpha)$ is of the form (2.1). Hence, by an argument similar to that in the proof of Lemma 2.5, $p^{e+m(k-2)}$\ndivides $f_i(\\alpha)$. Then applying Lemma 2.5, we have $\\upsilon_i^{f_i(\\alpha)} =1$,\nfor all $i \\geq 1$, and $y_j^{\\alpha} =1$, for all $j$, $1\\leq j \\leq n$.\nWe therefore have $g^{\\alpha}=1$ and the desired result follows.\n\\end{proof}\n\nAn upper bound for the exponent of the Schur multiplier of some pairs of finite groups is given in the following theorem.\n\\begin{thm}\nLet $(G,N)$ be a nilpotent pair of finite groups of class $k$ such that $\\exp(N)=p^e$. 
Then $\\exp(M(G,N))$ is a divisor of $p^{e+m(k-1)}$, where $m= \\lfloor \\log_p k \\rfloor $.\n\\end{thm}\n\n\\begin{proof}\n The result follows by Theorem 2.6 and the fact that $M(G,N)\\cong A \\leq [N^*,G]\\leq [N^*,G^*]$.\n\\end{proof}\n\nThe following corollary gives a condition under which the exponent of the Schur multiplier of a pair $(G,N)$ divides the exponent of $N$.\n\\begin{cor}\nLet $(G,N)$ be a pair of finite $p$-groups of class at most $p-1$. Then $\\exp(M(G,N))$ divides $\\exp(N)$.\n\\end{cor}\n\n\\begin{rem}\nLet $G$ be a finite $p$-group of class $k$ with $\\exp(G)=p^e$. Since $M(G,G)=M(G)$, Theorem 2.7 implies that $\\exp(M(G))$ divides $p^{e+\\lfloor\\log_pk\\rfloor(k-1)}$. It is easy to see that this bound improves the bound $p^{2e\\lfloor \\log_2{k}\\rfloor}$ given by Moravec \\cite{mor}. For example, for any $p$-group $G$ of class $k$, $2\\leq k \\leq p-1$, with $\\exp(G)=p^e$, we have $p^{e+\\lfloor\\log_pk\\rfloor(k-1)} \\leq p^{2e\\lfloor \\log_2{k}\\rfloor}$.\n\\end{rem}\n\n\\begin{rem}\nLet $(G,N)$ be a pair of finite nilpotent groups of class at most $k$. Let $S_1,S_2, \\ldots, S_n$ be all the Sylow subgroups of $G$.\nBy \\cite[Corollary 1.2]{el98}, we have $$ M(G,N)=M(S_{1} , S_{1} \\cap N) \\times \\dots\n\\times M(S_{n} , S_{n} \\cap N).$$ Put $m_i= \\lfloor \\log_{p_i} k \\rfloor $, for all $i$,\n $1\\leq i \\leq n$. Then by Theorem 2.7, we have\n$$\\exp( M(G,N)) \\mid \\prod^n_{i=1}p_i^{e_i+m_i(k-1)},$$ where\n$p_i^{e_i} = \\exp(S_{i})$.\n\\end{rem}\n\n\\section{Pairs of powerful $p$-groups}\n\nIn 1987, A. Lubotzky and A. Mann \\cite{lub} introduced powerful $p$-groups, which are a useful tool in the study of $p$-groups.\nThey gave some bounds for the order, the exponent and the number of\ngenerators of the Schur multiplier of a powerful $p$-group. Also, they showed that $\\exp(M(G))$ divides $\\exp(G)$ when $G$ is a powerful $p$-group. The purpose of this section is to show that\nif $(G,N)$ is a pair of finite $p$-groups and $N$ is powerfully embedded in $G$, then the exponent of $M(G,N)$ divides the exponent of $N$.\n Throughout this section, $\\mho_i(G)$ denotes the subgroup of\n$G$ generated by all $p^{i}$th powers of elements of $G$. It is easy to see that $\\mho_{i+j}(G) \\subseteq \\mho_i(\\mho_j(G))$, for all positive integers $i,j$.\n\\begin{defn}\n$(i)$ A $p$-group $G$ is\ncalled powerful if $p$ is odd and $G' \\leq \\mho_1(G)$, or\n$p=2$ and $G' \\leq \\mho_2(G)$.\\\\\n$(ii)$ Let $G$ be a $p$-group and $N\\leq G$. Then $N$ is powerfully\nembedded in $G$ if $p$ is odd and $[N,G]\\leq \\mho_1(N)$, or $p=2$ and $[N,G]\\leq \\mho_2(N)$.\n\\end{defn}\n\nAny powerfully embedded subgroup is itself a powerful\n$p$-group and must be normal in the whole group. Also, a $p$-group is\npowerful exactly when it is powerfully embedded in itself. While it\nis obvious that factor groups and direct products of powerful\n$p$-groups are powerful, this property is not subgroup-inherited \\cite{lub}.\nThe following lemma gives some properties of powerful $p$-groups.\n\n\\begin{lem}\n (\\cite{lub}). The following statements\nhold for a powerful $p$-group\n$G$.\\\\\n$(i)$ $\\gamma_i(G), G^{i}, \\mho_i(G), \\Phi(G)$ are powerfully\nembedded in $G$.\\\\\n$(ii)$ $\\mho_i(\\mho_j(G))=\\mho_{i+j}(G)$.\\\\\n$(iii)$ Each element of $\\mho_i(G)$ can be written as $a^{p^{i}}$\nfor\nsome $a\\in G$, and hence $\\mho_i(G)=\\{g^{p^{i}}: g\\in G\\}$.\\\\\n$(iv)$ If $G=\\langle a_1,a_2,...,a_d\\rangle$, then $\\mho_i(G)=\\langle\na_1^{p^{i}},a_2^{p^{i}},...,a_d^{p^{i}}\\rangle$.\n\\end{lem}\n\n\\begin{lem}\n (\\cite{lub}). 
Let $N$ be powerfully embedded in $G$. Then $\\mho_i(N)$ is powerfully embedded in $G$.\n\\end{lem}\n\nThe proof of the following lemma is straightforward.\n\\begin{lem}\nLet $M$ and $G$ be two groups with an action of $G$ on $M$. Then for all $m,n \\in M$, $g,h \\in G$, and any integer $k$ we have the following equalities. \\\\\n$(i)$ $[mn,g]=[m,g]^n[n,g]$;\\\\\n$(ii)$ $[m,gh]=[m,h][m,g]^h$;\\\\\n$(iii)$ $[m^{-1},g]^{-1}=[m,g]^{m^{-1}}$;\\\\\n$(iv)$ $[m,g^{-1}]^{-1}=[m,g]^{g^{-1}}$;\\\\\n$(v)$ $[m,g^{-1},h]^g[m,[g,h^{-1}]]^h[[m^{-1},h]^{-1},g]^m=1$;\\\\\n$(vi)$ $[m^k,g]\\equiv[m,g]^k [m,g,m]^{k(k-1)\/2} \\pmod{[M, \\ _3G]}$.\n\\end{lem}\n\n\n\\begin{lem}\nLet $(G,N)$ be a pair of finite $p$-groups and $\\sigma : N^* \\rightarrow G$ be a relative central extension of $(G,N)$. Suppose that $M$ and $K$ are two normal subgroups of $N^*$. Then $M \\leq K$ if $M \\leq K[M,G]$.\n \\end{lem}\n\n\\begin{proof}\n Applying Lemma 3.4 we have\n$$ M \\leq K[M,G]\\leq K[K[M,G],G]\\leq K[K,G][M,G,G]\\leq \\dots \\leq K[M, \\ _iG], $$\nfor all $i\\geq 1$. On the other hand, since $G$ is a finite $p$-group, there exists an integer $l$ such that $[N, \\ _lG]=1 $. Hence $[N^*, \\ _{l+1}G]=1$ and the result follows.\n\\end{proof}\n\n\\begin{lem}\n Let $(G,N)$ be a pair of finite $p$-groups and $\\sigma : N^* \\rightarrow G$ be a relative central extension of $(G,N)$. Let $M$ be a normal\nsubgroup of $N^*$. Then the following statements hold.\\\\\n$(i)$ If $p>2$, then $[\\mho_1(M),G]\\subseteq \\mho_1([M,G] )[M, \\ _3G]$.\\\\\n$(ii)$ If $p=2$, then $[\\mho_2(M),G]\\subseteq \\mho_2([M,G] ) \\mho_1([M, \\ _2G])[M, \\ _3G]$.\n\\end{lem}\n\\begin{proof} $(i)$ It is enough to show that $[m^p,g]\\in \\mho_1([M,G] )[M, \\ _3G]$, for all $m \\in M, g \\in G$.\n By Lemma 3.4, $[m^p,g]\\equiv{[m,g]}^p {[m,g,m]}^{p(p-1)\/2}\\ \\ \\ \\pmod {[M, \\ _3G]}.$ Since $p$ is odd, $p \\mid \\frac{p(p-1)}{2}$, so ${[m,g]}^p {[m,g,m]}^{p(p-1)\/2} \\in \\mho_1([M,G] )$ and the result holds.\\\\\n $(ii)$ The proof is similar to $(i)$.\n\\end{proof}\n\n\\begin{lem}\nLet $(G,N)$ be a pair of finite $p$-groups and $\\sigma : N^* \\rightarrow G$ be a relative central extension of $(G,N)$. Suppose that $K \\leq N^*$. Then the following statements hold.\\\\\n$(i)$ If $p>2$, then $[K,G] \\leq \\mho_1 (K)$ if and only if $[K\/[K,\\ _2G], G]\\leq \\mho_1( K\/[K,\\ _2G])$.\\\\\n$(ii)$ If $p=2$, then $[K,G] \\leq \\mho_2 (K)$ if and only if $[K\/[K,\\ _2G], G]\\leq \\mho_2( K\/[K,\\ _2G])$.\\\\\n$(iii)$ If $p=2$, then $[K,G] \\leq \\mho_2 (K)$ if and only if $[K\/\\mho_1([K, G]), G]\\leq \\mho_2( K\/\\mho_1([K, G]))$.\n\\end{lem}\n\\begin{proof} $(i)$ Let $[K,G] \\leq \\mho_1 (K)$ and put $H=[K, \\ _2G]$. Then\n$$ [\\frac{K}{H},G]=\\frac{[K,G]H}{H} \\leq \\frac{\\mho_1(K)H}{H}=\\mho_1(\\frac{K}{H}),$$ as desired. Sufficiency follows by Lemma 3.5.\\\\\n$(ii)$ The proof is similar to $(i)$.\\\\\n$(iii)$ Necessity follows as in (i). Conversely, let $[K\/\\mho_1([K, G]), G]\\leq \\mho_2( K\/\\mho_1([K, G]))$. Then $[K,G]\\leq \\mho_2(K) \\mho_1([K,G])$. On the other hand, $\\mho_1([K\/\\mho_1([K, G]), G])=1$, so $[K\/\\mho_1([K, G]), G]$ has exponent $2$ and is therefore abelian; hence $\\Phi([K\/\\mho_1([K, G]), G])=1$. This implies that $\\Phi([K,G])=\\mho_1([K,G])$. Therefore $[K,G]\\leq \\mho_2(K)$.\n\\end{proof}\n\nThe following useful remark is a consequence of Lemma 3.7.\n\\begin{rem}\nLet $(G,N)$ be a pair of finite $p$-groups and $\\sigma : N^* \\rightarrow G$ be a relative central extension of $(G,N)$. Let $K \\leq N^*$. 
Then to prove that $[K,G] \\leq \\mho_1 (K)$ (respectively $[K,G] \\leq \\mho_2 (K)$ for $p=2$) we may assume that\\\\\n$(i)$ $[K, \\ _2G]=1$;\\\\\n$(ii)$ $\\mho_1(K)=1$ (respectively $\\mho_2(K)=1$ for $p=2$) and try to show that $[K,G]=1$;\\\\\n$(iii)$ $\\mho_1([K,G])=1$ whenever $p=2$.\n\\end{rem}\n\\begin{lem}\nLet $(G,N)$ be a pair of finite $p$-groups and $\\sigma : N^* \\rightarrow G$ be a covering pair of $(G,N)$. Let $N$ be powerfully embedded in $G$. \\\\\n$(i)$ If $p>2$, then $[\\mho_n([N^*,G]),G]\\leq \\mho_1(\\mho_n([N^*,G]))$.\\\\\n$(ii)$ If $p=2$, then $[\\mho_n([N^*,G]),G]\\leq \\mho_2(\\mho_n([N^*,G]))$.\n\\end{lem}\n\\begin{proof}\n $N^*$ has a subgroup $A$ such that $A \\leq Z(N^*,G)\\cap [N^*,G]$, $A\\cong M(G,N)$ and $N\\cong N^*\/A$.\\\\\n$(i)$ Let $p>2$. We use induction on $n$. If $n=0$, then by Remark 3.8 we may assume that $[[N^*,G],G,G]=1$ and $\\mho_1([N^*,G])=1$, and\n we should show that $[[N^*,G],G]=1$.\n Since $N$ is powerfully embedded in $G$, we have $[N,G]\\leq \\mho_1(N)$, and therefore $[N^*,G]\\leq \\mho_1(N^*)A$. Now we claim that $\\mho_1(N^*)\\leq Z(N^*,G)$.\n To prove the claim, let $ a \\in N^*$ and $b\\in G$. Since $\\gamma_3(\\langle a, [N^*,G]\\rangle)=1$, we have $cl(<a,a^b>) \\leq cl(<a, [N^*,G]>)\\leq 2$ ($cl(H)$ denotes the nilpotency class of $H$). On the other hand, Lemma 3.4 implies that $$\n (a^p)^b=a^p[a^p,b]\\equiv a^p[a,b]^p[a,b,a]^{p(p-1)\/2} \\pmod{[<a>, \\ _3G]}.$$ Therefore $(a^p)^b=a^p$, since $[[N^*,G],G,G]=1$ and $\\mho_1([N^*,G])=1$.\n Hence $\\mho_1(N^*)\\leq Z(N^*,G)$ as desired. Thus $[N^*,G] \\leq \\mho_1(N^*)A\\leq Z(N^*,G)$ and the result follows for $n=0$.\n \n Now suppose that the induction hypothesis is true for $n=k$. The first step of the induction implies that $[N^*,G]$ is powerful.\n Using Lemmas 3.5 and 3.6, one can see that if $H$ is a subgroup of $N^*$ and $[H,G] \\leq \\mho_1(H)$, then $[\\mho_1(H),G] \\leq \\mho_1(\\mho_1(H))$. Hence by Lemma 3.2 and the induction hypothesis we have\n$$[\\mho_{k+1}([N^*,G]),G]= [\\mho_1(\\mho_{k}([N^*,G])),G]\\leq \\mho_1(\\mho_1(\\mho_{k}([N^*,G])))$$ $$=\\mho_1(\\mho_{k+1}([N^*,G])),$$ which completes the proof.\\\\\n$(ii)$ Let $p=2$. The proof is similar to (i), but we need to prove that if $H$ is a subgroup of $N^*$ and $[H,G]\\leq \\mho_2(H)$, then $[ \\mho_1(H),G] \\leq \\mho_2(\\mho_1(H))$. By Remark 3.8,\n for $a \\in H, b \\in G$ we have $[a^4,b]=[a^2,b]^2=1$. So $a^4 \\in Z(H,G)$ and $\\mho_2(H)\\leq Z(H,G)$. Then $[H,G]\\leq \\mho_2(H)\\leq Z(H,G)$. Therefore $[a^2,b]=[a,b]^2$ and\n \\begin{equation}\n [\\mho_1( H),G]=\\mho_1([H,G]).\n \\end{equation}\n On the other hand, since $\\mho_2(H)\\leq Z(H,G)$, we have\n $$\\mho_1(\\mho_2(H))=<({a_1}^4 \\dots {a_k}^4)^2| a_i \\in H>=<{a_1}^8 \\dots {a_k}^8>=\\mho_3(H)\\leq \\mho_2(\\mho_1(H)). $$\n Hence (3.1) implies that $[\\mho_1(H),G] \\leq \\mho_2(\\mho_1(H))$, which completes the proof of the above claim.\n\\end{proof}\n\\begin{lem}\nLet $H$ and $G$ be two arbitrary groups with an action of $G$ on $H$. If $x \\in H$ and $g\\in G$, then $$[x^n,g]=[x,g]^n c, $$ where $M=\\langle x,[x,g] \\rangle$ and $ c \\in \\gamma_2(M)$.\n\\end{lem}\n\\begin{proof} Applying Lemma 2.1, we have\n$$ [x^n,g]=(x^n)^{-1}(x^n)^{g}=(x)^{-n}(x^g)^n=(x)^{-n}(x[x,g])^n=[x,g]^n c, $$\nwhere $M=\\langle x,[x,g] \\rangle$ and $c \\in \\gamma_2(M)$.\n\\end{proof}\n\nNow we can state the main result of this section.\n\\begin{thm}\n Let $(G,N)$ be a pair of finite $p$-groups in which $N$ is powerfully embedded in $G$. 
Then\n $\\exp(M(G,N))$ divides $ \\exp(N)$.\n \\end{thm}\n\\begin{proof} Let $p>2$ and $\\sigma : N^* \\rightarrow G$ be a covering pair of $(G,N)$ with a subgroup $A$ such that $A \\leq Z(N^*,G)\\cap [N^*,G]$, $A\\cong M(G,N)$ and $N\\cong N^*\/A$. It is enough to show that $\\exp([N^*,G])=\\exp(N^*\/Z(N^*,G))$, because $M(G,N)\\cong A\\leq [N^*,G]$ and $N^*\/Z(N^*,G)$ is a homomorphic image of $N^*\/A\\cong N$. For this we use induction on $k$ and show that\n\\begin{equation}\n\\mho_k([N^*,G])=[\\mho_k(N^*),G].\n\\end{equation}\nIf $k=0$, then (3.2) holds trivially. Now assume that (3.2) holds for $k=n$.\nWorking in the powerful $p$-group $N^*\/A$, we get $\\mho_{n+1}(N^*\/A)=\\mho_1 (\\mho_n(N^*\/A))$ by Lemma 3.2. Hence\n\\begin{equation}\n \\frac{\\mho_{n+1}(N^*)A}{A}=\\mho_1(\\frac{\n\\mho_n(N^*)A}{A})=\\frac{\\mho_1(\\mho_n(N^*)A)A}{A}.\n\\end{equation}\nThen Lemmas 3.6 and 3.9 and the induction hypothesis imply that\n\\begin{eqnarray*}\n [\\mho_{n+1}(N^*),G]=\n[\\mho_1 (\\mho_n(N^*)A)A,G] & \\leq &\n\\mho_1 ([\\mho_n(N^*)A,G])[\\mho_n(N^*)A, \\ _3G] \\\\& \\leq & \\mho_1 ([\\mho_n(N^*),G])[\\mho_n(N^*)A, \\ _2G] \\\\&\\leq& \\mho_1 (\\mho_n([N^*,G]))[\\mho_n([N^*,G]),G] \\\\&\\leq& \\mho_1 (\\mho_n([N^*,G]))=\\mho_{n+1}([N^*,G]).\n\\end{eqnarray*}\nFor the reverse inclusion, we show that\n$$\\mho_{n+1}([N^*,G]) \\equiv 1 \\pmod{[\\mho_{n+1}(N^*),G]}.$$\nSince by $(3.3)$ we have $[\\mho_{n+1}(N^*),G]=[\\mho_1 (\\mho_n(N^*)A)A,G]=[\\mho_1 (\\mho_n(N^*)A),G]$, it follows that\n$\\mho_1 (\\mho_n(N^*)A)A \\leq Z(N^*,G) \\pmod{[\\mho_{n+1}(N^*),G]}$.\\\\\nOn the other hand, since $N$ is powerfully embedded in $G$, we have\n\\begin{eqnarray*}\n [\\mho_{n}(N^*),G]=[\\mho_{n}(N^*)A,G] &\\leq& \\mho_1 (\\mho_n(N^*)A)A \\\\&\\leq& Z(N^*,G) \\pmod{[\\mho_{n+1}(N^*),G]}.\n \\end{eqnarray*}\nTherefore $[\\mho_{n}(N^*),G,G]\\equiv 1 \\pmod{[\\mho_{n+1}(N^*),G]}.$\\\\\n Moreover, by Lemma 3.10 we have\n $$[\\mho_1 (\\mho_n(N^*)A),G][\\mho_{n}(N^*),G,G]=\\mho_1( [ \\mho_n(N^*),G])[\\mho_{n}(N^*),G,G].$$\n It follows that $\\mho_1 ([\\mho_n(N^*),G])\\equiv 1 \\pmod{[\\mho_{n+1}(N^*),G]}$. Then by the induction hypothesis, we have\n $$\\mho_{n+1}([N^*,G])= \\mho_1(\\mho_{n}([N^*,G]))=\\mho_1 ([\\mho_n(N^*),G])\\equiv 1 \\pmod{[\\mho_{n+1}(N^*),G]}.$$\n This completes the proof for odd primes $p$. The proof for the case $p=2$ is similar.\n\\end{proof}\n\\section*{Acknowledgement}\nThe authors would like to thank the referee for the valuable comments and useful suggestions which improved the present paper.\n\nThis research was supported by a grant from Ferdowsi University of Mashhad (No. MP90259MSH).\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
|
|