diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzgdyq" "b/data_all_eng_slimpj/shuffled/split2/finalzzgdyq" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzgdyq" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nReconfigurable intelligent surfaces (RISs) have arisen as a promising solution for future sixth generation (6G) wireless communication systems \\cite{DiRenzo_JSAC_2020,Do_arXiv_2021}.\nAn RIS is an array consisting of a large number of passive reflecting elements, in which each element can be programmed and electronically controlled to configure its phase-shift independently so that impinging waves can be reflected and steered toward an intentional direction.\nRecently, in \\cite{Basar_ACCESS_2019}, the authors specified the optimal phase-shift configuration for RIS-aided point-to-point communication systems. In \\cite{Huang_TWC_2019}, the authors showed that RIS achieves more energy efficiency (EE) than the conventional amplify-and-forward (AF) relaying. In \\cite{Bjornson_WCL_2020}, RIS was demonstrated to outperform the conventional decode-and-forward (DF) relaying in terms of EE. Very recently, in \\cite{VanChien_CL_2020}, the authors provided various coverage and capacity analyses on RIS-aided dual-hop relaying systems.\n\nToward 6G, non-terrestrial communications have been recognized as sustainable and reliable tools for communicating over remote and hazardous areas. However, due to dynamics of spatial movements, there is a need of new fading models to accurately characterize both air-to-ground (A2G) downlink (DL) and uplink (UL) channels. In \\cite{Sharma_CL_2020}, considering three-dimensional (3D) spatial movements, the outage performance of unmanned aerial vehicles (UAV)-aided dual-hop relaying was analyzed. Recently, in \\cite{Bithas_TCOM_2020}, based on empirical data obtained in A2G trials, A2G channels were modeled using Nakagami-$m$ multipath fading and inverse-Gamma (IG) shadowing. In \\cite{Bao_WCL_2020}, the authors used a deep neural network (DNN) to predict the secrecy performance of A2G communications. More recently, the concept of aerial RIS (aerial-RIS)-enabled communication systems was proposed in \\cite{Lu_TWC_2021}; however, the authors modeled A2G links using the free-space path-loss model and neglected the fading and shadowing effects. \n\n\\textit{Different from existing works}, in this paper, we propose a practical aerial-RIS-aided wireless communication system subject to composite fading channel model, where the small-scale fading follows a Nakagami-$m$ distribution whereas the large-scale shadowing follows an IG distribution. We analyze the outage performance of the aerial-RIS-aided system under such a practical fading channel model.\nNext, we consider a mobile environment, in which the 3D spatial movement of aerial-RIS is modeled using the random waypoint mobility model (RWMM). With such a mobile system, locations of the aerial-RIS are treated as random variables (RVs), which makes the outage performance analysis intractable. Thus, relying on data-driven methods, we build a DNN that can be trained to predict the OP using pre-collected channel state information (CSI) and associated OP data.\nThe key contributions of the paper are summarized as follows:\n\\begin{itemize}\n\\item We propose \\textit{a technical framework} to derive the OP of the proposed aerial-RIS-aided system. 
Specifically, in order to circumvent the appearance of parabolic cylinder functions, we use the \textit{moment-matching technique} to fit the distribution of the product of four different RVs to that of a $[\mathrm{Gamma/(Gamma^2)}]$ RV. The re-structured RV is transformed again to a mixture Gamma (MG) RV using Gauss-Laguerre quadrature. We then use the Laplace transform to derive the distribution of the sum of multiple transformed MG RVs. With this proposed technical framework, we obtain a tight approximate closed-form expression for the system OP.
\item Considering the 3D spatial movement of the aerial-RIS, whose mathematical performance analysis is intractable, we propose a detailed procedure to train and test the DNN, which is tailored to our proposed system. We show that the trained DNN accurately predicts the system OP.
\item Representative results\footnote{The source code is published at \url{https://github.com/trinhudo/Aerial-RIS}} show that the OP is significantly sensitive to the considered composite fading model. With the DNN-based predicted OP results, we demonstrate the achievable EE of the aerial-RIS in terms of transmit power consumption under different reflecting element settings. Moreover, we show that the aerial-RIS-aided system outperforms conventional cooperative relaying systems.
\end{itemize}

\noindent \textbf{Notations}: $[\vec{v}]_r$ denotes the $r$-th element of vector $\vec{v}$, $\mathbb{E}(\cdot)$ denotes the expectation operation, $\Gamma(\cdot)$ is the Gamma function \cite[(8.310.1)]{Gradshteyn2007}, $\Phi_2^{(K)} (\cdot)$ is the multivariate confluent hypergeometric function \cite[pp. 290]{Srivastava1985}, and $K_\nu(\cdot)$ is the modified Bessel function \cite[(8.407.1)]{Gradshteyn2007}.

\section{System and Channel Models}

We consider a low-complexity communication system that can be deployed for disaster relief. Considering that a source $\mathrm{S}$, e.g., a terrestrial transmitter, and a destination $\mathrm{D}$, e.g., a terrestrial receiver, are both equipped with a single antenna, we assume that the $\mathrm{S} \to \mathrm{D}$ direct link is not available due to severe fading and shadowing caused by on-the-ground obstacles. Instead, the $\mathrm{S} \to \mathrm{D}$ communication is assisted by an aerial-RIS, $\mathrm{R}$, which passively reflects signals from $\mathrm{S}$ to $\mathrm{D}$, as depicted in Fig.~\ref{fig_system}.

Suppose that the aerial-RIS has $N$ discrete reflecting elements, and let $\vec{h}_{\mathrm{S} \mathrm{R}} \in \mathbb{C}^{N \times 1}$ and $\vec{h}_{\mathrm{R} \mathrm{D}} \in \mathbb{C}^{N \times 1}$ be the $\mathrm{S}\to\mathrm{R}$ and $\mathrm{R}\to\mathrm{D}$ \textit{complex channel coefficient vectors}, respectively.
The properties of the aerial-RIS are characterized via the phase-shift matrix $\vec{\Psi} = \kappa \mathrm{diag}(e^{j \phi_1}, \ldots, e^{j \phi_N})$, where $\phi_r \in [0, 2\pi), r = 1,\ldots,N,$ is the \textit{phase-shift} occurring at element $r$ of the aerial-RIS, and $\kappa \in (0,1]$ is the \textit{fixed amplitude reflection coefficient} \cite{Bjornson_WCL_2020}.
Let $s$, with $\mathbb{E}[|s|^2] = 1$, denote the transmit signal from $\mathrm{S}$.
Thus, the received signal at $\mathrm{D}$ can be expressed as
\begin{align} \label{eq_received_signal}
	y = \sqrt{P_\mathrm{S}} \sum_{r=1}^N [\vec{h}_{\mathrm{S} \mathrm{R}}]_r \kappa e^{j \phi_r} [\vec{h}_{\mathrm{R} \mathrm{D}}]_r s + w_\mathrm{D},
\end{align}
where $P_\mathrm{S}$ denotes the transmit power of $\mathrm{S}$ and $w_\mathrm{D} \sim \mathcal{CN}(0,\sigma^2)$ is the additive white Gaussian noise (AWGN) at $\mathrm{D}$ with zero mean and variance $\sigma^2$.

Let $[\vec{h}_{\mathrm{S}\mathrm{R}}]_r \triangleq \tilde{h}_{\mathrm{S} r}$ and $[\vec{h}_{\mathrm{R} \mathrm{D}}]_r \triangleq \tilde{h}_{r \mathrm{D}}$. The polar representation of the \textit{complex channel coefficient} $\tilde{h}_\mathrm{c}$ can be expressed as $\tilde{h}_\mathrm{c} = h_\mathrm{c} e^{j \theta_\mathrm{c}}$, for $\mathrm{c} \in \{\mathrm{S} r, r \mathrm{D}\}$, where $h_\mathrm{c}$ is the \textit{magnitude}, i.e., $h_\mathrm{c} = \vert \tilde{h}_\mathrm{c} \vert$, and $\theta_\mathrm{c} \in [0, 2\pi)$ is the \textit{phase} of $\tilde{h}_\mathrm{c}$. Considering a practical composite fading channel, in which the small-scale fading, $G_\mathrm{c}$, is modeled as a Nakagami-$m$ RV and the large-scale shadowing, $L_\mathrm{c}$, is modeled as an IG RV, we have that $h_\mathrm{c} = L_\mathrm{c} G_\mathrm{c}$. More specifically, the Nakagami-$m$ probability density function (PDF) of $G_\mathrm{c}$ is given by \cite{Matlab2021a}
\begin{align} \label{eq_PDF_Naka}
	f_{G_\mathrm{c}} (x ; m, \Omega) =
	2 \left(\frac{m}{\Omega}\right)^m \frac{1}{\Gamma(m)} x^{2m - 1} e^{-\frac{m}{\Omega} x^2} , \quad x>0,
\end{align}
where $m$ and $\Omega$ are the \textit{shape} and the \textit{spread} parameters of the distribution, respectively. The inverse-Gamma PDF of $L_\mathrm{c}$ is given by \cite{Peebles2000}
\begin{align} \label{eq_PDF_IG}
	f_{L_\mathrm{c}} (x;\alpha, \beta) = \frac{\beta^\alpha}{\Gamma(\alpha)} x^{-\alpha - 1} e^{-\frac{\beta}{x}}, \quad x>0,
\end{align}
where $\alpha > 1$ and $\beta$ are the \textit{shape} and the \textit{scale} parameters of the distribution, respectively.
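The composite magnitude $h_\mathrm{c} = L_\mathrm{c} G_\mathrm{c}$ can be sampled directly. The following is a minimal Python sketch (our own illustration, not part of the published source code), assuming only NumPy; it uses the facts that the square of a Nakagami-$m$ RV is Gamma distributed and that the reciprocal of a Gamma RV is inverse-Gamma distributed.
\begin{verbatim}
import numpy as np

def sample_nakagami(m, omega, size, rng):
    # If X ~ Gamma(shape=m, scale=omega/m), then sqrt(X) is Nakagami-m
    return np.sqrt(rng.gamma(shape=m, scale=omega / m, size=size))

def sample_inverse_gamma(alpha, beta, size, rng):
    # If Y ~ Gamma(shape=alpha, scale=1/beta), then 1/Y ~ IG(alpha, beta)
    return 1.0 / rng.gamma(shape=alpha, scale=1.0 / beta, size=size)

rng = np.random.default_rng(0)
G = sample_nakagami(2.0, 1.0, 10**6, rng)       # small-scale fading
L = sample_inverse_gamma(2.5, 1.0, 10**6, rng)  # large-scale shadowing
h = L * G                                       # composite magnitude
\end{verbatim}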
From \eqref{eq_received_signal}, the end-to-end (e2e) instantaneous achievable capacity [b/s/Hz] of the system can be expressed as
\begin{align}
	R = \max_{\phi_1, \ldots,\phi_N} \log_2 \bigg( 1 + \frac{P_\mathrm{S}}{\sigma^2} \bigg| \kappa \sum_{r=1}^N [\vec{h}_{\mathrm{S} \mathrm{R}}]_r e^{j \phi_r} [\vec{h}_{\mathrm{R} \mathrm{D}}]_r \bigg|^2 \bigg).
\end{align}

\section{Outage Performance Analysis}

The outage probability of the system is the probability that the instantaneous mutual information of the system falls below a pre-defined target spectral efficiency (SE), $R_\mathrm{th}$ [b/s/Hz], which can be mathematically expressed as
\begin{align} \label{eq_outage}
 \mathrm{P_{out}} &= \Pr(R < R_\mathrm{th}) \nonumber\\
 &= \Pr(\gamma < \gamma_\mathrm{th}),
\end{align}
where $\gamma_\mathrm{th} \triangleq 2^{R_\mathrm{th}} - 1$, and $\gamma$ denotes the e2e receive signal-to-noise ratio (SNR) at $\mathrm{D}$, which is expressed as
\begin{align} \label{eq_snr}
	\gamma = \max_{\phi_1,\ldots,\phi_N} \bar{\gamma} \bigg| \kappa \sum_{r=1}^N L_{\mathrm{S} r} G_{\mathrm{S} r} e^{j (\phi_r + \theta_{\mathrm{S} r} + \theta_{r \mathrm{D}}) } G_{r \mathrm{D}} L_{r \mathrm{D}} \bigg|^2 ,
\end{align}
where $\bar{\gamma} = P_\mathrm{S}/\sigma^2$ denotes the average transmit SNR.

\begin{figure}[!t]
\centering
\includegraphics[width=.9\linewidth]{fig1.png}
\caption{Illustration of the aerial-RIS-aided wireless communication system.}
\label{fig_system}
\end{figure}

In order to address \eqref{eq_outage}, we first set up the optimal configuration of the phase-shift matrix, i.e., $\vec{\Psi}^\star$, such that the end-to-end SNR in \eqref{eq_snr} is maximized. Following \cite{Basar_ACCESS_2019}, \cite{Bjornson_WCL_2020}, and \cite{VanChien_CL_2020}, for each reflecting element of the aerial-RIS, we configure $\phi_r^\star = - (\theta_{\mathrm{S} r} + \theta_{r \mathrm{D}}), \forall r$. Consequently, the OP in \eqref{eq_outage} can be re-expressed as
\begin{align} \label{eq_Psi}
\mathrm{P_{out}} = \Pr \bigg(\bar{\gamma} \kappa^2 \bigg| \sum_{r=1}^N L_{\mathrm{S} r} G_{\mathrm{S} r} G_{r \mathrm{D}} L_{r \mathrm{D}} \bigg|^2 < \gamma_\mathrm{th} \bigg).
\end{align}
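Before turning to the analysis, note that \eqref{eq_Psi} can be estimated directly by Monte Carlo simulation, which serves as a baseline for the closed-form results below. A sketch under the assumption of i.i.d. links, reusing the hypothetical samplers introduced earlier:
\begin{verbatim}
def outage_probability_mc(N, gamma_bar, gamma_th, kappa,
                          m, omega, alpha, beta, trials, rng):
    # Magnitudes of the S->r and r->D links for all N elements
    G1 = sample_nakagami(m, omega, (trials, N), rng)
    G2 = sample_nakagami(m, omega, (trials, N), rng)
    L1 = sample_inverse_gamma(alpha, beta, (trials, N), rng)
    L2 = sample_inverse_gamma(alpha, beta, (trials, N), rng)
    # With phi_r* = -(theta_Sr + theta_rD), the paths add coherently
    Z = (L1 * G1 * G2 * L2).sum(axis=1)
    snr = gamma_bar * kappa**2 * Z**2
    return np.mean(snr < gamma_th)
\end{verbatim}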
First, let $W_r \triangleq L_{\src r} G_{\src r} L_{r \des} G_{r \des}$, $G_r \triangleq G_{\src r} G_{r \des}$, $L_r \triangleq L_{\src r} L_{r \des}$, and $\tilde{L}_r \triangleq 1 /\sqrt{L_r}$; thus, $W_r = G_r / \tilde{L}_r^2$. Next, we will show that the distribution of $W_r$ can be matched to a $[\mathrm{Gamma/(Gamma^2)}]$ distribution.
To this end, we first propose a distribution matching for $G_r$ and $\tilde{L}_r$, as presented in the following Lemma.
Let $X$ be a Gamma RV with the PDF given by \cite{Matlab2021a}
\begin{align}
f_{X} (x; \nu, \zeta) = \frac{1}{ \zeta^{\nu} \Gamma(\nu)} x^{\nu - 1} e^{ -\frac{x}{\zeta} }, \quad x>0,
\end{align}
where $\nu$ and $\zeta$ denote the \textit{shape} and the \textit{scale} parameters; we denote $X \sim \mathrm{Gamma} (\nu, \zeta)$.

\begin{Lemma} \label{lemma_matching_Gamma}
The distributions of $G_r$ and $\tilde{L}_r$ can be approximately matched to Gamma distributions as
\begin{align}
	G_r &\overset{\mathrm{approx.}}{\sim} \mathrm{Gamma} (m_G, \Omega_G / m_G ), \label{eq_G_r_Gamma} \\
	\tilde{L}_r &\overset{\mathrm{approx.}}{\sim} \mathrm{Gamma} (m_L, \Omega_L / m_L), \label{eq_L_r_Gamma}
\end{align}
respectively, where
$\Omega_G \triangleq \frac{ \Gamma(m_\mathrm{s} + 1/2) \Gamma(m_\mathrm{d} + 1/2) }{\Gamma(m_\mathrm{s}) \Gamma(m_\mathrm{d})} \Upsilon_G^{-1/2}$, with
$\Upsilon_G \triangleq (m_\mathrm{s} m_\mathrm{d})/(\Omega_\mathrm{s} \Omega_\mathrm{d})$,
and
$m_G \triangleq \frac{\Omega_G^2}{\Omega_\mathrm{s} \Omega_\mathrm{d} - \Omega_G^2}$;
and
$\Omega_L \triangleq \Gamma(\alpha_\mathrm{s} + 1/2) \Gamma(\alpha_\mathrm{d} + 1/2) / [\sqrt{\beta_\mathrm{s} \beta_\mathrm{d}} \Gamma(\alpha_\mathrm{s}) \Gamma(\alpha_\mathrm{d})]$
and
$m_L \triangleq \Omega_L^2/ [(\alpha_\mathrm{s} \alpha_\mathrm{d})/(\beta_\mathrm{s} \beta_\mathrm{d}) - \Omega_L^2]$.
\end{Lemma}

\begin{IEEEproof}
The proof is provided in Appendix \ref{apx_proof_lemma_GG_LL}.
\end{IEEEproof}
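Lemma~\ref{lemma_matching_Gamma} can be sanity-checked numerically. A sketch (again our own illustration) comparing the empirical moments of $G_r$ with those of the matched Gamma RV:
\begin{verbatim}
from math import gamma as Gf
import numpy as np

def matched_gamma_G(ms, md, oms, omd):
    # Shape and scale of the Gamma RV matched to G_r (Lemma 1)
    ups = (ms * md) / (oms * omd)
    omega_G = Gf(ms + .5) * Gf(md + .5) / (Gf(ms) * Gf(md)) * ups**-0.5
    m_G = omega_G**2 / (oms * omd - omega_G**2)
    return m_G, omega_G / m_G

rng = np.random.default_rng(1)
ms, md, oms, omd = 2.0, 2.0, 1.0, 1.0
Gr = (sample_nakagami(ms, oms, 10**6, rng)
      * sample_nakagami(md, omd, 10**6, rng))
m_G, scale = matched_gamma_G(ms, md, oms, omd)
print(Gr.mean(), m_G * scale)      # first moments should agree
print(Gr.var(),  m_G * scale**2)   # variances should agree
\end{verbatim}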
To further the analysis, we present the following Theorem.
\begin{Theorem}\label{theorem_cdf_Z2}
Let $Z \triangleq \sum_{r=1}^N W_r = \sum_{r=1}^N G_{\src r} L_{\src r} G_{r \des} L_{r \des}$. An approximate closed-form expression for the cumulative distribution function (CDF) of $|Z|^2$ can be attained as in \eqref{eq_cdf_Z2_a}, shown at the top of the next page,
\begin{figure*}
\begin{align} \label{eq_cdf_Z2_a}
	F_{|Z|^2} (z)
	\approx
	\frac{1}{\Gamma(N m_G +1)} z^{\frac{N m_G}{2}} \sum_{\Xi_\tau, \forall \tau} \binom{N}{\tau_1,...,\tau_K}
	\bigg[ \prod_{k=1}^K \xi_k^{\tau_k}\bigg] \Phi_2^{(K)} \bigg(m_G \tau_1, ..., m_G \tau_K ; 1+N m_G; -\frac{\sqrt{z}}{\zeta_1},..., - \frac{\sqrt{z}}{\zeta_K} \bigg),
\end{align}
\hrule
\end{figure*}
where $\binom{N}{ \tau_1,\dots,\tau_K } \triangleq \frac{ N! }{ \tau_1 ! \dots \tau_K! }$, and $\Xi_\tau \triangleq (\tau_1, ..., \tau_K)^{(\tau)}$ denotes a possible combination of $\tau_1, ..., \tau_K$, where $\tau_k \in \{\tau_1, ..., \tau_K\}$ are non-negative integers satisfying $\sum_{k=1}^K \tau_k = N$.
\end{Theorem}

\begin{IEEEproof}
Let $\Lambda_G \triangleq \Omega_G / m_G$ and $\Lambda_L \triangleq \Omega_L / m_L$.
Invoking Lemma~\ref{lemma_matching_Gamma}, the approximate PDF of the $[\mathrm{Gamma/(Gamma^2)}]$ RV, i.e., $W_r = G_r/\tilde{L}_r^2$, is obtained as
\begin{align} \label{eq_pdf_W_r_a}
	f_{W_r} (y) \approx \frac{ \Lambda_L^{-m_L}}{\Gamma(m_L)} \int_0^\infty x^{m_L + 1} f_{G_r} (y x^2) e^{- \frac{x}{\Lambda_L}} d x.
\end{align}
In order to derive a tractable expression of the integral in \eqref{eq_pdf_W_r_a}, we rely on the Gauss-Laguerre (G-L) quadrature \cite{Abramowitz1965}, which yields
\begin{align}
	f_{W_r} (y) \approx \sum_{k=1}^K \psi_k \frac{y^{m_G - 1}}{\Gamma(m_G)} e^{ - \frac{y}{\zeta_k}}, \quad y>0,
\end{align}
where $\zeta_k = \frac{\Lambda_G}{ (\mathrm{z}_k \Lambda_L)^2}$, $\psi_k = \frac{\mathrm{w}_k}{\Gamma(m_L)} \frac{\mathrm{z}_k^{m_L - 1}}{\zeta_k^{m_G}}$, $\mathrm{w}_k$ and $\mathrm{z}_k$ are the weight factors and abscissas of the G-L quadrature, respectively \cite{Abramowitz1965}, and $K$ denotes the number of terms in the G-L quadrature. After normalization, i.e., $f_{W_r}(y) \leftarrow f_{W_r} (y) / \int_0^\infty f_{W_r} (y) \mathrm{d}y$, an approximate closed-form expression for the PDF of $W_r$ can be attained as
\begin{align} \label{eq_pdf_W_r}
	f_{W_r} (y) \approx \sum_{k=1}^K \xi_k \frac{y^{m_G - 1}}{\Gamma(m_G)} e^{-\frac{y}{\zeta_k} },
\end{align}
where $\xi_k = \psi_k / \big[ \sum_{i=1}^K \psi_i \zeta_i^{m_G} \big]$. From \eqref{eq_pdf_W_r}, it is apparent that $W_r$ follows an MG distribution.

We now turn our focus to $Z = \sum_{r=1}^N W_r$. First, the Laplace transform of $W_r$, $\mathcal{L}_{W_r} (v) \triangleq \mathbb{E}_{W_r} [e^{-v W_r}]$, can be obtained as
\begin{align}
	\mathcal{L}_{W_r} (v) = \int_0^\infty e^{-v y} f_{W_r} (y) d y = \sum_{k=1}^K \xi_k \left( \frac{1}{\zeta_k} + v\right)^{-m_G}.
\end{align}
By performing the Laplace transform for $Z$ and after some derivation steps, the CDF of $Z$ can be expressed as
\begin{align} \label{eq_CDF_Z_a}
	F_Z (z) = \mathcal{L}^{-1} \bigg\{ \frac{1}{v} \bigg[\sum_{k=1}^K \xi_k \bigg( \frac{1}{\zeta_k} + v \bigg)^{-m_G}\bigg]^N; v, z\bigg\},
\end{align}
where $\mathcal{L}^{-1}\{ H(v); v,z \}$ denotes the inverse Laplace transform of $H(v)$ from the $v$-domain to the $z$-domain. Invoking the multinomial theorem, \eqref{eq_CDF_Z_a} is re-expressed as
\begin{align}
	F_Z (z) &= \sum_{\Xi_\tau, \forall \tau} \binom{N}{\tau_1,...,\tau_K} \bigg[\prod_{k=1}^K \xi_k^{\tau_k}\bigg] \nonumber \\
	&\quad\times \mathcal{L}^{-1} \bigg\{ \frac{1}{v} \prod_{k=1}^K \bigg( \frac{1}{\zeta_k} + v\bigg)^{-\tau_k m_G}; v,z \bigg\}.
\end{align}
Given the linearity property of the Laplace transform, we have
\begin{align}
	&\mathcal{L}^{-1} \bigg\{ \frac{1}{v} \prod_{k=1}^K \bigg( \frac{1}{\zeta_k} + v\bigg)^{-\tau_k m_G}; v,z \bigg\} = \frac{1}{\Gamma(N m_G + 1)}\nonumber \\
	&\quad\times \mathcal{L}^{-1} \bigg\{ \frac{\Gamma(N m_G + 1)}{v^{N m_G + 1}} \prod_{k=1}^K \bigg( \frac{1}{\zeta_k v} + 1\bigg)^{-\tau_k m_G}; v,z \bigg\},
\end{align>
and by making use of \cite[Eq.~(10)]{Martinez_TCOM_2016},
after some mathematical manipulations and simplifications, an approximate closed-form expression for the CDF of $Z$ can be derived as
\begin{align} \label{eq_CDF_Z_d}
	&F_Z (z) \approx \frac{z^{N m_G}}{\Gamma(N m_G + 1)} \sum_{\Xi_\tau, \forall \tau} \binom{N}{\tau_1,...,\tau_K} \bigg[\prod_{k=1}^K \xi_k^{\tau_k}\bigg] \nonumber \\
	&\times \Phi_2^{(K)} \bigg(m_G \tau_1 ,..., m_G \tau_K; N m_G + 1; -\frac{z}{\zeta_1},...,-\frac{z}{\zeta_K} \bigg).
\end{align}
Since $F_{X^2} (x) = F_X (\sqrt{x})$, $x>0$, one can attain the CDF of $|Z|^2$ as in \eqref{eq_cdf_Z2_a}. This completes the proof of Theorem \ref{theorem_cdf_Z2}.
\end{IEEEproof}
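The MG coefficients $(\xi_k, \zeta_k)$ used throughout the proof are straightforward to generate numerically. A sketch assuming NumPy's Gauss-Laguerre routine:
\begin{verbatim}
import numpy as np
from math import gamma as Gf
from numpy.polynomial.laguerre import laggauss

def mg_coefficients(m_G, lam_G, m_L, lam_L, K=30):
    z, w = laggauss(K)                  # G-L abscissas and weights
    zeta = lam_G / (z * lam_L)**2
    psi = w / Gf(m_L) * z**(m_L - 1) / zeta**m_G
    xi = psi / np.sum(psi * zeta**m_G)  # normalization of the mixture
    return xi, zeta
\end{verbatim}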
\begin{Corollary}
Invoking Theorem \ref{theorem_cdf_Z2}, an approximate closed-form expression for the system OP can be attained as
\begin{align} \label{eq_OP_end}
\mathrm{P_{out}} \approx F_{|Z|^2} (\gamma_\mathrm{th}/ (\bar{\gamma} \kappa^2)).
\end{align}
\end{Corollary}

\section{DNN-based Performance Prediction}

Given the system model considered in the previous sections, we further assume that the 3D spatial movement of the aerial-RIS is characterized by the RWMM; the samples of its locations can be drawn from a homogeneous 3D Poisson process. For the sake of exposition, we further assume that the aerial-RIS spatially moves in a 3D cylinder, as depicted in Fig.~\ref{fig_3D_position}. Consequently, each element of the cylindrical coordinate of $\mathrm{R}$ can be generated from a Uniform distribution, as shown in Table \ref{table_parameters}. It is noted that with the consideration of 3D movement, the distances between nodes in the proposed system are now also RVs, which makes the derivation of the system OP defined in \eqref{eq_Psi} infeasible, mainly because of the multivariate confluent hypergeometric function. To overcome this hurdle, we treat the problem of finding the system OP as \textit{a regression problem in supervised learning}. In particular, we generate a data set that comprehensively characterizes the considered system.
By being trained on such a data set, the developed DNN is capable of accurately predicting the OP under various system settings.

\begin{table}[!h]
\centering
\caption{Input Parameters for DNN Training and Testing}
\begin{tabularx}{\linewidth}{l X || l X}
\Xhline{2\arrayrulewidth}
\textbf{Inputs} & \textbf{Values} & \textbf{Inputs} & \textbf{Values} \\
\Xhline{2\arrayrulewidth}
$\bar{\gamma}$ [dB] & $[5-\epsilon_{\bar{\gamma}}, 5+\epsilon_{\bar{\gamma}}]$
& $N$ & $[20-\epsilon_N, 20+\epsilon_N]$ \\
$\omega_\mathrm{R}$ & $\sim \mathcal{U}(0,2\pi)$
& $\mathrm{r}_\mathrm{R}$ & $\sim 0.5 \sqrt{\mathcal{U} (0,1)}$\\
$m_\mathrm{c}$ & $[2-\epsilon_m, 2+\epsilon_m]$
& $\mathrm{h}_\mathrm{R}$ & $\sim \mathcal{U} (0,1)$ \\
$\alpha_\mathrm{c}$ & $[2.5-\epsilon_\alpha, 2.5+\epsilon_\alpha]$
& $\beta_\mathrm{c}$ & $[1-\epsilon_\beta, 1+\epsilon_\beta]$ \\
$\eta$ & $[2.7-\epsilon_\eta, 2.7+\epsilon_\eta]$ & $R_\mathrm{th}$ & $[5-\epsilon_R, 5+\epsilon_R]$ \\
\Xhline{2\arrayrulewidth}
\end{tabularx}
\label{table_parameters}
\end{table}

\begin{figure}[t]
\centering
\includegraphics[width=.6\linewidth]{fig2a.pdf}
\caption{Illustration of the 3D spatial movement of the aerial-RIS.}
\label{fig_3D_position}
\end{figure}

\begin{figure}[t]
\centering
\includegraphics[width=.7\linewidth]{fig2b.pdf}
\caption{Illustration of the designed DNN for the considered regression problem.}
\label{fig_DNN}
\end{figure}

\subsection{3D Spatial Movement Modeling}
Let $\omega_\mathrm{R}$, $\mathrm{r}_\mathrm{R}$, and $\mathrm{h}_\mathrm{R}$ respectively denote the azimuth, radial distance, and height of the cylindrical coordinate of $\mathrm{R}$. Without loss of generality, we consider a normalized unit cylinder, i.e., $\omega_\mathrm{R} \in [0,2\pi]$, $\mathrm{r}_\mathrm{R} \in [0,0.5]$, and $\mathrm{h}_\mathrm{R} \in [0,1]$. Thus, the 3D Cartesian coordinates of $\mathrm{R}$, $(x_\mathrm{R},y_\mathrm{R}, z_\mathrm{R})$, are obtained as $x_\mathrm{R} = \mathrm{r}_\mathrm{R} \sin \omega_\mathrm{R}$, $y_\mathrm{R} = \mathrm{r}_\mathrm{R} \cos \omega_\mathrm{R}$, and $z_\mathrm{R} = \mathrm{h}_\mathrm{R}$. Assuming that the 3D Cartesian coordinates of $\mathrm{S}$ and $\mathrm{D}$ are $(-0.5,0,0)$ and $(0.5,0,0)$, respectively, the distance between two nodes is calculated as $d_\mathrm{AB} = \sqrt{(x_\mathrm{A} - x_\mathrm{B})^2 + (y_\mathrm{A} - y_\mathrm{B})^2 + (z_\mathrm{A} - z_\mathrm{B})^2}$, where $\mathrm{A,B} \in \{\mathrm{S},\mathrm{D},\mathrm{R}\}$.

\subsection{DNN Construction, Training, and Testing}

\subsubsection{The Structure of The DNN} The developed DNN is a feed-forward neural network, consisting of one input layer, $D_\mathrm{hid}$ hidden layers, and one output layer, as depicted in Fig.~\ref{fig_DNN}. The input layer has $13$ neurons, corresponding to the $13$ system parameters listed in Table~\ref{table_parameters}. Each hidden layer $i$, $i =1,...,D_\mathrm{hid}$, has $D_\mathrm{neu}^{(i)}$ neurons, and uses the rectified linear unit ($\mathrm{ReLU}$) function as its activation function.
The output layer has one neuron, and uses the linear function as its activation function to return the predicted OP, $\mathrm{P}_\mathrm{out}^\mathrm{prd}$.
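This architecture is compact enough to state in a few lines. A minimal Keras sketch of the described structure (an illustration assuming TensorFlow 2, not the published implementation; the layer settings follow Algorithm~\ref{alg_DNN_train_test} below):
\begin{verbatim}
import tensorflow as tf

def build_dnn(n_inputs=13, n_hidden=5, n_neurons=128, lr=1e-3):
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.InputLayer(input_shape=(n_inputs,)))
    for _ in range(n_hidden):  # D_hid hidden layers with ReLU
        model.add(tf.keras.layers.Dense(n_neurons, activation='relu'))
    # Linear output neuron returning the predicted OP
    model.add(tf.keras.layers.Dense(1, activation='linear'))
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr),
                  loss='mse')
    return model
\end{verbatim}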
\begin{algorithm}[t]
\renewcommand{\thealgorithm}{1}
	\caption{Procedure of training and testing the DNN}
	\begin{algorithmic}[1]
		\renewcommand{\algorithmicrequire}{\textbf{Input:}}
		\renewcommand{\algorithmicensure}{\textbf{Output:}}
		\REQUIRE Input parameters; DNN settings: $D_\mathrm{hid} =5$, $D_\mathrm{neu}^{(i)} =128$, $\mathrm{RMSE}_\mathrm{th}=2\times10^{-2}$, learning rate $\mathrm{lr} = 10^{-3}$
		\ENSURE A trained DNN
		\STATE Draw the $\mathcal{S}_\mathrm{trn}$, $\mathcal{S}_\mathrm{val}$, and $\mathcal{S}_\mathrm{tes}$ sets from the data set
		\STATE Create the DNN's structure using Keras and TensorFlow
		\WHILE {$(\mathrm{RMSE} \geq \mathrm{RMSE}_\mathrm{th})$}
		\STATE Dynamically adjust $D_\mathrm{hid}$, $D_\mathrm{neu}^{(i)}$, $\mathrm{lr}$, and the number of epochs
		\STATE Use $\mathcal{S}_\mathrm{trn}$ and $\mathcal{S}_\mathrm{val}$ to train, validate, and then save the validated DNN as $\mathtt{validatedDNN.h5}$
		\STATE Feed $\mathcal{S}_\mathrm{tes}$ into $\mathtt{validatedDNN.h5}$, obtain $\mathrm{RMSE}$
		\ENDWHILE
		\RETURN $\mathtt{trainedDNN.h5}$
	\end{algorithmic}
	\label{alg_DNN_train_test}
\end{algorithm}

\begin{figure} [!t]
\centering
\includegraphics[width=.9\linewidth]{fig3.pdf}
\caption{The convergence of MSE in training and validating the DNN.}
\label{fig_validation}
\end{figure}

\subsubsection{Data Set}
Each sample $i$ of our data set $\mathcal{S}$ is a row vector, i.e.,
$\mathrm{Data}[i] = [\vec{t}[i],\mathrm{P}_{\mathrm{out},\textit{i}}^\mathrm{sim}]$, where $\vec{t}[i]$ is a feature vector including all input parameters listed in Table~\ref{table_parameters}. Each feature vector $\vec{t}[i]$ is used to create real-valued CSI sets, which are fed into a Monte Carlo simulation that returns a unique corresponding $\mathrm{P}_{\mathrm{out},\textit{i}}^\mathrm{sim}$. In total, we create $10^5$ samples, i.e., $\mathrm{Data}[i], i= 1,...,10^5$, and concatenate them to create the data set.
We then divide the data set into a training set, $\mathcal{S}_\mathrm{trn}$, a validation set, $\mathcal{S}_\mathrm{val}$, and a test set, $\mathcal{S}_\mathrm{tes}$, with ratios of $80\%$, $10\%$, and $10\%$, respectively.

Using the generated data sets, the DNN is trained and tested following Algorithm~\ref{alg_DNN_train_test}, where the mean squared error (MSE) is defined as $\mathrm{MSE} = \frac{1}{|\mathcal{S}_\mathrm{tes}|} \sum_{i=0}^{|\mathcal{S}_\mathrm{tes}|-1} \left(\mathrm{P}_{\mathrm{out},i}^\mathrm{prd} - \mathrm{P}_{\mathrm{out},i}^\mathrm{tes}\right)^2$, and the root MSE (RMSE) is defined as $\mathrm{RMSE} = \sqrt{\mathrm{MSE}}$.

\section{Numerical Results and Discussions}

In Nakagami-$m$ fading, the shape parameter $m$ indicates the fading severity and the spread parameter $\Omega$ takes into account the large-scale fading effect, i.e., $\Omega_\mathrm{c} = d_\mathrm{c}^{-\eta}$, for $\mathrm{c} \in \{\mathrm{S} r, r \mathrm{D}\}$, where $\eta$ denotes the path-loss exponent.
In IG shadowing, $\alpha$ indicates the shadowing severity and $\beta$ is normalized with respect to $\bar{\gamma}$.
Parameter settings are: $\epsilon_{\bar{\gamma}}=15$, $\epsilon_N =10$, $\epsilon_m = \epsilon_\alpha = 0.5$, $\epsilon_\eta = 0.3$, $\epsilon_\beta = 0.2$, $\epsilon_R = 3$, $K=30$, and a total of $10^5$ sets of parameters are generated.
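For concreteness, one feature vector $\vec{t}[i]$ can be drawn as follows (a sketch with hypothetical field names; the ranges follow Table~\ref{table_parameters} with the $\epsilon$ values above, simplified to a single fading parameter per hop):
\begin{verbatim}
import numpy as np

def sample_feature_vector(rng):
    return {
        'snr_db':  rng.uniform(5 - 15, 5 + 15),        # average SNR [dB]
        'N':       rng.integers(20 - 10, 20 + 10 + 1), # no. of elements
        'm':       rng.uniform(2 - 0.5, 2 + 0.5),      # Nakagami shape
        'alpha':   rng.uniform(2.5 - 0.5, 2.5 + 0.5),  # IG shape
        'beta':    rng.uniform(1 - 0.2, 1 + 0.2),      # IG scale
        'eta':     rng.uniform(2.7 - 0.3, 2.7 + 0.3),  # path-loss exponent
        'R_th':    rng.uniform(5 - 3, 5 + 3),          # target SE
        'omega_R': rng.uniform(0, 2 * np.pi),          # azimuth of R
        'r_R':     0.5 * np.sqrt(rng.uniform(0, 1)),   # radial distance of R
        'h_R':     rng.uniform(0, 1),                  # height of R
    }
\end{verbatim}
Each drawn feature vector is then passed to the Monte Carlo routine sketched earlier to obtain its label $\mathrm{P}_{\mathrm{out},i}^\mathrm{sim}$.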
In Fig.~\ref{fig_validation}, we evaluate the accuracy of the training using the validation set. As can be seen, the MSE converges after $30$ epochs and is lower than $10^{-4}$.

\begin{figure} [!t]
\centering
\includegraphics[width=.9\linewidth]{fig4.pdf}
\caption{ OP against $\bar{\gamma}$ subject to different fading conditions, with $R_\mathrm{th}=5, N=20$. It is noted that predicted results are obtained by the trained DNN associated with inputs in Table~\ref{table_parameters}, whereas analytical and simulation results are generated using fixed parameters.}
\label{fig_Pout_fading_change}
\end{figure}

As shown in Fig.~\ref{fig_Pout_fading_change}, the analytical, simulation, and predicted results are in excellent agreement, which validates the approach in Theorem \ref{theorem_cdf_Z2} and demonstrates the accuracy of the trained DNN. It can also be observed that the outage performance is sensitive to the channel conditions, e.g., line-of-sight (LoS) strength and shadowing severity. Indeed, for a given $\bar{\gamma}$, the OP decreases significantly from the case of weak LoS and strong shadowing, i.e., $m_\mathrm{c} = 1.5, \alpha_\mathrm{c}=3$, to the case of strong LoS and weak shadowing, i.e., $m_\mathrm{c} = 2.5, \alpha_\mathrm{c}=2$.

\begin{figure}[!t]
\centering
\includegraphics[width=.9\linewidth]{fig5.pdf}
\caption{DNN-based predicted OP against $\bar{\gamma}$ under different $N$, with $R_\mathrm{th} = 5, m_\mathrm{c}=1.5, \alpha_\mathrm{c}=3$.}
\label{fig_Pout_N_change}
\end{figure}

In Fig.~\ref{fig_Pout_N_change}, we point out that as the number of reflecting elements increases, the power budget required to achieve a given OP decreases, thus showing the EE of the aerial-RIS. Indeed, for a given $\mathrm{P_{out}} = 10^{-2}$, the required $\bar{\gamma}$ drops by $8.2$ dB as $N$ increases from $15$ to $30$. Again, the simulation and predicted results agree well. It is noted that even if two similar feature vectors are fed to the DNN, the corresponding outputs are likely to be different because the channel gains (CSI) are RVs generated based on the feature vectors.

\begin{figure}[!t]
	\centering
	\includegraphics[width=.9\linewidth]{fig6.pdf}
	\caption{Performance comparison between aerial-RIS, HD-VG-AF, HD-DF, FD-AF, and FD-DF with $N = 15$, $R_\mathrm{th} = 1$, $m_\mathrm{c} =1.5$, $\alpha_\mathrm{c} = 3$.}
	\label{fig_RIS_AF_DF}
\end{figure}

In Fig.~\ref{fig_RIS_AF_DF}, we compare the performance of the aerial-RIS with that of conventional time-domain half-duplex (HD)-DF, HD-variable-gain (VG)-AF, full-duplex (FD)-AF, and FD-DF relaying. For each conventional scheme, we assume that $N$ cooperative relays use maximum-ratio combining (MRC) to receive and maximum-ratio transmission (MRT) to forward the source's signal. Here, it is observed that the aerial-RIS-aided communication system outperforms the aforementioned relaying systems by far.


\section{Conclusions}


In this paper, we proposed an aerial-RIS-aided wireless communication system, which takes into account practical fading channels tailored for air-to-ground (A2G) and ground-to-air (G2A) communications.
We proposed a new mathematical framework to derive a tight approximate closed-form expression for the OP. Further, considering a mobile environment, we developed a DNN model to deal with the 3D spatial movement of the aerial-RIS and to predict the OP online. It was shown that (\textit{i}) the OP is significantly sensitive to the channel conditions, (\textit{ii}) the aerial-RIS provides EE; specifically, the more reflecting elements installed, the lower the power budget needed to achieve a given OP, and (\textit{iii}) the aerial-RIS-aided system outperforms conventional dual-hop relaying systems.


\appendices
\section{Proof of Lemma \ref{lemma_matching_Gamma}} \label{apx_proof_lemma_GG_LL}
Since the reflecting elements are installed close to each other, e.g., separated by less than half a wavelength, the individual channels in $\vec{h}_{\mathrm{S} \mathrm{R}}$ and $\vec{h}_{\mathrm{R} \mathrm{D}}$ are assumed to be independent and identically distributed (i.i.d.). Thus, $\lambda_{\mathrm{S} r} = \lambda_\mathrm{s}$, $\lambda_{r \mathrm{D}} = \lambda_\mathrm{d}$, $\forall r$, where $\lambda \in \{m,\Omega, \alpha, \beta\}$.
However, $G_{\src r}$ and $G_{r \des}$ are independent but not necessarily identically distributed (i.n.i.d.) Nakagami-$m$ RVs. Knowing that $f_{XY} (z) = \int_0^\infty \frac{1}{x} f_Y \left(\frac{z}{x}\right) f_X (x) d x$, where $X$ and $Y$ are non-negative RVs, we have
\begin{align}
f_{G_r} (z) &= \int_0^\infty (1/x) f_{G_{\src r}} (z/x) f_{G_{r \des}} (x) d x \nonumber \\
&= \frac{4 ( m_\mathrm{s}/ \Omega_\mathrm{s} )^{m_\mathrm{s}} ( m_\mathrm{d} / \Omega_\mathrm{d} )^{m_\mathrm{d}}
 }{\Gamma(m_\mathrm{s}) \Gamma(m_\mathrm{d})} z^{2m_\mathrm{s} - 1} \nonumber \\
 &\quad \times \int_0^\infty \!\!\! x^{2 m_\mathrm{d} - 2 m_\mathrm{s} - 1} e^{-\frac{m_\mathrm{s} z^2}{\Omega_\mathrm{s} x^2} - \frac{m_\mathrm{d} x^2}{\Omega_\mathrm{d}}} d x.
\end{align}
By making use of \cite[(3.478.4)]{Gradshteyn2007}, an exact closed-form expression for the PDF of $G_r$ is attained as
\begin{align} \label{pdf_G_r}
	f_{G_r} (z) \!=\! \frac{4 z^{m_\mathrm{s} + m_\mathrm{d} - 1}}{\Gamma(m_\mathrm{s}) \Gamma(m_\mathrm{d})} \Upsilon_G^{\frac{m_\mathrm{s} +m_\mathrm{d}}{2}} \!\! K_{m_\mathrm{d} - m_\mathrm{s}} \!\! \left( 2z \sqrt{ \Upsilon_G } \right).
\end{align}
From \eqref{pdf_G_r}, by making use of \cite[(6.561.16)]{Gradshteyn2007} and after some mathematical manipulations, the $n$-th moment of $G_r$, $\mathbb{E}[G_r^n]$, can be attained as
\begin{align} \label{eq_mu_G_r_k}
	\mathbb{E}[G_r^n] = \Upsilon_G^{-\frac{n}{2}} \frac{ \Gamma(m_\mathrm{s} + n/2) \Gamma(m_\mathrm{d} + n/2) }{\Gamma(m_\mathrm{s}) \Gamma (m_\mathrm{d})}.
\end{align}
Using the PDF of $G_r$ in \eqref{pdf_G_r} makes the derivation of a closed-form expression for the PDF of $Z$ intractable. To circumvent this problem, by exploiting the statistical characteristics of $G_r$ obtained in \eqref{eq_mu_G_r_k}, we use the \textit{method of moments} (also known as the \textit{moment-matching technique}) to fit the PDF of $G_r$ to a Gamma distribution, as done in Lemma \ref{lemma_matching_Gamma}. Specifically, we match the first- and second-order moments of $G_r$ to those of a Gamma RV $X \sim \mathrm{Gamma}(\nu_X, \zeta_X)$, i.e., $\mathbb{E}[G_r] = \mathbb{E}[X]$ and $\mathbb{E}[G_r^2] = \mathbb{E}[X^2]$.
By solving this system of equations, $\nu_X$ and $\zeta_X$ can be obtained as \cite{Tahir_LWC_2021}
\begin{align} \label{eq_matched_moments}
\nu_X &= \frac{\mathbb{E}[G_r]^2}{\mathbb{E}[G_r^2] - \mathbb{E}[G_r]^2}, \quad \zeta_X = \frac{\mathbb{E}[G_r^2] - \mathbb{E}[G_r]^2}{\mathbb{E}[G_r]}.
\end{align}
Thus, we can rewrite $\zeta_X = \mathbb{E}[G_r] / \nu_X$. Let $m_G \triangleq \nu_X$ and $\Omega_G \triangleq \mathbb{E}[G_r]$. From \eqref{eq_mu_G_r_k}, we have $\mathbb{E}[G_r^2] = \Omega_\mathrm{s} \Omega_\mathrm{d}$, and thus, we attain the PDF of $G_r$ in \eqref{eq_G_r_Gamma}.

We now turn our focus to $\tilde{L}_r$. It is noted that $L_\mathrm{s}$ and $L_\mathrm{d}$ are i.n.i.d. inverse-Gamma RVs.
Due to the facts that $f_X (x) = 2x f_{X^2} (x^2)$ and $f_{\frac{1}{X}} (x) = \frac{1}{x^2} f_X \left(\frac{1}{x}\right)$, for $x>0$, from \eqref{eq_PDF_IG}, the PDF of $1/\sqrt{L_\mathrm{c}}$ can be obtained as
\begin{align}
	f_{\frac{1}{\sqrt{L_\mathrm{c}}}} (z) = \frac{2}{\Gamma(\alpha)} \beta^\alpha z^{2 \alpha - 1} e^{-\beta z^2}, \quad z>0.
\end{align}
Recalling that $\tilde{L}_r = 1/ \sqrt{L_r}$ and $L_r = L_{\src r} L_{r \des}$, after some mathematical manipulations, the PDF of $\tilde{L}_r$ is obtained as
\begin{align}
&f_{\tilde{L}_r} (z) = \int_{0}^{\infty} \frac{1}{x} f_{\frac{1}{\sqrt{L_\mathrm{d}}}} \left(\frac{z}{x}\right) f_{\frac{1}{\sqrt{L_\mathrm{s}}}} (x) d x \nonumber \\
&= \frac{2 \beta_\mathrm{s}^{\alpha_\mathrm{s}} \beta_\mathrm{d}^{\alpha_\mathrm{d}} z^{2 \alpha_\mathrm{d} -1} }{\Gamma(\alpha_\mathrm{s}) \Gamma(\alpha_\mathrm{d})} \!\!\! \int_0^\infty \!\!\!\!\!\! x^{2(\alpha_\mathrm{s} - \alpha_\mathrm{d} - 1)} e^{- \frac{\beta_\mathrm{d} z^2}{x^2} - \beta_\mathrm{s} x^2} d x^2.
\end{align}
Using \cite[(3.478.4)]{Gradshteyn2007} and after some derivation steps, an exact closed-form expression for the PDF of $\tilde{L}_r$ can be obtained as
\begin{align} \label{eq_n_moment_1_sqrtLr}
	f_{\tilde{L}_r} (z) = \frac{4 (\beta_\mathrm{s} \beta_\mathrm{d})^{(\alpha_\mathrm{s} + \alpha_\mathrm{d})/2} }{ \Gamma(\alpha_\mathrm{s}) \Gamma(\alpha_\mathrm{d}) } z^{\alpha_\mathrm{s} + \alpha_\mathrm{d} - 1} \! K_{\alpha_\mathrm{s} - \alpha_\mathrm{d}} \! (2z \sqrt{\beta_\mathrm{s} \beta_\mathrm{d}}).
\end{align}

Using \eqref{eq_n_moment_1_sqrtLr}, the $n$-th moment of $\tilde{L}_r$ can be derived as
\begin{align}
	\mathbb{E} [(\tilde{L}_r)^n] = \frac{ \Gamma(\alpha_\mathrm{s} + n/2) \Gamma(\alpha_\mathrm{d} + n/2) }{\Gamma(\alpha_\mathrm{s}) \Gamma(\alpha_\mathrm{d})} (\beta_\mathrm{s} \beta_\mathrm{d})^{-n/2}.
\end{align}
With similar steps as in \eqref{eq_matched_moments} for the case of $G_r$, letting $\Omega_L \triangleq \mathbb{E} [\tilde{L}_r]$ and $m_L \triangleq \Omega_L^2/[\mathbb{E}[\tilde{L}_r^2] - \Omega_L^2]$, one can obtain the matched PDF of $\tilde{L}_r$ as in \eqref{eq_L_r_Gamma}. This completes the proof of Lemma \ref{lemma_matching_Gamma}.

\bibliographystyle{IEEEtran}
\section{Introduction}
Stochastic differential equations (SDEs; see, \emph{e.g.}, Ref. \cite{DOTO} and references therein) are a class of mathematical models with the widest applicability in modern science,
\begin{eqnarray}
\dot x(t) = F(x(t)) + (2\Theta)^{1/2}e_{a}(x(t))\xi^a(t),
\end{eqnarray}
where $x\in X$ is a point in the phase space $X$, which is a topological manifold, $F\in TX$ is a vector field from the tangent space of $X$, $TX$, called the flow vector field, $\Theta$ is the intensity, or temperature, of the Gaussian white noise $\xi$ with the standard expectation values $\langle \xi^a(t) \rangle =0, \langle \xi^a(t)\xi^b(t') \rangle = \delta(t-t')\delta^{ab}$, and $e_a\in TX$ is a set of vector fields defining the coupling of the noise to the system.

In physics, for example, this equation describes everything in nature above the scale of quantum degeneracy/coherence. One of the main statistical characteristics of such systems is the probability density of the solution
of this equation. The probability density can be studied via the corresponding Fokker-Planck equation. Further studies can be performed with the supersymmetric theory of stochastics (STS) \cite{Igor1,Igor1_1,Igor2}, which is one of the latest advancements in the theory of SDEs. Among a few other important findings, STS seems to explain 1/f noise \cite{DOTO}, power-law statistics of various avalanche-type processes \cite{Aschwanden}, and other realizations of the mysterious and ubiquitous dynamical long-range order \cite{Igor1} in nature.

As compared to classical approaches to SDEs, STS differs in two fundamental ways. First, the Hilbert space of a stochastic model in STS, $\mathcal{H}$, is the entire exterior algebra of the phase space, \emph{i.e.}, the space of differential forms or $k$-forms of all degrees,
\begin{eqnarray}
\psi^{(k)} =
\psi^{(k)}_{i_1...i_k}(x)dx^{i_1}\wedge...\wedge dx^{i_k}\in \Omega^{k},
\mathcal{H} = \bigoplus\nolimits_{k=0}^{D}\Omega^{k},
\end{eqnarray}
where $\psi_{i_1...i_k}$ is an antisymmetric tensor, $\Omega^{k}$ is the space of all $k$-forms, and $D$ is the dimensionality of the phase space of the model. This picture generalizes the classical approach to SDEs, where the Hilbert space is thought of as the space of only the top differential forms, which have the meaning of total probability distributions in a coordinate-free setting.

The second distinct feature of STS is that the finite-time stochastic evolution operator (SEO) has a clear mathematical meaning. Specifically,
\begin{eqnarray}
\psi(t) = \hat{\mathcal{M}}_{tt'}\psi(t'), \hat{\mathcal{M}}_{tt'} = \langle M^*_{t't}\rangle,
\end{eqnarray}
where $M^*_{t't}$ is the pullback, or action, induced by the SDE-defined noise-configuration-dependent diffeomorphism $M_{t't}$, so that a noise-configuration-dependent solution of the SDE with initial condition $x(t)|_{t=t_0}=x_0$ can be given as $x(t) = M_{tt_0}(x_0)$, and the brackets denote stochastic averaging over all configurations of the noise.

The finite-time SEO can be shown \cite{Igor1} to be
\begin{eqnarray}
\hat{\mathcal{M}}_{tt'} = e^{-(t-t')\hat H},\label{F_T_SEO}
\end{eqnarray}
where the (infinitesimal) SEO is given as
\begin{eqnarray}
\hat H = \hat {\mathcal L}_{F} - \Theta \hat {\mathcal L}_{e_a}\hat {\mathcal L}_{e_a},\label{Lieseo}
\end{eqnarray}
where $\hat {\mathcal L}$ is the Lie, or physical, derivative along the corresponding vector field.

The presence of the topological supersymmetry in Eq.~(\ref{F_T_SEO}) entails the following properties of the eigensystem of the SEO.
There are two types of eigenstates. The first type is the supersymmetric singlets, which are non-trivial in the de Rham cohomology. Each de Rham cohomology class of $X$ must provide one supersymmetric singlet \cite{Ours}. All supersymmetric eigenstates have exactly zero eigenvalue.
The second type of eigenstates is the non-supersymmetric doublets.
There are no restrictions on the eigenvalues of the non-supersymmetric eigenstates other than that they must be either real or come in complex conjugate pairs, known in dynamical systems theory as Ruelle-Pollicott resonances, and that the real parts of the eigenvalues must be bounded from below when the diffusion part of the SEO is elliptic. Most of the eigenstates of the SEO are non-supersymmetric. In particular, all eigenstates with non-zero eigenvalues are non-supersymmetric.

The ground state(s) is the state(s) with the lowest real part of its eigenvalue. As is seen from the exponential temporal evolution in Eq.~(\ref{F_T_SEO}), the ground state grows (and oscillates, if its eigenvalue is complex) faster than any other eigenstate. When the ground state is a non-supersymmetric eigenstate, the topological supersymmetry is said to be spontaneously broken. The topological supersymmetry breakdown can be identified with the stochastic generalization of the concept of deterministic chaos \cite{Igor1,Igor2}, and this identification is an important finding for applications.

Whether the topological supersymmetry is spontaneously broken or not can be unambiguously determined from the eigensystem of the SEO.
Therefore, the numerical investigation of the SEO's eigensystem is an important method. Because different parameters give different eigensystems,
we need to solve hundreds of eigenvalue problems.

The eigenvalue problem for sparse matrices is an important problem with applications in many branches of modern science. This problem has a long history, and several powerful methods for the numerical study of sparse matrices have been proposed and implemented by now. One of the most successful implementations is ARPACK \cite{Sorensen2}, based on the Implicitly Restarted Arnoldi Method \cite{Sorensen}. ARPACK is a collection of Fortran subroutines designed to compute a few eigenvalues and corresponding eigenvectors of a sparse matrix, and it is the foundation of the commonly used MATLAB command ``eigs''.

In many applications, one has to compute eigenvalues with the smallest real part, \emph{i.e.}, the leftmost in the complex plane. On the other hand, the structure of the Arnoldi Method targets eigenvalues with the largest magnitude. Therefore, for ``low-lying'' eigenvalues, the ``eigs'' function may encounter convergence problems, even when using a large trial subspace.

The problem of low-lying eigenvalues is better addressed with the inverse power method, which transforms it into a largest-eigenvalue problem. Yet another generalization is the Shift-Invert Arnoldi method, which is the original Arnoldi method applied to the shift-inverted matrix $B = (A - \sigma I)^{-1}$, so that it can find eigenvalues near a given target $\sigma$. There exist other variations of the parental Arnoldi method, including the Residual Arnoldi and the Shift-Invert Residual Arnoldi methods \cite{Lee}.

One of the problems of the Shift-Invert Arnoldi method is that the inverse matrix $(A - \sigma I)^{-1}$ cannot be easily computed for large matrices.
This inversion is practically achieved by iteratively solving the corresponding system of linear equations (CSLE). This may already be a difficult problem for large matrices.

Yet another approach is the use of inexact methods \cite{Freitag, Simoncini}. The main idea of these methods is to
compute an approximate solution of the inner equation. The convergence analysis of the
inexact methods has already been widely studied \cite{Notay, Xue}. Recently, a general convergence theory of the Shift-Invert Residual Arnoldi (SIRA) method has been established \cite{Jia}.


In order to ensure good convergence, these methods need to expand the dimensionality of the working subspace continuously from iteration to iteration. For large problems, the computation and storage costs may be very high. As it turns out, in our application, we need to solve hundreds of large matrices under limited time and resource constraints. In order to achieve this goal, we propose what we call the inexact inverse power method (IIPM). The advantage is that one only needs to store two vectors (a two-dimensional subspace) during all iterations, which considerably reduces the required computational resources. At the same time, the method retains a high convergence rate. The existing convergence analyses are based on prior knowledge of the eigenvalue information: in theory, the convergence of this method can be guaranteed, but these analyses lack practical guidance for real computations.
From the viewpoint of ensuring convergence, we analyze the convergence of the new algorithm and propose a convergence criterion for the inner iteration in practical computations.

The paper is organized as follows. In Section \ref{s3}, we describe the proposed IIPM and analyze its convergence. In Section \ref{s5}, we exemplify the advantages of the IIPM by applying it to the problem of the diagonalization of the stochastic evolution operators of the ABC and Kuramoto models. Section \ref{conclusion} concludes this paper.



\section{The Inexact Inverse Power Method}
\label{s3}

In this section we discuss the theory of the IIPM for large-scale matrix diagonalization problems. As mentioned in the Introduction, this method is a derivative of the parental inverse power method (IPM). Therefore, we begin the discussion with the introduction of the IPM.

\begin{alg} {\bf Inverse power method}\label{Ipower}
\begin{description}
 \item[1:] Given a starting vector $x_1$ and a convergence criterion $tol$ \\[-5mm]
 \item[2:] for $i = 1,2,\ldots,n$ \\[-5mm]
 \item[3:]\hspace{0.5cm} $y = A^{-1}x_i$ \\[-5mm]
 \item[4:]\hspace{0.5cm} $x_{i+1} = y/\|y\|$ \\[-5mm]
 \item[5:]\hspace{0.5cm} $\lambda = x_{i+1}^{T} A x_{i+1}$ \\[-5mm]
 \item[6:]\hspace{0.5cm} $t = Ax_{i+1}- \lambda x_{i+1}$ \\[-5mm]
 \item[7:]\hspace{0.5cm} if $ \|t\|\leq tol $, break \\[-5mm]
 \item[8:] end for \\[-5mm]
\end{description}
\end{alg}

We can apply this process to the matrix $(A-\sigma I)^{-1}$ instead of $A$, where $\sigma$ is called a shift. This allows us to compute the eigenvalue closest to $\sigma$. When $\sigma$ is very close to the desired eigenvalue, we obtain a faster convergence rate.
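For reference, Algorithm~\ref{Ipower} admits a direct implementation. Below is a minimal Python/SciPy sketch for a real symmetric sparse matrix (our own illustration; the experiments in this paper used MATLAB), with the inner solve performed via a sparse LU factorization:
\begin{verbatim}
import numpy as np
from scipy.sparse.linalg import splu

def inverse_power_method(A, x, tol=1e-10, maxit=100):
    # Algorithm 1: inverse power method via sparse LU factorization of A
    lu = splu(A.tocsc())
    lam = 0.0
    for _ in range(maxit):
        y = lu.solve(x)
        x = y / np.linalg.norm(y)
        lam = x @ (A @ x)            # Rayleigh quotient
        t = A @ x - lam * x          # residual
        if np.linalg.norm(t) <= tol:
            break
    return lam, x
\end{verbatim}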
For a large-scale matrix, neither $y = A^{-1}x_i$ nor $y =(A-\sigma I)^{-1}x_i$ can be computed directly. It is difficult to obtain an accurate solution even by solving the corresponding linear system,
\begin{equation}
(A-\sigma I)y =x_i.\label{equ2}
\end{equation}

According to the idea of inexact methods, we can use an iterative approach to compute an approximate solution $\tilde{y}$ of Eq.(\ref{equ2}).
Namely, we can use ${x}_{i+1} = \tilde{y}/\|\tilde{y}\|$ as the updated approximate eigenvector. This alteration of the IPM leads to the IIPM.

\begin{alg} {\bf Inexact inverse power method}\label{vpower}
\begin{description}
 \item[1:] Given a target $\sigma$, a starting vector $x_1$, and convergence criteria $\varepsilon_1,\varepsilon_2$ \\[-5mm]
 \item[2:] for $i = 1,2,\ldots,n$ \\[-5mm]
 \item[3:]\hspace{0.5cm} Compute an approximate solution $\tilde{y}$ of $(A-\sigma I )y = x_i$ with
 $$\| r \|=\| x_i - (A-\sigma I )\tilde{y} \| < \varepsilon_1 $$
 \item[4:]\hspace{0.5cm} Compute the eigenpair $(\lambda,x_{i+1})$ from $\mathrm{span}\{x_i, \tilde{y}\}$ \\[-5mm]
 \item[5:]\hspace{0.5cm} $t = Ax_{i+1}- \lambda x_{i+1}$ \\[-5mm]
 \item[6:]\hspace{0.5cm} if $\|t\| \leq \varepsilon_2$, break \\[-5mm]
 \item[7:]\hspace{0.5cm} end for \\[-5mm]
 \end{description}
\end{alg}

When we use the inexact solution $\tilde{y}$ instead of the exact solution $y$, the convergence property of
${{x}}_{i+1}$ is the most important issue. This means that we need a quantitative standard for $\varepsilon$.
In this kind of inexact method, convergence is obtained by analyzing the ability of $\tilde{y}$ to mimic $y$, and it is guaranteed by expanding the subspace.
For our method, we write the approximate solution ${\tilde{y}}$ of Eq.(\ref{equ2}) as the exact solution of the following perturbed equation
\begin{equation}
(A-\sigma I + \delta A )\tilde{y} =x_i,\label{equ3}
\end{equation}
where $\delta A$ is the perturbation matrix of $(A-\sigma I)$. The residual of Eq.(\ref{equ2}) can be written as
$ r = \delta A \tilde{y} $.

\begin{lem} \label{lem1} The approximate solution ${\tilde{y}}$ and the exact solution $y$ of Eq.(\ref{equ2}) satisfy the relationship
\begin{equation}
 y - \tilde{y} \approx (A-\sigma I)^{-1} \delta A y.
\end{equation}

{\bf Proof:} For a matrix $X$ and the corresponding identity matrix $I$, if $\|X\| < 1$, then $I-X$ is invertible \cite{Demmel} and
\begin{equation}
(I-X)^{-1} = \sum_{i=0}^\infty X^i .\label{inverse}
\end{equation}

Now, we can write
\begin{equation}
\begin{array}{ccl}
\tilde{y}& = & (A-\sigma I + \delta A )^{-1} x_i \\
 & = & (I + (A-\sigma I)^{-1} \delta A )^{-1} (A-\sigma I)^{-1} x_i.
\end{array}
\end{equation}
We apply formula (\ref{inverse}) to $(I + (A-\sigma I)^{-1} \delta A )^{-1}$ and ignore higher-order terms to obtain
\begin{equation}
\begin{array}{ccl}
\tilde{y}& \approx & [I - (A-\sigma I)^{-1} \delta A ] (A-\sigma I)^{-1} x_i \\
 & = & [I - (A-\sigma I)^{-1} \delta A ]y \\
 & = & y - (A-\sigma I)^{-1} \delta A y.
\end{array}
\end{equation}
\end{lem}
This establishes the stated relationship between $\tilde{y}$ and $y$.

Suppose $(\lambda ,x)$ is a simple desired eigenpair of $A$, so that $(\frac{1}{\lambda - \sigma},x)$
is a simple eigenpair of $(A-\sigma I)^{-1}$. In Algorithm \ref{Ipower}, both $y$ and ${x}_{i}$ are approximate eigenvectors, but $y$ is a better approximate eigenvector than ${x}_{i}$. This follows from the convergence properties of the power method.
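A compact Python/SciPy sketch of Algorithm~\ref{vpower} for a symmetric matrix is given below (our own illustration, assuming SciPy $\geq$ 1.12 for the \texttt{rtol} keyword of GMRES). The inner equation is solved inexactly with GMRES, and the new eigenpair is extracted from $\mathrm{span}\{x_i,\tilde{y}\}$ by Rayleigh-Ritz projection:
\begin{verbatim}
import numpy as np
from scipy.sparse import identity
from scipy.sparse.linalg import gmres
from scipy.linalg import eigh

def inexact_inverse_power(A, sigma, x, eps1=1e-2, eps2=1e-8, maxit=50):
    n = A.shape[0]
    M = A - sigma * identity(n, format='csr')
    lam = sigma
    for _ in range(maxit):
        y, _ = gmres(M, x, rtol=eps1)   # inexact inner solve
        # Rayleigh-Ritz on span{x, y} (assumes A symmetric)
        Q, _ = np.linalg.qr(np.column_stack([x, y]))
        T = Q.T @ (A @ Q)
        vals, vecs = eigh(T)
        j = np.argmin(np.abs(vals - sigma))  # Ritz value nearest target
        lam, x = vals[j], Q @ vecs[:, j]
        if np.linalg.norm(A @ x - lam * x) <= eps2:
            break
    return lam, x
\end{verbatim}
Note that only the two vectors spanning the search space are stored across iterations, which is the storage advantage discussed above.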
In Algorithm~\ref{vpower}, it is hard to ensure that $\tilde{y}$ is better than ${x}_{i}$ based only on the relationship between $\tilde{y}$ and $y$.
To obtain the convergence property of Algorithm \ref{vpower}, we need an $\varepsilon$ that ensures
\begin{equation}
\|{x}_{i+1}-x\|\leq \|{x}_{i}-x\|, \label{relation}
\end{equation}
where ${x}_{i+1} = \tilde{y}/ \|\tilde{y}\|.$

Since $x,{x}_{i}, {x}_{i+1}$ are unit vectors, their relationship is better expressed in terms of angles.
The relationship (\ref{relation}) is equivalent to
\begin{equation}
\mbox{sin} \angle ({x}_{i+1},x) \leq \mbox{sin} \angle({x}_{i},x). \label{relation2}
\end{equation}

For the convenience of analysis, we set
$$B = (A-\sigma I)^{-1}.$$
The eigenvalues of $B$ satisfy $\mu_1 \gg \mu_2\geq \mu_3\geq\cdots \geq \mu_n$, and $(\mu_1,x)$ is the desired eigenpair of $B$.
Let $(x,X_{\bot})$ be a unitary matrix, where $\mbox{span}\{ X_{\bot}\}$ is the orthogonal complement of ${x}$.
Then $x_i$, $y$, and $\delta A y$ can be expressed as
\begin{equation}
\begin{array}{ccl}
x_i & = & \alpha x + \beta z,\\
y & = & \mu_1 \alpha x + \beta Bz,\\
\delta A y& = & \tilde{\alpha} x + \tilde{\beta} \tilde{z},
\end{array}\label{relation1}
\end{equation}
where $z,\tilde{z} \in \mbox{span}\{ X_{\bot}\}$ and $\|\delta A \|\leq \varepsilon$.


By Lemma \ref{lem1}, $\tilde{y}$ can be written as
\begin{equation}
\tilde{y} =\mu_1 \alpha x + \beta B z +
\mu_1 \tilde{\alpha} x + \tilde{ \beta} B \tilde{z}.
\end{equation}

For the convergence of Algorithm \ref{vpower}, we have the following result.
\begin{theorem} Suppose $B$ is symmetric, $x$ is the desired eigenvector, and $x_i$ is the current approximation of $x$.
Let $y$ and $\tilde{y}$ be the exact and approximate solutions of Eq.(\ref{equ2}).
If $\varepsilon$ satisfies
 $$\varepsilon < \frac{(1- 2\mu_2/\mu_1)|\alpha\beta|}{(2\mu_2/\mu_1) |\alpha| + |\beta|},$$
then we have
$$\mbox{sin}\angle(\tilde{y},x) < \mbox{sin}\angle(x_i,x).$$
{\bf Proof:}
Eq. (\ref{relation1}) shows that $\mbox{tan}\angle(x_i,x)= \frac{|\beta|}{|\alpha|}$ and
$\mbox{tan}\angle(y,x)= \frac{|\beta|\|Bz\|}{|\alpha \mu_1|}$.
Since $z \in \mbox{span}\{ X_{\bot}\}$, we have $\|Bz\| \leq \mu_2$.
Then we have
$$\mbox{tan}\angle(y,x)\leq \frac{|\beta| \mu_2}{|\alpha| \mu_1} = \frac{\mu_2}{\mu_1}\mbox{tan}\angle(x_i,x).$$
We can write $\tilde{y}$ as $\tilde{y} =\mu_1( \alpha + \tilde{\alpha}) x + (\beta + \tilde{\beta}) B(z + \tilde{z} )$.
From $z,\tilde{z} \in \mbox{span}\{ X_{\bot}\}$, we obtain $\|B(z + \tilde{z}) \|\leq 2 \mu_2$.
From $|\tilde{\beta}|< \varepsilon$ and $|\tilde{\alpha}|< \varepsilon$, we get $|\beta + \tilde{\beta}| \leq |\beta| + \varepsilon$
and $|\alpha + \tilde{\alpha}| \geq |\alpha| - \varepsilon$.
For the angle between $\tilde{y}$ and $x$, we have the inequality
\begin{equation}\label{speed}
\mbox{tan}\angle(\tilde{y},x)\leq \frac{2 \mu_2 |\beta + \tilde{\beta}|}{\mu_1|\alpha + \tilde{\alpha}|}
\leq \frac{2 \mu_2}{\mu_1} \frac{|\beta| + \varepsilon}{|\alpha| - \varepsilon}.
\end{equation}

If
\begin{equation}
\frac{2 \mu_2}{\mu_1} \frac{|\beta| + \varepsilon}{|\alpha| - \varepsilon} < \frac{|\beta|}{|\alpha|},
\label{iequal2}
\end{equation}
then we obtain $\mbox{tan}\angle(\tilde{y},x) < \mbox{tan}\angle(x_i,x)$.

Since (\ref{iequal2}) is equivalent to
\begin{equation}
 \varepsilon < \frac{(1- (2\mu_2/\mu_1))|\beta\alpha|}{(2\mu_2/\mu_1) |\alpha| + |\beta|},
\label{iequal3}
\end{equation}
this finishes the proof.
\end{theorem}

When the angle between $x_i$ and $x$ is not very small, the values of $|\alpha|$ and $|\beta|$ are of the same order, $|\alpha|=O(|\beta|)$.
From $\mu_1 \gg \mu_2$, we see that $(2\mu_2/\mu_1)$ is a small number and $1- (2\mu_2/\mu_1)\approx 1$. So the requirement on
$\varepsilon$ is
$$ \varepsilon < \frac{(1- (2\mu_2/\mu_1))|\beta\alpha|}{(2\mu_2/\mu_1) |\alpha| + |\beta|}= O( |\beta|).$$
When $x_i$ is a good approximation of $x$, the value of $|\beta|$ is small. Suppose we use the shift
 $$\sigma = x_i^T A x_i =\lambda \alpha^2 + (z^T Az) \beta^2. $$

We can then obtain
$$2\mu_2/\mu_1 = 2\mu_2(\lambda - \sigma) = 2\mu_2 [(1-\alpha^2)\lambda - (z^TAz) \beta^2]. $$
Usually, $\lambda$ is not the largest eigenvalue, and it is closer to $\sigma$ than the other eigenvalues of $A$.
Therefore, we can assume that $z^TAz$ and $2\mu_2$ are not large constants. From $\|x_i\|=1$, we get
$1-\alpha^2=\beta^2$. With these results, we can draw the following conclusion from (\ref{iequal3}):
$$ \varepsilon < \frac{|\alpha|}{1+|\alpha\beta|}.$$

This shows that when $x_i$ is a good approximation of $x$,
the convergence of Algorithm \ref{vpower} does not require a very small $\varepsilon$.
From (\ref{speed}), the convergence rate of Algorithm \ref{vpower} can be expressed as
\begin{equation}\label{speed2}
\frac{\mbox{tan}\angle(\tilde{y},x)}{\mbox{tan}\angle(x_i,x)} \leq \frac{2 \mu_2 |\beta + \tilde{\beta}|}{\mu_1|\alpha + \tilde{\alpha}|} \frac{|\alpha|}{|\beta|}
\leq \frac{2 \mu_2}{\mu_1} \frac{|\beta| + \varepsilon}{|\alpha| - \varepsilon} \frac{|\alpha|}{|\beta|}.
\end{equation}
If $\varepsilon=0$, the convergence rate is decided by $\frac{2 \mu_2}{\mu_1}$. When we set $\sigma = x_i^T A x_i$,
we have $\frac{2 \mu_2}{\mu_1}=O(\beta^2)$. This means that Algorithm \ref{vpower} converges cubically,
because with this choice of $\sigma$, one step of Algorithm \ref{vpower} is one step of the Rayleigh quotient iteration.
When $\varepsilon \neq 0$, the convergence rate is slowed down.
However, the convergence is damaged
only when $\frac{|\beta| + \varepsilon}{|\alpha| - \varepsilon}> \frac{1}{\beta^2} $.
If $\frac{|\beta| + \varepsilon}{|\alpha| - \varepsilon}$ is not very large, the convergence rate is still decided by $\frac{2 \mu_2}{\mu_1}$.

Along the standard lines of the inverse power method, the difference between ${\tilde{y}}$ and $y$ is almost parallel to the eigenvector. When the sequence $\{x_i\}$ begins to converge to the eigenvector, the inexact method maintains the convergence trend very well with only moderate accuracy of the inner iteration.

As described above, one has to compute the leftmost eigenvalue. Therefore, we can use an approximate eigenvalue as the target $\sigma$. For instance, we use the MATLAB command ``eigs'' to compute an approximate eigenpair. When the convergence rate slows down, we renew the target $\sigma$. Then, we use the generalized minimal residual method (GMRES) to compute the approximate solution of $(A-\sigma I )y = x_i$. If we now replace $x_i$ by the residual $r$ in $(A-\sigma I )y = x_i$, the IIPM becomes a two-dimensional residual iterative method. For the residual iterative method, it was shown in Ref. \cite{Jia} that the inexact method can mimic the exact method with accuracy $\varepsilon_1=10^{-4}$.

For a non-Hermitian matrix, we can compute the subsequent approximate eigenvector $x_{i+1}$ from the subspace $\mathrm{span}\{x_i, \tilde{y}\}$ to ensure that $x_{i+1}$ is a better approximation than $x_i$. From this point of view, one could as well call this method the modified IIPM.

\section{Numerical results}
\label{s5}

In this section, we first compute some matrices from the Matrix Market
with the exact inverse power method and the modified inexact inverse power method to illustrate the validity of the theoretical analysis of the new method.
Here the exact method refers to the method that computes the exact solution at the third step of Algorithm 2.
Then we solve the practical problems derived from the SEOs of two models to illustrate the
practicability of the new method. The two well-known models are the stochastic ABC model and the stochastic Kuramoto model. The phase space in both cases is a 3-torus, $X=T^3$, and the vector fields defining the noise, $e_a$'s, correspond to additive Gaussian white noise,
\begin{eqnarray}
e_{1} \equiv e_x = (1,0,0)^T, e_{2} \equiv e_{y} = (0,1,0)^T, e_{3} \equiv e_{z} = (0,0,1)^T,
\end{eqnarray}
in the standard global coordinates on the 3-torus.

The flow vector fields of the two models are given, respectively, as
$$
F_{ABC} = (A\sin z + C\cos y)e_x + (B\sin x + A\cos z)e_y + (C\sin y + B\cos x)e_z,
$$
$$
\begin{array}{l}
F_{Kur}=\\
 (\omega_x - K/4(2\sin x + \sin(x+y) + \sin(x+y+z) - \sin y - \sin(y+z) ) )e_x\\
 + (\omega_y - K/4(2\sin y + \sin(x+y) + \sin(y+z) - \sin x - \sin z))e_y \\
 + (\omega_z - K/4(2\sin z + \sin(y+z) + \sin(x+y+z) - \sin y - \sin(x+y) ))e_z.
\end{array}
$$
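The two flow fields above translate directly into code. A simple Python sketch (our own transcription of the formulas, used only for illustration):
\begin{verbatim}
import numpy as np

def F_abc(x, y, z, A=1.0, B=1.0, C=1.0):
    # ABC flow at a point of the 3-torus [-pi, pi)^3
    return np.array([A*np.sin(z) + C*np.cos(y),
                     B*np.sin(x) + A*np.cos(z),
                     C*np.sin(y) + B*np.cos(x)])

def F_kuramoto(x, y, z, w=(1.0, 1.0, 1.0), K=1.0):
    wx, wy, wz = w
    fx = wx - K/4*(2*np.sin(x) + np.sin(x+y) + np.sin(x+y+z)
                   - np.sin(y) - np.sin(y+z))
    fy = wy - K/4*(2*np.sin(y) + np.sin(x+y) + np.sin(y+z)
                   - np.sin(x) - np.sin(z))
    fz = wz - K/4*(2*np.sin(z) + np.sin(y+z) + np.sin(x+y+z)
                   - np.sin(y) - np.sin(x+y))
    return np.array([fx, fy, fz])
\end{verbatim}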
\section{Numerical results}
\label{s5}

In this section, we first apply the exact inverse power method and the modified inexact inverse power method to some matrices from the Matrix Market to illustrate the validity of the theoretical analysis of the new method.
Here the exact method refers to the method that computes the exact solution at the third step of Algorithm 2.
Then we solve the practical problems derived from the two models of the SEO to illustrate the practicality of the new method. The two well-known models are the stochastic ABC model and the stochastic Kuramoto model. The phase space in both cases is a 3-torus, $X = T^3$, and the vector fields defining the noise, the $e_a$'s, correspond to additive Gaussian white noise,
\begin{eqnarray}
e_{1} \equiv e_x = (1,0,0)^T, \quad e_{2} \equiv e_{y} = (0,1,0)^T, \quad e_{3} \equiv e_{z} = (0,0,1)^T,
\end{eqnarray}
in the standard global coordinates on the 3-torus.

The flow vector fields of the two models are given, respectively, as
$$
F_{ABC} = (A\sin z + C\cos y)e_x + (B\sin x + A\cos z)e_y + (C\sin y + B\cos x)e_z,
$$
$$
\begin{array}{l}
F_{Kur}=\\
\ (\omega_x - (K/4)(2\sin x + \sin(x+y) + \sin(x+y+z) - \sin y - \sin(y+z)))e_x\\
 + (\omega_y - (K/4)(2\sin y + \sin(x+y) + \sin(y+z) - \sin x - \sin z))e_y \\
 + (\omega_z - (K/4)(2\sin z + \sin(y+z) + \sin(x+y+z) - \sin y - \sin(x+y)))e_z.
\end{array}
$$

A few remarks about the two models of interest are in order. First, the stochastic ABC model is a toy model for studies of the astrophysical phenomenon of the kinematic dynamo, \emph{i.e.}, the phenomenon of the generation of a magnetic field by an ionized flow of matter. As shown in Ref.~\cite{Igor3}, the stochastic evolution of non-supersymmetric 2-forms of the STS of the stochastic ABC model,
\begin{eqnarray}
\partial_t \psi^{(2)} = -\hat H^{(2)}\psi^{(2)},
\end{eqnarray}
is equivalent to the dynamical equation of the magnetic field, $B$, in the kinematic dynamo theory,
\begin{equation}
\partial_t B = \partial \times (F \times B) + R_m^{-1} \bigtriangleup B,
\end{equation}
where $R_m = \Theta^{-1}$ is the inverse temperature, known in the kinematic dynamo theory as the magnetic Reynolds number, and $\times$ denotes the standard vector product.

As to the Kuramoto model, it can be thought of as a model of coupled phase oscillators. This model also has many interesting applications. In particular, it may serve as a testbed for the studies of the phenomenon of synchronization, which has attracted the interest of scientists working on biological \cite{Tass}, chemical \cite{Kiss}, physical \cite{Wiesenfeld}, and other dynamical systems. An explicitly supersymmetric numerical representation of the SEO on a square lattice of a 3-torus was proposed and described in the Appendix of Ref.~\cite{Ours}.
This is the representation that we use in this paper.

All the experiments are run on an Inspur Yitan NF5288 workstation with an Intel(R) Core(TM) i5-3470S CPU at 2.9GHz and 4GB of RAM, using Matlab R2012b under Linux.

\begin{exm}
We compare the convergence of the inexact and exact methods using a few examples. For comparison, we choose some matrices from the Matrix Market for which the corresponding Eq.~(\ref{equ2}) can be solved directly.
(a) The first matrix is $H = A+A^T$, where $A$ is the matrix "rw5151".
(b) The second matrix is "cry10000".
(c) The third matrix is "bcsstk29".
\end{exm}
For each matrix, we use the "exact" method and the "inexact" method to compute the largest and smallest eigenvalues.
For the "exact" method, we require the solution to satisfy $\epsilon_{\rm mach} = 10^{-16}$. For the "inexact" method,
the accuracy of the inner iteration is $10^{-2}$.
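The comparison protocol can be summarized by the following illustrative sketch, where iipm stands for a hypothetical driver of Algorithm 2 returning the history of residual norms; it is not part of our listings.
\begin{verbatim}
% Illustrative comparison protocol for matrix (a).
H = A + A';                           % symmetrized "rw5151"
res_exact   = iipm(H, 'la', 1e-16);   % inner systems solved "exactly"
res_inexact = iipm(H, 'la', 1e-2);    % relaxed inner tolerance
semilogy(res_exact, 'r-'); hold on; semilogy(res_inexact, 'b-');
\end{verbatim}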
The convergence processes are shown in the following figures, in which "la" denotes the largest eigenvalue and "sa" the smallest eigenvalue.

\begin{figure}[!h]
\begin{centering}
\includegraphics[height=4cm,width=4.8cm]{rw5151.eps}
\includegraphics[height=4cm,width=4.8cm]{cry10000.eps}
\includegraphics[height=4cm,width=4.8cm]{bcsstk29.eps}
\caption{The convergence of the exact (red curves) and inexact (blue curves) methods. The three subfigures show the convergence data, \emph{i.e.}, the norm of the residual, for the largest (solid curves) and smallest (dashed curves) eigenvalues as a function of the iteration number, for the three matrices (a-c). Even though the exact method converges faster than the inexact method in terms of the number of iterations, the inexact method demonstrates relatively good convergence. Furthermore, since each iteration of the inexact method is considerably faster than that of the exact method, the inexact method is actually much faster in terms of the real time of the computations.}
\label{figure1}
\end{centering}
\end{figure}
We can see from the figures that both methods converge quickly and smoothly.
The inexact method mimics the exact method very well and uses no more than three outer iterations.
The results confirm our theory and indicate that we can use the method to solve larger problems.

In our application, several hundred matrices need to be computed.
All of them are too large to be solved using the exact method. Therefore, we use the new method to solve them.
Different matrices require different numbers of iterations and different cputime,
so we report statistics including the maximum, minimum, and median of the cputime and iteration numbers.
\begin{exm}
In this example, we study the ABC model in the region $x, y, z \in [-\pi,\pi]$.
In order to study the influence of the parameters $C$ and $R_m$, we select some points in the plane of $R_m$ and $C$.
For each pair $(R_m, C)$, we discretize the ABC model into a matrix eigenvalue problem and analyze the system through the leftmost eigenvalue of the matrices.
\end{exm}
The points in the $(R_m, C)$ plane are $R_m = [1:1:14]$ and $C = [0.4:0.025:1.125]$.
For each point, the size of the matrix is 192000.
We compute the leftmost eigenvalue of the 420 large scale matrices. The real parts of the eigenvalues are plotted in the following figure,
where the circles represent values less than or equal to zero and the plus signs represent values greater than zero.
\begin{figure}[!h]
\begin{centering}
\includegraphics[height=6cm,width=8cm]{ABC.eps}
\caption{Contour of the ABC model. There are 420 points corresponding to 420 large scale matrices, and the size of each matrix is 192000. We use the new method to compute the leftmost eigenpair of each matrix. The average cputime of the "eigs" stage is 6.80s. The circle and plus signs indicate the different states of the eigenvalues. Based on this result, we can analyze the properties of the ABC model.}
\label{figure6}
\end{centering}
\end{figure}
In the computing process, we first use the Matlab command "eigs" to compute an approximation of the leftmost eigenpair.
The convergence tolerance is $10^{-4}$. The Frobenius norms of the matrices are $O(10^3)$;
therefore, the absolute errors of the approximate eigenvalues are $O(10^{-1})$. This is a modest request, and "eigs" can
compute these results with a suitably large number of Lanczos vectors. But for most of the matrices, it is very hard to get more accurate results. In general, the desired eigenvalues are very close to the origin. So we first transform the leftmost eigenvalue into the eigenvalue of largest modulus by a shift $\beta$.
Then we use "eigs" to compute the largest eigenvalue of the matrix $A - \beta I$. Here the Frobenius norm of $A$ is a good choice of $\beta$.
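A minimal sketch of this preprocessing step reads as follows; the variable names are illustrative, and $n$ denotes the matrix size:
\begin{verbatim}
% Shift the spectrum so that the leftmost eigenvalue of A becomes
% the largest-modulus eigenvalue of A - beta*I (sketch).
beta     = norm(A, 'fro');        % beta bounds the spectral radius
opts.tol = 1e-4;                  % modest accuracy suffices here
[V, D]   = eigs(A - beta * speye(n), 1, 'lm', opts);
lambda0  = D(1,1) + beta;         % approximate leftmost eigenvalue
x0       = V(:, 1);               % target/starting vector for IIPM
\end{verbatim}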
We use the approximate eigenpairs as the target and starting vector of the inexact inverse power method to compute more accurate results.
The convergence tolerance of the outer iteration is $10^{-10}$ and the maximum iteration number of the outer iteration is 25.
We use GMRES to solve the inner linear systems with convergence tolerance $10^{-3}$.
We show the iteration numbers of the outer and inner iterations in Table \ref{T-a},
\begin{table}[!h]
\begin{center}
\caption{Example 1, iteration number.}\label{T-a}
\begin{tabular}{|c|c|c|c|c|c|}\hline
{Iteration number} & Average & Maximum & Minimum & Median & Total\\\hline
Outer & 1.07 & 11 & 1 & 1 & 451 \\\hline
Inner & 1050.3 & 4992 & 275 & 904 & 441126 \\\hline
\end{tabular}
\end{center}
\end{table}
where "Outer" represents the number of inverse power iterations and "Inner" represents the number of Lanczos vectors in GMRES.
"Total" is the sum over all 420 matrices. We also show the maximum, minimum, average and median of the iteration numbers.
We can see from the table that we used 451 inverse power iterations and 441126 Lanczos vectors of GMRES to obtain all the desired eigenvalues.
The average outer and inner iteration numbers over all matrices are 1.07 and 1050.30, respectively.
The median numbers of outer and inner iterations are 1 and 904.
The matrix corresponding to $R_m=10, C=1$ needs the most outer iterations. The matrices corresponding to $R_m=11, C=0.85$ and
$R_m=1, C=1.075$ need the most and the least inner iterations, respectively.

The cputime of the computation for all 420 matrices is 296265.35 seconds, and the details of each part are shown in Table \ref{T-b}.
\begin{table}[!h]
\begin{center}
\caption{Example 1, details of cputime.}
\label{T-b}
\begin{tabular}{|c|c|c|c|c|c|}\hline
{cputime} & Average & Maximum & Minimum & Median & Total \\\hline
 eigs & 6.80 & 20.96 & 2.51 & 6.67 & 2856.27 \\\hline
 GMRES & 697.14 & 13215.79 & 61.78 & 529.22 & 292796.97 \\\hline
 Entire & 705.39 & 13223.85 & 66.19 & 537.62 & 296265.35 \\\hline
\end{tabular}
\end{center}
\end{table}

The time for computing the approximate eigenpairs is 2856.27 seconds.
For a single matrix, the maximum is 20.96s, the minimum is 2.51s, the average is 6.80s, and the median is 6.67s.
We use a total of 292796.97 seconds to solve all the inner equations, and the average for each matrix is 697.14 seconds.
The total time to compute the eigenvalue of each matrix is 705.39 seconds on average.

These results indicate that the most time-consuming part is the inner iteration. Thus, reducing the number of outer iterations is very important for improving the computational efficiency.

\begin{exm}
In this example, we study the Kuramoto model in the region $x, y, z \in [-\pi,\pi]$.
We analyze the influence of the parameters $K$ and $D$. We set some points in the plane of $(K, D)$.
For each pair $(K, D)$, we use 35 lattice sites in each of the three directions to discretize the model into a matrix eigenvalue problem. The size of the corresponding matrix $A$ is 128625.
We can analyze the properties of the system by computing the eigenvalues of $A$.
\end{exm}
We set the values of $K$ and $D$ to $D = [0.0015:0.0027:0.042]$ and $K = [0.12:0.02:0.7]$.
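The sweep over the $(K, D)$ grid can be organized as in the following illustrative sketch (16 values of $D$ times 30 values of $K$ give the 480 matrices); the assembly of the SEO matrix is omitted:
\begin{verbatim}
% Illustrative sweep over the (K, D) grid of Example 2.
Ds = 0.0015 : 0.0027 : 0.042;     % 16 values of D
Ks = 0.12   : 0.02   : 0.7;       % 30 values of K
for D = Ds
    for K = Ks
        % assemble the SEO matrix for (K, D), then compute its
        % leftmost eigenpair with the inexact inverse power method
    end
end
\end{verbatim}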
From the real parts of the leftmost eigenvalues of the 480 large scale matrices, we obtain the following figure,
where the circles represent values less than or equal to zero and the plus signs represent values greater than zero.
\begin{figure}[!h]
\begin{centering}
\includegraphics[height=6cm,width=8cm]{krumoto.eps}
\caption{Contour of the Kuramoto model.
There are 480 points corresponding to 480 large scale matrices, and the size of each matrix is 128625. We use the new method to compute the leftmost eigenpair of each matrix. The average cputime of the "eigs" stage is 130.6s.}
\label{figure2}
\end{centering}
\end{figure}
We first use "eigs" to compute the approximate eigenpairs $(\tilde{\lambda},\tilde{x})$ satisfying
$\frac{\|A\tilde{x} - \tilde{\lambda}\tilde{x}\|}{\|A\|} \leq 10^{-5}$. The order of $\|A\|$ for all matrices is $O(10^5)$, so the approximate eigenvalues have hardly any correct digits in the absolute error. Even so, this is a difficult task for "eigs".
We use the approximate eigenpairs as the target and starting vector of the inexact inverse power method to compute more accurate results.
The convergence tolerance of the outer iteration is $10^{-11}$ and the maximum iteration number of the outer iteration is 25.
We use GMRES to solve the inner linear systems with convergence tolerance $10^{-4}$.
We show the iteration numbers of the outer and inner iterations in the following table,
\begin{table}[h]
\begin{center}
\caption{Example 2, iteration number.}
\label{1-a}
\begin{tabular}{|c|c|c|c|c|c|}\hline
{Iteration number} & Average & Maximum & Minimum & Median & Total\\\hline
Outer & 7.75 & 47 & 1 & 5 & 3953\\\hline
Inner & 1283 & 17523 & 5 & 288 & 654330\\\hline
\end{tabular}
\end{center}
\end{table}
where "Outer" is the outer iteration number and "Inner" is the number of Lanczos vectors of GMRES.

The cputime for all 480 matrices is 220942.50 seconds, and the details of each part are shown in the following table.
\begin{table}[!h]
\begin{center}
\caption{Example 2, details of cputime.}
\label{1-b}
\begin{tabular}{|c|c|c|c|c|c|}\hline
{cputime} & Average & Maximum & Minimum & Median & Total \\\hline
 eigs & 130.60 & 2175.40 & 7.84 & 42.88 & 62687.04\\\hline
GMRES & 323.67 & 13343.00 & 0.49 & 33.03 & 155360.50 \\\hline
Entire & 460.30 & 13377.00 & 37.96 & 202.59 & 220942.50 \\\hline
\end{tabular}
\end{center}
\end{table}
The time for computing the approximate eigenpairs is 62687.04 seconds.
For a single matrix, the maximum is 2175.40 seconds, the minimum is 7.84 seconds, the average is 130.60 seconds, and the median is 42.88 seconds.
We use a total of 155360.50 seconds to solve the inner equations, and the average for each matrix is 323.67 seconds.
The total time to compute the eigenvalue of each matrix is 460.30 seconds on average.

The results show that this model is more difficult than the previous one. It requires more outer iterations, and the inner iteration is still the most time-consuming part.
The large difference between the median and the average of the inner cputime indicates that the inner cputime differs considerably between matrices.

\section{Conclusion}
\label{conclusion}

In this paper, we proposed what we call the inexact inverse power method (IIPM) for the numerical diagonalization of sparse matrices. This method allows one to save considerable computational resources compared with its well-established parent, the inverse power method. We applied the IIPM to the problem of finding the ground state of the stochastic evolution operators of the stochastic ABC and Kuramoto models, and our results demonstrate that the IIPM provides solutions in acceptable computational time in situations where the IPM would fail using only the resources of a typical desktop computer.

\bibliographystyle{model1-num-names}

\section{Introduction}
Transformers \cite{NIPS2017_3f5ee243} have become the state-of-the-art architecture for natural language processing (NLP) tasks \cite{devlin-etal-2019-bert,T5_raffel,NEURIPS2020_1457c0d6}. With their success, the NLP community has experienced an urge to understand the decision process behind the model predictions \cite{jain-wallace-2019-attention,serrano-smith-2019-attention}.

In Neural Machine Translation (NMT), attempts to interpret Transformer-based predictions have mainly focused on analyzing the attention mechanism \cite{raganato-tiedemann-2018-analysis, voita-etal-2018-context}. A large number of works in this line have investigated the capability of the cross-attention to perform source-target alignment \cite{kobayashi-etal-2020-attention,Zenkel_2019,chen-etal-2020-accurate}, comparing it with human annotations. Gradient-based \cite{ding-etal-2019-saliency} and occlusion-based methods \cite{li-etal-2019-word} have also been evaluated against human word alignments. The latter generate input attributions by measuring the change in the predicted probability after deleting specific tokens; the former compute gradients with respect to the input token embeddings to measure how much a change in the input changes the output. However, there is a tension between finding a faithful explanation and observing human-like alignments, since one does not imply the other \cite{ferrando-costa-jussa-2021-attention-weights}.

\begin{figure}[!t]
\begin{center}
\includegraphics[width=0.5\textwidth]{images/alti_120.png}
\caption{ALTI+ results for a German-English translation example. We obtain source sentence and target prefix (columns) interpretations for every predicted token (row).}
\label{fig:rollout_l6}
\end{center}
\end{figure}

The decoding process of NMT systems consists of generating tokens in the target vocabulary based on the information provided by the source sequence and the previously generated tokens (target prefix). However, most of the work on the interpretability of NMT models analyses only source tokens. Recently, \citet{voita-etal-2021-analyzing} proposed to use Layer-wise Relevance Propagation (LRP) \cite{LRP_bach} to analyze the source and target contributions to the model prediction. Nonetheless, they apply their method on average over a dataset, not to obtain input attributions for a single prediction.
Gradient-based methods have also been extended to the target prefix \cite{ferrando-costa-jussa-2021-attention-weights}, although they do not quantify the relative contribution of source and target inputs.

Concurrently, encoder-based Transformers, such as BERT \cite{devlin-etal-2019-bert} and RoBERTa \cite{DBLP:journals/corr/abs-1907-11692}, have been analysed with attention rollout \cite{abnar-zuidema-2020-quantifying}, which models the information flow in the model as a Directed Acyclic Graph, where the nodes are token representations and the edges are attention weights. Recently, \citet{ferrando2022measuring} have presented ALTI (Aggregation of Layer-wise Token-to-token Interactions), which applies the attention rollout method by substituting attention weights with refined token-to-token interactions. In this work, we present the first application of a rollout-based method to encoder-decoder Transformers. Our key contributions are:
\begin{itemize}
\item We propose a method that measures the contribution of each input token (source and target prefix) to the encoder-decoder Transformer predictions;
\item We show how contextual information is mixed across the encoder of NMT models, with the model keeping up to 47\% of the token identity;
\item We evaluate the role of the residual connections in the cross-attention, and show that attention to uninformative source tokens (EOS and the final punctuation mark) is used to let information flow from the target prefix;
\item We analyze the role of both input contexts in low and high resource scenarios, and show the model behaviour under hallucinations.
\end{itemize}

\section{Background}

In this section, we provide the background needed to understand our proposed method by briefly explaining the encoder-decoder Transformer-based model in the context of NMT \cite{NIPS2017_3f5ee243} and the Aggregation of Layer-wise Token-to-token Interactions (ALTI) method \cite{ferrando2022measuring}.

\subsection{Encoder-Decoder Transformer}
Given a source sequence of tokens $\mathbf{x} = (x_1, \ldots, x_{J})$ and a target sequence $\mathbf{y} = (y_1, \ldots, y_{T})$, an NMT system models the conditional probability
\begin{equation}
P(\mathbf{y}|\mathbf{x}) = \prod_{t=1}^{T} P(y_t|\mathbf{y}_{<t}, \mathbf{x}),
\end{equation}
with $\texttt{</s>}$ used as a special token to mark the beginning and end of sentence. The Transformer is composed of a stack of encoder and decoder layers (\cref{fig:enc_dec}). The encoder generates a contextualized sequence of representations $\mathbf{e} = (\vec{e}_1, \ldots, \vec{e}_J)$ of the source sentence. The decoder, at each time step $t$, uses both the encoder outputs ($\mathbf{e}$) and the target prefix ($\mathbf{y}_{<t}$) to predict the next token $y_t$.
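As a concrete instance (our illustration), assuming the target prefix is initialized with the $\texttt{</s>}$ token, the factorization above unrolls for a two-token target sentence as
$$
P(\mathbf{y}|\mathbf{x}) = P(y_1 \,|\, \texttt{</s>}, \mathbf{x}) \, P(y_2 \,|\, \texttt{</s>}\, y_1, \mathbf{x}) \, P(\texttt{</s>} \,|\, \texttt{</s>}\, y_1\, y_2, \mathbf{x}),
$$
so that each factor corresponds to one decoding step conditioned on the source sentence and the current target prefix.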