diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzziliw" "b/data_all_eng_slimpj/shuffled/split2/finalzziliw" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzziliw" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\\label{Sec:Intro}\n\nOne of the revolutions in twentieth century mathematics was the discovery by Church \\cite{Church} and Turing \\cite{Turing} that there are fundamental limits to our understanding of various mathematical objects. An important example of this phenomenon was the theorem of Adyan \\cite{Adyan} that one cannot decide whether a given finitely presented group is the trivial group. In some sense, this is a very negative result, because it suggests that there will never be a full and satisfactory theory of groups. The same is true of manifolds in dimensions 4 and above, by work of Markov \\cite{Markov}. However, one of the main themes of low-dimensional topology is that compact 3-manifolds are tractable objects. In particular, they can be classified, since the homeomorphism problem for them is solvable. So the sort of wildness that one encounters in group theory and higher-dimensional manifold theory is not present in dimension 3. In fact, 3-manifolds are well-behaved objects, in the sense that many algorithmic questions about them are solvable and many optimistic conjectures about them have been shown to be true.\n\nHowever, our understanding of 3-manifolds is far from complete. The status of the homeomorphism problem is a good example of this. Although we can reliably decide whether two compact 3-manifolds are homeomorphic \\cite{KuperbergAlgorithmic, ScottShort}, the known algorithms for achieving this might take a ridiculously long time. If the 3-manifolds are presented to us by two triangulations, then the best known running time for deciding whether they are homeomorphic is, as a function of the number of tetrahedra in each triangulation, a tower of exponentials \\cite{KuperbergAlgorithmic}. This highlights a general rule: although 3-manifolds are tractable objects, they are only just so. It is, in general, not at all straightforward to probe their properties and typically quite sophisticated tools are required. But there is an emerging set of techniques that do lead to more efficient algorithms. For example, the problem of recognising the unknot is now known \\cite{HassLagariasPippenger, LackenbyEfficientCertification} to be in the complexity classes NP and co-NP. Thus, there are ways of certifying in polynomial time whether a knot diagram represents the unknot or whether it represents a non-trivial knot. But whether this problem is solvable using a deterministic polynomial-time algorithm remains unknown.\n\nMy goal in this article is to present some of the known algorithms in 3-manifold theory. I will highlight their apparent limitations, but I will also present some of the new techniques which lead to improvements in their efficiency. My focus is on the theoretical aspects of algorithms about 3-manifolds, rather than their practical implementation. However, it would be remiss of me not to mention here the various important programs in the field, including Snappea \\cite{Weeks, CullerDunfield} and Regina \\cite{BurtonRegina}.\n\nNeedless to say, this survey is far from complete. I apologise to any researchers whose work has been omitted. Inevitably, there is space only to give sketches of proofs, rather than complete arguments. The original papers are usually the best places to look up these details. 
However, Matveev's book \cite{Matveev} is also an excellent resource, particularly for the material on normal surfaces in Sections \ref{Sec:NormalSurfaces}, \ref{Sec:MatchingFundamental} and \ref{Sec:Hierarchies}. In addition, there are some other excellent surveys highlighting various aspects of algorithmic 3-manifold theory, by Hass \cite{HassSurvey}, Dynnikov \cite{DynnikovSurvey} and Burton \cite{BurtonSurvey}.

I would like to thank the referee and Mehdi Yazdi for their very careful reading of the paper and for their many helpful suggestions.

\section{Algorithms and complexity}

\subsection{Generalities about algorithms}

This is not the place to give a detailed and rigorous introduction to the theory of algorithms. However, there are some aspects to the theory that are perhaps not so obvious to the uninitiated.

An algorithm is basically just a computer program. The computer takes, as its input, a finite string of letters, typically 0's and 1's. It then starts a deterministic process, which involves passing through various states and reading the string of letters that it is given. It is allowed to write to its own internal memory, which is of unbounded size. At some point, it may or may not reach a specified terminating state, when it declares an output. In many cases, this is just a single 0 or a single 1, which is to be interpreted as a `no' or `yes'.

Algorithms are supposed to solve practical problems called decision problems. These require a `yes' or `no' answer to a specific question, and the algorithm is said to solve the decision problem if it reliably halts with the correct answer.

One can make all this formal. For example, it is usual to define an algorithm using Turing machines. This is for two reasons. Firstly, it is intellectually rigorous to declare at the outset what form of computer one is using, in order that the notion of an algorithm is well-defined. Secondly, the simple nature of Turing machines makes it possible to prove various non-existence results for algorithms. However, in the world of low-dimensional topology, these matters do not really concern us. Certainly, we will not describe any of our algorithms using Turing machines. But we hope that it will be evident that all the algorithms that we describe could be processed by a computer, given a sufficiently diligent and patient programmer.

However, this informal approach can hide some important points, including the following:

(1) Decision problems are just functions from a subset of the set of all finite strings of 0's and 1's to the set $\{ 0, 1 \}$. In other words, they provide a yes/no answer to certain inputs. So, a question such as `what is the genus of a given knot?' is not a decision problem. One can turn it into a decision problem by asking, for instance, `is the genus of a given knot equal to $g$?' but this changes things. In particular, a fast solution to the second problem does not automatically lead to a fast solution to the first problem, since one would need to run through different values of $g$ until one had found the correct answer.

(2) Typically, we would like to provide inputs to our programs that are not strings of 0's and 1's. For example, some of our inputs may be positive integers, in which case it would be usual to enter them in binary form. However, in low-dimensional topology, the inputs are usually more complicated than that. For example, one might be given a simplicial complex with underlying space that is a compact 3-manifold.
Alternatively, one might be given a knot diagram. Clearly, these could be turned into 0's and 1's in some specific way, so that they can be fed into our hypothesised computer.

(3) However, when discussing algorithms, it is very important to specify what form the input data takes. Although it is easy to encode knot diagrams as 0's and 1's, and it is easy to encode triangulations using 0's and 1's, it is not at all obvious that one can easily convert between these two forms of data. For example, suppose that you are given a triangulation of a 3-manifold and that you are told that it is the exterior of some knot. How would you go about drawing a diagram of this knot? It turns out that it is possible, but it is still unknown how complex this problem is.

(4) Later we will be discussing the `complexity' of algorithms, which is typically defined to be their longest possible running time as a function of the length of their input. Again, the encoding of our input data is important here. For example, what is the `size' of a natural number $n$? In fact, it is usual to say that the `size' of a positive integer $n$ is its number of digits in binary. We will always follow this convention in this article. On the other hand, the `size' of a triangulation $\mathcal{T}$ of a 3-manifold is normally its number of tetrahedra, which we will denote by $|\mathcal{T}|$.

(5) Algorithms and decision problems are only interesting when the set of possible inputs is infinite. If our decision problem has only a finite number of inputs, then it is always solvable. For instance, suppose that our problem is `does this specific knot diagram represent the unknot?'. Then there is a very short algorithm that gives the right answer. It is either the algorithm that gives the output `yes' or the algorithm that gives the output `no'. This highlights the rather banal point that, in some sense, we do not care how the computer works, as long as it gives the right answer. But of course, we do care in practice, because the only way to be sure that it is giving the right answer is by checking how it works.

\subsection{Complexity classes}
\label{Sec:ComplexityClasses}

\begin{definition}
\begin{enumerate}
\item A decision problem lies in $\mathrm{P}$, or runs in \emph{polynomial time}, if there is an algorithm to solve it with running time that is bounded above by a polynomial function of the size of the input.
\item A decision problem lies in $\mathrm{EXP}$, or runs in \emph{exponential time}, if there is a constant $c > 0$ and an algorithm to solve the problem with running time that is bounded above by $2^{n^c}$, where $n$ is the size of the input.
\item A decision problem lies in $\mathrm{E}$ if there is a constant $c > 1$ and an algorithm to solve the problem with running time that is bounded above by $c^n$, where $n$ is the size of the input.
\end{enumerate}
\end{definition}

Out of $\mathrm{EXP}$ and $\mathrm{E}$, the latter seems to be more natural at first sight. However, it is less commonly used, for good reason. Typically, we allow ourselves to change the way that the input data is encoded. Alternatively, we may wish to use the solution to one decision problem as a tool for solving another. This may increase (or decrease) the size of the input data by some polynomial function. We therefore would prefer to use complexity classes that are unchanged when $n$ is replaced by a polynomial function of $n$. Obviously $\mathrm{EXP}$ has this nice property whereas $\mathrm{E}$ does not.
This is more than just a theoretical issue. For example, there is an algorithm to compute the HOMFLY-PT polynomial of a knot with $n$ crossings in time at most $k^{(\log n) \sqrt{n}}$ for some constant $k$ \cite{BurtonHOMFLY}. This function grows more slowly than $c^n$, for any constant $c > 1$. However, it is widely believed (for very good reason \cite{JaegerVertiganWelsh}) that there is no algorithm to do this in sub-exponential time.

\begin{definition}
There is a \emph{polynomial-time reduction} (or \emph{Karp reduction}) from one decision problem $A$ to another decision problem $B$ if there is a polynomial-time algorithm that translates any given input data for problem $A$ into input data for problem $B$. This translation should have the property that the decision problem $A$ has a positive solution if and only if its translation into problem $B$ has a positive solution.
\end{definition}

A major theme in the field of computational complexity is the use of non-deterministic algorithms. The most important type of such algorithm is as follows.

\begin{definition}
A decision problem lies in $\mathrm{NP}$ (\emph{non-deterministic polynomial time}) if there is an algorithm with the following property. The decision problem has a positive answer if and only if there is a certificate, in other words some extra piece of data, such that when the algorithm is run on the given input data and this certificate, it provides the answer `yes'. This is required to run in time that is bounded above by a polynomial function of the size of the initial input data.
\end{definition}

The phrase `non-deterministic' refers to the fact that the algorithm might only complete its task if it is provided with extra information that must be supplied by some unspecified source. One may wonder why non-deterministic algorithms are of any use at all. But there are several reasons to be interested in them.

First of all, $\mathrm{NP}$ captures the notion of problems where a positive answer can be verified quickly. For example, is a given positive integer $n$ composite? If the answer is `yes', then one can verify the positive answer by giving two integers greater than one and multiplying them together to get $n$. Once one is given these integers, this multiplication can be achieved in polynomial time as a function of the number of digits of $n$. Hence, this problem lies in NP.

Secondly, problems in $\mathrm{NP}$ \emph{can} be solved deterministically, but with a potentially longer running time. This is because $\mathrm{NP}$ problems lie in $\mathrm{EXP}$, for the following reason. If a problem lies in $\mathrm{NP}$, then there is an algorithm to verify a certificate that runs in time $n^c$, where $n$ is the size of the input data and $c$ is some constant. Since each step of the non-deterministic algorithm can only move the tape of the Turing machine at most one place, the algorithm can read at most the first $n^c$ digits in the certificate. Thus, we could discard any part of the certificate beyond this without affecting its verifiability. Therefore, we may assume that the certificate has size at most $n^c$. There are only $2^{n^c+1}$ possible strings of 0's and 1's of this length or less. Thus, a deterministic algorithm proceeds by running through all these strings and seeing whether any of them is a certificate that can be successfully verified. If one of these strings is such a certificate, then the algorithm terminates with a `yes'; otherwise it terminates with a `no'.
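In code, this brute-force simulation is just an enumeration of candidate certificates. Here is a minimal sketch in Python; the polynomial-time verifier \texttt{verify} is a hypothetical placeholder, standing in for whichever verification algorithm the problem at hand comes with.

\begin{verbatim}
from itertools import product

def deterministic_from_np(x, verify, c):
    """Simulate an NP algorithm deterministically: try every
    candidate certificate of length at most len(x)**c, and accept
    if any of them verifies.  Here `verify' is a hypothetical
    polynomial-time verifier taking (input, certificate)."""
    bound = len(x) ** c
    for length in range(bound + 1):
        for cert in product("01", repeat=length):
            if verify(x, "".join(cert)):
                return True    # some certificate verifies: `yes'
    return False               # no short certificate works: `no'
\end{verbatim}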
Clearly, this algorithm runs in exponential time. A concrete example highlighting that $\mathrm{NP} \subseteq \mathrm{EXP}$ is again the question of whether a given positive integer $n$ is composite. A certificate for being composite is two smaller positive integers that multiply together to give $n$. So a (rather inefficient) deterministic algorithm simply runs through all possible pairs of integers between $2$ and $n-1$, multiplies them together and checks whether the answer is $n$.

Thirdly, there is the following notion, which is an extremely useful one.

\begin{definition}
A decision problem is $\mathrm{NP}$\emph{-hard} if there is a polynomial-time reduction from any $\mathrm{NP}$ problem to it. If a problem both is in $\mathrm{NP}$ and is $\mathrm{NP}$-hard, then it is termed $\mathrm{NP}$-\emph{complete}.
\end{definition}

It is surprising how many NP-complete problems there are \cite{GareyJohnson}. The most fundamental of these is SAT. This takes as its input a collection of sentences that involve Boolean variables and the connectives AND, OR and NOT, and it asks whether there is an assignment of TRUE or FALSE to each of the variables that makes each of the sentences true. This is clearly a fundamental and universal problem, and so it is perhaps not so surprising that it is NP-complete. But what is striking is that there are so many other problems, spread throughout mathematics, that are NP-complete. As we will see, these include some natural decision problems in topology.

The following famous conjecture is very widely believed.

\begin{conjecture}
$\mathrm{P} \not= \mathrm{NP}$. Equivalently, any problem that is $\mathrm{NP}$-complete cannot be solved in polynomial time.
\end{conjecture}

Thus, when a problem is NP-complete, this provides strong evidence that the problem is difficult. Indeed, in the field of computational complexity, where there are so many fundamental unsolved conjectures, typically the only way to establish any interesting lower bound on a problem's complexity is to prove it conditionally on some widely believed conjecture.

One of the peculiar features of the definition of $\mathrm{NP}$ is that it treats the status of `yes' and `no' answers quite differently. Of course, one could reverse the roles of `yes' and `no', and so one is led to the following definition.

\begin{definition}
A decision problem is in co-NP if its negation is in NP.
\end{definition}

The following conjecture, like the famous $\mathrm{P} \not= \mathrm{NP}$ conjecture, is also widely believed.

\begin{conjecture}
\label{Con:NPCoNP}
$\mathrm{NP} \not= \mathrm{co}\textrm{-}\mathrm{NP}$. Hence, if a problem is $\mathrm{NP}$-complete, it does not lie in $\mathrm{co}\textrm{-}\mathrm{NP}$.
\end{conjecture}

One rationale for this conjecture is simply that problems in NP are those where a positive solution can be easily verified. In practice, verifying a positive solution seems quite different from verifying a negative solution. For example, to check a positive answer to an instance of SAT, one need only plug in the given truth values to the Boolean variables and check whether the sentences are all true. However, to verify a negative answer seems, in general, to require that one try out all possible truth assignments of the variables, which is obviously a much lengthier task.
Of course, for some instances of SAT, there may be shortcuts, but there does not seem to be a general method that one can apply to verify a negative answer to SAT in polynomial time.

The second part of the above conjecture is a consequence of the first part. For suppose that there were some NP-complete problem $D$ that lies in co-NP. Since any NP problem $D'$ can be reduced to $D$, we would therefore be able to use a certificate for a negative answer to $D$ to provide a certificate for a negative answer to $D'$. Hence, $D'$ would also lie in co-NP. As $D'$ was an arbitrary NP problem, this would imply that $\mathrm{NP} \subseteq \mathrm{co}\textrm{-}\mathrm{NP}$. This then implies that $\mathrm{co}\textrm{-}\mathrm{NP} \subseteq \mathrm{NP}$ because of the symmetry in the definitions. Hence $\mathrm{NP} = \mathrm{co}\textrm{-}\mathrm{NP}$, contrary to the first part of the conjecture.

It is worth highlighting the following result, due to Ladner \cite{Ladner}.

\begin{theorem}
If $\mathrm{P} \not= \mathrm{NP}$, then there are decision problems that are in $\mathrm{NP}$, but that are neither in $\mathrm{P}$ nor $\mathrm{NP}$-complete.
\end{theorem}

A problem that is in NP but that is neither in P nor NP-complete is called NP\emph{-intermediate}. There are no naturally-occurring decision problems that are known to be NP-intermediate. Problems that are in NP $\cap$ co-NP but that are not known to be in P are good candidates for being NP-intermediate. As we shall see, there are several decision problems in 3-manifold theory that are of this form. However, for a given problem, it is extremely challenging to provide good evidence for its intermediate status, as there \emph{might} be a polynomial-time algorithm to solve it that has not yet been found.

\section{Some highlights}

\subsection{The homeomorphism problem}

This is the most important decision problem in 3-manifold theory.

\begin{theorem}
\label{Thm:HomeoProblem}
The problem of deciding whether two compact orientable 3-manifolds are homeomorphic is solvable.
\end{theorem}

The problem of deciding whether two links in the 3-sphere are equivalent is nearly a special case of the above result. One must check whether there is a homeomorphism between the link exteriors taking meridians to meridians. This is also possible, and hence we have the following result \cite[Corollary 6.1.4]{Matveev}.

\begin{theorem}
\label{Thm:LinkEquivalence}
The problem of deciding whether two link diagrams represent equivalent links in the $3$-sphere is solvable.
\end{theorem}

There are now several known methods \cite{ScottShort, KuperbergAlgorithmic} for proving Theorem \ref{Thm:HomeoProblem}, but they all use the solution to the Geometrisation Conjecture due to Perelman \cite{Perelman1, Perelman2, Perelman3}. However, the complexity of the problem is a long way from being understood. The best known upper bound is due to Kuperberg \cite{KuperbergAlgorithmic}, who showed that the running time is at most
$$ 2^{2^{2^{2^{\rotatebox[origin=l]{29}{$\scriptscriptstyle\ldots\mathstrut$}^t}}}} $$
where $t$ is the sum of the number of tetrahedra in the given triangulations, and the height of the tower is some universal, but currently unknown, constant. We will review some of the ideas that go into this in Section \ref{Sec:HyperbolicStructures}.

The known lower bounds on the complexity of this problem are also very poor.
It was proved by the author \cite{LackenbyConditionallyHard} that the homeomorphism problem for compact orientable 3-manifolds is at least as hard as the problem of deciding whether two finite graphs are isomorphic. In a recent breakthrough by Babai \cite{Babai}, graph isomorphism was shown to be solvable in quasi-polynomial time (that is, in time $2^{({\log n})^c}$ for some constant $c$, where $n$ is the sum of the number of vertices in the two graphs). It is not known whether it is solvable in polynomial time, but it is believed by many that it is NP-intermediate.

Given the limitations in our understanding of this important problem, it is natural to ask whether there are any decision problems about 3-manifolds for which we can pin down their complexity. Perhaps unsurprisingly, there are very few decision problems in 3-manifold theory that are known to lie in P. But there are some problems that are known to be NP-complete.

\subsection{Some NP-complete problems}

The following striking result was proved by Agol, Hass and Thurston \cite{AgolHassThurston}.

\begin{theorem}
\label{Thm:3ManifoldKnotGenus}
The problem of deciding whether a knot in a compact orientable $3$-manifold bounds a compact orientable surface with genus $g$ is $\mathrm{NP}$-complete.
\end{theorem}

The ideas that go into this are important, and we will devote much of Sections \ref{Sec:AHT} and \ref{Sec:NPHard} to them.

Recently, some new results have been announced, establishing that some other natural topological problems are NP-complete. De Mesmay, Rieck, Sedgwick and Tancer \cite{DRST} proved the following.

\begin{theorem}
The problem of deciding whether a diagram of the unknot can be reduced to the trivial diagram using at most $k$ Reidemeister moves is $\mathrm{NP}$-complete.
\end{theorem}

\subsection{Some possibly intermediate problems}

There are very few algorithms in 3-manifold theory that run in polynomial time. However, there are some decision problems that are likely to be NP-intermediate or possibly in P. Recall from Section \ref{Sec:ComplexityClasses} that if a problem lies in NP and co-NP, then it is very likely \emph{not} to be NP-complete.

\begin{theorem}
\label{Thm:UnknotNPCoNP}
The problem of recognising the unknot lies in $\mathrm{NP}$ and $\mathrm{co}$-$\mathrm{NP}$.
\end{theorem}

We will discuss this result in Sections \ref{Sec:AHT} and \ref{Sec:ThurstonNorm}. The proof that unknot recognition lies in NP is due to Hass, Lagarias and Pippenger \cite{HassLagariasPippenger}. The fact that unknot recognition lies in co-NP was first proved by Kuperberg \cite{KuperbergKnottedness}, but assuming the Generalised Riemann Hypothesis. It has now been proved unconditionally by the author \cite{LackenbyEfficientCertification} using a method that was outlined by Agol \cite{AgolCoNP}. It is remarkable that the Generalised Riemann Hypothesis should have relevance in this area of 3-manifold theory.
In fact, it remains an assumption in the following theorem of Zentner \cite{Zentner}, Schleimer \cite{Schleimer} and Ivanov \cite{Ivanov}, which builds on work of Rubinstein \cite{Rubinstein} and Thompson \cite{Thompson}.

\begin{theorem}
The problem of deciding whether a 3-manifold is the 3-sphere lies in $\mathrm{NP}$ and, assuming the Generalised Riemann Hypothesis, it also lies in $\mathrm{co}$-$\mathrm{NP}$.
\end{theorem}

In Sections \ref{Sec:AlmostNormal} and \ref{Sec:Homomorphisms}, we explain how this result is proved.

\subsection{Some NP-hard problems}

Much of the progress in algorithmic 3-manifold theory has been to show that certain decision problems are solvable and, in many circumstances, to give an upper bound on their complexity. The task of finding lower bounds on their complexity is more difficult in general. However, there are some interesting problems that have been shown to be NP-hard, even though the problems themselves are not known to be algorithmically solvable.

The \emph{unlinking number} of a link is the minimal number of crossing changes needed to turn it into the unlink. It is not known to be algorithmically computable. However, the following result was proved independently by De Mesmay, Rieck, Sedgwick and Tancer \cite{DRST} and by Koenig and Tsvietkova \cite{KoenigTsvietkova}.

\begin{theorem}
The problem of deciding whether the unlinking number of a link is some given integer is $\mathrm{NP}$-hard.
\end{theorem}

The first group of authors above also considered the following decision problem, which likewise is not known to be solvable.

\begin{theorem}
The problem of deciding whether a link in $\mathbb{R}^3$ bounds a smoothly embedded orientable surface with zero Euler characteristic in $\mathbb{R}^4_+$ is $\mathrm{NP}$-hard.
\end{theorem}

\section{Pachner moves and Reidemeister moves}
\label{Sec:PachnerReidemeister}

Several decision problems, such as the homeomorphism problem for compact 3-manifolds, may be reinterpreted using Pachner moves.

\begin{definition}
A \emph{Pachner move} is the following modification to a triangulation $\mathcal{T}$ of a closed $n$-manifold: remove a non-empty subcomplex of $\mathcal{T}$ that is isomorphic to the union $F$ of some, but not all, of the $n$-dimensional faces of an $(n+1)$-simplex $\Delta^{n+1}$, and then insert the remainder of $\partial \Delta^{n+1}$, namely $\partial \Delta^{n+1} \backslash\backslash F$. (See Figure \ref{Fig:Pachner}.) For a triangulation $\mathcal{T}$ of an $n$-manifold $M$ with boundary, we also allow the following modification: attach onto its boundary an $n$-simplex $\Delta^n$, by identifying a non-empty subcomplex of $\partial M$ with a subcomplex of $\partial \Delta^n$ consisting of a union of some, but not all, of the $(n-1)$-dimensional faces.
\end{definition}

\begin{figure}
 \includegraphics[width=4in]{explicitmoves3.pdf}
 \caption{The Pachner moves for a closed $3$-manifold}
 \label{Fig:Pachner}
\end{figure}

The following was proved by Pachner \cite{Pachner}.

\begin{theorem}
\label{Thm:Pachner}
Any two triangulations of a compact PL $n$-dimensional manifold differ by a finite sequence of Pachner moves, followed by a simplicial isomorphism.
\end{theorem}

A \emph{simplicial isomorphism} between simplicial complexes is a simplicial map that is a homeomorphism and hence has a simplicial inverse.
In fact, the final simplicial isomorphism in Theorem \ref{Thm:Pachner} may be replaced by an isotopy, at least when $n = 3$. (See the discussion before Theorem 1.1 in \cite{RubinsteinSegermanTillmann} for more details.)

This has an important algorithmic consequence: if one is given two triangulations of compact $n$-dimensional manifolds, and the manifolds are PL-homeomorphic, then one will always be able to prove that they are PL-homeomorphic. This is because one can start with one of the triangulations. One then applies all possible Pachner moves to this triangulation, thereby creating a list of triangulations. Then one applies all possible Pachner moves to each of these, and so on. By Pachner's theorem, the second triangulation will eventually be formed, and hence this gives a proof that the manifolds are PL-homeomorphic.

Of course, this does not give an algorithm to determine whether two manifolds are (PL-)homeomorphic, because if we are given two triangulations of distinct manifolds, the above procedure does not terminate. However, if one knew in advance how many moves are required, then one would know when to stop. Thus, \emph{if} there were a computable upper bound on the number of moves that are required, then we would have a solution to the PL-homeomorphism problem for compact $n$-manifolds. In fact, in dimension three, the existence of such a bound is \emph{equivalent} to the solvability of the homeomorphism problem. Hence, as a consequence of Theorem \ref{Thm:HomeoProblem}, we have the following.

\begin{theorem}
There is a computable function $P \colon \mathbb{N} \times \mathbb{N} \rightarrow \mathbb{N}$ such that if $\mathcal{T}_1$ and $\mathcal{T}_2$ are triangulations of a compact orientable 3-manifold, then they differ by a sequence of at most $P(|\mathcal{T}_1|, |\mathcal{T}_2|)$ Pachner moves followed by a simplicial isomorphism.
\end{theorem}

\begin{proof}
We need to give an algorithm to compute $P(n_1, n_2)$ for positive integers $n_1$ and $n_2$. To do this, we construct all simplicial complexes that are obtained from $n_1$ tetrahedra by identifying some of their faces in pairs. We discard all the spaces that are not manifolds, which is possible since one can detect whether the link of each vertex is a 2-sphere or 2-disc. We also discard all the manifolds that are not orientable. We do the same for simplicial complexes obtained from $n_2$ tetrahedra. Then we use the solution to the homeomorphism problem for compact orientable 3-manifolds to determine which of these manifolds are homeomorphic. Then, for each pair of triangulations in our collection that represent the same manifold, we start to search for sequences of Pachner moves relating them. By Pachner's theorem, such a sequence will eventually be found. Thus, $P(n_1, n_2)$ is computable.
\end{proof}

Essentially the same argument gives the following result, using Theorem \ref{Thm:LinkEquivalence}.

\begin{theorem}
There is a computable function $R \colon \mathbb{N} \times \mathbb{N} \rightarrow \mathbb{N}$ such that if $D_1$ and $D_2$ are connected diagrams of a link with $c_1$ and $c_2$ crossings, then they differ by a sequence of at most $R(c_1, c_2)$ Reidemeister moves.
\end{theorem}

Although the functions $P$ and $R$ are computable, it would be interesting to have explicit upper bounds on the number of moves. This is useful even for specific manifolds, such as the 3-sphere, or for specific knots, such as the unknot. The smaller the bound one has, the more efficient the resulting algorithm is.
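To make the role of such a bound concrete, here is a sketch in Python of the search procedure that an explicit bound would yield. The helpers \texttt{pachner\_neighbours} (returning all triangulations obtained by a single Pachner move) and \texttt{isomorphic} (testing for a simplicial isomorphism) are hypothetical stand-ins, and triangulations are assumed to be encoded as hashable values.

\begin{verbatim}
from collections import deque

def related_by_moves(T1, T2, bound, pachner_neighbours, isomorphic):
    """Breadth-first search: decide whether T1 can be turned into
    a triangulation simplicially isomorphic to T2 using at most
    `bound' Pachner moves.  Without a computable bound, this
    search would not know when to give up and answer `no'."""
    frontier = deque([(T1, 0)])
    seen = {T1}
    while frontier:
        T, depth = frontier.popleft()
        if isomorphic(T, T2):
            return True
        if depth < bound:
            for S in pachner_neighbours(T):
                if S not in seen:
                    seen.add(S)
                    frontier.append((S, depth + 1))
    return False
\end{verbatim}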
In some cases, a polynomial bound can be established, for example, in the following result of the author \cite{LackenbyPolyUnknot}.

\begin{theorem}
\label{Thm:PolyUnknot}
Any diagram for the unknot with $c$ crossings can be converted to the diagram with no crossings using at most $(236 c)^{11}$ Reidemeister moves.
\end{theorem}

This result provides an alternative proof that unknot recognition lies in NP (one half of Theorem \ref{Thm:UnknotNPCoNP}), which was first proved by Hass, Lagarias and Pippenger \cite{HassLagariasPippenger}. The certificate is simple: just a sequence of Reidemeister moves with length at most $(236 c)^{11}$ taking the given diagram with $c$ crossings to the trivial diagram.

In recent work of the author \cite{LackenbyAllKnotTypes}, this has been generalised to every knot type.

\begin{theorem}
\label{Thm:PolyEveryKnot}
Let $K$ be any link in the 3-sphere. Then there is a polynomial $p_K$ with the following property. Any two diagrams $D_1$ and $D_2$ for $K$ with $c_1$ and $c_2$ crossings can be related by a sequence of at most $p_K(c_1) + p_K(c_2)$ Reidemeister moves.
\end{theorem}

Hence, we have the following corollary.

\begin{corollary}
\label{Cor:KnotTypeRecognitionNP}
For each knot type $K$, the problem of deciding whether a given knot diagram is of type $K$ lies in $\mathrm{NP}$.
\end{corollary}

However, if the knot type is allowed to vary, then the best known explicit upper bound on Reidemeister moves is vast. This is a result of Coward and the author \cite{CowardLackenby}.

\begin{theorem}
\label{Thm:UpperBoundRM}
If $D_1$ and $D_2$ are connected diagrams of the same link, with $c_1$ and $c_2$ crossings, then they are related by a sequence of at most
$$ 2^{2^{2^{2^{\rotatebox[origin=l]{29}{$\scriptscriptstyle\ldots\mathstrut$}^{c_1+c_2}}}}} $$
Reidemeister moves, where the height of the tower of exponentials is $k^{c_1 + c_2}$. Here, $k = 10^{1000000}$.
\end{theorem}

This was proved using work of Mijatovi\'c \cite{MijatovicKnot}, who provided upper bounds on the number of Pachner moves for triangulations of many 3-manifolds. For the 3-sphere, he obtained the following bound \cite{Mijatovic3Sphere} (see also King \cite{King}).

\begin{theorem}
\label{Thm:PachnerS3}
Any triangulation $\mathcal{T}$ of the 3-sphere may be converted to the standard triangulation, which is the double of a 3-simplex, using at most
$$6 \cdot 10^6 t^2 2^{20000 \, t^2}$$
Pachner moves, where $t$ is the number of tetrahedra of $\mathcal{T}$.
\end{theorem}

This was proved using the machinery that Rubinstein \cite{Rubinstein} and Thompson \cite{Thompson} developed for recognising the 3-sphere. We will discuss this in Section \ref{Sec:AlmostNormal}.

Mijatovi\'c then went on to analyse most Seifert fibre spaces \cite{MijatovicSeifertFibred} and then Haken $3$-manifolds \cite{MijatovicFibreFree} satisfying the following condition.
(For simplicity of exposition, we focus on manifolds that are closed or have toral boundary in this definition.)

\begin{definition}
\label{Def:FibreFree}
A compact orientable 3-manifold with (possibly empty) toral boundary is \emph{fibre-free} if, when an open regular neighbourhood of its JSJ tori is removed, no component of the resulting 3-manifold fibres over the circle or is the union of two twisted $I$-bundles glued along their horizontal boundary, unless that component is Seifert fibred.
\end{definition}

\begin{theorem}
\label{Thm:PachnerFibreFree}
Let $M$ be a fibre-free Haken 3-manifold with (possibly empty) toral boundary. Let $\mathcal{T}_1$ and $\mathcal{T}_2$ be triangulations of $M$, with $t_1$ and $t_2$ tetrahedra. Then they differ by a sequence of at most
$$ 2^{2^{2^{2^{\rotatebox[origin=l]{29}{$\scriptscriptstyle\ldots\mathstrut$}^{t_1}}}}} + 2^{2^{2^{2^{\rotatebox[origin=l]{29}{$\scriptscriptstyle\ldots\mathstrut$}^{t_2}}}}} $$
Pachner moves, where the heights of the towers are $c^{t_1}$ and $c^{t_2}$ respectively, possibly followed by a simplicial isomorphism. Here, $c = 2^{200}$.
\end{theorem}

Manifolds that are not fibre-free were also excluded by Haken \cite{HakenHomeomorphism} in his solution to the homeomorphism problem. However, Mijatovi\'c was able to remove the fibre-free hypothesis in the case of knot and link exteriors \cite{MijatovicKnot}, and was thereby able to prove the following result.

\begin{theorem}
\label{Thm:PachnerKnot}
Let $\mathcal{T}_1$ and $\mathcal{T}_2$ be triangulations of the exterior of a knot in the 3-sphere, with $t_1$ and $t_2$ tetrahedra. Then there is a sequence of Pachner moves, followed by a simplicial isomorphism, taking $\mathcal{T}_1$ to $\mathcal{T}_2$ with length at most the bound given in Theorem \ref{Thm:PachnerFibreFree}.
\end{theorem}

This was the main input into the proof of Theorem \ref{Thm:UpperBoundRM}. However, going from a bound on Pachner moves to a bound on Reidemeister moves was not a straightforward task.

The bounds on Pachner and Reidemeister moves presented in this section are an attractive measure of the complexity of the homeomorphism problem for $3$-manifolds and the recognition problem for certain links and manifolds. However, it is worth emphasising that even good bounds on Reidemeister and Pachner moves cannot, by themselves, lead to really efficient algorithms. For example, the polynomial upper bound on Reidemeister moves for the unknot given in Theorem \ref{Thm:PolyUnknot} only establishes that unknot recognition is in $\mathrm{NP}$ and $\mathrm{EXP}$. This is because, without further information, a blind search through all sequences of polynomially many Reidemeister moves cannot do better than exponential time. Therefore, if we are to find any algorithms in $3$-manifold theory and knot theory that run in sub-exponential time, other methods will be required.

\section{Normal surfaces}
\label{Sec:NormalSurfaces}

Many, but not all, algorithms in 3-manifold theory rely on normal surface theory. Normal surfaces were introduced by Kneser \cite{Kneser} and then were developed extensively by Haken \cite{HakenNormal, HakenHomeomorphism} and many others.
In the next three sections, we will give an overview of their theory.

\begin{definition}
An arc properly embedded in a 2-simplex is \emph{normal} if its endpoints are in the interior of distinct edges.
\end{definition}

\begin{definition}
A disc properly embedded in a tetrahedron is a \emph{triangle} if its boundary is three normal arcs. It is a \emph{square} if its boundary is four normal arcs. A \emph{normal disc} is either a triangle or a square.
\end{definition}

\begin{figure}
 \includegraphics[width=3.5in]{normsur3.pdf}
 \caption{Normal discs}
 \label{Fig:TrianglesSquares}
\end{figure}

\begin{definition}
Let $M$ be a compact 3-manifold with a triangulation $\mathcal{T}$. A surface properly embedded in $M$ is \emph{normal} if its intersection with each tetrahedron of $\mathcal{T}$ is a union of disjoint normal discs.
\end{definition}

\begin{definition}
A \emph{normal isotopy} of a triangulated $3$-manifold is an isotopy that preserves each simplex throughout.
\end{definition}

Most interesting surfaces in a 3-manifold can be placed either into normal form or into some variant of normal form. For example, we have the following results (see \cite[Proposition 3.3.24, Corollary 3.3.25]{Matveev}).

\begin{theorem}
\label{Thm:NormalDisc}
Let $M$ be a compact orientable 3-manifold that has compressible boundary. Let $\mathcal{T}$ be a triangulation of $M$. Then some compression disc for $\partial M$ is in normal form with respect to $\mathcal{T}$.
\end{theorem}

\begin{theorem}
\label{Thm:EssentialNormal}
Let $M$ be a compact orientable 3-manifold that is irreducible and has incompressible boundary. Let $S$ be a surface properly embedded in $M$ that is incompressible and boundary-incompressible, and is neither a sphere nor a boundary-parallel disc. Then $S$ may be isotoped into normal form.
\end{theorem}

The idea behind the proof of these theorems is as follows. In Theorem \ref{Thm:NormalDisc}, let $S$ be a compression disc for $\partial M$. In Theorem \ref{Thm:EssentialNormal}, $S$ is the given surface. First place $S$ in general position with respect to the triangulation $\mathcal{T}$. It then misses the vertices of $\mathcal{T}$ and intersects the edges in a finite collection of points. The number of these points is the \emph{weight} of $S$, denoted $w(S)$. This is the primary measure of the complexity of $S$. Each modification that will be made to $S$ will not increase its weight, and many modifications will reduce it. In fact, the weight of $S$ is the most significant quantity in a finite list of measures of complexity. Each modification will reduce some quantity in this list and will not increase the more significant measures of complexity. Thus, eventually the modifications must terminate, at which stage it can be deduced that the resulting surface is normal.

In outline, the normalisation procedure is as follows. See \cite[Section 3.3]{Matveev} for a more thorough treatment.
\begin{enumerate}
\item Suppose that in some tetrahedron $\Delta$, $S \cap \Delta$ is not a collection of discs. Then $S \cap \Delta$ admits a compression disc $D$ in the interior of $\Delta$.
\item Since $S$ is incompressible, $\partial D$ bounds a disc $D'$ in $S$.
\item When $M$ is irreducible, $D \cup D'$ bounds a ball in $M$, and there is an isotopy that moves $D'$ to $D$.
\item Even when $M$ is reducible, we may remove $D'$ from $S$ and replace it by $D$.
\item This process reduces the measures of complexity, and so at some point we must reach a stage where the intersection between $S$ and each tetrahedron is a collection of discs.
\item Suppose that one of these discs intersects an edge of a tetrahedron $\Delta$ more than once. If the interior of the edge lies in the interior of $M$, then there is an isotopy that can be performed that reduces the weight of the surface. This moves $S$ along an \emph{edge compression disc}, which is a disc $E$ in $\Delta$ such that $E \cap \partial \Delta$ is both an arc in $\partial E$ and a sub-arc of an edge of $\Delta$, and where $E \cap S$ is the remainder of $\partial E$.
\item If the above edge lies in $\partial M$, then the disc $E$ forms a potential boundary-compression disc for $S$. However, $S$ is boundary-incompressible, and so we may replace a sub-disc of $S$ by $E$. This reduces the weight of $S$.
\item Therefore we eventually reach the stage where $S$ intersects each tetrahedron in a collection of discs and each of these discs intersects each edge of the tetrahedron at most once. It is then normal.
\end{enumerate}
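The overall control flow of this procedure is a simple loop, which the following Python sketch records. The helper routines bundled in \texttt{helpers} are hypothetical stand-ins for the topological operations described in the steps above; the point is only that each branch strictly reduces the ordered list of complexity measures, so the loop terminates.

\begin{verbatim}
def normalise(S, helpers):
    """Hedged sketch of the normalisation loop; `helpers' bundles
    hypothetical implementations of the topological operations
    named in the outline above."""
    while True:
        if not helpers.meets_tetrahedra_in_discs(S):
            # Steps (1)-(4): compress within a tetrahedron, or
            # exchange a sub-disc of S for the compression disc.
            S = helpers.compress_in_tetrahedron(S)
        elif helpers.some_disc_meets_an_edge_twice(S):
            # Steps (6)-(7): isotope across an edge compression
            # disc, or boundary-compress; the weight goes down.
            S = helpers.move_along_edge_compression_disc(S)
        else:
            return S  # Step (8): S is now normal.
\end{verbatim}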
We will see in Section \ref{Sec:AlmostNormal} that other surfaces, particularly certain Heegaard surfaces, may be placed into a variation of normal form, called almost normal form, and that this has some important algorithmic consequences.

\section{The matching equations and fundamental surfaces}
\label{Sec:MatchingFundamental}

One of Haken's key insights was to encode a normal surface in a triangulation $\mathcal{T}$ by counting the number of its triangles and squares of each type in each tetrahedron.

\begin{definition}
The \emph{vector} $(S)$ associated with a normal surface $S$ is the $(7|\mathcal{T}|)$-tuple of non-negative integers that counts the number of triangles and squares of each type in each tetrahedron.
\end{definition}

The vector of a properly embedded normal surface $S$ satisfies some fairly obvious conditions:
\begin{enumerate}
\item The co-ordinates have to be non-negative integers.
\item Any two squares of different types within a tetrahedron necessarily intersect, and so this imposes constraints on $(S)$. These assert that for each pair of distinct square types within a tetrahedron, at least one of the corresponding co-ordinates of $(S)$ is zero. These are called the \emph{compatibility conditions} (or the \emph{quadrilateral conditions}).
\item For each face $F$ of $\mathcal{T}$ with tetrahedra on both sides, the intersection $S \cap F$ consists of normal arcs. These come in three different types. The number of arcs of each type can be computed from the number of triangles and squares of particular types in one of the adjacent tetrahedra. Similarly, the number of arcs of this type can be computed from the number of triangles and squares in the other adjacent tetrahedron. Thus these numbers of triangles and squares satisfy a linear equation. There is one such equation for each normal arc type in each face with tetrahedra on both sides. These are called the \emph{matching equations}. (See Figure \ref{Fig:MatchingEquations}.)
\end{enumerate}
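For concreteness, here is a small Python sketch that checks these three conditions for a candidate vector. The encoding is an illustrative assumption: co-ordinates come in blocks of seven per tetrahedron (four triangle types followed by three square types), and the matching equations are supplied as the rows of an integer matrix.

\begin{verbatim}
def is_normal_vector(v, matching_rows, t):
    """Check conditions (1)-(3) for a vector v of 7*t integers.
    In tetrahedron i, entries 7*i..7*i+3 count its four triangle
    types and entries 7*i+4..7*i+6 its three square types; the
    matching equations are the rows of `matching_rows'.  This
    block encoding is an assumption made for illustration."""
    if len(v) != 7 * t or any(x < 0 for x in v):
        return False                          # condition (1)
    for i in range(t):
        squares = v[7 * i + 4 : 7 * i + 7]
        if sum(1 for x in squares if x > 0) > 1:
            return False                      # condition (2)
    return all(sum(a * x for a, x in zip(row, v)) == 0
               for row in matching_rows)      # condition (3)
\end{verbatim}

By Lemma \ref{Lem:OneOneSurfaceVector} below, vectors passing such a test correspond exactly to normal surfaces up to normal isotopy.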
\begin{figure}
 \includegraphics[width=3in]{matchingequns2.pdf}
 \caption{A matching equation}
 \label{Fig:MatchingEquations}
\end{figure}

The following key observation is due to Haken \cite[Theorem 3.3.27]{Matveev}.

\begin{lemma}
\label{Lem:OneOneSurfaceVector}
There is a one-one correspondence between properly embedded normal surfaces up to normal isotopy and vectors in $\mathbb{R}^{7|\mathcal{T}|}$ satisfying the above three conditions.
\end{lemma}

It is therefore natural to try to understand and exploit this structure on the set of normal surfaces. One is quickly led to the following definition.

\begin{definition}
A properly embedded normal surface $S$ is the \emph{normal sum} of two properly embedded normal surfaces $S_1$ and $S_2$ if $(S) = (S_1) + (S_2)$. We write $S = S_1 + S_2$.
\end{definition}

This has an important topological interpretation. We can place $S_1$ and $S_2$ in general position, by performing a normal isotopy to one of them. Then they intersect in a collection of simple closed curves and properly embedded arcs. It turns out that $S_1 + S_2$ can be obtained from $S_1 \cup S_2$ by cutting along these curves and arcs and then regluing the surfaces in a different way. For example, consider a simple closed curve of $S_1 \cap S_2$, and suppose that this curve is orientation-preserving in both $S_1$ and $S_2$. Then one removes from $S_1$ and $S_2$ annular regular neighbourhoods of this curve, and then reattaches disjoint annuli to form $S_1 + S_2$. In each case, we remove a subsurface from $S_1$ and $S_2$ (consisting of annuli, M\"obius bands and discs) and then we reattach a surface with the same Euler characteristic. So we obtain the following consequence.

\begin{lemma}
The normal sum $S_1 + S_2$ satisfies $\chi(S_1 + S_2) = \chi(S_1) + \chi(S_2)$.
\end{lemma}

There is another way of proving this lemma, by observing that the Euler characteristic of a normal surface $S$ is a linear function of its vector $(S)$. This is because its decomposition into squares and triangles gives a cell structure on $S$, and the numbers of vertices, edges and faces of this cell structure are linear functions of the co-ordinates of $(S)$. This alternative proof has an important consequence: one can compute $\chi(S)$ in polynomial time as a function of the number of digits of the co-ordinates of $(S)$.

The following is also immediate, because $S_1 + S_2$ and $S_1 \cup S_2$ intersect the 1-skeleton of the triangulation at exactly the same points.

\begin{lemma}
\label{Lem:WeightAdditive}
The normal sum $S_1 + S_2$ satisfies $w(S_1 + S_2) = w(S_1) + w(S_2)$.
\end{lemma}

By a careful analysis of normal summation, the following result can be obtained. See Matveev \cite[Theorems 4.1.13 and 4.1.36 and the proof of Theorem 6.3.21]{Matveev}.

\begin{theorem}
\label{Thm:EssentialSummand}
Let $M$ be a compact orientable irreducible 3-manifold with a triangulation.
\begin{enumerate}
\item
Suppose that $M$ has compressible boundary, and let $D$ be a compression disc for $\partial M$ that is normal and that has least weight in its isotopy class. If $D$ is a normal sum $S_1 + S_2$, then neither $S_1$ nor $S_2$ can be a sphere or a boundary-parallel disc.
\item
Suppose that $M$ has (possibly empty) incompressible boundary.
Let $S$ be a connected incompressible boundary-incompressible surface properly embedded in $M$ that is not a sphere, a disc, a projective plane or a boundary-parallel torus. Suppose that $S$ is normal and that it has least weight in its isotopy class. If $S$ is a normal sum $S_1 + S_2$, then $S_1$ and $S_2$ are incompressible and boundary-incompressible, and neither is a sphere, disc, projective plane or boundary-parallel torus.
\end{enumerate}
\end{theorem}

\begin{definition}
A normal surface is \emph{fundamental} if it cannot be written as a sum of non-empty normal surfaces.
\end{definition}

Clearly, any normal surface can be expressed as a sum of fundamental surfaces. The following results demonstrate the importance of fundamental surfaces. Part (1) of the theorem is due to Haken \cite{HakenNormal}; part (2) is due to Jaco and Oertel \cite[Theorem 2.2]{JacoOertel} (see also \cite[Theorems 4.1.13 and 4.1.30]{Matveev}).

\begin{theorem} \label{Thm:Fundamental}
Let $M$ be a compact orientable irreducible 3-manifold with a triangulation.
\begin{enumerate}
\item If $M$ has compressible boundary, then there is a compression disc that is normal and fundamental.
\item If $M$ is closed and contains a properly embedded orientable incompressible surface other than a sphere, then it contains one that is the boundary of a regular neighbourhood of a fundamental surface.
\end{enumerate}
\end{theorem}

\begin{proof}
We focus on (1), as the proof of (2) is similar. Let $D$ be a compression disc that is normal and has least possible weight. Suppose that $D$ is a normal sum $S_1 + S_2$. Since $1 = \chi(D) = \chi(S_1) + \chi(S_2)$, we deduce that some $S_i$ has positive Euler characteristic. By focusing on one of its components, we may assume that $S_i$ is connected. It cannot be a sphere by Theorem \ref{Thm:EssentialSummand} (1). It cannot be a projective plane, since the only irreducible orientable 3-manifold containing a projective plane is $\mathbb{RP}^3$, which is closed. Hence, it must be a disc. This disc is not boundary-parallel, by Theorem \ref{Thm:EssentialSummand} (1). Hence, it is also a compression disc. But by Lemma \ref{Lem:WeightAdditive}, it has smaller weight than $D$, which is a contradiction. Hence, $D$ must have been fundamental.
\end{proof}

We will give a proof of the following result in the next section.

\begin{theorem} \label{Thm:ConstructFundamental}
A triangulation of a compact 3-manifold supports only finitely many fundamental normal surfaces and there is an algorithm to list them all.
\end{theorem}

As a consequence of Theorems \ref{Thm:Fundamental} (1) and \ref{Thm:ConstructFundamental}, we get the following famous result of Haken \cite{HakenNormal}.

\begin{theorem}
\label{Thm:AlgorithmCompressible}
There is an algorithm to decide whether a compact orientable 3-manifold has compressible boundary. Hence, there is an algorithm to decide whether a given knot is the unknot.
\end{theorem}

\begin{proof}
The input to the first algorithm is a triangulation of the 3-manifold $M$. The input to the second algorithm is either a triangulation of the knot exterior $M$ or a diagram of the knot, but in the latter case, the first step in the algorithm is to use the diagram to create a triangulation. Using Theorem \ref{Thm:ConstructFundamental}, one simply lists all the fundamental surfaces in $M$. For each one, the algorithm determines whether it is a disc.
If it is, then the algorithm determines whether its boundary is an essential curve in $\partial M$. If there is such a disc, then $\partial M$ is compressible. If there is not, then by Theorem \ref{Thm:Fundamental} (1), $\partial M$ is incompressible.
\end{proof}

The following is an important extension of this result \cite[Theorem 6.3.17]{Matveev}.

\begin{definition}
A compact orientable 3-manifold is \emph{simple} if it is irreducible, has incompressible boundary and contains no properly embedded essential annuli or tori.
\end{definition}

\begin{theorem}
\label{Thm:ListSurfacesForSimpleManifold}
There is an algorithm that takes, as its input, a triangulation of a compact orientable simple 3-manifold $M$ and an integer $k$, and it provides a list of all connected orientable incompressible boundary-incompressible properly embedded surfaces in $M$ with Euler characteristic at least $k$, up to ambient isotopy.
\end{theorem}

In the above theorem, there is no requirement that different surfaces in the list are not isotopic. However, it is possible to arrange this with more work, using a result of Waldhausen \cite[Proposition 5.4]{Waldhausen} which controls the way that isotopic surfaces intersect each other.

\begin{proof}
Let $S_1, \dots, S_n$ be the fundamental normal surfaces in the given triangulation. Let $S$ be a connected, incompressible, boundary-incompressible surface with $\chi(S) \geq k$, other than a sphere, a disc or a boundary-parallel torus. By Theorem \ref{Thm:EssentialNormal}, it can be isotoped into normal form. Pick a least weight representative for it, also called $S$. Then $S$ is a normal sum $\lambda_1 S_1 + \dots + \lambda_n S_n$, where each $\lambda_i$ is a non-negative integer. By Theorem \ref{Thm:EssentialSummand} (2), any $S_i$ that is a sphere, disc or boundary-parallel torus occurs with $\lambda_i = 0$. The same is true for any $S_i$ that is compressible or boundary-compressible. By our hypothesis that $M$ is simple, any $S_i$ that is an annulus or torus therefore has $\lambda_i = 0$. No $S_i$ can be a projective plane, as the only irreducible orientable 3-manifold containing a projective plane is $\mathbb{RP}^3$, and this contains no properly embedded orientable incompressible surfaces. It might be the case that some $S_i$ is a Klein bottle or M\"obius band, but in this case, $\lambda_i \leq 1$, as otherwise $2 S_i$ is a summand, and this is a torus or annulus. The remaining surfaces all have Euler characteristic at most $-1$. Hence, the sum of the coefficients $\lambda_i$ for these surfaces is at most $-k$. Therefore, also taking account of possible M\"obius bands and Klein bottles, we deduce that $\sum_i \lambda_i \leq -k + n$.

Hence, there are only finitely many such surfaces and they may all be listed, as sketched below. If we wish only to list those that are actually incompressible and boundary-incompressible, then we can cut along each surface and verify whether it is incompressible and boundary-incompressible, using a variation of Theorem \ref{Thm:AlgorithmCompressible}. This actually requires the use of boundary patterns, which are discussed in Section \ref{Sec:Hierarchies}.
\end{proof}
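The finiteness step at the end of this proof is elementary combinatorics: the candidate surfaces correspond to coefficient vectors $(\lambda_1, \dots, \lambda_n)$ of non-negative integers with bounded sum, and these can be enumerated directly. A minimal Python sketch, assuming the fundamental surfaces have already been listed:

\begin{verbatim}
from itertools import combinations_with_replacement

def coefficient_vectors(n, bound):
    """Yield every (lambda_1, ..., lambda_n) of non-negative
    integers with lambda_1 + ... + lambda_n <= bound; each vector
    determines one candidate normal sum of the n fundamental
    surfaces."""
    for total in range(bound + 1):
        # Choosing `total' indices with repetition from range(n)
        # produces each vector with co-ordinate sum `total' once.
        for combo in combinations_with_replacement(range(n), total):
            v = [0] * n
            for i in combo:
                v[i] += 1
            yield tuple(v)
\end{verbatim}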
One might wonder whether it is necessary to assume that the manifold $M$ is simple in the above theorem. However, if $M$ contains an essential annulus or torus, say, then it is possible to Dehn twist along such a surface, and so the manifold might have infinite mapping class group. If there is a surface $S$ that intersects the annulus or torus non-trivially, then the image of $S$ under powers of this Dehn twist will, in general, form an infinite collection of non-isotopic surfaces all with the same Euler characteristic. Hence, in this case, the conclusion of Theorem \ref{Thm:ListSurfacesForSimpleManifold} does not hold.

For non-simple manifolds, it is natural to consider their canonical tori and annuli. Recall that a torus or annulus $S$ properly embedded in a compact orientable 3-manifold $M$ is \emph{canonical} if it is essential and, given any other essential annulus or torus properly embedded in $M$, there is an ambient isotopy that pulls it off $S$. Canonical annuli and tori are also called \emph{JSJ annuli and tori} due to the work of Jaco, Shalen and Johannson \cite{JacoShalen, Johannson}. The exterior of the canonical tori is the \emph{JSJ decomposition} of $M$. It was shown by Jaco, Shalen and Johannson that, when $M$ is a compact orientable irreducible 3-manifold with (possibly empty) toral boundary, each component of its JSJ decomposition is either simple or Seifert fibred.

Again by a careful analysis of normal summation, the following was proved by Haken \cite{HakenHomeomorphism} and Mijatovi\'c \cite[Propositions 2.4 and 2.5]{MijatovicFibreFree}. See also Matveev \cite[Theorem 6.4.31]{Matveev}.

\begin{theorem}
\label{Thm:ConstructJSJ}
Let $M$ be a compact orientable irreducible 3-manifold with incompressible boundary and let $\mathcal{T}$ be a triangulation of $M$ with $t$ tetrahedra. Then there is an algorithm to construct the canonical annuli and tori of $M$. In fact, they may be realised as a normal surface with weight at most $2^{81t^2}$.
\end{theorem}

\section{The exponential complexity of normal surfaces}
\label{Sec:ExponentialNormal}

\subsection{An upper bound on complexity}

As we have seen, the fundamental surfaces form the building blocks for all normal surfaces. The following important result \cite{HassLagariasPippenger} provides an upper bound on their weight.

\begin{theorem}
\label{Thm:WeightFundamental}
The weight of a fundamental normal surface $S$ satisfies $w(S) \leq t^2 2^{7t + 7}$, where $t$ is the number of tetrahedra in the given triangulation $\mathcal{T}$.
\end{theorem}

Note that this immediately implies Theorem \ref{Thm:ConstructFundamental}, as one can easily list all the normal surfaces with a given upper bound on their weight.

The proof of Theorem \ref{Thm:WeightFundamental} relies on the structure of the set of all normal surfaces, which we now discuss.

Define the \emph{normal solution space} $\mathcal{N}$ to be the subset of $\mathbb{R}^{7t}$ consisting of points $v$ that satisfy the following conditions:
\begin{enumerate}
\item each co-ordinate of $v$ is non-negative;
\item $v$ satisfies the compatibility conditions;
\item $v$ satisfies the matching equations.
\end{enumerate}
Thus, the vectors of normal surfaces are precisely $\mathcal{N} \cap \mathbb{Z}^{7t}$ by Lemma \ref{Lem:OneOneSurfaceVector}. Now, $\mathcal{N}$ is a union of convex polytopes, as follows. Each polytope $\mathcal{C}$ is formed by choosing, for each tetrahedron of $\mathcal{T}$, two of its square types, whose co-ordinates are set to zero. Each polytope $\mathcal{C}$ is clearly a convex subset of $\mathbb{R}^{7t}$. Equally clearly, if a vector $v$ lies in $\mathcal{C}$, then so does any positive multiple of $v$.
Thus, it is natural to consider the intersection $\\mathcal{P}$ between $\\mathcal{C}$ and $\\{ (x_1, \\dots, x_{7t}) : x_1 + \\dots + x_{7t} = 1 \\}$. Then $\\mathcal{C}$ is a cone over $\\mathcal{P}$, with cone point the origin. This set $\\mathcal{P}$ is clearly compact and convex, and in fact it is a polytope. Its faces are obtained as the intersection between $\\mathcal{P}$ and some hyperplanes of the form $\\{ x_i = 0 \\}$. In particular, each vertex is the unique solution to the following system of equations:\n\\begin{enumerate}\n\\item the matching equations;\n\\item extra equations of the form $x_i = 0$ for certain integers $i$;\n\\item $x_1 + \\dots + x_{7t} = 1$.\n\\end{enumerate}\nBecause any such vertex is the unique solution to these equations, it has rational co-ordinates. (We will discuss this further below.) Hence, some multiple of this vertex has integer entries, and therefore corresponds to a normal surface. The surface arising from the smallest non-zero such multiple is termed a \\emph{vertex} surface.\n\n\\begin{proof}[Proof of Theorem \\ref{Thm:WeightFundamental}] We first bound the size of the vector $(S)$ of any vertex surface $S$. By definition, this is a multiple of a vertex $v$ of one of the polytopes $\\mathcal{P}$ described above. This is the unique solution to the matrix equation $B v = (0, 0, \\dots, 0, 1)^T$ for some integer matrix $B$. Note that each row of $B$, except the final one, has at most $4$ non-zero entries and in fact the $\\ell^2$ norm of each such row is at most $4$. Since the solution is unique, $B$ has maximal rank $7t$, and hence some square sub-matrix $A$, consisting of some subset of the rows of $B$, is invertible. In other words, $v = A^{-1} (0, 0, \\dots, 0, 1)^T$. Now, $A^{-1} = \\mathrm{det}(A)^{-1} \\mathrm{adj}(A)$. Here, $\\mathrm{adj}(A)$ is the adjugate matrix, each entry of which is obtained, up to sign, by crossing out some row and some column of $A$ and then taking the determinant of the resulting matrix. So, $\\mathrm{det}(A) v = \\mathrm{adj}(A) (0, 0, \\dots, 0, 1)^T$ is an integral vector and hence corresponds to an actual normal surface. We can bound the co-ordinates of this normal surface by noting that, by Hadamard's inequality, the determinant of a matrix has modulus at most the product of the $\\ell^2$ norms of the rows of the matrix, and so each entry of $\\mathrm{adj}(A)$ has modulus at most $(\\sqrt{7t})4^{7t-1}$. Hence, the vector $(S)$, which is the smallest non-zero multiple of $v$ with integer entries, also has this bound on its co-ordinates.\n\nNow consider a fundamental surface $S$. It lies in one of the subsets $\\mathcal{C}$ described above that is a cone on a polytope $\\mathcal{P}$. Since $\\mathcal{P}$ is the convex hull of its vertices $v_i$, we deduce that every element of $\\mathcal{P}$ is of the form $\\sum_i \\lambda_i v_i$, where each $\\lambda_i \\geq 0$ and $\\sum_i \\lambda_i = 1$. In fact, we may assume that all but at most $7t$ of these $\\lambda_i$ are zero, because if more than $7t$ were non-zero, we could use the linear dependence between the corresponding $v_i$ to reduce one of the $\\lambda_i$ to zero. We deduce that every element of $\\mathcal{C}$ is of the form $\\sum_i \\mu_i (S_i)$, where each $S_i$ is a vertex surface, each $\\mu_i \\geq 0$ and at most $7t$ of the $\\mu_i$ are non-zero. Clearly, if $S$ is fundamental, then each $\\mu_i < 1$, as otherwise $(S)$ is the sum $(S_i) + ((S) - (S_i))$ and hence is not fundamental. So, each co-ordinate of $(S)$ is at most $(7t)^{3/2} 4^{7t-1}$.
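In full: each co-ordinate of each $(S_i)$ is at most $\\sqrt{7t} \\, 4^{7t-1}$, at most $7t$ of the $\\mu_i$ are non-zero and each $\\mu_i < 1$, so each co-ordinate of $\\sum_i \\mu_i (S_i)$ is less than\n$$7t \\cdot \\sqrt{7t} \\, 4^{7t-1} = (7t)^{3/2} \\, 4^{7t-1}.$$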
In fact, a slightly more refined analysis gives the bound $7t 2^{7t-1}$ (see the proof of \\cite[Lemma 6.1]{HassLagariasPippenger}).\n\nThis gives a bound on the weight of $S$. Each co-ordinate counts the triangles or squares of $S$ of a given type, and each such triangle or square contributes at most $4$ to the weight of $S$. Therefore, $w(S)$ is at most $4$ times the sum of its co-ordinates. This is at most $(49 t^2) 2^{7t+1}$, which is at most the required bound.\n\\end{proof}\n\n\\subsection{A lower bound on complexity}\n\nTheorem \\ref{Thm:WeightFundamental} gives an explicit upper bound on the number of triangles and squares in a fundamental normal surface. This is essentially an exponential function of $t$, the number of tetrahedra in the triangulation. One might wonder whether we can improve this to a sub-exponential bound, perhaps even a polynomial one. However, examples due to Hass, Snoeyink and Thurston \\cite{HassSnoeyinkThurston} demonstrate that this is not possible. We present them in this subsection.\n\n\\begin{figure}\n \\includegraphics[width=4in]{hass-snoeyink-thurston.pdf}\n \\caption{The PL unknot $K_n$}\n \\label{Fig:HassSnoeyinkThurston}\n\\end{figure}\n\nThese examples are polygonal curves $K_n$ in $\\mathbb{R}^3$, one for each natural number $n$. (See Figure \\ref{Fig:HassSnoeyinkThurston}.) Each is composed of $10n+9$ straight edges. It is arranged much like one of the standard configurations of a 2-bridge knot. Thus, the majority of the curve lies in $\\mathbb{R}^3$ like a 4-string braid, in the sense that it is transverse to the planes $\\{ x = \\textrm{constant} \\}$. Here, we view the plane of the diagram as the $x-y$ plane, with the $z$-axis pointing vertically towards the reader. This braid is of the form $(\\sigma_1 \\sigma_2^{-1})^n (\\sigma_2 \\sigma_1^{-1})^n$, where $\\sigma_1$ and $\\sigma_2$ are the first two standard generators for the 4-string braid group. The left and right of the braid are capped off to close $K_n$ into a simple closed curve. Since the braid $(\\sigma_1 \\sigma_2^{-1})^n (\\sigma_2 \\sigma_1^{-1})^n$ is trivial, all these knots are topologically the same. In fact, it is easy to see that they are unknotted, and hence they bound a disc, which we may assume is a union of triangles that are flat in $\\mathbb{R}^3$. The following result gives a lower bound on the complexity of such a disc.\n\n\\begin{theorem} \n\\label{Thm:HassSnoeyinkThurston}\nAny piecewise linear disc bounded by $K_n$ consists of at least $2^{n-1}$ flat triangles.\n\\end{theorem}\n\nWe give an outline of the proof and refer the reader to \\cite{HassSnoeyinkThurston} for more details. \n\nThe first step is to exhibit a specific disc $D_n$, bounded by $K_n$, that is smooth in its interior. In the case of $K_0$, the disc $D_0$ is shown in Figure \\ref{Fig:HassSnoeyinkThurston2}. It lies in the plane $\\{ z = 0 \\}$. Note that the intersection between $D_0$ and the plane $\\{ x = 0 \\}$ consists of two straight curves, which we denote by $\\beta$.\n\n\\begin{figure}\n \\includegraphics[width=3in]{hass-snoeyink-thurston2.pdf}\n \\caption{Left: $K_0$ and its spanning disc $D_0$; Right: $D_2 \\cap \\{ x = 0 \\}$}\n \\label{Fig:HassSnoeyinkThurston2}\n\\end{figure}\n\n\nTo obtain $K_n$ from $K_{n-1}$, the following homeomorphism is applied to $\\mathbb{R}^3$. This preserves each of the planes $\\{ x = \\mathrm{constant} \\}$. It is supported in a small regular neighbourhood of the plane $\\{ x = 0 \\}$ that separates the braid $(\\sigma_1 \\sigma_2^{-1})^{n-1}$ from the braid $(\\sigma_2 \\sigma_1^{-1})^{n-1}$.
More specifically, a small positive real number $\\epsilon_n$ is chosen and the homeomorphism is supported in $\\{ - \\epsilon_n \\leq x \\leq \\epsilon_n \\}$.\nAt $\\{ x = - \\epsilon_n \\}$, the homeomorphism is the identity. As $x$ increases from $-\\epsilon_n$, the homeomorphism of the planes realises the braid generator $\\sigma_1$, and then the braid generator $\\sigma_2^{-1}$. Thus, the homeomorphism applied to the plane $\\{ x = 0 \\}$ is the map $\\phi$ that in the mapping class group of $(\\mathbb{R}^2, 4 \\ \\mathrm{points})$ represents the braid $\\sigma_1 \\sigma_2^{-1}$. Between $\\{ x = 0 \\}$ and $\\{ x = \\epsilon_n \\}$, the above homeomorphisms are applied but in reverse order, so that by the time we reach $\\{ x = \\epsilon_n \\}$, the homeomorphism is the identity.\nThus, this specifies a homeomorphism $\\mathbb{R}^3 \\rightarrow \\mathbb{R}^3$ taking $K_{n-1}$ to $K_n$. We define $D_n$ to be the image of $D_{n-1}$ under this homeomorphism.\n\nNow, within the plane $\\{ x = 0 \\}$, there is a straight line $L$ dividing the top two points of $K_n$ from the bottom two points. The key to understanding the complexity of $D_n$ is to see how many times it intersects this line. The intersection between $D_n$ and $\\{ x = 0 \\}$ is the image of the two arcs $\\beta$ under the homeomorphism $\\phi^n$. Now, $\\phi$ is a pseudo-Anosov homeomorphism with dilatation $\\lambda > 1$ (which is in fact the square of the golden ratio). It is a standard result about pseudo-Anosov homeomorphisms that as $n \\rightarrow \\infty$, the number of intersections between $\\phi^n(\\beta)$ and $L$ grows exponentially. In fact, it grows at least as fast as $\\lambda^n$ (see \\cite[Theorem 14.24]{FarbMargalit}).\n\nAlthough $D_n$ is smooth, any piecewise linear approximation to it will have this same property: the number of intersections between it and the line $L$ grows at least as fast as $\\lambda^n$. But a straight triangle intersects a straight line in either a line segment or at most one point. Hence, any piecewise-linear approximation to $D_n$ must have at least exponentially many straight triangles.\n\nSo far, we have only considered the disc $D_n$ and piecewise-linear discs that approximate it. The main part of the proof of Theorem \\ref{Thm:HassSnoeyinkThurston} is to show that \\emph{any} disc $E_n$ bounded by $K_n$ must intersect the line $L$ at least $2^{n-1}$ times. Now, $D_n$ has a special property: it is transverse to the planes $\\{ x = \\mathrm{constant} \\}$ containing the braid $(\\sigma_1 \\sigma_2^{-1})^n (\\sigma_2 \\sigma_1^{-1})^n$. Thus, if $\\{ x = 0 \\}$ is the plane in the middle of the braid and $\\{ x = c \\}$ is the plane at one end, then $D_n \\cap \\{ x = 0 \\}$ and $D_n \\cap \\{ x = c \\}$ are related by $\\phi^n$ up to isotopy. Thus, at least one of these intersections must be `exponentially complicated', and in the case of $D_n$, it is the plane $\\{ x = 0 \\}$ that is complicated.\n\nNow, an arbitrary spanning disc $E_n$ need not be transverse to the planes $\\{ x = \\mathrm{constant} \\}$. The co-ordinate $x$ can be viewed as a Morse function on $E_n$. This may have critical points, and at the planes on either side of such a critical point, the isotopy classes of the intersection between $E_n$ and these planes may change. A key part of the argument of Hass, Snoeyink and Thurston establishes that, in fact, there can be only \\emph{one} such saddle where the isotopy class changes in any interesting way.
In particular, one of the regions $\\{ -c \\leq x \\leq 0 \\}$ and $\\{ 0 \\leq x \\leq c \\}$ does not contain such a saddle (say the latter), and hence the intersection between $E_n$ and $\\{ x = 0 \\}$ or $\\{ x = c \\}$ must be exponentially complicated. In fact, it must be $\\{ x = 0 \\}$ that has exponentially complicated intersection, but this point is not essential for the proof of their theorem.\n\nWe now explain briefly why there is at most one relevant saddle singularity of $E_n$. Suppose that a saddle occurs in the plane $\\{ x = k \\}$. We may assume that the saddles of $E_n$ occur at different $x$ co-ordinates. Hence, the intersection between $E_n$ and $\\{ x = k \\}$ is a graph with a single 4-valent vertex in the interior of $E_n$ and possibly some 1-valent vertices on the boundary. If there are $0$ or $2$ vertices on the boundary, then this is not an `interesting' saddle and the intersections between $E_n$ and the planes just to the left and right of $\\{ x = k \\}$ are basically the same. On the other hand, when there are $4$ vertices on the boundary, then these 4 vertices divide $K_n$ into $4$ arcs, each of which must contain a critical point of $K_n$ with respect to the function $x$. However, $K_n$ has only $4$ critical points. One can deduce that if there were more than one such saddle, then in fact $K_n$ would have to have more than $4$ critical points, which is manifestly not the case. This completes the sketch of the proof of Theorem \\ref{Thm:HassSnoeyinkThurston}.\n\nNote that this theorem provides a lower bound on the number of triangles and squares for normal discs, as follows. It is straightforward to build a triangulation $\\mathcal{T}$ of the exterior of $K_n$, where the number of tetrahedra is a linear function of $n$ and each tetrahedron is straight in $\\mathbb{R}^3$. Any normal surface in $\\mathcal{T}$ can be realised as a union of flat triangles, possibly by subdividing each square along a diagonal into two triangles. Hence, Theorem \\ref{Thm:HassSnoeyinkThurston} provides an exponential lower bound on the number of triangles and squares in any normal spanning disc in $\\mathcal{T}$.\n\n\n\\section{The algorithm of Agol-Hass-Thurston}\n\\label{Sec:AHT}\n\nAs we saw in the previous section, the normal surfaces that we are interested in (such as a spanning disc for an unknot) may be exponentially complicated, as a function of the number of tetrahedra in our given triangulation. Clearly, this is problematic if one is trying to construct efficient algorithms. For example, suppose we are given a normal surface via its vector and we want to determine whether it is a disc. If the vector has exponential size, then we could not hope for our algorithm to build the surface in polynomial time in order to determine its topology. One can easily compute its Euler characteristic, since this is a linear function of the co-ordinates of the vector. So one can easily verify whether the Euler characteristic is $1$ and whether the surface has non-empty boundary. But this is not quite enough information to be able to deduce that the surface is a disc: one also needs to know that it is connected. A very useful algorithm, due to Agol-Hass-Thurston (AHT), allows us to verify this, even when the surface is exponentially complicated. The algorithm is very general and so has many other applications.
In fact, it is fair to say that, by using the AHT algorithm cleverly, one can answer just about any reasonable question in polynomial time about a normal surface with exponential weight.\n\n\\subsection{The set-up for the algorithm}\n\nInitially, we will consider just the problem of determining the number of components of a normal surface. This can be solved using a `vanilla' version of the AHT algorithm. We will then introduce some greater generality and will give some examples where this is useful.\n\nOne can think of the problem of counting the components of a normal surface $S$ as the problem of counting certain equivalence classes. Specifically, consider the points of intersection between $S$ and the 1-skeleton $\\mathcal{T}^{1}$ of the triangulation $\\mathcal{T}$. If two such points are joined by a normal arc of intersection between $S$ and a face of $\\mathcal{T}$, then clearly they lie in the same component of $S$. In fact, two points of $S \\cap \\mathcal{T}^{1}$ are in the same component of $S$ if and only if they are joined by a finite sequence of these normal arcs. Thus, one wants to consider the equivalence relation on $S \\cap \\mathcal{T}^{1}$ that is generated by the relation `joined by a normal arc of intersection between $S$ and a face of $\\mathcal{T}$'.\n\nNow one can think of the edges of $\\mathcal{T}$ as arranged disjointly along the real line. Then $S \\cap \\mathcal{T}^{1}$ is a finite set of points in $\\mathbb{R}$, which we can take to be the integers between $1$ and $N$, say, denoted $[1,N]$. (See Figure \\ref{Fig:AgolHassThurston1}.) In each face of $\\mathcal{T}$, there are at most three types of normal arc, and each such arc type specifies a bijection between two intervals in $[1,N]$. This bijection is very simple: it is either $x \\mapsto x + c$ for some integer $c$ or $x \\mapsto c - x$ for some integer $c$.\n\nThus, the `vanilla' version of the AHT algorithm is as follows:\n\n\\emph{Input}: (1) a positive integer $N$;\n\\vspace{-0.05in}\n\\begin{enumerate}\n\\item[(2)] a collection of $k$ bijections between sub-intervals of $[1,N]$, called \\emph{pairings}; \neach pairing is of the form $x \\mapsto x+ c$ (known as \\emph{orientation-preserving}) or $x \\mapsto c - x$ (known as \\emph{orientation-reversing}) for some integer $c$.\n\\end{enumerate}\n\n\\emph{Output}:\nthe number of equivalence classes for the equivalence relation generated by the pairings.\n\nThe running time for the algorithm is a polynomial function of $k$ and $\\log N$. It is the $\\log$ that is crucial here, because this allows us to tackle exponentially complicated surfaces in polynomial time.\n\nIn fact, one might want to know more than just the number of components of $S$. For example, one might be given two specific points of $S \\cap \\mathcal{T}^{1}$ and want to know whether they lie in the same component of $S$. This can be achieved using an enhanced version of the above algorithm, that uses `weight functions'. A \\emph{weight function} is a function $w \\colon [1,N] \\rightarrow \\mathbb{Z}^d$ that is constant on some interval in $[1,N]$ and is $0$ elsewhere. We will consider a finite collection of weight functions, all with the same value of $d$. The \\emph{total weight} of a subset $A$ of $[1,N]$ is the sum, over all weight functions $w$ and all elements $x$ of $A$, of $w(x)$. 
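Before describing how these weight functions are used, it may help to record the vanilla specification in executable form. The following is a minimal illustrative sketch in Python; the encoding of a pairing as a tuple (a, b, c, reversing) is our own convention, not one taken from \\cite{AgolHassThurston}. The sketch simply builds the equivalence classes with a union-find structure, so its running time is polynomial in $N$ rather than in $\\log N$; it pins down the input and output, but it is emphatically not the AHT algorithm itself.\n\n\\begin{verbatim}\n# Naive reference implementation of the 'vanilla' specification:\n# count the equivalence classes on {1, ..., N} generated by pairings.\n# Each pairing is (a, b, c, reversing): its domain is [a, b], and it\n# is the map x -> x + c if reversing is False, and x -> c - x if\n# reversing is True.  We assume each pairing maps [a, b] into [1, N].\n\ndef count_classes(N, pairings):\n    parent = list(range(N + 1))            # union-find over {1, ..., N}\n\n    def find(x):\n        while parent[x] != x:\n            parent[x] = parent[parent[x]]  # path halving\n            x = parent[x]\n        return x\n\n    for (a, b, c, reversing) in pairings:\n        for x in range(a, b + 1):\n            y = c - x if reversing else x + c\n            parent[find(x)] = find(y)      # identify x with its image\n\n    return len({find(x) for x in range(1, N + 1)})\n\n# Example: on [1, 6], the pairing x -> x + 3 on [1, 3] gives the three\n# classes {1,4}, {2,5}, {3,6}; adding x -> x + 1 on [1, 2] merges them.\nassert count_classes(6, [(1, 3, 3, False)]) == 3\nassert count_classes(6, [(1, 3, 3, False), (1, 2, 1, False)]) == 1\n\\end{verbatim}\n\nThe content of the AHT algorithm is that the same output (and, in the enhanced version, the total weights of the classes) can be computed in time polynomial in $k$ and $\\log N$.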
For example, in the above decision problem that asks whether two specific points of $S \\cap \\mathcal{T}^1$ lie in the same component of $S$, we can set $d = 1$ and then use two weight functions, each of which is $1$ at one of the specified points of $S \\cap \\mathcal{T}^{1}$ and zero elsewhere. Thus the decision problem may be rephrased as: is there an equivalence class with total weight $2$? This can be answered using the following enhanced AHT algorithm:\n\n\\medskip\n\\emph{Extra input}: (3) a positive integer $d$;\n\\vspace{-0.05in}\n\\begin{itemize}\n\\item[(4)] a list of $\\ell$ weight functions $[1,N] \\rightarrow \\mathbb{Z}^d$, each with range in $[-M,M]^d$.\n\\end{itemize}\n\n\\emph{Extra output}:\nA list of equivalence classes and their total weights.\n\nThe running time of the algorithm is at most a polynomial function of $k$, $d$, $\\ell$, $\\log N$ and $\\log M$.\n\nSome of the uses of the AHT algorithm are as follows:\n\n(1) \\emph{Is a normal surface $S$ orientable?} To answer this, one counts the number of components of $S$ and the number of components of the surface with vector $2 (S)$. The latter is twice the former if and only if $S$ is orientable.\n\n(2) \\emph{How many boundary components does a properly embedded normal surface $S$ have?} Here, one considers just the points $\\partial S \\cap \\mathcal{T}^{1}$ and just those pairings arising from normal arcs in $\\partial S$. Then the vanilla version of AHT provides the number of components of $\\partial S$.\n\n(3) \\emph{What are the components of $S$ as normal vectors?} Here, one sets $\\ell$ and $d$ to be the number of edges of $\\mathcal{T}$ and one defines the $i$th weight function $[1,N] \\rightarrow \\mathbb{Z}^d$ to take the value $(0, \\dots, 0, 1, 0, \\dots, 0)$, with the $1$ at the $i$th place, on precisely those points in $[1,N]$ that lie on the $i$th edge. So, the total weight of a component $S'$ just counts the number of intersection points between $S'$ and the various edges of $\\mathcal{T}$. From this, one can readily compute $(S')$, as follows. The weights on the edges determine the points of intersection between $S'$ and the 1-skeleton. From these, one can compute the arcs of intersection between $S'$ and each face of $\\mathcal{T}$. From these, one gets the curves of intersection between $S'$ and the boundary of each tetrahedron, and hence the decomposition of $S'$ into triangles and squares.\n\n(4) \\emph{What are the topological types of the components of $S$?} Using the previous algorithm one can compute the vectors for the components of $S$. Then for each component, one can compute its Euler characteristic, its number of boundary components and whether or not it is orientable. This determines its topological type.\n\n\\subsection{The algorithm}\n\nThis proceeds by modifying $[1,N]$, the pairings and the weights. At each stage, there will be a bijection between the old equivalence classes and the new ones that will preserve their total weights.\n\nAlthough this is not how Agol-Hass-Thurston described their algorithm, it is illuminating to think of the pairings in terms of 2-complexes, in the following way.\n\nWe already view $[1,N]$ as a subset of $\\mathbb{R}$. Each pairing can be viewed as specifying a band $[0,1] \\times [0,1]$ attached onto $\\mathbb{R}$, as follows. If the pairing is $[a,b] \\rightarrow [c,d]$, then we attach $[0,1] \\times \\{ 0 \\}$ onto $[a , b ]$ and we attach $[0,1] \\times \\{ 1 \\}$ onto $[c , d ]$. 
There are two possible ways to attach the band: with or without a half twist, according to whether the pairing is orientation-preserving or reversing. (See Figure \\ref{Fig:AgolHassThurston1}.)\n\nIf a pairing $[a,b] \\rightarrow [c,d]$ satisfies $a \\leq c$, then $[a,b]$ is the \\emph{domain} of the pairing and $[c,d]$ is the \\emph{range}. Its \\emph{translation distance} is $c - a = d - b$. Its \\emph{width} is $b - a + 1$, the number of integers in its domain or its range.\n\n\\begin{figure}\n \\includegraphics[width=1.8in]{agolhassthurston1.pdf}\n \\hspace{.1in}\n \\includegraphics[width=2in]{agolhassthurston5.pdf}\n \\caption{Left: Constructing a band from a pairing. Right: Transmission}\n \\label{Fig:AgolHassThurston1}\n\\end{figure}\n\n\nThe modifications that AHT make can be viewed as alterations to these bands. The modifications will be chosen so that they reduce \n$4^k \\prod_{i=1}^k w_i$,\nwhere $k$ is the number of pairings and $w_1, \\dots, w_k$ are their widths. Since the total running time of the algorithm is at most a polynomial function of $\\log N$, it needs to be the case that at frequent intervals, this measure of complexity goes down substantially. In fact, after every $5k$ cycles through the following modifications, it will be the case that this quantity is scaled by a factor of at most $1/2$.\n\n\\emph{Transmission.} \nSuppose that one pairing $g_2$ has range contained within the range of another pairing $g_1$. Then one of the attaching intervals for the band $B_2$ corresponding to $g_2$ lies within an attaching interval for the band $B_1$ corresponding to $g_1$. Suppose also that the domain of $g_2$ is not contained in the range of $g_1$. Then modify $g_2$ by sliding the band $B_2$ along $B_1$, as shown in the right of Figure \\ref{Fig:AgolHassThurston1}. On the other hand, if the domain of $g_2$ is contained in the range of $g_1$, then we can slide both endpoints of the band $B_2$ along the band $B_1$.\n\nIn fact, it might be possible to slide $B_2$ multiple times over $B_1$ if the domain and range of $g_1$ overlap. In this case, we do this as many times as possible, so as to move the domain and range of $g_2$ as far to the left as possible.\n\nThis process is used to move the attaching loci of the bands more and more to the left along the line $[1,N]$. Thus, the following modification might become applicable.\n\n\\emph{Truncation.}\nSuppose that there is an interval $[c,N]$ that is incident to a single band. Then one can reduce the size of this band, or eliminate it entirely if the interval $[c,N]$ completely contains one of the attaching intervals of the band.\n\n\\begin{figure}\n \\includegraphics[width=1in]{agolhassthurston4.pdf}\n \\hspace{.1in}\n \\includegraphics[width=1.9in]{agolhassthurston2.pdf}\n \\hspace{.1in}\n \\includegraphics[width=1.7in]{agolhassthurston3.pdf}\n \\caption{Left: Truncation. Middle: Contraction. Right: Trimming}\n \\label{Fig:AgolHassThurston2}\n\\end{figure}\n\n\nThis process reduces the total width of the bands. It might also reduce the number of bands. Hence, the following might become applicable.\n\n\\emph{Contraction.}\nSuppose that there is an interval in $[1,N]$ that is attached to no bands. Then each point in this interval is its own equivalence class. Thus, the procedure removes these points, and records them and their weights.\n\n\\emph{Trimming.} \nSuppose that $g \\colon [a,b] \\rightarrow [c,d]$ is an orientation-reversing pairing with domain and range that overlap.
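Note that such a pairing is the map $x \\mapsto (a + d) - x$, whose unique fixed point is $(a+d)/2$; the domain and range are symmetric about this point.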
Then trimming is the restriction of the domain and range of this pairing to $[a, \\lceil (a+d)/2 \\rceil - 1]$ and $[\\lfloor (a+d)/2 \\rfloor + 1, d]$ so that they no longer overlap.\n\n\nAs transmissions are performed, the attaching loci of the bands are moved to the left and so the two attaching intervals of a band are more likely to overlap. Under these circumstances, the pairing $g \\colon [a,b] \\rightarrow [c,d]$ is said to be \\emph{periodic} if its domain and range intersect and it is also orientation-preserving. The combined interval $[a,d]$ is called a \\emph{periodic interval}.\n\n\\emph{Period merger.}\nIf there are periodic pairings $g_1$ and $g_2$, then they can be replaced by a single periodic pairing, as long as there is sufficient overlap between their periodic intervals. Moreover, if $t_1$ and $t_2$ are their translation distances, then the new periodic pairing has translation distance equal to the greatest common divisor of $t_1$ and $t_2$.\n\nThus, with period mergers, it is possible to see a very dramatic decrease in the widths of the bands. These also reduce the number of bands.\n\nThe AHT algorithm cycles through these modifications. The claim that they scale $4^k \\prod_{i=1}^k w_i$ by a factor of at most $1/2$ after every $5k$ cycles through the above steps is very plausible, but the proof in \\cite{AgolHassThurston} is somewhat delicate.\n\n\\subsection{$3$-manifold knot genus is in NP}\n\\label{Sec:GenusNPHard}\nOne of the main motivations for Agol, Hass and Thurston to introduce their algorithm was to prove one half of Theorem \\ref{Thm:3ManifoldKnotGenus}. Specifically, they used it to show that the problem of deciding whether a knot $K$ in a compact orientable 3-manifold $M$ bounds a compact orientable embedded surface of genus $g$ is in NP. The input is a triangulation $\\mathcal{T}$ of $M$ with $K$ as a specified subcomplex, and an integer $g$ in binary. The first stage of the algorithm is to remove an open regular neighbourhood $N(K)$ of $K$, forming a triangulation $\\mathcal{T}'$ of the exterior $X$ of $K$. If $K$ bounds a compact orientable surface of genus $g$, then it bounds such a surface, with possibly smaller genus, that is incompressible and boundary-incompressible in $X$. There is such a surface $S$ in normal form in $\\mathcal{T}'$, by a version of Theorem \\ref{Thm:EssentialNormal}. In fact, we may also arrange that $\\partial S$ intersects each edge of $\\mathcal{T}'$ at most once, by first picking $\\partial S$ to be of this form and then noting that none of the normalisation moves in Section \\ref{Sec:NormalSurfaces} affects this property. We may assume that $S$ has least possible weight among all orientable normal spanning surfaces with this boundary. By a version of Theorem \\ref{Thm:EssentialSummand}, it is a sum of fundamental surfaces, none of which is a sphere or disc. In fact, it cannot be the case that two of these surfaces have non-empty boundary, because then $\\partial S$ would intersect some edge of $\\mathcal{T}'$ more than once. Hence, some fundamental summand $S'$ is also a spanning surface for $K$. It turns out that $S'$ must be orientable, as otherwise it is possible to find an orientable spanning surface with smaller weight. Note also that $\\chi(S') \\geq \\chi(S)$ and hence the genus of $S'$ is at most the genus of $S$. The required certificate is the vector of $S'$. Since $S'$ is fundamental, we have a bound on its weight by Theorem \\ref{Thm:WeightFundamental}.
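Recall that a compact connected orientable surface of genus $g$ with a single boundary circle has Euler characteristic $1 - 2g$, so, for a connected spanning surface, having genus at most $g$ is the same as having Euler characteristic at least $1 - 2g$.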
Using the AHT algorithm, one can easily check that it is connected, orientable and has Euler characteristic at least $1-2g$. One can also check that it has a single boundary curve that has winding number one along $N(K)$. \n\n\\section{Showing that problems are NP-hard}\n\\label{Sec:NPHard}\n\nIn the previous section, we explained how the AHT algorithm can be used to show that the problem of deciding whether a knot in a compact orientable 3-manifold bounds a compact orientable surface of genus $g$ is in NP. Agol, Hass and Thurston also showed that this problem is NP-hard. Combining these two results, we deduce that the problem is NP-complete. In this section, we explain how NP-hardness is proved. Partly for the sake of variety, we will show that the following related problem is NP-hard, thereby establishing one half of the following result. This is a minor variation, due to the author, of \\cite[Theorem 1.1]{LackenbyConditionallyHard}.\n\n\\begin{theorem}\n\\label{Thm:LinkGenus}\nThe following problem is NP-complete. The input is a diagram of an unoriented link $L$ in the 3-sphere and a natural number $g$. The output is a determination of whether $L$ bounds a compact orientable surface of genus $g$.\n\\end{theorem}\n\nLike the argument of Agol, Hass and Thurston, the method of proving that this is $\\mathrm{NP}$-hard is to reduce a known NP-complete problem to it. The problem used is a variant of SAT called 1-in-3-SAT. In SAT, one is given a list of Boolean variables $v_1, \\dots, v_n$ and a list of sentences involving the variables and the connectives AND, OR and NOT. The problem asks whether there is an assignment of TRUE or FALSE to the Boolean variables that makes each sentence true. The set-up for 1-in-3-SAT is the same, except that the sentences are of a specific form. Each sentence involves exactly three variables or their negations, and it asserts that exactly one of them is true. An example is ``$v_1 \\veebar \\neg v_2 \\veebar v_3$'', which means ``exactly one of $v_1$, $\\mathrm{NOT} (v_2)$ and $v_3$ is true''. Unsurprisingly, given its similarity to SAT, this problem is NP-complete \\cite[p.~259]{GareyJohnson}.\n\nFrom a collection of 1-in-3-SAT sentences, one needs to create a diagram of a suitable link $L$. This is probably best described by means of the example in Figure \\ref{Fig:LinkGenusNP}. Here, the variables are $v_1$, $v_2$ and $v_3$ and the sentences are\n$$v_1 \\veebar \\neg v_2 \\veebar v_3, \\qquad v_1 \\veebar \\neg v_1 \\veebar v_3, \\qquad \\neg v_1 \\veebar \\neg v_2 \\veebar \\neg v_3.$$\nThe associated link diagram is constructed from these as in Figure \\ref{Fig:LinkGenusNP}. The parts of the diagram with a $K$ in a box are where the link forms a satellite of a knot $K$. This knot $K$ is chosen to have reasonably large genus: at least $2m+n$, where $m$ is the number of sentences and $n$ is the number of variables.\n\nOne can view this diagram as being built as follows. Start with $n+1$ round circles in the plane of the diagram, where $n$ is the number of variables. Thus, each of the variables corresponds to one of the circles, and there is an extra circle. In Figure \\ref{Fig:LinkGenusNP}, the first $n$ circles are arranged at the top of the figure and the extra circle is at the bottom. Also start with $4m$ components, where $m$ is the number of sentences. These are arranged into batches, each containing 4 link components that are 4 parallel copies of the knot $K$. Each sentence corresponds to a batch. Each sentence contains three variables.
Given such a sentence, we attach a band from three out of the four components of the batch onto the three relevant variable components. If the negation of the variable appears in the sentence, we insert a half twist into the band. The fourth component of the batch is banded onto the extra component without a twist. \n\nThe resulting link has $n+1$ components. We view an assignment of TRUEs and FALSEs to the variables as corresponding to a choice of orientation on the first $n$ components of the link. For example, in Figure \\ref{Fig:LinkGenusNP}, the orientations shown correspond to the assignment of TRUE to $v_1$ and $v_2$ and the assignment of FALSE to $v_3$. The extra component is oriented in a clockwise way. These orientations determine orientations of the 4 strings within each batch. It should be evident that each sentence is true if and only if, within the corresponding batch, two of the strings are oriented one way and two of the strings are oriented the other. We call this a \\emph{balanced} orientation. Thus, the given instance of 1-in-3-SAT has a solution if and only if the link has a balanced orientation.\n\n\\begin{figure}\n \\includegraphics[width=4in]{knotgenusnp.pdf}\n \\caption{The link diagram obtained from an instance of 1-in-3-SAT. The given orientation is a balanced one.}\n \\label{Fig:LinkGenusNP}\n\\end{figure}\n\n\nThe NP-hardness of the decision problem in Theorem \\ref{Thm:LinkGenus} is established by the following fact: the link $L$ has a balanced orientation if and only if it bounds a compact orientable surface of genus at most $2m$.\n\nSuppose that $L$ has a balanced orientation. Then it bounds the following surface. Start with a disc for each of the variable circles and the extra component. Within each batch, insert two annuli so that the boundary components of each annulus are oriented in opposite ways. Then attach bands joining the annuli to the discs. It is easy to check that the resulting surface is orientable and has genus at most $2m$.\n\nSuppose that $L$ does not have a balanced orientation. Then, for any orientation on the components of $L$, the resulting link is a satellite of the knot $K$ with non-zero winding number. Hence, by our assumption about the genus of $K$, the genus of any compact orientable surface bounded by $L$ is at least $2m+1$.\n\n\nThus, the NP-hardness of the problem in Theorem \\ref{Thm:LinkGenus} is established. In fact, in \\cite{LackenbyConditionallyHard}, a variant of this was established, which examined not the genus of a spanning surface but its Thurston complexity. (See Definition \\ref{Def:ThurstonComplexity}.) But exactly the same argument gives Theorem \\ref{Thm:LinkGenus}.\n\nThe proof that this problem is in NP uses essentially the same argument as in Section \\ref{Sec:GenusNPHard}.\n\n\\section{Hierarchies}\n\\label{Sec:Hierarchies}\n\nSo far, the theory that we have been discussing has mostly been concerned with a single incompressible surface.
However, some of the most powerful algorithmic results are proved using sequences of surfaces called hierarchies.\n\n\\begin{definition} A \\emph{partial hierarchy} for a compact orientable 3-manifold $M$ is a sequence of 3-manifolds $M = M_1, \\dots, M_{n+1}$ and surfaces $S_1, \\dots, S_{n}$, with the following properties:\n\\begin{enumerate}\n\\item Each $S_i$ is a compact orientable incompressible surface properly embedded in $M_i$.\n\\item Each $M_{i+1}$ is $M_i \\backslash\\backslash S_i$.\n\\end{enumerate}\nIt is a \\emph{hierarchy} if $M_{n+1}$ is a collection of 3-balls.\n\\end{definition}\n\nThe following was proved by Haken \\cite{HakenHomeomorphism}.\n\n\\begin{theorem} If a compact orientable irreducible 3-manifold contains a properly embedded orientable incompressible surface, then it admits a hierarchy. In particular, any compact orientable irreducible 3-manifold with non-empty boundary admits a hierarchy.\n\\end{theorem}\n\n\\begin{definition}\nA compact orientable irreducible 3-manifold containing a compact orientable properly embedded incompressible surface is known as \\emph{Haken}.\n\\end{definition}\n\nUsing hierarchies, Haken was able to prove the following algorithmic result \\cite{HakenHomeomorphism}. (See Definition \\ref{Def:FibreFree} for the definition of `fibre-free'.)\n\n\\begin{theorem}\n\\label{Thm:HakenHomeoProblem}\nThere is an algorithm to decide whether any two fibre-free Haken 3-manifolds are homeomorphic. \n\\end{theorem}\n\nIn fact, using the solution to the conjugacy problem for mapping class groups of compact orientable surfaces \\cite{Hemion}, it is possible to remove the fibre-free hypothesis.\n\nAn important part of the theory is the following notion.\n\n\\begin{definition}\nA \\emph{boundary pattern} $P$ for a 3-manifold $M$ is a subset of $\\partial M$ consisting of disjoint simple closed curves and trivalent graphs.\n\\end{definition}\n\nThe following extension of Theorem \\ref{Thm:HakenHomeoProblem} also holds.\n\n\\begin{theorem} \n\\label{Thm:AlgorithmBoundaryPattern}\nThere is an algorithm that takes, as its input, two fibre-free Haken 3-manifolds $M$ and $M'$ with boundary patterns $P$ and $P'$ and determines whether there is a homeomorphism $M \\rightarrow M'$ taking $P$ to $P'$.\n\\end{theorem}\n\nBoundary patterns are used in the proof of Theorem \\ref{Thm:HakenHomeoProblem}. However, they are also useful in their own right. For example, when $L$ is a link in the 3-sphere and $M$ is the exterior of $L$, it is natural to assign a boundary pattern $P$ consisting of a meridian curve on each boundary component. Suppose that $(M,P)$ and $(M',P')$ are the 3-manifolds and boundary patterns arising in this way from links $L$ and $L'$. Then there is a homeomorphism between $(M,P)$ and $(M',P')$ if and only if $L$ and $L'$ are equivalent links. Thus, we obtain the following immediate consequence of Theorem \\ref{Thm:AlgorithmBoundaryPattern}.\n\n\\begin{theorem}\nThere is an algorithm to decide whether any two non-split links in the 3-sphere are equivalent, provided their exteriors are fibre-free.\n\\end{theorem}\n\nIn fact, it is not hard to remove the non-split hypothesis from the above statement by first expressing a given link as a distant union of non-split sublinks. Also, as mentioned above, one can remove the fibre-free hypothesis, and thereby deal with all links in the 3-sphere.\n\nA partial hierarchy determines a boundary pattern $P_i$ on each of the manifolds $M_i$, as follows.
Either the initial manifold $M$ comes with a boundary pattern $P$ or $P$ is declared to be empty. The union $P \\cup \\partial S_1 \\cup \\dots \\cup \\partial S_{i-1}$ forms a graph embedded in $M$. Then $P_i$ is defined to be the intersection between this graph and $M_i$. Provided that $S_{i-1}$ is separating in $M_{i-1}$, that $P_{i-1}$ is a boundary pattern and that $\\partial S_{i-1}$ intersects $P_{i-1}$ transversely and avoids its vertices, $P_{i}$ is also a boundary pattern.\n\nNote that in this definition, the surfaces are `transparent', in the following sense. Suppose that a boundary curve of $S_2$ runs over the surface $S_1$. Then we get a boundary pattern in $\\partial M_3$ on the other side of $S_1$, as well as at the intersection curves between the parts of $\\partial M_3$ coming from $S_1$ and the parts coming from $S_2$. Thus, in total this curve of $\\partial S_2$ gives rise to \\emph{three} curves of $P_3$.\n\nThe key observation in the proof of Theorem \\ref{Thm:AlgorithmBoundaryPattern} is that a hierarchy for $M$ induces a cell structure on $M$, as follows. The 1-skeleton is $P \\cup \\partial S_1 \\cup \\dots \\cup \\partial S_{n}$. The 3-cells are the components of $M_{n+1}$. The 2-cells arise where the components of $\\partial M_{n+1} \\backslash\\backslash P_{n+1}$ are identified in pairs and where the components of $\\partial M_{n+1} \\backslash\\backslash P_{n+1}$ intersect $\\partial M$. There is a small chance that this is not a cell complex, because the components of $\\partial M_{n+1} \\backslash\\backslash P_{n+1}$ might not be discs, but for essentially any reasonable hierarchy this will be the case. We say that this is the cell structure that is \\emph{associated with the hierarchy}.\n\n\\begin{definition}\nA hierarchy $H$ for $M$ is \\emph{semi-canonical} if, given any triangulation of $M$, there is an algorithm to build a finite list of hierarchies for $M$, one of which is $H$.\n\\end{definition}\n\n\\begin{theorem}\n\\label{Thm:SemiCanonical}\nLet $M$ be a fibre-free Haken 3-manifold with incompressible boundary. Then $M$ has a semi-canonical hierarchy.\n\\end{theorem}\n\nThe proof of Theorem \\ref{Thm:AlgorithmBoundaryPattern} now proceeds as follows. Let $\\mathcal{T}$ and $\\mathcal{T}'$ be triangulations of fibre-free Haken 3-manifolds $M$ and $M'$. For simplicity, we will assume that they have incompressible boundaries. By Theorem \\ref{Thm:SemiCanonical}, $M$ and $M'$ admit semi-canonical hierarchies $H$ and $H'$. If $M = M'$, then we may set $H = H'$. Thus, there is an algorithm to produce finite lists $H_1, \\dots, H_m$ and $H'_1, \\dots, H'_{m'}$ of hierarchies for each manifold, one of which is $H$ and one of which is $H'$. For each of these hierarchies, build the associated cell structure. If $M$ is homeomorphic to $M'$, there is therefore a cell-preserving homeomorphism from one of the cell structures for $M$ to one of these cell structures for $M'$. Conversely, if there is a cell-preserving homeomorphism, then clearly $M$ is homeomorphic to $M'$. Thus, the algorithm proceeds by searching for such a cell-preserving homeomorphism, which is clearly a finite task. \n\nWe now explain, in outline, how the semi-canonical hierarchy in Theorem \\ref{Thm:SemiCanonical} is constructed.\n\nAt each stage, we have a 3-manifold $M_i$ with a boundary pattern $P_i$, and we need to find a suitable surface $S_i$ to cut along.
We suppose that $\\partial M_i \\setminus P_i$ is incompressible in $M_i$, which we can ensure as long as the initial manifold has incompressible boundary. The choice of the surface $S_i$ depends on whether $(M_i, P_i)$ is simple, which is defined as follows.\n\n\\begin{definition}\n\\label{Def:SimplePattern}\nLet $M$ be a compact orientable 3-manifold with a boundary pattern $P$. Then $(M, P)$ is \\emph{simple} if the following conditions all hold:\n\\begin{enumerate}\n\\item $M$ is irreducible;\n\\item $\\partial M \\setminus P$ is incompressible;\n\\item any incompressible torus in $M$ is boundary parallel;\n\\item any properly embedded annulus disjoint from $P$ either is compressible, or admits a boundary compression disc disjoint from $P$, or is parallel to an annulus $A'$ in $\\partial M$ such that $A' \\cap P$ is a collection of disjoint core curves of $A'$.\n\\end{enumerate}\n\\end{definition}\n\nThe following procedure is then followed:\n\\begin{enumerate}\n\\item Suppose that $M_i$ contains an incompressible torus that is not boundary parallel. Then $S_i$ is either one or two copies of this torus, depending on whether the torus is separating or non-separating in the component of $M_i$ that contains it. \n\\item Suppose that $M_i$ contains an annulus $A$ that is incompressible, disjoint from $P_i$ and not boundary-parallel, and that has the property that any other such annulus can be isotoped so that its boundary is disjoint from $\\partial A$. Then $S_i$ is one or two copies of $A$, again depending on whether $A$ is separating or non-separating.\n\\item Suppose that $M_i$ contains an annulus $A$ disjoint from $P_i$ that is incompressible and that is parallel to an annulus $A'$ in $\\partial M_i$. Provided that $A' \\cap P_i$ is non-empty and does not just consist of core circles of $A'$, $S_i$ is set to be $A$.\n\\item Suppose that $(M_i, P_i)$ has a simple component that is not a 3-ball. Suppose also that this component is not a solid torus with a longitude disjoint from $P_i$. Then by a version of Theorem \\ref{Thm:ListSurfacesForSimpleManifold} for manifolds with boundary patterns, one may construct all incompressible boundary-incompressible surfaces in this component that have maximal Euler characteristic and intersect $P_i$ as few times as possible. \nWe set $S_i$ to be one of these surfaces, or possibly two parallel copies of this surface, again depending on whether this surface is separating or non-separating.\n\\end{enumerate}\n\nOf course, it is not clear why this procedure terminates. Moreover, this discussion is a substantial oversimplification. In particular, there are several more operations that are performed other than the ones described in (1) to (4) above. A careful and detailed account of the argument is given in Matveev's book \\cite{Matveev}.\n\nThe reason for the fibre-free hypothesis is as follows. Suppose that $M$ fibres over the circle, for example, with empty boundary pattern. Then we might take the first surface $S_1$ to be a fibre. The next manifold $M_2$ is then a copy of $S_1 \\times I$, with pattern $P_2$ equal to $\\partial S_1 \\times \\partial I$. We note that none of the possibilities in the above procedure applies, and so there is no way to extend the partial hierarchy in a semi-canonical way. The manifold $(M_2, P_2)$ is not simple, but one is not allowed to cut along an annulus that is vertical in the product structure and disjoint from the boundary pattern.
One can show that if the above procedure is applied to a general Haken 3-manifold with incompressible boundary, and at some stage there is no possible $S_i$ to cut along, then in fact the original manifold was not fibre-free.\n\nThe iterative nature of this algorithm makes it very inefficient. In order to construct the surface $S_i$, we need a triangulation $\\mathcal{T}_i$ of $M_i$. This can be obtained from the triangulation $\\mathcal{T}_{i-1}$ for $M_{i-1}$, as follows. First cut each tetrahedron along its intersection with $S_{i-1}$, forming a cell structure, and then subdivide this cell structure into a triangulation. The problem, of course, is that the number of normal triangles and squares of $S_{i-1}$ might be at least an exponential function of $|\\mathcal{T}_{i-1}|$. Hence, $|\\mathcal{T}_i|$ might be at least an exponential function of $|\\mathcal{T}_{i-1}|$. Thus, the number of tetrahedra in these triangulations might grow like a tower of exponentials, where the height of the tower is the number of surfaces in the hierarchy. This is the source of the bounds in Theorem \\ref{Thm:PachnerFibreFree} and Theorem \\ref{Thm:UpperBoundRM}.\n\n\\section{The Thurston norm}\n\\label{Sec:ThurstonNorm}\n\nWe saw in Theorem \\ref{Thm:3ManifoldKnotGenus} that the problem of deciding whether a knot in a compact orientable 3-manifold bounds a compact orientable surface with genus $g$ is NP-complete. Importantly, the 3-manifold is an input to the problem and so is allowed to vary. What if the manifold is fixed and only the knot can vary? Is the problem still NP-hard? We also saw in Theorem \\ref{Thm:LinkGenus} that the problem of deciding whether a link in the 3-sphere bounds a compact orientable surface of genus $g$ is also NP-complete. Here, the background manifold is fixed. However, there is no requirement that the surface respects any particular orientation on the link. So the surface might represent one of many different homology classes in the link exterior. Indeed, a critical point in the proof was a choice of orientation of the link. What if we fix an orientation on the link at the outset and require the Seifert surface to respect this orientation? In particular, what if we focus on knots in the 3-sphere, rather than links, where there is a unique homology class (up to sign) for a Seifert surface?\n\nPerhaps surprisingly, these restricted problems are almost certainly \\emph{not} NP-complete. For example, we have the following result of the author \\cite{LackenbyEfficientCertification}.\n\n\\begin{theorem}\n\\label{Thm:KnotGenusNPCoNP}\nThe problem of deciding whether a knot in the 3-sphere bounds a compact orientable surface of genus $g$ is in $\\mathrm{NP}$ and $\\mathrm{co}$-$\\mathrm{NP}$.\n\\end{theorem}\n\nWe also saw in Theorem \\ref{Thm:UnknotNPCoNP} that the problem of recognising the unknot is in NP and co-NP. Recall from Conjecture \\ref{Con:NPCoNP} that such problems are believed not to be NP-complete.\n\nRecent work of the author and Yazdi \\cite{LackenbyYazdi} generalises Theorem \\ref{Thm:KnotGenusNPCoNP} to knots in an arbitrary but fixed 3-manifold.\n\n\\begin{theorem}\n\\label{Thm:KnotGenusFixedManifold}\nLet $M$ be a fixed compact orientable 3-manifold. 
Then the problem of deciding whether a knot in $M$ bounds a compact orientable surface of genus $g$ is in $\\mathrm{NP}$ and $\\mathrm{co}$-$\\mathrm{NP}$.\n\\end{theorem}\n\nWe will discuss in this section the methods that go into the proof of Theorem \\ref{Thm:KnotGenusNPCoNP}.\n\n\\subsection{The Thurston norm}\n\n\\begin{definition}\n\\label{Def:ThurstonComplexity}\nThe \\emph{Thurston complexity} $\\chi_-(S)$ of a compact connected surface $S$ is $\\max \\{ 0, -\\chi(S) \\}$. The Thurston complexity $\\chi_-(S)$ of a compact surface $S$ with components $S_1, \\dots, S_n$ is $\\sum_i \\chi_-(S_i)$.\n\\end{definition}\n\n\\begin{definition}\nLet $M$ be a compact orientable 3-manifold. The \\emph{Thurston norm} $x(z)$ of a class $z \\in H_2(M, \\partial M)$ is the minimal Thurston complexity of a compact oriented properly embedded surface representing $z$.\n\\end{definition}\n\nTheorem \\ref{Thm:KnotGenusNPCoNP} is a consequence of the following result.\n\n\\begin{theorem}\n\\label{Thm:ThurstonNormNP}\nThe following problem is in $\\mathrm{NP}$. The input is a triangulation of a compact orientable 3-manifold $M$, a simplicial cocycle $c$ representing an element $[c]$ of $H^1(M)$ and a non-negative integer $n$ in binary. The size of the input is defined to be the sum of the number of tetrahedra in the triangulation, the number of digits of $n$ and the sum of the number of digits of $c(e)$ as $e$ runs over each edge of the triangulation. The output is an answer to the question of whether the Poincar\\'e dual of $[c]$ has Thurston norm $n$.\n\\end{theorem}\n\n\\begin{proof}[Proof of Theorem \\ref{Thm:KnotGenusNPCoNP}] Let $D$ be a diagram of a knot $K$ with $c(D)$ crossings and let $g$ be a natural number. Let $g(K)$ be the genus of $K$, which therefore lies between $0$ and $(c(D)-1)/2$. If $g \\geq g(K)$, then Theorem \\ref{Thm:3ManifoldKnotGenus} provides a certificate, verifiable in polynomial time, that $K$ bounds a compact orientable surface of genus $g$. So suppose that $g < g(K)$. From the diagram, we can construct a triangulation $\\mathcal{T}$ of the exterior $M$ of $K$ where the number of tetrahedra is bounded above by a linear function of $c(D)$. We can also give a cocycle $c$ representing a generator for $H^1(M)$. Theorem \\ref{Thm:ThurstonNormNP} provides a certificate that the Thurston norm of the dual of $[c]$ is $\\max \\{ 2g(K) - 1, 0 \\}$ and hence that the genus of $K$ is $g(K)$. This establishes that $K$ does not bound a compact orientable surface of genus $g$.\n\\end{proof}\n\nExactly the same argument establishes that unknot recognition lies in co-NP, which is one half of Theorem \\ref{Thm:UnknotNPCoNP}.\n\n\\subsection{Sutured manifolds}\n\nTheorem \\ref{Thm:ThurstonNormNP} is proved using Gabai's theory of sutured manifolds \\cite{Gabai}.\n\n\\begin{definition} \nA \\emph{sutured manifold} is a compact orientable 3-manifold $M$ with two specified subsurfaces $R_-$ and $R_+$ of $\\partial M$. These must satisfy the condition that $R_- \\cup R_+ = \\partial M$ and that $R_- \\cap R_+ = \\partial R_- = \\partial R_+$. The subsurface $R_-$ is transversely oriented into $M$, and $R_+$ is transversely oriented outwards. The curves $\\gamma = R_- \\cap R_+$ are called \\emph{sutures}.
The sutured manifold is denoted $(M, \\gamma)$.\n\\end{definition}\n\n\\begin{definition}\nA compact oriented embedded surface $S$ in $M$ with $\\partial S \\subset \\partial M$ is said to be \\emph{taut} if it is incompressible and it has minimal Thurston complexity in its class in $H_2(M, \\partial S)$.\n\\end{definition}\n\nA basic example of a taut surface is a minimal genus Seifert surface in the exterior of a knot. Indeed, when $M$ has toral boundary, a compact oriented properly embedded surface $S$ is taut if and only if it is incompressible and has minimal Thurston complexity in its class in $H_2(M, \\partial M)$.\n\n\\begin{definition}\nA sutured manifold $(M, \\gamma)$ is \\emph{taut} if $M$ is irreducible and $R_-$ and $R_+$ are both taut.\n\\end{definition}\n\n\\begin{definition}\nLet $(M, \\gamma)$ be a sutured manifold and let $S$ be a properly embedded transversely oriented surface that intersects $\\gamma$ transversely. Then $M \\backslash\\backslash S$ inherits a sutured manifold structure $(M \\backslash\\backslash S, \\gamma_S)$ as follows. There are two copies of $S$ in $\\partial (M \\backslash\\backslash S)$, one pointing inwards, one outwards. These form part of $R_-(M \\backslash\\backslash S, \\gamma_S)$ and $R_+(M \\backslash\\backslash S, \\gamma_S)$. Also, the intersection between $R_-(M, \\gamma)$ and $M \\backslash\\backslash S$ forms the remainder of the inward pointing subsurface. The outward pointing subsurface is defined similarly. This is termed a \\emph{sutured manifold decomposition} and is denoted\n$$(M, \\gamma) \\xrightarrow{S} (M \\backslash\\backslash S, \\gamma_S).$$\n\\end{definition}\n\n\\begin{definition}\nA \\emph{sutured manifold hierarchy} is a sequence of decompositions \n$$(M_1, \\gamma_1) \\xrightarrow{S_1} (M_2, \\gamma_2) \\xrightarrow{S_2} \\dots \\xrightarrow{S_n} (M_{n+1}, \\gamma_{n+1})$$\nwhere each $(M_i, \\gamma_i)$ is taut, each surface $S_i$ is taut and $(M_{n+1}, \\gamma_{n+1})$ is a collection of taut 3-balls.\n\\end{definition}\n\nThe fundamental theorems of sutured manifold theory are the following surprising results \\cite[Theorems 2.6, 3.6 and 4.19]{Scharlemann}.\n\n\\begin{theorem}\n\\label{Thm:TautnessPullsBack}\nLet \n$$(M_1, \\gamma_1) \\xrightarrow{S_1} (M_2, \\gamma_2) \\xrightarrow{S_2} \\dots \\xrightarrow{S_n} (M_{n+1}, \\gamma_{n+1})$$\nbe a sequence of sutured manifold decompositions with the following properties for each surface $S_i$:\n\\begin{enumerate}\n\\item no component of $\\partial S_i$ bounds a disc in $\\partial M_i$ disjoint from $\\gamma_i$;\n\\item no component of $S_i$ is a compression disc for a solid toral component of $M_i$ with no sutures.\n\\end{enumerate}\nSuppose that $(M_{n+1}, \\gamma_{n+1})$ is taut. Then every sutured manifold $(M_i, \\gamma_i)$ is taut and every surface $S_i$ is taut.\n\\end{theorem}\n\n\\begin{theorem}\n\\label{Thm:SuturedHierarchiesExist}\nLet $(M, \\gamma)$ be a taut sutured manifold and let $z \\in H_2(M, \\partial M)$ be a non-trivial class. Then there is a sutured manifold hierarchy for $(M, \\gamma)$ satisfying (1) and (2) in Theorem \\ref{Thm:TautnessPullsBack} and where the first surface $S_1$ satisfies $[S_1] = z$.\n\\end{theorem}\n\nSuch a hierarchy can be viewed as a certificate for the tautness of the first surface $S_1$ and hence for the Thurston norm of $[S_1]$ in the case where $\\partial M$ is empty or toral. Note that the tautness of the final manifold $(M_{n+1}, \\gamma_{n+1})$ is easily verified.
This is because a 3-ball with a sutured manifold structure is taut if and only if it has at most one suture.\n\n\\subsection{A certificate verifiable in polynomial time} Theorems \\ref{Thm:TautnessPullsBack} and \\ref{Thm:SuturedHierarchiesExist} imply that sutured manifold hierarchies can be used to establish the Thurston norm of a homology class in $H_2(M, \\partial M)$, provided $\\partial M$ is empty or toral and $M$ is irreducible. However, it is somewhat surprising that they can be used to form a certificate that is verifiable in polynomial time. Indeed, the discussion at the end of Section \\ref{Sec:Hierarchies} suggests that it is hard to control the complexity of hierarchies.\n\nHowever, sutured manifold hierarchies seem to be much more tractable than ordinary ones. As we will see, the reason for this is that there is an important distinction between the behaviour of sutures and boundary patterns when a manifold is cut along a surface.\n\nRecall that the main source of the complexity of hierarchies is that a fundamental normal surface $S$ in a compact orientable 3-manifold $M$ may have exponentially many triangles and squares, as a function of the number of tetrahedra in the triangulation $\\mathcal{T}$ of $M$. Thus, when we attempt to build a triangulation of $M \\backslash\\backslash S$, we may need exponentially many tetrahedra. However, there are at most $5$ different triangle and square types that can coexist within a tetrahedron $\\Delta$ of $\\mathcal{T}$: the four triangle types and at most one of the three square types, since distinct square types must intersect. Hence, all but at most $6$ components of $\\Delta \\backslash\\backslash S$ lie between parallel normal discs. These regions patch together to form an $I$-bundle embedded in $M \\backslash\\backslash S$ called its \\emph{parallelity bundle} \\cite{LackenbyCrossing}. Thus, $M \\backslash\\backslash S$ is composed of at most $6|\\mathcal{T}|$ bits of tetrahedra with the parallelity bundle attached to them. Even when $S$ is exponentially complicated, it is possible to determine the topological types of the components of the parallelity bundle in polynomial time using the AHT algorithm. Hence, in fact, $M \\backslash\\backslash S$ is not as complicated as it first seems.\n\nIt would be ideal if the next two stages of the hierarchy after $S$ consisted of the annuli that form the vertical boundary of the parallelity bundle, and then vertical discs in the $I$-bundle that decompose it into balls. Then the resulting manifold would have a simple triangulation. In the case of Haken's hierarchies in Section \\ref{Sec:Hierarchies}, this is not possible. It is not permitted to decompose along vertical annuli in an $I$-bundle that are disjoint from the boundary pattern. In fact, before an $I$-bundle can be decomposed in Haken's hierarchies, its horizontal boundary must first receive a non-empty boundary pattern, from decompositions along surfaces elsewhere in the manifold. \n\nHowever, in the case of sutured manifolds, these sorts of decompositions are allowed, under some fairly mild hypotheses. In particular, a decomposition along an incompressible annulus is always permitted, provided one boundary component lies in $R_-$ and one lies in $R_+$. It is therefore possible, after these decompositions, to obtain a triangulation of the resulting manifold with a controlled number of tetrahedra.
In this way, we may build the entire sutured manifold hierarchy for $M$ and encode it in a way that makes it possible to verify in polynomial time that the final manifold consists of taut balls and that it satisfies (1) and (2) of Theorem \\ref{Thm:TautnessPullsBack}. This is the basis for the author's proof \\cite{LackenbyEfficientCertification} of Theorem \\ref{Thm:ThurstonNormNP}.\n\n\n\n\\section{3-sphere recognition and Heegaard splittings}\n\\label{Sec:AlmostNormal}\n\nIn groundbreaking work \\cite{Rubinstein}, Rubinstein proved the following fundamental algorithmic result.\n\n\\begin{theorem}\nThere is an algorithm to determine whether a 3-manifold is the 3-sphere.\n\\end{theorem}\n\nThis result is remarkable because the 3-sphere is quite featureless and so, unlike the case of unknot recognition, there is not an obvious normal surface to search for. Rubinstein's argument was enhanced and simplified by Thompson \\cite{Thompson}. Both arguments relied on the theory of almost normal surfaces, which are defined as follows.\n\n\\begin{definition}\nA surface properly embedded in a tetrahedron is\n\\begin{itemize}\n\\item an \\emph{octagon} if it is a disc with boundary consisting of eight normal arcs;\n\\item a \\emph{tubed piece} if it is an annulus that is obtained from two disjoint normal discs by attaching a tube that runs parallel to an edge of the tetrahedron.\n\\end{itemize}\nAn octagon or tubed piece is called an \\emph{almost normal piece}.\n\\end{definition}\n\n\\begin{figure}\n \\includegraphics[width=3.5in]{almost-normal.pdf}\n \\caption{Almost normal pieces}\n \\label{Fig:AlmostNormal}\n\\end{figure}\n\n\\begin{definition}\n\\label{Def:AlmostNormalSurface}\nA surface properly embedded in a triangulated 3-manifold is \\emph{almost normal} if it intersects each tetrahedron in a collection of disjoint triangles\nand squares, except in precisely one tetrahedron where it consists of exactly one almost normal piece and possibly also some triangles and squares.\n\\end{definition}\n\nThe following striking result is the basis for the Rubinstein-Thompson algorithm.\n\n\\begin{theorem}\n\\label{Thm:AlmostNormal2Sphere}\nLet $\\mathcal{T}$ be a triangulation of a closed orientable 3-manifold $M$. Suppose that $\\mathcal{T}$ has a single vertex and contains no normal spheres other than the one consisting of triangles surrounding the vertex. Then $M$ is the 3-sphere if and only if $\\mathcal{T}$ contains an almost normal embedded 2-sphere.\n\\end{theorem}\n\nThe hypothesis that $\\mathcal{T}$ has a single vertex and that it has a unique normal 2-sphere may sound restrictive, but in fact one may always build such a triangulation for a closed orientable irreducible 3-manifold $M$ unless $M$ is one of three exceptional cases: the 3-sphere, $\\mathbb{RP}^3$ and the lens space $L(3,1)$. (See \\cite{Burton:Crushing} for example.)\n\nBoth directions of Theorem \\ref{Thm:AlmostNormal2Sphere} are remarkable. Suppose first that $\\mathcal{T}$ contains an almost normal 2-sphere. One of the features of an almost normal surface $S$ is that it admits an obvious isotopy that reduces the weight of $S$. This isotopy moves the surface across an edge compression disc (see (6) at the end of Section \\ref{Sec:NormalSurfaces} for the definition of an edge compression disc). When the surface contains an octagon, then there are two choices for the isotopy, one in each direction away from $S$. When the surface contains a tubed piece, then there is just one possible direction.
We then continue to apply these weight reducing isotopies, all going in the same direction. It turns out that this continues to be possible until the resulting surface is normal or is a small 2-sphere lying in a single tetrahedron (see \\cite{Schleimer} or \\cite{Mijatovic3Sphere}). In the case where we get a normal 2-sphere, then this is, by hypothesis, the boundary of a regular neighbourhood of the vertex of the triangulation. Since we only ever isotope in the same direction, the image of the isotopy is homeomorphic to $S \\times [0,1]$. Thus, we deduce that the manifold is $S \\times [0,1]$ with small 3-balls attached. Hence, it is a 3-sphere.\n\nSuppose now that $M$ is the 3-sphere. Then one removes a small regular neighbourhood of the vertex to get a 3-ball $B$. This has a natural height function $h \\colon B \\rightarrow [0,1]$, just given by distance from the origin of the ball. The key to the argument is to place the 1-skeleton of $\\mathcal{T}$ into `thin position' with respect to this height function. This notion, which was originally due to Gabai \\cite{Gabai3}, is defined as follows. We may assume that the restriction of $h$ to $\\mathcal{T}^1$ has only finitely many critical points, which are local minima or local maxima. Let $0 < x_1 < \\dots < x_n < 1$ be their values under $h$. Then the number of intersection points between $\\mathcal{T}^1$ and the sphere $h^{-1}(t)$ for $t \\in (x_i, x_{i+1})$ is some constant $c_i$. This remains true if we set $x_0 = 0$ and $x_{n+1} = 1$. We say that the 1-skeleton $\\mathcal{T}^1$ is in \\emph{thin position} if $\\sum_i c_i$ is minimised. (As a toy example, an unknotted circle whose critical values are ordered minimum, minimum, maximum, maximum has widths $c_i$ equal to $0,2,4,2,0$, with total $8$; re-ordering them to minimum, maximum, minimum, maximum gives total $4$; and isotoping to a single minimum and a single maximum, which realises thin position, gives total $2$.) It turns out that, in this situation, a value of $c_i$ that is maximal gives rise to a surface $h^{-1}(t)$ that is nearly almost normal. More specifically, there is a sequence of compressions in the complement of the 1-skeleton that takes it to an almost normal surface.\n\n\nTheorem \\ref{Thm:AlmostNormal2Sphere} was the basis for the following result of Ivanov \\cite{Ivanov} and Schleimer \\cite{Schleimer}.\n\n\\begin{theorem}\nRecognising whether a 3-manifold is the 3-sphere lies in $\\mathrm{NP}$.\n\\end{theorem}\n\nThe certificate was essentially just the almost normal 2-sphere, although they had to deal with the possible presence of normal 2-spheres also.\n\nIn the above implication that if $M$ is the 3-sphere, then it contains an almost normal 2-sphere, the fact that it is a 2-sphere plays very little role. What matters is that the 3-sphere has a `sweepout' by 2-spheres, starting with a tiny 2-sphere that encircles the vertex of $\\mathcal{T}$ and ending with a small 2-sphere surrounding some other point. Such sweepouts arise in another natural situation: when $M$ is given via a Heegaard splitting. \n\n\\begin{definition} A \\emph{compression body} $C$ is either a handlebody or is obtained from $F \\times [0,1]$, where $F$ is a (possibly disconnected) compact orientable surface, by attaching 1-handles to $F \\times \\{ 1 \\}$. The \\emph{negative boundary} is the copy of $F \\times \\{ 0 \\}$ or empty (in the case of a handlebody). The \\emph{positive boundary} is the remainder of $\\partial C$. A \\emph{Heegaard splitting} for a compact orientable 3-manifold $M$ is an expression of $M$ as a union of two compression bodies glued by a homeomorphism between their positive boundaries. The resulting \\emph{Heegaard surface} is the image of the positive boundaries in $M$.\n\\end{definition}\n\nA handlebody can be viewed as a regular neighbourhood of a graph, as follows. It is a 0-handle with 1-handles attached.
The graph is a vertex at the centre of the 0-handle together with edges, each of which runs along a core of a 1-handle. It is known as a \\emph{core} of the handlebody. Similarly, a compression body that is not a handlebody is a regular neighbourhood of its negative boundary together with some arcs that start and end on the negative boundary and run along the cores of the 1-handles. These are known as \\emph{core arcs} of the compression body. Thus, given any Heegaard splitting for $M$, there is an associated graph $\\Gamma$ in $M$, which consists of the cores of the two compression bodies. Then $M \\setminus (\\Gamma\\cup \\partial M)$ is a copy of $S \\times (-1,1)$, where $S \\times \\{ 0 \\}$ is the Heegaard surface. Thus, the projection map $S \\times (-1,1) \\rightarrow (-1,1)$ extends to a function $h \\colon M \\rightarrow [-1,1]$ and we can place $\\mathcal{T}^1$ into thin position with respect to this height function.\n\nUsing these methods, Stocking \\cite{Stocking} proved the following result, based on arguments of Rubinstein.\n\n\\begin{theorem}\nLet $\\mathcal{T}$ be a triangulation of a compact orientable 3-manifold $M$. Let $S$ be a Heegaard surface for $M$ that is strongly irreducible, in the sense that any compression disc for $S$ on one side necessarily intersects any compression disc for $S$ on the other. Suppose that it is not a Heegaard torus for the 3-sphere or the 3-ball. Then there is an ambient isotopy taking $S$ into almost normal form.\n\\end{theorem}\n\nThis alone is not enough to be able to solve many algorithmic problems about Heegaard splittings. This is because there seems to be no good way of ensuring that the almost normal surface $S$ is a bounded sum of fundamental surfaces. However, the author was able to use it to prove the following result \\cite{LackenbyHeegaard}, in combination with methods from hyperbolic geometry.\n\n\\begin{theorem}\nThere is an algorithm to determine the minimal possible genus of a Heegaard surface for a compact orientable simple 3-manifold with non-empty boundary.\n\\end{theorem}\n\n\\section{Homomorphisms to finite groups}\n\\label{Sec:Homomorphisms}\n\nIn recent years, homomorphisms from 3-manifold groups to finite groups have been used to prove some important algorithmic results. The following theorem of Kuperberg \\cite{KuperbergKnottedness}, which was proved using these techniques, still remains particularly striking.\n\n\\begin{theorem} \n\\label{Thm:KnottednessGRH}\nThe problem of deciding whether a knot in the 3-sphere is non-trivial lies in $\\mathrm{NP}$, assuming the Generalised Riemann Hypothesis.\n\\end{theorem}\n\nThis result has been superseded by Theorem \\ref{Thm:UnknotNPCoNP}, which removes the conditionality on the Generalised Riemann Hypothesis. However, the techniques that Kuperberg introduced remain important. In particular, they were used by Zentner to show that the problem of deciding whether a 3-manifold is the 3-sphere lies in co-NP, assuming the Generalised Riemann Hypothesis.
Kuperberg's argument is explained in Section \\ref{Subsec:Unknot3Sphere}.\n\n\\subsection{Residual finiteness}\n\n\\begin{definition} A group $G$ is \\emph{residually finite} if, for every $g \\in G$ other than the identity, there is a homomorphism $\\phi$ from $G$ to a finite group, such that $\\phi(g)$ is non-trivial.\n\\end{definition}\n\n\\begin{theorem} \n\\label{Thm:ResiduallyFinite3Manifold}\nAny compact orientable 3-manifold has residually finite fundamental group.\n\\end{theorem}\n\nThis was proved by Hempel \\cite{Hempel} for manifolds that are Haken, but it now applies to all compact orientable 3-manifolds, as a consequence of Hempel's work and Perelman's solution to the Geometrisation Conjecture \\cite{Perelman1, Perelman2, Perelman3}. We will discuss the proof below.\n\nResidual finiteness has long been known to have algorithmic implications. For example, when a finitely presented group $G$ is residually finite, it has solvable word problem. The argument goes as follows. Suppose we are given a finite presentation of $G$ and a word $w$ in the generators and their inverses. It is true for any group that if $w$ is the trivial element, then there is an algorithm that eventually terminates with a proof that it is trivial. For one may start enumerating all products of conjugates of the relations, and thereby start to list all words in the generators that represent the trivial element. If $w$ is trivial, it will therefore eventually appear on this list. On the other hand, if $w$ is non-trivial, then by residual finiteness, there is a homomorphism $\\phi$ to a finite group such that $\\phi(w)$ is non-trivial. Thus, one can start to enumerate all finite groups and all homomorphisms $\\phi$ from $G$ to these groups and eventually one will find a $\\phi$ such that $\\phi(w)$ is non-trivial. By running these two processes in parallel, we will eventually be able to decide whether a given word represents the identity element.\n\nTheorem \\ref{Thm:ResiduallyFinite3Manifold} is fairly straightforward for any compact hyperbolic 3-manifold $M$. In this case, the hyperbolic structure gives an injective homomorphism $\\pi_1(M) \\rightarrow \\mathrm{Isom}^+(\\mathbb{H}^3) = \\mathrm{SO}(3,1)$ and it is therefore linear. The following theorem of Malcev \\cite{Malcev} then applies.\n\n\\begin{theorem} Any finitely generated linear group is residually finite.\n\\end{theorem}\n\nIt is instructive to consider a specific example before sketching the proof of the general theorem. Consider a Bianchi group, for example $\\mathrm{PSL}(2,\\mathbb{Z}[i])$. Now $\\mathbb{Z}[i] = \\mathbb{Z}[t] \/ \\langle t^2 + 1 \\rangle$. One may form finite quotients of this ring by quotienting by the ideal $\\langle m \\rangle$ for some positive integer $m$. The result is the finite ring $\\mathbb{Z}_m[t] \/ \\langle t^2 + 1 \\rangle$. Thus, we obtain a homomorphism\n$$\\phi_m \\colon \\mathrm{PSL}(2,\\mathbb{Z}[i]) \\rightarrow \\mathrm{SL}(2,\\mathbb{Z}_m[t] \/ \\langle t^2 + 1 \\rangle) \/ \\{ \\pm I \\}.$$\nNow consider any non-trivial element $g$ of $\\mathrm{PSL}(2,\\mathbb{Z}[i])$. This is not congruent to $\\pm I$ modulo $m$ for all $m$ sufficiently large. Thus, for any such $m$, the image of $g$ under $\\phi_m$ is non-trivial. We have therefore proved that $\\mathrm{PSL}(2,\\mathbb{Z}[i])$ is residually finite.
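\n\nThe congruence argument is completely effective, and the following small sketch (in illustrative Python; it is our own and is not taken from any of the papers under discussion) makes it concrete. Gaussian integers are stored as pairs $(x,y)$ representing $x+yi$, and an element is trivial in the quotient exactly when it is congruent to $\\pm I$ modulo $m$.\n\\begin{verbatim}\n# Find a modulus m for which a given non-trivial element of PSL(2, Z[i])\n# has non-trivial image in SL(2, Z_m[t] mod (t^2+1)) mod {+I, -I}.\n\ndef reduce_mod(g, m):\n    # reduce every Gaussian-integer entry (x, y) = x + y*i modulo m\n    return [[(x % m, y % m) for (x, y) in row] for row in g]\n\ndef is_trivial_mod(g, m):\n    # trivial in the quotient group means congruent to +I or -I mod m\n    plus_i = [[(1, 0), (0, 0)], [(0, 0), (1, 0)]]\n    minus_i = [[((-1) % m, 0), (0, 0)], [(0, 0), ((-1) % m, 0)]]\n    return reduce_mod(g, m) in (reduce_mod(plus_i, m), minus_i)\n\ndef witness_modulus(g, bound=1000):\n    # any g other than +I, -I fails to be congruent to them for large m\n    for m in range(2, bound):\n        if not is_trivial_mod(g, m):\n            return m\n\n# the parabolic element with top-right entry 1 + i is detected modulo 2\ng = [[(1, 0), (1, 1)], [(0, 0), (1, 0)]]\nassert witness_modulus(g) == 2\n\\end{verbatim}\nOf course, the point of the results below is not merely that such a modulus exists, but that its size can be controlled.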
\n\nThis generalises to any finitely generated group $G$ that is linear over a field $k$, as follows. One considers a finite generating set $g_1, \\dots, g_t$ for $G$. Then $g_1^{\\pm 1}, \\dots, g_t^{\\pm 1}$ correspond to matrices. The entries of these matrices generate a ring $R$ that is a subring of $k$. The group $G$ therefore lies in $\\mathrm{GL}_n(R)$. It is possible to show that any such ring $R$ has a collection of finite index ideals $I_m$ such that $\\bigcap_m I_m = \\{ 0 \\}$. Thus, given any non-trivial element $g$ of $G$, its image in the finite group $\\mathrm{GL}_n(R\/I_m)$ is non-trivial for some $m$.\n\nThe above argument was only for compact hyperbolic manifolds $M$. More generally, any compact geometric 3-manifold has linear fundamental group. However, it is not currently known whether the fundamental group of every compact orientable 3-manifold is linear. Instead, Hempel proved his theorem by using the decomposition of a compact orientable 3-manifold $M$ along spheres and tori into geometric pieces. Each of these pieces has residually finite fundamental group. By considering carefully the finite quotients of these groups, Hempel was able to show that these homomorphisms could be chosen to be compatible along the JSJ tori. Hence, the fundamental groups of the prime summands of the manifold are residually finite. This gives the theorem because it is a general result that a free product of residually finite groups is residually finite.\n\n\\subsection{Unknot and 3-sphere recognition}\n\\label{Subsec:Unknot3Sphere}\n\nKuperberg's Theorem \\ref{Thm:KnottednessGRH} was proved by using some of the above methods in a quantified way. Let $M$ be the exterior of a non-trivial knot $K$ in the 3-sphere. Then $\\pi_1(M)$ is non-abelian. This can be proved either by appealing to the Geometrisation Conjecture or by using Theorem 9.13 in \\cite{HempelBook} that classifies the 3-manifold groups that are abelian. By the residual finiteness of $\\pi_1(M)$, there is a finite quotient of $\\pi_1(M)$ that is non-abelian. This can be seen by noting that if $g$ and $h$ are non-commuting elements of $\\pi_1(M)$, then there is some homomorphism $\\phi$ to a finite group such that $\\phi([g,h])$ is non-trivial. The image of this homomorphism is therefore non-abelian. The key claim in Kuperberg's proof is that this non-abelian finite quotient of $\\pi_1(M)$ can be chosen so that it has controlled size (as a function of the size of the given input, which might be a diagram of $K$ or a triangulation of the exterior of $K$). Thus, this quotient can be used as a certificate, verifiable in polynomial time, of the non-triviality of $K$.\n\nTo get control over the size of this finite quotient of $\\pi_1(M)$, Kuperberg uses the theory of linear groups. However, as mentioned above, it is not known that every 3-manifold group is linear; this is not known even for the exteriors of knots in the 3-sphere. But Kuperberg observed that we do not need the full strength of linearity to make the above argument work. All we need to know is that there is some non-abelian quotient of $\\pi_1(M)$ that is linear. This is provided by the following result of Kronheimer and Mrowka \\cite{KronheimerMrowka} that is proved using the theory of instantons.\n\n\\begin{theorem} \n\\label{Thm:KronheimerMrowka}\nLet $K$ be any non-trivial knot in the 3-sphere. Then there is a homomorphism $\\pi_1(S^3 \\setminus K) \\rightarrow \\mathrm{SU}(2)$ with non-abelian image.\n\\end{theorem}\n\nThen once one has this linear representation, the existence of a homomorphism to a finite group with non-abelian image is a consequence.
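\n\nA concrete and well-known illustration of such a certificate: the group of the trefoil has presentation $\\langle x, y \\mid xyx = yxy \\rangle$, and sending $x \\mapsto (1\\,2)$ and $y \\mapsto (2\\,3)$ defines a homomorphism onto the symmetric group $S_3$, since both $xyx$ and $yxy$ map to $(1\\,3)$. The image is non-abelian, so this single small quotient already certifies that the trefoil is knotted.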
Assuming the Generalised Riemann Hypothesis, one can get the following control over the size of such a finite quotient \\cite[Theorem 3.4]{KuperbergKnottedness}.\n\n\\begin{theorem} \n\\label{Thm:AffineAlgebraicQuotient}\nLet $G$ be an affine algebraic group over $\\mathbb{Z}$. Let $\\Gamma$ be a group with a presentation where the sum of the lengths of the relations is $\\ell$. Suppose that there is a homomorphism $\\Gamma \\rightarrow G(\\mathbb{C})$ with non-abelian image. Then, assuming the Generalised Riemann Hypothesis, there is a homomorphism $\\Gamma \\rightarrow G(\\mathbb{Z}\/p)$ with non-abelian image, for some prime $p$ with $\\log p$ bounded above by a polynomial function of $\\ell$.\n\\end{theorem}\n\nRather than giving the precise definition of an affine algebraic group and the terminology $G(\\mathbb{C})$ and $G(\\mathbb{Z}\/p)$, we focus on the key example of $\\mathrm{SL}(2,\\mathbb{C})$, which gives the general idea. Note that $\\mathrm{SU}(2) \\subset \\mathrm{SL}(2,\\mathbb{C})$ and hence Theorem \\ref{Thm:KronheimerMrowka} also gives a homomorphism into $\\mathrm{SL}(2,\\mathbb{C})$ with non-abelian image.\n\nNow, $\\mathrm{SL}(2,\\mathbb{C})$ is an algebraic subvariety of $\\mathbb{C}^{4}$, since the condition that a matrix has determinant one is a polynomial equation. The coefficients of the polynomial are integers. We write this group as $G(\\mathbb{C})$. Thus, for any positive integer $k$, one can define the group $G(\\mathbb{Z}\/k) = \\mathrm{SL}(2, \\mathbb{Z}\/k)$ as a subset of $(\\mathbb{Z}\/k)^{4}$ with the same defining equation.\n\nAn outline of the proof of Theorem \\ref{Thm:AffineAlgebraicQuotient} is as follows. One considers all homomorphisms $\\Gamma \\rightarrow G(\\mathbb{C})$. To define such a homomorphism, one need only specify where the generators of $\\Gamma$ are mapped and check that each of the relations map to the identity. Thus, each homomorphism determines a point in $\\mathbb{C}^{4t}$, where $t$ is the number of generators. The set of all such points is an algebraic subvariety because the relations in the group impose polynomial constraints. We are interested in homomorphisms into $G(\\mathbb{C})$ with non-abelian image, and it is in fact possible to view this subset also as an algebraic variety in $\\mathbb{C}^n$, for some $n > 4t$.\n\nBy assumption, this variety is non-empty. It is a well-known fact that any affine variety in $\\mathbb{C}^n$ defined using polynomials with integer coefficients contains a point whose coordinates are algebraic numbers. Koiran \\cite{Koiran} quantified this result by expressing such a point as\n$$(x_1, \\dots, x_{n}) = (g_1(\\alpha), \\dots, g_n(\\alpha)),$$\nwhere $g_1, \\dots, g_n$ are polynomials with integer coefficients and $\\alpha$ is a root of an irreducible integer polynomial $h$, with control over the degree and the size of the coefficients of the polynomials.\n\nIt is now that the Generalised Riemann Hypothesis is used. It implies that the polynomial $h(x)$ also has a root $r$ in $\\mathbb{Z}\/p$ for some prime $p$ with bounded size. In fact, $\\log p$ ends up being at most a polynomial function of $\\ell$, the sum of the lengths of the relations of $\\Gamma$. Thus, $(g_1(r), \\dots, g_n(r))$ is a point in $(\\mathbb{Z}\/p)^n$ that corresponds to a homomorphism $\\Gamma \\rightarrow G(\\mathbb{Z}\/p)$ with non-abelian image.
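\n\nThe final step is elementary to carry out; only the guarantee that a {\\it small} prime works requires the Generalised Riemann Hypothesis. The following toy sketch (ours, in illustrative Python, with $h(x) = x^2 + 1$ standing in for Koiran's polynomial) shows the search.\n\\begin{verbatim}\n# Search for a prime p and a root r of h modulo p.  The polynomial h is\n# given by its coefficient list: h_coeffs[i] is the coefficient of x**i.\n\ndef root_mod_p(h_coeffs, p):\n    for r in range(p):\n        if sum(c * pow(r, i, p) for i, c in enumerate(h_coeffs)) % p == 0:\n            return r\n    return None\n\ndef first_witness(h_coeffs, prime_bound=100):\n    for p in range(2, prime_bound):\n        if all(p % q for q in range(2, p)):   # p is prime (trial division)\n            r = root_mod_p(h_coeffs, p)\n            if r is not None:\n                return p, r\n\n# h(x) = x**2 + 1: the root r = 1 works modulo p = 2 (and r = 2 modulo 5)\nassert first_witness([1, 0, 1]) == (2, 1)\n\\end{verbatim}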
\n\n\\begin{proof}[Proof of Theorem \\ref{Thm:KnottednessGRH}] Let $K$ be a non-trivial knot in the 3-sphere, given via its diagram or a triangulation $\\mathcal{T}$ of its exterior. In the former case, we build a triangulation $\\mathcal{T}$ of the exterior. This triangulation can easily be used to build a presentation of $\\pi_1(S^3 \\setminus K)$ with length $\\ell$, which is at most a linear function of $|\\mathcal{T}|$. By Theorem \\ref{Thm:KronheimerMrowka}, there is a homomorphism $\\pi_1(S^3 \\setminus K) \\rightarrow \\mathrm{SL}(2, \\mathbb{C}) = G(\\mathbb{C})$ with non-abelian image. Hence, by Theorem \\ref{Thm:AffineAlgebraicQuotient}, there is a homomorphism $\\pi_1(S^3 \\setminus K) \\rightarrow G(\\mathbb{Z}\/p)$ with non-abelian image, where $\\log p$ is at most a polynomial function of $\\ell$. This homomorphism provides a certificate, verifiable in polynomial time, of the non-triviality of $K$.\n\\end{proof}\n\nExactly the same proof strategy was used by Zentner \\cite{Zentner} to show that 3-sphere recognition lies in co-NP, assuming GRH. In this case though, the major new input was the following result of Zentner.\n\n\\begin{theorem}\nLet $M$ be a homology $3$-sphere other than the 3-sphere. Then $\\pi_1(M)$ admits a homomorphism to $\\mathrm{SL}(2, \\mathbb{C})$ with non-abelian image.\n\\end{theorem}\n\nThe first step in the proof of this uses a theorem of Boileau, Rubinstein and Wang \\cite{BRW}. This asserts that such a $3$-manifold admits a degree one map onto a hyperbolic 3-manifold, a Seifert fibre space other than $S^3$ or a space obtained by gluing together the exteriors of two non-trivial knots in the 3-sphere, by identifying the meridian of each with the longitude of the other. This latter space is called the \\emph{splice} of the two knots. A degree one map between 3-manifolds induces a surjection between their fundamental groups. So it suffices to focus on the case where $M$ is one of the above three possibilities.\nWhen $M$ is hyperbolic, it is an almost direct consequence of the definition that it admits a faithful homomorphism to $\\mathrm{SL}(2, \\mathbb{C})$. When $M$ is Seifert fibred, it is not hard to find a representation into $\\mathrm{SU}(2) \\subset \\mathrm{SL}(2, \\mathbb{C})$ with non-abelian image. Thus, the difficult case is the splice of two knots. Zentner proves his theorem in this situation using instantons, as in the case of Theorem \\ref{Thm:KronheimerMrowka}.\n\n\\section{Hyperbolic structures}\n\\label{Sec:HyperbolicStructures}\n\nAs stated in Theorem \\ref{Thm:HomeoProblem}, the homeomorphism problem for compact orientable 3-manifolds is solved. All known solutions use the Geometrisation Conjecture, which asserts that any compact orientable 3-manifold has `a decomposition into geometric pieces'. The most important and ubiquitous pieces are the hyperbolic ones. Therefore in this section, we discuss the solution to the homeomorphism problem for hyperbolic 3-manifolds. The solution for general compact orientable 3-manifolds uses the techniques in this section, as well as an algorithmic construction of the decomposition of a manifold into its prime summands, and a construction of the pieces of its JSJ decomposition.
Our presentation is based on Kuperberg's paper \\cite{KuperbergAlgorithmic}.\n\n\\begin{theorem}\n\\label{Thm:HyperbolicHomeoProblem}\nThere is an algorithm that takes as its input triangulations of two closed hyperbolic $3$-manifolds and determines whether these manifolds are homeomorphic.\n\\end{theorem}\n\n\\begin{definition}\nA triangulation of a closed hyperbolic 3-manifold $M$ is \\emph{geodesic} if each simplex is totally geodesic.\n\\end{definition}\n\n\\begin{lemma}\n\\label{Lem:StraightExists}\nAny closed hyperbolic 3-manifold admits a geodesic triangulation.\n\\end{lemma}\n\n\\begin{proof} Pick a point $p$ in the manifold $M$, and let $\\tilde p$ be a point in its inverse image in $\\mathbb{H}^3$. Let $\\mathcal{P}$ be the set of points in $\\mathbb{H}^3$ that are closer to $\\tilde p$ than to any covering translate of $\\tilde p$. Its closure is a finite-sided polyhedron. The vertices, edges and faces of this polyhedron project to $0$-cells, $1$-cells and $2$-cells of a cell complex in $M$. The remainder of $M$ is a 3-cell. Now subdivide this into a triangulation. Do this by placing a vertex in the interior of each 2-cell and coning off. Then cone the 3-cell from $p$. The result is a geodesic triangulation of $M$.\n\\end{proof}\n\nAn alternative way of viewing a geodesic triangulation is as a recipe for building a hyperbolic structure on $M$. One realises each tetrahedron as the convex hull of four points in hyperbolic space that do not lie in a plane. One has to ensure that the face identifications between adjacent tetrahedra are realised by isometries, and that the angles around each edge sum to $2 \\pi$. In fact, by Poincar\\'e's polyhedron theorem \\cite{EpsteinPetronio}, these conditions are enough to specify a hyperbolic structure on $M$.\n\nThis observation is key to the proof of the following result.\n\n\n\\begin{theorem}\n\\label{Thm:FindStraight}\nThere is an algorithm that takes as its input a triangulation $\\mathcal{T}$ of some closed hyperbolic $3$-manifold $M$ and provides a sequence of Pachner moves taking it to a geodesic triangulation. Moreover, it provides the lengths of the edges in this geodesic triangulation as logarithms of algebraic numbers.\n\\end{theorem}\n\n\\begin{proof} \nWe start applying all possible Pachner moves to $\\mathcal{T}$, creating a list of triangulations of $M$. If this procedure were left to run indefinitely, it would create an infinite list of all triangulations of $M$. For each triangulation $\\mathcal{T}'$, we start to try to find a geodesic structure on it. Thus, for each tetrahedron of $\\mathcal{T}'$, we consider possible arrangements of four points in the upper-half space model for $\\mathbb{H}^3$. We only consider points with co-ordinates that are algebraic numbers. As algebraic numbers can be enumerated, one can consider all such arrangements in turn. For each such arrangement, we compute the lengths of the edges and the interior angles at the edges. For each edge length $\\ell$ and angle $\\alpha$, $e^\\ell$ and $e^{i \\alpha}$ are in fact algebraic functions of the co-ordinates of the points in $\\mathbb{H}^3$. We then check that, whenever two tetrahedra are glued along an edge, these edge lengths are the same. We also check whether the angles around each edge sum to $2 \\pi$. This is possible since these conditions are polynomial equations of the variables. (In fact, it is convenient to use polynomial inequalities here also \\cite{KuperbergAlgorithmic}.) This fact also explains why we may restrict to co-ordinates that are algebraic numbers. This is because whenever a system of polynomial equations and inequalities with integer coefficients has a real solution, then it has one with algebraic co-ordinates.
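\n\nThe search just described interleaves two infinite enumerations: the triangulations $\\mathcal{T}'$ produced by Pachner moves, and the candidate algebraic vertex arrangements for each of them. A standard dovetailing scheme guarantees that every pair is examined after finitely many steps; the following sketch (ours, in illustrative Python, with the polynomial tests abstracted as a black box) records the idea.\n\\begin{verbatim}\n# Enumerate all pairs (i, j) along anti-diagonals, where i indexes a\n# triangulation and j a candidate geodesic structure on it.\n\ndef pairs():\n    k = 0\n    while True:\n        for i in range(k + 1):\n            yield i, k - i\n        k += 1\n\ndef search(passes_tests):\n    # halts as soon as some pair satisfies the polynomial tests\n    for i, j in pairs():\n        if passes_tests(i, j):\n            return i, j\n\n# sanity check with a mock test that first succeeds at the pair (2, 3)\nassert search(lambda i, j: (i, j) == (2, 3)) == (2, 3)\n\\end{verbatim}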
\n\nThus, for each of these triangulations, if it can be realised as geodesic, then one such geodesic structure will eventually be found. Hence, this process terminates. \\end{proof}\n\nThe importance of geodesic triangulations is demonstrated in the following result.\n\n\\begin{theorem}\n\\label{Thm:PachnerBoundStraight}\nLet $\\mathcal{T}_1$ and $\\mathcal{T}_2$ be geodesic triangulations of a closed hyperbolic 3-manifold. Then there is a computable upper bound on the number of Pachner moves required to pass from $\\mathcal{T}_1$ to $\\mathcal{T}_2$, as a function of the lengths of the edges of $\\mathcal{T}_1$ and $\\mathcal{T}_2$.\n\\end{theorem}\n\n\\begin{proof} The idea is to find a common subdivision $\\mathcal{T}_3$ of $\\mathcal{T}_1$ and $\\mathcal{T}_2$, and to bound the number of Pachner moves joining $\\mathcal{T}_1$ to $\\mathcal{T}_3$ and joining $\\mathcal{T}_3$ to $\\mathcal{T}_2$. This subdivision $\\mathcal{T}_3$ is obtained by superimposing $\\mathcal{T}_1$ and $\\mathcal{T}_2$, to form a cell structure $\\mathcal{C}$, and then subdividing this to a triangulation. Before we do this, we possibly perturb $\\mathcal{T}_2$ a little, maintaining it as a geodesic triangulation, so that it is in general position with respect to $\\mathcal{T}_1$. Each 3-cell of $\\mathcal{C}$ is a component of intersection between a tetrahedron of $\\mathcal{T}_1$ and a tetrahedron of $\\mathcal{T}_2$. To obtain $\\mathcal{T}_3$, we cone off the 2-cells and the 3-cells of $\\mathcal{C}$ (as described in the proof of Lemma \\ref{Lem:StraightExists}). Now, the 3-cells come in finitely many combinatorial types, since the intersection between any two tetrahedra in $\\mathbb{H}^3$ is a polyhedron with a bounded number of faces. Thus, the number of tetrahedra in $\\mathcal{T}_3$ is controlled by the number of 3-cells in each tetrahedron of $\\mathcal{T}_1$, say. In fact, it is not hard to prove that the number of Pachner moves taking $\\mathcal{T}_1$ to $\\mathcal{T}_3$ is also controlled by this quantity. Thus, to prove the theorem, it remains to obtain an upper bound on the number of times a tetrahedron of $\\mathcal{T}_1$ and a tetrahedron of $\\mathcal{T}_2$ can intersect. The triangulations $\\mathcal{T}_1$ and $\\mathcal{T}_2$ lift to geodesic triangulations $\\tilde{\\mathcal{T}}_1$ and $\\tilde{\\mathcal{T}}_2$ of $\\mathbb{H}^3$. Any two geodesic tetrahedra in $\\mathbb{H}^3$ are either disjoint or have connected intersection. Thus, we need to control the number of tetrahedra of $\\tilde{\\mathcal{T}}_2$ that can intersect a single tetrahedron of $\\tilde{\\mathcal{T}}_1$.\n\nLet $\\Delta_1$ and $\\Delta_2$ be geodesic tetrahedra in $\\tilde{\\mathcal{T}}_1$ and $\\tilde{\\mathcal{T}}_2$, and let $\\ell_1$ and $\\ell_2$ be the maximal length of their sides. Let $p_1$ and $p_2$ be points in the interiors of $\\Delta_1$ and $\\Delta_2$. Suppose that a covering translate of $\\Delta_2$ intersects $\\Delta_1$. Then the corresponding covering translate of $p_2$ is at a distance at most $\\ell_1 + \\ell_2$ from $p_1$. Hence, the translate of $\\Delta_2$ that contains it lies within the ball of radius $\\ell_1 + 2\\ell_2$ about $p_1$. Let $V$ be the volume of this ball. Let $v$ be the volume of $\\Delta_2$, which can be determined from the edge lengths of $\\Delta_2$. Then the number of translates of $\\Delta_2$ that can intersect $\\Delta_1$ is at most $V\/v$. Hence, the number of tetrahedra in $\\mathcal{T}_3$ is at most a constant times $|{\\mathcal{T}}_1| \\, |{\\mathcal{T}}_2| (V\/v)$. \n\\end{proof}
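\n\nThis bound can be made quite explicit. The volume of a ball of radius $r$ in $\\mathbb{H}^3$ is $\\pi(\\sinh(2r) - 2r)$, so in the proof above\n$$\\frac{V}{v} = \\frac{\\pi \\left( \\sinh (2(\\ell_1 + 2\\ell_2)) - 2(\\ell_1 + 2\\ell_2) \\right)}{v},$$\nwhich grows exponentially in the edge lengths. Thus, even for geodesic triangulations, the resulting Pachner bound, while computable, is exponential in the edge lengths.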
\n\n\\begin{proof}[Proof of Theorem \\ref{Thm:HyperbolicHomeoProblem}] Let $\\mathcal{T}_1$ and $\\mathcal{T}_2$ be the given triangulations of closed hyperbolic 3-manifolds $M_1$ and $M_2$. By Theorem \\ref{Thm:FindStraight}, we can find a sequence of Pachner moves taking $\\mathcal{T}_1$ and $\\mathcal{T}_2$ to geodesic ones $\\mathcal{T}'_1$ and $\\mathcal{T}'_2$. Theorem \\ref{Thm:FindStraight} also provides the lengths of their edges. Hence, by Theorem \\ref{Thm:PachnerBoundStraight}, we have a computable upper bound on the number of Pachner moves relating $\\mathcal{T}'_1$ and $\\mathcal{T}'_2$, if $M_1$ and $M_2$ are homeomorphic. We search through all sequences of moves with at most this length. Thus, if $M_1$ and $M_2$ are homeomorphic, we will find a sequence of Pachner moves relating $\\mathcal{T}'_1$ and $\\mathcal{T}'_2$. If we do not find such a sequence, then we know that $M_1$ and $M_2$ are not homeomorphic.\n\\end{proof}\n\nIt would be very interesting to have a quantified version of Theorem \\ref{Thm:HyperbolicHomeoProblem}, which would provide an explicit upper bound on the number of Pachner moves relating any two triangulations $\\mathcal{T}_1$ and $\\mathcal{T}_2$ of a closed hyperbolic 3-manifold. Theorem \\ref{Thm:PachnerBoundStraight} provides such a bound for geodesic triangulations. However, there is no obvious upper bound on the number of Pachner moves used in the algorithm for converting a given triangulation to a geodesic one that is presented in the proof of Theorem \\ref{Thm:FindStraight}. The arguments given in \\cite{KuperbergAlgorithmic} might be useful here for general triangulations of closed hyperbolic 3-manifolds. However, any prospect of getting a bound that is a polynomial function of $|\\mathcal{T}_1|$ and $|\\mathcal{T}_2|$ still seems to be a long way off.\n\n\\bibliographystyle{amsplain}\n\n\\section{Introduction}\\label{intro}\n\nThe amazing developments of laser technologies today allow the\nproduction of very intense (hundreds of TeraWatts), coherent electromagnetic (EM) waves\nconcentrated in very short pulses (tens of femtoseconds).\nThe interaction of such laser pulses with isolated electric charges or with continuous\nmatter is characterized by such fast, huge and highly nonlinear effects\nthat traditional approximation schemes are seriously challenged. Even if the initial state of matter \nis not of plasma type, the huge kinetic energy $\\kappa$ transferred to the electrons almost immediately\nionizes matter locally into a plasma\\footnote{Each level of ionization\n(first, second,...) 
is practically complete if the associated Keldysh parameter \\\n $\\Gamma_i\\!:=\\!\\sqrt{U_i\/\\kappa}$ \\ ($U_i$ is the associated ionization potential)\n fulfills \\ $\\Gamma_i\\!\\ll\\!1$ \\cite{Puk02,JovFedTanDeNGiz12}.}\n(thereafter quantum effects are completely negligible).\nThe kinetic energies transferred to electrons and ions are also\n many orders of magnitude above the typical values of the thermal spectrum\n (even if the temperature is millions of ${}^{\\circ}$K!),\ntherefore classical relativistic Magneto-Fluid-Dynamics (MFD) at zero temperature, with\nits full nonlinearity, is a perfectly accurate framework while the pulse is passing, \nand also afterwards as long as dissipation\nhas not produced significant effects.\n\nDue to the extremely high number of electrons in a plasma, \nmoderate displacements of the electrons w.r.t. \nthe ions generate huge electric fields that may in turn lead to extreme acceleration\nof charged particles.\nUnderstanding the underlying collective effect mechanisms would be crucial for many\npurposes, from shedding light on some violent astrophysical phenomena\nto conceiving a completely new kind of particle accelerator.\nToday accelerators are used in particular for:\n\n\\begin{enumerate}\n\n\\item nuclear medicine, cancer therapy (PET, electron\/proton therapy,...);\n\\item research in structural biology;\n\\item research in materials science;\n\\item food sterilization;\n\\item research in nuclear fusion (inertial fusion);\n\\item transmutation of nuclear wastes;\n\\item research in high-energy particle physics.\n\n\\end{enumerate}\n\n\nPast and present-day acceleration technology (cyclotrons, synchrotrons, etc.) \nrelies on the interaction of radio-frequency (RF) EM waves with `few' charged \nparticles (those one wishes to accelerate) over long distances. It has been developed for purpose 7, \nbut has more recently found very important applications also for the other ones.\nBy now it is close to its structural limits.\nThe present or recent most powerful accelerators (LHC and its predecessor LEP at CERN, \nSLAC in Stanford, Tevatron at Fermilab), which accelerate(d) particles up to energies of 100 - 1000 GeV, are (or were) already very big and expensive. \nIn 2011 Tevatron closed because of budget cuts; building higher energy accelerators would be prohibitively expensive.\nMuch lower energies (100 - 300 MeV) are needed for other uses; but the size and cost of the machines still prevent their use on a large scale.\nFor instance, the CNAO center in Pavia - one of the few \ncenters for cancer treatment by hadron therapy - is based\non a 25-meter diameter synchrotron, which cost about 100 million Euro.\n\nA lot of theoretical efforts are being\nmade to conceive and construct new types of `table-top'\nplasma-based acceleration machines, at least for energies of the order \n$100\\div 1000$ MeV. A beam of electrons\nof about 200 MeV with very little energetic and angular spread has been\nproduced within a distance of a few mm through the so-called Laser Wake Field (LWF) \nmechanism \\cite{TajDaw79} in the bubble-regime \\cite{FauEtAl04}\nin experiments at the {\\it \\'Ecole Polytechnique} \\cite{MalEtAl05}. \nThe involved accelerations are thousands of times the ones\ngenerated by RF-based accelerators.\n Present theoretical research on the subject is dominated by \nnumerical resolution programs of the MFD equations \n(`particle-in-cell' simulations, etc.) or by their replacement \nwith (sometimes over-)simplified models.
This leads to\na qualitative understanding of some phenomena, at best.\nSubstantial progress in a rigorous analytical study \nof the MFD equations would be highly welcome.\n\nHere we briefly report on recent exact results \\cite{Fio13}\napplying to the differential (sect. \\ref{plane}) and equivalent integral\nequations (sect. \\ref{integral}) governing a relativistic cold plasma under the plane-wave Ansatz. \nIf the plasma, initially at rest, is reached by a transverse plane EM \ntravelling-wave, then the solution has a very simple dependence on \nthe EM potential in the limit of zero density (sect. \\ref{0densitysol}); \notherwise the zero-density solution is a good approximation of the real\none as long as the back-reaction of the charges on the EM field \ncan be neglected (i.e. for a time lapse decreasing with the plasma density), and can be \ncorrected into better and better approximations by an iterative procedure.\nIn sect. \\ref{constdensity} we sketch how to use these results\nto describe the impact of an ultra-intense and ultrashort laser pulse on a plasma\nand determine conditions under which a new phenomenon named {\\it slingshot effect} \n\\cite{FioFedDeA13}\nshould occur. The general motion of a\ncharged test particle in the above EM wave is determined in sect. \n\\ref{arbincond}. \n\nWe first fix the notation and recall the basic equations.\nWe denote as $x=(x^\\mu)=(x^0,\\bx)=(ct,\\bx)$ the spacetime coordinates\n($c$ is the light velocity), $(\\partial_\\mu)\\equiv(\\partial\/\\partial x^\\mu)=(\\partial_0,\\nabla)$,\nas $(A^\\mu)=(A^0,\\bA)$ the EM potential,\nas $F^{\\mu\\nu}=\\partial^\\mu A^\\nu-\\partial^\\nu A^\\mu$ the EM field,\nand consider a collisionless plasma composed of\n$k\\ge 2$ types of charged particles (electrons, ions).\nFor $h\\!=\\!1,...,k$ let $m_h,q_h$ be the rest mass\nand charge of the $h$-th type of particle (as usual, $-e$ the charge of electrons), \n $\\bv_h(x)$, $n_h(x)$\nrespectively the 3-velocity and the density (number of particles per unit volume)\nof the corresponding fluid element located in position $\\bx$ at time $t$. \nIt is convenient to use \ndimensionless variables like \n$$\n\\ba{l}\n \\bb_h\\!:=\\!\\bv_h\/c, \\qquad\\gamma_h\\!:=\\!1\/\\sqrt{1\\!-\\!\\bb_h^2}\\\\[6pt]\n\\mbox{4-vector velocity:}\\quad u_h=( u_h^\\mu)=(u^0_h,\\bu_h)\\!:=\\!(\\gamma_h,\\gamma_h \\bb_h)\n=\\left(\\frac {p^0_h}{m_hc^2},\\frac {{\\bf p}_h}{m_hc}\\right)\n\\ea\n$$\n(then \\\n$ u_h^\\mu u_{h\\mu}\\!=\\!1$, \\ $\\gamma_h\\!=\\!u_h^0\\!=\\!\\sqrt{1\\!+\\!\\bu_h^2}$, \\\n$\\bb_h\\!=\\!\\bu_h\/\\gamma_h$), and \n$$\n\\ba{l}\n\\mbox{4-vector current density:}\\quad (j^\\mu)=(j^0,\\bj)=\n\\left(\\sum\\limits_{h=1}^k q_h n_h,\\sum\\limits_{h=1}^k q_h n_h\\bb_h\\right)\n\\ea\n$$\n\nThe Eulerian and Lagrangian descriptions \nof an observable\nare related by\n\\be\n\\tilde f_h(x^0\\!,\\bX)=f_h\\left[x^0\\!,\\bx_h\\!\\left(x^0,\\!\\bX\\right)\\right]\\quad\\Leftrightarrow\\quad\nf_h(x^0\\!,\\bx)=\\tilde f_h\\!\\left[x^0,\\!\\bX_h(x^0\\!,\\bx)\\right],\n\\ee\nwhere $\\bx_h(x^0\\!,\\!\\bX)$ is the position at time $t$ of the $h$-th fluid element initially located\nin $\\bX$; one requires $\\bx_h\\!\\in\\!
C^1(\\mathbb{R}^4)$ and the inverse\n$\\bX_h(x^0\\!,\\cdot)\\!:\\!\\bx\\!\\mapsto\\!\\bX$ of $\\bx_h(x^0\\!,\\cdot)\\!:\\!\\bX\\mapsto \\bx$ \\ to exist.\nConservation of the\nparticles of the $h$-th fluid reads\n\\be\n\\ba{l}\n\\tilde n_h\\left\\vert\\frac {\\partial \\bx_h} {\\partial \\bX}\\right\\vert=\n\\widetilde{n_{h0}}(\\bX)\\qquad\n\\Leftrightarrow \\qquad n_h\\left\\vert\\frac {\\partial \\bX_h}{\\partial \\bx} \\right\\vert^{-1} =n_{h0} \\label{n_hg}\n\\ea\n\\ee\nand implies the continuity equation\n\\be\n\\frac {d n_h}{dx^0}\\!+\\!n_h\\nabla\\!\\cdot\\!\\bb_h=\\partial_0n_h\\!+\\!\\nabla\\!\\cdot\\!(n_h\\bb_h)\n=0; \\label{clh'}\n\\ee\nhere $\\frac {d}{dx^0}\\!\\!:=\\!\\!\\frac {d}{cdt}\\!\\!=\\!\\!\\partial_0\\!+\\!\n\\beta^l_h\\partial_l\\!=\\!\\frac{u_h^\\mu}{\\gamma_h}\\partial_\\mu$ is the\n {\\it material} derivative for the $h$-th fluid, rescaled by $c$.\nIn the CGS system Maxwell's equations\nand the (Lorentz) equations of motion of the fluids in Lorentz-covariant formulation read\n\\bea\n&&\\Box A^\\nu-\\partial^\\nu(\\partial_\\mu A^\\mu)\n=\\partial_\\mu F^{\\mu\\nu}=4\\pi j^\\nu,\\label{Maxwell}\\\\[8pt]\n&&-q_h u_{h\\mu} F^{\\mu\\nu}=m_hc^2 u_{h\\mu} \\partial^\\mu u_h^{\\nu} \n \\label{hom'}\n\\eea\n(Eulerian description); (\\ref{hom'})$_{\\nu=0}$ follows also from \ncontracting (\\ref{hom'})$_{\\nu=l}$ with $ u^l_h$, $l\\!=\\!1,2,3$. \nDividing (\\ref{hom'})$_{\\nu=l}$ by $\\gamma_h$ gives\nthe familiar 3-vector formulation of (\\ref{hom'})\n\\be\n\\ba{l}\nq_h\\left(\\bE +\\frac{\\bv_h}c \\wedge \\bB\n\\right)=\\partial_t\\Bp_h+\\bv_h \\cdot \\nabla \\Bp_h\n=\\frac {d\\Bp_h}{dt}\n\\ea\\label{hom}\n\\ee\nin terms of the electric and magnetic fields $E^l=F^{l0}=-\\partial_0A^l-\\partial_l A^0$, $B^l=-\\frac 12\n\\varepsilon^{lkn}F^{kn}=\\varepsilon^{lkn}\\partial_k A^n$.\nGiven the initial momenta $\\widetilde{\\Bp_{h0}}$ and densities $\\widetilde{n_{h0}}$ in (\\ref{n_hg}) \n the unknowns are $A^\\mu,\\bx_h, \\bu_h$ and the equations \nto be solved are (\\ref{Maxwell}-\\ref{hom'}) and \n\\be\n\\partial_0 \\bx_h( x^0\\!,\\bX)=\\bb_h\\!\\left[x^0\\!,\\bx_h({x^0}\\!,\\bX)\\right].\n \\label{lageul}\n\\ee\n\n\n\\section{Lorentz-Maxwell equations for plane waves}\n\\label{plane}\n\nWe restrict our attention to solutions such that for all $h$:\n\\bea\n&& A^\\mu, n_h, \\bu_h \\qquad \\mbox{{\\bf depend only on }} z\\!\\equiv\\!\nx^3\\!,x^0 \\qquad \\mbox{(plane wave Ansatz)}, \\label{pw}\\\\[8pt]\n&&\\!\\!\\ba{ll}\nA^\\mu(x^0\\!,z)\\!=\\!0,\\qquad \\bu_h(x^0\\!,z)\\!=\\!\\0,&\\qquad \\mbox{if }\\:\\:x^0\\!\\le\\! z,\\\\ [6pt]\n\\exists\\:\\:\\widetilde{ n_{h0}}(z) \\quad \\mbox{such that }\n\\:\\sum_{h=1}^k\\!q_h\\widetilde{ n_{h0}}\n\\!\\equiv\\!0, \\quad n_h(x^0\\!\\!,z)\\!=\\!\\widetilde{ n_{h0}}(z)&\\qquad \\mbox{if }\\:\\:x^0\\!\\le\\! z.\n\\ea\n \\qquad \\label{asyc}\n\\eea\nEqs. (\\ref{pw}-\\ref{asyc}) entail a partial gauge-fixing, imply\n$\\bB\\!=\\!\\bB^{{\\scriptscriptstyle\\perp}}\\!\\!=\\!{\\hat{\\bm z}}\\!\\wedge\\!\\partial_z\\bA\\!^{{\\scriptscriptstyle\\perp}}$, $\\bE^{{\\scriptscriptstyle\\perp}}\\!=\\!-\\partial_0\\bA\\!^{{\\scriptscriptstyle\\perp}}$,\n\\bea\n\\bE(x)\\!=\\!\\bB(x)\\!=\\!\\0,\\qquad\n\\bx_h(x)=\\bx \\qquad\\quad \\mbox{if }\\:x^0\\!\\le\\! 
z ,\n \\label{conseq'}\n\\eea\n$-\\bA\\!^{{\\scriptscriptstyle\\perp}}(x^0\\!,z)\\!=\\!\\!\\int^{x^0}_{z}\\!\\!\\!\\!d\\eta \n\\bE^{{\\scriptscriptstyle\\perp}}(\\eta,z)\\!=\\!\\!\\int^{x^0}_{-\\!\\infty }\\!\\!\\!\\!d\\eta\n\\bE^{{\\scriptscriptstyle\\perp}}(\\eta,z)$, so that \n$\\bA\\!^{{\\scriptscriptstyle\\perp}}$ becomes a {\\it physical observable},\nand the existence of the limits $n_h(-\\infty,Z)\\!=\\!\\widetilde{ n_{h0}}(Z)$, \\\n$\\bx_h(-\\infty,\\bX)\\!=\\!\\bX$. \\ \\ Hence we can adopt $-\\infty$ as the\n `initial' time in the Lagrangian description.\nThe map $\\bx_h(x^0\\!,\\cdot)\\!:\\!\\bX\\!\\mapsto\\! \\bx$ is invertible iff\n$z_h( x^0\\!,Z)$ is strictly increasing w.r.t.\n$ Z\\!\\equiv\\! X^3$ for each fixed $x^0$.\nWe shall abbreviate $Z_h\\!\\equiv\\! X^3_h$.\nEq. (\\ref{n_hg})\nbecomes\n\\bea\n&& \\tilde n_h (x^0,Z) \\partial_Z z_h( x^0,Z) =\\widetilde{n_{h0}}( Z ),\n\n\\qquad \\Leftrightarrow \\qquad\nn_h\\, = \\, n_{h0}\\, \\partial_z Z_h. \\label{n_h}\n\\eea\n$\\partial_0 Z \\!=\\!0$ in the Eulerian description gives \n$\\frac {d Z_h}{dx^0}\\!=\\! \\partial_0Z_h\\!\\!+\\!\\!\\beta_h^z \\partial_z Z_h\\!=\\!0$ \\ and by (\\ref{n_h}) \n\\be\nn_{h0} \\,\\partial_0 Z_h\\!+\\!n_h\\beta^z_h=0. \\label{j_h}\n\\ee\n\n\nAs is well known, eq. (\\ref{hom'})$_{\\nu=x,y}$ amounts to \\ $\\frac {d}{dx^0}(m_hc^2\n\\bu_h^{{\\scriptscriptstyle\\perp}}\\!+\\!q_h\\bA\\!^{{\\scriptscriptstyle\\perp}})\\!=\\!0$, \\\nwhich implies \\ $m_hc^2\\tilde{\\bu}_h^{{\\scriptscriptstyle\\perp}}\\!+\\!\nq_h\\tilde{\\bA}\\!^{{\\scriptscriptstyle\\perp}}\\!=\\!C(\\bX)$; \\ by (\\ref{asyc}) \\ $C(\\bX)\\!\\equiv\\! 0$, whence \n\\be\n\\tilde{\\bu}_h^{{\\scriptscriptstyle\\perp}}\\!= \\frac {-q_h}{m_hc^2}\\tilde{\\bA}\\!^{{\\scriptscriptstyle\\perp}}\\qquad\n\\Leftrightarrow\\qquad\\bu_h^{{\\scriptscriptstyle\\perp}}\\!= \\frac {-q_h}{m_hc^2}\\bA\\!^{{\\scriptscriptstyle\\perp}},\n \\label{hom'12}\n\\ee\nwhich explicitly gives\n$\\bu_h^{{\\scriptscriptstyle\\perp}}$ in terms of $\\bA\\!^{{\\scriptscriptstyle\\perp}}$.\n\\ Eq. (\\ref{Maxwell}) and the remaining (\\ref{hom'}) become\n\\bea\n&& (\\ref{Maxwell})_{\\nu=0}:\\qquad\\partial_z E^z=4\\pi \\sum_{h=1}^kq_hn_h\n,\\label{Maxwell0}\\\\%[8pt]\n&& (\\ref{Maxwell})_{\\nu=z}:\\qquad \\partial_0E^z=-4\\pi \\sum_{h=1}^kq_hn_h\\beta^z_h,\n\\label{Maxwell3}\\\\%[8pt]\n&& (\\ref{Maxwell})_{\\nu=x,y}:\\quad\\:\n\\left[\\partial_0^2\\!-\\!\\partial_z^2\\right]\\!\\bA\\!^{{\\scriptscriptstyle\\perp}}=\n\\underbrace{ 4\\pi\n\\sum\\limits_{h=1}^kq_hn_h\\bb^{{\\scriptscriptstyle\\perp}}_h }_{-\n\\frac{4\\pi }{c^2}\\bA\\!^{{\\scriptscriptstyle\\perp}}\\sum_{h=1}^k\n\\frac{q_h^2n_h}{m_h\\gamma_h} }\\!\\! ,\n\\qquad\\qquad \\label{Maxwell12}\\\\[4pt]\n&& (\\ref{hom'})_{\\nu=0}:\\qquad \n\\frac{d\\gamma_h}{dx^0}-\\frac{q_hu^z_h E^z}{\\gamma_h m_hc^2}-\n\\frac{q_h^2\\partial_0(\\bA\\!^{{\\scriptscriptstyle\\perp}})^2}{2\\gamma_hm_h^2c^4}=0\\label{hom'0}\\\\[4pt]\n&& (\\ref{hom'})_{\\nu=z}:\\qquad \n\\frac{du^z_h}{dx^0}-\\frac{q_h E^z}{m_hc^2}+\n\\frac{q_h^2\\partial_z(\\bA\\!^{{\\scriptscriptstyle\\perp}})^2}{2\\gamma_hm_h^2c^4}=0\\label{hom'3}\n\\eea\nThe independent unknowns in (\\ref{Maxwell0}-\\ref{hom'3}) are $\\bA\\!^{{\\scriptscriptstyle\\perp}}, u_h^z,E^z$\n(all observables).\n\n\n\\subsection{Magnetic and ponderomotive force}\n\\label{pond}\n\nThe term \\ $F\\!\\!_{hm}^{\\,\\, z}\\!:=\\!-\\partial_zq_h^2\\bA\\!^{{\\scriptscriptstyle\\perp}2}\/2\\gamma_hm_hc^2$ \n \\ in (\\ref{hom'3}) is the longitudinal magnetic part \n$q_h(\\bb_h\\! \\wedge\\!
\\bB)^z$ of the Lorentz force [cf. (\\ref{hom})]; \n\\ $U_{hm}\\!:=\\!-q_h^2\\bA\\!^{{\\scriptscriptstyle\\perp}2}\/2m_hc^2$\nacts like a `time-dependent potential energy'. For fixed $x^0$ let $\\bar z\\!<\\!x^0$ be the right extreme of \n supp$\\bA\\!^{{\\scriptscriptstyle\\perp}}$: then $\\bA\\!^{{\\scriptscriptstyle\\perp}2}(x^0\\!,z)\\!=\\!0$ for $z\\!\\ge\\!\\bar z$,\nwhereas $\\bA\\!^{{\\scriptscriptstyle\\perp}2}(x^0\\!,z)$ is positive and necessarily strictly decreasing for \n$z$ in a suitable interval $[z',\\bar z[$. Then $F\\!\\!_{hm}^{\\,\\, z}$ acts as a positive longitudinal force on {\\it all} the electric charges located at $z\\!\\in[z',\\bar z[$.\n\nIn the prototypical cases of a modulated monochromatic transverse wave\n\\be\n\\ba{ll}\n\\qquad\\qquad\\bE^{{\\scriptscriptstyle \\perp}}\\!(x^0\\!,z)\\!=\\!\\Be^{{\\scriptscriptstyle \\perp}}\\!(x^0\\!\\!-\\!\\!z),\\qquad\n&\\qquad\\Be^{{\\scriptscriptstyle \\perp}}(\\xi)\\!=\\!\\epsilon_s(\\xi)\\Be_o^{{\\scriptscriptstyle \\perp}}\\!(\\xi),\\\\[8pt]\n\\Be_o^{{\\scriptscriptstyle \\perp}}\\!(\\xi)\\!=\\! {\\hat\\bx}\\cos k\\xi, \n\\quad\\qquad\\qquad \\Be_p^{{\\scriptscriptstyle \\perp}}\\!(\\xi)\\!=\\! \n{\\hat\\bx}\\sin k\\xi &\\qquad \\mbox{(linearly polarized), or}\\\\[8pt]\n\\Be_o^{{\\scriptscriptstyle \\perp}}\\!(\\xi)\\!=\\!{\\hat\\bx}\\cos k\\xi\\!+\\!{\\hat\\by}\\sin k\\xi, \n\\quad \\Be_p^{{\\scriptscriptstyle \\perp}}\\!(\\xi)\\!=\\! -\\frac 1k\\Be_o^{{\\scriptscriptstyle \\perp}\\prime}\n&\\qquad \\mbox{(circularly polarized),}\n\\ea \\label{prototype}\n\\ee\nwith amplitude not varying significantly over $\\lambda\\!:=\\!2\\pi\\!\/\\!k$, e.g. \n$\\lambda |\\epsilon_s'\\!(\\!\\xi\\!)\\!\/\\!\\epsilon_s\\!(\\!\\xi\\!)|\\!\\le\\!\\delta\\!\\ll\\!1$ for all $\\xi$,\nthen \\ $\\bA\\!^{{\\scriptscriptstyle\\perp}}\\!(x^0\\!,z)\\!=\\! \n\\left\\{\\frac{1}{k}\\epsilon_s\\left[\\Be_p^{{\\scriptscriptstyle \\perp}}\\!+\\!O(\\delta)\\right]\\right\\}\\!(x^0\\!\\!-\\!\\!z)$. \\\nThe {\\it ponderomotive force} \\ $F\\!\\!_{hp}^{\\,\\, z}\\!:=\\! \\langle F\\!\\!_{hm}^{\\,\\, z}\\rangle$ \\\n ($ \\langle \\,\\,\\rangle$ stands for the average over a period $\\lambda$)\nplays a crucial role in the LWF acceleration and in the slingshot effect. Up to $O(\\delta)$ one finds\n\\be\n\\ba{ll}\nF\\!\\!_{hm}^{\\,\\, z}\\!=\\!\\left[\\mu_h(\\epsilon_s\\Be_p^{{\\scriptscriptstyle \\perp}}\\!)^2{}'\\right]\\!(x^0\\!\\!-\\!\\!z),\n\\quad F\\!\\!_{hp}^{\\,\\, z}\\!=\\!\\frac 12\\!\\left[\\mu_h(\\epsilon_s^2){}'\\right]\\!(x^0\\!\\!-\\!\\!z)\n\\qquad &\\mbox{lin. polarized,}\\\\[10pt]\nF\\!\\!_{hm}^{\\,\\, z}\\!=\\!\\left[\\mu_h(\\epsilon_s\\Be_p^{{\\scriptscriptstyle \\perp}}\\!)^2{}'\\right]\\!(x^0\\!\\!-\\!\\!z)\n\\!=\\!\\left[\\mu_h(\\epsilon_s^2){}'\\right]\\!(x^0\\!\\!-\\!\\!z) \\equiv F\\!\\!_{hp}^{\\,\\, z} \n\\qquad &\\mbox{circ. polarized;}\n\\ea\\ee\nhere\nwe abbreviated $\\mu_h\\!:=\\!\\lambda^2 q_h^2\/8\\pi^2\\gamma_hm_hc^2$,\nand we have used that $\\Be_p^{{\\scriptscriptstyle \\perp}2}\\!=\\!1$ for circular polarization.\n Hence the ponderomotive force \\ $F\\!\\!_{hp}^{\\,\\, z}(x^0\\!,z)$ \\\nis positive (resp. negative) for {\\it all} $h$ if $\\epsilon_s^2(\\xi)$ \nis increasing (resp. decreasing) at $\\xi\\!:=\\!x^0\\!\\!-\\!\\!z$ (fig. \\ref{F2large} - right A). 
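\nAs a concrete illustration (ours; the profile is a common idealization, and we ignore here the requirement that the envelope vanish for $\\xi\\le 0$), take a Gaussian envelope $\\epsilon_s(\\xi)\\!=\\!E_0e^{-\\xi^2\/2\\sigma^2}$ with $k\\sigma\\!\\gg\\!1$. Then, for linear polarization,\n$$F\\!\\!_{hp}^{\\,\\, z}(x^0\\!,z)=\\frac {\\mu_h}2 (\\epsilon_s^2)'(x^0\\!-\\!z)=-\\mu_h\\frac{(x^0\\!-\\!z)}{\\sigma^2}E_0^2\\,e^{-(x^0-z)^2\/\\sigma^2}:$$\nthe ponderomotive push on a charge at $z$ is forward while the front half of the pulse sweeps past it ($\\xi\\!=\\!x^0\\!-\\!z\\!<\\!0$) and backward while the rear half does ($\\xi\\!>\\!0$), with maximal magnitude at $\\xi\\!=\\!\\pm\\sigma\/\\sqrt{2}$.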
\nTherefore, while the transversal motion is oscillatory\nwith period $\\lambda$ and averages to zero, the longitudinal motion is \nruled on average by $\\epsilon_s$ on the much larger scale $l$;\nin the case of a linearly polarized wave\n the rapid spatial oscillations of $\\Be_p^{{\\scriptscriptstyle \\perp}2}$ have \nthe additional effect of modulating the densities, especially of the electrons (the lightest particles),\ninto equally spaced bunches (see fig. \\ref{F2large} - right B,C).\nMoreover, for large amplitudes \\ ($u_h\\gg 1$) \\ the direction of $\\bv_h$ is \nclose to the longitudinal one for most of the time.\n \n\n\\begin{figure}[ht]\n\\includegraphics[width=5.5cm]{EMtransverse}\n\\hfill\n\\includegraphics[width=6cm]{F2large}\n\\caption{Schematic plots of a linearly polarized transverse wave (left) and of its interaction \nwith a density wave of electrons (right).}\n\\label{F2large} \n\\end{figure}\n\n\n\\section{The zero-density solutions}\n\\label{0densitysol}\n\n\\begin{prop} \\cite{Fio13} \\\nIf $\\Ba^{{\\scriptscriptstyle \\perp}}(\\xi)\\!\\in\\! C^2(\\b{R},\\b{R}^2)$ and\n$\\Ba^{{\\scriptscriptstyle \\perp}}(\\xi)\\!=\\!0$ for \\ $\\xi\\!\\le \\!0$ \\ then\n\\be\n\\ba{lll}\n\\bA\\!^{{\\scriptscriptstyle\\perp}}(x) \\!=\\! \n\\Ba^{{\\scriptscriptstyle \\perp}}(\\xi),\\quad\\:\\: \\xi\\!\\!:=\\!x^0\\!\\!-\\!z,\\qquad &\nn_h \\!=\\!n_{h}^{{\\scriptscriptstyle (0)}} \\!:=\\!0,\\qquad \\: &\nE^z \\!=\\!E^{z{\\scriptscriptstyle (0)}}\\!\\!:=\\! 0, \\\\[8pt]\n\\bu_h^{{\\scriptscriptstyle\\perp}}(x)\\!=\\!\\bu_{h}^{{\\scriptscriptstyle \\perp(0)}}(\\xi)\\!:=\\!\n\\frac {-q_h}{m_hc^2}\\Ba^{{\\scriptscriptstyle \\perp}}(\\xi),\n\\quad \\: &\nu_h^z\\!=\\!u_{h}^{{\\scriptscriptstyle z(0)}}\n\\!\\!:=\\!\\frac 12\\bu_{h}^{{\\scriptscriptstyle \\perp(0)}}{}^2\\!, \\qquad \\:\n& \\gamma_h\\!=\\!\\gamma_{h}^{{\\scriptscriptstyle (0)}}\n\\!\\!:=1+\\!u_{h}^{{\\scriptscriptstyle z(0)}},\n \\ea \\label{n=0'+}\n\\ee\nwhich depend on $x$ only through $\\xi$, solve\n(\\ref{hom'12}-\\ref{hom'3}) and (\\ref{asyc}).\n\\label{prop1}\n\\end{prop}\n\n\\noindent Let $s_h\\!:=\\!\\gamma_h\\!-\\! u^z_h$. The difference of eqs. (\\ref{hom'0}-\\ref{hom'3}) \ngives the equivalent equation\n\\be\n \\frac {ds_h}{d{x^0}}=\\frac {q_h^2}\n{2m_h^2c^4\\gamma_h} \\left(\\partial_0+ \\partial_z\\!\\right)\\bA\\!^{{\\scriptscriptstyle\\perp}}{}^2\n\\!-\\!\\frac{s_h}{\\gamma_h}\\frac{q_h E^z}{m_hc^2} \\label{bla'}\n\\ee\nThe assumptions imply $\\frac {ds_h}{d{x^0}}\\!=\\!0$, \\ whence \\ $s_h\\!\\equiv\\!1$, \\\nwhich is the main step of the proof.\nEqs. (\\ref{n=0'+}) give travelling-waves determined solely by the assigned\n$\\Ba^{{\\scriptscriptstyle \\perp}}$ and moving in the \n$\\hat z$ direction with phase velocity equal to $c$;\nthey make up a weak solution if $\\Ba^{{\\scriptscriptstyle \\perp}}$\nis less regular, e.g. $\\Ba^{{\\scriptscriptstyle \\perp}}(\\xi)\\!\\in\\! C(\\b{R},\\b{R}^2)$\nand $\\Ba^{{\\scriptscriptstyle \\perp\\prime}}\\!=\\!\\Be^{{\\scriptscriptstyle \\perp}}$ is continuous \nexcept in a finite number of points of finite discontinuities (e.g. 
at the wavefront).\nAt no time can any particle move in the negative $z$-direction, because $u_{h}^{{\\scriptscriptstyle z(0)}},\\beta_{h}^{{\\scriptscriptstyle z(0)}}$ are non-negative; the latter are the result of the\nacceleration by the $z$-component $F\\!\\!_{hm}^{\\,\\, z}$ of the magnetic force.\n\nWe introduce the following primitives of \\ $\\bu_h^{{\\scriptscriptstyle (0)}},\\gamma_h^{{\\scriptscriptstyle (0)}}$:\n\\be\n\\bY\\!_h(\\xi)\\!:=\\!\\int^\\xi_0\\!\\!\\!\\! d\\xi' \\,\\bu_h^{{\\scriptscriptstyle (0)}}\\!(\\xi'),\n\\qquad\\quad\\Xi_h(\\xi)\\!:=\\!\\int^\\xi_0\\!\\!\\!\\! d\\xi'\\, \\gamma_h^{{\\scriptscriptstyle (0)}}(\\xi')\\!=\\!\n\\xi \\!+\\! Y^3_h(\\xi); \\label{defYXi}\n\\ee\nAs $u_{h}^{{\\scriptscriptstyle z(0)}}\\!\\ge\\! 0$, $Y^3_h(\\xi)$ is increasing, \\\n$\\Xi_h(\\xi)$ \\ is strictly increasing and invertible.\n \n\\begin{prop} \\cite{Fio13} \\ Choosing $\\bu_h\\!\\equiv\\!\\bu_h^{{\\scriptscriptstyle (0)}}$,\nthe solution \\ $\\bx^{{\\scriptscriptstyle (0)}}_{h}(x^0,\\bX)$ \\ of the ODE\n(\\ref{lageul}) with the initial condition \\\n$\\bx_h(x^0,\\bX)\\!=\\!\\bX {}$ \\ for $x^0\\!\\le\\! Z$,\nand (for fixed $x^0$) its inverse \\ $\\bX^{{\\scriptscriptstyle (0)}}_{h}(\\bx,x^0)$ \\ are given by:\n\\be\n\\ba{l}\nz^{{\\scriptscriptstyle (0)}}_{h}\\!(x^0\\!,Z)=x^0\\!-\\!\\Xi_h^{-1}\\!\n\\left( x^0\\!-\\! Z \\right),\\\\[6pt]\n Z^{{\\scriptscriptstyle (0)}}_{h}\\!(x^0\\!,z)=x^0\\!-\\!\\Xi_h\\!\n\\left( x^0\\!-\\! z \\right)=z\\!-\\! Y^3_h(x^0\\!-\\! z),\\\\[6pt]\n\\bx^{{\\scriptscriptstyle \\perp(0)}}_{h}(x^0,\\bX )=\\bX^{{\\scriptscriptstyle \\perp}}\n\\!+\\!\\bY^{{\\scriptscriptstyle \\perp}}_h\\!\\left[x^0\\!-\\!\nz^{{\\scriptscriptstyle (0)}}_{h}\\!(x^0\\!,Z )\\right],\\\\[6pt]\n\\bX^{{\\scriptscriptstyle \\perp(0)}}_{h}(x^0\\!,\\bx)=\n\\bx^{{\\scriptscriptstyle \\perp}} \\!-\\!\n\\bY\\!^{{\\scriptscriptstyle \\perp}}_h\\!\\left(x^0\\!-\\!z\\right).\n\\ea \\label{hatxtxp}\n\\ee\nThese functions fulfill \n\\be\n\\partial_0 Z ^{{\\scriptscriptstyle (0)}}_h \\!=\\!-u_h^{{\\scriptscriptstyle z(0)}}\\!,\n\\quad\\partial_z Z ^{{\\scriptscriptstyle (0)}}_h \\!=\\!\\gamma_h^{{\\scriptscriptstyle (0)}}\\!\n,\\quad\\partial_Z z^{{\\scriptscriptstyle (0)}}_h\n\\!=\\!\\frac1 {\\widetilde{\\gamma_h}^{{\\scriptscriptstyle (0)}}}\\!,\\quad\n \\partial_Z \\bx^{{\\scriptscriptstyle \\perp(0)}}_h\\!=\\!- \n\\tbb_{h}^{{\\scriptscriptstyle \\perp(0)}}\\!. \\label{hatdtzz'}\n\\ee\n\\label{prop2}\n\\end{prop}\nFrom (\\ref{hatxtxp}) it follows that the longitudinal displacement\nof the $h$-th type of particles w.r.t. their initial position $\\bX$ at time $x^0$ is\n\\be\n\\Delta z_h^{{\\scriptscriptstyle (0)}}\\! ( x^0\\!,\\! Z ) :=\nz_h^{{\\scriptscriptstyle (0)}}\\! ( x^0\\!,\\! Z )\\!-\\!Z\\,=\\, Y^3_h\\!\\left[\\Xi_h^{-1}\\!\n\\left( x^0\\!-\\! Z \\right)\\right]. \\label{displace}\n\\ee\nBy (\\ref{n=0'+}) the evolution of $\\bA^{{\\scriptscriptstyle \\perp}}$ amounts to a translation\nof the graph of $\\Ba^{{\\scriptscriptstyle \\perp}}$.\nIts value $\\check \\Ba^{{\\scriptscriptstyle \\perp}}\\!:=\\!\\Ba^{{\\scriptscriptstyle \\perp}}(\\check \\xi)$ \nat some point $\\check \\xi$ \nreaches the particles initially located in $Z$ at the time $\\check x_h^0(\\check \\xi,Z)$ such that \n\\be\n\\check x_h^0\\!-\\!\\check \\xi=z_h^{{\\scriptscriptstyle (0)}}\\! (\\check x^0\\!,\\! Z )\n\\!\\stackrel{(\\ref{hatxtxp})_1}{=}\\!\\check x_h^0\\!-\\!\\Xi_h^{-1}\\! \\left[\\check x_h^0\\!-\\! 
Z \\right] \n\\qquad\\Leftrightarrow \\qquad\n\\check x_h^0(\\check \\xi,Z)\\!=\\!\\Xi_h(\\check \\xi)\\!+\\! Z, \\label{chain}\n\\ee\nin the position $z_h^{{\\scriptscriptstyle (0)}} (\\check x^0\\!, Z )=\n\\Xi_h(\\check \\xi)\\!+\\! Z\\!-\\!\\check \\xi =Y^3_h(\\check \\xi)\\!+\\! Z $. The corresponding displacement \nof these particles is independent of $Z$ and equal to\n\\be\n\\zeta_h=\n\\Delta z_h^{{\\scriptscriptstyle (0)}}\\! \\left[\\check x_h^0 (\\check \\xi,Z), Z \\right]=Y^3_h(\\check \\xi)\n\\label{checkzeta}\n\\ee\n\n\n\n\\section{Motion of test particles with arbitrary initial conditions}\n\\label{arbincond}\n\n\nEq. (\\ref{hatxtxp}) describes also the motion of a {\\it single} test particle of \ncharge $q_h$ and mass $m_h$ starting from position $\\bX$ with velocity \n$\\0$ at sufficiently early time, i.e. before the EM wave arrives.\nThese results for a single test particle can be obtained also by solving \nthe Hamilton-Jacobi equation \\cite{LanLif62}.\nLet now \\ $\\bE\\!^{{\\scriptscriptstyle\\perp}}(x)\\!=\\!\\Be\\!^{{\\scriptscriptstyle\\perp}}( x^0\\!-\\! \\bx\\cdot\\bee)$, \\\n$\\bB\\!^{{\\scriptscriptstyle\\perp}}\\!=\\!\\bee\\!\\wedge\\!\\bE\\!^{{\\scriptscriptstyle\\perp}}$ \n($\\bee$ is the unit vector of the direction of propagation of the wave)\nbe an arbitrary free transverse plane EM travelling-wave \n(we {\\it no longer} require $\\bE\\!^{{\\scriptscriptstyle\\perp}}, \\bB\\!^{{\\scriptscriptstyle\\perp}}$ to vanish for $x^0\\!-\\! \\bx\\cdot\\bee\\!<\\!0$).\nThe {\\it general solution} $\\bx_h(x^0)$ of the Cauchy problem (\\ref{hom}-\\ref{lageul}) \nwith initial conditions \\ $\\bx_h(0)\\!=\\!\\bx_0$, $\\frac{d\\bx_h}{dx^0}(0)\\!=\\!\\bb_0$ \\\nunder the action of such an EM travelling-wave can be now determined by redution to the previous one as follows. \nOne can do a Poincar\\'e transformation $P=TRB$\nto a new reference frame $\\underline\\F$ where the initial velocity and position are zero\n and the wave propagates in the positive $z$ direction: one first\nfinds the boost $B$ from the initial reference frame $\\F$ to a new one $\\F'$ where $\\bb_0'\\!=\\!\\0$\n(this maps the transverse plane electromagnetic wave into a new one), then a rotation\n$R$ to a reference frame $\\F''$ where the \nplane wave propagates in the positive $z$-direction,\nfinally the translation $T$ to the reference frame $\\underline\\F$ where also $\\underline\\bx_0\\!=\\!\\0$.\nNaming $\\underline x^\\mu$ the spacetime coordinates and $\\underline A^\\mu,\\underline F^{\\mu\\nu},...$ \nthe fields w.r.t. $\\underline \\F$, it is \n $\\frac{d\\underline\\bx_h}{d\\underline x^0}(0)\\!=\\!\\0$, $\\underline\\bx_h(0)\\!=\\!\\0$,\nand $\\underline\\bE\\!^{{\\scriptscriptstyle\\perp}}(x)\\!=\\!\\underline\\Be\\!^{{\\scriptscriptstyle\\perp}}(\\underline x^0{}\\!-\\!\\underline z)$, \\ \n$\\underline\\bB\\!^{{\\scriptscriptstyle\\perp}}\\!=\\!\\hat{{\\underline{\\bm z}}}\\!\\wedge\\!\\underline\\bE\\!^{{\\scriptscriptstyle\\perp}}$.\nSince the part of the EM which is already at the right of the particle at $\\underline x^0\\!=\\!0$ will not \ncome in contact with the particle nor affect its motion,\nthe solution $\\underline x_h(\\underline x^0)$ of the Cauchy problem w.r.t. 
\n$\\underline\\F$ does not change if we replace \n$\\underline\\Be^{{\\scriptscriptstyle\\perp}}\\!(\\underline x^0\\!-\\!\\underline z)$ by the `cut' counterpart\n$\\Be_{{\\scriptscriptstyle\\theta}}^{{\\scriptscriptstyle\\perp}}\\!(\\underline x^0\\!-\\!\\underline z)\\!:=\\!\n\\underline\\Be^{{\\scriptscriptstyle\\perp}}\\!(\\underline x^0\\!-\\!\\underline z)\\,\\theta(\\underline x^0\\!-\\!\\underline z)$\n($\\theta$ stands for the Heaviside step function), see fig. \\ref{CutEPlot}. Clearly \n$\\Be^{{\\scriptscriptstyle\\perp}}_{{\\scriptscriptstyle\\theta}}(\\xi)$ and \n$\\Ba\\!^{{\\scriptscriptstyle\\perp}}_{{\\,\\scriptscriptstyle\\theta}}(\\xi)\\!:=\\!\\!\\int^\\xi_0\\!\\! d\\xi' \\,\\Be_{{\\scriptscriptstyle\\theta}}^{{\\scriptscriptstyle\\perp}}\\!(\\xi')$\nfulfill $\\Be_{{\\scriptscriptstyle\\theta}}^{{\\scriptscriptstyle\\perp}}(\\xi)\\!=\\!\\Ba\\!_{{\\,\\scriptscriptstyle\\theta}}^{{\\scriptscriptstyle\\perp}}(\\xi)\\!=\\!\\0$\nif $\\xi\\!\\le\\!0$; therefore, denoting as $\\bu^{{\\scriptscriptstyle (0)}}_{h{\\scriptscriptstyle\\theta}}(\\xi),\\bx^{{\\scriptscriptstyle (0)}}_{h{\\scriptscriptstyle\\theta}}(\\underline x^0\\!,\\bX ),...$ the functions of the previous section obtained choosing \n $\\Ba\\!^{{\\scriptscriptstyle\\perp}}(\\xi)\\equiv\\Ba\\!_{{\\,\\scriptscriptstyle\\theta}}^{{\\scriptscriptstyle\\perp}}(\\xi)$, we find\n $\\underline \\bx_h(\\underline x^0)\\!=\\!\\bx^{{\\scriptscriptstyle (0)}}_{h{\\scriptscriptstyle\\theta}}(\\underline x^0\\!,\\0 )$.\nThe solution in $\\F$ is finally obtained applying the inverse Poincar\\'e transformation $P^{-1}$ \nto $\\underline \\bx_h(\\underline x^0)$.\n\\begin{figure}[ht]\n\\includegraphics[width=6cm]{CutEPlot1}\\hfill\n\\includegraphics[width=6cm]{CutEPlot2}\n\\caption{The original $\\underline\\bE\\!^{{\\scriptscriptstyle\\perp}}$ (left) and its 'cut' counterpart (right)\nas functions of $\\underline z$ at $\\underline x^0=0$.}\n\\label{CutEPlot} \n\\end{figure}\n\n\n\\section{Integral equations for plane waves}\n\\label{integral}\n\nWe reformulate the PDE's as integral equations. Using (\\ref{n_h}-\\ref{j_h}) \none proves \n\n\\begin{prop} \\cite{Fio13} \\ For any $\\bar Z\\!\\in\\!\\b{R}$ eq.\n(\\ref{Maxwell0}-\\ref{Maxwell3}) and (\\ref{asyc}) are solved by\n\\be\n E^{{\\scriptscriptstyle z}}(x^0,z)=4\\pi \\sum\\limits_{h=1}^kq_h\n\\widetilde{N}_h[ Z_h(x^0\\!,z)], \\qquad\n\\qquad\\widetilde{N}_h(Z):=\\int^{Z}_{\\bar Z}\\!\\!\\! d Z'\\,\\widetilde{n_{h0}}(Z');\n\\label{expl}\n\\ee\nthe neutrality condition (\\ref{asyc})$_3$ implies $\\sum\\limits_{h=1}^kq_h \\widetilde{N}_h(Z)\\equiv 0$.\n\\end{prop}\nFormula (\\ref{expl}) gives the solution of (\\ref{Maxwell0}-\\ref{Maxwell3})\nexplicitly in terms of the initial densities, up to determination of the functions $ Z_h(x^0\\!,z)$.\n\nUsing the Green function \\ $G(x^0,z)\\!=\\!\\frac 12 \\theta(\\xi) \\theta(\\xim)\\!=\\!\\frac 12 \\theta(x^0\\!-\\!|z|)$ \\ of the d'Alembertian $\\partial_0^2\\!-\\!\\partial_z^2=4\\partial_+\\partial_-$ \\ \n[$\\theta$ is the Heaviside step function, \\ $\\xim\\!\\!:=\\!x^0\\!\\!+\\!z$, $\\partial_-\\!:=\\!\\partial\/\\partial_{\\xim}\\!=\\!\\frac 12(\\partial_0\\!+\\!\\partial_z)$], we can rewrite eq. (\\ref{Maxwell12}) with initial\nconditions at $x^0\\!=\\!X^0$ as the integral equation\n\\be\n\\ba{l}\n\\bA\\!^{{\\scriptscriptstyle\\perp}}(x^0\\!,z)-\n\\bA\\!_{{\\scriptscriptstyle f}}^{{\\scriptscriptstyle\\perp}}(x^0,z)=-\\!\\!\n\\displaystyle\\int_{D\\!^{X^0}_x}\\!\\!\\!\\!\\!\\! 
d^2x'\\,\n2\\pi \\sum\\limits_{h=1}^kq_h\\left[n_h\\bb^{{\\scriptscriptstyle\\perp}}_h\n\\right]\\!\\!({x^0}'\\!, z') \n\\ea \\label{inteq1}\n\\ee\nwhere $\\bA\\!_{{\\scriptscriptstyle f}}^{{\\scriptscriptstyle\\perp}}(x^0\\!,z)\\!=\\!\\Ba\\!^{{\\scriptscriptstyle\\perp}}\\!(x^0\\!\\!-\\!z)+\\Ba\\!_{{\\scriptscriptstyle -}}^{{\\scriptscriptstyle\\perp}}\\!(x^0\\!\\!+\\!z)$\nis determined by the initial conditions,\n\\bea\n \\ba{l}\nD\\!^{X^0}_x\\!\\!:=\\!\\{ x'\\:|\\: X^0 \\!\\!\\le\\! {x^0}'\\! \\!\\le\\! x^0\\!,\n\\, |z\\!-\\!z'|\\!\\le\\!{x^0}\\!\\!-\\!{x^0}'\\! \\}=\\{ x'\\:|\\: 2 X^0\\! \\!\\le\\!\\xi'\\!\\!+\\!\\xim'\\!,\n\\, \\xi'\\!\\le\\! \\xi, \\, \\xim'\\!\\!\\le\\! \\xim\\}. \n\\ea \\nonumber\n\\eea\nIf beside (\\ref{asyc}) {\\bf we assume \\ $n_{h}(0,z)\\!=\\!0$ for $z\\!<\\!0$}, \\\n for $x^0\\!\\le\\! 0$ the EM wave is free and \n$\\bA\\!^{{\\scriptscriptstyle\\perp}}$ is of the form\n$\\bA\\!^{{\\scriptscriptstyle\\perp}}( x^0 ,z)\\equiv \\Ba\\!^{{\\scriptscriptstyle\\perp}}( x^0 \\!-\\!z)$,\nwith $\\Ba\\!^{{\\scriptscriptstyle\\perp}}(\\xi)\\!=\\!0$ for $\\xi\\!\\le\\! 0$. \nThen in (\\ref{inteq1}) we may choose\n$X^0\\!=\\!0$ [hence\n$\\widetilde{n_{h0}}(Z)\\!\\equiv\\! n_{h}(0,\\!Z)$] and set\n$\\bA\\!_{{\\scriptscriptstyle f}}^{{\\scriptscriptstyle\\perp}}(x^0,z)\n\\!=\\!\\Ba\\!^{{\\scriptscriptstyle\\perp}}( x^0 \\!-\\!z)$.\n\n\\noindent\nIn the Lagrangian description (\\ref{bla'}) reads \n$\\tilde\\gamma_h\\partial_0 \\tilde s_h=\\tilde s_h\\tilde\\varepsilon^z_h\\!+\\!\n \\widetilde{\\partial_-\\bu_h^{{\\scriptscriptstyle\\perp}}{}^2}$;\nthe Cauchy problem with initial condition $\\tilde s_{h0}\\!\\equiv\\! 1$ is\nequivalent to the integral equation\n\\be\n\\tilde s_h = e^{\\int\\limits^{x^0}_{ 0 }\\!\\!d\\eta \\,\\tilde \\mu_h\\!(\\eta, Z )}\n\\!\\!\\!+\\!\\!\\displaystyle\\int\\limits^{x^0}_{ 0 }\\!\\!\nd\\eta\\,e^{\\int\\limits^{x^0}_{\\eta}\\!\\!d\\eta'\\tilde \\mu_h\\!(\\eta', Z )} \\left[\\frac { \\widetilde{\\partial_-\\bu_h^{{\\scriptscriptstyle\\perp}}{}^2}}{\\tilde\\gamma_h}\\right]\\! (\\eta, Z ),\n\\qquad \\tilde \\mu_h\\!:=\\!\\frac {-q_h \\tilde E^z}{m_hc^2\\tilde\\gamma_h}. \n\\label{inteq2}\n\\ee\n$u_h^z,\\gamma_h,\\bb_h^{{\\scriptscriptstyle\\perp}},\\beta_h^z$ can be recovered from $s_h,\\bu_h^{{\\scriptscriptstyle\\perp}}$ through the formulae\n\\be\n\\ba{ll}\n\\displaystyle\\gamma_h\\!=\\!\\frac {1\\!+\\!\\bu_h^{{\\scriptscriptstyle\\perp}}{}^2\\!\\!+\\!s_h^2}{2s_h}, \\qquad\\qquad & \\displaystyle\\bb_h^{{\\scriptscriptstyle\\perp}}\\!=\\! \\frac{\\bu_h^{{\\scriptscriptstyle\\perp}}}{\\gamma_h} \\!=\\!\\frac{2s_h\\bu_h^{{\\scriptscriptstyle\\perp}}}\n {1\\!+\\!\\bu_h^{{\\scriptscriptstyle\\perp}2}\\!\\!+\\!s_h^2 },\\\\[18pt]\n\\displaystyle u_h^z\\!=\\!\\frac {1\\!+\\!\\bu_h^{{\\scriptscriptstyle\\perp}}{}^2\\!\\!-\\!s_h^2}{2s_h}, \n \\qquad\\qquad & \\displaystyle\n\\beta_h^z\\!=\\! \\frac{u_h^z}{\\gamma_h}\\!=\\!\\frac{1\\!+\\!\\bu_h^{{\\scriptscriptstyle\\perp}2}\\!\n\\!-\\!s_h^2 } {1\\!+\\!\\bu_h^{{\\scriptscriptstyle\\perp}2}\\!\\!+\\!s_h^2 }. \\label{u_hs_h}\n\\ea\n\\ee\nThe Cauchy problem (\\ref{lageul}) with initial condition \\ $\\bx_{h}(0,\\bX)\\!=\\!\\bX$ \\\nis equivalent to the integral equations\n\\be\n\\ba{l}\n\\Delta z_e(x^0,Z):=z_h(x^0\\!,Z)\\!-\\! Z \\!=\\!\\! \\displaystyle\\int\\limits^{x^0}_{0}\\!\\!\nd\\eta\\, \\beta_h^z[\\eta,\\! z_h(\\eta,\\!Z)],\\\\[8pt]\n\\bx^{{\\scriptscriptstyle \\perp}}_{h}(x^0\\!,\\!\\bX )\\!-\\!\\bX^{{\\scriptscriptstyle \\perp}}\n\\!=\\!\\! 
\\displaystyle\\int\\limits^{x^0}_{0}\\!\\!\nd\\eta\\, \\bb_h^{{\\scriptscriptstyle \\perp}}[\\eta,\\! z_h(\\eta,\\!Z)]\n\\ea\\qquad \\label{inteq0}\n\\ee\n\nSummarizing, making use of (\\ref{hom'12}), (\\ref{n_h}), (\\ref{expl}),\n(\\ref{u_hs_h}) the evolution of the system is determined by\nsolving the system of integral equations (\\ref{inteq1}),\n (\\ref{inteq2}), (\\ref{inteq0})$_1$ in the unknowns $\\bA\\!^{{\\scriptscriptstyle\\perp}}, s_h, z_h$\n[note that, once this is solved, (\\ref{inteq0})$_2$ becomes known].\nIt is natural to try an iterative resolution of the system \n(\\ref{inteq1})+(\\ref{inteq2})+(\\ref{inteq0})$_1$ within the general approach of the fixed point \ntheorem: replacing \nthe approximation after $k$ steps [which we will distinguish by the superscript $(k)$]\nat the right-hand side (rhs) of these equations we will\nobtain at the left-hand side (lhs) the approximation after $k\\!+\\!1$ steps \\cite{Fioetal}. \nIf we are interested in solving the system for a short time interval after the beginning of\nthe interaction between the EM waves and the plasma,\nand\/or the initial densities are not very high, a convenient starting (0-th) step is the\nzero-density solution $(\\bu^{{\\scriptscriptstyle\\perp}}_e,\n\\tilde s_e,z_e)=(\\bu^{{\\scriptscriptstyle\\perp(0)}}_e,1,z_e^{{\\scriptscriptstyle(0)}})$. \nIn next section we sketch the next approximation under some simplifying assumptions.\n\n\\section{Short pulse against a step-density plasma: the slingshot effect}\n\\label{constdensity}\n\nHenceforth we stick to such small $x^0$ (small times after the beginning of the interaction)\nthat the motion of ions can be neglected (ions respond much more slowly\nthan electrons because of their much larger mass). \nWe formalize this by considering ions as infinitely massive, so that they remain \nat rest [$Z_h(x^0\\!,\\! z)\\!\\equiv\\! z$ for $h\\!\\neq\\! e$], \nhave constant densities, and their contribution to rhs(\\ref{Maxwell12})\n disappears; only electrons contribute:\nrhs(\\ref{Maxwell12})$=\\! 2\\pi e n_e\\bb^{{\\scriptscriptstyle\\perp}}_e$. \nMoreover we assume that\n$\\widetilde{n_{h0}}( Z )$ are not only zero for $Z\\!<\\!0$ but also constant for $Z\\!>\\!0$:\n $\\widetilde{n_{e0}}( Z )\\!=\\!n_0\\theta(Z)$, etc. (as depicted in fig. \\ref{Plots}-left), where\n$n_0$ is the initial electron and proton density.\nChoosing $\\bar Z\\!=\\!0$ \\ in (\\ref{expl}) we find\n\\bea\n E^{{\\scriptscriptstyle z}}\\!(x^0\\!,z)\\!=\\!4\\pi \\sum\\limits_{h=1}^k\\! q_h\n\\widetilde{N}_h[ Z_h(x^0\\!,z)]\\!=\\! 4\\pi e n_0\\left\\{\nz\\,\\theta(z)\\!-\\! Z_e(x^0\\!, z)\\,\\theta[ Z_e(x^0\\!, z)]\\right\\}\\!\\!. \\qquad \\label{elField}\n\\eea\nIf $z,Z\\!>\\!0$ this reduces to the known result (see e.g. \\cite{Daw51,AkhPol67}) \nthat at $x^0$ the electric force acting on the electrons initally located\nin $Z$ is of the harmonic type \n\\ $\\widetilde{F}_e^{{\\scriptscriptstyle z}}(x^0\\!,Z)\\!=\\!- 4\\pi n_0 e^2\n\\Delta z_e(x^0\\!, Z)$, \\ i.e. proportional to their displacement \\ \n$\\Delta z_e(x^0\\!, Z)\\!:=\\! z_e(x^0\\!, Z)\\!-\\! Z$ \\ w.r.t. their initial position.\n\nThe corresponding first corrected approximation reads \\cite{Fio13}:\n\\bea\n&& \\bu_e^{{\\scriptscriptstyle\\perp(1)}}(x^0\\!,z)-\\bu_e^{{\\scriptscriptstyle\\perp(0)}}(x^0\\!\\!-\\!z)\n=-\\frac{2\\pi e^2}{m_ec^2} \\!\\!\\displaystyle\\int_{D\\!^0_{x}}\\!\\!\\!\\! 
d^2x'\\:\n\\left[n_e^{{\\scriptscriptstyle(0)}}\\bb^{{\\scriptscriptstyle\\perp(0)}}_e \\right]\\!\\!(x'), \\nn \n&& \\tilde s_e^{{\\scriptscriptstyle(1)}} =e^{\\tilde r_e^{{\\scriptscriptstyle(0)}}}, \\label{appr1}\\\\\n&& \\Delta z_e^{{\\scriptscriptstyle(1)}}(x^0,Z)= \\!\\! \\displaystyle\\int\\limits^{x^0}_{0}\\!\\!\nd\\eta\\,\\tilde \\beta_e^{{\\scriptscriptstyle z(1)}}(\\eta, Z ).\\nonumber\n\\eea\nHere the change from the Eulerian to the Lagrangian description (represented by the tilde)\nis performed approximating $\\bx_e$ by $\\bx_e^{{\\scriptscriptstyle(0)}}$; in the second line \nthe integral corresponding to the second term of (\\ref{inteq2}) does not appear because\n$ \\partial\\!_{{\\scriptscriptstyle -}}\\bu_e^{{\\scriptscriptstyle\\perp(0)}2}\\!=\\!0$, and we have \nabbreviated\n\\bea\n\\tilde r_e^{{\\scriptscriptstyle(0)}}(x^0\\!,\\! Z ):=4K\\!\\!\\int^{x^0}_{Z}\\!\\!d\\eta\\frac {z_e^{{\\scriptscriptstyle(0)}}\\theta\\big[z_e^{{\\scriptscriptstyle(0)}}\\big]\\!-\\! Z\\theta(Z)}\n{\\tilde\\gamma_e^{{\\scriptscriptstyle(0)}}}(\\eta,\\! Z ) =\n4K\\,V_e^3\\!\\left[\\Xi_e^{-1}\\!( x^0\\!\\!-\\! Z)\\right],\\nn\n\\mbox{where }\\qquad K\\!:=\\!\\frac{\\pi e^2 n_0}{m_ec^2},\\qquad V_e^3(\\xi):=\\!\n\\displaystyle\\int\\limits^{\\xi}_0\\!\\! dy\\, Y^3_e(y)\\quad [Y^3_e\\mbox{ defined in } (\\ref{defYXi})]. \\nonumber \n\\eea\nIf the EM wave is (\\ref{prototype}) with \\ $\\lambda |\\epsilon_s'\/\\epsilon_s|\\!\\le\\!\\delta\\!\\ll\\!1$, \\ \nsetting \\ $w\\!:=\\!\\frac{ e}{km_ec^2}\\epsilon_s$ \\ one finds\n$$\n\\Ba_e^{{\\scriptscriptstyle \\perp}}\\!\\simeq\\! \\frac{1}{k}\\epsilon_s\\Be_p^{{\\scriptscriptstyle \\perp}}\\!,\\qquad\n\\bu_e^{{\\scriptscriptstyle \\perp(0)}}\\!\\simeq\\! w\\Be_p^{{\\scriptscriptstyle \\perp}}\\!,\\qquad\n\\bY_e\\!^{{\\scriptscriptstyle \\perp}}\\!\\!\\simeq\\! -\\frac 1k w\\Be_o^{{\\scriptscriptstyle \\perp}}\\!,\\qquad\nu_e^{{\\scriptscriptstyle z(0)}}\\!\\simeq\\! \\frac 12 w^2\\Be_p^{{\\scriptscriptstyle \\perp 2}},\n$$\nwhere \\ $a\\!\\simeq\\!b$ \\ means \\\n$a\\!=\\!b\\!+\\!O(\\delta)$. From (\\ref{appr1}) one can show \\cite{Fio13} that \n\\be\n\\bA_e^{{\\scriptscriptstyle \\perp(1)}}\\!\\simeq\\!\\Ba_e^{{\\scriptscriptstyle \\perp}},\\quad\n\\bu_e^{{\\scriptscriptstyle \\perp(1)}}\\!\\simeq\\!\\bu_e^{{\\scriptscriptstyle \\perp(0)}} \\qquad\\: \n\\mbox{if }\\quad\\: 0\\!\\le\\!x^0\\!\\!-\\!z\\!\\le\\! \\xi_0,\\quad\n0\\!\\le\\!x^0\\!+\\!z\\!\\ll\\! \\frac{2\\pi} {K\\lambda}. \\label{stregion}\n\\ee\n($\\xi_0$ stands for the first maximum point of $w, \\epsilon_s$) \nby showing that the relative difference between the lhs and the rhs is much smaller than 1 in the\nspacetime region (\\ref{stregion})$_2$. There we find in particular,\nby (\\ref{displace}), (\\ref{u_hs_h}) and some computation,\n\\bea\n&& \\beta_e^{{\\scriptscriptstyle z(1)}}\\!=\\!\\frac {1\\!+\\!\\bu_e^{{\\scriptscriptstyle\\perp(1)}2}\n\\!\\!-\\!s^{{\\scriptscriptstyle (1)}2}_e}{1\\!+\\!\\bu_e^{{\\scriptscriptstyle\\perp(1)}2}\\!\\!+\n\\!s^{{\\scriptscriptstyle (1)}2}_e }\n\\!\\simeq\\!\\frac {1\\!+\\!\\bu_e^{{\\scriptscriptstyle\\perp(0)}2}\n\\!\\!-\\!e^{2r^{{\\scriptscriptstyle (0)}}_e}}{1\\!+\\!\\bu_e^{{\\scriptscriptstyle\\perp(0)}2}\\!\\!+\n\\!e^{2r^{{\\scriptscriptstyle (0)}}_e} }, \\qquad \\label{beta1}\\\\\n&&\\Delta z_e^{{\\scriptscriptstyle (1)}}(x^0\\!,\\! Z )\\simeq \n\\!\\!\\!\\!\\!\\!\\!\\displaystyle\\int\\limits^{\\Xi_e^{-1}\\!( x^0\\!\\!-\\! Z)}_0\n\\!\\!\\!\\!\\!\\!\\!\\! 
dy\\, [\\gamma_e^{{\\scriptscriptstyle (0)}}\\beta_e^{{\\scriptscriptstyle z(1)}}](y)\n,\\qquad\\\\\n&& \\ba{l} 0\\le [\\Delta z_e^{{\\scriptscriptstyle (0)}}\\!-\\!\\Delta z_e^{{\\scriptscriptstyle (1)}}](x^0\\!,\\! Z )\n\\simeq G[\\Xi_e^{-1}\\!( x^0\\!\\!-\\! Z)], \\\\[6pt]\nG(\\xi)\\!:=\\!\\!\\int\\limits^{\\xi}_0\\! dy\\, g(y), \\quad\ng:=\\frac{\\left(\\!1\\!+\\!2u_e^{{\\scriptscriptstyle z(0)}}\\!\\right)\\! \n\\big(e^{2r^{{\\scriptscriptstyle (0)}}_e}\\!\\!\\!-\\!1\\!\\big) }\n{1\\!+\\!2u_e^{{\\scriptscriptstyle z(0)}}\\!+\\!e^{2r^{{\\scriptscriptstyle (0)}}_e}},\n\\ea\\qquad\\\\[8pt]\n&& 0\\le \\frac{\\Delta z_e^{{\\scriptscriptstyle (0)}}\\!-\\!\\Delta z_e^{{\\scriptscriptstyle (1)}}}\n{\\Delta z_e^{{\\scriptscriptstyle (0)}}}(x^0\\!,\\! Z )\n\\simeq T[\\Xi_e^{-1}\\!( x^0\\!\\!-\\! Z)],\\qquad\\qquad\nT\\!:=\\!\\frac{G}{Y_e^3}.\\qquad\\label{reldif}\n\\eea{}\nThe last expression is the relative difference \nbetween the displacement $\\Delta z_e$ in the zero-density and in the first corrected approximation. \nHence the approximation \\ $z_e(x^0\\!,\\! Z )\\!\\simeq\\! z_e^{{\\scriptscriptstyle(1)}}(x^0\\!,\\! Z )\\!\\simeq\\! z_e^{{\\scriptscriptstyle(0)}}(x^0\\!,\\! Z )$\nmay be good only as long as \\ $T[\\Xi_e^{-1}\\!( x^0\\!\\!-\\! Z)]\\!\\ll\\! 1$. By (\\ref{chain}), the maximum $\\Ba^{{\\scriptscriptstyle \\perp}}(\\xi_0)$ reaches the electrons initially located in $Z$ at the time $\\check x^0(Z)\\!=\\!\\Xi_e(\\xi_0)\\!+\\! Z$; \ntherefore the approximation \\ $z_e(x^0\\!,\\! Z )\\!\\simeq z_e^{{\\scriptscriptstyle(0)}}(x^0\\!,\\! Z )$\nmay be good for all $x^0\\!\\le\\!\\check x^0(Z)$ only if\n\\be\nT(\\xi)\\ll 1\\qquad\\quad 0\\le\\xi\\le\\xi_0,\n\\qquad\\qquad\\quad\n2Y^3_e(\\xi_0)\\!+\\!\\xi_0\\!+\\! 2Z \\ll \\frac{2\\pi} {K\\lambda}. \\label{condgood}\n\\ee\nIn particular, (\\ref{checkzeta}) with \\ $\\check\\xi\\!=\\!\\xi_0$\nwill give a good estimate $\\zeta_e$ of the displacement of the plasma-surface electrons\n(those with $Z$ close to zero) if (\\ref{condgood}) is satisfied.\n\nIn \\cite{FioFedDeA13} $\\zeta_e$ is used to predict\nand estimate the {\\it slingshot effect}, i.e. the expulsion of \nvery energetic electrons in the negative $z$-direction shortly after the impact of\na suitable ultra-short and ultra-intense laser pulse in the form of a {\\it pancake}\n(i.e. a cylinder of radius $R$ and height $l\\!\\ll\\!R$) \nnormally onto a plasma. The mechanism is very simple: the plasma electrons in \na thin layer - just beyond the surface of the plasma - first are given sufficient \nelectric potential energy by the displacement $\\zeta_e$ w.r.t. the ions, \nthen after the pulse are pulled back by the longitudinal \nelectric force exerted by the latter and may leave the plasma. \nSufficient conditions for this to happen are: 1. $l\\!\\ll\\!R$,\nso that plane wave solutions are sufficiently accurate within the plasma, \nespecially in the forward boost phase;\n2. $R\\!\\gtrsim\\!2\\zeta$, to avoid trapping of the boosted electrons or \neven the onset of the {\\it bubble regime} \\cite{MalEtAl05}; \n3. 
3. the EM field inside the pancake is sufficiently intense, and\/or\n$n_0$ is sufficiently low, so that the longitudinal electric force induces\nthe back-acceleration of the electrons mainly\n{\\it after} the pulse maximum has overcome them (in phase with the negative\nponderomotive force exerted by the pulse in its decreasing stage).\nActually we impose the stronger condition that $n_0$ is sufficiently small in order that \n(\\ref{condgood}) be fulfilled and the estimate $\\zeta_e$ be reliable. \nAs a result, an estimate of the final energy of the electrons initially \nlocated at $Z\\!=\\!0$ after the expulsion is \\cite{FioFedDeA13}\n\\be{}\nH=m{}c^2\\gamma_{e{\\scriptscriptstyle M}},\\qquad\\qquad\n\\gamma_{e{\\scriptscriptstyle M}}\\simeq 1+2K \\zeta_e^2.\n\\label{gammaeM}\n\\ee\nThe above conditions are already within reach in several laboratories. \nThe resulting $H$ would be of a few MeV with the parameters \navailable at the FLAME facility (LNF, Frascati), or at the ILIL laboratory (CNR, Pisa):\nthe pulse energy is a few joules, \\\n$\\lambda\\sim 10^{-4}\\,$cm, \\ $\\xi_0\\sim 10^{-3}\\,$cm, \\ $K\\xi_0^2\\!\\sim\\!1$ \n(whence $n_0\\!\\sim\\! 10^{18}\\,$cm$^{-3}$);\n(\\ref{condgood}) are fulfilled; see the typical plots reported in fig. \\ref{Plots}\n[the blue and purple curves correspond, respectively, to a gaussian and to a cut-off\npolynomial amplitude $w(\\xi)$].\n\n\\begin{figure}[ht]\n\\includegraphics[width=4cm]{wgpPlot}\\hfill\\includegraphics[width=3.6cm]{YgpPlot}\\hfill \\includegraphics[width=4cm]{TgpPlotmm}\n\\caption{Typical plots of $w$ (left), $Y^3_e$ (center) and $T$ (right), for a gaussian (blue) and a cut-off polynomial (purple) amplitude $w(\\xi)$.}\n\\label{Plots} \n\\end{figure}\n
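\nAs a plain order-of-magnitude check of the figures just quoted, one may proceed as in the following minimal sketch (in Python; the identification $\\zeta_e\\!\\sim\\!\\xi_0$ in the last step is a crude illustrative assumption, since $\\zeta_e$ actually depends on the whole pulse shape through (\\ref{checkzeta})):\n\\begin{verbatim}\nimport numpy as np\n\nr_e = 2.818e-13      # classical electron radius e^2\/(m_e c^2), in cm\nxi0 = 1.0e-3         # first maximum point of w, epsilon_s, in cm\nK   = 1.0\/xi0**2     # from the condition K*xi0^2 ~ 1, in cm^-2\nn0  = K\/(np.pi*r_e)  # inverting K = pi e^2 n0\/(m_e c^2)\nprint(n0)            # ~1.1e18 cm^-3, as quoted above\n\nzeta_e = xi0         # crude assumption, for illustration only\ngamma_eM = 1.0 + 2.0*K*zeta_e**2   # eq. (gammaeM)\nprint(0.511*gamma_eM)              # H in MeV: ~1.5, i.e. a few MeV\n\\end{verbatim}\n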
","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThere are many techniques that have been developed with the purpose of solving\nthe Schr\\\"odinger equation since it was first introduced more than eighty\nyears ago. One might think that after so many years this should be a closed subject,\nyet this area of research is still actively pursued by physicists around the world. There are still many potentials of interest for which no \nexact solution is known and,\nmoreover, new potentials that seem to model the behavior of\nphysical systems, such as a set of different molecules interacting with each\nother or with an external field, are still being proposed from time to\ntime, which prompts physicists to study them in more detail. A\nrecent example of this can be found in \\cite{LeRoy1}. Since one cannot find exact\nsolutions for many of the most interesting potentials, one is left\nwith one of two options: (1) use a numerical method or (2) try to find an\napproximate analytic solution. The second option is particularly appealing,\nsince in many situations it is possible to use precise approximate solutions in\nthe same way as the exact ones. \nApproximate analytic solutions can be\nobtained by different methods for both the energy eigenvalues and the eigenfunctions\nof the Schr\\\"odinger equation, although more methods can be found in the\nliterature for the eigenvalues than for the eigenfunctions. In either\ncase, there are hundreds of publications devoted to this subject, and it would\nbe impossible to do justice to them all by citing them. \n\nAn analytic approximant should be a function of the parameters of the potential\nthat comes very close to the values of the exact solutions (found numerically) when\nevaluated at any particular point in the parameter space. The usefulness of a\nparticular method used to obtain these approximants will depend on the\nprecision of the approximations as well as the simplicity of the analytic expressions.\n\nIn this paper, a new method is proposed for finding analytic approximants to the energy\neigenvalues of the one-dimensional Schr\\\"odinger equation with potentials of\nthe form $V(x)=Ax^a+Bx^b$ \\cite{R1,R2,R3,R4,R5,R6}. Potentials of this kind are perhaps not the most\ninteresting ones from a phenomenological point of view, but they are\ncertainly of theoretical interest, since they include such potentials as the\nquartic and sextic anharmonic oscillators, and in any case, they are a good choice\nto test new methods before applying them to more interesting problems.\n\nThe method we are proposing here is based on the multi-point quasi-rational\napproximation technique, which has been successfully applied before to\nsimilar kinds of problems involving differential equations \\cite{CM1,CM2,CM3,M1,CM4,CM5}.\nThis technique consists in using the expansions of the\nfunction to be approximated around different values of the parameters in the\ndifferential equation where this function appears, in order to write an\napproximant in terms of rational functions in these parameters combined with\nauxiliary ones. The approximant will then have almost the same expansions\naround the different chosen values of the parameters. The auxiliary functions\nare usually needed in order to match the behavior of the quantity to be\napproximated when the parameters go to infinity, which normally cannot be done\nsolely using rational functions as in a Pad\\'e approximation.\n\nIn our case, the differential equation we are interested in is, of course, the\nSchr\\\"odinger equation\n\\begin{equation}\n\\left(-\\frac{d^2}{dx^2} +Ax^a+Bx^b \\right) \\psi = E\\psi \\,\\, .\n\\end{equation}\nIn this case, we have two parameters, $A$ and $B$, which will be assumed to be\nboth positive to simplify the treatment, though the method can be used without\nthis restriction. It is also assumed that $a$ and $b$ are positive integers, and\n$b>a \\geq 2$. As is very well\nknown \\cite{Simon1}, one can make this equation depend on only one parameter by\nmaking the changes $x = A^{-\\frac{1}{a+2}}x'$ and \n$E' = A^{-\\frac{2}{a+2}}E$, which leads to\n\\begin{equation}\n\\left(-\\frac{d^2}{dx'^2} + x'^a+\\lambda x'^b \\right) \\psi = E'\\psi \\,\\, ,\n\\label{SE}\n\\end{equation}\nwhere $\\lambda=A^{-\\frac{b+2}{a+2}}B$.\n(For the quartic case $a=2$, $b=4$, for instance, this gives $\\lambda=BA^{-3\/2}$ and $E'=A^{-1\/2}E$.)\nFrom now on we will drop the primes and rename $x' \\rightarrow x$ and $E'\n\\rightarrow E$. The energy eigenvalues $E$ will depend on the parameter\n$\\lambda$. Our goal is to find an approximating function\nfor $E(\\lambda)$ for each energy level, using expansions around different values of $\\lambda$,\nincluding the power series (perturbative expansion around $\\lambda=0$), and the\nasymptotic expansion ($\\lambda \\rightarrow \\infty$). In sections\n\\ref{PowerSeries} and \\ref{AsymptoticExpansion} a neat\nway to find these expansions will be shown using a system of coupled differential\nequations, which in the case of the power series provides an interesting\nalternative to standard perturbation methods, since here the perturbed\neigenvalue can be found using only the corresponding unperturbed state,\ninstead of using the whole unperturbed eigenvalue spectrum as in the usual\nquantum mechanics perturbation theory. 
In sections \\ref{quartic} and \\ref{sextic}, the\nconstruction of the approximants will be shown, using the quartic and sextic\nanharmonic oscillators as examples. These approximants will be found for the ground state\nenergy eigenvalues, as well as the first and second excited levels. The important point of our\ntechnique is that the same approximant will be valid and accurate for any value of\n$\\lambda$ (including large and small values).\n\n\n\\section{Power Series}\n\\label{PowerSeries}\n\nThe expansion of the energy eigenvalues and eigenfunctions around $\\lambda=0$ can be written as\n\\begin{eqnarray}\nE &=& E_0+E_1\\lambda+E_2\\lambda^2 + \\cdots \\,\\, , \\\\\n\\psi &=& \\psi_0 +\\psi_1\\lambda +\\psi_2\\lambda^2 + \\cdots \\,\\, .\n\\end{eqnarray}\nOne would like to find the coefficients $E_0$, $E_1$, $E_2, \\ldots$.\nThis can be done by introducing these expansions in equation (\\ref{SE}) and\ndemanding that it be satisfied at every order in $\\lambda$, which leads to the \nfollowing system of differential equations\n\\begin{eqnarray}\nL\\psi_0 &=& E_0 \\psi_0 \\,\\, , \\label{PS_DE1} \\\\\nL\\psi_1 + x^b \\psi_0 &=& E_0 \\psi_1 + E_1 \\psi_0 \\,\\, , \\label{PS_DE2} \\\\\nL\\psi_2 + x^b \\psi_1 &=& E_0 \\psi_2 + E_1 \\psi_1 + E_2 \\psi_0 \\,\\, ,\\\\\n\\vdots \\phantom{x^b \\psi_1} && \\phantom{E_0 \\psi_1} \\vdots \\nonumber \\\\\nL\\psi_n + x^b \\psi_{n-1} &=& \\sum_{k=0}^n E_{n-k} \\psi_k \\,\\, , \\label{PS_DEn}\n\\end{eqnarray}\nwhere\n\\begin{equation}\nL = -\\frac{d^2}{dx^2} + x^a \\,\\, .\n\\end{equation}\nIt is important to note that, since $\\lambda$ is arbitrary, \nmany of the properties of the eigenfunction\n$\\psi$ will be inherited by the functions $\\psi_0$, $\\psi_1, \\ldots$ \nFor example, given that for bound states the function $\\psi$\nshould fall off quickly for large values of $x$, so should the expansion\nfunctions $\\psi_k$. Also, if the function $\\psi$ has definite parity, then the\nfunctions $\\psi_k$ will all have the same parity as $\\psi$.\n\nThe coefficients in the expansion of the energy can be found by\nsolving numerically the differential equations one by one (using, for example,\nthe shooting method). For\ninstance, one could solve equation (\\ref{PS_DE1}), thereby obtaining a\nnumerical value for the\ncoefficient $E_0$, together with a numerical solution for $\\psi_0$. \nFor a bound state, one should find a solution for a range of values in $x$ where\nthe fall-off of $\\psi_0$ to zero can be seen. Then one can\nmake a very precise fit (a large polynomial in $x$) for $\\psi_0$ within this\nrange, and use this fit as an input to solve equation (\\ref{PS_DE2}). This\nthen leads to a numerical value for $E_1$, and a numerical solution for\n$\\psi_1$, and in principle, the procedure can be repeated until one has as many\ncoefficients $E_k$ as one desires. \n\nOf course, the applicability of this method is limited by the numerical\nprecision with which the differential equations are solved, and since the\nmethod is iterative, the numerical errors from the first $n$ equations will\nbe propagated to the solution of equation $n+1$. For this reason, one would\nexpect the precision of $E_k$ to be lower for larger values of $k$.\n
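\nAs an illustration of this procedure, the following minimal sketch (in Python, assuming scipy is available; it is not the code used for the tables below) carries out the first two steps for the quartic case $a=2$, $b=4$ and the ground state, shooting first on $E_0$ in (\\ref{PS_DE1}) and then on $E_1$ in (\\ref{PS_DE2}); for simplicity the exact $\\psi_0=e^{-x^2\/2}$ is used in the second step instead of a polynomial fit:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.integrate import solve_ivp\nfrom scipy.optimize import brentq\n\nXMAX = 5.0   # endpoint; psi should have decayed well before it\n\ndef psi_end(E):\n    # eq. (PS_DE1) for a=2: psi0'' = (x^2 - E) psi0, even parity\n    f = lambda x, y: [y[1], (x*x - E)*y[0]]\n    return solve_ivp(f, (0.0, XMAX), [1.0, 0.0],\n                     rtol=1e-10, atol=1e-12).y[0, -1]\n\nE0 = brentq(psi_end, 0.5, 1.5)   # -> 1.0\n\ndef psi1_end(E1):\n    # eq. (PS_DE2) for a=2, b=4, with the exact psi0 = exp(-x^2\/2):\n    # psi1'' = (x^2 - E0) psi1 + (x^4 - E1) psi0\n    def f(x, y):\n        psi0 = np.exp(-0.5*x*x)\n        return [y[1], (x*x - 1.0)*y[0] + (x**4 - E1)*psi0]\n    return solve_ivp(f, (0.0, XMAX), [0.0, 0.0],\n                     rtol=1e-10, atol=1e-12).y[0, -1]\n\nE1 = brentq(psi1_end, 0.0, 1.5)  # -> 0.75 = 3\/4\n\\end{verbatim}\nBoth roots reproduce the exact values $E_0=1$ and $E_1=3\/4$ derived below.\n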
The numerical values of the coefficients $E_k$ can also be found directly\nusing the solutions of the first $k-1$ differential equations,\nwithout solving the $k$-th one. For example, if one has\nalready obtained $E_0$ and $\\psi_0(x)$, one can multiply both sides of equation\n(\\ref{PS_DE2}) by $\\psi_0(x)$, and integrating in $x$ it is possible to show\nthat\n\\begin{equation}\nE_1 = \\frac{\\int_{-\\infty}^{\\infty} dx x^b \\psi_0^2}{\n \\int_{-\\infty}^{\\infty} dx \\psi_0^2} \\,\\, ,\n\\label{intE1}\n\\end{equation}\nwhich coincides with the expression obtained using standard perturbation\nmethods. The same procedure can be repeated for all the other equations, leading to\n\\begin{equation}\nE_n = \\frac{\\int_{-\\infty}^{\\infty} dx \\left( x^b \\psi_{n-1} \n -\\sum_{k=1}^{n-1} E_{n-k} \\psi_k \\right) \\psi_0}{\n \\int_{-\\infty}^{\\infty} dx \\psi_0^2} \\,\\, .\n\\label{intEn}\n\\end{equation}\nThis way of finding $E_k$ is more precise, since the number of\ndifferential equations to be solved is $k-1$. In practice, the integrals are taken within\na range of $x$ where the functions $\\psi_k(x)$ have already fallen to very\nsmall values.\n\nOn the other hand, equation (\\ref{PS_DE1}) can be solved exactly for $a=2$,\nsince then it would be the Schr\\\"odinger equation for a harmonic\noscillator. It can be shown that in this case, all the other \nequations in the system can also be solved exactly. For example, for the ground state \n$E_0=1$ and $\\psi_0(x) \\propto \\exp(-\\frac{x^2}{2})$. If we take $b=4$ (quartic\nanharmonic oscillator) the next function, $\\psi_1(x)$, can be written as\n\\begin{equation} \n\\psi_1(x)=(p_0+p_1x+p_2x^2+p_3x^3+p_4x^4)\\exp(-\\frac{x^2}{2}) \\,\\, .\n\\end{equation}\nWhen this is introduced in equation (\\ref{PS_DE2}), the function $\\exp(-\\frac{x^2}{2})$ drops out and\na relation between two polynomials remains. Since this relation must be\nsatisfied at each order in $x$, a system of equations in $E_1$ and the $p_i$'s\nis obtained, whose solution is\n\\begin{equation}\np_1=0\\, , \\,\\, p_2=-\\frac{3}{8} \\, ,\\,\\, p_3=0 \\, ,\\,\\,\np_4=-\\frac{1}{8} \\, ,\\,\\, E_1=\\frac{3}{4} \\,\\, ,\n\\end{equation}\nand it can be seen that $p_0$ is arbitrary, which means that just like for $\\psi_0(0)$, \nthe initial condition $\\psi_1(0)=p_0$ is arbitrary (this will be the case for\nall the other functions in the expansion).\n\nThe same procedure can be repeated for $\\psi_2$, $\\psi_3$, etc., writing\n\\begin{equation}\n\\psi_n = \\left( \\sum_{k=0}^{4n} p_k x^k \\right) \\exp(-x^2\/2) \\,\\, .\n\\end{equation}\nWe obtain\n\\begin{equation}\nE_0=1 \\, ,\\,\\, E_1=\\frac{3}{4} \\, ,\\,\\, E_2=-\\frac{21}{16} \\, ,\\,\\,\nE_3=\\frac{333}{64} \\, ,\\,\\, E_4=-\\frac{30885}{1024} \\,\\, .\n\\end{equation}\nThis coincides with the results obtained by using the standard\nRayleigh-Schr\\\"odinger perturbation method, with the advantage that no\ninformation about the eigenstates of energy levels different from the one\nbeing considered is required in order to obtain the terms of higher order.\nThe same can be done for other values of $b$.\n
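\nAs a cross-check of (\\ref{intEn}) against these exact results, the following sketch (assuming sympy is available) evaluates $E_2$ from $\\psi_0$, $\\psi_1$ and $E_1$ alone, and confirms both the value $-21\/16$ and the fact that the arbitrary constant $p_0$ drops out:\n\\begin{verbatim}\nimport sympy as sp\n\nx, p0 = sp.symbols('x p0', real=True)\npsi0 = sp.exp(-x**2\/2)                 # ground state of L, E0 = 1\nE1   = sp.Rational(3, 4)\npsi1 = (p0 - sp.Rational(3, 8)*x**2 - sp.Rational(1, 8)*x**4)*psi0\n\nnorm = sp.integrate(psi0**2, (x, -sp.oo, sp.oo))\nE2 = sp.integrate((x**4 - E1)*psi1*psi0, (x, -sp.oo, sp.oo))\/norm\nprint(sp.simplify(E2))   # -21\/16, independent of the arbitrary p0\n\\end{verbatim}\n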
\nSimilar expansions can be found around any point other than $\\lambda=0$. Let's\ncall $\\lambda_{\\alpha} = \\lambda - \\alpha$, then we can write\n\\begin{equation}\n\\left( -\\frac{d^2}{dx^2} + x^a + \\alpha x^b + \\lambda_{\\alpha} x^b \\right) \\psi =\nE \\psi \\,\\, ,\n\\end{equation}\nand now we can expand around $\\lambda_{\\alpha}=0$ (i.e. around\n$\\lambda=\\alpha$),\n\\begin{eqnarray}\nE &=& E^{\\alpha}_0+E^{\\alpha}_1\\lambda_{\\alpha}+E^{\\alpha}_2\\lambda_{\\alpha}^2 + \\cdots \\,\\, , \\\\\n\\psi &=& \\psi^{\\alpha}_0 +\\psi^{\\alpha}_1\\lambda_{\\alpha} +\\psi^{\\alpha}_2\\lambda_{\\alpha}^2 + \\cdots \\,\\, .\n\\end{eqnarray}\n\nThe following set of equations is obtained\n\\begin{eqnarray}\nL_{\\alpha}\\psi^{\\alpha}_0 &=& E^{\\alpha}_0 \\psi^{\\alpha}_0 \\label{PSa_DE1} \\\\\nL_{\\alpha}\\psi^{\\alpha}_1 + x^b \\psi^{\\alpha}_0 &=& E^{\\alpha}_0 \\psi^{\\alpha}_1 + E^{\\alpha}_1 \\psi^{\\alpha}_0 \\,\\, , \\\\\nL_{\\alpha}\\psi^{\\alpha}_2 + x^b \\psi^{\\alpha}_1 &=& E^{\\alpha}_0 \\psi^{\\alpha}_2 + E^{\\alpha}_1 \\psi^{\\alpha}_1 + E^{\\alpha}_2 \\psi^{\\alpha}_0 \\,\\, , \\\\\n\\vdots \\phantom{x^b \\psi_1} && \\phantom{E_0 \\psi_1} \\vdots \\nonumber \\\\ \nL_{\\alpha}\\psi^{\\alpha}_n + x^b \\psi^{\\alpha}_{n-1} &=& \\sum_{k=0}^n E^{\\alpha}_{n-k} \\psi^{\\alpha}_k \\,\\, ,\n\\end{eqnarray}\nwhere\n\\begin{equation}\nL_{\\alpha} = -\\frac{d^2}{dx^2} + x^a + \\alpha x^b \\,\\, .\n\\end{equation}\nClearly, equation (\\ref{PSa_DE1}) will not, in general, have exact solutions, so one is forced to find the coefficients\nnumerically. Equations (\\ref{intE1}) and (\\ref{intEn}) will still be valid,\nbut with the changes $\\psi_k \\rightarrow \\psi^{\\alpha}_k$ and $E_k \\rightarrow E^{\\alpha}_k$,\ni.e.,\n\\begin{equation}\nE^{\\alpha}_1 = \\frac{\\int_{-\\infty}^{\\infty} dx x^b (\\psi^{\\alpha}_0)^2}{\n \\int_{-\\infty}^{\\infty} dx (\\psi^{\\alpha}_0)^2} \\,\\, \n\\end{equation}\nand\n\\begin{equation}\nE^{\\alpha}_n = \\frac{\\int_{-\\infty}^{\\infty} dx \\left( x^b \\psi^{\\alpha}_{n-1} \n -\\sum_{k=1}^{n-1} E^{\\alpha}_{n-k} \\psi^{\\alpha}_k \\right) \\psi^{\\alpha}_0}{\n \\int_{-\\infty}^{\\infty} dx (\\psi^{\\alpha}_0)^2} \\,\\, .\n\\end{equation}\n\nThe coefficient $E^{\\alpha}_k$ is the $k$-th Taylor coefficient of the\nfunction $E(\\lambda)$ at $\\lambda = \\alpha$, i.e. its $k$-th derivative there divided by $k!$. One might find these\ncoefficients directly, by evaluating $E(\\lambda)$ near $\\lambda = \\alpha$; for\nexample, for small $\\epsilon$,\n\\begin{equation}\nE^{\\alpha}_1 = \\frac{E(\\lambda=\\alpha+\\epsilon)-E(\\lambda=\\alpha)}{\\epsilon} \\,\\, .\n\\end{equation}\nHowever, this way of finding the coefficients becomes relatively difficult for\nhigher derivatives, since then one needs to evaluate the function $E(\\lambda)$\nwith increasing accuracy. 
The method proposed here can be viewed as an alternative, \nmore accurate, easier and more efficient way to find these derivatives.\n\n\n\\section{Asymptotic Expansion}\n\\label{AsymptoticExpansion}\n\nDoing the change of variables $x=\\lambda^{-\\frac{1}{2+b}}y$, \nand defining $\\tilde{\\lambda}=\\lambda^{-\\frac{2+a}{2+b}}$ and \n$\\tilde{E}=\\lambda^{-\\frac{2}{2+b}}E$, the Schr\\\"odinger equation becomes\n\\begin{equation}\n\\left(-\\frac{d^2}{dy^2} + \\tilde{\\lambda} y^a + y^b \\right) \\psi = \\tilde{E} \\psi \\,\\, .\n\\label{AsySE}\n\\end{equation}\nOne can now expand $\\tilde{E}$ and $\\psi$ in a similar way as before,\n\\begin{eqnarray}\n\\tilde{E} &=& \\tilde{E}_0+\\tilde{E}_1\\tilde{\\lambda}+\\tilde{E}_2\\tilde{\\lambda}^2 + \\cdots \\,\\, , \\label{tE} \\\\ \n\\psi &=& \\tilde{\\psi}_0 +\\tilde{\\psi}_1\\tilde{\\lambda} +\\tilde{\\psi}_2\\tilde{\\lambda}^2 + \\cdots \\,\\, .\n\\end{eqnarray}\nIntroducing these expansions in equation (\\ref{AsySE}) also leads to a system of\ndifferential equations\n\\begin{eqnarray}\n\\tilde{L}\\tilde{\\psi}_0 &=& \\tilde{E}_0 \\tilde{\\psi}_0 \\,\\, , \\label{AsyDE1} \\\\\n\\tilde{L}\\tilde{\\psi}_1 + y^a \\tilde{\\psi}_0 &=& \\tilde{E}_0 \\tilde{\\psi}_1 + \\tilde{E}_1 \\tilde{\\psi}_0 \\,\\, , \\label{AsyDE2} \\\\\n\\tilde{L}\\tilde{\\psi}_2 + y^a \\tilde{\\psi}_1 &=& \\tilde{E}_0 \\tilde{\\psi}_2 + \\tilde{E}_1 \\tilde{\\psi}_1 + \\tilde{E}_2 \\tilde{\\psi}_0 \\,\\, , \\\\\n\\vdots \\phantom{y^a \\psi_1} && \\phantom{E_0 \\psi_1} \\vdots \\nonumber \\\\\n\\tilde{L}\\tilde{\\psi}_n + y^a \\tilde{\\psi}_{n-1} &=& \\sum_{k=0}^n \\tilde{E}_{n-k} \\tilde{\\psi}_k \\,\\, , \\label{AsyDEn}\n\\end{eqnarray}\nwhere\n\\begin{equation}\n\\tilde{L} = -\\frac{d^2}{dy^2} + y^b \\,\\, .\n\\end{equation}\n\nRewriting the asymptotic expansion in terms of $\\lambda$ instead of $\\tilde{\\lambda}$, one sees\nthat the form of the expansion depends on the particular potential to\nbe considered. In the case of the quartic anharmonic oscillator ($a=2$ and $b=4$), equation\n(\\ref{tE}) leads to\n\\begin{eqnarray}\nE &=& \\lambda^{1\/3} \\left(\\tilde{E}_0+\\frac{\\tilde{E}_1}{\\lambda^{2\/3}}+\n\\frac{\\tilde{E}_2}{\\lambda^{4\/3}}+\\frac{\\tilde{E}_3}{\\lambda^2} + \\cdots \\right) \n\\nonumber \\\\ \n&=& \\lambda^{1\/3} \\sum_{k=0}^{\\infty} \\frac{\\tilde{E}_{3k}}{\\lambda^{2k}}\n +\\lambda^{-1\/3} \\sum_{k=0}^{\\infty} \\frac{\\tilde{E}_{3k+1}}{\\lambda^{2k}} \n +\\frac{1}{\\lambda} \\sum_{k=0}^{\\infty} \\frac{\\tilde{E}_{3k+2}}{\\lambda^{2k}}\n\\end{eqnarray}\nwhile in the case of the sextic anharmonic oscillator ($a=2$ and $b=6$), we obtain\n\\begin{eqnarray}\nE &=& \\lambda^{1\/4} \\left( \\tilde{E}_0 + \\frac{\\tilde{E}_1}{\\lambda^{1\/2}} + \\frac{\\tilde{E}_2}{\\lambda}+\n\\frac{\\tilde{E}_3}{\\lambda^{3\/2}} + \\frac{\\tilde{E}_4}{\\lambda^2} + \\cdots \\right) \n\\nonumber \\\\\n&=& \\lambda^{1\/4} \\sum_{k=0}^{\\infty} \\frac{\\tilde{E}_{2k}}{\\lambda^{k}}\n+ \\lambda^{-1\/4} \\sum_{k=0}^{\\infty} \\frac{\\tilde{E}_{2k+1}}{\\lambda^{k}} \\,\\, .\n\\end{eqnarray}\nIn general, the expansions will have this structure, i.e.,\nthey can be divided into a few pieces, each consisting of a series of\nnegative integer powers of $\\lambda$, multiplied by a rational power of $\\lambda$. For\nthis reason, the approximants that we will build must also be divided in the same\nway, in order to match the behavior of each piece. This will be seen explicitly\nin the next two sections.\n
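\nBefore doing so, it is worth checking how accurate the truncated expansion already is at moderate $\\lambda$. The following sketch (in Python) evaluates it with the quartic ground-state coefficients computed below (table \\ref{Coeffx2x4Asy}); at $\\lambda=2$ it reproduces the shooting value $1.607541\\ldots$ of table \\ref{Interx4d3} to about $10^{-5}$:\n\\begin{verbatim}\nimport numpy as np\n\n# asymptotic coefficients for the quartic ground state (table 2)\nEt = [1.060361944892, 0.362022935, -0.034510565,\n      0.005195593, -0.000831127]\n\ndef E_asym(lam):\n    lt = lam**(-2.0\/3.0)    # tilde-lambda\n    return lam**(1.0\/3.0)*sum(c*lt**k for k, c in enumerate(Et))\n\nprint(E_asym(2.0))   # 1.60753..., vs. 1.607541... from direct shooting\n\\end{verbatim}\n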
\n\\section{Approximants for the quartic anharmonic oscillator}\n\\label{quartic}\n\nFor the quartic anharmonic oscillator \\cite{R7,R8,A0,A1,A2}, the approximants for the energy\neigenvalues can be written in the following form\n\\begin{equation}\nE_{\\rm app}(\\lambda) = (1+\\mu \\lambda)^{1\/3}\\frac{P_a(\\lambda)}{Q(\\lambda)}\n +(1+\\mu \\lambda)^{-1\/3}\\frac{P_b(\\lambda)}{Q(\\lambda)}\n +\\frac{1}{1+\\mu \\lambda}\\frac{P_c(\\lambda)}{Q(\\lambda)} \\,\\, ,\n\\label{Eapp}\n\\end{equation}\nwhere\n\\begin{equation}\nP_a(\\lambda) = \\sum_{k=0}^N a_k \\lambda^k \\,\\, , \\quad\nP_b(\\lambda) = \\sum_{k=0}^N b_k \\lambda^k \\,\\, , \\quad\nP_c(\\lambda) = \\sum_{k=0}^N c_k \\lambda^k \\,\\, ,\n\\label{Ps}\n\\end{equation}\nand\n\\begin{equation}\nQ(\\lambda) = 1 + \\sum_{k=1}^N q_k \\lambda^k \\,\\, ,\n\\label{Q}\n\\end{equation}\nthat is, the approximant is constructed using rational functions multiplied by\nauxiliary ones, conveniently chosen in order to match the asymptotic behavior \nof the eigenvalues. Furthermore, since the power series is also going to be\nused, it should be possible to Taylor-expand \nthese functions around positive values of $\\lambda$. It is for this last reason that the\nauxiliary functions are not chosen directly as the factors of $\\lambda^{1\/3}$,\n$\\lambda^{-1\/3}$ and $1\/\\lambda$ that appear multiplying each one of the three\npieces that make up the asymptotic expansion. Instead, we do the\nchange $\\lambda \\rightarrow 1+ \\mu \\lambda$ inside these roots, which, of course,\nstill gives the right behavior for $\\lambda \\rightarrow \\infty$. An arbitrary\nparameter $\\mu$ has been included, which can be adjusted in order to improve\nthe precision of the approximant. \n\nWith this choice of auxiliary functions the\ndegrees of the polynomials in the numerator must be the same as the ones in the\ndenominator. In principle, this can be done independently for each one of the three pieces\nin equation (\\ref{Eapp}), i.e. $P_a(\\lambda)$, $P_b(\\lambda)$ and\n$P_c(\\lambda)$ could be chosen with different degrees, and in that case,\ndifferent denominators matching the degree of each one of these polynomials\nwould be needed. For simplicity, a denominator $Q(\\lambda)$ common to all\nthree pieces has been chosen, and so all polynomials have the same\ndegree.
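\nFor reference, evaluating (\\ref{Eapp}) is straightforward once the coefficients are known; the following sketch (in Python; the argument conventions are ours and are not part of the method itself) can be used to plug in coefficient tables such as table \\ref{x4d3} below:\n\\begin{verbatim}\nimport numpy as np\n\ndef E_app(lam, a, b, c, q, mu):\n    # evaluates eq. (Eapp); a, b, c hold [a_0..a_N], q holds [q_1..q_N]\n    Pa = np.polyval(a[::-1], lam)\n    Pb = np.polyval(b[::-1], lam)\n    Pc = np.polyval(c[::-1], lam)\n    Q  = 1.0 + lam*np.polyval(q[::-1], lam)\n    w  = (1.0 + mu*lam)**(1.0\/3.0)\n    return (w*Pa + Pb\/w + Pc\/(1.0 + mu*lam))\/Q\n\\end{verbatim}\nWith the $n=0$ column of table \\ref{x4d3} and $\\mu=2$, for instance, it should reproduce $E_{\\rm app}(1)\\approx 1.392352$, cf. table \\ref{Interx4d3}.\n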
As will become clear below, any other choice would lead to\na system of non-linear equations in the $p_k$'s and $q_k$'s, \nmaking the determination of the approximant unnecessarily complicated.\n\nThe coefficients of the polynomials in the approximant are found using the\npower series, asymptotic expansion and the expansions around intermediate\npoints ($0< \\alpha <\\infty$), whose calculation was explained in the previous\ntwo sections. One is free to choose as many\nterms from each expansion as one desires, as long as the total number of terms\nfrom all expansions equals the total number of coefficients in the\napproximant. If the degree of the polynomials is $N$, the total number of\ncoefficients will be $4N+3$. In general, the approximants will have higher\nprecision for higher $N$. \n\nThe values of the first few terms in the power series\n(around $\\lambda=0$) for the first three energy levels (labeled by $n$) \nof the quartic anharmonic\noscillator are shown in table \\ref{Coeffx2x4Pow}, while the values of the first few terms in\nthe asymptotic expansion are shown in table \\ref{Coeffx2x4Asy}. Notice that in accordance\nwith what was discussed in section \\ref{PowerSeries}, the values of the coefficients for the\npower series are exact. The values of the coefficients for the asymptotic\nexpansion were obtained by solving equations (\\ref{AsyDE1})-(\\ref{AsyDEn})\nusing the shooting method. For the ground state ($n=0$) and the second excited\nlevel ($n=2$), the eigenfunctions are even in $x$, and as mentioned\nbefore, so must be the functions $\\tilde{\\psi}_k$ (and the same applies to\n$\\psi_k$ and $\\psi^{\\alpha}_k$), so the initial conditions used in those cases were $\\tilde{\\psi}_k(0)=1$\nand $\\tilde{\\psi}'_k(0)=0$. For the first excited level the eigenfunction is odd in\n$x$, so the conditions were $\\tilde{\\psi}_k(0)=0$ and $\\tilde{\\psi}'_k(0)=1$. 
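\nThe following minimal sketch (in Python, assuming scipy; the integration endpoint and the tolerances may need tuning) illustrates this shooting procedure on the leading asymptotic equation (\\ref{AsyDE1}) of the quartic case, with the even and odd initial conditions just described:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.integrate import solve_ivp\nfrom scipy.optimize import brentq\n\nYMAX = 4.0\n\ndef end_value(E, even):\n    # eq. (AsyDE1) for b=4: psi'' = (y^4 - E) psi\n    y0 = [1.0, 0.0] if even else [0.0, 1.0]\n    f = lambda t, y: [y[1], (t**4 - E)*y[0]]\n    return solve_ivp(f, (0.0, YMAX), y0,\n                     rtol=1e-11, atol=1e-13).y[0, -1]\n\nprint(brentq(end_value, 0.5, 2.0, args=(True,)))   # 1.06036... (n=0)\nprint(brentq(end_value, 3.0, 4.5, args=(False,)))  # 3.79967... (n=1)\n\\end{verbatim}\nThe two roots reproduce the values of $\\tilde{E}_0$ for $n=0$ and $n=1$ in table \\ref{Coeffx2x4Asy}.\n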
One might feel\nuneasy about the propagation of errors from one differential equation to the\nnext, but it can be checked numerically that the accuracy of the energy eigenvalues\nfor large values of $\\lambda$ (or small values of $\\tilde{\\lambda}$) improves as one\nincludes higher terms in the expansion, which gives us confidence that the\nprecision of the coefficients is acceptable.\n\n\\begin{table}\n\\begin{tabular}{|c|c|c|c|}\n\\hline\nCoefficients & $n=0$ & $n=1$ & $n=2$ \\\\ \n\\hline \\hline\n $E_0$ & 1 & 3 & 5 \\\\\n $E_1$ & 3\/4 & 15\/4 & 39\/4 \\\\\n $E_2$ & -21\/16 & -165\/16 & -615\/16 \\\\\n $E_3$ & 333\/64 & 3915\/64 & 20079\/64 \\\\\n $E_4$ & -30885\/1024 & -520485\/1024 & -3576255\/1024 \\\\\n $E_5$ & 916731\/4096 & 21304485\/4096 & 191998593\/4096 \\\\\n\\hline\n\\end{tabular}\n\\caption{Exact coefficients of the power series for the first three energy\n levels of the quartic anharmonic oscillator.}\n\\label{Coeffx2x4Pow}\n\\end{table}\n\n\\begin{table}\n\\begin{tabular}{|c|c|c|c|}\n\\hline\nCoefficients & $n=0$ & $n=1$ & $n=2$ \\\\ \n\\hline \\hline\n $\\tilde{E}_0$ & 1.060361944892 & 3.7996728480480 & 7.455697915983 \\\\\n $\\tilde{E}_1$ & 0.362022935 & 0.901605953 & 1.244714261 \\\\\n $\\tilde{E}_2$ & -0.034510565 & -0.057483095 & -0.046601602 \\\\\n $\\tilde{E}_3$ & 0.005195593 & 0.005492673 & 0.000958945 \\\\\n $\\tilde{E}_4$ & -0.000831127 & -0.000513914 & -0.000831127 \\\\\n\\hline\n\\end{tabular}\n\\caption{Coefficients of the asymptotic expansion for the eigenvalues of the\n quartic anharmonic oscillator obtained by solving the\n differential equations using the shooting method.}\n\\label{Coeffx2x4Asy}\n\\end{table}\n\nLet's choose a few intermediate points $\\alpha_i$ ($i=1,2, \\dots$), and let's take $n_i$ terms\nfrom the expansion around each one of these points. Let's also take $n_0$ terms from the\npower series (around $\\lambda=0$) and $n_a$ terms from the asymptotic\nexpansion. It will be assumed that $\\sum_i n_i+n_0+n_a=4N+3$. Using the power\nseries at $\\lambda=0$ one can write\n\\begin{equation}\nQ(\\lambda)\\sum^{n_0}_{k=0}E_k\\lambda^k =\n(1+\\mu \\lambda)^{1\/3} P_a(\\lambda) +\n(1+\\mu \\lambda)^{-1\/3} P_b(\\lambda)+\n\\frac{1}{1+\\mu \\lambda} P_c(\\lambda) \\,\\, .\n\\end{equation}\nTaylor-expanding each side of this equation in $\\lambda$ and demanding that it be satisfied at every order up to\n$\\lambda^{n_0}$, one obtains a set of $n_0$ linear equations in the\ncoefficients of the approximant. Likewise, one can use the expansions at the\nintermediate points, and doing the change $\\lambda=\\lambda_{\\alpha_i}+\\alpha_i$,\none can write\n\\begin{eqnarray}\n\\left(1 + \\sum_{k=1}^N q_k (\\lambda_{\\alpha_i}+\\alpha_i)^k \\right)\n\\sum^{n_i}_{k=0}E^{\\alpha_i}_k\\lambda_{\\alpha_i}^k &=&\n(1+\\mu (\\lambda_{\\alpha_i}+\\alpha_i))^{1\/3} \\sum_{k=0}^N a_k\n(\\lambda_{\\alpha_i}+\\alpha_i)^k \\nonumber \\\\\n&& +\n(1+\\mu (\\lambda_{\\alpha_i}+\\alpha_i))^{-1\/3} \\sum_{k=0}^N b_k\n(\\lambda_{\\alpha_i}+\\alpha_i)^k \\nonumber \\\\\n&& +\n\\frac{1}{1+\\mu (\\lambda_{\\alpha_i}+\\alpha_i)} \\sum_{k=0}^N c_k\n(\\lambda_{\\alpha_i}+\\alpha_i)^k \n\\end{eqnarray}\nIf one demands this equation to be satisfied at every order in\n$\\lambda_{\\alpha_i}$ up to $\\lambda_{\\alpha_i}^{n_i}$, \none obtains a set of $n_i$ linear equations in the\ncoefficients.\n
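\nThis order-by-order matching is easily automated. The following sketch (assuming sympy is available, and with a toy degree $N=1$ purely for illustration) generates the $n_0=3$ linear equations obtained from the power series at $\\lambda=0$; the remaining $4N$ conditions would come, in the same way, from the intermediate points and the asymptotic expansion:\n\\begin{verbatim}\nimport sympy as sp\n\nlam, mu = sp.symbols('lambda mu', positive=True)\nN = 1                                   # toy degree, for illustration\na = sp.symbols('a0:2'); b = sp.symbols('b0:2')\nc = sp.symbols('c0:2'); q = sp.symbols('q1:2')\n\nPa = sum(a[k]*lam**k for k in range(N + 1))\nPb = sum(b[k]*lam**k for k in range(N + 1))\nPc = sum(c[k]*lam**k for k in range(N + 1))\nQ  = 1 + sum(q[k]*lam**(k + 1) for k in range(N))\n\nEs = [1, sp.Rational(3, 4), sp.Rational(-21, 16)]   # E_0, E_1, E_2\nlhs = Q*sum(Es[k]*lam**k for k in range(3))\nrhs = ((1 + mu*lam)**sp.Rational(1, 3)*Pa\n       + (1 + mu*lam)**sp.Rational(-1, 3)*Pb + Pc\/(1 + mu*lam))\neqs = [sp.series(lhs - rhs, lam, 0, 3).removeO().coeff(lam, k)\n       for k in range(3)]   # 3 equations, linear in a, b, c, q\n\\end{verbatim}\n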
Finally, one can use the asymptotic expansion. For this we need\nto do the change $\\lambda'=1\/\\lambda$, and match the expansion with the\napproximant for each one of the three pieces in which it is divided. For\nexample, since \n\\begin{equation}\n(1+\\mu \\lambda)^{1\/3} \\frac{P_a(\\lambda)}{Q(\\lambda)} =\n\\lambda^{1\/3} (\\mu + \\lambda')^{1\/3} \n\\frac{\\sum_{k=0}^N a_k \\lambda'^{N-k}}{\\lambda'^N + \\sum_{k=1}^N q_k \\lambda'^{N-k}}\n\\,\\, ,\n\\end{equation}\none can compare the term multiplying $\\lambda^{1\/3}$ on the right-hand side of\nthis equation with the term multiplying the same factor in the asymptotic\nexpansion. Doing the same also for the other two pieces leads to\n\\begin{eqnarray}\n\\left( \\lambda'^N + \\sum_{k=1}^N q_k \\lambda'^{N-k} \\right) \n\\sum_{k=0}^{\\infty} \\tilde{E}_{3k}\\lambda'^{2k} &=&\n(\\lambda'+\\mu)^{1\/3} \\sum_{k=0}^N a_k \\lambda'^{N-k} \\,\\, ,\\\\\n\\left( \\lambda'^N + \\sum_{k=1}^N q_k \\lambda'^{N-k} \\right) \n\\sum_{k=0}^{\\infty} \\tilde{E}_{3k+1}\\lambda'^{2k} &=&\n(\\lambda'+\\mu)^{-1\/3} \\sum_{k=0}^N b_k \\lambda'^{N-k} \\,\\, , \\\\\n\\left( \\lambda'^N + \\sum_{k=1}^N q_k \\lambda'^{N-k} \\right) \n\\sum_{k=0}^{\\infty} \\tilde{E}_{3k+2}\\lambda'^{2k} &=&\n\\frac{1}{\\lambda'+\\mu}\\sum_{k=0}^N c_k \\lambda'^{N-k} \\,\\, .\n\\end{eqnarray}\nHere the number of terms taken in each expansion is determined by\n$n_a$, that is, one would not allow any $\\tilde{E}_k$ with $k>n_a$ in\nthe sums.\nIn this way, one gets a set of $n_a$ linear equations for the coefficients\nof the approximant.\n\nIn table \\ref{x4d3}, the values of the coefficients of the approximants are\nshown for the first three energy levels, using polynomials of degree three. There\nare fifteen coefficients in each approximant, and they were obtained using the\nfirst five terms of the power series (around $\\lambda=0$), the first five\nterms of the asymptotic expansion, and the first term of the series around\n$\\lambda=0.5$, $\\lambda=1$, $\\lambda=2$, $\\lambda=5$ and $\\lambda=20$ (which\nare shown for the three energy levels in table \\ref{Interx4d3}). This\nmeans that we are only using the exact energy eigenvalue at these\nintermediate points, and forcing the approximant built with the power series and\nasymptotic expansion to furthermore coincide with these ``exact'' eigenvalues at\nthese points. This not only brings the relative error of the\napproximant at these points down to zero (they become nodes of the relative\nerror as a function of $\\lambda$), but also helps to decrease the\nerror in between these points. The relative error is defined using as target the\neigenvalues obtained numerically through the shooting method, i.e., the\nrelative error is given by\n\\begin{equation}\n\\frac{|E_{\\rm app}- E_{\\rm shooting}|}{E_{\\rm shooting}} \\,\\, .\n\\end{equation} \n\n\\begin{table}\n\\begin{tabular}{|c|c|c|c|}\n\\hline\n Coefficients & $n=0$ & $n=1$ & $n=2$ \\\\ \n\\hline \\hline\n $E_0(\\lambda=1\/2)$ & 1.241854043136 & 4.051932338617 & 7.396900686938 \\\\\n $E_0(\\lambda=1)$ & 1.392351580103 & 4.64881282723 & 8.655049982254 \\\\\n $E_0(\\lambda=2)$ & 1.607541348124 & 5.475784646286 & 10.358583364736 \\\\\n $E_0(\\lambda=5)$ & 2.018340657447 & 7.013479298703 & 13.467730394819 \\\\\n $E_0(\\lambda=20)$ & 3.009944947791 & 10.643215959124 & 20.694110927154 \\\\\n\\hline\n\\end{tabular}\n\\caption{First coefficient (energy eigenvalues) of the series at different\n intermediate points for the first three energy levels with $V(x)=x^2+\\lambda\n x^4$. 
These values were obtained using the shooting method.}\n\\label{Interx4d3}\n\\end{table}\n\n\\begin{table}[h]\n\\begin{tabular}{|c|c|c|c|}\n\\hline\nCoefficients & $n=0$ & $n=1$ & $n=2$ \\\\ \n\\hline \\hline\n $a_0$ & -235.587774594 & 3.26113271857 & -46.3903727540 \\\\\n $a_1$ & 129.192528081 & 45.7861842084 & 118.622015136 \\\\\n $a_2$ & 819.219808968 & 347.592601172 & 906.778841942 \\\\\n $a_3$ & 4083.20247083 & 1023.55148495 & 3353.40199807 \\\\\n $b_0$ & 49.9309955808 & 4.80222464913 & 8.35806073416 \\\\\n $b_1$ & 374.551951382 & 43.2593054339 & 113.555378592 \\\\\n $b_2$ & 1181.63222463 & 259.439145729 & 536.540936096 \\\\\n $b_3$ & 2212.93937770 & 385.537761596 & 888.696857827 \\\\\n $c_0$ & 186.656779013 & -5.06335736770 & 43.0323120198 \\\\\n $c_1$ & 208.866219507 & -20.7666223608 & 48.6886778830 \\\\\n $c_2$ & -423.418335743 & -35.6233569984 & -59.8483255497 \\\\\n $c_3$ & -334.866518168 & -39.0190739652 & -52.8167275564 \\\\\n $q_1$ & 148.201294158 & 24.5427291322 & 29.7104983824 \\\\\n $q_2$ & 1782.00574019 & 171.823102857 & 247.681714807 \\\\\n $q_3$ & 4851.65727491 & 339.396077796 & 566.683604101 \\\\\n\\hline\n\\end{tabular}\n\\caption{Coefficients for the approximants of the first three energy\n eigenvalues of $V(x)=x^2+\\lambda x^4$, using polynomials of degree 3.}\n\\label{x4d3}\n\\end{table}\n\nThe highest relative error with these approximants was obtained for small\nvalues of $\\lambda$. Specifically, the maximum relative error was obtained\naround $\\lambda \\approx 0.2$. In the case of the ground state, the highest\nrelative error was\n\\begin{equation}\n\\left. \\frac{|E_{\\rm app}- E_{\\rm shooting}|}{E_{\\rm shooting}}\n\\right|_{\\lambda=0.17} = 1.05 \\times 10^{-6} \\,\\, .\n\\end{equation} \nThe relative error decreases rapidly for smaller values of $\\lambda$, and of\ncourse, it also decreases when $\\lambda$ increases until the next\nnode at $\\lambda=0.5$. After that, the relative error never becomes higher\nthan $2 \\times 10^{-7}$. For the first and second excited levels,\nthe maximum error around $\\lambda \\approx 0.2$ was about $8 \\times 10^{-7}$\nand $2.4 \\times 10^{-6}$, respectively, and after the node at $\\lambda=0.5$\nthis error is never higher than $4 \\times 10^{-8}$. In fact, the relative\nerror decreases quite rapidly in the case of the first and second excited levels\nfor large values of $\\lambda$, although it does so more slowly in the case of\nthe ground state.\n\nIn all these approximants, we chose $\\mu=2$. This parameter is arbitrary\nexcept for one restriction: the approximants should not have any defects, that\nis, there should not be any poles in the approximant (positive roots of $Q(\\lambda)$)\nwith the corresponding nearby zeros. Notice in table \\ref{x4d3} that with this\nchoice of $\\mu$ all of the coefficients of $Q(\\lambda)$ are positive, which\nwill, of course, guarantee that it has no roots for $\\lambda>0$. \nOther choices of $\\mu$ may lead to\nmixed negative and positive coefficients in $Q(\\lambda)$, which will in\ngeneral lead to positive real roots in this polynomial. There\nis no other restriction on $\\mu$. Among all the values of $\\mu$ that keep\nthe approximant free of defects, one is free to choose the one that\nminimizes the relative errors.\n\nOther than improving the numerical method used to obtain the coefficients of the\nexpansions, there are several ways in which the maximum relative error of the \napproximants can be decreased for all energy levels. 
The\neasiest one is to move one of the nodes. For example, one may choose the\napproximant to have a node at $\\lambda=0.2$ instead of $\\lambda=0.5$. If this\nis done, it can be seen that the maximum relative error is reduced by about a\nhalf for all energy levels. Another possibility is to use the\nderivatives of $E(\\lambda)$ at some of the intermediate points. Finally, one\nmay try an approximant of higher degree, allowing it\nto coincide with the values of $E(\\lambda)$ and its\nderivatives at more points. This last possibility is studied in the next example.\n\n\n\n\\section{Approximants for the sextic anharmonic oscillator}\n\\label{sextic}\n\nFor the sextic anharmonic oscillator \\cite{R9,R10}, the asymptotic expansion consists\nof only two pieces, so the approximant can be written as\n\n\\begin{equation}\nE_{\\rm app}(\\lambda) = (1+\\mu \\lambda)^{1\/4}\\frac{P_a(\\lambda)}{Q(\\lambda)}\n +(1+\\mu \\lambda)^{-1\/4}\\frac{P_b(\\lambda)}{Q(\\lambda)} \\,\\, ,\n\\end{equation}\nwhere $P_a(\\lambda)$, $P_b(\\lambda)$ and $Q(\\lambda)$ are given in equations\n(\\ref{Ps}) and (\\ref{Q}). The corresponding coefficients $a_k$, $b_k$ and\n$q_k$ are found in the same way as for the quartic anharmonic\noscillator. For approximants of degree $N$, we will have $3N+2$\ncoefficients, so taking $n_0$ terms from the power series, $n_i$ terms from the\nseries at the $i$-th intermediate point, and $n_a$ terms from the\nasymptotic expansion, we should have $\\sum_i n_i+n_0+n_a=3N+2$,\nand the coefficients $a_k$, $b_k$ and $q_k$ can be determined using \nequations derived by matching powers in $\\lambda$ in the equation\n\\begin{equation}\nQ(\\lambda)\\sum^{n_0}_{k=0}E_k\\lambda^k =\n(1+\\mu \\lambda)^{1\/4} P_a(\\lambda) +\n(1+\\mu \\lambda)^{-1\/4} P_b(\\lambda) \\,\\, ,\n\\end{equation}\nmatching powers in $\\lambda_{\\alpha_i}$ using\n\\begin{eqnarray}\n\\left(1 + \\sum_{k=1}^N q_k (\\lambda_{\\alpha_i}+\\alpha_i)^k \\right)\n\\sum^{n_i}_{k=0}E^{\\alpha_i}_k\\lambda_{\\alpha_i}^k &=&\n(1+\\mu (\\lambda_{\\alpha_i}+\\alpha_i))^{1\/4} \\sum_{k=0}^N a_k\n(\\lambda_{\\alpha_i}+\\alpha_i)^k \\nonumber \\\\\n&& +\n(1+\\mu (\\lambda_{\\alpha_i}+\\alpha_i))^{-1\/4} \\sum_{k=0}^N b_k\n(\\lambda_{\\alpha_i}+\\alpha_i)^k \\nonumber \\,\\, ,\n\\end{eqnarray}\nand matching powers in $\\lambda'$ in\n\\begin{eqnarray}\n\\left( \\lambda'^N + \\sum_{k=1}^N q_k \\lambda'^{N-k} \\right) \n\\sum_{k=0}^{\\infty} \\tilde{E}_{2k}\\lambda'^{k} &=&\n(\\lambda'+\\mu)^{1\/4} \\sum_{k=0}^N a_k \\lambda'^{N-k} \\,\\, ,\\\\\n\\left( \\lambda'^N + \\sum_{k=1}^N q_k \\lambda'^{N-k} \\right) \n\\sum_{k=0}^{\\infty} \\tilde{E}_{2k+1}\\lambda'^{k} &=&\n(\\lambda'+\\mu)^{-1\/4} \\sum_{k=0}^N b_k \\lambda'^{N-k} \\,\\, .\n\\end{eqnarray}\n\nThe first few coefficients of the power series at $\\lambda=0$ and the asymptotic\nexpansion, obtained using the systems of\ndifferential equations described in sections \\ref{PowerSeries} and\n\\ref{AsymptoticExpansion}, are shown in tables \\ref{Coeffx2x6Pow} and\n\\ref{Coeffx2x6Asy}, respectively. 
The first few coefficients of the power series at $\\lambda=0$ and the asymptotic\nexpansion, obtained using the systems of\ndifferential equations described in sections \\ref{PowerSeries} and\n\\ref{AsymptoticExpansion}, are shown in tables \\ref{Coeffx2x6Pow} and\n\\ref{Coeffx2x6Asy}, respectively. As expected, the coefficients of the power\nseries are exact, and the coefficients of the asymptotic expansion are\nobtained numerically.\n\n\\begin{table}\n\\begin{tabular}{|c|c|c|c|}\n\\hline\nCoefficients & $n=0$ & $n=1$ & $n=2$ \\\\ \n\\hline \\hline\n $E_0$ & 1 & 3 & 5 \\\\\n $E_1$ & 15\/8 & 105\/8 & 375\/8 \\\\\n $E_2$ & -3495\/128 & -47145\/128 & -295095\/128 \\\\\n $E_3$ & 1239675\/1024 & 27817125\/1024 & 276931275\/1024 \\\\\n $E_4$ & -3342323355\/32768 & -110913018405\/32768 & -1626954534555\/32768 \\\\\n\\hline\n\\end{tabular}\n\\caption{Exact coefficients of the power series for the first three energy\n levels of $V(x)=x^2 + \\lambda x^6$.}\n\\label{Coeffx2x6Pow}\n\\end{table}\n\n\\begin{table}\n\\begin{tabular}{|c|c|c|c|}\n\\hline\nCoefficients & $n=0$ & $n=1$ & $n=2$ \\\\ \n\\hline \\hline\n $\\tilde{E}_0$ & 1.144802430723 & 4.338598612643 & 9.073084583078 \\\\\n $\\tilde{E}_1$ & 0.307920324 & 0.718220191 & 0.904435602 \\\\\n $\\tilde{E}_2$ & -0.018541674 & -0.024395762 & -0.010249211 \\\\\n $\\tilde{E}_3$ & 0.001559745 & 0.00099946565 & -0.000749318 \\\\\n $\\tilde{E}_4$ & -0.000123969 & -0.000026329 & 0.000107946 \\\\\n\\hline\n\\end{tabular}\n\\caption{Coefficients of the asymptotic expansion obtained by solving the\n differential equations using the shooting method for $V(x)=x^2 + \\lambda x^6$.}\n\\label{Coeffx2x6Asy}\n\\end{table}\n\nThe degree of the polynomials in the approximant was first chosen to be\n$N=5$. The coefficients of the approximants were calculated choosing different\nintermediate points (nodes) for the first three energy levels, together with\nthe first four terms of the power series and the first five terms of the\nasymptotic expansion. For the\nground state, the intermediate points were $\\lambda=0.1$, $\\lambda=0.2$,\n$\\lambda=0.5$, $\\lambda=1$, $\\lambda=2$, $\\lambda=5$ and $\\lambda=10$, and\nonly the energy eigenvalues at those points were used, i.e., we did not use any\nof the derivatives at these points. Furthermore, we chose $\\mu=1\/2$. \nFor the first and second excited levels, we used $E^1_0$, $E^2_0$, $E^5_0$, \n$E^{10}_0$, $E^{20}_0$, $E^{0.1}_0$, $E^{0.1}_1$ \nand $E^{0.01}_0$, with $\\mu=0.95$. Notice that in these cases, the derivative at\n$\\lambda=0.1$, i.e., $E^{0.1}_1$, has also been used. With these\nchoices, the values of the coefficients in the approximants are given in table\n\\ref{x6d5}. With these approximants, the highest relative errors were obtained\nat small values of $\\lambda$. Specifically, at $\\lambda=0.014$ the\nrelative error for the ground state eigenvalue is $2.55 \\times 10^{-5}$. For even\nsmaller values of $\\lambda$, this error decreases rapidly. 
For $1<\\lambda<5$\nthat maximum error is $1.3 \\times 10^{-7}$, and for $\\lambda>5$ the maximum\nerror was $7 \\times 10^{-8}$, decreasing rapidly for large values of\n$\\lambda$.\n\n\\begin{table}[h]\n\\begin{tabular}{|c|c|c|c|}\n\\hline\nCoefficients & $n=0$ & $n=1$ & $n=2$ \\\\ \n\\hline \\hline\n $a_0$ & -228343.425175410234 & -1455854.05235538211 & -1636184.16769173318 \\\\\n $a_1$ & -51504.1866802877753 & -523067.617460085793 & -319382.170585718291 \\\\\n $a_2$ & -3246.30836836582236 & -82992.9062424631909 & -76246.2680853710652 \\\\\n $a_3$ & 17641.2930769775453 & 564652.836765077742 & 1736863.85800628006 \\\\\n $a_4$ & 40455.7925641666977 & 2737870.84947363967 & 8425191.38724236837 \\\\\n $a_5$ & 25845.5374941939783 & 3765927.01323457233 & 10377310.4646700931 \\\\\n $b_0$ & 228344.425175410234 & 1455857.05235538211 & 1636189.16769173318 \\\\\n $b_1$ & 108718.883825186570 & 1215348.76068812833 & 1098149.56754718475 \\\\\n $b_2$ & 12471.9577982789445 & 211783.949326608282 & 156879.990271492461 \\\\\n $b_3$ & 8971.90069757654071 & 247389.200455627253 & 461008.702437356268 \\\\\n $b_4$ & 12714.5141045106691 & 765825.922719107915 & 1349548.74023090975 \\\\\n $b_5$ & 4915.62090727820199 & 607633.680665566723 & 1008252.48843727777 \\\\\n $q_1$ & 126.840851046236208 & 245.543619745344247 & 306.370961578640294 \\\\\n $q_2$ & 3258.74684448027568 & 13846.7357289482276 & 20215.7966313260916 \\\\\n $q_3$ & 21339.1202042371419 & 208223.456783358953 & 314306.763606766761 \\\\\n $q_4$ & 39515.8524539180998 & 853339.703125454811 & 1215186.64688146943 \\\\\n $q_5$ & 18984.4284445198520 & 856945.745673868394 & 1129173.69341497105 \\\\\n\\hline\n\\end{tabular}\n\\caption{Coefficients for the approximants of the first three energy\n eigenvalues of $V(x)=x^2+\\lambda x^6$, using polynomials of degree 5.}\n\\label{x6d5}\n\\end{table}\n\n\nAs it can be seen, in both the quartic and sextic anharmonic oscillators,\nthe region of $\\lambda$ where it is more difficult to\nachieve high accuracy is for $\\lambda<0.5$. We tried also\napproximants of degree 6 for the sextic anharmonic oscillator, and it\nwas possible to reduce the relative error in this region by a factor of\n1\/2. The coefficients of the approximants can be found in table\n\\ref{x6d6}. For $n=0,2$, we used $\\mu=1\/2$, while for $n=1$, we used\n$\\mu=1$. For $n=0$ the approximant was built using the first four terms of the\npower series around $\\lambda=0$ and the first five terms of the asymptotic\nseries, together with $E^1_0$, $E^2_0$, $E^5_0$, $E^{10}_0$, $E^{20}_0$,\n$E^{1\/2}_0$, $E^{1\/2}_1$, $E^{0.2}_0$, $E^{0.1}_0$, $E^{0.1}_1$\nand $E^{0.01}_0$. For $n=1,2$, the same terms were taken, except that the\nfourth term of the power series around $\\lambda=0$ was replaced by\n$E^{1\/2}_2$ (i.e. the second derivative at $\\lambda=1\/2$). The improvement in \nthe accuracy of the approximants for $\\lambda<0.5$ is related to having used\nmore terms in the expansions around points in this region, including first and\nsecond derivatives.\n\nThe fact that\nthe approximants become less accurate for small values of $\\lambda$ may be related to the\nanalytic properties of the exact function $E(\\lambda)$ \\cite{R7}, although\nthis is far from clear. As it is well known,\nthe perturbative expansion (i.e., the power series around $\\lambda=0$) is divergent for any $\\lambda \\neq\n0$. 
However, in previous works where quasi-rational approximants have been\nused, it has been found that the main factor determining the accuracy of\nan approximant is the accuracy of the coefficients in the expansions used to\nbuild it. For\nexample, in references \\cite{MC1} and \\cite{MC2} series with radius of convergence equal to zero\nwere used, yet the approximants were very accurate because the coefficients of\nthese series were determined with very high precision.\n\n\n\\begin{table}[h]\n\\begin{tabular}{|c|c|c|c|}\n\\hline\nCoefficients & $n=0$ & $n=1$ & $n=2$ \\\\ \n\\hline \\hline\n $a_0$ & 10662630.6230456957 & -4140725.82498950878 & -190986117.577446118 \\\\\n $a_1$ & -792418.407061574860 & -3392187.29848733455 & -96347617.9985940628 \\\\\n $a_2$ & -770199.048057820101 & -691764.158783859149 & -11188867.9611306088 \\\\\n $a_3$ & -30901.7803416591912 & 352514.275425898392 & 582001.151582893834 \\\\\n $a_4$ & 663802.804751486527 & 1809854.68977494829 & 4102541.85172044638 \\\\\n $a_5$ & 1304251.65718790436 & 3042011.16842997607 & 6466260.98769018109 \\\\\n $a_6$ & 768824.185718239332 & 443056.032781395005 & 2682339.11910005685 \\\\\n $b_0$ & -10662629.6230456957 & 4140728.82498950878 & 190986122.577446118 \\\\\n $b_1$ & -18730301.9075998957 & 5463158.61490945598 & 144095348.884473491 \\\\\n $b_2$ & 1312291.68171628629 & 1901041.02272365822 & 29373600.9144712753 \\\\\n $b_3$ & 289403.682268640790 & 314656.648970897322 & 1721958.42036808095 \\\\\n $b_4$ & 312257.358605665963 & 547382.135140547781 & 658971.157529230082 \\\\\n $b_5$ & 397391.953291475945 & 540767.205553174330 & 644912.329241756943 \\\\\n $b_6$ & 146224.401105478984 & 73344.3715121889572 & 189069.456358376232 \\\\\n $q_1$ & 207.057939859483339 & 198.176309122347673 & 230.798303579776449 \\\\\n $q_2$ & 10393.8027931755210 & 9463.18894774927319 & 11496.2792134605234 \\\\\n $q_3$ & 157632.217425587120 & 133763.388618245199 & 150748.225437877810 \\\\\n $q_4$ & 775120.602962958890 & 586955.795810181245 & 587462.218227788458 \\\\\n $q_5$ & 1249527.64792219401 & 727254.753026886137 & 723876.015243016534 \\\\\n $q_6$ & 564727.576025957156 & 102119.617954584846 & 248600.057576103110 \\\\\n\\hline\n\\end{tabular}\n\\caption{Coefficients for the approximants of the first three energy\n eigenvalues of $V(x)=x^2+\\lambda x^6$, using polynomials of degree 6.}\n\\label{x6d6}\n\\end{table}\n\n\\section{Conclusions}\n\nIn this paper, it has been shown that accurate approximants for the energy\neigenvalues of potentials of the form $V(x)=Ax^a+Bx^b$ can be found using a\nmulti-point quasi-rational approximation technique. The approximants are\nconstructed using rational functions, together with auxiliary functions\nintroduced to be able to reproduce the behavior of the eigenvalues for large\n$\\lambda$. The coefficients of the rational functions are found using the\npower series of the eigenvalues, not only at $\\lambda=0$, but also for\narbitrary finite values of $\\lambda$, together with the asymptotic\nexpansion. These expansions are found using a system of differential\nequations, which in the case of the power series at $\\lambda=0$, represents an\nalternative way to find the perturbative expansion. As\nexamples, approximants for the lowest energy levels of the quartic and sextic\nanharmonic oscillators were obtained. The approximants were fairly simple,\nsince the degree of the polynomials used was not too high. 
In particular, for\nthe quartic anharmonic oscillator, it was shown that it is possible to obtain\napproximants with polynomials of degree 3 for which the relative error is not\nhigher than $\\sim 3 \\times 10^{-6}$. In the case of the sextic anharmonic\noscillator, polynomials of degree 5 and 6 were tried, and it was shown that in the second\ncase it was possible to improve the accuracy for $\\lambda<0.5$ while\nmaintaining the accuracy for larger values of $\\lambda$.\n\nIn this technique, one has a lot of freedom in choosing\nthe intermediate points, as well as in how many terms to take from each of the\nseries. This gives many possibilities for reducing the relative errors, and\nit would be interesting to study whether it is possible to do this systematically.\nIt would also be interesting to find methods that improve the accuracy\nof the coefficients in the expansions. For example, if better numerical solutions \nfor the expansion functions can be found, this should lead to better coefficients \nand therefore to better approximants. As mentioned before, experience with quasi-rational\napproximants has shown that the accuracy of the \ncoefficients in the expansions is the main factor influencing the accuracy of multi-point \nquasi-rational approximations. For simplicity, here we have limited ourselves\nto the use of the shooting method to solve the systems of differential\nequations. On the other hand, the first equation in each of the systems, \ni.e., equations (\\ref{PS_DE1}), (\\ref{PSa_DE1}) and\n(\\ref{AsyDE1}), is just a regular Schr\\\"odinger equation, and many methods have\nbeen developed that allow one to obtain the corresponding eigenvalues with very high precision \n\\cite{R4,A0,A1,A2,A3,A4,A5,A6,A7,A8,A9}. \nIn fact, we could have used any of these methods to\nobtain the coefficients $E_0^{\\alpha}$ and $\\tilde{E}_0$, but it is not clear how to\nextend these methods to solve the remaining equations in the systems.\nIn the future, we plan to study these issues in more detail, \nand apply this technique to other potentials of interest\nin both physics and chemistry.\n\n\n\\section{Introduction}\n\n\nThe tremendous progress made by deep neural networks in solving diverse tasks has driven significant research and industry interest in deploying efficient versions of these models. \nTo this end, entire families of model compression methods have been developed, such as pruning~\\cite{hoefler2021sparsity} and quantization~\\cite{gholami2021survey}, which are now accompanied by hardware and software support~\\cite{vanholder2016efficient, chen2018tvm,david2020tensorflow, mishra2021accelerating, graphcore}. \n\nNeural network pruning, which is the focus of this paper, is the compression method with arguably the longest history~\\cite{lecun1990optimal}. The basic goal of pruning is to obtain neural networks for which many connections are removed by being set to zero, while maintaining the network's accuracy. Myriad pruning methods have been proposed---please see~\\cite{hoefler2021sparsity} for an in-depth survey---and it is currently understood that many popular networks can be compressed by more than an order of magnitude, in terms of their number of connections, without significant accuracy loss. \n\nMany accurate pruning methods require a fully-accurate, dense variant of the model, from which weights are subsequently removed. 
A shortcoming of this approach is the fact that the memory and computational savings due to compression are only available for the \\emph{inference}, post-training phase, and not during training itself. This distinction becomes important especially for large-scale modern models, which can have millions or even billions of parameters, and for which fully-dense training can have high computational and even non-trivial environmental costs~\\cite{strubell2020energy}. \n\nOne approach to address this issue is \\emph{sparse training}, which essentially aims to remove connections from the neural network as early as possible during training, while still matching, or at least approximating, the accuracy of the fully-dense model. For example, the RigL technique~\\cite{evci2020rigging} randomly removes a large fraction of connections early in training, and then proceeds to optimize over the sparse support, providing savings due to sparse back-propagation. Periodically, the method re-introduces some of the weights during the training process, based on a combination of heuristics, which requires taking full gradients. \nThese works, as well as many recent sparse training approaches~\\cite{bellec2017deep, mocanu2018scalable, jayakumar2020top}, which we cover in detail in the next section, have shown empirically that non-trivial computational savings, usually measured in theoretical FLOPs, can be obtained using sparse training, and that the optimization process can be fairly robust to sparsification of the support. \n\nAt the same time, this line of work still leaves intriguing open questions. The first is \\emph{theoretical}: to our knowledge, none of the methods optimizing over sparse support, and hence providing training speed-up, have been shown to have convergence guarantees. \nThe second is \\emph{practical}, and concerns a deeper understanding of the relationship between the densely-trained model, and the sparsely-trained one. Specifically, (1)~most existing sparse training methods still leave a non-negligible accuracy gap, relative to dense training, or even post-training sparsification; and (2)~most existing work on sparsity requires significant changes to the training flow, and focuses on maximizing global accuracy metrics; thus, we lack understanding when it comes to \\emph{co-training} sparse and dense models, as well as with respect to correlations between sparse and dense models \\emph{at the level of individual predictions}.\n\n\\parhead{Contributions.} In this paper, we take a step towards addressing these questions. \nWe investigate a general hybrid approach for sparse training of neural networks, which we call \\emph{Alternating Compressed \/ DeCompressed (AC\/DC)} training. AC\/DC performs \\emph{co-training of sparse and dense models}, and can return both an accurate \\emph{sparse} model, and a \\emph{dense} model, which can recover the dense baseline accuracy via fine-tuning. We show that a variant of AC\/DC ensures convergence for general non-convex but smooth objectives, under analytic assumptions. Extensive experimental results show that it provides state-of-the-art accuracy among sparse training techniques at comparable training budgets, and can even outperform \\emph{post-training} sparsification approaches when applied at high sparsities. \n \nAC\/DC builds on the classic \\emph{iterative hard thresholding (IHT)} family of methods for sparse recovery~\\cite{blumensath2008iterative}. 
As the name suggests, AC\/DC works by alternating standard \emph{dense} training phases with \emph{sparse} phases where optimization is performed exclusively over a fixed \emph{sparse support}, and a subset of the weights and their gradients are fixed at zero, leading to computational savings. (This is in contrast to \emph{error feedback} algorithms, e.g.~\cite{courbariaux2016binarized, lin2019dynamic}, which require computing fully-dense gradients, even though the weights themselves may be sparse.)\nThe process uses the same hyper-parameters, including the number of epochs, as regular training, and the frequency and length of the phases can be safely set to standard values, e.g. 5--10 epochs. \nWe ensure that training ends on a \emph{sparse} phase, and return the resulting \emph{sparse} model, as well as the last \emph{dense} model obtained at the end of a \emph{dense} phase. \nThis dense model may be additionally fine-tuned for a short period, leading to a more accurate \emph{dense-finetuned} model, which we usually find to match the accuracy of the \emph{dense baseline}. \n\nWe emphasize that algorithms alternating sparse and dense training phases for deep neural networks have been previously investigated~\cite{jin2016training, han2016dsd}, but with the different goal of using sparsity as a regularizer to obtain \emph{more accurate dense models}. Relative to these works, our goals are two-fold: we aim to produce highly-accurate, highly-sparse models, but also to maximize the fraction of training time for which optimization is performed over a sparse support, leading to computational savings. \nFurther, we are the first to provide convergence guarantees for variants of this approach. \n\nWe perform an extensive empirical investigation, showing that AC\/DC provides consistently good results on a wide range of models and tasks (ResNet~\cite{he2016deep} and MobileNets~\cite{howard2017mobilenets} on the ImageNet~\cite{imagenet} \/ CIFAR~\cite{cifar100} datasets, and Transformers~\cite{vaswani2017attention, dai2019transformer} on WikiText~\cite{wikitext103}), under standard values of the training hyper-parameters. \nSpecifically, when executed on the same number of training epochs,\nour method outperforms all previous \emph{sparse training} methods in terms of the accuracy of the resulting sparse model, often by significant margins.\nThis comes at a slightly higher theoretical computational cost relative to prior sparse training methods, although AC\/DC usually reduces training FLOPs to 45--65\% of the dense baseline. \nAC\/DC is also close to the accuracy of state-of-the-art post-training pruning methods~\cite{kusupati2020soft, singh2020woodfisher} at medium sparsities (80\% and 90\%); surprisingly, it \emph{outperforms} them in terms of accuracy at higher sparsities. In addition, AC\/DC is flexible with respect to the structure of the ``sparse projection'' applied at each compressed step: we illustrate this by obtaining \emph{semi-structured} pruned models using the 2:4 sparsity pattern efficiently supported by new NVIDIA hardware~\cite{mishra2021accelerating}. \nFurther, we show that the resulting sparse models can provide significant real-world speedups for DNN inference on CPUs~\cite{NM}. \n\vspace{-0.2em}\n\nAn interesting feature of AC\/DC is that it allows for accurate dense\/sparse co-training of models. 
\nSpecifically, at medium sparsity levels (80\% and 90\%), the method allows the co-trained dense model to recover the dense baseline accuracy via a short fine-tuning period.\nIn addition, dense\/sparse co-training provides us with a lens into the training dynamics, in particular relative to the sample-level accuracy of the two models, but also in terms of the dynamics of the sparsity masks. \nIn particular, we observe that co-trained sparse\/dense pairs have higher sample-level agreement than sparse\/dense pairs obtained via post-training pruning, and that weight masks still change later in training. \n\nAdditionally, we probe the accuracy differences between sparse and dense models by examining their ``memorization'' capacity~\cite{zhang2016understanding}. \nFor this, we perform dense\/sparse co-training in a setting where a small number of valid training samples have \emph{corrupted labels}, and examine how these samples are classified during dense and sparse phases, respectively. \nWe observe that the sparse model is less able to ``memorize'' the corrupted labels, and instead often classifies the corrupted samples to their \emph{true} (correct) class. \nBy contrast, during dense phases the model can easily ``memorize'' the corrupted labels. (Please see Figure~\ref{fig:random-correct-no-da-main} for an illustration.) \nThis suggests that one reason for the higher accuracy of dense models is their ability to ``memorize'' hard-to-classify samples.\n\n\n\n\vspace{-0.5em}\n\section{Related Work}\n\vspace{-0.5em}\n\nThere has recently been tremendous research interest in pruning techniques for DNNs; we direct the reader to the recent surveys of~\cite{gale2019state} and~\cite{hoefler2021sparsity} for a more comprehensive overview. \nRoughly, most DNN pruning methods can be split into (1) \emph{post-training} pruning methods, which start from an accurate dense baseline and remove weights, followed by fine-tuning; \nand (2) \emph{sparse training} methods, which perform weight removal during the training process itself. (Other categories such as \emph{data-free} pruning methods~\cite{lee2018snip, tanaka2020pruning} exist, but they are beyond our scope.) We focus on \emph{sparse training}, although we will also compare against state-of-the-art post-training methods. \n\n\nArguably, the most popular metric for weight removal is \emph{weight magnitude}~\cite{hagiwara1994, han2015learning, zhu2017prune}. Better-performing approaches exist, such as second-order metrics~\cite{lecun1990optimal, hassibi1993optimal, 2017-dong, singh2020woodfisher} or Bayesian approaches~\cite{molchanov2017variational}, but they tend to have higher computational and implementation cost. \n\nThe general goal of \emph{sparse training} methods is to perform both the forward (inference) pass \emph{and the backpropagation} pass over a sparse support, leading to computational gains during the training process as well.\nOne of the first approaches to maintain sparsity throughout training was Deep Rewiring~\cite{bellec2017deep}, where SGD steps applied to positive weights are augmented with random walks in parameter space, followed by inactivating negative weights. To maintain a constant sparsity level, randomly chosen inactive connections are then re-introduced in the ``growth'' phase. 
\nSparse Evolutionary Training (SET)~\cite{mocanu2018scalable} introduces a non-uniform sparsity distribution across layers, which scales with the number of input and output channels, and trains sparse networks by pruning the weights with smallest magnitude and re-introducing some weights randomly. \nRigL~\cite{evci2020rigging} prunes weights at random after a warm-up period, and then periodically performs weight re-introduction using a combination of connectivity- and gradient-based statistics, which requires evaluating full gradients. RigL can lead to state-of-the-art accuracy results even compared to post-training methods; however, to achieve high accuracy it requires significant additional data passes (e.g. 5x) relative to the dense baseline. \nTop-KAST~\cite{jayakumar2020top} alleviated the drawback of periodically having to evaluate dense gradients by updating the sparsity masks using gradients \emph{of reduced sparsity} relative to the weight sparsity. \nThe latter two methods set the state-of-the-art for sparse training: when executing for the same number of epochs as the dense baseline, they provide computational reductions on the order of 2x, while the accuracy of the resulting sparse models is lower than that of leading post-training methods, executed at the same sparsity levels. \nTo our knowledge, none of these methods have convergence guarantees. \n\nAnother approach towards faster training is \emph{training sparse networks from scratch}. The masks are updated by continuously pruning and re-introducing weights. For example, \cite{lin2019dynamic} uses magnitude pruning after applying SGD on the dense network, whereas \n\cite{dettmers2019sparse} update the masks by re-introducing weights with the highest gradient momentum. STR~\cite{kusupati2020soft} learns a separate pruning threshold for each layer and allows sparsity both during forward and backward passes; however, the desired sparsity cannot be explicitly imposed, and the network has low sparsity for a large portion of training. These methods can lead to only limited computational gains, since they either require dense gradients, or the sparsity level cannot be imposed.\nBy comparison, our method provides models of similar or better accuracy at the same sparsity, with computational reductions. \nWe also obtain dense models that match the baseline accuracy, with a fraction of the baseline FLOPs. \n\n\nThe idea of alternating sparse and dense training phases has been examined before in the context of neural networks, but with the goal of using temporary sparsification as a regularizer. \nSpecifically, \emph{Dense-Sparse-Dense (DSD)}~\cite{han2016dsd} proposes to first \emph{train a dense model to full accuracy}; this model is then sparsified via magnitude pruning; next, optimization is performed over the sparse support, followed by an additional optimization phase over the full dense support. Thus, this process is used as a regularization mechanism for the dense model, which results in relatively small, but consistent accuracy improvements relative to the original dense model. In \cite{jin2016training}, the authors propose a similar approach to DSD, but alternate sparse phases during the regular training process. The resulting process is similar to AC\/DC, but, importantly, the goal of their procedure is to return a \emph{more accurate dense model}. (Please see their Algorithm 1.) 
For this, the authors use relatively low sparsity levels, and gradually increase sparsity during optimization; they observe accuracy improvements for the resulting dense models, at the cost of increasing the total number of epochs of training. \nBy contrast, our focus is on obtaining accurate \emph{sparse} models, while reducing computational cost, and executing the dense training recipe. \nWe execute at higher sparsity levels, and on larger-scale datasets and models. In addition, we show that the method works for other sparsity patterns, e.g. the 2:4 semi-structured pattern~\cite{mishra2021accelerating}. \n\nMore broadly, the Lottery Ticket Hypothesis (LTH)~\cite{frankle2018lottery} states that sparse networks can be trained in isolation \emph{from scratch} to the same performance as a post-training pruning baseline, by starting from the ``right'' weight and sparsity mask initializations, optimizing only over this sparse support.\nHowever, such initializations usually require the availability of the fully-trained dense model, falling under \emph{post-training} methods. \nThere is still active research on extending these intriguing findings to large-scale models and datasets~\cite{gale2019state, frankle2020linear}. \nPrevious works \cite{gale2019state, zhu2017prune} have studied progressive sparsification during regular training, which may also provide training-time speed-ups once a sufficient sparsity level has been reached. However, AC\/DC generally achieves a better trade-off between validation accuracy and training time speed-up, compared to these methods. \n\nParallel work by~\cite{mohtashami2021simultaneous} investigates a related approach, but focuses on low-rank decompositions for Transformer models. \nBoth their analytical approach and their application domain are different from those of the current work. \n\n\n\section{Alternating Compressed \/ DeCompressed (AC\/DC) Training}\n\label{sec:method}\n\n\subsection{Background and Assumptions}\n\label{subsec:theory}\n\nObtaining \emph{sparse} solutions to optimization problems is a problem of interest in several areas~\cite{candes2006near, blumensath2008iterative, foucart2011hard}, where the goal is to minimize a function $f:\mathbb{R}^N \rightarrow \mathbb{R}$ under sparsity constraints:\n\begin{equation}\n \min_{\theta \in \mathbb{R}^N} f(\theta) \quad \text{s.t.} \quad \| \theta \|_0 \leq k\,.\n\label{eq:optim_l0}\n\end{equation}\nFor the case of $\ell_2$ regression, $f(\theta) = \| b - A\theta\|^2_2$, a solution has been provided by Blumensath and Davies~\cite{blumensath2008iterative}, known as the \emph{Iterative Hard Thresholding (IHT)} algorithm, and subsequent work \cite{foucart2011hard, foucart2012sparse, yuan2014gradient} provided theoretical guarantees for the linear operators used in compressed sensing. \nThe idea consists of alternating gradient descent (GD) steps and applications of a thresholding operator to ensure the $\ell_0$ constraint is satisfied. \nMore precisely, $T_k$ is defined as the ``top-k'' operator, which keeps the largest $k$ entries of a vector $\theta$ in absolute value, and replaces the rest with $0$. The IHT update at step $t+1$ has the following form:\n\begin{equation}\n \theta_{t + 1} = T_k(\theta_t - \eta \nabla f(\theta_t)).\n\label{eq:iht-step}\n\end{equation}\n\n\nMost convergence results for IHT assume deterministic gradient descent steps. For DNNs, stochastic methods are preferred, so we describe and analyze a stochastic version of IHT. 
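In code, one IHT step is simply a gradient step followed by global magnitude truncation. The following minimal NumPy sketch (our illustration; the toy objective and all names are ours) implements the update (\ref{eq:iht-step}), with a noisy gradient standing in for the stochastic variant introduced next:\n\begin{verbatim}\nimport numpy as np\n\ndef top_k(theta, k):\n    # T_k: keep the k largest-magnitude entries, zero out the rest\n    out = np.zeros_like(theta)\n    idx = np.argpartition(np.abs(theta), -k)[-k:]\n    out[idx] = theta[idx]\n    return out\n\ndef iht_step(theta, g, k, lr):\n    # theta_{t+1} = T_k(theta_t - eta * g_theta)\n    return top_k(theta - lr * g, k)\n\n# toy example: recover a 3-sparse minimizer of f = ||theta - theta*||^2 \/ 2\nrng = np.random.default_rng(0)\ntheta_star = np.zeros(100)\ntheta_star[:3] = [1.0, -2.0, 0.5]\ntheta = np.zeros(100)\nfor _ in range(200):\n    g = (theta - theta_star) + 0.01 * rng.standard_normal(100)  # noisy gradient\n    theta = iht_step(theta, g, k=10, lr=0.5)\n\end{verbatim}\nIn this toy run, $\theta$ converges to a neighborhood of the sparse minimizer whose radius is set by the gradient noise, mirroring the variance term in the convergence bound proved below. 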
\n\n\n\\parhead{Stochastic IHT.} \nWe consider functions $f: \\mathbb{R}^N \\rightarrow \\mathbb{R}$, for which we can compute stochastic gradients $g_{\\theta}$, which are unbiased estimators of the true gradient $\\nabla f(\\theta)$. Define the \\emph{stochastic} IHT update as:\n\\begin{equation}\n \\theta_{t+1} = T_k(\\theta_t - \\eta g_{\\theta_t}).\n\\label{eq:stochastic-iht}\n\\end{equation}\n\nThis formulation covers the practical case where the stochastic gradient $g_{\\theta}$ corresponds to a \\emph{mini-batch} stochastic gradient. Indeed, as in practice $f$ takes the form $f(\\theta) = \\frac{1}{m}\\sum_{i=1}^m f(\\theta; x_i)$, where $S=\\{x_1, \\ldots, x_m\\}$ are data samples, the stochastic gradients obtained via backpropagation take the form $\\frac{1}{\\vert B\\vert}\\sum_{i \\in B} \\nabla f(\\theta; x_i)$, where $B$ is a sampled mini-batch. \nWe aim to prove strong convergence bounds for stochastic IHT, under common assumptions that arise in the context of training DNNs. \n\n\\parhead{Analytical Assumptions.} Formally, our analysis uses the following assumptions on $f$.\n\\begin{enumerate}\n \\item Unbiased gradients with variance $\\sigma$:\n $\\mathbb{E}[g_{\\theta} | \\theta] = \\nabla f(\\theta),\\textnormal{ and } \\, \\mathbb{E}[ \\| g_{\\theta} - \\nabla f(\\theta) \\|^2 ] \\leq \\sigma^2\\,.$\n \\item Existence of a $k^*$-sparse minimizer $\\theta^*$:\n $\\exists \\theta^* \\in \\arg\\min_{\\theta} f(\\theta), \\textnormal{ s.t. } \\|\\theta^*\\|_0 \\leq k^*\\,.$\n \\item For $\\beta > 0$, the $\\beta$-smoothness condition when restricted to $t$ coordinates ($(t,\\beta)$-smoothness):\n \\begin{equation} \n f(\\theta + \\delta) \\leq f(\\theta) + \\nabla f(\\theta)^\\top \\delta + \\frac{\\beta}{2} \\|\\delta\\|^2, \\, \\textnormal{ for all } \\theta, \\delta \\textnormal{ s.t. } \\|\\delta\\|_0 \\leq t\\,.\n \\end{equation}\n \\item For $\\alpha > 0$ and number of indices $r$, the \\emph{$r$-concentrated Polyak-\\L ojasiewicz ($(r, \\alpha)$-CPL) condition}:\n \\begin{equation}\\label{eq:conc-pl}\n \\|T_{r}(\\nabla f(\\theta))\\| \\geq \\frac{\\alpha}{2} (f(\\theta) - f(\\theta^*))\\,, \\textnormal{ for all } \\theta.\n \\end{equation}\n\\end{enumerate}\n\nThe first assumption is standard in stochastic optimization, while the existence of very sparse minimizers is a known property in over-parametrized DNNs~\\cite{frankle2018lottery}, and is the very premise of our study. \nSmoothness is also a standard assumption, e.g.~\\cite{lin2019dynamic}---we only require it along \\emph{sparse} directions, which is a strictly weaker assumption. \nThe more interesting requirement for our convergence proof is the $(r, \\alpha)$-CPL condition in Equation (\\ref{eq:conc-pl}), which we now discuss in detail. \n\n\nThe standard Polyak-\\L ojasiewicz (PL) condition~\\cite{karimi2016linear} is common in non-convex optimization, and versions of it are essential in the analysis of DNN training~\\cite{liu2020toward, allen2019convergence}. \nIts standard form states that small gradient norm, i.e. approximate stationarity, implies closeness to optimum in function value. \nWe require a slightly stronger version, in terms of the norm of the gradient contributed by its largest coordinates in absolute value. This restriction appears necessary for the success of IHT methods, as the sparsity enforced by the truncation step automatically reduces the progress ensured by a gradient step to an amount proportional to the norm of the top-$k$ gradient entries. 
This strengthening of the PL condition is supported both theoretically, by the mean-field view, which argues that gradients are sub-gaussian~\cite{shevchenko2020landscape}, and by empirical validations of this behaviour~\cite{alistarh2018convergence, shi2019understanding}. \n\n\nWe are now ready to state our main analytical result. \n\n\begin{restatable}{thm}{mainthm}\n\label{thm:iht-pl}\nLet $f:\mathbb{R}^N \rightarrow \mathbb{R}$ be a function with a $k^*$-sparse minimizer $\theta^*$. \nLet $\beta > \alpha > 0$ be parameters, let $k = C\cdot k^* \cdot (\beta\/\alpha)^2$ for some appropriately chosen constant $C$, and suppose that $f$ is $(2k+3k^*,\beta)$-smooth and $(k^*, \alpha)$-CPL.\nFor initial parameters $\theta_{0}$ and precision $\epsilon>0$, \ngiven access to stochastic gradients with variance $\sigma$,\nstochastic IHT~(\ref{eq:stochastic-iht}) converges\nin $O\left(\frac{\beta}{\alpha}\cdot\ln\frac{f\left(\theta_{0}\right)-f\left(\theta^{*}\right)}{\epsilon}\right)$\niterations to a point $\theta$ with $\|\theta\|_0\leq k$,\nsuch that\n\vspace{-0.2cm}\n\begin{equation*}\n \mathbb{E}\left[f\left(\theta\right)-f\left(\theta^{*}\right)\right]\leq\epsilon+\frac{16\sigma^{2}}{\alpha}.\n\end{equation*}\n\end{restatable} \n\n\nAssuming a fixed objective function $f$ and tolerance $\epsilon$, we can obtain lower loss and faster running time either by increasing the support $k$ demanded from our approximate minimizer $\theta$ relative to the optimal $k^*$, or by reducing the gradient variance. \nWe provide a complete proof of this result in the Appendix. Our analysis approach also works in the absence of the CPL condition (Theorem~\ref{thm:iht-nonconv}), \nin which case we prove that a version of the algorithm can find sparse nearly-stationary points.\nAs a bonus, we also simplify existing analyses for IHT and extend them to the stochastic case (Theorem~\ref{thm:vanilla-iht-stoch}). \nOur results also show that, under our assumptions, \emph{error feedback}~\cite{lin2019dynamic} is not necessary for recovering good sparse minimizers; this has practical implications, as it allows us to perform fully-sparse back-propagation in sparse optimization phases. \nNext, we discuss our practical implementation, and its connection to these theoretical results. \n\n\n\begin{figure*}[t]\n \centering\n \includegraphics[height=0.9in]{figures\/IHT_diagram_stripes.pdf}\n \caption{The AC\/DC training process. After a short warm-up, we alternately prune to maximum sparsity and restore the pruned weights. 
The plot shows the sparsity and validation accuracy throughout the process for a sample run on ResNet50\/ImageNet at 90\% sparsity.}\n \label{fig:ac-dc}\n\end{figure*}\n\n\begin{algorithm}\n\small{\n\caption{Alternating Compressed\/Decompressed (AC\/DC) Training \label{alg:iht}}\n\n\begin{algorithmic}[1]\n\n\Require Weights $\theta \in \mathbb{R}^N$, data $S$, sparsity $k$, compression phases $\mathcal{C}$, decompression phases $\mathcal{D}$\n\n\n\State Train the weights $\theta$ for $\Delta_w$ epochs \Comment{Warm-up phase}\n\While{epoch $\leq$ max epochs}\n \If{entered a compression phase} \n \State $\theta \leftarrow T_k(\theta)$ \Comment{apply compression (top-k) operator on weights}\n \State $m \leftarrow \mathbbm{1}[\theta_i \neq 0]$ \Comment{create masks}\n \EndIf\n \n \If{entered a decompression phase}\n \State $m \leftarrow \mathbbm{1}_N$ \Comment{reset all masks}\n \EndIf\n \State $\theta \leftarrow \theta \odot m$ \Comment{apply the masks (ensure sparsity for compression phases)}\n \State $\tilde{\theta} \leftarrow \{\theta_i | m_i \neq 0, 1\leq i \leq N \}$ \Comment{get the support for the gradients}\n \For{x mini-batch in $S$}\n \State $\theta \leftarrow \theta - \eta \nabla_{\tilde{\theta}} f(\theta; x)$ \Comment{optimize the active weights}\n \EndFor\n \State epoch $\leftarrow$ epoch $+1$\n\EndWhile\n\State \Return $\theta$\n\end{algorithmic}\n}\n\end{algorithm}\n\n\n\subsection{AC\/DC: Applying IHT to Deep Neural Networks}\n\nAC\/DC starts from a standard DNN training flow, using standard optimizers such as SGD with momentum~\cite{qian1999momentum} or Adam~\cite{kingma2014adam}, and it preserves all standard training hyper-parameters. It will only periodically modify the \emph{support} for optimization. \nPlease see Algorithm~\ref{alg:iht} for pseudocode. \n\nWe partition the set of training epochs into \emph{compressed} epochs $\mathcal{C}$ and \emph{decompressed} epochs $\mathcal{D}$. We begin with a \emph{dense warm-up} period of $\Delta_w$ consecutive epochs, during which regular dense (decompressed) training is performed. \nWe then start alternating \emph{compressed optimization} phases of length $\Delta_c$ epochs each, with \emph{decompressed (regular) optimization} phases of length $\Delta_d$ epochs each. The process ends with a compressed fine-tuning phase, returning an accurate sparse model. \nAlternatively, if our goal is to return a dense model matching the baseline accuracy, we take the best dense checkpoint obtained during alternation, and fine-tune it over the entire support. In practice, we noticed that allowing a longer final decompressed phase of length $\Delta_D > \Delta_d$ improves the performance of the dense model, by allowing it to better recover the baseline accuracy after fine-tuning. \nPlease see Figure~\ref{fig:ac-dc} for an illustration of the schedule.\n\nIn our experiments, we focus on the case where the compression operation is unstructured or semi-structured pruning. In this case, at the beginning of each sparse optimization phase, we apply the top-k operator globally, across all of the network weights, to obtain a mask $M$ over the weights $\theta$; this mask defines the sparse support over which optimization will be performed for the rest of the current sparse phase. 
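To make the masking step concrete, the following minimal sketch (assuming PyTorch, which we use for our implementation, though this is an illustration rather than the exact code of this work) computes the global magnitude mask at the start of a compressed phase; a complete implementation would also mask the gradients of the pruned weights, so that back-propagation remains sparse during compressed phases:\n\begin{verbatim}\nimport torch\n\ndef global_topk_masks(params, sparsity):\n    # params: list of weight tensors; returns 0\/1 masks keeping the\n    # largest-magnitude weights globally, at the given sparsity level\n    # (ties at the threshold may keep a few extra weights)\n    scores = torch.cat([p.detach().abs().flatten() for p in params])\n    k = int((1.0 - sparsity) * scores.numel())\n    threshold = torch.topk(scores, k, largest=True).values.min()\n    return [(p.detach().abs() >= threshold).float() for p in params]\n\n# during a compressed phase, after each optimizer step:\n#   for p, m in zip(params, masks):\n#       p.data.mul_(m)\n\end{verbatim}\n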
\nAt the end of the sparse phase, the mask $M$ is reset to all-$1$s, so that the subsequent dense phase will optimize over the full dense support. Furthermore, once all weights are re-introduced, it is beneficial to reset to $0$ the gradient momentum term of the optimizer; this is particularly useful for the weights that were previously pruned, which would otherwise have stale versions of gradients. \n\n\n\\parhead{Discussion}.\nMoving from IHT to a robust implementation in the context of DNNs required some adjustments. \nFirst, each \\emph{decompressed phase} can be directly mapped to a \\emph{deterministic\/stochastic IHT} step, where, instead of a single gradient step in between consecutive truncations of the support, we perform several stochastic steps. These additional steps improve the accuracy of the method in practice, and we can bound their influence in theory as well, although they do not necessarily provide better bounds. \nThis leaves open the interpretation of the \\emph{compressed phases}: for this, notice that the core of the proof for Theorem~\\ref{thm:iht-pl} is in showing that a single IHT step significantly decreases the expected value of the objective; using a similar argument, we can prove that additional optimization steps over the sparse support can only improve convergence. \nAdditionally, we show convergence for a variant of IHT closely following AC\/DC (please see Corollary~\\ref{cor:iht-acdc} in the Supplementary Material), but the bounds do not improve over Theorem~\\ref{thm:iht-pl}. However, this additional result confirms that the good experimental results obtained with AC\/DC are theoretically motivated. \n\n\n\\section{Experimental Validation}\n\\label{sec:experiments}\n\n\\parhead{Goals and Setup.} We tested AC\/DC on image classification tasks (CIFAR-100~\\cite{cifar100} and ImageNet~\\cite{imagenet}) and on language modelling tasks~\\cite{wikitext103} using the Transformer-XL model \\cite{dai2019transformer}. The goal is to examine the \\emph{validation accuracy} of the resulting sparse and dense models, versus the induced sparsity, as well as the number of FLOPs used for training and inference, relative to other sparse training methods. Additionally, we compare to state-of-the-art post-training pruning methods~\\cite{singh2020woodfisher}. \nWe also examine prediction differences between the sparse and dense models. \nWe use PyTorch \\cite{pytorch} for our implementation, Weights \\& Biases \\cite{wandb} for experimental tracking, and NVIDIA GPUs for training. \nAll reported image classification experiments were performed in triplicate by varying the random seed; we report mean and standard deviation. \nDue to computational limitations, the language modelling experiments were conducted in a single run.\n\n\n\\parhead{ImageNet Experiments.}\nOn the ImageNet dataset~\\cite{imagenet}, we test AC\/DC on ResNet50 \\cite{he2016deep} and MobileNetV1 \\cite{howard2017mobilenets}. In all reported results, the models were trained for a fixed number of 100 epochs, using SGD with momentum. We use a cosine learning rate scheduler and training hyper-parameters following \\cite{kusupati2020soft}, but without label smoothing. The models were trained and evaluated using mixed precision (FP16). On a small subset of experiments, we noticed differences in accuracy of up to 0.2-0.3\\% between AC\/DC trained with full or mixed precision. However, the differences in evaluating the models with FP32 or FP16 are negligible (less than 0.05\\%). 
Our dense ResNet50 baseline has $76.84\\%$ validation accuracy. \nUnless otherwise specified, weights are pruned globally, based on their magnitude and in a single step. Similar to previous work, we did not prune biases, nor the Batch Normalization parameters. The sparsity level is computed with respect to all the parameters, except the biases and Batch Normalization parameters and this is consistent with previous work \\cite{evci2020rigging, singh2020woodfisher}.\n\n\nFor all results, the AC\/DC training schedule starts with a ``warm-up'' phase of dense training for 10 epochs, after which we alternate between compression and de-compression every 5 epochs, until the last dense and sparse phase. It is beneficial to allow these last two ``fine-tuning'' phases to run longer: \nthe last decompression phase runs for 10 epochs, whereas the final 15 epochs are the compression fine-tuning phase. \nWe reset SGD momentum at the beginning of every decompression phase. \nIn total, we have an equal number of epochs of dense and sparse training; see Figure~(\\ref{fig:valacc-rn50}) for an illustration. We use exactly the same setup for both ResNet50 and MobileNetV1 models, which resulted in high-quality sparse models. \nTo recover a dense model with baseline accuracy using AC\/DC, we finetune the best dense checkpoint obtained during training; practically, this replaces the last \\emph{sparse} fine-tuning phase with a phase where the \\emph{dense} model is fine-tuned instead. \n\n\n\\begin{minipage}[c]{0.5\\textwidth}\n\\centering\n\\captionof{table}{\\small{ResNet50\/ImageNet, medium sparsity results.}}\n\\label{table:medium-sparse-rn50}\n\\vspace{-0.2cm}\n\\scalebox{0.63}{%\n\\begin{tabular}{@{}ccccc@{}}\n\\toprule\nMethod & \\begin{tabular}[c]{@{}c@{}}Sparsity\\\\ ($\\%$)\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Top-1\\\\ Acc. ($\\%$)\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}GFLOPs\\\\ Inference\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}EFLOPs\\\\ Train\\end{tabular} \\\\ \\midrule \nDense & $0$ & $76.84$ & $8.2$ & $3.14$ \\\\ \\midrule \\midrule\n\\bf{AC\/DC} & $80$ & $76.3 \\pm 0.1$ & $0.29 \\times$ & $0.65 \\times$ \\\\\nRigL$_{1\\times}$ & $80$ & $74.6 \\pm 0.06$ & $0.23 \\times$ & $0.23 \\times$ \\\\\nRigL$_{1\\times}$(ERK) & $80$ & $75.1 \\pm 0.05$ & $0.42 \\times$ & $0.42 \\times$ \\\\\nTop-KAST & $80$ fwd, $50$ bwd & $75.03$ & $0.23\\times$ & $0.32\\times$ \\\\\n\\hline\nSTR & $79.55$ & $76.19$ & $0.19 \\times$ & - \\\\\n\\bf{WoodFisher} & 80 & $\\bf{76.76}$ & $0.25 \\times$ & - \\\\\n\\midrule\n\\midrule\n\\textbf{AC\/DC} & $90$ & $75.03 \\pm 0.1 $ & $0.18 \\times$ & $0.58 \\times$ \\\\\nRigL$_{1\\times}$ & $90$ & $72.0 \\pm 0.05$ & $0.13 \\times$ & $0.13 \\times$ \\\\\nRigL$_{1\\times}$ (ERK) & $90$ & $73.0 \\pm 0.04$ & $0.24 \\times$ & $0.25 \\times$ \\\\\nTop-KAST & $90$ fwd, $80$ bwd & $74.76$ & $0.13\\times$ & $0.16\\times$ \\\\\n\\hline\nSTR & $90.23$ & $74.31$ & $0.08 \\times$ & - \\\\\n\\bf{WoodFisher} & $90$ & $\\bf{75.21}$ & $0.15 \\times$ & - \\\\ \n\\bottomrule \n\\end{tabular}}\n\\end{minipage}\n\\hspace{0.15cm}\n\\begin{minipage}[c]{0.46\\textwidth}\n\\captionof{table}{\\small{ResNet50\/ImageNet, high sparsity results.}}\n\\label{table:high-sparse-rn50}\n\\vspace{-0.2cm}\n\\scalebox{0.62}{%\n\\begin{tabular}{@{}ccccc@{}}\n\\toprule\nMethod & \\begin{tabular}[c]{@{}c@{}}Sparsity\\\\ ($\\%$)\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Top-1\\\\ Acc. 
($\\%$)\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}GFLOPs\\\\ Inference\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}EFLOPs\\\\ Train\\end{tabular} \\\\ \\midrule \nDense & $0$ & $76.84$ & $8.2$ & $3.14$ \\\\ \\midrule \\midrule\n\\bf{AC\/DC} & $95$ & $\\bf{73.14} \\pm 0.2 $ & $0.11 \\times$ & $0.53 \\times$ \\\\\nRigL$_{1\\times}$ & $95$ & $67.5 \\pm 0.1$ & $0.08 \\times$ & $0.08 \\times$ \\\\\nRigL$_{1\\times}$ (ERK) & $95$ & $69.7 \\pm 0.17$ & $0.12 \\times$ & $0.13 \\times$ \\\\\nTop-KAST & $95$ fwd, $50$ bwd & $71.96$ & $0.08\\times$ & $0.22\\times$ \\\\\n\\hline\nSTR & $94.8$ & $70.97$ & $0.04 \\times$ & - \\\\\nWoodFisher & $95$ & $72.12$ & $0.09 \\times$ & - \\\\ \n\\midrule \\midrule\n\\bf{AC\/DC} & $98$ & $\\bf{68.44} \\pm 0.09 $ & $0.06 \\times$ & $0.46 \\times$ \\\\\nTop-KAST & $98$ fwd, $90$ bwd & 67.06 & $0.05\\times$ & $0.08 \\times$ \\\\\n\\hline\nSTR & $97.78$ & $62.84$ & $0.02 \\times$ & - \\\\\nWoodFisher & $98$ & $65.55$ & $0.05 \\times$ & - \\\\\n\\bottomrule\n\\end{tabular}}\n\\end{minipage}\n\n\\vspace{0.1cm}\n\n\\parhead{ResNet50 Results.} \nTables~\\ref{table:medium-sparse-rn50}\\&~\\ref{table:high-sparse-rn50} contain the validation accuracy results across medium and high global sparsity levels, as well as inference and training FLOPs. Overall, AC\/DC achieves higher validation accuracy than any of the state-of-the-art sparse training methods, when using the same number of epochs. \nAt the same time, due to dense training phases, AC\/DC has\nhigher FLOP requirements relative to RigL or Top-KAST at the same sparsity. \nAt medium sparsities (80\\% and 90\\%), AC\/DC sparse models are slightly less accurate than the state-of-the-art post-training methods (e.g. WoodFisher), by small margins. \nThe situation is reversed at higher sparsities, where AC\/DC produces more accurate models: the gap to the second-best methods (WoodFisher \/ Top-KAST) is of more than 1\\% at 95\\% and 98\\% sparsity. \n\n\n\\begin{figure*}[t]\n \\centering\n \\begin{subfigure}[t]{0.5\\textwidth}\n \\centering\n \\includegraphics[height=1.2in]{figures\/valacc_imagenet_rs_stripes.pdf}\n \\caption{Sparsity pattern and validation accuracy vs. number of epochs (ResNet50\/ImageNet).}\n \\label{fig:valacc-rn50}\n \\end{subfigure}%\n ~ \n \\begin{subfigure}[t]{0.5\\textwidth}\n \\centering\n \\includegraphics[height=1.2in]{figures\/cifar10_random_correct_no_da_all.pdf}\n \\caption{Percentage of samples with corrupted training labels classified to their \\emph{true} class (ResNet20\/CIFAR10).}\n \\label{fig:random-correct-no-da-main}\n \\end{subfigure}%\n \\caption{Accuracy vs. sparsity during training, for the ResNet50\/ImageNet experiment (left) and accuracy on the corrupted samples for ResNet20\/CIFAR10, w.r.t. the \\emph{true} class (right). \n }\n \\label{fig:acc-masks-rn50}\n\\end{figure*}\n\n\n\nOf the existing sparse training methods, Top-KAST is closest in terms of validation accuracy to our sparse model, at 90\\% sparsity. However, Top-KAST does not prune the first and last layers, whereas the results in the tables do not restrict the sparsity pattern. \nFor fairness, we executed AC\/DC using the same layer-wise sparsity distribution as Top-KAST, for both uniform and global magnitude pruning. For $90\\%$ global pruning, results for AC\/DC improved; the best sparse model reached 75.64\\% validation accuracy (0.6\\% increase over Table~\\ref{table:medium-sparse-rn50}), while the best dense model had 76.85\\% after fine-tuning. 
For uniform sparsity, our results were very similar: 75.04\% validation accuracy for the sparse model and 76.43\% for the fine-tuned dense model. We also note that Top-KAST has better results \nat 98\% when the number of training epochs is doubled, and considerably fewer training FLOPs (e.g. $15\%$ of the dense FLOPs). \nFor fairness, we compared against all methods on a fixed number of 100 training epochs, and we additionally trained AC\/DC at high sparsity without pruning the first and last layers. Our results improved to $74.16\%$ accuracy for 95\% sparsity, and $71.27\%$ for 98\% sparsity, both surpassing Top-KAST with prolonged training. We provide a more detailed comparison in the Supplementary Material, which also contains results on CIFAR-100. \n\n\nAn advantage of AC\/DC is that it provides \emph{both} sparse and dense models at a cost \emph{below} that of a single dense training run. \nFor medium sparsity, the accuracy of the dense-finetuned model is very close to the dense baseline. \nConcretely, at 90\% sparsity, with 58\% of the total (theoretical) baseline training FLOPs, we obtain a \emph{sparse} model which is close to the state of the art; \nin addition, by fine-tuning the best dense model, we obtain a dense model with $76.56\%$ (average) validation accuracy. The whole process takes at most 73\% of the baseline training FLOPs. In general, for 80\% and 90\% target sparsity, the dense models derived from AC\/DC are able to recover the baseline accuracy after fine-tuning, which replaces the final compression phase with regular dense training. The complete results are presented in the Supplementary Material, in Table~\ref{table:dense-rn50}.\n\nThe sparsity distribution over layers does not change dramatically during training; yet, the dynamics of the masks have an important impact on the performance of AC\/DC. Specifically, we observed that the masks change over time, although the difference between consecutive sparse masks decreases. Furthermore, a small percentage of the weights remain fixed at $0$ even during dense training, which is explained by filters that are pruned away during the compressed phases.\nPlease see the Supplementary Material for additional results and analysis. \n\n\n\nWe additionally compare AC\/DC with Top-KAST and RigL, in terms of the validation accuracy achieved as a function of the number of training FLOPs. We report results at uniform sparsity, which ensures that the inference FLOPs will be the same for all methods considered. For AC\/DC and Top-KAST, the first and last layers are kept dense, whereas for RigL, only the first layer is kept dense; however, this has a negligible impact on the number of FLOPs. Additionally, we experiment with doubling the number of training iterations for AC\/DC at 90\% and 95\% sparsity, similarly to Top-KAST and RigL, which also provide experiments for extended training. The comparison between AC\/DC, Top-KAST and RigL presented in Figure \ref{fig:flops-vs-acc-rn50} shows that AC\/DC matches or surpasses Top-KAST 2x at 90\% and 95\% sparsity, and RigL 5x at 95\% sparsity, both in terms of training FLOPs and validation accuracy. Moreover, we highlight that doubling the number of training iterations results in AC\/DC models with uniform sparsity that surpass all existing methods at both 90\% and 95\% sparsity; namely, we obtain 76.1\% and 74.3\% validation accuracy with 90\% and 95\% uniform sparsity, respectively. 
\n\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[height=1.4in]{figures\/flops_vs_acc_90_95_unif.pdf}\n \\caption{Training FLOPs vs validation accuracy for AC\/DC, RigL and Top-KAST, with uniform sparsity, at 90\\% and 95\\% sparsity levels. (ResNet50\/ImageNet).}\n \\label{fig:flops-vs-acc-rn50}\n\\end{figure*}\n\nCompared to purely sparse training methods, such as Top-KAST or RigL, AC\/DC requires dense training phases. The length of the dense phases can be decreased, with a small impact on the accuracy of the sparse model. Specifically, we use dense phases of two instead of five epochs in length, and we no longer extend the final decompressed phase prior to the finetuning phase. For 90\\% global sparsity, this resulted in 74.6\\% validation accuracy for the sparse model, using 44\\% of the baseline FLOPs. Similarly, for uniform sparsity, we obtain 74.7\\% accuracy on the 90\\% sparse model, with 40\\% of the baseline FLOPs; this value can be further improved to 75.8\\% validation accuracy when extending two times the number of training iterations. Furthermore, at 95\\% uniform sparsity, we reach 72.8\\% accuracy with 35\\% of the baseline training FLOPs. \n\n\n\\parhead{MobileNet Results.} We perform the same experiment, using exactly the same setup, on the MobileNetV1 architecture~\\cite{howard2017mobilenets}, which is compact and thus harder to compress. On a training budget of 100 epochs, our method finds sparse models with higher Top-1 validation accuracy than existing sparse- and post-training methods, on both 75\\% and 90\\% sparsity levels (Table~\\ref{table:sparse-mobnet}). Importantly, AC\/DC uses exactly the same hyper-parameters used for training the dense baseline~\\cite{kusupati2020soft}. \nSimilar to ResNet50, at 75\\% sparsity, the \\emph{dense-finetuned model} recovers the baseline performance, while for 90\\% it is less than 1\\% below the baseline.\nThe only method which obtains higher accuracy for the same sparsity is the version of RigL~\\cite{evci2020rigging} which executes for 5x more training epochs than the dense baseline. \nHowever, this version also uses more computation than the dense model.\nWe limit ourselves to a fixed number of 100 epochs, the same used to train the dense baseline, which would allow for savings in training time. \nMoreover, RigL does not prune the first layer and the depth-wise convolutions, whereas for the results reported we do not impose any sparsity restrictions. Overall, we found that keeping these layers dense improved our results on 90\\% sparsity by almost 0.5\\%. Then, our results are quite close to RigL$_{2\\times}$, with half the training epochs, and less training FLOPs. We provide a more detailed comparison in the Supplementary Material. \n\n\n\\begin{minipage}[c]{0.45\\textwidth}\n\\centering\n\\captionof{table}{\\small{MobileNetV1\/ImageNet sparsity results}}\n\\vspace{-0.2cm}\n\\label{table:sparse-mobnet}\n\\scalebox{0.65}{%\n\\begin{tabular}{@{}ccccc@{}}\n\\toprule\nMethod & \\multicolumn{1}{c}{\\begin{tabular}[c]{@{}c@{}}Sparsity \\\\ (\\%)\\end{tabular}} & \\multicolumn{1}{c}{\\begin{tabular}[c]{@{}c@{}}Top-1 \\\\ Acc. 
(\\%)\\end{tabular}} & \\multicolumn{1}{c}{\\begin{tabular}[c]{@{}c@{}}GFLOPs \\\\ Inference \\end{tabular}} & \\multicolumn{1}{c}{\\begin{tabular}[c]{@{}c@{}}EFLOPs \\\\ Train \\end{tabular}} \\\\ \\midrule\nDense & 0 & 71.78 & $1.1$ & $0.44$ \\\\ \\midrule\n\\bf{AC\/DC} & $75$ & $\\bf{70.3 \\pm 0.07}$ & $0.34\\times$ & $0.64\\times$ \\\\\nRigL$_{1\\times}$ (ERK) & $75$ & $68.39$ & $0.52\\times$ & $0.53 \\times$ \\\\\nSTR & 75.28 & 68.35 & $0.18 \\times$ & - \\\\\nWoodFisher & 75.28 & 70.09 & $0.28 \\times$ & - \\\\ \\midrule\n\\bf{AC\/DC} & 90 & $\\bf{66.08} \\pm 0.09$ & $0.18 \\times$ & $0.56 \\times$ \\\\ \nRigL$_{1\\times}$ (ERK) & 90 & $63.58$ & $0.27 \\times$ & $0.29 \\times$ \\\\\nSTR & 89.01 & 62.1 & $0.07 \\times$ & - \\\\\nWoodFisher & 89 & 63.87 & - & - \\\\\n\\bottomrule \n\\end{tabular}}\n\\end{minipage}\n\\hspace{0.35cm}\n\\begin{minipage}[c]{0.5\\textwidth}\n\\centering\n \\captionof{table}{\\small{Transformer-XL\/WikiText sparsity results}}\n \\label{table:transformer}\n \\vspace{-0.2cm}\n \\scalebox{0.66}{%\n \\begin{tabular}{@{}ccccc@{}}\n \\toprule\n Method & Sparsity (\\%) & \\multicolumn{1}{c}{\\begin{tabular}[c]{@{}c@{}}Perplexity \\\\ Sparse \\end{tabular}} & \\multicolumn{1}{c}{\\begin{tabular}[c]{@{}c@{}}Perplexity \\\\ Dense\\end{tabular}} & \\multicolumn{1}{c}{\\begin{tabular}[c]{@{}c@{}}Perplexity \\\\ Finetuned Dense \\end{tabular}} \\\\ \\midrule\n \n Dense & 0 & - & 18.95 & - \\\\ \\midrule\n AC\/DC & 80 & 20.65 & 20.24 & 19.54 \\\\\n AC\/DC & 80, 50 embed. & 20.83 & 20.25 & 19.68 \\\\\n \\bf{Top-KAST} & 80, 0 bwd & \\bf{19.8} & - & - \\\\\n Top-KAST & 80, 60 bwd & 21.3 & - & - \\\\ \\midrule\n \\bf{AC\/DC} & 90 & \\bf{22.32} & 21.0 & 20.28\\\\ \n \\bf{AC\/DC} & 90, 50 embed. & \\bf{22.84} & 21.34 & 20.41 \\\\\n Top-KAST & 90, 80 bwd & 25.1 & - & - \\\\ \\bottomrule \n \\end{tabular}}\n\\end{minipage}\n\n\\vspace{0.1cm}\n\\parhead{Semi-structured Sparsity.} \nWe also experiment with the recent 2:4 sparsity pattern (2 weights out of each block of 4 are zero) proposed by NVIDIA, which ensures inference speedups on the Ampere architecture. Recently,~\\cite{mishra2021accelerating} showed that accuracy can be preserved under this pattern, by re-doing the entire training flow. Also, \\cite{zhou2021learning} proposed more general N:M structures, together with a method for training such sparse models from scratch. We applied AC\/DC to the 2:4 pattern, performing training from scratch and obtained sparse models with $76.64\\%\\pm 0.05$ validation accuracy, i.e. slightly below the baseline.\nFurthermore, the dense-finetuned model fully recovers the baseline performance (76.85\\% accuracy). We additionally experiment with using AC\/DC with global pruning at 50\\%; in this case we obtain sparse models that slightly improve the baseline accuracy to 77.05\\%. This confirms our intuition that AC\/DC can act as a regularizer, similarly to~\\cite{han2016dsd}.\n\n\n\n\n\\parhead{Language Modeling.}\nNext, we apply AC\/DC to compressing NLP models. \nWe use Transformer-XL~\\cite{dai2019transformer}, on the WikiText-103 dataset \\cite{wikitext103}, with the standard model configuration with 18 layers and 285M parameters, trained using the Lamb optimizer~\\cite{you2019large} and standard hyper-parameters, which we describe in the Supplementary Material. \nThe same Transformer-XL model trained on WikiText-103 was used in Top-KAST \\cite{jayakumar2020top}, which allows a direct comparison. 
\parhead{Language Modeling.}
Next, we apply AC/DC to compressing NLP models.
We use Transformer-XL~\cite{dai2019transformer} on the WikiText-103 dataset \cite{wikitext103}, in the standard configuration with 18 layers and 285M parameters, trained using the Lamb optimizer~\cite{you2019large} and standard hyper-parameters, which we describe in the Supplementary Material.
The same Transformer-XL model trained on WikiText-103 was used in Top-KAST \cite{jayakumar2020top}, which allows a direct comparison. Similar to Top-KAST, we did not prune the embedding layers, as doing so greatly degrades model quality without reducing the computational cost. (For completeness, we do provide results when embeddings are pruned to 50\% sparsity.)
Our sparse training configuration consists of a dense warm-up phase of 5 epochs, followed by alternating between compression and decompression phases every 3 epochs; we follow with a longer decompression phase between epochs 33-39,
and end with a compression phase between epochs 40-48.
The results are shown in Table~\ref{table:transformer}.
Relative to Top-KAST, our approach provides significantly improved test perplexity at 90\% sparsity, as well as better results at 80\% sparsity relative to the Top-KAST variant that also uses sparse back-propagation.
The results confirm that AC/DC is scalable and extensible.
We note that our hyper-parameter tuning for this experiment was minimal.

\parhead{Output Analysis.}
Finally, we probe the accuracy difference between the sparse and dense-finetuned models.
We first examined \emph{sample-level agreement} between sparse and dense-finetuned pairs produced by AC/DC, relative to model pairs produced by gradual magnitude pruning (GMP).
Co-trained model pairs consistently agree on more samples than GMP pairs:
for example, on the 80\%-pruned ResNet50 model, the AC/DC model pair agrees on the Top-1 classification of 90\% of the validation samples, whereas the GMP models agree on 86\% of the samples. The difference is better seen in terms of validation error (10\% versus 14\%), which indicates that the dense baseline and the GMP model disagree on 40\% more samples than the AC/DC models. A similar trend holds for the \emph{cross-entropy} between model outputs, which can be measured as sketched below. This is a potentially useful side-effect of the method; for example, in constrained environments where sparse models are needed, it is important to estimate their similarity to the dense ones.

Second, we analyze differences in ``memorization'' capacity~\cite{zhang2016understanding} between dense and sparse models.
For this, we apply AC/DC to ResNet20 trained on a variant of CIFAR-10 in which a subset of 1000 samples have randomly corrupted class labels, and examine the accuracy on these samples during training. We consider 90\% and 95\% sparsity AC/DC runs.
Figure~\ref{fig:random-correct-no-da-main} shows the results, where the accuracy for each sample is measured with respect to the \emph{true, un-corrupted} label.
During early training and during \emph{sparse phases},
the network tends to classify corrupted samples to their \emph{true class}, ``ignoring'' label corruption.
However, as training progresses, due to the dense training phases and the lower learning rate, the network tends to ``memorize'' these samples, assigning them to their corrupted class. This tendency to ignore label corruption is even more prevalent at 95\% sparsity, where the network is less capable of memorization.
We discuss this finding in more detail in the Supplementary Material.
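For completeness, a small sketch of how the sample-level agreement and output cross-entropy above can be computed (our own PyTorch illustration; the model and data-loader objects are placeholders):

\begin{verbatim}
import torch
import torch.nn.functional as F

@torch.no_grad()
def agreement_stats(model_a, model_b, loader, device="cuda"):
    """Top-1 agreement rate and average cross-entropy between the
    output distributions of two classifiers."""
    agree, total, xent = 0, 0, 0.0
    model_a.eval(); model_b.eval()
    for x, _ in loader:
        x = x.to(device)
        la, lb = model_a(x), model_b(x)            # logits
        agree += (la.argmax(1) == lb.argmax(1)).sum().item()
        # cross-entropy of b's distribution under a's log-probabilities
        xent += -(F.softmax(lb, 1) * F.log_softmax(la, 1)).sum(1).sum().item()
        total += x.size(0)
    return agree / total, xent / total
\end{verbatim}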
\parhead{Practical Speedups.}
One remaining question concerns the potential of sparsity to provide real-world speedups.
While this is an active research area, e.g.~\cite{elsen2020fast},
we partially address this concern in the Supplementary Material by showing inference speedups for our models on a CPU inference platform supporting unstructured sparsity~\cite{NM}: for example, our 90\% sparse ResNet50 model provides 1.75x speedup for real-time inference (batch size 1) on a resource-constrained processor with 4 cores, and 2.75x speedup on 16 cores at batch size 64, versus the dense model.

\vspace{-0.5em}
\section{Conclusion, Limitations, and Future Work}
\label{sec:conclusion}
\vspace{-0.5em}

We introduced AC/DC---a method for co-training sparse and dense models, with theoretical guarantees. Experimental results show that AC/DC improves upon the accuracy of previous sparse training methods, and obtains state-of-the-art results at high sparsities. Importantly, we recover near-baseline performance for dense models and do not require extensive hyper-parameter tuning. We also show that AC/DC has potential for real-world speed-ups in inference and training, given the appropriate software and hardware support.
The method has the advantage of returning both an accurate standard model and a compressed one.
Our model output analysis confirms the intuition that sparse training phases act as a regularizer, preventing the (dense) model from memorizing corrupted samples. At the same time, they prevent the memorization of \emph{hard samples}, which can affect accuracy.

The main limitations of AC/DC are its reliance on dense training phases, which limits the achievable training speedup, and the need for tuning the length and frequency of the sparse/dense phases. We believe the latter issue can be addressed with more experimentation (we show some preliminary results in Section~\ref{sec:experiments} and Appendix \ref{sec:cifar100}); however, both the theoretical results and the output analysis suggest that dense phases may be \emph{necessary} for good accuracy.
We plan to further investigate this in future work, together with applying AC/DC to other compression methods, such as quantization, as well as leveraging sparse training on hardware that can efficiently support it, such as Graphcore IPUs~\cite{graphcore}.


\vspace{-0.2em}
\acksection
\vspace{-0.2em}
This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 805223 ScaleML), and a CNRS PEPS grant. This research was supported by the Scientific Service Units (SSU) of IST Austria through resources provided by Scientific Computing (SciComp).
We would also like to thank Christoph Lampert for his feedback on an earlier version of this work, as well as for providing hardware for the Transformer-XL experiments.

\bibliographystyle{plain}

\section{Introduction}
Vehicle communication is one of the important use cases in the fifth generation of wireless networks (5G) and beyond \cite{Dang2020what}. Here, the focus is to provide efficient and reliable connections to cars and public transports, e.g., busses and trains. Channel state information at the transmitter (CSIT) plays an important role in achieving these goals, as it enables advanced closed-loop transmission schemes such as link adaptation, multi-user scheduling, interference coordination and spatial multiplexing. However, typical CSIT acquisition systems, which are mostly designed for (semi)static channels, may not work well as the speed of the vehicle increases. This is because, depending on the vehicle speed, the positions of the antennas may change quickly and the CSIT becomes inaccurate.

To overcome this issue, \cite{Sternad2012WCNCWusing} proposes the concept of the predictor antenna (PA). Here, in its standard version, a PA system refers to a setup with two (sets of) antennas on the roof of a vehicle. The PA, positioned at the front of the vehicle, can be used to improve the CSIT for data transmission to the receive antenna (RA) that is aligned behind the PA. The potential of such setups has been previously shown through experimental tests \cite{Sternad2012WCNCWusing,Dinh2013ICCVEadaptive,Jamaly2014EuCAPanalysis}, and their performance has been analyzed in, e.g., \cite{Guo2019WCLrate,guo2020semilinear,guo2020rate}.

One of the challenges of the PA setup is the spatial mismatch that causes the CSIT for the RA to be partially inaccurate.
This occurs if the RA does not reach the same spatial point as the PA, because, e.g., the delay for preparing the data is not equal to the time needed by the RA to reach the point at which the PA sent its pilots \cite{Dinh2013ICCVEadaptive}. On the other hand, in a typical PA setup the spectrum is underutilized, and the spectral efficiency could be further improved if the PA were used not only for channel prediction but also for data transmission. We address these challenges by implementing hybrid automatic repeat request (HARQ)-based protocols in PA systems as follows.

In this work, we analyze the outage-limited performance of PA systems using HARQ. With our proposed approach, the PA is used not only for improving the CSIT in the retransmissions to the RA, but also for data transmission in the initial round. In this way, as we show, the combination of the PA and HARQ protocols makes it possible to improve the spectral efficiency, and to adapt the transmission parameters to mitigate the effect of the spatial mismatch.

The problem is cast in the form of minimizing the average transmission power subject to an outage probability constraint. Particularly, we develop approximation techniques to derive closed-form expressions for the instantaneous and average transmission power as well as the optimal power allocation minimizing the outage-limited power consumption. The results are presented for the cases with repetition time diversity (RTD) and incremental redundancy (INR) HARQ protocols \cite{makki2014green,chaitanya2011outage,djonin2008joint}. Moreover, we study the effect of different parameters such as the antenna separation and the vehicle speed on the system performance.

As we show through analysis and simulations, the implementation of HARQ as well as power allocation can improve the outage-limited performance of PA systems by orders of magnitude, compared to the cases with no retransmission. For example, consider an outage probability constraint of $10^{-4}$, initial rate $R=2$ nats-per-channel-use (npcu) and a maximum of two transmission rounds. Then, compared to the cases with no retransmission, our proposed power-adaptive PA-HARQ scheme can reduce the required signal-to-noise ratio (SNR) by 18 dB and 20 dB for the RTD and the INR schemes, respectively.

\section{Problem Formulation}

Here, we first introduce the basics of PA systems, and then present our proposed HARQ-based PA setup.

\subsection{Standard PA System}
\begin{figure}
\centering
 \includegraphics[width=0.7\columnwidth]{Figure1.pdf}\\
\caption{Predictor antenna system with the spatial mismatch problem.}\label{ARQmodel}
\end{figure}
Figure \ref{ARQmodel} shows the standard PA system with two antennas on the roof of a vehicle. Here, the PA first sends pilots at time $t$. Then, the base station (BS) estimates the PA-BS channel $h_1$ and sends the data at time $t+\delta$ to the RA, where $\delta$ depends on the processing time at the BS. During this time, the vehicle moves forward by $d_\text{m}$, while the antenna separation between the PA and the RA is $d_\text{a}$. Then, considering downlink transmission in the BS-RA link, the signal received by the RA is
\begin{align}\label{eq_Y}
y = \sqrt{P}h_2 x + z.
\end{align}
Here, $P$ represents the transmit power, $x$ is the input message with unit variance, and $h_2$ is the fading coefficient between the BS and the RA.
Also, $z \\sim \\mathcal{CN}(0,1)$ denotes the independent and identically distributed (IID) complex Gaussian noise added at the receiver.\n\nWe represent the probability density function (PDF) and cumulative density function (CDF) of a random variable $A$ by $f_A(\\cdot)$ and $F_A(\\cdot)$, respectively. Due to spatial mismatch between the PA and the RA, assuming a semi-static propagation environment, i.e., assuming that the coherence time of the propagation environment is much larger than $\\delta$ \\footnote{This has been experimentally verified in, e.g., \\cite{Jamaly2014EuCAPanalysis}}, $h_2$ and $h_1$ are correlated according to \\cite[Eq. 5]{Guo2019WCLrate} \n\\begin{align}\\label{eq_H}\n h_2 = \\sqrt{1-\\sigma^2} h_1 + \\sigma q,\n\\end{align}\nwhere $q \\sim \\mathcal{CN}(0,1)$ which is independent of the known channel value $h_1\\sim \\mathcal{CN}(0,1)$, and $\\sigma$ is a function of the mis-matching distance $d = |d_\\text{a}-d_\\text{m}|$ \\cite[Eq. 4]{Guo2019WCLrate}. Defining $g_1 = |h_1|^2$ and $ g_2 = |h_2|^2$, the CDF $F_{g_2|g_1}$ is given by\n\\begin{align}\\label{eq_cdf}\n F_{g_2|g_1}(x) = 1 - Q_1\\left( \\sqrt{\\frac{2g_1(1-\\sigma^2)}{\\sigma^2}}, \\sqrt{\\frac{2x}{\\sigma^2}} \\right),\n\\end{align}\nwhere $Q_1(s,\\rho) = \\int_{\\rho}^{\\infty} xe^{-\\frac{x^2+s^2}{2}}I_0(xs)\\text{d}x$, $s, \\rho \\ge 0$, is the first-order Marcum $Q$-function. Also, $I_n(x) = (\\frac{x}{2})^n \\sum_{i=0}^{\\infty}\\frac{(\\frac{x}{2})^{2i} }{i!\\Gamma(n+i+1)}$ is the $n$-order modified Bessel function of the first kind, and $\\Gamma(z) = \\int_0^{\\infty} x^{z-1}e^{-x} \\mathrm{d}x$ represents the Gamma function. In this way, although parameter adaptation is performed based on perfect CSIT of $h_1$ at time $t$, the spatial mismatch may lead to unsuccessful decoding by the RA at $t+\\delta$.\n\n\n\\subsection{Proposed HARQ-based PA System}\nAlong with the spatial mismatch problem, the typical PA system still suffers from poor spectral efficiency, compared to regular multiple-antenna system in static conditions, because the PA is used only for channel estimation. On the other hand, because the PA system includes the PA-BS feedback link, HARQ can be supported by the PA structure. For this reason, we propose a setup as follows.\n\n\\begin{figure}\n\\centering\n \\includegraphics[width=0.7\\columnwidth]{Figure2.pdf}\\\\\n\\caption{Time structure for the proposed PA-HARQ scheme.}\\label{fig_timeslot}\n\\end{figure}\n\nHere, as seen in Fig. \\ref{fig_timeslot}, with no CSIT, at $t_1$ the BS sends pilots as well as the encoded data with certain initial rate $R$ and power $P_1$ to the PA. At $t_2$, the PA estimates the channel $h_1$ from the received pilots. At the same time, the PA tries to decode the signal. If the message is correctly decoded, i.e., $R\\leq \\log(1+g_1P_1)$, an acknowledgment (ACK) is fed back to the BS at $t_3$, and the data transmission stops. Otherwise, the PA sends both a negative acknowledgment (NACK) and high accuracy quantized CSI feedback about $h_1$. The number of quantization bits are large enough such that we can assume the BS to have perfect CSIT of $h_1$ (see \\cite{guo2020semilinear} for the effect of imperfect CSIT on the performance of PA systems). With NACK, in the second transmission round at time $t_4$, the BS transmits the message to the RA with power $P_2$ which is a function of the instantaneous channel quality $g_1$. The outage occurs if the RA cannot decode the message at the end of the second round. 
\n\n\\section{Analytical Results}\nLet $\\epsilon$ be the outage probability constraint. Here, we present the results for the cases with RTD and INR HARQ protocols. With an RTD protocol, the same signal (with possibly different power) is sent in each retransmission round, and the receiver performs maximum ratio combining of all received copies of the signal. With INR, on the other hand, new redundancy bits are sent in the retransmissions, and the receiver decodes the message by combining all received signals \\cite{makki2014green,chaitanya2011outage,djonin2008joint}.\n\nConsidering Rayleigh fading conditions with $f_{g_1}(x) = e^{- x}$, the outage probability at the end of Round 1 is given by\n\\begin{align}\\label{eq_pout}\n&\\text{Pr}(\\text{Outage, Round 1}) = \\text{Pr}\\left\\{R\\leq \\log(1+g_1P_1)\\right\\}\\nonumber\\\\\n&~~ = \\text{Pr}\\left\\{g_1\\leq \\frac{e^{R}-1}{P_1}\\right\\}\n = 1-e^{ -\\frac{\\theta}{P_1}}, \n\\end{align}\nwhere $\\theta = e^{R}-1$. Then, using the results of, e.g., \\cite[Eq. 7, 18]{makki2014green} on the outage probability of the RTD- and INR-based HARQ protocols, the power allocation problem for the proposed HARQ-based PA system can be stated as\n\\begin{equation}\n\\label{eq_optproblem}\n\\begin{aligned}\n\\min_{P_1,P_2} \\quad & \\mathbb{E}_{g_1}\\left[P_\\text{tot}|g_1\\right] \\\\\n\\textrm{s.t.} \\quad & P_1, P_2 > 0,\\\\\n&P_\\text{tot}|g_1 = \\left[P_1 + P_2(g_1) \\times \\mathcal{I}\\left\\{g_1 \\le \\frac{\\theta}{P_1}\\right\\}\\right],\n\\end{aligned}\n\\end{equation}\nwith\n\\begin{align}\\label{eq_optproblemrtd}\nF_{g_2|g_1}\\left\\{\\frac{\\theta-g_1P_1}{P_2(g_1)} \\right\\} = \\epsilon, \\quad\\text{for RTD}\n\\end{align}\n\\begin{align}\\label{eq_optprobleminr}\nF_{g_2|g_1}\\left\\{\\frac{e^{R-\\log(1+g_1P_1)}-1}{P_2(g_1)} \\right\\} = \\epsilon, \\quad\\text{for INR}. \n\\end{align}\nHere, $P_\\text{tot}|g_1$ is the total instantaneous transmission power for two transmission rounds (i.e., one retransmission) with given $g_1$, and we define $\\Bar{P} \\doteq \\mathbb{E}_{g_1}\\left[P_\\text{tot}|g_1\\right]$ as the expected power, averaged over $g_1$. Moreover, $\\mathcal{I}(x)=1$ if $x>0$ and $\\mathcal{I}(x)=0$ if $x \\le 0$. Also, $\\mathbb{E}_{g_1}[\\cdot]$ represents the expectation operator over $g_1$. Here, we ignore the peak power constraint and assume that the BS is capable of transmitting sufficiently high power. Finally, (\\ref{eq_optproblem})-(\\ref{eq_optprobleminr}) come from the fact that, with our proposed scheme, $P_1$ is fixed and optimized with no CSIT at the BS and based on average system performance. On the other hand, $P_2$ is adapted continuously based on the predicted CSIT.\n\n\nUsing (\\ref{eq_optproblem}), the required power in Round 2 is given by\n\\begin{equation}\n\\label{eq_PRTDe}\n P_2(g_1) = \\frac{\\theta-g_1P_1}{F_{g_2|g_1}^{-1}(\\epsilon)},\n\\end{equation}\nfor the RTD, and\n\\begin{equation}\n\\label{eq_PINRe}\n P_2(g_1) = \\frac{e^{R-\\log(1+g_1P_1)}-1}{F_{g_2|g_1}^{-1}(\\epsilon)},\n\\end{equation}\nfor the INR, where $F_{g_2|g_1}^{-1}(\\cdot)$ is the inverse of the CDF given in (\\ref{eq_cdf}). Note that, $F_{g_2|g_1}^{-1}(\\cdot)$ is a complex function of $g_1$ and, consequently, it is not possible to express $P_2$ in closed-form. For this reason, one can use \\cite[Eq. 
2, 7]{6414576}\n\\begin{align}\n Q_1 (s, \\rho) &\\simeq e^{\\left(-e^{\\mathcal{I}(s)}\\rho^{\\mathcal{J}(s)}\\right)}, \\nonumber\\\\\n \\mathcal{I}(s)& = -0.840+0.327s-0.740s^2+0.083s^3-0.004s^4,\\nonumber\\\\\n \\mathcal{J}(s)& = 2.174-0.592s+0.593s^2-0.092s^3+0.005s^4,\n\\end{align}\nto approximate $F_{g_2|g_1}$ and consequently $F_{g_2|g_1}^{-1}(\\epsilon)$. In this way, (\\ref{eq_PRTDe}) and (\\ref{eq_PINRe}) can be approximated as\n\\begin{equation}\n\\label{eq_PRTDa}\n P_2(g_1) = \\Omega\\left(\\theta-g_1P_1\\right),\n\\end{equation}\nfor the RTD, and\n\\begin{equation}\n\\label{eq_PINRa}\n P_2(g_1) = \\Omega\\left(e^{R-\\log(1+g_1P_1)}-1\\right),\n\\end{equation}\nfor the INR, where\n\\begin{equation}\\label{eq_omega}\n \\Omega (g_1) = \\frac{2}{\\sigma^2}\\left(\\frac{\\log(1-\\epsilon)}{-e^{\\mathcal{I}\\left(\\sqrt{\\frac{2g_1(1-\\sigma^2)}{\\sigma^2}}\\right)}}\\right)^{-\\frac{2}{\\mathcal{J}\\left(\\sqrt{\\frac{2g_1(1-\\sigma^2)}{\\sigma^2}}\\right)}}.\n\\end{equation}\n\n\n\nIn this way, for different HARQ protocols, we can express the instantaneous transmission power of Round 2, for every given $g_1$ in closed-form. Then, the power allocation problem (\\ref{eq_optproblem}) can be solved numerically. However, (\\ref{eq_omega}) is still complicated and it is not possible to solve (\\ref{eq_optproblem}) in closed-form. For this reason, we propose an approximation scheme to solve (\\ref{eq_optproblem}) as follows.\n\n\n\n\nLet us initially concentrate on the RTD protocol. Then, combining (\\ref{eq_optproblem}) and (\\ref{eq_PRTDe}), the expected total transmission power is given by\n\\begin{align}\\label{eq_barP}\n \\Bar{P}_\\text{RTD} = P_1 + \\int_0^{\\theta\/P_1} e^{- x}P_2\\text{d}x= P_1 + \\int_0^{\\theta\/P_1} e^{- x}\\frac{\\theta-x P_1}{F_{g_2|x}^{-1}(\\epsilon)}\\text{d}x.\n\\end{align}\nThen, Theorem \\ref{theorem1} derives the minimum required power in Round 1 and the average total power consumption as follows. \n\n\n\n\\begin{theorem}\\label{theorem1}\nWith RTD and given outage constraint $\\epsilon$, the minimum required power in Round 1 and the average total power are given by (\\ref{eq_P1}) and (\\ref{eq_666666}), respectively. \n\\end{theorem}\n\\begin{proof}\nPlugging (\\ref{eq_cdf}) into (\\ref{eq_optproblemrtd}), we have\n\\begin{align}\n 1-Q_1\\left(\\sqrt{\\frac{2g_1(1-\\sigma^2)}{\\sigma^2}},\\sqrt{\\frac{2(\\theta-g_1P_1)}{\\sigma^2 P_2}}\\right) = \\epsilon.\n\\end{align}\nBy using the approximation \\cite[Eq. 
17]{Azari2018TCultra} for moderate/large $\sigma$, i.e., if $1-Q_1(s,\rho) = \epsilon$, then $\rho = Q_1^{-1}(s, 1-\epsilon) \simeq \sqrt{-2\log(1-\epsilon)}e^{\frac{s^2}{4}}$, we can obtain
\begin{align}
 \sqrt{\frac{2(\theta-g_1P_1)}{\sigma^2 P_2}}\simeq \sqrt{-2\log(1-\epsilon)}e^{\frac{g_1(1-\sigma^2)}{2\sigma^2}}.
\end{align}
In this way, $P_2$ in (\ref{eq_barP}) is approximated by
\begin{align}
 P_2 \simeq (\theta - g_1P_1)\frac{e^{-\frac{g_1(1-\sigma^2)}{\sigma^2}}}{-\sigma^2\log(1-\epsilon)},
\end{align}
and considering RTD, (\ref{eq_barP}) can be rewritten as
\begin{align}\label{eq_Papprobeforederrivative}
 \Bar{P} &= P_1 + \int_0^{\theta/P_1} e^{- x}(\theta - xP_1)\frac{e^{-\frac{x(1-\sigma^2)}{\sigma^2}}}{-\sigma^2\log(1-\epsilon)}\text{d}x \nonumber\\
 &\overset{(a)}{=} P_1 + \frac{c}{m^2}\left(P_1 e^{-\frac{m\theta}{P_1}}-P_1 + m\theta\right),
\end{align}
where in $(a)$ we set $m = 1 + \frac{1-\sigma^2}{\sigma^2}$ and $c = \frac{-1}{\sigma^2\log(1-\epsilon)}$ for simplicity. Then, setting the derivative of (\ref{eq_Papprobeforederrivative}) with respect to $P_1$ equal to zero, the value of $P_1$ minimizing the total power can be found as
\begin{align}
 \label{eq_P1}
 \hat{P}_{1,\text{RTD}} & = \operatorname*{arg}_{P_1 > 0} \Bigg\{ 1 + \frac{c}{m^2}e^{-\frac{\theta m}{P_1}}\left(\frac{m\theta}{P_1}+1\right) - \frac{c}{m^2} = 0\Bigg\}\nonumber\\
 & = \operatorname*{arg}_{P_1 > 0} \left\{e^{-\frac{\theta m}{P_1}}\left(\frac{m\theta}{P_1}+1\right) = 1-\frac{m^2}{c} \right\}\nonumber\\
 & \overset{(b)}= \frac{-m\theta}{\mathcal{W}_{-1}\left(\frac{m^2}{ce}-\frac{1}{e}\right)+1}.
\end{align}
Here, $(b)$ is obtained by the definition of the Lambert W function $xe^x = y \Leftrightarrow x = \mathcal{W}(y)$ \cite{corless1996lambertw}. Also, because $\frac{m^2}{ce}-\frac{1}{e}<0$, we use the $\mathcal{W}_{-1}(\cdot)$ branch \cite[Eq. 16]{veberic2010having}. Then, plugging (\ref{eq_P1}) into (\ref{eq_Papprobeforederrivative}), we obtain the minimum total transmission power as
\begin{align}\label{eq_666666}
 \hat{\Bar{P}}_{\text{RTD}} =\hat{P}_{1,\text{RTD}} + \frac{c}{m^2}\left(\hat{P}_{1,\text{RTD}} e^{-\frac{m\theta}{\hat{P}_{1,\text{RTD}}}}-\hat{P}_{1,\text{RTD}} + m\theta\right).
\end{align}
\end{proof}
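As a numerical illustration of Theorem \ref{theorem1}, (\ref{eq_P1}) and (\ref{eq_666666}) can be evaluated directly; the following sketch (our own illustration, with hypothetical parameter values) uses the $\mathcal{W}_{-1}$ branch available in scipy:

\begin{verbatim}
import numpy as np
from scipy.special import lambertw

R, eps, sigma = 2.0, 1e-4, 0.3               # rate (npcu), outage target, mismatch
theta = np.exp(R) - 1.0
m = 1.0 + (1.0 - sigma**2) / sigma**2        # = 1/sigma^2
c = -1.0 / (sigma**2 * np.log(1.0 - eps))

# Optimal first-round power via the W_{-1} branch
P1 = -m * theta / (lambertw(m**2 / (c * np.e) - 1.0 / np.e, k=-1).real + 1.0)
# Resulting minimum average total power
P_bar = P1 + (c / m**2) * (P1 * np.exp(-m * theta / P1) - P1 + m * theta)
print(P1, P_bar)
\end{verbatim}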
\n\\end{align}\nHere, $P_\\text{tot}|g_1$ is the total instantaneous transmission power for two transmission rounds (i.e., one retransmission) with given $g_1$, and we define $\\Bar{P} \\doteq \\mathbb{E}_{g_1}\\left[P_\\text{tot}|g_1\\right]$ as the expected power, averaged over $g_1$. Moreover, $\\mathcal{I}(x)=1$ if $x>0$ and $\\mathcal{I}(x)=0$ if $x \\le 0$. Also, $\\mathbb{E}_{g_1}[\\cdot]$ represents the expectation operator over $g_1$. Here, we ignore the peak power constraint and assume that the BS is capable for sufficiently high transmission powers. Finally, (\\ref{eq_optproblem})-(\\ref{eq_optprobleminr}) come from the fact that, with our proposed scheme, $P_1$ is fixed and optimized with no CSIT at the BS and based on average system performance. On the other hand, $P_2$ is adapted continuously based on the predicted CSIT.\n\n\nUsing (\\ref{eq_optproblem}), the required power in Round 2 is given by\n\\begin{equation}\n\\label{eq_PRTDe}\n P_2(g_1) = \\frac{\\theta-g_1P_1}{F_{g_2|g_1}^{-1}(\\epsilon)},\n\\end{equation}\nfor the RTD, and\n\\begin{equation}\n\\label{eq_PINRe}\n P_2(g_1) = \\frac{e^{R-\\log(1+g_1P_1)}-1}{F_{g_2|g_1}^{-1}(\\epsilon)},\n\\end{equation}\nfor the INR, where $F_{g_2|g_1}^{-1}(\\cdot)$ is the inverse of the CDF given in (\\ref{eq_cdf}). Note that, $F_{g_2|g_1}^{-1}(\\cdot)$ is a complex function of $g_1$ and, consequently, it is not possible to express $P_2$ in closed-form. For this reason, one can use \\cite[Eq. 2, 7]{6414576}\n\\begin{align}\n Q_1 (s, \\rho) &\\simeq e^{\\left(-e^{\\mathcal{I}(s)}\\rho^{\\mathcal{J}(s)}\\right)}, \\nonumber\\\\\n \\mathcal{I}(s)& = -0.840+0.327s-0.740s^2+0.083s^3-0.004s^4,\\nonumber\\\\\n \\mathcal{J}(s)& = 2.174-0.592s+0.593s^2-0.092s^3+0.005s^4,\n\\end{align}\nto approximate $F_{g_2|g_1}$ and consequently $F_{g_2|g_1}^{-1}(\\epsilon)$. In this way, (\\ref{eq_PRTDe}) and (\\ref{eq_PINRe}) can be approximated as\n\\begin{equation}\n\\label{eq_PRTDa}\n P_2(g_1) = \\Omega\\left(\\theta-g_1P_1\\right),\n\\end{equation}\nfor the RTD, and\n\\begin{equation}\n\\label{eq_PINRa}\n P_2(g_1) = \\Omega\\left(e^{R-\\log(1+g_1P_1)}-1\\right),\n\\end{equation}\nfor the INR, where\n\\begin{equation}\\label{eq_omega}\n \\Omega (g_1) = \\frac{2}{\\sigma^2}\\left(\\frac{\\log(1-\\epsilon)}{-e^{\\mathcal{I}\\left(\\sqrt{\\frac{2g_1(1-\\sigma^2)}{\\sigma^2}}\\right)}}\\right)^{-\\frac{2}{\\mathcal{J}\\left(\\sqrt{\\frac{2g_1(1-\\sigma^2)}{\\sigma^2}}\\right)}}.\n\\end{equation}\n\n\n\nIn this way, for different HARQ protocols, we can express the instantaneous transmission power of Round 2, for every given $g_1$ in closed-form. Then, the power allocation problem (\\ref{eq_optproblem}) can be solved numerically. However, (\\ref{eq_omega}) is still complicated and it is not possible to solve (\\ref{eq_optproblem}) in closed-form. For this reason, we propose an approximation scheme to solve (\\ref{eq_optproblem}) as follows.\n\n\n\n\nLet us initially concentrate on the RTD protocol. Then, combining (\\ref{eq_optproblem}) and (\\ref{eq_PRTDe}), the expected total transmission power is given by\n\\begin{align}\\label{eq_barP}\n \\Bar{P}_\\text{RTD} = P_1 + \\int_0^{\\theta\/P_1} e^{- x}P_2\\text{d}x\n = P_1 + \\int_0^{\\theta\/P_1} e^{- x}\\frac{\\theta-x P_1}{F_{g_2|x}^{-1}(\\epsilon)}\\text{d}x.\n\\end{align}\nThen, Theorem \\ref{theorem1} derives the minimum required power in Round 1 and the average total power consumption as follows. 
\n\n\n\n\\begin{theorem}\\label{theorem1}\nWith RTD and given outage constraint $\\epsilon$, the minimum required power in Round 1 and the average total power are given by (\\ref{eq_P1}) and (\\ref{eq_666666}), respectively. \n\\end{theorem}\n\\begin{proof}\nPlugging (\\ref{eq_cdf}) into (\\ref{eq_optproblemrtd}), we have\n\\begin{align}\n 1-Q_1\\left(\\sqrt{\\frac{2g_1(1-\\sigma^2)}{\\sigma^2}},\\sqrt{\\frac{2(\\theta-g_1P_1)}{\\sigma^2 P_2}}\\right) = \\epsilon.\n\\end{align}\nBy using the approximation \\cite[Eq. 17]{Azari2018TCultra} for moderate\/large $\\sigma$, i.e., if $1-Q_1(s,\\rho) = 1-\\epsilon$, then $\\rho = Q_1^{-1}(s, 1-\\epsilon) \\simeq \\sqrt{-2\\log(1-\\epsilon)}e^{\\frac{s^2}{4}}$, we can obtain \n\\begin{align}\n \\sqrt{\\frac{2(\\theta-g_1P_1)}{\\sigma^2 P_2}}\\simeq \\sqrt{-2\\log(1-\\epsilon)}e^{\\frac{g_1(1-\\sigma^2)}{2\\sigma^2}}.\n\\end{align}\nIn this way, $P_2$ in (\\ref{eq_barP}) is approximated by\n\\begin{align}\n P_2 \\simeq (\\theta - g_1P_1)\\frac{e^{-\\frac{g_1(1-\\sigma^2)}{\\sigma^2}}}{-\\sigma^2\\log(1-\\epsilon)},\n\\end{align}\nand considering RTD, (\\ref{eq_barP}) can be rewritten as\n\\begin{align}\\label{eq_Papprobeforederrivative}\n \\Bar{P} &= P_1 + \\int_0^{\\theta\/P_1} e^{- x}(\\theta - xP_1)\\frac{e^{-\\frac{x(1-\\sigma^2)}{\\sigma^2}}}{-\\sigma^2\\log(1-\\epsilon)}\\text{d}x \\nonumber\\\\\n &\\overset{(a)}{=} P_1 + \\frac{c}{m^2}\\left(P_1 e^{-\\frac{m\\theta}{P_1}}-P_1 + m\\theta\\right),\n\\end{align}\nwhere, in (a) we set $m = 1 + \\frac{1-\\sigma^2}{\\sigma^2}$ and $c = \\frac{-1}{\\sigma^2\\log(1-\\epsilon)}$ for simplicity. Then, setting the derivative of (\\ref{eq_Papprobeforederrivative}) with respect to $P_1$ equal to zero, the minimum $P_1$ for the minimum total power can be find as\n\\begin{align}\n \\label{eq_P1}\n \\hat{P}_{1,\\text{RTD}} & = \\operatorname*{arg}_{P_1 > 0} \\Bigg\\{ 1 + \\frac{c}{m^2}e^{-\\frac{\\theta m}{P_1}}\\left(\\frac{m\\theta}{P_1}+1\\right) - \\frac{c}{m^2} = 0\\Bigg\\}\\nonumber\\\\\n & = \\operatorname*{arg}_{P_1 > 0} \\left\\{e^{-\\frac{\\theta m}{P_1}}\\left(\\frac{m\\theta}{P_1}+1\\right) = 1-\\frac{m^2}{c} \\right\\}\\nonumber\\\\\n & \\overset{(b)}= \\frac{-m\\theta}{\\mathcal{W}_{-1}\\left(\\frac{m^2}{ce}-\\frac{1}{e}\\right)+1}.\n\\end{align}\nHere, $(b)$ is obtained by the definition of the Lambert W function $xe^x = y \\Leftrightarrow x = \\mathcal{W}(y)$ \\cite{corless1996lambertw}. Also, because $\\frac{m^2}{ce}-\\frac{1}{e}<0$, we use the $\\mathcal{W}_{-1}(\\cdot)$ branch \\cite[Eq. 16]{veberic2010having}. Then, plugging (\\ref{eq_P1}) into (\\ref{eq_Papprobeforederrivative}), we obtain the minimum total transmission power as\n\\begin{align}\\label{eq_666666}\n \\hat{\\Bar{P}}_{\\text{RTD}} =\\hat{P}_{1,\\text{RTD}} + \\frac{c}{m^2}\\left(\\hat{P}_{1,\\text{RTD}} e^{-\\frac{m\\theta}{\\hat{P}_{1,\\text{RTD}}}}-\\hat{P}_{1,\\text{RTD}} + m\\theta\\right).\n\\end{align}\n\\end{proof}\n\n\n\\subsection{On the Effect of CSI Feedback\/Power Allocation}\nAs a benchmark, in this part, we consider the case without exploiting CSIT, i.e., we consider the typical HARQ schemes where CSI feedback is not sent along with NACK, and we do not perform power adaptation. 
\subsection{On the Effect of CSI Feedback/Power Allocation}
As a benchmark, in this part we consider the case without exploiting CSIT, i.e., the typical HARQ schemes where no CSI feedback is sent along with the NACK and no power adaptation is performed. Here, with a fixed per-round transmission power $P$, the outage probabilities of the RTD- and INR-based schemes are given by
\begin{align}\label{eq_PrRTD}
 \zeta_{\text{RTD}} = \text{Pr}\left\{\log\left(1+\left(g_1+g_2\right)P\right)<R\right\},
\end{align}
and
\begin{align}\label{eq_PrINR}
 \zeta_{\text{INR}} = \text{Pr}\left\{\log\left(1+g_1P\right)+\log\left(1+g_2P\right)<R\right\},
\end{align}
respectively, which, in contrast to (\ref{eq_PRTDe})-(\ref{eq_PINRe}), are not adapted to the instantaneous channel quality $g_1$.

\section{A Semi-Linear Approximation of the First-Order Marcum $Q$-Function}
As seen above, the first-order Marcum $Q$-function appears naturally in the CDF (\ref{eq_cdf}) and, consequently, in the performance analysis of PA systems. Its integral form, however, makes closed-form manipulations difficult. For this reason, we present a semi-linear approximation of the CDF $y(\alpha,\beta) = 1-Q_1(\alpha,\beta)$ which, as we show in the following, is useful both in expectation-based calculations and in optimization problems.

\begin{lem}\label{Lemma1}
The CDF $y(\alpha,\beta) = 1-Q_1(\alpha,\beta)$ can be semi-linearly approximated as $y(\alpha,\beta)\simeq\mathcal{Z}(\alpha,\beta)$ with
\begin{align}\label{eq_lema1}
\mathcal{Z}(\alpha,\beta)=
\begin{cases}
0, ~~~~~~~~~~~~~\mathrm{if}~\beta < c_1(\alpha) \\
\frac{\alpha+\sqrt{\alpha^2+2}}{2} e^{-\frac{1}{2}\left(\alpha^2+\left(\frac{\alpha+\sqrt{\alpha^2+2}}{2}\right)^2\right)}I_0\left(\alpha\frac{\alpha+\sqrt{\alpha^2+2}}{2}\right)\times\\
~~~\left(\beta-\frac{\alpha+\sqrt{\alpha^2+2}}{2}\right) + 1-Q_1\left(\alpha,\frac{\alpha+\sqrt{\alpha^2+2}}{2}\right), \\
~~~~~~~~~~~~~~~~\mathrm{if}~ c_1(\alpha)\leq\beta\leq c_2(\alpha)\\
1, ~~~~~~~~~~~~~\mathrm{if}~ \beta> c_2(\alpha),
\end{cases}
\end{align}
with
\begin{align}\label{eq_c1}
 & c_1(\alpha) =~~~ \max\Bigg(0,\frac{\alpha+\sqrt{\alpha^2+2}}{2}+\nonumber\\
 &~~~\frac{Q_1\left(\alpha,\frac{\alpha+\sqrt{\alpha^2+2}}{2}\right)-1}{\frac{\alpha+\sqrt{\alpha^2+2}}{2} e^{-\frac{1}{2}\left(\alpha^2+\left(\frac{\alpha+\sqrt{\alpha^2+2}}{2}\right)^2\right)}I_0\left(\alpha\frac{\alpha+\sqrt{\alpha^2+2}}{2}\right)}\Bigg),
\end{align}
\begin{align}\label{eq_c2}
 & c_2(\alpha) = \frac{\alpha+\sqrt{\alpha^2+2}}{2}+\nonumber\\
 & ~~~\frac{Q_1\left(\alpha,\frac{\alpha+\sqrt{\alpha^2+2}}{2}\right)}{\frac{\alpha+\sqrt{\alpha^2+2}}{2} e^{-\frac{1}{2}\left(\alpha^2+\left(\frac{\alpha+\sqrt{\alpha^2+2}}{2}\right)^2\right)}I_0\left(\alpha\frac{\alpha+\sqrt{\alpha^2+2}}{2}\right)}.
\end{align}
\end{lem}
\begin{proof}
We aim to approximate the CDF in the range $y \in [0, 1]$ by
\begin{align}\label{eq_YY}
 y-y_0 = m(x-x_0),
\end{align}
where $\mathcal{C} = (x_0,y_0)$ is a point on the CDF curve and $m$ is the slope of $y(\alpha,\beta)$ at point $\mathcal{C}$. Then, the parts of the line outside this region are replaced by $y=0$ and $y=1$ (see Fig. \ref{fig_CDFilu}).

To obtain a good approximation of the CDF, we select the point $\mathcal{C}$ by solving
\begin{align}\label{eq_partialsquare}
 x = \mathop{\arg}_{t} \left\{ \frac{\partial^2\left(1-Q_1(\alpha,t)\right)}{\partial t^2} = 0\right\},
\end{align}
because the function is symmetric around this point, and (\ref{eq_partialsquare}) gives the best fit for a linear function. Then, using the derivative of the first-order Marcum $Q$-function with respect to $x$ \cite[Eq. (2)]{Pratt1968PIpartial}
\begin{align}\label{eq_derivativeMarcumQ}
 \frac{\partial Q_1(\alpha,x)}{\partial x} = -x e^{-\frac{\alpha^2+x^2}{2}}I_0(\alpha x),
\end{align}
(\ref{eq_partialsquare}) is equivalent to
\begin{align}
 x = \mathop{\arg}_{x} \left\{\frac{\partial\left(x e^{-\frac{\alpha^2+x^2}{2}}I_0(\alpha x)\right)}{\partial x}=0\right\}.
\end{align}
Using the approximation $I_0(x) \simeq \frac{e^x}{\sqrt{2\pi x}} $ \cite[Eq. (9.7.1)]{abramowitz1999ia} and writing
\begin{align}
 &~~~\frac{\partial\left(\sqrt{\frac{x}{2\pi \alpha}}e^{-\frac{(x-\alpha)^2}{2}}\right)}{\partial x} = 0\nonumber\\
 \Rightarrow &\frac{1}{\sqrt{2\pi\alpha}}\left(\frac{e^{-\frac{(x-\alpha)^2}{2}}}{2\sqrt{x}}+\sqrt{x}e^{-\frac{(x-\alpha)^2}{2}}(\alpha-x)\right) = 0\nonumber\\
 \Rightarrow &2x^2-2\alpha x-1 =0,
\end{align}
we obtain
\begin{align}\label{eq_beta0}
 x = \frac{\alpha+\sqrt{\alpha^2+2}}{2},
\end{align}
since $x\geq0$. 
In this way, we find the point \n\\begin{align}\n \\mathcal{C}=\\left(\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}, 1-Q_1\\left(\\alpha,\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}\\right)\\right).\n\\end{align}\n\nTo calculate the slope $m$ at the point $\\mathcal{C}$, we plug (\\ref{eq_beta0}) into (\\ref{eq_derivativeMarcumQ}) leading to\n\\begin{align}\\label{eq_m}\n & m = \\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}\\times\\nonumber\\\\\n & ~~~e^{-\\frac{1}{2}\\left(\\alpha^2+\\left(\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}\\right)^2\\right)}I_0\\left(\\alpha\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}\\right).\n\\end{align}\nFinally, using (\\ref{eq_YY}), (\\ref{eq_beta0}) and (\\ref{eq_m}), the CDF $y(\\alpha, \\beta) = 1-Q_1(\\alpha,\\beta)$ can be approximated as in (\\ref{eq_lema1}). Note that, because the CDF is limited to the range [0 1], the boundaries $c_1$ and $c_2$ in (\\ref{eq_lema1}) are obtained by setting $y=0$ and $y=1$ which leads to the semi-linear approximation as given in (\\ref{eq_lema1}).\n\\end{proof}\n\n\nTo further simplify the calculation, considering different ranges of $\\alpha$, the approximation (\\ref{eq_lema1}) can be simplified as stated in the following corollaries. \n\n\\begin{corollary}\\label{coro1}\nFor moderate\/large values of $\\alpha$, we have $y(\\alpha,\\beta)\\simeq\\tilde{\\mathcal{Z}}(\\alpha,\\beta)$ where\n\\begin{align}\\label{eq_coro1}\n\\tilde{\\mathcal{Z}}(\\alpha,\\beta)&\\simeq\n\\begin{cases}\n0, ~~~~~~~~~~~~~\\mathrm{if}~\\beta < \\frac{-\\frac{1}{2}\\left(1-e^{-\\alpha^2}I_0(\\alpha^2)\\right)}{\\alpha e^{-\\alpha^2}I_0(\\alpha^2)}+\\alpha \\\\ \n\\alpha e^{-\\alpha^2}I_0(\\alpha^2)(\\beta-\\alpha) + \n\\frac{1}{2}\\left(1-e^{-\\alpha^2}I_0(\\alpha^2)\\right), \\\\ ~~~~~~~~~~~~~~~~\\mathrm{if}~\\frac{-\\frac{1}{2}\\left(1-e^{-\\alpha^2}I_0(\\alpha^2)\\right)}{\\alpha e^{-\\alpha^2}I_0(\\alpha^2)}+\\alpha \\leq\\beta\\\\\n~~~~~~~~~~~~~~~~~~~~\\leq\\frac{1-\\frac{1}{2}\\left(1-e^{-\\alpha^2}I_0(\\alpha^2)\\right)}{\\alpha e^{-\\alpha^2}I_0(\\alpha^2)}+\\alpha \\\\\n1, ~~~~~~~~~~~~~\\mathrm{if}~ \\beta> \\frac{1-\\frac{1}{2}\\left(1-e^{-\\alpha^2}I_0(\\alpha^2)\\right)}{\\alpha e^{-\\alpha^2}I_0(\\alpha^2)}+\\alpha.\n\\end{cases}\\\\\n&\\overset{(a)}{\\simeq}\n\\begin{cases}\n0, ~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\mathrm{if}~ \\beta < \\breve{c}_1 \\\\ \n\\frac{1}{\\sqrt{2\\pi}}(\\beta-\\alpha) + \n\\frac{1}{2}\\left(1-\\frac{1}{\\sqrt{2\\pi\\alpha^2}}\\right), \\\\~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\mathrm{if}~ \\breve{c}_1 \\leq\\beta\\leq \\breve{c}_2 \\\\\n1, ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\mathrm{if}~ \\beta>\\breve{c}_2,\n\\end{cases}\n\\end{align}\nwith $\\breve{c}_1$ and $\\breve{c}_2$ given in (\\ref{eq_dotc1}) and (\\ref{eq_dotc2}), respectively.\n\\end{corollary}\n\n\\begin{proof}\nUsing (\\ref{eq_beta0}) for moderate\/large values of $\\alpha$, we have $x \\simeq \\alpha$ and\n\\begin{align}\n \\tilde{c}_1 = \\frac{-\\frac{1}{2}\\left(1-e^{-\\alpha^2}I_0(\\alpha^2)\\right)}{\\alpha e^{-\\alpha^2}I_0(\\alpha^2)}+\\alpha,\n\\end{align}\n\\begin{align}\n \\tilde{c}_2 = \\frac{1-\\frac{1}{2}\\left(1-e^{-\\alpha^2}I_0(\\alpha^2)\\right)}{\\alpha e^{-\\alpha^2}I_0(\\alpha^2)}+\\alpha,\n\\end{align}\nwhich leads to (\\ref{eq_coro1}). Note that in (\\ref{eq_coro1}) we have used the fact that \\cite[Eq. 
(A-3-2)]{schwartz1995communication}
\begin{align}
 Q_1(\alpha,\alpha) = \frac{1}{2}\left(1+e^{-\alpha^2}I_0(\alpha^2)\right).
\end{align}
Finally, $(a)$ is obtained by using the approximation $I_0(x) \simeq \frac{e^x}{\sqrt{2\pi x}}$, where
\begin{align}\label{eq_dotc1}
 \breve{c}_1 = -\frac{\sqrt{2\pi}}{2}\left(1-\frac{1}{\sqrt{2\pi\alpha^2}}\right)+\alpha,
\end{align}
\begin{align}\label{eq_dotc2}
 \breve{c}_2 = \sqrt{2\pi}-\frac{\sqrt{2\pi}}{2}\left(1-\frac{1}{\sqrt{2\pi\alpha^2}}\right)+\alpha.
\end{align}
\end{proof}

\begin{corollary}\label{coro2}
For small values of $\alpha$, we have $y(\alpha,\beta)\simeq\hat{ \mathcal{Z}}(\alpha,\beta)$ with
\begin{align}\label{eq_coro2}
\hat{ \mathcal{Z}}(\alpha,\beta)\simeq
\begin{cases}
0, ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\mathrm{if}~\beta < \hat{c}_1 \\
\frac{\alpha+\sqrt{2}}{2} e^{-\frac{\alpha^2+\left(\frac{\alpha+\sqrt{2}}{2}\right)^2}{2}}\times\\
~~~I_0\left(\frac{\alpha(\alpha+\sqrt{2})}{2}\right)\left(\beta-\frac{\alpha+\sqrt{2}}{2}\right) + \\
~~~1-Q_1\left(\alpha,\frac{\alpha+\sqrt{2}}{2}\right),
~~~~~~~~\mathrm{if}~\hat{c}_1 \leq\beta\leq \hat{c}_2 \\
1, ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\mathrm{if}~ \beta> \hat{c}_2,
\end{cases}
\end{align}
with $\hat{c}_1$ and $\hat{c}_2$ given in (\ref{eq_hatc1}) and (\ref{eq_hatc2}), respectively.
\end{corollary}
\begin{proof}
Using (\ref{eq_beta0}) for small values of $\alpha$, we have $x\simeq \frac{\alpha+\sqrt{2}}{2}$, which leads to
\begin{equation}\label{eq_hatc1}
 \hat{c}_1 = \frac{-1+Q_1\left(\alpha,\frac{\alpha+\sqrt{2}}{2}\right)}{\left(\frac{\alpha+\sqrt{2}}{2} \right)^2 e^{-\frac{\alpha^2+\left(\frac{\alpha+\sqrt{2}}{2}\right)^2}{2}} I_0\left(\frac{\alpha(\alpha+\sqrt{2})}{2}\right)}+\alpha,
\end{equation}
and
\begin{equation}\label{eq_hatc2}
 \hat{c}_2 = \frac{Q_1\left(\alpha,\frac{\alpha+\sqrt{2}}{2}\right)}{\left(\frac{\alpha+\sqrt{2}}{2} \right)^2 e^{-\frac{\alpha^2+\left(\frac{\alpha+\sqrt{2}}{2}\right)^2}{2}} I_0\left(\frac{\alpha(\alpha+\sqrt{2})}{2}\right)}+\alpha,
\end{equation}
and simplifies (\ref{eq_lema1}) to (\ref{eq_coro2}).
\end{proof}

To illustrate these semi-linear approximations, Fig. \ref{fig_CDFilu} shows the CDF $y(\alpha,\beta)= 1-Q_1(\alpha,\beta)$ for both small and large values of $\alpha$, and compares the exact CDF with the approximation schemes of Lemma \ref{Lemma1} and Corollaries \ref{coro1}-\ref{coro2}. From Fig. \ref{fig_CDFilu}, we can observe that Lemma \ref{Lemma1} is tight for a broad range of $\alpha$ and moderate values of $\beta$. Moreover, the tightness improves as $\alpha$ decreases. Also, Corollaries \ref{coro1}-\ref{coro2} provide good approximations for large and small values of $\alpha$, respectively.
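The comparison in Fig. \ref{fig_CDFilu} is straightforward to reproduce; the sketch below (our own illustration) implements (\ref{eq_lema1}), evaluating $Q_1$ through the non-central chi-square survival function:

\begin{verbatim}
import numpy as np
from scipy.stats import ncx2
from scipy.special import i0

def marcum_q1(a, b):
    return ncx2.sf(b**2, df=2, nc=a**2)      # Q_1(a, b)

def Z(alpha, beta):
    """Semi-linear approximation of 1 - Q_1(alpha, beta) (Lemma 1)."""
    x0 = (alpha + np.sqrt(alpha**2 + 2.0)) / 2.0
    y0 = 1.0 - marcum_q1(alpha, x0)
    m = x0 * np.exp(-(alpha**2 + x0**2) / 2.0) * i0(alpha * x0)   # slope at C
    c1, c2 = max(0.0, x0 - y0 / m), x0 + (1.0 - y0) / m
    if beta < c1: return 0.0
    if beta > c2: return 1.0
    return y0 + m * (beta - x0)

for beta in (0.5, 1.0, 1.5, 2.0, 3.0):
    print(beta, 1.0 - marcum_q1(1.5, beta), Z(1.5, beta))
\end{verbatim}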
However, as observed in \\cite{Bocus2013CLapproximation,Fu2011GLOBECOMexponential,Makki2013TCfeedback,Makki2011Eurasipcapacity,Makki2018WCLwireless,Makki2016TVTperformance,Simon2003TWCsome,Suraweera2010TVTcapacity,Ma2000JSACunified,Digham2007TCenergy,Cao2016CLsolutions,sofotasios2015solutions,Azari2018TCultra,Alam2014INFOCOMWrobust,Gao2018IAadmm,Shen2018TVToutage,Song2017JLTimpact,Tang2019IAan} and in the following, in different applications, the Marcum $Q$-function is normally combined with other functions which tend to zero of the tails of the CDF. In such cases, the inaccuracy of the approximation at the tails does not affect the tightness of the final result. \n\n\n\n\n\n\n\n\n\n\n\n\n\n\nAs an example, we first consider a general integral in the form of\n\\begin{align}\\label{eq_integral}\nG(\\alpha,\\rho)=\\int_\\rho^\\infty{e^{-nx} x^m \\left(1-Q_1(\\alpha,x)\\right)\\text{d}x} ~~\\forall n,m,\\alpha,\\rho>0.\n\\end{align}\nSuch an integral has been observed in various applications, e.g., in bit-error-probability evaluation of a Rayleigh fading channel \\cite[eq. (1) (13)]{Simon2003TWCsome}, in energy detection of unknown signals over various multipath fading channels \\cite[eq. (2)]{Cao2016CLsolutions}, in capacity analysis with channel inversion and fixed rate over correlated Nakagami fading \\cite[eq. (1)]{sofotasios2015solutions}, in performance evaluation of incoherent receivers in radar systems \\cite[eq. (3)]{Cui2012ELtwo}, and in error probability analysis of diversity receivers \\cite[eq. (1)]{Gaur2003TVTsome}. However, depending on the values of $n, m$ and $\\rho$, (\\ref{eq_integral}) may have no closed-form expression. Using Lemma \\ref{Lemma1}, $G(\\alpha,\\rho)$ can be approximated in closed-form as presented in Lemma \\ref{Lemma2}.\n\n\\begin{figure}\n\\centering\n \\includegraphics[width=0.7\\columnwidth]{Figure1.pdf}\\\\\n\\caption{Illustration of the semi-linear approximation with Lemma \\ref{Lemma1}, and Corollaries \\ref{coro1}-\\ref{coro2}. For each value of $\\alpha\\in[1,1.5,2]$, the approximated results obtained by Lemma \\ref{Lemma1} and Corollaries \\ref{coro1}-\\ref{coro2} are compared with the exact value for a broad range of $\\beta$.}\n\\label{fig_CDFilu}\n\\end{figure}\n\n\\begin{lem}\\label{Lemma2}\nThe integral (\\ref{eq_integral}) is approximately given by\n\\begin{align}\nG(\\alpha,\\rho)\\simeq\n\\begin{cases}\n\\Gamma(m+1,n\\rho)n^{-m-1}, ~~~~~~~~~~~~~~~~~~\\mathrm{if}~ \\rho \\geq \\breve{c}_2 \\\\ \n\\Gamma(m+1,n\\breve{c}_2)n^{-m-1} + \\\\\n~~\\left(-\\frac{\\alpha}{\\sqrt{2\\pi}}+0.5*\\left(1-\\frac{1}{\\sqrt{2\\pi\\alpha^2}}\\right)\\right)\\times n^{-m-1}\\times\\\\\n~~\\left(\\Gamma(m+1,n\\max(\\breve{c}_1,\\rho))-\\Gamma(m+1,n\\breve{c}_2)\\right)+\\\\\n~~\\left(\\Gamma(m+2,n\\max(\\breve{c}_1,\\rho))-\\Gamma(m+2,n\\breve{c}_2)\\right)\\times\\\\\n~~\\frac{n^{-m-2}}{\\sqrt{2\\pi}},\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\mathrm{if}~ \\rho<\\breve{c}_2,\n\\end{cases}\n\\end{align}\nwhere $\\Gamma(s,x) = \\int_{x}^{\\infty} t^{s-1}e^{-t} \\mathrm{d}t$ is the upper incomplete gamma function \\cite[Eq. 6.5.1]{abramowitz1999ia}.\n\\end{lem}\n\\begin{proof}\nSee Appendix \\ref{proof_Lemma2}. 
\n\\end{proof}\n\n\n\n\n\n\n\n\n\nAs a second integration example of interest, consider\n\\begin{align}\\label{eq_integralT}\n T(\\alpha,m,a,\\theta_1,\\theta_2) = \\int_{\\theta_1}^{\\theta_2} e^{-mx}\\log(1+ax)Q_1(\\alpha,x)\\text{d}x \\nonumber\\\\\\forall m>0,a,\\alpha,\n\\end{align}\nwith $\\theta_2>\\theta_1\\geq0$, which does not have a closed-form expression for different values of $m, a, \\alpha$. This type of integral is interesting as it could be used to analyze the expected performance of outage-limited systems, e.g, the considered integral in the shape of \\cite[eq. (1) (13)]{Simon2003TWCsome}, \\cite[eq. (2)]{Cao2016CLsolutions}, \\cite[eq. (3)]{Cui2012ELtwo}, and \\cite[eq. (1)]{Gaur2003TVTsome}, applied in the analysis of the outage-limited throughput, i.e., when the outage-limited throughput $\\log(1+ax)Q_1(\\alpha,x)$ \\cite[p. 2631]{Biglieri1998TITfading}\\cite[Theorem 6]{Verdu1994TITgeneral}\\cite[Eq. (9)]{Makki2014TCperformance} is averaged over fading statistics. Then, using Lemma \\ref{Lemma1}, (\\ref{eq_integralT}) can be approximated in closed-form as follows.\n\n\\begin{lem}\\label{Lemma3}\nThe integral (\\ref{eq_integralT}) is approximately given by\n\\begin{align}\n T(\\alpha,m,a,\\theta_1,\\theta_2)\\simeq~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\nonumber\\\\\n\\begin{cases}\n\\mathcal{F}_1(\\theta_2)-\\mathcal{F}_1(\\theta_1), ~~~~~~~~~~~~~~~~~~~~~~~~~\\mathrm{if}~ 0\\leq\\theta_1<\\theta_2 < c_1 \\\\ \n\\mathcal{F}_1(c_1)-\\mathcal{F}_1(\\theta_1)+\\mathcal{F}_2(\\max(c_2,\\theta_2))-\\mathcal{F}_2(c_1), \\\\~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\mathrm{if}~ \\theta_1c_1\\\\\n0, ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\mathrm{if}~ \\theta_1 > c_2,\n\\end{cases} \n\\end{align}\nwhere \n$c_1$ and $c_2$ are given by (\\ref{eq_c1}) and (\\ref{eq_c2}), respectively. Moreover,\n\\begin{align}\\label{eq_F1}\n \\mathcal{F}_1(x) \\doteq \\frac{1}{m}\\left(-e^{\\frac{m}{a}}\\operatorname{E_1}\\left(mx+\\frac{m}{a}\\right)-e^{-mx}\\log(ax+1)\\right),\n\\end{align}\nand\n\\begin{align}\\label{eq_F2}\n \\mathcal{F}_2(x) \\doteq &~ \\mathrm{e}^{-mx}\\Bigg(\\left(mn_2-an_2-amn_1\\right)\\mathrm{e}^\\frac{m\\left(ax+1\\right)}{a}\\nonumber\\\\&~~ \\operatorname{E_1}\\left(\\frac{m\\left(ax+1\\right)}{a}\\right)-\n a\\left(mn_2x+n_2+mn_1\\right)\\nonumber\\\\&~~\\log\\left(ax+1\\right)-an_2\\Bigg),\n\\end{align}\nwith\n\\begin{align}\\label{eq_n1}\n n_1 = 1+\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2} e^{-\\frac{1}{2}\\left(\\alpha^2+\\left(\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}\\right)^2\\right)}\\times\\nonumber\\\\\n I_0\\left(\\alpha\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}\\right)\\times\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}-\\nonumber\\\\\n 1+Q_1\\left(\\alpha,\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}\\right),\n\\end{align}\nand\n\\begin{align}\\label{eq_n2}\n n_2 = -\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2} e^{-\\frac{1}{2}\\left(\\alpha^2+\\left(\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}\\right)^2\\right)}\\times\\nonumber\\\\\n I_0\\left(\\alpha\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}\\right).\n\\end{align}\nIn (\\ref{eq_F1}) and (\\ref{eq_F2}), $\\operatorname{E_1}(x) = \\int_x^{\\infty} \\frac{e^{-t}}{t} \\mathrm{d}t$ is the Exponential Integral function \\cite[p. 
228, (5.1.1)]{abramowitz1999ia}.
\end{lem}

\begin{proof}
See Appendix \ref{proof_Lemma3}.
\end{proof}

Finally, setting $m = 0$ in (\ref{eq_integralT}), i.e.,
\begin{align}\label{eq_integralTs}
 T(\alpha,0,a,\theta_1,\theta_2) = \int_{\theta_1}^{\theta_2} \log(1+ax)Q_1(\alpha,x)\text{d}x, ~\forall a,\alpha,
\end{align}
one can follow the same procedure as in (\ref{eq_integralT}) to approximate (\ref{eq_integralTs}) as
\begin{align}\label{eq_integralTss}
 T(\alpha,0,a,\theta_1,\theta_2)\simeq
\begin{cases}
\mathcal{F}_3(\theta_2)-\mathcal{F}_3(\theta_1), \\~~~~~~~~~~~~~~~~~~~~~~\mathrm{if}~ 0\leq\theta_1<\theta_2 < c_1 \\
\mathcal{F}_3(c_1)-\mathcal{F}_3(\theta_1)+\mathcal{F}_4(\min(c_2,\theta_2))-\mathcal{F}_4(c_1), \\~~~~~~~~~~~~~~~~~~~~~~\mathrm{if}~ \theta_1 < c_1 \leq \theta_2 \\
\mathcal{F}_4(\min(c_2,\theta_2))-\mathcal{F}_4(\theta_1), \\~~~~~~~~~~~~~~~~~~~~~~\mathrm{if}~ c_1 \leq \theta_1 \leq c_2 \\
0, ~~~~~~~~~~~~~~~~~~~\mathrm{if}~ \theta_1 > c_2,
\end{cases}
\end{align}
with $c_1$ and $c_2$ given by (\ref{eq_c1}) and (\ref{eq_c2}), respectively. Also,
\begin{align}
 \mathcal{F}_3(x) = \frac{(ax+1)(\log(ax+1)-1)}{a},
\end{align}
and
\begin{align}
 \mathcal{F}_4(x) = \frac{n_2\left((2a^2x^2-2)\log(ax+1)-a^2x^2+2ax\right)}{4a^2}+\nonumber\\\frac{n_1(ax+1)(\log(ax+1)-1)}{a},
\end{align}
where $n_1$ and $n_2$ are given by (\ref{eq_n1}) and (\ref{eq_n2}), respectively.


\begin{figure}
\centering
 \includegraphics[width=0.7\columnwidth]{Figure2.pdf}\\
\caption{The integral (\ref{eq_integral}) involving the Marcum $Q$-function. Solid lines are exact values, while crosses are the results obtained from Lemma \ref{Lemma2}; $\alpha = 2$, $(m,n) = \{(4,4), (3,3), (2,2), (0,1), (1,1)\}$.}\label{fig_integrald}
\end{figure}

In Figs. \ref{fig_integrald} and \ref{fig_t2}, we evaluate the tightness of the approximations in Lemmas \ref{Lemma2}, \ref{Lemma3} and (\ref{eq_integralTss}), for different values of $m$, $n$, $\rho$, $a$ and $\alpha$. From the figures, it can be observed that the approximation schemes of Lemmas \ref{Lemma2}-\ref{Lemma3} and (\ref{eq_integralTss}) are very tight for different parameter settings, while our proposed semi-linear approximation makes it possible to represent the integrals in closed form. In this way, although the approximation (\ref{eq_lema1}) is not tight at the tails of the CDF, it gives tight approximation results when it appears in integrals in which the Marcum $Q$-function is combined with other functions tending to zero at the tails. Also, as we show in Section 3, the semi-linear approximation scheme is efficient in optimization problems involving the Marcum $Q$-function. Finally, to tightly approximate the Marcum $Q$-function at the tails, which are the range of interest in, e.g., error probability analysis, one can use the approximation schemes of \cite{Simon2000TCexponential,annamalai2001WCMCcauchy}.


\section{Applications in PA Systems}
In Section 2, we showed how the proposed approximation scheme enables us to derive closed-form expressions for a broad range of integrals, as required in various expectation-based calculations, e.g., \cite{Simon2003TWCsome,Cao2016CLsolutions,sofotasios2015solutions,Cui2012ELtwo,Gaur2003TVTsome,Simon2000TCexponential,6911973}. On the other hand, the Marcum $Q$-function may also appear in optimization problems, e.g., \cite[Eq. 
(8)]{Azari2018TCultra}, \cite[eq. (9)]{Alam2014INFOCOMWrobust}, \cite[eq. (10)]{Gao2018IAadmm}, \cite[eq. (10)]{Shen2018TVToutage}, \cite[eq. (15)]{Song2017JLTimpact}, \cite[eq. (22)]{Tang2019IAan}. For this reason, in this section, we provide an example of using our proposed semi-linear approximation in an optimization problem for PA systems.

\begin{figure}
\centering
 \includegraphics[width=0.7\columnwidth]{Figure3.pdf}\\
\caption{The integral (\ref{eq_integralT}) involving the Marcum $Q$-function. Solid lines are exact values, while crosses are the results obtained from Lemma \ref{Lemma3} and (\ref{eq_integralTss}); $\theta_1 = 0, \theta_2 = \infty$. }\label{fig_t2}
\end{figure}

\subsection{Problem Formulation}
Vehicle communication is one of the most important use cases in 5G. Here, the main focus is to provide efficient and reliable connections to cars and public transports, e.g., busses and trains. CSIT plays an important role in achieving these goals, since the data transmission efficiency can be improved by updating the transmission parameters relative to the instantaneous channel state. However, the typical CSIT acquisition systems, which are mostly designed for (semi)static channels, may not work well for high-speed vehicles. This is because, depending on the vehicle speed, the positions of the antennas may change quickly and the channel information becomes inaccurate. To overcome this issue, \cite{Sternad2012WCNCWusing,DT2015ITSMmaking,BJ2017PIMRCpredictor,phan2018WSAadaptive,Jamaly2014EuCAPanalysis, BJ2017ICCWusing} propose the PA setup shown in Fig. \ref{system}. In a PA setup, which is of interest in vehicle-to-everything (V2X) communications \cite{Sternad2012WCNCWusing} as well as in integrated access and backhauling \cite{Teyeb2019VTCintegrated}, two antennas are deployed on the roof of the vehicle. The first antenna, the PA, estimates the channel and sends feedback to the BS at time $t$. Then, the BS uses the CSIT provided by the PA to communicate with the second antenna, which we refer to as the RA, at time $t+\delta$, where $\delta$ is the processing time at the BS. In this way, the BS can use the CSIT acquired from the PA to perform various CSIT-based transmission schemes, e.g., \cite{Sternad2012WCNCWusing,BJ2017ICCWusing}.


\begin{figure}
\centering
 \includegraphics[width=0.7\columnwidth]{Figure4.pdf}\\
\caption{A PA system with the mismatch problem. Here, $\hat h$ is the channel between the BS and the PA, while $h$ refers to the BS-RA link. The vehicle moves with speed $v$ and the antenna separation is $d_\text{a}$. The red arrow indicates the spatial mismatch, i.e., the RA does not reach the same point as the PA was at when sending pilots. Also, $d_\text{m}$ is the moving distance of the vehicle, which is affected by the processing delay $\delta$ of the BS. }\label{system}
\end{figure}


We assume that the vehicle moves through a stationary electromagnetic standing wave pattern\footnote{This has been experimentally verified in, e.g., \cite{Jamaly2014EuCAPanalysis}.}. Thus, if the RA reaches exactly the same position as the position of the PA when sending the pilots, it will experience the same channel and the CSIT will be perfect.
However, if the RA does not reach the same spatial point as the PA, because, e.g., the BS processing delay is not equal to the time needed by the RA to reach the point of the PA, the RA receives the data at a position different from the one at which the PA sent the pilots. Such spatial mismatch may lead to CSIT inaccuracy, which will affect the system performance considerably. Thus, we need adaptive schemes to compensate for it.

Considering downlink transmission in the BS-RA link, the received signal is given by
\begin{align}\label{eq_Y_pa}
{{Y}} = \sqrt{P}hX + Z.
\end{align}
Here, $P$ represents the transmit power, $X$ is the input message with unit variance, and $h$ is the fading coefficient between the BS and the RA. Also, $Z \sim \mathcal{CN}(0,1)$ denotes the independent and identically distributed (IID) complex Gaussian noise added at the receiver.


We denote the channel coefficient of the PA-BS uplink by $\hat{h}$. Also, we define $d$ as the effective distance between the place where the PA estimates the channel at time $t$ and the place that the RA reaches at time $t+\delta$. As can be seen in Fig. \ref{system}, $d$ can be calculated as
\begin{align}\label{eq_d}
 d = |d_\text{a} - d_\text{m} | = |d_\text{a} - v\delta|,
\end{align}
where $d_\text{m}$ is the moving distance of the vehicle during the time interval $\delta$, and $v$ is the velocity of the vehicle. Also, $d_\text{a}$ is the antenna separation between the PA and the RA. In accordance with (\ref{eq_d}), we assume that $d$ can be calculated by the BS.

Using the classical Jakes correlation model \cite[p. 2642]{Shin2003TITcapacity} and assuming a semi-static propagation environment, i.e., assuming that the coherence time of the propagation environment is larger than $\delta$, the channel coefficient of the BS-RA downlink can be modeled as
\begin{align}\label{eq_H_pa}
 h = \sqrt{1-\sigma^2} \hat{h} + \sigma q.
\end{align}
Here, $q \sim \mathcal{CN}(0,1)$ is independent of the known channel value $\hat{h}\sim \mathcal{CN}(0,1)$, and $\sigma$ is a function of the effective distance $d$ as
\begin{align}
 \sigma = \frac{\frac{\phi_2^2-\phi_1^2}{\phi_1}}{\sqrt{ \left(\frac{\phi_2}{\phi_1}\right)^2 + \left(\frac{\phi_2^2-\phi_1^2}{\phi_1}\right)^2 }} = \frac{\phi_2^2-\phi_1^2}{\sqrt{ \left(\phi_2\right)^2 + \left(\phi_2^2-\phi_1^2\right)^2 }} .
\end{align}
Here, $\phi_1 = \bm{\Phi}_{1,1}^{1/2} $ and $\phi_2 = \bm{\Phi}_{1,2}^{1/2} $, where $\bm{\Phi}$ is from the Jakes model \cite[p. 2642]{Shin2003TITcapacity}
\begin{align}\label{eq_tildeH}
 \bigl[ \begin{smallmatrix}
 \hat{h}\\h
\end{smallmatrix} \bigr]= \bm{\Phi}^{1/2} \bm{H}_{\varepsilon}.
\end{align}
Note that the channel model (\ref{eq_H_pa}) has been experimentally verified in, e.g., \cite{Jamaly2014EuCAPanalysis} for PA setups. Also, one can follow the same method as in \cite{Guo2019WCLrate} to extend the model to the cases with temporally-correlated channels. Moreover, in (\ref{eq_tildeH}), $\bm{H}_{\varepsilon}$ has independent circularly-symmetric zero-mean complex Gaussian entries with unit variance, and $\bm{\Phi}$ is the channel correlation matrix with the $(i,j)$-th entry given by
\begin{align}\label{eq_phi}
 \Phi_{i,j} = J_0\left((i-j)\cdot2\pi d/ \lambda\right), ~\forall i,j.
\end{align}
Here, $J_n(x) = (\frac{x}{2})^n \sum_{i=0}^{\infty}\frac{(\frac{x}{2})^{2i}(-1)^{i} }{i!\Gamma(n+i+1)}$ represents the $n$-th order Bessel function of the first kind. Moreover, $\lambda$ denotes the carrier wavelength, i.e., $\lambda = c/f_\text{c}$, where $c$ is the speed of light and $f_\text{c}$ is the carrier frequency.
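To make the mismatch model concrete, the following short sketch (our own illustration; all parameter values are hypothetical) evaluates $d$ from (\ref{eq_d}), the correlation entries $\Phi_{1,1}$ and $\Phi_{1,2}$ from (\ref{eq_phi}), and the resulting $\sigma$, using $\phi_1^2 = \Phi_{1,1}$ and $\phi_2^2 = \Phi_{1,2}$:

\begin{verbatim}
import numpy as np
from scipy.special import j0   # zeroth-order Bessel function of the first kind

v, delta = 25.0, 59e-3         # vehicle speed [m/s], BS processing delay [s]
d_a, f_c = 1.5, 2.68e9         # antenna separation [m], carrier frequency [Hz]
lam = 3e8 / f_c                # carrier wavelength [m]

d = abs(d_a - v * delta)       # effective mismatch distance
Phi11 = j0(0.0)                # = 1
Phi12 = j0(2 * np.pi * d / lam)

# sigma from the Jakes-model correlations; only sigma^2 enters the fading model
sigma = (Phi12 - Phi11) / np.sqrt(Phi12 + (Phi12 - Phi11) ** 2)
print(d, Phi12, sigma)
\end{verbatim}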
\n\nFrom (\\ref{eq_H}), for a given $\\hat{h}$ and $\\sigma \\neq 0$, $|h|$ follows a Rician distribution, i.e., the probability density function (PDF) of $|h|$ is given by \n\\begin{align}\n f_{|h|\\big|\\hat{g}}(x) = \\frac{2x}{\\sigma^2}e^{-\\frac{x^2+\\hat{g}}{\\sigma^2}}I_0\\left(\\frac{2x\\sqrt{\\hat{g}}}{\\sigma^2}\\right),\n\\end{align}\nwhere $\\hat{g} = (1-\\sigma^2)|\\hat{h}|^2$. Let us define the channel gain between the BS and the RA as $ g = |{h}|^2$. Then, the PDF $f_{g|\\hat{g}}$ is given by\n\n\\begin{align}\\label{eq_pdf}\n f_{g|\\hat{g}}(x) = \\frac{1}{\\sigma^2}e^{-\\frac{x+\\hat{g}}{\\sigma^2}}I_0\\left(\\frac{2\\sqrt{x\\hat{g}}}{\\sigma^2}\\right),\n\\end{align}\nwhich is non-central Chi-squared distributed with the CDF containing the first-order Marcum $Q$-function as\n\\begin{align}\\label{eq_cdf}\n F_{g|\\hat{g}}(x) = 1 - Q_1\\left( \\sqrt{\\frac{2\\hat{g}}{\\sigma^2}}, \\sqrt{\\frac{2x}{\\sigma^2}} \\right).\n\\end{align}\n\n\n\n\\subsection{Analytical Results on Rate Adaptation Using the Semi-Linear Approximation of the First-order Marcum Q-Function}\\label{In Section III.C,}\nWe assume that $d_\\text{a}$, $\\delta $ and $\\hat{g}$ are known by the BS. It can be seen from (\\ref{eq_pdf}) that $f_{g|\\hat{g}}(x)$ is a function of $v$. For a given $v$, the distribution of $g$ is known by the BS, and a rate adaptation scheme can be performed to improve the system performance.\n\nFor a given instantaneous value of $\\hat g$, the data is transmitted with instantaneous rate $R_{|\\hat{g}}$ nats-per-channel-use (npcu). If the instantaneous channel gain realization supports the transmitted data rate $R_{|\\hat{g}}$, i.e., $\\log(1+gP)\\ge R_{|\\hat{g}}$, the data can be successfully decoded. Otherwise, outage occurs. Hence, the outage probability in each time slot is\n\\begin{align}\n \\Pr(\\text{outage}|\\hat{g}) = F_{g|\\hat{g}}\\left(\\frac{e^{R_{|\\hat{g}}}-1}{P}\\right).\n\\end{align}\nAlso, the instantaneous throughput for a given $\\hat{g}$ is\n\\begin{align}\\label{eq_opteta}\n\\eta_{|\\hat {g}}\\left(R_{|\\hat{g}}\\right)=R_{|\\hat{g}}\\left(1-\\Pr\\left(\\log(1+gP)<R_{|\\hat{g}}\\right)\\right) = R_{|\\hat{g}}Q_1\\left(\\sqrt{\\frac{2\\hat{g}}{\\sigma^2}},\\sqrt{\\frac{2\\left(e^{R_{|\\hat{g}}}-1\\right)}{P\\sigma^2}}\\right),\n\\end{align}\nand the throughput-optimized rate allocation is given by Lemma \\ref{Lemma4}.\n\n\\begin{lem}\\label{Lemma4}\nWith the semi-linear approximation of Lemma \\ref{Lemma1}, the throughput-optimized rate allocation for a given $\\hat{g}$ is approximately given by (\\ref{eq_appRF}).\n\\end{lem}\n\\begin{proof}\nSetting $\\alpha = \\sqrt{\\frac{2\\hat{g}}{\\sigma^2}}$ and $\\beta = \\sqrt{\\frac{2\\left(e^{R_{|\\hat{g}}}-1\\right)}{P\\sigma^2}}$, and using Lemma \\ref{Lemma1}, the outage probability (\\ref{eq_cdf}) is approximated as\n\\begin{align}\\label{eq_optR}\nF_{g|\\hat{g}}\\left(\\frac{e^{R_{|\\hat{g}}}-1}{P}\\right)\\simeq\n\\begin{cases}\n0, ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\mathrm{if}~ \\beta < c_1(\\alpha) \\\\\no_1(\\alpha)\\left(\\beta-o_2(\\alpha)\\right)+o_3(\\alpha), \\\\\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\mathrm{if}~ c_1(\\alpha) \\leq \\beta \\leq c_2(\\alpha) \\\\\n1, ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\mathrm{if}~ \\beta > c_2(\\alpha).\n\\end{cases}\n\\end{align}\n\n\nHere, $o_i,i=1,2,3$, are given by (\\ref{eq_lema1}), (\\ref{eq_coro1}), or (\\ref{eq_coro2}), depending on whether we use Lemma \\ref{Lemma1} or Corollaries \\ref{coro1}-\\ref{coro2}. In this way, (\\ref{eq_opteta}) is approximated as\n\\begin{align}\\label{eq_appR}\n \\eta_{|\\hat {g}}\\simeq R_{|\\hat{g}}\\left(1-o_1(\\alpha)\\beta + o_1(\\alpha)o_2(\\alpha) - o_3(\\alpha)\\right),\n\\end{align}\nwhere $\\alpha = \\sqrt{\\frac{2\\hat{g}}{\\sigma^2}}$. To simplify the equation, we omit $\\alpha$ in the following since it is a constant for given $\\hat{g}$, $\\sigma$.
Then, setting the derivative of (\\ref{eq_appR}) with respect to $R_{|\\hat{g}}$ equal to zero, we obtain\n\\begin{align}\\label{eq_appRF}\n & R_{|\\hat{g}}^{\\text{opt}} \\nonumber\\\\\n & = \\operatorname*{arg}_{R_{|\\hat{g}}\\geq 0}\\left\\{ 1+o_1o_2-o_3-o_1\\left(\\frac{(R_{|\\hat{g}}+2)e^{R_{|\\hat{g}}}-2}{\\sqrt{2P\\sigma^2\\left(e^{R_{|\\hat{g}}}-1\\right)}}\\right)=0\\right\\}\\nonumber\\\\\n \n & \\overset{(b)}{\\simeq} \\operatorname*{arg}_{R_{|\\hat{g}}\\geq 0}\\left\\{ \\left(\\frac{R_{|\\hat{g}}}{2}+1\\right)e^{\\frac{R_{|\\hat{g}}}{2}+1} = \\frac{(1+o_1o_2-o_3)e\\sqrt{2P\\sigma^2}}{2o_1}\\right\\}\\nonumber\\\\\n & \\overset{(c)}{=} 2\\mathcal{W}\\left(\\frac{(1+o_1o_2-o_3)e\\sqrt{2P\\sigma^2}}{2o_1}\\right)-2.\n\\end{align}\nHere, $(b)$ comes from $e^{R_{|\\hat{g}}}-1 \\simeq e^{R_{|\\hat{g}}} $ and $(R_{|\\hat{g}}+2)e^{R_{|\\hat{g}}}-2 \\simeq (R_{|\\hat{g}}+2)e^{R_{|\\hat{g}}} $ which are appropriate at moderate\/high values of $R_{|\\hat{g}}$. Also, $(c)$ is obtained by the definition of the Lambert $\\mathcal{W}$-function $xe^x = y \\Leftrightarrow x = \\mathcal{W}(y)$ \\cite{corless1996lambertw}. \n\n\\end{proof}\n\nFinally, the expected throughput, averaged over multiple time slots, is obtained by $\\eta = \\mathbb{E}\\left\\{\\eta_{|\\hat {g}}\\left(R_{|\\hat{g}}^{\\text{opt}}\\right)\\right\\}$ with the expectation taken over $\\hat g$. \n\nUsing (\\ref{eq_appRF}) and the approximation \\cite[Thm. 2.1]{hoorfar2007approximation}\n\\begin{align}\n \\mathcal{W}(x) \\simeq \\log(x)-\\log\\log(x), x\\geq0,\n\\end{align}\nwe obtain\n\\begin{align}\n R_{|\\hat{g}}^{\\text{opt}}\\simeq 2\\log\\left(\\frac{(1+o_1o_2-o_3)e\\sqrt{2P\\sigma^2}}{2o_1}\\right)-\\nonumber\\\\\n 2\\log\\log\\left(\\frac{(1+o_1o_2-o_3)e\\sqrt{2P\\sigma^2}}{2o_1}\\right)-2,\n\\end{align}\nwhich implies that, as the transmit power increases, the optimal instantaneous rate grows (approximately) logarithmically with the square root of the transmit power.\n\n\\subsection{On the Effect of Imperfect Channel Estimation}\nIn Section \\ref{In Section III.C,}, we assumed perfect channel estimation at the BS. Deviations in the channel estimation, due to, e.g., radio-frequency mismatch, could invalidate the assumption of perfect channel estimation, and should be considered in the system design. Here, we follow a similar approach to that of, e.g., \\cite{Wang2007TWCperformance}, and add the effect of the estimation error of $\\hat{h}$ as an independent additive Gaussian variable whose variance is given by the accuracy of the channel estimation. \n\nLet us define $\\tilde h$ as the estimate of $\\hat h$ at the BS. Then, we further develop our channel model (\\ref{eq_H}) as\n\\begin{align}\\label{eq_Htp}\n \\tilde{h} = \\kappa \\hat{h} + \\sqrt{1-\\kappa^2} z, \n\\end{align}\nfor each time slot, where $z \\sim \\mathcal{CN}(0,1)$ is a Gaussian noise which is uncorrelated with $\\hat{h}$. Also, $\\kappa$ is a known correlation factor which represents the estimation error of $\\hat{h}$ by $\\kappa = \\frac{\\mathbb{E}\\{\\tilde{h}\\hat{h}^*\\}}{\\mathbb{E}\\{|\\hat{h}|^2\\}}$. Substituting (\\ref{eq_Htp}) into (\\ref{eq_H}), we have\n\\begin{align}\\label{eq_Ht}\n h = \\kappa\\sqrt{1-\\sigma^2}\\hat{h}+\\kappa\\sigma q+\\sqrt{1-\\kappa^2}z.\n\\end{align}\nThen, because $\\kappa\\sigma q + \\sqrt{1-\\kappa^2}z$ is equivalent to a new Gaussian variable $w \\sim\\mathcal{CN}\\left(0,(\\kappa\\sigma)^2+1-\\kappa^2\\right)$, we can follow the same procedure as in (\\ref{eq_optR})-(\\ref{eq_appRF}) to analyze the system performance with imperfect channel estimation of the PA (see Figs. \\ref{fig_Figure5}-\\ref{fig_Figure6} for more discussions).
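\n\nAs a numerical sanity check of (\\ref{eq_appRF}), the following Python sketch (our own illustration with freely chosen example parameters; the Marcum $Q$-function is evaluated through the survival function of a non-central Chi-squared variable) computes the closed-form rate and compares it with a grid search over (\\ref{eq_opteta}):\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.special import i0e, lambertw\nfrom scipy.stats import ncx2\n\ndef Q1(a, b):\n    # Q_1(a, b) = Pr(X > b^2) for X ~ non-central chi-square\n    # with 2 degrees of freedom and non-centrality a^2\n    return ncx2.sf(b**2, 2, a**2)\n\nP, sigma2, g_hat = 100.0, 0.25, 1.0   # example power, sigma^2, and g_hat\nalpha = np.sqrt(2 * g_hat \/ sigma2)\n\nx0 = (alpha + np.sqrt(alpha**2 + 2)) \/ 2              # point C of Lemma 1\no1 = x0 * np.exp(-(alpha - x0)**2 \/ 2) * i0e(alpha * x0)  # slope m\no2, o3 = x0, 1 - Q1(alpha, x0)\n\n# Closed-form rate, eq. (eq_appRF)\nK = (1 + o1 * o2 - o3) * np.e * np.sqrt(2 * P * sigma2) \/ (2 * o1)\nR_opt = 2 * np.real(lambertw(K)) - 2\n\n# Grid search over the exact throughput (eq_opteta) for comparison\nR = np.linspace(0.01, 10, 2000)\neta = R * Q1(alpha, np.sqrt(2 * (np.exp(R) - 1) \/ (P * sigma2)))\nprint(R_opt, R[np.argmax(eta)])\n\\end{verbatim}\nNote that $x_0 e^{-\\frac{1}{2}(\\alpha^2+x_0^2)}I_0(\\alpha x_0) = x_0 e^{-\\frac{1}{2}(\\alpha-x_0)^2}e^{-\\alpha x_0}I_0(\\alpha x_0)$, which is why the exponentially scaled Bessel function \\texttt{i0e} can be used in the sketch to avoid overflow at large $\\alpha$.\n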
\\subsection{Simulation Results}\nIn this part, we study the performance of the PA system and verify the tightness of the approximation scheme of Lemma \\ref{Lemma4}. Particularly, we present the average throughput and the outage probability of the PA setup for different vehicle speeds\/channel estimation errors. As an ultimate upper bound for the proposed rate adaptation scheme, we consider a genie-aided setup where we assume that the BS has perfect CSIT of the BS-RA link without uncertainty\/outage probability. Then, as a lower bound of the system performance, we consider the cases with no CSIT\/rate adaptation as shown in Fig. \\ref{fig_Figure5}. Here, the simulation results for the cases of no adaptation are obtained with only one antenna and no CSIT. In this case, the data is sent with a fixed rate $R$ and it is decoded if $R<\\log(1+gP)$, i.e., $g>\\frac{e^{R}-1}{P}$. In this way, assuming Rayleigh fading, the average rate is given by\n\\begin{align}\\label{eq_bench}\n R^{\\text{No-adaptation}} = \\int_{\\frac{e^{R}-1}{P}}^{\\infty} Re^{-x} \\text{d}x = Re^{-\\frac{e^{R}-1}{P}},\n\\end{align}\nand the optimal rate allocation is found by setting the derivative of (\\ref{eq_bench}) with respect to $R$ equal to zero, leading to $\\tilde{R} = \\mathcal{W}(P)$. The throughput is then calculated as \n\\begin{align}\\label{eq_etanocsi}\n \\eta^{\\text{No-adaptation}} =\\mathcal{W}(P)e^{-\\frac{e^{\\mathcal{W}(P)}-1}{P}}. \n\\end{align}\nAlso, in the simulations, we set $f_\\text{c}$ = 2.68 GHz and $d_\\text{a} = 1.5\\lambda$. Finally, each point in the figures is obtained by averaging the system performance over $1\\times10^5$ channel realizations.\n\n\n\n\nIn Fig. \\ref{fig_Figure5}, we show the expected throughput $\\eta$ in different cases for a broad range of signal-to-noise ratios (SNRs). Here, because the noise has unit variance, we define the SNR as $10\\log_{10}P$. Also, we set $v = 114$ km\/h in Fig. \\ref{fig_Figure5}, with $v$ as defined in (\\ref{eq_d}), and $\\kappa = 1$, as discussed in (\\ref{eq_Ht}). The analytical results obtained by Lemma \\ref{Lemma4} and Corollary \\ref{coro2}, i.e., the approximation of (\\ref{eq_optR}), are also presented. We have also checked the approximation result of Lemma \\ref{Lemma4} while using Lemma \\ref{Lemma1}\/Corollary \\ref{coro1}. Then, because the results are similar to those presented in Fig. \\ref{fig_Figure5}, they are not included in the figure. Moreover, the figure shows the results of (\\ref{eq_etanocsi}) with no CSIT\/rate adaptation as a benchmark. Finally, Fig. \\ref{fig_Figure6} studies the expected throughput $\\eta$ for different values of the estimation error parameter $\\kappa$ with SNR = 10, 19, 25 dB, in the case of partial CSIT. Also, the figure evaluates the tightness of the approximation results obtained by Lemma \\ref{Lemma4}. Here, we set $v =$ 114.5 km\/h and $\\delta = 5$ ms.\n\n\\begin{figure}\n\\centering\n \\includegraphics[width=0.7\\columnwidth]{Figure5.pdf}\\\\\n\\caption{Expected throughput $\\eta$ in different cases, $v$ = 114 km\/h, $\\kappa$ = 1, and $\\delta = $ 5 ms. Both the exact values estimated from simulations and the analytical approximations from Lemma \\ref{Lemma4} are presented.}\\label{fig_Figure5}\n\\end{figure}\n\n\n\nSetting SNR = 23 dB and $v = $ 120, 150 km\/h in Fig. \\ref{fig_Figure7}, we study the effect of the processing delay $\\delta$ on the throughput. Finally, the outage probability is evaluated in Fig. \\ref{fig_Figure8}, where the results are presented for different speeds with SNR = 10 dB, in the case of partial CSIT. Also, we present the outage probability for $\\delta = 5.35$ ms and $\\delta = 4.68$ ms in Fig. \\ref{fig_Figure8}.
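\n\nThe no-adaptation benchmark is easy to reproduce numerically; the following minimal sketch (our own illustration) verifies $\\tilde{R} = \\mathcal{W}(P)$ and (\\ref{eq_etanocsi}) against a grid search over (\\ref{eq_bench}):\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.special import lambertw\n\nP = 10**(10 \/ 10)                  # SNR = 10 dB with unit-variance noise\nR_tilde = np.real(lambertw(P))     # optimal fixed rate, W(P)\neta_cf = R_tilde * np.exp(-(np.exp(R_tilde) - 1) \/ P)  # eq. (eq_etanocsi)\n\nR = np.linspace(0.01, 10, 10000)   # grid search over eq. (eq_bench)\neta = R * np.exp(-(np.exp(R) - 1) \/ P)\nprint(eta_cf, eta.max())           # the two values agree closely\n\\end{verbatim}\n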
From the figures, we can conclude the following points:\n\\begin{itemize}\n \\item The approximation scheme of Lemma \\ref{Lemma4} is tight for a broad range of parameter settings (Figs. \\ref{fig_Figure5}, \\ref{fig_Figure6}). Thus, the throughput-optimized rate allocation can be well approximated by (\\ref{eq_appRF}), and the semi-linear approximation of Lemma \\ref{Lemma1}\/Corollaries \\ref{coro1}-\\ref{coro2} is a good approach to study the considered optimization problem.\n \n\\begin{figure}\n\\centering\n \\includegraphics[width=0.7\\columnwidth]{Figure6.pdf}\\\\\n\n\\caption{Expected throughput $\\eta$ for different estimation errors $\\kappa$ with SNR = 10, 19, 25 dB, in the case of partial CSIT, exact and approximation, $v = $ 114.5 km\/h, and $\\delta = 5$ ms. Both the exact values estimated from simulations and the analytical approximations from Lemma \\ref{Lemma4} are presented.}\\label{fig_Figure6}\n\\end{figure}\n \n \\item With the deployment of the PA, a remarkable throughput gain is achieved, especially at moderate\/high SNRs (Fig. \\ref{fig_Figure5}). Also, the throughput decreases when the estimation error is considered, i.e., $\\kappa$ decreases. Finally, as can be seen in Figs. \\ref{fig_Figure5}, \\ref{fig_Figure6}, with rate adaptation, and without optimizing the processing delay\/vehicle speed, the effect of the estimation error on the expected throughput is small, except for small values of $\\kappa$.\n \n\\begin{figure}\n\\centering\n \\includegraphics[width=0.7\\columnwidth]{Figure7.pdf}\\\\\n\\caption{Expected throughput $\\eta$ for different processing delays with SNR = 23 dB and $v$ = 120, 150 km\/h in the case of partial adaptation. }\\label{fig_Figure7}\n\n\\end{figure}\n\n\\begin{figure}\n\\centering\n \\includegraphics[width=0.7\\columnwidth]{Figure8.pdf}\\\\\n\n\\caption{Outage probability for different velocities with SNR = 10 dB, in the case of partial CSIT.}\\label{fig_Figure8}\n\n\\end{figure}\n\n \n \\item As can be seen in Figs. \\ref{fig_Figure7} and \\ref{fig_Figure8}, for different channel estimation errors, there are optimal values of the vehicle speed and the BS processing delay optimizing the system throughput and outage probability. Note that the presence of the optimal speed\/processing delay can be proved via (\\ref{eq_d}) as well. Finally, the optimal value of the vehicle speed, in terms of throughput\/outage probability, decreases with the processing delay. However, the optimal vehicle speed\/processing delay, in terms of throughput\/outage probability, is almost insensitive to the channel estimation error.\n \n\n\n\n\n \\item With perfect channel estimation, the throughput\/outage probability is sensitive to speed variations when we move away from the optimal speed (Figs. \\ref{fig_Figure7} and \\ref{fig_Figure8}). However, the sensitivity to the speed\/processing delay variation decreases as the channel estimation error increases, i.e., $\\kappa$ decreases (Figs. \\ref{fig_Figure7} and \\ref{fig_Figure8}). Finally, considering Figs. \\ref{fig_Figure7} and \\ref{fig_Figure8}, it is expected that adapting the processing delay, as a function of the vehicle speed, and implementing hybrid automatic repeat request protocols can improve the performance of the PA system.
These points will be studied in future works.\n\n\\end{itemize}\n\n\n\n\n\n\n\n\n\n\\section{Conclusion}\nWe derived a simple semi-linear approximation method for the first-order Marcum $Q$-function, as one of the functions of interest in different problem formulations of wireless networks. As we showed through various analyses, while the proposed approximation is not tight at the tails of the function, it is useful in different optimization- and expectation-based problem formulations. Particularly, as an application of interest, we used the proposed approximation to analyze the performance of PA setups using rate adaptation. As we showed, with different levels of channel estimation error\/processing delay, adaptive rate allocation can effectively compensate for the spatial mismatch problem, and improve the throughput\/outage probability of PA networks. It is expected that increasing the number of RA antennas will improve the performance of the PA system considerably. \n\n\n\n\n\n\n\n\\section{Introduction}\nThe first-order Marcum $Q$-function\\footnote{To simplify the analysis, our paper concentrates on the approximation of the first-order Marcum $Q$-function. However, our approximation technique can be easily extended to the cases with different orders of the Marcum $Q$-function.} is defined as \\cite[Eq. (1)]{Bocus2013CLapproximation}\n\\begin{align}\n Q_1(\\alpha,\\beta) = \\int_{\\beta}^{\\infty} xe^{-\\frac{x^2+\\alpha^2}{2}}I_0(x\\alpha)\\text{d}x,\n\\end{align}\nwhere $\\alpha, \\beta \\geq 0$, $I_n(x) = (\\frac{x}{2})^n \\sum_{i=0}^{\\infty}\\frac{(\\frac{x}{2})^{2i} }{i!\\Gamma(n+i+1)}$ is the $n$-th order modified Bessel function of the first kind, and $\\Gamma(z) = \\int_0^{\\infty} x^{z-1}e^{-x} \\mathrm{d}x$ represents the Gamma function. Reviewing the literature, the Marcum $Q$-function has appeared in many areas such as statistics\/signal detection \\cite{helstrom1994elements}, and in the performance analysis of different setups such as temporally correlated channels \\cite{Makki2013TCfeedback}, spatially correlated channels \\cite{Makki2011Eurasipcapacity}, free-space optical (FSO) links \\cite{Makki2018WCLwireless}, relay networks \\cite{Makki2016TVTperformance}, as well as cognitive radio and radar systems \\cite{Simon2003TWCsome,Suraweera2010TVTcapacity,Kang2003JSAClargest,Chen2004TCdistribution,Ma2000JSACunified,Zhang2002TCgeneral,Ghasemi2008ICMspectrum,Digham2007TCenergy, simon2002bookdigital,Cao2016CLsolutions,sofotasios2015solutions,Cui2012ELtwo,Azari2018TCultra,Alam2014INFOCOMWrobust,Gao2018IAadmm,Shen2018TVToutage,Song2017JLTimpact,Tang2019IAan}. However, in these applications, the presence of the Marcum $Q$-function makes the mathematical analysis challenging, because it is difficult to manipulate, having no closed-form expression, especially when it appears in parameter optimizations and integral calculations. For this reason, several methods have been developed in \\cite{Bocus2013CLapproximation,Fu2011GLOBECOMexponential,zhao2008ELtight,Simon2000TCexponential,annamalai2001WCMCcauchy,Sofotasios2010ISWCSnovel,Li2010TCnew,andras2011Mathematicageneralized,Gaur2003TVTsome,Kam2008TCcomputing,Corazza2002TITnew,Baricz2009TITnew,chiani1999ELintegral} to bound\/approximate the Marcum $Q$-function.
For example, \\cite{Fu2011GLOBECOMexponential,zhao2008ELtight} have proposed modified forms of the function, while \\cite{Simon2000TCexponential,annamalai2001WCMCcauchy} have derived exponential-type bounds which are suitable for the bit error rate analysis at high signal-to-noise ratios (SNRs). Other types of bounds are expressed in terms of, e.g., the error function \\cite{Kam2008TCcomputing} and Bessel functions \\cite{Corazza2002TITnew,Baricz2009TITnew,chiani1999ELintegral}. Some alternative methods have also been proposed in \\cite{Sofotasios2010ISWCSnovel,Li2010TCnew,andras2011Mathematicageneralized,Bocus2013CLapproximation,Gaur2003TVTsome}. Although each of these approximation\/bounding techniques is fairly tight for its considered problem formulation, they are still based on difficult functions, or have complicated summation\/integration formulations, which may not be easy to deal with in, e.g., integral calculations and parameter optimizations. \n\n\nIn this paper, we first propose a simple semi-linear approximation of the first-order Marcum $Q$-function (Lemma \\ref{Lemma1}, Corollaries \\ref{coro1}-\\ref{coro3}). As we show through various examples, the developed linearization technique is tight for a broad range of parameter settings. More importantly, the proposed approach simplifies various integral calculations and derivations and, consequently, allows us to express different expectation- and optimization-based operations in closed form (Lemmas \\ref{Lemma2}-\\ref{Lemma4}). \n\nTo demonstrate the usefulness of the proposed approximation technique in communication systems, we analyze the performance of predictor antenna (PA) systems, which are of our current interest. Here, the PA system refers to a setup with two (sets of) antennas on the roof of a vehicle. The PA positioned at the front of the vehicle can be used to improve the channel state estimation for downlink data reception at the receive antenna (RA) on the vehicle that is aligned behind the PA \\cite{Sternad2012WCNCWusing,DT2015ITSMmaking,BJ2017PIMRCpredictor,phan2018WSAadaptive,Jamaly2014EuCAPanalysis, BJ2017ICCWusing,Apelfrojd2018PIMRCkalman,Jamaly2019IETeffects,Guo2019WCLrate}. The feasibility of such setups, which are of particular interest in public transport systems such as trains and buses, but potentially also for the more design-constrained cars, has been previously shown through experimental tests \\cite{Sternad2012WCNCWusing,DT2015ITSMmaking,BJ2017PIMRCpredictor,phan2018WSAadaptive}, and different works have analyzed their system performance from different perspectives \\cite{Jamaly2014EuCAPanalysis, BJ2017ICCWusing,Apelfrojd2018PIMRCkalman,Jamaly2019IETeffects,Guo2019WCLrate}. \n\nAmong the challenges of the PA system is the spatial mismatch. If the RA does not arrive at the same position as the PA, the actual channel for the RA will not be identical to the one experienced by the PA earlier. Such inaccurate channel state information (CSI) estimation will affect the system performance considerably at moderate\/high speeds \\cite{DT2015ITSMmaking,Guo2019WCLrate}. In this paper, we address this problem by implementing adaptive rate allocation. In our proposed setup, the instantaneous CSI provided by the PA is used to adapt the data rate of the signals sent to the RA from the base station (BS). The problem is cast in the form of throughput maximization.
Particularly, we use our developed approximation approach to derive closed-form expressions for the instantaneous and average throughput as well as the optimal rate allocation maximizing the throughput (Lemma \\ref{Lemma4}). Moreover, we study the effect of different parameters such as the antenna separation, the vehicle speed, and the processing delay of the BS on the performance of PA setups. \n\n\nOur paper is different from the state-of-the-art literature because the proposed semi-linear approximation of the first-order Marcum $Q$-function and the derived closed-form expressions for the considered integrals have not been presented in, e.g., \\cite{Bocus2013CLapproximation,helstrom1994elements,Makki2013TCfeedback,Makki2011Eurasipcapacity,Makki2018WCLwireless,Makki2016TVTperformance,Simon2003TWCsome,Suraweera2010TVTcapacity,Kang2003JSAClargest,Chen2004TCdistribution,Ma2000JSACunified,Zhang2002TCgeneral,Ghasemi2008ICMspectrum,Digham2007TCenergy, simon2002bookdigital,Fu2011GLOBECOMexponential,zhao2008ELtight,Simon2000TCexponential,Cao2016CLsolutions,sofotasios2015solutions,Cui2012ELtwo,Azari2018TCultra,Alam2014INFOCOMWrobust,Gao2018IAadmm,Shen2018TVToutage,Song2017JLTimpact,Tang2019IAan,annamalai2001WCMCcauchy,Sofotasios2010ISWCSnovel,Li2010TCnew,andras2011Mathematicageneralized,Gaur2003TVTsome,Kam2008TCcomputing,Corazza2002TITnew,Baricz2009TITnew,chiani1999ELintegral,Sternad2012WCNCWusing,DT2015ITSMmaking,BJ2017PIMRCpredictor,phan2018WSAadaptive,Jamaly2014EuCAPanalysis, BJ2017ICCWusing,Apelfrojd2018PIMRCkalman,Jamaly2019IETeffects,Guo2019WCLrate}. Also, as opposed to \\cite{Sternad2012WCNCWusing,DT2015ITSMmaking,BJ2017PIMRCpredictor,phan2018WSAadaptive,Jamaly2014EuCAPanalysis, BJ2017ICCWusing,Apelfrojd2018PIMRCkalman,Jamaly2019IETeffects}, we perform analytical evaluations on the system performance with CSIT (T: at the transmitter)-based rate optimization to mitigate the effect of the spatial mismatch. Moreover, compared to our preliminary results in \\cite{Guo2019WCLrate}, this paper develops the semi-linear approximation method for the Marcum $Q$-function and uses our proposed approximation method to analyze the performance of the PA system. Also, we perform an in-depth analysis of the effects of various parameters, such as imperfect CSIT feedback schemes and the processing delay of the BS, on the system performance. \n\n\n\nThe simulation and the analytical results indicate that the proposed semi-linear approximation is useful for the mathematical analysis of different Marcum $Q$-function-based problem formulations. Particularly, our approximation method enables us to represent different Marcum $Q$-function-based integrations and optimizations in closed form. Considering the PA system, our derived analytical results show that adaptive rate allocation can considerably improve the performance of the PA system in the presence of spatial mismatch. Finally, with different levels of channel estimation, our results show that there exists an optimal speed for the vehicle optimizing the throughput\/outage probability, and the system performance is sensitive to the vehicle speed\/processing delay as the speed moves away from its optimal value. \n\n\nThis paper is organized as follows. In Section II, we present our proposed semi-linear approximation of the first-order Marcum $Q$-function, and derive closed-form solutions for some integrals of interest.
Section III deals with the application of the approximation in the PA system, deriving closed-form expressions for the optimal rate adaptation, the instantaneous throughput, as well as the expected throughput. In this way, Sections II and III demonstrate examples of how the proposed approximation can be useful in, respectively, expectation- and optimization-based problem formulations involving the Marcum $Q$-function. Concluding remarks are provided in Section IV. \n\n\n\n\n\n\\section{Approximation of the first-order Marcum Q-function}\nIn this section, we present our semi-linear approximation of the cumulative distribution function (CDF) in the form of $y(\\alpha, \\beta ) = 1-Q_1(\\alpha,\\beta)$. The idea of the proposed approximation is to use one point of the CDF and its slope at that point to create a line approximating the CDF. The approximation method is summarized in Lemma \\ref{Lemma1} as follows.\n\\begin{lem}\\label{Lemma1}\n The CDF of the form $y(\\alpha, \\beta ) = 1-Q_1(\\alpha,\\beta)$ can be semi-linearly approximated as $y(\\alpha,\\beta)\\simeq \\mathcal{Z}(\\alpha, \\beta)$ where\n\\begin{align}\\label{eq_lema1}\n\\mathcal{Z}(\\alpha, \\beta)=\n\\begin{cases}\n0, ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\mathrm{if}~ \\beta < c_1 \\\\ \n \\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2} e^{-\\frac{1}{2}\\left(\\alpha^2+\\left(\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}\\right)^2\\right)}\\times\\\\\n ~~~I_0\\left(\\alpha\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}\\right)\\times\\left(\\beta-\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}\\right)+\\\\\n ~~~1-Q_1\\left(\\alpha,\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}\\right), ~~~~~~\\mathrm{if}~ c_1 \\leq\\beta\\leq c_2 \\\\\n1, ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\mathrm{if}~ \\beta> c_2,\n\\end{cases}\n\\end{align}\nwith\n\\begin{align}\\label{eq_c1}\n & c_1(\\alpha) =~~~ \\max\\Bigg(0,\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}+\\nonumber\\\\\n &~~~\\frac{Q_1\\left(\\alpha,\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}\\right)-1}{\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2} e^{-\\frac{1}{2}\\left(\\alpha^2+\\left(\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}\\right)^2\\right)}I_0\\left(\\alpha\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}\\right)}\\Bigg),\n\\end{align}\n\\begin{align}\\label{eq_c2}\n & c_2(\\alpha) = \\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}+\\nonumber\\\\\n & ~~~\\frac{Q_1\\left(\\alpha,\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}\\right)}{\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2} e^{-\\frac{1}{2}\\left(\\alpha^2+\\left(\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}\\right)^2\\right)}I_0\\left(\\alpha\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}\\right)}.\n\\end{align}\n\\end{lem}\n\\begin{proof}\n\nWe aim to approximate the CDF in the range $y \\in [0, 1]$ by \n\\begin{align}\\label{eq_YY}\n y-y_0 = m(x-x_0),\n\\end{align}\nwhere $\\mathcal{C} = (x_0,y_0)$ is a point on the CDF curve and $m$ is the slope of $y(\\alpha,\\beta)$ at point $\\mathcal{C}$. Then, the parts of the line outside this region are replaced by $y=0$ and $y=1$ (see Fig. \\ref{fig_largealpha}).\n\nTo obtain a good approximation of the CDF, we select the point $\\mathcal{C}$ by solving\n\n\\begin{align}\\label{eq_partialsquare}\n x = \\mathop{\\arg}_{x} \\left\\{ \\frac{\\partial^2\\left(1-Q_1(\\alpha,x)\\right)}{\\partial x^2} = 0\\right\\}.\n\\end{align}\nUsing the derivative of the first-order Marcum $Q$-function with respect to $x$ \\cite[Eq. (2)]{Pratt1968PIpartial}
\\begin{align}\\label{eq_derivativeMarcumQ}\n \\frac{\\partial Q_1(\\alpha,x)}{\\partial x} = -x e^{-\\frac{\\alpha^2+x^2}{2}}I_0(\\alpha x),\n\\end{align}\n(\\ref{eq_partialsquare}) is equivalent to \n\\begin{align}\n x = \\mathop{\\arg}_{x} \\left\\{\\frac{\\partial\\left(x e^{-\\frac{\\alpha^2+x^2}{2}}I_0(\\alpha x)\\right)}{\\partial x}=0\\right\\}.\n\\end{align}\nUsing the approximation $I_0(x) \\simeq \\frac{e^x}{\\sqrt{2\\pi x}} $ \\cite[Eq. (9.7.1)]{abramowitz1999ia} for moderate\/large values of $x$ and writing\n\\begin{align}\n &~~~\\frac{\\partial\\left(\\sqrt{\\frac{x}{2\\pi \\alpha}}e^{-\\frac{(x-\\alpha)^2}{2}}\\right)}{\\partial x} = 0\\nonumber\\\\\n \\Rightarrow &\\frac{1}{\\sqrt{2\\pi\\alpha}}\\left(\\frac{e^{-\\frac{(x-\\alpha)^2}{2}}}{2\\sqrt{x}}+\\sqrt{x}e^{-\\frac{(x-\\alpha)^2}{2}}(\\alpha-x)\\right) = 0\\nonumber\\\\\n \\Rightarrow &2x^2-2\\alpha x-1 =0,\n\\end{align}\nwe obtain \n\\begin{align}\\label{eq_beta0}\n x = \\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2},\n\\end{align}\nsince $x\\geq0$. In this way, we find the point \n\\begin{align}\n \\mathcal{C}=\\left(\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}, 1-Q_1\\left(\\alpha,\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}\\right)\\right).\n\\end{align}\n\nTo calculate the slope $m$ at the point $\\mathcal{C}$, we plug (\\ref{eq_beta0}) into (\\ref{eq_derivativeMarcumQ}), leading to\n\\begin{align}\\label{eq_m}\n & m = \\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}\\times\\nonumber\\\\\n & ~~~e^{-\\frac{1}{2}\\left(\\alpha^2+\\left(\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}\\right)^2\\right)}I_0\\left(\\alpha\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}\\right).\n\\end{align}\nFinally, using (\\ref{eq_YY}), (\\ref{eq_beta0}) and (\\ref{eq_m}), the CDF $y(\\alpha, \\beta) = 1-Q_1(\\alpha,\\beta)$ can be approximated as in (\\ref{eq_lema1}). Note that, because the CDF is limited to the range $[0, 1]$, the boundaries $c_1$ and $c_2$ in (\\ref{eq_lema1}) are obtained by setting $y=0$ and $y=1$, which leads to the semi-linear approximation given in (\\ref{eq_lema1}).\n\\end{proof}\n\n\nTo further simplify the calculation, considering different ranges of $\\alpha$, the approximation (\\ref{eq_lema1}) can be simplified as stated in the following corollaries.
\n\n\\begin{corollary}\\label{coro1}\nFor moderate\/large values of $\\alpha$, we have $y(\\alpha,\\beta)\\simeq\\tilde{\\mathcal{Z}}(\\alpha,\\beta)$ where\n\\begin{align}\\label{eq_coro1}\n\\tilde{\\mathcal{Z}}(\\alpha,\\beta)\\simeq\n\\begin{cases}\n0, ~~~~~~~~~~~~~\\mathrm{if}~\\beta < \\frac{-\\frac{1}{2}\\left(1-e^{-\\alpha^2}I_0(\\alpha^2)\\right)}{\\alpha e^{-\\alpha^2}I_0(\\alpha^2)}+\\alpha \\\\ \n\\alpha e^{-\\alpha^2}I_0(\\alpha^2)(\\beta-\\alpha) + \n\\frac{1}{2}\\left(1-e^{-\\alpha^2}I_0(\\alpha^2)\\right), \\\\ ~~~~~~~~~~~~~~~~\\mathrm{if}~\\frac{-\\frac{1}{2}\\left(1-e^{-\\alpha^2}I_0(\\alpha^2)\\right)}{\\alpha e^{-\\alpha^2}I_0(\\alpha^2)}+\\alpha \\leq\\beta\\\\\n~~~~~~~~~~~~~~~~~~~~\\leq\\frac{1-\\frac{1}{2}\\left(1-e^{-\\alpha^2}I_0(\\alpha^2)\\right)}{\\alpha e^{-\\alpha^2}I_0(\\alpha^2)}+\\alpha \\\\\n1, ~~~~~~~~~~~~~\\mathrm{if}~ \\beta> \\frac{1-\\frac{1}{2}\\left(1-e^{-\\alpha^2}I_0(\\alpha^2)\\right)}{\\alpha e^{-\\alpha^2}I_0(\\alpha^2)}+\\alpha.\n\\end{cases}\n\\end{align}\n\\end{corollary}\n\n\\begin{proof}\nUsing (\\ref{eq_beta0}) for moderate\/large values of $\\alpha$, we have $x \\simeq \\alpha$ and\n\\begin{align}\n \\tilde{c}_1 = \\frac{-\\frac{1}{2}\\left(1-e^{-\\alpha^2}I_0(\\alpha^2)\\right)}{\\alpha e^{-\\alpha^2}I_0(\\alpha^2)}+\\alpha,\n\\end{align}\n\\begin{align}\n \\tilde{c}_2 = \\frac{1-\\frac{1}{2}\\left(1-e^{-\\alpha^2}I_0(\\alpha^2)\\right)}{\\alpha e^{-\\alpha^2}I_0(\\alpha^2)}+\\alpha,\n\\end{align}\nwhich leads to (\\ref{eq_coro1}). Note that in (\\ref{eq_coro1}) we have used the fact that \\cite[Eq. (A-3-2)]{schwartz1995communication}\n\\begin{align}\n Q_1(\\alpha,\\alpha) = \\frac{1}{2}\\left(1+e^{-\\alpha^2}I_0(\\alpha^2)\\right).\n\\end{align}\n\\end{proof}\n\n\\begin{corollary}\\label{coro2}\nFor small values of $\\alpha$, we have $y(\\alpha,\\beta)\\simeq\\hat{ \\mathcal{Z}}(\\alpha,\\beta)$ with\n\\begin{align}\\label{eq_coro2}\n\\hat{ \\mathcal{Z}}(\\alpha,\\beta)\\simeq\n\\begin{cases}\n0, ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\mathrm{if}~\\beta < \\hat{c}_1 \\\\ \n\\frac{\\alpha+\\sqrt{2}}{2} e^{-\\frac{\\alpha^2+\\left(\\frac{\\alpha+\\sqrt{2}}{2}\\right)^2}{2}}\\times\\\\\n~~~I_0\\left(\\frac{\\alpha(\\alpha+\\sqrt{2})}{2}\\right)(\\beta-\\frac{\\alpha+\\sqrt{2}}{2}) + \\\\\n~~~1-Q_1\\left(\\alpha,\\frac{\\alpha+\\sqrt{2}}{2}\\right), \n~~~~~~~~\\mathrm{if}~\\hat{c}_1 \\leq\\beta\\leq \\hat{c}_2 \\\\\n1, ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\mathrm{if}~ \\beta> \\hat{c}_2\n\\end{cases}\n\\end{align}\nwith $\\hat{c}_1$ and $\\hat{c}_2$ given in (\\ref{eq_hatc1}) and (\\ref{eq_hatc2}), respectively.\n\\end{corollary}\n\\begin{proof}\nUsing (\\ref{eq_beta0}) for small values of $\\alpha$, we have $x\\simeq \\frac{\\alpha+\\sqrt{2}}{2}$, which leads to \n\\begin{equation}\\label{eq_hatc1}\n \\hat{c}_1 = \\frac{-1+Q_1\\left(\\alpha,\\frac{\\alpha+\\sqrt{2}}{2}\\right)}{\\frac{\\alpha+\\sqrt{2}}{2} e^{-\\frac{\\alpha^2+\\left(\\frac{\\alpha+\\sqrt{2}}{2}\\right)^2}{2}} I_0\\left(\\frac{\\alpha(\\alpha+\\sqrt{2})}{2}\\right)}+\\frac{\\alpha+\\sqrt{2}}{2},\n\\end{equation}\nand\n\\begin{equation}\\label{eq_hatc2}\n \\hat{c}_2 = \\frac{Q_1\\left(\\alpha,\\frac{\\alpha+\\sqrt{2}}{2}\\right)}{\\frac{\\alpha+\\sqrt{2}}{2} e^{-\\frac{\\alpha^2+\\left(\\frac{\\alpha+\\sqrt{2}}{2}\\right)^2}{2}} I_0\\left(\\frac{\\alpha(\\alpha+\\sqrt{2})}{2}\\right)}+\\frac{\\alpha+\\sqrt{2}}{2},\n\\end{equation}\nand simplifies (\\ref{eq_lema1}) to (\\ref{eq_coro2}).\n\\end{proof}\n\n\n\n\\begin{corollary}\\label{coro3}\nEquation (\\ref{eq_coro1}) can be further
simplified as\n\\begin{align}\\label{eq_coro3}\n\\breve{\\mathcal{Z}}(\\alpha,\\beta)\\simeq\n\\begin{cases}\n0, ~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\mathrm{if}~ \\beta < \\breve{c}_1 \\\\ \n\\frac{1}{\\sqrt{2\\pi}}(\\beta-\\alpha) + \n\\frac{1}{2}\\left(1-\\frac{1}{\\sqrt{2\\pi\\alpha^2}}\\right), \\\\~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\mathrm{if}~ \\breve{c}_1 \\leq\\beta\\leq \\breve{c}_2 \\\\\n1, ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\mathrm{if}~ \\beta>\\breve{c}_2\n\\end{cases}\n\\end{align}\nwith $\\breve{c}_1$ and $\\breve{c}_2$ given by (\\ref{eq_dotc1}) and (\\ref{eq_dotc2}), respectively.\n\\end{corollary}\n\\begin{proof}\nFor moderate\/large values of $\\alpha$, using the approximation $I_0(x) \\simeq \\frac{e^x}{\\sqrt{2\\pi x}}$ for (\\ref{eq_coro1}) leads to (\\ref{eq_coro3}) where\n\\begin{align}\\label{eq_dotc1}\n \\breve{c}_1 = -\\frac{\\sqrt{2\\pi}}{2}\\left(1-\\frac{1}{\\sqrt{2\\pi\\alpha^2}}\\right)+\\alpha,\n\\end{align}\nand\n\\begin{align}\\label{eq_dotc2}\n \\breve{c}_2 = \\sqrt{2\\pi}-\\frac{\\sqrt{2\\pi}}{2}\\left(1-\\frac{1}{\\sqrt{2\\pi\\alpha^2}}\\right)+\\alpha.\n\\end{align}\n\\end{proof}\n\n\n\n\n\n\n\n\n\n\n\\begin{figure}\n\\centering\n \n \\includegraphics[width=0.7\\columnwidth]{Figure1.pdf}\\\\\n\n\\caption{Illustration of the semi-linear approximation of Lemma \\ref{Lemma1} and Corollaries \\ref{coro1}-\\ref{coro3}. For each value of $\\alpha\\in[0.1,0.5,1]$, the approximated results obtained by Lemma \\ref{Lemma1} and Corollaries \\ref{coro1}-\\ref{coro3} are compared with the exact value for a broad range of $\\beta$.}\n\\label{fig_largealpha}\n\\end{figure}\n\n\n\n\nTo illustrate these semi-linear approximations, Fig. \\ref{fig_largealpha} shows the CDF $y(\\alpha,\\beta)= 1-Q_1(\\alpha,\\beta)$ for both small and large values of $\\alpha$, and compares the exact CDF with the approximation schemes of Lemma \\ref{Lemma1} and Corollaries \\ref{coro1}-\\ref{coro3}. From Fig. \\ref{fig_largealpha}, we can observe that Lemma \\ref{Lemma1} is tight for a broad range of $\\alpha$ and moderate values of $\\beta$. Moreover, the tightness is improved as $\\alpha$ decreases. Also, Corollaries \\ref{coro1} and \\ref{coro3}, and Corollary \\ref{coro2}, provide good approximations for large and small values of $\\alpha$, respectively. On the other hand, the proposed approximations are not tight at the tails of the CDF. However, as observed in \\cite{Bocus2013CLapproximation,helstrom1994elements,Makki2013TCfeedback,Makki2011Eurasipcapacity,Makki2018WCLwireless,Makki2016TVTperformance,Simon2003TWCsome,Suraweera2010TVTcapacity,Kang2003JSAClargest,Chen2004TCdistribution,Ma2000JSACunified,Zhang2002TCgeneral,Ghasemi2008ICMspectrum,Digham2007TCenergy, simon2002bookdigital,Fu2011GLOBECOMexponential,zhao2008ELtight,Simon2000TCexponential,Cao2016CLsolutions,sofotasios2015solutions,Cui2012ELtwo,annamalai2001WCMCcauchy,Sofotasios2010ISWCSnovel,Li2010TCnew,andras2011Mathematicageneralized,Gaur2003TVTsome,Kam2008TCcomputing,Corazza2002TITnew,Baricz2009TITnew,chiani1999ELintegral}, in different applications, the Marcum $Q$-function is typically combined with other functions which tend to zero at the tails of the CDF. In such cases, the inaccuracy of the approximation at the tails does not affect the tightness of the final analysis.
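\n\nFor reproducibility, the following short Python sketch (our own illustration with freely chosen names; the Marcum $Q$-function is evaluated through SciPy's non-central Chi-squared survival function) implements $\\mathcal{Z}(\\alpha,\\beta)$ of Lemma \\ref{Lemma1} and compares it with the exact CDF:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.special import i0\nfrom scipy.stats import ncx2\n\ndef Q1(a, b):\n    # Q_1(a, b) = Pr(X > b^2), X ~ non-central chi-square, df = 2, nc = a^2\n    return ncx2.sf(b**2, 2, a**2)\n\ndef Z(alpha, beta):\n    # Semi-linear approximation of 1 - Q_1(alpha, beta), eq. (eq_lema1)\n    x0 = (alpha + np.sqrt(alpha**2 + 2)) \/ 2                   # point C\n    m = x0 * np.exp(-(alpha**2 + x0**2) \/ 2) * i0(alpha * x0)  # slope\n    y0 = 1 - Q1(alpha, x0)\n    c1 = max(0.0, x0 - y0 \/ m)      # eq. (eq_c1)\n    c2 = x0 + (1 - y0) \/ m          # eq. (eq_c2)\n    if beta < c1:\n        return 0.0\n    if beta > c2:\n        return 1.0\n    return m * (beta - x0) + y0\n\nalpha = 1.0\nfor beta in [0.5, 1.0, 1.5, 2.0, 2.5]:\n    print(beta, 1 - Q1(alpha, beta), Z(alpha, beta))\n\\end{verbatim}\n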
\n\n\n\n\n\n\n\n\n\n\n\n\nAs an example, we first consider a general integral in the form of\n\\begin{align}\\label{eq_integral}\nG(\\alpha,\\rho)=\\int_\\rho^\\infty{e^{-nx} x^m \\left(1-Q_1(\\alpha,x)\\right)\\text{d}x} ~~\\forall n,m,\\alpha,\\rho>0.\n\\end{align}\nSuch an integral has been observed in various applications, e.g., \\cite[eq. (1)]{Simon2003TWCsome}, \\cite[eq. (2)]{Cao2016CLsolutions}, \\cite[eq. (1)]{sofotasios2015solutions}, \\cite[eq. (3)]{Cui2012ELtwo}, and \\cite[eq. (1)]{Gaur2003TVTsome}. However, depending on the values of $n, m$ and $\\rho$, (\\ref{eq_integral}) may have no closed-form expression. Using Lemma \\ref{Lemma1}, $G(\\alpha,\\rho)$ can be approximated in closed form as presented in Lemma \\ref{Lemma2}.\n\n\\begin{lem}\\label{Lemma2}\nThe integral (\\ref{eq_integral}) is approximately given by\n\\begin{align}\nG(\\alpha,\\rho)\\simeq\n\\begin{cases}\n\\Gamma(m+1,n\\rho)n^{-m-1}, ~~~~~~~~~~~~~~~~~~\\mathrm{if}~ \\rho \\geq \\breve{c}_2 \\\\ \n\\Gamma(m+1,n\\breve{c}_2)n^{-m-1} + \\\\\n~~\\left(-\\frac{\\alpha}{\\sqrt{2\\pi}}+\\frac{1}{2}\\left(1-\\frac{1}{\\sqrt{2\\pi\\alpha^2}}\\right)\\right)\\times n^{-m-1}\\times\\\\\n~~\\left(\\Gamma(m+1,n\\max(\\breve{c}_1,\\rho))-\\Gamma(m+1,n\\breve{c}_2)\\right)+\\\\\n~~\\left(\\Gamma(m+2,n\\max(\\breve{c}_1,\\rho))-\\Gamma(m+2,n\\breve{c}_2)\\right)\\times\\\\\n~~\\frac{n^{-m-2}}{\\sqrt{2\\pi}},\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\mathrm{if}~ \\rho<\\breve{c}_2,\n\\end{cases}\n\\end{align}\nwhere $\\Gamma(s,x) = \\int_{x}^{\\infty} t^{s-1}e^{-t} \\mathrm{d}t$ is the upper incomplete gamma function \\cite[Eq. 6.5.1]{abramowitz1999ia}.\n\\end{lem}\n\\begin{proof}\nSee Appendix \\ref{proof_Lemma2}. \n\\end{proof}\n\n\n\n\n\n\n\n\n\nAs a second integration example of interest, consider\n\\begin{align}\\label{eq_integralT}\n T(\\alpha,m,a,\\theta_1,\\theta_2) = \\int_{\\theta_1}^{\\theta_2} e^{-mx}\\log(1+ax)Q_1(\\alpha,x)\\text{d}x \\nonumber\\\\\\forall m>0,a,\\alpha,\n\\end{align}\nwith $\\theta_2>\\theta_1\\geq0$, which does not have a closed-form expression for different values of $m, a, \\alpha$. This integral is interesting as it is often used to analyze the expected performance of outage-limited systems, e.g., \\cite{Simon2003TWCsome,Simon2000TCexponential,Gaur2003TVTsome,6911973}. Then, using Lemma \\ref{Lemma1}, $T(\\alpha,m,a,\\theta_1,\\theta_2)$ can be approximated in closed form as follows.\n\n\\begin{lem}\\label{Lemma3}\nThe integral (\\ref{eq_integralT}) is approximately given by\n\\begin{align}\n T(\\alpha,m,a,\\theta_1,\\theta_2)\\simeq~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\nonumber\\\\\n\\begin{cases}\n\\mathcal{F}_1(\\theta_2)-\\mathcal{F}_1(\\theta_1), ~~~~~~~~~~~~~~~~~~~~~~~~~\\mathrm{if}~ 0\\leq\\theta_1<\\theta_2 < c_1 \\\\ \n\\mathcal{F}_1(c_1)-\\mathcal{F}_1(\\theta_1)+\\mathcal{F}_2(\\min(c_2,\\theta_2))-\\mathcal{F}_2(c_1), \\\\~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\mathrm{if}~ \\theta_1 < c_1 \\leq \\theta_2 \\\\\n\\mathcal{F}_2(\\min(c_2,\\theta_2))-\\mathcal{F}_2(\\theta_1), ~~~~~~~~~~\\mathrm{if}~ c_1 \\leq \\theta_1 \\leq c_2 \\\\\n0, ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\mathrm{if}~ \\theta_1 > c_2,\n\\end{cases} \n\\end{align}\nwhere \n$c_1$ and $c_2$ are given by (\\ref{eq_c1}) and (\\ref{eq_c2}), respectively.
Moreover,\n\\begin{align}\\label{eq_F1}\n \\mathcal{F}_1(x) \\doteq \\frac{1}{m}\\left(-e^{\\frac{m}{a}}\\operatorname{E_1}\\left(mx+\\frac{m}{a}\\right)-e^{-mx}\\log(ax+1)\\right),\n\\end{align}\nand\n\\begin{align}\\label{eq_F2}\n \\mathcal{F}_2(x) \\doteq &~ \\mathrm{e}^{-mx}\\Bigg(\\left(mn_2-an_2-amn_1\\right)\\mathrm{e}^\\frac{m\\left(ax+1\\right)}{a}\\nonumber\\\\&~~ \\operatorname{E_1}\\left(\\frac{m\\left(ax+1\\right)}{a}\\right)-\n a\\left(mn_2x+n_2+mn_1\\right)\\nonumber\\\\&~~\\log\\left(ax+1\\right)-an_2\\Bigg),\n\\end{align}\nwith\n\\begin{align}\\label{eq_n1}\n n_1 = 1+\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2} e^{-\\frac{1}{2}\\left(\\alpha^2+\\left(\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}\\right)^2\\right)}\\times\\nonumber\\\\\n I_0\\left(\\alpha\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}\\right)\\times\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}-\\nonumber\\\\\n 1+Q_1\\left(\\alpha,\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}\\right),\n\\end{align}\nand\n\\begin{align}\\label{eq_n2}\n n_2 = -\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2} e^{-\\frac{1}{2}\\left(\\alpha^2+\\left(\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}\\right)^2\\right)}\\times\\nonumber\\\\\n I_0\\left(\\alpha\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}\\right).\n\\end{align}\nIn (\\ref{eq_F1}) and (\\ref{eq_F2}), $\\operatorname{E_1}(x) = \\int_x^{\\infty} \\frac{e^{-t}}{t} \\mathrm{d}t$ is the Exponential Integral function \\cite[p. 228, (5.1.1)]{abramowitz1999ia}.\n\\end{lem}\n\n\\begin{proof}\nSee Appendix \\ref{proof_Lemma3}.\n\\end{proof}\n\nFinally, setting $m = 0$ in (\\ref{eq_integralT}), i.e.,\n\\begin{align}\\label{eq_integralTs}\n T(\\alpha,0,a,\\theta_1,\\theta_2) = \\int_{\\theta_1}^{\\theta_2} \\log(1+ax)Q_1(\\alpha,x)\\text{d}x, \\forall a,\\alpha,\n\\end{align}\none can follow the same procedure as in (\\ref{eq_integralT}) to approximate (\\ref{eq_integralTs}) as\n\\begin{align}\\label{eq_integralTss}\n T(\\alpha,0,a,\\theta_1,\\theta_2)\\simeq~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\nonumber\\\\\n\\begin{cases}\n\\mathcal{F}_3(\\theta_2)-\\mathcal{F}_3(\\theta_1), ~~~~~~~~~~~~~~~~~~~~~~~~~\\mathrm{if}~ 0\\leq\\theta_1<\\theta_2 < c_1 \\\\ \n\\mathcal{F}_3(c_1)-\\mathcal{F}_3(\\theta_1)+\\mathcal{F}_4(\\min(c_2,\\theta_2))-\\mathcal{F}_4(c_1), \\\\~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\mathrm{if}~ \\theta_1 < c_1 \\leq \\theta_2 \\\\\n\\mathcal{F}_4(\\min(c_2,\\theta_2))-\\mathcal{F}_4(\\theta_1), ~~~~~~~~~~\\mathrm{if}~ c_1 \\leq \\theta_1 \\leq c_2 \\\\\n0, ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\mathrm{if}~ \\theta_1 > c_2,\n\\end{cases} \n\\end{align}\nwith $c_1$ and $c_2$ given by (\\ref{eq_c1}) and (\\ref{eq_c2}), respectively. Also,\n\\begin{align}\n \\mathcal{F}_3(x) = \\frac{(ax+1)(\\log(ax+1)-1)}{a},\n\\end{align}\nand\n\\begin{align}\n \\mathcal{F}_4(x) = \\frac{n_2\\left((2a^2x^2-2)\\log(ax+1)-a^2x^2+2ax\\right)}{4a^2}+\\nonumber\\\\\\frac{n_1(ax+1)(\\log(ax+1)-1)}{a},\n\\end{align}\nwhere $n_1$ and $n_2$ are given by (\\ref{eq_n1}) and (\\ref{eq_n2}), respectively.\n\n\n\\begin{figure}\n\\centering\n \\includegraphics[width=0.7\\columnwidth]{Figure2.pdf}\\\\\n\\caption{The integral (\\ref{eq_integral}) involving the Marcum $Q$-function. Solid lines are exact values while crosses are the results obtained from Lemma \\ref{Lemma2}, $\\alpha = 2$, $(m,n) = \\{(4,4), (3,3), (2,2), (0,1), (1,1)\\}$.}\\label{fig_integrald}\n\\end{figure}\n\n\n\\begin{figure}\n\\centering\n \\includegraphics[width=0.7\\columnwidth]{Figure3.pdf}\\\\\n\\caption{The integral (\\ref{eq_integralT}) involving the Marcum $Q$-function. Solid lines are exact values while crosses are the results obtained from Lemma \\ref{Lemma3} and (\\ref{eq_integralTss}). $\\theta_1 = 0, \\theta_2 = \\infty$. }\\label{fig_t2}\n\\end{figure}
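\n\nAs a sketch of how Lemma \\ref{Lemma2} can be checked numerically (our own illustration; here $\\Gamma(s,x)$ is built from SciPy's regularized upper incomplete gamma function, and $\\breve{c}_1,\\breve{c}_2$ are taken from (\\ref{eq_dotc1}) and (\\ref{eq_dotc2})):\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.integrate import quad\nfrom scipy.special import gamma, gammaincc\nfrom scipy.stats import ncx2\n\ndef G_exact(alpha, rho, n, m):\n    # Numerical reference for eq. (eq_integral); 1 - Q_1(alpha, x) is the\n    # CDF of a non-central chi-square variable evaluated at x^2\n    f = lambda x: np.exp(-n * x) * x**m * ncx2.cdf(x**2, 2, alpha**2)\n    return quad(f, rho, np.inf)[0]\n\ndef Gamma_u(s, x):\n    # Upper incomplete gamma via the regularized version\n    return gamma(s) * gammaincc(s, x)\n\ndef G_lemma2(alpha, rho, n, m):\n    # Closed-form approximation of Lemma 2\n    b = np.sqrt(2 * np.pi) \/ 2 * (1 - 1 \/ np.sqrt(2 * np.pi * alpha**2))\n    c1, c2 = alpha - b, alpha + np.sqrt(2 * np.pi) - b\n    if rho >= c2:\n        return Gamma_u(m + 1, n * rho) * n**(-m - 1)\n    a = max(c1, rho)\n    const = (-alpha \/ np.sqrt(2 * np.pi)\n             + 0.5 * (1 - 1 \/ np.sqrt(2 * np.pi * alpha**2)))\n    return (Gamma_u(m + 1, n * c2) * n**(-m - 1)\n            + const * n**(-m - 1)\n              * (Gamma_u(m + 1, n * a) - Gamma_u(m + 1, n * c2))\n            + (Gamma_u(m + 2, n * a) - Gamma_u(m + 2, n * c2))\n              * n**(-m - 2) \/ np.sqrt(2 * np.pi))\n\nprint(G_exact(2.0, 0.5, 1, 1), G_lemma2(2.0, 0.5, 1, 1))\n\\end{verbatim}\n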
In Figs. \\ref{fig_integrald} and \\ref{fig_t2}, we evaluate the tightness of the approximations in Lemmas \\ref{Lemma2}, \\ref{Lemma3} and (\\ref{eq_integralTss}), for different values of $m$, $n$, $\\rho$, $a$ and $\\alpha$. From the figures, it can be observed that the approximation schemes of Lemmas \\ref{Lemma2}-\\ref{Lemma3} and (\\ref{eq_integralTss}) are very tight for different parameter settings, while our proposed semi-linear approximation makes it possible to represent the integrals in closed form. In this way, although the approximation (\\ref{eq_lema1}) is not tight at the tails of the CDF, it gives tight approximation results when it appears in different integrals. Also, as we show in Section III, the semi-linear approximation scheme is efficient in optimization problems involving the Marcum $Q$-function. Finally, to tightly approximate the Marcum $Q$-function at the tails, which are the range of interest in, e.g., error probability analysis, one can use the approximation schemes of \\cite{Simon2000TCexponential,annamalai2001WCMCcauchy}.\n\n\n\n\n\n\\section{Applications in PA Systems}\nIn Section II, we showed how the proposed approximation scheme enables us to derive closed-form expressions for a broad range of integrals, as required in various expectation-based calculations, e.g., \\cite{Simon2003TWCsome,Cao2016CLsolutions,sofotasios2015solutions,Cui2012ELtwo,Gaur2003TVTsome,Simon2000TCexponential,6911973}. On the other hand, the Marcum $Q$-function may also appear in optimization problems, e.g., \\cite[eq. (8)]{Azari2018TCultra},\\cite[eq. (9)]{Alam2014INFOCOMWrobust}, \\cite[eq. (10)]{Gao2018IAadmm}, \\cite[eq. (10)]{Shen2018TVToutage}, \\cite[eq. (15)]{Song2017JLTimpact}, \\cite[eq. (22)]{Tang2019IAan}. For this reason, in this section, we provide an example of using our proposed semi-linear approximation in an optimization problem for the PA systems.\n\n\\subsection{Problem Formulation}\nVehicle communication is one of the most important use cases in 5G. Here, the main focus is to provide efficient and reliable connections to cars and public transports, e.g., buses and trains. CSIT plays an important role in achieving these goals, since the data transmission efficiency can be improved by updating the transmission parameters relative to the instantaneous channel state. However, the typical CSIT acquisition systems, which are mostly designed for (semi)static channels, may not work well for high-speed vehicles. This is because, depending on the vehicle speed, the position of the antennas may change quickly and the channel information becomes inaccurate. To overcome this issue, \\cite{Sternad2012WCNCWusing,DT2015ITSMmaking,BJ2017PIMRCpredictor,phan2018WSAadaptive,Jamaly2014EuCAPanalysis, BJ2017ICCWusing} propose the PA setup as shown in Fig. \\ref{system}. With a PA setup, which is of interest in Vehicle-to-everything (V2X) communications \\cite{Sternad2012WCNCWusing} as well as in integrated access and backhauling \\cite{Teyeb2019VTCintegrated}, two antennas are deployed on the top of the vehicle. The first antenna, the PA, estimates the channel and sends feedback to the BS at time $t$. Then, the BS uses the CSIT provided by the PA to communicate with the second antenna, which we refer to as the RA, at time $t+\\delta$, where $\\delta$ is the processing time at the BS.
In this way, the BS can use the CSIT acquired from the PA and perform various CSIT-based transmission schemes, e.g., \\cite{Sternad2012WCNCWusing,BJ2017ICCWusing}. \n\n\n\\begin{figure}\n\\centering\n \n \\includegraphics[width=0.7\\columnwidth]{Figure4.pdf}\\\\\n\n\\caption{A PA system with the spatial mismatch problem. Here, $\\hat h$ is the channel between the BS and the PA while $h$ refers to the BS-RA link. The vehicle is moving with speed $v$ and the antenna separation is $d_\\text{a}$. The red arrow indicates the spatial mismatch, i.e., the case where the RA does not reach the same point as the PA did when sending the pilots. Also, $d_\\text{m}$ is the moving distance of the vehicle, which is affected by the processing delay $\\delta$ of the BS. }\\label{system}\n\\end{figure}\n\n\nWe assume that the vehicle moves through a stationary electromagnetic standing wave pattern. Thus, if the RA reaches exactly the same position as the position at which the PA sent the pilots, it will experience the same channel and the CSIT will be perfect. However, if the RA does not reach the same spatial point as the PA, because, e.g., the BS processing delay is not equal to the time required for the RA to reach the position of the PA, the RA may receive the data in a place different from the one in which the PA sent the pilots. Such spatial mismatch may lead to CSIT inaccuracy, which will affect the system performance considerably. Thus, we need adaptive schemes to compensate for it.\n\nConsidering downlink transmission in the BS-RA link, the received signal is given by\n\\begin{align}\\label{eq_Y}\n{{Y}} = \\sqrt{P}hX + Z.\n\\end{align}\nHere, $P$ represents the transmit power, $X$ is the input message with unit variance, and $h$ is the fading coefficient between the BS and the RA. Also, $Z \\sim \\mathcal{CN}(0,1)$ denotes the independent and identically distributed (IID) complex Gaussian noise added at the receiver.\n\n\n\nWe denote the channel coefficient of the PA-BS uplink as $\\hat{h}$. Also, we define $d$ as the effective distance between the place where the PA estimates the channel at time $t$, and the place where the RA reaches at time $t+\\delta$. As can be seen in Fig. \\ref{system}, $d$ can be calculated as\n\\begin{align}\\label{eq_d}\n d = |d_\\text{a} - d_\\text{m} | = |d_\\text{a} - v\\delta|,\n\\end{align}\nwhere $d_\\text{m}$ is the moving distance of the vehicle during time interval $\\delta$, and $v$ is the velocity of the vehicle. Also, $d_\\text{a}$ is the antenna separation between the PA and the RA. In conjunction with (\\ref{eq_d}), here, we assume that $d$ can be calculated by the BS. \n\nUsing the classical Jakes' correlation model \\cite[p. 2642]{Shin2003TITcapacity} and assuming a uniform angular spectrum, the channel coefficient of the BS-RA downlink can be modeled as \n\\begin{align}\\label{eq_H}\n h = \\sqrt{1-\\sigma^2} \\hat{h} + \\sigma q.\n\\end{align}\nHere, $q \\sim \\mathcal{CN}(0,1)$ which is independent of the known channel value $\\hat{h}\\sim \\mathcal{CN}(0,1)$, and $\\sigma$ is a function of the effective distance $d$ as \n\\begin{align}\n \\sigma = \\frac{\\frac{\\phi_2^2-\\phi_1^2}{\\phi_1}}{\\sqrt{ \\left(\\frac{\\phi_2}{\\phi_1}\\right)^2 + \\left(\\frac{\\phi_2^2-\\phi_1^2}{\\phi_1}\\right)^2 }}.\n\\end{align}\nHere, $\\phi_1 = \\bm{\\Phi}_{1,1}^{1\/2} $ and $\\phi_2 = \\bm{\\Phi}_{1,2}^{1\/2} $, where $\\bm{\\Phi}$ is from Jakes' model \\cite[p. 2642]{Shin2003TITcapacity}
\\begin{align}\\label{eq_tildeH}\n \\bigl[ \\begin{smallmatrix}\n \\hat{h}\\\\h\n\\end{smallmatrix} \\bigr]= \\bm{\\Phi}^{1\/2} \\bm{H}_{\\varepsilon}.\n\\end{align}\nIn (\\ref{eq_tildeH}), $\\bm{H}_{\\varepsilon}$ has independent circularly-symmetric zero-mean complex Gaussian entries with unit variance, and $\\bm{\\Phi}$ is the channel correlation matrix with the $(i,j)$-th entry given by\n\\begin{align}\\label{eq_phi}\n \\Phi_{i,j} = J_0\\left((i-j)\\cdot2\\pi d\/ \\lambda\\right) \\forall i,j.\n\\end{align}\nHere, $J_n(x) = (\\frac{x}{2})^n \\sum_{i=0}^{\\infty}\\frac{(\\frac{x}{2})^{2i}(-1)^{i} }{i!\\Gamma(n+i+1)}$ represents the $n$-th order Bessel function of the first kind. Moreover, $\\lambda$ denotes the carrier wavelength, i.e., $\\lambda = c\/f_\\text{c}$ where $c$ is the speed of light and $f_\\text{c}$ is the carrier frequency. \n\n\n\n\n\n\nFrom (\\ref{eq_H}), for a given $\\hat{h}$ and $\\sigma \\neq 0$, $|h|$ follows a Rician distribution, i.e., the probability density function (PDF) of $|h|$ is given by \n\\begin{align}\n f_{|h|\\big|\\hat{g}}(x) = \\frac{2x}{\\sigma^2}e^{-\\frac{x^2+\\hat{g}}{\\sigma^2}}I_0\\left(\\frac{2x\\sqrt{\\hat{g}}}{\\sigma^2}\\right),\n\\end{align}\nwhere $\\hat{g} = (1-\\sigma^2)|\\hat{h}|^2$. Let us define the channel gain between the BS and the RA as $ g = |{h}|^2$. Then, the PDF $f_{g|\\hat{g}}$ is given by\n\n\\begin{align}\\label{eq_pdf}\n f_{g|\\hat{g}}(x) = \\frac{1}{\\sigma^2}e^{-\\frac{x+\\hat{g}}{\\sigma^2}}I_0\\left(\\frac{2\\sqrt{x\\hat{g}}}{\\sigma^2}\\right),\n\\end{align}\nwhich is non-central Chi-squared distributed with the CDF containing the first-order Marcum $Q$-function as\n\\begin{align}\\label{eq_cdf}\n F_{g|\\hat{g}}(x) = 1 - Q_1\\left( \\sqrt{\\frac{2\\hat{g}}{\\sigma^2}}, \\sqrt{\\frac{2x}{\\sigma^2}} \\right).\n\\end{align}\n\n\n\n\\subsection{Analytical Results on Rate Adaptation Using the Semi-Linear Approximation of the First-order Marcum Q-Function}\\label{In Section III.C}\nWe assume that $d_\\text{a}$, $\\delta $ and $\\hat{g}$ are known by the BS. It can be seen from (\\ref{eq_pdf}) that $f_{g|\\hat{g}}(x)$ is a function of $v$. For a given $v$, the distribution of $g$ is known by the BS, and a rate adaptation scheme can be performed to improve the system performance.\n\nFor a given instantaneous value of $\\hat g$, the data is transmitted with instantaneous rate $R_{|\\hat{g}}$ nats-per-channel-use (npcu). If the instantaneous channel gain realization supports the transmitted data rate $R_{|\\hat{g}}$, i.e., $\\log(1+gP)\\ge R_{|\\hat{g}}$, the data can be successfully decoded. Otherwise, outage occurs. Hence, the outage probability in each time slot is\n\\begin{align}\n \\Pr(\\text{outage}|\\hat{g}) = F_{g|\\hat{g}}\\left(\\frac{e^{R_{|\\hat{g}}}-1}{P}\\right).\n\\end{align}\nAlso, the instantaneous throughput for a given $\\hat{g}$ is\n\\begin{align}\\label{eq_opteta}\n\\eta_{|\\hat {g}}\\left(R_{|\\hat{g}}\\right)=R_{|\\hat{g}}\\left(1-\\Pr\\left(\\log(1+gP)<R_{|\\hat{g}}\\right)\\right) = R_{|\\hat{g}}Q_1\\left(\\sqrt{\\frac{2\\hat{g}}{\\sigma^2}},\\sqrt{\\frac{2\\left(e^{R_{|\\hat{g}}}-1\\right)}{P\\sigma^2}}\\right),\n\\end{align}\nand the throughput-optimized rate allocation is given by Lemma \\ref{Lemma4}.\n\n\\begin{lem}\\label{Lemma4}\nWith the semi-linear approximation of Lemma \\ref{Lemma1}, the throughput-optimized rate allocation for a given $\\hat{g}$ is approximately given by (\\ref{eq_appRF}).\n\\end{lem}\n\\begin{proof}\nSetting $\\alpha = \\sqrt{\\frac{2\\hat{g}}{\\sigma^2}}$ and $\\beta = \\sqrt{\\frac{2\\left(e^{R_{|\\hat{g}}}-1\\right)}{P\\sigma^2}}$, and using Lemma \\ref{Lemma1}, the outage probability (\\ref{eq_cdf}) is approximated as\n\\begin{align}\\label{eq_optR}\nF_{g|\\hat{g}}\\left(\\frac{e^{R_{|\\hat{g}}}-1}{P}\\right)\\simeq\n\\begin{cases}\n0, ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\mathrm{if}~ \\beta < c_1(\\alpha) \\\\\no_1(\\alpha)\\left(\\beta-o_2(\\alpha)\\right)+o_3(\\alpha), \\\\\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\mathrm{if}~ c_1(\\alpha) \\leq \\beta \\leq c_2(\\alpha) \\\\\n1, ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\mathrm{if}~ \\beta > c_2(\\alpha).\n\\end{cases}\n\\end{align}\n\n\nHere, $o_i,i=1,2,3$, are given by (\\ref{eq_lema1}), (\\ref{eq_coro1}), (\\ref{eq_coro2}), or (\\ref{eq_coro3}), depending on whether we use Lemma \\ref{Lemma1} or Corollaries \\ref{coro1}-\\ref{coro3}. In this way, (\\ref{eq_opteta}) is approximated as\n\\begin{align}\\label{eq_appR}\n \\eta_{|\\hat {g}}\\simeq R_{|\\hat{g}}\\left(1-o_1(\\alpha)\\beta + o_1(\\alpha)o_2(\\alpha) - o_3(\\alpha)\\right),\n\\end{align}\nwhere $\\alpha = \\sqrt{\\frac{2\\hat{g}}{\\sigma^2}}$.
To simplify the equation, we omit $\\alpha$ in the following since it is a constant for given $\\hat{g}$, $\\sigma$. Then, setting the derivative of (\\ref{eq_appR}) with respect to $R_{|\\hat{g}}$ equal to zero, we obtain\n\\begin{align}\\label{eq_appRF}\n & R_{|\\hat{g}}^{\\text{opt}} \\nonumber\\\\\n & = \\operatorname*{arg}_{R_{|\\hat{g}}\\geq 0}\\left\\{ 1+o_1o_2-o_3-o_1\\left(\\frac{(R_{|\\hat{g}}+2)e^{R_{|\\hat{g}}}-2}{\\sqrt{2P\\sigma^2\\left(e^{R_{|\\hat{g}}}-1\\right)}}\\right)=0\\right\\}\\nonumber\\\\\n \n & \\overset{(a)}{\\simeq} \\operatorname*{arg}_{R_{|\\hat{g}}\\geq 0}\\left\\{ \\left(\\frac{R_{|\\hat{g}}}{2}+1\\right)e^{\\frac{R_{|\\hat{g}}}{2}+1} = \\frac{(1+o_1o_2-o_3)e\\sqrt{2P\\sigma^2}}{2o_1}\\right\\}\\nonumber\\\\\n & \\overset{(b)}{=} 2\\mathcal{W}\\left(\\frac{(1+o_1o_2-o_3)e\\sqrt{2P\\sigma^2}}{2o_1}\\right)-2.\n\\end{align}\nHere, $(a)$ comes from $e^{R_{|\\hat{g}}}-1 \\simeq e^{R_{|\\hat{g}}} $ and $(R_{|\\hat{g}}+2)e^{R_{|\\hat{g}}}-2 \\simeq (R_{|\\hat{g}}+2)e^{R_{|\\hat{g}}} $ which are appropriate at moderate\/high values of $R_{|\\hat{g}}$. Also, $(b)$ is obtained by the definition of the Lambert $\\mathcal{W}$-function $xe^x = y \\Leftrightarrow x = \\mathcal{W}(y)$ \\cite{corless1996lambertw}. \n\n\\end{proof}\n\nFinally, the expected throughput, averaged over multiple time slots, is obtained by $\\eta = \\mathbb{E}\\left\\{\\eta_{|\\hat {g}}\\left(R_{|\\hat{g}}^{\\text{opt}}\\right)\\right\\}$ with the expectation taken over $\\hat g$. \n\nUsing (\\ref{eq_appRF}) and the approximation \\cite[Thm. 2.1]{hoorfar2007approximation}\n\n\\begin{align}\n \\mathcal{W}(x) \\simeq \\log(x)-\\log\\log(x), x\\geq0,\n\\end{align}\nwe obtain\n\\begin{align}\n R_{|\\hat{g}}^{\\text{opt}}\\simeq 2\\log\\left(\\frac{(1+o_1o_2-o_3)e\\sqrt{2P\\sigma^2}}{2o_1}\\right)-\\nonumber\\\\\n 2\\log\\log\\left(\\frac{(1+o_1o_2-o_3)e\\sqrt{2P\\sigma^2}}{2o_1}\\right)-2,\n\\end{align}\nwhich implies that, as the transmit power increases, the optimal instantaneous rate grows (approximately) logarithmically with the square root of the transmit power.\n\n\\subsection{On the Effect of Imperfect Channel Estimation}\nIn Section \\ref{In Section III.C}, we assumed perfect channel estimation at the BS. Here, we follow a similar approach to that of, e.g., \\cite{Wang2007TWCperformance}, and add the effect of the estimation error of $\\hat{h}$ as an independent additive Gaussian variable whose variance is given by the accuracy of the channel estimation. \n\nLet us define $\\tilde h$ as the estimate of $\\hat h$ at the BS. Then, we further develop our channel model (\\ref{eq_H}) as\n\\begin{align}\\label{eq_Htp}\n \\tilde{h} = \\kappa \\hat{h} + \\sqrt{1-\\kappa^2} z, \n\\end{align}\nfor each time slot, where $z \\sim \\mathcal{CN}(0,1)$ is a Gaussian noise which is uncorrelated with $\\hat{h}$. Also, $\\kappa$ is a known correlation factor which represents the estimation error of $\\hat{h}$ by $\\kappa = \\frac{\\mathbb{E}\\{\\tilde{h}\\hat{h}^*\\}}{\\mathbb{E}\\{|\\hat{h}|^2\\}}$. Substituting (\\ref{eq_Htp}) into (\\ref{eq_H}), we have\n\\begin{align}\\label{eq_Ht}\n h = \\kappa\\sqrt{1-\\sigma^2}\\hat{h}+\\kappa\\sigma q+\\sqrt{1-\\kappa^2}z.\n\\end{align}\nThen, because $\\kappa\\sigma q + \\sqrt{1-\\kappa^2}z$ is equivalent to a new Gaussian variable $w \\sim\\mathcal{CN}\\left(0,(\\kappa\\sigma)^2+1-\\kappa^2\\right)$, we can follow the same procedure as in (\\ref{eq_optR})-(\\ref{eq_appRF}) to analyze the system performance with imperfect channel estimation of the PA (see Figs. \\ref{fig_Figure5}-\\ref{fig_Figure6} for more discussions).
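\n\nBefore presenting the numerical results, we note that the expected throughput $\\eta$ is straightforward to estimate by Monte-Carlo simulation. The following minimal Python sketch (our own illustration with freely chosen example parameters; for a given $\\sigma$ obtained from the Jakes model, the rate is set by (\\ref{eq_appRF}) and the success probability is evaluated with the exact Marcum $Q$-function) outlines the procedure:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.special import i0e, lambertw\nfrom scipy.stats import ncx2\n\ndef Q1(a, b):\n    return ncx2.sf(b**2, 2, a**2)   # Marcum Q_1 via non-central chi-square\n\nrng = np.random.default_rng(0)\nP, sigma = 100.0, 0.3               # example transmit power and mismatch\nN = 10**4                           # realizations (the paper uses 10^5)\nh_hat = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) \/ np.sqrt(2)\ng_hat = (1 - sigma**2) * np.abs(h_hat)**2\n\neta = 0.0\nfor gh in g_hat:\n    alpha = np.sqrt(2 * gh \/ sigma**2)\n    x0 = (alpha + np.sqrt(alpha**2 + 2)) \/ 2\n    # slope of Lemma 1, written with the scaled Bessel function i0e\n    # to avoid numerical overflow at large alpha\n    o1 = x0 * np.exp(-(alpha - x0)**2 \/ 2) * i0e(alpha * x0)\n    o2, o3 = x0, 1 - Q1(alpha, x0)\n    K = (1 + o1 * o2 - o3) * np.e * np.sqrt(2 * P * sigma**2) \/ (2 * o1)\n    R = max(2 * np.real(lambertw(K)) - 2, 0.0)   # eq. (eq_appRF)\n    eta += R * Q1(alpha, np.sqrt(2 * (np.exp(R) - 1) \/ (P * sigma**2)))\nprint(eta \/ N)\n\\end{verbatim}\n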
\\ref{fig_Figure5}-\\ref{fig_Figure6} for more discussions).\n\n\\subsection{Simulation Results}\nIn this part, we study the performance of the PA system and verify the tightness of the approximation scheme of Lemma \\ref{Lemma4}. Particularly, we present the average throughput and the outage probability of the PA setup for different vehicle speeds\/channel estimation errors. As an ultimate upper bound for the proposed rate adaptation scheme, we consider a genie-aided setup where we assume that the BS has perfect CSIT of the BS-RA link without uncertainty\/outage probability. Then, as a lower-bound of the system performance, we consider the cases with no CSIT\/rate adaptation as shown in Fig. \\ref{fig_Figure5}. In the simulations, we set $f_\\text{c}$ = 2.68 GHz and $d_\\text{a} = 1.5\\lambda$. Finally, each point in the figures is obtained by averaging the system performance over $1\\times10^5$ channel realizations.\n\n\\begin{figure}\n\\centering\n \\includegraphics[width=0.7\\columnwidth]{Figure5.pdf}\\\\\n\\caption{Expected throughput $\\eta$ in different cases, $v$ = 114 Km\/h, $\\kappa$ = 1, and $\\delta = $ 5 ms. Both the exact values from simulation as well as the analytical approximations from Lemma \\ref{Lemma4} are presented.}\\label{fig_Figure5}\n\\end{figure}\n\n\nIn Fig. \\ref{fig_Figure5}, we show the expected throughput $\\eta$ in different cases for a broad range of signal-to-noise ratios (SNRs). Here, because the noise has unit variance, we define the SNR as $10\\log_{10}P$. Also, we set $v = 114$ Km\/h in Fig. \\ref{fig_Figure5} as defined in (\\ref{eq_d}). The analytical results obtained by Lemma \\ref{Lemma4} and Corollary \\ref{coro2}, i.e., the approximation of (\\ref{eq_optR}), are also presented. We have also checked the approximation result of Lemma \\ref{Lemma4} while using Lemma \\ref{Lemma1}\/Corollaries \\ref{coro1}, \\ref{coro3}. Then, because the results are similar as those presented in Fig. \\ref{fig_Figure5}, they are not included to the figure. Moreover, the figure shows the results with no CSIT\/rate adaptation as a benchmark. Finally, Fig. \\ref{fig_Figure6} studies the expected throughput $\\eta$ for different values of estimation error variance $\\kappa$ with SNR = 10, 19, 25 dB, in the case of partial CSIT. Also, the figure evaluates the tightness of the approximation results obtained by Lemma \\ref{Lemma4}. Here, we set $v =$ 114.5 Km\/h and $\\delta = 5$ ms.\n\nSetting SNR = 23 dB and $v = $ 120, 150 Km\/h in Fig. \\ref{fig_Figure7}, we study the effect of the processing delay $\\delta$ on the throughput. Finally, the outage probability is evaluated in Fig. \\ref{fig_Figure8}, where the results are presented for different speeds with SNR = 10 dB, in the case of partial CSIT. Also, we present the outage probability for $\\delta = 5.35$ ms and $\\delta = 4.68$ ms in Fig. \\ref{fig_Figure8}. \n\n\n\n\n\\begin{figure}\n\\centering\n \\includegraphics[width=0.7\\columnwidth]{Figure6.pdf}\\\\\n\n\\caption{Expected throughput $\\eta$ for different estimation errors $\\kappa$ with SNR = 10, 19, 25 dB, in the case of partial CSIT, exact and approximation, $v = $ 114.5 Km\/h, and $\\delta = 5$ ms. 
Both the exact values from simulations and the analytical approximations from Lemma \\ref{Lemma4} are presented.}\\label{fig_Figure6}\n\\end{figure}\n\n\n\\begin{figure}\n\\centering\n \\includegraphics[width=0.7\\columnwidth]{Figure7.pdf}\\\\\n\\caption{Expected throughput $\\eta$ for different processing delays with SNR = 23 dB and $v$ = 120, 150 km\/h in the case of partial adaptation. }\\label{fig_Figure7}\n\n\\end{figure}\n\n\\begin{figure}\n\\centering\n \\includegraphics[width=0.7\\columnwidth]{Figure8.pdf}\\\\\n\n\\caption{Outage probability for different velocities with SNR = 10 dB, in the case of partial CSIT.}\\label{fig_Figure8}\n\n\\end{figure}\n\n\n\n\nFrom the figures, we can conclude the following points:\n\\begin{itemize}\n \\item The approximation scheme of Lemma \\ref{Lemma4} is tight for a broad range of parameter settings (Figs. \\ref{fig_Figure5}, \\ref{fig_Figure6}). Thus, the throughput-optimized rate allocation can be well approximated by (\\ref{eq_appRF}), and the semi-linear approximation of Lemma \\ref{Lemma1}\/Corollaries \\ref{coro1}-\\ref{coro3} is a good approach to study the considered optimization problem.\n \n \\item With the deployment of the PA, a remarkable throughput gain is achieved, especially at moderate\/high SNRs (Fig. \\ref{fig_Figure5}). Also, the throughput decreases when the estimation error is considered, i.e., $\\kappa$ decreases. Finally, as can be seen in Figs. \\ref{fig_Figure5}, \\ref{fig_Figure6}, with rate adaptation, and without optimizing the processing delay\/vehicle speed, the effect of the estimation error on the expected throughput is small except for large estimation errors, i.e., small values of $\\kappa$.\n\n \n \\item As can be seen in Figs. \\ref{fig_Figure7} and \\ref{fig_Figure8}, for different channel estimation errors, there are optimal values of the vehicle speed and the BS processing delay that optimize the system throughput and outage probability. Note that the presence of the optimal speed\/processing delay can be proved via (\\ref{eq_d}) as well. Finally, the optimal value of the vehicle speed, in terms of throughput\/outage probability, decreases with the processing delay. However, the optimal vehicle speed\/processing delay, in terms of throughput\/outage probability, is almost insensitive to the channel estimation error.\n\n \\item With perfect channel estimation, the throughput\/outage probability is sensitive to speed variations if we move away from the optimal speed (Figs. \\ref{fig_Figure7} and \\ref{fig_Figure8}). However, the sensitivity to the speed\/processing delay variation decreases as the channel estimation error increases, i.e., $\\kappa$ decreases (Figs. \\ref{fig_Figure7} and \\ref{fig_Figure8}). Finally, considering Figs. \\ref{fig_Figure7} and \\ref{fig_Figure8}, it is expected that adapting the processing delay, as a function of the vehicle speed, and implementing hybrid automatic repeat request protocols can improve the performance of the PA system. These points will be studied in our future works.\n\n\\end{itemize}\n\n\n\n\n\n\\section{Conclusion}\nWe derived a simple semi-linear approximation method for the first-order Marcum $Q$-function, as one of the functions of interest in different problem formulations of wireless networks. As we showed through various analyses, while the proposed approximation is not tight at the tails of the function, it is useful in different optimization- and expectation-based problem formulations. 
Particularly, as an application of interest, we used the proposed approximation to analyze the performance of PA setups using rate adaptation. As we showed, with different levels of channel estimation error\/processing delay, adaptive rate allocation can effectively compensate for the spatial mismatch problem and improve the throughput\/outage probability of PA networks. It is expected that increasing the number of RA antennas will improve the performance of the PA system considerably.\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Introduction}\n\nThe first-order Marcum $Q$-function\\footnote{To simplify the analysis, our paper concentrates on the approximation of the first-order Marcum $Q$-function. However, our approximation technique can be easily extended to Marcum $Q$-functions of different orders.} is defined as \\cite[Eq. (1)]{Bocus2013CLapproximation}\n\\begin{align}\n Q_1(\\alpha,\\beta) = \\int_{\\beta}^{\\infty} xe^{-\\frac{x^2+\\alpha^2}{2}}I_0(x\\alpha)\\text{d}x,\n\\end{align}\nwhere $\\alpha, \\beta \\geq 0$ and $I_n(x) = (\\frac{x}{2})^n \\sum_{i=0}^{\\infty}\\frac{(\\frac{x}{2})^{2i} }{i!\\Gamma(n+i+1)}$ is the $n$-th order modified Bessel function of the first kind, and $\\Gamma(z) = \\int_0^{\\infty} x^{z-1}e^{-x} \\mathrm{d}x$ represents the Gamma function. Reviewing the literature, the Marcum $Q$-function has appeared in many areas such as statistics\/signal detection \\cite{helstrom1994elements}, and in the performance analysis of different setups such as temporally correlated channels \\cite{Makki2013TCfeedback}, spatially correlated channels \\cite{Makki2011Eurasipcapacity}, free-space optical (FSO) links \\cite{Makki2018WCLwireless}, relay networks \\cite{Makki2016TVTperformance}, as well as cognitive radio and radar systems \\cite{Simon2003TWCsome,Suraweera2010TVTcapacity,Kang2003JSAClargest,Chen2004TCdistribution,Ma2000JSACunified,Zhang2002TCgeneral,Ghasemi2008ICMspectrum,Digham2007TCenergy, simon2002bookdigital,Cao2016CLsolutions,sofotasios2015solutions,Cui2012ELtwo,Azari2018TCultra,Alam2014INFOCOMWrobust,Gao2018IAadmm,Shen2018TVToutage,Song2017JLTimpact,Tang2019IAan,ermolova2014laplace,peppas2013performance}. However, in these applications, the presence of the Marcum $Q$-function makes the mathematical analysis challenging, because the function is difficult to manipulate and has no closed-form expression, especially when it appears in parameter optimizations and integral calculations. For this reason, several methods have been developed in \\cite{Bocus2013CLapproximation,Fu2011GLOBECOMexponential,zhao2008ELtight,Simon2000TCexponential,annamalai2001WCMCcauchy,Sofotasios2010ISWCSnovel,Li2010TCnew,andras2011Mathematicageneralized,Gaur2003TVTsome,Kam2008TCcomputing,Corazza2002TITnew,Baricz2009TITnew,chiani1999ELintegral,jimenez2014connection} to bound\/approximate the Marcum $Q$-function. For example, \\cite{Fu2011GLOBECOMexponential,zhao2008ELtight} have proposed modified forms of the function, while \\cite{Simon2000TCexponential,annamalai2001WCMCcauchy} have derived exponential-type bounds which are good for the bit error rate analysis at high signal-to-noise ratios (SNRs). Other types of bounds are expressed in terms of, e.g., the error function \\cite{Kam2008TCcomputing} and Bessel functions \\cite{Corazza2002TITnew,Baricz2009TITnew,chiani1999ELintegral}. Some alternative methods have also been proposed in \\cite{Sofotasios2010ISWCSnovel,Li2010TCnew,andras2011Mathematicageneralized,Bocus2013CLapproximation,Gaur2003TVTsome}. 
Although each of these approximation\/bounding techniques is fairly tight for its considered problem formulation, they are still based on difficult functions, or have complicated summation\/integration forms, which may not be easy to deal with in, e.g., integral calculations and parameter optimizations. \n\n\nIn this paper, we first propose a simple semi-linear approximation of the first-order Marcum $Q$-function (Lemma \\ref{Lemma1}, Corollaries \\ref{coro1}-\\ref{coro2}). As we explain in the following (Lemmas \\ref{Lemma1}-\\ref{Lemma4}), in contrast to the schemes of \\cite{Bocus2013CLapproximation,Fu2011GLOBECOMexponential,zhao2008ELtight,Simon2000TCexponential,annamalai2001WCMCcauchy,Sofotasios2010ISWCSnovel,Li2010TCnew,andras2011Mathematicageneralized,Gaur2003TVTsome,Kam2008TCcomputing,Corazza2002TITnew,Baricz2009TITnew,chiani1999ELintegral,jimenez2014connection}, our proposed approximation is not tight at the tails of the Marcum $Q$-function. Therefore, it is not useful in, e.g., error probability-based problem formulations. On the other hand, the advantages of our proposed approximation method, compared to \\cite{Bocus2013CLapproximation,Fu2011GLOBECOMexponential,zhao2008ELtight,Simon2000TCexponential,annamalai2001WCMCcauchy,Sofotasios2010ISWCSnovel,Li2010TCnew,andras2011Mathematicageneralized,Gaur2003TVTsome,Kam2008TCcomputing,Corazza2002TITnew,Baricz2009TITnew,chiani1999ELintegral,jimenez2014connection}, are 1) its simplicity and 2) its tightness at moderate values of the function. This is important because, as observed in, e.g., \\cite{Bocus2013CLapproximation,Fu2011GLOBECOMexponential,Makki2013TCfeedback,Makki2011Eurasipcapacity,Makki2018WCLwireless,Makki2016TVTperformance,Simon2003TWCsome,Suraweera2010TVTcapacity,Ma2000JSACunified,Digham2007TCenergy,Cao2016CLsolutions,sofotasios2015solutions,Azari2018TCultra,Alam2014INFOCOMWrobust,Gao2018IAadmm,Shen2018TVToutage,Song2017JLTimpact,Tang2019IAan}, in different applications, the Marcum $Q$-function is typically combined with other functions which tend to zero at the tails of the Marcum $Q$-function. In such cases, the inaccuracy of the approximation at the tails does not affect the tightness of the final analysis. Thus, our proposed scheme provides tight and simple approximation results for different problem formulations such as capacity calculation \\cite{Makki2011Eurasipcapacity,Suraweera2010TVTcapacity}, throughput\/average rate derivation \\cite{Makki2013TCfeedback,Makki2018WCLwireless,Makki2016TVTperformance}, energy detection of unknown signals over various multipath fading channels \\cite{Digham2007TCenergy,Cao2016CLsolutions,sofotasios2015solutions}, as well as performance evaluation of non-coherent receivers in radar systems \\cite{Cui2012ELtwo}. Also, the simplicity of the approximation method makes it possible to perform further analysis such as parameter optimization and to obtain intuitive insights from the derivations.\n\n\nTo demonstrate the usefulness of the proposed approximation technique in communication systems, we analyze the performance of predictor antenna (PA) systems in the presence of spatial mismatch. Here, the PA system is referred to as a setup with two (sets of) antennas on the roof of a vehicle. 
The PA, positioned at the front of a vehicle, can be used to improve the channel state estimation for downlink data reception at the receive antenna (RA) on the vehicle that is aligned behind the PA \\cite{Sternad2012WCNCWusing,DT2015ITSMmaking,BJ2017PIMRCpredictor,phan2018WSAadaptive,Jamaly2014EuCAPanalysis, BJ2017ICCWusing,Apelfrojd2018PIMRCkalman,Jamaly2019IETeffects,Guo2019WCLrate}. \n\n\nThe feasibility of PA setups, which are of interest particularly in public transport systems such as trains and buses, but potentially also for the more design-constrained cars, has been previously shown through experimental tests \\cite{Sternad2012WCNCWusing,DT2015ITSMmaking,BJ2017PIMRCpredictor,phan2018WSAadaptive,Jamaly2014EuCAPanalysis, BJ2017ICCWusing,Apelfrojd2018PIMRCkalman}. Particularly, as shown in testbed implementations, e.g., \\cite{BJ2017PIMRCpredictor} and \\cite{BJ2017ICCWusing}, with a two-antenna PA setup a normalised mean square error of around -10 dB can be obtained for speeds up to 50 km\/h, with measured prediction horizons of up to three wavelengths. This is an order of magnitude better than state-of-the-art Kalman prediction-based systems \\cite{Ekman2002,Aronsson2011}, whose prediction horizons are limited to 0.1-0.3 times the wavelength. Moreover, the European project deliverables, e.g., of the ARTIST4G project \\cite[Chapter 2]{ARTIST4G}, the METIS project \\cite[P. 107]{metis2015d33} and the 5GCAR project \\cite[Chapter 3]{5gcar2019d33}, have addressed the feasibility of the PA concept at the network level. Finally, different works have analyzed the PA system in both frequency division duplex (FDD) \\cite{BJ2017ICCWusing,BJ2017PIMRCpredictor,phan2018WSAadaptive} and time division duplex (TDD) \\cite{DT2015ITSMmaking,Apelfrojd2018PIMRCkalman} systems, with developments addressing system challenges such as antenna coupling \\cite{Jamaly2014EuCAPanalysis,Jamaly2019IETeffects}, spatial mismatch \\cite{Jamaly2019IETeffects,Guo2019WCLrate}, and spectrum underutilization \\cite{guo2020power,guo2020rate}. \n\nAmong the challenges of the PA system is the spatial mismatch. If the RA does not arrive at the same position as the PA, the actual channel for the RA will not be identical to the one experienced by the PA before. Such inaccurate channel state information (CSI) estimation will affect the system performance considerably at moderate\/high speeds \\cite{DT2015ITSMmaking,Guo2019WCLrate}. \n\nIn this paper, we address the spatial mismatch problem by implementing adaptive rate allocation. In our proposed setup, the instantaneous CSI provided by the PA is used to adapt the data rate of the signals sent to the RA from the base station (BS). The problem is cast in the form of throughput maximization. Particularly, we use our developed approximation approach to derive closed-form expressions for the instantaneous and average throughput as well as the optimal rate allocation maximizing the throughput (Lemma \\ref{Lemma4}). Moreover, we study the effect of different parameters such as the antennas distance, the vehicle speed, and the processing delay of the BS on the performance of PA setups. 
\n\n\nOur paper is different from the state-of-the-art literature because the proposed semi-linear approximation of the first-order Marcum $Q$-function and the derived closed-form expressions for the considered integrals have not been presented by, e.g., \\cite{Bocus2013CLapproximation,helstrom1994elements,Fu2011GLOBECOMexponential,Makki2013TCfeedback,Makki2011Eurasipcapacity,Makki2018WCLwireless,Makki2016TVTperformance,Simon2003TWCsome,Suraweera2010TVTcapacity,Kang2003JSAClargest,Chen2004TCdistribution,Ma2000JSACunified,Zhang2002TCgeneral,Ghasemi2008ICMspectrum,Digham2007TCenergy, simon2002bookdigital,Cao2016CLsolutions,sofotasios2015solutions,Cui2012ELtwo,Azari2018TCultra,Alam2014INFOCOMWrobust,Gao2018IAadmm,Shen2018TVToutage,Song2017JLTimpact,Tang2019IAan,ermolova2014laplace,peppas2013performance,zhao2008ELtight,Simon2000TCexponential,annamalai2001WCMCcauchy,Sofotasios2010ISWCSnovel,Li2010TCnew,andras2011Mathematicageneralized,Gaur2003TVTsome,Kam2008TCcomputing,Corazza2002TITnew,Baricz2009TITnew,chiani1999ELintegral,jimenez2014connection,Sternad2012WCNCWusing,DT2015ITSMmaking,BJ2017PIMRCpredictor,phan2018WSAadaptive,Jamaly2014EuCAPanalysis, BJ2017ICCWusing,Apelfrojd2018PIMRCkalman,Jamaly2019IETeffects,Guo2019WCLrate}. Also, as opposed to \\cite{Sternad2012WCNCWusing,DT2015ITSMmaking,BJ2017PIMRCpredictor,phan2018WSAadaptive,Jamaly2014EuCAPanalysis, BJ2017ICCWusing,Apelfrojd2018PIMRCkalman,Jamaly2019IETeffects}, we perform analytical evaluations on the system performance with CSIT (T: at the transmitter)-based rate optimization to mitigate the effect of the spatial mismatch. Moreover, compared to our preliminary results in \\cite{Guo2019WCLrate}, this paper develops the semi-linear approximation method for the Marcum $Q$-function, and uses our proposed approximation method to analyze the performance of the PA system. Also, we perform an in-depth analysis of the effect of various parameters, such as imperfect CSIT feedback schemes and the processing delay of the BS, on the system performance. \n\n\n\nThe simulation and the analytical results indicate that the proposed semi-linear approximation is useful for the mathematical analysis of different Marcum $Q$-function-based problem formulations. Particularly, our approximation method enables us to represent different Marcum $Q$-function-based integrations and optimizations in closed-form. Considering the PA system, our derived analytical results show that adaptive rate allocation can considerably improve the performance of the PA system in the presence of spatial mismatch. Finally, with different levels of channel estimation, our results show that there exists an optimal vehicle speed in terms of throughput\/outage probability, and that the system performance becomes sensitive to the vehicle speed\/processing delay as the speed moves away from this optimal value. \n\n\nThis paper is organized as follows. In Section II, we present our proposed semi-linear approximation of the first-order Marcum $Q$-function, and derive closed-form solutions for some integrals of interest. Section III deals with the application of the approximation in the PA system, deriving closed-form expressions for the optimal rate adaptation, the instantaneous throughput as well as the expected throughput. In this way, Sections II and III demonstrate examples of how the proposed approximation can be useful in, respectively, expectation- and optimization-based problem formulations involving the Marcum $Q$-function. Concluding remarks are provided in Section IV. 
\n\n\n\n\n\n\\section{Approximation of the first-order Marcum $Q$-function}\nIn this section, we present our semi-linear approximation of the cumulative distribution function (CDF) in the form of $y(\\alpha, \\beta ) = 1-Q_1(\\alpha,\\beta)$. The idea of this proposed approximation is to use one point and its corresponding slope in that point to create a line approximating the CDF. The approximation method is summarized in Lemma \\ref{Lemma1} as follows.\n\\begin{lem}\\label{Lemma1}\n The CDF of the form $y(\\alpha, \\beta ) = 1-Q_1(\\alpha,\\beta)$ can be semi-linearly approximated as $Y(\\alpha,\\beta)\\simeq \\mathcal{Z}(\\alpha, \\beta)$ where\n\\begin{align}\\label{eq_lema1}\n\\mathcal{Z}(\\alpha, \\beta)=\n\\begin{cases}\n0, ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\mathrm{if}~ \\beta < c_1 \\\\ \n \\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2} e^{-\\frac{1}{2}\\left(\\alpha^2+\\left(\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}\\right)^2\\right)}\\times\\\\\n ~~~I_0\\left(\\alpha\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}\\right)\\times\\left(\\beta-\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}\\right)+\\\\\n ~~~1-Q_1\\left(\\alpha,\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}\\right), ~~~~~~\\mathrm{if}~ c_1 \\leq\\beta\\leq c_2 \\\\\n1, ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\mathrm{if}~ \\beta> c_2,\n\\end{cases}\n\\end{align}\nwith\n\\begin{align}\\label{eq_c1}\n & c_1(\\alpha) =~~~ \\max\\Bigg(0,\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}+\\nonumber\\\\\n &~~~\\frac{Q_1\\left(\\alpha,\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}\\right)-1}{\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2} e^{-\\frac{1}{2}\\left(\\alpha^2+\\left(\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}\\right)^2\\right)}I_0\\left(\\alpha\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}\\right)}\\Bigg),\n\\end{align}\n\\begin{align}\\label{eq_c2}\n & c_2(\\alpha) = \\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}+\\nonumber\\\\\n & ~~~\\frac{Q_1\\left(\\alpha,\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}\\right)}{\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2} e^{-\\frac{1}{2}\\left(\\alpha^2+\\left(\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}\\right)^2\\right)}I_0\\left(\\alpha\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}\\right)}.\n\\end{align}\n\\end{lem}\n\\begin{proof}\n\nWe aim to approximate the CDF in the range $y \\in [0, 1]$ by \n\\begin{align}\\label{eq_YY}\n y-y_0 = m(x-x_0),\n\\end{align}\nwhere $\\mathcal{C} = (x_0,y_0)$ is a point on the CDF curve and $m$ is the slope at point $\\mathcal{C}$ of $y(\\alpha,\\beta)$. Then, the parts of the line outside this region are replaced by $y=0$ and $y=1$ (see Fig. \\ref{fig_CDFilu}).\n\nTo obtain a good approximation of the CDF, we select the point $\\mathcal{C}$ by solving\n\n\\begin{align}\\label{eq_partialsquare}\n x = \\mathop{\\arg}_{t} \\left\\{ \\frac{\\partial^2\\left(1-Q_1(\\alpha,t)\\right)}{\\partial t^2} = 0\\right\\},\n\\end{align}\nbecause the function is symmetric around this point, and (\\ref{eq_partialsquare}) gives the best fit for a linear function. Then, using the derivative of the first-order Marcum $Q$-function with respect to $x$ \\cite[Eq. (2)]{Pratt1968PIpartial}\n\\begin{align}\\label{eq_derivativeMarcumQ}\n \\frac{\\partial Q_1(\\alpha,x)}{\\partial x} = -x e^{-\\frac{\\alpha^2+x^2}{2}}I_0(\\alpha x),\n\\end{align}\n(\\ref{eq_partialsquare}) is equivalent to \n\\begin{align}\n x = \\mathop{\\arg}_{x} \\left\\{\\frac{\\partial\\left(x e^{-\\frac{\\alpha^2+x^2}{2}}I_0(\\alpha x)\\right)}{\\partial x}=0\\right\\}.\n\\end{align}\nUsing the approximation $I_0(x) \\simeq \\frac{e^x}{\\sqrt{2\\pi x}} $ \\cite[Eq. 
(9.7.1)]{abramowitz1999ia} and writing\n\\begin{align}\n &~~~\\frac{\\partial\\left(\\sqrt{\\frac{x}{2\\pi \\alpha}}e^{-\\frac{(x-\\alpha)^2}{2}}\\right)}{\\partial x} = 0\\nonumber\\\\\n \\Rightarrow &\\frac{1}{\\sqrt{2\\pi\\alpha}}\\left(\\frac{e^{-\\frac{(x-\\alpha)^2}{2}}}{2\\sqrt{x}}+\\sqrt{x}e^{-\\frac{(x-\\alpha)^2}{2}}(\\alpha-x)\\right) = 0\\nonumber\\\\\n \\Rightarrow &2x^2-2\\alpha x-1 =0,\n\\end{align}\nwe obtain \n\\begin{align}\\label{eq_beta0}\n x = \\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2},\n\\end{align}\nsince $x\\geq0$. In this way, we find the point \n\\begin{align}\n \\mathcal{C}=\\left(\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}, 1-Q_1\\left(\\alpha,\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}\\right)\\right).\n\\end{align}\n\nTo calculate the slope $m$ at the point $\\mathcal{C}$, we plug (\\ref{eq_beta0}) into (\\ref{eq_derivativeMarcumQ}) leading to\n\\begin{align}\\label{eq_m}\n & m = \\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}\\times\\nonumber\\\\\n & ~~~e^{-\\frac{1}{2}\\left(\\alpha^2+\\left(\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}\\right)^2\\right)}I_0\\left(\\alpha\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}\\right).\n\\end{align}\nFinally, using (\\ref{eq_YY}), (\\ref{eq_beta0}) and (\\ref{eq_m}), the CDF $y(\\alpha, \\beta) = 1-Q_1(\\alpha,\\beta)$ can be approximated as in (\\ref{eq_lema1}). Note that, because the CDF is limited to the range $[0, 1]$, the boundaries $c_1$ and $c_2$ in (\\ref{eq_lema1}) are obtained by setting $y=0$ and $y=1$, which leads to the semi-linear approximation given in (\\ref{eq_lema1}).\n\\end{proof}\n\n\nTo further simplify the calculation, considering different ranges of $\\alpha$, the approximation (\\ref{eq_lema1}) can be simplified as stated in the following corollaries. \n\n\\begin{corollary}\\label{coro1}\nFor moderate\/large values of $\\alpha$, we have $y(\\alpha,\\beta)\\simeq\\tilde{\\mathcal{Z}}(\\alpha,\\beta)$ where\n\\begin{align}\\label{eq_coro1}\n\\tilde{\\mathcal{Z}}(\\alpha,\\beta)&\\simeq\n\\begin{cases}\n0, ~~~~~~~~~~~~~\\mathrm{if}~\\beta < \\frac{-\\frac{1}{2}\\left(1-e^{-\\alpha^2}I_0(\\alpha^2)\\right)}{\\alpha e^{-\\alpha^2}I_0(\\alpha^2)}+\\alpha \\\\ \n\\alpha e^{-\\alpha^2}I_0(\\alpha^2)(\\beta-\\alpha) + \n\\frac{1}{2}\\left(1-e^{-\\alpha^2}I_0(\\alpha^2)\\right), \\\\ ~~~~~~~~~~~~~~~~\\mathrm{if}~\\frac{-\\frac{1}{2}\\left(1-e^{-\\alpha^2}I_0(\\alpha^2)\\right)}{\\alpha e^{-\\alpha^2}I_0(\\alpha^2)}+\\alpha \\leq\\beta\\\\\n~~~~~~~~~~~~~~~~~~~~\\leq\\frac{1-\\frac{1}{2}\\left(1-e^{-\\alpha^2}I_0(\\alpha^2)\\right)}{\\alpha e^{-\\alpha^2}I_0(\\alpha^2)}+\\alpha \\\\\n1, ~~~~~~~~~~~~~\\mathrm{if}~ \\beta> \\frac{1-\\frac{1}{2}\\left(1-e^{-\\alpha^2}I_0(\\alpha^2)\\right)}{\\alpha e^{-\\alpha^2}I_0(\\alpha^2)}+\\alpha.\n\\end{cases}\\\\\n&\\overset{(a)}{\\simeq}\n\\begin{cases}\n0, ~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\mathrm{if}~ \\beta < \\breve{c}_1 \\\\ \n\\frac{1}{\\sqrt{2\\pi}}(\\beta-\\alpha) + \n\\frac{1}{2}\\left(1-\\frac{1}{\\sqrt{2\\pi\\alpha^2}}\\right), \\\\~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\mathrm{if}~ \\breve{c}_1 \\leq\\beta\\leq \\breve{c}_2 \\\\\n1, ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\mathrm{if}~ \\beta>\\breve{c}_2,\n\\end{cases}\n\\end{align}\nwith $\\breve{c}_1$ and $\\breve{c}_2$ given in (\\ref{eq_dotc1}) and (\\ref{eq_dotc2}), respectively.\n\\end{corollary}\n\n\\begin{proof}\nUsing (\\ref{eq_beta0}) for moderate\/large values of $\\alpha$, we have $x \\simeq \\alpha$ and\n\\begin{align}\n \\tilde{c}_1 = \\frac{-\\frac{1}{2}\\left(1-e^{-\\alpha^2}I_0(\\alpha^2)\\right)}{\\alpha 
e^{-\\alpha^2}I_0(\\alpha^2)}+\\alpha,\n\\end{align}\n\\begin{align}\n \\tilde{c}_2 = \\frac{1-\\frac{1}{2}\\left(1-e^{-\\alpha^2}I_0(\\alpha^2)\\right)}{\\alpha e^{-\\alpha^2}I_0(\\alpha^2)}+\\alpha,\n\\end{align}\nwhich leads to (\\ref{eq_coro1}). Note that in (\\ref{eq_coro1}) we have used the fact that \\cite[Eq. (A-3-2)]{schwartz1995communication}\n\\begin{align}\n Q_1(\\alpha,\\alpha) = \\frac{1}{2}\\left(1+e^{-\\alpha^2}I_0(\\alpha^2)\\right).\n\\end{align}\nFinally, $(a)$ is obtained by using the approximation $I_0(x) \\simeq \\frac{e^x}{\\sqrt{2\\pi x}}$ where\n\\begin{align}\\label{eq_dotc1}\n \\breve{c}_1 = -\\frac{\\sqrt{2\\pi}}{2}\\left(1-\\frac{1}{\\sqrt{2\\pi\\alpha^2}}\\right)+\\alpha,\n\\end{align}\nand\n\\begin{align}\\label{eq_dotc2}\n \\breve{c}_2 = \\sqrt{2\\pi}-\\frac{\\sqrt{2\\pi}}{2}\\left(1-\\frac{1}{\\sqrt{2\\pi\\alpha^2}}\\right)+\\alpha.\n\\end{align}\n\\end{proof}\n\n\\begin{corollary}\\label{coro2}\nFor small values of $\\alpha$, we have $y(\\alpha,\\beta)\\simeq\\hat{ \\mathcal{Z}}(\\alpha,\\beta)$ with\n\\begin{align}\\label{eq_coro2}\n\\hat{ \\mathcal{Z}}(\\alpha,\\beta)\\simeq\n\\begin{cases}\n0, ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\mathrm{if}~\\beta < \\hat{c}_1 \\\\ \n\\frac{\\alpha+\\sqrt{2}}{2} e^{-\\frac{\\alpha^2+\\left(\\frac{\\alpha+\\sqrt{2}}{2}\\right)^2}{2}}\\times\\\\\n~~~I_0\\left(\\frac{\\alpha(\\alpha+\\sqrt{2})}{2}\\right)(\\beta-\\frac{\\alpha+\\sqrt{2}}{2}) + \\\\\n~~~1-Q_1\\left(\\alpha,\\frac{\\alpha+\\sqrt{2}}{2}\\right), \n~~~~~~~~\\mathrm{if}~\\hat{c}_1 \\leq\\beta\\leq \\hat{c}_2 \\\\\n1, ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\mathrm{if}~ \\beta> \\hat{c}_2\n\\end{cases}\n\\end{align}\nwith $\\hat{c}_1$ and $\\hat{c}_2$ given in (\\ref{eq_hatc1}) and (\\ref{eq_hatc2}), respectively.\n\\end{corollary}\n\\begin{proof}\nUsing (\\ref{eq_beta0}) for small values of $\\alpha$, we have $x\\simeq \\frac{\\alpha+\\sqrt{2}}{2}$, which leads to \n\\begin{equation}\\label{eq_hatc1}\n \\hat{c}_1 = \\frac{-1+Q_1\\left(\\alpha,\\frac{\\alpha+\\sqrt{2}}{2}\\right)}{\\frac{\\alpha+\\sqrt{2}}{2} e^{-\\frac{\\alpha^2+\\left(\\frac{\\alpha+\\sqrt{2}}{2}\\right)^2}{2}} I_0\\left(\\frac{\\alpha(\\alpha+\\sqrt{2})}{2}\\right)}+\\frac{\\alpha+\\sqrt{2}}{2},\n\\end{equation}\nand\n\\begin{equation}\\label{eq_hatc2}\n \\hat{c}_2 = \\frac{Q_1\\left(\\alpha,\\frac{\\alpha+\\sqrt{2}}{2}\\right)}{\\frac{\\alpha+\\sqrt{2}}{2} e^{-\\frac{\\alpha^2+\\left(\\frac{\\alpha+\\sqrt{2}}{2}\\right)^2}{2}} I_0\\left(\\frac{\\alpha(\\alpha+\\sqrt{2})}{2}\\right)}+\\frac{\\alpha+\\sqrt{2}}{2},\n\\end{equation}\nwhich simplifies (\\ref{eq_lema1}) to (\\ref{eq_coro2}).\n\\end{proof}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nTo illustrate these semi-linear approximations, Fig. \\ref{fig_CDFilu} shows the CDF $y(\\alpha,\\beta)= 1-Q_1(\\alpha,\\beta)$ for both small and large values of $\\alpha$, and compares the exact CDF with the approximation schemes of Lemma \\ref{Lemma1} and Corollaries \\ref{coro1}-\\ref{coro2}. From Fig. \\ref{fig_CDFilu}, we can observe that Lemma \\ref{Lemma1} is tight for a broad range of $\\alpha$ and moderate values of $\\beta$. Moreover, the tightness is improved as $\\alpha$ decreases. Also, Corollaries \\ref{coro1}-\\ref{coro2} provide good approximations for large and small values of $\\alpha$, respectively. On the other hand, the proposed approximations are not tight at the tails of the CDF. 
However, as observed in \\cite{Bocus2013CLapproximation,Fu2011GLOBECOMexponential,Makki2013TCfeedback,Makki2011Eurasipcapacity,Makki2018WCLwireless,Makki2016TVTperformance,Simon2003TWCsome,Suraweera2010TVTcapacity,Ma2000JSACunified,Digham2007TCenergy,Cao2016CLsolutions,sofotasios2015solutions,Azari2018TCultra,Alam2014INFOCOMWrobust,Gao2018IAadmm,Shen2018TVToutage,Song2017JLTimpact,Tang2019IAan} and in the following, in different applications, the Marcum $Q$-function is normally combined with other functions which tend to zero at the tails of the CDF. In such cases, the inaccuracy of the approximation at the tails does not affect the tightness of the final result. \n\n\n\n\n\\begin{figure}\n\\centering\n \\includegraphics[width=1.0\\columnwidth]{Figure1.pdf}\\\\\n\\caption{Illustration of the semi-linear approximation with Lemma \\ref{Lemma1} and Corollaries \\ref{coro1}-\\ref{coro2}. For each value of $\\alpha\\in\\{1,1.5,2\\}$, the approximation results obtained by Lemma \\ref{Lemma1} and Corollaries \\ref{coro1}-\\ref{coro2} are compared with the exact value for a broad range of $\\beta$.}\n\\label{fig_CDFilu}\n\\end{figure}\n\n\n\n\n\n\n\n\nAs an example, we first consider a general integral in the form of\n\\begin{align}\\label{eq_integral}\nG(\\alpha,\\rho)=\\int_\\rho^\\infty{e^{-nx} x^m \\left(1-Q_1(\\alpha,x)\\right)\\text{d}x} ~~\\forall n,m,\\alpha,\\rho>0.\n\\end{align}\nSuch an integral has been observed in various applications, e.g., in bit-error-probability evaluation of a Rayleigh fading channel \\cite[eqs. (1), (13)]{Simon2003TWCsome}, in energy detection of unknown signals over various multipath fading channels \\cite[eq. (2)]{Cao2016CLsolutions}, in capacity analysis with channel inversion and fixed rate over correlated Nakagami fading \\cite[eq. (1)]{sofotasios2015solutions}, in performance evaluation of incoherent receivers in radar systems \\cite[eq. (3)]{Cui2012ELtwo}, and in error probability analysis of diversity receivers \\cite[eq. (1)]{Gaur2003TVTsome}. However, depending on the values of $n, m$ and $\\rho$, (\\ref{eq_integral}) may have no closed-form expression. Using Lemma \\ref{Lemma1}, $G(\\alpha,\\rho)$ can be approximated in closed-form as presented in Lemma \\ref{Lemma2}.\n\n\\begin{lem}\\label{Lemma2}\nThe integral (\\ref{eq_integral}) is approximately given by\n\\begin{align}\nG(\\alpha,\\rho)\\simeq\n\\begin{cases}\n\\Gamma(m+1,n\\rho)n^{-m-1}, ~~~~~~~~~~~~~~~~~~\\mathrm{if}~ \\rho \\geq \\breve{c}_2 \\\\ \n\\Gamma(m+1,n\\breve{c}_2)n^{-m-1} + \\\\\n~~\\left(-\\frac{\\alpha}{\\sqrt{2\\pi}}+\\frac{1}{2}\\left(1-\\frac{1}{\\sqrt{2\\pi\\alpha^2}}\\right)\\right)\\times n^{-m-1}\\times\\\\\n~~\\left(\\Gamma(m+1,n\\max(\\breve{c}_1,\\rho))-\\Gamma(m+1,n\\breve{c}_2)\\right)+\\\\\n~~\\left(\\Gamma(m+2,n\\max(\\breve{c}_1,\\rho))-\\Gamma(m+2,n\\breve{c}_2)\\right)\\times\\\\\n~~\\frac{n^{-m-2}}{\\sqrt{2\\pi}},\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\mathrm{if}~ \\rho<\\breve{c}_2,\n\\end{cases}\n\\end{align}\nwhere $\\Gamma(s,x) = \\int_{x}^{\\infty} t^{s-1}e^{-t} \\mathrm{d}t$ is the upper incomplete gamma function \\cite[Eq. 6.5.1]{abramowitz1999ia}.\n\\end{lem}\n\\begin{proof}\nSee Appendix \\ref{proof_Lemma2}. 
\n\\end{proof}\n\n\n\n\nAs a second integration example of interest, consider\n\\begin{align}\\label{eq_integralT}\n T(\\alpha,m,a,\\theta_1,\\theta_2) = \\int_{\\theta_1}^{\\theta_2} e^{-mx}\\log(1+ax)Q_1(\\alpha,x)\\text{d}x \\nonumber\\\\\\forall m>0,a,\\alpha,\n\\end{align}\nwith $\\theta_2>\\theta_1\\geq0$, which does not have a closed-form expression for general values of $m, a, \\alpha$. This type of integral is of interest as it can be used to analyze the expected performance of outage-limited systems, e.g., when integrals of the form of \\cite[eqs. (1), (13)]{Simon2003TWCsome}, \\cite[eq. (2)]{Cao2016CLsolutions}, \\cite[eq. (3)]{Cui2012ELtwo}, and \\cite[eq. (1)]{Gaur2003TVTsome} appear in the analysis of the outage-limited throughput, i.e., when the outage-limited throughput $\\log(1+ax)Q_1(\\alpha,x)$ \\cite[p. 2631]{Biglieri1998TITfading}\\cite[Theorem 6]{Verdu1994TITgeneral}\\cite[Eq. (9)]{Makki2014TCperformance} is averaged over the fading statistics. Then, using Lemma \\ref{Lemma1}, (\\ref{eq_integralT}) can be approximated in closed-form as follows.\n\n\\begin{lem}\\label{Lemma3}\nThe integral (\\ref{eq_integralT}) is approximately given by\n\\begin{align}\n T(\\alpha,m,a,\\theta_1,\\theta_2)\\simeq~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\nonumber\\\\\n\\begin{cases}\n\\mathcal{F}_1(\\theta_2)-\\mathcal{F}_1(\\theta_1), ~~~~~~~~~~~~~~~~~~~~~~~~~\\mathrm{if}~ 0\\leq\\theta_1<\\theta_2 < c_1 \\\\ \n\\mathcal{F}_1(c_1)-\\mathcal{F}_1(\\theta_1)+\\mathcal{F}_2(\\min(c_2,\\theta_2))-\\mathcal{F}_2(c_1), \\\\~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\mathrm{if}~ \\theta_1<c_1\\leq\\theta_2\\\\\n\\mathcal{F}_2(\\min(c_2,\\theta_2))-\\mathcal{F}_2(\\theta_1), ~~~~~~~~~\\mathrm{if}~ c_1\\leq\\theta_1\\leq c_2\\\\\n0, ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\mathrm{if}~ \\theta_1 > c_2,\n\\end{cases} \n\\end{align}\nwhere \n$c_1$ and $c_2$ are given by (\\ref{eq_c1}) and (\\ref{eq_c2}), respectively. Moreover,\n\\begin{align}\\label{eq_F1}\n \\mathcal{F}_1(x) \\doteq \\frac{1}{m}\\left(-e^{\\frac{m}{a}}\\operatorname{E_1}\\left(mx+\\frac{m}{a}\\right)-e^{-mx}\\log(ax+1)\\right),\n\\end{align}\nand\n\\begin{align}\\label{eq_F2}\n \\mathcal{F}_2(x) \\doteq &~ \\mathrm{e}^{-mx}\\Bigg(\\left(mn_2-an_2-amn_1\\right)\\mathrm{e}^\\frac{m\\left(ax+1\\right)}{a}\\nonumber\\\\&~~ \\operatorname{E_1}\\left(\\frac{m\\left(ax+1\\right)}{a}\\right)-\n a\\left(mn_2x+n_2+mn_1\\right)\\nonumber\\\\&~~\\log\\left(ax+1\\right)-an_2\\Bigg),\n\\end{align}\nwith\n\\begin{align}\\label{eq_n1}\n n_1 = 1+\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2} e^{-\\frac{1}{2}\\left(\\alpha^2+\\left(\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}\\right)^2\\right)}\\times\\nonumber\\\\\n I_0\\left(\\alpha\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}\\right)\\times\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}-\\nonumber\\\\\n 1+Q_1\\left(\\alpha,\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}\\right),\n\\end{align}\nand\n\\begin{align}\\label{eq_n2}\n n_2 = -\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2} e^{-\\frac{1}{2}\\left(\\alpha^2+\\left(\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}\\right)^2\\right)}\\times\\nonumber\\\\\n I_0\\left(\\alpha\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}\\right).\n\\end{align}\nIn (\\ref{eq_F1}) and (\\ref{eq_F2}), $\\operatorname{E_1}(x) = \\int_x^{\\infty} \\frac{e^{-t}}{t} \\mathrm{d}t$ is the exponential integral function \\cite[p. 
228, (5.1.1)]{abramowitz1999ia}.\n\\end{lem}\n\n\\begin{proof}\nSee Appendix \\ref{proof_Lemma3}.\n\\end{proof}\n\nFinally, setting $m = 0$ in (\\ref{eq_integralT}), i.e.,\n\\begin{align}\\label{eq_integralTs}\n T(\\alpha,0,a,\\theta_1,\\theta_2) = \\int_{\\theta_1}^{\\theta_2} \\log(1+ax)Q_1(\\alpha,x)\\text{d}x, \\forall a,\\alpha,\n\\end{align}\none can follow the same procedure as in (\\ref{eq_integralT}) to approximate (\\ref{eq_integralTs}) as\n\\begin{align}\\label{eq_integralTss}\n T(\\alpha,0,a,\\theta_1,\\theta_2)\\simeq~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\nonumber\\\\\n\\begin{cases}\n\\mathcal{F}_3(\\theta_2)-\\mathcal{F}_3(\\theta_1), ~~~~~~~~~~~~~~~~~~~~~~~~~\\mathrm{if}~ 0\\leq\\theta_1<\\theta_2 < c_1 \\\\ \n\\mathcal{F}_3(c_1)-\\mathcal{F}_3(\\theta_1)+\\mathcal{F}_4(\\min(c_2,\\theta_2))-\\mathcal{F}_4(c_1), \\\\~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\mathrm{if}~ \\theta_1<c_1\\leq\\theta_2\\\\\n\\mathcal{F}_4(\\min(c_2,\\theta_2))-\\mathcal{F}_4(\\theta_1), ~~~~~~~~~\\mathrm{if}~ c_1\\leq\\theta_1\\leq c_2\\\\\n0, ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\mathrm{if}~ \\theta_1 > c_2,\n\\end{cases} \n\\end{align}\nwith $c_1$ and $c_2$ given by (\\ref{eq_c1}) and (\\ref{eq_c2}), respectively. Also,\n\\begin{align}\n \\mathcal{F}_3(x) = \\frac{(ax+1)(\\log(ax+1)-1)}{a},\n\\end{align}\nand\n\\begin{align}\n \\mathcal{F}_4(x) = \\frac{n_2\\left((2a^2x^2-2)\\log(ax+1)-a^2x^2+2ax\\right)}{4a^2}+\\nonumber\\\\\\frac{n_1(ax+1)(\\log(ax+1)-1)}{a},\n\\end{align}\nwhere $n_1$ and $n_2$ are given by (\\ref{eq_n1}) and (\\ref{eq_n2}), respectively.\n\n\n\\begin{figure}\n\\centering\n \\includegraphics[width=1.0\\columnwidth]{Figure2.pdf}\\\\\n\\caption{The integral (\\ref{eq_integral}) involving the Marcum $Q$-function. Solid lines are exact values while crosses are the results obtained from Lemma \\ref{Lemma2}, $\\alpha = 2$, $(m,n) = \\{(4,4), (3,3), (2,2), (0,1), (1,1)\\}$.}\\label{fig_integrald}\n\\end{figure}\n\n\n\\begin{figure}\n\\centering\n \\includegraphics[width=1.0\\columnwidth]{Figure3.pdf}\\\\\n\\caption{The integral (\\ref{eq_integralT}) involving the Marcum $Q$-function. Solid lines are exact values while crosses are the results obtained from Lemma \\ref{Lemma3} and (\\ref{eq_integralTss}). $\\theta_1 = 0, \\theta_2 = \\infty$. }\\label{fig_t2}\n\\end{figure}\n\n\n\nIn Figs. \\ref{fig_integrald} and \\ref{fig_t2}, we evaluate the tightness of the approximations in Lemmas \\ref{Lemma2}, \\ref{Lemma3} and (\\ref{eq_integralTss}), for different values of $m$, $n$, $\\rho$, $a$ and $\\alpha$. From the figures, it can be observed that the approximation schemes of Lemmas \\ref{Lemma2}-\\ref{Lemma3} and (\\ref{eq_integralTss}) are very tight for different parameter settings, while our proposed semi-linear approximation makes it possible to represent the integrals in closed-form. In this way, although the approximation (\\ref{eq_lema1}) is not tight at the tails of the CDF, it gives tight approximation results when it appears in different integrals (Lemmas \\ref{Lemma2}-\\ref{Lemma3}) with the Marcum $Q$-function combined with other functions that tend to zero at the tails of the function. Also, as we show in Section III, the semi-linear approximation scheme is efficient in optimization problems involving the Marcum $Q$-function. Finally, to tightly approximate the Marcum $Q$-function at the tails, which are the range of interest in, e.g., error probability analysis, one can use the approximation schemes of \\cite{Simon2000TCexponential,annamalai2001WCMCcauchy}. 
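\n\nTo make these tightness checks easy to reproduce, the following minimal Python sketch (our own illustration, not part of the original derivations; it assumes NumPy\/SciPy, and evaluates the exact $Q_1$ through the non-central chi-squared survival function, since $Q_1(\\alpha,\\beta)$ equals the survival function of a non-central chi-squared variable with $2$ degrees of freedom and non-centrality $\\alpha^2$, evaluated at $\\beta^2$) implements the semi-linear approximation of Lemma \\ref{Lemma1} and compares the closed-form expression of Lemma \\ref{Lemma2} with direct numerical integration of (\\ref{eq_integral}):\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.stats import ncx2\nfrom scipy.special import ive, gammaincc, gamma\nfrom scipy.integrate import quad\n\ndef q1(alpha, beta):\n    # Exact Q_1(alpha, beta) via the non-central chi-squared\n    # survival function (2 degrees of freedom, nc = alpha^2).\n    return ncx2.sf(beta**2, 2, alpha**2)\n\ndef q1_semilinear(alpha, beta):\n    # Lemma 1: tangent line at the point x0 of (eq_beta0), clipped\n    # to [0, 1]; the clipping reproduces the c1\/c2 cases of (eq_lema1).\n    x0 = (alpha + np.sqrt(alpha**2 + 2.0)) \/ 2.0\n    # ive(0, x) = exp(-x) I_0(x) keeps the slope (eq_m) numerically stable.\n    m = x0 * np.exp(-0.5 * (alpha - x0) ** 2) * ive(0, alpha * x0)\n    z = (1.0 - q1(alpha, x0)) + m * (beta - x0)\n    return 1.0 - np.clip(z, 0.0, 1.0)\n\ndef uinc(s, x):\n    # Upper incomplete gamma Gamma(s, x); scipy's gammaincc is regularized.\n    return gamma(s) * gammaincc(s, x)\n\ndef G_lemma2(alpha, rho, n, m):\n    # Closed-form approximation of (eq_integral) given by Lemma 2,\n    # with c1, c2 from (eq_dotc1) and (eq_dotc2).\n    c1 = alpha - np.sqrt(2 * np.pi) \/ 2 * (1 - 1 \/ np.sqrt(2 * np.pi * alpha**2))\n    c2 = np.sqrt(2 * np.pi) + c1\n    if rho >= c2:\n        return uinc(m + 1, n * rho) * n ** (-m - 1)\n    lo = max(c1, rho)\n    k = -alpha \/ np.sqrt(2 * np.pi) + 0.5 * (1 - 1 \/ np.sqrt(2 * np.pi * alpha**2))\n    return (uinc(m + 1, n * c2) * n ** (-m - 1)\n            + k * n ** (-m - 1) * (uinc(m + 1, n * lo) - uinc(m + 1, n * c2))\n            + (uinc(m + 2, n * lo) - uinc(m + 2, n * c2))\n            * n ** (-m - 2) \/ np.sqrt(2 * np.pi))\n\nalpha, rho, n, m = 2.0, 0.5, 2.0, 2.0\nexact = quad(lambda x: np.exp(-n * x) * x**m * (1 - q1(alpha, x)), rho, np.inf)[0]\nprint(q1(alpha, 2.0), q1_semilinear(alpha, 2.0))  # pointwise check of Lemma 1\nprint(exact, G_lemma2(alpha, rho, n, m))          # integral check of Lemma 2\n\\end{verbatim}\nHere, clipping the tangent line to $[0, 1]$ reproduces the three cases of (\\ref{eq_lema1}), with $c_1$ and $c_2$ arising as the points where the line crosses $0$ and $1$.\n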
\n\n\n\n\n\n\n\\section{Applications in PA Systems}\nIn Section II, we showed how the proposed approximation scheme enables us to derive closed-form expressions for a broad range of integrals, as required in various expectation-based calculations, e.g., \\cite{Simon2003TWCsome,Cao2016CLsolutions,sofotasios2015solutions,Cui2012ELtwo,Gaur2003TVTsome,Simon2000TCexponential,6911973}. On the other hand, the Marcum $Q$-function may also appear in optimization problems, e.g., \\cite[eq. (8)]{Azari2018TCultra},\\cite[eq. (9)]{Alam2014INFOCOMWrobust}, \\cite[eq. (10)]{Gao2018IAadmm}, \\cite[eq. (10)]{Shen2018TVToutage}, \\cite[eq. (15)]{Song2017JLTimpact}, \\cite[eq. (22)]{Tang2019IAan}. For this reason, in this section, we provide an example of using our proposed semi-linear approximation in an optimization problem for the PA systems.\n\n\\subsection{Problem Formulation}\nVehicle communication is one of the most important use cases in 5G. Here, the main focus is to provide efficient and reliable connections to cars and public transports, e.g., busses and trains. CSIT plays an important role in achieving these goals, since the data transmission efficiency can be improved by updating the transmission parameters relative to the instantaneous channel state. However, the typical CSIT acquisition systems, which are mostly designed for (semi)static channels, may not work well for high-speed vehicles. This is because, depending on the vehicle speed, the position of the antennas may change quickly and the channel information becomes inaccurate. To overcome this issue, \\cite{Sternad2012WCNCWusing,DT2015ITSMmaking,BJ2017PIMRCpredictor,phan2018WSAadaptive,Jamaly2014EuCAPanalysis, BJ2017ICCWusing} propose the PA setup as shown in Fig. \\ref{system}. With a PA setup, which is of interesting in Vehicle-to-everything (V2X) communications \\cite{Sternad2012WCNCWusing} as well as integrated access and backhauling \\cite{Teyeb2019VTCintegrated}, two antennas are deployed on the top of the vehicle. The first antenna, the PA, estimates the channel and sends feedback to the BS at time $t$. Then, the BS uses the CSIT provided by the PA to communicate with a second antenna, which we refer to as RA, at time $t+\\delta$, where $\\delta$ is the processing time at the BS. In this way, the BS can use the CSIT acquired from the PA and perform various CSIT-based transmission schemes, e.g., \\cite{Sternad2012WCNCWusing,BJ2017ICCWusing}. \n\n\n\\begin{figure}\n\\centering\n \n \\includegraphics[width=1.0\\columnwidth]{Figure4.pdf}\\\\\n\n\\caption{A PA system with the mismatch problem. Here, $\\hat h$ is the channel between the BS and the PA while $h$ refers to the BS-RA link. The vehicle is moving with speed $v$ and the antenna separation is $d_\\text{a}$. The red arrow indicates the spatial mismatch, i.e., when the RA does not reach at the same point as the PA when sending pilots. Also, $d_\\text{m}$ is the moving distance of the vehicle which is affected by the processing delay $\\delta$ of the BS. }\\label{system}\n\\end{figure}\n\n\nWe assume that the vehicle moves through a stationary electromagnetic standing wave pattern \\footnote{This has been experimentally verified in, e.g., \\cite{Jamaly2014EuCAP}}. Thus, if the RA reaches exactly the same position as the position of the PA when sending the pilots, it will experience the same channel and the CSIT will be perfect. 
However, if the BS processing delay does not match the time needed by the RA to reach the position of the PA, the RA receives the data in a place different from the one in which the PA sent the pilots. Such spatial mismatch may lead to CSIT inaccuracy, which will affect the system performance considerably. Thus, we need adaptive schemes to compensate for it.\n\nConsidering downlink transmission in the BS-RA link, the received signal is given by\n\\begin{align}\\label{eq_Y}\n{{Y}} = \\sqrt{P}hX + Z.\n\\end{align}\nHere, $P$ represents the transmit power, $X$ is the input message with unit variance, and $h$ is the fading coefficient between the BS and the RA. Also, $Z \\sim \\mathcal{CN}(0,1)$ denotes the independent and identically distributed (IID) complex Gaussian noise added at the receiver.\n\n\n\nWe denote the channel coefficient of the PA-BS uplink as $\\hat{h}$. Also, we define $d$ as the effective distance between the place where the PA estimates the channel at time $t$, and the place reached by the RA at time $t+\\delta$. As can be seen in Fig. \\ref{system}, $d$ can be calculated as\n\\begin{align}\\label{eq_d}\n d = |d_\\text{a} - d_\\text{m} | = |d_\\text{a} - v\\delta|,\n\\end{align}\nwhere $d_\\text{m}$ is the moving distance of the vehicle during time interval $\\delta$, and $v$ is the velocity of the vehicle. Also, $d_\\text{a}$ is the antenna separation between the PA and the RA. In conjunction with (\\ref{eq_d}), we assume here that $d$ can be calculated by the BS. \n\nUsing the classical Jakes' correlation model \\cite[p. 2642]{Shin2003TITcapacity} and assuming a semi-static propagation environment, i.e., assuming that the coherence time of the propagation environment is larger than $\\delta$, the channel coefficient of the BS-RA downlink can be modeled as \n\\begin{align}\\label{eq_H}\n h = \\sqrt{1-\\sigma^2} \\hat{h} + \\sigma q.\n\\end{align}\nHere, $q \\sim \\mathcal{CN}(0,1)$ is independent of the known channel value $\\hat{h}\\sim \\mathcal{CN}(0,1)$, and $\\sigma$ is a function of the effective distance $d$ as \n\\begin{align}\n \\sigma = \\frac{\\frac{\\phi_2^2-\\phi_1^2}{\\phi_1}}{\\sqrt{ \\left(\\frac{\\phi_2}{\\phi_1}\\right)^2 + \\left(\\frac{\\phi_2^2-\\phi_1^2}{\\phi_1}\\right)^2 }} = \\frac{\\phi_2^2-\\phi_1^2}{\\sqrt{ \\left(\\phi_2\\right)^2 + \\left(\\phi_2^2-\\phi_1^2\\right)^2 }} .\n\\end{align}\nHere, $\\phi_1 = \\left[\\bm{\\Phi}^{1\/2}\\right]_{1,1} $ and $\\phi_2 = \\left[\\bm{\\Phi}^{1\/2}\\right]_{1,2} $, where $\\bm{\\Phi}^{1\/2}$ is the matrix square root in Jakes' model \\cite[p. 2642]{Shin2003TITcapacity}\n\\begin{align}\\label{eq_tildeH}\n \\bigl[ \\begin{smallmatrix}\n \\hat{h}\\\\h\n\\end{smallmatrix} \\bigr]= \\bm{\\Phi}^{1\/2} \\bm{H}_{\\varepsilon}.\n\\end{align}\nNote that the channel model (\\ref{eq_H}) has been experimentally verified in, e.g., \\cite{Jamaly2014EuCAP} for PA setups. Also, one can follow the same method as in \\cite{Guo2019WCLrate} to extend the model to the cases with temporally-correlated channels. 
Moreover, in (\\ref{eq_tildeH}), $\\bm{H}_{\\varepsilon}$ has independent circularly-symmetric zero-mean complex Gaussian entries with unit variance, and $\\bm{\\Phi}$ is the channel correlation matrix with the $(i,j)$-th entry given by\n\\begin{align}\\label{eq_phi}\n \\Phi_{i,j} = J_0\\left((i-j)\\cdot2\\pi d\/ \\lambda\\right) \\quad \\forall i,j.\n\\end{align}\nHere, $J_n(x) = (\\frac{x}{2})^n \\sum_{i=0}^{\\infty}\\frac{(\\frac{x}{2})^{2i}(-1)^{i} }{i!\\Gamma(n+i+1)}$ represents the $n$-th order Bessel function of the first kind. Moreover, $\\lambda$ denotes the carrier wavelength, i.e., $\\lambda = c\/f_\\text{c}$ where $c$ is the speed of light and $f_\\text{c}$ is the carrier frequency. \n\nFrom (\\ref{eq_H}), for a given $\\hat{h}$ and $\\sigma \\neq 0$, $|h|$ follows a Rician distribution, i.e., the probability density function (PDF) of $|h|$ is given by \n\\begin{align}\n f_{|h|\\big|\\hat{g}}(x) = \\frac{2x}{\\sigma^2}e^{-\\frac{x^2+\\hat{g}}{\\sigma^2}}I_0\\left(\\frac{2x\\sqrt{\\hat{g}}}{\\sigma^2}\\right),\n\\end{align}\nwhere $\\hat{g} = (1-\\sigma^2)|\\hat{h}|^2$. Let us define the channel gain of the BS-RA link as $ g = |{h}|^2$. Then, the PDF of $g$ conditioned on $\\hat{g}$ is given by\n\\begin{align}\\label{eq_pdf}\n f_{g|\\hat{g}}(x) = \\frac{1}{\\sigma^2}e^{-\\frac{x+\\hat{g}}{\\sigma^2}}I_0\\left(\\frac{2\\sqrt{x\\hat{g}}}{\\sigma^2}\\right),\n\\end{align}\ni.e., $g$ is non-central chi-squared distributed, with a CDF expressed in terms of the first-order Marcum $Q$-function as\n\\begin{align}\\label{eq_cdf}\n F_{g|\\hat{g}}(x) = 1 - Q_1\\left( \\sqrt{\\frac{2\\hat{g}}{\\sigma^2}}, \\sqrt{\\frac{2x}{\\sigma^2}} \\right).\n\\end{align}\n\n\n\n\\subsection{Analytical Results on Rate Adaptation Using the Semi-Linear Approximation of the First-order Marcum $Q$-Function}\\label{In Section III.C,}\nWe assume that $d_\\text{a}$, $\\delta $ and $\\hat{g}$ are known by the BS. It can be seen from (\\ref{eq_pdf}) that $f_{g|\\hat{g}}(x)$ is a function of $v$. For a given $v$, the distribution of $g$ is known by the BS, and a rate adaptation scheme can be performed to improve the system performance.\n\nFor a given instantaneous value of $\\hat g$, the data is transmitted with instantaneous rate $R_{|\\hat{g}}$ nats-per-channel-use (npcu). If the instantaneous channel gain realization supports the transmitted data rate $R_{|\\hat{g}}$, i.e., $\\log(1+gP)\\ge R_{|\\hat{g}}$, the data can be successfully decoded. Otherwise, outage occurs. Hence, the outage probability in each time slot is\n\\begin{align}\n \\Pr(\\text{outage}|\\hat{g}) = F_{g|\\hat{g}}\\left(\\frac{e^{R_{|\\hat{g}}}-1}{P}\\right).\n\\end{align}\nAlso, the instantaneous throughput for a given $\\hat{g}$ is\n\\begin{align}\\label{eq_opteta}\n\\eta_{|\\hat {g}}\\left(R_{|\\hat{g}}\\right)=R_{|\\hat{g}}\\left(1-\\Pr\\left(\\log(1+gP)<R_{|\\hat{g}}\\right)\\right) = R_{|\\hat{g}} Q_1\\left(\\sqrt{\\frac{2\\hat{g}}{\\sigma^2}}, \\sqrt{\\frac{2\\left(e^{R_{|\\hat{g}}}-1\\right)}{P\\sigma^2}}\\right),\n\\end{align}\nand the throughput-optimized rate allocation is given by\n\\begin{align}\\label{eq_optR}\n R_{|\\hat{g}}^{\\text{opt}} = \\operatorname*{arg\\,max}_{R_{|\\hat{g}}\\geq 0}~ \\eta_{|\\hat {g}}\\left(R_{|\\hat{g}}\\right).\n\\end{align}\nBecause (\\ref{eq_optR}) does not admit a closed-form solution, we solve it using the semi-linear approximation of the first-order Marcum $Q$-function, as stated in Lemma \\ref{Lemma4}.\n\n\\begin{lem}\\label{Lemma4}\nThe throughput-optimized rate allocation of (\\ref{eq_optR}) is approximately given by (\\ref{eq_appRF}), where $\\mathcal{W}(\\cdot)$ denotes the Lambert $\\mathcal{W}$-function.\n\\end{lem}\n\\begin{proof}\nUsing the semi-linear approximation scheme, the Marcum $Q$-function in (\\ref{eq_opteta}) is approximated by\n\\begin{align}\nQ_1(\\alpha,\\beta)\\simeq\n\\begin{cases}\n1, ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\mathrm{if}~ \\beta < c_1(\\alpha) \\\\ \n1-\\left(o_1(\\alpha)\\beta - o_1(\\alpha)o_2(\\alpha) + o_3(\\alpha)\\right), \\\\\n~~~~~~~~~~~~~~~~~~~~~~~~~\\mathrm{if}~ c_1(\\alpha) \\leq\\beta\\leq c_2(\\alpha) \\\\\n0, ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\mathrm{if}~ \\beta> c_2(\\alpha).\n\\end{cases}\n\\end{align}\n\n\nHere, $\\beta = \\sqrt{\\frac{2\\left(e^{R_{|\\hat{g}}}-1\\right)}{P\\sigma^2}}$, and $o_i,i=1,2,3$, are given by (\\ref{eq_lema1}), (\\ref{eq_coro1}), or (\\ref{eq_coro2}) depending on whether we use Lemma \\ref{Lemma1} or Corollaries \\ref{coro1}-\\ref{coro2}. In this way, (\\ref{eq_opteta}) is approximated as\n\\begin{align}\\label{eq_appR}\n \\eta_{|\\hat {g}}\\simeq R_{|\\hat{g}}\\left(1-o_1(\\alpha)\\beta + o_1(\\alpha)o_2(\\alpha) - o_3(\\alpha)\\right),\n\\end{align}\nwhere $\\alpha = \\sqrt{\\frac{2\\hat{g}}{\\sigma^2}}$. To simplify the equation, we omit $\\alpha$ in the following since it is a constant for given $\\hat{g}$, $\\sigma$. 
Then, setting the derivative of (\\ref{eq_appR}) equal to zero, we obtain\n\\begin{align}\\label{eq_appRF}\n & R_{|\\hat{g}}^{\\text{opt}} \\nonumber\\\\\n & = \\operatorname*{arg}_{R_{|\\hat{g}}\\geq 0}\\left\\{ 1+o_1o_2-o_3-o_1\\left(\\frac{(R_{|\\hat{g}}+2)e^{R_{|\\hat{g}}}-2}{\\sqrt{2P\\sigma^2\\left(e^{R_{|\\hat{g}}}-1\\right)}}\\right)=0\\right\\}\\nonumber\\\\\n & \\overset{(b)}{\\simeq} \\operatorname*{arg}_{R_{|\\hat{g}}\\geq 0}\\left\\{ \\left(\\frac{R_{|\\hat{g}}}{2}+1\\right)e^{\\frac{R_{|\\hat{g}}}{2}+1} = \\frac{(1+o_1o_2-o_3)e\\sqrt{2P\\sigma^2}}{2o_1}\\right\\}\\nonumber\\\\\n & \\overset{(c)}{=} 2\\mathcal{W}\\left(\\frac{(1+o_1o_2-o_3)e\\sqrt{2P\\sigma^2}}{2o_1}-1\\right).\n\\end{align}\nHere, $(b)$ comes from $e^{R_{|\\hat{g}}}-1 \\simeq e^{R_{|\\hat{g}}} $ and $(R_{|\\hat{g}}+2)e^{R_{|\\hat{g}}}-2 \\simeq (R_{|\\hat{g}}+2)e^{R_{|\\hat{g}}} $ which are appropriate at moderate\/high values of $R_{|\\hat{g}}$. Also, $(c)$ is obtained by the definition of the Lambert $\\mathcal{W}$-function $xe^x = y \\Leftrightarrow x = \\mathcal{W}(y)$ \\cite{corless1996lambertw}. \n\n\\end{proof}\n\nFinally, the expected throughput, averaged over multiple time slots, is obtained by $\\eta = \\mathbb{E}\\left\\{\\eta_{|\\hat {g}}\\left(R_{|\\hat{g}}^{\\text{opt}}\\right)\\right\\}$ with expectation over $\\hat g$. \n\nUsing (\\ref{eq_appRF}) and the approximation \\cite[Thm. 2.1]{hoorfar2007approximation}\n\\begin{align}\n \\mathcal{W}(x) \\simeq \\log(x)-\\log\\log(x), ~x\\geq e,\n\\end{align}\nwe obtain\n\\begin{align}\n R_{|\\hat{g}}^{\\text{opt}}\\simeq 2\\log\\left(\\frac{(1+o_1o_2-o_3)e\\sqrt{2P\\sigma^2}}{2o_1}-1\\right)-\\nonumber\\\\\n 2\\log\\log\\left(\\frac{(1+o_1o_2-o_3)e\\sqrt{2P\\sigma^2}}{2o_1}-1\\right),\n\\end{align}\nwhich implies that, as the transmit power increases, the optimal instantaneous rate grows (approximately) logarithmically with the square root of the transmit power.\n\n\\subsection{On the Effect of Imperfect Channel Estimation}\nIn Section \\ref{In Section III.C,}, we assumed perfect channel estimation at the BS. Deviations in the channel estimation, due to, e.g., radio-frequency mismatch, could invalidate this assumption and should be considered in the system design. Here, we follow a similar approach to that in, e.g., \\cite{Wang2007TWCperformance}, and model the estimation error of $\\hat{h}$ as an independent additive Gaussian variable whose variance is given by the accuracy of the channel estimation. \n\nLet us define $\\tilde h$ as the estimate of $\\hat h$ at the BS. Then, we further develop our channel model (\\ref{eq_H}) as\n\\begin{align}\\label{eq_Htp}\n \\tilde{h} = \\kappa \\hat{h} + \\sqrt{1-\\kappa^2} z, \n\\end{align}\nfor each time slot, where $z \\sim \\mathcal{CN}(0,1)$ is a Gaussian noise which is uncorrelated with $\\hat{h}$. Also, $\\kappa$ is a known correlation factor which characterizes the estimation quality of $\\hat{h}$ via $\\kappa = \\frac{\\mathbb{E}\\{\\tilde{h}\\hat{h}^*\\}}{\\mathbb{E}\\{|\\hat{h}|^2\\}}$. Substituting (\\ref{eq_Htp}) into (\\ref{eq_H}), we have\n\\begin{align}\\label{eq_Ht}\n h = \\kappa\\sqrt{1-\\sigma^2}\\hat{h}+\\kappa\\sigma q+\\sqrt{1-\\kappa^2}z.\n\\end{align}\nThen, because $\\kappa\\sigma q + \\sqrt{1-\\kappa^2}z$ is equivalent to a new Gaussian variable $w \\sim\\mathcal{CN}\\left(0,(\\kappa\\sigma)^2+1-\\kappa^2\\right)$, we can follow the same procedure as in (\\ref{eq_optR})-(\\ref{eq_appRF}) to analyze the system performance with imperfect channel estimation of the PA (see Figs. 
\\ref{fig_Figure5}-\\ref{fig_Figure6} for more discussions).\n\n\\subsection{Simulation Results}\nIn this part, we study the performance of the PA system and verify the tightness of the approximation scheme of Lemma \\ref{Lemma4}. Particularly, we present the average throughput and the outage probability of the PA setup for different vehicle speeds\/channel estimation errors. As an ultimate upper bound for the proposed rate adaptation scheme, we consider a genie-aided setup where we assume that the BS has perfect CSIT of the BS-RA link without uncertainty\/outage probability. Then, as a lower bound on the system performance, we consider the cases with no CSIT\/rate adaptation as shown in Fig. \\ref{fig_Figure5}. Here, the simulation results for the cases of no adaptation are obtained with only one antenna and no CSIT. In this case, the data is sent with a fixed rate $R$ and it is decoded if $R<\\log(1+gP)$, i.e., $g>\\frac{e^{R}-1}{P}$. In this way, assuming Rayleigh fading, the average rate is given by\n\\begin{align}\\label{eq_bench}\n R^{\\text{No-adaptation}} = \\int_{\\frac{e^{R}-1}{P}}^{\\infty} Re^{-x} \\text{d}x = Re^{-\\frac{e^{R}-1}{P}}.\n\\end{align}\nThe optimal rate allocation is found by setting the derivative of (\\ref{eq_bench}) with respect to $R$ equal to zero, leading to $\\tilde{R} = \\mathcal{W}(P)$, and the throughput is then calculated as \n\\begin{align}\\label{eq_etanocsi}\n \\eta^{\\text{No-adaptation}} =\\mathcal{W}(P)e^{-\\frac{e^{\\mathcal{W}(P)}-1}{P}}. \n\\end{align}\nAlso, in the simulations, we set $f_\\text{c}$ = 2.68 GHz and $d_\\text{a} = 1.5\\lambda$. Finally, each point in the figures is obtained by averaging the system performance over $1\\times10^5$ channel realizations.\n\nIn Fig. \\ref{fig_Figure5}, we show the expected throughput $\\eta$ in different cases for a broad range of signal-to-noise ratios (SNRs). Here, because the noise has unit variance, we define the SNR as $10\\log_{10}P$. Also, we set $v = 114$ km\/h in Fig. \\ref{fig_Figure5} as defined in (\\ref{eq_d}), and $\\kappa = 1$ as discussed in (\\ref{eq_Ht}). The analytical results obtained by Lemma \\ref{Lemma4} and Corollary \\ref{coro2}, i.e., the approximation of (\\ref{eq_optR}), are also presented. We have also checked the approximation result of Lemma \\ref{Lemma4} while using Lemma \\ref{Lemma1}\/Corollary \\ref{coro1}. Then, because the results are similar to those presented in Fig. \\ref{fig_Figure5}, they are not included in the figure. Moreover, the figure shows the results of (\\ref{eq_etanocsi}) with no CSIT\/rate adaptation as a benchmark. Finally, Fig. \\ref{fig_Figure6} studies the expected throughput $\\eta$ for different values of the estimation error parameter $\\kappa$ with SNR = 10, 19, 25 dB, in the case of partial CSIT. Also, the figure evaluates the tightness of the approximation results obtained by Lemma \\ref{Lemma4}. Here, we set $v =$ 114.5 km\/h and $\\delta = 5$ ms.\n\nSetting SNR = 23 dB and $v = $ 120, 150 km\/h in Fig. \\ref{fig_Figure7}, we study the effect of the processing delay $\\delta$ on the throughput. Finally, the outage probability is evaluated in Fig. \\ref{fig_Figure8}, where the results are presented for different speeds with SNR = 10 dB, in the case of partial CSIT. Also, we present the outage probability for $\\delta = 5.35$ ms and $\\delta = 4.68$ ms in Fig. \\ref{fig_Figure8}. 
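\n\nFor reproducibility, the following minimal Python sketch (our own illustrative code, not the exact simulator used for the figures; it assumes NumPy\/SciPy, perfect estimation $\\kappa = 1$, and unit-variance noise, and the chosen parameter values are only examples) shows how one point of the adaptive-rate curves can be estimated: the mismatch parameter $\\sigma$ is obtained from $(v, \\delta, d_\\text{a}, f_\\text{c})$ via (\\ref{eq_d})-(\\ref{eq_phi}), the rate of (\\ref{eq_appRF}) is computed via the Lambert $\\mathcal{W}$-function for each realization of $\\hat{g}$, and the throughput is averaged over channel realizations:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.stats import ncx2\nfrom scipy.special import ive, j0, lambertw\nfrom scipy.linalg import sqrtm\n\nrng = np.random.default_rng(0)\n\ndef sigma_mismatch(v_kmh, delta_s, f_c=2.68e9, d_a_in_lambda=1.5):\n    # Effective distance d = |d_a - v*delta| of (eq_d), 2x2 Jakes\n    # correlation matrix of (eq_phi), and the mismatch parameter\n    # sigma of (eq_H); only sigma^2 enters the channel model.\n    lam = 3e8 \/ f_c\n    d = abs(d_a_in_lambda * lam - (v_kmh \/ 3.6) * delta_s)\n    rho = j0(2.0 * np.pi * d \/ lam)\n    S = np.real(sqrtm(np.array([[1.0, rho], [rho, 1.0]])))  # Phi^(1\/2)\n    phi1, phi2 = S[0, 0], S[0, 1]\n    num = phi2**2 - phi1**2\n    return abs(num) \/ np.sqrt(phi2**2 + num**2)\n\ndef rate_opt(g_hat, sigma2, P):\n    # Rate adaptation of (eq_appRF); o1, o2, o3 follow Lemma 1, and\n    # ive(0, x) = exp(-x) I_0(x) keeps the slope numerically stable.\n    alpha = np.sqrt(2.0 * g_hat \/ sigma2)\n    o2 = (alpha + np.sqrt(alpha**2 + 2.0)) \/ 2.0\n    o1 = o2 * np.exp(-0.5 * (alpha - o2) ** 2) * ive(0, alpha * o2)\n    o3 = 1.0 - ncx2.sf(o2**2, 2, alpha**2)  # 1 - Q1(alpha, o2)\n    y = (1.0 + o1 * o2 - o3) * np.e * np.sqrt(2.0 * P * sigma2) \/ (2.0 * o1)\n    return np.maximum(2.0 * np.real(lambertw(y - 1.0)), 0.0)\n\ndef throughput(P, v_kmh=114.0, delta_s=5e-3, n_mc=100_000):\n    # Monte-Carlo estimate of eta = E{eta_|g_hat(R_opt)}; assumes sigma > 0.\n    sigma = sigma_mismatch(v_kmh, delta_s)\n    h_hat = (rng.standard_normal(n_mc) + 1j * rng.standard_normal(n_mc)) \/ np.sqrt(2)\n    q = (rng.standard_normal(n_mc) + 1j * rng.standard_normal(n_mc)) \/ np.sqrt(2)\n    h = np.sqrt(1.0 - sigma**2) * h_hat + sigma * q      # channel model (eq_H)\n    g_hat = (1.0 - sigma**2) * np.abs(h_hat) ** 2\n    R = rate_opt(g_hat, sigma**2, P)\n    return np.mean(R * (np.log1p(P * np.abs(h) ** 2) >= R))  # outage gives rate 0\n\nprint(throughput(P=10.0))  # SNR = 10 dB since the noise has unit variance\n\\end{verbatim}\nNote that $\\sigma$ vanishes at $\\delta = d_\\text{a}\/v$, which is consistent with the optimal speed\/processing delay observed in Figs. \\ref{fig_Figure7} and \\ref{fig_Figure8} and provable via (\\ref{eq_d}).\n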
\n\n\n\\begin{figure}\n\\centering\n \\includegraphics[width=1.0\\columnwidth]{Figure5.pdf}\\\\\n\\caption{Expected throughput $\\eta$ in different cases, $v$ = 114 km\/h, $\\kappa$ = 1, and $\\delta = $ 5 ms. Both the exact values estimated from simulations as well as the analytical approximations from Lemma \\ref{Lemma4} are presented.}\\label{fig_Figure5}\n\\end{figure}\n\n\\begin{figure}\n\\centering\n \\includegraphics[width=1.0\\columnwidth]{Figure6.pdf}\\\\\n\n\\caption{Expected throughput $\\eta$ for different estimation errors $\\kappa$ with SNR = 10, 19, 25 dB, in the case of partial CSIT, exact and approximation, $v = $ 114.5 km\/h, and $\\delta = 5$ ms. Both the exact values estimated from simulations as well as the analytical approximations from Lemma \\ref{Lemma4} are presented.}\\label{fig_Figure6}\n\\end{figure}\n\n\n\\begin{figure}\n\\centering\n \\includegraphics[width=1.0\\columnwidth]{Figure7.pdf}\\\\\n\\caption{Expected throughput $\\eta$ for different processing delays with SNR = 23 dB and $v$ = 120, 150 km\/h, in the case of partial CSIT. }\\label{fig_Figure7}\n\n\\end{figure}\n\n\\begin{figure}\n\\centering\n \\includegraphics[width=1.0\\columnwidth]{Figure8.pdf}\\\\\n\n\\caption{Outage probability for different velocities with SNR = 10 dB, in the case of partial CSIT.}\\label{fig_Figure8}\n\n\\end{figure}\n\n\n\n\nFrom the figures, we can conclude the following points:\n\\begin{itemize}\n \\item The approximation scheme of Lemma \\ref{Lemma4} is tight for a broad range of parameter settings (Figs. \\ref{fig_Figure5}, \\ref{fig_Figure6}). Thus, the throughput-optimized rate allocation can be well approximated by (\\ref{eq_appRF}), and the semi-linear approximation of Lemma \\ref{Lemma1}\/Corollaries \\ref{coro1}-\\ref{coro2} is a good approach to study the considered optimization problem.\n \n \\item With the deployment of the PA, a remarkable throughput gain is achieved, especially at moderate\/high SNRs (Fig. \\ref{fig_Figure5}). Also, the throughput decreases when the estimation error is considered, i.e., as $\\kappa$ decreases. Finally, as can be seen in Figs. \\ref{fig_Figure5}, \\ref{fig_Figure6}, with rate adaptation, and without optimizing the processing delay\/vehicle speed, the effect of the estimation error on the expected throughput is small except for small values of $\\kappa$, i.e., large estimation errors.\n\n \n \\item As can be seen in Figs. \\ref{fig_Figure7} and \\ref{fig_Figure8}, for different channel estimation errors, there are optimal values of the vehicle speed and the BS processing delay that optimize the system throughput and outage probability. Note that the existence of an optimal speed\/processing delay can be proved via (\\ref{eq_d}) as well. Finally, the optimal value of the vehicle speed, in terms of throughput\/outage probability, decreases with the processing delay. However, the optimal vehicle speed\/processing delay, in terms of throughput\/outage probability, is almost insensitive to the channel estimation error.\n\n \\item With perfect channel estimation, the throughput\/outage probability is sensitive to speed variations if we move away from the optimal speed (Figs. \\ref{fig_Figure7} and \\ref{fig_Figure8}). However, the sensitivity to speed\/processing delay variations decreases as the channel estimation error increases, i.e., $\\kappa$ decreases (Figs. \\ref{fig_Figure7} and \\ref{fig_Figure8}). Finally, considering Figs. 
\\ref{fig_Figure7} and \\ref{fig_Figure8}, it is expected that adapting the processing delay, as a function of the vehicle speed, and implementing hybrid automatic repeat request protocols can improve the performance of the PA system. These points will be studied in future work.\n\n\\end{itemize}\n\n\n\n\n\n\\section{Conclusion}\nWe derived a simple semi-linear approximation method for the first-order Marcum $Q$-function, as one of the functions of interest in different problem formulations of wireless networks. As we showed through various analyses, while the proposed approximation is not tight at the tails of the function, it is useful in different optimization- and expectation-based problem formulations. Particularly, as an application of interest, we used the proposed approximation to analyze the performance of PA setups using rate adaptation. As we showed, with different levels of channel estimation error\/processing delay, adaptive rate allocation can effectively compensate for the spatial mismatch problem, and improve the throughput\/outage probability of PA networks. It is expected that increasing the number of RA antennas will improve the performance of the PA system considerably. \n\n\n\n\n\n\n\n\n\n\n\n\\appendices\n\\section{Proof of Lemma \\ref{Lemma2}}\n \\label{proof_Lemma2}\nUsing Corollary \\ref{coro1}, we have\n\\begin{align}\nG(\\alpha,\\rho)\\simeq\n\\begin{cases}\n\\int_\\rho^{\\breve{c}_1}0\\,\\text{d}x + \\int_{\\breve{c}_1}^{\\breve{c}_2}e^{-nx}x^{m}\\times\\\\\n~~~\\left(\\frac{1}{\\sqrt{2\\pi}}(x-\\alpha) + \\frac{1}{2}\\left(1-\\frac{1}{\\sqrt{2\\pi\\alpha^2}}\\right)\\right)\\text{d}x +\\\\\n~~~\\int_{\\breve{c}_2}^{\\infty}e^{-nx}x^{m}\\text{d}x, ~~~~~~~~~~~~~~\\mathrm{if}~ \\rho < \\breve{c}_1 \\\\ \n \\int_{\\rho}^{\\breve{c}_2}e^{-nx}x^{m}\\times\\\\\n~~~\\left(\\frac{1}{\\sqrt{2\\pi}}(x-\\alpha) + \\frac{1}{2}\\left(1-\\frac{1}{\\sqrt{2\\pi\\alpha^2}}\\right)\\right)\\text{d}x +\\\\\n~~~\\int_{\\breve{c}_2}^{\\infty}e^{-nx}x^{m}\\text{d}x, ~~~~~~~~~~~~~~\\mathrm{if}~ \\breve{c}_1\\leq\\rho<\\breve{c}_2,\\\\\n\\int_\\rho^{\\infty} e^{-nx}x^{m}\\text{d}x, ~~~~~~~~~~~~~~~~~~\\mathrm{if}~ \\rho\\geq\\breve{c}_2.\n\\end{cases}\n\\end{align}\n\n\nThen, for $\\rho\\geq \\breve{c}_2$, we obtain\n\\begin{align}\n& \\int_\\rho^\\infty e^{-nx}\\times x^{m}(1-Q_1(\\alpha,x))\\text{d}x\\nonumber\\\\\n & \\overset{(d)}\\simeq\\int_\\rho^\\infty e^{-nx}\\times x^{m}\\text{d}x\n \\overset{(e)}= \\Gamma(m+1,n\\rho)n^{-m-1}, \n\\end{align}\nwhile for $\\rho<\\breve{c}_2$, we have\n\n\\begin{align}\n & \\int_\\rho^\\infty e^{-nx}\\times x^{m}(1-Q_1(\\alpha,x))\\text{d}x\\nonumber\\\\\n& \\overset{(f)}\\simeq \\int_{\\max(\\breve{c}_1,\\rho)}^{\\breve{c}_2} \\left(\\frac{1}{\\sqrt{2\\pi}}(x-\\alpha) + \n\\frac{1}{2}\\left(1-\\frac{1}{\\sqrt{2\\pi\\alpha^2}}\\right)\\right)\\times\\nonumber\\\\\n& ~~~~~~e^{-nx}x^{m}\\text{d}x +\\int_{\\breve{c}_2}^\\infty e^{-nx} x^{m}\\text{d}x\\nonumber\\\\\n& \\overset{(g)}= \\Gamma(m+1,n\\breve{c}_2)n^{-m-1} + \\nonumber\\\\\n&~~~~~~\\left(-\\frac{\\alpha}{\\sqrt{2\\pi}}+\\frac{1}{2}\\left(1-\\frac{1}{\\sqrt{2\\pi\\alpha^2}}\\right)\\right)\\times n^{-m-1}\\times\\nonumber\\\\\n&~~~~~~\\left(\\Gamma\\left(m+1,n\\max(\\breve{c}_1,\\rho)\\right)-\\Gamma\\left(m+1,n\\breve{c}_2\\right)\\right)+\\nonumber\\\\\n&~~~~~~\\left(\\Gamma(m+2,n\\max(\\breve{c}_1,\\rho))-\\Gamma(m+2,n\\breve{c}_2)\\right)\\frac{n^{-m-2}}{\\sqrt{2\\pi}}.\n\\end{align}\nNote that $(d)$ and $(f)$ come from Corollary \\ref{coro1}, while $(e)$ and $(g)$ use the fact that $\\Gamma(s,x)\\rightarrow 0$ as $x\\rightarrow\\infty$.
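\n\nAs a numerical sanity check, the following Python sketch compares the exact integral $G(\\alpha,\\rho)$ of (\\ref{eq_integral}) with the piecewise approximation above. Here, we use the identity that $Q_1(\\alpha,\\beta)$ equals the survival function of a non-central chi-square variable with two degrees of freedom and non-centrality $\\alpha^2$, evaluated at $\\beta^2$; also, $\\breve{c}_1$ and $\\breve{c}_2$ are taken as the points where the linear CDF surrogate of Corollary \\ref{coro1} crosses $0$ and $1$, which is our reading of the corollary (an assumption, since the corollary is not restated here).\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.stats import ncx2\nfrom scipy.integrate import quad\nfrom scipy.special import gammaincc, gamma\n\ndef marcum_q1(alpha, beta):\n    # Q_1(alpha, beta) = ncx2.sf(beta^2, df=2, nc=alpha^2)\n    return ncx2.sf(beta**2, 2, alpha**2)\n\ndef G_exact(alpha, rho, n, m):\n    f = lambda x: np.exp(-n * x) * x**m * (1.0 - marcum_q1(alpha, x))\n    return quad(f, rho, np.inf)[0]\n\ndef G_approx(alpha, rho, n, m):\n    # Linear CDF surrogate: (x - alpha)\/sqrt(2 pi) + (1 - 1\/sqrt(2 pi alpha^2))\/2\n    s = 0.5 * (1.0 - 1.0 \/ np.sqrt(2 * np.pi * alpha**2))\n    c1 = alpha - np.sqrt(2 * np.pi) * s        # where the line crosses 0\n    c2 = alpha + np.sqrt(2 * np.pi) * (1 - s)  # where the line crosses 1\n    lin = lambda x: (x - alpha) \/ np.sqrt(2 * np.pi) + s\n    mid = quad(lambda x: np.exp(-n * x) * x**m * lin(x),\n               max(c1, rho), max(c2, rho))[0]\n    # Tail term: int_c2^inf e^(-nx) x^m dx = Gamma(m+1, n*c2) \/ n^(m+1)\n    tail = gamma(m + 1) * gammaincc(m + 1, n * max(c2, rho)) \/ n**(m + 1)\n    return mid + tail\n\nprint(G_exact(2.0, 0.5, 1.0, 1.0), G_approx(2.0, 0.5, 1.0, 1.0))\n\\end{verbatim}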
\n\n\\section{Proof of Lemma \\ref{Lemma3}}\n\\label{proof_Lemma3}\nUsing Lemma \\ref{Lemma1}, the integral (\\ref{eq_integralT}) can be approximated as follows:\n\n1) for $\\theta_2>\\theta_1>c_2$, $T(\\alpha,m,a,\\theta_1,\\theta_2) \\simeq 0$,\n\n2) for $c_1<\\theta_1\\leq c_2, \\theta_2>\\theta_1$,\n\\begin{align}\n & T(\\alpha,m,a,\\theta_1,\\theta_2) \\simeq \\nonumber\\\\& ~~\\int_{\\theta_1}^{\\min(c_2,\\theta_2)} (n_2x+n_1)e^{-mx}\\log(1+ax) \\text{d}x,\n\\end{align}\n\n3) for $\\theta_1\\leq c_1$ and $\\theta_2>c_1$,\n\\begin{align}\n & T(\\alpha,m,a,\\theta_1,\\theta_2) \\simeq \\int_{\\theta_1}^{c_1} e^{-mx}\\log(1+ax)\\text{d}x + \\nonumber\\\\\n &~~ \\int_{c_1}^{\\min(c_2,\\theta_2)} (n_2x+n_1)e^{-mx}\\log(1+ax) \\text{d}x,\n\\end{align}\n\n4) for $\\theta_1<\\theta_2\\leq c_1$,\n\\begin{align}\n & T(\\alpha,m,a,\\theta_1,\\theta_2) \\simeq \\int_{\\theta_1}^{\\theta_2} e^{-mx}\\log(1+ax)\\text{d}x,\n\\end{align}\nwhere the integrals can be expressed in closed form via the functions $\\mathcal{F}_1(x)$ and $\\mathcal{F}_2(x)$ given below; here, we have considered $\\theta_2 > c_1$ for simplicity, and the other cases can be proved with the same procedure. The functions $\\mathcal{F}_1(x)$ and $\\mathcal{F}_2(x)$ are obtained by\n\\begin{align}\n \\mathcal{F}_1(x) &= \\int e^{-mx}\\log(1+ax)\\text{d}x\\nonumber\\\\\n &\\overset{(h)}= -\\frac{e^{-mx}\\log(ax+1)}{m}-\\int -\\frac{ae^{-mx}}{m(ax+1)} \\text{d}x\\nonumber\\\\\n & \\overset{(i)}= \\frac{1}{m}\\left(-e^{\\frac{m}{a}}\\operatorname{E_1}\\left(mx+\\frac{m}{a}\\right)-e^{-mx}\\log(ax+1)\\right)+C,\n\\end{align}\nand\n\\begin{align}\n \\mathcal{F}_2(x)& = \\int (n_2x+n_1)e^{-mx}\\log(1+ax) \\text{d}x\\nonumber\\\\\n &\\overset{(j)} = -\\frac{(mn_2x+n_2+mn_1)e^{-mx}\\log(1+ax)}{m^2} - \\nonumber\\\\\n &~~~\\int \\frac{a(-mn_2x-n_2-mn_1)e^{-mx}}{m^2(ax+1)} \\text{d}x\\nonumber\\\\\n & \\overset{(k)}= -\\frac{(mn_2x+n_2+mn_1)e^{-mx}\\log(1+ax)}{m^2} - \\nonumber\\\\\n &~~~ \\frac{1}{a}\\bigg(e^{\\frac{m}{a}}(mn_2x+n_2+mn_1)\\operatorname{E_1}\\left(mx+\\frac{m}{a}\\right)\\bigg)-\\nonumber\\\\\n &~~~ \\frac{me^{\\frac{m}{a}}n_2}{a}\\int \\operatorname{E_1}\\left(mx+\\frac{m}{a}\\right) \\text{d}x\\nonumber\\\\\n & \\overset{(l)}= -\\frac{(mn_2x+n_2+mn_1)e^{-mx}\\log(1+ax)}{m^2} - \\nonumber\\\\\n &~~~ \\frac{1}{a}\\bigg(e^{\\frac{m}{a}}(mn_2x+n_2+mn_1) \\operatorname{E_1}\\left(mx+\\frac{m}{a}\\right)\\bigg)+\\nonumber\\\\\n &~~~ \\frac{n_2e^{-mx}}{a} - \\frac{e^{\\frac{m}{a}}n_2(mx+\\frac{m}{a}) \\operatorname{E_1}\\left(mx+\\frac{m}{a}\\right)}{a}+C,\n\\end{align}\nwhere $(h)$, $(j)$ and $(k)$ come from partial integration and some manipulations. Also, $(i)$ and $(l)$ use \\cite[p. 195]{geller1969table}\n\\begin{align}\n \\int \\operatorname{E_1}(u) \\text{d}u = u\\operatorname{E_1}(u)-e^{-u}.\n\\end{align}
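\n\nAs a quick numerical check of step $(i)$, the antiderivative $\\mathcal{F}_1$ can be compared against direct quadrature; a minimal sketch, with $\\operatorname{E_1}$ evaluated via SciPy's exp1:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.special import exp1\nfrom scipy.integrate import quad\n\ndef F1(x, m, a):\n    # Antiderivative of e^(-m x) log(1 + a x), cf. step (i)\n    u = m * x + m \/ a\n    return (-np.exp(m \/ a) * exp1(u)\n            - np.exp(-m * x) * np.log(a * x + 1)) \/ m\n\nm, a, t1, t2 = 0.8, 2.0, 0.3, 1.7\nnumeric = quad(lambda x: np.exp(-m * x) * np.log(1 + a * x), t1, t2)[0]\nprint(numeric, F1(t2, m, a) - F1(t1, m, a))  # should agree closely\n\\end{verbatim}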
\n\n\n\n\n\n\n\n\\chapter{PA Systems and Analytical Channel Model}\\label{chapter:two}\nThis chapter first introduces the PA concept in a \\gls{tdd} \\footnote{The \\gls{pa} concept can be applied in both \\gls{tdd} and \\gls{fdd} systems. In \\gls{fdd}, the \\gls{pa} estimates the \\gls{dl} channel based on \\gls{dl} pilots from the \\gls{bs}, and reports back using an \\gls{ul} feedback channel. The \\gls{bs} uses this information (as input) to obtain the \\gls{dl} channel estimate to be used for the \\gls{dl} towards the \\gls{ra} when the \\gls{ra} reaches the same spatial point as the \\gls{pa} at the time of \\gls{dl} estimation. On the other hand, in \\gls{tdd}, the \\gls{pa} instead sends the pilots and the \\gls{bs} estimates the \\gls{ul} channel, and uses that in combination with channel reciprocity information (as input) to obtain the \\gls{dl} channel estimate to be used for the \\gls{dl} towards the \\gls{ra}.} \\gls{dl} \\footnote{This thesis mainly focuses on the \\gls{dl}, but the \\gls{pa} concept can be adapted for use also in the \\gls{ul}.} system, where one \\gls{pa} and one \\gls{ra} are deployed at the receiver side. Also, the associated challenges and difficulties posed by practical constraints are discussed. Then, the proposed analytical channel model based on Jakes' assumption is presented, where the \\gls{cdf} of the channel gain is described by the first-order Marcum $Q$-function. Finally, to simplify the analytical derivations, we develop a semi-linear approximation of the first-order Marcum $Q$-function which can simplify, e.g., integral calculations as well as optimization problems.\n\n\\section{The PA Concept}\nIn \\gls{5g}, a significant number of users access wireless networks from vehicles, e.g., in public transportation like trams and trains or in private cars, via their smart phones and laptops \\cite{shim2016traffic,lannoo2007radio,wang2012distributed,Dat2015,Laiyemo2017,yutao2013moving,Marzuki2017,Andreev2019,Haider2016,Patra2017}. In \\cite{shim2016traffic}, the emergence of vehicular heavy user traffic is observed through field experiments conducted in 2012 and 2015 in Seoul, and the experimental results reveal that such traffic is becoming dominant, as shown by the 8.62-fold increase in vehicular heavy user traffic from 2012 to 2015, while total traffic increased only 3.04-fold. Also, \\cite{lannoo2007radio,wang2012distributed,Dat2015,Laiyemo2017} develop traffic schemes and networks for users in high-speed trains. Deploying an \\gls{mrn} in vehicles is one promising solution to provide a high-rate reliable connection between a \\gls{bs} and the users inside the vehicle \\cite{yutao2013moving,Marzuki2017,Andreev2019}. From another perspective, \\cite{Haider2016} and \\cite{Patra2017} adopt femtocell technology inside a vehicle to provide better spectral and energy efficiency compared to the direct transmission scheme.\n\nIn such a so-called hot spot scenario, \\gls{tdd} systems with channel reciprocity are often deployed, intuitively because there is more data in the \\gls{dl} than in the \\gls{ul}. Here, the \\gls{dl} channel is estimated based on the \\gls{ul} pilots. With movement, however, a problem occurs: the channel in the \\gls{dl} would not be the same as the one in the \\gls{ul}. This could be compensated for by extrapolating the \\gls{csi} from the \\gls{ul}, for example by using Kalman predictions \\cite{Aronsson2011}. However, it is difficult to predict small-scale fading by extrapolating past estimates, and the prediction horizon is limited to 0.1-0.3$\\lambda$, with $\\lambda$ being the carrier wavelength \\cite{Ekman2002}. Such a horizon is satisfactory for pedestrian users, while for high-mobility users, such as vehicles, a prediction horizon beyond 0.3$\\lambda$ is usually required \\cite{Apelfrojd2018PIMRCkalman}. One possible way to increase the prediction horizon is to have a database of pre-recorded coordinate-specific \\gls{csi} at the \\glspl{bs} \\cite{zirwas2013channel}. 
Here, the basic idea is that the users provide the \\glspl{bs} with their location information by, e.g., \\gls{gps}, and the \\gls{bs} could use the pre-recorded information to predict the channel environment. However, such a method requires a large amount of data which may need to be updated frequently, and \\gls{gps} position data would also not be accurate enough for small-scale fading prediction, since the accuracy is much worse than a wavelength for typical mobile communication systems.\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.7\\columnwidth]{YourThesis\/figs\/system_mismatch.pdf}\n \\caption{The PA concept and the spatial mismatch problem.}\n \\label{fig:PA_mismatch}\n\\end{figure}\n\nTo overcome this issue, \\cite{Sternad2012WCNCWusing} proposes the concept of the \\gls{pa}, wherein at least two antennas are deployed on top of the vehicle. As can be seen from Fig. \\ref{fig:PA_mismatch}, the first antenna, which is the \\gls{pa}, estimates the channel $\\hat{H}$ in the \\gls{ul} to the \\gls{bs}. Then, the \\gls{bs} uses the information received about $\\hat H$ to estimate the channel $H$, and communicates with a second antenna, which we refer to as the \\gls{ra}, when it arrives at the same position as the \\gls{pa}. Then, a problem appears: how should we model such a channel? The intuitive idea is that the correlation between $H$ and $\\hat{H}$ should be affected by the moving speed $v$, the time for \\gls{ul} and \\gls{dl}, as well as the antenna separation $d_\\text{a}$ between the \\gls{pa} and the \\gls{ra}.\n\nOne way to evaluate such a model is to measure $H$ and $\\hat{H}$ under different system configurations, and calculate the \\gls{nmse} between $H$ and $\\hat{H}$. Following \\cite{Sternad2012WCNCWusing}, experimental results in \\cite{BJ2017ICCWusing} and \\cite{BJ2017PIMRCpredictor} show that an \\gls{nmse} of around -10 dB can be obtained for speeds up to 50 km\/h, for all measured prediction horizons up to 3$\\lambda$, which is ten times longer than the limit for Kalman filter-based channel extrapolation. In \\cite{BJ2017ICCWusing,BJ2017PIMRCpredictor,phan2018WSAadaptive}, \\gls{fdd} systems are considered, where sufficiently dense \\gls{dl} channel estimation pilots with \\gls{ofdm} are used. On the other hand, for \\gls{tdd} systems, the \\gls{ul} and \\gls{dl} frames need to be adjusted so that the estimation of $H$ can be as close as possible to $\\hat{H}$, as proposed and evaluated in \\cite{DT2015ITSMmaking}. However, such a method would need to adapt the \\gls{ul} and \\gls{dl} ratios for each user, which is complicated from a system design point of view. To mitigate this issue, \\cite{DT2015ITSMmaking} also proposes an interpolation scheme at the \\gls{bs} which is suitable for different \\gls{ul} and \\gls{dl} ratios. Also, a Kalman smoother for the interpolation of the \\gls{pa} for the \\gls{tdd} case with a two-filter approach is proposed in \\cite{Apelfrojd2018PIMRCkalman}, where the \\gls{csi} quality of the \\gls{dl} can be improved such that the duration of the \\gls{dl} can be extended remarkably. Moreover, it is shown that the correlation between $H$ and $\\hat{H}$ would be reduced if the \\gls{pa} and the \\gls{ra} are too close to each other, e.g., 0.2-0.4$\\lambda$. 
Different ways to compensate for such a coupling effect, such as open-circuit decoupling, are proposed in \\cite{Jamaly2014EuCAPanalysis, Jamaly2019IETeffects}.\n\n\n\n\n\n\n\\section{Challenges and Difficulties}\nPrevious studies have shown that deploying the \\gls{pa} system can provide significant performance gains in terms of, e.g., \\gls{nmse} \\cite{BJ2017ICCWusing,BJ2017PIMRCpredictor,phan2018WSAadaptive}. However, realistic gains can be limited by many practical constraints. In this section, we discuss a number of such challenges that have been partly addressed in this work.\n\n\\subsection*{Lack of Analytical Model}\n\n\nIn the literature, most \\gls{pa} works rely on experimental measurements and simulations. This is sufficient for validation purposes. However, to gain a deeper understanding of the \\gls{pa} system, it is useful to develop analytical models. There are different statistical wireless channel models, such as Rayleigh, Rice, Nakagami, and log-normal fading, as well as their combinations on multi-path and shadow fading components \\cite{vatalaro1995generalized,tjhung1999fade}. Here, obtaining an exact analytical model for the \\gls{pa} system may be difficult, but understanding the correlation between $H$ and $\\hat{H}$ would be a good starting point.\n\n\n\\subsection*{Spatial Mismatch}\nAs addressed in, e.g., \\cite{DT2015ITSMmaking,Jamaly2019IETeffects}, even assuming that the channel does not change over time, if the \\gls{ra} does not arrive at the same point as the \\gls{pa}, the actual channel for the \\gls{ra} would not be identical to the one experienced by the \\gls{pa} before. As shown in Fig. \\ref{fig:PA_mismatch} for the \\gls{tdd} setup, consider one vehicle with two antennas deployed on the roof, where the \\gls{pa} is positioned at the front in the moving direction and the \\gls{ra} is aligned behind the \\gls{pa}. The idea of the data transmission model with \\gls{tdd} is that the \\gls{pa} first sends pilots at time $t$; then, the \\gls{bs} estimates the channel and sends the data at time $t+\\delta$ to the \\gls{ra}. Here, $\\delta$ depends on the processing time at the \\gls{bs}. Then, we define $d$ as the effective distance between the position of the PA at time $t$ and the position of the RA at time $t+\\delta$, as can be seen in Fig. \\ref{fig:PA_mismatch}. That is, $d$ is given by\n\\begin{align}\\label{eq_d}\n d = |d_\\text{a} - d_\\text{m} | = |d_\\text{a} - v\\delta|,\n\\end{align}\nwhere $d_\\text{m}$ is the moving distance between $t$ and $t+\\delta $ while $v$ is the velocity of the vehicle. Thus, different values of $v$, $ \\delta$, $f_\\text{c}$ and $d_\\text{a}$ in (\\ref{eq_d}) correspond to different values of $d$. We would like to find out how to connect $H$ and $\\hat{H}$ as a function of $d$, and how different values of $d$ would affect the system performance.\n\n\n\\subsection*{Spectral Efficiency Improvement}\nIn a typical \\gls{pa} setup, the spectrum is underutilized, and the spectral efficiency could be further improved if the \\gls{pa} could be used not only for channel prediction but also for data transmission. However, proper data transmission schemes need to be designed to make the best use of the \\gls{pa}.\n\n\n\\subsection*{Temporal Correlation}\nThe overhead from the \\gls{ul}-\\gls{dl} structure of the \\gls{pa} system would affect the accuracy of the \\gls{csi} acquisition, i.e., $\\hat{H}$ obtained from the \\gls{pa} would change over time. 
Basically, the slowly-fading channel is not always a realistic model for fast-moving users, since the channel may change according to environmental effects during a transmission block \\cite{tsao2007prediction,Makki2013TCfeedback,Makki2014ISWCSreinforcement}. There are different ways to model the temporally-correlated channel, such as using the first-order Gauss-Markov process \\cite{Makki2013TCfeedback,Makki2014ISWCSreinforcement}.\n\n\n\n\n\\subsection*{Estimation Error}\nThere could be channel estimation errors from the \\gls{ul} \\cite{jose2011pilot}, which would degrade the system performance. The assumption of perfect channel reciprocity in \\gls{tdd} ignores two important facts \\cite{Mi2017massive}: 1) the \\gls{rf} chains of the \\gls{ul} and the \\gls{dl} are separate circuits with random impacts on the transmitted and received signals \\cite{Mi2017massive,lu2014an}, which is the so-called \\gls{rf} mismatch; 2) the interference profiles at the transmitter and the receiver sides are different \\cite{tolli2017compensation}. These deviations are defined as reciprocity errors that invalidate the assumption of perfect reciprocity, and should be considered in the system design.\n\n\\subsection*{Effects of Other Parameters}\nAs illustrated in Fig. \\ref{fig:PA_mismatch}, different system parameters such as the speed $v$, the antenna separation $d_\\text{a}$ and the control loop time $\\delta$ would affect the system behaviour through, e.g., spatial mismatch or antenna coupling. Our goal is to study the effect of these parameters and develop robust schemes which perform well for a broad range of their values.\n\n\n\n\n\n\n\\section{Analytical Channel Model}\nConsidering \\gls{dl} transmission in the \\gls{bs}-\\gls{ra} link, which is our main interest, the received signal is given by \\footnote{In this work, we mainly focus on the cases with single \\gls{pa} and \\gls{ra} antennas. Future work will address the problem in the cases with antenna arrays.}\n\\begin{align}\\label{eq_Y}\n{{Y}} = \\sqrt{P}HX + Z.\n\\end{align}\nHere, $P$ represents the average received power at the \\gls{ra}, while $X$ is the input message with unit variance, and $H$ is the fading coefficient between the \\gls{bs} and the \\gls{ra}. Also, $Z \\sim \\mathcal{CN}(0,1)$ denotes the \\gls{iid} complex Gaussian noise added at the receiver.\n\nWe denote the channel coefficient of the \\gls{pa}-\\gls{bs} \\gls{ul} by $\\hat{H}$, and we assume that $\\hat{H}$ is perfectly known by the \\gls{bs}. The results can be extended to the cases with imperfect \\gls{csi} at the \\gls{bs} (see our work \\cite{guo2020semilinear}). In this way, we use the spatial correlation model \\cite[p. 2642]{Shin2003TITcapacity}\n\\begin{align}\\label{eq_tildeH}\n \\Tilde{\\bm{H}} = \\bm{\\Phi}^{1\/2} \\bm{H}_{\\varepsilon},\n\\end{align}\nwhere $\\Tilde{\\bm{H}}$ = $\\bigl[ \\begin{smallmatrix}\n \\hat{H}\\\\H\n\\end{smallmatrix} \\bigr]$ is the channel matrix including both the \\gls{bs}-\\gls{pa} channel $\\hat{H}$ and the \\gls{bs}-\\gls{ra} channel $H$. $\\bm{H}_{\\varepsilon}$ has independent circularly-symmetric zero-mean complex Gaussian entries with unit variance, and $\\bm{\\Phi}$ is the channel correlation matrix.\n\nIn general, the spatial correlation of the fading channel depends on the distance between the \\gls{ra} and the \\gls{pa}, which we denote by $d_\\text{a}$, as well as the angular spectrum of the radio wave pattern. 
If we use the classical Jakes' correlation model by assuming a uniform angular spectrum, the $(i,j)$-th entry of $\\bm{\\Phi}$ is given by \\cite[Eq. 1]{Chizhik2000CLeffect}\n\\begin{align}\\label{eq_phi}\n \\Phi_{i,j} = J_0\\left((i-j)\\cdot2\\pi d\/ \\lambda\\right).\n\\end{align}\nHere, $J_0(\\cdot)$ is the zeroth-order Bessel function of the first kind. Also, $\\lambda = c\/f_\\text{c}$ represents the wavelength, where $c$ is the speed of light and $f_\\text{c}$ is the carrier frequency. \n\nAs discussed before, different values of $v$, $ \\delta$, $f_\\text{c}$ and $d_\\text{a}$ in (\\ref{eq_d}) correspond to different values of $d$, which leads to different levels of channel spatial correlation~(\\ref{eq_tildeH})-(\\ref{eq_phi}).\n\nCombining (\\ref{eq_tildeH}) and (\\ref{eq_phi}) with normalization, we have\n\\begin{align}\\label{eq_H}\n H = \\sqrt{1-\\sigma^2} \\hat{H} + \\sigma q,\n\\end{align}\nwhere $q \\sim \\mathcal{CN}(0,1)$ is independent of the known channel value $\\hat{H}\\sim \\mathcal{CN}(0,1)$, and $\\sigma$ is a function of the mismatch distance $d$.\n\nFrom (\\ref{eq_H}), for a given $\\hat{H}$ and $\\sigma \\neq 0$, $|H|$ follows a Rician distribution, i.e., the \\gls{pdf} of $|H|$ is given by \n\\begin{align}\n f_{|H|\\big|\\hat{H}}(x) = \\frac{2x}{\\sigma^2}e^{-\\frac{x^2+(1-\\sigma^2)\\hat{g}}{\\sigma^2}}I_0\\left(\\frac{2x\\sqrt{(1-\\sigma^2)\\hat{g}}}{\\sigma^2}\\right), \n\\end{align}\nwith $\\hat{g} = |{\\hat{H}}|^2$, and $I_n(x) = (\\frac{x}{2})^n \\sum_{i=0}^{\\infty}\\frac{(\\frac{x}{2})^{2i} }{i!\\Gamma(n+i+1)}$ being the $n$-th order modified Bessel function of the first kind, where $\\Gamma(z) = \\int_0^{\\infty} x^{z-1}e^{-x} \\mathrm{d}x$ denotes the Gamma function. Then, we define the channel gain between the BS and the RA as $ g = |{H}|^2$. By changing variables from $H$ to $g$, the \\gls{pdf} of $g$ is given by\n\n\\begin{align}\\label{eq_pdf}\n f_{g|\\hat{H}}(x) = \\frac{1}{\\sigma^2}e^{-\\frac{x+(1-\\sigma^2)\\hat{g}}{\\sigma^2}}I_0\\left(\\frac{2\\sqrt{x(1-\\sigma^2)\\hat{g}}}{\\sigma^2}\\right),\n\\end{align}\nwhich is non-central Chi-squared distributed, and the \\gls{cdf} is\n\\begin{align}\\label{eq_cdf}\n F_{g|\\hat{H}}(x) = 1 - Q_1\\left( \\sqrt{\\frac{2(1-\\sigma^2)\\hat{g}}{\\sigma^2}}, \\sqrt{\\frac{2x}{\\sigma^2}} \\right).\n\\end{align}\nHere, $Q_1(\\alpha,\\beta)$ is the first-order Marcum $Q$-function, defined as \\cite[Eq. 1]{Bocus2013CLapproximation}\n\\begin{align}\\label{eq_q1}\n Q_1(\\alpha,\\beta) = \\int_{\\beta}^{\\infty} xe^{-\\frac{x^2+\\alpha^2}{2}}I_0(x\\alpha)\\text{d}x,\n\\end{align}\nwhere $\\alpha, \\beta \\geq 0$.
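\n\nTo make the model concrete, the following Python sketch evaluates the conditional gain CDF (\\ref{eq_cdf}) via the equivalent non-central chi-square distribution and verifies it by Monte Carlo sampling of (\\ref{eq_H}). Note that the relation $\\sqrt{1-\\sigma^2} = J_0(2\\pi d\/\\lambda)$ used below is our reading of the normalization combining (\\ref{eq_tildeH}) and (\\ref{eq_phi}); the parameter values follow the simulation setup ($f_\\text{c} = 2.68$ GHz, $d_\\text{a} = 1.5\\lambda$, $v = 114$ km\/h, $\\delta = 5$ ms).\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.stats import ncx2\nfrom scipy.special import j0\n\nfc, c = 2.68e9, 3e8\nlam = c \/ fc\nv, delta = 114 \/ 3.6, 5e-3          # speed [m\/s], processing delay [s]\nd_a = 1.5 * lam                     # antenna separation\nd = abs(d_a - v * delta)            # effective mismatch distance, (eq_d)\n\nphi = j0(2 * np.pi * d \/ lam)       # Jakes correlation, (eq_phi)\nsigma2 = 1 - phi**2                 # assumption: sqrt(1-sigma2) = J0(2 pi d\/lam)\n\ng_hat = 1.0                         # example PA channel gain\nx = np.linspace(0.2, 3.0, 4)\n# (eq_cdf): F(x) = ncx2.cdf(2x\/sigma2, df=2, nc=2(1-sigma2)g_hat\/sigma2)\ncdf = ncx2.cdf(2 * x \/ sigma2, 2, 2 * (1 - sigma2) * g_hat \/ sigma2)\n\n# Monte Carlo check of (eq_H): H = sqrt(1-sigma2)*H_hat + sigma*q\nq = (np.random.randn(200000) + 1j * np.random.randn(200000)) \/ np.sqrt(2)\ng = np.abs(np.sqrt(1 - sigma2) * np.sqrt(g_hat) + np.sqrt(sigma2) * q) ** 2\nprint(cdf)\nprint([(g <= xi).mean() for xi in x])\n\\end{verbatim}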
\n\n\nWe study the system performance in various temporally-correlated conditions, i.e., when $H$ is not the same as $\\hat{H}$ even at the same position. Particularly, using the same model as in \\cite[Eq. 2]{Makki2013TCfeedback}, we further develop our channel model (\\ref{eq_H}) as\n\\begin{align}\\label{eq_Htp}\n H_{k+1} = \\beta H_{k} + \\sqrt{1-\\beta^2} z, \n\\end{align}\nfor each time slot $k$, where $z \\sim \\mathcal{CN}(0,1)$ is a Gaussian noise which is uncorrelated with $H_{k}$. Also, $\\beta$ is a known correlation factor which characterizes the dependency between two successive channel realizations via $\\beta = \\frac{\\mathbb{E}\\{H_{k+1}H_{k}^*\\}}{\\mathbb{E}\\{|H_k|^2\\}}$. Substituting (\\ref{eq_H}) into (\\ref{eq_Htp}), we have\n\\begin{align}\\label{eq_Ht}\n H_{k+1} = \\beta\\sqrt{1-\\sigma^2}\\hat{H}_{k}+\\beta\\sigma q+\\sqrt{1-\\beta^2}z = \\beta\\sqrt{1-\\sigma^2}\\hat{H}_{k} + w.\n\\end{align}\nHere, to simplify the calculations, $\\beta\\sigma q + \\sqrt{1-\\beta^2}z$ is replaced by the equivalent Gaussian variable $w \\sim\\mathcal{CN}\\left(0,(\\beta\\sigma)^2+1-\\beta^2\\right)$. Moreover, we can follow the same approach as in \\cite{Wang2007TWCperformance} to add the effect of estimation errors of $\\hat{H}$ as an independent additive Gaussian variable whose variance is given by the accuracy of \\gls{csi} estimation.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{The First-Order Marcum Q-Function and Semi-Linear Approximation}\nThe first-order \\footnote{To simplify the analysis, our work concentrates on the approximation of the first-order Marcum $Q$-function. However, our approximation technique can be easily extended to the cases with different orders of the Marcum $Q$-function.} Marcum $Q$-function (\\ref{eq_q1}) appears in various problem formulations. However, it is not an easy-to-handle function, as it involves the modified Bessel function, two parameters ($\\alpha$ and $\\beta$), and an integral form.\n\n\nIn the literature, the Marcum $Q$-function has appeared in many areas, such as statistics\/signal detection \\cite{helstrom1994elements}, and in performance analysis of different setups, such as temporally correlated channels \\cite{Makki2013TCfeedback}, spatially correlated channels \\cite{Makki2011Eurasipcapacity}, free-space optical (FSO) links \\cite{Makki2018WCLwireless}, relay networks \\cite{Makki2016TVTperformance}, as well as cognitive radio and radar systems \\cite{Simon2003TWCsome,Suraweera2010TVTcapacity,Kang2003JSAClargest,Chen2004TCdistribution,Ma2000JSACunified,Zhang2002TCgeneral,Ghasemi2008ICMspectrum,Digham2007TCenergy, simon2002bookdigital,Cao2016CLsolutions,sofotasios2015solutions,Cui2012ELtwo,Azari2018TCultra,Alam2014INFOCOMWrobust,Gao2018IAadmm,Shen2018TVToutage,Song2017JLTimpact,Tang2019IAan,ermolova2014laplace,peppas2013performance,jimenez2014connection}. However, in these applications, the presence of the Marcum $Q$-function makes the mathematical analysis challenging, because it has no closed-form expression and is difficult to manipulate, especially when it appears in parameter optimizations and integral calculations. For this reason, several methods have been developed in \\cite{Bocus2013CLapproximation,Fu2011GLOBECOMexponential,zhao2008ELtight,Simon2000TCexponential,annamalai2001WCMCcauchy,Sofotasios2010ISWCSnovel,Li2010TCnew,andras2011Mathematicageneralized,Gaur2003TVTsome,Kam2008TCcomputing,Corazza2002TITnew,Baricz2009TITnew,chiani1999ELintegral} to bound\/approximate the Marcum $Q$-function. For example, \\cite{Fu2011GLOBECOMexponential,zhao2008ELtight} have proposed modified forms of the function, while \\cite{Simon2000TCexponential,annamalai2001WCMCcauchy} have derived exponential-type bounds which are good for the bit error rate analysis at high signal-to-noise ratios (SNRs). Other types of bounds are expressed by, e.g., the error function \\cite{Kam2008TCcomputing} and Bessel functions \\cite{Corazza2002TITnew,Baricz2009TITnew,chiani1999ELintegral}. Some alternative methods have also been proposed in \\cite{Sofotasios2010ISWCSnovel,Li2010TCnew,andras2011Mathematicageneralized,Bocus2013CLapproximation,Gaur2003TVTsome}. 
Although each of these approximation\/bounding techniques is fairly tight for its considered problem formulation, they are still based on hard-to-handle functions, or have complicated summation\/integration structures, which may not be easy to deal with in, e.g., integral calculations and parameter optimizations. \n\nWe present our semi-linear approximation of the \\gls{cdf} in the form of $y(\\alpha, \\beta ) = 1-Q_1(\\alpha,\\beta)$. The idea of this proposed approximation is to use one point of the CDF and its corresponding slope at that point to create a line approximating the CDF. The approximation method is summarized in Lemma \\ref{Lemma1} as follows.\n\\begin{lem}\\label{Lemma1}\n The CDF of the form $y(\\alpha, \\beta ) = 1-Q_1(\\alpha,\\beta)$ can be semi-linearly approximated by $y(\\alpha,\\beta)\\simeq \\mathcal{Z}(\\alpha, \\beta)$, where\n\\begin{align}\\label{eq_lema1}\n\\mathcal{Z}(\\alpha, \\beta)=\n\\begin{cases}\n0, ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\mathrm{if}~ \\beta < c_1 \\\\ \n \\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2} e^{-\\frac{1}{2}\\left(\\alpha^2+\\left(\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}\\right)^2\\right)}\\times\\\\\n ~~~I_0\\left(\\alpha\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}\\right)\\times\\left(\\beta-\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}\\right)+\\\\\n ~~~1-Q_1\\left(\\alpha,\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}\\right), ~~~~~~\\mathrm{if}~ c_1 \\leq\\beta\\leq c_2 \\\\\n1, ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\mathrm{if}~ \\beta> c_2,\n\\end{cases}\n\\end{align}\nwith\n\\begin{align}\\label{eq_c1}\n c_1(\\alpha) = \\max\\Bigg(0,\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}+\n \\frac{Q_1\\left(\\alpha,\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}\\right)-1}{\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2} e^{-\\frac{1}{2}\\left(\\alpha^2+\\left(\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}\\right)^2\\right)}I_0\\left(\\alpha\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}\\right)}\\Bigg),\n\\end{align}\n\\begin{align}\\label{eq_c2}\n c_2(\\alpha) = \\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}+\n \\frac{Q_1\\left(\\alpha,\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}\\right)}{\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2} e^{-\\frac{1}{2}\\left(\\alpha^2+\\left(\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}\\right)^2\\right)}I_0\\left(\\alpha\\frac{\\alpha+\\sqrt{\\alpha^2+2}}{2}\\right)}.\n\\end{align}\n\\end{lem}\n\\begin{proof}\nSee \\cite[Sec. II]{guo2020semilinear}.\n\\end{proof}\n\n\nMoreover, we can make some second-level approximations considering different ranges of $\\alpha$ to further simplify the notation. For more details, refer to \\cite{guo2020semilinear}.\n\n\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.7\\columnwidth]{YourThesis\/figs\/CDF.pdf}\n \\caption{Illustration of the proposed semi-linear approximation, with $\\alpha = 2$.}\n \\label{fig:cdf}\n\\end{figure}\n\nOne example result of the proposed approximation can be seen in Fig. \\ref{fig:cdf}, with $\\alpha$ set to 2. We can observe that Lemma \\ref{Lemma1} is tight for moderate values of $\\beta$. 
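\n\nSince Lemma \\ref{Lemma1} is fully explicit, it is straightforward to implement; the following minimal Python sketch reproduces the comparison of Fig. \\ref{fig:cdf}, using the fact that the piecewise definition (\\ref{eq_lema1})-(\\ref{eq_c2}) amounts to clipping the tangent-like line to $[0,1]$:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.stats import ncx2\nfrom scipy.special import i0e\n\ndef Q1(alpha, beta):\n    # Q_1(alpha, beta) = ncx2.sf(beta^2, df=2, nc=alpha^2)\n    return ncx2.sf(beta**2, 2, alpha**2)\n\ndef Z(alpha, beta):\n    # Semi-linear approximation of 1 - Q1(alpha, beta), Lemma 1\n    b0 = (alpha + np.sqrt(alpha**2 + 2)) \/ 2\n    # slope b0*exp(-(alpha^2 + b0^2)\/2)*I0(alpha*b0); i0e avoids overflow\n    k = b0 * np.exp(-(alpha - b0)**2 \/ 2) * i0e(alpha * b0)\n    line = k * (beta - b0) + 1.0 - Q1(alpha, b0)\n    return np.clip(line, 0.0, 1.0)  # the 0\/1 crossings are c_1 and c_2\n\nalpha = 2.0\nfor beta in [0.5, 1.5, 2.5, 3.5, 4.5]:\n    print(beta, 1 - Q1(alpha, beta), Z(alpha, beta))\n\\end{verbatim}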
\n\nNote that the proposed approximation is not tight at the tails of the \\gls{cdf}. However, as observed in \\cite{Bocus2013CLapproximation,helstrom1994elements,Makki2013TCfeedback,Makki2011Eurasipcapacity,Makki2018WCLwireless,Makki2016TVTperformance,Simon2003TWCsome,Suraweera2010TVTcapacity,Kang2003JSAClargest,Chen2004TCdistribution,Ma2000JSACunified,Zhang2002TCgeneral,Ghasemi2008ICMspectrum,Digham2007TCenergy, simon2002bookdigital,Fu2011GLOBECOMexponential,zhao2008ELtight,Simon2000TCexponential,Cao2016CLsolutions,sofotasios2015solutions,Cui2012ELtwo,annamalai2001WCMCcauchy,Sofotasios2010ISWCSnovel,Li2010TCnew,andras2011Mathematicageneralized,Gaur2003TVTsome,Kam2008TCcomputing,Corazza2002TITnew,Baricz2009TITnew,chiani1999ELintegral}, in different applications, the Marcum $Q$-function is typically combined with other functions which tend to zero in the tails of the CDF. In such cases, the inaccuracy of the approximation at the tails does not affect the tightness of the final analysis. For example, the proposed approximation can simplify integrals such as\n\\begin{align}\\label{eq_integral}\nG(\\alpha,\\rho)=\\int_\\rho^\\infty{e^{-nx} x^m \\left(1-Q_1(\\alpha,x)\\right)\\text{d}x} ~~\\forall n,m,\\alpha,\\rho>0.\n\\end{align}\nSuch an integral has been observed in various applications, e.g., \\cite[Eq. 1]{Simon2003TWCsome}, \\cite[Eq. 2]{Cao2016CLsolutions}, \\cite[Eq. 1]{sofotasios2015solutions}, \\cite[Eq. 3]{Cui2012ELtwo}, and \\cite[Eq. 1]{Gaur2003TVTsome}. However, depending on the values of $n, m$ and $\\rho$, (\\ref{eq_integral}) may have no closed-form expression.\n\nAnother example of integral calculation is\n\\begin{align}\\label{eq_integralT}\n T(\\alpha,m,a,\\theta_1,\\theta_2) = \\int_{\\theta_1}^{\\theta_2} e^{-mx}\\log(1+ax)Q_1(\\alpha,x)\\text{d}x ~~\\forall m>0,a,\\alpha,\n\\end{align}\nwith $\\theta_2>\\theta_1\\geq0$, which does not have a closed-form expression for different values of $m, a, \\alpha$. This integral is of interest as it is often used to analyse the expected performance of outage-limited systems, e.g., \\cite{Simon2003TWCsome,Simon2000TCexponential,Gaur2003TVTsome,6911973}.\n\nFinally, the proposed semi-linear approximation can be used for the rate adaptation scheme developed for the \\gls{pa} system. For more details, refer to Chapter \\ref{chapter:nosections} as well as \\cite[Sec. II]{guo2020semilinear}.\n\\chapter{Contributions and Future Work}\\label{sec:conclusion}\n This chapter summarizes the contributions of each appended publication and lays out possible directions for future work based on the topics in this thesis.\n\\section{Paper A}\n\\subsection*{\"Rate adaptation in predictor antenna systems\"}\nIn this paper, we study the performance of \\gls{pa} systems in the presence of the mismatch problem with rate adaptation. We derive closed-form expressions for the instantaneous throughput, the outage probability, and the throughput-optimized rate adaptation. Also, we take the temporal variation of the channel into account and evaluate the system performance in various conditions. The simulation and analytical results show that, while \\gls{pa}-assisted rate adaptation leads to considerable performance improvement, the throughput and the outage probability are remarkably affected by the spatial mismatch and temporal correlations.\n\n\\section{Paper B}\n\\subsection*{\"A semi-linear approximation of the first-order Marcum $Q$-function with application to predictor antenna systems\"}\nIn this paper, we first present a semi-linear approximation of the Marcum $Q$-function. 
Our proposed approximation is useful because it simplifies, e.g., various integral calculations involving the Marcum $Q$-function, as well as various operations such as parameter optimization. Then, as an example of interest, we apply our proposed approximation approach to the performance analysis of \\gls{pa} systems. Considering spatial mismatch due to mobility, we derive closed-form expressions for the instantaneous and average throughput as well as the throughput-optimized rate allocation. As we show, our proposed approximation scheme enables us to analyze \\gls{pa} systems with high accuracy. Moreover, our results show that rate adaptation can improve the performance of \\gls{pa} systems with different levels of spatial mismatch.\n\n\\section{Paper C}\n\\subsection*{\"Power allocation in HARQ-based predictor antenna systems\"}\nIn this work, we study the performance of \\gls{pa} systems using \\gls{harq}. Considering spatial mismatch due to the vehicle mobility, we derive closed-form expressions for the optimal power allocation and the minimum average power of the \\gls{pa} systems under different outage probability constraints. The results are presented for different types of \\gls{harq} protocols, and we study the effect of different parameters on the performance of \\gls{pa} systems. As we show, our proposed approximation scheme enables us to analyze \\gls{pa} systems with high accuracy. Moreover, for different vehicle speeds, we show that the \\gls{harq}-based feedback can reduce the outage-limited transmit power consumption of PA systems by orders of magnitude.\n\n\n\n\\section{Related Contributions}\nAnother \\gls{csi}-related application in vehicle communication is beamforming. As discussed in Chapter \\ref{ch:introduction}, the channel for vehicles changes rapidly, such that it is hard to acquire \\gls{csit}, especially during initial access. In Paper D, we study the performance of large-but-finite \\gls{mimo} networks using codebook-based beamforming. The results show that the proposed genetic algorithm-based scheme can reach (almost) the same performance as the exhaustive search-based scheme with considerably lower implementation complexity. Then, in Paper E, we extend our study in Paper D to include beamforming at both the transmitter and the receiver side. Also, we compare different machine learning-based analog beamforming approaches for beam refinement. As the results indicate, our scheme outperforms the considered state-of-the-art schemes in terms of throughput. Moreover, when taking user mobility into account, the proposed approach can remarkably reduce the algorithm running delay by exploiting the beamforming results of previous time slots. Finally, in Paper F, with collaborative users, we show that the end-to-end throughput can be improved by data exchange through device-to-device links among the users.\n\n\n\\section{Future work}\nIn this thesis, we have developed analytical models for evaluating the \\gls{pa} system from different perspectives, and proposed resource allocation schemes to mitigate the mismatch problem and improve the system performance. Here are some potential directions for future work:\n\\begin{itemize}\n \\item Several results presented in the papers above rely on the assumption that the scattering environment around the \\glspl{ra} is isotropic and remains constant over the time period of interest and within a small moving distance. 
To resemble reality more accurately, one could consider alternative models to evaluate the \\gls{pa} system, for example mixture models with more time-varying properties \\cite{abdi2003new}.\n \\item As a natural follow-up from the above, one can consider more use cases for the \\gls{pa} system, such as satellite-train communication and vehicle localization, with different channel models and service requirements. Here, the results of \\cite{shim2016traffic,lannoo2007radio,wang2012distributed,Dat2015,Laiyemo2017,yutao2013moving,Marzuki2017,Andreev2019,Haider2016,Patra2017} can be supportive. \n \\item The work we have done considers a \\gls{siso} setup, i.e., with one antenna at the \\gls{bs} and one \\gls{ra} on top of the vehicle at the receiver side. Though in \\cite{guo2020power} we exploit the \\gls{pa} as part of the data transmission, it is still interesting to see where the gain of deploying the \\gls{pa} system comes from and when we should apply it over typical transceiver schemes. Moreover, one can deploy the \\gls{pa} in multiple antenna systems, for which the results of \\cite{Dinh2013ICCVEadaptive,DT2015ITSMmaking,phan2018WSAadaptive} can be useful. It is expected that combining the \\gls{pa} with \\gls{mimo} would result in higher performance gains in fast-moving scenarios. \n \\item As discussed in Chapter \\ref{ch:introduction}, in \\gls{pa} systems we target \\gls{urllc}, i.e., delay\/latency is crucial in the system design. Hence, there is a natural extension to perform finite-blocklength analysis in the (\\gls{harq}-based) \\gls{pa} system. As opposed to the literature on finite-blocklength studies, e.g., \\cite{makki2014greenglobe,Makki2014WCLfinite,Yang2014TITquasi}, here the channel in the retransmission round(s) is different from the one in the initial transmission due to mobility.\n \\item Machine learning-based channel estimation\/prediction has become powerful in various applications where a statistical model of the channel does not exist or is not robust \\cite{ye2017power,wen2018deep,feng2020deep}. On the other hand, the \\gls{pa} itself provides a reliable feedback loop at the cost of additional resources. Using the \\gls{pa} setup to perform machine learning-based channel prediction would be a very valuable contribution.\n\\end{itemize}\n\n\n\n\n\n\n\n\n\\chapter{Resource Allocation in PA Systems}\\label{chapter:nosections}\nResource allocation plays an important role in communication systems as a way of optimizing the assignment of available resources to achieve network design objectives. In the \\gls{pa} system, resource allocation can be deployed to mitigate the different challenges mentioned in Chapter \\ref{chapter:two}. In this chapter, we develop various resource allocation schemes for the \\gls{pa} system under different practical constraints.\n\n\\section{Rate Adaptation in the Classic PA Setup}\nIn this section, we propose a rate adaptation scheme to mitigate the mismatch problem. Here, the classic setup means that the \\gls{pa} is used only for channel prediction, not for data transmission. We assume that $d_\\text{a}$, $\\delta $ and $\\hat{g}$ are known at the BS. It can be seen from (\\ref{eq_pdf}) that $f_{g|\\hat{H}}(x)$ is a function of $v$. For a given $v$, the distribution of $g$ is known at the BS, and a rate adaptation scheme can be performed to improve the system performance.\n\nThe data is transmitted with rate $R^*$ nats-per-channel-use (npcu). 
If the instantaneous realization of the channel gain supports the data rate, i.e., $\\log(1+gP)\\ge R^*$, the data can be successfully decoded; otherwise, outage occurs. Hence, the outage probability in each time slot is obtained as $\\text{Pr}(\\text{Outage}|\\hat{H}) = F_{g|\\hat{H}}\\left(\\frac{e^{R^*}-1}{P}\\right)$. We consider slotted communication in block-fading channels, where $\\Pr(\\text{Outage})>0$ in general. Here, we define the throughput as the data rate times the successful decoding probability \\cite[p. 2631]{Biglieri1998TITfading}, \\cite[Th. 6]{Verdu1994TITgeneral}, \\cite[Eq. 9]{Makki2014TCperformance}, i.e., the expected data rate successfully received by the receiver, which is an appropriate performance metric. Hence, the rate adaptation problem of maximizing the throughput in each time slot, with given $v$ and $\\hat{g}$, can be expressed as\n\\begin{align}\\label{eq_avgR}\n R_{\\text{opt}|\\hat{g}}=\\argmax_{R^*\\geq 0} \\left\\{ \\left(1-\\text{Pr}\\left(\\log(1+gP)<R^*\\right)\\right)R^*\\right\\} = \\argmax_{R^*\\geq 0} \\left\\{ \\left(1-F_{g|\\hat{H}}\\left(\\frac{e^{R^*}-1}{P}\\right)\\right)R^*\\right\\}.\n\\end{align}\n\n\\section{Power Allocation in HARQ-based PA Systems}\nThe \\gls{pa} setup can be further combined with \\gls{harq} protocols, as we do in \\cite{guo2020power}, where we study the outage-constrained power allocation of \\gls{harq}-based \\gls{pa} systems considering the \\gls{rtd} and \\gls{inr} protocols with a maximum of two transmission rounds. Here, with $\\theta$ denoting the decoding threshold corresponding to the initial rate $R$, the problem of minimizing the expected transmit power subject to an outage probability constraint can be stated as\n\\begin{equation}\\label{eq_optproblem}\n\\begin{aligned}\n&\\min_{P_1,P_2(\\hat{g})} \\Bar{P}\\\\\n&\\text{s.t.}~ \\Pr(\\text{Outage}) = \\epsilon, ~\\epsilon > 0,\\\\\n&P_\\text{tot}|\\hat{g} = \\left[P_1 + P_2(\\hat{g}) \\times \\mathcal{I}\\left\\{\\hat{g} \\le \\frac{\\theta}{P_1}\\right\\}\\right],\n\\end{aligned}\n\\end{equation}\nwith\n\\begin{align}\\label{eq_optproblemrtd}\nF_{g|\\hat{g}}\\left\\{\\frac{\\theta-\\hat{g}P_1}{P_2(\\hat{g})} \\right\\} = \\epsilon, \\quad\\text{for RTD}\n\\end{align}\n\\begin{align}\\label{eq_optprobleminr}\nF_{g|\\hat{g}}\\left\\{\\frac{e^{R-\\log(1+\\hat{g}P_1)}-1}{P_2(\\hat{g})} \\right\\} = \\epsilon, \\quad\\text{for INR}. \n\\end{align}\nHere, $P_\\text{tot}|\\hat{g}$ is the total instantaneous transmission power for two transmission rounds (i.e., one retransmission) with given $\\hat{g}$, and we define $\\Bar{P} \\doteq \\mathbb{E}_{\\hat{g}}\\left[P_\\text{tot}|\\hat{g}\\right]$ as the expected power, averaged over $\\hat{g}$. Moreover, $\\mathcal{I}(x)=1$ if $x>0$ and $\\mathcal{I}(x)=0$ if $x \\le 0$. Also, $\\mathbb{E}_{\\hat{g}}[\\cdot]$ represents the expectation operator over $\\hat{g}$. Here, we ignore the peak power constraint and assume that the \\gls{bs} is capable of sufficiently high transmission powers. Finally, (\\ref{eq_optproblem})-(\\ref{eq_optprobleminr}) come from the fact that, with our proposed scheme, $P_1$ is fixed and optimized with no \\gls{csi} at the \\gls{bs} and based on the average system performance. On the other hand, $P_2$ is adapted continuously based on the instantaneous \\gls{csi}.\n\n\nUsing (\\ref{eq_optproblem}), the required power in Round 2 is given by\n\\begin{equation}\n\\label{eq_PRTDe}\n P_2(\\hat{g}) = \\frac{\\theta-\\hat{g}P_1}{F_{g|\\hat{g}}^{-1}(\\epsilon)},\n\\end{equation}\nfor the \\gls{rtd}, and\n\\begin{equation}\n\\label{eq_PINRe}\n P_2(\\hat{g}) = \\frac{e^{R-\\log(1+\\hat{g}P_1)}-1}{F_{g|\\hat{g}}^{-1}(\\epsilon)},\n\\end{equation}\nfor the \\gls{inr}, where $F_{g|\\hat{g}}^{-1}(\\cdot)$ is the inverse of the \\gls{cdf} given in (\\ref{eq_cdf}). Note that $F_{g|\\hat{g}}^{-1}(\\cdot)$ is a complicated function of $\\hat{g}$, and consequently, it is not possible to express $P_2$ in closed form. For this reason, one can use \\cite[Eq. 2, 7]{6414576}\n\\begin{align}\n Q_1 (s, \\rho) &\\simeq e^{\\left(-e^{\\mathcal{I}(s)}\\rho^{\\mathcal{J}(s)}\\right)}, \\nonumber\\\\\n \\mathcal{I}(s)& = -0.840+0.327s-0.740s^2+0.083s^3-0.004s^4,\\nonumber\\\\\n \\mathcal{J}(s)& = 2.174-0.592s+0.593s^2-0.092s^3+0.005s^4,\n\\end{align}\nto approximate $F_{g|\\hat{g}}$ and consequently $F_{g|\\hat{g}}^{-1}(\\epsilon)$. 
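\n\nTo illustrate the use of this approximation, the following minimal Python sketch inverts the approximated \\gls{cdf} and evaluates the \\gls{rtd} Round-2 power (\\ref{eq_PRTDe}); the parameter values are placeholders:\n\\begin{verbatim}\nimport numpy as np\n\ndef I_poly(s):\n    return -0.840 + 0.327*s - 0.740*s**2 + 0.083*s**3 - 0.004*s**4\n\ndef J_poly(s):\n    return 2.174 - 0.592*s + 0.593*s**2 - 0.092*s**3 + 0.005*s**4\n\ndef F_inv(eps, g_hat, sigma2):\n    # Solve 1 - Q1(s, sqrt(2x\/sigma2)) = eps using\n    # Q1(s, rho) ~ exp(-exp(I(s)) * rho**J(s))  [6414576]\n    s = np.sqrt(2 * (1 - sigma2) * g_hat \/ sigma2)\n    rho = (-np.log(1 - eps) \/ np.exp(I_poly(s))) ** (1 \/ J_poly(s))\n    return sigma2 * rho**2 \/ 2\n\ndef P2_rtd(g_hat, P1, theta, eps, sigma2):\n    # (eq_PRTDe): P2 = (theta - g_hat*P1) \/ F^{-1}(eps), when positive\n    return max(0.0, theta - g_hat * P1) \/ F_inv(eps, g_hat, sigma2)\n\nprint(P2_rtd(g_hat=0.5, P1=2.0, theta=3.0, eps=1e-3, sigma2=0.1))\n\\end{verbatim}\nThis corresponds to (\\ref{eq_PRTDa})-(\\ref{eq_omega}) below, where the inverse is absorbed into the function $\\Omega(\\hat{g})$.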
\n\nIn this way, (\\ref{eq_PRTDe}) and (\\ref{eq_PINRe}) can be approximated as\n\\begin{equation}\n\\label{eq_PRTDa}\n P_2(\\hat{g}) = \\Omega\\left(\\theta-\\hat{g}P_1\\right),\n\\end{equation}\nfor the \\gls{rtd}, and\n\\begin{equation}\n\\label{eq_PINRa}\n P_2(\\hat{g}) = \\Omega\\left(e^{R-\\log(1+\\hat{g}P_1)}-1\\right),\n\\end{equation}\nfor the \\gls{inr}, where\n\\begin{equation}\\label{eq_omega}\n \\Omega (\\hat{g}) = \\frac{2}{\\sigma^2}\\left(\\frac{\\log(1-\\epsilon)}{-e^{\\mathcal{I}\\left(\\sqrt{\\frac{2\\hat{g}(1-\\sigma^2)}{\\sigma^2}}\\right)}}\\right)^{-\\frac{2}{\\mathcal{J}\\left(\\sqrt{\\frac{2\\hat{g}(1-\\sigma^2)}{\\sigma^2}}\\right)}}.\n\\end{equation}\n\n\n\nIn this way, for different \\gls{harq} protocols, we can express the instantaneous transmission power of Round 2 in closed form for every given $\\hat{g}$. Then, the power allocation problem (\\ref{eq_optproblem}) can be solved numerically. However, (\\ref{eq_omega}) is still complicated, and it is not possible to solve (\\ref{eq_optproblem}) in closed form. For this reason, we propose an approximation scheme to solve (\\ref{eq_optproblem}) as follows.\n\n\n\n\nLet us initially concentrate on the \\gls{rtd} protocol. Then, combining (\\ref{eq_optproblem}) and (\\ref{eq_PRTDe}), the expected total transmission power is given by\n\\begin{align}\\label{eq_barP}\n \\Bar{P}_\\text{RTD} = P_1 + \\int_0^{\\theta\/P_1} e^{- x}P_2\\text{d}x\n = P_1 + \\int_0^{\\theta\/P_1} e^{- x}\\frac{\\theta-x P_1}{F_{g|x}^{-1}(\\epsilon)}\\text{d}x.\n\\end{align}\n\nThen, Theorem 1 in \\cite{guo2020power} derives the minimum required power in Round 1 and the average total power consumption.\n\nTo study the performance of the \\gls{inr}, we can use Jensen's inequality and the concavity of the logarithm function \\cite[Eq. 30]{makki2016TWCRFFSO}\n\\begin{align}\\label{eq_jensen}\n \\frac{1}{n}\\sum_{i=1}^{n} \\log (1+x_i)\\leq\\log\\left(1+\\frac{1}{n}\\sum_{i=1}^{n}x_i\\right),\n\\end{align}\nand derive closed-form expressions for the minimum required power following similar steps as for the \\gls{rtd}; see \\cite[Sec. III.B]{guo2020power} for detailed discussions.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\chapter{Introduction}\n\\label{ch:introduction}\n\n\\section{Background}\nNowadays, wireless communication and its related applications play important roles in our lives. Since the first mobile communication system was deployed in the early 1980s, new standards have been established roughly every ten years, leading to the first commercial deployment of the \\gls{5g} cellular networks in late 2019 \\cite{ericsson2019,Dang2020what,andrews2014what}. From \\gls{2g}, where the first digital communication system was deployed with text messages being available, through the recent \\gls{4g} with \\gls{3gpp} \\gls{lte} being the dominant technology, to \\gls{5g} with \\gls{nr} standardized by the \\gls{3gpp} \\cite{zaidi20185g}, one theme never changes: the growing demand for high-speed, ultra-reliable, low-latency and energy-efficient wireless communications with limited radio spectrum resources.\n\n\nAccording to the Ericsson mobility report \\cite{ericsson2019}, the total number of mobile subscriptions has exceeded 8.1 billion today, with \\gls{4g} being the major standard, and it is expected that this number will reach around 9 billion, with over 20\\% being supported by \\gls{nr}, by the end of 2024 \\cite{ericsson2019}. 
Thanks to the higher bandwidth (usually larger than 1 GHz) at millimeter-wave frequencies, as well as the development of multi-antenna techniques, new use cases in \\gls{5g}, such as intelligent transport systems, autonomous vehicle control, virtual reality, factory automation, and providing coverage to high-mobility users, have developed rapidly \\cite{Simsek2016JSAC5g}. These use cases are usually categorized into three distinct classes by the standardization groups of \\gls{5g} \\cite{osseiran2014scenarios}:\n\n\\begin{itemize}\n\\item[i)] \\gls{emb} deals with large data packets and how to deliver them using high data rates \\cite{dahlman20185g}. This can be seen as a natural extension of the established \\gls{lte} system that is designed for a similar use case. Typical \\gls{emb} applications involve high-definition video streaming, virtual reality, and online gaming.\n\\item[ii)] \\gls{mtc} is a new application in \\gls{5g}, which targets providing wide coverage to a massive number of devices, such as sensors, that send sporadic updates to a \\gls{bs} \\cite{bockelmann2016massive}. Here, the key requirements are energy consumption, reliability, and scalability. High data rate and low latency, on the other hand, are of secondary importance. \n\\item[iii)] \\gls{urllc} concerns mission-critical applications with stringent requirements on reliability and latency \\cite{bockelmann2016massive}. In this type of use case, the challenge is to design protocols which can transmit data with very low error probability and fulfill the latency constraint at the same time. Applications falling into this category include real-time control in smart factories, remote medical surgery, and \\gls{v2x} communications, which mainly focus on safety for high-mobility users. \n\\end{itemize}\n\nThis thesis targets both \\gls{emb} and \\gls{urllc}. More specifically, this work develops efficient (high data rate) and reliable (low error probability) \\gls{v2x} schemes with latency requirements, using the \\gls{pa} concept. A detailed review of \\gls{v2x} communications and the \\gls{pa} concept, as well as the associated research challenges, is presented in the following subsections.\n\n\n\n\n\n\n\n\n\n\\subsection{Vehicle Communications in 5G and Time\/Space-Varying Channel}\n\nProviding efficient, reliable broadband wireless communication links in high-mobility use cases, such as high-speed railway systems and urban\/highway vehicular communications, has been incorporated as an important part of the \\gls{5g} developments \\cite{imt2015}. According to \\cite{samsung2015}, \\gls{5g} systems are expected to support a large number of users traveling at speeds up to 500 km\/h, at a data rate of 150 Mbps or higher. One interesting scenario in \\gls{5g} vehicle communication is the \\gls{mrn}, where a significant number of users could access cellular networks using moving relays, e.g., in public transportation such as buses, trams, and trains, via their smart phones or laptops \\cite{yutao2013moving}. As one type of \\gls{mrn}, one can consider the deployment of \\gls{iab} nodes on top of vehicles \\cite{Teyeb2019VTCintegrated}, where part of the radio resources is used for wireless backhauling. In this way, moving \\gls{iab} nodes can provide feasible solutions for such relay densification systems in \\gls{5g}\\footnote{It should be noted that mobile \\gls{iab} is not supported in \\gls{3gpp} Rel-16 and 17. 
However, it is expected to be discussed in future releases.}.\n\n\nMost current cellular systems can support users with low or moderate mobility, while high moving speeds would limit the coverage area and the data rate significantly. For example, \\gls{4g} systems are aimed at supporting users perfectly at speeds of 0-15 km\/h, serving with high performance from 15 km\/h to 120 km\/h, and providing functional services at 120-350 km\/h \\cite{3gpp2014requirements}. On the other hand, field tests at different places \\cite{Wu2016IEsurvey} have shown that current \\gls{4g} systems can only provide a data rate of 2-4 Mbps in high-speed trains. To meet the requirement of high data rates at high moving speeds in future mobile communication systems, new technologies that are able to cope with the challenges of mobility need to be developed.\n\nWith the setup of \\gls{mrn} and other \\gls{v2x} applications such as vehicle platooning \\cite{guo2017wiopt,guo2017pimrc,guo2018hindawi} and remote driving \\cite{Liang2017TVTvehiclar}, different technologies can be applied to improve the system performance at high speeds. For example, strategies in the current standard aiming at improving the spectral efficiency include \\gls{mimo}, \\gls{csi}-based scheduling, and adaptive modulation and coding. Moreover, in future standardization, techniques such as \\gls{comp} \\gls{jt} and massive \\gls{mimo} will also be involved. All these techniques have one thing in common: they require accurate estimation of \\gls{csit} at acceptable cost. However, this is not an easy task. The main reason is that the channel in vehicle communication has certain features which make it difficult to acquire \\gls{csit} \\cite{Wu2016IEsurvey}:\n\\begin{itemize}\n\\item[i)] \\textit{Fast time-varying fading}: For high-speed vehicles, the channel has fast time variation due to large Doppler spread. Let us consider a simple example. Assume a vehicle operating at a speed of 200 km\/h and a carrier frequency of 6 GHz. Then, the maximum Doppler frequency is obtained as $f_\\text{D} = v\/\\lambda = 1111$ Hz, which corresponds to a channel coherence time of around 900 $\\mu$s (see the short numerical sketch after this list). However, in \\gls{lte} the control loop time with both \\gls{ul} and \\gls{dl} is around 2 ms, which makes \\gls{csit} outdated if we consider the \\gls{tdd} system with channel reciprocity. Moreover, the speeds of moving terminals are usually time-varying, making the channel even more dynamic.\n\\item[ii)] \\textit{Channel estimation errors}: Due to the time-varying channel, it is not practical to assume perfect \\gls{csit}, as we do for low-mobility systems. In fact, mobility causes difficulties not only in accurately estimating the channel, but also in tracking, updating and predicting the fading parameters. Also, the estimation error may have remarkable effects on the system performance, which makes this aspect very important in the system design.\n\\item[iii)] \\textit{Doppler diversity}: Doppler diversity has been developed for systems with perfect \\gls{csit}, in which it provides diversity gains to improve the system performance. On the other hand, Doppler diversity may cause high channel estimation errors, which makes it important to study the trade-off between Doppler diversity and estimation errors.\n\\end{itemize} \nBesides these three aspects, there are also further issues for the channel with mobility, e.g., carrier frequency offset, inter-carrier interference, high penetration loss, and frequent handover. 
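\n\nFor concreteness, the Doppler numbers in item i) above can be reproduced with a few lines (a trivial numerical sketch):\n\\begin{verbatim}\nc = 3e8          # speed of light [m\/s]\nfc = 6e9         # carrier frequency [Hz]\nv = 200 \/ 3.6    # 200 km\/h in m\/s\nf_D = v \/ (c \/ fc)   # maximum Doppler frequency v\/lambda, ~1111 Hz\nT_c = 1 \/ f_D        # rule-of-thumb coherence time, ~900 microseconds\nprint(f_D, T_c * 1e6)\n\\end{verbatim}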
Besides these three aspects, there are further mobility-related issues, e.g., carrier frequency offset, inter-carrier interference, high penetration loss, and frequent handover. To conclude, with the existing methods and depending on the vehicle speed, the channel coefficients may be outdated at the time of transmission, due to the various delays in the control loop and the mobility of the vehicles.

The use of channel prediction can alleviate this problem. By using statistics over time and frequency, combined with linear predictors such as the Kalman predictor, the channel coefficients can be predicted for around 0.1-0.3 carrier wavelengths in space \cite{Sternad2012WCNCWusing}. This prediction horizon is sufficient for \gls{4g} systems with short control loops (1-2 ms) or for users at pedestrian velocities. However, it is inadequate for vehicular velocities at high frequencies.

\subsection{Predictor Antenna and Related Work}
To overcome the limited prediction horizon in rapidly varying channels with mobility, and to support use cases such as \gls{mrn}, \cite{Sternad2012WCNCWusing} proposed the \gls{pa} concept. Here, a \gls{pa} system refers to a setup with two sets of antennas on the roof of a vehicle, where the \glspl{pa} positioned at the front of the vehicle are used to predict the channel state later observed by one or more \glspl{ra} aligned behind them, and to send this \gls{csi} back to the \gls{bs}. Then, when an \gls{ra} reaches the position previously occupied by a \gls{pa}, the \gls{bs} can use the \gls{csi} obtained from the \glspl{pa} to improve the transmission to the \glspl{ra} via, for example, power/rate adaptation and beamforming. The results in \cite{Sternad2012WCNCWusing} indicate that the \gls{pa} system can provide sufficiently accurate channel estimates for at least one wavelength in the \gls{los} case, and \cite{Jamaly2014EuCAPanalysis} shows that, with a smooth vehicle roof that avoids refraction, abnormal reflection, and scattering, and with compensation for antenna coupling, at least three wavelengths can be predicted in both \gls{los} and \gls{nlos} conditions.

Following \cite{Sternad2012WCNCWusing}, the works \cite{BJ2017ICCWusing,phan2018WSAadaptive,Apelfrojd2018PIMRCkalman} provide experimental validation of the feasibility of the \gls{pa} concept. Specifically, \cite{BJ2017ICCWusing} demonstrates an order-of-magnitude increase in prediction horizon compared to time-series-based prediction. Moreover, \cite{phan2018WSAadaptive} shows that the \gls{pa} concept works for massive \gls{mimo} \glspl{dl}, where the \gls{pa} can improve the signal-to-interference ratio in setups with \gls{nlos} channels. Also, \cite{Apelfrojd2018PIMRCkalman} demonstrates that a Kalman smoothing-based \gls{pa} system enables prediction horizons of up to 0.75 carrier wavelengths at vehicular speeds for Rayleigh-like \gls{nlos} fading channels. A review of \cite{Sternad2012WCNCWusing,BJ2017ICCWusing,phan2018WSAadaptive,Apelfrojd2018PIMRCkalman} reveals the following research problems:
\begin{itemize}
\item[i)] \textit{Speed sensitivity}: The results in \cite{BJ2017ICCWusing,phan2018WSAadaptive,Apelfrojd2018PIMRCkalman} show that, for a given control loop time, if the speed is too low or too high, a large distance arises between the point where the \gls{pa} estimates the channel and the point that the \gls{ra} reaches in the second time slot, i.e., a spatial mismatch, and the prediction accuracy decreases drastically. Since the speed of a vehicle cannot be assumed to remain constant, this may lead to performance loss. Indeed, \cite{DT2015ITSMmaking} and \cite{Jamaly2019IETeffects} have addressed this kind of spatial mismatch problem in the \gls{pa} system. In \cite{DT2015ITSMmaking}, an interpolation-based beamforming scheme is proposed for \gls{dl} \gls{miso} systems to solve the mis-pointing problem. From another perspective, \cite{Jamaly2019IETeffects} studies the effect of velocity variation on the prediction performance. However, how to analytically characterize the speed sensitivity of the \gls{pa} system remains unclear.
\item[ii)] \textit{Lack of an analytical model}: The works \cite{Sternad2012WCNCWusing,BJ2017ICCWusing,phan2018WSAadaptive,Apelfrojd2018PIMRCkalman} are based on real-world measurement data that validate the concept, while \cite{DT2015ITSMmaking} is based on simulated channels, and \cite{Jamaly2019IETeffects} focuses mainly on the antenna pattern. No analytical model of the \gls{pa} system has been proposed in \cite{Sternad2012WCNCWusing,BJ2017ICCWusing,phan2018WSAadaptive,Apelfrojd2018PIMRCkalman,DT2015ITSMmaking,Jamaly2019IETeffects}. Moreover, as mentioned in the previous item, an analytical tool is needed to study the sensitivity of the system performance to speed variation (a minimal formalization of the mismatch is sketched after this list).
\item[iii)] \textit{What else can be done with the \gls{pa} system}: As the results in \cite{Sternad2012WCNCWusing,BJ2017ICCWusing,phan2018WSAadaptive,Apelfrojd2018PIMRCkalman,DT2015ITSMmaking,Jamaly2019IETeffects} show, although the \gls{pa} system can extend the prediction horizon to up to three wavelengths, the prediction region is still limited and the system is quite sensitive to the vehicle speed. Hence, additional structures or schemes could potentially be built on top of the \gls{pa} system to achieve better performance.
\item[iv)] \textit{When to use the \gls{pa} system}: The key point of the \gls{pa} concept is to use an additional antenna to acquire better-quality \gls{csit}. In this way, the time-frequency resources of the \gls{pa} are used for channel prediction instead of data transmission. Intuitively, there should exist a condition under which the \gls{pa} concept is beneficial, compared to simply using the \gls{pa} as an additional \gls{ra}. Here, theoretical models may help us make such decisions.
\end{itemize}
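To make items i) and ii) concrete, the following is a minimal formalization of the spatial mismatch, sketched under a classical Jakes-type correlation assumption (a simplified illustration, not necessarily the exact model of the cited works). Let $d_\text{a}$ denote the \gls{pa}-\gls{ra} antenna separation, $v$ the vehicle speed, and $\delta$ the effective control loop delay. The \gls{ra} then arrives at a point displaced by
\begin{align}
d = \left| d_\text{a} - v\delta \right|
\end{align}
from the point where the \gls{pa} estimated the channel, and under Jakes' model the channels at the two points have correlation $\rho = J_0(2\pi d/\lambda)$, where $J_0(\cdot)$ is the zeroth-order Bessel function of the first kind and $\lambda$ is the carrier wavelength. Writing
\begin{align}
h_\text{RA} = \rho\, h_\text{PA} + \sqrt{1-\rho^2}\,\varepsilon, \qquad \varepsilon \sim \mathcal{CN}(0,1),
\end{align}
splits the channel into a known part (from the \gls{pa} report) and an uncertainty part (from the mismatch), and the gain $|h_\text{RA}|^2$, conditioned on $h_\text{PA}$, becomes non-central Chi-square distributed. Note that the mismatch vanishes only when $v = d_\text{a}/\delta$, which makes the speed sensitivity in item i) explicit: both lower and higher speeds increase $d$ and reduce $\rho$.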
\section{Scope of the Thesis}
The aim of this thesis is to present an analytical evaluation of the \gls{pa} system and, at the same time, to apply some key enablers of \gls{urllc}, such as rate adaptation, \gls{harq}, and power allocation, considering imperfect \gls{csi} estimation. The channel considered in this thesis follows a non-central Chi-square distribution, which we model as the combination of the known part of the channel obtained from the \gls{pa} and an uncertainty part stemming from the spatial mismatch. First, in Paper A, we present our proposed analytical model for evaluating the sensitivity of the \gls{pa} system to spatial mismatch. Some preliminary work on rate adaptation based on imperfect \gls{csi} is also presented.

In Paper B, we first develop a mathematical tool that remarkably simplifies the analysis of our proposed channel model, including various integral calculations as well as optimization problems containing the first-order Marcum $Q$-function. Then, we extend the work in Paper A and analyze in detail the effect of various parameters, such as the processing delay of the \gls{bs} and imperfect feedback schemes.
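To illustrate why the first-order Marcum $Q$-function is central here, recall that $Q_1(a,b)$ equals the tail probability of a non-central Chi-square random variable with two degrees of freedom and non-centrality $a^2$, evaluated at $b^2$. The following snippet is a minimal numerical sketch of this link; the parameter values \texttt{a}, \texttt{g\_th}, and \texttt{eps} are hypothetical and not taken from the attached papers.
\begin{verbatim}
# Minimal sketch: first-order Marcum Q-function via the non-central
# Chi-square tail, Q_1(a, b) = 1 - F_ncx2(b^2; df = 2, nc = a^2).
import numpy as np
from scipy.stats import ncx2

def marcum_q1(a, b):
    """First-order Marcum Q-function via the ncx2 survival function."""
    return ncx2.sf(b**2, df=2, nc=a**2)

# Channel gain model: g = X^2 + Y^2 with X ~ N(a, 1), Y ~ N(0, 1), so
# g ~ ncx2(df = 2, nc = a^2); 'a' plays the role of the known (predicted)
# channel component, the unit-variance parts model the mismatch.
a = 1.5                     # hypothetical known-component amplitude
g_th = 0.8                  # hypothetical outage threshold on the gain
outage = 1.0 - marcum_q1(a, np.sqrt(g_th))   # P[g < g_th]

# Epsilon-outage rate adaptation: largest rate whose outage equals eps.
eps = 1e-2
g_eps = ncx2.ppf(eps, df=2, nc=a**2)         # gain threshold at outage eps
rate = np.log2(1.0 + g_eps)                  # rate in bits per channel use
print(f"outage = {outage:.4f}, eps-outage rate = {rate:.4f} bpcu")
\end{verbatim}
The difficulty of manipulating such expressions in closed form is precisely what motivates the semi-linear approximation of the Marcum $Q$-function developed in Paper B.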
Besides the results in Papers A and B, we are also interested in further exploiting the \gls{pa} system by, e.g., partly involving the \gls{pa} in the transmission process. In Paper C, we propose an \gls{harq}-based \gls{pa} system which uses the \gls{bs}-\gls{pa} link for the initial transmission, and uses the feedback bit on the decoding result, combined with the \gls{csi} estimate, to adapt the transmission parameters of the \gls{bs}-\gls{ra} transmission. Moreover, we develop power allocation schemes based on the \gls{harq}-\gls{pa} structure and study the outage-constrained average power of the system.

The specific objectives of this thesis can be summarized as follows.
\begin{itemize}
\item[i)] To characterize the speed sensitivity of the \gls{pa} system by analytically modeling the channels in \gls{pa} systems.
\item[ii)] To develop a mathematical tool that simplifies the performance evaluation of \gls{pa} setups involving the Marcum $Q$-function.
\item[iii)] To design efficient and reliable transmission schemes that improve the performance of existing \gls{pa} systems.
\end{itemize}

\section{Organization of the Thesis}
In Chapter 2, we introduce the \gls{pa} setups considered in the thesis. Specifically, we model the spatial mismatch in the \gls{pa} system and define the data transmission model. The details of the channel model, which involves the Marcum $Q$-function, are also presented. To support the analytical evaluations, we review the use cases of the Marcum $Q$-function in a broad range of research areas and present our proposed semi-linear approximation of the Marcum $Q$-function, together with its applications to integral calculations and optimization. In Chapter 3, we present different resource allocation schemes, namely rate adaptation and \gls{harq}-based power allocation, to improve the performance of the \gls{pa} system under the mismatch problem. For each scheme, we give the problem formulation, the data transmission model, and the details of the proposed method. Finally, in Chapter 4, we provide a brief overview of our contributions in the attached papers and discuss possible future research directions.