{"text":"\\section*{Introduction}\n\nThis paper is part of our continuing project of investigating the different notions of primeness and coprimeness for (sub)modules of a given a non-zero module $M$ over a (commutative) ring $R$ in their \\emph{natural context} as prime (coprime) elements in the lattice $Sub_R(M)$ of $R$-submodules with the canonical action of the poset $Ideal(R)$ of ideals of $R$. This approach proved to be very appropriate and enabled use to prove several results in this general setting and to provide more elegant and shorter proofs of our results. Moreover, it enabled us to generalize several notions and dualize them in a more systematic and elegant way.\n\nGeneralizing the notion of a \\emph{strongly hollow element} in a lattice, we introduce for a lattice with an \\emph{action of a poset} the notion of a \\emph{pseudo strongly hollow element}. The two notions are equivalent in case the lattice is \\emph{multiplication}. Considering the lattice $Sub_R(M)$ of a non-zero module $M$ over a commutative ring $R$, we obtain new class of modules, which we call \\emph{pseudo strongly hollow modules}. We study this class of $R$-modules, as well as modules which can be written as \\emph{finite sums} of their pseudo strongly hollow submodules. In particular, we provide existence and uniqueness theorems of such representation over Artinian rings.\n\nThis paper consists of two sections. In Section 1, we define, for a bounded lattice $\\mathcal{L} = (L,\\wedge, \\vee, 0, 1)$, several notions of primeness for elements in $L\\backslash \\{1\\}$ as well as several coprimeness notions for elements in $L\\backslash \\{0\\}$. 
In Theorem \ref{Proposition 5.8}, we prove that the spectrum $Spec^{c}(\mathcal{L})$ of coprime elements in $L$ is nothing but the spectrum $Spec^{s}(\mathcal{L}^{0})$ of second elements in the dual bounded lattice $\mathcal{L}^{0}:=(L,\vee ,\wedge ,1,0).$

In Section 2, we apply the results of Section 1 to the lattice $\mathcal{L}:=Sub_R(M)$ of submodules of a non-zero module $M$ over a commutative ring $R.$ We introduce the notion of a \emph{pseudo strongly hollow submodule} (\emph{PS-hollow submodule} for short) $N\leq M$ as dual to that of a \emph{pseudo strongly irreducible submodule}. Modules which are finite sums of PS-hollow submodules are said to be \emph{PS-hollow representable}. Proposition \ref{Remark 5.6} asserts the existence of \emph{minimal} PS-hollow representations for PS-hollow representable modules over Artinian rings. The First and the Second Uniqueness Theorems for minimal pseudo strongly hollow representations are provided in Theorems \ref{Theorem 5.9} and \ref{Theorem 5.10}, respectively. Sufficient conditions for $_{R}M$ to have a PS-hollow representation are given in Proposition \ref{hollow-PS}.
Finally, Theorem \ref{Theorem 5.17} investigates semisimple modules each PS-hollow submodule of which is simple.

\section{Primeness and Coprimeness Conditions for Lattices}

\hspace{1cm} In this section, we provide some preliminaries and study several notions of \emph{primeness} and \emph{coprimeness} for elements in a complete lattice $\mathcal{L}:=(L,\wedge ,\vee,0,1)$ with an action of a poset $(S,\leq ).$

Throughout, $S=(S,\leq)$ is a non-empty poset and $S^0=(S,\geq)$ is the dual poset.

\begin{punto}
(\cite{G}) A \emph{lattice}
\index{lattice} $\mathcal{L}$ is a \emph{poset} $(L,\leq )$ closed under two binary commutative, associative and idempotent operations: $\wedge $ (\emph{meet}) and $\vee $ (\emph{join}), and we write $\mathcal{L}=(L,\wedge ,\vee)$; we say that $\mathcal{L}$ is a \emph{bounded lattice} iff there exist $0,1 \in L$ such that $0 \leq x \leq 1$ for all $x\in L$. We say that a lattice $(L,\wedge ,\vee )$ is a \emph{complete lattice} iff $\bigwedge\limits_{x\in H}x$ and $\bigvee\limits_{x\in H}x$ exist in $L$ for any $H\subseteq L$. Every complete lattice is bounded, with $0= \bigwedge\limits_{x\in L}x$ and $1= \bigvee\limits_{x\in L}x$.

For two (complete) lattices $\mathcal{L}=(L,\wedge ,\vee )$ and $\mathcal{L}^{\prime }=(L^{\prime },\wedge ^{\prime },\vee ^{\prime }),$ a \emph{homomorphism of (complete) lattices}
\index{homomorphism of lattices} from $\mathcal{L}$ to $\mathcal{L}^{\prime }$ is a map $\varphi :L\longrightarrow L^{\prime }$ that preserves finite (arbitrary) meets and finite (arbitrary) joins.
\end{punto}

The notion of a \emph{strongly hollow submodule} was introduced by Abuhlail in \cite{Abu2015} as dual to that of a \emph{strongly irreducible submodule}.
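To illustrate the asymmetry between the two notions, consider (as an elementary illustration) the complete lattice $Ideal(\mathbb{Z})$ of ideals of $\mathbb{Z}$, in which $\wedge =\cap $ and $\vee =+$. For a prime $p$ and $k\geq 1,$ the ideal $p^{k}\mathbb{Z}$ is strongly irreducible: if $a\mathbb{Z}\cap b\mathbb{Z}=\mathrm{lcm}(a,b)\mathbb{Z}\subseteq p^{k}\mathbb{Z},$ then $p^{k}\mid \mathrm{lcm}(a,b),$ whence $p^{k}\mid a$ or $p^{k}\mid b$. On the other hand, no non-zero ideal $n\mathbb{Z}$ is strongly hollow, since for distinct primes $p$ and $q$ we have
\begin{equation*}
n\mathbb{Z}\subseteq pn\mathbb{Z}+qn\mathbb{Z},\text{ while }n\mathbb{Z}\nsubseteq pn\mathbb{Z}\text{ and }n\mathbb{Z}\nsubseteq qn\mathbb{Z}.
\end{equation*}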
The notion was generalized to arbitrary lattices and investigated by Abuhlail and Lomp in \cite{AbuC0}.

\begin{punto}
Let $\mathcal{L}=(L,\wedge ,\vee ,0,1)$ be a bounded lattice.

\begin{enumerate}
\item An element $x\in L\backslash \{1\}$ is said to be:

\emph{irreducible} (or \emph{uniform}) iff for any $a,b\in L$ with $a\wedge b=x,$ we have $a=x$ or $b=x$;

\emph{strongly irreducible} iff for any $a,b\in L$ with $a\wedge b\leq x,$ we have $a\leq x$ or $b\leq x$.

\item An element $x\in L\backslash \{0\}$ is said to be:

\emph{hollow} iff for any $a,b\in L$ with $x=a\vee b,$ we have $x=a$ or $x=b$;

\emph{strongly hollow} iff for any $a,b\in L$ with $x\leq a\vee b,$ we have $x\leq a$ or $x\leq b$.
\end{enumerate}

We denote the set of irreducible (resp. strongly irreducible, hollow, strongly hollow) elements in $L$ by $I(\mathcal{L})$ (resp. $SI(\mathcal{L})$, $H(\mathcal{L})$, $SH(\mathcal{L})$).

We say that $\mathcal{L}$ is a \emph{hollow lattice} (resp. \emph{uniform lattice}) iff $1$ is hollow (resp. $0$ is uniform).
\end{punto}

\begin{punto}
\index{lattice}
\label{action}Let $\mathcal{L}=(L,\wedge,\vee )$ be a lattice. An $S$-\textit{action}
\index{$S-$action} on $\mathcal{L}$ is a map $\rightharpoonup :S\times L\longrightarrow L$ satisfying the following conditions for all $s,s_{1},s_{2}\in S$ and $x,y\in L$:
\begin{enumerate}
 \item $s_{1}\leq _{S}s_{2}\Rightarrow s_{1}\rightharpoonup x\leq s_{2}\rightharpoonup x$;
 \item $x\leq y\Rightarrow s\rightharpoonup x\leq s\rightharpoonup y$;
 \item $s\rightharpoonup x \leq x$.
\end{enumerate}
A bounded lattice $\mathcal{L}=(L,\wedge,\vee,0,1)$ with an $S$-action is \emph{multiplication} iff for every element $x\in L$, there is some $s\in S$ such that $x=s\rightharpoonup 1$.
\end{punto}

\begin{ex}
\index{complete lattice}
\label{LAT}Let $M$ be an $R$-module.
The complete lattice $LAT(_{R}M)$ of $R$-submodules of $M$ has an $Ideal(R)$-action defined by the canonical product $IN$ of an ideal $I\leq R$ and a submodule $N\leq M$.
\end{ex}

\begin{rem}
\label{dual action}Let $\mathcal{L}=(L,\wedge,\vee ,0,1)$ be a bounded lattice with an $S$-action $\rightharpoonup : S\times L\longrightarrow L.$ The dual lattice $\mathcal{L}^{0}$ has an $S^0$-action given by
\begin{equation*}
s\rightharpoonup ^{0}x=(s\rightharpoonup 1)\vee x, \text{ for all } s\in S \text{ and }x\in L.
\end{equation*}
\end{rem}

We generalize the notion of a \emph{strongly hollow element} of a lattice, investigated by Abuhlail and Lomp in \cite{AbuC0}, to that of a \emph{pseudo strongly hollow element} of a lattice with an action of a poset. Moreover, we introduce its dual notion of a \emph{pseudo strongly irreducible element}, which is a generalization of the notion of a \emph{strongly irreducible element}.

\begin{defns}
\label{spectra}Let $(\mathcal{L},\rightharpoonup )$ be a bounded lattice with an $S$-action. We say that:

\begin{enumerate}
\item $x\in L\backslash \{1\}$ is

\emph{pseudo strongly irreducible} iff for all $y\in L$ and $s\in S:$
\begin{equation}
(s\rightharpoonup 1)\wedge y\leq x \hspace{8pt} \Rightarrow \hspace{8pt} s\rightharpoonup 1\leq x \text{\hspace{8pt} or \hspace{8pt} }y\leq x; \label{psi}
\end{equation}

\emph{prime} iff for all $y\in L$ and $s\in S:$
\begin{equation}
s\rightharpoonup y\leq x\hspace{8pt} \Rightarrow \hspace{8pt} s\rightharpoonup 1\leq x\text{\hspace{8pt} or \hspace{8pt} }y\leq x; \label{p}
\end{equation}

\emph{coprime} iff for all $s\in S:$
\begin{equation}
s\rightharpoonup 1\leq x\text{\hspace{8pt} or \hspace{8pt} }(s\rightharpoonup 1)\vee x=1. \label{c}
\end{equation}

\item $x\in L\backslash \{0\}$ is

\emph{pseudo strongly hollow} (or \emph{PS-hollow} for short) iff for all $y\in L$ and $s\in S:$
\begin{equation}
x\leq (s\rightharpoonup 1)\vee y \hspace{8pt} \Rightarrow \hspace{8pt} x\leq s\rightharpoonup 1 \text{\hspace{8pt} or \hspace{8pt} }x\leq y; \label{psh}
\end{equation}

\emph{second} iff for all $s\in S:$
\begin{equation}
s\rightharpoonup x=x\text{\hspace{8pt} or \hspace{8pt} }s\rightharpoonup x=0; \label{s}
\end{equation}

\emph{first} iff for all $y\in L$ and $s\in S:$
\begin{equation}
s\rightharpoonup y=0\text{\hspace{8pt} and \hspace{8pt} }y\leq x\hspace{8pt} \Rightarrow \hspace{8pt} s\rightharpoonup x=0\text{\hspace{8pt} or \hspace{8pt} }y=0. \label{f}
\end{equation}
\end{enumerate}

The spectrum of pseudo strongly irreducible (resp. prime, coprime, pseudo strongly hollow, second, first) elements of $\mathcal{L}$ is denoted by $Spec^{psi}(\mathcal{L})$ (resp. $Spec^{p}(\mathcal{L})$, $Spec^{c}(\mathcal{L})$, $Spec^{psh}(\mathcal{L})$, $Spec^{s}(\mathcal{L})$, $Spec^{f}(\mathcal{L})$).
\end{defns}

\begin{lem}
\index{complete lattice}
\label{00=*}Let $\mathcal{L}=(L,\wedge ,\vee,0,1)$ be a bounded lattice with an $S$-action and define
\begin{equation}
s\rightharpoonup ^{\ast }x=(s\rightharpoonup 1)\wedge x \label{s*x}
\end{equation}
for all $s\in S$ and $x\in L.$ Then $((\mathcal{L},\rightharpoonup )^{0})^{0}=(\mathcal{L},\rightharpoonup ^{\ast })$.
\end{lem}

\begin{Beweis}
It is clear that $\rightharpoonup ^{\ast }$ is an $S$-action on $\mathcal{L}$.
For all $s\\in S$ and all $x\\in L$ we have\n\\begin{equation}\ns({\\rightharpoonup ^{0}})^{0}\\text{ }x=(s\\rightharpoonup ^{0}1^{0})\\vee\n^{0}x=((s\\rightharpoonup 1)\\vee 0)\\wedge x=(s\\rightharpoonup 1)\\wedge\nx=s\\rightharpoonup ^{\\ast }x.\n\\end{equation}$\\blacksquare$\n\\end{Beweis}\n\n\\begin{rems}\\label{rems-new}\nLet $(\\mathcal{L},\\rightharpoonup )=(L,\\wedge,\\vee ,0,1)$ a bounded lattice with an $S$-action.\n\n\\begin{enumerate}\n\\item $0$ is prime if and only if $1$ is first.\n\n\\item $SH(\\mathcal{L})\\subseteq Spec^p(\\mathcal{L}^0)$.\n\n\\item If $(\\mathcal{L},\\rightharpoonup )$ is multiplication, then $$Spec^{psi}(\\mathcal{L}) = SH(\\mathcal{L})=Spec^{p}(\\mathcal{L}^{0})$$.\n\nAssume that $(\\mathcal{L},\\rightharpoonup )$ is multiplication. The first equality follow\nfrom the definitions.\n\nLet $x\\in Spec^{p}(\\mathcal{L}^{0})$. Suppose that $x\\leq y\\vee z$ for some $y,z\\in L.$\nSince $(\\mathcal{L},\\rightharpoonup )$ is multiplication, \ny=s\\rightharpoonup 1$ for some $s\\in S$, and so $x\\leq (s\\rightharpoonup\n1)\\vee z$, i.e. $s\\rightharpoonup ^{0}z\\leq ^{0}x$. Since $x\\in Spec^{p}\n\\mathcal{L}^{0}),$ we have $s\\rightharpoonup ^{0}1^{0}\\leq ^{0}x$ or $z\\leq\n^{0}x$ and so $x\\leq s\\rightharpoonup 1 = y$ or $x\\leq z$. So, $Spec^{p}\n\\mathcal{L}^{0})\\subseteq SH(\\mathcal{L}).$ The inverse inclusion follows by\n(2).\n\n\\item \\index{pseudo strongly irreducible} $x\\in L\\backslash \\{1\\}$ is prime in $(\\mathcal{L},\\rightharpoonup\n^{\\ast })$ if and only if $x$ is pseudo strongly irreducible in $(\\mathcal{L},\\rightharpoonup )$.\n\n\\item $x\\in L\\backslash \\{1\\}$ is coprime in $(\\mathcal{L},\\rightharpoonup )$\nif and only if $x$ is coprime in $(\\mathcal{L},\\rightharpoonup ^{\\ast })$.\n\n\\item $x\\in L\\backslash \\{0\\}$ is first if and only if $0$ is prime in \n[0,x].$\n\n$(\\Rightarrow )\\ $Let $x\\in L\\backslash \\{0\\}$ be first. 
Observe that the\nmaximum element in the sublattice $[0,x]$ is $x$. Suppose that \ns\\rightharpoonup y=0$ for some $y\\leq x$. Since $x$ is first, $y=0$ or \ns\\rightharpoonup x=0.$ So $0$ is prime in $[0,x].$\n\n$(\\Leftarrow )$ Let $0$ be prime in $[0,x].$ Suppose that $s\\rightharpoonup\ny=0$ for some $y\\leq x.$ Since $y\\in \\lbrack 0,x]$, we have $y=0$ or $s\\rightharpoonup\nx=0$ as $x$ is the maximum element of $[0,x].$\n\n\\item $x\\in L\\backslash \\{0\\}$ is second if and only if $0$ is coprime in\nthe interval $[0,x].$\n\\end{enumerate}\n\\end{rems}\n\nThe notion of top-lattices was introduced by Abuhlail and Lomp \\cite{AbuC}:\n\n\\begin{punto}\nLet $(\\mathcal{L},\\rightharpoonup )=(L,\\wedge,\\vee ,0,1)$ a complete lattice and $X\\subseteq L\\backslash \\{1\\}.$\nFor $a\\in L,$ we define the \\textit{variety} of $a$\\index{variety} as $V(a):=\\{p\\in X\\mid a\\leq p\\}$ and set $V(\\mathcal{L\n):=\\{V(a)\\mid a\\in L\\}.$ Indeed, $V(\\mathcal{L})$ is closed under arbitrary\nintersections (in fact, $\\bigcap_{a\\in A}V(a)=V(\\bigvee_{a\\in A}(a))$ for\nany $A\\subseteq L$). The lattice $\\mathcal{L}$ is called $X$\\emph{-top}\n(or a \\textit{topological lattice} iff $V(\\mathcal{L})$ is closed under finite\nunions.\n\\end{punto}\n\nMany results in the literature for prime, coprime, second, first, and other\ntypes of spectra of submodules of a module can be generalized to a top-lattices\nwith actions from posets. For example, we have the following generalization of \\cite[Theorem 3.5]{MMS1997}.\n\n\\begin{lemma}\nLet $(\\mathcal{L},\\rightharpoonup )$ be a complete lattice with an action from a poset $S$. If $\\mathcal{L}$ is multiplication, then $\\mathcal{L}$ is $Spec^{p}(\\mathcal{L})$-top.\n\\end{lemma}\n\\begin{Beweis} This follows from the fact that we have $V(s\\rightharpoonup 1)\\cup V(y)=V((s\\rightharpoonup 1)\\wedge y)$ for all $s\\in S$ and $y\\in L$. 
Indeed, by the definition of prime elements, the axioms of the $S$-action, and the fact that $V(-)$ is order-reversing, we have
\begin{equation*}
V(s\rightharpoonup y)\subseteq V(s\rightharpoonup 1)\cup V(y)\subseteq V((s\rightharpoonup 1)\wedge y)\subseteq V(s\rightharpoonup y).
\end{equation*}
$\blacksquare$
\end{Beweis}

\begin{defn}
\label{quot}Let $\mathcal{L}=(L,\wedge ,\vee )$ be a lattice and let $x,y,z\in L$ with $x\leq y$ and $x\leq z$. We define $y\sim z$ iff for all $y^{\prime }\leq y$, there exists $z^{\prime }\leq z$ such that $y^{\prime }\vee x=z^{\prime }\vee x$, and for all $z^{\prime }\leq z$, there exists $y^{\prime }\leq y$ such that $y^{\prime }\vee x=z^{\prime }\vee x$. It is clear that $\sim $ is an equivalence relation. Denote the equivalence class of $y\geq x$ by $y/x$, and define
\begin{equation*}
L/x:=\{y/x\mid y\in L\text{ and }x\leq y\}.
\end{equation*}
Define $y/x\leq ^{q}z/x$ iff for all $y^{\prime }\leq y$, there exists $z^{\prime }\leq z$ such that $y^{\prime }\vee x=z^{\prime }\vee x$. Then $\mathcal{L}/x=(L/x,\wedge ^{q},\vee ^{q})$ is a lattice, called the \emph{quotient lattice}, where the meet $\wedge ^{q}$ and the join $\vee ^{q}$ on $L/x$ are defined by:
\begin{equation*}
y/x\wedge ^{q}z/x:=(y\wedge z)/x\text{ and }y/x\vee ^{q}z/x:=(y\vee z)/x.
\end{equation*}
If $\mathcal{L}=(L,\wedge ,\vee ,0,1)$ is a complete lattice, then $\mathcal{L}/x=(L/x,\wedge ^{q},\vee ^{q})$ is a complete lattice, where
\begin{equation}
\bigwedge_{i\in A}^{q}(x_{i}/x)=(\bigwedge_{i\in A}x_{i})/x\text{ and }\bigvee_{i\in A}^{q}(x_{i}/x)=(\bigvee_{i\in A}x_{i})/x.
\end{equation}
\end{defn}

\begin{rem}
\label{quotient action}Let $(\mathcal{L},\rightharpoonup )$ be a lattice with an $S$-action.
Define, for all $s\in S$ and $y/x\in L/x$:
\begin{equation}
s\rightharpoonup ^{q}(y/x)=((s\rightharpoonup y)\vee x)/x.
\end{equation}
Then $(\mathcal{L}/x,\rightharpoonup ^{q})$ is a lattice with an $S$-action.
\end{rem}

\begin{thm}
\index{coprime element}
\index{second element}
\index{$S-$action}
\label{Proposition 5.8}Let $(\mathcal{L},\rightharpoonup )=(L,\wedge ,\vee ,0,1)$ be a complete lattice with an $S$-action.

\begin{enumerate}
\item $Spec^c(\mathcal{L}) = Spec^s(\mathcal{L}^0)$.

\item $Spec^c(\mathcal{L}^0) = Spec^s(\mathcal{L}^*)$.

\item If $x\in L\backslash \{1\}$ is prime, then
\begin{equation*}
Spec^{f}(\mathcal{L}/x)=(L/x)\backslash \{x/x\}.
\end{equation*}

\item Assume that the following additional condition is satisfied for our action:
\begin{equation}
s\rightharpoonup (y\vee z)=(s\rightharpoonup y)\vee (s\rightharpoonup z)\text{ for all }s\in S\text{ and }y,z\in L. \label{C1}
\end{equation}
Then $x\in L\backslash \{1\}$ is prime $\Leftrightarrow Spec^{f}(\mathcal{L}/x)=(L/x)\backslash \{x/x\}$.
\end{enumerate}
\end{thm}

\begin{Beweis}
\begin{enumerate}
\item $p\in Spec^{c}(\mathcal{L})\Leftrightarrow s\rightharpoonup 1\leq p$ or $(s\rightharpoonup 1)\vee p=1$ for all $s\in S$\newline
\hphantom{ $p\in Spec^c(\mathcal{L})$} $\Leftrightarrow (s\rightharpoonup 1)\vee p=p$ or $s\rightharpoonup ^{0}p=0^{0}$ for all $s\in S$ \newline
\hphantom{ $p\in Spec^c(\mathcal{L})$} $\Leftrightarrow s\rightharpoonup ^{0}p=p$ or $s\rightharpoonup ^{0}p=0^{0}$ for all $s\in S$ \newline
\hphantom{ $p\in Spec^c(\mathcal{L})$} $\Leftrightarrow p\in Spec^{s}(\mathcal{L}^{0})$.

\item $p\in Spec^{c}(\mathcal{L}^{0})\Leftrightarrow s\rightharpoonup ^{0}1^{0}\leq ^{0}p$ or $(s\rightharpoonup ^{0}1^{0})\vee ^{0}p=1^{0}$ for all $s\in S$\newline
\hphantom{$p\in Spec^c(\mathcal{L}^0)$} $\Leftrightarrow (s\rightharpoonup 1)\vee 0\geq p$ or $((s\rightharpoonup 1)\vee 0)\wedge p=0$ for all $s\in S$ \newline
\hphantom{$p\in Spec^c(\mathcal{L}^0)$} $\Leftrightarrow (s\rightharpoonup 1)\wedge p=p$ or $(s\rightharpoonup 1)\wedge p=0$ for all $s\in S$ \newline
\hphantom{$p\in Spec^c(\mathcal{L}^0)$} $\Leftrightarrow s\rightharpoonup ^{\ast }p=p$ or $s\rightharpoonup ^{\ast }p=0$ for all $s\in S$ \newline
\hphantom{$p\in Spec^c(\mathcal{L}^0)$} $\Leftrightarrow p\in Spec^{s}(\mathcal{L}^{\ast })$.

\item Let $x\in L\backslash \{1\}$ be prime. \textbf{Claim:} every $y/x\in (L/x)\backslash \{x/x\}$ is first.

Let $s\rightharpoonup ^{q}z/x=x/x$ for some $z/x\leq ^{q}y/x$, and suppose that $z/x\neq x/x$. Then $((s\rightharpoonup z)\vee x)/x=x/x$. It follows that $(s\rightharpoonup z)\vee x=x$, and hence $s\rightharpoonup z\leq x$. Since $x$ is prime, $s\rightharpoonup 1\leq x$ or $z\leq x$. But $z\leq x$ implies that $z=x$, and so $z/x=x/x$, a contradiction. Therefore, $s\rightharpoonup 1\leq x$, and so $(s\rightharpoonup 1)\vee x=x$. Hence $s\rightharpoonup ^{q}1/x=x/x$, and therefore $s\rightharpoonup ^{q}y/x=x/x$.

\item Assume that the additional condition (\ref{C1}) is satisfied and that $Spec^{f}(\mathcal{L}/x)=(L/x)\backslash \{x/x\}$. \textbf{Claim:} $x$ is prime in $\mathcal{L}$.

Suppose that $s\rightharpoonup y\leq x$ and $y\nleq x$. Since $s\rightharpoonup y\leq x,$ we have $(s\rightharpoonup y)\vee x=x$. It follows by (\ref{C1}) that $s\rightharpoonup (y\vee x)=(s\rightharpoonup y)\vee (s\rightharpoonup x)$. Since $s\rightharpoonup x\leq x$, we have
\begin{equation*}
s\rightharpoonup (y\vee x)=(s\rightharpoonup y)\vee (s\rightharpoonup x)\leq (s\rightharpoonup y)\vee x=x.
\end{equation*}
Therefore $((s\rightharpoonup (y\vee x))\vee x)/x=x/x$, whence $s\rightharpoonup ^{q}(y\vee x)/x=x/x$.
But $1/x$ is first in $\mathcal{L}/x,$ whence $(y\vee x)/x=x/x$ or $s\rightharpoonup ^{q}1/x=x/x$. Notice that $(y\vee x)/x=x/x$ cannot happen as $y\nleq x$. Thus $s\rightharpoonup ^{q}1/x=x/x$, whence $(s\rightharpoonup 1)\vee x=x,$ i.e. $s\rightharpoonup 1\leq x$.$\blacksquare$
\end{enumerate}
\end{Beweis}

\begin{rem}
Let $(\mathcal{L},\rightharpoonup )=(L,\wedge,\vee ,0,1)$ be a complete lattice with an $S$-action. Since $Spec^{c}(\mathcal{L})=Spec^{s}(\mathcal{L}^{0})$ by Theorem \ref{Proposition 5.8} (1), results on the second spectrum can be dualized to the coprime spectrum.
\end{rem}


\section{PS-Hollow Representation}

\hspace{1cm} Throughout this section, $R$ is a commutative ring with unity and $M$ is a non-zero $R$-module. We consider the poset $\mathcal{I}=(Ideal(R),\subseteq)$ of ideals of $R$ acting on the lattice $\mathcal{L} = Sub_R(M)$ of $R$-submodules of $M$ in the canonical way. We say that a proper $R$-submodule of $M$ is irreducible (resp. strongly irreducible, pseudo strongly irreducible, prime, coprime) iff it is so as an element of $Sub_R(M)$. On the other hand, we say that a non-zero $R$-submodule of $M$ is hollow (resp. strongly hollow, pseudo strongly hollow, second, first) iff it is so as an element of $Sub_R(M)$. For such notions for modules, one might consult \cite{Abu2011-CA}, \cite{Abu2011-TA}, \cite{Abu2015}, \cite{Yas1995}, \cite{Yas2001}, \cite{Wij2006}.

In \cite{AH-sr}, we introduced and investigated modules attaining \emph{second representations}, i.e. modules which are finite sums of \emph{second submodules} (see \cite{Ann2002}, \cite{Bai2009}). Since every second submodule is secondary, modules with secondary representations can be considered as generalizations of such modules. The notion of a secondary module can be considered, in some sense, as dual to that of a primary submodule.

In this section, we consider modules with \emph{pseudo strongly hollow representations}, i.e. modules which are finite sums of \emph{pseudo strongly hollow submodules}. Assuming suitable conditions, we prove existence and uniqueness theorems for modules with such representations (called \emph{PS-hollow representable modules}). This work is inspired by the theory of primary and secondary decompositions of modules over commutative rings (e.g. \cite{Ann2002}).

\begin{punto}
\index{primary submodule}
\index{primary decomposition}
\index{minimal primary submodule} A proper $R$-submodule $N\lneq M$ is called \emph{primary} \cite{AK2012} iff whenever $rx\in N$ for some $r \in R$ and $x\in M$, either $x\in N$ or $r^{n}M\subseteq N$ for some $n\in \mathbb{N}$. We say that $_{R}M$ has a \emph{primary decomposition} \cite{AK2012} iff there are primary submodules $N_{1},\cdots ,N_{n}$ of $M$ with $0=\bigcap_{i=1}^{n}N_{i}.$

Dually, an $R$-submodule $N\leq M$ is said to be a \emph{secondary submodule} (\cite{K1973}, \cite{M1973}) iff for any $r\in R$ we have $rN=N$ or $r^{n}N=0$ for some $n\in \mathbb{N}$. An $R$-module $M$ has a \emph{secondary representation} iff $M=\sum\limits_{i=1}^{n}N_{i},$ where $N_{1},\cdots ,N_{n}$ are secondary $R$-submodules of $M$.
\end{punto}

The notion of a primary submodule can be dualized in different ways. Instead of considering such notions, we consider the \emph{exact dual} of a pseudo strongly irreducible submodule (defined in (\ref{psi})). Recall that the pseudo strongly irreducible elements in $(Sub_R(M),\rightharpoonup)$ are exactly the prime elements in $(Sub_R(M),\rightharpoonup^*)$ (see (\ref{s*x})).

Strongly irreducible submodules (ideals) have been considered by several authors (e.g. \cite{HRR2002}, \cite{Ata2005}, \cite{Azi2008}). The dual notion of a \emph{strongly hollow submodule} was investigated by Abuhlail and Lomp in \cite{AbuC0}. In this section we consider the more general notion of a \emph{pseudo strongly hollow submodule}.
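Spelling this out in terms of the canonical action: with $I\rightharpoonup ^{\ast }N=IM\cap N$ for an ideal $I\leq R$ and a submodule $N\leq M,$ a proper submodule $K\lneq M$ is pseudo strongly irreducible iff
\begin{equation*}
IM\cap N\subseteq K\hspace{8pt}\Rightarrow \hspace{8pt}IM\subseteq K\text{\hspace{8pt} or \hspace{8pt}}N\subseteq K,
\end{equation*}
and the exact dual of this condition, with the intersection $IM\cap N$ replaced by the sum $IM+N$ and all inclusions reversed, is the pseudo strong hollowness condition restated below.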
For the convenience of the reader, we restate the definition in the special case of the lattice $Sub_R(M)$.

\begin{defn}
We say that an $R$-submodule $N\leq M$ is a \emph{pseudo strongly hollow submodule}
\index{pseudo strongly hollow submodule} (or \emph{PS-hollow}
\index{PS-hollow submodule} for short) iff for any ideal $I\leq R$ and any $R$-submodule $L\leq M,$ we have
\begin{equation}
N\subseteq IM+L\Rightarrow N\subseteq IM\text{ or }N\subseteq L. \label{PSH}
\end{equation}
We say that $_{R}M$ is a \emph{pseudo strongly hollow module} (or \emph{PS-hollow} for short) iff $M$ is a PS-hollow submodule of itself, that is, $M$ is PS-hollow iff for any ideal $I\leq R$ and any $R$-submodule $L\leq M,$ we have
\begin{equation}
M=IM+L\Rightarrow M=IM\text{ or }M=L. \label{PH}
\end{equation}
\end{defn}

\begin{ex}
Let $_{R}M$ be second. Every $R$-submodule $N\leq M$ is a PS-hollow submodule of $M.$ Indeed, suppose that $N\subseteq IM+L$ for some $L\leq M$ and $I\leq R$. Since $_{R}M$ is second, either $IM=0,$ whence $N\subseteq L,$ or $IM=M,$ whence $N\subseteq IM$. In particular, every second module is a PS-hollow module.
\end{ex}

\begin{rem}
It is clear that any strongly hollow submodule of $M$ is PS-hollow; the converse holds in case $_{R}M$ is multiplication.
\end{rem}

\begin{ex}
\label{Example 5.2}

\begin{enumerate}
\item There exists an $R$-module $M$ which is not multiplication but all of whose PS-hollow submodules are strongly hollow. Consider the Pr\"{u}fer group $M=\mathbb{Z}({p^{\infty }})$ as a $\mathbb{Z}$-module. Notice that $_{\mathbb{Z}}M$ is not a multiplication module; however, every $\mathbb{Z}$-submodule of $M$ is strongly hollow.

\item A PS-hollow submodule $N\leq M$ need not be hollow. Consider $M=\mathbb{Z}_{2}[x]$ as a $\mathbb{Z}$-module. Set $N:=x\mathbb{Z}_{2}[x]$ and $L:=(x+1)\mathbb{Z}_{2}[x]$.
Then $N,L\\lneq M$ and $M=L+N$ is PS-hollow\nwhich is not hollow. Indeed, $x^{i}=x^{i-1}(x+1)-x^{i-2}(x)$ for all $i\\geq\n2 $ and $1=(x+1)-x$. On the other hand, $IM=M$ or $IM=0$ for\nevery $I\\leq \\mathbb{Z}$.\n\\end{enumerate}\n\\end{ex}\n\n\\begin{lem}\n\\label{Lemma 5.3}Let $N\\leq M$ be a PS-hollow submodule. If $I$ is minimal\nin $A:=\\{I\\leq R\\mid N\\subseteq IM\\},$ then $I$ is a hollow\nideal of $R.$\n\\end{lem}\n\n\\begin{Beweis}\nLet $I=J+K$ for some ideals $J,K\\leq R.$ Notice that $N\\subseteq\nIM=(J+K)M=JM+KM$, whence $N\\subseteq JM$ or $N\\subseteq KM$, i.e. $J\\in A$\nor $K\\in A$. By the minimality of $I,$ it follows that $J=I$ or $K=I$.\nTherefore, $I$ is hollow.$\\blacksquare$\n\\end{Beweis}\n\n\\begin{punto}\nLet $N\\leq M$ be a PS-hollow submodule and se\n\\begin{equation*}\nA_{N}:=\\{I\\leq R\\mid N\\subseteq IM\\},\\text{ }H_{N}:=Min(A)\\text{ and \nIn(N):=\\bigcap\\limits_{I\\in H_N}IM.\n\\end{equation*\nNotice that $A_{N}$ is non-empty as $R\\in A,$ while $H_{N}$ might be empty\nand in this case $In(N):=M$ (however $H_{N}\\neq \\emptyset $ if $R$ is\nArtinian). When $N$ is clear from the context, we drop it from the index of the above notations. We\nsay that $N$ is an $H$\\emph{-PS-hollow submodule\n\\index{$H$-PS-hollow submodule} of $M.$ Every element in $H$ is called an\n\\emph{associated hollow ideal of }$M\n\\index{associated hollow ideal}. We write $Ass^{h}(M)$ to denote the set of\nall associated hollow ideals of $M.$\n\\end{punto}\n\n\\begin{prop}\n\\index{PS-hollow submodule}\n\\label{Proposition 5.5}Let $R$ be an Artinian ring, $N$ and $L$ be\nincomparable PS-hollow submodules of $M$ and $H\\subseteq Ass^{h}(M)$. 
Then $N+L$ is $H$-PS-hollow if and only if $N$ and $L$ are $H$-PS-hollow.
\end{prop}

\begin{Beweis}
$(\Leftarrow )$ Let $N\leq M$ and $L\leq M$ be $H$-PS-hollow submodules.

\textbf{Claim 1:} $H_{N+L}=H_{N}=H.$

Consider $I\in H_{N}=H_{L}.$ Clearly, $I\in A_{N+L}.$ If $I\notin H_{N+L}:=Min(A_{N+L})$, then there is $I^{\prime }\subsetneq I$ in $A_{N+L}$, so that $N\subseteq N+L\subseteq I^{\prime }M$, which contradicts the minimality of $I$ in $A_{N}$.

Conversely, let $I\in H_{N+L}$. Clearly, $I\in A_{N}\cap A_{L}.$ If $I\notin H_{N},$ then there is $I^{\prime }\in H_{N}=H_{L}$ with $I^{\prime }\subseteq I$, and therefore $N+L\subseteq I^{\prime }M,$ whence $I=I^{\prime }$ since $I^{\prime }\in A_{N+L}$ and $I$ is minimal in $A_{N+L}.$ Therefore, $H_{N+L}=H_{N}=H$.

\textbf{Claim 2:} $N+L$ is PS-hollow.

Suppose that $N+L\subseteq JM+K$ for some ideal $J\leq R$ and some submodule $K\leq M$. Then $N\subseteq N+L\subseteq JM+K$, and so $N\subseteq JM$ or $N\subseteq K$. Similarly, $L\subseteq N+L\subseteq JM+K$, and so $L\subseteq JM$ or $L\subseteq K$. Suppose that $N\subseteq JM$, whence there is $I\in H$ such that $N\subseteq IM$ and $I\subseteq J$ (as $R$ is Artinian), and so $L\subseteq IM\subseteq JM$. Therefore, either $N+L\subseteq JM$ or $N+L\subseteq K$. Hence $N+L$ is PS-hollow.

$(\Rightarrow )$ Assume that $N+L$ is $H$-PS-hollow. It is clear that $H_{N+L}\subseteq H_{L}$. Assume that $L\subseteq IM$. Then $N+L\subseteq IM+L$ and $N+L\nsubseteq L$ as $N$ and $L$ are incomparable, whence $N+L\subseteq IM$ and so $H_{L}\subseteq H_{N+L}$. Therefore, $H_{L}=H_{N+L}.$
Similarly, $H_{N}=H_{N+L}$.$\blacksquare$
\end{Beweis}

\begin{punto}
\index{DPS-hollow representable module}
\index{SPS-hollow representable module}
\index{minimal PS-hollow representation} We say that a module $M$ is \emph{PS-hollow representable}
\index{PS-hollow representable module} iff $M$ can be written as a \emph{finite} sum of PS-hollow submodules. A module $M$ is called \emph{directly PS-hollow representable}
\index{directly PS-hollow representable} (or \emph{DPS-hollow representable}, for short) iff $M$ is a \emph{finite direct} sum of PS-hollow submodules. A module $M$ is called \emph{semi-pseudo strongly hollow representable} (or \emph{SPS-hollow representable}, for short) iff $M$ is a sum of PS-hollow submodules. We call $M=\sum\limits_{i=1}^{n}N_{i},$ where each $N_{i}$ is $H_{i}$-PS-hollow, a \emph{minimal PS-hollow representation}
\index{minimal PS-hollow representation} for $M$ iff the following conditions are satisfied:

\begin{enumerate}
\item $In(N_{1}),In(N_{2}),\cdots ,In(N_{n})$ are pairwise incomparable.

\item $N_{j}\nsubseteq \sum\limits_{i=1,i\neq j}^{n}N_{i}$ for all $j\in \{1,\cdots ,n\}$.
\end{enumerate}

If such a minimal PS-hollow representation for $M$ exists, then we call each $N_{i}$ a \emph{main PS-hollow submodule}
\index{main PS-hollow submodule} \emph{of} $M$, and the elements of $H_{1},H_{2},\cdots ,H_{n}$ are called \emph{main associated hollow ideals}
\index{main associated hollow ideals} of $M;$ the set of the main associated hollow ideals of $M$ is denoted by $ass^{h}(M)$.
\end{punto}

\begin{prop}
\index{minimal PS-hollow representation}
\label{Remark 5.6}(Existence of minimal PS-hollow representations) Let $R$ be an Artinian ring and suppose that $In(N)$ is PS-hollow whenever $N\leq M$ is PS-hollow.
Then every PS-hollow representable $R$-module has a minimal PS-hollow representation.
\end{prop}

\begin{Beweis}
Let $M=\sum\limits_{i\in A}K_{i}$, where $A$ is finite and $K_{i}$ is an $H_{i}$-PS-hollow submodule for all $i\in A$.

\textbf{Step 1:} Remove the redundant submodules, i.e. those with $K_{j}\subseteq \sum\limits_{i\neq j}K_{i}.$ This is possible by the finiteness of $A$.

\textbf{Step 2:} Gather all submodules $K_{m}$ that share the same $H$ to construct an $H$-PS-hollow $N\leq M$ as a sum of such $H$-PS-hollow submodules (this is possible by Proposition \ref{Proposition 5.5}).

\textbf{Step 3:} If $In(K_{i})$ and $In(K_{j})$ are comparable for some $i\neq j$, say $In(K_{i})\subseteq In(K_{j})$, then replace $K_{i}$ and $K_{j}$ in the representation by $In(K_{j})$.$\blacksquare$
\end{Beweis}

\begin{ex} Any vector space $V$ has a trivial minimal PS-hollow representation, as it is a PS-hollow submodule of itself. Notice that $V$ is not necessarily multiplication.
\end{ex}

We provide an example of a module with a minimal PS-hollow representation that is \emph{not} a strongly hollow representation.

\begin{ex} Consider $R:=\mathbb{Z}_{pq}$, where $p$ and $q$ are distinct prime numbers, and $M=\mathbb{Z}_{pq}[x]$. Notice that $M=pM+qM$ is a minimal PS-hollow representation, while neither $pM$ nor $qM$ is strongly hollow. To see this, notice that $M=xM+\mathbb{Z}_{pq}$, but neither $pM\subseteq xM$ nor $pM\subseteq \mathbb{Z}_{pq}$ (similarly, neither $qM\subseteq xM$ nor $qM\subseteq \mathbb{Z}_{pq}$). Observe that $M$ is neither multiplication nor a vector space.
\end{ex}

\begin{rem}
\index{$H$-PS-hollow submodule}
\label{In(N) is H-PS-hollow}Let $R$ be Artinian and $N\leq M$ be an $H$-PS-hollow submodule. If $In(N)$ is PS-hollow, then $In(N)$ is $H$-PS-hollow.
To show this, observe that for any ideal $I\\leq R$, we have $N\\subseteq IM$ if and only if there exists $I^{\\prime }\\in H$ such that $N\\subseteq I^{\\prime }M$ with $I^{\\prime }\\subseteq I$ (as $R$ is Artinian), whence $In(N)\\subseteq IM$ if and only if $N\\subseteq IM$.\n\\end{rem}\n\n\\begin{lem}\n\\label{Lemma 5.7}Let $R$ be Artinian, $N\\leq M$ be an $H$-PS-hollow submodule and $In(N)\\leq L$ whenever $N\\leq L\\leq M$. Then $In(N)$ is $H$-PS-hollow.\n\\end{lem}\n\n\\begin{Beweis}\nLet $K=In(N):=\\bigcap\\limits_{I\\in H}IM$. Suppose that $K\\subseteq JM+L$ for some $J\\leq R$ and $L\\leq M.$ If $K\\nsubseteq JM$, then $N\\nsubseteq JM$ and so $N\\subseteq L$, whence $K\\subseteq L$. Therefore $K$ is PS-hollow. Thus, by Remark \\ref{In(N) is H-PS-hollow}, $In(N)$ is $H$-PS-hollow.$\\blacksquare$\n\\end{Beweis}\n\n\\begin{ex}\nIf $R$ is Artinian, then every multiplication $R$-module $M$ satisfies the conditions of Lemma \\ref{Lemma 5.7} and so $In(N)$ is $H$-PS-hollow for every $H$-PS-hollow $N\\leq M$ (in fact, $In(N)=N$ in this case).\n\\end{ex}\n\n\\begin{rem}\n\\label{Example 5.8}Let $R$ be Artinian and $M$ a multiplication $R$-module. It is easy to see that there is a unique minimal PS-hollow representation of $M$ up to the order, \\emph{i.e.} if $\\sum\\limits_{i=1}^{n}N_{i}=M=\\sum\\limits_{j=1}^{m}K_{j}$ are two minimal PS-hollow representations such that each $N_{i}$ is $H_{i}$-PS-hollow and each $K_{j}$ is $H_{j}^{\\prime }$-PS-hollow, then $n=m$ and $\\{N_{1},\\cdots ,N_{n}\\}=\\{K_{1},\\cdots ,K_{n}\\}.$\n\\end{rem}\n\n\\begin{thm}\n\\label{Theorem 5.9}\\index{minimal PS-hollow representation}(First uniqueness theorem of PS-hollow representation) Let $R$ be Artinian and $\\sum\\limits_{i=1}^{n}N_{i}=M=\\sum\\limits_{j=1}^{m}K_{j}$ be two minimal PS-hollow representations for $_{R}M$ such that $N_{i}$ is $H_{i}$-PS-hollow for each $i\\in \\{1,\\cdots ,n\\}$ and $K_{j}$ is $H_{j}^{\\prime }$-PS-hollow for each $j\\in \\{1,\\cdots ,m\\}$. Then $n=m,$ $\\{H_{1},\\cdots ,H_{n}\\}=\\{H_{1}^{\\prime },\\cdots ,H_{n}^{\\prime }\\}$ and $In(N_{i})=In(K_{j})$ whenever $H_{i}=H_{j}^{\\prime }$.\n\\end{thm}\n\n\\begin{Beweis}\nSet $N_{i}^{\\prime }=In(N_{i})$ and $K_{j}^{\\prime }=In(K_{j})$ for $i\\in \\{1,\\cdots ,n\\}$ and $j\\in \\{1,\\cdots ,m\\}$.\n\n\\textbf{Claim: }For any $i\\in \\{1,\\cdots ,n\\}$, there is $j\\in \\{1,\\cdots ,m\\}$ such that $N_{i}^{\\prime }=K_{j}^{\\prime }$.\n\n\\textbf{Step 1:} Suppose that there exists some $i\\in \\{1,\\cdots ,n\\}$ for which $N_{i}\\nsubseteq K_{j}^{\\prime }$ for all $j\\in \\{1,\\cdots ,m\\}$. Then for any $j\\in \\{1,\\cdots ,m\\}$, there is $J_{j}^{\\prime }\\in H_{j}^{\\prime }$ such that $N_{i}\\nsubseteq J_{j}^{\\prime }M.$ But $N_{i}\\subseteq M=\\sum\\limits_{j=1}^{m}K_{j}\\subseteq \\sum\\limits_{j=1}^{m}J_{j}^{\\prime }M$, whence $N_{i}\\subseteq J_{j}^{\\prime }M$ for some $j$ (a contradiction). So, $N_{i}\\subseteq K_{j}^{\\prime }$ for some $j\\in \\{1,\\cdots ,m\\}.$\n\n\\textbf{Step 2:}\\ We show that $N_{i}^{\\prime }\\subseteq K_{j}^{\\prime }$.\n\nSince $N_{i}\\subseteq K_{j}^{\\prime }$, we have $N_{i}\\subseteq IM$ for all $I\\in H_{j}^{\\prime }$. Since $R$ is Artinian, there is a minimal ideal $J_{I}\\leq I$ such that $N_{i}\\subseteq J_{I}M$ and so\n\\begin{equation*}\nN_{i}^{\\prime }=In(N_{i})=\\bigcap_{I\\in H_{i}}IM\\subseteq \\bigcap_{I\\in H_{j}^{\\prime }}J_{I}M\\subseteq K_{j}^{\\prime }.\n\\end{equation*}\nSimilarly, for any $j\\in \\{1,\\cdots ,m\\}$, there is some $i\\in \\{1,\\cdots ,n\\}$ such that $K_{j}^{\\prime }\\subseteq N_{i}^{\\prime }$. Therefore, for any $i\\in \\{1,\\cdots ,n\\}$, there is some $j\\in \\{1,\\cdots ,m\\}$ such that $N_{i}^{\\prime }=K_{j}^{\\prime }$ as $N_{1}^{\\prime },N_{2}^{\\prime },\\cdots ,N_{n}^{\\prime }$ are incomparable.\n\n\\textbf{Claim:} $H_{i}=H_{j}^{\\prime }$ whenever $N_{i}^{\\prime }=K_{j}^{\\prime }.$\n\nLet $N_{i}^{\\prime }=K_{j}^{\\prime }.$ Pick any $I\\in H_{i}$. Then $N_{i}\\subseteq IM$, whence $K_{j}^{\\prime }=N_{i}^{\\prime }\\subseteq IM$. Since $R$ is Artinian, there is a minimal ideal $I^{\\prime }\\in H_{j}^{\\prime }$ such that $I^{\\prime }\\leq I$, and therefore $I^{\\prime }=I$ as $I$ is minimal with respect to $N_{i}\\subseteq IM$. Hence $H_{i}\\subseteq H_{j}^{\\prime }$. One can prove similarly that $H_{j}^{\\prime }\\subseteq H_{i}.$ So, $H_{i}=H_{j}^{\\prime }$.$\\blacksquare$\n\\end{Beweis}\n\n\\begin{thm}\n\\label{Theorem 5.10}\\index{minimal PS-hollow representation}(Second uniqueness theorem of PS-hollow representation) Let $R$ be Artinian and $M$ be an $R$-module with two minimal PS-hollow representations $\\sum\\limits_{i=1}^{n}N_{i}=M=\\sum\\limits_{j=1}^{n}K_{j}$, where $N_{i}$ is $H_{i}$-PS-hollow for each $i\\in \\{1,\\cdots ,n\\}$ and $K_{j}$ is $H_{j}$-PS-hollow for each $j\\in \\{1,\\cdots ,n\\}$. If $H_{m}$ is minimal in $\\{H_{1},H_{2},\\cdots ,H_{n}\\}$, then either $N_{m}=K_{m}$ or $In(N_{m})$ is not PS-hollow.\n\\end{thm}\n\n\\begin{Beweis}\nLet $H_{m}$ be minimal in $\\{H_{1},H_{2},\\cdots ,H_{n}\\}$ such that $In(N_{m})$ is PS-hollow. 
For any $j\\neq m$, there is $I_{j}\\in H_{j}\\backslash H_{m}.$ But $\\sum\\limits_{j\\neq m}I_{j}M+N_{m}=M$ and so $In(N_{m})\\subseteq \\sum\\limits_{j\\neq m}I_{j}M+N_{m}$. Since $I_{j}\\in H_{j}\\backslash H_{m}$, it follows that $In(N_{m})\\nsubseteq I_{j}M$ for all $j\\in \\{1,\\cdots ,n\\}\\backslash \\{m\\}$ and so $In(N_{m})\\subseteq N_{m}$, whence $In(N_{m})=N_{m}$. One can prove similarly that $In(K_{m})=K_{m}.$ It follows that\n\\begin{equation*}\nN_{m}=In(N_{m})\\overset{\\text{Theorem }\\ref{Theorem 5.9}}{=}In(K_{m})=K_{m}.\n\\end{equation*}$\\blacksquare$\n\\end{Beweis}\n\n\\begin{cor}\n\\label{Corollary 5.11}Let $R$ be Artinian and $\\sum\\limits_{i=1}^{n}N_{i}=M=\\sum\\limits_{i=1}^{n}K_{i}$ be two minimal PS-hollow representations of $_{R}M$ such that $N_{i}$ is $H_{i}$-PS-hollow for $i\\in \\{1,\\cdots ,n\\}$ and $K_{i}$ is $H_{i}$-PS-hollow for $i\\in \\{1,\\cdots ,n\\}$. If $In(N)$ is PS-hollow whenever $N$ is a main PS-hollow submodule of $M,$ then $N_{i}=K_{i}$ for all $i\\in \\{1,\\cdots ,n\\}$.\n\\end{cor}\n\n\\begin{Beweis}\nApply Theorem \\ref{Theorem 5.10} and observe that $H_{i}$ is minimal in $\\{H_{1},H_{2},\\cdots ,H_{n}\\}$ for each $i\\in \\{1,\\cdots ,n\\}$ as $In(N_{i})$ is PS-hollow: otherwise, $H_{j}\\subsetneq H_{i}$ for some $i\\neq j$ and $In(N_{j})$ can replace $N_{i}+N_{j}$, whence $\\sum\\limits_{i=1}^{n}N_{i}$ is not minimal (a contradiction).$\\blacksquare$\n\\end{Beweis}\n\n\\begin{punto}\nWe say that an $R$-module $M$ is \\emph{pseudo distributive}\n\\index{pseudo distributive module} iff for all $L,N\\leq M$ and every $I\\leq R$ we have\n\\begin{equation}\nL\\cap (IM+N)=(L\\cap IM)+(L\\cap N).\n\\end{equation}\nEvery distributive $R$-module is indeed pseudo distributive. The two notions coincide for multiplication modules.\n\\end{punto}\n\n\\begin{ex}\nA pseudo distributive module need not be distributive. Consider $M:=\\mathbb{Z}_{2}[x]$ as a $\\mathbb{Z}$-module. Let $N:=xM$, $L:=(x+1)M$ and $K:=\\mathbb{Z}_{2}$. Then $N,L,K\\leq M$ are $\\mathbb{Z}$-submodules and\n\\begin{equation*}\n(K\\cap L)+(K\\cap N)=0\\neq K=K\\cap (L+N).\n\\end{equation*}\nNotice that $M$ is pseudo distributive as $IM=0$ or $IM=M$ for every $I\\leq R$.\n\\end{ex}\n\n\\begin{rem}\n\\index{directly hollow representable}\n\\index{hollow representable}\nAssume that $M$ is a (directly) hollow representable module for which every maximal hollow submodule is PS-hollow. Then $M$ is (directly) PS-hollow representable.\n\\end{rem}\n\nIn \\cite{AH-sr}, we introduced the notion of \\emph{s-lifting modules}:\n\n\\begin{punto}\nRecall that $M$ is a \\emph{lifting} $R$-module iff any $R$-submodule $N\\leq M$ contains a direct summand $X\\leq M$ such that $N\/X$ is small in $M\/X$ (e.g. \\cite[22.2]{JCNR}). We call $_{R}M$ \\emph{s-lifting} iff $_{R}M$ is lifting and every \\emph{maximal} hollow submodule of $M$ is second.\n\\end{punto}\n\n\\begin{prop}\n\\label{hollow-PS}\n\\index{s-lifting}\n\n\\begin{enumerate}\n\\item If $_{R}M$ is pseudo distributive, then every hollow submodule of $M$ is PS-hollow.\n\n\\item If $_{R}M$ is s-lifting, then every maximal hollow submodule of $M$ is PS-hollow.\n\\end{enumerate}\n\\end{prop}\n\n\\begin{Beweis}\n\\begin{enumerate}\n\\item Let $M$ be pseudo distributive and let $N\\leq M$ be hollow. Suppose that $N\\subseteq IM+L$, whence $N=(IM+L)\\cap N=(IM\\cap N)+(L\\cap N)$ as $M$ is pseudo distributive. Since $N$ is hollow, $N=IM\\cap N$ or $N=L\\cap N$, and therefore $N\\subseteq IM$ or $N\\subseteq L$. So, $N$ is PS-hollow.\n\n\\item Let $_{R}M$ be s-lifting. Suppose that $K\\leq M$ is a maximal hollow submodule of $M$ and that $K\\leq IM+L$. Since $M$ is s-lifting, there exist $K^{\\prime }\\subseteq K$ and $N\\leq M$ such that $K^{\\prime }\\oplus N=M$ and $K\/K^{\\prime }$ is small in $M\/K^{\\prime }$.\n\n\\textbf{Case 1: }$K^{\\prime }=0$: i.e. $M=N$. 
Since $K$ is second, we have $K=IK\\subseteq IN=IM$.\n\n\\textbf{Case 2: }$K^{\\prime }\\neq 0$: We claim that $K=K^{\\prime }$. To prove this, let $x\\in K$. Then there are $y\\in K^{\\prime }$ and $z\\in N$ such that $x=y+z$. But $y\\in K$, whence $z\\in K$. Therefore, $K\\subseteq K^{\\prime }\\oplus (K\\cap N)$, but $K$ hollow implies that $K=K^{\\prime }$ or $K=K\\cap N$. But $K^{\\prime }\\neq 0$, whence $K=K^{\\prime }$; otherwise, $K^{\\prime }\\cap N\\neq 0$. Therefore, $M=K\\oplus N$. Now, it is easy to show that\n\\begin{equation*}\nIM+L\\leq (IM\\cap K+L\\cap K)\\oplus (IM\\cap N+L\\cap N),\n\\end{equation*}\nand so\n\\begin{equation*}\nK\\leq (IM\\cap K+L\\cap K)\\oplus (IM\\cap N+L\\cap N),\n\\end{equation*}\nwhence $K\\leq IM\\cap K+L\\cap K$. Since $IM\\cap K+L\\cap K\\leq K$, it follows that $K=IM\\cap K+L\\cap K$ and so $K=IM\\cap K$ or $K=L\\cap K$, which implies that $K\\leq IM$ or $K\\leq L$.$\\blacksquare$\n\\end{enumerate}\n\\end{Beweis}\n\n\\begin{exs}\n\\label{Example 5.14}\n\\index{PS-hollow representable}\n\\index{directly PS-hollow representable}\n\\index{directly hollow representable}\n\\index{hollow representable}\n\\begin{enumerate}\n\\item Every (directly) hollow representable pseudo distributive module is (directly) PS-hollow representable.\n\n\\item Every s-lifting module with finite hollow dimension is directly PS-hollow representable.\n\n\\item The $\\mathbb{Z}$-module $M=\\mathbb{Z}_{n}$ is PS-hollow representable. To see this, consider the prime factorization $n=p_{1}^{m_{1}}\\cdots p_{k}^{m_{k}}$, and set $n_{i}=\\frac{n}{p_{i}^{m_{i}}}$ for $i\\in \\{1,\\cdots ,k\\}.$ Then $M=\\sum\\limits_{i=1}^{k}(n_{i})$ is a minimal PS-hollow representation for $M$, and $(n_{i})$ is $H_{i}$-PS-hollow where $H_{i}=\\{(n_{i})\\}$ for $i\\in \\{1,\\cdots ,k\\}$.\n\n\\item The $\\mathbb{Z}$-module $M=\\mathbb{Z}_{12}$ is PS-hollow representable ($M=4\\mathbb{Z}_{12}+3\\mathbb{Z}_{12}$), but $M$ is not second representable. Observe that $M$ is not semisimple and is not even s-lifting, as $3\\mathbb{Z}_{12}\\leq \\mathbb{Z}_{12}$ is a maximal hollow $\\mathbb{Z}$-submodule but not second.\n\n\\item Any Noetherian semisimple $R$-module is directly PS-hollow representable.\n\n\\item Any Artinian semisimple $R$-module is directly PS-hollow representable.\n\\end{enumerate}\n\\end{exs}\n\n\\begin{lem}\n\\label{Lemma 5.15}Let $N\\leq M$ be an $H$-PS-hollow submodule such that every non-small submodule $K$ of $M$ is of the form $JM$ for some ideal $J\\leq R$. Then every non-small submodule $K\\leq N$ is an $H$-PS-hollow submodule; moreover, for any ideal $I\\leq R$, we have $K\\subseteq IM$ if and only if $N\\subseteq IM$.\n\\end{lem}\n\n\\begin{Beweis}\nLet $N\\leq M$ be an $H$-PS-hollow submodule and $K\\leq N$ be a non-small submodule. Suppose that $K\\subseteq IM+L$ and $K\\nsubseteq L$. Notice that $N\\nsubseteq L$. Since $K$ is not small in $N$, there is a proper submodule $K^{\\prime }$ of $N$ such that $N=K+K^{\\prime }\\subseteq IM+L+K^{\\prime }$. If $N\\subseteq L+K^{\\prime }$, then $K^{\\prime }=JM$ for some $J\\leq R$ (notice that $K^{\\prime }$ is not small in $N$) and therefore $N\\subseteq K^{\\prime }$ (a contradiction). Hence, $N\\subseteq IM$ and so $K\\subseteq IM$, whence $K$ is PS-hollow.\n\n\\textbf{Claim:} For any ideal $I\\leq R$, $K\\subseteq IM$ if and only if $N\\subseteq IM$.\n\nAssume that $K\\subseteq IM$ for some $I\\leq R$. Then $N=K+K^{\\prime }\\subseteq IM+K^{\\prime }$. Since $N$ is PS-hollow and $K^{\\prime }\\neq N$, we have $N\\subseteq IM$. The converse is clear as $K\\subseteq N$.$\\blacksquare$\n\\end{Beweis}\n\n\\begin{ex}\n\\label{Example 5.16}Consider $M=\\mathbb{Z}_{12}$ as a $\\mathbb{Z}$-module. Then $K_{1}=3\\mathbb{Z}_{12}$ and $K_{2}=4\\mathbb{Z}_{12}$ satisfy the assumptions of Lemma \\ref{Lemma 5.15}. 
Notice that $_{\\mathbb{Z}}M$ is not semisimple.\n\\end{ex}\n\n\\begin{punto}\n\\index{comultiplication module}\nA module $_{R}M$ is called \\emph{comultiplication} \\cite{Abu2011-TA} iff for every submodule $K\\leq M$, we have $K=(0:_{M}(0:_{R}K)).$\n\\end{punto}\n\n\\begin{thm}\n\\index{PS-hollow submodule}\n\\index{second submodule}\n\\index{multiplication module}\n\\index{comultiplication module}\n\\label{Theorem 5.17}Let $_{R}M$ be semisimple, $B$ the set of all maximal second submodules of $M$, and assume that $Ann(M)\\neq \\bigcap\\limits_{K\\in B\\backslash \\{N\\}}Ann(K)$ for any $N\\in B$. The following conditions are equivalent:\n\n\\begin{enumerate}\n\\item $_{R}M$ is multiplication.\n\n\\item Every PS-hollow submodule of $M$ is simple.\n\n\\item Every second submodule of $M$ is simple.\n\n\\item $_{R}M$ is comultiplication.\n\\end{enumerate}\n\\end{thm}\n\n\\begin{Beweis}\nLet $M=\\bigoplus\\limits_{S\\in A}S$, where $S$ is a simple submodule of $M$ for all $S\\in A$.\n\n$(1)\\Rightarrow (2)$: Assume that $_{R}M$ is multiplication. Suppose that there is an $H$-PS-hollow submodule $N\\leq M$ which is not simple. Then $N$ properly contains a simple submodule $S^{\\prime }\\in A$. Since $S^{\\prime }$ is not small in $N$, Lemma \\ref{Lemma 5.15} implies that $S^{\\prime }$ is $H$-PS-hollow. But there is another simple submodule $S^{\\prime \\prime }$ of $N$ (as $N$ is not simple). Let $I:=Ann(S^{\\prime \\prime }).$ It follows that $S^{\\prime }\\subseteq IM$ while $N\\nsubseteq IM$ (which contradicts Lemma \\ref{Lemma 5.15}).\n\n\\vspace*{0.5cm}\n\n$(2)\\Rightarrow (3)$: Assume that every PS-hollow submodule of $M$ is simple.\n\n\\textbf{Claim:} Every second submodule of $M$ is PS-hollow, whence simple.\n\nLet $N$ be a second submodule of $M$ and suppose that $N\\subseteq IM+L$ for some ideal $I$ of $R$ and some $R$-submodule $L$ of $M$.\n\n{\\it Case 1}: $I\\subseteq Ann(N)$. In this case, $N\\cap IM=0$, and it follows that $N\\subseteq L$.\n\n{\\it Case 2}: $I\\nsubseteq Ann(N)$. Since $N$ is second, $N=IN\\subseteq IM$.\n\n\\vspace*{0.5cm}\n\n$(3)\\Rightarrow (1)$: Assume that every second submodule of $M$ is simple. Consider a submodule $K=\\bigoplus\\limits_{S\\in C}S$ of $M$, where $C\\subseteq A$, and set $I:=\\bigcap\\limits_{S\\in A\\backslash C}Ann(S)$. Notice that $K=IM$; otherwise, $I\\subseteq Ann(S)$ for some $S\\in C$, whence $Ann(M)=\\bigcap_{S^{\\prime }\\in A\\backslash \\{S\\}}Ann(S^{\\prime })$ (a contradiction). Since $K$ is an arbitrary submodule of $M$, we conclude that $_{R}M$ is multiplication.\n\n\\vspace*{0.5cm}\n\n$(3)\\Rightarrow (4)$: Assume that every second submodule of $M$ is simple. Consider a submodule $K=\\bigoplus\\limits_{S\\in C}S$ of $M$, where $C\\subseteq A$, and set $I:=(0:_{R}K)$. Suppose that $(0:_{M}I)\\neq K$; then there is a simple submodule $S^{\\prime }\\leq M$ with $S^{\\prime }\\cap K=0$ and $I\\subseteq Ann(S^{\\prime })$, which would yield $Ann(M)=\\bigcap\\limits_{S\\in B}Ann(S)=\\bigcap\\limits_{S\\in B\\backslash \\{S^{\\prime }\\}}Ann(S)$ (a contradiction to our assumption).\n\n\\vspace*{0.5cm}\n\n$(4)\\Rightarrow (3)$: Let $_{R}M$ be comultiplication and let $K\\leq M$ be second. For any simple $S\\leq K$ we have\n\\begin{equation}\nK=(0:_{M}(0:_{R}K))=(0:_{M}(0:_{R}S))=S,\n\\end{equation}\ni.e. $_{R}K$ is simple.$\\blacksquare$\n\\end{Beweis}\n\n\\begin{ex}\n\\label{Example 5.18}Consider the $\\mathbb{Z}$-module $M=\\bigoplus\\limits_{i=1}^{\\infty }\\mathbb{Z}_{p_{i}p_{i}^{\\prime }}$, where $p_{i}$ and $p_{i}^{\\prime }$ are primes and $p_{i}\\neq p_{j}$, $p_{i}^{\\prime }\\neq p_{j}^{\\prime }$ for all $i\\neq j\\in \\mathbb{N}$ and $p_{i}^{\\prime }\\neq p_{j}$ for any $i$ and $j$. Let the simple $\\mathbb{Z}$-modules $K_{p_{i}}$ and $K_{p_{i}^{\\prime }}$ be such that $(0:K_{p_{i}})=(p_{i})$ and $(0:K_{p_{i}^{\\prime }})=(p_{i}^{\\prime })$, so\n\\begin{equation*}\nM=\\bigoplus\\limits_{i=1}^{\\infty }K_{p_{i}}\\oplus \\bigoplus_{i=1}^{\\infty }K_{p_{i}^{\\prime }}.\n\\end{equation*}\nEvery second $\\mathbb{Z}$-submodule of $M$ is simple, while $_{\\mathbb{Z}}M$ is not multiplication. Notice that the assumption on $Ann(M)$ in Theorem \\ref{Theorem 5.17} is not satisfied for this $\\mathbb{Z}$-module, which shows that this condition cannot be dropped.\n\\end{ex}\n\nRecall from \\cite{AH-sr} that an $R$-module $M$ is \\emph{second representable} iff $M=\\sum\\limits_{i=1}^{n}K_{i}$, where $K_{i}$ is a second $R$-submodule of $M$ for all $i=1,\\cdots ,n$. If this \\emph{second representation} is minimal, the set of \\emph{main second attached primes} of $M$ is given by $att^{s}(M)=\\{Ann(K_{i})\\mid i=1,\\cdots ,n\\}$.\n\n\\begin{cor}\n\\label{Corollary 5.19}Let $_{R}M$ be semisimple and second representable with $att^{s}(M)=Min(att^{s}(M))$. The following are equivalent:\n\n\\begin{enumerate}\n\\item $M$ is multiplication.\n\n\\item Every PS-hollow submodule of $M$ is simple.\n\n\\item Every second submodule of $M$ is simple.\n\n\\item $M$ is comultiplication.\n\\end{enumerate}\n\\end{cor}\n\n\\begin{Beweis}\nSince $M$ is second representable, the set $B$ defined in Theorem \\ref{Theorem 5.17} is finite. Since $Ann(S_{i})$ is prime for every $i\\in A$ and $att^{s}(M)=Min(att^{s}(M))$ (i.e. different annihilators of simple submodules of $M$ are incomparable), we have $Ann(M)\\neq \\bigcap_{K\\in B\\backslash \\{N\\}}Ann(K)$ for every $N\\in B$. The result now follows from Theorem \\ref{Theorem 5.17}.$\\blacksquare$\n\\end{Beweis}\n\n\\begin{ex}\n\\label{Example 5.20}Consider $M=\\mathbb{Z}_{30}[x]$ as a $\\mathbb{Z}$-module. Let $K_{i}=(10x^{i})$, $N_{i}=(15x^{i})$ and $L_{i}=(6x^{i})$. 
Set $K:=\\bigoplus\\limits_{i=1}^{\\infty }K_{i}$, $N:=\\bigoplus\\limits_{i=1}^{\\infty }N_{i}$ and $L:=\\bigoplus\\limits_{i=1}^{\\infty }L_{i}$. Notice that\n\\begin{equation*}\nM=K+N+L.\n\\end{equation*}\nIt is clear that $M$ is second representable semisimple with infinite length, and\n\\begin{equation*}\natt^{s}(M)=Min(att^{s}(M))=\\{(2),(3),(5)\\}.\n\\end{equation*}\nSince $K$ is second but not simple, $_{\\mathbb{Z}}M$ is not comultiplication by Theorem \\ref{Theorem 5.17} (notice also that $_{\\mathbb{Z}}M$ is not multiplication).\n\\end{ex}\n\n\\begin{ex}\nConsider $M=\\mathbb{Z}_{30}=(10)+(6)+(15)$. It is clear that $M$ is a second representable, multiplication, comultiplication and semisimple $\\mathbb{Z}$-module in which $att^{s}(M)=Min(att^{s}(M))$ and every second submodule of $M$ is simple. By Corollary \\ref{Corollary 5.19}, every PS-hollow submodule of $M$ is simple, and so $(10),(6)$ and $(15)$ are the only PS-hollow submodules of $M$.\n\\end{ex}\n\n\\begin{thm}\n\\label{Theorem 5.21}\n\n\\begin{enumerate}\n\\item If $M=\\sum\\limits_{i=1}^{n}K_{i}$ is a minimal second representation of $M$ with $att^{s}(M)=Min(att^{s}(M))$ and $K_{i}\\cap \\sum\\limits_{j\\neq i}K_{j}$ is PS-hollow in $M$ for all $i\\in \\{1,\\cdots ,n\\}$, then $M=\\bigoplus\\limits_{i=1}^{n}K_{i}$ if and only if $K_{i}\\cap K_{j}=0$ for all $i\\neq j$.\n\n\\item Let $_{R}M$ be distributive and $M=\\sum\\limits_{i=1}^{n}K_{i}$ be a minimal PS-hollow representation such that every submodule of $K_{i}$ is zero or strongly irreducible or $H_{i}$-PS-hollow. Then $M=\\bigoplus\\limits_{i=1}^{n}K_{i}$.\n\\end{enumerate}\n\\end{thm}\n\n\\begin{Beweis}\n\\begin{enumerate}\n\\item Assume that $K_{i}\\cap K_{j}=0$ for all $i\\neq j$ in $\\{1,\\cdots ,n\\}$. Set $I_{i}=\\bigcap\\limits_{j\\neq i}Ann(K_{j})$. Since $att^{s}(M)=Min(att^{s}(M))$, we have $I_{i}M=K_{i}$. Also, $K_{i}\\cap \\sum\\limits_{j\\neq i}K_{j}\\subseteq K_{i}$. Since $K_{i}\\cap \\sum\\limits_{j\\neq i}K_{j}$ is PS-hollow and $K_{j}=I_{j}M$ for all $j\\neq i$, the containment $K_{i}\\cap \\sum\\limits_{j\\neq i}K_{j}\\subseteq \\sum\\limits_{j\\neq i}I_{j}M$ implies that $K_{i}\\cap \\sum\\limits_{j\\neq i}K_{j}\\subseteq K_{l}$ for some $l\\neq i$, whence $K_{i}\\cap \\sum\\limits_{j\\neq i}K_{j}\\subseteq K_{l}\\cap K_{i}=0$.\n\n\\item Since $_{R}M$ is distributive, it is enough to prove that $K_{i}\\cap K_{j}=0$ for all $i\\neq j$ in $\\{1,\\cdots ,n\\}$. Suppose that $K_{i}\\cap K_{j}\\neq 0$ for some $i\\neq j$. But $0\\neq K_{i}\\cap K_{j}\\subseteq K_{i},$ whence $K_{i}\\cap K_{j}$ is strongly irreducible or $H_{i}$-PS-hollow. Suppose that $K_{i}\\cap K_{j}$ is strongly irreducible. Since $K_{i}\\cap K_{j}\\subseteq K_{i}\\cap K_{j}$, it follows that $K_{i}\\subseteq K_{i}\\cap K_{j}$ or $K_{j}\\subseteq K_{i}\\cap K_{j}$ and so $K_{i}\\subseteq K_{j}$ or $K_{j}\\subseteq K_{i}$, which contradicts the minimality of $\\sum\\limits_{i=1}^{n}K_{i}$. So, $K_{i}\\cap K_{j}$ is $H_{i}$-PS-hollow and at the same time $H_{j}$-PS-hollow, which contradicts the minimality of $\\sum\\limits_{i=1}^{n}K_{i}$. Therefore $K_{i}\\cap K_{j}=0$ for all $i\\neq j$ in $\\{1,\\cdots ,n\\}.$$\\blacksquare$\n\\end{enumerate}\n\\end{Beweis}\n\n\\begin{exs}\n\\label{Example 5.22}\n\n\\begin{enumerate}\n\\item Every second representable semisimple module satisfies the assumptions of Theorem \\ref{Theorem 5.21} (2).\n\n\\item The $\\mathbb{Z}$-module $M=\\mathbb{Z}_{n}$ satisfies all the assumptions of Theorem \\ref{Theorem 5.21} ((1) and (2)).\n\\end{enumerate}\n\\end{exs}\n\n\\begin{thm}\n\\label{Theorem 5.23}Let $R$ be Artinian and $M=\\sum\\limits_{i=1}^{n}K_{i}$ be a minimal PS-hollow representation of $_{R}M$. Suppose that the submodules of $K_{i}$ are PS-hollow $\\forall i\\in \\{1,\\cdots ,n\\}$. If $In(K_{i})\\cap In(K_{j})=0$ $\\forall i\\neq j$ in $\\{1,\\cdots ,n\\},$ then $M=\\bigoplus\\limits_{i=1}^{n}K_{i}$.\n\\end{thm}\n\n\\begin{Beweis}\nAssume that $In(K_{i})\\cap In(K_{j})=0$ for all $i\\neq j$ in $\\{1,\\cdots ,n\\}.$ For each $j\\in \\{1,\\cdots ,n\\}$, set $N_{j}:=K_{j}\\cap \\sum_{i\\neq j}K_{i}$. Then $N_{j}\\subseteq In(K_{i})$ for some $i\\neq j$. Otherwise, $N_{j}\\nsubseteq In(K_{i})$ for all $i\\neq j$, and so for all $i\\neq j$ there is $I_{i}\\in H_{i}$ such that $N_{j}\\nsubseteq I_{i}M$. But $N_{j}\\subseteq \\sum\\limits_{i\\neq j}K_{i}\\subseteq \\sum\\limits_{i\\neq j}I_{i}M$ and $N_{j}$ is a PS-hollow submodule by assumption, whence $N_{j}\\subseteq I_{i}M$ for some $i\\neq j$ in $\\{1,\\cdots ,n\\}$ (a contradiction).\n\nObserve that $N_{j}\\subseteq K_{j}\\subseteq In(K_{j})$ and so $N_{j}\\subseteq In(K_{i})\\cap In(K_{j})$ for some $i\\neq j$ in $\\{1,\\cdots ,n\\}.$ It follows that $N_{j}=0$ for all $j\\in \\{1,\\cdots ,n\\}$ and therefore $M=\\bigoplus\\limits_{i=1}^{n}K_{i}$.$\\blacksquare$\n\\end{Beweis}\n\n\\begin{cor}\n\\label{Corollary 5.24}Let $R$ be Artinian and $M=\\sum\\limits_{i=1}^{n}K_{i}$ a minimal PS-hollow representation of $_{R}M$. Suppose that the nonzero submodules of $In(K_{i})$ are $H_{i}$-PS-hollow for all $i\\in \\{1,\\cdots ,n\\}$, where $K_{i}$ is $H_{i}$-PS-hollow for each $i\\in \\{1,\\cdots ,n\\}$. Then $M=\\bigoplus\\limits_{i=1}^{n}K_{i}$.\n\\end{cor}\n\n\\begin{Beweis}\nSuppose that $In(K_{i})\\cap In(K_{j})\\neq 0$ for some $i\\neq j$ in $\\{1,\\cdots ,n\\}.$ Then $In(K_{i})\\cap In(K_{j})$ is $H_{i}$-PS-hollow and at the same time $H_{j}$-PS-hollow, which is a contradiction since $H_{i}\\neq H_{j}$ as $M=\\sum\\limits_{i=1}^{n}K_{i}$ is a minimal PS-hollow representation. Therefore $In(K_{i})\\cap In(K_{j})=0$. The result is obtained by Theorem \\ref{Theorem 5.23}.$\\blacksquare$\n\\end{Beweis}\n\n\\section{Introduction} \\label{sec_1}\nMultipath fading is an unavoidable physical phenomenon that considerably affects the performance of wideband wireless communication systems. While usually viewed as a deteriorating factor, multipath fading can also be exploited to improve the performance by using RAKE-type receivers. However, in the soft handover (SHO) region, due to the limited number of fingers in the mobile unit, we are faced with the problem of how to judiciously select a subset of paths for RAKE reception to achieve the required performance.\n\nFinger replacement techniques for RAKE reception in the SHO region have been proposed and analyzed over independent and identically distributed (i.i.d.) fading environments with two base stations (BSs) in~\\cite{kn:S_Choi_2008_4}, which was extended to the case of multiple BSs in~\\cite{kn:S_Choi_2008_1}. The proposed schemes in~\\cite{kn:S_Choi_2008_1}, as shown in Fig.~\\ref{example_2}, are based on block comparison among groups of resolvable paths from different BSs and lead to a reduction of complexity while offering commensurate performance in comparison with the previously proposed schemes in~\\cite{kn:S_Choi_2008_2, kn:S_Choi_2008_3}. However, in practice, the i.i.d. 
fading scenario on the diversity paths is not always realistic because, for example, adjacent multipath routes do not share the same path loss, resulting in power unbalance among the paths. Although this non-identical consideration is important from a practical standpoint, \\cite{kn:S_Choi_2008_1} was able to investigate the effect of a non-uniform power delay profile on the finger replacement schemes only with computer simulations, due to the high complexity of the analysis. Note that the method applied in \\cite{kn:MS_GSC} to derive the required key statistics under i.i.d. fading assumptions cannot be directly adopted to the case of independent and non-identically distributed (i.n.d.) fading environments. The major difficulty lies in deriving the target statistics with non-identical parameters.\n\nWith this observation in mind, we mathematically tackle these difficulties in this report. More specifically, we address the key mathematical formalism, namely the statistics of partial sums and the two-dimensional joint statistics of partial sums of i.n.d. ordered random variables (RVs), required for the accurate performance analysis of the finger replacement scheme with non-identical parameters. The rest of this report is organized as follows. In Section~\\ref{sec_2}, we present the system model as well as the mode of operation of the finger replacement scheme under consideration, and provide a general comprehensive framework for the outage performance based on statistical results over i.n.d. fading channels. We then provide in Section~\\ref{sec_3} some closed-form expressions of the required key statistics. Finally, Section~\\ref{conc} provides some concluding remarks.\n\n\\section{System Models and Performance Measures} \\label{sec_2}\nAmong the path scanning schemes proposed in~\\cite{kn:S_Choi_2008_1}, we consider the full scanning method. With this method, if the combined output signal-to-noise ratio (SNR) of the currently assigned fingers is greater than a certain target SNR, a one-way SHO is used and no finger replacement is needed. Otherwise, the receiver attempts a two-way SHO by starting to scan additional paths from the serving BS as well as all the target BSs.\n\nWe assume that $L$ BSs are active, and there are a total of ${N_{\\left( L \\right)}}$ resolvable paths, where ${N_{\\left( L \\right)}} = \\sum\\limits_{n = 1}^L {{N_n}}$ and $N_n$ is the number of resolvable paths from the $n$-th BS. In the SHO region, only $N_c$ out of ${N_{\\left( n \\right)}}$ $\\left(1 \\le n \\le L\\right)$ paths are used for RAKE reception. Without loss of generality, let $N_1$ be the number of resolvable paths from the serving BS and $N_2, N_3, \\ldots ,N_L$ be those from the target BSs. In the SHO region, the receiver is assumed at first to rely only on the $N_1$ resolvable paths and, as such, starts with ${N_c}\/{N_1}$-generalized selection combining (GSC)~\\cite{kn:alouini_wi_j3}. These schemes are based on the comparison of blocks consisting of ${N_s}\\left( { < {N_c} < {N_n}} \\right)$ paths from each BS.\n\nLet $u_{i,n}$ ($i=1,2,\\ldots,N_n$) be the $i$-th order statistic out of the $N_n$ SNRs of paths from the $n$-th BS, obtained by arranging the $N_n$ $(L\\ge2)$ nonnegative i.n.d. RVs $\\left\\{ {\\gamma_{j,n} } \\right\\}_{j = 1}^{N_n}$, where $\\gamma_{j,n}$ is the SNR of the $j$-th path from the $n$-th BS, in decreasing order of magnitude such that $u_{1,n} \\ge u_{2,n} \\ge \\cdots \\ge u_{{N_n},n}$. 
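The mode of operation described above can be illustrated numerically. The following is a minimal Monte Carlo sketch, assuming Rayleigh fading (so each path SNR $\gamma_{j,n}$ is exponentially distributed with its own, non-identical mean); the values of $L$, $N_n$, $N_c$, $N_s$, the per-path average SNRs, and the target SNR below are hypothetical and chosen only for illustration.

```python
import random

random.seed(12345)

# Illustrative parameters (hypothetical, not from this report):
# L = 2 BSs, Rayleigh fading, i.n.d. (non-uniform) power delay profile.
L = 2
MEANS = [[2.0, 1.5, 1.0, 0.5],   # serving BS: N_1 = 4 average path SNRs
         [1.8, 1.2, 0.9, 0.4]]   # target BS:  N_2 = 4 average path SNRs
N_C, N_S = 3, 1                  # N_c combined paths, blocks of N_s paths
GAMMA_T = 3.0                    # target SNR

def combined_snr():
    # Order each BS's path SNRs in decreasing magnitude: u_{1,n} >= u_{2,n} >= ...
    u = [sorted((random.expovariate(1.0 / m) for m in MEANS[n]), reverse=True)
         for n in range(L)]
    y = sum(u[0][:N_C - N_S])                    # Y as in eq. (1)
    w = [sum(u[0][N_C - N_S:N_C])]               # W_1 as in eq. (2), n = 1
    w += [sum(u[n][:N_S]) for n in range(1, L)]  # W_n as in eq. (2), n >= 2
    if y + w[0] >= GAMMA_T:
        return y + w[0]          # one-way SHO: no finger replacement
    return y + max(w)            # two-way SHO: best replacement block

samples = [combined_snr() for _ in range(20000)]
outage = sum(s < GAMMA_T for s in samples) / len(samples)
print(f"estimated outage probability at x = gamma_T: {outage:.3f}")
```

Under these assumptions, the empirical fraction of trials with final combined SNR below the threshold approximates the outage probability $F_{\gamma_F}(x)$ discussed in this section.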
If we let\n\\begin{equation} \\small \\label{eq:1}\nY = \\sum\\limits_{i = 1}^{{N_c} - {N_{s}}} {{u_{i,1}}}\n\\end{equation}\n\\begin{equation} \\small \\label{eq:2}\n{W_n} = \\left\\{ \\begin{array}{l}\n \\sum\\limits_{i = {N_c} - {N_{s}} + 1}^{{N_c}} {{u_{i,n}}}, \\,\\,\\quad \\quad n = 1 \\\\\n \\sum\\limits_{i = 1}^{{N_{s}}} {{u_{i,n}}}, \\quad \\quad \\quad \\quad \\quad \\,\\;\\,n = 2, \\ldots ,L , \\\\\n \\end{array} \\right.\n\\end{equation}\nthen the received output SNR after GSC is given by $Y + {W_1}$. At\nthe beginning of every time slot, the receiver compares the GSC\noutput SNR, $Y + {W_1}$, with a certain target SNR. If $Y + {W_1}$ is\ngreater than or equal to the target SNR, a one-way SHO is used and\nno finger replacement is needed. On the other hand, whenever $Y +\n{W_1}$ falls below the target SNR, the receiver attempts a two-way\nSHO by starting to scan additional paths from the target BSs.\n\nTo study the performance of the finger replacement scheme for i.n.d. fading\nassumptions, we look into the outage performance. Based on the\nmode of operation in Section \\cite[II-B]{kn:S_Choi_2008_1}, an\noverall outage probability is declared when\nthe final combined SNR, $\\gamma_F$, falls below a predetermined\nthreshold, $x$, as\n\\begin{equation} \\small \\label{eq:3}\n{F_{{\\gamma _F}}}\\left( x\n\\right) = \\Pr \\left[ {{\\gamma _F} < x} \\right]\n\\end{equation}\nwhere\n\\begin{equation} \\small\\label{eq:4}\n{\\gamma _F} = \\left\\{ \\begin{array}{lr}\n Y + {W_1}, &Y + {W_1} \\ge {\\gamma _T} \\\\\n Y + \\max \\left\\{ {{W_1},{W_2}, \\cdots ,{W_L}} \\right\\}, &Y + {W_1} < {\\gamma _T}. 
\\\\\n \\end{array} \\right.\n\\end{equation}\nConsidering two cases that i) the final combined SNR is greater than or equal to the\ntarget SNR, $\\gamma_T$, (i.e., $x \\ge {\\gamma _T}$) and ii) the\nfinal combined SNR falls below the target SNR, (i.e., $0 < x <\n{\\gamma _T}$), separately, we can\nrewrite (\\ref{eq:3}) as\n\\begin{equation} \\small \\label{eq:7}\n{F_{{\\gamma _F}}}\\left( x \\right) = \\left\\{ \\begin{array}{ll}\n \\Pr \\left[ {Y + \\max \\left\\{ {{W_1},{W_2}, \\cdots ,{W_L}} \\right\\} < x} \\right], & 0 < x < {\\gamma _T} \\\\\n \\Pr \\left[ {{\\gamma _T} \\le Y + {W_1} < x} \\right] \\\\\n + \\Pr \\left[ {Y + {W_1} < {\\gamma _T},{\\gamma _T} \\le Y + \\max \\left\\{ {{W_1},{W_2}, \\cdots ,{W_L}} \\right\\} < x} \\right],& x \\ge {\\gamma _T}.\\\\\n \\end{array} \\right.\n\\end{equation}\nThe detailed derivation is presented in the Appendix~\\ref{appendix_1}.\n\nWith (\\ref{eq:7}), we now need to investigate the following three probabilities,\na) $\\Pr \\big[ {{\\gamma _T} \\!\\le \\!Y\\! + \\!{W_1}\\! < \\!x} \\big]$, b) $\\Pr \\big[ Y \\!+\\! {W_1} \\!<\\! {\\gamma _T},{\\gamma _T} \\!\\le\\! Y\\! + \\!\\max \\left\\{\\! {W_1},\\!{W_2}, \\!\\cdots ,\\!{W_L}\\! \\right\\} \\!< x \\big]$, and c) $ \\Pr \\big[ Y \\!+ \\!\\max \\{\\! W_1, \\!W_2, \\!\\cdots , \\!W_L\\! \\} \\!< x \\big]$.\nNote that the major\ndifficulty in the analysis is to derive the required key\nstatistics of ordered RVs. In~\\cite{kn:S_Choi_2008_4} and \\cite{kn:S_Choi_2008_1}, the required statistics were obtained by\napplying the conditional probability density function (PDF) based\napproach proposed in \\cite{kn:MS_GSC} which is only valid for an assumption of i.i.d.\nfading from path to path. However, in this report, our concern is that the\naverage SNR of each path (or branch) is different, which means more\npractical channel models. For i.n.d. consideration unlike the\ni.i.d. 
case, the i.n.d. setting requires us to consider realistic frequency-selective channels with a non-uniform delay profile (for example, an exponentially decaying power delay profile) and to deal with order statistics of i.n.d. RVs. As a result, the method proposed in \cite{kn:MS_GSC} cannot be directly adopted in the i.n.d. fading environments considered here.

Recently, a unified framework to determine the joint statistics of partial sums of ordered i.i.d. RVs has been introduced in~\cite{kn:unified_approach}. With this approach, the required key statistics of any partial sums of ordered RVs can be obtained systematically in terms of the moment generating function (MGF) and the PDF. The extension of this mathematical approach to i.n.d. fading channels can be found in \cite{kn:sungsiknam2013_ISIT, kn:IND_MGF_sungsiknam_1}. With the help of \cite{kn:unified_approach, kn:sungsiknam2013_ISIT, kn:IND_MGF_sungsiknam_1}, the required key statistics to investigate the outage probability in~(\ref{eq:7}) over i.n.d.
fading channels can be obtained.

Note that, based on the mode of operation, $Y$ and $W_1$ are correlated, while $W_n$ (for $n = 2, \ldots ,L$) is independent of $Y$. Hence, by adopting the proposed approach in \cite{kn:sungsiknam2013_ISIT, kn:IND_MGF_sungsiknam_1} instead of applying \cite{kn:MS_GSC}, the required key statistics in (\ref{eq:7}) can be evaluated as follows:
\begin{enumerate}
\item [a)] For $\Pr \left[ {{\gamma _T} \le Y + {W_1} < x} \right]$,
\begin{equation} \small \label{eq:8}
\Pr \left[ {{\gamma _T} \le Y + {W_1} < x} \right] = {F_{Y + {W_1}}}\left( x \right) - {F_{Y + {W_1}}}\left( {{\gamma _T}} \right).
\end{equation}
\item [b)] For $\Pr \left[ {Y + \max \left\{ {{W_1},{W_2}, \cdots ,{W_L}} \right\} < x} \right]$,
\begin{equation} \small \label{eq:9}
\begin{aligned}
&\Pr \left[ {Y + \max \left\{ {{W_1},{W_2}, \cdots ,{W_L}} \right\} < x} \right] \\
&= \int_0^x {\int_0^{x - y} {{f_{Y,{W_1}}}\left( {y,{w_1}} \right)\int_0^{x - y} {{f_{{W_2}}}\left( {{w_2}} \right)d{w_2} \cdots \int_0^{x - y} {{f_{{W_L}}}\left( {{w_L}} \right)} d{w_L}d{w_1}dy} } } \\
&= \int_0^x {\int_0^{x - y} {{f_{Y,{W_1}}}\left( {y,{w_1}} \right)\prod\limits_{n = 2}^L {{F_{{W_n}}}\left( {x - y} \right)} d{w_1}dy} }.
\end{aligned}
\end{equation}
\item [c)] For $\Pr \left[ {Y + {W_1} < {\gamma _T},{\gamma _T} \le Y + \max \left\{ {{W_1},{W_2}, \cdots ,{W_L}} \right\} < x} \right]$,
\begin{equation} \small \label{eq:10}
\begin{aligned}
&\Pr \left[ {Y + {W_1} < {\gamma _T},{\gamma _T} \le Y + \max \left\{ {{W_1},{W_2}, \cdots ,{W_L}} \right\} < x} \right] \\
&= \int_0^{{\gamma _T}} {\int_0^{{\gamma _T} - y} {{f_{Y,{W_1}}}\left( {y,{w_1}} \right)\int_0^{x - y} {{f_{{W_2}}}\left( {{w_2}} \right)d{w_2} \cdots \int_0^{x - y} {{f_{{W_L}}}\left( {{w_L}} \right)} d{w_L}d{w_1}dy} } } \\
&= \int_0^{{\gamma _T}} {\int_0^{{\gamma _T} - y} {{f_{Y,{W_1}}}\left(
{y,{w_1}} \right)\prod\limits_{n = 2}^L {{F_{{W_n}}}\left( {x - y} \right)} d{w_1}dy} }.
\end{aligned}
\end{equation}
\end{enumerate}

Similar to the i.i.d. case in~\cite{kn:S_Choi_2008_1}, it is also very important to study the complexity of the finger replacement schemes in the i.n.d. case by accurately quantifying the performance measures required during the SHO process, such as the average number of path estimations, the average number of SNR comparisons, and the SHO overhead. Note that with these performance measures, a comprehensive investigation of the tradeoff between complexity and performance over i.n.d. fading channels becomes feasible. These important design parameters can be evaluated by directly applying the formulas presented in~\cite{kn:S_Choi_2008_1} together with the required key statistics of i.n.d. ordered RVs derived in this work. Hence, based on the mathematical approach proposed in~\cite{kn:unified_approach, kn:sungsiknam2013_ISIT, kn:IND_MGF_sungsiknam_1}, we focus here on the derivation of the following key statistics: the cumulative distribution function (CDF) of the ${N_c}\/{N_1}$-GSC output SNR, ${F_{Y + {W_1}}}\left( \cdot \right)$; the 2-dimensional joint PDF of the two adjacent partial sums of order statistics, $Y$ and $W_1$, ${{f_{Y,{W_1}}}\left( \cdot,\cdot \right)}$; and the CDF of the sum of the $N_s$ strongest paths from each target BS, ${{F_{{W_n}}}\left( \cdot \right)}$ (i.e., $2 \le n \le L$).


\section{Key Statistics} \label{sec_3}
In this section, we introduce the key statistics which are essential to evaluate (\ref{eq:8}), (\ref{eq:9}), and (\ref{eq:10}) in Sec.~\ref{sec_2}. More specifically, in all three cases, only the best ${N_c}$ or $N_s$ among the $N_n$ ($N_s \le {N_c} \le {N_n}$) ordered RVs are involved in the partial sums.
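As a quick numerical sanity check, the mode of operation in (\ref{eq:4}) and the case split behind (\ref{eq:7}) can be simulated directly. The following Monte Carlo sketch (all parameter values, the seed, and the use of NumPy are illustrative assumptions, not part of the derivation) draws i.n.d. exponential path SNRs, forms the ordered partial sums $Y$ and $W_n$ of (\ref{eq:1}) and (\ref{eq:2}), and verifies sample-wise that the outage event splits along the one-way/two-way SHO decision:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (assumed) system sizes: L BSs, N resolvable paths per BS,
# N_c combined fingers, N_s replaceable fingers (N_s <= N_c <= N).
L, N, N_c, N_s = 3, 5, 4, 2
gamma_T = 3.0                                  # target SNR
gbar = 2.0 * np.exp(-0.2 * np.arange(N))       # i.n.d. average path SNRs

trials = 100_000
# u[t, n, :]: path SNRs of BS n, sorted in descending order (order statistics)
u = rng.exponential(gbar, size=(trials, L, N))
u = -np.sort(-u, axis=2)

Y = u[:, 0, :N_c - N_s].sum(axis=1)            # eq. (1)
W = np.empty((trials, L))
W[:, 0] = u[:, 0, N_c - N_s:N_c].sum(axis=1)   # eq. (2), n = 1
W[:, 1:] = u[:, 1:, :N_s].sum(axis=2)          # eq. (2), n = 2..L

W_best = W.max(axis=1)
gsc = Y + W[:, 0]                              # GSC output SNR
gamma_F = np.where(gsc >= gamma_T, gsc, Y + W_best)   # eq. (4)

x_lo, x_hi = 2.0, 5.0  # sample thresholds below / above gamma_T
# i) for 0 < x < gamma_T the outage event reduces to {Y + max_n W_n < x},
#    since max_n W_n >= W_1
assert np.array_equal(gamma_F < x_lo, Y + W_best < x_lo)
# ii) for x >= gamma_T the outage event partitions along the SHO decision
lhs = gamma_F < x_hi
rhs = ((gsc >= gamma_T) & (gsc < x_hi)) | ((gsc < gamma_T) & (Y + W_best < x_hi))
assert np.array_equal(lhs, rhs)
```

Because $\max_n W_n \ge W_1$, both assertions hold exactly (sample-wise), not merely up to Monte Carlo error.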
Thus, based on the unified framework in \cite{kn:unified_approach} and its extension to the i.n.d. case in \cite{kn:sungsiknam2013_ISIT, kn:IND_MGF_sungsiknam_1}, the key statistics for the three cases can be derived by applying the special step approach based on substituted groups instead of the original groups for each case (i.e., starting from 2-dimensional, 4-dimensional, and 2-dimensional joint statistics, respectively) as
\begin{enumerate}
 \item [1)] ${F_{Y + {W_1}}}\left( x \right)$:
If we let $Z'=Y + {W_1}$, where $Y+W_1=\sum\limits_{i=1}^{N_c} {u_{i,1}}$, for convenience, then we can derive the target CDF of $Z'$ with the 2-dimensional joint PDF of $Z_1=\sum\limits_{i=1}^{N_c-1} {u_{i,1}}$ and $Z_2={u_{N_c,1}}$ as
\begin{equation} \small \label{eq:11}
\begin{aligned}
 {F_{Y + {W_1}}}\left( x \right) =& \int_0^x {{f_{Z'}}\left( z \right)dz} \\
 =& \int_0^x {\int_0^{\frac{z}{{{N_c}}}} {{f_{{Z_1},{Z_2}}}\left( {z - {z_2},{z_2}} \right)d{z_2}} dz} .
\end{aligned}
\end{equation}
 \item [2)] ${{f_{Y,{W_1}}}\left( x, y\right)}$:
In this case, we can derive the target 2-dimensional PDF of $Y=\sum\limits_{i = 1}^{{N_c} - {N_{s}}} {{u_{i,1}}}$ and $W_1=\sum\limits_{i = {N_c} - {N_{s}} + 1}^{{N_c}} {{u_{i,1}}}$ by transforming the 4-dimensional joint PDF of $Z_1 = \sum\limits_{i = 1}^{{N_c-N_s} - 1} {u_{i,1} }$, $Z_2 = u_{N_c-N_s,1}$, $Z_3 = \sum\limits_{i = {N_c-N_s} + 1}^{N_c - 1} {u_{i,1} }$, and $Z_4 = u_{N_c,1}$ with the help of a function of a marginal PDF as
\begin{equation} \small \label{eq:12} {f_{Y,{W_1}}}\left( {x,y} \right) = \int_0^{\frac{y}{{{N_s} }}} {\int_{\frac{y}{{{N_s}}}}^{\frac{x}{{N_c-N_s}}} {{f_{{Z_1},{Z_2},{Z_3},{Z_4}}}\left( {x - {z_2},{z_2},y - {z_4},{z_4}} \right)d{z_2}d{z_4}} }.
\end{equation}
 \item [3)] ${{F_{{W_n}}}\left( x \right)}$ (i.e., $2 \le n \le L$):
Similar to case 1), the target one-dimensional CDF
of $W_n=\sum\limits_{i=1}^{N_s} {u_{i,n}}$ can be derived with the 2-dimensional joint PDF of $Z'_1=\sum\limits_{i=1}^{N_s-1} {u_{i,n}}$ and $Z'_2={u_{N_s,n}}$, with the help of a function of a marginal PDF, as
\begin{equation} \small \label{eq:13}
 {F_{{W_n}}}\left( x \right) = \int_0^x {\int_0^{\frac{z}{{{N_s}}}} {{f_{{Z'_1},{Z'_2}}}\left( {z - {z'_2},{z'_2}} \right)d{z'_2}} dz}.
\end{equation}
\end{enumerate}

The above generic results in (\ref{eq:11})-(\ref{eq:13}) are quite general and can be applied to any RVs. In this report, we limit our analysis to the case of i.n.d. RVs with a common exponential PDF, ${p_{{i_{l,n}}}}\left( x \right) = \frac{1}{{{{\bar \gamma }_{{i_{l,n}}}}}}\exp \left( { - \frac{x}{{{{\bar \gamma }_{{i_{l,n}}}}}}} \right)$, and CDF, $P_{{i_{l,n}}}\left( x \right) = 1-\exp \left( { - \frac{x}{{{{\bar \gamma }_{{i_{l,n}}}}}}} \right)$, for $x\ge 0$, respectively, where ${\bar \gamma }_{i_{l,n}}$ is the average of the $l$-th RV at the $n$-th BS. We can then obtain the target statistics in a ready-to-use form for the i.n.d. exponential case, as given in the following subsections. The detailed derivations are presented in Appendices~\ref{appendix_2} and \ref{appendix_3}.

\subsection{CDF of the ${N_c}\/{N_1}$-GSC Output SNR over i.n.d. Rayleigh Fading, ${F_{Y + {W_1}}}\left( x \right)$}
\begin{equation} \scriptsize \label{eq:14}
\!\!\!\begin{aligned}
{F_{Y + {W_1}}}\left( x \right) &= \!\sum\limits_{{i_{{N_c,1}}} = 1}^{N_1}\! {\frac{1}{{{{\bar \gamma }_{{i_{{N_c,1}}}}}}}\!\sum\limits_{\substack{
 {i_{{N_c} + 1,1}}, \cdots ,{i_{N_1,1}} \\
 {i_{{N_c} + 1,1}} \ne \cdots \ne {i_{N_1,1}} \\
 \vdots \\
 {i_{N_1,1}} \ne {i_{{N_c} + 1,1}} \\
 }}^{1,2, \cdots {N_1}} \!{ \sum\limits_{\left\{\! {{i_{1,1}}, \cdots ,{i_{{N_c} - 1,1}}} \!\right\} \in {P_{{N_c} - 1}}\!\left( \!{{I_{N_1}} - \left\{\!
{{i_{{N_c,1}}}} \\!\\right\\} - \\left\\{\\! {{i_{{N_c} + 1,1}}, \\ldots ,{i_{N_1,1}}} \\!\\right\\}} \\!\\right)} {\\prod\\limits_{\\scriptstyle q = 1 \\atop\n \\scriptstyle \\left\\{\\! {{i_{1,1}}, \\cdots ,{i_{{N_c} - 1,1}}} \\!\\right\\}}^{{N_c}} \\!\\!\\!\\!{{C_{q,1,{N_c} - 1}}} } } }\n\\\\\n&\\times \\left[ {\\frac{{ - 1}}{{\\left( {\\sum\\limits_{l = 1}^{{N_c}} {\\frac{1}{{{{\\bar \\gamma }_{{i_{l,1}}}}}} - \\frac{{{N_c}}}{{{{\\bar \\gamma }_{{i_{q,1}}}}}}} } \\right)}}\\left\\{ {{{\\bar \\gamma }_{{i_{q,1}}}}\\left( {1 - \\exp \\left( { - \\frac{x}{{{{\\bar \\gamma }_{{i_{q,1}}}}}}} \\right)} \\right) - \\frac{{{N_c}}}{{\\sum\\limits_{l = 1}^{{N_c}} {\\frac{1}{{{{\\bar \\gamma }_{{i_{l,1}}}}}}} }}\\left( {1 - \\exp \\left( { - \\left( {\\sum\\limits_{l = 1}^{{N_c}} {\\frac{1}{{{{\\bar \\gamma }_{{i_{l,1}}}}}}} } \\right)\\frac{x}{{{N_c}}}} \\right)} \\right)} \\right\\}} \\right.\n\\\\\n&\\quad\\quad- \\prod\\limits_{\\scriptstyle k' = 1 \\atop\n \\scriptstyle \\left\\{ {{i_{{N_c} + 1,1}}, \\cdots ,{i_{N_1,1}}} \\right\\}}^{{N_1} - {N_c}} {{{\\left( { - 1} \\right)}^{k'}}\\sum\\limits_{{j_1} = {j_0} + {N_c} + 1}^{{N_1} - k' + 1} { \\cdots \\sum\\limits_{{j_{k'}} = {j_{k' - 1}} + 1}^{N_1} {\\frac{1}{{\\left( {\\sum\\limits_{l = 1}^{{N_c}} {\\frac{1}{{{{\\bar \\gamma }_{{i_{l,1}}}}}} + \\sum\\limits_{m = 1}^{k'} {\\frac{1}{{{{\\bar \\gamma }_{{i_{{j_m,1}}}}}}} - \\frac{{{N_c}}}{{{{\\bar \\gamma }_{{i_{q,1}}}}}}} } } \\right)}}} } }\n\\\\\n&\\quad\\quad\\left. 
{ \\times \\left\\{ {{{\\bar \\gamma }_{{i_{q,1}}}}\\left( {1 - \\exp \\left( { - \\frac{x}{{{{\\bar \\gamma }_{{i_{q,1}}}}}}} \\right)} \\right) - \\frac{{{N_c}}}{{\\sum\\limits_{l = 1}^{{N_c}} {\\frac{1}{{{{\\bar \\gamma }_{{i_{l,1}}}}}} + \\sum\\limits_{m = 1}^{k'} {\\frac{1}{{{{\\bar \\gamma }_{{i_{{j_m,1}}}}}}}} } }}\\left( {1 - \\exp \\left( { - \\left( {\\sum\\limits_{l = 1}^{{N_c}} {\\frac{1}{{{{\\bar \\gamma }_{{i_{l,1}}}}}} + \\sum\\limits_{m = 1}^{k'} {\\frac{1}{{{{\\bar \\gamma }_{{i_{{j_m,1}}}}}}}} } } \\right)\\frac{x}{{{N_c}}}} \\right)} \\right)} \\right\\}} \\right],\n\\end{aligned}\n\\end{equation}\nwhere $j_0=0$,\n\\begin{equation} \\scriptsize \\label{eq:15}\n{C_{l,n_1,n_2}} = \\frac{1}{{\\prod\\limits_{l = {n_1}}^{{n_2}} {\\left( { - {{\\bar \\gamma }_{{i_{l,1}}}}} \\right)} }{F'_{l,n_1,n_2}\\left( {\\frac{1}{{{{\\bar \\gamma }_{{i_{l,1}}}}}}} \\right)}},\n\\end{equation}\n\\begin{equation} \\scriptsize \\label{eq:16}\n\\!\\!\\!\\!F'_{l,n_1,n_2}\\left( x \\right) \\!= \\!\\Bigg[\\! {\\sum\\limits_{l = 1}^{{n_2} - {n_1}}\\! {\\left(\\! {{n_2} - {n_1} - l + 1} \\!\\right){x^{{n_2} - {n_1} - l}}{{\\left(\\! { - 1} \\!\\right)}^l}}}{{\\sum\\limits_{{j_1} = {j_0} + {n_1}}^{{n_2} - l + 1} \\!\\!{ \\cdots \\!\\!\\sum\\limits_{{j_l} = {j_{l - 1}} + 1}^{{n_2}} {\\prod\\limits_{m = 1}^l {\\frac{1}{{{{\\bar \\gamma }_{{i_{{j_m,1}}}}}}}} } } } } \\!\\Bigg] \\!+ \\!\\left(\\! {{n_2} - {n_1} + 1} \\!\\right){x^{{n_2} - {n_1}}}.\n\\end{equation}\n\n\\begin{landscape}\n\\subsection{Joint PDF of Two Adjacent Partial Sums $Y$ and $W_1$ over i.n.d. Rayleigh Fading, ${{f_{Y,{W_1}}}\\left( x, y \\right)}$}\n\\begin{equation} \\label{eq:17}\n\\scriptsize\n\\!\\!\\!\\!\\!\\!\\begin{aligned}\n&{f_{Y,{W_1}}}\\!\\left( \\!{x,y}\\! \\right)\\! 
= \\!\\sum\\limits_{\\scriptstyle{i_{{N_c},1}}, \\ldots ,{i_{{N_1},1}}\\atop\n\\scriptstyle{i_{{N_c},1}} \\ne \\cdots \\ne {i_{{N_1},1}}}^{1,2, \\ldots ,{N_1}} \\!\\!{\\frac{1}{{{{\\bar \\gamma }_{{i_{{N_c},1}}}}}}\\!\\sum\\limits_{\\scriptstyle{i_{{N_c} - {N_s},1}} = 1\\atop\n\\scriptstyle{i_{{N_c} - {N_s},1}} \\ne {i_{{N_c},1, \\ldots ,}}{i_{{N_1},1}}}^{{N_1}}\\!\\!\\! {\\frac{1}{{{{\\bar \\gamma }_{{i_{{N_c} - {N_s},1}}}}}}\\!\\sum\\limits_{\\left\\{ {{i_{{N_c} - {N_s} + 1,1}}, \\ldots ,{i_{{N_c} - 1,1}}} \\right\\} \\in {{\\mathop{\\rm P}\\nolimits} _{{N_s} - 1}}\\left(\\! {{I_{{N_1}}} \\!- \\!\\left\\{ {{i_{{N_c} - {N_s},1}}} \\!\\right\\}\\! -\\! \\left\\{ \\!{{i_{{N_c},1}}, \\ldots ,{i_{{N_1},1}}} \\!\\right\\}} \\!\\right)}\\! {\\sum\\limits_{k = {N_c}\\! -\\! {N_s}\\! +\\! 1}^{{N_c}\\! - \\!1} \\!{{C_{k,{N_c} - {N_s} + 1,{N_c} - 1}}} } } }\n\\\\\n&\\left[ \\!{\\sum\\limits_{\\left\\{ {{i_{1,1}}, \\ldots ,{i_{{N_c} - {N_s} - 1,1}}} \\right\\} \\in {{\\mathop{\\rm P}\\nolimits} _{{N_c} - {N_s} - 1}}\\left( {{I_{{N_1}}} - \\left\\{ {{i_{{N_c} - {N_s},1}}} \\right\\} - \\left\\{ {{i_{{N_c},1}}, \\ldots ,{i_{{N_1},1}}} \\right\\} - \\left\\{ {{i_{{N_c} - {N_s} + 1,1}}, \\ldots ,{i_{{N_c} - 1,1}}} \\right\\}} \\right)}\\! {\\sum\\limits_{h = 1}^{{N_c} \\!- \\!{N_s}\\! - \\!1}\\! {{C_{h,1,{N_c} - {N_s} - 1}}} } } \\right.\\!\\exp \\left(\\! { -\\! \\frac{x}{{{{\\bar \\gamma }_{{i_{h,1}}}}}}} \\!\\right)\\!\\exp \\left( \\!{ - \\!\\frac{y}{{{{\\bar \\gamma }_{{i_{k,1}}}}}}} \\!\\right)\n\\\\\n& \\times {{\\bf{{\\rm I}}}}\\left( {{z_2},\\beta ,\\frac{y}{{{N_s}}},\\frac{x}{{{N_c} - {N_s}}};{z_4},\\alpha ,0,\\frac{y}{{{N_s}}}} \\right)\n\\\\\n& +\\! \\sum\\limits_{l = 1}^{{N_s}\\! - \\!1}\\! {{{\\left(\\! { - 1}\\! \\right)}^l}\\!\\sum\\limits_{{j_1} = {j_0} \\!+ \\!{N_c}\\! - \\!{N_s}\\! + \\!1}^{{N_c} \\!- \\!l} { \\cdots \\sum\\limits_{{j_l} = {j_{l - 1}}\\! + \\!1}^{{N_c} \\!- \\!1}\\! 
{\\sum\\limits_{\\left\\{ {{i_{1,1}}, \\ldots ,{i_{{N_c} - {N_s} - 1,1}}} \\right\\} \\in {{\\mathop{\\rm P}\\nolimits} _{{N_c} - {N_s} - 1}}\\left( {{I_{{N_1}}}\\! - \\!\\left\\{ {{i_{{N_c} - {N_s},1}}} \\right\\}\\! - \\!\\left\\{ {{i_{{N_c},1}}, \\ldots ,{i_{{N_1},1}}} \\right\\} \\!- \\!\\left\\{ {{i_{{N_c} - {N_s} + 1,1}}, \\ldots ,{i_{{N_c} - 1,1}}} \\right\\}} \\right)}\\! {\\sum\\limits_{h = 1}^{{N_c}\\!- \\!{N_s}\\! - \\!1} {{C_{h,1,{N_c} - {N_s} - 1}}} } } } } \\!\\exp \\!\\left(\\! { - \\!\\frac{x}{{{{\\bar \\gamma }_{{i_h}}}}}} \\!\\right)\\!\\exp\\! \\left(\\! { - \\!\\frac{y}{{{{\\bar \\gamma }_{{i_k}}}}}}\\! \\right)\n\\\\\n&\\left. { \\times \\!\\left\\{\\! {{\\bf{{\\rm I}}}\\!\\left(\\! {{z_2},\\beta ',\\frac{y}{{{N_s}}},\\frac{x}{{{N_c}\\! - \\!{N_s}}};{z_4},\\alpha '',0,\\min\\! \\left[\\! {\\frac{y}{{{N_s}}},\\frac{{y\\! -\\! \\frac{l}{{{N_c} \\!-\\! {N_s}}} \\cdot x}}{{\\left(\\! {{N_s} \\!-\\! l} \\!\\right)}}}\\! \\right]} \\!\\right) \\!+\\! {\\bf{{\\rm I}'}}\\!\\left(\\! {{z_2},\\beta ',\\frac{y}{{{N_s}}},\\frac{{y \\!- \\!\\left(\\! {{N_s} \\!- \\!l} \\!\\right) \\cdot {z_4}}}{l};{z_4},\\alpha '',0,\\frac{y}{{{N_s}}}} \\right)\\left. { - \\!{\\bf{{\\rm I}'}}\\!\\left(\\! {{z_2},\\beta ',\\frac{y}{{{N_s}}},\\frac{{y \\!- \\!\\left(\\! {{N_s}\\! - \\!l} \\!\\right) \\cdot {z_4}}}{l};{z_4},\\alpha '',0,\\min \\!\\left[\\! {\\frac{y}{{{N_s}}},\\frac{{y\\! - \\!\\frac{l}{{{N_c} \\!- \\!{N_s}}} \\cdot x}}{{\\left(\\! {{N_s}\\! - \\!l} \\!\\right)}}} \\!\\right]} \\!\\right)} \\!\\right\\}} \\right.}\\! 
\\right]\n\\\\\n&+ \\sum\\limits_{\\scriptstyle{i_{{N_c},1}}, \\ldots ,{i_{{N_1},1}}\\atop\n\\scriptstyle{i_{{N_c},1}} \\ne \\cdots \\ne {i_{{N_1},1}}}^{1,2, \\ldots ,{N_1}} {\\frac{1}{{{{\\bar \\gamma }_{{i_{{N_c},1}}}}}}\\sum\\limits_{g = 1}^{{N_1} - {N_c}} {{{\\left( { - 1} \\right)}^g}\\sum\\limits_{{{j'}_1} = {{j'}_0} + {N_c} + 1}^{{N_1} - g + 1} { \\cdots \\sum\\limits_{{{j'}_g} = {{j'}_{g - 1}} + 1}^{{N_1}} {\\sum\\limits_{\\scriptstyle{i_{{N_c} - {N_s},1}} = 1\\atop\n\\scriptstyle{i_{{N_c} - {N_s},1}} \\ne {i_{{N_c},1, \\ldots ,}}{i_{{N_1},1}}}^{{N_1}} {\\frac{1}{{{{\\bar \\gamma }_{{i_{{N_c} - {N_s},1}}}}}}} } } } } \\sum\\limits_{\\left\\{ {{i_{{N_c} - {N_s} + 1,1}}, \\ldots ,{i_{{N_c} - 1,1}}} \\right\\} \\in {{\\mathop{\\rm P}\\nolimits} _{{N_s} - 1}}\\left( {{I_{{N_1}}} - \\left\\{ {{i_{{N_c} - {N_s},1}}} \\right\\} - \\left\\{ {{i_{{N_c},1}}, \\ldots ,{i_{{N_1},1}}} \\right\\}} \\right)} {}\n\\\\\n&\\sum\\limits_{k = {N_c} - {N_s} + 1}^{{N_c} - 1} {{C_{k,{N_c} - {N_s} + 1,{N_c} - 1}}}\n\\left[ {\\sum\\limits_{\\left\\{ {{i_{1,1}}, \\ldots ,{i_{{N_c} - {N_s} - 1,1}}} \\right\\} \\in {{\\mathop{\\rm P}\\nolimits} _{{N_c} - {N_s} - 1}}\\left( {{I_{{N_1}}} - \\left\\{ {{i_{{N_c} - {N_s},1}}} \\right\\} - \\left\\{ {{i_{{N_c},1}}, \\ldots ,{i_{{N_1},1}}} \\right\\} - \\left\\{ {{i_{{N_c} - {N_s} + 1,1}}, \\ldots ,{i_{{N_c} - 1,1}}} \\right\\}} \\right)} {\\sum\\limits_{h = 1}^{{N_c} - {N_s} - 1} {{C_{h,1,{N_c} - {N_s} - 1}}\\!\\exp \\!\\left(\\! { - \\!\\frac{x}{{{{\\bar \\gamma }_{{i_h}}}}}}\\! \\right)\\!\\exp \\!\\left(\\! { -\\! \\frac{y}{{{{\\bar \\gamma }_{{i_k}}}}}} \\!\\right)} } } \\right.\n\\\\\n&\\times {{\\bf{{\\rm I}}}}\\left( {{z_2},\\beta ,\\frac{y}{{{N_s}}},\\frac{x}{{{N_c} - {N_s}}};{z_4},\\alpha ',0,\\frac{y}{{{N_s}}}} \\right)\n\\\\\n&+\\! \\sum\\limits_{l = 1}^{{N_s}\\! - \\!1} \\!{{{\\left(\\! { - 1} \\!\\right)}^l}\\!\\sum\\limits_{{j_1} = {j_0} \\!+ \\!{N_c}\\! -\\! {N_s} \\!+ 1}^{{N_c}\\! -\\! 
l} { \\cdots \\sum\\limits_{{j_l} = {j_{l - 1}}\\! + \\!1}^{{N_c}\\! - \\!1} \\!{\\sum\\limits_{\\left\\{ {{i_{1,1}}, \\ldots ,{i_{{N_c} - {N_s} - 1,1}}} \\right\\} \\in {{\\mathop{\\rm P}\\nolimits} _{{N_c} - {N_s} - 1}}\\left( {{I_{{N_1}}} \\!- \\!\\left\\{ {{i_{{N_c} - {N_s},1}}} \\right\\} \\!- \\!\\left\\{ {{i_{{N_c},1}}, \\ldots ,{i_{{N_1},1}}} \\right\\} \\!-\\! \\left\\{ {{i_{{N_c} - {N_s} + 1,1}}, \\ldots ,{i_{{N_c} - 1,1}}} \\right\\}} \\right)}\\! {\\sum\\limits_{h = 1}^{{N_c} \\!- \\!{N_s}\\! - \\!1} {{C_{h,1,{N_c} - {N_s} - 1}}\\!\\exp \\!\\left(\\! { -\\! \\frac{x}{{{{\\bar \\gamma }_{{i_h}}}}}} \\!\\right)\\!\\exp \\!\\left(\\! { - \\!\\frac{y}{{{{\\bar \\gamma }_{{i_k}}}}}} \\!\\right)} } } } }\n\\\\\n&\\left. { \\times \\!\\left\\{ \\!{{\\bf{{\\rm I}}}\\!\\left(\\! {{z_2},\\beta ',\\frac{y}{{{N_s}}},\\frac{x}{{{N_c}\\! - \\!{N_s}}};{z_4},\\alpha ''',0,\\min \\!\\left[ \\!{\\frac{y}{{{N_s}}},\\frac{{y \\!-\\! \\frac{l}{{{N_c}\\! -\\! {N_s}}} \\cdot x}}{{\\left( {{N_s}\\! - \\!l} \\right)}}}\\! \\right]} \\!\\right)\\! +\\! {\\bf{{\\rm I}'}}\\!\\left(\\! {{z_2},\\beta ',\\frac{y}{{{N_s}}},\\frac{{y \\!- \\!\\left(\\! {{N_s} \\!- \\!l} \\!\\right) \\cdot {z_4}}}{l};{z_4},\\alpha ''',0,\\frac{y}{{{N_s}}}} \\right)\\left. { - \\!{\\bf{{\\rm I}'}}\\!\\left(\\! {{z_2},\\beta ',\\frac{y}{{{N_s}}},\\frac{{y \\!- \\!\\left(\\! {{N_s}\\! -\\! l} \\!\\right) \\cdot {z_4}}}{l};{z_4},\\alpha ''',0,\\min \\!\\left[\\! {\\frac{y}{{{N_s}}},\\frac{{y \\!-\\! \\frac{l}{{{N_c}\\! - \\!{N_s}}} \\cdot x}}{{\\left(\\! {{N_s}\\! -\\! l}\\! \\right)}}}\\! 
\\right]} \\!\\right)} \\!\\right\\}} \\right.} \\!\\right],\n\\end{aligned}\n\\end{equation}\n\\end{landscape}\nwhere\n\\small$\\alpha = - \\left( {\\sum\\limits_{l = {N_c-N_s} + 1}^{{N_c}} {\\left( {\\frac{1}{{{{\\bar \\gamma }_{{i_{l,1}}}}}}} \\right) - \\frac{{\\left( {{N_s}} \\right)}}{{{{\\bar \\gamma }_{{i_{k,1}}}}}}} } \\right)$\\normalsize, \\small$\\alpha ' = - \\Bigg( \\sum\\limits_{l = {N_c-N_s} + 1}^{{N_c}} \\left( {\\frac{1}{{{{\\bar \\gamma }_{{i_{l,1}}}}}}} \\right) + \\sum\\limits_{m = 1}^g {\\frac{1}{{{{\\bar \\gamma }_{{i_{{{j'}_m,1}}}}}}}} - \\frac{{\\left( {{N_s}} \\right)}}{{{{\\bar \\gamma }_{{i_{k,1}}}}}} \\Bigg)$\\normalsize, \\small$\\alpha '' = - \\left( {\\sum\\limits_{l = {N_c-N_s} + 1}^{{N_c}} {\\left( {\\frac{1}{{{{\\bar \\gamma }_{{i_{{l,1}}}}}}}} \\right) - \\frac{{\\left( {{N_s} - l} \\right)}}{{{{\\bar \\gamma }_{{i_{{k,1}}}}}}} - \\sum\\limits_{q = 1}^l {\\frac{1}{{{{\\bar \\gamma }_{{i_{{j_q,1}}}}}}}} } } \\right)$\\normalsize, \\small$\\alpha ''' = - \\Bigg( \\sum\\limits_{l = {N_c-N_s} + 1}^{N_c} \\left( \\frac{1}{{\\bar \\gamma }_{i_{{l,1}}}} \\right) + \\sum\\limits_{m = 1}^g {\\frac{1}{{{{\\bar \\gamma }_{{i_{{{j'}_m,1}}}}}}}} - \\frac{{\\left( {{N_s} - l} \\right)}}{{{{\\bar \\gamma }_{{i_{{k,1}}}}}}} - \\sum\\limits_{q = 1}^l {\\frac{1}{{{{\\bar \\gamma }_{{i_{{j_q,1}}}}}}}} \\Bigg)$\\normalsize, \\small$\\beta = - \\Bigg( \\sum\\limits_{l = 1}^{{N_c-N_s}} \\Big( \\frac{1}{{{{\\bar \\gamma }_{i_{l,1}}}}} \\Big) - \\frac{{N_c-N_s}}{{{{\\bar \\gamma }_{{i_{h,1}}}}}} \\Bigg)$\\normalsize, and \\small$\\beta' = - \\left( {\\sum\\limits_{l = 1}^{N_c-N_s} {\\left( {\\frac{1}{{{{\\bar \\gamma }_{{i_{{l,1}}}}}}}} \\right) + \\sum\\limits_{q = 1}^l {\\frac{1}{{{{\\bar \\gamma }_{{i_{{j_q,1}}}}}}}} - \\frac{{N_c-N_s}}{{{{\\bar \\gamma }_{{i_{{h,1}}}}}}} - \\frac{l}{{{{\\bar \\gamma }_{{i_{k,1}}}}}}} } \\right)$\n\n\n\\subsection{CDF of the Sums of the $N_s$ Strongest Paths from Each Target BS over i.n.d. 
Rayleigh Fading, ${{F_{{W_n}}}\\left( x \\right)}$}\n\\begin{equation} \\scriptsize \\label{eq:19}\n\\begin{aligned}\nF_{W_n}\\left( x \\right) \\!=& \\!\\sum\\limits_{{i_{{N_s,n}}} = 1}^{N_n}\\! {\\frac{1}{{{{\\bar \\gamma }_{{i_{{N_s,n}}}}}}}\\!\\sum\\limits_{\\substack{\n {i_{{N_s} + 1,n}}, \\cdots ,{i_{N_n,n}} \\\\\n {i_{{N_s} + 1,n}} \\ne \\cdots \\ne {i_{N_n,n}} \\\\\n \\vdots \\\\\n {i_{N_n,n}} \\ne {i_{{N_s} + 1,n}} \\\\\n }}^{1,2, \\cdots {N_n}}\\! {\\sum\\limits_{\\left\\{\\! {{i_{1,n}}, \\cdots ,{i_{{N_s} - 1,n}}}\\! \\right\\} \\in {P_{{N_s} - 1}}\\left(\\! {{I_{N_n}} - \\left\\{\\! {{i_{{N_s,n}}}} \\!\\right\\} - \\left\\{ \\!{{i_{{N_s} + 1,n}}, \\ldots ,{i_{N_n,n}}}\\! \\right\\}} \\!\\right)} {\\prod\\limits_{\\scriptstyle q = 1 \\atop\n \\scriptstyle \\left\\{\\! {{i_{1,n}}, \\cdots ,{i_{{N_s} - 1,n}}}\\! \\right\\}}^{{N_s}}\\!\\!\\!\\!\\! {{C_{q,1,{N_s} - 1}}} } } }\n\\\\\n&\\times \\left[ {\\frac{{ - 1}}{{\\left( {\\sum\\limits_{l = 1}^{{N_s}} {\\frac{1}{{{{\\bar \\gamma }_{{i_{l,n}}}}}} - \\frac{{{N_s}}}{{{{\\bar \\gamma }_{{i_{q,n}}}}}}} } \\right)}}\\left\\{ {{{\\bar \\gamma }_{{i_{q,n}}}}\\left( {1 - \\exp \\left( { - \\frac{x}{{{{\\bar \\gamma }_{{i_{q,n}}}}}}} \\right)} \\right) - \\frac{{{N_s}}}{{\\sum\\limits_{l = 1}^{{N_s}} {\\frac{1}{{{{\\bar \\gamma }_{{i_{l,n}}}}}}} }}\\left( {1 - \\exp \\left( { - \\left( {\\sum\\limits_{l = 1}^{{N_s}} {\\frac{1}{{{{\\bar \\gamma }_{{i_{l,n}}}}}}} } \\right)\\frac{x}{{{N_s}}}} \\right)} \\right)} \\right\\}} \\right.\n\\\\\n&\\quad\\quad- \\prod\\limits_{\\scriptstyle k' = 1 \\atop\n \\scriptstyle \\left\\{ {{i_{{N_s} + 1,n}}, \\cdots ,{i_{N_n,n}}} \\right\\}}^{{N_n} - {N_s}} {{{\\left( { - 1} \\right)}^{k'}}\\sum\\limits_{{j_1} = {j_0} + {N_s} + 1}^{{N_n} - k' + 1} { \\cdots \\sum\\limits_{{j_{k'}} = {j_{k' - 1}} + 1}^{N_n} {\\frac{1}{{\\left( {\\sum\\limits_{l = 1}^{{N_s}} {\\frac{1}{{{{\\bar \\gamma }_{{i_{l,n}}}}}} + \\sum\\limits_{m = 1}^{k'} {\\frac{1}{{{{\\bar \\gamma }_{{i_{{j_m,n}}}}}}} - 
\frac{{{N_s}}}{{{{\bar \gamma }_{{i_{q,n}}}}}}} } } \right)}}} } }
\\
& \quad\quad\quad\left. \sizecorr{\times \left[\! {\frac{{ - 1}}{{\left(\! {\sum\limits_{l = 1}^{{N_s}}\! {\frac{1}{{{{\bar \gamma }_{{i_{l,n}}}}}}\! - \!\frac{{{N_s}}}{{{{\bar \gamma }_{{i_{q,n}}}}}}} }\! \right)}}\!\left\{\! {{{\bar \gamma }_{{i_{q,n}}}}\!\left(\! {1 - \exp \!\left( \!{ - \frac{x}{{{{\bar \gamma }_{{i_{q,n}}}}}}} \!\right)} \!\right) - \frac{{{N_s}}}{{\sum\limits_{l = 1}^{{N_s}} \!{\frac{1}{{{{\bar \gamma }_{{i_{l,n}}}}}}} }}\!\left( \!{1 - \exp \!\left(\! { - \left(\! {\sum\limits_{l = 1}^{{N_s}}\! {\frac{1}{{{{\bar \gamma }_{{i_{l,n}}}}}}} }\! \right)\frac{x}{{{N_s}}}} \!\right)} \!\right)}\! \right\}} \right.}
{ \times \left\{\! {{{\bar \gamma }_{{i_{q,n}}}}\left(\! {1 - \exp \!\left(\! { - \frac{x}{{{{\bar \gamma }_{{i_{q,n}}}}}}} \!\right)}\! \right) \!- \!\frac{{{N_s}}}{{\sum\limits_{l = 1}^{{N_s}} \!{\frac{1}{{{{\bar \gamma }_{{i_{l,n}}}}}} \!+ \!\sum\limits_{m = 1}^{k'} \!{\frac{1}{{{{\bar \gamma }_{{i_{{j_m,n}}}}}}}} } }}\!\left(\! {1 - \exp\! \left(\! { - \left(\! {\sum\limits_{l = 1}^{{N_s}}\! {\frac{1}{{{{\bar \gamma }_{{i_{l,n}}}}}} \!+ \!\sum\limits_{m = 1}^{k'} \!{\frac{1}{{{{\bar \gamma }_{{i_{{j_m,n}}}}}}}} } } \!\right)\frac{x}{{{N_s}}}} \!\right)} \!\right)} \!\right\}} \!\right].
\end{aligned}
\end{equation}\normalsize

Note that in this report, we provide all three required key statistics in (\ref{eq:14}), (\ref{eq:17}), and (\ref{eq:19}) in closed-form expressions, which allows the performance measures mentioned in Sec.~\ref{sec_2} to be investigated accurately, especially over i.n.d. Rayleigh fading conditions, whereas \cite{kn:S_Choi_2008_1} provides non-closed-form expressions, even under i.i.d. fading assumptions, since its final results involve finite integrations.
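Closed-form expressions such as (\ref{eq:14}) are easiest to sanity-check in a degenerate corner. In the i.i.d. case with ${N_c}={N_1}$, GSC combines every path, so the ordering becomes irrelevant and the output SNR is simply a sum of i.i.d. exponentials, i.e., Erlang distributed. The sketch below (parameter values and the seed are illustrative assumptions) compares a Monte Carlo estimate of this CDF against the Erlang CDF; it checks the simulation setup rather than the i.n.d. expression itself:

```python
import math
import numpy as np

rng = np.random.default_rng(1)

# Assumed illustrative parameters: i.i.d. case with N_c = N_1 = N paths,
# common average SNR gbar, evaluation point x.
N, gbar, x = 4, 2.0, 6.0
trials = 200_000

paths = rng.exponential(gbar, size=(trials, N))
# Order the paths, then combine all of them (N_c = N): a plain sum
gsc_out = np.sort(paths, axis=1)[:, ::-1].sum(axis=1)
mc_cdf = np.mean(gsc_out < x)

# Erlang(N, gbar) CDF: 1 - exp(-x/gbar) * sum_{k<N} (x/gbar)^k / k!
r = x / gbar
erlang_cdf = 1.0 - math.exp(-r) * sum(r**k / math.factorial(k) for k in range(N))

assert abs(mc_cdf - erlang_cdf) < 1e-2   # within Monte Carlo accuracy
```

A mismatch here would indicate an error in how the ordered partial sums are formed, independently of the i.n.d. algebra.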
With these joint statistics derived in closed-form expressions, the outage probability as well as the other performance measures mentioned in Sec.~\ref{sec_2} can be easily calculated with standard mathematical software such as Mathematica.


\section{Conclusions}\label{conc}
In this work, we studied the finger replacement scheme proposed in~\cite{kn:S_Choi_2008_1} over i.n.d. fading conditions by providing a general and comprehensive mathematical framework with non-identical parameters. Specifically, we provided closed-form expressions for the required key statistics of i.n.d. ordered exponential RVs by applying the unified framework for determining the target statistics of partial sums of ordered RVs proposed in \cite{kn:sungsiknam2013_ISIT, kn:IND_MGF_sungsiknam_1}, and we built on these statistical results a comprehensive framework for the outage performance. The proposed approach is general enough to apply to the performance analysis of various wireless communication systems over practical fading channels.

In Fig. 2, we assess the effect of non-identically distributed paths on the outage performance of the replacement schemes. More specifically, instead of the uniform power delay profile (PDP) considered so far, we now consider an exponentially decaying PDP. In particular, we assume that the channel has an exponential multipath intensity profile (MIP), for which $\bar\gamma_{i}=\bar\gamma\cdot\exp\left(-\delta\left(i-1\right)\right)$, $\left( 1\le i \le N_n, 1\le n \le L\right)$, where $\bar\gamma_{i}$ is the average SNR of the $i$-th path out of the total available resolvable paths from each BS, $\bar\gamma$ is the strongest average SNR (i.e., the average SNR of the first path), and $\delta$ is the power decay factor. Note that $\delta=0$ corresponds to identically distributed paths.
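For reproducibility, the assumed exponential MIP can be generated with a one-line helper (the function name and parameter values below are illustrative):

```python
import numpy as np

def exp_mip(gbar, delta, N):
    """Average SNR of each of the N resolvable paths under the assumed
    exponentially decaying MIP: gbar * exp(-delta * (i - 1)), i = 1..N."""
    return gbar * np.exp(-delta * np.arange(N))

profile = exp_mip(2.0, 0.5, 4)
assert np.isclose(profile[0], 2.0)             # strongest (first) path
assert np.all(np.diff(profile) < 0)            # strictly decaying profile
assert np.allclose(exp_mip(2.0, 0.0, 4), 2.0)  # delta = 0 -> i.i.d. paths
```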
These results show that path unbalance induces non-negligible performance degradation compared with the i.i.d. fading scenario. \cite{kn:S_Choi_2008_1} showed that the proposed scheme can still be applied to the i.n.d. fading scenario. However, this effect must be taken into account for an accurate prediction of the performance over i.n.d. fading environments, and our analytical results make such an accurate prediction possible.



\newpage
{\section*{Appendices}
\appendices
Here, for analytical convenience, we assume that $N_n=N$, $u_{i,n}=u_i$, and ${\bar \gamma }_{i_{l,n}}={\bar \gamma }_{i_{l}}$ for all $n=1,2,\cdots,L$.



\section{Derivation of (\ref{eq:7})} \label{appendix_1}
Based on the mode of operation, we need to consider separately two cases: i) the final combined SNR is greater than or equal to the target SNR, $\gamma_T$, and ii) the final combined SNR falls below $\gamma_T$. For case ii), after scanning the paths from the serving BS as well as all the target BSs, the combined SNR of the resolvable paths from the serving BS and all the target BSs falls below $\gamma_T$. Therefore, we can directly rewrite (\ref{eq:3}) for $0 < x < \gamma_T$ as
\begin{equation} \label{APP:1}
\begin{aligned}
{F_{{\gamma _F}}}\left( x \right) =\Pr \left[ {Y + \max \left\{ {{W_1},{W_2}, \cdots ,{W_L}} \right\} < x} \right].
\end{aligned}
\end{equation}
However, for case i), we need to consider two sub-cases separately: a) $Y + {W_1} \ge \gamma_T$ and b) $Y + {W_1} < \gamma_T$. More specifically, for case a), no finger replacement is needed, while for case b), the receiver attempts a two-way SHO by starting to scan additional paths from the target BSs, and the final combined SNR should be greater than or equal to $\gamma_T$.
By considering cases i)-a) and i)-b), we can write (\ref{eq:3}) for $x \ge \gamma_T$ as
\begin{equation} \label{APP:2}
\begin{aligned}
{F_{{\gamma _F}}}\left( x \right) =& \Pr \left[ Y + {W_1}\ge{\gamma _T}, Y + {W_1} < x \right]
\\
&+ \Pr \left[ {Y + {W_1} < {\gamma _T},{\gamma _T} \le Y + \max \left\{ {{W_1},{W_2}, \cdots ,{W_L}} \right\} < x} \right].
\end{aligned}
\end{equation}
As a result, after some manipulations, we can rewrite (\ref{APP:2}) in the simplified form given in (\ref{eq:7}) for $x \ge {\gamma _T}$.


\section{CDF of the ${N_c}\/{N}$-GSC output SNR} \label{appendix_2}
Based on the unified framework proposed in \cite{kn:unified_approach}, noting that $Z'=Y+W_1$, we can obtain the target CDF of $Z'=\sum\limits_{i=1}^{N_c} {u_{i}}$ with the 2-dimensional joint PDF of $Z_1=\sum\limits_{i=1}^{N_c-1} {u_{i}}$ and $Z_2={u_{N_c}}$. Specifically, by letting $X=Z_1+Z_2$, we can obtain the target CDF of $Z'=X$ by integrating over $z_2$ under the condition $Z_2\le \frac{X}{N_c}$, yielding (\ref{eq:11}). By applying \cite[Eq. (51)]{kn:IND_MGF_sungsiknam_1} to (\ref{eq:11}), we can then obtain the closed-form expression of (\ref{eq:11}) over i.n.d. Rayleigh fading conditions by performing the double integration over $z_2$ and $z$ in order.

After inserting \cite[Eq. (51)]{kn:IND_MGF_sungsiknam_1} in (\ref{eq:11}), the inner integral term in (\ref{eq:11}) can be rewritten as
\begin{equation} \footnotesize \label{eq:20}
\begin{aligned}
&\int_0^{\frac{z}{{{N_c}}}} {{f_Z}\left( {z - {z_2},{z_2}} \right)d{z_2}} = \sum\limits_{{i_{{N_c}}} = 1}^N \!{\frac{1}{{{{\bar \gamma }_{{i_{{N_c}}}}}}}\!\sum\limits_{\substack{
 {i_{{N_c} + 1}}, \cdots ,{i_N} \\
 {i_{{N_c} + 1}} \ne \cdots \ne {i_N} \\
 \vdots \\
 {i_N} \ne {i_{{N_c} + 1}} \\
 }}^{1,2, \cdots N}\! {\sum\limits_{\left\{\!
{{i_1}, \\cdots ,{i_{{N_c} - 1}}} \\!\\right\\} \\in {P_{{N_c} - 1}}\\left( \\!{{I_N} - \\left\\{\\! {{i_{{N_c}}}} \\!\\right\\} - \\left\\{\\! {{i_{{N_c} + 1}}, \\ldots ,{i_N}} \\!\\right\\}} \\!\\right)} {\\prod\\limits_{\\scriptstyle q = 1 \\atop\n \\scriptstyle \\left\\{\\! {{i_1}, \\cdots ,{i_{{N_c} - 1}}} \\!\\right\\}}^{{N_c}}\\!\\!\\! {{C_{q,1,{N_c} - 1}}} } } }\n\\\\\n&\\times \\left[ { - \\int_0^{\\frac{z}{{{N_c}}}} {\\exp \\left( { - \\frac{z}{{{{\\bar \\gamma }_{{i_q}}}}} - \\left( {\\sum\\limits_{l = 1}^{{N_c}} {\\frac{1}{{{{\\bar \\gamma }_{{i_l}}}}} - \\frac{{{N_c}}}{{{{\\bar \\gamma }_{{i_q}}}}}} } \\right){z_2}} \\right)d{z_2}} } \\sizecorr{\\left. { - \\prod\\limits_{\\scriptstyle k' = 1 \\atop\n \\scriptstyle \\left\\{ {{i_{{N_c} + 1}}, \\cdots ,{i_N}} \\right\\}}^{N - {N_c}} {{{\\left( { - 1} \\right)}^{k'}}\\sum\\limits_{{j_1} = {j_0} + {N_c} + 1}^{N - k' + 1} { \\cdots \\sum\\limits_{{j_{k'}} = {j_{k' - 1}} + 1}^N {\\int_0^{\\frac{z}{{{N_c}}}} {\\exp \\left( { - \\frac{z}{{{{\\bar \\gamma }_{{i_q}}}}} - \\left( {\\sum\\limits_{l = 1}^{{N_c}} {\\frac{1}{{{{\\bar \\gamma }_{{i_l}}}}} + \\sum\\limits_{m = 1}^{k'} {\\frac{1}{{{{\\bar \\gamma }_{{i_{{j_m}}}}}}} - \\frac{{{N_c}}}{{{{\\bar \\gamma }_{{i_q}}}}}} } } \\right){z_2}} \\right)d{z_2}} } } } } \\right].}\\right.\n\\\\\n&\\quad\\quad\\left. 
{ - \\prod\\limits_{\\scriptstyle k' = 1 \\atop\n \\scriptstyle \\left\\{ {{i_{{N_c} + 1}}, \\cdots ,{i_N}} \\right\\}}^{N - {N_c}} {{{\\left( { - 1} \\right)}^{k'}}\\sum\\limits_{{j_1} = {j_0} + {N_c} + 1}^{N - k' + 1} { \\cdots \\sum\\limits_{{j_{k'}} = {j_{k' - 1}} + 1}^N {\\int_0^{\\frac{z}{{{N_c}}}} {\\exp \\left( { - \\frac{z}{{{{\\bar \\gamma }_{{i_q}}}}} - \\left( {\\sum\\limits_{l = 1}^{{N_c}} {\\frac{1}{{{{\\bar \\gamma }_{{i_l}}}}} + \\sum\\limits_{m = 1}^{k'} {\\frac{1}{{{{\\bar \\gamma }_{{i_{{j_m}}}}}}} - \\frac{{{N_c}}}{{{{\\bar \\gamma }_{{i_q}}}}}} } } \\right){z_2}} \\right)d{z_2}} } } } } \\right].\n\\end{aligned}\n\\end{equation}\nIn (\\ref{eq:20}), the first and second integral terms can be evaluated as the following closed-form expressions with the help of the basic exponential integration \\cite{kn:abramowitz}\n\\begin{equation} \\small \\label{eq:21}\n\\frac{1}{{\\left( {\\sum\\limits_{l = 1}^{{N_c}} {\\frac{1}{{{{\\bar \\gamma }_{{i_l}}}}} - \\frac{{{N_c}}}{{{{\\bar \\gamma }_{{i_q}}}}}} } \\right)}}\\left[ {\\exp \\left( { - \\frac{z}{{{{\\bar \\gamma }_{{i_q}}}}}} \\right) - \\exp \\left( { - \\left( {\\sum\\limits_{l = 1}^{{N_c}} {\\frac{1}{{{{\\bar \\gamma }_{{i_l}}}}}} } \\right)\\frac{z}{{{N_c}}}} \\right)} \\right],\n\\end{equation}\n\\begin{equation} \\small \\label{eq:22}\n\\frac{1}{{\\left( {\\sum\\limits_{l = 1}^{{N_c}} {\\frac{1}{{{{\\bar \\gamma }_{{i_l}}}}} + \\sum\\limits_{m = 1}^{k'} {\\frac{1}{{{{\\bar \\gamma }_{{i_{{j_m}}}}}}} - \\frac{{{N_c}}}{{{{\\bar \\gamma }_{{i_q}}}}}} } } \\right)}}\\left[ {\\exp \\left( { - \\frac{z}{{{{\\bar \\gamma }_{{i_q}}}}}} \\right) - \\exp \\left( { - \\left( {\\sum\\limits_{l = 1}^{{N_c}} {\\frac{1}{{{{\\bar \\gamma }_{{i_l}}}}} + \\sum\\limits_{m = 1}^{k'} {\\frac{1}{{{{\\bar \\gamma }_{{i_{{j_m}}}}}}}} } } \\right)\\frac{z}{{{N_c}}}} \\right)} \\right].\n\\end{equation}\n\nSubsequently, after substituting (\\ref{eq:21}) and (\\ref{eq:22}) in (\\ref{eq:20}), we can obtain a closed-form 
expression of the inner integral term in (\\ref{eq:11}) as\n\\begin{equation} \\footnotesize \\label{eq:23}\n\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\begin{aligned}\n&{f_{Z'}}\\left( z \\right) = \\sum\\limits_{{i_{{N_c}}} = 1}^N {\\frac{1}{{{{\\bar \\gamma }_{{i_{{N_c}}}}}}}\\sum\\limits_{\\substack{\n {i_{{N_c} + 1}}, \\cdots ,{i_N} \\\\\n {i_{{N_c} + 1}} \\ne \\cdots \\ne {i_N} \\\\\n \\vdots \\\\\n {i_N} \\ne {i_{{N_c} + 1}} \\\\\n }}^{1,2, \\cdots N} { \\sum\\limits_{\\left\\{ {{i_1}, \\cdots ,{i_{{N_c} - 1}}} \\right\\} \\in {P_{{N_c} - 1}}\\left( {{I_N} - \\left\\{ {{i_{{N_c}}}} \\right\\} - \\left\\{ {{i_{{N_c} + 1}}, \\ldots ,{i_N}} \\right\\}} \\right)} {\\prod\\limits_{\\scriptstyle q = 1 \\atop\n \\scriptstyle \\left\\{ {{i_1}, \\cdots ,{i_{{N_c} - 1}}} \\right\\}}^{{N_c}} {{C_{q,1,{N_c} - 1}}} } } }\n\\\\\n&\\times \\left[ {\\frac{{ - 1}}{{\\left( {\\sum\\limits_{l = 1}^{{N_c}} {\\frac{1}{{{{\\bar \\gamma }_{{i_l}}}}} - \\frac{{{N_c}}}{{{{\\bar \\gamma }_{{i_q}}}}}} } \\right)}}\\left\\{ {\\exp \\left( { - \\frac{z}{{{{\\bar \\gamma }_{{i_q}}}}}} \\right) - \\exp \\left( { - \\left( {\\sum\\limits_{l = 1}^{{N_c}} {\\frac{1}{{{{\\bar \\gamma }_{{i_l}}}}}} } \\right)\\frac{z}{{{N_c}}}} \\right)} \\right\\}} \\right.\n\\\\\n&\n\\left. { - \\!\\!\\!\\!\\!\\!\\prod\\limits_{\\scriptstyle k' = 1 \\atop\n \\scriptstyle \\left\\{\\! {{i_{{N_c} + 1}}, \\cdots ,{i_N}} \\!\\right\\}}^{N - {N_c}} \\!{{{\\left(\\! { - 1} \\!\\right)}^{k'}}\\!\\sum\\limits_{{j_1} = {j_0} + {N_c} + 1}^{N - k' + 1}\\! { \\cdots \\!\\sum\\limits_{{j_{k'}} = {j_{k' - 1}} + 1}^N \\!{\\frac{1}{{\\left(\\! {\\sum\\limits_{l = 1}^{{N_c}} {\\frac{1}{{{{\\bar \\gamma }_{{i_l}}}}}\\! + \\!\\sum\\limits_{m = 1}^{k'}\\! {\\frac{1}{{{{\\bar \\gamma }_{{i_{{j_m}}}}}}} - \\frac{{{N_c}}}{{{{\\bar \\gamma }_{{i_q}}}}}} } } \\!\\right)}}\\left\\{\\! {\\exp \\!\\left( \\!{ - \\!\\frac{z}{{{{\\bar \\gamma }_{{i_q}}}}}} \\!\\right) - \\exp \\!\\left(\\! 
{ - \\!\\left( \\!{\\sum\\limits_{l = 1}^{{N_c}} \\!{\\frac{1}{{{{\\bar \\gamma }_{{i_l}}}}} \\!+ \\!\\sum\\limits_{m = 1}^{k'}\\! {\\frac{1}{{{{\\bar \\gamma }_{{i_{{j_m}}}}}}}} } } \\!\\right)\\frac{z}{{{N_c}}}} \\!\\right)} \\!\\right\\}} } } } \\!\\right].\n\\end{aligned}\n\\end{equation}\nFinally, by applying basic exponential integration \\cite{kn:abramowitz} over $z_2$ after substituting (\\ref{eq:23}) in (\\ref{eq:11}) and then replacing $z$ in (\\ref{eq:23}) by $z_2$, we can obtain the target CDF in closed form, as shown in (\\ref{eq:14}).\n\nNote that, for the closed-form expression of the CDF of the ${N_s}\/{N}$-GSC output SNR given in (\\ref{eq:19}), we can directly apply the same approach to (\\ref{eq:13}) simply by replacing $N_s$ with $N_c$.\n\n\n\\section{Joint PDF of $Y$ and $W_1$} \\label{appendix_3}\nIn this case, the target 2-dimensional joint PDF of $Y$ and $W_1$ can be obtained starting from the 4-dimensional joint PDF of $Z_1 = \\sum\\limits_{i = 1}^{{N_c-N_s} - 1} {u_{i} }$, $Z_2 = u_{N_c-N_s}$, $Z_3 = \\sum\\limits_{i = {N_c-N_s} + 1}^{N_c - 1} {u_{i }}$, and $Z_4 = u_{N_c}$ based on the proposed unified framework in \\cite{kn:unified_approach, kn:IND_MGF_sungsiknam_1}. 
As a result, the target 2-dimensional joint PDF of interest, ${f_{Y,{W_1}}}\\left( x , y \\right)$, can finally be obtained from the transformed higher-dimensional joint PDFs, as shown in (\\ref{eq:12}).\nHere, the RVs $Z_1$, $Z_2$, $Z_3$, and $Z_4$ are related as follows\n\\begin{equation} \\label{eq:relationship}\n\\overbrace {\\underbrace {{u_{1}}, \\cdots ,{u_{{N_c} - {N_s} - 1}}}_{{Z_1}},\\underbrace {{u_{{N_c} - {N_s}}}}_{{Z_2}}}^Y,\\overbrace {\\underbrace {{u_{{N_c} - {N_s} + 1}}, \\cdots ,{u_{{N_c} - 1}}}_{{Z_3}},\\underbrace {{u_{{N_c}}}}_{{Z_4}}}^{{W_1}},{u_{{N_c} + 1}}, \\cdots ,{u_{{N}}}.\n\\end{equation}\nFrom (\\ref{eq:relationship}), we can directly obtain the following valid conditions on these RVs: i) $Z_1 \\ge \\left(N_c - N_s -1\\right) Z_2$, ii) $Z_3 \\ge \\left( N_s - 1 \\right) Z_4$, and iii) $N_s \\cdot Z_2 \\ge Z_3 + Z_4$. For cases i) and ii), adding $Z_2$ to both sides of i) gives $Z_1+Z_2 \\ge \\left(N_c-N_s \\right)Z_2$, while adding $Z_4$ to both sides of ii) gives $Z_3+Z_4 \\ge N_s \\cdot Z_4$. Therefore, with the 4-dimensional joint PDF of $Z_1$, $Z_2$, $Z_3$, and $Z_4$, letting $X=Z_1+Z_2$ and $Y=Z_3+Z_4$, we can obtain the target 2-dimensional joint PDF of $Z'=[X,Y]$ by integrating over $z_2$ and $z_4$, yielding (\\ref{eq:12}).\n\nIn (\\ref{eq:12}), we now need to derive the 4-dimensional joint PDF of $Z_1$, $Z_2$, $Z_3$, and $Z_4$. Fortunately, with the help of the result derived for i.n.d. exponential RVs in \\cite[Eq. (54)]{kn:IND_MGF_sungsiknam_1}, we only need to evaluate the double integrations over $z_2$ and $z_4$. With \\cite[Eq. (54)]{kn:IND_MGF_sungsiknam_1}, to evaluate the additional 2-fold integrations, the multiple product expression, $\\small\\prod\\limits_{j = {N_s} + 1}^N {\\left( {1 - \\exp \\left( { - \\frac{{{z_4}}}{{{{\\bar \\gamma }_{{i_j}}}}}} \\right)} \\right)}$, needs to be converted to a summation expression. 
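The product-to-sum conversion required here is simply the inclusion-exclusion expansion of the product of the $\left(1 - e^{-z_4/\bar\gamma_{i_j}}\right)$ factors, with each product of exponentials collapsed into a single exponential of summed exponents. The sketch below checks this identity numerically; the average-SNR values are arbitrary placeholders, not the paper's parameters.

```python
# Numerical sanity check of the product-to-sum conversion: expanding
# prod_j (1 - exp(-z4/gamma_j)) by inclusion-exclusion gives
# 1 + sum over subset sizes g of (-1)^g * exp(-z4 * sum of inverse SNRs).
# The gamma values are placeholder average SNRs, not the paper's parameters.
import math
from itertools import combinations

def product_form(z4, gammas):
    """Left-hand side: the multiple product expression."""
    p = 1.0
    for g in gammas:
        p *= 1.0 - math.exp(-z4 / g)
    return p

def sum_form(z4, gammas):
    """Right-hand side: the expanded summation expression."""
    total = 1.0
    for g in range(1, len(gammas) + 1):
        for subset in combinations(gammas, g):
            total += (-1) ** g * math.exp(-z4 * sum(1.0 / s for s in subset))
    return total

gammas = [0.8, 1.3, 2.1, 3.7]   # placeholder average SNRs
z4 = 0.9
assert abs(product_form(z4, gammas) - sum_form(z4, gammas)) < 1e-12
```

The two forms agree to machine precision for any positive arguments, which is what makes the subsequent term-by-term exponential integrations tractable.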
In this case, with the help of the property of exponential multiplication\\footnote{The product of two exponential numbers of the same base can be simply represented as the sum of the exponents with the same base.}, this multiple product expression can be re-written as the following summation expression by adopting the derived result presented in (\\ref{eq:AP_A_5})\n\\begin{equation} \\small \\label{eq:24}\n\\prod\\limits_{j = {N_s} + 1}^N {\\left( {1 - \\exp \\left( { - \\frac{{{z_4}}}{{{{\\bar \\gamma }_{{i_j}}}}}} \\right)} \\right)} = 1 + \\sum\\limits_{g = 1}^{N - {N_s}} {{{\\left( { - 1} \\right)}^g}\\sum\\limits_{{{j'}_1} = {{j'}_0} + {N_s} + 1}^{N - g + 1} { \\cdots \\sum\\limits_{{{j'}_g} = {{j'}_{g - 1}} + 1}^N {\\exp \\left( { - \\sum\\limits_{m = 1}^g {\\frac{{{z_4}}}{{{{\\bar \\gamma }_{{i_{{{j'}_m}}}}}}}} } \\right)} } }.\n\\end{equation}\nThen, inserting the re-written expression of \\cite[Eq. (54)]{kn:IND_MGF_sungsiknam_1} as the summation expression into (\\ref{eq:12}) and then after some manipulations, (\\ref{eq:12}) can be re-written as\n\n\\begin{equation} \\scriptsize \\label{eq:arxiv_1}\n\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\begin{aligned}\n&{f_{Y,{W_1}}}\\!\\left(\\! {x,y} \\!\\right)\n\\\\\n&= \\!\\sum\\limits_{\\scriptstyle {i_{{N_c}}}, \\ldots ,{i_{N_1}} \\atop\n \\scriptstyle {i_{{N_c}}} \\ne \\cdots \\ne {i_{N_1}}}^{1,2, \\ldots ,{N_1}} \\! {\\frac{1}{{{{\\bar \\gamma }_{{i_{{N_c}}}}}}}\\! \\sum\\limits_{\\scriptstyle {i_{N_c - N_s}} = 1 \\atop\n \\scriptstyle {i_{N_c - N_s}} \\ne {i_{{N_c}, \\ldots ,}}{i_{N_1}}}^{N_1}\\! {\\frac{1}{{{{\\bar \\gamma }_{{i_{N_c - N_s}}}}}} \\! \\sum\\limits_{\\left\\{\\! {{i_{{N_c - N_s} + 1}}, \\ldots ,{i_{{N_c} - 1}}} \\!\\right\\} \\in {{\\mathop{\\rm P}\\nolimits} _{{ N_s} \\!- \\!1}}\\left(\\! {{I_{N_1}} \\!-\\! \\left\\{\\! {{i_{N_c - N_s}}} \\!\\right\\} \\!- \\!\\left\\{\\! {{i_{{N_c}}}, \\ldots ,{i_{N_1}}} \\!\\right\\}} \\!\\right)} \\! {\\sum\\limits_{k = {N_c - N_s} + 1}^{{N_c} - 1}\\! 
{{C_{k,{N_c - N_s} + 1,{N_c} - 1}}} } } }\n\\\\\n&\\left[ {\\sum\\limits_{\\left\\{\\! {{i_1}, \\ldots ,{i_{{N_c - N_s} - 1}}} \\!\\right\\} \\in {{\\mathop{\\rm P}\\nolimits} _{{N_c - N_s} - 1}}\\!\\left(\\! {{I_{N_1}} - \\left\\{\\! {{i_{N_c - N_s}}} \\!\\right\\} - \\left\\{\\! {{i_{{N_c}}}, \\ldots ,{i_{N_1}}} \\!\\right\\} - \\left\\{\\! {{i_{{N_c - N_s} + 1}}, \\ldots ,{i_{{N_c} - 1}}} \\!\\right\\}} \\!\\right)} \\!{\\sum\\limits_{h = 1}^{{N_c - N_s} - 1} \\!{{C_{h,1,{N_c - N_s} - 1}}} } \\!\\exp \\!\\left(\\! { - \\!\\frac{x}{{{{\\bar \\gamma }_{{i_h}}}}}} \\!\\right)\\!\\exp \\!\\left(\\! { - \\!\\frac{y}{{{{\\bar \\gamma }_{{i_k}}}}}} \\!\\right)} \\right.\n\\\\\n&\\quad\\times \\!\\int_0^{\\frac{y}{{{N_s}}}}\\! {\\int_{\\frac{y}{{{ N_s}}}}^{\\frac{x}{{N_c - N_s}}}\\! {\\exp \\!\\left( \\!{ -\\! \\left(\\! {\\sum\\limits_{l = 1}^{N_c - N_s}\\! {\\left(\\! {\\frac{1}{{{{\\bar \\gamma }_{{i_l}}}}}} \\!\\right) \\!- \\!\\frac{N_c-N_s}{{{{\\bar \\gamma }_{{i_h}}}}}} } \\!\\right){z_2}} \\!\\right)\\!\\exp \\!\\left(\\! { - \\!\\left( \\!{\\sum\\limits_{l = {N_c - N_s} + 1}^{{N_c}}\\! {\\left( \\!{\\frac{1}{{{{\\bar \\gamma }_{{i_l}}}}}} \\!\\right)\\! - \\!\\frac{{\\left(\\! {{ N_s}} \\!\\right)}}{{{{\\bar \\gamma }_{{i_k}}}}}} } \\!\\right){z_4}} \\!\\right)U\\!\\left( \\!{x \\!- \\!\\left({N_c \\!- \\!N_s}\\right) \\cdot {z_2}} \\!\\right)\\!U\\!\\left(\\! {y \\!- \\!\\left(\\! {{N_s}} \\!\\right) \\cdot {z_4}} \\!\\right)d{z_2}d{z_4}} }\n\\\\\n& + \\!\\sum\\limits_{l = 1}^{{ N_s} - 1} \\!{{{{\\left(\\! { - \\!1} \\!\\right)}^l}\\!\\sum\\limits_{{j_1} = {j_0} + {N_c - N_s} + 1}^{{N_c} - l} \\!{ \\cdots \\sum\\limits_{{j_l} = {j_{l - 1}} + 1}^{{N_c} - 1}\\! {} } } \\!\\sum\\limits_{\\left\\{ \\!{{i_1}, \\ldots ,{i_{{N_c - N_s} - 1}}} \\!\\right\\} \\in {{\\mathop{\\rm P}\\nolimits} _{{N_c - N_s} - 1}}\\!\\left(\\! {{I_{N_1}} \\!- \\!\\left\\{ \\!{{i_{N_c - N_s}}} \\!\\right\\} \\!- \\!\\left\\{\\! 
{{i_{{N_c}}}, \\ldots ,{i_{N_1}}} \\!\\right\\} \\!- \\!\\left\\{\\! {{i_{{N_c - N_s} + 1}}, \\ldots ,{i_{{N_c} - 1}}} \\!\\right\\}} \\!\\right)} }\n\\\\\n&\\quad\\sum\\limits_{h = 1}^{{N_c - N_s} - 1} {{C_{h,1,{N_c - N_s} - 1}}} \\! \\exp \\!\\left( \\!{ -\\! \\frac{x}{{{{\\bar \\gamma }_{{i_h}}}}}} \\!\\right)\\!\\exp \\!\\left( \\!{ -\\! \\frac{y}{{{{\\bar \\gamma }_{{i_k}}}}}} \\!\\right)\n\\\\\n&\\quad\\times \\!\\int_0^{\\frac{y}{{{N_s}}}} \\!{\\int_{\\frac{y}{{{N_s}}}}^{\\frac{x}{{N_c - N_s}}}\\! {\\exp \\!\\left(\\! { - \\!\\left( \\!{\\sum\\limits_{l = 1}^{N_c - N_s} \\!{\\left(\\! {\\frac{1}{{{{\\bar \\gamma }_{{i_l}}}}}} \\!\\right) \\!+ \\!\\sum\\limits_{q = 1}^l\\! {\\frac{1}{{{{\\bar \\gamma }_{{i_{{j_q}}}}}}}} \\!-\\! \\frac{{N_c - N_s}}{{{{\\bar \\gamma }_{{i_h}}}}} \\!- \\!\\frac{l}{{{{\\bar \\gamma }_{{i_k}}}}}} }\\! \\right)\\!{z_2}} \\!\\right)\\!\\exp \\!\\left(\\! { - \\!\\left(\\! {\\sum\\limits_{l = {N_c - N_s} + 1}^{{N_c}} \\!{\\left( \\!{\\frac{1}{{{{\\bar \\gamma }_{{i_l}}}}}} \\!\\right) \\!- \\!\\frac{{\\left( \\!{{ N_s} - l} \\!\\right)}}{{{{\\bar \\gamma }_{{i_k}}}}} \\!- \\!\\sum\\limits_{q = 1}^l \\!{\\frac{1}{{{{\\bar \\gamma }_{{i_{{j_q}}}}}}}} } } \\!\\right)\\!{z_4}} \\!\\right)} }\n\\\\\n&\\left. 
\\sizecorr{\\left[ {\\sum\\limits_{\\left\\{ {{i_1}, \\ldots ,{i_{{N_c - N_s} - 1}}} \\right\\} \\in {{\\mathop{\\rm P}\\nolimits} _{{N_c - N_s} - 1}}\\left( {{I_{N_1}} - \\left\\{ {{i_{N_c - N_s}}} \\right\\} - \\left\\{ {{i_{{N_c}}}, \\ldots ,{i_{N_1}}} \\right\\} - \\left\\{ {{i_{{N_c - N_s} + 1}}, \\ldots ,{i_{{N_c} - 1}}} \\right\\}} \\right)} {\\sum\\limits_{h = 1}^{{N_c - N_s} - 1} {{C_{h,1,{N_c - N_s} - 1}}} } \\exp \\left( { - \\frac{x}{{{{\\bar \\gamma }_{{i_h}}}}}} \\right)\\exp \\left( { - \\frac{y}{{{{\\bar \\gamma }_{{i_k}}}}}} \\right)} \\right.}\n\\quad\\quad U\\left( {x - \\left({N_c \\!- \\!N_s}\\right) \\cdot {z_2}} \\right)U\\left( {y - \\left( {l \\cdot {z_2} + \\left( {{N_s} - l} \\right) \\cdot {z_4}} \\right)} \\right)d{z_2}d{z_4} \\right]\n\\\\\n&{ + \\sum\\limits_{\\scriptstyle {i_{{N_c}}}, \\ldots ,{i_{N_1}} \\atop\n \\scriptstyle {i_{{N_c}}} \\ne \\cdots \\ne {i_{N_1}}}^{1,2, \\ldots ,{N_1}} {\\frac{1}{{{{\\bar \\gamma }_{{i_{{N_c}}}}}}}\\sum\\limits_{g = 1}^{{N_1} - {N_c}} {{{\\left( { - 1} \\right)}^g}\\sum\\limits_{{{j'}_1} = {{j'}_0} + {N_c} + 1}^{{N_1} - g + 1} { \\cdots \\sum\\limits_{{{j'}_g} = {{j'}_{g - 1}} + 1}^{N_1} {\\sum\\limits_{\\scriptstyle {i_{N_c - N_s}} = 1 \\atop\n \\scriptstyle {i_{N_c - N_s}} \\ne {i_{{N_c}, \\ldots ,}}{i_{N_1}}}^{N_1} {\\frac{1}{{{{\\bar \\gamma }_{{i_{N_c - N_s}}}}}} } } } } } }\n\\\\\n&\\sum\\limits_{\\left\\{ {{i_{{N_c - N_s} + 1}}, \\ldots ,{i_{{N_c} - 1}}} \\right\\} \\in {{\\mathop{\\rm P}\\nolimits} _{{ N_s} - 1}}\\left( {{I_{N_1}} - \\left\\{ {{i_{N_c - N_s}}} \\right\\} - \\left\\{ {{i_{{N_c}}}, \\ldots ,{i_{N_1}}} \\right\\}} \\right)} {\\sum\\limits_{k = {N_c - N_s} + 1}^{{N_c} - 1} {{C_{k,{N_c - N_s} + 1,{N_c} - 1}}} }\n\\\\\n&\\left[ {\\sum\\limits_{\\left\\{\\! {{i_1}, \\ldots ,{i_{{N_c - N_s} - 1}}}\\! \\right\\} \\in {{\\mathop{\\rm P}\\nolimits} _{{N_c - N_s} - 1}}\\!\\left(\\! {{I_{N_1}} - \\left\\{\\! {{i_{N_c - N_s}}} \\!\\right\\} - \\left\\{\\! 
{{i_{{N_c}}}, \\ldots ,{i_{N_1}}}\\! \\right\\} - \\left\\{\\! {{i_{{N_c - N_s} + 1}}, \\ldots ,{i_{{N_c} - 1}}}\\! \\right\\}} \\!\\right)} \\!{\\sum\\limits_{h = 1}^{{N_c - N_s} - 1}\\! {{C_{h,1,{N_c - N_s} - 1}}} } \\!\\exp \\!\\left(\\! { -\\! \\frac{x}{{{{\\bar \\gamma }_{{i_h}}}}}} \\!\\right)\\!\\exp \\!\\left(\\! { - \\!\\frac{y}{{{{\\bar \\gamma }_{{i_k}}}}}} \\!\\right)}\\right.\n\\\\\n&\\quad\\times \\int_0^{\\frac{y}{{{N_s}}}} {\\int_{\\frac{y}{{{N_s}}}}^{\\frac{x}{{N_c - N_s}}} {\\exp \\left( { - \\left( {\\sum\\limits_{l = 1}^{N_c - N_s} {\\left( {\\frac{1}{{{{\\bar \\gamma }_{{i_l}}}}}} \\right) - \\frac{{N_c - N_s}}{{{{\\bar \\gamma }_{{i_h}}}}}} } \\right){z_2}} \\right)\\exp \\left( { - \\left( {\\sum\\limits_{l = {N_c - N_s} + 1}^{{N_c}} {\\left( {\\frac{1}{{{{\\bar \\gamma }_{{i_l}}}}}} \\right) + \\sum\\limits_{m = 1}^g {\\frac{1}{{{{\\bar \\gamma }_{{i_{{{j'}_m}}}}}}}} - \\frac{{\\left( {{N_s}} \\right)}}{{{{\\bar \\gamma }_{{i_k}}}}}} } \\right){z_4}} \\right)} }\n\\\\\n&\\quad\\quad U\\left( {x - \\left({N_c \\!- \\!N_s}\\right) \\cdot {z_2}} \\right)U\\left( {y - \\left( {{N_s}} \\right) \\cdot {z_4}} \\right)d{z_2}d{z_4}\n\\\\\n&+ \\!\\sum\\limits_{l = 1}^{ {N_s} - 1} \\!{ {{{\\left(\\! { - \\!1} \\!\\right)}^l}\\!\\sum\\limits_{{j_1} = {j_0} + {N_c - N_s} + 1}^{{N_c} - l}\\! { \\cdots \\!\\sum\\limits_{{j_l} = {j_{l - 1}} + 1}^{{N_c} - 1}\\! {} } } \\!\\sum\\limits_{\\left\\{\\! {{i_1}, \\ldots ,{i_{{N_c - N_s} - 1}}} \\!\\right\\} \\in {{\\mathop{\\rm P}\\nolimits} _{{N_c - N_s} - 1}}\\!\\left(\\! {{I_{N_1}} \\!- \\!\\left\\{\\! {{i_{N_c - N_s}}} \\!\\right\\} \\!- \\!\\left\\{\\! {{i_{{N_c}}}, \\ldots ,{i_{N_1}}} \\!\\right\\} \\!-\\! \\left\\{\\! 
{{i_{{N_c - N_s} + 1}}, \\ldots ,{i_{{N_c} - 1}}} \\!\\right\\}} \\!\\right)} }\n\\\\\n&\\quad {\\sum\\limits_{h = 1}^{{N_c - N_s} - 1} {{C_{h,1,{N_c - N_s} - 1}}}} \\exp \\left( { - \\frac{x}{{{{\\bar \\gamma }_{{i_h}}}}}} \\right)\\exp \\left( { - \\frac{y}{{{{\\bar \\gamma }_{{i_k}}}}}} \\right)\n\\\\\n& \\quad { \\times \\!\\int_0^{\\frac{y}{{{N_s}}}} \\!{\\int_{\\frac{y}{{{N_s}}}}^{\\frac{x}{{N_c - N_s}}} \\! {\\exp \\!\\left( \\!{ - \\!\\left(\\! {\\sum\\limits_{l = 1}^{N_c - N_s}\\! {\\left(\\! {\\frac{1}{{{{\\bar \\gamma }_{{i_l}}}}}}\\! \\right)\\! + \\!\\sum\\limits_{q = 1}^l \\!{\\frac{1}{{{{\\bar \\gamma }_{{i_{{j_q}}}}}}}} \\! - \\! \\frac{{N_c - N_s}}{{{{\\bar \\gamma }_{{i_h}}}}} \\!- \\!\\frac{l}{{{{\\bar \\gamma }_{{i_k}}}}}} } \\!\\right)\\!{z_2}} \\!\\right)\\!\\exp\\! \\left(\\! { - \\!\\left(\\! {\\sum\\limits_{l = {N_c - N_s} + 1}^{{N_c}} \\!{\\left( \\!{\\frac{1}{{{{\\bar \\gamma }_{{i_l}}}}}} \\!\\right) \\!+ \\!\\sum\\limits_{m = 1}^g \\!{\\frac{1}{{{{\\bar \\gamma }_{{i_{{{j'}_m}}}}}}}} \\! -\\! \\frac{{\\left(\\! {{ N_s} - l} \\!\\right)}}{{{{\\bar \\gamma }_{{i_k}}}}} \\!- \\!\\sum\\limits_{q = 1}^l \\!{\\frac{1}{{{{\\bar \\gamma }_{{i_{{j_q}}}}}}}} } } \\!\\right)\\!{z_4}}\\! \\right)} } }\n\\\\\n&\\left. 
\\sizecorr{\\left[ {\\sum\\limits_{\\left\\{ {{i_1}, \\ldots ,{i_{{N_c - N_s} - 1}}} \\right\\} \\in {{\\mathop{\\rm P}\\nolimits} _{{N_c - N_s} - 1}}\\left( {{I_{N_1}} - \\left\\{ {{i_{N_c - N_s}}} \\right\\} - \\left\\{ {{i_{{N_c}}}, \\ldots ,{i_{N_1}}} \\right\\} - \\left\\{ {{i_{{N_c - N_s} + 1}}, \\ldots ,{i_{{N_c} - 1}}} \\right\\}} \\right)} {\\sum\\limits_{h = 1}^{{N_c - N_s} - 1} {{C_{h,1,{N_c - N_s} - 1}}} } \\exp \\left( { - \\frac{x}{{{{\\bar \\gamma }_{{i_h}}}}}} \\right)\\exp \\left( { - \\frac{y}{{{{\\bar \\gamma }_{{i_k}}}}}} \\right)}\\right.}\n\\quad\\quad U\\left( {x - \\left({N_c \\!- \\!N_s}\\right) \\cdot {z_2}} \\right)U\\left( {y - \\left( {l \\cdot {z_2} + \\left( {{N_s} - l} \\right) \\cdot {z_4}} \\right)} \\right)d{z_2}d{z_4} \\right].\n\\end{aligned}\n\\end{equation}\n\nWith (\\ref{eq:arxiv_1}), we now need to evaluate the following four integral terms\n\\\\\n\\noindent A) For the first integral term:\n\\begin{equation} \\small \\label{eq:25}\n\\begin{aligned}\n&\\int_0^{\\frac{y}{{N_s}}}\\! {\\int_{\\frac{y}{{ {N_s}}}}^{\\frac{x}{{N_c-N_s}}}\\! {\\exp \\!\\left( \\!{ - \\!\\left( \\!{\\sum\\limits_{l = 1}^{N_c-N_s}\\! {\\left(\\! {\\frac{1}{{{{\\bar \\gamma }_{{i_l}}}}}}\\! \\right)\\! - \\!\\frac{{N_c\\!-\\!N_s}}{{{{\\bar \\gamma }_{{i_h}}}}}} }\\! \\right)\\!{z_2}}\\! \\right)} }\n\\\\\n&\\quad \\exp \\!\\left(\\! { - \\!\\left( \\!{\\sum\\limits_{l = {N_c-N_s} + 1}^{{N_c}}\\! {\\left(\\! {\\frac{1}{{{{\\bar \\gamma }_{{i_l}}}}}} \\!\\right)\\! -\\! \\frac{{\\left(\\! { {N_s}} \\!\\right)}}{{{{\\bar \\gamma }_{{i_k}}}}}} } \\!\\right)\\!{z_4}} \\!\\right)\\!U\\!\\left( \\!{x \\!- \\!\\left(\\!{N_c\\!-\\!N_s}\\!\\right) \\cdot {z_2}} \\!\\right)\\!U\\!\\left( \\!{y \\!- \\!\\left(\\! 
{{N_s}} \\!\\right) \\cdot {z_4}} \\!\\right)\\!d{z_2}d{z_4},\n\\end{aligned}\n\\end{equation}\n\\noindent B) For the second integral term:\n\\begin{equation} \\small \\label{eq:26}\n\\begin{aligned}\n&\\int_0^{\\frac{y}{{{N_s}}}} \\!{\\int_{\\frac{y}{{ {N_s}}}}^{\\frac{x}{{N_c-N_s}}} \\!{\\exp\\! \\left(\\! { -\\! \\left(\\! {\\sum\\limits_{l = 1}^{N_c-N_s}\\! {\\left(\\! {\\frac{1}{{{{\\bar \\gamma }_{{i_l}}}}}}\\! \\right)\\! + \\!\\sum\\limits_{q = 1}^l \\! {\\frac{1}{{{{\\bar \\gamma }_{{i_{{j_q}}}}}}}} \\! - \\!\\frac{{N_c\\!-\\!N_s}}{{{{\\bar \\gamma }_{{i_h}}}}} \\!- \\!\\frac{l}{{{{\\bar \\gamma }_{{i_k}}}}}} } \\!\\right)\\!{z_2}} \\!\\right)} }\n\\\\\n&\\quad\\exp \\!\\left(\\! { - \\!\\left( \\!{\\sum\\limits_{l = {N_c-N_s} + 1}^{{N_c}}\\! {\\left(\\! {\\frac{1}{{{{\\bar \\gamma }_{{i_l}}}}}} \\!\\right) \\!- \\!\\frac{{\\left( \\!{{N_s} \\!- \\!l} \\!\\right)}}{{{{\\bar \\gamma }_{{i_k}}}}} \\!- \\!\\sum\\limits_{q = 1}^l \\!{\\frac{1}{{{{\\bar \\gamma }_{{i_{{j_q}}}}}}}} } } \\!\\right)\\!{z_4}} \\!\\right) U\\!\\left( \\!{x \\!- \\!\\left(\\!{N_c\\!-\\!N_s}\\!\\right) \\cdot {z_2}}\\! \\right)\\!U\\!\\left( \\!{y \\!- \\!\\left( \\!{l \\cdot {z_2} \\!+ \\!\\left( \\!{{N_s} \\!- \\!l} \\!\\right) \\cdot {z_4}} \\!\\right)} \\!\\right)d{z_2}d{z_4},\n\\end{aligned}\n\\end{equation}\n\\noindent C) For the third integral term:\n\\begin{equation} \\small \\label{eq:27}\n\\begin{aligned}\n&\\int_0^{\\frac{y}{{{N_s}}}}\\! {\\int_{\\frac{y}{{{N_s}}}}^{\\frac{x}{{N_c-N_s}}} \\!{\\exp\\! \\left(\\! { - \\!\\left(\\! {\\sum\\limits_{l = 1}^{N_c-N_s} \\!{\\left( \\!{\\frac{1}{{{{\\bar \\gamma }_{{i_l}}}}}} \\!\\right) \\!- \\!\\frac{{N_c\\!-\\!N_s}}{{{{\\bar \\gamma }_{{i_h}}}}}} } \\!\\right){z_2}} \\!\\right)\\!\\exp\\! \\left(\\! { - \\!\\left(\\! {\\sum\\limits_{l = {N_c-N_s} + 1}^{{N_c}} \\!{\\left( \\!{\\frac{1}{{{{\\bar \\gamma }_{{i_l}}}}}} \\!\\right) \\!+\\! \\sum\\limits_{m = 1}^g \\!{\\frac{1}{{{{\\bar \\gamma }_{{i_{{{j'}_m}}}}}}}} \\! 
- \\!\\frac{{\\left(\\! {{N_s}}\\! \\right)}}{{{{\\bar \\gamma }_{{i_k}}}}}} } \\!\\right){z_4}} \\!\\right)} }\n\\\\\n&\\quad U\\left( {x - \\left({N_c-N_s}\\right)\\cdot {z_2}} \\right)U\\left( {y - \\left( {{N_s}} \\right) \\cdot {z_4}} \\right)d{z_2}d{z_4},\n\\end{aligned}\n\\end{equation}\n\\noindent D) For the fourth integral term:\n\\begin{equation} \\small \\label{eq:28}\n\\!\\!\\!\\begin{aligned}\n&\\int_0^{\\frac{y}{{{N_s}}}} \\!{\\int_{\\frac{y}{{{N_s}}}}^{\\frac{x}{{N_c-N_s}}} \\!{\\exp \\!\\left(\\! { - \\!\\left(\\! {\\sum\\limits_{l = 1}^{N_c-N_s} \\!{\\left(\\! {\\frac{1}{{{{\\bar \\gamma }_{{i_l}}}}}} \\!\\right) \\!+ \\!\\sum\\limits_{q = 1}^l \\!{\\frac{1}{{{{\\bar \\gamma }_{{i_{{j_q}}}}}}}} \\! - \\! \\frac{{N_c\\!-\\!N_s}}{{{{\\bar \\gamma }_{{i_h}}}}} \\!-\\! \\frac{l}{{{{\\bar \\gamma }_{{i_k}}}}}} } \\!\\right)\\!{z_2}} \\!\\right)} }\n\\\\\n&\\quad\\exp \\!\\left( \\!{ -\\! \\left( \\!{\\sum\\limits_{l = {N_c-N_s} + 1}^{{N_c}} \\!{\\left(\\! {\\frac{1}{{{{\\bar \\gamma }_{{i_l}}}}}} \\!\\right) \\!+ \\!\\sum\\limits_{m = 1}^g \\!{\\frac{1}{{{{\\bar \\gamma }_{{i_{{{j'}_m}}}}}}}} \\! - \\!\\frac{{\\left( \\!{ {N_s} \\!-\\! 
l} \\!\\right)}}{{{{\\bar \\gamma }_{{i_k}}}}} \\!- \\!\\sum\\limits_{q = 1}^l \\!{\\frac{1}{{{{\\bar \\gamma }_{{i_{{j_q}}}}}}}} } } \\!\\right)\\!{z_4}} \\!\\right) U\\left( {x - \\left({N_c-N_s}\\right) \\cdot {z_2}} \\right)\n\\\\\n&\\quad U\\left( {y - \\left( {l \\cdot {z_2} + \\left( {{N_s} - l} \\right) \\cdot {z_4}} \\right)} \\right)d{z_2}d{z_4}.\n\\end{aligned}\n\\end{equation}\n\\normalsize\n\nFor the first and the third integral terms, closed-form expression can be obtained by simply applying basic exponential integration \\cite{kn:abramowitz}, using the following useful common function\n\\begin{equation} \\label{eq:common_1}\n\\small\n\\begin{array}{l}\n{\\rm I}\\left( {x,e,a,b;y,f,c,d} \\right)\n = \\int_c^d {\\int_a^b {\\exp \\left( {e \\cdot x} \\right)\\exp \\left( {f \\cdot y} \\right)dxdy} } \\\\\n = \\frac{1}{{e \\cdot f}}\\left\\{ {\\exp \\left( {e \\cdot b} \\right) - \\exp \\left( {e \\cdot a} \\right)} \\right\\}\\left\\{ {\\exp \\left( {f \\cdot d} \\right) - \\exp \\left( {f \\cdot c} \\right)} \\right\\}.\n\\end{array}\n\\end{equation}\nWith (\\ref{eq:common_1}), by letting \\small$\\alpha = - \\left( {\\sum\\limits_{l = {N_c-N_s} + 1}^{{N_c}} {\\left( {\\frac{1}{{{{\\bar \\gamma }_{{i_l}}}}}} \\right) - \\frac{{\\left( {{N_s}} \\right)}}{{{{\\bar \\gamma }_{{i_k}}}}}} } \\right)$\\normalsize, \\small$\\alpha ' = - \\left( {\\sum\\limits_{l = {N_c-N_s} + 1}^{{N_c}} {\\left( {\\frac{1}{{{{\\bar \\gamma }_{{i_l}}}}}} \\right) + \\sum\\limits_{m = 1}^g {\\frac{1}{{{{\\bar \\gamma }_{{i_{{{j'}_m}}}}}}}} - \\frac{{\\left( {{N_s}} \\right)}}{{{{\\bar \\gamma }_{{i_k}}}}}} } \\right)$\\normalsize, and \\small$\\beta = - \\left( {\\sum\\limits_{l = 1}^{{N_c-N_s}} {\\left( {\\frac{1}{{{{\\bar \\gamma }_{{i_l}}}}}} \\right) - \\frac{{N_c-N_s}}{{{{\\bar \\gamma }_{{i_h}}}}}} } \\right)$\\normalsize, the closed-form expression of the first integral term can be obtained by simply applying the basic exponential integration \\cite{kn:abramowitz} 
as\n\\begin{equation} \\label{eq:29}\n\\small\n\\begin{array}{l}\n\\int_0^{\\frac{y}{{{N_s}}}} {\\int_{\\frac{y}{{{N_s}}}}^{\\frac{x}{{{N_c} - {N_s}}}} {\\exp \\left( {\\beta {z_2}} \\right)\\exp \\left( {\\alpha {z_4}} \\right)U\\left( {x - \\left( {{N_c} - {N_s}} \\right) \\cdot {z_2}} \\right)U\\left( {y - \\left( {{N_s}} \\right) \\cdot {z_4}} \\right)d{z_2}d{z_4}} } \\\\\n = \\int_0^{\\frac{y}{{{N_s}}}} {\\int_{\\frac{y}{{{N_s}}}}^{\\frac{x}{{{N_c} - {N_s}}}} {\\exp \\left( {\\beta {z_2}} \\right)\\exp \\left( {\\alpha {z_4}} \\right)d{z_2}d{z_4}} } \\\\\n = {\\rm I}\\left( {{z_2},\\beta ,\\frac{y}{{{N_s}}},\\frac{x}{{{N_c} - {N_s}}};{z_4},\\alpha ,0,\\frac{y}{{{N_s}}}} \\right).\n\\end{array}\n\\end{equation}\nSimilarly, for the third integral term, we can also obtain closed-form expressions simply by replacing $\\alpha$ with $\\alpha '$ on (\\ref{eq:29}) as\n\\begin{equation} \\label{eq:30}\n\\small\n\\begin{array}{l}\n\\int_0^{\\frac{y}{{{N_s}}}} {\\int_{\\frac{y}{{{N_s}}}}^{\\frac{x}{{{N_c} - {N_s}}}} {\\exp \\left( {\\beta {z_2}} \\right)\\exp \\left( {\\alpha '{z_4}} \\right)U\\left( {x - \\left( {{N_c} - {N_s}} \\right) \\cdot {z_2}} \\right)U\\left( {y - \\left( {{N_s}} \\right) \\cdot {z_4}} \\right)d{z_2}d{z_4}} } \\\\\n = \\int_0^{\\frac{y}{{{N_s}}}} {\\int_{\\frac{y}{{{N_s}}}}^{\\frac{x}{{{N_c} - {N_s}}}} {\\exp \\left( {\\beta {z_2}} \\right)\\exp \\left( {\\alpha '{z_4}} \\right)d{z_2}d{z_4}} } \\\\\n = {\\rm I}\\left( {{z_2},\\beta ,\\frac{y}{{{N_s}}},\\frac{x}{{{N_c} - {N_s}}};{z_4},\\alpha ',0,\\frac{y}{{{N_s}}}} \\right).\n\\end{array}\n\\end{equation}\n\nHowever, for the second and the fourth integral terms, we need to consider two cases separately based on the valid integration region of $z_2$. 
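The separable identity ${\rm I}(\cdot)$ of (\ref{eq:common_1}), used in (\ref{eq:29}) and (\ref{eq:30}), is easy to sanity-check numerically: the double integral of $e^{ex}e^{fy}$ over a box factorises into two one-dimensional exponential integrals. The sketch below compares the closed form against a midpoint-rule quadrature for arbitrary test exponents and limits (placeholder values, not the paper's parameters).

```python
# Check of the separable double-integral identity I(x,e,a,b; y,f,c,d):
# the integral of exp(e*x)*exp(f*y) over [a,b] x [c,d] equals
# (1/(e*f)) * (exp(e*b)-exp(e*a)) * (exp(f*d)-exp(f*c)).
import math

def I_closed(e, a, b, f, c, d):
    return (1.0 / (e * f)) * (math.exp(e * b) - math.exp(e * a)) \
                           * (math.exp(f * d) - math.exp(f * c))

def I_numeric(e, a, b, f, c, d, n=500):
    """Midpoint-rule double integral of exp(e*x + f*y) over the box."""
    hx, hy = (b - a) / n, (d - c) / n
    total = 0.0
    for i in range(n):
        ex = math.exp(e * (a + (i + 0.5) * hx))
        for j in range(n):
            total += ex * math.exp(f * (c + (j + 0.5) * hy))
    return total * hx * hy

# Negative exponents, as in (eq:29)-(eq:30); limits are arbitrary test values.
e, f = -1.2, -0.7
a, b, c, d = 0.3, 1.1, 0.0, 0.8
assert abs(I_closed(e, a, b, f, c, d) - I_numeric(e, a, b, f, c, d)) < 1e-4
```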
More specifically, $z_2$ should satisfy the following two conditions: i) $z_2 \\le \\frac{x}{{N_c-N_s}}$ and ii) $z_2 \\le \\frac{y-\\left( N_s-l \\right)z_4}{l}$, which leads to $z_2 \\le \\min\\left[ \\frac{x}{{N_c-N_s}}, \\frac{y-\\left( {N_s}-l \\right)z_4}{l}\\right]$. Accordingly, if $\\frac{x}{{N_c-N_s}} \\le \\frac{y-\\left( {N_s}-l \\right)z_4}{l}$, then the valid integration region for $z_2$ is unchanged, $\\frac{y}{{N_s}} 0$ points, is 0.00010. \n\nFor spot 2, the best match between the times of the modeled and observed peaks was obtained for a stellar period of 11.4 days. The residuals of the light curves for the four transits are shown in the right panels of Figure~\\ref{per11}, with the data plotted as crosses and the model as a solid gray line. For an 11.4-day stellar period, spot 2 was on the other side of the star during the transit of April 28th, and both spots were behind the stellar limb during the last transit on May 12th. Moreover, in order to fit the peak intensity on May 5th, which is smaller than that on April 25th, it was necessary to decrease the size of spot 2 from 0.33 to 0.25 $R_p$. This may be an indication of spot evolution, with the spot decaying in size after approximately 10 days. \n\nWhat if the signature in the May 5th transit light curve is caused by spot 1 instead of spot 2, which would mean that the star rotates faster than an 11.4-day period implies? The same procedure was repeated, varying $P_s$ such that the ``bump\" caused by spot 1 turned up at $t = -0.43$ h on May 5th. This was successfully achieved for a stellar rotational period of 9.9 days, and the results are shown in Figure~\\ref{per9}. Also in this case, the spot is seen to decrease in size, from 0.36 to 0.27 $R_p$. \n\nUnfortunately, due to the gaps in the data, it is not possible to distinguish between the two cases: 9.9 or 11.4 days for the rotational period of HD 209458. 
If there were complete data coverage, it might be possible to distinguish between the two period values using the data just after ingress on the May 5th transit, since the model for the 11.4-day period predicts a spot signature there whereas the model for the 9.9-day period does not.\n\n\n\\section{Discussion and Conclusions}\n\nThis work proposes to estimate the rotational period of a star by following the apparent shift in the longitude position of its surface spots, similar to what was done by Galileo and his contemporaries four centuries ago for the Sun. Tracking of the spots is done by identifying ``bumps\" in the light curves of successive planetary transits.\n\nThis method was successfully tested for the Sun, yielding the correct value for the solar period from simulated transits taken only three days apart, even though the solar period is about 27 days. Moreover, by modeling different sunspot groups it was possible to verify the solar differential rotation.\nSupposing that the small variations detected in the light curve during planetary transits are due to the occultation of starspots, the model was also applied to HD 209458 using HST observations obtained by Brown et al. (2001) in April and May of 2000. The star was modeled as having two spots during the four transits. The ``bumps\" detected on two transits separated by three orbital periods yield a stellar rotation period of 9.9 or 11.4 days, depending on which of the spots detected on April 25th is considered to cause the intensity variation on the May 5th transit light curve.\n\nSeveral observations of $v \\sin i$ of HD 209458 are found in the literature. Assuming that the inclination angle is 86.68$^\\circ$ and the stellar radius 1.148 R$_\\odot$, these observations yield stellar periods of 14.4 $\\pm$ 2.1 days (Mazeh et al. 2000) and 15 $\\pm$ 6 days (Queloz et al. 2000). More recently, shorter periods have been found: 12.3 $\\pm$ 0.5 days (Winn et al. 2005) and 12 days (Fisher \\& Valenti 2005). 
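For reference, converting a spectroscopic $v \sin i$ measurement into a rotation period uses $P = 2\pi R \sin i / (v \sin i)$ with the inclination and radius quoted above. The sketch below assumes $v \sin i \approx 4.70$ km s$^{-1}$ (roughly the Winn et al. 2005 measurement; an assumed input for illustration, not a value taken from this paper).

```python
# Rotation period implied by a spectroscopic v*sin(i), via
# P = 2*pi*R*sin(i) / (v*sin(i)), using the inclination (86.68 deg) and
# stellar radius (1.148 R_sun) quoted in the text. The v*sin(i) of
# 4.70 km/s is an assumed illustrative input (roughly Winn et al. 2005).
import math

R_SUN_KM = 6.957e5  # solar radius in km

def rotation_period_days(vsini_km_s, radius_rsun=1.148, incl_deg=86.68):
    circumference_km = 2.0 * math.pi * radius_rsun * R_SUN_KM
    period_s = circumference_km * math.sin(math.radians(incl_deg)) / vsini_km_s
    return period_s / 86400.0

print(round(rotation_period_days(4.70), 1))  # about 12.3 days
```

With these inputs the implied period is close to the 12.3-day value quoted above, illustrating how the literature periods follow from the $v \sin i$ measurements.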
\n\nThe periods obtained here, 9.9 and 11.4 days, are a little shorter than those found in the literature; of the two, the 11.4-day period seems the more likely value for HD 209458.\nHowever, HD 209458 probably presents differential rotation, similar to the Sun. In this case, the period obtained by other authors, being based on line-broadening observations, is actually an average over the whole stellar disk, whereas the period determined here represents the rotational velocity at a specific latitude, which is close to the equator. \nAs the planet size spans about 20$^\\circ$ in stellar latitude, we are probing latitudes from $-22^\\circ$ to $-38^\\circ$. Since the work of Winn et al. (2005) was based on observations of the Rossiter-McLaughlin effect during transits, and was therefore obtained at the same latitudes we sample, it is the result that should agree best with the one presented here. \n\nThus far, over 50 transiting planets have been detected (The Extrasolar Planets Encyclopaedia - http:\/\/exoplanet.eu) and many more are expected, especially in the coming months, thanks to observations by the CoRoT satellite. The method proposed here can easily be applied to the planetary transits of newly discovered planets and checked against the periodic modulation of the stellar flux outside the transits themselves.\n\n\\acknowledgments\n\nI thank the referee Antonio Lanza for useful comments which greatly improved the paper. This research was partly supported by the Brazilian agency FAPESP (grant number 06\/50654-3).\n\n\\section{Introduction}\nBootstrap percolation models and arguments have been used to study a range of phenomena in various areas, ranging from crack formation, clustering phenomena, and the dynamics of glasses and sandpiles to neural nets and economics; see \\citep{i3,i1,i2} for a small sample of such applications. 
In this paper, we shall study a new geometric bootstrap percolation model defined on the $d$-dimensional grid $[n]^d$ with infection parameter $r \\in \\mathbb{N}$, which we call \\emph{$r$-neighbour line percolation}. Given $v \\in [n]^d$, write $L\\left(v\\right)$ for the set of $d$ axis-parallel lines through $v$ and let \n\\[L([n]^d) = \\bigcup_{v\\in [n]^d}L\\left(v\\right)\\] \nbe the set of all axis-parallel lines that pass through the lattice points of $[n]^d$. In line percolation, infection spreads from a subset $A\\subset [n]^d$ of initially infected lattice points as follows: if there is a line $\\mathcal{L} \\in L\\left([n]^d\\right)$ with $r$ or more infected lattice points on it, then every lattice point of $[n]^d$ on $\\mathcal{L}$ gets infected. In other words, we have a sequence $A = A^{\\left(0\\right)} \\subset A^{\\left(1\\right)} \\subset \\dots \\subset A^{\\left(m\\right)} \\subset \\dots$ of subsets of $[n]^d$\nsuch that\n\\[A^{\\left(m+1\\right)} = A^{\\left(m\\right)} \\cup \\left\\{ v \\in [n]^d : \\exists \\mathcal{L} \\in L\\left(v\\right) \\mbox{ such that } |\\mathcal{L} \\cap A^{\\left(m\\right)}| \\geq r\\right\\}.\\] \nThe \\emph{closure} of $A$ is the set $[A] = \\bigcup_m A^{\\left(m\\right)}$ of eventually infected points. We say that the process \\emph{terminates} when no more newly infected points are added, i.e., when $A^{\\left(m\\right)} = [A]$. If all the points of $[n]^d$ are infected when the process terminates, i.e., if $[A] = [n]^d$, then we say that $A$ \\emph{percolates}.\n\nThe classical model of \\emph{$r$-neighbour bootstrap percolation on a graph} was introduced by Chalupa, Leath and Reich \\citep{Chalupa79} in the context of disordered magnetic systems and has since been extensively studied not only by mathematicians but also by physicists and sociologists; for a small sample of papers, see, for instance, \\citep{Adler03, ising1, socio1, socio2}. 
In this model, a vertex of the graph gets infected if it has at least $r$ previously infected neighbours in the graph. The model is usually studied in the random setting, where the main question is to determine the critical threshold at which percolation occurs. If the elements of the initially infected set are chosen independently at random, each with probability $p$, then one aims to determine the value $p_c$ at which percolation becomes likely. In this regard, the $r$-neighbour bootstrap percolation model on $[n]^d$, with edges induced by the integer lattice $\Z^d$, has been the subject of a large body of work; see \citep{Holroyd03, Balogh09, Balogh12}, and the references therein.

On account of its inherent geometric structure, it is possible to construct other interesting bootstrap percolation models on the $d$-dimensional grid. In the past, this has involved endowing the grid with a graph structure other than the one induced by the integer lattice (which, in other words, is a Cartesian product of paths). In this direction, Holroyd, Liggett and Romik \citep{cross_nbd} considered $r$-neighbour bootstrap percolation on $[n]^2$ where the neighbourhood of a lattice point $v$ is taken to be a ``cross'' centred at $v$, consisting of $r-1$ points in each of the four axis directions. Sharp thresholds for a model with an anisotropic variant of these ``cross'' neighbourhoods were obtained recently by Duminil-Copin and van Enter \citep{anisotropic}. Gravner, Hoffman, Pfeiffer and Sivakoff \citep{hamming} studied the $r$-neighbour bootstrap percolation model on $[n]^d$ with the edges induced by the Hamming torus, where $u,v \in [n]^d$ are adjacent if and only if $u-v$ has exactly one nonzero coordinate; the Hamming torus, in other words, is the Cartesian product of complete graphs, which is perhaps the second most natural graph structure on $[n]^d$ after the grid.
They obtained bounds on the critical exponents (i.e., $\log_n (p_c)$) which are tight in the case $d=2$ and for small values of the infection parameter when $d=3$.

The line percolation model we consider is a natural variant of the bootstrap percolation model on the Hamming torus studied by Gravner, Hoffman, Pfeiffer and Sivakoff. However, we should note that while all the other models mentioned above are $r$-neighbour bootstrap percolation models on \emph{some underlying graph}, the line percolation model \emph{is not}. Morally, line percolation is better thought of as an instance of the very general \emph{neighbourhood family percolation} model introduced by Bollob\'as, Smith and Uzzell \citep{nf1}. In the neighbourhood family percolation model, one starts by specifying a homogeneous (possibly infinite) collection of subsets of the grid for each point of the grid; a point of the grid becomes infected if all the points of some set in the collection associated with the point are previously infected. In their paper, Bollob\'as, Smith and Uzzell prove a classification theorem for neighbourhood family models and show that every such model is of one of three types: \emph{supercritical}, \emph{critical} or \emph{subcritical}. We note that line percolation is a natural geometric example of a supercritical neighbourhood family process. (Bollob\'as, Smith and Uzzell proved general bounds for the critical probabilities of supercritical and critical models; the analysis of subcritical models is more delicate and was later carried out by Balister, Bollob\'as, Przykucki and Smith \citep{nf2}.)

\section{Our results}

In this note, our main aim is to investigate what happens in the line percolation model when the initial set $A = A_p\subset [n]^d$ of infected points is determined by randomly selecting points from $[n]^d$, each independently with probability $p$. It is then natural to determine the values of $p$ for which percolation is likely to occur.
Let $\\theta_p\\left(n,r,d\\right)$ denote the probability that such a randomly chosen initial set $A_p$ percolates. We define the \\emph{critical probability} $p_c\\left(n,r,d\\right)$ by setting\n\\[p_c\\left(n,r,d\\right) = \\inf \\left\\{p : \\theta_p\\left(n,r,d\\right) \\geq 1\/2\\right\\}.\\]\n\nThe primary question of interest is to determine the asymptotic behaviour of $p_c\\left(n,r,d\\right)$ for every $d,r \\in \\mathbb{N}$ as $n \\rightarrow \\infty$. Note that when the infection parameter $r=1$, a set $A$ of initially infected lattice points percolates if and only if $|A| > 0$; so in this paper, we restrict our attention to $r \\geq 2$. In two dimensions, we are able to estimate the probability of percolation $\\theta_p\\left(n,r,2\\right)$ up to constant factors for all $0\\leq p \\leq 1$. We also determine $p_c\\left(n,r,2\\right)$ up to a factor of 1+o(1) as $n \\rightarrow \\infty$.\n\n\\begin{thm}\\label{2d-unboosted-perc}\nFix $r, s \\in \\mathbb{N}$, with $r \\geq 2$ and $0 \\leq s \\leq r-1$. Then as $n \\to \\infty$,\n\\begin{equation} \\theta_p\\left(n,r,2\\right) = \\Theta\\left(n^{2s+1}\\left(np\\right)^{r\\left(2s+1\\right) - s\\left(s + 1\\right)}\\right) \\; \\textrm{ when } \\; n^{-1-\\frac{1}{r-s-1}} \\ll p \\ll n^{-1-\\frac{1}{r-s}}. \\label{2dformula}\n\\end{equation}\nAlso, $\\theta_p\\left(n,r,2\\right) = \\Theta\\left(1\\right)$ when $p \\gg n^{-1-\\frac{1}{r}}$. Furthermore, \n\\[ p_c\\left(n,r,2\\right) \\sim \\lambda n^{-1-\\frac{1}{r}}\\]\nwhere $\\lambda$ is the unique positive real number satisfying $\\exp \\left(-2\\lambda^r\/ r!\\right) = 1\/2$.\n\\end{thm}\n\n\nThe techniques used to obtain the above formula for $\\theta_p\\left(n,r,2\\right)$ allow us to prove the following result about the critical probability in three dimensions, which is the main result of this paper.\n\n\\begin{thm}\\label{3d-perc}\nFix $r\\in \\mathbb{N}$, with $r \\geq 2$, and let $s = \\lfloor \\sqrt{r+ 1\/4} - 1\/2\\rfloor$. 
Then as $n \\to \\infty$,\n\\[ p_c\\left(n,r,3\\right) = \\Theta\\left(n^{-1-\\frac{1}{r-\\gamma}}\\right) \\]\n where $\\gamma = \\frac{r + s\\left(s+1\\right)}{2\\left(s+1\\right)}$.\n\\end{thm}\n\nThe nature of the threshold at the critical probability is also worth investigating. We say that the model exhibits a sharp threshold at $p_c=p_c(n,r,d)$ if for any fixed $\\epsilon > 0$, we have $\\theta_{(1+\\epsilon)p_c}(n,r,d) = 1 - o(1)$ and $\\theta_{(1-\\epsilon)p_c}(n,r,d) = o(1)$. It is not difficult to see from our proofs of Theorems \\ref{2d-unboosted-perc} and \\ref{3d-perc} that in stark contrast to the classical $r$-neighbour bootstrap percolation model on the grid, there is no sharp threshold at $p_c$ when $d=2,3$. We expect similar behaviour in higher dimensions but we do not have a proof of such an assertion.\n\nIt is also an interesting question to determine the size of a minimal percolating set for $r$-neighbour line percolation on $[n]^d$ for any $d, r \\in \\mathbb{N}$ and $n \\ge r$. It is easy to check that the set $[r]^d$ percolates (see Figure \\ref{minsetfig}). 
We shall demonstrate that this is in fact optimal.

\begin{figure}
\begin{center}
\begin{tikzpicture}
\foreach \x in {4,5,6,7,8}
\foreach \y in {4,5,6,7,8}
    \node (l\x\y) at (\x/2-5, \y/2) [inner sep=0.7mm, thick, circle, draw=black!100, fill=black!0] {};

\foreach \x in {1,2,3}
\foreach \y in {4,5,6,7,8}
    \node (l\x\y) at (\x/2-5, \y/2) [inner sep=0.7mm, thick, circle, draw=black!100, fill=black!0] {};

\foreach \y in {1,2,3}
\foreach \x in {4,5,6,7,8}
    \node (l\x\y) at (\x/2-5, \y/2) [inner sep=0.7mm, thick, circle, draw=black!100, fill=black!0] {};

\foreach \x in {1,2,3}
\foreach \y in {1,2,3}
    \node (l\x\y) at (\x/2-5, \y/2) [inner sep=0.7mm, thick, circle, draw=black!100, fill=black!50] {};

\foreach \x in {4,5,6,7,8}
\foreach \y in {4,5,6,7,8}
    \node (m\x\y) at (\x/2, \y/2) [inner sep=0.7mm, thick, circle, draw=black!100, fill=black!0] {};

\foreach \x in {1,2,3}
\foreach \y in {4,5,6,7,8}
    \node (m\x\y) at (\x/2, \y/2) [inner sep=0.7mm, thick, circle, draw=black!100, fill=black!50] {};

\foreach \y in {1,2,3}
\foreach \x in {4,5,6,7,8}
    \node (m\x\y) at (\x/2, \y/2) [inner sep=0.7mm, thick, circle, draw=black!100, fill=black!50] {};

\foreach \x in {1,2,3}
\foreach \y in {1,2,3}
    \node (m\x\y) at (\x/2, \y/2) [inner sep=0.7mm, thick, circle, draw=black!100, fill=black!50] {};

\foreach \x in {1,2,3,4,5,6,7,8}
\foreach \y in {1,2,3,4,5,6,7,8}
    \node (r\x\y) at (\x/2+5, \y/2) [inner sep=0.7mm, thick, circle, draw=black!100, fill=black!50] {};

\foreach \x in {1,2,3,4,5,6,7,8}
\foreach \y[evaluate={\yp=int(\y+1)}] in {1,2,3,4,5,6,7}
    \draw (l\x\y) -- (l\x\yp);

\foreach \x[evaluate={\xp=int(\x+1)}] in {1,2,3,4,5,6,7}
\foreach \y in {1,2,3,4,5,6,7,8}
    \draw (l\x\y) -- (l\xp\y);

\foreach \x in {1,2,3,4,5,6,7,8}
\foreach \y[evaluate={\yp=int(\y+1)}] in {1,2,3,4,5,6,7}
    \draw (m\x\y) -- (m\x\yp);

\foreach \x[evaluate={\xp=int(\x+1)}] in {1,2,3,4,5,6,7}
\foreach \y in {1,2,3,4,5,6,7,8}
    \draw (m\x\y) -- (m\xp\y);

\foreach \x in {1,2,3,4,5,6,7,8}
\foreach \y[evaluate={\yp=int(\y+1)}] in {1,2,3,4,5,6,7}
    \draw (r\x\y) -- (r\x\yp);

\foreach \x[evaluate={\xp=int(\x+1)}] in {1,2,3,4,5,6,7}
\foreach \y in {1,2,3,4,5,6,7,8}
    \draw (r\x\y) -- (r\xp\y);

\node at (5/2 - 5,-0.1) {$A^{(0)}$};
\node at (5/2,-0.1) {$A^{(1)}$};
\node at (5/2 + 5,-0.1) {$A^{(2)}$};
\end{tikzpicture}
\end{center}
\caption{The spread of infection from $A=[3]^2$ in the $3$-neighbour line percolation process on $[8]^2$.}
\label{minsetfig}
\end{figure}

\begin{thm}\label{min-set}
Let $d,r,n \in \mathbb{N}$, with $n \ge r$. Then the minimum size of a percolating set in the $r$-neighbour line percolation process on $[n]^d$ is $r^d$.
\end{thm}

Establishing this fact is much harder than it appears at first glance. The result is trivial when $d = 1$. When $d = 2$, it is not hard to demonstrate that any percolating set has size at least~$r^2$. Consider a generalised two-dimensional line percolation model on $[n]^2$ where the infection thresholds for horizontal and vertical lines are $r_h$ and $r_v$ respectively; indeed, we recover the $r$-neighbour line percolation model when $r_h = r_v = r$. Let $M(r_h, r_v)$ denote the size of a minimal percolating set in this generalised model. Consider the first line $\mathcal{L}$ to be infected: if $\mathcal{L}$ is horizontal, then $\mathcal{L}$ must contain $r_h$ initially infected points and furthermore, if the set of initially infected points is a percolating set, then the set of initially infected points not on $\mathcal{L}$ must constitute a percolating set for the generalised process with infection parameters $r_h$ and $r_v - 1$. An analogous statement holds if $\mathcal{L}$ is vertical.
It follows that
\[M(r_h, r_v) \geq \min \left( r_v + M(r_h - 1, r_v), r_h + M(r_h, r_v-1)\right).\]
We obtain by induction that $M(r_h, r_v) \geq r_h r_v$, which implies in particular that $M(r, r) \geq r^2$. The argument described above depends crucially on the fact that a line has codimension one in a two-dimensional space. The incidence geometry of a collection of lines in the plane is essentially straightforward; this is no longer the case in higher dimensions, and we need more delicate arguments to prove Theorem \ref{min-set}.

This paper is organised as follows. We collect together some useful facts about binomial random variables in Section \ref{binvar}. We consider line percolation in two dimensions in Section~\ref{2d}, and prove Theorem \ref{2d-unboosted-perc}. In Section \ref{3d}, we turn to line percolation in three dimensions and prove Theorem \ref{3d-perc}, thus obtaining an estimate for the critical probability which is tight up to multiplicative constants. In Section \ref{minset}, we determine the size of minimal percolating sets for $r$-neighbour line percolation on $[n]^d$ and prove Theorem \ref{min-set}. We conclude the paper in Section~\ref{rem} with some discussion.

A word on asymptotic notation: in this paper, we shall think of the infection parameter $r$ as being fixed and study the behaviour of the percolation probability $\theta_p$ and the critical probability $p_c$ as $n\rightarrow \infty$. Given functions $f(n), g(n)$, we write $f=O(g)$, or equivalently, $f \ll g$, if $f(n) \leq Cg(n)$ for some absolute constant $C$ and all sufficiently large $n$. Similarly, we write $f=\Omega(g)$, or equivalently, $f \gg g$, if $g=O(f)$. If $f/g\rightarrow 0$ as $n\rightarrow \infty$, we say that $f=o(g)$; also, we write $f= \omega(g)$ if $g=o(f)$. If $f \ll g$ and $g \ll f$, we say that $f=\Theta(g)$. Finally, we write $f \sim g$ if $f=(1+o(1))g$.
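Returning to the recursion for $M(r_h, r_v)$ above: evaluating it directly confirms that it bottoms out at exactly $r_h r_v$, in line with the inductive bound. A Python sketch (the helper name `lower_bound` is ours, as is the convention that $M$ vanishes once either threshold reaches zero, since a zero threshold lets lines infect for free):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def lower_bound(r_h, r_v):
    """Evaluate the recursive lower bound
    M(r_h, r_v) >= min(r_v + M(r_h - 1, r_v), r_h + M(r_h, r_v - 1)),
    with the convention that M is 0 once either threshold is 0."""
    if r_h == 0 or r_v == 0:
        return 0
    return min(r_v + lower_bound(r_h - 1, r_v),
               r_h + lower_bound(r_h, r_v - 1))

# The recursion yields exactly r_h * r_v, as the induction shows:
assert all(lower_bound(a, b) == a * b
           for a in range(8) for b in range(8))
```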
In what follows, the various constants suppressed by the asymptotic notation are allowed to depend on the fixed infection parameter $r$, but of course, not on $n$ or $p$.

\section{Binomial random variables}\label{binvar}
We shall need some standard facts about binomial random variables. We collect these here for the sake of convenience. As is usual, for a random variable with distribution $\text{\textnormal{Bin}}(N,p)$, we write $\mu \left(= Np\right)$ for its mean.

\begin{claim}
\label{binsmall}
Let $X$ be a random variable with distribution $\text{\textnormal{Bin}} (N,p)$ where $p\leq 1/2$. Then for any $k\geq 1$,
\[ \exp \left(-2 \mu \right) \left(\mu/k\right)^k \leq \mathbb{P} \left(X = k\right) \leq \exp \left(- \mu \right) \left(2e\mu/k\right)^k.\]
Also, $ \exp \left(-2 \mu \right) \leq \mathbb{P} \left(X = 0\right) \leq \exp \left(- \mu \right)$. \qed

\end{claim}

We shall make use of the following standard concentration result, which first appeared in a paper of Bernstein and was later rediscovered by Chernoff and Hoeffding; see \citep{probtextbook} for example.

\begin{claim}
\label{chernoff}
Let $X$ be a random variable with distribution $\text{\textnormal{Bin}}(N,p)$. Then for any $0<\delta<1$,
$$ \mathbb{P}\left( |X - \mu| > \delta \mu \right) \leq \exp \left(\frac{-\delta^2 \mu}{3} \right). \eqno\qed $$
\end{claim}

Finally, we shall make use of the following easy claim.

\begin{claim}
\label{smallmu}
Let $X$ be a random variable with distribution $\text{\textnormal{Bin}}(N,p)$ and suppose $\mu \ll 1$ as $N \to \infty$. Then for any $k\geq 0$,
$$\mathbb{P}\left( X \geq k\right) = \Theta \left(\mathbb{P}\left( X = k\right)\right).
\\eqno\\qed$$\n\\end{claim}\n\n\\section{Line percolation in two dimensions}\\label{2d}\nThe proof of the following proposition is essentially identical to the proof of Theorem 2.1 in \\citep{hamming}; we reproduce it here for completeness. \n\n \n\\begin{prop}\\label{2d-magnitude}\nFix $r\\in \\mathbb{N}$, with $r \\geq 2$, and let $\\alpha > 0$ be a positive constant. If $p = \\alpha n^{-1-\\frac{1}{r}}$, then\n\\[ \\theta_p(n,r,2) \\sim 1 - \\exp{(-2\\alpha^r\/r!)}.\\]\n\\end{prop}\n\\begin{proof}\nThe probability that a given line has $r+1$ or more initially infected points on it is bounded above by $\\binom{n}{r+1}p^{r+1} $ which implies that the probability that any line has $r+1$ or more initially infected points on it is bounded above by $2n \\binom{n}{r+1}p^{r+1} =O\\left(n^{r+2} p^{r+1}\\right) =O\\left(n^{-1\/r}\\right)$. Consequently, asymptotically almost surely, no line has $r+1$ or more initially infected points on it. \n\nLet $E_h$ denote the event that some horizontal line contains $r$ initially infected points and define $E_v$ analogously. Clearly, the process terminates on the first step if neither $E_h$ nor $E_v$ hold; so $\\theta_p \\leq \\mathbb{P} (E_h \\cup E_v)$. Given a line $\\mathcal{L}$, the probability that a particular line perpendicular to $\\mathcal{L}$ has $r-1$ initially infected points (none of which are on $\\mathcal{L}$) is $\\Theta\\left( (np)^{r-1} \\right) = \\Theta\\left(n^{-1+1\/r}\\right)$. Thus, the number of such lines is a binomial random variable with mean $\\mu = \\Omega\\left(n^{1\/r}\\right)$. Since $\\mu \\rightarrow \\infty$ as $n \\rightarrow \\infty$, by Claim \\ref{chernoff}, the probability that there exist at least $r$ such lines is $1-o(1)$. It follows that $\\theta_p \\sim \\mathbb{P} (E_h \\cup E_v)$.\n\nThe number of horizontal lines with $r$ initially infected points is binomially distributed and it is easily seen to converge in distribution to a Poisson random variable with mean $(\\alpha^r \/ r!)$. 
Thus $\\mathbb{P}(E_h) \\sim 1 - \\exp(-\\alpha^r\/r!)$; similarly, $\\mathbb{P}(E_v) \\sim 1 - \\exp(-\\alpha^r\/r!)$.\n\nWe now estimate $\\mathbb{P} (E_h \\cap E_v)$. Let $E_h \\circ E_v$ denote the event that $E_h$ and $E_v$ occur disjointly. Now, $E_h$ and $E_v$ are increasing events, and so it follows from the FKG and BK inequalities that $\\mathbb{P} (E_h \\cap E_v) \\geq \\mathbb{P}(E_h)\\mathbb{P}(E_v) \\geq \\mathbb{P} (E_h \\circ E_v)$. Observe that $(E_h \\cap E_v) \\backslash (E_h \\circ E_v)$ happens only if some lattice point $v$ is initially infected and each of the two axis parallel lines through $v$ contain $r-1$ initially infected points. It follows that\n\\[ \\mathbb{P} \\left((E_h \\cap E_v) \\backslash (E_h \\circ E_v)\\right) = O\\left(n^2 p (np)^{2r-2} \\right) = O\\left(n^{-1+1\/r}\\right) \\]\nand so $\\mathbb{P} ((E_h \\cap E_v) \\backslash (E_h \\circ E_v)) = o(1)$. Consequently, we see that $\\mathbb{P}(E_h \\cap E_v) \\sim \\mathbb{P}(E_h)\\mathbb{P}(E_v)$. Hence, we have $\\mathbb{P}(E_h \\cup E_v) \\sim \\mathbb{P}(E_h) + \\mathbb{P}(E_v) - \\mathbb{P}(E_h)\\mathbb{P}(E_v)$ and the result follows.\n\\end{proof}\n\nWe shall now prove Theorem \\ref{2d-unboosted-perc}.\n\\begin{proof}[Proof of Theorem \\ref{2d-unboosted-perc}]\nIt follows from Proposition \\ref{2d-magnitude} that $p_c\\left(n,r,2\\right) \\sim \\lambda n^{-1-\\frac{1}{r}}$ where $\\lambda$ is the unique positive real number satisfying $\\exp \\left(-2\\lambda^r\/ r!\\right) = 1\/2$. \n\nWe now turn to estimating $\\theta_p(n,r,2)$. 
To do so, we work with a modified two-dimensional line percolation process $A_p = G^{\left(0\right)} \subset G^{\left(1\right)} \subset \ldots$ where
\[G^{\left(2m+1\right)} = G^{\left(2m\right)} \cup \left\{ v \in [n]^2 : |\mathcal{L} \cap G^{\left(2m\right)}| \geq r \mbox{ for some horizontal line }\mathcal{L} \in L\left(v\right)\right\}\]
and
\[G^{\left(2m+2\right)} = G^{\left(2m+1\right)} \cup \left\{ v \in [n]^2 : |\mathcal{L} \cap G^{\left(2m+1\right)}| \geq r \mbox{ for some vertical line }\mathcal{L} \in L\left(v\right)\right\}.\]
In other words, in going from $G^{\left(2m\right)}$ to $G^{\left(2m+1\right)}$, only horizontal lines are infected, and in going from $G^{\left(2m+1\right)}$ to $G^{\left(2m+2\right)}$, only vertical lines are infected, with the infection of lines happening as in the original line percolation process. Since $G^{\left(m\right)} \subset A^{\left(m\right)}$ and $A^{\left(m\right)} \subset G^{\left(2m\right)}$, percolation occurs in the original process if and only if it occurs in the modified process.

Note that $A_p$ percolates if and only if some $G^{\left(m\right)}$ contains $r$ or more parallel fully infected lines; indeed, in this case $G^{\left(m+1\right)} = [n]^2$. We stop the process as soon as it produces $r$ or more parallel fully infected lines (or reaches termination). Note that if percolation occurs, then it does so in at most $2r+1$ steps in the original process, and consequently, in at most $4r+2$ steps in the modified process.

Let $h_i$ and $v_i$ be the number of horizontal and vertical lines infected in going from $G^{\left(2i\right)}$ to $G^{\left(2i+1\right)}$ and from $G^{\left(2i+1\right)}$ to $G^{\left(2i+2\right)}$ respectively.
The pair $\\left(\\mathbf{h}=\\langle h_i \\rangle, \\mathbf{v}=\\langle v_i \\rangle\\right)$ is called the \\emph{line-count} of the percolation process.\n\nGiven two sequences $\\mathbf{h}=\\langle h_i \\rangle_{i=0}^k$ and $\\mathbf{v}=\\langle v_i \\rangle_{i=0}^k$, we say that $\\left(\\mathbf{h},\\mathbf{v}\\right)$ is a \\emph{vertical line-count} if $\\left(\\mathbf{h},\\mathbf{v}\\right)$ is the line-count of a process which generates $r$ fully infected vertical lines before it generates $r$ fully infected horizontal lines, i.e., if\n\\begin{enumerate}\n\\item $\\sum_{i < k}v_i < r$,\n\\item $\\sum_{i \\leq k}h_i < r$, and \n\\item $\\sum_{i \\leq k}v_i \\geq r$. \n\\end{enumerate}\n\nThe definition of a \\emph{horizontal line-count} $(\\mathbf{h}=\\langle h_i \\rangle_{i=0}^{k+1}, \\mathbf{v}=\\langle v_i \\rangle_{i=0}^k)$ is analogous.\n\nGiven a vertical line-count $\\left(\\mathbf{h}=\\langle h_i \\rangle_{i=0}^k, \\mathbf{v}=\\langle v_i \\rangle_{i=0}^k\\right)$, let us define its \\emph{(vertical) preface} to be the pair $\\left(\\mathbf{h},\\mathbf{v'}\\right)$ where $\\mathbf{v'}=\\langle v_i \\rangle_{i=0}^{k-1}$. Similarly, the \\emph{(horizontal) preface} of a horizontal line-count $(\\mathbf{h}=\\langle h_i \\rangle_{i=0}^{k+1}, \\mathbf{v}=\\langle v_i \\rangle_{i=0}^k)$ is the pair $\\left(\\mathbf{h'},\\mathbf{v}\\right)$ where $\\mathbf{h'}=\\langle h_i \\rangle_{i=0}^{k}$.\n\nGiven a vertical preface $\\left(\\mathbf{h},\\mathbf{v'}\\right)$, let $E_{\\left(\\mathbf{h},\\mathbf{v'}\\right)}$ be the event that the process generates $r$ fully infected vertical lines before it generates $r$ fully infected horizontal lines and furthermore, the (vertical) line-count of the process has preface $\\left(\\mathbf{h},\\mathbf{v'}\\right)$. For a horizontal preface $\\left(\\mathbf{h'},\\mathbf{v}\\right)$, define $E_{\\left(\\mathbf{h'},\\mathbf{v}\\right)}$ analogously. 
We then note that
\[\theta_p\left(n,r,2\right) = \sum_{\left(\mathbf{h},\mathbf{v'}\right)} \mathbb{P}\left(E_{\left(\mathbf{h},\mathbf{v'}\right)}\right) + \sum_{\left(\mathbf{h'},\mathbf{v}\right)} \mathbb{P}\left(E_{\left(\mathbf{h'},\mathbf{v}\right)}\right) \]
where the two sums are over all valid vertical and horizontal prefaces respectively. To specify a valid preface, we need to specify at most $2r$ distinct positive integers, each of which is at most~$r$. So the number of valid prefaces is at most $r^{2r}$; consequently, to estimate the probability of percolation up to constant factors, it suffices to estimate the largest of the probabilities $\mathbb{P}\left(E_{\left(\mathbf{h},\mathbf{v'}\right)}\right)$, $\mathbb{P}\left(E_{\left(\mathbf{h'},\mathbf{v}\right)}\right)$.

Given $s \in \left\{ 0,1,\dots,r-1\right\}$, we see that $n\left(np\right)^{r-s} \ll 1$ and $n\left(np\right)^{r-s-1} \gg 1$ when $n^{-1-\frac{1}{r-s-1}} \ll p \ll n^{-1-\frac{1}{r-s}}$.

Let us say that a vertical preface $(\mathbf{h}=\langle h_i \rangle_{i=0}^k,\mathbf{v'}=\langle v_i \rangle_{i=0}^{k-1})$ is \emph{slow} if $\sum_{i < k}v_i \leq s$ and $\sum_{i < k}h_i \leq s$. Similarly, let us say that a horizontal preface $\left(\mathbf{h'}=\langle h_i \rangle_{i=0}^k,\mathbf{v}=\langle v_i \rangle_{i=0}^{k}\right)$ is \emph{slow} if $\sum_{i < k}v_i \leq s$ and $\sum_{i \leq k}h_i \leq s$.

The notion of a slow preface is motivated by the following observation. Suppose that at some stage in the process, we have $l$ parallel fully infected lines where $l$ is such that $n\left(np\right)^{r - l} \gg 1$. Then it follows from Claim \ref{binsmall} that with probability $\Omega\left(1\right)$, there exist $r$ lines perpendicular to these $l$ lines, each containing $r - l$ initially infected points none of which lie on lines infected earlier (of which there are at most $2r$).
These $r$ perpendicular lines become infected in the next step; consequently, we have percolation with probability $\Theta\left(1\right)$. It follows that given any (horizontal or vertical) preface $\left(\mathbf{x},\mathbf{y}\right)$, there exists a slow (horizontal or vertical) preface $\left(\mathbf{x'},\mathbf{y'}\right)$ such that $\mathbb{P}\left(E_{\left(\mathbf{x},\mathbf{y}\right)}\right) = O\left( \mathbb{P}\left(E_{\left(\mathbf{x'},\mathbf{y'}\right)}\right)\right)$. Thus, to estimate $\theta_p$, it suffices to restrict our attention to slow prefaces.

\begin{figure}
\begin{center}
\begin{tikzpicture}[xscale=1.2,yscale=1.2]
\draw (0,0)--(5,0)--(5,5)--(0,5)--(0,0);
\draw [very thick](0,0.55)--(5,0.55);
\draw [very thick](0,1)--(5,1);
\draw [very thick](0,1.62)--(5,1.62);
\draw [very thick](0,2.3)--(5,2.3);
\draw [very thick](0.65,0)--(0.65,5);
\draw [very thick](1.3,0)--(1.3,5);

\draw (2.2,0) -- (2.2,0.55);
\draw (2.2,1) -- (2.2,1.62);
\draw (2.2,2.3) -- (2.2,5);
\draw [dotted, thick] (2.2,0.55)--(2.2,1);
\draw [dotted, thick] (2.2,1.62)--(2.2,2.3);
\node (p1) at (2.2,0.55) [inner sep=0.7mm, thick, circle, draw=black!100, fill=black!50] {};
\node (p2) at (2.2,1) [inner sep=0.7mm, thick, circle, draw=black!100, fill=black!50] {};
\node (p3) at (2.2,1.62) [inner sep=0.7mm, thick, circle, draw=black!100, fill=black!50] {};
\node (p4) at (2.2,2.3) [inner sep=0.7mm, thick, circle, draw=black!100, fill=black!50] {};

\draw (2.75,0) -- (2.75,0.55);
\draw (2.75,1) -- (2.75,1.62);
\draw (2.75,2.3) -- (2.75,5);
\draw [dotted, thick] (2.75,0.55)--(2.75,1);
\draw [dotted, thick] (2.75,1.62)--(2.75,2.3);
\node (q1) at (2.75,0.55) [inner sep=0.7mm, thick, circle, draw=black!100, fill=black!50] {};
\node (q2) at (2.75,1) [inner sep=0.7mm, thick, circle, draw=black!100, fill=black!50] {};
\node (q3) at (2.75,1.62) [inner sep=0.7mm, thick, circle, draw=black!100, fill=black!50] {};
\node (q4) at (2.75,2.3) [inner sep=0.7mm, thick, circle, draw=black!100, fill=black!50] {};

\draw (4.0,0) -- (4.0,0.55);
\draw (4.0,1) -- (4.0,1.62);
\draw (4.0,2.3) -- (4.0,5);
\draw [dotted, thick] (4.0,0.55)--(4.0,1);
\draw [dotted, thick] (4.0,1.62)--(4.0,2.3);
\node (s1) at (4.0,0.55) [inner sep=0.7mm, thick, circle, draw=black!100, fill=black!50] {};
\node (s2) at (4.0,1) [inner sep=0.7mm, thick, circle, draw=black!100, fill=black!50] {};
\node (s3) at (4.0,1.62) [inner sep=0.7mm, thick, circle, draw=black!100, fill=black!50] {};
\node (s4) at (4.0,2.3) [inner sep=0.7mm, thick, circle, draw=black!100, fill=black!50] {};

\draw [dotted, thick] (2.82,4.2)--(3.95,4.2);

\draw [decorate,decoration={brace,amplitude=2pt,mirror}, semithick, xshift=2pt]
(5,0.55) -- (5,1.0) node [black,midway,xshift=10pt]
{\footnotesize $h_0$};
\draw [decorate,decoration={brace,amplitude=2pt,mirror}, semithick, xshift=2pt]
(5,1.62) -- (5,2.3) node [black,midway,xshift=10pt]
{\footnotesize $h_1$};
\draw [decorate,decoration={brace,amplitude=2pt}, semithick, yshift=2pt]
(0.65,5) -- (1.3,5) node [black,midway,yshift=10pt]
{\footnotesize $v_0$};
\draw [decorate,decoration={brace,amplitude=2pt}, semithick, yshift=2pt]
(2.2,5) -- (4.0,5) node [black,midway,yshift=10pt]
{\footnotesize $v_1$};
\end{tikzpicture}
\end{center}
\caption{We need $(r - h_0 - h_1)$ initially infected points on some $v_1$ vertical lines to generate as many new fully infected vertical lines in the next step.}
\label{steppic}
\end{figure}

If at some stage in the process, we have $l$ parallel fully infected lines where $l$ is such that $n\left(np\right)^{r - l} \ll 1$, then the probability that the process generates exactly $l'$ new fully infected lines perpendicular to these $l$ lines in the next step (see Figure \ref{steppic}) is easily seen to be
\[ \Theta \left(
\binom{n}{l'}\left(\left(np\right)^{r-l}\right)^{l'}\left(1-\left(np\right)^{r-l}\right)^{n-l'}\right) = \Theta\left(\left(n\left(np\right)^{r-l}\right)^{l'}\right).\]

Given a slow, vertical preface $\left(\mathbf{h},\mathbf{v'}\right)$, let us write $h = \sum_{i \leq k} h_i$. We consider two cases.

\subsection{Case 1: $h \leq s$}
If $h\leq s$, it follows from Claim \ref{smallmu} that
\[ \mathbb{P}\left(v_k \geq r - \sum_{i < k}v_i\right) = \Theta\left(\mathbb{P}\left(v_k = r - \sum_{i < k}v_i\right)\right).\]

\subsection{Case 2: $h > s$}
If $h > s$ on the other hand, we have $n\left(np\right)^{r - h} \gg 1$ and so the estimate for $\mathbb{P}\left(E_{\left(\mathbf{h},\mathbf{v'}\right)}\right)$ becomes
\[ (n(np)^r)^{h_0} \times (n(np)^{r-h_0})^{v_0} \times (n(np)^{r-v_0})^{h_1} \times \dots \times (n(np)^{r-\sum_{i < k}v_i})^{h_k} \times 1 \]
which in turn, on simplification, is seen to be $\Theta(n^{r+h}\left(np\right)^{r^2} (n\left(np\right)^{r-h})^{\sum_{i < k }v_i-r})$. Since $n\left(np\right)^{r-h} = \omega(1)$, the probability of $E_{\left(\mathbf{h},\mathbf{v'}\right)}$ is maximised (disregarding constant factors) when $\sum_{i < k }v_i$ is maximal, subject to the condition that $\sum_{i < k}v_i \leq s$. Thus, we may assume that $\sum_{i < k }v_i = s$ and it follows that
\[\mathbb{P}\left(E_{\left(\mathbf{h},\mathbf{v'}\right)}\right) = \Theta\left(n^{r+h}\left(np\right)^{r^2} \left(n\left(np\right)^{r-h}\right)^{s-r}\right)\]
which, on algebraic simplification, gives
\[\mathbb{P}\left(E_{\left(\mathbf{h},\mathbf{v'}\right)}\right) = \Theta\left(\left(n\left(np\right)^{r}\right)^s\left(n\left(np\right)^{r-s}\right)^h\right).\]
Since $n\left(np\right)^{r-s} \ll 1$, we may assume that $h=s+1$ and we conclude in this case that
\[\mathbb{P}\left(E_{\left(\mathbf{h},\mathbf{v'}\right)}\right) = \Theta\left(n^{2s+1}\left(np\right)^{r\left(2s+1\right) - s\left(s + 1\right)}\right).
\]

We claim that the main contributions to $\theta_p$ come from Case 2. Note that
\[n^{2s+1}\left(np\right)^{r\left(2s+1\right) - s\left(s + 1\right)} = n^{r+s}\left(np\right)^{r^2}\left(n\left(np\right)^{r-s}\right)^{s+1 - r} \gg n^{r+s}\left(np\right)^{r^2}\]
because $\left(n\left(np\right)^{r-s}\right)^{s+1 - r} \gg 1$; this is true since $n\left(np\right)^{r-s} \ll 1$ and $s+1-r \leq 0$.

Thus, we conclude that
\[ \theta_p\left(n,r,2\right) = \Theta\left(n^{2s+1}\left(np\right)^{r\left(2s+1\right) - s\left(s + 1\right)}\right)\; \textrm{ when } n^{-1-\frac{1}{r-s-1}} \ll p \ll n^{-1-\frac{1}{r-s}}\]
as required.

When $p \gg n^{-1-\frac{1}{r}}$, the probability that there exist $r$ horizontal lines, each containing $r$ initially infected points, is easily seen to be $\Omega(1)$. So $\theta_p\left(n,r,2\right) = \Theta\left(1\right)$ when $p \gg n^{-1-\frac{1}{r}}$. The result follows.
\end{proof}

\section{The critical probability in three dimensions}\label{3d}

We now turn our attention to the line percolation process in three dimensions and prove Theorem~\ref{3d-perc}.

\begin{proof}[Proof of Theorem~\ref{3d-perc}]

We prove the upper and lower bounds separately. Let us start with the upper bound.

\subsection{Proof of the upper bound}
Unsurprisingly, it is easier to show that percolation occurs than to demonstrate otherwise. We start by bounding $p_c$ from above. Let $p = Cn^{-1-\frac{1}{r-\gamma}}$ for some $C\gg1$. Note that $s$, by definition, is the greatest natural number such that $s\left(s+1\right) \leq r$.
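Both this characterisation of $s$ and the placement of $\gamma$ between $s$ and $s+1$ can be checked mechanically for small $r$ (a Python sketch; the helper name `s_of` is ours, purely for illustration):

```python
import math

def s_of(r):
    # s is the greatest natural number with s(s+1) <= r;
    # equivalently, s = floor(sqrt(r + 1/4) - 1/2).
    return math.floor(math.sqrt(r + 0.25) - 0.5)

for r in range(2, 1000):
    s = s_of(r)
    # the defining property of s:
    assert s * (s + 1) <= r < (s + 1) * (s + 2)
    # gamma = (r + s(s+1)) / (2(s+1)) lies in [s, s+1)
    gamma = (r + s * (s + 1)) / (2 * (s + 1))
    assert s <= gamma < s + 1
```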
Since $s\\left(s+1\\right) \\leq r$ and $\\left(s+1\\right)\\left(s+2\\right) > r$, it is not hard to check that $\\gamma = \\frac{r + s\\left(s+1\\right)}{2\\left(s+1\\right)}$ satisfies\n\\begin{equation}\nn^{-1-\\frac{1}{r-s-1}} \\ll n^{-1-\\frac{1}{r-\\gamma}}\\ll n^{-1-\\frac{1}{r-s}},\\label{range}\n\\end{equation}\nand so it follows from \\eqref{2dformula} that\n \\[\\theta_p\\left(n,r,2\\right) = \\Theta\\left(n^{2s+1}\\left(np\\right)^{r\\left(2s+1\\right)-s\\left(s+1\\right)}\\right) = \\Theta\\left(C^{r\\left(2s+1\\right)-s\\left(s+1\\right)}n^{-1}\\right).\\]\n \nWe say that a plane $\\mathcal{P}$ is internally spanned if $A_0\\cap \\mathcal{P}$ percolates in the line percolation process restricted to $\\mathcal{P}$. Choose any direction and consider the $n$ (parallel) planes perpendicular to this direction. The number of such planes which are internally spanned is a binomial random variable with mean $\\mu = \\Omega\\left(C^{r\\left(2s+1\\right)-s\\left(s+1\\right)}\\right)$. Since $\\mu \\rightarrow \\infty$ as $C \\rightarrow \\infty$, we see from Claim \\ref{chernoff} that there exist $r$ parallel internally spanned planes with probability at least $1\/2$, provided $C$ is a sufficiently large constant. So we have that $p_c\\left(n,r,3\\right) = O\\left(n^{-1-\\frac{1}{r-\\gamma}}\\right)$.\n\n\\subsection{Proof of the lower bound}\nNext, suppose that $p = cn^{-1-\\frac{1}{r-\\gamma}}$ for some $c \\ll 1$. We claim that the probability of percolation is at most $1\/2$, provided $c$ is a sufficiently small constant. We shall demonstrate this by proving something much stronger. \n\nWe shall track the number of planes with $k$ parallel fully infected lines as the infection spreads for every $1 \\leq k \\leq s+1$ and show that these numbers are not too large when the process terminates with probability at least $1\/2$.\n\nWe shall work with a modified three-dimensional line percolation process in which the infection spreads one line at a time. 
Let $\\mathcal{L}_1,\\mathcal{L}_2,\\dots,\\mathcal{L}_{3n^2}$ be an ordering of the $3n^2$ lines of the three-dimensional grid. In this modified process, we have a sequence of subsets $A_p = H^{\\left(0\\right)} \\subset H^{\\left(1\\right)} \\subset \\dots \\subset H^{\\left(m\\right)} \\subset \\dots$ of $[n]^3$ such that\n\\[\n H^{\\left(m+1\\right)} =\n \\begin{cases} \n H^{\\left(m\\right)} \\cup \\mathcal{L}_k \\hfill & \\text{ if } |\\mathcal{L}_k \\cap H^{\\left(m\\right)}| \\geq r, \\text{ where } k = m+1\\imod{3n^2},\\\\\n H^{\\left(m\\right)} \\hfill & \\text{ otherwise.} \\\\\n \\end{cases}\n\\]\nClearly, $H^{(m)} \\subset A^{(m)} \\subset H^{(3n^2m)}$ and so $A_p$ percolates in the original process if and only if it percolates in this modified process.\n\nWe run the modified three-dimensional process starting from $A_p$ and if we find at stage $m$ that\n\\begin{description}\n\\item[A] the number of planes containing $k$ parallel fully infected lines will exceed $ n^{1-\\frac{k\\gamma}{r-\\gamma}} $ for some $1 \\leq k \\leq s+1$ at stage $m+1$, or\n\\item[B] the process will terminate at stage $m+1,$\n\\end{description}\nthen we \\emph{stop the modified process at stage $m$}. Let $E_A$ be the event that we stop the modified process on account of Condition A.\n\n\\begin{lem}\\label{planecount} In the modified process, we have\n\\[ \\mathbb{P}\\left(E_A\\right) = O\\left(\\sum_{1\\leq k\\leq s} c^{rk} + c^{r\\left(2s+1\\right)-s\\left(s+1\\right)}\\right).\\] \n\\end{lem}\n\\begin{proof}\nLet us write $N_k$ for the number of planes containing $k$ parallel fully infected lines when we stop the modified three-dimensional process. Since we are infecting lines one at a time, when we stop the process, we see that $N_k \\leq n^{1-\\frac{k\\gamma}{r-\\gamma}}$ for $1 \\leq k \\leq s$. 
Observe that $\\left(s+1\\right)\\gamma \/ \\left(r-\\gamma\\right) > 1$ since $\\left(s+1\\right)\\left(s+2\\right) > r$ and so $N_k = 0$ for $k \\geq s+1$ since $n^{1-\\frac{(s+1)\\gamma}{r-\\gamma}} < 1$. It follows that $N_0 = n - o(n)$.\n\nWe shall prove Lemma \\ref{planecount} by estimating the probability that a given plane contains $k$ parallel fully infected lines when we stop the process. Let us fix a plane $\\mathcal{P}$. Suppose that a point $v$ of $\\mathcal{P}$ gets infected before we stop the process and suppose further that $v$ is not initially infected. Then $v$ is either\n\\begin{enumerate}\n\\item infected when a line perpendicular to $\\mathcal{P}$ containing $v$ has $r$ other previously infected points on it (we call such points \\emph{boosted points}), or\n\\item infected when a line in $\\mathcal{P}$ containing $v$ has $r$ other previously infected points on it.\n\\end{enumerate}\n\n\\begin{figure}\n\\begin{center}\n\\begin{tikzpicture}[xscale=1.6,yscale=1]\n\\draw (0,0)--(5,0)--(7,2.5)--(2,2.5)--(0,0);\n\\draw (1.3,4.5)--(6.3,4.5)--(6.3,-0.5)--(1.3,-0.5)--(1.3,4.5);\n\\draw [dotted, thick] (1.3,1.625)--(6.3,1.625);\n\\draw [very thick](2.5,-0.5)--(2.5,4.5);\n\\draw [very thick](3.4,-0.5)--(3.4,4.5);\n\\draw [very thick](5.4,-0.5)--(5.4,4.5);\n\\node (plane) at (0.6, 1.3) {$\\mathcal{P}$};\n\\node (line) at (4.35, 1.95) {$\\mathcal{L}$};\n\\node (b1) at (2.5, 1.625) [inner sep=0.9mm, thick, rectangle, draw=black!100, fill=black!50] {};\n\\node (b2) at (3.4, 1.625) [inner sep=0.9mm, thick, rectangle, draw=black!100, fill=black!50] {};\n\\node (b3) at (5.4, 1.625) [inner sep=0.9mm, thick, rectangle, draw=black!100, fill=black!50] {};\n\\node (b1-1) at (3.4, 3.825) [inner sep=0.7mm, thick, circle, draw=black!100, fill=black!50] {};\n\\node (b1-2) at (3.4, 3.475) [inner sep=0.7mm, thick, circle, draw=black!100, fill=black!50] {};\n\\node (b1-3) at (3.4, 2.875) [inner sep=0.7mm, thick, circle, draw=black!100, fill=black!50] {};\n\\node (b1-4) at 
(3.4, 2.25) [inner sep=0.7mm, thick, circle, draw=black!100, fill=black!50] {};\n\\node (b2-1) at (2.5, 3.9) [inner sep=0.7mm, thick, circle, draw=black!100, fill=black!50] {};\n\\node (b2-2) at (2.5, 3.5) [inner sep=0.7mm, thick, circle, draw=black!100, fill=black!50] {};\n\\node (b2-3) at (2.5, 3.1) [inner sep=0.7mm, thick, circle, draw=black!100, fill=black!50] {};\n\\node (b2-4) at (2.5, 0.3) [inner sep=0.7mm, thick, circle, draw=black!100, fill=black!50] {};\n\\node (b3-1) at (5.4, 2.7) [inner sep=0.7mm, thick, circle, draw=black!100, fill=black!50] {};\n\\node (b3-2) at (5.4, 2.3) [inner sep=0.7mm, thick, circle, draw=black!100, fill=black!50] {};\n\\node (b3-3) at (5.4, 0.9) [inner sep=0.7mm, thick, circle, draw=black!100, fill=black!50] {};\n\\node (b3-1) at (5.4, -0.1) [inner sep=0.7mm, thick, circle, draw=black!100, fill=black!50] {};\n\\end{tikzpicture}\n\\end{center}\n\\caption{Boosted points on $\\mathcal{L}$ in $\\mathcal{P}$.}\n\\label{boostpic}\n\\end{figure}\n\n\nLet $A_{\\mathcal{P}}$ denote the union of the boosted points and the initially infected points of $\\mathcal{P}$. Observe that if we run the two-dimensional $r$-neighbour line percolation process on $\\mathcal{P}$ starting from $A_{\\mathcal{P}}$, we infect all the points of $\\mathcal{P}$ that were infected in the modified three-dimensional process before it was stopped. Thus, the probability that $\\mathcal{P}$ contains $k$ parallel fully infected lines when we stop the modified three-dimensional process is bounded above by the probability that we generate $k$ parallel fully infected lines in the two-dimensional $r$-neighbour line percolation process on $\\mathcal{P}$ starting from $A_{\\mathcal{P}}$.\n\nFix any arrangement of the boosted points in $\\mathcal{P}$. 
Note that if we have $k$ boosted points on a line $\\mathcal{L}$ in $\\mathcal{P}$, then the plane perpendicular to $\\mathcal{P}$ which intersects $\\mathcal{P}$ in $\\mathcal{L}$ generated $k$ parallel fully infected lines in the modified three-dimensional process before it was stopped (see Figure \\ref{boostpic}); consequently, the number of such lines $\\mathcal{L}$ in $\\mathcal{P}$ is at most $N_k$.\n\nFor $1 \\leq k \\leq s+1$, let $E_k$ denote the event that the two-dimensional $r$-neighbour line percolation process on $\\mathcal{P}$ starting from $A_{\\mathcal{P}}$ generates $k$ parallel fully infected lines. \n\n\\begin{lem}\\label{PE_k}\nConditional on any arrangement in $\\mathcal{P}$ of the boosted points, we have \n\\[\\mathbb{P}\\left(E_k\\right) = O\\left( c^{rk} n^{-k\\gamma \/ \\left(r-\\gamma\\right)}\\right) \\] \nfor $1 \\leq k \\leq s$ and \n\\[\\mathbb{P}\\left(E_{s+1}\\right) = O\\left( c^{2\\left(s+1\\right)\\left(r-\\gamma\\right)} n^{-1}\\right).\\]\n\\end{lem}\n\n\\begin{proof}\nAs in the proof of Theorem~\\ref{2d-unboosted-perc}, we consider the modified two-dimensional percolation process $A_{\\mathcal{P}} = G^{\\left(0\\right)} \\subset G^{\\left(1\\right)} \\subset \\ldots$ on $\\mathcal{P}$ where in going from $G^{\\left(2m\\right)}$ to $G^{\\left(2m+1\\right)}$, only horizontal lines are infected, and in going from $G^{\\left(2m+1\\right)}$ to $G^{\\left(2m+2\\right)}$, only vertical lines are infected. Let us stop this modified two-dimensional process on $\\mathcal{P}$ as soon as it generates $k$ or more parallel fully infected lines (or reaches termination).\n\nFor $0\\leq j \\leq s$, let $h_{i,j}$ denote the number of horizontal lines containing $j$ boosted points which are infected when going from $G^{\\left(2i\\right)}$ to $G^{\\left(2i+1\\right)}$. Let $v_{i,j}$ be defined analogously. 
We say that $\\left(\\mathbf{h}=\\langle h_{i,j} \\rangle, \\mathbf{v}=\\langle v_{i,j} \\rangle\\right)$ is the \\emph{full line-count} of the modified two-dimensional process; also, we define $h_i = \\sum_{j} h_{i,j}$ and $v_i = \\sum_{j} v_{i,j}$. Given $\\left(\\mathbf{h}, \\mathbf{v}\\right)$, let $\\left(\\mathbf{h^*}, \\mathbf{v^*}\\right)$ be defined by setting $h^*_{i,0} = h_i$, $v^*_{i,0} = v_i$, and $h^*_{i,j} = v^*_{i,j} = 0$ for $1 \\leq j \\leq s$.\n\nLet $E_{k,\\left(\\mathbf{h},\\mathbf{v}\\right)}$ denote the event that the modified two-dimensional process on $\\mathcal{P}$ generates $k$ or more parallel fully infected lines and furthermore, the full line-count of the modified two-dimensional process on $\\mathcal{P}$ is given by $\\left(\\mathbf{h}, \\mathbf{v}\\right)$. \n\nFor any $\\left(\\mathbf{h}, \\mathbf{v}\\right)$, we shall show that $\\mathbb{P}\\left(E_{k,\\left(\\mathbf{h},\\mathbf{v}\\right)}\\right) = O\\left(\\mathbb{P}\\left(E_{k,\\left(\\mathbf{h^*},\\mathbf{v^*}\\right)}\\right)\\right)$; in other words, we show that we may restrict our attention to the case where we never use any of the boosted points.\n\nHaving generated $l$ parallel fully infected lines, let us consider the probability that the modified two-dimensional process generates exactly $l'$ new fully infected lines perpendicular to these $l$ lines in the next step. If $n\\left(np\\right)^{r-l} \\gg 1$, then we see from Claim \\ref{binsmall} that the probability of generating $l'$ parallel fully infected lines in the next step where each of these $l'$ new lines contains no boosted points is $\\Omega \\left(1\\right)$. So suppose that $n\\left(np\\right)^{r-l} \\ll 1$. 
In this case, the probability of generating $l'$ parallel fully infected lines in the next step where each of these $l'$ new lines contains no boosted points is\n\\[ \\Theta \\left( \\binom{N_0}{l'}\\left(\\left(np\\right)^{r-l}\\right)^{l'}\\left(1-\\left(np\\right)^{r-l}\\right)^{n-l'}\\right) = \\Theta\\left(\\left(n\\left(np\\right)^{r-l}\\right)^{l'}\\right)\\]\nsince $N_0 = n-o\\left(n\\right)$. On the other hand, the probability of generating $l'$ parallel fully infected lines in the next step where each of these $l'$ new lines contains $j$ boosted points for some $1 \\leq j \\leq s$ is\n\\[ O \\left( \\binom{N_j}{l'}\\left(\\left(np\\right)^{r-l-j}\\right)^{l'}\\right) = O\\left(\\left(n\\left(np\\right)^{r-l}\\right)^{l'}\\left(n^{1+\\gamma\/\\left(r-\\gamma\\right)}p\\right)^{-jl'}\\right)\\]\nsince $N_j < n^{1-j\\gamma\/\\left(r-\\gamma\\right)}$. Observe that $n^{1+\\gamma\/\\left(r-\\gamma\\right)}p=cn^{\\frac{\\gamma - 1}{r-\\gamma}}$, and since $\\gamma = \\frac{r+s\\left(s+1\\right)}{2\\left(s+1\\right)} \\geq 1$ when $r\\geq 2$, we see that $n^{1+\\gamma\/\\left(r-\\gamma\\right)}p \\gg 1$. It follows that for any $\\left(\\mathbf{h}, \\mathbf{v}\\right)$, we have \n\\[\\mathbb{P}\\left(E_{k,\\left(\\mathbf{h},\\mathbf{v}\\right)}\\right) = O\\left(\\mathbb{P}\\left(E_{k,\\left(\\mathbf{h^*},\\mathbf{v^*}\\right)}\\right)\\right).\\]\n\nThus, to estimate $\\mathbb{P}\\left(E_k\\right)$, we may restrict our attention to the events $E_{k,\\left(\\mathbf{h^*},\\mathbf{v^*}\\right)}$. As in the proof of Theorem~\\ref{2d-unboosted-perc}, we may suppose that $\\sum_{i} v^*_{i} = k \\leq s + 1$ and that $\\sum_{i} h^*_{i} = l < k$. Recall that $s$ is the greatest natural number such that $s\\left(s+1\\right) \\leq r$. 
Recall that $p = cn^{-1-\\frac{1}{r-\\gamma}}$, where $\\gamma = \\frac{r + s\\left(s+1\\right)}{2\\left(s+1\\right)}$, satisfies $n\\left(np\\right)^{r-s}\\ll 1$ and $n\\left(np\\right)^{r-s-1} = \\omega(1)$.\n\nWe shall mimic the proof of Theorem~\\ref{2d-unboosted-perc}. Since $l \\leq s$ and hence $n\\left(np\\right)^{r-l}\\ll 1$, we see that the probability of $E_{k,\\left(\\mathbf{h^*},\\mathbf{v^*}\\right)}$, up to constant factors, is given by\n\\[ (n(np)^r)^{h^*_0} \\times (n(np)^{r-h^*_0})^{v^*_0} \\times (n(np)^{r-v^*_0})^{h^*_1} \\times \\dots \\times (n(np)^{r-\\sum_{i < t}v^*_i})^{h^*_t} \\times (n(np)^{r-\\sum_{i \\leq t}h^*_i})^{v^*_t}.\\]\nAfter some algebraic simplification, we see that \n\\begin{equation}\n\\mathbb{P}\\left(E_{k,\\left(\\mathbf{h^*},\\mathbf{v^*}\\right)}\\right) = \\Theta\\left(n^{k+l}\\left(np\\right)^{rk + rl - kl}\\right) = \\Theta\\left(n^{k}\\left(np\\right)^{rk}\\left(n\\left(np\\right)^{r-k}\\right)^{l}\\right). \\label{Ek-prob}\n\\end{equation} \nWhen $k \\leq s$, we see that the estimate for the probability of $E_{k,\\left(\\mathbf{h^*},\\mathbf{v^*}\\right)}$ in \\eqref{Ek-prob} is maximised by taking $l = 0$, from which we conclude that \n\\[\\mathbb{P}\\left(E_k\\right) = O\\left(\\left(n\\left(np\\right)^r\\right)^k\\right) = O\\left(c^{rk}n^{-\\frac{k\\gamma}{r-\\gamma}}\\right).\\]\nOn the other hand, when $k=s+1$, the estimate for the probability of $E_{k,\\left(\\mathbf{h^*},\\mathbf{v^*}\\right)}$ in \\eqref{Ek-prob} is maximised by taking $l = s$, from which we conclude that \n\\[\\mathbb{P}\\left(E_{s+1}\\right) = O\\left(\\left(n^{2s+1}\\left(np\\right)^{r\\left(2s+1\\right)-s\\left(s+1\\right)}\\right)\\right).\\]\nUsing the fact that $\\gamma = \\frac{r + s\\left(s+1\\right)}{2\\left(s+1\\right)}$, we see that $\\mathbb{P}\\left(E_{s+1}\\right) = O\\left( c^{2\\left(s+1\\right)\\left(r-\\gamma\\right)} n^{-1}\\right)$ as required. 
This completes the proof of Lemma \\ref{PE_k}.\n\\end{proof}\n\nRecall that $E_A$ is the event that we stop the modified three-dimensional process on account of the number of planes containing $k$ parallel fully infected lines exceeding $ \\lfloor n^{1-\\frac{k\\gamma}{r-\\gamma}}\\rfloor $ for some $1\\leq k \\leq s+1$. \n\nFrom Lemma \\ref{PE_k}, we see that the expected number of planes with $k$ parallel fully infected lines when we stop the modified three-dimensional process is $O\\left( c^{rk} n^{1-\\frac{k\\gamma}{r-\\gamma}} \\right)$ when $1\\leq k \\leq s$ and $O\\left(c^{r\\left(2s+1\\right)-s\\left(s+1\\right)}\\right)$ when $k=s+1$. By Markov's inequality, the probability that the number of planes containing $k$ parallel fully infected lines exceeds $ \\lfloor n^{1-\\frac{k\\gamma}{r-\\gamma}} \\rfloor$ is $O\\left(c^{rk}\\right)$ when $1\\leq k \\leq s$ and $O\\left(c^{r\\left(2s+1\\right)-s\\left(s+1\\right)}\\right)$ when $k=s+1$ since $ \\lfloor n^{1-\\frac{(s+1)\\gamma}{r-\\gamma}} \\rfloor = 0$. Applying the union bound, we get \n\\[ \\mathbb{P}\\left(E_A\\right) = O\\left(\\sum_{1\\leq k\\leq s} c^{rk} + c^{r\\left(2s+1\\right)-s\\left(s+1\\right)}\\right).\\]\nThis concludes the proof of Lemma \\ref{planecount}.\n\\end{proof}\n\nThe required lower bound on $p_c$ follows immediately from Lemma \\ref{planecount}. The lemma implies that $\\mathbb{P}\\left(E_A\\right) \\rightarrow 0$ as $c \\rightarrow 0$. Hence, for a suitably small constant $c$, the probability that the three-dimensional $r$-neighbour line percolation process with $p = cn^{-1-\\frac{1}{r-\\gamma}}$ generates a plane with $s+1$ parallel fully infected lines before reaching termination is less than $1\/2$ since $n^{1-\\frac{(s+1)\\gamma}{r-\\gamma}} < 1$. Consequently, the probability of percolation is also less than $1\/2$. This implies that $p_c\\left(n,r,3\\right) = \\Omega\\left(n^{-1-\\frac{1}{r-\\gamma}}\\right)$ as required. 
This completes the proof of Theorem \\ref{3d-perc}.\n\\end{proof}\t\n\n\\section{Minimal percolating sets}\\label{minset}\n\nIn this section, we prove Theorem \\ref{min-set} which tells us the size of a minimal percolating set. We shall make use of the polynomial method which has had many unexpected applications in combinatorics; see \\citep{Guth13} for a survey of many of these surprising applications. While linear algebraic techniques have previously been used to study bootstrap percolation processes (see \\citep{linalg}), we believe that this application of the polynomial method is new to the field.\n\n\\begin{proof}[Proof of Theorem \\ref{min-set}]\nSuppose for the sake of contradiction that there is a set $A\\subset [n]^d$ which percolates with $|A| < r^d$. We shall derive a contradiction using the polynomial method. \n\n\\begin{prop}\nThere exists a non-zero polynomial $P_A \\in \\mathbb{R}[x_1, x_2, \\dots, x_d]$ of degree at most $r-1$ in each variable which vanishes on $A$.\n\\end{prop}\n\\begin{proof}\nLet $V \\subset \\mathbb{R}[x_1, x_2, \\dots, x_d]$ be the vector space of real polynomials in $d$ variables of degree at most $r-1$ in each variable. The dimension of $V$ is clearly $r^d$. Consider the evaluation map from $V$ to $\\mathbb{R}^{|A|}$ which sends a polynomial $P$ to $\\left(P\\left(v\\right)\\right)_{v \\in A}$. Clearly, this map is linear. Since we assumed that $|A| < r^d$, this map has a non-trivial kernel. The existence of $P_A$ follows.\n\\end{proof}\n\nWe shall use the polynomial $P_A$ to follow the spread of infection. The following claim will yield a contradiction.\n\n\\begin{prop}\nThe polynomial $P_A$ vanishes on $A^{\\left(m\\right)}$ for every $m\\geq 0$.\n\\end{prop}\n\n\\begin{proof}\nWe proceed by induction on $m$. The claim is true when $m = 0$ since $A^{\\left(0\\right)} = A$. 
Now, assume $P_A$ vanishes on $A^{\\left(m\\right)}$ and consider a line $\\mathcal{L}$ which gets infected when going from $A^{\\left(m\\right)}$ to $A^{\\left(m+1\\right)}$. It must be the case that $|\\mathcal{L} \\cap A^{\\left(m\\right)}| \\geq r$. Since $P_A$ vanishes on $A^{\\left(m\\right)}$, the restriction of $P_A$ to $\\mathcal{L}$ vanishes on $\\mathcal{L} \\cap A^{\\left(m\\right)}$. If the direction of $\\mathcal{L}$ is $i \\in [d]$, then the restriction of $P_A$ to $\\mathcal{L}$ is a univariate polynomial in the variable $x_i$ of degree at most $r-1$. Since a non-zero univariate polynomial of degree at most $r-1$ has at most $r-1$ roots, the restriction of $P_A$ to $\\mathcal{L}$ has to be identically zero. Consequently, $P_A$ vanishes on $A^{\\left(m+1\\right)}$.\n\\end{proof}\n\nSince $A$ percolates, we conclude that $P_A$ vanishes on $[n]^d$. On the other hand, using the following proposition, the proof of which may be found in \\citep{Alon99}, we conclude that $P_A$ cannot vanish on $[r+1]^d$.\n\n\\begin{prop}\\label{weak-null} Let $P = P(x_1, x_2, \\dots, x_d)$ be a polynomial in $d$ variables over an arbitrary field $F$. Suppose that the degree of $P$ as a polynomial in $x_i$ is at most $t_i$ for $1\\leq i \\leq d$, and let $S_i \\subset F$ be a set of at least $t_i + 1$ distinct elements of $F$. If $P(u_1, u_2, \\dots, u_d) = 0$ for every $d$-tuple $(u_1, u_2, \\dots, u_d) \\in S_1 \\times S_2 \\times \\dots \\times S_d$, then $P$ is identically zero. \\qed\n\\end{prop}\n\nIt follows from Proposition \\ref{weak-null} that $P_A$ is identically zero, and we have a contradiction. This establishes Theorem \\ref{min-set}.\n\\end{proof}\n\n\\begin{rem}\nIt follows from Theorem \\ref{min-set} that the size of a minimal percolating set in the $r$-neighbour bootstrap percolation model on $[n]^d$ with edges induced by the Hamming torus is at least $(r\/d)^d$. On the other hand, it is possible to construct sets of size about $r^d\/2d!$ which percolate. 
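The linear algebra in the proof of Theorem \\ref{min-set} is easy to carry out explicitly. The sketch below (an illustration with the small parameters $d = r = 2$, $n = 3$; the helper names and the rational Gaussian elimination are ours) finds a nonzero polynomial of degree at most $r - 1$ in each variable vanishing on a set $A$ with $|A| < r^d$, and checks that the zero set of the polynomial absorbs the spread of infection, so that $A$ cannot percolate:

```python
from fractions import Fraction
from itertools import product

def kernel_vector(rows, ncols):
    """One nonzero rational vector in the kernel of the matrix `rows`
    (Gaussian elimination; a kernel exists whenever len(rows) < ncols)."""
    m = [[Fraction(x) for x in row] for row in rows]
    pivots = {}                       # column -> row index of its pivot
    r = 0
    for col in range(ncols):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        m[r] = [x / m[r][col] for x in m[r]]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                m[i] = [a - m[i][col] * b for a, b in zip(m[i], m[r])]
        pivots[col] = r
        r += 1
    free = next(c for c in range(ncols) if c not in pivots)
    v = [Fraction(0)] * ncols
    v[free] = Fraction(1)
    for col, row in pivots.items():
        v[col] = -m[row][free]
    return v

def prod_pow(v, e):
    out = 1
    for vi, ei in zip(v, e):
        out *= vi ** ei
    return out

def vanishing_polynomial(A, r, d):
    """Coefficients (indexed by exponent vectors) of a nonzero polynomial,
    of degree <= r - 1 in each variable, vanishing on A; it exists whenever
    |A| < r^d, since the evaluation map has a nontrivial kernel."""
    monomials = list(product(range(r), repeat=d))
    rows = [[prod_pow(v, e) for e in monomials] for v in A]
    return monomials, kernel_vector(rows, len(monomials))

def evaluate(monomials, coeffs, v):
    return sum(c * prod_pow(v, e) for e, c in zip(monomials, coeffs))

A = [(0, 0), (0, 1), (1, 0)]          # |A| = 3 < r^d = 4 for r = d = 2
mono, c = vanishing_polynomial(A, r=2, d=2)
assert any(c) and all(evaluate(mono, c, v) == 0 for v in A)
# the spread from A fully infects the lines x = 0 and y = 0, and P_A
# still vanishes there, exactly as in the proof ...
assert evaluate(mono, c, (0, 2)) == 0 and evaluate(mono, c, (2, 0)) == 0
# ... but P_A is not identically zero, so A cannot percolate
assert evaluate(mono, c, (1, 1)) != 0
```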
It would be interesting to determine the size of a minimal percolating set in this model exactly for all $d,r \\in \\mathbb{N}$; we suspect that the lower bound of $(r\/d)^d$ is quite far from the truth.\n\\end{rem}\n\n\\section{Concluding remarks}\\label{rem}\nThere remain many challenging and attractive open problems, chief amongst which is the determination of $p_c\\left(n,r,d\\right)$ for all $d, r \\in \\mathbb{N}$. To determine $p_c(n,r,3)$, we used a careful estimate for $\\theta_p(n,r,2)$ which is valid for all $0 \\leq p \\leq 1$. This estimate for $\\theta_p(n,r,2)$ depends crucially on the fact that the two-dimensional process reaches termination in a constant (depending on $r$, but not on $n$) number of steps. We believe that to determine $p_c(n,r,4)$, one will need to determine $\\theta_p(n,r,3)$ for all $0 \\leq p \\leq 1$ but since it is not at all obvious that the three-dimensional process almost surely reaches termination in a constant number of steps, we suspect different methods will be necessary. \n\nAs remarked earlier, it is easily read out of our proofs that the line percolation model does not exhibit a sharp threshold at $p_c$ in two or three dimensions. It would be interesting to prove an analogous statement for every $d,r \\in \\mathbb{N}$.\n\n\\section*{Acknowledgements}\nThe first and second authors were partially supported by NSF grant DMS-1301614 and the second author also wishes to acknowledge support from EU MULTIPLEX grant 317532. Some of the research in this paper was carried out while the authors were visitors at Microsoft Research, Redmond. It was continued while the third and fourth authors were visitors at the University of Memphis. The authors are grateful to MSR, Redmond for their hospitality, and the third and fourth authors are additionally grateful for the hospitality of the University of Memphis. 
We would also like to thank Janko Gravner for bringing \\citep{hamming} to our attention.\n\n\\bibliographystyle{siam}\n\n\\section{Introduction}\n\nOur task is indefinite integration, that is, given $f$ from some class $A$ we seek $g$ from a possibly different class $B$ such that\n$$\nf = g'.\n$$\nAs a first step in this direction we would like to delimit the form that $g$ can take. Various results in this direction are usually called Liouville principles, after a famous result of Liouville.\n\nClassically, in the computation of the area below a hyperbola it was observed that to integrate a rational function one needs a logarithm. This led to the class of elementary functions.\n\nIn the computation of the arc length of an ellipse there appears the integral\n$$\nE(x, m) = \\int \\frac{(1 - x^2)dx}{\\sqrt{(1 - x^2)(1 - mx^2)}}\n$$\nand the question whether it can be expressed \"in finite terms\"{} (in particular, the question whether it is an elementary function). Abel made the first steps and then Liouville proved that, among others, the elliptic integral $E$ is not elementary.\n\nAbel and Liouville combined algebraic and analytic arguments, which causes some doubt about the validity of the proofs. Also, Liouville's proof used repeated integration and differentiation, and the deeper sense of the main part of his argument was not clear. It turns out that Liouville used differential automorphisms: integration allowed passing information from the generator (a derivation) to group elements, and differentiation went back. We observed that this argument can be done directly inside the Lie algebra of derivations. In effect, our proof is very close to Liouville's original proof. Thanks to the use of the Abel addition formula for elliptic integrals we can handle elliptic functions and elliptic integrals.
It is curious that Liouville missed this: all the ingredients were known to him, and our main result is a natural extension of that of Liouville.\n\n\\section{Differential fields}\n\nA differential field is a field $F$ with a derivation $D$, that is, an additive operation which satisfies the Leibniz formula:\n$$\nD(fg) = D(f)g + fD(g).\n$$\nWe say that $f \\in F$ is a constant if and only if $D(f) = 0$.\n\nRemark: In the sequel we consider only fields of characteristic $0$; in finite characteristic several crucial results are no longer valid.\n\nExample: The field ${\\mathbb Q}(x, \\exp(1\/x^2))$. The function $f = ((x^2-2)\\exp(1\/x^2))\/x^2$ is an element of this field.\n\nRemark: A given function belongs to many fields; in practice we prefer small fields.\n\nExample: The field of meromorphic functions in a complex domain $U$ with the usual derivative is a differential field.\n\nFor computational purposes this field is too big; we want finitely generated fields. The field of meromorphic functions is in a sense a universal example: every finitely generated differential field is isomorphic to a subfield of the field of meromorphic functions in some domain (Seidenberg \\cite{Seid}).\n\nThe assumption that we work inside a differential field introduces some limitations; for example, it excludes $|x|$. In the case of multivalued functions we sometimes need to make a choice of branches; for example,\n$$\n\\sqrt{x}\\sqrt{1-x} - \\sqrt{x(1-x)}\n$$\nis zero or not depending on the choice of branches of the square root.\n\nIn a sense this is a necessary limitation; other settings quickly lead to unsolvable problems (\\cite{Rich:U}, \\cite{Risch:Prob}).\n\n\\subsection{Differential fields, representation}\n\nOne way to represent a differential field is via generators and relations. In characteristic $0$ our field is an extension of the field of rational numbers. Generators 
Generators\neither are transcendental or defined via minimal\npolynomial of an algebraic extension (that is relation).\nWe define derivative on transcendental generators\nin arbitrary way and extend to the whole field\n(in characteristic $0$ there exist exactly one extension,\nfor example see\n\\cite{Lang:A} case 1 and case 2 following Theorem 5.1).\n\n\nField ${\\mathbb Q}(x, \\exp(1\/x^2))$ is given in this way,\nthere are two generators (both transcendental) $\\theta_1 = x$,\n$\\theta_2 = \\exp(1\/x^2)$,\n$$\nD(\\theta_1) = 1,$$\n$$D(\\theta_2) = \\frac{-2}{\\theta_1^3}\\theta_2.$$\n\n\nAlternative point of view is that a differential field is generated\nby solutions of a system of algebraic differential equations.\n\nFrom this point of view constants are first integrals of our\nsystem of equations, finding them can be a hard problem.\n\n\\subsection{Differential fields, geometric model}\n\nAlternatively, we can treat finitely generated differential field\nas rational functions on an algebraic manifold $M$. 
The derivative $D$ is a vector field on $M$.\n\nRemark: speaking of the field of rational functions, we automatically make no distinction between manifolds with the same field of rational functions (birationally equivalent ones).\n\n\\subsection{Differential fields, towers}\n\nA crucial role is played for us by extensions of transcendental degree $1$, that is, pairs $F \\subset K$ of differential fields such that $K$ is of transcendental degree $1$ over $F$. Geometrically, $K$ is a field of algebraic functions on a curve (an algebraic manifold of dimension $1$) over $F$. Assuming that $K$ is algebraic over $F(\\theta)$ we get (a variant of) the chain rule\n$$\nDf = D_Ff + (D\\theta)\\partial_\\theta f\n$$\nwhere $D_F$ is the derivation of $K$ equal to $D$ on $F$ and such that $D_F(\\theta) = 0$, and $\\partial_\\theta$ is the derivation of $K$ which is zero on $F$ and such that $\\partial_\\theta(\\theta) = 1$.\n\nWe obtain more general fields as towers, that is, sequences\n$$\nF_0 \\subset F_1 \\subset F_2 \\subset \\dots \\subset F_n = K\n$$\nwhere each $F_i \\subset F_{i+1}$ is an extension of transcendental degree $\\leq 1$.\n\nOne can consider more general towers, but here we would like to consider a more specific case.\n\n\\subsection{Differential fields, extensions of degree $1$}\n\nTypical examples of extensions of degree $\\leq 1$, $F \\subset F(\\theta)$:\n\\begin{itemize}\n\\item an algebraic extension,\n\\item extension by a primitive, $D(\\theta) = f$ where $f\\in F$,\n\\item extension by an exponential, $D(\\theta) = D(f)\\theta$ where $f\\in F$,\n\\item extension by an elliptic function, $D(\\theta) = D(f)q$ where $f\\in F$, $q^2 = \\theta^3 -a\\theta - b$, and $a$ and $b$ are constants,\n\\item extension by a Lambert W function, $D(\\theta) = D(f)\\frac{\\theta}{f(\\theta+1)}$, where $f\\in F$.\n\\end{itemize}\nIntuitively, in the last three cases we have $\\theta = \\exp(f)$, $\\theta = P(a, b, f)$, $\\theta = W(f)$, where $P$ is (an alternative version of) the Weierstrass $P$ elliptic 
function and $W$ is the Lambert W function.\n\nIn the case of elliptic functions the corresponding curve is an elliptic curve; in the other cases we have the projective line. However, algebraic extensions can introduce arbitrary curves.\n\nIn the sequel we will need two kinds of primitives: logarithms, where $f = \\frac{Dh}{h}$ for $h\\in F$, and elliptic integrals.\n\n\\subsection{Differential fields, elliptic integrals}\n\nTo define elliptic integrals we assume that there are elements $p, q \\in F$ such that $q^2 = p^3 -ap - b$ where $a$ and $b$ are constants.\n\n$\\theta \\in K$ is an elliptic integral of the first kind if\n$$\nD(\\theta) = \\frac{Dp}{q}.\n$$\n\n$\\theta \\in K$ is an elliptic integral of the second kind if\n$$\nD(\\theta) = \\frac{pDp}{q}.\n$$\n\n$\\theta \\in K$ is an elliptic integral of the third kind if there exists a constant $c$ such that\n$$\nD(\\theta) = \\frac{Dp}{(p - c)q}.\n$$\nOur elliptic integrals are integrals of a differential form on an elliptic curve in Weierstrass form.\n\nThe traditional approach uses a curve in Legendre form:\n$$\ny^2 = (1 - x^2)(1 - mx^2).\n$$\nThose curves give essentially equivalent theories; however, to write a curve in Legendre form we may need additional algebraic extensions. For our purposes we need control over algebraic extensions, so the Weierstrass form is preferable. Also, integrals on the Weierstrass curve fit well with Weierstrass elliptic functions; the Legendre curve naturally leads to Jacobi elliptic functions, which for us are less convenient.\n\nElliptic integrals in Legendre form are frequently transformed to trigonometric form. 
For us this is a complication which needlessly introduces transcendental functions.\n\nThe Abel lemma, which is crucial for us, is proved on a curve in Legendre form; thanks to the equivalence we will get it also for Weierstrass elliptic integrals.\n\n\\subsection{Differential fields, Lie closed extensions}\n\nAll extensions of degree $1$ that we considered are Lie closed, that is, there exists a nonzero derivation $X$ on $K$ such that $X$ commutes with $D$ and $X$ is zero on $F$. More generally, an extension $F \\subset K$ of transcendental degree $n$ is Lie closed if there are $n$ linearly independent derivations ${X_k}$ on $K$ which commute with $D$ and are zero on $F$.\n\nExplicitly:\n\\begin{itemize}\n\\item for $\\theta$ which is a primitive, $X = \\partial_\\theta$,\n\\item for $\\theta$ which is an exponential, $X = \\theta\\partial_\\theta$,\n\\item for $\\theta$ which is an elliptic function, that is $D(\\theta) = D(f)q$ where $f\\in F$, $q^2 = \\theta^3 -a\\theta - b$, and $a$ and $b$ are constants, we take $X = q\\partial_\\theta$,\n\\item for $\\theta$ which is a Lambert W function,\n$$\nX = \\frac{\\theta}{\\theta + 1}\\partial_\\theta.\n$$\n\\end{itemize}\n\nNote that $DX - XD$ is a derivation, so it is enough to check commutation on generators, that is, on elements of $F$ and on $\\theta$. On $F$ our $X$ is zero and $D$ preserves $F$, so both products are zero, and hence $D$ and $X$ commute on $F$.\n\nFor $\\theta$ which is a primitive we take $X = \\partial_\\theta$, so we have $\\partial_\\theta\\theta = 1$ and thus $DX\\theta = 0$. Also $D\\theta \\in F$, so $XD\\theta = 0$ and again $D$ and $X$ commute.\n\nMore generally, for all our $\\theta$ we have $D\\theta = gh(\\theta)$ where $g \\in F$ and $h(\\theta)$ depends only on $\\theta$ (in the elliptic case $h$ is an algebraic function; in the other cases it is a rational function). Now our $X$ is\n$$\nX = h(\\theta)\\partial_\\theta\n$$\nand we have\n$$\nXD\\theta = Xgh(\\theta) = gXh(\\theta) =\ngh(\\theta)\\partial_\\theta h(\\theta),\n$$\n$$\nDX\\theta = 
Dh(\\theta) = gh(\\theta)\\partial_\\theta h(\\theta)\n$$\nso $D$ and $X$ commute.\n\nIn three of the cases $X$ generates a one-parameter group of (differential) automorphisms:\n\\begin{itemize}\n\\item When $\\theta$ is a primitive, the map $\\theta \\mapsto \\theta + c$, where $c$ is a constant, extends to an automorphism of $K$. In other words, translation by a constant gives an automorphism of the differential field.\n\\item When $\\theta$ is an exponential, the map $\\theta \\mapsto c\\theta$, where $c$ is a nonzero constant, extends to an automorphism. So the group is the multiplicative group of nonzero constants.\n\\item On an elliptic curve we have the group operation; translation by points with constant coordinates gives automorphisms.\n\\end{itemize}\nIn other words, the extensions above are differential Galois extensions (Kolchin proved that there are no other differential Galois extensions of degree $1$).\n\nThe case of Lambert W is different: on the algebraic level there is no group.\n\n\\subsection{Differential fields, indefinite integral}\n\nWe say that $g \\in F$ is an indefinite integral (or a primitive) of $f \\in F$ when\n$$\nf = D(g).\n$$\nFor example\n$$\n\\int ((x^2-2)\\exp(1\/x^2))\/x^2 = x\\exp(1\/x^2).\n$$\nAs an element of a differential field an indefinite integral is uniquely determined up to an additive constant (but an element of a differential field may be represented by multiple expressions, so we can get different formulas).\n\n\\subsection{Elementary extensions}\nA differential field $K$ is an elementary extension of $F$ if there exists a tower\n$$\nF = F_0 \\subset F_1 \\subset \\dots \\subset F_n = K\n$$\nwhere each $F_i \\subset F_{i+1}$ is an algebraic extension, an extension by a logarithm, or an extension by an exponential.\n\nWe say that a function $f$ is elementary over $F$ if it is an element of an elementary extension of $F$.\n\nWe say that a function $f$ is elementary if it is elementary over a field of rational functions over constants.\n\nExample: Let $f(x) = 
\\sqrt{\\exp(x + \\log(x))}$.\n$f \\in K = {\\mathbb Q}(x)(\\theta_1, \\theta_2, \\theta_3)$ where\n$\\theta_1 = \\log(x)$, $\\theta_2 = \\exp(x + \\log(x))$,\n$\\theta_3 = \\sqrt{\\exp(x + \\log(x))}$. So $f$ is an\nelementary function.\n\n\nWe can write function from previous example as\n$f(x) = \\sqrt{x\\exp(x)}$,\nand use $K = {\\mathbb Q}(x)(\\theta_1, \\theta_2)$ for\n$\\theta_1 = \\exp(x)$, $\\theta_2 = \\sqrt{x\\exp(x)}$.\nHowever, given an expression for $f(x)$ we can build\nan elementary extension in a natural way: each logarithm\nand exponential appearing in $f$ and each irrational algebraic\nsubexpression of $f$ (like a root) is associated with a\ngenerator of an extension. Different expressions for $f$\nmay lead to different elementary extensions.\n\n\nWe normally assume that extensions can not be written in\nsimpler form. For example we will treat $\\theta$ as\nan exponential or a logarithm only when it is not algebraic\nover smaller field. For example we treat\n$f(x) = \\exp(\\frac{\\log(x)}{2}) = \\sqrt{x}$\nnot as an exponential but as a solution to algebraic\nequation $f^2(x) = x$. Similarly we do not treat\n$x = \\log(\\exp(x))$ as a logarithm.\n\n\\subsection{Elliptic-Lambert extensions}\n\nWe will consider also wider class of extensions\n\"{}elliptic-Lambert\"{} extension where we allow\ntowers in which may appear extensions by elliptic functions,\nelliptic integrals and Lambert W function.\n\n\\subsection{Commutator formula}\n\n\\begin{lemma}\\label{der-comm}\nIf $X, Y$ are derivations on $K$, $p\\in K$, $p$ and $\\psi(p)$ are\nalgebraically dependent over constants, then\n$$\nX((Yp)\\psi(p)) - Y((Xp)\\psi(p) = ([X, Y]p)\\psi(p)\n$$\n\\end{lemma}\n\nProof: Since derivations extend uniquely to algebraic extensions\nwe have $X\\psi(p) = (Xp)\\psi'(p)$ and $Y\\psi(p) = (Yp)\\psi'(p)$\nwhere ${}'$ denotes unique derivation on $C(p, \\psi(p))$ such\nthat $p' = 1$ and $C$ is constant field. 
Now,
$$
X((Yp)\psi(p)) = (XYp)\psi(p) + (Yp)(Xp)\psi'(p),
$$
$$
Y((Xp)\psi(p)) = (YXp)\psi(p) + (Xp)(Yp)\psi'(p).
$$
Subtracting the above gives the result.
\hfill$\square$

Remark: While different proofs of Liouville's theorem at first
may look quite different, the crucial step of the known proofs either
explicitly uses something like Lemma \ref{der-comm} (for
example the Lemma inside the proof of Theorem 1 in \cite{BK}) or
depends on calculations which work only because the Lemma is valid
(as is the case with Liouville's original proof).


\section{Abel formula}

We will need the Abel addition formula for elliptic integrals
(in Legendre form). Abel's work \cite{Abel:prec} can be
easily adapted to modern standards. However, below we present
a somewhat different argument.

We consider integrals of differential forms
on a curve $C$ in Legendre form
$y^2 = (1 - x^2)(1 - mx^2)$.
The elliptic integral of the third kind $\Pi$ satisfies
$$
\Pi'(x) = \frac{1}{(1 - nx^2)y}.
$$
Let $(x_1, y_1)$ and $(x_2, y_2)$ be points on $C$
and $(x_3, y_3)$ be their sum. 
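For concreteness, the first coordinate of the sum is given by the classical addition formula of Jacobi sn type; this explicit law is our own illustrative addition, not stated in the argument above. The following Python snippet checks numerically that its two standard (and algebraically equivalent) forms agree:

```python
import math

m = 1/3  # parameter of the Legendre curve y^2 = (1 - x^2)(1 - m x^2)

def y(x):
    # positive branch of the curve, valid for small positive x
    return math.sqrt((1 - x**2) * (1 - m * x**2))

def x3_addition(x1, x2):
    # sn-type addition law for the first coordinate of the sum
    return (x1 * y(x2) + x2 * y(x1)) / (1 - m * x1**2 * x2**2)

def x3_alternative(x1, x2):
    # equivalent form obtained by multiplying by the conjugate
    return (x1**2 - x2**2) / (x1 * y(x2) - x2 * y(x1))

x1, x2 = 0.1, 0.15
xa = x3_addition(x1, x2)
xb = x3_alternative(x1, x2)
print(xa, xb)  # the two forms agree
```

The second form shows, in particular, that one can trade the radicals $y_1, y_2$ for a rational expression in the coordinates.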
Abel gave the formula
$$
\Pi(x_1) + \Pi(x_2) = C + \Pi(x_3) -
\frac{a}{2\Delta(a)}\log\left(
\frac{a_0a + a^3 + x_1x_2x_3\Delta(a)}{a_0a + a^3 - x_1x_2x_3\Delta(a)}
\right)
$$
where
$$
n = \frac{1}{a^2},
$$
$$\Delta(a) = \sqrt{(1 - a^2)(1 - ma^2)},$$
$$a_0 = \frac{x_2^3y_1 - x_1^3y_2}{x_1y_2 - x_2y_1}.$$

In the differential version
$$
\frac{dx_1}{(1 - x_1^2/a^2)y_1} + \frac{dx_2}{(1 - x_2^2/a^2)y_2}
= \frac{dx_3}{(1 - x_3^2/a^2)y_3} - \frac{a\, df}{2\Delta(a) f}
$$
where
$$
f = \frac{a_0a + a^3 + x_1x_2x_3\Delta(a)}{a_0a + a^3 - x_1x_2x_3\Delta(a)}.
$$
In this version the formula can be checked by direct calculation
(it can be done on a computer); we skip the details.

We will also need the integrals of the first kind $F$ and
of the second kind $E$:
$$
F(x)' = \frac{1}{y},
$$
$$
E(x)' = \frac{1-mx^2}{y}.
$$

\begin{lemma}\label{el-add}(Abel)
$$
\sum_{i=1}^l F(x_i)' = F(y)',
$$
$$
\sum_{i=1}^l E(x_i)' = E(y)' + g',
$$
$$
\sum_{i=1}^l \Pi(x_i)' = \Pi(y)' + f',
$$
where $y$ is the sum of the points on the curve $C$, $g$ is a rational
function and $f$ is a sum of logarithms.
\end{lemma}

Proof: The formula for $\Pi$ follows by induction from the formula
above. The formulas for $F$ and $E$ are obtained in a similar
way: direct calculation for $l=2$ and induction. The explicit
formula for $g$ when $l = 2$ is
$$
g = m\frac{-x_1 x_2^3+x_1^3 x_2}{x_1 y_2 -x_2 y_1}.
$$
\hfill$\square$

Remark: Abel obtained the formula for $F$ by taking the limit of the
formula for $\Pi$ as $n$ goes to $0$ (and similarly for $E$). 
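The direct calculation for $l=2$ can be sketched numerically. Assuming the sn-type addition law for the first coordinate of the sum (an assumption we add for illustration), the following Python snippet checks the coefficient of $dx_1$ in the differential identities for $F$ and $E$ by central differences:

```python
import math

m = 1/3    # curve parameter
h = 1e-6   # step for central differences

def y(x):
    return math.sqrt((1 - x**2) * (1 - m * x**2))

def x3(x1, x2):
    # sn-type addition law for the first coordinate of the sum (our assumption)
    return (x1 * y(x2) + x2 * y(x1)) / (1 - m * x1**2 * x2**2)

def g(x1, x2):
    # explicit g from the proof of the lemma, case l = 2
    return m * (-x1 * x2**3 + x1**3 * x2) / (x1 * y(x2) - x2 * y(x1))

def d_dx1(func, x1, x2):
    # numerical partial derivative with respect to x1
    return (func(x1 + h, x2) - func(x1 - h, x2)) / (2 * h)

x1, x2 = 0.1, 0.15
xs = x3(x1, x2)

# first kind: coefficient of dx1 in  dx1/y1 + dx2/y2 = dx3/y3
lhs_F = 1 / y(x1)
rhs_F = d_dx1(x3, x1, x2) / y(xs)

# second kind: coefficient of dx1 in
# (1 - m x1^2)/y1 dx1 + (1 - m x2^2)/y2 dx2 = (1 - m x3^2)/y3 dx3 + dg
lhs_E = (1 - m * x1**2) / y(x1)
rhs_E = (1 - m * xs**2) / y(xs) * d_dx1(x3, x1, x2) + d_dx1(g, x1, x2)

print(lhs_F - rhs_F, lhs_E - rhs_E)  # both within finite-difference error
```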
Taking these limits can
be justified in a purely algebraic way, but in the computer era
a direct calculation is simpler.

\section{Liouville-Ostrowski theorem}

Liouville-Ostrowski theorem:
\begin{theorem}
If
$F$ is a differential field,
$f\in F$ and $f$ has a primitive in an elementary extension $K$
of $F$, then there exist an extension $\bar F$ of $F$ by algebraic
constants, functions
$v_i \in \bar F$ and constants $c_1, \dots, c_l \in \bar F$
such that
$$
f = D(v_0) + \sum_{i=1}^l c_i\frac{D(v_i)}{v_i} =
D(v_0) + \sum_{i=1}^l c_iD(\log(v_i)).
$$
\end{theorem}
For a modern proof see \cite{Ros:L} (and \cite{Risch:Prob}
for the stronger result about constants).

The Liouville-Ostrowski theorem says that we can find all parts of the
integral of $f$ already in $F$ extended by algebraic constants. This is
a crucial property for symbolic integration algorithms.

Using his theorem Liouville proved that $\int \frac{\exp(x)}{x}dx$,
$\int \exp(x^2)dx$ and elliptic integrals of the first and
second kind are not elementary.

\subsection{Generalization}
\begin{theorem}\label{liu-ab}
If
$F$ is a differential field, $f\in F$ and $f$ has a primitive
in an elliptic-Lambert extension $K$ of $F$, then
there exist an extension $\bar F$ of $F$ by algebraic constants,
functions
$v_i \in \bar F$ and constants $c_1, \dots, c_l \in \bar F$, such that
$$
f = D(v_0) + \sum_{i=1}^l c_i\phi(Dv_i, v_i)
$$
where $\phi(Dv_i, v_i)$ is of one of the forms below:
\begin{itemize}
\item the derivative of a logarithm, that is,
$\phi(Dv_i, v_i) = \frac{D(v_i)}{v_i}$
\item $\phi(Dv_i, v_i) = \frac{Dv_i}{q_i}$
\item $\phi(Dv_i, v_i) = \frac{v_iDv_i}{q_i}$
\item $\phi(Dv_i, v_i) = \frac{Dv_i}{(v_i - c_i)q_i}$
\end{itemize}
where $q_i^2 = v_i^3 - a_iv_i - b_i$, $q_i \in \bar F$, and $a_i, b_i, c_i$
are constants in $\bar F$.
\end{theorem}

In other words, we can find all ingredients of the integral already in $F$
extended by algebraic
constants.

Remark: Abel proved an equivalent result in the case when $F$ is
algebraic over $C(x)$ (where $C$ is the constant field) and
$K$ is algebraic over $F$.

Proof:
The proof is by induction over the tower. Namely, let $L$ be an
elliptic-Lambert extension in which the integral exists.
We can write
$$
\bar F = F_0 \subset F_1 \subset \dots \subset F_n = L
$$
where each $F_j \subset F_{j+1}$ is an extension
of transcendence degree $\leq 1$ of a form given earlier.
By assumption, in $L$ we can write the integral in the form
given in the theorem. So it remains to prove that
given an expression as above with all parts in $F_{j+1}$
we can transform it into an expression with all parts
in $F_j$ extended by constants. We will do this in a few steps.

Step 1. We will prove that
$$
X\frac{Dv}{v} = D\frac{Xv}{v}
$$
where $X$ is the derivation given earlier commuting with $D$.
Also
$$
X\frac{Dp}{(p - c)q} = D\frac{Xp}{(p - c)q}
$$
and similarly for the remaining $\phi(Dv_i, v_i)$ terms.

Namely, all our $\phi(Dv_i, v_i)$ are of the form
$(Dv_i)\psi(v_i)$ where $\psi$ is an algebraic function.
Since $[X, D] = 0$, the equality
$$
X\phi(Dp, p) = D\phi(Xp, p)
$$
follows from Lemma \ref{der-comm}.

Step 2. Now we compute
$$
0 = Xf = XD(v_0) + \sum_{i=1}^l c_iX\phi(Dv_i, v_i)
$$
$$
= DXv_0 + \sum_{i=1}^l c_iD\phi(Xv_i, v_i)
$$
$$
= D(Xv_0 + \sum_{i=1}^l c_i\phi(Xv_i, v_i)).
$$
So
$$
c = Xv_0 + \sum_{i=1}^l c_i\phi(Xv_i, v_i)
$$
is a constant.

Step 3. 
Now we use the chain rule:
$$
D = D_{F_j} + (D\theta)\partial_\theta.
$$
If $\theta$ is a primitive, then the above can be written as
$$
D = D_{F_j} + (D\theta)X.
$$

Now
$$
D(v_0) + \sum_{i=1}^l c_i\phi(Dv_i, v_i) =
D_{F_j}(v_0) + \sum_{i=1}^l c_i\phi(D_{F_j}v_i, v_i) +
$$
$$
(D\theta)(Xv_0 + \sum_{i=1}^l c_i\phi(Xv_i, v_i))
$$
$$
= D_{F_j}(v_0) + \sum_{i=1}^l c_i\phi(D_{F_j}v_i, v_i) + cD\theta.
$$
But $D\theta = \phi(Dp, p)$ is in the form required by the theorem, so
we can add it as an additional term in the sum.
$D_{F_j}(\theta) = 0$, so we have an expression of the required form in
$F_j$ extended by constants.

Step 3'. In the Lambert W case we have $D\theta = \frac{D(v)}{v}X\theta$,
so $D = D_{F_j} + \frac{D(v)}{v}X$ and the $cD\theta$ term can be replaced
by the logarithmic term $c\frac{D(v)}{v}$.

Step 3''. In the other cases (extension by an exponential or an
elliptic function) $D\theta = D(v)X\theta$,
and proceeding as before the term $cD(v)$ is a derivative, so we
absorb it by adding $cv$ to $v_0$.

It remains to consider algebraic extensions. We do this using
methods of Galois theory (or, more precisely, the method of Abel).
We use the trace and norm maps. 
For an algebraic extension
$F \subset E$ we define
$$
{\rm Tr}(y) = \sum_{k=1}^{m}\iota_k(y),
$$
$$
{\rm Norm}(y) = \prod_{k=1}^{m}\iota_k(y)
$$
where $\iota_k$ runs over all embeddings of $E$ over $F$
into an algebraic closure of $F$.

We have
$$
\frac{D{\rm Norm}(y)}{{\rm Norm}(y)} = {\rm Tr}\left(\frac{Dy}{y}\right).
$$
Namely, since the derivation extends uniquely onto an algebraic extension,
we have
$D\iota_k(y) = \iota_k(Dy)$.
By the logarithmic derivative formula we have
$$
\frac{D{\rm Norm}(y)}{{\rm Norm}(y)} = \sum_k \frac{D\iota_k(y)}{\iota_k(y)}
= \sum_k \frac{\iota_k(Dy)}{\iota_k(y)} = \sum_k \iota_k\left(\frac{D(y)}{y}\right)
= {\rm Tr}\left(\frac{Dy}{y}\right).
$$

From the Abel formula (Lemma \ref{el-add})
$$
{\rm Tr}(D\Pi(p)) = D\Pi(\tilde p) + Df
$$
where $\tilde p$ is the sum of the images on the curve and $f$ is a sum of
logarithms. Namely, put $p_k = \iota_k(p)$. We have
$$
D\Pi(p_k) = D\Pi(\iota_k(p)) = \iota_k(D\Pi(p)),
$$
so applying Lemma \ref{el-add} we get
$$
{\rm Tr}(D\Pi(p)) = \sum_k \iota_k(D\Pi(p)) = \sum_k D\Pi(\iota_k(p))
$$
$$
= D\Pi(\tilde p) + Df.
$$
Note that Lemma \ref{el-add} is written in terms of the
derivative $D(f) = f'$, but it is really an equality of differential forms,
so one can substitute an arbitrary derivation and the equality
remains valid.

By Galois (Abel) theory $\tilde p$ is in $F$, since it is invariant
under all embeddings.
A similar result holds for elliptic integrals of the first and
second kind. Passing between the Legendre and Weierstrass theories
we get a similar result for integrals in Weierstrass form.

Now, when $F_j \subset F_{j+1}$ is an algebraic extension
and in $F_{j+1}$ we have
$$
f = D(v_0) + \sum_{i=1}^l c_i\phi(Dv_i, v_i)
$$
and $f \in F_j$, then applying ${\rm Tr}$ to both sides (and dividing by the
degree $m$, since ${\rm Tr}(f) = mf$) we get a similar
formula with terms in $F_j$ extended by constants, which
ends the proof of the algebraic case.

Our proof can introduce transcendental constants. 
We should
prove that it is enough to use algebraic constants. This
can be done in various ways, for example using Hilbert's
Nullstellensatz (as in \cite{Risch:Prob}) or adapting a model-theoretic
proof of Hilbert's theorem.
We skip the details.
\hfill$\square$

\section{Further results and remarks}

We can strengthen our main theorem by allowing a more complicated $K$.
Namely, we can allow towers containing Lie closed extensions
such that the Lie algebra spanned by the $X_k$ is spanned by commutators.
Put
$$w_{X_k} = X_kv_0 + \sum_{i=1}^l c_i\phi(X_kv_i, v_i).$$
Proceeding as in Step 2 of the proof of the Theorem we get
$$
X_kw_{X_j} - X_jw_{X_k} =
[X_k, X_j]v_0 + \sum_{i=1}^l c_i\phi([X_k, X_j]v_i, v_i).
$$
From Step 2 we know that each $w_{X_k}$ is a constant, so
$$
X_kw_{X_j} - X_jw_{X_k} = 0,
$$
hence
$$
[X_k, X_j]v_0 + \sum_{i=1}^l c_i\phi([X_k, X_j]v_i, v_i) = 0.
$$
We assume that the Lie algebra spanned by the $X_k$ is spanned by
commutators, so also
$$w_{X_k} = 0.$$
Having this we can proceed as in Step 3, using the formula
$$
D = D_{F_j} + \sum a_kX_k
$$
where the $a_k \in K$ are such that $D\theta_i = \sum a_k X_k\theta_i$
and $\{\theta_i\}$ is a transcendence basis of $F_{j+1}$ over $F_j$.
The terms involving $X_k$ vanish, so
$$
f = Dv_0 + \sum_{i=1}^l c_i\phi(Dv_i, v_i) =
D_{F_j}(v_0) + \sum_{i=1}^l c_i\phi(D_{F_j}v_i, v_i).
$$

It is well known (see for example \cite{Nish:Lie}) that
Picard-Vessiot extensions are Lie closed and that a semisimple
Lie algebra is generated by commutators. Many classical
special functions are solutions of linear ordinary differential
equations, so they are elements of Picard-Vessiot extensions.
Generically the corresponding Lie algebra is semisimple. Let
us note that there are a lot of integrals which can be
expressed in terms of hypergeometric functions but not in
terms of elementary functions. 
Our result proves that
all those hypergeometric functions must correspond to
degenerate cases of the hypergeometric equation.

Theorem \ref{liu-ab} is about functions integrable
in terms of elliptic integrals. However, if $f$ is integrable
in some extension $L$ of $F$ which is the last term of a tower
$$
F = F_0 \subset F_1 \subset \dots \subset F_n = L
$$
where each $F_i \subset F_{i+1}$ is either elementary,
an extension by Lambert W,
an extension by an elliptic function, or a Picard-Vessiot extension
with semisimple Lie algebra, then $f$ has an elementary integral.
Namely, in our proof elliptic integrals appeared only because
an extension by an elliptic integral was part of the tower.

The result about elliptic integrals of the first and second kind can
be proved via the main theorem in \cite{BK}; however, to
handle elliptic integrals of the third kind we need the Abel
formula.
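As an illustration of the last point, the differential version of the Abel formula for $\Pi$ (from the section on the Abel formula) can be verified numerically. The following Python sketch checks the coefficient of $dx_1$ at a sample point; the sn-type addition law used for the first coordinate of the sum is our own assumption added for the illustration:

```python
import math

m = 1/3    # curve parameter
a = 1/2    # pole parameter, n = 1/a^2
h = 1e-6   # step for central differences

def y(x):
    return math.sqrt((1 - x**2) * (1 - m * x**2))

def x3(x1, x2):
    # sn-type addition law for the first coordinate of the sum (our assumption)
    return (x1 * y(x2) + x2 * y(x1)) / (1 - m * x1**2 * x2**2)

def f(x1, x2):
    # argument of the logarithm in Abel's formula
    delta = math.sqrt((1 - a**2) * (1 - m * a**2))
    a0 = (x2**3 * y(x1) - x1**3 * y(x2)) / (x1 * y(x2) - x2 * y(x1))
    s = a0 * a + a**3
    p = x1 * x2 * x3(x1, x2) * delta
    return (s + p) / (s - p)

def d_dx1(func, x1, x2):
    # numerical partial derivative with respect to x1
    return (func(x1 + h, x2) - func(x1 - h, x2)) / (2 * h)

x1, x2 = 0.1, 0.15
xs = x3(x1, x2)
delta = math.sqrt((1 - a**2) * (1 - m * a**2))

# coefficient of dx1 on both sides of the differential version
lhs = 1 / ((1 - x1**2 / a**2) * y(x1))
rhs = d_dx1(x3, x1, x2) / ((1 - xs**2 / a**2) * y(xs)) \
      - (a / (2 * delta)) * d_dx1(f, x1, x2) / f(x1, x2)
print(lhs - rhs)  # within finite-difference error
```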