diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzlpqb" "b/data_all_eng_slimpj/shuffled/split2/finalzzlpqb" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzlpqb" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\label{sec:intro}\nGame theoretic models of interdependent security have been used to study security of complex information and physical systems for more than a decade~\\cite{LFB14}. One of the key findings is that the externalities resulting from security decisions made by selfish agents lead to, potentially significant, inefficiencies. This motivates research on methods for improving information security, such as insurance~\\cite{BS10} and network design~\\cite{CDG14,CDG17}. We study the problem of network design for interdependent security in the setup where a strategic adversary collaborates with some nodes in order to disrupt the network.\n\n\\subsection*{The motivation}\nOur main motivation is computer network security in face of contagious attack by a strategic adversary. Examples of contagious attacks are stealth worms and viruses, that gradually spread over the network, infecting subsequent unprotected nodes. Such attacks are considered among the main threats to cyber security~\\cite{SPW02}. Moreover, the study of the data from actual attacks demonstrates that the attackers spend time and resources to study the networks and choose the best place to attack~\\cite{SPW02}. Direct and indirect infection can be prevented by taking security measures that are costly and effective (i.e., provide sufficiently high safety to be considered perfect). Examples include using the right equipment (such as dedicated high quality routers), software (antivirus software, firewall), and following safety practices. All of these measures are costly. In particular, having antivirus software is cheap but using it can be considered to be costly, safety practises may require staff training, staying up to date with possible threats, creating backups, updating software, hiring specialized, well-paid staff. The security decisions are made individually by selfish nodes. Each node derives benefits from the nodes it is connected to (directly or indirectly) in the network. An example is the Metcalfe's law (attributed to Robert Metcalfe~\\cite{SV00}, a co-inventor of Ethernet) stating that each node's benefits from the network are equal to the number of nodes it can reach in the network, and the value of a connected network is equal to the square of the number of its nodes. An additional threat faced by the nodes in the network is the existence of malicious nodes whose objectives are aligned with those of the adversary: they aim to disrupt the network~\\cite{MSW06,MSW09}.\n\n\\subsection*{Contribution}\nWe study the effectiveness of network design for improving system security with malicious (or byzantine) players and strategic adversary.\nTo this end we propose and study a three stage game played by three classes of players: the designer, the adversary, and the nodes. Some of the nodes are malicious and cooperate with the adversary. The identity of the nodes is their private information, known to them and to the adversary only. The designer moves first, choosing the network of links between nodes. Then, costly protection is assigned to the nodes. 
We consider two methods of protection assignment: the centralized one, where the designer chooses the nodes to receive protection, and the decentralized one, where each node decides individually and independently whether to protect or not. Lastly, the adversary observes the protected network and chooses a single node to infect. The protection is perfect and each non-byzantine node can be infected only if she is unprotected. The byzantine nodes only pretend to use the protection and can be infected regardless of whether they are protected or not. After the initial node is infected, the infection spreads to all the nodes reachable from the origin of infection via a path containing unprotected or byzantine nodes.\nWe show that if the protection decisions are centralized, so that the designer chooses both the network and the protection assignment, then either choosing a disconnected network with unprotected components of equal size or a generalized star with a protected core is optimal. When protection decisions are decentralized, then, for a sufficiently large number of nodes, the designer can resort to choosing the generalized star as well. In the case of sufficiently well-behaved returns from the network (including, for example, Metcalfe's law), the protection chosen by the nodes in equilibrium guarantees outcomes that are asymptotically close to the optimum. Hence, in such cases, the inefficiencies due to defense decentralization can be fully mitigated even in the presence of byzantine nodes.\n\n\\subsection*{Related work}\nOur work is related to two overlapping strands of literature: interdependent security games~\\cite{LFB14} and multidefender security games~\\cite{SVL14,LV15,LSV17}.\nEarly research on interdependent security games assumed that the players only care about their own survival and that there are no benefits from being connected \\cite{KH03,V04,ACY06,LB08a,LB08b,CCO12,AMO16}. In particular, the authors of \\cite{ACY06} study a setting in which the network is fixed beforehand, nodes only care about their own survival, the attack is random, protection is perfect, and contagion is perfect: infection spreads between unprotected nodes with probability $1$. The focus is on computing Nash equilibria of the game and estimating the inefficiencies caused by defense decentralization. They show that finding one Nash equilibrium can be done in polynomial time, but finding the least or most expensive one is NP-hard. They also point out the high inefficiency of decentralized protection by showing that the price of anarchy is unbounded. In~\\cite{LB08a,LB08b} techniques based on local mean field analysis are used to study the problem of incentives and externalities in network security\non random networks. In a more recent publication~\\cite{AMO16}, individual investments in protection are considered. The focus is on the strategic structure of the security decisions across individuals and how the network shapes the choices under random versus targeted attacks. The authors show that both under- and overinvestment may be present when protection decisions are decentralized.\nSlightly different, but related, models are considered in~\\cite{GWA10,GWA11,GMW12,LSB12a,LSB12b}. In these models\nthe defender chooses a spanning tree of a network, while the attacker chooses a link to remove. The defender and the adversary move simultaneously. The attack is successful if the chosen link belongs to the chosen spanning tree. 
Polynomial time algorithms for computing optimal attack and defense strategies are provided for several variants of this game. For a comprehensive review of interdependent security games see an excellent survey~\\cite{LFB14}.\n\nMultidefender security games are models of security where two or more defenders make security decisions with regard to nodes, connected in a network, and prior to an attack by a strategic adversary. Each of the defenders is responsible for his own subset of nodes and the responsibilities of different defenders are non-overlapping. The underlying network creates interdependencies between the defenders' objectives, which result in externalities, like in the interdependent security games. The distinctive feature of multidefender security models is the adopted solution concept: the average case Stackelberg equilibrium. The model is two stage. In the first stage the defenders commit to mixed strategies assigning different types of security configurations across the nodes. In the second stage the adversary observes the network and chooses an attack. The research focuses on equilibrium computation and quantification of inefficiencies due to distributed protection decisions.\n\nPapers most related to our work are~\\cite{MSW09,CDG14,CDG17,GJKKM16}. The authors of~\\cite{MSW06} introduce malicious nodes to the model of~\\cite{ACY06}. The key finding in that paper is that the presence of malicious nodes creates a ``fear factor'' that reduces the problem of underprotection due to defense decentralization. Inspired by~\\cite{MSW06,MSW09}, we also consider malicious nodes in the context of network defense. We provide a formal model of the game with such nodes as a game with incomplete information. Our contribution, in comparison to~\\cite{MSW09}, lies in placing the players in a richer setup, where nodes care about their connectivity as well as their survival, and where both underprotection (i.e., insufficiently many nodes protect as compared to an optimum) and overprotection (excessively many nodes protect as compared to an optimum) problems are present. This leads to a much more complicated incentives structure. In particular, the presence of malicious nodes may lead to underprotection, as nodes may be unable to secure sufficient returns from choosing protection on their own.\n\nWorks~\\cite{CDG14,CDG17} consider the problem of network design and defense prior to the attack by a strategic adversary. In a setting where the nodes care about both their connectivity and their survival, the authors study the inefficiencies caused by defense decentralization and how they can be mitigated by network design. The authors show that both underprotection as well as overprotection may appear, depending on the costs of protection and network topology. Both inefficiencies can be mitigated by network design. In particular, the underprotection problem can be fully mitigated by designing a network that creates a cascade of incentives to protect. Our work builds on~\\cite{CDG14,CDG17} by introducing malicious nodes to the model. We show how the designer can address the problem of uncertainty about the types of nodes and, at the same time, mitigate the inefficiencies due to defense decentralization. \nLastly, in~\\cite{GJKKM16}, a model of decentralized network formation and defense prior to the attack by adversaries of different profiles is considered. 
The authors show, in particular, that despite the decentralized protocol of network formation, the inefficiencies caused by defense decentralization are relatively low.\n\nThe rest of the paper is structured as follows. In \\cref{sec:model} we define the model of the game, which we then analyze in \\cref{sec:analysis}. In \\cref{sec:extension} we discuss possible modifications of our model. We provide concluding remarks in \\cref{sec:concl}. \\Cref{ap:centralized} contains the proofs of the most technical results.\n\n\\section{The model}\n\\label{sec:model}\nThere are $(n+2)$ players: the designer ($\\mathbf{\\sf{D}}$), the nodes ($V$), and the adversary~($\\mathbf{\\sf{A}}$). In addition, each of the nodes is of one of two types: a genuine node (type~$1$) or a byzantine node (type~$0$). We assume that there are at least $n = 3$ nodes and that there is a fixed amount $n_B \\ge 1$ of byzantine nodes. The byzantine nodes cooperate with the adversary and their identity is known to $\\mathbf{\\sf{A}}$. All the nodes know their own type only. On the other hand, the adversary has complete information about the game. We suppose that he infects a subset of $n_A \\ge 1$ nodes. A \\emph{network} over a set of nodes $V$ is a pair $G = ( V,E )$, where $E \\subseteq \\{ij \\colon i,j \\in V\\}$ is the set of undirected links of $G$. Given a set of nodes $V$, $\\mathcal{G}(V)$ denotes the set of all networks over $V$ and $\\mathcal{G} = \\bigcup_{U\n\\subseteq V} \\mathcal{G}(U)$ is the set of all networks that can be formed over $V$ or any of its subsets. The game proceeds in four rounds (the numbers $n \\ge 3, n_B \\ge 1, n_A \\ge 1$ are fixed before the game):\n\\begin{enumerate}\n\\item The types of the nodes are realized.\n\\item $\\mathbf{\\sf{D}}$ chooses a network $G \\in \\mathcal{G}(V)$, where $\\mathcal{G}(V)$ is the set of all undirected networks over~$V$.\n\\item Nodes from $V$ observe $G$ and choose, simultaneously and independently, whether to protect (what we denote by $1$) or not (denoted by $0$). This determines the set of protected nodes\n $\\mathit{\\Delta}$. The protection of the byzantine nodes is fake and, when attacked, such node gets infected and transmits the infection to all her neighbors.\n\\item $\\mathbf{\\sf{A}}$ observes the protected network $(G,\\mathit{\\Delta})$ and chooses a subset $I \\subset V$ consisting of $\\abs{I} = n_A \\ge 1$ nodes to infect. The infection spreads and eliminates all unprotected or byzantine nodes reachable from $I$ in $G$ via a path that does not contain a genuine protected node from $\\mathit{\\Delta}$.\nThis leads to the residual network obtained from $G$ by removing all the infected nodes.\n\\end{enumerate}\n\nPayoffs to the players are based on the residual network and costs of defense.\nThe returns from a network are measured by a \\emph{network value function}\n$\\mathit{\\Phi} \\colon \\bigcup_{U \\subseteq V} \\mathcal{G}(U) \\rightarrow \\mathbb{R}$\nthat assigns a numerical value to each network that can be formed over a subset $U$ of nodes from $V$.\n\nA \\emph{path in $G$ between nodes} $i,j \\in V$ is a sequence of nodes $i_0,\\ldots,i_m \\in V$\nsuch that $i = i_0$, $j = i_m$, $m \\geq 1$, and $i_{k-1}i_k \\in E$ for all $k = 1,\\ldots,m$. Node $j$ is \\emph{reachable} from node $i$ in $G$ if $i = j$ or there is a path between them in $G$. A \\emph{component} of a network $G$ is a maximal set of nodes $C\\subseteq V$ such that for all $i,j \\in C$, $i \\neq j$, $i$ and $j$ are reachable in $G$. 
The set of components of $G$ is denoted by $\\mathcal{C}(G)$.\nGiven a network $G$ and a node $i \\in V$, $C_i(G)$ denotes the component $C \\in \\mathcal{C}(G)$ such that $i \\in C$. Network $G$ is \\emph{connected} if $|\\mathcal{C}(G)| = 1$.\n\nWe consider the following family of network value functions:\n\\begin{displaymath}\n\\mathit{\\Phi}(G) = \\sum_{C\\in \\mathcal{C}(G)} f(|C|) \\, ,\n\\end{displaymath}\nwhere the function $f \\colon \\mathbb{R}_{\\geq 0} \\rightarrow \\mathbb{R}$ is increasing, strictly convex, satisfies $f(0) = 0$, and, for all $x \\ge 1$, verifies the inequalities \n\\begin{equation}\\label{eq:move_half}\n\\begin{aligned}\nf(3x) &\\geq 2f(2x) \\, , \\\\ \nf(3x + 2) &\\ge f(2x+2) + f(2x+1) \\, .\n\\end{aligned}\n\\end{equation} \nIn other words, the value of a connected network is an increasing and strictly convex function of its size. The value of a disconnected network is equal to the sum of values of its components. These assumptions reflect the idea that each node derives additional utility from every node she can reach in the network. In the last property we assume that these returns are sufficiently large: the returns from increasing the size of a component by $50\\%$ are higher than the returns from adding an additional, separate, component of the same size to the network.\nSuch form of network value function is in line with Metcalfe's law, where the value of a connected network over $x$ nodes is given by $f(x) = x^2$, as well as with Reed's law, where the value of a connected network is of exponential order with respect to the number of nodes (e.g., $f(x) = 2^x-1$).\n\nBefore defining payoff to a node from a given network, defense, and attack, we formally define the residual network. Given a network $G = (V,E)$ and a set of nodes $Z \\subseteq V$, let $G-Z$ denote the network obtained from $G$ by removing the nodes from $Z$ and their connections from $G$. Thus $G-Z = (V\\setminus Z, E[V\\setminus Z])$, where $E[V\\setminus Z] = \\{ij \\in E \\colon i,j \\in V\\setminus Z\\}$.\nGiven defense $\\mathit{\\Delta}$ and the set of byzantine nodes $B$, the graph $A(G\\mid \\mathit{\\Delta},B) = G - \\mathit{\\Delta}\\setminus B$ is called the \\emph{attack graph}. By infecting a node $i \\in V$, the adversary eliminates the component of $i$ in the attack graph, $C_i(A(G\\mid \\mathit{\\Delta},B))$.\\footnote{We define $C_i(A(G\\mid \\mathit{\\Delta},B)) = \\varnothing$ for every $i \\in \\mathit{\\Delta}\\setminus B$.} Hence, if the adversary infects a subset $I \\subset V$ of nodes, then the \\emph{residual network} (i.e., the network that remains) after such an attack is $R(G \\mid \\mathit{\\Delta}, B, I) = G - \\bigcup_{i \\in I}C_i(A(G \\mid \\mathit{\\Delta},B))$. \n\nNodes' information about whether they are genuine or byzantine is private. Similarly, the adversary's information about the identity of the byzantine nodes is private. As usual in games with incomplete information, private information of the players is represented by their \\emph{types}. The type of a node $i \\in V$ is represented by $\\theta_i \\in \\{0,1\\}$ ($\\theta_i = 1$ means that $i$ is genuine and $\\theta_i = 0$ means that $i$ is byzantine) and the type of the adversary is represented by $\\theta_{\\mathbf{\\sf{A}}} \\in \\binom{V}{n_B}$. (If $X$ is a finite set, then we denote by $\\binom{X}{t}$ the set of subsets of $X$ of cardinality $t$.) A vector $\\bm{\\theta} = (\\theta_1,\\ldots,\\theta_n,\\theta_{\\mathbf{\\sf{A}}})$ of players' types is called a \\emph{type profile}. 
The type profiles must be consistent so that the byzantine nodes are really known to the adversary. The set of consistent type profiles is $\\Theta = \\{(\\theta_1,\\ldots,\\theta_n,\\theta_{\\mathbf{\\sf{A}}}) \\colon \\theta_{\\mathbf{\\sf{A}}} = \\{i \\in V \\colon \\theta_{i} = 0\\} , \\abs{\\theta_{\\mathbf{\\sf{A}}}} = n_B \\}$.\n\\begin{remark}\nWe point out that $B \\subset V$ is the set of byzantine nodes (i.e., the true state of the world) while $\\theta_{\\mathbf{\\sf{A}}}$ denotes the beliefs of the adversary. The consistency assumption implies that the beliefs of the adversary are correct and $\\theta_{\\mathbf{\\sf{A}}} = B$.\n\\end{remark}\n\nThe adversary aims to minimize the gross welfare (i.e., the sum of nodes' gross payoffs), which is equal to the value of the residual network. Given a network $G$, the set of protected nodes $\\mathit{\\Delta}$, and the type profile $\\bm{\\theta} \\in \\Theta$, the payoff to the adversary from infecting the set of nodes~$I$ is\n\\begin{align*}\nu^{\\mathbf{\\sf{A}}}(G,\\mathit{\\Delta},I \\mid \\bm{\\theta}) = -\\mathit{\\Phi}(R(G \\mid \\mathit{\\Delta}, B, I)) = -\\sum_{C \\in {\\mathcal C}(R(G \\mid \\mathit{\\Delta}, B, I))} f(|C|) \\, .\n\\end{align*}\n\nThe designer aims to maximize the value of the residual network minus the cost of defense. Notice that this cost includes the cost of defense of the byzantine nodes. Formally, the designer's payoff from network $G$ under defense $\\mathit{\\Delta}$, the set of infected nodes $I$, and the type profile $\\bm{\\theta}$ is equal to\n\\begin{align*}\nu^{\\mathbf{\\sf{D}}}(G,\\mathit{\\Delta},I \\mid \\bm{\\theta}) = \\mathit{\\Phi}(R(G\\mid \\mathit{\\Delta},I,B)) - |\\mathit{\\Delta}| c\n = \\left(\\sum_{C \\in \\mathcal{C}(R(G \\mid \\mathit{\\Delta},I,B))} f(|C|)\\right) - |\\mathit{\\Delta}| c \\, .\n\\end{align*}\n\nThe \\emph{gross payoff} to a genuine (i.e., not a byzantine) node $j \\in V$ in a network $G$ is equal to $f(|C_j(G)|)\/|C_j(G)|$. In other words, each genuine node gets the equal share of the value of her component. The net payoff of a node is equal to the gross payoff minus the cost of protection. A genuine node gets payoff $0$ when removed. Defense has cost $c\\in \\mathbb{R}_{>0}$.\nThe byzantine nodes have the same objectives as the adversary and their payoff is the same as that of $\\mathbf{\\sf{A}}$.\nFormally, a payoff to the node $j \\in V$ given a network $G$ with defended nodes $\\mathit{\\Delta}$, the set of infected nodes~$I$, and the type profile $\\bm{\\theta}\\in \\Theta$ is equal to\n \\begin{align*}\nu^j&(G,\\mathit{\\Delta},I \\mid \\bm{\\theta})=\\begin{cases}\n\t\t\t\t u^{\\mathbf{\\sf{A}}}(G,\\mathit{\\Delta},I \\mid \\bm{\\theta}), & \\textrm{if $\\theta_j = 0$}, \\\\\n \\frac{f(|C_j(R(G\\mid \\mathit{\\Delta},B,I))|)}{|C_j(R(G\\mid \\mathit{\\Delta},B,I))|}, & \\textrm{if $\\theta_j = 1$, $j \\notin \\mathit{\\Delta}$}, \\\\\n & \\textrm{and $j \\notin \\bigcup_{i \\in I}C_i(A(G\\mid \\mathit{\\Delta},B))$},\\\\\n \\frac{f(|C_j(R(G \\mid \\mathit{\\Delta},B,I))|)}{|C_j(R(G\\mid \\mathit{\\Delta},B,I))|} - c, & \\textrm{if $\\theta_j = 1$ and $j \\in \\mathit{\\Delta}$}, \\\\\n 0, & \\textrm{if $\\theta_j = 1$, $j \\notin \\mathit{\\Delta}$,} \\\\\n & \\textrm{and $j \\in \\bigcup_{i \\in I} C_i(A(G\\mid \\mathit{\\Delta}, B))$} \\, .\n \\end{cases}\n\\end{align*}\n\nThe adversary and the byzantine nodes make choices that maximize their utility. 
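\n\nTo make the mechanics above concrete, the following minimal Python sketch computes the components of the attack graph, the residual network, and the resulting value $\\mathit{\\Phi}$ for a given network $G$, defense $\\mathit{\\Delta}$, set of byzantine nodes $B$, and set of infected nodes $I$. It is only an illustration of the definitions: the function names are ours, and the default choice $f(x)=x^2$ (Metcalfe's law) is one admissible component value function.\n\\begin{verbatim}\ndef components(nodes, edges):\n    # connected components of G restricted to the given node set\n    nodes = set(nodes)\n    adj = {v: set() for v in nodes}\n    for i, j in edges:\n        if i in nodes and j in nodes:\n            adj[i].add(j); adj[j].add(i)\n    seen, comps = set(), []\n    for v in nodes:\n        if v in seen:\n            continue\n        stack, comp = [v], set()\n        while stack:\n            u = stack.pop()\n            if u not in comp:\n                comp.add(u); stack.extend(adj[u] - comp)\n        seen |= comp; comps.append(comp)\n    return comps\n\ndef residual_value(V, E, Delta, B, I, f=lambda x: x * x):\n    # attack graph A(G | Delta, B): remove genuinely protected nodes\n    attack_nodes = set(V) - (set(Delta) - set(B))\n    infected = set()\n    for comp in components(attack_nodes, E):\n        if comp & set(I):\n            infected |= comp        # the whole component is eliminated\n    residual = set(V) - infected    # node set of R(G | Delta, B, I)\n    return sum(f(len(c)) for c in components(residual, E))\n\\end{verbatim}\nIn this notation, the adversary's payoff defined above is the negative of \\texttt{residual\\_value}, and the designer's payoff additionally subtracts the defense cost $|\\mathit{\\Delta}|c$.\n\n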
The designer and the nodes have incomplete information about the game and we assume that they are pessimistic, making choices that maximize the worst possible type realization (cf.~\\cite{AB06}). Formally, the \\emph{pessimistic utility} of a genuine (i.e., of type $\\theta_j = 1$) node $j$ from network $G$, the set of protected nodes $\\Delta$, and the set of infected nodes $I$, is\n\\begin{equation*}\n\\hat{U}^j(G,\\mathit{\\Delta},I) = \\inf_{(\\bm{\\theta}_{-j},1)\\in \\Theta} u^j(G,\\mathit{\\Delta},I \\mid (\\bm{\\theta}_{-j},1)) \\, .\n\\end{equation*}\nSimilarly, the pessimistic utility of the designer from network $G$, the set of protected nodes $\\Delta$, and the set of infected nodes $I$, is\n\\begin{equation*}\n\\hat{U}^{\\mathbf{\\sf{D}}}(G,\\mathit{\\Delta},I) = \\inf_{\\bm{\\theta}\\in \\Theta} u^{\\mathbf{\\sf{D}}}(G,\\mathit{\\Delta},I \\mid \\bm{\\theta}) \\, .\n\\end{equation*}\n\nTo summarize, the set of players is $P = V \\cup \\{\\mathbf{\\sf{D}},\\mathbf{\\sf{A}}\\}$. The set of strategies of player $\\mathbf{\\sf{D}}$ is $S^{\\mathbf{\\sf{D}}} = \\mathcal{G}(V)$.\nA strategy of each node $j$ is a function $\\delta_j \\colon \\mathcal{G}(V) \\times \\{0,1\\} \\rightarrow \\{0,1\\}$ that,\ngiven a network $G \\in \\mathcal{G}(V)$ and a node's type $\\theta_j\\in \\{0,1\\}$, provides the defense decision $\\delta_j(G, \\theta_j)$ of the node. The individual strategies of the nodes determine a function $\\mathit{\\Delta} \\colon \\mathcal{G}(V)\\times \\{0,1\\}^{V} \\rightarrow 2^V$ providing, given a network $G \\in \\mathcal{G}(V)$ and nodes' types profile $\\bm{\\theta}_{-\\mathbf{\\sf{A}}}\\in \\{0,1\\}^V$, the set of defended nodes $\\mathit{\\Delta}(G \\mid \\bm{\\theta}_{-\\mathbf{\\sf{A}}}) = \\{j\\in V \\colon \\delta_j(G,\\theta_j) = 1\\}$.\nThe set of strategies of each node $j \\in V$ is $S^j = 2^{\\mathcal{G}(V)\\times \\{0,1\\}}$. \nA strategy of player $\\mathbf{\\sf{A}}$ is a function $x \\colon \\mathcal{G}(V) \\times 2^V \\times \\binom{V}{n_B} \\rightarrow \\binom{V}{n_A}$ that, given a network $G \\in \\mathcal{G}(V)$, the set of protected nodes $\\mathit{\\Delta} \\subseteq V$, and adversary's type $\\theta_{\\mathbf{\\sf{A}}}\\in \\binom{V}{n_B}$, provides the set of nodes to infect $x(G,\\mathit{\\Delta},\\theta_{\\mathbf{\\sf{A}}})$. The set of strategies of player $\\mathbf{\\sf{A}}$ is $S^{\\mathbf{\\sf{A}}} = \\binom{V}{n_A}^{\\mathcal{G}(V) \\times 2^V \\times \\binom{V}{n_B}}$.\n\nAbusing the notation slightly, we use the same notation for utilities of the players from the strategy profiles in the game. 
Thus, given a strategy profile $(G,\\Delta,x)$ and a type profile $\\bm{\\theta}$, the payoff to player $j \\in V \\cup \\{\\mathbf{\\sf{D}},\\mathbf{\\sf{A}}\\}$ is $u^{j}(G,\\mathit{\\Delta},x \\mid \\bm{\\theta}) = u^{j}(G,\\mathit{\\Delta}(G),x(G,\\mathit{\\Delta}(G)) \\mid \\bm{\\theta})$, the pessimistic payoff to player $j \\in V \\setminus B$ is\n\\begin{equation}\\label{eq:pess_payoff_node}\n\\hat{U}^j(G,\\mathit{\\Delta},x) = \\inf_{(\\bm{\\theta}_{-j},1)\\in \\Theta} u^j(G,\\mathit{\\Delta}(G),x(G, \\mathit{\\Delta}(G)) \\mid (\\bm{\\theta}_{-j},1)) \\, ,\n\\end{equation}\nand the pessimistic payoff to the designer is given by \n\\begin{equation}\\label{eq:pess_payoff_des}\n\\hat{U}^{\\mathbf{\\sf{D}}}(G,\\mathit{\\Delta},x) = \\inf_{\\bm{\\theta}\\in \\Theta} u^{\\mathbf{\\sf{D}}}(G,\\mathit{\\Delta}(G),x(G, \\mathit{\\Delta}(G)) \\mid \\bm{\\theta}) \\, .\n\\end{equation}\n\nBy convention, we say that the pessimistic payoff of the byzantine node is the same as her payoff. We are interested in subgame perfect mixed strategy equilibria of the game with the preferences of the players defined by the pessimistic payoffs. We call them the equilibria, for short. We make the usual assumption that when evaluating a mixed strategy profile, the players \nconsider an expected value of their payoffs from the pure strategies. In the case of the designer and the genuine nodes, these are expected pessimistic payoffs.\n\nThroughout the paper we will also refer to the subgames ensuing after a network $G$ is chosen. We will denote such subgames by $\\mathit{\\Gamma}(G)$ and call the \\emph{network subgames}. We will abuse the notation by using the same letters to denote the strategies in $\\mathit{\\Gamma}(G)$ and in $\\mathit{\\Gamma}$. The set of strategies of each node $i\\in V$ in game $\\mathit{\\Gamma}(G)$ is $\\{0,1\\}^{\\{0,1\\}}$. \nGiven the type profile $\\bm{\\theta}_{-\\mathbf{\\sf{A}}}\\in \\{0,1\\}^V$, the individual strategies of the nodes determine a function $\\mathit{\\Delta} \\colon \\{0,1\\}^{V} \\rightarrow 2^V$ that provides the set of defended nodes $\\mathit{\\Delta}(\\bm{\\theta}_{-\\mathbf{\\sf{A}}}) = \\{j\\in V \\colon \\delta_j(\\theta_j) = 1\\}$. The set of strategies of the adversary in $\\mathit{\\Gamma}(G)$ is $\\binom{V}{n_A}^{2^V\\times \\binom{V}{n_B}}$.\n\nAll the key notations are summarized in \\cref{tab:notation}.\n\n\\begin{table}\n\\small\n\\begin{tabular}{|c|c|}\n\\hline\n$n$&\\parbox[][20pt][c]{4cm}{\\centering number of nodes} \\\\\n\\hline \n$n_B$&\\parbox[][30pt][c]{4cm}{\\centering number of byzantine nodes} \\\\\n\\hline\n$n_A$&\\parbox[][35pt][c]{4cm}{\\centering number of nodes infected by the adversary} \\\\\n\\hline\n$f$&\\parbox[][20pt][c]{4cm}{\\centering component value function} \\\\\n\\hline\n$G$&\\parbox[][20pt][c]{4cm}{\\centering network} \\\\\n\\hline\n$\\Delta$&\\parbox[][20pt][c]{4cm}{\\centering set of protected nodes} \\\\\n\\hline\n$u^{\\mathbf{\\sf{D}}}, u^{\\mathbf{\\sf{A}}}, u^{j}$&\\parbox[][30pt][c]{4cm}{\\centering payoff to the designer, the adversary, and a node} \\\\\n\\hline\n$\\hat{U}^{\\mathbf{\\sf{D}}}, \\hat{U}^{j}$&\\parbox[][30pt][c]{4cm}{\\centering pessimistic payoff to the designer and a node} \\\\\n\\hline\n\\end{tabular}\n\\vspace*{0.5cm}\n\\caption{Summary of the notation.}\\label{tab:notation}\n\\end{table}\n\n\\subsection{Remarks on the model}\nWe make a number of assumptions that, although common for interdependent security games, are worth commenting on. Firstly, we assume that protection is perfect. 
This assumption is reasonable when available means of protection are considered sufficiently reliable and, in particular, deter the adversary towards the unprotected nodes. Arguably, this is the case for the protection means used in cybersecurity. Secondly, we assume that the designer and genuine nodes are pessimistic and maximize their worst-case payoff. Such an approach is common in computer science and is in line with trying to provide the worst-case guarantees on system performance. One can also take the probabilistic approach (by supposing that the distribution of the byzantine nodes is given by a random variable). In \\cref{sec:extension} we discuss how our results carry over to such model.\n\n\\section{The analysis}\n\\label{sec:analysis} \n\nWe start the analysis by characterizing the centralized defense model, where the designer chooses both the network and the defense assignment to the nodes. After that the adversary observes the protected network and nodes' types and chooses the nodes to infect. We focus on the first nontrivial case $n_B = n_A = 1$. In this case, we are able to characterize networks that are optimal to the designer. The topology of these networks is based on the generalized $k$-stars. We then turn to the decentralized defense and study the cost of decentralization. It turns out that the topology of $k$-star gives asymptotically low cost of decentralization not only for the simple case studied earlier but for all possible values of parameters $n_B$ and $n_A$. This is enough to prove our main result, \\cref{th:poa}, providing bounds on the price of anarchy.\n\n\\subsection{Centralized defense}\\label{sec:centralized}\nFix the parameters $n_B, n_A$ and suppose that the designer chooses both the network and the protection assignment. This leads to a two stage game where, in the first round, the designer chooses a protected network $(G,\\Delta)$ and in the second round the adversary observes the protected network and nodes' types (recognizing the byzantine nodes) and chooses the nodes to attack. Payoffs to the designer and to the adversary are as described in \\cref{sec:model} and we are interested in subgame perfect mixed strategy equilibria of the game with pessimistic preferences of the designer.\nWe call them equilibria, for short. Notice that, since the decisions are made sequentially, there is always a pure strategy equilibrium of this game. In this section, we focus only on such equilibria. Furthermore, the equilibrium payoff to the designer is the same for all equilibria. We denote this payoff by $\\globdespayoff(n,c)$.\n\nIn the rest of this subsection we focus on the case $n_B = n_A = 1$. In this case, when the protection is chosen by the designer, two types of protected networks can be chosen in an equilibrium (depending on the value function and the cost of defense): a disconnected network with no defense or a generalized star with protected core and, possibly, one or two unprotected components. Before stating the result characterizing equilibrium defended networks and equilibrium payoffs to the designer, we need to define the key concept of a generalized star and some auxiliary quantities. We start with the definition of a generalized star. 
If $G = (V, E)$ is a network and $V' \\subset V$ is a subset of nodes, then we denote by $G[V']$ the subnetwork of $G$ induced by $V'$, i.e., the network $G[V'] = (V', \\{ij \\in E \\colon i,j \\in V' \\})$.\n\n\\begin{definition}[Generalized $k$-star]\nGiven a set of nodes $V$ and $k \\ge 1$, a \\emph{generalized $k$-star} over $V$ is a network $G = (V,E)$ such that the set of nodes $V$ can be partitioned into two sets, $C$ (the core) of size $|C| = k$ and $P$ (the periphery), in such a way that $G[C]$ is a clique, every node in $P$ is connected to exactly one node in $C$, and every node in $C$ is connected to $\\lfloor n\/k \\rfloor - 1$ or $\\lceil n\/k \\rceil - 1$ nodes in $P$.\n\\end{definition}\n\n\\begin{figure}[t]\n\\begin{center}\n\\begin{minipage}{0.45\\textwidth}\n \\centering\n \\includegraphics[scale=0.5]{n12k5.pdf}\n \\end{minipage}\\hfill \n\\end{center}\n\\caption{A generalized star with $12$ nodes and core of size $5$.}\\label{fig:star}\n\\end{figure}\n\nRoughly speaking, a generalized $k$-star is a core-periphery network with the core consisting of $k$ nodes and the periphery consisting of the remaining $n - k$ nodes. The core is a clique, each periphery node is connected to exactly one core node and they are distributed evenly across the core nodes. An example of a generalized star is depicted in \\cref{fig:star}.\n\nNow we turn to defining some auxiliary quantities. For any $n \\ge 3$ such that $n \\bmod 6 \\neq 3$ we define\n\\[\nw_{0}(n) = w_{1}(n) = f\\left(\\left\\lfloor \\frac{n}{2} \\right\\rfloor\\right) + f(1)\\mathbbm{1}_{\\{n \\bmod 2 = 1 \\}} \\, ,\n\\]\nand for every $n$ such that $n \\bmod 6 = 3$ we define\n\\begin{equation}\nw_{0}(n) = w_{1}(n) = \\max\\left( 2f\\left( \\frac{n}{3} \\right), f\\left(\\frac{n-1}{2} \\right) + f(1)\\right) \\, . \\label{eq:divisible_by_three}\n\\end{equation}\nGiven $n$ nodes, $w_{0}(n)$ is the maximal network value the designer can secure against a strategic adversary by choosing an unprotected network composed of three components of equal size or two components of equal size and possibly one disconnected node. This is also the maximal network value the designer can secure by choosing such a network with one protected node, because, in the worst case scenario, the protected node is byzantine and may be infected.\n\nFor every $k \\in \\{3, \\dots, n\\}$, let\n\\begin{equation*}\nw_k(n) = \\left\\{\\begin{array}{ll}\n f\\left(n-1-\\frac{n-1}{k}\\right) + f(1), & \\textrm{if $n \\bmod k = 1$} \\, , \\\\\n f\\left(n - \\lceil \\frac{n}{k} \\rceil \\right), & \\textrm{otherwise} \\, .\n \\end{array}\\right.\n\\end{equation*}\nGiven $n$ nodes and $k \\ge 3$, $w_k(n)$ is the network value that the designer can secure by choosing a generalized $k$-star, with one node disconnected in the case of $k$ dividing $n-1$, having all core nodes protected and all periphery nodes unprotected.\n\nWe also define the following quantities:\n\\begin{align}\n&A_{q} = \\min \\left( f(n - q), f\\left(\\left\\lfloor\\frac{n - q}{2}\\right\\rfloor\\right) + f(q) \\right) \\, , \\nonumber \\\\\n&B_{q} = \\min \\left( f(n - q - 1), f\\left(\\left\\lfloor\\frac{n-q-1}{2}\\right\\rfloor\\right) + f(q) \\right) \\, , \\nonumber \\\\\n&h_q(n) = \\max\\left(A_{q}, B_{q} + f(1) \\right) \\, , \\label{eq:size_two_single} \\\\\n&w_2(n) = \\max_{q \\in \\{0, \\ldots, n - 2 \\}} h_{q}(n) \\, . 
\\label{eq:size_two}\n\\end{align}\nGiven $n$ nodes, $w_2(n)$ is the network value that the designer can secure by choosing a network composed of a generalized $2$-star with a protected core and unprotected periphery, an unprotected component (of size $q \\in \\{0, \\dots, n - 2\\}$), and possibly one node disconnected from both of these components.\n\nFinally, we define\n\\begin{equation*}\nK^{*}(n,c) = {\\arg\\max}_{k\\in \\{0,\\ldots,n\\}} w_k(n) - kc \\, .\n\\end{equation*}\n\nWe point out that $K^{*}(n,c)$ never contains $1$ (because $c > 0$). We are now ready to state the result characterizing equilibrium defended network and pessimistic equilibrium payoffs to the designer.\n\n\\begin{proposition}\n\\label{th:centr}\nLet $n_B = n_A = 1$, $n \\ge 3$,\n$c > 0$, and $k \\in K^{*}(n,c)$. Then, the pessimistic equilibrium payoff to the designer is equal to $\\globdespayoff(n,c) = w_{k} - kc$. Moreover, there exists an equilibrium network $(G, \\Delta)$ that has $\\abs{\\Delta} = k$ protected nodes and the following structure:\n\\begin{compactenum}[i)]\n\\item $G$ has at most three connected components.\n\\item If $k \\ge 3$ and $n \\bmod k \\neq 1$, then $G$ is a generalized $k$-star with protected core and unprotected periphery. \n\\item If $k \\ge 3$ and $n \\bmod k = 1$, then $G$ is composed of a generalized $k$-star of size $(n-1)$ with protected core and unprotected periphery and a single unprotected node.\n\\item If $k = 0$ and $n \\bmod 6 \\neq 3$, then $G$ has two connected components of size $\\lfloor n\/2 \\rfloor$ and, if $n \\bmod 2 = 1$, a single unprotected node.\\label{it:no_def}\n\\item If $k = 0$ and $n \\bmod 6 = 3$, then $G$ either has the structure described in \\cref{it:no_def} or $G$ is composed of three components of size $n\/3$, depending on the term achieving maximum in~\\cref{eq:divisible_by_three}.\n\\item If $k = 2$, then $G$ is composed of a generalized $2$-star with protected core and unprotected periphery, an unprotected component of size $q \\in \\{0, \\dots, n-2\\}$ and, possibly, a single unprotected node. The size $q$ is the number achieving maximum in \\cref{eq:size_two}. The existence of a single unprotected node depends on the term achieving maximum in \\cref{eq:size_two_single}.\n\\end{compactenum}\n\\end{proposition}\n\nThe intuitions behind this result are as follows. When the cost of defense is high, then the designer is better off by not using any defense and partitioning the network into several components. Since the strategic adversary will always eliminate a maximal such component, the designer has to make sure that all the components are equally large. Due to the divisibility problems, one component may be of lower size. Thanks to our assumptions on the component value function $f$, the number of such components is at most three. Moreover, if there are exactly three components, then they are of equal size or the smallest one has size $1$.\n\nWhen the cost of defense is sufficiently low, then it is profitable for the designer to protect some nodes. If the number of protected nodes is not smaller than $3$, then, by choosing a generalized $k$-star with fully protected core (of optimal size $k \\ge 3$ depending on the cost) and unprotected periphery, the designer knows that the strategic adversary is going to attack either the byzantine node (if she is among the core nodes) or any unprotected node (otherwise). An attack on the byzantine core node destroys that node and all periphery nodes attached to her. 
Thus, in the worst case, a core node with the largest number of periphery nodes connected to her is byzantine. By distributing the periphery nodes evenly across the core, the designer minimizes the impact of this worst-case scenario. Due to the divisibility problems, it may happen that some of the core nodes are connected to a higher number of periphery nodes. If this is the case for one core node only (i.e., $n \\bmod k = 1$), then it is better for the designer to disconnect the extra periphery node from the generalized star. By doing so, the designer spares this node from destruction.\n\nThe case when there are exactly $2$ protected nodes is special. Indeed, in this case, choosing a generalized $2$-star with protected core is not better than using no protection at all. This is because, in the worst case, the byzantine node is among the two protected ones. Therefore, it would be better for the designer to split the network into two unprotected components -- this would result in the same network value after the attack without the need to pay the cost of protection. On the other hand, if the network consists of a generalized $2$-star with protected core and an unprotected component, then the argument above ceases to be valid: even if the byzantine node is among the protected ones, splitting them may give the adversary an incentive to destroy the unprotected component. Therefore, protecting $2$ nodes may serve as a way to ensure that one component survives the attack.\n\nIt is interesting to compare this result to an analogous result obtained in~\\cite{CDG14,CDG17} for a model without byzantine nodes. There, depending on the cost of protection, three equilibrium protected networks are possible: an unprotected disconnected network (like in the case with a byzantine node), a centrally protected star, and a fully protected connected network. The existence of a byzantine node leads to a range of core-protected networks between the centrally protected star and the fully protected clique (which is a generalized $n$-star). Notice that the pessimistic attitude towards incomplete information results in the star network never being optimal: if only one node is protected, then, in the worst case, the designer expects this node to be byzantine, which leads to losing all nodes after the attack by the adversary. Therefore, at least two nodes must be protected if protection is used in an equilibrium. The proof of \\cref{th:centr} is given in \\cref{ap:centralized}.\n\n\\begin{example}\n\\Cref{example50} presents how the optimal network changes for different cost values when $f(x) = x^{2}$ and $n \\in \\{12,30,50\\}$. For instance, for $n = 12$ the relevant quantities are $w_{12}(12) = f(11) = 121$, $w_{6}(12) = f(10) = 100$, $w_{4}(12) = f(9) = 81$, and $w_{0}(12) = f(6) = 36$, and comparing $w_k(12) - kc$ over these values of $k$ yields the cost thresholds $3.50$, $9.50$, and $11.25$ reported in the table. For these values of $n$, it is never optimal to have one node that is disconnected from the rest of the network. Moreover, as we can see, for a given number $n$ of nodes, not all possible generalized $k$-stars arise as optima. It is interesting to note that $3$-stars have never appeared in our experiments as optimal networks for the value function $f(x) = x^{2}$. Similarly, we have not found an example where it is optimal to defend exactly $2$ nodes. The case where there is no defense but the network is split into $3$ equal parts arises when $n = 9$ and the cost is high enough (i.e., $c > 6.2$), as already established in~\\cite{CDG14}.\n\\end{example}\n\n\\begin{remark}\nIn this section, we have characterized the optimal networks for the case $n_B = n_A = 1$. Nevertheless, we have not found a network that has a substantially different structure than the ones described here and performs better for general values of $n_B$ and $n_A$. 
We therefore suspect that the characterization for the general case is similar to the case $n_B = n_A = 1$.\n\\end{remark}\n\n\\begin{table}\n\\footnotesize\n\n\\begin{tabular}{|l|C{2cm}|l|l|C{2cm}|l|l|C{2cm}|}\n\\cline{8-8}\n\\multicolumn{7}{c|}{} & $n = 50$ \\\\\n\\cline{5-5} \\cline{7-8}\n\\multicolumn{4}{c|}{}&$n=30$ & &$ c < 3.88 $ & $50$-star \\\\\n\\cline{4-5} \\cline{7-8}\n\\multicolumn{3}{c|}{}&$ c < 3.80 $ & $30$-star & & $c \\in (3.88, 11.875)$&$25$-star \\\\\n\\cline{2-2}\\cline{4-5}\\cline{7-8}\n\\multicolumn{1}{c|}{}&$n=12$&&$c \\in (3.80, 11)$ &$15$-star & & $c \\in (11.875, 23.25)$&$17$-star \\\\\n\\cline{1-2}\\cline{4-5}\\cline{7-8}\n$ c < 3.50 $ & $12$-star & & $c \\in (11, 26)$ &$10$-star & &$c \\in (23.25,30.(3))$&$13$-star \\\\\n\\cline{1-2}\\cline{4-5}\\cline{7-8}\n$c \\in (3.50, 9.50)$ &$6$-star & & $c \\in (26,49)$ &$6$-star & &$c \\in (30.(3), 85)$&$10$-star \\\\\n\\cline{1-2}\\cline{4-5}\\cline{7-8}\n$c \\in (9.50, 11.25)$ &$4$-star & & $c \\in (49, 70.20)$ &$5$-star& &$c \\in (85, 195)$ &$5$-star \\\\\n\\cline{1-2}\\cline{4-5}\\cline{7-8}\n$c > 11.25$ &two disconnected components of equal size & & $c > 70.20$&two disconnected components of equal size& &$c > 195$ &two disconnected components of equal size \\\\\n\\cline{1-2}\\cline{4-5}\\cline{7-8}\n\\end{tabular}\n\\vspace*{0.5cm}\n\n\\caption{Optimal networks for $n \\in \\{12, 30, 50\\}$.} \\label{example50}\n\\end{table}\n\n\\subsection{Decentralized defense}\nNow we turn attention to the variant of the model where defense decisions are decentralized. Our goal is to characterize the inefficiencies caused by decentralized protection decisions for general values of $n_B$ and $n_A$. To this end, we need to compare equilibrium payoffs to the designer under centralized and decentralized defense. We start by establishing two results about the existence of equilibria in the decentralized defense game. \n\nFirstly, since the game is finite, we get equilibrium existence by Nash theorem. Notice that our use of the pessimistic aggregation of the incomplete information about types of nodes determines a game where the utilities of the nodes and the designer are defined by the corresponding pessimistic utilities. This game is finite and, by Nash theorem, it has a Nash equilibrium in mixed strategies. This leads to the following existence result.\n\n\\begin{proposition}\n\\label{pr:exist}\nThere exists an equilibrium of $\\Gamma$.\n\\end{proposition}\n\n\\begin{proof}\nIt can be shown that a stronger statement holds. More precisely, one can prove that for any $n, c$ there exists an equilibrium $\\bm e$ such that the strategies of the nodes do not depend on their types. Let us sketch the proof. We consider a modified model in which the nodes do not know their types (i.e., every node thinks that she is genuine, but some of them are byzantine). In this model, the (mixed) strategies of nodes are functions $\\tilde{\\delta}_{j} \\colon \\mathcal{G}(V) \\to \\Sigma(\\{0,1\\})$,\\footnote{\nIf $X$ is a finite set, then by $\\Sigma(X)$ we denote the set of all probability measures on $X$.\n}\nand every node receives a pessimistic utility of a genuine node, as defined in \\cref{eq:pess_payoff_node}. The strategies and payoffs to the adversary and the designer are as in the original model. 
Let $\\overbar{x} \\colon \\mathcal{G}(V) \\times 2^{V} \\times \\binom{V}{n_B} \\to \\binom{V}{n_A}$ denote any optimal strategy of the adversary (i.e., a function that, given a defended network and the position of the byzantine nodes $B$, returns a subset of nodes that is optimal to infect in this situation). If we fix $\\overbar{x}$, then the game turns into a two stage game (the designer makes his action first and then the nodes make their actions) with complete information. Therefore, this game has a subgame perfect equilibrium in mixed strategies. This equilibrium, together with $\\overbar{x}$, forms an equilibrium $\\bm e$ in the original model, because, in the original model, a byzantine node cannot improve her payoff by a unilateral deviation.\n\\end{proof}\n\nFix the parameters $n_B, n_A$ and let $\\mathcal{E}(n, c)$ denote the set of all equilibria of $\\mathit{\\Gamma}$ with $n$ nodes and the cost of protection $c > 0$. Let $\\globdespayoff(n, c)$ denote the best payoff the designer can obtain in the centralized defense game (as discussed in \\cref{sec:centralized}).\nThe \\emph{price of anarchy} is the fraction of this payoff over the minimal payoff to the designer that can be attained in equilibrium of $\\mathit{\\Gamma}$ (for the given cost of protection $c$),\n\\begin{equation*}\n\\mathrm{PoA}(n, c) = \\frac{\\hat{U}^{\\mathbf{\\sf{D}}}_{\\star}(n,c)}{\\min_{\\bm{e} \\in \\mathcal{E}(n,c)} \\mathbf{E}\\hat{U}^{\\mathbf{\\sf{D}}}(\\bm{e})} \\, .\n\\end{equation*}\n\nAlthough pure strategy equilibria may not exist for some networks, they always exist on generalized stars. Moreover, when these stars are large enough, by choosing such a star, the designer can ensure that all genuine core nodes are protected. \nThis is enough to characterize the price of anarchy as $n$ goes to infinity (with a fixed cost $c$). The next proposition characterizes equilibria on generalized stars.\n\n \\begin{proposition}\\label{thm:ROEchar}\n Let $\\bm e \\in \\mathcal{E}$ be any equilibrium of $\\mathit{\\Gamma}$. Let $G = (V, E)$ be a generalized $k$-star. Denote $\\abs{V} = n$, $x = \\left\\lfloor \\frac{n}{k} \\right\\rfloor - n_A + 1$, and $y = n - n_B\\left\\lfloor \\frac{n}{k} \\right\\rfloor$. Furthermore, suppose that $n \\ge k \\ge n_B+1$ and $x \\ge 2$. If the cost value $c$ belongs to one of the intervals $(0, f(1))$, $(f(1), \\frac{f(x)}{x})$, $(\\frac{f(y)}{y}, +\\infty)$, then the following statements about $\\bm e$ restricted to $\\mathit{\\Gamma}(G)$ hold:\n \\begin{compactitem}\n \\item all genuine nodes use pure strategies\n \\item if $c < f(1)$, then all genuine nodes are protected\n \\item if $f(1) < c < \\frac{f(x)}{x}$, then \n all genuine core nodes are protected and \n all genuine periphery nodes are not protected\n \\item if $\\frac{f(y)}{y} < c$, then all genuine nodes are not protected.\n \\end{compactitem}\n \\end{proposition}\n \n The proof of \\cref{thm:ROEchar} requires an auxiliary lemma.\n\n \\begin{lemma}\\label{attack_on_byz}\n Let $\\bm e \\in \\mathcal{E}$ be any equilibrium of $\\mathit{\\Gamma}$ and $\\overbar{x} \\colon \\mathcal{G}(V) \\times 2^{V} \\times \\binom{V}{n_B} \\to \\Sigma(\\binom{V}{n_A})$ denote the (possibly mixed) strategy of the adversary in this equilibrium.\n Let $(G, \\Delta)$ be a network such that $G$ is a generalized $k$-star. Furthermore, suppose that $\\lfloor \\frac{n}{k} \\rfloor \\ge 2$, $n \\ge 3$, and that the set of byzantine nodes $B$ contains a core node. 
Then, $\\overbar{x}(G, \\Delta, \\theta_{\\adv})$ infects this node with probability one. \n \\end{lemma}\n \n \\begin{proof}\n Since $\\bm e$ is an equilibrium and the adversary has complete information about the network before making his decision, his strategy $\\overbar{x}(G, \\Delta, \\theta_{\\adv})$ is a probability distribution over the set of subsets of nodes that are optimal to attack. Let $b \\in B$ denote any byzantine node that is also a core node. We will show that any optimal attack infects $b$.\n \nTo do so, fix any set of attacked nodes $I \\in \\binom{V}{n_A}$ and suppose that attacking $I$ does not infect $b$. Given the structure of generalized $k$-star, we see that $I$ consists of genuine protected nodes and periphery nodes that are connected to genuine protected core nodes. To finish the proof, fix any node $j \\in I$ and observe it is strictly better for the adversary to attack the set $I \\cup \\{ b\\} \\setminus \\{j\\}$. Indeed, if $j$ is a genuine protected node, then attacking it does nothing, while attacking $b$ destroys at least one more node. Moreover, if $j$ is a periphery node connected to a genuine core protected node, then attacking $b$ not only destroys one node but also disconnects the network ($b$ is connected to at least one periphery node because $\\lfloor \\frac{n}{k} \\rfloor -1 \\ge 1$).\n \\end{proof}\n\nWe are now ready to present the proof of \\cref{thm:ROEchar}.\n \n \\begin{proof}[Proof of \\cref{thm:ROEchar}]\n Let $\\overbar{x} \\colon \\mathcal{G}(V) \\times 2^{V} \\times \\binom{V}{n_B} \\to \\Sigma(\\binom{V}{n_A})$ denote the strategy of the adversary in $\\bm e$ and let $\\Delta$ be any choice of protected nodes on $G$, $\\Delta \\subset V$. Let $j \\in V$ be a genuine node. \n \n First, suppose that $j \\notin \\Delta$. We will show that the pessimistic payoff of $j$ is equal to $0$. On the one hand, this payoff is nonnegative for every possible choice of the infected node. On the other hand, we can bound it from above by supposing that there exists a byzantine node $b \\in B$ that is a core node and a neighbor of $j$. Then, \\cref{attack_on_byz} shows that $\\overbar{x}$ infects $b$, and the pessimistic payoff of $j$ is not greater than $0$. \n \n Second, suppose that $j \\in \\Delta$. Then, we have two possibilities. If $j$ is a periphery node, then the same argument as above shows that the pessimistic payoff of $j$ is equal to $f(1) - c$. If $j$ is a core node, then her payoff is bounded from below by $\\frac{f(x)}{x} - c$ (where $x = \\lfloor \\frac{n}{k} \\rfloor - n_A + 1$) for every possible choice of the set of infected nodes. 
Moreover, by supposing that every byzantine node is a core node, we see that the pessimistic payoff of $j$ is bounded from above by $\\frac{f(y)}{y} - c$ (where $y = n - n_B\\lfloor \\frac{n}{k} \\rfloor$).\n \n Since the estimates presented above are valid for any choice of $\\Delta$, we get the desired characterization of equilibria.\n \\end{proof}\n \nOur main result estimates the price of anarchy using \\cref{thm:ROEchar}.\n\\begin{theorem}\n\\label{th:poa}\nSuppose that for all $t \\ge 0$ the function $f$ satisfies \n\\[ \n\\lim_{n \\to +\\infty} f(n)\/f(n - t) = 1 \\, .\n\\]\nThen, for any cost level $c > 0$ and any fixed parameters $n_B \\ge 1$, $n_A \\ge 1$ we have \n\\[\n\\lim_{n \\to +\\infty} \\mathrm{PoA}(n,c) = 1 \\, .\n\\]\n\\end{theorem}\n\nThe proof requires an auxiliary lemma concerning the asymptotic behavior of the function $f$.\n \n \\begin{lemma}\\label{fast_growth}\n We have $\\lim_{x \\to +\\infty} \\frac{f(x)}{x} = +\\infty$.\n \\end{lemma}\n \\begin{proof}\n Since $f$ is strictly convex, for any $0 < x < y < z$ we have (cf. \\cite[Sect.~I.1.1]{lemarechal_convex_analysis})\n \\begin{equation}\n \\frac{f(y) - f(x)}{y - x} < \\frac{f(z) - f(x)}{z - x} < \\frac{f(z) - f(y)}{z - y} \\, . \\label{eq:slopes}\n \\end{equation}\n As a result, the function $g_{t}(x) = (f(x + t) - f(t))\/x$ is strictly increasing for all $t > 0$ (to see that, let $0 < x < y$ and use the left inequality from \\cref{eq:slopes} on the tuple $(t, x + t, y + t)$). Since $f$ is convex and increasing, it is also continuous on $[0, +\\infty)$ (cf. \\cite[Sect.~I.3.1]{lemarechal_convex_analysis}). By fixing $x$ and taking $t \\to 0$ we get that the function $x \\to \\frac{f(x)}{x}$ is nondecreasing. Suppose that $\\lim_{x \\to +\\infty} \\frac{f(x)}{x} = \\eta < +\\infty$. Then, by the assumption that $f(3x) \\geq 2f(2x)$ for all $x \\ge 1$, we have\n \\[\n \\eta = \\lim_{x \\to +\\infty} \\frac{f(x)}{x} = \\lim_{x \\to +\\infty} \\frac{f(3x)}{3x} \\ge \\lim_{x \\to +\\infty} \\frac{2 f(2x)}{3\/2 \\cdot 2x} = \n \\frac{4}{3}\\eta \\, .\n \\]\n Hence $\\eta \\le 0$ and $f(x) = 0$ for all $x \\ge 0$, which contradicts the assumption that $f$ is strictly convex.\n \\end{proof}\n \nWe also need the fact that $f$ is superadditive.\n \\begin{lemma}\\label{superadditive}\n For all $x, y > 0$ we have $f(x + y) > f(x) + f(y)$.\n \\end{lemma}\n \\begin{proof}\n From the strict convexity of $f$ we have $f(x) = f(\\frac{x}{x+y}(x+y) + \\frac{y}{x+y} \\cdot 0) < \\frac{x}{x+y}f(x+y)$. Analogously, $f(y) < \\frac{y}{x+y}f(x+y)$. Hence $f(x+y) > f(x) + f(y)$. \n \\end{proof}\n\nWe now give the proof of \\cref{th:poa}.\n\n\\begin{proof}[Proof of \\cref{th:poa}]\nThe function $f$ is superadditive by \\cref{superadditive}. As a result, the pessimistic payoff to the designer can be trivially bounded by $\\globdespayoff(n,c) \\le f(n)$. We now want to give a lower bound for the quantity $\\min_{\\bm{e} \\in \\mathcal{E}(n,c)} \\mathbf{E}\\hat{U}^{\\mathbf{\\sf{D}}}(\\bm{e})$. By \\cref{fast_growth} we have $\\lim_{x \\to +\\infty} \\frac{f(x)}{x} = +\\infty$. Let $N \\ge 1 + n_A$ be a natural number such that $\\frac{f(x)}{x} > c$ for all $x \\ge N - n_A + 1$. For any $n \\ge (n_B+1)(N + 1)$ we define $k = \\lfloor \\frac{n}{N + 1} \\rfloor \\ge n_B+1$. 
Observe that if we denote $x = \\lfloor \\frac{n}{k} \\rfloor - n_A + 1$, then we have $x \\ge \\frac{n}{k} - n_A \\ge N - n_A + 1$.\n Hence, if the designer chooses a generalized $k$-star, then \\cref{thm:ROEchar} shows that all genuine core nodes are protected in any equilibrium. In particular, we have\n $\n \\min_{\\bm{e} \\in \\mathcal{E}(n,c)} \\mathbf{E}\\hat{U}^{\\mathbf{\\sf{D}}}(\\bm{e}) \\ge f(n - n_B\\left\\lceil \\frac{n}{k} \\right\\rceil - n_A + 1) - nc\n $. \n Moreover, we can estimate\n \\begin{equation}\n \\begin{aligned}\n \\left\\lceil \\frac{n}{k} \\right\\rceil \\le \\frac{n}{k} + 1 &\\le \\frac{n}{\\frac{n}{N+1} - 1} + 1 \\\\ &\\le \\frac{n}{\\frac{n}{N+1} - \\frac{n}{2(N+1)}} + 1 = 2N + 3 \\, .\n \\end{aligned}\n \\end{equation}\n Hence, using \\cref{fast_growth}, we get\n \\begin{align*} \n \\lim_{n \\to +\\infty} \\mathrm{PoA}(n,c) &\\le \\lim_{n \\to +\\infty} \\frac{f(n)}{f(n - n_B(2N + 3) - n_A + 1) - nc} \\\\ &= \\lim_{n \\to +\\infty} \\frac{1}{\\frac{f(n - n_B(2N + 3) - n_A + 1)}{f(n)} - \\frac{nc}{f(n)}} = 1 \\, . \\qedhere\n \\end{align*}\n\\end{proof}\n\n\\begin{remark}\nNotice that the condition of \\cref{th:poa} is satisfied for $f(x) = x^{a}$ with $a \\geq 2$. \nHence, for such functions $f$, the price of anarchy converges to $1$, so the inefficiencies due to decentralization are asymptotically fully mitigated by the network design.\nThis is true, in particular, for Metcalfe's law.\n\\end{remark}\n\n\\section{Extensions of the model}\\label{sec:extension}\n\nIn the previous section, we have shown that the topology of the generalized $k$-star mitigates the costs of decentralization in our model. Nevertheless, our approach can be used to show similar results in a number of modified models. For instance, one could consider a probabilistic model, in which $n_B$ byzantine nodes are randomly picked from the set of nodes $V$ (and the distribution of this random variable is known to all players). Then, the designer and the nodes optimize their expected utilities, not the pessimistic ones (where the expectation is taken over the possible positions of the byzantine nodes). In this case, we can still give a partial characterization of Nash equilibria on generalized $k$-stars. More precisely, one can show that if the assumptions of \\cref{thm:ROEchar} are fulfilled and $f(1) < c < \\frac{f(x)}{x}$, then all genuine core nodes are protected. This is exactly what we need in the proof of \\cref{th:poa}. Therefore, the price of anarchy in the probabilistic model also converges to $1$ as the size of the network increases. \n\n\\section{Conclusions}\\label{sec:concl}\nWe studied a model of network defense and design in the presence of an intelligent adversary and byzantine nodes that cooperate with the adversary. We characterized optimal defended networks in the case where defense decisions are centralized, assuming that the number of byzantine nodes and the number of attacked nodes are equal to one. We have also shown that, in the case of sufficiently well-behaved functions $f$ (including $f$ in line with Metcalfe's law), careful network design allows one to fully mitigate the inefficiencies due to decentralized protection decisions, despite the presence of the byzantine nodes. In terms of network design, we showed that a generalized star is a topology that can be used to achieve this goal. This topology creates incentives for protection by two means. Firstly, it is sufficiently redundant, so that the protected nodes are connected to several other protected nodes. 
This secures adequate network value even if some of these nodes are malicious. Secondly, it gives sufficient exposure to the nodes, encouraging the nodes that would benefit from protection to choose to protect through fear of being infected (either directly or indirectly). These results could be valuable, in particular, to policy-makers and regulators, showing that such design interventions can have a strong effect and providing hints for which network structures are better and why.\n\nAn interesting avenue for future research is to consider a setup where not only the identities but also the number of byzantine nodes are unknown. What would the optimal networks look like if the protection decisions are centralized? Can we still mitigate the inefficiencies caused by decentralization? Another interesting problem is the structure of optimal networks under centralized protection when the number of byzantine nodes or the budget of the adversary is greater than~$1$. Based on our experiments, we suspect that the topology of these networks is very similar to the case considered here. Nevertheless, a formal result remains elusive.\n\n\\bibliographystyle{alpha}\n\\newcommand{\\etalchar}[1]{$^{#1}$}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{Sec:Introduction}\n\n\n\n \nThere is a vast number of applications of Gaussian random fields across several engineering and scientific disciplines including systems and control \\cite{ringh2016multidimensional,zhu2021multidimensional}, signal processing \\cite{stoica2005spectral,qian2022afd}, geotechnical engineering \\citep{chen2016cpt,latz2019fast}, image processing \\cite{winkler2003image}, biology, and meteorology \\cite{croci2018efficient,khristenko2019analysis}. \nIn these applications, we often face the standard problem of sampling a random field, which could be used, e.g., for the numerical solution of a stochastic partial differential equation (PDE). \nIn particular, in geotechnical engineering, certain geomaterial properties are modeled as zero-mean stationary random fields indexed by spatial or temporal variables. Then the correlations between random variables in the field are described by the covariance function. \nIn order to carry out simulations in geomaterial modeling, a first step is to generate (possibly large-scale) samples of a stationary random field such that its covariances coincide with the values of a prescribed covariance function. \nA traditional method for this problem is called covariance matrix decomposition (CMD) \\cite{fenton2008risk}, which employs, e.g., the Cholesky factorization.\nSuch a method in general costs $O(N^3)$ flops given an $N\\times N$ matrix, which is computationally prohibitive when the covariance matrix has a large size. The latter case is typical for multidimensional random fields. For example, a $3$-d random field with a (moderate) size $100\\times 100\\times 100$ results in a $10^6\\times 10^6$ covariance matrix after vectorization. Thus applications of CMD are in practice limited to small-scale and unidimensional cases (time series). However, many practical problems involve multidimensional random fields \\cite{vanmarcke2010random,graham2018analysis} and the ability to efficiently generate large-scale samples is also important. \n\nIn the literature, there are a number of methods to handle such a problem. 
Recent developments include \\citet{graham2018analysis} which aims to embed a finite multilevel Toeplitz covariance matrix into a larger positive definite multilevel circulant matrix, following earlier works on the embedding problem for (one-level) Toeplitz covariance matrices \\cite{dembo1989embedding,dietrich1997fast}. Then the fast Fourier transform (FFT) can be used to compute the spectral decomposition of the (multilevel) circulant matrix at a reduced cost. \nAnother direction is to use the Karhunen--Lo{\\`e}ve expansion for the continuous covariance kernel and to truncate the expansion for practical computations \\cite{zheng2017simulation,latz2019fast,qian2022afd}.\nA different strategy for the approximation of the covariance matrix involves the notion of $H$ matrices \\cite{feischl2018fast} of which the square root can be computed at an almost linear cost.\n\n\n\n\n\nAlthough the above works are mathematically interesting and deal with general covariance functions, difficulties can still arise when one wants to generate very large-scale samples, especially when the dimension is greater than or equal to three. In order to handle the latter issue, the paper \\cite{li2019stepwise} made a \\emph{decoupling assumption} on the multivariable covariance function and proposed a stepwise CMD method which essentially computes the matrix square root along each dimension and thus reduces the computational cost compared to a full CMD. However, the space domain computation in that paper can be once again greatly reduced if one is able to recognize the frequency-domain structure of the covariance function, namely the \\emph{spectral density} of the random field. This is the main idea behind the current paper. More specifically, we draw inspiration from the systems and control literature on \\emph{stochastic realization} \\cite{LP15} and \\emph{rational covariance extension} in which one aims to describe an underlying random process with a linear dynamical system driven by white noise. The latter topic has undergone decades of development from scalar random processes to vector random fields, see \\cite{BGL-98,byrnes2001finite,enqvist2004aconvex,Georgiou-06,GL-08,FPR-08,LPcirculant-13,Zhu-Baggio-19,zhu2020well,ZFKZ2-M2-Aut,Liu-Zhu-2021,Liu-Zhu-2022-CCSSTA} and the references therein. \nIn particular, we have shown that the exponential covariance function corresponds exactly to an autoregressive (AR) model (a rational filter) of order one, which is easy to implement recursively and permits the sample generation at a \\emph{linear} cost. The decoupling assumption makes straightforward the generalization to multidimensional random fields, and the resulting multidimensional filter is simply a product of individual filters in each dimension. In this way, the filtering procedure is also decoupled as expected.\n\n\n\n\n\n\n\nIn addition, our approach is extremely suitable for multiscale simulations, see e.g., \\citet {chen2016cpt}, where the generated samples of a random field are \\emph{interpolated} and fed into a numerical PDE solver in order to obtain a refined solution.\nThe usual interpolation method samples the probability density of the fine-scale random variables whose values are to be determined, conditioned on the coarse-scale samples that have already been generated. 
The advantage of our approach is that, once a suitable refined noise input has been determined, only ``boundary'' samples that are necessary to initiate the \\emph{fine-scale} ARMA recursions need to be computed from the conditional probability density, and the rest fine-scale samples are generated in the same fashion as the coarse-scale sampling.\n \n\nThe outline of the paper is as follows. In Section~\\ref{Sec:Problem statement} we state the problem of sampling a stationary random field with a given covariance function that can be decoupled in each dimension. \nIn Section~\\ref{Sec:Spectral analysis} we propose a stochastic realization approach to the sampling problem and focus on the solutions for the exponential and Gaussian covariance functions.\nIn Section~\\ref{Sec:Multi-scale random fields} we integrate our method to multiscale simulations and provide an explicit solution procedure for the bivariate exponential covariance function.\nA number of numerical simulations are presented in Section~\\ref{Sec:Numerical examples}, and in Section~\\ref{Sec:Conclusions} we draw the conclusions.\n\n\n\n\\section{Background on sampling random fields}\\label{Sec:Problem statement}\n\nLet $y(\\mathbf t,\\omega)$ be a $d$-dimensional real random field over a probability space $(\\Omega, \\mathcal{F}, P)$ where $\\mathbf t=(t_1,\\dots,t_d)\\in\\mathbb R^d$ can be interpreted as a space (or spatio-temporal) coordinate vector. For each fixed $\\mathbf t\\in\\mathbb R^d$, assume that $y(\\mathbf t,\\cdot)$ is a zero-mean real-valued random variable with a finite variance, that is,\n\\begin{equation*}\n{\\mathbb E} y(\\mathbf t,\\cdot) = 0\\quad \\text{and} \\quad {\\mathbb E}\\, [y(\\mathbf t,\\cdot)]^2 < \\infty\n\\end{equation*}\nwhere ${\\mathbb E}$ indicates mathematical expectation.\nIt is customary to suppress the dependence on $\\omega$ and write simply $y(\\mathbf t)$. Assume further that the random field under consideration is \\emph{stationary}, which means that the covariance function\n\\begin{equation*}\n\\rho(\\mathbf t,\\sbf) := {\\mathbb E} [y(\\mathbf t)y(\\sbf)]\n\\end{equation*}\ndepends only on the difference $\\boldsymbol{\\tau}:=\\mathbf t-\\sbf$ between the arguments, so we can write $\\rho(\\boldsymbol{\\tau})$ instead. Notice the symmetry $\\rho(-\\boldsymbol{\\tau}) = \\rho(\\boldsymbol{\\tau})$.\n\nIn applications of geotechnical engineering, see e.g., \\cite{firouzianbandpey2015effect,ching2017characterizing}, it is often assumed that the covariance function has a decoupled form, \nnamely\n\\begin{equation}\\label{eq:decoupled form}\n\t\\rho(\\boldsymbol{\\tau})=\\rho_1(\\tau_1) \\rho_2(\\tau_2) \\cdots \\rho_d(\\tau_d),\n\\end{equation}\nwhere each $\\rho_j$ is a covariance function in one variable, and $\\tau_j\\in \\mathbb R$ is the $j$-th component of $\\boldsymbol{\\tau}$. \nIn the following, since we shall be concerned with the sampling problem of the random field $y(\\mathbf t)$, let us now define the sampled version of the random field as well as the covariance function.\nTake $\\tau_j = x_j T_j$ where $T_j>0$ is sampling distance and $x_j\\in\\mathbb Z$. Define a random field on the integer grid $\\mathbb Z^d$ via\n\\begin{equation}\\label{sampled_random_field}\ny_{\\mathrm{s}}(\\mathbf x) = y(x_1 T_1,\\dots,x_d T_d)\n\\end{equation}\nwhere the subscript s means ``sampled''. 
Then it is easy to deduce that the covariance function of the discrete random field $y_{\\mathrm{s}}$ is\n\\begin{equation}\\label{rho_sampled}\n\t\\rho_{\\mathrm{s}}(\\mathbf k) = \\rho(k_1 T_1,\\dots,k_d T_d) = \\rho_1 (k_1 T_1) \\cdots \\rho_d (k_d T_d),\n\\end{equation}\nwhere $\\mathbf k=(k_1,\\dots,k_d)\\in\\mathbb Z^d$ denotes the difference between two discrete grid points.\nThe sampling problem can then be phrased as follows.\n\\begin{problem}\\label{prob_sampling_general}\n\tGiven a covariance function $\\rho(\\boldsymbol{\\tau})$ of the form \\eqref{eq:decoupled form}, a vector $\\mathbf T=(T_1,\\dots,T_d)$ of sampling distances, and a vector $\\mathbf N=(N_1,\\dots,N_d)$ of positive integers, generate samples of the random field $y_{\\mathrm{s}}(\\mathbf x)$ in \\eqref{sampled_random_field} for $\\mathbf x$ in the index set $($a regular cuboid$)$\n\t\\begin{equation}\\label{ind_set_RF}\n\t\\mathbb Z^d_{\\mathbf N} := \\{(x_1,\\dots,x_d) : 0\\leq x_j\\leq N_j-1,\\ j=1,\\dots,d\\}\n\t\\end{equation}\n\tsuch that its covariance function coincides with the sampled version $\\rho_{\\mathrm{s}}(\\mathbf k)$ in \\eqref{rho_sampled}.\n\\end{problem}\n\n\nThe most straightforward approach for this problem is covariance matrix decomposition mentioned in the Introduction. More precisely, since the index set \\eqref{ind_set_RF} has a finite cardinality, all the samples can be stacked into a long vector $\\mathbf y$ of dimension $|\\mathbf N|:=\\prod_{j=1}^{d}N_j$. Then the covariance matrix $\\boldsymbol{\\Sigma}:={\\mathbb E}(\\mathbf y\\yb^\\top)$ could in principle be evaluated elementwise according to \\eqref{rho_sampled}. Problem~\\ref{prob_sampling_general} would then be solved via simple linear algebra\n\\begin{equation}\\label{eq:Ranom field realization}\n\t\\mathbf y=L\\mathbf w,\n\\end{equation}\nwhere $\\mathbf w\\sim \\mathcal{N}(\\mathbf{0},I)$ is an i.i.d.~standard normal random vector of dimension $|\\mathbf N|$, and $L$ is any matrix square root of $\\boldsymbol{\\Sigma}$ which can in particular, be taken as the Cholesky factor such that $LL^{\\top}=\\boldsymbol{\\Sigma}$. However, it is well known that the matrix factorization, which is the major computational burden here, involves $O(|\\mathbf N|^3)$ flops. Therefore, such a naive approach works only for random fields with a dimension $d=1$ or $2$ when the size of the samples $|\\mathbf N|$ is not too large. For the generation of large-scale samples, one needs to exploit the inherent structure of the covariance function in order to facilitate fast computation.\nThe latter point is indeed the theme of the next section where we will propose an efficient stochastic realization \napproach to Problem~\\ref{prob_sampling_general}.\nMore specifically, we consider two types of $1$-d covariance functions of practical interest:\n\\begin{enumerate}\n\t\\item the exponential type\n\t\\begin{equation}\\label{eq:exponential function}\n\t\t\\rho(x) = \\sigma^2 e^{-\\alpha|x|},\n\t\\end{equation}\n\n\t\\item the Gaussian type\n\t\\begin{equation}\\label{eq:Gaussian function}\n\t\t\\rho(x) = \\sigma^2 e^{-\\alpha|x|^{2}},\n\t\\end{equation}\n\\end{enumerate}\nwhere $\\sigma^2$ is the variance of the random field\nand $\\alpha>0$ is a parameter. 
The multidimensional covariance function is formed through the product in \\eqref{eq:decoupled form}.\nNotice that our method works also for other types of covariance functions $\\rho(x)$ provided that the decoupling assumption \\eqref{eq:decoupled form} holds.\nIn this case, the spectral density of the random field can be well approximated in each dimension by a low-order rational model.\n\n\n\\begin{remark}\nThe exponential and the Gaussian covariance functions are special cases of the Mat\\'{e}rn family of covariance functions, defined as\n\t\\begin{equation}\\label{matern_cov}\n\t\\rho(\\boldsymbol{\\tau}) = \\kappa(\\|\\boldsymbol{\\tau}\\|\/\\lambda)\\quad \\text{where}\\quad \\kappa(r) = \\sigma^2 \\frac{2^{1-\\nu}}{\\Gamma(\\nu)} (\\sqrt{2\\nu}\\, r)^\\nu K_{\\nu}(\\sqrt{2\\nu}\\, r).\n\t\\end{equation}\n\tIn the formulas above, $\\boldsymbol{\\tau}\\in\\mathbb R^d$, $\\|\\cdot\\|$ is the Euclidean norm, $\\lambda$ the correlation length, $\\sigma^2$ the variance, $\\nu>0$ a smoothness parameter, $\\Gamma$ the gamma function, and $K_{\\nu}$ the modified Bessel function of the second kind. Notice that the case $\\nu=1\/2$ corresponds to the exponential covariances, while $\\nu=\\infty$ corresponds to the Gaussian function of the form $\\kappa(r) = \\sigma^2 e^{-r^2\/2}$. See e.g., \\cite[Example 2.7]{graham2018analysis}.\n\\end{remark}\n\n\n\\section{A stochastic realization approach}\\label{Sec:Spectral analysis}\n\n\\begin{figure}[h]\n\t\\centering\n\t\\tikzstyle{int}=[draw, minimum size=2em]\n\t\\tikzstyle{init} = [pin edge={to-,thin,black}]\n\t\\begin{tikzpicture}[node distance=2cm,auto,>=latex']\n\t\\node [int] (a) {$\\ W(\\mathbf z)\\ $};\n\t\\node (b) [left of=a, coordinate] {};\n\t\\node (c) [right of=a] {};\n\n\t\\path[->] (b) edge node {$w(\\mathbf x)$} (a);\n\t\\path[->] (a) edge node {$y(\\mathbf x)$} (c);\n\n\t\\end{tikzpicture}\n\t\\caption{A $d$-dimensional linear stochastic system with a white noise input.}\n\t\\label{Fig:d_linear_system}\n\\end{figure}\n\nLet $\\mathbf z=(z_1,\\cdots,z_d)$ be a vector of indeterminates.\nConsider a $d$-dimensional discrete-``time'' linear stochastic system as depicted in Fig.~\\ref{Fig:d_linear_system}, where \n\\begin{equation}\nW(\\mathbf z) = \\sum_{\\mathbf k \\in \\mathbb Z^d} \\gamma(\\mathbf k) \\,\\mathbf z^{-\\mathbf k}\n\\end{equation}\nis the transfer function, also called a shaping filter in the signal processing literature. Here the function $\\gamma:\\mathbb Z^d\\to \\mathbb R$ is called the \\emph{impulse response} of the system and $\\mathbf z^{\\mathbf k}$ is a shorthand notation for $z_1^{k_1}\\cdots z_d^{k_d}$. Moreover, the symbol $\\mathbf z^{-\\mathbf k}$ can be interpreted as a $\\mathbf k$-step delay operator. The system is excited by a normalized white noise $w(\\mathbf x)$ such that for any $\\mathbf x\\in\\mathbb Z^d$,\n\\begin{equation}\n{\\mathbb E} [w(\\mathbf x)]=0\\quad\\text{and} \\quad {\\mathbb E}[w(\\mathbf x+\\mathbf k) w(\\mathbf x)]=\\delta_{\\mathbf k,\\mathbf 0} \n= \\begin{cases}\n1 & \\text{if}\\ \\mathbf k=\\mathbf 0, \\\\\n0 & \\text{otherwise}.\n\\end{cases}\n\\end{equation}\nThe output $y(\\mathbf x)$ is a zero-mean stationary random field. 
Symbolically, we write\n\\begin{equation}\\label{eq:linear-stochastic-realization}\ny(\\mathbf x)=W(\\mathbf z)w(\\mathbf x):= \\sum_{\\mathbf k \\in \\mathbb Z^d} \\gamma_{\\mathbf k} w(\\mathbf x-\\mathbf k).\n\\end{equation}\nNotice that this is a standard model for stationary processes which goes back to the prediction theory of Wiener in the 1940s.\n\n\nLet $\\sigma(\\mathbf k):={\\mathbb E} [y(\\mathbf x+\\mathbf k) y(\\mathbf x)]$ be the covariance function of $y(\\mathbf x)$ and let $\\mathbb T:=[-\\pi, \\pi)$ denote the frequency interval. Then the spectral density of $y(\\mathbf x)$ is by definition \\cite{stoica2005spectral,LP15} the multidimensional discrete-time Fourier transform (DTFT) of the covariance function:\n\\begin{equation}\n\\Phi(e^{i\\boldsymbol{\\theta}})=\\sum_{\\mathbf k \\in \\mathbb Z^d} \\sigma(\\mathbf k)\\, e^{-i \\innerprod{\\mathbf k}{\\boldsymbol{\\theta}}},\n\\end{equation} \nwhere the frequency vector $\\boldsymbol{\\theta}=(\\theta_1,\\dots,\\theta_d)\\in \\mathbb T^d$, $e^{i\\boldsymbol{\\theta}}:=(e^{i\\theta_1},\\dots,e^{i\\theta_d})$ is a point on the $d$-torus (which is isomorphic to $\\mathbb T^d$), and\n$\\innerprod{\\mathbf k}{\\boldsymbol{\\theta}}:=k_1\\theta_1+\\cdots+k_d\\theta_d$ is the standard inner product in $\\mathbb R^d$. It then follows from the spectral theory of stationary random fields \\cite{yaglom1957some} that\n\\begin{equation}\\label{eq:Phi=w(z)w(z^-1)}\n\\Phi(e^{i\\boldsymbol{\\theta}})=|W(e^{i\\boldsymbol{\\theta}})|^2\n\\end{equation}\nwhere $W(e^{i\\boldsymbol{\\theta}})= \\sum_{\\mathbf k \\in \\mathbb Z^d} \\gamma(\\mathbf k) \\,e^{-i\\innerprod{\\mathbf k}{\\boldsymbol{\\theta}}}$, so $\\Phi(e^{i\\boldsymbol{\\theta}})$ takes \\emph{nonnegative} values.\nOn the other hand, if the spectral density $\\Phi$ satisfies certain analytic properties, then it admits a \\emph{spectral factor} $W$, see \\cite{LP15}.\n\n\n\n\nWe are mostly interested in the case where $W(\\mathbf z)$ is a \\emph{rational} function, that is, it can be expressed as a ratio of two polynomials:\n\\begin{equation}\\label{eq:transfer_func}\n\tW(\\mathbf z)=\\frac{b(\\mathbf z)}{a(\\mathbf z)}=\\frac{\\sum_{\\mathbf k \\in \\Lambda_{+,2}}b_{\\mathbf k}\\mathbf z^{-\\mathbf k}}{\\sum_{\\mathbf k \\in \\Lambda_{+,1}}a_{\\mathbf k}\\mathbf z^{-\\mathbf k}},\n\\end{equation}\nwhere\n\\begin{equation*}\n\\begin{aligned}\n\\Lambda_{+,1} & := \\{(k_1,\\cdots,k_d)\\in\\mathbb Z^d : 0\\leq k_j \\leq m_j,\\ j=1, \\cdots, d \\}, \\\\\n\\Lambda_{+,2} & := \\{(k_1,\\cdots,k_d)\\in\\mathbb Z^d : 0\\leq k_j \\leq n_j,\\ j=1,\\cdots,d\\}\n\\end{aligned}\n\\end{equation*}\nare two index sets with positive integers $m_j, n_j$ given for $j=1,\\dots,d$.\nThen the system \\eqref{eq:linear-stochastic-realization} can equivalently be described in the time domain as an autoregressive moving-average (ARMA) model\n\\begin{equation}\\label{eq:ARMA recursion}\n\t\\sum_{\\mathbf k \\in \\Lambda_{+,1}} a_{\\mathbf k}\\, y(\\mathbf x-\\mathbf k) = \\sum_{\\mathbf k \\in \\Lambda_{+,2}} b_{\\mathbf k}\\, w(\\mathbf x-\\mathbf k).\n\\end{equation}\nSuch a model is extremely useful in practice because rational functions (in fact, polynomials) can approximate any continuous function if the model order is sufficiently large. If the moving-average part of the model is trivial, i.e., $b(\\mathbf z)\\equiv b_{\\mathbf 0}$ is a constant, then \\eqref{eq:ARMA recursion} reduces to a simpler AR model. 
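To make the notation concrete, here is a minimal illustrative instance (with the normalization $a_{\\mathbf 0}=1$, which we assume for the AR coefficients): take $d=2$, AR orders $(m_1,m_2)=(1,1)$, and a trivial moving-average part $b(\\mathbf z)\\equiv b_{\\mathbf 0}$. Then the recursion \\eqref{eq:ARMA recursion} reads
\\begin{equation*}
y(x_1,x_2) = b_{\\mathbf 0}\\, w(x_1,x_2) - a_{(1,0)}\\, y(x_1-1,x_2) - a_{(0,1)}\\, y(x_1,x_2-1) - a_{(1,1)}\\, y(x_1-1,x_2-1),
\\end{equation*}
so that each new sample is obtained from three previously computed samples and one white noise value.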
\n\n\n\n\nIn the above context, Problem~\\ref{prob_sampling_general} can be posed more concretely as follows:\n\n\\begin{problem}\\label{prob_realization}\t\n\tGiven a sampled covariance function $\\rho_{\\mathrm{s}}(\\mathbf k)$ in \\eqref{rho_sampled}, find a rational filter $W(\\mathbf z)$ of the form \\eqref{eq:transfer_func} such that when it is fed with a normalized white noise, the covariance function of the output $y(\\mathbf x)$ coincides with $\\rho_{\\mathrm{s}}(\\mathbf k)$.\n\tEquivalently, we seek a rational spectral density satisfying the trigonometric moment equations\n\t\\begin{equation}\\label{d-dim_moment_eqns}\n\t\\int_{\\mathbb T^d} e^{i\\innerprod{\\mathbf k}{\\boldsymbol{\\theta}}} \\Phi(e^{i\\boldsymbol{\\theta}}) \\d\\mu(\\boldsymbol{\\theta}) = \\rho_{\\mathrm{s}}(\\mathbf k)\\quad \\forall \\mathbf k\\in\\mathbb Z^d\n\t\\end{equation}\n\twhere $\\d\\mu(\\boldsymbol{\\theta}) = \\frac{1}{(2\\pi)^d} \\prod_{j=1}^{d} \\d\\theta_j$ is the normalized Lebesgue on $\\mathbb T^d$.\n\\end{problem}\n\nNotice that the equivalence above is understood modulo the spectral factorization \\eqref{eq:Phi=w(z)w(z^-1)}.\nOnce the above problem is solved, samples of the random field $y_{\\mathrm{s}}(\\mathbf x)$ for $\\mathbf x$ indexed in \\eqref{ind_set_RF} can be generated efficiently via the ARMA recursion \\eqref{eq:ARMA recursion} with a white noise input.\n\n\n\nNext, in view of the decoupling assumption \\eqref{rho_sampled}, it follows easily from the multidimensional DTFT that the corresponding spectral density ${\\Phi}(e^{i\\boldsymbol{\\theta}})$ also has a decoupled form \n\\begin{equation}\\label{Spe-deco}\n\t{\\Phi} (e^{i\\boldsymbol{\\theta}}) = {\\Phi}_1(e^{i\\theta_1})\\, \\cdots\\, {\\Phi}_d(e^{i\\theta_d}),\n\\end{equation}\nwhere the factor ${\\Phi}_j(e^{i\\theta_j})$ can be interpreted as the spectral density in the $j$-th dimension for $j=1,\\cdots,d$, i.e., it is the $1$-d DTFT of the sampled covariance function $\\rho_{\\mathrm{s},j}(k_j):=\\rho_j (k_j T_j)$.\nTherefore, the $d$-dimensional moment equations \\eqref{d-dim_moment_eqns} decouple into $d$ sets of unidimensional moment equations\n\\begin{equation}\\label{one-d_moment_eqns}\n\\int_{\\mathbb T} e^{ik_j\\theta_j} \\Phi_j(e^{i\\theta_j}) \\frac{\\d\\theta_j}{2\\pi} = \\rho_{\\mathrm{s},j}(k_j) \\quad \\forall k_j\\in\\mathbb Z,\\ j=1,\\dots,d,\n\\end{equation}\nwhose solutions have been extensively studied in the literature.\nAfter each $\\Phi_j$ has been constructed from the covariance function $\\rho_{\\mathrm{s},j}$, we can perform the spectral factorization $\\Phi_j (z_j) = W_j (z_j) W_j (z_j^{-1})$ to obtain the transfer function $W_j(z_j)$. Notice that since $\\Phi_j$ is constrained to be rational, the spectral factorization reduces to that for positive trigonometric polynomials, for which there are a number of algorithms \\cite{sayed2001survey}.\nHence, the above procedure leads to a $d$-dimensional ARMA model\n\\begin{equation}\\label{d-dimensional ARMA model}\n\ty(\\mathbf x) = \\underbrace{W_1(z_1) \\, \\cdots\\, W_d(z_d)}_{=: W(\\mathbf z)} \\, w(\\mathbf x)\n\\end{equation}\nagain in a decoupled form. 
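In practice, the decoupled form of \\eqref{d-dimensional ARMA model} means that the filtering can be carried out one dimension at a time: for $d=2$, for instance, one first computes $y_1(\\mathbf x) = W_1(z_1)\\, w(\\mathbf x)$ by running the one-dimensional recursion in $x_1$ for every fixed $x_2$, and then obtains $y(\\mathbf x) = W_2(z_2)\\, y_1(\\mathbf x)$ by a second recursion in $x_2$; this is the cascade depicted later in Fig.~\\ref{Fig:2-cascaded filter}.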
\nThe main steps of our approach for sampling stationary random fields are summarized as follows.\n\\begin{enumerate}\n\t\\item[1)] Given a sampled covariance function \\eqref{rho_sampled}, solve the decoupled moment equations \\eqref{one-d_moment_eqns} for a rational spectral density of the form \\eqref{Spe-deco}.\n\t\\item[2)] Do spectral factorization to obtain the linear filter in \\eqref{d-dimensional ARMA model}.\n\t\\item[3)] Feed the filter with a Gaussian i.i.d.~white noise and collect the output random field.\n\\end{enumerate}\n\n\n\nIn the next subsections, we focus on the nontrivial Step 1 with two types of covariance functions mentioned before, i.e., the exponential and Gaussian covariance functions. It is worth remarking that the case of the exponential covariance function admits an \\emph{exact} rational spectrum and hence a shaping filter in a \\emph{closed form}.\nAlthough the Gaussian covariance function does not lead to an analytic solution, it can be well \\emph{approximated} in the frequency domain by a rational spectral density in the sense that only low-order covariances with significant values are matched in \\eqref{one-d_moment_eqns}. \n\n\n\\subsection{Solution for the exponential covariance function: an AR$(1)$ model}\\label{subsec-expon}\n\n\nUnder the decoupling assumption for the $d$-dimensional covariance function, we only need to solve Problem~\\ref{prob_realization} for each sampled covariance function $\\rho_{\\mathrm{s},j}(k_j)=\\rho_j (k_j T_j)$ of one variable, as discussed previously. Hence, we suppress the subscript $j$.\nConsider first the exponential covariance function in \\eqref{eq:exponential function}, i.e., $\\rho(x) = \\sigma^2 e^{-\\alpha|x|}$ with $x\\in\\mathbb R$. We can for simplicity take $\\sigma^2=1$ since it is only a multiplicative constant. Let $T>0$ be the sampling distance, and we have\n\\begin{equation}\\label{sampled_cov_exp}\n\\rho_{\\mathrm{s}}(k) := \\rho(k T) = e^{-\\alpha T|k|},\\quad k\\in \\mathbb Z.\n\\end{equation}\nLet $r=e^{-\\alpha T}$.\nUsing the DTFT pair\n\\begin{equation}\n\tx(k) = r^{k}u(k) \\xmapsto{\\mathcal{F}} X(e^{i\\theta}) = \\frac{1}{1 - r e^{-i\\theta}} \n\\end{equation}\nwith $0<|r|<1$ and $u(k)$ the discrete-time unit step function, the spectral density corresponding to $\\rho_\\mathrm{s}(k)$ is\n\\begin{equation}\\label{Phi_posi_real}\n\t{\\Phi}(e^{i\\theta}) = Z(e^{i\\theta}) + Z(e^{i\\theta})^* = 2\\Re\\{ Z(e^{i\\theta})\\},\n\\end{equation}\nwhere $^*$ denotes complex conjugate and\n\\begin{equation}\n\tZ(e^{i\\theta}) = \\frac{1}{1 - e^{-\\alpha T} e^{-i\\theta}} - \\frac{1}{2}\n\\end{equation}\nis the so-called \\emph{positive real} part of ${\\Phi} (e^{i\\theta})$. Notice that $Z(e^{i\\theta})$ admits an \\emph{analytic extension} as a rational function\n\\begin{equation}\n\tZ(z) = \\frac{1}{1 - e^{-\\alpha T} z^{-1}} - \\frac{1}{2},\\quad z\\in\\mathbb C,\n\\end{equation}\nso that ${\\Phi}(z)$ in \\eqref{Phi_posi_real} is rational as well. Then after some straightforward calculations, we obtain a transfer function\n\\begin{equation}\\label{eq:Expo-W}\nW(z)=\\frac{(1-e^{-2\\alpha T})^{\\frac{1}{2}}}{1-e^{-\\alpha T} z^{-1}}\n\\end{equation}\nwhich is a \\emph{stable and minimum-phase} spectral factor of $\\Phi(z)$, namely\n\\begin{equation}\n\t{\\Phi}(z) = W(z) \\, W(z^{-1})\n\\end{equation}\nwhere the identity is valid in a neighborhood of the unit circle.\nWe see that $W(z)$ corresponds to an AR model of order one that depends on the parameter $\\alpha_\\mathrm{s}:=\\alpha T$. 
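As a quick sanity check, the time-domain recursion corresponding to \\eqref{eq:Expo-W} is $y(k) = e^{-\\alpha T}\\, y(k-1) + (1-e^{-2\\alpha T})^{\\frac{1}{2}}\\, w(k)$, whose stationary solution is $y(k) = (1-e^{-2\\alpha T})^{\\frac{1}{2}} \\sum_{j\\geq 0} e^{-j\\alpha T}\\, w(k-j)$. A direct computation of the output covariance gives, for $\\ell\\geq 0$,
\\begin{equation*}
{\\mathbb E}\\, [y(k+\\ell)\\, y(k)] = (1-e^{-2\\alpha T}) \\sum_{j\\geq 0} e^{-(j+\\ell)\\alpha T}\\, e^{-j\\alpha T} = e^{-\\alpha T \\ell}
\\end{equation*}
(and $e^{-\\alpha T|\\ell|}$ for all $\\ell$ by symmetry), which coincides with the sampled covariance \\eqref{sampled_cov_exp} (recall that $\\sigma^2=1$ here). Hence the AR$(1)$ filter solves Problem~\\ref{prob_realization} exactly in this case.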
\nIn consequence, samples of the discrete random field $y_\\mathrm{s}(\\mathbf x)$ can be generated by filtering a white noise through $d$ cascaded AR$(1)$ models, one for each dimension. \nObviously, the algorithm achieves a very low computational cost because each AR component has only order one.\nCompared with \\cite{li2019stepwise}, our approach has a more concise frequency-domain interpretation and a neater time-domain implementation that avoids factorization of large matrices.\nMoreover, the model can be reused for computing multiple realizations of the random filed because the filter $W(z)$ remains unchanged with a given covariance function when the sampling distance is fixed. Although the Cholesky factor in the stepwise CMD method \\cite{li2019stepwise} can also be reused, our method requires much less storage since only the filter coefficients need to be stored.\nIn addition, our model can be used to compute samples of an \\emph{arbitrary} size. On the contrary, the CMD has to be redone from the start if the sample size changes.\nA comparison of the computational complexity between different algorithms will be given at the end of this section.\n\n\n\\subsection{Solution for the Gaussian covariance function: an ARMA model}\n\n\nUnlike the case with an exponential covariance function, the problem with a general covariance function may not have an exact analytic solution. \nIn this subsection, we discuss the case with an Gaussian covariance function in \\eqref{eq:Gaussian function}. The corresponding sampled version is\n\\begin{equation}\\label{sampled_cov_Gauss}\n\\rho_{\\mathrm{s}}(k) := \\rho(k T) = \\sigma^2 e^{-\\alpha T^2 |k|^{2}},\\quad k\\in \\mathbb Z.\n\\end{equation}\nIt is well known that the (continuous-time) Fourier transform of a Gaussian function is another Gaussian function which is certainly nonrational. We speculate that the same happens for the discrete samples of a Gaussian function, i.e., the corresponding spectral density is \\emph{not} a rational function. Therefore, an ARMA representation for such a covariance function must by nature be approximate.\n\n\nThe Gaussian function \\eqref{eq:Gaussian function} decays fast as $|x|$ increases. So a natural idea is to construct a rational spectral density \n\\begin{equation}\\label{eq:rational spectral}\n\\Phi(e^{i\\theta})=\\frac{P(e^{i\\theta})}{Q(e^{i\\theta})}\n\\end{equation}\nthat matches a \\emph{finite} number of low-order (dominant) covariances,\nwhere $P$ and $Q$ are positive symmetric trigonometric polynomials. \nThere are a number of solution techniques for this \\emph{rational covariance extension} problem, see e.g., \\cite{byrnes2001finite,Georgiou-L-03,RFP-09,FMP-12,Z-14,Z-14rat,ZFKZ2020M2-SIAM}. 
In the paper, we adopt the following \\emph{generalized maximum entropy} formulation \\cite{byrnes2001finite,ringh2016multidimensional}:\n\\begin{equation}\\label{Maximum entropy}\n\t\\begin{aligned}\n\t\t&\\underset{\\Phi>0}{\\max}\\ \\int_{\\mathbb T} P(e^{i\\theta}) \\log \\Phi(e^{i\\theta}) \\frac{\\d \\theta}{2\\pi} \\\\\n\t\t&\\mathrm{s.t.}\\quad \\sigma_k = \\int_{\\mathbb T}e^{ik \\theta}\\Phi(e^{i\\theta})\\frac{\\d \\theta}{2\\pi}\\quad \\forall k\\in\\Lambda,\n\t\\end{aligned}\n\\end{equation}\nwhere,\n\\begin{itemize}\n\t\\item $P$ is a \\emph{known} positive symmetric polynomial which is constructed from a \\emph{given} factor $b(z):=\\sum_{k=0}^{n} b_{k}z^{-k}$, i.e., $P(z)=b(z)b(z^{-1})$;\n\t\\item the index set is defined as $\\Lambda:=\\{-m,\\dots,-1,0,1,\\dots, m\\}$ such that $m$ is a user-specified positive integer,\n\t\\item $\\sigma_k$'s are the covariance data evaluated from the covariance function, namely $\\sigma_k = \\rho_{\\mathrm{s}}(k)$.\n\\end{itemize}\nMore precisely, we choose $m$ to be the smallest positive integer such that $\\rho_{\\mathrm{s}}(k)$ is practically zero for all $k>m$. In other words, the approximation procedure takes into account the covariances with significant values and discards the rest.\nThe optimization problem \\eqref{Maximum entropy} is convex and has a unique solution $\\Phi=P\/\\hat{Q}$, where $\\hat{Q}$ is the optimal solution of the dual problem\n\\begin{equation}\\label{eq:dual-problem}\n\t\\underset{Q>0}{\\min}\\ \\innerprod{\\boldsymbol{\\sigma}}{\\mathbf q}-\\int_{\\mathbb T} P(e^{i\\theta})\\log Q(e^{i\\theta}) \\frac{\\d\\theta}{2\\pi},\n\\end{equation}\nwhere $\\mathbf q:=\\{q_k\\}_{k \\in\\Lambda}$ are Lagrange multipliers, $\\innerprod{\\boldsymbol{\\sigma}}{\\mathbf q}:=\\sum_{k\\in \\Lambda}\\sigma_k q_k$ denotes the inner product, and $Q(e^{i\\theta}):=\\sum_{k =-m}^m q_k e^{-i k\\theta}$ is a symmetric trigonometric polynomial.\nFor technical details we refer readers to \\cite{byrnes2001finite,ringh2016multidimensional}. \nThe reason for choosing this formulation is that we want to obtain a rational $\\Phi(e^{i\\theta})$ of the form \\eqref{eq:rational spectral} which is directly connected to the ARMA model via spectral factorization.\nMore precisely, the polynomial $a(z)=\\sum_{k=0}^{m} a_{k}z^{-k}$ corresponding to the AR coefficients can be determined via the Bauer method \\cite{sayed2001survey} for \nfactoring $\\hat{Q}(z) = \\sum_{k=-m}^{m} \\hat{q}_{k}z^{-k}$ where $\\hat{q}_k$'s are the optimal Lagrange multipliers. \n\n\n\n\nIn this way, once the rational spectral density $\\Phi(z)$ and the filter \n\\begin{equation}\\label{W_z_ARMA}\nW(z) = \\frac{b(z)}{a(z)} =\\frac{\\sum_{k=0}^{n} b_{k}z^{-k}}{\\sum_{k=0}^{m} a_{k}z^{-k}}\n\\end{equation}\nare constructed in each dimension, samples of the random field $y_\\mathrm{s}(\\mathbf x)$ can be generated in the same fashion as described in the previous subsection, now via the cascaded ARMA recursions. Each ARMA recursion has a fixed computational cost (though larger than that of the AR$(1)$ model) related to the model order $(m, n)$ which is chosen small. \n\n\nNext, we focus on the computational complexity of our approach to Problem~\\ref{prob_realization}.\nFor practical applications, we are primarily interested in the $3$-d case. Assume that we are asked to generate samples of a random field with a size $\\mathbf N=(N_1,N_2,N_3)\\in\\mathbb Z^3$, see \\eqref{ind_set_RF}, and each ARMA model in the cascade has order $(m_j,n_j)$ for $j=1,2,3$. 
It is then common practice to generate samples of a slightly larger size $\\mathbf M=(M_1, M_2, M_3)$ which can be taken as $(1+\\beta)\\mathbf N$, say with $\\beta=0.1$, in order to reduce the ``transient'' effect of filtering caused by an artificial boundary condition.\nFor the exponential covariance function, the computational cost of the algorithm is proportional to the product $M_1M_2M_3$, so the complexity is $O(N_1N_2N_3)$ in which we have absorbed the constant $1+\\beta$ into the capital $O$ notation.\nFor the Gaussian covariance function which corresponds to an ARMA model, the algorithm still mainly runs in $O(N_1N_2N_3)$ flops because the computational cost of solving a small-size convex optimization problem \\eqref{eq:dual-problem} is far lower than that of implementing the ARMA recursion.\n\nIn Table~\\ref{tab:table_Compu} we state the computational costs of different methods: traditional CMD by Cholesky decomposition, stepwise CMD, circulant embedding method, and our stochastic realization approach, where an instance with specific numbers is also shown for clarity.\n\\begin{table*}[t]\n\t\\begin{center}\n\t\t\\caption{\\centering The computational complexity of different methods in the $3$-d case. In the particular example, we have $\\mathbf N=(100,100,100)$, $\\mathbf C=(512,512,512)$, and $\\mathbf M=1.1\\mathbf N$.}\n\t\t\\label{tab:table_Compu}\n\t\t\\begin{tabular}{lll} \n\t\t\t\\toprule \n\t\t\tMethods& Major computational complexity&Example (flops) \\\\\n\t\t\t\\midrule \n\t\t\tCMD&$O(N_1^{3}N_2^{3}N_3^{3}) $ &$1.00\\times10^{18}$\\\\\n\t\t\tStepwise CMD &$O(N_1N_2N_3(N_1+N_2+N_3))$&$3.00\\times10^8$\\\\\n\t\t\tCirculant Embedding&$O(C_1 C_2 C_3 \\log_{2} (C_1 C_2 C_3))$ &$3.62\\times 10^9$\\\\\n\t\t\tStochastic Realization&$O(N_1N_2N_3)$&$1.00\\times10^6$\\\\\n\t\t\t\\bottomrule \n\t\t\\end{tabular}\n\t\\end{center}\n\\end{table*}\nIt can be seen that our method has the lower computational complexity which is \\emph{linear} in the number of samples.\n\n\n\n\\section{Application to multiscale simulations}\\label{Sec:Multi-scale random fields}\n\n\nIn some applications, e.g., the analysis of certain geomaterial properties \\cite{chen2012characterization,chen2016cpt}, we often face a challenging problem that \\emph{refinements} of the generated samples of the random field are needed across several scales, where the spatial variability, i.e., the covariance function is expected to be maintained. \nHere by ``a fine-scale simulation'', we mean obtaining samples of the random field on a denser grid by \\emph{interpolating} the already generated coarse-scale realization, instead of generating fine-scale samples from scratch. Our approach is particularly suitable for such a purpose as we shall describe shortly.\n\n\n\nFor simplicity, we assume that the sampling distance $T$ (which is the scale parameter) in \\eqref{sampled_cov_exp} and \\eqref{sampled_cov_Gauss} is halved in each fine-scale simulation, i.e., $T_1=\\frac{1}{2}T$, $T_2=\\frac{1}{2}T_1=\\frac{1}{4}T$, where the subscript denotes the number of fine-scale simulations that we perform. We want to point out that the model parameters in \\eqref{eq:Expo-W} and \\eqref{W_z_ARMA} in general \\emph{change} across different simulation scales since they depend on $T$, although the model order one is maintained in the case of an exponential covariance function. 
The change of the AR coefficients in \\eqref{W_z_ARMA} is more implicit and comes from the fact that the covariance data $\\sigma_k$'s are changed in a fine-scale simulation. In any case, once the fine-scale model \\eqref{eq:Expo-W} or \\eqref{W_z_ARMA} is constructed, the interpolation of the samples of the random field can be carried out by the same filtering technique as in the generation of the coarse-scale samples, \\emph{when} suitable boundary conditions and the fine-scale noise input (which is not i.i.d.~any more) are provided. \n\n\\subsection{The boundary conditions}\n\nThe determination of such boundary conditions is rather standard.\nSuppose that the coarse-scale samples of the random field are collected into a column vector $\\mathbf y_1$ and the \\emph{boundary} random variables that are needed to initiate the fine-scale filtering are collected in $\\mathbf y_2$. \nHere we want to remark on the special $1$-d case with the exponential covariance function, where the coarse-scale samples are themselves boundary conditions for the fine-scale simulation due to the AR$(1)$ model structure. Hence, all one needs to do is to generate the white noise samples on the new grid points and to implement the filtering. In general however, some boundary values of the fine-scale samples in $\\mathbf y_2$ are needed in order to start the multidimensional filtering, as shown in Fig.~\\ref{Fig:2-d-multi-schematic-figure} for the $2$-d case where the two cascaded filters are both AR$(1)$.\n\\begin{figure}[t!]\n\t\\begin{center}\n\t\t\\includegraphics[width=10cm]{Fig_Multi_scale\/2d-example.eps} \n\t\t\\caption{A schematic figure for the boundary conditions in a multiscale simulation, where the number of values in boundary locations represents only a small fraction of the total number of unknown variables.}\n\t\t\\label{Fig:2-d-multi-schematic-figure}\n\t\\end{center}\n\\end{figure}\n\nLet $\\mathbf y$ be the joint vector such that\n\\begin{equation}\\label{condi_normal}\n\\mathbf y=\\left[ \\begin{matrix} \\mathbf y_1\\\\\\mathbf y_2\\end{matrix} \\right]\\sim \\mathcal{N}(\\left[ \\begin{matrix} \\mathbf{0}\\\\ \\mathbf{0}\\end{matrix} \\right], \\left[ \\begin{matrix}\\boldsymbol{\\Sigma}_{11}&\\boldsymbol{\\Sigma}_{12}\\\\ \\boldsymbol{\\Sigma}_{21}&\\boldsymbol{\\Sigma}_{22}\\end{matrix} \\right]),\n\\end{equation} \nwhere we have introduced explicitly the Gaussianness assumption for the random field, and the matrix on the right side is\nthe covariance matrix $\\boldsymbol{\\Sigma}$ of $\\mathbf y$ which is evaluated using the given covariance function\\footnote{Recall that the locations of samples in $\\mathbf y$ are known.} $\\rho(\\boldsymbol{\\tau})$ in \\eqref{eq:decoupled form} and partitioned in accordance with $\\mathbf y_1$ and $\\mathbf y_2$.\nThen the unknown random vector $\\mathbf y_2$ conditioned on $\\mathbf y_1=\\mathbf a$ still has a multivariate normal distribution $(\\mathbf y_2|\\mathbf y_1=\\mathbf a)\\sim \\mathcal{N}(\\bar{\\boldsymbol{\\mu}},\\bar{\\boldsymbol{\\Sigma}})$ where\n\\begin{equation}\\label{eq:mean-vector}\n\t\\bar{\\boldsymbol{\\mu}}=\\boldsymbol{\\Sigma}_{21}\\boldsymbol{\\Sigma}_{11}^{-1}\\mathbf a,\n\\end{equation}\nand\n\\begin{equation}\\label{eq:correlation-matrix}\n\t\\bar{\\boldsymbol{\\Sigma}}=\\boldsymbol{\\Sigma}_{22}-\\boldsymbol{\\Sigma}_{21}\\boldsymbol{\\Sigma}^{-1}_{11}\\boldsymbol{\\Sigma}_{12}.\n\\end{equation}\nThe matrix $\\bar{\\boldsymbol{\\Sigma}}$ is known as the \\emph{Schur complement} of $\\boldsymbol{\\Sigma}_{11}$ in 
$\\boldsymbol{\\Sigma}$. It is positive definite because so is $\\boldsymbol{\\Sigma}$. \nThus, the unknown boundary values $\\mathbf y_{2}$ in a fine-scale simulation can be computed via:\n\\begin{equation}\n\\mathbf y_{2}=R\\mathbf e+\\bar{\\boldsymbol{\\mu}},\n\\end{equation}\nwhere the matrix $R$ constitutes a \\emph{rank factorization} of $\\bar{\\boldsymbol{\\Sigma}}$, namely $\\bar{\\boldsymbol{\\Sigma}}=R R^\\top$ which can be computed from the spectral decomposition plus truncation, and $\\mathbf e$ is an i.i.d.~standard normal vector. \n\n\t\n\\subsection{The noise input}\n\nObviously, samples on the fine-scale depend upon the white noise input which cannot be randomly generated any more because of the existence of the coarse-scale samples as interpolation conditions. Thus it is reasonable to \\emph{determine} the white noise before implementing ARMA recursion, and we report a particular solution for the case with exponential covariance functions, namely,\n\t\\begin{equation*}\\label{Exponential-multi-scale}\n\t\t\\rho_{\\mathrm{s}} (k_1,k_2) = \\sigma^2 e^{-\\alpha_1 T_1 |k_1|-\\alpha_2 T_2 |k_2|}, \\quad (k_1,k_2)\\in\\mathbb Z^2.\n\t\\end{equation*}\nHere we focus on $2$-d random fields which are of engineering interest in e.g., \\citep{chen2016cpt,chen2012characterization}, also for the simplicity of presentation. In principle, a similar procedure should work for the higher dimensional cases but the calculations will necessarily be more complicated.\n\n\n\n\t\\begin{figure}\n\t\t\\centering\n\t\t\\begin{tikzpicture}[node distance=2cm,auto,>=latex']\n\t\t\\node[draw, coordinate] \t\t(start) {};\n\t\t\\node[draw, right=of start, minimum size=2em] (step 1) {$W_1(z_1)$};\n\t\t\\node[draw, right=of step 1, minimum size=2em] (step 2) {$W_2(z_2)$};\n\t\t\\node[draw, coordinate, right=of step 2]\t\t\t(end)\t {};\n\t\t\\draw[->] (start) -- node[above] {$w(s,t)$} (step 1);\n\t\t\\draw[->] (step 1) -- node[above] {$y_1(s,t)$} (step 2);\n\t\t\\draw[->] (step 2) -- node[above] {$y(s,t)$} (end);\n\t\t\\end{tikzpicture}\n\t\t\\caption{The two cascaded linear stochastic system with given white noise input.}\n\t\t\\label{Fig:2-cascaded filter}\n\t\\end{figure}\n\n\tAssume that we have computed the fine-scale samples in boundary locations, and we know the coarse-scale white noise and samples which have a size $(N_1,N_2)$.\n\tWe aim to interpolate the fine-scale input $w(s,t)$ and then samples $y(s,t)$ for $0 \\leq s \\leq 2(N_1 - 1),\\ 0 \\leq t \\leq 2(N_2 - 1)$. In particular, the indices $s$ and $t$ of the coarse-scale samples are \\emph{even} numbers in the fine-scale realization.\n\tReferring to Fig.~\\ref{Fig:2-cascaded filter}, the two cascaded AR(1) filters of the fine-scale random field are written as\n\t\\begin{equation*}\n\tW_1(z_1)=\\frac{b}{1-az_1^{-1}},\\quad W_2(z_2)=\\frac{d}{1-cz_2^{-1}}.\n\t\\end{equation*}\n\tThe input $w$, the intermediate output $y_1$, and the output $y$ are related via\n\t\\begin{equation*}\n\ty_1(s,t)= W_1(z_1) w(s,t),\\quad y(s,t)=W_2(z_2) y_1(s,t).\n\t\\end{equation*}\n\tApply twice the AR recursion, we have\n\t\\begin{equation}\\label{fine-scale w}\n\t\tw(s-1,t)=\\frac{1}{ab} y_1(s,t) - \\frac{a}{b} y_1(s-2,t) - \\frac{1}{a} w(s,t).\n\t\\end{equation}\n If we take $s = 2k$ and $t=2\\ell$ with $k=1, \\cdots, N_1-1$ and $\\ell=1, \\cdots, N_2-1$, $w(2k,2\\ell)$ is known as the coarse-scale noise input. 
Then $w(2k-1,2\\ell)$ can be computed directly via \\eqref{fine-scale w} where\n\t\\begin{equation*}\n\t\ty_1(2k,2\\ell) = \\frac{1}{d'} y(2k,2\\ell) - \\frac{c'}{d'} y(2k-2,2\\ell),\n\t\\end{equation*}\n\twhere $c'$ and $d'$ correspond to the filter $W'_2(z_2)$ of coarse-scale random field, and the $y$ samples involved are from the coarse-scale realization.\n\tFor odd $t=2\\ell-1$, $w(2k,2\\ell-1)$ is unknown and \\eqref{fine-scale w} can be rewritten as a linear equation\t\n\t\\begin{equation}\\label{linear-equation}\n\t\tA\\mathbf w = \\mathbf b\n\t\\end{equation}\n\twith\n\t\\begin{equation*}\n\t\tA=\\left[ \\begin{matrix} a&1&0&0&\\cdots&0&0\\\\\n\t\t0&0&a&1&\\cdots&0&0\\\\\n\t\t\\vdots &\\vdots &\\vdots& \\vdots& \\ddots& \\vdots&\\vdots\\\\\n\t\t0&0&0&0&\\cdots&a&1\\end{matrix} \\right],\\,\n\t\t\\mathbf w=\\left[ \\begin{matrix} w(1,t)\\\\w(2,t)\\\\ \\vdots\\\\ w(2N_1-2,t)\\end{matrix} \\right],\\,\n\t\t\\mathbf b=\\frac{1}{b} \\left[ \\begin{matrix} y_1(2,t)\\\\ y_1(4,t)\\\\ \\vdots\\\\ y_1(2N_1-2,t) \\end{matrix} \\right] - \\frac{a^2}{b} \\left[ \\begin{matrix} y_1(0,t)\\\\ y_1(2,t)\\\\ \\vdots\\\\ y_1(2N_1-4,t) \\end{matrix} \\right],\n\t\\end{equation*}\n\twhere the coefficient matrix $A$ has a size $(N_1 - 1) \\times (2N_1-2)$, and $y_1 (2k,2\\ell-1)$ is computed via\n\t\\begin{equation*}\n\t\ty_1(2k,2\\ell-1)= \\frac{1}{cd} y(2k,2\\ell) - \\frac{c}{d} y(2k,2\\ell-2) - \\frac{1}{c} y_1(2k,2\\ell),\n\t\\end{equation*}\n\twhere again the right-hand side requires only the coarse-scale samples.\n The equation \\eqref{linear-equation} has infinitely many solutions since $A$ is of full row rank. In order to make the problem well-posed, we notice a \\emph{dual} linear equation by swapping $W_1(z_1)$ and $W_2(z_2)$ in the filtering which obviously does not change the final output $y$. The intermediate output is, however, different and we write it as\n \\begin{equation*}\n y_2(2k,2\\ell)= \\frac{1}{b'}y(2k,2\\ell)- \\frac{a'}{b'} y(2k,2\\ell-2)\n \\end{equation*}\n where $a'$ and $b'$ correspond to the filter $W'_1(z_1)$ of coarse-scale random field.\n We can now compute the components $w(2k,2\\ell-1)$ in the vector $\\mathbf w$ as\n\t\\begin{equation*}\n\t\tw(2k,2\\ell-1)= \\frac{1}{cd} y_2(2k,2\\ell) -\\frac{c}{d} y_2(2k,2\\ell-2) -\\frac{1}{c} w(2k,2\\ell)\n\t\\end{equation*}\n in the same fashion as computing $w(2k-1,2\\ell)$ from \\eqref{fine-scale w}. Substitute the result back into \\eqref{linear-equation}, we get the rest components.\n\t\n\t\n\n\n\n\n\n\n\t\nIn this way, the fine-scale samples can be generated via the AR recursion easily as the values in the boundary locations and the white noise input have been computed.\nWe would like to point out that unlike traditional methods which generate \\emph{all} the fine-scale samples (which could be a lot) using the conditional distribution \\eqref{condi_normal}, see e.g., \\citet[Sec.~2.4]{chen2012characterization}, \nwe only need $\\mathbf y_2$ in boundary locations (whose number is much smaller, see Fig.~\\ref{Fig:2-d-multi-schematic-figure}).\nTherefore, our procedure involves multiplications and inversions of matrices of smaller sizes, which significantly improves computational efficiency. 
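For completeness, a minimal numerical sketch of the boundary-value step, i.e., of evaluating \\eqref{eq:mean-vector}, \\eqref{eq:correlation-matrix}, and $\\mathbf y_{2}=R\\mathbf e+\\bar{\\boldsymbol{\\mu}}$, is given below. It is written in Python with NumPy and assumes that the partitioned covariance blocks and the coarse-scale vector $\\mathbf a$ have already been assembled from the given covariance function.
\\begin{verbatim}
import numpy as np

def sample_boundary(Sigma11, Sigma12, Sigma21, Sigma22, a, tol=1e-10, rng=None):
    # Draw y2 conditioned on y1 = a:  y2 | y1 = a ~ N(mu_bar, Sigma_bar).
    rng = np.random.default_rng() if rng is None else rng
    # conditional mean: mu_bar = Sigma21 Sigma11^{-1} a
    mu_bar = Sigma21 @ np.linalg.solve(Sigma11, a)
    # Schur complement: Sigma_bar = Sigma22 - Sigma21 Sigma11^{-1} Sigma12
    Sigma_bar = Sigma22 - Sigma21 @ np.linalg.solve(Sigma11, Sigma12)
    # rank factorization R R^T = Sigma_bar via spectral decomposition + truncation
    eigval, eigvec = np.linalg.eigh(Sigma_bar)
    keep = eigval > tol
    R = eigvec[:, keep] * np.sqrt(eigval[keep])
    # y2 = R e + mu_bar with e an i.i.d. standard normal vector
    e = rng.standard_normal(R.shape[1])
    return R @ e + mu_bar
\\end{verbatim}
The matrices involved have sizes set by the numbers of coarse-scale samples and of boundary locations only, which is precisely what keeps this step cheap compared with conditioning on the full fine-scale grid.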
After the boundary values of the fine-scale random field are computed and the white noise input is obtained, the interpolation of the coarse-scale samples can be accomplished again at a linear cost.\n\n\n\\section{Numerical examples}\\label{Sec:Numerical examples}\n\nIn this section, we perform two sets of numerical simulations of our stochastic realization approach: one for sampling $3$-d random fields and the other includes multiscale simulations in the $2$-d case.\n\n\n\\subsection{Sampling $3$-d Gaussian random fields}\\label{Subsec:AR representation}\n\nFirst, we apply the stochastic realization approach to Problem~\\ref{prob_realization} with an exponential covariance function in three variables\n\\begin{equation}\\label{eq:exponential-covariance-function}\n\t\\rho_{\\mathrm{s}} (x,y,z)=\\sigma^2 e^{-\\alpha_1 T_1 |x|-\\alpha_2 T_2 |y|-\\alpha_3 T_3|z|},\\quad (x,y,z)\\in\\mathbb Z^3.\n\\end{equation}\nIn the following example, the size of the samples to be generated is $\\mathbf N = (100,100,100)$ for the random field $y_{\\mathrm{s}} (x,y,z)$, the variance is $\\sigma^2 = 1$, the parameter vector is $(\\alpha_1, \\alpha_2, \\alpha_3)=(1,1,1)$, and the vector of sampling distances along $x$-, $y$-, and $z$-directions is $(T_1, T_2, T_3) = (1\/12, 1\/10, 1\/8)$.\n\n\\begin{figure}[t]\n\t\\begin{center}\n\t\t\\includegraphics[width=8.4cm]{Fig_Exponential_Ran\/3-D-random.eps} \n\t\t\\caption{A $3$-d random field realization with an exponential covariance function. The size of the realization is $8.33$m $\\times 10.00$m $\\times 12.50$m in space with a total number of samples equal to $100^3 = 10^6$\n\t\t}\n\t\t\\label{Fig:Three-Dimensional random field}\n\t\\end{center}\n\\end{figure}\n\\begin{figure}[H]\n\t\\centering\n\t\\begin{minipage}[h]{0.48\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=6cm]{Fig_Exponential_Ran\/E-x-d.eps}\n\t\n\t\\end{minipage}\n\t\\begin{minipage}[h]{0.48\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=6cm]{Fig_Exponential_Ran\/E-y-d.eps}\n\t\n\t\\end{minipage}\n\t\\centering\n\t\\begin{minipage}[h]{0.48\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=6cm]{Fig_Exponential_Ran\/E-z-d.eps}\n\t\n\t\\end{minipage}\n\t\\begin{minipage}[h]{0.48\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=6cm]{Fig_Exponential_Ran\/E-d-d.eps}\n\t\n\t\\end{minipage}\n\t\\caption{The covariances versus distances along four directions of the $3$-d random field realization which contains 100 repeated trials. The red line denotes the given exponential covariance function and the blue lines are the corresponding sample covariance lags. By $x$-direction, we mean the section $[k_1,0,0]$ of the sample covariance array with $k_1=0,\\dots,N_1-1$. The other directions are understood similarly.}\n\t\\label{Fig:Four-direction covariance}\n\\end{figure}\n\nIt has been discussed in Subsection~\\ref{subsec-expon} that the exponential covariance function corresponds to an exact rational spectral density, and the filter in each dimension can be directly computed via \\eqref{eq:Expo-W}. \nThus the required samples of the random field can be generated easily by implementing three cascaded AR recursions \\eqref{d-dimensional ARMA model}. \nA realization of the random field is shown in Fig.~\\ref{Fig:Three-Dimensional random field} using the Matlab command \\verb*|slice|, where the spatial distances are defined as $\\tau_x=T_1|x|$, $\\tau_y = T_2|y|$, and $\\tau_z = T_3|z|$ along three directions. 
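For reproducibility, a minimal sketch of this sampling step is given below (Python with NumPy and SciPy; the parameter values are those listed above, and a hypothetical $10\\%$ enlargement of the sample size is used to absorb the filtering transient, cf.\\ the discussion at the end of Section~\\ref{Sec:Spectral analysis}).
\\begin{verbatim}
import numpy as np
from scipy.signal import lfilter

# parameters of the 3-d exponential example (sigma^2 = 1)
alpha = (1.0, 1.0, 1.0)
T = (1/12, 1/10, 1/8)
N = (100, 100, 100)
M = tuple(int(round(1.1 * n)) for n in N)   # enlarged size for the transient

rng = np.random.default_rng(0)
y = rng.standard_normal(M)                  # i.i.d. N(0,1) white noise w(x)

# cascade of three AR(1) filters W_j(z_j), one along each dimension,
# with a_j = exp(-alpha_j T_j) and b_j = sqrt(1 - exp(-2 alpha_j T_j))
for axis in range(3):
    a = np.exp(-alpha[axis] * T[axis])
    b = np.sqrt(1.0 - a**2)
    y = lfilter([b], [1.0, -a], y, axis=axis)

y = y[M[0]-N[0]:, M[1]-N[1]:, M[2]-N[2]:]   # keep an N1 x N2 x N3 block
\\end{verbatim}
Each filtering pass touches every sample once, so the total cost is linear in the number of generated samples, in line with Table~\\ref{tab:table_Compu}.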
\nNext, in order to verify the performance of our method, we also plot the \\emph{sample covariances} of the realization $y_{\\mathrm{s}}(x,y,z)$ versus spatial distances along $x$-, $y$-, $z$-, and the diagonal directions in Fig.~\\ref{Fig:Four-direction covariance} with $100$ repeated trials. \nThe diagonal direction is along the line $x=y=z$ with $0\\leq x \\leq N_1-1$. \n\\begin{figure}[t]\n\t\\begin{center}\n\t\t\\includegraphics[width=8.4cm]{Fig_Gaussian_Ran\/Gau-3D.eps} \n\t\t\\caption{A $3$-d random field realization with a Gaussian covariance function. The size of the realization is $2$m $\\times$ $3.13$m $\\times$ $5.56$m in space with a total number of samples equal to $50^3=1.25\\times 10^5$.}\n\t\t\\label{Fig:Three-Dimensional Gaussian random field}\n\t\\end{center}\n\\end{figure}\n\\begin{figure}[H]\n\t\\centering\n\t\\begin{minipage}{0.48\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=6cm]{Fig_Gaussian_Ran\/G-x-d.eps}\n\t\n\t\\end{minipage}\n\t\\begin{minipage}{0.48\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=6cm]{Fig_Gaussian_Ran\/G-y-d.eps}\n\t\n\t\\end{minipage}\n\t\\centering\n\t\\begin{minipage}{0.48\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=6cm]{Fig_Gaussian_Ran\/G-z-d.eps}\n\t\n\t\\end{minipage}\n\t\\begin{minipage}{0.48\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=6cm]{Fig_Gaussian_Ran\/G-d-d.eps}\n\t\n\t\\end{minipage}\n\t\\caption{The covariances versus distances along four directions of the $3$-d random field realization which contains 100 repeated trials. The red line denotes the given Gaussian covariance function and the blue lines are the corresponding sample covariance lags. By $x$-direction, we mean the section $[k_1,0,0]$ of the sample covariance array with $k_1=0,\\dots,N_1-1$. The other directions are understood similarly.}\n\t\\label{Fig:Four-direction covariance-Gaussian}\n\\end{figure}\nIn particular, the sample covariances of $y_{\\mathrm{s}} (x,y,z)$ are computed via the spatial average \\cite[Section~5]{ZFKZ2-M2-Aut}:\n\\begin{equation}\n\t\\hat\\sigma_\\mathbf k:=\\frac{1}{|\\mathbf N|}\\sum_{\\mathbf x} y_{\\mathrm{s}} (\\mathbf x+\\mathbf k){y_{\\mathrm{s}} (\\mathbf x)}\t\n\\end{equation}\nwhere $\\mathbf x=(x,y,z)$ and $|\\mathbf N| = N_1 N_2 N_3$. \nSince we have explicitly enforced covariance matching in Problem~\\ref{prob_realization}, it follows from the general covariance estimation theory \\cite{priestley1981spectral} that the sample covariances of the output of the filter $W(\\mathbf z)$ must be close to the values of the given covariance function when the sample size is sufficiently large,\nand this point is well illustrated in Fig~\\ref{Fig:Four-direction covariance}. \nWe remark that our result is visually much better than that reported in \\cite[Sec.~5.1]{li2019stepwise}.\n\n\n\nFurthermore, another simulation is performed for a Gaussian covariance function that has the form\n\\begin{equation}\\label{eq:Gaussian-random-produce}\n\t\\rho_{\\mathrm{s}} (x,y,z)=\\sigma^2 e^{-\\alpha_1 T_1^2 |x|^2- \\alpha_2 T_2^2 |y|^2- \\alpha_3 T_3^2 |z|^2}, \\quad (x,y,z)\\in\\mathbb Z^3.\n\\end{equation}\nThe parameters are reported as follows: $\\sigma^2=1$, $(\\alpha_1, \\alpha_2, \\alpha_3) = (1,1,1)$, and $(T_1, T_2, T_3) = (1\/5, 1\/4, 1\/3)$.\nIn this case, we set up an ARMA model to approximate the underlying spectral density. 
\nThe numerator polynomial $b(\\mathbf z)$ in \\eqref{eq:transfer_func} is specified by the user.\nHere for simplicity, we take $b(z_1,z_2,z_3) = \\prod_{j=1}^3 b_j(z_j)$ with a quite arbitrary $b_j(z_j)=1-0.2z_j^{-1}$ of order one and identical for $j=1,2,3$.\nThe orders of the denominator polynomials $a_j(z_j)$ are chosen to be $(m_1,m_2,m_3)=(8,7,7)$ which is a threshold for ``dominant'' covariances, i.e., large values of the covariance function. \nThen the approximate spectrum in each dimension is constructed via solving the optimization problem \\eqref{Maximum entropy}, where Newton's method is implemented, and the polynomial $a_j(z_j)$ is obtained from spectral factorization.\nThe results are shown in Figs.~\\ref{Fig:Three-Dimensional Gaussian random field} and \\ref{Fig:Four-direction covariance-Gaussian} similar to the previous two figures.\nSince the above Gaussian covariance function decays fast as the spatial distance increases, we set the sample size $\\mathbf N=(50,50,50)$ in order to show more details in the figures. \nIt can be seen that the sample covariances are very close to the given Gaussian covariance function when the lag is small. The mismatch for large lags can be explained by the fact that we are using a relatively low-order ARMA spectrum to approximate the nonrational Gaussian function. \nIn principle, we can also use a higher-order ARMA model for a better approximation, however, at the price of much more computational cost when generating the samples.\n\n\n\n\n\n\\subsection{Multiscale simulations in the $2$-d case}\\label{Subsec:Multi-scale realization}\n\n\nIn this subsection, we perform simulations to produce fine-scale samples of the random field given the coarse-scale realization using our stochastic realization approach. Following the derivation in Section \\ref{Sec:Multi-scale random fields}, we report an example with the following exponential covariance function\n\\begin{equation*}\n\t\t\\rho_{\\mathrm{s}} (x,y)=\\exp(-\\frac{|x|}{5}-\\frac{|y|}{4}), \\quad (x,y)\\in\\mathbb Z^2.\n\\end{equation*}\n\nSuppose that the coarse-scale samples of the random field have a size vector $(N_1,N_2)=(20,20)$ which corresponds to $400$ points. We take the sampling distances to be $1\/5$ and $1\/4$ along $x$- and $y$-directions, respectively, i.e., $\\mathbf T=(T_1,T_2)=(1\/5,1\/4)$, and the parameter vector $(\\alpha_1, \\alpha_2) = (1,1)$.\nThen three fine-scale realizations of the random field are computed, where the sampling distances are reduced to $1\/2$, $1\/4$, and $1\/8$ of the original values, respectively. \nThe simulations are carried out sequentially. More precisely, after the samples with the parameter $\\mathbf T$ are generated, they are then treated as the coarse-scale realization, and the interpolation procedure is executed to produce fine-scale samples with the parameter $\\mathbf T'=\\frac{1}{2}\\mathbf T$.\nNotice that we need to reconstruct the AR$(1)$ filter under each scale, due to the fact that the filter parameters in \\eqref{eq:Expo-W}\ndepend upon the product $\\alpha_j T_j$.\nThen after the sample values in boundary locations are computed utilizing the conditional normal distribution and the white noise in unknown locations is obtained,\nthe rest fine-scale samples can be generated by implementing the AR recursion. These operations are repeated for each finer scale.\nThe result of such a multiscale simulation is shown in Fig.~\\ref{Fig:Figure:multi-scale}. 
\nIt is evident that more details of the random field can be seen from the fine-scale samples than the coarse-scale realization.\nWe also plot the sample covariances under each scale versus distances along $x$- and $y$-directions in Fig.~\\ref{Fig:Finer-scale random field Exponential}. \nOne can see that the sample covariances are again close to the given covariance function, which indicates that the spatial variability of the random field is well maintained across multiple scales in our simulations. \n\n\n\n\n\n\n\n\n\n\\section{Conclusions}\\label{Sec:Conclusions}\n\nThis paper proposes an efficient stochastic realization approach for sampling large-scale multidimensional Gaussian stationary random fields.\nThe basic idea is to exploit the decoupling assumption on the covariance function, and to construct a rational model which approximates the spectrum of the underlying random field in terms of covariance matching.\nMoreover, our sampling approach features easy implementation and \n\\begin{figure}[t]\n\t\\centering\n\t\\begin{minipage}[h]{0.48\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=7.8cm]{Fig_Multi_scale\/Coarse_scale.eps}\n\t\t\\centerline{(a) Coarse-scale samples ($20\\times 20$)}\n\t\\end{minipage}\n\t\\begin{minipage}[h]{0.48\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=7.8cm]{Fig_Multi_scale\/Fine_scale_2.eps}\n\t\t\\centerline{(b) Fine-scale samples ($40\\times40$)}\n\t\\end{minipage}\n\t\\centering\n\t\\begin{minipage}[h]{0.48\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=7.8cm]{Fig_Multi_scale\/Fine_scale_4.eps}\n\t\t\\centerline{(c) Fine-scale samples ($80 \\times 80$)}\n\t\\end{minipage}\n\t\\begin{minipage}[h]{0.48\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=7.8cm]{Fig_Multi_scale\/Fine_scale_8.eps}\n\t\t\\centerline{(d) Fine-scale samples ($160\\times 160$)}\n\t\\end{minipage}\n\t\\caption{Multiscale simulations with an exponential covariance function. Subfig.~(a) shows the coarse-scale samples (colored squares) which are already known, while Subfigs.~(b), (c), and (d) are the fine-scale realizations, where the numbers in parentheses give the number of sampling points in each direction.}\n\t\\label{Fig:Figure:multi-scale}\n\\end{figure}\n\\begin{figure}[H]\n\t\\centering\n\t\\begin{minipage}[t]{0.48\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=7.8cm]{Fig_Multi_scale\/Expo_x_d.eps}\n\t\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.48\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=7.8cm]{Fig_Multi_scale\/Expo_y_d.eps}\n\t\n\t\\end{minipage}\n\t\\caption{Covariance lags of the random field versus distances. The red line denotes the given exponential covariance function, and the other four lines correspond to the sample covariances of under different scales. By $x$-direction, we mean the column of the sample covariance matrix indexed by $[k_1,0]$ with $k_1=0,\\dots,N_1-1$. The $y$-direction is understood similarly.}\n\t\\label{Fig:Finer-scale random field Exponential}\n\\end{figure}\n{\\setlength{\\parindent}{0cm}\nlow computational complexity due to the simple structure of the approximate model.\nThe work in the paper can be concluded as follows:}\n\\begin{enumerate}\n\t\\item[(1)] Solutions to the sampling problem with the exponential and Gaussian covariance functions are given, respectively. 
The former corresponds to a rational spectral density that leads to an AR$(1)$ filter, and the latter has a nonrational spectral density which can be approximated by an ARMA spectrum.\t\n\t\\item[(2)] The stochastic realization approach has been applied to multiscale simulations. Compared with traditional methods, only a few number of values in boundary locations are computed prior to the interpolation via the ARMA recursion, which achieves high efficiency.\n\t\\item[(3)] Several numerical simulations are performed and they show that our method exhibits good performances not only in sampling large-size random fields, but also in refining generated samples across multiple scales. \n\\end{enumerate}\nFinally, it is expected that our approach can be extended to the multivariate case (i.e., vector processes), which will be a future study.\n\n\n\\section*{Acknowledgment}\nThis work was supported in part by the National Natural Science Foundation of China under the grant number 62103453 and the ``Hundred-Talent Program'' of Sun Yat-sen University.\n\n\\section*{Conflict of interest}\nThe authors declare no conflict of interest.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nOne of the main goals of the LHC is the identification of the mechanism\nof electroweak symmetry breaking. The most frequently investigated\nmodels are the Higgs mechanism within the Standard \nModel (SM) and within the Minimal Supersymmetric Standard Model\n(MSSM)~\\cite{mssm}. Contrary to the case of the SM, in the MSSM \ntwo Higgs doublets are required.\nThis results in five physical Higgs bosons instead of the single Higgs\nboson in the SM. These are the light and heavy ${\\cal CP}$-even Higgs bosons, $h$\nand $H$, the ${\\cal CP}$-odd Higgs boson, $A$, and the charged Higgs bosons,\n$H^\\pm$.\nThe Higgs sector of the MSSM can be specified at lowest\norder in terms of the gauge couplings, the ratio of the two Higgs vacuum\nexpectation values, $\\tan \\beta \\equiv v_2\/v_1$, and the mass of the ${\\cal CP}$-odd\nHiggs boson, $M_A$ (or $M_{H^\\pm}$, the mass of the charged Higgs boson).\nConsequently, the masses of the ${\\cal CP}$-even neutral and the charged Higgs\nbosons are dependent quantities that can be\npredicted in terms of the Higgs-sector parameters, e.g.\\\n$M_{H^\\pm}^2 = M_A^2 + M_W^2$, where $M_W$ denotes the mass of the $W$~boson.\nThe same applies to\nthe production and decay properties of the MSSM Higgs bosons%\n\\footnote{If the production or decay involves SUSY particles at\n tree-level, also other MSSM parameters enter the prediction at lowest\n order.\n.~Higgs-phenomenology\nin the MSSM is strongly affected by higher-order corrections, in\nparticular from the sector of the third generation quarks and squarks,\nso that the dependencies on various other MSSM parameters can be\nimportant, see e.g.\\ \\citeres{PomssmRep,habilSH,mhiggsAWB} for reviews.\n\nSearches for the charged Higgs bosons of the MSSM (or a more general\nTwo Higgs Doublet Model (THDM)) have been carried out at\nLEP~\\cite{LEPchargedHiggsPrel}, yielding a bound of\n$M_{H^\\pm} \\gsim 80 \\,\\, \\mathrm{GeV}$~\\cite{LEPchargedHiggsProc,LEPchargedHiggs}.\nThe Tevatron placed additional bounds on the MSSM parameter space from\ncharged Higgs-boson searches, in particular at large $\\tan \\beta$ and low\n$M_A$~\\cite{Tevcharged}. At the LHC the charged Higgs bosons will be\naccessible best at large $\\tan \\beta$ up to $M_A \\lsim 800 \\,\\, \\mathrm{GeV}$\n\\cite{atlastdr,cmstdr,benchmark3}. 
At the ILC, for \n$M_{H^\\pm} \\lsim \\sqrt{s}\/2$ a high-precision determination of the charged\nHiggs boson properties will be\npossible~\\cite{tesla,orangebook,acfarep,Snowmass05Higgs}.\n\n\nThe prospective sensitivities at\nthe LHC are usually displayed in terms of the parameters $M_A$ and $\\tan \\beta$\n(or $M_{H^\\pm}$ and $\\tan \\beta$) that characterize the MSSM Higgs sector at lowest\norder. The other MSSM \nparameters are conventionally fixed according to certain benchmark\nscenarios~\\cite{benchmark2}. \nThe respective LHC analyses of the $5\\,\\sigma$ discovery contours for the\ncharged Higgs boson are given in \\citere{HchargedATLAS} for\nATLAS and in \\citeres{lightHexp,heavyHexp} for CMS. \nHowever, within these analyses the variation with relevant SUSY\nparameters as well as possibly relevant loop corrections in the Higgs\nproduction and decay~\\cite{benchmark3} have been neglected. \n\nWe focus in this paper on the $5\\,\\sigma$ discovery contours for the\ncharged MSSM Higgs boson \nfor the two cases $M_{H^\\pm} < m_{t}$ and $M_{H^\\pm} > m_{t}$, \nwithin the $m_h^{\\rm max}$~scenario and the no-mixing\nscenario~\\cite{benchmark2,benchmark3} (i.e.\\ we concentrate on the\n${\\cal CP}$-conserving case).\nThey are obtained by using the latest CMS\nresults~\\cite{lightHexp,heavyHexp} derived in a model-independent\napproach, i.e.\\ making no assumption on the Higgs boson production\nmechanism or decays. However, the detection relies on the decay mode\nof the charged Higgs bosons to $\\tau\\nu_\\tau$. Furthermore only SM\nbackgrounds have been assumed. \nThese experimental results are combined with up-to-date theoretical\npredictions for charged Higgs production and decay in the MSSM, taking\ninto account also the decay to SUSY particles that can in principle\nsuppress the branching ratio of the charged Higgs boson decay to\n$\\tau\\nu_\\tau$.\n\nFor the interpretation of the exclusion bounds and prospective discovery\ncontours in the benchmark scenarios it is important to assess how\nsensitively the results depend on those parameters that have been fixed\naccording to the benchmark prescriptions. In \\citeres{benchmark3,cmsHiggs}\nthis issue has been analyzed for the neutral heavy MSSM Higgs bosons,\nand it has been found that the by far largest effect arises from the\nvariation of the Higgs-mixing parameter~$\\mu$. \nConsequently, we investigate how the \n5$\\,\\sigma$ discovery regions in the $M_{H^\\pm}$--$\\tan \\beta$ plane \nfor the charged MSSM Higgs boson obtainable with the CMS experiment at\nthe LHC are affected by a variation of the \nmixing parameter~$\\mu$.\n\n\n\\section{Experimental analysis}\n\\label{sec:exp}\n\nThe main\nproduction channels at the LHC are\n\\begin{equation}\npp \\to t\\bar t \\; + \\; X, \\quad\nt \\bar t \\to t \\; H^- \\bar b \\mbox{~~or~~} H^+ b \\; \\bar t~\n\\label{pp2Hpm}\n\\end{equation}\nand\n\\begin{equation}\ngb \\to H^- t \\mbox{~~or~~} g \\bar b \\to H^+ \\bar t~.\n\\label{gb2Hpm}\n\\end{equation}\nThe decay used in the analysis to detect the charged Higgs boson is\n\\begin{equation}\nH^\\pm \\; \\to \\; \\tau \\nu_\\tau \\; \\to \\; {\\rm hadrons~}\\nu_\\tau. \n\\label{Hbug}\n\\end{equation}\nThe analyses described below correspond to \nCMS experimental sensitivities based on full simulation studies, \nassuming an integrated luminosity of 30~$\\mbox{fb}^{-1}$.\nIn these analyses a top quark mass of $m_{t} = 175 \\,\\, \\mathrm{GeV}$ has been\nassumed. 
\n\n\n\n\\subsection{The light charged Higgs Boson}\n\\label{sec:lightHpm}\n\nThe ``light charged Higgs boson'' is characterized by $M_{H^\\pm} < m_{t}$. \nThe main production channel is given in \\refeq{pp2Hpm}. Close to\nthreshold also \\refeq{gb2Hpm} contributes. The relevant (i.e.\\\ndetectable) decay channel is given by \\refeq{Hbug}.\nThe experimental analysis, based on 30~$\\mbox{fb}^{-1}$\\ collected with CMS, is\npresented in \\citere{lightHexp}. The events were required to be\nselected with the single lepton trigger, thus exploiting the\n$W \\to \\ell \\nu$ decay mode of a $W$~boson from the decay of \none of the top quarks in \\refeq{pp2Hpm}.\n\nThe total number of events leading to final states with the signal\ncharacteristics is evaluated, including their respective experimental\nefficiencies. The various channels and the corresponding efficiencies\ncan be found in \\refta{tab:lightHp}. The efficiencies are given for\n$M_{H^\\pm} = 160 \\,\\, \\mathrm{GeV}$, but vary only insignificantly over the parameter\nspace under investigation. \nThe number of signal-like events is evaluated as the sum of \nbackground and Higgs-boson signal events, \n\\begin{align}\nN_{\\rm ev} =& \\;N_{\\rm background} \n \\mathrm{(from~the~processes~in~\\refta{tab:lightHp})} \\nonumber \\\\\n &+ {\\cal L} \\times \\sigma(pp \\to t \\bar t + X) \n \\times {\\rm BR}(t \\to H^\\pm b)\n \\times {\\rm BR}(H^\\pm \\to \\tau \\nu_\\tau) \\\\\n &\\mbox{}\\hspace{41.5mm} \\times {\\rm BR}(\\tau \\to \\mbox{hadrons})\n \\times \\mbox{exp.\\ eff.}~, \\nonumber \n\\end{align}\nwhere ${\\cal L}$ denotes the luminosity, and the experimental efficiency\nis given in \\refta{tab:lightHp}.\nA $5\\,\\sigma$ discovery can be achieved if a parameter point results in\nmore than 5260~events (with 30~$\\mbox{fb}^{-1}$).\\\\\n\\newpage\n\\noindent\nWe furthermore used \n\\begin{align}\n{\\rm BR}(W^\\pm \\to \\ell \\nu_\\ell) &~= ~0.217 ~~(\\ell = \\mu, e), \\nonumber \\\\ \n{\\rm BR}(W^\\pm \\to \\tau \\nu_\\tau) &~= ~0.1085 , \\nonumber \\\\\n{\\rm BR}(W^\\pm \\to \\mbox{jets}) &~= ~0.67 , \\\\\n{\\rm BR}(\\tau \\to \\mbox{hadrons}) &~= ~0.65 . \\nonumber\n\\end{align}\nThe next-to-leading order LHC cross section for top quark pairs is\ntaken to be 840~pb~\\cite{sigmatt}. 
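To illustrate the size of the Higgs-boson signal term relative to this threshold (the branching ratios used here are purely illustrative assumptions, not the values obtained from the scan described below), take ${\\cal L} = 30$~$\\mbox{fb}^{-1}$ and $\\sigma(pp \\to t \\bar t + X) = 840$~pb, i.e.\\ about $2.5\\cdot 10^{7}$ top-quark pairs, together with the signal efficiency $0.0052$ of \\refta{tab:lightHp}, and assume for illustration ${\\rm BR}(t \\to H^\\pm b) = 0.05$ and ${\\rm BR}(H^\\pm \\to \\tau \\nu_\\tau) = 1$. Then
\\begin{equation}
N_{\\rm signal} \\approx 2.5\\cdot 10^{7} \\times 0.05 \\times 1 \\times 0.65 \\times 0.0052 \\approx 4\\cdot 10^{3}~,
\\end{equation}
which is of the same order as the $5\\,\\sigma$ threshold of 5260 signal-plus-background events quoted above.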
\nFor the $W^\\pm$+3 jets background the leading \norder cross section for the process $pp \\to W^{\\pm} + \\rm 3~jets$, \n$W^{\\pm} \\to \\ell^{\\pm} \\nu$ ($\\ell=e,~\\mu$) of 840~pb was used, \nas given by the MadGraph~\\cite{MadGraph} generator.\n\n\\begin{table}[htb!]\n\\renewcommand{\\arraystretch}{1.5}\n\\begin{center}\n\\begin{tabular}{|c|c|} \\hline\nchannel & exp.\\ efficiency \\\\ \\hline\\hline\n$pp \\to t \\bar t +X,\\; t \\bar t \\to H^+ b \\; \\bar t \n \\to (\\tau^+ \\bar{\\nu}_\\tau) \\; b \\; (W^- \\bar b)$; \n~$\\tau \\to \\mbox{hadrons}$, $W \\to \\ell \\nu_\\ell$ & 0.0052 \\\\\n\\hline\n$pp \\to t \\bar t +X,\\; t \\bar t \\to W^+ \\; W^- \\; b \\bar b\n \\to (\\tau \\nu_\\tau) \\; (\\ell \\nu_\\ell) \\; b \\bar b$;\n~$\\tau \\to \\mbox{hadrons}$ & 0.00217 \\\\\n\\hline\n$pp \\to t \\bar t +X,\\; t \\bar t \\to W^+ \\; W^- \\; b \\bar b\n \\to (\\ell \\nu_\\ell) \\; (\\ell \\nu_\\ell) \\; b \\bar b$ & 0.000859 \\\\\n\\hline\n$pp \\to t \\bar t +X,\\; t \\bar t \\to W^+ \\; W^- \\; b \\bar b\n \\to (\\mbox{jet jet}) \\; (\\ell \\nu_\\ell) \\; b \\bar b$ & 0.000134 \\\\\n\\hline\n$pp \\to W + \\rm 3~jets$, $W \\to \\ell \\nu$ & 0.000013 \\\\\n\\hline\\hline\n\\end{tabular}\n\\end{center}\n\\vspace{-1em}\n\\caption{Relevant signal (first line) and background \n channels for the light charged Higgs boson and their\n respective experimental efficiencies. The charge conjugated processes\n ought to be included. The efficiency for the charged Higgs production\n is given for $M_{H^\\pm} = 160 \\,\\, \\mathrm{GeV}$, but varies only insignificantly\n over the relevant parameter space. $\\ell$ denotes $e$ or $\\mu$.\n}\n\\label{tab:lightHp}\n\\renewcommand{\\arraystretch}{1.0}\n\\end{table}\n\n\n\n\n\\subsection{The heavy charged Higgs Boson}\n\\label{sec:heavyHpm}\n\nThe ``heavy charged Higgs boson'' is characterized by $M_{H^\\pm} \\gsim m_{t}$.\nHere \\refeq{gb2Hpm} gives the largest contribution to the production cross\nsection, and very close to \nthreshold \\refeq{pp2Hpm} can contribute somewhat. The relevant decay\nchannel is again given in \\refeq{Hbug}.\nThe experimental analysis, based on 30~$\\mbox{fb}^{-1}$\\ collected with CMS, has been\npresented in \\citere{heavyHexp}. The fully hadronic final state \ntopology was considered, thus events were selected with the single\n$\\tau$ trigger at Level-1 and the combined $\\tau$-$E_{\\rm T}^{\\rm miss}$ High\nLevel trigger. \nThe backgrounds considered were $t \\bar t$, $W^\\pm t$, \n$W^\\pm + 3~{\\rm jets}$ as well as QCD multi-jet background.\nThe $t \\bar t$ and QCD multi-jet processes were generated with\nPYTHIA~\\cite{pythia}, $W^\\pm t$ was \ngenerated with the TopRex generator~\\cite{toprex} and \n$W^\\pm + 3~{\\rm jets}$ with MadGraph~\\cite{MadGraph}. 
\nThe production cross sections for the $t\\bar t$~background processes were\nnormalized to the NLO cross sections~\\cite{sigmatt}.\nThe total background amounts (after cuts) to \n$1.7 \\pm 1$ events, independently of the charged Higgs boson mass.\n\n\\noindent\nThe number of signal events is evaluated as\n\\begin{equation}\nN_{\\rm ev} = {\\cal L} \\times \\sigma(pp \\to H^\\pm + X) \n \\times {\\rm BR}(H^\\pm \\to \\tau \\nu_\\tau)\n \\times {\\rm BR}(\\tau \\to \\mbox{hadrons})\n \\times \\mbox{exp.\\ eff.}~,\n\\end{equation}\nwhere ${\\cal L}$ denotes the luminosity, and the experimental efficiency\nis given in \\refta{tab:heavyHp} as a function of $M_{H^\\pm}$.\nA $5\\,\\sigma$ discovery corresponds to a number of signal events larger\nthan $14.1$.\n\n\\begin{table}[htb!]\n\\renewcommand{\\arraystretch}{1.5}\n\\begin{center}\n\\begin{tabular}{|c||cccccc|} \n\\hline\\hline\n$M_{H^\\pm}$ [GeV] & 171.6 & 180.4 & 201.0 & 300.9 & 400.7 & 600.8 \\\\ \n\\hline\nexp.\\ eff.\\ [$10^{-4}$] & 3.5 & 4.0 & 5.0 & 23 & 32 & 42 \\\\\n\\hline\\hline\n\\end{tabular}\n\\end{center}\n\\vspace{-1em}\n\\caption{Experimental efficiencies for the heavy charged Higgs boson\n detection. \n}\n\\label{tab:heavyHp}\n\\renewcommand{\\arraystretch}{1.0}\n\\end{table}\n\nThe efficiency for the charged Higgs boson production over the\nfull mass range considered was evaluated with the PYTHIA~\\cite{pythia} \ngenerator processes 401 ($gg \\to tbH^{\\pm}$) and 402 ($qq \\to tbH^{\\pm}$) \nimplemented as described in ~\\citere{tbH}. \n\n\n\\section{Calculation of cross section and branching ratios}\n\\label{sec:theo}\n\nWhile the phenomenology of the production and decay processes of the\ncharged MSSM Higgs bosons at the LHC is mainly characterized by\nthe parameters $M_A$ (or $M_{H^\\pm}$) and $\\tan \\beta$ that govern the Higgs sector\nat lowest \norder, other MSSM parameters enter via higher-order contributions (see\ne.g.\\ \\citere{benchmark3} and references therein), \nand also via the kinematics of Higgs-boson decays into\nsupersymmetric particles. The other MSSM parameters are usually fixed\nin terms of benchmark scenarios. The most commonly used scenarios are\nthe ``$m_h^{\\rm max}$'' and ``no-mixing'' benchmark \nscenarios~\\cite{benchmark2,benchmark3}. According to the\ndefinition of \\citere{benchmark2} the $m_h^{\\rm max}$ scenario is given by,\n\\begin{eqnarray}\n\\mbox{\\underline{$m_h^{\\rm max}:$}} &&\nM_{\\rm SUSY} = 1000 \\,\\, \\mathrm{GeV}, \\quad X_t = 2\\, M_{\\rm SUSY}, \\quad A_b = A_t, \\nonumber \\\\\n&& \\mu = 200 \\,\\, \\mathrm{GeV}, \\quad M_2 = 200 \\,\\, \\mathrm{GeV}, \\quad m_{\\tilde{g}} = 0.8\\,M_{\\rm SUSY}~.\n\\label{mhmax}\n\\end{eqnarray}\nHere $M_{\\rm SUSY}$ denotes the diagonal soft SUSY-breaking parameters in the\nsfermion mass matrices, $m_{t}\\,X_t \\equiv m_{t}\\, (A_t - \\mu\/\\tan \\beta)$ is the\noff-diagonal entry in the scalar top mass matrix. $A_{t(b)}$ denote the\ntrilinear Higgs-stop (-sbottom) couplings, $\\mu$ is the Higgs mixing\nparameter, $m_{\\tilde{g}}$ the gluino mass, and $M_2$ and $M_1$ denote the soft\nSUSY-breaking parameters in the chargino\/neutralino sector. 
\nThe parameter $M_1$ is fixed via the GUT relation \n$M_1 = (5s_W^2)\/(3c_W^2) \\, M_2$.\nThe no-mixing scenario differs from the $m_h^{\\rm max}$ scenario only in the\ndefinition of \nvanishing mixing in the stop sector and a larger value of $M_{\\rm SUSY}$,\n\\begin{eqnarray}\n\\mbox{\\underline{no-mixing:}} &&\nM_{\\rm SUSY} = 2000 \\,\\, \\mathrm{GeV}, \\quad X_t = 0, \\quad A_b = A_t, \\nonumber \\\\\n&& \\mu = 200 \\,\\, \\mathrm{GeV}, \\quad M_2 = 200 \\,\\, \\mathrm{GeV}, \\quad m_{\\tilde{g}} = 0.8\\,M_{\\rm SUSY}~.\n\\label{nomix}\n\\end{eqnarray}\nThe value of the top-quark mass in \\citere{benchmark2} was chosen\naccording to the experimental central value at that time. For our\nnumerical analysis below, we use \nthe value, $m_{t} = 175 \\,\\, \\mathrm{GeV}$, see \\refse{sec:exp}. \nUsing the current value of $m_{t} = 172.6 \\,\\, \\mathrm{GeV}$~\\cite{mt1726}\nwould lead to a small shift of the discovery contours right at\nthreshold, but is insignificant for the qualitative results of this\nanalysis. \n\nIn \\citere{benchmark3} it was suggested that in the search for heavy\nMSSM Higgs bosons the $m_h^{\\rm max}$ and no-mixing scenarios, which originally\nwere mainly designed for the search for the light ${\\cal CP}$-even Higgs boson\n$h$, should be extended by several discrete values of $\\mu$ (see below),\n\\begin{equation}\n\\mu = \\pm 200, \\pm 500, \\pm 1000 \\,\\, \\mathrm{GeV} ~.\n\\label{eq:variationmu}\n\\end{equation}\nIn our analyses here we focus on $\\mu = \\pm 200, \\pm 1000 \\,\\, \\mathrm{GeV}$. \n\n\\bigskip\nFor the calculation of cross sections and branching ratios we use a\ncombination of up-to-date theory evaluations. The \ninteraction of the charged Higgs boson with the $t\/b$~doublet can be\nexpressed in terms of an effective Lagrangian~\\cite{deltamb2},\n\\begin{equation}\n\\label{effL}\n{\\cal L} = \\frac{g}{2M_W} \\frac{\\overline{m}_b}{1 + \\Delta_b} \\Bigg[ \n \\sqrt{2} \\, V_{tb} \\, \\tan \\beta \\; H^+ \\bar{t}_L b_R \\Bigg] + {\\rm h.c.}\n\\end{equation}\nHere $\\overline{m}_b$ denotes the running bottom quark mass including SM QCD\ncorrections. \nThe prefactor $1\/(1 + \\Delta_b)$ in \\refeq{effL} arises from the\nresummation of the leading $\\tan \\beta$-enhanced corrections to all orders. \nThe explicit\nform of $\\Delta_b$ in the limit of heavy SUSY masses and $\\tan \\beta \\gg 1$\nreads~\\cite{deltamb1}\n\\begin{equation}\n\\Delta_b = \\frac{2\\alpha_s}{3\\,\\pi} \\, m_{\\tilde{g}} \\, \\mu \\, \\tan \\beta \\,\n \\times \\, I(m_{\\tilde{b}_1}, m_{\\tilde{b}_2}, m_{\\tilde{g}}) +\n \\frac{\\alpha_t}{4\\,\\pi} \\, A_t \\, \\mu \\, \\tan \\beta \\,\n \\times \\, I(m_{\\tilde{t}_1}, m_{\\tilde{t}_2}, |\\mu|) ~.\n\\label{def:dmb}\n\\end{equation}\nHere $m_{\\tilde{t}_1}$, $m_{\\tilde{t}_2}$, $m_{\\tilde{b}_1}$, $m_{\\tilde{b}_2}$ denote the $\\tilde{t}$ and\n$\\tilde{b}$~masses. $\\alpha_s$ is the strong coupling\nconstant, while $\\alpha_t \\equiv h_t^2 \/ (4 \\pi)$ is defined via the top\nYukawa coupling. The analytical expression for $I(\\ldots)$ can be found\nin \\citere{benchmark3}.\nLarge negative values of $(\\mu\\,m_{\\tilde{g}})$ and $(\\mu\\,A_t)$ (it should be\nnoted that both\nbenchmark scenarios have positive $m_{\\tilde{g}}$ and $A_t$) can lead to a\nstrong enhancement of the \n$H^\\pm t b$ coupling, while large positive values lead to a strong\nsuppression. 
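To give a rough idea of the magnitude of these corrections (the following is our own order-of-magnitude estimate, obtained by approximating the loop function by $I(a,b,c) \\approx 1\/(2 M_{\\rm SUSY}^2)$ for nearly degenerate masses and taking $\\alpha_s \\approx 0.09$, $\\alpha_t \\approx 0.07$ at the TeV scale), the $m_h^{\\rm max}$ parameters with $|\\mu| = 1000 \\,\\, \\mathrm{GeV}$ and $\\tan \\beta = 50$ yield
\\begin{equation}
|\\Delta_b| \\sim \\frac{2\\alpha_s}{3\\pi}\\, \\frac{m_{\\tilde{g}}\\, |\\mu| \\tan \\beta}{2 M_{\\rm SUSY}^2}
 + \\frac{\\alpha_t}{4\\pi}\\, \\frac{|A_t\\, \\mu| \\tan \\beta}{2 M_{\\rm SUSY}^2}
 \\approx 0.4 + 0.3~,
\\end{equation}
so that the factor $1\/(1 + \\Delta_b)$ can change the effective coupling by tens of percent up to a factor of a few, with the direction of the effect set by the sign of $\\mu$.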
\nConcerning the $m_h^{\\rm max}$ and the no-mixing benchmark scenarios, \nas discussed in \\citeres{cmsHiggs,benchmark3} the $\\Delta_b$ effects are \nmuch more pronounced in the $m_h^{\\rm max}$ scenario, where the two terms in\n\\refeq{def:dmb} are of similar size. In the no-mixing scenario the first\nterm in \\refeq{def:dmb} dominates, while the second term is small. A\nfurther suppression is caused by the larger value of $M_{\\rm SUSY}$ (see\n\\refeq{nomix}) in comparison with the $m_h^{\\rm max}$\nscenario. Consequently, the total effect of $\\Delta_b$ is smaller in the\nno-mixing scenario (see also the discussion in \\citere{benchmark3}). \n\nFor the production cross section in \\refeq{pp2Hpm} we use the SM cross\nsection $\\sigma(pp \\to t \\bar t) = 840~\\rm{pb}$~\\cite{sigmatt}%\n\\footnote{\nThe corresponding SUSY corrections are small~\\cite{sigmattSUSY} and have\nbeen neglected. \n}%\n~times the ${\\rm BR}(t \\to H^\\pm\\, b)$ including the $\\Delta_b$ corrections\ndescribed above. \nThe production cross section in \\refeq{gb2Hpm} is evaluated as given in\n\\citeres{HpmXSa,HpmXSb}. In addition also the $\\Delta_b$ corrections of\n\\refeq{effL} are applied. Finally the ${\\rm BR}(H^\\pm \\to \\tau \\nu_\\tau)$ is\nevaluated taking into account all decay channels, among which the most\nrelevant are $H^\\pm \\to tb, cs, W^{(*)}h$. Also possible decays to \nSUSY particles are taken into account. For the decay to $tb$ again\nthe $\\Delta_b$ corrections are included.\nAll the numerical evaluations are performed with the program \n{\\tt FeynHiggs}~\\cite{feynhiggs,mhiggslong,mhiggsAEC,mhcMSSMlong}, see\nalso \\citere{mhcMSSM2L}. \n\n\n\n\n\\section{Numerical analysis}\n\\label{sec:numanal}\n\nThe numerical analysis has been performed in the $m_h^{\\rm max}$~and the\nno-mixing scenarios~\\cite{benchmark2,benchmark3} for \n$\\mu = -1000, -200, +200, +1000 \\,\\, \\mathrm{GeV}$. \nWe separately present the results for the light and the heavy charged\nHiggs and finally compare with the results in the CMS PTDR, where the\nresults had been obtained fixing $\\mu = +200 \\,\\, \\mathrm{GeV}$ and neglecting the\n$\\Delta_b$ corrections, as well as neglecting the charged Higgs-boson decays\nto SUSY particles. \n\n\n\n\\subsection{The light charged Higgs boson}\n\nIn \\reffi{fig:reachlight} we show the \nresults for the $5\\,\\sigma$ discovery contours for the light \ncharged Higgs boson, corresponding to the experimental\nanalysis in \\refse{sec:lightHpm}, where the charged Higgs boson\ndiscovery will be possible in the areas above the curves shown in\n\\reffi{fig:reachlight}. \nAs described above, the experimental analysis was performed for the\nCMS detector and 30~$\\mbox{fb}^{-1}$. The top quark mass is set to $m_{t} = 175 \\,\\, \\mathrm{GeV}$. \nThe thick (thin) lines correspond to positive (negative) $\\mu$, and the\nsolid (dotted) lines have $|\\mu| = 1000 (200) \\,\\, \\mathrm{GeV}$. \nThe curves stop at $\\tan \\beta = 60$, where we stopped the evaluation of\nproduction cross section and branching ratios. For negative $\\mu$ very\nlarge values of $\\tan \\beta$ result in a strong enhancement of the bottom\nYukawa coupling, and for $\\Delta_b \\to -1$ the MSSM enters a non-perturbative\nregime, see \\refeq{effL}. 
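This behaviour can be made more explicit by noting that, according to \\refeq{def:dmb}, $\\Delta_b$ is (approximately) proportional to $\\mu \\tan \\beta$. Writing schematically $\\Delta_b \\simeq \\kappa\\, \\mu \\tan \\beta$ with $\\kappa > 0$ in the scenarios considered here (this parametrization is only meant as a qualitative guide), the combination that controls the coupling in \\refeq{effL} scales as
\\begin{equation}
\\frac{\\tan \\beta}{1 + \\Delta_b} \\simeq \\frac{\\tan \\beta}{1 + \\kappa\\, \\mu \\tan \\beta}~,
\\end{equation}
which saturates for large positive $\\mu \\tan \\beta$, but grows without bound as $\\tan \\beta \\to 1\/(\\kappa |\\mu|)$ for negative $\\mu$; this is the onset of the non-perturbative regime referred to above.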
\n\n\\begin{figure}[htb!]\n\\begin{center}\n\\includegraphics[width=0.45\\textwidth]{mhmax_lightChH_MHP.eps}\\hspace{1em}\n\\includegraphics[width=0.45\\textwidth]{nomix_lightChH_MHP.eps}\n \\caption{%\nDiscovery reach for the light charged Higgs boson of CMS with 30~$\\mbox{fb}^{-1}$\\ in the \n$M_{H^\\pm}$--$\\tan \\beta$~plane for the $m_h^{\\rm max}$~scenario (left) and the no-mixing\nscenario (right).\n}\n\\label{fig:reachlight}\n\\end{center}\n\\end{figure}\n\nWithin the $m_h^{\\rm max}$ scenario, shown in the left plot of\n\\reffi{fig:reachlight}, the search for the light charged Higgs boson covers\nthe area of large $\\tan \\beta$ and $M_{H^\\pm} \\lsim 130 \\ldots 160 \\,\\, \\mathrm{GeV}$. \nThe variation with\n$\\mu$ induces a strong shift in the $5\\,\\sigma$ discovery contours. This\ncorresponds to a shift in $\\tan \\beta$ of \n$\\Delta\\tan \\beta = 15$ for $M_{H^\\pm} \\lsim 110 \\,\\, \\mathrm{GeV}$, rising up to $\\Delta\\tan \\beta = 40$ for\nlarger $M_{H^\\pm}$ values. The discovery region is largest (smallest) for \n$\\mu = -(+)1000 \\,\\, \\mathrm{GeV}$, corresponding to the largest (smallest)\nproduction cross section.\nThe results for the no-mixing scenario are shown in the right plot of\n\\reffi{fig:reachlight}. The effects of the variation of $\\mu$ are much\nless pronounced in this scenario, as discussed in \\refse{sec:theo}, due\nto the smaller \nabsolute value of $\\Delta_b$ (see also the corresponding analysis for neutral\nheavy Higgs bosons in \\citere{cmsHiggs}). The shift in $\\tan \\beta$ for\n$M_{H^\\pm} = 110 \\,\\, \\mathrm{GeV}$ is about $\\Delta\\tan \\beta = 5$ going from $\\mu = -1000 \\,\\, \\mathrm{GeV}$ to\n$+1000 \\,\\, \\mathrm{GeV}$. \nFor $\\tan \\beta = 60$ (where we stop our analysis) the covered $M_{H^\\pm}$ values\nrange from $150 \\,\\, \\mathrm{GeV}$ to $164 \\,\\, \\mathrm{GeV}$.\nIn this charged Higgs boson mass range for the considered benchmark\nscenarios no decay channels into SUSY particles are open, i.e.\\ the\nobserved effects are all due to higher-order corrections, in particular\nassociated with~$\\Delta_b$. \n\n\n\n\\subsection{The heavy charged Higgs boson}\n\nIn \\reffi{fig:reachheavy} we show the \nresults for the $5\\,\\sigma$ discovery contours for the heavy\ncharged Higgs boson, corresponding to the experimental\nanalysis in \\refse{sec:heavyHpm}. The Higgs boson discovery will be\npossible in the areas above the curves.%\n\\footnote{\nAn analysis in other benchmark scenarios that are in\nagreement with the cold dark matter density constraint imposed by WMAP\nand other cosmological data~\\cite{WMAP} can be found in \\citere{ehhow}.}%\n~As before, the experimental analysis was performed for the\nCMS detector and 30~$\\mbox{fb}^{-1}$. The top quark mass is set to $m_{t} = 175 \\,\\, \\mathrm{GeV}$. \nThe thick (thin) lines correspond to positive (negative) $\\mu$, and the\nsolid (dotted) lines have $|\\mu| = 1000 (200) \\,\\, \\mathrm{GeV}$. 
\n\n\\begin{figure}[htb!]\n\\begin{center}\n\\includegraphics[width=0.45\\textwidth]{mhmax_heavyChH_MHP.eps}\\hspace{1em}\n\\includegraphics[width=0.45\\textwidth]{nomix_heavyChH_MHP.eps}\n \\caption{%\nDiscovery reach for the heavy charged Higgs boson of CMS with 30~$\\mbox{fb}^{-1}$\\ in the \n$M_{H^\\pm}$--$\\tan \\beta$~plane for the $m_h^{\\rm max}$~scenario (left) and the no-mixing\nscenario (right).\n}\n\\label{fig:reachheavy}\n\\end{center}\n\\end{figure}\n\n\nThe $5\\,\\sigma$ discovery regions for the search for heavy charged Higgs\nbosons in the $m_h^{\\rm max}$ scenario are shown in the left plot of\n\\reffi{fig:reachheavy}. For $M_{H^\\pm} = 170 \\,\\, \\mathrm{GeV}$, where the experimental\nanalysis stops, we find a strong variation\nin the accessible parameter space for $\\mu = -(+)1000 \\,\\, \\mathrm{GeV}$ of $\\Delta\\tan \\beta = 40$.\nIt should be noted in this context that close to threshold, where both\nproduction mechanisms, \\refeqs{pp2Hpm} and (\\ref{gb2Hpm}), contribute,\nthe theoretical \nuncertainties are somewhat larger than in the other regions. \nFor $M_{H^\\pm} = 300 \\,\\, \\mathrm{GeV}$ the variation in the $5\\,\\sigma$ discovery contours\ngoes from $\\tan \\beta = 38$ to $\\tan \\beta = 54$. For $\\mu = -1000 \\,\\, \\mathrm{GeV}$ and larger\n$\\tan \\beta$ values the bottom Yukawa coupling becomes so large \nthat a perturbative treatment would no longer be reliable in this\nregion, and correspondingly we do not continue the respective curve(s).\n\nThe shape of the $\\mu = +1000 \\,\\, \\mathrm{GeV}$ curve has a local minimum at \n$M_{H^\\pm} \\approx 300 \\,\\, \\mathrm{GeV}$ that is not (or only very weakly) present in the other\ncurves, and that is also not visible in the original CMS analysis in\n\\citere{heavyHexp} (obtained for $\\mu = +200 \\,\\, \\mathrm{GeV}$, but neglecting the\n$\\Delta_b$ effects). The reason for the local minimum can be traced back to\nthe strongly improved experimental efficiency going from \n$M_{H^\\pm} = 200 \\,\\, \\mathrm{GeV}$ to $300 \\,\\, \\mathrm{GeV}$, see \\refta{tab:heavyHp}. The better\nefficiency at $M_{H^\\pm} = 300 \\,\\, \\mathrm{GeV}$ corresponds to a lower required cross\nsection ($\\propto \\tan^2 \\beta\\hspace{1mm}$) and\/or a lower ${\\rm BR}(H^\\pm \\to \\tau \\nu_\\tau)$\nto obtain the same number of signal events. \nOn the other hand, going from $M_{H^\\pm} = 200 \\,\\, \\mathrm{GeV}$ to $300 \\,\\, \\mathrm{GeV}$ this effect \nis in most cases overcompensated by a decrease of the cross\nsection due to the increase in $M_{H^\\pm}$. The overcompensation results in\nan increase in $\\tan \\beta$ for the higher $M_{H^\\pm}$ value. \nFor $\\mu = +1000 \\,\\, \\mathrm{GeV}$, however, $\\Delta_b$ is very large, \nsuppressing strongly the charged Higgs production cross section as well\nas the ${\\rm BR}(H^\\pm \\to tb)$. The overall effect is a somewhat better\nreach in $\\tan \\beta$ for $M_{H^\\pm} = 300 \\,\\, \\mathrm{GeV}$ than for $M_{H^\\pm} = 200 \\,\\, \\mathrm{GeV}$. \n\nIn comparison with the analysis of \\citere{benchmark3}, based on the\nolder CMS analysis given in \\citere{heavyHexpold}, several differences\ncan be observed. The feature of the local minimum is absent in\n\\citere{benchmark3}, the variation of the $5\\,\\sigma$ discovery contours\nwith $\\mu$ is weaker, and the effect of the decay of the charged Higgs\nboson to a chargino and a neutralino is more pronounced in\n\\citere{benchmark3}. 
The reason for these differences is the strongly\nreduced discovery region in the new CMS analysis~\\cite{heavyHexp}\nemployed here as compared to the old CMS analysis~\\cite{heavyHexpold} \nused in \\citere{benchmark3}. The reach in $\\tan \\beta$ is worse by\n$\\sim 15 (30)$ for $M_A = 200 (400) \\,\\, \\mathrm{GeV}$ in the new analysis.%\n\\footnote{\nThe old analysis uses $\\mu = -200 \\,\\, \\mathrm{GeV}$~\\cite{heavyHexpold}, while the\nnew analysis set $\\mu = +200 \\,\\, \\mathrm{GeV}$~\\cite{heavyHexp}. However, since the\n$\\Delta_b$ corrections are neglected in \\citeres{heavyHexpold,heavyHexp}, \nthe effect on the discovery regions should be small.\n}%\n~Thus, at the substantially worse (i.e.\\ higher) $\\tan \\beta$ values employed\nhere the $\\Delta_b$ effects are more pronounced, leading to the local minimum\nfor $\\mu = +1000 \\,\\, \\mathrm{GeV}$ and to a larger absolute variation in $\\tan \\beta$ with the\nsize and the sign of $\\mu$, see \\refse{sec:theo}.\nIn the high $\\tan \\beta$ region furthermore the $\\Delta_b$ effects dominate over the\nimpact of the decay of the charged Higgs to charginos and neutralinos. \nAs an example, for $\\mu = +200 \\,\\, \\mathrm{GeV}$ and $M_{H^\\pm} = 400 \\,\\, \\mathrm{GeV}$ the old\nanalysis in \\citere{benchmark3} found that the discovery region starts\nat $\\tan \\beta = 32$, where ${\\rm BR}(H^\\pm \\to \\cha{}\\neu{}) \\approx 15\\%$.\nHere we find that the discovery region starts at $\\tan \\beta = 64$, where\n${\\rm BR}(H^\\pm \\to \\cha{}\\neu{}) \\approx 3\\%$.\n\nThe no-mixing scenario is shown in the right plot of\n\\reffi{fig:reachheavy}. The features are the same as in the $m_h^{\\rm max}$\nscenario. However, due to the smaller size of $|\\Delta_b|$, see\n\\refse{sec:theo}, they are much less pronounced. The variation in $\\tan \\beta$\nstays at or below the level of $\\Delta\\tan \\beta = 10$ for the whole range of\n$M_{H^\\pm}$. \n\n\n\n\\subsection{Comparison with the CMS PTDR}\n\nIn \\reffi{fig:reach} we show the \ncombined results for the $5\\,\\sigma$ discovery contours for the light and\nthe heavy charged Higgs boson, corresponding to the experimental\nanalyses in the $m_h^{\\rm max}$ scenario as presented in the two previous\nsubsections. They are compared with the results presented in the CMS\nPTDR~\\cite{cmstdr}. Contrary to the previous sections, we now show the\n$5\\,\\sigma$ discovery contours in the $M_A$--$\\tan \\beta$ plane. \nThe thick (thin) lines correspond to positive (negative) $\\mu$, and the\nsolid (dotted) lines have $|\\mu| = 1000 (200) \\,\\, \\mathrm{GeV}$. 
The thickened\ndotted (red\/blue) lines represent the CMS PTDR results, obtained for\n$\\mu = +200 \\,\\, \\mathrm{GeV}$ and neglecting the $\\Delta_b$ effects.\n\n\\begin{figure}[htb!]\n\\begin{center}\n\\includegraphics[width=0.60\\textwidth]{mhmax_ChHvsPTDR.eps}\n \\caption{%\nDiscovery reach for the charged Higgs boson of CMS with 30~$\\mbox{fb}^{-1}$\\ in the \n$M_A$--$\\tan \\beta$~plane for the $m_h^{\\rm max}$~scenario for \n$\\mu = \\pm 200, \\pm 1000 \\,\\, \\mathrm{GeV}$ in comparison with the results from the CMS\nPTDR (thickened dotted (red and blue) lines), obtained for\n$\\mu = +200 \\,\\, \\mathrm{GeV}$ and neglecting the $\\Delta_b$ effects.\n}\n\\label{fig:reach}\n\\end{center}\n\\end{figure}\n\nApart from the variation in the $5\\,\\sigma$ discovery contours with the\nsize and the sign of $|\\mu|$, two differences can be observed in the\ncomparison of the PTDR results to the new results obtained here, i.e.\\\nincluding the $\\Delta_b$ corrections in the production and decay of the\ncharged Higgs boson as well as taking the decay to SUSY particles into\naccount. \nFor the light charged Higgs analysis the discovery contours are now\nshifted to smaller $M_A$ values, for negative $\\mu$ even ``bending over''\nfor larger $\\tan \\beta$ values. The reason is the more complete inclusion of\nhigher-order corrections (full one-loop and leading \\order{\\alpha_t\\alpha_s}\ntwo-loop) to the relation between $M_A$ and\n$M_{H^\\pm}$~\\cite{mhcMSSMlong,mhcMSSM2L}.\nThe second feature is a small gap between the light and the heavy\ncharged Higgs analyses, while in the PTDR analysis all charged Higgs\nmasses could be accessed. The gap can be observed best by comparing the\n$m_h^{\\rm max}$ scenario in \\reffis{fig:reachlight} and \\ref{fig:reachheavy}. \nThis gap is largest for $\\mu = +1000 \\,\\, \\mathrm{GeV}$ and smallest for \n$\\mu = -1000 \\,\\, \\mathrm{GeV}$, where it amounts only up to $\\sim 5 \\,\\, \\mathrm{GeV}$.\nPossibly the heavy charged Higgs analysis strategy exploiting the fully\nhadronic final state can be extended to smaller $M_A$ values to\ncompletely close the gap. \nFor the interpretation of \\reffi{fig:reach} it should be kept in mind\nthat the accessible area in the heavy Higgs analysis also ``bends over''\nto smaller $M_A$ values for larger $\\tan \\beta$, thus decreasing the visible\ngap in \\reffi{fig:reach}.\n\n\n\\section{Conclusions}\n\nWe have studied the variation of the $5\\,\\sigma$ discovery contours for the\nsearch for the charged MSSM Higgs boson with the SUSY parameters.\nWe combine the latest results for the\nCMS experimental sensitivities based on full simulation studies with \nstate-of-the-art theoretical predictions of MSSM Higgs-boson properties.\nThe experimental analyses are done assuming an integrated luminosity of \n30~$\\mbox{fb}^{-1}$\\ for the two cases, $M_{H^\\pm} < m_{t}$ and $M_{H^\\pm} > m_{t}$.\n\nThe numerical analysis has been performed in the $m_h^{\\rm max}$~and the\nno-mixing scenarios for $\\mu = \\pm 200, \\pm 1000 \\,\\, \\mathrm{GeV}$.\nThe impact of the variation of $\\mu$ enters in particular via the\nhigher-order correction $\\Delta_b$, affecting\nthe charged Higgs production cross section and branching ratios. Also\nthe decays of the charged Higgs boson to SUSY particles have been taken\ninto account. 
\nAs a general feature, large negative $\\mu$ values give the largest\nreach, while large positive values yield the smallest $5\\,\\sigma$ discovery\nareas.\n\nThe search for the light charged Higgs boson covers the the area of\nlarge $\\tan \\beta$ and $M_{H^\\pm} \\lsim 160 \\,\\, \\mathrm{GeV}$. \nThe variation with $\\mu$ within the $m_h^{\\rm max}$ scenario induces a strong\nshift in the $5\\,\\sigma$ discovery contours with \n$\\Delta\\tan \\beta = 15$ for $M_{H^\\pm} = 100 \\,\\, \\mathrm{GeV}$, rising up to $\\Delta\\tan \\beta = 40$ for\nlarger $M_{H^\\pm}$ values. The discovery region is largest (smallest) for \n$\\mu = -(+)1000 \\,\\, \\mathrm{GeV}$, corresponding to the largest (smallest)\nproduction cross section. The effects are similar, but much less\npronounced, in the no-mixing scenario.\n\nThe search for the heavy charged Higgs boson reaches up to $M_{H^\\pm} \\lsim\n400 \\,\\, \\mathrm{GeV}$ for large $\\tan \\beta$. \nWithin the $m_h^{\\rm max}$ scenario the variation of $\\mu$ induces a very\nstrong shift in the $5\\,\\sigma$ discovery contours of up to $\\Delta\\tan \\beta = 40$\nfor $M_{H^\\pm} \\gsim m_{t}$. As in the light charged Higgs case, within the\nno-mixing scenario the effects show the same qualitative behavior, but\nare much less pronounced.\n\nCombining the search for the light and the heavy charge Higgs boson, we\nfind a small gap, while in the CMS Physics Technical Design Report\nanalysis all charged Higgs masses could be accessed. \nPossibly the heavy charged Higgs analysis strategy exploiting the fully\nhadronic final state can be extended to smaller $M_A$ values to\ncompletely close the gap. This issue deserves further studies.\n\n\n\n\n\n\\subsection*{Acknowledgements}\n\nThe work of S.H.\\ was partially supported by CICYT (grant FPA~2007--66387).\nWork supported in part by the European Community's Marie-Curie Research\nTraining Network under contract MRTN-CT-2006-035505\n`Tools and Precision Calculations for Physics Discoveries at Colliders'.\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIn cosmological models that obey General Relativity and the cosmological\nprinciple, the mass density parameter, $\\Omega$, and the cosmological\nconstant, $\\Lambda$, together determine the geometry of spacetime.\nThe Einstein-de Sitter ($\\Omega=1$) model has twin virtues of simplicity:\nflat spatial geometry and zero cosmological constant.\nHowever, the ages of globular cluster stars, the high baryon fraction\nin galaxy clusters, and some aspects of large scale galaxy clustering\nare accounted for more easily in low density models, which have flat \ngeometry if \n$\\mbox{${\\lambda}_{0}$} \\equiv\\Lambda c^{2}\/3 H_{0}^{2}= 1-\\mbox{${\\Omega}_{0}$ }$\\footnote{Subscript \n`zero' indicates the value of the parameters at the present epoch. 
\nAlso, we use the notation \n$H_{0}\\equiv \n{\\rm h}_{0}\\cdot 100\\; {\\rm km}\\; {\\rm s}^{-1}\\; {\\rm Mpc}^{-1}$}, \nor open, negative curvature geometry if $\\Lambda=0$.\nDynamical studies of large scale\nstructure can constrain \\mbox{${\\Omega}_{0}$ } (subject to uncertainties about\nbiased galaxy formation), but they are insensitive to a cosmological\nconstant because it represents an unclustered energy component.\nThis paper examines the prospects for using the anisotropy of the \nquasar correlation function to constrain \\mbox{${\\Omega}_{0}$ } and \\mbox{${\\lambda}_{0}$},\nin particular to distinguish between flat and open cosmologies.\nWhile this test is impractical with present quasar samples, the\nAnglo-Australian 2-degree Field Survey (hereafter 2dF) and the\nSloan Digital Sky Survey (hereafter SDSS or Sloan) will both produce redshift\nsamples of several tens of thousands of quasars over the next few years.\n\nClassical methods of measuring spacetime geometry rely on standard candles\nor standard rulers, and they are therefore subject to systematic uncertainties\nin the evolution of these objects over a large range in redshift.\nAlcock \\& Paczy\\'{n}ski (1979) proposed an alternative approach that is\nalmost entirely independent of evolution and assumes only that structure\nin the universe is statistically isotropic, as implied by the cosmological\nprinciple. At high redshift, the ratio of distances corresponding to\na given redshift separation $\\Delta z$ and angular separation $\\Delta \\theta$\ndepends on the spacetime metric, and one can constrain the cosmological\nparameters by requiring that they yield isotropic structure.\n\nAlcock \\& Paczy\\'{n}ski (1979) illustrated their suggestion with the idealized\nexample of spherical clusters, but nature does not provide us with such\nconvenient objects of study. Ryden (1995a) suggested using the statistical\ndistribution of void shapes in deep galaxy redshift surveys like the SDSS,\nbut it is not clear that even this million galaxy sample will be sufficient\nto detect the effect; it does not extend much beyond $z \\sim 0.2$, and\nat this depth the geometrical distortion of voids is difficult to separate \nfrom distortions induced by galaxy peculiar velocities (Ryden \\& Melott 1996).\nWhile quasar redshift surveys have fewer tracers, they reach to much\nhigher redshifts, where the expected geometrical anisotropy is more \npronounced. Thus Phillips (1994) suggested using the orientations\nof neighboring quasar pairs to implement Alcock \\& Paczy\\'nski's method.\nThe 2-point correlation function contains statistical information on\nthe distribution of all quasar pairs, not just nearest neighbor pairs,\nso it is a more powerful measure for detecting distortions of geometry.\nIt is important to note that these distortions are detectable only because\nquasars are clustered. If quasars were Poisson distributed, then all pair\norientations would be statistically isotropic regardless of the assumed\nmetric.\n\nTwo recent papers (Ballinger, Peacock \\& Heavens 1996; \nMatsubara \\& Suto 1996; see also Nakamura, Matsubara, \\& Suto 1997)\nhave discussed the use of redshift space clustering in quasar samples\nas a tool for constraining \\mbox{${\\Omega}_{0}$ } and \\mbox{${\\lambda}_{0}$}. \nMatsubara \\& Suto (1996) derive an expression for the correlation\nfunction that includes the effects of geometrical distortion and\nlinear theory peculiar velocities. 
Ballinger et al.\\ (1996) derive\nexpressions for the redshift space power spectrum, including effects of \ngeometrical distortion, linear theory peculiar velocities, and incoherent\nrandom velocities generated by small scale collapse.\n\nIn the next section, we discuss the geometrical distortion of the\ncorrelation function in the absence of peculiar velocities, extending\nthe calculation of Phillips (1994). The distortion due to geometry\nalone is simple to understand: changes in the metric just alter the\nrelation between a $(\\Delta z, \\Delta \\theta)$ separation and the \ncorresponding physical distance. In \\S 3 we show that, for realistic\nquasar samples, measurement errors in the correlation function are\nlikely to be dominated by Poisson fluctuations in the number of\nquasar pairs. As a result, we can easily create Monte Carlo realizations\nof correlation function measurements for a specified model correlation\nfunction. In \\S 4 we use such Monte Carlo experiments to see what\ncosmological constraints can be expected from the 2dF and Sloan quasar\nsamples. In \\S 5 we summarize the results of these experiments and\ndiscuss some of their limitations. The most serious of these is\nprobably our neglect of peculiar velocity distortions in modeling the \ncorrelation function, but we argue in \\S 5 that these are unlikely\nto overwhelm the geometrical signal. A full analysis of observational\ndata will require joint consideration of velocity and geometry effects,\nalong the lines envisioned by Ballinger et al.\\ (1996).\n\n\\section{The redshift space correlation function }\nThe position of a quasar in \nredshift survey data is characterized by three numbers:\nthe redshift and two angle coordinates on the sky $(z, \\theta_r, \\phi_r)$.\nFor a given quasar, the probability of finding a\nneighbor in a given volume element is symmetric about the line of sight,\nsince this is the only preferred direction.\nFor any pair\nof quasars one can therefore find a transformation \n$(\\phi_r, \\theta_r) \\rightarrow (\\phi, \\theta) $ \nsuch that $\\phi_1 = \\phi_2$ in the new frame of reference.\nConsequently, the distance between the points in redshift\nspace can be described in terms of redshift $z$ and angle $\\theta$.\n\n\n\n\nFigure 1 shows the situation with quasars at the positions\n$(z_{1}, \\theta_{1})$ and $(z_{2}, \\theta_{2})$.\nWe define $s_z \\equiv z_{2}-z_{1}$, \n$s_{\\theta} \\equiv z\\cdot (\\theta_2- \\theta_1)=\nz\\cdot \\Delta\\theta $, with $z=(z_{1}+z_{2})\/2$ assumed to be much greater\nthen $z_{2}-z_{1}$. We now adopt the following notation:\n\\begin{equation}\ns \\equiv \\sqrt{s_z^2 + s_{\\theta}^2}\\;, \\lab{s.def}\n\\end{equation}\nand\n\\begin{equation}\n\\mu \\equiv \\frac{s_{z}}{s}\\;. \\lab{mu.def}\n\\end{equation}\nThe angular separation can then be expressed as\n\\begin{equation}\n{\\Delta\\theta}^2=z^{-2}s_z^2(\\mu^{-2} - 1). \\lab{d.theta}\n\\end{equation}\nIn Euclidean geometry, $s_z$ and $s_{\\theta}$ would be the line-of-sight\nand transverse separations, respectively (in dimensionless, redshift units); \n$s$ would be\nthe 3-dimensional separation, and $\\mu$ would be the cosine of the\nangle between the separation vector and the line of sight. 
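Note that equation\\r{d.theta} is a purely algebraic consequence of the definitions\\r{s.def} and\\r{mu.def}, independent of any assumption about the geometry:
\\begin{equation}
s_{\\theta}^2 = s^2 - s_z^2 = s_z^2 \\left( {\\mu}^{-2} - 1 \\right), \\qquad \\Delta\\theta = \\frac{s_{\\theta}}{z}~.
\\end{equation}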
\nIn reality the geometry is not Euclidean, but we can still keep the formal\ndefinitions of $s$ and $\\mu$ introduced above.\n\nAdopting Phillipps' (1994) notation we have\n\\begin{equation}\nr^2=f^2 {\\Delta\\theta}^2+g^2 s_z^2\\;, \\lab{r.sq}\n\\end{equation}\nwhere in general (e.g., Weinberg 1972)\n\\begin{equation}\n{\\scriptsize\n{\\normalsize f= \\frac{c}{H_{0}}} \\times \n\\left\\{\n\\begin{array}{ll}\n{\\displaystyle \\frac{1}{\\sqrt{1-\\mbox{${\\Omega}_{0}$ }-\\mbox{${\\lambda}_{0}$}}} \\sinh{\\left[\\int_{\\frac{1}{1+z}}^{1} \\frac{\\sqrt{1-\\mbox{${\\Omega}_{0}$ }-\\mbox{${\\lambda}_{0}$}}dy}{y\\sqrt{\\frac{\\mbox{${\\Omega}_{0}$ }}{y}-(1-\\mbox{${\\Omega}_{0}$ }-\\mbox{${\\lambda}_{0}$})+\\mbox{${\\lambda}_{0}$} y^{2}}}\\right]}} & \\mbox{if $(\\mbox{${\\Omega}_{0}$ }+\\mbox{${\\lambda}_{0}$})<1$} \\\\\n{\\displaystyle \\int_{\\frac{1}{1+z}}^{1} \\frac{dy}{y\\sqrt{\\frac{\\mbox{${\\Omega}_{0}$ }}{y}+\\mbox{${\\lambda}_{0}$} y^{2}}}} & \\mbox{if $(\\mbox{${\\Omega}_{0}$ }+\\mbox{${\\lambda}_{0}$})=1$} \\\\\n{\\displaystyle \\frac{1}{\\sqrt{-\\left(1-\\mbox{${\\Omega}_{0}$ }-\\mbox{${\\lambda}_{0}$}\\right)}} \\sin{\\left[\\int_{\\frac{1}{1+z}}^{1} \\frac{\\sqrt{-\\left(1-\\mbox{${\\Omega}_{0}$ }-\\mbox{${\\lambda}_{0}$} \\right)}dy}{y\\sqrt{\\frac{\\mbox{${\\Omega}_{0}$ }}{y}-\\left(1-\\mbox{${\\Omega}_{0}$ }-\\mbox{${\\lambda}_{0}$}\\right)+\\mbox{${\\lambda}_{0}$} y^{2}}}\\right]}} & \\mbox{if $(\\mbox{${\\Omega}_{0}$ }+\\mbox{${\\lambda}_{0}$})>1$}\n\\end{array}\n\\right. \n}\\lab{f.def}\n\\end{equation}\nand\n\\begin{equation}\ng=\\frac{c}{H_{0}} \\times \\frac{1}{\\sqrt{\\mbox{${\\Omega}_{0}$ } {(1+z)}^{3} + (1-\\mbox{${\\Omega}_{0}$ }-\\mbox{${\\lambda}_{0}$}){(1+z)}^{2}+\\mbox{${\\lambda}_{0}$}}}. \\lab{g.def}\n\\end{equation}\nCombining equations\\r{s.def},\\r{mu.def},\\r{d.theta} and\\r{r.sq}, we find\n\\begin{equation}\n\\frac{r}{s}=g \\sqrt{\\mu^2 + h^2(1-\\mu^2)}\\;,\\;\\;\\; \\mbox{where} \\;\\;h\\equiv\\frac{1}{z}\\cdot \\frac{f}{g}. \\lab{r.over.s}\n\\end{equation}\n\nNow suppose that the quasars are clustered and that their\ncorrelation function $\\xi(r)$ is described by power-law,\n\\begin{equation}\n\\xi(r)={\\left( \\frac{r}{r_0} \\right)}^{-\\mbox{$\\gamma$}}, \\lab{xi.r}\n\\end{equation}\nwhere, in principle, the correlation length $r_0$ and the index $\\mbox{$\\gamma$}$ may\nbe functions of $z$.\nIn redshift space, the correlation function at vector separation $(s, \\mu)$ \nis simply\nthe value of $\\xi(r)$ at the scalar separation $r$ corresponding to $s$, $\\mu$.\nSubstitution of\\r{r.over.s} into\\r{xi.r} yields\n\\begin{equation}\n\\xi(s, \\mu) = {\\left(\\frac{s}{s_0}\\right)}^{-\\mbox{$\\gamma$}}{\\left[{\\mu}^2 + h^2(1-{\\mu}^2)\\right]}^{-\\frac{\\mbox{$\\gamma$}}{2}}, \\lab{xi.s}\n\\end{equation}\nwhere $s_0=r_0\/g$.\nThe correlation function in these coordinates is anisotropic because the\nphysical separation corresponding to a given $s$ depends on angle\nwith respect to the line of sight. 
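A convenient way to quantify this anisotropy, which follows immediately from\\r{xi.s}, is the ratio of the correlation function along and across the line of sight at fixed $s$,
\\begin{equation}
\\frac{\\xi(s, \\mu=1)}{\\xi(s, \\mu=0)} = h^{\\mbox{$\\gamma$}}~.
\\end{equation}
For example, evaluating\\r{f.def},\\r{g.def} and\\r{r.over.s} for an $\\mbox{${\\Omega}_{0}$ }=1$, $\\mbox{${\\lambda}_{0}$ }=0$ model gives $h = 2\\left[ (1+z)^{3\/2} - (1+z) \\right]\/z \\approx 2.2$ at $z=2$, so that for $\\mbox{$\\gamma$}=2$ the clustering signal along the line of sight is amplified by nearly a factor of five relative to the transverse direction.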
The anisotropy is stronger when\n$\\mbox{$\\gamma$}$ is larger because a given change in $r$ then produces a larger change\nin $\\xi(r)$.\n\n\n \nOnce $s_0$ and $\\mbox{$\\gamma$}$ are set, the dependence of \nthe correlation function on the geometry of the universe is \ndetermined entirely through the value of the ``distortion parameter'' $h$.\nFigure~2 shows the redshift dependence of $h(z)$\nfor open and flat cosmologies with various \\mbox{${\\Omega}_{0}$ }.\nThe functional form of $h(z)$ depends much\nmore strongly on \\mbox{${\\lambda}_{0}$ } than on \\mbox{${\\Omega}_{0}$ }, implying that clustering anisotropy\ncan much more easily distinguish models with the same \\mbox{${\\Omega}_{0}$ } and different \n\\mbox{${\\lambda}_{0}$ } than models with similar \\mbox{${\\lambda}_{0}$ } and different \n$\\Omega_0$, as Alcock \\& Paczy\\'{n}ski (1979) pointed out.\nThere is also an approximate degeneracy in $h(z)$ between open models\nwith low \\mbox{${\\Omega}_{0}$ } and flat models with relatively high $\\Omega_0$.\n\n\nIn the range $1 < z < 5$, $h(z)$ can be well approximated by a \nstraight line for most models. The anisotropy increases with redshift\nin all cases, but models can in principle be distinguished both by \nthe value of $h$ at a given $z$ and by the overall redshift dependence\nof $h(z)$. As discussed in the following sections, in a real \nredshift survey the information for distinguishing cosmologies\ncomes mainly from the redshifts where the quasar distribution peaks,\nsince that is where the clustering can be measured most precisely.\n\nFigure 3 shows the angular dependence of the correlation function,\nfrom equation~(\\ref{xi.s}), for various values of $h$.\nIf $h=1$, the geometry of redshift space is Euclidean, and\nthere is no correlation function anisotropy.\nAll models have $h \\approx 1$ at $z \\ll 1$, and the\n$\\lambda_0=1$, $\\Omega_0=0$ model has $h(z)=1$ at all redshifts.\nFor other models $h(z)>1$ at high redshifts, and the correlation\nfunction is amplified\\footnote{Whether one sees \n``amplification'' or ``suppression'' depends on the choice of reference \nmodel. Here our reference is Euclidean (\\mbox{${\\lambda}_{0}$}=1), but Ballinger et al. (1996)\ntake \\mbox{${\\Omega}_{0}$ }=1 as a reference and therefore find that alternative\nmodels have suppressed clustering along the line of sight.}\nfor pair separations along the line of sight ($|\\mu|=1$) relative\nto pair separations perpendicular to the line of sight.\nThe anisotropy of the correlation function\nincreases rapidly with increasing $h$.\n\n\n\\section{Statistical errors in measurements of $\\xi(s, \\mu)$}\nFollowing Peebles (1980, \\S 31), we can define a quasar's\naverage number of ``clustered neighbors'' by\n\\begin{equation}\nN_{c}=\\int^\\infty_0 \\bar{n}\\xi(r)4\\pi r^2dr, \\lab{N.c1}\n\\end{equation}\nwhere $\\bar{n}$ is the space density of quasars in a sample.\nFor purposes of calculation, we assume a power-law correlation function\n(equation~[\\ref{xi.r}]).\nSince the correlation function may fall below this power-law at large\ndistances, we only consider pairs out to a separation $\\alpha r_0$ that\nis a small multiple $\\alpha$ of the correlation length.\nSubstitution of\\r{xi.r} into\\r{N.c1} yields\n\\begin{equation}\nN_{c}=\\frac{4\\pi{\\mbox{$\\alpha$}}^{3-\\mbox{$\\gamma$}}}{3-\\mbox{$\\gamma$}} \\bar{n}r_{0}^3. 
\\lab{N.c2}\n\\end{equation}\nWe will take $\\mbox{$\\alpha$}=2$ as a template value in our simulations.\nFor $\\mbox{$\\alpha$}=2$, the value of $N_c$ from \\r{N.c2} varies by only 5\\%\nfor $1 \\leq \\mbox{$\\gamma$} \\leq 2$.\n\nConsider a sample of $N_Q$ quasars in an area $A$ square degrees,\nand let $F(z)dz$ denote the fraction of\nthe quasar sample in the redshift range $(z,z+dz)$, normalized\nso that $\\int_0^\\infty F(z) dz=1$.\nThe observations directly tell us $(N_Q\/A)F(z)$, the average\nnumber of quasars per square degree per unit redshift.\nThe relation between $(N_Q\/A)F(z)$\nand the number density $\\bar{n}(z)$ in\ncomoving ${\\mbox h_0}^{-3}\\;\\mbox{Mpc}^{3}$ depends on the\ncosmological model.\nSince the volume of a redshift range $dz$ and solid angle $d\\Omega$\nis $g\\, dz\\cdot f^2 d\\Omega$,\n\\begin{equation}\n\\bar{n}={3283 (N_Q\/A) F(z) \\over gf^2}, \\lab{bar.n1}\n\\end{equation}\nwhere 3283 is the number of square degrees per radian.\nFor compactness, in this and future equations we generally omit the\nexplicit $z$-dependence of $\\bar{n}$, $f$, and $g$.\nFrom\\r{f.def} and\\r{g.def} it is clear that both $f$ and $g$ involve\nthe factor $cH_{0}^{-1}=3000 {\\mbox h_0}^{-1}\\mbox{Mpc}$, and it is convenient\nto write them in the form\n\\begin{equation}\nf=\\frac{c}{H_{0}}\\tilde{f}, \\qquad\\mbox{and}\\qquad \ng=\\frac{c}{H_{0}}\\tilde{g}. \\lab{f.g.def}\n\\end{equation}\nWe use the notation\\r{f.g.def} \nand the value of $cH_{0}^{-1}$ to write\\r{bar.n1} as\n\\begin{equation}\n\\bar{n}=1.21\\cdot 10^{-7} \\frac{N_Q}{A} F(z)\n{\\tilde{g}}^{-1}\\tilde{f}^{-2} ~~\n{\\rm h}_0^3\\;{\\rm Mpc}^{-3} ~{\\rm (comoving)}.\n\\lab{bar.n2}\n\\end{equation}\nSubstitution of\\r{bar.n2} into\\r{N.c2} gives\n\\begin{equation}\nN_{c}(z)=\\frac{4\\pi{\\mbox{$\\alpha$}}^{3-\\mbox{$\\gamma$}}}{3-\\mbox{$\\gamma$}} r_{0}^3 \n\\left(1.21\\cdot 10^{-7} \\frac{N_{Q}}{A} F(z) \n{\\tilde{g}}^{-1}\\tilde{f}^{-2}\\right),\\lab{N.c3}\n\\end{equation}\nfor $r_0$ in comoving h$_0^{-1}$ Mpc.\nThe number of correlated pairs in the range $(z, z+dz)$ is\n\\begin{equation}\nN_{pairs}(z)=N_{Q} F(z) N_{c}(z)\\propto \\frac{N_{Q}^2}{A} F^2(z) r_{0}^3. \\lab{N.pairs1}\n\\end{equation}\n\nAs a fiducial case let us take $\\mbox{$\\alpha$}=2$, $\\mbox{$\\gamma$}=2$, \n$N_{Q}= 10^{5}$, $A= 10^{4} {\\Box}^\\circ $, and $F(z)=0.5$.\nThen\n\\begin{equation}\nN_{c}(z)=0.11 {\\left( \\frac{r_{0}}{10 {\\mbox h_0}^{-1}\\mbox{Mpc}} \\right)}^{3} \\left(\\frac{{\\tilde{g}}^{-1}\\tilde{f}^{-2}}{7.27}\\right), \\lab{N.c4}\n\\end{equation}\nwhere $r_0$ is the comoving correlation length.\nWe have scaled ${\\tilde{g}}^{-1}\\tilde{f}^{-2}$ to the value for an\n\\mbox{${\\Omega}_{0}$ }=1 model at $z=2$.\nSubstitution of\\r{N.c4} into\\r{N.pairs1} yields\n\\begin{equation}\nN_{pairs}(z)=5500 {\\left( \\frac{r_{0}}{10 {\\mbox h_0}^{-1}\\mbox{Mpc}} \\right)}^{3} \\left(\\frac{{\\tilde{g}}^{-1}\\tilde{f}^{-2}}{7.27}\\right) \\lab{N.pairs2}\n\\end{equation}\nfor the number of correlated quasar pairs expected in an interval\n$\\Delta z \\approx 1$ near the peak of the redshift distribution $F(z)$\nin a large quasar redshift survey.\n\nWe now come to the central issue of this Section --- a key issue for\nthe entire paper, in fact --- the nature of statistical fluctuations in\nestimates of the correlation function. 
In the limit $N_c \\gg 1$,\nthe statistical uncertainty is dominated by the finite number of\nindependent structures in the survey volume, an effect sometimes referred to\nas ``cosmic variance.'' As a conceptual toy model, one can imagine\nthe structure to consist of clusters with an average of $N_c$ members\napiece. Each cluster is sampled by many correlated pairs, so the\nfluctuation in the number of clusters within the survey dominates\nthe error in $\\xi$. In the opposite limit, $N_c \\ll 1$, most clusters\nare not ``detected'' with even a single correlated pair, and it\nis the Poisson fluctuation in the number of pairs that dominates the \nstatistical error rather than the fluctuation in the number of\nindependent structures. In this ``sparse sampling'' or ``Poisson limit,''\nthe error in an estimate of $\\xi(s,\\mu)$ in a bin $(\\Delta s, \\Delta \\mu)$\nis simply the Poisson error in the number of pairs in the bin,\nand the errors in separate bins are statistically independent.\nThe transition between the dense sampling and sparse sampling limits\noccurs at $N_c \\approx 1$; we have carried out numerical experiments\nwith Monte Carlo clustering models (Soneira \\& Peebles 1978) to\nconfirm that the Poisson error approximation holds quite accurately\nfor $N_c \\la 1$.\n\nIn a typical galaxy redshift survey, $N_c(z) \\gg 1$ nearby, \nand $N_c(z)$ drops below one at large distances, where only the\nmost luminous galaxies lie above the survey's apparent magnitude limit.\nRoughly speaking, the volume within which $N_c(z) \\ga 1$ is\nthe effective volume of the survey for purposes of estimating $\\xi$;\nthere is some usable information from larger distances, but as\nthe structure ``fades out'' under increasingly sparse sampling, the\nstatistical significance of this additional information drops.\n\nEquation~(\\ref{N.c4}) implies that even an ambitious quasar\nredshift survey is likely to be in the sparse sampling regime\nat {\\it all} redshifts, unless the comoving quasar correlation\nlength is substantially larger than $10 {\\rm h}_0^{-1}\\;$Mpc.\nThis result has several important implications.\nFirst, the independent, Poisson errors in $\\xi$ allow a straightforward\nmaximum-likelihood scheme for estimating correlation function parameters.\nWe describe this scheme in \\S 4.3 below; this method was developed\nindependently by Croft et al. (1997), who applied it to rich galaxy\nclusters, and it was applied to a sample of high-redshift quasars\nfrom the Palomar Transit Grism Survey by Stephens et al. (1997).\nA second implication is that, given a theoretical prediction of\n$\\xi(s,\\mu)$, one can generate Monte Carlo realizations of measured\ncorrelation functions for a given sample without creating detailed\nrealizations of a clustering pattern, because the fluctuation in\na measured value of $\\xi$ depends only on the expected number of pairs\nand is independent of fluctuations in other bins and of the higher-order\nclustering properties of the quasar distribution.\nWe describe this method for creating Monte Carlo realizations in \\S 4.2\nbelow, and in \\S 4.4 we use it to evaluate the power of the SDSS\nand 2dF quasar redshift surveys for constraining the cosmological constant.\nFinally, there is an observational implication: for a fixed number of \nquasars, a deeper survey over a smaller area has greater statistical\npower, at least for measurements of the correlation function on\nscales $\\sim r_0$. 
\n\nThis last point is demonstrated most clearly by generalizing\nequations~(\\ref{N.c4}) and~(\\ref{N.pairs2}) to\n\\begin{equation}\nN_{c}(z)=0.11 \\left(\\frac{\\mbox{$\\alpha$}^{3-\\mbox{$\\gamma$}}\/(3-\\mbox{$\\gamma$})}{2}\\right)\n{\\left(\\frac{r_{0}(z)}{10 h^{-1} \\mbox{Mpc}}\\right)}^3 \n\\left(\\frac{N_{Q}\/A}{10\/{\\Box}^{o}}\\right) \n\\left(\\frac{F(z)}{0.5}\\right)\n\\left(\\frac{{\\tilde{g}}^{-1}\\tilde{f}^{-2}}{7.27}\\right),\\lab{N.c5}\n\\end{equation}\n\\begin{equation}\nN_{pairs}(z)=5500 \\left(\\frac{\\mbox{$\\alpha$}^{3-\\mbox{$\\gamma$}}\/(3-\\mbox{$\\gamma$})}{2}\\right)\n{\\left(\\frac{r_{0}(z)}{10 h^{-1} \\mbox{Mpc}}\\right)}^3 \n\\left(\\frac{N_{Q}}{10^5}\\right)\n\\left(\\frac{N_{Q}\/A}{10\/{\\Box}^{o}}\\right) \n{\\left(\\frac{F(z)}{0.5}\\right)}^2 \n\\left(\\frac{{\\tilde{g}}^{-1}\\tilde{f}^{-2}}{7.27}\\right). \\lab{N.pairs3}\n\\end{equation}\nIf $N_c(z) \\la 1$, then the effective signal for correlation function\nmeasurements is set by $N_{pairs}(z)$, since\neach correlated pair contributes non-redundant information.\nAt fixed surface density\n$N_Q\/A$, this signal increases in proportion to the number of\nquasars $N_Q$, but at fixed area $A$ it is proportional to $N_Q^2$.\nOf course, increasing $N_Q$ at fixed $A$ usually requires one to\nobserve fainter quasars, while increasing $N_Q$ at fixed surface density \ndoes not. Equation~(\\ref{N.pairs3}) also implies that the precision of\ncorrelation function measurements will be peaked sharply near the\nmaximum of the survey's redshift distribution $F(z)$ and that the\nprecision attained will depend strongly on the actual value\nof the quasar correlation length $r_0$.\n\nThe remaining equations in this Section are not fundamental,\nbut they are needed for our Monte Carlo simulations and \nmaximum-likelihood parameter estimation, and we include their\nderivation for the convenience of others who may wish to \npursue similar approaches.\nA physical model specifies the correlation function in\nreal space ($r$-space), but a redshift survey provides data\nin redshift space ($s$-space).\nThe relation between coordinates in these two complementary descriptions is\n\\begin{equation}\n\\left( \\begin{array}{c}\ndr \\\\ d(\\cos t)\n\\end{array} \\right) = \\underbrace{\\left( \\begin{array}{cc} \n\\frac{\\mbox{$\\partial$} r}{\\mbox{$\\partial$} s} & \\frac{\\mbox{$\\partial$} r}{\\mbox{$\\partial$} \\mu} \\\\\n\\frac{\\mbox{$\\partial$} (\\cos t)}{\\mbox{$\\partial$} s} & \\frac{\\mbox{$\\partial$} (\\cos t)}{\\mbox{$\\partial$} \\mu}\n\\end{array} \\right)}_{\\mbox{\\boldmath $A$}} \\left( \\begin{array}{c}\nds \\\\ d \\mu\n\\end{array} \\right), \\lab{matrix}\n\\end{equation}\nwhere $\\cos{t} = g s_{z}\/r$.\nThus\n\\begin{equation}\ndr \\,d(\\cos t)= \\det{\\!\\mbox{\\boldmath $A$}} \\; ds \\, d\\mu . \\lab{determinant}\n\\end{equation}\nA short calculation leads to\n\\begin{equation}\nd(\\cos{t})=\\frac{h^2 d\\mu}{[h^2-(h^2-1)\\mu^2]^\\frac{3}{2}}, \\lab{d.cos}\n\\end{equation}\nand\n\\begin{equation}\ndr=g \\sqrt{h^2 - (h^2-1) \\mu^2} \\,ds - \\frac{g s \\mu (h^2-1) d\\mu}{\\sqrt{h^2 - (h^2-1) \\mu^2}}. 
\\lab{d.r}\n\\end{equation}\nSince $\\frac{\\mbox{$\\partial$} (\\cos t)}{\\mbox{$\\partial$} s}=0$, equation\\r{determinant} yields\n\\begin{equation}\n(dV)_r = -2\\pi r^2 dr d(\\cos{t}) = -2\\pi r^2 \\left[\\frac{\\mbox{$\\partial$} r}{\\mbox{$\\partial$} s} \\cdot \\frac{\\mbox{$\\partial$} (\\cos t)}{\\mbox{$\\partial$} \\mu}\\right] ds d\\mu \\lab{vol.1}\n\\end{equation}\nfor the volume element in real space.\nSubstitution of\\r{d.cos} \\&\\r{d.r} into\\r{vol.1} leads to\n\\begin{equation}\n(dV)_r=g^3 h^2 (-2\\pi s^2 ds d\\mu)= g^3 h^2 (dV)_s, \\lab{vol.2}\n\\end{equation}\nwhere $(dV)_s$ is the volume element in redshift space.\nThe number of quasars in a given region will be the same\nregardless of the adopted coordinate system. Thus\n\\begin{equation}\nn_s (dV)_s \\equiv n(z) (dV)_r, \\lab{vol.3}\n\\end{equation}\nwhere $n_s$ and $n(z)$ are number densities of quasars in redshift and real\nspaces, respectively.\nCombining\\r{vol.2} and\\r{vol.3} leads to \n\\begin{equation}\nn_s(s, \\mu) = g^3(z) h^2(z) n(z) . \\lab{num.den} \n\\end{equation}\n\nThe number of pairs expected in an\ninfinitesimally small bin in $(s, \\mu)$ within the finite redshift range\n$(z, z+\\Delta z)$ is\n\\begin{equation}\ndN_{pairs}=N(z, z+\\Delta z) n_s [1+\\xi(s, \\mu)] (-2\\pi s^2 ds d\\mu),\n\\lab{pair.num.1}\n\\end{equation}\nwhere $N(z, z+\\Delta z)$ is the number of observed quasars \nin this redshift range.\nSince $F(z)dz$ is the fraction of quasar redshifts in an infinitesimal \nrange $dz$, \n\\begin{equation}\nN(z, z+\\Delta z)=N_{Q} \\int_{z}^{z+\\Delta z} F(z) dz,\n\\lab{N.z.dz}\n\\end{equation}\nwhere $N_Q$ is the total number of quasars in the sample.\nSubstitution of\\r{xi.s},\\r{num.den} and\\r{N.z.dz} into\\r{pair.num.1} for\ninfinitesimally small $\\Delta z\\equiv dz$ yields \n\\begin{equation}\ndN_{pairs} = -2\\pi N_Q g^3 h^2 \\bar{n} F(z) \n\\left\\{1+{\\left(\\frac{s}{s_0}\\right)}^{-\\mbox{$\\gamma$}}\n{\\left[{\\mu}^2 + h^2(1-{\\mu}^2)\\right]}^{-\\frac{\\mbox{$\\gamma$}}{2}}\\right\\} \ns^2 dz ds d\\mu .\n\\lab{dN.pairs}\n\\end{equation}\nHere $s_0 \\equiv r_0\/g$ is the line-of-sight separation that corresponds\nto the comoving correlation length $r_0$, and the \nmean number density $\\bar{n}$ is given in terms of\n$N_Q$, $A$, and $F(z)$ by equation\\r{bar.n2}.\nThe quantities $g$, $h$, and $\\bar{n}$ all depend on \nredshift, and $s_0$ and $\\gamma$ may depend on redshift also.\n\n\\section{Monte Carlo experiments}\n\n\\subsection{Observational perspective}\n\nSuccessful measurement of quasar clustering anisotropy will\nrequire redshift samples much larger than those that exist today.\nThere are two clear prospects for such samples, the SDSS and\na quasar redshift survey using the 2dF fiber spectrograph on the \nAnglo-Australian Telescope.\nThe SDSS plans to obtain redshifts for 80,000 quasars in an area\nof 5,000 square degrees in the North Galactic Cap.\nIt will also measure redshifts for $\\sim 20,000$ quasars in the surrounding\n5,000 square degree ``skirt,'' but this subset will be less useful for\nclustering measurements because of its lower density.\nThe SDSS will also conduct a deeper quasar survey in a stripe of $\\sim 200$\nsquare degrees in the South Galactic Cap, to a limiting magnitude\nyet to be determined.\n(A discussion of the planned SDSS quasar survey can be found at\nhttp:\/\/www.astro.princeton.edu\/BBOOK\/SCIENCE\/QUASARS\/quasars.html.)\nThe SDSS will select quasar candidates based on 5-band CCD photometry.\nThe planned 2dF survey will target $\\sim 30,000$ 
quasars in an\narea $\\sim 750$ square degrees, selected as UV-excess stellar objects\non scanned photographic plates (Shanks, private communication).\n\nIn order to generate Monte Carlo realizations of correlation function\nmeasurements from these samples, we need to know (approximately) the\nexpected redshift distributions $F(z)$.\nWe have computed these using the observational determinations of\nthe quasar luminosity function by\nBoyle (1991) and Warren, Hewett, \\& Osmer (1994).\nThe Boyle (1991) results are based on the Boyle, Shanks \\& Peterson (1988) \nsurvey of quasars selected by the UV-excess method, \nwhich is effective mainly for $z \\la 2.2$.\nThe Warren et al. (1994)\nresults are based on a multicolor survey and probe the redshift range\n$2.0 < z < 4.5$. For our calculations, we assume that the 2dF survey\nwill contain quasars only up to $z=2.2$, because of its UV-excess\nselection technique, and that the SDSS multi-color selection will\nidentify quasars at all redshifts, with apparent magnitude being\nthe only important limit. We find that the planned SDSS and 2dF\nsurface densities imply limiting apparent magnitudes of roughly\n$B=19.6$ and $B=21.4$, respectively. With these apparent magnitude\nlimits, we compute the number of quasars per square degree per unit\nredshift from the above-mentioned luminosity functions.\n\nThe solid curves in the two panels of Figure~4 show the results\nof these calculations. Sharp dips at $z \\sim 2- 2.2$ are an artifact\nof using the Boyle (1991) luminosity function close to the redshift\nlimit of the Boyle et al. (1988) survey. More broadly, the shapes\nof these curves reflect the interplay between the peak in the quasar \nluminosity\nfunction at $z \\sim 2-3$ and the smaller fraction of the luminosity\nfunction that is visible above the apparent magnitude limit at larger\ndistances. Of course, $F(z)$ is also affected by the increase of the\ndifferential volume element $dV\/dz$ with $z$, especially at low redshift.\nUp to $z\\sim 2$, the evolution effects and the magnitude limit\nwork in different directions, producing a relatively flat redshift \ndistribution. Above $z\\sim 2$ both the\nnumber density of quasars and our ability to see them decreases, and \nthe redshift distribution drops rapidly towards zero.\n\n\n\nSince these calculated redshift distributions are only approximate ---\nthe quasar luminosity function is itself uncertain, and we have not \nmodeled the selection criteria and corresponding incompleteness of \nthe two surveys in any detail --- we fit them with simple analytic\nforms to use in our Monte Carlo calculations below. 
These fits\nare indicated by the dashed curves in Figure~4.\nThe survey's target selection algorithms will probably focus on objects\nwith point-source morphology in order to reduce contamination by galaxies.\nThis selection technique may exclude quasars at low redshift, where\nthe host galaxies are bright enough to be detected as extended emission.\nIn our $F(z)$ fits, we model this effect by a sharp cutoff at $z<0.4$.\n\n\\subsection{Generating Monte Carlo simulations}\nFor our Monte Carlo experiments, we wish to generate realizations\nof {\\it measured} redshift-space correlation functions $\\xi(s,\\mu)$\nfor various cosmological models and quasar survey parameters.\nWe consider two classes of cosmological models: flat models with\n$\\mbox{${\\Omega}_{0}$ }+\\mbox{${\\lambda}_{0}$ }=1$, and open models with $\\mbox{${\\lambda}_{0}$ }=0$.\nThe $\\mbox{${\\Omega}_{0}$ }=1$, Einstein-de Sitter cosmology is a limiting case of both\nfamilies. We assume that the real-space quasar correlation function\nis a power-law, $\\xi(r) = (r\/r_0)^{-\\mbox{$\\gamma$}}$, out to $2r_0$.\nOnce the cosmological model, the correlation function parameters,\nand the quasar survey parameters \nare set, the expected number of pairs in a separation\/redshift\nbin $ds d\\mu dz$ is given by equation\\r{dN.pairs}.\n\nBecause the sparseness of the quasar distribution puts us in the Poisson\nlimit (see \\S 3), we are able to generate realizations of the\nmeasured $\\xi(s,\\mu)$ very simply, without creating artificial spatial\ndistributions. Given the expected number of pairs in a bin from\nequation\\r{dN.pairs}, the number of pairs in a Monte Carlo realization\nis a random deviate drawn from a Poisson distribution with this\nmean value. The pair numbers for each bin can be generated independently.\nIn practice we use\n20 logarithmic bins in $s$, starting at $0.1 s_{0}$ and\ngoing up to $2 s_{0}$.\nWe use 5 equal bins in $\\mu$; because these bins are large, we compute\nthe predicted number of pairs by numerical integration over the bin.\nWe work with redshift bins $\\Delta z = 0.15$, implying\n12 bins for simulations with a 2dF-like $F(z)$ and 22 bins for simulations\nwith an SDSS-like $F(z)$.\nWe perform simulations for seven cosmological models: flat and open\nlow-density models with $\\mbox{${\\Omega}_{0}$ }=0.1, 0.2, 0.4$, and the Einstein-de Sitter\ncosmology, $\\mbox{${\\Omega}_{0}$ }=1$. To fully specify the theoretical model, we must also \nspecify the correlation length $s_0$ and the index $\\mbox{$\\gamma$}$ as functions\nof redshift. Existing observations provide only weak constraints on $\\mbox{$\\gamma$}$. 
\nWe adopt $\\mbox{$\\gamma$}=2$, close to the index $\\mbox{$\\gamma$}=1.8$ of the galaxy\ncorrelation function, and we assume that $\\mbox{$\\gamma$}$ is independent of redshift.\n\nEven the quasar correlation length $s_0$ is poorly known at present,\nbecause the sparseness of the quasar distribution in current samples\nmakes measurements of the quasar correlation function very noisy.\nThe best existing study is probably that of Shanks \\& Boyle \n(1994, hereafter SB),\nwho combine data from three quasar redshift surveys.\nFor an $\\mbox{${\\Omega}_{0}$ }=1$, $\\mbox{${\\lambda}_{0}$ }=0$ cosmology, \nthey find that their correlation function measurements are consistent\nwith a quasar correlation length $r_0 = 6 {\\rm h}_0^{-1}\\;{\\rm Mpc}$\nthat remains constant in comoving coordinates, implying\n\\begin{equation}\ns_0 \\equiv \\frac{r_0}{g} = 0.002 (1+z)^{3\/2},\n\\lab{s0.om1}\n\\end{equation}\nwhere we use equation\\r{g.def} for $g(z)$.\n\nIn order to scale the SB results to other\ncosmological models, we note that the property of the correlation\nfunction most robustly constrained by observations is the total\nnumber of correlated pairs. From equation\\r{N.pairs3}, we see\nthat the number of correlated pairs is\n\\begin{equation}\nN_{pairs}(z) \\propto \\frac{r_0^3}{\\tilde g \\tilde f^2} \\propto\n\\frac{s_0^3 g^2}{f^2} \\propto \\frac{s_0^3}{h^2},\n\\lab{Npairs.scaling}\n\\end{equation}\nwhere we have dropped the factors that are independent of the\nspacetime geometry. For alternative cosmological models,\nwe therefore wish to hold the combination $s_0^3\/h^2$ equal\nto the value implied for the $\\mbox{${\\Omega}_{0}$ }=1$, $\\mbox{${\\lambda}_{0}$ }=0$, \n$r_0 = 6{\\rm h}_0^{-1}\\;{\\rm Mpc}$ model that fits the\nSB data. We therefore adopt\n\\begin{equation}\ns_{0}(z,\\mbox{${\\Omega}_{0}$ },\\mbox{${\\lambda}_{0}$})=0.002{(1+z)}^{\\frac{3}{2}}\n{\\left[\\frac{h(z,\\mbox{${\\Omega}_{0}$ },\\mbox{${\\lambda}_{0}$})}{h(z,\\Omega=1,\\mbox{${\\lambda}_{0}$}=0)}\\right]}^{\\frac{2}{3}}\n\\lab{s0.general}\n\\end{equation}\nas a fiducial redshift-space correlation length for our simulations.\n\n\\subsection{Maximum likelihood determination of parameters}\nGiven a simulated or observed set of correlation measurements, we are\ninterested in estimating the true correlation function parameters and\nthe parameters of the underlying cosmological model.\nSuppose that the data consist of pair counts $N_i$ in $i$ bins, where\n$i$ may in fact represent a multidimensional (e.g. $s, \\mu$) space.\nWe have a model ${\\cal M}$ for the correlation function that may depend\nupon several parameters, e.g. ${\\cal M} (s_0, \\mbox{$\\gamma$}, \\mbox{${\\Omega}_{0}$ }, \\mbox{${\\lambda}_{0}$ })$. 
\nThis model predicts a number of pairs in each bin\n$A_i=\\bar{n} V_i (1+\\xi (s_i, \\mu_i)) N_Q$, where $V_i$ is the bin volume.\nIn the Poisson limit $N_c \\la 1$, which is expected to hold for realistic\nquasar surveys (see \\S 3),\nthe probability of detecting $N_i$ pairs in bin $i$\nwhen $A_i$ are expected is\n\\begin{equation}\nP(N_i|A_i)=\\frac{e^{-A_i} \\cdot A_i^{N_i}}{N_i!}.\n\\end{equation}\nThe probabilities for separate bins are independent, so the overall\nlikelihood ${\\cal L}$ of obtaining the data given the model is \n\\begin{equation}\n{\\cal L} \\equiv P(D|{\\cal M}) = {\\prod}_i \\frac{e^{-A_i} \n\\cdot A_i^{N_i}}{N_i!},\n\\lab{likelihood}\n\\end{equation}\nimplying\n\\begin{equation}\n\\ln({\\cal L}) = {\\sum}_i (-A_i + N_i \\ln\\,A_i- \\ln \\, N_i!).\n\\end{equation}\nSince the data $N_i$ are independent of the model parameters, one can find\nthe maximum likelihood model by maximizing the quantity\n\\begin{equation}\n\\ln{\\cal L}^{'} ({\\cal M})=\\sum_i (N_i \\ln\\,A_i-A_i).\n\\end{equation}\nThe relative likelihood of two models ${\\cal M}_{1}$ and \n${\\cal M}_{2}$ is simply\n$\\exp(\\ln{\\cal L}^{'} ({\\cal M}_{1})-\\ln{\\cal L}^{'} ({\\cal M}_{2}))$.\n\nAlthough our focus in this paper is on constraining $\\mbox{${\\lambda}_{0}$ }$, the\nmaximum-likelihood technique outlined here is quite general.\nFor a specified cosmology, one can use this technique to estimate\ncorrelation function parameters in a way that makes maximum use\nof the available data (e.g., Stephens et al.\\ 1997), and the\nmethod can easily be extended to incorporate parametrized descriptions\nof peculiar velocity distortions, evolution of clustering, and\nso forth. The sparse sampling limit is crucial to this approach,\nfor it is only this property that makes it possible to write down\na straightforward expression for the likelihood, equation\\r{likelihood}.\nFor a dense sample like a typical galaxy redshift survey, the likelihood\nis a much more complicated function of the model parameters and\nthe data, and a maximum-likelihood approach is correspondingly more\ncumbersome. However, rich galaxy clusters do provide a sparse tracer \nof structure on large scales, and Croft et al. (1997) have used the \nsame maximum-likelihood method to estimate parameters of the \ncluster correlation function.\n\n\\subsection{Results}\nFigures 5--9 present our main results. 
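Each panel in these figures is generated by repeatedly (i) drawing a Monte Carlo realization of the binned pair counts as described in \\S 4.2 and (ii) maximizing the Poisson likelihood of \\S 4.3 over a grid of models. A minimal sketch of both steps is given below (in Python; the bin layout follows \\S 4.2, but the constant expectation values and the parameter grid are placeholders introduced purely for illustration, not the values used in this paper).
\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# Step (i), Section 4.2: one realization of the measured pair counts.
# 20 logarithmic bins in s on [0.1*s0, 2*s0], 5 equal bins in mu, and
# 12 redshift bins of width 0.15 (2dF-like F(z)).  A_true[i, j, k] is the
# expected count per bin; in practice it comes from integrating equation
# (dN.pairs) over the bin, and here it is a constant placeholder.
A_true = np.full((20, 5, 12), 3.0)
N_obs = rng.poisson(A_true)          # independent Poisson deviate per bin

# Step (ii), Section 4.3: maximum likelihood over a grid of models.
def log_like(N, A):
    # ln L'(M) = sum_i (N_i ln A_i - A_i); the ln N_i! term is dropped.
    return float(np.sum(N * np.log(A) - A))

def grid_fit(N, thetas, predict):
    # thetas: trial parameter sets; predict(theta) must return the
    # predicted pair counts A_i in every bin for that model.
    lnL = np.array([log_like(N, predict(t)) for t in thetas])
    best = int(np.argmax(lnL))
    keep = np.exp(lnL - lnL[best]) >= 0.10   # within 10% of the maximum
    return thetas[best], [t for t, k in zip(thetas, keep) if k]
\\end{verbatim}
The 10\\% relative-likelihood cut in the last line is the criterion used for the intervals described below.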
In Figure~5 we examine the\nability of the 2dF (left) and SDSS (right) quasar samples to \nconstrain $\\mbox{${\\Omega}_{0}$ }$ in flat, $\\mbox{${\\Omega}_{0}$ }+\\mbox{${\\lambda}_{0}$ }=1$ models.\nFor each survey and each value of $\\mbox{${\\Omega}_{0}$ }=0.1$, 0.2, 0.4, and 1,\nwe generated four realizations of measured $\\xi(s,\\mu)$ by\nthe Monte Carlo technique described in \\S 4.2.\nIn all cases we set $\\mbox{$\\gamma$}=2$ and adopt equation\\r{s0.general}\nfor $s_0(z)$, corresponding to $r_0 = 6{\\rm h_0}^{-1}\\;{\\rm Mpc}$\ncomoving at all $z$ for the $\\mbox{${\\Omega}_{0}$ }=1$, $\\mbox{${\\lambda}_{0}$ }=0$ model and\nto $s_0(z=0)=0.002$ in all models.\nFor each realization, we then determine the best-fit model\nparameters by the maximum-likelihood technique described\nin \\S 4.3, computing likelihoods for a grid of open models and\na grid of flat models with $0 \\leq \\mbox{${\\Omega}_{0}$ } \\leq 1$ and\nvarying values of $s_0$ and $\\mbox{$\\gamma$}$.\nSquares show the estimated values of $\\mbox{${\\Omega}_{0}$ }$, denoted\n${\\hat{\\Omega}}_0$, plotted against the true value $\\Omega_{true}$\nused to generate the realization. Small offsets are added along\nthe $\\Omega_{true}$ axis to improve clarity.\nVertical line segments delineate the range of $\\mbox{${\\Omega}_{0}$ }$ values\nthat give likelihood values with 10\\% of the maximum likelihood;\nthis corresponds roughly to a $2\\sigma$ confidence interval\nfor a Gaussian approximation to the likelihood distribution.\nIn a few cases --- mostly $\\mbox{${\\Omega}_{0}$ }=1$ realizations, which are a limiting\ncase of both flat and open models --- an open model gave a higher\nlikelihood than the best-fit flat model. For these cases\nwe plot a filled circle at the value of ${\\hat{\\Omega}}_0$ for the\nbest-fit open model.\n\n\nIn open cosmologies, the anisotropy of the correlation function is\nonly weakly sensitive to $\\mbox{${\\Omega}_{0}$ }$ (see Figure~2), so we do not show\nthe analog of Figure~5 for open models.\nEven with flat models the $\\mbox{${\\Omega}_{0}$ }$ constraints are rather loose, but\nit is encouraging to see that in the $\\mbox{${\\Omega}_{0}$ }<1$ models, the\n$\\mbox{${\\Omega}_{0}$ }=1$ model can be rejected on the basis of quasar clustering\nanisotropy alone in nearly every case.\n\nThere are a number of methods to constrain $\\mbox{${\\Omega}_{0}$ }$ from the\ndynamics of redshift-space galaxy clustering (e.g., Kaiser 1987;\nCarlberg et al.\\ 1996; Kepner, Summers, \\& Strauss 1997).\nThese methods suffer from a degeneracy between the density parameter\nand the ``bias'' of galaxies with respect to mass, but with the\nhigh-precision measurements from the 2dF and Sloan {\\it galaxy}\nredshift surveys it should be possible to break this degeneracy\nand measure $\\mbox{${\\Omega}_{0}$ }$ directly. We would then like to use quasar\nclustering anisotropy to distinguish between open ($\\mbox{${\\lambda}_{0}$ }=0$)\nand flat ($\\mbox{${\\lambda}_{0}$ }=1-\\mbox{${\\Omega}_{0}$ }$) cosmologies of known $\\mbox{${\\Omega}_{0}$ }$; dynamical\ntechniques are insensitive to this distinction because the\ncosmological constant represents an unclustered energy component.\n\n\nFigure~6 presents the prospects for such a test. 
For each realization\nof each of our low-$\\mbox{${\\Omega}_{0}$ }$ models, we compute the likelihood ratio\n$R \\equiv {\\cal L}_2\/{\\cal L}_1$, where ${\\cal L}_1$ is the likelihood\nof the true model (with the correct values of $\\mbox{${\\Omega}_{0}$ }$, $\\mbox{${\\lambda}_{0}$ }$,\n$s_0(z)$, and \\mbox{$\\gamma$} ) and ${\\cal L}_2$ is the likelihood of the \nmodel with the same $\\mbox{${\\Omega}_{0}$ }$, $s_0(z)$, and $\\mbox{$\\gamma$}$ but the incorrect\ngeometry. For $\\mbox{${\\Omega}_{0}$ }=0.1$ or 0.2, the incorrect geometry can be\nrejected at $> 10:1$ odds in all of the 2dF realizations and at $\\ga 10:1$\nodds in the SDSS realizations. For $\\mbox{${\\Omega}_{0}$ }=0.4$ the difference between\ngeometries is smaller, and the incorrect model can only be rejected\nat $\\sim 10:1$ odds or (occasionally) less. Since the 2dF and SDSS\nsamples will be independent (with telescopes in the southern and\nnorthern hemispheres, respectively), a pair of $\\sim 10:1$ rejections\nwould constitute a strong statistical result. However, there are\nuncertainties related to peculiar velocity distortions and evolution\nof the correlation function that are not incorporated in our current\nMonte Carlo experiments (see \\S 5 below), so it is not clear that\nthe geometry distinction will be possible with these samples for\n$\\mbox{${\\Omega}_{0}$ } \\geq 0.4$.\n\n\nAs equation\\r{N.pairs3} demonstrates, the precision with which\n$\\xi(s,\\mu)$ can be measured depends sensitively on the true quasar\ncorrelation length. While the SB estimate\nis based on the most extensive compilation of existing data,\nit is nonetheless quite uncertain, and some other studies have\nsuggested higher values. Stephens et al. (1997), for example,\nfind a maximum-likelihood estimate $r_0=20{\\rm h}_0^{-1}\\;{\\rm Mpc}$\n(comoving, for $\\mbox{${\\Omega}_{0}$ }=1$, $\\mbox{$\\Lambda$ }=0$) for the high-redshift quasars\nin the Palomar Transit Grism Survey. Figures~7 and~8 repeat\nthe $\\mbox{${\\Omega}_{0}$ }$-constraint and geometry-discrimination tests for 2dF \n(Figure 7) and SDSS (Figure 8) realizations with a correlation \nlength double the value implied by equation\\r{s0.general}. This\nhigher correlation length corresponds to \n$r_0 = 12{\\rm h_0}^{-1}\\;{\\rm Mpc}$\ncomoving for $\\mbox{${\\Omega}_{0}$ }=1$, $\\mbox{${\\lambda}_{0}$ }=0$, and to $s_0(z=0)=0.004$ for all models.\n\n\nWith the higher correlation length, incorrect geometry models with\n$\\mbox{${\\Omega}_{0}$ }=0.4$ are rejected at $\\ga 100:1$ odds in all of the 2dF realizations\nand all but one of the SDSS realizations. 
With $\\mbox{${\\Omega}_{0}$ } \\leq 0.2$,\nthe incorrect geometry models are rejected with formal odds\nexceeding $10^6:1$ in all realizations of both surveys.\nThe $\\mbox{${\\Omega}_{0}$ }$ determinations in flat models are much more precise\nthan those shown in Figure~5 for the smaller correlation length.\nIt is of course possible that the true quasar correlation length\nis {\\it lower} than the SB estimate instead\nof higher, in which case the prospects for the clustering anisotropy\nmethod would be worse than those implied by Figures~5 and~6.\nThe uncertainty in the amplitude of quasar clustering is presently\nthe main uncertainty in assessing the power of future quasar\nsurveys to constrain cosmological parameters by this approach.\n\n\n\nFigures~5--8 show that the 2dF survey has somewhat greater constraining\npower than the SDSS, even though it has 30,000 quasars instead of 80,000.\nThe high surface density of the 2dF sample is crucial to its performance;\nthe factor $N_Q^2\/A$ in equation\\r{N.pairs3} for the number of \ncorrelated pairs is nearly the same for the 2dF and the SDSS.\nThe reason the 2dF survey does slightly better is its more\ncompact redshift distribution $F(z)$. The SDSS has roughly\nequal numbers of quasars per unit redshift from $z=0.4$ to $z=2.2$,\nand a significant fraction of the sample lies at $z>2.2$. For the\n2dF survey, a majority of the quasars lie in the range $1 < z < 2.2$.\nThe number of correlated pairs per unit redshift is proportional\nto $[F(z)]^2$, and the more precise measurement of $\\xi(s,\\mu)$ at\n$z \\sim 2$ in the 2dF sample wins out over the greater range of\nredshifts probed by the SDSS.\n\nAs this comparison suggests, the ability of a quasar survey to\nmeasure clustering anisotropy is sensitive to its details,\nand there are strategies that can substantially enhance this\nability for a fixed number of quasars. One possible approach would\nbe to use multi-color selection techniques to target quasars in\nparticular redshift ranges, producing peaks in $F(z)$ at specific\nredshifts. Another approach is simply to observe a smaller area\nto a fainter magnitude limit, thus increasing $N_Q^2\/A$. Either\nstrategy improves the survey's sensitivity to geometrical distortion\n(at the cost of longer spectroscopic exposures), provided that the\nclustering measurements remain in the sparsely sampled regime,\n$N_c \\la 1$. Once $N_c$ exceeds one, new quasar pairs contribute\npartially redundant information, and there is more to be gained by\nexpanding the survey's area or redshift range. (This statement\napplies only to the two-point correlation function and its\nrelatives; clustering measures that are sensitive to higher-order\ncorrelations would continue to benefit from denser sampling.)\n\nAs an illustration, we repeat the tests of Figures 5--8 for a hypothetical\nhigh density quasar survey (HDS), now returning to the smaller quasar\ncorrelation length implied by equation\\r{s0.general}.\nWe assume a sample of 30,000 quasars in 200 square degrees --- the\nsame $N_Q$ as 2dF, but a surface density 3.25 times higher, and $F(z)$\nidentical to the one of 2dF.\nThe Sloan southern stripe quasar survey might provide such a sample\nif quasars can be identified and their redshifts measured to a \nsufficiently faint limiting magnitude. 
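As a rough illustration of how the survey parameters enter, the survey-dependent factors of equation\\r{N.pairs3} can be evaluated directly for these three configurations. In the sketch below (Python) the clustering and cosmology factors are held at their fiducial values, an assumption made only to isolate the dependence on $N_Q$, $N_Q/A$, and $F(z)$, and the surface densities follow from the quoted quasar numbers and areas.
\\begin{verbatim}
# Relative pair-count signal from equation (N.pairs3), keeping only the
# survey-dependent factors (N_Q/1e5)*((N_Q/A)/10)*(F/0.5)^2 and setting
# the clustering and cosmology factors to unity (an assumption).
surveys = {
    "2dF":  dict(N_Q=3.0e4, area=750.0),
    "SDSS": dict(N_Q=8.0e4, area=5000.0),
    "HDS":  dict(N_Q=3.0e4, area=200.0),
}
F = 0.5   # fiducial value of F(z) near the peak of the redshift distribution
for name, p in surveys.items():
    density = p["N_Q"] / p["area"]            # quasars per square degree
    signal = (p["N_Q"] / 1.0e5) * (density / 10.0) * (F / 0.5) ** 2
    print(name, round(signal, 2))
# 2dF and SDSS come out nearly equal, while the HDS is several times larger.
\\end{verbatim}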
Figure 9 shows that even with the small \ncorrelation length, the high-density survey allows strong rejection\nof models with incorrect geometry in all realizations with\n$\\mbox{${\\Omega}_{0}$ }=0.1$ or 0.2 and in most realizations with $\\mbox{${\\Omega}_{0}$ }=0.4$.\n\n\n\n\nAlthough we have focused mainly on the sensitivity of planned surveys \nto \\mbox{${\\Omega}_{0}$ } and \\mbox{${\\lambda}_{0}$}, we should note that in our experiments the\n2dF and SDSS samples constrain $\\gamma$ and $s_{0}$ to at least \n10\\% accuracy, with typical accuracy of about 5\\%. \nIn the cases with a high quasar correlation length or a high-density\nquasar survey, the maximum-likelihood estimates achieve 2--3\\% accuracy \nin recovering $\\gamma$ and 1--3\\% accuracy in recovering $s_{0}$. \nWe can thus expect these future surveys to provide precise measurements\nof the basic parameters characterizing the quasar correlation function.\n\n\\section{Discussion}\nOur results from \\S 4.4 show that the ambitious quasar redshift surveys\nplanned by the 2dF and SDSS teams can make important contributions to\nthe study of the geometry of spacetime.\nIn the case of flat cosmological models, some constraints on \\mbox{${\\Omega}_{0}$ } can be \nobtained even for the SB correlation length, and these\nconstraints become interestingly tight for a high-density quasar survey\nor for the 2dF and SDSS if the correlation length is a factor of two\nlarger than SB's estimate.\nClustering anisotropy does not provide useful constraints on \\mbox{${\\Omega}_{0}$ } in\nopen models because the distortion parameter $h(z)$ is insensitive\nto \\mbox{${\\Omega}_{0}$ }, and there is a near degeneracy in $h(z)$ between an open model with\nlow \\mbox{${\\Omega}_{0}$ } and a flat model with a higher \\mbox{${\\Omega}_{0}$ }.\nThe true power of the clustering anisotropy technique comes into play\nif \\mbox{${\\Omega}_{0}$ } itself is determined independently by dynamical methods.\nClustering anisotropy then provides a tool for discriminating between\nopen models with $\\mbox{${\\lambda}_{0}$ }=0$ and flat models with $\\mbox{${\\lambda}_{0}$ }=1-\\mbox{${\\Omega}_{0}$ }$,\na distinction that is difficult to achieve with dynamical methods alone.\nWe find that clear discrimination between flat and open geometries \nis possible for the 2dF and SDSS samples with the SB correlation length\nif $\\mbox{${\\Omega}_{0}$ } \\leq 0.2$, but the discrimination is only marginal for $\\mbox{${\\Omega}_{0}$ }=0.4$.\nThe high-density survey with the SB correlation length or the 2dF or\nSDSS for double the SB correlation length provide clear discrimination\nfor $\\mbox{${\\Omega}_{0}$ }=0.4$.\n\nThe results in Figures~5--9 look fairly promising, \nbut there are some limitations in our numerical experiments. 
\nFor example, we assumed no evolution in $\\mbox{$\\gamma$}$ and a fixed form\nof $s_0(z)$ (corresponding to no comoving evolution for $\\mbox{${\\Omega}_{0}$ }=1$, $\\mbox{$\\Lambda$ }=0$),\nso that we could produce one global fit to the clustering and cosmological\nparameters using all of the data.\nIn the real universe, one will have to allow for the possibility of\ndifferent evolution of the correlation function parameters.\nThe maximum-likelihood method described in \\S 4.3 adapts easily\nto this if the evolution can be described in a parametrized form,\nbut because there can be some tradeoff between different effects,\nadding new parameters will weaken the constraints on \\mbox{${\\Omega}_{0}$ } and \\mbox{${\\lambda}_{0}$}.\nAlso, as we have already noted, our assumed values of $s_{0}$ and \\mbox{$\\gamma$}, \nwhile plausible, are highly uncertain. \nIf $s_{0}$ and $\\mbox{$\\gamma$}$ turn out to be larger than we have assumed, then\nthe statistical power of the clustering anisotropy test will increase; \nif smaller it will decrease.\n\nProbably the most important element missing in our analysis is the\neffect of peculiar velocities, since small scale velocity dispersions\nand large scale coherent flows can both induce an angular dependence \nin $\\xi (s,\\mu)$. To a first approximation, the small-scale dispersion\nhas the effect of convolving the correlation function along the line of \nsight with the pairwise velocity distribution function \n(Davis \\& Peebles 1983; see also Fisher 1995). \nThis causes a large distortion of the correlation function for \ngalaxies at $z=0$. However, if $s_0=0.01$ at $z=2$, then the velocity scale \ncorresponding to the correlation length is $0.01c = 3000 \\,{\\rm kms}^{-1}$. \nThe dispersion velocity of quasars is unknown, but a plausible value is \n$\\sim 300 \\,{\\rm kms}^{-1}$ --- the velocity dispersion of typical galaxy \ngroup. The width of the distribution function\nis thus very small compared to the correlation length, so for pairs separated\nby $\\sim s_0$, the distortion due to small scale dispersion should be small.\n\nCoherent large scale motions pose a potentially more serious problem.\nAs a rough gauge of the importance of those velocities in confusing\ngeometrical distortion, we can estimate the ratio of $\\xi(s,1)\/\\xi(s,0)$\nresulting from geometry and from linear theory peculiar velocities.\nFor the surveys that we have considered, most of the geometrical information\ncomes from the redshift $z \\sim 2$, so we will take this value as the\nfiducial one. 
For typical cosmological models, $h(z) \\sim 2$ at $z=2$,\nwhich means that $\\xi(s,1)\/\\xi(s,0)\\approx 3.5$ for geometrical distortions \nalone (see equation~[\\ref{xi.s}]).\nIn conventional notation, \n\\begin{equation}\n\\beta \\approx \\frac{{\\Omega}^{0.6}}{b} \\lab{beta.def}\n\\end{equation}\nis the linear theory ratio of peculiar velocity convergence \n$-\\nabla\\cdot {\\bf v}_{pec}\/H$ to the galaxy density contrast,\nwhere $b$ is the bias parameter relating galaxy and mass fluctuations.\nIf $\\beta \\ll 1$,\nthen equations (10) and (11) of Hamilton (1992) allow us to write\n\\begin{equation}\n\\frac{\\xi(s,1)}{\\xi(s,0)}=1-\\frac{2\\beta\\mbox{$\\gamma$}}{(3-\\mbox{$\\gamma$})+2\\beta} \\lab{xi.ratio}\n\\end{equation}\nWe justify equation\\r{xi.ratio} in appendix A and argue that the typical\nvalue of $\\beta$ (assuming the SB quasar correlation length)\nis $\\sim 1\/6$ in open, flat, and Einstein-de Sitter cosmological models.\nWith $\\mbox{$\\gamma$}=1.8$, the ratio\\r{xi.ratio} then becomes\n\\begin{equation}\n\\frac{\\xi(s,1)}{\\xi(s,0)}\\approx 0.6\n\\end{equation}\nThus, the effect of coherent peculiar velocities by no means \noverwhelms the geometric signal, but it is not negligible. \nAnalyses of real data will have to fit jointly for peculiar velocity\ndistortions and geometrical distortions, using their different\nangular and redshift dependences to separate them.\nThe results of Ballinger et al.\\ (1996), Matsubara \\& Suto (1996),\nand Nakamura et al.\\ (1997) provide important steps towards this goal,\nthough it is not clear whether the linear theory formula for velocity\ndistortions will be adequate near $\\xi\\sim 1$. \nA suitably parametrized description of peculiar velocity distortions can\neasily be incorporated in the maximum-likelihood scheme.\nThe geometrical distortion is weakest in the models with low values of \n$h$ (low \\mbox{${\\Omega}_{0}$ }, high \\mbox{${\\lambda}_{0}$ } models).\nHowever, these are also the models for which open and flat geometries\nare most easily distinguished, so there is good reason to believe that\nsuch distinctions will remain possible even when the effects of peculiar\nvelocities are taken into account.\n\nThere are other routes to measuring or limiting the cosmological constant.\nProbably the most solid existing limits come from the statistics of\ngravitational lensing (see, e.g., Carroll, Press, \\& Turner 1992;\nMaoz \\& Rix 1993; Kochanek 1996). These limits are somewhat dependent\non assumptions about the evolution of the galaxy population\n(Rix et al.\\ 1994), but current results present a strong case that \n$\\mbox{${\\lambda}_{0}$ } < 0.8$. A promising approach on the horizon is the application of \nthe classical magnitude-redshift test to calibrated candles in the form of\nhigh-redshift, Type Ia supernovae (Perlmutter et al. 
1997).\nThis technique is technically challenging, since one must guard carefully\nagainst differential selection biases between low- and and high-redshift\nsamples, but if these obstacles can be overcome it is one of the\nmost powerful methods for constraining $\\mbox{${\\lambda}_{0}$ }$.\nThis approach relies on the\nplausible but not incontrovertible assumption that the luminosities\nof Type Ia supernovae do not change systematically over the age of\nthe universe (see von Hippel, Bothun, \\& Schommer 1997).\nHigh-resolution maps of the cosmic microwave background, such as those\nexpected from the MAP and Planck surveyor satellites, offer another\npromising route to constraining \\mbox{${\\lambda}_{0}$ }, along with a host of other \ncosmological parameters (see, e.g., Bennett et al. 1995 and \nJungman et al. 1995), at least in the context of cosmic inflation models.\nIt is not yet clear which of these approaches will ultimately yield\nthe best constraints on the cosmological constant, but the \nexistence of four essentially independent methods, all of them able\nto produce interesting results in the next 5-10 years, is\nvery encouraging. Either they will all yield consistent answers,\nin which case they provide not just a convincing constraint but\na consistency test of the underlying cosmological picture that motivates\nthem, or they will not, in which case they point to a breakdown in\nthe implicit assumptions behind one or more of the methods.\n\nAs an extension of this last point, we note that all of our \n2dF and SDSS realizations in Figure~5 are inconsistent at the $>2\\sigma$\nlevel with the Euclidean, $\\mbox{${\\lambda}_{0}$ }=1$, $\\mbox{${\\Omega}_{0}$ }=0$ model even for a flat\ngeometry with $\\mbox{${\\Omega}_{0}$ }$ as low as 0.1 --- and the inconsistency is much\nstronger for higher $\\mbox{${\\Omega}_{0}$ }$, a higher quasar correlation length\n(Figures~7 and~8), or a higher density quasar survey (Figure 9).\nThese results imply that the geometrical distortion described in \\S 2\nshould be detected by these surveys, even if the discrimination between\nflat and open space geometries is weak. This distortion is a fundamental\nproperty of the cosmological spacetime curvature predicted by General\nRelativity. If the universe is as we believe it to be, this signature\nof curved spacetime should be detected within the next decade, and perhaps\nwithin the next few years.\n\n\\acknowledgments\n\nWe thank Jim Gunn for seminal discussions at the outset of this project.\nWe thank Heidi Newberg, Brian Yanny, Don Schneider, and other members\nof the SDSS Quasar Working Group for helpful discussions of plans for\nthe SDSS quasar survey. We thank Tom Shanks for information about the\n2dF quasar survey and for helpful discussions on recent estimates\nof quasar clustering.\nWe acknowledge support from NASA Theory Grant NAG5-3111.\n\n\\clearpage\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\tViolent stellar events, such as supernovae and neutron \nstar collisions, release tremendous amounts of energy, primarily \nin the form of neutrinos\\cite{1}. In regions of low density, the \nannihilation process $\\nu\\bar\\nu\\rightarrow e^+e^-$\nhas been proposed as a source of energy to restart the shock wave in \ncore collapse supernovae\\cite{2}, and as a means of generating an energetic \n$e^+e^-$ plasma for gamma ray bursts\\cite{3}. 
Since detailed \ncalculations of supernova and gamma ray burst scenarios indicate that the\nannihilation rate is too low\\cite{4}, it has been suggested that the \nstar's magnetic field will act as a catalyst for the reaction by eliminating \nsome kinematical constraints on the $e^+e^-$ pair\\cite{3}. The case for\nsuch an enhancement is motivated by studies of pair creation\n ($\\gamma\\rightarrow e^+e^-$ and $\\gamma\\rightarrow \\gamma\\gamma$)\\cite{5} \n and the neutrino synchotron process \n($\\nu\\rightarrow\\nu e^+e^-$\\cite{6} \\cite{7}), where processes \nthat are kinematically disallowed in the vacuum are made possible by \nthe exchange \nof momentum between the $e^+e^-$ pair and the magnetic field. The central role \nof the magnetic field in these processes is reflected in the sensitivity of the \ncross sections to the field strength. Significantly, this enhancement does not\nseem to occur in processes that are kinematically allowed in the absence of the \nfield\\cite{8}. \n \n\tAs an added bonus, the magnetic field provides a preferred direction \nin space, opening the way for parity violating effects to produce an \nasymmetry in neutrino cross sections. The asymmetries, which have been calculated\nfor $\\nu-e$ scattering\\cite{7}\\cite{9} and $\\nu$-nucleus elastic \nscattering\\cite{10}, provide a mechanism \nfor producing asymmetric supernova explosions, either directly or by\nasymmetrically heating the surrounding matter. Such asymmetric explosions have \nbeen proposed as a means of producing the observed large \nvelocities of pulsars\\cite{6}. \n\t\n\tIn this paper, we calculate the $\\nu\\bar\\nu$ annihilation \nfor a variety of magnetic field strengths, densities, and temperatures\nsimilar to those found in supernovae. Our formalism\\cite{8}\\cite{9}\\cite{10}, is \ndetailed in the next section, while our results\nand their implications for astrophysics are left to the concluding section.\n\n\\section{Formalism} \n\n\tWe consider the process for $\\nu\\bar\\nu$ annihilation \nin the presence of a magnetic field of strength ${\\bf B}$ in the z direction, \nusing the four point interaction\n\\begin{equation}\nL_{int} ={G_F\\over\\sqrt{2}}\\, \\int\\, d^4x\\, \\bar\\nu(x)\\gamma_\\alpha(1-\\gamma_5)\\nu(x)\\,\\bar e(x)\\gamma^\\alpha(C_V-C_A\\gamma_5)e(x),\n\\end{equation}\nwhere $G_F=1.13\\times 10^{-11}$ MeV$^{-2}$ is the Fermi coupling strength,\n$C_V=2\\sin^2\\theta_W\\pm \\frac{1}{2}$, $C_A=\\pm\\frac{1}{2}$ with the plus sign\nfor electrons and the minus for $\\mu$ or $\\tau$ neutrinos, and \n$\\sin^2\\theta_W\\approx .223$.\n\n\tThe wavefunctions of the electron-positron pair are obtained by \nsolving the Dirac equation in Landau gauge ($A_x=-By$)\\cite{12} \n\n\\begin{eqnarray} \n\\Psi^{e^-}_{(p_x,p_z,n,\\sigma)}({\\bf x},t)& =& {e^{ip_x x}e^{ip_z z}\ne^{-i\\epsilon t}\\over \\sqrt{L_xL_z}} \\pmatrix{ \\alpha CH_{n-1}(\\xi)\\cr\n-\\sigma \\alpha DH_{n}(\\xi)\\cr \\sigma\\beta CH_{n-1}(\\xi)\\cr -\\beta DH_{n}(\\xi)\n\\cr},\\nonumber\\\\\n\\Psi^{e^+}_{(p^\\prime_x,p^\\prime_z,m,\\sigma^\\prime)}({\\bf x},t)& =& \n{e^{-ip^\\prime_x x}e^{-ip^\\prime_z z}\ne^{-i\\epsilon^\\prime t}\\over \\sqrt{L_xL_z}} \\pmatrix{ \\beta^\\prime D^\\prime \nH_{m-1}(\\xi^\\prime)\\cr\n-\\sigma^\\prime \\beta^\\prime C^\\prime H_{m}(\\xi^\\prime)\\cr -\\sigma^\\prime\n\\alpha^\\prime \nD^\\prime H_{m-1}(\\xi^\\prime)\\cr \n\\alpha^\\prime C^\\prime H_{m}(\\xi^\\prime)\n\\cr},\n\\end{eqnarray}\n\n\\noindent where 
$\\pmatrix{\\alpha\\cr\\beta}=\\frac{1}{\\sqrt{2}}\n(1\\pm\\frac{m_e}{\\epsilon})^{\\frac{1}{2}}$, \n$\\pmatrix{C\\cr D}=\\frac{1}{\\sqrt{2}}(1\n\\pm{\\sigma p_z\\over (\\epsilon^2-m^2)^{\\frac{1}{2}}} )^{\\frac{1}{2}}$,\n$\\,\\sigma=\\pm 1$ is the electron's helicity,\n$H_n(\\xi)$ is the normalized solution of a one dimensional harmonic \noscillator with $\\xi=\\sqrt{eB}(y-{p_x\\over eB})$, and the energies of the \nLandau levels are given by $\\epsilon = \\sqrt{p_z^2+2eBn+m_e^2}$.\nFor the positron spinor, the primed quantities are obtained from the\nunprimed by replacing $\\epsilon\\rightarrow\\epsilon^\\prime$, $p_i\\rightarrow\np^\\prime_i$, and $\\xi^\\prime=\\sqrt{eB}(y+{p^\\prime_x\\over eB})$.\n\n\tThe differential cross section for the annihilation process is given \nby\t\n\\begin{equation}\nd\\sigma = {G_F^2 L_xL_z\\over 4\\pi k_1\\cdot k_2 V} C_{\\mu\\nu}A^{\\mu\\nu}\n\\delta(Q_0-\\epsilon-\\epsilon^\\prime)\\delta(Q_z-p_z-p^\\prime_z)\n\\delta(Q_x-p_x-p^\\prime_x) dp_x dp_z dp^\\prime_x dp^\\prime_z,\n\\end{equation}\nwhere $Q^\\mu=k_1^\\mu+k_2^\\mu$, with $k_1(k_2)$ the (anti-)neutrino momentum,\n$V=L_xL_yL_z$ is a normalization volume,\t\n\\begin{equation} \nC^{\\mu\\nu}= k_1^\\mu k_2^\\nu +k_1^\\nu k_2^\\mu - g^{\\mu\\nu}k_1\\cdot k_2\n+i\\epsilon^{\\mu\\nu\\alpha\\beta}k_{1\\alpha} k_{2\\beta},\n\\end{equation}\nand $A^{\\mu\\nu}=\\sum_{\\sigma\\sigma^\\prime} S^\\mu S^{*\\nu}$ is the \nspin-summed product of the current matrix elements of the electron.\nNeglecting the electron mass, the matrix elements are given by\n\\begin{eqnarray}\nS^0 &=& {(1-\\sigma\\sigma^\\prime)\\over 2}(C_V-C_A\\sigma)\n(-DC^\\prime F_{nm}+CD^\\prime F_{n-1,m-1})\\nonumber\\\\\nS^1 &=& {(1-\\sigma\\sigma^\\prime)\\over 2}(C_V-C_A\\sigma)\n(-DD^\\prime e^{i\\phi}F_{n,m-1}-CC^\\prime e^{-i\\phi}\nF_{n-1,m})\\nonumber\\\\\nS^2 &=& -i{(1-\\sigma\\sigma^\\prime)\\over 2}(C_V-C_A\\sigma)\n(-DD^\\prime e^{i\\phi} F_{n,m-1}+CC^\\prime e^{-i\\phi}\nF_{n-1,m})\\nonumber\\\\\nS^3 &=& {(1-\\sigma\\sigma^\\prime)\\over 2}(C_V-C_A\\sigma)\\sigma\n(DC^\\prime F_{nm}+CD^\\prime F_{n-1,m-1}),\n\\end{eqnarray}\nwhere\n\\begin{equation}\n F_{nm} = \\left(\\frac {n_!}\\right)^\\frac{1}{2} \\left(\\frac{(n-m)}{\\vert n-m\\vert}\\sqrt{\\frac{\nQ_\\perp^2}{2eB}}\\right)^{n_<-n_>} e^{-Q_\\perp^2\\over 4eB} e^{i(m-n)\\phi} \nL_{n<}^{n_>-n_<}\\left(\\frac{Q_\\perp^2}{2eB}\\right),\n\\end{equation}\nwith $L_n^\\alpha$ a generalized Laguerre polynomial\\cite{A+S} and \n$\\tan{\\phi}=Q_y\/Q_x$. Here $n_>$ is the greater of $n$ and $m$.\n\n\tTo simplify the remainder of the calculation, we choose coordinates \nsuch that $Q_y =0$. 
In this system, the non-zero components $A_{\\mu\\nu}$ are \ngiven by\n\\begin{eqnarray}\nA_{00}&=&\\frac{1}{2}\\bigg[(C_V^2+C_A^2)\\bigg(\n(1+v_e v_p)(\\vert F_{nm}\\vert^2+\n\\vert F_{n-1,m-1}\\vert^2) + 2 v_n v_m \n F_{nm}F_{n-1,m-1}\\vert_{\\phi=0}\\bigg) \\nonumber\\\\\n& & -2C_VC_A(v_e+v_p)\\bigg(\n\\vert F_{n-1,m-1}\\vert^2-\\vert F_{nm}\\vert^2\n\\bigg)\\bigg] \\nonumber\\\\\n& &\\nonumber\\\\\nA_{11}&=&\\frac{1}{2}\\bigg[(C_V^2+C_A^2)\n\\bigg((1-v_e v_p)(\\vert F_{n,m-1}\\vert^2+\n\\vert F_{n-1,m}\\vert^2)-2 v_nv_m F_{n,m-1}\nF_{n-1,m}\\vert_{\\phi=0}\\bigg)\\nonumber \\\\\n& & \n+2C_VC_A(v_e-v_p)\\bigg( \\vert F_{n-1,m}\n\\vert^2-\\vert F_{n,m-1}\\vert^2 \\bigg)\\bigg]\\nonumber\\\\\nA_{22}&=&\\frac{1}{2}\\bigg[(C_V^2+C_A^2)\\bigg((1-v_e v_p)\n(\\vert F_{n,m-1}\\vert^2+\n\\vert F_{n-1,m}\\vert^2)+2 v_nv_m F_{n,m-1}F_{n-1,m}\n\\vert_{\\phi=0}\\bigg) \\nonumber \\\\\n& &+2C_VC_A(v_e-v_p)\\bigg(\n\\vert F_{n-1,m}\\vert^2-\\vert F_{n,m-1}\\vert^2\n\\bigg)\\bigg]\\nonumber\\\\\nA_{33}& = & \\frac{1}{2} \\bigg[ \n(C_V^2+C_A^2) \\bigg( (1+v_e v_p)(\\vert F_{n,m}\\vert^2+\n\\vert F_{n-1,m-1}\\vert^2) - 2 v_n v_m F_{n,m} \nF_{n-1,m-1} \\vert_{\\phi=0}\\bigg) \\nonumber\\\\\n& & -2C_VC_A(v_e+v_p)( \\vert F_{n-1,m-1}\\vert^2-\n\\vert F_{n,m} \\vert^2 \n)\\bigg]\\nonumber\\\\\nA_{01}&=& A_{10}=\\nonumber\\\\\n& &\\,\\frac{1}{2} \\bigg[-(C_V^2+C_A^2)\n\\bigg( v_n ( F_{n,m-1}F_{n-1,m-1}+\n F_{n-1,m}F_{n,m}) +v_m ( F_{n,m-1}F_{n,m}\n+ F_{n-1,m}F_{n-1,m-1})\\bigg)\n\\nonumber\\\\\n& & +2C_VC_A \\bigg( v_nv_p (F_{n,m-1}F_{n-1,m-1}\n- F_{n-1,m}F_{n,m})-v_mv_e (F_{n,m-1}F_{n,m}\n- F_{n-1,m}F_{n-1,m-1})\\bigg) \\bigg]\\vert_{\\phi=0}\n\\nonumber\\\\\nA_{03}&=&A_{30}=\\nonumber\\\\\n& &\\,\\frac{1}{2}\\bigg[-(C_V^2+C_A^2)\\bigg(\n(v_e+v_p)(\\vert F_{n,m}\\vert^2+\\vert F_{n-1,m-1}\\vert^2)+ 2v_nv_m F_{n,m}\nF_{n-1,m-1}\\vert_{\\phi=0}\\bigg)\\nonumber\\\\\n& &+2C_VC_A(1+v_ev_p)\n\\bigg(\\vert F_{n-1,m-1}\\vert^2-\\vert F_{n,m}\\vert^2\\bigg)\\bigg]\n\\nonumber\\\\\nA_{13}&=&A_{31}=\\nonumber\\\\\n& &\\,\\frac{1}{2} \\bigg[ ( C_V^2+C_A^2 )\n\\bigg(v_nv_p (F_{n,m-1} F_{n-1,m-1}\n+ F_{n-1,m}F_{n,m}) +v_mv_e( F_{n,m-1}F_{n,m}\n+ F_{n-1,m}F_{n-1,m-1})\\bigg)\\nonumber\\\\\n& & -2C_VC_A \\bigg( v_n (\n F_{n,m-1}F_{n-1,m-1}-\n F_{n-1,m}F_{n,m})+v_m(F_{n,m-1}F_{n,m}- F_{n-1,m}F_{n-1,m-1})\\bigg)\\bigg]\n\\vert_{\\phi=0} \n\\nonumber\\\\\nA_{02}& = &-A_{20}=\\nonumber\\\\\n& &\\,\\frac{i}{2}\n\\bigg[ -(C_V^2+C_A^2)\n\\bigg( v_n(F_{n,m-1}F_{n-1,m-1}- F_{n-1,m}F_{n,m}) +v_m(F_{n,m-1}F_{n,m}\n- F_{n-1,m}F_{n-1,m-1})\\bigg) \n\\nonumber\\\\\n& & +2C_VC_A\\bigg(v_nv_p( F_{n,m-1}\nF_{n-1,m-1} + F_{n-1,m}F_{n,m})+v_mv_e( F_{n,m-1}F_{n,m}\n+ F_{n-1,m}F_{n-1,m-1})\\bigg)\\bigg]\\vert_{\\phi=0}\n\\nonumber\\\\\nA_{12}&=&-A_{21}=\\nonumber\\\\\n& &\\, \\frac{i}{2}\\bigg[(C_V^2+C_A^2)(1-v_ev_p)\n\\bigg( \\vert F_{n-1,m}\\vert^2-\\vert F_{n,m-1}\\vert^2\\bigg)\\nonumber\\\\\n& & +2C_VC_A\\bigg((v_e-v_p)(\\vert F_{n,m-1}\n\\vert^2+ \\vert F_{n-1,m} \\vert^2)-2 v_nv_m F_{n,m-1}\nF_{n-1,m}\\vert_{\\phi=0}\\bigg)\\bigg] \n\\nonumber \\\\\nA_{23}&=&-A_{32}=\\nonumber\\\\\n& &\\, \\frac{1}{2} \\bigg[( C_V^2+C_A^2)\n\\bigg(v_nv_p(F_{n,m-1}F_{n-1,m-1}- F_{n-1,m}F_{n,m})\n -v_mv_e (F_{n,m-1}F_{n,m} -F_{n-1,m}F_{n-1,m-1})\\bigg) \\nonumber\\\\\n& &\n +2C_VC_A \\bigg( v_n ( F_{n,m-1}\nF_{n-1,m-1}- F_{n-1,m}F_{n,m})-v_m ( F_{n,m-1}F_{n,m}\n- F_{n-1,m}F_{n-1,m-1})\\bigg)\\bigg]\\vert_{\\phi=0},\n\\end{eqnarray}\n\\noindent where\n$v_{n(m)}=\\sqrt{2eBn(m)}$,$v_{e}=\\frac{p_z}{\\epsilon}$,and 
$v_p=\n\\frac{p^\\prime_z}{\\epsilon^\\prime}$. The contributions may be classified as\neither parity conserving ($\\propto (C_V^2+C_A^2)$, symmetric) or \nparity violating, and by whether the parity violation occurs \nin the neutrino ($\\propto (C_V^2+C_A^2)$, anti-symmetric) or electron currents\n($\\propto C_VC_A$, symmetric). This last distinction is important \nboth because the total effect of parity violating neutrino currents will\ntend to vanish when integrated against the initial spectrum of neutrino and \nanti-neutrinos, and because the asymmetric propagation of neutrinos vs \nanti-neutrinos will lead to asymmetries in the star's lepton number,\n$Y_e$. \n\n\tAll that remains is to solve the kinematic constraints on the\nelectron momenta and to include medium effects. The former is\na straightforward algebraic exercise, yielding\n\\begin{equation}\np_z={(Q_0^2-Q_z^2+2eB(n-m))Q_z+\\lambda Q_0\\sqrt{(Q_0^2-Q_z^2)^2-\n4eB(Q_0^2-Q_z^2)(n+m)+4e^2B^2(n-m)^2}\\over 2(Q_0^2-Q_z^2)},\n\\end{equation}\n with $p^\\prime_z=Q_z-p_z$ and $\\lambda=\\pm1$. Implicit in this \nprescription are\n maximum values for $m$ and $n$, $N_{max}=int((Q_0^2-Q_z^2)\/2eB)$,\nand $M_{max}= int((\\sqrt{N_{max}}-\\sqrt{n})^2)$. Additionally, the\nintegration over the energy conserving delta function will generate \na Jacobian factor $\\vert v_e-v_p\\vert^{-1}$. After inserting Pauli \nblocking factors for the electron and positron, the cross section becomes\n\\begin{equation}\n\\label{finito}\n\\sigma= \\sum_{\\lambda=\\pm 1}\\sum_{n=0}^{N_{max}}\\sum_{m=0}^{M_{max}}\n{G_F^2 eB\\over 2\\pi s} C_{\\mu\\nu}A^{\\mu\\nu} \n{(1-f_e)(1-f_p)\\over\\vert v_e-v_p\\vert},\n\\end{equation}\nwhere $s=(k_1+k_2)^2$ and \n\\begin{eqnarray}\nf_{e}&=&{1\\over 1+e^{{\\epsilon-\\mu_0}\\over T}},\\nonumber\\\\\nf_{p}&=&{1\\over 1+e^{{\\epsilon^\\prime+\\mu_0}\\over T}},\n\\end{eqnarray}\nwith $\\mu_0$ the electron chemical potential and $T$ the temperature\nof the medium.\n\n\\section{Results}\n\n\tIn order to understand the behaviour of the cross section\nderived in the last section, we have performed calculations for two \ndifferent kinematical situations using a variety of field strengths,\ntemperatures and densities typical of those found in astrophysical \nevents. In the first of these, which we shall refer to as the collinear\ncase, the neutrino and anti-neutrino are traveling nearly parallel to\none another, as is appropriate for neutrinos escaping a supernova. We \nassume their four momenta are given by\n\\begin{eqnarray}\nk_1 &=& (E,P\\sin\\theta,\\sqrt{s}\/2,P\\cos\\theta) \\nonumber\\\\\nk_2 &=& (E,P\\sin\\theta,-\\sqrt{s}\/2,P\\cos\\theta) \n\\end{eqnarray} \nwith $E=10$ MeV and $P=9$ MeV. The second kinematical situation \nwe shall consider is the case where the neutrino and anti-neutrino\ncollide head-on with zero center of mass momentum. In this instance,\nthe neutrino and anti-neutrino momenta are given by\n\\begin{eqnarray}\nk_1 &=& (P,P\\sin\\theta,0,P\\cos\\theta)\\nonumber\\\\\nk_2 &=& (P,-P\\sin\\theta,0,-P\\cos\\theta).\n\\end{eqnarray} \nThis situation, which describes the annihilation of backscattered\n(anti-)neutrinos by outgoing (anti-)neutrinos, represents the largest \ncross section for a given incident energy. For ease of comparison\nwith the symmetric case, we choose $P=8.72$ MeV so that the center\nof mass energy, and consequently the zero field cross section,\nremains the same as in the case of parallel kinematics. 
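For orientation, a minimal numerical sketch of this bookkeeping is given below (in Python; illustrative only, with the electron mass neglected as above and all function names hypothetical). It enumerates the allowed Landau levels and longitudinal momenta from the kinematic solution, evaluates the thermal blocking factors, and includes a standard upward recursion for the generalized Laguerre polynomials that enter $F_{nm}$; large orders may require extended precision.
\\begin{verbatim}
import numpy as np

def blocking(eps, mu0, T):
    # Pauli blocking factor 1 - f for a Fermi-Dirac distribution;
    # pass -mu0 for the positron, as in the expressions for f_e and f_p.
    return 1.0 - 1.0 / (1.0 + np.exp((eps - mu0) / T))

def allowed_states(Q0, Qz, eB):
    # Enumerate (n, m, lambda, p_z) allowed by the kinematic solution;
    # the positron momentum is Qz - p_z.
    q2 = Q0**2 - Qz**2                       # Q_0^2 - Q_z^2 > 0
    n_max = int(q2 / (2.0 * eB))
    states = []
    for n in range(n_max + 1):
        m_max = int((np.sqrt(n_max) - np.sqrt(n))**2)
        for m in range(m_max + 1):
            disc = q2**2 - 4*eB*q2*(n + m) + (2*eB*(n - m))**2
            if disc < 0.0:                   # guard against rounding
                continue
            for lam in (+1, -1):
                p_z = ((q2 + 2*eB*(n - m))*Qz
                       + lam*Q0*np.sqrt(disc)) / (2.0*q2)
                states.append((n, m, lam, p_z))
    return states

def genlaguerre(nord, alpha, x):
    # L_n^alpha(x) by the upward recursion
    # (k+1) L_{k+1} = (2k+1+alpha-x) L_k - (k+alpha) L_{k-1}.
    if nord == 0:
        return 1.0
    lkm1, lk = 1.0, 1.0 + alpha - x
    for k in range(1, nord):
        lkm1, lk = lk, ((2*k + 1 + alpha - x)*lk - (k + alpha)*lkm1) / (k + 1)
    return lk
\\end{verbatim}
Each allowed state then contributes to equation~(\\ref{finito}) with the Jacobian factor $\\vert v_e-v_p\\vert^{-1}$ and the product of blocking factors.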
\n\n\tIn both kinematical regimes, the number of possible final\nstates grows inversely with $B^2$, so that it is necessary to calculate \nthe electron-positron matrix elements for large values of $n$ and $m$ in\norder to obtain the cross section. In the case of head-on collisions,\nthis difficulty is mitigated by the fact that only matrix elements \nwith $n=m$ are non-zero. For the symmetric collisions, however,\nit is necessary to calculate Laguerre polynomials of large order.\nThis is accomplished by upward recursion using quadruple precision\narithmetic to avoid numerical instabilities. As an added precaution,\nthe resulting polynomials have been compared to results obtained from a \ndownward recursive Clenshaw scheme, and found to agree to six significant\nfigures. Thus, our calculations are not limited by the accuracy to which\nthe Laguerre functions can be realized. \n\n\tIn Fig. 1, the cross section for $\\nu\\bar\\nu\\rightarrow e^+e^-$\nin free space is shown for a variety of magnetic field strengths as a \nfunction of the angle between the total momentum and the magnetic field \ndirection, assuming symmetric kinematics for the \nannihilating neutrinos. The jagged structure of the cross section\nas a function of angle is a result of the near-vanishing of the\nJacobian factor at threshold values of $\\cos\\theta$ where new final\nstates for the electron-positron pair become available. For strong\nfields ($B=100 m_e^2$, corresponding to $\\approx 4\\times 10^{15}$ G), this cross section \nvaries strongly with angle, but this effect essentially vanishes\nfor fields comparable to those expected in the core of a supernova\n($B\/m_e^2 < 1$). Qualitatively, the angle-averaged cross section tends \nto decrease with increasing $B$, which reflects the fact that the creation \nof both the electron and positron in the lowest Landau level is forbidden \nby helicity conservation. Additionally, there is no asymmetry with respect to \nthe magnetic field direction, as such a preference would violate CP. \nIn Fig. 2, the cross section is shown for head-on collisions for the same\ncenter of mass energy. Because the electron and positron are only sensitive\nto the total momentum of the annihilating pair, and to the direction of the \nB-field, the cross section varies quadratically with the cosine of the angle between the \nfield and ${\\bf k}_1$, and can be shown analytically to have the form\n\\begin{equation}\n\\sigma\\propto (A_{11}+A_{33}) + \\cos^2\\theta (A_{11}-A_{33}).\n\\end{equation}\nFor large magnetic fields, the $A_{11}$ term dominates, and the \nannihilation cross section for neutrinos traveling parallel to\nthe magnetic field is twice that for those traveling perpendicular to the \nfield. As the field decreases, the electron-positron current matrix elements\nare more isotropic, and the angular dependence of the cross section is\ncorrespondingly smaller. Interestingly, there is a region at small fields\nwhere $A_{33}>A_{11}$, so that neutrinos traveling perpendicular to the \nfield are more likely to annihilate. \n \n When the neutrinos annihilate in the presence of a nearly degenerate gas \nof electron-positron pairs typical of astrophysical environments, the \ncross section is reduced significantly by the requirement that the produced\nelectron's energy be above the Fermi energy of the gas. Moreover, since \nthe electron gas is not CP symmetric, the argument forbidding a parity \nviolating asymmetry in the annihilation cross section is invalid. In Figs. 
\n3 and 4, the cross sections for collinear and head on collisions are \nshown as a function of magnetic field strength, assuming an electron\ngas with $T=1$ MeV and $\\mu=15$ MeV. Comparison of Figs. 1 and 3 \nindicates that, at the energies shown, the cross section is suppressed by a \nfactor of 5-10, and that the cross section for annihilating pairs with \nmomentum parallel to the magnetic field direction is larger. The \nsuppression of the cross section for head on collisions is much more \ndramatic, on the order of $10^3$ for the center of mass energies shown. \nThe cross section is also observed to be more sensitive to magnetic field \nstrength. This \nreflects the difficulty of producing a high energy electron in the\nhead on collision relative to the collinear case, where the electron can \nbe chosen to carry the bulk of the neutrinos' initial momentum. In Figs. 5 \nand 6, the neutrino annihilation asymmetry, defined as the ratio of the \nparity violating to parity conserving contributions to the cross section,\nis shown as a function of magnetic field strength and direction. In the collinear case, \nthe asymmetry is, for angles not too near $0$ or $\\pi$, well fit by a \nstraight line in $\\cos\\theta$ with slope proportional to the strength of \nthe magnetic field. For the head on case, the linear dependence on \n$\\cos\\theta$ can be demonstrated analytically since the electron-positron\ncurrents depend only on the total momentum perpendicular to the magnetic field.\nThe resulting asymmetry is given by\n\\begin{equation}\nR=\\frac{\\sigma_{PV}}{\\sigma_{PC}}\\propto\\frac{A_{12}\\cos\\theta}{(A_{11}+A_{33}) + \\cos^2\\theta (A_{11}-A_{33})},\n\\end{equation}\nwith the same linear dependence on the field strength.\n\n\tAs we have already mentioned, the asymmetry is allowed because the \nbackground gas of electrons and positrons is not CP symmetric. Since the\nenergies of the electrons in the magnetic field are spin dependent, there is\na corresponding spin dependence in the occupation probabilities for states\nin a given Landau level. This variation in the occupation probabilities \nof states near the Fermi surface results in a preference for creating\nelectrons with spin aligned along the field direction. In Figs. 7 and\n8, the ratio of the parity violating to parity conserving contributions\nto the annihilation cross section is shown as a function of angle and\nelectron chemical potential $\\mu$ for $B=10m_e^2$. For the collinear case\n(Fig. 7), the asymmetry increases with $\\mu$ approximately quadratically,\nreflecting the increase in the density of states near the Fermi surface.\nFor the head-on case (Fig. 8), the asymmetry shows no significant dependence \non $\\mu$ at all. This remarkable result comes about because the parity violation resides in the neutrino matrix elements, which are\noblivious to the presence of the Fermi sea and to the direction of the\nmagnetic field. Memory of the field direction is recovered when the \nneutrino matrix elements are multiplied by the electron currents, producing\nthe observed asymmetry. The effect of increasing the temperature from\n1 to 4 MeV is shown in Figs. 9 and 10 for the same magnetic field strength.\nAs the Fermi surface becomes more diffuse, the electron occupation\nnumbers start to vary more slowly, with the result that the asymmetry\ndecreases for large $T$. In the collinear case (Fig. 
9), the asymmetry falls\napproximately like $\\frac{1}{\\sqrt{T}}$ while in the head-on kinematics the\nfall-off is faster, varying approximately as $\\frac{1}{T}$. \n\n\tCombining these results, we are able to characterize the asymmetry\nfor the two cases in a manner that allows us to extrapolate our results\nto other situations. For the collinear case, we find\n\\begin{equation} \n{\\sigma_{PV}\\over\\sigma_{PC}}\\propto {\\mu^2 B\\over\\sqrt{T}}. \n\\end{equation}\nSince in this case there are two independent parameters with dimensions\nof energy, $E$ and $s$, we have been unable to discover a simple\nenergy dependence for the asymmetry. For the head-on case, there is\nonly one dimensionally consistent possibility, given by\n\\begin{equation}\n{\\sigma_{PV}\\over\\sigma_{PC}}\\propto {B\\over T\\sqrt{s}}.\n\\end{equation}\nThe dependence of the head-on cross section as a function of energy and angle\nis shown in Fig. 11.\n\nAs we have noted previously, these relations are {\\it approximate},\ngood to 10-15 per cent over a wide range of angles. Moreover, it should\nbe apparent that these relations are only valid in instances were the\nasymmetry is fairly small, less than or of the order of 20 per cent. \nIf we assume the validity of these relations, we can extrapolate from\nthe relatively high fields($\\approx 10^{14}$ G) where calculations are \nfeasible,\nto the field strengths observed in pulsars($\\approx 10^{12}$G)\\cite{13}. \nThe result of this scaling is an asymmetry of order $10^{-4}$, which is \nsignificantly smaller than that required to produce the observed recoil \nvelocities of neutron stars. The smallness of this result does not, however, \nrule out neutrino asymmetries as a source of the recoil velocities since \nthe relatively small asymmetries produced by neutrino annihilations may be \namplified by hydrodynamic instabilities in the collapsing core\\cite{14}. \nAdditionally, there have been speculations that much larger magnetic fields \nmay exist\\cite{15}\\cite{16}. Finally, it should be noted that there are\nother neutrino processes for which the cross section and asymmetries may be\nlarger, and multiple neutrino interactions may significantly increase the net \nasymmetry\\cite{17}. \n\n\tIn summary, we have calculated the cross section for $\\nu\\bar\\nu\n\\rightarrow e^+e^-$ in the presence of a magnetic field for an extensive\nset of conditions typical of those found in astrophysical environments. \nWe do not find that the presence of the field provides any significant\nenhancement of the cross section at fixed chemical potential. Although \nthe presence of a degenerate gas of electrons provides the possibility\nof parity violation in this reaction, they are insufficient to produce \nthe observed recoil velocities of neutron stars at the field strengths \nknown to be associated with pulsars. \n\n\\centerline{\\bf Acknowledgments}\n\n\tThis authors wish to acknowledge support from NSF Research \nContract No. PHY-9408843 and DOE grant DE-FG02-87ER40365.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}