diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzkfvn" "b/data_all_eng_slimpj/shuffled/split2/finalzzkfvn" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzkfvn" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nThe theory of Sample Distribution is an important branch of Statistics and Probability which study the general problem of determining the distribution of functions of random vectors. It provides a formal framework for modelling, simulating and making statistical inference. To be more precise, let us fix a probability measure space $\\left(\\Omega,\\Sigma,p\\right)$ and let $X$ be a (vector valued) random variable i.e a $\\Sigma$-measurable function from $\\Omega$ to $\\mathbb{R}^k$, where $k\\in\\mathbb{N}$; $X$ is usually referred to as the data. Let $Y$ be any measurable function of the data $X$ i.e. $Y$ is a random variable which satisfies $Y=\\phi(X)$ for some measurable function $\\phi:\\mathbb{R}^k\\to\\mathbb{R}^n$ with $n\\in\\mathbb{N}$; $Y$ is usually called a statistic. \nThe problem which we address consists in finding the probability distribution of $Y$ knowing the distribution of $X$.\n\n\n\n\n Depending on the nature of the data, there are in general different approaches for finding the distribution of the statistic $Y$, including the distribution function technique, the moment-generating function technique and the change of variable technique (see e.g. \\citep[Section 4.1, page 122]{HoggCraig}). In the last case, let us suppose for example that the probability measure induced by $X$ is absolutely continuous with respect to the Lebesgue measure on $\\mathbb{R}^k$ and let $f_X$ be its density function. Then if $k=n$ and $\\phi:\\mathbb{R}^k\\to\\mathbb{R}^k$ is a $C^1$-diffeomorphism, then the change of variable $y=\\phi(x)$ yields, for every Borel subset $A\\subseteq\\mathbb{R}^k$,\n\\begin{align}\\label{int 2}\np\\left(\\phi(X)\\in A\\right)=p\\left(X\\in \\phi^{-1}(A)\\right)=\\int_{\\phi^{-1}(A)}f_X(x)\\,dx=\\int_{A}f_X(\\phi^{-1}(y))\\frac{1}{|\\mbox{det}J_\\phi(\\phi^{-1}(y))|}\\,dy.\n\\end{align}\nThe last equation implies that the probability measure induced by $Y$ is absolutely continuous with respect to the Lebesgue measure on $\\mathbb{R}^k$ and its density function $f_Y$ is determined by the equation\n\\begin{align}\\label{int 1}\nf_Y(y)=f_X(\\phi^{-1}(y))\\frac{1}{|\\mbox{det}J_\\phi(\\phi^{-1}(y))|}, \\quad y\\in\\mathbb{R}^k.\n\\end{align}\nThis change of variables formula is widely used, for example, in machine learning and is essential for some recent results in density estimation and\n\tgenerative modeling like normalizing flows (\\citep{Rezende}), NICE (\\citep{Dinh2}), or Real NVP (\\citep{Dinh1}). However all uses of this formula in the machine learning literature that we are aware of are constrained by the bijectivity and the differentiability of the map $\\phi$.\n\n\nIn this expository paper we extend formula \\eqref{int 1} to the more general case of statistics $Y=\\phi(X)$ defined by locally Lipschitz functions $\\phi:\\mathbb{R}^k\\to\\mathbb{R}^n$. The approach presented is mainly based upon the Coarea Formula proved by Federer in \\citep{Federer} which provides, in our setting, an elegant tool to derive the distribution of $Y$. This also allows to get rid of the differentiability and invertibility assumptions on $\\phi$ and to treat the case $k\\neq n$: this is quite useful in many problems of statistical inference and of machine learning (see e.g. 
\\citep{Civitkovic}).\n\nWhen $\\phi:\\mathbb{R}^k\\to \\mathbb{R}^n$ is a $C^1$-map and $m:=\\max\\{\\mbox{rank }(J\\phi(x)):x\\in\\mathbb{R}^k\\}$, our result states that the probability $p_Y$ induced by $Y=\\phi(X)$ has a density function $f_Y$ with respect to the Hausdorff measure $\\mathcal{H}^m$ on $\\phi(\\mathbb{R}^k)$\nwhich satisfies \n\\begin{align}\\label{eq intr}\n\tf_Y(y)=\n\t\\int_{\\phi^{-1}(y)}f_X(x)\\frac{1}{J_m\\phi(x)}\\,d\\mathcal{H}^{k-m}(x), &\\quad \\text{for $\\mathcal{H}^m$-a.e.}\\quad y\\in\\phi(\\mathbb{R}^k),\n\\end{align} \nwhere $J_m\\phi(x)$ is the $m$-dimensional Jacobian of $\\phi$ (see Definition \\ref{k-Jac}). When the Jacobian matrix $J\\phi$ has, at any point, maximum rank we allow the map $\\phi$ to be only locally Lipschitz. We also consider the case of $X$ having probability concentrated on some $m$-dimensional sub-manifold $E\\subseteq\\mathbb{R}^k$. This case has many applications in directional and axial statistics, morphometrics, medical\ndiagnostics, machine vision, image analysis and molecular biology (see e.g \\citep{Bhattacharya} and references there in).\n\n\n\n We do not make any attempt to reach any novelty but the evidence suggests that this result is not as universally known as it should be. Besides this we give also several examples.\n\n\n\n\\smallskip\nLet us briefly describe the content of the sections. In Section \\ref{preliminaries} we introduce the main definitions and notation that we use throughout the paper. Section \\ref{Section AC} collects the main results we need about the theory of Hausdorff measures and the Area and Coarea Formulas. In Section \\ref{Section Distribution} we develop the method proving Formula \\eqref{eq intr}: when $J\\phi$ has maximum rank we also allow the map $\\phi$ to be locally Lipschitz. In Section \\ref{Section Generalization} we gives a further generalization considering the case of random variables $X$ having probability density functions $f_X$ with respect to the Hausdorff measure $\\mathcal{H}^m_{\\vert E}$ concentrated on some $m$-dimensional sub-manifold $E\\subseteq\\mathbb{R}^k$. \nFinally, in Section \\ref{Section apll}, we provide several examples which show how to apply the latter results in order to characterize the distribution of algebra of random variables and how to compute, in an easy way, the probability densities of some classic distributions including order statistics, degenerate normal distributions, Chi-squared and \"Student's t\" distributions. \n\n\n\\bigskip\n\\noindent\\textbf{Notation.} \nWe write $\\langle \\lambda,\\mu \\rangle=\\sum_{i}\\lambda_i\\mu_i$ to denote the inner product of $\\mathbb{R}^k$.\n When $f:\\mathbb{R}^k\\to\\mathbb{R}^n$ is a Lipschitz map we write $Jf$ to denote its Jacobian matrix $\\left(\\frac{\\partial f_i}{\\partial x_j}\\right)_{i,j}$ which is defined a.e. on $\\mathbb{R}^k$. When $A=(a_{ij})\\in \\mathbb{R}^{n,k}$ is a real matrix we write $Ax$ to denote the linear operator \n$$\\phi:\\mathbb{R}^k\\to\\mathbb{R}^n,\\qquad x=(x_1,\\dots,x_k)\\mapsto x\\cdot A^t=(y_1,\\dots,y_n),\\quad y_i=\\sum_{j=1}^k a_{i,j}x_j.$$\n With this notation the Jacobian matrix $J(Ax)$ of $Ax$ satisfies $J(Ax)=A$. $I_k$ is the identity matrix of $\\mathbb{R}^{k,k}$. If $\\left(\\Omega_1, \\Sigma_1\\right)$ and $\\left(\\Omega_2, \\Sigma_2\\right)$ are measurable spaces, a function $f :\\Omega_1\\to\\Omega_2$ is said to be $\\left(\\Sigma_1,\\Sigma_2\\right)$-measurable if $f^{-1}(B)\\in\\Sigma_1$ for all $B\\in\\Sigma_2$. 
\nUnless otherwise specified when $\\Omega_2=\\mathbb{R}^k$ we always choose $\\Sigma_2$ as the $\\sigma$-algebra $\\mathcal{B}(\\mathbb{R}^k)$ of all the Borel subsets of $\\mathbb{R}^k$ and in this case we simply say that $f$ is $\\Sigma_1$-measurable. We finally write $\\mathcal L^k$ and $\\mathcal H^s$ to denote respectively the Lebesgue measure and the $s$-dimensional Hausdorff measure on $\\mathbb{R}^k$: under this notation we have in particular that $\\mathcal L^k=\\mathcal H^k$ and that $\\mathcal H^0$ is the counting measure on $\\mathbb{R}^k$. \n\n\n\n\\bigskip\n\n\\noindent\\textbf{Acknowledgements.} The author thanks F. Durante for several comments on a previous version of the manuscript.\n\n\\section{Preliminaries}\\label{preliminaries}\nIn this section we fix the main notation and collect the main results we use concerning the Probability theory. For a good survey on the topic we refer the reader, for example, to \\citep[Chapter IV]{HalmosMeasure} and \\citep[Appendix A and B]{Schervish}.\n\n\nLet $\\mu,\\nu$ two (positive) measures defined on a measurable space $\\left(\\Omega,\\Sigma\\right)$. $\\nu$ is said to be \\emph{absolutely continuous} with respect to $\\mu$ and we write $\\nu\\ll\\mu$ if and only if $\\nu(B)=0$ for every $B\\in\\Sigma$ such that $\\mu(B)=0$. $\\nu$ is said to have a \\emph{density function} $f$ with respect to $\\mu$ if and only if there exists a measurable positive function $f:\\Omega\\to \\mathbb{R}^+$ such that \n\\begin{align*}\n\\nu(B)=\\int_B f\\,d\\mu,\\quad \\text{for all}\\quad A\\in\\Sigma,\n\\end{align*}\n(note that $f$ is uniquely defined up to zero measure sets). When $\\mu$ is $\\sigma$-finite, thanks to the Radon-Nikodym Theorem, the latter two definitions coincide and $\\frac{d\\nu}{d \\mu}:=f$ is called the \\emph{Radon-Nikodym derivative} of $\\nu$ with respect to $\\mu$ (see e.g. \\citep[Theorem 1.28]{AFPallara}).\n\n\nLet now $\\left(\\Omega,\\Sigma,p\\right)$ be a probability measure space i.e. a measure space with $p(\\Omega)=1$ and let $k\\in\\mathbb{N}$. A $\\Sigma$-measurable function $X$ from $\\Omega$ to $\\mathbb{R}^k$ is called a (vector) \\emph{random variable}; in statistical inference problems, $X$ is sometimes referred to as the given data. We write $p_X$ to denote the distribution of $X$ i.e. the measure induced by $X$ on $\\left(\\mathbb{R}^k, \\mathcal{B}(\\mathbb{R}^k)\\right)$ defined by\n\\begin{align*}\np_X(A)=p\\left(X\\in A\\right):=p(X^{-1}(A)),\\quad \\text{for all Borel set}\\quad A\\subseteq \\mathbb{R}^k.\n \\end{align*}\nWith a little abuse of terminology, $X$ is said to be an \\emph{absolutely continuous random variable} if and only if $p_X\\ll \\mathcal L^k$ i.e. $p_X$ is absolutely continuous with respect to the Lebesgue measure $\\mathcal L^k$. In this case the non-negative Radon-Nikodym derivative $f_X:=\\frac{dp_X}{d \\mathcal L^k}$ is called the density function of $X$ and it is defined through the relation\n\\begin{align*}\np_X(A)=\\int_{A}f_X(x)\\,dx,\\quad \\text{for all Borel set}\\quad A\\subseteq \\mathbb{R}^k.\n\\end{align*}\n$X$ is said to be a \\emph{discrete random variable} if and only if there exists a countable subset $I=(a_i)_{i\\in N}$ of $\\mathbb{R}^k$ such that $p_X\\ll {\\mathcal{H}^0}_{\\vert I}$ i.e. $p_X$ is absolutely continuous with respect to the counting measure $\\mathcal{H}^0$ on $I$. 
In this case the density \n $f_X:=\\frac{dp_X}{d {\\mathcal{H}^0}_{\\vert I}}$ is also called the probability mass function and it is defined through the relation\n\\begin{align*}\np_X(A)=\\sum_{i:a_i\\in A}f_X(a_i),\\quad \\text{for every subset}\\quad A\\subseteq \\mathbb{R}^k.\n\\end{align*}\nIn particular $p_X(\\{a\\})=f_X(a)$, for all $a\\in I$.\n\n\n Let $X:\\Omega\\to\\mathbb{R}^k$ be a fixed random variable and let $n\\in\\mathbb{N}$; a random variable $Y:\\Omega\\to\\mathbb{R}^n$ is called a \\emph{statistic} (of the data $X$) if it is a measurable function of $X$ i.e. $Y$ satisfies $Y=\\phi\\circ X$ for some $\\left(\\mathcal{B}(\\mathbb{R}^k), \\mathcal{B}(\\mathbb{R}^n)\\right)$ measurable function $\\phi:\\mathbb{R}^k\\to\\mathbb{R}^n$.\n\n\nFinally, let $X_1,\\dots, X_n$ be $n$ random variables where $X_i:\\Omega\\to\\mathbb{R}^k$, for $i=1,\\dots, n$. $X_1,\\dots, X_n$ are said to be \\emph{independent} if for every Borel subset $A_1,\\dots A_n$ of $\\mathbb{R}^k$ and for every $J\\subseteq \\{1,\\dots, n\\}$ one has \n\\begin{align*}\np\\left(\\bigcap_{i\\in J}X_i^{-1}(A_i)\\right)=\\prod_{i\\in J}p_{X_i}(A_i).\n\\end{align*}\nIn this case, if every $X_i$ is absolutely continuous with density function $f_i$, then $X=\\left(X_1,\\dots, X_n\\right)$ is absolutely continuous and its density function satisfies $f_X(x_1,\\dots, x_n)=\\prod_{i=1,\\dots,n}f_i(x_i)$ for every $x_i\\in\\mathbb{R}^k$.\n\nIf, moreover, $X_1,\\dots, X_n$ are identically distributed i.e. $p_{X_i}=p_{X_j}:=q$ for every $i,j$, then $X=\\left(X_1,\\dots, X_n\\right)$ is called a \\emph{random sample} from the distribution $q$; in this case, if every $X_i$ is absolutely continuous with density $f_i=f$, then the density function of $X$ satisfies $f_X(x_1,\\dots, x_n)=\\prod_{i=1,\\dots,n}f(x_i)$.\n\n\\section{Area and Coarea Formulas}\\label{Section AC}\n\nIn this section we provide a brief introduction to the theory of Hausdorff measures and we collect the main results about the Area and Coarea Formulas proved by Federer in \\citep{Federer}. For the related proofs, we refer the reader, for example, to \\citep{AFPallara, Federer-book, GiaqModica} (and references therein).\n\nWe begin with the definition of the $s$-dimensional Hausdorff measure.\nLet $E\\subseteq\\mathbb{R}^n$, $\\epsilon>0$ and let $\\left(B(x_i,r_i)\\right)_{i\\in\\mathbb{N}}$ be a covering of $E$ by a countable collection of balls $B(x_i,r_i)$ whose radii satisfy $r_i\\leq \\epsilon$. For each $s\\geq 0$, let\n\\begin{align*}\n\\sigma_s(\\epsilon)= \\frac{\\pi^{s\/2}}{\\Gamma (1 + s\/2)}\\inf\\sum_{i\\in\\mathbb{N}} r_i^s,\n\\end{align*}\nwhere the infimum is taken over all such coverings. The monotonicity of $\\sigma_s$, with respect to $\\epsilon$, implies that there exists the limit (finite or\ninfinite)\n\\begin{align*}\n\\mathcal{H}^s(E) := \\lim_{\\epsilon\\to 0^+}\\sigma_s(\\epsilon).\n\\end{align*}\nThis limit is called the $s$-dimensional Hausdorff measure of $E$. The Hausdorff measure $\\mathcal{H}^s$ satisfies Caratheodory's criterion; therefore the $\\sigma$-algebra of all the $\\mathcal{H}^s$-measurable sets contains all the Borel subsets of $\\mathbb{R}^n$ (see e.g. \\citep[Proposition 2.49]{AFPallara}).\n\n\nIf $0\\leq s<t$ and $\\mathcal{H}^s(E)<\\infty$ then $\\mathcal{H}^t(E)=0$. In order to state the Area and Coarea Formulas we need the notion of $m$-dimensional Jacobian.\n\\begin{defi}\\label{k-Jac}\nLet $\\phi:\\mathbb{R}^k\\to\\mathbb{R}^n$ be a locally Lipschitz map and let $m\\leq\\min\\{k,n\\}$. The $m$-dimensional Jacobian of $\\phi$ is defined, at every point $x$ of differentiability of $\\phi$, by\n\\begin{align*}\nJ_m\\phi(x):=\\sqrt{\\sum_{B}(\\mbox{det\\,}B)^2},\n\\end{align*}\nwhere the sum runs along all the $m\\times m$ minors $B$ of the Jacobian matrix $J\\phi(x)$.\n\\end{defi}\nIn particular, if $k\\leq n$ then $J_k\\phi=\\sqrt{\\mbox{det}\\left(J\\phi^t J\\phi\\right)}$, while if $k\\geq n$ then $J_n\\phi=\\sqrt{\\mbox{det}\\left(J\\phi J\\phi^t\\right)}$.\n\nThe Area Formula deals with the case $k\\leq n$.\n\\begin{teo}[Area formula]\\label{Area}\nLet $\\phi:\\mathbb{R}^k\\to\\mathbb{R}^n$ be a locally Lipschitz map with $k\\leq n$.\n\\begin{itemize}\n\\item[(i)] For any $\\mathcal{L}^k$-measurable set $E\\subseteq \\mathbb{R}^k$ the function $y\\mapsto \\mathcal{H}^{0}\\left(E\\cap \\phi^{-1}(y)\\right)$ is $\\mathcal{H}^k$-measurable in $\\mathbb{R}^n$ and\n\\begin{align*}\n\\int_E J_k\\phi(x)\\,dx=\\int_{\\mathbb{R}^n}\\mathcal{H}^{0}\\left(E\\cap \\phi^{-1}(y)\\right)\\,d\\mathcal{H}^k(y).\n\\end{align*}\n\\item[(ii)] If $u$ is a positive measurable function, or $uJ_k\\phi\\in L^1\\left(\\mathbb{R}^k\\right)$, then \n\\begin{align*}\n\\int_{\\mathbb{R}^k}u(x) J_k\\phi(x)\\,dx=\\int_{\\mathbb{R}^n}\\int_{\\phi^{-1}(y)}u(x)\\,d\\mathcal{H}^{0}(x)\\;d\\mathcal{H}^k(y)=\\int_{\\mathbb{R}^n}\\sum_{x\\in\\phi^{-1}(y)}u(x)\\;d\\mathcal{H}^k(y).\n\\end{align*}\n\\end{itemize} \n\\end{teo}\nIf $\\psi:D\\subseteq\\mathbb{R}^m\\to\\mathbb{R}^k$ is an injective locally Lipschitz parametrization of the $m$-dimensional manifold $M=\\psi(D)$, then choosing $u=(v\\circ\\psi)\\chi_D$ in (ii) one obtains the useful formula\n\\begin{align}\\label{parametrized manifold}\n\\int_{M}v(x)\\,d\\mathcal{H}^m(x)=\\int_{D}v(\\psi(t))\\, J_m\\psi(t)\\,dt,\n\\end{align}\nwhich allows one to compute integrals over parametrized manifolds.\n\nThe Coarea Formula deals instead with the case $k> n$ and it can be seen as a generalization of Fubini's theorem about the reduction of integrals. For its proof we refer the reader to \\citep[Theorem 3.2.3]{Federer}, \\citep[Theorem 2.86]{GiaqModica}. 
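Before stating it, the following minimal Python sketch illustrates the covering definition of $\\mathcal{H}^s$ given above in the simplest non-trivial case $s=1$: covering the unit circle of $\\mathbb{R}^2$ with $N$ balls centred on the circle (one natural covering, not the optimal one, so the sum only bounds $\\sigma_1(\\epsilon)$ from above) and recalling that the normalising constant $\\frac{\\pi^{s\/2}}{\\Gamma (1 + s\/2)}$ equals $2$ for $s=1$, the normalized sum of the radii approaches $\\mathcal{H}^1(\\mathbb{S}^1)=2\\pi$ as the radii shrink. The helper name and the specific covering are of course our own illustrative choices.
\\begin{verbatim}
import numpy as np

def h1_estimate_circle(n_balls):
    # Cover the unit circle with n_balls balls centred at the midpoints of
    # n_balls arcs of angle 2*pi/n_balls; the smallest admissible radius is
    # the distance from the arc midpoint to its endpoints.
    theta = 2 * np.pi / n_balls
    radius = 2 * np.sin(theta / 4)
    # normalising constant pi^{1/2}/Gamma(3/2) = 2 for s = 1
    return 2 * n_balls * radius

for n in (10, 100, 1000, 10000):
    print(n, h1_estimate_circle(n))   # 6.257..., 6.2829..., ..., tends to 2*pi
\\end{verbatim}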
\n\\begin{teo}[Coarea formula]\\label{Coarea}\nLet $\\phi:\\mathbb{R}^k\\to\\mathbb{R}^n$ be a locally Lipschitz map with $k> n$.\n\\begin{itemize}\n\\item[(i)]For any $\\mathcal{L}^k$-measurable set $E\\subseteq \\mathbb{R}^k$ the function $y\\mapsto \\mathcal{H}^{k-n}\\left(E\\cap \\phi^{-1}(y)\\right)$ is $\\mathcal{L}^n$-measurable in $\\mathbb{R}^n$ and\n\\begin{align*}\n\\int_E J_n\\phi(x)\\,dx=\\int_{\\mathbb{R}^n}\\mathcal{H}^{k-n}\\left(E\\cap \\phi^{-1}(y)\\right)\\,dy.\n\\end{align*}\n\\item[(ii)] If $u$ is a positive measurable function, or $uJ_n\\phi\\in L^1\\left(\\mathbb{R}^k\\right)$, then \n\\begin{align*}\n\\int_{\\mathbb{R}^k}u(x) J_n\\phi(x)\\,dx=\\int_{\\mathbb{R}^n}\\int_{\\phi^{-1}(y)}u(x)\\,d\\mathcal{H}^{k-n}(x)\\;dy.\n\\end{align*}\n\\end{itemize} \n\\end{teo}\nWhen $\\phi$ is an orthogonal projection \n(e.g. $\\phi(x_1,\\cdots, x_k)=(x_{i_1},\\cdots, x_{i_n})$ where $\\{i_1,\\cdots, i_n\\}\\subseteq \\{1,\\cdots, k\\}$), then $J_n\\phi=1$, the level sets of $\\phi$ are $(n-k)$-planes and the last formula corresponds to Fubini's theorem. \n\n\nApplying Theorem \\ref{Coarea} in the particular case $n=1$, then one has $J_1\\phi(x)=|\\nabla \\phi(x)|$ and the formula in (ii) becomes\n\\begin{align}\\label{1d-Coarea}\n\\int_{\\mathbb{R}^k}u(x)\\, |\\nabla \\phi(x)|\\,dx=\\int_{\\mathbb{R}}\\int_{\\phi^{-1}(y)}u(x)\\,d\\mathcal{H}^{k-1}(x)\\;dy.\n\\end{align}\nIn the special case $\\phi(x)=|x|$, $J_n\\phi(x)=1$ for every $x\\neq 0$ and, since the map sending $x\\mapsto rx$ changes $\\mathcal{H}^{k-1}$ by the factor $r^{k-1}$ (see e.g. \\citep[Proposition 2.49]{AFPallara}), one has \n\\begin{align*}\n\\int_{\\mathbb{R}^k}u(x)\\,dx=\\int_0^\\infty\\int_{|x|=r}u(x)\\,d\\mathcal{H}^{k-1}(x)\\;dr=\\int_0^\\infty r^{k-1}\\int_{|x|=1}u(x)\\,d\\mathcal{H}^{k-1}(x)\\;dr.\n\\end{align*}\n\n \n\n\\bigskip\n\nWe end the section by stating a generalization of the Coarea Formula to the case where the Lebesgue measure on the right hand side of the equations in Theorem \\ref{Coarea} is replaced by the Hausdorff measure $\\mathcal{H}^m$, where $m:=\\max\\{\\mbox{rank }(J\\phi(x)):x\\in\\mathbb{R}^k\\}$. For simplicity we suppose $f$ to be $C^1$-differentiable.\n\\begin{teo}\\label{Coarea gen}\n\tLet $k,n\\in\\mathbb{N}$ and let $\\phi:\\mathbb{R}^k\\to\\mathbb{R}^n$ be a $C^1$-map and let $m:=\\max\\{\\mbox{rank }(J\\phi(x)):x\\in\\mathbb{R}^k\\}$ . The following properties hold.\n\t\\begin{itemize}\n\t\t\\item[(i)] For every $\\mathcal{L}^k$-measurable set $E\\subseteq \\mathbb{R}^k$ the function $y\\mapsto \\mathcal{H}^{k-m}\\left(E\\cap \\phi^{-1}(y)\\right)$ is $\\mathcal{H}^m$-measurable in $\\mathbb{R}^n$ and one has\n\t\t\\begin{align*}\n\t\t\t\\int_E J_m\\phi(x)\\,dx=\\int_{\\mathbb{R}^n}\\mathcal{H}^{k-m}\\left(E\\cap \\phi^{-1}(y)\\right)\\,d\\mathcal{H}^m(y).\n\t\t\\end{align*}\n\t\t\\item[(ii)] If $u$ is a positive measurable function, or $uJ_m\\phi\\in L^1\\left(E\\right)$, then \n\t\t\\begin{align*}\n\t\t\t\\int_{E}u(x) J_m\\phi(x)\\,dx=\\int_{\\mathbb{R}^n}\\int_{\\phi^{-1}(y)\\cap E}u(x)\\,d\\mathcal{H}^{k-m}(x)\\;d\\mathcal{H}^m( y).\n\t\t\\end{align*}\n\t\\end{itemize} \n\\end{teo}\n{\\sc{Proof.}} The proof is a consequence of \\citep[Theorem 5.1, Theorem 5.2]{KristensenABridge}.\\qed\n\n\n\n\n\\bigskip\nThe next Remark clarifies some positivity properties about $k$-dimensional Jacobians. 
It can be seen as a generalization of Sard's Theorem (see also \\citep[Lemma 2.73, Lemma 2.96, Remark 2.97]{AFPallara} and \\citep[Theorem 1.1]{KristensenABridge}).\n\n\n\n\\begin{os}\\label{oss J>0}\nLet $\\phi:\\mathbb{R}^k\\to \\mathbb{R}^n$ be a locally Lipschitz map and let us suppose, preliminarily, that $J\\phi(x)$ has maximum rank for a.e. $x\\in\\mathbb{R}^k$.\n\\begin{itemize}\n\\item[(i)] if $k\\leq n$, then using (i) of Theorem \\ref{Area} with $E:=\\{x\\in\\mathbb{R}^k\\,:\\,J_k\\phi(x)=0\\}$ we get\n\\begin{align*}\n\\int_{\\mathbb{R}^n}\\mathcal{H}^0\\left(E\\cap \\phi^{-1}(y)\\right)\\,d\\mathcal{H}^k(y)=0.\n\\end{align*}\nThis implies $\\mathcal{H}^0\\left(E\\cap \\phi^{-1}(y)\\right)=0$ (i.e. $\\phi(E)\\cap \\{y\\}=\\emptyset $) for $\\mathcal{H}^k$-a.e. $y\\in\\mathbb{R}^n$. This yields $\\mathcal{H}^k\\left(\\phi(E)\\right)=0$ and it implies, in particular, that $J_k\\phi>0$ on $\\phi^{-1}(y)$ for $\\mathcal{H}^k$-a.e. $y\\in\\mathbb{R}^n$.\n\\item[(ii)] if $k\\geq n$, then using (i) of Theorem \\ref{Coarea} with $E:=\\{x\\in\\mathbb{R}^k\\,:\\,J_n\\phi(x)=0\\}$ we get\n\\begin{align*}\n\\int_{\\mathbb{R}^n}\\mathcal{H}^{k-n}\\left(E\\cap \\phi^{-1}(y)\\right)\\,dy=0.\n\\end{align*}\nThis yields $\\mathcal{H}^{k-n}\\left(E\\cap \\phi^{-1}(y)\\right)=0$ for a.e. $y\\in\\mathbb{R}^n$ and it implies, in particular, that $J_n\\phi>0$ $\\mathcal{H}^{k-n}$-a.e. on $\\phi^{-1}(y)$ for a.e. $y\\in\\mathbb{R}^n$.\n\\end{itemize}\n\\medskip\nLet us suppose, now, $\\phi:\\mathbb{R}^k\\to\\mathbb{R}^n$ to be $C^1$ and let $m:=\\max\\{\\mbox{rank }(J\\phi(x)):x\\in\\mathbb{R}^k\\}$. Then setting $E:=\\{x\\in\\mathbb{R}^k\\,:\\,J_m\\phi(x)=0\\}$ and using Theorem \\ref{Coarea gen} we get\n\\begin{align*}\n\\int_{\\mathbb{R}^n}\\mathcal{H}^{k-m}\\left(E\\cap \\phi^{-1}(y)\\right)\\,d\\mathcal{H}^m(y)=0.\n\\end{align*}\nThis implies that $J_m\\phi>0$ $\\mathcal{H}^{k-m}$-a.e. on $\\phi^{-1}(y)$ for $\\mathcal{H}^m$-a.e. $y\\in\\mathbb{R}^n$.\n\n\n\\end{os}\n\\section{Sample Distribution Theory}\\label{Section Distribution}\nLet $\\left(\\Omega,\\Sigma,p\\right)$ be a probability measure space, let $k\\in\\mathbb{N}$ and let $X:\\Omega\\to \\mathbb{R}^k$ be an absolutely continuous random variable. Let $Y:=\\phi\\circ X$ be a statistic, where $\\phi:\\mathbb{R}^k\\to \\mathbb{R}^n$ is a measurable map and $n\\in\\mathbb{N}$. In this section we prove that when $\\phi$ is locally Lipschitz then \nthe probability measure $p_Y$ induced by $Y$ has a density function, with respect to some Hausdorff measure $\\mathcal{H}^m$ on $\\phi(\\mathbb{R}^k)\\subseteq\\mathbb{R}^n$, which can be computed explicitly in terms of an integral involving the density function $f_X$ of $X$.\n We recall preliminarily that the Radon-Nikodym derivative of a measure is uniquely defined up to zero measure sets: since, by definition, $p_Y$ is concentrated on $\\phi(\\mathbb{R}^k)$, in what follows we can always set $f_Y(y)=0$ for $y\\notin\\phi(\\mathbb{R}^k)$. \n\n\nWe start with the case $k\\leq n$.\n\n\n\\begin{teo}\\label{teo k<n}\nLet $\\left(\\Omega,\\Sigma,p\\right)$ be a probability measure space, let $k,n \\in\\mathbb{N}$ with $k\\leq n$ and let $X:\\Omega\\to \\mathbb{R}^k$ be an absolutely continuous random variable with probability density function $f_X$. If $\\phi:\\mathbb{R}^k\\to \\mathbb{R}^n$ is a locally Lipschitz map such that $\\mbox{rank }(J\\phi)=k$ a.e., then the probability measure $p_Y$ induced by the statistic $Y:=\\phi\\circ X$ has a density function $f_Y$ with respect to the Hausdorff measure $\\mathcal{H}^k$ on $\\phi(\\mathbb{R}^k)$ which satisfies\n\\begin{align*}\nf_Y(y)=\n\\int_{\\phi^{-1}(y)}f_X(x)\\frac{1}{J_k\\phi(x)}\\,d\\mathcal{H}^{0}(x)=\\sum_{x\\in \\phi^{-1}(y)}f_X(x)\\frac{1}{J_k\\phi(x)}, &\\quad \\text{for $\\mathcal{H}^k$-a.e.}\\quad y\\in\\phi(\\mathbb{R}^k)\n\\end{align*}\nand it is $0$ otherwise. In particular, if $\\mathbb{R}^k$ can be covered, up to an $\\mathcal{L}^k$-negligible set, by countably many Borel sets $\\left(E_i\\right)_{i}$ on each of which $\\phi$ is injective with inverse $\\phi_i^{-1}$, then\n\\begin{align*}\nf_Y(y)=\\sum_{i}f_X(\\phi_i^{-1}(y))\\frac{1}{J_k\\phi(\\phi_i^{-1}(y))}, \\quad \\text{for $\\mathcal{H}^k$-a.e.}\\quad y\\in\\phi(\\mathbb{R}^k),\n\\end{align*}\nwhere the sum runs over the indexes $i$ such that $y\\in\\phi(E_i)$.\n\\end{teo}\n{\\sc{Proof.}} Recalling Definition \\ref{k-Jac} and Remark \\ref{oss J>0}, $J_k\\phi>0$ on $\\phi^{-1}(y)$ for $\\mathcal{H}^k$-a.e. $y\\in\\mathbb{R}^n$. Let $A\\subseteq \\mathbb{R}^n$ be a Borel set. 
Then using the Area Formula of Theorem \\ref{Area} one has\n\\begin{align*}\np_Y(A)&=p\\left(Y^{-1}(A)\\right)=p\\left(X^{-1}\\left(\\phi^{-1}(A)\\right)\\right)=\\int_{\\phi^{-1}(A)}f_X(x)\\,dx\\\\[1ex]\n&=\\int_{\\mathbb{R}^n}\\int_{\\phi^{-1}(y)\\cap \\phi^{-1}(A)}f_X(x)\\frac{1}{J_k\\phi(x)}\\,d\\mathcal{H}^0(x)\\;d\\mathcal{H}^k(y)\\\\[1ex]\n&=\n\\int_{A}\\int_{\\phi^{-1}(y)}f_X(x)\\frac{1}{J_k\\phi(x)}\\,d\\mathcal{H}^0(x)\\;d\\mathcal{H}^k(y)=\\int_{A}\\sum_{x\\in \\phi^{-1}(y)}f_X(x)\\frac{1}{J_k\\phi(x)}\\;d\\mathcal{H}^k(y).\n\\end{align*}\nThis proves the first required claim. The second assertion follows after observing that, under the given hypothesis, $\\phi^{-1}(y)=\\bigcup_i \\{\\phi_i^{-1}(y)\\}$ for every $y\\in \\phi(\\mathbb{R}^k)$.\n\\\\\\qed\n\\begin{os}\nWe note that if $k<n$ then, $\\phi$ being locally Lipschitz, the image $\\phi(\\mathbb{R}^k)$ is $\\mathcal{L}^n$-negligible: hence $p_Y$ is singular with respect to the Lebesgue measure $\\mathcal{L}^n$ and $Y$ cannot be an absolutely continuous random variable. This is the reason why, in Theorem \\ref{teo k<n}, the density of $Y$ is taken with respect to the Hausdorff measure $\\mathcal{H}^k$ on $\\phi(\\mathbb{R}^k)$.\n\\end{os}\n\nWhen $k=n$ one has $J_k\\phi=|\\mbox{det}J\\phi|$ and the previous theorem reduces to the following generalization of the classical change of variable formula \\eqref{int 1}.\n\\begin{cor}\\label{teo k=n}\nLet $\\left(\\Omega,\\Sigma,p\\right)$ be a probability measure space, let $k\\in\\mathbb{N}$ and let $X:\\Omega\\to \\mathbb{R}^k$ be an absolutely continuous random variable with probability density function $f_X$. If $\\phi:\\mathbb{R}^k\\to \\mathbb{R}^k$ is a locally Lipschitz map such that $\\mbox{det}J\\phi\\neq 0$ a.e., then the statistic $Y:=\\phi\\circ X$ is an absolutely continuous random variable and its probability density function satisfies\n\\begin{align*}\nf_Y(y)=\\sum_{x\\in \\phi^{-1}(y)}f_X(x)\\frac{1}{|\\mbox{det}J\\phi(x)|}, \\quad \\text{for a.e.}\\quad y\\in\\phi(\\mathbb{R}^k)\n\\end{align*}\nand it is $0$ otherwise.\n\\end{cor}\nWe consider now the case $k> n$.\n\n\\begin{teo}\\label{teo k>n}\nLet $\\left(\\Omega,\\Sigma,p\\right)$ be a probability measure space, let $k,n \\in\\mathbb{N}$ with $k\\geq n$ and let $X:\\Omega\\to \\mathbb{R}^k$ be an absolutely continuous random variable with probability density function $f_X$. If $\\phi:\\mathbb{R}^k\\to \\mathbb{R}^n$ is a locally Lipschitz map such that $\\mbox{rank }(J\\phi)=n$ a.e., then the statistic $Y:=\\phi\\circ X$ is an absolutely continuous random variable (i.e. $p_{Y}\\ll \\mathcal{L}^n$) and its probability density function $f_Y$ satisfies \n\\begin{align*}\nf_Y(y)=\n\\int_{\\phi^{-1}(y)}f_X(x)\\frac{1}{J_n\\phi(x)}\\,d\\mathcal{H}^{k-n}(x), &\\quad \\text{for a.e.}\\quad y\\in\\phi(\\mathbb{R}^k)\n\\end{align*} \nand it is $0$ otherwise.\n\\end{teo}\n{\\sc{Proof.}} The case $k=n$ is the result of the previous Corollary. Let us suppose $k>n$. Recalling Definition \\ref{k-Jac} and Remark \\ref{oss J>0}, $J_n\\phi>0$ $\\mathcal{H}^{k-n}$-a.e. on $\\phi^{-1}(y)$ for a.e. $y\\in\\mathbb{R}^n$. Let $A\\subseteq \\mathbb{R}^n$ be a Borel set. Then using the Coarea Formula of Theorem \\ref{Coarea} one has\n\\begin{align*}\np_Y(A)&=p\\left(Y^{-1}(A)\\right)=p\\left(X^{-1}\\left(\\phi^{-1}(A)\\right)\\right)\\\\[1ex]\n&=\\int_{\\phi^{-1}(A)}f_X(x)\\,dx=\\int_{\\mathbb{R}^n}\\int_{\\phi^{-1}(y)\\cap \\phi^{-1}(A)}f_X(x)\\frac{1}{J_n\\phi(x)}\\,d\\mathcal{H}^{k-n}(x)\\;dy\\\\[1ex]\n&=\n\\int_{A}\\int_{\\phi^{-1}(y)}f_X(x)\\frac{1}{J_n\\phi(x)}\\,d\\mathcal{H}^{k-n}(x)\\;dy.\n\\end{align*}\nThis proves the required claim.\\\\\\qed\n\nFor the reader's convenience we highlight in the following corollary the particular case $n=1$ which is very useful in the applications and which follows from formula \\eqref{1d-Coarea}. \n\n\\begin{cor}\\label{teo k>1}\nLet $\\left(\\Omega,\\Sigma,p\\right)$ be a probability measure space, let $k\\in\\mathbb{N}$ and let $X:\\Omega\\to \\mathbb{R}^k$ be an absolutely continuous random variable with probability density function $f_X$. If $\\phi:\\mathbb{R}^k\\to \\mathbb{R}$ is a locally Lipschitz map such that $|\\nabla \\phi|>0$ a.e., then the statistic $Y:=\\phi\\circ X$ is an absolutely continuous random variable and has probability density function \n\\begin{align*}\nf_Y(y)=\n\\int_{\\phi^{-1}(y)}f_X(x)\\frac{1}{|\\nabla\\phi(x)|}\\,d\\mathcal{H}^{k-1}(x), &\\quad \\text{for a.e.}\\quad y\\in\\phi(\\mathbb{R}^k).\n\\end{align*} \n\\end{cor}\n\\medskip\n\\begin{os}The assumption $p(\\Omega)=1$ was never used in the proofs of Theorems \\ref{teo k<n}, \\ref{teo k=n} and \\ref{teo k>n}. Indeed analogous results hold with $p_X$ replaced by an absolutely continuous measure on $\\mathbb{R}^k$. 
More precisely let $\\mu$ be a measure defined on $\\left(\\mathbb{R}^k,\\mathcal{B}\\right)$ such that $\\mu \\ll \\mathcal L^k$ and let $\\frac{d\\mu}{d\\mathcal L^k}$ its Radon-Nykodym derivative. Let $\\phi:\\mathbb{R}^k\\to\\mathbb{R}^n$ be a locally Lipschitz map whose Jacobian matrix $J\\phi$ has a.e. maximum rank.\n\\begin{itemize}\n\\item[(i)] If $k\\leq n$ then $\\mu \\phi^{-1} \\ll \\mathcal{H}^k$ and for $\\mathcal{H}^k$-a.e. $y\\in \\phi(\\mathbb{R}^k)$one has\n\\begin{align*}\n\\frac{d\\mu \\phi^{-1}}{d\\mathcal \\mathcal{H}^k}(y)=\\int_{\\phi^{-1}(y)}\\frac{d\\mu}{d\\mathcal L^k}(x)\\frac{1}{J_k\\phi(x)}\\,d\\mathcal{H}^0(x)=\\sum_{\\phi(x)=y}\\frac{d\\mu}{d\\mathcal L^k}(x)\\frac{1}{J_k\\phi(x)}.\n\\end{align*}\n\\item[(ii)] If $k> n$ then $\\mu \\phi^{-1} \\ll \\mathcal{L}^n$ and for a.e. $y\\in\\mathbb{R}^n$ one has\n\\begin{align*}\n\\frac{d\\mu \\phi^{-1}}{d\\mathcal{L}^n}(y)=\\int_{\\phi^{-1}(y)}\\frac{d\\mu}{d\\mathcal{L}^k}(x)\\frac{1}{J_n\\phi(x)}\\,d\\mathcal{H}^{k-n}(x).\n\\end{align*}\n\\end{itemize} \n\\end{os}\n\\bigskip\n\nWe end the section by applying Theorem \\ref{Coarea gen} in order to extend Theorem \\ref{teo k>n} to the case of a $C^1$-map $\\phi$ whose Jacobian could possibly have not maximum rank. In this case, setting $m:=\\max\\{\\mbox{rank }(J\\phi(x)):x\\in\\mathbb{R}^k\\}$, the induced probability $p_Y$ has a density function $f_Y$ with respect to the Hausdorff measure $\\mathcal{H}^m$ on $\\phi(\\mathbb{R}^k)\\subseteq\\mathbb{R}^n$.\n\n\n\\begin{teo}\\label{teo Hausdorff}\nLet $\\left(\\Omega,\\Sigma,p\\right)$ be a probability measure space, let $k,n \\in\\mathbb{N}$ \n and let $X:\\Omega\\to \\mathbb{R}^k$ be an absolutely continuous random variable with probability density function $f_X$. Let $\\phi:\\mathbb{R}^k\\to \\mathbb{R}^n$ be a $C^1$-map and let $m:=\\max\\{\\mbox{rank }(J\\phi(x)):x\\in\\mathbb{R}^k\\}$. Then the induced probability measure $p_Y$ of the statistic $Y:=\\phi\\circ X$ has a density function $f_Y$ with respect to the Hausdorff measure $\\mathcal{H}^m$ \n\n which satisfies \n\\begin{align*}\nf_Y(y)=\n\\int_{\\phi^{-1}(y)}f_X(x)\\frac{1}{J_m\\phi(x)}\\,d\\mathcal{H}^{k-m}(x), &\\quad \\text{for $\\mathcal{H}^m$-a.e.}\\quad y\\in\\phi(R^k)\n\\end{align*} \nand it is $0$ otherwise.\n\\end{teo}\n{\\sc{Proof.}} Recalling Definition \\ref{k-Jac} and Remark \\ref{oss J>0}, $J_m\\phi>0$ $\\mathcal{H}^{k-m}$-a.e on $\\phi^{-1}(y)$ for $\\mathcal{H}^m$-a.e. $y\\in\\mathbb{R}^n$. Let $A\\subseteq \\mathbb{R}^n$ be a Borel set. Then using Theorem \\ref{Coarea gen} one has\n\\begin{align*}\np_Y(A)&=p\\left(Y^{-1}(A)\\right)=p\\left(X^{-1}\\left(\\phi^{-1}(A)\\right)\\right)\\\\[1ex]\n&=\\int_{\\phi^{-1}(A)}f_X(x)\\,dx=\\int_{\\mathbb{R}^n}\\int_{\\phi^{-1}(y)\\cap \\phi^{-1}(A)}f_X(x)\\frac{1}{J_m\\phi(x)}\\,d\\mathcal{H}^{k-m}(x)\\;d\\mathcal{H}^m(y)\\\\[1ex]\n&=\n\\int_{A}\\int_{\\phi^{-1}(y)}f_X(x)\\frac{1}{J_m\\phi(x)}\\,d\\mathcal{H}^{k-m}(x)\\;d\\mathcal{H}^m(y).\n\\end{align*}\nThis proves the required claim.\\\\\\qed\n\n\n\n\\section{Further generalizations}\\label{Section Generalization}\n\nIn this section we briefly expose, for the interested reader, a further generalization of Theorem \\ref{teo k>n} which covers the case of probabilities $p_X$ concentrated on some $m$-dimensional subset $E\\subseteq\\mathbb{R}^k$. This includes, for example, the cases of a random variable $X$ which is uniformly distributed over a generic subset $E$ and of random variables taking values on $m$-dimensional manifolds. 
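Before moving to this more general setting, the following minimal Python sketch gives a Monte Carlo sanity check of Corollary \\ref{teo k>1} in the simplest case $\\phi(x_1,x_2)=x_1+x_2$ with $X$ a standard Gaussian vector of $\\mathbb{R}^2$: here $|\\nabla\\phi|=\\sqrt 2$, the level set $\\phi^{-1}(y)$ is the line $x_1+x_2=y$, and the $\\mathcal{H}^{1}$ integral is computed through an arclength parametrization. The sample size, the truncation of the line integral and the grid below are illustrative choices, and the closed form used for comparison is the $\\mathcal{N}(0,2)$ density.
\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
samples = rng.standard_normal((200_000, 2)).sum(axis=1)    # Y = X_1 + X_2

def f_X(x1, x2):
    # density of a standard Gaussian vector in R^2
    return np.exp(-(x1**2 + x2**2) / 2) / (2 * np.pi)

def f_Y_coarea(y, half_length=10.0, n_points=4001):
    # arclength parametrization of the line x1 + x2 = y:
    # t -> (y/2 + t/sqrt(2), y/2 - t/sqrt(2)); integrand is f_X / |grad phi|
    t = np.linspace(-half_length, half_length, n_points)
    x1 = y / 2 + t / np.sqrt(2)
    x2 = y / 2 - t / np.sqrt(2)
    integrand = f_X(x1, x2) / np.sqrt(2)
    return integrand.sum() * (t[1] - t[0])                  # H^1 line integral

edges = np.linspace(-4.0, 4.0, 33)
hist, _ = np.histogram(samples, bins=edges, density=True)
centres = (edges[:-1] + edges[1:]) / 2
coarea = np.array([f_Y_coarea(y) for y in centres])
exact = np.exp(-centres**2 / 4) / np.sqrt(4 * np.pi)        # N(0,2) density
print(np.abs(coarea - exact).max())     # quadrature error only
print(np.abs(hist - coarea).max())      # Monte Carlo fluctuation
\\end{verbatim}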
Statistical analysis on manifolds \n\thas many applications in directional and axial statistics, morphometrics, medical\n\tdiagnostics, machine vision and image analysis (see e.g \\citep{Bhattacharya} and references there in). Amongst the many important applications, those arising, for example, from the analysis of data on torus play also a fundamental role in molecular biology in the study of the Protein Folding Problem. \n\nAlthough the following Theorem is valid for countably $m$-rectifiable sets (see \\citep[Definition 5.4.1, Lemma 5.4.2]{Krantz-Parks}), we suppose for simplicity $E$ to be an $m$-dimensional sub-manifold of $\\mathbb{R}^k$. \n\nWe state first the Area and Coarea formula relative to sub-manifolds of $\\mathbb{R}^k$. If $J\\phi^E$ is the tangential Jacobian matrix of $\\phi$ with respect to $E$ (see \\citep[Formula 11.1]{Maggi}), the $k$-tangential Jacobian $J_k^E\\phi$ is defined as in Definition \\ref{k-Jac}. For a rigorous introduction to tangential Jacobians as well as for all the other details we refer the reader to \\citep[Chapter 3]{Federer}, \\citep[Section 5.3]{Krantz-Parks}, \\citep[Section 11.1]{Maggi}.\n\n\\begin{teo}\nLet $\\phi:\\mathbb{R}^k\\to\\mathbb{R}^n$ be a $C^1$-map and let $E\\subseteq\\mathbb{R}^k$ an $m$-dimensional manifold. The following properties hold.\n\\begin{itemize}\n\\item[(i)]If $m\\leq n$ and $u$ is a positive measurable function, or $uJ_m^E\\phi\\in L^1\\left(E,\\mathcal{H}^m\\right)$, one has \n\\begin{align*}\n\\int_{E}u\\, J_m^E\\phi\\,d\\mathcal{H}^m(x)=\\int_{\\mathbb{R}^n}\\int_{E\\cap \\phi^{-1}(y)}u\\,d\\mathcal{H}^{0}(x)\\,d\\mathcal{H}^{m}(y).\n\\end{align*}\n\\item[(ii)]\nIf $m\\geq n$ and $u$ is a positive measurable function, or $uJ_n^E\\phi\\in L^1\\left(E,\\mathcal{H}^m\\right)$,one has\n\\begin{align*}\n\\int_{E}u\\, J_n^E\\phi\\,d\\mathcal{H}^m(x)=\\int_{\\mathbb{R}^n}\\int_{E\\cap \\phi^{-1}(y)}u\\,d\\mathcal{H}^{m-n}(x)\\,dy.\n\\end{align*}\n\\end{itemize}\n\\end{teo}\n{\\sc{Proof.}} See \\citep[Theorem 2.91, 2.93]{Federer} and \\citep[Theorem 2.91, Theorem 2.93]{AFPallara}.\\qed\n\\bigskip\nThe same methods of proof used in Section \\ref{Section Distribution}, yield finally the next result. Note that every $m$-dimensional manifold $E\\subseteq\\mathbb{R}^k$ has $\\mathcal{H}^m$-$\\sigma$-finite measure.\n\\begin{teo}\nLet $\\left(\\Omega,\\Sigma,p\\right)$ be a probability measure space, let $k,n \\in\\mathbb{N}$ and let $E\\subseteq\\mathbb{R}^k$ an $m$-dimensional manifold. Let $X:\\Omega\\to E$ be a random variable having a probability density function $f_X$ with respect to the Hausdorff measure $\\mathcal{H}^m_{\\vert E}$ on $E$. Let $\\phi:\\mathbb{R}^k\\to \\mathbb{R}^n$ be a $C^1$-map whose tangential Jacobian $J^E\\phi(x)$ has maximum rank at any point $x\\in E$. The following properties hold.\n\\begin{itemize}\n\\item[(i)] If $m\\leq n$ then the probability measure $p_Y$ induced by the statistic $Y:=\\phi\\circ X$ is absolutely continuous with respect to the Hausdorff measure $\\mathcal{H}^m_{\\vert\\phi(E)}$ on $\\phi(E)$ and its density function $f_Y$ satisfies\n\\begin{align*}\nf_Y(y)&=\n\\int_{\\phi^{-1}(y)}f_X(x)\\frac{1}{J_m^E\\phi(x)}\\,d\\mathcal{H}^0(x)=\\sum_{\\phi(x)=y}f_X(x)\\frac{1}{J_m^E\\phi(x)}, &\\quad \\text{for $\\mathcal{H}^m$-a.e.}\\quad y\\in\\phi(E).\n\\end{align*}\n\\item[(ii)] If $m\\geq n$ then the statistic $Y:=\\phi\\circ X$ is an absolutely continuous random variable (i.e. 
$p_{Y}\\ll \\mathcal{L}^n$) and its probability density function $f_Y$ satisfies \n\\begin{align*}\nf_Y(y)=\n\\int_{\\phi^{-1}(y)}f_X(x)\\frac{1}{J_n^E\\phi(x)}\\,d\\mathcal{H}^{m-n}(x), &\\quad \\text{for a.e.}\\quad y\\in\\phi(E).\n\\end{align*} \n\\end{itemize}\n\\end{teo}\n\n\\section{Some applications}\\label{Section apll}\n\n\nIn this section we apply the results of the previous sections in order to compute the density functions of some distributions in some cases of relevant interest.\n\\subsection{First examples}\nIn these first examples we provide a density formula for random variables which are algebraic manipulations of absolutely continuous random variables. The first example, in particular, is used in Proposition \\ref{section chi squared} in order to find the density of the chi-squared distribution.\n\\begin{esem}[Square function]\\label{Square function}\nLet $\\left(\\Omega,\\Sigma,p\\right)$ be a probability measure space and let $X:\\Omega\\to \\mathbb{R}$ be an absolutely continuous random variable with probability density function $f_X$. We employ Corollary \\ref{teo k=n} with \\begin{align*}\n\\phi:\\mathbb{R}\\to [0,\\infty[,\\quad \\phi(t)=t^2,\\quad J_1(t)=2|t|.\n\\end{align*}\nThen the statistic $Y:=X^2$ is an absolutely continuous random variable and its probability density function $f_Y$ satisfies \n\\begin{align*}\nf_Y(y)=\\frac{f_X(\\sqrt y)+f_X\\left(-\\sqrt y\\right)}{2\\sqrt y},\\quad \\text{for any}\\quad y>0.\n\\end{align*}\nMore generally let $k\\in\\mathbb{N}$ and let $X=(X_1,\\dots, X_k):\\Omega\\to \\mathbb{R}^k$ be an absolutely continuous (vector valued) random variable with probability density function $f_X$. Then employing Theorem \\ref{teo k>n} with \n\\begin{align*}\n\\phi:\\mathbb{R}^k\\to [0,\\infty[,\\quad \\phi(x)=\\|x\\|^2=x_1^2+\\cdots+x_k^2,\\quad J_1(x)=2\\|x\\|,\n\\end{align*}\n we get that the statistic $Y:=\\|X\\|^2=X_1^2+\\cdots+X_k^2$ is an absolutely continuous random variable whose probability density function satisfies \n\\begin{align*}\nf_Y(y)&=\\int_{\\|x\\|^2=y}\\frac{f_X(x)}{2\\|x\\|}\\,d\\mathcal{H}^{k-1}(x)=\\frac{1}{2\\sqrt y}\\int_{\\|x\\|=\\sqrt y}f_X(x)\\,d\\mathcal{H}^{k-1}(x),\\quad \\text{for any}\\quad y>0.\n\\end{align*}\n\\end{esem}\n\n\n\\begin{esem}[Affine transformations]\\label{Affine transformations}\nLet $\\left(\\Omega,\\Sigma,p\\right)$ be a probability measure space, let $k\\in\\mathbb{N}$ and let $X:\\Omega\\to \\mathbb{R}^k$ be an absolutely continuous random variable with probability density function $f_X$. Let us consider the affine transformation \n\\begin{align*}\n\\phi:\\mathbb{R}^k\\to \\mathbb{R}^n,\\quad \\phi(x)=Ax+y_0,\n\\end{align*}\nwhere $A\\in\\mathbb{R}^{n\\times k}$, $\\mbox{rank}(A)=m$ and $y_0\\in\\mathbb{R}^n$. 
Recalling Definition \\ref{k-Jac}, the $m$-dimensional Jacobian of $\\phi$ is given by\n\\begin{align*}\n\tJ_m\\phi(x)=\\sqrt{\\sum_{B}(\\mbox{det\\,}B)^2}=:A_m\n\\end{align*}\nwhere the sum runs along all $m\\times m$ minors $B$ of $A$.\n Then, using Theorem \\ref{teo Hausdorff}, the induced probability measure $p_Y$ of the statistic $Y:=AX+y_0$ has a density function $f_Y$ with respect to the Hausdorff measure $\\mathcal{H}^m$ on the $m$-dimensional hyper-surface $\\phi\\left(\\mathbb{R}^k\\right)=\\{y=Ax+y_0:x\\in\\mathbb{R}^k\\}$ which satisfies \n\\begin{align*}\nf_Y(y)&=\n\\frac{1}{A_m}\\int_{Ax+y_0=y}f_X(x)\\,d\\mathcal{H}^{k-m}(x)\n\\\\[1ex]&=\\frac{1}{A_m}\\int_{\\mbox{Ker}(A)+x_y}f_X(x)\\,d\\mathcal{H}^{k-m}(x)\n, \\quad \\text{for}\\quad y\\in\\phi(R^k).\n\\end{align*} \nHere for $y\\in\\phi(R^k)$, $x_y\\in\\mathbb{R}^k$ is any fixed solution of the equation $y=Ax_y+y_0$.\n\n\n When $m=n$, then $A_n=\\sqrt{\\mbox{det}(AA^T)}$ and the map $\\phi$ is surjective i.e. $\\phi(\\mathbb{R}^k)=\\mathbb{R}^n$. In this case theorem \\ref{teo k>n} implies that $p_y\\ll\\mathcal L ^n$ i.e. $Y$ is an absolutely continuous random variable. If moreover $k=n$ and $A\\in\\mathbb{R}^{k\\times k}$ is not-singular then $A_k=|\\mbox{det } A|$ and in this case we have \n\\begin{align*}\nf_Y(y)=\n\\frac{1}{|\\mbox{det } A|}\\int_{Ax+y_0=y}f_X(x)\\,d\\mathcal{H}^{0}(x)=\\frac{f_X(A^{-1}(y-y_0))}{|\\mbox{det } A|}, &\\quad \\text{for}\\quad y\\in \\mathbb{R}^k.\n\\end{align*} \n\\end{esem}\n\n\n\n\n\\begin{esem}[Sum of variables and Sample mean]\nLet $\\left(\\Omega,\\Sigma,p\\right)$ be a probability measure space, let $k\\in\\mathbb{N}$ and let $X=\\left(X_1,\\dots, X_k\\right):\\Omega\\to \\mathbb{R}^k$ be an absolutely continuous random variable with probability density function $f_X$. We employ Corollary \\ref{teo k>1} with \n\\begin{align*}\n\\phi:\\mathbb{R}^k\\to \\mathbb{R},\\quad \\phi(t)=\\sum_{i=1}^k t_i,\\quad J_1(t)=|\\nabla\\phi|=\\sqrt k.\n\\end{align*}\nThen the statistic $Y:=\\sum_{i=1}^k X_i$ is an absolutely continuous random variable and its probability density function $f_Y$ satisfies \n\\begin{align*}\nf_Y(y)=\n\\int_{\\sum_{i=1}^k x_i=y}\\frac{f_X(x)}{\\sqrt k}\\,d\\mathcal{H}^{k-1}(x), &\\quad \\text{for a.e.}\\quad y\\in \\mathbb{R}.\n\\end{align*} \nLet us set $x^{k-1}:=\\left(x_1,\\dots,x_{k-1}\\right)$ and let $\\psi(x^{k-1})=\\left(x^{k-1},\\,y-\\sum_{i=1}^{k-1}x_i\\right)$ be a parametrization of the hyperplane $\\sum_{i=1}^k x_i=y$. 
Using the area formula \\eqref{parametrized manifold}, the last integral becomes\n\\begin{align*}\nf_Y(y)=\n\\int_{\\mathbb{R}^{k-1}}f_X\\Big(x^{k-1},\\,y-\\sum_{i=1}^{k-1}x_i\\Big)\\,dx^{k-1}, &\\quad \\text{for a.e.}\\quad y\\in \\mathbb{R}.\n\\end{align*}\nIn the particular case $k=2$ and if $X_1$, $X_2$ are independent, the last formula gives the well known convolution form for the distribution of the random variable $X_1+X_2$:\n\\begin{align*}\nf_{X_1+X_2}(y)=\n\\int_{\\mathbb{R}}f_{X_1}(t)f_{X_2}(y-t)\\,dt, &\\quad \\text{for a.e.}\\quad y\\in \\mathbb{R},\n\\end{align*}\nwhere $f_{X_1}, f_{X_2}$ are respectively the density function of the distribution generated by $X_1,X_2$.\n\n \nMoreover if $X_1,\\dots,X_k$ are identically distributed and independent with common probability density function $f:\\Omega\\to\\mathbb{R}$, then (using also Example \\ref{Affine transformations} with $\\phi(x)=\\frac 1 kx$), the density function of the sample mean $Z:=\\frac{1}{k}\\sum_{i=1}^k X_i$ is\n\\begin{align*}\nf_Z(y)= k\\, f_Y\\left(ky\\right)= k\n\\int_{\\sum\\limits_{i=1}^k x_i= k y}\\;\\prod_{i=1}^k f(x_i)\\,d\\mathcal{H}^{k-1}(x), &\\quad \\text{for a.e.}\\quad y\\in \\mathbb{R}.\n\\end{align*}\n\\end{esem}\n\n\\begin{esem}[Product and ratio of random variables]\\label{ratio}\nLet $\\left(\\Omega,\\Sigma,p\\right)$ be a probability measure space and let $X:\\Omega\\to \\mathbb{R}^2,\\, X=\\left(X_1,X_2\\right)$ be an absolutely continuous random variable with probability density function $f_X$. \n\\begin{itemize}\n\\item[(i)] Let us employ Corollary \\ref{teo k>1} with \n\\begin{align*}\n\\phi:\\mathbb{R}^2\\to \\mathbb{R},\\quad \\phi(x_1,x_2)=x_1x_2,\\quad J_1\\phi(x_1,x_2)=|\\nabla\\phi(x_1,x_2)|=\\sqrt{x_1^2+x_2^2}.\n\\end{align*}\nThen the statistic $X_1X_2$ is an absolutely continuous random variable whose probability density function satisfies\n\\begin{align*}\nf_{X_1X_2}(y)=\n\\int_{x_1x_2=y}\\frac{f_X(x_1,x_2)}{\\sqrt {x_1^2+x_2^2}}\\,d\\mathcal{H}^{1}(x_1,x_2)=\\int_{\\mathbb{R}\\setminus\\{0\\}}f_X\\Big(t,\\frac y t\\Big)\\frac{1}{|t|}\\,dt, &\\quad \\text{for a.e.}\\quad y\\in \\mathbb{R},\n\\end{align*} \nwhere we parametrized the hyperbole $x_1x_2=y$ by $\\psi(t)=\\left(t,\\frac y t\\right)$ and we used Formula \\eqref{parametrized manifold} to evaluate the last integral.\n\n\\item[(ii)] Let us suppose $X_2\\neq 0$ a.e. and let us employ Corollary \\ref{teo k>1} with \n\\begin{align*}\n\\phi:\\mathbb{R}^2\\to \\mathbb{R},\\quad \\phi(x_1,x_2)=\\frac{x_1}{x_2},\\quad J_1\\phi(x_1,x_2)=|\\nabla\\phi(x_1,x_2)|=\\frac 1 {x_2^2}\\sqrt{x_1^2+x_2^2}.\n\\end{align*}\nThen the statistic $\\frac{X_1}{X_2}$ is an absolutely continuous random variable whose probability density function satisfies\n\\begin{align*}\nf_{\\frac{X_1}{X_2}}(y)=\n\\int_{\\frac{x_1}{x_2}=y}f_X(x_1,x_2)\\frac{x_2^2}{\\sqrt {x_1^2+x_2^2}}\\,d\\mathcal{H}^{1}(x_1,x_2)=\\int_{\\mathbb{R}}f_X\\Big(ty, t\\Big)|t|\\,dt, &\\quad \\text{for a.e.}\\quad y\\in \\mathbb{R},\n\\end{align*} \nwhere we parametrized the line $x_1=yx_2$ by $\\psi(t)=\\left(ty,t\\right)$ and we used \\eqref{parametrized manifold} to evaluate the last integral.\n\\item[(iii)]Let $X:\\Omega\\to \\mathbb{R}$ be an absolutely continuous random variable such that $X\\neq 0$ a.e. and let $f_X$ its probability density function. 
We employ Corollary \\ref{teo k=n} with \\begin{align*}\n\\phi:\\mathbb{R}\\setminus\\{0\\}\\to \\mathbb{R}\\setminus\\{0\\},\\quad \\phi(t)=\\frac 1 t,\\quad J_1(t)=|\\phi'(t)|=\\frac 1{t^2}.\n\\end{align*}\nThen the statistic $\\frac 1 {X}$ is an absolutely continuous random variable whose probability density function satisfies \n\\begin{align*}\nf_{\\frac 1 X}(y)=f_X\\left(\\frac 1 y\\right)\\frac{1}{y^2},\\quad \\text{for any}\\quad y\\neq 0.\n\\end{align*}\n\\end{itemize}\n\n\n\n\n\n\\end{esem}\n\\subsection{Order Statistics}\nLet $S_k$ be the set of all the permutations of the set $\\{1,\\dots,k\\}$. Let $X=\\left(X_1,\\dots, X_k\\right):\\Omega\\to \\mathbb{R}^k$ be a random variable and let us consider the map\n$$\\phi:\\mathbb{R}^k\\to\\mathbb{R}^k,\\quad x=(x_1,\\dots, x_k)\\mapsto (x_{(1)},\\dots, x_{(k)}),$$\nwhich associates to any vector $x$ its increasing rearrangement $(x_{(1)},\\dots, x_{(k)})$ i.e. $x_{(1)}\\leq \\dots\\leq x_{(k)}$. The random variable $\\phi\\circ X:= \\left(X_{(1)},\\dots, X_{(k)}\\right)$ is the random vector of the so-called \\emph{Order Statistics} of $X$. In what follows, as an easy application of the results of the previous sections, we deduce their well known density functions. We start with the following Lemma which shows, in particular, that $\\phi$ has unitary Jacobian. \n \n \\begin{lem}\n Let $n\\in\\mathbb{N}$ such that $n\\leq k$, let $I=\\{i_1,i_2,\\dots,i_n\\}\\subseteq\\{1,\\dots,k\\}$ a subset of indexes, where $|I|=n$ and $i_1n} and the previous Lemma or alternatively by integrating the joint density $f_Y$. Indeed if we write $\\hat{x_i}=\\left(x_1,\\dots x_{i-1}, x_{i+1},\\dots, x_k\\right)\\in\\mathbb{R}^{k-1}$ to denote the variable $x$ without the $x_i$ component and if we set $F=\\{\\hat{x_i}\\in\\mathbb{R}^{k-1} \\;:\\; x_1 0$ and let $X:\\Omega\\to \\mathbb{R}$ be a random variable. $X$ is said to have a \n\\emph{ Normal (or Gaussian) distribution}\n $p_X$, and we write $p_X\\sim \\mathcal N\\left(a,\\sigma^2\\right)$, if $p_X$ has density\n \\begin{align*}\n f_X(t)=\\frac{1}{\\sqrt{2\\pi\\sigma^2}}\n \\exp\\left(-\\frac{|t-a|^2}{2\\sigma^2}\\right),\\quad t\\in\\mathbb{R}.\n \\end{align*}\nWhen $\\sigma=0$ we also write $p_X\\sim \\mathcal N\\left(a,0\\right)$ with the understanding that $p_X$ is the dirac measure $\\delta_a$ at the point $a$. \nThe parameters $a$ and $\\sigma^2$ are called the mean and the variance of $X$, respectively.\n\n\nLet $k\\in\\mathbb{N}$, $a\\in\\mathbb{R}^k$ and let $\\Sigma\\in\\mathbb{R}^{k,k}$ be a symmetric, and positive semi-definite matrix. A random variable $X=\\left(X_1,\\dots,X_k\\right):\\Omega\\to \\mathbb{R}^k$ is said to have a \\emph{(multivariate) normal distribution} $\\mathcal{N}(a,\\Sigma)$ if \n\\begin{align*}\n\\langle\\lambda, X\\rangle \\sim \\mathcal{N}\\Big(\\langle\\lambda, a\\rangle, \\langle\\Sigma \\lambda,\\lambda \\rangle\\Big),\\quad \\forall \\lambda\\in\\mathbb{R}^k\n\\end{align*}\n(we write $\\langle \\lambda,\\mu \\rangle=\\sum_{i}\\lambda_i\\mu_i$ to denote the inner product of $\\mathbb{R}^k$).\nHere $a:=E\\left(X\\right)$ is the mean vector and $\\Sigma=\\left(\\sigma_{ij}\\right)_{i,j}:=\\mbox{Cov}(X)$ is the covariance matrix of $X$ i.e. $\\sigma_{i,j}=\\mbox{Cov}\\left(X_i,X_j\\right)$. 
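As a quick numerical illustration of this definition (a minimal Python sketch: the particular $\\Sigma$, $a$, $\\lambda$ and the sample size below are arbitrary choices, and $X$ is sampled through the standard construction $X=a+\\Sigma^{\\frac 1 2}Z$ with $Z$ standard Gaussian), one can check empirically that the scalar variable $\\langle \\lambda,X \\rangle$ has mean $\\langle \\lambda,a \\rangle$ and variance $\\langle \\Sigma\\lambda,\\lambda \\rangle$; its normality itself can then be checked, for instance, with a quantile-quantile plot.
\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

a = np.array([1.0, -2.0, 0.5])
Sigma = np.array([[2.0, 0.6, 0.0],
                  [0.6, 1.0, 0.3],
                  [0.0, 0.3, 0.5]])      # symmetric, positive definite
lam = np.array([0.7, -1.2, 2.0])

# square root of Sigma via its spectral decomposition
w, V = np.linalg.eigh(Sigma)
sqrt_Sigma = V @ np.diag(np.sqrt(w)) @ V.T

Z = rng.standard_normal((500_000, 3))
X = a + Z @ sqrt_Sigma.T                 # rows are samples of N(a, Sigma)

u = X @ lam                              # <lambda, X>
print(u.mean(), lam @ a)                 # empirical mean vs <lambda, a>
print(u.var(), lam @ Sigma @ lam)        # empirical var  vs <Sigma lambda, lambda>
\\end{verbatim}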
The following very well known properties about Gaussian vectors are direct consequences of their definition (see for example \\citep[Chapter 1]{bogachev}).\n\n\\begin{prop}\\label{Basic Normal}\nLet $X=\\left(X_1,\\dots,X_k\\right):\\Omega\\to \\mathbb{R}^k$ be a random variable such that $X\\sim\\mathcal{N}\\left(a,\\Sigma\\right)$. \n\\begin{itemize}\n\\item[(i)] The mean vector $a$ and the Covariance matrix $\\Sigma$ uniquely characterized the Gaussian measure $p_X$.\n\\item[(ii)] $X_i$, $X_j$ are independent if and only if $\\sigma_{ij}=\\mbox{Cov}(X_i,X_j)=0$. \n\\item[(iii)] For every matrix $A\\in\\mathbb{R}^{m,k}$ one has $AX\\sim\\mathcal{N}\\left(\\langle Aa\\rangle, A\\Sigma A^t\\right)$.\n\\item[(iv)] When $\\Sigma$ is positive definite we say that $p_X$ is \\emph{not-degenerate}: in this case $X$ is absolutely continuous and has density function \n\\begin{align*}\nf_X(x)=\\frac{1}{(2\\pi)^{\\frac k 2}\\mbox{det}(\\Sigma)^\\frac 1 2}\\exp{\\left(-\\frac{|\\Sigma^{-\\frac 1 2}(x-a)|^2}{2}\\right)}.\n\\end{align*}\n\\end{itemize} \n\\end{prop}\n\n\\medskip\nIn the following Proposition we show that, when the Covariance matrix $\\Sigma$ is degenerate, $p_X$ has a density function with respect to the Hausdorff measure $\\mathcal{H}^m$ on some hyperplane of $\\mathbb{R}^k$. In what follows we say that that a matrix $P\\in \\mathbb{R}^{k,m}$, with $m\\leq k$, is orthogonal if it has orthonormal columns; in this case $|Qy|=|y|$ for every $y\\in\\mathbb{R}^m$. \n\n\n\\begin{prop}\nLet $k\\in\\mathbb{N}$, $a\\in\\mathbb{R}^k$ and let $\\Sigma\\in\\mathbb{R}^{k,k}$ be a positive semi-definite matrix with $m=\\mbox{rank} (\\Sigma)\\geq 1$. Let $X=\\left(X_1,\\dots,X_k\\right):\\Omega\\to \\mathbb{R}^k$ be a random variable. Then $X\\sim\\mathcal{N}(a,\\Sigma)$ if and only if there exists an orthogonal matrix $P\\in \\mathbb{R}^{k,m}$ and $m$ independent random variables $Y_1,\\dots,Y_m$ which satisfies $Y_i\\sim \\mathcal{N}\\left(0,1\\right)$ for every $i=1,\\dots,m$ and such that \n\\begin{align*}\nX=\\Sigma^{\\frac 1 2 }P\\,Y+a,\\quad Y=\\left(Y_1,\\dots,Y_m\\right).\n\\end{align*}\nMoreover the probability measure $p_X$ has a density function $f_X$ with respect to the Hausdorff measure $\\mathcal{H}^m$ on the hyperplane \n$$\\Sigma^{\\frac 12}P\\left(\\mathbb{R}^m\\right)+a=\\{x\\in\\mathbb{R}^k\\;:\\;x=\\Sigma^{\\frac 12}Py+a,\\;y\\in\\mathbb{R}^m\\}$$\n which satisfies \n\\begin{align*}\nf_X(x)=\\frac{1}{(2\\pi)^{\\frac m 2}\\Sigma^{\\frac 12}_m}\\exp{\\left(-\\frac{|y|^2}{2}\\right)}, \\quad x=\\Sigma^{\\frac 12}Py+a,\n\\end{align*}\nwhere $\\Sigma^{\\frac 12}_m=\\prod_i{\\sqrt \\lambda_i}$ and the product runs over all positive eigenvalues of $\\Sigma$ (counted with their multiplicities).\n\n\\end{prop}\n{\\sc{Proof.}} Let us prove the first claim and let us suppose, preliminarily, that the Covariance matrix $\\Sigma$ is a diagonal matrix and, without any loss of generality, let us assume that its entries in the main diagonal are \n$$\\left(\\sigma_{11},\\dots,\\sigma_{mm}, 0,\\dots,0\\right),$$\n where $\\sigma_{ii}>0$ for $i\\leq m$. Then from Proposition \\ref{Basic Normal}, $X_1,\\dots, X_m$ are independent and $X_i\\sim\\mathcal{N}(a_i,\\sigma_{ii})$; moreover $X_i=a_i$ a.e. for $i>m$. 
The required claim then immediately follows setting $Y=\\left(Y_1,\\dots, Y_m\\right)$, with $Y_i=\\frac{X_i-a_i}{\\sqrt\\sigma_{ii}}$, and $P=\\left(\\begin{array}{c}\n I_m \\\\ \n \\hline\n 0 \n \\end{array}\\right)$, where $I_m$ is the identity matrix of $\\mathbb{R}^{m,m}$.\n\n\n\n In the general case let us diagonalize the Covariance matrix $\\Sigma$: Let $Q\\in\\mathbb{R}^{k,k}$ be an orthogonal matrix such that $Q\\Sigma Q^t=D$, where $D$ is the diagonal matrix whose entries in the main diagonal are $\\left(\\lambda_1,\\dots,\\lambda_m, 0,\\dots,0\\right)$, where the $\\lambda_i>0$ are the positive eigenvalues of $\\Sigma$. From Proposition \\ref{Basic Normal} the vector $Z=QX$ satisfies $Z\\sim\\mathcal{N}\\left(Qa,D\\right)$; from the previous step there exists $Y=\\left(Y_1,\\dots,Y_m\\right)\\sim \\mathcal{N}\\left(0,I_m\\right)$ such that\n \\begin{align*}\n Z=D^{\\frac 1 2}\\left(\\begin{array}{c}\n I_m \\\\ \n \\hline\n 0 \n \\end{array}\\right)Y+Qa.\n \\end{align*}\n Then since $Q\\Sigma^{\\frac 1 2 }Q^t=D^{\\frac 1 2 }$ we get \n \\begin{align*}\n X=Q^tZ=Q^tD^{\\frac 1 2}\\left(\\begin{array}{c}\n I_m \\\\ \n \\hline\n 0 \n \\end{array}\\right)Y+a=\\Sigma^{\\frac 1 2} Q^t\\left(\\begin{array}{c}\n I_m \\\\ \n \\hline\n 0 \n \\end{array}\\right)Y+a\n \\end{align*}\n and the claim follows with $P=Q^t\\left(\\begin{array}{c}\n I_m \\\\ \n \\hline\n 0 \n \\end{array}\\right)$.\n\n\n\nFinally, to prove the second claim, let us apply the first step and Example \\ref{Affine transformations} with $A=\\Sigma^{\\frac 1 2}P$ and $y_0=a$. Then we get that $X$ has a density function $f_X$ with respect to the Hausdorff measure $\\mathcal{H}^m$ on the hyperplane \n$$\\Sigma^{\\frac 12}P\\left(\\mathbb{R}^m\\right)+a=\\{x\\in\\mathbb{R}^k\\;:\\;x=\\Sigma^{\\frac 12}Py+a,\\;y\\in\\mathbb{R}^m\\}$$\n which satisfies \n\\begin{align*}\nf_X(x)&=\n\\frac{1}{(\\Sigma^{\\frac 12}P)_m}\\int_{\\Sigma^{\\frac 12}Py+a=x}f_Y(y)\\,d\\mathcal{H}^{0}(y), \\quad \\text{for}\\quad x\\in \\Sigma^{\\frac 12}P\\left(\\mathbb{R}^m\\right)+a.\n\\end{align*} \nSince $P$ has orthogonal columns then from Definition \\ref{k-Jac} we have $(\\Sigma^{\\frac 12}P)_m=\\prod{\\sqrt \\lambda_i}:=\\Sigma^{\\frac 12}_m$, where the product runs over all positive eigenvalues of $\\Sigma$. Moreover since $\\Sigma^{\\frac 12}P$ has maximum rank, the equation $x=\\Sigma^{\\frac 12}Py+a$ has a unique solution. Then\n\\begin{align*}\nf_X(x)=\\frac{1}{(2\\pi)^{\\frac m 2}\\Sigma^{\\frac 12}_m}\\exp{\\left(-\\frac{|y|^2}{2}\\right)}, \\quad x=\\Sigma^{\\frac 12}Py+a.\n\\end{align*}\n\\qed\n\n\\subsection{Chi-squared and Student's distributions}\nLet $X:\\Omega\\to \\mathbb{R}^k$ be a Gaussian random vector whose covariance matrix is the identity matrix $I_k$. If $X\\sim\\mathcal{N}\\left(0,I_k\\right)$ then the probability measure $p_{\\chi^2(k)}$ induced by $|X|^2$ is called \\emph{Chi-squared distribution with $k$-degrees of freedom} and we write $|X|^2\\sim \\chi^2(k)$.\n\n If $X$ is not-centred i.e. $X\\sim\\mathcal{N}\\left(\\mu,I_k\\right)$ for some $\\mu\\in\\mathbb{R}^k\\setminus\\{0\\}$, then the measure $p_{\\chi^2(k,\\lambda)}$ induced by $|X|^2$ is called \\emph{Non-central Chi-squared distribution with $k$-degrees of freedom and non-centrality parameter $\\lambda=|\\mu|^2>0$} and we write $|X|^2\\sim \\chi^2(k,\\lambda)$.\n\n\\medskip\nIn the next Proposition we derive the density function of $|X|^2$. In what follows we consider the gamma function $\\Gamma(r)=\\int_0^\\infty t^{r-1}e^{-r}\\,dr$, $r>0$ (see e.g. 
\\citep[page 255]{abramowitz+stegun}) and the modified Bessel function of the first kind $ I_{\\nu }$ defined for $y>0$ as \n\\begin{align*}\nI_{\\nu }(y)=(y\/2)^{\\nu }\\sum _{j=0}^{\\infty }{\\frac {(y^{2}\/4)^{j}}{j!\\,\\Gamma (\\nu +j+1)}}=\\frac{(y\/2)^{\\nu }}{\\pi^{\\frac 1 2}\\Gamma\\left(\\nu+\\frac 1 2\\right)}\\int_0^\\pi e^{y\\cos\\theta}\\left(\\sin \\theta\\right)^{2\\nu}\\,d\\theta,\n\\end{align*}\n(see e.g. \\citep[Section 9.6 and Formula 9.6.20, page 376]{abramowitz+stegun}).\n\\begin{prop}[Chi-squared Distribution]\\label{section chi squared}\nLet $X:\\Omega\\to \\mathbb{R}^k$ be a Gaussian random vector. If $X\\sim\\mathcal{N}\\left(0,I_k\\right)$ then the Chi-squared distribution $p_{\\chi^2(k)}$ induced by $|X|^2$ has density function\n\\begin{align*}\nf_{\\chi^2(k)}(y)=\\frac{1}{2^{\\frac k 2}\\Gamma\\left(\\frac k 2\\right)}y^{\\frac k 2-1}\\exp\\left(-\\frac y 2\\right),\\quad \\text{for any}\\quad y>0.\n\\end{align*}\nIf $X\\sim\\mathcal{N}\\left(\\mu,I_k\\right)$ for some $\\mu\\in\\mathbb{R}^k\\setminus\\{0\\}$ then, setting $\\lambda=|\\mu|^2>0$, the Non-central Chi-squared distribution $p_{\\chi^2(k,\\lambda)}$ induced by $|X|^2$ has density function\n\n\\begin{align*}\nf_{\\chi^2(k,\\lambda)}(y)&=\\frac{1}{2}\\exp\\left(-\\frac{y+\\lambda}2\\right)\\left(\\frac y\\lambda\\right)^{\\frac k 4-\\frac 1 2}I_{\\frac k 2 -1}\\left(\\sqrt{\\lambda y}\\right),\\quad \\text{for any}\\quad y>0.\n\\end{align*}\n\\end{prop}\n{\\sc{Proof.}} \nLet $X\\sim\\mathcal{N}\\left(0,I_k\\right)$; using Example \\ref{Square function} we have for any $y>0$\n\\begin{align*}\nf_{\\chi^2(k)}(y)&=\\frac{1}{2\\sqrt y}\\frac{1}{(2\\pi)^{\\frac k 2}}\\int_{|x|=\\sqrt y}\\exp\\left(-\\frac{|x|^2}2\\right)\\,d\\mathcal{H}^{k-1}(x)\n\\\\[1ex]&=\\frac{1}{2\\sqrt y}\\frac{1}{(2\\pi)^{\\frac k 2}}\\exp\\left(-\\frac{y}2\\right)\\mathcal{H}^{k-1}\\left(\\mathbb{S}^{k-1}\\right)y^{\\frac{k-1}2}=\\frac{1}{2^{\\frac k 2}\\Gamma\\left(\\frac k 2\\right)}y^{\\frac k 2-1}\\exp\\left(-\\frac y 2\\right)\n\\end{align*}\nwhich is the first claim. If $X\\sim\\mathcal{N}\\left(\\mu,I_k\\right)$ for some $\\mu\\in\\mathbb{R}^k\\setminus\\{0\\}$, then using Example \\ref{Square function} again and the elementary equality $|x-\\mu|^2=|x|^2+|\\mu|^2-2\\langle x,\\mu\\rangle$ we have for any $y>0$\n\\begin{align*}\nf_{\\chi^2(k)}(y)&=\\frac{1}{2\\sqrt y}\\int_{|x|=\\sqrt y}\\frac{1}{(2\\pi)^{\\frac k 2}}\\exp\\left(-\\frac{|x-\\mu|^2}2\\right)\\,d\\mathcal{H}^{k-1}(x)\\\\[1ex]\n&=\\frac{1}{2\\sqrt y (2\\pi)^{\\frac k 2}}\\exp\\left(-\\frac{y+\\lambda}2\\right)\\int_{|x|=\\sqrt y}\\exp\\langle x,\\mu\\rangle\\,d\\mathcal{H}^{k-1}(x)\n\\\\[1ex]&=\\frac{y^{\\frac {k}2-1}}{2(2\\pi)^{\\frac k 2}}\\exp\\left(-\\frac{y+\\lambda}2\\right)\\int_{|z|=1}\\exp\\langle \\sqrt y z,\\mu\\rangle\\,d\\mathcal{H}^{k-1}(z).\n\\end{align*}\nSince $\\mathcal{H}^{K-1}_{\\vert_{\\mathbb{S}^{k-1}}}$ is rotationally invariant, up to an orthogonal transformation of $\\mathbb{R}^k$ which maps $\\frac{\\mu}{|\\mu|}$ to $e_1=\\left(1,0,\\dots,0\\right)$, we can suppose $\\frac{\\mu}{|\\mu|}=e_1$. 
Using $k$-dimensional spherical coordinates to evaluate the last integral then we have \n\\begin{align*}\n\\int_{|z|=1}\\exp\\langle \\sqrt y z,\\mu\\rangle\\,d\\mathcal{H}^{k-1}(z)&=\\int_{|z|=1}\\exp\\left(\\sqrt{y\\lambda}z_1\\right)\\,d\\mathcal{H}^{k-1}(z)\n\\\\[1ex]&=\\mathcal{H}^{K-2}(\\mathbb{S}^{K-2})\\int_0^\\pi \\exp\\left(\\sqrt{y\\lambda}\\cos\\theta\\right)(\\sin\\theta)^{k-2}\\,d\\theta\n\\\\[1ex]&=(2\\pi)^{\\frac k 2}\\left(\\lambda y\\right)^{-\\frac k 4+\\frac 1 2}I_{\\frac k 2 -1}\\left(\\sqrt{\\lambda y}\\right).\n\\end{align*}\nCombining the latter equalities gives the required last claim.\\qed\n\\bigskip\n\n\n Finally let $X,Y:\\Omega\\to \\mathbb{R}$ be two independent random variables such that $X\\sim\\mathcal{N}\\left(0,1\\right)$ and $Y\\sim\\chi^2(k)$. The probability measure $p_{T}$ induced by the random variable $T=\\frac{X}{\\sqrt {Y\/k}}$ is called a \\emph{(Student's) t-distribution with $k$-degrees of freedom}. In the next Proposition we use Corollary \\ref{teo k>1} and example \\ref{ratio} in order to derive the density function of $p_{T}$.\n\n\n\\begin{prop}[Student's t-Distribution]\nLet $X,Y:\\Omega\\to \\mathbb{R}$ two independent random variables such that $X\\sim\\mathcal{N}\\left(0,1\\right)$ and $Y\\sim\\chi^2(k)$. The t-distribution $p_{T}$ induced by $T=\\frac{X}{\\sqrt {Y\/k}}$ has density function\n\\begin{align*}\nf_T(y)=\\frac{\\Gamma\\left(\\frac{k+1}2\\right)}{\\sqrt{k\\pi}\\Gamma\\left(\\frac{k}2\\right)}\\left(1+\\frac{y^2}{k}\\right)^{-\\frac{k+1}2},\\quad \\text{for any}\\quad t\\in\\mathbb{R}.\n\\end{align*}\n\\end{prop}\n{\\sc{Proof.}}\nUsing Corollary \\ref{teo k>1} with $\\phi:\\mathbb{R}^+\\to\\mathbb{R}^+, \\phi(t)=\\sqrt {t\/k}$, we have\n\\begin{align*}\nf_{\\sqrt {Y\/k}}(y)&=\\frac{2k^{\\frac k 2}}{2^{\\frac k 2}\\Gamma\\left(\\frac k 2\\right)}y^{ k -1}\\exp\\left(-\\frac {ky^2} 2\\right),\\qquad \\forall y>0.\n\\end{align*}\nThen applying example \\ref{ratio} we get for $y\\in \\mathbb{R}$,\n\\begin{align*}\nf_{T}(y)=\\int_{0}^\\infty f_X(ty)f_{\\sqrt {Y\/k}}(t)t\\,dt\n=\\frac{2k^{\\frac k 2}}{2^{\\frac k 2}\\Gamma\\left(\\frac k 2\\right)\\sqrt{2\\pi}}\\int_{0}^\\infty \\exp\\left({-\\frac{t^2(y^2+k)}{2}}\\right)t^k\\,dt,\n\\end{align*} \nwhich with the substitution $s=t^2\\frac{y^2+k}{2}$ becomes\n\\begin{align*}\nf_{T}(y)=\\frac{1}{\\Gamma\\left(\\frac k 2\\right)\\sqrt{k\\pi}}\\left(1+\\frac{y^2}k\\right)^{-\\frac {k+1} 2 }\\int_{0}^\\infty e^{-s}s^{\\frac {k-1}2}\\,ds=\\frac{\\Gamma\\left(\\frac {k+1} 2\\right)}{\\Gamma\\left(\\frac k 2\\right)\\sqrt{k\\pi}}\\left(1+\\frac{y^2}k\\right)^{-\\frac {k+1} 2 }.\n\\end{align*}\n\\qed\n\n\n\\bibliographystyle{apalike}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nTexture-zeros, vanishing elements of fermion mass matrices, \nin quark and lepton mass matrix are successful to\npredict masses and mixing angles. However the origin of zero is not \nclear, which is a motivation of our model. As an approach for this problem \nwe consider the discrete flavor symmetry approach. This approach is studied by \nmany authors{\\cite{discrete symmetry}}. 
Flavor symmetry is expected to \nbe a clue to understand the masses and mixing angles of quarks and leptons because\nit reduces the number of free parameters in Yukawa couplings and some testable \npredictions of masses and mixing angles generally follow(see references in {\\cite{our model}}).\n\nA interesting point of our model is that dynamical realization of Texture-zeros in\nthe discrete flavor symmetry approach. In previous model, in order to derive \nTexture-zeros certain Yukawa couplings are forbidden by the discrete symmetry. \nIn our model however we consider that some of Higgs vacuum expectation values(VEVs) vanish \nby electroweak symmetry breaking(EWSB) in flavor basis, that is, we consider multi-Higgs system and \nTexture-zeros are derived by EWSB dynamically. \n\nIn our model we take $S_3$ symmetry, permutations of three objects, as the discrete symmetry. \nThe reasons why we adopt $S_3$ are that this symmetry is the smallest group of non \ncommutative discrete groups and $S_3$ has three \nirreducible representations, doublet $\\bf 2$, singlet ${\\bf 1_{S}}$, pseudo singlet \n${\\bf 1_{A}}$, so that it is easy to assign the flavor symmetry representations to three generations such as\n${\\bf 2} + {\\bf 1_{S}}$. In addition, we consider all the $S_{3}$ irreducible representations in this model.\n\n\\section{$S_3$ invariant mass matrix on supersymmetry}\n\nIn this section, the $S_3$ invariant mass matrices are presented{\\cite{mass matrix}}. \nWe consider supersymmetric theory and we suppose that two of three generations belong to $S_3$ doublets and the others are singlets. Using the following tensor product of $S_3$ doublet, $\\phi^{c} = \\sigma_{1} \\phi^{*}= \\sigma_{1}(\\phi^{*}_{1},\\phi^{*}_{2})^{T},\\ \\psi = (\\psi_{1},\\psi_{2})^{T}$,\n\\begin{equation} \n\t\\begin{array}{ccccccc}\n\t\t\\phi^{c} \\times \\psi &=& (\\phi_2 \\psi_2,\\phi_1 \\psi_1)^{T} & + & (\\phi_1 \\psi_2 - \\phi_2 \\psi_1) & + & (\\phi_1 \\psi_2 + \\phi_2 \\psi_1), \\\\\n\t\t & & {\\bf 2} & & {\\bf 1_{A}} & & {\\bf 1_{S}} \n\t\\end{array}\n\\end{equation}\nthe $S_{3}$ invariant mass matrices are obtained as\n\\begin{equation}\n M_{D}=\n \\left(\n \\begin{tabular}{cc|c}\n\t $aH_{1}$ & $b H_{S} + c H_{A}$ & $d H_{2}$ \\\\\n\t $b H_{S} - c H_{A}$ & $a H_{2}$ & $d H_{1}$ \\\\\n\t \\hline\n\t $e H_{2}$ & $e H_{1}$ & $ f H_{S}$\n\t \\end{tabular}\n \\right),\\qquad\n\t\tM_{R}=\n \\left(\n\t \\begin{tabular}{cc|c}\n\t\t & $M_{1}$ & \\\\\n\t $M_{1}$ & & \\\\\n\t \\hline\n\t & & $M_{2}$\n\t \\end{tabular}\n\t\\right),\n\\end{equation}\nwhere $a, b, \\cdots,f$ are independent Yukawa coupling constants, $M_1, M_{2}$ are majorana masses. Now we assign ${\\bf 1_{S}}$ to third generation temporarily. We reconfigure this assignment later. \n\n\\section{$S_3$ invariant Higgs scalar potential analysis}\nIn our model we consider the following eight Higgs bosons,\n\\begin{equation}\n\tH_{uS},H_{dS},H_{uA},H_{dA},H_{u1},H_{d1},H_{u2},H_{d2}.\n\\end{equation}\nOur purpose in this section is to discuss whether or not there are some vacuum patterns with no parameter relations in terms of vanishing-VEVs. 
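Before analyzing the potential it is instructive to cross-check the invariant structure of Section 2 explicitly. The short Python sketch below (our own illustration) verifies that $M_{D}$ satisfies $\\rho(g)^{T} M_{D}(H^{g})\\,\\rho(g) = M_{D}(H)$ for both generators $g$ of $S_{3}$, where $(H_{1},H_{2})$ transforms as a doublet, $H_{S}$ as ${\\bf 1_{S}}$, $H_{A}$ as ${\\bf 1_{A}}$, and $\\rho$ is the ${\\bf 2}+{\\bf 1_{S}}$ representation of the matter fields; the explicit matrices assume the complex ($\\omega$) basis for the doublet, the basis in which the tensor product rule of Section 2 is written. With the invariant structure fixed, the remaining question is which of the eight VEVs can vanish at the minimum of the scalar potential.
\\begin{verbatim}
import numpy as np

w = np.exp(2j * np.pi / 3)        # omega: primitive cube root of unity

# 2 + 1_S acting on (doublet_1, doublet_2, singlet) in the omega basis:
# the Z3 generator is diag(w, w^2, 1), the transposition swaps the doublet.
rho = {
    "z3":   np.diag([w, w**2, 1.0]),
    "swap": np.array([[0, 1, 0], [1, 0, 0], [0, 0, 1]], dtype=complex),
}

def M_D(H1, H2, HS, HA, yuk=(0.3, 1.1, 0.4, 0.9, 0.7, 1.3)):
    a, b, c, d, e, f = yuk        # the six independent Yukawa couplings
    return np.array([
        [a * H1,          b * HS + c * HA, d * H2],
        [b * HS - c * HA, a * H2,          d * H1],
        [e * H2,          e * H1,          f * HS],
    ])

def transform_higgs(g, H1, H2, HS, HA):
    # (H1, H2) is an S3 doublet, HS a singlet, HA a pseudo-singlet
    if g == "z3":
        return w * H1, w**2 * H2, HS, HA
    return H2, H1, HS, -HA        # transposition: H_A changes sign

H = (0.8 + 0.2j, -0.5j, 1.7, 0.6 - 1.1j)    # arbitrary test configuration
for g, R in rho.items():
    lhs = R.T @ M_D(*transform_higgs(g, *H)) @ R
    print(g, np.allclose(lhs, M_D(*H)))     # True, True
\\end{verbatim}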
\nTherefore we have to analyze eight equations at vacuum, which correspond to each Higgs field.\n\\begin{table}[t]\n\\begin{minipage}{0.5 \\textwidth}\n\t\\begin{center}\n\\begin{tabular}{|c|c|c|c|c|c|c|c|} \\hline\n$v_{uS}$ & $v_{dS}$ & $v_{uA}$ & $v_{dA}$ & $v_{u1}$ & $v_{u2}$ \n& $v_{d1}$ & $v_{d2}$ \\\\ \\hline \\hline\n $0$ & $0$ & $0$ & $0$ & $0$ & & & $0$ \\\\ \\hline\n $0$ & $0$ & $0$ & $0$ & & $0$ & $0$ & \\\\ \\hline\n $0$ & $0$ & $0$ & $0$ & & & & \\\\ \\hline\n $0$ & $0$ & & & $0$ & $0$ & $0$ & $0$ \\\\ \\hline\n $0$ & $0$ & & & $0$ & & & $0$ \\\\ \\hline\n $0$ & $0$ & & & & $0$ & $0$ & \\\\ \\hline\n $0$ & $0$ & & & & & & \\\\ \\hline\n \\end{tabular}\n \\end{center}\n \\end{minipage}\n \\begin{minipage}{0.5 \\textwidth}\n\\begin{center}\n\\begin{tabular}{|c|c|c|c|c|c|c|c|} \\hline\n$v_{uS}$ & $v_{dS}$ & $v_{uA}$ & $v_{dA}$ & $v_{u1}$ & $v_{u2}$ \n& $v_{d1}$ & $v_{d2}$ \\\\ \\hline \\hline\n & & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ \\\\ \\hline\n & & $0$ & $0$ & $0$ & & & $0$ \\\\ \\hline\n & & $0$ & $0$ & & $0$ & $0$ & \\\\ \\hline\n & & $0$ & $0$ & & & & \\\\ \\hline\n & & & & $0$ & $0$ & $0$ & $0$ \\\\ \\hline\n & & & & $0$ & & & $0$ \\\\ \\hline\n & & & & & $0$ & $0$ & \\\\ \\hline\n\\end{tabular}\n\\end{center}\n\\end{minipage}\n\\caption{All possible minima of the scalar potential for $S_3$ singlet\nand doublet Higgs fields without tuning of Lagrangian parameters for\nelectroweak symmetry breaking. The blank entries denote\nnon-vanishing VEVs.}\n{\\label{summary}}\n\\end{table} \nIn general, these equations are the coupled equations through a common parameter which contains all the Higgs VEVs. However we can separate these equations into three parts for the singlet\n, the pseudo singlet\nand the doublet\nbecause vanishing-VEVs makes the equations trivial within each sector. \nTherefore analyzing the equations\n, possible 14 VEV patterns(Table {\\ref{summary}}) are obtained with no parameter relations in terms of vanishing-VEVs. \n\n\\section{Quark and lepton mass textures}\n\nIn previous section, we got 14 VEV patterns.\nNow let us analyze these patterns phenomenologically. At first we can \nobtain the most interesting pattern of 14 patterns. \nThis pattern is the following.\n\\begin{equation}\n\tv_{uS}=v_{dS}=v_{u1}=v_{d2}=0,\\qquad v_{uA},v_{dA},v_{u2},v_{d1}\\not= 0. {\\label{vev}}\n\\end{equation}\nThis pattern leads to the simplest texture(i.e.the maximal number of zero matrix elements) with non-trivial flavor mixing. Next we consider mass matrices obtained from this VEV pattern. We only took ${\\bf 2} + {\\bf 1_S}$ as the $S_3$ representations of three generation matter fields so that the $S_3$ charge assignments of matter fields has a complexity. For example we can assign ${\\bf 1_{S}}$ to any generation in general. 
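To make the connection between vanishing VEVs and texture zeros explicit, the short sketch below (written in Python with \texttt{sympy}; an illustration only, not part of the analysis) substitutes the up-sector part of the VEV pattern (\ref{vev}), $v_{uS}=v_{u1}=0$, into the $S_3$ invariant Dirac mass matrix $M_{D}$ given earlier, keeping the provisional assignment of ${\bf 1_{S}}$ to the third generation; the zeros therefore appear in permuted positions compared with the final $M_u$ below, which is obtained after also scanning the charge assignments.
\begin{verbatim}
# Texture zeros of M_D for the up sector under the VEV pattern v_uS = v_u1 = 0.
import sympy as sp

a, b, c, d, e, f = sp.symbols('a b c d e f')   # independent Yukawa couplings
H1, H2, HS, HA = sp.symbols('H1 H2 HS HA')     # up-type Higgs VEVs

MD = sp.Matrix([[a*H1,        b*HS + c*HA, d*H2],
                [b*HS - c*HA, a*H2,        d*H1],
                [e*H2,        e*H1,        f*HS]])

Mu = MD.subs({HS: 0, H1: 0})                   # vanishing VEVs from EWSB
zeros = [(i + 1, j + 1) for i in range(3) for j in range(3) if Mu[i, j] == 0]
print(zeros)                                   # -> [(1, 1), (2, 3), (3, 2), (3, 3)]
\end{verbatim}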
As results of exhausting all the $S_3$ charge assignments for the quark sector and assuming $SU(5)$ grand unification{\\cite{su(5)GUT}}{\\cite{georgi jarlskog}} for the lepton sector, mass matrices and predictions of this model are derived as \n \n\\begin{equation}\n\tM_u =\n\t\t\\left(\n\t\t\t\\begin{array}{ccc}\n\t\t\t\t\t& b_u &\t\t\\\\\n\t\t\t\td_u &\t\t& f_u \\\\\n\t\t\t\t & - f_u & i_u\n\t\t\t\\end{array}\n\t\t\\right),\\qquad\n\tM_d =\n\t\t\\left(\n\t\t\t\\begin{array}{ccc}\n\t\t\t\t\t& b_d &\t\t\\\\\n\t\t\t\td_d &\te_d\t& \\\\\n\t\t\t\t-e_d & & i_d\n\t\t\t\\end{array}\n\t\t\\right), \\qquad\n\tM_e =\n\t\t\\left(\n\t\t\t\\begin{array}{ccc}\n\t\t\t\t\t& d_d &\t3e_d\t\\\\\n\t\t\t\tb_d &\t-3 e_d\t& \\\\\n\t\t\t\t & & i_d\n\t\t\t\\end{array}\n\t\t\\right),\n\\end{equation}\n\\begin{equation}\n\tM_{\\nu} =\n\t\t\\left(\n\t\t\t\\begin{array}{ccc}\n\t\t\t\t& b_{\\nu} &\tc_{\\nu}\t\\\\\n\t\t\t\t-b_{\\nu} &\te_{\\nu}\t& \\\\\n\t\t\t\tg_{\\nu} & & \n\t\t\t\\end{array}\n\t\t\\right),\\qquad\n\tM_R =\n\t\t\\left(\n\t\t\t\\begin{array}{ccc}\n\t\t\t\t\t& M_1 &\t\t\\\\\n\t\t\t\tM_1 &\t\t& \\\\\n\t\t\t\t & & M_2\n\t\t\t\\end{array}\n\t\t\\right),\n\\end{equation}\n\\begin{equation}\n\t\\left| V_{cb} \\right| = \\sqrt{\\frac{m_c}{m_t}},\\qquad \\left| V_{e3} \\right| \\ge 0.04{\\cite{PDG}}.\n\\end{equation}\nwhere blank entries denote zero and each parameter in $M_{u},M_{d},M_{e},M_{\\nu}$ such as $d_{u},d_{d},b_{\\nu}$ denote a product of a Yukawa coupling and a VEV, for example $d_{u} = d v_{u2}$.\n\\section{Higgs mass spectrum and $S_3$ soft breaking in B-term}\n\nThe $S_3$ potential has an enhanced global symmetry $SU(2) \\times U(1)^2$ and leads to massless Nambu-Goldstone bosons \nin the electroweak broken phases. It is therefore reasonable to softly break the flavor symmetry within the scalar potential.\nWe introduce the following supersymmetry-breaking soft terms which do not break phenomenological characters of the exact $S_3$ model.\n\\begin{equation}\n\tV_{\\not S_3} = b_{SD}H_{uS}H_{d2}+ b'_{SD}H_{u1}H_{dS}+ b_{AD}H_{uA}H_{d1}+ b'_{AD}H_{u2}H_{dA} + {\\rm h.c.}\n\\end{equation}\nThese soft terms have not only the same phenomenological characters as exact $S_3$ model but also a character which we can take the same VEV pattern as ({\\ref{vev}}) in previous section with no parameter relations.\n\n\\section{Tree level FCNC}\n\nSince there are multiple electroweak doublet Higgs bosons which couple to \nmatter fields, flavor-changing processes are mediated at classical \nlevel by these Higgs fields.\tWe can show that all but one have masses of the order of supersymmetry breaking parameters. Therefore the experimental observations of FCNC rare events would lead to a bound on the supersymmetry breaking scale.\nAmong various experimental constraints, we find the most important constraint comes from the neutral K meson mixing. 
For the heavy mass eigenstates, the tree-level $K_{L}-K_{S}$ mass difference $\\Delta m^{\\rm tree}_K$ is given by the matrix element of the effective Hamiltonian between K mesons{\\cite{FCNC}}.\n$\\Delta m^{\\rm tree}_K$ contains $M_{H}$, which is an average of the Higgs masses, $1\/M^{2}_{H} = \\frac{1}{4}\\left( 1\/M^{2}_{H^{0}_{1}} + 1\/ M^{2}_{H^{0}_{2}} + 1\/M^{2}_{H^{0}_{3}}+1\/M^{2}_{H^{0}_{4}} \\right)$, and a free parameter $\\eta$, which contains the down-type quark Yukawa couplings.\nIt is found that the heavy Higgs masses are bounded from below so as to suppress the extra Higgs contribution relative to the standard model one; the bound is roughly given by\n\\begin{equation}\n M_{H} \\ge\n\t \\left\\{\n\t\t\t \\begin{array}{cl}\n\t\t\t\t\t\t3.8 {\\rm TeV} &(\\eta = 0) \\\\\n\t\t\t\t\t\t1.4 {\\rm TeV} &(\\eta = 0.03) \\\\\n\t\t\t\t\t\\end{array}\n\t\t\t\t\\right.\n,\n\\end{equation}\nwhere we take $\\eta = 0,\\ 0.03$ as typical values.\n\n\\section{Summary}\n\nIn our model we have discussed the structure of the Higgs potential and the fermion mass \nmatrices in supersymmetric models with $S_3$ flavor symmetry and examined possible \nvanishing elements of the quark and lepton mass matrices. By exhausting the \npatterns of flavor symmetry charges of the matter fields, we obtain predictions such as \nthe lepton mixing $V_{e3}$ lying within a range that will be tested in near-future \nexperiments. We have also discussed the physical mass spectrum of the \nHiggs bosons and the tree-level FCNC processes mediated by the heavy Higgs fields. \nFrom the tree-level FCNC analysis, it is found that the heavy Higgs masses, \nwhich are of the order of the soft supersymmetry breaking scale, are a few TeV.\n\t\n\n\\vspace*{12pt}\n\\noindent\n{\\bf Acknowledgement}\n\\vspace*{6pt}\n\n\\noindent\nThe Summer Institute 2006 is sponsored by the Asia \nPacific Center for Theoretical Physics and the BK 21 \nprogram of the Department of Physics, KAIST.\nWe would like to thank the organizers of Summer Institute 2006 and thank J. Kubo, T. Kobayashi and H. 
Nakano for helpful discussions.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Stability of Dual Solution}\n\\label{appn:dual}\nWe first note that $D(\\alpha)$ is the dual of a convex function and has a unique minimizer $\\alpha^*$.\nThe function ${V(\\alpha) = D(\\alpha) - D(\\alpha^*)}$, hence, is a non-negative function and equals zero only at $\\alpha = \\alpha^*$.\nDifferentiating $V(\\alpha)$ with respect to time we get\n\\[\\dot{V}(\\alpha) = \\frac{\\partial V}{\\partial \\alpha}\\dot{\\alpha} = -\\gamma \\Big(\\sum_{i}{h_i} - B \\Big)^2 < 0.\\]\n\nTherefore, $V(\\cdot)$ is a Lyapunov function, and the system state will converge to optimum starting from any initial condition.\n\n\\section{Stability of Primal Solution}\n\\label{appn:primal}\nWe first note that since $W(\\mathbf{h})$ is a strictly concave function, it has a unique maximizer $\\mathbf{h}^*$.\nMoreover ${V(\\mathbf{h}) = W(\\mathbf{h}^*) - W(\\mathbf{h})}$ is a non-negative function and equals zero only at $\\mathbf{h} = \\mathbf{h}^*$.\nDifferentiating $V(\\cdot)$ with respect to time we obtain\n\\begin{align*}\n\\dot{V}(\\mathbf{h}) &= \\sum_{i}{\\frac{\\partial V}{\\partial h_i}\\dot{h_i}} \\\\\n&= -\\sum_{i}{\\left( U'_i(h_i) - C'(\\sum_{i}{h_i} - B) \\right) \\dot{h_i}}.\n\\end{align*}\n\nFor $\\dot{h_i}$ we have\n\\[\\dot{h_i} = \\frac{\\partial h_i}{\\partial t_i}\\dot{t_i}.\\]\nFor non-reset and reset TTL caches we have\n\\[\\frac{\\partial h_i}{\\partial t_i} = \\frac{\\lambda_i}{(1+\\lambda_i t_i)^2} \\qquad\\text{ and }\\qquad \\frac{\\partial h_i}{\\partial t_i} = \\lambda_i e^{-\\lambda_i t_i},\\]\nrespectively, and hence $\\partial h_i \/ \\partial t_i > 0$.\n\nFrom the controller for $t_i$ we have\n\\[t_i = k_i \\left( U'_i(h_i) - C'(\\sum_{i}{h_i} - B) \\right).\\]\n\nHence, we get\n\\[\\dot{V}(\\mathbf{h}) = -\\sum_{i}{k_i \\frac{\\partial h_i}{\\partial t_i} \\left( U'_i(h_i) - C'(\\sum_{i}{h_i} - B) \\right)^2} < 0.\\]\n\nTherefore, $V(\\cdot)$ is a Lyapunov function\\footnote{A description of Lyapunov functions and their applications can be found in~\\cite{srikant13}.}, and the system state will converge to $\\mathbf{h}^*$ starting from any initial condition.\n\n\\section{Stability of Primal-Dual Solution}\n\\label{appn:primal_dual}\nAs discussed in Section~\\ref{sec:opt}, the Lagrangian function for the optimization problem~\\eqref{eq:opt} is expressed as\n\\[\\mathcal{L}(\\mathbf{h}, \\alpha) = \\sum_{i}{U_i(h_i)} - \\alpha(\\sum_{i}{h_i} - B).\\]\nNote that $\\mathcal{L}(\\mathbf{h}, \\alpha)$ is concave in $\\mathbf{h}$ and convex in $\\alpha$, and hence first order condition for optimality of $\\mathbf{h}^*$ and $\\alpha^*$ implies\n\\begin{align*}\n\\mathcal{L}(\\mathbf{h}^*, \\alpha) &\\le \\mathcal{L}(\\mathbf{h}, \\alpha) + \\sum_{i}{\\frac{\\partial \\mathcal{L}}{\\partial h_i}(h_i^* - h_i)}, \\\\\n\\mathcal{L}(\\mathbf{h}, \\alpha^*) &\\ge \\mathcal{L}(\\mathbf{h}, \\alpha) + \\frac{\\partial \\mathcal{L}}{\\partial \\alpha}(\\alpha^* - \\alpha) .\n\\end{align*}\n\nAssume that the hit probability of a file can be expressed by $f(\\cdot)$ as a function of the corresponding timer value $t_i$, \\emph{i.e.}\\ ${h_i = f(t_i)}$. The temporal derivative of the hit probability can therefore be expressed as\n\\[\\dot{h_i} = f'(t_i) \\dot{t_i},\\]\nor equivalently\n\\[\\dot{h_i} = f'(f^{-1}(h_i)) \\dot{t_i},\\]\nwhere $f^{-1}(\\cdot)$ denotes the inverse of function $f(\\cdot)$. For notation brevity we define ${g(h_i) = f'(f^{-1}(h_i))}$. 
Note that as discussed in Appendix~\\ref{appn:primal}, $f(\\cdot)$ is an increasing function, and hence ${g(h_i)\\ge 0}$.\n\nIn the remaining, we show that $V(\\mathbf{h}, \\alpha)$ defined below is a Lyapunov function for the primal-dual algorithm:\n\\[V(\\mathbf{h}, \\alpha) = \\sum_{i}{\\int_{h_i^*}^{h_i}{\\frac{x - h_i^*}{k_i g(x)}\\mathrm{d} x}} + \\frac{1}{2\\gamma}(\\alpha - \\alpha^*)^2.\\]\nDifferentiating the above function with respect to time we obtain\n\\[\\dot{V}(\\mathbf{h}, \\alpha) = \\sum_{i}{\\frac{h_i - h_i^*}{k_i g(h_i)}\\dot{h_i}} + \\frac{\\alpha - \\alpha^*}{\\gamma}\\dot{\\alpha}.\\]\nBased on the controllers defined for $t_i$ and $\\alpha$ we have\n\\[\\dot{h_i} = g(h_i) \\dot{t_i} = k_i g(h_i) \\frac{\\partial \\mathcal{L}}{\\partial h_i},\\]\nand\n\\[\\dot{\\alpha} = -\\gamma\\frac{\\partial \\mathcal{L}}{\\partial \\alpha}.\\]\nReplacing for $\\dot{h_i}$ and $\\dot{\\alpha}$ in $\\dot{V}(\\mathbf{h}, \\alpha)$, we obtain\n\\begin{align*}\n\\dot{V}(\\mathbf{h}, \\alpha) &= \\sum_{i}{(h_i - h_i^*)\\frac{\\partial \\mathcal{L}}{\\partial h_i}} - (\\alpha - \\alpha^*)\\frac{\\partial \\mathcal{L}}{\\partial \\alpha} \\\\\n&\\le \\mathcal{L}(\\mathbf{h}, \\alpha) - \\mathcal{L}(\\mathbf{h}^*, \\alpha) + \\mathcal{L}(\\mathbf{h}, \\alpha^*) - \\mathcal{L}(\\mathbf{h}, \\alpha) \\\\\n&= \\Big(\\mathcal{L}(\\mathbf{h}^*, \\alpha^*) - \\mathcal{L}(\\mathbf{h}^*, \\alpha)\\Big) + \\Big(\\mathcal{L}(\\mathbf{h}, \\alpha^*) - \\mathcal{L}(\\mathbf{h}^*, \\alpha^*)\\Big) \\\\\n&\\le 0,\n\\end{align*}\nwhere the last inequality follows from \n\\[\\mathcal{L}(\\mathbf{h}, \\alpha^*) \\le \\mathcal{L}(\\mathbf{h}^*, \\alpha^*) \\le \\mathcal{L}(\\mathbf{h}^*, \\alpha),\\]\nfor any $\\mathbf{h}$ and $\\alpha$.\n\nMoreover, $V(\\mathbf{h}, \\alpha)$ is non-negative and equals zero only at $(\\mathbf{h}^*, \\alpha^*)$.\nTherefore, $V(\\mathbf{h}, \\alpha)$ is a Lyapunov function, and the system state will converge to optimum starting from any initial condition.\n\n\\end{appendices}\n\n\\section{Conclusion}\n\\label{sec:conclusion}\nIn this paper, we proposed the concept of utility-driven caching, and formulated it as an optimization problem with rigid and elastic cache storage size constraints. Utility-driven caching provides a general framework for defining caching policies with considerations of fairness among various groups of files, and implications on market economy for (cache) service providers and content publishers. This framework has the capability to model existing caching policies such as FIFO and LRU, as utility-driven caching policies.\n\nWe developed three decentralized algorithms that implement utility-driven caching policies in an online fashion and that can adapt to changes in file request rates over time. We prove that these algorithms are globally stable and converge to the optimal solution. Through simulations we illustrated the efficiency of these algorithms and the flexibility of our approach.\n\n\\section{Discussion}\n\\label{sec:discussion}\nIn this section, we explore the implications of utility-driven caching on monetizing the caching service and discuss some future research directions.\n\n\\subsection{Decomposition}\nThe formulation of the problem in Section~\\ref{sec:opt} assumes that the utility functions $U_i(\\cdot)$ are known to the system. In reality content providers might decide not to share their utility functions with the service provider. 
To handle this case, we decompose the optimization problem~\\eqref{eq:opt} into two simpler problems.\n\nSuppose that the cache storage is offered as a service and the service provider charges content providers at a constant rate $r$ for storage space. Hence, a content provider needs to pay an amount of $w_i = rh_i$ to receive hit probability $h_i$ for file $i$. The utility maximization problem for the content provider of file $i$ can then be written as\n\\begin{align}\n\\label{eq:opt_user}\n\\text{maximize} \\quad &U_i(w_i\/r) - w_i \\\\\n\\text{such that} \\quad &w_i \\ge 0 \\notag\n\\end{align}\n\nNow, assuming that the service provider knows the vector $\\mathbf{w}$, for a proportionally fair resource allocation, the hit probabilities should be\nset according to\n\\begin{align}\n\\label{eq:opt_network}\n\\text{maximize} \\quad &\\sum_{i=1}^{N}{w_i\\log{(h_i)}} \\\\\n\\text{such that} \\quad &\\sum_{i=1}^{N}{h_i} = B \\notag\n\\end{align}\n\nIt was shown in~\\cite{kelly97} that there always exist vectors $\\mathbf{w}$ and $\\mathbf{h}$, such that $\\mathbf{w}$ solves~\\eqref{eq:opt_user} and $\\mathbf{h}$ solves~\\eqref{eq:opt_network}; further, the vector $\\mathbf{h}$ is the unique solution to~\\eqref{eq:opt}.\n\n\\subsection{Cost and Utility Functions}\nIn Section~\\ref{sec:soft}, we defined a penalty function denoting the cost of using additional storage space. One might also define cost functions based on the consumed network bandwidth. This is especially interesting in modeling in-network caches with network links that are likely to be congested.\n\nOptimization problem~\\eqref{eq:opt} uses utility functions defined as functions of the hit probabilities. It is reasonable to define utility as a function of the hit \\emph{rate}. Whether this makes any changes to the problem, \\emph{e.g.}\\ in the notion of fairness, is a question that requires further investigation. One argument in support of utilities as functions of hit rates is that a service provider might prefer pricing based on request rate rather than cache occupancy. Moreover, in designing hierarchical caches a service provider's objective could be to minimize the internal bandwidth cost. This can be achieved by defining the utility functions as $U_i = -C_i(m_i)$ where $C_i(m_i)$ denotes the cost associated with miss rate $m_i$ for file $i$.\n\n\\subsection{Online Algorithms}\nIn Section~\\ref{sec:online}, we developed three online algorithms that can be used to implement utility-driven caching. Although these algorithms are proven to be stable and converge to the optimal solution, they have distinct features that can make one algorithm more effective in implementing a policy. For example, implementing the max-min fair policy based on the dual solution requires knowing\/estimating the file request rates, while it can be implemented using the modified primal-dual solution without such knowledge. Moreover, the convergence rate of these algorithms may differ for different policies. The choice of non-reset or reset TTL caches also has implications on the design and performance of these algorithms.\nThese are subjects that require further study.\n\n\\section{Online Algorithms}\n\\label{sec:online}\nIn Section~\\ref{sec:opt}, we formulated utility-driven caching as a convex optimization problem either with a fixed or an elastic cache size. However, it is not feasible to solve the optimization problem offline and then\nimplement the optimal strategy. Moreover, the system parameters can change over time. 
Therefore, we need algorithms\nthat can be used to implement the optimal strategy and adapt to changes in the system by collecting limited information.\nIn this section, we develop such algorithms.\n\n\\subsection{Dual Solution}\n\\label{sec:dual}\nThe utility-driven caching formulated in~\\eqref{eq:opt} is a convex optimization problem, and hence the optimal solution corresponds to solving the dual problem.\nThe Lagrange dual of the above problem is obtained by incorporating the constraints into the maximization by means of Lagrange multipliers\n\\begin{align*}\n\\text{minimize} \\quad &D(\\alpha, \\boldsymbol{\\nu}, \\boldsymbol{\\eta}) = \\max_{h_i}\\Bigg\\{ \\sum_{i=1}^{N}{U_i(h_i)} \\\\\n&\\qquad\\qquad\\qquad -\\alpha\\left[ \\sum_{i=1}^{N}{h_i} - B\\right] \\\\\n&\\qquad\\qquad\\qquad -\\sum_{i=1}^{N}{\\nu_i (h_i - 1)} + \\sum_{i=1}^{N}{\\eta_i h_i} \\Bigg\\}\\\\\n\\text{such that} \\quad &\\alpha \\ge 0, \\quad \\boldsymbol{\\nu}, \\boldsymbol{\\eta} \\ge \\mathbf{0}.\n\\end{align*}\nUsing timer based caching techniques for controlling the hit probabilities with $0 < t_i < \\infty$ ensures that $0 < h_i < 1$, and hence we have $\\nu_i = 0$ and $\\eta_i = 0$. \n\nHere, we consider an algorithm based on the dual solution to the utility maximization problem~\\eqref{eq:opt}. Recall that we can write the Lagrange dual of the utility maximization problem as\n\\[D(\\alpha) = \\max_{h_i}{\\left\\{ \\sum_{i=1}^{N}{U_i(h_i)}-\\alpha\\left[ \\sum_{i=1}^{N}{h_i} - B\\right] \\right\\}},\\]\nand the dual problem can be written as\n\\[\\min_{\\alpha \\ge 0}{D(\\alpha)}.\\]\n\nA natural decentralized approach to consider for minimizing $D(\\alpha)$ is to gradually move the decision variables towards the optimal point using the gradient descent algorithm.\nThe gradient can be easily computed as\n\\[\\frac{\\partial D(\\alpha)}{\\partial\\alpha} = -\\Big(\\sum_{i}{h_i} - B \\Big),\\]\nand since we are doing a gradient \\emph{descent}, $\\alpha$ should be updated according to the \\emph{negative} of the gradient as\n\\[\\alpha \\gets \\max{\\Big\\{0, \\alpha + \\gamma \\Big( \\sum_{i}{h_i} - B \\Big)\\Big\\}},\\]\nwhere $\\gamma > 0$ controls the step size at each iteration. Note that the KKT conditions require that $\\alpha \\ge 0$.\n\nBased on the discussion in Section~\\ref{sec:opt}, to satisfy the optimality condition we must have\n\\[U'_i(h_i) = \\alpha,\\]\nor equivalently\n\\[h_i = {U'_i}^{-1}(\\alpha).\\]\nThe hit probabilities are then controlled based on the timer parameters $t_i$ which can be set according to~\\eqref{eq:non_reset_t} and~\\eqref{eq:reset_t} for non-reset and reset TTL caches.\n\nConsidering the hit probabilities as indicators of files residing in the cache, the expression $\\sum_{i}{h_i}$ can be interpreted as the number of items currently in the cache, denoted here as $B_{curr}$. We can thus summarize the control algorithm for a reset TTL cache as\n\\begin{align}\n\\label{eq:dual_sol}\nt_i &= -\\frac{1}{\\lambda_i} \\log{\\Big(1 - {U'_i}^{-1}(\\alpha) \\Big)}, \\notag\\\\\n\\alpha &\\gets \\max{\\{0, \\alpha + \\gamma ( B_{curr} - B )\\}}.\n\\end{align}\nWe obtain an algorithm for a non-reset TTL cache by using the corresponding expression for $t_i$ in~\\eqref{eq:non_reset_t}.\n\nLet $\\alpha^*$ denote the optimal value for $\\alpha$. 
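To make the update rule in~\eqref{eq:dual_sol} concrete, the following sketch (Python) iterates the dual update for a reset TTL cache; the log utilities $U_i(h_i)=\lambda_i\log h_i$ and the Zipf request rates are illustrative assumptions, and the random occupancy $B_{curr}$ is replaced by its expectation $\sum_i h_i$ for clarity.
\begin{verbatim}
# Online dual update (eq:dual_sol) with U_i(h) = lambda_i * log(h),
# so that h_i = U_i'^{-1}(alpha) = min(lambda_i / alpha, 1).
import numpy as np

N, B, gamma = 10_000, 1_000, 1e-5
lam = 1.0 / np.arange(1, N + 1) ** 0.8       # Zipf(0.8) request rates (assumption)

alpha = 1.0
for _ in range(20_000):
    h = np.clip(lam / alpha, 0.0, 1.0)       # h_i = U_i'^{-1}(alpha), capped at 1
    alpha = max(1e-9, alpha + gamma * (h.sum() - B))   # guarded away from 0

t = -np.log(np.maximum(1.0 - h, 1e-12)) / lam   # reset-TTL timers realizing h
print(alpha, h.sum())                           # sum of hit probabilities ~ B
\end{verbatim}
The step size $\gamma$ has to be small enough for the iteration to settle; in an actual cache the same update would be driven by the measured occupancy $B_{curr}$ rather than by $\sum_i h_i$.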
We show in Appendix~\\ref{appn:dual} that $D(\\alpha) - D(\\alpha^*)$ is a Lyapunov function and the above algorithm converges to the optimal solution.\n\n\\subsection{Primal Solution}\nWe now consider an algorithm based on the optimization problem in~\\eqref{eq:opt_soft} known as the \\emph{primal} formulation.\n\nLet $W(\\mathbf{h})$ denote the objective function in~\\eqref{eq:opt_soft} defined as\n\\[W(\\mathbf{h}) = \\sum_{i=1}^{N}{U_i(h_i)} - C(\\sum_{i=1}^{N}{h_i} - B).\\]\nA natural approach to obtain the maximum value for $W(\\mathbf{h})$ is to use the gradient ascent algorithm.\nThe basic idea behind the gradient ascent algorithm is to move the variables $h_i$ in the direction of the gradient\n\\[\\frac{\\partial W(\\mathbf{h})}{\\partial h_i} = U'_i(h_i) - C'(\\sum_{i=1}^{N}{h_i} - B).\\]\nSince the hit probabilities are controlled by the TTL timers, we move $h_i$ towards the optimal point by updating timers $t_i$.\nLet $\\dot{h_i}$ denote the derivative of the hit probability $h_i$ with respect to time. Similarly, define $\\dot{t_i}$ as the derivative of the timer parameter $t_i$\nwith respect to time. We have\n\\[\\dot{h_i} = \\frac{\\partial h_i}{\\partial t_i}\\dot{t_i}.\\]\nFrom~\\eqref{eq:hit_non_reset} and~\\eqref{eq:hit_reset}, it is easy to confirm that $\\partial h_i\/\\partial t_i > 0$ for non-reset and reset TTL caches.\nTherefore, moving $t_i$ in the direction of the gradient, also moves $h_i$s in that direction.\n\nBy gradient ascent, the timer parameters should be updated according to\n\\[t_i \\gets \\max{\\left\\{0, t_i + k_i\\Big[ U'_i(h_i) - C'(B_{curr} - B) \\Big]\\right\\}},\\]\nwhere $k_i > 0$ is the step-size parameter, and $\\sum_{i=1}^{N}{h_i}$ has been replaced with $B_{curr}$ based on the same argument as in the dual solution.\n\nLet $\\mathbf{h}^*$ denote the optimal solution to~\\eqref{eq:opt_soft}. 
We show in Appendix~\\ref{appn:primal} that $W(\\mathbf{h}^*) - W(\\mathbf{h})$ is a Lyapunov function, and the above algorithm converges to the optimal solution.\n\n\\subsection{Primal-Dual Solution}\nHere, we consider a third algorithm that combines elements of the previous two algorithms.\nConsider the control algorithm\n\\begin{align*}\nt_i &\\gets \\max{\\{0, t_i + k_i [ U'_i(h_i) - \\alpha]\\}}, \\\\\n\\alpha &\\gets \\max{\\{0, \\alpha + \\gamma (B_{current} - B)\\}}.\n\\end{align*}\nUsing Lyapunov techniques we show in Appendix~\\ref{appn:primal_dual} that the above algorithm converges to the optimal solution.\n\nNow, rather than updating the timer parameters according to the above rule explicitly based on the utility function, we can have update rules based on a cache hit or miss.\nConsider the following differential equation\n\\begin{equation}\n\\label{eq:t}\n\\dot{t_i} = \\delta_m(t_i, \\alpha)(1 - h_i)\\lambda_i - \\delta_h(t_i, \\alpha)h_i\\lambda_i,\n\\end{equation}\nwhere $\\delta_m(t_i, \\alpha)$ and $-\\delta_h(t_i, \\alpha)$ denote the change in $t_i$ upon a cache miss or hit for file $i$, respectively.\nMore specifically, the timer for file $i$ is increased by $\\delta_m(t_i, \\alpha)$ upon a cache miss, and decreased by $\\delta_h(t_i, \\alpha)$ on a cache hit.\n\nThe equilibrium for~\\eqref{eq:t} happens when $\\dot{t_i} = 0$, which solving for $h_i$ yields\n\\[h_i = \\frac{\\delta_m(t_i, \\alpha)}{\\delta_m(t_i, \\alpha) + \\delta_h(t_i, \\alpha)}.\\]\nComparing the above expression with $h_i = {U'_i}^{-1}(\\alpha)$ suggests that\n$\\delta_m(t_i, \\alpha)$ and $\\delta_h(t_i, \\alpha)$ can be set to achieve desired hit probabilities and caching policies.\n\nMoreover, the differential equation~\\eqref{eq:t} can be reorganized as\n\\[\\dot{t_i} = h_i \\lambda_i \\Big(\\delta_m(t_i, \\alpha)\/h_i - [\\delta_m(t_i, \\alpha) + \\delta_h(t_i, \\alpha)]\\Big),\\]\nand to move $t_i$ in the direction of the gradient $U'_i(h_i) - \\alpha$ a natural choice for the update functions can be\n\\[\\delta_m(t_i, \\alpha) = h_i U'_i(h_i), \\text{ and } \\delta_m(t_i, \\alpha) + \\delta_h(t_i, \\alpha) = \\alpha.\\]\n\nTo implement proportional fairness for example, these functions can be set as\n\\begin{equation}\n\\label{eq:prop_pd}\n\\delta_m(t_i, \\alpha) = \\lambda_i, \\text{ and } \\delta_h(t_i, \\alpha) = \\alpha - \\lambda_i.\n\\end{equation}\n\nFor the case of max-min fairness, recall from the discussion in Section~\\ref{sec:opt_identical} that a utility function that is content agnostic, \\emph{i.e.}\\ $U_i(h) = U(h)$, results in a max-min fair resource allocation. Without loss of generality we can have $U_i(h_i) = \\log{h_i}$. Thus, max-min fairness can be implemented by having\n\\[\\delta_m(t_i, \\alpha) = 1, \\text{ and } \\delta_h(t_i, \\alpha) = \\alpha - 1.\\]\nNote that with these functions, max-min fairness can be implemented without requiring knowledge about request arrival rates~$\\lambda_i$, while the previous approaches require such knowledge.\n\n\n\\subsection{Estimation of $\\lambda_i$}\n\\label{sec:estimate}\nComputing the timer parameter $t_i$ in the algorithms discussed in this section requires knowing the request arrival rates for most of the policies.\nEstimation techniques can be used to approximate the request rates in case such knowledge is not available at the (cache) service provider.\n\nLet $r_i$ denote the remaining TTL time for file $i$. 
Note that $r_i$ can be computed based on $t_i$ and a time-stamp for the last time file~$i$ was requested.\nLet $X_i$ denote the random variable corresponding to the inter-arrival times for the requests for file~$i$, and $\\bar{X_i}$ be its mean.\nWe can approximate the mean inter-arrival time as $\\hat{\\bar{X_i}} = t_i - r_i$. Note that $\\hat{\\bar{X_i}}$ defined in this way is a one-sample\nunbiased estimator of $\\bar{X_i}$. Therefore, $\\hat{\\bar{X_i}}$ is an unbiased estimator of $1\/\\lambda_i$. In the simulation section, we will use this estimator in computing the timer parameters for evaluating our algorithms.\n\n\n\\section{Utility Functions and Fairness}\n\\label{sec:fairness}\nUsing different utility functions in the optimization formulation~\\eqref{eq:opt} yields different timer values for the files.\nIn this sense, each utility function defines a notion of fairness in allocating storage resources to different files.\nIn this section, we study a number of utility functions that have important fairness properties associated with them.\n\\subsection{Identical Utilities}\n\\label{sec:opt_identical}\nAssume that all files have the same utility function, \\emph{i.e.}\\ $U_i(h_i) = U(h_i)$ for all $i$. Then, from~\\eqref{eq:c} we obtain\n\\[\\sum_{i=1}^{N}{{U'}^{-1}(\\alpha)} = N {U'}^{-1}(\\alpha) = B,\\]\nand hence\n\\[{U'}^{-1}(\\alpha) = B\/N.\\]\nUsing~\\eqref{eq:hu} for the hit probabilities we get\n\\[h_i = B\/N, \\quad \\forall{i}.\\]\n\nUsing a non-reset TTL policy, the timers should be set according to\n\\[t_i = \\frac{B}{\\lambda_i (N - B)},\\]\nwhile with a reset TTL policy, they must equal\n\\[t_i = -\\frac{1}{\\lambda_i}\\log{\\left(1-\\frac{B}{N}\\right)}.\\]\n\nThe above calculations show that identical utility functions yield identical hit probabilities for all files. Note that the hit probabilities computed above do not depend on the utility function.\n\n\\subsection{$\\boldsymbol{\\beta}$-Fair Utility Functions}\nHere, we consider the family of $\\beta$-fair (also known as \\emph{isoelastic}) utility functions given by\n\\[U_i(h_i) = \\left\\{ \\begin{array}{ll}\n w_i\\frac{h_i^{1-\\beta}}{1-\\beta} & \\beta \\ge 0, \\beta \\neq 1; \\\\\n & \\\\\n w_i \\log{h_i} & \\beta = 1,\n \\end{array} \\right. 
\\]\nwhere the coefficient $w_i \\ge 0$ denotes the weight for file $i$.\nThis family of utility functions unifies different notions of fairness in resource allocation~\\cite{srikant13}.\nIn the remainder of this section, we investigate some of the choices for $\\beta$ that lead to interesting special cases.\n\n\\subsubsection{$\\boldsymbol{\\beta = 0}$}\\hspace*{\\fill} \\\\\nWith $\\beta = 0$, we get $U_i(h_i) = w_i h_i$, and maximizing the sum of the utilities\ncorresponds to\n\\[\\max_{h_i}{\\sum_{i}{w_i h_i}}.\\]\n\nThe above utility function defined does not satisfy the requirements for a utility function mentioned in Section~\\ref{sec:model}, as it is not strictly concave.\nHowever, it is easy to see that the sum of the utilities is maximized when\n\\[h_i = 1, i=1,\\ldots, B \\quad \\text{ and } \\quad h_i = 0, i=B+1,\\ldots, N,\\]\nwhere we assume that weights are sorted as ${w_1 \\ge \\ldots \\ge w_N}$.\nThese hit probabilities indicate that the optimal timer parameters are\n\\[t_i = \\infty, i=1,\\ldots, B \\quad \\text{ and } \\quad t_i = 0, i=B+1,\\ldots, N.\\]\n\nNote that the policy obtained by implementing this utility function with $w_i = \\lambda_i$ corresponds to the Least-Frequently Used (LFU) caching policy,\nand maximizes the overall throughput.\n\n\\subsubsection{$\\boldsymbol{\\beta = 1}$}\\hspace*{\\fill} \\\\\nLetting $\\beta = 1$, we get $U_i(h_i) = w_i \\log{h_i}$,\nand hence maximizing the sum of the utilities corresponds to\n\\[\\max_{h_i}{\\sum_{i}{w_i \\log{h_i}}}.\\]\n\nIt is easy to see that ${U'_i}^{-1}(\\alpha) = w_i \/ \\alpha$, and hence using~\\eqref{eq:c} we obtain\n\\[\\sum_{i}{{U'_i}^{-1}(\\alpha)} = \\sum_{i}{w_i} \/ \\alpha = B,\\]\nwhich yields\n\\[\\alpha = \\sum_{i}{w_i} \/ B.\\]\nThe hit probability of file $i$ then equals\n\\[h_i = {U'_i}^{-1}(\\alpha) = \\frac{w_i}{\\sum_{j}{w_j}}B.\\]\n\nThis utility function implements a \\emph{proportionally fair} policy~\\cite{kelly98}.\nWith $w_i = \\lambda_i$, the hit probability of file $i$ is proportional to the request arrival rate $\\lambda_i$.\n\n\\subsubsection{$\\boldsymbol{\\beta = 2}$}\\hspace*{\\fill} \\\\\nWith $\\beta = 2$, we get $U_i(h_i) = -w_i\/h_i$, and maximizing the total utility corresponds to\n\\[\\max_{h_i}{\\sum_{i}{\\frac{-w_i}{h_i}}}.\\]\n\nIn this case, we get ${U'_i}^{-1}(\\alpha) = \\sqrt{w_i} \/ \\sqrt{\\alpha}$, therefore\n\\[\\sum_{i}{{U'_i}^{-1}(\\alpha)} = \\sum_{i}{\\sqrt{w_i}} \/ \\sqrt{\\alpha} = B,\\]\nand hence\n\\[\\alpha = \\Big(\\sum_{i}{\\sqrt{w_i}}\\Big)^2 \/ B^2.\\]\n\nThe hit probability of file $i$ then equals\n\\[h_i = {U'_i}^{-1}(\\alpha) = \\frac{\\sqrt{w_i}}{\\sqrt{\\alpha}} = \\frac{\\sqrt{w_i}}{\\sum_{j}{\\sqrt{w_j}}}B.\\]\n\nThe utility function defined above is known to yield minimum potential delay fairness. It was shown in~\\cite{kelly98} that the TCP congestion control protocol\nimplements such a utility function.\n\n\\subsubsection{$\\boldsymbol{\\beta \\rightarrow\\infty}$}\\hspace*{\\fill} \\\\\nWith $\\beta \\rightarrow\\infty$, maximizing the sum of the utilities corresponds to (see~\\cite{mo00} for proof)\n\\[\\max_{h_i}{\\min_{i}{h_i}}.\\]\n\nThis utility function does not comply with the rules mentioned in Section~\\ref{sec:model} for utility functions, as it is not strictly concave.\nHowever, it is easy to see that the above utility function yields\n\\[h_i = B\/N, \\quad \\forall{i}.\\]\n\nThe utility function defined here maximizes the minimum hit probability, and corresponds to the \\emph{max-min fairness}. 
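As a quick numerical illustration of how $\beta$ shifts storage across files, the sketch below (Python) evaluates the closed-form hit probabilities derived in this section for $\beta=1$, $\beta=2$ and $\beta\rightarrow\infty$; the weights $w_i=\lambda_i$ follow a mild Zipf law so that the caps $h_i\le 1$ do not bind (for more skewed weights the caps must be enforced as well, with the most popular files simply pinned in the cache, as in the $\beta=0$ case).
\begin{verbatim}
# Closed-form beta-fair allocations with w_i = lambda_i ~ Zipf(0.2) (assumption).
import numpy as np

N, B = 10_000, 1_000
w = 1.0 / np.arange(1, N + 1) ** 0.2

h1   = B * w / w.sum()                      # beta = 1: proportional fairness
h2   = B * np.sqrt(w) / np.sqrt(w).sum()    # beta = 2: min potential delay
hinf = np.full(N, B / N)                    # beta -> infinity: max-min fairness

for name, h in [("beta=1", h1), ("beta=2", h2), ("beta=inf", hinf)]:
    assert abs(h.sum() - B) < 1e-6 and h.max() <= 1.0
    print(name, round(h[0], 3), round(h[-1], 3))   # most vs. least popular file
\end{verbatim}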
Note that using identical utility functions for all files resulted in similar hit probabilities as this case. A brief summary of the utility functions discussed here is given in Table~\\ref{tbl:u}. \n\\begin{table*}[]\n\\centering\n\\caption{$\\beta$-fair utility functions family}\n\\begin{tabular}{ | c | c | c | c |}\n\\hline\n$\\beta$ & $\\max{\\sum_{i}{U_i(h_i)}}$ & $h_i$ & implication \\\\\n\\hline\n 0 & $\\max{\\sum{w_i h_i}}$ & $h_i = 1, i\\le B, h_i = 0, i \\ge B+1$ & maximizing overall throughput \\\\\n 1 & $\\max{\\sum{w_i \\log{h_i}}}$ & $h_i = w_i B \/ \\sum_{j}{w_j}$ & proportional fairness \\\\\n 2 & $\\max{-\\sum{w_i \/ h_i}}$ & $h_i = \\sqrt{w_i} B \/ \\sum_{j}{\\sqrt{w_j}}$ & minimize potential delay \\\\\n $\\infty$ & $\\max{\\min{h_i}}$ & $h_i = B\/N$ & max-min fairness \\\\\n\\hline\n\\end{tabular}\n\\label{tbl:u}\n\\end{table*}\n\n\n\\section{Introduction}\nThe increase in data traffic over past years is predicted to continue more aggressively, with global Internet traffic in 2019 estimated to reach 64 times of its volume in 2005~\\cite{cisco14}.\nThe growth in data traffic is recognized to be due primarily to streaming of video on-demand content over cellular networks.\nHowever, traditional methods such as increasing the amount of spectrum or deploying more base stations are not sufficient to cope with this predicted traffic increase~\\cite{Andrews12, Golrezaei12}. Caching is recognized, in current and future Internet architecture proposals, as one of the most effective means to improve the performance of web applications.\nBy bringing the content closer to users, caches greatly reduce network bandwidth usage, server load, and perceived service delays~\\cite{borst10}.\n\n\n\nBecause of the trend for ubiquitous computing, creation of new content publishers and consumers, the Internet is becoming an increasingly heterogeneous environment where different content types have different quality of service requirements, depending on the content publisher\/consumer.\nSuch an increasing diversity in service expectations advocates the need for content delivery infrastructures with service differentiation among different applications and content classes.\nService differentiation not only induces important technical gains, but also provides significant economic benefits~\\cite{feldman02}.\nDespite a plethora of research on the design and implementation of \\emph{fair} and \\emph{efficient} algorithms for differentiated bandwidth sharing in communication networks, little work has focused on the provision of multiple levels of service in network and web caches.\nThe little available research has focused on designing controllers for partitioning cache space~\\cite{ko03, lu04}, biased replacement policies towards particular content classes~\\cite{kelly99}, or using multiple levels of caches~\\cite{feldman02}. These techniques either require additional controllers for fairness, or inefficiently use the cache storage.\n\nMoreover, traditional cache management policies such as LRU treat different contents in a strongly coupled manner that makes it difficult for (cache) service providers to implement differentiated services, and for content publishers to account for the valuation of their content delivered through content distribution networks.\nIn this paper, we propose a utility-driven caching framework, where each content has an associated utility and content is stored and managed in a cache so as to maximize the aggregate utility for all content. 
Utilities can be chosen to trade off user satisfaction and cost of storing the content in the cache.\nWe draw on analytical results for time-to-live (TTL) caches~\\cite{Nicaise14b}, to design caches with ties to utilities for individual (or classes of) contents.\nUtility functions also have implicit notions of fairness that dictate the time each content stays in cache.\nOur framework allows us to develop \\emph{online} algorithms for cache management, for which we prove achieve optimal performance. Our framework has implications for distributed pricing and control mechanisms and hence is well-suited for designing cache market economic models.\n\nOur main contributions in this paper can be summarized as follows:\n\\begin{itemize}\n\\item We formulate a utility-based optimization framework for maximizing aggregate content publisher utility subject to buffer capacity constraints at the service provider.\nWe show that existing caching policies, \\emph{e.g.}\\ LRU, LFU and FIFO, can be modeled as utility-driven caches within this framework.\n\\item By reverse engineering the LRU and FIFO caching policies as utility maximization problems, we show how the \\emph{characteristic time}~\\cite{Che01} defined for these caches relates to the Lagrange multiplier corresponding to the cache capacity constraint.\n\\item We develop online algorithms for managing cache content, and prove the convergence of these algorithms to the optimal solution using Lyapunov functions.\n\\item We show that our framework can be used in revenue based models where content publishers react to prices set by (cache) service providers without revealing their utility functions.\n\\item We perform simulations to show the efficiency of our online algorithms using different utility functions with different notions of fairness.\n\\end{itemize}\n\nThe remainder of the paper is organized as follows. We review related work in the next section. Section~\\ref{sec:model} explains the network model considered in this paper, and\nSection~\\ref{sec:opt} describes our approach in designing utility maximizing caches. In Section~\\ref{sec:fairness} we elaborate on fairness implications of utility functions, and in Section~\\ref{sec:reverse}, we derive the utility functions maximized by LRU and FIFO caches. In Section~\\ref{sec:online}, we develop online algorithms for implementing utility maximizing caches. We present simulation results in Section~\\ref{sec:simulation}, and discuss prospects and implications of the cache utility maximization framework in Section~\\ref{sec:discussion}. Finally, we conclude the paper in Section~\\ref{sec:conclusion}.\n\n\\section{Model}\n\\label{sec:model}\nConsider a set of $N$ files, and a cache of size $B$. We use the terms file and content interchangeably in this paper.\nLet $h_i$ denote the hit probability for content $i$.\nAssociated with each content, $i=1,\\ldots, N$, is a utility function $U_i:[0,1] \\rightarrow \\mathbb{R}$ that represents the ``satisfaction'' perceived by observing hit probability $h_i$.\n$U_i(\\cdot)$ is assumed to be increasing, continuously differentiable, and strictly concave.\nNote that a function with these properties is invertible. We will treat utility functions that do not satisfy these constraints as special cases.\n\n\\subsection{TTL Caches}\nIn a TTL cache, each content is associated with a timer~$t_i$. 
Whenever a cache miss to content $i$ occurs, content $i$ is stored in the cache and its timer is set to $t_i$.\nTimers decrease at constant rate, and a content is evicted from cache when its timer reaches zero.\nWe can adjust the hit probability of a file by controlling the time a file is kept in cache.\n\nThere are two TTL cache designs:\n\\begin{itemize}\n\\item Non-reset TTL Cache: TTL is only set at cache misses, \\emph{i.e.}~TTL is not reset upon cache hits.\n\\item Reset TTL Cache: TTL is set each time the content is requested.\n\\end{itemize}\nPrevious work on the analysis of TTL caches~\\cite{Nicaise14} has shown that the hit probability of file $i$ for these two classes of non-reset and reset TTL caches can be expressed as \n\\begin{equation}\n\\label{eq:hit_non_reset}\nh_i = 1 - \\frac{1}{1 + \\lambda_i t_i},\n\\end{equation}\nand\n\\begin{equation}\n\\label{eq:hit_reset}\nh_i = 1 - e^{-\\lambda_i t_i},\n\\end{equation}\nrespectively, where requests for file $i$ arrive at the cache according to a Poisson process with rate $\\lambda_i$.\nNote that depending on the utility functions, different (classes of) files might have different or equal TTL values.\n\n\\section{Cache Utility Maximization}\n\\label{sec:opt}\nIn this section, we formulate cache management as a utility maximization problem. We introduce two formulations, one where the buffer size introduces a hard constraint and a second where it introduces a soft constraint.\n\n\\subsection{Hard Constraint Formulation}\nWe are interested in designing a cache management policy that optimizes the sum of utilities over all files, more precisely,\n\\begin{align}\n\\label{eq:opt}\n\\text{maximize} \\quad &\\sum_{i=1}^{N}{U_i(h_i)} \\notag\\\\\n\\text{such that} \\quad &\\sum_{i=1}^{N}{h_i} = B \\\\\n& 0 \\le h_i \\le 1, \\quad i=1, 2, \\ldots, N. \\notag\n\\end{align}\nNote that the feasible solution set is convex and since the objective function is strictly concave and continuous, a unique maximizer, called the optimal solution, exists. Also note that the buffer constraint is based on the {\\em expected} number of files not exceeding the buffer size and not the total number of files.\nTowards the end of this section, we show that the buffer space can be managed in a way such that the probability of \\emph{violating} the buffer size constraint vanishes as the number of files and cache size grow large.\n\nThis formulation does not enforce any special technique for managing the cache content, and any strategy that can easily adjust the hit probabilities can be employed. 
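Since the policies developed next control the hit probabilities exclusively through the TTL timers, it may help to see the timer acting as the control knob: the short simulation below (Python) estimates the hit probability of a single file under Poisson requests for both TTL variants and compares it with~\eqref{eq:hit_non_reset} and~\eqref{eq:hit_reset}; the rate and timer values are arbitrary choices for illustration.
\begin{verbatim}
# Monte Carlo check of the TTL hit probabilities for one file, Poisson(lam) requests.
import numpy as np

rng = np.random.default_rng(0)
lam, t, n = 2.0, 0.7, 1_000_000
gaps = rng.exponential(1.0 / lam, n)     # inter-request times

# Reset TTL: a request is a hit iff the previous request arrived less than t ago.
hit_reset = np.mean(gaps < t)

# Non-reset TTL: the timer runs from the last miss, so track the expiry explicitly.
hits, expiry, now = 0, 0.0, 0.0
for g in gaps:
    now += g
    if now < expiry:
        hits += 1                        # still cached: hit, timer not reset
    else:
        expiry = now + t                 # miss: insert the file and set the timer
hit_nonreset = hits / n

print(hit_reset,    1.0 - np.exp(-lam * t))        # eq:hit_reset
print(hit_nonreset, 1.0 - 1.0 / (1.0 + lam * t))   # eq:hit_non_reset
\end{verbatim}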
We use the TTL cache as our building block because it provides the means, through setting timers, to control the hit probabilities of different files in order to maximize the sum of utilities.\n\nUsing timer based caching techniques for controlling the hit probabilities with $0 < t_i < \\infty$ ensures that $0 < h_i < 1$, and hence, disregarding the possibility of $h_i = 0$ or $h_i = 1$, we can write the Lagrangian function as\n\\begin{align*}\n\\mathcal{L}(\\mathbf{h}, \\alpha) &= \\sum_{i=1}^{N}{U_i(h_i)}-\\alpha\\left[ \\sum_{i=1}^{N}{h_i} - B\\right] \\\\\n&= \\sum_{i=1}^{N}{\\Big[ U_i(h_i)-\\alpha h_i \\Big]} + \\alpha B,\n\\end{align*}\nwhere $\\alpha$ is the Lagrange multiplier.\n\nIn order to achieve the maximum in $\\mathcal{L}(\\mathbf{h}, \\alpha)$, the hit probabilities should satisfy\n\\begin{equation}\n\\label{eq:drv}\n\\frac{\\partial\\mathcal{L}}{\\partial h_i} = \\frac{\\mathrm{d} U_i}{\\mathrm{d} h_i} - \\alpha = 0.\n\\end{equation}\n\nLet $U'_i(\\cdot)$ denote the derivative of the utility function $U_i(\\cdot)$, and define ${U'_i}^{-1}(\\cdot)$ as its inverse function.\nFrom~\\eqref{eq:drv} we get\n\\[U'_i(h_i) = \\alpha,\\]\nor equivalently\n\\begin{equation}\n\\label{eq:hu}\nh_i = {U'_i}^{-1}(\\alpha).\n\\end{equation}\nApplying the cache storage constraint we obtain\n\\begin{equation}\n\\label{eq:c}\n\\sum_{i}{h_i} = \\sum_{i}{{U'_i}^{-1}(\\alpha)} = B,\n\\end{equation}\nand $\\alpha$ can be computed by solving the fixed-point equation given above.\n\nAs mentioned before, we can implement utility maximizing caches using TTL based policies.\nUsing the expression for the hit probabilities of non-reset and reset TTL caches given in~\\eqref{eq:hit_non_reset} and~\\eqref{eq:hit_reset},\nwe can compute the timer parameters $t_i$, once $\\alpha$ is determined from~\\eqref{eq:c}.\nFor non-reset TTL caches we obtain\n\\begin{equation}\n\\label{eq:non_reset_t}\nt_i = -\\frac{1}{\\lambda_i}\\Big(1 - \\frac{1}{1 - {U'_i}^{-1}(\\alpha)}\\Big),\n\\end{equation}\nand for reset TTL caches we get\n\\begin{equation}\n\\label{eq:reset_t}\nt_i = -\\frac{1}{\\lambda_i}\\log{\\Big(1 - {U'_i}^{-1}(\\alpha)\\Big)}.\n\\end{equation}\n\n\\subsection{Soft Constraint Formulation}\n\\label{sec:soft}\nThe formulation in~\\eqref{eq:opt} assumes a hard constraint on cache capacity.\nIn some circumstances it may be appropriate for the (cache) service provider to increase the available cache storage at some cost to the file provider\nfor the additional resources\\footnote{One straightforward way of thinking about this is to turn the cache memory disks on and off based on the demand.}.\nIn this case the cache capacity constraint can be replaced with a penalty function $C(\\cdot)$ denoting the cost for the extra cache storage.\nHere, $C(\\cdot)$ is assumed to be a convex and increasing function.\nWe can now write the utility and cost driven caching formulation as\n\\begin{align}\n\\label{eq:opt_soft}\n\\text{maximize} \\quad &\\sum_{i=1}^{N}{U_i(h_i)} - C(\\sum_{i=1}^{N}{h_i} - B) \\\\\n\\text{such that} \\quad &0 \\le h_i \\le 1, \\quad i=1,2,\\ldots, N. 
\\notag\n\\end{align}\n\nNote the optimality condition for the above optimization problem states that\n\\[U'_i(h_i) = C'(\\sum_{i=1}^{N}{h_i} - B).\\]\n\nTherefore, for the hit probabilities we obtain\n\\[h_i = {U'_i}^{-1}\\Big(C'(\\sum_{i=1}^{N}{h_i} - B)\\Big),\\]\nand the optimal value for the cache storage can be computed using the fixed-point equation\n\\begin{equation}\n\\label{eq:elastic_B}\nB^* = \\sum_{i=1}^{N}{{U'_i}^{-1}\\Big(C'(B^* - B)\\Big)}.\n\\end{equation}\n\n\\subsection{Buffer Constraint Violations}\n\\label{sec:violation}\nBefore we leave this section, we address an issue that arises in both formulations, namely how to deal with the fact that there may be more contents with unexpired timers than can be stored in the buffer. This occurs in the formulation of (\\ref{eq:opt}) because the constraint is on the {\\em average} buffer occupancy and in (\\ref{eq:opt_soft}) because there is no constraint. Let us focus on the formulation in (\\ref{eq:opt}) first. Our approach is to provide a buffer of size $B(1+\\epsilon )$ with $\\epsilon > 0$, where a portion $B$ is used to solve the optimization problem and the additional portion $\\epsilon B$ to handle buffer violations. We will see that as the number of contents, $N$, increases, we can get by growing $B$ in a sublinear manner, and allow $\\epsilon$ to shrink to zero, while ensuring that content will not be evicted from the cache before their timers expire with high probability. Let $X_i$ denote whether content $i$ is in the cache or not; $P(X_i =1) = h_i $. Now Let $\\mathbb{E}\\bigl[\\sum_{i=1}^N X_i\\bigr] = \\sum_{i=1}^N h_i = B$. We write $B(N)$ as a function of $N$, and assume that $B(N) = \\omega (1)$. \n\\begin{theorem}\n\\label{thrm:violation}\nFor any $\\epsilon > 0$\n\\[\n \\mathbb{P}\\bigl(\\sum_{i=1}^N X_i \\ge B(N)(1+\\epsilon)\\bigr) \\le e^{-\\epsilon^2 B(N)\/3} .\n\\]\n\\end{theorem}\nThe proof follows from the application of a Chernoff bound.\n\nTheorem~\\ref{thrm:violation} states that we can size the buffer as $B(1+\\epsilon)$ while using a portion $B$ as the constraint in the optimization. The remaining portion, $\\epsilon B$, is used to protect against buffer constraint violations. \nIt suffices for our purpose that ${\\epsilon^2 B(N) = \\omega (1)}$. This allows us to select $B(N) = o(N)$ while at the same time selecting $\\epsilon = o(1)$. As an example, consider Zipf's law with $\\lambda_i = \\lambda\/i^s$, $\\lambda > 0$, $0 < s <1$, $i=1,\\ldots, N$ under the assumption that $\\max{\\{t_i\\}} = t$ for some $t <\\infty$. In this case, we can grow the buffer as $B(N) = O(N^{1-s})$ while \n$\\epsilon$ can shrink as $\\epsilon = 1\/N^{(1-s)\/3}$. Analogous expressions can be derived for $s \\ge 1$.\n\nSimilar choices can be made for the soft constraint formulation.\n\n\n\n\n\n\n\n\\section{Related Work}\n\\subsection{Network Utility Maximization}\n\nUtility functions have been widely used in the modeling and control of computer networks, from stability analysis of queues to the study of fairness in network resource allocation; see~\\cite{srikant13, neely10} and references therein. 
Kelly~\\cite{kelly97} was the first to formulate the problem of rate allocation as one of achieving maximum\naggregate utility for users, and describe how network-wide optimal rate allocation can be achieved by having individual users control their transmission rates.\nThe work of Kelly~\\emph{et al.}~\\cite{kelly98} presents the first mathematical model and analysis of the behavior of congestion control algorithms for general topology networks.\nSince then, there has been extensive research in generalizing and applying Kelly's \\emph{Network Utility Maximization} framework to model and analyze various network protocols and architectures. This framework has been used to study problems such as network routing~\\cite{tassiulas92}, throughput maximization~\\cite{eryilmaz07}, dynamic power allocation~\\cite{neely03} and scheduling in energy harvesting networks~\\cite{huang13}, among many others. Ma and Towsley~\\cite{Ma15} have recently proposed using utility functions for the purpose of designing contracts that allow service providers to monetize caching.\n\n\n\\subsection{Time-To-Live Caches}\nTTL caches, in which content eviction occurs upon the expiration of a timer, have been employed\nsince the early days of the Internet with the Domain Name System (DNS) being an important application~\\cite{Jung03}. More recently, TTL caches have regained popularity, mostly due to admitting a general approach in the analysis of caches that can also be used to model replacement-based caching policies such as LRU. The connection between\nTTL caches and replacement-based (capacity-driven) policies was first established for the LRU policy by Che~\\emph{et al.}~\\cite{Che01} through the notion of cache \\emph{characteristic time}. The characteristic time was theoretically justified and extended to other caching policies such as FIFO and RANDOM~\\cite{Fricker12}. \nThis connection was further confirmed to hold for more general arrival models than Poisson processes~\\cite{Bianchi13}. Over the past few years, several exact and approximate analyses have been proposed for modeling single caches in isolation as well as cache networks using the TTL framework~\\mbox{\\cite{Nicaise14, Berger14}}.\n\nIn this paper, we use TTL timers as \\emph{tuning knobs} for individual (or classes of) files to control the utilities observed by the corresponding contents,\nand to implement \\emph{fair} usage of cache space among different (classes of) contents.\nWe develop our framework based on two types of TTL caches described in the next section.\n\n\n\n\\section{Reverse Engineering}\n\\label{sec:reverse}\nIn this section, we study the widely used replacement-based caching policies, FIFO and LRU, and show that their hit\/miss behaviors can be duplicated in our framework through an appropriate choice of utility functions.\n \nIt was shown in~\\cite{Nicaise14} that, with a proper choice of timer values, a TTL cache can generate the same statistical properties, \\emph{i.e.}~same hit\/miss probabilities, as FIFO and LRU caching policies. \nIn implementing these caches, non-reset and reset TTL caches are used for FIFO and LRU, respectively, with $t_i=T, i=1,\\ldots,N$ where $T$ denotes the \\emph{characteristic time}~\\cite{Che01} of these caches. 
For FIFO and LRU caches with Poisson arrivals the hit probabilities can be expressed as\n$h_i = 1 - 1\/(1+\\lambda_iT)$ and $h_i = 1 - e^{-\\lambda_i T}$, and $T$ is computed such that $\\sum_{i}{h_i} = B$.\nFor example for the LRU policy $T$ is the unique solution to the fixed-point equation\n\\[\\sum_{i=1}^{N}{\\left(1 - e^{-\\lambda_i T}\\right)} = B.\\]\n\n\nIn our framework, we see from~\\eqref{eq:hu} that the file hit probabilities depend on the Lagrange multiplier $\\alpha$ corresponding to the cache size constraint in~\\eqref{eq:opt}.\nThis suggests a connection between $T$ and $\\alpha$. Further note that the hit probabilities are increasing functions of $T$. On the other hand, since utility functions are concave and increasing, $h_i = {U'_i}^{-1}(\\alpha)$ is a decreasing\nfunction of $\\alpha$. Hence, we can denote $T$ as a decreasing function of $\\alpha$, \\emph{i.e.}~$T = f(\\alpha)$. \n\nDifferent choices of function $f(\\cdot)$ would result in different utility functions for FIFO and LRU policies. \nHowever, if we impose the functional dependence $U_i(h_i) = \\lambda_i U_0(h_i)$, then the equation $h_i = {U'_i}^{-1}(\\alpha)$ yields\n\\[h_i = {U'_0}^{-1}(\\alpha\/\\lambda_i).\\]\nFrom the expressions for the hit probabilities of the FIFO and LRU policies, we obtain $T = 1\/\\alpha$. In the remainder of the section, we use this to derive utility functions for the FIFO and LRU policies.\n\n\\subsection{FIFO}\nThe hit probability of file $i$ with request rate $\\lambda_i$ in a FIFO cache with characteristic time $T$ is\n\\[h_i = 1 - \\frac{1}{1 + \\lambda_i T}.\\]\nSubstituting this into~\\eqref{eq:hu} and letting $T = 1\/\\alpha$ yields\n\\[{U'_i}^{-1}(\\alpha) = 1 - \\frac{1}{1 + \\lambda_i \/ \\alpha}.\\]\nComputing the inverse of ${U'_i}^{-1}(\\cdot)$ yields\n\\[U'_i(h_i) = \\frac{\\lambda_i}{h_i} - \\lambda_i,\\]\nand integration of the two sides of the above equation yields the utility function for the FIFO cache \n\\[U_i(h_i) = \\lambda_i (\\log{h_i} - h_i).\\]\n\n\\subsection{LRU}\nTaking $h_i = 1 - e^{-\\lambda_i T}$ for the LRU policy and letting ${T = 1\/\\alpha}$ yields\n\\[{U'_i}^{-1}(\\alpha) = 1 - e^{-\\lambda_i\/\\alpha},\\]\nwhich yields\n\\[U'_i(h_i) = \\frac{-\\lambda_i}{\\log{(1-h_i)}}.\\]\nIntegration of the two sides of the above equation yields the utility function for the LRU caching policy\n\\[U_i(h_i) = \\lambda_i \\text{li}(1-h_i),\\]\nwhere $\\text{li}(\\cdot)$ is the logarithmic integral function\n\\[\\text{li}(x) = \\int_0^x{\\frac{\\mathrm{d} t}{\\ln{t}}}.\\]\n\nIt is easy to verify, using the approach explained in Section~\\ref{sec:opt}, that the utility functions computed\nabove indeed yield the correct expressions for the hit probabilities of the FIFO and LRU caches.\nWe believe these utility functions are unique if restricted to be multiplicative in\\footnote{We note that utility functions, defined in this context, are subject to affine transformations, \\emph{i.e.}~$aU+b$ yields the same hit probabilities as $U$ for any constant $a>0$ and $b$.} $\\lambda_i$.\n\n\\section{Simulations}\n\\label{sec:simulation}\nIn this section, we evaluate the efficiency of the online algorithms developed in this paper.\nDue to space restrictions, we limit our study to four caching policies: FIFO, LRU, proportionally fair, and max-min fair.\n\nPer our discussion in Section~\\ref{sec:reverse}, non-reset and reset TTL caches can be used with $t_i = T, i=1,\\ldots,N$ to implement caches with the same statistical properties as FIFO and LRU caches.\nHowever, 
previous approaches require precomputing the cache characteristic time $T$.\nBy using the online dual algorithm developed in Section~\\ref{sec:dual} we are able to implement these policies with no a priori information of $T$.\nWe do so by implementing non-reset and reset TTL caches, with the timer parameters for all files\nset as $t_i = 1\/\\alpha$, where $\\alpha$ denotes the dual variable and is updated according to~\\eqref{eq:dual_sol}.\n\nFor the proportionally fair policy, timer parameters are set to\n\\[t_i = \\frac{-1}{\\lambda_i}\\log{(1 - \\frac{\\lambda_i}{\\alpha})},\\]\nand for the max-min fair policy we set the timers as\n\\[t_i = \\frac{-1}{\\lambda_i}\\log{(1 - \\frac{1}{\\alpha})}.\\]\nWe implement the proportionally fair and max-min fair policies as reset TTL caches.\n\n\nIn the experiments to follow, we consider a cache with the expected number of files in the cache set to $B=1000$. Requests arrive for ${N = 10^4}$ files according to a Poisson process with aggregate rate one. File popularities follow a Zipf distribution with parameter ${s=0.8}$,~\\emph{i.e.}~${\\lambda_i = 1\/i^s}$. In computing the timer parameters we use estimated values for the file request rates as explained in Section~\\ref{sec:estimate}.\n\nFigure~\\ref{fig:dual} compares the hit probabilities achieved by our online dual algorithm with those computed numerically for the four policies explained above.\nIt is clear that the online algorithms yield the exact hit probabilities for the FIFO, LRU and max-min fair policies. For the proportionally fair policy however, the\nsimulated hit probabilities do not exactly match numerically computed values. This is due to an error in estimating $\\lambda_i, i=1,\\ldots, N$. Note that we use a simple estimator\nhere that is unbiased for $1\/\\lambda_i$ but biased for $\\lambda_i$. It is clear from the above equations that computing timer parameters for the max-min fair policy\nonly require estimates of $1\/\\lambda_i$ and hence the results are good. 
Proportionally fair policy on the other hand requires estimating $\\lambda_i$ as well,\nhence using a biased estimate of $\\lambda_i$ introduces some error.\n\nTo confirm the above reasoning, we also simulate the proportionally fair policy assuming perfect knowledge of the request rates.\nFigure~\\ref{fig:prop_exact} shows that in this case simulation results exactly match the numerical values.\n\n\\begin{figure}[h]\n\\centering\n \\begin{subfigure}[b]{0.50\\linewidth}\n \t\\centering\\includegraphics[scale=0.20]{prop_dual_hit.eps}\n \t\\caption{\\label{fig:prop_exact}}\n \\end{subfigure}%\n \\begin{subfigure}[b]{0.50\\linewidth}\n \t\\centering\\includegraphics[scale=0.20]{prop_pd_hit_est.eps}\n \t\\caption{\\label{fig:prop_pd}}\n \\end{subfigure}\n\\vspace{-0.25cm}\n \\caption{Proportionally fair policy implemented using the (a) dual algorithm with exact knowledge of $\\lambda_i$s, and (b) primal-dual algorithm with ${\\delta_m(t_i, \\alpha) = \\lambda_i}$ and ${\\delta_h(t_i, \\alpha) = \\alpha - \\lambda_i}$, with approximate $\\lambda_i$ values.}\n \\label{fig:prop_fair}\n\\end{figure}\n\nWe can also use the primal-dual algorithm to implement the proportionally fair policy.\nHere, we implement this policy using the update rules in~\\eqref{eq:prop_pd}, and estimated values for the request rates.\nFigure~\\ref{fig:prop_pd} shows that, unlike the dual approach, the simulation results match the numerical values.\nThis example demonstrates how one algorithm may be more desirable than others in implementing a specific policy.\n\nThe algorithms explained in Section~\\ref{sec:online} are proven to be globally and asymptotically stable, and converge to the optimal solution.\nFigure~\\ref{fig:lru_dual_var} shows the convergence of the dual variable for the LRU policy.\nThe red line in this figure shows $1\/T=6.8\\times 10^{-4}$ where $T$ is the characteristic time of the LRU cache computed according to the discussion in Section~\\ref{sec:reverse}.\nAlso, Figure~\\ref{fig:lru_cache_size} shows how the number of contents in the cache is centered around the capacity $B$.\nThe probability density and complementary cumulative distribution function (CCDF) for the number of files in cache are shown in Figure~\\ref{fig:cs}.\nThe probability of violating the capacity $B$ by more than $10\\%$ is less than $2.5\\times 10^{-4}$. For larger systems, \\emph{i.e.}\\ for large $B$ and $N$, the probability of violating the \ntarget cache capacity becomes infinitesimally small; see the discussion in Section~\\ref{sec:violation}. 
This is what we also observe in our simulations.\nSimilar behavior in the convergence of the dual variable and cache size is observed in implementing the other policies as well.\n\n\\begin{figure}[h]\n\\centering\n \\begin{subfigure}[b]{0.5\\linewidth}\n \t\\centering\\includegraphics[scale=0.21]{lru_dual_var.eps}\n \t\\caption{\\label{fig:lru_dual_var}}\n \\end{subfigure}%\n \\begin{subfigure}[b]{0.5\\linewidth}\n \t\\centering\\includegraphics[scale=0.21]{cs_conv.eps}\n \t\\caption{\\label{fig:lru_cache_size}}\n \\end{subfigure}\n\\vspace{-0.25cm}\n \\caption{Convergence and stability of dual algorithm for the utility function representing LRU policy.}\n \\label{fig:lru_dual}\n\\end{figure}\n\n\\begin{figure}[h]\n\\centering\n \\begin{subfigure}[b]{0.5\\linewidth}\n \t\\includegraphics[scale=0.21]{cs_distr.eps}\n \\end{subfigure}%\n \\begin{subfigure}[b]{0.5\\linewidth}\n \t\\includegraphics[scale=0.21]{cs_ccdf.eps}\n \\end{subfigure}\n\\vspace{-0.25cm}\n \\caption{Cache size distribution and CCDF from dual algorithm with the utility function representing LRU policy.}\n \\label{fig:cs}\n\\end{figure}\n\n\n\\section{Final Remarks}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Appendix}\n\nThis appendix contains a description of the methodology used to solve for treatment strategies in chain graphs, the methodology used to train a Q-learner on the Karate Club network, and a brief description of community detection techniques that were used on the Karate Club network.\n\n\\subsection{Solving for treatment strategies in the chain graph via linear program}\n\\label{app:chain_lp}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.33\\columnwidth]{images\/chain_bases.png}\n \\caption{Some risk profiles in a chain graph with 11 nodes for different preemptive treatment strategies. Treating node $i$ creates zero risk for individual $i$ and lowers the risk for neighbors as well.}\n \\label{fig:chain_graph_bases}\n\\end{figure}\n\n\\subsubsection{Preemptive treatments}\nIn the preemptive setting, with only a single treatment, there are a limited number of possible interventions, corresponding to treating each of the members of the graph. \nEach of those interventions has a corresponding risk profile (Figure~\\ref{fig:chain_graph_bases}).\n\nTo find the strategy that minimizes average disease burden, it is sufficient to simply enumerate them and find the lowest.\n\nTo see if there is a distribution over strategies that gives a uniform risk profile, we try to find a convex combination of risk profiles that minimizes average risk and is subject to the constraints that all of the risks are equal. These are all linear constraints, so the problem is a linear program. 
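For concreteness, a minimal sketch of this linear program is given below (our illustration; it assumes the matrix of risk profiles, with entry $(i, j)$ giving the probability that individual $i$ is ever infected when node $j$ is treated preemptively, has already been estimated, e.g. by Monte Carlo simulation):
\\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def equalizing_mixture(risk_profiles):
    """risk_profiles[i, j] = P(individual i ever infected | node j treated)."""
    n_people, n_strategies = risk_profiles.shape
    # Objective: average risk of the mixture w over treatment strategies.
    c = risk_profiles.mean(axis=0)
    # Constraints: all individual risks equal, and the weights w form a distribution.
    A_eq = np.vstack([risk_profiles[1:] - risk_profiles[0],
                      np.ones((1, n_strategies))])
    b_eq = np.append(np.zeros(n_people - 1), 1.0)
    return linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0.0, 1.0)] * n_strategies)

# res = equalizing_mixture(estimated_risk_profiles)  # res.success is False if infeasible
\\end{verbatim}
If no equalizing mixture exists, as turns out to be the case for the single-treatment preemptive problem on the chain of Section~\\ref{sec:chain}, the solver reports the problem as infeasible.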
We use scipy's {\\tt optimize.linprog} function as a solver.\n\n\\subsubsection{1-step precision treatments}\nWe also consider precision treatments after 1 step - i.e., waiting to see which patient is infected and responding with a treatment at that point.\n\nFor a chain graph with N members, we are now looking for N different reactive allocations, with each allocation being a convex combination of treatment risk profiles, similar to above, but with the initial patient already infected.\n\nThis problem has a similar form to above, but with N convex combination constraints instead of 1, and is also a linear program.\n\nWaiting for longer than one step before acting was not considered, though could be a reasonable strategy as well.\n\n\n\\subsection{Training a DQN network on the karate-club graph}\n\\label{app:training_dqn}\n\nA DQN agent was trained using dopamine's experiment runner.\nThe reward function at each step was the negative number of newly sick nodes.\n\nThe observation vector at each step was one-hot encoded health states. \n\nThe graph structure was not passed to the agent explicitly, but it was given the chance to learn by observing the simulations.\n\nHyperparameters of the agent were optimized using a black-box hyperparameter tuning system.\n\nThe range of settings explored were:\n\\begin{itemize}\n \\item gamma: [0, 0.99]\n \\item hidden layer size: [5, 500]\n \\item learning rate: [$10^{-4}$, $10^0$]\n \\item num iterations: [$10^2$, $10^5$]\n\\end{itemize}\nThe agent's stack size was set to 1; max steps per episode was 20, and the training steps per iteration was 500. All other arguments were left at default.\n\nThe best settings found by the hyperparameter tuner after 150 trials were:\n\\begin{itemize}\n \\item gamma: 0.70\n \\item hidden layer size: 106\n \\item learning rate: 0.95\n \\item num iterations: 2543\n\\end{itemize}\n\n\\subsection{Community detection in the karate-club graph}\n\\label{app:community_dection_karate}\nWe use networkx's implementation of the Girvan Newman algorithm for community detection and use the highest level in the hierarchy, creating two communities shown in Figure~\\ref{fig:karate_partition}. \n\\begin{figure}\n \\centering\n \\includegraphics[width=0.5\\columnwidth]{karate_club_partition.png}\n \\caption{Partitioning of the Karate club graph into 2 communities using the Girvan-Newman algorithm}\n \\label{fig:karate_partition}\n\\end{figure}\n\\section{Background and Related Work}\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/epidemic_progression.png}\n \\caption{An illustration of the development of an epidemic on a barbell graph with a central individual. Here, blue nodes are in the `Susceptible' state, red nodes are in the `Infected' state, and purple nodes are in the `Recovered' state. `Recovered` is an abosrbing state. After t=4, there are not more transitions to take place.}\n \\label{fig:barbell_evolution}\n\\end{figure}\n\nMathematical models for studying infectious disease spread have a long history~\\citep[e.g.,][]{longini1978optimization, hethcote1978immunization, patel2005finding, tennenbaum2008simple}, including explicit treatment of network structures~\\citep{newman2002spread, valente2012network, newman2018networks}, and evolving control policies~\\citep{sharomi2017optimal}. \n\n\\citet{keeling2012optimal} used analytical models to illustrate a tension between optimal and equitable distributions of vaccines in a simple case of two non-interacting (or barely interacting) populations. 
\\citet{yi2015fairness} study tradeoffs between fairness and effectiveness when considering different possible prioritization orderings of who receives vaccines in the case of limited resources. \\citet{salathe2010dynamics} discuss the importance of ``intercommunity-bridges'', individuals who have contact with multiple communities, in order to stem the spread of disease.\n\nEthical questions around prioritizing individuals for treatment based on considerations like quality-adjusted life years, vulnerability, need, productivity, ability to treat others, and lottery have been discussed extensively~\\citep[e.g.,][]{emanuel2006should, persad2009principles, buccieri2013ethical, saunders2018equality}, but these discussions generally deal with establishing a general order of prioritization rather than with precision treatments, and they do not consider explicit social network structures.\n\nTo the best of our knowledge, the problem of optimally allocating vaccines within a social network in a step-by-step manner as the disease spreads has not been explicitly studied from either an effectiveness or a fairness point of view. The turn-based precision version of the problem is closer to learning to play a strategy game like chess or Go, which has received a lot of attention in the reinforcement learning community~\\citep[e.g.,][]{atari, silver2017mastering}. \n\n\\subsection{Fairness in networks} \n\\label{sec:fairness_in_networks}\n\nMany recently proposed measures of fairness focus either on group fairness~\\citep{hardt2016equality} or individual fairness~\\citep{dwork2012fairness}. Networks serve as an interesting setting because of the way they can richly reproduce many societal structures.\n\n{\\bf Communities:} Social networks tend to organize in community structures~\\citep{girvan2002community, faust2005using}. Membership in these communities tends to be much more nuanced than strictly binary or categorical, with individuals potentially belonging to many communities, possibly with differing degrees of identification~\\citep{palla2005uncovering}, and with communities interacting at different scales or resolutions~\\citep{ronhovde2009multiresolution}. Using measures like the overall burden to a community, rather than the probability of disease conditioned on being from a community, directly links fairness measures to group well-being. \n\n{\\bf Centrality:} Individuals in different positions in the graph may play different roles and be treated differently. The centrality of a node in a network is a measure of its influence on the network and can be characterized in a number of different ways~\\citep{lu2016vital}. Three important measures are {\\em degree centrality}: the number of immediate neighbors; {\\em betweenness centrality}: the number of shortest paths that pass through the node; and {\\em eigenvector centrality}: the node's component of the dominant eigenvector of the adjacency matrix.\n\nEven with the same set of actors, an individual's characterization in the network (both in terms of community and centrality) may be very different depending on how the edges in the network are defined. For example, in a workplace, the communities formed by who eats lunch together may be different from the communities formed by who attends meetings together. 
Thus centrality and community memberships of an individual should not be thought of as unique, but depend crucially on the context being analyzed.\n\n\\section{Formalizing the precision contagion control problem}\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.9\\columnwidth]{images\/wait_and_see.png}\n \\caption{Network structures to illustrate some of the complexity of the precision control task. {\\bf Case 1 - greedy suboptimal:} If one treatment is available per time-step, a greedy agent would prefer to treat node A, whereas an agent that plans ahead would treat B then C. {\\bf Case 2 - wait and see:} If disease transfer occurs with $\\tau = 0.5$ and only one treatment is available overall, an agent minimizing expected sick days should wait to see if the disease transfers to each neighbor before deciding which side to treat.}\n \\label{fig:greedy_suboptimal}\n\\end{figure}\nWe pose the precision contagion control problem in this section. In defining the problem, we build on work on discrete-time compartmental modeling of infectious disease \\cite{brauer2010discrete,hernandez2015discrete,liu2015effect,allen1994some}.\n\nA population of individuals, $V$, is connected by edges, $E$, forming a social network $N = (V, E)$. Individuals can be in one of three health states $\\{S, I, R\\}$ for susceptible, infected and recovered. The health of the network is $H \\in \\{S, I, R\\}^{|V|}$. Health evolves in discrete time steps. At time $t_0$ an initial set of nodes $V_0$ is infected. At every subsequent time step, disease spreads stochastically from infected to susceptible neighbors with probability:\n\n\\begin{equation}\np_{S \\rightarrow I} = 1-(1-\\tau)^{\\#I(N)},\n\\end{equation}\nwhere $\\tau \\in [0, 1]$ is the probability of transmission and $\\#I(N)$ is the number of infected neighbors.\n\nThe recovery process does not depend on the neighborhood:\n\\begin{equation}\np_{I \\rightarrow R} = \\rho,\n\\end{equation}\n\nwhere $\\rho \\in [0, 1]$ is the probability of recovery. \n\nAn agent allocates $N_t$ treatments at each step. Treatments effect a direct transition to the R state, and have no effect on individuals in states $\\{I, R\\}$ (i.e., treatments are {\\em preventative}). Figure~\\ref{fig:barbell_evolution} illustrates the disease being transmitted and individuals recovering in a simple network with no treatments being allocated.\n\nThe precision contagion control problem consists of learning a treatment policy $\\pi(N, \\tau, \\rho, N_t): H \\rightarrow (V \\cup \\{\\varnothing\\})^{N_t}$, i.e., finding the best next individuals to treat based on the current health state of the network. There is always an option not to use an available treatment, which we denote by allocating that treatment to a non-existent individual, $\\varnothing$.\n\nIn public health, uncertainties in the network structure as well as the logistical difficulties in implementing precision policies currently make it impractical to optimize precision interventions in the way that we describe here. In this work, we consider the fairness implications of this idealized precision setting, which may become realistic as health IT infrastructure makes it possible to target health interventions at the individual level. \n\nFinding optimal strategies for precision contagion control is not easy. The following properties of the problem, which we demonstrate with example network structures, highlight some of the complexity of the task. 
\n\n{\\bf Greedy is not optimal:} A greedy policy that aims to minimize the expected total disease burden at every step can be sub-optimal (Figure~\\ref{fig:greedy_suboptimal}, Case 1).\n\n{\\bf Act now or wait?:} Strategies that wait to see how the disease progresses for some number of steps before allocating a treatment can be more effective than treating immediately (Figure~\\ref{fig:greedy_suboptimal}, Case 2). \n\nFrom a fairness perspective, one might be interested in finding treatment strategies that equalize outcomes in the population.\n\n{\\bf Uniform allocation may not produce uniform outcomes:} Some individuals are more or less at risk before intervention due to their position in the social network. Uniformly allocating treatment does not always create uniform outcomes. This will be explored in more detail in Section~\\ref{sec:chain}.\n\n\n\\section{Conclusion}\nThis is work in progress, but we believe that the precision contagion treatment problem, which we intend to contribute to the collection of simulated environments in the open-source {\\tt ML-fairness-gym}~\\citep{fairnessgym}, contains interesting questions of how to find fair allocation policies when individuals are highly intertwined. \nIt emphasizes the importance of {\\em fairness in outcomes} rather than a narrower goal of fairness in, e.g., classification accuracy (there is no classification or state estimation task in this problem -- just planning).\nIt also highlights how individuals with ``intermediate risk'' can end up worst off when policies are put in place to help the most at-risk.\nDespite the idealized nature of the problem, which assumes fully observed graph structures and highly precise treatment delivery, we hope that an increased understanding of the fairness implications of the underlying optimization problem with network effects can inform work in healthcare more broadly.\n\\section{Problem Statement}\nLet $K = \\{1, ..., N_p\\}$ be indices that represent individual members of a population with $N_p$ members and let $N_t$ be the number of treatments an agent can give out. Let $A \\in K^{N_t}$ be an allocation of treatments to individuals in the population. Finally, let $H = \\{S,I,R\\}^{N_p}$ be a health state, and let $\\tau$ be a schedule according to which treatments are given out -- perhaps only at $t=0$, or at every timestep, or something else.\n\nThe agent's task is to find a probabilistic \\emph{allocation policy} $f: H \\rightarrow P(A)$ that maps health states to a distribution over allocations. This function will be applied according to the schedule $\\tau$. One way to go about this is to find the policy by solving an optimization problem that encodes a goal related to reducing infection.\n\nA few more definitions are required before we pose the optimization problems. Let $x_i | t, f \\in \\{0, 1\\}$ be a random variable that indicates whether individual $i \\in K$ ever enters the \\texttt{Infected} state by time $t$ given allocation policy $f$.\n\nNow consider an agent whose only concern is to maximize utility -- that is, to minimize the number of individuals who get sick once the infectious disease has played out. 
They can find this \\emph{maximum utility} policy by solving the following optimization problem:\n\\begin{equation}\n\\begin{aligned}\nf_{MU} = \\argmin_{f} \\quad & \\frac{1}{N_p} \\sum\\limits_{i=1}^{N_p} P(x_i = 1 | t=\\infty, f) \\\\\n\\end{aligned}\n\\end{equation}\n\nConsider now an agent who is concerned with a particular notion of fairness -- each member of the population should have the same probability of being infected. An agent can find this \\emph{naively equitable} policy by adding a constraint to the problem above:\n\\begin{equation}\n\\begin{aligned}\nf_{NE} = \\argmin_{f} \\quad & \\frac{1}{N_p} \\sum\\limits_{i=1}^{N_p} P(x_i = 1 | t=\\infty, f) \\\\\n\\textrm{s.t.} \\quad & P(x_i = 1 | t=\\infty, f) = P(x_j = 1 | t=\\infty, f) \\: \\forall \\: i, j \\in K\n\\end{aligned}\n\\end{equation}\n\nWhy is the policy naive? It could well be the case that there is a policy that \\emph{dominates} the naively equitable policy by offering an improvement for every individual. That is, an \\emph{undominated equitable} policy is a naively equitable policy for which:\n\\begin{equation}\n\\nexists f' \\; \\textrm{s.t.} \\; P(x_i = 1 | t=\\infty, f') < P(x_i = 1 | t=\\infty, f) \\: \\forall \\: i \\in K\n\\end{equation}\n\\section{Introduction}\n\nThe field of public health has long understood that different medical and health problems are not always easily decomposable into individual health measures, but can benefit from a networked analysis - viewing health as a population-level process.\nExamples of this type of analysis have long been applied to the spread of infectious diseases, and more recently to other health factors like obesity~\\citep{christakis2007spread}, substance abuse~\\citep{rosenquist2010spread}, happiness~\\citep{fowler2008dynamic}, and even misinformation about health topics~\\citep{fernandez2015health}.\n\nIn network analyses, determining who benefits from an intervention is not as simple as observing who is directly affected by the intervention (e.g., who receives a vaccine), since neighbors and neighbors of neighbors are also affected indirectly. With this in mind, we can still investigate how different health interventions differentially benefit individuals and communities within a larger population. These tradeoffs are important to study because they highlight how pursuing a coarse metric of success for an intervention averaged over an entire population (e.g., minimizing the total expected number of sick-days) can, under certain circumstances, leave parts of the population under-served. \n\nWe study a stylized version of the public health task of epidemic control that highlights the networked setting. More specifically, we study the problem of optimally allocating vaccines within a social network in a step-by-step manner as the disease spreads. We assume that the agents allocating vaccines can observe the disease states of individuals within the population and the underlying contact network structure. We call this the {\\em precision disease contagion control} problem.\n\nThis work is relevant to fairness in machine learning for healthcare because it addresses a class of optimization problems that appears repeatedly in public health. 
We study this problem in simulation in order to illuminate core dynamics \\cite{epstein2008} -- with the rise of learned policies that attempt to optimize health outcomes on real data, the tensions at the root of the underlying optimization problem become increasingly relevant.\n\nWe propose this environment as a tool for researchers to explore the dynamics of disease under different contact and initial infection conditions, and to evaluate the relative performance of allocation approaches, learned or otherwise, that they may be interested in exploring.\n\nThis work is exploratory in nature. We present several different contexts in which epidemics can spread and explore the efficacy and fairness of different treatment scenarios within those contexts.\n\n\\subsection{Contributions}\n\\begin{itemize}\n \\item We pose a version of the precision disease contagion control problem and discuss heuristic and learned policies. \n \\item We characterize some of the tradeoffs between efficiency and fairness in simple social network graphs.\n \\item We open-source all of our code in an extensible manner to provide for reproduction and extensions of this work.\n\\end{itemize}\n\n\n\n\n\\section{Related Work}\nIn this section, we provide a brief overview of infectious disease modeling.\n\n\\subsection{Infectious Disease Epidemiology}\n\nInfectious disease epidemiology studies infectious disease at the population level. Researchers in this field rely on simple models where human contact is associated with some probability of communicating disease. Compartmental models, which are a central to the study, represent disease dynamics in populations by specifying a small number of mutually exclusive states that members of a population can occupy and some kind of mechanism by which disease propagates \\cite{hethcote2000mathematics}.\n\nIn this paper, we study the Susceptible-Exposed-Infected-Recovered (SEIR) subclass of compartmental models, where individuals exist in one of four mutually exclusive states: \\texttt{Susceptible}, \\texttt{Exposed}, \\texttt{Infected}, and \\texttt{Recovered}. \\texttt{Susceptible} people are healthy but vulnerable to infection, \\texttt{Exposed} people are not yet sick but will become so after time passes, \\texttt{Infected} people are sick and spreading the disease, and \\texttt{Recovered} people were once sick but no longer are. Note that not all stages need be present -- for example, the SI model only includes the \\texttt{Susceptible} and \\texttt{Infected} states.\n\nCompartmental models are not fully specified by the states that people can occupy. Additional considerations include the representation of time, which is typically treated as continuous, but the discrete case is actively studied as well (such as in \\cite{brauer2010discrete,hernandez2015discrete,liu2015effect,allen1994some}). \n\nIn addition, there is the question of how people come into contact. Models typically assume `full mixing,' whereby all members of a population are equally likely to come into contact with all other members of a population. However, there is a rich vein of work that assumes instead that contact proceeds along some explicitly-specified social network (for a detailed treatment, see \\cite{newman2002spread, newman2018networks}).\n\n\\subsection{Social welfare orderings and collective utility functions}\n\n\n\n\n\\section{Experiments}\n\n\nWe implement the simulated dynamics of this environment using the ML Fairness Gym~\\citep{fairnessgym}. 
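A minimal sketch of one step of these dynamics (our paraphrase of the transition rules in the formalization above, not the actual {\\tt ML-fairness-gym} code) is:
\\begin{verbatim}
import numpy as np

def step_health(graph, health, tau, rho, rng):
    """One synchronous update of the {S, I, R} states on a networkx graph."""
    new_health = dict(health)
    for node, state in health.items():
        if state == 'S':
            n_inf = sum(health[nbr] == 'I' for nbr in graph.neighbors(node))
            if n_inf and rng.random() < 1.0 - (1.0 - tau) ** n_inf:
                new_health[node] = 'I'
        elif state == 'I' and rng.random() < rho:
            new_health[node] = 'R'
    return new_health

# health = {v: 'S' for v in g}; health[patient_zero] = 'I'
# health = step_health(g, health, tau=0.25, rho=0.01, rng=np.random.default_rng(0))
\\end{verbatim}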
Centrality is computed with the networkx package's centrality measures. The code for the environment, agents, and experiments will be published open-source and is intended to be contributed to the ML Fairness Gym collection of examples.\n\n\\subsection{Policies for contagion control}\nThere are many possible policies for contagion control. In these experiments we consider:\n\n{\\bf Random selection}: Treatments are allocated uniformly at random to individuals in the population. \n\n{\\bf Direct optimization:} Some networks are simple enough that policies can be computed by direct optimization of the expected number of sick days.\n\n{\\bf Central-first heuristic}: Central-first heuristics treat more {\\em central} individuals before less-central individuals. Centrality is discussed in more detail in Section~\\ref{sec:fairness_in_networks}.\n\n{\\bf Deep Q-Network}: In cases where we cannot directly optimize over policies because the graph is too large, we train a Deep Q-Network~\\citep{mnih2015human} with a single hidden layer using the Dopamine library~\\citep{castro18dopamine} for reinforcement learning. Details of the training can be found in Appendix~\\ref{app:training_dqn}.\n\n\n\\subsection{Barbell Graph}\n\nWe start by considering disease transmission in a barbell-like graph where a central, initially-infected node is connected to two cliques of different sizes through intermediaries. Figure \\ref{fig:barbell_evolution} shows the natural progression of disease without intervention.\n\nThe barbell setting was previously considered by~\\citet{keeling2012optimal} as a clear example of conflict between minimizing disease in the total population and treating communities equally.\n\nIf a single vaccine is available, the optimal policy is to treat a node on the side of the larger community to block the spread of disease. However, this policy is clearly unfair to the smaller community. If multiple vaccines are available and it is impossible to completely block the spread to one community, \\citet{keeling2012optimal} showed that the optimal allocation will change depending on the amount of vaccine available, but that for many values it would strongly favor one community or the other.\n\n\\subsection{Chain graph}\n\\label{sec:chain}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.45\\columnwidth]{images\/equalizing.png}\n \\includegraphics[width=0.45\\columnwidth]{images\/chain.png}\n \\caption{\n (Left) A visualization of the ``Precision: Equalize'' strategy. Each row represents a possible initial infected patient and the agent's conditional distribution over whom to treat in response.\n (Right) Probability of infection of individuals in a chain graph using different treatment strategies. It is possible to ensure an equal probability of infection in the precision setting, but this does not give the lowest overall disease burden.}\n \\label{fig:chain_treatments}\n\\end{figure}\n\nIn contrast to the barbell graph, the chain graph (where vertices are connected sequentially) has no clear group structure, but we can still consider the likelihood of infection for individuals in different locations in the graph. For experiments we use a chain graph of size 11 with $\\tau=0.75, \\rho=1$ and a single treatment available.\n\nIf the initial infected patient is a uniformly chosen node in the network, the expected number of sick days with no intervention is highest for the central node and lowest at the two ends (Figure~\\ref{fig:chain_treatments}, right, ``No treatment''). 
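This baseline profile can be checked with a rough Monte Carlo estimate (an illustrative sketch; note that with $\\rho=1$ an infected individual is sick for exactly one step, so the expected number of sick days equals the probability of ever being infected):
\\begin{verbatim}
import numpy as np
import networkx as nx

def chain_no_treatment_profile(n=11, tau=0.75, trials=20000, seed=0):
    """Monte Carlo estimate of P(ever infected) per node on a path graph, rho = 1."""
    rng = np.random.default_rng(seed)
    g = nx.path_graph(n)
    counts = np.zeros(n)
    for _ in range(trials):
        infectious = {int(rng.integers(n))}    # patient zero, chosen uniformly
        ever = set(infectious)
        while infectious:
            newly = set()
            for v in g.nodes:
                if v in ever:
                    continue                   # only susceptible nodes can be infected
                k = sum(u in infectious for u in g.neighbors(v))
                if k and rng.random() < 1.0 - (1.0 - tau) ** k:
                    newly.add(v)
            infectious = newly                 # rho = 1: last step's infecteds recover
            ever |= newly
        counts[list(ever)] += 1.0
    return counts / trials
\\end{verbatim}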
A random allocation strategy recapitulates this shape. \n\n{\\bf Preemptive treatment:} Treatment can be allocated preemptively, {\\em before} the first patient is infected. The preemptive strategy that optimizes average utility is to treat the middle node in the chain. Interestingly, with only one treatment available, there may not be a preemptive strategy (including randomized strategies) that equalizes the expected disease burden for all individuals. We verify this for our chain graph by posing the problem as a constrained linear program (Appendix~\\ref{app:chain_lp}) and determining that there is {\\em no feasible solution}. This is in contrast to equalized-opportunity classifiers in the classification setting~\\citep{hardt2016equality} which can always be constructed through randomization.\n\n{\\bf Precision treatment:} In the precision setting, we choose the individual to receive treatment {\\em conditional} on the observation of which individual becomes sick first. In this setting, the strategy that optimizes average utility is to treat the individual beside the sick patient, choosing the side with more people. Finding a precision policy that equalizes disease burden can be formulated as a constrained linear program (Appendix~\\ref{app:chain_lp}). Despite there being no preemptive treatment strategy that equalizes disease burden, a precision strategy {\\em does exist} (Figure~ \\ref{fig:chain_treatments} Left), though it has a higher average disease burden than the unconstrained policy. Figure \\ref{fig:chain_treatments} compares how the different treatment policies distribute disease burden over the population in the chain graph. The random policy is dominated by the precision equalizing strategy, i.e., every individual in the network fares equally or better with the precision equalize strategy than the random strategy. \n\n\\subsection{Scale-free network}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/scale-free.png}\n \\caption{Disease burden by centrality (using three different measures of centrality) for the members of the synthetic scale-free network~\\citep{holme2002growing}. Moderately central individuals fare the worst under centrality-based treatment.}\n \\label{fig:general_graph}\n\\end{figure}\n\n\nMany real social networks have degree distributions that follow a power-law (or similar) distribution, meaning that a small number of nodes have very high connectivity.\n\n\n\n\nIn this setting, how will individuals with low centrality fare if high-centrality individuals are prioritized for treatment? They may be better off than they would have been under random allocation if centrality-based allocation methods control the epidemic more effectively; they may be worse off because they do not receive treatment themselves.\n\nWe simulate this scenario with a synthetic scale-free network~\\citep{holme2002growing} with 100 nodes and allowed an agent to allocate 1 treatment per turn. \nInfection spreads along contacts with $\\tau=0.25$ and individuals recover with $\\rho=0.01$.\n\nFigure \\ref{fig:general_graph} was generated by running 1000 simulations for 20 steps under three different centrality-based treatment allocation policies. 
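The allocation rule we have in mind for these centrality-based policies (our reading; the deterministic tie-breaking and the restriction to susceptible nodes are assumptions) simply treats the most central nodes that are still susceptible at each step:
\\begin{verbatim}
import networkx as nx

CENTRALITY = {
    'degree': nx.degree_centrality,
    'betweenness': nx.betweenness_centrality,
    'eigenvector': nx.eigenvector_centrality_numpy,
}

def central_first_treatments(graph, health, measure='degree', n_treatments=1):
    """Return the most central still-susceptible nodes to treat this step."""
    scores = CENTRALITY[measure](graph)
    susceptible = [v for v in graph.nodes if health[v] == 'S']
    susceptible.sort(key=lambda v: scores[v], reverse=True)
    return susceptible[:n_treatments]
\\end{verbatim}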
Note that moderately-central individuals have the highest epidemic size burden under centrality-based treatment because they do not enjoy the low risk that comes with being low centrality or the benefit offered to highly-central individuals by treatment.\n\n\n\\subsection{Karate club graph}\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.95\\columnwidth]{images\/karate_club_centrality.png}\n \n \\caption{Disease burden by centrality (using three different measures of centrality) for each of the 34 members of the Karate club graph~\\citep{zachary1977information}. With natural progression (green circles), more central nodes tend to have a higher expected number of sick days. Central-first treating heuristics bring down everyone's expected disease burden, with the highest disease burden for individuals with intermediate centrality. The Deep Q-network does not succeed in finding strategies that beat heuristics.}\n \\label{fig:karate_club_centrality}\n\\end{figure}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.4\\columnwidth]{images\/karate_club_community.png}\n \\caption{Violin plots showing disease burden in two communities in the karate club graph, for each of the treatment strategies. The distributions pictured are over 1000 simulation runs. Black lines indicate median runs of the simulation.}\n \\label{fig:karate_club_community}\n\\end{figure}\n\nBeyond the synthetic graphs discussed above, we also consider the small scale Karate club graph \\citep{zachary1977information} which records the social interactions between 34 members of a karate club over three years, measuring the expected disease burden on individuals as a function of their centrality in the graph.\n\nSimulations were run 1000 times with parameters ($\\tau = \\rho = 0.5; N_{t} = 1$). Each simulation lasts 20 time-steps. Agents prioritized by centrality (experiments were repeated with the same set of random seeds for each of the 3 different measures of centrality) or using a deep Q-network~\\citep{atari}(Details in appendix~\\ref{app:training_dqn}). Ties between individuals with equal centrality were broken randomly. In each run, the initial infected patient was chosen uniformly. \n\nWith the natural disease progression, more central individuals tend to be {\\em more at risk}, but once central-first treatment policies are put in place, the worst-off individuals are those with intermediate centrality (everyone's risk reduces somewhat). \n\nLooking at expected total number of sick days in the entire population, the different central-first policies are all similar (No treatments: 58.01, Eigen-centrality: 25.56, Degree-centrality: 25.61; Betweenness-centrality: 25.17), but the individual burdens are different (Figure~\\ref{fig:karate_club_centrality}). The Deep Q-network (DQN) did not succeed in learning a policy that outperformed centrality heuristics, suggesting that a different network structure or training approach should be tried.\n\nThe Karate club graph naturally splits into two communities based on how members of the club sided in a particular conflict that led to the dissolution of the club (for more history on this graph, see~\\citet{zachary1977information}). Figure~\\ref{fig:karate_club_community} shows how the disease burden is spread over the two communities (details on the community detection procedure is in Appendix~\\ref{app:community_dection_karate}). On average, community 1 tends to fare a little bit better than community 2. 
This is even true with natural disease progression, suggesting that community 1 is somehow structured in a way that is less conducive to disease spread. \nThe DQN favors community 1 in an extreme way, but since it's performance is generally poor, its hard to draw strong conclusions from that.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n Upon exhausting all of its available fuel, the iron core of a massive star ($\\gtrsim$8$M_\\odot$) collapses, often giving rise to an extremely powerful, bright explosion known as a core-collapse supernova (CCSN). Under very specific circumstances these CCSNe are also thought to be associated with another very powerful and luminous event known as a long-duration gamma-ray burst (LGRB). The circumstances that permit this extraordinary\npartnership are still not well understood, but it seems likely\nthat progenitors born in lower-metallicity environments (sub-solar) favour production of GRBs \\citep[see][]{2006AcA....56..333S,2006Natur.441..463F,2003A&A...406L..63F,2004MNRAS.352.1073T}.\n\n The most popular theoretical model interprets the LGRB as\n being caused by the core-collapse of a specific class of massive\n star. This fundamental theory, coined as the `Collapsar' model, was\n first conceived by \\citet{1993ApJ...405..273W} \\citep[also see][]{1999A&AS..138..499W}. The criteria for the creation of a LGRB\n resulting from the collapse of a massive star are that a black-hole\n must be formed upon core-collapse, either promptly or via fall-back\n of ejected material, and that an accretion disk is allowed to form\n around this black-hole which acts as a `central engine' causing the\n release of collimated relativistic jets along the rotation axis of\n the black-hole\n \\citep{2006astro.ph.10276H,2008arXiv0801.4362Y,2008arXiv0803.3807T}. It\n is these relativistic jets that form the LGRB and the associated\n afterglow emission. These criteria are found to place \n very specific\n constraints on the type of progenitor star that can collapse to\n produce a LGRB. The core of\n the progenitor must be massive enough to produce a black-hole, \n it must be rotating rapidly enough to form an accretion disk, and the star\n needs to be stripped of\n its hydrogen envelope, so that the resulting relativistic jets are\n not inhibited from reaching the stellar surface. All of this points toward\n rapidly rotating Wolf-Rayet stars as viable progenitors. \n \\citep{2008arXiv0804.0014D,2006ApJ...637..914W,2006ARA&A..44..507W}\n\n Rapidly rotating O-stars which mix-up their core processed material\n to produce WR stars through chemically homogeneous evolution have been\nsuggested to \n satisfy the requirements of the collapsar model \\citep{1987A&A...178..159M,2006ASPC..353...63Y,2006A&A...460..199Y,2006ApJ...637..914W}.\n This model prevents the depletion of angular\n momentum due to mass-loss and allows a massive star to become a WR star\n directly, avoiding the supergiant phase. \\citet{2006A&A...460..199Y} have estimated that for a\n rapidly rotating star to be the progenitor of an LGRB, prior to collapse\n it must have a helium core mass greater than 10M$_{\\odot}$, that is an\n initial mass greater than 25-30M$_{\\odot}$. \\citet{2006ApJ...637..914W}\n have modelled the WR mass-loss rates of \\citet{2005A&A...442..587V} and\n predict that the upper metallicity limit for forming LGRB may be as high\n as 0.3Z$_{\\odot}$. 
Apart from the single star progenitor model, a rapidly rotating massive star could lose its hydrogen envelope\n and still retain a large fraction of its initial angular momentum if a close binary companion were to strip it of the envelope\n rapidly enough for the progenitor to avoid spin down\n \\citep{2007A&A...465L..29C,2007astro.ph..2652B,2007A&A...471L..29D}.\n\nWhatever the model for the progenitor of the LGRB, prior to collapse we require a very rapidly rotating, massive WR star. These constraints lead to the prediction that LGRBs favour a lower-metallicity environment and that CCSNe found in association with an LGRB will be classified as Type Ib or Type Ic, characterised by an absence of hydrogen in their spectra \n\\citep{2007ApJ...666.1024B,2003fthp.conf..171F,2003astro.ph..1006H}. To date many LGRBs have revealed `bumps' in their afterglow emission, thought to be evidence of associated SNe \\citep[see][]{1999Natur.401..453B, 2000ApJ...536..185G, 2003A&A...406L..33D, 2005ApJ...624..880L}. A few of these associated SNe have been adequately spectroscopically typed, all have been found to be broad-lined Type Ic SNe, helping to confirm the theory that LGRBs are the result of the collapse of hydrogen-stripped massive stars \\citep{1998Natur.395..670G,2003ApJ...591L..17S,2003Natur.423..847H,2004ApJ...609..952Z}. \\citet{2008AJ....135.1136M} have found that the SNe Type Ic-BL associated with an LGRB are found to inhabit a sample of host galaxies of significantly lower metallicity than the host galaxies of their Type Ic-BL counterparts with no associated observed LGRB. Interestingly, there has not been a SN Type Ic-BL without an associated observed LGRB that has been found to inhabit the same low-metallicity galaxy sample that the LGRB associated SNe inhabit.\n\n\\citet{2006Natur.441..463F} have found that LGRBs are preferentially\nfound in irregular galaxies which are \nsignificantly fainter, smaller and hence presumably of\nlower-metallicity than typical CCSNe hosts. \n\\citet{2005NewA...11..103S} make a similar point, stating that\nthe majority of LGRBs are found in extremely blue, sub-luminous\ngalaxies, similar to the Blue\nCompact Dwarf Galaxy population.\n\\citet{2006astro.ph..9208S} \nhas speculated that at primordial metallicities stars of up \nto 300M$_{\\odot}$ may form and if they have low mass-loss, \nthey may end their lives still retaining much of their original \nmass. \\citet{2002ApJ...567..532H} predict that those stars\nwithin the mass range 140M$_{\\odot}$-260M$_{\\odot}$ \nmay produce pair-instability supernovae\n(PISNe), huge thermonuclear explosions with energies as high as\n$\\sim$10$^{53}$ ergs. Hence understanding SNe in low-metallicity \nenvironments locally could offer insights into high-z GRBs and\nearly Universe explosions. \nAn indication that these exotic PISN\nevents may not be beyond our present day reach is the recent SN 2006gy\nwhich has been reported to have a peak luminosity three times greater\nthan any other supernova ever recorded reaching a magnitude of -22\n\\citep{2006astro.ph.12617S}. 
Hypothesised as having had a Luminous\nBlue Variable as a progenitor with an initial mass of\n$\\sim$150M$_{\\odot}$ this has been suggested to be the first ever PISNe\ndetected \\citep[also\nsee][]{2007arXiv0708.1970L,2007ApJ...659L..13O,2007Natur.450..390W}.\nThe unusual SN 2006jc defines another class of peculiar low-metallicity\nevent, occurring spatially coincident with an outburst thought to be\nthat of an LBV just two years previously \\citep{2007Natur.447..829P}, \nwhich has also been postulated to be related to the PISN mechanism\n\\citep{2007Natur.450..390W}\n\nTo date there \nhave been many surveys specifically designed to search for SNe and\ndepending on the motivation of an individual survey generally one of\ntwo different survey strategies has been employed. The first of these\nstrategies is the \\textit{pointed survey strategy}; here we have a\nfixed catalogue of galaxies which are all individually\nobserved with a cadence of a few days and scanned for fresh SNe. These\nsurveys included the highly successful Lick Observatory Supernova\nSearch (LOSS) which uses the fully robotic 0.75m KAIT telescope to\nscan a catalogue of 7\\,500-14\\,000 nearby galaxies ($z \\lesssim 0.04$), \naiming to observe each\nindividual galaxy with a cadence of 3-5 days\n\\citep{2001ASPC..246..121F}. With\nthe advent of relatively inexpensive CCD technology, the ability of\namateur astronomers to detect nearby SNe using this pointed survey\nstrategy has become more than substantial and a large fraction of\nnearby SNe are now being discovered by this community\n\\citep{2006JAVSO..35...42G}.\nThe Southern inTermediate Redshift ESO\nSupernova Search (STRESS) produced a catalogue of $\\sim$43,000\n($0.05 < z < 0.6$) \ngalaxies within 21 fields observed with the ESO Wide-Field-Imager\nand searched for SNe in these hosts to determine \nrates and their dependency on host galaxy colour and \nredshift evolution \\citep{2008A&A...479...49B}.\n\nThe second strategy \nemployed by SN surveys is the \\textit{area survey strategy}; here an\narea of sky is surveyed repeatedly and image subtraction used to\nidentify transient events including SNe. The SN Legacy Survey uses\nthe CFHT MegaCam imager to image four one-square degree fields to\nsearch for SNe Type Ia with the motivation of improving the sampling\nof these SNe within the redshift range $0.2 < z < 1.0$\n\\citep{2005ASPC..339...60P, 2006A&A...447...31A}. The Equation of\nState SupErNova Cosmology Experiment (ESSENCE) uses the Mosaic Imager\non the Blanco 4m telescope to survey equatorial fields, that have\npreferably been observed with other wide-field surveys, to discover SN\nType Ia within the redshift range $0.15 < z < 0.74$\n\\citep{2007ApJ...666..674M}. The Nearby SN Factory uses ultra-wide\nfield CCD mosaic images from the Near-Earth Asteriod Tracking (NEAT) and Palomar-Quest survey\nprograms to perform an area survey in a bid to find SN Type\nIa in the redshift range $0.03 < z < 0.08$\n\\citep{2002SPIE.4836...61A}. The SDSS-II Supernova Survey takes\nrepeated images of Stripe 82, a 300 square degree southern equatorial\nstrip of the sky, driven by the ambition of identifying and measuring\nthe lightcurves of SNe Type Ia in the intermediate redshift range\n$0.05 < z < 0.35$ \\citep{2008AJ....135..338F, 2008AJ....135..348S}. 
\nThe Texas Supernova Search search \n\\citep{2005AAS...20717102Q, 2007AAS...21110505Y}\nis somewhat unique in that it \nhas a small aperture telescope (0.45m) but very wide field (1.85 square\ndegrees) and is focussed on finding nearby SNe in wide area searches. \nThis survey discovered SN2006gy along with many other of the brightest\nknown SNe ever found and we shall discuss this further in \nSec. \\ref{sect:disc}.\n\n\nThe main driver of \nSN surveys that employ the pointed survey strategy is basically to\nfind as many SNe as possible regardless of type or characteristic. To\nensure that these surveys are as efficient as possible at finding\nCCSNe, the galaxies catalogues used by the surveys generally consist\nonly of the most massive galaxies with the greatest star-formation\nrates. As a result these galaxy catalogues\ntend to be heavily biased towards galaxies with high metallicity. This\nover arching high-metallicity bias has more than likely placed us in\nthe situation where the vast majority of CCSNe that have occurred in\nlow-metallicity environments (such as low-luminosity, dwarf, irregular\ngalaxies) have remained undetected. \\citet{2008ApJ...673..999P} have\nmatched the SAI supernova catalogue to the SDSS-DR4 catalogue of\nstar-forming galaxies with measured metallicities and it is clear that\nthe vast majority of the SNe considered have been detected in areas of\nrelatively high-metallicity; SN Type II occurring in host galaxies\nwith mean metallicity \\mbox{$12+{\\rm \\log(O\/H)}=8.94\\pm0.04$} and SN\nIb\/c at 9.06$\\pm$0.04.\n\nIt is inherently interesting then to search for SNe specifically in \nlow-metallicty environments, or at least without biasing the search\nto look at only high metallicity regimes. \nIn this paper we shall discuss several methods to \nsearch for low-metallicity CCSN\nevents. First we consider compiling a catalogue of nearby,\nlow-metallicity galaxies from pre-existing catalogues \n(taking SDSS DR5 as our primary survey source)\nand using either a single 2.0m telescope or a\nnetwork of 2.0m telescopes to perform a pointed survey of \nlow-metallicity galaxies in the hope of detecting a few CCSNe. \nSecondly we consider\nusing a future all-sky transient survey such as the Panoramic Survey\nTelescope and Rapid Response System (Pan-STARRS) to perform a\nvolume-limited survey for SNe and estimate how many may be found\nin low-metallicity galaxies. \nA third and final method we consider is to use a future all-sky\ntransient survey, limited only by\nthe limiting magnitude of the survey, to search for all CCSNe\nincluding those low-metallicity events.While this paper is \nprimarily aimed at determining numbers of low metallicity events\nthat could be found, the latter calculation gives an estimate of \nthe total number of CCSNe that are likely to be harvested from these\nupcoming surveys.\n\n\\section{Creating Galaxy Catalogues for the Various Survey Strategies}\\label{sec_a}\n\n In order to produce catalogues of galaxies that we can use for the various survey strategies to be considered, we begin with the Sloan Digital Sky Survey (SDSS) \\citep{2007arXiv0707.3380A}. The SDSS Data\n Release 5 (DR5) provides photometric\n and spectroscopic data, within a wavelength range of 3500-9200$\\AA$, for\n $\\sim$675\\,000 galaxies, over an area of 5\\,713 square degrees of the northern\n hemisphere out to a redshift\n of $z\\sim0.4$. 
In general we only want to detect relatively nearby CCSNe, so that they can be spectroscopically typed and followed up with relative ease; to this end we introduce a distance limit of $z=0.04$ to the SDSS spectroscopic catalogue. We have used the SDSS DR5 website\\footnote{SDSS DR5 website: {\\it http:\/\/www.sdss.org\/dr5}} to extract out the 44\\,041 galaxies within $z=0.04$ along with data including the Petrosian magnitudes ({\\it u, g, r, i} and {\\it z}), the galactic extinctions in each filter determined following \\citet{1998ApJ...500..525S}, the spectroscopic redshifts, the {\\it r}-band fibre magnitudes and the line intensities of H$\\alpha$, H$\\beta$, [OIII]$\\lambda5007$ and [NII]$\\lambda6584$ for each galaxy.\n \n Of these 44\\,041 galaxies we extract two separate samples. The first is classified as the high signal-to-noise sample, containing galaxies with an SDSS-defined line significance indicator `nSigma' $>$ 7$\\sigma$ for all four of the spectral lines mentioned previously (as advised by DR5). The second is classified as the low signal-to-noise sample, containing galaxies that are not included in the high signal-to-noise sample but exhibit `nSigma' $>$ 5$\\sigma$ in both H$\\alpha$ and H$\\beta$. The high signal-to-noise sample contains 20\\,632 galaxies and the low signal-to-noise sample contains 8\\,703 galaxies.\n \n\\subsection{\\textbf{Removing AGN}}\\label{sec_aa}\n\n Eventually we aim to determine a star-formation rate (SFR) for each individual galaxy in our sample in order to then determine their core-collapse SN rates (CCSRs). As it is the young, massive, hot star population within each galaxy that is the dominant source of hydrogen ionising radiation, it is possible to use the H$\\alpha$ luminosity of these galaxies as an SFR indicator.\n \n A difficulty arises in that many galaxies within both the high signal-to-noise and the low signal-to-noise samples will host Active Galactic Nuclei (AGN), which also contribute to the galaxy's H$\\alpha$ luminosity. To remove these AGN-contaminated galaxies from the high signal-to-noise sample we use the following diagnostic line provided by \\cite{2003MNRAS.346.1055K} to discriminate between purely star-forming galaxies (SFGs) and galaxies that also host AGN:\n \n\\begin{equation}\n \\log\\left(\\frac{[\\rm{OIII}]\\lambda5007}{\\rm{H}\\beta}\\right)=\\frac{0.61}{\\log([\\rm{NII}]\\lambda6584\/\\rm{H}\\alpha)-0.05}+1.3\n \\label{kauff_equ}\n\\end{equation}\n\n\\begin{figure}[t]\n \\begin{center}\n \\includegraphics[totalheight=0.2\\textheight]{BPT_dia.ps}\n \\caption{\\textrm{[OIII]$\\lambda5007\/$H$\\beta$ vs [NII]$\\lambda6584\/$H$\\alpha$ plot of the 20\\,632 galaxies in our SDSS DR5 high signal-to-noise galaxy sample. The diagnostic line from \\cite{2003MNRAS.346.1055K} discriminates between the purely star-forming galaxies in our sample (those below the line) and those hosting AGN (above the line).}\\label{BPT_dia}}\n \\end{center}\n\\end{figure}\n\n \\noindent Fig. \\ref{BPT_dia} shows the SFGs found below this line and the AGN host galaxies above the line. We now have 18\\,350 high signal-to-noise SFGs. Concerning the low signal-to-noise galaxy sample, we do not have accurate enough spectral information to apply this diagnostic line to remove any unwanted AGN host galaxies. 
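Before turning to the low signal-to-noise sample, we note that applying the cut of Equation~\\ref{kauff_equ} to the high signal-to-noise sample reduces to a simple per-galaxy test; a minimal sketch (assuming arrays of the four measured line fluxes, and treating the usual requirement $\\log([\\rm{NII}]\\lambda6584\/\\rm{H}\\alpha)<0.05$, to the left of the curve's asymptote, as part of the cut) is:
\\begin{verbatim}
import numpy as np

def is_star_forming(f_oiii, f_hbeta, f_nii, f_halpha):
    """True where a galaxy falls below the Kauffmann et al. (2003) line."""
    x = np.log10(f_nii / f_halpha)        # log([NII]6584 / Halpha)
    y = np.log10(f_oiii / f_hbeta)        # log([OIII]5007 / Hbeta)
    below = y < 0.61 / (x - 0.05) + 1.3
    # The demarcation curve is only meaningful left of its asymptote at x = 0.05;
    # galaxies to the right are treated as AGN hosts (our reading of the convention).
    return below & (x < 0.05)
\\end{verbatim}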
However, following the example of\n \\citet{2004MNRAS.351.1151B} it is still possible to remove the AGN hosts from\n the low signal-to-noise sample by requiring that [NII]$\\lambda6584$\/H$\\alpha > 0.6$ and that\n nSigma $>$ 7$\\sigma$ in both lines. We now also have 6\\,000 low signal-to-noise SFGs. An overview of all galaxy sub-samples can be found\n in Table \\ref{subsample_table}. The 17\\,409 unclassified galaxies are\n predominantly early-type galaxies that show little signs of\n recent star-formation, which is the reason that H$\\alpha$ has not been detected to\n the significance levels that we have required. The lack of recent star-formation within these galaxies implies that they will\n also be void of any future CCSN events and hence these galaxies are not of\n interest to our survey.\n \n Having removed all of the AGN contaminated galaxies we now have two catalogues of SFGs which we shall refer to from now on as the high signal-to-noise SFG (HSFG) catalogue and the low signal-to-noise SFG (LSFG) catalogue.\n\n\\begin{table*}\n \\begin{center}\n \\caption{\\textrm{Hierarchy of galaxies extracted from the original SDSS\n DR5 spectroscopic galaxy sample within z$=$0.04}\\label{subsample_table}}\n \\begin{tabular}{c | c | c | c | c | c | c}\n \\hline\\hline\n \\multicolumn{7}{c}{\\bf All SDSS DR5 Galaxies [44\\,041]}\\\\\n {\\bf Sample} & \\multicolumn{2}{|c|}{High signal-to-noise galaxies [20\\,632]} & \\multicolumn{2}{|c|}{Low signal-to-noise galaxies [8\\,703]}& \\multicolumn{2}{c}{Unclassified galaxies [17\\,409]}\\\\\n {\\bf Sub-sample} & {SFGs [18\\,350]} & {AGN [2\\,282]} & {SFGs [6\\,000]} & {AGN [2\\,703]} & \\multicolumn{2}{c}{}\\\\\n \\hline\n \\end{tabular}\n \\end{center}\n\\end{table*}\n\n \n\\subsection{Measuring Oxygen Abundances}\\label{sec_b}\n\n\n In order to select out low-metallicity galaxies from both the HSFG\n and LSFG catalogues,\n we define oxygen abundances using the empirical calibrations of\n \\citet{2004MNRAS.348L..59P}. Within the range $8.12 \\lesssim 12+\\log(\\rm{O\/H}) < 9.05$ the\n following empirical calibration is used:\n\n\\begin{center} \n\\begin{equation}\n 12+\\log(\\rm{O\/H})=8.73-0.32\\log\\left(\\frac{[\\rm{OIII}]\\lambda5007\/\\rm{H}\\beta}{[\\rm{NII}]\\lambda6584\/\\rm{H}\\alpha}\\right),\n\\end{equation}\n\\end{center}\n\n \\noindent and below $12+\\log(\\rm{O\/H})\\simeq 8.12$ the following calibration is used:\n\n\\begin{center} \n \\begin{equation}\n 12+\\log(\\rm{O\/H})=8.9+0.59\\log([\\rm{NII}]\\lambda6584\/\\rm{H}\\alpha).\n \\end{equation}\n\\end{center}\n\n The wavelengths of the emission lines used for the flux ratios in both of these calibrations\n are separated by only a small amount and therefore their ratios are free of any extinction\n effects. Of the sample of 18\\,350 HSFGs, \\citet{2006A&A...448..955I} have directly measured\n metallicities for 209. They determined the\n oxygen abundances by measuring the\n [OIII]$\\lambda$4363\/[OIII]$\\lambda$5007 line ratio to calculate an\n electron temperature $T_{e}$, and then derived the\n abundances directly from the strengths of the [OII]$\\lambda$3727 (or\n [OII]$\\lambda$7320, 7331 when [OII]$\\lambda$3727 was not available) and\n [OIII]$\\lambda\\lambda$4959, 5007 emission lines. Comparing the\n \\citeauthor{2006A&A...448..955I} $T_{e}$ measured abundances with our \\citeauthor{2004MNRAS.348L..59P}\n empirically calibrated abundances we find good agreement, apart\n from four outlying galaxies (see Fig. \\ref{izo_fig}). 
These galaxies are\n SDSS J124813.65-031958.2, SDSS J091731.22+415936.8, SDSS\n J123139.98+035631.4 and SDSS J130240.78+010426.8.\n\n \n When viewed, these four\n outliers seem to be dwarf galaxies that are in the same\n line of sight as and possibly gravitationally bound\n to much larger and presumably more metal rich galaxies.\n Assuming that the contamination from these background galaxies has not been\n adequately removed from the spectra of the dwarf galaxies by the SDSS reduction pipeline would explain\n the discrepancy between the oxygen abundances measured by \\citeauthor{2006A&A...448..955I} and\n our measurements. Viewing a random selection of the remaining galaxies from\n the \\citeauthor{2006A&A...448..955I} sample reveal the blue compact dwarf galaxies expected from\n their low-metallicity galaxy sample.\n The fact that we may be over-estimating the oxygen abundances of a very\n small fraction of galaxies should not concern us too much as we are trying to produce a\n low-metallicity galaxy sample and not a high-metallicity sample that would then\n possibly be contaminated by a few misplaced lower-metallicity galaxies. Choosing then\n to ignore these four outlying galaxies we\n find that our oxygen abundance measurements for the remaining\n 205 galaxies fall with an RMS scatter of 0.14 dex from the directly\n measured abundances of the \\citeauthor{2006A&A...448..955I}\n \n Recently \\cite{2008ApJ...673..999P} have taken a sample of 125 958 SFGs from SDSS DR4, with oxygen abundances derived in the same fashion as \\cite{2004ApJ...613..898T} used for SFGs in DR2. The \\cite{2004ApJ...613..898T} method for deriving oxygen abundance estimates an individual galaxy's metallicty via a likelihood analysis which simultaneously fits multiple optical nebular emission lines to those predicted by the hybrid stellar-population plus photoionisation models of \\cite{2001MNRAS.323..887C}. A likelihood distribution of the metallicity is determined for each galaxy the median is taken as the best estimate of the galaxy metallicity. The \\cite{2004ApJ...613..898T} metallicities are essentially on the \\cite{2002ApJS..142...35K} abundance scale. Matching our catalogues of HSFGs and LSFGs against the sample of 125 958 SFGs of \\cite{2008ApJ...673..999P}, we find a common sample of 18 014 SFGs. The oxygen abundances that we measure with the \\citeauthor{2004MNRAS.348L..59P} method are typically $\\sim$0.2 dex below that of \\citeauthor{2004ApJ...613..898T}, in agreement with the findings of \\cite{2008AJ....135.1136M}. The cause of this discrepancy is debated but may either be due to certain parameters that produce temperature variations not being taken into consideration when deriving $T_{e}$ at higher-metallicity, which would lead to an under-estimation of the oxygen abundance measured on the \\citeauthor{2004MNRAS.348L..59P} scale which is calibrated with $T_{e}$ measured abundances \\citep{2005A&A...434..507S, 2006astro.ph..8410B}, or to an unknown problem with the photoionisation models used by \\citeauthor{2004ApJ...613..898T} \\citep{2003ApJ...591..801K}. \n \n\\subsection{SN Rate Indicator}\\label{sec_c}\n\n Having measured the metallicity for each of the galaxies in our\n catalogues, we now wish to determine CCSRs. To do this we must first\n determine SFRs for the galaxies and then determine the fraction of those stars\n formed that will eventually end their lives as CCSNe. The\n best indicator that we have for the SFR for each galaxy is its H$\\alpha$\n luminosity. 
\subsection{SN Rate Indicator}\label{sec_c}

Having measured the metallicity for each of the galaxies in our catalogues, we now wish to determine CCSRs. To do this we must first determine SFRs for the galaxies and then determine the fraction of those stars formed that will eventually end their lives as CCSNe. The best indicator that we have for the SFR of each galaxy is its H$\alpha$ luminosity. As alluded to previously, it is the young, massive, hot stars in purely star-forming galaxies that are the dominant source of hydrogen ionising radiation, causing the galaxy's H$\alpha$ luminosity to be proportional to its recent SFR.

\citet{1998ARA&A..36..189K} has determined the following calibration between a galaxy's SFR and its H$\alpha$ luminosity:

\begin{center}
\begin{equation}
 {\rm SFR_{H\alpha}}({\rm M}_{\odot}{\rm \,\,yr^{-1}})=\frac{L_{{\rm H}\alpha}}{1.27\times10^{34}({\rm W})}
 \label{kennicutt_equ}
\end{equation}
\end{center}

\noindent where the luminosity is measured in Watts.

Derived from their model fits, \citet{2004MNRAS.351.1151B} also provide likelihood distributions for the conversion factor between the H$\alpha$ luminosity and the SFR for galaxies of various mass ranges. They confirm that the Kennicutt calibration is a very good {\it typical} calibration, comparing well with the median value for their sample. When considering the complete HSFG and LSFG catalogues it is acceptable to assume a median mass range for the galaxies and we employ the Kennicutt calibration. However, when considering galaxies with relatively poor metallicity we choose to use the most probable conversion factor from the \citeauthor{2004MNRAS.351.1151B} distribution with the lowest mass range (${\rm \log}M_{*}<8$), as this distribution most closely resembles the low-metallicity galaxies in our catalogues, i.e. low-mass, blue, dwarf, irregular galaxies. This corresponds to the calibration:

\begin{center}
\begin{equation}
 {\rm SFR_{H\alpha}}({\rm M}_{\odot}{\rm \,\,yr^{-1}})=\frac{L_{{\rm H}\alpha}}{2.01\times10^{34}({\rm W})}
\end{equation}
\end{center}

\begin{figure*}[b]
\begin{center}
\begin{equation}
 {\rm SFR_{H\alpha}}({\rm M}_{\odot}{\rm \,\,yr^{-1}})=10^{-0.4(r_{\rm Petro} - r_{\rm fibre})}\left[\frac{S_{\rm H\alpha}/S_{\rm H\beta}}{2.86}\right]^{2.114}\frac{L_{{\rm H}\alpha}}{1.27\times10^{34}({\rm W})}
 \label{AC_equ}
\end{equation}
\end{center}
\end{figure*}

\begin{figure*}[b]
\begin{center}
\begin{equation}
 {\rm SFR_{H\alpha}}({\rm M}_{\odot}{\rm \,\,yr^{-1}})=10^{-0.4(r_{\rm Petro} - r_{\rm fibre})}\left[\frac{S_{\rm H\alpha}/S_{\rm H\beta}}{2.86}\right]^{2.114}\frac{L_{{\rm H}\alpha}}{2.01\times10^{34}({\rm W})}
 \label{AC_low_equ}
\end{equation}
\end{center}
\end{figure*}

\begin{figure}
 \begin{center}
 \includegraphics[totalheight=0.2\textheight]{izo_fig.ps}
 \caption{\textrm{Comparing our \citet{2004MNRAS.348L..59P} empirically calibrated oxygen abundances with those directly measured by \citet{2006A&A...448..955I} via the electron temperature reveals that our measurements are reliable, with an RMS scatter of 0.14 dex; the solid line depicts a one-to-one correspondence. Note the four outlying galaxies - see text for details.}\label{izo_fig}}
 \end{center}
\end{figure}

SDSS provides the H$\alpha$ equivalent line width and also the continuum flux at the wavelength of H$\alpha$; the equivalent width multiplied by the continuum flux gives the H$\alpha$ flux, which is then used to determine the H$\alpha$ luminosity. A problem with the SDSS data is that the measured H$\alpha$ flux for a galaxy is only the flux which falls within the 3'' fibre aperture of the SDSS multi-fibre spectrograph.
Typically this is only a\n fraction of the total galaxy flux, as the SDSS spectrograph fibre locks onto the\n centre of the galaxy and any flux that falls outside of the 3'' fibre\n aperture is lost. \\citet{2003ApJ...599..971H} have developed a very simple aperture-correction that can\n be applied to the measured galaxy H$\\alpha$ luminosity to give an estimate of the total\n galaxy H$\\alpha$ luminosity. The aperture-correction takes account of the ratio between\n the petrosian photometric $r$-band galaxy magnitude and the\n synthetic $r$-band `fibre magnitude' determined from the galaxy spectrum. \n \\citeauthor{2003ApJ...599..971H} also provide an extinction correction to be\n used when determining the galaxy H$\\alpha$ SFR. The correction makes use of the Balmer decrement\n and assumes the standard Galactic extinction law of\n \\citet{1989ApJ...345..245C}. This gives\n a final aperture and extinction corrected H$\\alpha$ luminosity SFR indicator\n for our entire galaxy sample as given in Equation \\ref{AC_equ}, and for the low-metallicity galaxy sample as given in Equation \\ref{AC_low_equ}, where $S_{\\rm H\\alpha}$ and $S_{\\rm H\\beta}$ are the line \nfluxes corrected for stellar absorption according to \\citet{2003ApJ...599..971H}.\n Having now determined a SFR for each of the galaxies in our\n catalogues, we can compare these rates with their oxygen abundances\n (Fig. \\ref{oxy_sfr}). It is\n clear that in general the higher the galaxy SFR the higher the typical oxygen abundance. It can be reasoned that if the SFR of a galaxy is high, or has been high at any point in its history, there is an increased population of young, hot, massive stars which in turn leads to an increased rate of\n CCSNe and therefore a greater rate of enrichment of the\n ISM. The observed high-metallicity cutoff of the SDSS galaxies is probably due to a saturation suffered by [OIII] $\\lambda$4363 T$_{e}$ calibrated metallicities \\citep[see][]{2008arXiv0801.1849K}\n\n\\begin{figure}\n \\begin{center}\n \\includegraphics[totalheight=0.2\\textheight]{SFR_oxy.ps}\n \\caption{\\textrm{H{$\\alpha$} determined star-formation rates using the low-mass range calibration of \\citet{2004MNRAS.351.1151B} of\n 18\\,350 galaxies\n in our HSFG catalogue compared with their measured oxygen abundances. In general,\n the greater the galaxy metallicity, the greater its star-formation rate - as\n expected.}\n\\label{oxy_sfr}}\n \\end{center}\n\\end{figure}\n\n Given a SFR, it is relatively simple to convert to a CCSR by determining\n the fraction of a stellar population that will eventually collapse to create\n CCSNe. Following the method used by e.g. \\citet{2001MNRAS.324..325M}, we use an initial mass function (IMF) for a stellar\n population to calculate the fraction of this population within the mass\n range 8M$_{\\odot}<$ M $<$ 50M$_{\\odot}$, the same mass range for stars predicted to end\n their lives as CCSNe. From this logic the CCSR is determined as:\n\n\\begin{center} \n\\begin{equation} \n {\\rm CCSR}=\\frac{\\int_{\\rm 8M_{\\odot}}^{\\rm 50M_{\\odot}}\\phi(m)dm}{\\int_{\\rm\n 0.1M_{\\odot}}^{\\rm 125M_{\\odot}}m\\phi(m)dm}\\times{\\rm SFR}\n \\label{mattila_equ}\n\\end{equation}\n\\end{center}\n\n where \\mbox{$\\phi(m)$} is the Salpeter IMF \\citep{1955ApJ...121..161S} with upper and\n lower mass cut-offs of \\mbox{$0.1{\\rm M}_{\\odot}$} and \\mbox{$125{\\rm M}_{\\odot}$}. 
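The ratio of the two integrals above can be evaluated directly for a pure Salpeter slope, $\phi(m)\propto m^{-2.35}$; the short numerical check below (added here for reference, not code used in this work) reproduces the conversion factor quoted next.

\begin{verbatim}
from scipy.integrate import quad

phi = lambda m: m ** -2.35   # Salpeter IMF; the normalisation cancels in the ratio

n_cc, _  = quad(phi, 8.0, 50.0)                     # number of 8-50 Msun stars formed
m_tot, _ = quad(lambda m: m * phi(m), 0.1, 125.0)   # total stellar mass formed (Msun)

print(n_cc / m_tot)   # ~0.007 CCSNe per Msun of stars formed
\end{verbatim}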
This conversion is calculated to be:

\begin{center}
\begin{equation}
 {\rm CCSR (SNe \,\,yr^{-1})}=0.007 \times {\rm SFR (M_{\odot}\,\,yr^{-1})}
 \label{mattila_equ}
\end{equation}
\end{center}

\subsection{Additional nearby bright galaxies}

The target selection criteria for the SDSS DR5 spectroscopic sample include a bright magnitude limit of \mbox{$r\sim$14.5} in order to avoid saturation and excessive cross-talk in the spectrographs. As a result of this restriction many of the nearby luminous galaxies have been omitted from the DR5 spectroscopic sample. To account for these missing bright galaxies and construct a complete galaxy catalogue we initially select the galaxies from the HyperLeda galaxy catalogue that match galaxies in the SDSS DR5 {\it photometric} survey with magnitudes $r<14.5$, assuming that the HyperLeda catalogue contains all of the nearby luminous galaxies. From this catalogue of matched galaxies we further select those galaxies that are not found in the SDSS DR5 {\it spectroscopic} survey. We discover a total of 1\,887 nearby luminous galaxies included in the SDSS DR5 photometric survey that have been omitted from the spectroscopic survey.

Of these 1\,887 galaxies we wish to know the fraction that are star-forming galaxies. HyperLeda provides a galaxy morphological classification for a large number of the galaxies in the catalogue and we are able to remove all galaxies with an early-type classification, as these galaxies will generally have very low SFRs. But note that some early-type galaxies have HII regions and some evidence of low-level star-formation \citep[e.g.][]{2005MNRAS.357.1337M}. The mean $g-i$ colour of these removed early-type galaxies is 1.216, with a standard deviation \mbox{$\sigma = 0.153$}. In an attempt to remove the remaining non-star-forming galaxies in this bright galaxy sample that do not have a morphological classification, we remove all galaxies redward of one $\sigma$ below the mean $g-i$ colour, i.e. $g-i>1.063$. This results in 1\,216 remaining star-forming galaxies. There is a possibility that this cut may also exclude a few starburst galaxies which have been heavily reddened due to the effects of extinction, but we are mainly concerned about low-metallicity galaxies, which are not greatly affected by extinction.

As we have no spectral information for these galaxies from SDSS DR5, it is necessary to use an indicator other than the H$\alpha$ luminosity to estimate the SFR. The $U$-band luminosity can be used as a suitable indicator as developed by \cite{2006ApJ...642..775M}:

\begin{center}
\begin{equation}
 {\rm SFR_{\it U_{obs}}}({\rm M}_{\odot}{\rm yr^{-1}})=(1.4\pm1.1)\times10^{-43}\frac{L_{U_{obs}}}{\rm ergs\ s^{-1}},
\end{equation}
\end{center}

\noindent where the SDSS $u$-band is transformed to \mbox{$U_{\rm vega}$} using the following transformation from \citet{2007AJ....133..734B}:

\begin{center}
\begin{equation}
 U_{\rm vega}=u-0.0140(u-g)+0.0556,
\end{equation}
\end{center}

\noindent and the distance moduli from HyperLeda are used to determine $L_{U_{obs}}$. The CCSR is then determined using Equation \ref{mattila_equ}.
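This chain of conversions can be sketched in a few lines (Python; an illustration added here, not the code used in this work; the step from $U_{\rm vega}$ and the distance modulus to $L_{U_{obs}}$ in erg\,s$^{-1}$ is omitted because it depends on the adopted $U$-band zero-point, which is not specified in the text):

\begin{verbatim}
def u_to_U_vega(u, g):
    """SDSS u -> Johnson U (Vega) using the Blanton & Roweis transformation above."""
    return u - 0.0140 * (u - g) + 0.0556

def sfr_from_U(L_U_obs):
    """U-band SFR calibration quoted above; L_U_obs in erg/s."""
    return 1.4e-43 * L_U_obs          # Msun per yr

def ccsr_from_sfr(sfr):
    """Salpeter-IMF conversion factor derived earlier."""
    return 0.007 * sfr                # CCSNe per yr

# e.g. a galaxy with L_U_obs = 1e43 erg/s gives SFR ~ 1.4 Msun/yr
# and an expected rate of ~0.01 CCSNe per yr
\end{verbatim}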
Note that as \citeauthor{2006ApJ...642..775M} have empirically calibrated this $U$-band SFR indicator from extinction-corrected H$\alpha$ galaxy luminosities, there is no need to further correct this indicator for dust reddening. The 1\,216 bright, nearby galaxies have a resulting $U$-band indicated CCSR of \mbox{12.70 CCSNe yr$^{-1}$}.

We now have three galaxy catalogues: an HSFG catalogue and an LSFG catalogue, with measured metallicities and CCSRs, and a nearby galaxy catalogue containing bright galaxies not found in the SDSS spectroscopic galaxy catalogue. We have estimates of SFRs and CCSRs for all these galaxies. Taking our catalogues of 18\,350 HSFGs and 6\,000 LSFGs and introducing various upper limits on the individual galaxy metallicities and lower limits on the CCSRs, we are able to extract separate galaxy sub-samples. Table \ref{SFG_rate_table} displays these sub-samples, differing both in the number of galaxies they contain and their estimated CCSR determined from the galaxy SFRs derived from aperture and extinction-corrected H$\alpha$ luminosities for the HSFG and LSFG catalogues, and from $U$-band luminosities for the nearby bright galaxy catalogue. For the samples of galaxies with no metallicity constraint the SFRs have been derived using the `typical' calibration of \citet{1998ARA&A..36..189K}, whereas the SFRs of the sub-samples of galaxies with constraints on metallicity have been derived using the lowest mass range calibration of \citet{2004MNRAS.351.1151B}. These full catalogues, or sub-samples of them, can now be used to determine the feasibility of searching for low-metallicity CCSN events using various survey strategies. Figure \ref{cum_dis} shows the cumulative distribution of the combined CCSN rate from our HSFG and LSFG catalogues, measured against oxygen abundance. It is clear that the CCSR increases steeply with the oxygen abundance of the galaxy sample.

\section{Strategy 1 : A pointed survey of catalogued low-metallicity galaxies}\label{sec_d}

When determining the feasibility of using a pointed survey to search for CCSNe in a catalogue of low-metallicity galaxies we consider using the fully robotic 2.0m Liverpool Telescope (LT), situated at the Observatorio del Roque de los Muchachos, La Palma. We shall also consider the use of a network of similar-sized telescopes.

There are three characteristics that we require of the galaxy catalogue that we shall use with this survey strategy: firstly, it must contain only a few hundred galaxies, as too many galaxies in the catalogue would hinder our ability to observe each individual galaxy frequently enough to ensure that we detect any SN that it may host. Secondly, we require that the galaxies are of sufficiently low metallicity in order to determine how the CCSNe that they host differ from those hosted by their higher metallicity counterparts. Finally, the galaxies must exhibit a sufficiently high CCSR to increase the probability of detecting these CCSNe. The latter two requirements somewhat contradict each other because metallicity tends to scale with SFR in SFGs, and therefore requiring a low-metallicity galaxy sample implies that we require galaxies with lower SFRs, and hence {\it lower} CCSRs.
It is therefore essential that we produce a galaxy catalogue that can act as a compromise between these two conflicting requirements.

\begin{table*}
\caption{\textrm{Expected CCSN rates within the SDSS DR5 spectroscopic survey area (14\% of the entire sky) out to a distance of \mbox{$z\sim0.04$}, for sub-samples defined by upper limits on the galaxy oxygen abundance (rows) and lower limits on the individual-galaxy CCSN rate (columns).}}\label{SFG_rate_table}
 \begin{center}
 \begin{tabular}{c c c c c c c}
 \multicolumn{7}{c}{\bf HSFG Catalogue}\\
 \hline\hline
 & \multicolumn{6}{c}{\bf INDIVIDUAL GALAXY SN RATE LIMITS}\\
 & \multicolumn{2}{c}{\bf $>$ 0.0 SNe yr$^{-1}$} & \multicolumn{2}{c}{\bf $>$ 0.001 SNe yr$^{-1}$} & \multicolumn{2}{c}{\bf $>$ 0.01 SNe yr$^{-1}$}\\
 \hline
 {\bf 12+log(O/H)} & {\bf Galaxies} & {\bf SNe yr$^{-1}$} & {\bf Galaxies} & {\bf SNe yr$^{-1}$} & {\bf Galaxies} & {\bf SNe yr$^{-1}$}\\
 \hline
 {\bf No Limit} & 18\,350 & 115.6 & 13\,974 & 113.4 & 2\,557 & 73.6\\
 {\bf $<$ 8.4} & 8\,019 & 13.2 & 3\,650 & 11.2 & 120 & 2.3\\
 {\bf $<$ 8.3} & 4\,290 & 6.9 & 1\,830 & 5.9 & 73 & 1.3\\
 {\bf $<$ 8.2} & 1\,713 & 3.1 & {\bf 727} & {\bf 2.8} & 42 & 0.8\\
 {\bf $<$ 8.1} & 537 & 1.0 & 209 & 0.9 & 16 & 0.3\\
 \hline
 \multicolumn{7}{c}{}\\
 \multicolumn{7}{c}{\bf LSFG Catalogue}\\
 \hline\hline
 & \multicolumn{6}{c}{\bf INDIVIDUAL GALAXY SN RATE LIMITS}\\
 & \multicolumn{2}{c}{\bf $>$ 0.0 SNe yr$^{-1}$} & \multicolumn{2}{c}{\bf $>$ 0.001 SNe yr$^{-1}$} & \multicolumn{2}{c}{\bf $>$ 0.01 SNe yr$^{-1}$}\\
 \hline
 {\bf 12+log(O/H)} & {\bf Galaxies} & {\bf SNe yr$^{-1}$} & {\bf Galaxies} & {\bf SNe yr$^{-1}$} & {\bf Galaxies} & {\bf SNe yr$^{-1}$}\\
 \hline
 {\bf No Limit} & 6\,000 & 34.1 & 3\,691 & 33.2 & 901 & 22.9\\
 {\bf $<$ 8.4} & 1\,757 & 0.8 & 116 & 0.3 & 6 & 0.1\\
 {\bf $<$ 8.3} & 1\,025 & 0.3 & 50 & 0.1 & 0 & 0.0\\
 {\bf $<$ 8.2} & 401 & 0.1 & 18 & 0.0 & 0 & 0.0\\
 {\bf $<$ 8.1} & 116 & 0.0 & 4 & 0.0 & 0 & 0.0\\
 \hline
 \multicolumn{7}{c}{}\\
 \multicolumn{7}{c}{\bf Nearby Bright Galaxy Catalogue}\\
 \hline\hline
 {\bf } & {\bf } & {\bf Galaxies} & {\bf } & {\bf SNe yr$^{-1}$} & {\bf } & {\bf }\\
 \hline
 {\bf } & & 1\,216 & & 12.70 & & \\
 \hline
 \end{tabular}
 \end{center}
\end{table*}

\begin{figure}
 \begin{center}
 \includegraphics[totalheight=0.3\textheight]{cum-dist.eps}
 \caption{\textrm{Cumulative distribution of the combined CCSN rate from our HSFG and LSFG catalogues, measured against oxygen abundance. It is clear that as oxygen abundance increases so too does the rate of CCSNe.}}\label{cum_dis}
 \end{center}
\end{figure}

For the purpose of this survey strategy we decide that the galaxy catalogue that optimally fits our requirements is the sub-sample of the HSFG catalogue that contains galaxies with oxygen abundances of less than 8.2 dex {\it and} a CCSR greater than 1 CCSN every 1\,000 years. This low-metallicity catalogue contains 727 galaxies, a suitable number for a pointed survey, with an estimated CCSR of \mbox{2.8 CCSNe yr$^{-1}$}. It should be noted that this galaxy catalogue is extracted solely from the 5\,713 square degrees of the sky that the SDSS DR5 spectroscopic survey covers, that is, 14\% of the entire sky. It can be assumed that the rest of the sky contains a similar density of these low-metallicity galaxies. We shall return to this point later.
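As an illustration, the kind of selection that defines this sub-sample can be written in a few lines (Python; the file and column names are hypothetical placeholders of ours, not SDSS or catalogue quantities):

\begin{verbatim}
import pandas as pd

# One row per HSFG with, at least, the oxygen abundance 12 + log(O/H)
# and the estimated CCSN rate in SNe per yr (column names are ours).
hsfg = pd.read_csv("hsfg_catalogue.csv")     # hypothetical file

low_oh_sample = hsfg[(hsfg["oxygen_abundance"] < 8.2) & (hsfg["ccsr"] > 0.001)]

print(len(low_oh_sample), low_oh_sample["ccsr"].sum())
# for the catalogue described above this selection yields 727 galaxies
# with a summed rate of ~2.8 CCSNe per yr
\end{verbatim}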
\subsection{Monte-Carlo Simulations}\label{sec_e}

Of the estimated \mbox{2.8 CCSNe yr$^{-1}$} that this low-metallicity galaxy catalogue will produce, we will only be able to detect a fraction due to the practical limiting factors of a pointed survey. The reason that a given SN would not be detected is simply its faintness at the point of observation. The factors that will influence the likelihood that a SN will be detected when observed within our search are: whether or not the galaxy that hosts the CCSN is observable during the period of time when the CCSN is detectable, the type of CCSN (IIP, IIL, Ib, Ic or IIn) observed and its intrinsic brightness, the distance to the host galaxy, the extinction towards the CCSN, the exposure time and the age of the CCSN when observed (affected by the cadence of the observations).

By running a Monte-Carlo simulation, constrained by each of these parameters, to randomly produce a sample of 100\,000 possible SNe observable within our search, we can infer the fraction of CCSNe that we should actually detect.

\subsection{Supernova Rates, Template Lightcurves and Distributions}

In order for the Monte-Carlo simulation (MCS) to accurately reproduce the relative rates of the different types of CCSNe, we use the observed rates compiled by \citet{smartt_et_al}, given in Table \ref{smartt_rate}. These rates have been compiled within a time and volume-limited sample, accounting for all SNe discovered within the eight-year period between 1999 January 1 and 2006 December 31 in galaxies with a recessional velocity \mbox{$< 2\,000\,{\rm km\,s}^{-1}$}. Apart from those SNe that inhabit environments of heavy extinction or were first observed late in their evolution, it is expected that within this adopted distance limit \mbox{($\mu = 32.3$)} all known types of SNe should have been bright enough to have been detected, implying that these relative rates are as free from any Malmquist bias as possible. CCSNe of type IIb have been merged with type Ib/c, and type IIn have been divided between those with a plateau phase in the tail of their lightcurves, IIn/P, and those with a linear phase, IIn/L.

\begin{table}
 \caption{\textrm{Relative SN rates taken from \citet{smartt_et_al}.}\label{smartt_rate}}
 \begin{center}
 \begin{tabular}{ c c c c}
 \hline\hline
 {\bf SN Type} & {\bf Number} & {\bf Relative Rate} & {\bf Core-Collapse Only}\\
 \hline
 {\bf Ia} & 25 & 24.8$\%$ & -\\
 {\bf IIP} & 43 & 42.6$\%$ & 56.6$\%$\\
 {\bf IIL} & 2.5 & 2.4$\%$ & 3.3$\%$\\
 {\bf Ib/c} & 28 & 27.7$\%$ & 36.6$\%$\\
 {\bf IIn/P} & 1.25 & 1.2$\%$ & 1.6$\%$\\
 {\bf IIn/L} & 1.25 & 1.2$\%$ & 1.6$\%$\\
 \hline
 \end{tabular}
 \end{center}
\end{table}

We also supply template lightcurves of the various SNe for the MCS. For Type IIP we use 1999em as our template, taking the data from \citet{2001ApJ...558..615H}. For Type IIL we use 1998S, taking data for the rise to maximum light from \citet{2000A&AS..144..219L} and data for the tail from \citet{2000MNRAS.318.1093F}. For Type Ib/c we use 2002ap, taking data from \citet{2003PASP..115.1220F}.
For\n SNe of Type IIn, we suggest that it is appropriate to divide the relative rate evenly\n between Type IIn that exhibit a plateau phase in their\n lightcurves and those that exhibit a linear phase, Type IIn\/P and Type\n IIn\/L respectively. For the Type IIn\/P, we use\n 1994Y as the template taking data from \\citet{2001PASP..113.1349H}, allowing\n 1998S to provide the rise to maximum light. For Type IIn\/L we use 1999el as the\n template taking data from \\citet{2002ApJ...573..144D}, again allowing 1998S\n to provide the rise to maximum light (see Fig \\ref{templates} for comparison). \n The following conversion from \\citet{2007AJ....133..734B} is used to transform the data for the\n lightcurves to the Sloan $g$-band (note magnitudes are to be in AB system):\n\n\\begin{center} \n\\begin{equation}\n g=B-0.03517-0.3411(B-V)\n \\label{blanton_equ}\n\\end{equation}\n\\end{center} \n\n The SN absolute magnitude distributions of \\citet{2002AJ....123..745R} are used in the MCS to provide weighted,\n random distributions of peak magnitudes for the\n SNe (Table \\ref{rich_table}). Equation \\ref{blanton_equ} is again\n used to transform these distributions to the $g$-band, taking\n a $(B-V)$ colour from the epoch of peak $g$-magnitude from our template\n lightcurves. $\\sigma$ is the range in the peak magnitude distribution and $g$-band magnitudes are given in the AB system. When choosing a filter to perform a SN search with the Liverpool telescope the $r$-band is superior to the $g$-band because it has a greater filter throughput, the CCD detector is more responsive in the $r$-band and also SNe are generally brighter in the $r$-band especially later in their evolution. However, the \\citeauthor{2002AJ....123..745R} distributions of SN peak magnitudes are given in the $B$-band which we can transform to the $g$-band but not to the $r$-band. It is for this reason that we simulate a SN survey in the $g$-band and then with these results we can then hypothesise the outcome of a search in the $r$-band.\n \n\\begin{figure*}\n \\begin{center}\n \\includegraphics[totalheight=0.3\\textheight]{lightcurves.eps}\n \\caption{\\textrm{Template lightcurves used within the Monte-Carlo\n simulation. Lightcurves for SNe of type IIP are according to 1999em, type IIL to 1998S, type Ib\/c to 2002ap, type IIn\/P to 1994Y and\n type IIn\/L to 1999el. Note that all magnitudes are in the AB system.}}\n \\label{templates}\n \\end{center}\n\\end{figure*}\n\n \n\\begin{table}\n\\caption{\\textrm{Peak magnitude distributions from \\citet{2002AJ....123..745R}}\\label{rich_table}}\n \\begin{center}\n \\begin{tabular}{c c c c}\n \\hline\\hline\n {\\bf SN Type} & ${\\bf M_{B}}$ & ${\\bf M_{g}}$ & ${\\bf \\sigma}$\\\\\n \\hline\n {\\bf IIP} & -17.00 & -17.02 & 1.12\\\\\n {\\bf IIL} & -18.03 & -18.00 & 0.90\\\\\n {\\bf Ib\/c} & -18.04 &-18.22 & 1.39\\\\\n {\\bf IIn\/P} & -19.15 & -19.24 & 0.92\\\\\n {\\bf IIn\/L} & -19.15 & -19.24 & 0.92\\\\\n \\hline\n \\end{tabular}\n \\end{center}\n\\end{table}\n\n\\subsection{Cadence, Distance and Extinction}\n\n A major factor influencing the detection efficiency of any SN\n search is the time between consecutive\n observations of a galaxy hosting a SN. As this\n time increases, so does the likelihood that we will not observe the\n SN until it is well advanced along its lightcurve. This could result\n in the magnitude dropping below our detection limit, hence failing to be detected in the\n search. 
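Before turning to the cadence-related inputs, the part of each random draw that fixes the SN type and its peak absolute $g$-band magnitude can be sketched as follows (Python; a simplified illustration using the values of Tables \ref{smartt_rate} and \ref{rich_table}, not the actual simulation code, and the Gaussian form adopted for the spread $\sigma$ is our assumption):

\begin{verbatim}
import numpy as np

rng = np.random.default_rng()

# Relative core-collapse rates (Table smartt_rate) and peak g-band
# magnitudes with their spreads (Table rich_table).
sn_types = ["IIP", "IIL", "Ib/c", "IIn/P", "IIn/L"]
rel_rate = np.array([0.566, 0.033, 0.366, 0.016, 0.016])
peak_g   = {"IIP": -17.02, "IIL": -18.00, "Ib/c": -18.22,
            "IIn/P": -19.24, "IIn/L": -19.24}
sigma_g  = {"IIP": 1.12, "IIL": 0.90, "Ib/c": 1.39,
            "IIn/P": 0.92, "IIn/L": 0.92}

def draw_sn():
    """One synthetic CCSN: a type and a peak absolute g-band magnitude."""
    sn_type = rng.choice(sn_types, p=rel_rate / rel_rate.sum())
    M_peak = rng.normal(peak_g[sn_type], sigma_g[sn_type])
    return sn_type, M_peak
\end{verbatim}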
There are three parameters that affect the cadence of our\n search and\n that need to be included in the MCS. The first of these is the probability\n that the SN will be in solar conjunction for a\n fraction, if not all, of the time that it\n remains detectable by our search. To account for this factor, we use\n the celestial coordinates of the 727 galaxies within our low-metallicity galaxy catalogue and an almanac for the Liverpool Telescope to determine the fraction\n of the year that each individual galaxy is observable, producing a\n distribution with a mean fraction of 0.380. We then use this\n distribution in the MCS to allocate a random amount of time to the\n cadence of observation of each SN, to account for cases where a SN\n would be unobservable and hence undetectable because its host galaxy is behind the sun.\n \n The second factor that affects the cadence is the\n number of nights that cannot be used by the Liverpool Telescope to\n perform our search. Reasons why the telescope could not be used on any\n particular night include weather, technical\n difficulties and scheduled maintenance nights. To\n compensate for these nights in our MCS, we have taken nightly reports\n from the Liverpool Telescope website\\footnote{LT website:\n \\it{http:\/\/telescope.livjm.ac.uk\/}}, from 2005 August 1 to 2006 July 31,\n and use these reports to determine a distribution of nights (58$\\%$ of nights in total) that\n can be used for our search. This distribution of usable nights is included in our MCS when considering the cadence of observations.\n \n The final factor that will affect the cadence of observations, is the\n number of galaxies that we aim to observe every night. This number is\n influenced by the amount of telescope time dedicated to our\n search each night; the\n greater the amount of time on the telescope each night, the greater the number of galaxies we can observe per\n night and the higher the cadence of observations. The number of galaxies\n that can be observed each night is also affected by exposure time. The\n benefit of increasing the exposure time is that we can search to a deeper magnitude limit, meaning\n that fainter SNe are detectable and SNe are detectable for a greater period of time. It is essential therefore to run a number of MCSs in order to find\n the optimal balance between the exposure time given to each galaxy and\n the cadence of observations and hence enabling us to detect the greatest fraction of\n CCSNe hosted by our low-metallicity galaxy catalogue.\n \n The final two parameters placed in our MCS are the host galaxy distances and\n the extinction toward the SNe. The MCS will attribute to each SN a random host galaxy\n distance, out to the distance limit of our search, which is \\mbox{z$=$0.04} (assuming the galaxies within\n our search are spread homogeneously and isotropically throughout this\n volume). Considering that the majority, if not all, of the galaxies\n within our catalogue are low-luminosity, blue, dwarf galaxies with relatively\n low metallicities, we assume that host galaxies will have produced very\n little dust and will not contribute a great deal to any extinction toward\n the CCSN.\n To this end we simply attribute a typical Galactic extinction to each\n SN \\mbox{(A$_{g}=0.3$)}.\n \n We note that a fraction of SNe remain undetected within spiral galaxies that are not observed face on \\citep[e.g. ][]{2003astro.ph.10859C}. 
This is due to the fact that the average extinction attributed to SNe in spiral galaxies with higher inclination angles is higher than that in face-on spiral galaxies, making the SNe fainter and more difficult to detect. Implementation of a correction for this effect into the MCS, using a similar method to that of \citet{2005MNRAS.362..671R}, could easily result in an under-estimation of the CCSRs observed in low-metallicity galaxies, as these tend not to be grand-design spiral galaxies but rather dwarf, irregular galaxies, where the effect of galaxy inclination would be negligible. We have therefore decided not to attempt to correct for this effect. This may make our predictions for the SN discovery rates in the full galaxy catalogues (with no metallicity limit) somewhat optimistic. However this may not lead to a significant over-estimate of events, as the SFRs that we calculate come from the observed galaxy H$\alpha$ luminosities, and in inclined spirals these will also be lower than in face-on targets.

\subsection{Monte Carlo Simulation Results for a Single 2.0m Telescope}\label{sec_f}

Having included each of these parameters in the MCS, we randomly generate 100\,000 SNe that would potentially be observable within our search. The information gained about each SN produced by the simulation includes its type and its apparent magnitude at the point of observation. It is only if this magnitude is above the limiting magnitude of our search that we can register the SN as detected.

By running the MCS multiple times for (a) the varying number of hours we aim to observe with the LT every night and (b) the varying amount of exposure time that we dedicate to each individual galaxy, we can determine the optimal values of these parameters that shall enable us to detect the greatest fraction of the CCSNe that we have predicted our low-metallicity galaxy catalogue will produce. The results can be seen in Fig. \ref{exp_time}.

\begin{figure*}
 \begin{center}
 \includegraphics[totalheight=0.3\textheight]{exp_time.eps}
 \caption{\textrm{Percentage of CCSNe detected from our MCSs for a differing number of hours observing per night (labelled to the right of the plot). The percentage of CCSNe detected increases as the nightly observing time increases, as this increases the cadence of observations. As the individual exposure time per galaxy increases, the limiting magnitude of the search deepens but the cadence of observations decreases. There is therefore an optimal point at which the exposure time and cadence of observation are balanced to detect the maximum percentage of SNe.}}
 \label{exp_time}
 \end{center}
\end{figure*}

Studying the results of the MCSs, we suggest that the difference between the percentage of CCSNe detected while aiming to observe one hour per night as opposed to two hours per night or more is not enough to justify the extra observing time, as the fraction of CCSNe detected increases by only a few percent. As the detection rates are double-valued on either side of the peak detection rate, we suggest using the shorter exposure time in order to increase the cadence and thereby increase the probability of discovering CCSNe earlier in their evolution rather than later.

Aiming to observe one hour a night using 60 sec exposures allows each galaxy to be observed once every $\sim$17 days while accessible with the LT and enables 29.6\% of CCSNe to be detected with our search.
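For reference, the detected rates and the absolute limiting magnitudes quoted in the next two paragraphs follow from simple arithmetic on these numbers; a short check (ours, not part of the simulation):

\begin{verbatim}
import numpy as np

# detected rate = MCS detection efficiency x intrinsic rate of the catalogue
intrinsic_rate = 2.8                               # CCSNe per yr (727-galaxy sample)
for band, eff in [("g", 0.296), ("r", 0.478)]:
    print(band, round(eff * intrinsic_rate, 1))    # ~0.8 and ~1.3 CCSNe per yr

# absolute limit = apparent limiting magnitude - distance modulus - extinction
m_lim_r, A_r = 19.0, 0.22                          # 60 s LT exposure, A_g = 0.3
for d_mpc in (20, 70, 150):
    mu = 5 * np.log10(d_mpc * 1e6 / 10.0)
    print(d_mpc, round(m_lim_r - mu - A_r, 1))     # -12.7, -15.4, -17.1
\end{verbatim}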
When considering the catalogue of 727 low-metallicity galaxies which produce \\mbox{2.8 CCSNe yr$^{-1}$}, this equates to a detected CCSN rate of \\mbox{0.8 SNe yr$^{-1}$}\n and requires a total amount of \\mbox{154hrs\n yr$^{-1}$} telescope time. Remembering that the $r$-band is superior to the $g$-band for performing a SN search, we note that given a 60 sec exposure the AB limiting magnitudes for the Liverpool telescope are $g = 18.7$ mags and $r = 19.0$ mags. Given the average cadence of observations determined by our chosen survey parameters, a typical SN will be observed $\\sim$73 days post-explosion with $g-r=0.8$ mags. Given the difference in limiting magnitudes and the typical $g-r$ colour at the point of observation, we expect to detect SNe that are 1.1 magnitudes fainter with an $r$-band search than we would with a $g$-band search. Assuming that the template SN lightcurves and the distribution of SN peak magnitudes are similar in the $r$-band as compared to the $g$-band, we re-run the $g$-band MCS with a limiting magnitude 1.1 magnitudes fainter than previously used. From this simulation we predict that using the $r$-band will allow for $47.8\\%$ of CCSNe to be detected with our search. Again considering the catalogue of 727 low metallicity HSFGs which produce \\mbox{2.8 CCSNe yr$^{-1}$}, this equates to a detected CCSN rate of \\mbox{1.3 SNe yr$^{-1}$}.\n \n Assuming a typical Galactic extinction of A$_g=0.3$ (A$_r=0.22$) and an exposure\n time of 60 seconds we estimate that the absolute\n limiting magnitude ($25\\sigma$ significance) for this search at a distance of 20Mpc will be\n $M_r=-12.7$, at 70Mpc it will be $M_r=-15.4$ and at 150Mpc it will be\n $M_r=-17.1$. We plan to use the method of image matching and subtraction \\citep[see ][]{1998ApJ...503..325A} to detect SNe throughout this survey. Assuming original \nimages with equal depth, the process of image subtraction increases the\nnoise by roughly a factor of $\\sqrt{2}$. In addition, the detection of\nSNe inside their host galaxies will be always more difficult due to\nimage subtraction residuals caused by uncertainties in the alignment and\nmatching of the two images. Since we wish to avoid a large fraction of \nspurious detections, we require the relatively high significance level of \n25$\\sigma$ from the limiting magnitude of our search.\n\n \n\\subsection{Surveying with a Network of 2.0m Telescopes}\\label{sec_f2}\n\n As discussed previously SDSS DR5 covers only $\\sim$14\\% of the\n entire sky. If we presently had the ability to compile a catalogue of low-metallicity\n galaxies for the whole sky as complete as that for the area of the SDSS DR5\n spectroscopic survey and a collection of seven 2.0m telescopes (or\n three to four 2.0m telescopes with double the time allocation) we would expect to detect\n roughly 7 times the CCSNe detected solely by the LT, i.e. $\\sim$9.3\n CCSNe\/yr. We could consider using a network of six to eight 2.0m robotic telescopes similar to the RoboNet global network of 2.0m\n telescopes consisting of the LT and the Faulkes Telescope North and the\n Faulkes Telescope South to perform this kind of survey. 
Another advantage of using a network of telescopes in both the northern and southern hemispheres as opposed to a single telescope is that we gain a greater sky coverage and can therefore target a greater number of galaxies within our search.

The two obstacles that we would encounter if we were to use this strategy to search for CCSNe are firstly the generous amount of telescope time that we would require \mbox{($\sim$1\,000 hrs yr$^{-1}$)} and secondly the present lack of data required to compile an all-sky galaxy catalogue. Compiling a catalogue of all galaxies listed in the 2dF, 6dF, LEDA and SDSS DR5 within the redshift range \mbox{0$<$z$<$0.04} results in a catalogue containing a total of 103\,549 individual galaxies, only a fraction of which will be star-forming. Apart from this we shall have to wait for PS1, the prototype 1.8m telescope of the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS), which shall cover an area of 3$\pi$ steradians of the sky to a depth exceeding that of SDSS DR5, in order to compile a far more complete all-sky low-metallicity galaxy catalogue.

\section{Strategy 2 : volume-limited searches with the Pan-STARRS all-sky surveys} \label{sec_g}

Having considered a dedicated pointed low-metallicity galaxy survey to search for CCSNe both with one and with a network of 2.0m telescopes, we now turn our attention to an all-sky survey. Pan-STARRS is a set of four 1.8m telescopes with wide-field optics, each with a 7 square degree field of view, which will be constructed by the University of Hawaii. The prototype telescope PS1 has now achieved first light and is set to go online during 2008. PS1 will have the capability of surveying the whole available sky in less than a week at a rate of 6\,000 square degrees per night, covering 30\,000 square degrees of the sky per cycle \citep{PS1_DRM}. Included in the list of tasks that the PS1 image reduction pipeline will perform is the subtraction of every image from a reference image in order to produce a database of residual transient objects that will include moving objects and static variables such as SNe.

There are two different strategies that one would employ to search for SNe (or transients of any type): volume-limited searches and magnitude-limited searches. We will consider both of these. A volume-limited search has the advantage that it can quantify true rates of transients. In addition, the radial limit can be chosen so that the target discoveries are bright enough to be followed in multi-wavelength studies with complementary facilities (e.g. spectroscopic and photometric follow-up with 2-8m telescopes).

\subsection{Monte Carlo Simulations}\label{sec_h}

Of all the modes in which PS1 shall be run, the 3$\pi$ Steradian Survey (covering 30\,000 square degrees of the sky) shall be the most effective when searching for nearby CCSNe. The survey aims to cover the whole sky 60 times in 3 years, that is, 12 times in each of the five filters {\it g, r, i, z} and {\it y}. This aim has already taken historic weather patterns on Haleakala into consideration.
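These survey parameters are mutually consistent, as a two-line check shows (ours):

\begin{verbatim}
sky_area, nightly_rate = 30_000, 6_000    # square degrees
visits, years, filters = 60, 3, 5
print(sky_area / nightly_rate)            # 5 nights to cover the survey area
print(visits / filters)                   # 12 visits per filter over 3 years
\end{verbatim}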
The footprint of the survey completely covers that of the SDSS DR5 spectroscopic survey, and the AB limiting magnitude \mbox{(25$\sigma$)} in the $g$-band is stated to be 21.5 mags for a single 60 second exposure \citep{PS1_DRM}.

Having an adapted MCS produce 100\,000 potential SNe over the redshift range of our galaxy samples \mbox{($0<z<0.04$)}, we can infer the fraction of these CCSNe that a volume-limited PS1 search should detect.

Another future all-sky survey that has potential for SNe discoveries is Gaia: the European Space Agency's `super-Hipparcos' satellite with the objective of creating the most precise three-dimensional map of the Galaxy \citep{2005ASPC..338....3P}. The satellite shall have many additional capabilities, including the ability to detect nearby SNe (within a few hundred Mpc), and, with a predicted launch in December 2011, it is a potential competitor of Pan-STARRS and LSST. \cite{2003MNRAS.341..569B} have performed a feasibility study for Gaia similar to this one and have predicted that the satellite shall detect \mbox{$\sim$1\,420 CCSNe yr$^{-1}$} using a magnitude-limited survey strategy. Capable of detecting objects brighter than 20th magnitude, 1.5 magnitudes brighter than the $g=21.5$ limit for PS1, Gaia will reach only half the distance, and hence roughly an eighth of the volume, that PS1 shall survey. \citet{2003MNRAS.341..569B} employed a galaxy catalogue which more than likely neglected low-luminosity galaxies, which would result in the CCSN rate being under-predicted by a factor of up to $\sim$2. Hence if we scale the \citeauthor{2003MNRAS.341..569B} numbers by $\sim$16, the final numbers should be comparable with our PS1 estimates: this predicts \mbox{$\sim$22\,720 CCSNe yr$^{-1}$}, which is very close to the rate that we have predicted for PS1 using our MCSs, \mbox{$\sim$24\,000 CCSNe yr$^{-1}$} (see Table \ref{CCSN_survey_rates}). However, we note that these numbers are likely somewhat over-optimistic due to the treatment of host galaxy extinction in our MCS.
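The scaling applied here is simple to verify (a short arithmetic check, ours):

\begin{verbatim}
depth_ratio = 10 ** (1.5 / 5)          # 1.5 mag deeper -> ~2x the distance
factor = round(2 * depth_ratio ** 3)   # ~8x the volume, doubled again for the
                                       # low-luminosity hosts missing from the
                                       # input catalogue -> ~16
print(factor, 1420 * factor)           # 16 and 22720 CCSNe per yr, as quoted above
\end{verbatim}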
\section{Conclusions}

Having determined oxygen abundances, star-formation rates and CCSN rates for all spectroscopically typed star-forming galaxies in the Sloan Digital Sky Survey Data Release 5 within $z=0.04$, we have used Monte-Carlo simulations to predict the fraction of these CCSNe that we can expect to detect using different survey strategies. Using a single 2m telescope (with a standard CCD camera) search, we predict a detection rate of $\sim$1.3 CCSNe yr$^{-1}$ in galaxies with metallicities $12+\log({\rm O/H})<8.2$ which are within a volume that will allow detailed follow-up with 4m and 8m telescopes ($z=0.04$). With a network of seven 2m telescopes we estimate $\sim$9.3 CCSNe yr$^{-1}$ could be found, although this would require more than 1000\,hrs of telescope time allocated to the network. Within the same radial distance, a volume-limited search with the future Pan-STARRS PS1 all-sky survey should uncover 12.5 CCSNe yr$^{-1}$ in low-metallicity galaxies. Over a period of a few years this would allow a detailed comparison of their properties. We have also extended our calculations to determine the total numbers of CCSNe that can potentially be found in magnitude-limited surveys with PS1 (24\,000 yr$^{-1}$, within $z \lesssim 0.6$), PS4 (69\,000 yr$^{-1}$, within $z \lesssim 0.8$) and LSST (160\,000 yr$^{-1}$, within $z \lesssim 0.9$).

All considered, a final strategy for searching for CCSNe in low-metallicity environments shall realistically involve both a volume-limited and a magnitude-limited all-sky survey: the volume-limited galaxy sample allows relative SN rates to be determined accurately and provides some prior knowledge of the host galaxy characteristics, while the magnitude-limited survey retains the potential of detecting rare CCSN events that would otherwise have been missed had we only considered the volume-limited strategy.

With the huge number of CCSNe predicted to be detected, these all-sky surveys are set to serve as a catalyst for our understanding of CCSNe, including how their characteristics vary with metallicity, the relative rates of the various types of SNe, and the rates of extremely rare events similar to SN 2006jc and SN 2006gy and, more than likely, of events of a kind that has not yet been observed.

\begin{acknowledgements}

 Funding for the Sloan Digital Sky Survey (SDSS) and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. The SDSS Web site is http://www.sdss.org/.

 The SDSS is managed by the Astrophysical Research Consortium (ARC) for the Participating Institutions. The Participating Institutions are the American Museum of Natural History, Astrophysical Institute Potsdam, University of Basel, University of Cambridge, Case Western Reserve University, The University of Chicago, Drexel University, Fermilab, the Institute for Advanced Study, the Japan Participation Group, The Johns Hopkins University, the Joint Institute for Nuclear Astrophysics, the Kavli Institute for Particle Astrophysics and Cosmology, the Korean Scientist Group, the Chinese Academy of Sciences (LAMOST), Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, Ohio State University, University of Pittsburgh, University of Portsmouth, Princeton University, the United States Naval Observatory, and the University of Washington.

 This research has made use of the CfA Supernova Archive, which is funded in part by the National Science Foundation through grant AST 0606772.

 This work, conducted as part of the award ``Understanding the lives of massive stars from birth to supernovae'' (S.J. Smartt) made under the European Heads of Research Councils and European Science Foundation EURYI (European Young Investigator) Awards scheme, was supported by funds from the Participating Organisations of EURYI and the EC Sixth Framework Programme. SJS and DRY thank the Leverhulme Trust and DEL for funding. SM acknowledges financial support from the Academy of Finland, project 8120503.

\end{acknowledgements}