\section{Introduction}

In a generic metric clustering problem, there is a set of points $\mathcal{C}$ requiring service from a set of locations $\mathcal{F}$, where both $\mathcal{C}$ and $\mathcal{F}$ are embedded in some metric space. The sets $\mathcal{C}, \mathcal{F}$ need not be disjoint, and we may very well have $\mathcal{C} = \mathcal{F}$. The goal is then to choose a set of locations $S \subseteq \mathcal{F}$, where $S$ might have to satisfy additional problem-specific requirements, together with an assignment $\phi: \mathcal{C} \mapsto S$, such that a metric-related objective function over $\mathcal{C}$ is minimized.

However, in a variety of situations there may be external and metric-independent constraints imposed on $\phi$, regarding which pairs of points $j,j' \in \mathcal{C}$ should be clustered together, i.e., constraints forcing a linkage $\phi(j) = \phi(j')$. In this work, we generalize this deterministic requirement by introducing a novel family of \emph{stochastic pairwise constraints}. Our input is augmented with multiple sets $P_q$ of pairs of points from $\mathcal{C}$ ($P_q \subseteq \binom{\mathcal{C}}{2}$ for each $q$), and values $\psi_q \in [0,1]$. Given these, we ask for a randomized solution which ensures that, in expectation, at most $\psi_q |P_q|$ pairs of $P_q$ are separated in the returned assignment.
In Sections \ref{definitions}-\ref{Motivations}, we discuss how these constraints have enough expressive power to capture a wide range of applications, such as extending the notion of \emph{Individual Fairness} from classification to clustering, and incorporating elements of semi-supervised clustering.

Another constraint we address arises when $\mathcal{C} = \mathcal{F}$ and every chosen point $j \in S$ must serve as an exemplar of the cluster it defines (the set of all points assigned to it). The subtle difference here is that an exemplar point should be assigned to its own cluster, i.e., $\phi(j) = j$ for all $j \in S$. This constraint is highly relevant in strict classification settings, and is trivially satisfied in vanilla clustering variants where each point is always assigned to its nearest point in $S$. However, the presence of additional requirements on $\phi$ makes its satisfaction more challenging. Previous literature, especially in the context of fairness in clustering \cite{anderson2020,esmaeili2020,bera2020,bercea2019}, does not address this issue, but in our framework we explicitly offer the choice of whether or not to enforce it.

\subsection{Formal Problem Definitions}\label{definitions}

We are given a set of points $\mathcal{C}$ and a set of locations $\mathcal{F}$, in a metric space characterized by the distance function $d: (\mathcal{C} \cup \mathcal{F}) \times (\mathcal{C} \cup \mathcal{F}) \mapsto \mathbb{R}_{\geq 0}$, which satisfies the triangle inequality.
Moreover, the input includes a concise description of a set $\mathcal{L} \subseteq 2^{\mathcal{F}}$ that captures the allowable configurations of location openings.
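
Before formalizing the objectives and constraints, the following minimal sketch (in Python) illustrates one possible in-code representation of such an input; the coordinates, identifiers, and the single constraint set below are hypothetical toy data rather than part of the model.
\begin{verbatim}
# Minimal illustrative sketch of an input instance; all names and
# numbers are hypothetical toy data.
from itertools import combinations
from math import dist

points = {"j1": (0.0, 0.0), "j2": (0.1, 0.0), "j3": (5.0, 5.0)}  # C
locations = {"i1": (0.0, 0.1), "i2": (5.0, 4.9)}                 # F
coords = {**points, **locations}

def d(a, b):
    # Euclidean distance; any metric satisfying the triangle
    # inequality could be plugged in here.
    return dist(coords[a], coords[b])

# One stochastic pairwise constraint: a set P_1 of pairs from C and a
# tolerance psi_1, asking that in expectation at most psi_1 * |P_1|
# of these pairs end up separated.
P_1 = list(combinations(["j1", "j2", "j3"], 2))
psi_1 = 0.1
\end{verbatim}
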
The goal of all problems we consider, is to find a set $S \\subseteq \\mathcal{F}$, with $S \\in \\mathcal{L}$, and an efficiently-sampleable distribution $\\mathcal{D}$ over assignments $\\mathcal{C} \\mapsto S$, such that for a randomly drawn $\\phi \\sim \\mathcal{D}$ we have: (i) an objective function being minimized, and (ii) depending on the variant at hand, further constraints are satisfied by $\\phi$. We study two types of additional constraints imposed on $\\phi$.\n\\begin{itemize}\n \\item \\emph{Stochastic Pairwise Constraints} (SPC): We are given a family of sets $\\mathcal{P} = \\{P_1, P_2, \\ldots\\}$, where each $P_q \\subseteq \\binom{\\mathcal{C}}{2}$ is a set of pairs of points from $\\mathcal{C}$, and a sequence $\\psi = (\\psi_1, \\psi_2, \\ldots)$ with $\\psi_q \\in [0, 1]$. We then want $\\sum_{\\{j,j'\\} \\in P_q}\\Pr_{\\phi \\sim \\mathcal{D}}[\\phi(j) \\neq \\phi(j')] \\leq \\psi_q |P_q|, ~\\forall P_q \\in \\mathcal{P}$.\n \\item \\emph{Centroid Constraint} (CC): When this is imposed on any of our problems, we must first have $\\mathcal{C} = \\mathcal{F}$. In addition, we should ensure that $\\Pr_{\\phi \\sim \\mathcal{D}}[\\phi(i) = i]=1$ for all $i \\in S$.\n\\end{itemize}\n\\textbf{Special Cases of SPC:} When each $P_q \\in \\mathcal{P}$ has $|P_q| = 1$, we get two interesting resulting variants.\n\\begin{itemize}\n \\item $\\psi_q = 0, \\forall q$: For each $P_q = \\big{\\{}\\{j,j'\\}\\big{\\}}$ we must ensure that $j,j'$ have $\\Pr_{\\phi \\sim \\mathcal{D}}[\\phi(j) = \\phi(j')] = 1$, and hence we call such constraints \\emph{must-link} (ML). Further, since there is no actual randomness involved in these constraints, we assume w.l.o.g. that $|\\mathcal{D}| = 1$, and only solve for a single $\\phi: \\mathcal{C} \\mapsto S$ instead of a distribution over assignments.\n \\item $\\psi_q \\geq 0 , \\forall q$: For each $P_q = \\big{\\{}\\{j,j'\\}\\big{\\}}$ we must have $\\Pr_{\\phi \\sim \\mathcal{D}}[\\phi(j) \\neq \\phi(j')] \\leq \\psi_q$, and therefore we call this constraint \\emph{probabilistic-bounded-separation} (PBS).\n\\end{itemize}\nThe objective functions we consider are:\n\\begin{itemize}\n \\item \\textbf{$\\mathcal{L}$-center\/$\\mathcal{L}$-supplier:} Here we aim for the minimum $\\tau$ (``radius''), such that $\\Pr_{\\phi \\sim \\mathcal{D}}[d(\\phi(j),j) \\leq \\tau] = 1$ for all $j \\in \\mathcal{C}$. Further, in the $\\mathcal{L}$-center setting, we have $\\mathcal{C} = \\mathcal{F}$.\n \\item \\textbf{$\\mathcal{L}$-median ($p=1$)\/$\\mathcal{L}$-means ($p=2$):} Here the goal is to minimize $(\\sum_{j \\in \\mathcal{C}}\\mathbb{E}_{\\phi \\sim \\mathcal{D}}[d(\\phi(j),j)^p])^{1\/p}$.\n\\end{itemize}\n\nThere are four types of location specific constraints that we study in this paper. In the first, which we call \\emph{unrestricted}, $\\mathcal{L} = 2^\\mathcal{F}$ and hence any set of locations can serve our needs. In the second, we have $\\mathcal{L} = \\{S \\subseteq \\mathcal{F} ~|~ |S| \\leq k\\}$ for some given positive integer $k$. This variant gives rise to the popular \\emph{$k$-center\/$k$-supplier\/$k$-median\/$k$-means} objectives. In the third, we assume that each $i \\in \\mathcal{F}$ has an associated cost $w_i \\geq 0$, and for some given $W \\geq 0$ we have $\\mathcal{L} = \\{S \\subseteq \\mathcal{F}~|~ \\sum_{i \\in S}w_i \\leq W\\}$. In this case the resulting objectives are called \\emph{knapsack-center\/knapsack-supplier\/knapsack-median\/knapsack-means}. 
Finally, if the input also consists of a matroid $\\mathcal{M}=(\\mathcal{F}, \\mathcal{I})$, where $\\mathcal{I} \\subseteq 2^\\mathcal{F}$ the family of independent sets of $\\mathcal{M}$, we have $\\mathcal{L} = \\mathcal{I}$, and the objectives are called \\emph{matroid-center\/matroid-supplier\/matroid-median\/matroid-means}.\n\n\nTo specify the problem at hand, we use the notation \\textbf{Objective}-\\textbf{List of Constraints}. For instance, \\textbf{$\\mathcal{L}$-means-SPC-CC} is the $\\mathcal{L}$-means problem, where we additionally impose the SPC and CC constraint. We could also further specify $\\mathcal{L}$, by writing for example \\textbf{$k$-means-SPC-CC}. Moreover, observe that when no constraints on $\\phi$ are imposed, we get the vanilla version of each objective, where the lack of any stochastic requirement implies that the distribution $\\mathcal{D}$ once more has support of size $1$, i.e., $|\\mathcal{D}|=1$, and we simply solve for just an assignment $\\phi: \\mathcal{C} \\mapsto S$. \n\n\n\\subsection{Motivation}\\label{Motivations}\n\nIn this section we present a wide variety of applications, that can be effectively modeled by our newly introduced SPCs.\n\n\\textbf{Fairness: }With machine-learning clustering approaches being ubiquitous in everyday decision making, a natural question that arises and has recently captured the interest of the research community, is how to avoid clusterings which perpetuate existing social biases. \n\nThe \\emph{individual} approach to fair classification introduced in the seminal work of~\\cite{Dwork2012} assumes that we have access to an additional metric, separate from the feature space, which captures the true ``similarity'' between points (or some approximation of it). This similarity metric may be quite different from the feature space $d$ (e.g., due to redundant encodings of features such as race), and its ultimate purpose is to help ``treat similar candidates similarly''. Note now that the PBS constraint introduced earlier, can succinctly capture this notion. For two points $j,j'$, we may have $\\psi_{j,j'} \\in [0,1]$ as an estimate of their true similarity (with $0$ indicating absolute identity), and interpret unfair treatment as deterministically separating these two points in the final solution. Hence, a fair randomized approach would cluster $j$ and $j'$ apart with probability at most $\\psi_{j,j'}$.\n\nA recent work that explores individual fairness in clustering is \\cite{anderson2020}. Using our notation, the authors in that paper require a set $S \\in \\mathcal{L}$, and for all $j \\in \\mathcal{C}$ a distribution $\\phi_j$ that assigns $j$ to each $i \\in S$ with probability $\\phi_{i,j}$. Given that, they seek solutions that minimize the clustering objectives, while ensuring that for given pairs $j,j'$, their assignment distributions are statistically similar based on some metric $D$ that captures distributional proximity (e.g., total variation and KL-divergence). In other words, they interpret individual fairness as guaranteeing $D(\\phi_j, \\phi_{j'}) \\leq p_{j,j'}$ for all provided pairs $\\{j,j'\\}$ and values $p_{j,j'}$. Although this work is interesting in terms of initiating the discussion on individual fair clustering, it has a significant modeling issue. To be more precise, suppose that for $j,j'$ the computed $\\phi_j, \\phi_{j'}$ are both the uniform distribution over $S$. Then according to that paper's definition a fair solution is achieved. 
However, the actual probability of placing $j,j'$ in different clusters (hence treating them unequally) is almost $1$ if we do not consider any correlation between $\\phi_j$ and $\\phi_{j'}$. On the other hand, our definition which instead asks for a distribution $\\mathcal{D}$ over assignments $\\phi: \\mathcal{C} \\mapsto S$, always provides meaningful results, since it bounds the quantity that really matters, i.e., the probability of separating $j$ and $j'$ in a random $\\phi \\sim \\mathcal{D}$.\n\nAnother closely related work in the context of individual fair clustering is \\cite{brubach2020}. The authors of that paper study a special case of PBS, where for each $j,j' \\in \\mathcal{C}$ we have $\\psi_{j,j'} = d(j,j')\/\\tau^*$, with $\\tau^*$ the objective value of the optimal solution. They then provide a $\\log k$-approximation for the $k$-center objective under the above constraints. Compared to that, our framework 1) can handle the median and means objectives as well, 2) can incorporate further requirements on the set of chosen locations (unrestricted\/knapsack\/matroid), 3) allows for arbitrary values for the separation probabilities $\\psi_{j,j'}$, and 4) provides smaller constant-factor approximations for the objective functions.\n\n\\textbf{Semi-Supervised Clustering: }A common example of ML constraints is in the area of semi-supervised learning~\\cite{Wagstaff2001,Basu2008,Zhu2006}. There we assume that pairs of points have been annotated (e.g., by human experts) with additional information about their similarity~\\cite{Zhang2007}, or that some points may be explicitly labeled~\\cite{zhu2003,Bilenko2004integrating}, allowing pairwise relationships to be inferred. Then these extra requirements are incorporated in the algorithmic setting in the form of ML constraints. Further, our SPCs capture the scenario where the labeler generating the constraints is assumed to make some bounded number of errors (by associating each labeler with a set $P_q$ and an accuracy $\\psi_q$), and also allow for multiple labelers (e.g., from crowdsourcing labels) with different accuracies. Similar settings have been studied by~\\cite{Chang2017multiple,Luo2018semi} as well.\n\n\\textbf{OTU Clustering: }The field of metagenomics involves analyzing environmental samples of genetic material to explore the vast array of bacteria that cannot be analyzed through traditional culturing approaches. A common practice in the study of these microbial communities is the \\emph{de novo} clustering of genetic sequences (e.g., 16S rRNA marker gene sequences) into Operational Taxonomic Units (OTUs)~\\cite{Edgar2013,Westcott2017}, that ideally correspond to clusters of closely related organisms. One of the most ubiquitous approaches to this problem involves taking a fixed radius (e.g., $97\\%$ similarity based on string alignment~\\cite{Stackebrandt1994}) and outputting a set of center sequences, such that all points are assigned to a center within the given radius~\\cite{Edgar2013,Ghodsi2011}. In this case, we do not know the number of clusters a priori, but we may be able to generate pairwise constraints based on a distance\/similarity threshold as in~\\cite{Westcott2017} or reference databases of known sequences. Thus, the ``unrestricted'' variant of our framework is appropriate here, where the number of clusters should be discovered, but radius and pairwise information is known or estimated. 
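
To make the preceding point concrete, the sketch below shows one simple way pairwise constraints could be generated from a similarity threshold in this OTU setting; the $97\%$ cutoff mirrors the convention mentioned above, while the specific choice $\psi_{j,j'} = 1 - \mathrm{similarity}(j,j')$ for less similar pairs is only an illustrative assumption, not the procedure of the cited works.
\begin{verbatim}
# Hedged sketch: deriving pairwise constraints from sequence similarity.
# similarity(j, jp) is assumed to return a value in [0, 1].
from itertools import combinations

def otu_pairwise_constraints(seq_ids, similarity, ml_threshold=0.97):
    must_link, pbs = [], []
    for j, jp in combinations(seq_ids, 2):
        s = similarity(j, jp)
        if s >= ml_threshold:
            must_link.append((j, jp))       # never separate (psi = 0)
        else:
            pbs.append(((j, jp), 1.0 - s))  # separate w.p. at most 1 - s
    return must_link, pbs
\end{verbatim}
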
Other work in this area has considered \emph{conspecific probability}, a given probability that two different sequences belong to the same species (easily translated to PBS), and \emph{adverse triplets}, i.e., sets of ML constraints that cannot all be satisfied simultaneously (an appropriate scenario for a set $P_q$ as defined in Section \ref{definitions})~\cite{Edgar2018updating}.

\textbf{Community Preservation: }There are scenarios where clustering a \emph{group}/\emph{community} of points together is beneficial for the coherence and quality of the final solution. Examples of this include assigning students to schools such that students living in the same neighborhood are not placed into different schools, vaccinating people with similar demographics in a community (e.g., during a pandemic), and drawing congressional districts with the intent to avoid the practice of gerrymandering. Given such a group of points $G$, we let $P_G = \binom{G}{2}$, and set a tolerance parameter $\psi_G \in [0,1]$. Then, our SPCs will ensure that in expectation at most $\psi_G |P_G|$ of the pairs within $G$ are separated, and thus at least a $(1-\psi_G)$ fraction of the community's pairs is preserved in expectation. Finally, Markov's inequality also gives tail bounds on this degree of separation for all $G$.

\subsection{Our Contribution}

In Section \ref{sec:2} we present our main algorithmic result, which is based on the two-step approach of \cite{bercea2019,Chierichetti2017}. Unlike in previous works utilizing this technique, the most serious technical difficulty we faced was not in the LP-rounding procedure, but rather in the formulation of an appropriate assignment-LP relaxation. Letting $P_\mathcal{L}$ be any problem in $\{\mathcal{L}$-$\text{center}, \mathcal{L}$-$\text{supplier}, \mathcal{L}$-$\text{median}, \mathcal{L}$-$\text{means}\}$ and $\mathcal{L}$ any of the four location settings, we get:
\begin{theorem}\label{intr-thm1}
Let $\tau^*$ be the optimal value of a \textbf{$P_{\mathcal{L}}$-SPC} instance, and $\rho$ the best approximation ratio for \textbf{$P_{\mathcal{L}}$}. Then our algorithm chooses a set $S_{P_{\mathcal{L}}}$ and constructs an appropriate distribution over assignments $\mathcal{D}$, such that $S_{P_{\mathcal{L}}} \in \mathcal{L}$, $\sum_{\{j,j'\} \in P_q}\Pr_{\phi \sim \mathcal{D}}[\phi(j) \neq \phi(j')] \leq 2\psi_q |P_q| ~\forall P_q \in \mathcal{P}$, and
\begin{enumerate}
 \item $P_{\mathcal{L}}$ is $\mathcal{L}$-center$(\alpha=1)$/$\mathcal{L}$-supplier$(\alpha=2)$: Here we get $\Pr_{\phi \sim \mathcal{D}}[d(\phi(j), j) \leq (\alpha+\rho)\tau^*] = 1$, for all $j \in \mathcal{C}$.
 \item $P_{\mathcal{L}}$ is $\mathcal{L}$-median$(p=1)$/$\mathcal{L}$-means$(p=2)$: Here we get $(\sum_{j \in \mathcal{C}}\mathbb{E}_{\phi\sim \mathcal{D}}[d(\phi(j),j)^p])^{1/p} \leq (2+\rho)\tau^*$.
\end{enumerate}
Finally, sampling a $\phi \sim \mathcal{D}$ can be done in polynomial time.
\end{theorem}

Given that the value $\rho$ is a small constant for all variations of $P_{\mathcal{L}}$ that we consider, we see that our algorithmic framework indeed gives good near-optimal guarantees.
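
To illustrate what the distributional guarantee of Theorem \ref{intr-thm1} means operationally, the hedged sketch below estimates the expected number of separated pairs of a constraint set $P_q$ by repeatedly sampling assignments; the sampler \texttt{sample\_assignment} is a placeholder for the polynomial-time sampling of $\phi \sim \mathcal{D}$, and the resulting estimate is to be compared against the bound $2\psi_q|P_q|$.
\begin{verbatim}
# Hedged sketch: Monte-Carlo audit of an SPC guarantee.
def estimate_separation(sample_assignment, P_q, num_samples=1000):
    # sample_assignment() is assumed to return one assignment phi ~ D,
    # represented as a dict mapping each point to its location.
    total = 0
    for _ in range(num_samples):
        phi = sample_assignment()
        total += sum(1 for (j, jp) in P_q if phi[j] != phi[jp])
    return total / num_samples  # estimate of E[# separated pairs in P_q]
\end{verbatim}
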
Moreover, a tighter analysis when $\\mathcal{L}=2^\\mathcal{F}$ yields the next result.\n\\begin{theorem}\nWhen $\\mathcal{L}=2^\\mathcal{F}$, our algorithm has the same guarantees as those in Theorem \\ref{intr-thm1}, but this time the cost of the returned solution for \\textbf{$P_{unrestricted}$-SPC} is at most $\\tau^*$.\n\\end{theorem}\n\nAlthough imposing no constraint on the set of chosen locations yields trivial problems in vanilla settings, the presence of SPCs makes even this variant NP-hard. Specifically, we show the following theorem in Appendix \\ref{appendix}.\n\n\\begin{theorem}\\label{np-hard}\nThe problem \\textbf{unrestricted-$\\mathcal{O}$-SPC} is NP-hard, where $\\mathcal{O} \\in \\{\\text{center, supplier, median, means}\\}$.\n\\end{theorem}\n\nIn Section \\ref{sec:3} we consider settings where each $j \\in S$ must serve as an exemplar of its defining cluster. Hence, we incorporate the Centroid constraint in our problems. As mentioned earlier, previous work in the area of fair clustering had ignored this issue. Our first result follows.\n\\begin{theorem}\\label{intr-thm3}\nLet $\\tau^*$ the optimal value of a \\textbf{$k$-center-SPC-CC} instance. Then our algorithm chooses $S_k \\subseteq \\mathcal{C}$ and constructs a distribution $\\mathcal{D}$, such that sampling $\\phi \\sim \\mathcal{D}$ can be done efficiently, $|S_{k}| \\leq k$, $\\sum_{\\{j,j'\\} \\in P_q}\\Pr_{\\phi \\sim \\mathcal{D}}[\\phi(j) \\neq \\phi(j')] \\leq 2\\psi_q |P_q| ~\\forall P_q \\in \\mathcal{P}$, $\\Pr_{\\phi \\sim \\mathcal{D}}[d(\\phi(j), j) \\leq 3\\tau^*] = 1$ for all $j \\in \\mathcal{C}$, and $\\Pr_{\\phi \\sim \\mathcal{D}}[\\phi(i)=i] = 1$ for all $i \\in S_{k}$.\n\\end{theorem}\nTo address all objective functions under the Centroid constraint, we demonstrate (again in Section \\ref{sec:3}) a reassignment procedure that gives the following result.\n\\begin{theorem}\\label{intr-thm4}\nLet $\\lambda$ the approximation ratio for the objective of \\textbf{$P_\\mathcal{L}$-SPC} achieved in Theorem \\ref{intr-thm1}. Then, our reassignment procedure applied to the solution produced by the algorithm mentioned in Theorem \\ref{intr-thm1}, gives an approximation ratio $2\\lambda$ for \\textbf{$P_\\mathcal{L}$-SPC-CC}, while also preserving the SPC guarantees of Theorem \\ref{intr-thm1} and satisfying the CC, when $\\mathcal{L} = 2^\\mathcal{C}$ or $\\mathcal{L} = \\{S' \\subseteq \\mathcal{C}:~ |S'| \\leq k\\}$ for some given positive integer $k$.\n\\end{theorem}\n\nAs for ML constraints, since they are a special case of SPCs, our results for the latter also address the former. However, in Section \\ref{sec:4} we provide improved approximation algorithms for a variety of problem settings with ML constraints. Our main result is summarized in the following theorem. \n\\begin{theorem}\\label{intr-thm5}\nThere exists a $2\/3\/3\/3$-approximation algorithm for \\textbf{$k$-center-ML}\/\\textbf{knapsack-center-ML}\/\\textbf{$k$-supplier-ML}\/\\textbf{knapsack-supplier-ML}. This algorithm is also the best possible in terms of the approximation ratio, unless $P=NP$. In addition, it satisfies without any further modifications the Centroid constraint.\n\\end{theorem}\nAlthough ML constraints have been extensively studied in the semi-supervised literature \\cite{Basu2008}, to the extent of our knowledge we are the first to tackle them purely from a Combinatorial Optimization perspective, with the exception of \\cite{Davidson2010}. 
This paper provides a $(1+\\epsilon)$ approximation for $k$-center-ML, but only in the restricted $k=2$ setting. \n\n\\subsection{Further Related Work}\nClustering problems have been a longstanding area of research in Combinatorial Optimization, with all important settings being thoroughly studied \\cite{Hochbaum1986,Gonzalez1985,harris2017,byrka2017,ola2017,Chakra2019}.\n\nThe work that initiated the study of fairness in clustering is \\cite{Chierichetti2017}. That paper addresses a notion of demographic fairness, where points are given a certain color indicating some protected attribute, and then the goal is to compute a solution that enforces a fair representation of each color in every cluster. Further work on similar notions of demographic fairness includes \\cite{bercea2019,bera2020,esmaeili2020,huang2019,backurs2019,ahmadian2019}.\n\nFinally, a separation constraint similar to PBS is found in \\cite{Davidson2010}. In that paper however, the separation is deterministic and also depends on the underlying distance between two points. Due to their stochastic nature, our PBS constraints allow room for more flexible solutions, and also capture more general separation scenarios, since the $\\psi_p$ values can be arbitrarily chosen.\n\n\n\\subsection{An LP-Rounding Subroutine}\nWe present an important subroutine developed by \\cite{Kleinberg2002}, which we repeatedly use in our results, and call it \\textbf{KT-Round}. Suppose we have a set of elements $V$, a set of labels $L$, and a set of pairs $E \\subseteq \\binom{V}{2}$. Consider the following Linear Program (LP).\n\\begin{align}\n &\\displaystyle\\sum_{l \\in L}x_{l,v} = 1, ~&\\forall v \\in V& \\label{KT-1} \\\\\n&z_{e,l} \\geq x_{l,v} - x_{l,w}, ~&\\forall e = \\{v,w\\} \\in E, ~\\forall l \\in L& \\label{KT-2} \\\\\n&z_{e,l} \\geq x_{l,w} - x_{l,v}, ~&\\forall e = \\{v,w\\} \\in E, ~\\forall l \\in L& \\label{KT-3} \\\\\n&z_e = \\frac{1}{2}\\displaystyle\\sum_{l \\in L}z_{e,l}, ~&\\forall e = \\{v,w\\} \\in E& \\label{KT-4} \\\\\n&0 \\leq x_{l,v}, z_e, z_{e,l} \\leq 1, ~&\\forall v \\in V, \\forall e \\in E, \\forall l \\in L& \\label{KT-5}\n\\end{align}\n\n\\begin{theorem}{\\cite{Kleinberg2002}}\\label{round}\nGiven a feasible solution $(x, z)$ of (\\ref{KT-1})-(\\ref{KT-5}), there exists a randomized rounding approach \\textbf{KT-Round}($V,L,E,x,z$), which in polynomial expected time assigns each $v \\in V$ to a $\\phi(v) \\in L$, such that:\n\\begin{enumerate}\n \\item $\\Pr[\\phi(v) \\neq \\phi(w)] \\leq 2z_e, ~~\\forall e = \\{v,w\\} \\in E$\n \\item $\\Pr[\\phi(v) = l] = x_{l,v}, ~~~~\\forall v \\in V, ~\\forall l \\in L$\n\\end{enumerate}\n\\end{theorem}\n\\section{A General Framework for Approximating Clustering Problems with SPCs}\\label{sec:2}\n\nIn this section we show how to achieve approximation algorithms with provable guarantees for \\textbf{$\\mathcal{L}$-center-SPC\/$\\mathcal{L}$-supplier-SPC\/$\\mathcal{L}$-median-SPC\/$\\mathcal{L}$-means-SPC} using a general two-step framework. At first, let $P_{\\mathcal{L}}$ denote any of the vanilla versions of the objective functions we consider, i.e., $P_{\\mathcal{L}} \\in \\{\\mathcal{L}$-$\\text{center}, \\mathcal{L}$-$\\text{supplier}, \\mathcal{L}$-$\\text{median}, \\mathcal{L}$-$\\text{means}\\}$. \n\nTo tackle a $P_{\\mathcal{L}}$-SPC instance, we begin by using on it any known $\\rho$-approximation algorithm $A_{P_{\\mathcal{L}}}$ for $P_{\\mathcal{L}}$. 
This gives a set of locations $S_{P_{\\mathcal{L}}}$ and an assignment $\\phi_{P_{\\mathcal{L}}}$, which yield an objective function cost of $\\tau_{P_{\\mathcal{L}}}$ for the corresponding $P_{\\mathcal{L}}$ instance. In other words, we drop the SPC constraints from the $P_{\\mathcal{L}}$-SPC instance, and simply treat it as its vanilla counterpart. Although $\\phi_{P_{\\mathcal{L}}}$ may not satisfy the SPCs, we are going to use the set $S_{P_{\\mathcal{L}}}$ as our chosen locations. The second step in our framework would then consist of constructing the appropriate distribution over assignments. Toward that end, consider the following LP, where $P' = \\displaystyle\\cup_{P_q \\in \\mathcal{P}}P_q$.\n\\begin{align}\n& \\sum_{i \\in S_{P_{\\mathcal{L}}}}x_{i,j} = 1 &\\forall j \\in \\mathcal{C}& \\label{s2-LP-1} \\\\\n&z_{e,i} \\geq x_{i,j} - x_{i,j'} &\\forall e = \\{j,j'\\} \\in P', ~\\forall i \\in S_{P_{\\mathcal{L}}}& \\label{s2-LP-2}\\\\\n&z_{e,i} \\geq x_{i,j'} - x_{i,j} &\\forall e = \\{j,j'\\} \\in P', ~\\forall i \\in S_{P_{\\mathcal{L}}}& \\label{s2-LP-3}\\\\\n&z_e = \\frac{1}{2}\\sum_{i \\in S_{P_{\\mathcal{L}}}}z_{e,i} &\\forall e \\in P'& \\label{s2-LP-4}\\\\\n&\\sum_{e \\in P_q} z_e \\leq \\psi_q |P_q| &\\forall P_q \\in \\mathcal{P}& \\label{s2-LP-5}\\\\\n&0 \\leq x_{i,j}, z_e, z_{e,i} \\leq 1 &\\forall i \\in S_{P_{\\mathcal{L}}}, \\forall j \\in \\mathcal{C}, \\forall e \\in P'& \\label{s2-LP-6}\n\\end{align}\n\nThe variable $x_{i,j}$ can be interpreted as the probability of assigning point $j$ to location $i \\in S_{P_{\\mathcal{L}}}$. To understand the meaning of the $z$ variables, it is easier to think of the integral setting, where $x_{i,j} = 1$ iff $j$ is assigned to $i$ and $0$ otherwise. In this case, $z_{e,i}$ is $1$ for $e = \\{j,j'\\}$ iff exactly one of $j$ and $j'$ are assigned to $i$. Thus, $z_e$ is $1$ iff $j$ and $j'$ are separated. We will later show that in the fractional setting $z_e$ is a lower bound on the probability that $j$ and $j'$ are separated. Therefore, constraint (\\ref{s2-LP-1}) simply states that every point must be assigned to a center, and given the previous discussion, (\\ref{s2-LP-5}) expresses the provided SPCs.\n\nDepending on which exact objective function we optimize, we must augment LP (\\ref{s2-LP-1})-(\\ref{s2-LP-6}) accordingly.\n\\begin{itemize}\n \\item \\textbf{$\\mathcal{L}$-center ($\\alpha=1$)\/$\\mathcal{L}$-supplier ($\\alpha=2$):} Here we assume w.l.o.g. that the optimal radius $\\tau^*_{SPC}$ of the original $P_{\\mathcal{L}}$\\textbf{-SPC} instance is known. Observe that this value is always the distance between some point and some location, and hence there are only polynomially many alternatives for it. Thus, we execute our algorithm for each of those, and in the end keep the outcome that resulted in a feasible solution of minimum value. Given now $\\tau^*_{SPC}$, we add the following constraint to the LP. 
\n \\begin{align}\n x_{i,j} = 0, ~\\forall i,j: ~d(i,j) > \\tau_{P_{\\mathcal{L}}} + \\alpha \\cdot \\tau^*_{SPC}\\label{s2-LP-cnt}\n \\end{align}\n \\item \\textbf{$\\mathcal{L}$-median ($p=1$)\/$\\mathcal{L}$-means ($p=2$):} In this case, we augment the LP with the following objective function.\n \\begin{align}\n \\min ~ \\sum_{j \\in \\mathcal{C}}\\sum_{i \\in S_{P_{\\mathcal{L}}}}x_{i,j} \\cdot d(i,j)^p \\label{s2-LP-md}\n \\end{align}\n\\end{itemize}\n\nThe second step of our framework begins by solving the appropriate LP for each variant of $P_{\\mathcal{L}}$, in order to acquire a fractional solution $(\\bar x, \\bar z)$ to that LP. Finally, the distribution $\\mathcal{D}$ over assignments $\\mathcal{C} \\mapsto S_{P_{\\mathcal{L}}}$ is constructed by running \\textbf{KT-Round}($\\mathcal{C}, S_{P_{\\mathcal{L}}}, P',\\bar x, \\bar z$). Notice that this will yield an assignment $\\phi \\sim \\mathcal{D}$, where $\\mathcal{D}$ results from the internal randomness of \\textbf{KT-Round}. Our overall approach for solving \\textbf{$P_{\\mathcal{L}}$-SPC} is presented in Algorithm \\ref{alg-1}.\n\n\\begin{algorithm}[t]\n$(S_{P_{\\mathcal{L}}}, \\phi_{P_{\\mathcal{L}}}) \\gets A_{P_{\\mathcal{L}}}(\\mathcal{C}, \\mathcal{F}, \\mathcal{L})$\\;\nSolve LP (\\ref{s2-LP-1})-(\\ref{s2-LP-6}) with (\\ref{s2-LP-cnt}) for \\textbf{$\\mathcal{L}$-center\/$\\mathcal{L}$-supplier}, and with (\\ref{s2-LP-md}) for \\textbf{$\\mathcal{L}$-median\/$\\mathcal{L}$-means}, and get a fractional solution $(\\bar x, \\bar z)$\\;\n$\\phi \\gets$ \\textbf{KT-Round}$(\\mathcal{C}, S_{P_{\\mathcal{L}}}, P', \\bar x, \\bar z)$\\;\n\\caption{Approximating \\textbf{$P_{\\mathcal{L}}$-SPC}}\\label{alg-1}\n\\end{algorithm}\n\n\\begin{theorem}\\label{s2-thm1}\nLet $\\tau^*_{SPC}$ the optimal value of the given \\textbf{$P_{\\mathcal{L}}$-SPC} instance. Then Algorithm \\ref{alg-1} guarantees that $S_{P_{\\mathcal{L}}} \\in \\mathcal{L}$, $\\sum_{\\{j,j'\\} \\in P_q}\\Pr_{\\phi \\sim \\mathcal{D}}[\\phi(j) \\neq \\phi(j')] \\leq 2\\psi_q |P_q| ~\\forall P_q \\in \\mathcal{P}$ and\n\\begin{enumerate}\n \\item $P_{\\mathcal{L}}$ is $\\mathcal{L}$-center$(\\alpha=1)$\/$\\mathcal{L}$-supplier$(\\alpha=2)$: Here we get $\\Pr_{\\phi \\sim \\mathcal{D}}[d(\\phi(j), j) \\leq \\alpha\\cdot \\tau^*_{SPC} + \\tau_{P_{\\mathcal{L}}}] = 1$, for all $j \\in \\mathcal{C}$.\n \\item $P_{\\mathcal{L}}$ is $\\mathcal{L}$-median$(p=1)$\/$\\mathcal{L}$-means$(p=2)$: Here we get $(\\sum_{j \\in \\mathcal{C}}\\mathbb{E}_{\\phi\\sim \\mathcal{D}}[d(\\phi(j),j)^p])^{1\/p} \\leq 2\\tau^*_{SPC} + \\tau_{P_{\\mathcal{L}}}$.\n\\end{enumerate}\n\\end{theorem}\n\n\\begin{proof}\nAt first, notice that since $S_{P_{\\mathcal{L}}}$ results from running $A_{P_{\\mathcal{L}}}$, it must be the case that $S_{P_{\\mathcal{L}}} \\in \\mathcal{L}$.\n\nFocus now on LP (\\ref{s2-LP-1})-(\\ref{s2-LP-6}) with either (\\ref{s2-LP-cnt}) or (\\ref{s2-LP-md}), depending on the underlying objective. In addition, let $S^* \\in \\mathcal{L}$ and $\\mathcal{D}^*$ be the set of locations and the distribution over assignments $\\mathcal{C} \\mapsto S^*$, that constitute the optimal solution of \\textbf{$P_{\\mathcal{L}}$-SPC}. Given those, let $x^*_{i,j} = \\Pr_{\\phi \\sim \\mathcal{D}^*}[\\phi(j) = i]$ for all $i \\in S^*$ and all $j \\in \\mathcal{C}$. Moreover, for every $i' \\in S^*$ let $\\kappa(i') = \\argmin_{i \\in S_{P_{\\mathcal{L}}}}d(i,i')$ by breaking ties arbitrarily. 
Finally, for all $i \in S_{P_{\mathcal{L}}}$ we define $N(i) = \{i' \in S^* ~|~ i = \kappa(i')\}$, and notice that the sets $N(i)$ form a partition of $S^*$.

Consider now the vectors $\hat{x}_{i,j} = \sum_{i' \in N(i)}x^*_{i',j}$ for every $i \in S_{P_{\mathcal{L}}}$ and $j \in \mathcal{C}$, and $\hat{z}_{e,i} = |\hat{x}_{i,j} - \hat{x}_{i,j'}|$ for every $e = \{j,j'\} \in P'$ and $i \in S_{P_{\mathcal{L}}}$. We first show that the above vectors constitute a feasible solution of LP (\ref{s2-LP-1})-(\ref{s2-LP-6}). First, notice that constraints (\ref{s2-LP-2}), (\ref{s2-LP-3}), (\ref{s2-LP-4}), (\ref{s2-LP-6}) are trivially satisfied. Regarding constraint (\ref{s2-LP-1}), for any $j \in \mathcal{C}$ we have:
\begin{align}
 \sum_{i \in S_{P_{\mathcal{L}}}}\hat{x}_{i,j} = \sum_{i \in S_{P_{\mathcal{L}}}}\sum_{i' \in N(i)}x^*_{i',j} = \sum_{i' \in S^*}x^*_{i',j} = 1 \notag
\end{align}
The second equality follows because the sets $N(i)$ induce a partition of $S^*$. The last equality is due to the optimal solution $\mathcal{D}^*, S^*$ satisfying $\sum_{i \in S^*}\Pr_{\phi \sim \mathcal{D}^*}[\phi(j) = i] = 1$.

To show satisfaction of constraint (\ref{s2-LP-5}), focus on any $e = \{j,j'\} \in P'$ and $i \in S_{P_{\mathcal{L}}}$. We then have:
\begin{align}
 \hat{z}_{e,i} = \Big{|}\sum_{i' \in N(i)}(x^*_{i',j} - x^*_{i',j'})\Big{|} \leq \sum_{i' \in N(i)}| x^*_{i',j} - x^*_{i',j'} |\notag
\end{align}
Therefore, we can easily upper bound $\hat{z}_e$ as follows:
\begin{align}
\hat{z}_e &= \frac{1}{2}\sum_{i \in S_{P_{\mathcal{L}}}}\hat{z}_{e,i} \leq \frac{1}{2}\sum_{i \in S_{P_{\mathcal{L}}}}\sum_{i' \in N(i)}| x^*_{i',j} - x^*_{i',j'} | \notag \\
 &\leq \frac{1}{2}\displaystyle\sum_{i' \in S^*}| x^*_{i',j} - x^*_{i',j'} | \label{z-bound}
\end{align}
To move on, notice that:
\begin{align}
\Pr_{\phi \sim \mathcal{D}^*}[\phi(j) = \phi(j')] &= \sum_{i' \in S^*}\Pr_{\phi \sim \mathcal{D}^*}[\phi(j) = i' ~ \wedge ~ \phi(j') = i'] \notag \\
&\leq \sum_{i' \in S^*} \min\{ x^*_{i',j}, ~ x^*_{i',j'}\} \label{z-bound-2}
\end{align}
To relate (\ref{z-bound}) and (\ref{z-bound-2}), consider the following identity.
\begin{align}
&\sum_{i' \in S^*} \min\{ x^*_{i',j}, ~ x^*_{i',j'}\} + \frac{1}{2} \sum_{i' \in S^*}| x^*_{i',j} - x^*_{i',j'}| = \notag \\ 
&\sum_{i' \in S^*} \Big{(}\min\{ x^*_{i',j}, ~ x^*_{i',j'}\} + \frac{| x^*_{i',j} - x^*_{i',j'}|}{2}\Big{)} = \notag \\
&\sum_{i' \in S^*} \frac{ x^*_{i',j} + x^*_{i',j'}}{2} = \frac{2}{2} = 1 \label{z-bound-3}
\end{align}
Finally, combining (\ref{z-bound}), (\ref{z-bound-2}), (\ref{z-bound-3}) we get:
\begin{align}
\hat{z}_e \leq 1 - \displaystyle\sum_{i' \in S^*} \min\{ x^*_{i',j}, ~x^*_{i',j'}\} \leq \Pr_{\phi \sim \mathcal{D}^*}[\phi(j) \neq \phi(j')] \notag
\end{align}
Given the above, for every $P_q \in \mathcal{P}$ we have:
\begin{align}
\sum_{e \in P_q}\hat{z}_e \leq \sum_{\{j,j'\} \in P_q}\Pr_{\phi \sim \mathcal{D}^*}[\phi(j) \neq \phi(j')] \leq \psi_q |P_q| \notag
\end{align}
where the last inequality follows from optimality of $\mathcal{D}^*$.

Now that we know that $(\hat x, \hat z)$ is a feasible solution for (\ref{s2-LP-1})-(\ref{s2-LP-6}), we proceed by considering how this solution affects the objective function of each underlying
problem.\n\n\\textbf{$\\mathcal{L}$-center$(\\alpha=1)$\/$\\mathcal{L}$-supplier$(\\alpha=2)$:} The objective here is captured by the additional constraint (\\ref{s2-LP-cnt}). Hence, we also need to show that $\\hat x$ satisfies (\\ref{s2-LP-cnt}), i.e., that for all $i \\in S_{P_{\\mathcal{L}}}$, $j \\in \\mathcal{C}$ for which $d(i,j) > \\tau_{P_{\\mathcal{L}}} + \\alpha \\cdot \\tau^*_{SPC}$, we have $\\hat{x}_{i,j} = 0$. \n\nSuppose for the sake of contradiction that there exists a $j \\in \\mathcal{C}$ and an $i \\in S_{P_{\\mathcal{L}}}$ such that $d(i,j) > \\tau_{P_{\\mathcal{L}}} + \\alpha \\cdot \\tau^*_{SPC}$ and $\\hat{x}_{i,j} > 0$. Since $\\hat{x}_{i,j} = \\sum_{i' \\in N(i)}x^*_{i',j}$, this implies that there exists $i' \\in N(i)$ with $x^*_{i',j} > 0$, which consecutively implies $d(i',j) \\leq \\tau^*_{SPC}$. By the triangle inequality we get: \n\\begin{align}\n d(i,j) \\leq d(i,i') + d(i',j) \\leq d(i,i') + \\tau^*_{SPC} \\label{s2-aux1}\n\\end{align}\nIn $\\mathcal{L}$-center, because $i' \\in N(i)$ we also have $d(i,i') \\leq \\tau_{P_{\\mathcal{L}}}$, and so we reach a contradiction. In $\\mathcal{L}$-supplier we have:\n\\begin{align}\n d(i,i') &\\leq d(i', \\phi_{P_{\\mathcal{L}}}(j)) \\leq d(i',j) + d(\\phi_{P_{\\mathcal{L}}}(j), j) \\notag \\\\ &\\leq \\tau^*_{SPC} + \\tau_{P_{\\mathcal{L}}} \\label{unrestr}\n\\end{align}\nCombining (\\ref{s2-aux1}),(\\ref{unrestr}) gives the desired contradiction for the case of $\\mathcal{L}$-supplier as well.\n\n\\textbf{$\\mathcal{L}$-median$(p=1)$\/$\\mathcal{L}$-means$(p=2)$:} Here the overall objective function for $\\hat x$ is given by:\n\\begin{align}\n &\\Big{(} \\sum_{j \\in \\mathcal{C}}\\sum_{i \\in S_{P_{\\mathcal{L}}}}\\hat{x}_{i,j}d(i,j)^p\\Big{)}^{\\frac{1}{p}} = \\notag \\\\\n &\\Big{(} \\sum_{j \\in \\mathcal{C}}\\sum_{i \\in S_{P_{\\mathcal{L}}}}\\sum_{i' \\in N(i)}x^*_{i',j}d(i,j)^p\\Big{)}^{\\frac{1}{p}} \\label{s2-aux2}\n\\end{align}\nIn addition, for $i' \\in N(i)$ we also get:\n\\begin{align}\n d(i,j) &\\leq d(i,i') + d(i',j) \\leq d(i', \\phi_{P_{\\mathcal{L}}}(j)) + d(i',j) \\notag \\\\\n &\\leq d(i',j) + d(\\phi_{P_{\\mathcal{L}}}(j), j) + d(i',j) \\notag \\\\ &\\leq 2d(i',j) + d(\\phi_{P_{\\mathcal{L}}}(j), j) \\label{s2-aux3}\n\\end{align}\nCombining (\\ref{s2-aux2}), (\\ref{s2-aux3}) and the fact that the median and means objectives are monotone norms, we get (\\ref{s2-aux2}) $\\leq A + B$, where:\n\\begin{align}\n A &= \\Big{(} \\sum_{j \\in \\mathcal{C}}\\sum_{i \\in S_{P_{\\mathcal{L}}}}\\sum_{i' \\in N(i)}2x^*_{i',j}d(i',j)^p\\Big{)}^{\\frac{1}{p}} \\notag \\\\\n &= 2\\Big{(} \\sum_{j \\in \\mathcal{C}}\\sum_{i' \\in S^*}x^*_{i',j}d(i',j)^p\\Big{)}^{\\frac{1}{p}} \\leq 2\\tau^*_{SPC} \\label{s2-auxA} \\\\\n B &= \\Big{(} \\sum_{j \\in \\mathcal{C}}\\sum_{i \\in S_{P_{\\mathcal{L}}}}\\sum_{i' \\in N(i)}x^*_{i',j}d(\\phi_{P_{\\mathcal{L}}}(j),j)^p\\Big{)}^{\\frac{1}{p}} \\notag \\\\\n &=\\Big{(} \\sum_{j \\in \\mathcal{C}}d(\\phi_{P_{\\mathcal{L}}}(j),j)^p\\sum_{i \\in S_{P_{\\mathcal{L}}}}\\sum_{i' \\in N(i)}x^*_{i',j}\\Big{)}^{\\frac{1}{p}} \\notag \\\\\n &=\\Big{(} \\sum_{j \\in \\mathcal{C}}d(\\phi_{P_{\\mathcal{L}}}(j),j)^p\\sum_{i' \\in S^*}x^*_{i',j}\\Big{)}^{\\frac{1}{p}} \\notag \\\\\n &=\\Big{(} \\sum_{j \\in \\mathcal{C}}d(\\phi_{P_{\\mathcal{L}}}(j),j)^p\\Big{)}^{\\frac{1}{p}} = \\tau_{P_{\\mathcal{L}}}\\label{s2-auxB}\n\\end{align}\nCombining (\\ref{s2-auxA}), (\\ref{s2-auxB}) we finally get (\\ref{s2-aux2}) $\\leq 2\\tau^*_{SPC} + \\tau_{P_{\\mathcal{L}}}$.\n\nSince $(\\hat x, \\hat z)$ 
is a feasible solution to the appropriate version of the assignment LP, step 2 of Algorithm \\ref{alg-1} is well-defined, and thus can compute a solution $(\\bar x, \\bar z)$ that satisfies (\\ref{s2-LP-1})-(\\ref{s2-LP-6}) and additionally: \\textbf{i)} \\begin{align}\n \\bar{x}_{i,j} = 0, ~\\forall i,j: ~d(i,j) > \\tau_{P_{\\mathcal{L}}} + \\alpha \\cdot \\tau^*_{SPC} \\label{s2-bar-cnt}\n\\end{align} \nfor $\\mathcal{L}$-center$(\\alpha=1)$\/$\\mathcal{L}$-supplier$(\\alpha=2)$, and \\textbf{ii)} \n\\begin{align}\n \\Big{(} \\sum_{j \\in \\mathcal{C}}\\sum_{i \\in S_{P_{\\mathcal{L}}}}\\bar{x}_{i,j}d(i,j)^p\\Big{)}^{\\frac{1}{p}} &\\leq \\Big{(} \\sum_{j \\in \\mathcal{C}}\\sum_{i \\in S_{P_{\\mathcal{L}}}}\\hat{x}_{i,j}d(i,j)^p\\Big{)}^{\\frac{1}{p}} \\notag \\\\\n &\\leq 2\\tau^*_{SPC} + \\tau_{P_{\\mathcal{L}}} \\label{s2-bar-md}\n\\end{align}\nfor $\\mathcal{L}$-median$(p=1)$\/$\\mathcal{L}$-means$(p=2)$.\n\nBecause $(\\bar x, \\bar z)$ satisfies (\\ref{s2-LP-1}), (\\ref{s2-LP-2}), (\\ref{s2-LP-3}), (\\ref{s2-LP-4}), (\\ref{s2-LP-6}), \\textbf{KT-Round} can be applied for $V = \\mathcal{C}$, $L = S_{P_{\\mathcal{L}}}$, $E = P'$. Let $\\phi$ be the assignment returned by \\textbf{KT-Round}$(\\mathcal{C}, S_{P_{\\mathcal{L}}}, P', \\bar x, \\bar z)$, and $\\mathcal{D}$ the distribution representing the internal randomness of this process. From Theorem \\ref{round} we have $\\Pr_{\\phi \\sim \\mathcal{D}}[\\phi(j) \\neq \\phi(j')] \\leq 2\\bar{z}_e$, $\\forall e = \\{j,j'\\} \\in P'$. Hence, for every $P_q \\in \\mathcal{P}$:\n\\begin{align}\n \\sum_{\\{j,j'\\} \\in P_q}\\Pr_{\\phi \\sim \\mathcal{D}}[\\phi(j) \\neq \\phi(j')] \\leq 2\\sum_{e \\in P_q}\\bar{z}_e \\leq 2\\psi_q |P_q| \\notag\n\\end{align}\nThe last inequality is due to $\\bar z$ satisfying (\\ref{s2-LP-5}).\n\nRegarding all the different objective functions, we have the following. For $\\mathcal{L}$-center$(\\alpha=1)$\/$\\mathcal{L}$-supplier$(\\alpha=2)$, because of (\\ref{s2-bar-cnt}) and the second property of Theorem \\ref{round}, we know that a point $j \\in \\mathcal{C}$ will never be assigned to a location $i \\in S_{P_{\\mathcal{L}}}$, such that $d(i,j) > \\tau_{P_{\\mathcal{L}}} + \\alpha \\cdot \\tau^*_{SPC}$. Therefore, $\\Pr_{\\phi \\sim \\mathcal{D}}[d(\\phi(j), j) \\leq \\tau_{P_{\\mathcal{L}}} + \\alpha \\cdot \\tau^*_{SPC}] = 1$ for all $j \\in \\mathcal{C}$. As for the $\\mathcal{L}$-median$(p=1)$\/$\\mathcal{L}$-means$(p=2)$ objectives, the second property of Theorem \\ref{round} ensures:\n\\begin{align}\n &\\Big{(}\\sum_{j \\in \\mathcal{C}}\\mathbb{E}_{\\phi\\sim \\mathcal{D}}[d(\\phi(j),j)^p]\\Big{)}^{1\/p} = \\notag \\\\\n &\\Big{(}\\sum_{j \\in \\mathcal{C}}\\sum_{i \\in S_{P_{\\mathcal{L}}}}\\Pr_{\\phi \\sim \\mathcal{D}}[\\phi(j) = i]\\cdot d(i,j)^p\\Big{)}^{1\/p} = \\notag \\\\\n &\\Big{(}\\sum_{j \\in \\mathcal{C}}\\sum_{i \\in S_{P_{\\mathcal{L}}}}\\bar{x}_{i,j}\\cdot d(i,j)^p\\Big{)}^{1\/p} \\leq 2\\tau^*_{SPC} + \\tau_{P_{\\mathcal{L}}} \\notag\n\\end{align}\nwhere the last inequality follows from (\\ref{s2-bar-md}).\n\\end{proof}\n\nSince $P_{\\mathcal{L}}$ is a less restricted version of $P_{\\mathcal{L}}$-SPC, the optimal solution value $\\tau^*_{P_{\\mathcal{L}}}$ for $P_{\\mathcal{L}}$ in the original instance where we dropped the SPCs, should satisfy $\\tau^*_{P_{\\mathcal{L}}} \\leq \\tau^*_{SPC}$. Therefore, because $A_{P_{\\mathcal{L}}}$ is a $\\rho$-approximation algorithm for $P_{\\mathcal{L}}$, we get $\\tau_{P_{\\mathcal{L}}} \\leq \\rho\\cdot \\tau^*_{SPC}$. 
The latter implies the following.\n\n\\begin{corollary}\nThe approximation ratio achieved through Algorithm \\ref{alg-1} is $(\\rho+1)$ for $\\mathcal{L}$-center-SPC, and $(\\rho+2)$ for $\\mathcal{L}$-supplier-SPC\/$\\mathcal{L}$-median-SPC\/$\\mathcal{L}$-means-SPC.\n\\end{corollary}\n\n\\noindent \\textbf{Tighter analysis for the} \\emph{unrestricted} \\textbf{($\\mathcal{L} = 2^\\mathcal{F}$) case:} For this case, a more careful analysis leads to the following. \n\n\\begin{theorem}\nWhen $\\mathcal{L} = 2^\\mathcal{F}$, Algorithm \\ref{alg-1} achieves an objective value of at most $\\tau^*_{SPC}$ for all objectives we study (center\/supplier\/median\/means).\n\\end{theorem}\n\n\\begin{proof}\nNote that the first step of our framework will choose $S_{P_{\\mathcal{L}}} = \\mathcal{F}$, and hence $\\tau_{P_{\\mathcal{L}}} = 0$. In addition, a closer examination of our analysis for the supplier\/median\/means settings reveals the following. Focus on (\\ref{s2-aux1}) and (\\ref{s2-aux3}), and note that because $S_{P_{\\mathcal{L}}} = \\mathcal{F}$, we get $d(i,i') = 0$ since $\\kappa(i') = i'$ for each $ i' \\in S^*$. Hence, $d(i,j) \\leq d(i',j)$ which leads to an objective function value of at most $\\tau^*_{SPC}$ for all problems.\n\\end{proof}\n\\section{Addressing the Centroid Constraint}\\label{sec:3}\n\nIn this section we present results that incorporate the Centroid Constraint (CC) to a variety of the settings we study. Moreover, recall that for this case $\\mathcal{C} =\\mathcal{F}$, and hence the supplier objective reduces to the center one.\n\n\\subsection{Approximating $k$-center-SPC-CC}\n\nOur approach for solving this problem heavily relies on Algorithm \\ref{alg-1} with two major differences.\n\nThe first difference compared to Algorithm \\ref{alg-1} lies in the approximation algorithm $A_{k}$ used to tackle $k$-center. For $k$-center there exists a $2$-approximation which given a target radius $\\tau$, it either returns a solution where each $j \\in \\mathcal{C}$ gets assigned to a location $i_j$ with $d(i_j, j) \\leq 2\\tau$, or outputs an ``infeasible'' message, indicating that there exists no solution of radius $\\tau$ (\\cite{Hochbaum1986}).\n\nRecall now that w.l.o.g. the optimal radius $\\tau^{*}_{C}$ for the $k$-center-SPC-CC instance is known. In the first step of our framework we will use the variant of $A_{k}$ mentioned earlier with $\\tau^{*}_{C}$ as its target radius, and get a set of chosen locations $S_k$. The second step is then the same as in Algorithm \\ref{alg-1}, with the addition of the next constraint to the assignment LP:\n\\begin{align}\n x_{i,i} = 1, ~\\forall i \\in S_{k} \\label{s3-cnt-centr}\n\\end{align}\nThe overall process is presented in Algorithm \\ref{alg-2}.\n\n\\begin{algorithm}[t]\n$(S_{k}, \\phi_{k}) \\gets A_{k}(\\mathcal{C}, \\mathcal{F}, \\mathcal{L}, \\tau^{*}_{C})$\\;\nSolve LP (\\ref{s2-LP-1})-(\\ref{s2-LP-6}) with (\\ref{s2-LP-cnt}), (\\ref{s3-cnt-centr}) and $S_k$ as the chosen locations, and get a solution $(\\bar x, \\bar z)$\\;\n$\\phi \\gets$ \\textbf{KT-Round}$(\\mathcal{C}, S_{k}, P', \\bar x, \\bar z)$\\;\n\\caption{Approximating \\textbf{$k$-center-SPC-CC}}\\label{alg-2}\n\\end{algorithm}\n\n\\begin{theorem}\\label{s3-thm1}\nLet $\\tau^*_{C}$ the optimal value of the given \\textbf{$k$-center-SPC-CC} instance, and $\\mathcal{D}$ the distribution over assignments given by \\textbf{KT-Round}. 
Then Algorithm \\ref{alg-2} guarantees $|S_{k}| \\leq k$, $\\sum_{\\{j,j'\\} \\in P_q}\\Pr_{\\phi \\sim \\mathcal{D}}[\\phi(j) \\neq \\phi(j')] \\leq 2\\psi_q |P_q| ~\\forall P_q \\in \\mathcal{P}$, $\\Pr_{\\phi \\sim \\mathcal{D}}[d(\\phi(j), j) \\leq 3\\tau^*_{C}] = 1$ for all $j \\in \\mathcal{C}$, and $\\Pr_{\\phi \\sim \\mathcal{D}}[\\phi(i)=i] = 1$ for all $i \\in S_{k}$.\n\\end{theorem}\n\n\\begin{proof}\nAt first, notice that since $S_{k}$ results from running $A_{k}$, it must be the case that $|S_{k}| \\leq k$.\n\nFurthermore, because the optimal value of the $k$-center instance is less than $\\tau^{*}_{C}$, $A_{k}(\\mathcal{C}, \\mathcal{F}, \\mathcal{L}, \\tau^{*}_{C})$ will not return ``infeasible'', and $(S_{k}, \\phi_{k})$ has an objective value $\\tau_{k} \\leq 2\\tau^{*}_{C}$.\n\nGiven this, the reasoning in the proof of Theorem \\ref{s2-thm1} yields $\\sum_{\\{j,j'\\} \\in P_q}\\Pr_{\\phi \\sim \\mathcal{D}}[\\phi(j) \\neq \\phi(j')] \\leq 2\\psi_q |P_q|, ~\\forall P_q \\in \\mathcal{P}$, and $\\Pr_{\\phi \\sim \\mathcal{D}}[d(\\phi(j), j) \\leq 3\\tau^*_{C}] = 1$ for all $j \\in \\mathcal{C}$, if the $\\hat x$ defined in that proof also satisfies (\\ref{s3-cnt-centr}). Hence, the latter statement regarding $\\hat x$ is all we need to verify.\n\nA crucial property of $A_{k}$ when executed with a target radius $\\tau^*_C$, is that for all $i, \\ell \\in S_{k}$ with $i \\neq \\ell$ we have $d(i, \\ell) > 2 \\tau^*_C$ \\cite{Hochbaum1986}. Due to this, for all $i \\in S_k$ and $i' \\in S^*$ with $d(i,i') \\leq \\tau^*_C$, we have $i' \\in N(i)$. Suppose otherwise, and let $i' \\in N(\\ell)$ for some other $\\ell \\in S_k$. By definition this gives $d(i', \\ell) \\leq d(i',i) \\leq \\tau^*_C$. Thus, $d(i,\\ell) \\leq d(i,i') + d(i', \\ell) \\leq 2\\tau^*_C$. Finally, note that due to $\\tau^*_C$ being the optimal value of $k$-center-SPC-CC, we have $\\sum_{i': d(i,i') \\leq \\tau^*_C}x^*_{i',i} = 1$ for $i \\in S_k$. Since $\\{i' \\in S^*: d(i,i') \\leq \\tau^*_C\\} \\subseteq N(i)$, we finally get $\\hat x_{i,i} = 1$.\n\nTo conclude, we show that $\\Pr_{\\phi \\sim \\mathcal{D}}[\\phi(i) = i] = 1$ for every $i \\in S_k$. However, this is obvious, because of the second property of Theorem \\ref{round}, and $\\bar x$ satisfying (\\ref{s2-LP-1}), (\\ref{s3-cnt-centr}).\n\\end{proof}\n\n\\subsection{A Reassignment Step for the Unrestricted and $k$-Constrained Location Setting}\n\nWe now demonstrate a reassignment procedure that can be used to correct the output of Algorithm \\ref{alg-1}, in a way that satisfies the CC. Again, let $P_{\\mathcal{L}}$ be any of the vanilla objective functions, and consider Algorithm \\ref{alg-3}.\n\n\\begin{algorithm}[t]\nRun Algorithm \\ref{alg-1} to solve $P_\\mathcal{L}$-SPC, and get $S \\subseteq \\mathcal{C}$ and an assignment $\\phi: \\mathcal{C} \\mapsto S$ in return\\;\n\\While {there exists $i \\in S$ with $\\phi(i) \\neq i$} {\n$S \\gets S \\setminus \\{i\\}$\\;\n$i' \\gets \\argmin_{j \\in \\mathcal{C}: \\phi(j) = i}d(i,j)$\\;\n$S \\gets S \\cup \\{i'\\}$\\;\n\\For {all $j \\in \\mathcal{C}$ with $\\phi(j) = i$} {\n$\\phi(j) \\gets i'$\\;\n}\n}\n\\caption{Approximating \\textbf{$P_\\mathcal{L}$-SPC-CC}}\\label{alg-3}\n\\end{algorithm}\n\n\\begin{theorem}\\label{s3-thm2}\nLet $\\lambda$ the approximation ratio of Algorithm \\ref{alg-1} for \\textbf{$P_\\mathcal{L}$-SPC} with respect to the objective function. 
Then, Algorithm \\ref{alg-3} gives an approximation ratio $2\\lambda$ for the objective of \\textbf{$P_\\mathcal{L}$-SPC-CC}, while satisfying the CC and preserving the guarantees of Algorithm \\ref{alg-1} on SPCs, when $\\mathcal{L} = 2^\\mathcal{C}$ or $\\mathcal{L} = \\{S' \\subseteq \\mathcal{C}:~ |S'| \\leq k\\}$ for some integer $k$.\n\\end{theorem}\n\n\\begin{proof}\nTo show that Algorithm \\ref{alg-3} preserves the SPC guarantees, it suffices to prove that the following invariant is maintained throughout the reassignment. For any $j,j'$ that are assigned by Algorithm \\ref{alg-1} to the same location, at the end of the while loop we should still have $\\phi(j) = \\phi(j')$. Suppose now that we initially had $\\phi(j)=\\phi(j') = i$. If $\\phi(i) = i$, then no change will occur, and in the end we still have $\\phi(j)=\\phi(j') = i$. If on the other hand $\\phi(i) \\neq i$, we choose an $i' \\in \\{j'' \\in \\mathcal{C}: \\phi(j'') = i\\}$ and then set $\\phi(j)=\\phi(j') = i'$. Note that this assignment will not change, since the modification also ensures $\\phi(i') = i'$. The latter also guarantees that CC holds at termination. \n\nMoreover, notice that when $\\mathcal{L} = \\{S' \\subseteq \\mathcal{C}:~ |S'| \\leq k\\}$, the cardinality constraint is not violated. The reason for this is that every time we possibly add a new location to $S$, we have already removed another one from there. \n\nFinally, we need to reason about the approximation ratio of Algorithm \\ref{alg-3}. At first, let $i$ the location to which $j \\in \\mathcal{C}$ was assigned to by Algorithm \\ref{alg-1}. As we have already described, $j$ can only be reassigned once, and let $i'$ be its new assignment. Then $d(i',j) \\leq d(i,i') + d(i,j) \\leq 2d(i,j)$, where the second inequality follows because $i' = \\argmin_{j' \\in \\mathcal{C}: \\phi(j') = i}d(i,j')$. Hence, we see that the covering distance for every point at most doubles, and since the optimal value of \\textbf{$P_\\mathcal{L}$-SPC} is not larger than that of \\textbf{$P_\\mathcal{L}$-SPC-CC}, we get an approximation ratio of $2\\lambda$ for \\textbf{$P_\\mathcal{L}$-SPC-CC} (Notice that this holds regardless of the specific objective).\n\\end{proof}\n\n\n \n\\section{Improved Results for Problems with Must-Link Constraints}\\label{sec:4}\n\nSince must-link constraints (ML) are a special case of SPCs, Algorithm \\ref{alg-1} provides approximation results for the former as well (also note that due to $\\psi_p = 0 ~\\forall p$, we have no pairwise constraint violation when using Algorithm \\ref{alg-1} purely for ML). However, in this section we demonstrate how we can get improved approximation guarantees for some of the problems we consider. Specifically, we provide a $2\/3\/3\/3$-approximation for $k$-center-ML\/knapsack-center-ML\/$k$-supplier-ML\/knapsack-supplier-ML, which constitutes a clear improvement over the $3\/4\/5\/5$-approximation, given when Algorithm \\ref{alg-1} is executed using the best approximation algorithm for the corresponding vanilla variant. \n\nFirst of all, recall that in the ML case we are only looking for a set of locations $S$ and an assignment $\\phi: \\mathcal{C} \\mapsto S$, and not for a distribution over assignments. Also, notice that the must-link relation is transitive. If for $j,j'$ we want $\\phi(j) = \\phi(j')$, and for $j',j''$ we also require $\\phi(j') = \\phi(j'')$, then $\\phi(j) = \\phi(j'')$ is necessary as well. 
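
Since only the transitive closure of these requirements matters, the induced groups can be precomputed with a standard union-find pass; the following sketch is a routine preprocessing step written out for completeness, with the pair list as the given ML constraints.
\begin{verbatim}
# Sketch: grouping points into must-link cliques via union-find.
def ml_cliques(points, ml_pairs):
    parent = {j: j for j in points}

    def find(j):
        while parent[j] != j:
            parent[j] = parent[parent[j]]   # path halving
            j = parent[j]
        return j

    for j, jp in ml_pairs:
        parent[find(j)] = find(jp)          # merge the two groups

    groups = {}
    for j in points:
        groups.setdefault(find(j), []).append(j)
    return list(groups.values())            # the parts C_1, ..., C_t
\end{verbatim}
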
Given that, we view the input as a partition $C_1, C_2, \\hdots, C_t$ of the points of $\\mathcal{C}$, where all points in $C_q$, with $q \\in \\{1,\\hdots,t\\}$, must be assigned to the same location of $S$. We call each part $C_i$ of this partition a clique. Finally, for the problems we study, we can once more assume w.l.o.g. that the optimal radius $\\tau^*$ is known.\n\n\\begin{definition}\\label{Neighbor}\nTwo cliques $C_q, ~ C_p$ are called \\textbf{neighboring} if $\\forall j \\in C_q, ~ \\forall j' \\in C_p$ we have $d(j,j') \\leq 2\\tau^*$.\n\\end{definition}\n\nAlgorithm \\ref{alg-4} captures $k$-center-ML, knapsack-center-ML, $k$-supplier-ML and knapsack-supplier-ML at once, yielding improved approximations for each of them.\n\n\\begin{algorithm}[t]\n$C \\gets \\emptyset$, $S \\gets \\emptyset$\\;\nInitially all $C_1, C_2, \\hdots, C_t$ are considered uncovered\\;\n\\While{ there exists an uncovered $C_q$}{\nPick an uncovered $C_q$\\;\nPick an arbitrary point $j_q \\in C_q$\\;\n$C\\gets C \\cup \\{j_q\\}$\\;\n$C_q$ and all neighboring cliques $C_p$ of it, are now considered covered\\;\n}\n\\For {all $j_q \\in C$}{\n\\If {$k$-center\/$k$-supplier} {\n$i_q \\gets \\argmin_{i \\in \\mathcal{F}}d(i,j_q)$\\;$S \\gets S \\cup \\{i_q\\}$\\;\n}\n\\If {knapsack-center\/knapsack-supplier} {\n$i_q \\gets \\argmin_{i \\in \\mathcal{F}: d(i,j_q) \\leq \\tau^*}w_i$\\; $S \\gets S \\cup \\{i_q\\}$\\;\n}\n}\n\\For {all $j \\in \\mathcal{C}$}{\nLet $j_q \\in C$ the point whose clique $C_q$ covered $j$'s clique in the first while loop\\;\n$\\phi(j) \\gets i_q$\\;\n}\n\\caption{Approximating ML Constraints}\\label{alg-4}\n\\end{algorithm}\n\n\\begin{theorem}\nAlgorithm \\ref{alg-4} is a $2\/3\/3\/3$-approximation algorithm for \\textbf{$k$-center-ML}\/\\textbf{knapsack-center-ML}\/\\textbf{$k$-supplier-ML}\/\\textbf{knapsack-supplier-ML}.\n\\end{theorem}\n\n\\begin{proof}\nInitially, observe that the must-link constraints are satisfied. When the algorithm chooses a location $i_q$ based on some $j_q \\in C$, all the points in $C_q$ are assigned to $i_q$. Also, for the neighboring cliques of $C_q$ that got covered by it, their whole set of points ends up assigned to $i_q$ as well.\n\nWe now argue about the achieved approximation ratio. For one thing, it is clear that for every $j,j' \\in C_q$ we must have $d(j,j') \\leq 2\\tau^*$, and therefore $C_q$ is a neighboring clique of itself. We thus have the following two cases:\n\\begin{itemize}\n \\item \\textbf{$k$-center-ML:} Here for each $j_q \\in C$, we end up placing $j_q$ in $S$ as well. Also, all points $j$ assigned to $j_q$ belong to neighboring cliques of $C_q$, and therefore $d(j,j_q) \\leq 2\\tau^*$.\n \\item \\textbf{In all other problems}, for each $j_q \\in C$ we choose a location $i_q$ such that $d(i_q, j_q) \\leq \\tau^*$. Also, all points $j$ assigned to $i_q$ belong to neighboring cliques of $C_q$, and therefore $d(i_q,j) \\leq d(i_q,j_q) + d(j_q,j) \\leq 3\\tau^*$.\n\\end{itemize}\n\nFinally, we need to show that either the cardinality or the knapsack constraint on the set of chosen locations is satisfied. Toward that end, notice that if $C_q$ and $C_p$ belong in the same cluster in the optimal solution, then they are neighboring. Say $i^{\\star}$ is the location they are both assigned to. Then for all $j\\in C_q$ and $j' \\in C_p$ we get $ d(j,i^{\\star}) \\leq \\tau^*$ and $d(j',i^{\\star}) \\leq \\tau^*$. 
Hence, by the triangle inequality for all $j \\in C_q$ and $j' \\in C_p$ we have $d(j,j') \\leq 2\\tau^*$.\n\nGiven the previous observation, it must be the case that for every $j_q \\in C$, the optimal solution assigns it to a location $i^*_{j_q}$, such that for every other $j_{q'} \\in C$ we have $i^*_{j_q} \\neq i^*_{j_{q'}}$. Therefore, in the presence of a cardinality constraint:\n\\begin{align}\n |S| \\leq |C| = \\sum_{i^*_{j_q}: ~j_q \\in C}1 \\leq k \\notag\n\\end{align}\nand in the presence of a knapsack constraint:\n\\begin{align}\n \\sum_{i \\in S}w_i \\leq \\sum_{j_q \\in C}\\min_{i \\in \\mathcal{F}: d(i,j_q) \\leq \\tau^*}w_i \\leq \\sum_{j_q \\in C}w_{i^*_{j_q}} \\leq W \\notag\n\\end{align}\nwhere the last inequality in both cases follows from the optimal solution satisfying the corresponding constraint.\n\\end{proof}\n\n\\begin{observation}\nAlgorithm \\ref{alg-4} is a $2\/3$-approximation for $k$-center-ML-CC\/knapsack-center-ML-CC. This directly follows from steps 10, 13 and 17 of it.\n\\end{observation}\n\n\\begin{observation}\nDue to known hardness results for the vanilla version of the corresponding problems \\cite{Hochbaum1986}, Algorithm \\ref{alg-4} gives the best possible approximation ratios, assuming that $P \\neq NP$.\n\\end{observation}\n\n\\section{Experimental Evaluation}\n\n\\begin{table}[t]\n\\centering\n\\begin{tabular}{|c|l|l|l|l|l|}\n\\hline\n\\multicolumn{1}{|l|}{} & k & 4 & 6 & 8 & 10 \\\\ \\hline\n\\multirow{2}{*}{Adult} & Alg-1 & 2.27 & 4.73 & 12.53 & 21.81 \\\\ \\cline{2-6} \n & ALG-IF & 84.87 & 91.76 & 100.00 & 100.00 \\\\ \\hline\n\\multirow{2}{*}{Bank} & Alg-1 & 0.16 & 0.44 & 0.34 & 0.54 \\\\ \\cline{2-6} \n & ALG-IF & 55.84 & 71.48 & 92.85 & 99.93 \\\\ \\hline\n\\multicolumn{1}{|l|}{\\multirow{2}{*}{Credit}} & Alg-1 & 1.34 & 2.58 & 9.03 & 14.76 \\\\ \\cline{2-6} \n\\multicolumn{1}{|l|}{} & ALG-IF & 80.25 & 100.00 & 100.00 & 100.00 \\\\ \\hline\n\\end{tabular}\n\\captionsetup{justification=centering}\n\\caption{Percentage of constraints that are violated on average for metric $F_2$}\n\\label{tab:fairness-f2}\n\\end{table}\n\n\\begin{table}[t]\n\\centering\n\\begin{tabular}{|c|l|l|l|l|l|} \n\\hline\n\\multicolumn{1}{|l|}{} & k & 4 & 6 & 8 & 10 \\\\ \n\\hline\n\\multirow{2}{*}{Adult} & Alg-1 & 1.88 & 2.41 & 3.09 & 3.48 \\\\ \n\\cline{2-6}\n & ALG-IF & 1.88 & 2.38 & 3.21 & 3.44 \\\\ \n\\hline\n\\multirow{2}{*}{Bank} & Alg-1 & 2.34 & 3.28 & 4.09 & 4.62 \\\\ \n\\cline{2-6}\n & ALG-IF & 2.36 & 3.34 & 4.67 & 4.93 \\\\ \n\\hline\n\\multicolumn{1}{|l|}{\\multirow{2}{*}{Credit}} & Alg-1 & 1.82 & 2.12 & 2.46 & 2.71 \\\\ \n\\cline{2-6}\n\\multicolumn{1}{|l|}{} & ALG-IF & 1.80 & 2.20 & 2.43 & 2.66 \\\\\n\\hline\n\\end{tabular}\n\\captionsetup{justification=centering}\n\\caption{Cost of fairness for metric $F_2$}\n\\label{tab:cost-f2}\n\\end{table}\n\n\n\nWe implement our algorithms in Python 3.8 and run our experiments on AMD Opteron 6272 @ 2.1 GHz with 64 cores and 512 GB 1333 MHz DDR3 memory. We focus on fair clustering applications with PBS constraints and evaluate against the most similar prior work. Comparing to \\cite{anderson2020} using $k$-means-PBS, shows that our algorithm violates fewer constraints while achieving a comparable cost of fairness. Similarly, our comparison with \\cite{brubach2020} using $k$-center-PBS-CC (that prior algorithm also satisfies the CC constraint) reveals that we are better able to balance fairness constraints and the objective value. 
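
For the comparison with \cite{brubach2020}, recall from Section \ref{Motivations} that their individual-fairness setting corresponds to PBS constraints with $\psi_{j,j'} = d(j,j')/\tau^*$; the sketch below shows one straightforward way such constraints could be generated, with the capping at $1$ and the enumeration over all pairs being illustrative choices rather than a description of our exact experimental pipeline.
\begin{verbatim}
# Hedged sketch: PBS constraints in the style of the individual-fairness
# rule psi_{j,j'} = d(j,j') / tau_star (capped at 1, since a separation
# probability cannot exceed 1).
from itertools import combinations

def fairness_pbs_constraints(points, d, tau_star):
    return [((j, jp), min(1.0, d(j, jp) / tau_star))
            for j, jp in combinations(points, 2)]
\end{verbatim}
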
Our code is publicly available at \\url{https:\/\/github.com\/chakrabarti\/pairwise_constrained_clustering}.\n\n\\textbf{Datasets: } We use 3 datasets from the UCI ML Repository \\cite{Dua:2019}: \\textbf{(1)} Bank-4,521 points \\cite{moro}, \\textbf{(2)} Adult-32,561 points \\cite{kohavi}, and \\textbf{(3)} Creditcard-30,000 points \\cite{yeh}. \n\n\\begin{table}[t]\n\\centering\n\\begin{tabular}{|c|l|l|l|l|l|} \n\\hline\n\\multicolumn{1}{|l|}{} & k & 4 & 6 & 8 & 10 \\\\ \n\\hline\n\\multirow{2}{*}{Adult} & Alg-1 & 0.14 & 0.25 & 0.58 & 0.77 \\\\ \n\\cline{2-6}\n & ALG-IF & 7.98 & 7.07 & 8.90 & 9.42 \\\\ \n\\hline\n\\multirow{2}{*}{Bank} & Alg-1 & 0.02 & 0.16 & 0.20 & 0.50 \\\\ \n\\cline{2-6}\n & ALG-IF & 4.25 & 5.09 & 5.58 & 6.37 \\\\ \n\\hline\n\\multicolumn{1}{|l|}{\\multirow{2}{*}{Credit}} & Alg-1 & 0.00 & 0.07 & 0.21 & 0.25 \\\\ \n\\cline{2-6}\n\\multicolumn{1}{|l|}{} & ALG-IF & 0.97 & 3.80 & 4.17 & 3.90 \\\\\n\\hline\n\\end{tabular}\n\\captionsetup{justification=centering}\n\\caption{Percentage of constraints that are violated on average for metric $F_3$}\n\\label{tab:fairness-f3}\n\\end{table}\n\n\n\n\\begin{table}[t]\n\\centering\n\n\\begin{tabular}{|c|l|l|l|l|l|} \n\\hline\n\\multicolumn{1}{|l|}{} & k & 4 & 6 & 8 & 10 \\\\ \n\\hline\n\\multirow{2}{*}{Adult} & Alg-1 & 1.13 & 1.20 & 1.23 & 1.24 \\\\ \n\\cline{2-6}\n & ALG-IF & 1.13 & 1.20 & 1.23 & 1.23 \\\\ \n\\hline\n\\multirow{2}{*}{Bank} & Alg-1 & 1.22 & 1.36 & 1.41 & 1.44 \\\\ \n\\cline{2-6}\n & ALG-IF & 1.24 & 1.41 & 1.46 & 1.54 \\\\ \n\\hline\n\\multirow{2}{*}{Credit} & Alg-1 & 1.12 & 1.11 & 1.10 & 1.10 \\\\ \n\\cline{2-6}\n & ALG-IF & 1.10 & 1.09 & 1.10 & 1.11 \\\\\n\\hline\n\\end{tabular}\n\\captionsetup{justification=centering}\n\\caption{Cost of fairness for metric $F_3$}\n\\label{tab:cost-f3}\n\\end{table}\n\n\\begin{table}[t]\n\n\\centering\n\n\\begin{tabular}{|c|l|l|l|l|l|l|l|} \n\\hline\n\\multicolumn{1}{|l|}{} & k & 10 & 20 & 30 & 40 & 50 & 60 \\\\ \n\\hline\n\\multirow{2}{*}{Adult} & Alg-2 & .01 & .01 & .01 & .02 & .02 & .04 \\\\ \n\\cline{2-8}\n & Alg-F(A) & .00 & .00 & .00 & .00 & .00 & .00 \\\\ \n\\cline{2-8}\n & Alg-F(B) & .19 & .18 & .23 & .30 & .27 & .30 \\\\ \n\\hline\n\\multirow{2}{*}{Bank} & Alg-2 & .00 & .01 & .01 & .06 & .09 & .03 \\\\ \n\\cline{2-8}\n & Alg-F(A) & .00 & .00 & .00 & .00 & .00 & .00 \\\\\n\\cline{2-8}\n & Alg-F(B) & .18 & .20 & .16 & .15 & .20 & .23 \\\\ \n\\hline\n\\multirow{2}{*}{Credit} & Alg-2 & .00 & .01 & .01 & .01 & .01 & .02 \\\\ \n\\cline{2-8}\n & Alg-F(A) & .00 & .00 & .00 & .00 & .00 & .00 \\\\\n\\cline{2-8}\n & Alg-F(B) & .03 & .05 & .05 & .05 & .08 & .08 \\\\\n\\hline\n\\end{tabular}\n\\captionsetup{justification=centering}\n\\caption{Percentage of constraints that are violated on average for metric $F_1$}\n\\label{tab:fairness-f1}\n\\end{table}\n\n\\begin{table}[t]\n\n\\centering\n\n\\begin{tabular}{|c|l|l|l|l|l|l|l|} \n\\hline\n\\multicolumn{1}{|l|}{} & k & 10 & 20 & 30 & 40 & 50 & 60 \\\\ \n\\hline\n\\multirow{2}{*}{Adult} & Alg-2 & .39 & .28 & .23 & .20 & .17 & .16 \\\\ \n\\cline{2-8}\n & Alg-F(A) & .54 & .48 & .46 & .46 & .43 & .42 \\\\ \n\\cline{2-8}\n & Alg-F(B) & .31 & .24 & .17 & .17 & .12 & .14 \\\\ \n\\hline\n\\multirow{2}{*}{Bank} & Alg-2 & .17 & .11 & .08 & .07 & .06 & .05 \\\\ \n\\cline{2-8}\n & Alg-F(A) & .21 & .20 & .17 & .16 & .14 & .13 \\\\ \n\\cline{2-8}\n & Alg-F(B) & .12 & .07 & .06 & .05 & .04 & .03 \\\\ \n\\hline\n\\multirow{2}{*}{Credit} & Alg-2 & .38 & .29 & .25 & .24 & .21 & .19 \\\\ \n\\cline{2-8}\n & Alg-F(A) & .45 & .45 & .43 & .42 & .41 & .41 \\\\ 
\n\\cline{2-8}\n & Alg-F(B) & .28 & .25 & .21 & .20 & .18 & .17 \\\\\n\\hline\n\\end{tabular}\n\\captionsetup{justification=centering}\n\\caption{Objective achieved for metric $F_1$}\n\\label{tab:cost-f1}\n\\end{table}\n\n\n\\textbf{Algorithms: } \nIn all of our experiments, we start with $\\mathcal{C} = \\mathcal{F}$. When solving $k$-means-PBS, we use Lloyd's algorithm in the first step of Algorithm \\ref{alg-1} and get a set of points $L$. The set of chosen locations $S$ is constructed by taking the nearest point in $\\mathcal{C}$ for every point of $L$. This is exactly the approach used in \\cite{anderson2020}, where their overall algorithm is called ALG-IF. To compare Algorithm \\ref{alg-1} to ALG-IF, we use independent sampling for ALG-IF, in order to fix the assignment of each $j \\in \\mathcal{C}$ to some $i \\in S$, based on the distribution $\\phi_j$ produced by ALG-IF. For $k$-center-PBS-CC, we use Algorithm \\ref{alg-2} with a binary search to compute $\\tau^*_C$.\n\n\\textbf{Fairness Constraints: } We consider three similarity metrics ($F_1, F_2, F_3$) for generating PBS constraints. We use $F_1$ for $k$-center-PBS-CC and $F_2$, $F_3$ for $k$-means-PBS. $F_1$ is the metric used for fairness in the simulations of \\cite{brubach2020} and $F_2, F_3$ are the metrics used in the experimental evaluation of the algorithms in \\cite{anderson2020}. \n\n$F_1$ involves setting the separation probability between a pair of points $j$ and $j'$ to $d(j,j')\/R_{Scr}$ if $d(j,j') \\leq R_{Scr}$, where $R_{Scr}$ is the radius given by running the Scr algorithm \\cite{Mihelic2005SolvingTK} on the provided input. \n\n$F_2$ is defined so that the separation probability between a pair $j$,$j'$ is given by $d(j,j')$, scaled linearly to ensure all such probabilities are in $[0, 1]$. Adopting the approach taken by \\cite{anderson2020} when using this metric, we only consider pairwise constraints between each $j$ and its closest $m$ neighbors. For our experiments, we set $m = 100$. \n\nAgain in order to compare our Algorithm \\ref{alg-1} with \\cite{anderson2020}, we need the metric $F_3$. For any $j \\in \\mathcal{C}$, let $r_j$ be the minimum distance such that $|\\{j' \\in \\mathcal{C}: d(j,j') \\leq r_j\\}| \\geq |\\mathcal{C}|\/ k$. Then the separation probability between $j$ and any $j'$ such that $d(j,j') \\leq r_j$ is set to $d(j,j')\/r_j$ (a schematic construction of these bounds is sketched below).\n\n\\textbf{Implementation Details: }As in \\cite{anderson2020,brubach2020}, we uniformly sample $N$ points from each dataset and run all algorithms on those sets, while only considering a subset of the numerical attributes and normalizing the features to have zero mean and unit variance. In our comparisons with \\cite{anderson2020} we use $N= 1000$, while in our comparisons with \\cite{brubach2020} $N$ is set to $250$. For the number of clusters $k$, we study the values $\\{4, 6, 8, 10\\}$ when comparing to \\cite{anderson2020}, and $\\{10, 20, 30, 40, 50, 60\\}$ when comparing to \\cite{brubach2020} (theoretically Algorithm \\ref{alg-2} is better for larger $k$). 
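\n\nFor concreteness, the following Python fragment sketches how the separation-probability bounds of metric $F_3$ can be generated; it illustrates the definition above rather than the exact code of our implementation (in particular, the use of \\texttt{scipy}, the inclusion of $j$ itself in the neighbor count, and the handling of ties are incidental choices).\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.spatial.distance import cdist\n\ndef f3_bounds(X, k):\n    # X: (n, d) array of normalized feature vectors, k: number of clusters.\n    # For each point j, r_j is the smallest radius containing at least n\/k\n    # points (j itself included); every j' with d(j,j') <= r_j receives the\n    # separation-probability bound d(j,j')\/r_j.\n    D = cdist(X, X)\n    n = X.shape[0]\n    m = int(np.ceil(n \/ k))\n    bounds = {}\n    for j in range(n):\n        r_j = np.sort(D[j])[m - 1]\n        for jp in np.flatnonzero(D[j] <= r_j):\n            if jp != j:\n                bounds[(j, jp)] = D[j, jp] \/ r_j\n    return bounds\n\\end{verbatim}\n\n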
Finally, to estimate the empirical separation probabilities and the underlying objective function cost, we run 5000 trials for each randomized assignment procedure, and then compute averages for the necessary performance measures we are interested in.\n\n\\textbf{Comparison with \\cite{anderson2020}: }In Tables \\ref{tab:fairness-f2} and \\ref{tab:fairness-f3}, we show what percentage of fairness constraints are violated by ALG-IF and our algorithm, for the fairness constraints induced by $F_2$ and $F_3$, allowing for an $\\epsilon = 0.05$ threshold on the violation of a separation probability bound; we only consider a pair's fairness constraint to be violated if the empirical probability of them being separated exceeds that set by the fairness metric by more than $\\epsilon$. It is clear that our algorithm outperforms ALG-IF consistently across different values of $k$, different datasets, and both types of fairness constraints considered by \\cite{anderson2020}.\n\nIn order to compare the objective value achieved by both algorithms, we first compute the average connection costs over the 5000 runs. Since the cost of the clustering returned by Lloyd's algorithm contributes to both Algorithm \\ref{alg-1} and ALG-IF, we utilize that as an approximation of the cost of fairness. In other words, we divide the objective value of the final solutions by the cost of the clustering produced by Lloyd, and call this quantity cost of fairness. The corresponding comparisons are presented in Tables \\ref{tab:cost-f2}, \\ref{tab:cost-f3}. The cost of fairness for both algorithms is very similar, demonstrating a clear advantage of Algorithm \\ref{alg-1}, since it dominates ALG-IF in the percentage of fairness constraints violated.\n\n\n\\textbf{Comparison with \\cite{brubach2020}: } In Table \\ref{tab:fairness-f1} we show what percentage of fairness constraints are violated by the algorithm of \\cite{brubach2020} (named Alg-F) and Algorithm \\ref{alg-2}, using an $\\epsilon = 0$; if the empirical probability of separation of a pair exceeds the bound set by the fairness metric by any amount, it is considered a violation. We run ALG-F with two different choices of the scale parameter used in that prior work: $\\frac{1}{R_{Scr}}$ (Alg-F(A)) and $\\frac{16}{R_{Scr}}$ (Alg-F(B)), where $R_{Scr}$ is the value achieved using the Scr algorithm. The reason for doing so is that \\cite{brubach2020} consider multiple values for the separation probabilities, and we wanted to have a more clear comparison of our results against all of those. Alg-F(A) leads to 0 violations, while our algorithm produces a small number of violations in a few cases, and Alg-F(B) leads to a significant number of violations. In Table \\ref{tab:cost-f1}, we show the cost of the clusterings produced by ALG-F and Algorithm \\ref{alg-2}, measured in the normalized metric space by taking the average of the maximum radius of any cluster over the 5000 runs. Alg-F(b) leads to the lowest objective, followed relatively closely by our algorithm, and then finally Alg-F(A) has significantly higher objective values.\n\n\n\\section*{Acknowledgements}\nThe authors would like to sincerely thank Samir Khuller for useful discussions that led to some of the technical results of this work. In addition, we thank Bill Gasarch for his devotion to building a strong REU program, which facilitated coauthor Chakrabarti's collaboration. Finally, we thank the anonymous referees for multiple useful suggestions.\n\nBrian Brubach was supported in part by NSF award CCF-1749864. 
\nJohn Dickerson was supported in part by NSF CAREER Award IIS-1846237, NSF Award CCF-1852352, NSF D-ISN Award \\#2039862, NIST MSE Award \\#20126334, NIH R01 Award NLM-013039-01, DARPA GARD Award \\#HR00112020007, DoD WHS Award \\#HQ003420F0035, DARPA Disruptioneering Award (SI3-CMD) \\#S4761 and Google Faculty Research Award. Aravind Srinivasan was supported in part by NSF awards CCF-1422569, CCF-1749864, and CCF-1918749, as well as research awards from Adobe, Amazon, and Google. Leonidas Tsepenekas was supported in part by NSF awards CCF-1749864 and CCF-1918749, and by research awards from Amazon and Google. \n\n\\section*{Ethics Statement}\nOur primary contribution is general and theoretical in nature, so we do not foresee any immediate and direct negative ethical impacts of our work. That said, one use case of our framework---that we highlight prominently both in the general discussion of theoretical results as well as through experimental results performed on standard and commonly-used datasets---is as a tool to operationalize notions of \\emph{fairness} in a broad range of clustering settings. Formalization of fairness as a mathematical concept, while often grounded in legal doctrine~\\citep[see, e.g.,][]{Feldman15:Certifying,Barocas19:Fairness}, is still a morally-laden and complicated process, and one to which there is no one-size-fits-all ``correct'' approach.\n\nOur method supports a number of commonly-used fairness definitions; thus, were tools built based on our framework that operationalized those definitions of fairness, then the ethical implications---both positive and negative---of that decision would also be present. Our framework provides strong theoretical guarantees that would allow decision-makers to better understand the performance of systems built based on our approach. Yet, we also note that any such guarantees should, in many domains, be part of a larger conversation with stakeholders---one including understanding the level of comprehension~\\citep[e.g.,][]{Saha20:Measuring,Saxena20:How} and specific wants of stakeholders~\\citep[e.g.,][]{Holstein19:Improving,Madaio20:Co-Designing}.\n\\section{Omitted Details}\\label{appendix}\n\n\\paragraph{Proof of Theorem \\ref{np-hard}}\n\\begin{proof}\nOur reduction is based on the $k$-cut problem, in which we are given an undirected graph $G(V,E)$, $k$ distinct nodes $u_1, u_2, \\hdots, u_k \\in V$ and a $\\gamma \\in \\mathbb{N}_{\\geq 0}$. Then we want to know whether or not there exists a cut $S \\subseteq E$ with $|S| \\leq \\gamma$, such that in $G(V, E \\setminus S)$ the nodes $u_1, u_2, \\hdots, u_k$ are separated (each of them is placed in a different connected component).\\\\\n\n\\noindent \\textbf{Supplier\/Median\/Means:} We now describe the construction of an instance for the clustering problem. For each of the $k$ distinct nodes $u_f$ we create a location $i_{u_f}$, and for every two different locations $i_{u_f}, i_{u_{f'}}$ we set $d(i_{u_f}, i_{u_{f'}}) = 2$. Then, again for each of the $k$ distinct nodes $u_f$ we create a point $j_{u_f}$ that is co-located with $i_{u_f}$. In addition, for each of the $|V| - k$ remaining nodes $u$ of the graph we create a point $j_u$, such that $d(i_{u_f}, j_u) = 1$ for every previously created location $i_{u_f}$, and all those new points are co-located. This construction constitutes a valid metric space. As for the stochastic pairwise constraints, we only have one set of pairs $P$. 
For each $\\{u,v\\} \\in E$ let $j_u, j_v$ be the corresponding points in the created metric, and then add $\\{j_u,j_v\\}$ to $P$. In the end, we ask for a solution that in expectation separates at most $\\gamma$ of the pairs of $P$ (i.e. $\\psi_P = \\gamma \/ |E|$).\n\nWe now claim that the clustering problem has a solution of objective value at most $1\/|V|-k\/\\sqrt{|V|-k}$ in the constructed instance for supplier\/median\/means iff there exists a $k$-cut of size at most $\\gamma$ in $G(V,E)$. Before we proceed, notice that this claim for the median and means objectives implies that there is no actual randomness in the assignment cost. This is true because $|V|-k, \\sqrt{|V|-k}$ are the least possible values for the corresponding objectives, resulting deterministically from the assignment of the $|V|-k$ points that do not correspond to nodes that need to be separated. \n\nTherefore, suppose first that there exists a $k$-cut $S$ of size at most $\\gamma$. We construct a clustering solution as follows. Open all locations $i_{u_f}$ of the instance. Then for each $u \\in V$ with corresponding point $j_u$, assign $j_u$ to $i_{u_f}$, such that $u$ is in the same connected component in $G(V, E \\setminus S)$ as ${u_f}$. This deterministically achieves the desired objective value for each case. Moreover, each separated pair of $P$ corresponds to an edge of $S$, and hence in the end the number of separations in this clustering solution will be upper bounded by $|S| \\leq \\gamma$.\n\nOn the other hand, assume that there exists a solution to the clustering instance of cost at most $1\/|V|-k\/\\sqrt{|V|-k}$. This immediately implies that all available locations are opened. Furthermore, since in expectation at most $\\gamma$ pairs are separated, there must be an assignment $\\phi$ that separates at most $\\gamma$ pairs. Given this assignment, we construct a $k$-cut as follows. Remove from $E$ all edges $\\{u,v\\}$ such that the corresponding points $j_u,j_v$ have $\\phi(j_u) \\neq \\phi(j_v)$. This clearly removes at most $\\gamma$ edges from $E$. To see that any two of the special nodes $u_f, u_{f'}$ will be separated after the above edge removal, note that their respective points in the metric space are assigned to different clusters. Also, the corresponding node of each point in a cluster cannot have an edge to the corresponding node of a point in a different cluster, and hence $u_f, u_{f'}$ are definitely in distinct connected components.\\\\\n\n\\noindent \\textbf{Center:} To handle this objective function, we use an instance similar to that constructed for the supplier objective, with just one difference. This difference will make sure that we can only open locations that correspond to points coinciding with the locations used earlier. So in the previous instance, for each $j_{u_f}$ corresponding to a special node $u_f$ we create another point $h_{u_f}$, such that $d(j_{u_f}, h_{u_f}) = 1$, $d(j_u, h_{u_f}) = 2$ for all $j_u \\neq j_{u_f}$ with $j_u$ resulting from some $u \\in V$, and $d(h_{u_f}, h_{u_{f'}})=2$ for every $h_{u_{f'}}$ resulting from $u_{f'} \\neq u_f$. 
Then, the arguments for the supplier case go through in the exact same manner, since the only locations that can be considered for opening when aiming for a solution of radius $1$, are the points $j_{u_f}$.\n\n\n\n\\end{proof}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:level1}\n\nAssuming a uniform distribution of matter on large scales,\nthe observed data of high-redshift type Ia \nsupernovae(SNIa)\\cite{schm,ries1,ries2,perl} point to \n$\\Lambda$-dominated flat Friedmann-Robertson-Walker (FRW) models. \nThe darkness of the SNIa is reduced to accelerating expansion\nof the universe due to a positive $\\Lambda$ term. \n\n\nThese $\\Lambda$-dominated FRW models are \nconsistent also with the observed data of \ntemperature anisotropy in the Cosmic Microwave\nBackground (CMB) radiation\\cite{map,spg}, except for \nthe low-multipole components\\cite{olv, cont}. \nMoreover the observed correlation between the CMB and\nlarge-scale structure supports \nthese $\\Lambda$-dominated models, which can generate \nanisotropies due to the first-order(linear) \nISW effect\\cite{bou,turok,granett}. \n\nOn the other hand, alternative inhomogeneous models that can \nexplain the SNIa data without introducing a cosmological constant\n$\\Lambda$ have been independently proposed by C${\\rm \\acute{e}}$l${\\rm \n\\acute{e}}$rier\\cite{cele}, Goodwin et al.\\cite{good} and\nTomita\\cite{toma,tomb,lvm,tom} and subsequently studied by several \nauthors\\cite{iguchi,aln,bis,alex}. It turned out that some \ninhomogeneous cosmological models \nwith an inner large-scale underdense region\n(which we called {\\it a local void} in our previous works) with \na small Hubble constant ($h\\approx 0.5$) in the outer flat \nregion can also explain the CMB data\\cite{aln,blan,hunt,alex} \nas well as the SNIa data. In these models, the \ncosmological Copernican principle is violated since \nwe need to live near the center of an underdense region. \n\nHowever, recent observational\nstudies such as the baryon acoustic oscillations\n(BAO)\\cite{eisens,seo,perc1,perc2,perc3,gazt,zibin}, the kinematic\nSunyaev-Zeldovich effect either from clusters \\cite{bellido2} or \nreionized regions\\cite{cald} put stringent constraints on \nthese anti-Copernican models.\nAs a result, models with a local void on 300 Mpc \nscales seem to be ruled out. At\nthe moment, we need to consider inhomogeneous models with a local void on\nGpc scales so that the constraints from BAO at epochs of $z \\le 0.45$\nmay be avoided. Recently several Gpc void models have been studied by\nClifton et al.\\cite{clif} and Garc\\'ia-Bellido and Haugb${\\rm\n\\o}$lle\\cite{bellido1}. \n \nIn this paper we study the ISW effect\\footnote{In this paper, \n``the ISW effect'' means redshift\/blueshift of the CMB photons\ndue to time-evolving first-order or second-order metric perturbations. \n} in flat FRW models \nwith an inner underdense region on Gpc scales\nbased on previous results\\cite{tom1,tom2,tom3,is1,is2,ti}. \nThen we compare it with the ISW effect in the concordant flat FRW model\nwith a cosmological constant $\\Lambda$. 
\nAs we shall show, the ISW effect will be \nan excellent discriminator between our anti-Copernican models and the \nstandard concordant Copernican model.\nIn \\S 2, we present our inhomogeneous \ncosmological model with inner underdense regions and in \\S 3 we derive\nanalytic formulae for calculating the ISW effect in the inner \nand outer regions and we discuss the property of temperature\nanisotropy due to the ISW effect \nin our models and the concordant model. \n\\S 4 is dedicated to concluding remarks.\nIn what follows, we use the units of $8\\pi G = c = 1$.\nFor spatial coordinates, we use Latin subscripts \nrunning from 1 to 3. \n\n\\section{A cosmological model with inner underdense regions}\n\\label{sec:level2}\n\nOur inhomogeneous anti-Copernican models without a cosmological \nconstant $\\Lambda$ consist of two inner\nunderdense regions (I and II) and an outer flat region (III). The\nformer regions are described by negatively \ncurved FRW models ($\\Omega_{I0} = 0.3$ and\n$\\Omega_{II0} = 0.6$) and the outer region is \nby the Einstein-de Sitter model (EdS)($\\Omega_{III0} = 1$). Here in\nthese regions we use homogeneous models locally because the ISW effect\ncan be treated only in homogeneous models at present. We assume that these\nregions are connected by \ntwo infinitesimally thin walls at redshifts $z = 0.067$ and $0.45$\ncorresponding \nto the boundary between I and II and the boundary between II and III,\nrespectively. The latter redshift value $0.45$ corresponds to \na $\\sim$ Gpc radius of the spherical underdense region. \nThe Hubble constants $H_{I0},\nH_{II0}$ and $H_{III0}$ in these regions satisfy\na relation $H_{I0} \\ge H_{II0} \\ge H_{III0}$. Here we consider the\nfollowing two cases:\n\n\\begin{eqnarray}\n \\label{eq:m0}\n{\\rm case}\\ 1.&&\\ H_{I0} =60,\\quad H_{II0} =50, \\quad H_{III0} = 50 \n\\ {\\rm km\/s\/Mpc},\\cr\n{\\rm case}\\ 2.&&\\ H_{I0} =70,\\quad H_{II0} =55, \\quad H_{III0} = 50 \n\\ {\\rm km\/s\/Mpc},\n\\end{eqnarray}\n\n\n\\noindent\nwhere $H_{III0} (= 50)$ stands for the value necessary for the observed\nCMB anisotropies in the EdS model, $H_{I0} (= 70)$ in case 2 is the standard\nvalue in the local measurement, and the case 1 with smaller $H_{I0}$ and\n$H_{II0}$ is taken so as to consider a stringent observational\ncondition which is given by the kinematic Sunyaev-Zeldovich\neffect\\cite{bellido2}. \n\nIf we regard the outer region as the background, the inner region can be\ninterpreted as a local inhomogeneity \nand has an optical influence on the temperature of\nCMB radiation. If the observer is exactly at the center, the influence\nis isotropic, but if he is off-center, it brings\ndipole, quadrupole and the other multipole anisotropies. These\nanisotropies have already been analyzed and discussed in previous\npapers\\cite{tomdip,moff,alndp}. In what follows, we study the ISW effect\ndue to small-scale density perturbations (of a simple spherical top-hat\ntype) in the three regions in the inhomogeneous model. 
\n\nIn spherical coordinates $(r,\\theta,\\phi$), \nthe background metric of a constantly negatively curved spacetime in the \ninner regions I and II containing pressureless\nmatter with a matter density $\\rho$ is given by \n\\begin{eqnarray}\n \\label{eq:m1}\n ds^2 &=& a^2(\\eta) (-d\\eta^2 + dl^2), \\cr\n dl^2 &\\equiv& \\gamma_{ij} dx^i dx^j = dr^2 + \\sinh^2 (r) (d\\theta^2 +\n\\sin^2 \\theta ~d\\phi^2), \n\\end{eqnarray}\nand \n\\begin{eqnarray}\n \\label{eq:m1a}\n a(\\eta) &=& a_* (\\cosh \\eta -1), \\quad t = a_* (\\sinh \\eta - \\eta), \\cr\n \\rho a^2 &=& 3 [(a'\/a)^2 -1] = 6\/(\\cosh \\eta -1), \n\\end{eqnarray}\nwhere $t$ and $\\eta$ are the cosmic time and the conformal time. \nPrime $'$ represents $d\/d \\eta$ and $a_*$ is a constant.\nThe Hubble parameter, the density parameter and the \nredshift are \n\\begin{equation}\n \\label{eq:m2}\nH_\\alpha \\equiv a'\/a^2 = {\\sinh \\eta_{\\alpha} \\over a_{\\alpha *}\n(\\cosh \\eta_{\\alpha} -1)^2}, \\quad \n\\Omega_{\\alpha m} = {2 \\over\n\\cosh \\eta_{\\alpha} +1}, \\quad z_{\\alpha}+1 \n= {\\cosh \\eta_{\\alpha 0} -1 \\over \\cosh \\eta_{\\alpha} -1},\n\\end{equation}\nwhere $\\alpha$ is I or II, $\\eta_{\\alpha 0}$ is the present value of the\nconformal time $\\eta$ and for $a_{\\alpha 0} \\equiv a(\\eta_{\\alpha 0})$ we have\n$a_{\\alpha 0} H_{\\alpha 0} =\\sinh \\eta_{\\alpha 0}\/(\\cosh \\eta_{\\alpha 0} -1)$ and\n$\\Omega_{\\alpha m 0} = 2\/(\\cosh \\eta_{\\alpha 0 } +1)$. The constant $a_*$\nfor region I or II is given by\n\\begin{equation}\na_{\\alpha *}=\\frac{\\Omega_{\\alpha m 0}}{ 2 (1-\\Omega_{\\alpha m 0})^{3\/2}\n H_{\\alpha 0}},\n\\end{equation}\nwhere $H_{\\alpha 0}=H_{\\alpha}(\\eta_{\\alpha 0})$.\n\nIn the outer region III containing pressureless matter, the space-time \nmetric is \n\\begin{equation}\n \\label{eq:m3}\nds^2 = a^2(\\eta) [-d\\eta^2 + \\delta_{ij} dx^i dx^j],\n\\end{equation}\nwhere $a(\\eta) \\propto \\eta^2$. The Hubble parameter, the density \nparameter and the redshift are $H_{III} \\equiv a'\/a^2 = 2\/(\\eta a), \n\\ \\Omega_{IIIm} = 1$, and $z+1 = (\\eta_{III0}\/\\eta)^2$. Here for $a_0 = \na(\\eta_{III0})$ we have as $a_0 H_{III0} = 2\/\\eta_{III0}$.\n\nFor comparison, we consider a concordant flat FRW model with a cosmological\nconstant $\\Lambda$. The metric (\\ref{eq:m1}) is \n$dl^2 = \\delta_{ij} dx^idx^j$ and the\nscale factor satisfies $3(a'\/a)^2 = \n(\\rho_B + \\rho_\\Lambda), \\ 6(a'\/a)' = -(\\rho_B -2\\rho_\\Lambda)a^2$,\nwhere $\\rho_B$ and $\\rho_\\Lambda$ are the energy \ndensity of matter and that of a cosmological constant $\\Lambda$, \nrespectively. 
As the model parameter of the concordant \nflat-$\\Lambda$ model, we adopt\n$\\Omega_{m0} = 0.3$ and $H_0 = 70$ km\/s\/Mpc.\n\n\\section{Integrated Sachs-Wolfe effect due to density perturbations}\n\\label{sec:level3}\n\nNow we consider growing mode of \ndensity perturbations and the integrated Sachs-Wolfe\neffect in the inner and outer regions, separately.\n\n\\subsection{The inner regions I and II}\nThe first-order gauge-invariant growing density\nperturbations $\\epsilon_{mI}$ and the gauge-invariant potential\nperturbation (in the growing mode) $\\Phi_A (= \\Phi_H)$\\cite{bard} are\nexpressed as \n\\begin{eqnarray}\n \\label{eq:r1}\n\\epsilon_{I m} &=& -G(\\eta) \\Delta F \\cr\n\\Phi_A &=& - {1\\over 2}\\rho a^2 G(\\eta) F\n\\end{eqnarray}\nwith \n\\begin{eqnarray}\n \\label{eq:r2}\nG(\\eta) &\\equiv& {6 \\over \\cosh \\eta -1} \\ \\Bigl(1 - {\\eta (\\cosh \\eta\n+1) \\over 2 \\sinh \\eta} \\Bigr) + 1, \\cr\n&=& {1\\over 10}\\eta^2 (1 - {5\\over 84}\\eta^2 + \\cdot \\cdot \\cdot)\n\\quad {\\rm for} \\quad \\eta \\ll 1,\n\\end{eqnarray}\nwhere $\\epsilon_{mI}$ corresponds to the density perturbations in the\ncomoving synchronous gauge, $\\Phi_A$ is equal to the potential\nperturbations $\\phi^{(1)}$ in the longitudinal or Poisson gauge, and\n$F$ is the potential function given as an arbitrary function of\nspatial coordinates. The expression of \n$G(\\eta)$ was derived from the solution shown by Lifshits and\nKhalatinikov\\cite{lk}. $\\Delta F$ is the Laplacian\nof $F$ in the space $dl^2$, that is, $\\Delta F = F_{|i}^{|i}$, where\n$|i$ is covariant derivatives in the three dimensional space with\n$\\gamma_{ij}$. So we obtain from Eqs.(\\ref{eq:m1a}) and (\\ref{eq:r1}) \n\\begin{equation}\n \\label{eq:r3}\n\\phi^{(1)} = - {3 G(\\eta) \\over \\cosh \\eta -1} F.\n\\end{equation}\nThe first-order gauge-invariant temperature fluctuation due to the\nlinear ISW effect is expressed as \n\\begin{equation}\n \\label{eq:r4}\n\\Delta T^{(1)}\/T = -2 \\int^{\\lambda_e}_{\\lambda_o} d\\lambda \n\\ {\\phi'}^{(1)}\n\\end{equation}\nin the Poisson gauge, where the prime is $\\partial\/\\partial \\eta$ and\n$\\lambda$ is the affine parameter along the light path. $\\lambda_e$\nand $\\lambda_o$ are the emitter's and observer's values at the\ndecoupling and present epochs, respectively.\n\n\\begin{figure}[t]\n\\caption{\\label{fig:rs2} The matter density \ncontrast for a top-hat type spherical void.}\n\\includegraphics[width=8cm]{rs2a.eps}\n\\end{figure}\n\nIn this paper we consider a simple spherical top-hat type of\ncompensated density perturbation following our previous paper\\cite{ti},\nwhose spatial size is much\nsmaller than the horizon size. The spatial variation of the density \nperturbation is schematically shown in Fig.\\ref{fig:rs2}. We consider\nthe CMB photon paths\npassing through the center of the spherical perturbation. 
When the\nepoch in the center of the perturbation is $\\eta$, the integral of\n$\\phi^{(1)}$ along the light path reduces approximately to\n\\begin{equation}\n \\label{eq:r5}\n(\\Delta T\/T)_\\alpha = \\Delta T^{(1)}\/T = -{6 (\\epsilon_{\\alpha m 0})_c \\over\nG(\\eta_0) (\\cosh \\eta -1)^3} [-(14+\\cosh \\eta)\\sinh\n\\eta + 3\\eta (2\\cosh \\eta +3)] \\int^{\\lambda_e}_{\\lambda_o} d\\lambda\n\\ F\/c,\n\\end{equation}\nwhere $\\alpha$ is I or II, $(\\epsilon_{\\alpha m 0})_c$ and a constant\n$c$ are the central values of \n$\\epsilon_{\\alpha m 0}$ and $\\Delta F$, respectively, and the subscript $0$\ndenotes the present epoch.\nFrom the integration of $F$ for the above top-hat type perturbations, \nderived in the previous paper\\cite{ti}, we obtain\n\\begin{eqnarray}\n \\label{eq:r6}\n(\\Delta T\/T)_\\alpha &=& (\\epsilon_{I m 0})_c \\Bigl({a_0r_1 \\over\n(H_{I0})^{-1}} \\Bigr)^3 \\theta_\\alpha, \\cr\n\\theta_\\alpha &\\equiv& - {4 \\over 3}{(1-\\Omega_{\\alpha m 0})^{3\/2}\n\\over G(\\eta_0)(\\cosh \\eta -1)^3} [-(14+\\cosh \\eta)\\sinh\n\\eta + 3\\eta (2\\cosh \\eta +3)] w_1 (y) \\cr\n&\\times& \\Bigl(\\epsilon_{\\alpha m 0}\/\\epsilon_{I m 0}\\Bigr)_c\n\\Bigl(H_{\\alpha 0}\/H_{I0}\\Bigr)^3, \n\\end{eqnarray}\nwhere $w_1(y)$ is defined as $w_1(y) = -y \\ln (1+ 1\/y), \\ y=b\/c$ and\n$r_1\/r_0 = (1 + 1\/y)^{1\/3}$. For the value of $y$, we adopt $y = 0.5$\nas an example.\n\n\\subsection{The outer region III}\nIn the outer region the first-order ISW effect does not appear, and\nthe second-order ISW effect is the lowest-order one. The second-order\ntemperature fluctuations were derived in our previous paper\\cite{ti} and\nexpressed as \n\\begin{equation}\n \\label{eq:r7}\n\\Delta T^{(2)}\/T = {4\\over 27}\\ c^2\\ (r_1)^3\\ w_2(y)\n(\\zeta_1 + 9\\ \\zeta_2)', \n\\end{equation}\nwhere $w_2(y) \\equiv y[1- y \\ln(1+1\/y)], \\ (\\zeta_1 + 9\\ \\zeta_2)' = -\n(39\/700) \\eta$ \\ for the EdS model, $r_1$ is the radius of\ninhomogeneities (cf. Fig. \\ref{fig:rs2}), and the central value of the\ndensity perturbation $(\\epsilon_{III m})_c$ is related to a constant $c$\nas \n\\begin{equation}\n \\label{eq:r9}\n(\\epsilon_{III m})_c = - {1\\over 20} \\eta^2 c.\n\\end{equation}\nUsing Eq.(\\ref{eq:r9}), the temperature fluctuations are expressed \nas\n\\begin{eqnarray}\n \\label{eq:r10}\n(\\Delta T\/T)_{III} &=& \\Delta T^{(2)}\/T = -{26\\over 63}\\\n\\Bigl({a_0r_1\\over (H_{III0})^{-1}} \n\\Bigr)^3 {{(\\epsilon_{III m 0})_c}^2\\over (1+z)^{1\/2}} \\ w_2(y), \\cr\n&=& (\\epsilon_{I m 0})_c \\Bigl({a_0r_1 \\over\n(H_{I0})^{-1}} \\Bigr)^3 \\theta_{III}, \\cr\n\\theta_{III} &\\equiv& -{26\\over 63}\\\n {{(\\epsilon_{III m 0})_c}^2\\over (\\epsilon_{I m 0})_c (1+z)^{1\/2}} \\\n \\Bigl({H_{III0} \\over H_{I0}} \\Bigr)^3 \\ w_2(y).\n\\end{eqnarray}\nThe temperature fluctuations are negative definite. They are not\nexactly observed fluctuations, because their observed values should be\nthe difference from the average value $\\langle \\Delta T^{(2)}\/T\n\\rangle$ of the sum of the second-order \ntemperature fluctuations which is caused by all possible primordial\ndensity perturbations and renormalized into the background temperature. \nThis average value is derived, taking account of the power spectrum of\ndensity perturbations, in the procedure shown in a separate\npaper\\cite{tompp}. 
So the above $(\\Delta T\/T)_{III}$ should be used here\nonly to indicate the order of magnitude of the second-order ISW effect, but\nfor perturbations with large amplitudes the above second-order\nfluctuations can be regarded approximately as observed values, as the\nmean value can be neglected. \n\n\\subsection{The junction condition}\nThe deformation of the walls induces complicated\nperturbations inside the walls and in their neighborhoods, as was studied by\none of the present authors through the analysis of the junction\ncondition\\cite{tomjunct}. They include not only density perturbations,\nbut also gravitational-wave and rotational\nperturbations. Gravitational-wave perturbations propagate, but their\namplitudes are very small and the contribution to density\nperturbations is negligible, because of the small coupling between\nthem. Moreover the density and rotational perturbations caused by the \nperturbed walls\ndo not propagate in the present dust matter models and are confined\nto the immediate neighborhoods of the walls. In most of regions I, II\nand III, therefore, we see density perturbations which are\nindependent of the wall motions and have a common primordial origin. \n\nTheir amplitudes in the three regions were nearly equal at the \nearly stages, but the present amplitudes became different, because\nthey had different growth rates. \nHere we neglect the above complicated perturbations inside the walls\nand in their narrow neighborhoods. Then the three density perturbations\n$(\\epsilon_{Im0})_c, (\\epsilon_{IIm0})_c$ and \n$(\\epsilon_{IIIm0})_c$ included in the two equations (\\ref{eq:r6})\nand (\\ref{eq:r10}) are related as follows. First we assume that \n$\\epsilon_{Im}, \\epsilon_{IIm}$ and $\\epsilon_{IIIm}$ should be equal\nat early epochs of equal densities with the redshifts $z_1, z_2$ and\n$z_3 \\gg 1$, i.e. $\\epsilon_{Im}(z_1) = \\epsilon_{IIm}(z_2) =\n\\epsilon_{IIIm}(z_3)$, where $\\rho_I (z_1) = \\rho_{II} (z_2) =\n\\rho_{III} (z_3)$. The present densities $\\rho_{I0}, \\rho_{II0}$ and\n$\\rho_{III0}$ are related as $\\rho_{I0}\/\\rho_{III0} =\n(\\Omega_{I0}\/\\Omega_{III0})(H_{I0}\/H_{III0})^2$ and\n$\\rho_{II0}\/\\rho_{III0} = \n(\\Omega_{II0}\/\\Omega_{III0})(H_{II0}\/H_{III0})^2$. Then $z_1, z_2$\nand $z_3$ are related as \n\\begin{eqnarray}\n \\label{eq:r10a}\n(1+z_3)\/(1+z_1) &=& [\\rho_{I0}\/\\rho_{III0}]^{1\/3} = \n[(\\Omega_{I0}\/\\Omega_{III0}) (H_{I0}\/H_{III0})^2]^{1\/3}, \\cr\n(1+z_3)\/(1+z_2) &=& [\\rho_{II0}\/\\rho_{III0}]^{1\/3} = \n[(\\Omega_{II0}\/\\Omega_{III0}) (H_{II0}\/H_{III0})^2]^{1\/3}. \n\\end{eqnarray}\nTaking account of the growth rates, we obtain \n$\\epsilon_{Im0} = \\epsilon_{Im}(z_1)\nG(\\eta_{I0})\/G(\\eta_{I1})$, $\\epsilon_{IIm0} = \\epsilon_{IIm}(z_2)\nG(\\eta_{II0})\/G(\\eta_{II2})$ and $\\epsilon_{IIIm0} = \\epsilon_{IIIm}(z_3)\n(1+z_3)$. Accordingly, we obtain \n\\begin{eqnarray}\n \\label{eq:r10b}\n\\epsilon_{IIIm0} &=& (1+z_3) \\epsilon_{Im0}\nG(\\eta_{I1})\/G(\\eta_{I0}), \\cr\n\\epsilon_{IIIm0} &=& (1+z_3) \\epsilon_{IIm0}\nG(\\eta_{II2})\/G(\\eta_{II0}),\n\\end{eqnarray}\nwhere $z$ and $G(\\eta)$ are calculated\nusing Eqs.(\\ref{eq:m2}) and (\\ref{eq:r2}). Here we set $z_1 = 1000$. Then\n$(\\epsilon_{IIIm0}\/\\epsilon_{Im0},\n\\epsilon_{IIIm0}\/\\epsilon_{IIm0})$ is $(1.65, 1.15)$ and $(1.83,\n1.22)$ in cases 1 and 2, respectively. 
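\n\nThese ratios can be reproduced directly from Eqs.(\\ref{eq:m2}), (\\ref{eq:r2}), (\\ref{eq:r10a}) and (\\ref{eq:r10b}). As an illustration, the following short Python script (a simple numerical check, not part of the analysis itself) evaluates them for cases 1 and 2:\n\\begin{verbatim}\nimport numpy as np\n\ndef eta_of_omega(om):\n    # conformal time from Omega_m = 2\/(cosh(eta)+1)\n    return np.arccosh(2.0 \/ om - 1.0)\n\ndef growth(eta):\n    # growing-mode factor G(eta)\n    return 6.0 \/ (np.cosh(eta) - 1.0) * (\n        1.0 - eta * (np.cosh(eta) + 1.0) \/ (2.0 * np.sinh(eta))) + 1.0\n\ndef ratios(om1, h1, om2, h2, om3=1.0, h3=50.0, z1=1000.0):\n    # redshifts of equal density in regions I, II and III\n    z3 = (1.0 + z1) * ((om1 \/ om3) * (h1 \/ h3) ** 2) ** (1.0 \/ 3.0) - 1.0\n    z2 = (1.0 + z3) \/ ((om2 \/ om3) * (h2 \/ h3) ** 2) ** (1.0 \/ 3.0) - 1.0\n    e10, e20 = eta_of_omega(om1), eta_of_omega(om2)\n    e11 = np.arccosh(1.0 + (np.cosh(e10) - 1.0) \/ (1.0 + z1))\n    e22 = np.arccosh(1.0 + (np.cosh(e20) - 1.0) \/ (1.0 + z2))\n    r1 = (1.0 + z3) * growth(e11) \/ growth(e10)   # eps_IIIm0 \/ eps_Im0\n    r2 = (1.0 + z3) * growth(e22) \/ growth(e20)   # eps_IIIm0 \/ eps_IIm0\n    return r1, r2\n\nprint(ratios(0.3, 60.0, 0.6, 50.0))   # case 1: approximately (1.65, 1.15)\nprint(ratios(0.3, 70.0, 0.6, 55.0))   # case 2: approximately (1.83, 1.22)\n\\end{verbatim}\n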
\n\n \n\\subsection{Flat-$\\Lambda$ models}\nFor comparison, we show the first-order and second-order temperature\nfluctuations in \nthe flat-$\\Lambda$ models with $\\Omega_m + \\Omega_{\\Lambda} =\n1$, which were derived as $(\\Delta T^{(1)}\/T)_{loc}$ and $(\\Delta\nT^{(2)}\/T)_{loc}$ in our previous paper\\cite{ti}. They are expressed as\n\\begin{eqnarray}\n \\label{eq:r11}\n(\\Delta T^{(1)}\/T)_{loc} &=& (\\epsilon_{Im0})_c \\Bigl({a_0r_1\\over\n(H_{I0})^{-1}} \\Bigr)^3 \\theta^{(1)}_\\Lambda, \\cr\n\\theta^{(1)}_\\Lambda &\\equiv& {4\\over 9} \\Bigl[{2\\Bigl({a'\\over\na}\\Bigr)^2 -{a'' \\over a} \\over {a'\\over a}P' -1} \\Bigr]_0\n\\Bigl({a'\\over a}\\Bigr)_0^{-3} \\Bigl [{a'\\over a}+\\Bigl({a'' \\over a}\n- 3\\Bigl({a'\\over a}\\Bigr)^2\\Bigr) P' \\Bigr]\\\n\\Bigl({\\epsilon_{m0}\\over \\epsilon_{Im0}}\\Bigr)_c\n\\Bigl({H_0\\over H_{I0}}\\Bigr)^3 w_1(y), \n\\end{eqnarray}\nand \n\\begin{eqnarray}\n \\label{eq:r12}\n(\\Delta T^{(2)}\/T)_{loc} &=& (\\epsilon_{Im0})_c \\Bigl({a_0r_1\\over\n(H_{I0})^{-1}} \\Bigr)^3 \\theta^{(2)}_\\Lambda, \\cr\n\\theta^{(2)}_\\Lambda &\\equiv& {16\\over 27} \\Bigl[{2\\Bigl({a'\\over\na}\\Bigr)^2 -{a'' \\over a} \\over {a'\\over a}P' \n-1} \\Bigr]_0^2 \\Bigl({a'\\over a}\\Bigr)_0^{-3} (\\zeta_1 + 9\\zeta_2)'\n\\Bigl({(\\epsilon_{m0})_c^2 \\over (\\epsilon_{m0I})_c}\\Bigr)\n\\Bigl({H_0\\over H_{I0}}\\Bigr)^3 w_2(y), \n\\end{eqnarray}\nwhere $\\epsilon_m$ and $(\\epsilon_{m0})_c$ are first-order density\nperturbation and its central value at present epoch, and \n$P(\\eta), Q(\\eta), \\zeta_1(\\eta)$ and $\\zeta_2(\\eta)$ are\nauxiliary quantities used in the previous paper\\cite{ti} and their\ndefinitions are shown in Appendix. \n\n\\subsection{Analyses and results}\nIn the following, we consider the behaviors of\nfluctuations due to the ISW effect \nin our inhomogeneous model in comparison with those \nin the concordant flat-$\\Lambda$ model. \n\nTo do so, we\nshow the amplitudes of temperature anisotropy due to the ISW\neffect from a spherical compensating void\/cluster with a given\ncomoving radius and the density contrast, represented by\nthe five quantities $\\theta_I, \\theta_{II}, \\theta_{III},\n\\theta^{(1)}_\\Lambda$ and \n$\\theta^{(2)}_\\Lambda$. Note that the first-order quantities $\\theta_I$, $\\theta_{II}$ and\n$\\theta^{(1)}_\\Lambda$ do not depend on $(\\epsilon_{Im0})_c$ and\n$(\\epsilon_{m0})_c$, while the second-order ones $\\theta_{III}$ and \n$\\theta^{(2)}_\\Lambda$ are proportional to $(\\epsilon_{Im0})_c$ and\n$(\\epsilon_{m0})_c$, respectively. Here $c$ denotes the values at the\ncenters of the voids\/clusters. \nWe assume that $(\\epsilon_{m0})_c = (\\epsilon_{Im0})_c$, so that the\ncosmological situation in the neighborhood of our observer in the\nflat-$\\Lambda$ model may be equal to that in the inner region I\nof the model. \n\nIn Fig. \\ref{fig:void0}, we show the behaviors of\n$\\theta_I, \\theta_{II}$ and $-\\theta_{III}$ in the interval $0 < z <\n1$, in cases 1 and 2, respectively. In Fig. \\ref{fig:void2}, in a similar manner, we show the behaviors of\n$\\theta_I, \\theta_{II}$ and $-\\theta_{III}$ in the interval $0 < z <\n10$, in cases 1 and 2, respectively. In these figures we adopted \n$(\\epsilon_{m0})_c = (\\epsilon_{Im0})_c = -0.37$, for \nwhich $-\\theta_{III}$ is comparable with $\\theta_I$ and $\\theta_{II}$,\nand $\\theta^{(1)}_\\Lambda$ is smaller than $-\\theta^{(2)}_\\Lambda$ for\n$z > 2.5$. For comparison,\n$\\theta^{(1)}_\\Lambda$ and $-\\theta^{(2)}_\\Lambda$ are also \nshown in them. 
Here the second-order\nquantities are multiplied by $-1$, \nbecause they are negative definite and we use only positive\nquantities in the figures. From these figures, we can see that $\\theta_I,\n\\theta_{II}$ and $\\theta^{(1)}_\\Lambda$ are \ncomparable in the regions I and II, though their behaviors are\ndifferent. \n\n It is found that $|\\theta^{(2)}_\\Lambda|$ is smaller than\n$|\\theta_{III}|$, though both quantities are of second-order. \nThis reflects the strong dependence on the Hubble constants\n(cf. Eq.(\\ref{eq:r10}) and Eq.(\\ref{eq:r12})) \nand the $\\Lambda$-dependence of the second-order ISW\neffect which was studied in the previous paper\\cite{ti}. \n \nAs a result we find the following common features in cases 1 and 2\nfrom these figures. \n\n\\noindent (1) In the inner regions I and II, $\\theta_I, \\theta_{II}$\nand $\\theta^{(1)}_\\Lambda$ are comparable, \nirrespective of $r_1, (\\epsilon_{Im0})_c$ and $(\\epsilon_{m0})_c$,\nthough $\\theta_I$ and $\\theta_{II}$ in case 1 seem to be larger by a\nfactor $\\sim 1.5$ than $\\theta_I$ and $\\theta_{II}$. \n\n\\noindent (2) In the outer region near the wall of $z = 0.45$,\n$|\\theta_{III}|$ is roughly comparable with $\\theta^{(1)}_\\Lambda$ for\n$|(\\epsilon_{Im0})_c| = 0.37$ and it is smaller or larger \nthan $\\theta^{(1)}_\\Lambda$ for $|(\\epsilon_{Im0})_c| <$ \\ or \\\n$>0.37$, respectively. Far outside the wall, $|\\theta_{III}|$ is\nlarger than $\\theta^{(1)}_\\Lambda$.\n\n\\noindent (3) Since $|\\epsilon_{m0}|$ is $\\approx 1$ for \nperturbations with a size $L \\approx 10 h^{-1}$Mpc \\ ($H_0 = 100h$\nkm\/s\/Mpc), $\\theta_{III}$ is extremely large or negligible compared\nwith $\\theta^{(1)}_\\Lambda$ for perturbations with\n$L \\ll 10 h^{-1}$ or $\\gg 10 h^{-1}$Mpc, respectively. \n\n\\noindent (4) First-order quantities $\\theta_I, \\theta_{II}$ and\n$\\theta^{(1)}_\\Lambda$ are \npositive\/negative for a cluster\/void with $(\\epsilon_{Im0})_c$\nand $(\\epsilon_{m0})_c$, respectively, while second-order quantities\n$|\\theta_{III}|$ and $\\theta^{(2)}_\\Lambda$ are negative\ndefinite. Therefore, in the concordant model, \nthe expected amplitude is larger for voids than clusters. Such an asymmetry\nis not expected in the outer region in our inhomogeneous model. \n\n\n\\noindent (5) At epochs of $z < z_c (= 2.5$ for $|(\\epsilon_{m0})_c|\n\\approx 0.37)$, $|\\theta^{(2)}_\\Lambda| < \n|\\theta^{(1)}_\\Lambda|$, so\nthat the temperature fluctuations in the flat-$\\Lambda$ models\nhave different signs for a cluster\/void with a density contrast\n $\\epsilon_{m0}$. For $z > z_c$, however, the second-order ISW \neffect is dominant also in the flat-$\\Lambda$ model. \nTherefore, the temperature\nfluctuations in the flat-$\\Lambda$ models for $z>z_c$ is \nnegative definite as in the outer region of our inhomogeneous models.\n\n \n\n\\begin{figure}[htbp]\n \\begin{tabular}{cc}\n \\begin{minipage}{0.5\\hsize}\n \\begin{center}\n \\includegraphics[width=11cm,clip]{lvn1a.ps}\n \\end{center}\n \\end{minipage}\n \\begin{minipage}{0.5\\hsize}\n \\begin{center}\n \\includegraphics[width=11cm,clip]{lvn1b.ps}\n \\end{center}\n \\end{minipage}\n \\end{tabular}\n\\vspace{-4cm}\n \\caption{\\label{fig:void0} The $z$-dependence of first and second\norder temperature fluctuations in case 1(left) and in case 2 (right) \nfor photons passing \nthrough the center of a compensated spherical void at $z < 1$. \nSolid curves denote $\\theta_{I}$ and $\\theta_{II}$ and the curve $a$\ndenotes $\\theta_{III}$. 
The curves $b$ and $c$ denote\n$\\theta^{(1)}_\\Lambda$ and $\\theta^{(2)}_\\Lambda$, respectively. The\ndotted vertical lines denote the boundaries at $z = 0.067$ and $z =\n0.45$. We adopted $(\\epsilon_{m0})_c = (\\epsilon_{Im0})_c = -0.37$, for \nwhich $-\\theta_{III}$ is comparable with $\\theta_I$ and $\\theta_{II}$.} \n\\end{figure} \n\n\\begin{figure}[htbp]\n \\begin{tabular}{cc}\n \\begin{minipage}{0.5\\hsize}\n \\begin{center}\n \\includegraphics[width=11cm,clip]{lvn2a.ps}\n \\end{center}\n \\end{minipage}\n \\begin{minipage}{0.5\\hsize}\n \\begin{center}\n \\includegraphics[width=11cm,clip]{lvn2b.ps}\n \\end{center}\n \\end{minipage}\n \\end{tabular}\n\\vspace{-4cm}\n \\caption{\\label{fig:void2} The $z$-dependence of first and second\norder temperature fluctuations in case 1(left) and in case 2 (right) \nfor photons passing \nthrough the center of a compensated spherical void at $z < 10$. \nSolid curves denote $\\theta_{I}$ and $\\theta_{II}$ and the curve $a$\ndenotes $\\theta_{III}$. The curves $b$ and $c$ denote\n$\\theta^{(1)}_\\Lambda$ and $\\theta^{(2)}_\\Lambda$, respectively. The\ndotted vertical lines denote the boundaries at $z = 0.067$ and $z =\n0.45$. We adopted $(\\epsilon_{m0})_c = (\\epsilon_{Im0})_c = -0.37$, for \nwhich $-\\theta_{III}$ is comparable with $\\theta_I$ and $\\theta_{II}$.} \n\\end{figure} \n\n\n\n\n\\section{Concluding remarks}\n\\label{sec:level4}\n\nIn this paper we studied the first-order and second-order ISW effect\nin our anti-Copernican inhomogeneous model with underdense\nregions in comparison with that in the concordant flat-$\\Lambda$\nmodel. We found that a distinct feature appears at the outer region at\nmoderate redshifts. In the concordant model, \nthe expected amplitude of temperature anisotropy \nis larger for voids than clusters whereas such an asymmetry\nis not expected in the outer region in our inhomogeneous model.\nWe showed, moreover,\nthat the first-order ISW effect in the inner regions of our models\nis comparable with that in the flat-$\\Lambda$ model, and that the\nsecond-order ISW effect in the outer region depends on the amplitude\n$\\epsilon_{m0}$ of density perturbations. \nThe ISW effect due to perturbations on scales larger than $100 h^{-1}$ Mpc \nwith a density contrast $|\\epsilon_{m0}|<0.37$ in the outer region is\nnegligible. On the other hand, in the inner region, \nno ISW effect appears due to \nperturbations on scales larger than the radius of the inner region. In our \ninhomogeneous model with underdense regions, \nthe ISW effect does not contribute to the low-multipole \ncomponents of CMB anisotropies\\cite{olv,cont} in accord with \nthe assertion proposed by Hunt and Sarker\\cite{hunt}, while in the\nflat-$\\Lambda$ model, the contribution from the ISW effect due to\nlarge-scale linear perturbations is significant. \n\nThe observed correlation between the CMB sky with the \nlarge-scale structure is usually interpreted as the evidence\nof the cosmological constant $\\Lambda$, which causes the first-order\nISW effect\\cite{bou,turok}. Recently, moreover, \nhot and cold spots on the CMB sky associated with super-structures\n(with $z \\sim 0.5$) in SDSS Luminous Red Galaxy catalog were measured\nby Granett et al.\\cite{granett} and the consistency with the ISW\neffect in the flat-$\\Lambda$ models was shown. It should be\nnoted, however, that they may be brought in principle by the\nfirst and second-order ISW effect also in our models with \nunderdense regions, as we showed in the present paper. 
Therefore, the\nobservational \nevidence for the existence of small-scale ISW effect for light paths \nthrough clusters of galaxies, superclusters and supervoids may support\nnot only the flat-$\\Lambda$ model, but also our models with \nunderdense regions.\n \nIn order to make a clear distinction between the two models,\nit is better to compare the overall amplitudes of temperature fluctuations \nassociated with a void(negative density) with those associated with \na cluster(positive density). As we have seen, for $0.45 < z < z_c$\n(which depends on \n$\\epsilon_{m0}$), the amplitudes for quasi-linear voids are larger\nthan those for \nquasi-linear clusters in the concordant model due to the second-order\neffect, while such an asymmetry cannot be\nexpected in our inhomogeneous model since there is no \nfirst-order effect in the outer region\\cite{ti,si}. \n\n\n \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:into}\n\nA Type Ia supernova is believed to be a thermonuclear explosion of a\ncarbon-oxygen (CO) white dwarf \\citep{1960ApJ...132..565H} in a\nbinary system, \\citep[see, e.g.,][for a\n review]{2000ARA&A..38..191H,2005AstL...31..528I}. Most popular scenarios of the\nexplosion include (i) a gradual increase of the mass towards\nChandrasekhar limit \\citep[e.g.,][]{1973ApJ...186.1007W}, (ii) a\nmerger\/collision of two WDs\n\\citep[e.g.,][]{1984ApJS...54..335I,1984ApJ...277..355W,2013ApJ...778L..37K},\n(iii) an initial explosion at the surface of the sub-Chandrasekhar WD,\nwhich triggers subsequent explosion of the bulk of the material\n\\citep[e.g.,][]{1977PASJ...29..765N,1996ApJ...457..500H}. In all\nscenarios a thermonuclear runaway converts substantial fraction of CO\nmass into iron-group elements and the released energy powers the\nexplosion itself. The optical light of the supernova is in turn\npowered by the decay of radioactive elements, synthesized during\nexplosion. For the first year since the explosion the decay chain of\n$^{56}$Ni$\\rightarrow^{56}$Co$\\rightarrow^{56}$Fe is of prime\nimportance. As long as the expanding ejecta are optically thick for\ngamma-rays the bulk of the decay energy is thermalized and is\nre-emitted in the UV, optical and IR band. After several tens of days\nthe ejecta become optically thin for gamma-rays making SNIa a powerful\nsource of gamma photons.\n\nHere we report the results of \\INTEGRAL observations of\nSN2014J covering a period from $\\sim$16 to $\\sim$162 days since the\nexplosion. \n\nThe analysis of the SN2014J data obtained by \\INTEGRAL has been\nreported in \\citet{2014Natur.512..406C} (days $\\sim$50-100 since\nexplosion), \\citet{2014Sci...345.1162D} (days $\\sim$16-19),\n\\citet{isern} (days $\\sim$16-35), see also\n\\citet{2015A&A...574A..72D}. Despite of the proximity, SN2014J in\ngamma-rays is an extremely faint source and the expected signal is\nbelow 1\\% of the background. This makes the results sensitive to the\nadopted procedure of the background handling by different groups and\nlead to tension between some results. Here we have combined all \\INTEGRAL\ndata and uniformly process them using the same procedure as in\n\\citet{2014Natur.512..406C}. 
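\n\nAs a rough guide to the time scales involved, the decay rates implied by the $^{56}$Ni$\\rightarrow^{56}$Co$\\rightarrow^{56}$Fe chain can be sketched with a few lines of Python; this is an order-of-magnitude illustration only, which assumes $0.6~M_\\odot$ of $^{56}$Ni, adopts half-lives of 6.08 and 77.2 days, and ignores the opacity of the ejecta:\n\\begin{verbatim}\nimport numpy as np\n\nMSUN, AMU = 1.989e33, 1.661e-24          # g\nLAM_NI = np.log(2.0) \/ 6.08              # 1\/day\nLAM_CO = np.log(2.0) \/ 77.2              # 1\/day\nN0 = 0.6 * MSUN \/ (56.0 * AMU)           # initial number of 56Ni nuclei\n\ndef decays_per_day(t):\n    # Bateman solution: 56Ni and 56Co decay rates at t days after explosion\n    n_ni = N0 * np.exp(-LAM_NI * t)\n    n_co = N0 * LAM_NI \/ (LAM_NI - LAM_CO) * (\n        np.exp(-LAM_CO * t) - np.exp(-LAM_NI * t))\n    return LAM_NI * n_ni, LAM_CO * n_co\n\nfor t in (20.0, 50.0, 100.0, 150.0):\n    print(t, decays_per_day(t))\n\\end{verbatim}\nIn this simple picture the intrinsic $^{56}$Co decay rate peaks roughly three to four weeks after the explosion, while the escaping gamma-ray line flux peaks later, once the ejecta become sufficiently transparent.\n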
The resulting spectra and light-curves\nare compared with the predictions of basic type Ia models.\n\n\nCurrent state-of-the-art 3D simulations of type Ia explosions\n\\citep[e.g.,][]{2013MNRAS.429.1156S,2014MNRAS.438.1762F,2014ApJ...785..105M}\nlead to a complicated distribution of burning products in the ejecta\nand introduce a viewing angle dependence in the predicted gamma-ray\nflux. However, the overall significance of the SN2014J detection in\ngamma-rays by \\INTEGRAL (see \\S\\ref{sec:observations} and\n\\S\\ref{sec:results}) corresponds to $\\sim 10$ s.t.d. This precludes a\nvery detail model-independent analysis. We therefore took a\nconservative approach of comparing the data with a subset of popular\n1D SNIa models (see \\S\\ref{sec:models}), some of which were used in\n\\citet{2004ApJ...613.1101M} for assesment of SNIa gamma-ray\ncodes. While these models do not describe the full complexity of SNIa\nejecta, they can serve as useful indicators of the most basic\ncharacteristics of the explosion, including the total mass of\nradioactive nickel, total mass of the ejecta and the expansion\nvelocity. We also verify (\\S\\ref{sec:tem_late}) if adding an extra\ncomponent, corresponding to a transparent clump of radioactive Ni, on\ntop of the best-fitting 1D model, significantly improves the fit. In\n\\S\\ref{sec:opt} we make several basic consistency checks of gamma-ray\nand optical data, using optical observations taken\nquasi-simultaneously with \\INTEGRAL observations. Section\n\\ref{sec:conclusions} provides the summary of our results.\n\n\n\\section{SN2014J in M82}\n\\label{sec:sn2014j}\nSN2014J in M82 was discovered \\citep{2014CBET.3792....1F} on Jan. 21,\n2014. The reconstructed\n\\citep{2014ApJ...783L..24Z,2015ApJ...799..106G} date of the explosion\nis Jan. 14.75 UT with the uncertainty of order $\\pm0.3$ days. At the\ndistance of M82 ($\\sim3.5$ Mpc), this is the nearest SN Ia in several\ndecades. The proximity of the SN2014J triggered many follow-up\nobservations, including those by \\INTEGRAL\n\\citep{2014ATel.5835....1K}.\n\nThe SN is located $\\sim$1~kpc from the M82 nucleus and has a strong\n($A_{V}\\sim 2$) and complicated absorption in the UV-optical band\n\\citep[e.g.,][]{2014ApJ...784L..12G,2015ApJ...798...39M,2014ApJ...788L..21A,2014MNRAS.443.2887F,2014ApJ...792..106W,2015A&A...577A..53P,2015ApJ...805...74B,2014ApJ...795L...4K}.\n\nFrom the light curves and spectra SN2014J appears to be a ``normal''\nSNIa with no large mixing\n\\citep[e.g.,][]{2015ApJ...798...39M,2014MNRAS.445.4427A}, consistent\nwith the delayed-detonation models. Detection of stable Ni\n\\citep{2014ApJ...792..120F,2015ApJ...798...93T} in IR suggests high\ndensity of the burning material \\citep[see, e.g.,][]{1992ApJ...386L..13S}, characteristic for near-Chandrasekhar\nWD.\n\nSearch in X-ray, radio and optical bands (including pre-supernova\nobservations of M82) didn't reveal any evidence for accretion onto the\nWD before the explosion, any candidate for a companion star, or\ncompelling evidence for a large amount of circumbinary material,\nimplicitly supporting the DD scenario\n`\\citep{2014ApJ...790....3K,2014MNRAS.442.3400N,2014ApJ...790...52M,2014ApJ...792...38P},\nalthough some SD scenarios are not excluded.\n\nIn gamma-rays the first detection of SN2014J in $^{56}$Co lines was reported\n about 50 days since the explosion \\citep{2014ATel.5992....1C}. 
The\ngamma-ray signal from SN2014J was also reported in the earlier phase\n$\\sim$16-35 days after the explosion \\citep{2014ATel.6099....1I,2014Sci...345.1162D}.\n\nThroughout the paper we adopt the distance to M82 (and to SN2014J) of\n3.5 Mpc. The recent analysis by \\citet{2014MNRAS.443.2887F} suggests the\ndistance of $3.27\\pm0.2$ Mpc. This estimate is formally consistent with\nthe $D\\sim 3.53\\pm0.26$ Mpc from \\citealt{2006Ap.....49....3K} and our\nadopted value. Nevertheless, one should bear in mind that all fluxes\nand normalizations of best-fitting models can be overestimated\n(underestimated) by as much as $\\sim20$\\%.\n\nThe only other supernova sufficiently bright to allow for detailed study in gamma-rays from $^{56}$Ni\nand $^{56}$Co decay is the Type II SN1987A in Large Magellanic Cloud. In SN1987A the down-scattered hard X-ray continuum was first seen half a year after the explosion \\citep{1987Natur.330..227S,1987Natur.330..230D,1990SvAL...16..171S}, while $\\gamma$-ray lines of $^{56}$Co were detected several months later \\citep{1988Natur.331..416M,1989Natur.339..122T}. While SN2014J is more than 60 times further away from us than SN1987A, the larger amount of radioactive $^{56}$Ni and less massive\/opaque ejecta in type Ia supernovae made the detection of gamma-rays from SN2014J possible. \n\n\\section{\\INTEGRAL Observations and basic data analysis}\n\\label{sec:observations}\n\\INTEGRAL is an ESA scientific mission dedicated to fine\nspectroscopy and imaging of celestial $\\gamma$-ray sources in the\nenergy range 15\\,keV to 10\\,MeV \\citep{2003A&A...411L...1W}.\n\nThe \\INTEGRAL data used here were accumulated during revolutions\n1380-1386, 1391-1407 and\n1419-1428\\footnote{http:\/\/www.cosmos.esa.int\/web\/integral\/schedule-information\n},\ncorresponding to the period $\\sim$16-162 days after the explosion.\n\nIn the analysis we follow the procedures described in\n\\citet{2014Natur.512..406C,2013A&A...552A..97I} and use the data of two instruments\nSPI and ISGRI\/IBIS on board \\INTEGRAL.\n\n\\subsection{SPI}\n\\label{sec:spi}\nSPI is a coded mask germanium spectrometer on board \\INTEGRAL.\nThe instrument consists of 19 individual Ge detectors, has a field of\nview of $\\sim$30$^{\\circ}~$ (at zero response), an effective area $\\sim\n70$~cm$^2$ at 0.5 MeV and energy resolution of $\\sim$2 keV\n\\citep{2003A&A...411L..63V,2003A&A...411L..91R}. Effective angular resolution of SPI is\n$\\sim$2$^{\\circ}~$. During SN2014J observations 15 out of 19 detectors were\noperating, resulting in slightly reduced sensitivity and imaging\ncapabilities compared to initial configuration. \n\nPeriods of very high and variable background due to solar flares and passage through radiation belts were\nomitted from the analysis. In particular, based on the SPI\nanti-coincidence system count-rates, the revolutions 1389 and 1390\nwere completely excluded, as well as parts of revolutions 1405, 1406,\n1419, 1423 and 1426. The data analysis follows the scheme\nimplemented for the analysis of the Galactic Center positron\nannihilation emission \\citep{2005MNRAS.357.1377C,2011MNRAS.411.1727C}. 
We used only ``single'' events \\citep{2003A&A...411L..63V} and for\neach detector, a linear relation between the energy and the channel\nnumber was assumed and calibrated (separately for each orbit), using\nthe observed energies of background lines at ~198, 438, 584, 882,\n1764, 1779, 2223 and 2754 keV.\n\nThe flux of the supernova $S(E)$ at energy $E$ and the background\nrates in individual detectors $B_i(E,t)$ were derived from a simple model\nof the observed rates $D_i(E,t)$ in individual SPI detectors, where $i$ is\nthe detector number and $t$ is the time of observation with a typical\nexposure of 2000 s: \n\\begin{eqnarray}\nD_i(E,t)\\approx S(E)\\times R_i(E,t)+B_i(E,t). \n\\end{eqnarray}\nHere $R_i(E,t)$ is the effective area for the $i$-th detector, as seen from the\nsource position in a given observation. The background rate is assumed to be linearly proportional to\nthe Ge detectors' saturated event rate $G_{Sat}(t)$ above 8 MeV, averaged\nover all detectors, i.e. $B_i(E,t)=\\beta_i(E)G_{Sat}(t)+C_i(E)$, where\n$C_i(E)$ does not depend on time.\nThe coefficients $S(E)$,$\\beta_i(E)$ and $C_i(E)$ are free parameters of the model and are obtained by\nminimizing $\\chi^2$ for the entire data set. Even though the number of\ncounts in individual exposures is low, it is still possible to use a\nplain $\\chi^2$ approach as long as the errors are estimated using the mean\ncount rate and the total number of counts in the entire data set is\nlarge \\citep{1996ApJ...471..673C}. The linear nature of the model\nallows for straightforward estimation of statistical errors.\n\nDespite its proximity, SN2014J is still an extremely faint source in\n$\\gamma$-rays. Fig.\\ref{fig:background} shows the comparison of the\nquiescent SPI background, scaled down by a factor of $10^3$, with a\nsample of representative models. Two models labeled ``20d uniform''\nand ``16-35d W7'' show the models for the early period of SN2014J\nobservations. The former model is based on \nthe best-fitting \\PAR3 model to the SN\nspectra recorded between 50-100 days after explosion\n\\citep{2014Natur.512..406C}, recalculated for day 20. The model assumes\nuniform mixing of all elements, including the radioactive $^{56}$Ni,\nacross the ejecta. This model at day 20 produces prominent $^{56}$Ni\nlines near 158 keV and 812 keV. The latter model (\\W7, see\n\\S\\ref{sec:models}) averaged over the period 16-35 days does not include\nmixing and it produces much fainter lines. Finally the ``50-162d W7''\nmodel corresponds to later observations. The most prominent features\nof this model are the $^{56}$Co lines at 847 and 1238 keV. Among all\nthese features the $^{56}$Co line at 1238 keV is located in the least\ncomplicated portion of the background spectrum.\n \n\\begin{figure*}\n\\begin{center}\n\\includegraphics[trim = 0 50mm 0 90mm,scale=0.7,clip]{back_mod_3.pdf}\n\\end{center}\n\\caption{SPI quiescent background in comparison with the\n representative model spectra. SPI background is multiplied by a\n factor $10^{-3}$. Green and blue lines correspond to the \\W7 model\n \\citep{1984ApJ...286..644N} averaged over the \\early and \\late periods (see \\S\\ref{sec:periods}),\n respectively. Red line shows the \\PAR3 model from \\citet{2014Natur.512..406C} for\n day 20 since the explosion. In this model all elements, including\n radioactive isotopes, are mixed uniformly over the entire\n ejecta. The robust prediction of all plausible models is the\n presence of two $^{56}$Co lines at 847 and 1238 keV during the late phase. 
Vertical lines show the two energy bands used for making images. The ``cleanest'' SPI background is near the 1238 keV line, where no strong instrumental lines are present.
\label{fig:background}}
\end{figure*}

The spectral redistribution matrix accounts for the instrumental line broadening estimated from the data accumulated during the SN2014J observations. We parametrize the energy resolution as a Gaussian with the energy-dependent width
\begin{eqnarray}
\sigma_i\approx0.94~(E_{line}/500)^{0.115}~{\rm keV}.
\label{eq:sigma_i}
\end{eqnarray}

Compared to our previous analysis we amended the spectral redistribution matrix of SPI by including low-energy tails associated with the interactions (Compton scattering) of incoming photons inside the detector and in the surrounding material. These photons are still registered as single events in the SPI data, but their energies are lower than the true incident energy. We used the results of Monte-Carlo simulations of the SPI energy/imaging response \citep{2003A&A...411L..81S} and folded them into our procedure of spectrum reconstruction described above. For steep spectra, accounting for the low-energy tail results in a modest $\sim$10\% change in the spectrum normalization, while for the very hard SN2014J spectrum it produces a low-energy tail which provides a large contribution to the continuum, while the fluxes of narrow lines remain unaffected (Fig.\ref{fig:offdiag}). With this response matrix the Crab Nebula spectrum, observed by \INTEGRAL between Feb 21 and 23, 2014, is well described by the broken power law obtained by \citet{2009ApJ...704...17J} for earlier Crab Nebula observations with \INTEGRAL.

In our analysis we usually ignore the part of the spectrum at energies higher than 1350 keV, since in the energy range between 1400 and 1700 keV the instrument suffers from enhanced detector electronic noise, while at even higher energies only weaker lines from $^{56}$Co decay are expected (see Table~\ref{tab:tem} in \S\ref{sec:tem}). The convolution of the fiducial SNIa model (see \S\ref{sec:models}) with the simulated SPI response \citep{2003A&A...411L..81S} confirmed that the contribution of high-energy lines is negligible below 1350 keV, at least for the ``single'' events considered here.

The inspection of Fig.\ref{fig:background} shows that there is no chance to detect the continuum in the SPI data for any of our fiducial models. E.g., for a 100 keV wide energy bin between 600 and 700 keV the expected $S/N$ after a 4 Msec observation between days 50 and 162 is $\sim 0.5 \sigma$. In the real data no evidence for significant continuum above 500 keV was found in the time-averaged spectra (see \S\ref{sec:spectra} below). As shown in Fig.~\ref{fig:offdiag}, the off-diagonal tail of the 847 and 1238 keV lines dominates over the intrinsic SN continuum, while the line shapes and fluxes are not affected.

In general, we consider the inclusion of the off-diagonal term in the response as an improvement compared to a pure diagonal response. We used this improved response throughout the paper, and at the same time in \S\ref{sec:spectra} we consider several data sets, which include or exclude the SPI data below $\sim$400 keV.
Inclusion\n of the low energy ($\\la 400$ keV) data boosts the S\/N, while the\n exclusion of these data (dominated by off-diagonal continuum) makes\n spectral fits less prone to possible uncertainties in the\n off-diagonal term calibration.\n \nTo verify the whole SPI pipeline, we have done an independent analysis\nof the same data using the tools and procedures originally developed\nand tuned for SN2011fe \\citep[see][]{2013A&A...552A..97I}. This\nanalysis includes energy calibration, background modeling and the\nbackground and source fluxes fitting. Verification of these steps is\nimportant since the source (SN2014J) is very faint and even subtle\nchanges in the calibration might result in significant changes in the\nsource spectrum. The fluxes in the 835-870 keV band were derived using\nthese two independent pipelines for every revolution during SN2014J\nobservations. Comparing fluxes point by point, we have found very good\nagreement, with the scatter well within statistical errors. The signal\nfrom SN2014J is seen in both pipelines. No systematic trends of\ndeviations with the variations of the flux level are found. We have\nconcluded that the results are fully consistent, within the\nassumptions made on the background parameterization.\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[trim= 0cm 5cm 0cm 2cm,\n width=1\\textwidth,clip=t,angle=0.,scale=0.49]{offdiag2.pdf}\n\\end{center}\n\\caption{Estimated contribution of the off-diagonal terms in the SPI\n spectral response to the SN spectrum. The blue line shows the\n predicted spectrum of the \\W7 model for the \\late period, convolved with a\n simplified (nearly diagonal) SPI response. In this approximation the\n instrumental broadening is parametrized as an energy dependent\n Gaussian with the width according to eq.\\ref{eq:sigma_i}. The red line shows\n the same spectrum convolved with the response which includes\n estimated off-diagonal terms, caused by Compton scattering of\n incident photons in the detector and surrounding structures. The\n off-diagonal component alone is shown with the dashed black line.\n The off-diagonal terms create a long low-energy tails associated\n with gamma-ray lines. The impact on the brightest lines is small,\n while the continuum is strongly affected, especially at low\n energies. The model \\W7 is averaged over the period 50-162 days after\n the explosion.\n\\label{fig:offdiag}}\n\\end{figure}\n\n\n\\subsection{ISGRI\/IBIS}\n\\label{sec:isgri}\n\nThe primary imaging instrument inboard \\INTEGRAL is IBIS \\citep{2003A&A...411L.131U} - a\ncoded-mask aperture telescope with the CdTe-based detector ISGRI \\citep{2003A&A...411L.141L}. It\nhas higher sensitivity to continuum emission than SPI in the 20-300\nkeV range\\footnote{\\texttt{http:\/\/www.cosmos.esa.int\/web\/integral\/ao13}} and has a spatial resolution $\\sim12'$. We note here, that\nneither ISGRI, nor SPI can distinguish the emission of SN2014J from\nthe emission of any other source in M82. In particular, M82 hosts\ntwo ultra-luminous and variable sources\n\\citep[e.g.][]{2014AstL...40...65S,2014Natur.514..202B} which \ncontribute to the flux below $\\sim 50$ keV. ISGRI however can easily\ndifferentiate between M82 and M81, which are separated by $\\sim30'$. The\nenergy resolution of ISGRI is $\\sim$10\\% at 100 keV. The ISGRI energy\ncalibration uses the procedure implemented in OSA 10.039. 
The images in broad energy bands were reconstructed using a standard mask/detector cross-correlation procedure, tuned to produce zero signal on the sky if the count rate across the detector matches the pattern expected from pure background, which was derived from the same dataset by stacking detector images. The noise in the resulting images is fully consistent with the expected level, determined by photon counting statistics. The fluxes in broad bands were calibrated using the Crab Nebula observations with \INTEGRAL made between Feb 21 and 23, 2014. The \citet{2009ApJ...704...17J} model was assumed as a reference.

\subsection{Lightcurves, Spectra and Images}
\label{sec:periods}
The lightcurves in several energy bands were generated using ISGRI and SPI data. The time bins ($\sim$3 days each) correspond to individual revolutions of the satellite. Finer time bins are not practical given that the source is very faint. The lightcurves are shown in Figs.\ref{fig:lc_isgri}-\ref{fig:lc_spi} together with a set of representative models (see \S\ref{sec:models}). For the broad 100-200 keV band the conversion of the ISGRI flux using the Crab spectrum as a reference is not very accurate because of the difference in the shape of the incident spectra. The conversion factor has been recalculated using several representative SN models, resulting in a modest $\sim$13\% correction factor, applied to the fluxes shown in Fig.~\ref{fig:lc_isgri}.

\begin{figure*}
\begin{center}
\includegraphics[trim= 0cm 3cm 0cm 10cm,
 width=1\textwidth,clip=t,angle=0.,scale=0.99]{lc_isgri_one5.pdf}
\end{center}
\caption{ISGRI light curve in the 100-200 keV band. The S/N ratio in this band is expected to be the highest for the plausible models. The curves show the expected flux evolution for a set of models (see \S\ref{sec:models}). Color coding is explained in the legend.
\label{fig:lc_isgri}}
\end{figure*}
 
\begin{figure*}
\begin{center}
\includegraphics[trim= 0cm 5cm 0cm 2cm,
 width=1\textwidth,clip=t,angle=0.,scale=0.99]{lc_spi_v8.pdf}
\end{center}
\caption{The same as in Fig.\ref{fig:lc_isgri} for the SPI data in two narrow bands near the brightest $^{56}$Co lines.
\label{fig:lc_spi}}
\end{figure*}

In principle, the spectra can be extracted for any interval covered by the observations, e.g., for individual revolutions, as is done above for the lightcurves in several broad bands. For comparison of the observed and predicted spectra we decided to split the data into two intervals covering 16-35 and 50-162 days after the explosion, respectively (see Table~\ref{tab:sets}). The gap between days 35 and 50 is partly due to a major solar flare. Below we refer to these two data sets as the \early and \late periods.

\begin{figure}
\begin{center}
\includegraphics[trim= 0cm 5cm 0cm 2cm,
 width=1\textwidth,clip=t,angle=0.,scale=0.49]{nicofe2.pdf}
\end{center}
\caption{\early and \late periods of \INTEGRAL observations used for spectra extraction, shown as thick horizontal bars. Three curves show the evolution of the $^{56}$Ni, $^{56}$Co and $^{56}$Fe masses, respectively, normalized to the initial $^{56}$Ni mass.
Note that opacity effects tend to suppress the emergence of gamma-rays at early phases of the supernova evolution, unless radioactive isotopes are present in the outer layers of the ejecta, or the explosion is strongly asymmetric. The dashed red line shows the $^{56}$Co mass scaled down by the ratio of the Co and Ni decay times $\tau_{Co}/\tau_{Ni}$, which allows one to compare the expected relative strength of the Ni (blue curve) and Co (dashed red curve) gamma-ray lines as a function of time.
\label{fig:periods}}
\end{figure}

\begin{deluxetable*}{rccc}
\tabletypesize{\footnotesize}
\tablecaption{Data sets}
\tablewidth{0pt}
\tablehead{
\colhead{Set} &
\colhead{Dates} &
\colhead{Days since explosion} &
\colhead{Exposure\tablenotemark{$\alpha$}, Msec}
}
\startdata
\early & 2014-01-31 : 2014-02-20 & ~16 : ~35 & 1.0\\ 
\late & 2014-03-05 : 2014-06-25 & ~50 : 162 & 4.3
\enddata
\tablenotetext{$\alpha$}{Corrected for the periods of high background and the dead-time of SPI}
\label{tab:sets}
\end{deluxetable*}

Unlike the \early period, when the emergence of the $^{56}$Ni lines strongly depends on the distribution of the radioactive Ni through the ejecta, for the \late period the emission in the $^{56}$Co lines is a generic prediction of all plausible models. Two energy bands optimal for the detection of the SN signal in gamma-rays are clear from Fig.\ref{fig:background}. These two bands, containing the most prominent $^{56}$Co lines, were used to generate images. The images were extracted from the SPI data of the \late period as in \citet{2014Natur.512..406C}. Namely, we vary the assumed position of the source and repeat the flux fitting procedure (see \S\ref{sec:spi}) for each position. The resulting images of the signal-to-noise ratio in the 835-870 and 1220-1270 keV energy bands are shown in Fig.\ref{fig:spi_image}. In both energy bands the highest peaks (4.7 and 4.3 $\sigma$, respectively) coincide well (within 0.3$^{\circ}~$) with the SN2014J position, marked by a cross.

The ISGRI spectra extracted at the known position of SN2014J for the \early and \late periods are shown in Fig.~\ref{fig:spec_isgri}. The low-energy (below $\sim$70 keV) part of the extracted spectrum is likely contaminated by other sources in M82.

\begin{figure*}
\begin{center}
\includegraphics[trim= 0cm -2cm 0cm 0cm,
 width=1\textwidth,clip=t,angle=0.,scale=0.90]{spi_sn_maps_2lines4.png}
\end{center}
\caption{SPI images (S/N ratio) during the \late period in two narrow bands around the most prominent $^{56}$Co lines. Contours are at 2, 2.5 ... 5 $\sigma$. The cross shows the position of SN2014J. The brightest peaks in each image coincide well with the position of SN2014J. Due to the dither pattern\footnote{http://www.cosmos.esa.int/web/integral} used during the observations of SN2014J the central part of the image is much better covered than the outer regions. It is therefore not surprising that the level of noise increases away from the nominal target. 
\label{fig:spi_image}}
\end{figure*}

\begin{figure*}
\begin{center}
\includegraphics[trim= 0cm 5cm 0cm 10cm,
 width=1\textwidth,clip=t,angle=0.,scale=0.9]{spec_isgri2.pdf}
\end{center}
\caption{ISGRI spectrum measured at the position of SN2014J during the \early (red) and \late (blue) periods. The energies of the second set of points are multiplied by a factor 1.02 for the sake of clarity.
Dashed histograms show the predicted spectra of the \\W7 model for\n the same periods. The agreement with the predictions is reasonable\n except for the energies lower than $\\sim 70$ keV, where the spectrum\n is likely contaminated by other sources in M82 \\citep[see, e.g,][]{2014AstL...40...65S}. Dark green line shows crude approximation of the M82 spectrum measured before 2014. \n\\label{fig:spec_isgri}}\n\\end{figure*}\n\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[trim= 0cm 4cm 0cm 11cm,\n width=1\\textwidth,clip=t,angle=0.,scale=0.99]{comb_flate_spec3.pdf}\n\\end{center}\n\\caption{Combined ISGRI\/SPI spectrum for the \\late period. The\n model (\\W7, see Tab.\\ref{tab:models}) has been convolved with the SPI\n off-diagonal response. The\n SPI data below 450 keV are omitted since during \\late period the data at these energies are\n expected to be dominated by the off-diagonal response of SPI.\n\\label{fig:spec_flate}}\n\\end{figure*}\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[trim= 0cm 4cm 0cm 11cm,\n width=1\\textwidth,clip=t,angle=0.,scale=0.99]{comb_early_spec.pdf}\n\\end{center}\n\\caption{Combined ISGRI\/SPI spectrum for the \\early period. The\n model (\\W7, see Tab.\\ref{tab:models}) has been convolved with the SPI\n off-diagonal response. \n\\label{fig:spec_early}}\n\\end{figure*}\n\n\n\\section{Models}\n\\label{sec:models}\n\\subsection{A set of representative models}\n\\label{sec:1d}\nFor comparison with the \\INTEGRAL data we used a set of representative\n1D models (Table \\ref{tab:models}), based on calculations of explosive\nnucleosynthesis models. To the first approximation, these models are\ncharacterized by the amount of radioactive nickel, total mass of the\nejecta and the expansion velocity. Although current state-of-the-art\nsimulations of type Ia explosions can be done in 3D\n\\citep[e.g.,][]{2013MNRAS.429.1156S,2014MNRAS.438.1762F,2014ApJ...785..105M},\nusing these models would introduce an additional viewing angle\ndependence. In order to avoid this extra degree of freedom and given\nthat the overall significance of the SN2014J detection in gamma-rays\nby \\INTEGRAL (see \\S\\ref{sec:observations} and \\S\\ref{sec:results})\ncorresponds to only $\\sim 10$ s.t.d., we decided to keep in this work\nonly a set of 1D models to confront with the data.\n\n\n\\begin{deluxetable}{llll}\n\\tabletypesize{\\footnotesize}\n\\tablecaption{Set of models used in the paper}\n\\tablewidth{0pt}\n\\tablehead{\n\\colhead{Model} &\n\\colhead{$M_{Ni},~M_\\odot$} &\n\\colhead{$M_{tot},~M_\\odot$} &\n\\colhead{$E_K,~10^{51}~{\\rm erg}$} \n}\n\\startdata\n\\texttt{DDT1p1} & 0.54 & 1.36 & 1.29 \\\\ \n\\texttt{DDT1p4halo} & 0.62 & 1.55 & 1.3 \\\\ \n\\texttt{DDTe } & 0.51 & 1.37 & 1.09\\\\ \n\\DETO & 1.16 & 1.38 & 1.44 \\\\ \n\\HED6 & 0.26 & 0.77 & 0.72 \\\\ \n\\W7 & 0.59 & 1.38 & 1.24 \\\\ \n\\texttt{ddt1p4} & 0.66 & 1.36 & 1.35 \\\\ \n\\BALL3 & 0.66+0.04\\tablenotemark{$\\alpha$} & 1.36 & 1.35\\\\ \n\\DD4 & 0.61 & 1.39 & 1.24 \n\\enddata\n\\tablenotetext{$\\alpha$}{additional ``plume'' of $^{56}$Ni.}\n\\label{tab:models}\n\\end{deluxetable}\n\nThe set of models includes the deflagration model \\W7\n\\citep{1984ApJ...286..644N}, pure detonation model \\DETO\n\\citep{2003ApJ...593..358B}, the sub-Chandrasekhar model \\HED6\n\\citep{1996ApJ...457..500H}, and several variants of the delayed\ndetonation models: \\DD4 \\citep{ww91}, \\texttt{DDTe }\n~\\citep{2003ApJ...593..358B}, \\texttt{DDT1p1}, \\texttt{DDT1p4halo},\n\\texttt{ddt1p4}, \\BALL3 \\citep{isern}. 
The \\texttt{ddt1p4} model was\nbuilt to match the mass of $^{56}$Ni suggested by the early optical\nevolution of SN2014J \nas detected with the OMC of INTEGRAL (\\citet{2014ATel.6099....1I};\n P. Hofflich, private communication). In it, the\ntransition density from deflagration to detonation was fixed at\n$1.4~10^7~{\\rm g~cm^{-3}}$. Model \\texttt{DDT1p4halo} is a variant of\nthe later in which the white dwarf is surrounded by a 0.2 $M_\\odot$\nenvelope, as might result from a delayed merger explosion. The \\BALL3\nmodel is essentially the same as the \\texttt{ddt1p4} plus a plume of\n$0.04~M_\\odot$ of radioactive $^{56}Ni$ receding from the observer\n\\citep[see][for details]{isern}.\n\n\n\nThe emerging X-ray and gamma-ray radiation from the expanding SNIa is\ndetermined by the total amount of radioactive isotopes, their\ndistribution over velocities, the mass and the chemical composition of\nthe ejecta and expansion rate. The processes are essentially the same\nas in type II supernovae \\citep[see, e.g.][ for a prototypical example\n of type II supernova - SN1987A]{1987Natur.330..227S}. However, the\nmass of the ejecta and expansion rate differ strongly leading to much\nearlier and stronger signal in gamma-rays \\citep[see,\n e.g.][]{1969ApJ...155...75C,1981CNPPh...9..185W,1988ApJ...325..820A}. A\ncomprehensive set of computations of the expected gamma-ray flux for\ndifferent representative models was presented in \\citet{2014ApJ...786..141T}.\n\nHere we use the\nresults of similar calculations (see below), which account for line broadening,\nneeded for systematic comparison with the \\INTEGRAL data. \n\nA Monte-Carlo code follows the propagation of the $\\gamma-$photons\nthrough the ejecta and accounts for scattering and photoabsorption of\nphotons and annihilation of positrons. The predicted spectra were\ngenerated with a time step of one day, covering the entire\nobservational period. These model spectra were then averaged over the\nperiods of 16-35 and 50-162 days respectively, to provide fair\ncomparison with the \\INTEGRAL results for the \\early and \\late\nperiods. In particular, the effect of varying opacity in each model over the observational period is correctly captured by this procedure. \nThe computations include full treatment of Compton scattering\n(coherent and incoherent), photoabsorption and pair production\n\\citep[see][for details]{2004ApJ...613.1101M}. The positrons produced by $\\beta^+$\ndecay of $^{56}$Co (19\\% of all decays) annihilate in place via\npositronium formation. Both two-photon annihilation into the 511 keV\nline and the orthopositronium continuum are included.\n\n\\subsection{Transparent ejecta model (\\tem)}\n\\label{sec:tem}\nAs we discuss below (\\S\\ref{sec:results}) the \\INTEGRAL data are\nbroadly consistent with the subset of models listed in\nTable~\\ref{tab:models}. However, \\citet{2014Sci...345.1162D} reported\nan evidence of $^{56}$Ni at the surface in the first observations of\nSN2014J with \\INTEGRAL \\citep[see also][for an alternative analysis of\n early SN2014J observations]{isern}. Presence of radioactive material\nat the surface would be an important result, since traditional models,\nlisted in Table~\\ref{tab:models} do not predict it. One can\n attempt to patch our 1D models with an additional component\n describing an extra radioactive material at the surface. 
Assuming that the material at the surface is transparent to gamma-rays, the fluxes of individual lines associated with Ni and Co decay, their energies and widths can be tied together. The transparency assumption is justified by the large velocities and small initial densities expected for matter at the surface of supernova ejecta. In any case, it provides a lower limit to the mass of radioactive material, as opacity would demand a larger gamma-ray production rate in order to explain a given gamma-ray flux. This approach allows one to describe the many lines associated with a transparent clump with only three parameters. Below we refer to this component as the Transparent Ejecta model (\texttt{TEM}), and use it in combination with the best-performing \W7 model from our default set of 1D models (see \S\ref{sec:1d}), i.e., the data are compared with the predictions of the \W7+\texttt{TEM} model. While this model by itself is not self-consistent, it can be used to answer the following questions:
\begin{itemize}
\item Once the predicted signal for the \W7 model is removed from the observed spectra, do the residuals resemble a signal expected from a transparent clump of radioactive material?
\item Given the statistics accumulated by {\it INTEGRAL}, how much radioactive material in a transparent clump can be ``hidden'' in the data on top of a given 1D model?
\end{itemize}
In this section we describe the \texttt{TEM} model and then apply it to the data in \S\ref{sec:tem_late}.

The \texttt{TEM} model assumes that all line energies are shifted proportionally to their energies (i.e., the same velocity structure for all lines), while their flux ratios follow the predicted ratios \citep{1994ApJS...92..527N} based on the decay chains of $^{56}$Ni$\rightarrow^{56}$Co$\rightarrow^{56}$Fe and $^{57}$Ni$\rightarrow^{57}$Co$\rightarrow^{57}$Fe. The list of the lines and their fluxes, normalized to 1~$M_\odot$ of $^{56}$Ni, is given in Table~\ref{tab:tem}. For a given time period the model has three parameters: the initial $^{56}$Ni mass ($M_{Ni}$), the energy/redshift of the 847 keV line ($E_{847}$) and the broadening of the 847 keV line ($\sigma_{847}$).
The width of each line (Gaussian $\\sigma$) is\ndefined as\n\\begin{eqnarray}\n\\sigma_{line}=\\sigma_{847}\\times \\left(\n \\frac{E_{line}}{E_{847}}\\right ).\n\\end{eqnarray}\nOrtho-positronium continuum and pair\nproduction by gamma-ray photons are\nneglected, while the 511 keV line is added assuming that 19\\% of\n$^{56}$Co decays produce positrons, of which 25\\% form\npara-positronium yielding two 511 keV photons.\n\n\n\\begin{deluxetable}{rlll}\n\\tabletypesize{\\footnotesize}\n\\tablecaption{Line fluxes averaged over days 50-162 for a transparent\n ejecta model (\\tem) for the initial $1~M_\\odot$ of $^{56}$Ni}\n\\tablewidth{0pt}\n\\tablehead{\n\\colhead{$E_{line}$, keV} &\n\\colhead{$F_{line}\/F_{847}$} &\n\\colhead{Flux$^a$} &\n\\colhead{Isotope} \n}\n\\startdata\n 846.78 & $ 1.00 $ & $ 6.57~10^{-4}$ & $^{56}$Co \\\\ \n 158.38 & $ 7.98~10^{-3}$ & $ 5.25~10^{-6}$ & $^{56}$Ni \\\\ \n 1561.80 & $ 1.12~10^{-3}$ & $ 7.34~10^{-7}$ & $^{56}$Ni \\\\ \n 749.95 & $ 3.99~10^{-3}$ & $ 2.62~10^{-6}$ & $^{56}$Ni \\\\ \n 269.50 & $ 2.87~10^{-3}$ & $ 1.89~10^{-6}$ & $^{56}$Ni \\\\ \n 480.44 & $ 2.87~10^{-3}$ & $ 1.89~10^{-6}$ & $^{56}$Ni \\\\ \n 811.85 & $ 6.86~10^{-3}$ & $ 4.51~10^{-6}$ & $^{56}$Ni \\\\ \n 511.00 & $ 9.50~10^{-2}$ & $ 6.24~10^{-5}$ & $^{56}$Co \\\\ \n 1037.83 & $ 1.40~10^{-1}$ & $ 9.20~10^{-5}$ & $^{56}$Co \\\\ \n 1238.28 & $ 6.80~10^{-1}$ & $ 4.47~10^{-4}$ & $^{56}$Co \\\\ \n $^*$1771.49 & $ 1.60~10^{-1}$ & $ 1.05~10^{-4}$ & $^{56}$Co \\\\ \n $^*$2034.92 & $ 7.90~10^{-2}$ & $ 5.19~10^{-5}$ & $^{56}$Co \\\\ \n $^*$2598.58 & $ 1.69~10^{-1}$ & $ 1.11~10^{-4}$ & $^{56}$Co \\\\ \n $^*$3253.60 & $ 7.40~10^{-2}$ & $ 4.86~10^{-5}$ & $^{56}$Co \\\\ \n $^*$14.41 & $ 1.19~10^{-3}$ & $ 7.80~10^{-7}$ & $^{57}$Co \\\\ \n 122.06 & $ 1.03~10^{-2}$ & $ 6.79~10^{-6}$ & $^{57}$Co \\\\ \n 136.47 & $ 1.19~10^{-3}$ & $ 7.80~10^{-7}$ & $^{57}$Co\n\\tablecomments{$^a$ - Flux is in units of $~{\\rm phot~s^{-1}~cm^{-2}}$ \\\\\n$^*$ - Line is outside the energy range used for fitting} \\\\\n\\label{tab:tem}\n\\end{deluxetable}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[trim= 0cm 5cm 0cm 2cm,\n width=1\\textwidth,clip=t,angle=0.,scale=0.49]{tem_spe.pdf}\n\\end{center}\n\\caption{Spectra predicted by the \\tem model for \\early (blue) and\n \\late (red) data sets, convolved with the SPI response. The\n broadening of the reference 847 keV line is set to 20 keV (Gaussian\n sigma). The initial $^{56}$Ni mass is 1 $M_\\odot$.\n\\label{fig:tem}}\n\\end{figure}\n\n\n\n\\section{Results}\n\\label{sec:results}\n\\subsection{Combined ISGRI+SPI spectrum}\n\\label{sec:spectra}\nThe SPI images (Fig.\\ref{fig:spi_image}) for \\late period unambiguously\nshow the characteristic signatures of $^{56}$Co decay from SN2014J. A\nmore quantitative statement on the amount of $^{56}$Ni synthesized\nduring explosion and on the properties of the ejecta can be obtained\nfrom the comparison of the data with the predictions of the\nmodels. Since the \\late period is less affected by the transparency of\nthe ejecta we start our analysis with the total spectrum obtained by\n\\INTEGRAL over this period.\n\n\\subsubsection{{\\bf Late} data}\nThe results of fitting of the combined ISGRI+SPI spectrum\n(Fig.\\ref{fig:spec_flate}) for the \\late period are given in Table\n\\ref{tab:mfit_late}. A full set of\nmodels from Tab.\\ref{tab:models} is used. The two groups of columns in Table~\\ref{tab:mfit_late} differ by the energy range in the SPI data used for comparison with the model. 
In the first group the data of ISGRI (70-600 keV) and SPI (400-1350 keV) are used. The\ndata below 70 keV are likely contaminated by other sources in M82. The\nSPI data below 400 keV are omitted since during the \\late period the data at\nthese energies are expected to be dominated by the off-diagonal\nresponse of SPI. I.e. the observed SPI spectrum below 500 keV includes\nsignificant contribution of the gamma-ray photons at higher energies,\nwhich are down-scattered inside the body of the telescope (see\nFig.\\ref{fig:offdiag}). The Null model (no source) gives $\\chi^2=1945.38$ for 1906 spectral\n bins. The improvement of the $\\chi^2$ relative to the Null is\n calculated by fixing the normalization at the predicted value for\n $D=3.5$ Mpc (column 2) and by letting it free (columns 3 and\n 4). The typical value of the $\\Delta \\chi^2\\sim 65$ suggests $\\sim\n 8~\\sigma$ detection. \n\nOne can draw two conclusions from this exercise. First of all a set of\ncanonical 1D deflagration (\\W7) or delayed detonation models (e.g., \\DD4)\nfit the data well without any adjustments to the normalization. The\npure detonation model \\DETO and a sub-Chandrasekhar model \\HED6 give\npoor fit and overproduce\/underproduce the observed flux,\nrespectively. Secondly, once the normalization is allowed to vary, all\nmodels give almost identical gain in the $\\chi^2$, suggesting that\nrelative strength of all prominent features is comparable in all\nmodels. Given the uncertainty in the distance to SN2014J (or M82) a\ndeviation of the normalization at the level of $\\sim$20\\% can not be\nexcluded. But \\DETO and \\HED6 models require by far larger changes in\nthe normalization.\n\n\\begin{deluxetable*}{lcrrcrr}\n\\tabletypesize{\\footnotesize}\n\\tablecaption{$\\Delta\\chi^2$ for basic models for fixed and free\n normalization relative to the Null model of no source for the \\late period. }\n\\tablewidth{12cm}\n\\tablehead{\n\\colhead{Dataset:} & \\multicolumn{3}{c}{ISGRI(70-600 keV)\\&SPI(400-1350 keV)} & \\multicolumn{3}{c}{ISGRI(70-600 keV)\\&SPI(70-1350 keV)}\\\\\n\\\\ \n\\colhead{Model} &\n\\colhead{$N=1,\\Delta\\chi^2$} &\n\\colhead{$N_{free}$} &\n\\colhead{$\\Delta\\chi^2$} &\n\\colhead{$N=1,\\Delta\\chi^2$} &\n\\colhead{$N_{free}$} &\n\\colhead{$\\Delta\\chi^2$}\n}\n\\startdata\n\\texttt{DDT1p1} & 66.4 & 1.03$\\pm$ 0.13 & 66.5 & 87.3 & 1.09$\\pm$ 0.12 & 87.9 \\\\ \n\\texttt{DDT1p4halo} & 65.9 & 0.89$\\pm$ 0.11 & 66.9 & 88.1 & 0.93$\\pm$ 0.10 & 88.5 \\\\ \n\\texttt{DDTe } & 62.1 & 1.09$\\pm$ 0.14 & 62.5 & 82.3 & 1.15$\\pm$ 0.13 & 83.7 \\\\ \n\\DETO & 10.1 & 0.52$\\pm$ 0.06 & 66.4 & 30.2 & 0.55$\\pm$ 0.06 & 87.7 \\\\ \n\\HED6 & 47.8 & 1.86$\\pm$ 0.24 & 60.7 & 60.1 & 2.01$\\pm$ 0.22 & 80.5 \\\\ \n\\W7 & 65.0 & 0.94$\\pm$ 0.12 & 65.3 & 86.9 & 1.01$\\pm$ 0.11 & 86.9 \\\\ \n\\texttt{ddt1p4} & 64.9 & 0.85$\\pm$ 0.10 & 66.9 & 87.4 & 0.90$\\pm$ 0.10 & 88.4 \\\\ \n\\BALL3 & 63.2 & 0.83$\\pm$ 0.10 & 66.1 & 85.7 & 0.88$\\pm$ 0.09 & 87.5 \\\\ \n\\DD4 & 64.7 & 0.89$\\pm$ 0.11 & 65.7 & 87.0 & 0.95$\\pm$ 0.10 & 87.3 \\\\\n & & & & & & \\\\ \n\\texttt{No source}, $\\chi^2$ (d.o.f.) & & 1945.4 (1906) & & & 2696.9 (2566) & \n\\enddata\n\n\\tablecomments{$N$ is the normalization of the model with $N=1$ corresponding to the explosion at the distance of 3.5 Mpc.\n $\\Delta\\chi^2$ characterizes an improvement of $\\chi^2$\n for a given model relative to the Null model. 
Larger positive values indicate a better description of the data (see Appendix). The data below 70 keV are likely contaminated by other sources in M82. The SPI data below 400 keV are omitted in the first dataset (left half of the Table) since the data at these energies are expected to be dominated by the off-diagonal response of SPI (see \S\ref{sec:spi}).}
\label{tab:mfit_late}
\end{deluxetable*}

While in the above analysis the SPI data with $E<400$ keV have been omitted to concentrate on the data less affected by the off-diagonal response, the right part of Tab.\ref{tab:mfit_late} extends the analysis down to 70 keV for both instruments. The basic conclusions remain the same, although, as expected, the significance of the detection increases to $\gtrsim 9~\sigma$.

\subsubsection{{\bf Early} data}
We now proceed with the same analysis of the \early data. Table \ref{tab:mfit_early} contains the gain in the $\chi^2$ for the same set of models.

The \DETO model is clearly inconsistent with the data: inclusion of the model increases the $\chi^2$ relative to the Null model (no source). The \HED6 model, which gave a poor fit to the \late data, yields a $\chi^2$ comparable to the other models. This is because the smaller amount of $^{56}$Ni is compensated by the larger transparency of the lower-mass ejecta, which is important for the \early data.

The \BALL3 model gives a poor gain in $\chi^2$ if the normalization is fixed and the SPI data below 400 keV are excluded. If the normalization is free, this model performs marginally better than the other models, and it performs significantly better than the other models when the SPI data below 400 keV are included. This is not surprising, since the \BALL3 model has been designed to fit the SPI data during this period \citep[see][for details]{isern}. The different ``ranking'' of the \BALL3 model seen in Table \ref{tab:mfit_early} when the SPI data below 400 keV are included or excluded suggests a tension in the comparison of the fixed-normalization \BALL3 model with the SPI and ISGRI data, and also with the SPI data below and above 400 keV (see below).
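The fixed- and free-normalization comparisons collected in Tables~\ref{tab:mfit_late} and \ref{tab:mfit_early} reduce to simple linear operations on the binned spectra. A minimal sketch of this bookkeeping is given below (in Python, with hypothetical arrays for the measured spectrum, its errors and a model prediction; this is an illustration, not the actual pipeline).
\begin{verbatim}
import numpy as np

def delta_chi2(data, sigma, model):
    """Delta chi^2 of a model relative to the Null (no source) model,
    for a fixed normalization (N=1) and for the best-fitting free N.
    data, sigma, model: 1D arrays over the spectral bins (hypothetical)."""
    chi2_null  = np.sum((data / sigma) ** 2)
    chi2_fixed = np.sum(((data - model) / sigma) ** 2)
    # The normalization enters linearly, so its best-fitting value
    # and statistical uncertainty follow from weighted sums.
    n_best  = np.sum(data * model / sigma**2) / np.sum(model**2 / sigma**2)
    n_error = 1.0 / np.sqrt(np.sum(model**2 / sigma**2))
    chi2_free = np.sum(((data - n_best * model) / sigma) ** 2)
    return chi2_null - chi2_fixed, (n_best, n_error), chi2_null - chi2_free
\end{verbatim}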
\n\n\n\\begin{deluxetable*}{lcrrcrr}\n\\tabletypesize{\\footnotesize}\n\\tablecaption{The same as in Table \\ref{tab:mfit_late} for the \\early period.}\n\\tablewidth{12cm}\n\\tablehead{\n\\colhead{Dataset:} & \\multicolumn{3}{c}{ISGRI(70-600 keV)\\&SPI(400-1350 keV)} & \\multicolumn{3}{c}{ISGRI(70-600 keV)\\&SPI(70-1350 keV)}\\\\\n\\\\ \n\\colhead{Model} &\n\\colhead{$N=1,\\Delta\\chi^2$} &\n\\colhead{$N_{free}$} &\n\\colhead{$\\Delta\\chi^2$} &\n\\colhead{$N=1,\\Delta\\chi^2$} &\n\\colhead{$N_{free}$} &\n\\colhead{$\\Delta\\chi^2$}\n}\n\\startdata\n\\texttt{DDT1p1} & 14.9 & 0.84$\\pm$ 0.21 & 15.4 & 33.2 & 1.11$\\pm$ 0.19 & 33.5 \\\\ \n\\texttt{DDT1p4halo} & 14.6 & 1.00$\\pm$ 0.26 & 14.6 & 29.8 & 1.34$\\pm$ 0.24 & 31.8 \\\\ \n\\texttt{DDTe } & 14.3 & 1.30$\\pm$ 0.33 & 15.1 & 26.9 & 1.72$\\pm$ 0.30 & 32.6 \\\\ \n\\DETO & -83.9 & 0.28$\\pm$ 0.07 & 14.8 & -64.8 & 0.37$\\pm$ 0.06 & 35.2 \\\\ \n\\HED6 & 15.7 & 1.05$\\pm$ 0.26 & 15.8 & 32.7 & 1.39$\\pm$ 0.23 & 35.5 \\\\ \n\\W7 & 15.9 & 0.87$\\pm$ 0.22 & 16.2 & 34.8 & 1.14$\\pm$ 0.19 & 35.3 \\\\ \n\\texttt{ddt1p4} & 11.3 & 0.65$\\pm$ 0.17 & 15.7 & 33.3 & 0.86$\\pm$ 0.15 & 34.2 \\\\ \n\\texttt{3Dbball} & 6.7 & 0.56$\\pm$ 0.13 & 17.6 & 37.0 & 0.76$\\pm$ 0.12 & 41.4 \\\\ \n\\DD4 & 14.1 & 0.77$\\pm$ 0.19 & 15.5 & 33.6 & 1.01$\\pm$ 0.17 & 33.6 \\\\ \n& & & & & & \\\\ \n\\texttt{No source}, $\\chi^2$ (d.o.f.) & & 1856.7 (1906) & & & 2615.9 (2566) & \n\\enddata\n\\label{tab:mfit_early}\n\\end{deluxetable*}\n\n\\begin{deluxetable*}{lcc}\n\\tabletypesize{\\footnotesize}\n\\tablecaption{$\\Delta\\chi^2$ for the joint data set of the \\early and \\late spectra for a basic set of models with fixed normalization. The value of $\\Delta\\chi^2$ shows the improvement of the $\\chi^2$ relative to the Null model of no source. \n}\n\\tablewidth{12cm}\n\\tablehead{\n\\colhead{Model} &\n\\colhead{ISGRI \\& SPI(400-1350 keV)} &\n\\colhead{ISGRI \\& SPI(70-1350 keV)}\\\\\n\\colhead{} &\n\\colhead{$\\Delta\\chi^2$} &\n\\colhead{$\\Delta\\chi^2$} \n}\n\\startdata\n\\texttt{DDT1p1} & \\bf{81.3} & \\bf{120.5} \\\\ \n\\texttt{DDT1p4halo} & \\bf{80.5} & 117.8 \\\\ \n\\texttt{DDTe } & 76.4 & 109.2 \\\\ \n\\DETO & -73.8 & -34.7 \\\\ \n\\HED6 & 63.5 & 92.8 \\\\ \n\\W7 & \\bf{80.9} & \\bf{121.7} \\\\ \n\\texttt{ddt1p4} & 76.2 & \\bf{120.7} \\\\ \n\\texttt{3Dbball} & 69.9 & \\bf{122.7} \\\\ \n\\DD4 & \\bf{78.8} & \\bf{120.7} \n\\enddata\n\\tablecomments{Bold-faced are the models which have $\\Delta\\chi^2$ different from the model with the largest $\\Delta\\chi^2$ by less than 4, the criterion used to group models into ``more plausible'' and ``less plausible'' respectively (see Appendix).}\n\\label{tab:mfit_eandl}\n\\end{deluxetable*}\n\n\\subsubsection{{\\bf Early} and {\\bf Late} data together}\nFinally, in Table~\\ref{tab:mfit_eandl} we compare jointly the \\early\nand \\late data of ISGRI and SPI with the models, calculated for\ncorresponding periods. The two columns in Table~\\ref{tab:mfit_eandl}\ndiffer by the energy range in the SPI data used for comparison with\nthe model. In each case the normalization was fixed at the value set\nby the adopted distance of 3.5 Mpc. In each column we mark with bold\nface the models which have $\\Delta\\chi^2$ different from the model\nwith the largest $\\Delta\\chi^2$ by less than 4 (see Appendix for the clarification on the interpretation of this criterion in Bayesian and frequentist approaches). Once again, 1D deflagration model \\W7 and\n\"standard\" delayed detonation model perform well. 
The \\BALL3, which\nwas designed to account for tentative feature in the \\early SPI data\nat low energies, not surprisingly performs well if the SPI data below\n400 keV are included. However, if only the data above 400 keV are used\nfor SPI, this model yields significantly lower $\\Delta\\chi^2$ than the\n\\W7 or \\texttt{DDT1p1} models.\n\n\n\\subsection{Comparison of gamma-ray light curves with models}\n\\label{sec:lightcurves}\nWhile the spectra for the \\early and \\late periods already provide an\noverall test of the basic models, additional information can be\nobtained by analyzing the time variations of the fluxes in broad\nenergy bands (see Fig.\\ref{fig:lc_isgri} and \\ref{fig:lc_spi}). The\ntotal number of time bins is 34. Each bin corresponds to one\nrevolution (i.e. $\\sim$3 days). The first raw in\nTable~\\ref{tab:lc_chi2} provides the values of the $\\chi^2$ (for Null\nmodel of no source) in three energy bands: 100-200 keV (ISGRI),\n835-870 keV (SPI) and 1220-1272 keV (SPI). The normalization of the\nmodel lightcurves is fixed to 1. For 34 bins the value $\\chi^2$ for a\ncorrect model is expected to be in the interval $\\sim$26-42 in 68\\% of\ncases. Clearly, the Null model does not fit the data well.\n\nOther raws show the improvement of the $\\chi^2$ relative to the Null\nmodel. I.e., $\\displaystyle \\Delta \\chi^2=\\chi^2_{Null}-\\chi^2_{model}$. From\nTable~\\ref{tab:lc_chi2} it is clear that \\DETO model strongly\noverpredicts the flux in all bands and can be excluded ($\\chi^2$\nbecomes worse when this model is used). Other models leads to significant\nimprovement with respect to the Null model, except for the \\BALL3\nmodel in the 100-200 keV band where it exceeds the observed flux in\nthe early observation, while in the SPI bands all these models are\ncomparable.\n\nThe last column in Table~\\ref{tab:lc_chi2} provides the $\\chi^2$ for\nthree bands joints. This is basically the sum of the values of\n$\\chi^2$ for individual bands. Bold-faced are the best performing\nmodels: \\W7 and \\texttt{DDT1P1}. As in \\S\\ref{sec:spectra} these are\nthe models which have $\\Delta\\chi^2$ different from the model with the\nlargest $\\Delta\\chi^2$ by less than 4 (see Appendix).\n\nOne can also compare the lightcurves with the hypothesis of a\n constant flux. The mean level of flux was estimated for each band\n and the value of the $\\chi^2$ was calculated. The values of\n $\\Delta\\chi^2$ relative to ``No source'' are given in the last row\n of Table~\\ref{tab:lc_chi2}. One can see that this simple model is\n almost as good as other best-performing models in individual bands\n (even taking into account that this model has a free parameter -\n mean flux). This is of course the result of low statistical\n significance of the SN2014J detection that makes it difficult to\n constrain time variations of a faint signal. For the combined values\n for all three band the effective number of free parameter is 3 (mean\n fluxes in each band) and one can conclude that, e.g. \\W7 model\n performs marginally better than the constant flux model.\n\n\\begin{deluxetable*}{lrrrr}\n\\tabletypesize{\\footnotesize}\n\\tablecaption{$\\Delta\\chi^2$ for light-curves in three energy bands for\n different models. The value of $\\Delta\\chi^2$ shows the improvement of the $\\chi^2$ relative to the Null model of no source. 
The value of the $\chi^2$ for the Null model is given in the first row.
}
\tablehead{
\colhead{Model} &
\colhead{100-200 keV (ISGRI)} &
\colhead{835-870 keV (SPI)} &
\colhead{1220-1272 keV (SPI)} &
\colhead{Three bands jointly} 
}
\startdata
\texttt{No source}, $\chi^2$ & 51.0 & 49.6 & 51.1 & 151.7\\ 
\\
\texttt{DDT1P1} & 16.9 & 18.6 & 19.3 & {\bf 54.8}\\ 
\texttt{DDT1P4halo} & 9.2 & 17.4 & 19.5 & 46.1\\
\texttt{DDTe } & 16.7 & 17.1 & 17.6 & 51.4\\ 
\DETO & -105.0 & -6.3 & 13.5 & -97.8\\ 
\HED6 & 20.6 & 16.0 & 14.1 & 50.7\\ 
\W7 & 18.8 & 18.4 & 20.0 & {\bf 57.2}\\ 
\texttt{DDT1P4} & 9.1 & 17.5 & 20.3 & 46.9\\ 
\texttt{3Dbball} & -4.8 & 17.4 & 20.4 & 33.0\\ 
\DD4 & 12.7 & 17.9 & 20.0 & 50.6 \\
\texttt{CONST} & 18.7 & 16.1 & 18.5 & 53.3
\enddata
\tablecomments{The total number of time bins is 34. For the joint $\chi^2$ the effective number of bins is three times larger, i.e., 102.}
\label{tab:lc_chi2}
\end{deluxetable*}


\subsection{Search for the velocity substructure in the \late data}
\label{sec:tem_late}
\label{sec:tem_early}
The above analysis suggests that the \INTEGRAL data broadly agree with a subset of simple 1D models (e.g., \W7 or \DD4). Since the true structure of SN2014J is surely more complicated than predicted by 1D models, it is interesting to verify whether adding an extra component to the model (on top of the best-performing \W7 model) significantly improves the fit. In this section we use the \tem model as such an extra component. This choice is partly driven by the discussion of the possible presence of $^{56}$Ni at or near the surface of the ejecta in \citet{2014Sci...345.1162D} and \citet{isern}. As described in \S\ref{sec:tem}, the \tem model describes a transparent clump of radioactive Ni. All gamma-ray lines associated with the Ni$\rightarrow$Co$\rightarrow$Fe decay in the \tem model are tied to the energy (redshift) and the width of the reference 847 keV line. The flux ratios are also tied together using a model of an optically thin clump, taking into account the time evolution of the Ni and Co masses. Examples of spectra predicted by the \tem model (for 1~$M_\odot$ of $^{56}$Ni) are shown in Fig.~\ref{fig:tem}.

Thus, we consider a composite model, consisting of the \W7 model (with the normalization fixed to 1) and the \tem model. This two-component (\W7+\tem) model effectively searches for a transparent clump of radioactive material on top of the baseline \W7 model (see Fig.\ref{fig:tem_late}). The horizontal axis shows the energy of the reference 847 keV line in the observer frame and different colors correspond to different 847 keV line broadenings, parameterized through a Gaussian $\sigma$ (see legend). For a given redshift/energy and width of the reference 847 keV line the model has the normalization (initial $^{56}$Ni mass) as the only free parameter. The best-fitting $^{56}$Ni mass is shown in the top panel of Fig.\ref{fig:tem_late}. The bottom panel (Fig.\ref{fig:tem_late}) shows the improvement in the $\chi^2$ (relative to the \W7 model alone) due to the \tem model.
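To make the construction of this extra component explicit, the sketch below evaluates the \tem spectrum for given $M_{Ni}$, $E_{847}$ and $\sigma_{847}$ (Python; only a subset of the lines of Table~\ref{tab:tem} is kept, the variable names are illustrative, and in the actual fits the component is further convolved with the SPI response).
\begin{verbatim}
import numpy as np

# Rest energies (keV) and flux ratios F_line/F_847 for a few lines of Table 4.
LINES = {846.78: 1.00, 1238.28: 0.68, 1037.83: 0.14, 511.00: 0.095,
         158.38: 7.98e-3}
F847_PER_MSUN = 6.57e-4   # phot/s/cm^2 per Msun of initial 56Ni, days 50-162

def tem_spectrum(energy_kev, m_ni, e847, sigma847):
    """Transparent-ejecta component: Gaussian lines tied to the 847 keV line."""
    shift = e847 / 846.78                   # common velocity shift of all lines
    spec = np.zeros_like(energy_kev)
    for e_rest, ratio in LINES.items():
        e_line = e_rest * shift             # shifted line centroid
        sigma  = sigma847 * e_line / e847   # widths scale with line energy
        flux   = m_ni * F847_PER_MSUN * ratio
        spec  += flux * np.exp(-0.5 * ((energy_kev - e_line) / sigma) ** 2) \
                 / (sigma * np.sqrt(2.0 * np.pi))
    return spec                             # phot/s/cm^2/keV
\end{verbatim}
Since $M_{Ni}$ enters only linearly, the best-fitting clump mass and its statistical uncertainty for each trial $(E_{847},\sigma_{847})$ follow from simple weighted sums over the spectral bins.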
Formally, there is a $\\Delta \\chi^2\\sim 9.5$ peak at\n$\\sim$858.5 keV, which corresponds to a narrow ($\\sim$1 keV broad, red\ncurve) component with a negative mass of $- 0.05~M_\\odot$, which can\nbe interpreted as a marginal evidence for a dip in the velocity\nsubstructure, given that this improvement of the $\\Delta \\chi^2$\n came at the cost of adding three more parameters\\footnote{We note,\n that the width and especially energy of the reference line are\n very nonlinear parameter that could lead to large changes in the\n $\\chi^2$.} to the model. One can estimate the constraints on the\n line flux (mass of a transparent clump) that such\n analysis can provide, by fixing the centroid energy and the width of the\n reference 847 keV line and calculating the expected statistical\n uncertainty. Since the normalization of the \\tem model is the only\nfree parameter in this particular experiment, the estimation of the\nuncertainty is straightforward (see Fig.\\ref{fig:sigma_m}). Three\ncurves shown in Fig.\\ref{fig:sigma_m} show 1$\\sigma$ uncertainty on\nthe initial $^{56}$Ni mass for the \\early set (dashed-blue: SPI data\nin the 70-1350 keV band; long-dashed-green: 400-1350 keV) and \\late\nset (solid-red: 400-1350 keV), respectively. Conservative upper limit\nbased on the assumption of pure statistical errors would be 3 times\nthese values. Letting the broadening and the redshift to be free\nparameters (look-elsewhere effect) would increase this limit even\nfurther.\n \nThese experiments show that the \\late data are consistent with a\npresence of a velocity substructure (parameterized via our \\tem\nmodel) on top of the 1D \\W7 model at the level $\\sim 0.05~M_\\odot$,\nprovided that the lines are slightly broadenened. \n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[trim= 0cm 5cm 0cm 2cm,\n width=1\\textwidth,clip=t,angle=0.,scale=0.49]{g7_flate_400_w7.pdf}\n\\end{center}\n\\caption{ Fitting the SPI data in the 400-1350 keV band with a\n composite \\W7 + \\tem model. The normalization of the \\W7 model is\n fixed to 1. In the \\tem model all lines are tied to the energy\n (redshift) and the width of the reference 847 keV line. The flux\n ratios are tied using a model of optically thin clump, taking into\n account time evolution of the Ni and Co masses. This setup is\n optimized for a search of a transparent clump on top of the \\W7\n model). For a given energy and width the model has only\n normalization (initial $^{56}$Ni mass in the clump) as a free\n parameter. The bottom panel shows the improvement in $\\chi^2$ and\n the top panel shows the best-fitting $^{56}$Ni clump\n mass. Different colors correspond to a different 847 keV line\n broadening parameterized through a Gaussian $\\sigma$ - see legend.\n No compelling evidence for a clump is seen in the data. The\n sensitivity of the data to the mass of the clump strongly depends\n on the broadening of the lines (see Fig.\\ref{fig:sigma_m}).\n\\label{fig:tem_late}}\n\\end{figure}\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[trim= 0cm 5cm 0cm 2cm,\n width=1\\textwidth,clip=t,angle=0.,scale=0.49]{sigma_m.pdf}\n\\end{center}\n\\caption{Uncertainty in the initial $^{56}$Ni mass as a function of\n line broadening for \\early set (dashed-blue: SPI data in the\n 70-1350 keV band; long-dashed-green: 400-1350 keV) and \\late\n set (solid-red: 400-1350 keV), respectively, assuming transparency to gamma-rays generated close to the surface. 
A conservative upper limit on the initial mass of ``extra'' radioactive $^{56}$Ni is three times this value at a given line width. For a line broadening of $10^4~{\rm km~s^{-1}}$ (FWHM), the expected value of $\sigma_{847}$ is $\sim 12~$keV. This value can be regarded as a fiducial value for a simple SNIa model.
\label{fig:sigma_m}}
\end{figure}

We now do a similar experiment with the \early data, using the \tem+\W7 model for the SPI data in the 400-1350 keV band (Fig.\ref{fig:tem_early}, left panel) and in the 70-1350 keV band (Fig.\ref{fig:tem_early}, right panel), respectively.

The left panel does not show any significant evidence for a clump on top of the \W7 model. The structure in the right panel is more complicated. The data used in this panel now include the $^{56}$Ni line at 158 keV. We note that if the 158 keV line is able to escape, then the same is certainly true for the higher-energy lines of Ni and Co. Therefore the analysis should be done for the whole band to achieve the most significant results. First of all, our analysis does not show compelling evidence for the narrow and unshifted component reported in \citet{2014Sci...345.1162D} -- there is a weak ($\Delta\chi^2\sim6$, i.e. a $\sim 2.4~\sigma$ detection if we ignore the freedom in the redshift and broadening) peak at 847.5 keV, corresponding to a narrow line (black curve) with a mass of $\sim 0.027~M_\odot$ of $^{56}$Ni. There are several separate peaks of similar magnitudes covering the energy range of interest. However, there is more significant (albeit also marginal) evidence for a redshifted and broad component with $M_{Ni}\sim0.08~M_\odot$, $E\sim826.5~$keV and $\sigma\sim8~$keV \citep[see][for discussion]{isern}. The gain in $\chi^2$ is $\sim$18 and for a fixed energy and broadening (ignoring possible systematic errors in the background modeling and uncertainties in the calibration of the off-diagonal response) this would be a 4.2$~\sigma$ detection. However, the freedom in the energy, width (look-elsewhere effect) and normalization deteriorates the significance. Should all these free parameters be linear (as is the normalization), one would expect a change in the $\chi^2$ of $\sim 3$ due to pure statistical fluctuations. However, the energy and the width are nonlinear and the gain in $\chi^2$ might be significantly larger. In Fig.~\ref{fig:tem_late} and \ref{fig:tem_early} we see multiple peaks with the change/gain in $\chi^2$ up to $\sim$10. Assuming that the latter value can be used as a crude estimate of a possible gain in $\chi^2$ due to the non-linearity of the \tem model, the significance of the detection of the excess drops below 3$\sigma$.

Taking the best-fitting parameters at face value, we can go back to the \late data and compare the spectra (in the 400-1350 keV band) with the \tem+\W7 model, freezing the \tem model parameters at the best-fitting values obtained for the \early data. This gives $\chi^2=1883.05$, i.e., worse than the \W7 model alone ($\chi^2=1879.3$). If we let the normalizations of both the \tem and \W7 models be free (but freeze the energy and broadening of the \tem model), then the $\chi^2$ improves slightly to 1878.9, but the best-fitting mass becomes slightly negative, although consistent with zero ($-3~10^{-3}\pm5~10^{-2}~M_\odot$), while the best-fitting normalization of the \W7 model becomes 0.92 (c.f.
Tab.~\\ref{tab:mfit_late} where SPI data are used together with\nthe ISGRI data).\n\nWe concluded that there is a tension between ``low'' energy SPI data\nin \\early observations and the rest of the \\INTEGRAL data\n(Tab.~\\ref{tab:mfit_early} and Tab.~\\ref{tab:lc_chi2}). However, this tension is not prohibitively large and could be attributed to statistical fluctuations in the data, if a conservative approach is adopted. A possible\nevidence of the redshifted and broadened 158 keV line in the \\early\ndata and possible implications are further discussed in \\citet{isern}.\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[trim= 0cm 5cm 0cm 2cm,\n width=1\\textwidth,clip=t,angle=0.,scale=0.49]{g7e_early_400_w7.pdf}\n\\includegraphics[trim= 0cm 5cm 0cm 2cm,\n width=1\\textwidth,clip=t,angle=0.,scale=0.49]{g7e_early_070_w7.pdf}\n\\end{center}\n\\caption{Same as in Fig.\\ref{fig:tem_late} but for the \\early SPI\n spectrum. {\\bf LEFT:} \\tem+\\W7 model and the SPI data in\n the 400-1350 keV band. The normalization of the \\W7 model is fixed to\n 1. {\\bf RIGHT:} \\tem+\\W7 model and the SPI data in\n the 70-1350 keV band. The normalization of the \\W7 model is fixed to\n 1. The low energy part of the SPI spectrum is included to make sure\n that the $^{56}$Ni line at 158 keV is within the energy range\n probed. There is a marginal evidence of a redshifted (by $\\sim$8000 km\/s)\n component with the width of $\\sim 8$ keV (Gaussian sigma),\n corresponding to $M_{Ni}\\sim0.08~M_\\odot$. See text\n for the discussion. \n\\label{fig:tem_early}}\n\\end{figure*}\n\n\\subsection{3PAR model}\n\\label{sec:3par}\nApart from the models discussed above, we also used \\PAR3 model,\nintroduced in \\citet{2014Natur.512..406C}. This is a spherically\nsymmetric model of homologously expanding ejecta with exponential\ndensity profile $\\displaystyle \\rho \\propto e^{v\/V_e}$. The model is\nchaaracterized by three parameters: initial mass of the $^{56}$Ni\n$M_{Ni}$, total mass of the ejecta $M_{ejecta}$, and characteristic\nexpansion velocity $V_e$ in the exponential density distribution. In\nthis model a mass-weighted root-mean-squared velocity of the ejecta is\n$\\displaystyle \\sqrt{12}V_e$.\n\nThe main shortcoming of this model is the assumption that all\nelements, including radioactive Ni and Co, are uniformly mixed through\nthe entire ejecta. This is an ad-hoc assumption, made in order to stay\nwith only three-parameteres model, but it is not justified. It has the\nmajor impact for the early gamma-ray light curve, producing gamma-ray\nemission even at the very early phase (see Fig.~\\ref{fig:background},\n\\ref{fig:lc_isgri}, \\ref{fig:lc_spi}). At later times (day 50 or\nlater), the role of mixing is less significant. We therefore applied\nthis model to the \\late ISGRI and SPI spectra to get estimates of\n$M_{Ni}$, $M_{ejecta}$ and $V_e$, which are not limited to values characteristic to the set of plausible models given in Table~\\ref{tab:models}. The main purpose of using this model is to understand the level of constraints provided by the \\INTEGRAL data on the main characteristics of the supernova. Simplicity of the model allows us to calculate this model on a large grid of possible values of $M_{Ni}$, $M_{ejecta}$ and $V_e$. \n\n\nA Monte Carlo radiative transfer code is used to calculate the\nemergent spectrum, which includes full treatment of Compton scattering\n(coherent and incoherent) and photoabsorption. Pair production by\n$\\gamma$-ray photons is neglected. 
The results are shown in Fig.~\ref{fig:3par}. The best-fitting values $M_{Ni}=0.63~M_\odot$, $M_{ejecta}=1.8~M_\odot$, $V_{e}=3~10^{3}~{\rm km~s^{-1}}$ are marked with a cross. The 1$\sigma$ confidence contours (corresponding to $\Delta \chi^2=1$, i.e., for a single parameter of interest) are shown with the thick solid line. Clearly, the Ni mass $M_{Ni}$ and the characteristic expansion velocity $V_e$ are better constrained than the ejecta mass. This is not surprising, given that the data averaged over the period 50-162 days after the explosion are used, when the ejecta are relatively transparent to gamma-rays. As a result, the flux in the lines depends primarily on the Ni mass, the line broadening is set by the expansion velocity, while the ejecta mass influences mostly the amplitude of the scattered component, which declines with time relative to the ortho-positronium continuum as the optical depth declines. If we fix the poorly constrained ejecta mass to $M_{ejecta}=1.4~M_\odot$, then the derived Ni mass is constrained to the range 0.54-0.67$~M_\odot$.

For the set of models listed in Table~\ref{tab:models} we can estimate the effective $V_e$ using the relation $\displaystyle V_e=\sqrt{\frac{E_K}{6M_{ejecta}}}$, valid for a pure exponential model. The values of $V_e$ vary between $\sim2580~{\rm km~s^{-1}}$ for \texttt{DDTe } and $\sim2960~{\rm km~s^{-1}}$ for \DETO, and are equal to $2740~{\rm km~s^{-1}}$ and $2820~{\rm km~s^{-1}}$ for \W7 and \texttt{DDT1p1}, respectively. Not surprisingly, all ``successful'' models (e.g. \W7 and \texttt{DDT1p1}) have their characteristic parameters well inside the contours plotted in Fig.~\ref{fig:3par}, while \DETO and \HED6 are far outside the contours, primarily because of the Ni mass.

\begin{figure*}
\begin{center}
\includegraphics[trim= 0cm 12cm 0cm 1cm,
 width=1\textwidth,clip=t,angle=0.,scale=0.99]{model_grid_fit2.pdf}
\end{center}
\caption{Confidence contours for the \PAR3 model, corresponding to $\Delta\chi^2=1$ with respect to the best-fitting value. The cross shows the best-fitting parameters of the \PAR3 model: $M_{Ni}\sim 0.63~M_\odot$, $v_e\sim 3000~{\rm km~s^{-1}}$, $M_{ejecta}\sim 1.8~M_\odot$. The \late ISGRI and SPI spectra are used for this analysis. Confidence intervals plotted in this figure correspond to $1~\sigma$ for a single parameter of interest. The largest uncertainty is in the mass of the ejecta, while the Ni mass is the best-determined quantity.
\label{fig:3par}}
\end{figure*}

\subsection{Summary of model fitting}
\label{sec:mf_sum}
The comparison of the \INTEGRAL data with the subset of models (see the sections above) allows one to crudely rank the models according to their success in different tests. For each test (data set) we can choose the ``best'' model, which provides the largest improvement $\Delta\chi^2$ compared to the Null model (or has the smallest $\chi^2$ for the lightcurves). We can then adopt an ad hoc definition that models whose $\chi^2$ differs from that of the best model by less than 4 (i.e. $\sim 2~\sigma$ confidence) are classified as ``good''. A similar approach can be applied to the lightcurves in each band (Tab.~\ref{tab:lc_chi2}), by adding 4 to the minimal value of the $\chi^2$ among the models.
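As a concrete illustration of this grouping rule, the snippet below applies it to the fixed-normalization $\Delta\chi^2$ values from the ISGRI \& SPI(400-1350 keV) column of Table~\ref{tab:mfit_eandl} (Python; purely illustrative).
\begin{verbatim}
# Fixed-normalization Delta chi^2, ISGRI & SPI(400-1350 keV) column of Table 5.
dchi2 = {"DDT1p1": 81.3, "DDT1p4halo": 80.5, "DDTe": 76.4, "DETO": -73.8,
         "HED6": 63.5, "W7": 80.9, "ddt1p4": 76.2, "3Dbball": 69.9, "DD4": 78.8}

best = max(dchi2.values())
good = sorted(m for m, v in dchi2.items() if best - v < 4.0)
print(good)   # ['DD4', 'DDT1p1', 'DDT1p4halo', 'W7'], the bold-faced entries
\end{verbatim}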
Applying\nthis test to Tables \\ref{tab:mfit_late} - \\ref{tab:lc_chi2} we\nconclude that \\W7 and \\texttt{DDT1p1} pass all these tests, closely followed by\n\\DD4, \\texttt{ddt1p4}, and then by \\texttt{DDT1p4halo} and \\BALL3. \\DETO and \\HED6 fail most of the tests. Of course, given\nthe uncertainties in the distance, background modeling and calibration\nissues, we can not reject models other than \\DETO and \\HED6. E.g., if we let the\nnormalization to be a free parameter (equivalent of a statement that\nthe distance is highly uncertain) then most of the models become\nbarely distinguishable. We rather state, that a whole\nclass of near-Chandrasekhar models provides a reasonable description\nof the data, with the \\W7 and \\texttt{DDT1p1} being the most successful, closely followed\nby a broader group of delayed-detonation models. \n\n\n\\section{Consistency with optical data}\n\\label{sec:opt}\nWe now make several basic consistency checks of gamma-ray and optical\ndata, using optical observations taken quasi-simultaneously with\n\\INTEGRAL observations.\n\n\\subsection{Optical and gamma-ray luminosities}\nWe use $BVRIJHK$ photometry reported by \\citet{2014MNRAS.443.2887F} to\nestimate the bolometric (UVOIR) luminosity of SN~2014J on days 73 and\n96 after the explosion. Since the data do not contain the $U$-band\nphotometry, we include the $U$ magnitude recovered on the bases of the\n$U-B$ color of the dereddened normal SN~Ia, SN~2003hv\n\\citep{2009A&A...505..265L}. The SN~2014J fluxes were corrected for\nthe extinction using slightly different extinction laws reported by\n\\citet{2014ApJ...788L..21A} and \\citet{2014MNRAS.443.2887F}. The\naverage of both fluxes for each epoch were used then to produce the\nintegrated flux. To this end we approximated the spectral energy\ndistribution by the combination of two functions each of which is a\nsmooth broken power law. The SED integration in the range of\n$0.1<\\lambda<10$ $\\mu$m with the distance of 3.5 Mpc, results in the\nluminosity estimates of $(11\\pm1)\\times10^{41}$ erg s$^{-1}$ on day\n73, and $(6.5\\pm0.6)\\times10^{41}$ erg s$^{-1}$ on day 96. These\nvalues agree well with the estimated amount of deposited energy in\nthe best-fitting \\PAR3 model: $\\sim 1.0\\times 10^{42}~{\\rm erg~s^{-1}}$\nand $\\sim 5.3\\times~10^{41}~{\\rm erg~s^{-1}}$ for day 73 and 96\nrespectively. According to this model the fraction of thermalized\nenergy is $\\sim$34\\% and $\\sim$20\\% for these dates respectively.\n\n\\subsection{Asymmetry in late optical spectra?}\n\nThe issue of asymmetry of SN~2014J ejecta is of vital importance\nbecause the strong deviation of the $^{56}$Ni distribution from the\nspherical symmetry would affect the interpretation of the gamma-ray\ndata. Generally, the asymmetry of the $^{56}$Ni distribution is\nexpected in the binary WD merger scenario\n\\citep{2012ApJ...747L..10P}. Moreover, a single degenerate scenario\nalso does not rule out the ejecta asymmetry caused by the noncentral\nearly deflagration \\citep{2014ApJ...782...11M}. In fact, signatures\nof asymmetry have been already detected in several SNe~Ia at the\nnebular stage ($t > 100$ d). 
The asymmetry is manifested in\nemission line shifts and\/or double-peaked emission line profiles\n\\citep{2006ApJ...652L.101M,2010Natur.466...82M,2014arXiv1401.3347D}.\n\nTo probe a possible asymmetry of the SN~2014J ejecta we rely on the\nnebular optical spectrum taken on day 119 after the $B$ maximum, i.e.,\n136 d after the explosion \\citep{bikmaev15}, at the 1.5-m\nRussian-Turkish telescope (RTT-150) of the TUBITAK National\nObservatory (Antalya, Turkey). The SN~2014J spectrum, corrected for\nthe interstellar reddening in M82 of $E(B-V) = 1$\n\\citep[cf.][]{2014MNRAS.443.2887F}, is shown in\nFig.~\\ref{fig:rtt-sn2011fe} together with that of SN~2011fe obtained\nwith the same instrument on day 141 after the maximum. The spectra of\nboth supernovae look similar except for the blueshift of the SN~2011fe\nemissions by $\\sim 10^3$ km s$^{-1}$ relative to SN~2014J.\n\nWe focus on the [Co\\,III] 5890 \\AA\\ emission, which is not markedly \nhampered by blending with other lines. It should be emphasised \nthat on day 136 after the explosion this line is dominated by $^{56}$Co; \nthe contribution of $^{57}$Co and stable Co isotopes is negligible. \nThe Thomson optical depth at this epoch is small ($\\sim0.2$) and \ndoes not affect the line profile. \nThe [Co\\,III] emission is the superposition of \nfive lines of the a$^4$F - a$^2$G multiplet. We describe each line by \na Gaussian with an amplitude proportional to the \ncollisional excitation rate times the radiative branching ratio. \nWe adopt a heliocentric recession \nvelocity of $+104\\pm15$ km s$^{-1}$, which takes into account the \nrecession velocity of +203 km s$^{-1}$ for M 82 (NASA Extragalactic \nDatabase NED) and the rotational velocity of M 82 at the SN~2014J position.\nThe best fit (Fig. \\ref{fig:fco3}) is found for a full width at half \nmaximum of FWHM = 10450 km s$^{-1}$ for each line and a line shift of \n$v_s=+130\\pm17$ km s$^{-1}$. \nWith the exception of this small shift, each [Co\\,III] line \nis fairly symmetric, at least in the radial velocity range \n$|v_r| < 6100$ km s$^{-1}$.\nThe small line shift may be related either to an \nintrinsically small asymmetry of the $^{56}$Ni distribution, or to a special\nviewing angle, if the ejecta are actually non-spherical.\nTo summarize, the SN~2014J optical spectrum does not show signatures\nof strong asymmetry.\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[trim= 0cm 2cm 0cm 0cm,\n width=1\\textwidth,clip=t,angle=0.,scale=0.5]{snspec_rtt3.pdf}\n\\end{center}\n\\caption{The spectra of SN~2014J (day 119 after the maximum) and \nSN2011fe (day 141 after the maximum) obtained with the RTT-150 \ntelescope \\cite{bikmaev15}. Overall, the spectra are very similar in \nterms of flux level, line shape,\nand line ratios. The exception is the prominent blueshift of the [Fe\\,III], \n[Fe\\,II], and [Co\\,III] emissions of SN~2011fe relative to SN~2014J. \nThe strong interstellar Na\\,I absorption in the SN~2014J spectrum \narises in the M~82 galaxy.\n\\label{fig:rtt-sn2011fe}}\n\\end{figure}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[trim= 5cm 8cm 0cm 4cm,\n width=1\\textwidth,clip=t,angle=0.,scale=0.6]{fco3.pdf}\n\\end{center}\n\\caption{[Co\\,III] 5900 \\AA\\ emission in the SN2014J spectrum on day 119\n({\\bf thin} line) along with the model ({\\bf thick} line), which \nincludes five components of the a$^4$F - a$^2$G multiplet.\nThe narrow absorption feature at the top of the profile is due \nto Na\\,I interstellar absorption in M 82. 
Shown at the bottom is \nthe residual \"model minus observation\", which demonstrates a good \nfit in the range 5770--6060 \\AA.\n\\label{fig:fco3}}\n\\end{figure}\n\n\\section{Discussion and conclusions}\n\\label{sec:conclusions}\nWe have analyzed a complete set of \\INTEGRAL observations of\nSN2014J. We confirm our previous results \\citep{2014Natur.512..406C}\nthat the data are broadly consistent with the predictions of a\nnearly-Chandrasekhar WD explosion, with (1D) deflagration or delayed\ndetonation models providing an equally good description (see Tables\n\\ref{tab:mfit_eandl}--\\ref{tab:lc_chi2}). While pure deflagration\nmodels are disfavored because of the expected large-scale mixing and\nincomplete burning in 3D simulations, in the 1D case they yield the\nsame gamma-ray flux as the delayed detonation models. Pure detonation\n(or strongly sub-Chandrasekhar) models strongly overproduce\n(underproduce) the observed gamma-ray flux and can be excluded. Allowing\nfreedom in the normalization of the model (equivalent to allowing the\ninitial mass of $^{56}$Ni to be a free parameter, while keeping other\nparameters unchanged) makes all models essentially indistinguishable\nat the level of statistics accumulated by \\INTEGRAL.\n\nWe have searched for possible velocity substructure on top of the\npredictions from 1D models, by adding a set of broadened Gaussian\nlines to the best-performing \\W7 model. The energies and fluxes of the\nlines are tied to the predictions of the Ni and Co decay chains,\nappropriate for an optically thin clump of Ni. This analysis did not\nreveal strong evidence for a prominent velocity substructure in the\ngamma-ray data during the late phase of the SN evolution (after day\n50). Given the statistics accumulated by {\\it INTEGRAL}, a clump with\na $^{56}$Ni mass of $\\sim 0.05~M_\\odot$ producing slightly broadened\nlines (Fig.~\\ref{fig:sigma_m}) could be consistent with the \\late\ngamma-ray data. A similar analysis of the \\early data has a best-fitting\nsolution with a redshifted and broadened component with\n$M_{Ni}\\sim0.08~M_\\odot$, $E\\sim826.5~$keV and\n$\\sigma\\sim8~$keV. However, the statistical significance of this extra\ncomponent is marginal and the \\late observations do not provide\nfurther evidence for the presence of such a component \\citep[see\n also][for an independent analysis of early observations of\n SN2014J]{2014Sci...345.1162D,isern}.\n\nFrom the optical light curves and spectra, SN2014J appears to be a\n``normal'' SNIa with a layered structure and no evidence for large-scale\nmixing \\citep[e.g.,][]{2015ApJ...798...39M,2014MNRAS.445.4427A},\nconsistent with the delayed-detonation models. The detection of\nstable Ni \\citep{2014ApJ...792..120F,2015ApJ...798...93T} in the IR\nsuggests a high density of the burning material, characteristic of a\nnear-Chandrasekhar WD.\n\nThe optical spectrum taken at the nebular stage (day $\\sim 136$ after the\nexplosion) also does not show strong asymmetry in the Co and Fe\nlines. Unless the viewing angle is special, the distribution of these\nelements in the ejecta is symmetric. These data do not provide any\ndirect support for the collision\/merger scenario. 
The late SN2014J\nspectrum is very similar to that of SN2011fe, albeit with the\npronounced blueshift of emission lines of the latter.\n\nApart from the above mentioned feature in the \\early observation,\nwhich we consider as marginal, the rest of the \\INTEGRAL and optical\ndata appear consistent with the predictions of ``canonical'' 1D\nexplosion models of a nearly-Chandrasekhar carbon-oxygen white dwarf.\n\n\\vspace*{1cm}\n\n\\acknowledgments\n\nThis work was based on observations with INTEGRAL, an ESA project with \ninstruments and a science data centre funded by ESA member states \n(especially the principal investigator countries: Denmark, France, \nGermany, Italy, Switzerland and Spain) and with the participation of \nRussia and the United States. We are grateful to ISOC for their scheduling efforts, and the INTEGRAL Users Group for their support in the observations.\nE.C., R.S., S.G. are partly supported by \ngrant No. 14-22-00271 from the Russian Scientific Foundation. J.I. is \nsupported by MINECO-FEDER and Generalitat de Catalunya grants. I.B. is partly supported by Russian Government Program of Competitive Growth of KFU. E.B. is supported by Spanish MINECO grant AYA2013-40545. \nThe SPI project has been completed under the responsibility and \nleadership of CNES, France. ISGRI has been realized by CEA with the \nsupport of CNES. We thank Adam Burrows, Peter H\\\"oflich, Rishi Khatri, Ken Nomoto, \nVictor Utrobin, Alexey Vikhlinin and Stan Woosley for helpful discussions.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Introduction} \nThis note is a companion to our article \\cite{BSZ}, in which we study\nthe correlations between the zeros of a random holomorphic section $s \\in\nH^0(M, L^N)$ of a power $L^N$ of a positive line bundle $L \\to M$ over a\ncompact\n$m$-dimensional complex manifold $M$. Since the hypersurface volume of the\nzeros of a section of $L^N$ in a ball $U$ around a given point $z_0$ is\n$\\sim N {\\rm Vol\\,}(U)$, we rescale $U \\to \\sqrt{N} U$ to get a density of\nzeros independent of $N$. After expanding $U$ this way, all manifolds and\nline bundles appear asymptotically alike, and it is natural to ask if the\n{\\it local statistics} of zeros are universal, i.e. independent of $L, M,\n\\omega$ and $z_0$. To define our statistics, we first provide $H^0(M,L^N)$\nwith a natural Gaussian measure (see \\S\\S\\ref{cxgeom}--\\ref{random}). The\nlocal statistics are measured by the scaled {\\it $n$-point zero correlation\nforms}\n$\\vec K_{n}^N(\\frac{z^1}{\\sqrt{N}},\\dots,\\frac{z^n}{\\sqrt{N}})$, $z^j\\in\n\\sqrt{N}\\,U$ (see \\S\\ref{CC}). They are smooth forms on the ``off-diagonal\"\ndomain \n$\\mathcal{G}^m_n\\subset C^{mn}$ consisting of $n$-tuples of distinct points $z^j\\in\n{\\mathbb C}^m$, and their norms define scaled zero correlation measures\n$\\widetilde K_{n}^N(\\frac{z^1}{\\sqrt{N}},\\dots,\\frac{z^n}{\\sqrt{N}})$. (The correlation\nforms extend to all of ${\\mathbb C}^{mn}$ as currents of order\n0, and hence the same holds for the correlation measures.) In\n\\cite{BSZ}, we used geometric probability methods and a (universal) scaled\nSzeg\\\"o kernel to prove that there exist universal limits as $N \\to \\infty$\nof these correlation measures and more generally of the correlations\nbetween simultaneous zeros of\n$k\\le m$ sections. 
Here we use a complex analytic approach based on the\nPoincar\\'e-Lelong formula for the currents of integration over the zero set\nof a section, together with the scaled Szeg\\\"o kernel from\n\\cite{BSZ}, to give a proof of universality for the correlation forms. \nThis approach, although limited to the hypersurface case, allows for a\nresult on the level of forms and a somewhat simpler proof. \n\nOur universality theorem is as follows:\n\n\\bigskip\n\n\\pagebreak\n\\noindent{\\sc Main Theorem.} {\\it There is a universal current $\\vec\nK_{n}^{\\infty}\\in\\mathcal{D}'{}^{(m-1)n,(m-1)n}({\\mathbb C}^{mn})$ such that the following\nholds: suppose that $(L,h)$ is a positive Hermitian line bundle on an\n$m$-dimensional compact complex manifold $M$, and let $\\vec K_{n}^N$ be the\n$n$-point zero correlation current on $M^n$. Suppose $z^0\\in M$ and choose\nlocal holomorphic coordinates in $M$ about $z^0$ such that $\\Theta_h|_{z^0}\n=\\partial\\dbar|z|^2$. Then\n$$\\vec K_{n}^N\\left(\\frac{z^1}{\\sqrt{N}}, \\dots,\\frac{z^n}{\\sqrt{N}}\\right)\n=\\vec K_{n}^{\\infty}(z^1,\\dots,z^n)+ O\\left(\\frac{1}{\\sqrt{N}}\\right)\\,.$$\nFurthermore, $\\vec\nK_{n}^{\\infty}$ is a smooth form on the off-diagonal\ndomain \n$\\mathcal{G}^m_n$, and the error term has $k^{th}$ order derivatives $\\le\n\\frac{C_{A,k}}{\\sqrt{N}}$ on each compact subset $A\\subset \\mathcal{G}^m_n$,\n$\\forall k\\ge 0$. \n}\n\n\\bigskip Our method leads to integral formulae for these universal limit\nforms, although the details rapidly become complicated as the number $n$ of\npoints increases. For the case $n=2$, we carry out the calculation in\ncomplete detail in dimension one and also use the method to obtain an\nexplicit formula for the scaling limit pair correlation measures in all\ndimensions (Theorems \\ref{Hformula} and \\ref{mformula}). In particular, our\nformula gives the scaling limit pair correlations for\n${\\operatorname{SU}}(m+1)$-polynomials (which are sections of powers of the hyperplane bundle\nover complex projective space $\\C\\PP^m$). The universal formula in\ndimension one agrees, as it must, with that of Bogomolny-Bohigas-Leboeuf\n\\cite{BBL} and Hannay \\cite{H} in the case of random ${\\operatorname{SU}}(2)$-polynomials.\nSimilar formulas for correlations of zeros of real polynomials\nwere given in \\cite{BD}. \n\nBefore we get started on the proof, a few heuristic remarks on\ncorrelation measures and forms may be helpful. Roughly speaking, \n$\\widetilde K_{n}^N(z^1, \\dots,z^n)$ gives the conditional probability density of\nthe zero divisor of a random section $s$ (simultaneously) intersecting small\nballs around $z^{k+1}, \\dots, z^n$, given that the zero divisor\n(simultaneously) intersects small balls around $z^1,\n\\dots, z^k$. The correlation form $\\vec K_{n}^N$ gives a more refined\nconditional probability: Let $Y$ denote the set of holomorphic tangent\nhyperplanes in $M$. (We can identify $Y$ with the projectivized holomorphic\ncotangent bundle of\n$M$.) 
Then $\\vec K_{n}^N$ gives the conditional probability that the zero\ndivisor has tangent hyperplanes in small balls in $Y$ above\n$z^{k+1}, \\dots, z^n$, given that it has tangents in small balls above $z^1,\n\\dots, z^k$.\n\n\\medskip\n\n\\noindent{\\bf Acknowledgment.} The first draft of this paper was\ncompleted while the\nthird author was visiting the Erwin Schrodinger Institute in July 1998.\nHe wishes to thank that institution\nfor its hospitality and financial support.\n \n\n\n\\section{Notation}\\label{notation}\n\nWe summarize here the notation from complex analysis that we will need in the\nproof. This notation is the same as in \\cite{SZ} and \\cite{BSZ}, except\nthat different normalizations for the metric and volume form are used in\n\\cite{SZ}.\n\n\\subsection{Complex geometry}\\label{cxgeom}\n\nWe denote by $(L, h) \\to M$ a holomorphic line bundle with smooth\nHermitian metric $h$ whose\ncurvature form\n\\begin{equation}\\label{curvature}\\Theta_h=-\\partial\\dbar\n\\log\\|e_L\\|_h^2\\;,\\end{equation} \nis a positive (1,1)-form. Here, $e_L$ is a local non-vanishing\nholomorphic section of $L$ over an open set $U\\subset M$, and\n$\\|e_L\\|_h=h(e_L,e_L)^{1\/2}$ is the $h$-norm of $e_L$. As in\n\\cite{BSZ},\nwe give $M$ the\nHermitian metric corresponding to the K\\\"ahler\\ form\n$\\omega=\\frac{\\sqrt{-1}}{2}\\Theta_h$ and the induced Riemannian volume\nform \n\\begin{equation}\\label{dV} dV_M= \\frac{1}{m!}\n\\omega^m\\;.\\end{equation}\n\n\nWe denote by $H^0(M, L^{N})$ the space of holomorphic\nsections of\n$L^N=L\\otimes\\cdots\\otimes L$. The metric $h$ induces Hermitian\nmetrics\n$h^N$ on $L^N$ given by $\\|s^{\\otimes N}\\|_{h^N}=\\|s\\|_h^N$. We give\n$H^0(M,L^N)$ the Hermitian inner product\n\\begin{equation}\\label{inner}\\langle s_1, s_2 \\rangle = \\int_M h^N(s_1,\ns_2)dV_M \\quad\\quad (s_1, s_2 \\in H^0(M,L^N)\\,)\\;,\\end{equation} and we\nwrite\n$|s|=\\langle s,s \\rangle^{1\/2}$. \n\nFor a holomorphic section $s\\in H^0(M, L^N)$, we let $Z_s$ denote the\ncurrent\nof integration over the zero divisor of $s$:\n$$(Z_s,\\phi)=\\int_{Z_s}\\phi\\,,\\quad \\phi\\in\\mathcal{D}^{m-1,m-1}(M)\\,.$$ The\nPoincar\\'e-Lelong formula (see e.g., \\cite{GH}) expresses the\nintegration\ncurrent of a holomorphic section $s=ge_L^{\\otimes N}$ in the form:\n\\begin{equation}\n\\label{PL} Z_s = \\frac{i}{\\pi}\n\\partial\\dbar \\log |g|\n= \\frac{i}{\\pi}\n\\partial\\dbar \\log \\|s\\|_{h^N} + N\\omega\\;.\\end{equation} We also denote\nby $|Z_s|$\nthe Riemannian $(2m-2)$-volume along the regular points of $Z_s$,\nregarded as\na measure on $M$:\n\\begin{equation}\\label{volmeasure} (|Z_s|,\\phi)=\\int_{Z_s^{\\rm reg}}\\phi\nd{\\operatorname{Vol}}_{2m-2}=\\frac{1}{(m-1)!}\\int_{Z_s^{\\rm reg}}\\phi\n\\omega^{m-1}\\,;\\end{equation} i.e., \n$|Z_s|$ is the total variation measure of the current of integration\nover\n$Z_s$: \n\\begin{equation}\\label{volmeasure2}|Z_s|=\nZ_s\\wedge{\\textstyle \\frac{1}{(m-1)!}}\\omega^{m-1}\\,.\\end{equation}\n\n\\subsection{Random sections and Gaussian measures}\\label{random} \n\n\nWe now give $H^0(M,L^N)$ the complex\nGaussian probability measure\n\\begin{equation}\\label{gaussian}d\\mu(s)=\\frac{1}{\\pi^{d_N}}e^{-\n|c|^2}dc\\,,\\qquad s=\\sum_{j=1}^{d_N}c_jS_j^N\\,,\\end{equation} where\n$\\{S_j^N\\}$ is an orthonormal basis for\n$H^0(M,L^N)$ and $dc$ is $2d_N$-dimensional Lebesgue measure. 
This\nGaussian\nis characterized by the property that the $2d_N$ real variables $\\Re\nc_j,\n\\Im c_j$ ($j=1,\\dots,d_N$) are independent random variables with mean 0\nand\nvariance ${\\frac{1}{2}}$; i.e.,\n$${\\mathbf E}\\, c_j = 0,\\quad {\\mathbf E}\\, c_j c_k = 0,\\quad {\\mathbf E}\\, c_j \\bar c_k = \\delta_{jk}\\,.$$\nHere and throughout this paper, ${\\mathbf E}\\,$ denotes expectation: ${\\mathbf E}\\,\\phi = \\int\n\\phi d\\mu$.\n\n\n\nWe then regard the currents $Z_s$ (resp.\\ measures $|Z_s|$), as\ncurrent-valued (resp.\\ measure-valued) random variables on the\nprobability space $(H^0(M,L^N), d\\mu)$; i.e., for each test form\n(resp.\\ function) $\\phi$,\n$(Z_s,\\phi)$ (resp.\\ $(|Z_s|,\\phi)$) is a complex-valued random\nvariable.\n\nSince the zero current $Z_s$ is unchanged when $s$ is multiplied by an\nelement of ${\\mathbb C}^*$, our results are the same if we instead regard $Z_s$ as a\nrandom variable on the unit sphere $SH^0(M,L^N)$ with Haar probability\nmeasure. We prefer to use Gaussian measures in order to facilitate\nour computations.\n\n\\subsection{Correlation currents}\\label{CC}\n\nThe $n$-point correlation current of the zeros is the current on $M^n =\nM\n\\times M \\times \\cdots\n\\times M$ ($n$ times) given by \n\\begin{equation} \\vec K_n^N(z^1, \\dots, z^n):= {\\mathbf E}\\,(Z_s(z^1) \\otimes\nZ_s(z^2)\n\\otimes \\dots \\otimes\nZ_s(z^n)) \\end{equation}\nin the sense that for any test form $\\phi_1(z^1) \\otimes \\dots \\otimes\n\\phi_n(z^n) \\in \\mathcal{D}^{m-1,m-1}(M) \\otimes \\dots\n\\otimes \\mathcal{D}^{m-1,m-1}(M)$,\n\\begin{equation} \\big(\\vec K_n^N(z^1, \\dots, z^n),\n\\phi_1(z^1) \\otimes \\dots \\otimes \\phi_n(z^n)\\big)\n= {\\mathbf E}\\, \\left[\\big( Z_s,\\phi_1\\big)\\big( Z_s,\\phi_2\\big)\\cdots \n\\big(Z_s, \\phi_n\\big)\\right].\n\\end{equation}\nIn a similar way we define the $n$-point correlation measures $\\widetilde K_n^N$ as\nthe ``total variation measures\" of\nthe $n$-point correlation currents:\n\\begin{equation}\\widetilde K_n^N(z^1, \\dots, z^n)=\\vec K_n^N(z^1, \\dots, z^n)\\wedge\n\\frac{1}{(m-1)!}\\omega_{z^1}^{m-1}\\wedge\\cdots\\wedge \n\\frac{1}{(m-1)!}\\omega_{z^n}^{m-1}\n\\,,\\end{equation} i.e.,\n\\begin{equation}\\big(\\widetilde K_n^N(z^1, \\dots, z^n),\n\\phi_1(z^1) \\dots \\phi_n(z^n)\\big)\n= {\\mathbf E}\\, \\big[(|Z_s|, \\phi_1)(|Z_s|, \\phi_2)\\cdots\n(|Z_s|, \\phi_n)\\big] \\end{equation}\nwhere $\\phi_j \\in \\mathcal{C}^0(M).$\n\n\\begin{rem} In the case of pair correlation on a Riemann surface ($n=2, \\dim\nM=1$), the correlation measures take the form\n$$ \\vec K^N_2(z,w) = [\\Delta]\\wedge (\\vec K^N_1(z)\n\\otimes 1) + \\kappa^N(z,w)\\omega_z\\otimes\n\\omega_w \\quad (N\\gg 0)$$ where $[\\Delta]$ denotes the current of\nintegration along the diagonal $\\Delta=\\{(z,z)\\}\\subset M\\times M$, and\n$\\kappa^N\\in\\mathcal{C}^\\infty(M\\times M)$. \\end{rem}\n\n \n\\subsection{Szego kernels}\n\nAs in \\cite{Z,SZ,BSZ} and elsewhere, we analyze the\n$N \\to \\infty$ limit by lifting it to a principal $S^1$ bundle $\\pi: X\n\\to M$. Let us recall how this goes.\n\n\nWe denote by $L^*$ the dual line bundle to $L$, and define $X$ as the circle\nbundle $X=\\{\\lambda \\in L^* : \\|\\lambda\\|_{h^*}= 1\\}$, where $h^*$ is the norm on\n$L^*$ dual to $h$. We can view $X$ as the boundary of the disc bundle $D =\n\\{\\lambda\n\\in L^* : \\rho(\\lambda)>0\\}$, where $\\rho(\\lambda)=1-\\|\\lambda\\|^2_{h^*}$. 
The disc\nbundle\n$D$ is strictly pseudoconvex in $L^*$, since $\\Theta_h$ is positive, and\nhence\n$X$ inherits the structure of a strictly pseudoconvex CR manifold.\nAssociated to\n$X$ is the contact form $\\alpha= -i\\partial\\rho|_X=i\\bar\\partial\\rho|_X$. We also give\n$X$ the volume form\n\\begin{equation}\\label{dvx}dV_X=\\frac{1}{m!}\\alpha\\wedge \n(d\\alpha)^m=\\alpha\\wedge\\pi^*dV_M\\,.\\end{equation}\n\nThe setting for our analysis of the Szeg\\\"o kernel is the Hardy space\n$H^2(X)\n\\subset \\mathcal{L}^2(X)$ of square integrable CR functions on $X$, where we use\nthe inner product\n\\begin{equation}\\label{unitary} \\langle F_1, F_2\\rangle\n=\\frac{1}{2\\pi}\\int_X F_1\\overline{F_2}dV_X\\,,\\quad\nF_1,F_2\\in\\mathcal{L}^2(X)\\,.\\end{equation} We let $r_{\\theta}x =e^{i\\theta} x$\n($x\\in X$) denote the\n$S^1$ action on $X$. The action $r_\\theta$ commutes with the Cauchy-Riemann\noperator\n$\\bar{\\partial}_b$; hence $H^2(X) = \\bigoplus_{N = 0}^{\\infty} H^2_N(X)$,\nwhere\n$$H^2_N(X) = \\{ F \\in H^2(X): F(r_{\\theta}x) = e^{i N \\theta} F(x)\n\\}\\,.$$ A section $s_N$ of $L^{N}$ determines an equivariant function\n$\\hat{s}_N$ on $X$:\n\\begin{equation}\\label{snhat}\\hat{s}_N(z, \\lambda) = \\left( \\lambda^{\\otimes\nN}, s_N(z)\n\\right)\\,,\\quad (z,\\lambda)\\in X\\,;\\end{equation} then $\\hat s_N(r_\\theta x) =\ne^{iN\\theta} s_N(x)$. The map $s\\mapsto\n\\hat{s}$ is a unitary equivalence between $H^0(M, L^{\\otimes N})$ and\n$H^2_N(X)$. \n\nWe let $\\Pi_N : \\mathcal{L}^2(X) \\rightarrow H^2_N(X)$ denote the orthogonal\nprojection. The Szeg\\\"o kernel $\\Pi_N(x,y)$ is defined by\n\\begin{equation} \\Pi_N F(x) = \\int_X \\Pi_N(x,y) F(y) dV_X (y)\\,,\n\\quad F\\in\\mathcal{L}^2(X)\\,.\n\\end{equation} It can be given as\n\\begin{equation}\\label{Szego}\\Pi_N(x,y)=\\sum_{j=1}^{d_N}\\widehat \nS_j^N(x)\\overline{\\widehat S_j^N(y)}\\,,\\end{equation} where\n$S_1^N,\\dots,S_{d_N}^N$ form an orthonormal basis of $H^0(M,L^N)$. \n\n\\section{Scaling}\n\n\nIn order that we may study the local nature of the random variable\n$Z_s$, we\nfix a point $z^0\\in M$ and choose a holomorphic coordinate chart\n$\\Psi:\\Omega,0\\to U,z_0$ ($\\Omega\\subset {\\mathbb C}^m,\\ U\\subset M$) such that\n\\begin{equation}\\label{coord} \\Psi^*\\omega_{z^0} \n=\\left.\\frac{i}{ 2}\\sum_{j=1}^m dz_j\\wedge\nd\\bar z_j\\right|_0\\;.\\end{equation}\nFor example, if $L$ is the hyperplane section bundle $\\mathcal{O}(1)$ over\n${\\mathbb C}{\\mathbb P}^m$\nwith the Fubini-Study metric $h_{{\\operatorname{FS}}}$, and\n$z_0=(1:0:\\dots:0)$, then the coordinate chart $$\\Psi:{\\mathbb C}^m\\to\nU=\\{w\\in\\C\\PP^m:w_0\\neq 0\\}\\,,\\quad \\Psi(z)=(1:z_1:\\dots :z_m)$$ (i.e.,\n$z_j=w_j\/w_0$) satisfies (\\ref{coord}).\n\nTo simplify notation, we identify $U$ with $\\Omega$. For a current\n$T\\in\\mathcal{D}'{}^{p,q}(\\Omega)$, we write\n$$T\\left(\\frac{z}{\\sqrt{N}}\\right)=\\left(\\tau_{\\sqrt{N}}\\right)_* T\\in\n\\mathcal{D}'{}^{p,q}(\\sqrt{N}\\Omega)\\qquad (\\tau_\\lambda(z)=\\lambda z)\\,.$$ (In particular,\nif\n$T=\\sum T_{jk}(z)dz_j\\wedge d\\bar z_k$, then $T(\\frac{z}{\\sqrt{N}})=\n\\frac{1}{N}\\sum T_{jk}(\\frac{z}{\\sqrt{N}})dz_j\\wedge d\\bar z_k$.)\n\nWe define the {\\it rescaled zero current} of\n$s \\in\nH^0(M,L^N)$ by\n\\begin{equation} \\label{zhat} \\widehat\nZ^N_s(z):=Z_{s}\\left(\\frac{z}{\\sqrt{N}}\\right)\\,. 
\\end{equation} \nThe scaled $n$-point correlation currents\nare then defined by:\n\\begin{equation}\\label{def-var} {\\mathbf E}\\,\\left( \\widehat{Z}^N_s(z^1) \\otimes \n\\widehat{Z}^N_s(z^2) \\otimes \\cdots \\otimes \\widehat{Z}^N_s(z^n)\\right)\n=\\vec\nK_n^N\\left(\\frac{z^1}{\\sqrt{N}},\\dots,\\frac{z^n}{\\sqrt{N}}\\right)\\in\n\\mathcal{D}'{}^{n,n}(M^n).\n\\end{equation}\n\n\nFollowing the approach of \\cite{SZ}, we fix an orthonormal basis\n$\\{S^N_j\\}$\nof $H^0(M, L^{N})$ and write $S^N_j = f^N_j e_L^{\\otimes N}$ over $U$.\nAny\nsection in $H^0(M, L^{N})$ may then be written as $s = \\sum_{j=1}^{d_N}\nc_j\nf^N_j e_L^N$. To simplify the notation we\nlet $f^N=(f^N_1,\\ldots,f^N_{d_N}):U\\to {\\mathbb C}^{d_N}$ and we put $$\n\\sum_{j=1}^{d_N} c_j f_j = c\\cdot f^N\\;.$$ Hence\n\\begin{equation}\\label{PL2} Z_s = \\frac{\\sqrt{-1}}{\\pi} \\partial\\dbar\n\\log| c\\cdot f^N|\\;,\\quad \\widehat Z^N_s\n=\\frac{\\sqrt{-1}}{\n\\pi}\\d_z\\bar\\partial_z\\log\\big|c\\cdot f^N\\big(\\frac{z}{\\sqrt{N}}\\big)\n\\big|\n\\end{equation} and therefore\n\\begin{equation} \\widehat\nZ_s(z^1) \\otimes\\cdots\\otimes \\widehat Z_s(z^n)= \\left(\\frac{i }{\n\\pi}\\right)^n\\d_{z^1}\\bar\\partial_{z^1}\\cdots\\d_{z^n}\\bar\\partial_{z^n}\n\\left[\\log|c\\cdot\nf^N(\\frac{z^1}{\\sqrt{N}})|\\cdots\\log| c\\cdot\nf^N(\\frac{z^n}{\\sqrt{N}})|\\right]\\;. \\end{equation}\nWe then can write the rescaled correlation currents in the form\n\\begin{equation}\\label{PLcorscaled}\\begin{array}{l}\\displaystyle \n\\vec K^N_n\\left(\\frac{z^1}{\\sqrt{N}},\\dots,\\frac{z^n}{\\sqrt{N}}\\right)=\n{\\mathbf E}\\,\\big(\\widehat Z_s(z^1) \\otimes\\cdots\\otimes \\widehat \nZ_s(z^n)\\big)\\\\[12pt] \\displaystyle\\quad = \\left(\\frac{i }{\n\\pi}\\right)^n \\d_{z^1}\\bar\\partial_{z^1}\\cdots\\d_{z^n}\\bar\\partial_{z^n}\n\\int_{{\\mathbb C}^{d_N}}\n\\log\\left|c\\cdot\nf^N\\left(\\frac{z^1}{\\sqrt{N}}\\right)\\right| \\cdots\\log\\left|c\\cdot\nf^N\\left(\\frac{z^n}{\\sqrt{N}}\\right) \n\\right|\\frac{e^{-|c|^2}}{\\pi^{d_N}}\ndc\\;.\\end{array}\n\\end{equation} \n\n\n\n\\subsection{Scaling limit of the Szego kernel}\\label{scalingszego}\n\nThe asymptotics of the Szeg\\\"o kernel along the diagonal were given by\n\\cite{Ti} and \\cite{Z}:\n\\begin{equation}\\label{diag}\\frac{\\pi^m}{N^m}\\Pi_N(x,x)= 1\n+O(N^{-1})\\,.\\end{equation} For our proof of the Main Theorem,\nwe need the following lemma from \\cite{BSZ}, which gives the\n`near-diagonal' asymptotics of the Szego kernel.\n \n\\begin{lem} \\label{neardiag} Let $z^0\\in M$ and choose local coordinates\n$\\{z^j\\}$ in a neighborhood of\n$z_0$ so that $z^0=0$ and $\\Theta_h(z_0)=\\sum dz^j\\wedge d\\bar z^j$. Then\n\\begin{eqnarray*}\\frac{\\pi^m}{N^m}\\Pi_N(\\frac{z}{\\sqrt{N}},\\frac{\\theta}{N };\n\\frac{w}{\\sqrt{N}},\\frac{\\phi}{N})&=&e^{i2\\pi (\\theta-\\phi)+i\\Im (z\\cdot \\bar\nw)-{\\frac{1}{2}} |z-w|^2} + O(N^{-1\/2})\\;.\\end{eqnarray*}\n\\end{lem} Here, $(z,\\theta)$ denotes the point\n$e^{i\\theta}\\|e_L(z)\\|_he^*_L(z)\\in X$, and similarly for $(w,\\phi)$. In\n(\\ref{diag}) and Lemma \\ref{neardiag}, the expression \n$O(N^\\alpha)$ means a term with $k^{\\rm th}$ order derivatives $\\le C_k \nN^\\alpha$, for all $k\\ge 0$. Lemma \\ref{neardiag} says that the Szeg\\\"o kernel\nhas a universal scaling limit. 
In fact, its scaling limit is the first\nSzeg\\\"o kernel of the reduced Heisenberg group; see \\cite{BSZ}.\n\n\\section{Universality}\\label{universality}\n\nAll the ideas of the proof of the Main Theorem occur in the\nsimplest case $n = 2$.\nSo first we prove universality in that case and then extend the proof to\ngeneral $n$. \n\nThus, our first object is to prove that the large $N$ limit of the\nrescaled pair\ncorrelation current (from (\\ref{PLcorscaled}) with $n=2$)\n\\begin{equation}\\label{PLcor2}\\begin{array}{l}\\displaystyle \\vec\nK^N_2\\left(\\frac{z}{\\sqrt{N}},\\frac{w}{\\sqrt{N}}\\right)=\n{\\mathbf E}\\,\\left(\\widehat Z^N_s(z) \\otimes \\widehat Z^N_s(w)\\right)\\\\[10pt]\n\\displaystyle\\quad\\quad\\quad=\\frac{-1 }{ \\pi^2}\n\\d_{z}\\bar\\partial_{z}\\d_w\\bar\\partial_w\n\\int_{{\\mathbb C}^{d_N}}\\log\\left|c\\cdot\nf^N(\\frac{z}{\\sqrt{N}})\\right|\\,\\log\\left| c\\cdot\nf^N(\\frac{w}{\\sqrt{N}})\\right| \\frac{e^{-|c|^2}}{\\pi^{d_N}} dc \\end{array}\n\\end{equation}\nis universal. \n\nAs in \\cite{SZ}, we write $f^N=|f^N|u^N$ and expand the integrand in\n(\\ref{PLcor2}): \\begin{eqnarray}\\log|c\\cdot\nf^N(\\frac{z}{\\sqrt{N}})|\\log|c\\cdot\nf^N(\\frac{w}{\\sqrt{N}})|&=&\n\\log |f^N( \\frac{z}{\\sqrt{N}})| \\log |f^N(\n\\frac{w}{\\sqrt{N}})|\\nonumber \\\\\n&&+\\log|f^N(\\frac{z}{\\sqrt{N}})| \\log |c\\cdot u^N(\\frac{w}{\\sqrt{N}})\n|\\nonumber \\\\&&+ \\log |f^N(\\frac{w}{\\sqrt{N}})| \\log |c\\cdot\nu^N(\n\\frac{z}{\\sqrt{N}}) |\\nonumber \\\\ && +\\log |c\\cdot\nu^N(\\frac{z}{\\sqrt{N}})| \\log |c\\cdot\nu^N(\\frac{w}{\\sqrt{N}})\n|\\;.\\label{expand}\\end{eqnarray} \nLet us denote the terms resulting from this expansion by $E_1,\\ E_2,\\\nE_3,\\\nE_4$, respectively. In particular, \n\\begin{equation}\\label{E1}E_1=\\frac{-1 }{\n\\pi^2}\\d_z\\bar\\partial_z\\d_w\\bar\\partial_w \\left[\\log \\big|f^N( \\frac{z}{\\sqrt{N}},\n\\frac{z}{\\sqrt{N}})\\big| \\log \\big|f^N( \\frac{w}{\\sqrt{N}},\n\\frac{w}{\\sqrt{N}})\\big|\n\\right]\\;.\\end{equation} \n\nBy (\\ref{snhat}), $\\widehat S^N_j(z,\\theta)=e^{iN\\theta} \\|e_L(z)\\|_h^{N} f^N_j\n(z)$, where $(z,\\theta)$ are the\ncoordinates in $X$ given in \\S \\ref{scalingszego}. By (\\ref{Szego}),\n\\begin{equation}\\label{szego1}\n\\Pi_N(z,w)=\\|e_L(z)\\|_h^{N} \\|e_L(w)\\|_h^{N}\\langle f^N(z), f^N(w)\\rangle\\,,\n\\end{equation} where we write $\\Pi_N(z,w)=\\Pi_N(z,0;w,0)$.\nSince $\\Pi_N(z,z)^{1\/2} = \\|e_L(z)\\|^N_h |f^N(z)|$, each\nfactor in (\\ref{E1})\nhas the form ${\\frac{1}{2}}\\log \\Pi_N(\\frac{z}{\\sqrt{N}}, \\frac{z}{\\sqrt{N}}) - N\n\\log\n\\|e_L(\\frac{z}{\\sqrt{N}})\\|_h$. By\n(\\ref{diag}), $\\log\n\\Pi_N(\\frac{z}{\\sqrt{N}}, \\frac{z}{\\sqrt{N}})\\to 0$ as $N\\to\\infty$. On the\nother hand\n$$-iN\\d_z\\bar\\partial_z \\log\n\\|e_L(\\frac{z}{\\sqrt{N}})\\|_h = \\omega (\\frac{z}{\\sqrt{N}})\\;.$$ Hence the\nfirst term converges to the normalized Euclidean (double) K\\\"ahler\\ form:\n\\begin{equation}\\label{E1limit} E_1=\\frac{i}{ 2\\pi}\\partial\\dbar|z|^2 \\wedge\n\\frac{i}{\n2\\pi}\\partial\\dbar|w|^2 +O(\\frac{1}{N})\\;.\\end{equation}\n\nThe middle two terms vanish since\nthe integrals in $E_2$ and $E_3$ are independent of $w$ and $z$\nrespectively (see \\cite[\\S 3.2]{SZ}). 
The ``interesting term'' is\ntherefore\n\\begin{equation}\\label{interesting} E_4= \\frac{-1\n}{\\pi^2}\\d_z\\bar\\partial_z\\d_w\\bar\\partial_w \\int_{{\\mathbb C}^{d_N}}\\log|\nc\\cdot u^N(\\frac{z}{\\sqrt{N}})|\\log|c\\cdot\nu^N(\\frac{w}{\\sqrt{N}})| \\frac{e^{-|c|^2}}{\\pi^{d_N}} dc\n\\; .\\end{equation} To evaluate $E_4$, we consider\nthe integral \\begin{equation}\\label{GN} G_2^N(x^1,x^2) := \\int_{{\\mathbb C}^{d_N}}\n\\log\n|c\\cdot x^1| \\log | c\\cdot x^2| \\frac{e^{-|c|^2}}{\\pi^{d_N}} dc\\quad\n(x^1, x^2\\in\n{\\mathbb C}^{d_N}) \\end{equation}\nwith $x^1 = u^N(\\frac{z}{\\sqrt{N}}),\\ x^2 = u^N(\\frac{w}{\\sqrt{N}})$. \nTo simplify it, we construct a Hermitian orthonormal basis $\\{e_1,\n\\dots, e_{d_N}\\}$ for ${\\mathbb C}^{d_N}$ such that $x^1=e_1$ and\n\\begin{equation}\\label{2D}\nx^2 = \\xi_{1} e_1 + \\xi_{2} e_2,\\quad\\xi_{1} = \\langle x^2,\nx^1\\rangle,\\;\\; \\xi_{2} = \\sqrt{1 - |\\xi_{1}|^2}.\n\\end{equation}\nThis is possible because we can always multiply $e_2$ by a phase $e^{i\n\\theta}$ so that $\\xi_{2}$ is positive real. We then make a unitary\nchange of variables to express\nthe integral in the $\\{e_j\\}$ coordinates. Since the Gaussian is\n$U(d_N)$-invariant, (\\ref{GN}) simplifies to\n\\begin{equation} \\label{2dim} G_2^N(x^1,x^2) = G_2(\\xi_1,\\xi_2)=\n\\frac{1}{\\pi^2} \\int_{{\\mathbb C}^2 }e^{-(|c_1|^2 + |c_2|^2)} \\log |\\xi_{1}|\\log\n|c_1\\xi_{1} + c_2 \\xi_{2} | dc_1 dc_2\\end{equation}\n(where we used the fact that the Gaussian integral in\neach\n$c_j, j \\geq 3$ equals one by construction).\nBy performing a rotation of the $c_1$ variable, we may replace\n$\\xi_1$ with $|\\xi_1|$ and replace $G_2(\\xi_1,\\xi_2)$ with\n\\begin{equation}\\label{Gcos}G(\\cos\\theta):=G_2(\\cos \\theta, \\sin\\theta)\n\\,,\\end{equation}\nwhere $\\cos\\theta = |\\xi_1|=|\\langle x^1,x^2\\rangle|$,\n$0\\leq\\theta\\leq\\pi\/2$. Hence (\\ref{interesting}) becomes\n\\begin{equation}\\label{interesting-simp} E_4=\n\\frac{-1 }{\\pi^2}\\d_z\\bar\\partial_z\\d_w\\bar\\partial_w G(\\cos\\theta_N)\\;,\\quad\n\\cos\\theta_N= \n\\big|\\big\\langle u^N(\\frac{z}{\\sqrt{N}}),\nu^N(\\frac{w}{\\sqrt{N}})\\big\\rangle\\big|\\;.\\end{equation}\n\n\nBy the universal scaling formula for the Szego kernel\n(Lemma~\\ref{neardiag}) and (\\ref{szego1}),\nwe have\n\\begin{equation}\\label{thetalimit}\\cos\\theta_N \n=\\frac{|\\Pi_N(z,w)|}{\\Pi_N(z,z)^{1\/2}\\Pi_N(w,w)^{1\/2}}\n= e^{-{\\frac{1}{2}}\n|z-w|^2} + O(N^{-{\\frac{1}{2}}})\\;.\\end{equation}\nThus we get the universal formula:\n\\begin{equation}\\label{UPCC} \\vec K_2^{\\infty}(z,w) = \\frac{i}{\n2\\pi}\\partial\\dbar|z|^2 \\wedge \\frac{i}{\n2\\pi}\\partial\\dbar|w|^2 + \\frac{-1 }{\\pi^2}\\d_z\\bar\\partial_z\\d_w\\bar\\partial_w G(e^{-{\\frac{1}{2}}\n|z - w|^2}). \\end{equation}\nThis completes the proof for the pair correlation case $n=2$. (Notice that \nthe formula has the same form in all dimensions.) \n\nThe proof for general $n$ is similar. 
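Before giving it, we record a purely numerical illustration of (\\ref{thetalimit}) (a sanity check only, not part of the argument): for ${\\operatorname{SU}}(2)$-polynomials one has $\\langle f^N(z),f^N(w)\\rangle\\propto(1+z\\bar w)^N$ in the standard affine chart, so that $\\cos\\theta_N=|1+z\\bar w\/N|^N\/[(1+|z|^2\/N)(1+|w|^2\/N)]^{N\/2}$, and the convergence to $e^{-{\\frac{1}{2}}|z-w|^2}$ can be checked directly:\n\\begin{verbatim}\nimport numpy as np\n\ndef cos_theta_N(z, w, N):\n    # normalized kernel of SU(2) polynomials at the rescaled points\n    zs, ws = z \/ np.sqrt(N), w \/ np.sqrt(N)\n    num = abs((1.0 + zs * np.conj(ws)) ** N)\n    den = ((1.0 + abs(zs) ** 2) * (1.0 + abs(ws) ** 2)) ** (N \/ 2.0)\n    return num \/ den\n\nz, w = 0.7 + 0.2j, -0.3 + 0.9j\nprint('limit:', np.exp(-0.5 * abs(z - w) ** 2))\nfor N in (10, 100, 1000, 10000):\n    print(N, cos_theta_N(z, w, N))\n\\end{verbatim}\n\n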
We again write $f^N=|f^N|u^N$ and\nexpand the integrand in (\\ref{PLcorscaled}):\n$$ \\begin{array}{l} \\log|c\\cdot\nf^N(\\frac{z^1}{\\sqrt{N}})|\\log|c\\cdot\nf^N(\\frac{z^2}{\\sqrt{N}})|\\cdots \\log|c\\cdot\nf^N(\\frac{z^n}{\\sqrt{N}})| \\\\ \\\\ \n= \\log |f^N( \\frac{z^1}{\\sqrt{N}})| \\log |f^N( \\frac{z^2}{\\sqrt{N}})|\n\\cdots \\log|f^N(\\frac{z^n}{\\sqrt{N}})|\n\\nonumber \\\\ \\\\ \n+\\log|f^N(\\frac{z^1}{\\sqrt{N}})|\\log |f^N( \\frac{z^2}{\\sqrt{N}})| \\cdots\n\\log|f^N(\\frac{z^{n-1}}{\\sqrt{N}})| \\log |c\\cdot\nu^N(\\frac{z^n}{\\sqrt{N}})\n|\\nonumber \\\\ \\\\ + \\cdots \\\\ \\\\+\\log |c\\cdot\nu^N(\\frac{z^1}{\\sqrt{N}})| \\log |c\\cdot\nu^N(\\frac{z^2}{\\sqrt{N}})| \\cdots \\log |c\\cdot\nu^N(\\frac{z^n}{\\sqrt{N}}) |\\;.\\end{array}$$\nWe denote the terms resulting from this expansion by $E_1,\\dots, E_{2^n}$,\nrespectively. As before, the first term converges to the normalized Euclidean\n``$n$-fold\" K\\\"ahler\\ form:\n$$E_1=\\frac{i}{2\\pi}\\partial\\dbar|z^1|^2\\wedge \\cdots\n\\wedge \\frac{i}{2\\pi}\\partial\\dbar|z^n|^2 +O(\\frac{1}{N})\\,.$$ The $E_{2^n}$\nterm is obtained from the function\n\\begin{equation}\\label{GNn} G_n^N(x^1, x^2, \\dots, x^n) :=\n\\int_{{\\mathbb C}^{d_N}} \\log\n|c\\cdot x^1| \\log |c\\cdot x^2|\\cdots \\log\n|c\\cdot x^n| \\frac{e^{-|c|^2}}{\\pi^{d_N}} dc\\,, \\end{equation} \n$x^1,x^2, \\dots, x^n\\in\n{\\mathbb C}^{d_N}$. Precisely, we substitute\n\\begin{equation}\\label{xj} x^j = u^N(\\frac{z^j}{\\sqrt{N}})\\end{equation}\nin (\\ref{GNn}) and apply the operator $\\left(\\frac{i }{\n\\pi}\\right)^n \\d_{z^1}\\bar\\partial_{z^1}\\cdots\\d_{z^n}\\bar\\partial_{z^n}$.\n As above, we define a special\nHermitian orthonormal basis\n$\\{e_1,\n\\dots,\ne_n\\}$ for the n-dimensional complex subspace spanned by $\\{x_1, \\dots,\nx_n\\}.$ We put:\n$$\\begin{array}{ll} x^1 = e_1 \\\\\nx^2 = \\xi_{21} e_1 + \\xi_{22}e_2 & \\xi_{22} =\n\\sqrt{1 - | \\xi_{21}|^2} \\\\ \\vdots \\\\\nx^n = \\xi_{n1} e_1 + \\dots + \\xi_{nn} e_n\\qquad & \\xi_{nn} = \\sqrt{1 -\n\\sum_{j \\leq n-1}\n|\\xi_{nj}|^2 }. \\end{array}$$\nSuch a basis exists because we can always multiply $e_j$ by a \nphase $e^{i \\theta}$ so that the last component $\\xi_{jj}$ is positive\nreal. \nWe complete $\\{e_j\\}$ to a basis\nof ${\\mathbb C}^{d_N}$, and we now let $c_j$ denote coordinates relative to this\nbasis. As above, we rewrite the Gaussian integral in these coordinates.\nAfter integrating out the variables $\\{c_{n+1}, \\dots,\nc_{d_N}\\}$, (\\ref{GNn}) simplifies to\nthe $n$-dimensional complex Gaussian integral\n\\begin{equation}\\label{ndim}\\begin{array}{lll} G^N_n(x^1,\\dots,x^n)&=&\nG_n(\\xi_{21},\n\\xi_{22},\n\\dots,\n\\xi_{nn})\\\\[10pt] &=&\n\\frac{1}{\\pi^n} \\int_{{\\mathbb C}^n} e^{-|c|^2} \\log|c_1| \\log\n|c_1 \\xi_{21} +\nc_2 \\xi_{22}|\\cdots \\log |c_1 \\xi_{n1} + \n\\dots c_n \\xi_{nn}| dc\\,.\\end{array}\\end{equation}\nNote that the variables $\\xi_{jk}$ depend on $N$; we write $\\xi_{jk} =\n\\xi_{jk}^N$ when we need to indicate this dependence. \n\nTo prove universality, we observe that the $\\xi_{jk}$\nare universal algebraic functions of the inner products $\\langle x^a,\nx^b \\rangle$. Indeed,\n\\begin{equation}\\label{lex}\n\\xi_{j1}\\bar\\xi_{k1}+\\cdots+\\xi_{jk}\\bar\\xi_{kk}=\\langle\nx^j,x^k\\rangle\\,,\\quad 1\\le k\\le j\\le n\\,,\\end{equation}\nwhere we set $\\xi_{11}=1$. These algebraic functions are obtained by\ninduction (lexicographically) using (\\ref{lex}). 
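Equivalently (a small numerical aside, not needed for the argument): the lower-triangular matrix $(\\xi_{jk})$ with positive diagonal satisfying (\\ref{lex}) is precisely the Cholesky factor of the Gram matrix $\\big(\\langle x^j,x^k\\rangle\\big)$, e.g.\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\n# arbitrary unit vectors x^1,...,x^n standing in for u^N(z^j\/sqrt(N))\nn, d = 4, 50\nX = rng.normal(size=(n, d)) + 1j * rng.normal(size=(n, d))\nX \/= np.linalg.norm(X, axis=1, keepdims=True)\ngram = X @ X.conj().T            # gram[j, k] = <x^j, x^k>\nxi = np.linalg.cholesky(gram)    # lower triangular, positive diagonal\nassert np.allclose(xi @ xi.conj().T, gram)\n\\end{verbatim}\n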
(The triangular matrix\n$(\\xi_{jk})$ is just the inverse of the matrix describing the Gram-Schmidt\nprocess.)\n\nBy (\\ref{xj}), it follows that\nthe $\\xi_{jk}^N$ \nare universal algebraic functions \nof the variables \n$$\\left\\langle\nu^N({\\textstyle \\frac{z^j}{\\sqrt{N}}}),\nu^N({\\textstyle\\frac{z^k}{\\sqrt{N}}})\n\\right\\rangle=\\frac{\\Pi_N(\\frac{z^j}{\\sqrt{N}}, \\frac{z^k}{\\sqrt{N}})}{\n\\Pi_N(\\frac{z^j}{\\sqrt{N}},\\frac{z^j}{\\sqrt{N}})^{1\/2}\n\\Pi_N(\\frac{z^k}{\\sqrt{N}},\\frac{z^k}{\\sqrt{N}})^{1\/2}}\n=e^{i\\Im\n(z^j\\cdot \\bar z^k)-{\\frac{1}{2}} |z^j-z^k|^2} + O({\\textstyle\\frac{1}{\\sqrt{N}}})\\,.$$\nWe note here that\n\\begin{equation}\\label{det} |x^1\\wedge\\cdots\\wedge x^n|^2=\\det( \\langle\nx^j,x^k\\rangle) \\to \\det \\left(e^{i\\Im (z^j\\cdot \\bar z^k)-{\\frac{1}{2}}\n|z^j-z^k|^2}\\right) =e^{-\\sum|z^j|^2}\\det\\left(e^{z_j\\cdot \\bar\nz_k}\\right)\\,.\\end{equation} When the $z_j$ are distinct (i.e.,\n$(z^1,\\dots,z^n)\\in \\mathcal{G}^m_n$), the limit determinant in (\\ref{det}) is\nnonzero (see \\cite{BSZ}) and thus $\\xi_{jk}^N=\n\\xi_{jk}^\\infty+O(\\frac{1}{\\sqrt{N}})$, where the $\\xi_{jk}^\\infty$ are\nuniversal real-analytic functions of $z\\in\\mathcal{G}^m_n$. We conclude that the \n$E_{2^n}$ term converges to a universal current:\n$$E_{2^n}=\\left(\\frac{i}{\\pi}\\right)^n \\d_{z^1}\\bar\\partial_{z^1}\\cdots\n\\d_{z^n}\\bar\\partial_{z^n}G_n(\\xi_{21}^\\infty,\\dots,\\xi_{kk}^\\infty) +\nO({\\textstyle\\frac{1}{\\sqrt{N}}})\\,.$$\n\nConsider now a general term $E_a$. Suppose without loss of generality that\n$E_a$ comes from \n$$ \\log |c\\cdot\nu^N(\\frac{z^1}{\\sqrt{N}})|\\cdots \\log |c\\cdot\nu^N(\\frac{z^k}{\\sqrt{N}})|\\log|f^N(\\frac{z^{k+1}}{\\sqrt{N}})|\\cdots\n\\log|f^N(\\frac{z^n}{\\sqrt{N}})|\\,.$$ As above we obtain\n$$E_a=\\left(\\frac{i}{\\pi}\\right)^k \\d_{z^1}\\bar\\partial_{z^1}\\cdots\n\\d_{z^k}\\bar\\partial_{z^k}G_k(\\xi_{21}^\\infty,\\dots,\\xi_{kk}^\\infty) \n\\wedge \\frac{i}{2\\pi}\n\\partial\\dbar |z^{k+1}|^2\\wedge\\cdots \\wedge \\frac{i}{2\\pi}\n\\partial\\dbar |z^{n}|^2 +O({\\textstyle\\frac{1}{\\sqrt{N}}})\\,.$$\nHence this\nterm also approaches a universal current. \n(As in the pair correlation case, terms with only one $u^N$ vanish.) \\qed\n\n\n\\section{Explicit formulae}\n \nWe now calculate explicitly the limit pair correlation measures\n$\\widetilde K^{\\infty}_2(z,w)$.\n\n\\subsection{Preliminaries}\n\nThe first step is to compute $\\Delta\nG(e^{-{\\frac{1}{2}} r^2})$, where $\\Delta$ is the Euclidean Laplacian on\n${\\mathbb C}^m$ and $r=|\\zeta|$ ($\\zeta\\in{\\mathbb C}^m$). To begin this computation, we\nwrite\n$a_j = r_j e^{i \\phi_j}$ and then rewrite (\\ref{2dim})--(\\ref{Gcos}) as\n\\begin{equation} \nG(\\cos\\theta)=\\frac{2}{\\pi}\\int_0^{\\infty} \\int_0^{\\infty}\\int_0^{2\\pi}\nr_1 r_2\ne^{-(r_1^2 + r_2^2)} \\log r_1 \\log |r_1 \\cos\\theta+ r_2 e^{i\n\\phi}\\sin\\theta| d\\phi dr_1 dr_2\\;.\\end{equation} We\nnow evaluate the inner integral by Jensen's formula, which gives\n\\begin{equation} \\int_0^{2\\pi} \\log|r_1 \\cos \\theta + r_2 \\sin \\theta e^{i\n\\phi}| d\\phi = \\left\\{ \\begin{array}{ll}2\\pi \\log (r_1 \\cos \\theta) &\n\\mbox{for}\\;\\;r_2 \\sin \\theta \\leq r_1 \\cos \\theta \\\\ & \\\\ 2\\pi\\log (r_2\n\\sin \\theta) & \\mbox{for} \\;\\;r_2 \\sin \\theta\\geq r_1 \\cos \\theta\n\\end{array}\n\\right. 
\\end{equation} Hence \\begin{equation} G(\\cos\\theta) = 4\n\\int_0^{\\infty} \\int_0^{\\infty} r_1 r_2 e^{-(r_1^2 + r_2^2)} \\log r_1\n\\log\n\\max ( r_1 \\cos \\theta , r_2 \\sin \\theta)dr_1 dr_2. \\end{equation}\n\nNow change variables again with $r_1 = \\rho \\cos \\phi, r_2 = \\rho \\sin\n\\phi$\nto get \\begin{equation} G(\\cos\\theta) = 4 \\int_0^{\\infty}\n\\int_0^{\\pi\/2} \\rho^3 e^{-\\rho^2} \\log (\\rho \\cos \\phi) \\log\\max (\\rho\n\\cos \\phi \\cos \\theta , \\rho \\sin \\phi \\sin \\theta) \\cos\\phi\\sin \\phi\nd\\phi d\\rho\\;. \\end{equation} Since $$ \\log \\max( \\rho \\cos \\phi \\cos\n\\theta\n, \\rho \\sin \\phi \\sin \\theta) = \\log (\\rho \\cos \\phi \\cos \\theta) +\n\\log^+\n(\\tan \\phi \\tan \\theta)\\;,$$ we can write\n$G=G_1+G_2$, where\n\\begin{eqnarray} \\label{G1} G_1(\\cos\\theta)&=&4\n\\int_0^{\\infty} \\int_0^{\\pi\/2} \\rho^3 e^{-\\rho^2} \\log (\\rho \\cos\n\\phi)\n\\log ( \\rho \\cos \\phi \\cos \\theta) \\cos\\phi \\sin \\phi d\\phi\nd\\rho\\\\ \\label{G2} G_2(\\cos\\theta) &=&4\\int_0^{\\infty} \\int_{\\pi\/2 -\n\\theta}^{\\pi\/2}\\rho^3 e^{-\\rho^2} \\log (\\rho \\cos \\phi) \\log ( \\tan\n\\phi\n\\tan \\theta) \\cos\\phi\\sin \\phi d\\phi d\\rho\\;. \\end{eqnarray}\n\n{From} (\\ref{G1}),\n$G_1(\\cos\\theta)=C_1+C_2\\log\\cos\\theta$ and thus $$G_1(e^{-{\\frac{1}{2}} r^2})=\nC_1-{\\frac{1}{2}} C_2 r^2\\;,$$\nso that \\begin{equation}\\label{forgetG1}\\Delta G_1(e^{-{\\frac{1}{2}}\nr^2})=\\left(\\frac{d^2 }{ dr^2} +\\frac{2m-1}{ r}\\frac{d }{\ndr}\\right)(C_1-{\\frac{1}{2}}\nC_2 r^2)=-2mC_2\\;.\\end{equation}\n\nWe now evaluate $\\Delta G_2(e^{-{\\frac{1}{2}} r^2})$. Since the integrand in (\\ref{G2})\nvanishes when $\\phi=\\pi\/2 -\\theta$, we have\n$$\\frac{d }{ dr}G_2(\\cos\\theta)= 4 \\left(\\frac{d }{ dr}\\log\\tan\\theta \n\\right) \\int_0^{\\infty} \\int_{\\pi\/2 -\n\\theta}^{\\pi\/2}\\rho^3 e^{-\\rho^2} \\log (\\rho \\cos \\phi) \\cos\\phi\\sin\n\\phi\nd\\phi d\\rho\\;.$$ Substituting $\\tan^2 \\theta =\ne^{r^2}-1$, we have $$\\frac{d }{ dr}\\log\\tan\\theta =\\frac{r}{\n1-e^{-r^2}}\\;.$$\nThus $$\\frac{d }{ dr}G_2(e^{-{\\frac{1}{2}} r^2})= \\frac{4r}{\n1-e^{-r^2}}(I_1+I_2)\\;,$$ where\n$$I_1=\\int_0^{\\infty} \\int_{\\pi\/2 -\n\\theta}^{\\pi\/2}\\rho^3 e^{-\\rho^2} (\\log \\rho) \\cos\\phi\\sin \\phi\nd\\phi d\\rho= C \\sin^2 \\theta = C(1-e^{-r^2})\\;,$$\n$$I_2=\\int_0^{\\infty} \\int_{\\pi\/2 -\n\\theta}^{\\pi\/2}\\rho^3 e^{-\\rho^2} (\\log \\cos\\phi) \\cos\\phi\\sin \\phi\nd\\phi d\\rho\\;.$$\n\nWe compute\n\\begin{eqnarray*}I_2 &=& {\\frac{1}{2}}\\int_{\\pi\/2 -\n\\theta}^{\\pi\/2} (\\log \\cos\\phi) \\cos\\phi\\sin \\phi\nd\\phi \\ ={\\frac{1}{2}}\\int_0^{\\sin\\theta}t\\log tdt\\\\ \n&=&\\frac{1}{ 8}(\\sin^2\\theta\\log\\sin^2\\theta-\\sin^2\\theta)=\\frac{1}{\n8}(1-e^{-r^2})\\left[\\log (1-e^{-r^2})-1\\right] \\end {eqnarray*}\nThus \\begin{equation} \\label{d1} \\frac{d }{ dr}G_2(e^{-{\\frac{1}{2}} r^2})=\n\\frac{r}{\n2}\\log (1-e^{-r^2}) +C'r\\;.\\end{equation}\nHence by (\\ref{forgetG1}) and (\\ref{d1}), \\begin{eqnarray}\\Delta\nG(e^{-{\\frac{1}{2}}\nr^2})&=& -2mC_2+ \\left(\\frac{d }{ dr}+\\frac{2m-1}{\nr}\\right)\\left(\\frac{r}{\n2}\\log(1-e^{-r^2})+C'r\\right)\\nonumber \\\\&=&\nm\\log(1-e^{-r^2})+\\frac{r^2}{\ne^{r^2}-1} +C''\\;.\\label{d2}\\end{eqnarray}\n\n\\subsection{Pair correlation in dimension 1} \n\nIn dimension one, the pair correlation form is the same as the pair\ncorrelation measure. We first give our universal formula in the\none-dimensional case. 
Our formula agrees with that of\nBogomolny-Bohigas-Leboeuf \\cite{BBL} and Hannay \\cite{H} for ${\\operatorname{SU}}(2)$\npolynomials.\n\n\\begin{theo} \\label{Hformula}\nSuppose $\\dim M=1$. Then $$\\vec K^N_2(\\frac{z}{\\sqrt{N}},\\frac{w}{\\sqrt{N}})\\to\n\\vec K^\\infty_2(z,w)=\\left[\\pi\\delta_0(z-w)\n+H({\\textstyle{{\\frac{1}{2}}}}|z-w|^2)\\right]\\frac{ i}{ 2\\pi}\\partial\\dbar |z|^2 \\wedge\n\\frac{i}{ 2\\pi}\\partial\\dbar|w|^2\\;,$$ where\n$$H(t)=\n\\frac{(\\sinh^2 t + t^2) \\cosh t -2t \\sinh t }{ \\sinh^3 t}=t-\\frac{2}{\n9}\nt^3+\\frac{2}{ 45}t^5+O(t^7)\\;.$$ \\end{theo}\n\n\\begin{proof} Making the change of variables $\\zeta=z-w$, we have by \n(\\ref{UPCC}),\n$$\\begin{array}{lll}\\displaystyle {\\mathbf E}\\,\\left(\\widehat Z^N(z) \\otimes \\widehat Z^N(w)\\right)\n&\\to &\\displaystyle\\frac{i}{ 2\\pi}\\partial\\dbar|z|^2 \\wedge \\frac{i}{\n2\\pi}\\partial\\dbar|w|^2-\\frac {1}{ \\pi^2} \\d_z\\bar\\partial_z\\d_w\\bar\\partial_w G(e^{-{\\frac{1}{2}}\n|z-w|^2})\\\\[14pt] &&=\\ \\displaystyle \\left[1+{4}\\frac{\\d^2}{\\d z \\bar\\partial\nz}\\frac{\\d^2}{\\d w \\bar\\partial w} G(e^{-{\\frac{1}{2}} |z-w|^2}) \\right]\\frac{i}{\n2\\pi}\\partial\\dbar|z|^2 \\wedge\\frac {i}{ 2\\pi}\\partial\\dbar|w|^2\\\\[14pt] &&=\\ \\displaystyle\n\\left[1+{4}\\left(\\frac{\\d^2}{\\d\\zeta \\bar\\partial\\zeta}\\right)^2 G(e^{-{\\frac{1}{2}}\n|\\zeta|^2}) \\right]\\frac{i}{ 2\\pi}\\partial\\dbar|z|^2 \\wedge \\frac{i}{\n2\\pi}\\partial\\dbar|w|^2\\\\[14pt] &&=\\ \\displaystyle \\left[1+\\frac{1}{ 4}\\Delta^2\nG(e^{-{\\frac{1}{2}} r^2}) \\right]\\frac{i}{ 2\\pi}\\partial\\dbar|z|^2 \\wedge \\frac{i}{\n2\\pi}\\partial\\dbar|w|^2\n\\end{array}$$\nBy (\\ref{d2}) with $m=1$, we have \\begin{eqnarray*}\n\\Delta^2 G(e^{-{\\frac{1}{2}} r^2})\n&=&\\left(\\frac{d^2 }{ dr^2} +\\frac{1}{ r}\\frac{d }{ dr}\\right) \n\\left[\\log(1-e^{-r^2})+\\frac{r^2}{ e^{r^2}-1}\\right]\\\\\n&=&4\\pi\\delta_0 +\\frac{8(e^{r^2}-1)^2\n-16r^2e^{r^2}(e^{r^2}-1)+4r^4e^{r^2}(e^{r^2}+1)}{ (e^{r^2}-1)^3}\\;.\n\\end{eqnarray*}\nFinally,\n\\begin{eqnarray*} \\left[1+\\frac{1}{\n4}\\Delta^2 G(e^{-{\\frac{1}{2}}\nr^2}) \\right] &=& \\pi\\delta_0 +\\frac{(e^{r^2}+1)(e^{r^2}-1)^2\n-4r^2e^{r^2}(e^{r^2}-1)+r^4e^{r^2}(e^{r^2}+1)}{ (e^{r^2}-1)^3}\\\\\n&=&\\pi\\delta_0 +\\frac{(\\sinh^2{\\frac{1}{2}} r^2 +\\frac{1}{ 4}r^4)\\cosh {\\frac{1}{2}} r^2\n- r^2\n\\sinh {\\frac{1}{2}} r^2 }{ \\sinh^3 {\\frac{1}{2}} r^2}\\;.\n\\end{eqnarray*} \\end{proof}\n\n\\subsection{Pair correlation in higher dimensions}\n\nThe limit pair correlation measure is given by\n\\begin{eqnarray*} \\widetilde K^{\\infty}_2(z,w) &=&\n\\lim_{N\\to\\infty} N^{2(m-1)}\\widetilde K^N_2(\\frac{z}{\\sqrt{N}},\\frac{w}{\\sqrt{N}})\\\\ & =\n& \\vec K^{\\infty}_2(z,w)\\wedge \\frac{1}{(m-1)!}\\left(\\frac{i}{2}\\partial\\dbar\n|z|^2\\right)^{m-1}\\wedge \\frac{1}{(m-1)!}\\left(\\frac{i}{2}\\partial\\dbar\n|w|^2\\right)^{m-1}\\,.\\end{eqnarray*} (The scaling $N^{2(m-1)}$ comes from the\nfact that $N\\omega(\\frac{z}{\\sqrt{N}})\n=N(\\tau_{\\sqrt{N}})_*\\omega\\to\n\\frac{i}{2}\\partial\\dbar |z|^2$.)\nWe now compute $\\widetilde K^{\\infty}_2$\nfor the case of a\nmanifold of general dimension\n$m>1$. It is convenient to express this measure in terms of the\nexpected density of zeros \n\\begin{equation}\\label{expden} \\widetilde K^\\infty_1(z)=\n\\lim_{N\\to\\infty}N^{m-1}\\widetilde K^N_1(\\frac{z}{\\sqrt{N}})=\n\\frac{m}{\\pi}dV_{{\\mathbb C}^m}\n=\\frac{1}{\\pi(m-1)!}\\left(\\frac{i}{2}\\partial\\dbar|z|^2\\right)^m\\,. 
\\end{equation}\nWe have the following explicit universal formula for the limit pair\ncorrelation measure. In particular, it gives the scaling limit pair\ncorrelation for the zeros of ${\\operatorname{SU}}(m+1)$-polynomials.\n\n\\begin{theo} \\label{mformula} Suppose $\\dim M=m>1$. Then\n$$\\widetilde K^\\infty_2(z,w)=\n\\left[\\gamma_m({\\textstyle{{\\frac{1}{2}}}}|z-w|^2)\\right]\n\\widetilde K^\\infty_1(z)\\wedge \\widetilde K^\\infty_1(w)\n\\;,$$\nwhere \\begin{eqnarray*}\\gamma_m(t)&=& \n\\frac{\\left[{\\textstyle{\\frac{1}{2}}}(m^2+m)\\sinh^2t + t^2\\right]\\cosh t-(m+1)t\n\\sinh t }{ m^2 \\sinh^3 t} +\\frac{m-1}{ 2m}\\\\\n&=&\\frac{(m-1)}{ 2m}t^{-1}+\\frac{m-1}{ 2m}+\\frac{(m+2)(m+1)}{\n6m^2}t\\\\\n&&\\quad\\quad -\\frac{(m+4)(m+3)}{\n90m^2}t^3+\n{\\frac {\\left (m+6\\right )\\left (m+5\\right )}\n{945{m}^{2}}}{t}^{5} +O(t^7) \n\\;.\\end{eqnarray*}\\end{theo}\n\n\\begin{proof} By (\\ref{UPCC}) and\n(\\ref{d2}), again writing $\\zeta=z-w$ (except this time $\\zeta\\in{\\mathbb C}^m$),\n\\begin{eqnarray}&&\\widetilde K^\\infty_2(z,w)\\ = \\ \n\\left[1+\\frac{4}{m^2}\\sum_{j,k=1}^m \\frac{\\partial^2}{\\partial z_j\n\\partial \\bar{z}_j } \\frac{\\partial^2}{\\partial w_k \\partial \\bar{w}_k }\nG(e^{-{\\frac{1}{2}}|z-w|^2})\\right] \n\\widetilde K^\\infty_1(z)\\wedge \\widetilde K^\\infty_1(w)\\nonumber \\\\ &&\\qquad =\\ \\left[1+\n\\frac{1}{4 m^2}\n\\Delta^2_\\zeta G(e^{-{\\frac{1}{2}}|\\zeta|^2})\\right] \\widetilde K^\\infty_1(z)\\wedge \\widetilde\nK^\\infty_1(w)\\nonumber\\\\\n&&\\qquad =\\ \\left[ 1 + \\frac{1}{4m^2}\n\\left(\\frac{d^2}{dr^2} +\n\\frac{2m -1}{r} \\frac{d}{dr}\\right)\\left( m \\log(1 - e^{-r^2}) +\n\\frac{r^2}{e^{r^2} - 1}\\right)\\right]\\widetilde K^\\infty_1(z)\\wedge \\widetilde\nK^\\infty_1(w)\\;.\\label{pc-m}\n\\end{eqnarray} Computing the Laplacian in (\\ref{pc-m}) leads to the stated\nformula. \\end{proof}\n\nNote that if we substitute $m=1$ in the expression for $\\gamma_m(t)$, we\nobtain Hannay's function $H(t)$. However for the case $m>1$, the limit\nmeasure is absolutely continuous on ${\\mathbb C}^m\\times {\\mathbb C}^m$, whereas in the\none-dimensional case, there is a self-correlation delta measure. \n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nState-of-the-art morphological taggers require thousands of annotated\nsentences to train. For the majority of the world's languages,\nhowever, sufficient, large-scale annotation is not available and\nobtaining it would often be infeasible. Accordingly, an important road\nforward in low-resource NLP is the development of methods that allow\nfor the training of high-quality tools from smaller amounts of\ndata. In this work, we focus on transfer learning---we train a\nrecurrent neural tagger for a low-resource language jointly with a\ntagger for a related high-resource language. 
Forcing the models to\nshare character-level features among the languages allows large gains\nin accuracy when tagging the low-resource languages, while maintaining\n(or even improving) accuracy on the high-resource language.\n\nRecurrent neural networks constitute the state of the art for a myriad\nof tasks in NLP, e.g., multi-lingual\npart-of-speech tagging \\cite{plank-sogaard-goldberg:2016:P16-2},\nsyntactic parsing \\cite{dyer-EtAl:2015:ACL-IJCNLP,zeman-EtAl:2017:K17-3}, morphological\nparadigm completion \\cite{cotterell-EtAl:2016:SIGMORPHON,cotterell-conll-sigmorphon2017} and language\nmodeling \\cite{DBLP:conf\/interspeech\/SundermeyerSN12,melis2017state}; recently, such\nmodels have also improved morphological tagging \\cite{DBLP:journals\/corr\/HeigoldNG16,heigold2017}. In\naddition to increased performance over classical approaches, neural\nnetworks also offer a second advantage: they admit a clean paradigm\nfor multi-task learning. If the learned representations for all of\nthe tasks are embedded jointly into a shared vector space, the various\ntasks reap benefits from each other and often performance improves for\nall \\cite{DBLP:journals\/jmlr\/CollobertWBKKK11}. We exploit this idea\nfor language-to-language transfer to develop an\napproach for cross-lingual morphological tagging.\n\nWe experiment on 18 languages taken from four different language\nfamilies. Using the Universal Dependencies treebanks, we emulate a\nlow-resource setting for our experiments, e.g., we attempt to train a\nmorphological tagger for Catalan using primarily data from a related\nlanguage like Spanish. Our results demonstrate the successful transfer\nof morphological knowledge from the high-resource languages to the\nlow-resource languages without relying on an externally acquired\nbilingual lexicon or bitext. We consider both the single- and\nmulti-source transfer case and explore how similar two languages must\nbe in order to enable high-quality transfer of morphological taggers.\\footnote{While\n we only experiment with languages in the same family, we show that closer\n languages within that family are better candidates for transfer. We remark that future\n work should consider the viability of more distant language pairs.}\n\n\\section{Morphological Tagging}\\label{sec:morpho-tagging}\n\n\\begin{figure*}\n \\includegraphics[width=1.0\\textwidth]{sentence.pdf}\n \\caption{Example of a morphologically tagged sentence in Russian using the annotation\n scheme provided in the UD dataset.}\n \\label{fig:russian-sentence}\n\\end{figure*}\n\n\nMany languages in the world exhibit rich inflectional morphology: the\nform of individual words mutates to reflect the syntactic\nfunction. For example, the Spanish verb \\word{so\\~{n}ar} will appear\nas \\word{sue\\~{n}o} in the first person present singular, but\n\\word{so\\~{n}{\\'a}is} in the second person present plural, depending\non the bundle of syntaco-semantic attributes associated with the given\nform (in a sentential context). For concreteness, we list a more\ncomplete table of Spanish verbal inflections in\n\\cref{tab:paradigm}. \\ryan{Notation in table is different.} Note that\nsome languages, e.g. 
the Northeastern Caucasian language Archi, display a veritable\ncornucopia of potential forms with the size of the verbal paradigm\nexceeding 10,000 \\cite{archi}.\n\nStandard NLP annotation, e.g., the scheme in \\newcite{sylakglassman-EtAl:2015:ACL-IJCNLP},\nmarks forms in terms of {\\em universal} key--attribute pairs, e.g., the\nfirst person present singular is represented as {\\small\n $\\left[\\right.$\\att{pos}{V}, \\att{per}{1}, \\att{num}{sg},\n \\att{tns}{pres}$\\left.\\right]$}. \nThis bundle of key--attribute\npairs is typically termed a morphological tag and we may view the goal\nof morphological tagging as labeling each\nword in its sentential context with the appropriate tag \\cite{oflazer1994tagging,hajic-hladka:1998:ACLCOLING}. As the\npart-of-speech (POS) is a component of the tag, we may view\nmorphological tagging as a strict generalization of POS tagging, where we\nhave significantly refined the set of available tags. All of the experiments in\nthis paper make use of the universal morphological tag set available\nin the Universal Dependencies (UD) \\cite{nivre2016universal}. As an example, we have\nprovided a Russian sentence with its UD tagging in\n\\cref{fig:russian-sentence}.\n\n\n\\begin{table}\n\\small\n\\centering\n\\begin{tabular}{lllll} \\toprule\n & \\multicolumn{2}{c}{{\\sc present indicative}} & \\multicolumn{2}{c}{{\\sc past indicative}}\\\\ \\midrule\n & \\multicolumn{1}{c}{{\\sc singular}} & \\multicolumn{1}{c}{{\\sc plural}} & \\multicolumn{1}{c}{{\\sc singular}} & \\multicolumn{1}{c}{{\\sc plural}} \\\\ \\cmidrule(r){2-3} \\cmidrule(r){4-5}\n {\\sc 1} & \\word{sue\\~{n}o} & \\word{so\\~{n}amos} & \\word{so\\~{n}\\'{e}} & \\word{so\\~{n}amos} \\\\\n {\\sc 2} & \\word{sue\\~{n}as} & \\word{so\\~{n}\\'{a}is} & \\word{so\\~{n}aste} & \\word{so\\~{n}asteis} \\\\\n {\\sc 3} & \\word{sue\\~{n}a} & \\word{sue\\~{n}an} & \\word{so\\~{n}\\'{o}} & \\word{so\\~{n}aron} \\\\ \\bottomrule\n\\end{tabular} \n \\caption{Partial inflection table for the Spanish verb\n \\word{so\\~{n}ar}}\n \\label{tab:paradigm}\n\\end{table}\n\n\\paragraph{Transferring Morphology.}\\label{sec:transferring-morphology}\nThe transfer of morphology is arguably more dependent on the\nrelatedness of the languages in question than the transfer of other annotations in\nNLP such as POS and named entity recognition (NER). POS lends itself\nnicely to a universal annotation scheme\n\\cite{DBLP:conf\/lrec\/PetrovDM12} and traditional NER is limited to a\nsmall number of cross-linguistically compliant categories, e.g., {\\sc person} and {\\sc place}.\nEven universal dependency arc labels employ cross-lingual labels \\cite{nivre2016universal}.\n\nMorphology, on the other hand, typically requires more fine-grained\nannotation, e.g., grammatical case and tense. It is often the case\nthat one language will mark a semantic distinction in the form that\nanother does not mark overtly (or does not make at all). For example, the Hungarian noun overtly marks 17\ngrammatical cases and Slavic verbs typically distinguish two aspects\nthrough morphology, while English marks none of these distinctions. If the\nword form in the source language does not overtly mark a grammatical\ncategory that the target language marks, it is\nnigh-impossible to expect a successful transfer. For this reason, much\nof our work focuses on transfer between related\nlanguages---specifically exploring {\\em how} close two languages must\nbe for a successful transfer. 
Note that the language-specific nature\nof morphology does not contradict the universality of the annotation;\neach language may mark a different subset of categories, i.e., use\na different set of the universal keys and attributes, but\nthere is a single, universal set, from which\nthe key-attribute pairs are drawn. See \\newcite{newmeyer2007linguistic}\nfor a linguistic treatment of cross-lingual annotation.\n\n\\paragraph{Notation.}\nWe will discuss morphological tagging in terms of the following\nnotation. We will consider two (related)\nlanguages: a high-resource {\\em source} language $\\ell_s$ and a low-resource\n{\\em target} language $\\ell_t$. Each of these languages will have its own (potentially\noverlapping) set of morphological tags, denoted ${\\cal T}_s$ and\n${\\cal T}_t$, respectively. We will work with the union\nof both sets ${\\cal T} = {\\cal T}_s \\cup {\\cal T}_t$. An individual tag $m_i = \\left[k_1\\!\\!=\\!\\!v_1,\n \\ldots, k_M\\!\\!=\\!\\!v_M \\right] \\in {\\cal T}$ is comprised of\nuniversal keys and attributes, i.e., the pairs $(k_i, v_i)$ are completely\nlanguage-agnostic. In the case where a language\ndoes not mark a distinction, e.g., case on English nouns,\nthe corresponding keys are excluded from the tag. Typically, $|{\\cal T}|$ is large (see \\cref{tab:num-tags}).\nWe denote the set of training sentences for the high-resource source language\nas ${\\cal D}_s$ and the set of training sentences for the low-resource target language\nas ${\\cal D}_t$. In the experimental section, we will also consider\na multi-source setting where we have multiple high-resource\nlanguages, but, for ease of explication, we stick to the single-source\ncase in the development\nof the model.\n\n\n\\begin{figure*}\n \\begin{subfigure}[t]{0.57\\columnwidth}\n \\includegraphics[width=4.5cm]{tagger.pdf}\n \\caption{Vanilla architecture for neural morphological tagging.}\n \\end{subfigure}\n ~\n \\begin{subfigure}[t]{0.57\\columnwidth}\n \\includegraphics[width=4.5cm]{tagger-joint.pdf}\n \\caption{Joint morphological tagging and language identification.}\n \\end{subfigure}\n ~\n \\begin{subfigure}[t]{0.39\\columnwidth}\n \\includegraphics[height=3.5cm]{character-lstm.pdf}\n \\caption{Character-level Bi-LSTM embedder.}\n \\end{subfigure}\n ~\n \\begin{subfigure}[t]{0.39\\columnwidth}\n \\includegraphics[height=3.5cm]{character-lstm-lang.pdf}\n \\caption{Language-specific Bi-LSTM embedder.}\n \\end{subfigure}\n \\label{fig:architectures}\n \\caption{We depict four subarchitectures used in the models we develop in this work. Combining\n (a) with the character embeddings in (c) gives the vanilla morphological tagging architecture of \\newcite{heigold2017}.\n Combining (a) with (d) yields the language-universal softmax architecture and (b) and (c) yields\n our joint model for language identification and tagging.}\n\\end{figure*}\n\n\n\\section{Character-Level Neural Transfer}\nOur formulation of transfer learning builds on work in multi-task\nlearning\n\\cite{DBLP:journals\/ml\/Caruana97,DBLP:journals\/jmlr\/CollobertWBKKK11}.\nWe treat each individual language as a task and train a joint model\nfor all the tasks. We first discuss the current state of the art\nin morphological tagging: a character-level recurrent neural\nnetwork. After that, we explore three augmentations to the\narchitecture that allow for the transfer learning scenario. 
All of our\nproposals force the embedding of the characters for both the source and\nthe target language to share the same vector space, but involve\ndifferent mechanisms, by which the model may learn\nlanguage-specific features.\n\n\\subsection{Character-Level Neural Networks}\\label{sec:character-level}\nCharacter-level neural networks currently constitute the state\nof the art in morphological tagging \\cite{heigold2017}. We\ndraw on previous work in defining a conditional\ndistribution over taggings ${\\boldsymbol t}$ for a sentence ${\\boldsymbol w}$ of length $|{\\boldsymbol w}| = N$ as\n\\begin{equation}\n p_{{\\boldsymbol \\theta}}({{\\boldsymbol t}} \\mid {{\\boldsymbol w}}) = \\prod_{i=1}^N p_{{\\boldsymbol \\theta}}(t_i \\mid {{\\boldsymbol w}}), \\label{eq:factorization}\n\\end{equation}\nwhich may be seen as a $0^\\text{th}$ order conditional random field (CRF)\n\\cite{DBLP:conf\/icml\/LaffertyMP01} with parameter vector ${{\\boldsymbol \\theta}}$.\\footnote{The parameter\n vector ${\\boldsymbol \\theta}$ is a vectorization of all the parameters discussed below.} Importantly, this factorization of\nthe distribution $p_{{\\boldsymbol \\theta}}({{\\boldsymbol t}} \\mid {{\\boldsymbol w}})$ also allows for efficient\nexact decoding and marginal inference in ${\\cal\n O}(N)$-time, but at the cost of not admitting any explicit interactions\nin the output structure, i.e., between adjacent tags.\\footnote{As an\n aside, it is quite interesting that a model with the factorization\n in \\cref{eq:factorization} outperforms the {\\sc MarMoT} model\n \\cite{mueller-schmid-schutze:2013:EMNLP}, which focused on modeling\n higher-order interactions between the morphological tags, e.g., they employ\n up to a (pruned) $3^\\text{rd}$ order CRF. That such a\n model achieves state-of-the-art performance indicates, however, that\n richer source-side features, e.g., those extracted by our\n character-level neural architecture, are more important for\n morphological tagging than higher-order tag interactions, which come\n with the added unpleasantness of exponential (in the order) decoding.}\nWe parameterize the distribution over tags at each time step as\n\\begin{equation}\n p_{{\\boldsymbol \\theta}}(t_i \\mid {{\\boldsymbol w}}) = \\text{softmax}\\left(W {\\boldsymbol e}_i + {\\boldsymbol b}\n \\right), \\label{eq:tagger}\n\\end{equation}\nwhere $W \\in \\mathbb{R}^{|{\\cal T}| \\times n}$ is an embedding matrix, ${\\boldsymbol b} \\in\n\\mathbb{R}^{|{\\cal T}|}$ is a bias vector and positional embeddings ${\\boldsymbol e}_i$\\footnote{Note\nthat $|{\\boldsymbol e}_i| = n$; see \\cref{sec:details} for the exact values used in the experimentation.} are\ntaken from a concatenation of the output of two long short-term memory recurrent neural networks (LSTMs) \\cite{hochreiter1997long}, folded forward and\nbackward, respectively, over a sequence of input vectors. \nThis constitutes a bidirectional LSTM \\cite{DBLP:journals\/nn\/GravesS05}. We define\nthe positional embedding vector as follows\n\\begin{equation}\n {\\boldsymbol e}_i = \\left[{\\text{LSTM}}({\\boldsymbol v}_{1:i});\n {\\text{LSTM}}({\\boldsymbol v}_{i+1:N})\\right], \\label{eq:embedder-e}\n\\end{equation}\nwhere each ${\\boldsymbol v}_i \\in \\mathbb{R}^n$ is, itself, a word\nembedding. Note that the function $\\text{LSTM}$ returns\nthe {\\em last} final hidden state vector\nof the network. This architecture is the {\\em context} bidirectional\nrecurrent neural network\nof \\newcite{plank-sogaard-goldberg:2016:P16-2}. 
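For reference, the following is a minimal PyTorch sketch (ours; the actual implementation used Torch 7, and the dimensions and tag-set size here are purely illustrative) of the model in \cref{eq:factorization,eq:tagger,eq:embedder-e}: a bidirectional LSTM is folded over the word vectors, and a per-position softmax over ${\cal T}$ is read off the concatenated hidden states.
\begin{verbatim}
import torch
import torch.nn as nn

class ContextTagger(nn.Module):
    """0th-order tagger: softmax(W e_i + b) over bi-LSTM states e_i."""

    def __init__(self, n_tags: int, dim: int = 128):
        super().__init__()
        # Forward and backward LSTMs over the word vectors; the
        # concatenated states play the role of the embeddings e_i.
        self.bilstm = nn.LSTM(dim, dim, batch_first=True,
                              bidirectional=True)
        self.out = nn.Linear(2 * dim, n_tags)  # W and b of Eq. (2)

    def forward(self, v: torch.Tensor) -> torch.Tensor:
        # v: (batch, N, dim) word vectors; returns per-token log-probs.
        e, _ = self.bilstm(v)          # e: (batch, N, 2 * dim)
        return self.out(e).log_softmax(dim=-1)

tagger = ContextTagger(n_tags=200)
log_p = tagger(torch.randn(1, 6, 128))   # a toy 6-token "sentence"
print(log_p.shape)                       # torch.Size([1, 6, 200])
\end{verbatim}
Training then simply minimizes the summed per-token negative log-likelihood, in line with the factorization in \cref{eq:factorization}; the cross-lingual variants discussed below only change how the word vectors and the output layer are constructed.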
Finally, we derive each word embedding vector ${\\boldsymbol v}_i$\nfrom a character-level bidirectional LSTM embedder. Namely, we define each word embedding\nas the concatenation\n\\begin{align}\n {\\boldsymbol v}_i = &\\left[ {\\text{LSTM}}\\left(\\langle c_{i_1}, \\ldots, \n c_{i_{M_i}}\\rangle \\right); \\right. \\\\\n &\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\, \\left. {\\text{LSTM}} \\left(\\langle c_{i_{M_i}}, \\ldots, c_{i_1}\\rangle \\right) \\right]. \\nonumber \\label{eq:embedder-v} \n\\end{align}\nIn other words, we run a bidirectional LSTM over the\ncharacter stream. This bidirectional LSTM is the {\\em sequence} bidirectional recurrent neural network of \\newcite{plank-sogaard-goldberg:2016:P16-2}. Note a concatenation of the sequence of character symbols $\\langle c_{i_1}, \\ldots, c_{i_{M_i}} \\rangle$ results in the word string $w_i$. Each of\nthe $M_i$ characters $c_{i_k}$ is a member of the set $\\Sigma$. We take $\\Sigma$ to be the union of sets of characters in the\nlanguages considered.\n\nWe direct the reader to\n\\newcite{heigold2017} for a more in-depth discussion of\nthis and various additional architectures for the computation of ${\\boldsymbol v}_i$; the\narchitecture we have presented in \\cref{eq:embedder-v} is competitive\nwith the best performing setting in Heigold et al.'s study.\n\n\n\\subsection{Cross-Lingual Morphological Transfer as Multi-Task Learning}\\label{sec:multi}\nCross-lingual morphological tagging may be formulated as a\nmulti-task learning problem. We seek to learn a set of shared\ncharacter embeddings for taggers in both languages together through\noptimization of a joint loss function that combines the high-resource\ntagger and the low-resource one. The first loss function\nwe consider is the following:\n\\begin{align}\n {\\cal L}_{\\textit{multi}}({\\boldsymbol \\theta}) = -\\!\\!\\!\\sum_{({\\boldsymbol t}, {\\boldsymbol w}) \\in\n {\\cal D}_s} \\!\\!\\!\\! \\log&\\, p_{{\\boldsymbol \\theta}} ({\\boldsymbol t} \\mid {\\boldsymbol w} , \\ell_s ) \\\\[-5\\jot]\n \\nonumber & -\\!\\!\\!\\!\\sum_{({\\boldsymbol t}, {\\boldsymbol w}) \\in {\\cal D}_t} \\!\\!\n \\log p_{{\\boldsymbol \\theta}}\\left({\\boldsymbol t} \\mid {\\boldsymbol w} , \\ell_t \\right).\n\\end{align}\nCrucially, our cross-lingual objective forces both taggers to share\npart of the parameter vector ${\\boldsymbol \\theta}$, which allows it to represent\nmorphological regularities between the two languages in a common\nembedding space and, thus, enables transfer of knowledge. This is\nno different from monolingual multi-task settings, e.g., jointly\ntraining a chunker and a tagger for the transfer of syntactic information\n\\cite{DBLP:journals\/jmlr\/CollobertWBKKK11}. We point out that, in\ncontrast to our approach, almost all multi-task transfer learning,\ne.g., for dependency parsing \\cite{DBLP:conf\/aaai\/GuoCYWL16}, has shared word-level embeddings rather\nthan character-level embeddings. See \\cref{sec:related-work} for a\nmore complete discussion.\n\nWe consider two parameterizations of this distribution $p_{{\\boldsymbol \\theta}}(t_i\n\\mid {\\boldsymbol w}, \\ell)$. 
First, we modify the initial character-level LSTM
embedding such that it also encodes the identity of the language.
Second, we modify the softmax layer, creating a
language-specific softmax.


\paragraph{Language-Universal Softmax.}\label{par:arch1}
Our first architecture has one softmax, as in \cref{eq:tagger}, over all
morphological tags in ${\cal T}$ (shared among all the languages).
To allow the architecture to encode morphological features
specific to one language, e.g., the third person present plural
ending in Spanish is {\em -an}, but {\em -{\~a}o} in Portuguese,
we modify the creation of the character-level embeddings.
Specifically, we augment the character alphabet $\Sigma$ with a
distinguished symbol that indicates the language: $\text{{\tt id}}_\ell$. We
then prepend and append this symbol to the character stream
of every word before feeding the characters into the bidirectional LSTM.
Thus, we arrive at the new {\em language-specific} word embeddings,
\begin{align}
  {\boldsymbol v}^{\ell}_i = \bigl[ &{\text{LSTM}}\left(\langle \text{{\tt id}}_\ell, c_{i_1}, \ldots,
        c_{i_{M_i}}, \text{{\tt id}}_\ell \rangle \right); \nonumber \\
   &{\text{LSTM}} \left(\langle \text{{\tt id}}_\ell, c_{i_{M_i}}, \ldots, c_{i_1}, \text{{\tt id}}_\ell \rangle \right) \bigr]. \label{eq:lang-embedder-v}
\end{align}
This model creates a language-specific embedding vector ${\boldsymbol v}^{\ell}_i$, but the
individual embeddings for a given character are shared among
the languages jointly trained on. The remainder
of the architecture is held constant.


\paragraph{Language-Specific Softmax.}\label{par:arch2}
Next, inspired by the architecture of \newcite{heigold2013multilingual}, we consider a language-specific
softmax layer, i.e., we define a new output layer for every
language:
\begin{equation}
  p_{{\boldsymbol \theta}}\left(t_i \mid {\boldsymbol w}, \ell \right) = \text{softmax}\left(W_{\ell} {\boldsymbol e}_i + {\boldsymbol b}_{\ell}\right),
  \label{eq:lang-specific}
\end{equation}
where $W_{\ell} \in \mathbb{R}^{|{\cal T}| \times n}$ and ${\boldsymbol b}_{\ell} \in \mathbb{R}^{|{\cal T}|}$ are now {\em language-specific}.
In this architecture, the embeddings ${\boldsymbol e}_i$ are the same for all
languages---the model has to learn language-specific behavior exclusively through the
output softmax of the tagging LSTM.

\paragraph{Joint Morphological Tagging and Language Identification.}\label{sec:joint-arch}
The third model we exhibit is a joint architecture for tagging and
language identification. We consider the following loss function:
\begin{align}
  {\cal L}_{\textit{joint}} ({\boldsymbol \theta}) = -\!\!\!\sum_{({\boldsymbol t}, {\boldsymbol w}) \in {\cal D}_s} \!\!\! \log\, & p_{{\boldsymbol \theta}}(\ell_s, {\boldsymbol t} \mid {\boldsymbol w}) \\[-5\jot] \nonumber
   &-\!\sum_{({\boldsymbol t}, {\boldsymbol w}) \in {\cal D}_t} \!\!\!\!\! \log p_{{\boldsymbol \theta}}\left(\ell_t, {\boldsymbol t} \mid {\boldsymbol w}\right),
\end{align}
where we factor the joint distribution as
\begin{align}
  p_{{\boldsymbol \theta}}\left(\ell, {\boldsymbol t} \mid {\boldsymbol w} \right) &= p_{{\boldsymbol \theta}}\left(\ell \mid {\boldsymbol w} \right) \cdot p_{{\boldsymbol \theta}}\left({\boldsymbol t} \mid {\boldsymbol w}, \ell \right).
\n\\end{align}\nJust as before, we define $p_{{\\boldsymbol \\theta}}\\left({\\boldsymbol t} \\mid {\\boldsymbol w}, \\ell \\right)$ above as in \\cref{eq:lang-specific} and\nwe define\n\\begin{equation}\n p_{{\\boldsymbol \\theta}}(\\ell \\mid {\\boldsymbol w}) = \\text{softmax}\\left(U\\tanh(V{\\boldsymbol e}_i)\\right),\n\\end{equation}\nwhich is a multi-layer perceptron with a binary softmax (over the two languages)\nas an output layer; we have added the additional parameters $V \\in\n\\mathbb{R}^{2 \\times n}$ and $U \\in \\mathbb{R}^{2 \\times 2}$. In the\ncase of multi-source transfer, this is a softmax over the set of\nlanguages.\n\n\\begin{table}\n \\begin{adjustbox}{width=1.\\columnwidth}\n \\begin{tabular}{llll llll} \\toprule\n \\multicolumn{4}{c}{Romance} & \\multicolumn{4}{c}{Slavic} \\\\ \\cmidrule(r){1-4} \\cmidrule(r){5-8}\n lang & train & dev & test & lang & train & dev & test \\\\ \\midrule\n {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{ES-CT.pdf}}}\\,(ca) & 13123 & 1709 & 1846 & {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{BG.pdf}}}\\,(bg) & 8907 & 1115 & 1116 \\\\\n {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{ES.pdf}}}\\,(es) & 14187 & 1552 & 274 & {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{CZ.pdf}}}\\,(cs) & 61677 & 9270 & 10148 \\\\\n {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{FR.pdf}}}\\,(fr) & 14554 & 1596 & 298 & {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{PL.pdf}}}\\,(pl) & 6800 & 7000 & 727 \\\\\n {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{IT.pdf}}}\\,(it) & 12837 & 489 & 489 & {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{RU.pdf}}}\\,(ru) & 4029 & 502 & 499 \\\\\n {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{PT.pdf}}}\\,(pt) & 8800 & 271 & 288 & {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{SK.pdf}}}\\,(sk) & 8483 & 1060 & 1061 \\\\\n {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{RO.pdf}}}\\,(ro) & 7141 & 1191 & 1191 & {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{UA.pdf}}}\\,(uk) & 200 & 30 & 25 \\\\ \\toprule\n \\multicolumn{4}{c}{Germanic} & \\multicolumn{4}{c}{Uralic} \\\\ \\cmidrule(r){1-4} \\cmidrule(r){5-8}\n lang & train & dev & test & lang & train & dev & test \\\\ \\midrule\n {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{DK.pdf}}}\\,(da) & 4868 & 322 & 322 & {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{EE.pdf}}}\\,(et) & 14510 & 1793 & 1806 \\\\\n {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{NO.pdf}}}\\,(no) & 15696 & 2410 & 1939 & {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{FI.pdf}}}\\,(fi) & 12217 & 716 & 648 \\\\\n {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{SE.pdf}}}\\,(sv) & 4303 & 504 & 1219 & {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{HU.pdf}}}\\,(hu) & 1433 & 179 & 188 \\\\ \\bottomrule\n \\end{tabular}\n \\end{adjustbox}\n \\caption{Number of tokens in each of the train, development and test splits (organized by language family).}\n \\label{tab:lang-size}\n\\end{table}\n\n\\paragraph{Comparative 
Discussion.}
The first two architectures, discussed in \cref{par:arch1} and \cref{par:arch2}, represent two
possibilities for a multi-task objective, where we condition on the
language of the sentence. The first integrates this knowledge at a
lower level and the second at a higher level. The third architecture,
discussed in \cref{sec:joint-arch}, takes a different tack---rather
than conditioning on the language, it predicts it. The joint model
offers one interesting advantage over the two architectures
proposed. Namely, it allows us to perform a morphological analysis on
a sentence where the language is unknown. This effectively
removes the need for the language identification step that is normally performed
early in the NLP pipeline, and is useful in conditions
where the language to be tagged may not be known {\em a priori},
e.g., when tagging social media data.

While there are certainly more complex architectures one could
engineer for the task, we believe we have found a relatively diverse
sampling, enabling an interesting experimental comparison. Indeed, it
is an important empirical question which architectures are most
appropriate for transfer learning. Since transfer learning affords the
opportunity to reduce the sample complexity of the ``data-hungry''
neural networks that currently dominate NLP research, finding a good
solution for cross-lingual transfer in state-of-the-art neural models
will likely be a boon for low-resource NLP in general.


\section{Experiments}\label{sec:experiments}
Empirically, we ask three questions of our
architectures. i) How well can we transfer morphological tagging
models from high-resource languages to low-resource languages in each architecture? (Does
one of the three outperform the others?) ii)
How much annotated data in the low-resource language do we need? iii)
How closely related do the languages need to be to get good transfer?

\subsection{Experimental Languages}
We experiment with four language families:
Romance (Indo-European), Northern Germanic (Indo-European), Slavic (Indo-European)
and Uralic.
In the Romance
sub-grouping of the wider Indo-European family, we experiment on Catalan (ca), French (fr), Italian (it),
Portuguese (pt), Romanian (ro) and Spanish (es). In the Northern Germanic
family, we experiment on Danish (da), Norwegian (no) and Swedish (sv).
In the Slavic family, we experiment on Bulgarian (bg), Czech (cs), Polish (pl),
Russian (ru), Slovak (sk) and Ukrainian (uk).
Finally, in the Uralic\nfamily we experiment on Estonian (et), Finnish (fi) and Hungarian (hu).\n\\begin{table}\n \\begin{adjustbox}{width=1.\\columnwidth}\n \\begin{tabular}{llllllll} \\toprule\n \\multicolumn{2}{c}{Romance} & \\multicolumn{2}{c}{Slavic} & \\multicolumn{2}{c}{Germanic} & \\multicolumn{2}{c}{Uralic} \\\\ \\cmidrule(r){1-2} \\cmidrule(r){3-4} \\cmidrule(r){5-6} \\cmidrule(r){7-8}\n lang & $|{\\cal T}|$ & lang & $|{\\cal T}|$ & lang & $|{\\cal T}|$ & lang & $|{\\cal T}|$ \\\\ \\midrule\n {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{ES-CT.pdf}}}\\,(ca) & 172 & {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{BG.pdf}}}\\,(bg) & 380 & {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{DK.pdf}}}\\,(da) & 124 & {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{EE.pdf}}}\\,(et) & 654 \\\\\n {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{ES.pdf}}}\\,(es) & 232 & {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{CZ.pdf}}}\\,(cs) & 2282 & {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{NO.pdf}}}\\,(no) & 169 & {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{FI.pdf}}}\\,(fi) & 1440 \\\\\n {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{FR.pdf}}}\\,(fr) & 142 & {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{PL.pdf}}}\\,(pl) & 774 & {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{SE.pdf}}}\\,(sv) & 155 & {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{HU.pdf}}}\\,(hu) & 634\\\\\n {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{IT.pdf}}}\\,(it) & 179 & {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{RU.pdf}}}\\,(ru) & 520 \\\\\n {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{PT.pdf}}}\\,(pt) & 375 & {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{SK.pdf}}}\\,(sk) & 597 \\\\\n {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{RO.pdf}}}\\,(ro) & 367 & {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{UA.pdf}}}\\,(uk) & 220 \\\\ \\bottomrule\n \\end{tabular}\n \\end{adjustbox}\n \\caption{Number of unique morphological tags\n for each of the experimental languages (organized by language family).}\n \\label{tab:num-tags}\n\\end{table}\n\n\n\\subsection{Datasets}\nWe use the morphological tagging datasets provided by the Universal Dependencies (UD) treebanks (the concatenation\nof the $4^\\text{th}$ and $6^\\text{th}$ columns of the file\nformat)\n\\cite{nivre2016universal}. We list the size of the training,\ndevelopment and test splits of the UD treebanks we used in\n\\cref{tab:lang-size}. Also, we list the number of unique morphological\ntags in each language in \\cref{tab:num-tags}, which serves as an\napproximate measure of the morphological complexity each language\nexhibits. Crucially, the data are annotated in a cross-linguistically\nconsistent manner, such that words in the different languages that\nhave the same syntacto-semantic function have the same bundle of tags\n(see \\cref{sec:morpho-tagging} for a discussion). 
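For concreteness, the following is a minimal sketch (ours; the file name is a placeholder and the exact tag-string format used in our preprocessing may differ) of reading the UPOS and FEATS columns of a CoNLL-U file and joining them into a single morphological tag per token.
\begin{verbatim}
def read_conllu_tags(path):
    """Yield one list of (form, tag) pairs per sentence."""
    sentence = []
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if not line:                  # blank line ends a sentence
                if sentence:
                    yield sentence
                sentence = []
                continue
            if line.startswith("#"):      # skip comment lines
                continue
            cols = line.split("\t")
            if "-" in cols[0] or "." in cols[0]:
                continue                  # skip multiword tokens, empty nodes
            form, upos, feats = cols[1], cols[3], cols[5]
            sentence.append((form, upos + "|" + feats))
    if sentence:
        yield sentence

# e.g., sentences = list(read_conllu_tags("ru-ud-train.conllu"))
\end{verbatim}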
Potentially, further gains would be\npossible by using a more universal scheme, e.g., the {\\sc UniMorph} scheme.\n\n\n\\subsection{Baselines}\\label{sec:baselines}\nWe consider two baselines in our work. First, we consider\nthe {\\sc MarMoT} tagger \\cite{mueller-schmid-schutze:2013:EMNLP},\nwhich is currently the best performing non-neural model.\nThe source code for {\\sc MarMoT} is freely available online,\\footnote{\\url{http:\/\/cistern.cis.lmu.de\/marmot\/}}\nwhich allows us to perform fully controlled experiments\nwith this model.\nSecond, we consider the alignment-based projection\napproach of \\newcite{buys-botha:2016:P16-1}.\\footnote{We do\n not have access to the code as the model was developed in industry,\n so we compare to the numbers reported in the original paper, as well\n as additional numbers provided to us by the first author in a personal communication. The numbers will not be, strictly speaking, comparable. However,\n we hope they provide insight into the relative performance of the tagger.} We\ndiscuss each of the two baselines in turn. \n\n\\begin{table*}[t]\n \\centering\n \\begin{subtable}[b]{2.0\\columnwidth}\n \\centering\n \\begin{adjustbox}{width=1.\\columnwidth}\n \\begin{tabular}{clcccccccccccc} \\toprule\n & & \\multicolumn{12}{c}{target language} \\\\\n & & \\multicolumn{6}{c}{$|{\\cal D}_t| = 100$} & \\multicolumn{6}{c}{$|{\\cal D}_t| = 1000$} \\\\ \\cmidrule(r){3-8} \\cmidrule(r){9-14}\n&& {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{ES-CT.pdf}}}\\,(ca) & {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{ES.pdf}}}\\,(es) & {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{FR.pdf}}}\\,(fr) & {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{IT.pdf}}}\\,(it) & {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{PT.pdf}}}\\,(pt) & {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{RO.pdf}}}\\,(ro) & {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{ES-CT.pdf}}}\\,(ca) & {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{ES.pdf}}}\\,(es) & {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{FR.pdf}}}\\,(fr) & {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{IT.pdf}}}\\,(it) & {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{PT.pdf}}}\\,(pt) & {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{RO.pdf}}}\\,(ro) \\\\ \\midrule\n \\multirow{6}{*}{\\rotatebox{90}{source language}}\n& {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{ES-CT.pdf}}}\\,(ca) & --- & 87.9\\% & 84.2\\% & 84.6\\% & 81.1\\% & 67.4\\% & --- & 94.1\\% & {\\bf 93.5\\%} & 93.1\\% & {\\bf 89.0\\%} & 89.8\\% \\\\\n& {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{ES.pdf}}}\\,(es) & 88.9\\% & --- & 85.5\\% & 85.6\\% & 81.8\\% & 69.5\\% & {\\bf 95.5\\%} & --- & {\\bf 93.5\\%} & 93.5\\% & 88.9\\% & 89.7\\% \\\\\n& {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{FR.pdf}}}\\,(fr) & 88.3\\% & 87.0\\% & --- & 83.6\\% & 79.5\\% & {\\bf 69.9\\%} & 95.4\\% & 93.8\\% & --- & 93.3\\% & 88.6\\% & 89.7\\% \\\\\n& {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{IT.pdf}}}\\,(it) & 88.4\\% & 87.8\\% & 84.2\\% & --- & 
80.6\\% & 69.1\\% & 95.4\\% & 94.0\\% & 93.3\\% & --- & 88.7\\% & {\\bf 90.3\\%} \\\\\n& {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{PT.pdf}}}\\,(pt) & 88.4\\% & 88.9\\% & 85.1\\% & 84.7\\% & --- & 69.6\\% & 95.3\\% & {\\bf 94.2\\%} & {\\bf 93.5\\%} & 93.6\\% & --- & 89.8\\% \\\\\n& {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{RO.pdf}}}\\,(ro) & 87.6\\% & 87.2\\% & 85.0\\% & 84.4\\% & 79.9\\% & --- & 95.3\\% & 93.6\\% & 93.4\\% & 93.2\\% & 88.5\\% & --- \\\\ \\midrule\n & multi-source & {\\bf 89.8\\%} & {\\bf 90.9\\%} & {\\bf 86.6\\%} & {\\bf 86.8\\%} & {\\bf 83.4\\%} & 67.5\\% & 95.4\\% & {\\bf 94.2\\%} & 93.4\\% & {\\bf 93.8\\%} & 88.7\\% & 88.9\\% \\\\ \\bottomrule\n \\end{tabular}\n \\end{adjustbox}\n \\caption{Results for the Romance languages.}\n \\end{subtable}\n \\par\\bigskip\n \\begin{subtable}[b]{2.0\\columnwidth}\n \\centering\n \\begin{adjustbox}{width=1.0\\columnwidth}\n \\begin{tabular}{clcccccccccccccc} \\toprule\n & & \\multicolumn{12}{c}{target language} \\\\\n & & \\multicolumn{6}{c}{$|{\\cal D}_t| = 100$} & \\multicolumn{6}{c}{$|{\\cal D}_t| = 1000$} \\\\ \\cmidrule(r){3-8} \\cmidrule(r){9-14}\n & & {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{BG.pdf}}}\\,(bg) & {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{CZ.pdf}}}\\,(cs) & {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{PL.pdf}}}\\,(pl) & {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{RU.pdf}}}\\,(ru) & {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{SK.pdf}}}\\,(sk) & {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{UA.pdf}}}\\,(uk) & {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{BG.pdf}}}\\,(bg) & {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{CZ.pdf}}}\\,(cs) & {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{PL.pdf}}}\\,(pl) & {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{RU.pdf}}}\\,(ru) & {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{SK.pdf}}}\\,(sk) & {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{UA.pdf}}}\\,(uk) \\\\ \\midrule\n \\multirow{6}{*}{\\rotatebox{90}{source language}}\n &{\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{BG.pdf}}}\\,(bg) & --- & 47.4\\% & 44.7\\% & 67.3\\% & 39.7\\% & 57.3\\% & --- & 73.7\\% & 75.0\\% & 84.1\\% & 70.9\\% & 72.0\\% \\\\\n &{\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{CZ.pdf}}}\\,(cs) & 57.8\\% & --- & 56.5\\% & 62.6\\% & 62.6\\% & 54.0\\% & 80.9\\% & --- & {\\bf 80.0\\%} & 84.1\\% & 78.1\\% & 64.7\\% \\\\\n &{\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{PL.pdf}}}\\,(pl) & 54.3\\% & 54.0\\% & --- & 59.3\\% & 57.8\\% & 48.0\\% & 78.3\\% & 74.9\\% & --- & {\\bf 84.2\\%} & 75.9\\% & 57.3\\% \\\\\n &{\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{RU.pdf}}}\\,(ru) & {\\bf 68.8\\%} & 48.6\\% & 47.4\\% & --- & 46.5\\% & {\\bf 60.7\\%} & {\\bf 83.1\\%} & 73.6\\% & 76.0\\% & --- & 71.4\\% & {\\bf 72.7\\%} \\\\\n &{\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{SK.pdf}}}\\,(sk) & 55.2\\% & 57.4\\% & 54.8\\% & 61.2\\% & --- & 49.3\\% & 
77.6\\% & {\\bf 76.3\\%} & 78.4\\% & 83.9\\% & --- & 60.7\\% \\\\\n &{\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{UA.pdf}}}\\,(uk) & 44.1\\% & 36.0\\% & 34.4\\% & 43.2\\% & 30.0\\% & --- & 67.3\\% & 64.8\\% & 66.9\\% & 76.1\\% & 56.0\\% & --- \\\\ \\midrule\n & multi-source& 64.5\\% & {\\bf 57.9\\%} & {\\bf 57.0\\%} & {\\bf 64.4\\%} & {\\bf 64.8\\%} & 58.7\\% & 81.6\\% & 74.8\\% & 78.1\\% & 83.1\\% & {\\bf 79.6\\%} & 69.3\\% \\\\ \\bottomrule\n \\end{tabular}\n \\end{adjustbox}\n \\caption{Results for the Slavic languages.}\n \\end{subtable}\n \\par\\bigskip\n \\begin{subtable}[b]{1.0\\columnwidth}\n \\centering\n \\begin{adjustbox}{width=1.\\columnwidth}\n \\begin{tabular}{clcccccc} \\toprule\n & & \\multicolumn{6}{c}{target language} \\\\\n & & \\multicolumn{3}{c}{$|{\\cal D}_t| = 100$} & \\multicolumn{3}{c}{$|{\\cal D}_t| = 1000$} \\\\ \\cmidrule(r){3-5} \\cmidrule(r){6-8}\n& & {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{DK.pdf}}}\\,(da) & {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{NO.pdf}}}\\,(no) & {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{SE.pdf}}}\\,(sv) & {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{DK.pdf}}}\\,(da) & {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{NO.pdf}}}\\,(no) & {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{SE.pdf}}}\\,(sv) \\\\ \\midrule\n \\multirow{3}{*}{\\rotatebox{90}{source}}\n & {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{DK.pdf}}}\\,(da) & --- & 77.6\\% & 73.1\\% & --- & 90.1\\% & 90.0\\% \\\\\n & {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{NO.pdf}}}\\,(no) & 83.1\\% & --- & 75.7\\% & 93.1\\% & --- & 90.5\\% \\\\\n & {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{SE.pdf}}}\\,(sv) & 81.4\\% & 76.5\\% & --- & 92.6\\% & 90.2\\% & --- \\\\ \\midrule\n& multi-source & {\\bf 87.8\\%} & {\\bf 82.3\\%} & {\\bf 77.2\\%} & {\\bf 93.9\\%} & {\\bf 91.2\\%} & {\\bf 90.9\\%} \\\\ \\bottomrule\n \\end{tabular}\n \\end{adjustbox}\n \\caption{Results for the Northern Germanic languages.}\n \\end{subtable}\n ~\n \\begin{subtable}[b]{1.0\\columnwidth}\n \\centering\n \\begin{adjustbox}{width=1.\\columnwidth}\n \\begin{tabular}{clcccccc} \\toprule\n & & \\multicolumn{6}{c}{target language} \\\\\n & & \\multicolumn{3}{c}{$|{\\cal D}_t| = 100$} & \\multicolumn{3}{c}{$|{\\cal D}_t| = 1000$} \\\\ \\cmidrule(r){3-5} \\cmidrule(r){6-8}\n & & {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{EE.pdf}}}\\,(et) & {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{FI.pdf}}}\\,(fi) & {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{HU.pdf}}}\\,(hu) & {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{EE.pdf}}}\\,(et) & {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{FI.pdf}}}\\,(fi) & {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{HU.pdf}}}\\,(hu) \\\\ \\midrule\n \\multirow{3}{*}{\\rotatebox{90}{source}}\n & {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{EE.pdf}}}\\,(et) & --- & {\\bf 60.9\\%} & {\\bf 60.4\\%} & --- & {\\bf 85.1\\%} & 74.8\\% \\\\\n & 
{\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{FI.pdf}}}\\,(fi) & {\\bf 60.1\\%} & --- & 60.3\\% & {\\bf 82.3\\%} & --- & {\\bf 75.2\\%} \\\\\n & {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{HU.pdf}}}\\,(hu) & 47.1\\% & 48.3\\% & --- & 76.9\\% & 81.2\\% & --- \\\\ \\midrule\n& multi-source & 54.7\\% & 55.3\\% & 55.4\\% & 78.7\\% & 81.8\\% & 73.3\\% \\\\ \\bottomrule\n \\end{tabular}\n \\end{adjustbox}\n \\caption{Results for the Uralic languages.}\n \\end{subtable}\n \\caption{Results for transfer learning with our joint model. The tables highlight that\n the best source languages are often genetically and typologically closest. Also, we see that multi-source often helps, albeit more often in the $|{\\cal D}_t|=100$ case. }\n \\label{tab:results}\n\\end{table*}\n\n\n\n\\subsubsection{Higher-Order CRF Tagger}\nThe {\\sc MarMoT} tagger is\nthe leading non-neural approach to morphological tagging. This baseline is important since non-neural, feature-based approaches have\nbeen found empirically to be more efficient, in the sense that their\nlearning curves tend to be steeper. Thus, in the low-resource setting\nwe would be remiss to not consider a feature-based approach. Note\nthat this is not a transfer approach, but rather only uses\nthe low-resource data.\n\n\n\\subsubsection{Alignment-based Projection}\nThe projection approach of \\newcite{buys-botha:2016:P16-1} provides an\nalternative method for transfer learning. The idea is to construct\npseudo-annotations for bitext given an alignments\n\\cite{och2003systematic}. Then, one trains a standard tagger using\nthe projected annotations. The specific tagger employed is the {\\sc wsabie}\nmodel of \\newcite{DBLP:conf\/ijcai\/WestonBU11}, which---like our\napproach--- is a $0^\\text{th}$-order discriminative neural model. In\ncontrast to ours, however, their network is shallow. 
We compare\nthe two methods in more detail in \\cref{sec:related-work}.\n\n\\begin{table*}\n \\begin{adjustbox}{width=2.\\columnwidth}\n \\begin{tabular}{lccccccccccccc} \\toprule\n & \\multicolumn{13}{c}{Accuracy} \\\\\n & \\multicolumn{3}{c}{B\\&B (2016)} & \\multicolumn{2}{c}{{\\sc MarMoT}} & \\multicolumn{2}{c}{Ours (Mono)} & \\multicolumn{2}{c}{Ours (Universal)} & \\multicolumn{2}{c}{Ours (Joint)} & \\multicolumn{2}{c}{Ours (Specific)} \\\\ \\cmidrule(r){2-4} \\cmidrule(r){5-6} \\cmidrule(r){7-8} \\cmidrule(r){9-10} \\cmidrule(r){11-12} \\cmidrule(r){13-14} \n & en (int) & best (non) & best (int) & $100$ & $1000$ & $100$ & $1000$ & $100$ & $1000$ & $100$ & $1000$ & $100$ & $1000$ \\\\ \\cmidrule(r){2-14}\n {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{BG.pdf}}}\\,(bg) & 36.3 & 38.2 & 50.0 & 56.5 & 78.8 & 40.2 & 66.6 & 57.8 & 80.9 & 64.5 & {\\bf 81.6} & 63.5 & 80.8 \\\\\n {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{CZ.pdf}}}\\,(cs) & 24.4 & 49.3 & 53.4 & 49.2 & 69.2 & 32.1 & 66.1 & 57.4 & 77.6 & 57.9 & {\\bf 74.8} & 56.1 & 74.2 \\\\\n {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{DK.pdf}}}\\,(da) & 36.6 & 46.9 & 46.9 & 75.9 & 90.9 & 45.3 & 86.6 & 77.6 & 90.1 & 87.8 & 93.9 & 89.2 & {\\bf 94.3} \\\\\n {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{ES.pdf}}}\\,(es) & 39.9 & 75.3 & 75.5 & 85.9 & 93.1 & 64.7 & 92.5 & 85.1 & 60.9 & 90.9 & {\\bf 94.2} & 90.7 & {\\bf 94.2} \\\\\n {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{FI.pdf}}}\\,(fi) & 27.4 & 51.8 & 56.0 & 50.0 & 77.5 & 28.0 & 74.2 & 48.3 & 81.2 & 55.3 & 81.8 & 55.4 & 80.7 \\\\\n {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{IT.pdf}}}\\,(it) & 38.1 & 75.5 & 75.9 & 81.7 & 92.3 & 67.0 & 88.9 & 84.7 & 93.1 & 86.8 & {\\bf 93.8} & 86.1 & 93.3 \\\\\n {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{PL.pdf}}}\\,(pl) & 25.3 & 47.4 & 51.3 & 51.7 & 71.1 & 32.1 & 60.9 & 47.4 & {\\bf 78.4} & 57.0 & 78.1 & 56.1 & 76.4 \\\\\n {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{PT.pdf}}}\\,(pt) & 36.6 & 71.9 & 72.2 & 77.0 & 86.3 & 61.7 & 85.6 & 80.6 & 88.7 & 83.4 & 88.7 & 82.4 & {\\bf 89.1} \\\\\n {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{SE.pdf}}}\\,(sv) & 29.3 & 44.5 & 44.5 & 69.5 & 88.3 & 46.1 & 84.2 & 75.7 & 90.0 & 77.2 & {\\bf 90.9} & 78.3 & 90.7 \\\\\n \\bottomrule\n \\end{tabular}\n \\end{adjustbox}\n \\caption{Comparison of our approach to various baselines for low-resource tagging under token-level accuracy. We compare on only those languages in \\newcite{buys-botha:2016:P16-1}. Note that tag-level accuracy was not reported in the original B\\&B paper, but was acquired through personal communication with the first author. All architectures presented in this work are used in their multi-source setting. The B\\&B and {\\sc MarMoT} models are single-source. }\n \\label{tab:baseline-table1}\n\\end{table*}\n\n\n\\subsubsection{Architecture Study}\nAdditionally, we perform a thorough study\nof the neural transfer learner, considering\nall three architectures. A primary goal of our experiments\nis to determine which of our three proposed neural transfer techniques\nis superior. 
Even though our experiments focus on morphological tagging,
these architectures are more general in that they may be easily applied to
other tasks, e.g., parsing or machine translation.
We additionally explore the viability of multi-source transfer, i.e.,
the case where we have multiple source languages. All of our architectures
generalize to the multi-source case without any complications.


\subsection{Experimental Details}\label{sec:details}
We train our models under the following conditions.
\paragraph{Evaluation Metrics.}
 We evaluate using average per-token accuracy, as is standard for
 both POS tagging and morphological tagging, and per-feature $F_1$ as
 employed in \newcite{buys-botha:2016:P16-1}. The per-feature $F_1$
 calculates a value $F^k_1$ for each key $k$ in the target language's tags
 by asking if the key--attribute pair $k_i\!=\!v_i$ is in the predicted
 tag. Then, the key-specific $F^k_1$ values are averaged equally.
 Note that $F_1$ is a more flexible metric as it gives partial
 credit for getting some of the attributes in the bundle correct,
 whereas accuracy does not.


 \paragraph{Hyperparameters.}
 Our networks are four layers deep (two LSTM layers for the character embedder, i.e., to compute ${\boldsymbol v}_i$, and two LSTM layers for the tagger,
 i.e., to compute ${\boldsymbol e}_i$),
 and we use an embedding size of 128 for
 the character input vectors and hidden layers of 256 nodes in all other cases. All networks are trained with the
 stochastic gradient method RMSProp \cite{Tieleman2012}, with a fixed
 initial learning rate and a learning rate decay that is adjusted per
 language according to the amount of training data. The
 batch size is always 16. Furthermore, we use dropout
 \cite{DBLP:journals/jmlr/SrivastavaHKSS14}; the dropout probability
 is set to 0.2. We used Torch 7 \cite{collobert2011torch7} to configure the computation graphs
 implementing the network architectures.

\section{Results and Discussion}
We report our results in two tables. First, we report a detailed
cross-lingual evaluation in \cref{tab:results}. Second, we report a
comparison against two baselines in \cref{tab:baseline-table1} (accuracy)
and \cref{tab:baseline-table2} ($F_1$). We
see two general trends in the data. First, we find that genetically closer
languages yield better source languages. Second, we find that the
multi-softmax architecture is the best in terms of transfer ability,
as evinced by the results in \cref{tab:results}. To test for
significance, we perform a paired permutation test
\cite{DBLP:conf/coling/Yeh00}. We find a wider gap between our model
and the baselines under accuracy than under
$F_1$. We attribute this to the fact
that $F_1$ is a softer metric in that
it assigns credit to partially correct guesses.

\paragraph{Source Language.}
As discussed in \cref{sec:transferring-morphology}, the transfer of
morphology is language-dependent. This intuition is borne out in the results from our
study (see \cref{tab:results}).
We see that in the closer grouping of the Western Romance
languages, i.e., Catalan, French, Italian, Portuguese, and Spanish, it
is easier to transfer than with Romanian, an Eastern Romance language.
Within the Western grouping, we see that the close
pairs, e.g., Spanish and Portuguese, are amenable to
transfer. We find a similar pattern in the other language families,
e.g., Russian is the best source language for Ukrainian, Czech is the best
source language for Slovak and Finnish is the best source language for Estonian.

\paragraph{Multi-Source Transfer.}
 In many cases, we
find that multiple sources noticeably improve the results over the
single-source case. For instance, when we have multiple Romance languages
as source languages, we see gains of up to 2\%. We also see
gains in the Northern Germanic languages when
using multiple source languages. From a linguistic point of view,
this is logical as different source languages may be similar to the
target language along different dimensions, e.g., when transferring
among the Slavic languages, we note that Russian retains the complex
nominal case system of Serbian, while South Slavic Bulgarian is lexically more similar.

\paragraph{Performance Against the Two Baselines.}
As shown in \cref{tab:baseline-table1} and \cref{tab:baseline-table2}, our model outperforms the projection tagger of
\newcite{buys-botha:2016:P16-1} even though our approach does not
utilize bitext, large-scale alignment or
monolingual corpora---rather, all transfer
between languages happens through the forced sharing of character-level
features.\footnote{
We would like to highlight some issues of comparability
with the results in \newcite{buys-botha:2016:P16-1}.
Strictly speaking, the results are not comparable
and our improvement over their method should be
taken with a grain of salt. As the source code
is not publicly available and was developed in industry, we resorted to the numbers
in their published work and additional numbers obtained
through direct communication with the authors.
First, we used a slightly newer version of UD to incorporate
more languages: we used v2 whereas they used v1.2. There
are minor differences in the morphological tagset used between these
versions. Also, in the $|{\cal D}_t|=1000$ setting, we are training
on significantly more data than the models in \newcite{buys-botha:2016:P16-1}.
A much fairer comparison is to our models with $|{\cal D}_t| = 100$.
Also, we compare to their method using their standard (non) setup.
This comparison is fair insofar as we evaluate in the same manner, but it disadvantages
their approach, which cannot predict tags that are not in the source language.}
Our model does, however, require annotation of a small
number of sentences in the target language for training. We note,
however, that this does not necessitate a large number of human annotation hours \cite{garrette-baldridge:2013:NAACL-HLT}.

\paragraph{Reducing Sample Complexity.}
Another interesting point
about our model, best evinced in \cref{fig:curve},
is that the feature-based CRF approach seems to be
a better choice for the low-resource setting, i.e.,
the neural model has greater sample complexity.
However,\nin the multi-task scenario, we find that\nthe neural tagger's learning curve is even steeper.\nIn other words, if we have to train a tagger on very little\ndata, we are better off using a neural multi-task approach\nthan a feature-based approach; preliminary\nattempts to develop a multi-task version of {\\sc MarMoT}\nfailed (see \\cref{fig:curve}).\n\n\n\n\\section{Related Work}\\label{sec:related-work}\nWe divide the discussion of related work topically into three parts\nfor ease of intellectual digestion.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=7.5cm]{transfer.pdf}\n \\caption{Learning Curve for Spanish and Catalan comparing\n our monolingual model, our joint model and two {\\sc MarMoT} models. The\n first {\\sc MarMoT} model is identical to those trained in the rest of the paper\n and the second attempts a multi-task approach, which failed so no further experimentation was performed with this model.}\n \\label{fig:curve}\n\\end{figure}\n\n\n\\begin{table*}\n \\begin{adjustbox}{width=2.\\columnwidth}\n \\begin{tabular}{lccccccccccccc} \\toprule\n & \\multicolumn{13}{c}{$F_1$} \\\\\n & \\multicolumn{3}{c}{B\\&B (2016)} & \\multicolumn{2}{c}{{\\sc MarMoT}} & \\multicolumn{2}{c}{Ours (Mono)} & \\multicolumn{2}{c}{Ours (Universal)} & \\multicolumn{2}{c}{Ours (Joint)} & \\multicolumn{2}{c}{Ours (Specific)} \\\\ \\cmidrule(r){2-4} \\cmidrule(r){5-6} \\cmidrule(r){7-8} \\cmidrule(r){9-10} \\cmidrule(r){11-12} \\cmidrule(r){13-14}\n & en (int) & best (non) & best (int) & $100$ & $1000$ & $100$ & $1000$ & $100$ & $1000$ & $100$ & $1000$ & $100$ & $1000$ \\\\ \\cmidrule(r){2-14}\n {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{BG.pdf}}}\\,(bg) & 51.6 & 61.9 & 65.0 & 53.7 & 74.7 & 26.0 & 68.0 & 55.1 & 77.3 & 56.6 & 77.8 & 55.1 & {\\bf 78.6} \\\\\n {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{CZ.pdf}}}\\,(cs) & 55.7 & 61.6 & 64.0 & 60.8 & 80.5 & 30.9 & 65.3 & 54.5 & 66.3 & 54.7 & 66.5 & 54.6 & {\\bf 67.0} \\\\\n {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{DK.pdf}}}\\,(da) & 65.4 & 70.7 & 73.1 & 69.7 & 92.9 & 35.3 & 90.1 & 85.9 & 93.2 & 86.9 & {\\bf 93.5} & 83.2 & 93.2 \\\\\n {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{ES.pdf}}}\\,(es) & 60.7 & 74.0 & 74.6 & 82.4 & 92.6 & 55.9 & 91.4 & 88.4 & 93.6 & 89.2 & {\\bf 94.1} & 87.6 & 93.8 \\\\\n {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{FI.pdf}}}\\,(fi) & 59.1 & 57.2 & 59.1 & 44.6 & 78.3 & 17.5 & 61.7 & 48.6 & 73.6 & 49.3 & {\\bf 74.4} & 46.2 & 73.9 \\\\\n {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{IT.pdf}}}\\,(it) & 66.1 & 74.4 & 75.3 & 78.7 & 90.0 & 56.4 & 87.0 & 83.1 & 90.5 & 83.3 & {\\bf 91.9} & 82.7 & 91.7 \\\\\n {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{PL.pdf}}}\\,(pl) & 47.3 & 56.8 & 60.4 & 57.8 & 81.8 & 31.6 & 69.7 & 61.9 & 83.9 & 62.5 & {\\bf 84.7} & 62.6 & 83.2 \\\\\n {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{PT.pdf}}}\\,(pt) & 60.2 & 69.2 & 73.1 & 67.6 & 80.0 & 42.9 & 82.0 & 77.9 & 86.3 & 78.1 & {\\bf 86.5} & 71.8 & 85.7 \\\\\n {\\setlength{\\fboxsep}{0pt}\\fbox{\\includegraphics[height=0.30cm,width=0.45cm]{SE.pdf}}}\\,(sv) & 55.1 & 72.1 & 74.6 & 69.7 & 90.2 & 44.1 & 86.4 & 82.5 & 93.2 & 83.5 & {\\bf 93.7} & 82.8 & 93.4 \\\\\n \\bottomrule\n \\end{tabular}\n \\end{adjustbox}\n \\caption{Comparison of our approach to various 
baselines for low-resource tagging under $F_1$ to allow for a more complete
 comparison to the model of \newcite{buys-botha:2016:P16-1}. All architectures presented in this work are used in their multi-source setting. The B\&B and {\sc MarMoT} models are single-source. We only compare on those languages used in B\&B.}
 \label{tab:baseline-table2}
\end{table*}


\subsection{Alignment-Based Distant Supervision.}
Most cross-lingual work in NLP---focusing on morphology or
otherwise---has concentrated on indirect supervision, rather than transfer
learning. The goal in such a regime is to provide noisy labels
for training the tagger in the low-resource language through
annotations projected over aligned bitext with a high-resource
language. This method of projection was first introduced by
\newcite{DBLP:conf/naacl/YarowskyN01} for the projection of POS
annotation. While follow-up work
\cite{DBLP:conf/ijcnlp/FossumA05,das-petrov:2011:ACL-HLT2011,tackstrom-mcdonald-uszkoreit:2012:NAACL-HLT}
has continually demonstrated the efficacy of projecting simple part-of-speech annotations,
\newcite{buys-botha:2016:P16-1} were the first to show the use
of bitext-based projection for the training of a {\em morphological}
tagger for low-resource languages.

As we also discuss the training of a morphological tagger, our work is
most closely related to \newcite{buys-botha:2016:P16-1} in terms of
the task itself. We contrast the approaches. The main difference is
that our approach is not projection-based and, thus, does not
require the construction of a bilingual lexicon for projection based
on bitext. Rather, our method jointly learns multiple taggers and
forces them to share features---a true transfer learning scenario. In
contrast to projection-based methods, our procedure always requires a
minimal amount of annotated data in the low-resource target
language---in practice, however, this distinction is non-critical, as
projection-based methods without a small amount of seed target-language
data perform poorly \cite{buys-botha:2016:P16-1}.

\subsection{Character-level NLP.}
Our work also follows a recent trend in NLP, whereby traditional
word-level neural representations are being replaced by
character-level representations for a myriad of tasks, e.g., POS tagging
\cite{DBLP:conf/icml/SantosZ14}, parsing
\cite{ballesteros-dyer-smith:2015:EMNLP}, language modeling
\cite{ling-EtAl:2015:EMNLP2} and sentiment analysis
\cite{DBLP:conf/nips/ZhangZL15}, as well as the tagger of
\newcite{heigold2017}, whose work we build upon. Our work is also
related to recent work on character-level morphological generation using neural architectures
\cite{faruqui-EtAl:2016:N16-1,rastogi-cotterell-eisner:2016:N16-1}.



\subsection{Neural Cross-lingual Transfer in NLP.}
In terms of methodology, however, our proposal bears similarity
to recent work in speech and machine translation---we discuss
each in turn. In speech recognition, \newcite{heigold2013multilingual} train
a cross-lingual neural acoustic model on five Romance languages.
The architecture bears similarity to our multi-language softmax
approach.
Dependency parsing benefits from cross-lingual learning in a similar fashion \cite{guo-EtAl:2015:ACL-IJCNLP2,DBLP:conf/aaai/GuoCYWL16}.

In neural machine translation
\cite{DBLP:conf/nips/SutskeverVL14,DBLP:journals/corr/BahdanauCB14},
recent work
\cite{firat-cho-bengio:2016:N16-1,zoph-knight:2016:N16-1,JohnsonSLKWCTVW16}
has explored the possibility of jointly training translation models for a
wide variety of languages. Our work addresses a different task, but
the undergirding philosophical motivation is similar, i.e.,
attacking low-resource NLP through multi-task transfer learning.
\newcite{kann-cotterell-schutze:2017:ACL2017} offer a similar method for cross-lingual
transfer in morphological inflection generation.

\section{Conclusion}
We have presented three character-level recurrent neural network architectures for
multi-task cross-lingual transfer of morphological
taggers. We provided an empirical evaluation of the technique on 18
languages from four different language families, showing widespread
applicability of the method. We found that the transfer of morphological
taggers is an eminently viable endeavor among related languages and,
in general, the closer the languages, the easier the transfer of
morphology becomes. Our technique outperforms two strong baselines
proposed in previous work. Moreover, we define standard low-resource
training splits in UD for future research in low-resource morphological tagging.
Future work should focus on extending the neural morphological tagger
to a joint lemmatizer \cite{muller-EtAl:2015:EMNLP} and on evaluating its functionality in the low-resource setting.

\section*{Acknowledgements}
RC acknowledges the support of an NDSEG fellowship. We
would like to thank Jan Buys and Jan Botha, who helped us compare to the numbers reported in their paper. We would also like to thank Hinrich Sch{\"u}tze for
reading an early draft, and Tim Vieira and Jason Naradowsky
for helpful initial discussions.