diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzgzmg" "b/data_all_eng_slimpj/shuffled/split2/finalzzgzmg" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzgzmg" @@ -0,0 +1,5 @@ +{"text":"\\section{Diffusion and opacities in cool B stars}\n\nSlowly rotating B stars on the main sequence are thought to be extremely stable as\nthere is only limited convection in the outer envelope and none at the surface, mass\nloss is expected to be small and rotational mixing is also negligible.\nIn such a stable environment, diffusion should proceed with few impediments.\nConsequently, the large abundance anomalies observed in HgMn stars are understood to be \nthe result of diffusion in the atmosphere of such stars.\n\n\nThat diffusion occurs in the atmosphere suggests that diffusion also occurs in the interior\nin the absence of mixing processes. \nRicher, Michaud \\&~Turcotte~(2000) and Richard, Michaud \\&~Richer~(2002) (see also Richard \nin these proceedings) have shown that\nradiative levitation pushes iron-peak elements up in the envelope of hot A and \ncool B stars. While these elements are radiatively supported throughout\nthe outer envelope of these stars, they tend to accumulate at a temperature of \nroughly 200\\,000~K because of a local reduction of the outward flux there.\nAt such a temperature, iron-peak elements are the dominant contributors to the opacity \nand, naturally, as they accumulate, the opacity also increases locally. \nFigure~1 shows the abundance profiles of two models, one with significant diffusion and\nanother where the effect of diffusion is only marginal. An overabundance of the order \nof a factor of ten is achieved in the former.\n\nWith the notable exception of Hydrogen, lighter elements that play an important \nrole in the opacity at lower temperature \nare generally not supported by radiative pressure and therefore sink out of the superficial\nregions toward the core. 
Consequently, their contribution to \nthe opacity diminishes.\n\nThe combined effect of the evolution of the chemical composition as a result of \ndiffusion yields a marked increase\nin the opacity at around 200\,000~K and, perhaps somewhat surprisingly, an increase\nof the opacity at lower temperatures, as shown in Figure~1, driven by the combined contributions\nof hydrogen and iron. \n\nNotice that the abundances are homogeneous from the surface to a point deeper than\n200\,000~K ($\log T =5.3$). These low-temperature regions are artificially homogenized\nwith an ad hoc turbulent mixing coefficient. In one respect this allows us to tweak the\nlevel of chemical anomaly in order to investigate the effect of diffusion in the star's\ninterior. Unfortunately, it also means that the structure of the cool regions of\nthe envelope may be significantly inaccurate.\n\n\\section{Pulsations in cool B stars}\n\nThe kappa-mechanism due to the opacity of iron-peak elements\nis responsible for variability on the upper main sequence (Pamyatnykh 1999). \nThe SPB stars (Slowly Pulsating B stars; see Pamyatnykh) are long-period pulsators\nfound among chemically normal young main-sequence stars earlier than B8. \nThe distribution of these stars overlaps in the H.-R. diagram with that of\nthe chemically peculiar but seemingly stable HgMn stars.\n\nApart from variability and surface chemical composition, these two classes of\nstars are very similar. 
Interestingly, SPB stars are found to be mostly slowly rotating\nstars, as are the HgMn stars, but the lack of rapidly rotating SPB stars may well be\nonly a selection effect.\nThis suggests that there might well be a correlation between the chemical composition \nand the excitation of the pulsations, as in Am stars where diffusion leads to\nstability, or that the conditions that allow diffusion are\nnot conducive to pulsations occurring.\n\nThe best models currently available (Turcotte \\& Richard, submitted) do not, however, \nsupport the hypothesis that diffusion can undermine the excitation of pulsations. \nAs the opacity bump due to iron-peak elements is enhanced as a result of diffusion\nin those models, they suggest that the excitation of pulsations in HgMn stars should\nbe at least as high as in chemically normal SPB stars. Again in Figure~1, the differential work \nfor a given mode of pulsation is shown in a model with nearly normal composition \nand one with strongly enhanced\niron and opacity in the driving region. The net normalized growth rate, which must be positive\nfor a mode to be unstable and for the star to become variable, is 0.08 for this\nmode in the ``normal'' model and 0.22 in the ``peculiar'' model. The \npeak in the driving region is higher, but there is also more damping on the \nhot side of the peak. In this mode the net driving is in fact considerably enhanced,\nbut in many modes, especially in more evolved models, the net excitation (the value \nof the normalized growth rate) is surprisingly insensitive to the \nmagnitude of the abundance anomalies.\nNevertheless, one must conclude that the models are lacking the necessary ingredient \nto explain the absence of observed pulsations in HgMn stars.\n\\begin{figure}\n \\includegraphics[width=13cm]{turcotte_fig.eps}\n\\caption{The figure illustrates the effect of diffusion on the excitation of pulsations\nin the model of a 10~Myr old 4~M$_\odot$ star. 
Two models of the same age and mass are compared,\none with only marginal change in abundances (dashed line) and one with more efficient\ndiffusion (solid line). The top panel shows the iron abundance profiles;\nthe middle panel the mean Rosseland opacity; and the bottom panel shows the \ndifferential work for a $\ell=1, n=16$ mode with a period of 1.2 days. A positive work indicates\nmode excitation while a negative value indicates damping. By integrating the differential work\nover the whole star we obtain the total work integral which is used to calculate the \nnormalized growth rate (see text).}\n\\end{figure}\n\n\n\\section{What does this tell us about cool B stars?}\n\nWe can speculate as to what the missing ingredient in the models might be.\n \nWe can first argue that adding mixing in the interior of HgMn stars \nwould not resolve the discrepancy, as the result of mixing would be to homogenize\nthe composition to its initial, here solar, value. This would still leave too many iron-peak\nelements in the pulsations' driving region, leading to the expectation\nof pulsations in HgMn stars as in SPB stars.\n\nA possible solution may be the selective mass loss, driven by radiation\npressure in the atmosphere, of certain elements but not others (Babel 1995). It is possible that this\nmay lead to the depletion of some elements in the driving region. \nThe detailed process by which this depletion would occur, if indeed it can,\nhas not been worked out yet. \n\nAnother possibility is that the issues of mode selection, interference or\nvisibility that often befall the asteroseismology of pulsating stars obscure\nany direct conclusions we can hope to make on models and stellar physics \nfrom the observations. \n\nFinally, our models are lacking in one crucial aspect.\nThey cannot model the region cooler than 200\,000~K consistently \nbecause of numerical problems. Therefore the structure of the models there may not\nbe appropriate. 
Though the work integrals seem rather insensitive to those regions,\na substantial change in structure there may lead to smaller predicted excitations.\n\nThe major stumbling block to improved models is the lack of\nopacity spectra appropriate to model diffusion consistently at low temperatures \n(Leblanc, Michaud \\&~Richer~2000). Only when this becomes possible will the \nfull picture of mode driving in HgMn stars be achieved. Before then, the models \nremain informative about the processes that occur in the interior, but speculative\nas to the net effect of diffusion on mode damping.\n\nObservationally, the advent of space-based experiments dedicated to asteroseismology\nwill eventually resolve the question of whether HgMn stars are really stable or whether \nthey undergo undetected low-amplitude variations. Observations are underway to \nidentify faint HgMn stars at the VLT so they can thereafter be observed in the\nplanetary field of CoRoT. Whether very-low-amplitude modes are detected or not, these \nobservations will place important new constraints on the models.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{sec:introduction}\n\nCoreference annotations are costly and difficult to obtain, since trained annotators with sufficient world knowledge are necessary for reliable annotations. This paper presents a way to {\em simulate} annotators using reinforcement learning. To motivate our approach, we rely on the following example from \\newcite[colors added to mark entity mentions]{strube-etal}:\n\\begin{itemize}\n \\item[(1)] [\\clusterAbf{Lynyrd Skynyrd}]$_1$ was formed in \\clusterBbf{Florida}$_2$. 
Other bands from [\\clusterBbf{the Sunshine State}]$_2$ include \\clusterCbf{Fireflight} and \\clusterDbf{Marilyn Manson}.\n\\end{itemize}\n\n\\newcite{strube-etal} cite the association between \\clusterB{Florida} and \\clusterB{the Sunshine State} as an example of a common source of name-name recall error for state-of-the-art coreference resolution systems. The challenge is that the two names co-occur relatively infrequently and are unlikely to do so in a moderate-sized, manually annotated training corpus. A state-of-the-art system may be able to infer the relation using distributional information about the phrase \\clusterB{the Sunshine State}, but is likely to have limited evidence for the decision that it is coreferential with \\clusterB{Florida} rather than \\clusterA{Lynyrd Skynyrd}.\n\nWhile coreference-annotated data is scarce, knowledge bases including factual information (such as that \\clusterC{Fireflight} is from \\clusterB{Florida}) are increasingly available. For a human annotator unaware that \\clusterB{Florida} is sometimes referred to as \\clusterB{the Sunshine State}, the information that \\clusterC{Fireflight} is from \\clusterB{Florida} is sufficient to establish that \\clusterB{Florida} and \\clusterB{the Sunshine State} are (with high probability) coreferential. This paper explores a novel architecture for making use of such information from knowledge bases by tying a coreference resolution system to a relation extraction system, enabling us to reward the coreference system for making predictions that lead us to infer facts that are consistent with such knowledge bases. This potentially provides us with more evidence for resolving coreference such as (1). 
\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\columnwidth]{images\/arch.pdf}\n \\caption{\\label{fig:sys_arch}Our strategy for training a coreference resolver using reward from relation extraction.}\n\\end{figure}\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/overall-newest.pdf}\n \\caption{\\label{fig:overall}The columns show the different pipelines used to obtain data for training the reward models. The pipeline for: (i) RE-KG directly extracts triples from Wikidata, (ii) RE-Text runs Wikipedia summaries through OpenRE to generate triples, and (iii) RE-Joint adds an additional verification step by checking if the generated triples exist in Wikidata.}\n\\end{figure*}\n\n\nWe propose a training strategy (Figure~\\ref{fig:sys_arch}) in which we pass on the predictions of a neural coreference resolver to an open relation extraction (OpenRE) system, matching relations extracted from resolved sentences with a knowledge base. We show how checking the produced relationships for consistency against the knowledge base produces a reward that is, indirectly, a signal about the quality of the coreference resolution. In order to generalize this signal beyond the coverage of the knowledge base, we train a Universal Schema model \\cite{riedel-etal-2013-relation} and use its confidence as our reward function. With this reward function, we do policy-gradient fine-tuning of our coreference resolver, effectively optimizing its predictions' consistency with world knowledge. 
\n\n\\paragraph{Contributions}\nWe demonstrate that training a coreference resolver by reinforcement learning with rewards from a relation extraction system results in improvements for coreference resolution.\nOur code is made publicly available at \\url{https:\/\/github.com\/rahular\/coref-rl}\n\n\\section{Consistency Reward for Coreference Resolution}\\label{sec:strategy}\n\nIn order to reward a coreference resolver for being consistent with world knowledge, we propose a simple training strategy based on relation extraction: (i) Sample a Wikipedia\\footnote{\\url{https:\/\/www.wikipedia.org}} document at random, (ii) Replace mentions with their antecedents using a coreference resolver, (iii) Apply an off-the-shelf OpenRE system to each rewritten document, (iv) Score relationships that include coreferent mentions using Universal Schema, and (v) Use the score as a reward for training the coreference resolver. \n\n\\paragraph{Reward functions} To model consistency with world knowledge, we train different Universal Schema models \\cite{riedel-etal-2013-relation,Verga:McCallum:16}, resulting in three reward functions (Figure~\\ref{fig:overall}): \\textbf{RE-KG} (Knowledge Graph Universal Schema) is trained to predict whether two entities are linked in Wikidata\\footnote{\\url{https:\/\/www.wikidata.org}}; \\textbf{RE-Text} (Text-based Universal Schema) is trained to predict whether two entities co-occur in Wikipedia; and \\textbf{RE-Joint} (Joint Universal Schema) is trained to predict whether two entities are linked {\\it and} co-occur. The three rewards focus on different aspects of relationships between entities, giving complementary views of what entities are related.\n\nSimilar to \\newcite{verga-etal-2016-multilingual}, we parameterize candidate relation phrases with a BiLSTM \\cite{graves2005framewise}, and use pre-trained Wikidata BigGraph embeddings \\cite{pbg} as the entity representations. 
We apply a one-layer MLP on the concatenated representations to get the reward value.\n\n\\paragraph{Updating the coreference resolver} Each resolved document is converted into $n$ subject-relation-object (SRO) triples by an open information extraction system \\cite{openIE}. Each triple $t_i$ is then scored using a reward function to obtain a reward $r_i$ for $i \\in \\{1, \\ldots, n\\}$. The final document-level reward is the normalized sum of the individual rewards as shown in Equation~\\ref{eqn:reward}, where $R_h$ is a moving window containing the previous $h=100$ normalized reward values.\n\n\\begin{equation}\n R = \\frac{\\sum_{i} r_i - \\mathrm{mean}(R_h)}{\\mathrm{stddev}(R_h)}\n \\label{eqn:reward}\n\\end{equation}\n\nSince $R$ is not differentiable with respect to the coreference resolver's parameters, we use policy gradient training to update the coreference resolver. We select the best action according to the current policy, using random exploration of the alternative solutions with probability $p=\\frac{1}{10}$.\n\n\\paragraph{Multi-task reinforcement learning} Our overall training procedure is presented in Algorithm~\\ref{alg:distral}. After training the three aforementioned reward models, we create \\textbf{RE-Distill} by interpolating their trained weights. Next, we pre-train a coreference resolver using supervised learning, and fine-tune it using each of the three reward functions to get three different coreference policies: \\textbf{Coref-KG}, \\textbf{Coref-Text} and \\textbf{Coref-Joint}, respectively. We then use multi-task reinforcement learning to combine these three policies to get \\textbf{Coref-Distill}. Our approach is a particular instance of DisTraL \\cite{distral}, using policy gradient and model interpolation. Finally, \\textbf{Coref-Distill} is fine-tuned with rewards from {\\bf RE-Distill}. 
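As an illustration of the document-level reward normalization described above, here is a minimal Python sketch. It is our own reimplementation, not code from the paper: the moving window stores normalized values as in the text, while the short-history and zero-variance fallbacks are our assumptions.

```python
from collections import deque
from statistics import mean, pstdev

def document_reward(triple_rewards, history):
    """Normalize the summed triple rewards r_i against a moving window
    of previous reward values (the window R_h of Eq. 1; h = 100 in the
    paper). `history` is a deque with maxlen h."""
    raw = sum(triple_rewards)
    if len(history) >= 2:
        mu, sigma = mean(history), pstdev(history)
    else:
        mu, sigma = 0.0, 1.0          # assumed fallback for a short history
    sigma = sigma if sigma > 0 else 1.0  # assumed guard against zero variance
    r = (raw - mu) / sigma
    history.append(r)                 # the window stores normalized values
    return r
```

A caller would keep a single `history = deque(maxlen=100)` across documents, matching the window size $h=100$ used in the paper.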
\n\n\\begin{algorithm}\n\\begin{small}\n\\caption{\\label{alg:distral} Multi-task Reinforcement Learning}\n\\begin{algorithmic}\n\\REQUIRE Baseline initialized policies $\\theta_n$ for $n\\in \\{1,2,3\\}$ \\label{alg:pretrained-policies}\\\\\n\\REQUIRE Reward functions \\texttt{reward$_n$} for $n\\in \\{1,2,3\\}$\n\\REQUIRE Distilled reward function \\texttt{reward$_*$}\n\\WHILE{stopping criterion not met}\n\\STATE Sample $k$ documents $D^k$\n\\FOR{$d\\in D^k$}\n\\FOR{$n\\in\\{1,2,3\\}$}\n\\STATE $\\mathcal{C}_d$ = entity clusters with $\\theta_n$\n\\STATE $d'$ = resolve $d$ with $\\mathcal{C}_d$\n\\STATE $\\mathcal{T}$ = obtain OpenIE triples for $d'$ \n\\STATE $r$ = reward$_n$($\\mathcal{T}$)\n\\STATE $\\hat{g}_k$ = policy gradient for $\\theta_n$ with reward $r$\n\\STATE\n$\\theta_n^{k+1}=\\theta_n^k+\\alpha_k\\hat{g}_k$\n\\ENDFOR\n\\ENDFOR\n\\ENDWHILE \n\\STATE Distilled policy $\\theta_*=\\frac{\\theta_1+\\theta_2+\\theta_3}{3}$\n\\STATE Sample $k$ documents $D^k$\n\\FOR{$d\\in D^k$}\n\\STATE $\\mathcal{C}_d$ = entity clusters with $\\theta_*$\n\\STATE $d'$ = resolve $d$ with $\\mathcal{C}_d$\n\\STATE $\\mathcal{T}$ = obtain OpenIE triples for $d'$ \n\\STATE $r$ = reward$_*$($\\mathcal{T}$)\n\\STATE $\\hat{g}_k$ = policy gradient for $\\theta_*$ with reward $r$\n\\STATE\n$\\theta_*^{k+1}=\\theta_*^k+\\alpha_k\\hat{g}_k$\n\\ENDFOR\n\\RETURN Distilled policy $\\theta_*$\n\\end{algorithmic}\n\\end{small}\n\\end{algorithm}\n\n\\section{Experiments}\\label{sec:experiments}\n\nWe use a state-of-the-art neural coreference resolution model \\cite{lee2018higher} as our baseline coreference resolver.\\footnote{\\url{https:\/\/github.com\/kentonl\/e2e-coref}} This model extends \\citet{lee2017end} with coarse-to-fine inference and ELMo pretrained embeddings \\cite{peters2018deep}.\n\n\n\\paragraph{Data} We use the standard training, validation, and test splits from the English OntoNotes.\\footnote{\\url{https:\/\/catalog.ldc.upenn.edu\/LDC2013T19}}\nWe also evaluate on the English WikiCoref \\cite{wikicoref}, with a validation and test split of 10
and 20 documents respectively.\n\n\\paragraph{Reward model training} We use data from English Wikipedia and Wikidata to train our three reward models.\nFor training \\textbf{RE-KG}, we sample 1 million Wikidata triples, and expand them to 12 million triples by replacing relation phrases with their aliases.\nFor \\textbf{RE-Text}, we pass the summary paragraphs from 50,000 random Wikipedia pages to Stanford's OpenIE extractor \\cite{corenlp}, creating 2 million triples.\nFor \\textbf{RE-Joint}, we only use Wikipedia triples that are grounded in Wikidata, resulting in 60,000 triples.\\footnote{That is, we retain only those triples whose subject and object can be linked to an entity in Wikidata.}\nWe further sample 200,000 triples from Wikidata and Wikipedia for validation, and train the reward models with early stopping based on the F$_1$ score of their predictions.\n\n\\paragraph{Evaluation}\nAll models are evaluated using the standard CoNLL metric, which is the average F$_1$ score over MUC, CEAFe, and $B^3$ \\cite{denis2009global}.\n\n\\section{Results}\\label{sec:results}\n\nSince the quality of our reward models is essential to the performance of the coreference resolver adaptations, we first report the validation accuracy and F$_1$ scores of the four reward models used, in Table~\\ref{tab:reward_results}. 
We clearly see the advantage of distillation, with a 5\\% absolute difference between the best single model ({\\bf RE-Text}) and {\\bf RE-Distill}.\n\n\\begin{table}[t]\n \\centering\n \\begin{tabular}{l|ccc}\n \\toprule\n {\\bf System} & {\\bf Data} & {\\bf Accuracy} & {\\bf F$_1$ score} \\\\\n \\midrule\n RE-KG & 12M & 0.64 & 0.78 \\\\\n RE-Text & 2M & 0.71 & 0.83 \\\\\n RE-Joint & 60K & 0.58 & 0.73 \\\\\n \\midrule\n RE-Distill & --- & \\textbf{0.78} & \\textbf{0.88} \\\\\n \\bottomrule\n \\end{tabular}\n \\caption{Training data size, accuracy and F$_1$ scores of the reward models on the 200,000 validation triples.\\label{tab:reward_results}}\n\\end{table}\n\n\\begin{table}[t]\n \\centering\n \\begin{tabular}{l|cc}\n \\toprule\n {\\bf System} & {\\bf OntoNotes}&{\\bf WikiCoref} \\\\\n \\midrule\n \\newcite{lee2018higher} & 72.60 & 57.49 \\\\\n \\midrule\n Coref-KG & 72.96 & 57.84 \\\\\n Coref-Text & 72.99 & 57.54 \\\\\n Coref-Joint & 72.77 & 57.51 \\\\\n \\midrule\n Coref-Distill & \\textbf{73.10} & \\textbf{58.14} \\\\\n \\bottomrule\n \\end{tabular}\n \\caption{Coreference results: average F$_1$ scores on the OntoNotes and WikiCoref test sets. Differences are significant w.r.t. 
$B^3$ (bootstrap test, $p<0.05$).\\label{tab:coref_results}} \n\\end{table}\n\nTable~\\ref{tab:coref_results} presents the downstream effects of applying these reward functions to our baseline coreference policy.\\footnote{The models were re-trained from scratch, and the scores are slightly different from those reported in \\newcite{lee2018higher}.}\n\nThe coreference resolution results are similar to the relation extraction results: using a distilled policy, learned through multi-task reinforcement learning, leads to better results on both datasets.\\footnote{We repeated this experiment three times with different random seeds and observed the same pattern and very robust performance across the board.}\n\nWhile improvements over the current state of the art are relatively small, they reflect significant progress, as they demonstrate the ability to successfully augment coreference resolvers with ``free'' data from large-scale KBs like Wikidata. For relation extraction, this could have positive downstream effects, and also ensure that relations are consistent with real-world knowledge. Moreover, this approach has the potential to also be beneficial for coreference resolution in low-resource languages, where less annotated data is available, as Wikidata triples are abundant for many languages.\n\n\\section{Analysis}\\label{sec:analysis}\n\\begin{figure*}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/analysis.pdf}\n \\caption{\\label{fig:analysis}Mention detection and linking examples by the baseline system from \\newcite{lee2018higher}, and the best-performing fine-tuned system (Coref-Distill). 
Mentions of the same color are linked to form a coreference cluster.}\n\\end{figure*}\n\nEmpirically, we find that fine-tuning the coreference resolver on Wikidata results in two kinds of improvements: \n\n\\paragraph{Better mention detection} Since the model is rewarded if the SRO triples produced from the resolved document are present in Wikidata, the model can do well only if it correctly resolves the subject and object, which are usually named entities (more generally, noun phrases). Indeed, we see an improvement in mention detection as exemplified in the first example of Figure~\\ref{fig:analysis}. Compared to the baseline, the fine-tuned model identifies a larger number of entities, including ``southern hemisphere'', ``Cambridge'' and ``Oxford'', which are missed by the baseline model.\n\n\\paragraph{Better linking} As a direct consequence of the above, the model is inclined to also link noun phrases that are not entities. In the second example of Figure~\\ref{fig:analysis}, we see that ``This attempt'' is linked to ``releasing'' by the fine-tuned model. Interestingly, we do not see this type of \\textit{eventive} noun phrase linking either in OntoNotes or in the predictions of the baseline model. \n\nThis phenomenon, however, also has a side-effect of producing singleton clusters and spurious linking, which adversely affect the recall. On the OntoNotes test data, while the average precision of the best performing fine-tuned model is higher than the baseline (75.62 vs. 73.80), a drop in recall (70.75 vs. 71.34) causes the final F$_1$ score to only marginally improve.\n\n\\section{Related Work}\\label{sec:related}\n\\paragraph{Coreference resolution} Among neural coreference resolvers \\cite{Wu2017ADL, Meng2018TriadbasedNN}, \\citet{lee2017end} were the first to propose an end-to-end resolver which did not rely on hand-crafted rules or a syntactic parser. 
Extending this work, \\citet{lee2018higher} introduced a novel attention mechanism for iteratively ranking spans of candidate coreferent mentions, thereby improving the identification of long-distance coreference chains. \\newcite{zhang-etal} improve pronoun coreference resolution by 2.2 F1 using linguistic features (gender, animacy and plurality) and a frequency-based predicate-argument selection preference as external knowledge. \\newcite{emami-etal} incorporate knowledge into coreference resolution by means of information retrieval, finding sentences that are syntactically similar to a given instance, and improving F1 by 0.16.\n\n\\paragraph{Reinforcement learning} RL has been used for many NLP tasks, including coreference resolution \\cite{clark2016deep} and relation extraction \\cite{Zeng2018LargeSR}. \\citet{clark2016deep} use RL to improve coreference resolution by optimizing their mention-ranking model and directly use the standard evaluation metrics as the rewards. We, on the other hand, perform end-to-end optimization by rewarding the model's consistency with real-world knowledge using relation extraction. To our knowledge, we are the first to use consistency with world knowledge as a reward for tasks other than knowledge base construction.\\footnote{\\newcite{Mao:ea:18}, for example, use reinforcement learning with a consistency-like reward to induce lexical taxonomies.} \n\n\\paragraph{Knowledge bases} Knowledge bases have been leveraged for many tasks across NLP \\cite{Bordes2011LearningSE,Chang2014TypedTD, Lin2015ModelingRP, Toutanova2015RepresentingTF, Yang2017LeveragingKB}. Specifically for coreference resolution, \\citet{Prokofyev2015SANAP} implement a resolver that ensures semantic relatedness of resulting coreference clusters by leveraging Semantic Web annotations. Their work incorporates knowledge graph information only in the final stage of the resolver's pipeline, and not during training. 
In contrast, our work injects information from the knowledge base directly into the training pipeline. Also, they use DBpedia \\cite{dbpedia07} as the ontology. Although both Wikidata and DBpedia are designed to support working with Wikipedia articles, DBpedia can be considered a subset of Wikidata, as Wikipedia infoboxes are its main data source. The advantages of Wikidata over DBpedia are its size and the fact that it is multilingual, which will allow applying our method to other languages in the future. \n\n\\section{Conclusion}\\label{sec:conclusion}\n\nWe presented an architecture for adapting coreference resolvers by rewarding them for being consistent with world knowledge. Using simple multi-task reinforcement learning and a knowledge extraction pipeline, we achieved improvements over the state of the art across two datasets. We believe this is an important first step in exploring the usefulness of knowledge bases in the context of coreference resolution and other discourse-level phenomena. In this area, manually annotated data is particularly expensive, and we believe leveraging knowledge bases will eventually reduce the need for manual annotation. \n\n\\section*{Acknowledgments}\n\nWe thank the reviewers for their valuable comments.\nRahul Aralikatte, Daniel Hershcovich, Heather Lent, and Anders S{\\o}gaard are funded by a Google Focused Research Award. Heather Lent is also funded by the European Union's Horizon 2020 research and innovation programme under the Marie Sk{\\l}odowska-Curie grant agreement No. 801199. Chen Qiu is funded in part by the National Natural Science Foundation of China under grant No. 61773355 and the China Scholarship Council.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n The field of communication complexity studies the amount of communication required to compute discrete functions whose input is split between two or more parties. 
In the most commonly studied framework, there are two parties, often called Alice and Bob, and a communication problem is defined by a Boolean matrix $M=[m_{ij}]_{n \\times n}$, where \\emph{Boolean} means that the entries are either $0$ or $1$. Alice receives a row number $i$, and Bob receives a column number $j$. Together, they should compute the entry $m_{ij}$ by exchanging bits of information in turn, according to a previously agreed-on protocol. There is no restriction on their computational power; the only measure we care to minimize is the number of exchanged bits.\n\nA deterministic protocol $\\pi$ specifies how the communication proceeds. It prescribes what bit a player sends at each step. This bit depends on the input of the player and the history of the communication so far. It is often assumed that the last communicated bit must be the output of the protocol. A protocol naturally corresponds to a binary tree as follows. Every internal node of the tree is associated with either Alice or Bob. If an internal node $v$ is associated with Alice, then it is labeled with a Boolean function $a_v$, which prescribes the bit sent by Alice at this node as a function of her input $i$. Similarly, the nodes associated with Bob are labeled with Boolean functions of $j$. Each leaf is labeled by $0$ or $1$, which corresponds to the output of the protocol at that leaf.\n\nWe denote the number of bits exchanged on the input $(i,j)$ by $\\cost_\\pi(i,j)$, which is precisely the length of the path from the root to the corresponding leaf. The \\emph{communication cost} of the protocol is simply the depth of the protocol tree, which is the maximum of $\\cost_\\pi(i,j)$ over all inputs $(i,j)$: \n$$\\cc(\\pi) \\coloneqq \\max_{i,j} \\cost_\\pi(i,j).$$\n\n\n We say that $\\pi$ computes $M$ if $\\pi(i,j)=m_{ij}$ for all $(i,j)$, where $\\pi(i,j)$ denotes the protocol's output on the input $(i,j)$. 
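The protocol-tree view above can be made concrete with a small sketch (ours, purely illustrative): a node either holds a fixed leaf output or an owner together with the Boolean function prescribing the bit sent there, and evaluating the tree on an input $(i,j)$ yields both the output and $\cost_\pi(i,j)$.

```python
class Node:
    """A protocol-tree node: either a leaf with a fixed output bit, or an
    internal node owned by Alice ('A') or Bob ('B') whose function f maps
    the owner's input to the bit sent at that node."""
    def __init__(self, owner=None, f=None, zero=None, one=None, leaf=None):
        self.owner, self.f = owner, f
        self.zero, self.one = zero, one
        self.leaf = leaf

def run_protocol(root, i, j):
    """Walk the tree on input (i, j); return (output, bits exchanged)."""
    node, bits = root, 0
    while node.leaf is None:
        b = node.f(i) if node.owner == "A" else node.f(j)
        node = node.one if b else node.zero
        bits += 1
    return node.leaf, bits

# A cost-2 protocol for the 2x2 AND matrix m_ij = i AND j: Alice announces
# i; Bob then sends the output bit (0 if Alice sent 0, otherwise his bit j),
# respecting the convention that the last communicated bit is the output.
leaf0, leaf1 = Node(leaf=0), Node(leaf=1)
bob_zero = Node(owner="B", f=lambda j: 0, zero=leaf0, one=leaf1)
bob_copy = Node(owner="B", f=lambda j: j, zero=leaf0, one=leaf1)
root = Node(owner="A", f=lambda i: i, zero=bob_zero, one=bob_copy)
```

Here the depth of the tree, and hence $\cc(\pi)$, is 2 for every input, matching the cost definition above.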
The \\emph{deterministic communication complexity} of $M$, denoted by $\\DD(M)$, is the smallest communication cost of a protocol that computes $M$. It is easy to see that $\\DD(M)\\le \\ceil{\\log(n)}+1$ as Alice can use $\\ceil{\\log(n)}$ bits to send her entire input to Bob, and Bob knowing the values of both $i$ and $j$, can send back $m_{ij}$. \n\nIt is well-known that if the deterministic communication complexity of a matrix is bounded by a constant $c$, then the matrix is highly structured -- its rank is at most $2^c$, and it can be partitioned into at most $2^c$ all-zero and all-one submatrices~\\cite{MR1426129}. These facts characterize the family of matrices that satisfy $\\DD(M)=O(1)$. A fundamental problem in communication complexity, with connections to harmonic analysis and operator theory~\\cite{Hat21}, is to obtain a characterization of families of matrices that have $O(1)$ \\emph{randomized} communication complexity. \n\nA (public-coin) \\emph{randomized protocol} $\\pi_R$ of cost $c$ is simply a probability distribution over the deterministic protocols of cost $c$. Given an input $(i,j)$, to compute $m_{ij}$, Alice and Bob use their shared randomness to sample a deterministic protocol from this distribution, and execute it. \n\nWe say that the error probability of $\\pi_R$ is at most $\\epsilon$ if $\\Pr[\\pi_R(i,j) \\neq m_{ij}] \\le \\epsilon$ for every input $(i,j)$. For $\\epsilon \\in (0,1\/2)$, let $\\RR_\\epsilon(M)$ denote the smallest cost of a randomized protocol that computes $M$ with error probability at most $\\epsilon$. Note that $\\epsilon=1\/2$ can be easily achieved by outputting a random bit; hence it is crucial that $\\epsilon$ is defined to be strictly less than $1\/2$. It is common to take $\\epsilon=\\frac{1}{3}$. 
Indeed, the choice of $\\epsilon$ is not important as long as $\\epsilon \\in (0,1\/2)$, since the probability of error can be reduced to any constant $\\epsilon'>0$ by repeating the same protocol independently for some $O(1)$ times, and outputting the most frequent output. We denote $\\RR(M) \\coloneqq \\RR_{1\/3}(M).$\n\nIt is well-known that the $n \\times n$ identity matrix $\\mathtt{I}_n$ satisfies $\\RR(\\mathtt{I}_n)\\le 3$ and $\\DD(\\mathtt{I}_n) =\\ceil{\\log(n)}+1$. Hence, in contrast to the deterministic case, there are matrices with $\\RR(M)=O(1)$ that have arbitrarily large rank. \n\nThere are very few known examples of matrix classes that have randomized communication complexity $O(1)$~\\cite{Hat21,harms2021randomized}. Let $\\mathcal M=(M_n)_{n \\in \\mathbb{N}}$ be a sequence of $n \\times n$ Boolean matrices $M_n$, and define $\\RR(\\mathcal M): n \\mapsto \\RR(M_n)$. Let us look at some necessary conditions for $\\mathcal M$ to satisfy $\\RR(\\mathcal M)=O(1)$. \n\nLet $\\cl(\\mathcal M)$ denote the \\emph{closure} of $\\mathcal M$, defined as the set of all square matrices that are a submatrix of some $M_n$. Note that $\\cl(\\mathcal M)$ is the smallest hereditary property that contains all the matrices in $\\mathcal M$, where a set of square matrices is called \\emph{hereditary} if it is closed under taking square submatrices. \n\n\nLet $\\GT_k$ denote the $k \\times k$ \\emph{Greater-Than} matrix defined as $\\GT_k(i,j)=1$ if and only if $i \\le j$. The sequence $\\mathcal M$ is called \\emph{stable}, if there exists $k\\in \\mathbb{N}$ such that $\\GT_k \\not\\in \\cl(\\mathcal M)$. It is well-known~\\cite{MR3439794,RamamoorthyS15} that $\\RR(\\GT_k)= \\Omega(\\log \\log k)$ which tends to infinity as $k$ grows. Hence, if $\\RR(\\mathcal M)=O(1)$, then $\\mathcal M$ must be stable. 
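The bound $\RR(\mathtt{I}_n)\le 3$ mentioned above rests on the folklore public-coin equality test: in each round, Alice sends the parity of a random subset of her input's bits and Bob compares it with the corresponding parity of his own. The sketch below (the interface is our own) has one-sided error at most $2^{-\text{trials}}$ and costs $\text{trials}+1$ bits, counting Bob's final answer bit.

```python
import random

def equality_protocol(i, j, n, trials=2, rng=None):
    """Public-coin protocol for the identity matrix I_n. Per trial, Alice
    sends one bit: the inner product (mod 2) of her input's bits with a
    shared random 0/1 vector; Bob compares it with his own inner product.
    Never errs when i == j; errs with probability 2^-trials when i != j."""
    rng = rng or random.Random()
    m = max(n - 1, 1).bit_length()          # bits needed to encode inputs
    bi = [(i >> k) & 1 for k in range(m)]
    bj = [(j >> k) & 1 for k in range(m)]
    for _ in range(trials):
        r = [rng.randint(0, 1) for _ in range(m)]
        if sum(x * y for x, y in zip(bi, r)) % 2 != \
           sum(x * y for x, y in zip(bj, r)) % 2:
            return 0                        # certainly i != j
    return 1                                # i == j with high probability
```

With `trials=2` this is a cost-3 protocol with error at most $1/4 < 1/3$, matching the stated bound for $\mathtt{I}_n$.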
The term stability is coined due to Shelah's unstable formula theorem in model theory which characterizes unstable theories by the nonexistence of countably infinite half-graphs~\\cite{MR1083551}, where half-graphs are the graphs with biadjacency matrix $\\GT_k$ for some $k$. Stable graph families are known to have useful properties such as strong regularity lemmas~\\cite{malliaris2014regularity} and the Erd\\H{o}s-Hajnal property~\\cite{chernikov2018note}. \n\nThe second necessary condition for $\\RR(\\mathcal M)=O(1)$ follows from a bound on the number of matrices with $O(1)$ randomized communication complexity. A standard derandomization argument, \\cref{prop:CountR}, shows that the number of such $n\\times n$ matrices is bounded by $2^{O(n \\log n)}$. Consequently, if $\\RR(\\mathcal M)=O(1)$, then $|\\cl(\\mathcal M)_n|\\leq 2^{O(n \\log n)}$, where $\\cl(\\mathcal M)_n$ denotes the set of $n \\times n$ matrices in $\\cl(\\mathcal M)$. Thus, in the terminology of graph theory~\\cite{MR1490438,MR1769217}, the speed of growth of $|\\cl(\\mathcal M)_n|$ is at most \\emph{factorial}. \n\nInspired by the \\emph{Implicit Graph Conjecture}~\\cite{MR1186827} and its connection to the growth rate of hereditary graph properties, Harms, Wild and Zamaraev~\\cite{harms2021randomized} formulated a probabilistic version of the Implicit Graph Conjecture, which translates to the following statement in communication complexity (See \\cite[Conjecture 1.2 and Proposition 1.6]{harms2021randomized}). \n\n\\begin{conjecture}[Probabilistic Universal Graph Conjecture~\\cite{harms2021randomized}]\n\\label{conj:PUG}\nLet $\\mathcal M$ be a sequence of $n \\times n$ Boolean matrices. Then $\\RR(\\mathcal M)=O(1)$ if and only if $\\mathcal M$ is stable and $|\\cl(\\mathcal M)_n|\\leq 2^{O(n \\log n)}$. \n\\end{conjecture}\n\n\\cref{conj:PUG} speculates that the two necessary conditions for $\\RR(\\mathcal M)=O(1)$ that we discussed above are also sufficient. 
In other words, they characterize Boolean matrices that have randomized communication complexity $O(1)$. It is shown in \\cite{harms2021randomized} that \\cref{conj:PUG} is true for matrix sequences corresponding to restricted classes of hereditary graph families such as monogenic bipartite families, interval graphs, and families of bounded twin-width.\n\n\n\nIn this article, we prove the following theorem which refutes~\\cref{conj:PUG}. \n\n\\begin{theorem}[Main Theorem]\n\\label{thm:main}\nThere exists a stable sequence $\\mathcal M$ of Boolean matrices $(M_n)_{n \\in \\mathbb{N}}$ such that $\\RR(M_n)=\\Theta(\\log(n))$ and $|\\cl(\\mathcal M)_n|\\leq 2^{O(n \\log n)}$. \n\\end{theorem}\nNote that every $n\\times n$ matrix $M$ satisfies $\\RR(M)= O(\\log n)$. In particular, the above construction shows that this maximum is achievable even for stable hereditary matrix families of speed at most factorial. \n\n\n\nFurthermore, as a consequence of \\cref{conj:PUG}, \\cite{harms2021randomized} speculates that the randomized communication complexity of every hereditary property of Boolean matrices $\\mathcal M$ with at most factorial speed has a gap behavior, either $\\RR(\\mathcal M)=O(1)$ or $\\RR(\\mathcal M)=\\Omega(\\log \\log n)$. We refute this weaker conjecture as well. In particular, \\cref{main:thm2}, proved in \\cref{sec:mainthm2}, shows that for every growing function $w(n)<10^{-3}\\log n$, there exists a matrix sequence $\\mathcal M=(M_n)_{n\\in \\mathbb{N}}$ such that $\\RR(M_n)=w(n)$, and every $n^{1\/4}\\times n^{1\/4}$ submatrix $F$ of $M_n$ satisfies $\\RR(F)=O(1)$. As the proof of \\cref{thm:main} demonstrates, if we take $w(n)$ to be any function that is $\\omega(1)$ and $o(\\log \\log (n))$, then $\\cl(\\mathcal M)$ is a hereditary matrix property with factorial speed and $\\RR(\\mathcal M)=\\Theta(w(n))$. \n\nWe present the proof of \\cref{thm:main}, which builds on \\cref{main:thm2}, in \\cref{sec:thmmain}. 
\n\n\\section{Preliminaries}\n\nAll logarithms in this article are in base $2$. For a positive integer $n$, we denote $[n]=\\{1,\\ldots,n\\}$. We use the standard Bachmann-Landau asymptotic notations: $O(\\cdot)$, $\\Omega(\\cdot)$, $\\Theta(\\cdot)$, $o(\\cdot)$, and $\\omega(\\cdot)$. \n\nWe will need the following concentration inequality. \n\n\\begin{theorem}[Bernstein's inequality (See~\\cite{MR3185193})]\\label{thm:bernstein}\nLet $X$ be the sum of $n$ independent random variables $X_1,\\ldots,X_n$ each taking values in the interval $[0,c]$. Then for any $\\delta \\ge 0$, we have\n$$\\Pr[|X-\\Ex[X]| \\ge \\delta] \\le 2 e^{-\\frac{\\delta^2\/2}{\\Var(X)+(c \\delta\/3)}}. $$\n\\end{theorem}\n\nThe Cartesian product $A \\times B$ of two sets $A,B \\subseteq [n]$ is called a \\emph{combinatorial rectangle}. We will need the following lower bound on randomized communication complexity. \n\n\\begin{definition}\nLet $M$ be an $n \\times n$ Boolean matrix, and let $\\mu$ be a probability distribution on $[n] \\times [n]$. The discrepancy of a combinatorial rectangle $R \\subseteq [n] \\times [n]$ under $\\mu$ is defined as\n$$ \\Disc_{\\mu}(M,R) = \\left|\\Pr_{\\mu}[m_{ij}=1 \\text{ and } (i,j) \\in R] - \\Pr_{\\mu}[m_{ij}=0 \\text{ and } (i,j) \\in R]\\right|.$$\nThe discrepancy of $M$ under $\\mu$ is defined as \n$\\Disc_{\\mu}(M) = \\max_{R}\\{\\Disc_{\\mu}(M,R)\\}$,\nwhere the maximum is over all combinatorial rectangles $R$.\n\\end{definition}\n\n\n\\begin{theorem}\\cite[Proposition 3.28]{MR1426129}\n\\label{thm:DiscLower}\nLet $M$ be an $n \\times n$ Boolean matrix, and let $\\mu$ be a probability distribution on $[n] \\times [n]$. Then for every $\\epsilon>0$, \n$$\\RR_{\\frac{1}{2}-\\epsilon}(M) \\ge \\log \\frac{2\\epsilon}{ \\Disc_\\mu(M)}. $$\nIn particular, \n\\begin{equation}\n\\label{eq:discLow}\n\\RR(M) \\ge \\log \\frac{1}{3 \\Disc_\\mu(M)}. 
\n\\end{equation}\n\\end{theorem}\n\\iffalse \nWe start by giving examples of matrix families that satisfy $\\RR(\\mathcal M)=O(1)$. The following lemma generalizes the well-known example of $\\mathtt{I}_n$ to any matrix that has $O(1)$ 1-entries on each row. \n\\begin{lemma}\n\\label{lem:BoundedDegree}\nLet $M$ be a finite Boolean matrix where the number of $1$-entries in each row is at most $r$. Then \n$$\\RR(M) \\le \\lceil \\log(r) \\rceil +3.$$\n\\end{lemma}\n\\begin{proof}\nThe proof is a standard argument based on hashing. Let $m_{ij}$ denote the $ij$-th entry of $M$. Suppose that Alice receives the row number $i$, and Bob receives the column number $j$. Alice knows the set $O_i=\\{j : m_{ij}=1\\}$, and we have $|O_i| \\le r$. Let $k=\\lceil \\log(r) \\rceil +2$. Using the public randomness, together Alice and Bob sample $S_1,\\ldots,S_{k} \\subseteq [n]$ uniformly at random and independently. Then Bob sends a string of $k$ bits indicating which of these sets include $j$. Alice then checks to see if any element in $O_i$ matches this profile. If there is such an element, then Alice declares ``$m_{ij}=1$'', and otherwise she declares ``$m_{ij}=0$.'' \n\nThe number of communicated bits is $k+1 = \\lceil \\log(r)\\rceil + 3$. \nNote that an error can occur only when $m_{ij}=0$ and Alice mistakes $j$ for an element in $O_i$. By applying the union bound over all elements in $O_i$, the probability of error is at most $r 2^{-k} \\le \\frac{1}{4}$.\n\\end{proof}\n\\fi \n\n\nAs discussed in the introduction, stability is a necessary condition for a matrix sequence to satisfy $\\RR(\\mathcal M)=O(1)$. The next proposition proves a second necessary condition: an upper bound on $|\\cl(\\mathcal M)_n|$. \n\n\\begin{proposition}\n\\label{prop:CountR}\nThe number of $n\\times n$ matrices $M$ with $\\RR(M) \\le c$ is $2^{O(2^cn \\log n)}$. \n\\end{proposition}\n\\begin{proof}\nLet $M$ be an $n\\times n$ Boolean matrix with $\\RR(M) \\leq c$. 
For every such $M$, there is a distribution $\\mu_M$ over deterministic protocols $\\pi$ of cost $c$ such that \n$$\n\\Pr_{\\pi\\sim \\mu_M}[M(i,j)=\\pi(i,j)]\\geq \\frac{2}{3} \\qquad \\text{for all $i,j$.}\n$$\nBy the Chernoff bound, the error probability of the protocol can be reduced to strictly less than $\\frac{1}{n^2}$ by taking $O(\\log n)$ independent samples from $\\mu_M$ and outputting the majority outcome. Thus, by a union bound, there exist $t=O(\\log n)$ deterministic protocols $\\pi_1,\\ldots, \\pi_t$, each of cost $c$, such that for every $i$ and $j$, \n\\begin{equation}\\label{eq:derandcounting}\n M(i,j)= \\mathsf{majority}\\{\\pi_1(i,j), \\ldots, \\pi_t(i,j)\\}. \n\\end{equation}\n\nNext, we show that the number of deterministic protocols of cost $c$ is at most $2^{O(2^c n)}$. Every such protocol corresponds to a binary tree of depth at most $c$, which has $O(2^c)$ nodes. Every node is associated with one of the two players, and the communicated bit is determined by the input of the corresponding player according to a function $[n] \\to \\{0,1\\}$. Thus there are $2^{n+1}$ possible choices for each node of the tree. Overall, this bounds the number of such protocols by $2^{O(2^cn)}$. \n\nFinally, since every matrix $M$ can be described in the form of \\cref{eq:derandcounting}, and there are $2^{O(2^cn)}$ choices for each $\\pi_i$, the number of such matrices is at most $2^{O(2^c n \\log n)}$. \n\\end{proof}\n\n\n\n\n\\section{Proof of \\cref{thm:main}}\\label{sec:thmmain}\nThe proof will rely on the following theorem, which involves a probabilistic argument presented in \\cref{sec:mainthm2}.\n\\begin{theorem}\n\\label{main:thm2}\nLet $w:\\mathbb{N}\\to \\mathbb{N}$ be a non-decreasing function satisfying $ w(n) \\to \\infty$ and $w(n)\\le 10^{-3} \\log(n)$. For every sufficiently large $n$, there exists an $n \\times n$ Boolean matrix $M$ with the following properties. 
\n\\begin{enumerate}[label=(\\roman*)]\n \\item $ \\RR(M) = w(n).$\n \\item Every $n^{1\/4} \\times n^{1\/4}$ submatrix $F$ of $M$ satisfies $\\RR(F) =O(1).$ \n\\end{enumerate}\n\\end{theorem}\n\n\nLet $w(n)= \\floor{10^{-3}\\log(n)}$, and for every sufficiently large $n$, let $M_n$ be the matrix that is guaranteed to exist by \\cref{main:thm2}. For smaller values of $n$, let $M_n$ be an arbitrary $n \\times n$ Boolean matrix and let $\\mathcal M$ denote the corresponding sequence. By \\cref{main:thm2}~(i), we have $\\RR(\\mathcal M)=\\Theta(\\log(n))$, and by \\cref{main:thm2}~(ii), $\\mathcal M$ is stable. \n\nIt remains to bound $|\\cl(\\mathcal M)_n|$. Let $F$ be an $n\\times n$ matrix in $\\cl(\\mathcal M)$. There are two cases to consider:\n\n\\begin{enumerate}\n \\item $F$ is a submatrix of an $M_k$ for $k > n^4$. In this case, by \\cref{main:thm2}~(ii), $\\RR(F) =O(1)$. So by~\\cref{prop:CountR}, the number of such matrices is bounded by $2^{O(n \\log n)}$.\n \\item $F$ is a submatrix of an $M_k$ with $n \\le k \\le n^4$. The number of such matrices is at most \n $$n^4 {n^4 \\choose n}^2 = 2^{O(n \\log n)}.$$\n\\end{enumerate}\n\nWe conclude that the total number of $n\\times n$ matrices in $\\cl(\\mathcal M)$ is $ 2^{O(n \\log n)}$ as desired. \n\n\\section{Proof of \\cref{main:thm2}}\\label{sec:mainthm2}\nWe will use a probabilistic argument to show the existence of an $n \\times n$ matrix $M$ that satisfies $\\RR(M) \\ge w(n)$ and property (ii). Note that modifying a row of a matrix can change its randomized communication complexity by at most $1$. Hence, to guarantee $\\RR(M) = w(n)$, we can replace the rows of $M$ with all-zero rows, one by one, until we achieve $\\RR(M) = w(n)$. We will also show that for our construction, (ii) will remain valid under such modifications. 
\n\nLet $M=[m_{ij}]_{n \\times n}$ be selected uniformly at random from the set of all Boolean $n \\times n$ matrices that have exactly $r=2^{3 w(n)}$ ones in each row. Let $M_1$ and $M_0$ denote the sets of $1$-entries and $0$-entries of $M$, respectively, and let $p=r\/n$. For a combinatorial rectangle $R=A \\times B$ with $|A|=a$ and $|B|=b$, we bound $\\Pr \\left[ \\left| |R \\cap M_1| - abp \\right| > t \\right]$ for $0 \\leq t \\leq nr$.\n\n\nNote that\n$$\\Ex[m_{ij}]= \\frac{r}{n}=p,$$\nand for $(i,j) \\neq (i',j')$, \n$$\\Ex[m_{ij} m_{i'j'}] \\le \\frac{{n-2 \\choose r-2}}{{n \\choose r}} \\le p^2=\\Ex[m_{ij}]\\Ex[m_{i'j'}].$$ \n\n\nIt follows that \n$$\\Ex [ |R \\cap M_1| ]= pab,$$\nand\n$$\\Var(|R \\cap M_1|) \\le abp(1-p) \\le abp \\le nr.$$ \n\nApplying Bernstein's inequality (\\cref{thm:bernstein}), for every $0 \\le t \\le nr$, we have \n\\begin{equation}\n\\label{eq:Bernstein}\n\\Pr \\left[\\left| |R \\cap M_1|-abp \\right| > t \\right] \\le 2 e^{-\\frac{t^2\/2}{nr+ t}} \\le 2 e^{-\\frac{t^2}{4nr}}. \n\\end{equation}\n\nDefine the probability distribution $\\mu$ on $[n] \\times [n]$ as \n\t$$\\mu:(i,j) \\mapsto \\left\\{ \n\t\\begin{array}{lcr}\n\t\\frac{1}{2rn} & \\qquad & m_{ij}=1 \\\\\n\t\\frac{1}{2(n-r)n}& & m_{ij}=0 \\\\\n\t\\end{array}\n\t\\right. .$$\n\nNote that $\\mu$ is defined so that it assigns the total measure of $\\frac{1}{2}$ uniformly to each of $M_0$ and $M_1$. 
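As a quick sanity check on this normalization (our own illustration, not part of the paper), the following sketch verifies that $\mu$ is a probability distribution that puts mass $1\/2$ on the $1$-entries and $1\/2$ on the $0$-entries; a circulant matrix stands in for the random $M$, since it has the same row sums:

```python
# Sanity check (our illustration): mu puts total mass 1/2 on the
# 1-entries and 1/2 on the 0-entries of any matrix with exactly r ones
# per row.  A circulant matrix stands in for the random M of the text.
n, r = 16, 4
M = [[1 if (j - i) % n < r else 0 for j in range(n)] for i in range(n)]
assert all(sum(row) == r for row in M)  # exactly r ones in each row

p_one = 1.0 / (2 * r * n)         # mu on each 1-entry
p_zero = 1.0 / (2 * (n - r) * n)  # mu on each 0-entry

mass_one = sum(p_one for i in range(n) for j in range(n) if M[i][j] == 1)
mass_zero = sum(p_zero for i in range(n) for j in range(n) if M[i][j] == 0)

assert abs(mass_one - 0.5) < 1e-12   # total mass on M_1
assert abs(mass_zero - 0.5) < 1e-12  # total mass on M_0
assert abs(mass_one + mass_zero - 1.0) < 1e-12
```

The check only uses that the matrix has $rn$ ones in total, which is all the normalization of $\mu$ depends on.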
Then, \n\\begin{eqnarray*}\n\\Disc_\\mu(R)&=& \\left|\\frac{|R \\cap M_1|}{2rn}- \\frac{|R \\cap M_0|}{2(n-r)n}\\right|= \\left|\\frac{|R \\cap M_1|}{2rn}- \\frac{|R|-|R \\cap M_1|}{2(n-r)n}\\right| \\\\\n&=&\\left|\\frac{n|R \\cap M_1|- r|R|}{2nr(n-r)}\\right|= \\left|\\frac{|R \\cap M_1|- p|R|}{2r(n-r)}\\right|\\le \\left|\\frac{|R \\cap M_1|- abp}{rn}\\right|.\n\\end{eqnarray*}\nBy substituting $t=\\epsilon rn$ in \\cref{eq:Bernstein}, we obtain \n$$\\Pr\\left[\\Disc_\\mu(R) \\ge \\epsilon \\right] \\le 2 e^{\\frac{-r^2n^2 \\epsilon^2}{4nr}}\\le 2 e^{\\frac{-rn \\epsilon^2}{4}}.$$\nBy applying the union bound over all the $2^{2n}$ possible rectangles, and taking $\\epsilon=\\frac{3}{\\sqrt{r}}$, we obtain that for sufficiently large $n$, \n$$\\Pr[\\Disc_\\mu(M) \\ge \\epsilon] \\le 2^{2n}\\times 2 e^{-\\frac{nr\\epsilon^2}{4}} \\le 2^{\\frac{-n}{4}+1} \\le \\frac{1}{10}. $$\nSubstituting $r=2^{3w(n)}$, and applying the discrepancy lower bound of \\cref{eq:discLow}, we obtain that for sufficiently large $n$, \n$$\\Pr\\left[\\RR(M) \\le w(n)\\right] \\le \\Pr\\left[\\RR(M) \\le \n\\log\\frac{\\sqrt{r}}{9}\\right]=\\Pr\\left[\\RR(M) \\le \n\\log\\frac{1}{3\\epsilon}\\right] \\le \\Pr[\\Disc_\\mu(M) \\ge \\epsilon] \\le \\frac{1}{10}. $$ \n \n\n\n\\paragraph{Verifying (ii):} Let $k=n^{1\/4}$. We first prove that with probability $1-o(1)$, for every $a,b\\le k$, every $a \\times b$ submatrix of $M$ contains a row or a column with at most two $1$'s. Note that the statement is trivial when $\\min(a,b)\\le 2$, and hence, we fix $a,b> 2$. 
\n\nIf $a \\le b$, then the probability that there is an $a \\times b$ submatrix such that each of its $b$ columns contains at least three $1$'s is bounded by\n$$\n\\binom{n}{a}\\binom{n}{b} \\left(\\binom{a}{3} p^3 \\right)^b \\le n^a n^b (a^3p^3)^{b} \\le (n^2 p^3 b^3)^b\\le \\left(\\frac{r^3}{n^{1\/4}} \\right)^b\\le \\left(\\frac{n^{0.03}}{n^{1\/4}} \\right)^b \\leq o(n^{-1\/2}),\n$$ \nwhere we used $r=2^{3 w(n)} \\le n^{0.01}$, $a \\le b \\le k = n^{1\/4}$, and $b \\ge 3$. If $a > b$, then the probability that there is an $a \\times b$ submatrix such that each of its $a$ rows contains at least three $1$'s is bounded by \n$$\n\\binom{n}{a}\\binom{n}{b} \\left(\\frac{\\binom{b}{3} {n-3 \\choose r-3}}{{n \\choose r}} \\right)^a \\le n^a n^b (b^3p^3)^{a} \\le (n^2 p^3 b^3)^a\\le \\left(\\frac{r^3}{n^{1\/4}} \\right)^a\\leq o(n^{-1\/2}). \n$$ \nThus by a union bound over all choices of $a,b\\leq k$, the probability that there is $a,b \\in [k]$ and an $a \\times b$ submatrix where every column or row contains at least three $1$'s is bounded by $o(k^2n^{-1\/2})$, which is $o(1)$ as desired. \n\n\nNow suppose that every $a \\times b$ submatrix $F$ of $M$ contains a row or a column with at most two $1$'s. We will show that in this case, every such $F$ is the biadjacency matrix of a bipartite graph that can be written as the union of two forests. Consider a row (or a column) with at most two $1$'s, and let $e_1$ and $e_2$ be the edges corresponding to these (at most) two entries. Removing this row from $F$ will result in a smaller submatrix, which, by the induction hypothesis, can be written as the union of two forests $\\mathcal F_1$ and $\\mathcal F_2$. Now $F$ can be decomposed into the union of two forests $\\mathcal F_1 \\cup \\{e_1\\}$ and $\\mathcal F_2 \\cup \\{e_2\\}$. \n\nThe bound $\\RR(F)=O(1)$ follows by first observing that each forest is an edge-disjoint union of two graphs, each a vertex-disjoint union of stars. 
Hence, it suffices to show that the biadjacency matrix of any vertex-disjoint union of stars has $O(1)$ randomized communication complexity. Suppose that $G$ is a union of vertex-disjoint stars $S_1,\\ldots, S_k$. Alice receives $u\\in V(G)$ and Bob receives $v\\in V(G)$, and they want to decide whether $(u,v)\\in E(G)$, which is equivalent to whether $u$ and $v$ belong to the same star. To solve this problem, Alice maps her input $u$ to the index $i$ such that $u \\in S_i$. Similarly, Bob maps $v$ to $j$ such that $v \\in S_j$. Now they can use the randomized communication protocol for $\\mathtt{I}_k$ to check whether $i=j$. This verifies (ii).\n\nFinally, note that if $F$ is a union of two forests, then replacing a row of $F$ with an all-zero row will not violate this property. \n\\bibliographystyle{alpha}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nInteresting phenomena emerge in the population dynamics\n in heterogeneous environments. For example, experimental and theoretical\n studies have shown that spatial heterogeneity accelerates\n the emergence of drug resistance \n\\cite{PhysRevLett.105.248104,zhang2011acceleration} and solid tumor evolution in heterogeneous microenvironments\n \\cite{zhang2012physics}. On a larger scale, heterogeneity\n plays a central role in population biology of infectious \ndiseases \\cite{PhysRevLett.99.148701} \nand emerges in the development\n of large physics projects, such as ATLAS, CERN \\cite{Turtscher}. Finally, in heterogeneous environments,\nevolved networks are modular when there are local extinctions\n \\cite{kashtan2009extinctions}.\n\nPopulations experience heterogeneous environments during\n migration. 
Migration can occur in different dimensions:\n for example, cells undergo one-dimensional, \ntwo-dimensional, or three-dimensional migration \\cite{doyle2009one}.\n In two-dimensional or three-dimensional migration, \nthe environmental gradient can additionally\nbe distinct in the different directions.\n For example, in the case of human migration, \nthe north-south direction has a greater environmental \ngradient than does the east-west direction \\cite{diamond1997guns}.\nThis heterogeneity is important in simulating human dispersal in \nthe Americas \\cite{steele2009human}.\n In the east-west direction, food production spread from southwest Asia \nto Egypt and Europe at about $0.7$ miles per year around 5000 BC, while in \nthe north-south direction, it spread \nnorthward in the American continent at about $0.2$ to $0.5$\n miles per year around 2000 BC \\cite{diamond1997guns}. This spread is on the same\n order as the velocity of human migration, so we estimate \nthat the human migration velocity in the east-west direction is\n about $2$ to $3$ times faster than in the north-south direction.\nPrevious work has generated detailed migration paths using geographical \ndata \\cite{anderson2000paleoindian} as well as results that match existing archaeological evidence \nwell after considering spatial and temporal variations \\cite{steele2009human}. \nWe do not try to generate a detailed map of human migration in this paper. \nInstead, we use a general model to generate east-west north-south asymmetry \nand study the role of a modular knowledge system.\n\nKnowledge of local environments, such as \neffective agricultural or animal husbandry techniques,\nwas vital to the survival of these early migrants \\cite{diamond1997guns}.\nEvolutionary epistemology views the \ngaining of knowledge as an adaptive process with blind \nvariation and selective retention \\cite{campbell1960blind}. 
\nCommunication of\n knowledge between individuals is also \nan efficient means to spread this discovered, locally\nadapted knowledge \\cite{mithen1996preshistory}. Similarly, models of social \nlearning theory stress the importance of social learning in the spread of \ninnovations \\cite{kandler2010social}. Here we\nmodel the adaptation of a population to the local environment\n using an evolutionary model with natural selection, mutation and communication. \nThe knowledge of an individual\n determines his or her fitness. Evolutionary psychology \nand archeology posit that the human mind is modular \\cite{steele1996weak},\n and that this modularity is shaped by evolution \n\\cite{tooby1995psychological} and facilitates understanding of local environments\n\\cite{mithen1996preshistory}. Conjugate to this modularity\n must be dynamical exchange of corpora of knowledge between individuals \\cite{Goldenfeld2011,AR}. \n\n\\section{Methods}\n\n\\begin{table}[h]\n\\caption{Symbols used in this paper}\\label{Table1}\n\\begin{tabular}{|c|c|}\n\\hline\nSymbol & Meaning \\\\\\hline\n$\\chi$ & Similarity between adjacent environments \\\\\\hline\n $v$ &Emigration velocity \\\\\\hline\n $t$ & Emigration time\\\\ \\hline\n$N$ &Number of individuals in one environment\\\\\\hline\n$N^*$ &Carrying capacity of one environment\\\\\\hline\n$N_0$ &Initial population size of one environment\\\\\\hline\n$f$ & Fitness\\\\\\hline\n$f^*$ &Fitness threshold\\\\\\hline\n$J$ & Interaction matrix\\\\\\hline\n$\\Delta$ & Connection matrix\\\\\\hline\n $K$ & Number of modules in a sequence\\\\\\hline\n$l$ & Module size \\\\\\hline\n$\\mu$ & Mutational rate\\\\\\hline\n$\\nu$ & Knowledge transfer rate\\\\\\hline\n$d$ &Genetic distance\\\\\\hline\n $S$ & A whole sequence\\\\\\hline\n$s$ & One locus in a sequence \\\\\\hline\n$L$ & Length of one sequence\\\\\\hline\n$M$ & Modularity\\\\\\hline\n\\end{tabular}\n\\end{table}\n\nTable \\ref{Table1} shows the symbols in this paper. 
\nThe observed emigration time and asymmetry of emigration \ntime are critical in the determination of the values of \nthese parameters.\nWe consider migration in random, asymmetric, \nmodularly correlated environments. \nWe use $9 \\times 25$ correlated, random environments, \nwhere $25$ is the number of environments\n in the north-south direction at the same longitude \\cite{kottek2006world}, \nand $9$ is chosen so that \n$9\/25$ is approximately the ratio of the east-west to \nnorth-south dimension of the\n Americas. See Fig.\\ \\ref{Fig1} for an illustration,\n where each square block corresponds to an environment.\n\nEach individual \n$\\textbf{a}$ has a fitness $f_\\textbf{a}$, as well as a sequence\n $S^\\textbf{a}$ that is composed of $L$\nloci, $s_i^\\textbf{a}$, representing the knowledge of the \nindividual. Fitness describes reproductive success and is proportional to\nthe reproduction rate. For simplicity, we take $s_i^\\textbf{a} = \\pm 1$.\nWe first consider a linear fitness landscape, later\ngeneralizing to an interacting landscape:\n\\begin{eqnarray}\nf[S] &=& 2 L + H[S]\n\\nonumber \\\\\nH[S] &=& \\sum_{i} s_i J_{i} \n\\label{Eq2}\n\\end{eqnarray}\nwhere $J_{i}$ is a quenched, Gaussian random interaction parameter, \nwith variance $\\sqrt 2$, and the offset $2L$ is chosen\nso that fitness is non-negative, since $H_{\\min}$ is $-2 L \/\\sqrt\\pi$. \nFor a given instance of the model, the interaction parameters $J_{i}$ \nare randomly chosen and then fixed for that instance of the model.\nWhen for each $i$ from $1$ to $L$, $s_i J_i>0$, the fitness reaches \nits highest value, and natural selection selects the sequence with \nthe best configuration.\n\nThe fitness of the population is influenced by the environment, \nquantified by interaction parameters $J$, describing the interaction between \n knowledge element $i$ of the individuals and the environment\n(see also Eq.\\ \\ref{Eq2} above). 
The \ninteraction parameters $J$\nin two adjacent environments, $J$ and $J'$, are correlated,\n\\begin{equation}\\label{Eq1}\n\\langle J_{i} J'_{i}\\rangle\/\\langle J_i^2\\rangle = \\chi\n\\end{equation}\nwhere $\\chi = \\chi_{\\rm{EW}}$ if the two have the same latitude,\n and $\\chi = \\chi_{\\rm{NS}}$ if they have the same longitude. The\n smaller the $\\chi$, the larger the environmental gradient. Here $0<\\chi<1$,\nand $\\chi_{\\rm{NS}} < \\chi_{\\rm{EW}}$, since the environmental gradient in the\nnorth-south direction is more dramatic \\cite{diamond1997guns}. \n\nIn each environment, we use a Markov process to describe the\n evolutionary dynamics, including replication with rate $f$, mutation\nwith rate $\\mu$ coming from discovering new knowledge\n through trial and error, and transfer of a corpus of knowledge\nof length $L\/K$ with rate $\\nu$. When individuals reproduce, \nthey inherit the knowledge and genes from their parent without error.\nBoth mutations and knowledge transfers are random, and they do not\ndepend on the fitness of individuals.\n The relative rates of replication, mutation, and transfer\n are $f$, $\\mu L$, and $\\nu K$, respectively, so on average each individual\n makes $\\mu L\/f \\approx \\mu\/2$ mutations, \nas $f \\approx 2L$ at the short times for which these populations evolve,\nand $\\nu K\/f \\approx \\nu K\/(2L)$ \nknowledge transfers per lifetime of an individual.\nWe set the information sequence length $L=100$.\nDiscovery of new facts, represented by mutation, changes\none site, or 1\\% of the knowledge of an individual,\nwhereas knowledge transfer changes $1\/K$ of the knowledge.\nDiscovery of new facts should be rare,\nand in our simulation \nwe set $\\mu=0.5$, so that approximately one-quarter of the \nindividuals attempt to make a \ndiscovery through trial and error during their \nlifetimes.\nWe consider $K=5$ corpora \nof knowledge. 
Transfer of one corpus, for example, could be one farmer\nattempting to communicate to another farmer how to grow a new crop in a new environment. \nKnowledge transfer must be\nrare, so we set $\\nu=6$, so that roughly $\\nu K \/ (2 L) \\approx 1\/7$ \nof the individuals attempt a\nknowledge transfer process during their lifetimes. \nWe additionally consider various values of $\\nu$ in \nthis work to investigate the coupling of $\\nu$ to modularity.\nSelection is based on the fitness of the knowledge, and\nit determines the utility of these mutation and knowledge transfer\nevents.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.85]{fig1.eps}\n\\end{center}\n\\caption{\\label{Fig1}\nPopulation distribution half way through \nthe migration. Color\n indicates density of the population in each environment. The maximum \ncapacity of each environment is $N^*=10000$. Initially there are \n$1000$ individuals in the top center environment $(1,5)$, and no individuals in\n other environments. Here $\\chi_{\\text{EW}}=0.8$, \n$\\chi_{\\text{NS}}=0.4$, $f^*-2L=0.3 L$, $L=100$, $\\mu=0.5$, \n$\\nu=6$, and $K=5$. Density was averaged over $24$ runs.\n}\n\\end{figure}\n\nThe dynamics of migration are described by a\nMarkov process, whose master equation\nis detailed in the Appendix.\nInitially, one of the \nenvironments with the highest latitude is occupied by $1000$ \nindividuals with random sequences, as Native Americans are believed\n to have entered the Americas through Alaska in the north. Since\n the population migrates from north down to south, we only allow \nmigration to the east, west, and south. In each environment, the \npopulation evolves according to the Markov dynamics.\n\nThe qualitative behavior of the migration depends on the\ncarrying capacity, $N^*$, and the fitness threshold, $f^*$. The carrying capacity is defined \nas the maximum population load of an environment \\cite{hui2006carrying}. 
\nAfter the population size reaches $N^*$, we randomly kill an individual every time another \nindividual reproduces, as described in detail in Eq.\\ \\ref{Eq8}. As a result, the total \nnumber of individuals does not exceed $N^*$.\nThe initial colonization of the Americas \noccurred before the Common Era, for which there are no reliable population data. \n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.7]{fig2.eps}\n\\end{center}\n\\caption{\\label{Fig2}\nAsymmetry in emigration times $t_{\\text{NS}}\/t_{\\text{EW}}$ for different $N^*$\n and $f^*$.\nUpper left, phase diagram of the linear model, Eq.\\ \\ref{Eq2}.\nUpper right, phase diagram of the quadratic model, Eq.\\ \\ref{Eq5}, with $M=1$.\nLower left, phase diagram of the quadratic model, with $M=1\/2$.\nLower right, phase diagram of the quadratic model, with $M=0$.\nOther parameters are as in Fig.\\ \\ref{Fig1}.\nThe color indicates the asymmetry in emigration times:\n$t_{\\rm NS} \/ t_{\\rm EW}$. \nThere are three phases, with the boundaries denoted by the two curves.\nThe steady-state fitness dynamics, $f(t)$ vs $t$, of the right \nphase and the left phase are shown in inset. The fitness dynamics of the middle\nphase in the north-south direction follows that of the upper inset, \nand in the east-west direction\n follows that of the lower inset. The phase boundaries are given approximately by\nequating the times in Eq.\\ \\ref{Eq3} for the north-south (left) or \neast-west (right) migration directions for linear model and $M=1$ quadratic model.\nThe model for human migration has $N^* = 10000$ and $f^*-2L = 30$.\n}\n\\end{figure}\n\nIt is estimated that there were seven million people in the Americas at the start \nof the Common Era \\cite{maddison2007world}, corresponding to $7000000\/(25\\times 9)=31111$ \nindividuals in each environment. 
We choose the carrying capacity to be $N^*=10000$, less\nthan $31111$, reflecting that the population size was smaller at the earlier time of initial \npopulation expansion. \nWe show the results for various $N^*$ in Fig.\\ \\ref{Fig2}. \nWe introduce the fitness threshold, $f^*$, \nbecause individuals need to be well prepared before emigrating to the next environment. \nFor example, young male ground squirrels appear to disperse after attaining a threshold \nbody mass \\cite{nunes1996mass}, and dispersing males tend to have a greater body fat \npercentage \\cite{nunes1996mass}. The increased body mass and fat \npercentage are thresholds required for migration. Similarly, naked mole-rats migrate more \nfrequently after body mass reaches a certain value \\cite{o1996dispersive}. \nIt is possible that some individuals try to emigrate without reaching the fitness \nthreshold when the local population size reaches the environmental capacity. \nHowever, they are not fit enough to colonize the new environment. \nThus, we employ a fitness threshold in our approach, and allow no emigration \nbefore the average fitness value reaches $f^*$. \nWhen the population size reaches $N^*$ and \nthe average fitness reaches $f^*$ in an environment, we \nmove $N_0=1000$ randomly chosen individuals to one of the unoccupied adjacent\n environments. Fitter individuals\nmay be more likely to migrate since they are physically better prepared to\nmigrate, while on the other hand less fit individuals may have more desire\nto migrate since they do not live well in the current environment. We randomly \nchoose individuals to migrate because of this ambiguous relationship between \nfitness and migration.\nIf we move fitter individuals instead of \nrandomly chosen individuals, the\ninitial fitness of the individuals in the new environment will be higher.\nThus, the effective $\\chi$ would be higher. 
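The emigration rule just described can be summarized in a short sketch (our own code, not the authors' simulation; the function and constant names are hypothetical):

```python
import random

# Sketch of the emigration rule described in the text (our own code;
# names are hypothetical).  An environment releases N0 = 1000 randomly
# chosen migrants only once it holds N* = 10000 individuals AND its
# average fitness has reached the threshold f* = 2L + 0.3L.
L, N_STAR, N0 = 100, 10000, 1000
F_STAR = 2 * L + 0.3 * L

def try_emigrate(fitnesses, rng=random):
    """fitnesses: list of individual fitness values in one environment.
    Returns (migrants, stayers), or (None, fitnesses) if the population
    is not yet ready to emigrate."""
    if len(fitnesses) < N_STAR:
        return None, fitnesses           # below carrying capacity
    if sum(fitnesses) / len(fitnesses) < F_STAR:
        return None, fitnesses           # below fitness threshold
    # Migrants are chosen uniformly at random, not by fitness.
    chosen = set(rng.sample(range(len(fitnesses)), N0))
    migrants = [f for k, f in enumerate(fitnesses) if k in chosen]
    stayers = [f for k, f in enumerate(fitnesses) if k not in chosen]
    return migrants, stayers
```

The uniform choice of migrants encodes the ambiguous relationship between fitness and the propensity to migrate discussed above.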
\nThe time required for a population to emigrate from\n an environment is denoted by the emigration time, $t$, \n and the emigration velocity $v$ is defined as $v=1\/t$. The emigration time \nof an environment is the time from the arrival of the first individuals to \nthe departure of the first individuals.\n\nTo compare our results with current human genetic data, we assign to each \nindividual another sequence $S'$, also composed of $L$ loci, and each locus\ncan take values $\\pm 1$. These sites correspond to autosomal microsatellite \nmarker genotype data \\cite{wang2007genetic}, with which we will compare our results later in this paper. \nThe traits of the genetic data are neutral in \nour model. That is, the values of the loci in the sequence $S'$ have no \neffect on the fitness. The genetic sequence mutates at a rate $\\mu'$. \nWhen an individual reproduces, both the knowledge sequence and the \ngenetic sequence reproduce.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.5]{fig3.eps}\n\\end{center}\n\\caption{\\label{Fig3} Emigration time versus $1-\\chi$ \n for the linear model and the quadratic model with various modularities.\n Here $\\chi_{\\rm EW} = \\chi_{\\rm NS} = \\chi$, and other\n parameters are as in Fig.\\ \\ref{Fig1}. }\n\\end{figure}\n\nWe set the time scale in our simulation by the observation that\n Native Americans spent about $10000$ years to migrate from the northern\n tip of the American continent to the southern tip \\cite{goebel2008late}, experiencing \nabout $25$ climate zones \\cite{kottek2006world}, so migration\n to a new environment occurred roughly every $400$ years, i.e., roughly every $20$ \ngenerations. In our simulation, we define a generation as the time\n period during which, on average, each individual is replaced by \nanother individual. We find that the population migrates approximately\n once per $20$ generations when $\\chi_{\\text{EW}}=0.8$, \n$\\chi_{\\text{NS}}=0.4$, and $f^*-2L=0.3L$. 
One can estimate \nhow many generations it takes to migrate to the next environment. The rate of \nchange of fitness at short times roughly follows \\cite{park2015modularity},\n\\begin{equation}\ndf\/dt = 2L.\n\\end{equation}\nSince $\\Delta f=0.6\\times(f^*-2L)=0.18\\times 2L$ for migration from the north \nor \n$\\Delta f=0.2\\times(f^*-2L)=0.06\\times 2L$ for migrating \nfrom the east or west, the emigration time is $0.18$ or $0.06$ \ndepending on the origin of migration, and this is consistent with Fig.\\ \\ref{Fig3}.\nWe use $\\Delta t=0.1$ as a rough estimate of the emigration time.\nTo convert this time in our simulation to a number of human replications, we consider \nthat one replication takes around $dt=1\/f=1\/2L$ time, so one emigration takes \n$\\Delta t\/dt=20$ generations.\nTo compare the genetic data with \ncurrent human data, we allow all \nenvironments to evolve for another $10000$ \nyears after all environments are occupied, without migration between environments. \nWe assume no gene flow between these environments, since previous work \\cite{ramachandran2011test}\nassumes that the asymmetry\nin the genetic distance originates from the asymmetry of gene flows in\ndifferent directions. Here we investigate another possible origin of the\nasymmetry of genetic distance, that is, that the asymmetry already existed\nwhen the population colonized the Americas. It is quite possible\nthat both mechanisms help to create this asymmetry, but in order to show that the\ninitial colonizing process itself could generate this asymmetry, we \nsuppress the possibly asymmetric gene flows.\n\n\\section{Results}\\label{sec:results}\n\nIn Fig.\\ \\ref{Fig1} we show a snapshot of the population \ndistribution, approximately halfway through the migration. \nMigration sweeps south\n and spreads both to the east and west.
Migration forms a tilted \nfront, with slope magnitude equal to $v_{\\text{NS}}\/v_{\\text{EW}} = 0.35$,\nindicating that the velocities of migration in \ndifferent directions are different. \n\nIn Fig.\\ \\ref{Fig2}\nwe show the three possible phases for different carrying capacities, $N^*$, and \nfitness thresholds, $f^*$. Different phases correspond to whether \nthe migration is limited by the fitness\nthreshold or the population size threshold. In the left phase, the\npopulation is limited by the population size threshold, and there is\nno east-west north-south asymmetry. In addition, as the population\nmigrates, the maximum fitness value increases since the population is\nallowed to evolve further after reaching the fitness threshold, as shown\nin the left inset of the upper right figure. In the middle phase, the\nmigration in the east-west direction is limited by the population size\nthreshold, while the migration in the north-south direction is limited\nby the fitness threshold. The maximum fitness value increases as the\npopulation migrates in the east-west direction, but in the north-south\ndirection, the maximum fitness value is $f^*$. The degree of the\neast-west north-south asymmetry increases in this phase from the boundary\nwith the left phase to the boundary with the right phase. In the right\nphase, migrations in both directions are limited by the fitness\nthreshold, and the maximum fitness value remains the same as the\npopulation migrates. The east-west north-south asymmetry is approximately\nunchanged in this phase.\nThe boundaries of these phases are determined by\nnoting the times to reach the carrying capacity\nand the fitness threshold:\n\\begin{eqnarray}\nt_{N^*} &=& \\frac{\\ln (N^* \/ N_0) }{2 L}\n\\nonumber \\\\\nt_f &=& \\frac{(f^*-2L) (1-\\chi) }{2 L}\n\\label{Eq3}\n\\end{eqnarray}\nwhere $N_0$ is the initial population of one environment.
\nHere we have used that the evolution of the\nfitness in one generation is small compared to the\noffset $2 L$, and that the evolution within one\nenvironment at steady state is from $\\chi (f^*-2L)+2L$\nto $f^*$ in the rightmost phase.\nThe left phase boundary in Fig.\\ \\ref{Fig2} is\ngiven by the condition $t_{N^*} = t_f$ in the north-south direction, and\nthe right phase boundary is given by $t_{N^*} = t_f$ in the east-west direction. \nWe note that our current choice of parameters is deep in the right phase, \nindicating that the east-west north-south asymmetry is robust to changes of \n$f^*$ or the ratio $N^*\/N_0$.\n\n\nWe determine quantitatively how the environmental gradient \ninfluences the velocity of migration. In Fig.\\ \\ref{Fig3} we show the\n emigration time versus $1-\\chi$, the change between adjacent \nenvironments.\n It is interesting that the emigration time is approximately \nproportional to $1-\\chi$. This occurs because in our simulation \nfor these parameters,\n the population reaches $N^*$ earlier than $f^*$, so the emigration\n time is the time required to reach $f^*$. \nFor our model,\n $f^*-2L = 0.3L$, while $\\max(f-2L)\\approx 2L$, \nso $f^*$ is still far from optimal, and the\n fitness increases linearly with time in the regime we are discussing.\n So $t=\\Delta f \/ v_f = (1-\\chi) (f^*-2L) \/v_f$, where $v_f$ is a \nconstant for a fixed modularity. So $t\\propto 1-\\chi$, and we \nquantify the ratio of the velocities in the two directions as\n\\begin{equation}\\label{Eq4}\n\\frac{v_{\\text{EW}}(M)}{ v_{\\text{NS}}(M)}=\\frac{1\/t_{\\text{EW}}(M)}\n{1\/t_{\\text{NS}}(M)}=\\frac{1-\\chi_{\\text{NS}}}{1-\\chi_{\\text{EW}}}\n\\end{equation}\n\n\nIn the linear model, it is quite easy to evolve the optimal\npieces of knowledge, while in reality, finding the best knowledge\nis difficult at the individual level.
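Eq.\ (4) and the linear dependence $t\propto 1-\chi$ can be checked with a small numerical sketch; this is our own illustration with hypothetical names. With $\chi_{\rm EW}=0.8$ and $\chi_{\rm NS}=0.4$ it predicts $v_{\rm EW}/v_{\rm NS}=3$, i.e.\ $v_{\rm NS}/v_{\rm EW}\approx 0.33$, close to the front slope of $0.35$ quoted earlier.

```python
def emigration_time(chi, f_star_minus_2L, v_f):
    """Time to raise the mean fitness from chi*(f*-2L)+2L to f*, assuming
    linear fitness growth at a constant rate v_f, so t is
    proportional to 1 - chi."""
    return (1.0 - chi) * f_star_minus_2L / v_f

def velocity_ratio(chi_ew, chi_ns):
    """Predicted v_EW / v_NS from Eq. (4): velocities scale as the
    inverse of the emigration times."""
    return (1.0 - chi_ns) / (1.0 - chi_ew)
```

The two functions are consistent by construction: the ratio of inverse emigration times reproduces the closed-form ratio.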
\nWe now show that these results are robust to considering an\ninteracting model, while also demonstrating the significance\nof the modularity order parameter in the interacting model.\nAs finding optimal knowledge\n for a local environment is difficult, the fitness \nlandscape is rugged \\cite{PhysRevLett.99.228107}, and we use\na spin glass to represent the fitness:\n\\begin{eqnarray}\\label{Eq5}\nf[S] &=& 2 L + H[S]\n\\nonumber \\\\\nH[S] &=& \\sum_{ij} s_i s_j J_{ij} \\Delta_{ij}\n\\end{eqnarray}\nwhere $J_{ij}$ is a Gaussian random matrix, \nwith variance $1\/C$.\nThe offset value $2 L$ is chosen by Wigner's semicircle law\n \\cite{wigner1958distribution} so that the\nminimum eigenvalue of $f$ is non-negative. \nThe entries in the matrix $\\Delta$ are zero or one, with probability\n$C\/L$ per entry, so that the average number of connections\n per row is $C$. The optimization of this fitness model is hard\nwhen $L$ is large, and here we give a simple example to show why. \nConsider a case when $J_{ij}>0$, $J_{ik} >0$ and $J_{jk}<0$ \nfor some $i$, $j$ and $k$. To make $J_{ij}s_is_j$ positive and the fitness value \nlarger, $s_i$ and $s_j$ must have the same sign. Similarly, to make $J_{ik}s_is_k$ \nand $J_{jk}s_js_k$ positive, we need $s_k$ to have the same sign as $s_i$, \nand $s_k$ to have a different sign from $s_j$. This indicates that $s_i$ and $s_j$ \nhave different signs, contradicting that $s_i$ and $s_j$ have the same sign. This \nphenomenon is called frustration in physics \\cite{sadoc2006geometrical}, making the \nfitness hard to optimize. Let us illustrate this with an example from the human knowledge \nsystem. Humans developed three pieces of knowledge: the knowledge of the toxicity of \nmushrooms, the knowledge of red food, and the knowledge of apples. The interaction \nbetween the first two pieces of knowledge implies that red food is bad and \nundesirable, while the latter two pieces of knowledge imply the \ncontrary.
As a result, the human knowledge system can be difficult to optimize. \nWe will discuss how modularity helps to reduce this frustration and thus \nmakes it easier to optimize the fitness in section \\ref{sec:discussion}.\n\nWe introduce modularity by an excess of interactions in\n $\\Delta$ along the\n$l \\times l$ block diagonals of the $L \\times L$\nconnection matrix. There are $K$ of these\nblock diagonals, and $K=L\/l$. Thus,\nthe probability of a connection is\n$C_0\/L$ when\n$ \\lfloor i\/l \\rfloor \\ne \\lfloor j\/l \\rfloor$\nand $C_1\/L$ when\n$ \\lfloor i\/l \\rfloor = \\lfloor j\/l \\rfloor$. The number of\nconnections is $C = C_0 + (C_1 - C_0) \/K$, and modularity \nis defined by $M = (C_1 - C_0) \/ (K C)$. In Fig.\\ \\ref{Fig6} we illustrate \nthree $20\\times 20$ matrices with modularities $1$, $0.5$, and $0$, with $C=9$.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.7]{fig6.eps}\n\\end{center}\n\\caption{\\label{Fig6} Illustration of $L=20$ connection matrices with \ndifferent modularities. Left, a completely modular connection matrix, $M=1$. Middle, \na moderately modular connection matrix, $M=0.5$. Right, a non-modular \nconnection matrix, $M=0$.}\n\\end{figure}\n\n\nModularity, coupled with knowledge transfer, accelerates the \nevolution of a population in a new environment \\cite{park2015modularity}.\n We now check how modularity and knowledge transfer influence the\n velocity of migration. For different $M$ and\n $\\nu$, the results are shown in Fig.\\ \\ref{Fig4}. For small $M$, a larger\n $\\nu$ implies a smaller migration velocity, indicating that the transfer of\n(non-useful) knowledge slows down evolution. As modularity increases,\n the migration velocity at larger $\\nu$ catches up with that of smaller\n $\\nu$. At $M=1$, in the range of $\\nu$ shown, \nthe population migrates faster for larger $\\nu$.
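The block-diagonal construction of $\Delta$ and the definition of $M$ can be sketched as follows; this is our own illustrative code with hypothetical names, not the authors' implementation.

```python
import random

def modularity(c0, c1, k):
    """M = (C1 - C0) / (K C), where C = C0 + (C1 - C0)/K is the mean
    number of connections per row."""
    c = c0 + (c1 - c0) / k
    return (c1 - c0) / (k * c)

def connection_matrix(L, k, c0, c1, seed=1):
    """0/1 matrix Delta: entry (i, j) is 1 with probability c1/L inside
    the K = k block diagonals of size l = L // k, and c0/L outside."""
    rng = random.Random(seed)
    l = L // k
    return [[1 if rng.random() < (c1 if i // l == j // l else c0) / L else 0
             for j in range(L)]
            for i in range(L)]
```

Setting $C_0=0$ gives a purely block-diagonal matrix with $M=1$, while $C_0=C_1$ gives uniform connections with $M=0$, matching the two extremes of Fig.\ \ref{Fig6}.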
\n\nWe fit the curves of $v_{\\rm NS}$ versus $M$ for $\\nu\\leqslant 4$ in Fig.\\ \\ref{Fig4} with \nlinear regression, observing $R^2\\geqslant 0.95$, except for\n $\\nu =0$, which has a zero slope and larger noise. We also fit the data\n for $\\nu=1$ and $\\nu=3$, not shown in Fig.\\ \\ref{Fig4}. We show \n$dv_{\\text{NS}}\/dM$ versus $\\nu$\n in the inset to\nFig.\\ \\ref{Fig4}. For $\\nu\\leqslant 4$, the slope is\n proportional to $\\nu$. So, $d v_{\\text{NS}}\/dM=\\alpha_{\\text{NS}} \\nu$, \nand after integration we have,\n\\begin{equation}\\label{Eq6}\nv_{\\text{NS}}=\\alpha_{\\text{NS}} \\nu M + v_{\\text{NS}}^0(\\nu)\n\\end{equation}\nwhere $v_{\\text{NS}}^0(\\nu)$ is determined by the evolutionary load of knowledge transfer. \nThe linearity originates from the perturbative effect of knowledge transfer when $\\nu$ is small. \nNote that for $\\nu=6$, the value used in most of this paper, the linear relationship \nno longer holds, indicating that $\\nu=6$ is large enough to break the linearity.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.62]{fig4.eps}\n\\end{center}\n\\caption{\\label{Fig4} North-south emigration velocity versus modularity for \ndifferent $\\nu$. The lines are linear fits to the data for the corresponding \nknowledge transfer rate.\nThe inset shows $dv_{\\rm NS}\/dM$ versus $\\nu$.\n The dots are data points and the line is a linear fit to the data. \nOther parameters are as in Fig.\\ \\ref{Fig1}. }\n\\end{figure}\n\n\nFrom our model, we make a prediction by calculating \ngenetic distances between populations in different \nenvironments, using the genetic sequence $S'$.
For each pair of environments, we calculate the \nfixation index $F_{\\text{ST}}$ between them using Eq.\\ 5.12 from \\cite{weir1996genetic}:\n\n\\begin{equation}\\label{Eq7}\nF_{\\text{ST}}=\\frac{\\sum_{i=1}^{L}\\left[\\frac{1}{2}\\sum_{j=1}^{2}(p_{ij}-p_{ij}')^2-\\frac{1}{2(2n-1)}\\left(2-\\sum_{j=1}^{2}(p_{ij}^2+p_{ij}'^2)\\right)\\right]}{\\sum_{i=1}^{L}(1-\\sum_{j=1}^{2}p_{ij}p_{ij}')}\n\\end{equation}\nwhere $p_{i1}$ is the probability of the value of locus $i$ being $+1$, and $p_{i2}$ \nis the probability of the value of locus $i$ being $-1$ in the first environment. \n$p_{ij}'$ is the corresponding \nprobability in the other environment. Here $n$ is the sample size drawn from \nthe population to estimate $F_{\\text{ST}}$, and in our case $n=18$, in accordance with the \naverage sample size used in \\cite{ramachandran2011test}.\n\n The east-west distance between \nenvironments $(x_1,y_1)$ and $(x_2,y_2)$ is $d_{\\text{EW}}=|x_1-x_2|$, and the north-south distance is \n$d_{\\text{NS}}=|y_1-y_2|$. We also calculate the heterozygosity of the population\n of environment $a$, defined as\n\n\\begin{equation}\n\\text{het}_{a}=1-\\frac{1}{L}\\sum_{i=1}^L\\sum_{j=1}^2 p_{ij}^2\n\\end{equation}\nwhere $p_{i1}$ and $p_{i2}$ have the same meanings as those in Eq.\\ \\ref{Eq7}. \nEach fixation index $F_{\\text{ST}}$ \nwas regressed onto the mean heterozygosity and the geographic distance, which \ncan be either the east-west distance or the north-south distance. The $R^2$ of the regression \nis around $0.9$. For each pair of environments \n$a$ and $b$, we express the $F_{\\text{ST}}$ as,\n\n\\begin{eqnarray}\nF_{\\text{ST}}&=&c_{\\text{EW}} d_{\\text{EW}}+c_1\\frac{\\text{het}_a+\\text{het}_b}{2}+c_0\\\\\nF_{\\text{ST}}&=&c_{\\text{NS}} d_{\\text{NS}}+c_1'\\frac{\\text{het}_a+\\text{het}_b}{2}+c_0'\n\\end{eqnarray}\n\n The coefficient of the \ngeographic distance term is $c_{\\text{EW}}$ when using the east-west distance, and $c_{\\text{NS}}$ \nwhen using the north-south distance.
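Eq.\ (7) and the heterozygosity definition translate directly into code. The sketch below is our own transcription with hypothetical names; allele frequencies are stored as pairs $(p_{i1}, p_{i2})$ per locus, and it should be checked against Eq.\ 5.12 of \cite{weir1996genetic}.

```python
def heterozygosity(p):
    """het = 1 - (1/L) * sum_i sum_j p_ij^2, with p a list of
    (p_plus, p_minus) allele-frequency pairs, one per locus."""
    L = len(p)
    return 1.0 - sum(p1 ** 2 + p2 ** 2 for p1, p2 in p) / L

def fst(p, q, n):
    """Fixation index between two environments (Eq. 7); p and q are
    lists of (p_plus, p_minus) pairs and n is the sample size."""
    num = 0.0
    den = 0.0
    for (p1, p2), (q1, q2) in zip(p, q):
        num += 0.5 * ((p1 - q1) ** 2 + (p2 - q2) ** 2) \
             - (2.0 - (p1**2 + p2**2 + q1**2 + q2**2)) / (2.0 * (2 * n - 1))
        den += 1.0 - (p1 * q1 + p2 * q2)
    return num / den
```

As a sanity check, two fully differentiated monomorphic populations give $F_{\text{ST}}=1$, and a population with all loci at frequency $1/2$ has heterozygosity $1/2$.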
Their ratio, $r=c_{\\text{NS}}\/c_{\\text{EW}}$, \nindicates the asymmetry of the rate of change of genetic distance.\nFor humans in the Americas, the \nratio is approximately $1.26$ \\cite{ramachandran2011test}. The mutational rates of \ngenetic sequences at which the $F_{\\text{ST}}$ ratio is $1.26$ depend on modularity, \nas shown in Fig.\\ \\ref{Fig5}. The estimated mutation rates of human autosomal microsatellites \nrange from $10^{-4}$ to $10^{-2}$ \\cite{kayser2000characteristics}. In our model, we can \ncalculate the mutational rate per generation $\\mu_g = \\mu'\\times L \/2L$. So for $M=1$ the \nmutational rate is $0.005$ per\nlocus per generation, and for $M=0$ the mutational rate is $0.025$\nper locus per generation. Thus the mutational rate for the $M=1$ case falls within the range \nof experimental results, indicating that the human knowledge system is probably modular.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.62]{fig5.eps}\n\\end{center}\n\\caption{\\label{Fig5} Mutation rate of the genetic sequence at which the $F_{\\text{ST}}$ \nratio is $1.26$ versus modularity. Other parameters are as in Fig.\\ \\ref{Fig1}. }\n\\end{figure}\n\n\\section{Discussion}\\label{sec:discussion}\n\nSo why is having a modular knowledge system so helpful in the human migration\nprocess? A migrating human population must adapt its knowledge quickly. \nNew knowledge is generated through trial\n and error (mutation, $\\mu$). Communication (knowledge corpus transfer, $\\nu$)\n propagates useful new knowledge in the population. \nIf the knowledge system is non-modular, however, communication \ncauses confusion. This is because the transfer of an $L\/K$ segment by itself \n does not transfer useful information in a non-modular knowledge system. \nFor example, a hunter can teach\n a wood gatherer how to hunt, including how to make stone arrowheads.
If the knowledge \nsystem of the wood gatherer is non-modular, the hunting module can interact with \nthe wood gathering module, and the wood gatherer may wrongly believe that arrow-shaped \ntools could also work for cutting trees, and replace his or her ax with arrows. For a \nmodular knowledge system, this frustrating confusion will not happen, and modularity \nreduces frustration.\nSo if the knowledge system is modular, the population can take advantage\n of faster knowledge communication, while if the system is non-modular,\n knowledge communication can cause confusion and is deleterious\nbetween individuals with different specializations.\n\nFor a population with a modular knowledge \nsystem, a smaller mutational rate of genes creates the same $F_{\\text{ST}}$ ratio, \nso the evolutionary rate is higher than that of the non-modular counterpart when the mutational \nrates are the same. The population \nwith a modular knowledge system evolves faster, and from Fisher's fundamental theorem of \nnatural selection \\cite{fisher1930genetical} we expect that the genetic diversity is\n higher in the more rapidly evolving population.\n\nWhy does environmental heterogeneity create an asymmetry of genetic distance in \ndifferent directions, even if environmental change does not directly \ninfluence genes in our model? For a\npopulation migrating in the north-south direction, the new environment poses severe challenges \nto the immigrants, and fewer founders may survive compared to an east-west migration. \nThis founder effect \nincreases the genetic distance between the immigrant population and the population \nthey originate from \\cite{hedrick2011genetics}. \nFor a population migrating in the east-west direction, much milder \nenvironmental changes largely reduce the founder effect, thus reducing the genetic \ndistance from the original population. \n\nIn addition to spatial heterogeneity, our stochastic model \nnaturally creates temporal inhomogeneity.
Even though the average fitness of a \npopulation changes smoothly, fitness spikes appear occasionally, corresponding \nto knowledgeable people or ``heroes'' in human history. Immediately after the initial \ncolonization of one environment, the highest individual fitness value is more than five \ntimes the average fitness value of the population in our model. \nAfter evolution of the population for approximately $400$ generations, \nthe fitness is ``saturated'', and the highest fitness is only 50\\% better than \nthe average fitness. This is consistent with our impression that more heroes emerge \nin a fast-changing society than in a stagnant one.\n\n\n\\section{Conclusion}\n\nIn conclusion, we have built a model of population migration in an asymmetric,\n two-dimensional system. We have shown the vital role that modularity\n plays in the migration rates and gene flows. We have shown that a modular knowledge \nsystem coupled with knowledge transfer accelerates human migration. \nOur results demonstrate an east-west and\n north-south migration rate difference, and we have related environmental\n variation with longitude and latitude \nto migration rate. We have shown that the asymmetry of migration velocity \noriginates from asymmetric environmental gradients. \nThe asymmetry of migration velocity exists only if migration is limited by fitness. \nPredictions for the asymmetry of genetic variation are in agreement \nwith patterns of human gene flow in the Americas. \nOur model may be applied to other systems such as the spread of invasive species, \ncancer cell migration, and bacterial migration.\n\n\\section*{Authors' contribution}\nD.W. wrote the codes and collected and analyzed the data. Both D.W. and M.W.D. \ndeveloped and analyzed the model and drafted the manuscript.
All authors gave final approval for publication.\n\n\\section*{Competing interest}\nWe declare we have no competing interests.\n\n\\section*{Funding}\nWe received no funding for this study.\n\n\\section*{Appendix}\n\nThe dynamics of evolution in one environment is described by a master equation:\n\n\\begin{eqnarray}\\label{Eq8}\n\\frac{d P ( \\{ n_{\\bf a} \\}; t)}{d t} &=&\n\\sum_{ \\{ {\\bf a} \\} }\n\\bigg[\nf(S_{\\bf a}) (n_{\\bf a}-1) \\sum_{ \\{ {\\bf b} \\ne {\\bf a} \\} }\n \\frac{n_{\\bf b}+1}{N} P(n_{\\bf a}-1, n_{\\bf b}+1; t)\\nonumber\n\\\\&&-f(S_{\\bf a}) n_{\\bf a} \\sum_{ \\{ {\\bf b} \\ne {\\bf a} \\} } \n\\frac{n_{\\bf b}}{N} P(n_{\\bf a}, n_{\\bf b}; t)\n\\bigg]\\delta_{N,N^*}\n\\nonumber \n\\\\&& \\nonumber +\\sum_{ \\{ {\\bf a} \\} }\n\\bigg[\nf(S_{\\bf a}) (n_{\\bf a}-1)\n P(n_{\\bf a}-1 ; t) -f(S_{\\bf a}) n_{\\bf a} \n P(n_{\\bf a}; t)\n\\bigg](1-\\delta_{N,N^*})\n\\\\ &&\n+ \\mu\n\\sum_{ \\{ {\\bf a} \\} }\n\\sum_{ \\{ {\\bf b}=\\partial {\\bf a} \\} }\n\\bigg[\n(n_{\\bf b}+1) P(n_{\\bf a}-1, n_{\\bf b}+1; t) -\nn_{\\bf b} P(n_{\\bf a}, n_{\\bf b}; t)\n\\bigg]\n\\nonumber \\\\ &&\n+ \\nu\n\\sum_{ \\{ {\\bf a} \\} }\n\\sum_{k=1}^K\n\\sum_{ \\{ {\\bf b}, {\\bf b}_k \\ne {\\bf a}_k \\} }\n\\bigg[\n(n_{ {\\bf a} \/ {\\bf b}_k } +1)\n \\frac{ n_ { {\\bf b} \/ {\\bf a}_k } }{N} P(n_{\\bf a}-1, \nn_{ {\\bf a} \/ {\\bf b}_k } +1; t)\\nonumber\n\\\\&&-n_{ {\\bf a} \/ {\\bf b}_k }\n \\frac{ n_ { {\\bf b} \/ {\\bf a}_k } }{N} P(n_{\\bf a}, \nn_{ {\\bf a} \/ {\\bf b}_k } ; t)\n\\bigg]\n\\end{eqnarray}\n\nHere $n_{\\bf a}$ is the number of individuals with sequence $S_{\\bf a}$, with\nthe vector index ${\\bf a}$ used to label the $2^L$ sequences.\nThe notation $\\partial {\\bf a}$ means the $L$ sequences created by\na single mutation from sequence $S_{\\bf a}$.\nThe notation ${\\bf a} \/ {\\bf b}_k$ means the sequence created by\ntransferring module $k$ from sequence \n$S_{\\bf b}$ into sequence $S_{\\bf a}$. 
\nHere $N^*$ is the environmental capacity of the environment.\n\n\\bibliographystyle{vancouver}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nAs introduced in \\cite{coffee-time} and \\cite{geodetic}, the $P_3$-hull number of a simple connected graph is the minimum cardinality of a set $U$ of initially infected vertices that will eventually infect the entire graph where an uninfected node becomes infected if two or more of its neighbors are infected. There has been much work on formulas for the $P_3$-hull numbers of various types of graphs, \\cite{p_3-Hamming, MR3040145, p_3-product, p3Kneser}, as well as with the closely related notion of the $2$-neighbor bootstrap percolation problem,\n\\cite{2-neighbour_bootstrap_percolation, Marcilon_Thiago, Przykucki_Michal}.\n\nImportant to this paper is the decycling number. Given a graph $G$, its decycling number, $\\nabla(G)$, is the minimum cardinality of a set $U$ of vertices such that $G-U$ is acyclic. In general, it is very hard to compute a graph's decycling number. In fact, it has been shown to be NP-complete \\cite{DecyclingNumberIsNPComplete}. However, results in special cases have been obtained, \\cite{decycleOfGraphs, decylceRandom, DecyclingNumberOfGeneralizedPetersenGraphs, decycleCubic, decycleBoxProduct}.\n\nIn this paper, after initial definitions, we show that for a cubic graph, the $P_3$-hull number and the decycling number coincide, Theorem \\ref{Main Theorem}. By \\cite{DecyclingNumberOfGeneralizedPetersenGraphs}, it follows that the $P_3$-hull number of the generalized Petersen graph, $G(n,k)$, is $\\left\\lceil\\frac{n+1}{2}\\right\\rceil$, Corollary \\ref{p3_hull_GP}. Furthermore, the complement of any initial infecting set is a forest. 
In Theorem \\ref{components thm}, it is shown that for any infecting set of minimum cardinality, this forest always has exactly one or two components.\n\nIn addition, we introduce the notion of the infecting time of an infecting set and study it for the Petersen graph. Explicit times are computed for special infecting sets, Theorem \\ref{dan_time}. Giving explicit formulas for the minimal and maximal infecting times is a very difficult problem. However, we give complete answers for the special case of $G(n,1)$, Theorem \\ref{full_time}.\n\nFinally, we introduce a number of graphs related to the generalized Petersen graph. For a type of surgery, $G(n,k)\\# G(n,k)$, the $P_3$-hull number is computed in Theorem \\ref{thm_surgery}. Associated to a permutation $\\sigma$ of $S_n$, a generalization of $G(n,k)$, called $GG(n,\\sigma)$, is introduced. General bounds for its $P_3$-hull number are given in Theorem \\ref{odd cycles upper bound}. An exact answer is computed under an additional hypothesis in Theorem \\ref{thm:minGGP}.\n\n\n\n\\section{Initial Definitions}\n\n Throughout the paper, let $G = (V,E)$ be a finite, simple, connected graph and let $S\\subseteq V$. We write $G[S]$ for the corresponding \\textit{induced subgraph} of $G$ on $S$. We say $G$ is \\textit{cubic} if each vertex of $G$ has degree $3$.\n\n Following \\cite{p3Kneser}, the \\textit{$P_3$-interval}, $I[S]$, is the set $S$ together with all vertices in $G$ that have two or more neighbors in $S$.\n If $I[S] = S$, then the set $S$ is called \\textit{$P_3$-convex}.\n The $P_3$\\textit{-convex hull}, $H_\\mathcal C(S)$ of $S$, is the smallest $P_3$-convex set containing $S$.\n Iteratively, define $I^0[S] = S$ and $I^p[S] = I[I^{p-1}[S]]$ for any positive integer $p$. Then $H_\\mathcal C(S)$ is the union of all $I^p[S]$.\n\n If $H_\\mathcal C(S) = V$, we say that $S$ is a \\textit{$P_3$-hull set} of $G$. We also refer to a $P_3$-hull set as an \\textit{infecting set}.
The minimum cardinality, $h_{P_3}(G)$, of a $P_3$-hull set in $G$ is called the \\textit{$P_3$-hull number} of $G$. This will be the main object of our study, and we will refer to $P_3$-hull sets of cardinality $h_{P_3}(G)$ as \\textit{minimum size infecting sets}. For a $P_3$-hull set $S$, we say that the \\textit{infecting time} of $S$, denoted $T_I(S)$, is the smallest integer $p$ such that $I^p[S] = V$.\n\n Relevant for this paper, we say that $S$ is a \\textit{decycling set} of $G$ if the induced subgraph $G[V-S]$ is acyclic \\cite{decycleOfGraphs, decylceRandom}. The minimum cardinality of a decycling set of $G$ is called the \\textit{decycling number} of $G$ and denoted by $\\nabla(G)$.\n\nRecall that a \\textit{generalized Petersen graph}, $G(n,k)$ with $1\\le k< \\frac n2$, has vertex set\n\\[V=\\{{u}_0,{u}_1,\\ldots,{u}_{n-1},\\,v_0,v_1,\\dots,v_{n-1}\\}\\]\nand edge set (interpreting each index modulo $n$)\n\\[E=\\{{u}_i{u}_{i+1},\\,{u}_iv_i, \\,v_iv_{i+k}:0\\leq i\\leq n-1\\}.\\]\n\n\n\n\\section{$P_3$-Hull Numbers and Minimum Infecting Sets for $G(n,k)$}\n\n\n\\begin{theorem}\\label{Main Theorem}\n Let $G = (V,E)$ be a cubic graph and $S\\subseteq V$. Then $S$ is an infecting set of $G$ if and only if it is a decycling set of $G$. In particular, $h_{P_3}(G) = \\nabla(G)$.\n\\end{theorem}\n\n\\begin{proof}\n\tWe show first that an infecting set is a decycling set via the contrapositive. Assume that $S$ is not a decycling set. Then there exists some nonempty $W \\subseteq V- S$ so that $G[W]$ is a cycle. Hence each $w\\in W$ has exactly two neighbors in $W$ and one neighbor in $V- W$.\n\t\n\tSuppose now that $S$ is an infecting set. Since $S\\cap W=\\emptyset$, there is a minimal integer $n\\ge 0$ so that $I^{n}[S] \\cap W = \\emptyset$ and $I^{n+1}[S] \\cap W \\not= \\emptyset$. But then for $w\\in I^{n+1}[S] \\cap W$, there must have been two neighbors of $w$ in $I^{n}[S]$.
As $I^{n}[S] \\cap W = \\emptyset$, these two neighbors must lie in $V-W$, which is a contradiction.\n\t\n Next we show that a decycling set is an infecting set.\n Suppose that $S$ is a decycling set of $G$. If $S$ is not infecting, then $H_\\mathcal C(S)$ is a proper subset of $V$ that is still decycling. As the subgraph induced by $V-H_\\mathcal C(S)$ is a forest, there exists some $v\\in V-H_\\mathcal C(S)$ with degree at most $1$. This implies that at least two neighbors of $v$ lie in $H_\\mathcal C(S)$, which is a contradiction.\n\\end{proof}\n\n\\begin{corollary} \\label{p3_hull_GP}\n$h_{P_3}(G(n,k)) = \\lceil\\frac{n+1}{2}\\rceil$.\n\\end{corollary}\n\n\\begin{proof}\n This follows from Theorem \\ref{Main Theorem} and \\cite[Theorem 3.1]{DecyclingNumberOfGeneralizedPetersenGraphs}, where it is shown that $\\nabla(G(n,k)) = \\left\\lceil\\frac{n+1}{2}\\right\\rceil$.\n\\end{proof}\n\nAs a prelude to infecting time calculations, we present an explicit minimum size infecting set for $G(n,k)$ (examples can be seen in Figure \\ref{ExamplesFigure}). An alternate example may be found in \\cite[Lemma 3.3]{DecyclingNumberOfGeneralizedPetersenGraphs}.\n\n\\begin{corollary}\\label{infecting_set}\n\tLet $c = \\gcd(n,k)$, $l=\\frac{n}{c}$, and\n\t\\[ \\mathcal S_v = \\{v_{j+ik}:0\\leq i\\leq l-1,\\, 0\\leq j\\leq c-1,\\, i\\text{ odd }\\}. \\]\n\tFor $l$ even, let\n\t\\[ \\mathcal S_u = \\{u_0\\}\\]\n\tand for $l$ odd, let\n\t\\[ \\mathcal S_u = \\{u_{c-1}\\} \\cup \\{u_j:0\\leq j\\leq c-1,\n\t\\, j\\text{ even }\\}.
\\]\n\tThen \\[ \\mathcal S = \\mathcal S_v \\cup \\mathcal S_u \\] is a minimum size infecting set for $G(n,k)$.\n\\end{corollary}\n\n\\begin{figure}[H]\n\t\\centering\n\t\\begin{tikzpicture}\n\t\t\\node[fill,circle,inner sep=0pt,minimum size=1pt] (a) at (4*1,4*0) {};\n\t\t\\node[fill,circle,inner sep=0pt,minimum size=1pt] (b) at (4*.866,.5*4) {};\n\t\t\\node[fill,circle,inner sep=0pt,minimum size=1pt] (c) at (4*.5,.866*4) {};\n\t\t\\node[fill,circle,inner sep=3pt,minimum size=1pt, label = \\hspace{0em}$u_0$] (d) at (4*0,1*4) {};\n\t\t\\node[fill,circle,inner sep=0pt,minimum size=1pt] (g) at (4*-1,0*4) {};\n\t\t\\node[fill,circle,inner sep=0pt,minimum size=1pt] (f) at (4*-.866,.5*4) {};\n\t\t\\node[fill,circle,inner sep=0pt,minimum size=1pt] (e) at (4*-.5,.866*4) {};\n\t\t\\node[fill,circle,inner sep=0pt,minimum size=1pt] (j) at (4*0,-1*4) {};\n\t\t\\node[fill,circle,inner sep=0pt,minimum size=1pt] (h) at (4*-.866,-.5*4) {};\n\t\t\\node[fill,circle,inner sep=0pt,minimum size=1pt] (i) at (4*-.5,-.866*4) {};\n\t\t\\node[fill,circle,inner sep=0pt,minimum size=1pt] (l) at (4*.866,-.5*4) {};\n\t\t\\node[fill,circle,inner sep=0pt,minimum size=1pt] (k) at (4*.5,-.866*4) {};\n\t\t\n\t\t\\draw[black] (a) -- (b);\n\t\t\\draw[black] (b) -- (c);\n\t\t\\draw[black] (c) -- (d);\n\t\t\\draw[black] (d) -- (e);\n\t\t\\draw[black] (e) -- (f);\n\t\t\\draw[black] (f) -- (g);\n\t\t\\draw[black] (g) -- (h);\n\t\t\\draw[black] (h) -- (i);\n\t\t\\draw[black] (i) -- (j);\n\t\t\\draw[black] (j) -- (k);\n\t\t\\draw[black] (k) -- (l);\n\t\t\\draw[black] (l) -- (a);\n\t\t\n\t\t\\node[fill,circle,inner sep=3pt,minimum size=1pt, label = \\hspace{1.5em}$v_3$] (m) at (2.5*1,2.5*0) {};\n\t\t\\node[fill,circle,inner sep=3pt,minimum size=1pt, label = \\hspace{1.5em}$v_2$] (n) at (2.5*.866,.5*2.5) {};\n\t\t\\node[fill,circle,inner sep=0pt,minimum size=1pt] (o) at (2.5*.5,.866*2.5) {};\n\t\t\\node[fill,circle,inner sep=0pt,minimum size=1pt] (p) at (2.5*0,1*2.5) {};\n\t\t\\node[fill,circle,inner 
sep=0pt,minimum size=1pt] (s) at (2.5*-1,0*2.5) {};\n\t\t\\node[fill,circle,inner sep=3pt,minimum size=1pt, label = \\hspace{1.5em}$v_{10}$] (r) at (2.5*-.866,.5*2.5) {};\n\t\t\\node[fill,circle,inner sep=3pt,minimum size=1pt, label = \\hspace{1.5em}$v_{11}$] (q) at (2.5*-.5,.866*2.5) {};\n\t\t\\node[fill,circle,inner sep=3pt,minimum size=1pt, label = \\hspace{1.5em}$v_6$] (v) at (2.5*0,-1*2.5) {};\n\t\t\\node[fill,circle,inner sep=0pt,minimum size=1pt] (t) at (2.5*-.866,-.5*2.5) {};\n\t\t\\node[fill,circle,inner sep=3pt,minimum size=1pt, label = \\hspace{1.5em}$v_7$] (u) at (2.5*-.5,-.866*2.5) {};\n\t\t\\node[fill,circle,inner sep=0pt,minimum size=1pt] (x) at (2.5*.866,-.5*2.5) {};\n\t\t\\node[fill,circle,inner sep=0pt,minimum size=1pt] (w) at (2.5*.5,-.866*2.5) {};\n\t\t\n\t\t\\draw[black] (n) -- (p);\n\t\t\\draw[black] (p) -- (r);\n\t\t\\draw[black] (r) -- (t);\n\t\t\\draw[black] (t) -- (v);\n\t\t\\draw[black] (v) -- (x);\n\t\t\\draw[black] (x) -- (n);\n\t\t\n\t\t\\draw[black, dashed] (m) -- (o);\n\t\t\\draw[black, dashed] (o) -- (q);\n\t\t\\draw[black, dashed] (q) -- (s);\n\t\t\\draw[black, dashed] (s) -- (u);\n\t\t\\draw[black, dashed] (u) -- (w);\n\t\t\\draw[black, dashed] (w) -- (m);\n\t\t\n\t\t\\draw[black] (m) -- (a);\n\t\t\\draw[black] (n) -- (b);\n\t\t\\draw[black] (o) -- (c);\n\t\t\\draw[black] (p) -- (d);\n\t\t\\draw[black] (q) -- (e);\n\t\t\\draw[black] (r) -- (f);\n\t\t\\draw[black] (s) -- (g);\n\t\t\\draw[black] (t) -- (h);\n\t\t\\draw[black] (u) -- (i);\n\t\t\\draw[black] (v) -- (j);\n\t\t\\draw[black] (w) -- (k);\n\t\t\\draw[black] (x) -- (l);\n\t\t\n\t\\end{tikzpicture}\n\t\\qquad\n\t\\begin{tikzpicture}\n\t\t\\node[fill,circle,inner sep=3pt,minimum size=1pt, label = \\hspace{1.5em}$u_3$] (a) at (4*1,4*0) {};\n\t\t\\node[fill,circle,inner sep=3pt,minimum size=1pt, label = \\hspace{1.5em}$u_2$] (b) at (4*.866,.5*4) {};\n\t\t\\node[fill,circle,inner sep=0pt,minimum size=1pt] (c) at (4*.5,.866*4) {};\n\t\t\\node[fill,circle,inner sep=3pt,minimum 
size=1pt, label = \\hspace{0em}$u_0$] (d) at (4*0,1*4) {};\n\t\t\\node[fill,circle,inner sep=0pt,minimum size=1pt] (g) at (4*-1,0*4) {};\n\t\t\\node[fill,circle,inner sep=0pt,minimum size=1pt] (f) at (4*-.866,.5*4) {};\n\t\t\\node[fill,circle,inner sep=0pt,minimum size=1pt] (e) at (4*-.5,.866*4) {};\n\t\t\\node[fill,circle,inner sep=0pt,minimum size=1pt] (j) at (4*0,-1*4) {};\n\t\t\\node[fill,circle,inner sep=0pt,minimum size=1pt] (h) at (4*-.866,-.5*4) {};\n\t\t\\node[fill,circle,inner sep=0pt,minimum size=1pt] (i) at (4*-.5,-.866*4) {};\n\t\t\\node[fill,circle,inner sep=0pt,minimum size=1pt] (l) at (4*.866,-.5*4) {};\n\t\t\\node[fill,circle,inner sep=0pt,minimum size=1pt] (k) at (4*.5,-.866*4) {};\n\t\t\n\t\t\\draw[black] (a) -- (b);\n\t\t\\draw[black] (b) -- (c);\n\t\t\\draw[black] (c) -- (d);\n\t\t\\draw[black] (d) -- (e);\n\t\t\\draw[black] (e) -- (f);\n\t\t\\draw[black] (f) -- (g);\n\t\t\\draw[black] (g) -- (h);\n\t\t\\draw[black] (h) -- (i);\n\t\t\\draw[black] (i) -- (j);\n\t\t\\draw[black] (j) -- (k);\n\t\t\\draw[black] (k) -- (l);\n\t\t\\draw[black] (l) -- (a);\n\t\t\n\t\t\\node[fill,circle,inner sep=0pt,minimum size=1pt] (m) at (2.5*1,2.5*0) {};\n\t\t\\node[fill,circle,inner sep=0pt,minimum size=1pt] (n) at (2.5*.866,.5*2.5) {};\n\t\t\\node[fill,circle,inner sep=0pt,minimum size=1pt] (o) at (2.5*.5,.866*2.5) {};\n\t\t\\node[fill,circle,inner sep=0pt,minimum size=1pt] (p) at (2.5*0,1*2.5) {};\n\t\t\\node[fill,circle,inner sep=0pt,minimum size=1pt] (s) at (2.5*-1,0*2.5) {};\n\t\t\\node[fill,circle,inner sep=0pt,minimum size=1pt] (r) at (2.5*-.866,.5*2.5) {};\n\t\t\\node[fill,circle,inner sep=0pt,minimum size=1pt] (q) at (2.5*-.5,.866*2.5) {};\n\t\t\\node[fill,circle,inner sep=3pt,minimum size=1pt, label = \\hspace{1.5em}$v_6$] (v) at (2.5*0,-1*2.5) {};\n\t\t\\node[fill,circle,inner sep=0pt,minimum size=1pt] (t) at (2.5*-.866,-.5*2.5) {};\n\t\t\\node[fill,circle,inner sep=3pt,minimum size=1pt, label = \\hspace{1.5em}$v_7$] (u) at (2.5*-.5,-.866*2.5) 
{};\n\t\t\\node[fill,circle,inner sep=3pt,minimum size=1pt, label = \\hspace{1.5em}$v_4$] (x) at (2.5*.866,-.5*2.5) {};\n\t\t\\node[fill,circle,inner sep=3pt,minimum size=1pt, label = \\hspace{1.5em}$v_5$] (w) at (2.5*.5,-.866*2.5) {};\n\t\t\n\t\t\n\t\t\\draw[black] (p) -- (t);\n\t\t\\draw[black] (t) -- (x);\n\t\t\\draw[black] (x) -- (p);\n\t\t\n\t\t\\draw[black, dashed] (q) -- (u);\n\t\t\\draw[black, dashed] (u) -- (m);\n\t\t\\draw[black, dashed] (m) -- (q);\n\t\t\n\t\t\\draw[black, densely dotted] (r) -- (v);\n\t\t\\draw[black, densely dotted] (v) -- (n);\n\t\t\\draw[black, densely dotted] (n) -- (r);\n\t\t\n\t\t\\draw[black, dashdotted] (s) -- (w);\n\t\t\\draw[black, dashdotted] (w) -- (o);\n\t\t\\draw[black, dashdotted] (o) -- (s);\n\t\t\n\t\t\\draw[black] (m) -- (a);\n\t\t\\draw[black] (n) -- (b);\n\t\t\\draw[black] (o) -- (c);\n\t\t\\draw[black] (p) -- (d);\n\t\t\\draw[black] (q) -- (e);\n\t\t\\draw[black] (r) -- (f);\n\t\t\\draw[black] (s) -- (g);\n\t\t\\draw[black] (t) -- (h);\n\t\t\\draw[black] (u) -- (i);\n\t\t\\draw[black] (v) -- (j);\n\t\t\\draw[black] (w) -- (k);\n\t\t\\draw[black] (x) -- (l);\n\t\t\n\t\\end{tikzpicture}\n\t\\centering\n\t\\caption{Minimum size infecting sets for $G(12,2)$ and $G(12, 4)$, respectively.}\n\t\\label{ExamplesFigure}\n\\end{figure}\n\n\\begin{proof}\n\tNote that the subgraph of $G(n,k)$ which is induced on the vertex set $\\{v_0,v_1,\\ldots,v_{n-1}\\}$ is the disjoint union of $c$ cycles of length $l$. The corresponding vertex set of each cycle is given by \t\\[ V_j = \\{v_{j+ik}:0\\leq i\\leq l-1\\}\\]\n\twith $0\\leq j \\leq c-1$.\n\t\n\tWe will distinguish two cases. The first is when $l$ is even.\n\tGiven $\\mathcal S$ as an initial set of infected points, the infection will spread to infect every vertex set $V_j$, $0\\leq j\\leq c-1$. 
From there, with $u_0$ already infected, the infection spreads to all of $\\{u_0, u_1, \\ldots, u_{n-1}\\}.$ Note that $n=cl$ is even and\n\t\\[|\\mathcal S|=1+c\\frac{l}{2}=\\frac{n}{2}+1=\\bigg\\lceil\\frac{n+1}{2}\\bigg\\rceil,\\]\n\tso that $\\mathcal S$ is a minimum size infecting set by Corollary \\ref{p3_hull_GP}.\n\t\n\tTurn now to the case of $l$ odd.\n\tGiven $\\mathcal S$ as an initial set of infected points, the infection will spread to $\\{u_j:0\\leq j\\leq c-1\\}.$ With $u_j$ and $v_{j+k}$ infected, $v_j$ will also become infected, and the infection will spread to infect every vertex set $V_j$, $0\\leq j\\leq c-1$. From there, with $\\{u_j:0\\leq j\\leq c-1\\}$ infected, the infection spreads to all of $\\{u_0,u_1,\\ldots,u_{n-1}\\}.$ As $c\\frac{l-1}{2}$ is an integer,\n\t\\[|\\mathcal S|=\\bigg\\lceil\\frac{c+1}{2}\\bigg\\rceil +c\\frac{l-1}{2}=\\bigg\\lceil c\\frac{l-1}{2}+\\frac{c+1}{2}\\bigg\\rceil=\\bigg\\lceil\\frac{n+1}{2}\\bigg\\rceil,\\]\n\tand it follows from Corollary \\ref{p3_hull_GP} that $\\mathcal S$ is a minimum size infecting set.\n\\end{proof}\n\nBy Theorem \\ref{Main Theorem}, the complement of a minimum size infecting set of a cubic graph is a forest. The next theorem constrains the number of connected components of this forest for $G(n,k)$.\n\n\\begin{theorem}\\label{components thm}\n\tLet $S$ be a minimum size infecting set of the generalized Petersen graph $G=G(n,k)=(V,E)$.\n\n\\begin{itemize}\n\t\\item If $n$ is odd, then $G[V-S]$ is a tree.\n\t\n\t\\item If $n$ is even, then the forest $G[V-S]$ has either one or two connected components. It has two connected components if and only if $S$ has no neighboring points, and one connected component if and only if $S$ has exactly one pair of points that are neighbors.\n\\end{itemize}\n\\end{theorem}\n\n\\begin{proof}\n\tRecall for $G$ that $|V|=2n$ and $|E|=3n$. Write $\\nu$ and $\\epsilon$ for the number of vertices and edges, respectively, of $G[V-S]$. 
As $G[V-S]$ is a forest, $\\nu-\\epsilon$ is the number of trees in the forest.\n\t\n\tWhen $n$ is odd, write $n=2m+1$, so that $|S|=\\lceil \\frac{n+1}{2} \\rceil = m+1$ and $\\nu=2(2m+1) - (m+1) = 3m+1$. Passing from $G$ to $G[V-S]$ removes at most $3|S|$ edges, with equality exactly when $S$ has no neighboring points. Therefore, $\\epsilon \\geq 3(2m+1) - 3(m+1) = 3m$. It follows that $\\nu-\\epsilon \\le 1$, and $G[V-S]$ must be a single tree.\n\t\n\tWhen $n$ is even, write $n=2m$, so that $|S|=\\lceil \\frac{n+1}{2} \\rceil = m+1$ and $\\nu = 2(2m) - (m+1) = 3m-1$. As in the previous paragraph, $\\epsilon \\geq 3(2m) - 3(m+1) = 3m-3$, so $\\nu-\\epsilon \\le 2$, and $G[V-S]$ has either one or two connected components. Since the forest $G[V-S]$ is nonempty, $\\nu-\\epsilon \\ge 1$, so $S$ contains at most one pair of neighboring points. If $S$ has no neighboring points, then $\\epsilon = 3m-3$ and $\\nu-\\epsilon = 2$, so $G[V-S]$ has two trees; if $S$ has exactly one pair of neighboring points, then $\\epsilon = 3m-2$ and $\\nu-\\epsilon = 1$, so $G[V-S]$ is a tree.\n\\end{proof}\n\n\\section{Infecting Times}\n\n\\begin{theorem}\\label{dan_time}\n\tLet $c = \\gcd(n,k)$, $l=\\frac{n}{c}$, and let $\\mathcal S$ be the infecting set for $G(n,k)$ from Corollary \\ref{infecting_set}.\n\\begin{itemize}\n\t\\item When $l$ is even, the infecting time for $\\mathcal S$ is $\\frac{n}{2}$.\n\t\\item When $l$ is odd, the infecting time for $\\mathcal S$ is $\\frac{n-c}{2}+1$.\n\\end{itemize}\n\\end{theorem}\n\n\\begin{proof}\nNote that $n = cl$ with $1\\le c \\le k < \\frac n2$. Thus $l \\ge 3$.\n\nBegin with the case of $l\\ge 4$ even. Obviously, $\\{u_0\\} \\cup \\{v_j : 0\\leq j \\leq n-1\\}\\subseteq I^1[\\mathcal S]$ by the construction of $\\mathcal S$. Moreover, $u_{n-1} \\in I^1[\\mathcal S]$. To see this, note that $v_{n-1} \\in V_{c-1}$, and $(c-1)+ik \\equiv n-1 \\mbox{ mod } n$ holds iff $i\\frac kc \\equiv -1 \\mbox{ mod } l$. As $l$ is even, the last congruence can only hold for $i$ odd. 
Thus, $v_{n-1} \\in \\mathcal S_v \\subseteq \\mathcal S$ and $u_{n-1} \\in I^1[\\mathcal S]$.\n\nIn addition to $u_{n-1}$, the vertex $u_{1}$ lies in $I^1[\\mathcal S]$ if and only if $c=1$. In either case, another $\\frac n2 -1$ steps are necessary for the infection to spread from $I^1[\\mathcal S]$ to the rest of the graph.\n\nTurn now to the case of $l \\ge 3$ odd. Obviously, \\[\\{u_j: 0\\le j\\le c-1\\} \\cup \\{v_{j+ik} : 0< i < l-1,\\ 0\\le j\\le c-1\\}\\subseteq I^1[\\mathcal S]\\] by the construction of $\\mathcal S$, where some of the $v_{j+ik}$ for $i\\in \\{0,l-1\\}$ will still be missing. In addition, exactly one of $u_c$ and $u_{n-1}$ will be in $I^1[\\mathcal S]$. To see this, note that $v_{c} \\in V_{0}$, and $i_1k \\equiv c \\mbox{ mod } n$ holds iff $i_1\\frac kc \\equiv 1 \\mbox{ mod } l$. Similarly, $v_{n-1} \\in V_{c-1}$, and $(c-1)+i_2k \\equiv n-1 \\mbox{ mod } n$ holds iff $i_2\\frac kc \\equiv -1 \\mbox{ mod } l$. Thus, $i_1\\frac kc + i_2\\frac kc\\equiv 0 \\mbox{ mod } l$, and $i_1+i_2$ is a multiple of $l$. With $0