diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzznsuw" "b/data_all_eng_slimpj/shuffled/split2/finalzznsuw" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzznsuw" @@ -0,0 +1,5 @@ +{"text":"\\section{Discussion}\n\nReal environments don't provide scalar reward signals to learn from. Instead, organisms have developed various internal drives based on either primary or secondary goals \\citep{Baldassarre13}. Here we examined intrinsic rewards based on features derived from other agents in the environment, in order to establish whether such social signals could enable the evolution of altruism to solve intertemporal social dilemmas. In accord with evolutionary theory \\citep{axelrod81, Nowak1560}, we found that na\\\"ively implementing natural selection via genetic algorithms did not lead to the emergence of cooperation. Furthermore, assortative matchmaking was sufficient to generate cooperative behavior in cases where honest signals were available. Finally, we proposed a new multi-level evolutionary paradigm based on shared reward networks that achieves cooperation in more general situations.\n\nWe demonstrated that the reward network weights evolved differently for Cleanup versus Harvest, indicating that the two tasks necessitate different forms of social cooperation for optimal performance. This highlights the advantage of evolving rather than hand-crafting the weighting between individual reward and group reward, as optimal weightings cannot necessarily be anticipated for all environments. Evolving such weightings thus constitutes a form of meta-learning, wherein an entire learning system, including intrinsic reward functions, is optimized for fast learning \\cite{singh2010intrinsically, fernando2018meta}. Here we have extended these ideas to the multi-agent domain.\n\nWhy does evolving intrinsic social preferences promote cooperation? Firstly, evolution ameliorates the intertemporal choice problem by distilling the long timescale of collective fitness into the short timescale of individual reinforcement learning, thereby improving credit assignment between selfish acts and their temporally displaced negative group outcomes \\citep{hughes2018inequity}. Secondly, it mitigates the social dilemma itself by allowing evolution to expose social signals that correlate with, for example, an agent's current level of selfishness. Such information powers a range of mechanisms for achieving mutual cooperation like competitive altruism \\citep{hardy2006nice}, other-regarding preferences \\citep{cooper2016other}, and inequity aversion \\citep{fehr1999}. In accord, laboratory experiments show that humans cooperate more readily when they can communicate \\citep{ostrom1992covenants, janssen2010lab}. \n\nThe shared reward network evolution model was inspired by multi-level selection; yet it does not correspond to the prototypical case of that theory since its lower level units of evolution (the policy networks) are constantly swapping which higher level unit (reward network) they are paired with. Nevertheless, there are a variety of ways in which we see this form of modularity arise in nature. 
For example, free-living microorganisms occasionally form multi-cellular structures to solve a higher order adaptive problem, like slime mold forming a spore-producing stalk for dispersal \\citep{west2006social}, and many prokaryotes can incorporate plasmids (modules) found in their environment or received from other individuals as functional parts of their genome, thereby achieving cooperation in social dilemmas \\citep{griffin2004cooperation, mc2011horizontal}. Alternatively, in humans a reward network may represent a shared ``cultural norm'', with its fitness based on cultural information accumulated from the groups in which it holds sway. In this way, the spread of norms can occur independently of the success of individual agents \\citep{boyd2009}.\n\nNote that in this work, we have assumed that agents have perfect knowledge of other agents' rewards, while in real-world systems this is not typically the case. This assumption was made in order to disentangle the effects of cultural evolution from the quality of the signals being evolved over. Natural next steps include adding partial observability or noise to this signal (to make it more analogous to, for instance, a smile\/frown or other locally observable social signals), identifiability across episodes, or even deception.\n\nThe approach outlined here opens avenues for investigating alternative evolutionary mechanisms for the emergence of cooperation, such as kin selection \\citep{griffin2002} and reciprocity \\citep{trivers1971evolution}. It would be interesting to see whether these lead to different weights in a reward network, potentially hinting at the evolutionary origins of different social biases. Along these lines, one might consider studying an emergent version of the assortative matchmaking model along the lines suggested by \\cite{henrich2003}, adding further generality and power to our setup. Finally, it would be fascinating to determine how an evolutionary approach can be combined with multi-agent communication to produce that most paradoxical of cooperative behaviors: cheap talk.\n\n\\begin{acks}\nWe would like to thank Simon Osindero, Iain Dunning, Andrea Tacchetti, and many DeepMind colleagues for valuable discussions and feedback, as well as code development and support. \n\\end{acks}\n\\section{Methods}\n\nWe varied and explored different combinations of parameters, namely: (1) environments \\{Harvest, Cleanup\\}, (2) reward network features \\{prospective, retrospective\\}, (3) matchmaking \\{random, assortative\\}, and (4) reward network evolution \\{individual, shared, none\\}. We describe these in the following sections.\n\n\n\\subsection{Intertemporal social dilemmas}\n\nIn this paper, we consider Markov games \\citep{littman1994markov} within a MARL setting. Specifically we study intertemporal social dilemmas \\citep{leibo17, hughes2018inequity}, defined as games in which individually selfish actions produce individual benefit on short timescales but have negative impacts on the group over a longer time horizon. This conflict between the two timescales characterizes the intertemporal nature of these games. The tension between individual and group-level rationality identifies them as social dilemmas (e.g. the famous Prisoner's Dilemma). \n\nWe consider two dilemmas, each implemented as a partially observable Markov game on a 2D grid (see Figure \\ref{fig:gallery}), with $N=5$ players playing at a time. 
In the \\emph{Cleanup} game, agents tried to collect apples (reward ${+}1$) that spawned in a field at a rate inversely related to the cleanliness of a geographically separate aquifer. Over time, this aquifer filled up with waste, lowering the respawn rate of apples linearly, until a critical point past which no apples could spawn. Episodes were initialized with no apples present and zero spawning, thus necessitating cleaning. The dilemma occurred because, in order for apples to spawn, agents had to leave the apple field and clean, which conferred no reward. However, if all agents declined to clean (defect), then no rewards would be received by any. In the \\emph{Harvest} game, agents again collected rewarding apples. The apple spawn rate at a particular point on the map depended on the number of nearby apples, falling to zero once there were no apples in a certain radius. There is a dilemma between the short-term individual temptation to harvest all the apples quickly and the consequential rapid depletion of apples, leading to a lower total yield for the group in the long term. \n\nAll episodes last 1000 steps, and the total size of the playable area is 25$\\times$18 for Cleanup and 38$\\times$16 for Harvest. Games are partially observable in that agents can only observe via a 15$\\times$15 RGB window, centered on their current location. The action space consists of moving left, right, up, and down, rotating left and right, and tagging other players. Tagging has a reward cost of 1 to use, and causes the tagged player to lose 50 reward points, thus allowing for the possibility of punishing free-riders \\citep{oliver1980rewards, gurerk2006competitive}. The Cleanup game has an additional action for cleaning waste.\n\n\n\\subsection{Modeling social preferences as intrinsic motivations}\n\nIn our model, there are three components to the reward that enter into agents' loss functions: (1) total reward, which is used for the policy loss, (2) extrinsic reward, which is used for the extrinsic value function loss, and (3) intrinsic reward, which is used for the intrinsic value function loss.\n\nThe {\\em total reward} for player $i$ is the sum of the extrinsic reward and an intrinsic reward as follows:\n\\begin{align}\\label{eq:totalreward}\n r_i(s_i,a_i) = r_i^E(s_i,a_i) + u_i(\\mathbf{f}_i) \\, .\n\\end{align}\nThe {\\em extrinsic reward} $r^E_i(s, a)$ is the environment reward obtained by player $i$ when it takes action $a_i$ from state $s_i$, sometimes also written with a time index $t$.\nThe {\\em intrinsic reward} $u(\\mathbf{f})$ is an aggregate social preference across features $\\mathbf{f}$ and is calculated according to the formula,\n\\begin{align}\\label{eq:inequityextend}\nu_{i}(\\mathbf{f}_i | \\boldsymbol{\\theta}) = \\mathbf{v}^\\mathrm{T} \\sigma \\left ( \\mathbf{W}^\\mathrm{T} \\mathbf{f}_i + \\mathbf{b} \\right ) \\,,\n\\end{align}\nwhere $\\sigma$ is the ReLU activation function, and $\\boldsymbol{\\theta} = \\{\\mathbf{W}, \\mathbf{v}, \\mathbf{b}\\}$ are the parameters of a 2-layer neural network with 2 hidden nodes. These parameters are evolved based on fitness (see Section \\ref{sec:arch_training}). The elements of $\\mathbf{v} = (v_1, v_2)$ approximately correspond to a linear combination of the coefficients related to advantageous and disadvantageous inequity aversion mentioned in \\cite{hughes2018inequity}, which were found via grid search in this previous work, but are here evolved. 
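To make the shape of this computation concrete, the following minimal Python\/NumPy sketch implements the intrinsic reward calculation of Eq. (\\ref{eq:inequityextend}); it is our illustration rather than the authors' code, and the randomly drawn genotype is an illustrative stand-in for weights that are in fact evolved.
\\begin{verbatim}
import numpy as np

N = 5  # players per episode

def intrinsic_reward(f, W, v, b):
    # u_i(f_i) = v^T ReLU(W^T f_i + b): a 2-layer net with 2 hidden nodes
    hidden = np.maximum(0.0, W.T @ f + b)
    return float(v @ hidden)

# Illustrative genotype theta = {W, v, b}; in the paper these are evolved.
rng = np.random.default_rng(0)
W, v, b = rng.normal(size=(N, 2)), rng.normal(size=2), np.zeros(2)

f_i = rng.normal(size=N)  # social features, own feature first
r_total = 1.0 + intrinsic_reward(f_i, W, v, b)  # extrinsic (+1) + intrinsic
\\end{verbatim}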
\n\n\n\\begin{figure*}[htb]\n \\centering\n \\includegraphics[width=1.\\textwidth]{figures\/arch_training2_NEW}\n \\vspace{-1.0cm}\n\\caption{(a) Agent $A_j$ adjusts policy $\\pi_j(s,a|\\phi)$ using off-policy importance weighted actor-critic (V-Trace) \\citep{espeholt2018impala} by sampling from a queue with (possibly stale) trajectories recorded from 500 actors acting in parallel arenas. (b) The architecture (shown only for 1 agent) includes a visual encoder (1-layer convolutional neural net with six 3$\\times$3 filters, stride 1, followed by two fully-connected layers with 32 units each), intrinsic and extrinsic value heads ($V^I$ and $V^E$), a policy head $\\pi$, and a long short-term memory (LSTM, with 128 hidden units), which takes last intrinsic and extrinsic rewards ($u(\\mathbf{f})$ and $r^E$) and last action as input. The reward network weights are evolved based on total episode return.}\n\\label{fig:arch}\n\\vspace{-.25cm}\n\\end{figure*} \n\n\n\n\\input{pseudocode}\n\n\nThe feature vector $\\mathbf{f}_i$ is a player-specific vector quantity that agents can transform into intrinsic reward via their reward network. It is composed of features $f_{ij}$ derived from all players\\footnote{Note that we use both $i$ and $j$ to index over the players, but $i$ makes reference to the player \\emph{receiving} the intrinsic reward, while $j$ indexes the players \\emph{sending} the features over which the intrinsic reward of player $i$ is defined.}, so that each player has access to the same set of features, with the exception that its own feature is demarcated specially (by always occupying the first element of the vector). The features themselves are a function of recently received or expected future (extrinsic) reward for each agent. In Markov games the rewards received by different players may not be aligned in time. Thus, any model of social preferences should not be overly influenced by the precise temporal alignment of different players' rewards. Intuitively, they ought to depend on comparing temporally averaged reward estimates between players, rather than instantaneous values. Therefore, we considered two different ways of temporally aggregating the rewards. \n\n\n\\begin{figure*}[htb]\n \\centering\n \\includegraphics[width=0.85\\textwidth]{figures\/evolution_schematic}\n \\vspace{-.5cm}\n\\caption{(a) Agents assigned and evolved with individual reward networks. (b) Assortative matchmaking, which preferentially plays cooperators with other cooperators and defectors with other defectors. (c) A single reward network is sampled from the population and assigned to all players, while 5 policy networks are sampled and assigned to the 5 players individually. After the episode, policy networks evolve according to individual player returns, while reward networks evolve according to aggregate returns over all players.}\n\\label{fig:evo}\n\\vspace{-.25cm}\n\\end{figure*} \n\n\nThe {\\em retrospective} method derives intrinsic reward from whether an agent judges that other agents have actually been (extrinsically) rewarded in the recent past. 
The {\\em prospective} variant derives intrinsic reward from whether other agents are expecting to be (extrinsically) rewarded in the near future.\\footnote{Our terms prospective and retrospective map onto the terms intentional and consequentialist respectively as used by \\cite{lerer17, peysakhovich2018}.} For the retrospective variant, $f_{ij} = e^t_j$, where the temporally decayed rewards $e_j^t$ for the agents $j = 1,\\dots, N$ are updated at each timestep $t$ according to\n\\begin{equation}\n e_j^t= \\eta \\,e_j^{t-1} + r_j^{E,t} \\, ,\n\\end{equation}\nand $\\eta = 0.975$. The prospective variant uses the value estimates $V^E_j$ (see Figure \\ref{fig:arch}b) for $f_{ij}$ and has a stop-gradient before the reward network module so that gradients don't flow back into other agents (as in, for example, DIAL \\citep{foerster2016}).\n\n\n\\subsection{Architecture and Training}\n\\label{sec:arch_training}\n\n\nWe used the same training framework as in \\cite{jaderberg2018human}, which performs distributed asynchronous training in multi-agent environments, including population-based training (PBT) \\citep{jaderberg2017population}. We trained a population of $50$ agents\\footnote{As in \\citep{espeholt2018impala}, we distinguish between an \"agent\" which acts in the environment according to some policy, and a \"learner\" which updates the parameters of a policy. In principle, a single agent's policy may depend on parameters updated by several separate learners.} with policies $\\{\\pi_i\\}$, from which we sampled $5$ players in order to populate each of $500$ arenas (where \\emph{arena} is an instantiation of a single episode of the environment) running in parallel. Within each arena, an episode of the environment was played with the sampled agents, before resampling new ones. Agents were sampled using one of two matchmaking processes (described in more detail below). Episode trajectories lasted 1000 steps and were written to queues for learning, from which weights were updated using V-Trace (Figure \\ref{fig:arch}a).\n\nThe set of weights evolved included learning rate, entropy cost weight, and reward network weights $\\theta$\\footnote{We can imagine that the reward weights are simply another set of optimization hyperparameters since they enter into the loss.}. The parameters of the policy network $\\phi$ were inherited in a Lamarckian fashion as in \\citep{jaderberg2017population}. Furthermore, we allowed agents to observe their last actions $a_{i,t-1}$, last intrinsic rewards ($u_{i,t-1}(\\mathbf{f}_i)$), and last extrinsic rewards ($r_{i,t-1}^E(s_i,a_i)$) as input to the LSTM in the agent's neural network. \n\nThe objective function was identical to that presented in \\cite{espeholt2018impala} and comprised three components: (1) the value function gradient, (2) policy gradient, and (3) entropy regularization, weighted according to the baseline cost and entropy cost hyperparameters (see Figure \\ref{fig:arch}b). 
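As a concrete illustration of the retrospective features described above, the following minimal sketch (ours, not the authors' code) maintains the decayed traces $e_j^t$ and assembles $\\mathbf{f}_i$ with the player's own feature demarcated first; the toy reward stream is invented.
\\begin{verbatim}
import numpy as np

N, eta = 5, 0.975  # players per episode; decay constant from the text

def update_traces(e, r_ext):
    # e_j^t = eta * e_j^{t-1} + r_j^{E,t}, for all players j at once
    return eta * e + r_ext

def features(e, i):
    # f_i: player i's own trace occupies the first element, then the rest
    return np.concatenate(([e[i]], np.delete(e, i)))

e = np.zeros(N)  # decayed extrinsic-reward traces
for r_ext in ([1., 0., 0., 1., 0.], [0., 1., 0., 0., 0.]):  # toy rewards
    e = update_traces(e, np.array(r_ext))
f_2 = features(e, 2)  # input to player 2's reward network
\\end{verbatim}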
\n\nEvolution was based on a fitness measure calculated as a moving average of total episode return, which was a sum of apples collected minus penalties due to tagging, smoothed as follows:\n\\begin{equation}\n F_i^{n}= (1-\\nu) F_i^{n-1} + \\nu R_i^{n} \\, ,\n\\end{equation}\nwhere $\\nu = 0.001$ and $R_i^n = \\sum_t r^{E,t}_{i}$ is the total episode return obtained on episode $n$ by agent $i$ (or reward network $i$ in the case of shared reward network evolution; see Section \\ref{sec:multi_evolve} for details).\n\nTraining jointly optimized network parameters via SGD and hyperparameters\/reward network parameters via evolution, in the standard PBT setup. Gradient updates were applied for every trajectory up to a maximum length of 100 steps, using a batch size of 32. Optimization was via RMSProp with epsilon=$10^{-5}$, momentum=0, decay rate=0.99, and an RL discount factor of 0.99. The baseline cost weight (see \\citet{mnih2016}) was fixed at 0.25, and the entropy cost was sampled from LogUniform($2\\times10^{-4}$,0.01) and evolved throughout training using PBT. The learning rates were all initially set to $4\\times10^{-4}$ and then allowed to evolve.\n\nPBT uses evolution (specifically genetic algorithms) to search over a space of hyperparameters rather than manually tuning or performing a random search, resulting in an adaptive schedule of hyperparameters and joint optimization with network parameters learned through gradient descent \\cite{jaderberg2017population}. \n\nThere was a mutation rate of $0.1$ when evolving hyperparameters, using multiplicative perturbations of $\\pm 20\\%$ for entropy cost and learning rate, and additive perturbations of $\\pm 0.1$ for reward network parameters. We implemented a burn-in period for evolution of $4\\times10^6$ agent steps, to allow network parameters and hyperparameters to be used in enough episodes for an accurate assessment of fitness before evolution.\n\n\n\n\\subsection{Random vs. assortative matchmaking}\n\nMatches were determined according to two methods: (1) random matchmaking and (2) assortative matchmaking. Random matchmaking simply selected uniformly at random from the pool of agents to populate the game, while assortative matchmaking first ranked agents within the pool according to a metric of recent cooperativeness, and then grouped agents such that players of similar rank played with each other. This ensured that highly cooperative agents played only with other cooperative agents, while defecting agents played only with other defectors. For Cleanup, cooperativeness was calculated based on the number of steps in the last episode the agent chose to clean. For Harvest, it was calculated based on the difference between the agent's return and the mean return of all players, so that having less return than average yielded a high cooperativeness ranking. Assortative matchmaking was only used with either individual reward networks or no reward networks (Figure \\ref{fig:evo}b). We did not use assortative matchmaking for our multi-level selection model, since these are theoretically separate approaches.\n\n\n\\begin{figure*}[htb]\n \\centering\n \\includegraphics[width=0.9\\textwidth]{figures\/rewevo_episode_reward_recolored-01.png}\n \\vspace{-.5cm}\n\\caption{Total episode rewards, aggregated over players. (a), (b): Comparing retrospective (backward-looking) reward evolution with assortative matchmaking and PBT-only baseline in (a) Cleanup and (b) Harvest. 
(c), (d): Comparing prospective (forward-looking) with retrospective (backward-looking) reward evolution in (c) Cleanup and (d) Harvest. The black dotted line indicates performance from \\cite{hughes2018inequity}. The shaded region shows standard error of the mean, taken over the population of agents.}\n\\label{fig:results1}\n\\vspace{-.25cm}\n\\end{figure*}\n\n\n\\subsection{Individual vs. shared reward networks}\n\\label{sec:multi_evolve}\n\nBuilding on previous work that evolved either the intrinsic reward \\citep{jaderberg2018human} or the entire loss function \\citep{houthooft2018evolved}, we considered the reward network weights to be hyperparameters that could be evolved in parallel with the policy parameters (Figure \\ref{fig:evo}a). Distinct from these methods, we separately evolved the reward network within its own population, thereby allowing different modules of the agent to compete only with like components. This allowed for independent exploration of hyperparameters via separate credit assignment of fitness, and thus considerably more of the hyperparameter landscape could be explored compared with using only a single pool. In addition, reward networks could be randomly assigned to any policy network, and so were forced to generalize to a wide range of policies. In a given episode, $5$ separate policy networks were paired with the same reward network, which we term a {\\em shared reward network}. In line with \\citep{jaderberg2017population}, the fitness determining the copying of policy network weights and the evolution of optimization-related hyperparameters (entropy cost and learning rate) was based on individual agent return. By contrast, the reward network parameters were evolved according to fitness based on total episode return across the group of co-players (Figure \\ref{fig:evo}c).\n\n\n\nThis contribution is distinct from previous work that evolved intrinsic rewards \\citep[e.g.][]{jaderberg2018human} because (1) we evolve over social features rather than a remapping of environmental events, and (2) reward network evolution is motivated by dealing with the inherent tension in ISDs, rather than merely providing a denser reward signal. In this sense it is more akin to evolving a form of communication for social cooperation, rather than learning reward-shaping in a sparse-reward environment. We allow for multiple agents to share the same components, and as we shall see, in a social setting, this winds up being critical. Shared reward networks provide a biologically principled method that mixes group fitness on a long timescale and individual reward on a short timescale. This contrasts with hand-crafted means of aggregation, as in previous work \\citep{chang2004all,mataric1994learning}.\n\n\n\\section{Results}\n\n\nAs shown in Figure \\ref{fig:results1}, PBT without using an intrinsic reward network performs poorly on both games, where it asymptotes to 0 total episode reward in Cleanup and 400 for Harvest (the number of apples gained if all agents collect as quickly as they can). \n\nFigures \\ref{fig:results1}a-b compare random and assortative matchmaking with PBT and reward networks using retrospective social features. When using random matchmaking, individual reward network agents perform no better than PBT at Cleanup, and only moderately better at Harvest. Hence there is little benefit to adding reward networks over social features if players have separate networks, as these tend to be evolved selfishly. 
The assortative matchmaking experiments used either no reward network ($u(\\mathbf{f}) = 0$) or individual reward networks. Without a reward network, performance was the same as the PBT baseline. With individual reward networks, performance was very high, indicating that both conditioning the internal rewards on social features and a preference for cooperative agents to play together were key to resolving the dilemma. On the other hand, shared reward network agents performed as well as assortative matchmaking and the handcrafted inequity aversion intrinsic reward from \\citep{hughes2018inequity}, even using random matchmaking. This implies that agents didn't necessarily need to have immediate access to honest signals of other agents' cooperativeness to resolve the dilemma; it was enough to simply have the same intrinsic reward function, evolved according to collective episode return. Videos comparing performance of the PBT baseline with the retrospective variant of shared reward network evolution can be found at \\href{https:\/\/www.youtube.com\/watch?v=medBBLLM4c0}{https:\/\/youtu.be\/medBBLLM4c0} and \\href{https:\/\/www.youtube.com\/watch?v=yTjrlH3Ms9U}{https:\/\/youtu.be\/yTjrlH3Ms9U}.\n\nFigures \\ref{fig:results1}(c) and (d) compare the retrospective and prospective variants of reward network evolution. The prospective variant, although better than PBT when using a shared reward network, generally results in worse performance and more instability. This is likely because the prospective variant depends on agents learning good value estimates before the reward networks become useful, whereas the retrospective variant only depends on environmentally provided reward and thus does not suffer from this issue. Interestingly, we observed that the prospective variant does achieve very high performance if gradients are allowed to pass between agents via the value estimates $V^E_j$ (data not shown); however, this constitutes centralized learning, albeit with decentralized execution (see \\cite{foerster2016}). Such approaches are promising but less consistent with the biologically plausible mechanisms of multi-agent learning that are of interest here, and so were not pursued.\n\n\n\\begin{figure*}[htb]\n \\centering\n \\includegraphics[width=0.9\\textwidth]{figures\/social_outcome_metrics_recolored-01.png}\n \\vspace{-.25cm}\n\\caption{Social outcome metrics for (a) Cleanup and (b) Harvest. \\textit{Top:} equality, \\textit{middle:} total amount of tagging, \\textit{bottom:} sustainability. The shaded region shows the standard error of the mean.}\n\\label{fig:results2}\n\\vspace{-.25cm}\n\\end{figure*}\n\n\n\nWe next plot various social outcome metrics in order to better capture the complexities of agent behavior (see Figure \\ref{fig:results2}). \nEquality is calculated as $\\mathbb{E}(1-G(\\mathbf{R}))$, where $G(\\mathbf{R})$ is the Gini coefficient over individual returns. Figure \\ref{fig:results2}b demonstrates that, in Harvest, having the prospective version of reward networks tends to lead to lower equality, while the retrospective variant has very high equality. Equality in Cleanup is more unstable throughout training, since equality is not necessarily optimal there, but tends to be lower overall than for Harvest, even when performance is high, indicating that equality might be harder to achieve in different games.\nTagging measures the average number of times a player fined another player throughout the episode. 
The middle panel of Figure \\ref{fig:results2}b shows that there is a higher propensity for tagging in Harvest when using either a prospective reward network or an individual reward network, compared to the retrospective shared reward network. This explains the performance shown in Figure \\ref{fig:results1}, as being tagged results in a very high negative reward. Tagging in the Cleanup task is overall much lower than in Harvest.\nSustainability measures the average time step on which agents received positive reward, averaged over the episode and over agents. We see in the bottom panel of Figure \\ref{fig:results2}b that having no reward network results in players collecting apples extremely quickly in Harvest, compared with much more sustainable behavior with reward networks. In Cleanup, the sustainability metric is not meaningful and so this was not plotted.\n\n\nFinally, we can directly examine the weights of the final retrospective shared reward networks that were best at resolving the ISDs. Interestingly, the final weights evolved in the second layer suggest that resolving each game might require a different set of social preferences. In Cleanup, one of the final layer weights, $v_2$, evolved to be close to $0$, whereas in Harvest, $v_1$ and $v_2$ evolved to be of large magnitude but opposite sign. We can see a similar pattern with the biases $\\mathbf{b}$. We interpret this to mean that Cleanup required a less complex reward network: it was enough to simply find the rewards received by other agents intrinsically rewarding. In Harvest, however, a more complex reward function was perhaps needed in order to ensure that other agents were not over-exploiting the apples. We found that the first layer weights $\\mathbf{W}$ tended to take on arbitrary (but positive) values. This is because of random matchmaking: co-players were randomly selected and thus there was little evolutionary pressure to specialize these weights.\n\n\n\\begin{figure}[htb]\n \\centering\n \\includegraphics[width=0.48\\textwidth]{figures\/compare_weights}\n\\caption{Distribution of layer 2 weights and biases of the evolved retrospective shared reward network at $1.5\\times10^{8}$ training steps for (a) Cleanup, and (b) Harvest.}\n\\vspace{-0.3cm}\n\\label{fig:results3}\n\\end{figure}\n\\section{Introduction}\n\nNature shows a substantial amount of cooperation at all scales, from microscopic interactions of genomes and bacteria to species-wide societies of insects and humans \\citep{smith1997major}. This is in spite of natural selection pushing for short-term individual selfish interests \\citep{darwin1859}. In its purest form, altruism can be favored by selection when cooperating individuals preferentially interact with other cooperators, thus realizing the rewards of cooperation without being exploited by defectors \\citep{hamilton-1964a, hamilton1964genetical, dawkins1976selfish, santos2006cooperation, fletcher2009simple}. However, many other possibilities exist, including kin selection, reciprocity and group selection \\citep{Nowak1560, ubeda2011power, trivers1971evolution, nowak2005evolution, wilson1975theory, smith1964group}. \n\nLately, the emergence of cooperation among self-interested agents has become an important topic in multi-agent deep reinforcement learning (MARL). \\cite{leibo17} and \\cite{hughes2018inequity} formalize the problem domain as an {\\em intertemporal social dilemma} (ISD), which generalizes matrix game social dilemmas to Markov settings. 
Social dilemmas are characterized by a trade-off between collective welfare and individual utility. As predicted by evolutionary theory, self-interested reinforcement-learning agents are typically unable to achieve the collectively optimal outcome, converging instead to defecting strategies \\citep{leibo17, perolat17}.\nThe goal is to find multi-agent training regimes in which individuals resolve social dilemmas, i.e., cooperation emerges.\nPrevious work has found several solutions, belonging to three broad categories: 1) opponent modelling \\citep{foerster17, KleimanWeiner2016}, 2) long-term planning using perfect knowledge of the game's rules \\citep{lerer17, peysakhovich2018}, and 3) a specific intrinsic motivation function drawn from behavioral economics \\citep{hughes2018inequity}. These hand-crafted approaches are at odds with more recent end-to-end model-free learning algorithms, which have been shown to have a greater ability to generalize (e.g. \\citep{espeholt2018impala}). We propose that evolution can be applied to remove the hand-crafting of intrinsic motivation, similar to other applications of evolution in deep learning.\n\nEvolution has been used to optimize single-agent hyperparameters \\citep{jaderberg2017population}, to implement black-box optimization \\citep{wierstra2008natural}, and to evolve neuroarchitectures \\citep{miller1989designing, stanley2002evolving}, regularization \\citep{chan2002alleviating}, loss functions \\citep{jaderberg2018human, houthooft2018evolved}, behavioral diversity \\citep{conti2017improving}, and entire reward functions \\citep{singh2009rewards, singh2010intrinsically}. These methods tend to be driven by single-agent search and optimization or by competitive multi-agent tasks. Therefore, there is no guarantee of success when applying them in the ISD setting. More closely related to our domain are evolutionary simulations of predator-prey dynamics \\citep{yong2001cooperative}, which used enforced subpopulations to evolve populations of neurons that are sampled to form the hidden layer of a neural network.\\footnote{See also \\cite{potter2000cooperative} and \\cite{panait2005cooperative} for reviews of other evolutionary approaches to cooperative multi-agent problems.}\n\nTo address the specific challenges of ISDs, the system we propose distinguishes between optimization processes that unfold over two distinct time-scales: (1) the fast time-scale of learning and (2) the slow time-scale of evolution \\citep[similar to][]{hinton1987learning}. In the former, individual agents repeatedly participate in an intertemporal social dilemma using a fixed intrinsic motivation. In the latter, that motivation is itself subject to natural selection in a population. We model this intrinsic motivation as an additive term in the reward of each agent \\citep{chentanez2005}. We implement the intrinsic reward function as a two-layer fully-connected feed-forward neural network, whose weights define the genotype for evolution. We propose that evolution can help mitigate this intertemporal dilemma by bridging between these two timescales via an intrinsic reward function.\n\nEvolutionary theory predicts that evolving individual intrinsic reward weights across a population whose members interact uniformly at random does not lead to altruistic behavior \\citep{axelrod81}. Thus, to achieve our goal, we must structure the evolutionary dynamics \\citep{Nowak1560}. 
We first implement a ``Greenbeard'' strategy \\citep{dawkins1976selfish, jansen2006} in which agents choose interaction partners based on an honest, real-time signal of cooperativeness. We term this process {\\em assortative matchmaking}. Although there is ecological evidence of assortative matchmaking \\citep{keller98}, it cannot explain cooperation in all taxa \\citep{grafen1990animals, henrich2003, gardner2010greenbeards}. Moreover it isn't a general method for multi-agent reinforcement learning, since honest signals of cooperativeness are not normally observable in the ISD models typically studied in deep reinforcement learning.\n\n\n\n\n\\begin{figure*}[htb]\n \\centering \n \\includegraphics[width=0.7\\textwidth]{figures\/ssd_tasks}\n \\vspace{-.25cm}\n \\caption{Screenshots from (a) the Cleanup game, (b) the Harvest game. The size of the agent-centered observation window is shown in (b). The same size observation was used in all experiments.}\n \\vspace{-.25cm}\n \\label{fig:gallery}\n\\end{figure*}\n\n\n\nTo address the limitations of the assortative matchmaking approach, we introduce an alternative modular training scheme loosely inspired by ideas from the theory of multi-level (group) selection \\citep{wilson1975theory, henrich2003}, which we term {\\em shared reward network} evolution. Here, agents are composed of two neural network modules: a policy network and a reward network. On the fast timescale of reinforcement learning, the policy network is trained using the modified rewards specified by the reward network. On the slow timescale of evolution, the policy network and reward network modules evolve separately from one another. In each episode every agent has a distinct policy network but the same reward network. As before, the fitness for the policy network is the individual's reward. In contrast, the fitness for the reward network is the collective return for the entire group of co-players. In terms of multi-level selection theory, the policy networks are the lower level units of evolution and the reward networks are the higher level units. Evolving the two modules separately in this manner prevents evolved reward networks from overfitting to specific policies. This evolutionary paradigm not only resolves difficult ISDs without handcrafting but also points to a potential mechanism for the evolutionary origin of social inductive biases.\n\nThe paper is structured as follows. \nIn Section 2, we define our problem domain, and describe in detail our agent architecture and training methods. In Section 3, we present results from our experiments and further analyses of agent policies. Finally in Section 4, we discuss interpretations of our model as well as make suggestions for future work.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nSince the mid-2000s, mobile telephones have become an inseparable part of our lives. The constant communication between a device and the mobile network leaves an involuntary trace of our activities. The spatio-temporal logs of our journeys and additional communication device information establish a promising model to evaluate human mobility and socioeconomic customs.\n\nFor the last two decades, uncovering underlying information from mobile phone records has been a developing research field. Data scientists, spatial data analysts, physicists and applied mathematicians pay more and more attention to discovering cellular data. 
Turning large amounts of Call Detail Records (CDRs) into useful information requires various tools and expertise. In the last ten years, dozens of research groups have published in major research journals on different applications of mobile network data analysis.\n\nSt. Stephen's Day is celebrated in Hungary every year on the 20$^{th}$ of August. Tens of thousands of people visit the capital for its all-day celebrations and the main event, a 30-minute-long firework show. The main area of the event includes three bridges and the embankments on the Danube for approximately three kilometres. With great views from the Buda and Pest embankments and the Castle District, these areas should show a significant spike in cellular activity and will be the primary subject of the analysis.\n\n\n\\section{Related works}\nUsing call detail records spanning over 52 weeks, accumulated over two-week-long periods, Gonzalez \\textit{et al.} analysed 100,000 randomly selected individuals' movements over half a year in Europe. They described basic human mobility patterns and discovered that most individuals travel only short distances, and just a few move over hundreds of kilometres. The study approximated the probability density function of travel distances with a truncated power-law \\cite{gonzalez2008understanding}.\n\nCandia \\textit{et al.} carried out a comprehensive study on the mean collective behaviour of individuals and examined how space and time deviations can be described using known percolation theory tools. They also proved that the inter-event time between consecutive calls is heavy-tailed, agreeing with previous studies on other human activities \\cite{candia2008uncovering}.\n\nA typical application of CDR processing is the detection of sizeable social events and the estimation of attendance during mass gatherings \\cite{wirz2013probing, mamei2016estimating, barnett2016social, hiir2019impact}.\n\nThis study builds upon previous works at John von Neumann Faculty of Informatics, \u00d3buda University. The 2014 State Foundation Day data set has already been analysed \\cite{pinter2018evaluation} regarding the large social event. In this work, by contrast, the socioeconomic status of the attendees is studied.\n\nUsing mobile phone prices as Socioeconomic Status (SES) indicators has been shown to work well by Sultan \\textit{et al.} in \\cite{sultan2015mobile}. Using indicators of accessibility to services, infrastructure, hygiene and communication, they identified areas in Pakistan where more expensive phones appear more often. Their model performed with an absolute Pearson's correlation coefficient $> 0.35$ and p-value $< 0.01$.\n\nIn an earlier study, Pint\u00e9r \\textit{et al.} evaluated the connection between individuals' financial status and mobility customs. The authors used the radius of gyration, entropy, and Euclidean distance between home and work locations as mobility indicators and applied data fusion methods with average real estate prices to determine the influence of wealth on mobility customs \\cite{pinter2021evaluating}.\n\nRegarding socioeconomic status analysis, Pint\u00e9r \\textit{et al.} evaluated football fans' SES using their mobile phone details in Budapest during the 2016 UEFA European Football Championship. During data preprocessing, they used Type Allocation Code (TAC) databases to eliminate the CDRs of Subscriber Identity Modules (SIMs) that did not operate in mobile phones \\cite{pinter2021analyzing}. In another work, they analysed subscribers' wake-up times and explained how these correlate with their socioeconomic status. The analysis demonstrated a strong positive connection between the two indicators. They also showed that the mobile phone prices in the TAC database might have depreciated \\cite{pinter2022awakening}.\n\n\n\\section{Mobile network data}\n\nA CDR data set usually contains a caller ID, the cell tower it is connected to, its location, and a timestamp. Additional information on the purpose of communication, device type, and SIM holders' details helps investigate more than just trajectories and cell densities.\n\nThe data set used for this research was provided by Vodafone Hungary Ltd. The number of active SIMs was 11,540,058 in Hungary, of which Vodafone had an estimated 22.7 per cent market share in June 2014 (Hungarian National Media and Infocommunications Authority). These CDRs contain anonymous logs of customers' calls and text messages in Budapest, Hungary and its suburban areas, over approximately 525.14 km$^2$ (Hungarian Central Statistical Office).\n\nThe data set was collected between the 18$^{th}$ and 22$^{nd}$ of August 2014. A total of 191,528,883 records were logged across 8,890 cells. Three comma-separated value files were acquired for the analysis in this research. The first one contains the call detail records; the second and third contain supplementary information about cells and devices.\n\nThe CDRs in this data set consist of a timestamp, a hashed device identification (ID), a hashed cell ID and a type allocation code. The TAC is the initial eight-digit segment of a device's International Mobile Equipment Identity (IMEI); while the full IMEI uniquely identifies a particular device, the TAC identifies only its manufacturer and model. In this data set, CDRs are of active call record type, meaning a record was made when the user was making a call or sending\/receiving a text message. Unfortunately, the data set does not contain information on cell switching, which would make more granular data possible.\n\nThe supplementary cell lookup table contains cell IDs and positions as 2D coordinates in decimal degrees format. Cells at the same location were merged into base stations for further analysis, and the corresponding cell ID values in the CDRs were updated. Merging cells in this way makes the data analysis more straightforward; note, however, that cell tower antennas planted at an exact location might face different directions, but we do not have this information.\n\nThe device table contains a hashed device ID, the customer's age, gender, whether the subscriber is an individual or a business, and the subscription type (prepaid or postpaid). Some age and gender information is missing due to privacy restrictions.\n\n\n\\section{Methodology}\n\nDuring data cleansing, unnecessary spaces were removed from the ends of the lines. Furthermore, device and cell IDs arrived hashed, a fundamental step for user privacy. However, the hashes have low information density and were therefore replaced with incrementing integer values. This reassignment does not affect the information content but reduces the data size. Due to the considerable number of CDRs, the comma-separated value text files have been loaded into an SQLite database, with the scheme illustrated in Figure \\ref{fig:database}. 
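A minimal sketch of this scheme, using Python's built-in sqlite3 module, is shown below; the table and column names are our assumptions based on the description, not the exact production schema.
\\begin{verbatim}
import sqlite3

con = sqlite3.connect('cdr_budapest_2014.db')
con.executescript('''
CREATE TABLE cell   (cell_id   INTEGER PRIMARY KEY,
                     lat REAL, lon REAL);              -- decimal degrees
CREATE TABLE device (device_id INTEGER PRIMARY KEY,
                     age INTEGER, gender TEXT,         -- may be NULL
                     customer_type TEXT,               -- individual or business
                     subscription  TEXT);              -- prepaid or postpaid
CREATE TABLE cdr    (ts        INTEGER,                -- timestamp
                     device_id INTEGER REFERENCES device(device_id),
                     cell_id   INTEGER REFERENCES cell(cell_id),
                     tac       INTEGER);
CREATE INDEX idx_cdr_ts     ON cdr(ts);
CREATE INDEX idx_cdr_device ON cdr(device_id);
CREATE INDEX idx_cdr_tac    ON cdr(tac);
''')
\\end{verbatim}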
To speed up queries, indices were created on timestamps, IDs and TACs.\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=0.5\\linewidth]{images\/database.pdf}\n\t\\caption{The database structure showing CDR, device and cell tables with foreign key connections.}\n\t\\label{fig:database}\n\\end{figure}\n\nAlthough latitude and longitude values are in string format, there is no need for the original precision of 13 decimal places. This would mean $\\mu m$ resolution, while anything beyond the sixth decimal place is useless in this application. The unnecessary information can be discarded to save space and increase query speeds.\n\nAs a socioeconomic status indicator, the analysis used mobile phone ages in months relative to the event (August 2014) and phone release prices in EUR. Information for resolving the TACs comes from the data provided by 51Degrees, fused \\cite{pinter2021analyzing} with the GSMArena database \\cite{gsmarena}.\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=0.65\\linewidth]{images\/cells_firework.pdf}\n\t\\caption{The selected cells for the large social event analysis.}\n\t\\label{fig:cellsfirework}\n\\end{figure}\n\nA data processing framework has been developed to acquire the relevant CDRs with the paired SES indicators. The first criterion is spatial: being in a position that could indicate attendance at the fireworks. Cells along the two sides of the Danube are expected to be the primary cells servicing the attendees' mobile phones on the embankments. These cell IDs were selected manually due to the uneven separation line on the river. Any other cells that might support the selected areas are determined by a 250 m radius around the main event area, as visible in Figure \\ref{fig:cellsfirework}. Four cells were removed from the evaluation due to their insignificant activity count (less than 500) during the event. Extracting the selected cell IDs helps filter and transform the large CDR database table into a more manageable format. The temporal filtering rule for the fireworks data is $\\pm$ 30 minutes around the actual event. This also includes users who attended the venues but did not use their phones during the half-hour show.\n\nUser data from the device database table can be joined on the CDRs' device ID fields. This gives us the ability to analyse age and gender distributions in selected groups. The selected CDRs have been joined on the corresponding TAC values from the merged mobile phone property database for the SES indicator aggregation. 
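To illustrate the filtering and joining steps just described, the following sketch extracts the event-window records for the selected cells and attaches the SES indicators; the phone property table, the event-cell list and the epoch bounds are illustrative assumptions (the sketch assumes Unix-epoch timestamps).
\\begin{verbatim}
import sqlite3

con = sqlite3.connect('cdr_budapest_2014.db')

# 2014-08-20, 20:00--21:30 CEST as illustrative Unix epochs
T0, T1 = 1408557600, 1408563000

rows = con.execute('''
SELECT d.age, d.gender, p.price_eur, p.release_date, c.cell_id
FROM cdr c
JOIN device d      ON d.device_id = c.device_id
JOIN phone_props p ON p.tac = c.tac  -- 51Degrees + GSMArena fusion
WHERE c.ts BETWEEN ? AND ?
  AND c.cell_id IN (SELECT cell_id FROM event_cells)  -- manually selected
''', (T0, T1)).fetchall()
\\end{verbatim}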
For further analysis, relative ages of the appearing phones have been calculated from release dates and months to the event date (August 2014).\n\n\n\\begin{figure}[H]\n\t\\centering\n\t\\begin{subfigure}[t]{0.49\\linewidth}\n\t\t\\centering\n\t\t\\includegraphics[width=\\linewidth]{images\/event_normal.pdf}\n\t\t\\caption{Daily cell activities in the studied area.}\n\t\t\\label{subfig:event_cells}\n\t\\end{subfigure}\n\t\\hfill\n\t\\begin{subfigure}[t]{0.49\\linewidth}\n\t\t\\centering\n\t\t\\includegraphics[width=\\linewidth]{images\/event_cells.pdf}\n\t\t\\caption{Daily cell activities between the studied cells.}\n\t\t\\label{subfig:event_normal}\n\t\\end{subfigure}\n\t\\caption{Cell activities showing the extra mobile network load due to the State Foundation Day celebrations on the river Danube embankments and the Castle District in Budapest.}\n\t\\label{fig:event_day}\n\\end{figure}\n\n\n\\section{Results}\n\n\\begin{figure}[H]\n\t\\centering\n\t\\begin{subfigure}[t]{0.49\\linewidth}\n\t\t\\centering\n\t\t\\includegraphics[width=\\linewidth]{images\/firework_price_in_eur.pdf}\n\t\t\\caption{Mobile phone prices, a higher value means higher SES.}\n\t\t\\label{subfig:firework_price}\n\t\\end{subfigure}\n\t\\hfill\n\t\\begin{subfigure}[t]{0.49\\linewidth}\n\t\t\\centering\n\t\t\\includegraphics[width=\\linewidth]{images\/firework_relative_age.pdf}\n\t\t\\caption{Mobile phone relative ages to the event date (August 2014) in months, a higher value means lower SES.}\n\t\t\\label{subfig:firework_age}\n\t\\end{subfigure}\n\t\\caption{Average socioeconomic status indicator distributions by riverside cells among the large social event attendees.}\n\t\\label{fig:firework}\n\\end{figure}\n\nThe analysis focused on the socioeconomic status indicator distribution among the State Foundation Day celebratory firework viewers in Budapest, on the banks of the river Danube and in the Castle District in August 2014. The time frame is 20:00 -- 21:30, including half an hour before and after the 30-minute show, marked with vertical lines in Figure \\ref{fig:event_day}.\n\nA cell-by-cell average of mobile phone prices and relative ages was calculated for the SES indicator distribution. Figure \\ref{fig:firework} shows the spatial distribution of these indicators using Voronoi polygons generated around the cell tower locations. Cells are coloured by the average SES indicators; the higher the value, the darker the colour.\n\nFigure \\ref{fig:firework} demonstrates an opposite trend between mobile phone price and age. The scatter plot of the same data is shown in Figure \\ref{fig:correlation}, where the Pearson correlation coefficient $= -0.7329$. Data points are coloured based on their location in the city, but there are no visible groups based on SES.\n\nThe expectation was that there would be a significant contrast between Buda and Pest in socioeconomic indicator distribution. However, there are only minor differences in this spatial resolution, from which it can be concluded that those interested in the event do not divide drastically into socioeconomic groups.\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=0.4\\linewidth]{images\/correlation.pdf}\n\t\\caption{The correlation between average phone prices and ages in cells, where Pearson's $r = -0.7329$.}\n\t\\label{fig:correlation}\n\\end{figure}\n\n\\section{Conclusions and future work}\n\nThis paper presented a concept of socioeconomic data analysis on a large social event using call detail records and mobile phone details. 
This work fits into current research trends, fusing mobility data generated by mobile networks with socioeconomic descriptors, making it possible to deduce socioeconomic status from anonymous call detail records.\n\nWe expected that in Buda, where the housing prices are higher \\cite{pinter2021evaluating}, more expensive and newer phones would generate the majority of the activity. Nonetheless, we found that slightly more expensive phones were active in Pest, but the difference, on average, was not substantial. The base station level aggregation may have partly caused this result, or the attendees might simply not have separated into groups by social status while watching the fireworks.\n\nFor future work, the firework attendees could be grouped into visitors and homeowners in the activity areas. Calculating home positions from CDRs has already been demonstrated to be helpful in \\cite{pinter2018evaluation} and could make a difference in the conclusion of this study.\n\n\n\n\\section*{Author contributions}\n\nMethodology, K.S. and G.P.;\nConceptualisation, G.P. and K.S.;\nSoftware -- data processing, K.S.;\nSoftware -- data preprocessing, G.P.;\nValidation, K.S. and G.P.;\nVisualisation, K.S.;\nWriting, K.S.;\nSupervision, G.P. and I.F.\n\n\n\\section*{Acknowledgement}\n\nThe authors would like to thank Vodafone Hungary and 51Degrees for providing the Call Detail Records and the Type Allocation Code database used in this study. Map tiles by CartoDB, under CC BY 3.0.\n\n\n\\printbibliography\n\n\n\\end{document}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n{"text":"\\section{Introduction}\n\nA simple model of itinerant antiferromagnets is provided by electrons on a\nlattice with short-range repulsion. In the low temperature phase, the system\nis in a Spin Density Wave (SDW) state. In three dimensions, above the\ntransition temperature, the electrons form a so-called {\\it nearly\nantiferromagnetic Fermi liquid}. Traditional mean-field techniques for\nstudying SDW instabilities of Fermi liquids fail completely in low\ndimension. In two dimensions, for example, the Random Phase Approximation\n(RPA) predicts finite temperature antiferromagnetic transitions while this\nis forbidden by the Mermin-Wagner theorem. Nevertheless, one can study\nuniversal critical behavior using various forms of renormalization group\ntreatments appropriate either for the strong\\cite{Chakravarty}\\cite{Sachdev}\n\\cite{Chubukov} or the weak-coupling limits\\cite{Hertz}\\cite{Millis}. The\nself-consistent-renormalized approach of Moriya\\cite{moriya} also satisfies\nthe Mermin-Wagner theorem in two dimensions. Since cutoff-dependent scales\nare left undetermined by all these approaches, they must be found by other\nmethods. For example, in the strong-coupling limit, the spin-stiffness\nconstant of the non-linear $\\sigma$-model must be determined from Monte\nCarlo simulations. In the weak-coupling case, however, Monte Carlo\nsimulations are limited to very small systems, of order $10\\times 10$, which\ndo not allow one to study much of the critical regime.\n\nRecently, the Two-Particle Self-Consistent approach\\cite{Vilk} was developed\nto obtain from a microscopic model a {\\it quantitative} description of\nitinerant electrons not only far from phase transitions, but also in the\ncritical regime. 
It was shown\\cite{Vilk} that in this approach the\nMermin-Wagner theorem is satisfied and that, away from the critical regime,\nthe approach gives quantitative agreement with Monte Carlo simulations of\nthe nearest-neighbor\\cite{Vilk} and next-nearest neighbor\\cite{Veilleux}\nHubbard model in two dimensions. Quantitative agreement is also obtained as\none enters the narrow critical regime accessible in Monte Carlo simulations.\nThe approach is restricted to the one-band Hubbard model with on-site\ninteraction, but is valid for an arbitrary dispersion relation. The TPSC\napproach also allows one to study the case where the instability of the\nitinerant electrons is at an incommensurate wave-vector, but in this paper\nwe restrict ourselves to the case where the order is at the\nantiferromagnetic wave vector. The self-consistent-renormalized approach of\nMoriya\\cite{moriya} cannot deal with the incommensurate case without {\\it a\npriori }information. Even though it has the same critical behavior as the\nTPSC approach, it does not allow one to obtain quantitative parameter-free\nresults from a microscopic Hamiltonian.\n\nWe first show in full generality that the TPSC approach gives the leading\nterm of the critical behavior in a $1\/n$ expansion. In other words, it gives\nthe $n\\rightarrow \\infty $ limit of the $O\\left( n\\right) $ model where $n=3$\nis the physically correct (Heisenberg) limit. It will be apparent that there\nis no arbitrariness in the cutoff so that, given a microscopic Hubbard model,\nno parameter is left undetermined. One can go with the same theory from the\nnon-critical to the critical regime.\n\nWe then show that the previously studied two-dimensional critical regimes,\nnamely quantum-critical\\cite{Sachdev} and renormalized classical\\cite\n{Chakravarty}, are reproduced here to leading order in $1\/n$. In the quantum\ncritical regime, one usually distinguishes two cases\\cite{Sachdev}: Model A,\nwhere the paramagnetic Fermi surface does not intersect the magnetic\nBrillouin zone, and Model B, where it does. This distinction is important in\nthe quantum critical regime because it changes the dynamical critical\nexponent. In this paper, we also give results on Model C, the case of\nperfect nesting. In this case, the microscopic approach shows that\nmodifications to frequency-independent thermodynamic properties can arise.\nIn particular, in the two-dimensional perfect-nesting case the usual\nexponential dependence of the correlation length on temperature $\\exp \\left(\ncst\/T\\right) $ can be modified to be roughly $\\exp \\left( cst\/T^3\\right) $\nin some temperature region of the renormalized classical regime.\n\nThen we study the renormalized-classical crossover from $d=2$ to $d=3$ in\nthe highly anisotropic case of weakly coupled planes.\\cite{Konno} The\ngeneral theory of such a crossover is given in Appendix D, along with a\ndiscussion of universal crossover functions. In the main text it is shown\nthat in the highly anisotropic case the crossover can occur in a rather\nunusual regime, namely $t_{\\Vert }\\ll k_BT_N\\ll t_{\\bot }$ where $t_{\\Vert\n}\\left( t_{\\bot }\\right) $ is the inter (intra) plane hopping integral and $%\nT_N$ is the three-dimensional N\\'{e}el temperature. 
This regime is unusual\nbecause even though one is dealing with an itinerant fermion system, the\ninequality $t_{\\Vert }\\ll k_BT_N$ means that the smallest fermionic\nMatsubara frequency is larger than the dispersion in the parallel direction,\nmaking the three-dimensional band structure irrelevant for one-particle\nproperties. In the language of Refs.\\cite{BourbonnaisCaron}\\cite{Boies},\nthere is ``no coherent band motion'' in the parallel direction. Physically,\nthe extent of the thermal de Broglie wave packet in the direction\nperpendicular to the planes is smaller than the distance between planes, a\nsituation that does not occur in a usual Fermi liquid since in the isotropic\ncase the inequality $k_BT\\ll E_F$ implies that the thermal de Broglie\nwavelength is much larger than the lattice spacing. Another way of\ndescribing this $t_{\\Vert }\\ll k_BT_N\\ll t_{\\bot }$ situation is to say that\nthe itinerant electrons become unstable at the two-particle level while\ntheir motion in the third direction is still quasi-classical, or quantum\nincoherent, at the single-particle level because of thermal fluctuations. In\nthe more usual situation, coherence at the one-particle level is established\nbefore the phase transition, namely $k_BT_N\\ll t_{\\Vert }\\ll t_{\\bot }$.\nThese two regimes have been extensively discussed in the $d=1$ to $d=3$\ncrossover of Luttinger liquids by Bourbonnais and Caron\\cite\n{BourbonnaisCaron}\\cite{Boies}.\n\nThe above single-particle incoherent regime $t_{\\Vert }\\ll k_BT_N\\ll t_{\\bot\n}$ is likely to be the relevant one for high-temperature superconductors.\nWhile the parent insulating compound $La_2CuO_4$ has been extensively\nstudied in the strong coupling limit, this type of compound is expected to\nbe in an intermediate-coupling regime. Hence, it is legitimate to approach\nthe problem not only from the strong-coupling limit\\cite{Keimer} but also\nfrom the weak-coupling side, especially with the TPSC approach where all\ncutoffs are determined by the microscopic model. This problem is commented\non at the end of the paper. More detailed quantitative comparisons with\nexperiment will appear later.\n\n\\section{Two-Particle Self-Consistent approach}\n\nWe start from the Hubbard model,\n\n\\begin{equation}\nH=-\\sum_{\\sigma }t_{i,j}\\left( c_{i\\sigma }^{\\dagger }c_{j\\sigma\n}+c_{j\\sigma }^{\\dagger }c_{i\\sigma }\\right) +U\\sum_in_{i\\uparrow\n}n_{i\\downarrow \\,\\,\\,\\,}. \\label{Hubbard}\n\\end{equation}\nIn this expression, the operator $c_{i\\sigma }$ destroys an electron of spin \n$\\sigma $ at site $i$. Its adjoint $c_{i\\sigma }^{\\dagger }$ creates an\nelectron. The symmetric hopping matrix $t_{i,j}$ determines the band\nstructure. Double occupation of a site costs an energy $U$ due to the\nscreened Coulomb interaction. In the present section, the hopping parameters\nneed not be specified. We work in units where $k_B=1$, and $\\hbar =1$. As an\nexample that occurs later, the dispersion relation in the $d$-dimensional\nnearest-neighbor model when the lattice spacing is $a$ is given by \n\\begin{equation}\n\\epsilon _{{\\bf k}}=-2t\\sum_{i=1}^d\\left( \\cos k_ia\\right) .\n\\end{equation}\nThe nearest-neighbor quasi-two dimensional case will be another case of\ninterest later, \n\\begin{equation}\n\\epsilon _{{\\bf k}}=-2t_{\\bot }\\left( \\cos k_xa_{\\bot }+\\cos k_ya_{\\bot\n}\\right) -2t_{\\Vert }\\cos k_za_{\\Vert }.\n\\end{equation}\n\nThe TPSC approach\\cite{Vilk},\\cite{Vilk2} can be summarized as follows. 
One approximates spin and charge susceptibilities $\\chi _{sp}$, $\\chi _{ch}$ by RPA-like forms but with two different effective interactions $U_{sp}$ and $U_{ch}$ which are then determined self-consistently. Although the susceptibilities have an RPA functional form, the physical properties of the theory are very different from RPA because of the self-consistency conditions on $U_{sp}$ and $U_{ch}$. The necessity to have two different effective interactions for spin and for charge is dictated by the Pauli exclusion principle $\\langle n_\\sigma ^2\\rangle =\\langle n_\\sigma \\rangle $, which implies that both $\\chi _{sp}$ and $\\chi _{ch}$ are related to only one local pair correlation function $\\langle n_{\\uparrow }n_{\\downarrow }\\rangle $. Indeed, using the fluctuation-dissipation theorem in the Matsubara formalism we have the exact sum rules,
\\begin{equation}
\\langle n_{\\uparrow }^2\\rangle +\\langle n_{\\downarrow }^2\\rangle +2\\langle n_{\\uparrow }n_{\\downarrow }\\rangle -n^2=\\frac 1{\\beta N}\\sum_{\\widetilde{q}}\\chi _{ch}(\\widetilde{q})
\\end{equation}
and
\\begin{equation}
\\langle n_{\\uparrow }^2\\rangle +\\langle n_{\\downarrow }^2\\rangle -2\\langle n_{\\uparrow }n_{\\downarrow }\\rangle =\\frac 1{\\beta N}\\sum_{\\widetilde{q}}\\chi _{sp}(\\widetilde{q})
\\end{equation}
where $\\beta \\equiv 1\/T$, $n=\\langle n_{\\uparrow }\\rangle +\\langle n_{\\downarrow }\\rangle $, $\\widetilde{q}=({\\bf q},iq_n)$ with ${\\bf q}$ the wave vectors of an $N$ site lattice, and with $iq_n=2\\pi inT$ the bosonic Matsubara frequencies. The Pauli principle $\\langle n_\\sigma ^2\\rangle =\\langle n_\\sigma \\rangle $ applied to the left-hand side of both equations, with our RPA-like forms for $\\chi _{sp}$, $\\chi _{ch}$ on the right-hand side, leads to
\\begin{equation}
n+2\\langle n_{\\uparrow }n_{\\downarrow }\\rangle -n^2=\\frac 1{\\beta N}\\sum_{\\widetilde{q}}\\frac{\\chi _0(\\widetilde{q})}{1+\\frac 12U_{ch}\\chi _0(\\widetilde{q})}, \\label{sumCharge}
\\end{equation}
\\begin{equation}
n-2\\langle n_{\\uparrow }n_{\\downarrow }\\rangle =\\frac 1{\\beta N}\\sum_{\\widetilde{q}}\\frac{\\chi _0(\\widetilde{q})}{1-\\frac 12U_{sp}\\chi _0(\\widetilde{q})}, \\label{sumSpin}
\\end{equation}
with $\\chi _0(\\widetilde{q})$ the susceptibility for non-interacting electrons.

If $\\langle n_{\\uparrow }n_{\\downarrow }\\rangle $ is known, $U_{sp}$ and $U_{ch}$ are determined from the above equations. This key quantity $\\langle n_{\\uparrow }n_{\\downarrow }\\rangle $ can be obtained from Monte Carlo simulations or by other means. However, it may also be obtained self-consistently\\cite{Vilk} by adding to the above set of equations the relation
\\begin{equation}
U_{sp}=g_{\\uparrow \\downarrow }(0)\\,U\\quad ;\\quad g_{\\uparrow \\downarrow }(0)\\equiv \\frac{\\langle n_{\\uparrow }n_{\\downarrow }\\rangle }{\\langle n_{\\downarrow }\\rangle \\langle n_{\\uparrow }\\rangle }. \\label{Usp}
\\end{equation}
Eqs.(\\ref{sumSpin}) and (\\ref{Usp}) define a set of self-consistent equations for $U_{sp}$ that involve only two-particle quantities. We call this approach Two-Particle Self-Consistent to contrast it with other conserving approximations like Hartree-Fock or FLEX\\cite{FLEX} that are self-consistent at the one-particle level, but not at the two-particle level. A minimal numerical illustration of this self-consistency follows.
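To make the structure of these equations concrete, note that in a paramagnetic state with $\\langle n_{\\uparrow }\\rangle =\\langle n_{\\downarrow }\\rangle =n\/2$, the {\\it ansatz} Eq.(\\ref{Usp}) gives $\\langle n_{\\uparrow }n_{\\downarrow }\\rangle =\\left( n^2\/4\\right) U_{sp}\/U$, so that the sum rule Eq.(\\ref{sumSpin}) closes into one equation for the single unknown $U_{sp}$. The sketch below (our own illustration, not code from the references; the small lattice, the temperature, and the Matsubara cutoff are arbitrary values chosen for speed) solves this equation by bisection for the two-dimensional nearest-neighbor model at half-filling.

\\begin{verbatim}
import numpy as np

L, T, U, n, mu = 16, 0.25, 4.0, 1.0, 0.0  # small grid: illustration only
NMAT = 32                                 # bosonic Matsubara cutoff

k = 2 * np.pi * np.arange(L) / L
kx, ky = np.meshgrid(k, k, indexing="ij")
ek = -2.0 * (np.cos(kx) + np.cos(ky))     # nearest-neighbor band, t = a = 1
f = 1.0 / (np.exp((ek - mu) / T) + 1.0)   # Fermi function

def chi0(qx, qy, iqn):
    """Non-interacting (Lindhard) susceptibility; factor 2 is the spin sum."""
    ekq = -2.0 * (np.cos(kx + qx) + np.cos(ky + qy))
    fq = 1.0 / (np.exp((ekq - mu) / T) + 1.0)
    den = iqn + ek - ekq
    if iqn == 0:                          # the 0/0 limit gives -df/de
        deg = np.abs(den) < 1e-12
        r = np.where(deg, f * (1.0 - f) / T,
                     (f - fq) / np.where(deg, 1.0, den))
        return 2.0 * r.real.mean()
    return 2.0 * ((f - fq) / den).real.mean()

# chi_0 on the (q, iq_n) grid entering the sum rule
chi = np.array([[chi0(qx, qy, 2j * np.pi * m * T)
                 for m in range(-NMAT, NMAT + 1)]
                for qx in k for qy in k])

def sum_rule_mismatch(Usp):
    """(1/(beta N)) sum of chi_sp minus n - 2<n_up n_dn>, using the
    ansatz <n_up n_dn> = (n^2/4) Usp / U."""
    lhs = (T / L**2) * np.sum(chi / (1.0 - 0.5 * Usp * chi))
    return lhs - (n - 0.5 * n**2 * Usp / U)

Umfc = 2.0 / chi0(np.pi, np.pi, 0)        # mean-field critical interaction
lo, hi = 0.0, 0.999 * Umfc                # U_sp is always pinned below U_mf,c
for _ in range(60):                       # bisection
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if sum_rule_mismatch(mid) < 0.0 else (lo, mid)
print("U_sp =", round(0.5 * (lo + hi), 4), "  U_mf,c =", round(Umfc, 4))
\\end{verbatim}

Because the right-hand side of Eq.(\\ref{sumSpin}) diverges as $U_{sp}\\rightarrow 2\/\\chi _0({\\bf Q}_2,0)$, the sum rule pins $U_{sp}$ below this mean-field critical value at any finite temperature; this is the origin of both the Kanamori-Brueckner screening and the Mermin-Wagner behavior discussed next.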
The above procedure\\cite{Vilk} reproduces both Kanamori-Brueckner screening and the effect of Mermin-Wagner thermal fluctuations, giving a phase transition only at zero temperature in two dimensions, as discussed in the following section. Quantitative agreement with Monte Carlo simulations on the nearest-neighbor\\cite{Vilk} and next-nearest-neighbor models\\cite{Veilleux} is obtained for all fillings and temperatures in the weak-to-intermediate coupling regime $U<8t$.

We emphasize that deep in the critical regime, the {\\it ansatz} Eq.(\\ref{Usp}) fails in the sense that $g_{\\uparrow \\downarrow }(0)$ eventually reaches zero at $T=0$ in the nearest-neighbor Hubbard model at half-filling, while there is no reason to believe that this really happens. The physically appropriate choice in the renormalized classical regime described below is to keep the value of $g_{\\uparrow \\downarrow }(0)$ fixed at its crossover-temperature value. In the numerical calculations also described below, we are never far enough from $T_X$ to have to worry about this. The value of $g_{\\uparrow \\downarrow }(0)$ is the one that is determined self-consistently.

\\section{Critical behavior of the TPSC approach in arbitrary dimension}

In this section we discuss the critical behavior of the TPSC approach in arbitrary dimension for hypercubic systems. It is convenient to set the lattice spacing to unity.

As one approaches a phase transition, one enters the {\\it renormalized classical} regime,\\cite{Chakravarty} where classical thermal fluctuations dominate. In this case, the universality class for {\\it static} properties is fully determined by two exponents. Dynamics must also be considered, so that one introduces a dynamical critical exponent.

We consider the case where the transition is at the antiferromagnetic wave vector ${\\bf Q}_d$ in $d$ dimensions: ${\\bf Q}_2=\\left( \\pi ,\\pi \\right) $, ${\\bf Q}_3=\\left( \\pi ,\\pi ,\\pi \\right) $, etc. Since ${\\bf Q}_d$ is at the corner of the Brillouin zone, the spin susceptibility $\\chi _0\\left( {\\bf Q}_d\\right) $ is always, by symmetry, an extremum. This extremum is the absolute maximum at half-filling not only in the nearest-neighbor hopping model, but also in more general models with next-nearest-neighbor hopping\\cite{Veilleux}\\cite{Benard}. The nearest-neighbor model is discussed in more detail at the end of this section. It has some special features resulting from the additional nesting symmetry. In the two-dimensional case, we also comment on peculiarities of nesting and on quantum-critical behavior\\cite{Chubukov}\\cite{Sachdev}.

\\subsection{Renormalized classical regime.}

As one decreases the temperature sufficiently close to the phase transition, there appears a small energy scale $\\delta U$ that measures the proximity to the phase transition as determined by the Stoner criterion. This scale is defined more precisely in Eq.(\\ref{DeltaU}). The key physical point is that this energy scale is the smallest one in the problem. In particular, it is smaller than the temperature,
\\begin{equation}
\\delta U\\ll T
\\end{equation}
so that the zero Matsubara frequency, representing classical behavior, dominates all others. The self-consistency conditions Eqs.(\\ref{sumSpin}) and (\\ref{Usp}) then lead to a strong temperature dependence of $\\delta U$. This is the renormalized-classical regime\\cite{Chakravarty}.
In this regime, the antiferromagnetic correlation length $\\xi $ becomes so large that\\cite{Vilk2}
\\begin{equation}
\\xi \\gg \\xi _{th}
\\end{equation}
where
\\begin{equation}
\\xi _{th}\\equiv \\frac{\\left\\langle v_F\\right\\rangle }{\\pi T}
\\end{equation}
is the single-particle thermal de Broglie wavelength and $\\left\\langle v_F\\right\\rangle $ is the Fermi velocity averaged over the Fermi surface. This provides a partial justification for the usual procedure\\cite{Millis}\\cite{Hertz} that completely eliminates the fermionic variables and describes the system in terms of collective bosonic variables, as is usually done in Hubbard-Stratonovich types of approaches.

We first show that when most of the temperature dependence of the susceptibility comes from the temperature dependence of $\\delta U$, the RPA-like form that we have implies that {\\it in any dimension} the dynamical exponent is $z=2$, while the classical exponent $\\gamma \/\\nu =2-\\eta $ takes the value $\\gamma \/\\nu =2$. The other classical exponent $\\nu $ is determined from the self-consistency condition Eq.(\\ref{sumSpin}). We show that the corresponding universality class is the same as the $n\\rightarrow \\infty $ limit of the $O\\left( n\\right) $ classical model. This universality class is known in turn to be the same as that of the spherical model.\\cite{Stanley} We conclude this discussion with the lower critical dimension $d=2$. There the exponent $\\nu $ cannot strictly be defined since, as was shown before\\cite{Vilk}, the correlation length diverges exponentially at zero temperature instead of diverging as a power law at finite temperature. This behavior is also the one expected from the $n\\rightarrow \\infty $ model, although nesting leads to different temperature dependences that are explained further below.

\\subsubsection{Exponents $\\gamma \/\\nu $ and $z$ in arbitrary dimension}

The antiferromagnetic transition is characterized by the appearance of a small energy scale, or equivalently a large correlation length, in the retarded spin susceptibility
\\begin{equation}
\\chi _{sp}^R({\\bf q,}\\omega )=\\frac{\\chi _0^R({\\bf q,}\\omega )}{1-\\frac 12U_{sp}\\chi _0^R({\\bf q,}\\omega )}. \\label{GeneralRPA}
\\end{equation}
The small energy scale is set by
\\begin{equation}
\\delta U=U_{mf,c}-U_{sp} \\label{DeltaU}
\\end{equation}
where the temperature-dependent ``mean-field critical'' interaction
\\begin{equation}
U_{mf,c}\\equiv 2\/\\chi _0\\left( {\\bf Q}_d,0\\right)
\\end{equation}
is the temperature-dependent value of $U_{sp}$ at which a phase transition would occur according to mean-field theory. In the vicinity of this point the small energy scale $\\delta U$ allows us to approximate $\\chi _{sp}^R({\\bf q,}\\omega )$ by expanding the denominator near ${\\bf q\\approx Q}_d$ and $\\omega \\approx 0$ to obtain
\\begin{equation}
\\chi _{sp}^R({\\bf q+Q}_d{\\bf ,}\\omega )\\approx \\xi ^2\\frac 2{U_{sp}\\xi _0^2}\\left[ \\frac 1{1+{\\bf q}^2\\xi ^2-i\\omega \\xi ^2\/D}\\right] \\label{chiRPA}
\\end{equation}
where the antiferromagnetic correlation length is defined by
\\begin{equation}
\\xi \\equiv \\xi _0\\left( \\frac{U_{sp}}{\\delta U}\\right) ^{1\/2} \\label{ksi}
\\end{equation}
with the microscopic length scale set by
\\begin{equation}
\\xi _0^2\\equiv \\frac{-1}{2\\chi _0\\left( {\\bf Q}_d\\right) }\\left. \\frac{\\partial ^2\\chi _0\\left( {\\bf q,}0\\right) }{\\partial q_x^2}\\right| _{{\\bf q=Q}_d}. \\label{ksi02}
\\end{equation}
The microscopic diffusion constant $D$ is defined on the other hand by
\\begin{equation}
\\frac 1D\\equiv \\frac{\\tau _0}{\\xi _0^2}
\\end{equation}
where the microscopic relaxation time is
\\begin{equation}
\\tau _0=\\frac 1{\\chi _0\\left( {\\bf Q}_d\\right) }\\left. \\frac{\\partial \\chi _0^R\\left( {\\bf Q}_d{\\bf ,}\\omega \\right) }{\\partial i\\omega }\\right| _{\\omega =0}. \\label{Gamma0}
\\end{equation}
This relaxation time is non-zero in both Model B and Model C, where the Fermi surface intersects the magnetic Brillouin zone.

In the presence of a large correlation length $\\xi $, the scaling $q\\sim \\xi ^{-1}$ and $\\omega \\sim \\xi ^{-2}$ justifies the neglect of higher-order terms in the expansion Eq.(\\ref{chiRPA}). Comparing the approximate form Eq.(\\ref{chiRPA}) with the general scaling expression
\\begin{equation}
\\chi _{sp}^R({\\bf q+Q}_d{\\bf ,}\\omega )\\approx \\xi ^{\\gamma \/\\nu }X\\left( {\\bf q}\\xi ,\\omega \\xi ^z\\right) \\label{GeneralScalingChi}
\\end{equation}
where $X\\left( {\\bf q}\\xi ,\\omega \\xi ^z\\right) $ is a scaling function, we immediately have the announced results,
\\begin{equation}
\\frac \\gamma \\nu =2\\quad ;\\quad z=2.
\\end{equation}
The Fisher scaling law $\\eta =2-\\frac \\gamma \\nu $ shows that the anomalous exponent $\\eta $ vanishes as in mean-field theory. In the following paragraphs, we compute the remaining exponent $\\nu $ to show that above four dimensions we do recover mean-field theory, $\\nu =1\/2$, while for $2<d<4$ the exponent takes the spherical-model value $\\nu =1\/(d-2)$. For $d>4$, the integral in Eq.(\\ref{XsiT-TN}) is dominated by the large momentum cutoff, so that for $\\xi \\gg 1$, $\\left( 1-\\frac T{T_N}\\right) \\sim \\xi ^{-2}\\int d^dq\/q^4$.

\\subsubsection{Two-dimensional case}

We have already proven in the last subsection that the transition temperature vanishes in two dimensions. The correlation length may be found\\cite{Vilk} in the renormalized classical regime directly by performing the integral Eq.(\\ref{Consistency}) in $d=2$,
\\begin{equation}
\\xi =\\xi _0\\left( U_{sp}\/\\delta U\\right) ^{\\frac 12}\\sim \\Lambda ^{-1}\\exp (\\pi \\tilde{\\sigma}^2\\xi _0^2U_{sp}\/T) \\label{expo}
\\end{equation}
where $\\Lambda \\sim \\pi $ is usually of the order of the size of the Brillouin zone, but not always, as we discuss below.

In $d=2$, we call $T_X$ the temperature below which $\\delta U$ is much smaller than the temperature and the magnetic correlation length $\\xi $ grows exponentially. While in higher dimensions a phase transition occurs at finite temperature, in $d=2$ the critical regime with an exponentially increasing $\\xi $ extends all the way to zero temperature. For example, the temperature $T_X$ is plotted as a function of filling in the two-dimensional nearest-neighbor Hubbard model for $U=2.5$ in Fig. 1 of Ref.\\cite{Vilk}. In this reference, $T_X$ is called a quasi-critical temperature. We stress that there is a range of fillings near half-filling where at $T_X$ it is the antiferromagnetic wave vector that grows, despite the fact that at zero temperature the phase transition would be at an incommensurate wave vector.

The exponential growth of the two-dimensional $\\xi $ clearly suggests that small $3D$ effects existing in real systems may stabilize long-range order at ${\\bf Q}_{d=3}$ before $T=0$.
We later characterize the crossover driven by a small $3D$ hopping parameter $t_{\\Vert }\\ll t_{\\bot }$ from two-dimensional critical behavior to three-dimensional critical behavior. But first, we comment on the two-dimensional quantum-critical regime and on peculiarities induced by nesting in the renormalized-classical regime.

\\subsection{Quantum-critical regime}

When there is a critical value of the interaction $U_c$ {\\it at zero temperature} such that one finds a paramagnet for $U<U_c$ and an antiferromagnet for $U>U_c$, then the $T=0$, $U=U_c$ point of the phase diagram is a quantum critical point.\\cite{Hertz} The vicinity of this point in two dimensions has been studied again recently\\cite{Sachdev}. In order to study such a regime within the Hubbard model at half-filling, one must introduce next-nearest-neighbor hopping, since $U_c\\left( T=0\\right) =0$ at this filling in the nearest-neighbor model. One finds that the TPSC approach has precisely the $n\\rightarrow \\infty $ Model A or Model B quantum critical behavior\\cite{Sachdev}, depending on the specific microscopic model. In particular, $\\xi $ scales as $1\/T$ as one approaches the two-dimensional quantum critical point from finite temperature. Again, in the TPSC approach the cutoffs are specified without ambiguity. Model C, the perfect-nesting case, is relevant only to the renormalized-classical case, as we now discuss.

\\subsection{Peculiarities induced by perfect nesting in the renormalized-classical regime, especially in $d=2$.}

The dispersion relation of the nearest-neighbor Hubbard model on hypercubic lattices in arbitrary dimension satisfies $\\epsilon _{{\\bf k+Q}_d}=-\\epsilon _{{\\bf k}}$. Furthermore, at half-filling the particle-hole symmetry implies that the Fermi surface is fully nested, namely $\\mu =0$, so that the equality $\\epsilon _{{\\bf k+Q}_d}-\\mu =-\\left( \\epsilon _{{\\bf k}}-\\mu \\right) $ is satisfied for all wave vectors ${\\bf k}$. Slightly away from half-filling, nesting in the form $\\epsilon _{{\\bf k+Q}_d}-\\mu \\sim -\\left( \\epsilon _{{\\bf k}}-\\mu \\right) $ is also a good {\\it approximation} at finite temperature as long as $T>\\mu $, as discussed above. When there is perfect nesting, the zero-temperature critical interaction vanishes $\\left( U_c=0\\right) $. Hence the fully nested Fermi surface, referred to as Model C above, does not have the simple quantum-critical point described in the previous subsection.

When there is perfect nesting, the microscopic interaction-independent quantities $\\xi _0^2$ and $\\tau _0$ have a peculiar temperature dependence. This occurs because they are derivatives of the susceptibility, which itself contains logarithmic singularities in the zero-temperature limit. These quantities are evaluated in two dimensions and in the quasi-two-dimensional case in Appendix A. Dimensional arguments that follow simply from this appendix show that in $d>2$
\\begin{equation}
\\xi _0^2\\sim 1\/\\left( T^2\\ln T^{-1}\\right) \\label{xsi0T}
\\end{equation}
\\begin{equation}
\\tau _0\\sim 1\/\\left( T\\ln T^{-1}\\right) .
\\end{equation}
In $d=2$, the $\\ln T^{-1}$ is replaced by $\\ln ^2T^{-1}$.\\cite{NoteT3}

By contrast, in the case of second-neighbor hopping, nesting is lost and the above quantities are temperature independent for a wide range of values of the second-neighbor hopping constant. The above temperature dependencies are then a special property of nesting.
In $d>2$, however, the above temperature dependencies are completely negligible in the critical regime, since near the phase transition one can replace $T$ in the above expressions by $T_N$.

The only issue then is in two dimensions, where the phase transition occurs at zero temperature. Even neglecting logarithms for the moment, one sees that since $\\xi _0^2$ scales as $1\/T^2$ over a wide temperature range, the correlation length in Eq.(\\ref{expo}) scales as $\\exp \\left( cst\/T^3\\right) $. By contrast, in strong coupling, or in the non-nesting case of the weak-coupling limit, the correlation length scales as $\\exp (cst\/T)$.

The $\\exp \\left( cst\/\\left( T^3\\ln ^2T^{-1}\\right) \\right) $ behavior is, however, largely an unsolved problem. Indeed, in the critical regime in two dimensions, fluctuations remove the quasiparticle peak and replace it by precursors of the antiferromagnetic bands, as shown in Ref.\\cite{Vilk2}. It is possible then that, in this regime, a more self-consistent treatment would lead to $\\xi _0^2$ independent of temperature, as in the strong-coupling case or the non-nested weak-coupling case. It is also likely that there will be an intermediate temperature range where the $\\exp \\left( cst\/\\left( T^3\\ln ^2T^{-1}\\right) \\right) $ regime prevails, even if deep in the critical regime self-consistency leads to $\\exp \\left( cst\/T\\right) $ behavior.

It is important to recall that in practical calculations in the TPSC approach, one obtains a numerical value for the correlation length without adjustable parameters. For example, in Fig. 1 we present the temperature dependence of the correlation length for the two-dimensional nearest-neighbor Hubbard model. As discussed in Appendix A, in this case
\\begin{equation}
\\xi _0^2\\simeq {0.021U}_{mf,c}t_{\\bot }^2a_{\\bot }^2\/T^2 \\label{NumXsiZero}
\\end{equation}
and $U_{sp}\\simeq {U}_{mf,c}$, so that from the slope of the plot and from Eq.(\\ref{expo}) one finds $\\tilde{\\sigma}^2\\simeq 0.21$. From the plot we can also extract $\\Lambda ^{-1}\\simeq 0.022$, so that $\\xi $ is known without adjustable parameters (the short numerical illustration below makes these orders of magnitude explicit). Appendix B explains physically the orders of magnitude taken by $\\tilde{\\sigma}^2$ and $\\Lambda ^{-1}$ in this model. Similar calculations can be done for arbitrary band structure. In strong-coupling calculations,\\cite{Chakravarty}\\cite{Chubukov} one obtains $\\xi \\sim \\Lambda ^{-1}\\exp (2\\pi \\rho _S\/T)$ with $\\rho _S$ a cutoff-dependent quantity that can be evaluated only with Monte Carlo simulations.

Another consequence of the temperature behavior of $\\xi _0$ in Eq.(\\ref{xsi0T}) is that {\\it above} $T_X$ there is a range of temperatures for which the antiferromagnetic correlation length scales as $\\xi \\sim \\xi _0\\sim 1\/T$. This behavior should not be confused with quantum-critical behavior, even though the power-law scaling of the correlation length is the same.
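Before proceeding, it is instructive to evaluate Eq.(\\ref{expo}) with these numbers (our own numerical illustration; only $\\tilde{\\sigma}^2\\simeq 0.21$ and $\\Lambda ^{-1}\\simeq 0.022$ are taken from the text, while $U_{sp}\\simeq U_{mf,c}=2.5t$ and $t_{\\bot }=a_{\\bot }=1$ are illustrative choices):

\\begin{verbatim}
import numpy as np

sigma2, inv_Lambda, Umfc = 0.21, 0.022, 2.5  # Umfc = 2.5t is illustrative

def corr_length(T):
    """d = 2 correlation length, Eq. (expo), with the perfect-nesting
    xi_0^2 of Eq. (NumXsiZero); units t = a = 1 and U_sp ~ U_mf,c."""
    xi0_sq = 0.021 * Umfc / T**2
    exponent = np.pi * sigma2 * xi0_sq * Umfc / T  # grows as 1/T^3
    return inv_Lambda * np.exp(exponent), exponent

for T in (0.4, 0.3, 0.25, 0.2):
    xi, arg = corr_length(T)
    print(f"T = {T:4.2f}:  exponent = {arg:6.2f}   xi = {xi:10.2f}")
\\end{verbatim}

In this example the exponent exceeds unity below $T\\simeq 0.44$ and then grows as $1\/T^3$, so that the correlation length rapidly becomes macroscopic; this is the $\\exp \\left( cst\/T^3\\right) $ behavior discussed above.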
Indeed, one finds that the argument of the exponential in Eq.(\\ref{expo}) is larger than unity in the corresponding regime, while in the quantum-critical regime the argument of the exponential should be much less than unity.\\cite{Sachdev} In fact, the temperature dependence of the staggered susceptibility for $T>T_X$ is also different from the quantum critical result.

\\section{Quasi two-dimensional systems: Renormalized classical crossover from $d=2$ to $d=3$.}

The general discussion of universality in the renormalized-classical crossover from $d=2$ to $d=3$ appears in Appendices C and D. In the present section, we first clarify the various regimes of crossover, according to whether or not single-particle coherence in the third dimension is established before the phase transition. Then, we go on to discuss the case $t_{\\Vert }\\ll T_N\\ll t_{\\bot }$.
","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{Introduction}

The low central surface brightnesses of low surface brightness (LSB) disk galaxies (typically $\\; \\buildrel > \\over \\sim \\;$ 1 mag\/arcsec$^2$ below the canonical Freeman (1970) value of $\\mu_0^B=21.65 \\pm 0.3$ mag\/arcsec$^2$) indicate that, over the age of the Universe, their mean stellar birthrate per unit area has been significantly lower than that of typical high surface brightness (HSB) disks. Their current rate of star formation is similarly low --- while some \\ion{H}{2} regions do exist in LSBs, the global star formation rate in LSBs is lower by an order of magnitude than in comparably sized HSBs (McGaugh 1992; Knezek 1993; McGaugh \\& Bothun 1994; R\\\"onnback \\& Bergvall 1994; de Blok, van der Hulst \\& Bothun 1995; de Blok 1997). The lack of significant star formation is reflected in the low metallicities of LSBs, which are typically $\\; \\buildrel < \\over \\sim \\;$ 1\/3 solar (McGaugh 1994; R\\\"onnback \\& Bergvall 1995; de Blok \\& van der Hulst 1998a). Not coincidentally, LSBs are also very gas-rich systems. McGaugh \\& de Blok (1997) found that the gas mass fraction of galaxy disks correlates strongly with surface brightness. In LSBs, as much as 50\\% of the disk mass is in the form of gas, compared to $\\sim$ 10\\% at high surface brightnesses. Their low surface brightnesses, low star formation rates, low metallicities, and large gas fractions all argue that LSBs are systems which are forming stars much more slowly than their HSB counterparts.

The suppressed rate of star formation in LSB disks must ultimately be connected to the differing physical conditions of the interstellar medium (ISM) between LSB and HSB disk galaxies. As star formation is presumed to take place in molecular clouds, the molecular content of LSBs is of particular interest. In typical HSB spirals, the mass of molecular gas is comparable to that in neutral \\ion{H}{1} (e.g.,\\ Young \\& Knezek 1989). The situation in LSBs may be quite different --- while several CO surveys of LSBs have been made (e.g.,\\ Schombert et~al.\\ 1990 (S90); Knezek 1993; de Blok \\& van der Hulst 1998b (dBvdH)), CO emission has not been detected in any LSB disk galaxy. If CO emission traces molecular gas content in the same way as in normal HSB galaxies, then the upper limits on molecular gas in LSBs are typically $M_{H_2}\/M_{HI}$ $\\; \\buildrel < \\over \\sim \\;$ 0.1, and are even more stringent in a few cases (the illustrative conversion below shows what such limits mean in mass surface density terms).
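To fix ideas, one can convert a CO intensity upper limit into an H$_2$\\ surface density through an assumed conversion factor $X=N({\\rm H}_2)\/I({\\rm CO})$. The numbers below are invented for illustration and are not taken from the surveys cited above:

\\begin{verbatim}
M_H = 1.674e-24   # g, hydrogen atom mass
PC = 3.086e18     # cm per parsec
MSUN = 1.989e33   # g

def sigma_H2(I_CO, X):
    """Face-on H2 surface density in Msun/pc^2 (no helium correction)
    from I(CO) in K km/s and X in cm^-2 (K km/s)^-1."""
    N_H2 = X * I_CO                       # H2 column density, cm^-2
    return N_H2 * 2.0 * M_H * PC**2 / MSUN

for X in (2e20, 2e21):                    # Galactic-like X, and 10x larger
    print(f"X = {X:.0e}: Sigma_H2 < {sigma_H2(0.5, X):5.1f} Msun/pc^2")
\\end{verbatim}

A hypothetical limit of $I({\\rm CO})<0.5$ K km s$^{-1}$ thus corresponds to $\\Sigma_{H_2} \\; \\buildrel < \\over \\sim \\;$ 1.6 $M_\\odot$ pc$^{-2}$ for a Galactic conversion factor, but to ten times that if $X$ is ten times larger; whether such limits are restrictive clearly hinges on $X$, which is the crux of what follows.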
These upper limits have led to the speculation that the low disk surface densities in LSBs preclude molecular cloud formation and, in turn, inhibit star formation (e.g.,\\ S90; van der Hulst et~al.\\ 1993, Bothun et~al.\\ 1997). Alternatively, the lack of CO detection may simply reflect the fact that the CO\/H$_2$\\ conversion factor is not a universal constant, so that perhaps large quantities of molecular H$_2$\\ exist despite the lack of detected CO emission.

Unfortunately, an observational answer to the question of the molecular content of LSBs is inextricably tied to the CO\/H$_2$\\ conversion factor and its dependence on environment. For example, Wilson (1995) and Israel (1997) recently showed that the CO\/H$_2$\\ conversion factor is a strong function of metallicity; this dependence raises the upper limits on the derived molecular content of LSBs. Nonetheless, even accounting for metallicity effects, previous CO surveys should have detected CO in LSBs if they had $M_{H_2}\/M_{HI}$ ratios similar to those of HSBs. Other dependencies should also play a role. For example, the local gas density and temperature can affect CO\/H$_2$\\ (e.g.,\\ Maloney \\& Black 1988, Scoville \\& Sanders 1987). These are in turn affected by the ionizing radiation field and the density structure (``clumpiness'') of the ISM. In LSBs all these factors may well be significantly different than expected for HSBs, such that the true molecular-to-atomic gas mass ratio ($M_{H_2}\/M_{HI}$) is only weakly constrained by direct CO measurements.

To explore the ISM properties of LSB galaxies in a manner independent of the CO\/H$_2$\\ conversion factor, we take a complementary, theoretical route towards understanding the molecular content of LSB galaxies. We construct models of an inhomogeneous ISM under varying physical conditions, spanning a range of disk galaxy types. The models employ a Monte Carlo approach to radiative transfer (see Spaans 1996), and explicitly solve for the CO emissivity and $M_{H_2}\/M_{HI}$ ratio in galactic disks. We investigate models on a grid of metallicity, surface brightness, and ISM density structure, tracking the changing physical conditions between LSB and HSB disk galaxies. In particular, we address the questions of how much molecular H$_2$\\ is expected in LSB disks, and whether the lack of observed CO in LSBs in fact indicates a lack of molecular gas.

\\section{ISM Modeling}
\\subsection{Modeling Technique}

The code developed by Spaans (1996) and its extensions as discussed in Spaans \\& van Dishoeck (1997), Spaans \\& Norman (1997), and Spaans \\& Carollo (1998) is used to derive the physical and chemical structure of the ambient ISM in LSBs. The interested reader is referred to these papers for a detailed description of the code's structure. The main features can be summarized as follows.

1) For a given metallicity, geometry, global pressure structure and distribution of illuminating (ultraviolet) sources, the thermal and chemical balance of the medium is computed in three dimensions. The continuum (dust attenuation) and line transfer is modeled through a Monte Carlo method. The self-shielding of H$_2$ and CO and the shielding of CO by H$_2$ absorption lines are explicitly included.
The heating processes include photo-electric emission by large molecules like Polycyclic Aromatic Hydrocarbons (PAHs) and dust grains (Bakes \\& Tielens 1994), cosmic ray heating, collisional de-excitation of ultraviolet-pumped H$_2$, and H$_2$ dissociation heating. It is assumed that 10\\% of the gas phase carbon is incorporated into PAHs. This yields roughly equal photo-electric heating contributions from carbonaceous particles larger and smaller than $10^{-6}$ cm. Generally photo-electric emission dominates the heating rate unless the visual extinction exceeds 3 mag. The cooling processes include fine-structure emission of C$^+$, C and O, rotational line emission of CO and vibrational (v=1-0) H$_2$ emission. All level populations are computed in statistical equilibrium and the line emission is again modeled through a Monte Carlo technique.

2) The thermal balance equations admit, for a given hydrodynamic pressure, multiple solutions. These constitute the possible multi-phase structure of the ISM as first suggested by Field, Goldsmith, \\& Habing (1969). If multiple solutions exist, then one finds from a stability analysis that there is a $\\sim 10^4$ K diffuse medium and a $\\sim 50$ K dense component. It is the density structure derived from these solutions which couples strongly with the chemical balance of interstellar gas, and therefore with the amount of molecular gas which is supported by the stellar radiation field and the ambient pressure of the galaxy. This thermal stability approach does not incorporate the effects of hydrodynamic phenomena such as shocks or gravity. The cold component has a typical density of $\\sim 50-300$ cm$^{-3}$ and is representative of diffuse and translucent clouds in the Milky Way. To allow the inclusion of shocks and gravity in a phenomenological way, the dense phase is allowed to exhibit inhomogeneities. That is, the ambient pressure determines the {\\it mean} density of this phase, while gravity as well as shocks drive perturbations in it.

\\subsection{Model Parameters and Their Implementation}

To investigate the molecular content of the ISM the following model parameters are considered: average gas density, the average UV interstellar radiation field (ISRF), metallicity, surface density, and ISM density structure. These parameters are not all independent. To capture the essential dependencies of the ISM structure on ambient physical conditions, the following scaling relations are adopted.

The HI volume density $n_{\\rm HI}$ correlates with HI surface density $\\Sigma_{HI}$ according to
$$n_{\\rm HI} = \\Sigma_{HI}\/H,\\eqno(1)$$
where $H=300$ pc is the scale height of the galaxy model. Using data from de Blok et~al.\\ (1996), one can derive a rough correlation between local surface brightness $\\mu^B$ and local HI surface density:
$$\\log \\Sigma_{HI} \\approx -0.12*\\mu^B + 3.6.\\eqno(2)$$
With this relationship, the HI surface density and stellar surface brightness do not drop off in lockstep; instead, the HI surface density falls off more slowly. While this is generally true, it should be emphasized that this relation is admittedly crude, with a lot of real scatter. The aim is more to characterize the general behavior of disks to search for physically meaningful trends rather than to attempt to model specific individual galaxies. (A short numerical illustration of these scaling relations follows.)
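As a quick numerical illustration of equations (1) and (2) (our own sketch; we assume the zero point of equation (2) yields $\\Sigma_{HI}$ in $M_\\odot$ pc$^{-2}$, which is not stated explicitly here):

\\begin{verbatim}
H = 300.0                       # pc, adopted scale height (equation 1)

def sigma_HI(mu_B):
    """Equation (2): HI surface density (assumed Msun/pc^2) from the
    local B-band surface brightness."""
    return 10.0 ** (-0.12 * mu_B + 3.6)

def n_HI(mu_B):
    """Equation (1), converted from Msun/pc^3 to atoms cm^-3."""
    msun_pc3_to_cm3 = 1.989e33 / (1.674e-24 * 3.086e18**3)  # ~40.4
    return sigma_HI(mu_B) / H * msun_pc3_to_cm3

for mu in (21.0, 22.0, 23.0, 24.0):
    print(f"mu_B = {mu}: Sigma_HI = {sigma_HI(mu):5.2f} Msun/pc^2, "
          f"n_HI = {n_HI(mu):4.2f} cm^-3")
\\end{verbatim}

With these numbers, a 3 mag arcsec$^{-2}$ drop in surface brightness lowers the mean HI density by only a factor of $10^{0.36}\\simeq 2.3$; it is this slow decline, relative to the much faster decline of the radiation field, that drives the behavior of the ionization parameter defined below.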
In global terms, the gas mass fraction of the disk increases as surface brightness decreases, such that very low surface brightness disks ($\\mu^B_0$ $\\; \\buildrel > \\over \\sim \\;$ 23) can have half their baryonic mass in the form of gas (McGaugh \\& de Blok 1997), even assuming a trivial amount of molecular gas mass.

The luminosity profiles of disk galaxies (especially LSBs) are generally exponential,
$$\\mu^B(r) = \\mu_0^B + 1.086*(r\/h),\\eqno(3)$$
with scale length $h$ and central surface brightness in B mags per square arcsecond $\\mu_0^B$. Combining equations (2) and (3), one finds
$${\\rm log}\\Sigma_{HI} \\approx -0.12*\\mu^B_0 -0.13*(r\/h) + 3.6.\\eqno(4)$$
Again, the relationship implies that, as a function of radius, the HI surface density drops off more slowly than the stellar surface brightness, reproducing the extended gaseous disks observed in disk galaxies. In this parameterization, the gas surface density is exponential, but with a scale length 3.3 times larger than that for the stars. While real gas disks are not as well described by exponentials as the stellar component, we again stress this is merely a convenient approximation for modeling purposes. Deviations from this approximation will alter only details and not the general trends of interest, and are probably small compared to the uncertainty in the modeling process. Because equation (4) describes the HI surface density, while the model inputs are in terms of total (HI $+ H_2$) gas surface density, we use an iterative scheme to arrive at the final model. First we calculate the model assuming a total surface density given by equation (4). From this initial model, we derive the H$_2$ mass profile, then add this profile to the original HI profile to produce a total gas mass profile. This total profile is then used as input to calculate a new, consistent ISM model.

To parameterize the strength of the ISRF in our models, we assume that the ISRF is dominated by the contribution from the stellar populations in galaxies. Under this assumption, the ISRF scales with surface brightness:
$$I_{UV} = I_{UV}(MW) * 10^{0.4*(\\mu_0^B({\\rm MW})-\\mu_0^B)}, \\eqno(5)$$
where $I_{UV}(MW)$ is the strength of the ISRF in the Milky Way given by Draine (1978), and $\\mu_0^B({\\rm MW})$ is the central surface brightness of the Milky Way disk (assumed to be 21 mag arcsec$^{-2}$). The wavelengths in the UV relevant to our results are between 912 and 1110 \\AA , where all the H$_2$ and CO absorption lines that lead to photo-dissociation of the molecules lie. By scaling the UV ISRF with B-band surface brightness, we are assuming that the spectral shape is {\\it independent\\\/} of surface brightness. That is, we assume that the stellar populations which give rise to the ISRF do not drastically change as a function of surface brightness. This assumption is perhaps suspect. Since there is generally less star formation in LSB than in HSB galaxies, one might suspect the UV ISRF to be relatively weaker in LSBs than implied by the difference in B-band surface brightness. On the other hand, LSBs do tend to be blue, late-type galaxies which have harder spectral shapes in the optical.
So one might equally well expect this trend to continue into the UV, resulting in the opposite effect: the difference in B-band surface brightness might overstate that in the UV. Without strong constraints on the UV properties of LSBs, we choose simply to hold the shape of the ISRF fixed with optical surface brightness. If the UV ISRF is relatively greater [less] than we assume, more [fewer] molecules will be destroyed, and so on balance there will be less [more] gas mass in molecular form.

With the gas density and UV ISRF defined in terms of the disk surface brightness, we can similarly define a parameter closely akin to the ionization parameter:
$$\\log{U}=\\log(I_{UV}\/\\Sigma_{HI})=-0.28\\mu^B_0 + {\\rm constant},$$
which essentially measures the number of ionizing photons per atom. Because of our assumption that the ISRF scales linearly with surface brightness, while the gas density drops more slowly, LSB galaxies should have lower values of $U$ than HSBs. If, however, LSBs have a harder spectral shape than HSBs (due perhaps to a younger, hotter mean stellar population), this assumption may underestimate $U$ in LSBs. While we use surface brightness as a fundamental input parameter for the models, we note that with the pseudo-ionization parameter $U$ defined this way, models with central surface brightnesses 0, 1, 2, and 3 mag arcsec$^{-2}$ below that of the Milky Way correspond to values of $U\/U_{MW}$ = 1.0, 0.5, 0.28, and 0.15, respectively.

Finally, we need to characterize the inhomogeneity of the dense phase, when it is present, in the models. This inhomogeneity can be parameterized by {\\it choosing} a certain volume fraction $F$ of the gas in high density clumps with a fixed density contrast $C$. The size of the clumps is not varied and is assumed equal to 2 pc, typical for translucent clouds in the Milky Way. Because we investigate a range of density contrasts, and therefore of clump extinctions, this somewhat arbitrary length does not strongly influence the results. We calculate one model (``H'', see Table 1) which is completely homogeneous and lacks any density structure, representing a limiting extreme. Two more models are explored which have modest amounts of structure (``I1, I2'', with small $C$ and large $F$). Finally, the clumpy ISM models (``C1, C2'', large $C$ and small $F$; see Table 1) are chosen to represent our own Galaxy at high ISM pressure.

With these parameterizations, we are left with three variables describing the model galaxies: metallicity, ionization parameter, and ISM clumpiness. We create a grid of models spanning a range of plausible values: central surface brightness $\\mu_0^B = 21 \\to 24$, metallicity $Z\/Z_{\\sun} = 1 \\to 0.1$, and ISM types H (homogeneous, $P\\sim 10^3$ K cm$^{-3}$), I1 and I2 (intermediate, $P\\sim 2\\times 10^3$ K cm$^{-3}$), and C1 and C2 (clumpy, $P\\sim 10^4$ K cm$^{-3}$). These models thus capture the properties of both high surface brightness spirals as well as low surface brightness disks. For each model we calculate the H$_2$ gas mass fraction as a function of radius, as well as the CO emissivity and mass-averaged gas temperature. From these models, we can analyze ISM trends with surface brightness and address the question of molecular gas content in low surface brightness disks.

\\section{Results}

\\subsection{Molecular Gas Fractions}

Figure 1 shows $\\Sigma_{H_2}\/\\Sigma_{HI}$ as a function of radius for several characteristic models.
Several trends are immediately obvious:
\\begin{itemize}
\\item At fixed metallicity and ISM structure, lower surface brightness models have {\\it higher} molecular fractions (Figure 1a). Because the number of ionizing photons per hydrogen atom decreases with decreasing surface brightness, the molecules in the low surface brightness models are less apt to be dissociated by the background ISRF.
\\item At fixed surface brightness and ISM density structure, models with lower metallicity have lower molecular hydrogen gas content (Figure 1b). This result is due to the fact that dust grains act as formation sites for molecules; lower metallicities mean fewer dust grains to drive molecule formation.
\\item At fixed surface brightness and metallicity, clumpier ISM models have higher molecular gas fractions (Figure 1c). In clumpy models, a larger mass fraction of the gas is found in denser cores, and is shielded from the background ISRF. Molecules in diffuse ISM models lack this shielding, and are more easily dissociated by the UV background.
\\end{itemize}

How well do these models describe actual disk galaxies? One point of constraint is provided by the Milky Way ISM. The high surface brightness, solar metallicity, clumpy ISM model shows a mean H$_2$\/HI mass ratio $\\sim 1$ averaged across the inner scale length of the disk, similar to that inferred for Milky Way-like Sb galaxies (Young \\& Knezek 1989). This result is not surprising, since the ISM models were scaled to the ISRF and structure of the Milky Way's ISM, but nonetheless it is reassuring that we recover the correct physical description for the given model inputs.

Assigning a model to LSB galaxies is not as straightforward. Certainly LSB disks are lower in metallicity (Webster et~al.\\ 1983; McGaugh 1994; de Blok \\& van der Hulst 1998a) than HSB galaxies such as the Milky Way. Their reduced surface brightnesses also probably result in lower ionization parameters, although stellar population differences may modify this somewhat. The density structure of the ISM in LSBs is not well determined, precisely because CO measurements have not yielded any detections. Because of the lowered mass surface density of LSB disks (de Blok \\& McGaugh 1996, 1997), it is likely that the ISM pressures are too low to support the amount of multiphase structure found in the Milky Way. Such was the case in hydrodynamical models of LSB galaxies by Gerritsen \\& de Blok (1998), where a multiphase ISM was virtually absent. Models H (homogeneous) and I1 and I2 (intermediate) are therefore likely candidates to describe the density structure of LSB galaxies.

Figure 2 shows the H$_2$\/HI mass ratio averaged over the inner disk scale length as a function of central surface brightness for the entire grid of models. For metallicities typical of LSBs ($Z\/Z_{\\sun}$=0.1--0.3), the models are lower in molecular content than the Milky Way, as expected. Interestingly, though, the models are far from being devoid of molecular gas; mass fractions of 0.25 -- 0.5 are typical. Again, the lowered ionization parameter as a function of surface brightness results in {\\it higher} molecular fractions (at fixed metallicity and ISM structure) for lower surface brightness galaxies. In fact, for very low surface brightnesses, the molecular content can rival that of HSBs if they have any significant degree of clumpiness in their ISM.
However, at such low surface brightnesses, the ISM pressures are probably too low to support this level of structure.

Nonetheless, our models suggest that typical LSB galaxies have molecular contents which are only factors of 2--3 below those of normal HSB spirals. The CO mass-averaged gas temperatures in the molecular phase are presented in Figure 3 as a function of radius. It is immediately obvious that the molecular gas in LSBs is by no means very cold, in contrast with the cold molecular clouds of their multi-phase HSB counterparts. Typical temperatures are around 30--50 K, similar to Spitzer-type HI clouds in our own Milky Way. In Figure 4, we show the cumulative H$_2$ gas mass fraction averaged over the inner scale length as a function of temperature for a Milky Way-like model ($\\mu^B_0 = 21, Z\/Z_{\\sun} = 1,$ ISM C2), a typical LSB model ($\\mu^B_0 = 23, Z\/Z_{\\sun} = 0.3,$ ISM I1), and a very low surface brightness model ($\\mu^B_0 = 24, Z\/Z_{\\sun} = 0.1,$ ISM H). For the Milky Way model, nearly 50\\% of the molecular gas is at or below 30 K, compared to 20\\% and only a few percent for the typical and extreme LSB models. Coupled with the decrease in total molecular content in the LSB models, our calculations suggest that LSBs should have very small total amounts of {\\it cold} molecular gas.

Such high temperatures argue against efficient star formation in LSBs, but self-consistent rates of the order of $\\sim$ 0.05 M$_\\odot$ yr$^{-1}$ appear feasible in these low metallicity environments (Norman \\& Spaans 1997; Gerritsen \\& de Blok 1998). This star formation rate is similar to observed star formation rates in LSBs (McGaugh \\& Bothun 1994; R\\\"onnback \\& Bergvall 1994; de Blok et~al.\\ 1995). In conclusion, the lack of detected CO emission in LSBs does not preclude the presence of modest amounts of molecular H$_2$ gas. The CO detectability of an LSB depends on both the CO abundance and excitation in the galaxy; we turn now to predictions of the CO intensity of our models in order to compare directly with searches for CO emission from LSBs.

\\subsection{CO Intensity and the CO\/H$_2$\\ Conversion Factor}

To calculate the CO intensity of the models, the root mean square velocity of the interstellar clouds, i.e., the vertical velocity dispersion, is taken equal to 10 km s$^{-1}$, a typical value in the Milky Way and other galaxies. The turbulent velocity width of individual clouds is assumed equal to 3 km s$^{-1}$, consistent with the observed correlation between cloud size and line width for the Milky Way (Maloney \\& Black 1988). We calculate the face-on CO intensities for our different ISM models, integrated over the inner scale length.

Figure 5 shows the variation in I(CO), the CO intensity in K km s$^{-1}$, as a function of metallicity, surface brightness, and ISM structure. As with the H$_2$\/HI mass ratio, several trends are immediately apparent: lower metallicity, higher surface brightnesses, and a more diffuse ISM all act to lower the CO intensity in the models. All these trends are as expected. Lower metallicities mean fewer carbon and oxygen atoms are available to form the CO molecule; higher surface brightnesses result in a stronger ISRF which destroys the CO molecule; and a diffuse ISM is less effective at shielding the CO molecules against radiative dissociation.

Also plotted in Figure 5 are the observational upper limits to the CO intensity of LSB galaxies determined by S90 and dBvdH.
If LSBs have solar metallicity, these observations should have detected CO emission. But the subsolar metallicities of LSBs (McGaugh 1994) result in lowered CO intensities, making detection difficult. At $Z\/Z_{\\sun} \\sim 0.3$, the CO emission is only a factor of $\\sim 2 - 5$ below the observational limits, suggesting that deeper CO mapping may in fact reveal the molecular ISM of moderately metal-poor LSBs. However, reducing the metallicity by another factor of three reduces the CO emission to levels 30 times fainter than the current observational limits; detecting these LSBs in CO will be very hard indeed. This drop in CO emission occurs in spite of the presence of a fair amount of H$_2$ in the models.

Perhaps most germane to the observational status of molecular gas in LSB disk galaxies is the conversion factor $X = N({\\rm H}_2)\/I({\\rm CO})$ (in units of $10^{21}$ cm$^{-2}$ (K km s$^{-1})^{-1})$. Figure 6 shows this value calculated for the grid of ISM models. As expected, $X$ shows significant and systematic variation between the different models. At solar metallicities, $X\\sim 0.1-1$, spanning the ``standard'' value of $X$ derived from Milky Way observations ($\\sim$ 0.2 -- 0.5; see, e.g.,\\ Scoville \\& Sanders 1987). Because the CO intensity scales non-linearly with density, and in a different manner from the H$_2$ mass, $X$ has a strong dependence on the density structure of the ISM. Our models calculate the properties of the ISM over the inner disk scale length, averaging over both cloud and inter-cloud regions. As the ISM becomes more clumpy, $X$ decreases, as the CO intensity rises faster than the H$_2$ mass fraction. The value of $X$ determined in the Milky Way may therefore be quite different from that applicable to galaxies with a more homogeneous ISM.

Aside from the dependence on ISM density structure, there is also a clear correlation between $X$ and metallicity: as metallicity drops, the value of $X$ increases. Such a trend has also been seen in observational data (e.g.,\\ Wilson 1995; Israel 1997), and in models of low metallicity clouds (Maloney \\& Black 1988). The strength of this trend is still quite uncertain. Israel (1997) finds a strong dependence on metallicity ($\\partial \\log X\/ \\partial \\log Z = -2.7\\pm 0.3$), whereas Wilson (1995) derives a weaker relationship, $\\partial \\log X\/ \\partial \\log Z = -0.67\\pm 0.1$. In our models, the relationship depends on the ISM phase structure, but falls in the range $\\partial \\log X\/ \\partial \\log Z = -1$ to $-2$. Again, however, it is difficult to directly compare our theoretical values with those determined observationally due to the different physical scales involved.

Given the strong dependence on metallicity and ISM density structure, it is clear that use of the standard Milky Way value of $X$ is suspect in LSB galaxies. We can instead turn the problem around and ask what constraints on the molecular gas fraction of LSBs are implied by the CO studies of S90 and dBvdH, given our theoretical calculation of $X$. If our models are correct, $X$ in LSBs may be greater than the ``standard value'' by as much as a factor of 10, significantly raising the upper limits on LSB molecular gas content. A similar conclusion was reached by dBvdH, who explored the consequences of a non-standard value of $X$.
In that study, a value of $X$ of four times the Galactic value was favored, resulting in upper limits for LSB molecular contents of $M_{H_2}\/M_{HI} < 0.25$. Our models favor the use of a high value of $X$ for LSBs, and indicate that the correct value may be as much as a factor of two higher than that favored by dBvdH. If so, the current non-detections of CO in LSBs still allow for a significant molecular component of the ISM. More stringent limits on the molecular content of LSBs must await deeper CO observations.

\\section{Discussion}

Our models indicate that even very low surface brightness galaxies may not be completely devoid of molecular gas -- instead, the ISM may contain 10--20\\% molecular gas (and perhaps more, depending on the detailed physical structure of the ISM). The physical conditions in this gas may be very different from the conditions in the molecular ISM of the Milky Way. If the ISM pressure is extremely low, as might be expected due to the low surface mass density of LSB disks, the molecular phase of the ISM will be diffuse and generally warmer than found in Galactic giant molecular clouds. The warm temperature is due largely to the lack of shielding from the ISRF in a diffuse ISM; even a modest multi-phase ISM can self-shield the molecular gas and lower the gas temperature. However, the low surface densities and star formation rates of LSB galaxies make it hard to generate and\/or sustain such a multiphase ISM (e.g.,\\ Gerritsen \\& de Blok 1998).

Aside from the explicit dependencies of the models on ISM structure, ionization parameter, and metallicity, other more implicit model dependencies should also be reiterated. Our models assume that the UV ISRF scales with optical surface brightness. Stellar population differences between LSB and HSB galaxies are not well determined, but the blue colors of LSBs argue that their stellar populations may be hotter than those of HSBs. If so, we may underestimate the ISRF in LSBs, thereby overestimating their molecular content. Similarly, we have modeled an ISM where the neutral gas density increases continually into the center of the model, whereas many LSBs show central depressions of gas density. Again, this effect may push us towards artificially high molecular contents (by underestimating the ionization parameter). However, the HI mass profiles of LSBs are varied, so rather than acting as a systematic effect in our models, the dependency on gas profile is perhaps better viewed as a caution against over-interpreting our results as they apply to {\\it individual} LSBs. A third model dependency worth noting is the assumption that the dust-to-gas ratio of the galaxies scales linearly with metallicity. One expects something very close to this from simple considerations of chemical evolution (Edmunds \\& Eales 1998), and such a relationship is supported by observational data (e.g.,\\ Issa, MacLaren, \\& Wolfendale 1990). These dependencies are all tied to the systematic properties of LSBs, which remain ill-constrained. Rather than attempting any further iteration on the models, we leave these effects as a caveat to the ensuing discussion.

These uncertainties notwithstanding, our models may also shed light on the lowered efficiency of star formation in LSB disks. Compared to HSBs, LSB galaxies have a lower fraction of molecular material from which they can produce stars.
In addition, whatever molecular gas exists is in a more diffuse, warmer state than is typical for molecular material in HSBs. These warm temperatures and low densities act to help stabilize any existing molecular clouds against gravitational collapse. Indeed, since the Jeans length scales as $\\sqrt{T\/\\rho}$, the size scale for the collapse of ISM substructure is quite large in LSBs. The larger size of any unstable patches makes them very susceptible to differential shear in the rotating disks, so that gravitational collapse and subsequent star formation in the ISM of LSBs will be quite difficult. Even in the solid-body portion of the rotation curve, where rotational shear is not a factor, the star formation rates remain low due to the increased collapse time of low density structure.

This stability has been parameterized (e.g.,\\ Quirk 1972; Kennicutt 1989) in a form very similar to the Toomre Q parameter for the growth of axisymmetric modes (Toomre 1964). Under such prescriptions, star formation occurs when the gas surface density exceeds some critical value: $\\Sigma_{gas} > \\alpha \\kappa \\sigma \/ 3.36 G$, where $\\kappa$ is the epicyclic frequency of the disk, $\\sigma$ the velocity dispersion of the gas, and $\\alpha$ is a constant $\\sim 1$. Studies of LSB galaxies have shown that the HI surface density is generally below this critical threshold for star formation (van der Hulst et~al.\\ 1993). In fact, the innermost regions of LSB disks are often suppressed in HI; adding diffuse, undetected H$_2$ increases the gas surface density and may make LSB galaxies somewhat more susceptible to induced star formation (e.g.,\\ Mihos et~al.\\ 1997; O'Neil, Bothun, \\& Schombert 1998). However, the amount of molecular gas required is unreasonably large. There are some LSBs with star formation at small radii where the HI gas is sub-critical by a factor of 4 or more (de Blok, private communication), quite a bit more than can be made up by molecular gas for reasonable model parameters. Whether this is a failure of our models or of the Quirk-Kennicutt criterion (or both) is unclear.

Similar to the local stability criteria, parameters exist to describe the stability of disks to growing global bar modes. One such parameterization is the Toomre $X_2$ parameter: $X_2 ={ {\\kappa^2 R}\\over{4\\pi G \\Sigma_d} }$, where $\\Sigma_d$ is the total disk mass surface density. If $X_2 \\gg 1$, disks are stable against $m=2$ perturbations (Toomre 1981). Mihos et~al.\\ (1997) showed that because of their lowered disk surface density and increased dark matter content (relative to HSB disks), LSB galaxies are quite stable against such induced bar modes. The inclusion of additional disk mass in the form of molecular ISM reduces this disk stability, but sufficient dark matter exists in LSB galaxies to make them stable against all but the strongest perturbations.

We note in passing that the quantity of mass in this (as yet undetected) molecular ISM is not nearly sufficient to account for all the dark matter in LSB disks. Even under the dubious assumption of a maximum (stellar) disk, de Blok \\& McGaugh (1997) showed that the mass deficit in the inner regions of LSB galaxies is quite severe -- significant amounts of dark matter must exist all the way into the centers of LSB disks.
Under reasonable assumptions for the physical conditions in LSBs, our models suggest that the molecular ISM can increase the disk surface density {\\it at most} by $\\; \\buildrel < \\over \\sim \\;$ 50\\%. To account for all the mass deficit implied by the rotation curve fitting of de Blok \\& McGaugh (1997), the molecular ISM would need to be very cold and very clumpy, raising questions of why LSBs remain stable and how disk star formation is quenched.

The different evolutionary histories of HSB and LSB galaxies can be traced to differences in their disk surface densities and in the conditions of their ISMs. A plausible evolutionary scenario for HSB galaxies has been outlined by Spaans \\& Norman (1997). In this scenario, once the proto-HSB gas disk forms, star formation begins at a retarded rate in the primordial molecular hydrogen ISM. This star formation generates supernovae and enriches the ISM, leading to a multiphase ISM that is able to cool and form stars efficiently -- an HSB disk galaxy is born. In contrast, when a proto-LSB forms, it, too, forms a molecular ISM, but with a smaller molecular mass fraction and at lower surface density. At these low surface densities, it is difficult to trigger star formation or form\/maintain a multiphase ISM. As a result, the LSB evolves little from its primordial conditions, maintaining its low surface brightness and metallicity, and high gas fraction.

Under ``critical density'' conditions for star formation, one might expect some bimodal surface brightness distribution for disk galaxies, as galaxies will naturally follow one of two alternative paths depending on their surface density. There is a claim of a bimodal surface brightness distribution in one cluster (Tully \\& Verheijen 1997), but this does not appear to be a general property of field galaxies (de Jong 1996). Instead, it is more likely that there is a continuum of physical conditions in disk galaxies driven ultimately by surface density. Low density environments result in lowered star formation activity (as $t_{\\rm dyn} \\sim \\rho^{-{1\\over 2}}$, even in the absence of any critical-density model) and suppress the formation of a multiphase ISM; as surface density increases along a galactic sequence, star formation and surface brightness increase, accompanied by a rise in the amount of complex phase structure (and higher molecular fractions) in the ISM. It is through this interplay that galaxy evolution and ISM processes shape the (cosmological) star formation rate.

\\acknowledgements

We thank Erwin de Blok and Greg Bothun for valuable discussions. M.S. and J.C.M. have been supported by NASA through Hubble Fellowship grants \\#~HF-01101.01-97A and \\#~HF-01074.01-94A, respectively, awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA under contract NAS 5-26555.

\\clearpage

","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{INTRODUCTION}
\\label{INTRODUCTION}

Recent Hi-C imaging and sequencing technologies have elucidated the importance of 3-D chromatin structure and epigenetics in gene regulation \\cite{nora2012spatial,rao20143d,lieberman2009comprehensive}. In addition to containing compartments of active, gene-rich euchromatin and compartments of inactive, gene-poor heterochromatin, chromatin is spatially partitioned into topologically associating domains (TADs) \\cite{pombo2015three}.
TADs are insulated regions of chromatin, where sequences within each region interact more frequently with one another than with sequences elsewhere in the genome. Borders of TADs are often marked by the presence of CCCTC-binding factor (CTCF) \\cite{ji20163d}. CTCF is a highly conserved zinc finger protein that recognizes variant sequences of roughly 50 base pairs throughout the genome \\cite{ohlsson2001ctcf}. CTCF is thought to facilitate TAD formation by binding to two distant locations of DNA and then binding to itself, creating a loop of chromatin \\cite{phillips2009ctcf,rao20143d}. Recent studies of CTCF have shown that it is essential to loop formation, driving epigenetic forces of gene expression, but not to the compartmentalization of chromatin into active and inactive regions \\cite{nora2017targeted}.\n\n
Although CTCF's role in loop formation is well characterized, its role in gene regulation is less well understood. Previous studies have noted CTCF's importance in gene regulation during development, showing that disruption of CTCF affects gene transcription in mouse oocytes \\cite{wan2008maternal}. Other studies have shown that disruption of CTCF affects essential genetic pathways of cell proliferation, differentiation, and apoptosis \\cite{torrano2005ctcf}. Recent studies have linked CTCF to alternative splicing of nearby genes. In mammalian CD45 genes, CTCF is thought to promote inclusion of exon 5 by pausing RNA polymerase II \\cite{shukla2011ctcf}. Genome-wide, CTCF is thought to facilitate exon inclusion, or alternate exon usage, during RNA splicing by bringing exons in closer proximity to their promoters \\cite{ruiz2017ctcf}. However, these studies remain either limited in the scope of genes investigated or largely correlative, demanding a functional investigation of the effect of CTCF on alternative splicing.\n\n
Here, we used previously published ChIP-seq and mRNA-seq data from a CTCF knockdown mouse embryonic stem cell (mESC) model to examine the extent of CTCF-dependent alternative splicing events \\cite{nora2017targeted}. Specifically, we compared exon usage in genes that contain a CTCF binding site in mESC lines tagged with an auxin-inducible degron (AID) for CTCF and in untagged wildtype lines, before and after treatment with auxin. We provide evidence that the presence of intragenic CTCF alters exon usage in a transcription-direction-dependent manner. We show that degradation of CTCF in an AID system results in a higher proportion of upstream exon usage in alternative splicing. These results support a direct role of CTCF in regulating alternative splicing during embryogenesis, and nominate a heritable epigenetic system that can be probed to better understand the pathology of alternative-splicing-driven diseases that arise during development.\n\n
\\section{METHOD}\n\\label{METHOD}\n\\subsection{Data Retrieval}\n\\label{Data}\nAll data analyzed in this study were from the previously published Nora et al., 2017 paper \\cite{nora2017targeted}. Expression levels for mRNA fragments were retrieved from the National Center for Biotechnology Information (NCBI) Gene Expression Omnibus (GSE98671). Experimental parameters and total reads were obtained from the supplements of Nora et al., 2017. CTCF ChIP-seq peak locations and magnitudes were provided by the Mirny Lab at the Massachusetts Institute of Technology (mirnylab.mit.edu). Mouse genome mappings (NCBI37\/mm9) were available from the University of California Santa Cruz (UCSC) Genome Browser.\n\n
\\subsection{Identification and Ranking of Gene Bound CTCF Sites}\n\\label{Identification}\nAll calculations were done in R (version 3.4.1) using tools from the Bioconductor project. First, the most prominent CTCF sites that were successfully degraded by auxin were isolated. Of the 43,607 CTCF ChIP peaks in the untreated sample, 13,131 peaks remained or were not fully degraded in the treated sample. These 13,131 peaks were identified by genomic location using the findOverlaps function and subsequently removed from analysis. The 30,554 remaining peaks were then cross-referenced with the known gene locations of the mm9 assembly to find CTCF sites located within protein-coding sequences. This final pool of 16,665 peaks was ranked by peak magnitude, and the highest 5,000 peaks were examined in this study.\n\n
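This filtering step can be sketched in R using Bioconductor's GenomicRanges. The file names, input formats, and the assumption that peak magnitude is stored in the score column are illustrative only, not a record of the exact pipeline:\n
\\begin{verbatim}\nlibrary(GenomicRanges)\nlibrary(rtracklayer)   # import() for BED-like interval files (assumed)\n\n# Hypothetical inputs: peak calls before and after auxin treatment\nuntreated <- import('ctcf_untreated_peaks.bed') # all untreated peaks\ntreated   <- import('ctcf_auxin_peaks.bed')     # peaks left after auxin\n\n# Drop untreated peaks that overlap a persisting peak, keeping only\n# sites that were fully degraded by auxin\nhits     <- findOverlaps(untreated, treated)\ndegraded <- untreated[setdiff(seq_along(untreated), queryHits(hits))]\n\n# Keep peaks inside protein-coding genes (assumed mm9 gene-model file)\ngenes    <- import('mm9_coding_genes.bed')\nin_genes <- degraded[overlapsAny(degraded, genes)]\n\n# Rank by peak magnitude (assumed 'score' column); keep the top 5,000\ntop5000  <- head(in_genes[order(-in_genes$score)], 5000)\n\\end{verbatim}\n\n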
\\subsection{Quantifying mRNA-Seq Reads}\n\\label{Quantifying}\nFor each of the 5,000 CTCF sites selected, the gene containing the CTCF site was found and the isoform with the most comprehensive selection of exons was selected. The locations of all exons upstream and downstream of the site were then identified. For each RNA-seq tag density file, the signals in the upstream exons were summed. The resulting sum was divided by the total signal for the entire RNA-seq file and multiplied by the total number of reads in the experiment to estimate the number of reads in the upstream exons:\n$$R_{mRNA} = \\frac{\\sum_{exon} Signal }{\\sum_{total} Signal } R_{total} ,$$\nwhere $R_{mRNA}$ is the estimated number of mRNA-seq reads in the upstream exons and $R_{total}$ is the total number of reads in the experiment. The same calculation was done for the downstream exons. Estimated reads were rounded to the nearest whole number and pooled with data from experimental replicates under the same conditions.\n\n
\\subsection{Statistical Analysis}\n\\label{Statistical}\nThe statistic used to compare the distribution of isoforms around CTCF sites is the proportion ($P$) of reads upstream ($U$) to reads downstream ($D$), $P = \\frac{U}{D}$. This statistic will hereafter be referred to as the proportion of a CTCF site. The final set of sites with valid proportion values consisted of 2,636 sites.\n\n
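A minimal sketch of this estimate and of the per-site statistic (all inputs are hypothetical placeholders):\n
\\begin{verbatim}\n# Estimate reads falling in a set of exons from a tag-density track,\n# following the formula above; inputs are placeholder objects\nestimate_reads <- function(exon_signal, total_signal, total_reads) {\n  round(sum(exon_signal) / total_signal * total_reads)\n}\n\nU <- estimate_reads(upstream_signal,   total_signal, total_reads)\nD <- estimate_reads(downstream_signal, total_signal, total_reads)\n\nP <- U / D   # the 'proportion' of a CTCF site used throughout\n\\end{verbatim}\n\n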
\\section{RESULTS}\n\\label{RESULTS}\n\\subsection{Orientation and Shift in Proportions}\n\\label{Orientation}\nBefore the influence of CTCF on alternative splicing can be examined, directionality effects due to CTCF and transcriptional direction have to be accounted for. Effects of the various experimental conditions were quantified by dividing the proportion after treatment by the proportion before treatment. Kernel density plots for the log change in proportions are shown in Figure \\ref{fig1}. As Figure \\ref{fig1a} shows, changes in the log expression ratio around the CTCF site do not depend on CTCF orientation. On the other hand, Figure \\ref{fig1b} shows that changes in log expression ratios are mirrored with respect to transcriptional direction; a two-sample t-test shows a significant difference ($p.value = 7E-63$) between the distributions. Thus, comparisons must be made with respect to transcription orientation, with upstream of a CTCF site being defined as transcriptionally upstream and downstream as transcriptionally downstream. Once corrected for transcriptional direction, the change in log expression ratio is significantly positive (Figure \\ref{fig1c}).\n\n
\\begin{figure}\n\t\\centering\n\t\\begin{subfigure}[b]{0.3\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=\\textwidth]{fig1a}\n\t\t\\caption{}\n\t\t\\label{fig1a}\n\t\\end{subfigure}\n\n\t\\begin{subfigure}[b]{0.3\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=\\textwidth]{fig1b}\n\t\t\\caption{}\n\t\t\\label{fig1b}\n\t\\end{subfigure}\n\n\t\\begin{subfigure}[b]{0.3\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=\\textwidth]{fig1c}\n\t\t\\caption{}\n\t\t\\label{fig1c}\n\t\\end{subfigure}\n\t\\caption{Log change in proportions in CTCF-AID tagged cells from untreated to auxin 2 days ($N=2636$). (a) Distributions grouped by CTCF orientation overlap and show no significant difference. (b) Distributions grouped by transcription orientation are mirrored and show a significant difference ($p.value=7E-63$). (c) When proportions are recalculated to account for transcriptional orientation, the log change in proportions shows a significant deviation from zero ($p.value=2E-63$).}\n\t\\label{fig1}\n\\end{figure}\n\n
\\subsection{Contingency Tables and Tests for Significance}\n\\label{Contingency}\nTo understand specific changes in isoform distribution and evaluate the significance of the change at specific CTCF sites, contingency tables were built for each site. Observations consist of the numbers of fragments detected upstream and downstream of the CTCF site, with Fisher's exact test conducted to evaluate the difference. Fisher's exact test was preferred over the chi-squared test because mRNA fragment counts tend to be skewed, varying wildly between very large and very small values. Multiple-testing correction was performed using the Bonferroni correction, resulting in a more conservative alpha for significance testing, $\\alpha=0.05\/2636 \\approx 1.9E-5$. Given the noisiness of the data, the reduced power that accompanies this conservative control of false positives was considered acceptable.\nThree contingency tables were constructed for each site to evaluate the influence of CTCF degradation on alternative splicing. The parameters for the tests are summarized in Table \\ref{tab1}; a sketch of the per-site test follows the table.\n\n
\\begin{table}[H]\n\t\\centering\n\t\\begin{adjustbox}{width=1\\textwidth}\n\t\t\\begin{tabular}{|l|l|l|l|}\n\t\t\t\\hline\n\t\t\t\\textbf{Test} & \\textbf{Sample} & \\textbf{Comparison} & \\textbf{Observed}\\\\ \n\t\t\t\\hline\n\t\t\t1 & CTCF-AID tagged cells & Untreated vs auxin 2 days & \\thead{Distribution of fragments \\\\upstream and downstream\\\\ of the CTCF site}\\\\ \n\t\t\t\\hline\n\t\t\t2 & Wildtype untagged cells & Untreated vs auxin 2 days & \\thead{Distribution of fragments \\\\upstream and downstream\\\\ of the CTCF site}\\\\\n\t\t\t\\hline\n\t\t\t3 & Untreated tagged and untagged cells & CTCF-AID tagged vs wildtype untagged & \\thead{Distribution of fragments \\\\upstream and downstream\\\\ of the CTCF site}\\\\\n\t\t\t\\hline\n\t\t\\end{tabular}\n\t\\end{adjustbox}\n\t\\caption{Description of tests conducted on contingency tables}\n\t\\label{tab1}\n\\end{table}\n\n
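As an illustration of the per-site test, with invented counts standing in for the estimated reads:\n
\\begin{verbatim}\n# Hypothetical 2 x 2 table for one CTCF site: estimated reads upstream\n# and downstream of the site, before and after auxin (invented numbers)\ntab <- matrix(c(1200,  950,    # untreated:    upstream, downstream\n                1450,  900),   # auxin 2 days: upstream, downstream\n              nrow = 2, byrow = TRUE)\n\nfisher.test(tab)$p.value   # per-site p-value\n\nalpha <- 0.05 / 2636       # Bonferroni threshold, ~1.9E-5\n\\end{verbatim}\n\n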
\\subsection{Changes in Proportions in CTCF Bound Genes}\nThe distribution of p-values shows anti-conservative trends for all three tests (Figure \\ref{fig2}), suggesting that the null hypothesis of equal exon usage in genes with CTCF binding sites is false for some genes. In CTCF-AID tagged cells, treatment with auxin resulted in significant changes in proportions at 464 CTCF sites (Figure \\ref{fig2a}). Surprisingly, 356 sites in wildtype untagged cells also showed significant change after auxin treatment (Figure \\ref{fig2b}), even though treatment should not result in CTCF depletion. Moreover, comparing untreated CTCF-AID cells to untreated untagged cells shows 483 significant sites (Figure \\ref{fig2c}).\n\n
\\begin{figure}\n\t\\centering\n\t\\begin{subfigure}[b]{0.3\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=\\textwidth]{fig2a}\n\t\t\\caption{}\n\t\t\\label{fig2a}\n\t\\end{subfigure}\n\n\t\\begin{subfigure}[b]{0.3\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=\\textwidth]{fig2b}\n\t\t\\caption{}\n\t\t\\label{fig2b}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.3\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=\\textwidth]{fig2c}\n\t\t\\caption{}\n\t\t\\label{fig2c}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.3\\textwidth}\n\t\\centering\n\t\\includegraphics[width=\\textwidth]{fig2d}\n\t\\caption{}\n\t\\label{fig2d}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.3\\textwidth}\n\t\\centering\n\t\\includegraphics[width=\\textwidth]{fig2e}\n\t\\caption{}\n\t\\label{fig2e}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.3\\textwidth}\n\t\\centering\n\t\\includegraphics[width=\\textwidth]{fig2f}\n\t\\caption{}\n\t\\label{fig2f}\n\\end{subfigure}\n\t\\caption{Factors that affect alternative splicing. (a-c) Distribution of p-values from Fisher's exact test of factors influencing splicing. (a) Test 1 evaluates the effect of auxin on CTCF-AID tagged cells. (b) Test 2 evaluates the effect of auxin on wildtype untagged cells. (c) Test 3 evaluates the effect of CTCF-AID tagging. (d-f) Scatter plots mapping proportions in control and experiment, color coded by significance of p-values. (d, e) Untreated vs auxin 2 days in tagged and untagged wildtype cells. (f) Untagged wildtype vs CTCF-AID tagged untreated cells.}\n\t\\label{fig2}\n\\end{figure}\n\n
A large change in proportion was not always significant, while many sites showing only moderate proportion changes were significant. Significant sites showed both positive and negative changes in proportions (Figure \\ref{fig3a}). Nonsignificant points showing large changes had relatively small numbers of reads, making it possible for small variations to produce a large-magnitude change yet still give high p-values; conversely, sites close to the center that reached significance had large numbers of reads (Supplementary Table Sites). A number of sites overlap in the tests for which they are significant (Figure \\ref{fig3b}). There are more overlaps than expected under such stringent selection, suggesting that common mechanisms may cause these sites to display greater variation.\n\n
\\begin{figure}\n\t\\centering\n\t\\begin{subfigure}[b]{0.45\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=\\textwidth]{fig3a}\n\t\t\\caption{}\n\t\t\\label{fig3a}\n\t\\end{subfigure}\n\n\t\\begin{subfigure}[b]{0.4\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=\\textwidth]{fig3b}\n\t\t\\caption{}\n\t\t\\label{fig3b}\n\t\\end{subfigure}\n\t\\caption{CTCF sites grouped by the tests for which they are significant. (a) Scatter plot mapping log change in proportion in wildtype untagged cells against log change in CTCF-AID tagged cells. (b) Venn diagram showing overlaps in test significance. There were 1,718 sites that were not significant in any of the tests.}\n\t\\label{fig3}\n\\end{figure}\n\n
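The distribution-level comparisons in the next subsection ask whether these per-site log changes are shifted away from zero. A minimal sketch of such a comparison, run here on an invented vector of log changes rather than the real estimates:\n
\\begin{verbatim}\n# Invented per-site log changes in proportion (placeholder for the\n# real per-site estimates computed above)\nlog_change <- rnorm(2636, mean = 0.05, sd = 0.4)\n\nplot(density(log_change), main = 'Log change in proportion')\nabline(v = 0, lty = 2)\n\nt.test(log_change, mu = 0)   # is the mean shift different from zero?\n\\end{verbatim}\n\n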
\\subsection{Distribution of Proportion Change}\n\\label{Distribution}\nAssuming that all CTCF sites belong to one population and are impacted similarly, the distribution of proportion changes can be viewed as a whole to analyze how treatment affected splicing. Exposure to auxin for 2 days caused proportions to increase significantly in CTCF-AID tagged cells and to decrease significantly in wildtype untagged cells (Figure \\ref{fig4a}). As expected, treated tagged cells showed a significant increase in proportions compared to treated untagged cells. Surprisingly, untreated tagged cells also showed significant increases over untreated untagged cells, although these changes were smaller in magnitude (Figure \\ref{fig4b}).\n\n
\\begin{figure}\n\t\\centering\n\t\\begin{subfigure}[b]{0.8\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=\\textwidth]{fig4a}\n\t\t\\caption{}\n\t\t\\label{fig4a}\n\t\\end{subfigure}\n\n\t\\begin{subfigure}[b]{0.8\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=\\textwidth]{fig4b}\n\t\t\\caption{}\n\t\t\\label{fig4b}\n\t\\end{subfigure}\n\t\\caption{Log change in proportions across treatments. (a) Comparing untreated to auxin 2 days. Untagged cells show a decrease with $p.value=3E-16$. Tagged cells show an increase with $p.value=2E-63$. (b) Comparing wildtype untagged to CTCF-AID tagged. Untreated cells show an increase with $p.value=2E-51$. Treated cells show an increase with $p.value=5E-171$.}\n\t\\label{fig4}\n\\end{figure}\n\n
\\section{DISCUSSION}\n\\label{DISCUSSION}\nWhile CTCF has been noted as a key player in shaping the 3D structure of chromatin, its direct effects on gene regulation, and in particular on alternative splicing, are less well characterized. This study investigated the direct effects of CTCF on alternative exon usage in mouse embryonic stem cells. Using previously published ChIP-seq and RNA-seq data from Nora et al., 2017, we investigated alternative splicing in genes containing a CTCF binding site, quantifying and comparing the changes in alternative exon usage after CTCF bound within a gene was degraded.\n\n
We found that degrading CTCF using auxin in CTCF-AID tagged cells resulted in an increase in the proportion of upstream fragments used in final mRNA transcripts. This finding supports the hypothesis of Shukla et al. (2011) that CTCF binding to DNA blocks transcription proteins and causes pauses in mRNA transcription. Splicing occurs concurrently with transcription, and thus when transcription is paused by CTCF, splicing elements can act with greater frequency upon upstream RNA that has already been transcribed. Depletion of CTCF likely prevents these pauses in transcription, giving splicing elements less opportunity to act on exons upstream of CTCF sites. Here, we provide evidence for this mechanism by showing that CTCF depletion results in greater upstream exon usage in mRNA formation.\n\n
Although these findings seem promising, it should be noted that significant differences in the proportions of exon usage were also observed in control cases that should exhibit none. In particular, there were 384 genes that exhibited a significant change in alternative splicing in the wildtype untagged cells after treatment with auxin.
Comparing wildtype untagged cells with CTCF-AID tagged cells in the absence of auxin, we found 483 CTCF-bound genes that exhibited significantly different alternative splicing between the two conditions. Although the magnitudes of the changes were smaller than those between the experimental condition and control, these changes are concerning, as they suggest that the AID tagging method itself may cause changes in gene expression. One explanation is that the tagging process affected the expression levels of genes related to splicing factors and transcription controls. Another is that although ChIP-seq shows tagged CTCF still bound to DNA, binding efficiency may be impacted to an extent that produces observable differences in splicing. Harder still to explain is why untagged cells showed lower proportions after exposure to auxin: untagged cells did not have their CTCF degraded, showed no difference in gene expression levels, and did not suffer the cytotoxic effects that tagged cells displayed. Nonetheless, untagged cells showed an anti-conservative distribution of p-values and a significantly negative change in proportion. These observations should be examined further, as they may otherwise undermine the conclusions made above.\n\n
The effect of CTCF binding on alternative splicing in mouse embryonic stem cells is apparent in this study. We showed a number of genes that exhibit a change in upstream exon usage after depletion of CTCF, suggesting a functional role for CTCF in determining alternatively spliced mRNA transcripts. As noted by Li et al. (2016), alternative splicing and the resulting isoforms have great impact not only on biodiversity and genetic variation but also on disease \\cite{li2016rna}. Understanding the mechanisms behind alternative splicing can provide insight into the pathology of diseases such as developmental disorders and cancers; this was already hinted at by Filippova et al. (1998), who associated CTCF binding with deletions resulting in breast and prostate cancers \\cite{filippova1998widely}. Perhaps more exciting, understanding the decision-making machinery behind alternative splicing could expose vulnerabilities in alternative-splicing-driven disease mechanisms and inform potential targets for therapy. Future work should focus on characterizing the types of genes and pathways affected by CTCF-mediated alternative splicing.\n\n\\bibliographystyle{elsarticle-num}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}