\section*{Introduction}\n\label{Sec-Introduction}\n\nComputational science and engineering (CSE) involves the integration\nof a number of different techniques, requiring expertise in data\nstructures, algorithms, numerical analysis, programming methodologies,\nsimulation, visualization, data analysis, and performance\noptimization. The CSE community has embraced Python as a platform for\nattacking a wide variety of research problems, in part because of\nPython's support for easily gluing together tools from different\ndomains to solve complex problems. Teaching the theory and practice\nof CSE requires touching on all the subjects mentioned above, in the\ncontext of important and interesting scientific problems. Many of the\nsame advantages that Python brings to CSE research make it useful for\nteaching too. Traditionally, courses have tended to focus more\nnarrowly on particular aspects of CSE, such as numerical analysis,\nalgorithms, or high-performance computing. In developing a new,\nbroadly focused laboratory course in computational science,\nengineering and biology, we have sought to introduce students to the\nwide swath of techniques necessary to do effective research in CSE.\nPython and its many batteries serve remarkably well in this endeavor.\n\n\emph{Computational methods for nonlinear systems} is a graduate\ncomputational science laboratory course jointly developed and taught\nby us. We initiated course development in the summer of 2004 to\nsupport the curricular needs of the Cornell IGERT program in nonlinear\nsystems, a broad and interdisciplinary graduate fellowship program\naimed at introducing theoretical and computational techniques\ndeveloped in the study of nonlinear and complex systems to a range of\nfields. 
The focal themes of the IGERT program span a number of areas\n- including complex networks, biological locomotion and manipulation,\npattern formation, and gene regulation - broadly interpreted in the\ncontext of complex systems and nonlinear dynamics. These themes form\nthe core of our course curriculum, augmented with other problems of\ninterest arising in the fields of statistical mechanics, applied\nmathematics, and computer science.\n\nThe format of the course is somewhat unusual. As a computational\nlaboratory course, it provides relatively little in the way of\nlectures: we prefer to have students learn by doing, rather than\nhaving us tell them how to do things. The course is autonomous,\nmodular, and self-paced: students choose computational modules to work\non from a large (and hopefully growing) suite of those available, and\nthen proceed to implement relevant simulations and analyses as laid\nout in the exercise. We provide \\emph{Hints} files to help the\nstudents along: these consist of documented skeletal code that the\nstudents are meant to flesh out. (In practice, we develop a module\nourselves, document each of the relevant pieces using Python's\ndocstrings, and then replace all the code bodies with the Python\nkeyword \\verb+pass+ so that the students can repopulate those code\nbodies themselves.) We have written several different visualization tools\nto provide visual feedback. We find these help to \nengage the students in new problems and are useful in code debugging.\n\nPython is a useful language for teaching for several reasons (even \nthough most of our incoming students have had no previous\nexperience with Python). Its clean syntax enables students to learn\nthe language quickly, and allows us to provide concise \nprogramming hints in our documented code fragments. 
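A \emph{Hints} skeleton of this sort is easy to picture: the docstring survives, while the body the students must write is reduced to \verb+pass+. The sketch below is our own illustration of the idea, not an actual course file (the function name and signature are invented for this example):

```python
# Illustrative "Hints"-style skeleton: documentation is kept,
# the implementation is replaced by `pass` for students to fill in.
# (Name and signature are hypothetical, not from the course files.)

def ComputePathLength(graph, node1, node2):
    """Return the length of the shortest path connecting node1 and
    node2 in graph, using breadth-first search, or None if the two
    nodes are not connected."""
    pass
```

Calling such a stub simply returns \verb+None+, so students can run the surrounding framework immediately and watch it come to life as they flesh out each body.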
Python's dynamic\ntyping and high-level, built-in datatypes enable students to get\nprograms working quickly, without having to struggle with type\ndeclarations and compile-link-run loops. Since Python is interpreted,\nstudents can learn the language by executing and analyzing individual\ncommands, and we can help them debug their programs by working with\nthem in the interpreter.\n\nOne of the other key advantages that Python brings to scientific\ncomputing is the availability of many packages supporting numerical\nalgorithms and visualization. While some of our exercises require\ndevelopment of algorithms from scratch, others rely on established\nnumerical routines implemented in third-party libraries. It is of\ncourse important to understand the fundamentals of algorithms, error\nanalysis, and algorithmic complexity, but it is also useful to know\nwhen and how to use existing solutions that have been developed by\nothers. We make heavy use of the {numpy}\cite{Numpy} and\n{scipy}\cite{Scipy} packages, for construction of efficient arrays and\nfor access to routines for generation of random numbers, integration\nof ordinary differential equations, root-finding, computation of\neigenvalues, etc. We use {matplotlib}\cite{Matplotlib} for x-y\nplotting and histograms. We have written several visualization\nmodules that we provide to students, based on the {Python Imaging\nLibrary (PIL)}\cite{PIL}, using PIL's ImageDraw\nmodule to place graphics primitives within\nan image, and the ImageTk module to paste an image into a Tk\nwindow for real-time animation. 
We recommend the use of the {ipython}\ninterpreter, which facilitates exploration by students\cite{IPython}.\nAnd we have used {VPython}\cite{VPython} to generate three-dimensional\nanimations to accompany some of our modules.\n\n\section*{Course modules}\n\label{Sec_Course_modules}\n\nThere are too many course modules to describe in detail here, and we\nrefer interested readers to our course website \cite{CM4NS} for\ninformation on all the modules, as well as access to problems, hints,\nand answers. (Many of the exercises have also been incorporated into\na new textbook written by one of us.\cite{Sethna2006}) Here, we\nhighlight a few of the modules, in order to illustrate both the\nbreadth of science that can be usefully taught with Python and the variety\nof tools and techniques that Python can bring to bear on such\nproblems.\n\n\subsection*{Small world networks}\n\nThe study of complex networks has flourished over the last several\nyears as researchers have discovered commonalities among networked\nstructures that arise in diverse fields such as biology, ecology,\nsociology, and computer science\cite{Barabasi2002}. An interesting\nproperty found in many complex networks is exemplified in the popular\nnotion of ``six degrees of separation'', which suggests that any two\npeople on earth are connected through at most roughly five\nintermediate acquaintances. Duncan Watts and Steve\nStrogatz\cite{Watts1998} developed a simple model of random networks\nthat demonstrate this ``small-world'' property. 
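The flavor of the Watts-Strogatz construction can be conveyed with a plain neighbor dictionary: start from a ring in which each node is linked to its $Z$ nearest neighbors, then sprinkle in random long-range shortcuts. The sketch below is our own minimal illustration (function name and interface are invented here, independent of the course's \verb+UndirectedGraph+ class):

```python
import random

def MakeRingWithShortcuts(L, Z, p):
    """Sketch of a Watts-Strogatz-like small-world graph: L nodes on a
    ring, each linked to its Z/2 neighbors on either side, plus roughly
    p*L*Z/2 random long-range shortcuts.  (Illustrative only.)"""
    neighbors = {node: set() for node in range(L)}
    # short-range ring bonds
    for node in range(L):
        for step in range(1, Z // 2 + 1):
            other = (node + step) % L
            neighbors[node].add(other)
            neighbors[other].add(node)
    # random long-range shortcuts
    for _ in range(int(p * L * Z / 2)):
        a, b = random.sample(range(L), 2)
        neighbors[a].add(b)
        neighbors[b].add(a)
    return neighbors

g = MakeRingWithShortcuts(100, 4, 0.1)
```

Measuring the average shortest-path length on such graphs as $p$ grows reproduces the rapid ``small-world'' drop discussed below.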
Our course module\nenables students to construct small-world networks and to examine how\nthe average path length connecting two nodes decreases rapidly as\nrandom, long-range bonds are introduced into a network consisting\ninitially of only short-ranged bonds (Figure 1).\n\nComputationally, this module introduces students to data structures\nfor the representation of undirected graphs, object-oriented\nencapsulation of those data structures, and graph traversal\nalgorithms. Python makes the development of an undirected graph data\nstructure exceedingly simple, a point made long ago by Python creator\nGuido van Rossum in one of his early essays on\nPython\cite{VanRossum1998}. In an undirected graph, nodes are\nconnected to other nodes by edges. A simple way to implement this is\nto combine the two cornerstones of container-based programming in\nPython: lists and dictionaries. In our \verb+UndirectedGraph+ class,\na dictionary of network neighbor connections (a neighbor dictionary)\nmaps a node identifier to a list of other nodes to which the reference\nnode is connected. Because the graph edges are undirected, we\nduplicate the connection information for each node: if an edge is\nadded connecting node 1 and node 2, the neighbor dictionary must be\nupdated so that node 2 is added to node 1's list of neighbors, and\nvice versa.\n\n\begin{figure}\n\includegraphics[height=2in]{betweenness.ps}\n\caption{\label{SmallWorldFig}\nNode and edge betweenness in a model of small-world networks. Nodes\n(red dots) are connected by undirected edges (black lines).\nBetweenness measures how central each node and edge is to the shortest\nnetwork paths connecting any two nodes. In this plot, node diameter\nand edge thickness are proportional to node and edge betweenness,\nrespectively. 
(Our simple graph visualization tool uses the Python Imaging\nLibrary.)\n}\n\\end{figure}\n\nWe can of course hide the details of adding edges \ninside an \\verb+AddEdge+ method defined on an \\verb+UndirectedGraph+ class:\n\n\\begin{verbatim}\nclass UndirectedGraph:\n # ...\n def AddEdge(self, node1, node2):\n self.AddNode(node1)\n self.AddNode(node2)\n if node2 not in self.neighbor_dict[node1]:\n self.neighbor_dict[node1].append(node2)\n if node1 not in self.neighbor_dict[node2]:\n self.neighbor_dict[node2].append(node1)\n\\end{verbatim}\n\nIn the small-world networks exercise, we choose to label nodes simply\nby integers, but Python's dynamic typing does not require this. If we\nwere playing the ``Kevin Bacon game'' of searching for\nshortest paths in actor collaboration networks, we could use our code\nabove to build a graph connecting names of actors (encoded as\nstrings). This dynamic typing allows for significant code reuse (as\ndescribed below in the section on Percolation). And it is worth\nmentioning that, while our \\verb+UndirectedGraph+ class is exceedingly simple\nand built to support only the analyses relevant to our course module,\nthe same basic principles are at work in a much more comprehensive,\nPython-based, graph construction and analysis package - named NetworkX\n- that has been developed at Los Alamos National Labs.\\cite{NetworkX}\n\n\\subsection*{Percolation}\n\nPercolation is the study of how objects become connected (or\ndisconnected) as they are randomly wired together (or cut apart).\nPercolation is an important and classic problem in the study of phase\ntransitions, and has practical relevance as well: considerable\ninterest over the years in percolation phenomena has come from the oil\nand gas industry, for example, where one is interested in extracting a\nfluid through a network of pores in rock.\n\nAlthough percolation is traditionally studied on regular lattices, it\nis a problem more generally applicable to arbitrary networks, 
and in\nfact, we are able to reuse some of the code developed in the\nsmall-world networks module to support the study of percolation. As\nnoted above, Python's dynamic typing makes our definition of a node in\na graph very flexible; in a percolation problem on a lattice, we can\nreuse our \verb+UndirectedGraph+ class described previously by making node\nidentifiers be lattice index tuples $(i,j)$. We can thus easily make an\ninstance of bond percolation on a 2D square lattice of size $L$ (with\nperiodic boundary conditions) and bond fraction $p$:\n\n\begin{verbatim}\ndef MakeSquareBondPercolation(L, p):\n g = UndirectedGraph()\n for i in range(L):\n for j in range(L):\n g.AddNode((i,j))\n if random.random() < p:\n g.AddEdge((i,j), ((i+1)%L, j))\n if random.random() < p:\n g.AddEdge((i,j), (i, (j+1)%L))\n return g\n\end{verbatim}\n\n\begin{figure}\n\includegraphics[height=2in]{BondPercolation_10_0.4_1.ps}\n\includegraphics[height=2in]{BondPercolation_1024_0.5_2.ps}\n\includegraphics[height=2in]{SitePercolation_20_0.5_4.ps}\n\caption{\label{PercolationFig}\nTwo instances of bond percolation on a 2D square lattice, \nand an instance of site percolation on a hexagonal lattice.\nIn bond percolation, neighboring lattice points are connected with\nprobability $p$, and connected clusters in the resulting network are\nidentified via breadth-first search. Separate clusters are colored\ndistinctly, for a 10x10 grid (left) and a 1024x1024 grid (middle). In\nsite percolation (right), lattice sites are filled with probability\n$p$, and clusters connect neighboring sites that are filled. We study\nboth bond and site percolation to introduce the concept of\nuniversality of phase transitions.}\n\end{figure}\n\nInstances of percolation networks generated by this procedure are\nillustrated in Figure 2. 
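Identifying the connected clusters in such a network is a short breadth-first search. The following is a minimal sketch of our own (not the course's code) that works on any neighbor dictionary mapping a node to an iterable of its neighbors:

```python
from collections import deque

def FindAllClusters(neighbors):
    """Return a list of connected clusters (each a set of nodes) for a
    graph given as a dict mapping node -> iterable of neighbor nodes,
    using breadth-first search from each unvisited node."""
    visited = set()
    clusters = []
    for start in neighbors:
        if start in visited:
            continue
        cluster = {start}
        queue = deque([start])
        while queue:
            node = queue.popleft()
            for other in neighbors[node]:
                if other not in cluster:
                    cluster.add(other)
                    queue.append(other)
        visited |= cluster
        clusters.append(cluster)
    return clusters
```

The list of clusters returned here is exactly the kind of input our PIL-based visualization tool consumes when coloring each cluster distinctly.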
Students use breadth-first search to identify\nall connected clusters in such a network, and our PIL-based\nvisualization tool colors each separate cluster distinctly, taking as\ninput a list of all nodes in each cluster. \n\nThe concept of universality of phase transitions is also introduced:\ndespite their microscopic differences, site-percolation on a 2D\nhexagonal lattice and bond-percolation on a 2D square lattice are\nindistinguishable from each other on long length scales, exhibiting\nthe same critical behavior (e.g., scaling exponents). Scaling\ncollapses are a useful construct for revealing the universality of\nphase transitions, and typically involve transforming the $x$ and $y$\naxes in specified ways to get disparate data sets to ``collapse'' onto\none universal scaling form. With Python, we can support \nsuch scaling collapses very flexibly by using the built-in\n\verb+eval()+ function that evaluates expressions encoded as strings.\nRather than hard-coding particular functional forms for scaling collapses,\narbitrary mathematical expressions can be simply encoded and evaluated.\n\n\subsection*{Biomechanics: The Walker}\n\nResearch in biomechanics aims to understand how living beings move,\nand robotics and prosthetics are two important technological areas\nthat can benefit from advances in the field. While much research in\nrobotics is focused on active sensing and control, Andy Ruina and\ncollaborators have been interested in passive biolocomotive systems,\nwhich are more properly understood as dynamical systems than as\ncontrol systems. The ``simplest walking model'' of Garcia et\nal.\cite{Garcia1998} provides the basis of our Walker module. This\nmodel consists of a pair of legs connected at the hip (a double\npendulum), walking down an inclined ramp under the influence of\ngravity, with a heelstrike that imparts angular momentum to the Walker\nas the swing leg strikes the floor and becomes the stance leg. 
As a\nwarmup, students integrate the equation of motion for a single\npendulum under gravity, and compute the period of the motion as a\nfunction of the initial pendulum angle.\n\nThe Pendulum and Walker modules introduce several important scientific\nand computational aspects. Ordinary differential equations (ODEs) \ndescribing the time evolution of the Pendulum and Walker \nneed to be integrated forward in time. In \nthe context of the simpler Pendulum, we highlight the properties of \naccuracy, fidelity, and stability in numerical integration, having \nstudents explore errors introduced by a finite time step $\\Delta t$.\nWe also highlight the need for event detection in many numerical \nintegration problems. In the Walker, for example, \na heelstrike occurs when the swing leg hits the floor.\nAccurately solving for the heelstrike collision involves transforming\nto a new set of integration variables, where an appropriate\ncombination of the pendulum angles becomes the independent variable,\nand time a dependent variable. We then integrate backwards in angle\nto find the time at which the heelstrike occurred. 
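The pendulum warmup makes the accuracy discussion concrete: integrating $\ddot\theta = -\sin\theta$ (with $g/l = 1$) using a simple fixed-step scheme, students can watch the energy error shrink as $\Delta t$ is reduced. The sketch below is our own self-contained illustration using a velocity-Verlet (leapfrog) step, rather than the odeint-based course code:

```python
import math

def LeapfrogPendulum(theta0, dt, t_max):
    """Integrate theta'' = -sin(theta) (dimensionless pendulum, g/l = 1)
    with velocity-Verlet steps, starting from rest at angle theta0;
    returns a list of (t, theta, thetaDot) samples."""
    theta, omega, t = theta0, 0.0, 0.0
    traj = [(t, theta, omega)]
    while t < t_max:
        omega_half = omega + 0.5 * dt * (-math.sin(theta))  # half kick
        theta = theta + dt * omega_half                     # drift
        omega = omega_half + 0.5 * dt * (-math.sin(theta))  # half kick
        t += dt
        traj.append((t, theta, omega))
    return traj

def Energy(theta, omega):
    """Conserved energy of the dimensionless pendulum."""
    return 0.5 * omega**2 - math.cos(theta)
```

Monitoring \verb+Energy+ along the trajectory for several values of \verb+dt+ exhibits the finite-time-step errors discussed above.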
We use the\nscipy.integrate.odeint function to execute these integrations,\nproviding a function \verb+dydt+ that evaluates the instantaneous time\nderivative of the Walker state vector field $\vec y$ and a function\n\verb+dzdc+ that evaluates the instantaneous time derivative of the\ntransformed system for heelstrike detection (where the independent variable\nis the ``collision variable'' $c = \phi-2\theta$).\n\n\begin{verbatim}\ndef dydt(self, y, t):\n theta,thetaDot,phi,phiDot = y\n thetaDotdot = scipy.sin(theta-self.gamma)\n phiDotdot = thetaDotdot + \\\n (thetaDot**2)*sin(phi)-cos(theta-self.gamma)*sin(phi)\n return [thetaDot,thetaDotdot,phiDot,phiDotdot]\n\nself.trajectory = scipy.integrate.odeint(self.dydt, self.GetStateVector(),\\\n timepoints)\n\ndef dzdc(self, z, c):\n theta,thetaDot,phi,phiDot,t = z\n y = array([theta, thetaDot, phi, phiDot])\n thetaDot, thetaDotdot, phiDot, phiDotdot = self.dydt(y, t)\n cDot = phiDot - 2.*thetaDot\n return [thetaDot\/cDot,thetaDotdot\/cDot,phiDot\/cDot,phiDotdot\/cDot,\n 1.\/cDot]\n\nz = scipy.integrate.odeint(self.dzdc, [y[0],y[1],y[2],y[3],t], \n scipy.array([self.CollisionCondition(), 0.]))\n\n\end{verbatim}\n\n\begin{figure}\n\includegraphics[height=2in]{Walker23.ps}\n\caption{\label{WalkerFig}\nSnapshot in the perambulation of the Walker. The model consists of a\npair of coupled pendula (legs) walking down a ramp. The stance leg (red)\nremains fixed with respect to the floor, while the swing leg (orange)\nswings forward. Once the swing leg hits the floor ahead of the stance leg\n(heelstrike), the two legs switch roles. \nReal-time animation of the Walker is accomplished using VPython.\n}\n\end{figure}\n\nThe Walker exhibits an interesting period-doubling route to chaos as\nthe slope of the inclined ramp is increased. Simple periodic walking\nis stable for small ramp angles, but becomes unstable to a period-two\ngait at a critical angle. 
The period-two orbit bifurcates to a\nperiod-four gait, etc., with increasing angle, culminating in a\nchaotic walk. (The chaos is, however, remarkably subtle, as is also\ntrue in systems like dripping faucets.) A snapshot of the Walker is\nshown in Figure 3. Other modules in our course\nenable students to study these sorts of\nperiod-doubling bifurcations and chaotic dynamics in iterated\none-dimensional maps in considerably more detail.\n\n\subsection*{Pattern formation in cardiac dynamics}\n\nPattern formation is ubiquitous in spatially-extended nonequilibrium\nsystems. Many patterns involve regular, periodic phenomena in space\nand time, but equally important are localized coherent structures that\nbreak or otherwise interrupt these periodic structures. Patterns lie\nat the root of much activity in living tissues: the regular beating of\nthe human heart is perhaps our most familiar reminder of the\nspatiotemporal rhythmicity of biological patterns. Cardiac tissue is an\nexcitable medium: rhythmic voltage pulses, initiated by the heart's\npacemaker cells (in the sinoatrial node), spread as a wave through the\nrest of the heart inducing the heart muscle to contract, thereby\npumping blood in a coherent fashion. In some situations, however,\nthis regular beating can become interrupted by the presence of spiral\nwaves in the heart's electrical activity (see Figure 4). These\nspiral waves generate voltage pulses on their own, disrupting the\ncoordinated rhythm of the normal heart, leading to cardiac arrhythmia.\nA simple model of cardiac dynamics - the two-dimensional\nFitzHugh-Nagumo equations\cite{FitzHugh1961, Nagumo1962} - \nis introduced in this course module, which \nwe developed in conjunction with Niels Otani. 
The FitzHugh-Nagumo model\ndescribes the coupled time evolution of two fields, the transmembrane\npotential $V$ and the recovery variable $W$ (given parameters\n$\\epsilon$, $\\gamma$ and $\\beta$):\n$$\n\\frac{\\partial V}{\\partial t} = \\nabla^2 V + \\frac{1}{\\epsilon}\n (V - V^3\/3 - W)\\ \\ \\ \\ \\ \\ \\ \\ \\ \n\\frac{\\partial W}{\\partial t} = \\epsilon (V - \\gamma W + \\beta)\n$$\n\nFixed point solutions to the FitzHugh-Nagumo equations are found by \nroot-finding, which we accomplish using the \\verb+brentq+ function in scipy:\n\n\\begin{verbatim}\ndef FindFixedPoint(gamma, beta):\n f = lambda v, gamma, beta: (v-(v**3)\/3.)-((1.\/gamma)*(v+beta))\n vstar = scipy.optimize.brentq(f, -2., 2., args=(gamma, beta))\n wstar = ((1.\/gamma)*(vstar+beta))\n return vstar, wstar\n\\end{verbatim}\n\nWe also introduce students to finite-difference techniques for\ncomputing spatial derivatives in the solution of partial differential\nequations (PDEs). Numpy arrays are used to represent the $V$ and $W$\nfields of the FitzHugh-Nagumo model, and an important operation is the\ncomputation of the laplacian of the voltage field, $\\nabla^2 V(x,y)$.\nWe introduce the stencil notation for characterizing finite-difference\napproximations to $\\nabla^2 V$, and use a combination of array\narithmetic and array slicing to compactly and efficiently compute the\nderivative on the interior (non-boundary) cells of the simulation\ndomain. Students are asked to implement two different approximations\nto the laplacian operator (a five-point and nine-point stencil), \nand compare their effects on the detailed form of propagating electrical waves.\nThe computation of the five-point stencil is shown here:\n\n\\begin{figure}\n\\includegraphics[height=2in]{Cardiac.ps}\n\\includegraphics[height=2in]{CardiacShock.ps}\n\\caption{\\label{CardiacFig}\nSnapshots in the time evolution of the FitzHugh-Nagumo model of \ncardiac dynamics. 
The transmembrane voltage $V$ is depicted via a \ngrayscale map (higher voltages in lighter grays). Spiral waves in \nthe voltage field can lead to cardiac arrhythmias by disrupting the \nnormal periodic rhythm generated by the sinoatrial node. (Right) \nUsers can administer local voltage pulses (white rectangle) to trigger\nspiral wave formation or to shock the arrhythmic heart back to a \nnormal beating state.\n}\n\end{figure}\n\n\begin{verbatim}\ndef del2_5(a, dx):\n \"\"\"del2_5(a, dx) returns a finite-difference approximation of the\n laplacian of the array a, with lattice spacing dx, using the five-point\n stencil:\n 0 1 0\n 1 -4 1\n 0 1 0\n \"\"\"\n del2 = scipy.zeros(a.shape, float)\n del2[1:-1, 1:-1] = (a[1:-1,2:] + a[1:-1,:-2] + \\\n a[2:,1:-1] + a[:-2,1:-1] - 4.*a[1:-1,1:-1])\/(dx*dx)\n return del2 \n\end{verbatim}\n\nWe provide an animation tool that we have written, based on PIL and\nTkinter, that enables students to update the display of the voltage\nfield V at every time step, and to use the mouse to introduce local\n``shocks'' to the system. (See Figure 4.) These shocks are both\nuseful in initiating spiral waves and in resetting the global\nelectrical state of the system as a defibrillator might do. 
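Putting the pieces together, one explicit-Euler time step of the two FitzHugh-Nagumo fields amounts to evaluating the laplacian and the local reaction terms. The sketch below is our own illustration (parameter and step values are illustrative, not the course's, and numpy is used in place of the older scipy array routines):

```python
import numpy as np

def del2_5(a, dx):
    """Five-point-stencil laplacian (numpy version of the course code)."""
    del2 = np.zeros(a.shape, float)
    del2[1:-1, 1:-1] = (a[1:-1, 2:] + a[1:-1, :-2] +
                        a[2:, 1:-1] + a[:-2, 1:-1]
                        - 4. * a[1:-1, 1:-1]) / (dx * dx)
    return del2

def fhn_step(V, W, dt, dx, eps=0.2, gamma=0.8, beta=0.7):
    """One explicit-Euler update of the FitzHugh-Nagumo equations:
    dV/dt = del^2 V + (V - V^3/3 - W)/eps,  dW/dt = eps*(V - gamma*W + beta).
    Parameter values here are illustrative."""
    dVdt = del2_5(V, dx) + (V - V**3 / 3. - W) / eps
    dWdt = eps * (V - gamma * W + beta)
    return V + dt * dVdt, W + dt * dWdt
```

Repeating \verb+fhn_step+ while redrawing $V$ through the animation tool produces the propagating waves and, after a well-timed ``shock'', the spiral patterns of Figure 4.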
Optional\nextensions to the module, developed by our collaborator Otani,\nallow for simulations of spontaneous pacemakers, dead regions of\ntissue, and more complex heart-chamber geometries, by letting the\nvarious parameters of the model become spatially-varying fields\nthemselves (again implemented via numpy arrays).\n\n\subsection*{Gene regulation and the Repressilator}\n\nGene regulation describes a set of processes by which the expression\nof genes within a living cell - i.e., their transcription to messenger\nRNA and ultimately their translation to protein - is controlled.\nWhile modern genome sequencing has provided great insights into the\nconstituent parts (genes and proteins) of many organisms, much less is\nknown about how those parts are turned on and off and mixed and matched\nin different contexts: how is it that a brain cell and a hair cell, for\nexample, can derive from the same genomic blueprint but have such\ndifferent properties? \n\nThe Repressilator is a relatively simple synthetic gene regulatory\nnetwork developed by Michael Elowitz and Stan\nLeibler\cite{Elowitz2000}. Its name derives from its use of three\nrepressor proteins arranged to form a biological oscillator: these\nthree repressors act in a manner akin to the ``rock-paper-scissors''\ngame where TetR inhibits $\lambda$ cI, which in turn inhibits LacI,\nwhich in turn inhibits TetR. A snapshot in the time evolution of the\nRepressilator is shown in Figure 5.\n\n\begin{figure}\n\includegraphics[height=2in]{Repressilator47.ps}\n\caption{\label{RepressilatorFig}\nSnapshot in the stochastic time evolution of the Repressilator. The \nstate for this model consists of 15 components: 3 protein concentrations \n(back row), 3 mRNA concentrations (middle row), and 3 sets of promoter-binding\nstates (front row): each promoter can be either unbound, singly-bound, \nor doubly-bound. At this instant, TetR (red) concentrations are high, \nleading to suppression of $\lambda$ cI (yellow). 
Since $\lambda$ cI is \nlow, however, LacI (green) concentrations are allowed to grow. This will\nlead to the eventual suppression of TetR.\n}\n\end{figure}\n\nOne of the important scientific and computational features that we\nemphasize in this module is the difference between stochastic and\ndeterministic representations of chemical reaction networks. (We\nfirst introduce these concepts in a warmup exercise, Stochastic Cells,\nin which students simulate a much simpler biochemical network: one\nrepresenting the binding and unbinding of two monomer molecules $M$\nto form a single dimer $D$: $M + M \leftrightarrow D$.) We introduce\nstudents to Petri nets as a graphical notation for encoding such\nnetworks, and then have them, from the underlying Petri net\nrepresentation, (a) synthesize differential equations describing the\ndeterministic time evolution of the system, and (b) implement the\nGillespie algorithm (a form of continuous time Monte Carlo) for\nstochastic simulation.\cite{Gillespie1977} \nGillespie's ``direct method'' involves\nchoosing a particular reaction and reaction time based on the\ninstantaneous reaction rates. 
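For the dimerization warmup, the direct method fits in a few lines: draw an exponentially distributed reaction time from the total rate, then select binding or unbinding with probability proportional to its rate. The sketch below is our own minimal version (rate constants and names are illustrative, not the course's):

```python
import math
import random

def gillespie_dimer_step(M, D, k_bind=1.0, k_unbind=2.0):
    """One direct-method step for M + M <-> D.
    Returns the updated (M, D) and the elapsed time.
    Rate constants here are illustrative."""
    rates = [k_bind * M * (M - 1), k_unbind * D]   # binding, unbinding
    total = sum(rates)
    if total == 0.:
        return M, D, float('inf')   # no reaction possible
    # exponentially distributed waiting time
    dt = -math.log(1. - random.random()) / total
    # pick a reaction with probability proportional to its rate
    if random.random() * total < rates[0]:
        M, D = M - 2, D + 1    # two monomers bind into a dimer
    else:
        M, D = M + 2, D - 1    # a dimer dissociates
    return M, D, dt
```

Note that each step conserves the total monomer count $M + 2D$, a useful sanity check on any stochastic implementation.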
For the Repressilator, \nthis can be done quite compactly using\narray operations within numpy\/scipy:\n\n\\begin{verbatim}\nclass StochasticRepressilator (Repressilator):\n # ...\n def Step(self, dtmax):\n self.ComputeReactionRates()\n total_rate = sum(self.rates)\n # get exponentially distributed time\n ran_time = -scipy.log(1.-random.random())\/total_rate\n if ran_time > dtmax:\n return dtmax\n # get uniformly drawn rate in interval defined by total_rate\n ran_rate = total_rate*random.random()\n # find interval corresponding to random rate\n reac_index = len(self.rates) - sum(scipy.cumsum(self.rates) > ran_rate)\n reaction = self.reactions[reac_index]\n # execute specified reaction\n for chem, dchem in reaction.stoichiometry.items():\n chem.amount += dchem\n # return time at which reaction takes place\n return ran_time\n\\end{verbatim}\n\n\\subsection*{Other modules}\n\nOur course consists of a number of other modules which we can only\nmention in passing here. As noted, there is a suite of problems\nintroducing various aspects of chaos and bifurcations in iterated\nmaps. There is also a suite of small modules exploring properties of\nrandom walks and extremal statistics. We have two exercises examining\nconnections between statistical mechanics and computational\ncomplexity, by probing the nature of phase transitions in NP-complete\nproblems such as 3SAT. A random matrix theory module examines the\nnature of universality of eigenvalue distributions, and two other\nmodules explore the thermodynamics of large collective systems (the\nIsing model of simple magnets, and the molecular dynamics of large\nnumbers of atoms).\n\nWe continue to look for new problems to add to this collection, and for \ncollaborators interested in contributing their scientific and computational \nexpertise to this endeavor. (Please contact us if you have ideas for\ninteresting modules.) 
Our goal is to provide a hands-on introduction\nto scientific computing, and it is our hope that this course can help \nserve a number of educational objectives as part of a larger curriculum \nin computational science and engineering.\n\n\section*{Acknowledgments}\n\nWe thank our colleagues who have helped us develop computational\nmodules and have given us useful feedback: Steve Strogatz, Andy Ruina,\nNiels Otani, Bart Selman, Carla Gomes, and John Guckenheimer. We also\nthank all the students who have taken our course and have helped us\nwork the bugs out of exercises and solutions. Funding from the NSF\nIGERT program (award NSF DGE-0333366) and NSF DMR-0218475 helped\nsupport some initial development of course modules.\n\n\n\section{Introduction}\n\label{sec:intro}\n\nMultimessenger observations of neutron-star (NS) mergers have the potential to revolutionize nuclear astrophysics much in the same way as observations of the cosmic microwave background (CMB) radiation revolutionized particle astrophysics. Neutron-star merger events simultaneously emit gravitational waves (GWs) and electromagnetic (EM) signals, from gamma-rays, X-rays, optical, infrared, to radio waves, and neutrinos. The first observation of a NS merger, GW170817 in the GW spectrum, GRB 170817A in the gamma-ray spectrum, and AT~2017gfo in the electromagnetic (EM) spectrum, was made on August 17, 2017, and in the weeks thereafter~\cite{TheLIGOScientific:2017qsa,GBM:2017lvd,Monitor:2017mdv,Abbott:2018wiz}. Triggered by the Fermi and Integral telescopes~\cite{Monitor:2017mdv,Savchenko:2017ffs}, this observation provided detailed spectral and temporal features both in GWs and EM radiation. 
Theoretical efforts to interpret this data have provided insights into the production of heavy r-process elements in NS mergers~\cite{Drout:2017ijr}, and constraints on the EOS of dense matter~\cite{Annala:2017llu,Fattoyev:2017jql,Most:2018hfd,Lim:2018bkq,Tews:2018iwm}. NS mergers have the potential to provide detailed information on the properties of the merging compact stars, such as their masses and radii~\cite{Bauswein:2017vtn}, as well as on the properties of the densest baryonic matter to be observed in the universe. Future detections of NS mergers, anticipated during the next observing run of the Advanced LIGO and VIRGO detectors, could provide even stronger constraints on the EOS of strongly-interacting matter and the r-process. \n\nWe are pleased to contribute to this topical issue on ``First joint gravitational wave and electromagnetic observations: Implications for nuclear physics'', which contains several articles devoted to the theory and computing needed to improve the description of dense matter and to model neutron-star mergers - efforts that will play a key role in extracting insights from GW170817 and future detections. Here, we elaborate on earlier work in Ref.~\cite{Tews:2018iwm}, where we analyzed GW170817 constraints on the dense matter EOS, to provide additional details, discussions, and new results. \n\nOur contribution is structured as follows. In Sec.~\ref{sec:models} we describe the NS equation-of-state models employed in our analysis. In particular, we use two models: the minimal model or meta-model (MM), see Sec.~\ref{sec:minmod}, and the maximal or speed-of-sound model (CSM), see Sec.~\ref{sec:maxmod}. Both models are constrained at low densities by state-of-the-art calculations of neutron-rich matter from chiral effective field theory (EFT). We discuss these models in the context of GW170817 in great detail in Sec.~\ref{sec:results} and analyze the impact of phase transitions or future GW detections. 
Finally, we summarize our results and provide an outlook in Sec.~\ref{sec:summary}.\n\n\section{Models}\n\label{sec:models}\n\nIn this section, we discuss the dense-matter models we use in our analysis. Calculations of the EOS of neutron matter based on Hamiltonians derived from chiral EFT provide a reliable method to estimate the uncertainties associated with poorly constrained aspects of two- and many-body nuclear forces at short distances~\cite{Lynn:2015jua,Tews:2018kmu}. Chiral EFT is a systematic expansion for nuclear forces in powers of momenta, and provides an efficient way to estimate theoretical uncertainties. It is however limited to momenta up to the so-called breakdown scale, $\Lambda_b$, which signals the breakdown of the effective theory due to additional high-momentum physics, e.g., the onset of new degrees of freedom. Since $\Lambda_b$ is expected to be of the order of $\simeq 500-600$~MeV~\cite{Melendez:2017phj}, chiral EFT is not applicable at all densities encountered in neutron stars, and chiral EFT interactions have typically been used to describe neutron matter only up to saturation density, $n_{sat}$. Here, using insights obtained in Ref.~\cite{Tews:2018iwm}, we will analyze to which extent chiral EFT predictions up to 2$n_{sat}$ with conservative error estimates provide useful constraints on the nuclear equation of state, even though uncertainties grow fast with density.\n\nTo describe the EOS at higher densities, we will consider two extrapolation schemes rooted in low-density microscopic predictions and widely covering our present uncertainties at higher density. These two schemes are the minimal model or meta-model (MM), based on a smooth extrapolation of chiral EFT results, and the maximal model or speed-of-sound model (CSM), which explores the widest possible domain for the EOS and also contains more drastic behavior with density; see Ref.~\cite{Tews:2018iwm} for the first analysis of GWs with these models. 
These two models show some overlap for properties of dense neutron-star matter, as suggested by the masquerade phenomenon~\\cite{Alford:2004pf}, but also highlight differences: confronting these models with each other and with observations sheds light on the possible presence of strong phase transitions at high density, as detailed hereafter.\n\n\\subsection{Pure neutron matter from chiral EFT}\n\n\\begin{figure*}[t]\n\\begin{center}\n\\includegraphics[trim= 0.0cm 0 2.0cm 1.5cm, clip=,width=0.9\\columnwidth]{NMEOS-band.pdf}\\hspace{0.4cm}\n\\includegraphics[trim= 0.0cm 0 2.0cm 1.5cm,\nclip=,width=0.9\\columnwidth]{NMEOS-Pband.pdf}\n\\end{center}\n\\caption{\\label{fig:chiralPNM} The energy per particle and pressure of pure neutron matter as functions of baryon density up to $2n_{\\rm sat}$. We show the constraints from Ref.~\\cite{Tews:2018kmu} based on AFDMC calculations with local chiral potentials at N$^2$LO (red bands). As a comparison, we show results at LO (black dashed lines), NLO (black dashed-dotted lines), as well as calculations using phenomenological $NN$ interactions only (black dotted lines) and including also phenomenological $3N$ forces (black solid lines). We also indicate the unitary-gas bound of Ref.~\\cite{Kolomeitsev:2016sjl} (blue dashed-dotted lines) and the part of the uncertainty band that we use for our NS modeling (red dotted lines); see text for more details.}\n\\end{figure*} \n\nNeutron stars are ideal laboratories to test theories of the strong interaction at finite chemical potential: the structure of neutron stars is governed by the EOS of neutron-star matter, which relates energy density, pressure, and temperature. Additional uncertainties may come from rotation and the magnetic-field distribution in the star, but the dense-matter EOS is the key input. 
Since neutron stars explore densities from a few grams per cubic centimeter up to 10 times the nuclear saturation density, $n_{\\rm sat}=0.16 \\,\\mathrm{fm}^{-3} = 2.7\\!\\cdot\\! 10^{14} \\rm{g\\, cm}^{-3}$, the knowledge of the EOS is required for densities covering several orders of magnitude. Though young proto-neutron stars or neutron-star remnants also explore the EOS at high temperatures up to several tens of MeV, older neutron stars can typically be considered as cold objects at $T=0$. This is especially true for the two neutron stars of a binary during the inspiral phase of a merger, whose properties can be analyzed from the premerger GW signal. \n\nWhile the EOS of the neutron-star crust, reaching up to $n_{\\rm sat}\/2$, is rather well constrained, the uncertainty of the EOS increases rapidly with density and the composition of the inner core of NS is still unknown. Nevertheless, in the density range from $n_{\\rm sat}\/2$ up to about $2n_{\\rm sat}$, the neutron-star EOS can be constrained by state-of-the-art nuclear-theory models. The starting point for these constraints are calculations of pure neutron matter (PNM). PNM is an idealized, infinite system consisting solely of neutrons, but it is much easier to compute than systems that also contain protons. The reason is that certain parts of the nuclear interaction, e.g., tensor interactions, are weaker or do not contribute at all among neutrons. In contrast to symmetric nuclear matter, PNM is also stable with respect to density fluctuations below $n_\\mathrm{sat}$, and uniform matter remains the true ground state of PNM at all densities, which simplifies its calculation.\n\nTo reliably describe neutron matter, one needs precise and accurate quantum many-body methods in combination with a reliable model for the nuclear interaction. Neutron matter has been extensively studied in the last decade, using a multitude of nuclear interactions and advanced \\textsl{ab initio} many-body methods. 
Among these are, e.g., many-body perturbation theory~\\cite{Hebeler:2009iv,Drischler:2016djf,Holt:2016pjb}, the coupled-cluster method~\\cite{Hagen:2013yba}, quantum Monte Carlo methods~\\cite{Gandolfi:2011xu}, or the self-consistent Green's function method~\\cite{Carbone:2014mja}. A comparison of these different studies, see e.g., Refs.~\\cite{Gandolfi:2015jma,Hebeler:2015hla}, shows that neutron matter is rather well constrained by these multiple \\textsl{ab initio} approaches using diverse nuclear Hamiltonians. In this paper, we will use calculations of neutron matter obtained with the auxiliary-field diffusion Monte Carlo (AFDMC) method~\\cite{Carlson:2014vla} together with modern nuclear Hamiltonians from chiral EFT. \n\nQuantum Monte Carlo methods are among the most precise many-body methods for strongly interacting systems~\\cite{Carlson:2014vla}. They provide the ground state of a many-body system, governed by a non-relativistic nuclear Hamiltonian defining the Schr\\\"odinger equation, by evolving a trial wave function $\\Psi_T$ in imaginary time, \n\\begin{equation}\n\\Psi_{GS}=\\lim_{\\tau \\to \\infty} e^{- \\mathcal{H}\\tau}\\Psi_T\\,,\n\\end{equation}\nwhere $\\Psi_T$ is constructed so that it has a non-vanishing overlap with the ground state $\\Psi_{GS}$. Expanding $\\Psi_T$ in eigenfunctions of the Hamiltonian, one can easily see that contributions of excited states decay with time, and only the ground-state component of the trial wave function remains. Quantum Monte Carlo methods have been used to successfully describe nuclei up to \\isotope[16]{O}~\\cite{Carlson:2014vla,Piarulli:2017dwd,Lonardoni:2017hgs} and neutron matter~\\cite{Gandolfi:2011xu,Lynn:2015jua}. 
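The decay of excited-state contributions under imaginary-time evolution can be illustrated with a toy two-level example; this is a schematic sketch, not an actual QMC implementation, and the energies and trial-state components are invented numbers chosen only to show the mechanism:

```python
import math

# Toy imaginary-time projection: expanding the trial state in energy
# eigenstates, each component is damped by exp(-E_i * tau), so the
# excited-state admixture dies out relative to the ground state.
E0, E1 = -1.0, 2.0     # eigenvalues of a toy two-level "Hamiltonian"
c0, c1 = 0.3, 0.7      # trial-state components; c0 != 0 is required
tau = 20.0             # imaginary time

w0 = c0 * math.exp(-E0 * tau)       # projected ground-state component
w1 = c1 * math.exp(-E1 * tau)       # projected excited-state component
overlap = w0 / math.hypot(w0, w1)   # tends to 1 as tau grows
```

The excited-state admixture is suppressed by $e^{-(E_1-E_0)\tau}$; in AFDMC the same projection is carried out stochastically for the full many-body problem.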
At very low densities, where neutron matter is close to the unitary limit and interactions are dominated by large-scattering-length physics, these methods~\\cite{Carlson:2008zza} have been successfully confronted with experimental measurements of cold atomic gases~\\cite{Nascimbene2010,Navon2010,Zwierlein:2015}. Given its great success in describing strongly interacting matter and nuclei~\\cite{Gandolfi:2011xu,Lonardoni:2014bwa,Lynn:2015jua,Gandolfi:2016bth,Lonardoni:2017hgs}, we employ the AFDMC method in this work to determine PNM properties. For more details on quantum Monte Carlo methods we refer the reader to Ref.~\\cite{Carlson:2014vla}. \n\nOn the interaction side, chiral EFT~\\cite{Epelbaum2009,Machleidt:2011zz} is a modern theory for nuclear forces that is consistent with the symmetries of Quantum Chromodynamics and systematically describes the nucleon-nucleon interaction in terms of explicitly resolved longer-range pion exchanges as well as short-range nucleon contact interactions. Chiral EFT is based on a momentum expansion in terms of $p\/\\Lambda_b$, where $p$ is the typical momentum of the nuclear system at hand, and $\\Lambda_b$ is the breakdown scale already discussed. The short-range interaction terms parametrize all unresolved and unknown high-energy physics beyond the breakdown scale, and depend on a set of low-energy couplings (LECs), which are typically fitted to nucleon-nucleon ($NN$) scattering data and properties of light nuclei. Chiral EFT describes not only $NN$ interactions but also consistent three-body ($3N$) and higher many-body forces. It has been successfully applied to calculate properties of ground and excited states of nuclei, nuclear matter, as well as electroweak processes; see, e.g., Ref.~\\cite{Hebeler:2015hla} for a review. 
Most importantly, the systematic chiral EFT expansion enables the estimation of theoretical uncertainties for these physical systems.\n\nIn our analysis, we use local chiral EFT interactions that have been constructed specifically for use in QMC methods in Refs.~\\cite{Lynn:2015jua,Gezerlis:2013ipa,Gezerlis:2014zia,Tews:2015ufa}. These interactions have been successfully tested in light- to medium-mass nuclei and in n-$\\alpha$ scattering~\\cite{Lynn:2015jua,Lonardoni:2017hgs} and agree with our current knowledge of the empirical parameters of nuclear matter~\\cite{Kolomeitsev:2016sjl,Margueron:2017eqc}. In Ref.~\\cite{Tews:2018kmu}, these interactions have been used to study neutron matter up to $2n_{\\rm sat}$ with theoretical uncertainty estimates using the AFDMC method. \nFor more details on QMC calculations with local chiral interactions we refer the reader to Ref.~\\cite{Lynn:2019rdt}. \n\nIn particular, we use local chiral interactions at a cutoff scale $R_0=1.0$ fm together with their systematic uncertainty estimates. In Fig.~\\ref{fig:chiralPNM} we show the results for the energy per particle and pressure of neutron matter at leading order (LO), next-to-leading order (NLO), and at next-to-next-to-leading order (N$^2$LO) with its uncertainty band for densities ranging from 0.04~fm$^{-3}$ up to $2n_{\\rm sat}$. We find that the uncertainty bands grow rapidly with density and are quite sizable at $2 n_{\\rm sat}$. In addition to the results for chiral interactions, we also show in Fig.~\\ref{fig:chiralPNM}, for comparison, AFDMC results employing the phenomenological AV8' $NN$ and AV8' $NN$ plus UIX $3N$ interactions. It is interesting to note that the AV8' and NLO $NN$ interactions agree very well with each other, which highlights the fact that many-body forces are a considerable source of uncertainty. 
Finally, we also compare all calculations with the unitary-gas limit of Ref.~\\cite{Kolomeitsev:2016sjl}.\n\n\\subsection{Discussion of uncertainties}\n\nThe uncertainty bands shown in Fig.~\\ref{fig:chiralPNM} include the following sources of uncertainty: i) the truncation of the nuclear Hamiltonian within the chiral expansion, ii) the regularization scheme and scale, which are needed to implement nuclear Hamiltonians in many-body methods, iii) the uncertainties in the determination of low-energy couplings from data, and iv) the many-body uncertainty that originates in approximations made when solving the Schr\\\"odinger equation for the nuclear many-body system. The first three sources, which originate in the nuclear Hamiltonian, dominate over the many-body uncertainty from QMC methods. Among these three, the truncation uncertainty is the dominant source, and we discuss it in the following.\n\nThe truncation uncertainty can be expressed in the following way. \nIntroducing the dimensionless expansion parameter $Q=p\/\\Lambda_b$ and following Ref.~\\cite{Furnstahl:2015rha}, under the prerequisite that chiral EFT is a converging theory, one can define the order-by-order contributions to an observable $X$ using the following infinite summation,\n\\begin{equation}\\label{eq:chiralExp}\nX=X_0\\sum_{i=0}^{\\infty} c_i Q^{i}\\,.\n\\end{equation}\nHere, $X_0$ sets the natural scale expected for the observable $X$, e.g., the leading-order result, $X_0=X_{\\rm{LO}}$ ($c_0=1$), and the $c_{i\\ge 1}$ denote the expansion coefficients. In calculations of nuclear systems, for practical reasons this sum has to be truncated at a certain order $n$, inducing the so-called truncation uncertainty. 
This uncertainty is intrinsic to \\emph{all} nuclear Hamiltonians but can be specified for chiral EFT Hamiltonians by \n\\begin{equation}\n\\Delta X=X- X_0\\sum_{i=0}^{n}c_i Q^{i}\\,.\n\\end{equation}\n\nIt has been shown in Ref.~\\cite{Furnstahl:2015rha} that for practical purposes an estimate of the magnitude of the first truncated term in Eq.~\\eqref{eq:chiralExp}, given by $i=n+1$, is a sufficient uncertainty estimate. To obtain this estimate, both the size of the unknown expansion coefficient $c_{n+1}$ and of the expansion parameter $Q$ are required. A conservative choice for the coefficient $c_{n+1}$ is the maximum of all previously found coefficients, \n\\begin{equation}\nc_{n+1}=\\max_{i=0}^n{c_i}\\,,\n\\end{equation} \nwhile $Q$ has to be estimated from the typical momentum scale for the system at hand. This uncertainty prescription is similar to the one presented by Epelbaum, Krebs, and Mei{\\ss}ner (EKM)~\\cite{Epelbaum:2014efa}, and the truncation uncertainty, e.g., at N$^2$LO, can be obtained from an order-by-order calculation as \n\\begin{align}\n\\Delta X^{\\nxlo{2}}=\\max &\n\\left(\\vphantom{X^{\\nxlo{2}}}Q^{4} \\left|X^{\\nxlo{0}}-X^{\\rm free}\\right|,Q^2 \\left|X^{\\nxlo{1}}-X^{\\nxlo{0}}\\right|,\\right. \\nonumber \\\\\n&\\quad \\left. Q\\left|X^{\\nxlo{2}}-X^{\\nxlo{1}}\\right|\n\\right)\\nonumber\\\\\n&= Q^4 X_0 \\max_{i=0}^n{c_i}\\,. \\label{eq:uncertainty}\n\\end{align}\nWe have used this uncertainty estimate to compute the truncation uncertainty, using $Q=\\sqrt{3\/5}k_F\/\\Lambda_b$, with the Fermi momentum $k_F$ and $\\Lambda_b=500 \\,\\mathrm{MeV}$. 
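For illustration, the prescription of Eq.~(\ref{eq:uncertainty}) is straightforward to evaluate numerically. In the sketch below, the order-by-order energies per particle are invented placeholder values, not results of the calculations discussed here:

```python
import math

def truncation_uncertainty(X_free, X_LO, X_NLO, X_N2LO, kF, Lambda_b=500.0):
    """EKM-style truncation uncertainty at N2LO, as in the text.

    kF and Lambda_b are in MeV; the X's are order-by-order results for
    the observable.  The expansion parameter Q = sqrt(3/5) kF / Lambda_b
    is the average momentum of a free Fermi gas over the breakdown scale.
    """
    Q = math.sqrt(3.0 / 5.0) * kF / Lambda_b
    return max(Q**4 * abs(X_LO - X_free),
               Q**2 * abs(X_NLO - X_LO),
               Q * abs(X_N2LO - X_NLO))

# Placeholder order-by-order energies per particle (MeV) at kF = 330 MeV:
dX = truncation_uncertainty(22.0, 17.0, 14.5, 14.0, kF=330.0)
```

In this example the NLO-to-LO shift dominates, so the band is set by the $Q^2$ term.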
\n\n\\begin{table*}[t]\n\\centering\n\\setlength{\\tabcolsep}{10pt}\n\\renewcommand{\\arraystretch}{1.2}\n\\begin{tabular}{cccccccccccc}\n\\hline\n$P_{\\alpha}$ & $E_{sat}$ & $E_{sym}$ & $n_{sat}$ & $L_{sym}$ & $K_{sat}$ & $K_{sym}$ & $Q_{sat}$ & $Q_{sym}$ & $Z_{sat}$ & $Z_{sym}$ & $b$\\\\\n & MeV & MeV & fm$^{-3}$ & MeV & MeV & MeV & MeV & MeV & MeV & MeV & \\\\\n\\hline\nMax & -15 & 38 & 0.17 & 90 & 270 & 200 & 1000 & 2000 & 3000 & 3000 & 14 \\\\\nMin & -17 & 26 & 0.15 & 20 & 190 & -400 & -1000 & -2000 & -3000 & -3000 &1 \\\\\n\\hline\n\\end{tabular}\n\\caption{Empirical parameters and their domain of variation entering into the definition of the MM~(\\ref{eq:MM:energy}). The parameters $\\kappa_{sat}$ and $\\kappa_{sym}$ are fixed such that $m_{sat}^*\/m=0.75$ in symmetric matter and $m_n^*\/m-m_p^*\/m=-0.1$ in neutron matter.}\n\\label{tab:epbound}\n\\end{table*}\n\nThe total uncertainty bands in Fig.~\\ref{fig:chiralPNM} additionally include the other three sources of uncertainty. The regularization scheme dependence has been explored by explicitly including regulator artifacts for local regulators. Specifically, in Fig.~\\ref{fig:chiralPNM}, the neutron-matter uncertainty bands include three different local chiral Hamiltonians which explore short-range $3N$ regulator artifacts; see Ref.~\\cite{Lynn:2015jua} for details on the Hamiltonians and Ref.~\\cite{Huth:2017wzw} for details on the regulator artifacts. These two sources of uncertainty dominate the total uncertainty band, while the many-body uncertainty is negligible.\n\nTo estimate the convergence of the chiral expansion at different densities, the series of expansion coefficients of Eq.~\\eqref{eq:chiralExp} can provide insight. In Ref.~\\cite{Tews:2018kmu}, we have studied the convergence of the chiral series in pure neutron matter and found it to be reasonable up to a density of $2 n_{\\rm sat}$. 
Beyond that, we expect the chiral expansion to break down even though the expansion parameter only increases by approximately 25\\% from $n_{\\rm sat}$ to $2 n_{\\rm sat}$. Therefore, we restrict the chiral EFT input to densities up to $2 n_{\\rm sat}$. In addition, we exclude one chiral Hamiltonian from further consideration because its regulator artifacts lead to a spurious and unphysical attractive $3N$ contribution in neutron matter, as discussed in Ref.~\\cite{Tews:2018kmu}. This Hamiltonian represents the lower, soft part of the uncertainty band and is also in conflict with the unitary-gas bound of Ref.~\\cite{Kolomeitsev:2016sjl}, shown in Fig.~\\ref{fig:chiralPNM} as a blue dashed-dotted line. Excluding this Hamiltonian changes the lower bound of the uncertainty band to the red dotted line in Fig.~\\ref{fig:chiralPNM}, in good agreement with the unitary-gas constraint.\n\nIn the following, we use this chiral EFT band up to a density $n_{\\text{tr}}$ to constrain two different models for the high-density equation of state. By varying $n_{\\text{tr}}$ from $n_{\\text{sat}}$ to $2n_{\\text{sat}}$, we will show that, despite the rapid increase of the uncertainty of the neutron-matter EOS with density, chiral EFT constraints remain extremely useful up to $2 n_{\\rm sat}$. \n\n\\subsection{The minimal model}\n\\label{sec:minmod}\n\nThe first model that we consider in this analysis, the minimal model or meta-model (MM), assumes the EOS to be smooth enough to be describable in terms of a density expansion about $n_{sat}$. Here, we briefly describe the MM; see also Refs.~\\cite{Margueron:2017eqc,Margueron:2017lup} for more details. 
\n\nThe MM is described in terms of the empirical parameters of nuclear matter, which are defined as the Taylor coefficients of the density expansion of the energy per particle of symmetric nuclear matter $e_{sat}(n)$ and the symmetry energy $s_{sym}(n)$, \n\\begin{align}\ne_{sat}(n) &= E_{\\text{sat}} + \\frac 1 2 K_{\\text{sat}} x^2 + \\frac 1 6 Q_{\\text{sat}} x^3 + \\frac 1 {24} Z_{\\text{sat}} x^4 + ... \\label{eq:esat}\\\\\ns_{sym}(n) &= E_{\\text{sym}} + L_{\\text{sym}} x+ \\frac{1}{2} K_{\\text{sym}} x^2 + \\frac{1}{6} Q_{\\text{sym}} x^3 \\nonumber \\\\ \n& +\\frac{1}{24} Z_{\\text{sym}} x^4 + ... \\,, \\label{eq:esym}\n\\end{align}\nwhere the expansion parameter is defined as $x=(n-n_{\\text{sat}})\/(3n_{\\text{sat}})$, $n=n_n+n_p$ is the baryon density, and $n_{n\/p}$ are the neutron and proton densities.\nA good representation of the energy per particle around $n_{sat}$ and for small isospin asymmetries $\\delta=(n_n-n_p)\/n$ can be obtained from the quadratic approximation\n\\begin{equation}\ne(n,\\delta)=e_{sat}(n)+s_{sym}(n)\\, \\delta^2\\, .\n\\end{equation}\nThe lowest-order empirical parameters can be extracted from nuclear experiments~\\cite{Margueron:2017eqc}, but typically carry uncertainties. 
The symmetry-energy parameters in particular are of great interest to the nuclear-physics community, and considerable effort is invested in a better determination of their values.\n\nThe MM constructs the energy per nucleon as\n\\begin{eqnarray}\ne^N(n,\\delta)=t^{FG*}(n,\\delta)+v^N(n,\\delta),\n\\label{eq:MM:energy}\n\\end{eqnarray}\nwhere the kinetic energy is expressed as \n\\begin{eqnarray}\nt^{FG^*}(n,\\delta)&=&\\frac{t_{sat}^{FG}}{2}\\left(\\frac{n}{n_{sat}}\\right)^{2\/3} \n\\bigg[ \\left( 1+\\kappa_{sat}\\frac{n}{n_{sat}} \\right) f_1(\\delta) \\nonumber \\\\\n&& \\hspace{2.5cm} + \\kappa_{sym}\\frac{n}{n_{sat}}f_2(\\delta)\\bigg] ,\n\\label{eq:MM:kin}\n\\end{eqnarray}\nand the functions $f_1$ and $f_2$ are defined as\n\\begin{eqnarray}\nf_1(\\delta) &=& (1+\\delta)^{5\/3}+(1-\\delta)^{5\/3} \\, , \\\\\nf_2(\\delta) &=& \\delta \\left( (1+\\delta)^{5\/3}-(1-\\delta)^{5\/3} \\right) .\n\\end{eqnarray}\nThe parameters $\\kappa_{sat}$ and $\\kappa_{sym}$ control the density and asymmetry dependence of the Landau effective mass ($q=n$ or $p$),\n\\begin{equation}\n\\frac{m}{m^*_q(n,\\delta)} = 1 + \\left( \\kappa_{sat} + \\tau_3 \\kappa_{sym} \\delta \\right) \\frac{n}{n_{sat}} ,\n\\label{eq:effmass}\n\\end{equation}\nwhere $\\tau_3=1$ for neutrons and $-1$ for protons.\nIn the limit $\\kappa_{sat}=\\kappa_{sym}=0$, Eq.~(\\ref{eq:MM:kin}) reduces to the free Fermi-gas energy.\n\nThe potential energy in Eq.~(\\ref{eq:MM:energy}) is expressed as a series expansion in the parameter $x$ and is quadratic in the asymmetry parameter $\\delta$,\n\\begin{eqnarray}\nv^N(n,\\delta)=\\sum_{\\alpha\\geq0}^N \\frac{1}{\\alpha!}( v_{\\alpha}^{sat}+ v_{\\alpha}^{sym} \\delta^2) x^\\alpha u^N_{\\alpha}(x) ,\n\\label{eq:MM:pot}\n\\end{eqnarray}\nwhere the function $u^N_{\\alpha}(x)=1-(-3x)^{N+1-\\alpha}\\exp(-b n\/n_{sat})$ ensures the limit $e^N(n=0,\\delta)=0$.\nThe parameter $b$ is taken large enough for the function $u^N_{\\alpha}$ to fall sufficiently fast with density and to 
not contribute at densities above $n_{sat}$. A typical value is $b=10\\ln2\\approx 6.93$, such that the exponential function equals $1\/2$ at $n=n_{sat}\/10$.\nThe MM parameters $v_{\\alpha}^{sat}$ and $v_{\\alpha}^{sym}$ are simply expressed in terms of the empirical parameters. The MM as expressed in Eqs.~(\\ref{eq:MM:energy}), (\\ref{eq:MM:kin}), and (\\ref{eq:MM:pot}) coincides with the meta-model ELFc described in Ref.~\\cite{Margueron:2017eqc}, where detailed relations can be found.\nTo obtain the neutron-star EOS, we extend our models to $\\beta$-equilibrium and include a crust as described in Ref.~\\cite{Margueron:2017lup}. By varying the empirical parameters within their known or estimated uncertainties, it was shown that the MM can reproduce many existing neutron-star EOS that are based on the assumption that a nuclear description is valid at all densities probed in neutron stars. Therefore, this model is a reliable representation of EOS without exotic phases of matter separated from the nucleonic phase through strong phase transitions.\n\nIn the following, the parameter space of the MM is explored within a Markov-chain Monte-Carlo algorithm, where the MM parameters are allowed to evolve freely inside the boundaries given in Table~\\ref{tab:epbound}. The resulting models satisfy the chiral EFT predictions in neutron matter for the energy per particle and the pressure up to $n_{\\rm tr}$, causality, stability, positivity of the symmetry energy ($s_{sym}(n)>0$), and also reach the maximum observed neutron-star mass $M_{\\rm max}^{\\rm obs}$; see the discussion in Sec.~\\ref{sec:MMandCSM}. 
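As a minimal numerical sketch, the kinetic part of the MM, Eqs.~(\ref{eq:MM:kin}) and (\ref{eq:effmass}), can be coded directly. The value $t_{sat}^{FG}=22.1$~MeV and the effective-mass parameters below are illustrative assumptions: $\kappa_{sat}$ is chosen to reproduce $m_{sat}^*/m=0.75$, and $\kappa_{sym}$ is left at zero as a placeholder rather than fixed by the neutron-matter effective-mass splitting:

```python
def f1(d):
    return (1 + d)**(5/3) + (1 - d)**(5/3)

def f2(d):
    return d * ((1 + d)**(5/3) - (1 - d)**(5/3))

def t_FG_star(n, d, n_sat=0.16, t_sat_FG=22.1,
              kappa_sat=1/0.75 - 1, kappa_sym=0.0):
    """Kinetic energy per nucleon of the MM (MeV).

    n: baryon density in fm^-3; d: isospin asymmetry delta.
    kappa_sat = m/m*_sat - 1 reproduces m*_sat/m = 0.75 at n_sat.
    """
    u = n / n_sat
    return 0.5 * t_sat_FG * u**(2/3) * ((1 + kappa_sat * u) * f1(d)
                                        + kappa_sym * u * f2(d))

# Check of the free-Fermi-gas limit: kappa_sat = kappa_sym = 0 and
# d = 0 recover t_sat_FG at n = n_sat, since f1(0) = 2.
e_FG = t_FG_star(0.16, 0.0, kappa_sat=0.0)
```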
The maximum density associated with each EOS within the MM is given either by the breaking of causality, stability, or positivity of the symmetry energy, or by the end point of the stable neutron-star branch.\n\n\\begin{figure*}[t]\n\\begin{center}\n\\includegraphics[trim= 0.0cm 0 0 0, clip=,width=0.65\\columnwidth]{EpsPcomp.pdf}\n\\includegraphics[trim= 0.0cm 0 0 0, clip=,width=0.65\\columnwidth]{EpsPcomp800.pdf}\n\\includegraphics[trim= 0.0cm 0 0 0, clip=,width=0.65\\columnwidth]{EpsPcomp_032.pdf}\n\\caption{\\label{fig:EpsPcomp}\nComparison of the allowed EOS envelopes for the MM (black bands) and the CSM (red bands). We show three cases: a) the most general case, where $n_{\\text{tr}}=n_{\\text{sat}}$ and only $M_{\\rm{max}}\\geq 1.9 M_{\\odot}$ is enforced, b) for $n_{\\text{tr}}=n_{\\text{sat}}$ when enforcing $70\\leq \\tilde{\\Lambda} \\leq 720$, and c) for $n_{\\text{tr}}=2 n_{\\text{sat}}$. When additionally enforcing $R_{1.6}\\geq 10.68$ km~\\cite{Bauswein:2017vtn}, the hatched regions are excluded.\n}\n\\end{center}\n\\end{figure*}\n\n\\subsection{The maximal model}\n\\label{sec:maxmod}\n\nThe second model that we consider in this analysis, the maximal model (CSM), is based on an extension of the speed of sound in neutron-star matter. Starting from the pure neutron-matter calculations, we obtain the neutron-star EOS up to $n_{\\rm tr}$ by constructing a crust as described in Ref.~\\cite{Tews:2016ofv} and extending the neutron-matter results to $\\beta$ equilibrium above the crust-core transition. Having constructed the EOS up to $n_{\\rm tr}$, we compute the speed of sound, \n\\begin{equation}\nc_S^2 = \\frac{\\partial p(\\epsilon)}{\\partial \\epsilon}\\,,\n\\end{equation}\nwhere $p$ is the pressure and $\\epsilon$ is the energy density. 
Above $n_{\\rm tr}$, we parametrize the speed of sound in a very general way: we randomly sample a set of points $c_S^2(n)$, where the values for $c_S$ have to be positive and are limited by the speed of light (stability and causality), and interpolate between the different sampling points using linear segments. The individual points are randomly distributed between $n_{\\rm tr}$ and $12 n_{\\rm sat}$. From the resulting speed-of-sound curve, we reconstruct the EOS step by step starting at $n_{\\text{tr}}$, where $\\epsilon(n_{\\text{tr}})$, \n$p(n_{\\text{tr}})$, and $\\epsilon'(n_{\\text{tr}})$ are known:\n\\begin{align}\nn_{i+1}&= n_i + \\Delta n \\\\\n\\epsilon_{i+1} &= \\epsilon_i +\\Delta\\epsilon= \\epsilon_i + \\Delta n \\cdot \\left(\\frac{\\epsilon_i+p_i}{n_i}\\right) \\\\\np_{i+1} &= p_i + c_S^2 (n_i) \\cdot \\Delta \\epsilon\\,,\n\\end{align}\nwhere $i=0$ defines the transition density $n_{\\text{tr}}$. In the second line we have used the thermodynamic relation $p=n \\partial \\epsilon\/\\partial n -\\epsilon$, which is valid at zero temperature. \n\nIn this way, we iteratively obtain the high-density EOS. We have explored extensions for a varying number of $c_S^2(n)$ points, i.e., for 5--10 points, and found that the differences between these extensions are marginal. We therefore choose 6 sampling points. For each sampled EOS, we generate a second version which includes a strong first-order phase transition with a random onset density and width, to explicitly explore such extreme density behavior.\n\nThe CSM for neutron-star applications was introduced in Ref.~\\cite{Tews:2018kmu}, and represents an extension of the model of Ref.~\\cite{Alford:2013aca}. A similar model was used in Ref.~\\cite{Greif:2018njt}. 
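The iteration above is straightforward to implement. The sketch below uses a constant, causal speed of sound and invented starting values at $n_{\text{tr}}$; the numbers are placeholders for illustration, not values taken from the chiral EFT band:

```python
def reconstruct_eos(n_tr, eps_tr, p_tr, cs2_of_n, n_max, dn=1e-3):
    """Reconstruct the EOS above n_tr from a speed-of-sound curve.

    Implements the iteration in the text: deps = dn*(eps + p)/n, from
    the T = 0 relation p = n d(eps)/dn - eps, and dp = cs2(n)*deps.
    Units: n in fm^-3, eps and p in MeV fm^-3.
    """
    n, eps, p = n_tr, eps_tr, p_tr
    table = [(n, eps, p)]
    while n < n_max:
        deps = dn * (eps + p) / n   # energy-density increment
        p += cs2_of_n(n) * deps     # pressure increment, dp = c_S^2 * deps
        eps += deps
        n += dn
        table.append((n, eps, p))
    return table

# Usage sketch: constant c_S^2 = 1/3 (causal) from invented starting
# values eps = 310, p = 20 MeV fm^-3 at n_tr = 0.32 fm^-3 up to 1 fm^-3.
eos = reconstruct_eos(0.32, 310.0, 20.0, lambda n: 1.0 / 3.0, n_max=1.0)
```

In the full CSM, `cs2_of_n` would instead interpolate linearly between the randomly sampled points $c_S^2(n)$.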
However, in contrast to Ref.~\\cite{Tews:2018kmu}, we have extended this model to explore the complete allowed parameter space for the speed of sound by abandoning the specific functional form of Ref.~\\cite{Tews:2018kmu} in favor of an extension using linear segments. This more conservative choice leads to slightly larger uncertainty bands, but allows us to make more definitive statements about neutron-star properties. The resulting EOS parametrizations represent possible neutron-star EOS and may include drastic density dependences, e.g., strong phase transitions which lead to intervals with a drastic softening or stiffening of the EOS. \nThis represents a stark contrast to the MM, which does not include such behavior, and might give insights into the constituents of neutron-star matter at high densities. The predictions of the CSM represent the widest possible domain for the respective neutron-star observables consistent with the low-density input from chiral EFT. If observations outside of this domain were to be made, this would imply a breakdown of nuclear EFTs at densities below the corresponding $n_{\\rm tr}$. \n\nSince the CSM represents very general EOSs governed only by the density dependence of the speed of sound, it does not allow any statements about possible degrees of freedom. In this sense, it is very similar to extensions using piecewise polytropes, which were introduced in Ref.~\\cite{Read:2008iy} and have been used extensively to determine neutron-star properties; see, e.g., Refs.~\\cite{Hebeler:2013nza,Raithel:2016bux,Annala:2017llu}. However, in contrast to polytropic extensions, in the CSM the speed of sound is continuous except when first-order phase transitions are explicitly accounted for. 
Discontinuities in the speed of sound affect the study of tidal polarizabilities, where $c_S^{-1}$ enters, by introducing features whose source is solely the choice of parametrization.\n\n\\subsection{Comparison of MM and CSM}\n\\label{sec:MMandCSM}\n\n\\begin{figure*}[t]\n\\begin{center}\n\\includegraphics[trim= 0.0cm 0 0 0, clip=,width=0.65\\columnwidth]{MRcomp.pdf}\n\\includegraphics[trim= 0.0cm 0 0 0, clip=,width=0.65\\columnwidth]{MRcomp_800.pdf}\n\\includegraphics[trim= 0.0cm 0 0 0, clip=,width=0.65\\columnwidth]{MRcomp_032.pdf}\n\\caption{\\label{fig:MRcomp}\nComparison of the allowed MR envelopes for the MM (black bands) and the CSM (red bands). We show three cases: a) the most general case, where $n_{\\text{tr}}=n_{\\text{sat}}$ and only $M_{\\rm{max}}\\geq 1.9 M_{\\odot}$ is enforced, b) for $n_{\\text{tr}}=n_{\\text{sat}}$ when enforcing $70\\leq \\tilde{\\Lambda} \\leq 720$, and c) for $n_{\\text{tr}}=2 n_{\\text{sat}}$. When additionally enforcing $R_{1.6}\\geq 10.68$ km~\\cite{Bauswein:2017vtn}, the hatched regions are excluded.\n}\n\\end{center}\n\\end{figure*} \n\nFor both the MM and CSM we generate thousands of EOSs that are consistent with low-density constraints from chiral EFT. In addition, the observations of heavy two-solar-mass pulsars in recent years~\\cite{Demorest2010,Antoniadis2013,Fonseca2016} place important additional constraints on these EOSs, which we enforce by requiring $M_{\\text{max}}>M_{\\rm max}^{\\rm obs}$ for all our EOSs. \nTo be conservative, we choose as the limit for $M_{\\rm max}^{\\rm obs}$ the central value of the maximum observed mass minus twice the error bar of the observation. For the two heaviest neutron stars observed up to now~\\cite{Demorest2010,Antoniadis2013,Fonseca2016}, this gives $M_{\\rm max}^{\\rm obs}\\approx 1.9 M_\\odot$. 
\n\nWe now compare the predictions of both the MM (black bands with solid contour) and the CSM (red bands with dotted contour) for the EOS of neutron-star matter, see Fig.~\\ref{fig:EpsPcomp}, and for the mass-radius (MR) relation, see Fig.~\\ref{fig:MRcomp}. In the respective figures, we show the EOS and MR envelopes for $n_{\\rm tr}=n_{\\rm sat}$ [panels (a)] and for $n_{\\rm tr}=2 n_{\\rm sat}$ [panels (c)], where ragged edges are due to the limited number of models. In all cases, the MM is a subset of the CSM, as expected. Also, the two models, which treat the neutron-star crust with different prescriptions, show excellent agreement at low densities. For $n_{\\rm tr}=n_{\\rm sat}$, the MM and CSM EOSs agree very well up to $n_{\\rm tr}$, while for $n_{\\rm tr}=2 n_{\\rm sat}$ the MM only samples a subset of the chiral EFT input, because the $M_{\\rm max}^{\\rm obs}$ constraint forces the EOS to be sufficiently stiff, which excludes the softest low-density neutron-matter EOS. This is a consequence of the smooth density expansion around $n_{\\rm sat}$ in the MM. In the CSM, instead, a non-smooth stiffening of these softest EOS at higher densities can help stabilize heavy neutron stars, which is why the complete low-density band from chiral EFT is sampled. We also find that going from $n_{\\rm tr}=n_{\\rm sat}$ to $n_{\\rm tr}=2 n_{\\rm sat}$ considerably reduces the EOS uncertainty for the CSM, while the MM band also becomes slightly narrower. These results show that even though the theoretical uncertainties in the neutron-matter EOS increase rapidly in the density range $1-2 n_{\\text{sat}}$, the additional information provided allows a substantial reduction of the uncertainty in the CSM EOS: essentially, the chiral EFT constraint excludes the possibility of phase transitions in the region from 1 to $2n_{sat}$. 
The impact of phase transitions above $2n_{sat}$ on the EOS is very much reduced compared to the case where they are allowed to appear at lower densities, because we impose the $M_{\\rm max}^{\\rm obs}$ constraint. A large domain of soft CSM EOSs is thus excluded. The stiff MM and CSM EOS are very close up to $2n_{sat}$, as expected.\n\nThese observations are also reflected in the MR predictions of both models. For $n_{\\rm tr}=n_{\\rm sat}$ [panel (a)], the CSM (MM) leads to a radius range for a typical $1.4 M_{\\odot}$ neutron star of $8.4-15.2$ km ($10.9-13.5$ km). This range reduces dramatically for $n_{\\rm tr}=2 n_{\\rm sat}$ [panel (c)], where we find $8.7-12.6$ km ($10.9-12.0$ km) for the CSM (MM). \n\nIn the latter case, the radius uncertainty for a typical neutron star is only about 1 km in the MM, compatible with the expected uncertainty of the NICER mission~\\cite{NICER1}. This allows for a tight confrontation between the MM and the NICER results. Should such an observation be made in the near future, we will be able to better constrain dense-matter phase transitions. In contrast, the CSM, which includes EOS with sudden softening or stiffening at higher densities, dramatically extends the allowed envelopes for the EOS and the MR relation as compared with the MM. These differences in the predictions of the MM and CSM can be used to identify regions of the neutron-star observables for which statements about the constituents of matter might be possible. For example, the observation of a typical neutron star with a radius of 10 km would imply the existence of a softening phase transition, which would hint at new phases of matter appearing in the core of neutron stars. 
Instead, in regions where both the MM and CSM agree, the masquerade problem does not allow statements about the constituents of neutron-star matter at high densities~\\cite{Alford:2004pf}.\n\nIn Fig.~\\ref{fig:MRcomp}, the maximum mass for $n_\\mathrm{tr}=n_\\mathrm{sat}$ is almost $4M_\\odot$, while it is only $2.9M_\\odot$ for $n_\\mathrm{tr}=2n_\\mathrm{sat}$.\nIt is interesting to compare these findings with previous predictions for the maximum mass of neutron stars. Connecting a nucleonic EOS to the stiffest possible EOS at $n_\\mathrm{tr}=2n_\\mathrm{sat}$, the maximum mass was predicted to be $2.9 M_\\odot$~\\cite{Rhoades1974}, as in our case. With a similar approach but defining $n_\\mathrm{tr}$ to lie between 1 and $2n_\\mathrm{sat}$, Ref.~\\cite{Kalogera1996} predicted the maximum mass to be $3.2 M_\\odot$. Note, however, that by lowering $n_\\mathrm{tr}$, the authors found $3.9 M_\\odot$ as the maximum mass, again very close to our prediction. The maximum mass of neutron stars is therefore tightly correlated with $n_\\mathrm{tr}$ for both the MM and CSM models, as shown in Fig.~\\ref{fig:MRcomp}.\n\nFinally, due to the rather soft density dependence of chiral EFT constraints in the density range $1-2 n_{\\rm sat}$, $n_{\\rm tr}=2 n_{\\rm sat}$ together with the constraint $M_{\\text{max}}>M_{\\rm max}^{\\rm obs}$ seems to strongly disfavor EOS that lead to the appearance of disconnected compact-star branches, as suggested in Ref.~\\cite{Paschalidis:2017qmb}. Such EOS need very strong first-order phase transitions, which would soften the EOS so much that heavy two-solar-mass neutron stars could not be supported, in accordance with the findings of Ref.~\\cite{Alford:2015dpa}. Instead, chiral EFT calculations up to $n_{\\rm tr}=2 n_{\\rm sat}$ imply that EOSs with first-order phase transitions lead to neutron stars of the classification ``A'' or ``C'' of Ref.~\\cite{Alford:2013aca}. 
\n\n\\section{Results for GW170817}\\label{sec:results}\n\nIn this section, we confront the recent neutron-star merger observation GW170817 by the LIGO-Virgo (LV) collaboration with our two classes of EOS models. \n\n\\subsection{Posterior of the LIGO-Virgo analysis}\n\\label{sec:posterior}\n\nThe LV collaboration observed the GW signal of GW170817 for about $100\\,\\mathrm{s}$ (several thousand revolutions, starting from 25 Hz) and performed detailed analyses of the waveform~\\cite{Abbott:2018wiz}. Because the chirp mass $M_{\\text{chirp}}$, defined as \n\\begin{equation}\nM_{\\text{chirp}}=\\frac{(m_1 m_2)^{3\/5}}{(m_1+m_2)^{1\/5}}\\,,\n\\end{equation} \ncan be extracted from the entire signal, this observation made it possible to put tight constraints on it. For GW170817, the LV collaboration precisely determined $M_{\\text{chirp}}= 1.186\\pm 0.001 M_{\\odot}$.\n\n\\begin{figure}[t]\n\\includegraphics[trim= 0.0cm 0 0 0, clip=,width=0.9\\columnwidth]{M1M2histo.pdf}\\\\\n\\includegraphics[trim= 0.0cm 0 0 0, clip=,width=0.9\\columnwidth]{Lamhisto.pdf}\n\\caption{\\label{fig:posteriors}\nPosteriors for the LV observation of GW170817. Upper panel: The mass distributions for $m_1$ and $m_2$ from Ref.~\\cite{Abbott:2018wiz} (histograms) and the distributions used in this work (solid lines), see Eq.~\\eqref{eq:massdist}. Lower panel: Marginalized and normalized posterior probability for the distribution $p(\\tilde{\\Lambda})$ as defined in this work. 
We also show the corresponding distributions for the analysis of the LV collaboration (LVC), and the reanalysis of Ref.~\\cite{De:2018uhw} for the two extreme cases [uniform mass prior (u) and mass prior informed by double neutron stars (d)].\n}\n\\end{figure} \n\n\n\\begin{table*}\n\\caption{\\label{tab:LVCLtilde}\nFit parameters of the Gaussians of Eq.~\\eqref{eq:Gaussians}.}\n\\centering\n\\begin{tabularx}{\\textwidth}{XXXXXXXXXX}\n\\hline\n\\hline\nN & $a_1$ & $\\Lambda_1$ & $\\sigma_1$ & $a_2$ &$\\Lambda_2$ &$\\sigma_2$ & $a_3$ &$\\Lambda_3$ & $\\sigma_3$\\\\\n\\hline\n2 & 281.6 & 212.6 & 76.2 & 106.5\n& 547.5 & 171.0 & & &\\\\\n3 & 266.6 & 212.4 & 74.2 & 85.0 & 523.6 & 219.2 & 38.6\n& 560.8 & 49.5\\\\\n\\hline\n\\hline\n\\end{tabularx}\n\\end{table*}\n\nThe extraction of higher-order GW parameters from the waveform is complicated for several reasons. First, higher-order parameters are sensitive to the GW signal at later times and, thus, only a smaller part of the signal is suitable for their extraction. Second, there exist ambiguities between different higher-order parameters, e.g., between the individual neutron-star spins and the tidal polarizability. Because of this, the LV collaboration provided results for both a low-spin and a high-spin scenario. In this work, we only investigate the low-spin scenario for two reasons. First, large spins are not expected from the observed Galactic binary NS population. Second, because neutron stars spin down over time, low spins are also expected from the extremely long merger time of GW170817, of the order of gigayears. Therefore, the low-spin scenario is expected to be the more realistic one for binary neutron-star mergers such as GW170817.\n\nThe above-mentioned problems in the extraction of higher-order parameters lead to weaker constraints on the individual masses of the two component neutron stars in GW170817. 
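This degeneracy is easy to see numerically: the chirp mass fixes only one combination of the two component masses. The short Python sketch below (our illustration, not part of the LV analysis; the sample masses are made up) evaluates the chirp-mass formula defined above:

```python
# Illustration (not from the LV pipeline): evaluating the chirp-mass formula.
# The chirp mass fixes only the combination (m1*m2)^(3/5)/(m1+m2)^(1/5),
# so rather different (m1, m2) pairs can share almost the same M_chirp.

def chirp_mass(m1, m2):
    """Chirp mass in the same units as m1 and m2 (solar masses here)."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

# For an equal-mass binary, M_chirp = m / 2**0.2, so the measured
# M_chirp = 1.186 M_sun corresponds to components of about 1.36 M_sun:
m_equal = 1.186 * 2 ** 0.2
print(round(m_equal, 2))  # 1.36

# A hypothetical asymmetric pair (q = m2/m1 ~ 0.73) gives nearly the
# same chirp mass:
print(round(chirp_mass(1.60, 1.17), 3))  # 1.188, close to 1.186
```

Equal-mass components of about $1.36 M_\odot$ and a noticeably asymmetric pair thus reproduce almost the same chirp mass, which is why the individual masses remain poorly determined.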
With $m_1$ being the mass of the heavier and $m_2$ being the mass of the lighter neutron star in the binary, the mass distribution of the individual stars is typically described in terms of the parameter $q=m_2\/m_1$. The observed mass distributions for $m_1$ and $m_2$ are presented as histograms in the upper panel of Fig.~\\ref{fig:posteriors}. To use this information in our calculations, we describe the posterior of the LV collaboration for $M_{\\text{chirp}}$ and $q$ by the analytical probability distribution~\\cite{Margalit:2017}\n\\begin{equation}\np(q,M_{\\text{chirp}}) = p(q) p(M_{\\text{chirp}})\\,,\n\\end{equation}\nwhere\n\\begin{equation}\np(M_{\\text{chirp}}) \\propto \\exp [- (M_{\\text{chirp}}-\\bar{M}_{\\text{chirp}})^2\/2\\sigma_{M}^2]\\,,\n\\end{equation}\nwith $\\bar{M}_{\\text{chirp}}=1.186M_{\\odot}$ and $\\sigma_{M}= 10^{-3}M_{\\odot}$~\\cite{Abbott:2018wiz}. For the mass asymmetry $q$, we have fitted the function \n\\begin{equation}\np(q)=\\exp \\left(-\\frac12 v(q)^2 -\\frac{c}{2} v(q)^4 \\right)\\,,\\label{eq:massdist}\n\\end{equation}\nto the LV posterior for the component masses. We find $c=1.83$ and $v(q)=(q-0.89)\/0.20$, and compare the resulting normalized analytic distributions with the observed data in the upper panel of Fig.~\\ref{fig:posteriors}.\n\nSince in this work we will confront the gravitational-wave observations of the LV collaboration with nuclear physics constraints, i.e., use our set of EOSs together with the source properties of GW170817 to postdict the distribution of $\\tilde{\\Lambda}$, we do not make use of the observed probability distribution for $\\tilde{\\Lambda}$. However, for reasons of completeness, we have fitted functions consisting of two and three Gaussians of the form\n\\begin{eqnarray}\np(\\tilde{\\Lambda}) = \\sum_{i=1}^N a_i e^{-\\frac 1 2 \\left( \\frac{\\tilde{\\Lambda}-\\Lambda_i}{\\sigma_{i}}\\right)^2}\\label{eq:Gaussians}\n\\end{eqnarray}\nto the observed LV posterior for $\\tilde{\\Lambda}$. 
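For orientation, the fitted mixture of Eq.~\eqref{eq:Gaussians} can be evaluated directly. The following Python sketch (ours, for illustration only) uses the $N=2$ parameters listed in Table~\ref{tab:LVCLtilde}:

```python
import math

# Evaluate the fitted posterior p(Lambda-tilde) of Eq. (eq:Gaussians)
# with the N = 2 parameters of Table (tab:LVCLtilde):
# (a_i, Lambda_i, sigma_i) = (281.6, 212.6, 76.2) and (106.5, 547.5, 171.0).

PARAMS_N2 = [(281.6, 212.6, 76.2), (106.5, 547.5, 171.0)]

def p_lam_tilde(x, params=PARAMS_N2):
    """Unnormalized two-Gaussian fit to the LV posterior."""
    return sum(a * math.exp(-0.5 * ((x - mu) / sig) ** 2)
               for a, mu, sig in params)

# The fit has its main peak near Lambda-tilde ~ 213 and a smaller,
# broader bump near 547 (the second peak discussed in the text):
print(round(p_lam_tilde(212.6), 1), round(p_lam_tilde(547.5), 1))
```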
The resulting parameters $a_i$, $\\Lambda_i$, and $\\sigma_i$ are reported in Table~\\ref{tab:LVCLtilde}, and the resulting functions as well as the LV result are plotted in the lower panel of Fig.~\\ref{fig:posteriors}, where the horizontal black line represents the 90\\% LV confidence level for $\\tilde{\\Lambda}$. We also show the posteriors for the reanalysis of Ref.~\\cite{De:2018uhw} for the two extreme cases [uniform mass prior (u) and mass prior informed by double neutron stars (d)]. The main difference between the two analyses lies in the appearance of a second peak in the posterior probability distribution around $\\tilde{\\Lambda}\\sim 600$ for the LV result. The origin of this second peak is not well understood: the peak may be washed out when considering a wider domain of frequencies, starting from 23~Hz as in Ref.~\\cite{De:2018uhw}. The presence of the second peak is indeed an important issue for the prediction of $\\tilde{\\Lambda}$: including the second peak, the upper boundary of the 90\\% CL is 720, while it drops if the second peak is absent.\n\nTherefore, in the following, we consider a structureless flat probability distribution in $\\tilde{\\Lambda}$, and sample the mass distributions for $m_1$ and $m_2$ in GW170817 from the analytic function $p(q,M_{\\text{chirp}})$.\n\n\\subsection{Areas of constant $\\Lambda$}\n\n\\begin{figure}[t]\n\\includegraphics[trim= 0.0cm 0 0 0, clip=,width=0.9\\columnwidth]{MRLam.pdf}\n\\caption{\\label{fig:MRLam}\nMass-radius envelopes for $n_{\\rm tr}=n_{\\rm sat}$ of Fig.~\\ref{fig:MRcomp}(a) and areas of constant $\\Lambda$ for all CSM EOS parametrizations. We show areas for $\\Lambda=200$ (red), $\\Lambda=400$ (green), $\\Lambda=800$ (blue), and for $\\Lambda=1600$ (brown). For a typical $1.4 M_{\\odot}$ neutron star (horizontal dashed line), a constraint on $\\Lambda$ is equivalent to a radius constraint. The corresponding values for the MM (not shown) always lie within the areas for the CSM. 
\n}\n\\end{figure} \n\nBefore addressing GW170817, we focus on the tidal polarizability $\\Lambda$ of individual neutron stars. The tidal polarizability describes how a neutron star deforms under an external gravitational field, and depends on neutron-star properties as\n\\begin{align}\n\\Lambda &=\\frac23 k_2 \\left(\\frac{c^2}{G} \\frac{R}{M}\\right)^5\\,.\n\\end{align}\nHere, $k_2$ is the tidal Love number, which is computed together with the Tolman-Oppenheimer-Volkoff equations; see, for example, Refs.~\\cite{Flanagan2008,Damour2009,Moustakidis:2016sab} for more details.\n\n\\begin{figure*}[t]\n\\includegraphics[trim= 0.0cm 0 0 0, clip=,width=0.67\\columnwidth]{MchirpLam016.pdf}\n\\includegraphics[trim= 0.0cm 0 0 0, clip=,width=0.67\\columnwidth]{MchirpLam016_800.pdf}\n\\includegraphics[trim= 0.0cm 0 0 0, clip=,width=0.67\\columnwidth]{MchirpLam032.pdf}\\\\\n\\null\\hfill\n\\includegraphics[trim= 0.0cm 0 0 0, clip=,width=0.67\\columnwidth]{MchirpLam016_800_fic.pdf}\n\\includegraphics[trim= 0.0cm 0 0 0, clip=,width=0.67\\columnwidth]{MchirpLam032_fic.pdf}\n\\caption{\\label{fig:MchirpLam}\nEnvelopes for the CSM (red) and the MM (black) for the predicted tidal polarizability parameter $\\tilde{\\Lambda}$ as a function of chirp mass for neutron-star binaries with component masses in the range $1.0-1.9 M_{\\odot}$. We show in panel (a) the results for $n_{\\text{tr}}=n_{\\text{sat}}$, in panel (b) the results for $n_{\\text{tr}}=n_{\\text{sat}}$ when additionally enforcing the LV constraint from GW170817, and in panel (c) the results for $n_{\\text{tr}}=2 n_{\\text{sat}}$. \nIn panels (d) and (e), we show how this band reduces under a fictitious observation of a merger of two $1.6 M_{\\odot}$ neutron stars with $\\tilde{\\Lambda}$ measured to be $200-300$. We indicate GW170817 and the fictitious measurement (blue error bars) and the corresponding chirp masses (dotted vertical lines). 
In panel (e), the GW observations together with nuclear physics constraints would rule out the MM.}\n\\end{figure*} \n\n\nIt is interesting to look at areas of constant $\\Lambda$ within the MR plane. In this case, the relation of neutron-star mass and radius is given by \n\\begin{align}\nM&=\\left(\\frac32 \\frac{\\Lambda}{k_2}\\right)^{-\\frac15} \\frac{c^2}{G} R\\,,\n\\end{align}\nleading to the following scaling relation,\n\\begin{align}\n\\left(\\frac{M}{M_{\\odot}}\\right)&=0.6243 \\left(\\frac{\\Lambda}{k_2}\\right)^{-\\frac15} \\left(\\frac{R}{1 \\,\\mathrm{km}}\\right)\\,.\n\\label{eq:scaling}\n\\end{align}\nFor constant $\\Lambda$, this implies an almost linear relationship between $M$ and $R$, because the Love number $k_2$ does not vary strongly in that case. In addition, for different values of $\\Lambda$, the slopes are rather similar due to the small exponent $-1\/5$. In Fig.~\\ref{fig:MRLam}, we plot the mass-radius relation for $n_{\\rm tr}=n_{\\rm sat}$ for the CSM, together with areas of constant $\\Lambda$. In particular, we show areas for $\\Lambda=200, 400, 800$, and $1600$.\n\nWhile there is a tight correlation between radii and tidal polarizabilities, from Fig.~\\ref{fig:MRLam} one can see that both quantities still provide complementary information. For example, an exact observation of the tidal polarizability of a neutron star, i.e., with vanishing uncertainty, would still lead to a remaining uncertainty for the radius of a typical $1.4 M_{\\odot}$ neutron star. To be specific, for $\\Lambda=200$, the remaining radius uncertainty is still $\\approx 1$ km, compatible with the expected uncertainty of NICER~\\cite{NICER1}. For larger values of $\\Lambda$, this uncertainty decreases; for $\\Lambda=800$, it is only $\\approx 0.5$ km. However, based on GW170817, values larger than $720$ are ruled out for typical neutron stars. Hence, both tidal deformabilities and radii offer complementary information on neutron-star global structure. 
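The weak $\Lambda$ dependence of Eq.~\eqref{eq:scaling} is easy to verify numerically. In the sketch below (our illustration), the Love number $k_2 = 0.09$ is a typical value assumed by us for demonstration, not a value fitted in this work:

```python
# Numerical check of the scaling relation, Eq. (eq:scaling):
# M / M_sun = 0.6243 * (Lambda / k2)**(-1/5) * (R / 1 km).
# k2 = 0.09 is a representative Love number, assumed here for illustration.

def mass_on_constant_lambda_line(R_km, lam, k2=0.09):
    """Mass (in M_sun) at radius R_km on a line of constant Lambda."""
    return 0.6243 * (lam / k2) ** (-0.2) * R_km

# The exponent -1/5 makes the slope depend only weakly on Lambda:
# doubling Lambda rescales M by 2**(-0.2), i.e. by about 13 percent.
M_800 = mass_on_constant_lambda_line(12.0, 800.0)
M_1600 = mass_on_constant_lambda_line(12.0, 1600.0)
print(round(M_800, 2))           # about 1.2 M_sun for R = 12 km
print(round(M_1600 / M_800, 3))  # 0.871 = 2**(-0.2)
```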
\n\nFinally, from Eq.~(\\ref{eq:scaling}), one can infer the following fit,\n\\begin{align}\n\\left(\\frac{M}{M_{\\odot}}\\right)&= \\frac{a}{(b+\\Lambda)^{1\/5}} \\left(\\frac{R}{1 \\,\\mathrm{km}}\\right)\\,,\n\\label{eq:scalingFit}\n\\end{align}\nwhere we find $a=0.406435$ and $b=68.5$.\n\n\\subsection{Tidal polarizabilities of GW170817}\n\nFor neutron-star mergers, the GW signal allows the extraction of the binary tidal polarizability parameter $\\tilde{\\Lambda}$. This parameter is defined as a mass-weighted average of the individual tidal polarizabilities, \n\\begin{equation}\n\\tilde{\\Lambda}~=~\\frac{16}{13} \\left[\\frac{(m_1+12m_2)m_1^4\\Lambda_1 }{m_{\\text{tot}}^5}+ \\frac{(m_2+12m_1)m_2^4\\Lambda_2 }{m_{\\text{tot}}^5}\\right]\\,.\n\\end{equation}\nAs discussed in Sec.~\\ref{sec:posterior}, the extraction of the binary tidal polarizability suffers from increased uncertainties, because it impacts the GW signal only during the last few orbits~\\cite{Flanagan2008,Damour2009} and is correlated with other parameters. In the initial publication of the LV collaboration~\\cite{Abbott:2017}, the constraint $\\tilde{\\Lambda}\\leq 800$ was reported with 90\\% confidence (corrected to be $\\tilde{\\Lambda}\\leq 900$ in Ref.~\\cite{Abbott:2018wiz}). This analysis, however, was very general and did not assume that both objects in the binary system obey the same EOS. Several reanalyses have since improved this constraint. 
Assuming that both compact objects were neutron stars governed by the same EOS, Ref.~\\cite{De:2018uhw} used polytropic EOS models and a Bayesian parameter estimation with additional information on the source location from EM observations to derive limits on $\\tilde{\\Lambda}$ for different prior choices for the component masses: for uniform priors the reported 90\\% confidence interval was $\\tilde{\\Lambda}=84-642$, for a component mass prior\ninformed by radio observations of Galactic double neutron stars the result was $\\tilde{\\Lambda}=94-698$, and for a component mass\nprior informed by radio pulsars the reported result was $\\tilde{\\Lambda}=89-681$. A reanalysis by the LV collaboration found a new 90\\% confidence interval of $70 \\leq\\tilde{\\Lambda}\\leq 720$~\\cite{Abbott:2018wiz}; see Fig.~\\ref{fig:posteriors}. Finally, the LV collaboration reported an additional result, assuming that both merging objects were neutron stars governed by the same EOS~\\cite{Abbott:2018exr}. This EOS was based on the Lindblom parametrization~\\cite{Lindblom:2010bb}\nstitched to the SLy EOS for the crust, and resulted in $\\tilde{\\Lambda}=70-580$ with 90\\% confidence. For the different extractions, the lower limit is rather stable, but the upper limit varies from 580 to 800. \n\nIn general, the uncertainty range for all extractions is sizable. In the following, we will investigate the resulting $\\tilde{\\Lambda}$ obtained from state-of-the-art nuclear-physics models at low densities. To obtain these results, for all our EOS models we compute the combined tidal polarizability $\\tilde{\\Lambda}$ for thousands of NS-NS binaries, where we sample the mass $m_1$ of the heavier neutron star in the range $1.0-1.9 M_{\\odot}$ and the mass of the lighter neutron star $m_2$ in the range $1.0 M_{\\odot}-m_1$ (implying $q\\leq 1$). 
This allows us to explore a wide range of mass asymmetries and chirp masses ranging from $0.871M_{\\odot}$ to $1.654 M_{\\odot}$, which naturally includes the chirp masses for several known neutron-star binaries as well as GW170817. We show the resulting envelopes for $\\tilde{\\Lambda}$ as a function of $M_{\\rm{chirp}}$ in Fig.~\\ref{fig:MchirpLam}. We also indicate the chirp mass for GW170817, $M_{\\rm chirp}^{\\rm GW170817}=1.186 M_{\\odot}$~\\cite{Abbott:2018wiz} (blue dashed vertical lines), which allows us to extract nuclear-physics constraints on $\\tilde{\\Lambda}$ of GW170817. \n\nUsing nuclear-physics constraints from chiral EFT up to $n_\\text{sat}$ [panel (a)] leads to the widest allowed range for $\\tilde{\\Lambda}$ for a given chirp mass. This is true for both the MM and the CSM, but the CSM envelope is much larger due to the wider flexibility of the EOS at higher densities. For GW170817 ($M_{\\rm chirp}^{\\rm GW170817}=1.186 M_{\\odot}$), we find $\\tilde{\\Lambda}_{\\text{CSM}}=60-2180$ and $\\tilde{\\Lambda}_{\\text{MM}}=230-950$; for the CSM, the uncertainty in $\\tilde{\\Lambda}$ is much larger than the LV constraint for GW170817. For this transition density, both the MM and the CSM can be constrained by the LV constraint on GW170817 and, as a result, GW170817 adds information on the mass-radius relation of neutron stars. \n\nTo explore the impact of the LV constraint of Ref.~\\cite{Abbott:2018wiz}, we make use of $p(q,M_{\\text{chirp}})$ and, using a uniform prior, select only EOS-$m_{1,2}$ combinations with $70\\leq\\tilde{\\Lambda}\\leq 720$. In panel (b) of Fig.~\\ref{fig:MchirpLam} we show the resulting envelope for $\\tilde{\\Lambda}(M_{\\rm{chirp}})$ for the MM and CSM. In addition, we show the resulting envelopes for the EOS and the MR relation in panels (b) of Figs.~\\ref{fig:EpsPcomp} and~\\ref{fig:MRcomp}, respectively. 
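The sampling and selection just described can be sketched in a few lines of Python. The power-law ansatz for $\Lambda(m)$ below is a stand-in of our own for an EOS-derived curve (the actual analysis uses the full CSM and MM EOS sets), so only the structure of the procedure, not the numbers, should be taken from it:

```python
import random

# Sketch of the sampling described above. lam_of_m is a hypothetical
# power law normalized to Lambda(1.4 M_sun) = 400; the real analysis
# evaluates Lambda(m) from each CSM/MM equation of state instead.

def lam_tilde(m1, m2, lam1, lam2):
    """Binary tidal polarizability: mass-weighted average of Lambda_1,2."""
    mtot = m1 + m2
    return (16.0 / 13.0) * ((m1 + 12.0 * m2) * m1 ** 4 * lam1
                            + (m2 + 12.0 * m1) * m2 ** 4 * lam2) / mtot ** 5

def lam_of_m(m, lam_14=400.0):
    """Stand-in EOS curve: Lambda falls roughly as m**-6 (assumption)."""
    return lam_14 * (m / 1.4) ** -6.0

random.seed(1)
accepted = []
for _ in range(5000):
    m1 = random.uniform(1.0, 1.9)   # heavier star
    m2 = random.uniform(1.0, m1)    # lighter star, so q = m2/m1 <= 1
    lt = lam_tilde(m1, m2, lam_of_m(m1), lam_of_m(m2))
    if 70.0 <= lt <= 720.0:         # LV 90% interval for GW170817
        accepted.append((m1, m2, lt))

print(len(accepted), "of 5000 samples pass the cut")
```

Note that for equal masses the mass-weighted average reduces to the individual tidal polarizability, $\tilde{\Lambda}(m,m,\Lambda,\Lambda)=\Lambda$, which provides a simple consistency check of the implementation.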
Note that the resulting range of tidal polarizabilities for $M_{\\rm{chirp}}=1.186 M_{\\odot}$ of $\\tilde{\\Lambda}=70-1020$ in Fig.~\\ref{fig:MchirpLam}(b) is larger than the LV constraint. The reason is that we accept all EOSs that fulfill the LV constraint for any value of $q$ allowed according to $p(q)$. The range in Fig.~\\ref{fig:MchirpLam}(b), however, is computed for many more values of $q$. For example, if an EOS passes the constraint $\\tilde{\\Lambda}\\leq 720$ for $q=0.7$, then the resulting $\\tilde{\\Lambda}$ for $q=1$ will be larger. \n\nNaturally, enforcing this constraint rules out a considerable part of the EOSs, on both the high-pressure and the low-pressure side at high energy densities. This, again, is reflected in the mass-radius relation, where neutron stars with large radii are excluded by the LV constraint. For our analysis and the CSM, we find that the radius of a $1.4 M_{\\odot}$ neutron star, $R_{1.4}$, can be constrained to be $9.0\\, \\text{km}< R_{1.4}<13.6$ km. This was also found in Ref.~\\cite{Annala:2017llu}, where a polytropic EOS expansion was used to constrain the radius of neutron stars by enforcing the constraint $\\Lambda_{1.4}<800$ (the initial LV constraint of Ref.~\\cite{Abbott:2017}). Ref.~\\cite{Annala:2017llu} found that $R_{1.4}<13.6$ km, and both analyses are in excellent agreement. \n\nFinally, we assume the chiral EFT constraint to be valid up to $2n_\\text{sat}$ [panel (c)]. Even though the uncertainties are still sizable, the predicted total range for $\\tilde{\\Lambda}$ reduces dramatically. For GW170817, we find $\\tilde{\\Lambda}_{\\text{CSM}}=80-580$ and $\\tilde{\\Lambda}_{\\text{MM}}=280-480$. Our constraint, which is solely guided by nuclear-EFT input, is much tighter than the observational LV constraint and in excellent agreement with the recent detailed reanalysis by the LV collaboration~\\cite{Abbott:2018exr}. 
We emphasize, though, that our analysis is more constraining than the LV reanalysis: our 100\\% envelopes are compatible with the 90\\% contour of Ref.~\\cite{Abbott:2018exr}. Therefore, the sentiment that the neutron-star merger GW170817 revolutionized our understanding of the EOS is a bit of an exaggeration. GW170817, however, represents a new hope for obtaining different constraints on the EOS that might also offer the possibility to investigate new phases of dense matter. In this sense, GW170817 and the expected future detections will surely contribute to answering the long-standing question of the nature of the NS core.\n\nWe explicitly stress that our results imply that current nuclear physics knowledge in the relevant density range of $1-2 n_{\\rm sat}$, as obtained by ab initio calculations using modern nuclear Hamiltonians and state-of-the-art many-body methods, is compatible with the recent neutron-star merger observation but more constraining for neutron-star observables and the EOS. In addition, efforts in the nuclear-theory community to improve nuclear interactions might considerably reduce the theoretical uncertainty for the neutron-star-matter EOS between $1$ and $2 n_{\\rm sat}$, which will tighten our constraints even more. 
In general, this very interesting density range provides an excellent laboratory to probe nuclear-theory predictions against astrophysical observations and heavy-ion collision experiments.\n\n\n\\subsection{Impact of varying ${\\bf n_{\\text{tr}}}$ and the validity of chiral EFT predictions}\n\n\\begin{figure*}[t]\n\\begin{center}\n\\includegraphics[trim= 0.0cm 0 0 0, clip=,width=0.9\\columnwidth]{ntrRminRmax-band.pdf}\n\\includegraphics[trim= 0.0cm 0 0 0, clip=,width=0.9\\columnwidth]{ntrLamminmax-band.pdf}\n\\end{center}\n\\caption{\\label{fig:ntrRminRmax}\nRadius of a typical $1.4 M_{\\odot}$ neutron star, $R_{1.4}$ (left), and $\\tilde{\\Lambda}$ for $M_{\\rm chirp}=1.186 M_{\\odot}$ (right) as functions of $n_{\\rm{tr}}$. We show the envelopes for the CSM in red and for the MM in black. For the CSM, when requiring $c_S^2\\leq 0.5$ instead of $c_S^2\\leq1.0$, the hatched areas are excluded. We also indicate the constraints from GW170817 and the values of $n_{\\rm{tr}}$, above which nuclear-theory input alone becomes more constraining than observations.}\n\\end{figure*} \n\nThe present study, together with that of Ref.~\\cite{Tews:2018kmu}, is the first to use chiral EFT calculations of the neutron matter EOS up to twice nuclear saturation density with reliable error estimates to compute tidal polarizabilities for GW170817. Reliable uncertainty estimates are critical for understanding the impact that GW detections will have on elucidating the properties of dense matter inside neutron stars, and theoretical calculations of the dense-matter EOS without uncertainty estimates are of limited value for a meaningful analysis of GW data. Uncertainty estimates have shown that chiral EFT input remains useful up to $2 n_{\\rm sat}$, and we find, in contrast to other recent publications~\\cite{Annala:2017llu,Fattoyev:2017jql,Most:2018hfd}, that GW170817 does \\emph{not} provide new insight about the EOS that cannot be obtained from current nuclear physics knowledge. 
This message tempers claims made in these recent publications which state that the upper limit on the tidal polarizability derived from GW data rules out stiff nuclear EOSs. While this inference is correct, such stiff EOSs are already ruled out based on state-of-the-art nuclear Hamiltonians. In other words, models of dense matter excluded by the upper limit on the tidal deformability from GW170817 are already incompatible with the current microscopic EOSs at densities where error estimates can still be justified. \n\nNevertheless, the reliability of chiral interactions at these densities has been questioned. Although the convergence of the chiral expansion cannot be strictly proven in this density range, we present arguments to show that the order-by-order convergence of the chiral expansion for the EOS up to $2n_{\\rm sat}$ is still reasonable. First, the expansion parameter increases by only about 25\\% over the density interval $1-2 n_{\\rm sat}$. Second, Ref.~\\cite{Tews:2018kmu} analyzed the order-by-order convergence of the employed Hamiltonians at $2 n_{\\rm sat}$, and showed that, even though the reliability naturally decreases with increasing density, the order-by-order convergence remains reasonable and consistent with simple power-counting arguments within the theoretical uncertainty estimates. Even so, densities around $2 n_{\\rm sat}$ seem to provide an upper limit to the applicability of the chiral Hamiltonians we use in this work.\n\nTo support our main statement - namely that the constraints from GW170817 are compatible with but less restrictive than predictions of the EOS based on realistic nuclear potentials and do not yield specific new information about nuclear Hamiltonians or about possible phase transitions at supra-nuclear density - we investigate which density range for chiral EFT input is sufficient to justify this statement. 
We present the total uncertainty ranges for $R_{1.4}$ (left panel) and $\\tilde{\\Lambda}$ for $M_{\\rm chirp}=1.186 M_{\\odot}$ (right panel) as functions of the density $n_{\\rm tr}$ in Fig.~\\ref{fig:ntrRminRmax}. For $R_{1.4}$, we indicate the upper limit on the radii of Ref.~\\cite{Annala:2017llu}, $R_{1.4}\\leq 13.6$ km, which was obtained using $n_{\\rm tr}=n_{\\rm sat}$ and the LV constraint (horizontal dotted line). We find that the CSM alone constrains the radii to be smaller than this bound for $n_{\\rm tr}>0.23 \\,\\mathrm{fm}^{-3} \\approx 1.44 n_{\\rm sat}$ (an 11\\% increase of the expansion parameter compared to $n_{\\rm sat}$). For the tidal polarizability, we indicate the LV constraint as a horizontal blue band and find that the CSM leads to $\\tilde{\\Lambda}\\leq 720$ as soon as $n_{\\rm tr}> 0.285 \\,\\mathrm{fm}^{-3} \\approx 1.78 n_{\\rm sat}$ (a 20\\% increase of the expansion parameter compared to $n_{\\rm sat}$). We would like to emphasize that these crucial values for $n_{\\rm tr}$ for both observables do not necessarily have to agree, as seen in Fig.~\\ref{fig:ntrRminRmax}. The reason is that the upper limit on $\\tilde{\\Lambda}$ depends on $q$ while $R_{1.4}$ does not. In Fig.~\\ref{fig:MchirpLam}(b) we have seen that when varying $q$ in the range allowed by GW170817, $\\tilde{\\Lambda}$ can increase to values $\\sim 1000$ for the EOSs that pass the LV constraint from GW170817. Chiral EFT input becomes compatible with this value at $n_{\\rm tr}\\sim 0.23 \\,\\mathrm{fm}^{-3}$, in agreement with the value for $R_{1.4}$. At these values for $n_{\\rm tr}$, in particular at $1.44 n_{\\rm sat}$, the arguments for the validity of chiral interactions are even stronger, which strengthens our main statement.\n\nFinally, the value of $n_{\\rm tr}$ also affects the speed of sound inside neutron stars. The speed of sound is expected to approach the conformal limit of $c_S^2=1\/3$ at very high densities~\\cite{Kurkela:2010}. 
In neutron stars, though, it is not clear if this conformal limit remains valid or not. As discussed in detail in Ref.~\\cite{Tews:2018kmu}, the neutron-matter EOS up to $n_{\\rm tr}=2 n_{\\rm sat}$ requires the speed of sound to exceed the conformal limit in order to be sufficiently stiff to stabilize the observed two-solar-mass neutron stars. In fact, for chiral models the speed of sound has to increase beyond the conformal limit for $n_{\\rm tr}>0.28 \\,\\mathrm{fm}^{-3}$ and even for phenomenological nuclear Hamiltonians, which lead to stiffer neutron-matter EOS, this statement remains valid for $n_{\\rm tr}>0.31 \\,\\mathrm{fm}^{-3}$. While there might be EOSs that are much stiffer below $2 n_{\\rm sat}$ and, hence, stabilize the heaviest neutron stars while still obeying the conformal limit, such EOSs are ruled out by modern nuclear Hamiltonians. \n\nTherefore, the neutron-matter EOS up to $2 n_{\\rm sat}$ for state-of-the-art nuclear Hamiltonians requires the speed of sound in neutron stars to exhibit a non-monotonic behavior, i.e., increasing beyond $c_S^2=1\/3$ but decreasing at higher densities to approach this limit.\nFor example, for chiral EFT interactions and $n_{\\rm tr}=2 n_{\\rm sat}$, the speed of sound has to reach values $c_S^2\\geq 0.4$. \nThe question remains, though, which forms of strongly-interacting matter lead to such a behavior for the speed of sound. \nIn order to estimate the impact of the speed-of-sound behavior on $R_{1.4}$ and $\\tilde{\\Lambda}$, we present hatched areas in Fig.~\\ref{fig:ntrRminRmax} which are excluded for $c_S^2\\leq0.5$. We choose this limiting value solely for illustrative purposes.\nThis constraint slightly reduces the upper bound on neutron-star radii but it would mostly rule out low-radius neutron stars. \nThe reason is that neutron stars can have very small radii only for strong first-order phase transitions with low onset densities. 
To simultaneously support $2M_{\\odot}$ neutron stars, the EOS has to experience a sudden subsequent stiffening, i.e., the speed of sound has to increase dramatically. For a larger possible speed of sound, stronger phase transitions are allowed, which leads to stars with smaller radii. Limits on $c_S^2$, on the other hand, rule out the strongest phase transitions, and increase the smallest possible radius. For $c_S^2\\leq 0.5$, the lower limit on the radius of a $1.4M_{\\odot}$ neutron star is approximately 10 km, of the order of the constraint of Ref.~\\cite{Bauswein:2017vtn}.\n\n\\subsection{Impact of additional constraints}\n\n\\begin{figure}[t]\n\\includegraphics[trim= 0.0cm 0 0 0, clip=,width=0.9\\columnwidth]{RLam.pdf}\n\\caption{\\label{fig:RLam}\nEnvelopes for the correlation between $\\tilde{\\Lambda}$ of GW170817 and the radius of a $1.4 M_{\\odot}$ (red) and the radius of a $1.6 M_{\\odot}$ (blue) neutron star for $n_{\\text{tr}}=2 n_{\\text{sat}}$ and the CSM. The corresponding values for the MM (not shown) lie within the CSM envelopes. \nWe also show the lower limit of the LV constraint on the tidal polarizability of GW170817~\\cite{Abbott:2018wiz}, the proposed constraint of Ref.~\\cite{Radice:2017lry} and its update of Ref.~\\cite{Radice:2018ozg}, and the radius constraint for a $1.6 M_{\\odot}$ neutron star from Ref.~\\cite{Bauswein:2017vtn}. \n}\n\\end{figure} \n\nEven though the tidal polarizabilities extracted from GW170817 alone may not revolutionize our understanding of the EOS, several additional constraints based on the EM counterpart were proposed. These additional constraints were mostly based on the fact that the EM signal of GW170817 does not seem to imply a prompt collapse of the hypermassive merger remnant to a black hole. Instead, it is argued that the merger remnant survived for several hundred milliseconds before collapse. 
Based on this assumption, several groups independently suggested the maximum mass of neutron stars to be less than $\\approx 2.2-2.3 M_{\\odot}$~\\cite{Margalit:2017,Shibata:2017xdx,Rezzolla:2017aly}. While this constraint is powerful for smooth EOS models, which exhibit a strong correlation between $M_{\\rm max}$ and radii of typical neutron stars, the appearance of strong first-order phase transitions in general EOS models implies that the maximum mass is not very constraining for the structure of typical neutron stars; see also Ref.~\\cite{Tews:2018iwm}. \n\nAdditional constraints for radii and tidal polarizabilities were proposed based on the same assumptions. Ref.~\\cite{Bauswein:2017vtn} suggested that the EM observation can be used to argue that $R_{1.6}\\geq 10.68_{-0.04}^{+0.15}$ km. In contrast to the $M_{\\rm max}$ constraint, a radius constraint has a sizable impact on the CSM: In Figs.~\\ref{fig:EpsPcomp}(b) and (c) as well as Figs.~\\ref{fig:MRcomp}(b) and (c) we indicate parts of the envelopes which are excluded by $R_{1.6}\\geq 10.68_{-0.04}^{+0.15}$ km by hatched areas. In addition, Ref.~\\cite{Radice:2017lry} suggested that the amount of ejecta determined from the EM observations implies $\\tilde{\\Lambda}>400$. This constraint was later updated to $\\tilde{\\Lambda}>300$~\\cite{Radice:2018ozg}. In Fig.~\\ref{fig:RLam}, we show the correlation between $\\tilde{\\Lambda}$ and the radii of a $1.4 M_{\\odot}$ neutron star, $R_{1.4}$, and a $1.6 M_{\\odot}$ neutron star, $R_{1.6}$, for $n_\\text{tr}=2n_\\text{sat}$ and the CSM. While in general radius and tidal polarizabilities are correlated, the appearance of phase transitions washes this correlation out. Fig.~\\ref{fig:RLam} again highlights the fact that even an exact determination of $\\tilde{\\Lambda}$ leaves a considerable radius uncertainty. 
Therefore, independent observations of radii and tidal polarizabilities are crucial to pin down the high-density EOS of nuclear matter.\n\n\\begin{figure*}[t]\n\\begin{center}\n\\includegraphics[trim= 0.0cm 0 0 0, clip=,width=0.9\\columnwidth]{Diffq0710.pdf}\n\\includegraphics[trim= 0.0cm 0 0 0, clip=,width=0.9\\columnwidth]{Diffq1007.pdf}\n\\end{center}\n\\caption{\\label{fig:q0710diff}\nEquations of state for $n_{\\rm tr}=n_{\\rm sat}$ which pass the LV constraint $70\\leq \\tilde{\\Lambda}\\leq 720$ for $q=0.7$ but not for $q=1.0$ [panel (a)] and vice versa [panel (b)].\n}\n\\end{figure*} \n\nIn Fig.~\\ref{fig:RLam}, we also show the constraints of Refs.~\\cite{Bauswein:2017vtn,Radice:2017lry,Radice:2018ozg}. The radius constraint implies that $\\tilde{\\Lambda}\\geq 180$ while the constraint of Ref.~\\cite{Radice:2017lry} (Ref.~\\cite{Radice:2018ozg}) implies $R_{1.6} \\sim R_{1.4}\\geq 11.5$ km ($10.5$ km). All of these constraints are based on empirical formulas extracted from simulations for a limited set of model EOSs. Especially for the constraints of Refs.~\\cite{Radice:2017lry,Radice:2018ozg}, this set contains only four nucleonic EOS and, therefore, is likely overestimated~\\cite{Tews:2018iwm}. \nIn the case of the first constraint, a similar argument might be made. Nevertheless, in that case the authors try to explore the full EOS dependence which results in a more conservative constraint.\nIn both cases, however, future numerical simulations with additional EOSs, including, e.g., phase transitions, can be used to refine these constraints and improve their robustness.\n\nIn addition to inferences from GW170817, additional future observations might dramatically improve our understanding of the EOS. \nThe NICER~\\cite{NICER1} and eXTP~\\cite{Watts:2018iom} missions will provide neutron-star radii with a few percent uncertainty: the NICER mission is expected to provide first results within this year. 
As we have seen above, these future radius observations might considerably reduce the ambiguity of the allowed EOS models. A measurement of $R_{1.4}$ with 5\\% accuracy will add valuable information and might help distinguish EOSs with and without phase transitions; see also Ref.~\\cite{Tews:2018kmu}.\n\nIn the next years, additional neutron-star merger observations by the LV collaboration are expected. While the uncertainty for the tidal polarizability associated with GW170817 is not sufficient to constrain the EOS, this might change for future observations. For example, mergers with better signal-to-noise ratios could be observed, or sufficiently many mergers might be observed so that accurate information can be extracted from the combined data. In addition, third-generation GW detectors might provide tidal-polarizability measurements with 10\\% uncertainty. To illustrate the possibilities offered by such new GW events, we inject in Figs.~\\ref{fig:MchirpLam}(d) and (e) a fictitious measurement with $M_{\\rm chirp}=1.385\\,M_{\\odot}$ and $\\tilde{\\Lambda}$ measured to lie in the range $200-300$. Such an observation would dramatically reduce the uncertainties in the EOS: \nit would reduce the allowed radius range for a typical neutron star to \n11.7-13.4 km for $n_{\\rm tr}=n_{\\rm sat}$ and to only 11.7-12.5 km for $n_{\\rm tr}=2 n_{\\rm sat}$. Also, it is interesting to note that in this case the MM cannot reproduce the two events, GW170817 and the fictitious one. 
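For reference, the chirp mass quoted for this fictitious event is the standard combination of the component masses,\n\\begin{equation}\nM_{\\rm chirp}=\\frac{(m_1 m_2)^{3/5}}{(m_1+m_2)^{1/5}}\\,,\n\\end{equation}\nso that $M_{\\rm chirp}=1.385\\,M_{\\odot}$ corresponds, e.g., to an equal-mass binary with $m_1=m_2\\approx 1.59\\,M_{\\odot}$.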
There is, therefore, great potential in combining future detections as a filter for EOS models, and the accumulation of GW tidal-deformability measurements may offer the possibility to make statements about the existence of phase transitions in dense matter.\n\n \\begin{figure*}[t]\n\\begin{center}\n\\includegraphics[trim= 0.0cm 0 0 0, clip=,width=0.9\\columnwidth]{RtilLamtil016.pdf}\n\\includegraphics[trim= 0.0cm 0 0 0, clip=,width=0.9\\columnwidth]{RtilLamtil032.pdf}\n\\end{center}\n\\caption{\\label{fig:EmpRel}\nRelation connecting the common radius $\\hat{R}$ and the binary tidal polarizability $\\tilde{\\Lambda}$ for $0.7\\leq q\\leq 1.0$.}\n\\end{figure*}\n\nAn inferred bound of $\\hat{R}>12.55$ km, combined with the correlation between $\\Lambda$ and $R$, is used to deduce that $\\Lambda_\\text{1.4}>490$. As discussed earlier, both these correlations are \\emph{model dependent}. It is useful to compare these inferences to the predictions of our minimal model shown in Fig.~\\ref{fig:ntrRminRmax}, which assumes a smooth EOS without phase transitions, does not violate experimental data for the neutron-skin thickness of $^{208}$Pb, but can accommodate smaller values for $R_\\text{1.4}$ and $\\Lambda_\\text{1.4}$. \n\nIn Ref.~\\cite{Most:2018hfd}, the authors impose an additional constraint requiring that $M_\\text{max} < 2.16~M_\\odot$ and employ EOSs with and without strong first-order phase transitions to determine limits on the neutron-star radius and deformability. In the absence of phase transitions they find that $12~\\text{km} < R_{1.4} < 13.45~\\text{km}$ and require $\\Lambda_{1.4}> 375$. This range is deduced as the $2\\sigma$ interval by exploring a large suite of hadronic models. Our analysis based on the minimal model finds that smaller radii are possible. Further, we caution against a probabilistic interpretation of the allowed ranges for $R_{1.4}$ and $\\Lambda_{1.4}$, because it is difficult to assign likelihoods to a specific realization of the EOS. 
The inclusion of strong phase transitions in Ref.~\\cite{Most:2018hfd} allows for the existence of ``twin star'' solutions containing two separate stable branches of NSs. In this case, smaller values for $R_{1.4}$ and $\\Lambda_{1.4}$ are allowed and the constraints weaken to $R_{1.4}>8.53~\\text{km}$ and $\\Lambda_{1.4}>35.5$. The results obtained using the maximal model (CSM) are in good agreement with these limits. \n\n\\section{Summary}\n\\label{sec:summary}\n\nTo summarize, we confronted the recent GW observation with modern nuclear-physics constraints from chiral EFT. We elaborated on our results of Ref.~\\cite{Tews:2018iwm} and provided many additional results. \n\nIn particular, we have used two different classes of models to extend QMC results with chiral EFT interactions to the higher densities encountered in the core of neutron stars. We have used a minimal model (MM), which is based on a density expansion around saturation density, and a maximal model (CSM), based on a very general expansion in the speed of sound, which explores all EOSs consistent with the low-density input from chiral EFT. We used these models to study the uncertainties for the EOS and neutron-star observables for chiral EFT input up to either $n_{\\rm sat}$ or $2 n_{\\rm sat}$.\n\nWe used these models with input from nuclear physics up to nuclear saturation density and data from GW170817 to deduce that the radius of a typical neutron star has to be $R_{1.4}\\leq 13.6$ km. If instead EFT predictions for the EOS are used up to twice nuclear saturation density, we find that $\\tilde{\\Lambda}<580$ and $R_{1.4}\\leq 12.6$ km. These smaller ranges suggest that future observations need to provide much more precise constraints to enable conclusions about the EOS or provide evidence for novel phases of matter in neutron stars. We compared our results to other recent works that arrived at the opposite conclusion, and discussed the robustness of our main statement. 
\n\nWe studied the impact of additional constraints on our findings. Most of these additional constraints are derived from interpretations of the EM counterpart of GW170817, and provide limits on radii, tidal polarizabilities, or the maximum mass. We showed that constraints on the maximum mass do not reduce the EOS uncertainty for typical neutron stars, in contrast to radius information, which is rather valuable. We also investigated how an upper limit on the speed of sound in neutron stars affects our findings.\n\nWe finally investigated the impact of strong first-order phase transitions on our predictions. Contrasting the predictions of the MM and the CSM may provide useful insights into how future measurements of $\\tilde{\\Lambda}$ from neutron-star mergers can help to identify new forms of matter at densities beyond nuclear saturation. \n\nTo conclude, we pose the question of whether and when the precision of gravitational-wave observations will be sufficient to provide constraints on the EOS that are tighter than the ones from nuclear theory. From our results, we estimate that the uncertainty in $\\tilde{\\Lambda}$ needs to be of the order of $\\Delta\\tilde{\\Lambda}<300$ to test the chiral EFT prediction in the density range $n_{\\rm sat}-2n_{\\rm sat}$. Based on the contrast between the MM and the CSM, we expect that $\\Delta\\tilde{\\Lambda}<100$ is needed to shed light on the possible existence of phase transitions in dense matter.\n\n\\begin{acknowledgement}\nThis work was supported in part by the U.S.~DOE under Grants \nNo.~DE-AC52-06NA25396 and DE-FG02-00ER41132, by the LANL LDRD program, and by the National Science Foundation Grant No.~PHY-1430152 (JINA Center for the Evolution of the Elements). \nJ.M. was partially supported by the IN2P3 Master Project MAC, the ``NewCompStar'' COST Action MP1304, and the PHAROS COST Action MP16214.\nThis research used resources provided by the Los Alamos National\nLaboratory Institutional Computing Program, which is supported by the\nU.S. 
Department of Energy National Nuclear Security Administration under Contract No. 89233218CNA000001. Computational resources have been provided by the National Energy Research Scientific Computing Center (NERSC), which is supported by the U.S. Department of Energy, Office of Science, under Contract No. DE-AC02-05CH11231. Computational resources have also been provided by the J\\\"ulich\nSupercomputing Center.\n\\end{acknowledgement}\n\n\\bibliographystyle{epj}\n\n\\section{Introduction}\nIn the past few years, GAN \\cite{5} has been widely used in image generation, image inpainting, style transfer, super-resolution reconstruction, etc. However, alongside the great progress in GANs, an essential problem has always been with us: mode collapse. This phenomenon heavily harms the diversity and quality of the images generated by the generator. In this paper, we mainly focus on mode collapse alleviation and aim to generate data of high diversity based on the available data and, further, to apply the augmented data to downstream tasks for better performance.\n\n\\begin{figure}[t]\n\\begin{center}\n \\includegraphics[width=1.0\\linewidth]{.\/figure\/top_figure_color.png}\n\\end{center}\n\\caption{\\textbf{Domain translation with PDPM.} Both StarGAN and PDPM are trained for 10 epochs. PDPM can transfer the facial expressions of attribute images to other faces, while the vanilla StarGAN cannot capture the change of the eyes; this indicates that PDPM converges much faster. Besides, PDPM generates accurate attention masks of the changes relative to the first column, while StarGAN captures much more background.}\n\\label{fig:animation}\n\\end{figure}\n\nIn general, mode collapse manifests itself in the trained generator being able to generate images from only some specific classes, which severely harms data diversity. 
Currently, to the best of our knowledge, there are two main ways to alleviate mode collapse: modifying the architecture (or training method) of GANs, or refining the loss function. The main drawback of the former is its poor generalization performance, since it is effective only for some specific networks; for example, in Unrolled GAN \\cite{25}, the generator has to consider both its current state and the state of the discriminator after $K$ iterations, which is hard to apply to other models. By contrast, the latter usually has better generalization ability, but it is difficult to design a universal module; for example, in DRAGAN \\cite{31} and MSGAN \\cite{24}, new penalty terms are introduced to improve data diversity, but in our experiments (see Figure \\ref{fig:men-women}) we notice that these methods may generate noisy pixels which harm image quality. Besides, using multiple GANs can alleviate this problem to some extent, but due to its high cost this method is rarely adopted in practice. To date, most approaches to mode collapse alleviation operate in the original data space, while few methods address the problem via the features of the fake images.\n\nMoreover, in our experiments (see Figure~\\ref{fig:dp-collapse-sample}), we notice an abnormal phenomenon: sometimes very different latent vectors are mapped to similar images, which is the essential characteristic of mode collapse. Besides, in traditional GANs, the images generated by the generator often look like combinations of several images, which usually leads to low image resolution and quality. 
In brief, the observations stated above are manifestations of mode collapse, and they indicate the necessity of addressing this issue.\n\n\\begin{figure}[t]\n\\begin{center}\n \\includegraphics[width=1.0\\linewidth]{.\/figure\/framework}\n\\end{center}\n\\caption{\\textbf{The proposed PDPM.} In the framework above, $f_1$ and $f_2$ are feature maps extracted from the discriminator, $z_1$ and $z_2$ are latent vectors, while $g_1$ and $g_2$ are their corresponding fake images. $S(\\cdot)$ indicates the similarity measurement function. The key idea of PDPM is that the similarity relationship of the fake images' features should be consistent with that of their corresponding latent vectors.}\n\\label{fig:proposed-framework}\n\\end{figure}\n\n\nTo alleviate the effects of mode collapse while avoiding the drawbacks of previous methods, a novel pluggable diversity penalty module, hereinafter \\textbf{PDPM}, is proposed in this paper. Figure~\\ref{fig:proposed-framework} shows the pipeline of our framework. Concretely, the more different two latent vectors are, the more different their corresponding fake images should be; i.e., if two latent vectors are different, then PDPM enforces the generator to generate two images with different features. Unlike current mainstream methods, PDPM imposes its constraint in feature space, which is more robust than working in data space. In latent space, the similarity among latent vectors is computed using a Gram matrix. In data space, however, each image usually has a great number of pixels that are unnecessary for distinguishing images, and in fact, the authors of~\\cite{11} find that feature representations can describe an image better than raw pixels. Thus, PDPM calculates the similarity of images via their corresponding feature maps. Besides, a nonlinear mapping is applied to normalize the similarity values. The key idea of PDPM is that the similarity of feature pairs should be consistent with the similarity of their corresponding latent-vector pairs. 
This paper has the following contributions: \n\\begin{itemize}\n\\item A novel block named PDPM is proposed to alleviate mode collapse in GANs. PDPM has better generalization ability than most current methods and can be used in almost all GANs as a pluggable attachment. Besides, PDPM imposes its constraint in feature space, which is more robust and yields better pixel-value stability;\n\n\\item PDPM has great transfer ability and low computation cost. It can be used in image generation, image data augmentation, domain translation and so on, which indicates that PDPM is not sensitive to the task. Besides, PDPM is almost parameter-free, having only one balance coefficient;\n\n\\item Compared with other complex methods, PDPM is effective yet easy to implement. The results in Figure~\\ref{fig:synthetic} on a 2D synthetic dataset show that PDPM helps GANs capture many more modes effectively, and Figure~\\ref{fig:animation} also suggests that PDPM performs well in domain translation. In image data augmentation, PDPM brings a marked accuracy improvement on ResNet. In image generation, PDPM outperforms MSGAN, WGAN\\_GP, WGAN\\_GP\\_MS and some other SOTA architectures both visually and quantitatively (IS and FID).\n\\end{itemize}\n\n\\section{Related Work}\n\n\\noindent\n\\textbf{Mode Collapse Reduction}\\quad Researchers have done a lot of work to improve data diversity and training stability. In Unrolled GAN~\\cite{25}, Metz \\emph{et al.} define the generator objective with respect to an unrolled optimization of the discriminator. The generator has to consider both its current state and the state of the discriminator after $K$ iterations. This can lead to a better solution, but it is hard to apply to other models. In Energy-based GAN \\cite{32}, Zhao \\emph{et al.} use entropy to measure the diversity of images generated by the generator while maintaining a low energy state. 
In VEEGAN~\\cite{26}, Srivastava \\emph{et al.} introduce a variational principle for estimating implicit probability distributions, which helps avoid mode collapse. Further, in PacGAN~\\cite{28}, Lin \\emph{et al.} let the discriminator make decisions based on multiple samples from the same class, which can penalize a generator suffering from mode collapse. In BourGAN~\\cite{29}, Xiao \\emph{et al.} treat modes as a geometric structure of the data distribution in a metric space, which also leads to a better generator. Recently, in MSGAN~\\cite{24}, Mao \\emph{et al.} modify the objective function to encourage the generator to explore minor modes in data space. By contrast, our proposed PDPM captures modes in feature space, which is more robust than MSGAN's data-space approach.\\\\\n\n\\noindent\n\\textbf{Data Augmentation Learning}\\quad Image data augmentation has been proven to be effective in practice. In~\\cite{8}, data augmentation is used to reduce overfitting. Also in~\\cite{4}, Shorten \\emph{et al.} find that even simple techniques such as cropping, rotating and flipping can have marked effects on reducing overfitting. Currently, as in \\cite{9}, image data augmentation has three main branches: traditional transformations, generative methods and learning the augmentation. The first has been well studied, while the last has a very high computation cost, like NAS in \\cite{33}. Among generative methods, GAN is the representative, but it is rarely used due to the limited diversity of the generated data caused by mode collapse. That is part of PDPM's motivation.\\\\\n\n\\noindent\n\\textbf{Convergence and Stability of GANs}\\quad The stability of the training process and the convergence speed are vital for GANs. In~\\cite{5}, the vanilla GAN is proposed for generating high-quality images, but training it stably is not an easy task due to the imbalance between the generator and the discriminator. 
Further, in~\\cite{6}, the Wasserstein distance is used instead of the KL-divergence to measure the similarity between distributions, which greatly reduces the difficulty of GAN training; then, in~\\cite{7}, a gradient penalty term is proposed to enforce the Lipschitz constraint instead of the weight clipping used in WGAN. These difficulties in training GANs suggest that refining the loss function is not a trivial task, and they indicate the necessity of evaluating the convergence and stability of GANs. \\\\\n\n\\noindent\n\\textbf{Feature Representations of CNN}\\quad A deep convolutional layer can extract the features of an input image accurately. In~\\cite{10}, a deconvnet is used to visualize the features that a fully trained model has learned. Furthermore, in~\\cite{11}, Zhou \\emph{et al.} demonstrate that convolutional neural networks are able to localize the discriminative regions of an image. Based on this finding, we use the features extracted from the discriminator to represent the images instead of using the images directly. In~\\cite{1}, the proposed Grad-CAM method also supports the results of~\\cite{11}. Figure~\\ref{fig:grad-cam} in the Appendix shows some Grad-CAM results on CelebA~\\cite{23} with our trained discriminator, and these results show that PDPM captures image features accurately.\\\\\n\n\n\\section{Motivation}\nTo alleviate mode collapse, the generator must capture many more modes of the available data. As in Figure \\ref{fig:mode_collapse}, $z$ represents latent vectors, while $f(z)$ represents the features of the corresponding fake images. In feature space, mode collapse manifests itself in only a part of the modes being captured by the generator, as in the group labeled {\\tt\\small{the vanilla GAN}} in Figure \\ref{fig:mode_collapse}. As a result, the data generated by the generator gather in some specific classes or around some typical features. 
Inspired by this, PDPM first lets the generator generate images with more discrete features, and then, as the discriminator develops, PDPM's penalty term makes the latent vectors distribute mainly around the centers of feature modes. Clustering different latent vectors around similar features results in a higher penalty loss. In brief, PDPM first makes the features of fake images much more discrete ({\\tt\\small{PDPM starts}} in Figure \\ref{fig:mode_collapse}), and then, with the help of the discriminator and the regularization term, assigns these latent vectors to different mode centers ({\\tt\\small{PDPM ends}} in Figure \\ref{fig:mode_collapse}). In the next section, the pipeline of PDPM and its mechanism are given in detail. \n\n\\begin{figure}[htpb]\n\\begin{center}\n \\includegraphics[width=1.0\\linewidth]{.\/figure\/mode_collapse}\n\\end{center}\n\\caption{\\textbf{Illustration of mode collapse in feature space.} The {\\tt\\small{feature modes distribution}} is the implicit real distribution of the data, and {\\tt\\small{the vanilla GAN}} indicates the modes captured by a generator without PDPM.}\n\\label{fig:mode_collapse}\n\\end{figure}\n\n\\section{Pluggable Diversity Penalty Module}\nAs introduced in previous sections, PDPM penalizes the generator if two different latent vectors are mapped to images with similar features. In this section, $G(\\cdot)$ and $D(\\cdot)$ denote the generator and the discriminator, and $p_z$ and $p_r$ are the distributions of the latent vectors and the real data. 
Besides, fake images denote the images generated by the generator, and if not specified otherwise, $f$ is used to represent the features of fake images extracted from the discriminator.\n\n\\subsection{Measurement of similarity}\nSuppose $p_{z}(z)$, the distribution of latent vectors, is a standard normal distribution. A batch of vectors $\\{z_1, z_2, ..., z_m\\}$ is randomly sampled from $p_{z}(z)$, and the normalized Gram matrix is given by:\n\\begin{equation}\nG_z^*(i, j)=\\frac{z_i^Tz_j}{||z_i||_2\\cdot ||z_j||_2} \\label{eq:original-latent-sim}\n\\end{equation}\nwhere $\\|\\cdot\\|_2$ represents the l2-norm. It is reasonable to suppose that $z_i$ and $z_j$ are independent identically distributed (\\emph{i.i.d.}), and in fact, $G_{z}^*$ still follows a Gaussian distribution, which can be derived from the following claim:\\\\[6pt]\n\\emph{$f(x)$ and $g(x)$ are Gaussian PDFs with means $\\mu_f$ and $\\mu_g$ and standard deviations $\\sigma_f$ and $\\sigma_g$; then the product of $f(x)$ and $g(x)$ follows a scaled Gaussian distribution with $\\mu=\\frac{\\mu_f\\sigma_g^2+\\mu_g\\sigma_f^2}{\\sigma_f^2+\\sigma_g^2}$ and $\\sigma=\\sqrt{\\frac{\\sigma_f^2\\sigma_g^2}{\\sigma_f^2+\\sigma_g^2}}$. The scale factor is \\rm $s=\\frac{1}{\\sqrt{2\\pi(\\sigma_f^2+\\sigma_g^2)}}\\mathbf{exp}\\left [ -\\frac{(\\mu_f-\\mu_g)^2}{2(\\sigma_f^2+\\sigma_g^2)} \\right ]$.}\\\\\n\nLikewise, the similarity of feature pairs can be obtained as follows:\n\\begin{equation}\nG_f^*(i, j)=\\frac{f_i^Tf_j}{||f_i||_2\\cdot ||f_j||_2} \\label{eq:original-feature-sim}\n\\end{equation}\nwhere $f_i$ represents the flattened feature map of the \\emph{i}-th fake image extracted from the discriminator. 
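The product-of-Gaussians claim can be checked numerically. The following sketch (our own illustration, not code from the paper) compares the pointwise product of two Gaussian PDFs with the scaled Gaussian predicted by the claim, using the scale factor in its standard form $s=\exp[-(\mu_f-\mu_g)^2/(2(\sigma_f^2+\sigma_g^2))]/\sqrt{2\pi(\sigma_f^2+\sigma_g^2)}$; the means, standard deviations, and grid are arbitrary choices for the check:

```python
import numpy as np

def gauss(x, mu, sigma):
    """Gaussian PDF evaluated on x."""
    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

def product_params(mu_f, sig_f, mu_g, sig_g):
    """Mean, std, and scale factor of the product of two Gaussian PDFs."""
    var = sig_f ** 2 * sig_g ** 2 / (sig_f ** 2 + sig_g ** 2)
    mu = (mu_f * sig_g ** 2 + mu_g * sig_f ** 2) / (sig_f ** 2 + sig_g ** 2)
    s = np.exp(-(mu_f - mu_g) ** 2 / (2 * (sig_f ** 2 + sig_g ** 2))) \
        / np.sqrt(2 * np.pi * (sig_f ** 2 + sig_g ** 2))
    return mu, np.sqrt(var), s

mu_f, sig_f, mu_g, sig_g = 0.3, 1.2, -0.5, 0.8
mu, sigma, s = product_params(mu_f, sig_f, mu_g, sig_g)
x = np.linspace(-4.0, 4.0, 201)
lhs = gauss(x, mu_f, sig_f) * gauss(x, mu_g, sig_g)  # direct product of the PDFs
rhs = s * gauss(x, mu, sigma)                        # scaled Gaussian from the claim
assert np.allclose(lhs, rhs)                         # identity holds pointwise
```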
Since the values in Eq (\\ref{eq:original-latent-sim}) and Eq (\\ref{eq:original-feature-sim}) can be zero or negative, performing division directly does not make sense; thus, the \\emph{sigmoid} function is used to scale them.\nWith the scale factor denoted by $s$, Eqs (\\ref{eq:original-latent-sim}) and (\\ref{eq:original-feature-sim}) are revised as:\n\\begin{equation}\nG_z(i, j)=\\sigma(s\\frac{z_i^Tz_j}{||z_i||_2\\cdot ||z_j||_2}) \\label{eq:revised-latent-sim}\n\\end{equation}\n\n\\begin{equation}\nG_f(i, j)=\\sigma(s\\frac{f_i^Tf_j}{||f_i||_2\\cdot ||f_j||_2}) \\label{eq:revised-feature-sim}\n\\end{equation}\n\n\\subsection{Loss function}\nTo alleviate mode collapse, the diversity penalty module should have the following attributes:\n\n\\begin{itemize}\n\\item if two latent vectors are similar, their corresponding fake images are unlikely to be very different;\n\\item if two latent vectors are different, their corresponding images have to be different likewise, which means their corresponding feature maps should differ considerably.\n\\end{itemize}\n\n\\noindent Obviously, the diversity penalty module should pay particular attention to the second situation, which often results in mode collapse. Based on these observations, the diversity penalty module is designed as follows:\n\\begin{equation}\nDP(z)=\\frac{1}{m^2}\\sum_{i=1}^{m}\\sum_{j=1}^{m}\\frac{G_f(i,j)}{G_z(i,j)}\\label{eq:dp-z}\n\\end{equation}\nwhere $m$ represents the batch size, and $DP(z)$ has to be minimized when training GANs. Taking the vanilla GAN as an example, $D(\\cdot)$ is trained to maximize the probability of assigning the correct label to both real and fake images, while $G(\\cdot)$ is trained simultaneously to get a high score from $D(\\cdot)$. 
Thus, the basic loss function of a GAN can be formulated as follows:\n\\begin{align}\n&\\max_{G} L_G(z)= \\mathbb{E}_{z\\sim p_{z}}D(G(z)) \\\\\n&\\min_{D} L_D(z, x)=\\mathbb{E}_{z\\sim p_{z}}D(G(z))-\\mathbb{E}_{x\\sim p_r}D(x) \\\\\n&\\max_{G}\\min_{D}\\,\\mathbb{E}_{z\\sim p_{z}}D(G(z))-\\mathbb{E}_{x\\sim p_r}D(x) \n\\end{align}\nTo perform the diversity penalty, we only need to add the diversity penalty loss to the generator. The loss function of a GAN with PDPM can be formulated as follows: \n\\begin{align}\n&\\max_{G} L_G(z)= \\mathbb{E}_{z\\sim p_{z}}D(G(z)) - \\lambda \\mathbb{E}_{z\\sim p_{z}}DP(z)\\label{eq:gloss}\\\\\n&\\min_{D} L_D(z, x)=\\mathbb{E}_{z\\sim p_{z}}D(G(z))-\\mathbb{E}_{x\\sim p_r}D(x) \\label{eq:dloss}\\\\\n&\\max_{G}\\min_{D}\\,\\mathbb{E}_{z\\sim p_{z}}D(G(z))-\\mathbb{E}_{x\\sim p_r}D(x) - \\lambda \\mathbb{E}_{z\\sim p_{z}}DP(z) \\label{eq:total-loss}\n\\end{align}\nwhere $\\lambda$ is the balance coefficient of the diversity penalty term. The loss function of the discriminator remains unchanged.\nAccording to Eqs (\\ref{eq:gloss}), (\\ref{eq:dloss}) and (\\ref{eq:total-loss}), the training pipeline is summarized in \\textbf{Algorithm 1}.\n\n\\subsection{Mechanism Explanations}\nWhen training GANs, the discriminator is usually trained $k$ times while the generator is trained only once, which means the discriminator usually converges better than the generator. At the beginning of training, PDPM enforces the generator to generate fake images with discrete features, and this makes it possible for the generator to capture more feature modes, as in {\\tt\\small{PDPM starts}} in Figure \\ref{fig:mode_collapse}. At that time, the discriminator is not well trained, and it does not penalize the generator severely. 
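As a concrete illustration of Eqs (\ref{eq:revised-latent-sim}), (\ref{eq:revised-feature-sim}), (\ref{eq:dp-z}) and (\ref{eq:gloss}), a minimal NumPy sketch of the penalty might look as follows; the batch shapes, the choice of scale factor $s$, and the stub discriminator scores are our own assumptions rather than the paper's code:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def scaled_gram(v, s=1.0):
    """Eqs (3)/(4): sigmoid-scaled cosine-similarity Gram matrix.

    v: (m, d) array of m flattened vectors (latent codes or feature maps).
    """
    v = v / np.linalg.norm(v, axis=1, keepdims=True)  # l2-normalize rows
    return sigmoid(s * (v @ v.T))

def diversity_penalty(z, f, s=1.0):
    """Eq (5): mean over all pairs of G_f(i, j) / G_z(i, j)."""
    return np.mean(scaled_gram(f, s) / scaled_gram(z, s))

rng = np.random.default_rng(0)
m, dz, df = 8, 16, 64
z = rng.standard_normal((m, dz))                             # latent batch
f_collapsed = np.tile(rng.standard_normal((1, df)), (m, 1))  # all features identical
f_diverse = rng.standard_normal((m, df))                     # distinct features

# Collapsed features (one captured mode) are penalized more than diverse ones.
assert diversity_penalty(z, f_collapsed) > diversity_penalty(z, f_diverse)

# Eq (9): the generator maximizes  E[D(G(z))] - lambda * DP(z);
# with stand-in discriminator scores this is simply:
lam = 1.0
d_scores = rng.standard_normal(m)  # stub for D(G(z))
L_G = d_scores.mean() - lam * diversity_penalty(z, f_diverse)
```

In a real training loop, `f` would be the flattened feature maps that the discriminator produces for the generated batch, and the gradient of `L_G` would be taken with respect to the generator parameters.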
Then, as the discriminator is trained better and better, it enforces the generator to map the latent vectors around the peaks of the feature distribution, as in {\\tt\\small{PDPM ends}} in Figure \\ref{fig:mode_collapse}, and latent vectors mapped to saddles of the feature distribution receive a low score from the discriminator. Thus, when PDPM converges, most latent vectors are mapped to the surroundings of feature modes, while few vectors scatter around untypical feature centers.\n\n\\begin{algorithm}[h] \n\\label{algorithm-1}\n\\caption{\\small{GAN} with \\small{PDPM} training via mini-batch \\small{Adam}} \n\\begin{algorithmic}[1] \n\\FOR{total training \\emph{epochs}}\n\\FOR{\\emph{k} times} \n\\STATE Sample a batch of data from $p_z$ : $\\{z_1, z_2, ...,z_m\\}$; \n\\STATE Sample a batch of data from $p_r$ : $\\{x_1, x_2, ...,x_m\\}$; \n\\STATE Update the discriminator (gradient descent on $L_D$) :\n\\STATE \\qquad $\\theta_d\\leftarrow \\;\\theta_d-\\nabla_{\\theta_d}\\frac{1}{m}\\sum_{i=1}^{m}L_D(z_i, x_i)$\n\\ENDFOR \n\\STATE Sample a batch of data from $p_z$ : $\\{z_1, z_2, ...,z_m\\}$;\n\\STATE Update the generator (gradient ascent on $L_G$) :\n\\STATE \\quad \\;\\; \\qquad $\\theta_g\\leftarrow \\;\\theta_g+\\nabla_{\\theta_g}\\frac{1}{m}\\sum_{i=1}^{m}L_G(z_i)$\n\\ENDFOR \n\\end{algorithmic} \n\\end{algorithm}\n\n\n\\section{Experiments}\n\n\\begin{comment}\n\\begin{itemize}\n\\item Is the similarity measurement in feature space valid? How are the convergence performance and training stability of PDPM?\n\\item Does PDPM outperform other SOTA architectures both visually and quantitatively?\n\\item Does PDPM alleviate mode collapse effectively compared with other specially designed GANs?\n\\item How well does PDPM perform in other tasks such as image generation, image data augmentation, domain translation and so on? \n\\end{itemize}\n\\end{comment}\nIn this section, PDPM is evaluated from several different perspectives. 
First, in the {\\tt\\small{Basic Attribute Evaluation of PDPM}} part, the feasibility analysis of the similarity measurement and the convergence performance of PDPM are discussed in detail; next, in the {\\tt\\small{Ablation Study}} part, both visual and quantitative comparisons between PDPM and other typical architectures such as ALI~\\cite{27}, Unrolled GAN~\\cite{25}, VEEGAN~\\cite{26}, PacGAN~\\cite{28} and BourGAN~\\cite{29} on 2D synthetic datasets are given, which indicates the efficiency of PDPM. Further, in {\\tt\\small{Multi Task Applications}}, PDPM is applied to domain translation, image generation, image data augmentation and other tasks, and the results show that PDPM outperforms most mainstream GANs such as DCGAN \\cite{15}, WGAN\\_GP \\cite{7} and MSGAN \\cite{24}.\n\n\\subsection{Datasets}\nThe datasets used in our experiments are MNIST, Fashion-MNIST, CIFAR-10, CelebA and 2D synthetic datasets. For the first three datasets, only the training set is used, while no changes are made to the test set. In some tables, M, F-M and C10 are used to represent MNIST, Fashion-MNIST and CIFAR-10, respectively. \n\n\\subsection{Training Details}\nUnless otherwise specified, the $\\mathbf{Adam}$ optimizer with $\\beta_1$=0.5 and $\\beta_2$=0.9 is used for training GANs, and the $\\mathbf{SGD}$ optimizer with weight decay (1e-4) and momentum (0.9) is used for training ResNet. The initial learning rates are 1e-4 and 1e-3 for GANs and ResNet, respectively. 
The traditional data augmentation methods comprise {\\tt RandomHorizontalFlip, RandomVerticalFlip, RandomResizedCrop, RandomRotation} and {\\tt RandomColorJitter}.\n\n\\subsection{Basic Attribute Evaluation of PDPM}\nA similarity measurement must have two basic characteristics:\n\\begin{itemize}\n\\item The similarity value should be higher within classes than between classes.\n\\item Visually similar images should be close in feature space.\n\\end{itemize}\nPicking Fashion-MNIST images as samples, the similarity values among different categories are calculated via their corresponding feature maps extracted from the discriminator. The results shown in Figure~\\ref{fig:similarity-fm} confirm that the similarity value is higher within classes than between classes.\n\\begin{figure}[htpb]\n\\begin{center}\n \\includegraphics[width=0.71\\linewidth]{.\/figure\/similarity_fm_no_title}\n\\end{center}\n \\caption{\\textbf{Similarity Analysis on Fashion-MNIST.} \nTo avoid coincidental results, 5k images are sampled per class. The similarity is computed via Eq (\\ref{eq:revised-feature-sim}).} \n\\label{fig:similarity-fm}\n\\end{figure}\n\nFurther, a similar operation is performed within one specific class of Fashion-MNIST to verify the second characteristic stated above. The results, attached in Appendix Figure~\\ref{fig:fm-one-class}, confirm that visually similar images are also similar in feature space and vice versa. These statistical results verify the soundness of our similarity measurement.\n\nBesides, whether a GAN converges stably is vital, and thus PDPM is evaluated on MNIST, Fashion-MNIST, CIFAR-10 and CelebA, respectively. The GAN architectures are given in Appendix Table~\\ref{architecture}. Figure \\ref{fig:gloss} gives the convergence results of domain translation on CelebA; the detailed results for the other datasets are attached in Appendix Figure \\ref{fig:convergence}. 
In domain translation, StarGAN \\cite{30} is set as the baseline, and two groups with PDPM are set up for comparison. From the results shown in Figure \\ref{fig:gloss}, it is clear that PDPM accelerates the convergence of the generator significantly. \n\n\\begin{figure}[htpb]\n\\begin{center}\n \\includegraphics[width=0.45\\textwidth]{.\/figure\/gloss}\n\\end{center}\n \\caption{\\textbf{Loss of the generator.} The balance coefficient $\\lambda$ in PDPM is set to 1e-3 in StarGAN\\_PDPM 1 and 1e-4 in StarGAN\\_PDPM 2.}\n \\label{fig:gloss}\n\\end{figure}\n\nThis acceleration is achieved because PDPM captures accurate feature representations, which are vital in facial-expression transfer. Besides, in Figure \\ref{fig:animation}, the first column shows the original images and their corresponding facial masks; the following columns are results of facial-expression transfer and their corresponding attention masks. These attention masks should capture the changes between the transformed image and the original. It can be seen that PDPM generates much clearer facial attention masks with less background, which brings better and smoother changes of detail compared with the vanilla StarGAN. Moreover, StarGAN in Figure \\ref{fig:animation} cannot transfer the changes of the eyes to the new face image; this indicates that, when trained for the same number of epochs, the vanilla StarGAN converges much worse than PDPM.\n\n\n\\subsection{Ablation Study}\nIn this part, the effects of PDPM are examined in detail. 
First, in {\\tt\\small{Evaluation on Basic Datasets}}, both visual and quantitative results of mode collapse and its alleviation are given on MNIST, Fashion-MNIST and CIFAR-10; then, in {\\tt\\small{Evaluation on 2D Synthetic Datasets}}, typical GANs with and without PDPM are compared precisely.\n\n\\subsubsection{Evaluation on Basic Datasets}\n\\begin{figure*}[t]\n\t\\subfigure[WGAN\\_GP without PDPM on MNIST]{\n \\includegraphics[width=0.49\\textwidth]{.\/figure\/mode_collapse_mnist_samples}\n }\n \\subfigure[WGAN\\_GP with PDPM on MNIST]{\n \\includegraphics[width=0.49\\textwidth]{.\/figure\/mode_collapse_dp}\n }\n \\caption{\\textbf{Alleviation of mode collapse via PDPM.} (a) WGAN\\_GP without PDPM. (b) WGAN\\_GP with PDPM $\\lambda$=5. The value above each image pair indicates the similarity value of their latent vectors. It can be found that, in the GAN without PDPM, latent vectors with low similarity values can be mapped to similar images, while with PDPM they cannot.}\n \\label{fig:dp-collapse-sample}\n\\end{figure*}\n\nIn vanilla GANs, latent vectors with even very low similarity may be mapped to similar images; with PDPM, this phenomenon is alleviated since such mappings result in a higher loss. That is, PDPM makes similar fake images have corresponding latent vectors with higher similarity. Using the method shown in Appendix Figure \\ref{fig:mode-collapse-sample}, the similar images and their corresponding latent vectors can be obtained simultaneously from our trained generator. With these fake images and their latent vectors, the similarity values can be calculated via Eqs (\\ref{eq:revised-latent-sim}) and (\\ref{eq:revised-feature-sim}). Some of these results are shown in Figure \\ref{fig:dp-collapse-sample}; the others are attached in the Appendix. These results indicate that PDPM prevents the generator from mapping latent vectors with low similarity to similar fake images. 
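The exact forms of Eqs (\\ref{eq:revised-latent-sim}) and (\\ref{eq:revised-feature-sim}) are given earlier in the paper; as a hedged illustration only, assuming they behave like cosine-style similarities on flattened latent vectors and discriminator feature maps, the pairwise computation can be sketched as:

```python
import numpy as np

def pairwise_cosine_similarity(vectors):
    """Pairwise cosine similarity of flattened vectors.

    Hedged sketch: the paper's Eqs (revised-latent-sim) and
    (revised-feature-sim) are assumed here to be cosine-style
    similarities on flattened latent vectors / discriminator
    feature maps; the exact normalisation in the paper may differ.
    """
    v = np.asarray(vectors, dtype=float).reshape(len(vectors), -1)
    # Normalise each row to unit length (guard against zero vectors).
    v = v / np.clip(np.linalg.norm(v, axis=1, keepdims=True), 1e-12, None)
    return v @ v.T
```

With latent vectors and feature maps collected from the trained generator and discriminator, the off-diagonal entries of this matrix would then be compared within and between classes.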
\n\n\\begin{table}[htpb]\n\\setlength{\\abovecaptionskip}{0.2cm}\n\\caption{\\textbf{Statistical results of the diversity penalty module.} Each entry reports the MNIST \/\/ Fashion-MNIST (M \/\/ FM) similarity values.}\n\\begin{tabular}{c|l|lll}\n\\hline\n\\multicolumn{2}{l|}{\\small{Dataset}} & \\small{WGAN\\_GP} & \\small{PDPM $\\lambda$=5} & \\small{PDPM $\\lambda$=10} \\\\ \\hline\n\\multirow{10}{*}{\\small{M \/\/ FM}} & 1 & 0.68 \/\/ 0.65 & 0.78 \/\/ 0.82 & 0.82 \/\/ 0.84 \\\\\n & 2 & 0.67 \/\/ 0.63 & 0.66 \/\/ 0.77 & 0.65 \/\/ 0.85 \\\\\n & 3 & 0.64 \/\/ 0.63 & 0.77 \/\/ 0.82 & 0.77 \/\/ 0.89 \\\\\n & 4 & 0.69 \/\/ 0.61 & 0.80 \/\/ 0.83 & 0.78 \/\/ 0.81 \\\\\n & 5 & 0.64 \/\/ 0.66 & 0.78 \/\/ 0.84 & 0.77 \/\/ 0.81 \\\\\n & 6 & 0.68 \/\/ 0.64 & 0.78 \/\/ 0.83 & 0.78 \/\/ 0.81 \\\\\n & 7 & 0.64 \/\/ 0.63 & 0.75 \/\/ 0.84 & 0.77 \/\/ 0.86 \\\\\n & 8 & 0.61 \/\/ 0.66 & 0.74 \/\/ 0.86 & 0.75 \/\/ 0.87 \\\\\n & 9 & 0.67 \/\/ 0.64 & 0.80 \/\/ 0.79 & 0.80 \/\/ 0.83 \\\\\n & 10 & 0.64 \/\/ 0.65 & 0.76 \/\/ 0.84 & 0.75 \/\/ 0.82 \\\\ \\hline\n\\end{tabular}\n\\label{fig:tab-statistics}\n\\end{table}\n\nFurther, to reduce the effect of chance, 5k similar fake image pairs per class are generated by the generator with and without PDPM, and Eq (\\ref{eq:revised-latent-sim}) is used to calculate the similarity between latent vectors. In Table \\ref{fig:tab-statistics}, each value indicates the similarity of a latent vector pair whose corresponding fake images are similar under the MSE metric. 
It can be seen that PDPM reduces the chance of two different latent vectors being mapped to similar fake images.\n\n\n\n\\begin{comment}\n\\begin{figure}[htpb]\n\\begin{center}\n \\includegraphics[width=0.45\\textwidth]{.\/figure\/sta}\n\\end{center}\n \\caption{\\textbf{Statistic result of diversity penalty.} The diversity penalty significantly improves the similarity value of similar images' corresponding latent vectors, and that means, it reduces the chance of two very different latent vectors mapped to similar images.$\\lambda_2$ indicates the coefficient of diversity penalty term.}\n \\label{fig:dp-statistics}\n\\end{figure}\n\\end{comment}\n\nIn GANs, IS~\\cite{18,21} and FID~\\cite{19} are commonly accepted metrics for evaluating the quality and diversity of fake images. On the datasets stated above, 5k fake images per class are generated using the generator with and without PDPM to calculate IS and FID. The parameter $n_{splits}$ of IS is set to 10. Table \\ref{tab:score} shows the details. 
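For reference, IS follows the standard definition $\\mathrm{IS}=\\exp(\\mathbb{E}_x[\\mathrm{KL}(p(y|x)\\,\\|\\,p(y))])$, averaged over $n_{splits}$ splits. A minimal sketch, assuming the softmax outputs $p(y|x)$ of a pretrained classifier have already been precomputed for the fake images (an illustration, not the authors' evaluation code):

```python
import numpy as np

def inception_score(probs, n_splits=10):
    """Standard Inception Score from precomputed class probabilities.

    probs: array of shape (num_images, num_classes); each row is the
    softmax output p(y|x) of a pretrained classifier (assumed to be
    computed elsewhere, e.g. with an Inception network).
    Returns the mean and standard deviation over n_splits splits of
    exp(E_x[KL(p(y|x) || p(y))]), as is conventional.
    """
    probs = np.asarray(probs, dtype=float)
    scores = []
    for chunk in np.array_split(probs, n_splits):
        p_y = chunk.mean(axis=0, keepdims=True)  # marginal class distribution
        kl = (chunk * (np.log(chunk + 1e-12) - np.log(p_y + 1e-12))).sum(axis=1)
        scores.append(np.exp(kl.mean()))
    return float(np.mean(scores)), float(np.std(scores))
```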
A greater IS value and a lower FID value indicate higher quality and diversity of the generated data.\n\n\\begin{table}[htpb]\n\\setlength{\\abovecaptionskip}{0.2cm}\n\\begin{spacing}{1.2}\n\\caption{\\textbf{Quantitative results of IS and FID.}}\n\\label{tab:score}\n\\begin{tabular}{l|l|lll}\n\\hline\n\\multicolumn{2}{l|}{\\small{Dataset}} & \\small{WGAN\\_GP} & \\small{PDPM $\\lambda$=5} & \\small{PDPM $\\lambda$=10} \\\\ \\hline\n\\multirow{2}{*}{\\small{M}} & $\\uparrow$IS & 2.18$\\pm$.003 & 2.19$\\pm$.005 & \\textbf{2.31$\\pm$.005} \\\\\n & $\\downarrow$FID & 7.36$\\pm$.012 & 6.43$\\pm$.009 & \\textbf{5.88$\\pm$.011} \\\\ \\hline\n\\multirow{2}{*}{\\small{FM}} & $\\uparrow$IS & 4.28$\\pm$.004 & \\textbf{4.38$\\pm$.006} & 4.36$\\pm$.005 \\\\\n & $\\downarrow$FID & \\textbf{15.68$\\pm$.007} & 15.97$\\pm$.013 & 15.72$\\pm$.011 \\\\ \\hline\n\\multirow{2}{*}{\\small{C10}} & $\\uparrow$IS & 7.35$\\pm$.007 & 7.52$\\pm$.005 & \\textbf{7.83$\\pm$.007} \\\\\n & $\\downarrow$FID & 29.84$\\pm$.017 & \\textbf{28.45$\\pm$.015} & 29.03$\\pm$.013 \\\\ \\hline\n\\multirow{2}{*}{\\small{CelebA}} & $\\uparrow$IS & 2.78$\\pm$.002 & 2.91$\\pm$.005 & \\textbf{2.94$\\pm$.002} \\\\\n & $\\downarrow$FID & 33.48$\\pm$.002 & 25.45$\\pm$.015 & \\textbf{24.86$\\pm$.002} \\\\ \\hline\n\\end{tabular}\n\\end{spacing}\n\\end{table}\n\n\\subsubsection{Evaluation on 2D Synthetic Datasets}\nOn synthetic datasets, mode collapse can be evaluated quantitatively and accurately because the data distribution and its modes are known. Following prior works, GANs with and without PDPM are evaluated on \\textbf{2D Ring} and \\textbf{2D Grid}. The 2D Ring dataset contains eight 2D Gaussian distributions whose centers are equally spaced on a ring. 2D Grid contains twenty-five 2D Gaussian distributions whose centers lie on the grid points of a square. For comparison, PDPM is applied to the vanilla GAN, Unrolled GAN and BourGAN. 
The number of modes captured by the generator and the percentage of generated points that are high quality (h-q) are used as metrics. As in \\cite{26}, a sample is counted as high quality if it is within three standard deviations of the nearest mode, and the number of modes captured by the generator is the number of Gaussian centers which are nearest to at least one high-quality sample.\n\n\\begin{table}[htpb]\n\\begin{spacing}{1.15}\n\\setlength{\\abovecaptionskip}{0.2cm}\n\\caption{\\textbf{Quantitative results on 2D Synthetic Dataset.}}\n\\label{tab:synthetic_table}\n\\begin{tabular}{lllll}\n\\hline\n & \\multicolumn{2}{l}{2D Ring} & \\multicolumn{2}{l}{2D Grid} \\\\\n & modes & h-q & modes & h-q \\\\ \\hline\nGAN & 1.0 & $\\times$ & 17.7 & 82.3 \\\\\nALI & 2.8 & 0.13 & 12.8 & 1.6 \\\\\nUnrolled GAN & 7.6 & 87.97 & 14.9 & 4.89 \\\\\nVEEGAN & 8.0 & 86.77 & 24.4 & 77.16 \\\\\nPacGAN & 7.8 & 98.21 & 24.3 & 79.46 \\\\\nBourGAN & 8.0 & 99.76 & 25.0 & 95.91 \\\\ \\hline\nGAN\\_PDPM & 2.0 & $\\times$ & 21.3 & 80.8 \\\\\nUnrolled\\_PDPM & 8.0 & 99.36 & 21.7 & 75.21 \\\\\nBourGAN\\_PDPM & 8.0 & 99.89 & 25.0 & 95.99 \\\\ \\hline\n\\end{tabular}\n\\end{spacing}\n\\end{table}\n\n\\begin{figure*}[htpb]\n\\begin{center}\n \\includegraphics[width=.98\\linewidth]{.\/figure\/synthetic.png}\n\\end{center}\n \\caption{\\textbf{Visual results on Synthetic Dataset.} From the first two columns, it can be seen that PDPM helps the vanilla GAN capture more modes; in particular, on 2D Grid, the GAN with PDPM captures four more modes than its vanilla counterpart.}\n\\label{fig:synthetic}\n\\end{figure*}\n\nFrom the visual results in Figure~\\ref{fig:synthetic} and the quantitative results in Table~\\ref{tab:synthetic_table}, it can be seen that the GAN with PDPM captures more modes of the data distribution, and the vanilla GAN with PDPM outperforms ALI and Unrolled GAN on the 2D Grid dataset while being close to VEEGAN and PacGAN. 
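The two metrics just described depend only on the samples, the known Gaussian centers and their standard deviation, so they can be sketched directly (function and variable names are ours, for illustration):

```python
import numpy as np

def mode_metrics(samples, centers, std):
    """Metrics for 2D Ring / 2D Grid as described in the text.

    A sample is high quality (h-q) if it lies within three standard
    deviations of its nearest Gaussian center; a mode is counted as
    captured if it is the nearest center of at least one h-q sample.
    Returns (number of captured modes, fraction of h-q samples).
    """
    samples = np.asarray(samples, dtype=float)
    centers = np.asarray(centers, dtype=float)
    # Distance from every sample to every mode center.
    dists = np.linalg.norm(samples[:, None, :] - centers[None, :, :], axis=2)
    nearest = dists.argmin(axis=1)
    hq = dists[np.arange(len(samples)), nearest] <= 3.0 * std
    return len(np.unique(nearest[hq])), float(hq.mean())
```

For the 2D Ring, `centers` would be the eight equally spaced points on the circle; the reported h-q column is this fraction expressed as a percentage.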
Besides, from BourGAN and BourGAN\\_PDPM in Figure \\ref{fig:synthetic}, it is clear that the group with PDPM converges to the mode centers better than its vanilla counterpart. The results of Unrolled GAN, VEEGAN and PacGAN are taken from \\cite{29}; no official code for VEEGAN or PacGAN was available when this part was completed, so PDPM is not applied to these GANs. \n\n\n\\subsection{Multi-Task Applications}\nIn this part, PDPM is applied to image data augmentation, image generation and domain translation; the results in these tasks all suggest that GANs with PDPM outperform their vanilla counterparts.\n\n\\subsubsection{Image Generation on CelebA}\nThe GANs are split into two groups: the DCGAN series with \\{DCGAN, DCGAN\\_MS, DCGAN\\_PDPM\\} and the WGAN\\_GP series with \\{WGAN\\_GP, WGAN\\_GP\\_MS, WGAN\\_GP\\_PDPM\\}. Here MS represents the mode seeking regularization proposed in MSGAN. The coefficient $\\lambda_{ms}$ is set to 1, and the penalty coefficient $\\lambda$ of PDPM shown in Eq (\\ref{eq:total-loss}) is set to 10. $\\mathbf{Adam}$ with $\\beta_1$=0.5 and $\\beta_2$=0.9 is used as the optimizer, and the learning rate is set to 1e-4. All GANs are trained with a batch size of 128 for 100 epochs in total. The architecture details are attached in Appendix Table \\ref{gans-celeba}. Figure \\ref{fig:men-women} shows the results of linear interpolation in latent space, and Table \\ref{is-fid-celeba} gives the quantitative results of IS and FID. In Figure \\ref{fig:men-women}, the MS group generates many noisy pixels, and the transition between images is not smooth since the man with glasses only appears in the last two images. 
By contrast, PDPM can interpolate between two images without noisy pixels, and the transition is much smoother.\n\n\\begin{figure}[htpb]\n\\begin{center}\n \\includegraphics[width=0.45\\textwidth]{.\/figure\/man_mark}\n\\end{center}\n \\caption{\\textbf{Linear interpolation in latent space.} From (1) to (6) are WGAN\\_GP, WGAN\\_GP\\_MS, WGAN\\_GP\\_PDPM, DCGAN, DCGAN\\_MS and DCGAN\\_PDPM. The MS group (blue box) generates some noisy pixels while the PDPM group (purple box) does not.}\n \\label{fig:men-women}\n\\end{figure}\n\nFrom the quantitative results shown in Table \\ref{is-fid-celeba}, it can be seen that PDPM also achieves a higher IS value and a lower FID value compared with DCGAN, WGAN\\_GP and MSGAN.\n\n\\begin{table}[htpb]\n\\setlength{\\abovecaptionskip}{0.2cm}\n\\begin{spacing}{1.2}\n\\caption{\\textbf{IS and FID results on CelebA.}} \n\\label{is-fid-celeba}\n\\begin{tabular}{lllll}\n\\hline\n\\multicolumn{2}{l}{} &\\small{DCGAN} &\\small{DCGAN\\_MS} &\\small{DCGAN\\_PDPM} \\\\ \\cline{3-5} \n\\multicolumn{2}{l}{$\\uparrow$\\small{IS}} &2.113$\\pm$0.014 &2.360$\\pm$0.006 & \\textbf{2.379$\\pm$0.013} \\\\\n\\multicolumn{2}{l}{$\\downarrow$\\small{FID}} &24.23$\\pm$0.150 &23.51$\\pm$0.090 & \\textbf{21.76$\\pm$0.110} \\\\ \\hline\n\\multicolumn{2}{l}{} &\\small{WGAN\\_GP} & \\small{WGAN\\_GP\\_MS} & \\small{WGAN\\_GP\\_PDPM} \\\\ \\cline{3-5} \n\\multicolumn{2}{l}{$\\uparrow$\\small{IS}} &2.775$\\pm$0.018 & 2.927$\\pm$0.016 & \\textbf{2.941$\\pm$0.021} \\\\\n\\multicolumn{2}{l}{$\\downarrow$\\small{FID}} &33.48$\\pm$0.011 & 24.86$\\pm$0.020 & \\textbf{24.18$\\pm$0.031}\\\\ \\hline\n\\end{tabular}\n\\end{spacing}\n\\end{table}\n\n\n\\subsubsection{Image Data Augmentation with PDPM}\nGANs with PDPM are used for augmenting data on MNIST, Fashion-MNIST and CIFAR-10. The fake images serve as an auxiliary training set. ResNet20, proposed in \\cite{20}, is adopted as the classification network. 
The $\\mathbf{SGD}$ optimizer is used with learning rate decay.\nAccuracy results on the test set are shown in Table~\\ref{acc-table}; training details are attached in Appendix Figure~\\ref{fig:resnet-acc}.\n\n\\begin{table}[htpb]\n\\setlength{\\abovecaptionskip}{0.2cm}\n\\begin{spacing}{1.2}\n\\caption{\\textbf{Testing Accuracy on Several Datasets.}}\n\\label{acc-table}\n\\begin{tabular}{l|lll}\n\\hline\nTesting Acc & MNIST & Fashion-MNIST & CIFAR-10 \\\\ \\hline\nBaseline & 0.9897 & 0.9257 & $\\times$ \\\\\nDA & $\\times$ & $\\times$ & 0.9172 \\\\\nWGAN\\_GP & 0.9951 & 0.9394 & 0.9184 \\\\\nWGAN\\_GP\\_MS & 0.9961 & 0.9430 & 0.9200 \\\\\nPDPM\\_1 $\\lambda=5$ & \\textbf{0.9975} &0.9465 & \\textbf{0.9239} \\\\\nPDPM\\_2 $\\lambda=10$ & 0.9969 & \\textbf{0.9527} & 0.9212 \\\\ \\hline\n\\end{tabular}\nDA : Traditional Data Augmentation\n\\end{spacing}\n\\end{table}\n\nCompared with WGAN\\_GP, PDPM gains improvements of 0.24\\%, 1.33\\% and 0.55\\% on MNIST, Fashion-MNIST and CIFAR-10, respectively. More details about the training process are given in Appendix Figure~\\ref{fig:resnet-acc}.\n\n\n\\section{Conclusion}\nIn this paper, a pluggable block called the diversity penalty module (PDPM) has been proposed to alleviate mode collapse in GANs. This penalty term enforces the similarity between feature pairs to be consistent with that between the corresponding latent vector pairs. The advantage of our proposed method is its generalization ability: it can be combined with almost all GANs across different architectures and vision tasks.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIf $X$ and $Y$ are finite $2$-complexes with $\\pi_1(X) \\cong \\pi_1(Y)$, then it is well-known that $X \\vee aS^2 \\simeq Y \\vee bS^2$ for some $a,b \\ge 0$ \\cite{Wh39}.\nThe set of homotopy types of finite $2$-complexes $X$ with fixed $\\pi_1(X) \\cong G$ can therefore be viewed as a tree (an acyclic graph) with edges between each $X$ and $X \\vee S^2$. 
This tree is graded by $\\chi(X)$, which has a minimum value $\\chi_{\\min}(G)$ and satisfies $\\chi(X \\vee S^2) = \\chi(X)+1$. \nEach $X$ is also homotopy equivalent to the presentation complex $X_{\\mathcal{P}}$ of some presentation $\\mathcal{P}$ of $G$.\n\nIn the 1960s-70s, the structure of this tree was investigated in a series of papers by Cockcroft-Swan \\cite{CS61}, Dyer-Sieradski \\cite{DS73,DS75,SD79} and Dyer (see, for example, \\cite{Dy78,Dy79a,Dy79b}).\nHowever, it was not until 1976 that Dunwoody \\cite{Du76} and Metzler \\cite{Me76} independently found examples of finite $2$-complexes $X$, $Y$ such that $X \\vee S^2 \\simeq Y \\vee S^2$ but $X \\not \\simeq Y$.\nConversely, Browning \\cite{Br78} showed that, if $G$ is finite, then $\\chi(X) = \\chi(Y) >\\chi_{\\min}(G)$ implies $X \\simeq Y$ (see also \\cite{HK93}). \nThe tree of finite $2$-complexes $X$ with $\\pi_1(X) \\cong G$ finite is therefore of the form given in \\cref{figure:diagrams}a.\nThis raises the following question:\n\n\\begin{problem} \\label{problem:2-complexes}\nFor which $k$ do there exist homotopically distinct finite $2$-complexes $X_1$, $X_2$ such that $\\pi_1(X_i) \\cong G$ and $\\chi(X_i) = k + \\chi_{\\min}(G)$? {\\normalfont (see \\cref{figure:diagrams}b)}\n\\setcounter{thm}{\\value{thm}-1}\n\\end{problem}\n\nThere are two main approaches to this problem. The first is in the 1979 Problems List edited by Wall \\cite[Problem D5]{Wa79} and the second is from Dyer \\cite[Problem C]{Dy79a}.\n\\begin{clist}{(1)}\n\\item\nFind $X_1$ such that $\\chi(X_1) = k + \\chi_{\\min}(G)$ and $X_1 \\not \\simeq Y \\vee S^2$ for all finite $2$-complexes $Y$. We can then take $X_2 = X \\vee kS^2$ where $\\chi(X) = \\chi_{\\min}(G)$. 
\n\\item\nFind $X_1$, $X_2$ such that $X_1 \\vee (k+1)S^2 \\simeq X_2 \\vee (k+1)S^2$ but $X_1 \\vee kS^2 \\not \\simeq X_2 \\vee kS^2$.\n\\end{clist}\n\nThe examples of Metzler and Dunwoody showed that $k=0,1$ are possible respectively, though no examples were found for $k \\ge 2$.\nThis is, in part, owing to difficulties in related problems in algebra. For example, Dunwoody's construction involves realising non-free stably free $\\mathbb{Z} T$-modules of rank $k = 1$ as $\\pi_2(X)$ for $X$ a finite $2$-complex with $\\pi_1(X) \\cong T$ the trefoil group. However, there has previously been no known example of a stably free $\\mathbb{Z} G$-module of rank at least two \\cite[p623]{Jo12b}. \n\nOur main algebraic result is the construction of non-free stably free $\\mathbb{Z} G$-modules of arbitrary rank $k$. \nIn fact, we show that there are infinitely many stably free $\\mathbb{Z} G$-modules which are distinct even up to $\\Aut(G)$-isomorphism (see \\cref{subsection:Aut(G)}). Let $\\cd(G)$ denote the cohomological dimension of $G$.\n\n\\begingroup\n\\renewcommand\\thethm{\\Alph{thm}}\n\\begin{thm} \\label{thm:main-SF}\nFor all $k \\ge 1$, there exists a finitely presented group $G$ and infinitely many stably free $\\mathbb{Z} G$-modules of rank $k$ which are distinct up to $\\Aut(G)$-isomorphism. Furthermore, for all $d \\ge 2$, we can assume that $\\cd(G) = d$.\n\\end{thm}\n\\endgroup\n\nOur simplest example is when $G = \\ast_{i=1}^k T$ is a free product of trefoil groups $T$, which has $\\cd(G) = 2$. \nHere the case $k=1$ was shown by Berridge-Dunwoody \\cite{BD79}. 
For $k \\ge 2$, the main idea of our proof will be to use Bergman's theorem on modules over coproducts of rings \\cite{Be74} in the case of $\\mathbb{F}[\\ast_{i=1}^k T]$-modules for $\\mathbb{F}$ a field.\nThis strategy was proposed by Evans in \\cite{Ev99}, though an example was never given.\n\n\\begin{figure}[t] \\vspace{-4mm} \n\\begin{center}\n\\begin{tabular}{ccccc}\t\n\\begin{tabular}{l}\n\\begin{tikzpicture}\n\\draw[fill=black] (2,0) circle (2pt);\n\\draw[fill=black] (3,0) circle (2pt);\n\\draw[fill=black] (2,1) circle (2pt);\n\\draw[fill=black] (2,2) circle (2pt);\n\\draw[fill=black] (2,3) circle (2pt);\n\\node at (2,3.6) {$\\vdots$};\n\\node at (1.2,3) {(a)};\n\n\\end{tabular}\n&&&&\t\n\\begin{tabular}{l}\n\\begin{tikzpicture}\n\\draw[fill=black] (2,-1) circle (2pt);\n\\draw[fill=black] (2,0) circle (2pt);\n\\draw[fill=black] (1,1) circle (2pt);\n\\draw[fill=black] (2,1) circle (2pt);\n\\draw[fill=black] (3,1) circle (2pt);\n\\draw[fill=black] (2,2) circle (2pt);\n\n\\node at (2,2.6) {$\\vdots$};\n\\draw[black] (2,0) node[right]{\n$\\left.\n \\begin{array}{ll}\n \\\\ \\\\ \\\\\n \\end{array}\n\\right \\} k$};\n\\node at (1.2,2) {(b)};\n\n\\end{tabular}\n\\end{tabular}\n\\end{center}\n\\caption{Branching phenomena in the tree of finite $2$-complexes $X$ with $\\pi_1(X) \\cong G$. 
The vertical height is the Euler characteristic.} \\label{figure:diagrams}\n\\vspace{-2mm}\n\\end{figure}\n\nNow recall that \\cref{problem:2-complexes} has a natural analogue in higher dimensions.\nFor $n \\ge 2$, a \\textit{$(G,n)$-complex} is an $n$-complex $X$ with $\\pi_1(X) \\cong G$ and such that $\\widetilde X$ is $(n-1)$-connected.\nIf $X$, $Y$ are finite $(G,n)$-complexes, then $X \\vee aS^n \\simeq Y \\vee bS^n$ for some $a,b \\ge 0$, and $\\chi(X)$ has a minimal value $\\chi_{\\min}(G,n)$ among finite $(G,n)$-complexes.\nThe natural extension of \\cref{problem:2-complexes} to $(G,n)$-complexes was considered by Dyer in \\cite{Dy79a,Dy79b}.\nHowever, there were still no known examples found for $k \\ge 2$.\n\nOur main topological result is the following, which gives a complete resolution of \\cref{problem:2-complexes} and its generalisation to $(G,n)$-complexes. \nThis corresponds to the first approach to \\cref{problem:2-complexes} and so answers both \\cite[Problem D5]{Wa79} and the more general question of Dyer \\cite[p378]{Dy79b} in the affirmative. \nIn fact, we will show that infinitely many homotopically distinct $X_i$ exist for each $n$ and $k$.\n\n\\begingroup\n\\renewcommand\\thethm{\\Alph{thm}}\n\\begin{thm} \\label{thm:main}\nFor all $n \\ge 2$ and $k \\ge 0$, there exists a finitely presented group $G$ and infinitely many homotopically distinct finite $(G,n)$-complexes $X_i$ such that $\\chi(X_i) = k + \\chi_{\\min}(G,n)$ and $X_i \\not \\simeq Y_i \\vee S^n$ for any finite $(G,n)$-complex $Y_i$.\n\\end{thm}\n\\endgroup\n\nFor $n = 2$, our simplest example is when $k \\ge 1$ and $G = \\ast_{i=1}^k T$. 
\nThe $X_i$ will be some infinite collection of finite $2$-complexes of the form $k X_{\\mathcal{P}_i} = \\bigvee_{j=1}^k X_{\\mathcal{P}_i}$ where\n\\[ \\mathcal{P}_i = \\langle x,y,a,b \\mid x^2=y^3, a^2=b^3, x^{2i+1}=a^{2i+1}, y^{3i+1}=b^{3i+1} \\rangle \\]\nare the presentations of Harlander-Jensen \\cite{HJ06a}.\nIf $k=0$, we will instead use the finite $2$-complexes $X_i$ constructed by Lustig in \\cite{Lu93}.\n\nWe will also use \\cref{thm:main} to show that syzygies $\\Omega_n^G(\\mathbb{Z})$ can have branching at all levels $k \\ge 0$ (\\cref{cor:syzygies}). This gives some response to the remark made by Johnson \\cite[p.xiii]{Jo12a} that very little is known about the branching of $\\Omega_n^G(\\mathbb{Z})$ outside of the finite case where branching occurs only at the minimal level and level one.\n\nNow recall that, if $\\mathbb{Z} G$ is Noetherian of Krull dimension $d_G$ and $d = d_G +1$, then stably free $\\mathbb{Z} G$-modules of rank $\\ge d$ are free and, if $X_1$, $X_2$ are finite $(G,n)$-complexes with $\\chi(X_1) = \\chi(X_2) \\ge d + \\chi_{\\min}(G)$, then $X_1 \\simeq X_2$ (see \\cite{Ha19} for the case $n=2$). \nIf $G$ is polycyclic-by-finite, then $\\mathbb{Z} G$ is Noetherian and it is conjectured that these are the only such groups (see \\cite[p328]{KL19} for recent work). 
\nIf $\\mathbb{Z} G$ is not Noetherian, then such a bound $d$ can often still be found; for example, if $G$ is a free group, then we can take $d=0$ by results of Bass \\cite{Ba64} and Wall \\cite{Wa65}.\nHowever, there has been no known example of a group $G$ for which no such bound $d$ exists.\n\nOur next result will be to give an example of a group $G$ for which there exists non-free stably free $\\mathbb{Z} G$-modules of arbitrary rank.\nOur simplest example is $G = \\ast_{i=1}^\\infty T$.\n\n\\begingroup\n\\renewcommand\\thethm{\\Alph{thm}}\n\\begin{thm} \\label{thm:main-SF-further}\nThere exists a group $G$ such that, for all $k \\ge 1$, there are infinitely many stably free $\\mathbb{Z} G$-modules of rank $k$ which are distinct up to $\\Aut(G)$-isomorphism.\nFurthermore, for all $d \\ge 2$, we can assume that $\\cd(G) = d$.\n\\end{thm}\n\\endgroup\n\nIn all our examples, $G$ is not finitely generated and so there does not exist a \\textit{finite} $(G,n)$-complex and $\\chi(X)$ is not well-defined. \nWe nonetheless show the following.\n\n\\begingroup\n\\renewcommand\\thethm{\\Alph{thm}}\n\\begin{thm} \\label{thm:main-further}\nFor all $n \\ge 2$, there exists a group $G$ and an aspherical $(G,n)$-complex $Y$ such that, for all $k \\ge 1$, there are infinitely many homotopically distinct $(G,n)$-complexes $X_i$ with $X_i \\vee S^n \\simeq Y \\vee (k+1)S^n$.\n\\end{thm}\n\\endgroup\n\nFinally, we consider the question of whether or not the techniques used to prove Theorems \\ref{thm:main-SF} and \\ref{thm:main} can be applied to all groups of the form $G = \\ast_{i=1}^k G_i$ and finite $2$-complexes $X$ with $\\pi_1(X) \\cong G$.\nFor a field $\\mathbb{F}$, we show that $\\pi_2(X) \\otimes \\mathbb{F}$ is uniquely a direct sum of induced $\\mathbb{F} G_i$-modules provided it has no direct summand of the form $\\mathbb{F} G$ (Propositions \\ref{prop:existence}, \\ref{prop:uniqueness}). 
This makes it possible to distinguish finite $2$-complexes $X_1$, $X_2$ by distinguishing the component $\\mathbb{F} G_i$-modules. This works in the case $G_i = T$ but cannot work in many other cases such as if the $G_i$ are finite.\n\nIn contrast, we give examples to show that $\\pi_2(X)$ need not be a direct sum of induced $\\mathbb{Z} G_i$-modules (\\cref{thm:non-existence}) and that, if so, this decomposition need not be unique (\\cref{thm:non-uniqueness}). \nThis limits the potential to use the methods presented in this article to find a general cancellation theorem for finite $2$-complexes.\n\nWe will conclude this article with a list of five open problems. \n\n\\section{Preliminaries on $R G$-modules}\n\\label{section:RG-modules}\n\nLet $G$ be a group, let $R$ be a ring and let $R G$ denote the group ring of $G$ with coefficients in $R$. We will now develop the necessary preliminaries on $R G$-modules.\n\n\\subsection{Stably free $R G$-modules}\n\\label{subsection:stably-finite}\n\nFor a ring $R$, a finitely generated (left) $R$-module $S$ is \\textit{stably free} if there exists $n$, $m$ such that $S \\oplus R^n \\cong R^m$.\nIn order to have a well-defined notion of rank, certain conditions on $R$ must be imposed:\n\\begin{clist}{(I)}\n\\item For all $n, m$, $R^n \\cong R^m$ implies $n=m$ (\\textit{invariant basis number property})\n\\item For all $n, m$, $S \\oplus R^n \\cong R^m$ implies $n \\le m$ (\\textit{surjective rank property})\n\\item For all $n$, $S \\oplus R^n \\cong R^n$ implies $S=0$ (\\textit{stable finiteness property})\n\\end{clist}\nSuppose $R$ satisfies (I). If $S$ is a stably free $R$-module, then we can define the \\textit{rank} of $S$ to be $\\rank (S) = m-n$ for any $n$, $m$ such that $S \\oplus R^n \\cong R^m$. If $R$ satisfies (II), then $\\rank(S) \\ge 0$ for all $S$. If $R$ satisfies (III), then $S \\ne 0$ implies that $\\rank(S) \\ge 1$.\n\nIt is straightforward to see that (III) $\\Rightarrow$ (II) $\\Rightarrow$ (I). 
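For the reader's convenience, the chain of implications can be checked directly (for $R \\ne 0$):

```latex
\noindent (III) $\Rightarrow$ (II): if $S \oplus R^n \cong R^m$ with
$n > m$, then $(S \oplus R^{n-m}) \oplus R^m \cong R^m$, so (III) gives
$S \oplus R^{n-m} = 0$, forcing $R^{n-m} = 0$, which is impossible for
$R \ne 0$; hence $n \le m$.
(II) $\Rightarrow$ (I): if $R^n \cong R^m$, then taking $S = 0$ in (II)
in both directions gives $n \le m$ and $m \le n$, so $n = m$.
```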
Conversely, examples were given by Cohn \\cite{Co66} to show that $(R \\ne 0)$ $\\not\\Rightarrow$ (I) $\\not\\Rightarrow$ (II) $\\not\\Rightarrow$ (III). Rings which satisfy (III) are also known as weakly finite and satisfy the equivalent condition that, for all $n$, one-sided inverses in $M_n(R)$ are two-sided, i.e. $uv=1$ if and only if $vu=1$.\n\nWe would now like to determine when conditions (I)-(III) hold for $R G$.\nThe following is a consequence of \\cite[Proposition 2.4, Theorem 2.6]{Co66}.\n\n\\begin{prop}\nLet $R$ be a commutative ring and let $G$ be a group. Then $R G$ has the surjective rank property, and hence also the invariant basis number property.\n\\end{prop}\n\nIt remains to determine when $R G$ is stably finite.\nIt was shown by Kaplansky \\cite{Ka72} that, if $\\mathbb{F}$ is a field of characteristic $0$, then $\\mathbb{F} G$ is stably finite for all groups $G$. This implies that $\\mathbb{Z} G$ is stably finite since $\\mathbb{Z} G \\subseteq \\mathbb{Q} G$.\nKaplansky conjectured that this holds for all fields $\\mathbb{F}$, but this remains open.\n\nThe best result for general fields $\\mathbb{F}$ is the following theorem of Elek-Szab\\'{o} \\cite{ES04}, which built upon earlier work of Ara, O'Meara and Perera \\cite[Theorem 3.4]{AOP02}. \n\n\\begin{thm} \\label{thm:ES}\nLet $\\mathbb{F}$ be a field and let $G$ be a sofic group. Then $\\mathbb{F} G$ is stably finite.\n\\end{thm}\n\nFor a definition of sofic, see \\cite[p430]{ES04}. For our purposes, it suffices to note that $G=1$ is sofic and that sofic groups are closed under direct\/free products, direct\/inverse limits, subgroups, and that the extension of an amenable group (see \\cite[p227]{AOP02}) by a sofic group is sofic. There is no known example of a non-sofic group.\n\nAll groups which will be considered in this article are sofic. 
We can therefore assume, when needed, that non-trivial stably free $\\mathbb{F} G$-modules have rank $\\ge 1$.\n\n\\subsection{$R G$-modules over free products}\n\\label{subsection:Bergman}\n\nFix groups $G_1, \\cdots, G_n$, let $G = \\ast_{k=1}^n G_k$ denote the free product and let $\\iota_k : G_k \\hookrightarrow G$ denote the inclusion map for each $k$.\n\nLet $R$ be a ring. If $M_k$ is an $R G_k$-module, then ${\\iota_k}_\\#(M_k) = R G \\otimes_{R G_k} M_k$ is an $R G$-module. We say that an $R G$-module $M$ is \\textit{induced} if there exists $R G_k$-modules $M_k$ and an $R G$-module isomorphism\n\\[ M \\cong {\\iota_1}_\\#(M_1) \\oplus \\cdots \\oplus {\\iota_n}_\\#(M_n).\\]\n\nWe now define two special types of map between induced $R G$-modules.\nFirstly, if $M = \\bigoplus_{k=1}^n {\\iota_k}_\\#(M_k)$ and $M' = \\bigoplus_{k=1}^n {\\iota_k}_\\#(M_k')$ are induced $R G$-modules, then an $R G$-module homomorphism $f : M \\to M'$ is called an \\textit{induced homomorphism} if there exists $R G_k$-module homomorphisms $f_k : M_k \\to M_k'$ such that $f = \\oplus_{k=1}^n \\iota_*(f_k)$.\n\nNow, let $M = \\bigoplus_{k=1}^n {\\iota_k}_\\#(M_k)$ be an induced $R G$-module and suppose there exists $a$ for which $M_a \\cong M_a' \\oplus R G_a$ for some $R G_a$-module $M_a'$. Then, for any $b \\ne a$, there is an isomorphism\n\\[f_{a,b} : {\\iota_a}_\\#(M_a' \\oplus R G_a) \\oplus {\\iota_b}_\\#(M_b) \\to {\\iota_a}_\\#(M_a') \\oplus {\\iota_b}_\\#(M_b \\oplus R G_b)\\]\ninduced by ${\\iota_a}_\\#(R G_a) \\cong R G \\cong {\\iota_b}_\\#(R G_b)$. 
\nWe define a \\textit{free transfer isomorphism} on $a,b$ to be the isomorphism $F_{a,b} : M \\to M'$ which extends $f_{a,b}$ by the identity map on the other components and where\n\\[ M' = {\\iota_1}_\\#(M_1) \\oplus \\cdots \\oplus {\\iota_a}_\\#(M_a') \\oplus \\cdots \\oplus {\\iota_b}_\\#(M_b \\oplus R G_b) \\oplus \\cdots \\oplus {\\iota_n}_\\#(M_n).\\]\n\nThe following can be viewed as a special case of Bergman's theorem on modules over coproducts of rings \\cite{Be74}. We now restrict to the case where $R = \\mathbb{F}$ is a field.\n\n\\begin{thm}[Bergman]\n\\label{thm:Bergman}\nLet $M$ be a finitely generated induced $\\mathbb{F} G$-module. Then:\n\\begin{clist}{(i)}\n\\item If $M' \\subseteq M$ is a submodule, then $M'$ is an induced $\\mathbb{F} G$-module.\n\\item If $M'$ is an induced $\\mathbb{F} G$-module, then $M \\cong M'$\nif and only if they are connected by a sequence of induced isomorphisms and free transfer isomorphisms.\n\\end{clist}\n\\end{thm}\n\nFor the convenience of the reader, we will briefly outline how this can be deduced from Bergman's results. Here we will use the terminology from \\cite[p1-4]{Be74}.\n\n\\begin{proof}[Proof \\normalfont (outline)]\nFirstly, note that $\\mathbb{F} G$ is the coproduct of the $\\mathbb{F}$-rings $\\mathbb{F} G_k$ which are faithful since they come equipped with natural injections $\\iota_k : \\mathbb{F} G_k \\hookrightarrow \\mathbb{F} G$. \n\nPart (i) follows immediately from \\cite[Theorem 2.2]{Be74}. For part (ii), suppose $f: M \\to M'$ is an isomorphism of $\\mathbb{F} G$-modules. By \\cite[Theorem 2.3]{Be74}, and the remark on \\cite[p3]{Be74}, $f$ is the composition of induced isomorphisms, free transfer isomorphisms and transvections. Since transvections are module automorphisms, omitting them from the composition still leaves an isomorphism of $\\mathbb{F} G$-modules. 
\\end{proof}\n\n\\begin{corollary} \\label{cor:Bergman}\nLet $M = \\bigoplus_{k=1}^n {\\iota_k}_\\#(M_k)$ be a finitely generated induced $\\mathbb{F} G$-module and suppose each $M_k$ has no direct summand of the form $\\mathbb{F} G_k$. Then:\n\\begin{clist}{(i)}\n\\item If $M' = \\bigoplus_{k=1}^n {\\iota_k}_\\#(M_k')$ is an induced $\\mathbb{F} G$-module, then $M \\cong M'$ as $\\mathbb{F} G$-modules if and only if $M_k \\cong M_k'$ as $\\mathbb{F} G_k$-modules for all $k$.\n\\item $M$ has no direct summand of the form $\\mathbb{F} G$.\n\\end{clist}\n\\end{corollary}\n\n\\begin{proof}\nPart (i) follows from \\cref{thm:Bergman} (ii) since, if the $M_k$ have no direct summands of the form $\\mathbb{F} G_k$, then there are no free transfer isomorphisms by definition. \n\nTo see part (ii) note that, if $M \\cong M' \\oplus \\mathbb{F} G$, then $M' \\subseteq M$ is a submodule and so is an induced $\\mathbb{F} G$-module by \\cref{thm:Bergman} (i). If $M' = \\bigoplus_{k=1}^n {\\iota_k}_\\#(M_k')$, then $M \\cong {\\iota_1}_\\# (M_1' \\oplus \\mathbb{F} G_1) \\oplus \\bigoplus_{k=2}^n {\\iota_k}_\\#(M_k')$ which contradicts the result from (i).\n\\end{proof}\n\n\\subsection{$R G$-modules up to $\\Aut(G)$-isomorphism}\n\\label{subsection:Aut(G)}\n\nIf $M$ is an $R G$-module and $\\theta \\in \\Aut(G)$, then we can define $M_\\theta$ to be the $R G$-module with the same underlying abelian group as $M$ but with $G$-action given by $g \\cdot_{M_{\\theta}} m = \\theta(g) \\cdot_{M} m$ for $g \\in G$ and $m \\in M$. We say that $R G$-modules $M$ and $M'$ are \\textit{$\\Aut(G)$-isomorphic} if $M \\cong (M')_\\theta$ are isomorphic as $R G$-modules for some $\\theta \\in \\Aut(G)$.\nThis has a number of basic properties. In particular, if $M$ and $M'$ are $\\mathbb{Z} G$-modules and $\\theta \\in \\Aut(G)$, then $(M \\oplus M')_\\theta \\cong M_\\theta \\oplus (M')_\\theta$, and $(R G)_\\theta \\cong R G$ for all $\\theta \\in \\Aut(G)$. 
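For instance, the isomorphism $(R G)_\\theta \\cong R G$ can be seen explicitly: the $R$-linear extension of $\\theta$,

```latex
\[ \varphi : R G \to (R G)_\theta, \qquad
   \varphi\Big(\sum_{g \in G} r_g \, g\Big) = \sum_{g \in G} r_g \, \theta(g), \]
is additive and bijective, and satisfies
$\varphi(g \cdot x) = \theta(g)\,\varphi(x) = g \cdot_{(R G)_\theta} \varphi(x)$,
so it is an isomorphism of $R G$-modules.
```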
\n\nRecall that a subgroup $N \\subseteq G$ is \\textit{characteristic} if $\\theta(N) = N$ for all $\\theta \\in \\Aut(G)$. We also say that a surjective map $f: G \\twoheadrightarrow H$ is characteristic if $\\Ker(f) \\subseteq G$ is characteristic and, if so, then there is an induced map $\\bar{\\cdot} : \\Aut(G) \\to \\Aut(H)$.\n\nThe following is straightforward (see, for example, \\cite[Corollary 7.4]{Ni20a}).\n\n\\begin{prop} \\label{prop:modules-over-quotients}\nLet $G$ be a group, let $f: G \\twoheadrightarrow H$ be characteristic and let $\\bar{\\cdot} : \\Aut(G) \\to \\Aut(H)$ be the map induced by $f$. If $M$ is an $R G$-module and $\\theta \\in \\Aut(G)$, then\n$f_\\#(M_\\theta) \\cong (f_\\#(M))_{\\bar{\\theta}}$\nare isomorphic as $R H$-modules. \n\\end{prop}\n\nThe following will be of use in applying \\cref{prop:modules-over-quotients} to the case where $G$ is a free product. \nWe say that a group $G$ is \\textit{indecomposable} if it is non-trivial and $G \\cong G_1 \\ast G_2$ implies $G_1$ or $G_2$ is trivial.\n\n\\begin{prop} \\label{prop:free-product-char}\nLet $G = G_1 \\ast \\cdots \\ast G_n$ where each $G_k$ is indecomposable and not infinite cyclic. For each $k$, let $f_k : G_k \\twoheadrightarrow H_k$ be characteristic and such that, if $G_i \\cong G_j$, then $H_i \\cong H_j$ and $f_i$, $f_j$ differ by automorphisms of $G_i$, $H_i$.\n\nIf $f : G \\twoheadrightarrow H_1 \\ast \\cdots \\ast H_n$ is the map with $f \\mid_{G_k} = f_k$, then $f$ is characteristic.\n\\end{prop}\n\nOur proof will be a routine application of the following version of the Kurosh subgroup theorem \\cite[Theorem 5.1]{Ma77}.\n\n\\begin{thm}[Kurosh subgroup theorem]\nLet $G = G_1 \\ast \\cdots \\ast G_n$. 
If $H \\subseteq G$ is a subgroup, then\n\\[ H = F(X) \\ast (\\ast_{k=1}^n g_k H_k g_k^{-1})\\]\nwhere $F(X)$ is the free group on a set $X$, $H_k \\subseteq G_k$ is a subgroup and $g_k \\in G$.\n\\end{thm}\n\n\\begin{proof}[Proof of \\cref{prop:free-product-char}]\nLet $\\varphi \\in \\Aut(G)$. Then $\\varphi(G_k) \\subseteq G$ is indecomposable and not infinite cyclic and so, by the Kurosh subgroup theorem, we have $\\varphi(G_k) = g_{i_k} H_{i_k} g_{i_k}^{-1}$ for some subgroup $H_{i_k} \\subseteq G_{i_k}$. Since $\\varphi$ is an automorphism, we have:\n\\[ G = \\ast_{k=1}^n (g_{i_k} H_{i_k} g_{i_k}^{-1}) \\subseteq \\ast_{k=1}^n (g_{i_k} G_{i_k} g_{i_k}^{-1}) \\subseteq \\ast_{k=1}^n (g_k G_k g_k^{-1}) = G \\]\nwhich implies that $H_{i_k} = G_{i_k}$ and that the $i_k$ are distinct.\n\nLet $N_k = \\Ker(f_k) \\subseteq G_k$ and note that $N = \\Ker(f)$ is generated by the subgroups $g N_k g^{-1}$ for $g \\in G$. If $\\varphi \\in \\Aut(G)$, then the above implies that $\\varphi \\mid_{G_k} = c_{g_{i_k}} \\circ \\varphi_{k,i_k}$ where $\\varphi_{k,i_k} : G_k \\to G_{i_k}$ is an isomorphism and $c_{g_{i_k}} : G_{i_k} \\to G$ is conjugation by $g_{i_k}$. Since $f_{k}, f_{i_k}$ differ by automorphisms of $G_k, G_{i_k}$, we have $\\varphi_{k,i_k}(N_k) = \\varphi_{i_k}(N_{i_k})$ for some $\\varphi_{i_k} \\in \\Aut(G_{i_k})$ and so $\\varphi_{k,i_k}(N_k) = N_{i_k}$ since $N_{i_k}$ is characteristic. 
Hence $\\varphi(g N_k g^{-1}) = (\\varphi(g) g_{i_k}) N_{i_k} (\\varphi(g) g_{i_k})^{-1} \\subseteq N$ and so $N$ is characteristic.\n\\end{proof}\n\n\\section{Groups of finite cohomological dimension}\n\\label{section:cd(G)}\n\nWe will now recall some basic facts about groups with finite cohomological dimension, which are due to Serre \\cite{Se71}.\nA standard reference is the notes of Bieri \\cite{Bi81}.\n\nA group $G$ has \\textit{cohomological dimension $n$}, written $\\cd(G) = n$, if $n$ is the smallest integer for which there exists a projective resolution of $\\mathbb{Z} G$-modules of the form:\n\\[ 0 \\to P_n \\to \\cdots \\to P_1 \\to P_0 \\to \\mathbb{Z} \\to 0.\\]\nThis is equivalent to asking that $H^i(G;M)=0$ for all $i > n$ and all $\\mathbb{Z} G$-modules $M$ \\cite[Proposition 5.1(a)]{Bi81}.\nIf no such $n$ exists, then we take $\\cd(G) = \\infty$.\n\nA group $G$ is said to be \\textit{of type $\\FL$} if, for some $n \\ge 0$, there exists a resolution of finitely generated free $\\mathbb{Z} G$-modules of the form:\n\\[ 0 \\to F_n \\to \\cdots \\to F_1 \\to F_0 \\to \\mathbb{Z} \\to 0.\\]\nThe following is \\cite[Propositions 1.5, 4.1(b)]{Bi81}.\n\n\\begin{prop} \\label{prop:FP+cd}\nLet $G$ be a group with $\\cd(G) = n$. If $G$ is of type $\\FL$, then there exists a resolution of finitely generated free $\\mathbb{Z} G$-modules of the form:\n\\[ 0 \\to F_n \\to \\cdots \\to F_1 \\to F_0 \\to \\mathbb{Z} \\to 0.\\]\n\\end{prop}\n\nWe now recall how these conditions are related under amalgamated free products and direct products. The following is \\cite[Proposition 2.13(a), Proposition 6.1]{Bi81}.\n\n\\begin{lemma} \\label{lemma:cd-amalg}\nLet $G = G_1 \\ast_H G_2$ for groups $G_1$, $G_2$ with a common subgroup $H$.\n\\begin{clist}{(i)}\n\\item If $G_1$, $G_2$ and $H$ are of type $\\FL$, then $G$ is of type $\\FL$.\n\\item If $n = \\max\\{\\cd(G_1),\\cd(G_2)\\} < \\infty$ and $\\cd(H) < n$, then $\\cd(G) = n$. 
\n\\end{clist}\n\\end{lemma}\n\nThe following is a consequence of more general results on group extensions, which can be found in \\cite[Proposition 2.7, Theorem 5.5]{Bi81}.\n\n\\begin{lemma} \\label{lemma:cd-direct}\nLet $G = G_1 \\times G_2$ for groups $G_1$, $G_2$.\n\\begin{clist}{(i)}\n\\item If $G_1$, $G_2$ are of type $\\FL$, then $G$ is of type $\\FL$.\n\\item If $\\cd(G_1), \\cd(G_2) < \\infty$, $G_1$ is of type $\\FL$ and $H^n(G_1;\\mathbb{Z} G_1)$ is $\\mathbb{Z}$-free for $n = \\cd(G_1)$, then $\\cd(G) = \\cd(G_1) + \\cd(G_2)$.\n\\end{clist}\n\\end{lemma}\n\nWe will now give a construction of groups which will be the basis for our examples in \\cref{thm:main-SF} in the case $d \\ge 3$. This is inspired by a construction of Lustig \\cite{Lu93}.\n\nLet $G$ be a group and let $m \\ge 2$ be an integer. Then define\n\\[ G_+ = ( G \\ast \\langle r \\mid \\hspace{-.8mm}-\\rangle )\/ [r^m,G], \\]\nwhich is isomorphic to $(G \\times \\langle q \\mid \\hspace{-.8mm}-\\rangle) \\ast_{\\langle q = r^{m} \\rangle} \\langle r \\mid \\hspace{-.8mm}-\\rangle$. For integers $m_1, \\cdots, m_{n-1} \\ge 2$, we can define $G_{(n)}$ inductively by letting $G_{(1)} = G$ and $G_{(i+1)} = (G_{(i)})_+$, formed with exponent $m = m_i$, for $i \\ge 1$. We will label the new generator of $G_{(i+1)}$ by $r_i$.\nThe choice of $m_i \\ge 2$ will not matter for the purposes of this article; it suffices to consider the case $m_i=2$.\n\nLet $\\iota : G \\to G_{(n)}$ be the composition of the natural maps $G_{(i)} \\to G_{(i+1)}$ and let $f: G_{(n)} \\to G$ be the map which sends $r_i \\mapsto 1$ for each $i$. We have that $f \\circ \\iota = \\id_{G}$ and so $\\iota$ is injective, $f$ is surjective and $G$ is a retract of $G_{(n)}$.\n\n\\begin{prop} \\label{prop:group-construction}\nLet $n \\ge 1$ and let $G$ be a finitely presented group of type $\\FL$ with $\\cd(G) = d$. 
Then:\n\\begin{clist}{(i)}\n\\item $G_{(n)}$ is a finitely presented group of type $\\FL$ with $\\cd(G_{(n)})=n+d-1$.\n\\item The map $f: G_{(n)} \\twoheadrightarrow G$ is characteristic.\n\\end{clist}\n\\end{prop}\n\nIn order to prove this, we will first need the following lemma. The proof is identical to the one given in \\cite[p174]{Lu93}.\n\n\\begin{lemma} \\label{lemma:group-construction}\nLet $G$ be a torsion free group and let $G_+ = (G \\times \\langle q \\mid \\hspace{-.8mm}-\\rangle) \\ast_{\\langle q = r^m \\rangle} \\langle r \\mid \\hspace{-.8mm}-\\rangle$ for some $m \\ge 2$. Then the map $f : G_+ \\twoheadrightarrow G$ which sends $r \\mapsto 1$ is characteristic.\n\\end{lemma}\n\n\\begin{proof}[Proof of \\cref{prop:group-construction}]\nIt is clear that $G_{(n)}$ is finitely presented. We now prove (i) by induction, noting that it is trivial in the case $n=1$.\n\nSuppose (i) holds for $n$ and note that $G_{(n+1)} \\cong (G_{(n)} \\times \\mathbb{Z}) \\ast_{\\mathbb{Z}} \\mathbb{Z}$. It is well known that $K(\\mathbb{Z},1) \\simeq S^1$ and so $\\mathbb{Z}$ is of type $\\FL$, $\\cd(\\mathbb{Z})=1$ and $H^1(\\mathbb{Z};\\mathbb{Z}[\\mathbb{Z}]) \\cong \\mathbb{Z}$ is $\\mathbb{Z}$-free. By \\cref{lemma:cd-direct}, $G_{(n)} \\times \\mathbb{Z}$ is of type $\\FL$ and $\\cd(G_{(n)} \\times \\mathbb{Z}) = n+d$. By \\cref{lemma:cd-amalg}, this implies that $G_{(n+1)}$ is of type $\\FL$ and $\\cd(G_{(n+1)})=n+d$ as required.\n\nSince $\\cd(G_{(n)}) < \\infty$, $G_{(n)}$ is torsion free for all $n$ \\cite[Proposition 4.11]{Bi81}. By \\cref{lemma:group-construction}, this implies that the map $f_{i+1}: G_{(i+1)} \\twoheadrightarrow G_{(i)}$, $r_i \\mapsto 1$ is characteristic for all $i \\ge 1$. 
Hence $f = f_2 \\circ f_3 \\circ \\cdots \\circ f_{n}$ is characteristic by composition.\n\\end{proof}\n\n\\section{Proof of \\cref{thm:main-SF}}\n\\label{section:proof-main-algebra}\n\nRecall that the trefoil group $T$ is defined as $\\pi_1(S^3 \\, \\setminus \\, N(K))$, the fundamental group of the knot exterior, where $N(K)$ is an open tubular neighbourhood of the trefoil knot $K \\subseteq S^3$. It has presentation $\\mathcal{P} = \\langle x,y \\mid x^2 = y^3 \\rangle$.\n\nLet $T''$ denote the second derived subgroup of $T$, i.e. $T'' = (T')'$, and let $f: T \\twoheadrightarrow T\/T''$ be the quotient map.\nNote that $T\/T''$ is polycyclic since $(T\/T'')' \\cong \\mathbb{Z}^2$ and $(T\/T'')\/(T\/T'')' \\cong \\mathbb{Z}$.\nThe following was shown by P. H. Berridge and M. J. Dunwoody \\cite{BD79}, building upon previous work of Dunwoody \\cite{Du76}.\n\n\\begin{thm}[Berridge-Dunwoody] \\label{thm:BD}\nThere exist infinitely many rank one stably free $\\mathbb{Z} T$-modules $S_i$ for $i \\ge 1$ such that: \n\\begin{clist}{(i)}\n\\item $S_i \\oplus \\mathbb{Z} T \\cong \\mathbb{Z} T^2$.\n\\item There exist distinct primes $p_i$ for which $\\mathbb{F}_{p_i} \\otimes f_\\#(S_j) \\cong \\mathbb{F}_{p_i} [T\/T'']$ are isomorphic as $\\mathbb{F}_{p_i} [T\/T'']$-modules if and only if $i = j$.\n\\end{clist}\nIn particular, the $S_i$ are distinct up to $\\mathbb{Z} T$-module isomorphism.\n\\end{thm}\n\n\\begin{remark} \\label{remark:relation-module}\nFor $i \\ge 0$, let $M_i = \\Ker(\\cdot\\left(\\begin{smallmatrix} x^{2i+1}-1 \\\\ y^{3i+1}-1 \\end{smallmatrix}\\right) : \\mathbb{Z} T^2 \\twoheadrightarrow \\mathbb{Z} T)$ be the relation module for the generating set $\\{x^{2i+1},y^{3i+1}\\}$, which is a stably free $\\mathbb{Z} T$-module of rank one. It was shown in \\cite{BD79} that $\\mathbb{F}_p \\otimes f_\\#(M_i) \\cong \\mathbb{F}_p[T\/T'']$ as $\\mathbb{F}_p[T\/T'']$-modules if and only if $p \\mid i(i+1)$. 
\nThere exist integers $\\ell_i$ for $i \\ge 1$ and primes $p_i$ such that $p_i \\mid \\ell_j(\\ell_j+1)$ if and only if $i=j$, and so we can take $S_i = M_{\\ell_i}$ in \\cref{thm:BD}. It is not known whether or not the $M_i$ are all distinct up to $\\mathbb{Z} T$-module isomorphism.\n\\end{remark}\n\nFor the rest of this section, fix $k \\ge 1$ and $n \\ge 1$. Let $G = T_1 \\ast \\cdots \\ast T_k$ where $T_j \\cong T$ is the trefoil group and let $G_{(n)}$ be as defined in \\cref{section:cd(G)}.\nSince $T$ is a knot group, $T$ has type $\\FL$ and $\\cd(T) = 2$ \\cite[p212]{Br82}.\nBy \\cref{lemma:cd-amalg} and \\cref{prop:group-construction}, this implies that $G_{(n)}$ has type $\\FL$ and $\\cd(G_{(n)}) = n+1$. The aim of the rest of this section will be to prove the following theorem, which implies \\cref{thm:main-SF}.\n\n\\begin{thm} \\label{thm:main-SF-detailed}\nFor each $n \\ge 1$ and $1 \\le m \\le k$, there exist infinitely many stably free $\\mathbb{Z} G_{(n)}$-modules $\\widehat S_i$ for $i \\ge 1$ such that:\n\n\\begin{clist}{(i)}\n\\item $\\widehat S_i \\oplus \\mathbb{Z} G_{(n)} \\cong \\mathbb{Z} G_{(n)}^{m+1}$.\n\\item $\\widehat S_i$ has no direct summand of the form $\\mathbb{Z} G_{(n)}$.\n\\item The $\\widehat S_i$ for $i \\ge 1$ are distinct up to $\\Aut(G_{(n)})$-isomorphism of $\\mathbb{Z} G_{(n)}$-modules.\n\\end{clist}\n\\end{thm}\n\nNote that the case $m = k$ is sufficient to establish \\cref{thm:main-SF}. This result shows that the tree of stably free $\\mathbb{Z} G_{(n)}$-modules has branching at all ranks $1 \\le m \\le k$.\nWe do not know whether branching occurs at ranks $\\ge k+1$, even in the case $G = T$.\n\nIn order to prove \\cref{thm:main-SF-detailed}, we will begin with the following lemma.\n\n\\begin{lemma} \\label{lemma:f=char}\nLet $f_j : T_j \\twoheadrightarrow T_j\/(T_j)''$ be the quotient maps and let\n\\[ f: G \\twoheadrightarrow (T_1\/T_1'') \\ast \\cdots \\ast (T_k\/T_k'') \\]\nbe the map induced by the $f_j$. 
Then $f$ is characteristic. \n\\end{lemma}\n\n\\begin{proof}\nFor any group $G$, it is well known that $G' \\subseteq G$ is characteristic and so $G'' \\subseteq G$ is characteristic also. Hence $f_j$ is characteristic for each $j$. Since $T$ is indecomposable and not infinite cyclic, $f$ is characteristic by \\cref{prop:free-product-char}.\n\\end{proof}\n\nFor simplicity, we will begin by proving \\cref{thm:main-SF-detailed} in the case $n=1$, i.e. where $G_{(n)}=G$.\nFrom now on, fix $1 \\le m \\le k$. For integers $i_1, \\cdots, i_m$, define\n\\[S_{i_1, \\cdots, i_m} = {\\iota_1}_\\#(S_{i_1}) \\oplus \\cdots \\oplus {\\iota_m}_\\#(S_{i_m}) \\]\nwhere $\\iota_j : T_j \\hookrightarrow G$ is the inclusion map.\nWe will now prove the following as a consequence of Bergman's theorem, which we will apply by using \\cref{cor:Bergman}.\n\n\\begin{prop} \\label{prop:SF-trefoil}\nFor integers $i_1, \\cdots, i_m$, we have:\n\\begin{clist}{(i)}\n\\item $S_{i_1, \\cdots, i_m} \\oplus \\mathbb{Z} G \\cong \\mathbb{Z} G^{m+1}$.\n\\item $S_{i_1,\\cdots, i_m}$ has no direct summand of the form $\\mathbb{Z} G$.\n\\item If $S_{i_1, \\cdots, i_m} \\cong S_{i_1',\\cdots, i_m'}$ are $\\Aut(G)$-isomorphic as $\\mathbb{Z} G$-modules then, as sets, we have $\\{i_1, \\cdots, i_m\\} = \\{i_1',\\cdots,i_m'\\}$.\n\\end{clist}\n\\end{prop}\n\n\\begin{proof}\nPart (i) is a straightforward consequence of \\cref{thm:BD} (i). \n\nLet $\\bar{G} = \\ast_{j=1}^k T_j\/T_j''$ and let $\\text{$\\bar{\\iota}_j$} : T_j\/T_j'' \\hookrightarrow \\bar{G}$ be inclusion. By \\cref{thm:BD} (ii), there exists $p$ such that $\\mathbb{F}_p \\otimes {f_j}_\\#(S_{i_j}) \\not \\cong \\mathbb{F}_p[T_j\/T_j'']$ for all $j$. 
Fix $p$ and note that:\n\\[ \\mathbb{F}_p \\otimes f_\\#(S_{i_1,\\cdots, i_m}) \\cong \\textstyle \\bigoplus_{j=1}^m \\mathbb{F}_p \\otimes (f \\circ \\iota_j)_\\#(S_{i_j}) \\cong \\bigoplus_{j=1}^m \\text{$\\bar{\\iota}_j$}_\\#(\\mathbb{F}_p \\otimes {f_j}_\\#(S_{i_j})) \\]\nis an induced $\\mathbb{F}_p \\bar{G}$-module. \nIn order to show that \\cref{cor:Bergman} applies, it remains to show that $\\mathbb{F}_p \\otimes {f_j}_\\#(S_{i_j})$ has no direct summand of the form $\\mathbb{F}_p[T_j\/T_j'']$.\n\nIf $\\mathbb{F}_p \\otimes {f_j}_\\#(S_{i_j}) \\cong S \\oplus \\mathbb{F}_p[T_j\/T_j'']$, then $S \\oplus \\mathbb{F}_p[T_j\/T_j'']^2 \\cong \\mathbb{F}_p[T_j\/T_j'']^2$.\nSince $T_j\/T_j''$ is polycyclic, it is amenable and so sofic. By \\cref{thm:ES}, $\\mathbb{F}_p[T_j\/T_j'']$ is stably finite and so $S = 0$. Hence $\\mathbb{F}_p \\otimes {f_j}_\\#(S_{i_j}) \\cong \\mathbb{F}_p[T_j\/T_j'']$, which is a contradiction.\n\nTo show (ii) note that, if $S_{i_1,\\cdots, i_m}$ has a direct summand $\\mathbb{Z} G$, then $\\mathbb{F}_p \\otimes f_\\#(S_{i_1,\\cdots, i_m})$ has a direct summand $\\mathbb{F}_p \\bar{G}$. This contradicts \\cref{cor:Bergman} (ii).\n\nTo show (iii), suppose that $\\{i_1, \\cdots, i_m\\} \\ne \\{i_1',\\cdots,i_m'\\}$ as sets. By symmetry, we can assume that there exists $i_r' \\not \\in \\{i_1, \\cdots, i_m\\}$. Let $p = p_{i_r'}$ in the notation of \\cref{thm:BD}. By the argument above, $\\mathbb{F}_p \\otimes f_\\#(S_{i_1,\\cdots, i_m})$ has no direct summand of the form $\\mathbb{F}_p \\bar{G}$. 
On the other hand, $\\mathbb{F}_p \\otimes {f_r}_\\#(S_{i_r'}) \\cong \\mathbb{F}_p[T_r\/T_r'']$ which implies that\n\\begin{align*} \\mathbb{F}_p \\otimes f_\\#(S_{i_1',\\cdots, i_m'}) & \\cong \\textstyle \\bigoplus_{j=1, j \\ne r}^m \\text{$\\bar{\\iota}_j$}_\\#(\\mathbb{F}_p \\otimes {f_j}_\\#(S_{i_j'})) \\oplus \\mathbb{F}_p \\bar{G} \\\\\n& \\cong \\textstyle \\bigoplus_{j=1, j \\ne r}^{m-1} \\text{$\\bar{\\iota}_j$}_\\#(\\mathbb{F}_p \\otimes {f_j}_\\#(S_{i_j'})) \\oplus \\mathbb{F}_p \\bar{G}^2 \\cong \\cdots \\cong \\mathbb{F}_p \\bar{G}^m.\n\\end{align*}\nIf $S_{i_1,\\cdots, i_m} \\cong S_{i_1',\\cdots, i_m'}$ are $\\Aut(G)$-isomorphic, then $S_{i_1,\\cdots, i_m} \\cong (S_{i_1',\\cdots, i_m'})_\\theta$ for some $\\theta \\in \\Aut(G)$. By \\cref{lemma:f=char}, $f$ is characteristic and so, by \\cref{prop:modules-over-quotients}, $f_\\#((S_{i_1',\\cdots, i_m'})_\\theta) \\cong (f_\\#(S_{i_1',\\cdots, i_m'}))_{\\bar{\\theta}}$ for some $\\bar{\\theta} \\in \\Aut(\\bar{G})$. In particular, we have:\n\\[ \\mathbb{F}_p \\otimes f_\\#(S_{i_1,\\cdots, i_m}) \\cong (\\mathbb{F}_p \\otimes f_\\#(S_{i_1',\\cdots, i_m'}))_{\\bar{\\theta}} \\cong (\\mathbb{F}_p \\bar G^m)_{\\bar{\\theta}} \\cong \\mathbb{F}_p \\bar{G}^m\\]\nwhich is a contradiction.\n\\end{proof}\n\n\\begin{proof}[Proof of \\cref{thm:main-SF-detailed}]\nLet $\\iota : G \\hookrightarrow G_{(n)}$ and $f: G_{(n)} \\twoheadrightarrow G$ be as defined in \\cref{section:cd(G)}. This satisfies $f \\circ \\iota = \\id_G$ and, by \\cref{prop:group-construction}, $f$ is characteristic. Define $\\widehat S_i = \\iota_\\#(S_{i_1, \\cdots, i_m})$, where $i_j = i$ for all $j$. 
By \\cref{prop:SF-trefoil}, it is now straightforward to check that the $\\widehat S_i$ have the required properties.\n\\end{proof}\n\nWe conclude this section with extended remarks on \\cref{thm:main-SF} and \\cref{thm:main-SF-detailed}.\n\n\\subsubsection{Relation modules}\n\nBy \\cref{remark:relation-module}, $S_i$ is the relation module for the generating set $\\{x^{2\\ell_i+1},y^{3\\ell_i+1}\\}$ of $T$. It follows that $S_{i_1, \\cdots, i_m}$ is the relation module for the generating set $\\{x_i^{2\\ell_i+1},y_i^{3\\ell_i+1}\\}_{i=1}^k$ of $G = T_1 \\ast \\cdots \\ast T_k$ where $T_i = \\langle x_i, y_i \\mid x_i^2 = y_i^3 \\rangle$.\n\n\\subsubsection{Change of field}\n\nIn the proof of \\cref{prop:SF-trefoil}, the $\\mathbb{Z} G$-modules were distinguished by passing to $\\mathbb{F}_p G$ for various $p$. An alternate approach is to instead pass to $\\mathbb{Q} G$ and use the results of Lewin \\cite{Le82}.\nThis has the advantage that $\\mathbb{Q} G$ is stably finite by results of Kaplansky, and so we need not rely on \\cref{thm:ES}. However, whilst non-free stably free $\\mathbb{Z} G$-modules can be detected on $\\mathbb{Q} G$ using \\cite{Le82}, it is not clear how one would detect infinitely many distinct stably free $\\mathbb{Z} G$-modules on $\\mathbb{Q} G$.\n\n\\subsubsection{Alternate constructions}\n\nThere are further ways to deduce \\cref{thm:main-SF} in the case $d \\ge 3$ from the case $d=2$.\nBy \\cref{prop:SF-trefoil} and the proof of \\cref{thm:main-SF-detailed}, it suffices to find a finitely presented group $G$ with $\\cd(G) = d$ and a characteristic quotient $f : G \\twoheadrightarrow \\ast_{i=1}^N T$ for some $N \\ge k$.\nTwo such constructions are as follows.\n\\begin{clist}{(1)}\n\\item \nLet $G = \\ast_{i=1}^r (\\ast_{j=1}^{n_i} T)_{(d-1)}$ where $1 \\le n_1 \\le \\cdots \\le n_r$ and $N = \\sum_{i=1}^r n_i$. 
Then $\\cd(G) = d$ and there is a characteristic quotient $f : G \\twoheadrightarrow \\ast_{i=1}^N T$.\nFor example, we can take $G = (\\ast_{i=1}^k T)_{(d-1)}$ as above, or $G = \\ast_{i=1}^k T_{(d-1)}$ (see \\cref{thm:main-SF-further-detailed}).\n\n\\item\nLet $G = (\\ast_{i=1}^N T) \\times \\Gamma$ where $\\Gamma$ is a finitely presented group with $\\cd(\\Gamma) = d-2$, $Z(\\Gamma)=1$ and which does not contain $\\ast_{i=1}^N T$ as a direct factor. By \\cref{lemma:cd-direct}, we have $\\cd(G) = d$. If $N \\ge 2$, then $Z(\\ast_{i=1}^N T)=1$ and it can be deduced from \\cite[Corollary 2.2]{Jo83} that $f: G \\twoheadrightarrow \\ast_{i=1}^N T$ is characteristic. \n\nFor example: If $d=3$, let $\\Gamma$ be a free group of rank $\\ge 2$.\nIf $d = 4$, let $\\Gamma$ be a surface group of genus $\\ge 2$.\nIf $d \\ge 5$, let $\\Gamma \\subseteq L$ be a cocompact torsion free lattice in a non-compact simple Lie group $L$ such that the quotient of $L$ by a maximal compact subgroup has dimension $d-2$. \nNote that there are infinitely many such $\\Gamma$ up to commensurability.\nI am indebted to F. E. A. Johnson for this observation.\n\\end{clist}\n\n\\section{Module invariants of CW-complexes}\n\\label{section:module-invariants}\n\nLet $X$ be a CW-complex and recall that its cellular chain complex $C_*(\\widetilde X)$ is a chain complex of free $\\mathbb{Z}[\\pi_1(X)]$-modules under the monodromy action. The chain homotopy type of $C_*(\\widetilde X)$ is a homotopy invariant for $X$ and so, for all $n$, the $\\mathbb{Z}[\\pi_1(X)]$-module $H_n(C_*(\\widetilde X))$ is also a homotopy invariant.\n\nIf $G$ is a group and $\\rho: \\pi_1(X) \\cong G$, then every $\\mathbb{Z}[\\pi_1(X)]$-module $M$ can be converted to a $\\mathbb{Z} G$-module, which we denote by $M_\\rho$, with action $g \\cdot_{\\mathbb{Z} G} m := \\rho^{-1}(g) \\cdot_{\\mathbb{Z}[\\pi_1(X)]} m$ for $g \\in G$ and $m \\in M$. In this notation, $H_n(C_*(\\widetilde X))_\\rho$ is a $\\mathbb{Z} G$-module. 
We will denote this by $H_n(X;\\mathbb{Z} G)$ when $\\rho$ is understood. If $\\rho' : \\pi_1(X) \\cong G$ and $\\theta = \\rho \\circ (\\rho')^{-1} \\in \\Aut(G)$, then $H_n(C_*(\\widetilde X))_{\\rho'} \\cong (H_n(C_*(\\widetilde X))_\\rho)_\\theta$. In particular, the $\\Aut(G)$-isomorphism class of $H_n(X;\\mathbb{Z} G)$ is a homotopy invariant and is independent of the choice of $\\rho$.\n\nThe aim of this section will be to consider how $H_n(X;\\mathbb{Z} G)$ changes under wedge product. We will also give a mild variation of this invariant under group quotients.\n\n\\subsection{Homology of a wedge product}\n\\label{subsection:wedge}\n\nThe following is presumably well-known. However, we were not able to locate a suitable reference in the literature.\n\n\\begin{prop} \\label{prop:CW-of-a-wedge}\nLet $X_1$, $X_2$ be CW-complexes with a single 0-cell such that $\\pi_1(X_k) \\cong G_k$. Let $X = X_1 \\vee X_2$, which has $\\pi_1(X) \\cong G$ where $G= G_1 \\ast G_2$. Then:\n\\[ \nC_i(\\widetilde X) = \n\\begin{cases}\n{\\iota_1}_\\#(C_i(\\widetilde X_1)) \\oplus {\\iota_2}_\\#(C_i(\\widetilde X_2)), & \\text{if $i \\ge 1$} \\\\\n\\mathbb{Z} G , & \\text{if $i=0$}\n\\end{cases}\n\\]\nwhere $\\partial_i = {\\iota_1}_\\#(\\partial^{X_1}_{i}) \\oplus {\\iota_2}_\\#(\\partial^{X_2}_{i})$ for $i \\ge 2$, $\\partial_1 = ({\\iota_1}_\\#(\\partial^{X_1}_{1}), {\\iota_2}_\\#(\\partial^{X_2}_{1}))$ and $\\partial_0 = \\varepsilon_G$.\n\\end{prop}\n\n\\begin{proof}\nIt suffices to compute an explicit model for $\\widetilde X$ in terms of $\\widetilde X_1$ and $\\widetilde X_2$. Such a model, which is often attributed to Scott-Wall \\cite{SW79}, is provided by taking the graph of spaces structure on $X = X_1 \\vee X_2$ and lifting it to $\\widetilde X$. \n\nDefine a graph $(V,E)$ with vertex set $V = V(X_1) \\sqcup V(X_2)$ where $V(X_1)$ is the set of elements in $G_1 \\ast G_2$ with final term in $G_2$, i.e. 
the identity $1$ as well as the elements of the form $g_n \\cdots g_2 g_1$ for $n \\ge 1$ where $g_i \\in G_2 \\setminus \\{ 1 \\}$ when $i$ is odd and $g_i \\in G_1 \\setminus \\{ 1 \\}$ otherwise. Define $V(X_2)$ similarly. Note that, whilst $V(X_1) \\cap V(X_2) = \\{1\\}$ as subsets of $G_1 \\ast G_2$, the elements $1 \\in V(X_i)$ are not identified in $V$.\n\nDefine $E = \\bigsqcup_{v \\in V(X_1)} (G_1 \\, \\setminus \\, \\{1\\})_v \\sqcup \\bigsqcup_{v \\in V(X_2)} (G_2 \\, \\setminus \\, \\{1\\})_v \\sqcup \\{e_{1,1} \\}$ where, for each $v \\in V(X_1)$ and $g \\in G_1 \\, \\setminus \\, \\{1\\}$, we have a directed edge $e_{v,vg} = (g)_v$ from $v$ to $vg$ which is labeled by $g \\in G$. Similarly for $V(X_2)$ and $G_2$. The edge $e_{1,1}$ from $1 \\in V(X_1)$ to $1 \\in V(X_2)$ is labeled by $1 \\in G$.\n\nLet $\\ast \\in X_i$ denote the $0$-cell and, for each $g \\in G_i$, let $\\ast_g \\in \\widetilde X_i$ denote its corresponding lift.\nOur model is the CW-complex\n\\[ X_{(V,E)} = \\left(\\bigsqcup_{v \\in V(X_1)} (\\widetilde X_1)_v \\sqcup \\bigsqcup_{v \\in V(X_2)} (\\widetilde X_2)_v \\right)\/\\sim \\]\nwhere, if we have a directed edge $e_{v_1,v_2} \\in E$ with label $g \\in G$, then $(\\ast_g)_{v_1} \\sim (\\ast_1)_{v_2}$ where, if $v_1 \\in V(X_i)$, then $(\\ast_g)_{v_1} \\in (\\widetilde X_i)_{v_1}$ and similarly for $(\\ast_1)_{v_2}$. \nBy comparing with the construction in \\cite{SW79}, we have $\\widetilde X \\simeq X_{(V,E)}$.\n\nWe now determine the induced action of $G = G_1 \\ast G_2$ on $X_{(V,E)}$. Note that $G_1$ acts on $(\\widetilde X_1)_1$ by monodromy and freely permutes the $\\ast_g \\in (\\widetilde X_1)_1$. This action extends to all of $X_{(V,E)}$ inductively, and similarly for the action of $G_2$ on $(\\widetilde X_2)_1$. Since $G = \\langle G_1, G_2 \\rangle$, this determines the full action of $G$ on $X_{(V,E)}$. \n\nIt now remains to read off the cell structure of $X_{(V,E)}$ under this $G$-action. 
For $i \\ge 1$, the $i$-cells lie in the interior of the copies of $\\widetilde X_1$, $\\widetilde X_2$ and so are unaffected by the relation $\\sim$. This implies that:\n\\[C_i(X_{(V,E)}) = \\bigoplus_{v \\in V(X_1)} v \\cdot C_i(\\widetilde X_1) \\oplus \\bigoplus_{v \\in V(X_2)} v \\cdot C_i(\\widetilde X_2) \\]\nas an abelian group. Since $G$ acts on the $V(X_j)$ in the natural way, and the elements of $V(X_j)$ are coset representatives for $G\/G_j$, we have that: \n\\[ \\bigoplus_{v \\in V(X_j)} v \\cdot C_i(\\widetilde X_j) \\cong \\mathbb{Z} G \\otimes_{\\mathbb{Z} G_j} C_i(\\widetilde X_j) \\cong {\\iota_j}_\\#(C_i(\\widetilde X_j)) \\]\nas $\\mathbb{Z} G$-modules. We can determine $C_0(X_{(V,E)})$ and the $\\partial_i$ similarly.\n\\end{proof}\n\n\\begin{corollary} \\label{cor:pi_2-of-wedge}\nLet $X_1$ and $X_2$ be CW-complexes with a single $0$-cell such that $\\pi_1(X_i) \\cong G_i$. Let $X = X_1 \\vee X_2$, which has $\\pi_1(X) \\cong G$ where $G= G_1 \\ast G_2$. Then:\n\\[ H_n(X;\\mathbb{Z} G) \\cong {\\iota_1}_\\#(H_n(X_1; \\mathbb{Z} G_1)) \\oplus {\\iota_2}_\\#(H_n(X_2;\\mathbb{Z} G_2)). \\]\n\\end{corollary}\n\n\\begin{remark}\nThis could be deduced from the Mayer-Vietoris sequence for homology with local coefficients \\cite[Theorem 2.4]{Wh78}, though the above argument is more direct.\n\\end{remark}\n\n\\subsection{Homology under group quotients} \n\\label{subsection:change-of-group}\n\nLet $X$ be a CW-complex with $\\rho : \\pi_1(X) \\cong G$ and let $C_*(\\widetilde X)_\\rho$ be the corresponding chain complex of $\\mathbb{Z} G$-modules. \nIf $f: G \\twoheadrightarrow H$ is a quotient of groups, then $f_\\#(C_*(\\widetilde X)_\\rho)$ is a chain complex of free $\\mathbb{Z} H$-modules with boundary maps $\\id_{\\mathbb{Z} H} \\otimes \\partial_i$, and $H_n(f_\\#(C_*(\\widetilde X)_\\rho))$ is a $\\mathbb{Z} H$-module. 
\nWe will denote this by $H_n(X;\\mathbb{Z} H)$ when $f$ and $\\rho$ are understood.\n\nSubject to conditions on $f$, this gives an additional homotopy invariant for $X$.\n\n\\begin{prop} \\label{prop:ZH-homology}\nIf $f$ is characteristic, then the $\\Aut(H)$-isomorphism class of $H_n(X;\\mathbb{Z} H)$ is a homotopy invariant and is independent of the choice of $\\rho$.\n\\end{prop}\n\n\\begin{proof}\nIf $C_*(\\widetilde X)_\\rho \\simeq C_*(\\widetilde Y)_{\\rho'}$ are chain homotopic as chain complexes of $\\mathbb{Z} G$-modules, then $f_\\#(C_*(\\widetilde X)_\\rho) \\simeq f_\\#(C_*(\\widetilde Y)_{\\rho'})$ are chain homotopic as chain complexes of $\\mathbb{Z} H$-modules.\nLet $\\theta \\in \\Aut(G)$. Since $f$ is characteristic, \\cref{prop:modules-over-quotients} implies that $f_\\#((C_*(\\widetilde X)_\\rho)_\\theta) \\cong (f_\\#(C_*(\\widetilde X)_\\rho))_{\\bar{\\theta}}$\nfor some $\\bar{\\theta} \\in \\Aut(H)$. The result now follows.\n\\end{proof}\n\n\\section{Algebraic classification of finite $(G,n)$-complexes}\n\\label{section:Gn-complexes}\n\nA \\textit{$(G,n)$-complex} is an $n$-dimensional CW-complex $X$ such that $\\pi_1(X) \\cong G$ and the universal cover $\\widetilde X$ is $(n-1)$-connected. By contracting a maximal spanning tree, $X$ is homotopy equivalent to a $(G,n)$-complex with a single $0$-cell. For convenience, we will now assume that a $(G,n)$-complex has a single $0$-cell, which is the basepoint.\n\nIf $i \\ge 2$, then $\\pi_i(X) \\cong \\pi_i(\\widetilde X)$ as abelian groups. In this way, we can view $\\pi_i(X)$ as a $\\mathbb{Z} G$-module under the monodromy action.\nIf $2 \\le i < n$, then $\\pi_i(X) = 0$ since $\\widetilde X$ is $(n-1)$-connected. If $i = n$, then the Hurewicz theorem implies that:\n\\[ \\pi_n(X) \\cong H_n(\\widetilde X ;\\mathbb{Z}) \\cong H_n(X;\\mathbb{Z} G) \\]\nas $\\mathbb{Z} G$-modules. 
In particular, \\cref{cor:pi_2-of-wedge} applies to $\\pi_n(X)$.\n\n\\subsection{Algebraic $n$-complexes and the D2 problem}\n\\label{subsection:algebraic-n-complexes}\n\nLet $G$ be a group. \nAn \\textit{algebraic $n$-complex over $\\mathbb{Z} G$} is an exact chain complex:\n\\[ E = (F_n \\xrightarrow[]{\\partial_n} \\cdots \\xrightarrow[]{\\partial_2} F_1 \\xrightarrow[]{\\partial_1} F_0 \\xrightarrow[]{\\partial_0} \\mathbb{Z} \\to 0)\\]\nwhere the $F_i$ are finitely generated stably free $\\mathbb{Z} G$-modules. \n\nLet $\\text{\\normalfont{Alg}}(G,n)$ denote the equivalence classes of algebraic $n$-complexes over $\\mathbb{Z} G$ up to chain homotopy equivalence of the unaugmented complex $(F_i,\\partial_i)_{i=1}^n$.\nThe \\textit{$n$th homotopy group} of $E$ is the $\\mathbb{Z} G$-module $\\pi_n(E) = \\Ker(\\partial_n)$ and is an invariant of the chain homotopy class of $E$. \nIf $n \\ge 2$, we can assume the $F_i$ are free since every algebraic $n$-complex is chain homotopy equivalent to such a complex.\n\nLet $\\PHT(G,n)$ denote the polarised homotopy types of finite $(G,n)$-complexes, i.e. the homotopy types of pairs $(X,\\rho)$ where $\\rho: \\pi_1(X) \\cong G$.\nIf $(X,\\rho) \\in \\PHT(G,n)$, then $C_*(\\widetilde X)_{\\rho}$ is a chain complex of $\\mathbb{Z} G$-modules such that $H_0(C_*(\\widetilde X)_{\\rho}) \\cong \\mathbb{Z}$ and $H_i(C_*(\\widetilde X)_{\\rho}) = 0$ for $1 \\le i < n$. In particular, there is a map:\n\\[ \\Psi: \\PHT(G,n) \\to \\text{\\normalfont{Alg}}(G,n). \\]\n\nRecall that a finitely presented group $G$ has the \\textit{D2 property} if every finite CW-complex $X$ such that $\\pi_1(X) \\cong G$, $H_i(\\widetilde X;\\mathbb{Z}) = 0$ for $i > 2$ and $H^{3}(X;M)=0$ for all finitely generated $\\mathbb{Z} G$-modules $M$ is homotopy equivalent to a finite 2-complex.\nThe following is a mild improvement of Wall's results on finiteness conditions for CW-complexes due to Johnson \\cite{Jo03a} and Mannan \\cite{Ma09}. 
\nThis precise version follows from \\cite[Corollary 8.27]{Jo12a} in the case $n \\ge 3$ and \\cite[Theorem 2.1]{Ni19} in the case $n=2$.\n\n\\begin{prop} \\label{prop:realisation-thm}\nLet $G$ be a finitely presented group. \nIf $n \\ge 3$, then $\\Psi$ is bijective.\nIf $n = 2$, then $\\Psi$ is injective and is bijective if and only if $G$ has the {\\normalfont D2} property.\n\\end{prop}\n\n\\begin{remark}\nThe case $n \\ge 3$ is often vacuous since there are finitely presented groups $G$ for which no algebraic $n$-complex over $\\mathbb{Z} G$ exists for any $n \\ge 3$. The first example was found by Stallings in \\cite{St63} (see also \\cite[Proposition 2.14]{Bi81}) and was later generalised to a class of right-angled Artin groups by Bestvina-Brady \\cite[Main Theorem]{BB97}.\n\\end{remark}\n\n\\subsection{Realising $\\mathbb{Z} G$-modules by algebraic $n$-complexes}\n\\label{subsection:pi_n-realisation}\n\nThe $n$th \\textit{stable syzygy} $\\Omega_{n}^G(\\mathbb{Z})$ is the set of $\\mathbb{Z} G$-modules $M$ for which $M \\oplus \\mathbb{Z} G^i \\cong \\pi_{n-1}(E) \\oplus \\mathbb{Z} G^j$ for some $i, j \\ge 0$ and some algebraic $(n-1)$-complex $E$ over $\\mathbb{Z} G$. We will denote this by $\\Omega_n(\\mathbb{Z})$ when the choice of $G$ is clear from the context. This is well-defined and does not depend on the choice of $E$ \\cite[Theorem 8.9]{Jo12a}.\nIt also comes with a map:\n\\[ \\pi_n : \\text{\\normalfont{Alg}}(G,n) \\to \\Omega_{n+1}(\\mathbb{Z}). \\]\n\nThe following can be found in \\cite[Proposition 8.18]{Jo12a}.\n\n\\begin{prop} \\label{prop:realisation-of-syzygies}\nLet $n \\ge 2$ and let $G$ be an infinite finitely presented group of type $\\FL$ such that $H^{n+1}(G;\\mathbb{Z} G)=0$. 
Then $\\pi_n$ is bijective.\n\\end{prop}\n\nThe following is a straightforward consequence of Propositions \\ref{prop:FP+cd} and \\ref{prop:realisation-of-syzygies}.\n\n\\begin{prop} \\label{prop:syzygies-cd-finite}\nLet $G$ be a finitely presented group of type $\\FL$ with $\\cd(G) = d$.\n\\begin{clist}{(i)}\n\\item If $n \\ge d$, then $\\Omega_{n}(\\mathbb{Z})$ is the set of stably free $\\mathbb{Z} G$-modules.\n\\item If $n \\ge d$, then $\\pi_n: \\text{\\normalfont{Alg}}(G,n) \\to \\Omega_{n+1}(\\mathbb{Z})$ is bijective.\n\\item If $n = d-1$, then $0 \\not \\in \\IM(\\pi_{n} : \\text{\\normalfont{Alg}}(G,n) \\to \\Omega_{n+1}(\\mathbb{Z}))$.\n\\end{clist}\n\\end{prop}\n\n\\begin{remark}\nThis implies that, for $n \\ge 2$, $\\pi_n : \\text{\\normalfont{Alg}}(G,n) \\to \\Omega_{n+1}(\\mathbb{Z})$ is not surjective whenever $\\cd(G) = n+1$ (for example, $G = \\mathbb{Z}^{n+1}$). This was noted in \\cite[p107]{Jo12a}.\n\\end{remark}\n\nIt is possible to see directly that $\\cd(G) \\le n$ implies that $\\pi_n: \\text{\\normalfont{Alg}}(G,n) \\to \\Omega_{n+1}(\\mathbb{Z})$ is surjective (see, for example, \\cite[Theorem 4]{HJ06b}).\nThe following is now clear.\n\n\\begin{corollary} \\label{cor:homotopy-cd-finite}\nLet $n \\ge 3$ and let $G$ be a finitely presented group of type $\\FL$ with $\\cd(G) = n$. Then $\\pi_n$ gives a one-to-one correspondence between homotopy types of finite $(G,n)$-complexes and $\\Aut(G)$-isomorphism classes of stably free $\\mathbb{Z} G$-modules.\n\\end{corollary}\n\nFinally, we note the following, where $\\rank(P)$ denotes the stably free rank of $P$.\n\n\\begin{prop} \\label{prop:rank-computation}\nLet $G$ be a finitely presented group of type $\\FL$ with $\\cd(G) = d$, let $n \\ge d-1$ and let $X$ be a finite $(G,n)$-complex. Then $\\chi(X) = k + \\chi_{\\min}(G,n)$ if and only if:\n\\[ \\rank(\\pi_n(X)) = k + \\min\\{ \\rank(\\pi_n(X_0)) : \\text{$X_0$ a finite $(G,n)$-complex} \\}. 
\\]\nIn particular, if $n \\ge \\max\\{3, d\\}$, then $k = \\rank(\\pi_n(X))$.\n\\end{prop}\n\n\\section{Proof of \\cref{thm:main}}\n\\label{section:proof-main-topological}\n\nWe will now prove \\cref{thm:main} separately in the two cases of non-minimal Euler characteristic ($k \\ge 1$) and minimal Euler characteristic ($k = 0$). \nThroughout, $T_i \\cong T$ will denote the trefoil group and $G_{(n)}$ will be as defined in \\cref{section:cd(G)}.\n\n\\subsection{Finite $(G,n)$-complexes with non-minimal Euler characteristic}\n\\label{subsection:non-min-EC}\n\nThe aim of this section will be to prove the following. Note that, in the case $n \\ge 3$, we could also take $G$ to be one of the other groups listed at the end of \\cref{section:proof-main-algebra}.\n\n\\begin{thm} \\label{thm:main-non-min-EC}\nLet $n \\ge 2$, let $k \\ge 1$ and let $G = (T_1 \\ast \\cdots \\ast T_k)_{(n-1)}$. Then, for all $1 \\le m \\le k$, there exist infinitely many finite $(G,n)$-complexes $\\widehat X_i$ such that:\n\\begin{clist}{(i)}\n\\item $\\pi_n(\\widehat X_i) \\cong \\widehat S_i$ as $\\mathbb{Z} G$-modules (where $\\widehat S_i$ is as defined in \\cref{thm:main-SF-detailed})\n\\item $\\chi(\\widehat X_i) = m + \\chi_{\\min}(G,n)$\n\\item $\\widehat X_i \\not \\simeq Y_i \\vee S^n$ for any finite $(G,n)$-complex $Y_i$.\n\\end{clist}\n\\end{thm}\n\nSince the $\\Aut(G)$-isomorphism class of $\\pi_n(\\widehat X_i)$ is a homotopy invariant, it follows that the $\\widehat X_i$ are homotopically distinct by \\cref{thm:main-SF-detailed}. By restricting to the case $m=k$, this implies \\cref{thm:main} for $k \\ge 1$. \n\nWe will begin with the case $n=2$, where $G = T_1 \\ast \\cdots \\ast T_k$. 
Let $S_i$ be the stably free $\\mathbb{Z} T$-modules from \\cref{thm:BD} and, for $1 \\le m \\le k$, recall that:\n\\[ S_{i_1, \\cdots, i_m} = {\\iota_1}_\\#(S_{i_1}) \\oplus \\cdots \\oplus {\\iota_m}_\\#(S_{i_m}).\\]\nThe case of interest will be $\\widehat S_i = S_{i_1, \\cdots, i_m}$ where $i_j = i$ for all $j$.\n\nThe main result we will use is the following, which is \\cite[Theorem 4.5]{HJ06a}.\n\n\\begin{thm}[Harlander-Jensen] \\label{thm:HJ}\nThe trefoil group $T$ has presentations\n\\[ \\mathcal{P}_i = \\langle x,y,a,b \\mid x^2=y^3, a^2=b^3, x^{2i+1}=a^{2i+1}, y^{3i+1}=b^{3i+1} \\rangle \\]\nfor $i \\ge 0$. For each $i$, there exists $\\ell_i$ such that $S_i \\cong \\pi_2(X_{\\mathcal{P}_{\\ell_i}})$ as $\\mathbb{Z} T$-modules.\n\\end{thm}\n\n\\begin{remark}\nNote that $\\mathcal{P}_0 \\simeq \\langle x, y \\mid x^2=y^3, 1 \\rangle$ and $\\mathcal{P}_1$ is homotopy equivalent to the presentation found by Dunwoody in \\cite{Du76}.\n\\end{remark}\n\nLet $X_i = X_{\\mathcal{P}_{\\ell_i}}$ for each $i \\ge 1$ and let $X_{\\mathcal{P}}$ denote the presentation complex of the standard presentation $\\mathcal{P} = \\langle x, y \\mid x^2=y^3 \\rangle$, which is aspherical. For integers $i_j \\ge 1$, define:\n\\[ X_{i_1, \\cdots, i_m} = X_{i_1} \\vee \\cdots \\vee X_{i_m} \\vee \\underbrace{X_{\\mathcal{P}} \\vee \\cdots \\vee X_{\\mathcal{P}}}_{k-m}\\]\nwhich is a finite $2$-complex with $\\pi_1(X_{i_1, \\cdots, i_m}) \\cong T_1 \\ast \\cdots \\ast T_k$. Let $\\widehat X_i = X_{i_1, \\cdots, i_m}$ where $i_j = i$ for all $j$.\nBy repeated application of \\cref{cor:pi_2-of-wedge}, we have that $\\pi_2(X_{i_1,\\cdots,i_m}) \\cong S_{i_1, \\cdots, i_m}$ and so $\\pi_2(\\widehat X_i) \\cong \\widehat S_i$. \nSince $\\rank(\\widehat S_i) = m$, we have that $\\chi(\\widehat X_i) = m + \\chi_{\\min}(G,2)$ by \\cref{prop:rank-computation}. 
Finally, if $\\widehat X_i \\simeq Y_i \\vee S^2$, then:\n\\[ \\widehat S_i \\cong \\pi_2(\\widehat X_i) \\cong \\pi_2(Y_i) \\oplus (\\mathbb{Z} G \\otimes_{\\mathbb{Z}} \\pi_2(S^2)) \\cong \\pi_2(Y_i) \\oplus \\mathbb{Z} G \\]\nwhich is a contradiction since $\\widehat S_i$ has no summand of the form $\\mathbb{Z} G$ by \\cref{thm:main-SF-detailed}.\nThis completes the proof of \\cref{thm:main-non-min-EC} in the case $n=2$.\n\nWe will now consider the case $n \\ge 3$, where $G = (T_1 \\ast \\cdots \\ast T_k)_{(n-1)}$. By \\cref{thm:main-SF-detailed}, there exist stably free $\\mathbb{Z} G$-modules $\\widehat S_i$ of rank $m$ which have no summand of the form $\\mathbb{Z} G$. By \\cref{prop:group-construction}, we have that $\\cd(G) = n$ and so, by \\cref{cor:homotopy-cd-finite}, there exist finite $(G,n)$-complexes $\\widehat X_i$ such that $\\pi_n(\\widehat X_i) \\cong \\widehat S_i$. We can now argue similarly to the case $n=2$. This completes the proof of \\cref{thm:main-non-min-EC}.\n\n\\subsubsection{Application to Syzygies}\n\nWe now discuss consequences of \\cref{thm:main-non-min-EC} for syzygies.\nRecall that a $\\mathbb{Z} G$-module $M_0 \\in \\Omega_n(\\mathbb{Z})$ is \\textit{minimal} if $M \\in \\Omega_n(\\mathbb{Z})$ implies that $M \\oplus \\mathbb{Z} G^i \\cong M_0 \\oplus \\mathbb{Z} G^j$ for some $i \\le j$. For $k \\ge 0$, we say that $M \\in \\Omega_n(\\mathbb{Z})$ has \\textit{level $k$} if $M \\oplus \\mathbb{Z} G^i \\cong M_0 \\oplus \\mathbb{Z} G^j$ where $j-i = k$ and $M_0$ is minimal.\nIf $X$ is a finite $(G,n)$-complex, then $\\pi_n(X) \\in \\Omega_{n+1}(\\mathbb{Z})$. If $\\cd(G) = n$ and $\\pi_n(X)$ is a stably free $\\mathbb{Z} G$-module of rank $k$, then $\\pi_n(X)$ has level $k$. \nHence, by \\cref{thm:main-non-min-EC}, we have: \n\n\\begin{corollary} \\label{cor:syzygies}\nFor all $n \\ge 3$ and $k \\ge 1$, there exists a finitely presented group $G$ such that $\\Omega_n(\\mathbb{Z})$ has branching at level $k$. 
Furthermore, there exist infinitely many $\\mathbb{Z} G$-modules $M_i \\in \\Omega_n(\\mathbb{Z})$ at level $k$ which are distinct up to $\\Aut(G)$-isomorphism.\n\\end{corollary}\n\n\\subsection{Finite $(G,n)$-complexes with minimal Euler characteristic}\n\\label{subsection:min-EC}\n\nThe following is the main result of \\cite{Lu93}.\n\n\\begin{thm}[Lustig] \\label{thm:lustig}\nLet $G = T_{(2)}$. Then there exist infinitely many homotopically distinct finite $2$-complexes $X_i$ for $i \\ge 1$ such that $\\pi_1(X_i) \\cong G$ and $\\chi(X_i) = 1$.\n\\end{thm}\n\nThe aim of this section will be to give the following generalisation of Lustig's result, which includes an identification of $\\chi_{\\min}(T_{(2)},2)$.\n\n\\begin{thm} \\label{thm:main-min-EC}\nLet $n \\ge 2$ and let $G = T_{(n)}$. Then there exist infinitely many finite $(G,n)$-complexes $\\widehat X_i$ for $i \\ge 1$ such that:\n\\begin{clist}{(i)}\n\\item $H_n(\\widehat X_i; \\mathbb{Z} T) \\cong S_i$ as $\\mathbb{Z} T$-modules (where $S_i$ is as defined in \\cref{thm:BD})\n\\item $\\chi(\\widehat X_i) = \\chi_{\\min}(G,n)$.\n\\end{clist}\n\\end{thm}\n\n\\begin{remark}\nThis corrects a statement made in \\cite[Section 5]{HJ06b} where it was suggested that $\\chi(X_i) = 1 + \\chi_{\\min}(T_{(2)},2)$. \nIn fact, we have $\\chi(\\widehat X_i) = \\chi_{\\min}(T_{(n)},n) = 1-n$.\n\\end{remark}\n\nBy \\cref{prop:ZH-homology}, the $\\Aut(T)$-isomorphism class of $H_n(\\widehat X_i; \\mathbb{Z} T)$ is a homotopy invariant and so the $\\widehat X_i$ are homotopically distinct by \\cref{thm:BD}. 
Hence this implies \\cref{thm:main} in the case $k = 0$.\n\nWe will begin with the following lemma, which can be verified directly.\n\n\\begin{lemma} \\label{lemma:alg(n)-alg(n+1)}\nLet $n \\ge 2$, let $G$ be a group and let $E = (\\mathbb{Z} G^{d_i}, \\partial_i)_{i=1}^n \\in \\text{\\normalfont{Alg}}(G,n)$.\nIf $G_{+} = (G \\times \\langle q \\mid \\hspace{-.8mm}-\\rangle) \\ast_{\\langle q = r^2 \\rangle} \\langle r \\mid \\hspace{-.8mm}-\\rangle$, then:\n\\[ E_+ = (\\mathbb{Z} G_+^{d_n} \\xrightarrow[]{\\partial_{n+1}^+} \\cdots \\xrightarrow[]{\\partial_2^+} \\mathbb{Z} G_+^{d_1+1} \\xrightarrow[]{\\partial_1^+} \\mathbb{Z} G_+ \\xrightarrow[]{\\varepsilon_{G_+}} \\mathbb{Z} \\to 0) \\in \\text{\\normalfont{Alg}}(G_+,n+1)\\]\nwhere $\\partial_1^+ = \\cdot\\left(\\begin{smallmatrix}\\partial_1 \\\\ r-1 \\end{smallmatrix}\\right)$, $\\partial_2^+ = \\cdot\\left(\\begin{smallmatrix}\\partial_2 & 0 \\\\ r^2-1 & -\\partial_1 \\cdot (r+1) \\end{smallmatrix}\\right)$ and $\\partial_i^+ = \\cdot\\left(\\begin{smallmatrix}\\partial_i & 0 \\\\ r^2-1 & -\\partial_{i-1} \\end{smallmatrix}\\right)$ for $i \\ge 3$. 
The $\\partial_*$ are the induced maps and we take $d_{n+1}=0$, $\\partial_{n+1}=0$.\n\\end{lemma}\n\n\\begin{remark}\nThis also works when $G_{+} = (G \\times \\langle q \\mid \\hspace{-.8mm}-\\rangle) \\ast_{\\langle q = r^m \\rangle} \\langle r \\mid \\hspace{-.8mm}-\\rangle$ for $m \\ge 2$.\n\\end{remark}\n\nLet $\\mathcal{P} = \\langle x, y \\mid x^2 = y^3 \\rangle$ be the standard presentation for $T$ and note that:\n\\[ C_*(\\widetilde X_{\\mathcal{P}}) \\cong (\\mathbb{Z} T \\xrightarrow[]{\\partial_2} \\mathbb{Z} T^2 \\xrightarrow[]{\\partial_1} \\mathbb{Z} T \\xrightarrow[]{\\varepsilon_T} \\mathbb{Z} \\to 0) \\in \\text{\\normalfont{Alg}}(T,2)\\]\nwhere $\\partial_1 = \\cdot\\left(\\begin{smallmatrix} x-1 \\\\ y-1 \\end{smallmatrix}\\right)$ and $\\partial_2 = \\cdot\\left(\\begin{smallmatrix} x+1 & -(y^2+y+1) \\end{smallmatrix}\\right)$.\nThis has $\\pi_2(C_*(\\widetilde X_{\\mathcal{P}})) = 0$.\n\nFor each $n \\ge 1$, define $\\widetilde E_n \\in \\text{\\normalfont{Alg}}(T_{(n)},n+1)$ by $\\widetilde E_1 = C_*(\\widetilde X_{\\mathcal{P}})$ and $\\widetilde E_{n} = (\\widetilde E_{n-1})_+$ for $n \\ge 2$ using \\cref{lemma:alg(n)-alg(n+1)}.\nLet $E_n \\in \\text{\\normalfont{Alg}}(T_{(n)},n)$ denote the restriction to the first $n+1$ terms in $\\widetilde E_n$. Since $\\pi_2(\\widetilde E_1) = 0$, we have that $\\pi_{n+1}(\\widetilde E_n) = 0$ and this implies that\n$\\pi_{n}(E_n) = \\IM(\\partial_{n+1}^{\\widetilde E_n}) \\cong \\mathbb{Z} T_{(n)}$.\n\nFor $n \\ge 2$, let $\\Delta_n = \\partial_{n}^{\\widetilde E_n}$ denote the final boundary map in $E_n$, so that:\n\\[ \n\\Delta_1 = \\partial_1 \\cdot (r_1+1), \\quad \n\\Delta_{n} = \\cdot\\left(\\begin{smallmatrix} v_{n} & 0 \\\\ r_{n-1}^2-1 & -\\Delta_{n-1} \\end{smallmatrix}\\right) : \\mathbb{Z} T_{(n)}^{n+1} \\to \\mathbb{Z} T_{(n)}^{\\frac{n(n+1)}{2}}\n\\]\nwhere $v_{n} = (r_{n-2}^2-1, (-1)(r_{n-3}^2-1), \\cdots, (-1)^{n-3}(r_1^2-1), (-1)^{n-2}\\partial_2)$. 
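For example, in the case $n = 2$ we have $v_2 = (\\partial_2)$ and unravelling the recursion gives:\n\\[ \\Delta_2 = \\cdot\\left(\\begin{smallmatrix} x+1 & -(y^2+y+1) & 0 \\\\ r_1^2-1 & 0 & (1-x)(r_1+1) \\\\ 0 & r_1^2-1 & (1-y)(r_1+1) \\end{smallmatrix}\\right) : \\mathbb{Z} T_{(2)}^{3} \\to \\mathbb{Z} T_{(2)}^{3}. \\]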
Here $\\Delta_1$ is defined for the purposes of this definition and \ndoes not coincide with $\\partial_1^{E_1} = \\partial_1$.\n\nLet $\\alpha_n, \\beta_n$ denote the last two row vectors in $\\Delta_n$, which are defined by: \n\\[ \\alpha_1 = (x-1)(r_1+1), \\quad \\beta_1 = (y-1)(r_1+1)\\]\n\\[ \\alpha_n = (\\underbrace{0, \\cdots, 0}_{n-2} , r_{n-1}^2-1, 0, -\\alpha_{n-1}), \\quad \\beta_n = (\\underbrace{0, \\cdots, 0}_{n-1} , r_{n-1}^2-1, -\\beta_{n-1}).\\]\nFor each $i \\ge 0$, let $\\alpha_n^{(i)} = \\Sigma_x \\alpha_n$, $\\beta_n^{(i)} = \\Sigma_y \\beta_n$ where $\\Sigma_x = \\sum_{j=0}^{2i} x^j$, $\\Sigma_y = \\sum_{j=0}^{3i} y^j$. \n\nWe will now show the following, where we adopt the notation of \\cref{subsection:change-of-group}.\n\n\\begin{prop} \\label{prop:alg-alterations}\nFor $n \\ge 2$, let $\\Delta_n^{(i)}$ be the matrix $\\Delta_n$ but with $\\alpha_n, \\beta_n$ replaced by $\\alpha_n^{(i)}, \\beta_n^{(i)}$, and let $E_{n}^{(i)}$ be the complex $E_n$ but with $\\Delta_n$ replaced by $\\Delta_n^{(i)}$. 
Then:\n\\begin{clist}{(i)}\n\\item $E_n^{(i)} \\in \\text{\\normalfont{Alg}}(T_{(n)},n)$\n\\item If $f : T_{(n)} \\twoheadrightarrow T$, then $H_n(E_n^{(i)};\\mathbb{Z} T) \\cong \\Ker(\\cdot\\left(\\begin{smallmatrix}x^{2i+1}-1 \\\\ y^{3i+1}-1 \\end{smallmatrix}\\right))$ as $\\mathbb{Z} T$-modules.\n\\end{clist}\n\\end{prop}\n\nFor the convenience of the reader, we will write this explicitly in the case $n=2$:\n\n\\newcommand{\\DeltaTwo}{\\scriptsize{\\cdot\\left(\\begin{smallmatrix}\nx+1 & -(y^2+y+1) & 0 \\\\ \n(r_1^2-1)\\Sigma_x & 0 & (1-x^{2i+1})(r_1+1) \\\\ \n0 & (r_1^2-1)\\Sigma_y & (1-y^{3i+1})(r_1+1) \n\\end{smallmatrix}\\right)}}\n\n\\newcommand{\\delOne}{\\cdot\\left(\\begin{smallmatrix}x-1 \\\\ y-1 \\\\ r_1 -1 \\end{smallmatrix}\\right)}\n\n\\vspace{-2mm}\n\\[ E_2^{(i)} = (\\mathbb{Z} T_{(2)}^3 \\xrightarrow[]{\\DeltaTwo} \\mathbb{Z} T_{(2)}^3 \\xrightarrow[]{\\delOne} \\mathbb{Z} T_{(2)} \\xrightarrow[]{\\varepsilon} \\mathbb{Z} \\to 0). \\]\n\\vspace{-2mm}\n\nIn order to prove this, we will first need the following technical lemma.\n\n\\begin{lemma} \\label{lemma:identity}\nLet $G$ be a group with $T \\subseteq G$. 
There exist $\\lambda_1, \\lambda_2, \\lambda_3, \\mu_1, \\mu_2, \\mu_3 \\in \\mathbb{Z} T \\subseteq \\mathbb{Z} G$ (depending on $i \\ge 0$) such that, for all $r \\in G$, we have:\n\n\\vspace{2mm}\n\\begin{adjustbox}{width=\\textwidth+3mm}\n$(r-1,0,1-x) = \\lambda_1 \\cdot (\\Sigma_x(r-1),0,1-x^{2i+1}) + \\lambda_2 \\cdot (0,\\Sigma_y(r-1),1-y^{3i+1}) + \\lambda_3 \\cdot (\\partial_2,0) \\cdot (r-1)$\n\\end{adjustbox}\n\n\\vspace{2mm}\n\\begin{adjustbox}{width=\\textwidth+3mm}\n$(0,r-1,1-y) = \\mu_1 \\cdot (\\Sigma_x(r-1),0,1-x^{2i+1})\n+ \\mu_2 \\cdot (0,\\Sigma_y(r-1),1-y^{3i+1}) + \\mu_3 \\cdot (\\partial_2,0) \\cdot (r-1)$\n\\end{adjustbox}\n\\end{lemma}\n\n\\begin{proof}[Proof of \\cref{prop:alg-alterations}]\nTo prove (i), it suffices to show that $\\IM(\\cdot\\Delta_n^{(i)}) = \\IM(\\cdot\\Delta_n)$ for $i \\ge 1$. We have $\\IM(\\cdot\\Delta_n^{(i)}) \\subseteq \\IM(\\cdot\\Delta_n)$, so it remains to show $\\alpha_n, \\beta_n \\in \\IM(\\cdot\\Delta_n^{(i)})$.\n\nBy the proof of \\cref{lemma:identity}, we have $\\mathbb{Z} T \\cdot \\{ x-1, y-1 \\} = \\mathbb{Z} T \\cdot \\{ x^{2i+1}-1,y^{3i+1}-1 \\}$. It follows that $\\mathbb{Z} T \\cdot \\{ \\alpha_1, \\beta_1\\} = \\mathbb{Z} T \\cdot \\{\\alpha_1^{(i)}, \\beta_1^{(i)}\\}$ which implies that $\\alpha_1, \\beta_1 \\in \\IM(\\cdot\\Delta_1^{(i)})$.\nThe case $n = 2$ is done in \\cref{lemma:identity}, which provides $\\lambda_1, \\lambda_2, \\lambda_3$ such that:\n\\[ \\alpha_2 = \\lambda_1 \\cdot \\alpha_2^{(i)} + \\lambda_2 \\cdot \\beta_2^{(i)} + \\lambda_3(r_1^2-1) \\cdot (\\partial_2,0) \\]\nand similarly for $\\mu_1, \\mu_2, \\mu_3$ and $\\beta_2$.\nLet $\\gamma_1, \\cdots , \\gamma_{n-1}$ denote the first $n-1$ rows of $\\Delta_n$, the remaining two rows being $\\alpha_n, \\beta_n$. It is now straightforward to see that:\n\\[ \\alpha_n = \\lambda_1 \\cdot \\alpha_n^{(i)} + \\lambda_2 \\cdot \\beta_n^{(i)} + \\lambda_3(-1)^n((r_{n-1}^2-1) \\cdot \\gamma_1 + \\sum_{j=2}^{n-1} (-1)^{j}(r_{n-j}^2-1) \\cdot \\gamma_{j}) \\]\nfor $n \\ge 2$, and similarly for $\\beta_n$. 
Hence $\\alpha_n, \\beta_n \\in \\IM(\\cdot\\Delta_n^{(i)})$ for all $n \\ge 2$.\n\nTo prove (ii), note that $H_n(E_n^{(i)}; \\mathbb{Z} T) = \\Ker(f_\\#(\\Delta_n^{(i)}))$. For each $n \\ge 2$, we have:\n\\[ f_\\#(\\Delta_n^{(i)}) = \\cdot \\left(\\begin{smallmatrix} f_\\#(v_n) & 0 \\\\ 0 & - f_\\#(\\Delta_{n-1}^{(i)}) \\end{smallmatrix}\\right). \\]\nSince $f_\\#(v_n) = (0, \\cdots, 0, (-1)^{n-2}\\partial_2)$ is injective, this implies that $\\Ker(f_\\#(\\Delta_n^{(i)})) = \\Ker(-f_\\#(\\Delta_{n-1}^{(i)}))$ and so, by induction:\n\\[ \\Ker(f_\\#(\\Delta_n^{(i)})) \\cong \\Ker(f_\\#(\\Delta_1^{(i)})) = \\Ker(\\cdot\\left(\\begin{smallmatrix} 2(x^{2i+1}-1) \\\\ 2(y^{3i+1}-1) \\end{smallmatrix}\\right)) = \\Ker(\\cdot\\left(\\begin{smallmatrix} x^{2i+1}-1 \\\\ y^{3i+1}-1 \\end{smallmatrix}\\right)). \\qedhere \\]\n\\end{proof}\n\\vspace{-1mm}\n\nLet $G = T_{(n)}$. For each $i \\ge 1$, there exists $\\ell_i$ such that $\\Ker(\\cdot\\left(\\begin{smallmatrix} x^{2\\ell_i+1}-1 \\\\ y^{3\\ell_i+1}-1 \\end{smallmatrix}\\right)) \\cong S_i$ where the $S_i$ are as defined in the discussion following \\cref{thm:BD}.\n\nIf $n \\ge 3$, then \\cref{prop:realisation-thm} implies that there exist finite $(G,n)$-complexes $\\widehat X_i$ such that $C_*(\\widetilde X_i)$ and $E_n^{(\\ell_i)}$ are chain homotopy equivalent, where $\\widetilde X_i$ is the universal cover of $\\widehat X_i$. This is also true when $n=2$ by taking $\\widehat X_i = X_{\\mathcal{P}_{\\ell_i}}$ where:\n\\[ \\mathcal{P}_i = \\langle a, b, c \\mid a^2=b^3, [a^2,b^{2i+1}], [a^2,c^{3i+1}] \\rangle \\]\nare the presentations given by Lustig in \\cite{Lu93}.\n\nBy \\cref{prop:alg-alterations}, $H_n(\\widehat X_i; \\mathbb{Z} T) \\cong S_i$ as $\\mathbb{Z} T$-modules. It is straightforward to see that $\\rank(\\pi_n(E_n^{(\\ell_i)})) = \\rank(\\pi_n(E_n)) = 1$. By \\cref{prop:modules-over-quotients}, $\\cd(G) = n+1$ and so $0 \\not \\in \\IM(\\pi_n : \\PHT(G,n) \\to \\Omega_{n+1}(\\mathbb{Z}))$ by \\cref{prop:syzygies-cd-finite}. 
Hence, by \\cref{prop:rank-computation}, we have $\\chi(\\widehat X_i) = \\chi_{\\min}(G,n)$. This completes the proof of \\cref{thm:main-min-EC}. By combining with \\cref{thm:main-non-min-EC}, this completes the proof of \\cref{thm:main}.\n\n\\section{Proofs of Theorems \\ref{thm:main-SF-further} and \\ref{thm:main-further}}\n\\label{section:proof-main-further}\n\nThe aim of this section will be to prove the following two theorems which imply Theorems \\ref{thm:main-SF-further} and \\ref{thm:main-further} respectively. The proofs are similar to those of Theorems \\ref{thm:main-SF} and \\ref{thm:main} and so many of the details will be omitted.\nWe will let $T$ denote the trefoil group.\n\n\\begin{thm} \\label{thm:main-SF-further-detailed}\nLet $d \\ge 2$ and let $G = \\ast_{i=1}^\\infty T_{(d-1)}$. Then $\\cd(G) = d$ and, for all $k \\ge 1$, there exist infinitely many stably free $\\mathbb{Z} G$-modules of rank $k$ which are distinct up to $\\Aut(G)$-isomorphism.\n\\end{thm}\n\nLet $S_i$ denote the stably free $\\mathbb{Z} T$-modules of \\cref{thm:BD} and let $\\iota_j : T_j \\hookrightarrow G$.\n\n\\begin{proof}\nLet $k \\ge 1$ and let $\\widehat S_i^{(k)} = \\bigoplus_{j=1}^k {\\iota_j}_\\#(S_i)$ for $i \\ge 1$. 
Since $\\widehat S_i^{(k)} \\oplus \\mathbb{Z} G \\cong \\mathbb{Z} G^{k+1}$, the $\\widehat S_i^{(k)}$ are stably free $\\mathbb{Z} G$-modules of rank $k$.\nLet $f: G \\twoheadrightarrow \\ast_{j=1}^\\infty T_j\/T_j''$ be induced by the characteristic quotients $f_j : (T_j)_{(d-1)} \\twoheadrightarrow T_j\/T_j''$.\nThis is characteristic by a mild generalisation of \\cref{prop:free-product-char} which applies since $T_j$ is finitely generated.\n\nFor $p$ prime, we have that $\\mathbb{F}_p \\otimes f_\\#(\\widehat S_i^{(k)}) \\cong \\oplus_{j=1}^k \\text{$\\bar{\\iota}_j$}_\\#(\\mathbb{F}_p \\otimes {f_j}_\\#(S_i))$ where $\\text{$\\bar{\\iota}_j$} : T_j\/T_j'' \\hookrightarrow \\ast_{j=1}^\\infty T_j\/T_j''$ is the inclusion map.\nSimilarly to the proof of \\cref{thm:main-SF-detailed}, there exist primes $p_i$ for $i \\ge 1$ such that $\\mathbb{F}_{p_i} \\otimes {f_j}_\\#(S_i) \\cong \\mathbb{F}_{p_i} [T_j\/T_j'']$ if and only if $i = j$. Since \\cref{thm:Bergman} and \\cref{cor:Bergman} also hold for infinite free products (see \\cite{Be74}), we get that the $\\mathbb{F}_p \\otimes f_\\#(\\widehat S_i^{(k)})$ are distinct up to $\\Aut(G)$-isomorphism. Since $f$ is characteristic, the $\\widehat S_i^{(k)}$ are distinct up to $\\Aut(G)$-isomorphism also.\n\\end{proof}\n\n\\begin{thm} \\label{thm:main-further-detailed}\nLet $n \\ge 2$ and let $G = \\ast_{i=1}^\\infty T_{(n-1)}$.\nThen there exists an aspherical $(G,n)$-complex $Y$ such that, for all $k \\ge 1$, there are infinitely many homotopically distinct $(G,n)$-complexes $X_i$ with $X_i \\vee S^n \\simeq Y \\vee (k+1)S^n$.\n\\end{thm}\n\n\\begin{proof}\nBy \\cref{lemma:alg(n)-alg(n+1)}, there exists $\\widetilde E_{n-1} \\in \\text{\\normalfont{Alg}}(T_{(n-1)},n)$ with $\\pi_n(\\widetilde E_{n-1}) = 0$. If $n \\ge 3$, then \\cref{prop:realisation-thm} implies that there exists a finite $(T_{(n-1)},n)$-complex $Y_0$ such that $C_*(\\widetilde Y_0)$ and $\\widetilde E_{n-1}$ are chain homotopy equivalent. 
This is also true when $n=2$ by taking $Y_0 = X_{\\mathcal{P}}$ where $\\mathcal{P} = \\langle x, y \\mid x^2=y^3 \\rangle$ is the standard presentation for $T$. Hence, for all $n \\ge 2$, $Y = \\vee_{i=1}^\\infty Y_0$ is an aspherical $(G,n)$-complex.\n\nFor all $i \\ge 1$, let $X_i = \\bigvee_{j=1}^k \\widehat X_i \\vee \\bigvee_{j=k+1}^\\infty Y_0$ where the $\\widehat X_i$ are the finite $(T_{(n-1)},n)$-complexes such that $\\pi_n(\\widehat X_i) \\cong S_i$ which were constructed in \\cref{thm:main-non-min-EC}.\nThen $X_i$ is a $(G,n)$-complex such that:\n\\[ \\pi_n(X_i) \\cong \\bigoplus_{j=1}^k {\\iota_j}_\\#(\\pi_n(\\widehat X_i)) \\oplus \\bigoplus_{j=k+1}^\\infty {\\iota_j}_\\#(\\pi_n(Y_0)) \\cong \\bigoplus_{j=1}^k {\\iota_j}_\\#(S_i) = \\widehat S_i^{(k)}. \\]\nSince the $\\widehat S_i^{(k)}$ are distinct up to $\\Aut(G)$-isomorphism, this implies that the $X_i$ are homotopically distinct.\nBy \\cref{thm:BD} and \\cref{prop:syzygies-cd-finite}, we have that $\\widehat X_i \\vee S^n \\simeq Y_0 \\vee 2S^n$. It follows that $X_i \\vee S^n \\simeq Y \\vee (k+1)S^n$, as required.\n\\end{proof}\n\n\\section{Some remarks on induced module decompositions}\n\\label{subsection:induced}\n\nRecall that Theorems \\ref{thm:main-SF} and \\ref{thm:main} concerned stably free $\\mathbb{Z} G$-modules and finite $2$-complexes $X$ with $\\pi_1(X) \\cong G$ where $G = \\ast_{i=1}^k G_i$.\nIn our example, $\\pi_2(X) \\otimes \\mathbb{F}_p$ was an induced $\\mathbb{F}_p G$-module whose component $\\mathbb{F}_p T$-modules $M_i$ were unique up to $\\mathbb{F}_p T$-isomorphism where $G_i = T$ is the trefoil group.\n\nThe aim of this section will be to investigate the extent to which this applies to all groups of the form $G = \\ast_{i=1}^k G_i$ and to $\\pi_2(X)$ rather than just $\\pi_2(X) \\otimes \\mathbb{F}$.\nFor simplicity, we will restrict to the case of $2$-complexes. 
However, all results have analogues for $(G,n)$-complexes for $n \\ge 3$.\n\n\\subsection{Existence of induced module decompositions}\n\\label{subsection:existence}\n\nWe will begin by considering the question of existence. From now on, we will take $\\mathbb{F}$ to be a field.\n\n\\begin{prop}[Existence over \\text{$\\mathbb{F}[G_1 \\ast \\cdots \\ast G_k]$}] \\label{prop:existence}\nLet $X$ be a finite $2$-complex with $\\pi_1(X) \\cong G_1 \\ast \\cdots \\ast G_k$. Then $\\pi_2(X) \\otimes \\mathbb{F}$ is an induced $\\mathbb{F}[G_1 \\ast \\cdots \\ast G_k]$-module.\n\\end{prop}\n\n\\begin{proof}\nLet $X_i$ be a finite $2$-complex with $\\pi_1(X_i) \\cong G_i$. Then $\\pi_1(\\vee_{i=1}^k X_i) \\cong \\ast_{i=1}^k G_i$ and so there exist $a, b \\ge 0$ such that $X \\vee aS^2 \\simeq \\vee_{i=1}^k X_i \\vee bS^2$. This implies that\n\\[ (\\pi_2(X) \\otimes \\mathbb{F}) \\oplus \\mathbb{F} G^a \\cong {\\iota_1}_\\#((\\pi_2(X_1) \\otimes \\mathbb{F}) \\oplus \\mathbb{F} G_1^b) \\oplus \\bigoplus_{j = 2}^k {\\iota_j}_\\#(\\pi_2(X_j) \\otimes \\mathbb{F}) \\]\nand so $\\pi_2(X) \\otimes \\mathbb{F}$ is a direct summand of an induced $\\mathbb{F}[G_1 \\ast \\cdots \\ast G_k]$-module. 
Hence, by \\cref{thm:Bergman}, $\\pi_2(X) \\otimes \\mathbb{F}$ is an induced $\\mathbb{F}[G_1 \\ast \\cdots \\ast G_k]$-module.\n\\end{proof}\n\n\\begin{thm}[Non-existence over \\text{$\\mathbb{Z}[G_1 \\ast \\cdots \\ast G_k]$}] \\label{thm:non-existence}\nFor all $k \\ge 2$, there exist groups $G_1, \\cdots, G_k$ and a finite $2$-complex $X$ with $\\pi_1(X) \\cong G_1 \\ast \\cdots \\ast G_k$ such that $\\pi_2(X)$ is not an induced $\\mathbb{Z}[G_1 \\ast \\cdots \\ast G_k]$-module.\n\\end{thm}\n\nIn order to prove this, we will need the following method of proving that presentation complexes are homotopy equivalent.\nIf $\\mathcal{P} = \\langle x_1, \\dots, x_n \\mid r_1, \\dots, r_m \\rangle$, then an \\textit{elementary transformation} on $\\mathcal{P}$ is an operation that replaces a relator $r_i$ with:\n\\begin{clist}{(i)}\n\\item $\\omega r_i \\omega^{-1}$ for a word $\\omega \\in F(x_1, \\cdots, x_n)$ (\\textit{conjugation})\n\\item $r_i^{-1}$ (\\textit{inversion})\n\\item $r_i r_j$ or $r_j r_i$ for some $j \\ne i$ (\\textit{left or right multiplication}).\n\\end{clist}\nWe say that two group presentations $\\mathcal{P}$ and $\\mathcal{Q}$ are \\textit{$Q$-equivalent} if they are related by a sequence of elementary transformations. If $\\mathcal{P}$ and $\\mathcal{Q}$ are $Q$-equivalent, then $X_{\\mathcal{P}}$ and $X_{\\mathcal{Q}}$ are (simple) homotopy equivalent \\cite[p20-29]{HMS93}.\n\nWe begin by noting the following, which is a generalisation of \\cite[Theorem 3]{HLM85}.\n\n\\begin{prop} \\label{prop:def(G*H)}\nLet $k \\ge 1$ and let $m_i, n_i \\ge 1$ for $i = 1, \\cdots, k$. Suppose there exist integers $r_i$, $q_i$ such that $(q_i,q_j)=1$ for all $i \\ne j$ and, for all $i$, we have: \n\\[ r_i^{m_i}-1=n_iq_i, \\qquad r_i \\equiv 1 \\mod n_i , \\qquad (m_i,n_i) \\ne 1. 
\\]\nThen $G = \\ast_{i=1}^k (\\mathbb{Z}\/m_i \\times \\mathbb{Z}\/n_i)$ has a presentation\n\\[ \\mathcal{P} = \\langle a_1, b_1, \\dots, a_k, b_k \\mid a_1^{m_1}, \\dots, a_k^{m_k}, a_1b_1a_1^{-1}b_1^{-r_1}, \\dots, a_kb_ka_k^{-1}b_k^{-r_k}, b_1^{n_1} \\cdots b_k^{n_k} \\rangle \\]\nof deficiency $-1$. Furthermore, if $\\mathcal{P}_i = \\langle a,b \\mid a^{n_i}, b^{m_i}, [a,b] \\rangle$ is the standard presentation for $\\mathbb{Z}\/m_i \\times \\mathbb{Z}\/n_i$, then $X_{\\mathcal{P}} \\vee (k-1)S^2 \\simeq X_{\\mathcal{P}_1} \\vee \\cdots \\vee X_{\\mathcal{P}_k}$.\n\\end{prop}\n\nThe conditions on $m_i, n_i$ are satisfied in the case where $m_i = n_i = p_i$ for distinct primes $p_i$. In particular, this applies to all groups of the form $G = \\ast_{i=1}^k (\\mathbb{Z}\/p_i)^2$.\n\n\\begin{proof}\nThe proof that $\\mathcal{P}$ presents $G$ is similar to the case $k=2$ (see \\cite[Theorem 3]{HLM85}), and so will be omitted.\nLet $\\mathcal{P}_+$ denote the presentation $\\mathcal{P}$ with additional relations $b_1^{n_1}, \\cdots, b_{k-1}^{n_{k-1}}$, so that $X_{\\mathcal{P}_+} \\simeq X_{\\mathcal{P}} \\vee (k-1)S^2$. In order to show that $X_{\\mathcal{P}} \\vee (k-1)S^2 \\simeq X_{\\mathcal{P}_1} \\vee \\cdots \\vee X_{\\mathcal{P}_k}$, it therefore suffices to show that $\\mathcal{P}_+$ and $\\mathcal{P}_1 \\ast \\cdots \\ast \\mathcal{P}_k$ are $Q$-equivalent. To see this, note that we can replace $b_1^{n_1} \\cdots b_k^{n_k} \\leadsto b_k^{n_k}$ by left-multiplying by the $b_i^{-n_i}$ for $1 \\le i < k$. 
Since $r_i \\equiv 1 \\mod n_i$, we can then replace $a_ib_ia_i^{-1}b_i^{-r_i} \\leadsto [a_i,b_i]$ by successively right-multiplying by $b_i^{n_i}$.\n\\end{proof}\n\nWe say that two $\\mathbb{Z} G$-modules $M$ and $M'$ are \\textit{stably isomorphic}, written $M \\cong_s M'$, if there exist $a,b \\ge 0$ such that $M \\oplus \\mathbb{Z} G^a \\cong M' \\oplus \\mathbb{Z} G^b$.\n\n\\begin{lemma} \\label{lemma:stable-uniqueness}\nFor $1 \\le i \\le k$, let $M_i$, $M_i'$ be finitely generated $\\mathbb{Z} G_i$-lattices such that \n\\[ {\\iota_1}_\\#(M_1) \\oplus \\cdots \\oplus {\\iota_k}_\\#(M_k) \\cong {\\iota_1}_\\#(M_1') \\oplus \\cdots \\oplus {\\iota_k}_\\#(M_k')\\] \nas $\\mathbb{Z}[G_1 \\ast \\cdots \\ast G_k]$-modules. Then $M_i \\cong_s M_i'$ for all $1 \\le i \\le k$.\n\\end{lemma}\n\n\\begin{proof}\nFor $1 \\le i \\le k$, let $q_i : G_1 \\ast \\cdots \\ast G_k \\twoheadrightarrow G_i$ be the projection map. By applying $(q_1)_\\#$ to the given isomorphism of $\\mathbb{Z}[G_1 \\ast \\cdots \\ast G_k]$-modules, we get that\n\\[ M_1 \\oplus \\bigoplus_{j=2}^k {(q_1 \\circ \\iota_j)}_\\#(M_j) \\cong M_1' \\oplus \\bigoplus_{j=2}^k {(q_1 \\circ \\iota_j)}_\\#(M_j') \\]\nas $\\mathbb{Z} G_1$-modules.\nIf $j \\ne 1$, then $q_1 \\circ \\iota_j : G_j \\to G_1$, $g \\mapsto 1$. If $M$ is a finitely generated $\\mathbb{Z} G_j$-module, then ${(q_1 \\circ \\iota_j)}_\\#(M) \\cong \\mathbb{Z} G_1 \\otimes_{\\mathbb{Z}} (\\mathbb{Z} \\otimes_{\\mathbb{Z} G_j} M)$. 
If $\\mathbb{Z} \\otimes_{\\mathbb{Z} G_j} M \\cong \\mathbb{Z}^{r_M} \\oplus F_M$ for $F_M$ a finite abelian group and $r_M \\ge 0$, then ${(q_1 \\circ \\iota_j)}_\\#(M) \\cong \\mathbb{Z} G_1^{r_M} \\oplus F_M G_1$.\n\nIn particular, for some finite abelian groups $F, F'$ and some $r,r' \\ge 0$, we have $M_1 \\oplus \\mathbb{Z} G_1^{r} \\oplus F G_1 \\cong M_1' \\oplus \\mathbb{Z} G_1^{r'} \\oplus F' G_1$.\nSince $M_1, M_1'$ are $\\mathbb{Z} G_1$-lattices, this $\\mathbb{Z} G_1$-isomorphism must induce isomorphisms $F G_1 \\cong F' G_1$ and $M_1 \\oplus \\mathbb{Z} G_1^{r} \\cong M_1' \\oplus \\mathbb{Z} G_1^{r'}$. Hence $M_1 \\cong_s M_1'$ and, by symmetry, we have that $M_i \\cong_s M_i'$ for all $1 \\le i \\le k$.\n\\end{proof}\n\n\\begin{proof}[Proof of \\cref{thm:non-existence}]\nLet $p_1, \\cdots, p_k$ be distinct primes and let $G = \\ast_{i=1}^k (\\mathbb{Z}\/p_i)^2$. By \\cref{prop:def(G*H)}, $G$ has a presentation $\\mathcal{P}$ of deficiency $-1$. \nWe claim that $\\pi_2(X_{\\mathcal{P}})$ is not an induced $\\mathbb{Z}[G_1 \\ast \\cdots \\ast G_k]$-module, where $G_i = (\\mathbb{Z}\/p_i)^2$ for all $i$.\n\nSuppose that $\\pi_2(X_{\\mathcal{P}}) = {\\iota_1}_\\#(M_1) \\oplus \\cdots \\oplus {\\iota_k}_\\#(M_k)$ for $\\mathbb{Z} G_i$-modules $M_i$. \nAgain by \\cref{prop:def(G*H)}, we have that $X_{\\mathcal{P}} \\vee (k-1)S^2 \\simeq X_{\\mathcal{P}_1} \\vee \\cdots \\vee X_{\\mathcal{P}_k}$ where the $\\mathcal{P}_i = \\langle a,b \\mid a^{p_i}, b^{p_i}, [a,b] \\rangle$ are the standard presentations for $G_i$.\nHence, we have:\n\\[ {\\iota_1}_\\#(M_1 \\oplus \\mathbb{Z} G_1^{k-1}) \\oplus \\bigoplus_{j = 2}^k {\\iota_j}_\\#(M_j) \\cong \\bigoplus_{j = 1}^k {\\iota_j}_\\#(\\pi_2(X_{\\mathcal{P}_j})). 
\\]\nBy \\cref{lemma:stable-uniqueness}, this implies that $M_i \\cong_s \\pi_2(X_{\\mathcal{P}_i})$ for all $i$ and so $M_i \\in \\Omega_3^{G_i}(\\mathbb{Z})$.\n\nIt follows from \\cite[Proposition 2.1]{Sw65} that $\\pi_2(X_{\\mathcal{P}_i}) \\in \\Omega_3^{G_i}(\\mathbb{Z})$ is minimal and so $M_i \\oplus \\mathbb{Z} G_i^{r_i} \\cong \\pi_2(X_{\\mathcal{P}_i}) \\oplus \\mathbb{Z} G_i^{s_i}$ for some integers $r_i \\le s_i$. This gives that:\n\\[ \\pi_2(X_{\\mathcal{P}}) \\oplus \\mathbb{Z} G^{s_1 + \\cdots + s_k + k -1} \\cong \\pi_2(X_{\\mathcal{P}}) \\oplus \\mathbb{Z} G^{r_1 + \\cdots + r_k}. \\]\nBy \\cite[Proposition 2.1]{Jo12a}, $\\sum s_i + k -1 = \\sum r_i \\le \\sum s_i$ which is a contradiction.\n\\end{proof}\n\n\\subsection{Uniqueness of induced module decompositions}\n\\label{subsection:uniqueness}\n\nWe will now turn to the question of uniqueness. The following is an immediate consequence of \\cref{cor:Bergman}.\n\n\\begin{prop}[Uniqueness over \\text{$\\mathbb{F}[G_1 \\ast \\cdots \\ast G_k]$}] \\label{prop:uniqueness}\nLet $X$ be a finite $2$-complex with $\\pi_1(X) \\cong G_1 \\ast \\cdots \\ast G_k$.\nIf $\\pi_2(X) \\otimes \\mathbb{F} \\cong {\\iota_1}_\\#(M_1) \\oplus \\cdots \\oplus {\\iota_k}_\\#(M_k)$ for $\\mathbb{F} G_i$-modules $M_i$ such that $\\mathbb{F} G_i \\nmid M_i$, then the $M_i$ are unique up to $\\mathbb{F} G_i$-module isomorphism.\n\\end{prop}\n\n\\begin{thm}[Non-uniqueness over \\text{$\\mathbb{Z}[G_1 \\ast \\cdots \\ast G_k]$}] \\label{thm:non-uniqueness}\nFor all $k \\ge 2$, there exist finite $2$-complexes $X_i$, $Y_i$ with $\\pi_1(X_i) \\cong \\pi_1(Y_i) \\cong G_i$ for $1 \\le i \\le k$ such that \n\\[ \\pi_2(X_1 \\vee \\cdots \\vee X_k) \\cong \\pi_2(Y_1 \\vee \\cdots \\vee Y_k) \\] \nbut, for all $i$, $\\mathbb{Z} G_i \\nmid \\pi_2(X_i), \\, \\pi_2(Y_i)$ and $\\pi_2(X_i)$, $\\pi_2(Y_i)$ are not $\\Aut(G_i)$-isomorphic.\n\\end{thm}\n\nIn order to prove this, we will begin by proving the 
following.\nWe note that this holds for a larger class of abelian groups than elementary abelian $p$-groups.\n\n\\begin{prop} \\label{prop:abelian-collapse}\nLet $k \\ge 2$ and let $p_i$ be distinct primes and $n_i \\ge 1$ for $i = 1, \\cdots, k$. If $\\mathcal{P}_i$, $\\mathcal{P}_i'$ are two presentations for $G_i = (\\mathbb{Z}\/p_i)^{n_i}$ with the same deficiency, then $X_{\\mathcal{P}_1} \\vee \\cdots \\vee X_{\\mathcal{P}_k} \\simeq X_{\\mathcal{P}_1'} \\vee \\cdots \\vee X_{\\mathcal{P}_k'}$.\n\\end{prop}\n\n\\begin{proof}\nFor ease of notation, we will let $k=2$. The general case is analogous. Let:\n\\[ \\mathcal{P}_r^{(i)} = \\langle a_1, \\cdots, a_{n_i} \\mid a_1^{p_i}, \\cdots, a_{n_i}^{p_i}, [a_1^r,a_2], \\{ [a_i,a_j] : i < j, (i,j) \\ne (1,2) \\} \\rangle\\]\nfor $r \\in \\mathbb{Z}$ with $(r,p_i)=1$. This is a presentation for $G_i$ and, since the homotopy type of $X_{\\mathcal{P}_r^{(i)}}$ can be shown to depend only on $r \\mod p_i$, we can take $r \\in (\\mathbb{Z}\/p_i)^\\times$.\n\nIt was shown by Browning \\cite{Br79} (see also \\cite[Proposition 9.2]{GL91}) that, if $\\mathcal{P}$ is a presentation for $(\\mathbb{Z}\/p_i)^{n_i}$, then $X_{\\mathcal{P}} \\simeq X_{\\mathcal{P}_r^{(i)}} \\vee \\ell S^2$ for some $r \\in (\\mathbb{Z}\/p_i)^\\times$, $\\ell \\ge 0$.\nIt suffices to show that $X_{\\mathcal{P}_{r}^{(1)}} \\vee X_{\\mathcal{P}_{s}^{(2)}} \\simeq X_{\\mathcal{P}_{1}^{(1)}} \\vee X_{\\mathcal{P}_{1}^{(2)}}$ for all $r \\in (\\mathbb{Z}\/p_1)^\\times$, $s \\in (\\mathbb{Z}\/p_2)^\\times$.\n\nAs in \\cref{prop:def(G*H)}, there exist integers $r_i$, $q_i$ such that $(q_i,q_j)=1$ for all $i \\ne j$ and such that $r_i^{p_i}-1=p_iq_i$ and $r_i \\equiv 1 \\mod p_i$ for all $i$.\nLet $r,s$ be integers such that $(r,p_1)=1$ and $(s,p_2)=1$. 
If $(rq_1,sq_2)=1$ then, by the same argument as given in \\cref{prop:def(G*H)}, $G = G_1 \\ast G_2$ has a presentation:\n\\begin{align*} \\mathcal{P}_{r,s} = &\\, \\langle a_1, \\dots, a_{n_1}, b_1, \\cdots, b_{n_2} \\mid \\{a_i^{p_1}\\}_{i=2}^{n_1}, \\{b_i^{p_2}\\}_{i=2}^{n_2}, a_1^{p_1} \\cdot b_1^{p_2}, \\\\ \n& a_2(a_1^r)a_2^{-1}(a_1^r)^{-r_1}, b_2(b_1^s)b_2^{-1}(b_1^s)^{-r_2}, \\{[a_i,a_j], [b_i,b_j] : i < j, (i,j) \\ne (1,2) \\} \\rangle.\n\\end{align*}\nThis form is general for all $r, s$ since, by Dirichlet's theorem on arithmetic progressions, there exist $r',s'$ such that $r' \\equiv r \\mod p_1$, $s' \\equiv s \\mod p_2$ and $(r'q_1,s'q_2)=1$.\n\nLet $(\\mathcal{P}_{r,s})_+$ denote the presentation $\\mathcal{P}_{r,s}$ with the additional relation $a_1^{p_1}$.\nIn $(\\mathcal{P}_{r,s})_+$, we can replace $a_1^{p_1} \\cdot b_1^{p_2} \\leadsto b_1^{p_2}$ by left multiplying with $a_1^{-p_1}$, then replace $a_2(a_1^r)a_2^{-1}(a_1^r)^{-r_1} \\leadsto [a_2, a_1^r]$ by right multiplying with $a_1^{r_1-1}$ (which works since $r_i \\equiv 1 \\mod p_i$), and similarly $b_2(b_1^s)b_2^{-1}(b_1^s)^{-r_2} \\leadsto [b_2, b_1^s]$. This implies that $(\\mathcal{P}_{r,s})_+$ and $\\mathcal{P}_{r}^{(1)} \\ast \\mathcal{P}_{s}^{(2)}$ are $Q$-equivalent and so\n$X_{\\mathcal{P}_{r,s}} \\vee S^2 \\simeq X_{(\\mathcal{P}_{r,s})_+} \\simeq X_{\\mathcal{P}_{r}^{(1)}} \\vee X_{\\mathcal{P}_{s}^{(2)}}$.\n\nNote that $\\mathcal{P}_{1,s}$ differs from $\\mathcal{P}_{r,s}$ by changing $a_2a_1a_2^{-1}a_1^{-r_1} \\leadsto a_2(a_1^r)a_2^{-1}(a_1^r)^{-r_1}$. Since both relations hold in $G$, we can add $a_2(a_1^r)a_2^{-1}(a_1^r)^{-r_1}$ to $\\mathcal{P}_{1,s}$ and add $a_2a_1a_2^{-1}a_1^{-r_1}$ to $\\mathcal{P}_{r,s}$ to get that $X_{\\mathcal{P}_{1,s}} \\vee S^2 \\simeq X_{\\mathcal{P}_{r,s}} \\vee S^2$. 
By symmetry, we also have that $X_{\\mathcal{P}_{r,1}} \\vee S^2 \\simeq X_{\\mathcal{P}_{r,s}} \\vee S^2$ and so $X_{\\mathcal{P}_{r}^{(1)}} \\vee X_{\\mathcal{P}_{s}^{(2)}} \\simeq X_{\\mathcal{P}_{1}^{(1)}} \\vee X_{\\mathcal{P}_{1}^{(2)}}$.\n\\end{proof}\n\nThe following can be found in \\cite[Theorem 1.2 (3)(iv)]{Li93}. This can also be deduced by combining the earlier work \\cite[Proposition 9]{SD79} with \\cite[Theorem 1.7]{Br79}.\n\n\\begin{lemma} \\label{lemma:abelian-non-cancellation}\nLet $G = (\\mathbb{Z}\/p)^n$ for $p$ prime and $n \\ge 1$. Let $\\delta(G)$ denote the number of $\\Aut(G)$-isomorphism classes of modules $\\pi_2(X_{\\mathcal{P}})$ for $\\mathcal{P}$ a presentation with $\\Def(\\mathcal{P}) = \\Def(G)$. If $p=2$, then $\\delta(G) = 1$ and, if $p$ is odd, then:\n\\[ \\delta(G) = \\begin{cases} (\\frac{p-1}{2},n-1), & \\text{if $n$ is even} \\\\\n (\\frac{p-1}{2}, \\frac{n-1}{2}), & \\text{if $n$ is odd}.\n \\end{cases}\n \\]\n\\end{lemma}\n\n\\begin{proof}[Proof of \\cref{thm:non-uniqueness}]\nLet $k \\ge 2$ and, for $i = 1, \\cdots, k$, let $p_i$ be distinct primes with $p_i \\equiv 1 \\mod 4$ and let $G_i = (\\mathbb{Z}\/p_i)^3$. By \\cref{lemma:abelian-non-cancellation}, we have that $\\delta(G_i) = 2$ and so there exist presentations $\\mathcal{P}^{(i)}$, $\\mathcal{Q}^{(i)}$ for $G_i$ such that $\\Def(\\mathcal{P}^{(i)}) = \\Def(\\mathcal{Q}^{(i)}) = \\Def(G_i)$ and $\\pi_2(X_{\\mathcal{P}^{(i)}}) \\not \\cong \\pi_2(X_{\\mathcal{Q}^{(i)}})$ are not $\\Aut(G_i)$-isomorphic.\n\nSimilarly to the proof of \\cref{thm:non-existence}, $\\pi_2(X_{\\mathcal{P}^{(i)}}), \\pi_2(X_{\\mathcal{Q}^{(i)}}) \\in \\Omega_3^{G_i}(\\mathbb{Z})$ are minimal by \\cite[Proposition 2.1]{Sw65}. 
This implies that $\\mathbb{Z} G_i \\nmid \\pi_2(X_{\\mathcal{P}^{(i)}}), \\, \\pi_2(X_{\\mathcal{Q}^{(i)}})$ for all $i$.\nBy \\cref{prop:abelian-collapse}, we have that $X_{\\mathcal{P}^{(1)}} \\vee \\cdots \\vee X_{\\mathcal{P}^{(k)}} \\simeq X_{\\mathcal{Q}^{(1)}} \\vee \\cdots \\vee X_{\\mathcal{Q}^{(k)}}$ and so $\\pi_2(X_{\\mathcal{P}^{(1)}} \\vee \\cdots \\vee X_{\\mathcal{P}^{(k)}}) \\cong \\pi_2(X_{\\mathcal{Q}^{(1)}} \\vee \\cdots \\vee X_{\\mathcal{Q}^{(k)}})$ as required.\n\\end{proof}\n\n\\section{List of open problems}\n\\label{subsection:problems}\n\nWe now collect a list of open problems on stably free $\\mathbb{Z} G$-modules and the homotopy type of finite $2$-complexes. Problems \\ref{problem:2}, \\ref{problem:4} and \\ref{problem:5} have analogues for $(G,n)$-complexes and are open for all $n \\ge 2$.\n\n\\subsection{Branching behaviour under multiple stabilisations}\n\nAs in the introduction, it is possible to resolve \\cref{problem:2-complexes} by exhibiting two types of branching behaviour.\nThe first type is exhibited in Theorems \\ref{thm:main-SF} and \\ref{thm:main}. It therefore remains to determine whether examples of the second type also exist.\n\n\\begin{problemlist} \\label{problem:1}\nFor which $k \\ge 1$ does there exist a group $G$ and stably free $\\mathbb{Z} G$-modules $S_1$, $S_2$ such that $S_1 \\oplus \\mathbb{Z} G^k \\cong S_2 \\oplus \\mathbb{Z} G^k$ and $S_1 \\oplus \\mathbb{Z} G^{k-1} \\not \\cong S_2 \\oplus \\mathbb{Z} G^{k-1}$?\n\\end{problemlist}\n\n\\begin{problemlist} \\label{problem:2}\nFor which $k \\ge 1$ do there exist finite $2$-complexes $X_1$, $X_2$ with $\\pi_1(X_1) \\cong \\pi_1(X_2)$ such that $X_1 \\vee kS^2 \\simeq X_2 \\vee kS^2$ and $X_1 \\vee (k-1)S^2 \\not \\simeq X_2 \\vee (k-1)S^2$?\n\\end{problemlist}\n\nTo the best of our knowledge, both problems are open for all $k \\ge 2$. 
\\cref{problem:2} in the case $k=2$ can be found in \\cite[Problem C]{Dy79a} and later appeared in \\cite[p124]{HMS93}.\n\n\\begin{figure}[h] \\vspace{-4mm} \n\\begin{center}\n\\begin{tabular}{ccccc}\n\\begin{tabular}{l}\n\\begin{tikzpicture}\n\\draw[fill=black] (1.5,1) circle (2pt);\n\\draw[fill=black] (3,0) circle (2pt);\n\\draw[fill=black] (2.5,1) circle (2pt);\n\\draw[fill=black] (2,2) circle (2pt);\n\\draw[fill=black] (2,3) circle (2pt);\n\\node at (2,3.6) {$\\vdots$};\n\\draw[black] (2.75,1) node[right]{\n$\\left.\n \\begin{array}{ll}\n \\\\ \\\\ \\\\\n \\end{array}\n\\right \\} k$};\n\\node at (1,3) {(a)};\n\\end{tikzpicture}\n\\end{tabular}\n&&&&\n\\begin{tabular}{l}\n\\begin{tikzpicture}\n\\draw[fill=black] (1,1) circle (2pt);\n\\draw[fill=black] (2,1) circle (2pt);\n\\draw[fill=black] (3,1) circle (2pt);\n\n\\draw[fill=black] (2,2) circle (2pt);\n\n\\draw[fill=black] (1,3) circle (2pt);\n\\draw[fill=black] (2,3) circle (2pt);\n\\draw[fill=black] (3,3) circle (2pt);\n\n\\draw[fill=black] (2,4) circle (2pt);\n\n\\node at (2,2.6) {$\\vdots$};\n\\node at (2,4.6) {$\\vdots$};\n\n\\node at (1,4) {(b)};\n\n\\draw (1,1) -- (2,2) (2,1) -- (2,2) (3,1) -- (2,2) (1,3) -- (2,4) (2,3) -- (2,4) (3,3) -- (2,4);\n\\end{tikzpicture}\n\\end{tabular}\n\\end{tabular}\n\\end{center}\n\\caption{Further branching phenomena in the tree of stably free $\\mathbb{Z} G$-modules or the tree of finite $2$-complexes $X$ with $\\pi_1(X) \\cong G$.} \\label{figure:further-branching}\n\\vspace{-2mm}\n\\end{figure}\n\n\\subsection{Finite $2$-complexes with arbitrary non-minimal Euler characteristic}\n\nWe can also ask whether or not the behaviour exhibited in \\cref{thm:main-SF-further} actually holds for finitely presented groups and finite $2$-complexes (see \\cref{figure:further-branching}b). 
As explained in the introduction, both problems have a negative answer when $\\mathbb{Z} G$ is Noetherian.\n\n\\begin{problemlist} \\label{problem:3}\nDoes there exist a finitely presented group $G$ such that, for infinitely many $k \\ge 1$, there exists a non-free stably free $\\mathbb{Z} G$-module of rank $k$?\n\\end{problemlist}\n\n\\begin{problemlist} \\label{problem:4}\nDoes there exist a finitely presented group $G$ such that, for infinitely many $k \\ge 0$, there exist homotopically distinct finite $2$-complexes $X_1$, $X_2$ with $\\pi_1(X_i) \\cong G$ and $\\chi(X_i) = k + \\chi_{\\min}(G)$?\n\\end{problemlist}\n\n\\subsection{Towards a general cancellation theorem for $2$-complexes}\n\nThe following is motivated by the results in \\cref{subsection:induced}.\nRecall that a CW-complex $X$ is irreducible if $X \\simeq Y \\vee Z$ for CW-complexes $Y$, $Z$ implies $Y$ or $Z$ is contractible.\n\n\\begin{problemlist} \\label{problem:5}\nLet $X_i$, $Y_i$ be irreducible non-simply connected finite $2$-complexes. When does $X_1 \\vee \\cdots \\vee X_k \\simeq Y_1 \\vee \\cdots \\vee Y_k$ imply that $X_i \\simeq Y_{\\sigma(i)}$ for some $\\sigma \\in S_k$?\n\\end{problemlist}\n\nHere irreducibility is necessary since it rules out the following two situations:\n\n\\begin{clist}{(a)}\n\\item \\textit{Exchange of subfactors}: If $X \\not \\simeq Z$, $Y \\not \\simeq \\ast$, then $(X \\vee Y) \\vee Z \\simeq X \\vee (Y \\vee Z)$.\n\\item \\textit{Non-cancellation}: If $X \\vee S^2 \\simeq Y \\vee S^2$, $X \\not \\simeq Y$, then $X \\vee (Z \\vee S^2) \\simeq Y \\vee (Z \\vee S^2)$. 
\n\\end{clist}\n\nThis was shown to be true by Jajodia \\cite[Corollary 4]{Ja79} when the $X_i$, $Y_i$ have a single $2$-cell.\nThe finite $2$-complexes given in the proof of \\cref{thm:non-uniqueness} are irreducible and so show that some further conditions must be imposed in general.\n\nFor a field $\\mathbb{F}$, it is possible to use Propositions \\ref{prop:existence} and \\ref{prop:uniqueness} to show that, subject to certain conditions, $\\pi_2(X_i) \\otimes \\mathbb{F} \\cong \\pi_2(Y_{\\sigma(i)}) \\otimes \\mathbb{F}$ are $\\Aut(G_i)$-isomorphic for some $\\sigma \\in S_k$. Results of this type are a first approximation to \\cref{problem:5} but are only useful in cases such as $G_i = T$ where information about $\\pi_2(X_i)$ is not lost by passing to $\\mathbb{F} G_i$.\nIn contrast, if $G_i$ is finite, then $\\pi_2(X_i) \\otimes \\mathbb{F}$ is determined by $\\chi(X_i)$ since $\\pi_2(X_i) \\otimes \\mathbb{F} \\cong I_{\\mathbb{F}}(G) \\oplus \\mathbb{F} G^{\\chi(X_i)-1}$ where $I_{\\mathbb{F}}(G) = \\Ker(\\varepsilon: \\mathbb{F} G \\twoheadrightarrow \\mathbb{F})$ \\cite[p120]{Jo03a}.\n\n\\section*{Acknowledgements}\nI would like to thank Martin Dunwoody, Jens Harlander and Francis E. A. 
Johnson for useful correspondence and a number of helpful comments.\nThis work was supported by the UK Engineering and Physical Sciences Research Council (EPSRC) grant EP\/N509577\/1.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Introduction}\n\nLarge language models (LMs) can perform unseen tasks by conditioning on a few labeled examples, effectively inferring the underlying tasks through a process known as \\textit{in-context learning} \\citep{gpt3}.\nHowever, task inference is implicit, and the ability of models to \\textit{explicitly} reason about it remains unexplored.\nIn this work, we show that LMs can explicitly describe an underlying task, in natural language, given a few labeled examples.\n\nWe introduce the \\textit{instruction induction} challenge, in which a model is provided with a few input-output demonstrations, and is requested to generate a natural language instruction describing the connection between the input-output pairs.\nIn our experiments, inducing instructions is done in a zero-shot manner by simply prompting the models to explain a small set of given demonstrations, as shown in Figure \\ref{fig:induction_example};\nwe do not perform fine-tuning or use any labeled instruction induction data.\n\nWe examine instruction induction on 24 tasks, ranging from morphosyntactic tasks (e.g., pluralization) to style transfer (e.g., formality) and sentiment analysis.\nAs a basic evaluation protocol, we collect human annotations and use them as gold-standard references; the generated instructions are then compared to these references using BERTScore \\citep{bertscore}.\nMoreover, we suggest a novel evaluation metric for instruction induction: \\textit{execution accuracy}. 
The execution accuracy of a generated instruction is measured by testing whether LMs can correctly perform the task in a zero-shot manner by using the generated instruction alone, without any demonstrations.\n\nOur experiments reveal a surprising ability at generating correct instructions.\nThe best-performing model, InstructGPT \\citep{instruct-gpt}, achieves an average BERTScore of 44.4, compared to human performance of 60.0; when measuring execution accuracy, the model reaches 43.6, with human-written instructions reaching 66.4.\nFor some tasks, the model's performance is on par or even better than human performance.\nWhen qualitatively examining the generated instructions, we often observe accurate instructions, even for some of the more challenging tasks.\nFor instance, in the task of formality style transfer, generated instructions include ``Translate the inputs into more formal language'' and ``Use formal language''.\nFor semantic text similarity, the generated instructions include ``For each input, rate the similarity of the two sentences on a scale of 0 to 5, with 5 being a perfect match'' and ``Determine whether the two sentences are about the same thing''.\n\nDespite these impressive results, we find that this ability is currently unique to InstructGPT \\citep{instruct-gpt}, which is both very large (175B parameters) and was especially fine-tuned to follow instructions.\nAblations on smaller versions of InstructGPT as well as the original 175B-parameter GPT-3 \\citep{gpt3} yield dramatically weaker performance.\nThese findings are in line with recent work showing that increasing model size unlocks new capabilities \\citep{palm,predictability-and-surprise}, and serves as additional evidence for the strength of instruction tuning \\citep{sanh2022multitask,wei2022finetuned,instruct-gpt}, perhaps even pointing to the necessity of complementing standard next-word prediction with additional objectives.\n\n\nThe fact that models can induce natural language 
instructions suggests that instruction-induction may serve as a learning paradigm of its own, where the goal is to find the best description in the natural language hypothesis space. While we currently provide a proof-of-concept for that idea, extending it by grounding models in natural language has the immediate benefit of human interpretability, and might also help alleviate overfitting and other issues associated with spurious correlations.\n\n\n\\begin{figure}[t!]\n\\centering\n \\includegraphics[width=1.0\\textwidth]{images\/fig1_office.pdf}\n \\caption{An example of instruction induction for the task of formality style transfer. \\textit{Left:} the standard in-context learning setting; given five demonstrations, complete the sixth. \\textit{Right:} instruction induction; the language model is prompted to generate a natural language instruction that describes the demonstrations. Model completions are in \\textcolor{fig1blue}{blue}, prompt templates are in \\textcolor{fig1pink}{pink}.}\n \\label{fig:induction_example}\n\\end{figure}\n\n\n\\section{Instruction Induction}\n\n\n\nWe begin by formulating the task of instruction induction. Given a sequence of $n$ demonstrations $\\{x_k, y_k\\}_{k \\in \\{1,\\ldots,n\\}}$, the goal is to generate a \\textit{single} natural language instruction, such that for each $x_k$, following the instruction results in $y_k$.\nThis format is similar to in-context learning \\citep{gpt3}, only here the desired output is an \\textit{instruction} describing the relation between the inputs and outputs of the demonstrations. We require models to perform this in a zero-shot setting, without any fine-tuning on labeled data.\nFigure~\\ref{fig:induction_example} illustrates the difference between standard in-context prompting and instruction-induction prompting.\n\nTo elicit models to generate instructions, we consider prompts that would elicit humans to do so. 
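Concretely, the prompt-construction step can be sketched in a few lines of Python. The meta-prompt wording below is an illustrative paraphrase of the template shown in Figure~\ref{fig:induction_example}, not the verbatim text:

```python
def build_induction_prompt(demonstrations):
    """Wrap input-output demonstrations in an instruction-induction prompt.

    The surrounding meta-prompt sentences here are a paraphrase (an
    assumption, not the paper's verbatim template); the "Input:"/"Output:"
    demonstration format and new-line separators follow the Data section.
    """
    lines = [
        "I gave a friend an instruction and five inputs.",
        "The friend read the instruction and wrote an output for every one of the inputs.",
        "Here are the input-output pairs:",
        "",
    ]
    for x, y in demonstrations:
        lines += [f"Input: {x}", f"Output: {y}", ""]
    lines.append("The instruction was:")
    return "\n".join(lines)

# Five pluralization demonstrations, as sampled from an induce set.
demos = [("cat", "cats"), ("dog", "dogs"), ("wish", "wishes"),
         ("life", "lives"), ("mouse", "mice")]
prompt = build_induction_prompt(demos)
print(prompt)
```

The model's greedy completion of such a prompt is then taken as the generated instruction.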
We design a meta-prompt presenting instruction induction as a challenge puzzle and verify its clarity in a human study (\\S\\ref{sec:verification}).\nThe prompt is presented in Figure~\\ref{fig:induction_example} (right side, in pink).\\footnote{We found this prompt informative for both humans and models in preliminary experiments.}\n\nWhile prior work already shows that large LMs are often able to infer a latent task from a given set of demonstrations, this has been largely based on their ability to \\textit{execute} the task on a held-out example.\nInstruction induction requires that the model \\textit{describe} the underlying task in natural language.\n\n\\section{Data}\n\\label{sec:data}\n\nWe evaluate on 24 tasks, listed in Table \\ref{tab:tasks}.\nWe select these tasks as they vary in difficulty and represent different aspects of language understanding, ranging from surface-level spelling to sentence similarity and causality detection.\\footnote{See Appendix~\\ref{sec:data_construction} for the full details of each task.}\nWe review the dataset's format, the annotation and verification processes we conducted to ensure that the tasks are viable, and finally discuss a theoretical limitation of this setup.\n\n\\input{03a_data_tab}\n\n\n\\subsection{Format}\n\nIn every task, each single \\textit{demonstration} $(x_k, y_k)$ is formatted as follows:\n\\begin{center}\nInput: $x_k$ ~~\\\\\nOutput: $y_k$\n\\end{center}\nFor instance, one demonstration in the pluralization task is ``Input: cat'' followed by ``Output: cats'' in a new line.\nWe split each task's demonstrations into two sets: an \\textit{induce} set, which we use for generating instructions, and an \\textit{execute} set, which is held out for the execution accuracy evaluation metric (see \\S\\ref{sec:execution}).\nEach \\textit{instruction induction example} is composed of 5 demonstrations sampled randomly without replacement from the induce set, concatenated with new-line separators; we create 100 
examples for each task.\nWhen generating instructions, each example is placed inside the instruction induction prompt, and fed to the model (Figure~\\ref{fig:induction_example}, right).\n\n\n\\subsection{Annotating Reference Instructions}\n\\label{sec:annotations}\n\nWe collect 10 gold-reference human-annotated instructions via college-graduate English-speaking annotators.\nFor each task, we provide the annotators with the exact same input we intend to provide a model: 5 input-output demonstrations wrapped by the instruction-induction prompt (Figure \\ref{fig:induction_example}).\nWe manually verify each annotation and discard ones that do not correctly describe the task. \nWe refer to this set of annotations as the \\textit{gold} annotations, and use them for reference-based evaluation (see \\S\\ref{sec:eval}).\n\n\n\n\\subsection{Verification}\n\\label{sec:verification}\n\nPrior to the instruction induction experiments, we conduct two tests to ensure that either models or humans can infer the underlying task given 5 demonstrations.\nWe first verify that models can indeed execute our tasks given 5 demonstrations using in-context learning.\nSecondly, we conduct a human study to confirm that 5 demonstrations are enough for humans to describe the latent tasks.\n\n\\paragraph{In-Context Learning}\nWe prompt models with 5 input-output demonstrations and concatenate an additional test input $x_{k+1}$,\nand verify that the models are able to correctly predict $y_{k+1}$ (Figure~\\ref{fig:induction_example}, left).\nFor each task, we repeat this experiment 100 times, each with a different set of demonstrations and test inputs.\nWe do not provide the model with any instruction beyond the ``Input: $x_k$ Output: $y_k$'' format. We evaluate each task using its predefined evaluation metric.\\footnote{All metrics are variants of simple string matching, with some task-specific heuristics, for example, to allow for multiple correct answers. 
See Appendix~\\ref{sec:data_construction} for exact details.}\nThe in-context results for GPT-3 \\citep{gpt3} and InstructGPT \\citep{instruct-gpt} (see model details in \\S\\ref{sec:results}) are reported in Table \\ref{tab:verification_tab} in Appendix~\\ref{sec:data_verification}, which shows that in-context learning can reach 80\\% accuracy and above on most tasks.\n\n\\paragraph{Human Study}\nTo assess the human ability to induce instructions, we collect human-written instructions, using annotators that \\textit{did not} participate in the gold references collection. As in the gold-reference annotation process, we provide annotators with the same input we intend to provide to models. We refer to this set of annotations as the \\textit{control} annotations. We then manually count, for each task, the number of annotators that provided a correct instruction, and report the correct instructions percentage in Table \\ref{tab:verification_tab} (Appendix~\\ref{sec:data_verification}). \nIn all but one task (\\textit{Larger Animal}), at least 4 out of 5 annotators were able to produce correct task descriptions.\n\nWe also use the control group's annotations to establish a human baseline for automatic evaluation metrics.\nFor reference-based evaluation (\\S\\ref{sec:bertscore}), we treat the control annotations as generated instructions and compare them against the gold annotations, while for execution accuracy (\\S\\ref{sec:execution}), we use the control annotations to measure human performance, and the gold references as a ceiling metric.\n\n\n\\subsection{Ambiguity}\n\\label{sec:ambiguity}\n\nA theoretical challenge in inducing instructions is ambiguity.\nFor example, when given the single demonstration ``Input: The coffee is too hot. 
Output: The, too, hot'', one could infer that the underlying task is either ``write all the words containing the letter T'' or ``write all the three-lettered words'', both valid interpretations.\nAmbiguity might confuse models tasked with instruction induction while also making evaluation less reliable.\nIn practice, providing 5 demonstrations typically resolves the ambiguity in our set of tasks.\nAs evident from the data verification process, our tasks can typically be inferred by models and\/or humans.\n\nInducing more complex task descriptions, such as predicting detailed annotation guidelines, may pose a greater challenge in terms of ambiguity. We hypothesize that providing more than 5 demonstrations could mitigate some of that challenge, and leave further exploration of this avenue to future work.\n\n\n\n\\section{Evaluating Generated Instructions}\n\\label{sec:eval}\nAs a standard text generation metric, we report BERTScore \\citep{bertscore}. However, the instruction induction challenge has a unique property, which does not usually hold for other text generation tasks: the instructions are \\textit{executable}. Their correctness can therefore be measured directly by utilizing them as prompts.\n\n\n\n\\subsection{Reference-Based Evaluation}\n\\label{sec:bertscore}\n\nWe use BERTScore \\citep{bertscore} to compare the model-generated instructions against the collected gold annotations.\nAs mentioned in \\S\\ref{sec:annotations}, we use only the correct, verified annotations as references. 
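This scoring procedure can be sketched as follows; a dependency-free token-overlap F1 stands in for BERTScore here purely to keep the example self-contained:

```python
def token_f1(candidate, reference):
    # Stand-in similarity for illustration only; the actual evaluation
    # uses BERTScore-F1 (DeBERTa-xl-MNLI), not token overlap.
    cand = set(candidate.lower().split())
    ref = set(reference.lower().split())
    overlap = len(cand & ref)
    if overlap == 0:
        return 0.0
    p, r = overlap / len(cand), overlap / len(ref)
    return 2 * p * r / (p + r)

def reference_based_score(generated, gold_references, sim=token_f1):
    # Compare against every verified gold reference and keep the best
    # match, allowing for natural variation in how a task is phrased.
    return max(sim(generated, ref) for ref in gold_references)

# Hypothetical gold references for the pluralization task.
golds = ["Write the plural form of the given word.",
         "Pluralize each input word."]
print(reference_based_score("Write the plural of each word.", golds))
```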
\nWe take the maximal BERTScore-F1 over all gold-reference annotations to account for natural variations in instruction formulation.\\footnote{We use BERTScore version 0.3.11 with the DeBERTa-xl-MNLI model \\citep{he2021deberta, nangia-etal-2017-repeval}.}\nWe also establish a human baseline for each task using the \\textit{control} annotations, which were collected from a separate control group of annotators (\\S\\ref{sec:data_verification}), which we compare against the \\textit{gold} annotations in exactly the same way as model-generated instructions.\n\n\n\\subsection{Execution Accuracy}\n\\label{sec:execution}\n\nWe introduce \\textit{execution accuracy}, a new metric unique to the instruction induction task.\nTo measure the execution accuracy of a predicted instruction $I$ (e.g., ``Write the plural form of the given word.'') for a task $T$ (pluralization), we prompt a model with $I$ and an input $x$ (``cat'').\nWe then test, given $I$ and $x$, whether the model can correctly predict $y$, the output of performing $T$ on the input $x$ (\\textit{cats}).\n\nTo obtain meaningful results, we measure execution accuracy on the 100 held-out \\textit{execute} examples for each task.\nThe execution accuracy of an instruction $I$ is therefore computed by taking the average over $Score_{T}(I(x_n),y_n)$ for all $x_n$ in the \\textit{execute} set, where $Score_{T}$ denotes the task's corresponding metric (see Appendix~\\ref{sec:data_construction}), and $I(x_n)$ is the result of prompting a predefined language model with the instruction $I$ and the input $x_n$.\nAs recent models are trained to follow instructions \\citep{sanh2022multitask,wei2022finetuned,instruct-gpt}, and due to the relative clarity of our tasks, we expect correct instructions to yield high execution accuracy when using a sufficiently powerful execution model.\n\n\n\n\\section{Results}\n\\label{sec:results}\n\n\\paragraph{Baseline Models}\nWe experiment with eight versions of GPT-3 \\citep{gpt3}, a 
Transformer decoder language model.\nFirst, we experiment with the most current version available in the OpenAI API, for each of the four available model sizes.\nThough not stated explicitly in the API, we assume these models are those reported by \\citet{instruct-gpt}, and we therefore refer to them as \\textit{Instruct} models.\\footnote{Concretely, we use: text-davinci-002, text-curie-001, text-babbage-001, text-ada-001.}\nWe also experiment with the four originally published GPT-3 versions.\\footnote{davinci, curie, babbage, ada.} \nBy default, we refer to the largest Instruct model as \\textit{InstructGPT}, and the original 175B-parameter model as \\textit{GPT-3}.\nAll model generations were produced using the greedy decoding algorithm.\n\n\\begin{table}[t]\n\\small\n\\centering\n\\begin{tabular}{@{}lrr@{}}\n\\toprule\n\\textbf{Model} & \\textbf{BERTScore} & \\textbf{Execution} \\\\\n\\midrule\n\\textit{GPT-3} & & \\\\\n~~~~Ada & -7.7 & 4.0 \\\\\n~~~~Babbage & 4.1 & 3.2 \\\\\n~~~~Curie & 13.9 & 7.9 \\\\\n~~~~DaVinci & 14.6 & 6.5 \\\\\n\\midrule\n\\textit{InstructGPT} & & \\\\\n~~~~Ada & 5.9 & 4.4 \\\\\n~~~~Babbage & -0.5 & 3.8 \\\\\n~~~~Curie & 10.7 & 8.8 \\\\\n~~~~DaVinci & 44.4 & 43.6 \\\\\n\\midrule\n\\textit{Human (Control)} & 60.0 & 66.4 \\\\\n\\bottomrule\n\\\\\n\\end{tabular}\n\\caption{Average BERTScore and execution accuracy across tasks. BERTScore is measured against the gold references. The execution accuracy for all generated instructions is measured using InstructGPT as the execution model. 
Human performance is measured using the human control group's instructions.}\n\\label{tab:averages}\n\\end{table}\n\\label{sec:results_bertscore}\n\n\n\\subsection{Comparing to Gold Annotations}\n\nFigure~\\ref{fig:bertscore} presents the average BERTScore per task (see \\S\\ref{sec:bertscore}).\nResults show that the InstructGPT model has, to some extent, the ability to induce instructions from a few demonstrations; in 13 out of 24 tasks it achieves at least 75\\% of human performance.\nGPT-3, on the other hand, is quite far from human performance across the board.\n\nTable~\\ref{tab:averages} shows the average scores across all tasks. We observe the same trend; while InstructGPT's BERTScore is 15.6 points lower than human performance, the gap between GPT-3 and humans is 45.4 points.\nMoreover, we observe that smaller models -- even those fine-tuned to follow instructions -- do not exhibit any instruction-induction abilities. Scores are slightly higher for larger models of the same family (except for the InstructGPT-Babbage outlier), but are overall low.\nExcluding the largest models, there does not appear to be a significant advantage for Instruct models over the originals when controlling for model size.\n\n\n\\subsection{Execution Accuracy}\\label{sec:results_execution}\n\nWe compute the execution accuracy as detailed in \\S\\ref{sec:execution}, and report the average over 100 generated instructions for each task.\nAs an execution model, we use the largest InstructGPT model.\nWe also use this model to induce instructions, and while using it as an execution model might bias results towards its own generations, preliminary experiments show that no other model is as good at following instructions as InstructGPT.\nAs a point of reference, we apply the execution accuracy evaluation protocol to human-written instructions.\nFirst, to compare models with human performance, we measure the execution accuracy of the \\textit{control} annotation set.\nSecond, to account 
for limitations in the execution model, we measure execution accuracy of the correct (manually verified) \\textit{gold} annotations, which acts as an approximated ceiling metric.\n\nFigure~\\ref{fig:execution_acc} presents the execution accuracy per task.\nIn 12 out of 24 tasks, InstructGPT achieves at least 75\\% of the execution accuracy measured for the human-written instructions.\nGPT-3 shows much weaker execution accuracy, scoring less than 10\\% on 20 of the 24 tasks.\nIn fact, only in the cases of formality, passivization, and cause selection does it approach human performance, and that is largely an artifact of a more lenient evaluation metric in the case of formality and cause selection, or due to the execution model being right for the wrong reasons in the case of passivization (see \\S\\ref{sec:analysis}).\nIn some tasks, the control annotations are of high quality and reach a higher score than the verified gold annotations, likely due to variance of the execution model in such cases.\n\nTable~\\ref{tab:averages} shows the same trends. On average, InstructGPT achieves 65.7\\% of human performance, while GPT-3 reaches only 9.8\\% of human performance. When considering different model families or sizes, we do not see any substantial improvements when increasing model size or adding instruction tuning, with the exception of the largest InstructGPT model. The ability to generate instructions seems to only emerge when a model is both large enough and aligned to follow instructions.\nOverall, even the best-performing model still does not reach human performance, leaving room for future improvement.\n\n\n\n\n\n\n\n\\section{Analysis}\n\\label{sec:analysis}\n\n\n\n\\begin{table}[t]\n \\small\n \\centering\n \\begin{tabular}{@{}p{0.18\\textwidth}p{0.35\\textwidth}p{0.41\\textwidth}@{}}\n \\toprule\n \\textbf{Task} & \\textbf{GPT-3} & \\textbf{InstructGPT} \\\\\n \\midrule\n First letter & The friend's output was: & Write the first letter of each word. 
\\\\\n \\midrule\n Sentence Similarity & The friend wrote the following output: & For each input, rate the similarity of the two sentences on a scale of 0 to 5, with 5 being a perfect match. \\\\\n \\midrule\n Pluralization & The friend's output was: & Add `s' to the end of each word. \\\\\n \\midrule\n Passivization & The friend wrote the following output: & Reverse the order of the subject and the object in the sentence. \\\\\n \\midrule\n Antonyms & The friend's output was: & Reverse the input. \\\\\n \\bottomrule\n \\\\\n \\end{tabular}\n \\caption{Examples of the instructions generated by GPT-3 and InstructGPT for five of our tasks.}\n \\label{tab:analysis}\n\\end{table}\n\nTo gain further insight into the successes and failures of instruction induction prompting, we manually analyze the model-generated instructions of 5 tasks.\nTable~\\ref{tab:analysis} shows the most common predictions of GPT-3 and InstructGPT for each of these tasks.\n\nInstructGPT obtains execution accuracy scores that are high, or close to human performance, for three of these tasks (\\textit{First Letter}, \\textit{Sentence Similarity}, \\textit{Pluralization}).\nIndeed, the instructions for both \\textit{First Letter} and \\textit{Sentence Similarity} accurately describe the task.\nHowever, the instruction generated for \\textit{Pluralization} is not entirely precise, since it ignores other forms of pluralization such as -es, -ies, and irregulars.\nAlthough the instruction only asks to add an ``s'', the execution model often ignores the specifics and produces the correct plural form; in one case, the input word was ``life'' and the output was ``lives''.\nWhile this particular instruction accounts for 24\\% of the induced instructions in the pluralization task, some predictions do explicitly mention pluralization, though not always accurately, e.g., ``Add -s to the end of each word to make it plural''.\n\nFor some tasks, InstructGPT fails to produce accurate instructions, even if it is able to solve the task via 
in-context learning (see Table~\ref{tab:verification_tab}).\nIn \textit{Passivization}, 98\% of the predicted instructions were to simply ``reverse the order of the subject and object'', while ignoring additional surface-form manipulations needed to convert the given sentence into passive form; e.g., for the input ``The authors supported the scientist'', following the instructions produces the output ``The scientist supported the authors'', while the correct passive form is ``The scientist was supported by the authors''.\nSurprisingly, the instructions generated by GPT-3 obtained higher execution accuracy than InstructGPT's, even though they were entirely unrelated to the task.\nIn 24\% of the cases, GPT-3 predicted ``The friend wrote the following output:'' -- an instruction that apparently prompts the execution model to often rephrase the input in passive form.\nLastly, in \textit{Antonyms}, 60\% of InstructGPT's predictions were ``Reverse the input'', and another 11\% were ``Reverse the word''.\nWhile one could imagine an interpretation of these instructions that reflects the task (reversing the \textit{meaning} of the word), the execution model interprets them literally, and reverses the input words' letters.\n\nOverall, GPT-3 did not exhibit any instruction induction abilities, although it did often phrase outputs in imperative language.\nOne relatively common prediction was the generic instruction ``Write an output for every input''.\nBecause these empty instructions are in the right format, they tend to have some overlap with the reference instructions, which inflates their BERTScore.\nExecution accuracy, on the other hand, is robust to this phenomenon, and typically assigns GPT-3's outputs very low scores.\n\n\n\\section{Related Work}\n\n\\paragraph{In-Context Learning}\n\\citet{gpt3} suggest that models can learn a task by conditioning on a few input-output demonstration pairs, without any fine-tuning or gradient 
updates. This paradigm, known as \textit{in-context learning} or \textit{prompt-based learning} \citep{prompt-based}, has been the focus of many recent research efforts:\n\citet{glam} suggest methods for more efficient in-context learning,\n\citet{zhao-2021-calibrate} study methods for improving the stability and accuracy of prompt-based models, \citet{chen-meta-incontext} and \citet{min2022metaicl} conduct meta-training with an in-context learning objective,\nwhile other work studies the effect of the provided prompts \citep{reynolds-2021-prompt-programming, webson-pavlick-2021, min2022rethinking}, or suggests prompt reframing techniques \citep{reframing-prompts} and prompt retrieval methods \citep{prompt-retrieval}.\nTo the best of our knowledge, all previous work studies in-context learning through the lens of \textit{executing} a latent task, while we focus on the ability to explicitly \textit{describe} it.\n\n\paragraph{The Instruction Paradigm}\n\citet{turking-test} propose to learn new tasks from natural language instructions.\n\citet{naturalinstructionsv1} and \citet{naturalinstructionsv2} collect the crowdsourcing instructions used to create NLP datasets into a benchmark for measuring the ability to solve tasks by reading instructions.\nRecent work shows that fine-tuning on task instructions (\textit{instruction tuning}) improves the zero-shot learning abilities of LMs \citep{sanh2022multitask,wei2022finetuned,instruct-gpt}. 
In contrast to this line of work, we focus on models' ability to \textit{generate} instructions, rather than on their ability to \textit{execute} instructions written by humans.\n\n\paragraph{Intermediate Reasoning Steps}\n\citet{nye2022show} show that LMs can perform complex computations by writing intermediate steps on a ``scratchpad''.\nIn \textit{chain of thought prompting} \citep{wei2022chain}, input-output demonstrations are enriched with sentences elaborating intermediate task reasoning steps, improving the performance of LMs on tasks requiring reasoning skills.\nSubsequent work further improves the performance on such tasks using a \textit{self-consistency} ensemble \citep{self-consistency-cot}, which samples a set of diverse chain-of-thought reasoning paths and takes the majority vote over all generated answers.\n\citet{star-cot} utilize a small set of examples labeled with chain-of-thought rationales and a large set of unlabeled data to iteratively bootstrap automatic rationale generation, thus creating a large dataset labeled with such rationales to enable fine-tuning.\nIn contrast, we study the ability of LMs to generate a description of the task, rather than generating intermediate reasoning steps as a means of executing complex tasks.\n\n\n\\section{Discussion}\n\nThis work demonstrates that large LMs can not only infer new tasks from a handful of demonstrations, but also describe them in natural language.\nWe provide evidence of this ability on a diverse set of language tasks, and show that while instruction induction abilities are currently limited to a single state-of-the-art model, this model does indeed approach human performance on about half the tasks.\n\nIt is not unreasonable to assume that models in the near future will be even better at processing human-generated instructions, and it is therefore interesting to discuss the potential applications of instruction induction.\nIn particular, we envision a use case in which instruction induction serves as a 
machine learning approach; instead of converting a dataset into a set of continuous parameters, we could produce a natural language instruction that best describes the data. Grounding the model in concise natural language has the advantage of interpretability, and has the potential to solve fundamental issues pertaining to spurious correlations. While it is still too early to determine whether this approach is viable, we view it as an intriguing direction for future research.\n\n\n\\section{Dataset Details}\n\label{sec:data_construction}\n\nThis appendix details each task's dataset (\S\ref{sec:data_construction_details}).\nSome datasets rely on a set of common English nouns (CEN), described in \S\ref{sec:common_nouns}.\n\n\subsection{Tasks}\n\label{sec:data_construction_details}\n\nWe elaborate on each task's data source, preprocessing protocol, and the evaluation metric used in the in-context learning and execution accuracy experiments.\nAs mentioned in \S\ref{sec:data}, each task has \textit{induce} and \textit{execute} sets; unless stated otherwise, we sample 100 examples as the execute set for each task.\nWhen evaluating outputs, the generated text is first normalized: we take only the first generated sentence and lowercase it.\nWe apply exact string match as the evaluation metric where applicable, elaborating only where alternative metrics are used.\n\n\paragraph{First Letter}\nIn each demonstration, $x_k$ is a noun, and $y_k$ is the first letter of that noun. We construct the demonstrations by extracting the first letter of each word in CEN.\n\n\paragraph{Second Letter}\nIdentical to the \textit{First Letter} task, except that here $y_k$ is the second letter of $x_k$.\n\n\paragraph{List Letters}\n$x_k$ is a noun from CEN, and $y_k$ is a list of $x_k$'s letters, separated by spaces.\n\n\paragraph{Starting With}\n$x_k$ contains a sentence and a letter in brackets, and $y_k$ lists the words in $x_k$ that start with the given letter. 
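To make the target format concrete, $y_k$ for this task can be derived from the sentence and the bracketed letter as in the following sketch (our illustration, not the authors' released code):

```python
def starting_with_target(sentence: str, letter: str) -> set[str]:
    # y_k for "Starting With": the words in the sentence that
    # start with the given letter (case-insensitivity is our assumption).
    return {w for w in sentence.split() if w.lower().startswith(letter.lower())}

# starting_with_target("The cat sat on the mat", "c") -> {"cat"}
```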
We avoid cases in which $y_k$ is empty, i.e., there is always at least one word in the input sentence starting with the given letter.\nSentences are taken from the CoLA dataset \citep{warstadt2018neural}.\nFor the induce set, we create all (sentence, letter) pairs using CoLA's train set, and then sample 3,000 pairs.\nFor the \textit{execute} set, we create all (sentence, letter) pairs from CoLA's in-domain and out-of-domain dev sets, and then sample 50 in-domain and 50 out-of-domain examples.\nWe evaluate using exact \textit{set} match, treating the output (and $y_k$) as a set of strings.\n\n\paragraph{Pluralization}\nGiven a singular noun $x_k$, produce the plural form $y_k$.\nWe take noun inputs from the CEN set, filtering out mass nouns using a predefined list.\footnote{\url{https:\/\/gist.github.com\/sudodoki\/b5408fa4ba752cc22597250fc58a5970}}\nTo create the plural forms, we apply an automatic pluralization engine\footnote{\url{https:\/\/pypi.org\/project\/inflect\/}} and exclude nouns for which the engine's output did not appear at least 50 times in the Wikitext-103 corpus.\nThis results in 2,043 singular-plural noun pairs.\n\n\paragraph{Passivization}\n$x_k$ is a simple sentence in active voice, and $y_k$ is its rephrasing in passive voice.\nWe use the 1,000 active-passive entailed sentence pairs from the HANS \citep{mccoy-etal-2019-right} evaluation set.\n\n\paragraph{Negation}\n$y_k$ is the negation of the input sentence $x_k$.\nWe use the negated LAMA dataset \citep{petroni-etal-2019-language,kassner-schutze-2020-negated}, taking the 304 negated SQuAD \citep{rajpurkar-etal-2016-squad} sentences, 300 ConceptNet \citep{speer-havasi-2012-representing} sentences, 200 T-REx \citep{elsahar-etal-2018-rex} sentences, and 200 Google-RE\footnote{\url{https:\/\/code.google.com\/archive\/p\/relation-extraction-corpus\/}} sentences. For ConceptNet and T-REx, we manually select these sentences to ensure their quality. 
For Google-RE, we automatically sample 100 sentences from the \textit{place of birth} relation, and 100 from the \textit{place of death} relation.\n\n\paragraph{Antonyms}\n$y_k$ is the antonym of the input word $x_k$.\nWe use the antonym pairs from oLMpics \citep{talmor-etal-2020-olmpics}, which were extracted from ConceptNet \citep{speer-havasi-2012-representing} and WordNet \citep{fellbaum1998wordnet}.\nFor uniformity, we verify that all pairs are indeed antonyms according to WordNet.\n\n\paragraph{Synonyms}\n$x_k$ is a word and $y_k$ is its synonym.\nAs in the antonyms task, we use the synonym pairs of \citet{talmor-etal-2020-olmpics}.\nSince there can be multiple synonyms for each input word, the task's in-context and execution accuracy are evaluated by testing whether the gold answer (a single word) is contained in the predicted answer (which may be a list of words).\n\n\paragraph{Membership}\n$x_k$ is a list of words, where some of the words represent animals, and $y_k$ lists the animals from $x_k$.\nTo construct the task's data, we first select 6 word categories: animals, clothing, colors, food, vehicles, and professions.\nWe then take 10-50 words from each category, using only words that are categorized at the A1 or A2 levels according to the Common European Framework of Reference for Languages (CEFR).\footnote{\url{https:\/\/languageresearch.cambridge.org\/american-english}}\nUsing these words, we create random lists containing between 5 and 7 words, where 3 or 4 are animals and the rest belong to one of the other 5 categories. The induce split is constructed by sampling 3,000 such combinations, using 80\% of each category's words. 
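A single such list can be drawn as in this sketch (a hypothetical illustration; the category word lists themselves and the exact list format are our assumptions):

```python
import random

def make_membership_example(animals, other_category):
    # One demonstration: a list of 5-7 words, of which 3 or 4 are
    # animals and the rest come from a single other category.
    n_animals = random.choice([3, 4])
    n_total = random.randint(5, 7)
    words = random.sample(animals, n_animals) \
        + random.sample(other_category, n_total - n_animals)
    random.shuffle(words)
    x_k = ", ".join(words)           # the separator is our assumption
    y_k = set(words) & set(animals)  # scored later via exact set match
    return x_k, y_k
```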
The execute split is constructed by sampling 100 such combinations, using the remaining 20\% of each category's words.\nThe task's in-context and execution accuracy are evaluated using an exact \textit{set} match, treating the output (and $y_k$) as a set of strings.\n\n\paragraph{Rhymes}\n$y_k$ is a rhyme of the input word $x_k$.\nThe data was constructed by taking words categorized at the A1, A2, or B1 levels according to CEFR.\nWe then use CMU's pronouncing dictionary\footnote{\url{https:\/\/github.com\/cmusphinx\/cmudict}} to find rhyming groups for these words.\nThe execute split is constructed by sampling 30 rhyming groups, each containing two or more words, and then sampling 100 unique words from these groups. The induce split is constructed using the rest of the rhyming groups.\nWe evaluate this task by checking whether the predicted word is contained in the rhyming group of $x_k$.\n\n\paragraph{Larger Animal}\n$x_k$ is two animals, and $y_k$ is the (physically) larger one.\nWe use the object comparison data from oLMpics \citep{talmor-etal-2020-olmpics}, taking the train split, which only contains animals.\nWe construct the induce set using a sample of 80\% of the animals, and the execute set by sampling 100 pairs from the remaining 20\% of animals.\n\n\paragraph{Cause Selection}\n$x_k$ contains two sentences describing related events, where one event caused the other; $y_k$ contains the cause sentence.\nAs a data source, we use the 50 examples from the BIG-bench \citep{bigbench} \textit{Cause and Effect} task, randomly splitting them into equally sized induce and execute sets.\nIn each of the induce demonstrations, we randomly sample the position of the cause sentence (either the first or the second sentence in $x_k$).\nFor examples in the execute set, we take both options for each cause and effect pair, doubling the data.\n\n\paragraph{Common Concept}\n$x_k$ contains a few entities that share a non-trivial common underlying concept, while $y_k$ describes that common 
concept.\nWe use the 32 examples from \textit{Novel Concepts} in BIG-bench \citep{bigbench}, using half for induce and half for execute.\nAs the BIG-bench answers usually contain clear ``task markers'' (e.g., answers that start with ``They all have...'', indicating that the task was to find a common concept), we remove them from our demonstrations.\nThe task's in-context and execution accuracy are evaluated using unigram overlap (F1).\n\n\paragraph{Formality}\n$x_k$ is a sentence in informal English, and $y_k$ is its paraphrase in more formal language.\nWe write 30 sentence pairs ourselves, following existing guidelines for converting informal sentences into formal ones.\footnote{\url{https:\/\/www.niu.edu\/writingtutorial\/style\/formal-and-informal-style.shtml}, \url{https:\/\/www.uts.edu.au\/current-students\/support\/helps\/self-help-resources\/grammar\/formal-and-informal-language}}\nThe task's in-context and execution accuracy are evaluated using unigram overlap (F1).\n\n\paragraph{Sum}\n$x_k$ contains two numbers separated by a space, and $y_k$ is their sum.\nWe enumerate over all pairs of numbers in the range $[0,99]$.\n\n\paragraph{Difference}\n$x_k$ contains two numbers separated by a space, and $y_k$ is the difference between them.\nWe use all number pairs such that both input numbers are in the range $[0,198]$, and always subtract the smaller number from the bigger number.\n\n\paragraph{Number to Word}\n$x_k$ is a number written in digits (e.g., 28), and $y_k$ is the same number written in words (e.g., twenty-eight).\nWe use all numbers in the range $[0,9999]$.\n\n\paragraph{Translation}\n$x_k$ is an English word and $y_k$ is its translation to some target language -- either German, Spanish, or French. 
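The Sum and Difference sets described above can be enumerated in a few lines; a minimal sketch, assuming each $x_k$ is rendered as two space-separated numbers and that ordered pairs are used:

```python
from itertools import product

# Sum: enumerate all ordered pairs of numbers in [0, 99]; y_k is the sum.
sum_data = [(f"{a} {b}", str(a + b)) for a, b in product(range(100), repeat=2)]

# Difference: both inputs in [0, 198]; always subtract the smaller from the bigger.
diff_data = [(f"{a} {b}", str(abs(a - b))) for a, b in product(range(199), repeat=2)]
```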
We use CEN as input words, and obtain their translations via Wiktionary.\\footnote{\\url{https:\/\/github.com\/open-dsl-dict\/wiktionary-dict}}\nFor evaluation, we check whether the predicted answer is contained in the set of the possible gold answers.\n\n\\paragraph{Sentiment Analysis}\n$x_k$ is a movie review and $y_k$ is a binary label, either ``positive'' or ``negative'', marking the review's sentiment.\nWe use the Stanford Sentiment Treebank dataset \\citep{socher-etal-2013-recursive} from GLUE \\citep{wang-etal-2018-glue}, taking the train split as our induce set and the dev split as the execute set.\nWe consider only full sentences, discarding sentence constituents and sentences containing more than 10 words.\nThis leaves us with an induce set of 1,167 examples. To create label-balanced instruction induction examples, we sample each sequence of 5 demonstrations such that there are at least 2 demonstrations for each label.\n\n\\paragraph{Sentence Similarity}\n$x_k$ contains two sentences, and $y_k$ reflects the semantic similarity of the two input sentences.\nThe similarity is measured on a scale of 0 to 5, and the labels contain an additional short textual description of the numerical label, e.g., ``5 - perfectly''.\nWe use the Semantic Textual Similarity Benchmark dataset \\citep{cer-etal-2017-semeval} from GLUE, rounding the similarity scores and taking the train split as the induce set and the dev split as the execute set.\nWe discard examples in which at least one of the sentences contains more than 10 words, which leaves us with an induce set of 3,716 examples.\nIn each instruction induction example, we sample at least one pair with a score of 0 and one with a score of 5, so that models will be exposed to the minimal and maximal scores when generating an instruction.\nWe evaluate whether the predicted answer matches one of three valid outputs for each label: the numerical label (``5''), the verbal label (``perfectly''), or the combined label (``5 - 
perfectly'').\n\n\paragraph{Word in Context}\n$x_k$ contains a target word and two contexts (sentences) for that word, and $y_k$ is a binary label reflecting whether the word has the same meaning in both contexts.\nWe use the Word in Context dataset \citep{pilehvar-camacho-collados-2019-wic} from SuperGLUE \citep{superglue}, taking the train split as the induce set and the dev split as the execute set.\nWe discard examples in which at least one of the sentences contains more than 10 words, which leaves us with an induce set of 4,084 examples.\nTo create label-balanced instruction induction examples, we sample each sequence of 5 demonstrations such that there are at least 2 demonstrations for each label.\nWe evaluate whether the predicted label matches one of several possible outputs: ``same'', ``yes'', or ``true'' for an identical meaning, and ``not the same'', ``no'', or ``false'' for a different meaning.\n\n\n\subsection{Common English Nouns}\n\label{sec:common_nouns}\n\nWe create a dataset of common English nouns (CEN) by filtering high-frequency nouns from the Wikitext-103 corpus \citep{merity2016pointer}.\nWe first create a vocabulary of the 10,000 most frequent words in the corpus, from which we will later select the nouns.\nWe then process the corpus with SpaCy's part-of-speech tagger and lemmatizer,\footnote{\url{https:\/\/spacy.io\/}}\nand retain only nouns that appear in their singular form, verifying that their part-of-speech tag is ``NN'' and that the word's lemma is identical to the word itself.\nWe additionally filter out nouns that have fewer than 3 letters. Overall, this leaves us with a set of 3,406 nouns.\n\n\n\n\\section{Data Verification}\n\label{sec:data_verification}\n\n\nTable~\ref{tab:verification_tab} shows the results of the data verification experiments (\S\ref{sec:verification}). As is evident from these results, most of our tasks can be inferred in-context by models. 
Moreover, all tasks but one can be accurately described by at least 4 out of 5 human annotators.\n\n\input{03b_in_context_tab}\n\n\n\\section*{Checklist}\n\n\n\begin{enumerate}\n\n\n\item For all authors...\n\begin{enumerate}\n \item Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?\n \answerYes{}\n \item Did you describe the limitations of your work?\n \answerYes{See \S\ref{sec:ambiguity} and the comment regarding the execution model in \S\ref{sec:results_execution}.}\n \item Did you discuss any potential negative societal impacts of your work?\n \answerNA{}\n \item Have you read the ethics review guidelines and ensured that your paper conforms to them?\n \answerYes{}\n\end{enumerate}\n\n\n\item If you are including theoretical results...\n\begin{enumerate}\n \item Did you state the full set of assumptions of all theoretical results?\n \answerNA{}\n \item Did you include complete proofs of all theoretical results?\n \answerNA{}\n\end{enumerate}\n\n\n\item If you ran experiments...\n\begin{enumerate}\n \item Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)?\n \answerYes{We include our data as well as model predictions. 
Our code will be made publicly available upon acceptance.}\n \item Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)?\n \answerYes{We did not perform any training, only inference, for which we reported the inference settings; see \S\ref{sec:results}.}\n \item Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)?\n \answerNA{}\n \item Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)?\n \answerYes{}\n\end{enumerate}\n\n\n\item If you are using existing assets (e.g., code, data, models) or curating\/releasing new assets...\n\begin{enumerate}\n \item If your work uses existing assets, did you cite the creators?\n \answerYes{}\n \item Did you mention the license of the assets?\n \answerNo{We cited the assets and provided the necessary links; license details can be found there. All datasets used are open and publicly available.}\n \item Did you include any new assets either in the supplemental material or as a URL?\n \answerYes{}\n \item Did you discuss whether and how consent was obtained from people whose data you're using\/curating?\n \answerNo{}\n \item Did you discuss whether the data you are using\/curating contains personally identifiable information or offensive content?\n \answerNo{There is no such content in our data.}\n\end{enumerate}\n\n\n\item If you used crowdsourcing or conducted research with human subjects...\n\begin{enumerate}\n \item Did you include the full text of instructions given to participants and screenshots, if applicable?\n \answerYes{See \S\ref{sec:annotations} and \S\ref{sec:verification}.}\n \item Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable?\n \answerNA{}\n \item Did you include the estimated hourly wage paid to participants and the total amount spent on 
participant compensation?\n \answerNA{The annotations were very limited in scope (a few minutes of work from each annotator), and were done by personal friends who were not involved in the project.}\n\end{enumerate}\n\n\n\end{enumerate}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}