{"text":"\\section{Introduction}\n\nNearly 400 extrasolar planets have now been discovered\nusing the radial velocity (RV) method. RV surveys currently\nhave good statistical completeness only for planets with \nperiods of less than ten years \\citep{cumming,butlercat,carnp}, \ndue to the limited temporal baseline of the observations, \nand the need to observe for a complete orbital period to \nconfirm the properties of a planet with confidence. \nThe masses of discovered planets range from\njust a few Earth masses \\citep{hotNep} up to around 20 Jupiter\nmasses (M$_{\\mathrm{Jup}}$). We note that a 20 M$_{\\mathrm{Jup}}$~object would be considered\nby many to be a brown dwarf rather than a planet, but that there is \nno broad consensus on how to define the upper mass limit for\nplanets. For a good overview of RV planets to date, see\n\\citet{butlercat} or \\url{http:\/\/exoplanet.eu\/catalog-RV.php}.\n\nThe large number of RV planets makes it possible to examine\nthe statistics of extrasolar planet populations. Several\ngroups have fit approximate power law distributions in\nmass and semimajor axis to the set of known extrasolar\nplanets (see for example \\citet{cumming}). Necessarily, \nhowever, these power laws are\nnot subject to observational constraints at orbital\nperiods longer than 10 years -- and it is at these\norbital periods that we find giant planets in our own\nsolar system. We cannot obtain a good\nunderstanding of planets in general without information\non long period extrasolar planets.\nNor can we see how our own solar system fits into\nthe big picture of planet formation in the galaxy\nwithout a good census of planets in Jupiter- and\nSaturn-like orbits around other stars.\n\nRepeatable detections of extrasolar planets (as opposed\nto one-time microlensing detections)\nhave so far been made by transit detection (e.g. \\citet{hd209458}), by RV\nvariations \\citep{51peg}, by astrometric wobble \\citep{benedict}, \nor by direct imaging \\citep{hr8799}.\nOf these methods, transits are efficient only for \ndetecting close-in planets. As noted above, precision\nRV observations have not been going on long enough to \ndetect more than a few planets with periods longer than ten years, \nbut even as RV temporal baselines increase, long period\nplanets will remain harder to detect due to their slow orbital velocities.\nThe amplitude of a star's astrometric wobble increases \nwith the radius of its planet's orbit, but decades-long\nobserving programs are still needed to find long-period planets. \nDirect imaging is the only method that allows us to characterize\nlong-period extrasolar planets on a timescale of months \nrather than years or decades.\n\nDirect imaging of extrasolar planets is technologically\npossible at present only in the infrared, based on\nthe planets' own thermal luminosity, not on reflected\nstarlight. The enabling technology is adaptive optics (AO),\nwhich allows 6-10m ground-based telescopes to obtain diffraction\nlimited IR images several times sharper than those from\nHST, despite Earth's turbulent atmosphere. 
Theoretical\nmodels of giant planets indicate that\nsuch telescopes should be capable of detecting self-luminous\ngiant planets in large orbits around young, nearby stars.\nThe stars should be young because the glow of\ngiant planets comes from gravitational potential\nenergy converted to heat in their formation and\nsubsequent contraction: lacking any internal fusion,\nthey cool and become fainter as they age.\n\nSeveral groups have published the results of AO imaging\nsurveys for extrasolar planets around F, G, K, or M stars\nin the last five years (see for\nexample \\citet{masciadri,kasper,biller1,GDPS} and \\citet{chauvin}).\nOf these, most have used wavelengths in the 1.5-2.2 $\\mu$m\nrange, corresponding to the astronomical $H$ and $K_S$\nfilters \\citep{masciadri,biller1,GDPS,chauvin}. They have targeted\nmainly very young stars. Because young stars are rare, the median\ndistance to stars in each of these surveys has been\nmore than 20 pc.\n\nIn contrast to those above, our survey concentrates on\nvery nearby F, G, and K stars, with proximity prioritized over\nyouth in the sample selection. The median distance to our\nsurvey targets is only 11.2 pc. Ours is also the first survey\nto include extensive observations in the $M$\nband, and only the second to search solar-type stars in the $L'$\nband (the first was \\citet{kasper}). The distinctive focus on older, very\nnearby stars for a survey using longer wavelengths\nis natural: longer wavelengths\nare optimal for lower temperature planets, which are most likely\nto be found in older systems, but which would be undetectable\naround all but the nearest stars. More information on our\nsample selection, observations, and data analysis can be found\nin our Observations paper, \\citet{obspaper}, which also details\nour careful evaluation of our survey's sensitivity, including extensive\ntests in which fake planets were randomly placed in the raw\ndata and then recovered by an experimenter who knew neither\ntheir positions nor their number. Such tests are essential\nfor establishing the true relationship between source\nsignificance (i.e. 5$\\sigma$, 10$\\sigma$, etc.) and survey\ncompleteness.\n\nOur survey places constraints on a more mature population of\nplanets than do surveys focused on very young stars,\nand confirms that a paucity of giant planets\nat large separations from sun-like stars is robustly\nobserved at a wide range of wavelengths.\n\nIn Section \\ref{sec:rv}, we review power law fits to the distribution\nof known RV planets, including the normalization of the power laws.\nIn Section \\ref{sec:tmod}, we present the constraints our survey\nplaces on the distribution of extrasolar giant planets, based on\nextensive Monte Carlo simulations. In Section \\ref{sec:long} we\ndiscuss the promising future of planet-search observations in the\n$L'$ and especially the $M$ band, and in Section \\ref{sec:concl}\nwe conclude.\n\n\n\n\\section{Statistical Distributions from RV Planets} \\label{sec:rv}\n\nNearly 400 RV planets are known. See \\citet{butlercat}\nfor a useful, conservative listing of confirmed extrasolar\nplanets as of 2006, or \\url{http:\/\/exoplanet.eu\/catalog-RV.php}\nfor a frequently-updated catalog of all confirmed and many\nsuspected extrasolar planet discoveries.\n\nThe number of\nRV planets is sufficient for meaningful statistical\nanalysis of how extrasolar planets are distributed in\nterms of their masses and orbital semimajor axes. 
The lowest mass\nplanets and those with the longest orbital periods are\ngenerally rejected from such analyses to reduce bias from\ncompleteness effects, but there remains a considerable\nrange (2-2000 days in period, or roughly 0.03-3.1 AU\nin semimajor axis for solar-type stars; and 0.3-20 M$_{\\mathrm{Jup}}$~in mass)\nwhere RV searches have good completeness \\citep{cumming}. There is evidence\nthat the shortest period planets, or `hot Jupiters,'\nrepresent a separate population, a `pileup' of planets\nin very close-in orbits that does not follow the same\nstatistical distribution as planets in more distant\norbits \\citep{cumming}. The hot Jupiters are therefore\noften excluded from statistical fits to the overall\npopulations of extrasolar planets, or at least from the fits\nto the semimajor axis distribution.\n\n\\citet{cumming} characterize the distribution of\nRV planets detected in the Keck Planet Search\nwith an equation of the form\n\n\\begin{equation}\n dN = C_0 M^{\\alpha_L} P^{\\beta_L} d\\ln(M) d\\ln(P),\n\\label{eq:cumming}\n\\end{equation}\n\nwhere $M$ is the mass of the planet, $P$ is the orbital\nperiod, and $C_0$ is a normalization constant. They\nstate that 10.5\\% of solar-type stars have a planet\nwith mass between 0.3 and 10 M$_{\\mathrm{Jup}}$~and period between\n2 and 2000 days; this information can be used to\nderive a value for $C_0$ given values for the power\nlaw exponents $\\alpha_L$ and $\\beta_L$. They find\nthat the best-fit values for these are $\\alpha_L = -0.31 \\pm 0.2$\nand $\\beta_L = 0.26 \\pm 0.1$, where the $_L$ subscript is our\nnotation to make clear that these are the exponents\nfor the form using logarithmic differentials.\n\nIn common with a number of other groups, we\nchoose to represent the power law with ordinary differentials,\nand to give it in terms of orbital semimajor axis $a$ rather than\norbital period $P$:\n\n\\begin{equation}\n dN = C_0 M^{\\alpha} a^{\\beta} dM da,\n\\label{eq:uspower}\n\\end{equation}\n\nwhere $C_0$, of course, will not generally have\nthe same value for Equations \\ref{eq:cumming} and\n\\ref{eq:uspower}. Manipulating the two equations, writing the\nlogarithmic differentials as $d\\ln(M) = dM\/M$ and $d\\ln(P) = dP\/P$,\nand using Kepler's Third Law in the form $P \\propto a^{3\/2}$\n(so that $dP \\propto a^{1\/2} da$), makes it clear that\n\n\\begin{equation}\n\\alpha = \\alpha_L - 1,\n\\label{eq:setalpha}\n\\end{equation}\n\nand\n\n\\begin{equation}\n\\beta = \\frac{3}{2}\\beta_L - 1.\n\\label{eq:setbeta}\n\\end{equation}\n\nThe \\citet{cumming} exponents produce $\\alpha=-1.31 \\pm 0.2$\nand $\\beta = -0.61 \\pm 0.15$ when translated into our form.\nThe mass power law is well behaved, but the integral of the\nsemimajor axis power law does not converge as $a \\rightarrow \\infty$,\nso an outer truncation radius is an important parameter\nof the semimajor axis distribution.\n\n\\citet{butlercat} presents the 2006 Catalog of Nearby\nExoplanets, a carefully described heterogeneous sample\nof planets detected by several different RV search programs.\nWith appropriate caution, \\citet{butlercat} refrain from quoting confident\npower law slopes based on the combined discoveries\nof many different surveys with different detection limits\nand completeness biases (in contrast, the \\citet{cumming}\nanalysis was restricted to stars in the Keck Planet\nSearch, which were uniformly observed up to a given minimum\nbaseline and velocity precision). 
\\citet{butlercat} do tentatively adopt\na power law with the form of Equation \\ref{eq:uspower}\nfor mass only, and state that $\\alpha$ appears to be\nabout -1.1 (or -1.16, to give the exact result of a formal\nfit to their list of exoplanets). However, they caution\nthat due to their heterogeneous list of planets discovered\nby different surveys, this power law should be taken more as a descriptor\nof the known planets than of the underlying distribution.\nThey do not quote a value for the semimajor axis power\nlaw slope $\\beta$.\n\nBased mostly on \\citet{cumming}, but considering \\citet{butlercat}\nas helpful additional input, we conclude that the true\nvalue of the mass power law slope $\\alpha$ is probably between\n-1.1 and -1.51, with -1.31 as a good working model. The\nvalue of the semimajor axis power law slope $\\beta$ is probably\nbetween -0.46 and -0.76, with -0.61 as a current best guess.\nThe outer truncation radius of\nthe semimajor axis distribution cannot be constrained by\nthe RV results: surveys like ours exist, in part, to\nconstrain this interesting number.\n\nThe only other result we need from the RV searches\nis a normalization that will allow us to find $C_0$.\nWe elect not to use the \\citet{cumming} value\n(10.5\\% of stars having a planet\nwith mass between 0.3 and 10 M$_{\\mathrm{Jup}}$~and period between\n2 and 2000 days), because this range includes the\nhot Jupiters, a separate population.\n\nWe take our normalization instead from the Carnegie\nPlanet Sample, as described in \\citet{carnp}.\nTheir Table 1 (online only) lists 850\nstars that have been thoroughly investigated with RV.\nThey state that all planets with mass at\nleast 1 M$_{\\mathrm{Jup}}$~and orbital period less\nthan 4 years have been detected around these stars.\nForty-seven of these stars are marked in\nTable 1 as having RV planets. Table 2 from \\citet{carnp}\ngives the measured properties\nof 124 RV planets, including those orbiting 45 of the\n47 stars listed as planet-bearing in Table 1. The stars\nleft out are HD 18445 and HD 225261. We cannot\nfind any record of these stars having planets, and therefore\nas far as we can tell they are typos in Table 1.\n\nSince all planets with masses above 1 M$_{\\mathrm{Jup}}$~and periods less\nthan 4 years orbiting stars in the \\citet{carnp} list\nof 850 may be relied upon to have been discovered, we\nmay pick any sub-intervals in this range of mass and\nperiod, and divide the number of planets falling into\nthese intervals by 850 to obtain our normalization.\nWe selected the range 1-13 M$_{\\mathrm{Jup}}$~in mass, and 0.3-2.5 AU\nin semimajor axis. Twenty-eight stars, or 3.29\\% of\nthe 850 in the \\citet{carnp} list, have one or more planets in this\nrange. Our inner limit of 0.3 AU excludes the hot Jupiters, and\nthus the 3.29\\% value provides our final normalization.\nWe note that if we adopt the \\citet{cumming} best-fit power\nlaws, and use the 3.29\\% normalization to predict the percentage\nof stars having planets with masses between 0.3 and 10\nM$_{\\mathrm{Jup}}$~and orbital periods between 2 and 2000 days, we find a value\nof 9.3\\%, which is close to the \\citet{cumming} value\nof 10.5\\%. The slight difference is probably not significant,\nbut might be viewed as upward bias in the \\citet{cumming} value\ndue to the inclusion of the hot Jupiters. 
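\n\nAs a sanity check on this normalization arithmetic, the integral of Equation \\ref{eq:uspower} over any window in mass and semimajor axis is elementary, and the 9.3\\% figure quoted above is easy to reproduce. The following minimal sketch (illustrative only, not our actual survey code) assumes the \\citet{cumming} best-fit slopes and solar-mass host stars:\n\n\\begin{verbatim}\ndef powerlaw_integral(x1, x2, exponent):\n    # Integral of x**exponent from x1 to x2 (exponent != -1).\n    k = exponent + 1.0\n    return (x2**k - x1**k) \/ k\n\nalpha, beta = -1.31, -0.61  # Cumming et al. slopes, our form\n# Normalization: 3.29% of stars host a 1-13 Mjup planet\n# with semimajor axis between 0.3 and 2.5 AU.\nC0 = 0.0329 \/ (powerlaw_integral(1.0, 13.0, alpha) *\n               powerlaw_integral(0.3, 2.5, beta))\n# Predicted fraction for 0.3-10 Mjup and 2-2000 day periods,\n# converting period in years to AU via Kepler's Third Law:\na_in = (2.0\/365.25)**(2.0\/3.0)\na_out = (2000.0\/365.25)**(2.0\/3.0)\nfrac = C0 * (powerlaw_integral(0.3, 10.0, alpha) *\n             powerlaw_integral(a_in, a_out, beta))\nprint(frac)  # 0.093, vs. the 10.5% of Cumming et al.\n\\end{verbatim}\n\n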
In any case we would\nnot have obtained very different constraints if we had\nused the \\citet{cumming} normalization in our\nMonte Carlo simulations.\n\nFor comparison, among the other papers reporting\nMonte Carlo simulations similar to ours,\n\\citet{kasper} used a normalization of 3\\% for\nplanets with semimajor axes of 1-3 AU and masses greater\nthan 1 M$_{\\mathrm{Jup}}$. This is close to our value of 3.29\\% for a\nsimilar range. \\citet{GDPS} and \\citet{nielsen} fixed\n$\\alpha$ and $\\beta$ in their simulations, and let\nthe normalization be a free parameter. \\citet{chauvin}\nobtained their normalization from \\citet{cumming},\nand \\citet{nielsenclose} obtained theirs from \\citet{carnp}.\n\n\\citet{juric} provide a helpful\nmathematical description of the eccentricity distribution\nof known RV planets:\n\n\\begin{equation}\nP(\\epsilon) = \\epsilon e^{-\\epsilon^2\/(2\\sigma^2)},\n\\label{eq:juric}\n\\end{equation}\n\nwhere $P(\\epsilon)$ is the probability of a given extrasolar\nplanet's having orbital eccentricity $\\epsilon$, $e$ is the\nbase of the natural logarithm, and $\\sigma = 0.3$.\nWe find that this mathematical form provides an excellent fit to\nthe distribution of real exoplanet eccentricities\nfrom Table 2 of \\citet{carnp}, so we have used it\nas our probability distribution to generate\nrandom eccentricities for the Monte Carlo simulations we describe\nin Section \\ref{sec:tmod} below.\n\n\\section{Constraints on the Distribution of Planets} \\label{sec:tmod}\n\n\\subsection{Theoretical Spectra}\n\\citet{bur} present high resolution, flux-calibrated theoretical spectra\nof giant planets or brown dwarfs for ages ranging from\n0.1-5.0 Gyr and masses from 1 to 20 M$_{\\mathrm{Jup}}$~(these are available\nfor download from \\url{http:\/\/www.astro.princeton.edu\/\\textasciitilde burrows\/}). We\nhave integrated these spectra to give absolute\nmagnitudes in the $L'$ and $M$ filters used in Clio\n(see Tables \\ref{tab:burl} and \\ref{tab:burm}), and have found that\nthe results can be reasonably interpolated to give the $L'$ or\n$M$ band magnitudes for all planets of interest for\nour survey. \\citet{bar} also present models of giant\nplanets and brown dwarfs, pre-integrated into magnitudes\nin the popular infrared bands. These models predict\nslightly better sensitivity to low mass planets in the $L'$ band\nand slightly poorer sensitivity in the $M$ band, relative\nto the \\citet{bur} models. We cannot say if the difference\nis due to the slightly different filter sets used (MKO\nfor Clio vs. Johnson-Glass and Johnson for \\citet{bar}),\nor if it is intrinsic to the different model spectra\nused in \\citet{bur} and \\citet{bar}. We have chosen\nto use the \\citet{bur} models exclusively herein, to\navoid any errors due to the slight filter differences. 
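\n\nFor reference, the interpolation we perform on Tables \\ref{tab:burl} and \\ref{tab:burm} is simple; a minimal sketch follows, with the grid abridged, and with linear interpolation in mass and log age being an illustrative choice of ours rather than anything mandated by the models:\n\n\\begin{verbatim}\nimport numpy as np\n\n# Abridged L' grid: rows are masses (Mjup), columns ages (Gyr).\nages = np.array([0.10, 0.32, 1.0, 3.2, 5.0])\nmasses = np.array([2.0, 5.0, 7.0, 10.0])\nmags = np.array([[16.793, 19.351, 23.737, 28.398, 29.479],\n                 [14.500, 16.397, 18.588, 22.437, 24.407],\n                 [13.727, 15.390, 17.336, 20.131, 21.574],\n                 [12.888, 14.437, 16.246, 18.480, 19.466]])\n\ndef lprime_mag(mass, age):\n    # Interpolate each mass row to the requested age, then in mass.\n    at_age = [np.interp(np.log10(age), np.log10(ages), row)\n              for row in mags]\n    return np.interp(mass, masses, at_age)\n\nprint(lprime_mag(6.0, 2.0))  # a 6 Mjup planet at 2 Gyr: ~19.9\n\\end{verbatim}\n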
\nSince the \\citet{bur} models predict poorer\nsensitivity in the $L'$ band, in which the majority\nof our survey was conducted, our decision to use\nthem is conservative.\n\n\\begin{deluxetable}{cccccc}\n\\tablewidth{0pc}\n\\tablecolumns{6}\n\\tablecaption{$L'$ Band Absolute Mags from \\citet{bur} \\label{tab:burl}}\n\\tablehead{\\colhead{Planet Mass} & \\colhead{Mag at} & \\colhead{Mag at} & \\colhead{Mag at} & \\colhead{Mag at} & \\colhead{Mag at}\\\\\n\\colhead{in M$_{\\mathrm{Jup}}$} & \\colhead{0.10 Gyr} & \\colhead{0.32 Gyr} & \\colhead{1.0 Gyr} & \\colhead{3.2 Gyr} & \\colhead{5.0 Gyr}} \n\\startdata\n\\phantom{0o0} 1.0 \\phantom{0o0} & \\phantom{0o0} 19.074 \\phantom{0o0} & \\phantom{0o0} 23.010 \\phantom{0o0} & \\phantom{0o0} 27.870 \\phantom{0o0} & \\phantom{0o0} 33.50\\tablenotemark{a} \\phantom{0o0} & \\phantom{0o0} 35.50\\tablenotemark{a} \\phantom{0o0} \\\\\n2.0 & 16.793 & 19.351 & 23.737 & 28.398 & 29.479 \\\\\n5.0 & 14.500 & 16.397 & 18.588 & 22.437 & 24.407 \\\\\n7.0 & 13.727 & 15.390 & 17.336 & 20.131 & 21.574 \\\\\n10.0 & 12.888 & 14.437 & 16.246 & 18.480 & 19.466 \\\\\n15.0 & 12.00\\tablenotemark{b} & 13.61\\tablenotemark{b} & 14.773 & 16.816 & 17.691 \\\\\n20.0 & 11.30\\tablenotemark{b} & 12.98\\tablenotemark{b} & 14.190 & 15.967 & 16.766 \\\\\n\\enddata\n\\tablenotetext{a}{\\footnotesize{No models for these\nvery faint planets appear in \\citet{bur}. We have inserted ad hoc \nvalues to smooth the interpolations. Any effect of these ad hoc\nvalues on the interpolated magnitudes of planets we could actually detect\nis negligible.}}\n\\tablenotetext{b}{\\footnotesize{No models for these\nbright, hot planets appear in \\citet{bur}, which focuses\non cooler objects. We have added values from \\citet{bar} and then \nadjusted them to slightly fainter values to ensure smooth interpolations.}}\n\\end{deluxetable}\n\n\n\\begin{deluxetable}{cccccc}\n\\tablewidth{0pc}\n\\tablecolumns{6}\n\\tablecaption{$M$ Band Absolute Mags from \\citet{bur} \\label{tab:burm}}\n\\tablehead{\\colhead{Planet Mass} & \\colhead{Mag at} & \\colhead{Mag at} & \\colhead{Mag at} & \\colhead{Mag at} & \\colhead{Mag at}\\\\\n\\colhead{in M$_{\\mathrm{Jup}}$} & \\colhead{0.10 Gyr} & \\colhead{0.32 Gyr} & \\colhead{1.0 Gyr} & \\colhead{3.2 Gyr} & \\colhead{5.0 Gyr}} \n\\startdata\n\\phantom{0o0} 1.0 \\phantom{0o0} & \\phantom{0o0} 14.974 \\phantom{0o0} & \\phantom{0o0} 16.995 \\phantom{0o0} & \\phantom{0o0} 19.987 \\phantom{0o0} & \\phantom{0o0} 25.0\\tablenotemark{a} \\phantom{0o0} & \\phantom{0o0} 26.0\\tablenotemark{a} \\phantom{0o0} \\\\\n2.0 & 14.023 & 15.313 & 17.807 & 21.295 & 22.163 \\\\\n5.0 & 13.014 & 14.017 & 15.153 & 17.167 & 18.537 \\\\\n7.0 & 12.618 & 13.561 & 14.558 & 16.126 & 16.909 \\\\\n10.0 & 12.189 & 13.096 & 14.093 & 15.315 & 15.951 \\\\\n15.0 & 11.55\\tablenotemark{b} & 12.60\\tablenotemark{b} & 13.370 & 14.512 & 14.990 \\\\\n20.0 & 11.29\\tablenotemark{b} & 12.21\\tablenotemark{b} & 13.069 & 14.122 & 14.580 \\\\\n\\enddata\n\\tablenotetext{a}{\\footnotesize{No models for these\nvery faint planets appear in \\citet{bur}. We have inserted ad hoc \nvalues to smooth the interpolations. Any effect of these ad hoc\nvalues on the interpolated magnitudes of planets we could actually detect\nis negligible.}}\n\\tablenotetext{b}{\\footnotesize{No models for these\nbright, hot planets appear in \\citet{bur}, which focuses\non cooler objects. 
We have added values from \\citet{bar} and then \nadjusted them to slightly fainter values to ensure smooth interpolations.}}\n\\end{deluxetable}\n\n\n\\subsection{Introducing the Monte Carlo Simulations} \\label{sec:mcint}\nIn common with several other surveys \\citep{kasper,biller1,GDPS,chauvin},\nwe have used our survey null result to set upper limits on planet\npopulations via Monte Carlo simulations. In these simulations,\nwe input our sensitivity data in the form of tabular files giving\nthe sensitivity in apparent magnitudes as a function of separation\nin arcseconds for each star. Various features of our images\ncould cause the sensitivity at a given separation to vary somewhat\nwith position angle: to quantify this, our tabular files give\nten different values at each separation, corresponding\nto ten different percentiles ranging from the worst to the best\nsensitivity attained at that separation. These files are described in detail\nin \\citet{obspaper}, and are available for download\nfrom \\url{http:\/\/www.hopewriter.com\/Astronomyfiles\/Data\/SurveyPaper\/}.\n\nThe Monte Carlo simulations described below allow us to use the\nobserved sensitivity to planets in our survey to calculate directly\nthe probability of a given parameter or set of parameters describing\nthe exoplanet population. This, in turn, allows us to constrain these\nparameters at a given confidence level. This is a maximum likelihood\ntechnique that allows us to incorporate all the individual probability\nfunctions of the data, as well as parameterized models of the\nexoplanet population. The approach is similar to a Bayesian one;\nhowever, we also use the results of the simulations to set confidence\nlimits on the parameters, in the more classical manner.\n\nEach Monte Carlo simulation runs with given planet distribution\npower law slopes $\\alpha$ and $\\beta$, and a given outer \ntruncation value $R_{trunc}$\nfor the semimajor axis distribution. Using the normalization \ndescribed in Section \\ref{sec:rv}, the probability $P_{plan}$ \nof any given star having a planet\nbetween 1 and 20 M$_{\\mathrm{Jup}}$~is then calculated from the \ninput $\\alpha$, $\\beta$, and $R_{trunc}$. In each\nrealization of our survey, each star is randomly assigned \na number of planets, based on Poisson statistics \nwith mean $P_{plan}$. In most cases $P_{plan} \\ll 1$,\nso the most likely number of planets is zero. If the \nstar turns out to have one or more planets, the mass \nand semimajor axis of each are randomly selected\nfrom the input power law distributions. The eccentricity \nis randomly selected from the\n\\citet{juric} distribution, and an inclination is \nrandomly selected from the distribution $P(i) \\propto \\sin(i)$. \nIf the star is a binary, the planet may be dropped from \nthe simulation at this point if the orbit seems likely \nto be unstable. In general, we consider circumstellar\nplanets to be stable as long as their apastron distance is less than\n$1\/3$ the projected distance to the companion star, and circumbinary\nplanets to be stable as long as their periastron distance\nis at least three times the projected separation \nof the binary. For planets orbiting low-mass secondaries, \na smaller limit on the apastron distance is sometimes\nimposed, while often circumbinary planets required such distant\norbits that they were simply not considered; the details are given\nin Table \\ref{tab:binaries}. 
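\n\nA minimal sketch of the planet-generation step just described may be useful. It is illustrative rather than our actual survey code; in particular, the cut applied to the rare eccentricity draws above unity, and the inner semimajor axis cutoff \\texttt{a\\_min}, are stand-ins for implementation details:\n\n\\begin{verbatim}\nimport numpy as np\nrng = np.random.default_rng()\n\ndef sample_powerlaw(exponent, lo, hi, n):\n    # Inverse-transform sample of p(x) ~ x**exponent on [lo, hi].\n    k = exponent + 1.0\n    u = rng.uniform(size=n)\n    return (lo**k + u*(hi**k - lo**k))**(1.0\/k)\n\ndef make_planets(p_plan, alpha, beta, a_min, r_trunc):\n    # Planets for one star in one realization of the survey.\n    n = rng.poisson(p_plan)  # usually zero, since p_plan << 1\n    mass = sample_powerlaw(alpha, 1.0, 20.0, n)      # Mjup\n    sma = sample_powerlaw(beta, a_min, r_trunc, n)   # AU\n    # Juric et al. form, sigma = 0.3 (a Rayleigh distribution):\n    ecc = 0.3*np.sqrt(-2.0*np.log(rng.uniform(size=n)))\n    ecc = np.minimum(ecc, 0.99)  # crude cut; rare draws exceed 1\n    inc = np.arccos(rng.uniform(size=n))  # P(i) ~ sin(i)\n    return mass, sma, ecc, inc\n\\end{verbatim}\n\n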
For each planet passing the orbital\nstability checkpoint, a full orbit is calculated\nusing a binary star code written by one of us (M. K.). \nThe projected separation\nin arcseconds is found, and the magnitude of the planet is calculated from its\nmass, distance, and age using the \\citet{bur} models.\n\nTwo further random choices complete the determination of whether\nthe simulated planet is detected. First, one of the ten percentiles\ngiven in the sensitivity files is randomly selected. Combined\nwith the separation in arcseconds, this selection specifies\nthe sensitivity of our observation at the location of the simulated\nplanet. The second random choice is needed because planets\nappearing at low significance in our images would have\na less than 100\\% chance of being confidently detected. \nOur blind sensitivity tests using fake planets placed in our \nraw data showed that we could confirm 97\\% of 10$\\sigma$ sources,\n46\\% of 7$\\sigma$ sources, and 16\\% of 5$\\sigma$ sources, \nwhere $\\sigma$ is a measure of the PSF-scale noise in a \ngiven region of the image (see \\citet{obspaper} for details). \nThis second and final random choice in our Monte Carlo simulations\nis therefore arranged to ensure that\na randomly selected 16\\% of planets with 5-7$\\sigma$ significance,\nand 46\\% of planets with 7-10$\\sigma$ significance,\nare recorded in the simulation as detected objects. \nAlthough we have 97\\% completeness at 10$\\sigma$, we choose\nto consider 100\\% of simulated planets with 10$\\sigma$ or \ngreater significance to be detected, because at only slightly\nabove 10$\\sigma$ the true completeness certainly becomes 100\\% for\nall practical purposes. Note that we have conservatively\nallowed the detection probabilities to increase stepwise,\nrather than in a continuous curve, from 5 to 10$\\sigma$: that is,\nin our Monte Carlo simulations, planets with 5-7$\\sigma$ significance\nare detected at the 5$\\sigma$ rate from our blind sensitivity\ntests, while those with 7-10$\\sigma$ significance are detected\nat the 7$\\sigma$ rate.\n\nThe low completeness (16\\%) at 5$\\sigma$, as determined from \nour blind sensitivity tests using fake planets, may seem surprising.\nIn these tests we distinguished between planets that were suggested by\na concentration of unusually bright pixels (`Noticed'), or else\nconfidently identified as real sources (`Confirmed').\nMany more planets were noticed\nthan were confirmed: for noticed planets, the rates\nare 100\\% at 10$\\sigma$, 86\\% at 7$\\sigma$, and 56\\% at 5$\\sigma$.\nHowever, very many false positives were also noticed, so\nsources that are merely noticed but not confirmed do not\nrepresent usable detections. The completeness levels\nwe used in our Monte Carlo simulations (16\\% at 5$\\sigma$ and\n46\\% at 7$\\sigma$) refer to confirmed\nsources. No false positives were confirmed in any of our \nblind tests. Followup observations of suspected sources are costly in terms of\ntelescope time, so a detection strategy with a low false-positive\nrate is important. \n\nThough sensitivity estimators (and therefore\nthe exact meaning of 5$\\sigma$) differ among\nplanet imaging surveys, ours was quite\nconservative, as is explained in \\citet{obspaper}. 
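\n\nIn code form, the stepwise significance-to-completeness mapping described above is simply a transcription of the confirmed-source rates from our blind tests:\n\n\\begin{verbatim}\ndef detection_probability(significance):\n    # Completeness for confirmed sources, from our blind tests.\n    if significance >= 10.0:\n        return 1.00  # 97% measured; taken as 100% above 10 sigma\n    elif significance >= 7.0:\n        return 0.46\n    elif significance >= 5.0:\n        return 0.16\n    else:\n        return 0.0   # below 5 sigma is not counted as detectable\n\\end{verbatim}\n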
\nThe low completeness we find at 5$\\sigma$, which\nhas often been taken as a high-completeness sensitivity\nlimit, should serve as a warning to future workers in this field, and an\nencouragement to establish a definitive significance-completeness\nrelation through blind sensitivity tests as \nwe have done.\n\nNote that our blind sensitivity tests,\ncovered in \\citet{obspaper}, are completely distinct\nfrom the Monte Carlo simulations covered herein. The blind\ntests involved inserting a little over a hundred fake planets\ninto our raw image data to establish our point-source sensitivity. \nIn our Monte Carlo work we simulated the orbits, masses,\nand brightnesses of millions of planets, and compared\nthem to our previously-established sensitivity limits\nto see which planets our survey could have detected.\n\n\\subsection{A Detailed Look at a Monte Carlo Simulation} \\label{sec:mcdet}\n\nTo evaluate the significance of our survey and provide some\nguidance for future work, we have analyzed in detail a single Monte\nCarlo simulation. We chose the \\citet{cumming} best fit\nvalues of $\\alpha = -1.31$ and $\\beta = -0.61$, with\nthe semimajor axis truncation radius set to 100 AU. \nPlanets could range in mass from 1 to 20 M$_{\\mathrm{Jup}}$. \nAs described in Section \\ref{sec:rv} above, we normalized\nthe planet distributions so that each star had a 3.29\\% probability\nof having a planet with semimajor axis between 0.3 and 2.5\nAU and mass between 1 and 13 M$_{\\mathrm{Jup}}$. \nThe simulation consisted of 50,000 realizations of our survey\nwith these parameters. In all, 505,884 planets were\nsimulated, of which 51,879 were detected. \n\nIn 38\\% of the 50,000 realizations, our survey found zero planets,\nwhile 37\\% of the time it found one, and 25\\% of the time it found\ntwo or more. The planet \ndistribution we considered in this simulation\ncannot be ruled out by our survey, since a null result\nsuch as we actually obtained turns out not to be very improbable.\n\nThe large number of survey realizations in our simulation allows the\ncalculation of precise statistics for potentially detectable planets.\nThe median mass of detected planets in our simulation was 11.36 M$_{\\mathrm{Jup}}$, \nthe median semimajor axis was 43.5 AU, the median angular \nseparation was 2.86 arcsec,\nand the median significance was 21.4$\\sigma$. This last number\nis interesting because it suggests that, for our\nsurvey, any real planet detected was likely to appear at high\nsignificance, obvious even on a preliminary, `quick-look' \nreduction of the data. This suggests that performing such \nreductions at the telescope should be a high priority, to \nallow immediate confirmation and followup if a candidate\nis seen. Figure \\ref{fig:sighist} presents as a histogram\nthe significance of all planets detected in this Monte Carlo simulation.\n\n\\begin{figure}\n\\plotone{f1.eps}\n\\caption{Histogram of detection significance for the\n51,879 simulated planets detected in 50,000 realizations of our\nsurvey with the \\citet{cumming} distribution ($\\alpha = -1.31$,\n$\\beta = -0.61$) truncated at 100 AU. Our detection\nrates went down for significance less than 10 $\\sigma$,\nbut some 5-7 $\\sigma$ planets are still detected. 
The\nrelatively high median significance of 21.4 $\\sigma$ suggests\nany detected planet would most likely be quite obvious --\na good argument for doing `quick-look' data reductions\nas soon as possible at the telescope.\n\\label{fig:sighist}}\n\\end{figure}\n\nWe suspected that there would be a detection bias\ntoward very eccentric planets, because these would spend\nmost of their orbits near apastron, where they would be\neasier to detect. This bias did not appear at any\nmeasurable level in our simulation. However, there was\na weak but clear bias toward planets in low-inclination\norbits, which, of course, spend more of their time\nat large separations from their stars than do planets\nwith nearly edge-on orbits.\n\nA concern with any planet imaging survey is how strongly the results\nhinge on the best (i.e. nearest and youngest) few stars. \nA survey of 54 stars may have far less\nstatistical power than the number would imply if the best two or three\nstars had most of the probability of hosting detectable planets.\nTable \\ref{tab:sdp} gives the percentage of planets detected\naround each star in our sample based on our detailed Monte Carlo\nsimulation. Due to poor data quality, binary orbit constraints,\nor other issues, a few stars had zero probability of detected planets\ngiven the distribution used here. In general, however, the likelihood of \nhosting detectable planets is fairly well distributed. \n\nIn Table \\ref{tab:binaries}, we give the details of planetary\norbital constraints used in our Monte Carlo simulations for\neach binary star we observed, complete with the separations\nwe measured for the binaries. Note that HD 96064 B\nis a close binary star in its own right, so planets orbiting\nit were limited in two ways: the apastron could not be too\nfar out, or the orbit would be rendered unstable by proximity\nto HD 96064 A -- but the periastron also could not be too far in,\nor the binary orbit of HD 96064 Ba and HD 96064 Bb would render\nit unstable. Planets individually orbiting \nHD 96064 Ba or HD 96064 Bb were not considered in our survey,\nsince to be stable the planets would have to be far too close-in \nfor us to detect them. The constraints described\nin Table \\ref{tab:binaries} account for most of the stars in\nTable \\ref{tab:sdp} with few or no detections reported.\n\nA final question our detailed simulation can address\nis how important the $M$ band observations\nwere to the survey results. In Table \\ref{tab:mband}, we show\nthat when $M$ band observations were made, they \ndid substantially increase\nthe number of simulated planets detected. 
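\n\nIn code, the stability screens of Section \\ref{sec:mcint}, whose per-star outcomes are listed in Table \\ref{tab:binaries}, reduce to simple interval tests on each simulated orbit. The sketch below gives the general rules plus the doubly-constrained HD 96064 B case; the special handling of low-mass secondaries is omitted:\n\n\\begin{verbatim}\ndef circumstellar_ok(apastron_au, proj_sep_au):\n    # Planet orbiting one component: apastron must be less\n    # than 1\/3 the projected distance to the companion star.\n    return apastron_au < proj_sep_au \/ 3.0\n\ndef circumbinary_ok(periastron_au, proj_sep_au):\n    # Circumbinary planet: periastron must be at least three\n    # times the projected separation of the binary.\n    return periastron_au > 3.0 * proj_sep_au\n\ndef hd96064_B_ok(periastron_au, apastron_au):\n    # Circumbinary around HD 96064 Ba-Bb, yet bound by A:\n    # values from the binary-constraints table.\n    return periastron_au > 16.1 and apastron_au < 57.3\n\\end{verbatim}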
\n\n\n\n\\begin{center}\n\\begin{deluxetable}{lcccc}\n\\tablewidth{0pc}\n\\tablecolumns{5}\n\\tablecaption{Percentage of Detected Planets Found Around Each Star \\label{tab:sdp}}\n\\tablehead{ & \\colhead{\\% of Total} & \\colhead{Median} & \\colhead{Median} &\n \\colhead{Median} \\\\\n\\colhead{Star Name} & \\colhead{Detected Planets} & \\colhead{Mass} &\n\\colhead{Semimajor Axis} & \\colhead{Separation}}\n\\startdata\nGJ 117 & 6.07 & 7.66 M$_{\\mathrm{Jup}}$~& 39.36 AU & 3.64 arcsec \\\\\n$\\epsilon$ Eri & 5.83 & 6.98 M$_{\\mathrm{Jup}}$~& 18.26 AU & 4.35 arcsec \\\\\nHD 29391 & 5.80 & 8.14 M$_{\\mathrm{Jup}}$~& 49.13 AU & 2.71 arcsec \\\\\nGJ 519 & 4.74 & 10.44 M$_{\\mathrm{Jup}}$~& 40.51 AU & 3.28 arcsec \\\\\nGJ 625 & 4.67 & 9.72 M$_{\\mathrm{Jup}}$~& 29.18 AU & 3.48 arcsec \\\\\nGJ 5 & 4.45 & 9.60 M$_{\\mathrm{Jup}}$~& 53.42 AU & 3.08 arcsec \\\\\nBD+60 1417 & 3.95 & 11.58 M$_{\\mathrm{Jup}}$~& 44.48 AU & 2.05 arcsec \\\\\nGJ 355 & 3.81 & 9.71 M$_{\\mathrm{Jup}}$~& 53.91 AU & 2.34 arcsec \\\\\nGJ 354.1 A & 3.67 & 9.58 M$_{\\mathrm{Jup}}$~& 60.12 AU & 2.64 arcsec \\\\\nGJ 159 & 3.57 & 9.73 M$_{\\mathrm{Jup}}$~& 57.95 AU & 2.71 arcsec \\\\\nGJ 349 & 3.35 & 11.38 M$_{\\mathrm{Jup}}$~& 44.40 AU & 3.17 arcsec \\\\\n61 Cyg B & 3.29 & 11.32 M$_{\\mathrm{Jup}}$~& 19.53 AU & 4.08 arcsec \\\\\nGJ 879 & 3.03 & 11.18 M$_{\\mathrm{Jup}}$~& 36.84 AU & 3.69 arcsec \\\\\nGJ 564 & 2.94 & 10.67 M$_{\\mathrm{Jup}}$~& 56.80 AU & 2.70 arcsec \\\\\nGJ 410 & 2.93 & 12.78 M$_{\\mathrm{Jup}}$~& 41.83 AU & 3.03 arcsec \\\\\nGJ 450 & 2.89 & 12.90 M$_{\\mathrm{Jup}}$~& 38.72 AU & 3.66 arcsec \\\\\nGJ 3860 & 2.68 & 12.70 M$_{\\mathrm{Jup}}$~& 49.72 AU & 2.69 arcsec \\\\\nHD 78141 & 2.58 & 12.47 M$_{\\mathrm{Jup}}$~& 57.00 AU & 2.24 arcsec \\\\\nBD+20 1790 & 2.51 & 12.14 M$_{\\mathrm{Jup}}$~& 58.33 AU & 2.02 arcsec \\\\\nGJ 278 C & 2.20 & 12.68 M$_{\\mathrm{Jup}}$~& 54.56 AU & 3.04 arcsec \\\\\nGJ 311 & 2.19 & 12.55 M$_{\\mathrm{Jup}}$~& 52.07 AU & 3.20 arcsec \\\\\nHD 113449 & 2.17 & 12.52 M$_{\\mathrm{Jup}}$~& 59.31 AU & 2.29 arcsec \\\\\nGJ 211 & 2.10 & 13.59 M$_{\\mathrm{Jup}}$~& 50.51 AU & 3.30 arcsec \\\\\nBD+48 3686 & 2.08 & 12.56 M$_{\\mathrm{Jup}}$~& 55.05 AU & 2.01 arcsec \\\\\nGJ 282 A & 2.05 & 13.39 M$_{\\mathrm{Jup}}$~& 49.85 AU & 2.99 arcsec \\\\\nGJ 216 A & 2.03 & 12.71 M$_{\\mathrm{Jup}}$~& 42.98 AU & 4.21 arcsec \\\\\n61 Cyg A & 1.97 & 13.70 M$_{\\mathrm{Jup}}$~& 20.94 AU & 4.54 arcsec \\\\\nHD 1405 & 1.54 & 13.13 M$_{\\mathrm{Jup}}$~& 66.34 AU & 2.04 arcsec \\\\\nHD 220140 A & 1.54 & 11.73 M$_{\\mathrm{Jup}}$~& 36.85 AU & 1.73 arcsec \\\\\nHD 96064 A & 1.49 & 12.63 M$_{\\mathrm{Jup}}$~& 46.64 AU & 1.75 arcsec \\\\\nHD 139813 & 1.43 & 14.33 M$_{\\mathrm{Jup}}$~& 59.71 AU & 2.37 arcsec \\\\\nGJ 380 & 0.92 & 15.76 M$_{\\mathrm{Jup}}$~& 25.31 AU & 4.21 arcsec \\\\\nGJ 896 A & 0.61 & 12.43 M$_{\\mathrm{Jup}}$~& 6.47 AU & 0.98 arcsec \\\\\nGJ 860 A & 0.38 & 11.58 M$_{\\mathrm{Jup}}$~& 53.26 AU & 6.62 arcsec \\\\\n$\\tau$ Ceti & 0.38 & 17.19 M$_{\\mathrm{Jup}}$~& 25.49 AU & 5.52 arcsec \\\\\nGJ 896 B & 0.34 & 11.40 M$_{\\mathrm{Jup}}$~& 6.78 AU & 1.14 arcsec \\\\\n$\\xi$ Boo B & 0.32 & 12.07 M$_{\\mathrm{Jup}}$~& 8.25 AU & 1.36 arcsec \\\\\nHD 220140 B & 0.28 & 12.04 M$_{\\mathrm{Jup}}$~& 25.92 AU & 1.37 arcsec \\\\\n$\\xi$ Boo A & 0.24 & 12.89 M$_{\\mathrm{Jup}}$~& 8.72 AU & 1.50 arcsec \\\\\nGJ 659 B & 0.21 & 17.71 M$_{\\mathrm{Jup}}$~& 62.54 AU & 2.81 arcsec \\\\\nGJ 166 B & 0.17 & 16.12 M$_{\\mathrm{Jup}}$~& 6.19 AU & 1.34 arcsec \\\\\nGJ 684 A & 0.17 & 14.93 M$_{\\mathrm{Jup}}$~& 85.98 AU & 4.87 
arcsec \\\\\nHD 96064 B & 0.13 & 14.43 M$_{\\mathrm{Jup}}$~& 38.55 AU & 1.60 arcsec \\\\\nGJ 505 B & 0.12 & 15.94 M$_{\\mathrm{Jup}}$~& 17.11 AU & 1.61 arcsec \\\\\nGJ 166 C & 0.10 & 15.56 M$_{\\mathrm{Jup}}$~& 6.43 AU & 1.52 arcsec \\\\\nGJ 505 A & 0.07 & 16.32 M$_{\\mathrm{Jup}}$~& 18.08 AU & 1.75 arcsec \\\\\nGJ 702 A & 0.02 & 15.90 M$_{\\mathrm{Jup}}$~& 6.21 AU & 1.50 arcsec \\\\\nGJ 684 B & None & NA & NA & NA \\\\\nGJ 860 B & None & NA & NA & NA \\\\\nGJ 702 B & None & NA & NA & NA \\\\\nHD 77407 A & None & NA & NA & NA \\\\\nGJ 659 A & None & NA & NA & NA \\\\\nGJ 3876 & None & NA & NA & NA \\\\\nHD 77407 B & None & NA & NA & NA \\\\\n\\enddata\n\\tablecomments{This table applies\nto our detailed Monte Carlo simulation\nwith 50,000 survey realizations run using\n$\\alpha=-1.31$, $\\beta=-0.61$, and semimajor\naxis truncation radius 100 AU. Of all the simulated\nplanets that were detected, we present here\nthe percentage that were found around each given star,\nand the median mass, semimajor axis, and projected\nseparation for simulated planets found around each star.\nThe table thus indicates around which stars our\nsurvey had the highest likelihood of detecting\na planet. Many stars with poor likelihood are\nbinaries, with few stable planetary orbits possible.}\n\\end{deluxetable}\n\\end{center}\n\n\\begin{center}\n\\begin{deluxetable}{lcccc}\n\\tablewidth{0pc}\n\\tablecolumns{5}\n\\tablecaption{Constraints on Simulated Planet Orbits Around Binary Stars\\label{tab:binaries}}\n\\tablehead{ & & \\colhead{constraints on} &\n \\colhead{constraints on} & \\colhead{constraints on} \\\\ \n & \\colhead{separation} & \\colhead{circumprimary} &\n \\colhead{circumsecondary} & \\colhead{circumbinary} \\\\ \n\\colhead{Star Name} & \\colhead{(arcsec)} & \\colhead{apastron} & \n\\colhead{apastron} & \\colhead{periastron}}\n\\startdata\nHD 220140 AB & 10.828 & $<$3.61 asec (71.3 AU) & $<$2.17 asec (42.8 AU) & No Stable Orbits \\\\\nHD 96064 AB & 11.628 & $<$3.88 asec (95.6 AU) & $<$2.33 asec (57.3 AU) & No Stable Orbits\\\\\nHD 96064 Bab & 0.217 & No Stable Orbits & No Stable Orbits & $>$0.65 asec\n(16.1 AU) \\\\\nGJ 896 AB & 5.366 & $<$1.79 asec (11.8 AU) & $<$1.79 asec (11.8 AU) & No Stable Orbits \\\\\nGJ 860 AB & 2.386 & $<$0.79 asec (3.17 AU) & $<$0.60 asec (2.41 AU) &\n$>$7.15 asec (28.7 AU) \\\\\n$\\xi$ Boo AB & 6.345 & $<$2.12 asec (14.2 AU) & $<$2.12 asec (14.2 AU) & No Stable Orbits \\\\\nGJ 166 BC & 8.781 & $<$2.20 asec (10.6 AU) & $<$2.20 asec (10.6 AU) & No Stable Orbits \\\\\nGJ 684 AB & 1.344 & $<$0.45 asec (6.34 AU) & $<$0.27 asec (3.80 AU) &\n$>$4.03 asec (56.8 AU) \\\\\nGJ 505 AB & 7.512 & $<$2.50 asec (29.8 AU) & $<$2.50 asec (29.8 AU) & No Stable Orbits \\\\\nGJ 702 AB & 5.160 & $<$1.76 asec (8.85 AU) & $<$1.32 asec (6.64 AU) &\n$>$15.9 asec (79.7 AU) \\\\\nHD 77407 AB & 1.698 & $<$0.57 asec (17.2 AU) & $<$0.34 asec (10.2 AU) &\n$>$5.11 asec (153.7 AU) \\\\\n\\enddata\n\\tablecomments{Planets orbiting the primary in a binary star\nwere considered to be de-stabilized by the gravity of the\nsecondary if their apastron distance from the primary was\ntoo large. Similarly, planets orbiting the secondary\nhad to have small enough apastron distances to avoid\nbeing de-stabilized by the primary. Circumbinary planets\nhad to have a large enough periastron distance to avoid\nbeing de-stabilized by the differing gravitation of the\ntwo components of the binary. 
Note that HD 96064 B is\nitself a tight binary star, so planets orbiting it\nhad both a minimum periastron and a maximum apastron.\nConstraints are given in AU as well as arcseconds so they\ncan easily be compared with actual or hypothetical planetary systems.}\n\\end{deluxetable}\n\\end{center}\n\n\n\n\\begin{center}\n\\begin{deluxetable}{lcccc}\n\\tablewidth{0pc}\n\\tablecolumns{5}\n\\tablecaption{Importance of the $M$ Band Data \\label{tab:mband}}\n\\tablehead{ & \\colhead{Total simulated} &\\colhead{2-band} & \\colhead{$L'$-only} & \\colhead{$M$-only}\\\\\n\\colhead{Star Name} & \\colhead{detections} & \\colhead{detections} & \\colhead{detections} & \\colhead{detections}}\n\\startdata\n$\\epsilon$ Eri & 2850 & 46.98\\% & 8.28\\% & 44.74\\% \\\\\n61 Cyg B & 1610 & 52.73\\% & 1.55\\% & 45.71\\% \\\\\n61 Cyg A & 965 & 63.01\\% & 22.80\\% & 14.20\\% \\\\\n$\\xi$ Boo B & 157 & 61.15\\% & 18.47\\% & 20.38\\% \\\\\n$\\xi$ Boo A & 115 & 60.00\\% & 18.26\\% & 21.74\\% \\\\\nGJ 702 A & 9 & 22.22\\% & 0.00\\% & 77.78\\% \\\\\n\\enddata\n\\tablecomments{The usefulness of $M$ band\nobservations, based on our detailed Monte\nCarlo simulation. When $M$ band observations \nwere made of a given star, they did substantially increase\nthe number of simulated planets detected around that star.}\n\\end{deluxetable}\n\\end{center}\n\n\\subsection{Monte Carlo Simulations: Constraining the Power Laws} \\label{sec:mcbig}\n\nThe planet distribution we used in the single Monte Carlo\nsimulation described above could not be ruled out by our survey.\nTo find out what distributions could be ruled out, we performed\nMonte Carlo simulations assuming a large number of different\npossible distributions, parameterized by the two power\nlaw slopes $\\alpha$ and $\\beta$, and by the outer semimajor\naxis truncation radius $R_{trunc}$. \nRegardless of the values of $\\alpha$ and $\\beta$, each simulation \nwas normalized to match the RV statistics\nof \\citet{carnp}: any given star had 3.29\\% probability of\nhosting a planet with mass between 1 and 13 M$_{\\mathrm{Jup}}$~and \nsemimajor axis between 0.3 and 2.5 AU. \nThe mass range for simulated planets was 1-20 M$_{\\mathrm{Jup}}$.\n\nWe tested three\ndifferent values of $\\alpha$: -1.1, -1.31, and -1.51, roughly\ncorresponding to the most optimistic permitted, the best\nfit, and the most pessimistic permitted values from \\citet{cumming}.\nFor each value of $\\alpha$, we ran simulations spanning\na wide grid in terms of $\\beta$ and $R_{trunc}$. In contrast\nto the extensive results described in Section \\ref{sec:mcdet},\nthe only quantity saved from these simulations was the probability\nof finding zero planets. Since we did in fact obtain a null\nresult, distributions for which the probability of this was\nsufficiently low can be ruled out. \n\nFigures \\ref{fig:bmc1.31} and \\ref{fig:bmc_other} show the probability of a null\nresult as a function of $\\beta$ and $R_{trunc}$ for our \nthree different values of $\\alpha$. Figure \\ref{fig:bmc1.31}\npresents constraints based on $\\alpha = -1.31$, the best-fit\nvalue from RV statistics, while Figure \\ref{fig:bmc_other}\ncompares the optimistic case $\\alpha = -1.1$ and the\npessimistic case $\\alpha = -1.51$. Each pixel in these\nfigures represents a Monte Carlo simulation involving\n15,000 realizations of our survey; generating the\nfigures took several tens of hours on a fast PC.\nContours are overlaid at selected probability\nlevels. 
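\n\nSchematically, each pixel of these figures was produced as follows, where \\texttt{realize\\_survey} stands in for one simulated realization of the survey (planet generation, stability screening, and detection checks as in Section \\ref{sec:mcdet}) returning the number of planets detected:\n\n\\begin{verbatim}\ndef prob_null(realize_survey, alpha, beta, r_trunc, n_real=15000):\n    # Fraction of survey realizations with zero detections.\n    nulls = sum(realize_survey(alpha, beta, r_trunc) == 0\n                for _ in range(n_real))\n    return nulls \/ float(n_real)\n\n# One pixel of the grid, e.g. the Cumming et al. best fit:\n# p = prob_null(realize_survey, -1.31, -0.61, 100.0)\n# Distributions giving p < 0.10 are ruled out at 90% confidence.\n\\end{verbatim}\n\n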
Regions within the 1\\%, 5\\%, and 10\\% contours can, of course,\nbe ruled out at the 99\\%, 95\\%,\nand 90\\% confidence levels, respectively. For example,\nwe find that the most optimistic power laws allowed\nby the \\citet{cumming} RV statistics, $\\alpha = -1.1$ and $\\beta = -0.46$, \nare ruled out with 90\\% confidence if $R_{trunc}$ is 110 AU\nor greater. Similarly, the distribution with $\\alpha = -1.51$ and $\\beta = -0.3$, \ntruncated at 100 AU, is ruled out. Though\n$\\beta=0.0$ is not physically plausible, previous work\nhas sometimes used it as an example: for $\\alpha=-1.31$,\nwe rule out $\\beta=0.0$ unless $R_{trunc}$ is less than\n38 AU.\n\n\\begin{figure}\n\\includegraphics[scale=6.0]{f2.eps}\n\\caption{Probability of our survey detecting zero planets,\nas a function of the power law slope of the semimajor\naxis distribution $\\beta$, where $\\frac{dn}{da} \\propto a^{\\beta}$,\nand the outer truncation radius of the semimajor axis distribution.\nHere, the slope of the mass distribution $\\alpha$ has been taken as -1.31,\nwhere $\\frac{dn}{dM} \\propto M^{\\alpha}$. Since we found\nno planets, distributions that lead to a probability $P$ of\nfinding no planets are ruled out at the $1-P$ confidence\nlevel: for example, the region above and to the right of the\n0.1 contour is ruled out at the 90\\% confidence level.\n\\label{fig:bmc1.31}}\n\\end{figure}\n\n\\begin{figure}\n\\plottwo{f3a.eps}{f3b.eps}\n\\caption{Probability of our survey detecting zero planets,\nas a function of the power law slope of the semimajor\naxis distribution $\\beta$, where $\\frac{dn}{da} \\propto a^{\\beta}$,\nand the outer truncation radius of the semimajor axis distribution.\nHere, the slope of the mass distribution $\\alpha$ has been taken as -1.1\n(left) and -1.51 (right),\nwhere $\\frac{dn}{dM} \\propto M^{\\alpha}$. Since we found\nno planets, distributions that lead to a probability $P$ of\nfinding no planets are ruled out at the $1-P$ confidence\nlevel: for example, the regions above and to the right of the\n0.1 contours are ruled out at the 90\\% confidence level.\n\\label{fig:bmc_other}}\n\\end{figure}\n\n\n\\subsection{Model-independent Constraints} \\label{sec:indep}\n\nIt is also possible to place constraints on the distribution\nof planets without assuming a power law or any other particular\nmodel for the statistics of planetary masses and orbits. Note\nwell that by ``model-independent'' in this context, we mean\nindependent only of models for the statistical distributions of\nplanets in terms of $M$ and $a$ -- not \nindependent of models of planetary \\textit{spectra} such \nas those we obtain from \\citet{bur}. The latter are our only\nmeans of converting from planetary mass and age to detectable flux,\nand as such they remain indispensable.\n\nTo place our model-independent constraints, we performed an additional series\nof Monte Carlo simulations on a grid of planet mass and\norbital semimajor axis. For each grid point we seek to\ndetermine a number $P(M,a)$ such that, with some specified\nlevel of confidence (e.g., 90\\%), the probability of a star like those\nin our sample having a planet with the specified mass $M$\nand semimajor axis $a$ is no more than $P(M,a)$. We determine\n$P(M,a)$ by a search: first a guess is made, and a Monte Carlo\nsimulation assuming this probability is performed. If more\nthan 10\\% of the realizations of our survey turn up a null\nresult, the guessed probability is too low; if less than 10\\%\nturn up a null result, the probability is too high. 
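\n\nA sketch of this search follows, with \\texttt{null\\_fraction(p)} standing in for a Monte Carlo run that returns the fraction of survey realizations with no detections when each star hosts a planet of the given $M$ and $a$ with probability \\texttt{p}:\n\n\\begin{verbatim}\ndef find_upper_limit(null_fraction, p0=0.5, step=0.25, tol=1e-3):\n    # Search for P(M,a) such that 10% of realizations are null.\n    p = p0\n    while step > tol:\n        if null_fraction(p) > 0.10:\n            p += step  # too many nulls: the guess is too low\n        else:\n            p -= step  # too few nulls: the guess is too high\n        step *= 0.5    # steps of ever-decreasing size\n    return p\n\\end{verbatim}\n\n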
The guess is adjusted in steps of ever-decreasing size until the\ncorrect value is reached.\n\nFigure \\ref{fig:bmcgrid1} shows the 90\\% confidence upper limit on\n$P(M,a)$ as a function of mass $M$ and semimajor axis $a$.\nEach pixel represents thousands of realizations of our survey,\nwith $P(M,a)$ finely adjusted to reach the correct value.\nContours are overplotted showing where $P(M,a)$\nis less than 8\\%, 10\\%, 25\\%, 50\\%, and 75\\%, with 90\\% confidence.\nNote that $P(M,a)$, the value constrained by our\nsimulations, is a probability rather than a fixed fraction. \nThe probability is the more scientifically interesting\nnumber, but is harder to constrain.\nFor example, if 3.7\\% is the fraction of \\textit{the actual stars in our sample}\nthat have planets with easy-to-detect properties, there are\n2 such planets represented in our 54-star survey.\nHowever, if the \\textit{probability} of a star\n\\textit{like those in our sample} having such a planet\nis 3.7\\%, there is still a nonzero probability (13\\% in this case) \nthat no star in our sample actually has such a planet.\n\nThe results presented in Figure \\ref{fig:bmcgrid1} can\nbe interpreted as model-independent constraints on planet\npopulations. For example, with\n90\\% confidence we find that less than 50\\% of\nstars with properties like those in our survey have a 5 M$_{\\mathrm{Jup}}$~or\nmore massive planet in an orbit with a semimajor axis between\n30 and 94 AU. Less than 25\\% of stars like those\nin our survey have a 7 M$_{\\mathrm{Jup}}$~or more massive planet between\n25 and 100 AU, less than 15\\% have a 10 M$_{\\mathrm{Jup}}$~or more massive \nplanet between 22 and\n100 AU, and less than 12\\% have a 15 M$_{\\mathrm{Jup}}$~or more massive planet\/brown\ndwarf between 15 and 100 AU. Going to the most massive objects\nconsidered in our simulations, we can set limits ranging inward\npast 10 AU: we find that less than 25\\% of \nstars like those surveyed have a 20 M$_{\\mathrm{Jup}}$~object orbiting \nbetween 8 and 100 AU. These constraints hold\nindependently of how planets are distributed in terms\nof their masses and semimajor axes. \n\n\\begin{figure}\n\\includegraphics[scale=6.0]{f4.eps}\n\\caption{90\\% confidence level upper limits on the probability\n$P(M,a)$ that a star like those in our survey will have\na planet of mass $M$ and semimajor axis $a$. This plot shows,\nfor example, that our survey constrains the abundance\nof 10 M$_{\\mathrm{Jup}}$~or more massive planets with orbital semimajor \naxes between 22 and 100 AU to be less than 15\\% around\nsun-like stars. The abundance of 5 M$_{\\mathrm{Jup}}$~or more massive\nplanets between 25 and 94 AU is constrained to be less\nthan 50\\%. The latter range does not extend all the way to \n100 AU because our sensitivity\nto planets in very distant orbits decreases somewhat\ndue to the possibility of their lying beyond our field\nof view.\n\\label{fig:bmcgrid1}}\n\\end{figure}\n\nHR 8799 appears to have a remarkable system of\nthree massive planets, seen at projected\ndistances of 24, 38, and 68 AU, with masses\nof roughly 10, 10, and 7 M$_{\\mathrm{Jup}}$, respectively \\citep{hr8799}.\nUsing a Monte Carlo simulation like those used to\ncreate Figure \\ref{fig:bmcgrid1}, we find with\n90\\% confidence that less than 8.1\\% of stars like\nthose in our survey have a clone of the HR 8799 planetary\nsystem. 
For purposes of this simulation we adopted the masses\nabove, and set the planets' orbital radii equal to their projected separations.\nOur 8.1\\% limit represents a step toward determining\nwhether or not systems of massive planets in wide orbits\nare more common around more massive stars such as\nHR 8799 than FGK stars such as those we have surveyed.\n\n\\subsection{Our Survey in the Big Picture} \\label{sec:bigpic}\n\nThe surveys of \\citet{kasper} and \\citet{biller1}\nhave set constraints on the distributions\nof extrasolar planets similar to those we present herein,\nwhile \\citet{nielsen} and especially \\citet{GDPS} \nhave set stronger constraints. \nMore recent analyses by \\citet{nielsenclose} and \\citet{chauvin}\nalso provide constraints on the planetary distribution. For\nexample, \\citet{nielsenclose} find, at 68\\% confidence, that the \n\\citet{cumming} distribution can be excluded for a truncation \nradius of 28 AU; however, if\ndifferent models are used this limit jumps to 83 AU. \\citet{chauvin}\nindicate a similar limit from analyzing their results using the\n\\citet{bar} models. For the standard parameters they indicate a maximum\npermitted truncation radius of approximately 35 AU. In this context, \nthe results presented here\nprovide looser constraints on the planet distribution, but offer an\nindependent check on the model-dependent systematic errors that may\naffect shorter wavelength data, due to incorrect model brightness\nestimates or age determinations.\n\nTheoretical spectra of self-luminous extrasolar planets are\nvery poorly constrained observationally. The recent detections\nof possible planets around HR 8799 \\citep{hr8799}, Fomalhaut \\citep{fomalhaut},\nand $\\beta$ Pic \\citep{betapic} are either single-band\n($\\beta$ Pic) or only beginning to be evaluated at multiple\nwavelengths (HR 8799, Fomalhaut). The candidate planets\norbiting HR 8799 and $\\beta$ Pic are hotter than we would\nexpect to find orbiting middle-aged stars such as those\nin our survey, while HST photometry of Fomalhaut b suggests\nmuch of its brightness is starlight reflected from a\ncircumplanetary dust disk. Our survey, and other\nexoplanet surveys, must therefore\nbe interpreted using models of planetary spectra that are not yet\nwell-tested against observations.\n\nSuch models predict brightnesses in the $H$ band, and particularly\nin narrow spectral windows within the $H$ band, that\nare enormously in excess of black body fluxes. The constraints\nset by \\citet{masciadri,biller1,GDPS,nielsen,nielsenclose} and \\citet{chauvin}\ndepend on the accuracy of these predictions of\nremarkable brightness in the $H$ band. The $L'$ and\n$M$ bands that we have used are nearer the blackbody\npeaks of low-temperature self-luminous planets, and might\nbe expected to be more reliable. \n\nHowever, \\citet{L07} and \\citet{reid} \nsuggest that the $M$ band brightness of at least the hotter extrasolar\nplanets will be less than predicted by \\citet{bur} due to above-equilibrium\nconcentrations of CO from convective mixing. \\citet{NCE} present\nnew models indicating the effect is present for planets with\n$T_{\\mathrm{eff}}$~ranging from 600 to 1800K. The maximum $M$ band flux\nsuppression is about 40\\%, and flux suppression disappears \ncompletely for $T_{\\mathrm{eff}}$~below 500K. Based on \\citet{bur},\nthis $T_{\\mathrm{eff}}$~value corresponds to planets of about 3.5,\n6.5, 12, and 15 M$_{\\mathrm{Jup}}$~at ages of 100 Myr, 300 Myr, 1 Gyr,\nand 2 Gyr, respectively. 
In many cases our $M$ band observations \nwere sensitive to planets at lower masses than these values, \nand therefore $T_{\\mathrm{eff}}$~lower than 500K, implying that the CO flux\nsuppression would have no effect on our mass limits. In other\ncases our $M$ band sensitivity did not extend so low. However,\ngiven that $M$ band observations formed a relatively small part\nof our survey, and CO suppression would affect only a fraction\neven of them, the total effect on the statistical conclusions\nof our survey should be entirely negligible. \n\nTheoretical spectra such as those of \\citet{bur} may or\nmay not be more reliable in the $L'$ and $M$\nbands than at shorter wavelengths. However, so long as\nthe models remain poorly constrained by observations at\nevery wavelength, conclusions based on observations at\nmultiple wavelengths will be more secure. Our survey,\nwith that of \\citet{kasper}, has diversified planet imaging\nsurveys across a broader range of wavelengths.\n\nIn another sense our survey differs even from that of\n\\citet{kasper}: we have investigated older stars.\nThis is significant because planetary systems up to ages of \nseveral hundred Myr may still be undergoing substantial\ndynamical evolution due to planet-planet interactions \\citep{juric,levison}. \nOur survey did not necessarily probe the same planet\npopulation as, for example, those of \\citet{kasper} and \\citet{chauvin}.\n\nFinally, theoretical models of older planets are likely\nmore reliable than those for younger ones, as these planets are further\nfrom their unknown starting conditions and moving toward a well-understood,\nstable configuration such as that of Jupiter. It has been suggested by\n\\citet{faintJup} and \\citet{fortney} that theoretical planet models \nsuch as those of \\citet{bur} and \\citet{bar} may overpredict \nthe brightness of young ($<$ 100 Myr) planets \nby orders of magnitude, while for older planets the models are more accurate.\n\nWe have focused on nearby, mature star systems, and\nhave conservatively handled the ages of stars.\nThis makes our survey uniquely able to confirm that the rarity \nof giant planets at large separations around solar-type stars,\nfirst noticed in surveys strongly weighted toward\nyoung stars, persists at older system ages. It is not an\nartifact of model inaccuracy at young ages due to\nunknown initial conditions.\n\n\\section{The Future of the $L'$ and $M$ Bands} \\label{sec:long}\nIn the $L'$ and $M$ bands, the sky brightness is much worse than at\nshorter wavelengths. However, models (e.g., \\citet{bur}) predict \nthat in the $L'$ and $M$ bands, planets fade less severely with \nincreasing age (or, equivalently, decreasing $T_{\\mathrm{eff}}$). \nAlso, planet\/star flux ratios are more favorable in the $L'$ and \n$M$ bands than at shorter wavelengths such as the $H$ and $K_S$ bands. \n\nIt makes sense to use the $L'$ and $M$ bands on bright stars,\nwhere the planet\/star\nflux ratio is a more limiting factor than the sky brightness.\nIn \\citet{newvega}, we have shown that $M$ band observations\ntend to do better than those at shorter wavelengths\nat small separations from bright stars. \n\nThe $L'$ and $M$ bands are most useful, however, for detecting \nthe lowest temperature planets, which have the reddest \n$H - L'$ and $H - M$ colors. Such very low temperature\nplanets can only be detected around the nearest stars, so\nit is for very nearby stars that $L'$ and $M$ band observations\nare most useful. 
For distant stars, around which \nonly relatively high $T_{\\mathrm{eff}}$~planets can be detected,\nthe $H$ and $K_S$ bands are much better. We will now quantitatively\ndescribe the advantage of $L'$ and $M$ band observations over shorter\nwavelengths for planet-search observations of nearby stars.\n\nMost AO planet searches to date have used the $H$ and $K_S$ bands, \nor specialized filters in the same wavelength regime. \nWhile the $K_S$ band has been used extensively to search for \nplanets around young stars \\citep{masciadri,chauvin}, our comparison \nhere will focus on the $H$ band regime. Models indicate it offers\nbetter sensitivity than the $K_S$ band except for planets\nyounger than 100 Myr \\citep{bur,bar}, and most of the stars\nfor which we will suggest the $L'$ and $M$ bands are useful\nwill be older than this. The most sensitive $H$-regime planet search\nobservations made to date are those of \\citet{GDPS}, in part because\nof their optimized narrow-band filter. They attained an effective\nbackground-limited point-source sensitivity of about $H=23.0$.\nBased on the models of \\citet{bur}, \\citet{GDPS} would have set \nbetter planetary\nmass limits than our observations around all of our own survey\ntargets except the very nearest objects, such as $\\epsilon$\nEri and 61 Cyg. Thus, at present, the $H$-regime delivers far better\nplanet detection prospects than the $L'$ and $M$ bands for most stars.\n\nHowever, as detector technology improves, larger telescopes are \nbuilt, and longer\nplanet detection exposures are attempted, the sensitivity\nat all wavelengths will increase. This means that low-temperature \nplanets, with their red IR colors, will be detectable \nat larger distances, and the utility\nof the $L'$ and especially the $M$ bands will increase.\nIn Figure \\ref{fig:HLM1} we show the minimum detectable planet\nmass for hypothetical stars at 10 and 25 pc distance as a function of the\nincrease over current sensitivity in the $H$, $L'$, and $M$\nbands, and in Figure \\ref{fig:HLM2} we present the same comparison\nfor a star at 5 pc. We have taken current sensitivity to be $H = 23.0$\n(i.e., \\citet{GDPS}), $L' = 16.5$, and $M = 13.5$ (i.e., the\npresent work, scaled to an 8m telescope such as \\citet{GDPS}\nused). These are background limits, not applicable close\nto bright stars. Based on \\citet{newvega}, we believe\nthe $L'$ and $M$ bands will do even better relative to $H$\ncloser to the star where observations are no longer\nbackground limited. Of course $H$ band observations with\nnext-generation extreme AO systems such as GPI and SPHERE will offer improved\nperformance close to the star, but advances in $M$-band AO\ncoronagraphy (e.g. \\citet{phaseplate}) will also improve the\nlonger-wavelength results. In any case, Figures \\ref{fig:HLM1}\nand \\ref{fig:HLM2} compare background-limited performance only.\n\nThe suppression of flux in the $M$ band\ndue to elevated levels of CO \\citep{L07,reid} does\nnot apply to planets at the low temperatures relevant for\nFigures \\ref{fig:HLM1} and \\ref{fig:HLM2}.\nBased on \\citet{bur}, the entire mass range covered by both \nFigures corresponds to planets with $T_{\\mathrm{eff}}$~below 500K, except for\nplanets with masses above 6.5 M$_{\\mathrm{Jup}}$~in the left panel of\nFigure \\ref{fig:HLM1} (25 pc distance, 300 Myr age). This\nupper section of the 25 pc, 300 Myr panel is irrelevant to \nthe important implications of the figure. According to \\citet{NCE}, \nthere is no suppression of the $M$ band for effective temperatures\nbelow 500K.
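\n\nThe curves in Figures \\ref{fig:HLM1} and \\ref{fig:HLM2} amount to inverting the model magnitudes at a given age and distance; a sketch, assuming a mass-to-absolute-magnitude interpolator like the illustrative \\texttt{lprime\\_mag} example of Section \\ref{sec:tmod} (the function and grid step here are hypothetical conveniences, not our actual plotting code), is:\n\n\\begin{verbatim}\nimport numpy as np\n\ndef min_detectable_mass(limit_mag, extra_mags, dist_pc, age_gyr,\n                        abs_mag):\n    # Smallest model mass whose absolute magnitude beats the\n    # background limit, deepened by extra_mags, at dist_pc.\n    dist_mod = 5.0*np.log10(dist_pc) - 5.0\n    limit_abs = limit_mag + extra_mags - dist_mod\n    for m in np.arange(1.0, 20.0, 0.01):\n        if abs_mag(m, age_gyr) <= limit_abs:\n            return m\n    return None  # nothing in the model grid is detectable\n\n# e.g. M band at a 5 pc star, 1 mag deeper than M = 13.5:\n# min_detectable_mass(13.5, 1.0, 5.0, 5.0, m_band_mag)\n\\end{verbatim}\n\n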
\begin{figure*}
\plottwo{f5a.eps}{f5b.eps}
\caption[$H$, $L'$, and $M$ band Compared]{Minimum detectable planet mass in units of M$_{\mathrm{Jup}}$~for stars at 25pc (left) and 10pc (right), in the $H$, $L'$, and $M$ bands, as a function of increase over current sensitivity. We have taken current sensitivities to be $H = 23.0$, $L' = 16.5$, and $M = 13.5$. While the $H$ band will likely remain the wavelength of choice for planet search observations of stars at 25 pc and beyond, an increase of only 2.4 mag over current sensitivities, even if paralleled by an equal increase in $H$ band sensitivity, will render the $M$ band more sensitive than $H$ for planets around all stars nearer than 10 pc. The relative effectiveness of different wavelengths depends sensitively on the distance to a star system, but it is essentially independent of the stellar age, as explained in the text.
\label{fig:HLM1}}
\end{figure*}

\begin{figure*}
\plotone{f6.eps}
\caption[$H$, $L'$, and $M$ band Compared]{Minimum detectable planet mass in units of M$_{\mathrm{Jup}}$~for stars at 5pc, in the $H$, $L'$, and $M$ bands, as a function of increase over current sensitivity. We have taken current sensitivities to be $H = 23.0$, $L' = 16.5$, and $M = 13.5$. Given only a 1 magnitude increase in $M$ band sensitivity, paralleled by an equal increase at $H$ band, the $M$ band would be the best wavelength for planet search observations around all stars nearer than 5 pc. While the sensitivity increases needed to render $M$ preferable in Figure \ref{fig:HLM1} require substantial improvements to existing instruments and telescopes, the 1 mag increase required at 5 pc could be obtained by simply increasing the exposure time. As with Figure \ref{fig:HLM1}, this result concerning the relative effectiveness of different wavelengths is independent of stellar age, to first order.
\label{fig:HLM2}}
\end{figure*}

We have deliberately chosen the characteristics of the hypothetical stars in Figures \ref{fig:HLM1} and \ref{fig:HLM2} to be less favorable than those of the best available planet search candidates, so that in each case stars closer and/or younger than the example actually exist. Using the very youngest stars would also have resulted in sensitivities better than 1 M$_{\mathrm{Jup}}$, a mass regime not covered by the \citet{bur} models used in the Figures.

Figures \ref{fig:HLM1} and \ref{fig:HLM2} illustrate three very important points. First, the $L'$ band appears to have only secondary usefulness, since either the $H$ band or the $M$ band always offers sensitivity to lower-mass planets. Second, Figure \ref{fig:HLM2} shows that with a relatively minor increase of 1 magnitude in sensitivity, the $M$ band will be sensitive to lower-mass planets than the $H$ band around all stars within 5 pc, even if the $H$ band sensitivity increases by the same amount. Third, Figure \ref{fig:HLM1} shows that the advantage of the $M$ band decreases with increasing distance, but that as larger telescopes and longer exposures increase sensitivities to 2.5 mag above present levels, the $M$ band will be superior to $H$ out to 10 pc. With an increase of 4 mag, the $M$ band would surpass $H$ out to 25 pc -- but as such a large sensitivity increase would be difficult to achieve, the $H$ band will likely remain the primary wavelength for stars at 25 pc and beyond. For stars closer than 10 pc, however, the $M$ band already offers excellent sensitivity that has barely been exploited so far. Given reasonable sensitivity increases, $M$ should become the primary band for planet searches around stars at a distance of 10 pc or less.
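As an aside, the basic bookkeeping behind curves like those in Figures \ref{fig:HLM1} and \ref{fig:HLM2} is easy to sketch: a background-limited apparent magnitude plus a sensitivity gain is converted into an absolute-magnitude limit at the star's distance, which is then inverted through a model mass--magnitude grid. The short Python sketch below illustrates the procedure; the grid values are rough placeholders at a single age, not the actual \citet{bur} tabulations used for the Figures.

\begin{verbatim}
import numpy as np

# Hypothetical absolute magnitude vs. mass grids (M_Jup -> M_abs) at one age.
# Placeholder values standing in for a real model tabulation.
grid = {
    "H":  {"mass": [1, 2, 5, 10, 20], "Mabs": [28.0, 24.5, 20.5, 17.5, 15.0]},
    "L'": {"mass": [1, 2, 5, 10, 20], "Mabs": [21.0, 18.5, 16.0, 14.0, 12.5]},
    "M":  {"mass": [1, 2, 5, 10, 20], "Mabs": [16.5, 15.0, 13.5, 12.0, 11.0]},
}
current = {"H": 23.0, "L'": 16.5, "M": 13.5}  # background-limited apparent mags

def min_detectable_mass(band, d_pc, gain):
    """Smallest mass whose model magnitude is brighter than the survey limit."""
    m_lim = current[band] + gain                 # deeper limit after 'gain' mags
    M_lim = m_lim - 5.0 * np.log10(d_pc / 10.0)  # absolute magnitude limit
    g = grid[band]
    # A fainter limiting magnitude corresponds to a lower detectable mass.
    return np.interp(M_lim, g["Mabs"][::-1], g["mass"][::-1])

for gain in (0.0, 1.0, 2.4, 4.0):
    masses = {b: round(min_detectable_mass(b, 10.0, gain), 2) for b in grid}
    print(f"gain = {gain} mag at 10 pc -> {masses}")
\end{verbatim}

The distance-modulus term is what drives the distance dependence: it vanishes at 10 pc and costs about 2 mag in every band at 25 pc, while the stellar age mostly shifts all three grids together.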
Interestingly, the conclusions of Figures \ref{fig:HLM1} and \ref{fig:HLM2} are essentially independent of age: extensive calculations by \citet{thesis} showed that the relative usefulness of different wavelengths had only a weak dependence on age, for stars at a fixed distance -- and even this weak age dependence could change sign on switching from the models of \citet{bur} to those of \citet{bar}. This means that if we change the ages of the stars in Figures \ref{fig:HLM1} and \ref{fig:HLM2} but leave the distances the same, the $L'$, $M$, and $H$ band curves will slide up or down but remain essentially fixed in their relative positions. For example, given a 3 magnitude increase in sensitivity at both wavelengths, $M$ band observations will detect lower mass planets than $H$-band ones around a star at 10 pc, whether the stellar age is 5 Gyr, 1 Gyr, or 100 Myr. This is to be expected, since if one dials down the age of a given hypothetical star system, the $T_{\mathrm{eff}}$~(and therefore the IR color) of the faintest detectable planets will remain about the same, though their masses will decrease.

Again, Figures \ref{fig:HLM1} and \ref{fig:HLM2} apply only to background-limited sensitivity. However, given the much more favorable planet/star flux ratios in the $M$ band relative to $H$, we would expect the longer wavelength observations to remain equally competitive closer to the star. Advances in $M$ band coronagraphy will likely parallel the development of $H$ band extreme AO systems such as GPI and SPHERE. Though at present they are surpassed in sensitivity by $H$-regime observations for all but the nearest stars, the $L'$ and especially the $M$ bands hold considerable promise for the future.

\section{Conclusion} \label{sec:concl}
We have surveyed unusually nearby, mature star systems for extrasolar planets in the $L'$ and $M$ bands using the Clio camera with the MMT AO system. By extensive use of blind sensitivity tests involving fake planets inserted into our raw data (reported in detail in \citet{obspaper}), we established a definitive significance vs. completeness relation for planets in our data, which we then used in Monte Carlo simulations to constrain planet distributions.

We set interesting limits on the masses of planets and brown dwarfs in the star systems we surveyed, but we did not detect any planets. Based on this null result, we place constraints on the power laws that may describe the distribution of extrasolar planets in mass and semimajor axis. We also place constraints on planet abundances independent of the distributions.
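The logic behind such distribution-independent limits can be sketched in a few lines. For a null result, the probability of detecting no planets when a fraction $f$ of stars hosts one is $\prod_i (1 - f c_i)$, where $c_i$ is the completeness of the observations to that planet around the $i$th star; the 90\% confidence upper limit on $f$ is the value at which this probability falls to 10\%. The Python sketch below uses placeholder completeness values, not our actual completeness functions from \citet{obspaper}.

\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

# Placeholder per-star completenesses: the probability that the observations
# would have detected a given planet (e.g. 5 M_Jup at 30-94 AU) at each star.
completeness = np.array([0.9, 0.8, 0.75, 0.6, 0.5, 0.45, 0.3, 0.2])

def prob_null(f):
    """Probability of zero detections if a fraction f of stars hosts a planet."""
    return np.prod(1.0 - f * completeness)

# 90% confidence upper limit: largest f with a null-result probability of 10%.
f_max = brentq(lambda f: prob_null(f) - 0.10, 0.0, 1.0)
print(f"90% upper limit on the planet fraction: {f_max:.2f}")
\end{verbatim}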
If the distribution of planets is a power law with $dN \propto M^{\alpha} a^{\beta} dM da$, the work of \citet{cumming} and \citet{butlercat} indicates that the most optimistic (i.e. planet-rich) case permitted by the statistics of known RV planets corresponds to about $\alpha = -1.1$ and $\beta = -0.46$. Normalizing the distribution to be consistent with RV statistics, we find that these values of $\alpha$ and $\beta$ are ruled out at the 90\% confidence level, unless the semimajor axis distribution is truncated at a radius $R_{trunc}$ less than 110 AU. Though $\beta=0.0$ is not physically plausible, previous work has sometimes used it as an example: for $\alpha=-1.31$, corresponding to the best-fit value from \citet{cumming}, we rule out $\beta=0.0$ unless $R_{trunc}$ is less than 38 AU. Independent of distribution models, with 90\% confidence no more than 50\% of stars like those in our survey have a 5 M$_{\mathrm{Jup}}$~or more massive planet orbiting between 30 and 94 AU, no more than 15\% have a 10 M$_{\mathrm{Jup}}$~planet orbiting between 22 and 100 AU, and no more than 25\% have a 20 M$_{\mathrm{Jup}}$~object orbiting between 8 and 100 AU.

Our constraints on planet abundances are similar to those placed by \citet{kasper} and \citet{biller1}, but less tight than those of \citet{nielsen} and especially \citet{GDPS}. The recent work of \citet{nielsenclose} and \citet{chauvin} also placed tighter constraints on exoplanet distributions than our survey. However, we have surveyed a more nearby, older set of stars than any previous survey, and have therefore placed constraints on a more mature population of planets. Also, we have confirmed that a paucity of giant planets at large separations from sun-like stars is robustly observed at a wide range of wavelengths.

The best current $H$ regime observations, those of \citet{GDPS}, would attain sensitivity to lower mass planets than did our $L'$ and $M$ band observations for all of our survey targets except those lying within 4 pc of the Sun. However, as larger telescopes are built and longer exposures are attempted, the sensitivity of $M$ band observations may be expected to increase at least as fast as that of $H$ band observations (in part because $M$ band detectors are currently a less mature technology). As shown in Figures \ref{fig:HLM1} and \ref{fig:HLM2}, a modest increase from current sensitivity levels, even if paralleled by an equal increase in $H$ band sensitivity, would render the $M$ band the wavelength of choice for extrasolar planet searches around a large number of nearby stars.

\section{Acknowledgements}
This research has made use of the SIMBAD online database, operated at CDS, Strasbourg, France, and the VizieR online database (see \citet{vizier}).

We have also made extensive use of information and code from \citet{nrc}.

We have used digitized images from the Palomar Sky Survey (available from \url{http://stdatu.stsci.edu/cgi-bin/dss\_form}), which were produced at the Space Telescope Science Institute under U.S. Government grant NAG W-2166. The images of these surveys are based on photographic data obtained using the Oschin Schmidt Telescope on Palomar Mountain and the UK Schmidt Telescope.

Facilities: \facility{MMT, SO: Kuiper}
{"text":"\\section{Modern socio-economic challenges require a new approach}\n\nNowadays we are facing a number of serious problems such as financial instabilities, an unsustainable economy and related global warming, the lack of social cooperation and collaboration causing the rise of conflict, terrorism and war. \nTraditional approaches to remedy such problems are based on top-down control. \nWhereas in the past this way of thinking worked reasonably well, \nthe high interconnectivity in modern systems will eventually but unavoidably lead to its failure\nas systems become uncontrollable by central entities due to stronger internal effects, leading to often unpredictable cascading behavior~\\cite{helbing2013globally} and catastrophic failures~\\cite{BPPSH10}.\n\nInstead of entirely top-down based approaches, designing mechanisms to promote desired results like increased cooperation, coordination, and better resource efficiency could help to deal with current socio-economic challenges. Importantly, a multidimensional incentive system is needed to design the desired interactions and appropriate feedback mechanisms~\\cite{interaction:support,new:economy}. Such incentives have to be implemented in a bottom-up way, allowing systems to self-organize~\\cite{helbing:self} and thus promoting creativity, innovation, and diversity~\\cite{blog:self-organized:society}. \n\nDiversity acts as a motor of innovation, can promote collective intelligence~\\cite{page:difference}, and is fundamental for the resilience of society~\\cite{2016man,change:complex}. This renders socio-economic and cultural diversity equally important as biodiversity. \nThe importance of diversity, however, is not restricted to individual, cultural, social, or economic domains. For instance, diversity among digital services in competition for the attention of users can mitigate the risk of totalitarian control and manipulation by extremely powerful monopolies of information. \nAs we explain in Sec.~\\ref{sec_model}, the loss of diversity in the digital world can lead to a systematic and irreversible collapse of the digital ecosystem~\\cite{ecology20,worldmodel}, akin \nto the loss of biodiversity in the physical ecosystem. \nSuch a collapse can have dramatic consequences for the freedom of information and eventually for the freedom of society.\nIn this contribution, we show how such a catastrophic collapse could be avoided on a systematic level by introducing a multidimensional incentive system in which an appropriately designed cryptocurrency provides an incentive for individuals to perform certain tasks in their socio-digital environment. We refer to this cryptocurrency as ``Social Bitcoins''\\footnote{The details of the implementation of such a cryptocurrency is beyond the scope of this contribution.}.\n\nImportantly, to successfully meet these challenges, tools, ideas and concepts from complexity science have to be combined with technologies like the blockchain, economic knowledge (and potentially Internet of Things technology to measure ``externalities'').\n\n\n\\section{A multidimensional financial system}\n\nThe invention of money has led to unprecedented wealth and has provided countless benefits for society. \nHowever, the current monetary system is not appropriate any more to control highly interconnected dynamical complex systems like the ones our economy and financial system nowadays form. 
Whereas such systems are in general difficult to control and understand, and nearly impossible to predict, they exhibit a tendency to self-organize~\cite{helbing:self,PhysRevLett.59.381}. New approaches to face today's challenges should therefore take advantage of this intrinsic tendency.

Central banks like the ECB can control the amount of money in the market by means ranging from adjusting interest rates to quantitative easing. Recently, the ECB has lowered interest rates to the lowest value of all time (even introducing negative rates for some bank deposits~\cite{reuters:negative}) and has further increased its efforts to buy government bonds~\cite{quantitative:easing}. These measures are intended to boost the economy and increase inflation in the Euro zone to the target of $2\%$. Despite these efforts, inflation has remained close to $0\%$, raising doubts about the capacity to act and the credibility of the ECB~\cite{reuters:credibility}. Furthermore, liquidity pumped into the market does not reach the real economy efficiently enough. As a consequence, ``helicopter money'' has recently been discussed as a possible solution~\cite{reuters:helicopter,helicopter}. Importantly, these problems are not limited to the Euro zone. For example, due to the interconnected nature of our economic and financial systems, the state of the global economy limits the decisions the Fed can take concerning an increase of interest rates, as such a raise could pose a threat to the global economy~\cite{reuter:fed}.

The problem is that the current monetary system provides only a one-dimensional control variable. Consider, as an example, the human body and how it self-organizes. To ensure its healthy function, it is not enough to adjust only the amount of water one drinks. Instead, the body needs water, air, carbohydrates, different proteins and vitamins, mineral nutrients and more. None of these needs can be replaced by another. Why should this be different in systems like our economy, the financial system, or society?

Indeed, a multidimensional currency system, in which the different dimensions can be converted into one another at a low (or negligible) cost, could help to solve the problems mentioned above. Such a multidimensional incentive system could be used to promote the self-organization of financial and economic systems in a bottom-up way~\cite{thinking:ahead}. This opens the door to ``Capitalism 2.0'' and ``Finance 4.0'' (see~\cite{capitalism20,beyond:superintelligence,qualified:money,blog:why:need} for details).

A special case of a multidimensional incentive system is ``qualified money''. The concept was first introduced by Dirk Helbing in~\cite{thinking:ahead,qualified:money}. Instead of a scalar (one-dimensional) quantity, like the Euro or any other currency, money could be multidimensional and earn its own reputation. To illustrate this, suppose there were two dimensions of money. By law, the first could only be invested into real values, but not into financial products, whereas the latter dimension could. There would be an exchange rate (and cost) to convert one dimension into the other. As a consequence, the ECB could increase the amount of money for real investments directly, hence avoiding the problem mentioned earlier. In other words, the decision space on which institutions like the ECB can act would increase considerably without them acting outside of their mandate.
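To make the two-dimensional example concrete, the following Python sketch implements a toy currency vector with a conversion cost between a dimension reserved for real investments and an unrestricted financial dimension. The class, fee value, and method names are hypothetical, chosen only to illustrate the mechanism.

\begin{verbatim}
from dataclasses import dataclass

@dataclass
class QualifiedMoney:
    """Toy two-dimensional currency vector: one dimension restricted to
    real investments, one free for financial products (illustrative)."""
    real: float = 0.0
    financial: float = 0.0

    def convert(self, amount: float, to: str, fee: float = 0.05) -> None:
        """Convert between dimensions at a cost; the fee keeps the two
        dimensions distinct control variables rather than one quantity."""
        src = "financial" if to == "real" else "real"
        if getattr(self, src) < amount:
            raise ValueError("insufficient balance in source dimension")
        setattr(self, src, getattr(self, src) - amount)
        setattr(self, to, getattr(self, to) + amount * (1.0 - fee))

wallet = QualifiedMoney(real=100.0, financial=50.0)
wallet.convert(20.0, to="real")   # financial -> real, 5% conversion cost
print(wallet)                     # QualifiedMoney(real=119.0, financial=30.0)
\end{verbatim}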
Qualified money, which could be realized in a Bitcoin-like~\cite{bitcoin} way\footnote{That means transactions are transparent. It is important to also have a dimension of qualified money which cannot be tracked, and this dimension should lose value more rapidly to incentivize spending it soon. See~\cite{qualified:money} for details.}, could earn its own reputation depending on how and where it was created and what businesses it supports. The reputation can then give the money more or less value, which can lead to a more sustainable economy, as sustainability would become measurable and transparent to individuals (for details see~\cite{capitalism20,qualified:money}). The concept of qualified money is not limited to the two dimensions described above. Instead, everything people care about can be represented by a dimension in the currency vector. As we explain in the following, one dimension of qualified money could be socio-digital capital that can be acquired in the digital world.

Modern information and communication technologies play an important role in facing today's challenges. Indeed, nowadays the digital and physical worlds are strongly interdependent and cannot be treated in isolation any more~\cite{helbing2013globally}. The huge success of Web 2.0 and online social networks is changing the way humans interact at a global scale. They promote cooperation and collaboration on unprecedented scales, but at the same time powerful monopolies of information have the power to alter individuals' emotions and decisions~\cite{Epstein18082015,Bond2012}. Supercomputers nowadays perform a large fraction of all financial transactions, hence influencing the prices of important commodities, which can lead to starvation, conflict, war, and so on. Information and communication technologies are thus both a crucial part of the problems society has to solve and a fundamental, promising piece of the solution.

\section{Decentralized information architectures and qualified money: A Social Bitcoin}

\subsection{Decentralized architectures}

The existence of powerful monopolies of information like big IT companies or even some governments can lead to the loss of control by individuals, companies, or states. Besides, the economic damage attributed to cybercrime is growing exponentially and is estimated to reach 2.1 trillion dollars in 2019~\cite{cybercrime}. Hence, it is time to design more resilient information and communication technologies that---by design---cannot be exploited by single entities. Decentralized architectures naturally provide these benefits~\cite{helbing:digital_democracy,Contreras11122015,isocial}.

\subsection{Social Bitcoins and Web 4.0}
\label{sec_socbit}

As explained earlier, the main idea behind qualified money is to price a broader spectrum of externalities. This means that it can be applied, for instance, to information. This can be realized in many different ways, and the exact details would probably emerge in a self-organized way, depending on the choices and preferences of individuals. But what could such a system look like, and what benefits would it provide? Here, we discuss a possible vision in which one dimension of qualified money, socio-digital capital, can be priced in terms of Social Bitcoins that can be mined using online social networks and digital infrastructures.
It is impossible to foresee the exact details of such a system; nevertheless, in the following we sketch a possible vision of a future Internet and digital world in which individuals perform the routing of messages and information using their social contacts and technological connections rather than relying on service providers.

The use of the Internet has changed fundamentally since its invention. At first, it was a collection of static web pages. Then, Web 2.0 emerged as ``a collaborative medium, a place where we [could] all meet and read and write''~\cite{wiki:web20}. Consequently, Web 3.0 constitutes a ``Semantic Web''~\cite{wiki:web20}, where data can be processed by machines. Let us refer to a digital world in which information is managed in a bottom-up way, free of central monopolies in control of the vast majority of information, as Web 4.0\footnote{In~\cite{web40} Web 4.0 is described as follows: ``Web 4.0 will be as a read-write-execution-concurrency web with intelligent interactions, but there is still no exact definition of it. Web 4.0 is also known as symbiotic web in which human mind and machines can interact in symbiosis.''}. A digital democracy~\cite{helbing:digital_democracy}, if you will. Assume that this digital world is composed of many interacting, decentralized systems, which---in the absence of central control---compete for the attention of individuals~\cite{ecology20,worldmodel}. As we explain in Sec.~\ref{sec_model}, such a state is possible but fragile. Now assume that, in the future of the Internet, each individual routes information using their social and technological connections rather than relying on service providers\footnote{It is important to note that this new type of information routing requires efficient and secure encryption to ensure the privacy of individuals, whenever they wish so.}. In decentralized architectures, this task has to be performed relying only on local knowledge. As shown in~\cite{geometry:multilayer}, this type of routing can be performed very efficiently and---most importantly---can be perfected if individuals actively use multiple networks simultaneously\footnote{This is only the case if the different networks are related such that they exhibit \textit{geometric correlations}. As shown in~\cite{geometry:multilayer}, real systems obey this condition.}. This fact constitutes an important starting point for designing appropriate incentives to sustain digital democracy.

Assume that individuals could earn Social Bitcoins by routing information in the way explained above. These Social Bitcoins would form a dimension of qualified money~\cite{thinking:ahead,capitalism20,qualified:money} and could (with some additional cost) be exchanged and hence converted into other dimensions of the currency vector. Their exchange rate would depend on the trust individuals have in the system and how much they value their socio-digital environment.

The important point is that individuals now have an incentive to route information (``mining'' Social Bitcoins). As a consequence, individuals will optimize, to some extent, their capabilities to perform this action. As explained above and shown in~\cite{geometry:multilayer}, the routing success can be increased and even perfected if individuals actively use many networks simultaneously\footnote{In addition, there are other aspects individuals might optimize, see for instance~\cite{nav:game}.}.
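As a toy illustration of routing with local knowledge only, the following Python sketch performs greedy geometric forwarding on a single synthetic network: each node knows only its own neighbors and forwards a message to the neighbor closest to the destination. All parameters are illustrative; in the multiplex setting of~\cite{geometry:multilayer}, a message stuck at a local minimum in one network can continue through another layer, which is what increases the success rate.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(7)
N = 200
pos = rng.random((N, 2))                      # hypothetical node coordinates
dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
adj = (dist < 0.12) & ~np.eye(N, dtype=bool)  # connect nearby nodes

def greedy_route(src, dst, max_hops=50):
    """Forward to the neighbor closest to the destination, using only
    knowledge local to the current node."""
    node = src
    for _ in range(max_hops):
        if node == dst:
            return True
        nbrs = np.flatnonzero(adj[node])
        if nbrs.size == 0:
            return False
        nxt = nbrs[np.argmin(dist[nbrs, dst])]
        if dist[nxt, dst] >= dist[node, dst]:  # stuck in a local minimum
            return False
        node = nxt
    return False

trials = [(int(a), int(b)) for a, b in rng.integers(0, N, (500, 2)) if a != b]
success = np.mean([greedy_route(s, t) for s, t in trials])
print(f"greedy routing success rate: {success:.2f}")
\end{verbatim}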
Hence, the introduction of a Social Bitcoin would constitute an incentive to be active in several networks, as illustrated in Fig.~\ref{fig_scheme}.
\begin{figure}[t]
\centering
 \includegraphics[width=1\linewidth]{figure1.pdf}
 \caption{\textbf{Illustration of the incentive to mine Social Bitcoins.} Social Bitcoins form one dimension of qualified money and can be mined by performing search and navigation tasks using social and technological connections in a future digital world. Hence, acquiring Social Bitcoins constitutes an incentive to perform routing. Individuals optimize their strategy to route information and will be active simultaneously in more networks, as this (among other aspects~\cite{nav:game}) increases routing performance~\cite{geometry:multilayer}. Performing routing in less active networks could increase the reputation of the mined Social Bitcoins, providing an additional incentive to engage in less active networks. This could then sustain digital diversity~\cite{ecology20,worldmodel} and at the same time increase the performance of routing.
 \textit{Icon credits: Mourad Mokrane, lastspark, and Joshua Jones from thenounproject.com (CC BY 3.0).}
 \label{fig_scheme}}
\end{figure}
In addition, search and navigation tasks taking place in less active networks could increase the reputation of the mined Social Bitcoins, providing a further incentive to engage in less active networks. In other words, sustainability in the digital world could be priced and would become transparent to individuals, who could then adjust their behavior accordingly. Importantly, as we explain in detail in the following, this optimization could make digital diversity robust and sustainable.

\subsection{How a Social Bitcoin could sustain digital diversity}
\label{sec_model}

Here, we present a mathematical model to illustrate the potential effect of a Social Bitcoin. As mentioned earlier, many digital services compete for the attention of individuals. In this context, the attention of users can be considered a scarce resource, and hence the digital world forms a complex ecosystem in which networks represent competing species. A concise description of this digital ecology was developed in~\cite{ecology20}. In a nutshell, multiple online social networks compete for the attention of individuals in addition to obeying their intrinsic evolutionary dynamics. These dynamics are governed by two main mechanisms, the influence of mass media and a viral spreading dynamics acting on top of pre-existing underlying offline social networks~\cite{our:model}. Importantly, the parameter that quantifies the strength of viral spreading, $\lambda$, determines the final fate of the network. If $\lambda$ is below a critical value $\lambda_c$, the network will eventually become entirely passive, which corresponds to the death of the network. On the other hand, for $\lambda > \lambda_c$, the activity of the network is sustained~\cite{our:model,Bruno:Ribeiro}. The competition between multiple networks can be modeled by assuming that more active networks are more attractive to users. Hence, the total virality, which reflects the overall involvement of individuals in online social networks, is distributed between the different networks as a function of their activities. More active networks obtain a higher share of the virality, which then makes these networks more active. Note that this induces a rich-get-richer effect.
Interestingly, despite this positive feedback loop, diminishing returns induced by the network dynamics allow for a stable coexistence (digital diversity) of several networks in a certain parameter range (we refer the reader to~\cite{ecology20} for details).

The system can be described by the following mean-field equations\footnote{In the framework of~\cite{ecology20}, these equations are the result of taking the limit of $\nu \rightarrow \infty$, where $\nu$ describes the ratio between the rate at which the viral spreading and the influence of mass media occur. As shown in~\cite{ecology20}, taking this limit has no impact on the stability of the system.}
\begin{equation}
 \dot{\rho}^\mathrm{a}_{i} = \rho^\mathrm{a}_{i} \biggl\{ \lambda \left<k\right> \omega_i(\gvec{\rho}^\mathrm{a}) \left[ 1 -
\rho^\mathrm{a}_{i} \right] -1 \biggr\} \,, \quad i = 1, \dots, n \,,
\label{dynamicalsystem}
\end{equation}
where $\rho^\mathrm{a}_i$ denotes the fraction of active users in network $i$, $\lambda$ is the total virality mentioned earlier, and $\left<k\right>$ denotes the mean degree of the network, i.e. the average number of connections each node has. The weights $\omega_i(\gvec{\rho}^\mathrm{a})$ depend on the activities in all networks, $\gvec{\rho}^\mathrm{a} = \left(\rho_1^\mathrm{a},\rho_2^\mathrm{a}, \dots, \rho_n^\mathrm{a} \right)$, and govern the distribution of virality between the different networks. In~\cite{ecology20} we used
$
 \omega_i(\gvec{\rho}^\mathrm{a}) = \left[ \rho_i^\mathrm{a} \right]^\sigma /
 \sum_{j=1}^{n} \left[ \rho_j^\mathrm{a} \right]^\sigma
$,
where $\sigma$ denotes the activity affinity that quantifies how much more prone individuals are to engage in more active networks.

As mentioned earlier, assume that the introduction of Social Bitcoins incentivizes users to use multiple networks simultaneously in order to increase their capabilities to successfully perform search and navigation tasks and hence increase their expected payoff. The exact form of this incentive depends on the details of the implementation of the systems' architectures and of Social Bitcoins, which comprises an interesting future research direction. Here, we model the additional tendency of individuals to engage in multiple (and less active) networks by shifting the weight of the distribution of the virality towards networks with lower activity, hence hindering the rich-get-richer effect described earlier. In particular, let us consider the following form of the weight function,
\begin{equation}
 \omega_i(\gvec{\rho}^\mathrm{a}) = \underbrace{\frac{\left[ \rho_i^\mathrm{a} \right]^\sigma}{\sum_{j=1}^{n} \left[ \rho_j^\mathrm{a} \right]^\sigma}}_{\text{rich-get-richer~\cite{ecology20}}}
 + \underbrace{\xi \left(\left<\gvec{\rho}^\mathrm{a}\right>-\rho_i^\mathrm{a}\right)}_{\text{Social Bitcoin incentive}}
 \,,
 \label{eq_weights}
\end{equation}
where $\xi$ is a parameter proportional to the value of Social Bitcoins and $\left<\gvec{\rho}^\mathrm{a}\right> = \frac{1}{n} \sum_{i=1}^n \rho_i^\mathrm{a}$ denotes the mean activity among all networks.

The inclusion of the new term (``Social Bitcoin incentive'') in Eq.~\eqref{eq_weights} can change the behavior of the system dramatically if $\xi$ is large enough, which we illustrate\footnote{Here we present only a brief discussion of the dynamical system given by Eqns.~\eqref{dynamicalsystem} and~\eqref{eq_weights}. A more detailed analysis and the investigation of different forms of the incentive term in Eq.~\eqref{eq_weights} are left for future research.} for two competing networks.
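A minimal numerical sketch of Eqs.~\eqref{dynamicalsystem} and~\eqref{eq_weights} for two networks is given below. The parameter values are purely illustrative (in particular $\lambda \left<k\right> = 3$, chosen so that an active symmetric state exists) and are not meant to reproduce the figures quantitatively.

\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

lam_k = 3.0   # lambda * <k>  (illustrative, not the figures' value)
sigma = 0.75  # activity affinity
xi    = 1.0   # strength of the Social Bitcoin incentive

def weights(rho):
    w = rho**sigma / np.sum(rho**sigma)   # rich-get-richer term
    return w + xi * (np.mean(rho) - rho)  # Social Bitcoin incentive term

def rhs(t, rho):
    rho = np.clip(rho, 1e-12, 1.0)        # keep activities in [0, 1]
    return rho * (lam_k * weights(rho) * (1.0 - rho) - 1.0)

sol = solve_ivp(rhs, (0.0, 100.0), [0.4, 0.3])
print("final activities:", sol.y[:, -1])  # both networks remain active
\end{verbatim}

For these illustrative values the incentive term stabilizes the equal-coexistence state, and both activities converge to roughly $1/3$.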
Let us first consider the case of $\xi = 0.2$. In this case, the qualitative behavior of the system is similar to that in the absence of Social Bitcoins, as described in~\cite{ecology20}. Below a critical value of the activity affinity, $\sigma < \sigma_c$, coexistence is possible (solid green central branch in Fig.~\ref{fig_bif}~(top) \& central green diamond in Fig.~\ref{fig_bif}~(middle, left)), but---once lost---cannot be recovered.

\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{figure2a.pdf}
\includegraphics[width=1\linewidth]{figure2b.pdf}
\includegraphics[width=1\linewidth]{figure2c.pdf}
\caption{
\textbf{Fragility of digital diversity.} Here, we consider two networks,
$\lambda \left<k\right> = 2$ and $\xi = 0.2$.
\textbf{Top:} Bifurcation diagram (subcritical pitchfork bifurcation). $\rho_i$ denotes the fraction of active users in network $i$. Green solid lines represent stable solutions and red dashed lines correspond to unstable fixed points.
\textbf{Middle:} Stream line plots for $\sigma=0.75$ (left) and $\sigma=1.5$ (right).
\textbf{Bottom:} Evolution of the system for initial conditions $\rho_1 = 0.4$, $\rho_2 = 0.3$.
For $15 \leq t < 45$ (between the dashed lines) we set $\sigma = 1.5$ and otherwise we set $\sigma = 0.75$.
\label{fig_bif}}
\end{figure}
To illustrate this, assume that we start with $\sigma < \sigma_c$ and the system approaches the coexistence solution (central green diamond in Fig.~\ref{fig_bif}~(middle, left)). Then, we change $\sigma$ to some value larger than $\sigma_c$. Hence, the coexistence solution becomes unstable and the system eventually approaches the solution where either $\rho_1 = 0$ or $\rho_2=0$ (green diamonds in Fig.~\ref{fig_bif}~(middle, right)). Now, after changing $\sigma$ back to a value below $\sigma_c$, the system does not return to the coexistence state, which is stable again, but instead remains in the domination state, which is also stable (outer green diamonds in Fig.~\ref{fig_bif}~(middle, left)). This example is illustrated in Fig.~\ref{fig_bif}~(bottom), where we explicitly show the evolution of the fraction of active users for both networks\footnote{Note that here we describe an idealized system without noise. Noise in real systems would significantly speed up the separation of the trajectories in Fig.~\ref{fig_bif}~(bottom) shortly after the first dashed gray line.}. To conclude, the system is fragile in the sense that an irreversible loss of digital diversity is possible---similar to the loss of biodiversity.

Interestingly, for a higher value of $\xi$ the behavior of the system differs dramatically, which we illustrate here for $\xi=1$. The solution corresponding to equal coexistence of two networks, hence $\rho_1 = \rho_2 \neq 0$, is stable as before for values of $\sigma$ below some critical value $\sigma_c$. However, in this regime the domination solutions ($\rho_1 = 0$ or $\rho_2 = 0$, denoted by the red squares in Fig.~\ref{fig_bif2}~(middle, left)) are now unstable. This means that, independently of the initial conditions, in this regime the system always approaches the coexistence solution. For $\sigma > \sigma_c$ the equal coexistence solution becomes unstable and new stable solutions emerge (green diamonds in Fig.~\ref{fig_bif2}~(middle, right)).
These unequal coexistence solutions correspond to the case in which one network has a significantly higher activity than the other, but the activities of both networks are sustained.
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{figure3a.pdf}
\includegraphics[width=1\linewidth]{figure3b.pdf}
\includegraphics[width=1\linewidth]{figure3c.pdf}
\caption{\textbf{
Robustness of digital diversity.} Here, we consider two networks,
$\lambda \left<k\right> = 2$ and $\xi = 1.0$.
\textbf{Top:} Bifurcation diagram. $\rho_i$ denotes the fraction of active users in network $i$. Green solid lines represent stable solutions and red dashed lines correspond to unstable fixed points. For better readability, here we do not show the unstable fixed points for $\rho_1=0$ and $\rho_2=0$.
\textbf{Middle:} Stream line plots for $\sigma=0.75$ (left) and $\sigma=2.5$ (right).
\textbf{Bottom:} Evolution of the system for initial conditions $\rho_1 = 0.4$, $\rho_2 = 0.3$.
For $30 \leq t < 80$ (between the dashed lines) we set $\sigma = 2.5$ and otherwise we set $\sigma = 1.75$.
\label{fig_bif2}}
\end{figure}
Let us again consider the explicit example of two networks and start with $\sigma < \sigma_c$. The system approaches the equal coexistence solution (green square in Fig.~\ref{fig_bif2}~(middle, left)). Then, we change $\sigma$ to some value above $\sigma_c$. Now, the system approaches the unequal coexistence solution (green diamonds in Fig.~\ref{fig_bif2}~(middle, right)), but the activity of both networks is sustained. By lowering $\sigma$ again below $\sigma_c$, the system recovers the equal coexistence solution, in contrast to the previous case. This example is shown in Fig.~\ref{fig_bif2}~(bottom), where we present the fraction of active users in both networks. To conclude, in contrast to the case discussed before, the system is robust in the sense that an irreversible loss of digital diversity cannot occur.

To sum up, the introduction of a multidimensional incentive system, in which one dimension represents socio-digital capital in terms of Social Bitcoins that can be mined by performing search and navigation tasks in a future digital world, can make digital diversity robust---given that the value of Social Bitcoins is high enough.

\section{Outlook and future research directions}

A multidimensional financial system offers manifold success opportunities for individuals, companies, and states. Top-down control alone is destined to fail in a hyperconnected world. Hence, we need a new approach that incorporates the bottom-up empowerment of society and the right incentives and feedback mechanisms to promote creativity and innovation. The initiative ``A nation of makers''~\cite{nation:of:makers} in the US, as well as the rise of citizen science~\cite{citizen:science}, constitutes a promising starting point for such a development. Nevertheless, increasing financial instabilities emphasize the pressing need to redesign certain aspects of the financial system, hence the urge to create ``Finance 4.0''~\cite{qualified:money}.

In this perspective, multiple monetary dimensions could represent different externalities (negative ones like noise and environmental damage, and positive ones such as the recycling of resources, cooperation, the creation of new jobs, and so on).
Building on this framework, appropriate feedback and coordination mechanisms could increase resource efficiency and lead to a more sustainable, circular, cooperative economy. This can be achieved in a bottom-up way, in terms of an improved version of capitalism that accounts for externalities within a multidimensional incentive system and builds on the capacity for self-organization intrinsically present in dynamical complex systems. The Internet of Things and the blockchain technology underlying the Bitcoin architecture provide the technological requirements to realize ``Finance 4.0'' and ``Capitalism 2.0'' based on knowledge from the science of complex systems~\cite{capitalism20,beyond:superintelligence,qualified:money,iot:hand,blog:why:need}.

Nowadays, the digital and physical worlds are strongly interdependent. We have presented an example of how a multidimensional incentive system, in particular a Social Bitcoin generated in a bottom-up way by performing search and navigation tasks in a possible future digital world, can sustain digital diversity, which is essential for the freedom of information. Furthermore, a diverse digital landscape is expected to create business opportunities for individuals and companies~\cite{new:economy,beyond:superintelligence,iot:hand} facing the disappearance of half of today's jobs~\cite{Frey13thefuture}. The price of Social Bitcoins is crucial for the desired effect of sustaining digital diversity. This price, however, is determined dynamically by the market and may depend on other dimensions of the currency system. The development of a concise and general theory of this system and of possible implementations comprises an interesting future research direction.

\begin{acknowledgments}
K-K. Kleineberg acknowledges support by the European Commission through the Marie Curie ITN ``iSocial'' grant no.\ PITN-GA-2012-316808.
\end{acknowledgments}
{"text":"\n\\section{Introduction}\\label{sec1:intro}\n\nThe identity of dark matter has not yet been resolved at the time this article was written. One interesting possibility is to have a neutral lightest supersymmetric particle (LSP), which is stable via $R$-parity conservation, as the dark matter\\cite{EHNOS}. Neutralino is a popular candidate for dark matter and sneutrino is another possibility~\\footnote{Left-sneutrino is excluded by direct detection\\cite{snuL}, but right-sneutrino is still viable.}. Here, we focus our attention on yet another hypothesis within supergravity, i.e. gravitino as cold dark matter\\cite{GDM-CMSSM,otherGDM}. \n\nIn supergravity models (i.e. models with gravity mediated supersymmetry breaking)\\cite{SUGRA}, the gravitino mass is close to the other sparticle masses. However, it is not a priori whether the gravitino is lighter or heavier than the others. Note that this is different from gauge mediated models in which the gravitino mass can be naturally very light\\cite{GMSB}. We assume supergravity models here, with supersymmetric masses of $\\sim 1\\,{\\rm GeV} - 1$~TeV and the gravitino is the LSP. \nIn this framework, the coupling between gravitino and matter fields is very small, $\\sim 1\/M_{\\rm Pl}$. Because of this, gravitino is practically undetectable (aside from its gravitational effect). Also, the next lightest supersymmetric particle (NLSP) could be long lived, with a lifetime of typically O(1\\,s) or longer. In this case, the NLSP decay affects the primordial light element abundances\\cite{moroi}. \nThe phenomenology of this scenario depends largely on what the NLSP is. \nWe will discuss below various possibilities, each with its own distinct phenomenology. \n \n\n\\section{Gravitino Dark Matter in Supergravity Models}\n\nThe biggest theoretical uncertainty in supersymmetric models arises from the fact that we do not know how supersymmetry, if it does exist, is broken in nature. Because of this, the values of the soft couplings are uncertain. We can take them as free parameters. However, because of the large number of parameters, we need to make some simplifying assumptions. \nMotivated by the Grand Unified Theory (GUT), the usual assumption is that parameters of the same type are unified at the GUT scale. Their values at weak scale are then derived by employing the renormalization group equation (RGE). \nThe simplest model of this kind is the \nCMSSM (Constrained Minimal Supersymmetric Standard Model)~\\footnote{Also known as mSUGRA, depending on your preference.}, in which we have \nuniversal gaugino mass $m_{1\/2}$, universal sfermion mass $m_0$, and\nuniversal trilinear coupling $A_0$ at the GUT scale. In addition, we have two parameters from the Higgs sector, i.e. the ratio of the two Higgs vevs $\\tan \\beta\\equiv \\langle H_1\\rangle\/\\langle H_2 \\rangle$, and the sign of $\\mu$ where $\\mu$ is the Higgs mixing parameter in the superpotential.\nNote however that in GUT theories, e.g. $SU(5)$ or $SO(10)$, the Higgs fields are contained in different multiplets as compared to the matter multiplets. This motivates a generalization of the CMSSM, in which the Higgs soft masses $m_{1,2}$ are not necessarily equal to $m_0$ at the GUT scale\\cite{NUHM}. Furthermore, we can trade $m_{1,2}$ with $\\mu$ and the CP-odd Higgs mass $m_A$ as our free parameters through the electroweak symmetry breaking condition. The resulting model is called Non-Universal Higgs Masses (NUHM) model\\cite{ourNUHM}. 
In the usual scenario with a neutralino LSP, we implicitly assume that the gravitino is sufficiently heavy that it decouples from the low energy theory. In the gravitino dark matter (GDM) scenario, on the contrary, we assume that the gravitino mass $m_{\widetilde{G}} = m_{3/2}$ is sufficiently small that the gravitino is the LSP. For our purposes, we can take $m_{3/2}$ as another free parameter. Within the CMSSM with a gravitino LSP, there are three possible NLSPs, i.e. the neutralino, stau and stop\cite{GDM-CMSSM,stopNLSP}. Within the NUHM, in addition, we can have the selectron or sneutrino as the NLSP\cite{GDM-NUHM}. Of course, for a more general MSSM we can have more possibilities.

\section{Phenomenological Constraints}

\subsection{Dark matter relic density constraint}

Being a very weakly interacting particle, the gravitino decoupled very quickly from the thermal plasma in the early universe. This leads to a concern that the gravitino could be overabundant. However, inflation can solve this problem\cite{ELN}. In inflationary models, the early gravitino density, together with the other densities, is diluted by the inflation. Gravitinos are then reproduced by reheating after the inflation\cite{KL}, although with a smaller yield that can still satisfy the relic density constraint~\footnote{This can still impose a strong constraint on inflationary theories~\cite{graInf}. However, this topic is beyond the scope of this article.}.

The gravitino relic density consists of two parts, the thermal relic $\Omega_{\widetilde{G}}^{\rm T}$, which is produced by reheating, and the non-thermal relic $\Omega_{\widetilde{G}}^{\rm NT}$, coming from the decay of the NLSP:
\begin{equation}
\Omega_{\widetilde{G}} h^2 = \Omega_{\widetilde{G}}^{\rm T} h^2 + \Omega_{\widetilde{G}}^{\rm NT} h^2
\label{eq:relic}
\end{equation}
The thermal relic is related to the reheating temperature $T_R$ through the following relation~\cite{BBB}
\begin{equation}
\Omega_{\widetilde{G}}^{\rm T} h^2 \simeq 0.27 \left( \frac{T_R}{10^{10}\, {\rm GeV}} \right) \left( \frac{100 \, {\rm GeV}}{m_{\widetilde{G}}} \right) \left( \frac{m_{\tilde{g}}}{1 \, {\rm TeV}} \right)^2
\end{equation}
where $m_{\tilde{g}}$ is the gluino mass. We can see that for $m_{\widetilde{G}} = 100$~GeV and $m_{\tilde{g}} = 1$~TeV, to get $\Omega_{\widetilde{G}}^{\rm T} h^2 \lesssim 0.1$ we need $T_R \lesssim 10^{10}$~GeV. The value of $T_R$ depends on the inflation model. For our purpose, we take $T_R$ as a free parameter. For one-to-one decays of the NLSP to the gravitino, which is generally the case, the gravitino non-thermal relic can be written as:
\begin{equation}
\Omega_{\widetilde{G}}^{\rm NT} h^2 = \frac{m_{\widetilde{G}}}{m_{\rm NLSP}} \Omega_{\rm NLSP} h^2
\end{equation}
where $\Omega_{\rm NLSP} h^2$ is the NLSP density before the decay. Due to its long lifetime, the NLSP density freezes out long before the decay, and it can be calculated by the usual method of solving the Boltzmann equation in the expanding universe. Note that even if the NLSP density is larger than the WMAP value, we might still satisfy the dark matter relic density constraint because of the rescaling by the mass ratio $m_{\widetilde{G}}/m_{\rm NLSP}$. The NLSP density can also be written in terms of the yield, $Y_{\rm NLSP} = n_{\rm NLSP}/s$, where $n$ is the number density and $s$ is the entropy density. This is related to $\Omega_{\rm NLSP} h^2$ by
\begin{equation}
Y_{\rm NLSP} M_{\rm NLSP} = \Omega_{\rm NLSP} h^2 \times (3.65 \times 10^{-9} \; {\rm GeV})
\end{equation}
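The relic density budget defined by the relations above is straightforward to evaluate numerically; the following Python sketch does so for an illustrative parameter point (the NLSP freeze-out density is a placeholder, since in practice it comes from solving the Boltzmann equation).

\begin{verbatim}
def omega_thermal_h2(T_R, m_gravitino, m_gluino):
    """Thermal gravitino relic from reheating (the approximation quoted
    above); T_R, m_gravitino, m_gluino in GeV."""
    return 0.27 * (T_R / 1e10) * (100.0 / m_gravitino) * (m_gluino / 1e3) ** 2

def omega_nonthermal_h2(m_gravitino, m_nlsp, omega_nlsp_h2):
    """Non-thermal relic: one gravitino per NLSP, so the density rescales."""
    return (m_gravitino / m_nlsp) * omega_nlsp_h2

def yield_times_mass(omega_nlsp_h2):
    """Y_NLSP * m_NLSP in GeV, from Omega_NLSP h^2 (relation above)."""
    return omega_nlsp_h2 * 3.65e-9

# Illustrative point (not a fit): m_3/2 = 100 GeV, gluino at 1 TeV, a
# 300 GeV NLSP with a placeholder freeze-out density of Omega h^2 = 0.15.
total = (omega_thermal_h2(1e9, 100.0, 1e3)
         + omega_nonthermal_h2(100.0, 300.0, 0.15))
print(f"Omega_gravitino h^2 = {total:.3f}")  # compare with the bound below
print(f"Y*m = {yield_times_mass(0.15):.2e} GeV")
\end{verbatim}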
The total relic density of the gravitino must not exceed the upper limit of the dark matter relic density range suggested by WMAP\cite{WMAP5}:
\begin{equation}
\Omega_{\rm DM}^{\rm WMAP} h^2 \simeq 0.113 \pm 0.004
\end{equation}
Taking $2 \sigma$, this means that $\Omega_{\widetilde{G}} h^2 \lesssim 0.121$. The relative size of the thermal versus non-thermal relic density depends on the strength of the NLSP interactions, which determine $\Omega_{\rm NLSP} h^2$. If we can measure these interactions at colliders, we can deduce the reheating temperature\cite{TR}.

\subsection{The BBN constraints}

Big bang nucleosynthesis (BBN) is often cited as the greatest success of the big bang theory. Using the simple assumptions that the early universe was in thermal equilibrium and expanding, one can calculate the primordial light element abundances (using standard nuclear cross sections) and obtain results that agree very well with the observations. If there is a metastable particle that decays during or after the BBN era, the light element abundances can be altered by the participation of the energetic decay products in the nucleosynthesis processes. Thus, BBN provides a stringent constraint on gravitino dark matter. On the other hand, the prediction of standard BBN (sBBN) is not perfectly in agreement with the observational data. There seems to be a discrepancy between the observed lithium abundances and the predicted values, as shown in Table~\ref{li-tbl}.
\begin{table}
\tbl{Comparison of lithium abundances from the standard model prediction and from observations.}
{\begin{tabular}{lcc}
\toprule
& $^7$Li/H & $^6$Li/$^7$Li \\
\colrule
Observation & $(1-2)\times 10^{-10}$ & $\sim 0.01 - 0.15$ \\
Standard BBN (with CMB) & \hphantom{00}$\sim 4 \times 10^{-10}$ & $< 10^{-4}$ \\
\botrule
\end{tabular}}
\label{li-tbl}
\end{table}
This is known as the lithium problem. The $^6$Li discrepancy is particularly difficult to solve within the standard theory. There is no known astrophysical process that can produce large amounts of $^6$Li. Moreover, $^6$Li is fragile. Therefore we should expect less, rather than more, $^6$Li compared to the prediction. The lithium problem could be an indication of new physics beyond the standard model. There are two proposed solutions to this problem involving a hypothesized metastable particle. The first one works through a catalytic effect\cite{pospelov,otherCat}. The process $d + ^4$He $\to \, ^6$Li $+ \gamma$ is suppressed by parity. If there is a massive negatively charged particle $X^-$ that is bound to $^4$He by the Coulomb interaction, it can absorb the emitted photon, and the process is no longer parity suppressed. Simultaneously, the $X^-$ particle is freed from the bound state by the energy released and can subsequently attach to another $^4$He, thus acting as a catalyst for $^6$Li production. This catalytic effect can also affect other light element abundances, such as that of beryllium\cite{beryCat}.

Another proposed solution to the lithium problem is through the hadronic decay of a metastable particle\cite{jedamzik}.
The decay produces energetic $n$, $p$ and also T, $^3$He (through spallation of $^4$He), which then interact with the ambient nuclei, e.g. $n + p \to $D, T + $^3$He $\to ^6$Li (producing more $^6$Li), and $^7$Be($n$,$p$)$^7$Li($p$,$^4$He)$^4$He (reducing $^7$Li). Note that deuterium is also produced, which puts some constraints on this scenario.

\subsection{Astrophysical constraints}

If the NLSP decays at a time later than BBN, the photons produced by the decay might not be fully thermalised by the time of recombination. This can cause a distortion of the cosmic microwave background black body spectrum\cite{lamon+durrer}. The size of the distortion depends on the amount of energy injected into photons, and is represented by a chemical potential $\mu$ for the photon. The CMB spectrum measurement by the FIRAS instrument onboard the COBE satellite sets an upper limit on $\mu$\cite{COBE-FIRAS}:
\begin{equation}
\left| \mu \right| \lesssim 9 \times 10^{-5}
\end{equation}
This limit is only important for lifetimes $\gtrsim 10^6$~s, since photons produced earlier should have enough time to thermalize before recombination.

Gravitinos produced in late NLSP decays have larger velocities than primordial gravitinos. This leads to a longer free-streaming length, smoothing out small scale density perturbations. If the dark matter relic density is dominated by the non-thermal relic, structure formation would be affected. This scenario has been proposed as a solution to the small scale problem\cite{warmgravitino}.

\subsection{Collider constraints}

At colliders, the heavier supersymmetric particles can still be produced provided that there is enough energy in the collisions. These sparticles would quickly decay, cascading down to the NLSP. Due to its long lifetime, the NLSP itself would escape from the detector before eventually decaying to the gravitino, and would hence appear as a stable particle as far as the detectors are concerned. There have been searches for stable massive particles (SMPs) at colliders\cite{fairbairn}.

A particularly interesting signal would be produced if the NLSP is electromagnetically charged. In this case the NLSP should traverse the calorimeter and subsequently be detected by the muon detector. The first obvious step of the data analysis is of course to discover this charged NLSP. The CDF collaboration, based on 1.0~fb$^{-1}$ of data at $\sqrt{s} = 1.96$~TeV, sets a lower bound of $249$~GeV on the mass of a (meta)stable stop\cite{CDF-CHAMPS}; while the D0 collaboration, using 1.1~fb$^{-1}$ of data, sets upper limits on the stable stau pair production cross section from 0.31~pb to 0.04~pb for stau masses between 60~GeV and 300~GeV\cite{D0-CHAMPS}, and lower mass limits of 206~GeV and 171~GeV for pair produced stable charged gauginos and higgsinos, respectively.

However, not all possible NLSPs are charged. If the neutralino or sneutrino is the NLSP, it would not be detected. Only a neutralino with a very short lifetime ($\lesssim$ a few ns) can be detected through its decay products. The CDF sets a lower limit on the neutralino mass of 101~GeV for a lifetime of 5~ns\cite{CDF-chi}. Similar to the familiar case of a neutralino LSP, there are also various signatures from the cascade decays. The same methods of analysis can be applied to the case of a stable neutral NLSP. For a long-lived neutralino NLSP the signatures would be indistinguishable from those of the neutralino LSP scenario. For a sneutrino NLSP, however, the signatures would in general be different\cite{CoKra}.
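The detector stability of the NLSP follows from the Planck-suppressed decay width. As a rough order-of-magnitude sketch, assuming the generic scaling $\Gamma \sim m_{\rm NLSP}^5/(48\pi\, m_{3/2}^2 M_{\rm Pl}^2)$ and dropping mixing and phase-space factors of order one, the lifetime can be estimated as follows.

\begin{verbatim}
import math

HBAR_GEV_S = 6.582e-25   # hbar in GeV s
M_PL       = 2.4e18      # reduced Planck mass in GeV

def nlsp_lifetime_s(m_nlsp, m_gravitino):
    """Order-of-magnitude lifetime for NLSP -> partner + gravitino,
    Gamma ~ m_NLSP^5 / (48 pi m_3/2^2 M_Pl^2); O(1) factors dropped."""
    width = m_nlsp ** 5 / (48.0 * math.pi * m_gravitino ** 2 * M_PL ** 2)
    return HBAR_GEV_S / width

print(nlsp_lifetime_s(1000.0, 1.0))  # ~0.6 s: 1 TeV NLSP, 1 GeV gravitino
print(nlsp_lifetime_s(300.0, 10.0))  # ~2e4 s: smaller mass gap, longer life
\end{verbatim}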
\section{Phenomenology of GDM with Various NLSP}

In this section we look at each scenario of neutralino, stau, stop and sneutrino NLSP. We do not include the chargino NLSP\cite{charginoNLSP} here.

\subsection{Neutralino NLSP}

For a neutralino mass of 1~TeV and a gravitino mass of 1~GeV, the neutralino lifetime is about $O(1)$~s. The lifetime is longer for a smaller mass gap. Thus the neutralino in this scenario would escape the collider detectors and produce large missing energy signatures. Assuming that all primordial neutralinos have eventually decayed to gravitinos, only gravitinos are floating around today. Therefore WIMP direct detection experiments would not see any signal. There would be no indirect astrophysical signal from dark matter annihilation in the halo either. In this case it would be difficult to prove the identity of the dark matter, i.e. whether it is the gravitino, the axino, or something else. We would also need to check whether $R$-parity is really conserved.

The neutralino NLSP in the CMSSM is strongly constrained by BBN, especially when the neutralino lifetime is $\gtrsim 10^4$~s, where the electromagnetic shower effect on the light element abundances becomes important. The main reason is that the neutralino has a relatively large freeze-out density over most of the parameter space.

\subsection{Stau NLSP}

If produced, a stau NLSP would be seen as a massive stable charged particle at colliders. It would leave a clean track in the inner detector and then reach the muon detector; hence it would look like a slow, heavy muon. Because of its electromagnetic charge, the stau can be slowed down by making it pass through a bulky medium. Thus it can be trapped and stored until it decays\cite{FS,HNR}. In this way one can hope to measure its lifetime.

Within the CMSSM, the stau NLSP has the largest allowed region of parameter space, and hence is often thought of as the natural candidate. A stau NLSP would yield a catalytic effect on BBN. This has attracted much attention, and many papers are devoted to the study of this topic, in particular regarding the solution of the lithium problem\cite{stauCat}.

\subsection{Stop NLSP}

A long lived stop would hadronize once it is produced. By analogy with heavy quark hadrons, one can deduce the lightest hadron states and their lifetimes. The light stop sbaryons are $\Lambda_{\widetilde T}^+ \equiv {\tilde t_1} ud$ (which is the lightest), $\Sigma_{\widetilde T}^{++,+,0} \equiv \tilde{t}_1 (uu, ud, dd)$ (which decay through the strong interaction), and $\Xi_{\widetilde T}^{+,0} \equiv \tilde{t}_1 s (u,d)$ (which decay semileptonically with lifetime $\tau \lesssim 10^{-2}$~s). The light stop mesinos are ${\widetilde T}^0 \equiv \tilde{t}_1 {\bar u}$ (which is the lightest), ${\widetilde T}^+ \equiv \tilde{t}_1 {\bar d}$ (with lifetime $\tau \simeq 1.2$~s), and ${\widetilde T}_s \equiv \tilde{t}_1 {\bar s}$ (with lifetime $\tau \simeq 2 \times 10^{-6}$~s). The antistop would hadronize into the corresponding antisbaryons and antimesinos. In the early universe, being the lighter state, the neutral $\widetilde{T}^0$ is more abundant than the charged $\Lambda_{\widetilde T}^\pm$. This reduces the catalytic effect on BBN. Moreover, due to its strong interaction, the freeze-out density of the stop is generally small.
Therefore this scenario can generally satisfy the BBN constraints\cite{stopNLSP}.

On the other hand, it was shown\cite{KS} that the stop NLSP scenario is well suited to solving the lithium problem through hadronic decays. Note that further annihilation of stops occurs after hadronization, with annihilation rate $\Gamma_{\rm ann} = \langle \sigma v \rangle n_{\tilde t}$, where $\sigma \sim R_{\rm had}^2$ and $v \simeq \sqrt{3 T / m_{\tilde t}}$. The final stop abundance before its decay can be written as
\begin{eqnarray}
m_{\tilde{t}} Y_{\tilde{t}}
 &=& 0.87 \times 10^{-14} \, {\rm GeV} \left(
 \frac{f_{\sigma}}{0.1} \right)^{-2} \left( \frac{g_{*}}{17.25}
 \right)^{-1/2} \nonumber \\
&& \times \left( \frac{T_{\rm QCD}}{150 \, {\rm MeV}}
 \right)^{-3/2} \left( \frac{m_{\tilde{t}}}{10^{2} \, {\rm
 GeV}} \right)^{3/2}.
\end{eqnarray}
It was found that, with $f_\sigma = 0.1$, the lithium problem solution prefers $m_{\tilde{t}} = 400-600$~GeV and $m_{3/2} = 2-10$~GeV.
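As a quick numerical check of the scaling relation above, the following minimal Python sketch evaluates $m_{\tilde{t}} Y_{\tilde{t}}$; the function name and the choice of evaluation point are ours, purely for illustration:

\begin{verbatim}
def stop_abundance_gev(m_stop_gev, f_sigma=0.1,
                       g_star=17.25, t_qcd_mev=150.0):
    """Final stop abundance m_t * Y_t in GeV, from the
    scaling relation quoted in the text."""
    return (0.87e-14
            * (f_sigma / 0.1) ** -2
            * (g_star / 17.25) ** -0.5
            * (t_qcd_mev / 150.0) ** -1.5
            * (m_stop_gev / 100.0) ** 1.5)

print(stop_abundance_gev(400.0))  # ~7.0e-14 GeV at the lower
                                  # end of the preferred range
\end{verbatim}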
\subsection{Sneutrino NLSP}

Since the sneutrino interacts neither strongly nor electromagnetically, the effect of sneutrino decays on BBN can be expected to be small, but nonzero. The BBN effect comes through the (elastic or inelastic) transfer of energy from the energetic neutrinos produced in the decay $\tilde{\nu} \to \widetilde{G} + \nu$ to the background particles, and from 3-body decay modes\cite{KKKM} ($\tilde{\nu} \to \widetilde{G} + \nu + (\gamma, Z)$, $\tilde{\nu} \to \widetilde{G} + \ell + W$) and 4-body modes ($\tilde{\nu} \to \widetilde{G} + \nu + f + \bar{f}$, $\tilde{\nu} \to \widetilde{G} + \ell + f + \bar{f}^\prime$). Although the 3- and 4-body decay branching ratios are small, they can still be important, since they produce particles that can be directly involved in the nucleosynthesis processes. For $M_{\tilde{\nu}} \sim O(100~{\rm GeV})$, the BBN constraint can be satisfied if the sneutrino density before the decay is
\begin{eqnarray}
Y_{\tilde{\nu}} M_{\tilde{\nu}} &\lesssim & \mathcal{O}(10^{-11}) \ {\rm GeV} \hspace{1cm} {\rm for} \quad B_h = 10^{-3}, \\
Y_{\tilde{\nu}} M_{\tilde{\nu}} &\lesssim & \mathcal{O}(10^{-8}) \ {\rm GeV} \hspace{1cm} {\rm for} \quad B_h = 10^{-6},
\end{eqnarray}
where $B_h$ is the hadronic branching ratio. The sneutrino NLSP with gravitino LSP scenario has been explored within NUHM models, and it was found that large regions of parameter space are still allowed\cite{GDM-NUHM}.

At colliders, similar to the neutralino (N)LSP case, a sneutrino NLSP would yield missing energy signatures. We can study this scenario through cascade decays of heavier supersymmetric particles. In general, the signatures are different from those in the neutralino case, and signatures that are thought to be best for a neutralino LSP might not be suitable for the sneutrino NLSP case. A preliminary study of collider phenomenology with a sneutrino NLSP has been done in Ref.~\refcite{CoKra}; a more detailed study might reveal more information.

\section{Concluding Remarks}

The gravitino is a feasible and interesting candidate for dark matter. There are many possible phenomenological scenarios with gravitino dark matter depending on the choice of the NLSP, which we have summarized in this article. Future and upcoming collider experiments, such as the LHC, might be able to unveil some hints on the identity of dark matter. Progress in direct and indirect detection experiments also looks promising. The next few years will be an interesting time to find out more about dark matter, and whether the gravitino can still stand up as a candidate for dark matter.

\section*{Acknowledgments}
My participation in the Dark2009 conference was made possible by the support of the British Royal Society. I thank my collaborators on the subject of this paper: John Ellis, Lorenzo Diaz-Cruz, Terrance Figy, Kazunori Kohri, Keith Olive, Krzysztof Rolbiecki, and Vassilis Spanos.
{"text":"\\section{INTRODUCTION}\n\nA major goal of modern astronomy is to piece together \nthe dynamic and chemical evolution of the Galactic disk. To this end, one of the principle \napproaches for probing the disk has been to study open \nclusters. Clusters are valuable astrophysical tools as they share \ncommon distances, common ages and common initial chemical abundances.\nWith the disk richly populated by both field stars and open clusters, and considering that clusters\nare relatively well studied, the logical step in piecing together a more complete picture\nof the chemical and dynamical history of the disk is to study field stars.\n\nIn recent years, the advent of large surveys such as \\emph{HIPPARCOS} \\citep{1997A&A...323L..49P} \nhas yielded precise parallaxes for thousands of nearby field stars, and in doing\nso, provided the necessary tools for investigating the field. In particular, studies of \nthe velocity distributions of disk field stars in the solar neighborhood have identified \nstellar overdensities in kinematic phase space (\\citet{1999MNRAS.308..731S}). The potential \napplication of these velocity structures, commonly referred to as moving groups, was \nfirst identified by \\citet{1958MNRAS.118...65E} who considered these assemblages to be \nrelic structures of dissolved open clusters.\nIn this paradigm, a moving group is essentially a spatially unassociated open cluster;\ntherefore it should possess some of the same characteristics that make \nopen clusters such valuable astrophysical tools (common ages and\ncommon initial chemical abundances) and similar techniques that are useful for studying open clusters\ncould be applied. \n\nRelatively little work has been done to explore the \nreality of smaller moving groups (kinematic assemblages of $\\sim$ 100 stars) \nas dissolved open clusters and their use in chemically \ntagging the galactic disk, with two notable exceptions: the Ursa Major Group and the HR1614\nMoving Group. \\citet{1993AJ....105..226S} examined the Ursa Major moving group\nand utilized age information inferred from chromospheric emission to constrain group membership\nin UMa. While this study did not utilize chemical tagging to constrain group membership, it did illustrate the\nviability of moving groups as dissolved populations of open clusters. \\citet{2003AJ....125.1980K} and\n\\citet{2005PASP..117..911K}\nrevisited the membership of the UMa group, using new and extant abundances. They used the \nresults to constrain membership in the UMa group, showed the members to be chemically homogeneous, \nand noticed overexcitation\/overionization effects in the cooler field star members of the group, \nsimilar to those observed in young ($<$ 500 Myr) cool open cluster dwarfs (\\citet{2003AJ....125.2085S},\n\\citet{2004ApJ...602L.117S}). The first in depth application of chemical tagging to constrain \nmoving group membership was by \\citet{2007AJ....133..694D}, who derived \nabundances for various elements for the HR 1614 moving group. 
They found that for their 18 star sample, 14 stars were metal-rich ([Fe/H] $\geq$ 0.25 dex with $\sigma$=0.03), leading to the conclusion that the HR 1614 moving group, with its distinct kinematics and distinctly super-solar chemical abundances, was a remnant of a dissolved open cluster.

In the field of moving group populations, the classical Wolf 630 moving group is an intriguing target. The first identification of the Wolf 630 moving group was made by \citet{1965Obs....85..191E}, who noted that several K and M dwarfs and giants in the solar neighborhood appeared to have similar space motions to that of the multiple star system Wolf 630 ((U,V,W)=(23, -33, 18) kms$^{-1}$). These kinematics, distinctive of membership in an old disk population, placed the stars in a relatively sparsely populated region of kinematic phase space \citep{1969PASP...81..553E}. Eggen also noted that the color magnitude diagram for the K and M dwarfs and giants with kinematics similar to those of Wolf 630 appeared to trace an evolutionary sequence similar to the old ($\sim$ 5 Gyr; \citet{1999AJ....117..330J}) M67 open cluster. Although his sources are not completely transparent, at least some (17 of 54 stars) of the distances in his study were determined from trigonometric parallaxes, with the remainder coming from ``luminosity estimates of many kinds''. As a rudimentary form of chemical tagging, \citet{1970PASP...82...99E} estimated metallicities of 23 Wolf 630 group members through \emph{uvby-$\beta$} photometry. Variations in the $\delta$[m$_1$] index were found to be comparable to those in the Hyades, Praesepe, and Coma Berenices clusters, implying chemical homogeneity.

\citet{1979LIACo..22..355T} studied the chemical composition of five field giant stars that were alleged members of Wolf 630 using high dispersion coud\'{e} spectra. Employing a curve of growth approach and measured equivalent widths, they found that three stars appeared to be chemically homogeneous, with an overall metallicity for Wolf 630 of [Fe/H]$\sim$+0.23. However, it must be noted that their abundances were not measured with respect to the Sun, but are instead quoted with respect to a standard star of presumed solar metallicity (HD 197989), which has since been determined to be a K0III. While they derived a metallicity of 0.00 for their reference star, literature determinations suggest a value of -0.24. This would lower the average metallicity for the group to [Fe/H]$\sim$ -0.02.

\citet{1983MNRAS.204..841M} revisited the membership of the Wolf 630 moving group by recreating the approach presumably utilized by \citet{1965Obs....85..191E} to find his original Wolf sample. In summary, they calculated the parallax that yields a V velocity for each group candidate equal to the assumed group velocity of V=-32.8 $\pm$ 1.3 kms$^{-1}$. The final absolute magnitudes they report assume these parallaxes. Typical uncertainties in their absolute magnitudes appear to be between 0.2-0.4 magnitudes, larger than the magnitude uncertainties obtainable with the precise parallax information currently available from Hipparcos. The color-magnitude diagram assuming these M$_{V}$ values was used to compare the scatter of apparent members with the observed scatter in the old open cluster M67.
They concluded that either (1) the intrinsic scatter in the Wolf 630 moving group color-magnitude diagram was greater than that of M67, or (2) the errors in the radial velocities and/or proper motions they utilized must have been underestimated by a factor of 2.4, or (3) many of the stars in their sample were, in fact, non-members.

\citet{1994AAS...185.4516T} examined metallicities from ``published values of [Fe/H] from diverse papers'' for 40 members of the Wolf 630 group. His sample contains 26\% of Eggen's original objects \citep{1969PASP...81..553E}. He concluded that the metallicity dispersions within his sample were too great for meaningful conclusions about the existence or non-existence of a genuine, chemically distinct Wolf 630 moving group. This suggests the need to obtain high quality [Fe/H] determinations with minimal uncertainties in testing for chemical uniqueness in a putative Wolf 630 sample.

The analysis of solar neighborhood Hipparcos data by \citet{1999MNRAS.308..731S} indicates a kinematic rediscovery of the Wolf 630 group. Their figure 10, showing the \emph{UV} velocity distribution for 3561 late-type dwarfs in the solar neighborhood, presents a clear overdensity of stars near the position of Wolf 630. Furthermore, this structure appears to be distinctly separated from any other known moving groups or stellar streams. This provides compelling evidence that Wolf 630 is a real kinematic structure. The question to be asked is whether this kinematic structure is composed of stars with a common origin.

Despite the distinctive kinematics exhibited by the Wolf 630 moving group when examined with updated Hipparcos parallaxes, it has not been specifically targeted in an abundance study which makes use of modern astrometric and spectroscopic data. This is remedied in this paper, where accurate parallaxes and photometry from the updated \emph{HIPPARCOS} data reduction \citep{2007A&A...474..653V}, coupled with high precision radial velocities from CORAVEL (\citet{2004A&A...418..989N} and references therein), allow for developing a Wolf 630 sample with internally consistent distances and absolute magnitudes, thereby removing the uncertainties faced by \citet{1983MNRAS.204..841M}. Furthermore, our uniform high resolution spectroscopic study of Wolf 630 moving group candidate members provides a single, consistent set of metallicities with low internal uncertainty to test chemical homogeneity in the group, removing the largest source of uncertainty from \citet{1994AAS...185.4516T}.

\section{DATA, OBSERVATIONS AND ANALYSIS}
\subsection{Literature Data}

The 34 stars in this sample, listed in Table 1, were previously identified as members of the Wolf 630 group \citep{1969PASP...81..553E,1983MNRAS.204..841M} according to their \emph{UVW} kinematics. In this study, we use updated parallaxes and proper motions from the latest reduction of Hipparcos data \citep{2007A&A...474..653V}. Precision radial velocities were taken from the compilation of \citet{2004A&A...418..989N}. Visible band photometry (\emph{B}, \emph{V}, \emph{B$_{Tycho}$}, \emph{V$_{Tycho}$}) was taken from the {\it HIPPARCOS\/} catalogue \citep{1997A&A...323L..49P}. Near infrared \emph{J}, \emph{H} and \emph{K} photometry was taken from the 2MASS Catalog \citep{2003tmc..book.....C}.

\subsection{Kinematics}

We determined Galactic \emph{UVW} kinematics from the proper motions, parallaxes and radial velocities using a modified version of the code of \citet{1987AJ.....93..864J}. Here, the U velocity is positive towards the Galactic center, the V velocity is positive in the direction of Galactic rotation and the W velocity is positive in the direction of the North Galactic Pole (NGP). The relevant parameters for the determination of these kinematics are presented in Table \ref{uvw}.
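For concreteness, a minimal Python sketch of this type of transformation is given below. It is not the modified \citet{1987AJ.....93..864J} code used here, but the standard matrix method with the J2000 Galactic rotation matrix; it assumes that the proper motion in right ascension already includes the $\cos\delta$ factor, as in the Hipparcos catalogue.

\begin{verbatim}
import numpy as np

# Rows: unit vectors (J2000 equatorial frame) toward the Galactic
# center, the direction of Galactic rotation, and the NGP.
T = np.array([[-0.0548755604, -0.8734370902, -0.4838350155],
              [ 0.4941094279, -0.4448296300,  0.7469822445],
              [-0.8676661490, -0.1980763734,  0.4559837762]])

K = 4.74057  # km/s per (arcsec/yr) at 1 pc

def uvw(ra_deg, dec_deg, plx_mas, pmra_mas, pmdec_mas, rv_kms):
    """UVW (km/s): U toward the Galactic center, V with rotation,
    W toward the NGP. pmra is assumed to include cos(dec)."""
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    # Columns: line of sight, increasing RA, increasing Dec
    A = np.array(
        [[np.cos(ra)*np.cos(dec), -np.sin(ra), -np.cos(ra)*np.sin(dec)],
         [np.sin(ra)*np.cos(dec),  np.cos(ra), -np.sin(ra)*np.sin(dec)],
         [np.sin(dec),             0.0,         np.cos(dec)]])
    plx = plx_mas / 1000.0  # arcsec
    v = np.array([rv_kms,
                  K * (pmra_mas  / 1000.0) / plx,
                  K * (pmdec_mas / 1000.0) / plx])
    return T @ A @ v
\end{verbatim}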
\subsection{Spectroscopic Observations and Reductions}

Spectroscopy of the sample was obtained in March 2007 and November 2008 with the KPNO 4 meter Mayall telescope, the echelle spectrograph with grating 58.5-63 and a T2KB 2048$\times$2048 CCD detector. The slit width of $\sim$ 1 arcsec yielded a resolution of R $\sim$ 40,000 with a typical S/N of 200 per summed pixel. The spectra have incomplete wavelength coverage extending from approximately 5800 {\AA} to 7800 {\AA}. The spectra have been reduced using standard routines in the {\sf echelle} package of IRAF\footnote{IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.}. These include bias correction, flat-fielding, scattered light correction, order extraction, and wavelength calibration. Sample spectra are presented in Figure \ref{spectra}.

\subsection{Line Selection}

Spectroscopic physical parameters are typically determined by enforcing balance constraints on abundances derived from lines of Fe, which has a plethora of suitable neutral (\ion{Fe}{1}) and ionized (\ion{Fe}{2}) features in the optical. We compiled low excitation potential ($\chi <$ 6.00 eV) \ion{Fe}{1} and \ion{Fe}{2} lines from \citet{1990A&AS...82..179T}, the Vienna Atomic Line Database (VALD; \citet{1995A&AS..112..525P}, \citet{1997BaltA...6..244R}, \citet{1999POBeo..65..223K}, \citet{2000BaltA...9..590K}), \citet{2004ApJ...603..697Y}, \citet{2006AJ....131.1057S} and \citet{2007AJ....133..694D}. Lines that were not apparent in a high-resolution solar spectrum \citep{2005MSAIS...8..189K} were removed from the linelist. In order to guarantee that these lines were unaffected by blending effects, especially those arising in cool stars that might not be noticeable in the solar spectrum, the 2002 version of the MOOG spectral analysis program \citep{1973PhDT.......180S} was used to compute synthetic spectra in 1 {\AA} blocks surrounding all Fe features, using VALD linelists. If a line had closely neighboring features with MOOG-based relative strength parameters within an order of magnitude, it was removed from consideration. In this manner, a final list of 145 \ion{Fe}{1} lines and 11 \ion{Fe}{2} lines was formed. These linelists are presented in Table \ref{eqw}. The equivalent widths listed are for measurements in a high resolution solar spectrum.

Linelists for other elements of interest have also been compiled from multiple sources (\citet{1990A&AS...82..179T}, \citet{1998AJ....115..666K}, \citet{2006AJ....131..455D}). These elements include Li, Na, Al, Ba, a selection of $\alpha$ elements (O, Mg, Si, Ca, Ti I and Ti II) and a selection of Fe peak elements (Cr, Mn and Ni). The lines are also given in Table \ref{eqw}. Equivalent widths are again for measurements in the high resolution solar spectrum. The equivalent widths that were measurable for each individual star are given in Table \ref{alleqw}, with the corresponding abundances derived from each equivalent width.

\subsection{Equivalent Widths}

Equivalent widths for the lines of interest were measured in each star and in a high resolution solar spectrum using the spectral analysis tool SPECTRE \citep{1987BAAS...19.1129F}. Final abundances were obtained from equivalent widths through use of the MOOG LTE spectral analysis tool \citep{1973PhDT.......180S} with an input Kurucz model atmosphere characterized by the four fundamental physical parameters: temperature, surface gravity, microturbulent velocity ($\xi$) and metallicity. Unless noted otherwise, all abundances are differential with respect to the Sun and are presented in the standard bracket notation ([X/H] = $\log(\frac{N(X)}{N(H)})_* - \log(\frac{N(X)}{N(H)})_\odot$, where $\log N$(H) $\equiv$ 12).

\subsection{Initial Parameters: Photometric}

The color-$T_{\rm eff}$-[Fe/H] calibrations of \citet{2005ApJ...626..465R} were used to determine photometric temperatures from Johnson $B-V$, Tycho $B_T-V_T$, Johnson/2MASS $V-J_2$, $V-H_2$ and $V-K_2$. The color indices for 8 stars were outside of the calibrated ranges; consequently, photometric temperatures were not derived for these stars. Uncertainties in the photometric temperatures were conservatively taken as the standard deviation of the temperatures derived from each of the respective colors. With the availability of high quality Hipparcos parallaxes, physical surface gravities were calculated from:

\begin{centering}
$\log\frac{g}{g_{\odot}} = \log\frac{M}{M_{\odot}}+4\log\frac{T_{\rm eff}}{T_{\rm eff,\odot}}+0.4V_o+0.4\,{\rm B.C.}+2\log\pi+0.12$\\
\end{centering}

\noindent where M is the mass in solar masses, estimated from Yale-Yonsei isochrones \citep{2004ApJS..155..667D} of solar metallicity, bolometric corrections are from \citet{2005oasp.book.....G}, and $\pi$ is the parallax. Initial microturbulent velocities were found from the calibrations of \citet{2004A&A...420..183A}. These photometric parameters provided the initial guesses for the physical parameters when deriving the final spectroscopic values. Additionally, the photometric calibrations provided reasonable estimates to compare to the spectroscopically derived results. Using the updated calibrations of \citet{2010A&A...512A..54C} does not change the results described herein.
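The relation above translates directly into code. The following Python sketch assumes the parallax in arcseconds and $V_o$ the extinction-corrected apparent magnitude, and adopts solar reference values ($T_{\rm eff,\odot}$ = 5777 K, $\log g_\odot$ = 4.44) that are our assumptions, not values quoted in the text:

\begin{verbatim}
import numpy as np

def physical_logg(mass_msun, teff, v0, bc, plx_arcsec,
                  teff_sun=5777.0, logg_sun=4.44):
    """Parallax-based surface gravity from the relation above."""
    return (logg_sun + np.log10(mass_msun)
            + 4.0 * np.log10(teff / teff_sun)
            + 0.4 * v0 + 0.4 * bc
            + 2.0 * np.log10(plx_arcsec) + 0.12)
\end{verbatim}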
\subsection{Spectroscopic Parameters}

The refined Fe linelists discussed above acted as target lists for each of the stars in the sample. The typical star contained $\sim$80 of the 145 good \ion{Fe}{1} lines that were measurable in the solar spectrum. Several of the stars showed correlations between \ion{Fe}{1} excitation potential and reduced equivalent width. If ignored, such correlations can be imposed onto the temperatures and microturbulent velocities, resulting in non-unique solutions for the physical parameters. Consequently, two linelists for the \ion{Fe}{1} lines were formed for each star: a correlated and an uncorrelated sample. Final basic physical parameters for the sample were derived using a modification to the standard techniques of Fe excitation/ionization/line strength balance. In all the approaches described below, a differential analysis was used where the same lines were measured in a solar spectrum and in the stellar spectra.
Final abundances were then determined by subtracting the solar abundance from the stellar abundance in a line by line fashion.

The first technique utilized the uncorrelated line sample and proceeded as follows: the temperatures of the input model atmospheres were adjusted to remove any correlation of the solar-normalized abundances with excitation potential; $\xi$ was adjusted to remove any correlation with line strength; and log $g$ was adjusted until the mean abundance from \ion{Fe}{1} lines matched the abundance from \ion{Fe}{2} lines. This approach required simultaneously adjusting temperatures, surface gravities, metallicities and microturbulent velocities to converge to a common solution. Use of the uncorrelated line sample, as described above, is necessary to ensure a unique solution. This approach will be referred to as the ``classical'' approach.

The second approach used the correlated line sample and the Hipparcos-based physical surface gravities. The \ion{Fe}{2} abundances are primarily set by this gravity. The temperature was adjusted to force the mean abundance from \ion{Fe}{1} lines to match that from \ion{Fe}{2} lines. The microturbulent velocity was adjusted until the abundance from \ion{Fe}{1} lines had no dependence on reduced equivalent width. The advantage of this approach is that it does not require simultaneous solutions for excitation balance and equivalent width balance, allowing use of the full correlated line sample. This approach will be referred to as the ``physical surface gravity'' approach.

When comparing results from the classical and physical surface gravity approaches it was apparent that the microturbulent velocities were nearly identical ($\delta \xi \approx \pm 0.04$ km s$^{-1}$). Thus our final spectroscopic parameters were determined as follows. The microturbulent velocities from the ``classical'' approach and the ``physical surface gravity'' approach were averaged to yield a final value. The correlated line sample was used to determine the temperature and surface gravity using excitation/ionization balance. For the remainder of the work, the results from this approach were used for the physical parameters of these 30 stars. The remaining 4 stars would not converge to an acceptable solution, and the following alternative approaches were developed.

The coolest stars in the sample (HIP105341-dwarf, HIP114155-giant and HIP5027-dwarf) had an insufficient number of well-measured \ion{Fe}{2} lines for accurately determining the surface gravity spectroscopically. Consequently, Hipparcos-based physical surface gravities were used to set the gravity, and the temperature and microturbulence were iterated to eliminate correlations in [\ion{Fe}{1}/H] versus excitation potential and versus reduced equivalent width. This is the ``physical surface gravity'' approach.

Finally, one of the stars in the sample (HIP 5027) had a microturbulence correlation which could not be removed without utilizing unreasonable surface gravities. For this star, the surface gravity was set based on Yale-Yonsei isochrones \citep{2004ApJS..155..667D}. The microturbulent velocity was set to zero and the temperature was determined from excitation balance.
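The classical balance can be viewed as a three-parameter root-finding problem. The sketch below is schematic only: solve(), which returns a line-by-line [Fe/H] for a trial model atmosphere, is a hypothetical stand-in for the role played by MOOG, and the line attributes are assumed names.

\begin{verbatim}
import numpy as np
from scipy.optimize import fsolve

def classical_balance(fe1, fe2, solve, p0):
    """Zero the three balance diagnostics simultaneously.
    solve(line, teff, logg, xi) -> [Fe/H] is a hypothetical
    MOOG stand-in; p0 = photometric (Teff, log g, xi) guesses."""
    ep  = np.array([l.ep for l in fe1])
    rew = np.array([np.log10(l.ew_mA * 1e-3 / l.wave_A) for l in fe1])

    def residuals(p):
        teff, logg, xi = p
        ab1 = np.array([solve(l, teff, logg, xi) for l in fe1])
        ab2 = np.array([solve(l, teff, logg, xi) for l in fe2])
        return [np.polyfit(ep,  ab1, 1)[0],  # excitation balance
                np.polyfit(rew, ab1, 1)[0],  # line-strength balance
                ab2.mean() - ab1.mean()]     # ionization balance

    return fsolve(residuals, p0)
\end{verbatim}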
The final basic physical parameters (T$_{Spec}$, log $g$, microturbulent velocity ($\xi$) and [Fe/H]) are presented in Table \ref{basic} and the final abundances are summarized in Table \ref{abundances}. For the interested reader, we also provide plots of all abundances ([X/H]) versus [Fe/H] in an appendix.

\subsection{Lithium}

Abundances have been derived for lithium using spectral synthesis. We use the {\sf synth} driver of MOOG to synthesize a spectrum of the lithium line at $\lambda$=6707.79 {\AA} with an updated version of the linelist from \citet{1997AJ....113.1871K}. Appropriate smoothing factors were determined by measuring clean, weak lines in the lithium region. The lithium abundance was varied until a best fit was obtained from visual inspection. A sample synthesis is presented in Figure \ref{lithium_synthesis}.

Uncertainties in the lithium abundances have been determined by examining the change in Li abundance in syntheses with arbitrary changes in the physical parameters of $\Delta$T=150 K, $\Delta$log\emph{g}=0.12 dex and $\Delta$$\xi$=0.60 km s$^{-1}$, and adding the resultant abundance differences in quadrature.

\subsection{Oxygen}

Oxygen abundances for many stars have been derived from the near-IR $\lambda$7771 equivalent widths. Abundances derived from the triplet are known to be enhanced by NLTE effects; therefore appropriate corrections have been applied following \citet{2003A&A...402..343T}.

For the giants and subgiants, oxygen abundances have also been derived from the forbidden line at $\lambda$6300.30 \AA. While this line is found to be free from NLTE effects \citep{2003A&A...402..343T}, care must be taken as the line is blended with a nearby Ni feature at $\lambda$6300.34 \AA. This blend is treated using the {\sf blends} driver of MOOG, following \citet{2006AJ....131.1057S}. The Ni abundance utilized to account for the blending is the mean value derived from the EWs of \ion{Ni}{1} lines in our sample.

A possible CN feature at 6300.265 {\AA} and two at 6300.482 {\AA} with log(gf) values of -2.70, -2.24 and -2.17 are claimed by \citet{1963rspx.book.....D}. In order to explore these blends, multiple syntheses of the $\lambda$6300 {\AA} region were performed using high resolution spectral atlases of the Sun \citep{2005MSAIS...8..189K} and the K giant Arcturus \citep{2000vnia.book.....H}. The CN features, if real, were found to be unimportant in the solar spectrum. Large variations in carbon abundances ($>$ 0.50 dex) appear to have little impact on the overall spectrum. For warm dwarfs, the syntheses confirm that the Ni features are the dominant blends affecting the O determination.

The situation appears to be dramatically different for cooler giants. In the high resolution spectral atlas for Arcturus, the CN blend, if real, appears to dominate over the nickel blend. Oxygen syntheses were performed in order to calibrate the gf values of the CN molecules in the linelist to match the spectrum of Arcturus, but the results were inconclusive. In particular, appropriate smoothing factors were difficult to determine, as the ultra high resolution of the Arcturus atlas makes Gaussian smoothing by an instrumental profile inappropriate.
In an attempt to accurately reflect the smoothing in the spectral atlas, broadening was done using a convolution of a macroturbulent broadening of 5.21 $\pm$ 0.2 kms$^{-1}$ \citep{1981ApJ...245..992G} and a rotational broadening characterized by $v\sin i$=2.4 $\pm$ 0.4 kms$^{-1}$ \citep{1981ApJ...245..992G} with a limb darkening coefficient of 0.9 (from \citet{2005oasp.book.....G}). With this smoothing, the spectrum of Arcturus in the forbidden oxygen region was fit by increasing the gf values of the CN features by $\sim$0.40 dex, while assuming [C/Fe]=-0.06 as found by \citet{2002AJ....124.3241S}. In attempting to apply this calibrated linelist to synthesize the forbidden line region for one of the giants in our sample (HIP17792; chosen because its physical parameters were similar to those of Arcturus), no reasonable abundance of carbon yielded a satisfactory fit. This may suggest that the gf values in the linelist need to be better constrained. In light of these ambiguous results, it is concluded that an accurate determination of the carbon abundance is essential for proper treatment of any CN blending feature that may exist. We suggest that a spectroscopic analysis of cool giants with appropriate wavelength coverage to allow measurement of a precise carbon abundance would allow for calibration of the forbidden oxygen linelist, which would be a project of not insignificant interest. Unfortunately, the wavelength coverage of the observed spectra does not include any appropriate carbon features to allow definitive conclusions as to the reality of the CN blending features found in \citet{1963rspx.book.....D}. In light of the unresolved nature of this CN blending, the abundances reported herein do not include it. Further justification for ignoring the CN blending is discussed in the results.

\subsubsection{Uncertainty Estimates}

The uncertainties in experimental and theoretical log(gf) values (likely at least 0.1 dex) can be a significant source of error; however, by performing a line-by-line differential analysis with respect to the Sun, uncertainties due to transition probabilities are eliminated to first order.

Here, then, it is the uncertainty in the physical parameters that underlies the uncertainties in the abundances. Errors in the temperature were determined by adjusting the temperature solution until the correlation between [Fe/H] and excitation potential (excitation balance) reached a 1-$\sigma$ linear correlation coefficient for the given number of lines. The uncertainty in microturbulent velocity was determined in the same manner, by adjusting the microturbulence until the linear correlation coefficient for [Fe/H] versus equivalent width (equivalent width balance) resulted in a 1-$\sigma$ deviation. For HIP 5027, which would not converge to a unique solution for microturbulence, an uncertainty in microturbulence of 0.20 kms$^{-1}$ was adopted.

For the cases where the physical surface gravity was utilized, the uncertainty was estimated by propagating the uncertainties in the temperature, mass, apparent magnitude, parallax and bolometric corrections. The uncertainties in the spectroscopically determined surface gravities required a deeper treatment. Since the gravity is calculated by eliminating the difference between the iron abundances derived from [\ion{Fe}{1}/H] and [\ion{Fe}{2}/H], the uncertainty in surface gravity is related to the quadratic sum of the uncertainties in [\ion{Fe}{1}/H] and [\ion{Fe}{2}/H].
These abundances, in turn, have sensitivities that depend on the basic physical parameters. Proper uncertainty calculations, therefore, require an iterative procedure. The errors in [Fe I/H] and [Fe II/H] are a combination of the measurement uncertainties and the uncertainties in the physical parameters. The line measurement uncertainties in Fe I and Fe II were estimated as the standard deviation of the abundances from all Fe I and Fe II lines, respectively. Abundance sensitivities for arbitrary changes in temperature ($\pm$ 150 K), surface gravity ($\pm$ 0.12 dex) and microturbulence ($\pm$ 0.60 kms$^{-1}$) were determined by adjusting each parameter individually and recording the resultant difference in abundance. To determine the abundance uncertainties, the abundance differences must be properly normalized by the respective parameter's uncertainty. For example, in HIP3455 the total temperature uncertainty was found to be 35 K. The final abundance uncertainty introduced by the arbitrary temperature change would, therefore, be equal to the difference in abundance multiplied by $\frac{35~{\rm K}}{150~{\rm K}}$, where 35 K is the temperature uncertainty and 150 K is the arbitrary temperature change introduced to determine the temperature sensitivity. For the first calculation, the uncertainties in temperature and microturbulent velocity were determined as above, while the uncertainty in surface gravity was unknown; consequently, its contribution to the abundance uncertainty was initially ignored. Adding the measurement errors in [Fe I/H] and [Fe II/H] in quadrature with the physical parameter abundance uncertainties from temperature and microturbulence yields a first estimate for the uncertainty in the surface gravity. This gravity uncertainty can then be added in quadrature to the line measurement uncertainty, the temperature uncertainty and the microturbulence uncertainty to yield a final uncertainty for the surface gravity. The surface gravity in the model atmosphere was adjusted until the difference in abundance between [Fe I/H] and [Fe II/H] was equal to their quadrature-added uncertainties. The difference between this gravity and the spectroscopically derived gravity provides the final uncertainty in surface gravity.

Uncertainties in the abundances were found by introducing arbitrary changes in T, microturbulence and surface gravity ($\Delta$T=150 K, $\Delta$$\xi$=0.60 kms$^{-1}$, and $\Delta$log $g$=0.12 dex), normalized by the respective parameter uncertainties. The uncertainties introduced by each of these parameter changes were added in quadrature to obtain total parameter-based uncertainties. Measurement uncertainties were taken as the uncertainty in the weighted mean for all lines of a given element. For elements with only a single line available, the standard deviation of all \ion{Fe}{1} abundances was utilized as an estimate of the line measurement uncertainty. The final uncertainties in the abundances were determined by adding the parameter-based abundance uncertainties with the measurement uncertainties in quadrature.

A sample table of the normalized parameter changes and their final resultant [\ion{Fe}{1}/H] errors for a given star is presented in Table \ref{errors}.
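As a compact illustration of the rescaling just described (the 35 K / 150 K example), the following Python sketch combines the normalized parameter sensitivities with the line measurement error; the function and argument names are ours:

\begin{verbatim}
import numpy as np

def abundance_error(sens_t, sens_g, sens_xi, sigma_meas,
                    sig_t, sig_g, sig_xi,
                    dT=150.0, dlogg=0.12, dxi=0.60):
    """Sensitivities sens_* are the abundance changes (dex) for
    the arbitrary steps dT, dlogg, dxi; each is rescaled by the
    star's actual parameter uncertainty and quadrature-added."""
    terms = np.array([sens_t  * sig_t  / dT,
                      sens_g  * sig_g  / dlogg,
                      sens_xi * sig_xi / dxi,
                      sigma_meas])
    return np.sqrt(np.sum(terms**2))
\end{verbatim}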
\subsection{Physical Parameter Comparisons}
\subsubsection{Temperatures: Spectroscopic Versus Photometric}

The temperatures for the stars in the sample were determined from photometric calibrations as well as through spectroscopic excitation balance. In Figure \ref{param_diff} the spectroscopic temperature is plotted versus the photometric temperature. The line represents perfect agreement between the two temperatures. It can clearly be seen that the temperatures from the two techniques are equivalent within their respective uncertainties. There is a slight indication that the spectroscopic temperatures may be systematically higher, with 66\% of the stars lying above the line; however, the effects on the abundance analysis are negligible and do not change any conclusions.

\subsubsection{Surface Gravity: Spectroscopic Versus Physical}

The surface gravity was determined from Hipparcos data (i.e. physical surface gravities) and spectroscopically via ionization balance. In Figure \ref{param_diff}, the spectroscopic surface gravity is plotted versus the physical surface gravity. The line shows where the gravities are equal. Within their respective uncertainties, the surface gravities are equal.

\section{RESULTS AND DISCUSSION}

The primary goal of the paper is to determine if the kinematically defined Wolf 630 moving group represents a stellar population of a single age and chemical composition. The sample stars have been plotted in the HR diagram (Figure \ref{HR_spec_final}) to determine if they are coincident with a single evolutionary sequence. The sequence traced by the majority of stars coincides with a Yale-Yonsei isochrone \citep{2004ApJS..155..667D} of 2.7 $\pm$ 0.5 Gyr with an assumed solar metallicity. In attempting to qualitatively use ages as a constraint for establishing membership in a distinct evolutionary sequence, it will be assumed that the isochrone which fits the majority of the sample provides a reasonable estimate of the age range of a dominant coeval group, if it indeed exists.

The abundance results are presented in Table \ref{abundances} and as plots of [X/H] versus temperature (Appendix). Lithium and oxygen abundances were also derived, but they are presented and discussed separately, as the approach utilized for these abundance results involved synthesis (Li) or use of the MOOG \emph{blends} driver (O). In order to visually present the abundance results, the metallicity distribution of the entire sample is presented in the form of a ``smoothed histogram'' in Figure \ref{fe_smooth}. This distribution has been generated by characterizing each star with a Gaussian centered on its mean [Fe/H] with standard deviation equal to the [Fe/H] uncertainty. The distributions are summed to yield a final smoothed histogram and have been renormalized to unit area. In this manner, the distributions include the uncertainties in the abundances, making them useful for a visual examination of the complete sample to discern if any stars yield abundances that deviate from the sample as a whole. The distribution is clearly not unimodal or symmetric. It is dominated by a near-solar metallicity peak and two smaller peaks at [Fe/H]$\sim$-0.50 and [Fe/H]$\sim$+0.30. It is clear that our Wolf 630 moving group sample is not characterized by a single chemical composition.
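A minimal Python sketch of this construction is given below; the [Fe/H] grid limits are arbitrary choices of ours:

\begin{verbatim}
import numpy as np

def smoothed_histogram(feh, feh_err,
                       grid=np.linspace(-1.0, 0.6, 400)):
    """One unit-area Gaussian per star, centered on its [Fe/H]
    with sigma equal to its uncertainty; sum and renormalize."""
    pdf = np.zeros_like(grid)
    for m, s in zip(feh, feh_err):
        pdf += np.exp(-0.5 * ((grid - m) / s)**2) / (s * np.sqrt(2*np.pi))
    pdf /= np.trapz(pdf, grid)  # renormalize summed curve to unit area
    return grid, pdf
\end{verbatim}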
\subsection{Approach to Chemically Tagging}

While our entire sample cannot be characterized by a single chemical abundance, we can investigate whether there is a dominant subsample having common abundances and age. This is done by eliminating stars that are clearly outliers, using arguments based on extreme abundances, evolutionary state (inferred from HR diagram positions, lithium abundances, chromospheric activities and surface gravities) or a combination thereof. These members will be classified as ``unlikely'' members of a dominant homogeneous group. In this way we can, for example, establish a subsample that is characterized by a dominant [Fe/H], if it exists. Stars with such an [Fe/H] will be classified as either ``possible'' or ``likely'' members of a chemically homogeneous, isochronal population having common kinematics. The final distinctions between ``possible'' and ``likely'' will be made based on evolutionary status and additional abundance information inferred from lithium, alpha elements and iron peak elements. Particular attention is paid to the iron abundance, [Fe/H], as it is considered the most well determined abundance, primarily due to the quality and size of the Fe line sample.

The quantitative constraint adopted for determining chemical homogeneity was to require that a star's abundance, within its uncertainty, rest within a metallicity band centered on the weighted mean abundance of the stars in the sample. The half-width of this band was conservatively taken to be 3 times the uncertainty in the weighted mean. This approach was followed in an iterative fashion: whenever a star was determined to be an ``unlikely'' member of the dominant chemical group, it was removed from the sample and a new weighted mean and band size were found. In this manner, a common abundance for the sample was converged upon for each element (except lithium and oxygen). An example of the band plot for [Fe/H] is given in Figure \ref{fe_abtemp}, where [Fe/H] is plotted versus temperature. The solid line gives the weighted mean [Fe/H] while the dotted lines give the 3-$\sigma$ uncertainties in this mean, i.e. the abundance band.
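The iterative trimming can be sketched as follows (a schematic of ours, not the actual analysis code): stars whose abundance, within its error, falls outside the band are dropped, and the weighted mean and band are recomputed until membership stabilizes.

\begin{verbatim}
import numpy as np

def converge_band(x, err, k=3.0):
    """Iterative k-sigma band around the weighted mean abundance."""
    members = np.ones(len(x), dtype=bool)
    while True:
        w = 1.0 / err[members]**2
        mean = np.sum(w * x[members]) / np.sum(w)
        band = k / np.sqrt(np.sum(w))   # k x error of weighted mean
        keep = members & (np.abs(x - mean) - err <= band)
        if np.array_equal(keep, members):
            return mean, band, members
        members = keep
\end{verbatim}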
This visual analysis of the abundance distributions served as a guide for identifying the clearly unlikely members. Abundance information alone was used to constrain giant star membership in the dominant chemical group, as robust discriminants of age are unavailable for these stars. Many of the dwarfs lie above the main sequence, leading to the question of whether they might be pre- or post-main-sequence objects. Consequently, a diagnostic was needed to constrain the evolutionary status of these dwarf and subgiant stars. The full analysis, therefore, examined each star individually, utilizing abundances and information on evolutionary status (inferred from chromospheric activities, isochrone ages and surface gravities) to classify each star in its appropriate category (unlikely, possible or likely).

Figure \ref{lithium} shows the absolute lithium abundance versus effective temperature for the little-evolved stars in our sample and for a sample of dwarf stars in the Pleiades, Hyades, NGC752 and M67. The lithium abundances of the sample stars are plotted with each cluster: filled hexagons are dwarfs, filled triangles are upper limits for dwarfs, open hexagons are subgiants (as inferred from HR-diagram positions and apparently low levels of chromospheric activity) and open triangles are upper limits for subgiants. Accepted ages are given for each of the respective clusters, with the Pleiades trend being used as a baseline to indicate that a star is likely to be young (i.e. if a star has a lithium abundance which rests on the Pleiades lithium abundance trend, it is likely a young star).

\subsection{Final Membership}

With the considerations above, the 34 stars in the sample have been classified as unlikely, possible and likely members of a common chemical, temporal and kinematic assemblage. A total of 13 stars were removed from group membership due to classification as unlikely members. If the remaining 21 stars classified as possible and likely are considered to represent a chemically distinct group, then out of the original kinematically defined sample, $\sim$ 60\% remain members of a kinematically and chemically related group with a common 2-3 Gyr age, insofar as we can tell.

The final evolutionary sequence traced by the possible and likely members is presented in Figure \ref{HR_spec_final}, with possible members plotted in red and likely members plotted in green. The group is reasonably well traced by an evolutionary sequence of $\sim$ 2.7 Gyr (solid line) with lower and upper limits of 2.2 Gyr and 3.2 Gyr (dashed lines). The dwarf members HIP 41484, HIP 105341, HIP 14501 and HIP 43557 have positions that place them slightly above the main sequence; however, based on lithium abundances, none of these stars are believed to be pre-main sequence objects, and their surface gravities are all consistent with dwarf status. The giants HIP 3992, HIP 34440 and HIP 3455 appear to form a red giant clump. The remaining members all lie on the best fit isochrone within their respective uncertainties. Thus the possible and likely members we identify can be characterized by a distinct evolutionary sequence of 2.7 $\pm$ 0.5 Gyr.

The final UV kinematic phase space plot is presented in Figure \ref{uvw_final}, where possible members are again red and likely members are green. For our initial full sample, the RMS U and V velocities are 23.92 and 34.46 kms$^{-1}$, respectively. In the final subsample of group members, U$_{RMS}$=25.21 kms$^{-1}$ and V$_{RMS}$=35.8 kms$^{-1}$; therefore the kinematic identity has not been significantly altered by the requirement of chemical and temporal coherence to establish group membership, which points to the necessity of utilizing criteria other than kinematics to robustly link members of moving groups.

The weighted mean abundances of the final possible and likely members of the dominant chemical group are presented in Table \ref{group}. The quoted errors are the uncertainties in the weighted mean. In order to explore the homogeneity of our samples, a reduced chi-squared statistic is presented for each element assuming a constant mean abundance. Performing this test for [Fe/H] for warm stars (T$\ge$ 5000 K) in the Hyades cluster sample data from \citet{2006AJ....131.1057S} yields a $\chi_{\nu}^{2}$ of 1.303. For a set of 7 Pleiades stars from \citet{2003AJ....125.2085S}, the reduced chi-squared in [Fe/H] is 1.818. Note that the cool stars were removed from the calculation as they are believed to be impacted by overexcitation/ionization effects. From these chi-squared values, we estimate the Hyades and Pleiades are chemically homogeneous with a roughly 2-sigma significance. With these open clusters assumed to be chemically homogeneous, an approximate reduced chi-squared of $\le$ 2 therefore provides a rough quantitative indication of homogeneity.
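For reference, the statistic used here is simply the chi-squared per degree of freedom about the weighted mean; a small sketch of ours in Python:

\begin{verbatim}
import numpy as np

def reduced_chi2(x, err):
    """Reduced chi-squared of abundances about the weighted mean;
    values near 1 indicate a chemically homogeneous sample."""
    w = 1.0 / err**2
    mean = np.sum(w * x) / np.sum(w)
    return np.sum(((x - mean) / err)**2) / (len(x) - 1)
\end{verbatim}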
The $\chi_{\nu}^{2}$ is presented for the full sample of 34 stars ($\chi_{\nu,\rm all}^{2}$), the final sample of 21 possible and likely group members ($\chi_{\nu,\rm group}^{2}$) and the 11 likely members ($\chi_{\nu,\rm likely}^{2}$). First, the very large $\chi_{\nu}^{2}$ for the full sample confirms that the initial kinematically defined sample of alleged Wolf 630 members is clearly not chemically monolithic. The decrease in reduced chi-squared between the full sample and the chemically distinct subsample demonstrates that the chemically discrepant stars have been removed. Even in the likely subsample, the $\chi_{\nu}^2$ values remain uncomfortably large for Na and Al. Discussion of these discrepancies is reserved for a later section.

Considering that the reduced chi-squared for other homogeneous open cluster samples is comparable to the reduced chi-squared for the possible and likely members of the sample across multiple elements, the chosen sample is considered to represent a chemically consistent group with a weighted average metallicity of [Fe/H]=-0.01 $\pm$ 0.02 (uncertainty in the weighted mean). \textbf{Using precise chemical tagging of the 34 star sample of the Wolf moving group, a single evolutionary sequence of 2.7 $\pm$ 0.5 Gyr and [Fe/H]=-0.01 $\pm$ 0.02 has been identified for a subsample of 19 stars.}

\subsection{Open Clusters and Moving Groups: Chemically Tagging the Disk}

We present additional results here that illustrate the application of moving group field star members in exploring stellar and chemical evolution in the Galactic disk.

\subsubsection{Na and Al Abundances}

The abundances of Na and Al appear to be enhanced for some of the stars in the sample. Similar enhancements have been observed in many open clusters. Most recently, an analysis of abundances in the Hyades cluster found abundance enhancements in Na and Al of 0.2-0.5 dex in giant stars when compared with dwarfs \citep{2009ApJ...701..837S}, in line with observations of giant stars in old open clusters \citep{2005AJ....129.2725F,2008AJ....135.2341J}. These enhancements can be compared to those observed in the group members of this work.

Plots of [Na/Fe] (top panel) and [Al/Fe] (bottom panel) versus surface gravity are presented in Figure \ref{na_al}. For the members of the group, the Na and Al enhancements are relatively modest, as seen in a relatively slight upward shift in abundances between dwarfs and subgiants. The giant abundances, in general, can be brought into agreement with the dwarf abundances with downward revisions of 0.1-0.2 dex, consistent with NLTE corrections found in field clump giants with surface gravities down to log $g$=2.10 \citep{2006A&A...456.1109M}.
The single star which has greatly enhanced [Na/Fe] and [Al/Fe], HIP 114155, is an evolved, metal-poor red giant with enrichments of 0.53 dex and 0.51 dex, comparable to those found by \citet{2009ApJ...701..837S}. According to the NLTE correction table of \citet{2003ChJAA...3..316T}, the recommended NLTE correction is at most -0.10, although the calculations performed do not extend below a temperature of 4500 K. \citet{1999A&A...350..955G} performed an extensive set of NLTE corrections for Na, and based on their results, the recommended NLTE correction is $\sim$ 0.20 dex. Even considering these corrections, the Na abundance remains enhanced. Although there are few NLTE corrections for Al in the literature, \citet{2008A&A...481..481A} suggest NLTE corrections of roughly 0.60 dex upward. This is opposite to the correction necessary to remove the enhancement; however, those corrections are for low metallicities ([Fe/H]$\approx$-2.00). Further NLTE calculations for cool, moderately metal-poor giants like HIP 114155 are needed to determine whether the enhanced abundances in this star are a result of NLTE effects.

The other points of interest in Figure \ref{na_al} are the two dwarfs with the greatest surface gravities ([Na/Fe]=-0.38 in HIP105341 and [Na/Fe]=-0.33 in HIP5027). Closer inspection shows that these are the two coolest dwarfs in the sample, perhaps pointing to overexcitation/ionization as a culprit for the decreased abundances, similar to the overexcitation/ionization effects observed in cool open cluster dwarfs (\citet{2003AJ....125.2085S}, \citet{2004ApJ...603..697Y}, \citet{2005PASP..117..911K} and \citet{2006AJ....131.1057S}).

Similar effects are not apparent for [Al/Fe]. A single Na line with a relatively low excitation potential of 2.10 eV was measurable, while two Al lines of 3.14 eV and 4.02 eV were used. Additionally, the ionization potential of Al is $\sim$ 0.9 eV higher than that of Na. These differences are qualitatively consistent with those needed for overexcitation/overionization to be manifest. This can be further explored by comparing abundances from \ion{Fe}{1} and \ion{Fe}{2}.

\subsubsection{Overexcitation and Overionization in Cool Dwarfs: \ion{Fe}{1} and \ion{Fe}{2} Abundances}

In order to more closely examine the possible effects of overexcitation and overionization in the sample, abundances have been derived from \ion{Fe}{1} and \ion{Fe}{2} lines using physical surface gravities (spectroscopic gravities are unsuitable for this purpose, since ionization balance forces agreement between the abundances of \ion{Fe}{1} and \ion{Fe}{2}). Refer to Figure \ref{overionization_dwarfs}, where the difference in abundances between ionized and neutral Fe is plotted versus temperature. For stars warmer than 4500 K the general trend reveals no overionization within the uncertainties. The same two coolest dwarfs which evince unusually low [Na/Fe] show large degrees of Fe overionization.

The source of overionization in cool dwarfs is not well understood; however, one possible explanation is that the stars are active young dwarfs and, thus, heavily spotted. Recent work suggests that heavily spotted stars have radii which are ``puffed'' compared to standard stellar models (\citet{2002ApJ...567.1140T}, \citet{2008A&A...478..507M}). An increased radius would decrease the surface gravity of the star compared to unspotted analogs, which would result in increased \ion{Fe}{2} line strengths via overionization. In order to explore the viability of this explanation, the radius that corresponds to the surface gravity needed to eliminate the abundance difference between [\ion{Fe}{2}/H] and [\ion{Fe}{1}/H] was determined for HIP5027. A surface gravity of 3.57 was found to produce agreement between the abundances from \ion{Fe}{1} and \ion{Fe}{2}, holding temperature and microturbulence constant. From Yale-Yonsei isochrones, a mass of 0.66 M$_{\odot}$ is assumed. The radius for this gravity is R=2.19 R$_{\odot}$. The radius corresponding to this mass and the physical surface gravity of log $g$=4.70 is R=0.60 R$_{\odot}$. From \citet{2008A&A...478..507M}, an upper limit that can be expected for radius changes in this ``spotted'' regime is $\sim$10\%, well beneath the radius change implied by the surface gravity change necessary to eliminate the overionization, and well outside of the uncertainty in the physical surface gravity.
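These radii follow directly from $g \propto M/R^{2}$; a quick check in Python (assuming a solar $\log g_\odot = 4.44$, our choice):

\begin{verbatim}
import numpy as np

def radius_from_logg(logg, mass_msun, logg_sun=4.44):
    """Radius (R_sun) implied by surface gravity and mass."""
    return np.sqrt(mass_msun / 10**(logg - logg_sun))

print(radius_from_logg(3.57, 0.66))  # ~2.2 R_sun (ionization balance)
print(radius_from_logg(4.70, 0.66))  # ~0.6 R_sun (physical gravity)
\end{verbatim}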
This points to a more likely scenario of significant NLTE effects yielding increased overionization as a function of decreasing temperature, as observed in many cool open cluster dwarfs (\citet{2004ApJ...603..697Y}, \citet{2006AJ....131.1057S}).

\subsubsection{Oxygen Abundances: Moving Groups Versus Open Clusters}

Abundances for the $\lambda$7771, $\lambda$7774, $\lambda$7775 high excitation potential oxygen triplet have been derived from equivalent widths. Since abundances derived from the triplet are believed to be enhanced by NLTE effects, corrections from the work of \citet{2003A&A...402..343T} have been applied to derive NLTE corrected abundances from the triplet lines. The equivalent widths for the triplet, the LTE oxygen abundances, and the final NLTE oxygen abundances are shown in Table \ref{oxygen}.

The $\lambda$7774 {\AA} and $\lambda$7775 {\AA} lines appear to be enhanced as a general function of decreasing temperature in both dwarfs (Figure \ref{oxygen_dwarf_diff}) and giants. A similar enhancement of the central line (7774.1 {\AA}) in Hyades giants was noted by \citet{2006AJ....131.1057S}. They believed this enhancement to be due to a possible blend with an \ion{Fe}{1} feature at 7774.00 {\AA}. While the nature of any blending in the reddest feature (7775 \AA) is unclear, visual inspection of the spectral line reveals a slight asymmetry, possibly indicating a blend. The distinct increase in the [O/H] abundances derived from the red features of the triplet as a function of decreasing temperature suggests that only the blue line (7771.1 {\AA}) of the triplet should be used for oxygen abundance determinations in cooler stars.

In order to test the possibility of an Fe blend as discussed above, two cool stars of the sample with no measurable oxygen abundances (HIP5027 and HIP105341) were examined to see if they showed any indications of an Fe blending feature near 7774 {\AA}. In HIP5027 a possible detection of a feature at 7774 {\AA} was found to have a measured equivalent width of roughly 6.0 m{\AA}.
This strength is not inconsistent with the expected contribution required from the two nearby \ion{Fe}{1} features at 7773.979 {\AA} and 7774.06 {\AA} for the derived Fe abundance.

Neglecting the two red triplet lines in the cool dwarfs, the [O/H] trend of our dwarf sample is plotted along with the Pleiades trend from \citet{2004ApJ...602L.117S} (where [Fe/H]=0.00 was assumed to calculate [O/Fe]) and the Hyades trend of \citet{2006AJ....131.1057S} (where [Fe/H]=$+$0.13 was assumed to calculate [O/Fe]) in Figure \ref{oxygen_dwarf_simon}. Using $\lambda$7771 triplet-based [O/H] abundances in 45 Hyades dwarfs, the latter authors found a remarkable increase in [O/H] as a function of decreasing temperature for stars with T$_{eff}$$\le$5400 K. The increase of [O/H] in the $\sim$ 120 Myr old Pleiades appeared to be steeper than that in the $\sim$ 625 Myr old Hyades, perhaps pointing to an age-related effect whereby the [O/H] enhancements in cooler stars decrease as a function of increasing age.

Our field dwarfs do not show a drastic increase in abundance as a function of decreasing temperature. The single star that appears to reside within the increasing Hyades trend at cooler temperatures is metal-weak (HIP 42499, [Fe/H]=-0.56), resulting in [O/Fe]=+0.47. The enhanced [O/Fe] ratio at this low metallicity is unsurprising and coincides with the characteristic oxygen enhancements observed in other metal-poor field stars \citep{1989ApJ...347..186A}. If the abundance trend observed by \citet{2004ApJ...602L.117S} and \citet{2006AJ....131.1057S} is age dependent, the lack of a distinct trend of increasing [O/Fe] with decreasing temperature may point to the stars in the sample being older than the Hyades, which is not inconsistent with the 2.7 Gyr age of the dominant subsample identified above. If it is not an age-related effect, then an as yet unknown dichotomy between oxygen abundances in field stars and cluster stars would have to be explored with abundances of field stars of quantifiable age.

For the giant stars in the sample, oxygen abundances have been derived from the infrared triplet and from the forbidden line at $\lambda$6300.30 {\AA}, through use of the \emph{blends} driver of MOOG, following the approach of \citet{2006AJ....131.1057S}.

In examining the giant triplet abundances, a similar effect to that seen in the dwarfs is observed as temperatures decrease, with enhancements in the oxygen abundances derived from both the 7774 {\AA} and 7775 {\AA} lines. NLTE corrections were applied to the $\lambda$7771 triplet abundances by interpolating within the grids of \citet{2003A&A...402..343T}. The results of these corrections are presented in Figure \ref{giant_oxygen_forbidden}, where the forbidden minus permitted [O/H] differences are plotted versus temperature. Notice that as the temperature decreases, the abundances from the redder lines of the triplet appear to be enhanced relative to the forbidden line. While the NLTE corrections decreased the abundance enhancements in the cooler stars of the sample, they did not eliminate them. This yields further evidence of blending effects in the reddest lines of the triplet as a function of cooler temperature.
\nFor the purposes of this paper, the oxygen abundances derived from the red features of the triplet will not be used.\n\nIn Figure \\ref{giant_forbid_bluetriple} the difference in abundance between the forbidden oxygen line (6300.34 {\\AA}) and the NLTE-corrected blue triplet line (7771 {\\AA}) is plotted versus temperature (top plot) and surface gravity (bottom plot). The dotted line shows a zero difference between the two abundances. The NLTE-corrected permitted oxygen abundances (7771 {\\AA}) appear to agree well with the forbidden oxygen abundance (6300 {\\AA}), indicating that the blue line of the triplet can provide a reliable oxygen abundance when proper care is taken to make the necessary NLTE corrections.\n\nThe single outlier is the highly evolved giant HIP 114155. The larger abundance from the blue triplet feature in this star is believed to be from NLTE effects that are not removed using the corrections of \\citet{2003A&A...402..343T}, as the grid for the corrections does not extend below 4500 K. While the temperature extrapolation is sufficient for less evolved stars (i.e., NLTE triplet abundances in stars with surface gravities above 2.0 all agree with the forbidden abundance, even at temperatures below 4500 K), the corrections for more evolved stars, with surface gravities $\\sim$1.00, are significantly larger. The good agreement between all other forbidden and blue triplet oxygen abundances indicates the inadequacy of extrapolating the NLTE corrections in cool, evolved stars.\n\nThe final salient point to make regarding the oxygen abundances is to address the alleged CN blending feature previously discussed. As mentioned, \\citet{1963rspx.book.....D} list a CN feature at 6300.265 {\\AA} and two features at 6300.482 {\\AA} with gf values of 5.78E-3, 6.82E-3 and 2.01E-3. Recall that the inability to adequately calibrate a linelist including these features with a high resolution atlas of Arcturus led to the features not being utilized in the derivation of forbidden line oxygen abundances. Given the good agreement between the forbidden oxygen abundances derived neglecting the CN features and the NLTE-corrected blue line of the triplet in Figure \\ref{giant_forbid_bluetriple}, it is suggested that the CN blending features may not be important.\n\n\\section{SUMMARY}\n\nThe existence of spatially unassociated groups of stars moving through the solar neighborhood with common U and V kinematics has been explored for over half a century \\citep{1958MNRAS.118...65E}. Despite this long history, the exact origins of these so-called moving groups are still a matter of some debate. The classical view contends that they are dissolved open clusters which have retained common kinematics and drifted into spatially elongated stellar streams. If this is indeed true, moving group members should possess similar characteristics to those of open cluster stars: in particular, common chemical abundances and residence along a distinct evolutionary sequence in an HR diagram.\n\nIn order to address the viability of moving groups being dissolved open clusters, we have performed a high resolution spectroscopic abundance analysis of a 34-star sample of the kinematically distinct Wolf 630 moving group, selected for its residence in a sparsely populated region of the \\emph{UV} plane in the solar neighborhood. Our abundance measurements reveal that the sample cannot be characterized by a uniform abundance pattern.
The individual stars have been closely scrutinized, making use of abundances, evolutionary state, and qualitative age information to classify each as an unlikely, possible, or likely member of a subsample with a dominant abundance trend and consistent age. There appears to be a group of 19 stars with a weighted mean of [Fe\/H]=-0.01 $\\pm$ 0.02 (uncertainty in the weighted mean). These final members are well traced by an evolutionary sequence of 2.7 $\\pm$ 0.5 Gyr as determined from Yale-Yonsei isochrones \\citep{2004ApJS..155..667D}. Thus, the existence of moving groups as relic structures of dissolved clusters remains plausible based on the homogeneity of the subgroup identified above.\n\nWe have also explored some of the additional uses for abundances in moving groups in chemically tagging the Galactic disk. We found evidence for overexcitation\/overionization effects from both Na and from \\ion{Fe}{1} versus \\ion{Fe}{2} abundances in the coolest dwarfs of the sample, likely attributable to increasing NLTE effects as a function of decreasing temperature. We also find it necessary to apply NLTE corrections of 0.10-0.20 dex to Na abundances in giant stars. Finally, we derived oxygen abundances for the stars in the sample from both the forbidden line at 6300 {\\AA} and the near-IR triplet. First, we find evidence for blending in the IR triplet in both dwarfs and giant stars, possibly by \\ion{Fe}{1} features near the $\\lambda$7774 line. Second, we find that NLTE effects on \\ion{O}{1} in cool, low-log $g$ giants are important and cannot be accounted for by extrapolating current NLTE calculations. Third, we find reliable oxygen abundances from the forbidden line in giant stars, and we again find evidence of increased NLTE effects as a function of decreasing temperature, manifested in increased triplet-derived abundances.\n\n\\begin{acknowledgements}\nThe authors gratefully acknowledge support for this work provided by NSF grants AST-0908342 and AST-0239518. Furthermore, we would like to thank the referee for many useful comments which placed the work into a broader context.\n\\end{acknowledgements}\n
\n\n\\section{Introduction}\n\nNeural decoding, the accurate prediction of animal behavior from brain activity, is a fundamental challenge in neuroscience with important applications in the development of robust brain machine interfaces. Recent technological advances have enabled simultaneous recordings of neural activity and behavioral data in experimental animals and humans~\\cite{dombeck,Seelig,chen2018imaging,lfads,Ecker2010c,wirelesshuman,largescale}. Nevertheless, our understanding of the complex relationship between behavior and neural activity remains limited.\n\n\\input{fig\/data}\n\nA major reason is that it is difficult to obtain many long recordings from mammals, and a few subjects are typically not enough to perform meaningful analyses~\\cite{pei2021neural}. This is less of a problem when studying the fly \\textit{Drosophila melanogaster}, for which long neural and behavioral datasets can be obtained for many individual animals \\textbf{(Fig.~\\ref{fig:data})}. Nevertheless, current supervised approaches for performing neural decoding~\\cite{supervised1,MLfordecoding} still do not generalize well across animals because each nervous system is unique \\textbf{(Fig. \\ref{fig:domain}A)}. This creates a significant domain gap that necessitates tedious and difficult manual labeling of actions. Furthermore, a different model must be trained for each individual animal, requiring more annotation and overwhelming the resources of most laboratories.\n\nAnother problem is that experimental neural imaging data often has unique temporal and spatial properties. The slow decay time of fluorescence signals introduces temporal artifacts. Thus, neural imaging frames include information about an animal's previous behavioral state. This complicates decoding and requires specific handling that standard machine learning algorithms do not provide.\n\nTo address these challenges, we propose to learn neural action representations---embeddings of behavioral states within neural activity patterns---in an unsupervised fashion. To this end, we leverage the recent development of computer vision approaches for automated, markerless 3D pose estimation~\\cite{Gunel19a, dlc} to provide the required supervisory signals without human intervention. We first show that using contrastive learning to generate latent vectors by maximizing the mutual information of simultaneously recorded neural and behavioral data modalities is not sufficient to overcome the domain gap between animals and to generalize to unlabeled animals at test time \\textbf{(Fig.~\\ref{fig:domain}C)}. To address this problem, we introduce two sets of techniques:\n\\begin{enumerate}[leftmargin=*]\n\n\\item To close the domain gap between animals, we leverage 3D pose information. Specifically, we use pose data to find sequences of similar actions between a source and multiple target animals. Given these sequences, we feed our model neural data from one animal together with behavioral data drawn from multiple animals. This allows us to train our decoder to ignore animal identity and close the domain gap.\n\n\\item To mitigate the impact on neural images of slowly decaying calcium signals from past actions, we add simulated, randomized versions of this effect to our training neural images in the form of a temporally exponentially decaying random action. Similarly, to make the neural encoders robust to imaging noise resulting from low image spatial resolution, we mix random sequences into sequences of neural data to replicate this noise.\n\n\\end{enumerate}
The combination of these techniques allowed us to bridge the domain gap across animals in an unsupervised manner, making it possible to perform action recognition on unlabeled animals \\textbf{(Fig.~\\ref{fig:domain}E)} better than earlier techniques, including those requiring supervision~\\cite{MLfordecoding,behavenet,eegselfsupervised}.\n\nTo test the generalization capacity of neural decoding algorithms, we acquired and use the MC2P dataset, which we will make publicly available. It includes two-photon microscope recordings of multiple spontaneously behaving \\textit{Drosophila}, and associated behavioral data together with action labels.\n\nFinally, to demonstrate that our technique generalizes beyond this one dataset, we tested it on two additional ones: one that features neural ECoG recordings and 2D pose data for epileptic patients~\\cite{peterson_2021,SINGH2021109199}, along with the well-known H36M dataset~\\cite{Ionescu14a}, in which we treat the multiple views as independent domains. Our method markedly improves across-subject action recognition in all datasets.\n\nWe hope our work will inspire the use and development of more general unsupervised neural feature extraction algorithms in neuroscience. These approaches promise to accelerate our understanding of how neural dynamics give rise to complex animal behaviors and can enable more robust neural decoding algorithms to be used in brain-machine interfaces.\n\n\\section{Approach}\nOur goal is to be able to interpret neural data such that, given a neural image, one can generate latent representations that are useful for downstream tasks. This is challenging, as there is a wide domain gap in neural representations for different animals \\textbf{(Fig. \\ref{fig:domain}A)}. We aimed to leverage unsupervised learning techniques to obtain rich features that, once trained, could be used for downstream tasks, including action recognition to predict the behaviors of unlabeled animals.\n\nOur data is composed of two-photon microscopy neural images synchronized with 3D behavioral data, where we do not know where each action starts and ends. We leveraged contrastive learning to generate latent vectors from both modalities such that their mutual information would be maximized and they would therefore describe the same underlying action. However, this is insufficient to address the domain gap between animals \\textbf{(Fig. \\ref{fig:domain}C)}. To address this issue, we perform swapping augmentation: we replace the original pose or neural data of an animal with data from other animals for which there is a high degree of 3D pose similarity at each given instant in time.\n\nUnlike behavioral data, neural data has unique properties. Neural calcium data contains information about previous actions because it decays slowly across time; it also has limited spatial resolution. 
To teach our model invariance to these artifacts of neural data, we propose two data augmentation techniques: (i) Neural Calcium augmentation: given a sequence of neural data, we add an exponentially decaying neural snapshot to the sequence, which imitates the decaying impact of previous actions; and (ii) Neural Mix augmentation: to make the model more robust to noise, we apply a mixing augmentation that merges a sequence of neural data with another, randomly sampled neural sequence using a mixing coefficient.\n\nTogether, these augmentations enable a self-supervised approach that can (i) bridge the domain gap between animals, allowing testing on unlabeled ones, and (ii) imitate the temporal and spatial properties of neural data, diversifying it and making the model more robust to noise. In the following sections, we describe these steps in more detail.\n\n\\subsection{Problem Definition}\n\nWe assume a paired set of data $\\mathcal{D}_{s}=\\left\\{\\left(\\mathbf{b}_{\\mathbf{i}}^{s}, \\mathbf{n}_{\\mathbf{i}}^{s}\\right)\\right\\}_{i=1}^{n_{s}}$, where $\\mathbf{b}^{s}_\\mathbf{i}$ and $\\mathbf{n}^{s}_\\mathbf{i}$ represent behavioral and neural information respectively, with $n_s$ being the number of samples for animal $s\\in \\mathcal{S}$. We quantify behavioral information $\\mathbf{b}^{s}_\\mathbf{i}$ as a set of 3D poses $\\mathbf{b}^{s}_k$ for each frame $k\\in\\mathbf{i}$ taken of animal $s$, and neural information $\\mathbf{n}^{s}_\\mathbf{i}$ as a set of two-photon microscope images $\\mathbf{n}^{s}_k$, for all frames $k \\in \\mathbf{i}$, capturing the activity of neurons. The data is captured such that the two modalities are always synchronized (paired) without human intervention, and therefore describe the same set of events. Our goal is to learn an unsupervised, parameterized image encoder function $f_n$ that maps a set of neural images $\\mathbf{n}^{s}_\\mathbf{i}$ to a low-dimensional representation. We aim for our learned representation to be representative of the underlying action label, while being agnostic to both modality and identity. We assume that we are not given action labels during unsupervised training. Also note that we do not know at which point in the captured data an action starts and ends; we just have a series of unknown actions performed by different animals.\n\n\\subsection{Contrastive Representation Learning}\n\nFor each input pair $\\left(\\mathbf{b}_{\\mathbf{i}}^{s}, \\mathbf{n}_{\\mathbf{i}}^{s}\\right)$, we first draw a random augmented version $( \\tilde{\\mathbf{b}}_{\\mathbf{i}}^{s}, \\tilde{\\mathbf{n}}_{\\mathbf{i}}^{s} )$ with sampled transformation functions $t_{n} \\sim \\mathcal{T}_n$ and $t_{b} \\sim \\mathcal{T}_b$, where $\\mathcal{T}_n$ and $\\mathcal{T}_b$ represent families of stochastic augmentation functions for neural and behavioral data, respectively, which are described in the following sections. Next, the encoder functions $f_b$ and $f_n$ transform the input data into low-dimensional vectors $\\mathbf{h}_b$ and $\\mathbf{h}_n$, followed by non-linear projection functions $g_b$ and $g_n$, which further transform the data into the vectors $\\mathbf{z}_b$ and $\\mathbf{z}_n$.
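\n\nSchematically, this two-branch encoding can be written as follows (a PyTorch-style sketch; the two-layer MLP projection heads and all names are our own illustrative assumptions, with the actual architectures given in the Supplementary Material):\n\\begin{verbatim}\nimport torch.nn as nn\n\nclass TwoBranch(nn.Module):\n    def __init__(self, f_b, f_n, dim=128):\n        super().__init__()\n        self.f_b, self.f_n = f_b, f_n  # behavioral, neural encoders\n        self.g_b = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),\n                                 nn.Linear(dim, dim))\n        self.g_n = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),\n                                 nn.Linear(dim, dim))\n\n    def forward(self, b_i, n_i):\n        h_b, h_n = self.f_b(b_i), self.f_n(n_i)  # h vectors\n        return self.g_b(h_b), self.g_n(h_n)      # z_b, z_n\n\\end{verbatim}\n\n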
During training, we sample a minibatch of N input pairs $\\left(\\mathbf{b}_{\\mathbf{i}}^{s}, \\mathbf{n}_{\\mathbf{i}}^{s}\\right)$, and train with the loss function\n\\begin{equation}\n\\mathcal{L}^{b\\rightarrow n}_{NCE} = - \\sum_{i=1}^{N} \\log \\frac{\\exp \\left(\\left\\langle\\mathbf{z}^{i}_{b}, \\mathbf{z}^{i}_{n}\\right\\rangle \/ \\tau\\right)}{\\sum_{k=1}^{N} \\exp \\left(\\left\\langle\\mathbf{z}^{i}_{b}, \\mathbf{z}^{k}_{n}\\right\\rangle \/ \\tau\\right)}\n\\label{eq:nce}\n\\end{equation}\nwhere $\\left\\langle\\mathbf{z}^{i}_{b}, \\mathbf{z}^{i}_{n}\\right\\rangle$ is the cosine similarity between the behavioral and neural modalities and $\\tau \\in \\mathbb{R}^{+}$ is the temperature parameter. An overview of our method for learning $f_n$ is shown in \\textbf{Fig~\\ref{fig:method}}. Intuitively, the loss function measures the classification accuracy of an N-class classifier that tries to predict $\\mathbf{z}^{i}_{n}$ given the true pair $\\mathbf{z}^{i}_{b}$. To make the loss function symmetric with respect to the negative samples, we also define\n\\begin{equation}\n\\mathcal{L}^{n\\rightarrow b}_{NCE} = - \\sum_{i=1}^{N} \\log \\frac{\\exp \\left(\\left\\langle \\mathbf{z}^{i}_{b}, \\mathbf{z}^{i}_{n} \\right\\rangle \/ \\tau\\right)}{\\sum_{k=1}^{N} \\exp \\left(\\left\\langle \\mathbf{z}^{k}_{b}, \\mathbf{z}^{i}_{n} \\right\\rangle \/ \\tau\\right)}.\n\\label{eq:nce2}\n\\end{equation}\n\n\\input{fig\/method}\n\n\\noindent We take the combined loss function to be $\\mathcal{L}_{NCE} = \\mathcal{L}^{b\\rightarrow n}_{NCE} + \\mathcal{L}^{n\\rightarrow b}_{NCE}$, as in \\cite{zhang2020contrastive, Yuan_2021_CVPR}. The loss function maximizes the mutual information between the two modalities \\cite{oord2019representation}. Although standard contrastive learning bridges the gap between different modalities, it does not bridge the gap between different animals \\textbf{(Fig.~\\ref{fig:domain}C)}. This is a fundamental challenge that we address in this work through the augmentations described below, which are part of the neural and behavioral augmentation families $\\mathcal{T}_n$ and $\\mathcal{T}_b$.
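\n\nFor concreteness, the symmetric objective can be sketched as follows (a minimal PyTorch implementation of Eqs.~\\ref{eq:nce} and \\ref{eq:nce2}; function and variable names are our own):\n\\begin{verbatim}\nimport torch\nimport torch.nn.functional as F\n\ndef symmetric_nce(z_b, z_n, tau=0.1):\n    # z_b, z_n: (N, d) projections; rows are paired positives.\n    z_b = F.normalize(z_b, dim=1)  # cosine similarity via dots\n    z_n = F.normalize(z_n, dim=1)\n    logits = z_b @ z_n.t() \/ tau   # (N, N) pairwise similarities\n    targets = torch.arange(z_b.size(0), device=z_b.device)\n    # Rows give L^{b->n}; the transpose gives L^{n->b}.\n    return (F.cross_entropy(logits, targets)\n            + F.cross_entropy(logits.t(), targets))\n\\end{verbatim}\n\n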
\\parag{Swapping Augmentation.}\nGiven a set of consecutive 3D poses $\\mathbf{b}^{s}_\\mathbf{i}$, for each $k\\in\\mathbf{i}$, we stochastically replace $\\mathbf{b}^{s}_k$ with one of its nearest pose neighbors in the set of domains $\\mathcal{D}_{\\mathcal{S}}$, where $\\mathcal{S}$ is the set of all animals. To do this, we first randomly select a domain $\\hat{s} \\in \\mathcal{S}$ and define a probability distribution $\\mathbf{P}^{\\hat{s}}_{\\mathbf{b}^{s}_k}$ over the domain $\\mathcal{D}_{\\hat{s}}$ with respect to $\\mathbf{b}^{s}_k$,\n\\begin{equation}\n\\mathbf{P}^{\\hat{s}}_{\\mathbf{b}^{s}_k} (\\mathbf{b}^{\\hat{s}}_l) = \n\\frac{\\exp ( - \\| \\mathbf{b}^{\\hat{s}}_l - \\mathbf{b}^{s}_k \\|_{2})}{\\sum_{ \\mathbf{b}^{\\hat{s}}_m \\in \\mathcal{D}_{\\hat{s}} } \\exp ( - \\| \\mathbf{b}^{\\hat{s}}_m - \\mathbf{b}^{s}_k \\|_{2})}.\n\\end{equation}\nWe then replace each 3D pose $\\mathbf{b}^{s}_k$ by first uniformly sampling a new domain $\\hat{s}$ and then sampling from the above distribution, which yields $\\tilde{\\mathbf{b}}^{s}_k \\sim \\mathbf{P}^{\\hat{s}}_{\\mathbf{b}^{s}_k}$. In practice, we calculate the distribution $\\mathbf{P}$ only over the first $\\mathbf{N}$ nearest neighbors of $\\mathbf{b}^{s}_k$, in order to sample from a distribution of the most similar poses. We empirically set $\\mathbf{N}$ to $128$. Swapping augmentation reduces the identity information in the behavioral data without perturbing it to the extent that semantic action information is lost. Since each behavioral sample $\\mathbf{b}^{s}_\\mathbf{i}$ is composed of a set of 3D poses, and each 3D pose $\\mathbf{b}^{s}_k, \\forall k \\in \\mathbf{i}$, is replaced with a pose from a random domain, the transformed sample $\\tilde{\\mathbf{b}}_{\\mathbf{i}}^{s}$ is now composed of multiple domains. This forces the behavioral encoding function $f_{b}$ to leave identity information out, therefore merging multiple domains in the latent space \\textbf{(Fig. \\ref{fig:swap-aug})}.\n\nSwapping augmentation is similar to the synonym replacement augmentation used in natural language processing \\cite{wei-zou-2019-eda}, where randomly selected words in a sentence are replaced by their synonyms, therefore changing the syntactic form of the sentence without altering its semantics. To the best of our knowledge, we are the first to use swapping augmentation in the context of time-series analysis or for domain adaptation.\n\nTo keep swapping symmetric, we also swap the neural modality. To swap a set of neural images $\\mathbf{n}_{\\mathbf{i}}^{s}$, we take its behavioral pair $\\mathbf{b}_{\\mathbf{i}}^{s}$ and search for similar sets of poses in other domains, with the assumption that similar sets of poses describe the same action. Therefore, once similar behavioral data is found, the corresponding neural data can be swapped. Note that, unlike behavior swapping, we do not calculate the distribution on individual 3D poses $\\mathbf{b}^{s}_k$, but instead on the whole set of behavioral data $\\mathbf{b}_{\\mathbf{i}}^{s}$, because similarity in a single pose does not necessarily imply similar actions. More formally, given the behavioral-neural pair $\\left(\\mathbf{b}_{\\mathbf{i}}^{s}, \\mathbf{n}_{\\mathbf{i}}^{s}\\right)$, we swap the neural modality $\\mathbf{n}_{\\mathbf{i}}^{s}$ with $\\mathbf{n}^{\\hat{s}}_\\mathbf{j}$ with the probability\n\\begin{equation}\n\\mathbf{P}^{\\hat{s}}_{\\mathbf{n}^{s}_\\mathbf{i}} (\\mathbf{b}^{\\hat{s}}_\\mathbf{j}) = \n\\frac{\\exp ( - \\| \\mathbf{b}^{\\hat{s}}_\\mathbf{j} - \\mathbf{b}^{s}_\\mathbf{i} \\|_{2})}{\\sum_{ \\mathbf{b}^{\\hat{s}}_\\mathbf{m} \\in \\mathcal{D}_{\\hat{s}} } \\exp ( - \\| \\mathbf{b}^{\\hat{s}}_\\mathbf{m} - \\mathbf{b}^{s}_\\mathbf{i} \\|_{2})}.\n\\end{equation}\nThis results in a new pair $\\left(\\mathbf{b}_{\\mathbf{i}}^{s}, \\tilde{\\mathbf{n}}_{\\mathbf{i}}^{s}\\right)$, where the augmented neural data comes from a new animal $\\hat{s} \\in \\mathcal{S} \\setminus \\{s\\}$.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=8cm]{figures\/teaser_small.pdf}\n \\caption{\\textbf{Swapping augmentation.} Each 3D pose in the motion sequence of Domain 2 is randomly replaced with one of its neighbors from the other domains $\\hat{s} \\in \\mathcal{S} \\setminus \\{s\\}$, here Domains 1 and 3. The swapping augmentation hides identity information, while keeping pose changes in the sequence minimal.}\n \\label{fig:swap-aug}\n\\end{figure}
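\n\nThe pose-level sampling step can be sketched as follows (NumPy; array layouts and names are illustrative):\n\\begin{verbatim}\nimport numpy as np\n\ndef swap_pose(b_k, target_poses, n_neighbors=128):\n    # b_k: flattened 3D pose; target_poses: poses of a random\n    # domain s_hat, one per row.\n    d = np.linalg.norm(target_poses - b_k, axis=1)\n    nn = np.argsort(d)[:n_neighbors]  # N nearest neighbors\n    p = np.exp(-d[nn])\n    p \/= p.sum()                      # softmax over -distance\n    return target_poses[np.random.choice(nn, p=p)]\n\\end{verbatim}\n\n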
\\parag{Neural Calcium Augmentation.}\nOur neural data was obtained using two-photon microscopy and fluorescence calcium imaging. The resulting images are an indirect function of the underlying neural activity and have temporal properties that differ from it. For example, calcium signals from a neuron change much more slowly than the neuron's actual firing rate. Consequently, a single neural image $\\mathbf{n}_t$ includes decaying information concerning neural activity from the recent past, and thus carries information about previous behaviors. This makes it harder to decode the current behavioral state.\n\nWe aimed to prevent this overlap of ongoing and previous actions. Specifically, we wanted to teach our network to be invariant with respect to past behavioral information by augmenting the set of possible past actions. To do this, we generated new data $\\tilde{\\mathbf{n}}^{s}_\\mathbf{i}$ that includes previous neural activity $\\mathbf{n}^{s}_k$. To mimic calcium indicator decay dynamics, given a neural data sample $\\mathbf{n}^{s}_\\mathbf{i}$ of multiple frames, we sample a new neural frame $\\mathbf{n}^{s}_k$ from the same domain, where $k \\notin \\mathbf{i}$. We then convolve $\\mathbf{n}^{s}_k$ with the temporally decaying calcium convolutional kernel $\\mathcal{K}$, therefore creating a set of images from the single frame $\\mathbf{n}^{s}_k$, which we then add back to the original data sample. This results in $\\tilde{\\mathbf{n}}^{s}_\\mathbf{i} = \\mathbf{n}^{s}_\\mathbf{i} + \\mathcal{K} * \\mathbf{n}^{s}_k$, where $*$ denotes the convolution operation. In the Supplementary Material, we explain calcium dynamics and our calculation of the kernel $\\mathcal{K}$ in more detail.\n\n\\parag{Neural Mix Augmentation.} Two-photon microscopy images often include multiple neural signals combined within a single pixel. This is due to the fact that multiple axons can be present in a small tissue volume that is below the spatial resolution of the microscope. To mimic this noise-adding effect, given a neural image $\\mathbf{n}^s_{\\textbf{i}}$, we randomly sample a set of frames $\\mathbf{n}^{\\hat{s}}_{\\textbf{k}}$ from a random domain $\\hat{s}$. We then return the blend of these two videos, $\\tilde{\\mathbf{n}}^s_{\\textbf{i}} = \\mathbf{n}^s_{\\textbf{i}} + \\alpha \\mathbf{n}^{\\hat{s}}_{\\textbf{k}}$, to mix and hide the behavioral information. Unlike the CutMix \\cite{cutmix} and Mixup \\cite{mixup} augmentations used for supervised training, we apply the augmentation in an unsupervised setup to make the model more robust to noise. We sample a single random $\\alpha$ for the entire set of samples in $\\mathbf{n}^s_{\\textbf{i}}$.
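\n\nBoth neural augmentations can be sketched compactly as follows (NumPy; the decay constant gamma and blend coefficient alpha shown are illustrative values, and the kernel follows the calcium model given in the Supplementary Material):\n\\begin{verbatim}\nimport numpy as np\n\ndef calcium_augment(n_i, n_k, gamma=0.9):\n    # n_i: (T, H, W) clip; n_k: (H, W) past frame. Adds an\n    # exponentially decaying copy of n_k with K_t = gamma ** t.\n    K = gamma ** np.arange(n_i.shape[0])\n    return n_i + K[:, None, None] * n_k[None]\n\ndef mix_augment(n_i, n_j, alpha=0.3):\n    # Blend in a clip from another animal to imitate\n    # overlapping axonal signals within single pixels.\n    return n_i + alpha * n_j\n\\end{verbatim}\n\n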
\\section{Conclusion}\n\nWe have introduced an unsupervised neural action representation framework for neural imaging and behavioral videography data. We extended previous methods by incorporating a new swapping-based domain adaptation technique, which we have shown to be useful on three very different multimodal datasets, together with a set of domain-specific neural augmentations. Two of these datasets are publicly available. We created the third dataset, which we call MC2P, by recording video and neural data for \\textit{Drosophila melanogaster} and will release it publicly to speed up the development of self-supervised methods in neuroscience. We hope our work will help the development of effective brain machine interface and neural decoding algorithms. In future work, we plan to disentangle remaining long-term non-behavioral information that has a global effect on neural data, such as hunger or thirst, and to test our method on different neural recording modalities. As a potential negative impact, if neural data were acquired without consent, our method could be used to extract private information from it.\n\n\\section{Experiments}\n\nWe test our method on three datasets. In this section, we describe these datasets, the set of baselines against which we compare our model, and finally the quantitative comparison of all models.\n\n\\subsection{Datasets}\n\nWe ran most of our experiments on a large dataset of fly neural and behavioral recordings that we acquired and describe below. To demonstrate our method's ability to generalize, we also adapted it to run on another multimodal dataset that features neural ECoG recordings and markerless motion capture~\\cite{peterson_2021,SINGH2021109199}, as well as the well-known H36M human motion dataset~\\cite{Ionescu14a}.\n\n\\parag{MC2P:} Since there was no available neural-behavioral dataset with a rich variety of spontaneous behaviors from multiple individuals, we acquired our own dataset that we name the \\textit{Motion Capture and Two-photon Dataset (MC2P)}. We will release this dataset publicly. MC2P features data acquired from tethered behaving adult flies, \\textit{Drosophila melanogaster} \\textbf{(Fig.~\\ref{fig:data})}. It includes:\n\\begin{enumerate}[leftmargin=*]\n\n \\item Infrared video sequences of the fly acquired using six synchronized and calibrated infrared cameras forming a ring with the animal at its center. The images are $480\\times 960$ pixels in size and recorded at $100$ fps.\n\n \\item Neural activity imaging obtained from the axons of descending neurons that pass from the brain to the fly's ventral nerve cord (motor system) and drive actions. The neural images are $480\\times 736$ pixels in size and recorded at $16$ fps using a two-photon microscope \\cite{chen18} that measures calcium influx, which is a proxy for the neurons' actual firing rates.\n\n\\end{enumerate}\nWe recorded 40 animals over 364 trials, resulting in 20.7 hours of recordings with 7,480,000 behavioral images and 1,197,025 neural images. We provide additional details and examples in the Supplementary Material. We give an example video of synchronized behavioral and neural modalities in \\textbf{Supplementary Videos {\\color{pearDark} 1-2}}.\n\nTo obtain quantitative behavioral data from the video sequences, we extracted 3D poses expressed in terms of the 3D coordinates of 38 keypoints~\\cite{Gunel19a}. We provide an example of detected poses and motion capture in \\textbf{Supplementary Videos {\\color{pearDark} 3-4}}. For validation purposes, we manually annotated a subset of frames using eight behavioral labels: \\textit{forward walking}, \\textit{pushing}, \\textit{hindleg grooming}, \\textit{abdominal grooming}, \\textit{rest}, \\textit{foreleg grooming}, \\textit{antenna grooming}, and \\textit{eye grooming}. We provide an example of behavioral annotations in \\textbf{Supplementary Video {\\color{pearDark} 5}}.\n\n\\parag{ECoG dataset~\\cite{peterson_2021,SINGH2021109199}:} This dataset was recorded from epilepsy patients over a period of 7-9 days. Each patient had 90 electrodes implanted under their skull. The data comprises human neural electrocorticography (ECoG) recordings and markerless motion capture of upper-body 2D poses. The dataset is labeled to indicate periods of voluntary spontaneous motion or rest. As for two-photon images in flies, ECoG recordings show a significant domain gap across individual subjects. 
We applied our multi-modal contrastive learning approach on the ECoG and 2D pose data along with swapping augmentation. Then, we applied our across-subject benchmark, in which we perform action recognition on a new subject without known action labels.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=.49\\textwidth]{figures\/h36m_domain.png}\n \\caption{\\textbf{Domain Gap in the H3.6M dataset.} Similar to the domain gap across nervous systems, RGB images show a significant domain gap when the camera angle changes across individuals. We guide action recognition across cameras in RGB images using 3D poses and behavioral swapping.}\n \\label{fig:h36m_gap}\n \\vspace{-10pt}\n\\end{figure}\n\n\\parag{H3.6M \\cite{Ionescu14a}:} H3.6M is a multi-view motion capture dataset that is not inherently multimodal. However, to test our approach in a very different context than the other two cases, we treated the videos acquired from different camera angles as belonging to separate domains. Since the videos are tied to 3D poses, we used these two modalities and applied swapping augmentation together with multimodal contrastive learning to reduce the domain gap across individuals. Then, we evaluated the learned representations by performing action recognition on a camera angle for which we do not have action labels. This simulates the across-subject benchmark used for the MC2P dataset. For each experiment we selected three actions, which can be classified without examining large window sizes. We give additional details in the Supplementary Material.\n\n\\subsection{Baselines}\n\n\\input{fig\/accuracy}\n\nWe evaluated our method against two supervised baselines, Neural Linear and Neural MLP, which directly predict action labels from neural data using a cross-entropy loss, without any unsupervised pretraining. We also compared our approach to three regression methods that attempt to regress behavioral data from neural data, which is a common neural decoding technique. These include a recent neural decoding algorithm, BehaveNet~\\cite{behavenet}, as well as two other regression baselines with recurrent and convolutional approaches: Regression (Recurrent) and Regression (Convolution). In addition, we compared our approach to recent self-supervised representation learning methods, including SeqCLR~\\cite{eegcontrastive} and SimCLR~\\cite{simclr}. We also combined the convolutional regression-based method (Reg. (Conv)) or the self-supervised learning algorithm SimCLR with the common domain adaptation techniques Gradient Reversal Layer (GRL)~\\cite{GRL} or Maximum Mean Discrepancy (MMD)~\\cite{MMD}. This yields four domain adaptation models. Finally, we applied a recent multi-modal domain adaptation network for action recognition, MM-SADA~\\cite{munro20multi}, on the MC2P dataset. For all of these methods, we used the same backbone architecture. We describe the backbone architecture and the baseline methods in more detail in the Supplementary Material.\n\n\\input{fig\/accuracy_h36m}\n\n\\subsection{Benchmarks}\n\nSince our goal is to create useful representations of neural images in an unsupervised way, we focused on single- and across-subject action recognition. Specifically, we trained our neural decoder $f_{n}$, along with the baselines, without using any action labels. Then, freezing the neural encoder parameters, we trained a linear model on the encoded features, which is an evaluation protocol widely used in the field \\cite{simclr, Lin_2020, He_2020_CVPR, Dave2021TCLRTC}. We used either half or all of the action labels. We mention the specifics of the train-test split in the Supplementary Material.
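\n\nThis linear evaluation protocol can be sketched as follows (scikit-learn; the frozen encoder is assumed to be wrapped so that it returns NumPy feature arrays, and all names are placeholders):\n\\begin{verbatim}\nfrom sklearn.linear_model import LogisticRegression\n\ndef linear_probe(encode, X_train, y_train, X_test, y_test):\n    # encode: frozen neural encoder f_n returning (N, 128)\n    # features; only the linear classifier is trained.\n    clf = LogisticRegression(max_iter=1000)\n    clf.fit(encode(X_train), y_train)\n    return clf.score(encode(X_test), y_test)  # accuracy\n\\end{verbatim}\n\n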
\\parag{Single-Subject Action Recognition.} For each subject, we trained and tested a simple linear classifier \\textit{independently} on the learned representations to predict action labels. Here, we assume that we are given action labels for the subject we are testing on. In \\textbf{Table~\\ref{tab:accuracy}} we report aggregated results.\n\n\\parag{Across-Subject Action Recognition.} We trained linear classifiers on N-1 subjects simultaneously and tested on the left-out one. Therefore, we assume we do not have action labels for the target subject. We repeated the experiment for each individual and report the mean accuracy in \\textbf{Table~\\ref{tab:accuracy}} and \\textbf{Table~\\ref{tab:results_h36m}}.\n\n\\parag{Identity Recognition.} As a sanity check, we attempted to classify subject identity among the individuals given the learned representations. We again used a linear classifier to test the domain invariance of the learned representations. If the learned representations are domain (subject) invariant, we expect the linear classifier to be unable to detect the domain of the representations, resulting in lower identity recognition accuracy. Identity recognition results are reported in \\textbf{Table~\\ref{tab:accuracy}} and \\textbf{Table~\\ref{tab:results_h36m}}.\n\n\\subsection{Results}\n\\vspace{10pt}\n\n\\parag{Single-Subject Action Recognition on MC2P.} For the single-subject baseline, joint modeling of a common latent space outperformed supervised models by a large margin, even when the linear classifier was trained on the action labels of the tested animal. Our swapping and neural augmentations resulted in an accuracy boost when compared with a simple contrastive learning method, SimCLR~\\cite{simclr}. Although regression-based methods can extract behavioral information from the neural data, they do not produce discriminative features. When combined with the proposed set of augmentations, our method performs better than previous neural decoding models because it extracts richer features thanks to a better unsupervised pretraining step. Domain adaptation techniques do not result in a significant difference in the single-subject baseline; the domain gap within a single animal is smaller than that between animals.\n\n\\parag{Across-Subject Action Recognition on MC2P.} We show that supervised models do not generalize across animals, because each nervous system is unique. Before using the proposed augmentations, the contrastive method SimCLR performed worse than convolutional and recurrent regression-based methods, including the current state-of-the-art BehaveNet~\\cite{behavenet}. This was due to the large domain gap between animals in the latent embeddings \\textbf{(Fig.~\\ref{fig:domain}C)}. Although the domain adaptation methods MMD (Maximum Mean Discrepancy) and GRL (Gradient Reversal Layer) close the domain gap when used with contrastive learning, they do not position semantically similar points near one another \\textbf{(Fig.~\\ref{fig:domain}D)}. As a result, domain adaptation-based methods do not yield significant improvements in the across-subject action recognition task. Although regression-based methods suffer less from the domain gap problem, they do not produce representations that are as discriminative as those of contrastive learning-based methods. 
Our proposed set of augmentations closes the domain gap while improving action recognition for self-supervised methods on both single-subject and across-subject tasks \\textbf{(Fig.~\\ref{fig:domain}E)}.\n\n\\parag{Action Recognition on ECoG Motion vs Rest.} As shown at the bottom of \\textbf{Table~\\ref{tab:results_h36m}}, our approach significantly lowers the identity information in ECoG embeddings, while significantly increasing across-subject action recognition accuracy compared to the regression and multi-modal SimCLR baselines. The low accuracy of the supervised baseline confirms a strong domain gap across individuals. Note that uni-modal contrastive modeling of ECoG recordings (SimCLR (ECoG)) does not yield strong across-subject action classification accuracy because uni-modal modeling cannot deal with the large domain gap in the learned representations.\n\n\\parag{Human Action Recognition on H3.6M.} We observe in \\textbf{Table~\\ref{tab:results_h36m}} that, similar to the previous datasets, the low performance of the supervised baseline and of the uni-modal modeling of RGB images (SimCLR (RGB)) is due to the domain gap in the across-subject benchmark. This observation is confirmed by the high identity recognition accuracy of these models. Our swapping augmentation strongly improves performance compared to the regression and multi-modal contrastive (SimCLR) baselines. As with the previous datasets, uni-modal contrastive training cannot generalize across subjects, due to the large domain gap.\n\n\\subsection{Ablation Study}\n\n\\input{fig\/ablation}\n\nWe compare the individual contributions of the different augmentations proposed in our method. We report these results in \\textbf{Table~\\ref{tab:ablation}}. We observe that all augmentations contribute to the single- and across-subject benchmarks. Our swapping augmentation strongly affects the across-subject benchmark, while at the same time greatly decreasing the domain gap, as quantified by the identity recognition result. The other augmentations have minimal effects on the domain gap, as they only slightly affect the identity recognition benchmark.\n\n\\section{Human Actions}\n\nWe apply multi-modal contrastive learning on windows of time series and RGB videos. We make the analogy that, similar to the neural data, RGB videos from different view angles show a domain gap although they are tied to the same 3D pose. Therefore, to test our method, we select three individuals with different camera angles where all actors perform the same three actions. We test domain adaptation using the across-subject benchmark, where we train our linear action classifier on the labels of one individual and test it on the others. We repeat the same experiment three times and report the mean results. We show the results of across-subject action recognition and identity recognition in \\textbf{Table~\\ref{tab:results_h36m}}.\n\nFor preprocessing, we remove global translation and rotation from the 3D poses by subtracting the root joint and then rotating the skeletons to point in the same direction. We use a ResNet-18 for the RGB encoder and a four-layer convolutional network for the 3D pose encoder. We use S1, S5 and S7 and all their behaviors for training, except for the three behaviors that we use for testing. For each reported number, we use three-fold cross-validation.
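\n\nThe pose preprocessing can be sketched as follows (NumPy; the joint indices are hypothetical placeholders):\n\\begin{verbatim}\nimport numpy as np\n\ndef normalize_pose(pose, root=0, hip_l=1, hip_r=2):\n    # pose: (J, 3). Subtract the root joint, then rotate about\n    # the vertical axis so the hips face a fixed direction.\n    p = pose - pose[root]\n    d = p[hip_r] - p[hip_l]\n    theta = np.arctan2(d[1], d[0])\n    c, s = np.cos(-theta), np.sin(-theta)\n    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])\n    return p @ R.T\n\\end{verbatim}\n\n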
\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=1\\textwidth]{figures\/Fig8-SwappingStatistics.pdf}\n \\caption{\\textbf{Changing window size for the swapping of the behavioral modality on the MC2P dataset.} Statistics of the behavioral modality as a function of the swapping window size. Decreasing the window size increases clustering homogeneity and Maximum Mean Discrepancy (MMD) when applied to the raw data, suggesting higher-quality swapping for individual poses instead of sequences of poses. Swapping augmentation with a smaller window size lowers the degree of perturbation, quantified by the Mean Squared Distance. However, identity recognition accuracy does not change considerably when swapping is done with different window sizes.}\n \\label{fig:swapping_statistics}\n\\end{figure*}\n\n\\section{Dataset Details}\n\n\\paragraph{Dataset Collection.} Here we provide a more detailed technical explanation of the experimental dataset. Transgenic female \\textit{Drosophila melanogaster} flies aged 2-4 days post-eclosion were selected for experiments. They were raised on a 12h:12h day:night light cycle and recorded in either the morning or late afternoon Zeitgeber time. Flies expressed both GCaMP6s and tdTomato in all brain neurons as delineated by otd-Gal4 expression,\n\n{ \\hspace{-15pt} \\footnotesize\n($;\\frac{Otd-nls:FLPo (attP40)}{P{20XUAS-IVS-GCaMP6s}attP40};\\frac{R57C10-GAL4, tub>GAL80>}{P{w[+mC]=UAS-tdTom.S}3}.$)\n}\nThe fluorescence of GCaMP6s proteins within a neuron increases when they bind calcium, and there is an increase in intracellular calcium when neurons become active and fire action potentials. Due to the relatively slow release (as opposed to binding) of calcium by GCaMP6s molecules, the signal decays exponentially. We also expressed the red fluorescent protein, tdTomato, in the same neurons as an anatomical fiducial to be used for neural data registration. This compensates for image deformations and translations during animal movements. We recorded neural data using a two-photon microscope (ThorLabs, Germany; Bergamo2) by scanning the cervical connective. This neural tissue serves as a conduit between the brain and ventral nerve cord (VNC) \\cite{chen2018imaging}. The brain-only GCaMP6s expression pattern, in combination with restricting recordings to the cervical connective, allowed us to record a large population of descending neuron axons while also being certain that none of the axons arose from ascending neurons in the VNC. Because descending neurons are expected to drive ongoing actions \\cite{Cande}, this imaging approach has the added benefit of ensuring that the imaged cells should, in principle, relate to the paired behavioral data.\n\n\\vspace{-10pt}\n\\paragraph{Neural Pre-Processing.} For neural data processing, data were synchronized using a custom Python package \\cite{aymanns21utils2p}. We then estimated the motion of the neurons using images acquired on the red (tdTomato) PMT channel. The first image of the first trial was selected as a reference frame to which all other frames were registered. For image registration, we estimated the vector field describing the motion between two frames. 
To do this, we numerically solved the optimization problem in \\textbf{Eq.~\\ref{eq:opticalflow}}, where $w$ is the motion field, $\\mathcal{I}_t$ is the image being transformed, $\\mathcal{I}_r$ is the reference image, and $\\Omega$ is the set of all pixel coordinates \\cite{chen2018imaging, aymanns21ofco}.\n\\begin{align}\n \\label{eq:opticalflow}\n \\hat{w} = \\arg\\min_w \\sum_{x\\in\\Omega} \\|\\mathcal{I}_t(x + w(x)) &- \\mathcal{I}_r(x)\\|^{2}_{2} \\\\ &+ \\lambda \\sum_{x\\in\\Omega} \\| \\nabla w(x) \\|^2_2 \\nonumber\n\\end{align}\nThe smoothness-promoting parameter $\\lambda$ was empirically set to 800. We then applied $\\hat{w}$ to the green PMT channel (GCaMP6s). To denoise the motion-corrected green signal, we trained a DeepInterpolation network \\cite{deepinterpolation} for nine epochs for each animal and applied it to the rest of the frames. For training, we only used the first 100 frames of each trial, and we used the first and last trials as validation data. The batch size was set to 20 and we used 30 frames before and after the current frame as input. In order to have a direct correlation between pixel intensity and neuronal activity, we applied the transformation $\\frac{F - F_0}{F_0} \\times 100$ to all neural images, where $F_0$ is the baseline fluorescence in the absence of neural activity. To estimate $F_0$, we used the pixel-wise minimum of a moving average of 15 frames.\n\n\\paragraph{Neural Fluorescence Signal Decay.} The formal relationship between the neural image $\\mathbf{n}_t$ and the neural activity (underlying neural firings) $\\mathbf{s}_t$ can be modeled as a first-order autoregressive process\n$$\\mathbf{n}_t=\\gamma \\mathbf{n}_{t-1}+ \\alpha \\mathbf{s}_t,$$\nwhere $\\mathbf{s}_t$ is a binary variable indicating an event at time $t$ (e.g., the neuron firing an action potential). The amplitudes $\\gamma$ and $\\alpha$ determine the rate at which the signal decays and the initial response to an event, respectively. In general, $0 < \\gamma < 1$, so the information pertaining to $\\mathbf{s}_t$ that is contained in $\\mathbf{n}_t$ decays exponentially over time. A single neural image $\\mathbf{n}_t$ therefore includes decaying information from previous neural activity, and hence carries information from previous behaviors. For more detailed information on calcium dynamics, see \\cite{pnevmatikakis2013bayesian, Rupprecht21}. Assuming no neural firings, $\\mathbf{s}_{t}=0$, $\\mathbf{n}_{t}$ is given by $\\mathbf{n}_{t} = \\gamma^{t} \\mathbf{n}_{0}$. Therefore, we define the calcium kernel $\\mathcal{K}$ as $\\mathcal{K}_t = \\gamma^{t}$.
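\n\nThe $\\Delta F\/F$ computation described above can be sketched as follows (NumPy; a straightforward, unoptimized implementation):\n\\begin{verbatim}\nimport numpy as np\n\ndef dff(F, win=15):\n    # F: (T, H, W) motion-corrected, denoised fluorescence.\n    k = np.ones(win) \/ win\n    smooth = np.apply_along_axis(\n        lambda x: np.convolve(x, k, mode=\"same\"), 0, F)\n    F0 = smooth.min(axis=0)   # pixel-wise baseline\n    return (F - F0) \/ F0 * 100.0\n\\end{verbatim}\n\n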
\\paragraph{Dataset Analysis.} We show the distribution of annotations across 7 animals and the distribution of action durations in \\textbf{Supplementary Fig.~\\ref{fig:dataset}}. Unlike scripted actions in human datasets, the animal behavior is spontaneous and therefore does not follow a uniform distribution. The average duration can also change across behaviors. Walking is the most common behavior and lasts longer than other behaviors. We visualize the correlation between the neural and behavioral energy in \\textbf{Supplementary Fig.~\\ref{fig:energy}}. We quantify the behavioral energy as the Euclidean distance between consecutive, vectorized 3D poses. Similarly, for the neural energy, we calculate the Euclidean distance between consecutive images. To be able to compare corresponding energies, we first synchronize the neural and behavioral modalities. We then smooth the corresponding time series using Gaussian convolution with a kernel size of 11 frames. We observe that there is a strong correlation between the modalities, suggesting large mutual information.\n\n\\section{Method Details}\n\\paragraph{Augmentations.} Aside from the augmentations mentioned before, for the neural image transformation family $\\mathcal{T}_n$, we used a sequential application of Poisson noise, Gaussian blur, and color jittering. In contrast with recent work on contrastive visual representation learning, we only applied brightness and contrast adjustments in color jittering because neural images have a single channel that measures calcium indicator fluorescence intensity. We did not apply any cropping augmentation, such as cutout, because action representation is often highly localized and non-redundant (e.g., grooming is associated with the activity of a small set of neurons and thus with only a small number of pixels). We applied the same augmentations to each frame in a single sample of neural data.\n\nFor the behavior transformation family $\\mathcal{T}_b$, we used a sequential application of scaling, shear, and random temporal and spatial dropping. We did not apply rotation and translation augmentations because the animals were tethered (i.e., restrained from moving freely), and their direction and absolute location were fixed throughout the experiment. We did not use time warping because neural and behavioral information are temporally linked (e.g., fast walking has different neural representations than slow walking).\n\n\\paragraph{Swapping Parameters.} We analyze the effect of swapping individual poses, instead of whole motion sequences, in \\textbf{Fig.~\\ref{fig:swapping_statistics}}. We compare the distribution similarity across individuals when tested on single poses and on windows of poses. We observe that the distribution similarity across individuals in the behavioral modality is much larger at the pose level than for whole motion sequences, therefore making it easier to swap behavioral data at the pose level. We quantify the distribution similarity using the MMD (Maximum Mean Discrepancy) and homogeneity metrics. Similarly, swapping individual poses decreases the overall change in the motion sequence, as quantified by the Mean Squared Distance. Yet, the degree to which identity information is hidden does not strongly correlate with the swapping window size. Overall, this suggests that swapping at the pose level is better than swapping whole motion sequences.\n\n\\paragraph{Implementation Details:} For all methods, we initialized the weights of the networks randomly unless otherwise specified. To keep the experiments consistent, we always paired $32$ frames of neural data with $8$ frames of behavioral data. For the neural data, we used a larger time window because the timescale during which dynamic changes occur is smaller. For the paired modalities, we considered data synchronized if their center frames had the same timestamp. We trained contrastive methods for $200$ epochs and set the temperature value $\\tau$ to $0.1$. We set the output dimension of $\\mathbf{z}_b$ and $\\mathbf{z}_n$ to $128$. We used a cosine training schedule with three epochs of warm-up. For non-contrastive methods, we trained for $200$ epochs with a learning rate of $1e-4$ and a weight decay of $1e-5$, using the Adam optimizer \\cite{adam}. 
We ran all experiments using an Intel Core i9-7900X CPU, 32 GB of DDR4 RAM, and a GeForce GTX 1080. Training a single SimCLR network for 200 epochs took 12 hours. To create train and test splits, we removed two trials from each animal and used them only for testing. We used the architectures shown in \\textbf{Supplementary Table {\\color{pearDark} 1}} for the neural image and behavioral pose encoders. Each layer except the final fully-connected layer was followed by Batch Normalization and a ReLU activation function \\cite{batchnorm}. For the self-attention mechanism in the behavioral encoder \\textbf{(Supplementary Table~{\\color{pearDark} 1})}, we implement Bahdanau attention~\\cite{bahdanau}. Given the set of intermediate behavioral representations $S \\in \\mathbb{R} ^{T \\times D}$, we first calculate\n$$\n\\mathbf{r}=W_{2} \\tanh \\left(W_{1} S^{\\top}\\right) \\quad \\text { and } \\quad \\mathbf{a}_{i}=-\\log \\left(\\frac{\\exp \\left(\\mathbf{r}_{i}\\right)}{\\sum_{j} \\exp \\left(\\mathbf{r}_{j}\\right)}\\right),\n$$\nwhere $W_{1}$ and $W_{2}$ are matrices of shape $\\mathbb{R}^{12\\times D}$ and $\\mathbb{R}^{1\\times 12}$, respectively, and $\\mathbf{a}_{i}$ is the score assigned to the $i$-th pose in the motion sequence. The final representation is then given by $\\sum_{i=1}^{T} \\mathbf{a}_i S_{i}$.
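\n\nIn code, this attention pooling can be sketched as follows (PyTorch; it follows the equations above as written, with names of our own choosing):\n\\begin{verbatim}\nimport torch\n\ndef attention_pool(S, W1, W2):\n    # S: (T, D) features; W1: (12, D); W2: (1, 12).\n    r = W2 @ torch.tanh(W1 @ S.t())              # (1, T) scores\n    a = -torch.log_softmax(r, dim=1).squeeze(0)  # weights a_i\n    return a @ S                                 # sum_i a_i S_i\n\\end{verbatim}\n\n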
\n\n\\begin{table}[t]\n\\setlength{\\tabcolsep}{2pt}\n\n\n\n\n\\caption*{{\\bf (a)} First part of the Neural Encoder $f_n$}\n\\vspace{-10pt}\n\\scriptsize\n\\begin{center}\n\\begin{tabular}[t]{ l r c c r }\n \\toprule\n Layer & \\# filters & K & S & Output \\\\ \n \\midrule\n input & 1 & - & - & $T\\times 128 \\times 128 $ \\\\ \n conv1 & 2 & (3,3) & (1,1) & $T\\times 128 \\times 128 $ \\\\ \n mp2 & - & (2,2) & (2,2) & $T\\times 64 \\times 64 $ \\\\ \n conv3 & 4 & (3,3) & (1,1) & $T\\times 64 \\times 64 $ \\\\ \n mp4 & - & (2,2) & (2,2) & $T\\times 32 \\times 32 $ \\\\ \n conv5 & 8 & (3,3) & (1,1) & $T\\times 32 \\times 32 $ \\\\ \n mp6 & - & (2,2) & (2,2) & $T\\times 16 \\times 16 $ \\\\\n conv7 & 16 & (3,3) & (1,1) & $T\\times 16 \\times 16 $ \\\\\n mp8 & - & (2,2) & (2,2) & $T\\times 8 \\times 8 $ \\\\\n conv9 & 32 & (3,3) & (1,1) & $T\\times 8 \\times 8 $ \\\\\n mp10 & - & (2,2) & (2,2) & $T\\times 4 \\times 4 $ \\\\\n conv11 & 64 & (3,3) & (1,1) & $T\\times 4 \\times 4 $ \\\\\n mp12 & - & (2,2) & (2,2) & $T\\times 2 \\times 2 $ \\\\\n fc13 & 128 & (1,1) & (1,1) & $T\\times 1 \\times 1 $ \\\\\n fc14 & 128 & (1,1) & (1,1) & $T\\times 1 \\times 1$ \\\\\n \\bottomrule\n\\end{tabular}\n\n\n\n\n\\vspace{10pt}\n\\caption*{{\\bf (a)} Second part of the Neural Encoder $f_n$}\n\\scriptsize\n\\begin{tabular}[t]{ l r r r r r }\n \\toprule\n Layer & \\# filters & K & S & Output \\\\ \n \\midrule\ninput & 60 & - & - & $T\\times 128 $ \\\\ \nconv1 & 64 & (3) & (1) & $T \\times 128 $ \\\\ \nconv2 & 80 & (3) & (1) & $T \\times 128 $ \\\\ \nmp2 & - & (2) & (2) & $T \/ 2 \\times 128 $ \\\\ \nconv2 & 96 & (3) & (1) & $T \/ 2 \\times 128 $ \\\\ \nconv2 & 112 & (3) & (1) & $T \/ 2 \\times 128 $ \\\\ \nconv2 & 128 & (3) & (1) & $T \/ 2 \\times 128 $ \\\\ \nattention6 & - & (1) & (1) & $1 \\times 128 $ \\\\\nfc7 & 128 & (1) & (1) & $1 \\times 128$ \\\\\n \\bottomrule\n\\end{tabular}\n\n\n\\vspace{10pt}\n\\caption*{{\\bf (a)} Behavioral Encoder $f_b$}\n\\begin{tabular}[t]{ l r r r r r }\n \\toprule\n Layer & \\# filters & K & S & Output \\\\ \n \\midrule\ninput & 60 & - & - & $T\\times 60 $ \\\\ \nconv1 & 64 & (3) & (1) & $T \\times 64 $ \\\\ \nconv2 & 80 & (3) & (1) & $T \\times 80 $ \\\\ \nmp2 & - & (2) & (2) & $T \/ 2 \\times 80 $ \\\\ \nconv2 & 96 & (3) & (1) & $T \/ 2 \\times 96 $ \\\\ \nconv2 & 112 & (3) & (1) & $T \/ 2 \\times 112 $ \\\\ \nconv2 & 128 & (3) & (1) & $T \/ 2 \\times 128 $ \\\\ \nattention6 & - & (1) & (1) & $1 \\times 128 $ \\\\\nfc7 & 128 & (1) & (1) & $1 \\times 128$ \\\\\n \\bottomrule\n\\end{tabular}\n\n\\end{center}\n\\normalsize\n\\normalsize\n\n\\vspace{7pt}\n\\caption{\\textbf{Architecture details.} Shown are half of the neural encoder $f_n$ and behavior encoder $f_b$ functions. How these encoders are used is shown in \\textbf{Fig.~{\\color{pearDark}3}}. Neural encoder $f_n$ is followed by 1D convolutions similar to the behavioral encoder $f_b$, by replacing the number of filters. Both encoders produce $128$ dimensional output, while first half of the neural encoder do not downsample on the temporal axis. \\textit{mp} denotes a max-pooling layer. Batch Normalization and ReLU activation are added after every convolutional layer. 
}\n\n\\label{table:encoder}\n\\end{table}\n\n\\parag{BehaveNet \\cite{behavenet}:} This uses a discrete autoregressive hidden Markov model (ARHMM) to decompose 3D motion information into discrete \"behavioral syllables.\" As in the regression baseline, the neural information is used to predict the posterior probability of observing each discrete syllable. Unlike the original method, we used 3D poses instead of RGB videos as targets. We skipped compressing the behavioral data using a convolutional autoencoder because, unlike RGB videos, 3D poses are already low-dimensional.\n\n\\parag{SimCLR \\cite{simclr}:} We trained the original SimCLR module without the calcium imaging and swapping augmentations. As in our approach, we took the features before the projection layer as the final representation.\n\n\\parag{Gradient Reversal Layer (GRL) \\cite{GRL}:} Together with the contrastive loss, we trained a two-layer MLP domain discriminator per modality, $D_{b}$ and $D_{n}$, which estimate the domain of the behavioral and neural representations. The discriminators were trained by minimizing\n\\begin{equation}\n\\mathcal{L}_{D}=\\sum_{x \\in\\{\\mathbf{b}, \\mathbf{n}\\}}-d \\log \\left(D_{x}\\left(f_{x}(x)\\right)\\right),\n\\end{equation}\nwhere $d$ is the one-hot identity vector. The Gradient Reversal Layer is inserted before the projection layer. Given the reversed gradients, the neural and behavioral encoders $f_n$ and $f_{b}$ learn to fool the discriminators and output representations that are invariant across domains, hence acting as a domain adaptation module. We kept the hyperparameters of the discriminator the same as in previous work \\cite{munro20multi}. We froze the weights of the discriminator for the first 10 epochs and trained only with $\\mathcal{L}_{NCE}$. We trained the network using both loss functions, $\\mathcal{L}_{NCE} + \\lambda_{D} \\mathcal{L}_{D}$, for the remainder of training. We set the hyperparameter $\\lambda_{D}$ to $10$ empirically.\n\n\\parag{MM-SADA \\cite{munro20multi}:} A recent multi-modal domain adaptation model for action recognition that minimizes a cross-entropy loss on target labels, an adversarial loss for domain adaptation, and contrastive losses to maximize consistency between multiple modalities. As we do not assume any action labels during the contrastive training phase, we removed the cross-entropy loss.\n\n\\parag{SeqCLR \\cite{eegcontrastive}:} This approach learns a uni-modal self-supervised contrastive model. Hence, we only apply it to the neural imaging data, without using the behavioral modality. As this method was previously applied to datasets acquired with the electrocorticography (ECoG) imaging technique, we removed ECoG-specific augmentations. \\looseness-1\n\n\\parag{Maximum Mean Discrepancy (MMD) \\cite{MMD}:} We replaced the adversarial loss of the GRL baseline with a statistical test that minimizes the distributional discrepancy between different domains. Similar to previous work, we applied MMD only on the representations before the projection layer, independently for both modalities \\cite{munro20multi, kangcontrastive}. 
\parag{MM-SADA \cite{munro20multi}:} A recent multi-modal domain adaptation model for action recognition that minimizes a cross-entropy loss on target labels, an adversarial loss for domain adaptation, and contrastive losses to maximize consistency between multiple modalities. As we do not assume any action labels during the contrastive training phase, we removed the cross-entropy loss.

\parag{SeqCLR \cite{eegcontrastive}:} This approach learns a uni-modal self-supervised contrastive model. Hence, we only applied it to the neural imaging data, without using the behavioral modality. As this method was previously applied to electroencephalography (EEG) datasets, we removed the EEG-specific augmentations. \looseness-1

\parag{Maximum Mean Discrepancy (MMD) \cite{MMD}:} We replaced the adversarial loss of the GRL baseline with a statistical test that minimizes the distributional discrepancy between domains. Similar to previous work, we applied MMD independently on both modalities, only on the representations before the projection layer \cite{munro20multi, kangcontrastive}. As for the GRL baseline, we first trained for 10 epochs using only the contrastive loss, and then trained using the combined losses $\mathcal{L}_{NCE} + \lambda_{MMD} \mathcal{L}_{MMD}$ for the remainder. We set the hyperparameter $\lambda_{MMD}$ to $1$ empirically.

For the domain adaptation methods GRL and MMD, we reformulated the denominator of the contrastive loss function. Given a function $dom$ that returns the domain of a data sample, we replaced one side of $\mathcal{L}_{NCE}$ in Eq.~\ref{eq:nce} with
\begin{equation}
\log \frac{\exp \left(\left\langle\mathbf{z}^{i}_{b}, \mathbf{z}^{i}_{n}\right\rangle / \tau\right)}{\sum_{k=1}^{N} \mathbf{1}_{[dom(i) = dom(k)]} \exp \left(\left\langle\mathbf{z}^{i}_{b}, \mathbf{z}^{k}_{n}\right\rangle / \tau\right)},
\end{equation}
where the selective negative sampling prevents the formation of trivial negative pairs across domains, therefore making it easier to merge multiple domains. Negative pairs formed during contrastive learning push inter-domain pairs apart, whereas domain adaptation methods try to merge multiple domains to close the domain gap. We found that training with the contrastive and domain adaptation losses together could be quite unstable unless the above changes were made to the contrastive loss function.
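The masked denominator above can be implemented compactly. Below is a minimal PyTorch sketch of one side of the modified $\mathcal{L}_{NCE}$, assuming L2-normalized embeddings and integer domain labels; the function name and signature are our own.

\begin{verbatim}
import torch

def domain_masked_infonce(z_b, z_n, domains, tau=0.1):
    """One side of the modified InfoNCE loss: negatives are
    drawn only from samples of the same domain (animal), so the
    loss does not push apart inter-domain pairs that the domain
    adaptation terms are trying to merge.

    z_b, z_n: (N, d) L2-normalized behavioral/neural embeddings
    domains:  (N,) integer domain labels, dom(i)
    """
    sim = z_b @ z_n.t() / tau                 # (N, N) similarities
    # Indicator 1[dom(i) = dom(k)]: mask out cross-domain negatives.
    same_dom = domains.unsqueeze(0) == domains.unsqueeze(1)
    sim = sim.masked_fill(~same_dom, float('-inf'))
    # Diagonal entries are the positive pairs (never masked, since
    # dom(i) = dom(i)); logsumexp gives the masked denominator.
    log_prob = sim.diag() - torch.logsumexp(sim, dim=1)
    return -log_prob.mean()
\end{verbatim}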
\begin{figure}[t]
  \centering
  \includegraphics[width=.45\textwidth]{figures/Fig4-MC2P_Statistics.png}
  \caption{\textbf{Motion Capture and two-photon dataset statistics.} Visualizing \textbf{(A)} the number of annotations per animal and \textbf{(B)} the distribution of the durations of each behavior across animals. Unlike scripted human behaviors, animal behaviors occur spontaneously. The total number of behaviors and their durations do not follow a uniform distribution, which makes them harder to model.}
  \label{fig:dataset}
\end{figure}

\begin{figure*}[t]
  \centering
  \includegraphics[width=1\textwidth]{figures/energy.pdf}
  \includegraphics[width=1\textwidth]{figures/energy2.pdf}
  \includegraphics[width=1\textwidth]{figures/energy3.pdf}
  \caption{\textbf{Visualizing the temporal correlation between behavioral and neural energies in multiple animals.} The behavioral and neural energies are calculated as the normalized distances between consecutive frames. The multi-modal energies show a similar temporal pattern. The slower decay of the neural energy is due to the calcium dynamics.}
  \label{fig:energy}
\end{figure*}

\newpage
\section{Supplementary Videos}
\label{sup:video}
\paragraph{Motion Capture and Two-Photon (MC2P) Dataset.} The following videos are sample behavioral-neural recordings from two different flies. The videos show \textbf{(left)} the raw behavioral RGB video together with \textbf{(right)} the registered and denoised neural images at their original resolutions. The behavioral video is resampled and synchronized with the neural data. The colorbar indicates normalized relative intensity values. The calculation of $\Delta F / F$ is explained in the Dataset Collection section. \\

\noindent\textbf{Video 1:} \url{https://drive.google.com/file/d/1Cepy5xjLj4XiQUITY_yKKu2B4WKdl6nx}

\noindent\textbf{Video 2:}
\url{https://drive.google.com/file/d/1OSszc_fMR2Ol2WkUdj1E4u58rFVaMr6E}

\paragraph{Action Label Annotations.} Sample behavioral recordings from multiple animals using a single camera. Shown are eight different action labels: \textit{forward walking}, \textit{pushing}, \textit{hindleg grooming}, \textit{abdominal grooming}, \textit{foreleg grooming}, \textit{antennal grooming}, \textit{eye grooming}, and \textit{resting}. The videos are temporally down-sampled. Animals and labels are randomly sampled. \\

\noindent\textbf{Video 3:}
\url{https://drive.google.com/file/d/1cnwRRyDZ4crrVVxRBbx32Za-vlxSP7sy}

\paragraph{Animal Motion Capture.} Sample behavioral recordings with 2D poses from six different camera views. Each color denotes a different limb. The videos are temporally down-sampled for easier viewing. \\

\noindent\textbf{Video 4:}
\url{https://drive.google.com/file/d/1uYcL7_Zl-N0mlG1VTrg67s2Cy71wml5S}

\noindent\textbf{Video 5:}
\url{https://drive.google.com/file/d/1eMcP-Ec1c4yBQpC4CNv45py7gObmuUeA}

\section{Related Work}

\paragraph{Neural Decoding.} The ability to infer behavioral intentions from neural data, or neural decoding, is essential for the development of effective brain-machine interfaces and for closed-loop experimentation~\cite{bmi,closed-loop}. Neural decoders can be used to increase the mobility of patients with disabilities~\cite{COLLINGER201884, GANZER2020763} or neuromuscular diseases~\cite{neuromusculardisease}, and can expand our understanding of how the nervous system works~\cite{biologicalinsight}. However, most neural decoding methods require manual annotations of training data that are both tedious to acquire and error-prone~\cite{MLfordecoding, decodingannotation}.

Existing self-supervised neural decoding methods~\cite{Wang2018AJILEMP,eegselfsupervised, eegcontrastive, labelingneural} cannot be used on subjects for whom no action labels are available. A potential solution would be to use domain adaptation techniques to treat each new subject as a new domain. However, existing domain adaptation studies of neural decoding~\cite{domainadaptneuraldecoding, stablebrainmachine} have focused on gradual domain shifts associated with slow changes in sensor measurements rather than on the challenge of generalizing across individual subjects. In contrast to these methods, our approach is self-supervised and can generalize to unlabeled subjects at test time, without requiring action labels for new individuals.

\paragraph{Action Recognition.}

Contrastive learning has been extensively used on human motion sequences to perform action recognition using 3D pose data~\cite{liu2020snce, su2020predict, Lin_2020} and video-based action understanding~\cite{pan2021videomoco, Dave2021TCLRTC}. However, a barrier to using these tools in neuroscience is that the statistics of our neural data---the locations and sizes of cells---and behavioral data---body part lengths and limb ranges of motion---can be very different from animal to animal, creating a large domain gap.

In theory, there are multimodal domain adaptation methods for action recognition that could deal with this gap~\cite{munro20multi, chen2019temporal, xu2021aligning}. However, they assume supervision in the form of labeled source data. In most laboratory settings, where large amounts of data are collected and resources are limited, this is impractical.
\paragraph{Representation Learning.}

Most efforts to derive a low-dimensional representation of neural activity have used recurrent models \cite{Nassar2019TreeStructuredRS, Linderman621540, pmlr-v54-linderman17a}, variational autoencoders \cite{Gao2016LinearDN, lfads}, and dynamical systems~\cite{Multiscale, neuralrecordingtech}. To capture low-dimensional behavioral information, recent methods have enabled markerless prediction of 2D \cite{sleap, graphpose, Bala, couzin, li2020deformation} and 3D poses in animals~\cite{anipose, Gunel19a, lp3d, zebradataset}. Video and pose data have previously been used to segment and cluster temporally related behavioral information \cite{task_programming, Segalin2020, berman21, quantify, robertZebraFish20}.

By contrast, relatively few approaches have been developed to extract behavioral information from neural imaging data~\cite{behavenet, subspace, MLfordecoding}. Most have focused on identifying linear relationships between the two modalities using simple supervised methods, such as correlation analysis, generalized linear models \cite{subtrate, musall19, stringer19}, or regressive methods \cite{behavenet}. We present the first joint modeling of the behavioral and neural modalities that fully extracts behavioral information from neural data using a self-supervised learning technique.
|
|