diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzomup" "b/data_all_eng_slimpj/shuffled/split2/finalzzomup" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzomup" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction} \\label{sec:intro}\nThe Hunt for Observable Signatures of Terrestrial planetary Systems (HOSTS) on\nthe Large Binocular Telescope Interferometer (LBTI) will survey nearby stars\nfor faint exozodiacal dust (exozodi). This warm circumstellar dust, such as\nthat found in the vicinity of Earth, is generated in asteroidal collisions and\ncometary breakups. We define exozodiacal dust as sitting in the habitable\nzone, that is $\\sim$1 AU from a Solar-type star, and therefore as having a\ntemperature comparable to the Earth, i.e. $\\sim$278 K.\n\nThe goal of the LBTI HOSTS survey is to provide information on exozodi needed\nto develop a future space telescope aimed at direct detection of habitable\nzone terrestrial planets (aka. exoEarths). The habitable zone is defined by\nwhere a terrestrial planet can have long-term surface water, but its exact\nboundaries depend on planetary properties. Nevertheless, surface temperatures\nnear 300~K imply that Earth-mass exoplanets need insolations comparable to\nthat of Earth up to 1.2 times greater than Earth's\n\\citep[e.g.][]{Leconte:2013,Kopparapu:2013}. There is no single agreed upon\ndefinition of exozodi in the literature \\citep{Roberge:2012}. The HOSTS team\nhas adopted a definition that scales the surface density of the Sun's Zodiacal\ndisk at the Earth equivalent insolation distance (EEID). Thus the surface\ndensity profile expands with stellar luminosity, and allows the ``exozodi''\nlevel to be compared across stars of different types. See the companion paper\n\\citet{Kennedy:2014} for a full discussion of our adopted model. This\nreference model includes dust interior to the habitable zone all the way in to\nthe sublimation radius, so this model may test how close-in dust such as that\ndetected in near-infrared interferometric surveys\n\\citep{Absil:2013,Ertel:2014} is related to habitable zone dust.\n\nThe typical exozodi detection from space-based photometry and\nspectrophotometry, primarily with the IRS instrument on the Spitzer Space\nTelescope, is $\\sim$1000 times the Solar System's level (3 $\\sigma$),\ni.e. 1000 zodi \\citep{Beichman:2006,Lawler:2009,Chen:2014}. The best limits\nfrom the ground-based Keck interferometer are 500 zodi (3$\\sigma$)\n\\citep{Millan-Gabet:2011,Mennesson:2014}. Interferometric searches for dust in\nthe near-infrared can find dust interior to the habitable zone, at temperatures\n$\\gtrsim$500K \\citep{Absil:2013,Ertel:2014} or that comes from scattered light,\nand far-infrared and submillimeter telescopes can find dust much cooler than\nexozodi, at temperatures $<$100~K \\citep[e.g.][]{Eiroa:2013}. LBTI-HOSTS will\nbe the first survey capable of measuring exozodi known to be at habitable zone\ntemperatures and at the 10-20 zodi level (3$\\sigma$).\n\nExozodi of this brightness would be the major source of astrophysical noise\nfor a future space telescope aimed at direct imaging and spectroscopy of\nhabitable zone terrestrial planets. For example, more than about 4 zodis\nwould cause the integration time for coronagraphic imaging of an Earth-like\nplanet in the habitable zone of a G2V star at 10 pc to exceed 1 day, using a\n4m telescope and the other baseline astrophysical and mission parameters given\nin \\citet{Stark:2014}. 
Detections of warm dust will also reveal new information about planetary\nsystem architectures and evolution. Asteroid belts undergoing steady-state\ncollisions should grind themselves down in much less time than the Gyr ages of\nnearby stars. So, warm debris disks around old stars may signal late cometary\ninfluxes or stochastic collisional events\n\citep[e.g.][]{Wyatt:2007,Gaspar:2013}. While $\sim$20\% of nearby stars\nhave cold, i.e. $<$150~K, dust \citep{Eiroa:2013} and $\sim$15\% have hot,\ni.e. $>$500~K, dust \citep{Ertel:2014}, there is presently no demonstrated connection\nbetween the two. To understand the evolution of planetary\nsystems, we seek to measure the luminosity function of exozodi with age and\nstellar mass and determine whether the presence of cold outer disks correlates\nwith warm inner exozodi.\n\nLBTI is a nulling interferometer, designed to use the 8.4 m apertures of the\nLBT, fixed in a common mount at a 14.4 m separation, for the detection of\nemission from warm dust around nearby stars. LBTI works in the thermal\ninfrared, employing dual adaptive secondaries to correct atmospheric seeing\nand providing low thermal background and high Strehl images to the science\ncamera NOMIC \citep{Hinz:2008,Hinz:2012}. Closed-loop phase tracking in the\nnear infrared is used to stabilize the destructive interference of the star at\nN-band (9.8-12.4 $\mu$m) and detect flux from the resolved dust disk \citep{Defrere:2014a}. The\nseparation of the LBT mirrors at a working wavelength of 11 $\mu$m produces a\nfirst transmission peak centered at 79 mas (1 AU at 13 pc) and an inner\nworking angle (half transmission) of 39 mas (1 AU at 25 pc).\n\nTogether, observations of thermal emission from disks with LBTI and\nimages with space-based optical coronagraphs capable of probing the same\nangular scales in scattered light will measure the albedo of dust\ngrains. Albedo is one of the few available constraints on dust composition and\nthereby parent body composition for debris disks. Scattered light images of\ndust in the habitable zones of several nearby stars may be possible with a\ncoronagraph on the WFIRST-AFTA mission \citep{Spergel:2013}.\n\n\n\section{Target List Assembly and Exclusions}\n\n\subsection{Target Selection Goals}\n\nTarget selection for HOSTS is a balance between including stars that are expected targets\nof a future exoEarth mission and including stars of various types to enable the best\nunderstanding of the statistical distribution of exozodi over a range of parameters. The\ntwo approaches are complementary and together enable investigations of habitable zone dust\nproduction across a range of host stellar types.\n\nThe mission-driven approach concentrates on F, G, and K-type stars that are\nthe best targets for future direct observations of exoEarths, thereby\nproviding ``ground truth'' dust observations. The sensitivity sweet spot for\nan optical planet imager lies with G and K stars because 1) the planet-to-star\ncontrast ratio is inversely proportional to stellar luminosity and 2) the\norbital radius of the habitable zone increases as $\sqrt{L_*}$\n\footnotemark[1]. As a result, M-type stars have favorable planet-to-star contrast\nratios but habitable zones close to the stars, whereas A-type stars have poor\ncontrast ratios and habitable zones farther from the stars.\n\n\footnotetext[1]{The EEID, i.e. 
where a planet receives the same incident flux\n as Earth, defines the habitable zone (see Section \ref{sec:sunlike}). Since\n the flux at the EEID is a constant, a 1~R$_\earth$ planet there always has\n the same absolute magnitude independent of host star luminosity. However,\n the absolute magnitude of the host star decreases toward earlier spectral\n types, thus increasing the star-to-planet flux ratio. The radial\n temperature dependence of a blackbody emitter in a stellar radiation field\n can be calculated by balancing the absorbed and emitted power, i.e. by\n equating L$_*$\/4$\pi$ with 4$\sigma$r$_{\rm EEID}^2$T$_{\rm HZ}^4$. Thus, for a\n fixed temperature, as in a habitable zone, the radius at which a blackbody\n reaches that temperature is proportional to $\sqrt{L}$.}\n\n\nNot every potential target of a future exoEarth mission can be observed with\nLBTI; for one thing, many lie in the southern hemisphere and are not\nobservable from LBT on Mount Graham, AZ. Furthermore, some stars bright\nenough at visual wavelengths and therefore accessible to an exoEarth mission\nwould be too faint for LBTI to achieve good sensitivity in the limited total\nobserving time. Our goal is to design a survey that can fully inform target\nselection for a future exoEarth mission; survey results will have to be\nmodeled and then extrapolated to lower dust levels. Therefore, there must be\nobservational extensions to the mission-driven sample that will inform models\nof dust evolution and aid extrapolation.\n\nThe second approach, an LBTI sensitivity-driven approach, selects targets based\nonly on the expected LBTI exozodi sensitivities, without consideration of\nexoEarth mission constraints. This would naturally select more early-type\nstars (A stars and early F-type stars) because they are brighter, have\nhabitable zones at large separations, and have higher F$_{\rm disk}$\/F$_*$ at\nN-band (see \citet{Kennedy:2014} for details). Therefore, the results of this\ntype of survey would have to be extrapolated to later spectral type targets\nusing planet formation theory.\n\nThe brightest nearby late-F to K-type stars can satisfy both the mission and\nsensitivity-driven selection criteria, and we give a description of these in Section\n\ref{sec:sunlike}, where we show that there are 25-48 such stars, depending on LBTI sensitivity.\nWe anticipate that HOSTS will survey $\sim 50$ stars, given the amount of observing time\nallocated on LBTI, so the target selection approach followed will determine the rest of\nthe observed stars.\n\nWe lay out here the considerations that lead to the final HOSTS target\nlist. We discuss how to balance mission-driven and sensitivity considerations\nto maximize scientific return from the HOSTS project. By presenting our target\nlist in this early paper, we also hope to encourage intensive study of these stars\nwith other techniques that will eventually enhance our ability to understand the\nevolution of circumstellar dust with time.\n\n\subsection{Target Selection Constraints} \label{sec:constraints}\n\nWe started with a list of all bright, northern main sequence stars of spectral\ntypes A through M observable from LBT (declination $> -30^\circ$) by using two\ncatalogs: the Unbiased Nearby Stars (UNS) sample assembled for cold debris\ndisk studies \citep{Phillips:2010} and the Hipparcos 30 pc sample assembled\nfor exoEarth mission planning \citep{Turnbull:2012}. 
UNS is complete to about\n16 pc for K-type stars and about 45 pc for A-type.\n\nBinary stars were excluded based on both technical and scientific\ncriteria. There are two technical reasons to exclude binary stars: 1) to\nensure that the adaptive optics system can easily lock onto the target of\ninterest and 2) to ensure that flux from the companion does not contaminate\nthe area being searched for exozodi emission. We therefore excluded binary\nstars with separations $< 1\farcs5$. Some stars are known to be spectroscopic\nbinaries (SBs) but without well-measured orbits. We excluded all such SBs\nbecause their maximum separations might fall within an angular range of tens to\nhundreds of mas and provide confusing non-null signals. The main sources of\ninformation about multiplicity were the Washington Visual Double Star Catalog\n\citep{Mason:2013} and the 9th Catalogue of Spectroscopic Binary Orbits\n\citep{Pourbaix:2009}.\n\nWe further excluded stars with flux densities $<$1 Jy in the broad N-band\n($\sim$11 $\mu$m) filter used for the HOSTS survey. We anticipate that the\nLBTI null would be degraded for fainter stars. To estimate the brightness of\nour targets, we fit Kurucz stellar models to available photometry at BVJHK\nplus WISE bands W3 (12 $\mu$m) and W4 (22 $\mu$m) and then used the model to\npredict the NOMIC flux density.\n\nWe also only considered stars with inner habitable zone distances probed by\nthe LBTI transmission pattern, i.e. zones larger than about 60 mas. An exozodi\ndisk smaller than this has low transmission efficiency, i.e. it is nulled\nalong with the star because the LBTI transmission pattern peak is at $\approx$79 mas.\nA general result of our brightness cuts is that our target\nstars are all within 28 pc. Therefore, our angular criterion, above, excluded\nbinaries with separations $\lesssim$50 AU. Furthermore, studies of\nprotoplanetary disk evolution indicate that stellar companions within 100~AU\nof the primary stars cause lower disk masses and faster disk dissipation,\npossibly inhibiting planet formation\n\citep[e.g.][]{Osterloh:1995,Jensen:1996,Andrews:2005,Harris:2012}. We therefore also excluded physical\nbinaries with separations $<$100~AU. Although it would be interesting to study\nthe effect of binary separation on habitable zone dust, we emphasized the\nformation of an overall sample with as few selection effects as possible and\neschewed inclusion of subsamples too small to provide statistically meaningful\nresults.\n\nFinally, we excluded giant stars (luminosity class III), i.e. stars that\nappear significantly brighter than the main sequence. LBTI would probe regions\naround these stars that are significantly larger than the habitable zones that\nexisted when the stars resided on the main sequence, and these regions are thus\nnot directly comparable to the rest of the sample. Table\n\ref{tab:binary} lists the targets excluded for binarity and location above\nthe main sequence.\n\n\subsection{Target List Categories}\n\nWe categorize the targets that meet the above criteria into the two samples\ndescribed below:\n\nSection \ref{sec:sunlike}: The Sun-like sample includes targets with spectral types\nof F5 and later. These 48 stars are potential targets for a future exoEarth\nmission. Of these, 25 have flux density $>$2 Jy at N-band.\n\n\nSection \ref{sec:bright}: The sensitivity-driven, i.e. early-type star, sample\nincludes targets with spectral types between A0 and F4. These 20 stars provide\nadditional information on the exozodi luminosity function. 
Of these, 15 have\nflux density $>$2 Jy at N-band.\n\nTogether, there are 68 sources in the above categories from which the\noptimal HOSTS survey can be created.\n\n\n\section{Sun-like Sample} \label{sec:sunlike} \n\nOur objective for this LBTI-HOSTS sub-sample is to observe stars that are probable targets\nfor a future exoEarth mission, based on current knowledge, and stars that inform our\nunderstanding of the typical exozodi levels around similar stars. These observations will\nprovide dust measurements (or upper limits) for specific stars. They will also supply a\nlarger sample of solar-type stars with which to model the distribution of exozodi levels\nfor Sun-like stars. This will enable evaluation of the suitability, as exoEarth mission\ntargets, of individual stars that cannot be observed with LBTI (because, for example,\nthey are too faint or too far south). Here, we define ``Sun-like'' as having spectral\ntypes later than or equal to F5. The coolest star that passed all our cuts is spectral\ntype K8. The majority of high photometric quality targets for the Kepler mission's\nexoplanet search are also of spectral types mid-K to mid-F (4500-7000 K)\n\citep{Christiansen:2012}.\n\nThe great technical challenge for direct exoEarth observations is to suppress\nthe central starlight tremendously yet allow light from extremely faint\nplanets to be detected at small angular separations. Therefore, the best\nsystems to search for exoEarths are those with widely separated habitable\nzones (HZs) and with high planet-to-star flux ratios. A full discussion of\nall the considerations that go into determining a star's habitable zone\nboundaries appears in \citet{Kopparapu:2013}. However, to first order, the\nlocation of a star's HZ is set by how much light would hit an Earth-twin\nplanet. Therefore, the Earth-equivalent insolation distance (EEID)\napproximately scales in the following way\n\begin{equation}\nr_\mathrm{EEID} \: \approx \: r_\oplus \times \left( L_\star \, \/ \, L_\sun \right)^{1\/2 } \; ,\n\end{equation}\nwhere $L_\star$ is the bolometric luminosity and $r_\oplus$ is the Earth-Sun distance.\n\nFollowing \citet{Turnbull:2012}, the planet-to-star reflected flux ratio at\nvisible wavelengths for\nan Earth-twin planet is approximately\n\begin{equation}\n\left(F_p \, \/ \, F_\star \right)_\mathrm{HZ} \: \approx \: (1.2 \times 10^{-10}) \, \/ \, (L_\star\/L_\odot) \; .\n\end{equation}\nSo as the stellar luminosity increases, the HZ moves\noutwards, increasing the separation of an exoEarth from its host star (good).\nHowever, simultaneously the planet-to-star flux ratio decreases, resulting in\nlonger exposure times to reach a given detection limit (bad).\n\n
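To make the competition between these two scalings concrete, here is a minimal Python sketch of Eqs. (1) and (2); the luminosities are arbitrary illustrative values, not survey targets:\n\begin{verbatim}\nimport numpy as np\n\n# Representative luminosities (L_sun), roughly M, K, G, F and A dwarfs\nL = np.array([0.05, 0.3, 1.0, 3.0, 20.0])\n\nr_eeid = np.sqrt(L)           # Eq. (1): EEID in AU\ncontrast = 1.2e-10 / L        # Eq. (2): Earth-twin flux ratio in the HZ\n\nfor Li, ri, ci in zip(L, r_eeid, contrast):\n    print(f'L = {Li:5.2f} L_sun: r_EEID = {ri:4.2f} AU, '\n          f'Fp/F* = {ci:.1e}')\n\end{verbatim}\n\n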
These two competing effects largely dictate which stars are the best for\ndirect observations of exoEarths. The current consensus is that starlight\nsuppression technologies working at optical wavelengths (e.g.\ internal\ncoronagraphs) are the most advanced \citep{Greene:2013}. For these mission\nconcepts, the best targets are nearby stars of mid-F, G, and K spectral types.\nIn general, for a given optical coronagraphic telescope aperture, the lower a\nstar system's exozodi noise contribution, the earlier the spectral types\nthat can be searched with high completeness. An interferometric\nmission, such as an array of 4$\times$4~m free-flying mid-infrared telescopes,\nprovides somewhat different completeness as a function of stellar\nluminosity. For the HOSTS survey, we make no assumptions about exoEarth\ndetection technology. If we keep the ratio of F:G:K stars fixed, the best\ntargets for an interferometric telescope agree well with the coronagraphic\ntelescope list \citep{Defrere:2010}.\n\nWe found 48 stars that met all of our selection criteria, and some of their\nbasic properties are listed\nin Table \ref{tab:sunlike} and shown in Figures \ref{fig:cmd} and\n\ref{fig:eeid}. Our current working knowledge of the LBTI system is that the\nnull quality will not depend on stellar brightness for stars brighter than 2\nJy at N-band; there are 25 such bright stars on our list. We expect that for\nstars fainter than 2 Jy, the degradation in the null will be a gentle function\nof brightness, but this remains to be tested.\n\nThe mean distance to these sample stars is 11.4~pc; the closest star is at\n3.2~pc and the most distant at 21.4~pc. The presence\/absence of a disk was\nnot a criterion for selecting stars in the Sun-like star sample. What is known\nabout the presence or absence of hot\/warm (potentially N-band detectable) and cold (far-IR\ndetectable) circumstellar disks is noted in the Table. Each dust survey has\nsomewhat different limits; the reader should consult the original papers for\ndetails.\n\n\begin{figure}\n\centering\epsfig{file=paper_MV_vs_B-V.ps,width=2.25in,clip=,angle=90}\n\caption{Color-magnitude (absolute V magnitude versus B-V color) plot of the\n  complete sample. Sun-like stars, defined as spectral type F5 and later, are\n  shown with green filled circles. Early-type stars, defined as spectral types F4 and\n  earlier, are shown with blue filled triangles. The black line shows the MK main sequence\n  as given in \citet{DrillingLandolt}. \label{fig:cmd}}\n\end{figure}\n\n\begin{figure}[htb]\n\centering\epsfig{file=paper_dist_vs_lum.ps,width=2.25in,clip=,angle=90}\n\caption{Distance versus luminosity for the complete sample. The black line shows an Earth\n  equivalent insolation distance (EEID) of 60 mas. Stars that fall above this line were\n  excluded because their exozodi would largely fit within the first null, and therefore\n  LBTI would not be very sensitive to such dust. The LBTI inner working angle in\n  N-band (11 $\mu$m), defined as $\lambda$\/4B, is $\approx$ 39 mas for the LBT mirror\n  separation of 14.4 m, while the first transmission peak is at 79 mas. That stars $>$1\n  L$_\odot$ fall well below the line shows that the N-band flux density requirement drives\n  the source selection rather than the EEID requirement. \label{fig:eeid}}\n\end{figure}\n\n\n\section{Sensitivity-Driven, i.e. Early-type, Sample} \label{sec:bright} \n\nOur objective for this sample is to find stars for which LBTI can make its most sensitive\nobservations of $\sim$300~K dust, regardless of the spectral type of the host\nstar. To create this sample, we select all stars that have N-band flux densities $\geq\n1$~Jy and for which the location of the EEID is $>$60 mas. This preferentially selects\nA-type and early F-type stars. In general, these are not good exoEarth imaging targets\nthemselves, because of the low habitable-zone-planet-to-star contrast. However, they will\nprovide an important addition to our understanding of the exozodiacal dust luminosity\nfunction as it might depend on mass and luminosity of the host star.\n\nWe find an additional 20 stars that meet our selection criteria and were not\nalready selected in the Sun-like Sample. These stars are all of spectral\ntype F4 or earlier and are given in Table~\ref{tab:bright}. These stars are\ntypically farther away than the Sun-like sample stars, with an average\ndistance of 18.6~pc. Twelve stars have significant infrared excesses,\nindicating abundant circumstellar dust at some distance from the stars;\nreferences are given in the table.\n\n
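The angular criteria used throughout (the 60 mas EEID cut, the 39 mas half-transmission inner working angle, and the 79 mas peak) follow from the idealized, monochromatic two-aperture nuller transmission pattern, $T(\theta)=\sin^2(\pi B \theta\/\lambda)$ along the baseline direction. A minimal numerical check, not an LBTI instrument model:\n\begin{verbatim}\nimport numpy as np\n\nlam = 11e-6                      # working wavelength (m)\nB = 14.4                         # LBT centre-to-centre baseline (m)\nMAS = np.degrees(1.0) * 3.6e6    # radians -> milliarcseconds\n\ndef transmission(theta_mas):\n    # Idealized Bracewell pattern along the baseline direction\n    return np.sin(np.pi * B * (theta_mas / MAS) / lam)**2\n\npeak = lam / (2 * B) * MAS       # first transmission maximum\niwa = lam / (4 * B) * MAS        # half-transmission inner working angle\nprint(f'peak = {peak:.0f} mas, IWA = {iwa:.0f} mas, '\n      f'T(IWA) = {transmission(iwa):.2f}')   # ~79 mas, ~39 mas, 0.50\n\end{verbatim}\n\n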
\n\section{Discussion} \label{sec:discussion} \n\nDespite our attempts to reject known binaries (see Section\n\ref{sec:constraints}), there could be unknown companions that would transmit\nthrough the LBTI null pattern and therefore generate some signal. There are\nsome ways to distinguish a companion from a disk using LBTI. A companion will\ntransmit primarily through a single LBTI fringe, unlike a spatially resolved\ndisk. Therefore, a companion will produce a null that varies as the\norientation of the projected baseline of the interferometer changes due to\nEarth's rotation over an observation. However, an inclined disk would have a\nsimilar effect; therefore distinguishing a companion from a disk will likely\nrequire follow-up observations. For example, measuring the source null in\nnarrower filters at the short and long wavelength ends of N-band, i.e. 8 and\n12.5 $\mu$m, would provide some information on its temperature and spatial\nextent. Radial velocity observations will constrain the possible masses and\norbital periods of suspected companions.\n\nAny companions discovered by LBTI are likely to be of substellar mass. All\nbut four of the Sun-like sample stars and seven of the Early-type sample stars\nhave been studied extensively by radial velocity planet search programs\n\citep[e.g.][]{Butler:2006,Lagrange:2009,Fischer:2014}. At the separation of\nmaximum transmission, i.e. 79 mas, an 80 M$_{\rm Jup}$ brown dwarf in an orbit\ninclined at 45$^\circ$ would induce a typical reflex motion of about 2~km\ns$^{-1}$ for our sample stars, which could be detected for all but the most\nrapidly rotating stars in the sample.\n\n
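This estimate can be reproduced with Kepler's third law and the standard radial velocity semi-amplitude scaling ($K \approx 28.4$ m s$^{-1}$ for 1 M$_{\rm Jup}$, a 1 yr period, and a 1 M$_\odot$ star on a circular orbit); the 1 M$_\odot$ primary mass and the mean Sun-like sample distance used below are our illustrative assumptions:\n\begin{verbatim}\nimport numpy as np\n\nd_pc = 11.4      # mean distance of the Sun-like sample (pc)\ntheta = 0.079    # separation of maximum transmission (arcsec)\nm_bd = 80.0      # companion mass (M_Jup)\nincl = 45.0      # orbital inclination (deg)\nm_star = 1.0     # assumed primary mass (M_sun)\n\na_au = theta * d_pc                # separation in AU\np_yr = np.sqrt(a_au**3 / m_star)   # circular orbit period (yr)\nk_ms = (28.4 * m_bd * np.sin(np.radians(incl))\n        / (m_star**(2/3) * p_yr**(1/3)))\nprint(f'a = {a_au:.2f} AU, P = {p_yr:.2f} yr, K = {k_ms/1e3:.1f} km/s')\n\end{verbatim}\nThis returns $K \approx 1.7$ km s$^{-1}$, consistent with the $\sim$2 km s$^{-1}$ quoted above.\n\n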
In advance of scheduled HOSTS observing runs, a prioritized list of targets will be\nconstructed based on the target observability (e.g. above airmass 1.5 for more than 2 hr)\nand our expected sensitivity to exozodi. To determine our expected sensitivity, we pass\nan exozodi model, described in \citet{Kennedy:2014}, through an LBTI null model to\ncalculate an exozodi limit for each star, in units of ``zodi.'' This model was\ndesigned to be simple, to facilitate comparisons between observations of stars of varying\nproperties, and to have a straightforward correspondence with the Solar System's actual\nZodiacal cloud. The basic features of this model are a fixed surface density in the\nhabitable zone, i.e. at a temperature of 278~K, and a weak power-law dependence of surface\ndensity on radius from the star that matches the Zodiacal cloud's. \nFigure \ref{fig:zodisens} shows this estimation based on current knowledge\nof the achievable LBTI null depth. LBTI is still in commissioning, so the final dependence\nof null depth on target brightness is not yet well established. We have assumed that for\ntargets brighter than 2 Jy, LBTI will be systematics limited, so the target's flux density\nwill not affect our zodi sensitivity. However, there may be additional degradation of the\nnull for targets of 1--2 Jy, which comprise 28\/68 of our targets.\n\nThere are many other possible definitions of a ``zodi'', including ones defined in terms of\na fixed $F_{\rm dust}\/F_\star$ or $L_{\rm dust}\/L_\star$ at a given temperature\n\citep{Roberge:2012}. For comparison purposes, we also calculate an exozodi sensitivity\nfor each of our target stars by assuming a version of our reference model that contains\ndust only in the habitable zone, i.e. extending from 0.95 to 1.37 $\sqrt{L_*}$~AU and\nnormalized to a fixed $F_{\rm dust}\/F_\star = 5 \times 10^{\rm -5}$ at a wavelength of\n11 $\mu$m (for our NOMIC filter). These limits are also shown in Figure\n\ref{fig:zodisens}. \n\n\begin{figure}[h]\n\centering\epsfig{file=paper_zodisens_vs_B-V.ps,width=2.25in,clip=,angle=90}\n\caption{Expected LBTI sensitivity (1 $\sigma$) to dust, in zodis as defined in\n  \citet{Kennedy:2014} and assuming a null depth of 10$^{-4}$ for the complete sample\n  (green and blue circles). In this model, the surface density of the disk in the\n  habitable zone is fixed and the flux density of the disk relative to the\n  star is $\propto T^3_\star$, so LBTI is more sensitive to dust around hotter (bluer) stars.\nNote that these values of the sensitivity assume that the null is limited by\n  systematics and not by target flux density. An alternative definition of a\n  zodi is a fixed flux density in the habitable zone (orange squares); in this case\n  LBTI's limits are only weakly a function of stellar luminosity. \label{fig:zodisens}}\n\end{figure}\n\nThe goal of the overall survey is not only to identify exozodiacal dust around specific\nstars of interest, but also to measure the luminosity function of disks in a general\nstatistical sense. As such, we define here a key metric for the overall survey, $Z10$,\nthe fraction of stars with more than 10 zodis. This level of exozodi would halve the\nnumber of Earth-like planets imaged by a future direct-imaging mission, relative to a\nSolar System level of dust \citep{Stark:2014}. This recent work also shows, however,\nthat a mission that observes an ensemble of stars has a total planet yield that is a weak\nfunction of the exozodi level \citep{Stark:2014}.\n\nIn a real-world ground-based observing program, under changing seeing and\ntransparency and seasonally biased conditions, it will be impossible to\nobserve all stars, even those brighter than 2 Jy, to identical zodi detection\ndepths. Of the 68 stars in Tables 1 and 2, we expect to observe $\sim$50.\nWhat is critical is that no biases to the sample are introduced during the\nselection of the actual observed targets.\n\nThe ability of the LBTI survey to constrain $Z10$ depends on both the number of observed\ntargets and the sensitivity of each individual measurement. We performed Monte Carlo\nsimulations to estimate the expected accuracy of $Z10$ as a function of the number of\ntargets. Assuming that the underlying distribution of disk brightnesses follows a\nlog-normal distribution whose width is set by $Z10$, we determine how well $Z10$ is\nconstrained by the LBTI observations. We assume that each star is treated as a unique\nobservation. The bright end of the distribution is already constrained by Spitzer\/KIN\/WISE\nobservations \citep{Lawler:2009, Millan-Gabet:2011, Kennedy:2013}; therefore, we set the\nfrequency of 1000 zodi disks to be 1\%.\n\n
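One way to realize such a simulation is sketched below; the log-normal is pinned by $Z10$ and by the 1\% frequency of 1000 zodi disks, and each measurement carries Gaussian noise at the survey depth. This is an illustration of the approach, not the code behind Figure~\ref{layeredFig}:\n\begin{verbatim}\nimport numpy as np\nfrom scipy.stats import norm\n\ndef lognormal_params(z10, f1000=0.01):\n    # mu, sigma (in ln zodi) from P(>10) = z10 and P(>1000) = f1000\n    z1, z2 = norm.ppf(1.0 - z10), norm.ppf(1.0 - f1000)\n    sigma = np.log(1000.0 / 10.0) / (z2 - z1)\n    return np.log(10.0) - z1 * sigma, sigma\n\ndef simulate(z10_true, n_stars=50, depth=3.0, n_trials=2000):\n    rng = np.random.default_rng(1)\n    mu, sig = lognormal_params(z10_true)\n    est = []\n    for _ in range(n_trials):\n        true = rng.lognormal(mu, sig, n_stars)\n        obs = true + rng.normal(0.0, depth, n_stars)  # 1-sigma noise\n        est.append(np.mean(obs > 10.0))               # recovered Z10\n    return np.mean(est), np.std(est)\n\nprint(simulate(0.35))   # scatter ~0.07, i.e. ~20% for Z10 ~ 0.35\n\end{verbatim}\n\n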
We first consider a uniform survey depth of 3-zodi sensitivity per\nmeasurement (1-$\sigma$), which we take to be the ideal, though likely\nunachievable, survey that LBTI could perform. Figure~\ref{layeredFig} shows how\nwell $Z10$ is constrained by uniform-depth surveys ranging from 30 to 70\nstars. We find that a 50 star survey can measure $Z10$ with $\sim$20\%\naccuracy (for $Z10$$\simeq$0.3-0.4). Using advanced statistical methods to\nallow averaging over multiple targets to achieve deeper zodi limits, it may be\npossible to improve on these rough limits \citep{Mennesson:2014}.\n\nSince variations in weather will inevitably result in non-uniform sensitivity,\nFigure~\ref{layeredFig} also shows the constraints on $Z10$ for a 2-layered\nsurvey, where 40 stars are observed with 3-zodi accuracy and another 30 stars\nwith only 10-zodi accuracy. We find that this layered survey has the same\npower to reveal the zodi luminosity function as a 50 star survey done to a\nuniform depth of 3 zodis. We conclude that an optimal observing strategy should not\nmandate uniform sensitivity, which would concentrate a large fraction of\ntelescope time on a small number of stars, but should instead observe a greater\nnumber of stars, some with greater depth than others.\n\n\begin{figure}[th]\n\begin{center} \t\n\includegraphics[width=3.in]{layeredFig.eps}\n\end{center}\n\caption{The ability of the overall survey to constrain $Z10$ (the fraction of\n  stars with $\geq$ 10 zodis of dust) depends on the number of targets and the\n  accuracy of each measurement. Based on Monte Carlo simulations we find that\n  a survey with two levels of sensitivity, 40 stars with 3-zodi accuracy and\n  30 stars with 10-zodi accuracy (black line), is roughly equivalent to a 50\n  star survey of uniform 3-zodi depth (green dashed line).\n}\label{layeredFig}\n\end{figure}\n\nThe HOSTS survey is expected to begin in 2015 and to continue for two to three years. \nDuring commissioning, LBTI observed $\eta$ Crv, one of the early-type sample\nstars with a known mid-infrared excess. The observations demonstrate the power\nof LBTI to constrain the dust distribution in the habitable zone\n\citep{Defrere:2014}.\n\n\n\acknowledgements \n\nThe Large Binocular Telescope Interferometer is funded by the National Aeronautics and\nSpace Administration as part of its Exoplanet Exploration Program. The work of GMK \&\nMCW was supported by the European Union through ERC grant number 279973. This research has\nmade use of the SIMBAD database and the VizieR catalogue access tool, CDS, Strasbourg,\nFrance, and the Washington Double Star Catalog maintained at the U. S. Naval Observatory.\n \n\section{Appendix: Binary Stars Excluded from Sample \label{tab:binaries}}\n\nWe list here (Table 3) stars that could otherwise meet our selection criteria but that\nwere excluded due to binarity as described in Section 2.2. Much of\nthe information in this table comes from the Washington Double Star Catalog (Mason\net al. 
2001-2014).\n\n \n\n\\begin{deluxetable*}{llrrccccll}\n\\tabletypesize{\\footnotesize}\n\\tablecaption{Stars in the Sun-like Star Sample \\label{tab:sunlike}} \n\\tablewidth{0pt}\n\\tablehead{\n\\colhead{HD}&\\colhead{Name}&\\colhead{RA} &\\colhead{Dec}&\\colhead{Distance}\n &\\colhead{Spectral} &\\colhead{EEID}\n &\\colhead{F$_\\nu$ (N-band)} &\\colhead{hot\/warm}&\\colhead{cold}\\\\\n & & & &\\colhead{(pc)}&\\colhead{Type}& &\\colhead{(Jy)}&\\colhead{excess} &\\colhead{excess}\n}\n\\startdata\n693 &6 Cet & 00:11:15.86 &-15:28:04.7 &18.7 & F8V &0.095 &1.15 & &n (E13)\\\\\n4628 & & 00:48:22.98 &+05:16:50.2 &7.5 & K2.5V &0.072 &1.30 & &n (T08)\\\\\n9826 &$\\upsilon$ And & 01:36:47.84 &+41:24:19.6 &3.5 & F9V &0.136 &2.36 &n (A13) &n (B06,E13)\\\\ \n10476 &107 Psc & 01:42:29.76 &+20:16:06.6 &7.5 & K1V &0.090 &2.02 &n (MG11)&n (T08)\\\\ \n10700 &tau Cet & 01:44:04.08 &-15:56:14.9 &3.6 & G8.5V &0.182 &5.42 &y (A13) &y (G04)\\\\ \n10780\t&GJ 75\t & 01:47:44.83 &+63:51:09.0 &10.1 & K0V &0.072 &1.12 & &n (L09)\\\\\n16160 &GJ 105 & 02:36:04.89 &+06:53:12.7 &7.2 & K3V &0.073 &1.53 & &n (T08)\\\\\n16895 &13 Per & 02:44:11.99 &+49:13:42.4 &11.1 & F7V &0.138 &2.43 &n (A13) &n (Be06)\\\\ \n17206 &tau01 Eri & 02:45:06.19 &-18:34:21.2 &14.2 & F75 &0.115 &1.69 & &n (T08)\\\\\n19373 &iot Per & 03:09:04.02 &+49:36:47.8 &10.5 & F9.5V &0.141 &2.85 &n (MG11)&n (T08) \\\\ \n22049 &eps Eri & 03:32:55.84 &-09:27:29.7 &3.2 & K2V &0.172 &7.39 &n (A13) &y (B09)\\\\ \n22484 &LHS 1569 & 03:36:52.38 &+00:24:06.0 &14.0 & F8V &0.127 &2.35 &y (A13) &y (T08)\\\\\n23754 &tau06 Eri & 03:46:50.89 &-23:14:59.0 &17.6 & F5IV-V &0.128 &2.10 & &n (G13)\\\\\n26965 &omi Eri & 04:15:16.32 &-07:39:10.3 &5.0 & K0.5V &0.128 &3.51 & &n (L02)\\\\ \n30652 &1 Ori & 04:49:50.41 &+06:57:40.6 &8.1 & F6V &0.205 &4.76 &n (A13) &n (T08)\\\\ \n32147 & & 05:00:49.00 &-05:45:13.2 &8.7 & K3V &0.059 &1.00 & &n (L09)\\\\\n34411 &lam Aur & 05:19:08.47 &+40:05:56.6 &12.6 & G1.5IV &0.105 &1.80 &n (MG11)&n (T08)\\\\ \n35296 &V1119 Tau & 05:24:25.46 &+17:23:00.7 &14.4 & F8V &0.090 &1.03 & &n (T08)\\\\\n38393 &gam Lep & 05:44:27.79 &-22:26:54.2 &8.9 & F6V &0.175 &4.40 &n (MG11)&n (Be06)\\\\ \n48737 &ksi Gem & 06:45:17.36 &12:53:44.13 &18.0 & F5IV &0.196 &4.34 &n (A13) &n (K13) \\\\\n78154 &sig02 Uma A & 09:10:23.54 &+67:08:02.4 &20.4 & F6IV &0.099 &1.24 & &n (G13)\\\\\n84117 &GJ 364 & 09:42:14.42 &-23:54:56.0 &15.0 & F8V &0.093 &1.11 & &n (E13)\\\\\n88230 &NSV 4765 & 10:11:22.14 &+49:27:15.3 &4.9 & K8V &0.065 &1.91 &n (MG11)&n (T08)\\\\ \n89449 &40 Leo & 10:19:44.17 &+19:28:15.3 &21.4 & F6IV &0.098 &1.10 & &n (G13)\\\\\n90839 &36 Uma & 10:30:37.58 &+55:58:49.9 &12.8 & F8V &0.099 &1.25 & &n (T08)\\\\\n95128 &47 Uma & 10:59:27.97 &+40:25:48.9 &14.1 & G1V &0.091 &1.35 &n (MG11)&n (T08)\\\\\n101501 &61 Uma & 11:41:03.02 &+34:12:05.9 &9.6 & G8V &0.081 &1.24 & &n (G03)\\\\\n102870 &bet Vir & 11:50:41.72 &+01:45:53.0 &10.9 & F9V &0.173 &4.30 &n (A13) &n (T08)\\\\\n115617 &61 Vir & 13:18:24.31 &-18:18:40.3 &8.6 & G7V &0.108 &2.20 &n (L09,E14) &y (L09)\\\\ \n120136 &tau Boo & 13:47:15.74 &+17:27:24.8 &15.6 & F6IV &0.114 &1.67 &n (E14) &n (B09)\\\\\n126660 &tet Boo & 14:25:11.8 &+51:51:02.7 &14.5 & F7V &0.147 &3.12 & &n (T08)\\\\\n131977 &KX Lib & 14:57:28.00 &-21:24:55.7 &5.8 & K4V &0.076 &1.95 &n (L09) &n (Be06) \\\\ \n141004 &lam Ser & 15:46:26.61 &+07:21:11.0 &12.1 & G0IV-V &0.121 &2.40 &n (A13) &n (K10)\\\\ \n142373 &LHS 3127 & 15:52:40.54 &+42:27:05.5 &15.9 & F8Ve &0.111 &2.03 &n (A13) &n (T08)\\\\\n142860 &gam Ser & 15:56:27.18 &+15:39:41.8 &11.2 & F6IV &0.151 
&2.93 &n (A13) &n (T08)\\\\\n149661 &V2133 Oph & 16:36:21.45 &-02:19:28.5 &9.8 & K2V &0.068 &1.00 &n (E14) &n (T08)\\\\\n156026 &V2215 Oph & 17:16:13.36 &-26:32:46.1 &6.0 & K5V &0.064 &1.64 & &n (Be06)\\\\\n157214 &w Her & 17:20:39.30 &+32:28:21.2 &14.3 & G0V &0.079 &1.01 & &n (T08)\\\\\n160915 &58 Oph & 17:43:25.79 &-21:40:59.5 &17.6 & F5V &0.096 &1.18 &n (E14) &n (E14)\\\\\n173667 &110 Her & 18:45:39.72 &+20:32:46.7 &19.2 & F6V &0.131 &2.18 &y (A13) &n (T08)\\\\\n185144 &sig Dra & 19:32:21.59 &+69:39:40.2 &5.8 & G9V &0.113 &2.72 &n (A13) &n (T08)\\\\\n192310 &GJ 785 & 20:15:17.39 &-27:01:58.7 &8.9 & K2+V &0.071 &1.25 & &n (Be06)\\\\\n197692 &psi Cap & 20:46:05.73 &-25:16:15.2 &14.7 & F5V &0.136 &2.08 &n (E14) &n (L09)\\\\\n201091 &61 Cyg A & 21:06:53.95 &+38:44:58.0 &3.5 & K5V &0.106 &4.43 &n (A13) &n (G04)\\\\ \n201092 &61 Cyg B & 21:06:55.26 &+38:44:31.4 &3.5 & K7V &0.085 &3.28 &n (A13) &n (G04) \\\\ \n215648 &ksi Peg A & 22:46:41.58 &+12:10:22.4 &16.3 & F7V &0.132 &2.22 &n (E14) &n (G13)\\\\\n219134 & & 23:13:16.98 &+57:10:06.1 &6.5 & K3V &0.080 &1.86 & &n (T08)\\\\\n222368 &iot Psc & 23:39:57.04 &+05:37:34.6 &13.7 & F7V &0.137 &2.40 &n (MG11)&n (B06)\\\\ \n\\enddata\n\\tablecomments{Excess References: A13=\\citet{Absil:2013}; Be06=\\citet{Beichman:2006};B06=\\citet{Bryden:2006};\n B09=\\citet{Bryden:2009}; E13=\\citet{Eiroa:2013}; E14=\\citet{Ertel:2014}; G13=\\citet{Gaspar:2013}; G03=\\citet{Greaves:2003}; G04=\\citet{Greaves:2004}; K13=\\citet{Kennedy:2013}; K10=\\citet{Koerner:2010};L02=\\citet{Laureijs:2002}; L09=\\citet{Lawler:2009}; MG11=\\citet{Millan-Gabet:2011};T08=\\citet{Trilling:2008}}\n\n\\end{deluxetable*} \n \n\n\\begin{deluxetable*}{llrrccccll}\n\\tabletypesize{\\scriptsize}\n\\tablecaption{Stars in the Sensitivity-Driven Sample\\label{tab:bright}} \n\\tablewidth{0pt}\n\\tablehead{\n\\colhead{HD}&\\colhead{Name}&\\colhead{RA} &\\colhead{Dec}&\\colhead{Distance}\n &\\colhead{Spectral} &\\colhead{EEID}\n &\\colhead{F$_\\nu$ (N-band)} &\\colhead{hot\/warm}&\\colhead{cold}\\\\\n & & & &\\colhead{(pc)}&\\colhead{Type} & &\\colhead{(Jy)}&\\colhead{excess} &\\colhead{excess}\n}\n\\startdata\nHD 33111 & bet Eri & 05:07:51.0 & -05:05:11.2 & 27.4 & A3IV &0.248 &3.72 &n (E14) &y (G13)\\\\\nHD 38678 & zet Lep & 05:46:57.3 & -14:49:19.0 & 21.6 & A2IV-V &0.176 &2.06 &y (FA98), n (A13) &y (FA98)\\\\\nHD 40136 & eta Lep & 05:56:24.3 & -14:10:03.7 & 14.9 & F2V &0.161 &2.36 & &y (L09)\\\\\nHD 81937 & h UMa & 09 31 31.7 & +63:03:42.8 & 23.8 & F0IV &0.168 &2.55 & &n (B06)\\\\\nHD 95418 & beta UMa & 11:01:50.5 & +56:22:56.7 & 24.4 & A1IV &0.316 &4.20 &y (FA98), n (A13) &y (S06)\\\\\nHD 97603 & del Leo & 11:14:06.5 & +20:31:25.4 & 17.9 & A5IV &0.278 &3.90 &n (A13) &n (G13)\\\\\nHD 102647 & bet Leo & 11:49:03.6 & +14:34:19.4 & 11.0 & A3V &0.336 &6.85 &y (A13) &y (S06)\\\\\nHD 103287 & gam Uma & 11:53:49.8 & +53:41:41.1 & 25.5 & A1IV &0.308 &3.69 & &n (S06)\\\\\nHD 105452 & alf Crv & 12:08:24.8 & -24:43:44.0 & 14.9 & F1V &0.139 &1.97 & &n (G13)\\\\ \nHD 106591 & del UMa & 12:15:25.6 & +57:01:57.4 & 24.7 & A2V &0.199 &2.00 &n (A13) &n (G13)\\\\\nHD 108767 & del Crv & 12:29:51.8 & -16:30:55.6 & 26.6 & A0IV &0.251 &2.25 &y (E14) &n (S06)\\\\\nHD 109085 & eta Crv & 12:32:04.2 & -16:11:45.6 & 18.3 & F2V &0.125 &1.76 &n (A13) &y (W05)\\\\\nHD 128167 & sig Boo & 14:34:40.8 & +29:44:42.4 & 15.8 & F2V &0.117 &1.39 & &y (L02)\\\\\nHD 129502 & 107 Vir & 14:43:03.6 & -05:39:29.5 & 18.3 & F2V &0.151 &2.60 &n (E14) &n (G13)\\\\\nHD 164259 & zet Ser & 18:00:29.0 & -03:41:25.0 & 23.6 & F2IV &0.106 &1.14 &n 
(E14) &n (L09)\\\\ \nHD 172167 & Vega & 18:36:56.3 & +38:47:01.3 & 7.7 & A0V &0.916 &38.55 &y (A13) &y (G86)\\\\\nHD 187642 & Altair & 19:50:47.0 & +08:52:06.0 & 5.1 & A7V &0.570 &21.63 &y (A13) &n (R05)\\\\\nHD 203280 & Alderamin & 21:18:34.8 & +62:35:08.1 & 15.0 & A8V &0.294 &7.04 &y (A13) &n (C05)\\\\\nHD 210418 & tet Peg & 22:10:12.0 & +06:11:52.3 & 28.3 & A1V &0.179 &1.61 &n (E14) &n (S06)\\\\\nHD 216956 & Fomalhaut & 22:57:39.0 & -29:37:20.0 & 7.7 & A4V &0.504 &15.41 &y (L13) &y (G86)\\\\\n\\enddata\n\\tablecomments{Excess References: \nA13=\\citet{Absil:2013}; \nB06=\\citet{Bryden:2006}; \nC05=\\citet{Chen:2005}; \nE14=\\citet{Ertel:2014};\nFA98=\\citet{Fajardo:1998}; \nG13=\\citet{Gaspar:2013}; \nG86=\\citet{Gillett:1986};\nL02=\\citet{Laureijs:2002};\nL09=\\citet{Lawler:2009}; \nL13=\\citet{Lebreton:2013}; \nR05=\\citet{Rieke:2005}; \nS06=\\citet{Su:2006}; \nW05=\\citet{Wyatt:2005}}\n\\end{deluxetable*}\n \n\n\\begin{deluxetable*}{llll} \n\\tabletypesize{\\small}\n\\tablecaption{Binary Stars Excluded from the Sample\\label{tab:binary}} \n\\tablewidth{0pt}\n\\tablehead{\n\\colhead{HD}& \\colhead{Name} &\\colhead{SpTyp} &\\colhead{Binarity Notes}\n}\n\\startdata\n\\hline\nHD 432 &bet Cas &F2IV &WDS says SB with P=27d\\\\\nHD 4614 &eta Cas &G3V &VB 12'', 70AU, P=480yr\\\\\nHD 6582 &mu Cas &G5V &VB 1'', 7.5 AU + SB\\\\\nHD 8538 &del Cas &A5III-IV &SB, perhaps eclipsing\\\\\nHD 11443 &alf Tri &F6IV &SB, P=1.8d\\\\\nHD 11636 &bet Ari &A5V &SB9, P=107d\\\\\nHD 13161 & &A5IV &SB, P=31d, resolved by MarkIII\\\\\nHD 13974 & &G0V &SB, P=10d\\\\\nHD 16970 & &A2V &VB, 2.3''\\\\\nHD 20010 & &F6V &VB 4.4'', 62 AU\\\\\nHD 20630 &kap01 Cet &G5V &WDS says SB, but not variable in \\citet{Nidever:2002}\\\\\nHD 39587 &chi1 Ori &G0V &VB 0.7'', 6 AU, P=14yr\\\\\nHD 40183 & &A1IV-V &SB, P=3.7d, resolved by MarkIII\\\\\nHD 47105 & &A1.5IV &SB\/speckle VB \\\\\nHD 48915 &Sirius &A0V &VB, 7.5''\\\\\nHD 56986 &del Gem &F2V &SB, P=6.1 yr\\\\\nHD 60179 &Castor A &A1.5IV &SB, P=9.2d plus VB, P=467 yr\\\\\nHD 61421 &Procyon &F5IV &VB 5'', 18AU, P=40 yr and SB\\\\\nHD 76644 &\t &A7V &SB, P=11yr\\\\\nHD 76943 &10 UMa &F3V &SB\/VB 0.6'', P=21.8 yr\\\\\nHD 82328 &tet UMa &F7V &WDS says SB (SBC7), but not in SB9, also VB, 5''\\\\\nHD 82885 &SV Lmi &G8III &VB 3.8'', 43 AU, P=200 yr\\\\\nHD 95735 & &M2V &EB\\\\\nHD 98231 &GJ 423A &G0V &Complicated multiple system\\\\\nHD 104304 & &G8IV &VB, 1''\\\\\nHD 109358 &bet Cvn &G0V &SB, 0.1'', 1.3 AU\\\\\nHD 110379 &GJ 482A &F0V &VB, 3.75'', 44AU, P=171yr\\\\\nHD 112413 &alf02 CVn &A0II-III &SB\\\\\nHD 114378J &alph Com &F5V &VB 0.7'', 12 AU, P=26yr\\\\\nHD 114710 &bet Com &G0V &possible SB\\\\\nHD 116656 &Mizar A &A1.5V &SB, P=20.5d\\\\\nHD 118098 &zet Vir &A3V &VB, companion is M4-M7\\\\\nHD 121370 &eta Boo &G8V &SB9, P=494d\\\\\nHD 130841 &alf Lib A &A4IV-V &possible SB\\\\\nHD 131156 &37 Boo &G8V &VB 4.9'', 33AU, P=151yr\\\\\nHD 131511 &GJ 567 &K2V &possible SB P=125d\\\\\nHD 133640 &GJ 575 &G0V &VB, 3.8'', 48 AU (now at 1.5'')\\\\\nHD 139006 &alf CrB &A1IV &SB, P=17d\\\\\nHD 140538 & &G2.5V &VB, 4.4'', 65 AU\\\\\nHD 144284 &tet Dra &F8 IV-V &SB, P=3.1d\\\\\nHD 155125 & &A2IV-V &VB 0.86''\\\\\nHD 155886 &GJ 663A &K2V &VB 5'', 28 AU\\\\\nHD 155885 &36 Oph &K2V &VB 15'', 87AU, P=569yr + possible SB\\\\\nHD 156164 &del Her &A1IV &SB 0.1''\\\\\nHD 156897 &40 Oph &F2V &VB, approx 4'' (CCDM)\\\\\nHD 159561 &alf Oph &A5II &VB 0.8''\\\\\nHD 160269 & &G0V &VB 1.5'', 21 AU\\\\\nHD 160346 &GJ 688 &K3V &SB P=84d\\\\\nHD 161797 &mu Her &G5IV &VB 1.4'', 12AU, P=65yr\\\\\nHD 165341 &70 
Oph A &K0V &VB 4.6'', 23AU, P=88yr\\\\\nHD 165908 &b Her &F7V &VB 1.1'', 17 AU, P=26 yr\\\\\nHD 170153 & &F7V &SB 0.1'', 1AU, P=0.8yr\\\\\nCCDM 19026-2953 A & &A2.5V &VB 0.53''\\\\\nHD 177724 & &A0IV-V &SB + VB, 5''\\\\\nHD 182640 &del Aql &F1IV-V &SB, resolved at 0.1''\\\\\nHD 185395 & tet Cyg &F4V &VB\/SB, approx 2.5''\\\\\nHD 207098 & del Cap &F2III &EB\\\\\nHD 210027 &iot Peg &F5V &SB1, P=10d\\\\\nHD 224930 &85 Peg &G5V &VB 0.8'', 10AU, P=26 yr\\\\\n\enddata\n\end{deluxetable*}\n\n\bibliographystyle{apj} \n\n\n\section{Introduction}\n\nOne of the most debated issues in stellar astrophysics is whether the\ncoronal abundances are similar to their photospheric counterparts in\ncool stars. A different composition in the corona and the\nphotosphere would indicate a physical process taking place\nsomewhere between the cooler photospheric material and the hotter\ncorona. Such a process should be capable of distinguishing between\ndifferent elements based on a certain physical variable. A\npattern related to that variable would help in understanding the\nphysical processes taking place in the coronal loops.\nSuch a pattern has been observed in the Sun, which shows, on average,\nan enhancement of elements with a low first ionization potential (FIP)\nin the corona with respect to the photosphere. The so-called ``FIP\neffect'' is actually observed in the solar corona and slow wind, but\nis absent in coronal holes or fast wind \citep{lam95,feld00}.\nStars similar to the Sun, such as $\alpha$~Cen, show a\nsimilar FIP effect \citep{dra97,raa03}. \nLess active stars, such as Procyon (F4IV),\ndo not show any fractionation \citep{raa02,sanz04}. \nIntermediate-activity stars,\nlike $\epsilon$~Eri, 36~Oph or 70~Oph, present a much weaker FIP effect,\nif any \citep{lam96,sanz04,woo06,ness08}. \n\nFor the most active stars, \nthe results are more uncertain. Their coronal\ncomposition is relatively easy to determine once high spectral\nresolution is available. But their high rotation rates\nhamper the measurements of photospheric composition due to line\nbroadening, making the comparison more difficult. \nThe photospheric composition can only be well\ncalculated for stars with low {\it projected} rotational\nvelocity ($v \sin i$).\nInitial studies of\nactive stars made with XMM-Newton and Chandra high-resolution spectra\nfound a very different pattern from the Sun's \citep[e.g.][and references\n  therein]{gud04}. The elements with low FIP would \nactually be underabundant in the corona, a ``metal abundance\ndepletion'' \citep[``MAD syndrome'',][]{sch96}, or alternatively, \nthe elements with high FIP would\nbe enhanced in the corona \citep[the ``inverse FIP effect'',][]{bri01}. \nA caveat for many of the active stars is that \ntheir coronal abundances are actually compared to the solar photosphere,\nwhich leads to risky conclusions at best \citep{fav03}. Moreover, most\nactive stars with calculated photospheric abundances have large\n$v \sin i$, with broad photospheric lines. \nAs a result, their abundance determinations could carry\nhidden errors that are not well assessed by formal\ncalculations. 
This is the case for II Peg, AB Dor, or AR Lac (see\nTable~\ref{tab:pastabundances}).\n\n\n\begin{table*}\n\caption{Fractionation effects in other stars with known\n  photospheric and coronal abundances}\label{tab:pastabundances} \n\tabcolsep 3.pt\n\begin{center}\n\begin{footnotesize}\n \begin{tabular}{lrcccccc}\n\hline \hline\n{Star} & HD & Sp. Type & {$L_{\rm X}$} & $L_{\rm X}$\/$L_{\rm bol}$ &\n$v \sin i$ & FIP effect? & Reference$^a$ \\\\\n & & & (erg s$^{-1}$) & & km\,s$^{-1}$ & & \\\\\n\hline\n{Sun} & \dots & G2V & $\sim$27.5 & -6.1 & \dots & FIP effect & FE00 \\\\\n{Procyon} & 61421 & F4IV & 27.9 & -6.5 & 6.1 & No FIP effect & SF04,RA02 \\\\\n{$\epsilon$ Eri} & 22049 & K2V & 28.2 & -5.1 & 2.4 & Small\/No FIP effect & SF04 \\\\ \n{$\xi$ UMa B} & 98230 & G5V\/[KV] & 29.5 & -4.3 & 2.8 & (No FIP effect) & BA05 \\\\\n{$\lambda$ And} & 222107 & G8III\/? & 30.4 & -4.5 & 7.3 & No FIP effect & SF04 \\\\ \n{Capella} & 34029 & G1III\/G8III & 30.5 & -5.3 & 32.7 & (No FIP effect) & BR00 \\\\\n{II Peg} & 224085 & K2IV\/M0-3V & 31.1 & -2.8 & 21 & Small inverse FIP & HU01 \\\\\n{AB Dor} & 25647 & K1IV & 30.1 & -3.0 & 90 & Inverse FIP & SF03 \\\\\n{AR Lac} & 210334 & G2IV\/K0IV & 30.9 & -3.4 & 46\/81 & Small inverse FIP & HU03 \\\\\n{V851 Cen} & 119285 & K2IV-III\/? & 30.8 & -3.5 & 6.5 & No FIP effect & SF04 \\\\\nAR Psc & 8357 & G7V\/K1IV & 30.7 & -3.3 & 7.0 & No FIP effect & This work \\\\\nAY Cet & 7672 & WD\/G5III & 31.0 & -4.2 & 4.6 & No FIP effect & This work \\\\\n\hline\n\end{tabular}\n\end{footnotesize}\n\end{center}\n{\it Note}: $L_{\rm X}$ (erg s$^{-1}$) calculated in the range\n5--100~\AA\ (0.12--2.4 keV).\\\\\n{\it $^a$References for FIP effect}: BA05 \citep{bal05}, BR00 \citep{bri00}, FE00\n\citep{feld00}, HU01 \citep{hue01}, HU03 \citep{hue03}, RA02\n\citep{raa02}, SF04 \citep{sanz04}, SF03 \citep{sanz03b}. \n\end{table*}\n\n\begin{figure*}\n  \centering\n  \includegraphics[angle=90,width=0.45\textwidth]{12069f1a.ps}\n  \includegraphics[angle=90,width=0.45\textwidth]{12069f1b.ps}\n  \caption{Light curves of AR Psc (using all EPIC detectors) and AY\n    Cet (using orders +1 and -1 of LETGS) in X-rays. The upper axis\n    indicates the orbital phase, assuming that the secondary star is\n    located behind the primary star at phase 0.}\n  \label{fig:lcs}\n\end{figure*}\n\nTheoretical explanations of the FIP effect are found in\n\citet{lam04}. The authors also propose that Alfv\'en waves,\ncombined with ponderomotive forces, would explain the observed FIP\neffect on the Sun, pointing towards the disappearance of any\nfractionation in stars with lower or higher activity levels. The same\nmechanism could also explain the inverse FIP effect, if such exists,\nby reflecting the Alfv\'en waves in the chromosphere. Alfv\'en waves\nhave also been suggested in recent years as responsible for the\nenergy transport between the outer convective layers of the star\nand the much hotter corona \citep[e.g.][and references\n  therein]{erd07,dep07}, one of the most important\nunresolved problems in stellar astrophysics. Both questions, the energy\ntransport and the FIP fractionation, could actually be connected.\n\nIt is thus essential to understand whether active stars suffer an\ninverse FIP effect or just no significant fractionation with respect\nto the photospheric composition. Given the problems measuring the\nphotospheric abundances, we should mainly trust\nthose results for stars with low $v \sin i$. 
\citet{sanz04}\nshowed the case of two active stars, $\lambda$~And and V851~Cen,\nfor which coronal and photospheric abundances are consistent, and no\neffect related to FIP is present. Stars such as Capella\n\citep[which shows nearly solar coronal abundances,][]{bri00,arg03,aud03}\nshow no inverse FIP effect either, although their\nphotospheric abundance is poorly known\n\citep[{[Fe\/H]=--0.16},][]{mcw90}. Other cases of active stars\nwith no sign of an inverse FIP effect, which are compared to poorly\nknown photospheric iron abundances, are $\sigma^2$~CrB\n\citep{suh05}, EK Dra \citep{tel05}, and YY Men\n\citep{aud04}\footnote{Although the authors claim that coronal iron\n  is depleted in YY Men, the values in the corona and photosphere of the\n  star are actually consistent, once reasonable uncertainties of \n  0.2 dex are considered for the photospheric iron, calculated from a\n  low-resolution spectrum by \citet{ran93}.}. \nThe present situation is still confusing given the few cases described\nin the literature with both photospheric and coronal abundances\nwell-calculated. In this \nwork we present the results for two more RS~CVn systems known to have\nnarrow photospheric lines despite their high rotation and activity\nlevel.\n\n\begin{figure*}\n  \centering\n  \includegraphics[angle=270,width=0.95\textwidth]{12069f2a.ps}\n\n  \hspace{5mm}\n\n  \includegraphics[angle=270,width=0.95\textwidth]{12069f2b.ps}\n  \caption{AR Psc and AY Cet X-ray spectra. The dashed line\n    represents the continuum predicted by the EMD.} \n  \label{fig:speccorona}\n\end{figure*}\n\nFollowing \citet{sanz04} we have chosen two active stars that are observed\npole-on, and therefore have low $v \sin i$.\nThus, their optical spectra display narrow\nlines, allowing us to measure the photospheric abundances better. The\ntwo RS CVn binaries are well-known X-ray emitters. Their emission\nis attributed to coronal activity, powered by the fast rotation forced\nby the close binarity. {AR Psc} \citep[G7V\/K1IV, $v \sin i$=7\nkm\,s$^{-1}$,][]{nor04}\nand {AY Cet} \citep[WD\/G5III, $v \sin i$=4.6 km\,s$^{-1}$,][]{mas08} \nwere observed with the\nExtreme Ultraviolet Explorer Observatory (EUVE), giving a\nfirst glimpse of their coronal emission \citep{sanz03}.\nOptical spectra also indicate a high level of chromospheric activity\n\citep{mon97}. Their EMDs are similar to those of other active\nstars, either showing an inverse FIP effect, like AB~Dor\n\citep{sanz03b,gar05,gar08}, or no\nFIP-related fractionation, such as $\lambda$~And \citep{sanz04}. In the case of\nAY~Cet, it is not expected that its white dwarf\ncompanion contributes substantially to the X-ray band.\nPhotospheric abundances were calculated by \citet{ott98} for AY Cet in\nonly three elements: [Fe\/H]=-0.32, [Mg\/H]=-0.22, and [Si\/H]=-0.32,\nwith \citet{asp05} values as reference. \citet{sha06} finds a low iron\nabundance in AR~Psc, but no information is provided on the reference\nsystem used in the calculations.\n\nThe paper is organized as follows: the observations are described in\nSect.~2, results are given in Sect.~3, and the discussion follows in\nSect.~4, ending with the conclusions.\n\n\n\n\section{Observations}\nTime was awarded (P.I. J. 
Sanz-Forcada) for observing high-resolution \nX-ray spectra of AY~Cet and AR~Psc.\nThe Chandra Low Energy Transmission Grating Spectrograph (LETGS)\n\citep[$\lambda\lambda\sim$3--175,\n$\lambda\/\Delta\lambda\sim$60-1000,][]{wei02} observed AY~Cet in May\n2005 in combination\nwith the High Resolution Camera (HRC-S). Data were reduced using the CIAO v4.0\npackage. The positive and negative orders were summed for the flux\nmeasurements. Lines formed in\nthe first dispersion order, but contaminated by contributions from\nhigher dispersion orders, were not employed in the analysis. Light\ncurves were obtained from the LETG spectra (first and higher \norders) of AY~Cet with the background properly subtracted\n(Fig.~\ref{fig:lcs}).\nXMM-Newton observed AR~Psc on 11 January 2006. XMM-Newton observes\nsimultaneously with the RGS \citep[Reflection Grating\nSpectrometer,][]{denher01} ($\lambda\lambda\sim$6--38~\AA,\n$\lambda$\/$\Delta\lambda\sim$100--500) and the EPIC (European Imaging\nPhoton Camera) PN and MOS detectors (sensitivity range 0.15--15 keV\nand 0.2--10~keV, respectively).\nThe RGS data were reduced using the standard SAS (Science Analysis\nSoftware) version 8.0.1 package, and the RGS 1 and 2 spectra were\ncombined to improve the measurement of the line fluxes\n(Fig.~\ref{fig:speccorona}). The \nEPIC light curve (Fig.~\ref{fig:lcs}) was constructed by combining the\nPN and MOS count rates in a circle around the target, with background\nregions properly subtracted using the SAS task {\em epiclccorr}, which\nalso corrects for several instrumental effects. \n\n\begin{figure}\n  \centering\n  \includegraphics[width=0.45\textwidth]{12069f3.ps}\n  \caption{AR Psc and AY Cet optical spectra.} \n  \label{fig:specphot}\n\end{figure}\n\nOptical spectra (Fig.~\ref{fig:specphot}) were acquired on 27 and 28 \nNovember 2002 to\ncalculate the photospheric abundances of the two stars, as part of a\nwider observational campaign. The detailed instrumental setup and\nobservations are described in detail in \citet{aff05}, so only a short\ndescription is given here. We used the high-resolution cross-dispersed echelle\nspectrograph SOFIN, mounted on the Cassegrain focus of the 2.56 m Nordic Optical\nTelescope (NOT) located at the Observatorio del Roque de Los Muchachos (La\nPalma, Spain). Exposure times ranged from 4 to 20 min,\nresulting in a high S\/N per pixel ($\approx 0.025$ \AA\/px),\naveraging about 280. A\nspectrum of a Th-Ar lamp was obtained following each stellar spectrum, ensuring\naccurate wavelength calibration. \nThe total spectral range is 3900--9900 \AA, and the resolving power \nis $R=\lambda\/\Delta\lambda\,\approx\,80\,000$. The spectra were\nreduced with the standard software available \nwithin the CCDRED and ECHELLE packages of IRAF.\footnote{IRAF (Image\n  Reduction and Analysis Facility) is \ndistributed by National Optical Astronomy Observatories, operated by the\nAssociation of Universities for Research in Astronomy, Inc., under cooperative\nagreement with the National Science Foundation.} The analysis includes overscan\nsubtraction, flat-fielding, removal of scattered light, extraction of\none-dimensional spectra, wavelength calibration, and continuum\nnormalization \citep[see][for further details]{aff05}. The\n$EW$s were measured using the SPLOT task in\nIRAF, assuming a Gaussian profile for weak or moderately strong lines ($EW \la\n100$ m\AA) and a Voigt profile for stronger lines. The accuracy (absolute\nerror) is harder to assess. It almost certainly contains a systematic error due\nto the continuum location, because of the presence of interference\nfringes (which could not be completely removed) in the redder part of\nthe stellar spectra, which cause a modulation of \nthe local continuum. This error could be particularly important for\nthe weak lines. \n\n
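For a Gaussian line profile the $EW$ follows directly from the fitted depth and width; a minimal numerical illustration on a synthetic line (invented numbers, not our data):\n\begin{verbatim}\nimport numpy as np\n\n# Synthetic weak line: Gaussian dip on a normalized continuum\nwav = np.linspace(6000.0, 6004.0, 400)        # wavelength (Angstrom)\ndepth, center, sigma = 0.10, 6002.0, 0.08     # assumed line parameters\nflux = 1.0 - depth * np.exp(-0.5 * ((wav - center) / sigma)**2)\n\new = np.sum(1.0 - flux) * (wav[1] - wav[0]) * 1e3   # EW in mAngstrom\nprint(f'EW = {ew:.1f} mA')   # analytic: depth*sigma*sqrt(2 pi) ~ 20 mA\n\end{verbatim}\n\n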
\n\begin{figure*}\n  \centering\n  \includegraphics[width=0.498\textwidth]{12069f4a.eps}\n  \includegraphics[width=0.498\textwidth]{12069f4b.eps}\n  \caption{Emission measure distribution (EMD) of AR Psc and AY\n    Cet. Thin lines represent the relative\n    contribution function for each ion (the emissivity function\n    multiplied by the EMD at each point). Small numbers indicate the\n    ionization stages of the species. Also plotted are the \n    observed-to-predicted line flux ratios for the ion stages in the \n    upper figure.\n    The dotted lines denote a factor of 2.} \n  \label{fig:emds}\n\end{figure*}\n\n\input{12069t2.tex}\n\input{12069t3.tex}\n\n\begin{table}\n\caption{Emission measure distribution of the target stars}\label{tab:emds}\n\begin{center}\n\begin{small}\n\begin{tabular}{lrr}\n\hline \hline\n{log~$T$} & \multicolumn{2}{c}{log $\int N_{\rm e} N_{\rm H} {\rm d}V$\n (cm$^{-3}$)$^a$} \\\\\n(K) & {AR Psc} & {AY Cet} \\\\ \n\hline\n6.0 & 50.40\hfill\hspace{1ex} & 50.60\hfill\hspace{1ex} \\\\\n6.1 & 50.70\hfill\hspace{1ex} & 50.80\hfill\hspace{1ex} \\\\\n6.2 & 50.90$^{+0.10}_{-0.30}$ & 51.10$^{+0.20}_{-0.40}$ \\\\\n6.3 & 51.05$^{+0.10}_{-0.30}$ & 51.40$^{+0.20}_{-0.20}$ \\\\\n6.4 & 51.05$^{+0.20}_{-0.30}$ & 51.10$^{+0.30}_{-0.20}$ \\\\\n6.5 & 51.20$^{+0.10}_{-0.30}$ & 51.55$^{+0.30}_{-0.30}$ \\\\\n6.6 & 51.30$^{+0.20}_{-0.30}$ & 51.90$^{+0.30}_{-0.30}$ \\\\\n6.7 & 51.40$^{+0.20}_{-0.30}$ & 52.30$^{+0.20}_{-0.30}$ \\\\\n6.8 & 51.70$^{+0.20}_{-0.20}$ & 52.85$^{+0.20}_{-0.20}$ \\\\\n6.9 & 52.70$^{+0.10}_{-0.00}$ & 53.30$^{+0.10}_{-0.10}$ \\\\\n7.0 & 52.90$^{+0.00}_{-0.10}$ & 53.55$^{+0.00}_{-0.00}$ \\\\\n7.1 & 52.40$^{+0.20}_{-0.20}$ & 52.40$^{+0.15}_{-0.25}$ \\\\\n7.2 & 52.60$^{+0.20}_{-0.30}$ & 52.60$^{+0.10}_{-0.30}$ \\\\\n7.3 & 52.70$^{+0.10}_{-0.30}$ & 52.75$^{+0.15}_{-0.15}$ \\\\\n7.4 & 52.50$^{+0.10}_{-0.30}$ & 52.70$^{+0.30}_{-0.25}$ \\\\\n7.5 & 51.60\hfill\hspace{1ex} & 52.30\hfill\hspace{1ex} \\\\\n\hline\n\end{tabular}\n\end{small}\n\end{center}\n$^a$Emission measure, where $N_{\rm e}$ \nand $N_{\rm H}$ are electron and hydrogen densities, in\ncm$^{-3}$. Error bars provided are not independent\nbetween the different temperatures; see text.\n\end{table}\n\n\section{Results}\nLight curves of AR Psc and AY Cet (Fig.~\ref{fig:lcs}) show no flares \nin these observations, although variability of up to $\sim$10\% is present\nin AR~Psc. The small portion of the orbital period\ncovered prevents us from\nidentifying variability related to orbital phase in either star. \nBoth systems behave as they did during the EUVE observations reported by \n\citet{sanz03}. The spectra recorded by EUVE were of limited use because of\nlow statistics. The large number of lines observed with XMM-Newton\nand Chandra has allowed us to construct a more accurate emission\nmeasure distribution (EMD) as a\nfunction of temperature, defined as $E\!M(T) = \int_{\Delta T} N_{\rm e} N_{\rm H}\n{\rm d}V$ [cm$^{-3}$]. We used a\nline-based analysis described in \citet{sanz03b} and references\ntherein. 
In short,
individual line fluxes are measured\\footnote{We then corrected the measured
 fluxes for interstellar medium (ISM) absorption, using $\\log N_{\\rm
 H}=18.8$ (AY Cet) and $\\log N_{\\rm H}=18.3$ (AR Psc), although the
 correction is only important for a few lines.}
(Tables~\\ref{tab:flarpsc}, \\ref{tab:flaycet}) and then compared to a
trial EMD, which is combined with the atomic emission model
Astrophysical Plasma Emission Database \\citep[APED v1.3.1,][]{aped} in
order to produce theoretical line fluxes. The comparison of
measured and modeled line fluxes results in an improved EMD that is
used again to produce new modeled line fluxes.
The iterative process results
in a solution that is not unique, but that reliably matches the
observed fluxes and presumably resembles the real EMD
of the corona. Error bars were calculated using a Monte Carlo method
that seeks the best solution for different line fluxes within their
1--$\\sigma$ errors \\citep[see][for more details]{sanz03b}.
In our case the lines measured are formed
at different temperatures and correspond to several elements. We took
care to construct an initial
EMD using only Fe lines and then progressively added lines of each
element with overlapping temperatures to the analysis. The determined
EMDs are displayed in Fig.~\\ref{fig:emds} and Table~\\ref{tab:emds}.
Coronal abundances (Fig.~\\ref{fig:abuncorona}, Table~\\ref{tab:abundances})
were calculated with the same process and then
compared to the values measured in the stars' own
photospheres (see below). We used the solar photospheric values \\citep{asp05}
as reference. Some solar abundance values have
changed between \\citet{anders} \\citep[used in][]{sanz04} and
\\citet{asp05}. Most notably, [Fe\/H] is now 7.45 instead of 7.67.

\\begin{figure}
 \\centering
 \\includegraphics[width=0.45\\textwidth]{12069f5a.eps}
 \\includegraphics[width=0.45\\textwidth]{12069f5b.eps}
 \\caption{Coronal abundances of AR Psc and AY Cet, in increasing
 order of FIP. Filled circles are coronal values and open circles
 represent photospheric values. A dashed line
 indicates the adopted solar photospheric abundance \\citep{asp05}.}
 \\label{fig:abuncorona}
\\end{figure}

\\begin{figure}
 \\centering
 \\includegraphics[width=0.45\\textwidth]{12069f6a.eps}
 \\includegraphics[width=0.45\\textwidth]{12069f6b.eps}
 \\caption{Photospheric abundances of AR Psc and AY Cet. A dashed line
 indicates the adopted solar photospheric abundance \\citep{asp05}.}
 \\label{fig:abunphot}
\\end{figure}

The photospheric abundances (Fig.~\\ref{fig:abunphot},
Table~\\ref{tab:abundances}) were calculated
through the analysis of the
optical spectra, for which we selected a group of lines free
of blends, as explained in \\citet{aff05} and \\citet{mor03}.
To avoid the difficulty in defining the continuum in
the blue part of the spectra, only lines with $\\lambda$ $>$ 5500 \\AA\\ were
selected. Lines that appeared
asymmetric or showed an unusually large width were assumed to be blended with
unidentified lines and were therefore discarded from the initial sample.
To obtain information on individual abundances from the spectral
lines of various elements, one must first
determine the parameters that characterize the atmospheric model; i.e., the
effective temperature ($T_{\\rm eff}$), the surface gravity ($\\log g$),
the microturbulent velocity ($\\xi$), and the
iron abundance.
They were calculated in an iterative process from the
comparison of the observed spectra with the model for a given set of
parameters.
The atmospheric parameters and metal
abundances were determined using the measured $EW$s and a standard
local thermodynamic equilibrium (LTE) analysis with the most recent
version of the line abundance code
MOOG \\citep{sne73} and a grid of \\citet{kur93} ATLAS9 atmospheres,
computed without the overshooting option and with a mixing length to
pressure scale height ratio $\\alpha
=0.5$.
Assumptions made in the models include: the
atmosphere is plane-parallel and in hydrostatic equilibrium, the
total flux is constant, the source function
is described by the Planck function, and the populations of different
excitation levels and ionization stages are governed by LTE.
The abundances were derived from theoretical curves of growth, computed
by MOOG, using model atmospheres
and atomic data (wavelength,
excitation potential, $gf$ values). The input model was constructed using,
as atmospheric parameters, the average values of previous determinations
found in the literature, and solar metallicity.
Further details on the iterative process and on the error
determination are given in \\citet{aff05}.
For an elemental abundance derived
from many lines, the uncertainty of the atmospheric parameters is the dominant
error, while for an abundance derived from a few lines, the uncertainty in the
equivalent widths may be more significant.
In our case, the atmospheric parameters calculated are
$T_{\\rm eff}=4995\\pm 170$~K, $\\log g=3.27\\pm 0.60$ [cm\\,s$^{-2}$],
and $\\xi=1.29\\pm 0.13$~km\\,s$^{-1}$ for AR Psc, and $T_{\\rm eff}
=4967\\pm 185$~K, $\\log g=2.34 \\pm 0.47$ [cm\\,s$^{-2}$], and $\\xi=1.56
\\pm 0.08$~km\\,s$^{-1}$
for AY Cet. In the case of AY Cet, these parameters are in good
agreement with \\citet{ott98}, except for a lower gravity in our
case; the iron abundance agrees with that of Ottmann et al., but our Mg and Si
abundances are substantially higher (they report $-0.22$ and $-0.32$,
respectively).

The coronal abundances are compared to the photospheric abundances of the same
stars (Fig.~\\ref{fig:abuncorona}), using the solar photospheric values
as the scale. There is no evidence of
MAD or of any inverse FIP effect in either of the two cases.
The best-determined values, the abundances of Fe, Ni, and Mg,
are consistent between the corona and the photosphere.
Some metal depletion seems to take place for Si and O in AY Cet.
It must be noted that the photospheric
abundances of elements other than Fe are calculated with fewer lines, so
we are less confident about their values.
These results confirm those of $\\lambda$~And and V851~Cen
\\citep{sanz04}, displayed here for easier comparison
(Fig.~\\ref{fig:abuncorona2}, Table~\\ref{tab:abundances}).
It is also worth checking the abundance ratio [Ne\/O] in the
 corona. \\citet{dra05} find that an average value of [Ne\/O]=0.41,
 measured in the coronae of nearby stars, would help to solve the
 ``solar model problem'' \\citep[see also discussion in][]{sch05}. The
 ratios in the coronae of AR~Psc and AY~Cet are consistent with the
 value observed by \\citet{dra05}. An upward correction of the Ne
 solar abundance, as proposed by these authors, would not affect our
 results since we compared coronal and photospheric abundances of the
 same star.


\\section{Discussion}

The results are suggestive of a lack of any MAD effect.
We see once again that the coronal
abundances of active stars, once they are compared with their own
photospheric values instead of the solar photosphere, show no sign of an
inverse FIP effect or of MAD. So far, all the active stars with an observed
inverse FIP effect (we discard those whose coronal abundances were compared
to the solar photosphere) have
a high projected rotational velocity
(Table~\\ref{tab:pastabundances}). A large $v \\sin i$ broadens the
lines observed at optical wavelengths, which are those usually employed to
calculate photospheric abundances.
The broadening of lines might yield erroneous measurements of equivalent
widths (blends are included in the measurements, and the continuum is more
difficult to place), and
therefore wrong photospheric abundances.
All active stars are fast rotators, but only in the cases with high projected
rotational velocity are the line measurements affected. \\citet{ott98} note
that values of
$v \\sin i \\ga 20$\\,km\\,s$^{-1}$ yield wrong results. We are therefore more
inclined to believe that the inverse FIP effect is an observational artifact
rather than a real phenomenon. Thus, from now on we interpret the
results on this basis, while insisting that more observations are needed to
clarify whether the inverse FIP effect is real.


\\begin{table*}
\\caption{Coronal and photospheric abundances of the elements ([X\/H], solar
 units) in the target stars.}\\label{tab:abundances}
\\tabcolsep 3.pt
\\begin{center}
\\begin{footnotesize}
 \\begin{tabular}{lrccrrrrrrrr}
\\hline \\hline
{X} & {FIP} & Ref.$^a$ & (AG89$^a$) & \\multicolumn{2}{c}{AR Psc} &
\\multicolumn{2}{c}{AY Cet} & \\multicolumn{2}{c}{$\\lambda$ And$^b$} &
\\multicolumn{2}{c}{V851 Cen$^b$}\\\\
 & eV & \\multicolumn{2}{c}{solar photosphere} & Photosphere & Corona & Photosphere & Corona & Photosphere & Corona & Photosphere & Corona\\\\
\\hline
 Na & 5.14 & 6.17 & (6.33) & 0.33$\\pm$ 0.12 & \\ldots & 0.49$\\pm$ 0.09 & \\ldots & -0.09$\\pm$ 0.10 & \\ldots & 0.39$\\pm$ 0.11 & \\ldots \\\\
 Al & 5.98 & 6.37 & (6.47) & 0.33$\\pm$ 0.12 & \\ldots & 0.34$\\pm$ 0.09 & \\ldots & \\ldots & 0.05$\\pm$ 0.17 & 0.35$\\pm$ 0.05 & \\ldots \\\\
 Ca & 6.11 & 6.31 & (6.36) & 0.21$\\pm$ 0.15 & \\ldots & 0.26$\\pm$ 0.15 & \\ldots & -0.15$\\pm$ 0.10 & -0.20$\\pm$ 0.37 & 0.17$\\pm$ 0.08 & 0.55$\\pm$ 0.46 \\\\
 Ni & 7.63 & 6.23 & (6.25) & -0.31$\\pm$ 0.23 & -0.04$\\pm$ 0.27 & -0.08$\\pm$ 0.15 & 0.05$\\pm$ 0.25 & -0.38$\\pm$ 0.10 & -0.28$\\pm$ 0.13 & -0.28$\\pm$ 0.10 & 0.12$\\pm$ 0.43 \\\\
 Mg & 7.64 & 7.53 & (7.58) & -0.02$\\pm$ 0.08 & -0.05$\\pm$ 0.12 & 0.13$\\pm$ 0.07 & -0.04$\\pm$ 0.16 & -0.05$\\pm$ 0.10 & -0.18$\\pm$ 0.07 & 0.10$\\pm$ 0.03 & -0.07$\\pm$ 0.17 \\\\
 Fe & 7.87 & 7.45 & (7.67) & -0.43$\\pm$ 0.20 & -0.18$\\pm$ 0.20 & -0.31$\\pm$ 0.14 & -0.48$\\pm$ 0.15 & -0.28$\\pm$ 0.10 & -0.38$\\pm$ 0.05 & -0.01$\\pm$ 0.10 & -0.28$\\pm$ 0.10 \\\\
 Si & 8.15 & 7.51 & (7.55) & 0.02$\\pm$ 0.25 & -0.14$\\pm$ 0.13 & 0.10$\\pm$ 0.13 & -0.46$\\pm$ 0.18 & -0.26$\\pm$ 0.10 & -0.35$\\pm$ 0.07 & -0.01$\\pm$ 0.09 & -0.58$\\pm$ 0.32 \\\\
 S & 10.36 & 7.14 & (7.21) & \\ldots & \\ldots & \\ldots & 0.09$\\pm$ 0.32 & \\ldots & -0.60$\\pm$ 0.16 & \\ldots & -0.98$\\pm$ 1.32 \\\\
 C & 11.26 & 8.39 & (8.56) & \\ldots & 0.62$\\pm$ 0.33 & \\ldots & 0.28$\\pm$ 0.23 & \\ldots & \\ldots & \\ldots & -0.17$\\pm$ 0.40 \\\\
 O & 13.61 & 8.66 & (8.93) & 0.83$\\pm$ 0.34 & 0.31$\\pm$ 0.13 & 0.94$\\pm$ 0.26 & -0.32$\\pm$ 0.30 & 0.02$\\pm$ 0.10 & -0.03$\\pm$ 0.13 & \\ldots & 0.18$\\pm$ 0.23 \\\\
 N & 14.53 & 7.78 & (8.05) & \\ldots & 0.49$\\pm$ 0.07 & \\ldots & 0.20$\\pm$ 0.15 & \\ldots & \\ldots & \\ldots & 0.27$\\pm$ 0.14 \\\\
 Ar & 15.76 & 6.18 & (6.56) & \\ldots & 0.88$\\pm$ 0.21 & \\ldots & \\ldots & \\ldots & 0.10$\\pm$ 0.26 & \\ldots & 0.33$\\pm$ 0.54 \\\\
 Ne & 21.56 & 7.84 & (8.09) & \\ldots & 0.61$\\pm$ 0.10 & \\ldots & 0.09$\\pm$ 0.13 & \\ldots & 0.17$\\pm$ 0.06 & \\ldots & 0.66$\\pm$ 0.13 \\\\
\\hline
\\end{tabular}
\\end{footnotesize}
\\end{center}
$^a$ Solar photospheric abundances from \\citet{asp05}, adopted in
this work, are expressed on a logarithmic scale.
Note that several values have been
updated in the literature since \\citet{anders}; the latter are also listed in
parentheses for easier comparison.\\\\
$^b$ Results adapted from \\citet{sanz04}.
\\end{table*}

\\begin{figure*}
 \\centering
 \\includegraphics[width=0.45\\textwidth]{12069f7a.eps}
 \\includegraphics[width=0.45\\textwidth]{12069f7b.eps}
 \\caption{Same as in Fig.~\\ref{fig:abuncorona}, but for
 $\\lambda$~And and V851 Cen. Results from \\citet{sanz04}, adapted to
 the solar reference values of \\citet{asp05}.}
 \\label{fig:abuncorona2}
\\end{figure*}

Assuming that no inverse FIP effect exists among active stars, the
observations suit the coronal model proposed by
\\citet{lam04}. Work carried out on the Sun in recent years has
strengthened the idea that Alfv\\'en waves are the main agent responsible for
the energy transport between the chromosphere and the corona of the
Sun. Recent observations with the Hinode mission \\citep{dep07,erd07}
support this idea. In the model described by \\citet{lam04}, the
Alfv\\'en waves, combined with
ponderomotive forces, would be responsible for
part of the energy transport in the corona. Their model explains
the observed abundance patterns in the solar corona: both the
enhancement of low-FIP elements in coronal active regions and the slow wind,
and the absence of a FIP effect in coronal holes and the fast wind.
According to this
model, stars earlier than the Sun, like Procyon, would behave more
like coronal holes, because they have shallower convective zones,
yielding weaker coronal magnetic fields. Their chromospheric Alfv\\'en
waves are stronger than the coronal fields, as in a
coronal hole. In stars later than the Sun, but not yet very active,
convection is deeper, yielding lower-frequency chromospheric
waves and stronger coronal magnetic fields. The wave transmission is less
efficient, and this would yield a reduced FIP effect. This is the case
for $\\epsilon$~Eri or $\\alpha$~Cen.
Following the same reasoning, it makes
sense to find that Alfv\\'en wave transmission is even less efficient for the
more active stars, yielding no FIP fractionation for stars such as AR~Psc
or AY~Cet. Some arguments have been
proposed by \\citet{lam04} to explain why an inverse FIP effect could
be present in active stars, such as AB~Dor: wave reflection in the
chromosphere (more
likely for stars with lower gravity), or turbulence introduced by
differential rotation.

A more complete census of active stars with well-measured photospheric
and coronal abundances would be necessary to confirm these results, but
if we assume that the observed cases of inverse FIP effect are a
product of observational effects, the model proposed by \\citet{lam04}
could actually explain the observed abundance patterns in both low-
and high-activity stars.



\\section{Conclusions}

The results suggest that metal abundance depletion, or an
inverse FIP effect, is not present in these two stars.
This
agrees with results for other active stars with
well-known photospheric abundances, such as $\\lambda$~And or
V851~Cen. Stars with an observed inverse FIP effect have their
photospheric lines broadened by the rapid rotation of the stars,
which hampers the photospheric measurements. The results observed in
AR~Psc and AY~Cet suit the expected behavior of active stars
according to the model of \\citet{lam04}, which combines the use of
ponderomotive forces with Alfv\\'en waves. The same model is also able to
explain both the FIP effect in the solar corona and the slow wind, and the
absence of FIP fractionation in solar coronal holes and the fast wind.
Further measurements of active
stars with low projected rotational velocity would help to confirm
the model.

\\begin{acknowledgements}
 We acknowledge support by the Ram\\'on y Cajal Program
 ref. RYC-2005-000549, financed by the Spanish Ministry of Science.
 JS thanks Aitor Ibarra for his help with the treatment of the XMM
 data. This research has also made use
 of NASA's Astrophysics Data System Abstract Service and the SIMBAD
 Database, operated at CDS, Strasbourg (France).
\\end{acknowledgements}

\\section{Introduction}
Quick detection of change-points from data streams is a classic and fundamental problem in signal processing and statistics, with a wide range of applications from cybersecurity \\cite{LakhinaCrovellaDiot2004} to gene mapping \\cite{Siegmund07}. Classical statistical change-point detection \\cite{Siegmund1985,non-parametric-change93,Chen12,Veeravalli13book,Tartakovsky2014}, where one monitors {\\it i.i.d.} univariate and low-dimensional multivariate observations, is a well-developed area. Outstanding contributions include Shewhart's control chart \\cite{Shewhart31}, Page's CUSUM procedure \\cite{Page1954},
the Shiryaev-Roberts procedure \\cite{Shiryaev1963}, Gordon's non-parametric procedure \\cite{gordon1994efficient}, and window-limited procedures \\cite{lai1995sequential}.
Various asymptotic (see, e.g., \\cite{Lorden1971,Pollak85,Pollak87,lai1995sequential,lai1998information}) and nonasymptotic \\cite{Moustakides86} results have been established for these classical methods. High-dimensional change-point detection (also referred to as multi-sensor change-point detection) is a more recent topic, and various statistical procedures have been proposed, including \\cite{ingster02,ingster09,Ingster10,korostelev2008,Mei2008,VVV08,levy09,Fellouris14,Zaid2014,Xie2015,Mei16}.
However, there has been very little research on the computational aspect of change-point detection, especially in the high-dimensional setting.

\\subsection{Outline}\\label{sectOutline}
This paper presents a completely general computational framework for change-point detection: it can handle many high-dimensional situations while achieving improved false detection control. The main idea is to adapt the framework for hypothesis testing via convex optimization \\cite{PartI} to change-point detection. Change-point detection can be viewed as a multiple-testing problem, where at each time one has to test whether there has been no change so far, or whether a change-point has already occurred.
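In schematic form, the resulting inference process is just the following sequential loop (a minimal sketch in Python, with hypothetical names; here each entry of {\\tt detectors} stands for a per-time decision rule whose false-alarm probability is at most $\\epsilon_t$ under the nuisance hypothesis, so that the union bound over $t$ keeps the overall false-alarm probability at most $\\epsilon=\\sum_t\\epsilon_t$):
\\begin{verbatim}
# Sketch of the generic sequential decision loop; detectors[t-1] is any
# rule mapping the observations available at time t to True ("signal")
# or False ("nuisance so far").
def sequential_change_detection(observations, detectors, d):
    for t in range(1, d + 1):
        y_t = observations[:t]      # everything observed up to time t
        if detectors[t - 1](y_t):   # "signal" conclusion at time t
            return t                # terminate: change declared at time t
    return None                     # nuisance conclusion on the whole horizon
\\end{verbatim}
The concrete content of the approach is, of course, in how the individual decision rules are built; this is where convex optimization enters.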
With our approach, at each time a detector is designed by convex optimization to achieve the above goal.
The convex optimization framework is computationally efficient and can control false detection uniformly according to a pre-specified level.


Since change-point detection in various settings is the subject of a huge literature (see, e.g.,
\\cite{Bass1993,neumann1997optimal,TartVeer2004,GJTZ2008,goldenshluger2008change,Poor2009,Xie2015,Chen12,lai1998information,Siegmund1985,non-parametric-change93,Veeravalli13book,Tartakovsky2014} and references therein), it would be too
time-consuming to position our developments w.r.t. those presented in the literature. Instead,
we illustrate our approach by its application to a simple example and then comment on the ``spirit'' of our constructions and results (which, we believe, is somewhat different from the majority of traditional approaches to change detection).\\par
\\paragraph{Illustrating problem.} We consider a simple version of the classical problem of change detection in the input of a dynamical system (see, e.g., \\cite{gholson1977,willsky1985,mazor1998} and
references therein), where we observe
noisy outputs $\\omega_t\\in{\\mathbf{R}}^\\nu$ of a discrete-time linear time-invariant system on time horizon $t=1, \\ldots, d$:
\\begin{equation}\\label{ieq1}
\\begin{array}{rcl}
x_t&=&Ax_{t-1}+bu_t,\\\\
\\omega_t&=&Cx_t+\\xi_t,
\\end{array}
\\end{equation}
where the inputs $u_t$ are scalars, $A$, $b$, $C$ are known, and the observation noises $\\xi_t\\sim{\\cal N}(0,I_\\nu)$ are independent across time $t=1,...,d$. The input $u=[u_1;...;u_d]$ to the system can be either zero ({\\sl nuisance hypothesis}), or a signal of ``some shape $\\tau\\in\\{1,...,d\\}$ and some magnitude $\\geq \\rho>0$,'' meaning that $u_t=0$ for $t<\\tau$, and $u_\\tau\\geq\\rho$ (so that $\\tau$ represents the change-point location in time); we refer to the latter option as the {\\sl signal hypothesis}. We observe $\\omega_t$'s one by one, and our goal is to design
decision rules $\\{{\\cal T}_t:1\\leq t\\leq d\\}$ and thresholds $\\{\\rho_{t\\tau}>0,1\\leq\\tau\\leq t\\leq d\\}$
in such a way that
\\par$\\bullet$ rule ${\\cal T}_t$ is invoked at time $t$. Depending solely on the observations $\\omega_1,...,\\omega_t$ available at this time, this rule
\\begin{itemize}
\\item either accepts the {signal} hypothesis, in which case we terminate with the ``signal'' conclusion,
 \\item or claims that so far the nuisance hypothesis {is not rejected} (``nuisance conclusion at time $t$''), in which case we pass to time instant $t+1$ (when $t<d$), or terminate with the nuisance conclusion (when $t=d$);
\\end{itemize}
\\par$\\bullet$ the resulting inference procedure meets the following design specifications: (a) when the input is the nuisance $u=0$, the probability of a false alarm (the signal conclusion at some time $t\\leq d$) is at most a given $\\epsilon\\in(0,1\/2)$; (b) whenever the input is a signal of shape $\\tau$ and magnitude $\\geq\\rho_{t\\tau}$, the signal conclusion is made, with probability at least $1-\\epsilon$, at time $t$ or earlier. Of course, we would like the thresholds $\\rho_{t\\tau}$ to be as small as possible.
\\par
To understand what can be achieved, let us fix $\\tau$ and $t$, $\\tau\\leq t\\leq d$, and consider two hypotheses on the input $u$: the hypothesis $H_1$ stating that $u=0$, and the hypothesis $H_2(\\rho)$ stating that $u$ is a signal of shape $\\tau$ and magnitude $\\geq\\rho$, where $\\rho>0$ is a parameter. It may happen that these two hypotheses can be decided upon with risk $\\leq \\epsilon$, meaning that ``in nature'' there exists a test which, depending on observations $\\omega_1,...,\\omega_t$, accepts exactly one of the hypotheses with error probabilities (i.e., the probability to reject $H_1$ when $u=0$ and the probability to reject $H_2(\\rho)$ when $u$ is a signal of shape $\\tau$ and magnitude $\\geq\\rho$) at most $\\epsilon$. One can easily find the smallest $\\rho=\\rho^*_{t\\tau}$ for which such a test exists\\footnote{Note that
the observation $(\\omega_1,...,\\omega_t)$ is of the form ${\\bar A}_tu+\\xi^t$ with standard (zero mean, unit covariance matrix) Gaussian noise $\\xi^t=(\\xi_1,...,\\xi_t)$.
It is immediately seen\nthat $\\rho^*_{t\\tau}$ is the smallest $\\rho$ for which the distance $\\inf_{v\\in V_{t\\tau}}\\|v\\|_2$ from the origin to the convex set\n$V_{t\\tau}=\\{{\\bar A}_tu:\\;u_s=0\\mbox{ for }s<\\tau,\\,u_\\tau\\geq\\rho\\}$ is at least $2{\\mathop{\\hbox{\\small\\rm ErfInv}}}(\\epsilon)$, where ${\\mathop{\\hbox{\\small\\rm ErfInv}}}$ is the inverse error function, see (\\ref{inverf}).}. Clearly, by construction, $\\rho^*_{t\\tau}$ is a lower bound on the threshold $\\rho_{t\\tau}$ of any inference routine which meets the design specifications we are dealing with.\n Near-optimality of our inference routine means, essentially, that our thresholds $\\bar{\\rho}_{t\\tau}$ are close to the ``ideal'' thresholds $\\rho^*_{t\\tau}$ independently of particular values of parameters of model \\rf{ieq1}:\n \\[\n {\\bar{\\rho}_{t\\tau}\\over \\rho^*_{t\\tau}}\\leq {1\\over 2}\\left[1+{{\\mathop{\\hbox{\\small\\rm ErfInv}}}(\\epsilon\/d^2)\\over{\\mathop{\\hbox{\\small\\rm ErfInv}}}(\\epsilon)}\\right]\n \\] (for details, see Proposition \\ref{prop40}).\n\\paragraph{Paper's scope.} The developments to follow are in no sense restricted to the simplest model of nuisance and signal inputs we have considered so far. In fact, we\n{allow nuisance} inputs to vary in a prescribed set $N\\ni 0$, and for signal inputs to have $K$ different ``shapes,'' with signals of ``shape $k\\leq K$ and magnitude $\\rho>0$'' varying in prescribed sets $U_k(\\rho)$ shrinking as $\\rho>0$ grows. We treat\n {two cases separately}:\n\\par {\\bf I.} {\\sl ``Decision rules based on affine detectors,''} in Section \\ref{ChPADSetUp}. In this case, $N\\ni 0$ is a convex compact set, and $U_k(\\rho)=N+\\rho W_k$, where $W_k$ are closed convex sets not containing the origin and such that $\\rho W_k\\subset W_k$ whenever $\\rho\\geq1$, implying that $U_k(\\rho)$ indeed shrinks as $\\rho$ grows. As far as the observation noises are concerned, we require the vector $\\xi^d=(\\xi_1,...,\\xi_d)$ to be zero mean sub-Gaussian, with the (perhaps, unknown) matrix {parameter} (see 4, Section \\ref{sectnotconv}) belonging to a given convex compact set. This case\n {covers the } example we have started with.\n\\par{\\bf II.} {\\sl ``Decision rules based on quadratic detectors,''} in Section \\ref{QDSection}. In this case, $N\\ni 0$ is a bounded set given by a finite system of quadratic inequalities, and $U_k(\\rho)$, $1\\leq k\\leq K$, is given by a parametric system of quadratic inequalities of appropriate structure (for details, see Section \\ref{newsetup}). The simplest illustration here is the case when $u_t$ in (\\ref{ieq1}) are allowed to be vectors, the only nuisance input is $u=0$, and a signal input of shape $\\tau\\leq d$ of magnitude $\\geq \\rho$ is a block-vector $[u_1;...;u_d]$ with $u_1=...=u_{\\tau-1}=0$ and $\\|u_\\tau\\|_2\\geq\\rho$.\n {The noise $\\xi^d=[\\xi_1;...;\\xi_d]$ is assumed} to be zero mean Gaussian, with (perhaps, unknown) covariance matrix varying in a known convex compact set.\n\\paragraph{Comments.} To complete the introduction, let us comment on the ``spirit'' of our constructions and results, which we refer to as {\\em operational}. 
Following the line of research in \\cite{GJN,Seq2015,PartI}, we allow for rather general {\\sl structural} assumptions on the components of our setup (system \\rf{ieq1} and descriptions of nuisance and signal inputs) and are looking for {\\sl computation-friendly} inference routines,
meaning that our easy-to-implement routines and their performance characteristics are given by {\\sl efficient computation} (usually based on Convex Optimization). This appears to be in sharp contrast with the {\\sl descriptive} ``closed analytical form'' procedures and performance characteristics traditional in statistics. While closed analytical form results possess strong explanatory power, they usually impose severe restrictions on the underlying setup and in this respect are much more restrictive than operational results. We believe that in many applications, including those considered in this paper, the relatively broad applicability of operational results more than compensates for the lack of explanatory
power that is typical of computation-based constructions. It should be added that under favorable circumstances (which, in the context of this paper, do take place in case I), the operational procedures we are about to develop
are provably near-optimal in a certain precise sense (see Section \\ref{sect:assessing}). Therefore, their performance, whether good or bad from the viewpoint of a particular application, is nearly the best possible under the circumstances.


\\subsection{Terminology and notation}\\label{sectnotconv}
In what follows,
\\par 1.
 All vectors are column vectors.

 \\par
 2.
 We use ``MATLAB notation:'' for matrices $A_1,...,A_k$ of common width, $[A_1;A_2;...;A_k]$ stands for the matrix obtained by (up-to-down) vertical concatenation of $A_1, A_2,..., A_k$; for matrices $A_1,...,A_k$ of common height, $[A_1,A_2,...,A_k]$ is the matrix obtained by (left-to-right) horizontal concatenation of $A_1, A_2, ...,A_k$.
\\par
3.
 ${\\mathbf{S}}^n$ is the space of $n\\times n$ real symmetric matrices, and ${\\mathbf{S}}^n_+$ is the cone of positive semidefinite matrices from ${\\mathbf{S}}^n$. Relation $A\\succeq B$ ($A\\succ B$) means that $A$, $B$ are symmetric matrices of the same size such that $A-B$ is positive semidefinite (respectively, positive definite), and $B\\preceq A$ ($B\\prec A$) is the same as $A\\succeq B$ (respectively, $A\\succ B$).

\\par
4. ${\\cal SG}[U,{\\cal U}]$, where $U$ is a nonempty subset of ${\\mathbf{R}}^n$ and ${\\cal U}$ is a nonempty subset of ${\\mathbf{S}}^n_+$, stands for the family of all Borel sub-Gaussian
probability distributions on ${\\mathbf{R}}^n$ with sub-Gaussianity parameters from $U\\times{\\cal U}$. In other words, $P\\in {\\cal SG}[U,{\\cal U}]$ if and only if $P$ is a probability distribution such that for some $u\\in U$ and $\\Theta\\in {\\cal U}$ one has $\\ln(\\int {\\rm e}^{h^Ty}P(dy))\\leq u^Th+{1\\over 2}h^T\\Theta h$ for all $h\\in {\\mathbf{R}}^n$ (whenever this is the case, $u$ is the expectation of $P$); we refer to $\\Theta$ as the {\\sl sub-Gaussianity matrix} of $P$.
For a random variable $\\xi$ taking values in ${\\mathbf{R}}^n$, we write $\\xi\\sim {\\cal SG}[U,{\\cal U}]$ to express the fact that the distribution $P$ of $\\xi$ belongs to ${\\cal SG}[U,{\\cal U}]$.\\par\n Similarly, ${\\cal G}[U,{\\cal U}]$ stands for the family of all Gaussian distributions ${\\cal N}(u,\\Theta)$ with expectation $u\\in U$ and covariance matrix $\\Theta\\in{\\cal U}$, and $\\xi\\sim {\\cal G}[U,{\\cal U}]$ means that $\\xi\\sim {\\cal N}(u,\\Theta)$ with $u\\in U$, $\\Theta\\in{\\cal U}$.\n \\par\n5.\n Given two families ${\\cal P}_1$, ${\\cal P}_2$ of Borel probability distributions on ${\\mathbf{R}}^n$ and a {\\sl detector} $\\phi$ (a Borel real-valued function on ${\\mathbf{R}}^n$),\n$\\hbox{\\rm Risk}(\\phi|{\\cal P}_1,{\\cal P}_2)$ stands for the risk of the detector \\cite{GJN} taken w.r.t. the families ${\\cal P}_1$, ${\\cal P}_2$, that is, the smallest $\\epsilon$ such that\n \\begin{equation}\\label{riskis}\n \\begin{array}{lrcl}\n (a)&\\int {\\rm e}^{-\\phi(y)}P(dy)&\\leq&\\epsilon\\,\\;\\forall P\\in{\\cal P}_1,\\\\\n (b)&\\int {\\rm e}^{\\phi(y)}P(dy)&\\leq&\\epsilon\\,\\;\\forall P\\in{\\cal P}_2.\\\\\n \\end{array}\n \\end{equation}\n When ${\\cal T}$ is a test deciding on ${\\cal P}_1$ and ${\\cal P}_2$ via random observation $y\\sim P\\in{\\cal P}_1\\cup{\\cal P}_2$ (that is, ${\\cal T}:{\\mathbf{R}}^n\\to\\{1,2\\}$ is a Borel function, with ${\\cal T}(y)=1$ interpreted as ``given observation $y$, the test accepts the hypothesis $H_1:P\\in{\\cal P}_1$ and rejects the hypothesis $H_2:P\\in{\\cal P}_2$,'' and ${\\cal T}(y)=2$ interpreted as ``given observation $y$, ${\\cal T}$ accepts $H_2$ and rejects $H_1$'')\n$$\\begin{array}{rcl}\n\\hbox{\\rm Risk}_1({\\cal T}|{\\cal P}_1,{\\cal P}_2)&=&\\sup_{P\\in{\\cal P}_1} \\hbox{\\rm Prob}_{y\\sim P}\\{y:{\\cal T}(y)=2\\},\\\\\n\\hbox{\\rm Risk}_2({\\cal T}|{\\cal P}_1,{\\cal P}_2)&=&\\sup_{P\\in{\\cal P}_2}\\hbox{\\rm Prob}_{y\\sim P}\\{y:{\\cal T}(y)=1\\}\\\\\n\\end{array}\n$$\nstand for the partial risks of the test, and\n\\[\n \\hbox{\\rm Risk}({\\cal T}|{\\cal P}_1,{\\cal P}_2)=\\max[\\hbox{\\rm Risk}_1({\\cal T}|{\\cal P}_1,{\\cal P}_2),\\hbox{\\rm Risk}_2({\\cal T}|{\\cal P}_1,{\\cal P}_2)]\n \\] stands for the risk of the test.\n \\par\n A detector $\\phi(\\cdot)$ and a real $\\alpha$ specify a test ${\\cal T}^{\\phi,\\alpha}$ which accepts $H_1$ (${\\cal T}^{\\phi,\\alpha}(y)=1$) when $\\phi(y)\\geq\\alpha$, and accepts $H_2$ (${\\cal T}^{\\phi,\\alpha}(y)=2$) otherwise. From (\\ref{riskis}) it is immediately seen that\n \\begin{equation}\\label{mainbasis}\n \\begin{array}{l}\n \\hbox{\\rm Risk}_1({\\cal T}^{\\phi,\\alpha}|{\\cal P}_1,{\\cal P}_2)\\leq{\\rm e}^{\\alpha}\\hbox{\\rm Risk}(\\phi|{\\cal P}_1,{\\cal P}_2),\\\\\n \\hbox{\\rm Risk}_2({\\cal T}^{\\phi,\\alpha}|{\\cal P}_1,{\\cal P}_2) \\leq {\\rm e}^{-\\alpha}\\hbox{\\rm Risk}(\\phi|{\\cal P}_1,{\\cal P}_2).\n \\end{array}\n \\end{equation}\n\\par\n\nAll proofs are transferred to the appendix.\n\n\\section{Dynamic change detection: preliminaries}\\label{sect:ChPD}\nIn the sequel, we address the situation which can be described informally as follows.\nWe observe\nnoisy outputs of a linear system at times $t=1,...,d$, the input to the system being an unknown vector $x\\in{\\mathbf{R}}^n$. 
Our ``full observation'' is
\\begin{equation}\\label{eqOS}
y^d=\\bar{A}_dx+\\xi^d,
\\end{equation}
where $\\bar{A}_d$ is a given $\\nu_d\\times n$ {\\sl sensing matrix}, and $\\xi^d\\sim{\\cal SG}[\\{0\\},{\\cal U}]$ (see item 4\\ in Section \\ref{sectnotconv}), where ${\\cal U}$ is a given nonempty convex compact subset of $\\mathop{\\hbox{\\rm int}}{\\mathbf{S}}_+^{\\nu_d}$.
\\par
Observation $y^d$ is obtained in $d$ steps; at a step (time instant) $t=1,...,d$, the observation is
\\begin{equation}\\label{eqOSt}
y^t=\\bar{A}_tx+\\xi^t\\equiv S_{t}[\\bar{A}_dx+\\xi^d]\\in{\\mathbf{R}}^{\\nu_t},
\\end{equation}
where $1\\leq \\nu_1\\leq \\nu_2\\leq...\\leq\\nu_d$, $S_t$ is a $\\nu_t\\times \\nu_d$ matrix of rank $\\nu_t$, and $y^t$ ``remembers'' $y^{t-1}$, meaning that $S_{t-1}=R_tS_t$ for some matrix $R_t$.
Clearly, $\\xi^t$ is sub-Gaussian with parameters $(0,\\Theta_t)$,
with
 \\begin{equation}\\label{thetat}
\\Theta_t={S_t}\\Theta S_t^T\\in {\\cal U}_t:=\\{{S_t}\\Theta S_t^T:\\,\\Theta\\in{\\cal U}\\};
\\end{equation}
note that ${\\cal U}_t$, $1\\leq t\\leq d$, are convex compact sets comprised of positive definite $\\nu_t\\times \\nu_t$ matrices.
\\par
Our goal is to build a {\\sl dynamic test} for deciding on the {\\sl null}, or {\\sl nuisance}, hypothesis, stating that the input to the system underlying our observations is a nuisance, vs. the alternative of a {\\sl signal} input.
Specifically, at every time $t=1,...,d$, given observation $y^t$, we can either decide that the input is a signal and terminate (``termination at step $t$ with a signal conclusion,'' or, equivalently, ``detection of a signal input at time $t$''), or decide (``nuisance conclusion at step $t$'') that so far, the nuisance hypothesis holds true, and pass to the next time instant $t+1$ (when $t<d$), or terminate (when $t=d$).

\\section{Change detection via affine detectors}\\label{ChPADSetUp}
\\subsection{Setup}\\label{ADSetUp}
In this section, we specialize the just outlined situation as follows:
\\par1. All candidate inputs belong to a given convex compact set $X\\subset{\\mathbf{R}}^n$, and {\\sl nuisance} inputs run through a given closed convex subset $N$ of $X$ with $0\\in N$.
\\par2. {\\sl Signal} inputs are, informally, sums of nuisances and {\\sl activations}. Specifically, we are given $K$ nonempty closed convex sets $W_k\\subset{\\mathbf{R}}^n$, $1\\leq k\\leq K$, not containing the origin and {\\sl semi-conic}, meaning that $\\rho W_k\\subset W_k$ whenever $\\rho\\geq1$; an activation of shape $k$ and magnitude $\\geq\\rho$, $\\rho>0$, is an element of the set
 $$
 W_k^\\rho=\\{w=\\rho y:y\\in W_k\\}.
 $$
 \\begin{quote}
 {\\small
{\\bf Example:} Let $K=n$ and let $W_k$ be the set of all inputs $w\\in{\\mathbf{R}}^n$ with the first $k-1$ entries in $w$ equal to zero, and $k$-th entry $\\geq1$. In this case, the shape of an activation $w\\in{\\mathbf{R}}^n$ is its ``location'' -- the index of the first nonzero entry in $w$, and activations of shape $k$ and magnitude $\\geq\\rho$ are vectors $w$ from ${\\mathbf{R}}^n$ with the first nonzero entry in position $k$ and the value of this entry at least $\\rho$.\\\\
We have presented the simplest formalization of what informally could be called ``activation up.'' To get an equally simple formalization of an ``activation down,'' one should take $K=2n$ and define $W_{2i-1}$ and $W_{2i}$, $i\\leq n$, as the sets of all vectors from ${\\mathbf{R}}^n$ for which the first nonzero entry is in position $i$, and the value of this entry is at least 1 for $W_{2i-1}$ (``activation up'' of magnitude $\\geq1$ at time $i$) or is at most $-1$ for $W_{2i}$ (``activation down'' of magnitude $\\geq1$ at time $i$).}
\\end{quote}
\\par3.
{\\sl The formal} description of ``signal'' inputs is as follows: these are vectors $x$ from $X$ which for some $k\\leq K$ can be represented as $x=v+w$ with $v\\in V_k$ and $w\\in W_k^\\rho$ for some $\\rho >0$, where $W_k$ are as described above, and $V_k$, $0\\in V_k$, are nonempty compact convex subsets of $X$.\\footnote{In the informal description of signals, $V_k$ were identified with the set $N$ of nuisances; now we lift this restriction in order to add more flexibility.} Thus, when speaking about signals (or signal inputs), we assume that we are given $K$ nonempty closed convex sets $W_k$, $k\\leq K$, each of them semi-conic and not containing the origin, and $K$ nonempty compact convex sets $V_k\\subset X$. These sets give rise to single-parametric families of compact convex sets
\\[
\\begin{array}{rclcrcl}
W^\\rho_k&=&\\{\\rho y:y\\in W_k\\},&
X^\\rho_k&=&[V_k+W^\\rho_k]\\bigcap X,\\\\
\\end{array}
\\]
indexed by ``activation shape'' $k$ and
parameterized by ``activation magnitude'' $\\rho>0$. Signals are exactly the elements of the set $\\widehat{X}=\\bigcup_{{\\rho>0,k\\leq K}}X^\\rho_k$. In the sequel, we refer to inputs from $N$ as {\\sl feasible nuisances}, to inputs from $X^\\rho_k$ as {\\sl feasible signals with activation of shape $k$ and magnitude $\\geq\\rho$}, and to inputs from $\\widehat{X}$ as {\\sl feasible signals}.
To save words, in what follows ``a signal of shape $k$ and magnitude $\\geq\\rho$'' means exactly the same as ``a signal with activation of shape $k$ and magnitude $\\geq\\rho$.''
\\par
From now on, we make the following assumption:
\\begin{assump}\\label{ass:0}
For every $k\\leq K$, there exists $R_k>0$ such that the set $X^{R_k}_k$ is nonempty.
\\end{assump}
Since $X^\\rho_k$ shrinks as $\\rho$ grows due to the semi-conicity of $W_k$, it follows that {\\sl for every $k$, the sets $X^\\rho_k$ are nonempty for all small enough positive $\\rho$.}
\\subsection{Construction}\\label{sgcase}
\\subsubsection{Outline}\\label{sec:outline}
Given an upper bound $\\epsilon\\in(0,1\/2)$ on the probability of false alarm, our course of action is as follows.


 \\par1. We select $d$ positive reals $\\epsilon_t$, $1\\leq t\\leq d$, such that $\\sum_{t=1}^d\\epsilon_t=\\epsilon$; $\\epsilon_t$ will be an upper bound on the probability of a false alarm at time $t$.

 \\par2.
We select thresholds $\\rho_{tk}>0$, $1\\leq k\\leq K$,
in such a way that a properly designed test ${\\cal T}_t$ utilizing the techniques of \\cite[Section 3]{PartI} is able to distinguish reliably, given an observation $y^t$, between the hypotheses $H_{1,t}:x\\in N$ and $H_{2,t}:x\\in\\bigcup\\limits_{k=1}^KX^{\\rho_{tk}}_k$ on the input $x$ underlying observation $y^t$. After $y^t$ is observed, we
 apply test ${\\cal T}_t$ to this observation, and, according to what the test says,
 \\begin{itemize}
 \\item either claim that the input is a signal, and terminate,
 \\item or claim that {\\sl so far}, the hypothesis of a nuisance input seems to be valid, and either pass to the next observation (when $t<d$), or terminate (when $t=d$).
 \\end{itemize}
\\subsubsection{Implementation: preliminaries}
Let us fix $k\\leq K$. Observe that the set $X^\\rho_k$ is nonempty when $\\rho>0$ is small enough (this was already assumed) and is empty for all large enough values of $\\rho$ (since $X$ is compact and $W_k$ is a nonempty closed convex set not containing the origin).
From these observations and compactness of $X$ it follows that {\\sl there exists the largest $\\rho=R_k>0$ for which $X^\\rho_k$ is nonempty.}\n\\par\nLet us fix $t\\in\\{1,...,d\\}$, and let\n\\begin{equation}\\label{letUt}\n{\\cal U}_t=\\{S_t\\Theta S_t^T:\\Theta\\in{\\cal U}\\}\n \\end{equation}\n be the set of allowed covariance matrices of the observation noise $\\xi^t$ in observation $y^t$, so that ${\\cal U}_t$ is a convex compact subset of the interior of ${\\mathbf{S}}^{\\nu_t}_+$.\nAccording to our assumptions, for any nuisance input the distribution of the associated observation $y^t$, see (\\ref{eqOSt}), belongs to the family\n${\\cal SG}[N^t,{\\cal U}_t]$, with\n\\begin{equation}\\label{letNt}\nN^t=\\bar{A}_tN,\n\\end{equation}\nwhere $N\\subset X$ is the convex compact set of nuisance inputs. Given, along with $t$, an integer $k\\leq K$ and a real $\\rho\\in(0,R_k]$, we can define the set\n\\begin{equation}\\label{letUtkrho}\nU^t_{k\\rho}=\\{\\bar{A}_t x:x\\in X_k^\\rho\\};\n\\end{equation}\nwhatever be a signal input from $X_k^\\rho$, the distribution of\n {observation $y^t$ associated with $x$} belongs to the family ${\\cal SG}[U^t_{k\\rho},{\\cal U}_t]$. Applying\nProposition \\ref{subG} to data $U_1=N^t$, $U_2=U^t_{k\\rho}$, and ${\\cal U}={\\cal U}_t$, we arrive at the convex-concave saddle point problem\n\\begin{equation}\\label{ccproblem}\n{\\cal SV}_{tk}(\\rho)=\\min\\limits_{h\\in{\\mathbf{R}}^{\\nu_t}}\\max\\limits_{\\theta_1\\in N^t,\\theta_2\\in U^t_{k\\rho},\\Theta\\in {\\cal U}_t}\n\\left[ \\mbox{\\small$\\frac{1}{2}$} h^T[\\theta_2-\\theta_1]+ \\mbox{\\small$\\frac{1}{2}$} h^T\\Theta h\\right].\n\\end{equation}\nThe corresponding saddle point\n$$\n(h_{tk\\rho};\\theta_{tk\\rho}^1,\\theta_{tk\\rho}^2,\\Theta_{tk\\rho})\n$$\ndoes exist and gives rise to the affine detector\n\\begin{equation}\\label{eq600}\n\\phi_{tk\\rho}(y^t)=h_{tk\\rho}^T[y^t-w_{tk\\rho}],\\,\\,w_{tk\\rho}= \\mbox{\\small$\\frac{1}{2}$}[\\theta_{tk\\rho}^1+\\theta_{tk\\rho}^2],\n\\end{equation}\nand risk\n\\begin{equation}\\label{eq601}\n\\begin{array}{l}\n\\hbox{\\rm Risk}(\\phi_{tk\\rho}|{\\cal SG}[N^t,{\\cal U}_t],{\\cal SG}[U^t_{k\\rho},{\\cal U}_t])\\leq \\epsilon_{tk\\rho}:=\\exp\\{{\\cal SV}_{tk}(\\rho)\\}\\\\\n\\multicolumn{1}{r}{=\n\\exp\\{-{ \\mbox{\\small$\\frac{1}{8}$}}[\\theta_{tk\\rho}^1-\\theta_{tk\\rho}^2]^T[\\Theta_{tk\\rho}]^{-1}[\\theta_{tk\\rho}^1-\\theta_{tk\\rho}^2]\\}.}\\\\\n\\end{array}\n\\end{equation}\nTherefore, in view of (\\ref{mainbasis}),\n\\begin{equation}\\label{eq602}\n\\begin{array}{rcll}\n\\int\\limits_{{\\mathbf{R}}^{\\nu_t}}\\exp\\{-\\phi_{tk\\rho}(y^t)\\}P(dy^t)&\\leq&\\epsilon_{tk\\rho}&\\forall P\\in{\\cal SG}[N^t,{\\cal U}_t],\\\\\n\\int\\limits_{{\\mathbf{R}}^{\\nu_t}}\\exp\\{\\phi_{tk\\rho}(y^t)\\}P(dy^t)&\\leq&\\epsilon_{tk\\rho}&\\forall P\\in{\\cal SG}[U^t_{k\\rho},{\\cal U}_t].\n\\end{array}\n\\end{equation}\nTo proceed, we need the following simple observation:\n\\begin{lemma}\\label{lem1}\nFor every $t\\in\\{1,...,d\\}$ and $k\\in\\{1,...,K\\}$, the function ${\\cal SV}_{tk}(\\rho)$ is concave, nonpositive and nonincreasing continuous function of $\\rho\\in(0,R_k]$, and\n$\\lim_{\\rho\\to+0}{\\cal SV}_{tk}(\\rho)=0$.\\par\nMoreover, if ${\\cal U}$ contains a $\\succeq$-largest element $\\overline{\\Theta}$, that is, $\\overline{\\Theta}\\succeq \\Theta$ for some $\\overline{\\Theta}\\in {\\cal U}$ and all $\\Theta\\in {\\cal U}$, then $\\Gamma_{tk}(\\rho)=\\sqrt{-{\\cal SV}_{tk}(\\rho)}$ is a nondecreasing continuous convex nonnegative function 
on $\\Delta_k=(0,R_k]$.
\\end{lemma}

\\subsubsection{Implementation: construction}\\label{impl:constr}
Recall that we have split the required false alarm probability $\\epsilon$ between decision steps $t=1,...,d$:
$$
\\epsilon=\\sum_{t=1}^d\\epsilon_t.\\eqno{[\\epsilon_t>0\\,\\forall t]}
$$
At time instant $t\\in\\{1,...,d\\}$ we act as follows:
\\par1. For $\\varkappa\\in(0,1]$, let
\\[
\\begin{array}{rcl}
{\\cal K}_t(\\varkappa)&=&\\{k\\leq K: {\\cal SV}_{tk}(R_k)< \\ln(\\varkappa)\\},\\\\
K_t(\\varkappa)&=&\\mathop{\\hbox{\\rm Card}}{\\cal K}_t(\\varkappa),
\\end{array}
\\]
so that $K_t(\\varkappa)$ is nondecreasing and continuous from the left,
and let\\footnote{Specific choices of the parameters $\\varkappa_t$, $K_t(\\varkappa_t)$, etc., allow us to control the false alarm and signal miss probabilities; the rationale behind these choices becomes clear from the proof of Proposition \\ref{prop16}.}
\\begin{equation}\\label{varkappat}
\\varkappa_t=\\sup\\left\\{\\varkappa\\in(0,1]:K_t(\\varkappa)\\leq{\\epsilon\\epsilon_t\\over\\varkappa^2}\\right\\}.
\\end{equation}
Clearly, $\\varkappa_t$ is well defined, takes values in $(0,1]$, and since $K_t(\\varkappa)$ is continuous from the left, we have
\\begin{equation}\\label{eq700}
K_t(\\varkappa_t)\\leq {\\epsilon\\epsilon_t\\over\\varkappa_t^2}.
\\end{equation}
\\par2. For $k\\in{\\cal K}_t(\\varkappa_t)$, we have $0=\\lim_{\\rho\\to+0}{\\cal SV}_{tk}(\\rho)>\\ln(\\varkappa_t)$ and ${\\cal SV}_{tk}(R_k)<\\ln(\\varkappa_t)$. Invoking Lemma \\ref{lem1}, there exists (and can be rapidly approximated to high accuracy by bisection) $\\rho_{tk}\\in(0,R_k)$ such that
\\begin{equation}\\label{balance}
{\\cal SV}_{tk}(\\rho_{tk})=\\ln(\\varkappa_t).
\\end{equation}
After $\\rho_{tk}$ is specified, we build the associated detector $\\phi_{tk}(\\cdot)\\equiv \\phi_{tk\\rho_{tk}}(\\cdot)$ according to (\\ref{eq600}). Note that the risk (\\ref{eq601}) of this detector is $\\epsilon_{tk\\rho_{tk}}=\\varkappa_t$.
\\par
For $k\\not\\in{\\cal K}_t(\\varkappa_t)$, we set $\\rho_{tk}=+\\infty$.
\\par3.
Finally, we set
$\\alpha_t=\\ln(\\varkappa_t\/\\epsilon)$
and process observation $y^t$ at step $t$ as follows:
\\begin{itemize}
\\item if there exists $k$ such that $\\rho_{tk}<\\infty$ and $\\phi_{tk}( {y^t})<\\alpha_t$, we claim that the input underlying the observation is a signal and terminate;
\\item otherwise, we claim that so far, the nuisance hypothesis is not rejected, and pass to the next time instant $t+1$ (when $t<d$), or terminate (when $t=d$).
\\end{itemize}
The performance of this procedure is summarized by the following statement, proved in the appendix:
\\begin{proposition}\\label{prop16}
For the just defined inference procedure, the probability of a false alarm (the signal conclusion at some time $t\\leq d$ when the input is a feasible nuisance) is at most $\\epsilon$. Moreover, whenever the input is a feasible signal of shape $k$ and magnitude $\\geq\\rho_{tk}$ with $\\rho_{tk}<\\infty$, the probability of the signal conclusion being made at time $t$ or earlier is at least $1-\\epsilon$.
\\end{proposition}
\\subsubsection{Refinement in the Gaussian case}\\label{sect:refinement}
When the observation noise is Gaussian, $\\xi^d\\sim{\\cal G}[\\{0\\},{\\cal U}]$, the above construction can be refined. At time instant $t\\in\\{1,...,d\\}$ we now act as follows:
\\par1. For $\\delta>0$, we set
\\[
{\\cal L}_t(\\delta)=\\{k\\leq K: {\\cal SV}_{tk}(R_k)\\leq - \\mbox{\\small$\\frac{1}{2}$}\\delta^2\\},\\;\\;
L_t(\\delta)=\\mathop{\\hbox{\\rm Card}}{\\cal L}_t(\\delta),
\\]
and specify $\\delta_t>0$ satisfying
\\begin{equation}\\label{deltat}
\\delta_t= \\mbox{\\small$\\frac{1}{2}$}\\left[{\\mathop{\\hbox{\\small\\rm ErfInv}}}(\\epsilon)+{\\mathop{\\hbox{\\small\\rm ErfInv}}}(\\epsilon_t\/L_t(\\delta_t))\\right],
\\end{equation}
which clearly ensures that
\\begin{equation}\\label{eq801}
\\delta_t\\geq \\mbox{\\small$\\frac{1}{2}$}\\left|{\\mathop{\\hbox{\\small\\rm ErfInv}}}(\\epsilon)-{\\mathop{\\hbox{\\small\\rm ErfInv}}}(\\epsilon_t\/L_t(\\delta_t))\\right|.
\\end{equation}
\\par2. For $k\\in{\\cal L}_t(\\delta_t)$, we have ${\\cal SV}_{tk}(R_k)\\leq- \\mbox{\\small$\\frac{1}{2}$}\\delta_t^2$, while ${\\cal SV}_{tk}(\\rho)>- \\mbox{\\small$\\frac{1}{2}$}\\delta_t^2$ for all small enough $\\rho>0$.
Invoking Lemma \\ref{lem1}, there exists (and can be rapidly approximated to high accuracy by bisection) $\\rho=\\rho_{tk}\\in\\Delta_k$ such that\n\\begin{equation}\\label{balance1}\n{\\cal SV}_{tk}(\\rho_{tk})=- \\mbox{\\small$\\frac{1}{2}$}\\delta_t^2.\n\\end{equation}\nAfter $\\rho_{tk}$ is specified, we define the associated detector $\\phi_{tk}(\\cdot)\\equiv \\phi_{tk\\rho_{tk}}(\\cdot)$ by\n {applying the construction from Proposition \\ref{subG}\nto the data $U_1=N^t$, $U_2=U^t_{k\\rho_{tk}}$, ${\\cal U}={\\cal U}_t$}(see (\\ref{letUt}), (\\ref{letNt}), (\\ref{letUtkrho})), that is, find a\nsaddle point\n$(h_*;\\theta_1^*,\\theta_2^*,\\Theta_*)$ of the convex-concave function\n$$\n \\mbox{\\small$\\frac{1}{2}$} h^T[\\theta_2-\\theta_1]+ \\mbox{\\small$\\frac{1}{2}$} h^T\\Theta h:{\\mathbf{R}}^{\\nu_t}\\times(N^t\\times U^t_{k\\rho_{tk}}\\times{\\cal U}_t)\\to{\\mathbf{R}}\n$$\n(such a saddle point does exist).\nBy Proposition \\ref{subG}, the affine detector\n$$\n\\phi_{tk}(y^t)= h_*[y^t-w_*],\\,\\,w_*= \\mbox{\\small$\\frac{1}{2}$}[\\theta_1^*+\\theta_2^*]\n$$\nhas the risk {bounded by}\n\\begin{equation}\\label{eq367}\n\\exp\\{- \\mbox{\\small$\\frac{1}{2}$}\\delta^2\\}=\\epsilon_\\star=\\exp\\{ \\mbox{\\small$\\frac{1}{2}$} h_*^T[\\theta_2^*-\\theta_1^*]+ \\mbox{\\small$\\frac{1}{2}$} {h_*^T}\\Theta_*h_*\\}.\n\\end{equation}\nMoreover (see (\\ref{eq1100Aff})), for all $\\alpha\\leq\\delta^2$ and $\\beta\\leq\\delta^2$ it holds\n\\begin{equation}\\label{eq1111}\n\\begin{array}{lll}\n(a)&\\forall (\\theta\\in N^t,\\Theta\\in {\\cal U}_t):&\\hbox{\\rm Prob}_{y^t\\sim {\\cal N}(\\theta,\\Theta)}\\{\\phi_{tk}(y^t)\\leq\\alpha\\}\\leq {\\mathop{\\hbox{\\small\\rm Erf}}}(\\delta-\\alpha\/\\delta),\\\\\n(b)&\\forall (\\theta\\in U^t_{k\\rho_{tk}},\\Theta\\in {\\cal U}_t):&\\hbox{\\rm Prob}_{y^t\\sim {\\cal N}(\\theta,\\Theta)}\\{\\phi_{tk}(y^t)\\geq-\\beta\\}\\leq {\\mathop{\\hbox{\\small\\rm Erf}}}(\\delta-\\beta\/\\delta).\n\\end{array}\n\\end{equation}\nComparing the second equality in (\\ref{eq367}) with the description of ${\\cal SV}_{tk}(\\rho_{tk})$, we see that $\\epsilon_\\star=\\exp\\{{\\cal SV}_{tk}(\\rho_{tk})\\}$, which\ncombines with the first equality in (\\ref{eq367}) and with (\\ref{balance1}) to imply that $\\delta$ in (\\ref{eq367}) is nothing but $\\delta_t$ as given by (\\ref{deltat}). The bottom line is that\n\\begin{quotation}\\noindent\n$(\\#)$ {\\sl For $k\\in{\\cal L}_t(\\delta_t)$, we have defined reals $\\rho_{tk}\\in\\Delta_k$ and affine detectors $\\phi_{tk}(y^t)$ such that relations {\\rm (\\ref{eq1111})} are satisfied with $\\delta=\\delta_t$ given by {\\rm (\\ref{deltat})} and every $\\alpha\\leq\\delta_t^2,\\beta\\leq\\delta_t^2$.}\n\\end{quotation}\nFor $k\\not\\in{\\cal L}_t(\\delta_t)$, we set $\\rho_{tk}=\\infty$.\n\\par3.\\\nFinally, we process observation $y^t$ at step $t$ as follows. We set\n\\begin{equation}\\label{eqalphabeta}\n\\alpha=-\\beta={\\delta_t\\over 2}\\left[{\\mathop{\\hbox{\\small\\rm ErfInv}}}(\\epsilon)-{\\mathop{\\hbox{\\small\\rm ErfInv}}}(\\epsilon_t\/L_t(\\delta_t))\\right],\n\\end{equation}\nthus ensuring, in view of (\\ref{eq801}), that $\\alpha\\leq\\delta_t^2,\\beta\\leq\\delta_t^2$. Next, given observation $y^t$, we look at {the} $k$'s with finite $\\rho_{tk}$ (that is, at $k$'s from ${\\cal L}_t(\\delta_t)$) and check whether for at least one of these $k$'s the relation $\\phi_{tk}(y^t)<\\alpha$ is satisfied. 
If this is the case, we terminate and claim that the input is a signal; otherwise we claim that so far, the nuisance hypothesis seems to be true, and pass to time $t+1$ (if $t<d$), or terminate with the nuisance conclusion (if $t=d$).
\\subsection{Assessing quality of inferences}\\label{sect:assessing}
Let us fix $\\epsilon\\in(0,1\/2)$, a time instant $t\\leq d$, and $k\\leq K$, and assume, first, that ${\\cal SV}_{tk}(R_k)>- \\mbox{\\small$\\frac{1}{2}$} {{\\mathop{\\hbox{\\small\\rm ErfInv}}}^2(\\epsilon)}$. In this case, informally speaking, even the feasible signal of shape $k$ and the largest possible magnitude $R_k$ does not
allow one
to claim at time $t$ that the input is a signal ``$(1-\\epsilon)$-reliably.''
\\begin{quote}
Indeed, denoting by $(h_*;\\theta_1^*,\\theta_2^*,\\Theta_*)$ the saddle point of the convex-concave function (\\ref{ccproblem}) with $\\rho=R_k$, we have
$
\\theta_1^*=\\bar{A}_tz_*$ with some $z_*\\in N$, $\\theta_2^*=\\bar{A}_t[v_*+R_kw_*]$ with $v_*\\in V_k$, $w_*\\in W_k$ and $v_*+R_kw_*\\in X$, and
$$ \\mbox{\\small$\\frac{1}{2}$}\\|[\\Theta^*]^{-1\/2}[\\theta_1^*-\\theta_2^*]\\|_2=\\sqrt{-2{\\cal SV}_{tk}(R_k)}<{\\mathop{\\hbox{\\small\\rm ErfInv}}}(\\epsilon).$$
The latter implies that when $\\xi^t\\sim{\\cal N}(0,\\Theta_*)$ (which is possible), there is no test which allows distinguishing via observation $y^t$ with risk $\\leq\\epsilon$ between the feasible nuisance input $z_*$ and the feasible signal $v_*+R_k w_*$ of shape $k$ and magnitude $\\geq R_k$. In other words, even after the nuisance hypothesis is reduced to a single nuisance input $z_*$, and the alternative to this hypothesis is reduced to a single signal $v_*+R_k w_*$ of shape $k$ and magnitude $R_k$, we are still unable to distinguish $(1-\\epsilon)$-reliably between these two hypotheses via the observation $y^t$ available at time $t$.
\\end{quote}
Now consider the situation where
\\begin{equation}\\label{assum1}
{\\cal SV}_{tk}(R_k)\\leq- \\mbox{\\small$\\frac{1}{2}$} {{\\mathop{\\hbox{\\small\\rm ErfInv}}}^2(\\epsilon)},
\\end{equation}
so that there exists $\\rho_{tk}^*\\in(0,R_k)$ such that
\\begin{equation}\\label{rhostar}
{\\cal SV}_{tk}(\\rho_{tk}^*)=- \\mbox{\\small$\\frac{1}{2}$} {{\\mathop{\\hbox{\\small\\rm ErfInv}}}^2(\\epsilon)}.
\\end{equation}
Similarly to the above, $\\rho_{tk}^*$ is just the smallest magnitude of a signal of shape $k$ which is distinguishable from a nuisance at time $t$, meaning that for every $\\rho'<\\rho_{tk}^*$ there
exist a feasible nuisance input $u$ and a feasible signal input of shape $k$ and magnitude $\\geq\\rho'$ such that these two inputs cannot be distinguished via $y^t$ with risk $\\leq \\epsilon$. A natural way to quantify the quality of an inference procedure is to look at the smallest magnitude $\\rho$ of a feasible signal of shape $k$ which, with probability $1-\\epsilon$, ensures the signal conclusion and termination at time $t$. We can quantify the performance of a procedure by the ratios $\\rho\/\\rho_{tk}^*$ stemming from various $t$ and $k$; the closer these ratios are to 1, the better. The result of this quantification for the inference procedures we have developed is as follows:
\\begin{proposition}\\label{prop40} Let $\\epsilon\\in(0,1\/2)$, $t\\leq d$ and $k\\leq K$ be such that {\\rm (\\ref{assum1})} is satisfied. Let $\\epsilon_t=\\epsilon\/d$, $1\\leq t\\leq d$, and let $\\rho_{tk}^*\\in(0,R_k)$ be given by {\\rm (\\ref{rhostar})}.
Let, further, a real $\\chi$ satisfy
$\\chi>\\underline{\\chi}$, where
\\begin{equation}\\label{chi}
\\underline{\\chi}=\\left\\{
\\begin{array}{ll}{1\\over 2}\\left[1+{{\\mathop{\\hbox{\\small\\rm ErfInv}}}(\\epsilon\/(Kd))\\over{\\mathop{\\hbox{\\small\\rm ErfInv}}}(\\epsilon)}\\right],&\\hbox{Gaussian case,}\\\\
& \\hbox{${\\cal U}$ contains a $\\succeq$-largest element,}\\\\
\\left({1\\over 2}\\left[1+{{\\mathop{\\hbox{\\small\\rm ErfInv}}}(\\epsilon\/(Kd))\\over{\\mathop{\\hbox{\\small\\rm ErfInv}}}(\\epsilon)}\\right]\\right)^2,&\\hbox{Gaussian case,}\\\\&\\hbox{${\\cal U}$ does not contain a $\\succeq$-largest element,}\\\\
{\\ln(Kd\/\\epsilon^2)\\over{\\mathop{\\hbox{\\small\\rm ErfInv}}}^2(\\epsilon)},&\\hbox{sub-Gaussian case.}
\\end{array}\\right.
\\end{equation}
Then, whenever the input is a feasible signal of shape $k$ and magnitude at least $\\chi\\rho_{tk}^*$, the probability for the inference procedure from Section \\ref{sgcase} in the sub-Gaussian case, and for the procedure from Section \\ref{sect:refinement} in the Gaussian case, to terminate at time $t$ with the signal inference is at least $1-\\epsilon$.
\\end{proposition}

\\paragraph{Discussion.} Proposition \\ref{prop40} states that when {\\rm (\\ref{assum1})} holds (which, as was explained, just says that feasible signals of shape $k$ of the largest possible magnitude $R_k$ can be $(1-\\epsilon)$-reliably detected at time $t$), the ratio $\\chi$ of
the magnitude of a signal of shape $k$ which is detected $(1-\\epsilon)$-reliably by the inference procedure we have developed to the lower bound $\\rho_{tk}^*$ on the magnitude of an activation of shape $k$ detectable $(1-\\epsilon)$-reliably at time $t$ by {\\sl any} inference procedure can be made arbitrarily close to the right-hand side quantities in (\\ref{chi}). It is immediately seen that the latter quantities are upper-bounded by $\\bar{\\chi}=O(1)\\ln(Kd\/\\epsilon)\/\\ln(1\/\\epsilon)$, provided $\\epsilon\\leq 0.5$. We see that {\\sl unless $K$ and\/or $d$ are extremely large, $\\bar{\\chi}$ is a moderate constant}. Moreover, when $K$, $d$ remain fixed and $\\epsilon\\to+0$, we have $\\bar{\\chi}\\to 1$, which, informally speaking, means that {\\sl with $K,d$ fixed, the performance of the inference routines in this section
approaches the optimal performance as $\\epsilon\\to+0$.}
\\subsection{Numerical illustration}\\label{numill}
The setup of the numerical experiment we are about to report upon is as follows. We observe on time horizon $\\{t=1,2,...,d=16\\}$ the output $z_1,z_2,...$ of the dynamical system
\\begin{equation}\\label{finiteD}
(I-\\Delta)^3 z=\\kappa (I-\\Delta)^2(u+\\zeta),\\,\\,\\kappa=(0.1d)^{-3}\\approx 0.244,
\\end{equation}
where $\\Delta$ is the shift in the space of two-sided sequences: $(\\Delta z)_t=z_{t-1}$, $\\{u_t:-\\infty< t<\\infty\\}$ is the input, and $\\zeta=\\{\\zeta_t:-\\infty<t<\\infty\\}$ is a random noise with independent ${\\cal N}(0,1)$ entries. The only nuisance input is $u=0$; in addition, we impose on all feasible inputs the upper bound $R=10^4$ on their uniform norm. We consider three activation geometries: ``pulses'' ($u_t=0$ for $t\\neq k$ and $u_k\\geq\\rho$), ``jumps up'' ($u_t=0$ for $t<k$ and $u_k\\geq\\rho$), and ``steps'' ($u_t=0$ for $t<k$ and $u_t=u_k\\geq\\rho$ for $t>k$).
\\par
In this situation, the detection problem becomes a version of the standard problem of detecting sequentially a pulse of a given form in the (third) derivative of a time series observed in Gaussian noise.
The goal of our experiment was to evaluate the performance of the inference procedure from Section \\ref{sect:refinement} for this example. The procedure was tuned to the probability of false alarm $\\epsilon=0.01$, equally distributed between the $d=16$ time instants, that is, we used $\\epsilon_t=0.01\/16$, $t=1,...,d=16$.
\\par
We present the numerical results in Figure \\ref{fig111}.
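(Recall that in explicit recurrent form, (\\ref{finiteD}) reads
\\[
z_t=3z_{t-1}-3z_{t-2}+z_{t-3}+\\kappa\\left[(u_t+\\zeta_t)-2(u_{t-1}+\\zeta_{t-1})+(u_{t-2}+\\zeta_{t-2})\\right],
\\]
which is nothing but the expansion of the binomials $(I-\\Delta)^3=I-3\\Delta+3\\Delta^2-\\Delta^3$ and $(I-\\Delta)^2=I-2\\Delta+\\Delta^2$.)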
We denote by $\\rho_{tk}$ the magnitude of an activation of shape $k$ which is provably detected at time $t$ with confidence level $0.99$; we also denote
by $\\rho^*_{tk}$ the ``oracle'' lower bound on this quantity.\\footnote{$\\rho_{tk}^*$ (defined in Section \\ref{sect:assessing}) is the minimal magnitude of an activation of shape $k$ such that the ``ideal'' inference which knows $k$ in advance, tuned for
reliability $0.99$, terminates with a signal conclusion at time $t$. When $\\rho_{tk}^*>R$, the maximal allowed activation magnitude, we set
$\\rho^*_{tk}=+\\infty$.
Recall that in the reported experiments $R=10^4$ is used.} Figure \\ref{fig111} displays the dependence of $\\rho_{tk}$ (left plots) and of the ratio $\\rho_{tk}\/\\rho^*_{tk}$ (right plots) on $k$ (horizontal axis) for different activation geometries (pulses, jumps up, and steps). We display these data only for the pairs $t,k$ with finite $\\rho^*_{tk}$; recall that $\\rho^*_{tk}=\\infty$ means that with the upper bound $R=10^4$ on the uniform norm of a feasible input, even the ideal inference does not
allow us to detect 0.99-reliably an activation of shape $k$ at time $t$.\\par Our experiment shows that $\\rho^*_{tk}$ is finite in the domain $\\{(t,k):\\, 4\\leq t\\leq 16,\\,3\\leq k\\leq t\\}$. The restriction $k\\leq t$ is quite natural: we cannot detect a signal of shape $k$ before the corresponding activation starts. Note that signals of shapes $k=1,2$ are ``undetectable,'' and that no signal inputs can be detected at time $t=3$, seemingly because the activation can be completely masked by the initial conditions in the case of ``early'' activation and\/or a short observation horizon. Our experiment shows that this phenomenon affects equally the inference routines from Sections \\ref{sgcase} and \\ref{sect:refinement} and the ideal detection, and disappears when the initial conditions for (\\ref{eqOStnew}) are set to 0 and our inferences are adjusted to this a priori information.
\\par
The data in Figure \\ref{fig111} show that the ``non-optimality ratios'' $\\rho_{tk}\/\\rho^*_{tk}$ of the proposed inferences as compared to the ideal detectors are quite moderate -- they never exceed 1.34; not bad at all, especially taking into account that the ideal detection assumes {\\em a priori} knowledge of the activation shape (position).

\\begin{figure}
$$
\\begin{array}{cc}
\\epsfxsize=170pt\\epsfysize=150pt\\epsffile{rhop}&
\\epsfxsize=170pt\\epsfysize=150pt\\epsffile{rtop}\\\\
\\epsfxsize=170pt\\epsfysize=150pt\\epsffile{rhoj}&
\\epsfxsize=170pt\\epsfysize=150pt\\epsffile{rtoj}\\\\
\\epsfxsize=170pt\\epsfysize=150pt\\epsffile{rhos}&
\\epsfxsize=170pt\\epsfysize=150pt\\epsffile{rtos}
\\end{array}
$$

\\caption{\\label{fig111} Performance of the detector from Section \\ref{sect:refinement}, dynamics (\\ref{finiteD}). Left: $\\rho_{tk}$, $\\max[k,4]\\leq t\\leq 16$ (ranges and values) vs. $k$, $3\\leq k\\leq 16$.
Right: $\\rho_{tk}\/\\rho_{tk}^*$, $\\max[k,4]\\leq t\\leq 16$ (ranges and values) vs. $k$, $3\\leq k\\leq 16$.
Activation geometry: pulses for top plots, jumps up for middle plots, and steps for bottom plots.
}
\\end{figure}

\\subsection{Extension: union-type nuisance}\\label{sect:exten}
So far, we have considered the case of a single nuisance hypothesis and multiple signal alternatives.
The proposed approach can be easily extended to the case of multiple nuisance hypotheses, namely, to the situation differing from the one described in Section \\ref{ADSetUp} in exactly one point -- instead of assuming that nuisances belong to a closed convex set $N\\subset X$, we assume that nuisance inputs run through the union $\\bigcup_{m=1}^MN_m$ of given closed convex sets $N_m\\subset X$, with $0\\in N_m$ for all $m$. The implied modifications of our constructions and results are as follows.
\\paragraph{Sub-Gaussian case.} In this case, the construction of Section \\ref{impl:constr} in \\cite{PartI}, as applied to $N_m$ in the role of $N$, gives rise to $M$ functions
\\begin{equation}\\label{ccproblemm}
{\\cal SV}_{tk}^m(\\rho)=\\min\\limits_{h\\in{\\mathbf{R}}^{\\nu_t}}\\max\\limits_{\\theta_1\\in N^{tm},\\theta_2\\in U^t_{k\\rho},\\Theta\\in {\\cal U}_t}
\\left[ \\mbox{\\small$\\frac{1}{2}$} h^T[\\theta_2-\\theta_1]+ \\mbox{\\small$\\frac{1}{2}$} h^T\\Theta h\\right],\\,\\,\\,N^{tm}=\\bar{A}_tN_m,
\\end{equation}
$1\\leq m\\leq M$, and thus to the parametric families
\\begin{equation}\\label{Ktdeltam}
\\begin{array}{rcl}
{\\cal K}_t(\\varkappa)&=&\\{k\\leq K: {\\cal SV}_{tk}^m(R_k)< \\ln(\\varkappa),\\,\\forall 1\\leq m\\leq M\\},\\\\
K_t(\\varkappa)&=&\\mathop{\\hbox{\\rm Card}}{\\cal K}_t(\\varkappa),
\\end{array}
\\end{equation}
so that $K_t(\\varkappa)$ is nondecreasing and continuous from the left.
At time instant $t$ we act as follows:
\\begin{enumerate}
\\item We define the quantity
$$
\\varkappa_t=\\sup\\left\\{\\varkappa\\in(0,1]:MK_t(\\varkappa)\\leq{\\epsilon\\epsilon_t\\over\\varkappa^2}\\right\\}.
$$
Clearly,
$\\varkappa_t$ is well defined, takes values in $(0,1]$, and since $K_t(\\varkappa)$ is continuous from the left, we have
\\begin{equation}\\label{eq700m}
MK_t(\\varkappa_t)\\leq {\\epsilon\\epsilon_t\\over\\varkappa_t^2}.
\\end{equation}
We set
$\\alpha_t=\\ln(\\varkappa_tM\/\\epsilon)$.
\\item
For $k\\in{\\cal K}_t(\\varkappa_t)$, we have $0=\\lim_{\\rho\\to+0}{\\cal SV}_{tk}^m(\\rho)>\\ln(\\varkappa_t)$ and ${\\cal SV}_{tk}^m(R_k)<\\ln(\\varkappa_t)$, $1\\leq m\\leq M$.
Invoking Lemma \\ref{lem1}, there exists (and can be rapidly approximated to high accuracy by bisection) $\\rho_{tk}\\in(0,R_k)$ such that\n\\begin{equation}\\label{balancem}\n\\max_{m\\leq M}{\\cal SV}_{tk}^m(\\rho_{tk})=\\ln(\\varkappa_t).\n\\end{equation}\nGiven $\\rho_{tk}$, we define the affine detectors\n$$\n\\phi_{tk}^m(y^t)=h_{tkm}^T[y^t-w_{tkm}],\\,\\,w_{tkm}= \\mbox{\\small$\\frac{1}{2}$}[\\theta_{1,tkm}+\\theta_{2,tkm}],\n$$\nwhere $(h_{tkm};\\theta_{1,tkm},\\theta_{2,tkm},\\Theta_{tkm})$ is a solution to the saddle point problem\n(\\ref{ccproblemm}) with $\\rho=\\rho_{tk}$.\n\\par\nFor $k\\not\\in{\\cal K}_t(\\varkappa_t)$, we set $\\rho_{tk}=+\\infty$.\n\\item\\label{fffinal}\nFinally, we process the observation $y^t$ at step $t$ as follows:\n\\begin{itemize}\n\\item if there exists $k$ such that $\\rho_{tk}<\\infty$ and $\\phi_{tk}^m(y^t)<\\alpha_t$ for all $m\\leq M$, we claim that the observed input is a signal, and terminate;\n\\item otherwise, we claim that so far, the nuisance hypothesis is not rejected, and pass to the next time instant $t+1$ (when $t<d$) or terminate (when $t=d$).\n\\end{itemize}\n\\end{enumerate}\nThe performance of the resulting inference is characterized by the following statement:\n\\begin{proposition}\\label{prop16m} For the just outlined inference procedure, the probability of a false alarm is at most $\\epsilon$. Moreover, for every $t\\leq d$ and $k\\leq K$ with $\\rho_{tk}<\\infty$, every $\\chi$ satisfying\n$$\n\\chi>{\\ln(dKM\/\\epsilon^2)\\over {\\mathop{\\hbox{\\small\\rm ErfInv}}}^2(\\epsilon)},\n$$\nand every feasible signal input of shape $k$ and magnitude $\\geq \\chi\\max\\limits_{m\\leq M} \\rho_{tk}^{m*}$, the probability of termination with signal conclusion at time $t$ is $\\geq 1-\\epsilon$.\n\\end{proposition}\nProof of Proposition \\ref{prop16m} is given by a straightforward modification of the proofs of Propositions \\ref{prop16} and \\ref{prop40}.\n\n\\section{Change detection via quadratic detectors}\\label{QDSection}\n\\subsection{Outline}\nIn Section \\ref{ChPADSetUp}, we were interested in deciding as early as possible upon the hypotheses about the input $x$ underlying observations \\rf{eqOSt} in the situation where both signals and nuisances\nformed finite unions of convex sets. Solving this problem was reduced to making decisions on pairs of {\\sl convex} hypotheses -- those stating\nthat the expectation of a (sub-)Gaussian random vector with partly known covariance matrix belongs to the union of convex sets associated with the hypotheses, and we could make decisions by looking at the (signs of) properly\nbuilt affine detectors -- affine functions of observations. Now we intend to address\nthe case when the signals (or nuisances) are specified by {\\sl non-convex} restrictions, such as ``$u$ belongs to a given linear subspace and has Euclidean norm at least $\\rho>0$.''\nThis natural setting is difficult to capture via convex hypotheses: in such an attempt, we are supposed to ``approximate'' the restriction ``the $\\|\\cdot\\|_2$-norm of vector $x$ is $\\geq \\rho$'' by the union of convex hypotheses like ``$i$-th entry in $x$ is $\\geq\\rho'$''\/``$i$-th entry in $x$ is $\\leq-\\rho'$''; the number of these hypotheses grows with the input's dimension, and the ``quality of approximation,''\nwhatever its definition, deteriorates as the dimension grows.\n\\par\nIn this situation, a natural way to proceed is to look at ``quadratic liftings'' of inputs and observations. Specifically, given a vector $w$ of dimension $m$, let us associate with it its ``quadratic lifting'' -- the symmetric $(m+1)\\times (m+1)$ matrix $Z(w)=[w;1][w;1]^T$.
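The role of the lifting stems from a simple identity: for a symmetric $(m+1)\\times(m+1)$ matrix $Q=\\left[\\begin{array}{c|c}Q_{11}&q\\cr\\hline q^T&c\\cr\\end{array}\\right]$ one has\n$$\n{\\hbox{\\rm Tr}}(QZ(w))=[w;1]^TQ[w;1]=w^TQ_{11}w+2q^Tw+c,\n$$\nso that every quadratic function of $w$ is an affine function of $Z(w)$; for example, the non-convex restriction $\\|w\\|_2^2\\geq\\rho^2$ on $w$ becomes the {\\sl linear} restriction ${\\hbox{\\rm Tr}}({\\hbox{\\rm Diag}}\\{I_m,0\\}Z(w))\\geq\\rho^2$ on $Z(w)$.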
Thus, first, restrictions on $w$ expressed by linear and quadratic constraints induce {\\sl linear} restrictions\non $Z(w)$.\nSecond, given the noisy observation $y^t=\\bar{A}_tx+\\xi^t$ of the signal $x$, the quadratic lifting $Z(y^t)$ can be thought of as a noisy observation of\nan {\\sl affine image} $\\widehat{A}Z(x)\\widehat{A}^T$ of $Z(x)$, where \\[\\widehat{A}=\\left[\\begin{array}{c|c}\\bar{A}_t&\\cr\\hline&1\\cr\\end{array}\\right]\\]\n(here and in what follows the empty block refers to the null matrix). As a result, roughly speaking, linear and quadratic constraints on the input translate into linear constraints on the expectation of the ``lifted observation'' $Z(y^t)$, and different hypotheses on the input, expressed by linear and quadratic constraints, give rise to convex hypotheses on $Z(y^t)$. Then, in order to decide on the resulting\nconvex hypotheses, we can use detectors affine in $Z(y^t)$, that is, {\\sl quadratic} in $y^t$, and this is what we intend to do.\n\\subsection{Preliminaries}\n\\subsubsection{Gaussian case}\nIn the sequel, the following result (which is a slightly modified concatenation of Propositions 3.1 and 5.1 of \\cite{PartI}) is used:\n\n\\begin{proposition}\\label{concatenation}\n \\item[$\\qquad${\\rm (i)}] Let ${\\cal U}$ be a convex compact set contained in the interior of the cone ${\\mathbf{S}}^\\nu_+$ of positive semidefinite $\\nu\\times\\nu$ matrices in the space ${\\mathbf{S}}^\\nu$ of symmetric $\\nu\\times\\nu$ matrices.\n Let $\\Theta_*\\in {\\mathbf{S}}^\\nu_+$ be such that $\\Theta_*\\succeq \\Theta$ for all $\\Theta\\in{\\cal U}$, and let $\\delta\\in[0,2]$ be such that\n\\begin{equation}\\label{56delta}\n\\|\\Theta^{1\/2}\\Theta_*^{-1\/2}-I_\\nu\\|\\leq\\delta,\\;\\;\\forall \\Theta\\in{\\cal U},\n\\end{equation}\nwhere $\\|\\cdot\\|$ is the spectral norm.\\footnote{With $\\delta=2$, (\\ref{56delta}) is satisfied for all $\\Theta$ such that $0\\preceq\\Theta\\preceq \\Theta_*$, so that the restriction $\\delta\\leq2$ is w.l.o.g.} Finally,\nlet $\\gamma\\in(0,1)$, $A$ be a $\\nu\\times(n+1)$ matrix, ${\\cal Z}$ be a nonempty convex compact subset of the set ${\\cal Z}^+=\\{Z\\in{\\mathbf{S}}^{n+1}_+:Z_{n+1,n+1}=1\\}$, and let\n\\begin{equation}\\label{phiZ}\n\\phi_{\\cal Z}(Y):=\\max_{Z\\in{\\cal Z}} {\\hbox{\\rm Tr}}(ZY)\n\\end{equation}\nbe the support function of ${\\cal Z}$.
These data specify the closed convex set\n\\begin{equation}\\label{PcH}\n{\\cal H}={\\cal H}^\\gamma:=\\{(h,H)\\in {\\mathbf{R}}^\\nu\\times {\\mathbf{S}}^\\nu:-\\gamma\\Theta_*^{-1}\\preceq H\\preceq \\gamma \\Theta_*^{-1}\\},\n\\end{equation}\nthe matrix\n\\begin{equation}\\label{matrixB}\nB=\\left[\\begin{array}{c}A\\cr [0,...,0,1]\\cr\\end{array}\\right]\n\\end{equation}\nand the function $\\Phi_{A,{\\cal Z}}:\\,{\\cal H}\\times{\\cal U}\\to{\\mathbf{R}}$,\n\\begin{equation}\\label{phi}\n\\begin{array}{rcl}\n\\Phi_{A,{\\cal Z}}(h,H;\\Theta)&=&- \\mbox{\\small$\\frac{1}{2}$}\\ln{\\hbox{\\rm Det}}(I-\\Theta_*^{1\/2}H\\Theta_*^{1\/2})+ \\mbox{\\small$\\frac{1}{2}$} {\\hbox{\\rm Tr}}([\\Theta-\\Theta_*]H)\\\\\n&&+{\\delta(2+\\delta)\\over 2(1-\\|\\Theta_*^{1\/2}H\\Theta_*^{1\/2}\\|)}\\|\\Theta_*^{1\/2}H\\Theta_*^{1\/2}\\|_F^2\\\\\n&&+{1\\over 2}\\phi_{\\cal Z}\\left(B^T\\left[\\hbox{\\small$\\left[\\begin{array}{c|c}H&h\\cr\\hline h^T&\\end{array}\\right]+\n\\left[H,h\\right]^T[\\Theta_*^{-1}-H]^{-1}\\left[H,h\\right]$}\\right]B\\right)\n,\n\\end{array}\n\\end{equation}\nwhere $\\|\\cdot\\|_F$ is the Frobenius norm of a matrix.\n\\par\nFunction $\\Phi_{A,{\\cal Z}}\n$ is continuous on its domain, convex in $(h,H)\\in{\\cal H}$ and concave in $\\Theta\\in{\\cal U}$ and possesses the following property:\n\\begin{quotation}\n\\noindent\nWhenever $u\\in {\\mathbf{R}}^n$ is such that $[u;1][u;1]^T\\in{\\cal Z}$ and $\\Theta\\in{\\cal U}$, the Gaussian random vector $\\zeta\\sim{\\cal N}(A[u;1],\\Theta)$ satisfies the relation\n\\begin{equation}\\label{moments}\n\\forall (h,H)\\in{\\cal H}: \\;\\;\\ln\\left({\\mathbf{E}}_{\\zeta\\sim {\\cal N}(A[u;1],\\Theta)}\\left\\{{\\rm e}^{ \\mbox{\\small$\\frac{1}{2}$}\\zeta^TH\\zeta+h^T\\zeta}\\right\\}\\right)\\leq \\Phi_{A,{\\cal Z}}(h,H;\\Theta).\n\\end{equation}\n\\end{quotation}\nBesides this, $\\Phi_{A,{\\cal Z}}$ is coercive in $(h,H)$: $\\Phi_{A,{\\cal Z}}(h_i,H_i;\\Theta)\\to+\\infty$ as $i\\to\\infty$ whenever $\\Theta\\in {\\cal U}$, $(h_i,H_i)\\in{\\cal H}$ and $\\|(h_i,H_i)\\|\\to\\infty$, $i\\to\\infty$.\n\\item[$\\qquad${\\rm (ii)}] Let two collections of data from {\\rm (i):} $({\\cal U}_\\chi,\\Theta_*^{(\\chi)},\\delta_\\chi,\\gamma_\\chi,A_\\chi,{\\cal Z}_\\chi)$, $\\chi=1,2$, with common $\\nu$ be given, giving rise to the sets ${\\cal H}_\\chi $, matrices $B_\\chi $, and functions $\\Phi_{A_\\chi ,{\\cal Z}_\\chi }(h,H;\\Theta)$, $\\chi=1,2$. 
These collections\nspecify the families of normal distributions\n\\[\n{\\cal G}_\\chi =\\{{\\cal N}(v,\\Theta): \\Theta\\in{\\cal U}_\\chi \\ \\&\\ \\exists u: v=A_\\chi [u;1], [u;1][u;1]^T\\in{\\cal Z}_\\chi \\},\\,\\chi=1,2.\n\\]\nConsider the convex-concave saddle point problem\n\\begin{equation}\\label{SPPLift}\n{\\cal SV}=\\min\\limits_{(h,H)\\in{\\cal H}_1\\cap{\\cal H}_2}\\max\\limits_{\\Theta_1\\in{\\cal U}_1,\\Theta_2\\in{\\cal U}_2} \\underbrace{ \\mbox{\\small$\\frac{1}{2}$}\\left[\\Phi_{A_1,{\\cal Z}_1}(-h,-H;\\Theta_1)+\n\\Phi_{A_2,{\\cal Z}_2}(h,H;\\Theta_2)\\right]}_{\\Phi(h,H;\\Theta_1,\\Theta_2)}.\n\\end{equation}\nA saddle point $(h_*, H_*;\\Theta_1^*,\\Theta_2^*)$ does exist in this problem, and the induced quadratic detector\n\\begin{equation}\\label{iquaddet}\n\\phi_*(\\omega)= \\mbox{\\small$\\frac{1}{2}$}\\omega^TH_*\\omega+h_*^T\\omega+\\underbrace{ \\mbox{\\small$\\frac{1}{2}$}\\left[\\Phi_{A_1,{\\cal Z}_1}(-h_*,-H_*;\\Theta^*_1)\n-\\Phi_{A_2,{\\cal Z}_2}(h_*,H_*;\\Theta^*_2)\\right]}_{a},\n\\end{equation}\nsatisfies\n\\begin{equation}\\label{riskagain}\\begin{array}{llll}\n(a)&~~\\int_{{\\mathbf{R}}^\\nu}{\\rm e}^{-\\phi_*(\\omega)}P(d\\omega)&\\leq \\epsilon_\\star:={\\rm e}^{{\\cal SV}}\\;\\;&\\forall P\\in {\\cal G}_1,\\\\\n(b)&~~\\int_{{\\mathbf{R}}^\\nu}{\\rm e}^{\\phi_*(\\omega)}P(d\\omega)&\\leq \\epsilon_\\star\\;\\;&\\forall P\\in {\\cal G}_2.\n\\end{array}\n\\end{equation}\nThat is, the risk, as defined in item 5\\ of Section \\ref{sectnotconv}, of the detector $\\phi_*$ on the families ${\\cal G}_1,{\\cal G}_2$ satisfies\n$$\n\\hbox{\\rm Risk}(\\phi_*|{\\cal G}_1,{\\cal G}_2)\\leq\\epsilon_\\star.\n$$\n\\end{proposition}\nFor the proof, see \\cite{PartI}; for the reader's convenience, we reproduce the proof in Section \\ref{AppConcProof}.
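To make the use of such a detector concrete, here is a minimal numerical sketch (Python) of the decision rule based on $\\phi_*$; the triple $(h_*,H_*,a)$ is assumed to be extracted from an already solved problem (\\ref{SPPLift}), and the toy data below are illustrative stand-ins rather than an actual saddle point.\n\\begin{verbatim}\n# Minimal sketch: evaluate the quadratic detector phi(w) = 0.5 w'Hw + h'w + a\n# and decide between the families G_1, G_2 by the sign of phi (cf. (a), (b)).\nimport numpy as np\n\ndef make_detector(h, H, a):\n    def phi(omega):\n        return 0.5 * omega @ H @ omega + h @ omega + a\n    return phi\n\nrng = np.random.default_rng(0)\nnu = 3\nH = np.diag(rng.uniform(-0.3, 0.3, nu))  # stand-in; must obey the bounds on H\nh = rng.normal(size=nu)\na = 0.1                                  # stand-in for the shift in (iquaddet)\nphi = make_detector(h, H, a)\n\nomega = rng.normal(size=nu)              # an observation\nprint('phi =', phi(omega), '->', 'G_1' if phi(omega) >= 0 else 'G_2')\n\\end{verbatim}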
The justification for the remark below can be found in Appendix \\ref{just_remark41}.\n\n\n\\begin{remark}\\label{remark41} {\\rm Note that the computational effort of solving {\\rm (\\ref{SPPLift})} reduces dramatically in the ``{\\em easy case}'' of the situation described in item (ii) of Proposition \\ref{concatenation}, specifically,\nin the case where\n\\begin{itemize}\n\\item the observations are {\\sl direct}, meaning that $n=\\nu$ and $A_\\chi [u;1] \\equiv u$, $u\\in{\\mathbf{R}}^\\nu$, $\\chi=1,2$;\n\\item the sets ${\\cal U}_\\chi$ are comprised of positive definite {\\sl diagonal} matrices, and the matrices $\\Theta^{(\\chi)}_*$ are diagonal as well, $\\chi=1,2$;\n\\item the sets ${\\cal Z}_\\chi$, $\\chi=1,2$, are convex compact sets of the form\n$$\n{\\cal Z}_\\chi=\\{Z\\in{\\mathbf{S}}^{\\nu+1}_+: Z\\succeq0,\\,{\\hbox{\\rm Tr}}(ZQ^\\chi_j)\\leq q^\\chi_j,\\,1\\leq j\\leq J_\\chi\\}\n$$\nwith {\\sl diagonal} matrices $Q^\\chi_j$,\\footnote{In terms of the sets $U_\\chi$, this assumption means that the latter sets\nare given by linear inequalities on the {\\sl squares} of entries in $u$.} and these sets intersect the interior of the positive semidefinite cone ${\\mathbf{S}}^{\\nu+1}_+$.\n\\end{itemize}\nIn this case, the convex-concave saddle point problem {\\rm (\\ref{SPPLift})} admits a saddle point $(h_*,H_*;\\Theta_1^*,\\Theta_2^*)$\nwhere $h_*=0$ and $H_*$ is diagonal, and restricting $h$ to be zero and $H$ to be diagonal drastically reduces the design dimension of the saddle point problem.\n\n\\end{remark}\n\n\\subsubsection{Sub-Gaussian case}\nThe sub-Gaussian version of Proposition \\ref{concatenation} reads as follows:\n\\begin{proposition}\\label{concatenationSG}\n \\item[$\\qquad${\\rm (i)}] Let ${\\cal U}$ be a convex compact set contained in the interior of the cone ${\\mathbf{S}}^\\nu_+$ of positive semidefinite $\\nu\\times\\nu$ matrices in the space ${\\mathbf{S}}^\\nu$ of symmetric $\\nu\\times\\nu$ matrices, let $\\Theta_*\\in {\\mathbf{S}}^\\nu_+$ be such that $\\Theta_*\\succeq \\Theta$ for all $\\Theta\\in{\\cal U}$, and let $\\delta\\in[0,2]$ be such that {\\rm (\\ref{56delta})} holds true.\nFinally, let $\\gamma, \\gamma^+$ be such that $0<\\gamma<\\gamma^+<1$, $A$ be a $\\nu\\times(n+1)$ matrix, ${\\cal Z}$ be a nonempty convex compact subset of the set ${\\cal Z}^+=\\{Z\\in{\\mathbf{S}}^{n+1}_+:Z_{n+1,n+1}=1\\}$, and let $\\phi_{\\cal Z}(Y)$ be the support function of ${\\cal Z}$, see {\\rm (\\ref{phiZ})}.\nThese data specify the closed convex sets\n\\[\n\\begin{array}{rcl}\n{\\cal H} &=& {\\cal H}^\\gamma:=\\{(h,H)\\in {\\mathbf{R}}^\\nu\\times {\\mathbf{S}}^\\nu:\\;-\\gamma\\Theta_*^{-1}\\preceq H\\preceq \\gamma \\Theta_*^{-1} \\},\\\\\n\\widehat{{\\cal H}} & = & \\widehat{{\\cal H}}^{\\gamma,\\gamma^+} =\\left\\{(h,H,G)\\in {\\cal H}^\\gamma \\times {\\mathbf{S}}^\\nu:\n0\\preceq G\\preceq \\gamma^+ \\Theta_*^{-1},\\,H\\preceq G\\right\\},\n\\end{array}\n\\]\nthe matrix $B$ given by {\\rm (\\ref{matrixB})},\nand the functions\n\\begin{equation}\\label{phiSG}\n\\hbox{\\footnotesize$\\begin{array}{l}\n\\Psi_{A,{\\cal Z}}(h,H,G)\\\\\n\\quad=- \\mbox{\\small$\\frac{1}{2}$}\\ln{\\hbox{\\rm Det}}(I-\\Theta_*^{1\/2}G\\Theta_*^{1\/2})\\\\\n~~~~~ + \\mbox{\\small$\\frac{1}{2}$}\\phi_{\\cal Z}\\left(B^T\\left[\\hbox{\\scriptsize$\\left[\\begin{array}{r|r}H&h\\cr\\hline h^T&\\cr\\end{array}\\right]$}+ [H,h]^T[\\Theta_*^{-1}-G]^{-1}[H,h]\\right]B\\right): {\\widehat{{\\cal H}}}\\times{\\cal Z}\\to{\\mathbf{R}},\\\\\n\\Psi^\\delta_{A,{\\cal
Z}}(h,H,G;\\Theta)\\\\\n\\quad=- \\mbox{\\small$\\frac{1}{2}$}\\ln{\\hbox{\\rm Det}}(I-\\Theta_*^{1\/2}G\\Theta_*^{1\/2})+ \\mbox{\\small$\\frac{1}{2}$}{\\hbox{\\rm Tr}}([\\Theta-\\Theta_*]G)+{\\delta(2+\\delta)\\over2(1-\\|\\Theta_*^{1\/2}G\\Theta_*^{1\/2}\\|)}\\|\n\\Theta_*^{1\/2}G\\Theta_*^{1\/2}\\|_F^2\\\\\n\\quad+ \\mbox{\\small$\\frac{1}{2}$}\\phi_{\\cal Z}\\left(B^T\\left[\\hbox{\\scriptsize$\\left[\\begin{array}{r|r}H&h\\cr\\hline h^T&\\cr\\end{array}\\right]$}+ [H,h]^T[\\Theta_*^{-1}-G]^{-1}[H,h]\\right]B\\right):\n{\\widehat{{\\cal H}}}\\times\\{0\\preceq\\Theta\\preceq \\Theta_*\\}\\to{\\mathbf{R}},\\\\\n\\Phi_{A,{\\cal Z}}(h,H)=\\min\\limits_G\\left\\{\\Psi_{A,{\\cal Z}}(h,H,G):(h,H,G)\\in {\\widehat{{\\cal H}}}\\right\\}:{\\cal H}\\to{\\mathbf{R}},\\\\\n\\Phi^\\delta_{A,{\\cal Z}}(h,H;\\Theta)=\\min\\limits_G\\left\\{\\Psi^\\delta_{A,{\\cal Z}}(h,H,G;\\Theta):\n(h,H,G)\\in\\widehat{{\\cal H}}\\right\\}:{\\cal H}\\times\\{0\\preceq\\Theta\\preceq \\Theta_*\\}\\to{\\mathbf{R}},\\\\\n\\end{array}$}\n\\end{equation}\nwhere, same as in {\\rm (\\ref{phi})}, $\\|\\cdot\\|$ is the spectral norm, and $\\|\\cdot\\|_F$ is the Frobenius norm of a matrix.\n\\par\nThe function $\\Phi_{A,{\\cal Z}}(h,H)$ is convex and continuous on its domain, while the function $\\Phi^\\delta_{A,{\\cal Z}}(h,H;\\Theta)$ is continuous on its domain, convex in $(h,H)\\in{\\cal H}$ and concave in\n$\\Theta\\in\\{0\\preceq\\Theta\\preceq\\Theta_*\\}$.\nBesides this,\n\\begin{quotation}\\noindent\nWhenever $u\\in{\\mathbf{R}}^n$ is such that $[u;1][u;1]^T\\in{\\cal Z}$ and $\\Theta\\in{\\cal U}$,\nthe sub-Gaussian random vector $\\zeta$ with parameters $(A[u;1], \\Theta)$ satisfies the relation\n\\begin{equation}\\label{momentsSG}\n\\begin{array}{ll}\n\\multicolumn{2}{l}{\\forall (h,H)\\in{\\cal H}:}\\\\\n(a)&\\ln\\left({\\mathbf{E}}_{\\zeta}\\left\\{{\\rm e}^{ \\mbox{\\small$\\frac{1}{2}$}\\zeta^TH\\zeta+h^T\\zeta}\\right\\}\\right)\\leq \\Phi_{A,{\\cal Z}}(h,H),\\\\\n(b)&\\ln\\left({\\mathbf{E}}_{\\zeta}\\left\\{{\\rm e}^{ \\mbox{\\small$\\frac{1}{2}$}\\zeta^TH\\zeta+h^T\\zeta}\\right\\}\\right)\\leq \\Phi^\\delta_{A,{\\cal Z}}(h,H;\\Theta).\\\\\n\\end{array}\n\\end{equation}\n\\end{quotation}\nIn addition, $\\Phi_{A,{\\cal Z}}$ and $\\Phi^\\delta_{A,{\\cal Z}}$ are coercive in $(h,H)$: $\\Phi_{A,{\\cal Z}}(h_i,H_i)\\to+\\infty$ and $\\Phi^\\delta_{A,{\\cal Z}}(h_i,H_i;\\Theta)\\to+\\infty$ as $i\\to\\infty$ whenever $\\Theta\\in {\\cal U}$, $(h_i,H_i)\\in{\\cal H}$ and $\\|(h_i,H_i)\\|\\to\\infty$, $i\\to\\infty$.\n\\item[$\\qquad${\\rm (ii)}] Let two collections of data from {\\rm (i)}: $({\\cal U}_\\chi,\\Theta_*^{(\\chi)},\\delta_\\chi,\\gamma_\\chi, \\gamma^+_\\chi,A_\\chi,{\\cal Z}_\\chi)$, $\\chi=1,2$, with common $\\nu$ be given, giving rise to the sets ${\\cal H}_\\chi $, matrices $B_\\chi $, and functions $\\Phi_{A_\\chi ,{\\cal Z}_\\chi }(h,H)$, $\\Phi^{\\delta_\\chi}_{A_\\chi,{\\cal Z}_\\chi }(h,H;\\Theta)$, $\\chi=1,2$.
These collections specify the families of distributions\n$\n{\\cal SG}_\\chi,\n$ $\\chi=1,2$, where ${\\cal SG}_\\chi$ is comprised of all sub-Gaussian distributions with parameters $v,\\Theta$, such that $v$ can be represented as $A_\\chi[u;1]$ for some $u$ with $[u;1][u;1]^T\\in{\\cal Z}_\\chi$, and $\\Theta\\in {\\cal U}_\\chi$.\nConsider the convex-concave saddle point problem\n\\begin{equation}\\label{SPPLiftSG}\n{\\cal SV}=\\min\\limits_{(h,H)\\in{\\cal H}_1\\cap{\\cal H}_2}\\max\\limits_{\\Theta_1\\in{\\cal U}_1,\\Theta_2\\in{\\cal U}_2} \\underbrace{ \\mbox{\\small$\\frac{1}{2}$}\\left[\\Phi^{\\delta_1}_{A_1,{\\cal Z}_1}(-h,-H;\\Theta_1)+\n\\Phi^{\\delta_2}_{A_2,{\\cal Z}_2}(h,H;\\Theta_2)\\right]}_{\\Phi^{\\delta_1,\\delta_2}(h,H;\\Theta_1,\\Theta_2)}.\n\\end{equation}\nA saddle point $(h_*,H_*;\\Theta_1^*,\\Theta_2^*)$ does exist in this problem, and the induced quadratic detector\n\\[\n\\phi_*(\\omega)= \\mbox{\\small$\\frac{1}{2}$}\\omega^TH_*\\omega+h_*^T\\omega+\\underbrace{ \\mbox{\\small$\\frac{1}{2}$}\\left[\\Phi^{\\delta_1}_{A_1,{\\cal Z}_1}(-h_*,-H_*;\\Theta^*_1)\n-\\Phi^{\\delta_2}_{A_2,{\\cal Z}_2}(h_*,H_*;\\Theta^*_2)\\right]}_{a},\n\\]\nwhen applied to the families of sub-Gaussian distributions ${\\cal SG}_\\chi $, $\\chi=1,2$, has the risk $$\n\\hbox{\\rm Risk}(\\phi_*|{\\cal SG}_1,{\\cal SG}_2)\\leq \\epsilon_\\star:={\\rm e}^{{\\cal SV}},$$ that is\n\\[\n\\begin{array}{lrl}\n(a)&~~\\int_{{\\mathbf{R}}^\\nu}{\\rm e}^{-\\phi_*(\\omega)}P(d\\omega)\\leq \\epsilon_\\star\\;\\;&\\forall P\\in {\\cal SG}_1,\\\\\n(b)&~~\\int_{{\\mathbf{R}}^\\nu}{\\rm e}^{\\phi_*(\\omega)}P(d\\omega)\\leq \\epsilon_\\star\\;\\;&\\forall P\\in {\\cal SG}_2.\n\\end{array}\n\\]\nSimilarly, the convex minimization problem\n\\begin{equation}\\label{SPPLiftConvSG}\n{\\hbox{\\rm Opt}}=\\min\\limits_{(h,H)\\in{\\cal H}_1\\cap{\\cal H}_2}\\underbrace{ \\mbox{\\small$\\frac{1}{2}$}\\left[\\Phi_{A_1,{\\cal Z}_1}(-h,-H)+\n\\Phi_{A_2,{\\cal Z}_2}(h,H)\\right]}_{\\Phi(h,H)}\n\\end{equation}\nis solvable, and the quadratic detector induced by its optimal solution $(h_*,H_*)$,\n\\[\n\\phi_*(\\omega)= \\mbox{\\small$\\frac{1}{2}$}\\omega^TH_*\\omega+h_*^T\\omega+\\underbrace{ \\mbox{\\small$\\frac{1}{2}$}\\left[\\Phi_{A_1,{\\cal Z}_1}(-h_*,-H_*)\n-\\Phi_{A_2,{\\cal Z}_2}(h_*,H_*)\\right]}_{a},\n\\]\nwhen applied to the families of sub-Gaussian distributions ${\\cal SG}_\\chi $, $\\chi=1,2$, has the risk $$\n\\hbox{\\rm Risk}(\\phi_*|{\\cal SG}_1,{\\cal SG}_2)\\leq\\epsilon_\\star:={\\rm e}^{\\hbox{\\scriptsize\\rm Opt}},$$ so that relations {\\rm (a)}, {\\rm (b)} above hold for the just defined $\\phi_*$ and $\\epsilon_\\star$.\n\\end{proposition}\n\\begin{remark}\\label{remrem} {\\rm\nProposition \\ref{concatenationSG} offers two options for building quadratic detectors for the families ${\\cal SG}_1$, ${\\cal SG}_2$, those based on the saddle point of {\\rm (\\ref{SPPLiftSG})} and on the optimal solution to {\\rm (\\ref{SPPLiftConvSG})}. Inspecting the proof, one can increase the number of options to four: we can replace either of the functions $\\Phi^{\\delta_\\chi}_{A_\\chi,{\\cal Z}_\\chi}$, $\\chi=1,2$ (or both these functions simultaneously) with $\\Phi_{A_\\chi,{\\cal Z}_\\chi}$. The second of the original two options is exactly what we get\nwhen replacing both $\\Phi^{\\delta_\\chi}_{A_\\chi,{\\cal Z}_\\chi}$, $\\chi=1,2$, with $\\Phi_{A_\\chi,{\\cal Z}_\\chi}$. It is easily seen that depending on the data, each of these four options can result in the smallest risk bound.
Thus, it makes sense to keep all these options in mind and to use the one which, under the circumstances, results in the best risk bound. Note that the risk bounds are efficiently computable, so that identifying the best option is easy.}\n\\end{remark}\n\\subsection{Setup}\\label{newsetup}\nWe continue to consider the situation described in Section \\ref{sect:ChPD}, but with different specifications of noise and of nuisance and signal inputs, as compared to Section \\ref{ADSetUp}.\n\\par\nWe define nuisance and signal inputs as follows.\n\\par1. Admissible inputs, nuisance and signal alike, belong to a bounded set $X\\subset{\\mathbf{R}}^n$ which contains the origin and is cut off ${\\mathbf{R}}^n$ by a system of quadratic inequalities:\n\\begin{equation}\\label{seq100}\nX=\\{x\\in{\\mathbf{R}}^n: {\\hbox{\\rm Tr}}(Q_iZ(x))\\leq q_i,\\,1\\leq i\\leq I\\},\\end{equation}\nwhere $Q_i$ are $(n+1)\\times (n+1)$ symmetric matrices.\nWe assume w.l.o.g. that the first constraint defining $X$ is $\\|x\\|_2^2\\leq R^2$, that is, $Q_1$ is the diagonal matrix with the diagonal $1,...,1,0$, and $q_1=R^2$. We set\n\\begin{equation}\\label{seq101}\n{\\cal X}=\\{W\\in{\\mathbf{S}}^{n+1}_+: W_{n+1,n+1}=1, {\\hbox{\\rm Tr}}(WQ_i)\\leq q_i,1\\leq i\\leq I\\},\n\\end{equation}\nso that ${\\cal X}$ is a convex compact set in ${\\mathbf{S}}^{n+1}_+$, and $Z(x)\\in{\\cal X}$ for all $x\\in X$.\n\\par2. The set $N$ of nuisance inputs contains the origin and is cut off $X$ by a system of quadratic inequalities, so that\n\\begin{equation}\\label{seq102}\nN=\\{x\\in{\\mathbf{R}}^n: {\\hbox{\\rm Tr}}(Q_iZ(x))\\leq q_i,\\,1\\leq i\\leq I_+\\}, \\;I_+>I.\\end{equation}\nWe set\n\\begin{equation}\\label{seq101A}\n{\\cal N}=\\{W\\in{\\mathbf{S}}^{n+1}_+: W_{n+1,n+1}=1, {\\hbox{\\rm Tr}}(WQ_i)\\leq q_i,\\,1\\leq i\\leq I_+\\},\n\\end{equation}\nso that ${\\cal N}\\subset{\\cal X}$ is a convex compact set in ${\\mathbf{S}}^{n+1}_+$, and $Z(x)\\in{\\cal N}$ for all $x\\in N$.\n\\par3.
Signals belonging to $X$ are\nof different shapes and magnitudes, with signal of shape $k$, $1\\leq k\\leq K$, and magnitude $\\geq 1$ defined as a vector from the set\n $$\n W_k=\\{x\\in{\\mathbf{R}}^n: {\\hbox{\\rm Tr}}(Q_{ik}Z(x))\\leq b_{ik},\\,1\\leq i\\leq I_k\\}\n $$\n with two types of quadratic constraints:\n \\begin{itemize}\n \\item constraints of type A: $b_{ik}\\leq0$, the symmetric matrices $Q_{ik}$ have zero North-West (NW) block of size $n\\times n$, and zero South-East (SE) diagonal entry; these constraints are just linear constraints on $x$;\n \\item constraints of type B: $b_{ik}\\leq0$, the only nonzeros in $Q_{ik}$ are in the NW block of size $n\\times n$.\n \\end{itemize}\n We denote the sets of indices $i$ of constraints of these two types by ${\\cal I}_k^A$ and ${\\cal I}_k^B$ and assume that at least one of the right hand sides $b_{ik}$ is strictly negative, implying that $W_k$ is at a positive distance from the origin.\n\\par\nWe define a signal of shape $k$ and magnitude $\\geq\\rho>0$ as a vector from the set $W^\\rho_k=\\rho W_k$; note that\n{\\small $$\nW^\\rho_k=\\{x\\in{\\mathbf{R}}^n: {\\hbox{\\rm Tr}}(Q_{ik}Z(x))\\leq \\rho b_{ik},i\\in {\\cal I}_k^A,{\\hbox{\\rm Tr}}(Q_{ik}Z(x))\\leq \\rho^2 b_{ik},i\\in {\\cal I}_k^B\\}.\n$$}\\noindent\nWe set\n\\[\\begin{array}{l}\n{\\cal W}_k^\\rho=\\{W\\in{\\mathbf{S}}^{n+1}_+: \\,W_{n+1,n+1}=1,\\, {\\hbox{\\rm Tr}}(Q_{ik}W)\\leq \\rho b_{ik},\\,i\\in {\\cal I}_k^A,\\\\\n\\multicolumn{1}{r}{{\\hbox{\\rm Tr}}(Q_{ik}W)\\leq \\rho^2 b_{ik},\\,i\\in {\\cal I}_k^B\\},}\\\\\n\\end{array}\n\\]\nensuring that $Z(x)\\in {\\cal W}^\\rho_k$ whenever $x\\in W^\\rho_k$. Note that the sets ${\\cal W}_k^\\rho$ shrink as $\\rho>0$ grows, due to $b_{ik}\\leq0$. We assume that for small $\\rho>0$, the sets ${\\cal W}_k^\\rho\\cap {\\cal X}$ are nonempty (this is definitely the case when some signals of shape $k$ and positive magnitude are admissible inputs -- otherwise signals of shape $k$ are of no interest in our context, and we can ignore them). Since ${\\cal X}$ is compact and some of the $b_{ik}$ are negative, the sets ${\\cal W}^\\rho_k$ are empty for large enough values of $\\rho$.\nAs a byproduct of the compactness of ${\\cal X}$, it is immediately seen that there exists $R_k\\in(0,\\infty)$ such that $W_k^\\rho$ is nonempty when $\\rho\\leq R_k$ and is empty when $\\rho>R_k$.\n\\subsection{Change detection via quadratic detectors, Gaussian case}\\label{sectquaddet}\nIn this section, we consider the situation of Section \\ref{sect:ChPD}, {\\sl assuming the noise $\\xi^d$ in {\\rm (\\ref{eqOS})} to be zero mean Gaussian: $\\xi^d\\sim {\\cal N}(0,\\Theta)$.}\n\\subsubsection{Preliminaries}\\label{ssprelim}\nGiven $t\\leq d$, let us set\n$$\nA_t=[\\bar{A}_t,0], B_t=\\left[\\begin{array}{c}A_t\\cr [0,...,0,1]\\cr\\end{array}\\right],\n$$\nso that the observation $y^t\\in{\\mathbf{R}}^{\\nu_t}$ at time $t$ is Gaussian with the expectation $A_t[x;1]$ and covariance matrix $\\Theta$ belonging to the convex compact subset ${\\cal U}_t$ of the interior of the positive semidefinite cone ${\\mathbf{S}}^{\\nu_t}_+$, see (\\ref{eqOSt}), (\\ref{thetat}).\n\\par\nWe fix $\\gamma\\in(0,1)$ and $\\Theta_{*,d}\\in{\\mathbf{S}}^{\\nu_d}_+$ such that $\\Theta_{*,d}\\succeq\\Theta$ for all $\\Theta\\in{\\cal U}_d$. For $1\\leq t\\leq d$, we\nset $\\Theta_{*,t}=S_t\\Theta_{*,d}S_t^T$, so that $\\Theta_{*,t}\\succ0$ is such that $\\Theta_{*,t}\\succeq\\Theta$ for all $\\Theta\\in {\\cal U}_t$.
Further, we specify reals $\\delta_t\\in[0,2]$ such that\n$$\n\\|\\Theta^{1\/2}[\\Theta_{*,t}]^{-1\/2}-I_{\\nu_t}\\|\\leq\\delta_t\\,\\,\\forall \\Theta\\in {\\cal U}_t,\n$$\nand set\\footnote{The parameter $\\gamma\\in (0,1)$ is introduced to prevent $\\Phi_t$ from becoming infinite. Therefore, the larger $\\gamma$ is, the better the computed quadratic detector; in practice, $\\gamma=0.999$ fits most applications.}\n$$\n{\\cal H}_t=\\{(h,H)\\in{\\mathbf{R}}^{\\nu_t}\\times {\\mathbf{S}}^{\\nu_t}: -\\gamma\\Theta_{*,t}^{-1}\\preceq H\\preceq \\gamma\\Theta_{*,t}^{-1}\\}.\n$$\nFinally, given $t$, we put\n$$\n\\begin{array}{l}\n\\Phi_{t}(h,H;\\Theta)=- \\mbox{\\small$\\frac{1}{2}$}\\ln{\\hbox{\\rm Det}}(I-\\Theta_{*,t}^{1\/2}H\\Theta_{*,t}^{1\/2})+ \\mbox{\\small$\\frac{1}{2}$} {\\hbox{\\rm Tr}}([\\Theta-\\Theta_{*,t}]H)\\\\\n\\quad+{\\delta_t(2+\\delta_t)\\over 2(1-\\|\\Theta_{*,t}^{1\/2}H\\Theta_{*,t}^{1\/2}\\|)}\\|\\Theta_{*,t}^{1\/2}H\\Theta_{*,t}^{1\/2}\\|_F^2\\\\\n\\quad+{1\\over 2}\\phi_{{\\cal N}}\\left(B_t^T\\left[\\hbox{\\small$\\left[\\begin{array}{c|c}H&h\\cr\\hline h^T&\\end{array}\\right]+\n\\left[H,h\\right]^T[\\Theta_{*,t}^{-1}-H]^{-1}\\left[H,h\\right]$}\\right]B_t\\right):{\\cal H}_t\\times{\\cal U}_t\\to{\\mathbf{R}},\\\\\n\\end{array}\n$$\nand given $t$, $k$ and $\\rho\\in(0,R_k]$, we set\n$$\n\\begin{array}{l}\n\\Phi_{tk\\rho}(h,H;\\Theta)=- \\mbox{\\small$\\frac{1}{2}$}\\ln{\\hbox{\\rm Det}}(I-\\Theta_{*,t}^{1\/2}H\\Theta_{*,t}^{1\/2})+ \\mbox{\\small$\\frac{1}{2}$} {\\hbox{\\rm Tr}}([\\Theta-\\Theta_{*,t}]H)\\\\\n\\quad+{\\delta_t(2+\\delta_t)\\over 2(1-\\|\\Theta_{*,t}^{1\/2}H\\Theta_{*,t}^{1\/2}\\|)}\\|\\Theta_{*,t}^{1\/2}H\\Theta_{*,t}^{1\/2}\\|_F^2\\\\\n\\quad+{1\\over 2}\\phi_{{\\cal Z}^\\rho_k}\\left(B_t^T\\left[\\hbox{\\small$\\left[\\begin{array}{c|c}H&h\\cr\\hline h^T&\\end{array}\\right]+\n\\left[H,h\\right]^T[\\Theta_{*,t}^{-1}-H]^{-1}\\left[H,h\\right]$}\\right]B_t\\right):{\\cal H}_t\\times{\\cal U}_t\\to{\\mathbf{R}},\\\\\n\\end{array}\n$$\nwhere ${\\cal Z}^\\rho_k={\\cal W}^\\rho_k\\bigcap {\\cal X}$.\n\\par\nInvoking Proposition \\ref{concatenation}, we obtain the following\n\\begin{corollary}\\label{corLift}\nGiven $t\\leq d$, $k\\leq K$ and $\\rho\\in(0,R_k]$, consider the convex-concave saddle point problem\n\\[\n{\\cal SV}_{tk}(\\rho)=\\min_{(h,H)\\in {\\cal H}_t}\\max_{\\Theta_1,\\Theta_2\\in {\\cal U}_t} \\mbox{\\small$\\frac{1}{2}$}\\left[\\Phi_t(-h,-H;\\Theta_1)+\\Phi_{tk\\rho}(h,H;\\Theta_2)\\right].\n\\]\nThis saddle point problem is solvable, and a saddle point $(h_*,H_*;\\Theta_1^*,\\Theta_2^*)$ induces the quadratic detector\n\\[\n\\begin{array}{rcl}\n\\phi_{tk\\rho}(\\omega^t)&=& \\mbox{\\small$\\frac{1}{2}$}[\\omega^t]^TH_*\\omega^t+h_*^T\\omega^t+a:{\\mathbf{R}}^{\\nu_t}\\to{\\mathbf{R}},\\\\\na&=& \\mbox{\\small$\\frac{1}{2}$}\\left[\\Phi_t(-h_*,-H_*;\\Theta_1^*)-\\Phi_{tk\\rho}(h_*,H_*;\\Theta_2^*)\\right],\\\\\n\\end{array}\n\\]\nsuch that, when applied to the observation $y^t=\\bar{A}_tx+\\xi^t$, see {\\rm (\\ref{eqOSt})}, we have:\n\\par{\\rm (i)} whenever $x\\in X$ is a nuisance input,\n\\begin{equation}\\label{case1}\n{\\mathbf{E}}_{y^t}\\left\\{{\\rm e}^{-\\phi_{tk\\rho}(y^t)}\\right\\}\\leq \\epsilon_{tk\\rho}:=\\exp\\{{\\cal SV}_{tk}(\\rho)\\};\n\\end{equation}\n\\par{\\rm (ii)} whenever $x\\in X$ is a signal of shape $k$ and magnitude $\\geq\\rho$,\n\\begin{equation}\\label{case2}\n{\\mathbf{E}}_{y^t}\\left\\{{\\rm e}^{\\phi_{tk\\rho}(y^t)}\\right\\}\\leq
\\epsilon_{tk\\rho}.\n\\end{equation}\n\\end{corollary}\n\n\\subsubsection{Construction and performance characterization} The construction to follow is similar to that from Section \\ref{impl:constr}. Given $t\\leq d$ and $k\\leq K$,\nit is easily seen that the function ${\\cal SV}_{tk}(\\rho)$ possesses the following properties:\n\\begin{itemize}\n\\item it is nonpositive on $\\Delta_k=(0,R_k]$ and nonincreasing in $\\rho$ (indeed, $\\Phi_t(0,0;\\cdot)\\equiv\\Phi_{tk\\rho}(0,0;\\cdot)=0$ and $\\Phi_{tk\\rho}(\\cdot,\\cdot)$ decreases as $\\rho$ grows since ${\\cal Z}^\\rho_k$ shrinks as $\\rho$ grows, implying that $\\phi_{{\\cal Z}^\\rho_k}(\\cdot)$ decreases as $\\rho$ grows);\n\\item the function tends to 0 as $\\rho\\to +0$;\n\\item the function is continuous on $\\Delta_k$.\n\\end{itemize}\nGiven an upper bound $\\epsilon\\in(0,1\/2)$ on the probability of a false alarm, let us set\n$$\n\\epsilon_t={\\epsilon\\over d},\\,\\,\\varkappa={\\epsilon\\over\\sqrt{dK}},\\,\\,\\alpha=-\\ln(dK)\/2.\n$$\nGiven $t$, $k$, we define $\\rho_{tk}$ as follows: if ${\\cal SV}_{tk}(R_k)>\\ln(\\varkappa)$, we set $\\rho_{tk}=+\\infty$, otherwise we use bisection to find\n$\\rho_{tk}\\in(0,R_k]$ such that\n$$\n{\\cal SV}_{tk}(\\rho_{tk})=\\ln(\\varkappa).\n$$\nOur change detection procedure is as follows: at a step $t=1,2,...,d$, given the observation $y^t$, we look at all values $k\\leq K$ for which $\\rho_{tk}<\\infty$. If $k$ is such that $\\rho_{tk}<\\infty$, we check whether $\\phi_{tk\\rho_{tk}}(y^t)<\\alpha$. If it is the case, we terminate with a signal conclusion. If $\\phi_{tk\\rho_{tk}}(y^t)\\geq \\alpha$ for all $k$ corresponding to $\\rho_{tk}<\\infty$, we claim that so far, the nuisance hypothesis seems to be valid, and pass\nto time $t+1$ (if $t<d$; if $t=d$, we terminate).\n\\par\nThe performance of this procedure is described by the following statement.\n\\begin{proposition}\\label{propquad}\nIn the situation of this section, let the observation noise $\\xi^d$ be Gaussian with zero mean and covariance matrix $\\Theta\\in {\\cal U}_d$. Then:\n\\begin{itemize}\n\\item whenever the input is a nuisance, the probability of a false alarm (termination with a signal conclusion at some time $t\\leq d$) is at most $\\epsilon$;\n\\item if the input is a signal of shape $k$ and magnitude $\\geq\\rho>0$, and $t\\leq d$ is such that $\\rho_{tk}\\leq\\rho$, then the probability for the detection procedure to terminate with a signal\nconclusion at time $t$ or earlier is at least $1-\\epsilon$.\n\\end{itemize}\n\\end{proposition}\n\n\n\\subsubsection{Numerical illustration}\nHere we report on a preliminary numerical experiment with the proposed detection procedure via quadratic detectors.\n\\paragraph{Observation scheme} we deal with is given by\n\\begin{equation}\\label{ldsm}\n\\begin{array}{rcl}\nz_t&=&Az_{t-1}+Bx_t,\\\\\nw_t&=&Cz_t+\\xi_t;\\\\\n\\end{array}\n\\end{equation}\nhere $z_t$, $x_t$, $w_t$ are, respectively, the states, the inputs and the outputs of a linear dynamical system, of dimensions $n_z$, $n_x$, $n_w$, respectively, and $\\xi_t$ are standard Gaussian noises independent across $t$.\nWe assume that the observation at time $t$, $1\\leq t\\leq d$, is the collection $w^t=[w_1;w_2;...;w_t]$. In order to account for the initial state $z_0$ and to make the expectations of observations known linear functions of the inputs, we, same as in Section \\ref{numill}, define $E_t$ as the linear subspace in ${\\mathbf{R}}^{n_wt}$ comprised of all collections of accumulated outputs $[w_1;...;w_t]$ of the zero-input system\n$$\nz_s=Az_{s-1},\\,w_s=Cz_s,\n$$\nand define our (accumulated) observation $y^t$ at time $t$ as the projection of the observation $w^t$ onto the orthogonal complement $E_t^\\perp$ of $E_t$. We represent this projection by the vector $y^t$ of its coordinates in an orthonormal basis of $E_t^\\perp$ and set $\\nu_t=\\dim E_t^\\perp$.
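For the reader's convenience, here is a minimal numerical sketch of this preprocessing step (Python; the matrices and sizes below are toy stand-ins, not the ones of our experiment).\n\\begin{verbatim}\n# Sketch: build the subspace E_t spanned by accumulated outputs of the\n# zero-input system z_s = A z_{s-1}, w_s = C z_s, and pass to coordinates of\n# the projection of the stacked observation onto E_t^perp.\nimport numpy as np\n\ndef project_out_free_motion(A, C, w_stack, t):\n    n_z = A.shape[0]\n    blocks, P = [], np.eye(n_z)\n    for _ in range(t):\n        P = A @ P                  # P = A^s after s steps\n        blocks.append(C @ P)       # s-th block row of a matrix spanning E_t\n    M = np.vstack(blocks)\n    U, s, _ = np.linalg.svd(M, full_matrices=True)\n    r = int(np.sum(s > 1e-10))     # r = dim E_t\n    basis_perp = U[:, r:]          # orthonormal basis of E_t^perp\n    return basis_perp.T @ w_stack  # y^t: coordinates, nu_t = n_w * t - r\n\nA = np.array([[1.0, 1.0], [0.0, 1.0]])   # toy system matrices\nC = np.array([[1.0, 0.0]])\nw = np.arange(1.0, 5.0)                  # stacked outputs w_1, ..., w_4\nprint(project_out_free_motion(A, C, w, t=4))\n\\end{verbatim}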
Note that in this case the corresponding noises $\\xi^t$, $1\\leq t\\leq d$, see (\\ref{eqOSt}), are standard Gaussian of dimensions $\\nu_t$ (as projections of standard Gaussian vectors), so\nthat we are in the situation of ${\\cal U}_t=\\{I_{\\nu_t}\\}$, see (\\ref{thetat}). Therefore we can set\n$\\Theta_{*,t}=I_{\\nu_t}$, and $\\delta_t=0$, see Section \\ref{ssprelim}. \\par\nWe define the admissible nuisance and signal inputs as follows:\n\\begin{itemize}\n\\item the admissible inputs $x=[x_1;...;x_d]$, $x_t\\in{\\mathbf{R}}^{n_x}$, are those with $\\|x\\|_2\\leq R$ (we set $R=10^4$);\n\\item the only nuisance input is $x=0\\in{\\mathbf{R}}^n$, $n=n_xd$;\n\\item there are $K=d$ signal shapes, signal of shape $k$ and magnitude $\\geq 1$ being a vector of the form $x=[0;...;0;x_k;x_{k+1};...;x_d]$ with $\\|x_k\\|_2\\geq 1$ (``signal of shape $k$ and magnitude $\\geq1$ starts at time $k$ with block $x_k$ of energy $\\geq1$''). We consider three different types of signal behavior after time $k$:\n \\begin{itemize}\n \\item {\\sl pulse}: $x_{k+1}=...=x_d=0$,\n \\item {\\sl step}: $x_k=x_{k+1}=...=x_d$,\n \\item {\\sl free jump}: $x_{k+1},...,x_d$ may be arbitrary.\n\\end{itemize}\n\\end{itemize}\nThe description of the matrix $\\bar{A}_t$ arising in (\\ref{eqOSt}) is self-evident. The description, required in Section \\ref{newsetup}, of\nthe nuisance set $N$ by quadratic constraints imposed on the quadratic lifting of an input is equally self-evident. The corresponding descriptions of signals of shape $k$ and magnitude $\\geq1$ are as follows:\n\\begin{itemize}\n\\item {\\sl pulse:} $Q_{1k}$ is the diagonal $(n+1)\\times (n+1)$ matrix with the only nonzero diagonal entries, equal to $-1$, in positions $(i,i)$,\n$i\\in J_k:=\\{i:(k-1)n_x+1\\leq i\\leq kn_x\\}$, and $b_{1k}=-1$. The constraint\n${\\hbox{\\rm Tr}}(Q_{1k}Z(x))\\leq b_{1k}$ says exactly that $\\|x_k\\|_2^2\\geq1$. The remaining constraints are homogeneous and express the facts that\n \\begin{itemize}\n \\item the entries in $Z(x)$ with indices $(i,n+1)$ and $i\\leq n$, except for those with $i\\in J_k$, are zeros, which can be easily expressed by homogeneous constraints of type A, and\n \\item the entries in $Z(x)$ with indices $(i,j)$, $i\\leq j\\leq n$, except for those with $i,j\\in J_k$, are zeros, which can be easily expressed by homogeneous constraints of type B;\n \\end{itemize}\n\\item {\\sl step:} $Q_{1k}$ and $b_{1k}$ are exactly as above.
The remaining constraints are homogeneous and express the facts that\n\\begin{itemize}\n\\item the entries in $Z(x)$ with indices $(i,n+1)$ and $i\\leq i_k:=n_x(k-1)$ are zero (homogeneous constraints of type A);\n\\item the entries in $Z(x)$ with indices $(i,j)$, $i\\leq j\\leq n$, $i \\leq i_k$, are zero (homogeneous constraints of type B);\n\\item the entries in $Z(x)$ with indices $(i,n+1)$ and $(i',n+1)$ such that $i_k < i,i'\\leq n$ and $i-i'$ is an integer multiple of $n_x$, are equal to each other (homogeneous constraints of type A);\n\\item the entries with indices $(i,j)$ and $(i',j')$ such that $i_k< i,i',j,j'\\leq n$ and both $i-i'$, $j-j'$ are integer multiples of $n_x$, are equal to each other (homogeneous constraints of type B);\n \\end{itemize}\n\\item {\\sl free jump:} $Q_{1k}$ and $b_{1k}$ are exactly as above, the remaining constraints are homogeneous and express the facts that\n\\begin{itemize}\n\\item the entries in $Z(x)$ with indices $(i,n+1)$, $i \\leq i_k$, are zeros (homogeneous constraints of type A);\n\\item the entries in $Z(x)$ with indices $(i,j)$ such that $i \\leq i_k$ and $i\\leq j$ are zeros (homogeneous constraints of type B).\n\\end{itemize}\n\\end{itemize}\n\\paragraph{Numerical results.} The discrete-time dynamical system (\\ref{ldsm}) we consider is obtained by the discretization of the continuous-time model\n$$\n{d\\over ds}\\left[\\begin{array}{c}u(s)\\cr v(s)\\cr\\end{array}\\right]=\\underbrace{\\left[\\begin{array}{c|c}&I_2\\cr\\hline\n&\\cr\\end{array}\\right]}_{A_c}\\left[\\begin{array}{c}u(s)\\cr v(s)\\cr\\end{array}\\right]+\\underbrace{\\left[\\begin{array}{c}\\cr I_2\\cr\\end{array}\\right]}_{B_c}x(s),\n$$\n with unit time step, assuming the input constant on consecutive segments: $x(s)=x_t$ for $t-1<s\\leq t$. We obtain the discrete-time system\n$$\n\\underbrace{\\left[\\begin{array}{c}u_t\\cr v_t\\cr\\end{array}\\right]}_{z_t}=Az_{t-1}+Bx_t,\\,\\,A=\\exp\\{A_c\\},\\,B=\\int_0^1\\exp\\{(1-s)A_c\\}B_cds,\n$$\nor, which is the same, the system\n$$\n\\begin{array}{rcrrl}\nu_t&=&u_{t-1}&+v_{t-1}&+ \\mbox{\\small$\\frac{1}{2}$} x_t,\\\\\nv_t&=&&v_{t-1}&+x_t.\\\\\n\\end{array}\n$$\nThe system output $u_t$ is observed with standard Gaussian noise at times $t=1,2,...,d$. Our time horizon was $d=8$, and the required probability of false alarm was $\\epsilon=0.01$.\\par\nThe results of the experiments are presented in Table \\ref{table16}; the cells $t,k$ with $k>t$ are blank, because signals of shape $k>t$ start after time $t$ and are therefore ``completely invisible'' at this time. Along with the quantity $\\rho_{tk}$, the magnitude of the signal of shape $k$ which makes it detectable with probability $1-\\epsilon=0.99$ at time $t$ (the first number in a cell), we present the ``non-optimality index'' (the second number in a cell) defined as follows. Given $t$ and $k$, we\ncompute the largest $\\rho=\\rho_{t k}^*$ such that for a signal $x^{tk}$ of shape $k$ and magnitude $\\geq\\rho$, the $\\|\\cdot\\|_2$-norm of $\\bar{A}_tx^{tk}$, see (\\ref{eqOSt}), is $\\leq 2{\\mathop{\\hbox{\\small\\rm ErfInv}}}(\\epsilon)$. The latter implies that if all we need to decide at time $t$ is whether the input is the signal $\\theta x^{tk}$ with $\\theta<1$, or is identically zero, a $(1-\\epsilon)$-reliable decision would be impossible.\\footnote{According to our convention, meaningful inputs should be of Euclidean norm at most $10^4$.
Consequently, in the case $\\rho_{tk}^*>10^4$, we put $\\rho_{tk}^*=\\infty$.} Since $\\theta$ can be made arbitrarily close to 1, $\\rho_{tk}^*$ is a lower bound on the magnitude of a signal of shape $k$ which can be detected $(1-\\epsilon)$-reliably, by a procedure utilizing observation $y^t$ (cf. Section \\ref{sect:assessing}). The non-optimality index reported in the table is the ratio $\\rho_{tk}\/\\rho_{tk}^*$. Note that the computed values of this ratio are neither close to one (which is bad news for us) nor ``disastrously large'' (which is good news). In this respect it should be mentioned that $\\rho_{tk}^*$\nare overly optimistic estimates of the performance of an ``ideal'' change detection routine.\n\n\\begin{center}\n\\begin{table}\n\\centering\n$$\n\\begin{array}{|c|}\n\\hline\\hline\n\\hbox{\\small$\\begin{array}{||c|c|c|c|c|c|c|c|c||}\n\\hline\n\\hbox{\\scriptsize$\\begin{array}{cc}&k\\cr\nt&\\cr\\end{array}$}&1&2&3&4&5&6&7&8\\\\\n\\hline\n1& \\infty\/1.00&&&&&&&\\\\ \\hline\n2& \\infty\/1.00& \\infty\/1.00&&&&&&\\\\ \\hline\n3& \\infty\/1.00&37.8\/1.66&37.8\/1.66&&&&&\\\\ \\hline\n4& \\infty\/1.00&28.5\/1.68&15.6\/1.68&28.5\/1.67&&&&\\\\ \\hline\n5& \\infty\/1.00&24.8\/1.69&11.4\/1.69&11.4\/1.69&24.8\/1.69&&&\\\\ \\hline\n6& \\infty\/1.00&23.0\/1.70& 9.6\/1.70& 7.9\/1.70& 9.6\/1.70&23.0\/1.70&&\\\\ \\hline\n7& \\infty\/1.00&21.7\/1.71& 8.6\/1.71& 6.4\/1.71& 6.4\/1.71& 8.6\/1.71&21.7\/1.71&\\\\ \\hline\n8& \\infty\/1.00&20.9\/1.72& 8.0\/1.71& 5.6\/1.72& 5.1\/1.72& 5.6\/1.72& 8.0\/1.71&20.9\/1.72\\\\ \\hline\n\\end{array}$}\\\\\n\\hbox{Signal geometry: pulse}\\\\\n\\hline\\hline\n\\hbox{\\small$\\begin{array}{||c|c|c|c|c|c|c|c|c||}\n\\hline\n\\hbox{\\small$\\begin{array}{cc}&k\\cr\nt&\\cr\\end{array}$}&1&2&3&4&5&6&7&8\\\\\n\\hline\n1& \\infty\/1.00&&&&&&&\\\\ \\hline\n2& \\infty\/1.00& \\infty\/1.00&&&&&&\\\\ \\hline\n3&19.0\/1.67&19.0\/1.67&37.8\/1.66&&&&&\\\\ \\hline\n4& 7.8\/1.68& 7.8\/1.68&10.3\/1.68&28.5\/1.67&&&&\\\\ \\hline\n5& 4.2\/1.70& 4.2\/1.70& 4.9\/1.69& 7.9\/1.69&24.8\/1.69&&&\\\\ \\hline\n6& 2.6\/1.70& 2.6\/1.70& 2.8\/1.70& 3.8\/1.71& 6.9\/1.70&23.0\/1.70&&\\\\ \\hline\n7& 1.7\/1.71& 1.7\/1.71& 1.9\/1.72& 2.2\/1.71& 3.3\/1.71& 6.3\/1.71&21.7\/1.71&\\\\ \\hline\n8& 1.2\/1.72& 1.2\/1.72& 1.3\/1.72& 1.5\/1.73& 1.9\/1.72& 2.9\/1.72& 5.9\/1.72&20.9\/1.72\\\\ \\hline\n\\end{array}$}\\\\\n\\hbox{Signal geometry: step}\\\\\n\\hline\\hline\n\\hbox{\\small$\\begin{array}{||c|c|c|c|c|c|c|c|c||}\n\\hline\n\\hbox{\\small$\\begin{array}{cc}&k\\cr\nt&\\cr\\end{array}$}&1&2&3&4&5&6&7&8\\\\\n\\hline\n1& \\infty\/1.00&&&&&&&\\\\ \\hline\n2& \\infty\/1.00& \\infty\/1.00&&&&&&\\\\ \\hline\n3& \\infty\/1.00& \\infty\/1.00&37.8\/1.66&&&&&\\\\ \\hline\n4& \\infty\/1.00& \\infty\/1.00&38.3\/1.68&28.5\/1.67&&&&\\\\ \\hline\n5& \\infty\/1.00& \\infty\/1.00&38.5\/1.69&28.7\/1.69&24.8\/1.69&&&\\\\ \\hline\n6& \\infty\/1.00& \\infty\/1.00&38.8\/1.70&28.9\/1.70&25.0\/1.70&23.0\/1.70&&\\\\ \\hline\n7& \\infty\/1.00& \\infty\/1.00&39.0\/1.71&29.1\/1.72&25.3\/1.72&23.2\/1.72&21.7\/1.71&\\\\ \\hline\n8& \\infty\/1.00& \\infty\/1.00&39.2\/1.72&29.1\/1.72&25.3\/1.72&23.2\/1.72&21.8\/1.72&20.9\/1.72\\\\ \\hline\n\\end{array}$}\\\\\n\\hbox{Signal geometry: free jump}\\\\\n\\hline\n\\end{array}\n$$\n\\caption{\\label{table16} Change detection via quadratic detectors.
The first number in a cell is $\\rho_{tk}$, the second is the non-optimality index $\\rho_{tk}\/\\rho_{tk}^*$.}\n\\end{table}\n\\end{center}\n\n\\subsection{Change detection via quadratic detectors, sub-Gaussian case}\\label{sectquaddetSG}\nUsing Proposition \\ref{concatenationSG} in the role of Proposition \\ref{concatenation}, the constructions and the results of Section \\ref{sectquaddet} can be easily adjusted to the situation when the noise $\\xi^d$ in {\\rm (\\ref{eqOS})} is zero mean sub-Gaussian, $\\xi^d\\sim{\\cal SG}(0,\\Theta)$, rather than Gaussian. In fact, there are two options\nfor such an adjustment, based on quadratic detectors yielded by the saddle point problem (\\ref{SPPLiftSG}) and the convex minimization problem (\\ref{SPPLiftConvSG}), respectively. To save space, we restrict ourselves to the first option; utilizing the second one is completely similar.\n\\par\nThe only modification of the contents of Section \\ref{sectquaddet} needed to pass from Gaussian to sub-Gaussian observation noise is the redefinition of the functions $\\Phi_{t}(h,H;\\Theta)$ and $\\Phi_{tk\\rho}(h,H;\\Theta)$\n introduced in Section \\ref{ssprelim}. In our present situation,\n \\begin{itemize}\n \\item $\\Phi_{t}(h,H;\\Theta)$ should be redefined as the function $\\Phi^{\\delta_t}_{A_t,{\\cal N}}(h,H;\\Theta)$ given by relation\n(\\ref{phiSG}) as applied to $\\delta_t$ in the role of $\\delta$, $A_t=[\\bar{A}_t,0]$ in the role of $A$, the set ${\\cal N}$, see (\\ref{seq101A}), in the role of ${\\cal Z}$, and the matrix $\\Theta_{*,t}$ in the role of $\\Theta_*$.\n\\item $\\Phi_{tk\\rho}(h,H;\\Theta)$ should be redefined as the function $\\Phi^{\\delta_t}_{A_t,{\\cal Z}^\\rho_k}(h,H;\\Theta)$ given by (\\ref{phiSG}) with ${\\cal Z}^\\rho_k$ in the role of ${\\cal Z}$ and\nthe just specified $\\delta_t$, $A_t$, $\\Theta_*$.\n\\end{itemize}\nWith this redefinition of $\\Phi_{t}(h,H;\\Theta)$ and $\\Phi_{tk\\rho}(h,H;\\Theta)$, Corollary \\ref{corLift} and Proposition \\ref{propquad} (with the words ``let the observation noise $\\xi^d$ be Gaussian with zero mean and covariance matrix $\\Theta\\in {\\cal U}_d$'' replaced with ``let the observation noise $\\xi^d$ be sub-Gaussian with zero mean and matrix parameter $\\Theta\\in {\\cal U}_d$'') remain intact.\n\n\n\n\\section{Rust signal detection}\n\n\n\\subsection{Situation}\\label{ysituation}\nIn this Section, we present an example motivated by materials science applications, in which one aims to detect the onset of a rust signal in a piece of metal from a sequence of noisy images. In general, this setup can be used to detect degradation in systems\nof a similar nature.\n\nThe rust signal occurs at some time, and its energy grows in the subsequent images.\nThis can be modeled as follows. At times $t=0,1,...,d$,\nwe observe vectors\n\\begin{equation}\n\\label{yeq1}\ny_t=y+x_t+\\xi_t\\in{\\mathbf{R}}^\\nu,\n\\end{equation}\nwhere\n\\begin{itemize}\n\\item $y$ is a fixed deterministic ``background,''\n\\item $x_t$ is a deterministic {\\sl spot}, which may correspond to a rust signal at time $t$, and\n\\item $\\xi_t$ are zero mean Gaussian observation noises, independent across $t$, with covariance matrices $\\Sigma_t$.\n\\end{itemize}\nWe assume that $x_0=0$, and our ideal goal is to decide on the nuisance hypothesis $x_t=0$, $1\n\\leq t\\leq d$, versus the alternative that the input (``signal'') $x=[x_1;...;x_d]$ is of some shape and some positive magnitude.
We specify the shape and the magnitude below.\n\n\n\\subsubsection{Assumptions on observation noise}\\label{yobsnoise}\nAssume that the observation noise covariance matrices $\\Sigma_t$, for all $t$, are known to belong to a given convex compact subset $\\Xi$ of the interior of the\npositive semidefinite cone ${\\mathbf{S}}^\\nu_+$. We allow the following two scenarios:\n\\begin{itemize}\n\\item[{\\bf C.1}]: $\\Sigma_t=\\Sigma\\in\\Xi$ for all $t$;\n\\item[{\\bf C.2}]: $\\Sigma_t$ can vary with $t$, but stay all the time in $\\Xi$.\n\\end{itemize}\n\\subsubsection{Assumptions on spots}\\label{sec:spots}\nWe specify signals $x=[x_1;...;x_d]$ by {\\sl shape} $k\\in\\{1,...,K\\}$, $K = d$, and {\\sl magnitude} $\\rho>0$. Namely, signal $x=[x_1;...;x_d]$ of shape $k$ and magnitude $\\geq\\rho>0$ ``starts'' at time $k$, meaning that $x_t=0$ when $t<k$, while the energies of the blocks $x_t$, $k\\leq t\\leq d$, satisfy the constraints\n\\begin{equation}\\label{yeq2}\n\\begin{array}{c}\n\\|x_k\\|_2^2\\geq \\rho^2p(1),\\\\\n\\|x_t\\|_2^2\\geq \\rho^2p(t-k+1)+\\sum_{s=1}^{t-k}\\alpha_{t,s}\\|x_{t-s}\\|_2^2,\\;\\;k<t\\leq d,\\\\\n\\end{array}\n\\end{equation}\nwith given $p(\\cdot)\\geq0$, $p(1)=1$, and $\\alpha_{t,s}\\geq0$. To give an impression of the resulting signal models:\n\\begin{itemize}\n\\item With $p(1)=1$ and $p(s)=0$ for $s>1$,\n\\begin{itemize}\n\\item with $\\alpha_{t,s}\\equiv 0$, we get an ``occasional spot'' of magnitude $\\geq \\rho$ and shape $k$: $x_t=0$ for $t<k$ and for $t>k$;\n\\item with $\\alpha_{t,1}=\\lambda_t\\geq0$ and $\\alpha_{t,s}=0$ when $s>1$, we get $x_t=0$ for $t<k$, $\\|x_k\\|_2^2\\geq\\rho^2$, and $\\|x_t\\|_2^2\\geq\\lambda_t\\|x_{t-1}\\|_2^2$ for $t>k$. In other words, the energy of the signal of shape $k$ increases or decreases in a prescribed way after the instant $k$.\n \\end{itemize}\n\n\\item Setting $\\alpha_{t,s}\\equiv0$, we get signals of shape $k$ with $x_t=0$ for $t<k$ and with the prescribed energy profile $\\|x_t\\|_2^2\\geq\\rho^2p(t-k+1)$, $k\\leq t\\leq d$.\n\\end{itemize}\n\\par\nIn order to apply the approach of Section \\ref{sectquaddet}, we represent the outlined situation in the format of Section \\ref{newsetup} as follows.\n\\par1. We select $R>0$, and put\n\\[\nX=\\{x\\in {\\mathbf{R}}^n:\\,{\\hbox{\\rm Tr}}(Z(x)Q_i)\\leq R^2,\\,1\\leq i\\leq n\\}, \\;\\;Q_i={\\hbox{\\rm Diag}}\\{e_i\\},\n\\]\nwhere $e_i$ is the $i$th canonical basis vector in ${\\mathbf{R}}^{n+1}$.\n We further set $I=n$ and (cf. (\\ref{seq101}))\n$$\n{\\cal X}=\\{W\\in{\\mathbf{S}}^{n+1}_+: W_{n+1,n+1}=1, \\,{\\hbox{\\rm Tr}}(WQ_i)\\leq R^2,\\,1\\leq i\\leq n\\}.\n $$\n\\par2.\nIn our current situation, the nuisance set $N$ is the origin. To represent this set in the form (\\ref{seq102}), it suffices to set $I_+=I+1=n+1$, $q_{n+1}=0$, and to take, as $Q_{n+1}$, the $(n+1)\\times (n+1)$ diagonal matrix with the diagonal entries $1,...,1,0$.\nWe put (cf. \\rf{seq101A})\n$$\n{\\cal N}=\\{W\\in{\\mathbf{S}}^{n+1}_+:\\, W_{n+1,n+1}=1,\\, {\\hbox{\\rm Tr}}(WQ_i)\\leq q_i,\\,1\\leq i\\leq n+1\\}.\n$$\n\\par3. The sets $W_k$ of signals of shape $k$ and magnitude $\\geq1$, as described in Section \\ref{sec:spots}, are given by quadratic constraints on $x=[x_1;...;x_d]$:\n\\\\\n$\\bullet$ linear constraints on the traces of diagonal blocks $Z_t(x)=x_tx_t^T$ in $Z(x)=[x_1;...;x_d;1][x_1;...;x_d;1]^T$, $1\\leq t\\leq d$, namely,\n\\begin{equation}\\label{yeq30}\n\\begin{array}{c}\n{\\hbox{\\rm Tr}}(Z_t(x))\\leq 0,\\;1\\leq t\\leq k-1;\\;\\;\\;-{\\hbox{\\rm Tr}}(Z_k(x))\\leq -p(1)=-1;\\\\\n-{\\hbox{\\rm Tr}}(Z_t(x))+\\sum_{s=1}^{t-k} \\alpha_{t,s}{\\hbox{\\rm Tr}}(Z_{t-s}(x))\\leq -p(t-k+1),\\;k<t\\leq d.\\\\\n\\end{array}\n\\end{equation}\n\\par\nTo make this situation amenable to the approach of Section \\ref{sectquaddet}, we eliminate the unknown background by passing from the observations \\rf{yeq1} to the observations $y_t-y_0=x_t+[\\xi_t-\\xi_0]$, $1\\leq t\\leq d$, and impose two simplifications.\n\\par1. The observation noises obey scenario {\\bf C.1}, with $\\Xi$ comprised of the matrices\n\\begin{equation}\\label{yeq5}\n\\Sigma=\\theta\\sigma^2I_\\nu\n\\end{equation}\nwith known $\\sigma>0$ and known range $[\\vartheta,1]$ of the factor $\\theta$,\nwith $\\vartheta\\in(0,1]$.\n\\par2. The only restrictions on the activation signal, apart from the component-wise boundedness, are the energy constraints \\rf{yeq2} (e.g., linear constraints as in \\rf{yeq3} are not allowed).\n\\par\nNow, the computational problems we should solve in the framework of the approach developed in Section \\ref{sectquaddet} reduce to building and solving, for given $t\\in\\{1,...,d\\}$, $k\\in\\{1,...,t\\}$, and $\\rho>0$, saddle point problems associated with $t,k,\\rho$. Let us fix $t\\in\\{1,...,d\\}$, $k\\in\\{1,...,t\\}$, and $\\rho>0$, and let ${\\cal SP}(t,k,\\rho)$ denote the corresponding saddle point problem.
This problem is built as follows.\n\\par1) We deal with observations\n\\[\ny^t=x^t+\\xi^t,\\,\\,\\xi^t\\sim{\\cal N}(0,\\Theta)\n\\]\nwhere\n\\begin{enumerate}\n\\item[(a)]\n $y^t$, $x^t$ are block vectors with $t$ blocks, $y_i$ and $x_i$, respectively; the dimension of every block is $\\nu$;\n\\item[(b)] $\\Theta\\in {\\cal U}_t$, where ${\\cal U}_t$ is comprised of matrices $\\Theta=\\Theta_\\theta$ with $t\\times t$ blocks $\\Theta^{ij}_\\theta$ of size $\\nu\\times\\nu$ such that\n\\begin{equation}\\label{givingrise}\n\\Theta_\\theta^{ij}=\\left\\{\\begin{array}{ll}\\theta\\sigma^2I_\\nu,&i\\neq j\\\\\n2\\theta\\sigma^2I_\\nu,&i=j\\\\\n\\end{array}\\right.,\n\\end{equation}\nwith parameter $\\theta$ running through $[\\vartheta,1]$ (cf. \\rf{yeq5}). In other words, denoting by $J_t$ the $t\\times t$ matrix with diagonal entries equal to 2 and off-diagonal entries equal to 1, we have\n$$\n{\\cal U}_t=\\{J_t \\otimes \\theta\\sigma^2 I_\\nu:\\,\\vartheta\\leq\\theta\\leq 1\\},\n$$\nwhere $A\\otimes B$ is the Kronecker product of matrices $A$, $B$: $A\\otimes B$ is the block matrix obtained by replacing the entries $A_{ij}$ of $A$ with the blocks $A_{ij}B$.\n\\end{enumerate}\n\\par\nIt is immediately seen that ${\\cal U}_t$ has the $\\succeq$-largest element, specifically, the matrix\n\\[\n\\Theta_{*,t}=\\sigma^2J_t\\otimes I_\\nu.\n\\]\nNote that\n\\[\n\\Theta_{*,t}^{1\/2}=\\sigma J_t^{1\/2}\\otimes I_\\nu \\hbox{\\ and\\ } \\Theta\\in{\\cal U}_t\\Rightarrow \\|\\Theta^{1\/2}\\Theta_{*,t}^{-1\/2}-I_{\\nu t}\\|\\leq \\delta:=1-\\sqrt{\\vartheta}.\n\\]\n\\par2) We specify the set ${\\cal Z}_{tk\\rho}\\subset {\\mathbf{S}}^{\\nu t+1}_+$ as follows:\n\\[\\begin{array}{rcl}\n{\\cal Z}_{tk\\rho}&=&\\left\\{Z\\in{\\mathbf{S}}^{\\nu t+1}_+: Z_{\\nu t+1,\\nu t+1}=1,\n{\\hbox{\\rm Tr}}\\left(Z{\\hbox{\\rm Diag}}\\{{\\cal D}_{tks},0\\}\\right)\\leq \\rho^2 d_{tks},\\;1\\leq s\\leq S_{tk}\\right\\},\\\\\n{\\cal D}_{tks}&=&D_{tks}\\otimes I_\\nu\\\\\n\\end{array}\n\\]\nwith {\\sl diagonal} $t\\times t$ matrices $D_{tks}$ readily given by the coefficients in \\rf{yeq30}.\n\\par\nNow, we are in the situation where the functions $\\Phi_t$ and $\\Phi_{tk\\rho}$ from Section \\ref{ssprelim} are as follows:\n{\\small\\[\n\\begin{array}{rcl}\n\\Phi_t(h,H;\\Theta)&=&- \\mbox{\\small$\\frac{1}{2}$}\\ln{\\hbox{\\rm Det}}\\left(I_{\\nu t}-\\Theta_{*,t}^{1\/2}H\\Theta_{*,t}^{1\/2}\\right)+ \\mbox{\\small$\\frac{1}{2}$}{\\hbox{\\rm Tr}}\\left([\\Theta-\\Theta_{*,t}]H\\right)\\\\\n&&+\n{\\delta(2+\\delta)\\over 2(1-\\|\\Theta_{*,t}^{1\/2}H\\Theta_{*,t}^{1\/2}\\|)}\\|\\Theta_{*,t}^{1\/2}H\\Theta_{*,t}^{1\/2}\\|_F^2+ \\mbox{\\small$\\frac{1}{2}$} h^T[\\Theta_{*,t}^{-1}-H]^{-1} h\\\\\n\\Phi_{tk\\rho}(h,H;\\Theta)\n&=&- \\mbox{\\small$\\frac{1}{2}$}\\ln{\\hbox{\\rm Det}}\\left(I_{\\nu t}-\\Theta_{*,t}^{1\/2}H\\Theta_{*,t}^{1\/2}\\right)+ \\mbox{\\small$\\frac{1}{2}$}{\\hbox{\\rm Tr}}\\left([\\Theta-\\Theta_{*,t}]H\\right)\\\\\n&&+\n{\\delta(2+\\delta)\\over 2(1-\\|\\Theta_{*,t}^{1\/2}H\\Theta_{*,t}^{1\/2}\\|)}\\|\\Theta_{*,t}^{1\/2}H\\Theta_{*,t}^{1\/2}\\|_F^2\\\\\n&&+ \\mbox{\\small$\\frac{1}{2}$}\\max\\limits_{Z\\in {\\cal Z}_{tk\\rho}}{\\hbox{\\rm Tr}}\\left(Z\\hbox{\\scriptsize$\\left[\\begin{array}{c|c}H+H[\\Theta_{*,t}^{-1}-H]^{-1}H&h+H[\\Theta_{*,t}^{-1}-H]^{-1}h\\cr\n\\hline\nh^T+h^T[\\Theta_{*,t}^{-1}-H]^{-1}H&h^T[\\Theta_{*,t}^{-1}-H]^{-1}h\\cr\\end{array}\\right]$}\\right).\\\\\n\\end{array}\n\\]}\nThe saddle point problem ${\\cal SP}(t,k,\\rho)$
reads\n\\begin{equation}\\label{yeq444}\n\\begin{array}{c}\n\\min\\limits_{(h,H)\\in{\\cal H}}\\left[\\Psi(h,H):=\\max\\limits_{\\Theta_1,\\Theta_2\\in{\\cal U}_t} \\mbox{\\small$\\frac{1}{2}$}\\left[\\Phi_t(-h,-H;\\Theta_1)+\\Phi_{tk\\rho}(h,H;\\Theta_2)\\right]\\right],\\\\\n{\\cal H}=\\{(h,H):-\\gamma \\Theta_{*,t}^{-1}\\preceq H\\preceq \\gamma\\Theta_{*,t}^{-1}\\}.\n\\end{array}\n\\end{equation}\nObserve that\nwhen $(h,H)\\in{\\cal H}$ and $\\Theta\\in{\\cal U}_t$, we clearly have $\\Phi_t(h,H;\\Theta)=\\Phi_t(-h,H;\\Theta)$ and $\\Phi_{tk\\rho}(h,H;\\Theta)=\\Phi_{tk\\rho}(-h,H;\\Theta)$, where the concluding relation is due to the fact that whenever $Z\\in{\\cal Z}_{tk\\rho}$, we\nalso have $EZE\\in{\\cal Z}_{tk\\rho}$, where $E$ is the diagonal matrix with diagonal $1,1,...,1,-1$. As a result, (\\ref{yeq444}) has a saddle point with $h=0$, and\nbuilding such a saddle point reduces to solving the problem\n\\begin{equation}\\label{yeq4444}\n\\min\\limits_{H\\in\\widehat{{\\cal H}}}\\left[\\widehat{\\Psi}(H):=\\max\\limits_{\\Theta_1,\\Theta_2\\in{\\cal U}_t} \\mbox{\\small$\\frac{1}{2}$}\\left[\\widehat{\\Phi}_t(-H;\\Theta_1)+\\widehat{\\Phi}_{tk\\rho}(H;\\Theta_2)\\right]\\right],\n\\end{equation}\nwhere\n\\[\\begin{array}{rcl}\n\\widehat{\\Phi}_t(H;\\Theta)&=&- \\mbox{\\small$\\frac{1}{2}$}\\ln{\\hbox{\\rm Det}}\\left(I_{\\nu t}-\\Theta_{*,t}^{1\/2}H\\Theta_{*,t}^{1\/2}\\right)\\\\\n&&+ \\mbox{\\small$\\frac{1}{2}$}{\\hbox{\\rm Tr}}\\left([\\Theta-\\Theta_{*,t}]H\\right)+\n{\\delta(2+\\delta)\\over 2(1-\\|\\Theta_{*,t}^{1\/2}H\\Theta_{*,t}^{1\/2}\\|)}\\|\\Theta_{*,t}^{1\/2}H\\Theta_{*,t}^{1\/2}\\|_F^2,\\\\\n\\widehat{\\Phi}_{tk\\rho}(H;\\Theta)&=&- \\mbox{\\small$\\frac{1}{2}$}\\ln{\\hbox{\\rm Det}}\\left(I_{\\nu t}-\\Theta_{*,t}^{1\/2}H\\Theta_{*,t}^{1\/2}\\right)+ \\mbox{\\small$\\frac{1}{2}$}{\\hbox{\\rm Tr}}\\left([\\Theta-\\Theta_{*,t}]H\\right)\\\\\n&&+\n{\\delta(2+\\delta)\\over 2(1-\\|\\Theta_{*,t}^{1\/2}H\\Theta_{*,t}^{1\/2}\\|)}\\|\\Theta_{*,t}^{1\/2}H\\Theta_{*,t}^{1\/2}\\|_F^2\\\\\n&&+ \\mbox{\\small$\\frac{1}{2}$}\\max\\limits_{Z\\in {\\cal Z}_{tk\\rho}}{\\hbox{\\rm Tr}}\\left(\\hbox{\\rm NW}_{\\nu t}(Z)\\left[H+H[\\Theta_{*,t}^{-1}-H]^{-1}H\\right]\\right),\\\\\n\\widehat{{\\cal H}}&=&\\{H:-\\gamma \\Theta_{*,t}^{-1}\\preceq H\\preceq \\gamma \\Theta_{*,t}^{-1}\\},\\\\\n\\end{array}\\]\nand $\\hbox{\\rm NW}_\\ell(Q)$ is the North-Western $\\ell\\times\\ell$ block of an $(\\ell+1)\\times(\\ell+1)$ matrix $Q$.\n\\par\nNote that the saddle point problem (\\ref{yeq4444}) possesses a symmetry; specifically, if ${\\cal D}=I_t\\otimes P$ with a matrix $P$ obtained from a $\\nu\\times\\nu$ permutation matrix by replacing some of its unit entries with their negatives, then\n\\begin{itemize}\n\\item ${\\cal D}^T\\Theta {\\cal D}=\\Theta$ for every $\\Theta\\in{\\cal U}_t$,\n\\item ${\\hbox{\\rm Diag}}\\{{\\cal D},1\\}^TZ{\\hbox{\\rm Diag}}\\{{\\cal D},1\\}\\in {\\cal Z}_{tk\\rho}$ whenever $Z\\in{\\cal Z}_{tk\\rho}$,\n\\item ${\\cal D}^TH{\\cal D}\\in\\widehat{{\\cal H}}$ whenever $H\\in\\widehat{{\\cal H}}$.\n\\end{itemize}\nHence, as is immediately seen from (\\ref{yeq4444}), it holds $\\widehat{\\Psi}({\\cal D}^TH{\\cal D})=\\widehat{\\Psi}(H)$. As a result, (\\ref{yeq4444}) has a saddle point with $H={\\cal D}^TH{\\cal D}$ for all indicated ${\\cal D}$'s, or, which is the same, with $H=G\\otimes I_\\nu$ for some $t\\times t$ symmetric matrix $G$.
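This reduction rests on standard Kronecker product identities, e.g., ${\\hbox{\\rm Det}}(M\\otimes I_\\nu)={\\hbox{\\rm Det}}^\\nu(M)$, $\\|M\\otimes I_\\nu\\|_F^2=\\nu\\|M\\|_F^2$, and $\\|M\\otimes I_\\nu\\|=\\|M\\|$, which produce the factors $\\nu$ in the reduced problem below; a quick numerical sanity check of these identities (Python; the sizes are arbitrary) is as follows.\n\\begin{verbatim}\n# Sanity check of the Kronecker identities behind the reduction H = G kron I_nu.\nimport numpy as np\n\nrng = np.random.default_rng(1)\nt, nu = 4, 3\nM = rng.normal(size=(t, t))\nM = 0.5 * (M + M.T)                     # symmetric t x t matrix\nK = np.kron(M, np.eye(nu))              # M kron I_nu\n\nassert np.allclose(np.linalg.det(K), np.linalg.det(M) ** nu)\nassert np.allclose(np.linalg.norm(K, 'fro') ** 2,\n                   nu * np.linalg.norm(M, 'fro') ** 2)\nassert np.allclose(np.linalg.norm(K, 2), np.linalg.norm(M, 2))\nprint('Kronecker identities verified')\n\\end{verbatim}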
Specifying $G$ reduces to solving a saddle point problem of sizes\n{\\sl not affected by $\\nu$}, specifically, the problem\n\\begin{equation}\\label{yeq44444}\n\\min\\limits_{G\\in\\widehat{{\\cal G}}}\\left[\\widetilde{\\Psi}(G):=\\max\\limits_{{\\cal I}_1,{\\cal I}_2\\in {\\cal J}_t} \\mbox{\\small$\\frac{1}{2}$}\\left[\\widetilde{\\Phi}_t(-G;{\\cal I}_1)+\\widetilde{\\Phi}_{tk\\rho}(G;{\\cal I}_2)\\right]\\right],\n\\end{equation}\n\\[\n\\begin{array}{rcl}\n\\widetilde{\\Phi}_t(G;{\\cal I})&=&-{\\nu\\over 2}\\ln{\\hbox{\\rm Det}}\\left(I_{t}-J_t^{1\/2}GJ_t^{1\/2}\\right)+{\\nu\\over 2}{\\hbox{\\rm Tr}}\\left([{\\cal I}-J_t]G\\right)\\\\\n&&+\n{\\delta(2+\\delta)\\nu\\over 2(1-\\|J_t^{1\/2}G J_t^{1\/2}\\|)}\\|J_t^{1\/2}GJ_t^{1\/2}\\|_F^2,\\\\\n\\widetilde{\\Phi}_{tk\\rho}(G;{\\cal I})&=&-{\\nu\\over 2}\\ln{\\hbox{\\rm Det}}\\left(I_{t}-J_t^{1\/2}GJ_t^{1\/2}\\right)+{\\nu\\over 2}{\\hbox{\\rm Tr}}\\left([{\\cal I}-J_t]G\\right)\\\\\n&&+\n{\\delta(2+\\delta)\\nu\\over 2(1-\\|J_t^{1\/2}GJ_t^{1\/2}\\|)}\\|J_t^{1\/2}GJ_t^{1\/2}\\|_F^2\\\\\n&&+{\\nu\\over 2}\\max\\limits_{W\\in {\\cal W}_{tk\\rho}}{\\hbox{\\rm Tr}}\\left(W\\left[G+G[J_t^{-1}-G]^{-1}G\\right]\\right),\n\\end{array}\n\\]\nwhere\n\\[\\begin{array}{rcl}\nJ_t&=&\\sigma^2[I_t+[1;...;1][1;...;1]^T],\\\\\n{\\cal J}_t&=&\\{\\theta J_t:\\vartheta\\leq \\theta\\leq 1\\},\\\\\n\\widehat{{\\cal G}}&=&\\{G\\in{\\mathbf{S}}^{t}_+:-\\gamma J_t^{-1}\\preceq G\\preceq \\gamma J_t^{-1}\\},\\\\\n{\\cal W}_{tk\\rho}&=&\\left\\{W:W\\in{\\mathbf{S}}^{t}_+,{\\hbox{\\rm Tr}}(WD_{tks})\\leq \\rho^2 \\nu^{-1}d_{tks},\\,1\\leq s\\leq S_{tk}\\right\\}.\\\\\n\\end{array}\\]\n\\begin{remark}\\label{yrem1} {\\rm Our approach is aimed at processing the situation where the magnitude of a spot is quantified by its energy. When $y_t$ represents an image with $\\nu$ pixels, this model makes sense if changes in the image are more or less spatially uniform, so that a\n``typical spot of magnitude 1'' means a small (possibly $\\ll \\sigma$) change in the brightness of a significant fraction of the pixels (i.e., we are in the case of {\\em dense alternatives}, in the terminology of \\cite{Ingster10}). We can also easily process the model where a ``typical spot of magnitude 1'' means large (of order of 1) changes in the brightnesses of just a few pixels (in the terminology of \\cite{Ingster10}, this is the case of {\\em sparse alternatives}). In the latter situation, we\ndo not need the quadratic lift: we can model the set of ``spots of shape $k$ and magnitude $\\geq\\rho>0$'' as the union of two convex sets, one where the $k$-th entry in the spot is $\\geq \\rho$, and the other one -- where this entry is $\\leq-\\rho$. In this model, all we need are affine detectors.}\n\\end{remark}\n\\subsection{Real-data example}\n\n\nIn this Section, we consider a sequence of metal corrosion images captured using bright-field transmission electron microscopy.\\footnote{Data courtesy of\nDr. Josh Kacher at the School of Materials Science and Engineering, Georgia Institute of Technology. More details can be found in Section 3.1 of \\cite{CaoZhu17}.\n} We downsize each image to 308-by-308 pixels. There are 23 gray images (frames) in the sequence, at 2 frames per second; hence, the sequence corresponds to 11.5 seconds of the original video. At some point, a corrosion spot initiates in the image sequence. Sample images from the sequence are\nillustrated in Fig.
\begin{figure}
\begin{center}
\includegraphics[width = 0.9\textwidth]{Rust.pdf}
\end{center}
\caption{A sequence of metal corrosion images. The time (the index of the image in the sequence) is labeled; the corrosion initiates at time $t = 8$ (marked by a red circle) and develops over time.}
\label{fig:data}
\end{figure}

In terms of the definition in \rf{yeq2}, the signal model has the following parameters: $\alpha_{t,1} = 1$ and $\alpha_{t,s}=0$ for $s>1$; $p(1)=1$ and $p(s)=0$ for $s>1$; $\rho\approx1.2\times 10^2$, estimated from the data.

In the example, we set the risk tolerance $\epsilon = 0.1$ and let $\vartheta = 0.5$. To evaluate detection performance, we run 3000 Monte Carlo trials and add zero-mean Gaussian noise (with variance 25) to the images. To estimate the noise variance $\sigma^2$, we use the empirical estimate obtained from the first 5 noisy images in the sequence (we thus assume they contain no rust spot); the resulting estimate is 25.

Since the rust signal is local (when it occurs, it is captured by a cluster of pixels), we apply our detector in the following scheme: break each image into (rectangular or square) patches of equal size; design a quadratic detector, as described above, for each patch; then, at each time instant, claim a change whenever at least one patch detects a change. This corresponds to a ``multi-sensor'' scheme in which the local detection statistics are combined by taking their maximum (a sketch of this scheme and of the benchmark detector defined next is given below).\par
We compare our quadratic detector to the ``sliding window'' ({\tt{Sl-W}}) detector developed in \cite{korostelev2008,guiguesjnps12} and defined as follows. Given ``window width'' $h\in\{1,2,...\}$ and denoting by $y_{t j}$ the vector of observations at time $t$ in patch $j$, we build the left and the right estimates, ${\bar y}_{\ell j}^t ( h)$ and ${\bar y}_{r j}^t ( h)$, of $y_{tj}$:
$$
{\bar y}_{\ell j}^t ( h) = \frac{1}{h} \sum_{i=t-h+1}^t y_{i j} \mbox{ and }{\bar y}_{r j}^t ( h) =\frac{1}{h} \sum_{i=t}^{t+h-1} y_{i j}.
$$
At time $t$, {\tt{Sl-W}} always accepts the nuisance hypothesis when $t \leq 2h-2$; when $t \geq 2h-1$, the nuisance hypothesis is accepted if for every patch $j=1,\ldots,K$, it holds that
$$
\max_{h \leq \tau \leq t-h+1}\;\|{\bar y}_{\ell j}^\tau ( h) - {\bar y}_{r j}^\tau ( h)\|_{\infty} \leq \kappa,
$$
and is rejected otherwise. In our experiments, $h=2$ and $h=3$ were used. The corresponding thresholds $\kappa$ are computed using Monte-Carlo simulation; see \cite{guiguesjnps12} for details.
\par
Simulation results are presented in Table \ref{singletable}.
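For concreteness, the following minimal NumPy sketch (entirely our own illustration: function names, the patch grid and the threshold value are hypothetical, and the quadratic detector itself is not reproduced here) implements the patch splitting and the {\tt{Sl-W}} stopping rule just described; time indices in the code are 0-based, so the 1-based conditions above are shifted by one.
\begin{verbatim}
import numpy as np

def split_into_patches(img, kr, kc):
    # Split an (H, W) image into a kr x kc grid of equal patches,
    # returned as an array of shape (kr*kc, H//kr, W//kc).
    H, W = img.shape
    ph, pw = H // kr, W // kc
    return np.array([img[i*ph:(i+1)*ph, j*pw:(j+1)*pw]
                     for i in range(kr) for j in range(kc)])

def slw_statistic(frames, t, h):
    # Sl-W statistic at (0-based) time t for one patch: the largest
    # sup-norm distance between left and right window means, over
    # all admissible window centres tau; only frames[0..t] are used.
    stats = [0.0]
    for tau in range(h - 1, t - h + 2):        # 0-based tau
        left = frames[tau - h + 1:tau + 1].mean(axis=0)
        right = frames[tau:tau + h].mean(axis=0)
        stats.append(np.abs(left - right).max())
    return max(stats)

def slw_detector(images, kr=7, kc=7, h=2, kappa=30.0):
    # Multi-sensor rule: declare a change at the first time at which
    # some patch's Sl-W statistic exceeds the threshold kappa.
    patches = np.array([split_into_patches(img, kr, kc)
                        for img in images])
    T, n_patches = patches.shape[0], patches.shape[1]
    flat = patches.reshape(T, n_patches, -1)   # per-patch pixel vectors
    for t in range(2 * h - 2, T):              # nuisance kept for t <= 2h-3
        for j in range(n_patches):
            if slw_statistic(flat[:, j, :], t, h) > kappa:
                return t                       # alarm at time t
    return None                                # no alarm raised
\end{verbatim}
In an actual experiment, $\kappa$ would be calibrated by Monte-Carlo simulation on noise-only sequences, as described above.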
While the performance of {\tt{Sl-W}} with properly selected $h$ and number of patches $K$ is quite good, the quadratic detector is a clear winner in terms of reliability (zero empirical probabilities of a false alarm and a miss), and with $K=49$ it detects the change without delay.
\begin{table}
\centering
\begin{tabular}{|c|}
\hline
\\
\begin{tabular}{|c||c|c|c||}
\hline
 & \multicolumn{3}{c||}{Number $K$ of patches}\\
\hline
Detector& $K=1$ & $K=4$ & $K=8$ \\
\hline
{{\tt{Sl-W detector}}}, $h=2$ &[10.0,11.0,14.0] & [10.0,10.6,14.0] &[10.0,10.2,14.0]\\
\hline
{{\tt{Sl-W detector}}}, $h=3$ &[10.0,10.9,11.0] &[10.0,10.7,11.0]&[5.0,10.3,11.0]\\
\hline
{{\tt{Quadratic detector}}} & [13.0,13.0,13.0]& [11.0,11.0,11.0] & [10.0,10.0,10.0] \\
\hline
Detector& $K=16$ & $K=28$ & $K=49$ \\
\hline
{{\tt{Sl-W detector}}}, $h=2$ &[10.0,10.1,11.0]&[9.0,10.0,11.0]&[10.0,10.1,11.0]\\
\hline
{{\tt{Sl-W detector}}}, $h=3$ &[5.0,9.8,11.0]&[5.0,8.7,10.0]&[5.0,6.8,10.0]\\
\hline
{{\tt{Quadratic detector}}} & [10.0,10.0,10.0] & [10.0,10.0,10.0] & [8.0,8.0,8.0] \\
\hline
\end{tabular}\\
Stopping time. Data in a cell $[t_{\min}, \bar{t}, t_{\max}]$: $\bar{t}$ is the mean, and $[t_{\min}, t_{\max}]$ is the range, of the\\ instants at which the signal conclusion was made. The actual change occurs at time $8$.\\
\hline
\\
\begin{tabular}{|c||c|c|c|c|c|c||c|c|c|c|c|c||}
\hline
& \multicolumn{6}{c||}{ {\tt{False alarm probability}}} & \multicolumn{6}{c||}{ {\tt{Miss detection rate}}} \\
\hline
Number $K$ of patches& $1$ & $4$ & $8$ & $16$ & $28$ & $49$ & $1$ & $4$ & $8$ & $16$ & $28$ & $49$ \\
\hline
{{\tt{Sl-W detector}}}, $h=2$ & 0 & 0 & 0& 0& 0& 0 &0.34 &0.01&0&0&0&0 \\
\hline
{{\tt{Sl-W detector}}}, $h=3$ & 0 & 0 & 0.006& 0.05& 0.26& 0.64 & 0&0&0&0&0&0 \\
\hline
{{\tt{Quadratic detector}}} & 0&0&0&0&0&0 &0 &0&0&0&0&0 \\
\hline
\end{tabular}\\
Empirical probabilities of false alarm and missed detection.\\
\hline
\end{tabular}
\caption{\label{singletable} Numerical results for rust detection.}
\end{table}

\section{Summary and concluding remarks}
\label{conclusions}

In the present paper, we devised an analysis method to uniformly investigate a set of 12 OCs located in the Sagittarius spiral arm. We took advantage of public Washington $CT_1$ photometric data combined with GAIA DR2 astrometry. The use of GAIA data proved essential, since these objects are projected against dense stellar fields, which results in severe contamination in their CMDs.
Once we estimated the structural parameters from King-profile fits to the OCs' RDPs, we searched for statistically significant concentrations of stars in the 3D astrometric space in order to assign membership likelihoods. PARSEC isochrones covering a wide range of astrophysical parameter values were automatically fitted to the photometric data of high-membership stars, and the basic astrophysical parameters ($(m-M)_{0}$, $E(B-V)$, log $t$ and $[Fe/H]$) were derived. Then the OCs' mass functions were built and their total masses were estimated.

We confirmed BH\,150 as a genuine OC, as judged from the outcome of our decontamination procedure, which revealed real concentrations of stars in the astrometric space and allowed the identification of clearer evolutionary sequences in the object's CMD. Its physical nature had been under debate, because purely photometric analyses, or photometric data combined with lower-quality astrometry, could not unambiguously disentangle the OC from the field populations.

The studied OCs have similar Galactocentric distances, and hence are affected by nearly the same Galactic gravitational potential. For this reason, we speculate that any difference in the OC dynamical stages is caused by differences in their internal dynamical evolution. Based on this assumption, we split the OC sample into three groups according to their $r_{h}/R_J$ ratios, which turned out to be a good indicator of the relative OC dynamical evolution stages. In particular, we found that the larger the $r_{h}/R_J$ ratio, the less dynamically evolved the OC.

The investigated OCs are not in an advanced stage of dynamical evolution, since their concentration parameters span the lower part of the $c$ regime ($c\lesssim0.75$). In general, the studied OCs present $c$ values that are among the smallest for OCs of similar core radii. Their tidal radii reveal that they are relatively small OCs compared to the sizes of previously studied OCs. We verified a general trend in which the higher the concentration parameter, the higher the age/$t_{\textrm{rh}}$.
Those relatively more dynamically evolved OCs have apparently experienced a more significant loss of low-mass stars.

\section{Data collection and reduction}
\label{data_collection_reduction}

\begin{table*}
 \small
 \caption{Observations log of the studied OCs.}
 \label{log_observations}
 \begin{tabular}{lccccccc}
\hline
Cluster & $\rmn{RA}$ & $\rmn{DEC}$ & $\ell$ & $b$ & Filter & Exposure & Airmass \\
 & ($\rmn{h}$:$\rmn{m}$:$\rmn{s}$) & ($\degr$:$\arcmin$:$\arcsec$) & ($^{\circ}$) & ($^{\circ}$) & & (s) & \\
\hline
Collinder\,258 & 12:27:16 & -60:46:42 & 299.9843 & 01.9550 & $C$ & 15,150 & 1.2,1.3 \\
 & & & & & $R$ & 2,20 & 1.3,1.3 \\
NGC\,6756 & 19:08:45 & 04:43:01 & 39.1046 & -01.6865 & $C$ & 30,90,90,900 & 1.2,1.2,1.2,1.2 \\
 & & & & & $R$ & 20,20,150,150 & 1.2,1.2,1.2,1.2 \\
Czernik\,37 & 17:53:12 & -27:22:00 & 2.2053 & -0.6255 & $C$ & 30,450 & 1.0,1.0 \\
 & & & & & $R$ & 5,45 & 1.0,1.0 \\
NGC\,5381 & 14:00:40 & -59:36:18 & 311.5940 & 02.0975 & $C$ & 90,120,600,600 & 1.1,1.1,1.2,1.2 \\
 & & & & & $R$ & 30,30,120,120 & 1.2,1.2,1.2,1.2 \\
Trumpler\,25 & 17:24:24 & -39:01:01 & 349.1460 & -01.7594 & $C$ & 30,300 & 1.0,1.0 \\
 & & & & & $R$ & 5,300 & 1.0,1.0 \\
BH\,150 & 13:38:04 & -63:20:42 & 308.1334 & -0.9451 & $C$ & 60,60,600,600 & 1.2,1.2,1.2,1.2 \\
 & & & & & $R$ & 5,15,15,90,90 & 1.2,1.2,1.2,1.2,1.2 \\
Ruprecht\,111 & 14:36:00 & -59:58:48 & 315.6658 & 0.2811 & $C$ & 30,45,450 & 1.2,1.2,1.2 \\
 & & & & & $R$ & 15,15 & 1.2,1.2 \\
Ruprecht\,102 & 12:13:34 & -62:43:48 & 298.6088 & -0.1766 & $C$ & 45,450 & 1.2,1.2 \\
 & & & & & $R$ & 7,45 & 1.2,1.2 \\
NGC\,6249 & 16:57:36 & -44:49:00 & 341.5242 & -01.1772 & $C$ & 30,30 & 1.0,1.0 \\
 & & & & & $R$ & 5,5 & 1.0,1.0 \\
Basel\,5 & 17:52:24 & -30:06:00 & 359.7616 & -01.8636 & $C$ & 30,30,600,600 & 1.0,1.0,1.0,1.0 \\
 & & & & & $R$ & 15,15,120,120 & 1.0,1.0,1.0,1.0 \\
Ruprecht\,97 & 11:57:28 & -62:43:00 & 296.7920 & -0.4901 & $C$ & 60,90,900 & 1.3,1.3,1.3 \\
 & & & & & $R$ & 60,180,180 & 1.3,1.3,1.3 \\
ESO\,129-SC32 & 11:44:11 & -61:03:29 & 294.8851 & 0.7587 & $C$ & 60,60,450,450 & 1.3,1.3,1.3,1.3 \\
 & & & & & $R$ & 10,10,60,120 & 1.3,1.3,1.3,1.3 \\
\hline
\end{tabular}
\end{table*}

\begin{figure*}
\begin{center}
 \includegraphics[width=0.325\textwidth]{DSS2_nearIR_14x14_arcmin_Cr258.jpg}
 \includegraphics[width=0.325\textwidth]{DSS2_nearIR_14x14_arcmin_NGC6756.jpg}
 \includegraphics[width=0.325\textwidth]{DSS2_nearIR_14x14_arcmin_Czernik37.jpg}
 \includegraphics[width=0.325\textwidth]{DSS2_nearIR_14x14_arcmin_NGC5381.jpg}
 \includegraphics[width=0.325\textwidth]{DSS2_nearIR_14x14_arcmin_Trumpler25.jpg}
 \includegraphics[width=0.325\textwidth]{DSS2_nearIR_14x14_arcmin_BH150.jpg}
 \includegraphics[width=0.325\textwidth]{DSS2_nearIR_14x14_arcmin_Ruprecht111.jpg}
 \includegraphics[width=0.325\textwidth]{DSS2_nearIR_14x14_arcmin_Ruprecht102.jpg}
 \includegraphics[width=0.325\textwidth]{DSS2_nearIR_14x14_arcmin_NGC6249.jpg}
 \includegraphics[width=0.325\textwidth]{DSS2_nearIR_14x14_arcmin_Basel5.jpg}
 \includegraphics[width=0.325\textwidth]{DSS2_nearIR_14x14_arcmin_Ruprecht97.jpg}
 \includegraphics[width=0.325\textwidth]{DSS2_nearIR_14x14_arcmin_ESO129-32.jpg}
\caption{ DSS2 near-IR images of the OCs (from top-left to bottom-right) Collinder\,258, NGC\,6756, Czernik\,37,
NGC\\,5381, Trumpler\\,25, BH\\,150, Ruprecht\\,111, Ruprecht\\,102, NGC\\,6249, Basel\\,5, Ruprecht\\,97 and ESO\\,129-SC32. Image sizes are $14^{\\arcmin}\\times14^{\\arcmin}$. North is up and East to the left. }\n\n\n\\label{images_clusters_parte1}\n\\end{center}\n\\end{figure*}\n\n\nImages taken in $R$ (Kron-Kousins) and $C$ (Washington) filters were downloaded from the National Optical Astronomy Observatory (NOAO) public archive\\footnote[1]{http:\/\/www.noao.edu\/sdm\/archives.php}. Observations were carried out during the nights 2008\\,May\\,08 to 2008\\,May\\,12 with the Tek2K CCD imager (scale of 0.4 arcsec\\,pixel$^{-1}$, which provides a field of view of 13.6 arcmin$^2$) attached to the 0.9-m telescope at the Cerro Tololo Inter-American Observatory (CTIO, Chile; programme no. 2008A-0001, PI: Clari\\'a). The observations log is showed in Table \\ref{log_observations}. To illustrate the reader, images for the 12 selected OCs are showed in Figure \\ref{images_clusters_parte1}.\n\n\n\n\n\n\nCalibration (bias, dome and sky flats in both $C$ and $R$ filters) and standard star field images (SA\\,101, SA\\,107, SA\\,110; \\citeauthor{Landolt:1992}\\,\\,\\citeyear{Landolt:1992}; \\citeauthor{Geisler:1996}\\,\\,\\citeyear{Geisler:1996}) were also downloaded together with the images for the investigated OCs. The data reduction steps followed the standard procedures employed for optical CCD photometry: overscan and bias subtraction and division by normalized flat fields. All procedures were implented with the use of {\\fontfamily{ptm}\\selectfont QUADRED} package in {\\fontfamily{ptm}\\selectfont IRAF}\\footnote[2]{{\\fontfamily{ptm}\\selectfont \nIRAF} is distributed by the National Optical Astronomy Observatories, which is operated by the Association of Universities for Research in Astronomy, Inc., under contract with the National Science Foundation.}. Images were taken with four amplifiers and were properly mosaiced in a single image extension. Bad pixels masks were also built and defective regions were corrected via linear interpolations performed on the images.\n\n\nThe photometry was performed on the reduced images using a point spread function (PSF)-fitting algorithm. We used a modified version of the STARFINDER code \\citep{Diolaiti:2000}, which draws empirical PSFs from pre-selected stars on the images (with high signal-to-noise and relatively isolated from nearby sources) and cross-correlates them with every point source detected above a defined threshold. The main modification consisted in automatising the code, minimising the user intervention during the choice of proper sources for PSF modelling. This allowed us to deal with a relatively large number of images taken in crowded fields. In this step, we only kept in the photometric catalogues those stars for which the correlation coefficients between the measured profile and the modelled PSF resulted greater than 0.7. This criterion minimized the introduction of spurious detections and, at the same time, allowed the detection of faint stars contaminated by the background noise. \n\nAstrometric solutions were computed for the whole set of images by mapping the positions of stars in each CCD frame with the corresponding coordinates as given in the GAIA DR2 catalogue for the observed region. 
A set of linear equations was fitted, allowing the transformation between the CCD reference system and the equatorial system (see \citeauthor{Caetano:2015}\,\,\citeyear{Caetano:2015} and references therein for further details) with an astrometric precision better than $\sim0.1\,$mas. Finally, our pipeline builds a final master table (containing the instrumental $c$ and $r$ magnitudes) by registering the fainter stars detected in the longer exposure frames and successively including the brighter sources identified in shorter exposures, thus avoiding the inclusion of saturated objects.

Nearly 70 standard-star magnitudes per filter per night were measured in order to calibrate the transformation equations between the instrumental and standard systems. The standard star field SA101 was observed repeatedly over a wide range of airmass ($\sim1.1-2.5$), which allowed the determination of the extinction coefficients. We used the following calibration equations:

\begin{align}
 c\, & =\,c_{1} + C + c_{2}\times X_{C} + c_{3}\times(C-T_{1}), \\
 r\, & =\,t_{11} + T_{1} + t_{12}\times X_{T_{1}} + t_{13}\times(C-T_{1}),
\end{align}

\noindent
where $c,r$ represent instrumental magnitudes and $C,T_1$ the standard ones; $c_1$ and $t_{11}$ are the zero-point coefficients, $c_2$ and $t_{12}$ the extinction coefficients, and $c_3$ and $t_{13}$ the colour terms. The airmasses in the two filters are denoted $X_C$ and $X_{T_{1}}$. The coefficients were obtained via multiple linear regression, as implemented in the {\fontfamily{ptm}\selectfont IRAF} task {\fontfamily{ptm}\selectfont FITPARAMS}. The results are presented in Table \ref{results_FITPARAMS}. Note that, instead of instrumental $t_1$ magnitudes, we measure $r$: \cite{Geisler:1996} showed that the $R$ filter is an excellent substitute for the $T_1$ filter owing to its higher transmission at all wavelengths.
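For illustration, the same fit and its inversion can be reproduced outside {\fontfamily{ptm}\selectfont IRAF} with a short NumPy sketch (our own; all variable names are hypothetical):
\begin{verbatim}
import numpy as np

def fit_transformation(m_inst, m_std, airmass, colour):
    # Least-squares fit of  m_inst - m_std = zp + k*X + ct*(C - T1),
    # i.e. the coefficient triple of Eq. (1) or Eq. (2).
    A = np.column_stack([np.ones_like(airmass), airmass, colour])
    coeffs, *_ = np.linalg.lstsq(A, m_inst - m_std, rcond=None)
    return coeffs              # (zero point, extinction, colour term)

def invert_to_standard(c, r, Xc, Xr, c_coef, r_coef):
    # Invert Eqs. (1)-(2) for a programme star: a 2x2 linear system
    #   (1+c3)*C - c3*T1    = c - c1 - c2*Xc
    #   t13*C + (1-t13)*T1  = r - t11 - t12*Xr
    c1, c2, c3 = c_coef
    t11, t12, t13 = r_coef
    A = np.array([[1.0 + c3, -c3], [t13, 1.0 - t13]])
    b = np.array([c - c1 - c2 * Xc, r - t11 - t12 * Xr])
    C, T1 = np.linalg.solve(A, b)
    return C, T1

# e.g. c_coef = fit_transformation(c_inst, C_std, Xc, C_std - T1_std)
#      r_coef = fit_transformation(r_inst, T1_std, Xr, C_std - T1_std)
\end{verbatim}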
\n\n\n\\begin{table}\n \\begin{minipage}{85mm}\n \\caption{Mean values of the fitted coefficients and residuals for the presently calibrated $CT_1$ photometric data set.}\n \\label{results_FITPARAMS}\n \\begin{tabular}{lcccc}\n \n\\hline\n\nFilter & Zero & Extinction & Colour & Residual \\\\\n & point & coefficient & term & (mag) \\\\\n\n\\hline\n\n$C$ & 3.884$\\pm$0.023 & 0.282$\\pm$0.002 & -0.173$\\pm$0.011 & 0.011 \\\\\n$T_{1}$ & 3.306$\\pm$0.024 & 0.089$\\pm$0.002 & -0.041$\\pm$0.004 & 0.010 \\\\\n \n\\hline\n\\end{tabular}\n\\end{minipage}\n\\end{table}\n\n\n\n\n\\begin{table}\n \\tiny\n\\begin{minipage}{85mm}\n \\caption{Stars in the field of NGC\\,5381: Identifiers, coordinates, magnitudes and photometric uncertainties.}\n \\label{excerpt_NGC5381}\n \\begin{tabular}{ccccc}\n \n\\hline\n\nStar ID & $\\rmn{RA}$ & $\\rmn{DEC}$ & $C$ & $T_{1}$ \\\\\n & ($^{\\circ}$) & ($^{\\circ}$) & (mag) & (mag) \\\\\n\\hline\n\n 1 & 210.1042480 & -59.5857430 & 13.429$\\pm$0.001 & 12.631$\\pm$0.001 \\\\ \n 2 & 210.1034546 & -59.5751610 & 13.619$\\pm$0.001 & 13.024$\\pm$0.001 \\\\ \n 3 & 210.1103973 & -59.5492859 & 14.035$\\pm$0.001 & 12.383$\\pm$0.002 \\\\ \n 4 & 210.0230560 & -59.6616936 & 16.398$\\pm$0.007 & 14.996$\\pm$0.004 \\\\ \n 5 & 210.3296967 & -59.5831680 & 17.027$\\pm$0.010 & 15.274$\\pm$0.006 \\\\ \n$-$ & $-$ & $-$ & $-$ & $-$ \\\\\n\\hline \n\\end{tabular}\n\\end{minipage}\n\\end{table}\n\n\n\n\n\n\n\nFinally, the above equations were inverted in order to convert instrumental magnitudes to the standard system and obtain photometric uncertainties properly propagated into the final magnitudes, according to the STARFINDER algorithm. This step was implemented via the {\\fontfamily{ptm}\\selectfont IRAF INVERTFIT} task. For each OC, the photometric catalogue consists of an identifier for each star (ID), equatorial coordinates ($\\rmn{RA}$ and $\\rmn{DEC}$), magnitudes in filters $C$ and $T_1$ with their respective photometric uncertainties. An excerpt of the final table for the OC NGC\\,5381 is presented in Table \\ref{excerpt_NGC5381}. Our typical uncertainties are illustrated in Figure \\ref{photerrors_C_T1_NGC5381}. We also derived the completeness level of our photometry at different magnitudes. To accomplish this, we performed artificial star tests on our images. The detailed procedure and results are described in section 2 of \\cite{Angelo:2018}. \n\n\n\n\n\n\n\n\n\n\n\n\n\\begin{figure}\n\\begin{center}\n \\includegraphics[width=8.5cm]{phot_errors_NGC5381.pdf}\n \\caption{Photometric errors as a function of the $T_1$ mag for stars in the field of NGC\\,5381, which\nare typical in our photometric catalogues.}\n \\label{photerrors_C_T1_NGC5381}\n\\end{center}\n\\end{figure}\n\n\n\n\n\n\n\n\n\n\n \n \n\n \n \n \n \n \n \n\n\\section{Discussion}\n\\label{discussion}\n\n\n\n\\begin{figure*}\n\\begin{center}\n\n \\includegraphics[width=1.0\\textwidth]{Rgal_versus_age.png}\n\n\n\\caption{ Panel (a): Galactocentric distance $R_{\\textrm{G}}\\,$ versus log($t$\/yr) for the studied OC sample. Symbols' colours were assigned according to the different $r_{h}\/R_J$ \nranges described in the text. 
Red symbols: Collinder\,258 (\textcolor{red}{$\CIRCLE$}), NGC\,6756 (\textcolor{red}{$\blacktriangle$}), Czernik\,37 (\textcolor{red}{$\blacklozenge$}) and NGC\,6249 (\textcolor{red}{$\blacksquare$}); black symbols: Trumpler\,25 ($\CIRCLE$), BH\,150 ($\blacktriangle$), Ruprecht\,111 ($\blacksquare$), Basel\,5 ($\blacklozenge$); blue symbols: NGC\,5381 (\textcolor{blue}{$\CIRCLE$}), Ruprecht\,102 (\textcolor{blue}{$\blacktriangle$}), Ruprecht\,97 (\textcolor{blue}{$\blacklozenge$}) and ESO\,129-SC32 (\textcolor{blue}{$\blacksquare$}). Panel (b): Radial metallicity $[Fe/H]$ distribution. The continuous line is the relationship derived by Netopil et al. (2016). The dashed lines represent its upper and lower limits. Panel (c): Distribution of the OCs projected on to the Galactic plane. The spiral pattern was taken from Vall\'ee\,(2008). Panel (d): Distribution of OCs perpendicular to the Galactic plane (horizontal line). The grey dots represent OCs taken from Kharchenko et al. (2013) and Dias et al. (2002).}

 \label{Rgal_versus_age}
\end{center}
\end{figure*}

The OCs studied in this work have similar ages and Galactocentric distances, with the sole exception of BH\,150 (see Fig.~\ref{Rgal_versus_age}, panel (a)). They are located close to the Galactic plane ($\vert Z\vert\leq75\,$pc) and are part of the Sagittarius arm (see panels (c) and (d)). Their colour excesses $E(B-V)$ vary from $\sim0.2$ up to $\sim1.6$ (Table \ref{astroph_params}), reflecting, as expected, the different amounts of dust and gas distributed along their lines of sight.

As shown in Fig.~\ref{Rgal_versus_age}, panel (b), they span metallicities from slightly sub-solar to moderately above solar, with most of them having solar metal content. In this panel we superimposed the $[Fe/H]$-R$_{G}$ relation (continuous line; slope -0.086\,dex/kpc) as derived in \citeauthor{Netopil:2016}\,\,(\citeyear{Netopil:2016}; their table 3) by fitting the radial metallicity distribution for a set of 88 OCs in the range $R_G<12\,$kpc. The upper and lower limits of this relation, as derived from the quoted parameter uncertainties, are represented by dashed lines. OCs from the samples of \cite{Kharchenko:2013} and \cite{Dias:2002} are also shown (grey filled circles) for comparison purposes. Considering uncertainties, our cluster sample -- except possibly NGC 5381 -- agrees well with the results of \cite{Netopil:2016}.

NGC\,5381 ($R_G=6.8\pm0.5\,$kpc) departs from the expected relation towards lower metallicity. Nevertheless, other clusters also show relatively low metallicities: indeed, the metallicity distribution derived by \cite{Netopil:2016} (their figure 10) shows azimuthal variations in $[Fe/H]$ from $\sim-0.2\,$ to $\sim+0.2$ for $R_G$ in the range $\sim6.3-7.3\,$kpc.

\begin{figure*}
\begin{center}
 \includegraphics[width=1.0\textwidth]{plots_radius_age_mphot_parte1.png}
\caption{ Relationships between different OC properties. Symbols' colours are as in Fig.~\ref{Rgal_versus_age}. Small dots represent OCs from the literature.}
\label{plots_radius_age_mphot_parte1}
\end{center}
\end{figure*}

\begin{figure*}
\begin{center}
 \includegraphics[width=1.0\textwidth]{plots_radius_age_mphot_parte2.png}
\caption{ Relationships between different OC astrophysical parameters.
Symbols and colours are as in Fig.~\ref{Rgal_versus_age}.
Small dots represent Joshi et al.'s (2016) OC sample, while the dashed line represents their derived relationship.}
\label{plots_radius_age_mphot_parte2}
\end{center}
\end{figure*}

In the subsequent analysis, we employ parameters associated with the dynamical evolution, namely mass, age, and the core, half-light, tidal and Jacobi radii, in order to characterise the dynamical stages of the investigated sample. Panel (a) of Fig.~\ref{plots_radius_age_mphot_parte1} shows the $r_{h}/R_J$ versus $R_G$ plane, which provides some hints about the dynamical evolution of the investigated OCs \citep{Baumgardt:2010}. Because of the similar $R_G$ values, the Galactic gravitational potential is not expected to produce differential tidal effects \citep{Lamers:2005,Piatti:2018}. Therefore, we interpret any difference in the OC dynamical stages as being caused mainly by internal dynamical evolution. Consequently, the larger the $r_{h}/R_J$ ratio in panel (a), the less dynamically relaxed the OC.

In the light of this correlation, we split our OC sample into three groups, distinguished by red ($r_h/R_J\lesssim0.15$), black ($0.15\lesssim r_h/R_J\lesssim0.21$) and blue symbols ($r_h/R_J\gtrsim0.21$), respectively, as also indicated in Fig.~\ref{Rgal_versus_age}. OCs in the blue group (NGC\,5381, Ruprecht\,102, Ruprecht\,97 and ESO\,129-SC32) are the relatively less evolved ones; the black ones (Trumpler\,25, Ruprecht\,111 and Basel\,5; BH\,150 included) are at an intermediate stage of dynamical evolution, while the red ones (Collinder\,258, NGC\,6756, Czernik\,37 and NGC\,6249) are the most advanced in dynamical two-body relaxation. It is noticeable that all investigated OCs present $r_h/R_J<0.5$, which makes these stellar aggregates stable against rapid dissolution. Some studies (e.g., \citeauthor{Portegies-Zwart:2010}\,\,\citeyear{Portegies-Zwart:2010} and references therein) have extensively investigated the combined effects of mass loss by stellar evolution and dynamical evolution in the tidal field of the host galaxy, and showed that, when clusters expand to a radius of $\sim$0.5\,$R_J$, they lose equilibrium and most of their stars overflow $R_J$.

In panel (b) of Fig.~\ref{plots_radius_age_mphot_parte1} we plot the concentration parameter $c$\,(=log($r_t/r_c$)) as a function of age for our OC sample and compare it with literature values. As can be seen, the OCs studied here are among those with the smallest $c$ values for their ages. Likewise, there is a hint of relatively different degrees of mass segregation (e.g., \citeauthor{de-La-Fuente-Marcos:1997}\,\,\citeyear{de-La-Fuente-Marcos:1997}; \citeauthor{Portegies-Zwart:2001}\,\,\citeyear{Portegies-Zwart:2001}), in the sense that the smaller the $c$ value, the less dynamically evolved the OC. Note that the selected OCs have $r_c$ values that show a trend with $c$ (see panel (c)), following a much tighter relationship than that observed for the vast majority of other known OCs. We can see that the less evolved OCs in our sample present less compact cores. This is an expected trend: as internal relaxation transports energy from the (dynamically) warmer central core to the cooler outer regions, the core contracts as it loses energy \citep{Portegies-Zwart:2010}.
Furthermore, the OCs in our sample are relatively small, as judged by their tidal radii ($r_t$, see panel (d)), and their stellar content lies well within the respective Jacobi radii (Figure \ref{plots_radius_age_mphot_parte2}, panel (d)).

The classification scheme proposed in Fig.~\ref{plots_radius_age_mphot_parte1}, panel (a), is supported by the results presented in panel (b) of Fig.~\ref{plots_radius_age_mphot_parte2}, where $c$ is plotted as a function of age/$t_{\textrm{rh}}$. As can be seen, the OCs with smaller $c$ values show smaller age/$t_{\textrm{rh}}$ ratios. In this sense, \cite{Vesperini:2009} provided an evolutionary picture based on $N$-body simulations of star clusters in a tidal field. They showed that two-body relaxation causes star clusters to gradually lose memory of their initial structure (e.g., initial density profile and concentration), so that the concentration parameter $c$ increases steadily with time. This overall trend was also verified, although with considerable scatter, by \cite{Piatti:2016} and \cite{Angelo:2018} (figure 14 in both papers), who compared the concentration parameters of a set of investigated Galactic OCs with a sample of 236 OCs analysed homogeneously by \cite{Piskunov:2007}.

Indeed, the more evolved OCs (red symbols) present larger age/$t_{\textrm{rh}}$ compared to the less evolved ones (blue symbols). This is consistent with the overall evaporation scenario, in which the larger the age/$t_{\textrm{rh}}$ ratio, the more depleted the lower mass content. This can be seen in our clusters' mass functions (Fig.~\ref{mass_func_parte1}), where the ones from the red group show a systematically higher depletion of their lowest mass bin when compared to those from the blue group. In all cases, the mass function slopes for the higher mass bins do not show noticeable deviations from the linear trends exhibited by the Kroupa and Salpeter IMFs (log\,$\phi(m)$ $\propto$ -2.3\,log\,$m$, for $m>0.5\,$M$_{\odot}$). Since our 12 OCs are located at almost the same $R_G$, we expect no differential impact of the tidal field on the clusters' mass function depletion.

As stated by \cite{Bonatto:2004a}, at advanced evolutionary stages OCs are left with only a core, most of their low-mass stars having been dispersed into the background. In this sense, it is noticeable that the smaller OCs are the relatively more evolved ones, which have had the chance to lose their low-mass stars, while their more massive stars have concentrated toward the cluster centres. Consequently, their $r_t/R_J$ ratios are smaller than those of the relatively less evolved OCs, which have mainly expanded within the Jacobi volume (see panel (d) of Fig.~\ref{plots_radius_age_mphot_parte2}).

Assuming $m_{\textrm{Kroupa}}$ (Table \ref{astroph_params}) as a rough estimate of the initial OC mass, the studied OCs may have lost more than $\sim60$\,percent of their initial mass. Note that Fig.~\ref{mass_func_parte1} shows that the OCs' mass functions depart from the linear trend at different stellar masses, which could be linked with their relatively different dynamical stages, in the sense that the more evolved a system, the more massive its lower mass end. On the other hand, the estimated OC masses are within the range of those obtained by \cite{Joshi:2016}, as shown in panel (a) of Fig.~\ref{plots_radius_age_mphot_parte2}.
The dashed line is a linear fit performed by \citeauthor{Joshi:2016}\,\,(\citeyear{Joshi:2016}, their equation 8) to the mean masses within age bins of $\Delta$log($t$/yr)=0.5.

For completeness, we include in panel (c) the OC masses as a function of age/$t_{\textrm{rh}}$, with $t_{\textrm{rh}}$ computed from eq.~(9). For all investigated clusters, the derived ages are larger than the corresponding $t_{\textrm{rh}}$ ($7\lesssim$ age/$t_{\textrm{rh}}\lesssim164$), which means that they have had enough time to evolve dynamically. This statement remains true even if we consider $m_{\textrm{Kroupa}}$ (Table \ref{astroph_params}) to determine $t_{\textrm{rh}}$ (in which case we have $5\lesssim$ age/$t_{\textrm{rh}}\lesssim140$). Fig.~\ref{plots_radius_age_mphot_parte2} shows that the larger the age/$t_{\textrm{rh}}$, the smaller the cluster mass. This is an expected result, since clusters lose stars as their ages surpass their $t_{\textrm{rh}}$ many times over.

From the present analysis we advocate that the studied OCs are in different dynamical states. Assuming that these objects have been subject to the same Galactic tidal effects -- they have similar ages (except BH\,150) and Galactocentric distances --, the differences in their dynamical stages reflect the wide family of OCs formed in the Sagittarius spiral arm.

\section{Method}
\label{method}

\subsection{Structural parameter determination}
\label{center_RDPs_struct_params}

The first step in our analysis was to determine the OCs' central coordinates simultaneously with their core ($r_c$) and tidal ($r_t$) radii. In each case, we used a uniformly spaced square grid (steps of 0.25\,arcmin), centred on the literature coordinates and with full extension equal to $\sim$2$-$4 times the limiting radius quoted in the literature. These grids typically contained $\sim$200$-$400 square cells. We used each of these cells as a putative centre and built radial density profiles (RDPs) by performing stellar counts in concentric rings with widths varying from 0.50 to 1.50\,arcmin, in steps of 0.25\,arcmin. The background levels were estimated from the average of the stellar densities in the external rings, where the stellar densities fluctuate around a nearly constant value.

The background-subtracted RDPs were then fitted with a \cite{King:1962} model:

\begin{equation}
 \sigma(r) \propto \left( \frac{1}{\sqrt{1+(r/r_c)^2}} - \frac{1}{\sqrt{1+(r_t/r_c)^2}} \right)^2
.\end{equation}

\noindent
A grid of $r_t$ and $r_c$ values spanning the whole range of radii \citep{Piskunov:2007} was employed, and we searched for the values that minimized $\chi^2$. The final centres were taken as the coordinates that produce the smoothest stellar RDPs and, at the same time, the highest density in the innermost region. This procedure is analogous to that employed by \cite{Bica:2011} in their study of strongly field-contaminated OCs. The best King-model fits are plotted in Fig.~\ref{RPDs_parte1} with blue lines, while the corresponding $r_t$ and $r_c$ values (converted to pc according to the distance moduli; see Section \ref{members_selection}) are shown in Table \ref{struct_params}. The determined central coordinates are also shown in the same table.
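For concreteness, a minimal NumPy sketch of this grid search (our own illustration: the ring construction is omitted, a Poisson-like error model is assumed, and the profile amplitude is fitted analytically) is:
\begin{verbatim}
import numpy as np

def king_profile(r, rc, rt):
    # King (1962) surface-density profile, up to a normalisation.
    return (1.0 / np.sqrt(1.0 + (r / rc)**2)
            - 1.0 / np.sqrt(1.0 + (rt / rc)**2))**2

def fit_king(radii, density, density_err, rc_grid, rt_grid):
    # Exhaustive chi^2 grid search in (rc, rt) against a
    # background-subtracted RDP; the amplitude is solved analytically.
    best = (np.inf, None, None)
    w = 1.0 / density_err**2
    for rc in rc_grid:
        for rt in rt_grid:
            if rt <= rc:
                continue                     # require rt > rc
            model = king_profile(radii, rc, rt)
            amp = np.sum(w * density * model) / np.sum(w * model**2)
            chi2 = np.sum(w * (density - amp * model)**2)
            if chi2 < best[0]:
                best = (chi2, rc, rt)
    return best                              # (chi^2_min, rc, rt)
\end{verbatim}
In our actual procedure, this fit is repeated for every putative centre of the grid described above, and the centre/parameter combination yielding the smoothest, most centrally concentrated RDP is retained.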
Additionally, we fitted a \cite{Plummer:1911} profile to each RDP for comparison purposes:

\begin{equation}
 \sigma(r) \propto \frac{1}{\left[1+(r/a)^2\right]^2}
.\end{equation}

\noindent
These fits are represented in Fig.~\ref{RPDs_parte1} with red lines. As can be seen, both profiles are nearly indistinguishable in the inner OC region ($r\lesssim r_c$). The parameter $a$ is the Plummer radius, which is related to the half-light radius $r_{\textrm{h}}$ by $r_{\textrm{h}}\sim1.3a$.

\begin{figure*}
\begin{center}

\parbox[c]{1.0\textwidth}
 {
 \includegraphics[width=0.333\textwidth]{rprofile_Cr258.pdf}
 \includegraphics[width=0.333\textwidth]{rprofile_NGC6756.pdf}
 \includegraphics[width=0.333\textwidth]{rprofile_Czernik37.pdf}
 \includegraphics[width=0.333\textwidth]{rprofile_NGC5381.pdf}
 \includegraphics[width=0.333\textwidth]{rprofile_Trumpler25.pdf}
 \includegraphics[width=0.333\textwidth]{rprofile_BH150.pdf}
 \includegraphics[width=0.333\textwidth]{rprofile_Ruprecht111.pdf}
 \includegraphics[width=0.333\textwidth]{rprofile_Ruprecht102.pdf}
 \includegraphics[width=0.333\textwidth]{rprofile_NGC6249.pdf}
 \includegraphics[width=0.333\textwidth]{rprofile_Basel5.pdf}
 \includegraphics[width=0.333\textwidth]{rprofile_Ruprecht97.pdf}
 \includegraphics[width=0.333\textwidth]{rprofile_ESO129-32.pdf}
 }
\caption{ Normalized RDPs before and after background subtraction, drawn with open and filled symbols, respectively. Poisson error bars are shown. The vertical continuous and dotted lines represent the OC limiting radius and its uncertainty, respectively. The horizontal continuous line represents the mean background density. The blue and red curves represent the fitted King\,(1962) and Plummer\,(1911) profiles, respectively.}
\label{RPDs_parte1}
\end{center}
\end{figure*}

\begin{table*}

 \caption{ Determined central coordinates, Galactocentric distances and structural parameters of the studied OCs.
}\n \\label{struct_params}\n \\begin{tabular}{lccccccc}\n \n\\hline\n\n\n Cluster &$\\rmn{RA}$ &$\\rmn{DEC}$ & R$_{\\textrm{GC}}^{*}$ & $r_c$ & $r_{h}^{\\dag}$ & $r_t$ & R$_J^{\\dag\\dag}$ \\\\\n &($\\rmn{h}$:$\\rmn{m}$:$\\rmn{s}$) & ($\\degr$:$\\arcmin$:$\\arcsec$) & (kpc) & (pc) & (pc) & (pc) & (pc) \\\\ \n\n\\hline\n\nCollinder\\,258 & 12:27:17 & -60:46:45 &7.4\\,$\\pm$\\,0.5 &0.50\\,$\\pm$\\,0.19 &0.65\\,$\\pm$\\,0.12 &1.23\\,$\\pm$\\,0.46 & 4.77\\,$\\pm$\\,0.44 \\\\ \nNGC\\,6756 & 19:08:44 & 04:42:53 &6.6\\,$\\pm$\\,0.5 &0.68\\,$\\pm$\\,0.14 &0.99\\,$\\pm$\\,0.15 &2.10\\,$\\pm$\\,0.40 & 7.73\\,$\\pm$\\,0.65 \\\\\nCzernik\\,37 & 17:53:15 & -27:22:53 &6.5\\,$\\pm$\\,0.6 &0.72\\,$\\pm$\\,0.16 &1.01\\,$\\pm$\\,0.15 &2.03\\,$\\pm$\\,0.41 & 7.12\\,$\\pm$\\,0.66 \\\\\nNGC\\,5381 & 14:00:45 & -59:35:12 &6.8\\,$\\pm$\\,0.5 &1.88\\,$\\pm$\\,0.49 &2.05\\,$\\pm$\\,0.40 &2.86\\,$\\pm$\\,0.61 & 7.33\\,$\\pm$\\,0.59 \\\\\nTrumpler\\,25 & 17:24:29 & -39:00:48 &6.3\\,$\\pm$\\,0.6 &1.57\\,$\\pm$\\,0.30 &1.91\\,$\\pm$\\,0.20 &3.13\\,$\\pm$\\,0.51 &10.04\\,$\\pm$\\,0.89 \\\\\nBH\\,150 & 13:38:04 & -63:20:27 &6.6\\,$\\pm$\\,0.5 &0.79\\,$\\pm$\\,0.18 &1.20\\,$\\pm$\\,0.17 &2.90\\,$\\pm$\\,0.70 & 7.82\\,$\\pm$\\,0.69 \\\\\nRuprecht\\,111 & 14:36:04 & -59:59:21 &6.8\\,$\\pm$\\,0.5 &0.67\\,$\\pm$\\,0.22 &0.94\\,$\\pm$\\,0.22 &1.83\\,$\\pm$\\,0.44 & 5.40\\,$\\pm$\\,0.47 \\\\\nRuprecht\\,102 & 12:13:37 & -62:42:55 &7.1\\,$\\pm$\\,0.6 &1.29\\,$\\pm$\\,0.37 &1.73\\,$\\pm$\\,0.24 &3.40\\,$\\pm$\\,0.83 & 6.37\\,$\\pm$\\,0.55 \\\\\nNGC\\,6249 & 16:57:38 & -44:48:15 &6.9\\,$\\pm$\\,0.5 &0.37\\,$\\pm$\\,0.10 &0.65\\,$\\pm$\\,0.13 &2.00\\,$\\pm$\\,0.67 & 4.94\\,$\\pm$\\,0.44 \\\\\nBasel\\,5 & 17:52:27 & -30:05:32 &6.3\\,$\\pm$\\,0.6 &0.61\\,$\\pm$\\,0.15 &1.05\\,$\\pm$\\,0.20 &2.88\\,$\\pm$\\,0.76 & 5.41\\,$\\pm$\\,0.54 \\\\\nRuprecht\\,97 & 11:57:35 & -62:43:20 &7.2\\,$\\pm$\\,0.5 &3.01\\,$\\pm$\\,0.85 &3.67\\,$\\pm$\\,0.49 &5.55\\,$\\pm$\\,1.22 & 8.54\\,$\\pm$\\,0.65 \\\\\nESO\\,129-SC32 & 11:44:06 & -61:05:56 &7.3\\,$\\pm$\\,0.5 &1.73\\,$\\pm$\\,0.54 &2.39\\,$\\pm$\\,0.42 &4.65\\,$\\pm$\\,1.08 & 7.78\\,$\\pm$\\,0.60 \\\\\n\n\n\\hline\n\\multicolumn{8}{l}{ \\textit{Note}: To convert 1 arcmin into pc we used the expression $(\\pi\\,\/10800)\\times10^{[(m-M)_{0}+5]\/5}$, where $(m-M)_{0}$ }\\\\\n\\multicolumn{8}{l}{ is the OC distance modulus (see Table \\ref{astroph_params}). } \\\\\n\\multicolumn{8}{l}{ $^{*}$ The $R_G$ were obtained from the distances in Table \\ref{astroph_params}, assuming that the Sun is located at 8.0\\,$\\pm$\\,0.5\\,kpc } \\\\\n\\multicolumn{8}{l}{ from the Galactic centre \\citep{Reid:1993a}. }\\\\\n\\multicolumn{8}{l}{ $^{\\dag}$ Half-light radius (Section \\ref{mass_functions}). } \\\\\n\\multicolumn{8}{l}{ $^{\\dag\\dag}$ Jacobi radius (Section \\ref{mass_functions}). }\n\n\n\n\n\\end{tabular}\n\\end{table*}\n\n\n\n\n\n\n\n\\subsection{Membership determination}\n\\label{memberships}\n\n\nAfter the structural analysis, we used the Vizier service\\footnote[3]{http:\/\/vizier.u-strasbg.fr\/viz-bin\/VizieR} to extract astrometric data from the GAIA DR2 catalogue \\citep{Gaia-Collaboration:2018} for stars in a large circular area of radius 20\\,arcmin centred on each target. This region is large enough to encompass completely the field of view corresponding to our Washington images. 
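As an illustration, such an extraction can be reproduced with a few lines of astroquery (a sketch under our own conventions: \texttt{I/345/gaia2} is the VizieR designation of the GAIA DR2 catalogue, the arrays \texttt{phot\_ra} and \texttt{phot\_dec} stand for the positions in our Washington catalogues, and the 1-arcsec matching tolerance is purely illustrative):
\begin{verbatim}
import astropy.units as u
from astropy.coordinates import SkyCoord
from astroquery.vizier import Vizier

# GAIA DR2 is catalogue I/345/gaia2 on VizieR.
vizier = Vizier(columns=["RA_ICRS", "DE_ICRS", "pmRA", "pmDE", "Plx",
                         "e_pmRA", "e_pmDE", "e_Plx"], row_limit=-1)
centre = SkyCoord("14h00m45s", "-59d35m12s", frame="icrs")  # NGC 5381
gaia = vizier.query_region(centre, radius=20 * u.arcmin,
                           catalog="I/345/gaia2")[0]

# Cross-match with our Washington catalogue positions (degrees):
phot = SkyCoord(ra=phot_ra * u.deg, dec=phot_dec * u.deg)
idx, sep, _ = phot.match_to_catalog_sky(
    SkyCoord(ra=gaia["RA_ICRS"], dec=gaia["DE_ICRS"]))
good = sep < 1.0 * u.arcsec        # keep only close pairs
\end{verbatim}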
For each OC, we cross-matched our photometric catalogues with GAIA and executed a routine that explores the three-dimensional (3D) parameter space of proper motions and parallaxes ($\mu_{\alpha}$, $\mu_{\delta}$, $\varpi$) corresponding to stars in the OC area ($r\lesssim r_t$) and in a control field (stars in the region $r\gtrsim r_t$). The routine is devised to detect, and evaluate statistically, the overdensity of OC stars in comparison to the field in each part of the parameter space. This was a critical step in our analysis, since the studied OCs are located at low Galactic latitudes ($\vert b\vert\leq2^{\circ}$) and are thus projected against dense stellar fields.

The routine is completely described in \cite{Angelo:2019a}. Briefly, the procedure consists in dividing the astrometric space into cells with widths proportional to the sample mean uncertainties (\,$\langle\Delta\mu_{\alpha}\rangle$, $\langle\Delta\mu_{\delta}\rangle$ and $\langle\Delta\varpi\rangle$\,) in each astrometric parameter. Cell widths are typically 1.0\,mas\,yr$^{-1}$ and 0.1\,mas for the proper motion components and the parallax, respectively. These values correspond to $\sim10\times\langle\Delta\mu_{\alpha}\rangle$, $\sim10\times\langle\Delta\mu_{\delta}\rangle$ and $\sim1\times\langle\Delta\varpi\rangle$. Such cell sizes allow us to accommodate a significant number of stars inside each cell, while being small enough to properly sample the fluctuations across the 3D space.

Inside each cell, we determined membership likelihoods for stars in the OC sample ($l_{\textrm{star}}$) by employing a multivariate Gaussian:

\begin{equation}
\begin{aligned}
 l_{\textrm{star}} = \frac{\exp\left[-\frac{1}{2}(\boldsymbol{X}-\boldsymbol{\mu})^{\textrm{T}}\boldsymbol{\Sigma}^{-1}(\boldsymbol{X}-\boldsymbol{\mu})\right]}{\sqrt{(2\pi)^3\vert\boldsymbol{\Sigma}\vert}}
\end{aligned}
\label{likelihood_formula}
,\end{equation}

\noindent
where $\boldsymbol{X}$ is the column vector ($\mu_{\alpha}$,\,$\mu_{\delta}$,\,$\varpi$) for a given star and $\boldsymbol{\mu}$ is the mean vector for the sample of OC stars contained within the cell. $\boldsymbol{\Sigma}$ is the full covariance matrix, which incorporates the uncertainties and the correlations between the astrometric parameters (see equation 2 of \citeauthor{Angelo:2019a}\,\,\citeyear{Angelo:2019a}). Then the same calculation was performed for stars in the control field. For each sample inside a given cell, the total likelihood was taken multiplicatively: $\mathcal{L}=\prod_{i}^{} l_i$.

In order to compare statistically the dispersion of the data for both samples (OC and control field) in a given cell, we employed the objective function:

\begin{equation}
 S = -\textrm{log}\,\mathcal{L}
 \label{func_entropia}
.\end{equation}

\noindent
Then we searched for cells for which $S_{\textrm{clu}}$ is significantly smaller than the $S$ values obtained for the control field samples, which flags a statistically significant concentration of cluster stars in that region of the astrometric space (see \citeauthor{Angelo:2019a}\,\,\citeyear{Angelo:2019a} for details).

Our (Meta)OCaml code relies on the staging annotations: brackets |.<e>.| and escapes |.~e|, and the |code| type \cite{taha_gentle_2004,kiselyov_design_2014}. In the Scala version of our library, staging annotations are implicit: they are determined by inferred types. Staging annotations are optimization directives, guiding the partial evaluation of library expressions. Thus, staging annotations are not crucial to understanding what our library can express, only how it is optimized. On first read, staging annotations may be simply disregarded.
We get back to them, in detail, in \S\ref{sec:msp}.

The (Meta)OCaml library interface is given in Figure~\ref{f:interface}. The library includes stream producers (one generic---|unfold|, and one specifically for arrays---|of_arr|), the generic stream consumer (or stream reducer) |fold|, and a number of stream transformers. Ignoring |code| annotations, the signatures are standard. For instance, the generic |unfold| combinator takes a function from a state, |'z|, to a value |'a| and a new state (or nothing at all), and, given an initial state |'z|, produces an opaque stream of |'a|s.

\begin{figure}[tb!p]
\vspace{-1.8mm}
\begin{lstlisting}
$\textrm{Stream representation (abstract)}$
type 'a stream

$\textrm{Producers}$
val of_arr : 'a array code -> 'a stream
val unfold : ('z code -> ('a * 'z) option code) ->
 'z code -> 'a stream

$\textrm{Consumer}$
val fold : ('z code -> 'a code -> 'z code) ->
 'z code -> 'a stream -> 'z code

$\textrm{Transformers}$
val map : ('a code -> 'b code) -> 'a stream ->
 'b stream
val filter : ('a code -> bool code) ->
 'a stream -> 'a stream
val take : int code -> 'a stream -> 'a stream
val flat_map : ('a code -> 'b stream) ->
 'a stream -> 'b stream
val zip_with : ('a code -> 'b code -> 'c code) ->
 ('a stream -> 'b stream -> 'c stream)
\end{lstlisting}
\caption{The library interface}
\label{f:interface}
\end{figure}

The first example is summing the squares of elements of an array |arr|---in mathematical notation, $\sum{a_i^2}$. The code
\vspace{-0.5mm}
\begin{lstlisting}
let sum = fold (fun z a -> .<.~a + .~z>.) .<0>.

of_arr .<arr>.
 |> map (fun x -> .<.~x * .~x>.)
 |> sum
\end{lstlisting}
is not far from the mathematical notation. Here, $\triangleright$, like the similar operator in F\#, is the inverse function application: argument to the left, function to the right. The stream components are first-class and hence may be passed around, bound to identifiers and shared; in short, we can build libraries of more complex components.

\noindent In this simple example, the generated code is understandable:

\begin{code}
let s_1 = ref 0 in
let arr_2 = arr in
 for i_3 = 0 to Array.length arr_2 -1 do
 let el_4 = arr_2.(i_3) in
 let t_5 = el_4 * el_4 in
 s_1 := t_5 + !s_1
 done;
!s_1
\end{code}

It is relatively easy to see which part of the code came from which part of the pipeline ``specification''. The generated code has no closures, tuples or other heap-allocated structures: it looks as if it were hand-written by a competent OCaml programmer. The iteration is driven by the source operator, |of_arr|, of the pipeline. This is precisely the iteration pattern that Java 8 streams optimize. As we will see in later examples, this is but one of the optimal iteration patterns arising in stream pipelines.

The next example sums only some elements:
\begin{lstlisting}
let ex = of_arr .<arr>. |> map (fun x -> .<.~x * .~x>.)

ex |> filter (fun x -> .<.~x mod 17 > 7>.) |> sum
\end{lstlisting}
We have abstracted out the mapped stream as |ex|. The earlier example is, hence, \lstinline{ex |> sum}. The current example applies |ex| to the more complex summator that first filters out elements before summing the rest.
The next example limits the number of summed elements to a user-specified value |n|:
\begin{lstlisting}
ex |> filter (fun x -> .<.~x mod 17 > 7>.)
 |> take .<n>.
 |> sum
\end{lstlisting}
We stress that the limit is applied to the filtered stream, not to the original input; writing this example in mathematical notation would be cumbersome.
The generated code
\begin{code}
let s_1 = ref 0 in
let arr_2 = arr in
let i_3 = ref 0 in
let nr_4 = ref n in
while !nr_4 > 0 && !i_3 <= Array.length arr_2 -1 do
 let el_5 = arr_2.(! i_3) in
 let t_6 = el_5 * el_5 in
 incr i_3;
 if t_6 mod 17 > 7
 then (decr nr_4; s_1 := t_6+!s_1)
done; ! s_1
\end{code}
again looks as if it were handwritten by a competent programmer. However, compared to the first example, the code is more tangled; for example, the |take .<n>.| part of the pipeline contributes to three separate places in the code: where the |nr_4| reference cell is created, tested and mutated. The iteration pattern is more complex. Instead of a |for| loop there is a |while|, whose termination conditions come from two different pipeline operators: |take| and |of_arr|.

The dot-product of two arrays |arr1| and |arr2| looks just as simple
\begin{lstlisting}
zip_with (fun e1 e2 -> .<.~e1 * .~e2>.)
 (of_arr .<arr1>.)
 (of_arr .<arr2>.) |> sum
\end{lstlisting}
showing off the zipping of two streams, with straightforward generated code, again of hand-written quality:
\begin{code}
let s_17 = ref 0 in
let arr_18 = arr1 in let arr_19 = arr2 in
 for i_20 = 0 to
 min (Array.length arr_18 -1)
 (Array.length arr_19 -1) do
 let el_21 = arr_18.(i_20) in
 let el_22 = arr_19.(i_20) in
 s_17 := el_21 * el_22 + !s_17
 done; ! s_17
\end{code}
The optimal iteration pattern is different still (though simple): the loop condition as well as the loop body are equally influenced by the two |of_arr| operators.

In the final, complex example we zip two complicated streams. The first is a finite stream from an array, mapped, subranged, filtered and mapped again. The second is an infinite stream of natural numbers from 1, with a filtered flattened nested substream. After zipping, we fold everything into a list of tuples.

\definecolor{light-gray}{gray}{0.6}
\sethlcolor{light-gray}
\begin{lstlisting}
zip_with (fun e1 e2 -> .<(.~e1,.~e2)>.)
 (of_arr .<arr>. (* 1st stream *)
 |> map (fun x -> .<.~x * .~x>.)
 |> take .<12>.
 |> filter (fun x -> .<.~x mod 2 = 0>.)
 |> map (fun x -> .<.~x * .~x>.))
 (iota .<1>. (* 2nd stream *)
 |> flat_map (fun x -> iota .<.~x+1>. |> take .<3>.)
 |> filter (fun x -> .<.~x mod 2 = 0>.))
 |> fold (fun z a -> .<.~a :: .~z>.) .<[]>.
\end{lstlisting}
\sethlcolor{yellow}

We did not show any types, but they exist (and have been inferred). Therefore, an attempt to use an invalid operation on stream elements (like concatenating integers or applying an ill-fitting stream component) will be immediately rejected by the type-checker.

Although the above pipeline is purely functional, modular and rather compact, the generated code (shown in Appendix A of the extended version) is large, entangled and highly imperative. Writing such code correctly by hand is clearly challenging.

\section{Stream Fusion Problem}
\label{sec:fusion-problem}

The key to an expressive and performant stream library is a representation of streams that fully captures the generality of streaming pipelines and allows desired optimizations.
To understand how the representation affects implementation and optimization choices, we review past approaches. We see that, although some of them take care of the egregious overhead, none manage to eliminate all of it: the assembled stream pipeline remains slower than hand-written code.

The most straightforward representation of streams is a linked list, or a file, of elements. It is also the worst-performing. The first example in \S\ref{sec:overview}, of summing squares, will entail: (1) creating a stream from an array by copying all elements into it; (2) traversing the list, creating another stream with squared elements; (3) traversing the result, summing the elements. We end up creating three intermediate lists. Although the whole processing still takes time linear in the size of the stream, it requires repeated traversals and the production of linear-size intermediate structures. Also, this straightforward representation cannot cope with sources that are always ready with an element: ``infinite streams''.

The problem, thus, is deforestation \cite{wadler-deforestation}: eliminating intermediate, working data structures. For streams, in particular, deforestation is typically called ``stream fusion''. One can discern two main groups of stream representations that let us avoid building intermediate data structures of unbounded size.

\paragraph{Push Streams.}
The first, heavily algebraic approach represents a stream by its reducer (the fold operation) \cite{meijer-functional}. If we introduce the ``shape functor'' for a stream with elements of type |'a| as
\begin{code}
type ('a,'z) stream_shape =
 | Nil
 | Cons of 'a * 'z
\end{code}
then the stream is formally defined as:\footnote{\relax
Strictly speaking, \texttt{stream} should be a record type: in OCaml, only record or object components may have the type with explicitly quantified type variables. For the sake of clarity we lift this restriction in the paper.}
\begin{code}
type 'a stream = forall'w. (('a,'w) stream_shape -> 'w) -> 'w
\end{code}
A stream of |'a|s is hence a function with the ability to turn any generic ``folder'' (i.e., a function from |('a,'w) stream_shape| to |'w|) into a single |'w|. The ``folder'' function is formally called an |F|-algebra for the |('a,-) stream_shape| functor.

For instance, an array is easily representable as such a fold:
\begin{code}
let of_arr : 'a array -> 'a stream =
 fun arr -> fun folder ->
 let s = ref (folder Nil) in
 for i=0 to Array.length arr - 1 do
 s := folder (Cons (arr.(i),!s))
 done; !s
\end{code}

Reducing a stream with the reducing function |f| and the initial value |z| is especially straightforward in this representation:
\begin{code}
let fold : ('z -> 'a -> 'z) -> 'z -> 'a stream -> 'z =
 fun f z str ->
 str (function Nil -> z | Cons (a,x) -> f x a)
\end{code}

More germane to our discussion is that mapping over the stream (as well as |filter|-ing and |flat_map|-ing) is also easily expressible, without creating any variable-size intermediate data structures:
\begin{code}
let map : ('a -> 'b) -> 'a stream -> 'b stream =
 fun f str ->
 fun folder -> str (fun x -> match x with
 | Nil -> folder Nil
 | Cons (a,x) -> folder (Cons (f a,x)))
\end{code}

A stream element |a| is transformed ``on the fly'', without being collected in working buffers. Our sample squaring-accumulating pipeline runs in constant memory now. Deforestation, or stream fusion, has been accomplished.
The simplicity of this so-called ``push stream'' approach makes it popular: it is used, for example, in the reducers of Clojure as well as in the OCaml ``batteries'' library. It is also the basis of Java 8 Streams, under an object-oriented reformulation of the same concepts.

In push streams, it is the stream producer, e.g., |of_arr|, that drives the optimal execution of the stream. Implementing |take| and other such combinators that restrict the processing to a prefix of the stream requires extending the representation with some sort of a ``feedback'' mechanism (often implemented via exceptions). Where push streams stumble is the zipping of two streams, i.e., the processing of two streams in parallel. This simply cannot be done with constant per-element processing cost. Zipping becomes especially complicated (as we shall see in \S\ref{sec:zip}) when the two pipelines contain nested streams and hence produce elements at generally different rates.\footnote{The Reactive Extensions (Rx) framework~\cite{rx} gives a
 real-life example of the complexities of implementing \texttt{zip}.
 Rx is push-based and supports \texttt{zip} at the cost of
 maintaining an unbounded intermediate queue. This deals with the
 ``backpressure in Zip'' issue, extensively discussed in
 the Rx github repo. Furthermore, Rx seems to have abandoned blocking
 zip implementations since 2014.}

\paragraph{Pull Streams.}
An alternative representation of streams, pull streams, has a long pedigree, all the way from the generators of Alphard \cite{alphard} in the '70s. These are objects that implement two methods: |init| to initialize the state and obtain the first element, and |next| to advance the stream to the next element, if any. Such a ``generator'' (or IEnumerator, as it has come to be popularly known) can also be understood algebraically---or rather, co-algebraically. Whereas push streams represent a stream as a fold, pull streams, dually, are the expression of an \emph{unfold} \cite{meijer-functional,Gibbons-unfold}:\footnote{\relax
For the sake of explanation, we took another liberty with the OCaml notation, avoiding the GADT syntax for the existential.}
\begin{code}
type 'a stream = exists's. 's * ('s -> ('a,'s) stream_shape)
\end{code}
The stream is, hence, a pair of the current state and the so-called ``step'' function that, given a state, reports the end-of-stream condition |Nil|, or the current element and the next state.
(Formally, the step function is the F-co-algebra for the |('a,-) stream_shape| functor.) The existential quantification over the state keeps it private: the only permissible operation is to pass it to the step function.

When an array is represented as a pull stream, the state is the tuple of the array and the current index:
\begin{code}
let of_arr : 'a array -> 'a stream =
 let step (i,arr) =
 if i < Array.length arr
 then Cons (arr.(i), (i+1,arr)) else Nil
 in fun arr -> ((0,arr),step)
\end{code}
The step function---a pure combinator rather than a closure---dereferences the current element and advances the index. Reducing the pull stream now requires an iteration: repeatedly calling |step| until it reports the end-of-stream. (Although the types of |of_arr|, |fold|, |map|, etc. nominally remain the same, the meaning of |'a stream| has changed.)
\begin{code}
let fold : ('z -> 'a -> 'z) -> 'z -> 'a stream -> 'z =
 fun f z (s,step) ->
 let rec loop z s = match step s with
 | Nil -> z
 | Cons (a,t) -> loop (f z a) t
 in loop z s
\end{code}
With pull streams, it is the reducer, i.e., the stream consumer, that drives the processing. Mapping over the stream
\begin{code}
let map : ('a -> 'b) -> 'a stream -> 'b stream =
 fun f (s,step) ->
 let new_step = fun s -> match step s with
 | Nil -> Nil
 | Cons (a,t) -> Cons (f a, t)
 in (s,new_step)
\end{code}
merely transforms its step function: |new_step| calls the old step and |map|s the returned current element, passing it immediately to the consumer, with no buffering. That is, like push streams, pull streams also accomplish fusion. Befitting their co-algebraic nature, pull streams can represent both finite and infinite streams. Stream combinators, like |take|, that cut evaluation short are also easy. On the other hand, skipping elements (filtering) and nested streaming are more complex with pull streams, requiring the generalization of the |stream_shape|, as we shall see in \S\ref{sec:implementation}. The main advantage of pull streams over push streams is in expressiveness: pull streams have the ability to process streams in parallel, enabling |zip_with| as well as more complex stream merging. Therefore, we take pull streams as the basis of our library.

\paragraph{Imperfect Deforestation.}
Both push and pull streams eliminate the intermediate lists (variable-size buffers) that plague a naive implementation of the stream library. Yet they do not eliminate all the abstraction overhead. For example, the |map| stream combinator transforms the current stream element by passing it to some function |f| received as an argument of |map|. A hand-written implementation would have no other function calls. However, the pull-stream |map| combinator introduces a closure: |new_step|, which receives a |stream_shape| value from the old |step|, pattern-matches on it and constructs the new |stream_shape|. The push-stream |map| has the same problem. Likewise, the step function of the pull-stream |of_arr| unpacks the current state and then packs the array and the new index back into a tuple.
\section{Staging Streams}

A well-known way of eliminating abstraction overhead and
delivering ``abstraction without guilt'' is program generation:
compiling a high-level abstraction into efficient code.
In fact, the original deforestation algorithm in the literature
\cite{wadler-deforestation} is closely related to partial evaluation
\cite{sorensen-unifying}. This section introduces staging: one
particular, manual technique of partial evaluation. It lets us
achieve our goal of eliminating all abstraction
overhead from the stream library. Perfect stream fusion with staging
is hard: \S\ref{sec:simple-staging} shows that straightforward staging
(or automated partial evaluation) does not achieve full deforestation.
We have to re-think general stream processing
(\S\ref{sec:overhead-elim}).

\subsection{Multi-Stage Programming}
\label{sec:msp}

Multi-stage programming (MSP), or \emph{staging} for short, is a way
to write programs that generate programs. MSP may be thought of as a
principled version of the familiar ``code templates'', where the
templates ensure by their very construction that the generated code is not only
syntactically well-formed but also well-scoped and well-typed.

In this paper we use BER MetaOCaml~\cite{kiselyov_design_2014},
which is a dialect of OCaml with MSP extensions.
The first MSP feature is brackets, \sv{.<} and \sv{>.}, which
enclose a code template. For example, \sv{.<1+2>.} is a template
for generating code to add two literals 1 and 2.
\begin{lstlisting}
let c = .<1 + 2>.
### val c : int code = .<1 + 2>.
\end{lstlisting}
The output of the interpreter demonstrates that the code template is a
first-class object; moreover, it is a value: a \emph{code value}.
MetaOCaml can print such values, and also write them into a file for
later compilation. The code value is typed: our sample template
generates integer-valued code.

As behooves templates, they can have holes to splice in other
templates. The splicing MSP feature, |.~|, is called an
\emph{escape}. In the following example, the template |cf| has two
holes, to be filled in with the same expression. Then |cf c| fills
the holes with the expression |c| created earlier.
\begin{lstlisting}
let cf x = .<.~x + .~x>.
### val cf : int code -> int code = <fun>
cf c
### - : int code = .<(1 + 2) + (1 + 2)>.
\end{lstlisting}

One may regard brackets and escapes as annotating code: which portions should
be evaluated as usual (at the present stage, so to speak) and which in the
future (when the generated code is compiled and run).
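As a classic illustration (our sketch; it is not part of the stream
library), brackets and escapes suffice to specialize the power
function to a statically known exponent; the resulting code value may
then be compiled and run with MetaOCaml's |Runcode.run|.
\begin{lstlisting}
(* Classic staged power (our sketch, not from the library) *)
let rec power : int -> int code -> int code = fun n x ->
  if n = 0 then .<1>.
  else if n mod 2 = 0
  then .<let y = .~(power (n/2) x) in y * y>.
  else .<.~x * .~(power (n-1) x)>.

let power7 = .<fun x -> .~(power 7 .<x>.)>.
### val power7 : (int -> int) code = .<fun x_1 -> ...>.
\end{lstlisting}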
\subsection{Simple Staging of Streams}
\label{sec:simple-staging}

We can turn a library into, effectively, a compiler of efficient
code by adding staging annotations. This is not a simple matter of
annotating one of the standard definitions (either pull- or push-style)
of |'a stream|, however. To see this, we next consider staging
a set of pull-stream
combinators. Staging helps with performance, but the abstraction overhead still remains.

The first step in using staging is the so-called ``binding-time
analysis'': finding out which values can be known only at run-time
(``dynamically'') and what is known already at code-generation time
(``statically'') and hence can be pre-computed. Partial evaluators
perform binding-time analysis, with various degrees of sophistication
and success, automatically and opaquely. In staging, binding-time analysis is
manual and explicit.

We start with the pull-stream |map| combinator, which, recall,
has a type signature:
\begin{code}
type 'a stream = exists's. 's * ('s -> ('a,'s) stream_shape)
val map : ('a -> 'b) -> 'a stream -> 'b stream
\end{code}
Its first argument, the mapping function |f|, takes the current
stream element, which is clearly
not known until the processing pipeline is run. The result is likewise
dynamic. However, the mapping operation itself can be known
statically. Hence the staged |f| may be given the type
|'a code -> 'b code|: given code to compute |'a|s, the mapping function, |f|,
is a static way to produce code to compute |'b|s.

The second argument of |map| is the pull stream,
a tuple of the
current state (|'s|) and the step function. The state is not known
statically. The result of the step function
depends on the current state and, hence, is fully dynamic. The step
function itself, however, can be statically known.
Hence
we arrive at the following type of the staged stream
\begin{code}
type 'a st_stream =
  exists's. 's code * ('s code -> ('a,'s) stream_shape code)
\end{code}
Having done such binding-time analysis for the arguments of the |map|
combinator, it is straightforward to write the staged |map|,
by annotating---i.e., placing brackets and escapes on---the original
|map| code according to the decided binding-times:
\begin{code}
let map : ('a code -> 'b code) ->
          'a st_stream -> 'b st_stream =
  fun f (s,step) ->
    let new_step = fun s -> .<match .~(step s) with
      | Nil -> Nil
      | Cons (a,t) -> Cons (.~(f .<a>.), t)>.
    in (s,new_step)
\end{code}
The combinators |of_arr| and |fold| are staged analogously.
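For instance, the staged |fold| may look as follows (our sketch; it is
consistent with the code generated below, and the library version
differs only in inessential details):
\begin{code}
(* Sketch: staged fold for the 'a st_stream type above *)
let fold : ('z code -> 'a code -> 'z code) ->
           'z code -> 'a st_stream -> 'z code =
  fun f z (s,step) ->
    .<let rec loop z s = match .~(step .<s>.) with
        | Nil -> z
        | Cons (a,t) -> loop .~(f .<z>. .<a>.) t
      in loop .~z .~s>.
\end{code}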
We use
the method of \cite{inoue-reasoning} to prove the correctness, which
easily applies to this case, given that |map| is non-recursive. The sample
processing pipeline (the first example from \S\ref{sec:overview})
\begin{code}
of_arr .<[|0;1;2;3;4|]>.
  |> map (fun a -> .<.~a * .~a>.)
  |> fold (fun x y -> .<.~x + .~y>.) .<0>.
\end{code}
then produces the following code:
\begin{codeE}
- : int code = .<
let rec loop_1 z_2 s_3 =
  match match match s_3 with
              | (i_4,arr_5) ->
                if i_4 < (Array.length arr_5)
                then Cons ((arr_5.(i_4)),
                           ((i_4 + 1), arr_5))
                else Nil
        with
        | Nil -> Nil
        | Cons (a_6,t_7) -> Cons ((a_6 * a_6), t_7)
  with
  | Nil -> z_2
  | Cons (a_8,t_9) -> loop_1 (z_2 + a_8) t_9 in
loop_1 0 (0, [|0;1;2;3;4|])>.
\end{codeE}
As expected, no lists, buffers or other variable-size data
structures are created. Some constant overhead is gone too: the
squaring operation of |map| is inlined. However, the triple-nested
|match| betrays the remaining overhead of constructing and
deconstructing |stream_shape| values. Intuitively, the clean abstraction
of streams (encoded in the pull-stream type of |'a stream|) isolates
each operator from others. The result does not take advantage of
the property that, for this pipeline (and others of the same style),
the looping of all three operators (|of_arr|, |map|, and |fold|) will
synchronize, with all of them processing elements until the same last one.
Eliminating the overhead requires a different computation model for
streams.

\section{Eliminating All Abstraction Overhead in Three Steps}
\label{sec:overhead-elim}

We next describe how to purge all of the stream library abstraction
overhead and generate code of hand-written quality and performance. We
will be continuing the simple running example of the earlier sections,
of summing up squared elements of an
array. (\S\ref{sec:implementation} will later lift the same insights
to more complex pipelines.) As in \S\ref{sec:simple-staging}, we will
be relying on staging to generate well-formed and well-typed code. The
key to eliminating abstraction overhead from the generated code is to
move it to a generator, by making the generator take better advantage
of the available static knowledge. This is easier said than done: we
have to use increasingly more sophisticated transformations of the
stream representation to expose more static information and make it
exploitable. The three transformations we show next require
more and more creativity and domain knowledge, and cannot be performed
by a simple tool, such as an automated partial evaluator. In the
process, we will identify three interesting concepts in stream
processing: the structure of iteration (\S\ref{sec:fusing-stepper}),
the state kept (\S\ref{sec:state-fusing}), and the optimal kind of loop
construct and its contributors (\S\ref{sec:imperative-loops}).

\subsection{Fusing the Stepper}
\label{sec:fusing-stepper}

Modularity is the cause of the abstraction overhead we observed in
\S\ref{sec:simple-staging}: structuring
the library as a collection of composable components forces them to
conform to a single interface. For example, each component has to
use the uniform stepper function interface (see the |st_stream| type)
to report the next stream element or
the end of the stream.
Hence, each component has to generate code to
examine (deconstruct) and construct the |stream_shape| data type.

At first glance, nothing can be done about this: the
result of the step function, whether it is |Nil| or a |Cons|,
depends on the current state, which is surely not known until the
stream processing pipeline is run. We do know, however,
that the step function invariably returns either |Nil| or a
|Cons|, and the caller must be ready to handle both alternatives. We
should exploit this static knowledge.

To statically (at code-generation time) make sure that the caller of
the step function handles both alternatives of its result, we
have to change the function to accept a pair of handlers:
one for a |Nil| result and one for a |Cons|.
In other words, we have to change the result's representation,
from the sum |stream_shape| to a product of eliminators.
Such a replacement effectively removes the need to construct the
|stream_shape| data type at run-time in the first place. Essentially,
we change |step| to be in
continuation-passing style, i.e., to accept the continuation for its
result. The |stream_shape| data type nominally remains, but it becomes the
argument to the continuation and we mark its variants as
statically known (with no need to construct it at
run-time). All in all, we arrive at the following type for
the staged stream
\begin{code}
type 'a st_stream =
  exists's. 's code *
    (forall'w. 's code ->
      (('a code,'s code) stream_shape -> 'w code) ->
      'w code)
\end{code}

That is, a stream is again a pair of a hidden state, |'s| (only
known dynamically, i.e., |'s code|),
and a step function, but the step function does not return
|stream_shape| values (of dynamic |'a|s and |'s|s) but accepts an
extra argument (the continuation) to pass such values to. The step
function returns whatever (generic type |'w|, only known dynamically) the continuation
returns.

The variants of the |stream_shape| are now known when |step|
calls its continuation, which happens at code-generation time. The
|map| combinator becomes
\begin{code}
let map : ('a code -> 'b code) ->
          'a st_stream -> 'b st_stream =
  fun f (s,step) ->
    let new_step s k = step s @@ function
      | Nil -> k Nil
      | Cons (a,t) ->
        .<let a' = .~(f a) in
          .~(k @@ Cons (.<a'>., t))>.
    in (s,new_step)
\end{code}
taking into account that |step|, instead of returning the result, calls a
continuation on it. Although the data type |stream_shape| remains, its
construction and pattern-matching now happen at code-generation
time, i.e., statically. As another example, the |fold| combinator becomes:
\begin{code}
let fold : ('z code -> 'a code -> 'z code) ->
           'z code -> 'a st_stream -> 'z code
  = fun f z (s,step) ->
  .<let rec loop z s = .~(step .<s>. @@ function
      | Nil -> .<z>.
      | Cons (a,t) -> .<loop .~(f .<z>. a) .~t>.)
    in loop .~z .~s>.
\end{code}
Our running example pipeline, summing the squares of all elements of
a sample array, now generates the following code
\begin{code}
val c : int code = .<
  let rec loop_1 z_2 s_3 =
    match s_3 with
    | (i_4,arr_5) ->
      if i_4 < (Array.length arr_5)
      then
        let el_6 = arr_5.(i_4) in
        let a'_7 = el_6 * el_6 in
        loop_1 (z_2 + a'_7) ((i_4 + 1), arr_5)
      else z_2 in
  loop_1 0 (0, [|0;1;2;3;4|])>.
\end{code}
In stark contrast with the naive staging
of \S\ref{sec:simple-staging}, the generated code has no traces of
the |stream_shape| data type.
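For reference, |of_arr| may be written in this representation as
follows (our sketch; it is consistent with the generated code just
shown, and the library version differs only cosmetically):
\begin{code}
(* Sketch of of_arr in the continuation-passing representation:
   the Nil/Cons alternatives are chosen at generation time *)
let of_arr : 'a array code -> 'a st_stream =
  fun arr ->
    let step s k = .<match .~s with
      | (i,arr) ->
        if i < Array.length arr
        then let el = arr.(i) in
             .~(k @@ Cons (.<el>., .<(i+1,arr)>.))
        else .~(k Nil)>.
    in (.<(0, .~arr)>., step)
\end{code}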
Although the data type is still constructed and deconstructed, the
corresponding overhead is shifted from the generated code to the
code-generator. Generating code may take a bit longer but the
result is more efficient. For full fusion, we will need to shift
overhead to the generator two more times.

\subsection{Fusing the Stream State}
\label{sec:state-fusing}

Although we have removed the most noticeable repeated
construction and deconstruction of the |stream_shape| data type,
the abstraction overhead still remains. The main loop in the generated code
pattern-matches on the current state, which is the pair of the index
and the array. The recursive invocation of the loop packs the index
and the array back into a pair. Our task is to deforest the pair
away. This seems rather difficult, however: the state is being updated
on every iteration of the loop, and the loop structure (e.g., number
of iterations) is generally not statically known. Although it is the
(statically known) |step| function that computes the updated state,
the state has to be threaded through the fold's |loop|, which
treats it as a black-box piece of code. The
fact that it is a pair cannot be exploited and, hence, the overhead cannot
be shifted to the generator. There is a way out, however. It requires
a non-trivial step:
the threading of the state through the loop can be eliminated if the
state is mutable.

The step function no longer has to return (strictly speaking: pass to its
continuation) the updated state: the
update happens in place. Therefore, the state no longer
has to be annotated as dynamic---its structure can be known to the
generator. Finally, in order to have the appropriate operator
allocate the reference cell for the array index, we need to employ
the let-insertion technique \cite{bondorf-improving}, by also
using continuation-passing style for
the initial state. The definition of the stream type (|'a st_stream|)
now becomes:
\begin{code}
type 'a st_stream =
  exists's.
    (forall'w. ('s -> 'w code) -> 'w code) *
    (forall'w. 's ->
      (('a code,unit) stream_shape -> 'w code) ->
      'w code)
\end{code}
That is, a stream is a pair of an |init| function and a |step| function.
The |init| function implicitly hides a state: it knows how to call a continuation
(that accepts
a static state and returns a generic dynamic value, |'w|) and returns
the result of the continuation. The |step| function is much like
before, but operating on a statically-known state (or, more correctly, a
hidden state with a statically-known structure).

The new |of_arr| combinator demonstrates
the let-insertion (the allocation of the reference cell for the
current array index) in |init|, and the in-place update of the state (the |incr|
operation):
\begin{code}
let of_arr : 'a array code -> 'a st_stream =
  let init arr k =
    .<let i = ref 0 and
          arr = .~arr in .~(k (.<i>., .<arr>.))>.
  and step (i,arr) k =
    .<if !(.~i) < Array.length .~arr
      then
        let el = (.~arr).(!(.~i)) in
        incr .~i;
        .~(k @@ Cons (.<el>., ()))
      else .~(k Nil)>.
  in
  fun arr -> (init arr, step)
\end{code}
Until now, the state of the |of_arr| stream had the type
|(int * 'a array) code|. It has become |int ref code * 'a array code|,
a statically known pair of two code values. The construction and
deconstruction of that pair now happens at code-generation time.

The earlier |map| combinator did not even look at the current state (nor
could it), therefore its code remains unaffected by the change in the
state representation. The |fold| combinator no longer has to thread
the state through its loop:
\begin{code}
let fold : ('z code -> 'a code -> 'z code) ->
           'z code -> 'a st_stream -> 'z code
  = fun f z (init,step) ->
  init @@ fun s ->
    .<let rec loop z = .~(step s @@ function
        | Nil -> .<z>.
        | Cons (a,_) -> .<loop .~(f .<z>. a)>.)
      in loop .~z>.
\end{code}
It obtains the
state from the initializer and passes it to the step function, which
knows its structure. The generated code for the running-example
stream-processing pipeline is:
\begin{code}
val c : int code = .<
  let i_8 = ref 0
  and arr_9 = [|0;1;2;3;4|] in
  let rec loop_10 z_11 =
    if ! i_8 < Array.length arr_9
    then
      let el_12 = arr_9.(! i_8) in
      incr i_8;
      let a'_13 = el_12 * el_12 in
      loop_10 (z_11 + a'_13)
    else z_11 in
  loop_10 0>.
\end{code}
The resulting code shows the absence of any overhead. All intermediate
data structures have been eliminated. The code is what we could expect
to get from a competent OCaml programmer.

\subsection{Generating Imperative Loops}
\label{sec:imperative-loops}

It seems we have achieved our goal. The library (extended for
filtering, zipping, and nested streams) can be used in (Meta)OCaml
practice. It relies, however, on tail-recursive function calls. These
may be a good fit for OCaml,\footnote{Actually, our benchmarking reveals
  that for- and while-loops are currently faster even in OCaml.}
but not for Java or Scala. (In Scala, tail-recursion is only
supported with significant run-time overhead.) The fastest way to
iterate is to use native while-loops, especially in Java or
Scala. Also, the dummy |('a code,unit) stream_shape| in
the |'a st_stream| type looks
odd: the |stream_shape| data type has become artificial.
Although |unit| has no effect on generated code, it is
less than pleasing aesthetically to need a placeholder type in our
signature. For these reasons, we embark on one last transformation.

The last step of stream staging is driven by several insights. First
of all, most languages provide two sorts of imperative loops: a
general while-loop and the more specific, and often more efficient (at
least in OCaml), for-loop. We would like to be able to generate
for-loops if possible, for instance, in our running example.
However, with added subranging or zipping (described in detail
in \S\ref{sec:implementation}, below) the pipeline can no longer be represented as an
OCaml for-loop, which cannot accommodate extra termination
tests. Therefore, the stream producer should not commit to any particular
loop representation. Rather, it has to collect all the needed
information for loop generation, but leave the actual generation to the
stream consumer, when the entire pipeline is known. Thus the
stream representation type becomes as follows:

\begin{code}
type ('a,'s) producer_t =
  | For of
      {upb:   's -> int code;
       index: 's -> int code -> ('a -> unit code) ->
              unit code}
  | Unfold of
      {term: 's -> bool code;
       step: 's -> ('a -> unit code) -> unit code}
and 'a st_stream =
  exists's. (forall'w. ('s -> 'w code) -> 'w code) *
            ('a,'s) producer_t
and 'a stream = 'a code st_stream
\end{code}

That is, a stream type is a pair of an |init| function (which, as before,
has the ability to call a continuation with a hidden state) and an encoding
of a producer. We distinguish two sorts of producers: a producer that can be driven
by a for-loop, or a general ``unfold'' producer. Each of them supports two
functions. A for-loop producer carries the exact upper bound, |upb|, for
the loop index variable and the |index| function that returns the stream element
given an index.
For a general producer,
we refactor (with an eye for the while-loop) the earlier representation
\begin{code}
(('a code,unit) stream_shape -> 'w code) -> 'w code
\end{code}
into two components: the termination test, |term|, producing
a dynamic |bool| value (if the test yields
|false| for the current state, the loop is finished) and the |step| function,
to produce a new stream element and advance the state. We also used
another insight: the imperative-loop style of the processing pipeline
makes it unnecessary (moreover, difficult) to pass around the
consumer (|fold|) state from one iteration to another. It is easier to
accumulate the state in a mutable cell. Therefore, the answer type of
the |step| and |index| functions can be |unit code| rather than |'w code|.

There is one more difference from the earlier staged stream, which is a bit
harder to see. Previously, the stream value was annotated as
dynamic: we really cannot know before running the
pipeline what the current element is. Now, the value produced by the |step| or
|index| functions has the type |'a| without any |code| annotations,
meaning that it is statically known! Although the value of the current
stream element is determined only when the pipeline is run, its
structure can be known earlier. For example, the new type lets the
producer yield a pair of values: even though the values themselves are
annotated as dynamic (of a |code| type) the fact that it is a pair can be
known statically. We use this extra flexibility of the more general
stream value type extensively in \S\ref{sec:take}.

We can now see the new design in action.
The stream producer |of_arr| is surely a for-loop-style producer:
\begin{code}
let of_arr : 'a array code -> 'a stream = fun arr ->
  let init k = .<let arr = .~arr in .~(k .<arr>.)>.
  and upb arr = .<Array.length .~arr - 1>.
  and index arr i k =
    .<let el = (.~arr).(.~i) in .~(k .<el>.)>.
  in (init, For {upb;index})
\end{code}
In contrast, the unfold combinator
\begin{code}
let unfold : ('z code -> ('a * 'z) option code) ->
             'z code -> 'a stream = ...
\end{code}
is an |Unfold| producer.

Importantly, a producer that starts as a for-loop may later be converted to a
more general while-loop producer (so as to tack on extra
termination tests---see |take| in \S\ref{sec:take}). Therefore, we need the
conversion function
\begin{code}
let for_unfold : 'a st_stream -> 'a st_stream = function
  | (init,For {upb;index}) ->
    let init k = init @@ fun s0 ->
        .<let i = ref 0 in .~(k (.<i>.,s0))>.
    and term (i,s0) = .<!(.~i) <= .~(upb s0)>.
    and step (i,s0) k =
        index s0 .<!(.~i)>. @@
        fun a -> .<(incr .~i; .~(k a))>.
    in (init, Unfold {term;step})
  | x -> x
\end{code}
used internally within the library.

The stream mapping operation composes the mapping function with the
|index| or |step|: transforming, as before, the produced value
``in-flight'', so to speak.
\begin{code}
let rec map_raw : ('a -> ('b -> unit code) -> unit code)
                  -> 'a st_stream -> 'b st_stream =
  fun tr -> function
  | (init,For ({index;_} as g)) ->
    let index s i k = index s i @@ fun e -> tr e k in
    (init, For {g with index})
  | (init,Unfold ({step;_} as g)) ->
    let step s k = step s @@ fun e -> tr e k in
    (init, Unfold {g with step})
\end{code}
We have defined |map_raw| with the general type (to be used later,
e.g., in \S\ref{sec:take}); the familiar |map| is a special case:
\begin{code}
let map : ('a code -> 'b code) -> 'a stream -> 'b stream
  = fun f str -> map_raw (fun a k ->
      .<let t = .~(f a) in .~(k .<t>.)>.) str
\end{code}
The mapper |tr| in
|map_raw| is in the continuation-passing style with the
|unit code| answer-type. This allows us to perform let-insertion
\cite{bondorf-improving}, binding the mapped value to a variable, and
hence avoiding the potential duplication of the mapping operation.

As behooves pull-style streams, the consumer at the end of the
pipeline generates the loop to drive
the iteration. Yet we do manage to generate for-loops, characteristic of
push streams (see \S\ref{sec:fusion-problem}).
\begin{code}
let rec fold_raw :
    ('a -> unit code) -> 'a st_stream -> unit code
  = fun consumer -> function
  | (init,For {upb;index}) ->
    init @@ fun sp ->
      .<for i = 0 to .~(upb sp) do
          .~(index sp .<i>. @@ consumer)
        done>.
  | (init,Unfold {term;step}) ->
    init @@ fun sp ->
      .<while .~(term sp) do
          .~(step sp consumer)
        done>.
\end{code}
It is simpler (especially when we add nesting later) to implement a
more general |fold_raw|, which feeds the eventually produced
stream element to the given imperative |consumer|.
The ordinary |fold| is a wrapper that provides such a consumer,
accumulating the result in
a mutable cell and extracting it at the end.
\begin{code}
let fold : ('z code -> 'a code -> 'z code) ->
           'z code -> 'a stream -> 'z code
  = fun f z str ->
  .<let s = ref .~z in
    (.~(fold_raw
         (fun a -> .<s := .~(f .<!s>. a)>.)
         str);
     !s)>.
\end{code}

The generated code for our running example is:
\begin{code}
val c : int code = .<
  let s_1 = ref 0 in
  let arr_2 = [|0;1;2;3;4|] in
  for i_3 = 0 to (Array.length arr_2) - 1 do
    let el_4 = arr_2.(i_3) in
    let t_5 = el_4 * el_4 in s_1 := !s_1 + t_5
  done;
  ! s_1>.
\end{code}
This code could not be better. It is what we expect an OCaml programmer to
write; furthermore, such code also performs well in Scala, Java and other
languages.
We have achieved our goal---for simple pipelines, at least.

\section{Full Library}
\label{sec:implementation}

The previous section presented our approach of eliminating all
abstraction overhead of a stream library through the creative use of
staging---generating code of hand-written quality and efficiency.
However, a full stream library has more combinators than we have
dealt with so far. This section describes the remaining facilities:
filtering, sub-ranging, nested streams and parallel streams
(zipping).
Consistently achieving deforestation and high performance in the
presence of all these features is a challenge.
We identify three concepts of stream processing that drive our effort:
the rate of production and consumption of stream elements
(\emph{linearity} and filtering---\S\ref{sec:filter-nest}),
size-limiting a stream (\S\ref{sec:take}), and processing multiple
streams in tandem (zipping---\S\ref{sec:zip}). We conclude our core
discussion with a theorem on the elimination of all overhead.

\subsection{Filtered and Nested Streams}
\label{sec:filter-nest}

Our library is primarily based on the design presented at the end of
\S\ref{sec:overhead-elim}. Filtering and nested streams (|flat_map|)
require an extension, however, which lets us treat filtering
and flat-mapping uniformly.

Let us look back at this design. It centers on two operations, |term|
and |step|: forgetting for a moment the staging annotations, |term s|
decides whether the stream still continues, while |step s| produces
the current element and advances the state. Exactly one stream element
is produced per advance in state. We call such streams
\emph{linear}. They have many useful algebraic properties, especially when it
comes to zipping.
We will exploit them in \S\ref{sec:zip}.

Clearly the |of_arr| stream producer and the more general |unfold|
producers build linear streams. The |map| operation preserves
linearity. What destroys it is filtering and nesting. In the filtered
stream \lstinline{prod |> filter p}, the advancement of the |prod|
state is no longer always accompanied by the production of the stream
element: if the filter predicate |p| rejects the element, the pipeline
will yield nothing for that iteration. Likewise, in the nested stream
\lstinline{prod |> flat_map (fun x -> inner_prod x)}, the advancement
of the |prod| state may lead to zero, one, or many stream elements
given to the pipeline consumer.

Given the importance of linearity (to be seen in full in
\S\ref{sec:zip}) we keep track of it in the stream representation. We
represent a non-linear stream as a composition of an always-linear
producer with a non-linear transformer:
\begin{code}
type card_t = AtMost1 | Many

type ('a,'s) producer_t =
  | For of
      {upb:   's -> int code;
       index: 's -> int code -> ('a -> unit code) ->
              unit code}
  | Unfold of
      {term: 's -> bool code;
       card: card_t;
       step: 's -> ('a -> unit code) -> unit code}
and 'a producer =
  exists's. (forall'w. ('s -> 'w code) -> 'w code) *
            ('a,'s) producer_t
and 'a st_stream =
  | Linear of 'a producer
  | Nested of exists'b. 'b producer * ('b -> 'a st_stream)
and 'a stream = 'a code st_stream
\end{code}
The difference from the earlier representation in
\S\ref{sec:overhead-elim} is the addition of a
sum data type with variants |Linear| and |Nested|,
for linear and nested streams. We also
added a cardinality marker to the general producer, noting if it
generates possibly many elements or at most one.

The |flat_map| combinator adds a non-linear transformer to the
stream (recursively descending into the already nested stream):
\begin{code}
let rec flat_map_raw :
    ('a -> 'b st_stream) -> 'a st_stream -> 'b st_stream =
  fun tr -> function
  | Linear prod -> Nested (prod,tr)
  | Nested (prod,nestf) ->
    Nested (prod,fun a -> flat_map_raw tr @@ nestf a)

let flat_map :
    ('a code -> 'b stream) -> 'a stream -> 'b stream =
  flat_map_raw
\end{code}
The |filter| combinator becomes just a particular case of flat-mapping: nesting of
a stream that produces at most one element:
\begin{code}
let filter : ('a code -> bool code) ->
             'a stream -> 'a stream = fun f ->
  let filter_stream a =
    ((fun k -> k a),
     Unfold {card = AtMost1; term = f;
             step = fun a k -> k a})
  in flat_map_raw (fun x -> Linear (filter_stream x))
\end{code}
The addition of recursively |Nested| streams requires adjusting the
earlier |map_raw| and |fold| definitions (\S\ref{sec:overhead-elim})
to recursively descend down the nesting. The
adjustment is straightforward; please see the accompanying source code
for details. The adjusted |fold| will generate nested loops for
nested streams.

\subsection{Sub-Ranging and Infinite Streams}
\label{sec:take}

The stream combinator |take| limits the size of the stream:
\begin{code}
val take : int code -> 'a stream -> 'a stream
\end{code}
For example, |take .<10>. str| is a stream of the first 10 elements of
|str|, if there are that many. It is the |take| combinator that lets us
handle conceptually infinite streams. Such infinite streams are easily
created with |unfold|: for example, |iota n|, the stream of all
natural numbers from |n| up:
\begin{code}
let iota n = unfold (fun n -> .<Some (.~n, .~n+1)>.) n
\end{code}
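As an illustration (our sketch, using the combinators defined above),
an infinite stream can be capped with |take| and then reduced:
\begin{code}
(* Sketch: cap an infinite stream, then reduce it *)
let c = iota .<1>.
  |> map (fun x -> .<.~x * .~x>.)
  |> take .<5>.
  |> fold (fun z a -> .<.~z + .~a>.) .<0>.
\end{code}
Since |iota| is an |Unfold| producer, the consumer generates a
while-loop whose termination test comes from the counter that |take|
introduces.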
The implementation of |take| demonstrates and justifies design
decisions that might have seemed arbitrary earlier. For example,
distinguishing linear streams and indexed, for-loop-style producers
in the representation type pays off. In a linear stream pipeline, the
number of elements at the end of the pipeline is the same as the
number of produced elements. Therefore, for a linear stream, |take|
can impose the limit close to the production. The for-loop-style
producer is particularly easy to limit in size: we merely need to
adjust the upper bound:
\begin{code}
let take = fun n -> function
  | Linear (init, For {upb;index}) ->
    let upb s = .<min (.~n - 1) .~(upb s)>. in
    Linear (init, For {upb;index})
  ...
\end{code}
Limiting the size of a non-linear stream is slightly more
complicated:
\begin{code}
let take = fun n -> function
  ...
  | Nested (p,nestf) ->
    Nested (add_nr n (for_unfold p),
      fun (nr,a) ->
        map_raw (fun a k -> .<(decr .~nr; .~(k a))>.) @@
        more_termination .<!(.~nr) > 0>. (nestf a))
\end{code}
The idea is straightforward: allocate a reference cell |nr|
with the remaining element count (initially |n|), add the check
|!nr > 0| to the termination condition of the stream producer, and
arrange to decrement the |nr| count at the end of the
stream. Recall, for a non-linear stream---a composition of several
producers---the count of eventually produced elements may differ
arbitrarily from the count of the elements emitted by the first
producer. A moment of thought shows that the range check |!nr > 0|
has to be added not only to the first producer but to the producers of
all nested substreams: this is the role of
function |more_termination| (see the accompanying code for its
definition) in the fragment above. The operation
|add_nr| allocates the cell |nr| and adds the termination condition to
the first producer. Recall that, since for-loops in OCaml cannot take
extra termination conditions, a for-loop-style producer has to be first
converted to a general unfold-style producer, using |for_unfold|, which
we defined in \S\ref{sec:overhead-elim}. The operation |add_nr|
(definition not shown) also adds |nr| to the produced value: the result of
|add_nr n (for_unfold p)| is of type
|(int ref code * 'a code) st_stream|. Adding the operation to
decrement |nr| is conveniently done with |map_raw|
from \S\ref{sec:overhead-elim}.
We thus now see the use of the more general (|'a| and not just |'a code|)
stream type and the general stream mapping function.

\subsection{zip: Fusing Parallel Streams}
\label{sec:zip}

This section describes the most complex operation: handling two
streams in tandem, i.e., zipping:
\begin{code}
val zip_with : ('a code -> 'b code -> 'c code) ->
               ('a stream -> 'b stream -> 'c stream)
\end{code}
Many stream libraries lack this operation: zipping is
practically impossible with push streams, due to inherent complexity,
as we shall see shortly. Linear streams and the general |map_raw|
operation turn out to be important abstractions that make the problem
tractable.

One cause of the complexity of |zip_with| is the need to consider many
special cases, so as to generate code of hand-written
quality. All cases share the operation of combining the elements
of two streams to obtain the element of the zipped stream.
It is
convenient to factor out this operation:
\begin{code}
val zip_raw : 'a st_stream -> 'b st_stream ->
              ('a * 'b) st_stream

let zip_with f str1 str2 =
  map_raw (fun (x,y) k -> k (f x y)) @@
  zip_raw str1 str2
\end{code}
The auxiliary |zip_raw| builds a stream of pairs---statically known
pairs of dynamic values. Therefore, the overhead of constructing and
deconstructing the pairs is incurred only once, in the generator. There is
no tupling in the generated code.

The |zip_raw| function is a dispatcher for various special cases, to
be explained below.
\begin{code}
let rec zip_raw str1 str2 = match (str1,str2) with
  | (Linear prod1, Linear prod2) ->
    Linear (zip_producer prod1 prod2)
  | (Linear prod1, Nested (prod2,nestf2)) ->
    push_linear (for_unfold prod1)
                (for_unfold prod2,nestf2)
  | (Nested (prod1,nestf1), Linear prod2) ->
    map_raw (fun (y,x) k -> k (x,y)) @@
    push_linear (for_unfold prod2)
                (for_unfold prod1,nestf1)
  | (str1,str2) ->
    zip_raw (Linear (make_linear str1)) str2
\end{code}

The simplest case is zipping two linear streams. Recall, a linear
stream produces exactly one element when advancing the state. Zipped
linear streams, hence, yield a linear stream that produces a pair of
elements by advancing the state of both argument streams exactly once.
The pairing of the stream advancement is especially efficient for
for-loop-style streams, which share a common state, the index:
\begin{code}
let rec zip_producer :
    'a producer -> 'b producer -> ('a * 'b) producer =
  fun p1 p2 -> match (p1,p2) with
  | (i1,For f1), (i2,For f2) ->
    let init k =
        i1 @@ fun s1 ->
        i2 @@ fun s2 -> k (s1,s2)
    and upb (s1,s2) = .<min .~(f1.upb s1) .~(f2.upb s2)>.
    and index (s1,s2) i k =
        f1.index s1 i @@ fun e1 ->
        f2.index s2 i @@ fun e2 -> k (e1,e2)
    in (init, For {upb;index})
  | (* elided *)
\end{code}

In the general case, |zip_raw str1 str2| has to determine how to advance
the state of |str1| and |str2| to produce one element of the zipped
stream: the pair of the current elements of |str1| and
|str2|. Informally, we have to reason all the way from the production of an
element to the advancement of the state. For linear streams, the
relation between the current element and the state is one-to-one. In
general, the states of the two components of the zipped stream advance
at different paces. Consider the following sample streams:
\begin{code}
let stre = of_arr arr1
  |> filter (fun x -> .<.~x mod 2 = 0>.)
let strq = of_arr arr2
  |> map (fun x -> .<.~x * .~x>.)
let str2 = of_arr arr1
  |> flat_map (fun _ -> of_arr .<[|1;2|]>.)
let str3 = of_arr arr1
  |> flat_map (fun _ -> of_arr .<[|1;2;3|]>.)
\end{code}
To produce one element of |zip_raw stre strq|, the state of |stre|
has to be advanced a statically-unknown number of times.
Zipping nested streams is even harder---e.g.,
|zip_raw str2 str3|, where the states advance in complex
patterns and the end of the inner stream of |str2| does not align with
the end of the inner stream in |str3|.

Zipping simplifies if one of the streams is linear, as in
|zip_raw stre strq|. The key insight is
to advance the linear stream |strq| after we are sure to have obtained the
element of the non-linear stream |stre|.
This idea is elegantly
realized as mapping of the step function of |strq| over |stre|
(the latter is, recall, an |int stream|, which is an |int code st_stream|),
obtaining the desired zipped |(int code * int code) st_stream|:
\begin{code}
map_raw (fun e1 k ->
  strq.step sq (fun e2 -> k (e1,e2))) stre
\end{code}

The above code is an outline: we have to initialize |strq| to obtain its
state |sq|, and we need to push the termination condition of |strq|
into |stre|. Function |push_linear| in the accompanying code
takes care of all these details.

The last and most complex case is zipping two non-linear streams. Our solution is
to convert one of them to a linear stream, and then use the approach just
described. Turning a non-linear stream into a producer
involves ``reifying'' a stream: converting an |'a stream| data type to
essentially a |(unit -> 'a option) code| function, which, when called,
reports the new element or the end of the stream. We have to create a
closure and generate and deconstruct the intermediate data
type |'a option|. There is no way around this: in one form or another,
we have to capture the non-linear stream's continuation. A human
programmer would have to do the same---this is precisely what makes zipping so difficult
in practice. Our library reifies only one of the two zipped
streams, without relying on tail-call optimization, for
maximum portability.

\subsection{Elimination of All Overhead, Formally}

Sections \ref{sec:overview}, above, and \ref{sec:experiments}, below,
demonstrate the elimination of abstraction overhead on selected
examples and benchmarks. We now state how and why the overhead is
eliminated in all cases.

We call the higher-order arguments of |map|, |filter|, |zip_with|,
etc. ``user-generators'': they are specified by the library user and
provide per-element stream processing.

\begin{theorem}\label{thm:expr}
Any well-typed pipeline generator---built by composing a stream
producer, Fig.~\ref{f:interface}, with an
arbitrary combination of transformers followed by a reducer---terminates,
provided the user-generators do. The resulting code---with the sole
exception of pipelines zipping two flat-mapped
streams---constructs no data structures beyond those
constructed by the user-generators.
\end{theorem}

Therefore, if the user-generators proceed without
construction/allocation, the entire pipeline, after the initial
set-up, runs without allocations. The only exception is the zipping of
two streams that are both made by flattening inner streams. In
this case, the rate-adjusting allocation is inevitable, even in
hand-written code, and is not considered overhead.

\emph{Proof sketch:} The proof is simple, thanks to the explicitness
of staging and treating the generated code as an opaque value that
cannot be deconstructed and examined. Therefore, the only tuple
construction operations in the generated code are those that we have
explicitly generated. Hence, to prove our theorem, we only have to
inspect the brackets that appear in our library implementation,
checking for tuples or other objects.
\section{Experiments}
\label{sec:experiments}

We evaluated our approach on several benchmarks from past literature,
measuring the iteration throughput:
\begin{bullets}
  \item \textbf{sum}: the simplest
\lstinline{of_arr arr |> sum} pipeline, summing the elements of an array;
  \item \textbf{sumOfSquares}: our running example from
    \S\ref{sec:simple-staging} on;
  \item \textbf{sumOfSquaresEven}: the sumOfSquares benchmark with an added
    filter, summing the squares of only the even array elements;
  \item \textbf{cart}: $\sum x_iy_j$, using |flat_map| to build the outer-product stream;
  \item \textbf{maps}: consecutive map operations with integer
    multiplication;
  \item \textbf{filters}: consecutive filter operations using
    integer comparison;
  \item \textbf{dotProduct}: compute dot product of two arrays using
    |zip_with|;
  \item \textbf{flatMap\_after\_zipWith}: compute $\sum (x_i+x_i)y_j$,
    like cart above, doubling the |x| array via |zip_with (+)| with itself;
  \item \textbf{zipWith\_after\_flatMap}: |zip_with| of two streams one
    of which
is the result of |flat_map|;
  \item \textbf{flat\_map\_take}: |flat_map| followed by |take|.
\end{bullets}
The source code of all benchmarks is available at the project's
repository and the OCaml versions are also listed in
Appendix~D of the extended version.
Our
benchmarks come from the sets by Murray et al.~\cite{murray_steno:_2011} and
Coutts et al.~\cite{coutts_stream_2007}, to which we added more complex
combinations (the last three on the list above). (The Murray and Coutts sets
also contain a few more simple operator combinations, which we omit for conciseness,
as they share the
performance characteristics of other benchmarks.)

The staged code was generated using our library (\strymonas{}), with MetaOCaml on the
OCaml platform and LMS on Scala, as detailed below.
As one basis of
comparison, we have implemented all benchmarks using
the streams libraries available on each platform\footnote{\relax
We restrict our attention to
the closest feature-rich apples-to-apples comparables: the industry-standard
libraries for OCaml+JVM languages. We also report qualitative comparisons
in \S\ref{sec:related}.}:
Batteries\footnote{Batteries is the widely used ``extended
  standard'' library in
  OCaml~\url{http://batteries.forge.ocamlcore.org/}.}
in OCaml and the standard Java 8 and Scala streams. As there is no unifying
module that implements all the combinators we employ, we use data type
conversions where possible. Java 8 does not support a |zip| operator,
hence some benchmarks are missing for that setup.\footnote{One could
emulate \texttt{zip} using
\texttt{iterator} from Java 8 push-streams---at a significant
drop in performance. This encoding also markedly differs from
the structure of our other stream implementations.}

As the baseline and the other basis of comparison, we have hand-coded
all the benchmarks, using high-performance, imperative code, with
|while| or index-based |for|-loops, as applicable.
In \scala{} we use only |while|-loops, as they are the analogue of
imperative iteration; |for|-loops in Scala operate over |Range|s
and have worse performance. In fact, in one case we had
to re-code the hand-optimized loop upon discovering that it was
not as optimal as we thought: the library-generated code
significantly outperformed it!

\paragraph*{Input:} All tests were run with the same input set. For the
\textbf{sum}, \textbf{sumOfSquares}, \textbf{sumOfSquaresEven}, \textbf{maps},
and \textbf{filters} tests we used an array of $N = 100,000,000$ small integers:
$x_i = i\ \mathrm{mod}\ 10$. The
\textbf{cart} test iterates over two arrays: an outer one of $10,000,000$
integers and an inner one of $10$. For \textbf{dotProduct} we used
$10,000,000$ integers, for \textbf{flatMap\_after\_zipWith} $10,000$, for
\textbf{zipWith\_after\_flatMap} $10,000,000$, and for
\textbf{flat\_map\_take} $N$ numbers, with |take| limiting the output
to $20\%$ of $N$.

\paragraph*{Setup:} The system we use runs an x64 OSX El Capitan 10.11.4
operating system on bare metal. It is equipped with a 2.7 GHz Intel Core i5
CPU (I5-5257U) having 2 physical and 2 logical cores. The total memory of the
system is 8 GB of type 1867 MHz DDR3. We use version build 1.8.0\_65-b17 of the
Open JDK.
The compiler versions of our setup are presented in the table below:
\begin{center}
\begin{tabular}{ c c c }
  \toprule
  Language & Compiler & Staging \\
  \midrule
  Java  & Java 8 (1.8.0\_65) & \textemdash \\
  Scala & 2.11.2             & LMS 0.9.0 \\
  OCaml & 4.02.1             & BER MetaOCaml N102 \\
  \bottomrule
\end{tabular}
\end{center}

\addtolength{\abovecaptionskip}{-8mm}
\addtolength{\belowcaptionskip}{-1.5mm}

\begin{figure*}
\centering
\begin{subfigure}{0.9\textwidth}
  \centering
  \includegraphics[width=.98\linewidth]{ocaml.pdf}
\end{subfigure}
\caption{OCaml microbenchmarks in msec / iteration (avg. of 30, with mean-error bars shown). ``Staged'' is our library (\strymonas{}). The figure is truncated: OCaml Batteries takes more than 60sec (per iteration!) for some complex benchmarks.}
\label{fig:microbenchmarks1}
\end{figure*}

\begin{figure*}
\centering
\begin{subfigure}{0.9\textwidth}
  \centering
  \includegraphics[width=.98\linewidth]{jvm.pdf}
\end{subfigure}
\caption{JVM microbenchmarks (both Java and Scala) in msec / iteration (avg. of 30, with mean-error bars shown). ``Staged\_scala'' is our library (\strymonas{}). The figure is truncated.}
\label{fig:microbenchmarks2}
\end{figure*}

\addtolength{\abovecaptionskip}{+8mm}
\addtolength{\belowcaptionskip}{+1mm}

\paragraph*{Automation:} For Java and \scala{} benchmarks we used the Java
Microbenchmark Harness (JMH)~\cite{aleksey_shipilev_openjdk} tool: a
benchmarking tool for JVM-based languages that is part of the OpenJDK. JMH is an
annotation-based tool that takes care of all intrinsic details of the execution
process. Its goal is to produce results that are as objective as possible. The JVM
performs JIT compilation (we use the C2 JIT compiler), so the benchmark author
must measure execution time after a certain warm-up period, waiting for transient
responses to settle down. JMH offers an easy API to achieve that. In our
benchmarks we employed 30 warm-up iterations and 30 proper iterations. We also
force garbage collection before benchmark execution and between runs. All OCaml code
was compiled with |ocamlopt| into machine code. In particular, the
MetaOCaml-generated code was saved into a file, compiled, and then
benchmarked in isolation. The test harness invokes the compiled
executable via |Sys.command|, the cost of which is
not included in the results. The harness calculates the average
execution time, computing the mean error and standard deviation using
the Student's t-distribution. The same method is employed
in JMH.
For all tests, we do not measure the time needed to initialize
data structures (filling arrays), nor the run-time compilation cost of
staging. These costs are constant (i.e., they become proportionally
insignificant for larger inputs or more iterations) and they were
small, between 5 and 10ms, for all our runs.

\paragraph*{Results:} In Figures~\ref{fig:microbenchmarks1}
and~\ref{fig:microbenchmarks2} we present the results of our experiments divided
into two categories: a) the OCaml microbenchmarks of the baseline, staged and
Batteries experiments, and b) the JVM microbenchmarks. The JVM diagram
contains the baselines for both Java and Scala.
Shorter bars are better.
Recall that all ``baseline'' implementations are carefully hand-optimized code.

As can be seen, our staged library achieves extremely high
performance, matching hand-written code (in either OCaml, Java, or
Scala) and outperforming other library options by orders of
magnitude. Notably, the highly-optimized Java 8 streams are more than
10x slower on perfectly realistic benchmarks, when those do not
conform to the optimal pattern (the linear loop) of push streams.

\section{Related Work}
\label{sec:related}

The literature on stream library designs is rich. Our approach is the first
to offer full generality while eliminating processing overhead. We discuss
individual related work in more detail next.

One of the earliest stream libraries that rely on staging is Common Lisp's
SERIES \cite{lisp-series,waters_series_1991}, which extensively relies on Lisp
macros to interpret a subset of Lisp code as a stream EDSL. It builds a data
flow graph and then compiles it into a single loop. It can handle filtering,
multiple producers and consumers, but not nested streams. The (over)reliance on
macros may lead to surprises, since the programmer might not be aware that what
looks like CL code is actually a DSL, with a slightly different semantics and
syntax. An experimental Pipes package \cite{lisp-pipes} attempts to re-implement
and extend SERIES, using, this time, a proper EDSL. Pipes extends SERIES by
allowing nesting, but restricts zipping to simple cases. It was posited
that ``arbitrary outputs per input, multiple consumers, multiple producers:
choose two'' \cite{lisp-pipes}. Pipes ``almost manages'' (according to its
author) to implement all three features. Our library demonstrates the
conjecture is false, by supporting all three facilities in full generality
and with high performance.

Lippmeier et al.~\cite{Lippmeier_dataflow_2013} present a line of work
based on SERIES. They aim to transform first-order, non-recursive, synchronous,
finite data-flow programs into fused pipelines. They derive inspiration from
traditional data-flow languages like Lustre~\cite{halbwachs_synchronous_1991}
and Lucid Synchrone~\cite{pouzet_lucid_2006}. In contrast, our library
supports a greater range of fusible combinators, but for bulk data processing.

Haskell has lazy lists, which seem to offer incremental processing
by design. Lazy lists cannot express pipelines that require
side-effects such as reading or writing files.\footnote{We disregard
  the lazy IO misfeature
  \cite{iteratees}.} The all-too-common memory leaks show that
lazy lists do not offer, again by design, stream fusion.
Overcoming the drawbacks of lazy lists, coroutine-like iteratees
\cite{iteratees} and many of their reimplementations support incremental
processing even in the presence of effects, for nested
streams and for several consumers and producers.
Although iteratees avoid
intermediate streams, they still suffer large overheads for captured
continuations, closures, and coroutine calls.

Coutts et al.~\cite{coutts_stream_2007} proposed \emph{Stream
Fusion} (the approach that has become associated with this fixed term), building on previous work (|build|/|foldr|~\cite{gill_shortcut_1993}
and |destroy|/|unfoldr|~\cite{svenningsson_shortuct_2002}) by fusing maps,
filters, folds, zips and nested lists. The approach relies on
GHC's rewrite \textsc{Rules}. Its notable contribution is the support for
stream filtering. In that approach there is no specific treatment of
linearity. The Coutts et al. stream fusion supports zipping, but only in simple cases (no zipping
of nested, subranged streams). Finally, the Coutts et al. approach does not fully fuse
pipelines that contain nested streams (|concatMap|). The reason is that the
stream created by the transformation of |concatMap| uses an internal function
that cannot be optimized by GHC through simple case reduction. The problem
is presented very concisely by Farmer et al. in the
\emph{Hermit in the Stream} work~\cite{farmer_hermit_2012}.

The application of HERMIT~\cite{farmer_hermit_2012} to streams
\cite{farmer_hermit_2014} fixes the shortcomings of the Coutts et al. Stream
Fusion~\cite{coutts_stream_2007} for |concatMap|. As the authors and
Coutts say, |concatMap| is complicated because its
mapping function may create any stream, whose size is not statically known.
The authors implement Coutts's idea of
transforming |concatMap| to
|flatten|; the latter supports fusion for a constant
inner stream. Using HERMIT instead of GHC \textsc{Rules}, Farmer et al. present
two cases as motivating examples. Our approach handles the \emph{non-constant
inner stream case} without any additional action.

The second case is about \emph{multiple inner streams} (of the same state type).
Farmer et al. eliminate some overhead yet do not produce fully
fused code. E.g., pipelines such as the following (in Haskell) are not
fully fused:
\begin{code}
concatMapS (\x -> case even x of
  True  -> enumFromToS 1 x
  False -> enumFromToS 1 (x + 1))
\end{code}
(Farmer et al. raise the question of how often such cases arise in a real
program.) Our library internally places no restrictions on inner streams; it
may well be that the flat-mapping function produces streams of different
structure for each element of the outer stream. On the other hand, the
|flat_map| interface only supports nested streams of a fixed
structure---hence with the applicative rather than monadic interface. We can
provide a more general |flat_map| with a continuation-passing
interface for the mapping function, which then implements:
\begin{code}
flat_map_cps (fun x k ->
  .<if even .~x then .~(k (enumFromTo .<1>. x))
    else .~(k (enumFromTo .<1>. .<.~x + 1>.))>.)
\end{code}
We have refrained from offering this more general interface, since there does not seem to be
a practical need.

GHC \textsc{Rules} \cite{jones_playing_2001}, extensively used in Stream
Fusion, are applied to typed code but by themselves are not typed and
are not guaranteed type-preserving. To write GHC rules, one has to have a
very good understanding of GHC optimization passes, to ensure that the
RULE matches and has any effect at all. \textsc{Rules} by themselves offer no
guarantee, not even the guarantee that the re-written code is
well-typed. Multi-stage programming ensures
that all staging transformations are type-correct.

Jonnalagedda et al.
Jonnalagedda et al. present a library using only CPS encodings
(fold-based)~\cite{jonnalagedda_fold-based_2015}. It uses the Gill et
al. |foldr|/|build| technique~\cite{gill_shortcut_1993} to get staged
streams in Scala. Like |foldr|/|build|, it does not support combinators
with multiple inputs, such as |zip|.

In our work, we employ the traditional multi-stage programming (MSP)
model to implement a performant streaming library. Rompf et
al.~\cite{rompf_optimizing_2013} demonstrate a loop fusion and
deforestation algorithm for data-parallel loops and traversals. They
use staging as a compiler transformation pass and apply it to query
processing over in-memory objects. That technique lacks the rich range
of fused combinators over finite or infinite sources that we support,
but seems adequate for the case studies presented in that work. Porting
our technique from the staged-library level to the
compiler-transformation level should be possible in the context of
Scala/LMS.

Generalized Stream Fusion~\cite{mainland_generalized_stream_fusion}
puts forward the idea of \emph{bundled} stream representations. Each
representation is designed to fit a particular stream consumer,
following a documented cost model. Although this design does not
present a concrete range of optimizations to fuse combinators and
generate loop-based code directly, it presents a generalized model that
can ``host'' any number of specialized stream
representations. Conceptually, this framework could be used to
implement our optimizations. However, it relies on the black-box GHC
optimizer---the opposite of our approach of full transparency and
portability.

Ziria~\cite{steward_vytiniotis_2015}, a language for programming
wireless systems, compiles high-level reconfigurable data-flow programs
to vectorized, fused C code. Ziria's |tick| and |process| (pull and
push, respectively) demonstrate the benefits of having both processing
styles in the same library. It would be interesting to combine our
general-purpose stream library with Ziria's generation of vectorized C
code.

Svensson et al.~\cite{svensson_defunctionalizing_2014} unify pull and
push arrays into a single library by defunctionalizing push arrays,
concisely explaining why pull and push must co-exist under a unified
library. They use a compile monad to interpret their embedded language
into an imperative target language; in our work, we get that for free
from staging. Similarly, the representation of arrays in memory, with
their |CMMem| data type, corresponds to staged arrays (of type
|'a array code|) in our work. The library they derive from the
defunctionalization of |Push| streams is called |PushT|, and the
authors provide evidence that indexing a push array can, indeed, be
efficient (as opposed to indexing simple push-based streams). The paper
does not seem to handle more challenging combinators like |concatMap|
and |take|, and does not efficiently handle combinations of infinite
and finite sources. Still, we share the same goal: to unify both styles
of streams under one roof. Finally, Svensson et al. target arrays for
embedded languages, while we target arrays natively in the language;
our library achieves fusion without relying on a compiler to
intelligently handle all corner cases.
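The reason pull and push must co-exist can be seen in a few lines of
OCaml (a minimal sketch of the two classic encodings, not strymonas'
actual stream representation):
\begin{code}
(* Pull: the consumer demands the next element from an unfold. *)
type 'a pull = Pull : 's * ('s -> ('a * 's) option) -> 'a pull

(* Push: the producer drives the iteration, as a fold. *)
type 'a push = { push : 'r. ('r -> 'a -> 'r) -> 'r -> 'r }

(* Zipping is natural for pull streams: advance both on demand. *)
let zip (p1 : 'a pull) (p2 : 'b pull) : ('a * 'b) pull =
  match p1, p2 with
  | Pull (s1, next1), Pull (s2, next2) ->
      Pull ((s1, s2), fun (s1, s2) ->
        match next1 s1, next2 s2 with
        | Some (a, s1'), Some (b, s2') -> Some ((a, b), (s1', s2'))
        | _                            -> None)

(* A push stream, by contrast, runs as a single tight loop (e.g.,
   summing below), but cannot be suspended mid-iteration---which is
   why zip has no direct push-based definition. *)
let sum (s : int push) : int = s.push (fun acc x -> acc + x) 0
\end{code}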
\section{Discussion: Why Staging?}
\label{sec:why-staging}

Our approach relies on staging. This may impose a barrier to the
practical use of the library: staging annotations are unfamiliar to
many programmers. Furthermore, it is natural to ask whether our
approach could instead be implemented as a compiler optimization pass.

\paragraph{Complexity of staging.}

How much burden staging really imposes on a programmer is an empirical
question. As our library becomes better known and more widely used, we
hope to collect data to answer it. In the meantime, we note that
staging can be effectively hidden in code combinators. The first code
example of \S\ref{sec:overview} (summing the squares of the elements of
an array) can be written without the use of staging annotations as:

\begin{lstlisting}
let sum = fold (fun z a -> add a z) zero

of_arr $arr$
  |> map (fun x -> mul x x)
  |> sum
\end{lstlisting}

In this form, the functions that handle stream elements are written
using a small combinator library, with operations |add|, |mul|, etc.\
that hide all staging. The operations are defined simply as
\begin{code}
let add x y = .<.~x + .~y>. and mul x y = .<.~x * .~y>.
let zero = .<0>.
\end{code}

Furthermore, our Scala implementation has no explicit staging
annotations, only \sv{Rep} types (which are arguably less
intrusive). For instance, a simple pipeline is shown below:

\begin{lstlisting}[style=Scala,basicstyle={\small\ttfamily},literate=
  {=>}{{$\Rightarrow$}}2]
def test (xs : Rep[Array[Int]]) : Rep[Int] =
  Stream[Int](xs).filter(d => d % 2 == 0).fold(0)((a, x) => a + x)
\end{lstlisting}

\paragraph{Staging vs. compiler optimization.}

Our approach can certainly be cast as an optimization pass. The current
staging formulation is an excellent blueprint for such a compiler
rewrite. However, staging is both less intrusive and more
disciplined---with high-level type safety guarantees---than changing
the compiler. Furthermore, optimization is guaranteed only with full
control of the compiler. Such control is possible in a domain-specific
language, but not in a general-purpose language, such as the ones we
target. Relying on a general-purpose compiler for library optimization
is slippery. Although compiler analyses and transformations are
(usually) sound, they are almost never complete: a compiler generally
offers no guarantee that any optimization will be successfully
applied.\footnote{A recent quote by Ben Lippmeier, discussing RePa
  \cite{keller_repa_2010} on Haskell-Cafe, captures well the
  frustrations of advanced library writers: ``The compilation method
  [...] depends on the GHC simplifier acting in a certain way---yet
  there is no specification of exactly what the simplifier should do,
  and no easy way to check that it did what was expected other than
  eyeballing the intermediate code. We really need a different approach
  to program optimisation [...] The [current approach] is fine for
  general purpose code optimisation but not `compile by transformation'
  where we really depend on the transformations doing what they're
  supposed
  to.''---\url{http://mail.haskell.org/pipermail/haskell-cafe/2016-July/124324.html}}
There are many instances in which an innocuous change to a program
makes it much slower. The compiler is a black box, and the programmer
is forced into constantly reorganizing the program in unintuitive ways
in order to achieve the desired performance.
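The transparency argument can be made concrete: with staging, the
generated code is a first-class value that can be printed and
inspected. The following sketch (assuming BER MetaOCaml and its
|Codelib| module) shows how one can check that a pipeline really did
fuse into a flat loop, instead of eyeballing a compiler's intermediate
representation:
\begin{code}
(* A hand-built stand-in for the code a fused pipeline generates. *)
let fused : int code =
  .< let s = ref 0 in
     for i = 0 to 9 do s := !s + i * i done;
     !s >.

(* Print the generated code for direct inspection. *)
let () = Codelib.print_code Format.std_formatter fused
\end{code}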
\section{Conclusions}

We have presented the principles and the design of stream libraries
that support the widest set of operations from past libraries, while
also permitting the elimination of the entire abstraction overhead. The
design has been implemented as the \strymonas{} library, for OCaml and
for Scala/JVM. As confirmed experimentally, our library indeed offers
the highest, guaranteed, and portable performance. Underlying the
library is a representation of streams that captures the essence of
iteration in streaming pipelines. It recognizes which operators drive
the iteration, which contribute to filtering conditions, whether parts
of the stream have linearity properties, and more. This decomposition
of the essence of stream iteration is what allows us to perform very
aggressive optimization, via staging, regardless of the streaming
pipeline configuration.

\acks

We thank the anonymous reviewers of both the program committee and the
artifact evaluation committee for their constructive comments. We
gratefully acknowledge funding by the European Research Council under
grant 307334 (\textsc{Spade}).

\clearpage

\balance

\bibliographystyle{abbrvnat}