{"text":"\\section{Introduction}\n\nThe Space Telescope Imaging Spectrograph (STIS) (Kimble et al.\\ \\markcite{kimble97}1997;\nWoodgate et al.\\ \\markcite{woodgate98}1998; Walborn \\& Baum \\markcite{walborn98}1998) was used during the Hubble\nDeep Field -- South (HDF--S) (Williams et al.\\ \\markcite{williams99}1999) observations\nfor ultraviolet spectroscopy (Ferguson et al.\\ \\markcite{ferguson99}1999) and ultraviolet\nand optical imaging. In this paper we present the imaging data.\n\nThe Hubble Deep Field -- North (HDF--N) (Williams et al.\\ \\markcite{williams96}1996) is the\nbest studied field on the sky, with $>$1~Msec of Hubble Space\nTelescope (HST) observing time (including follow-up observations\nby Thompson et al.\\ \\markcite{thompson99}1999 and Dickinson et al.\\ \\markcite{dickinson99}1999), and countless\nobservations with ground-based telescopes (e.g., Cohen et al.\\ \\markcite{cohen96}1996;\nConnolly et al.\\ \\markcite{connolly97}1997). Results obtained to date include a measurement\nof the ultraviolet luminosity density of the universe at $z>2$\n(Madau et al.\\ \\markcite{madau96}1996), the morphological distribution of faint galaxies\n(Abraham et al.\\ \\markcite{abraham96}1996), galaxy-galaxy lensing (Hudson et al.\\ \\markcite{hudson98}1998), and\nhalo star counts (Elson, Santiago \\& Gilmore \\markcite{elson96}1996). See Ferguson \\markcite{ferguson98}(1998) and\nLivio, Fall \\& Madau \\markcite{livio98}(1998) for reviews and further references. The HDF--S\ndiffers from the HDF--N in several ways. First, the installation\nof STIS and NICMOS on HST in 1997 February has enabled parallel\nobservations with three cameras. In addition to the STIS data, the\nHDF--S dataset includes deep WFPC2 imaging (Casertano et al.\\ \\markcite{casertano99}1999),\ndeep near-infrared imaging (Fruchter et al.\\ \\markcite{fruchter99}1999), and wider-area\nflanking field observations (Lucas et al.\\ \\markcite{lucas99}1999). Second, the STIS\nobservations were centered on QSO J2233-606, at $z \\approx 2.24$,\nto obtain spectroscopy. Finally, the field was chosen in the southern\nHST continuous viewing zone in order to enable follow-up observations\nwith ground-based telescopes in the southern hemisphere.\n\nIn section 2 we describe the observations. In section 3 we describe\nthe techniques we used to reduce the CCD images. In section 4 we\ndescribe the reduction of the MAMA images. In section 5 we describe\nthe procedures used to catalog the images. In section 6 we present\nsome statistics of the data, including galaxy number counts and color\ndistributions. Our purpose in this paper is to produce a useful\nreference for detailed analysis of the STIS images. Thus for the\nmost part we refrain from model comparisons and speculation on the\nsignificance of the results. We expect the STIS images to be useful\nfor addressing a wide variety of astronomical topics, including\nthe sizes of the faintest galaxies, the ultraviolet-optical color\nevolution of galaxies, the number of faint stars and white dwarfs\nin the galactic halo, and the relation between absorption line\nsystems seen in the QSO spectrum and galaxies near to the line of\nsight. We also expect the observations to be useful for studying\nsources very close to the quasar, and perhaps for detecting the\nhost galaxy of the quasar. 
However, this may require a re-reduction of the images, as the quasar is saturated in all of the CCD exposures, and there are significant problems with scattered light and reflections.

\section{Description of the observations}

The images presented here were taken in 4 different modes: 50CCD (Figure~\ref{logclr}), F28X50LP (Figure~\ref{loglp}), NUVQTZ (Figure~\ref{lognuv}), and FUVQTZ (Figure~\ref{logfuv}). The 50CCD and F28X50LP modes used the Charge Coupled Device (CCD) detector. The 50CCD is a clear, filterless mode, while the F28X50LP mode uses a long-pass filter beginning at about 5500{\AA}. The FUVQTZ and NUVQTZ modes used the Multi-Anode Microchannel Array (MAMA) detectors as imagers with the quartz filter. The quartz filter was selected to reduce the sky noise due to airglow to levels below the dark noise. The effective areas of the 4 modes are plotted in Figure~\ref{filttrans}, along with a pseudo-$B_{430}$ bandpass constructed from the 50CCD and F28X50LP fluxes. The MAMA field of view is a square, $25\arcsec$ on a side, and was dithered so that the observations include data on a field approximately $30\arcsec$ square. The 50CCD field of view is a square $50\arcsec$ on a side, and the dithering extends the imaged region to a square $60\arcsec$ on a side. The F28X50LP is a long-pass filter that vignettes the field of view of the CCD to a rectangle $28\times 50\arcsec$. The observations were dithered to image the entire field of view of the 50CCD observations, although the exposure time per point on the sky is thus approximately half the total exposure time spent in this mode. The original pixel scale is $0.0244\arcsec$~pix$^{-1}$ for the MAMA images, and $0.05071\arcsec$~pix$^{-1}$ for the CCD images. The final combined images have a scale of $0.025\arcsec$~pix$^{-1}$ in all cases. Table~\ref{obstab} describes the observations. The filterless 50CCD observations correspond roughly to V+I, and reach a depth of 29.4 AB magnitudes at $10\sigma$ in a 0.2 square arcsecond aperture (320 drizzled pixels). This is the deepest exposure ever made in the UV-optical wavelength region.
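As a quick consistency check on the quoted numbers, the aperture area translates into pixels at the final drizzled scale as
\[
N_{\rm pix} = \frac{0.2~{\rm arcsec}^{2}}{\left(0.025~{\rm arcsec~pix}^{-1}\right)^{2}} = 320~{\rm pixels},
\]
in agreement with the figure given above.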
\subsection{Selection of the Field}

Selection of the field is described by Williams et al.\ \markcite{williams99}(1999). The QSO is at RA~=~$\rm 22^h33^m37.5883^s$, Dec~=~$-60^{\circ} 33\arcmin 29.128\arcsec$ (J2000). The errors on this position are estimated to be less than 40 milli-arcseconds (Zacharias et al.\ \markcite{zacharias98}1998). The position of the QSO on the 50CCD and F28X50LP images is x=1206.61, y=1206.32, and on the MAMA images is x=806.61, y=806.32.

\subsection{Test Data}

Test observations of the field were made in 1997 October. These data are not used in the present analysis. While the test exposures do not add significantly to the exposure time, they would provide a one-year baseline for proper motion studies of the brighter objects.

\subsection{Observing Plan}

The STIS observations were scheduled so that the CCD was used in the orbits that were impacted by the South Atlantic Anomaly, and the MAMAs were used in the clear orbits. The observations were made in the continuous viewing zone, and therefore were all made close to the limb of the Earth. The G430M spectroscopy, all of which was read-noise limited, was done during the day or bright part of the orbit, while the CCD imaging was all done during the night or dark part of the orbit. The MAMA imaging, done with the quartz filter, is insensitive to scattered Earth light, and was therefore done during bright time. A more detailed discussion of the scheduling issues is given by Williams et al.\ \markcite{williams99}(1999). The sky levels in the 50CCD images were approximately twice the square of the read noise, so these data are marginally sky noise limited. The MAMA images are limited by the dark noise.

\subsection{Dithering and Rotation}

The images were dithered in right ascension (RA) and declination (Dec) in order to sample the sky at the sub-pixel level. In addition, variations in rotation of about $\pm 1$ degree were used to provide additional dithering for the WFPC2 and NICMOS fields during the STIS spectroscopic observations. The STIS imaging observations were interspersed with the STIS spectroscopic observations; therefore, all of the images were dithered in rotation as well as RA and Dec.

\subsection{CR-SPLIT and pointing strategy}

The CCD exposures were split into 2 or 3 {\sc cr-split}s that each have the same RA, Dec, and rotation. This facilitates cosmic ray removal, although as discussed below, this was only used in the first iteration of the data reduction. The final 50CCD image is the combination of 193 exposures making up 67 {\sc cr-split} pointings. After standard pipeline processing (including bias and dark subtraction, and flatfielding), each exposure is given a {\sc flt} file extension, and the cosmic-ray rejected combination of each {\sc cr-split} is given a {\sc crj} file extension. The final F28X50LP image is the combination of 66 exposures making up 23 {\sc cr-split} pointings. The F28X50LP image included 12 pointings at the northern part of the field, one pointing at the middle of the field, and 10 pointings at the southern half of the field.

\subsection{PSF observations}

In order to allow for PSF subtraction of the QSO present in the center of the STIS 50CCD image, two SAO stars of about 10~mag were observed in the filterless 50CCD mode before and after the main HDF-S campaign. The stars are SAO 255267, a G2 star, and SAO 255271, an F8 star, respectively. These targets have spectral energy distributions in the STIS CCD sensitivity range similar to that of the QSO. For each star, 32 different {\sc cr-split} exposures were taken. The following strategy was used: (i) four different exposure times between 0.1 s and 5 s for each {\sc cr-split} frame, to ensure high signal-to-noise in the wings while not saturating the center; (ii) a four-position dither pattern with quarter-pixel sampling and {\sc cr-split} at each pointing with each exposure time; (iii) use of gain=4, to ensure no saturation in the A-to-D conversion. During the observations of SAO 255267, a failure in the guide star acquisition procedure caused the loss of its long-exposure (5~s) images. Gain=4 has a well-documented large-scale pattern noise that must be removed, e.g., by Fourier filtering, before a reliable PSF can be produced. These data are not discussed further in this paper, but are available from the HST archive for further analysis.
\section{Reduction of the CCD Images}

\subsection{Bias, Darks, Flats and Masks}

Standard processing of CCD images involves bias and dark subtraction, flatfielding, and masking of detector defects. The bias calibration file used for the HDF-S was constructed from 285 individual exposures, combined together with cosmic-ray and hot-pixel trail rejection.

The dark file was constructed from a ``superdark'' frame and a ``delta'' dark frame. The superdark is the cosmic-ray rejected combination of over 100 individual 1200~s dark exposures taken over the several months preceding the HDF-S campaign. The delta dark adds into this high S/N dark frame the pixels that are more than $5\sigma$ from the mean in the superdark-subtracted combination of 14 dark exposures taken during the HDF-S campaign. Calibration of the images with this dark frame removes most of the hot pixels but still leaves several hundred in each image.

An image mask was constructed to remove the remaining hot pixels and detector features. The individual cosmic-ray rejected HDF-S 50CCD exposures were averaged together without registration. The remaining hot pixels were identified with the IRAF\footnote[12]{IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.} {\sc cosmicrays} task. These pixels were included in a mask that was used to reject pixels during the {\sc drizzle} phase. Pixels that were more than $5\sigma$ below the mean sky background were also masked, as were the 30 worst hot pixel trails, and the unilluminated portions of the detector around the edges. Hot pixel trails run along columns and are caused by high dark current in a single pixel along the column.

Flatfielding was carried out by the IRAF/STSDAS {\sc calstis} pipeline using two reference files. The first, the {\sc pflat}, corrects for small-scale pixel-to-pixel sensitivity variations, but is smooth on large scales. This file was created from ground-test data, but comparisons to a preliminary version of the on-orbit flat revealed only a few places where the difference was more than 1\%. The CCD also shows a 5-10\% decrease in sensitivity near the edges due to vignetting. This illumination pattern was corrected by a low-order fit to a sky flat constructed from the flanking field observations.
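As an illustration of the delta-dark step described above (not the actual IRAF/STSDAS implementation; all names are ours), the selection of campaign hot pixels can be sketched in Python as:

\begin{verbatim}
import numpy as np

def delta_dark_hot_pixels(campaign_darks, superdark, nsigma=5.0):
    """Flag pixels deviating by more than nsigma from the mean in the
    superdark-subtracted combination of the campaign dark frames."""
    # Stand-in for the cosmic-ray-rejected combination: a pixelwise median.
    combined = np.median(campaign_darks, axis=0)
    residual = combined - superdark
    dev = residual - residual.mean()
    return np.abs(dev) > nsigma * residual.std()  # True = hot pixel

# campaign_darks: a (14, ny, nx) stack of the 1200 s darks taken during
# the campaign; the flagged pixels are added into the superdark.
\end{verbatim}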
\subsection{Shifts and rotations}

After pipeline processing, the CCD images were reduced using the IRAF/STSDAS package {\sc dither}, and test versions called {\sc xdither}, and {\sc xditherii}. These packages include the {\sc drizzle} software (Fruchter \& Hook \markcite{fruchterhook98}1998; Fruchter et al.\ \markcite{fruchteretal98}1998; Fruchter \markcite{fruchter98}1998). We used {\sc drizzle} version 1.2, dated 1998 February. The test versions differ from the previously released version primarily in their ability to remove cosmic rays from each individual exposure, and include tasks that have not yet been released.

The {\sc xditherii} package uses an iterative process to reject cosmic rays and determine the x and y sub-pixel shifts, which we summarize here. The standard pipeline rejects cosmic rays using each {\sc cr-split} of 2 or 3 images. The resulting {\sc crj} files are used as the first iteration: we determine the x and y shifts, and the files are median combined. The resulting preliminary combination is then shifted back into the frame of each of the original exposures ({\sc flt} files), and a new cosmic ray mask is made. By comparing each exposure to a high signal-to-noise combination of all of the data, we are less likely to leave cosmic ray residuals. The x and y shifts are determined at each iteration as well.

The rotations used in combining the data were determined from the {\sc roll\_avg} parameter in the jitter files, using the program {\sc bearing}. We did not seek to improve on these rotations via cross-correlation or any other method. We did use cross-correlation to determine the x and y shifts.

Determination of the sub-pixel x and y shifts was done with an iterative procedure. The first iteration was obtained by determining the centroid of the bright point source just west of the QSO, using the pipeline cosmic-ray rejected {\sc crj} files. We could not use cross-correlation in this first iteration, because the very bright star on the southern edge of the field was present on images taken at some, but not all, dither positions, which corrupted the cross-correlation. The source we used for centroiding was clearly visible on all of the 50CCD and F28X50LP frames.

Using these shifts (which were accurate to better than 1 pixel), we created a preliminary combined image. After pipeline processing and cosmic ray rejection, the {\sc drizzle} program was used to shift and rotate each {\sc crj} file onto individual outputs, without combining them. We then used the task {\sc imcombine} to create a median combination of the files. This preliminary image was then shifted and rotated back into the frame of each individual exposure using the {\sc xdither} task, {\sc blot}, ready for the next iteration of the cosmic-ray rejection procedure.

\subsection{Cosmic ray rejection}

In this iteration, we discarded the {\sc crj} files, and went back to the {\sc flt} files, in which each exposure had undergone bias and dark subtraction and flatfielding, but not cosmic-ray rejection. Each exposure was compared to the blotted image, and a cosmic-ray mask for that exposure was created from all of the pixels that differed (positively or negatively) by more than a given threshold from the blotted image. In the version 1.0 released 50CCD image, this threshold was set to be $5\sigma$. However, we believe that a small error in the sky level determination, introduced by the amplifier ringing correction discussed below, meant that our rejection was approximately at the $3\sigma$ level. The cosmic ray masks were multiplied by the hot pixel and cosmetic defect masks discussed above, and resulted in about 8\% of the pixels being masked as either cosmic rays or hot pixels. This is, perhaps, overly conservative. A less conservative cut (after correcting the error in the sky value) would result in slightly higher exposure time per pixel, and thus an improvement of 1-2\% in the signal to noise ratio.

This problem with the sky value was corrected in the F28X50LP image, and a $3\sigma$ level was used in the cosmic ray rejection.
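The per-exposure rejection criterion amounts to the following test, sketched here in Python under an assumed Poisson-plus-read-noise error model (the exact noise model is not specified above, and all names are ours):

\begin{verbatim}
import numpy as np

def cosmic_ray_mask(flt, blotted, readnoise, gain, nsigma=3.0):
    """Mask pixels that differ, positively or negatively, from the
    blotted reference image by more than nsigma."""
    # Expected per-pixel sigma (in DN), evaluated on the high-S/N
    # blotted image; assumes Poisson noise plus read noise.
    sigma = np.sqrt(np.maximum(blotted, 0.0) / gain + readnoise**2)
    return np.abs(flt - blotted) > nsigma * sigma  # True = rejected
\end{verbatim}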
\subsection{Amplifier ringing correction}

Horizontal features due to amplifier ringing, varying in pattern from image to image, were present in most of the STIS CCD frames. When a pixel saw a highly saturated signal, the bias level was depressed in the readout for the next few rows. The very high signals causing this ringing came from hot pixels and from the saturated QSO. The signal-to-noise ratio in the overscan region of the detector was not sufficient to remove these features well. We removed them with a procedure that subtracted on a row-by-row basis, from each individual image, the weighted average of the background as derived from the innermost 800 columns after masking and rejecting ``contaminated'' pixels. The masks included all visible sources, hot pixels, and cosmic-ray hits. The source mask was determined from the initial registered median-combined image, shifted back to the reference frame of each of the individual images. For the unmasked pixels in each row, the 50 highest and lowest were rejected and the mean of the remaining pixels was subtracted from each pixel in that row.

Heavily smoothing the images reveals very slight horizontal residuals that were not removed by the present choice of parameters in this process.

\subsection{Drizzling it all together}

The final image combination was done by drizzling the amplifier-ringing corrected pipeline products together onto a single output image. The exposures were weighted by the square of the exposure time, divided by the variance, which is (sky+rn$^2$+dark). The rotations were corrected so that North is in the +y direction, and the scale used was 0.492999 original CCD pixels per output pixel so that the final pixel scale is exactly 0.025 arcsec/pixel. For the 50CCD data we used {\sc pixfrac}=0.1, which is approximately equivalent to interleaving, where each input pixel falls on a single output pixel. For the F28X50LP data we used {\sc pixfrac}=0.6, as a smaller {\sc pixfrac} left visible holes in the final image. See Fruchter \& Hook \markcite{fruchterhook98}(1998) for a discussion of the meaning of the {\sc drizzle} parameters. The point spread functions of bright, non-saturated point sources are shown in Figure~\ref{psf}. The sources selected are the point source just to the west of the quasar in the 50CCD and F28X50LP images, and the QSO in the MAMA images.

The final image is given in counts per second, which can be converted to magnitudes on the {\sc stmag} system using the photometric zeropoints given by the {\sc photflam} parameter supplied in the image headers. We used the pipeline photometric zeropoints for the 50CCD and MAMA images, but revised the F28X50LP zeropoint by 0.1 magnitude based on a comparison of STIS photometry of the HST calibration field in $\omega$ Centauri with the ground-based photometry of Walker \markcite{walker94}(1994). The zeropoints in the AB magnitude system which we used are 26.386, 25.291, 23.887, and 21.539, for the 50CCD, F28X50LP, NUVQTZ and FUVQTZ modes respectively. We also supply the weight image, which is the sum of the weights falling on each pixel. For the F28X50LP image, we supply an exposure-time image, which is the total exposure time contributing to each pixel. We have multiplied this image by the area of the output pixels. The world coordinate system in the headers was corrected so that North is exactly in the +y direction, and the pixel scale is exactly 0.025 arcsec/pixel.
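For convenience, the conversion from a net count rate to an AB magnitude with these zeropoints can be written as follows (a minimal Python sketch, assuming the usual convention that the zeropoint is the magnitude of a source producing 1 count per second):

\begin{verbatim}
import numpy as np

# AB zeropoints quoted above (magnitude of a 1 count/s source).
ZEROPOINT_AB = {"50CCD": 26.386, "F28X50LP": 25.291,
                "NUVQTZ": 23.887, "FUVQTZ": 21.539}

def ab_magnitude(count_rate, mode):
    """Convert a net count rate (counts/s) to an AB magnitude."""
    return ZEROPOINT_AB[mode] - 2.5 * np.log10(count_rate)

# Example: a 50CCD source at 0.01 counts/s has m_AB ~ 31.4.
\end{verbatim}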
\subsection{Window reflection}

A window in the STIS CCD reflects slightly out-of-focus light from bright sources to the +x, $-$y direction (SW on the HDF-S images). The QSO is saturated in every 50CCD and F28X50LP exposure. The window reflection of the QSO is clearly visible in the F28X50LP image, but has been partially removed from the 50CCD image by the cosmic-ray rejection procedure. We wish to emphasize that it has only been partially removed, and there are remaining residuals. These residuals should not be mistaken for galaxies near the QSO, nor should they be mistaken for the host galaxy of the QSO. There is additional reflected light from the QSO (and from the bright star at the southern edge) evident in the images. We believe that the version 1.0 released images are not appropriate for searching for objects very close to or underlying the QSO, and that such a search would require re-processing the raw data with particular attention paid to the window reflection, other reflected light, and to the PSF of the QSO. The diffraction spikes of the QSO are smeared in the final images by the rotation of the individual exposures.

\section{Reduction of the MAMA Images}

The near-UV and far-UV images are respectively the weighted averages of 12 and 25 registered frames, with total exposure times of 22616~s and 52124~s. The MAMAs do not suffer from read noise or cosmic rays, and the quasar is not saturated in any of the UV data. However, the MAMAs do have calibration issues that must be addressed.

\subsection{Flats, Dark Counts, and Geometric Correction}

Prior to combination, all frames were processed with {\sc calstis}, including updated high-resolution pixel-to-pixel flat field files for both UV detectors. Geometric correction and rescaling were applied in the final combinations via the {\sc drizzle} program. The quartz filter changes the far-UV plate scale relative to that in the far-UV clear mode, and so the relative scale between MAMA imaging modes was determined from calibration images of the globular cluster NGC~6681.

Dark subtraction for the near-UV image was done by subtracting a scaled and flat-fielded dark image from each near-UV frame. The scale for the dark image was determined by inspection of the right-hand corners of the near-UV image, because these portions of the detector are occulted by the aperture mask and thus only register dark counts. For the far-UV images, {\sc calstis} removes a nearly flat dark frame, but the upper left-hand quadrant of STIS far-UV frames contains a residual glow in the dark current after nominal calibration. This glow varies from frame to frame and also appears to change shape slightly with time. To remove the residual dark current, the 16 far-UV frames with the highest count rates in the glow region were co-added without object registration but with individual object masks for the only two obvious objects in the far-UV frames (the quasar and the bright spiral NNE of the quasar). We then fit the result with a cubic spline to produce a glow profile. This profile was then scaled to the residual glow in each processed frame and subtracted prior to the final drizzle. Even during observations with a strong dark glow, where the dark count rate is an order of magnitude higher than normal, it is still very low, reaching rates no higher than $6\times 10^{-5}$~cts~s$^{-1}$~pix$^{-1}$. The glow thus appears as a higher concentration of ones in a sea of zeros, and the subtraction of a smooth glow profile from such quantized data over-subtracts from the zeros and under-subtracts from the ones. These effects are visible in the corrected data, even when smoothed out considerably in the final drizzled far-UV image. A low-resolution flat-field correction was applied to the far-UV frames after subtraction of the residual dark glow. The near-UV frames require no low-resolution flat field correction.
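The scaling step can be realized in several ways; one simple possibility is a least-squares amplitude fit within the glow region, sketched below in Python (our illustration only; the glow profile itself comes from the cubic-spline fit described above):

\begin{verbatim}
import numpy as np

def subtract_glow(frame, glow_profile, glow_region):
    """Scale the smooth glow profile to one far-UV frame and subtract
    it.  glow_region is a boolean mask selecting the glowing upper
    left-hand quadrant."""
    p = glow_profile[glow_region]
    # Least-squares amplitude of the profile in the glow region.
    scale = np.sum(frame[glow_region] * p) / np.sum(p * p)
    return frame - scale * glow_profile
\end{verbatim}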
\subsection{Shifts and Rotations}

Currently, geometrically corrected NUVQTZ and FUVQTZ frames do not have the same plate scale. Although geometric correction, rotation, and rescaling are applied during the final summation of individual calibrated frames, we first produced a set of calibrated frames that included these corrections, in order to accurately determine the relative shifts between them; this information was then used in conjunction with these corrections in the final drizzle. All near-UV and far-UV frames were geometrically corrected, rescaled to $0.025\arcsec$~pix$^{-1}$, and rotated to align North with the +y image axis. The roll angle specified in the jitter files was used to determine the relative roll between frames, and the mean difference between the planned roll and the jitter roll determined the absolute rotation. It is difficult to determine accurate roll angles from the images themselves, because of the scarcity of objects in the MAMA images. All near-UV and far-UV frames were then cross-correlated against one of the far-UV frames to provide shifts in the output coordinate system. Note that centroiding on the quasar in all far-UV and near-UV frames yields the same shifts as cross-correlation, within 0.1 pixel.

\subsection{Drizzling}

The calibrated frames were drizzled to a $1600 \times 1600$ pixel image, including the above corrections, rescaling, rotations, and shifts. We updated the world coordinate system in the image headers to exactly reflect the plate scale, alignment, and the astrometry of the quasar.

For both the far-UV and near-UV frames, individual pixels in each frame were weighted by the ratio of the exposure time squared to the dark count variance; this weights the exposures by (S/N)$^2$ for sources that are fainter than the background. Although the variations in the far-UV dark profile are smooth, the near-UV dark profile is an actual sum of dark frames, and so we smoothed the near-UV dark profile to determine the weights. With this weighting algorithm, pixels in the upper left-hand quadrant of a given far-UV image contribute less when the dark glow is high, and contribute more when it is low. The statistical errors (cts~s$^{-1}$) in the final drizzled image, for objects below the background (i.e., objects other than the quasar), are given by the reciprocal of the square root of the final drizzled weights file.

The drizzle ``dropsize'' ({\sc pixfrac}) was 0.6, thus improving the resolution over a {\sc pixfrac} of 1.0 (which would be equivalent to simple shift-and-add). The $1600 \times 1600$ pixel format contains all dither positions, and pixels outside of the dither pattern are at a count rate of zero. The pixel mask for each near-UV input frame included the occulted corners of the detector, a small number of hot pixels, and pixels with relatively low response (those with values $\le$ 0.75 in the high-resolution flat field). The pixel mask for each far-UV frame included hot pixels and all pixels flagged in the data quality file for that frame. When every input pixel drizzled onto a given output pixel was masked, that pixel was set to zero.
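In equation form, our summary of this weighting scheme is
\[
w_i = \frac{t_i^{2}}{\sigma_{{\rm dark},i}^{2}}\,, \qquad
\sigma_{\rm pix} = \Bigl(\sum_i w_i\Bigr)^{-1/2},
\]
where $t_i$ is the exposure time of frame $i$, $\sigma_{{\rm dark},i}^{2}$ is its dark count variance at that pixel, and $\sigma_{\rm pix}$ is the resulting statistical error (in cts~s$^{-1}$) for background-limited sources.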
\subsection{Window Reflection}

As with the CCD, a window reflection of the QSO appears in the near-UV image. This reflection appears $\approx 0.2\arcsec$ east of the QSO itself, and should not be considered an astronomical object.

\section{Cataloging}

\subsection{Cataloging the Optical Images}

The catalog was created using the {\sc SExtractor} package (Bertin \& Arnouts \markcite{bertin96}1996), revision of 1998 November 19, with some minor modifications that were made for this application. We used two separate runs of {\sc SExtractor}, and manually merged the resulting output catalogs. The first run used a set of parameters selected to optimize the detection of faint sources while not splitting what appeared to the eye to be substructure in a single object. We varied the parameters {\sc detect\_thresh}, {\sc deblend\_mincont}, {\sc back\_size}, and {\sc back\_filtersize}. We decided to use a detection threshold corresponding to an isophote of $0.65\sigma$. Sources were required to have a minimum area of 16 connected pixels above this threshold. Deblending was done when the flux in the fainter object was a minimum of 0.03 times the flux in the brighter object. The background map was constructed on a grid of 60 pixels, and subsequently filtered with a $3\times3$ median filter. Prior to cataloging, the image was convolved with a Gaussian kernel with full width at half maximum of 3.4 pixels. As discussed in Fruchter \& Hook \markcite{fruchterhook98}(1998), the effect of drizzling on the photometry is no more than 2\%, and in our well-sampled 50CCD field, the effect should be much less than this. This effect is smaller than other uncertainties in the photometry of extended objects.

The second run of {\sc SExtractor} was optimized to detect objects that lay near the QSO and the bright star at the southern edge of the image. These objects tend to be blended in with the point source at the lower detection threshold. Although our catalog might include galaxies that are associated with absorption lines in the quasar spectrum, we did not attempt to subtract the quasar light from the image, and so the catalog does not include objects within $3\arcsec$ of the quasar. The parameters used for the second run were the same as for the first run, with the exception of the {\sc detect\_thresh} parameter, which was set to $3.25\sigma$. This parameter not only sets the minimum flux level for detection, but also is the isophote used to determine the extent of the object. Several objects fall between the $0.65\sigma$ isophote and the $3.25\sigma$ isophote of the quasar. These are not deblended on the first {\sc SExtractor} run, because their fluxes are below 0.03 of the quasar flux, but are detected (without the need for deblending) on the second run. Objects near the quasar detected in the second run were added to the catalog generated by the first run, and flagged accordingly. Objects from the second run that were not confused with the quasar or the bright star were not included. The isophotal photometry of objects from the second run will not be consistent with the photometry of objects from the first run, because a different isophote was used. Eight objects were added to the catalog in this way.

In addition, 26 objects from the first {\sc SExtractor} run were clearly spurious due to the diffraction spikes of the QSO and the bright star. These were manually deleted from the catalog.
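For reference, the detection parameters described above can be collected as follows (our own summary, using {\sc SExtractor}'s configuration keywords; shown as a Python dictionary rather than an actual configuration file):

\begin{verbatim}
# First-run detection parameters, as described in the text.
SEXTRACTOR_RUN1 = {
    "DETECT_THRESH":   0.65,  # detection isophote, in sigma
    "DETECT_MINAREA":  16,    # min connected pixels above threshold
    "DEBLEND_MINCONT": 0.03,  # min flux ratio for deblending
    "BACK_SIZE":       60,    # background-map mesh size (pixels)
    "BACK_FILTERSIZE": 3,     # 3x3 median filter of background map
}
# The second run differed only in DETECT_THRESH = 3.25.
\end{verbatim}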
\n\nPhotometry of the F28X50LP image was done with {\\sc SExtractor}\nrun in two-image mode, in which the objects were detected and\nidentified on the 50CCD image, but the photometry was done in the\nother band. Isophotes and elliptical apertures are thus determined\nby the extent of the objects on the 50CCD images. Objects detected\nin the F28X50LP image but not on the 50CCD image are impossible,\nsince it has a lower throughput and shorter exposure time.\n\n\n\\subsection{Cataloging the Ultraviolet Images}\n\nFluxes in the UV were calculated outside of {\\sc SExtractor} because\nit had some problems handling quantized low-signal data. To determine\nthe gross flux, we summed the countrate within the area for each\nobject appearing in the {\\sc SExtractor} 50CCD segmentation map.\nWe then created an object mask by ``growing'' each object in the\nsegmentation map, using the IDL routine {\\sc dilate}, until it\nsubtended an area three times its original size. The resulting mask\nexcludes faint emission outside of the {\\sc SExtractor} isophotes\nfor all known objects in the field. The sky was calculated from\nthose exposed pixels within a $151 \\times 151$ pixel box centered\non each object, excluding pixels from the mask. The mean countrate\nper pixel in this sky region was used to determine the background\nfor each object (the median is not a useful quantity when dealing\nwith very low quantized signals), and thus the net flux. Statistical\nerrors per pixel for objects at or below the background are determined\nfrom the {\\sc drizzle} weight image raised to the $-1\/2$ power.\nThe statistical errors for the gross flux and sky flux were calculated\nusing this pixel map of statistical errors, and thus underestimate\nthe errors for bright objects such as the quasar.\n\nSome objects that are fully-exposed in the CCD image do not fall\nentirely within the exposed area of the MAMA images; for these\nobjects, we calculated the UV flux in the exposed area only, without\ncorrecting for the incomplete exposure, and flagged such objects\naccordingly. Objects were also flagged if the sky-box described\nabove did not contain at least 100 pixels (e.g., the quasar). For\nthese objects, we calculated a global sky value from a larger $685\n\\times 670$ pixel box, roughly centered in each MAMA image, that\nonly includes areas fully exposed in the dither pattern, and excludes\npixels in the object mask. When the net flux incorporates this\nglobal sky value, they have been flagged accordingly. We do not\nexpect or see any evidence for objects in the ultraviolet images\nthat do not appear on the 50CCD image.\n\n\\subsection{The Catalog}\n\nThe catalog is presented in Table~\\ref{cattab}, which contains a\nsubset of the photometry. The full catalogs are available on the World\nWide Web. For each object we report the following parameters:\n\n{\\bf ID:} The {\\sc SExtractor} identification number. The objects\nin the list have been sorted by right ascension (first) and\ndeclination (second), and thus are no longer in catalog order. In\naddition, the numbers are no longer continuous, as some of the object\nidentifications from the first {\\sc SExtractor} run have been\nremoved. Objects from the second {\\sc SExtractor} run have had\n10000 added to their identification numbers. These identification\nnumbers provide a cross-reference to the segmentation maps.\n\n{\\bf HDFS\\_J22r$-$60d:} The minutes and seconds of right ascension\nand declination, from which can be constructed the catalog name of\neach object. 
\subsection{The Catalog}

The catalog is presented in Table~\ref{cattab}, which contains a subset of the photometry. The full catalogs are available on the World Wide Web. For each object we report the following parameters:

{\bf ID:} The {\sc SExtractor} identification number. The objects in the list have been sorted by right ascension (first) and declination (second), and thus are no longer in catalog order. In addition, the numbers are no longer continuous, as some of the object identifications from the first {\sc SExtractor} run have been removed. Objects from the second {\sc SExtractor} run have had 10000 added to their identification numbers. These identification numbers provide a cross-reference to the segmentation maps.

{\bf HDFS\_J22r$-$60d:} The minutes and seconds of right ascension and declination, from which can be constructed the catalog name of each object. To these must be added 22 hours (RA) and $-60$ degrees (Dec). The first object in the catalog is HDFS\_J223333.69$-$603346.0, at RA 22$^h$ 33$^m$ 33.69$^s$, Dec $-60\deg$ 33$\arcmin$ 46.0$\arcsec$, epoch J2000.

{\bf x, y:} The x and y pixel positions of the object on the 50CCD and F28X50LP images. To get the x and y pixel positions on the MAMA images, subtract 400 from each.

{\bf $m_i$, $m_a$:} The isophotal ($m_i$) and ``mag\_auto'' ($m_a$) 50CCD magnitudes. The magnitudes are given in the AB system (Oke \markcite{oke71}1971), where $m = -2.5 \log f_{\nu} - 48.60$. The isophotal magnitude is determined from the sum of the counts within the detection isophote, set to be 0.65$\sigma$. The ``mag\_auto'' is an elliptical Kron \markcite{kron80}(1980) magnitude, determined from the sum of the counts in an elliptical aperture. The semi-major axis of this aperture is defined by 2.5 times the first moments of the flux distribution within an ellipse roughly twice the isophotal radius. However, if the aperture defined this way would have a semi-major axis smaller than 3.5 pixels, a 3.5 pixel value is used.

{\bf clr-lp:} Isophotal color, 50CCD$-$F28X50LP, in the AB magnitude system, as determined in the 50CCD isophote. {\sc SExtractor} was run in two-image mode to determine the photometry in the F28X50LP image, using the 50CCD image as the detection image. When the measured F28X50LP flux is less than $2\sigma$, we determine an upper limit to the color using the flux plus $2\sigma$ when the measured flux is positive, and $2\sigma$ when the measured flux is negative. We did not clip the 50CCD photometry.

{\bf nuv-clr, fuv-clr:} Isophotal colors, NUVQTZ$-$50CCD and FUVQTZ$-$50CCD, in the AB magnitude system. Photometry in the MAMA images is discussed above. Objects falling partially outside the MAMA images are flagged, and their photometry should not be considered reliable. When the measured flux is less than $2\sigma$, we give lower limits to the color as discussed above.

{\bf $r_h$:} The half-light radius of the object in the 50CCD image, given in milli-arcseconds. The half-light radius was determined by {\sc SExtractor} to be the radius at which a circular aperture contains half of the flux in the ``mag\_auto'' elliptical aperture.

{\bf s/g:} A star-galaxy classification parameter determined by a neural network within {\sc SExtractor}, and based upon the morphology of the object in the 50CCD images (see Bertin \& Arnouts \markcite{bertin96}1996 for a detailed description of the neural network). Classifications near 1.0 are more like a point source, while classifications near 0.0 are more extended.

{\bf flags:} Flags are explained in the table notes, and include both the flags returned by {\sc SExtractor}, and additional flags we added while constructing the catalog.

\section{Statistics}

In this section we present several statistics of the data compiled from the catalog.

\subsection{Source Counts}

The source counts in the 50CCD image are given in Table~\ref{nctable}, and plotted as a function of AB magnitude in Figure~\ref{numcts}, where they are compared with the galaxy counts from the HDF-N WFPC2 observations, as compiled by Williams et al.\ \markcite{williams96}(1996). The counts are compiled directly from the catalog, although all flagged regions have been excluded, so that the counts do not include objects near the edge of the image, or near the quasar.
We plot only the Poissonian errors, although there might be an additional component due to large-scale structure. We plot all sources, including both galaxies and stars, although we do not expect stars to contribute substantially to the source counts. No corrections for detection completeness have been made, and the counts continue to rise until fainter than 30~mag. The turnover fainter than this is due to incompleteness; the counts do not turn over for astrophysical or cosmological reasons.

\subsection{Colors and Dropouts}

The 50CCD$-$F28X50LP colors of objects in the STIS images are plotted as points in Figure~\ref{lpcolor}. Flagged objects have been removed from the sample. For comparison, we plot K-corrected (no-evolution) colors of the template galaxies in the Kinney et al.\ \markcite{kinney96}(1996) sample as a function of redshift on the left of the figure. The LP filter is able to distinguish blue galaxies at $z<2.5$, but becomes dominated by the noise for blue galaxies fainter than 28~mag, and loses color resolution at $z>3$, where the Ly$\alpha$ forest dominates the color in these bandpasses.

Because the F28X50LP bandpass is entirely contained within the 50CCD bandpass, it is possible, by subtracting an appropriately scaled version of the measured F28X50LP flux from the 50CCD flux, to construct a pseudo-$B_{430}$ measurement (see Figure~\ref{filttrans}). This pseudo-$B_{430}$ is combined with the NUVQTZ and the F28X50LP measurements in a color-color diagram in Figure~\ref{nuvdrop}. NUV drop-outs, indicated on this figure by the dashed line, are those objects with blue colors in the visible, but red colors in the UV, indicative of galaxies at $z \gtrsim 1.5$. These galaxies show blue colors characteristic of rapid star formation, while the red NUV to optical color is due to the Lyman break and absorption by the Ly$\alpha$ forest. The selection criteria were determined using the models of Madau et al.\ \markcite{madau96}(1996). In an inset to Figure~\ref{nuvdrop}, we plot the efficiency of these criteria for selecting galaxies of high redshift. The solid line is the fraction of all of the models that meet these criteria, while the dotted line is the fraction of those models with ages $<10^8$ years and foreground-screen extinction less than $A_B = 2$. These criteria are very efficient at finding young, star-forming galaxies at $1.5 < z < 3.5$. We have removed point sources from this figure, including the bright object just west of the QSO, which is extremely red and is likely to be an M star.

In Figure~\ref{fuvdrop} we give an FUV$-$NUV vs.\ NUV$-$50CCD color-color plot showing FUV dropouts, where the Lyman break is passing through the FUV bandpass at $z>0.6$. Of the 17 galaxies in the MAMA field with NUV magnitudes brighter than 28.4, only 3 have a clear signature of a Lyman break, indicating redshifts $z>0.6$.

\section{Conclusions}

We have presented the STIS imaging observations that were done as part of the Hubble Deep Field -- South campaign. The 50CCD image is the deepest image ever made in the UV-optical wavelength region, and achieves a point source resolution near the diffraction limit of the HST. We have presented the catalog, and some statistics of the data.
These data will be useful for the study of the number and sizes of faint galaxies, the UV-optical color evolution of galaxies, the number of faint stars and white dwarfs in the galactic halo, and the relation between absorption line systems seen in the QSO spectrum and galaxies near to the line of sight. Follow-up observations of the HDF-South fields by southern hemisphere ground-based telescopes, by HST, and by other space missions will also greatly increase our understanding of the processes of galaxy formation and evolution.

The images and catalog presented here are available on the World Wide Web at: $<$http://www.stsci.edu/ftp/science/hdfsouth/hdfs.html$>$.

\bigskip

\acknowledgments

We would like to thank all of the people who contributed to making the HDF-South campaign a success, including those who helped to identify a target quasar in the southern CVZ, and those who helped in planning and scheduling the observations. JPG, TMB, and HIT wish to acknowledge funding by the Space Telescope Imaging Spectrograph Investigation Definition Team through the National Optical Astronomy Observatories, and by the Goddard Space Flight Center. CLM and CMC wish to acknowledge support by NASA through Hubble Fellowship grants awarded by STScI.

\renewcommand{\bibsection}{\section{References}}
\bibpunct{(}{)}{;}{a}{,}{;}

\newcommand{\citepos}[1]{}
\renewcommand{\citepos}[1]{\citeauthor{#1}'s (\citeyear{#1})}

\begin{document}

\title{Yet Another Statistical Analysis of Bob Ross Paintings}

\author{Christopher Steven Marcum, PhD \\ National Institutes of Health}
\date{\today{}}
\maketitle

\abstract{
In this paper, we analyze a sample of clippings from paintings by the late artist Bob Ross. Previous work focused on the qualitative themes of his paintings \citep{hickey2014sawb}; here, we expand on that line of research by considering the colorspace and luminosity values as our data. Our results demonstrate the subtle aesthetics of the average Ross painting, the common variation shared by his paintings, and the structure of the relationships between each painting in our sample. We reveal, for the first time, renderings of the average paintings and introduce ``eigenross'' components to identify and evaluate shared variance. Additionally, all data and code are embedded in this document to encourage future research, and, in the spirit of Bob Ross, to teach others how to do so.
{\\{\linebreak \bf Keywords}: art, Bob Ross, paintings, linear subspace}
}

\doublespace

\section{Introduction}
Painter Bob Ross (1942--1995) was an icon of American art education. For over a decade, his television program, \emph{The Joy of Painting}, taught and entertained millions of Americans tuning in to the half-hour show on PBS. In the course of his art career, Ross is estimated to have painted upward of $25,000$ paintings. As a master of the Alexander ``wet-on-wet'' oil painting technique, Ross's iconic ``happy'' clouds, mountains, streams, and, of course, trees were laid down on canvas in a matter of seconds. Recently, his set of paintings became the subject of a popular blog post titled ``A Statistical Analysis of the Work of Bob Ross'' by Walt Hickey on Nate Silver's pop-stat site \href{http://fivethirtyeight.com/features/a-statistical-analysis-of-the-work-of-bob-ross/}{fivethirtyeight.com}. The post ``went viral'' on social media sites and is the inspiration for the current work.
\n\nAs a commendable digest, Hickey's approach to Ross's work should rightly be characterized as a statistical analysis of qualitative features, subjects, and themes of the paintings. He enumerates the frequency distribution of various aesthetic elements (trees, rocks, hills, etc) and, with great levity, calculates and describes the conditional probability that Ross paints one element given that he's already painted another More, Hickey delved into the voluminous library of episodes of \\emph{The Joy of Painting} to determine such statistical anomalies of the presence of humans (n=2), and chimneys (n=1), in Ross's work. Hickey also employed the k-means clustering algorithm on the data represented by these features to determine unique subsets of paintings. In this paper, we take a different approach that advances this prior work in a quantitative analysis of digital representations of Ross's paintings.\n\nIn particular, we consider a different set of research questions to be addressed by formal statistical analysis. First, what does the ``average'' Bob Ross painting look like? Here, we diverge from Hickey's approach, which describes the ``typical'' Ross painting, the features of which may be quantified using conditional probabilities as he did. Instead, we specifically want to describe average tendency in the red-green-blue colorspace of digital representations of Ross's work. That is, can we render a representation of the central tendency for Ross's work by averaging over his paintings? Second, we ask what is the common variation shared across Bob Ross's paintings? The answer to this question will shed light on the commonly held belief that Ross has a relatively standardized theme. Finally, we ask to what extent are separate Bob Ross paintings correlated with one another: what is the relationship \\emph{between} Bob Ross paintings? \n\nFinally, this manuscript serves a didactic purpose: it illustrates how to conduct comparative quantitative research with completely reproducible results from image data packaged with the manuscript. To this end, this paper was prepared using the Sweave interface between Latex and R on a Linux operating system and the code generating the analysis is embedded in both the compiled pdf and the source. Additionally, an archive of the data can be found deposited online at the journal.\n\n\\section{Data}\nThe first thirty images returned from a Google image search of \\linebreak ``bob+ross+paintings'' in large format (as defined by Google) and attributed to Bob Ross were downloaded on November 1st, 2014. The selection criteria also included that the \\emph{Ross} signature be present or, alternatively, that the painting could be verified against the catalog of known Bob Ross paintings from the \\emph{Joy of Painting} television program which is validated by comparison with the archive on \\href{http:\/\/www.tv.com\/shows\/the-joy-of-painting\/forums\/pictures-of-every-painting-15711-691600\/}{tv.com}. Each image was saved using a br\\%d.jp*g naming precedent, where br stands for bobross, \\%d is an integer from $1$ to $30$ and $*$ is either null or the letter $e$ depending on the image source, which is used by the following embedded script.\n\nNext, each image was cropped down to a square 550 by 550 pixels. This was automatically achieved by drawing the clipping window about the Cartesian center pixel (detected via gravity method) of each image using imagemagick and a Bourne-shell loop. 
The following snippet demonstrates the code, which was saved to a file called ConvertAllImages.sh. Thus, comparisons made between images are done on the pixel subset contained within this clipping window.

\begin{verbatim}
#!/bin/sh
# Crop each downloaded image (br1.jpg, br2.jpeg, ...) to its central
# 550x550 pixels, writing the results as 1.jpg, 2.jpg, ...
i=1
for image in br*.jp*g; do
    convert "$image" -gravity center -crop 550x550+0+0 "$i.jpg"
    i=$((i+1))
done
\end{verbatim}

The resulting library of clipped images can be found in Table~\ref{mat}. The intensity values of each image's three channels (red, green, and blue) were read on a pixel-by-pixel basis and stored as a three dimensional array (with dimensions $[550,550,3]$) using the ``jpeg'' library for \texttt{R}. These arrays contain the data used in the subsequent analysis. The images are sampled at 100 dpi.

\section{Analysis}

Our primary research question is, ``what does the average Bob Ross painting look like?'' To address this, we integrate over the respective channel indexes in the data array to obtain the mean value for each red, green, and blue channel among the 30 clippings. The resulting figure is rendered as a raster image and displayed in Figure~\ref{res1}. Despite considerable apparent variation in the supporting set of images, this average (while quite abstract in detail) clearly shows a preference gradient for blues and pinks at the top of the image and greens and browns at the bottom. One can also detect the faint gray outlines of the trunks and branches in Ross's ``Happy Trees'' rising from the bottom toward the top of the image, suggesting that Happy Trees have low alignment variance, as we'd expect. The lower and upper 95\% confidence range in these images only validates the consistency in the pixel-by-pixel averages; though, we note that the darker saturation of browns, grays, and blacks Ross used in his buildings is readily apparent in the lower bound image rendering.

Second, while the average is interesting, it fails to account for the variation in Ross's bucolic landscapes and cannot address the second research question: ``what is the common variation shared across Bob Ross's paintings?'' To examine this, we consider the eigenspace among the covariances across the dataset. We derive a set of orthonormal vectors that best describe the shared variance across the distribution of the data---we'll call these vectors the eigenrosses. Specifically, using simple principal components analysis, we project the data back onto the eigenrosses and compare shared variances in the highly loading eigenross components. This classic approach is used in a wide variety of data reduction and statistical applications including factor analysis, spectral analysis, and face-detection software (i.e., vis-a-vis eigenfaces methods). We conduct this for each of the three color channels as well as a flattened (monochromatic) version of the covariances---the flattened version is derived by averaging each channel with respect to each pixel, which is the Gaussian method of converting to grayscale used by most image manipulation software.

The proportion of shared variances from the eigenross components is plotted in Figure~\ref{res2}. Interestingly, despite the commonly held belief that Ross's paintings are relatively similar, the plot demonstrates considerable variation---it is not until the fifth eigenross component that 50\% of the common variation across the whole set is attained for all channels, including the flattened version (in gray). However, the first two components jointly explain more than 30\% of the variance.
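Although our analysis was carried out in R, the two computations just described can be sketched compactly in Python for readers who prefer it (an illustration with our own names, assuming the cropped clippings 1.jpg through 30.jpg produced by the script above):

\begin{verbatim}
import numpy as np
from PIL import Image

# (30, 550, 550, 3) array of RGB intensities scaled to [0, 1].
imgs = np.stack([np.asarray(Image.open("%d.jpg" % i), dtype=float) / 255
                 for i in range(1, 31)])

mean_painting = imgs.mean(axis=0)   # the "average" Bob Ross painting

def eigenross(channel):
    """Eigenross components of one channel: PCA over the 30 clippings,
    each flattened to a 550*550-dimensional observation."""
    X = imgs[..., channel].reshape(30, -1)
    X = X - X.mean(axis=0)
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    explained = s**2 / np.sum(s**2)   # proportion of shared variance
    return Vt.reshape(-1, 550, 550), explained
\end{verbatim}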
We can explore this further by rendering images from these components. Each of the first five eigenross components is displayed in Figure~\ref{eigenross}. As these are channel-independent orthonormal transformations, we cannot recombine the red, green, and blue eigenross components in a reasonable way; thus, the components are plotted separately. Lighter colored areas indicate lower pixel-by-pixel shared variance across the set of clippings in that channel. Examination of the first two components demonstrates a clear preference for upper sky and lower foreground shared variation in the red and blue channels, and a clear foreground preference in the greens. The remainder of the eigenross components appear to differentiate trees, mountains, and buildings in an order different for each channel.

Finally, to address the third question, ``what is the relationship between Bob Ross paintings,'' we simply examine the correlation structure using a network perspective. Specifically, we posit a relationship between two paintings if the product of red, green, and blue channel correlations is greater than or equal to $0.3^3$; in other words, two paintings are said to be related if the total correlation between them is moderate by classical standards \citep{cohen1988spab}. The resulting network is depicted in Figure~\ref{network}.

Nearly half (n=12) of the data are isolates in this network. The isolates include the three paintings that feature buildings. In the connected component, there are two clusters, bridged by the relationship between paintings 16 and 28. Paintings 1 and 16 appear to be the most central. These qualitative interpretations of the plot are confirmed quantitatively in Table~\ref{net}, which reports degree (number of connections), betweenness (number of non-redundant shortest-paths), and closeness (measure of being in the middle of the network) centrality scores for the paintings in the connected component of the network \citep{wasserman&faust1994snam}.
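The edge rule can be made concrete with the following sketch (again in Python rather than the R used for the analysis, and reusing the imgs array from the previous listing):

\begin{verbatim}
import numpy as np

def painting_network(imgs, threshold=0.3**3):
    """Boolean adjacency matrix linking paintings whose product of
    per-channel Pearson correlations is at least 0.3^3."""
    n = len(imgs)
    A = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.prod([np.corrcoef(imgs[i, ..., c].ravel(),
                                     imgs[j, ..., c].ravel())[0, 1]
                         for c in range(3)])
            A[i, j] = A[j, i] = (r >= threshold)
    return A
\end{verbatim}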
\section{Conclusion}
As we mark the $20^{th}$ anniversary of Bob Ross's death this year, the popularity of \textit{The Joy of Painting} is again on the rise. Various public tributes have surfaced recently, including a video mash-up set to music called ``Happy Trees'' by PBS on YouTube, the selection of a Bob Ross themed costume as the winner of the Smithsonian National Zoo's annual ``Night of the Living Zoo'' costume contest in 2013, a weekly ``Bob Ross Night'' in Missoula \url{http://www.zootownarts.org/bobross}, and now two statistical analyses (one qualitative and one quantitative) of his work.

In this paper, we've conducted yet another statistical analysis of Bob Ross paintings. Rather than examine the qualitative features of the subjects as prior work has done, we used the quantitative values of the colorspace in digital representations of the paintings as our data. We've demonstrated the subtle aesthetics of the average Bob Ross painting, the common variance shared by a set of Ross's work, and the structure of relationships within this sample using relatively simple quantitative techniques.

Finally, there are a number of limitations with the current approach. First, we consider only a very small sample (n=30) of the publicly available set of Ross's work, a corpus many thousands of paintings in number. Future work may wish to expand this dataset. Indeed, the techniques employed here may be used to identify specific works attributed to an artist in a larger corpus of mixed artists' work \citep{cutzu3&2005dpp}. Second, to standardize the dataset we take only the innermost central region; thus, we have a sample within a sample. It's possible that this strategy does not fully represent the variety of work encompassed by the whole lot---however, given the high spectral variability reported in our results, we believe this strategy is indeed representative of his works. Third, the source digital images that we collected from the internet were not rendered in a uniform manner; in an ideal data scenario, digital reproductions would have been obtained using the same high-resolution equipment in a controlled lighting environment. This gives rise to random errors in the channel values. However, these limitations did not impede evaluation of the research questions set forth here. As a proof-of-concept, the fact that our methods were able to recover discernible features in both the average and the variance of the set of paintings lends confidence to our results. We leave it to future research to further this approach by mitigating these limitations with a larger, more standardized, sample. Additionally, future research should take a mixed-methods approach to statistically combine the results of qualitative and quantitative analysis of a body of art in this manner.

\section{Introduction}
Few-shot learning~\cite{fei2006one, fink2005object, wu2010towards, lake2015human, wang2020generalizing} is the learning problem where a learner experiences only a limited number of examples as supervision. In computer vision, it has been most actively studied for the tasks of image classification~\cite{alexnet, vgg, resnet} and semantic segmentation~\cite{deeplab, fcn, deconvnet, unet}, among many others~\cite{han2021query, ojha2021few, ramon2021h3d, yue2021prototypical, zhao2021few}. Few-shot classification (FS-C\xspace) aims to classify a query image into target classes when a few support examples are given for each target class. Few-shot segmentation (FS-S\xspace) aims to segment out the target class regions on the query image in a similar setup. While being closely related to each other~\cite{li2009towards, yao2012describing, zhou2019collaborative}, these two few-shot learning problems have so far been treated individually. Furthermore, the conventional setups for the few-shot problems, FS-C\xspace and FS-S\xspace, are limited and do not reflect realistic scenarios; FS-C\xspace~\cite{matchingnet, ravi2016optimization, koch2015siamese} presumes that the query always contains one of the target classes in classification, while FS-S\xspace~\cite{shaban2017oslsm, rakelly2018cofcn, hu2019amcg} allows the presence of multiple classes but does not handle the absence of the target classes in segmentation. These respective limitations prevent few-shot learning from generalizing to and evaluating on more realistic cases in the wild. For example, when a query image without any target class is given as in \figref{fig:teaser}, FS-S\xspace learners typically segment out arbitrary salient objects in the query.

To address the aforementioned issues, we introduce the \textit{integrative task of few-shot classification and segmentation} (FS-CS\xspace) that combines the two few-shot learning problems into a multi-label and background-aware prediction problem.
Given a query image and a few-shot support set for target classes, FS-CS\xspace aims to \textit{identify the presence of each target class} and \textit{predict its foreground mask} from the query.
Unlike FS-C\xspace and FS-S\xspace, it presumes neither class exclusiveness in classification nor the presence of all the target classes in segmentation.

As a learning framework for FS-CS\xspace, we propose {\em integrative few-shot learning} (iFSL\xspace) that learns to construct shared foreground maps for both classification and segmentation.
It naturally combines multi-label classification and pixel-wise segmentation by sharing class-wise foreground maps and also allows learning with either class tags or segmentation annotations.
For effective iFSL\xspace, we design the {\em attentive squeeze network} (ASNet\xspace) that computes semantic correlation tensors between the query and the support image features and then transforms them into a foreground map by strided self-attention.
It generates reliable foreground maps for iFSL\xspace by leveraging multi-layer neural features~\cite{hpf, hsnet} and global self-attention~\cite{transformers, vit}.
In experiments, we demonstrate the efficacy of the iFSL\xspace framework on FS-CS\xspace and compare ASNet\xspace with recent methods~\cite{xie2021few, wu2021learning, hsnet, xie2021scale}.
Our method significantly improves over the other methods on FS-CS\xspace in terms of classification and segmentation accuracy and also outperforms the recent FS-S\xspace methods on the conventional FS-S\xspace task.
We also cross-validate the task transferability between the FS-C\xspace, FS-S\xspace, and FS-CS\xspace learners, and show that the FS-CS\xspace learners effectively generalize when transferred to the FS-C\xspace and FS-S\xspace tasks.

Our contribution is summarized as follows:
\begin{itemize}
 \item We introduce the task of \textit{integrative few-shot classification and segmentation} (FS-CS\xspace), which combines few-shot classification and few-shot segmentation into an integrative task by addressing their limitations.
 \item We propose the \textit{integrative few-shot learning framework} (iFSL\xspace), which learns to both classify and segment a query image using class-wise foreground maps.
 \item We design the \textit{attentive squeeze network} (ASNet\xspace),
 which squeezes semantic correlations into a foreground map for iFSL\xspace via strided global self-attention.
 \item We show in extensive experiments that the framework, iFSL\xspace, and the architecture, ASNet\xspace, are both effective, achieving a significant gain on FS-S\xspace as well as FS-CS\xspace.
\end{itemize}

\section{Related work}
\smallbreakparagraph{Few-shot classification (FS-C\xspace)}.
Recent FS-C\xspace methods typically learn neural networks that maximize positive class similarity and suppress the rest to predict the most probable class.
Such a similarity function is obtained by a) meta-learning embedding functions~\cite{koch2015siamese, matchingnet, protonet, allen2019infinite, tewam, can, feat, deepemd, renet}, b) meta-learning to optimize classifier weights~\cite{maml, leo, mtl}, or c) transfer learning~\cite{closer, rfs, dhillon2019baseline, wang2020few, negmargin, gidaris2018dynamic, qi2018low, rodriguez2020embedding}, all of which aim to generalize to unseen classes.
This conventional formulation is applicable only if a query image corresponds to exactly one class
among target classes.
To generalize FS-C\xspace to classify images associated with either none or multiple classes, we employ multi-label classification~\cite{mccallum1999multi, boutell2004learning, cole2021multi, lanchantin2021general, durand2019learning}.
While the conventional FS-C\xspace methods exploit the class uniqueness property through the categorical cross-entropy, we instead devise a learning framework that compares the binary relationship between the query and each support image individually and estimates the binary presence of the corresponding class.

\smallbreakparagraph{Few-shot semantic segmentation (FS-S\xspace)}.
A prevalent FS-S\xspace approach is learning to match a query feature map with a set of support feature embeddings that are obtained by collapsing spatial dimensions at the cost of spatial structures~\cite{wang2019panet, zhang2021self, siam2019amp, yang2021mining, liu2021anti, dong2018few, nguyen2019fwb, zhang2019canet, gairola2020simpropnet, yang2020pmm, liu2020ppnet}.
Recent methods~\cite{zhang2019pgnet, xie2021scale, xie2021few, wu2021learning, tian2020pfenet} focus on learning structural details by leveraging dense feature correlation tensors between the query and each support.
HSNet~\cite{hsnet} learns to squeeze a dense feature correlation tensor and transform it into a segmentation mask via high-dimensional convolutions that analyze the local correlation patterns on the correlation pyramid.
We inherit the idea of learning to squeeze correlations and improve it by analyzing the spatial context of the correlation with effective global self-attention~\cite{transformers}.
Note that several methods~\cite{yang2020brinet, wang2020dan, sun2021boosting} adopt non-local self-attention~\cite{nlsa} of the query-key-value interaction for FS-S\xspace, but they are distinct from ours in the sense that they learn to transform image feature maps, whereas our method focuses on transforming dense correlation maps via self-attention.

FS-S\xspace has been predominantly investigated as a one-way segmentation task, \ie, foreground or background segmentation, since the task is defined such that every target (support) class object appears in the query image, which makes it difficult to extend to a multi-class problem in the wild.
Consequently, most work on FS-S\xspace except for a few~\cite{wang2019panet, tian2020differentiable, liu2020ppnet, dong2018few} focuses on one-way segmentation; among the few, the work of \cite{tian2020differentiable, dong2018few} presents two-way segmentation results from person-and-object images only, \eg, images containing (person, dog) or (person, table).

\smallbreakparagraph{Comparison with other few-shot approaches.}
Here we contrast FS-CS\xspace with other loosely-related work on generalized few-shot learning.
Few-shot open-set classification~\cite{liu2020few} brings the idea of the open-set problem~\cite{scheirer2012toward, fei2016breaking} to few-shot classification by allowing a query to have no target classes.
This formulation enables background-aware classification as in FS-CS\xspace, whereas multi-label classification is not considered.
The work of \cite{tian2020generalized, ganea2021incremental} generalizes few-shot segmentation to a multi-class task, but it is mainly studied under the umbrella of incremental learning~\cite{mccloskey1989catastrophic, rebuffi2017icarl, castro2018end}.
The work of \cite{siam2020weakly} investigates weakly-supervised few-shot
segmentation using image-level vision and language supervision, while FS-CS\xspace uses visual supervision only.
The aforementioned tasks generalize few-shot learning but differ from FS-CS\xspace in the sense that FS-CS\xspace integrates two related problems under more general and relaxed constraints.
\section{Problem formulation}
\label{sec:ourtask}
Given a query image and a few support images for target classes, we aim to {\em identify the presence} of each class and {\em predict its foreground mask} from the query (\figref{fig:teaser}), which we call the {\em integrative few-shot classification and segmentation} (FS-CS\xspace).
Specifically, let us assume a target (support) class set $\mathcal{C}_{\text{s}}$ of $N$ classes and its support set $\mathcal{S}=\{ (\mathbf{x}_{\text{s}}^{(i)}, y_{\text{s}}^{(i)}) | y_{\text{s}}^{(i)} \in \mathcal{C}_{\text{s}} \}^{NK}_{i=1}$, which contains $K$ labeled instances for each of the $N$ classes, \ie, $N$-way $K$-shot~\cite{matchingnet, ravi2016optimization}.
The label $y_{\text{s}}^{(i)}$ is either a class tag (weak label) or a segmentation annotation (strong label).
For a given query image $\mathbf{x}$, we aim to identify the multi-hot class occurrence $\mathbf{y}_\text{C}$ and also predict the segmentation mask $\mathbf{Y}_\text{S}$ corresponding to the classes.
We assume the class set of the query $\mathcal{C}$ is a subset of the target class set, \ie, $\mathcal{C} \subseteq \mathcal{C}_{\text{s}}$, so it is also possible to obtain $\mathbf{y}_\text{C} = \varnothing$ and $\mathbf{Y}_\text{S} = \varnothing$.
This naturally generalizes the existing few-shot classification~\cite{matchingnet, protonet} and few-shot segmentation~\cite{shaban2017oslsm, rakelly2018cofcn}.

\smallbreakparagraph{Multi-label background-aware prediction.}
The conventional formulation of few-shot classification (FS-C\xspace)~\cite{matchingnet, protonet, maml} assigns the query to one class among the target classes exclusively and ignores the possibility of the query belonging to none or multiple target classes.
FS-CS\xspace tackles this limitation and generalizes FS-C\xspace to multi-label classification with a background class.
A multi-label few-shot classification learner $f_{\text{C}}$ compares semantic similarities between the query and the support images and estimates class-wise occurrences: $\hat{\mathbf{y}}_{\text{C}} = f_{\text{C}}(\mathbf{x}, \mathcal{S}; \theta)$, where $\hat{\mathbf{y}}_{\text{C}}$ is an $N$-dimensional multi-hot vector, each entry of which indicates the occurrence of the corresponding target class.
Note that the query is classified into a \textit{background} class if none of the target classes is detected.
Thanks to the relaxed constraint on the query, \ie, the query not always belonging to exactly one class, FS-CS\xspace is more general than FS-C\xspace.

\smallbreakparagraph{Integration of classification and segmentation.}
FS-CS\xspace integrates multi-label few-shot classification with semantic segmentation by adopting pixel-level spatial reasoning.
While the conventional FS-S\xspace~\cite{shaban2017oslsm, rakelly2018cofcn, wang2019panet, siam2019amp, nguyen2019fwb} assumes the query class set exactly matches the support class set, \ie, $\mathcal{C} = \mathcal{C}_{\text{s}}$, FS-CS\xspace relaxes the assumption such that the query class set can be a subset of the support class set, \ie, $\mathcal{C} \subseteq \mathcal{C}_{\text{s}}$.
In this generalized segmentation setup along with classification, an integrative FS-CS\xspace learner $f$ estimates both class-wise occurrences and their semantic segmentation maps: $\{ \hat{\mathbf{y}}_{\text{C}}, \hat{\mathbf{Y}}_{\text{S}}\} = f(\mathbf{x}, \mathcal{S} ; \theta)$.
This combined and generalized formulation gives a high degree of freedom to both of the few-shot learning tasks, which has been missing in the literature;
the integrative few-shot learner can predict multi-label background-aware class occurrences and segmentation maps simultaneously under a relaxed constraint on the few-shot episodes.

\section{Integrative Few-Shot Learning (iFSL\xspace)}
\label{sec:ourmethod}
To solve the FS-CS\xspace problem, we propose an effective learning framework, \textit{integrative few-shot learning (iFSL\xspace)}.
The iFSL\xspace framework is designed to jointly solve few-shot classification and few-shot segmentation using either class tag or segmentation supervision.
The integrative few-shot learner $f$ takes as input the query image $\mathbf{x}$ and the support set $\mathcal{S}$ and then produces as output the class-wise foreground maps.
The set of class-wise foreground maps $\mathcal{Y}$ is comprised of $\mathbf{Y}^{(n)} \in \mathbb{R}^{H \times W}$ for $N$ classes:
\begin{align}
\mathcal{Y} = f(\mathbf{x}, \mathcal{S}; \theta) = \{ \mathbf{Y}^{(n)}\}_{n=1}^{N},
\label{eq:foreground_mask}
\end{align}
where $H \times W$ denotes the size of each map and $\theta$ denotes the parameters to be meta-learned.
The output at each position on the map represents the probability of the position being on a foreground region of the corresponding class.

\smallbreakparagraph{Inference.}
iFSL\xspace infers both class-wise occurrences and segmentation masks on top of the set of foreground maps $\mathcal{Y}$.
For class-wise occurrences, a multi-hot vector $\hat{\mathbf{y}}_\text{C} \in \mathbb{R}^{N}$ is predicted via max pooling followed by thresholding:
\begin{align}
\hat{\mathbf{y}}_{\text{C}}^{(n)} &=
\begin{cases}
 1 \text{\; if \,} \max_{\mathbf{p} \in [H] \times [W]} \mathbf{Y}^{(n)}(\mathbf{p}) \geq \delta,\\
 0 \text{\; otherwise,}
\end{cases}
\label{eq:predict_class}
\end{align}
where $\mathbf{p}$ denotes a 2D position, $\delta$ is a threshold, and $[ k ]$ denotes the set of integers from 1 to $k$, \ie, $[k] = \{1,\! 2,\! \cdots,\! k\}$.
We find that inference with average pooling is prone to miss small objects in multi-label classification and thus choose to use max pooling.
A detected class at any position on the spatial map signifies the presence of the class.
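To make the class-occurrence inference concrete, the following minimal sketch (PyTorch-style Python; the function name and tensor layout are our own illustrative assumptions, not the paper's code) implements \eqref{eq:predict_class} by max-pooling each class-wise foreground map and thresholding at $\delta$:
\begin{verbatim}
import torch

def infer_class_occurrence(fg_maps: torch.Tensor,
                           delta: float = 0.5) -> torch.Tensor:
    # fg_maps: (N, H, W) class-wise foreground probabilities Y^(n).
    # Returns an (N,) multi-hot vector: 1 if the maximal response of
    # class n reaches the threshold delta, 0 otherwise.
    max_response = fg_maps.flatten(start_dim=1).max(dim=1).values
    return (max_response >= delta).long()
\end{verbatim}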
For segmentation, a segmentation probability tensor $\mathbf{Y}_{\text{S}} \in \mathbb{R}^{ H \times W \times (N + 1)}$ is derived from the class-wise foreground maps.
As the background class is not given as a separate support, we estimate the background map in the context of the given supports; we combine $N$ class-wise background maps into \textit{an episodic background map} on the fly.
Specifically, we compute the episodic background map $\mathbf{Y}_{\text{bg}}$ by averaging the probability maps of not being foreground and then concatenate it with the class-wise foreground maps to obtain a segmentation probability tensor $\mathbf{Y}_{\text{S}}$:
\begin{align}
\mathbf{Y}_{\text{bg}} &= \frac{1}{N} \sum_{n=1}^{N}(\mathbf{1} - \mathbf{Y}^{(n)}), \label{eq:bg_mask}\\
\mathbf{Y}_{\text{S}} &= \left[ \mathbf{Y} || \mathbf{Y}_{\text{bg}} \right] \in \mathbb{R}^{ H \times W \times (N + 1)}.
\label{eq:merge_mask}
\end{align}
The final segmentation mask $\hat{\mathbf{Y}}_\text{S} \in \mathbb{R}^{H \times W}$ is obtained by computing the most probable class label for each position:
\begin{equation}
\hat{\mathbf{Y}}_\text{S} = \argmax_{n \in [N + 1]} \mathbf{Y}_{\text{S}}.
\label{eq:predict_mask}
\end{equation}

\smallbreakparagraph{Learning objective.}
The iFSL\xspace framework allows a learner to be trained with class tags via the classification loss or with segmentation annotations via the segmentation loss.
The classification loss is formulated as the average binary cross-entropy between the spatially average-pooled class scores and its ground-truth class label:
\begin{align}
 \mathcal{L}_{\text{C}} &= -\frac{1}{N}\sum_{n=1}^{N}\mathbf{y}_{\text{gt}}^{(n)} \log \frac{1}{H W}\sum_{\scriptscriptstyle{\mathbf{p} \in [H] \! \times \! [W]}} \mathbf{Y}^{(n)}(\mathbf{p}),
 \label{eq:loss_cls_final}
\end{align}
where $\mathbf{y}_\text{gt}$ denotes the multi-hot encoded ground-truth class.

The segmentation loss is formulated as the average cross-entropy between the class distribution at each individual position and its ground-truth segmentation annotation:
\begin{align}
 \mathcal{L}_{\text{S}} &= - \frac{1}{(N + 1)}\frac{1}{H W} \sum_{n=1}^{N + 1}\sum_{\scriptscriptstyle{\mathbf{p} \in [H] \! \times \! [W]}} \mathbf{Y}_\text{gt}^{(n)}(\mathbf{p}) \log \mathbf{Y}_{\text{S}}^{(n)}(\mathbf{p}),
 \label{eq:loss_seg_final}
\end{align} where $\mathbf{Y}_\text{gt}$ denotes the ground-truth segmentation mask.

These two losses share a similar goal of classification but differ in whether to classify each \textit{image} or each \textit{pixel}.
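As an illustration of how the two objectives operate on the same class-wise foreground maps, here is a minimal sketch in PyTorch-style Python (names, tensor layouts, and the $\epsilon$ term for numerical stability are our own assumptions, not the paper's code) of \eqref{eq:loss_cls_final} and \eqref{eq:loss_seg_final}, including the episodic background map of \eqref{eq:bg_mask} and \eqref{eq:merge_mask}:
\begin{verbatim}
import torch

def ifsl_loss(fg_maps, y_gt=None, Y_gt=None, eps=1e-8):
    # fg_maps: (N, H, W) class-wise foreground probabilities.
    # y_gt: (N,) multi-hot class tags as floats (weak labels).
    # Y_gt: (N+1, H, W) one-hot mask, background last (strong labels).
    if Y_gt is not None:  # segmentation loss
        bg = (1.0 - fg_maps).mean(dim=0, keepdim=True)  # episodic bg map
        Y_s = torch.cat([fg_maps, bg], dim=0)           # (N+1, H, W)
        return -(Y_gt * torch.log(Y_s + eps)).mean()
    # classification loss: spatial average pooling of class scores
    pooled = fg_maps.mean(dim=(1, 2))                   # (N,)
    return -(y_gt * torch.log(pooled + eps)).mean()
\end{verbatim}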
Either of them is thus chosen according to the given level of supervision for training.

\section{Model architecture}
In this section, we present the \textit{Attentive Squeeze Network} (ASNet\xspace), an effective model for iFSL\xspace.
The main building block of ASNet\xspace is the attentive squeeze layer (AS\xspace layer), which is a high-order self-attention layer that takes a correlation tensor and returns another level of correlational representation.
ASNet\xspace takes as input the pyramidal cross-correlation tensors between the query and support image feature pyramids, \ie, a hypercorrelation~\cite{hsnet}.
The pyramidal correlations are fed to pyramidal AS\xspace layers that gradually squeeze the spatial dimensions of the support image, and the pyramidal outputs are merged into a final foreground map in a bottom-up pathway~\cite{hsnet, fpn, refinenet}.
\figureref{fig:overview}~illustrates the overall process of ASNet\xspace.
The $N$-way output maps are computed in parallel and collected to prepare the class-wise foreground maps in \eqref{eq:foreground_mask} for iFSL\xspace.

\subsection{Attentive Squeeze Network (ASNet)}
\smallbreakparagraph{Hypercorrelation construction.}
Our method first constructs $NK$ hypercorrelations~\cite{hsnet} between the query and each of the $NK$ support images and then learns to generate a foreground segmentation mask \wrt each support input.
To prepare the input hypercorrelations, an episode, \ie, a query and a support set, is enumerated into a paired list of the query, a support image, and a support label: $\{(\mathbf{x}, (\mathbf{x}_{\text{s}}^{(i)}, y_{\text{s}}^{(i)})) \}_{i=1}^{NK}$.
The input image is fed to stacked convolutional layers in a CNN and its mid- to high-level output feature maps are collected to build a feature pyramid $\{\mathbf{F}^{(l)}\}_{l=1}^{L}$, where $l$ denotes the index of a unit layer, \eg, a $\texttt{Bottleneck}$ layer in ResNet50~\cite{resnet}.
We then compute the cosine similarity between each pair of feature maps from the query and support feature pyramids to obtain 4D correlation tensors of size $H_{\text{q}}^{(l)} \times W_{\text{q}}^{(l)} \times H_{\text{s}}^{(l)} \times W_{\text{s}}^{(l)}$, followed by ReLU~\cite{relu}:
\begin{equation}
\mathbf{C}^{(l)}(\mathbf{p}_{\text{q}}, \mathbf{p}_{\text{s}}) = \mathrm{ReLU}\left( \frac{\mathbf{F}_{\text{q}}^{(l)}(\mathbf{p}_{\text{q}}) \cdot \mathbf{F}_{\text{s}}^{(l)}(\mathbf{p}_{\text{s}})}{||\mathbf{F}_{\text{q}}^{(l)}(\mathbf{p}_{\text{q}})|| \, ||\mathbf{F}_{\text{s}}^{(l)}(\mathbf{p}_{\text{s}})||} \right).
\end{equation}
These $L$ correlation tensors are grouped into $P$ groups of identical spatial sizes, and then the tensors in each group are concatenated along a new channel dimension to build a hypercorrelation pyramid: $\{\mathbf{C}^{(p)} | \mathbf{C}^{(p)} \in \mathbb{R}^{ H_{\text{q}}^{(p)} \times W_{\text{q}}^{(p)} \times H_{\text{s}}^{(p)} \times W_{\text{s}}^{(p)} \times C_{\text{in}}^{(p)}} \}_{p=1}^{P}$ such that the channel size $C_{\text{in}}^{(p)}$ corresponds to the number of concatenated tensors in the $p_{\text{th}}$ group.
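The per-layer correlation construction can be sketched compactly; the following PyTorch-style snippet (the function name and tensor layout are illustrative assumptions) builds one 4D cosine-similarity tensor of the pyramid:
\begin{verbatim}
import torch
import torch.nn.functional as F

def correlation_4d(feat_q: torch.Tensor,
                   feat_s: torch.Tensor) -> torch.Tensor:
    # feat_q: (C, Hq, Wq) query features; feat_s: (C, Hs, Ws) support
    # features from the same pyramid layer l.
    q = F.normalize(feat_q.flatten(1), dim=0)  # (C, Hq*Wq), unit columns
    s = F.normalize(feat_s.flatten(1), dim=0)  # (C, Hs*Ws), unit columns
    corr = torch.relu(q.t() @ s)               # cosine similarity + ReLU
    return corr.view(*feat_q.shape[1:], *feat_s.shape[1:])  # 4D tensor
\end{verbatim}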
We denote the first two spatial dimensions of the correlation tensor, \ie, $\mathbb{R}^{H_{\text{q}} \times W_{\text{q}}}$, as query dimensions, and the last two spatial dimensions, \ie, $\mathbb{R}^{H_{\text{s}} \times W_{\text{s}}}$, as support dimensions hereafter.

\smallbreakparagraph{Attentive squeeze layer (AS\xspace layer).}
The AS\xspace layer transforms a correlation tensor to another with a smaller support dimension via strided self-attention.
The tensor is recast as a matrix with each element representing a support pattern.
Given a correlation tensor $\mathbf{C} \in \mathbb{R}^{H_{\text{q}} \times W_{\text{q}} \times H_{\text{s}} \times W_{\text{s}} \times C_{\text{in}} }$ in a hypercorrelation pyramid, we start by reshaping the correlation tensor as a block matrix of size $H_{\text{q}} \times W_{\text{q}}$ with each element corresponding to a correlation tensor $\mathbf{C}(\mathbf{x}_{\text{q}}) \in \mathbb{R}^{H_{\text{s}} \times W_{\text{s}} \times C_{\text{in}} }$ on the query position $\mathbf{x}_{\text{q}}$ such that
\begin{equation}
\mathbf{C}^{\text{block}} =
\begin{bmatrix}
 \mathbf{C}((1, 1)) & \hdots & \mathbf{C}((1, W_{\text{q}})) \\
 \vdots & \ddots & \vdots \\
 \mathbf{C}((H_{\text{q}}, 1)) & \hdots & \mathbf{C}((H_{\text{q}}, W_{\text{q}}))
\end{bmatrix}
\label{eq:block_matrix}
.
\end{equation}
We call each element a \textit{support correlation tensor}.
The goal of an AS\xspace layer is to analyze the global context of each support correlation tensor and extract a correlational representation with a reduced support dimension while the query dimension is preserved:
$\mathbb{R}^{ H_{\text{q}} \times W_{\text{q}} \times H_{\text{s}} \times W_{\text{s}} \times C_{\text{in}} } \rightarrow \mathbb{R}^{H_{\text{q}} \times W_{\text{q}} \times H_{\text{s}}' \times W_{\text{s}}' \times C_{\text{out}} }$, where $H_{\text{s}}' \leq H_{\text{s}}$ and $W_{\text{s}}' \leq W_{\text{s}}$.
To learn a holistic pattern of each support correlation, we adopt the global self-attention mechanism~\cite{transformers} for correlational feature transform.
The self-attention weights are shared across all query positions and processed in parallel.

Let us denote a support correlation tensor on any query position $\mathbf{x}_{\text{q}}$ by $\mathbf{C}^{{\text{s}}} = \mathbf{C}^{\text{block}}(\mathbf{x}_{\text{q}})$ for notational brevity, as all positions share the following computation.
The self-attention computation starts by embedding a support correlation tensor $\mathbf{C}^{{\text{s}}}$ to a target
\footnote{
In this section, we adopt the term ``target'' to indicate the ``query'' embedding in the context of self-attention learning~\cite{transformers, vit, lsa, pvt, lrnet} to avoid homonymous confusion with the ``query'' image to be segmented.
}
, key, value triplet:
$
\mathbf{T}, \mathbf{K}, \mathbf{V} \in \mathbb{R}^{ H_{\text{s}}' \times W_{\text{s}}' \times C_{\text{hd}}},
$
using three convolutions whose strides, greater than or equal to one, govern the output size.
The resultant target and key correlational representations, $\mathbf{T}$ and $\mathbf{K}$, are then used to compute an attention context.
The attention context is computed as the following matrix multiplication:
\begin{equation}
\mathbf{A} = \mathbf{T} \mathbf{K}^{\top} \in \mathbb{R}^{H_{\text{s}}' \times W_{\text{s}}' \times H_{\text{s}}'
\\times W_{\\text{s}}'}.\n\\label{eq:sixd_attn}\n\\end{equation}\nNext, the attention context is normalized by softmax such that the votes on key foreground positions sum to one with masking attention by the support mask annotation $\\mathbf{Y}_{\\text{s}}$ if available to attend more on the foreground region:\n\\begin{align*}\n\\bar{\\mathbf{A}}(\\mathbf{p}_{\\text{t}}, \\mathbf{p}_{\\text{k}}) = \\frac{\\exp \\left( \\mathbf{A}(\\mathbf{p}_{\\text{t}}, \\mathbf{p}_{\\text{k}}) \\mathbf{Y}_{\\text{s}}(\\mathbf{p}_{\\text{k}}) \\right)}{\\sum_{\\mathbf{p}_{\\text{k}}'}\\exp \\left( \\mathbf{A}(\\mathbf{p}_{\\text{t}}, \\mathbf{p}'_{\\text{k}}) \\mathbf{Y}_{\\text{s}}(\\mathbf{p}'_{\\text{k}}) \\right)},\n\\end{align*}\n\\begin{equation}\n\\text{where\\, } \\mathbf{Y}_{\\text{s}}(\\mathbf{p}_{\\text{k}}) =\n\\begin{cases}\n 1 \\quad \\; \\; \\text{if} \\; \\mathbf{p}_{\\text{k}} \\in [H'_{\\text{s}}] \\times [W'_{\\text{s}}] \\text{ is foreground,} \\\\ \n - \\infty \\; \\text{otherwise.}\n\\end{cases} \n\\label{eq:masked_attn}\n\\end{equation}\nThe masked attention context $\\bar{\\mathbf{A}}$ is then used to aggregate\nthe value embedding $\\mathbf{V}$:\n\\begin{equation}\n\\mathbf{C}^{{\\text{s}}}_{{\\text{A}}} = \\bar{\\mathbf{A}} \\mathbf{V} \\in \\mathbb{R}^{ H_{\\text{s}}' \\times W_{\\text{s}}' \\times C_{\\text{hd}} }.\n\\label{eq:agg}\n\\end{equation}\nThe attended representation is fed to an MLP layer, $\\mathbf{W}_{\\text{o}}$, and added to the input.\nIn case the input and output dimensions mismatch, the input is optionally fed to a convolutional layer, $\\mathbf{W}_{\\text{I}}$.\nThe addition is followed by an activation layer $\\varphi(\\cdot)$ consisting of a group normalization~\\cite{groupnorm} and a ReLU activation~\\cite{relu}:\n\\begin{equation}\n\\mathbf{C}^{{\\text{s}}}_{\\text{o}} = \\varphi(\\mathbf{W}_{\\text{o}}(\\mathbf{C}^{{\\text{s}}}_{\\text{A}}) + \\mathbf{W}_{\\text{I}}(\\mathbf{C}^{{\\text{s}}}) ) \\in \\mathbb{R}^{ H_{\\text{s}}' \\times W_{\\text{s}}' \\times C_{\\text{out}}}.\n\\end{equation}\nThe output is then fed to another MLP that concludes a unit operation of an AS\\xspace layer:\n\\begin{equation}\n\\mathbf{C}^{{\\text{s}} \\prime} = \\varphi(\\mathbf{W}_\\text{FF}(\\mathbf{C}^{{\\text{s}}}_{\\text{o}}) + \\mathbf{C}^{{\\text{s}}}_{\\text{o}}) \\in \\mathbb{R}^{ H_{\\text{s}}' \\times W_{\\text{s}}' \\times C_{\\text{out}}},\n\\end{equation}\nwhich is embedded to the corresponding query position in the block matrix of \\eqref{eq:block_matrix}.\nNote that the AS\\xspace layer can be stacked to progressively reduce the size of support correlation tensor, $H_{\\text{s}}' \\times W_{\\text{s}}'$, to a smaller size.\nThe overall pipeline of AS\\xspace layer is illustrated in the supplementary material.\n\n\n\n\\smallbreakparagraph{Multi-layer fusion.}\nThe pyramid correlational representations are merged from the coarsest to the finest level by cascading a pair-wise operation of the following three steps: upsampling, addition, and non-linear transform.\nWe first bi-linearly upsample the bottommost correlational representation to the query spatial dimension of its adjacent earlier one and then add the two representations to obtain a mixed one $\\mathbf{C}^{\\text{mix}}$.\nThe mixed representation is fed to two sequential AS\\xspace layers until it becomes a point feature of size $H_{\\text{s}}' = W_{\\text{s}}'=1$, which is fed to the subsequent pyramidal fusion.\nThe output from the earliest fusion layer is fed to a convolutional decoder, which 
consists of interleaved 2D convolutions and bi-linear upsampling that map the $C$-dimensional channel to 2 (foreground and background) and the output spatial size to the input query image size.
See \figref{fig:overview} for the overall process of multi-layer fusion.

\smallbreakparagraph{Class-wise foreground map computation.}
The $K$-shot output foreground activation maps are averaged to produce a mask prediction for each class.
The averaged output map is normalized by softmax over the two channels of the binary segmentation map to obtain a foreground probability prediction $\mathbf{Y}^{(n)} \in \mathbb{R}^{H \times W}$.

\section{Experiments}
In this section we report our experimental results on the FS-CS\xspace task, the iFSL\xspace framework, and the ASNet\xspace architecture, after briefly describing implementation details and evaluation benchmarks.
See the supplementary material for additional results, analyses, and experimental details.

\subsection{Experimental setups}
\smallbreakparagraph{Experimental settings.}
We select ResNet50 and ResNet101~\cite{resnet} pretrained on ImageNet~\cite{russakovsky2015imagenet} as our backbone networks for a fair comparison with other methods and freeze the backbones during training as in previous work~\cite{tian2020pfenet, hsnet}.
We train models using the Adam~\cite{adam} optimizer with learning rates of $10^{-4}$ and $10^{-3}$ for the classification loss and the segmentation loss, respectively.
We train all models with 1-way 1-shot training episodes and evaluate the models on arbitrary $N$-way $K$-shot episodes.
For inferring class occurrences, we use a threshold $\delta = 0.5$.
All the AS\xspace layers are implemented as multi-head attention with 8 heads.
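To make the attention computation inside an AS\xspace layer concrete, the following minimal sketch (PyTorch-style Python; single-head, with flattened support dimensions and illustrative names, not the paper's code) expresses the masked softmax of \eqref{eq:masked_attn} in its standard masked-fill form, followed by the value aggregation of \eqref{eq:agg}:
\begin{verbatim}
import torch

def masked_attention(T, K_, V, fg_mask):
    # T, K_, V: (Ls, C) embeddings with Ls = Hs' * Ws' flattened
    # support positions; fg_mask: (Ls,) boolean foreground mask.
    A = T @ K_.t()                             # attention context
    A = A.masked_fill(~fg_mask.unsqueeze(0), float('-inf'))
    A_bar = torch.softmax(A, dim=-1)           # foreground votes sum to 1
    return A_bar @ V                           # aggregated values
\end{verbatim}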
The number of correlation pyramid levels is set to $P=3$.

\smallbreakparagraph{Dataset.}
For the new task of FS-CS\xspace, we construct a benchmark adopting the images and splits from the two widely-used FS-S\xspace datasets, Pascal-5$^{i}$~\cite{shaban2017oslsm, pascal} and COCO-20$^{i}$~\cite{nguyen2019fwb, coco}, which are also suitable for multi-label classification~\cite{wang2017multi}.
Within each fold, we construct an episode by randomly sampling a query and an $N$-way $K$-shot support set that annotates the query with $N$-way class labels and an $(N\!+\!1)$-way segmentation mask in the context of the support set.
For the FS-S\xspace task, we also use Pascal-5$^{i}$ and COCO-20$^{i}$ following the same data splits as \cite{shaban2017oslsm} and \cite{nguyen2019fwb}, respectively.

\smallbreakparagraph{Evaluation.}
Each dataset is split into four mutually disjoint class sets and cross-validated.
As the multi-label classification evaluation metric, we use the 0/1 exact ratio $\mathrm{ER} = \mathbbm{1} [\mathbf{y}_{\text{gt}} = \mathbf{y}_{\text{C}} ]$~\cite{durand2019learning}.
In the supplementary material, we also report the results in accuracy $\mathrm{acc} = \frac{1}{N} \sum_n \mathbbm{1} [\mathbf{y}_{\text{gt}}^{(n)} = \mathbf{y}^{(n)}_{\text{C}} ]$.
For segmentation, we use the mean IoU $\mathrm{mIoU} = \frac{1}{C}\sum_c \mathrm{IoU}_c$~\cite{shaban2017oslsm, wang2019panet}, where $\mathrm{IoU}_c$ denotes the IoU value of the $c_{\text{th}}$ class.

\begin{figure}[t!]
	\centering
	\small
 \includegraphics[width=0.97\linewidth]{fig/qual2way.pdf}
 \vspace{-1mm}
	\caption{2-way 1-shot segmentation results of ASNet\xspace on FS-CS\xspace.
	The examples cover all three cases of $\mathcal{C} = \varnothing$, $\mathcal{C} \subset \mathcal{C}_{\text{s}}$, and $\mathcal{C} = \mathcal{C}_{\text{s}}$.
	The images are resized to square shape for visualization.
 \vspace{-4mm}
}
\label{fig:qual2way}
\end{figure}

\subsection{Experimental evaluation of iFSL\xspace on FS-CS\xspace}
In this subsection, we investigate the iFSL\xspace learning framework on the FS-CS\xspace task.
All ablation studies are conducted using ResNet50 on Pascal-$5^{i}$ and evaluated in the 1-way 1-shot setup unless specified otherwise.
Note that it is difficult to present a fair and direct comparison between the conventional FS-C\xspace and our few-shot classification task since FS-C\xspace is always evaluated on single-label classification benchmarks~\cite{matchingnet, tieredimagenet, cifarfs, metaoptnet, metadataset}, whereas our task is evaluated on multi-label benchmarks~\cite{pascal, coco}, which are irreducible to a single-label one in general.

\smallbreakparagraph{Effectiveness of iFSL\xspace on FS-CS\xspace.}
We validate the iFSL\xspace framework on FS-CS\xspace and also compare the performance of ASNet\xspace with those of three recent state-of-the-art methods, PANet~\cite{wang2019panet}, PFENet~\cite{tian2020pfenet}, and HSNet~\cite{hsnet}, which were originally proposed for the conventional FS-S\xspace task; all the models are trained by iFSL\xspace for a fair comparison.
Note that we exclude the background merging step (Eqs.~\ref{eq:bg_mask} and \ref{eq:merge_mask}) for PANet as its own pipeline produces a multi-class output including background.
Tables \ref{table:ipa} and \ref{table:ico} validate the iFSL\xspace framework on the FS-CS\xspace task quantitatively, where our ASNet\xspace surpasses the other methods on both 1-way and 2-way setups in terms of few-shot classification as well as segmentation performance.
The 2-way segmentation results are also qualitatively demonstrated in \figref{fig:qual2way}, which visualizes all inclusion relations between a query class set $\mathcal{C}$ and a target (support) class set $\mathcal{C}_{\text{s}}$ in a 2-way setup.

\begin{figure}[t!]
	\centering
	\small
 \includegraphics[width=0.9\linewidth]{fig/multiway_legend.pdf}
 \vspace{-3mm}
 \includegraphics[width=0.49\linewidth]{fig/multiway_er_bar.pdf} \hfill
 \includegraphics[width=0.49\linewidth]{fig/multiway_miou_bar.pdf}
 \caption{$N$-way 1-shot FS-CS\xspace performance comparison of four methods by varying $N$ from 1 to 5.
	}
\label{fig:multiway}
\end{figure}

\smallbreakparagraph{Weakly-supervised iFSL\xspace.}
The iFSL\xspace framework is versatile across levels of supervision: weak labels (class tags) or strong labels (segmentation masks).
Assuming weak labels are available but strong labels are not, ASNet\xspace is trainable with the classification learning objective of iFSL\xspace (Eq.~\ref{eq:loss_cls_final}); its results are presented as $\text{ASNet\xspace}_{\text{w}}$ in \tableref{table:ipa}.
$\text{ASNet\xspace}_{\text{w}}$ performs on par with ASNet\xspace in terms of classification ER (82.0\% \textit{vs.}~84.9\% on 1-way 1-shot), but performs poorly on the segmentation task (15.0\% \textit{vs.}~52.3\% on 1-way 1-shot).
The result implies that class tag labels are sufficient for a model to recognize class occurrences, but are too weak to endow the model with precise spatial recognition ability.

\smallbreakparagraph{Multi-class scalability of FS-CS\xspace.}
In addition, FS-CS\xspace is extensible to a multi-class problem with arbitrary numbers of classes, while FS-S\xspace is not as flexible as FS-CS\xspace in the wild.
Figure~\ref{fig:multiway} compares the FS-CS\xspace performances of four methods by varying the number of classes $N$ from one to five; the rest of the experimental setup is the same as in \tableref{table:ipa}.
Our ASNet\xspace shows consistently better performance than the other methods on FS-CS\xspace across varying numbers of classes.

\smallbreakparagraph{Robustness of FS-CS\xspace against task transfer.}
We evaluate the transferability between FS-CS\xspace, FS-C\xspace, and FS-S\xspace by training a model on one task and evaluating it on another.
The results are compared in \figref{fig:task_transfer}, in which `$\text{FS-S\xspace} \rightarrow$ FS-CS\xspace' represents the result where the model trained on the $\text{FS-S\xspace}$ task (with the guarantee of support class presence) is evaluated on the FS-CS\xspace setup.
To construct training and validation splits for FS-C\xspace or FS-S\xspace, we sample episodes that satisfy the constraint of support class occurrences~\footnote{We sample 2-way 1-shot episodes having a single positive class for training on FS-C\xspace or evaluating on FS-C\xspace.
We collect 1-way 1-shot episodes sampled from the same class for training on FS-S\xspace or
evaluating on FS-S\\xspace.}. \nFor training FS-C\\xspace models, we use the class tag supervision only. \nAll the other settings are fixed the same, \\eg, we use ASNet\\xspace with ResNet50 and Pascal-$i^5$.\n\nThe results show that FS-CS\\xspace learners, \\ie, models trained on FS-CS\\xspace, are transferable to the two conventional few-shot learning tasks and yet overcome their shortcomings.\nThe transferability between few-shot classification tasks, \\ie, FS-C\\xspace and $\\text{FS-CS\\xspace}_{\\text{w}}$, is presented in \\figref{fig:task_transfer}(a).\nOn this setup, the $\\text{FS-CS\\xspace}_{\\text{w}}$ learner is evaluated by predicting a higher class response between the two classes, although it is trained using the multi-label classification objective.\nThe FS-CS\\xspace learner closely competes with the FS-C\\xspace learner on FS-C\\xspace in terms of classification accuracy.\nIn contrast, the task transfer between segmentation tasks, FS-S\\xspace and FS-CS\\xspace, results in asymmetric outcomes as shown in \\figref{fig:task_transfer}(b)~and~(c).\nThe FS-CS\\xspace learner shows relatively small performance drop on FS-S\\xspace, however, the FS-S\\xspace learner suffers a severe performance drop on FS-CS\\xspace.\nQualitative examples in \\figref{fig:teaser} demonstrate that the FS-S\\xspace learner predicts a vast number of false-positive pixels and results in poor performances.\nIn contrast, the FS-CS\\xspace learner successfully distinguishes the region of interest by analyzing the semantic relevance of the query objects between the support set.\n\n\\input{table\/task_transfer}\n\\input{table\/f1pa}\n\\input{table\/f2co}\n\n\\input{table\/ablation}\n\n\n\n\n\n\n\n\n\n\n\n\n\\subsection{Comparison with recent FS-S\\xspace methods on FS-S\\xspace}\nTables~\\ref{table:fpa} and \\ref{table:fco} compare the results of the recent few-shot semantic segmentation methods and ASNet\\xspace on the conventional FS-S\\xspace task.\nAll model performances in the tables are taken from corresponding papers, and the numbers of learnable parameters are either taken from papers or counted from their official sources of implementation.\nFor a fair comparison with each other, some methods that incorporate extra unlabeled images~\\cite{yang2021mining, liu2020ppnet} are reported as their model performances measured in the absence of the extra data.\nNote that ASNet\\xspace in Tables~\\ref{table:fpa} and \\ref{table:fco} is trained and evaluated following the FS-S\\xspace setup, not the proposed FS-CS\\xspace one.\n\nThe results verify that ASNet\\xspace outperforms the existing methods including the most recent ones~\\cite{wu2021learning, xie2021few, yang2021mining}.\nEspecially, the methods that cast few-shot segmentation as the task of correlation feature transform, ASNet and HSNet~\\cite{hsnet}, outperform other visual feature transform methods, indicating that learning correlations is beneficial for both FS-CS\\xspace and FS-S\\xspace.\nNote that ASNet\\xspace is the most lightweight among others as ASNet\\xspace processes correlation features that have smaller channel dimensions, \\eg, at most 128, than visual features, \\eg, at most 2048 in ResNet50.\n\n\n\n\n\n\n\\subsection{Analyses on the model architecture}\nWe perform ablation studies on the model architecture to reveal the benefit of each component.\nWe replace the global self-attention in the ASNet\\xspace layer with the local self-attention~\\cite{lsa} to see the effect of the global 
self-attention~(\\tableref{table:ablation_architecture}\\texttt{a}).\nThe local self-attention variant is compatible with the global ASNet\\xspace in terms of the classification exact ratio but degrades the segmentation mIoU significantly, signifying the importance of the learning the global context of feature correlations.\nNext, we ablate the attention masking in \\eqref{eq:masked_attn}, which verifies that the attention masking prior is effective~(\\tableref{table:ablation_architecture}\\texttt{b}).\nLastly, we replace the multi-layer fusion path with spatial average pooling over the support dimensions followed by element-wise addition~(\\tableref{table:ablation_architecture}\\texttt{c}), and the result indicates that it is crucial to fuse outputs from the multi-layer correlations to precisely estimate class occurrence and segmentation masks.\n\n\n\n\\section{Discussion}\nWe have introduced the integrative task of few-shot classification and segmentation (FS-CS\\xspace) that generalizes two existing few-shot learning problems.\nOur proposed integrative few-shot learning (iFSL\\xspace) framework is shown to be effective on FS-CS\\xspace, in addition, our proposed attentive squeeze network (ASNet\\xspace) outperforms recent state-of-the-art methods on both FS-CS\\xspace and FS-S\\xspace.\nThe iFSL\\xspace design allows a model to learn either with weak or strong labels, that being said, \nlearning our method with weak labels achieves low segmentation performances. \nThis result opens a future direction of effectively boosting the segmentation performance leveraging weak labels in the absence of strong labels for FS-CS\\xspace.\n\n\n\n\\section{Supplementary Material}\n\n\n\\subsection{Detailed model architecture}\nThe comprehensive configuration of attentive squeeze network is summarized in \\tableref{table:asnet}, and its building block, attentive squeeze layer, is depicted in \\figref{fig:aslayer}.\nThe channel sizes of the input correlation $\\{C_\\text{in}^{(1)}, C_\\text{in}^{(2)}, C_\\text{in}^{(3)}\\}$ corresponds to $\\{4, 6, 3\\}$, $\\{4, 23, 3\\}$, $\\{3, 3, 1\\}$ for ResNet50~\\cite{resnet}, ResNet101, VGG-16~\\cite{vgg}, respectively.\n\n\n\\subsection{Implementation details}\nOur framework is implemented on PyTorch~\\cite{pytorch} using the PyTorch Lightning~\\cite{falcon2019pytorch} framework.\nTo reproduce the existing methods, we heavily borrow publicly available code bases.~\\footnote{PANet~\\cite{wang2019panet}: \\url{https:\/\/github.com\/kaixin96\/PANet} \\\\ PFENet~\\cite{tian2020pfenet}: \\url{https:\/\/github.com\/dvlab-research\/PFENet} \\\\\nHSNet~\\cite{hsnet}: \\url{https:\/\/github.com\/juhongm999\/hsnet}}\nWe set the officially provided hyper-parameters for each method while sharing generic techniques for all the methods, \\eg, excluding images of small support objects for support sets or switching the role between the query and the support during training. 
NVIDIA GeForce RTX 2080 Ti GPUs or NVIDIA TITAN Xp GPUs are used in all experiments; we train models using two GPUs on Pascal-$5^{i}$~\cite{shaban2017oslsm} and four GPUs on COCO-$20^{i}$~\cite{nguyen2019fwb}.
Model training is halted either when it reaches the maximum $500^{\text{th}}$ epoch or when it starts to overfit.
We resize input images to $400 \times 400$ without any data augmentation strategies during both training and testing time for all methods.
For segmentation evaluation, we recover the two-channel output foreground map to the original image size by bilinear interpolation.
Pascal-$5^{i}$ and COCO-$20^{i}$ are derived from Pascal Visual Object Classes 2012~\cite{pascal} and Microsoft Common Objects in Context 2014~\cite{coco}, respectively.
To construct episodes from the datasets, we sample support sets such that one of the query classes is included in the support set with probability 0.5 to balance the ratio of background episodes across arbitrary benchmarks.

\begin{figure}[t!]
	\centering
	\small
 \includegraphics[width=\linewidth]{supp/fig/aslayer.pdf}

 \vspace{-5mm}
	\caption{Illustration of the proposed attentive squeeze layer (Sec.~5.1. in the main paper).
	The shape of each output tensor is denoted next to arrows.
 }
\label{fig:aslayer}
\end{figure}

\newcommand{\plh}{%
 {\ooalign{$\phantom{0}$\cr\hidewidth$\scriptstyle\times$\cr}}%
}

\begin{table}[t!]
 \centering
 \setlength{\extrarowheight}{-2.0pt}
 \scalebox{0.70}{
 \begin{tabular}{ccc}
 \toprule
 $p = 1$ & $p = 2$ & $p = 3$ \\
 $\frac{H}{8}\plh\frac{H}{8}\plh\frac{H}{8}\plh\frac{H}{8}\plh C_\text{in}^{(1)}$ & $\frac{H}{16}\plh\frac{H}{16}\plh\frac{H}{16}\plh\frac{H}{16}\plh C_\text{in}^{(2)}$ & $\frac{H}{32}\plh\frac{H}{32}\plh\frac{H}{32}\plh\frac{H}{32}\plh C_\text{in}^{(3)}$ \\
 \midrule
 {[pool support dims. by half]} & & \\
 $\mathrm{AS}(C_\text{in}^{(1)}\rightarrow32, 5, 4, 2)$ & $\mathrm{AS}(C_\text{in}^{(2)}\rightarrow32, 5, 4, 2)$ & $\mathrm{AS}(C_\text{in}^{(3)}\rightarrow32, 5, 4, 2)$ \\
 $\mathrm{AS}(32\rightarrow128, 5, 4, 2)$ & $\mathrm{AS}(32\rightarrow128, 5, 4, 2)$ & $\mathrm{AS}(32\rightarrow128, 3, 2, 1)$ \\
 {[pool support dims.]} & & \\
 {[upsample query dims.]} & & \\
 \multicolumn{2}{c}{ {[element-wise addition]} } & \\
 \multicolumn{2}{c}{ $\mathrm{AS}(128\rightarrow128, 1, 1, 0)$ } & \\
 \multicolumn{2}{c}{ $\mathrm{AS}(128\rightarrow128, 2, 1, 0)$ } & \\
 \multicolumn{2}{c}{ {[upsample query dims.]} } \\
 & \multicolumn{2}{c}{ {[element-wise addition]} } \\
 & \multicolumn{2}{c}{ $\mathrm{AS}(128\rightarrow128, 1, 1, 0)$ } \\
 & \multicolumn{2}{c}{ $\mathrm{AS}(128\rightarrow128, 2, 1, 0)$ } \\
 & \multicolumn{2}{c}{ $\mathrm{conv}(128\rightarrow128, 3, 1, 1)$ } \\
 & \multicolumn{2}{c}{ $\mathrm{ReLU}$ } \\
 & \multicolumn{2}{c}{ $\mathrm{conv}(128\rightarrow64, 3, 1, 1)$ } \\
 & \multicolumn{2}{c}{ $\mathrm{ReLU}$ } \\
 & \multicolumn{2}{c}{ {[upsample query dims.]} } \\
 & \multicolumn{2}{c}{ $\mathrm{conv}(64\rightarrow64, 3, 1, 1)$ } \\
 & \multicolumn{2}{c}{ $\mathrm{ReLU}$ } \\
 & \multicolumn{2}{c}{ $\mathrm{conv}(64\rightarrow2, 3, 1, 1)$ } \\
 & \multicolumn{2}{c}{ {[interpolate query dims.
to the input size]} } \\
 \bottomrule
 \end{tabular}
 }
 \caption{Comprehensive configuration of ASNet\xspace, whose overview is illustrated in Fig.~2 in the main paper.
 The top row of the table gives the input shapes, and the detailed architecture of the model follows below.
 $\mathrm{AS}(C_{\text{in}} \rightarrow C_{\text{out}}, k, s, p)$ denotes an AS\xspace layer of kernel size ($k$), stride ($s$), and padding size ($p$) for the convolutional embedding with the input channel ($C_{\text{in}}$) and output channel~($C_{\text{out}}$). \label{table:asnet}
 }
\end{table}

\subsection{Further analyses}
In this subsection we provide supplementary analyses of the iFSL\xspace framework and ASNet\xspace.
All experimental results are obtained using ResNet50 on Pascal-$5^{i}$ and evaluated with 1-way 1-shot episodes unless specified otherwise.

\begin{figure}[t!]
	\centering
	\small
 \includegraphics[width=0.9\linewidth]{supp/fig/cls_thr.pdf}
	\caption{Classification threshold $\delta$ and its effects.
 }
\label{fig:cls_thr}
\vspace{-3mm}
\end{figure}

\smallbreakparagraph{The classification occurrence threshold $\delta$.}
Equation~2 in the main paper describes the process of detecting object classes on the shared foreground map by thresholding the highest foreground probability response on each foreground map.
As the foreground probability is bounded between 0 and 1, we set the threshold $\delta=0.5$ for simplicity.
A high threshold value makes the classifier reject weak probability responses as class presences.
Figure~\ref{fig:cls_thr} shows the classification 0/1 exact ratios obtained by varying the threshold; the classification performance peaks around $\delta=0.5$ and $0.6$.
Fine-tuning the threshold for the best classification performance is not the focus of this work, so we opt for the most straightforward threshold $\delta=0.5$ in all experiments.

\begin{figure}[t!]
	\centering
	\small
	\includegraphics[width=0.99\linewidth]{supp/fig/bg.pdf}
	\caption{Visualization of the background map for each support class and the merged background map $\mathbf{Y}_{\text{bg}}$ for the query.
	High background response is illustrated in black.
	}
\label{fig:bg}
\vspace{-3mm}
\end{figure}

\smallbreakparagraph{Visualization of $\mathbf{Y}_{\text{bg}}$.}
Figure~\ref{fig:bg} visually demonstrates the background merging step of iFSL\xspace in Eq.~(3) in the main paper.
The background maps are taken from 2-way 1-shot episodes.
The background response of the negative class is relatively even, \ie, the majority of pixels are estimated as background, whereas the background response of the positive class contributes most to the merged background map.

\input{supp/table/clsplussegloss}

\input{supp/table/ipa101}

\smallbreakparagraph{iFSL\xspace with weak labels, strong labels, and both.}
\tableref{table:clsplussegloss} compares the FS-CS\xspace performances of three ASNets, each trained with the classification loss (Eq.~(6) in the main paper), the segmentation loss (Eq.~(7) in the main paper), or both.
The loss is chosen according to the level of supervision available for the support sets: classification tags (weak labels) or segmentation annotations (strong labels).
We observe that neither the classification nor the segmentation performances deviate significantly between $\mathcal{L}_{\text{S}}$ and $\mathcal{L}_{\text{C}} + \mathcal{L}_{\text{S}}$;
their performances are
not even 0.3\\%p different.\nAs a segmentation annotation is a dense form of classification tags, thus the classification loss influences insignificantly when the segmentation loss is used for training.\nWe thus choose to use the segmentation loss exclusively in the presence of segmentation annotations.\n\n\n\n\n\\subsection{Additional results}\nHere we provide several extra experimental results that are omitted in the main paper due to the lack of space.\nThe contents include results using other backbone networks, another evaluation metric, and $K$ shots where $K > 1$.\n\n\n\n\\smallbreakparagraph{iFSL\\xspace on FS-CS\\xspace using ResNet101.}\nWe include the FS-CS\\xspace results of the iFSL\\xspace framework on Pascal-$5^{i}$ using ResNet101~\\cite{resnet} in \\tableref{table:ipa101}, which is missing in the main paper due to the page limit. \nAll other experimental setups are matched with those of Table~1 in the main paper except for the backbone network.\nASNet\\xspace also shows greater performances than the previous methods on both classification and segmentation tasks with another backbone.\n\n\n\n\\input{supp\/table\/acc}\n\\smallbreakparagraph{FS-CS\\xspace classification metrics: 0\/1 exact ratio and accuracy.}\n\\tableref{table:ipa50_acc} presents the results of two classification evaluation metrics of FS-CS\\xspace: 0\/1 exact ratio~\\cite{durand2019learning} and classification accuracy.\nThe classification accuracy metric takes the average of correct predictions for each class for each query, while 0\/1 exact ratio measures the binary correctness for all classes for each query, thus being stricter than the accuracy; the exact formulations are in Sec.~6.1. of the main paper.\nASNet\\xspace shows higher classification performance in both classification metrics than others.\n\n\\input{supp\/table\/ipa50_5shot}\n\\input{supp\/table\/ipa101_5shot}\n\n\n\n\\smallbreakparagraph{iFSL\\xspace on 5-shot FS-CS\\xspace.}\nTables~\\ref{table:ipa50_5shot} and \\ref{table:ipa101_5shot} compares four different methods on the 1-way 5-shot and 2-way 5-shot FS-CS\\xspace setups, which are missing in the main paper due to the page limit.\nAll other experimental setups are matched with those of Table~1 in the main paper except for the number of support samples for each class, \\ie, varying $K$ shots.\nASNet\\xspace also outperforms other methods on the multi-shot setups.\n\n\n\n\n\n\n\n\\input{supp\/table\/fpavgg}\n\n\n\\input{supp\/table\/multiway}\n\n\n\\smallbreakparagraph{ASNet\\xspace on FS-S\\xspace using VGG-16.}\n\\tableref{table:fpavgg} compares the recent state-of-the-art methods and ASNet\\xspace on FS-S\\xspace using VGG-16~\\cite{vgg}.\nWe train and evaluate ASNet\\xspace with the FS-S\\xspace problem setup to fairly compare with the recent methods.\nAll the other experimental variables are detailed in Sec.~6.3. 
and Table~3 of the main paper.
ASNet\xspace consistently shows outstanding performance using the VGG-16 backbone network, as observed in the experiments using ResNets.

\begin{figure*}[t!]
	\centering
	\small
	\includegraphics[width=\linewidth]{supp/fig/2way_extra_coco_v1.pdf}
	\caption{2-way 1-shot FS-CS\xspace segmentation prediction maps on the COCO-$20^{i}$ benchmark.}
\label{fig:2way_extra_coco}
\end{figure*}

\smallbreakparagraph{Qualitative results.}
We attach additional segmentation predictions of ASNet\xspace learned with the iFSL\xspace framework on the FS-CS\xspace task in \figref{fig:2way_extra_coco}.
We observe that ASNet\xspace successfully predicts segmentation maps in challenging scenarios in the wild, such as a) segmenting tiny objects, b) segmenting non-salient objects, c) segmenting multiple objects, and d) segmenting a query given a small support object annotation.

\begin{figure*}[t!]
	\centering
	\small
	\includegraphics[width=0.80\linewidth]{supp/fig/fssfscs.pdf}
	\caption{2-way 1-shot FS-CS\xspace segmentation prediction maps of $\text{ASNet}_{\text{FS-S\xspace}}$ and $\text{ASNet}_{\text{FS-CS\xspace}}$.}
\label{fig:fssfcss}
\end{figure*}

\smallbreakparagraph{Qualitative results of $\text{ASNet}_{\text{FS-S\xspace}}$.}
Figure~\ref{fig:fssfcss} visualizes typical failure cases of the $\text{ASNet}_{\text{FS-S\xspace}}$ model in comparison with $\text{ASNet}_{\text{FS-CS\xspace}}$; these examples qualitatively show the severe performance drop of $\text{ASNet}_{\text{FS-S\xspace}}$ on FS-CS\xspace, which is quantitatively presented in Fig.~5~(b) of the main paper.
Sharing the same ASNet\xspace architecture, each model is trained on either the FS-S\xspace or the FS-CS\xspace setup and evaluated on the 2-way 1-shot FS-CS\xspace setup.
The results demonstrate that $\text{ASNet}_{\text{FS-S\xspace}}$ is unaware of object classes and gives foreground predictions on any existing objects, whereas $\text{ASNet}_{\text{FS-CS\xspace}}$ effectively distinguishes the object classes based on the support classes and produces clean and adequate segmentation maps.

\input{supp/table/ico50_foldwise}
\input{supp/table/fco50_foldwise}
\smallbreakparagraph{Fold-wise results on COCO-$\mathbf{20^{i}}$.}
Tables~\ref{table:ico50_foldwise} and \ref{table:fco50_foldwise} present fold-wise performance comparisons on the FS-CS\xspace and FS-S\xspace tasks, respectively.
We validate that ASNet\xspace outperforms the competitors by large margins in both the FS-CS\xspace and FS-S\xspace tasks on the challenging COCO-$20^{i}$ benchmark.

\smallbreakparagraph{Numerical performances of Fig.~4 in the main paper.}
We report the numerical performances of Fig.~4 in the main paper in \tableref{table:multiway} as a reference for future research.

\section{Introduction}

Dealing with nonlinear phenomena has become one of the predominant issues for mechanical engineers, with the objective of virtual testing. Whether they are geometrical or related to the material behavior, nonlinearities can be treated by a combination of Newton and linear solvers. Newton algorithms can be modified, secant, or quasi-Newton \cite{crisfield1979faster,deuflhard1975relaxation,dennis1977quasi,zhang1982modified}, depending mostly on the cost of computing tangent operators.
If the meshed structure has a large number of degrees of freedom, linear solvers are chosen to be iterative and parallel, belonging to the class of Domain Decomposition Methods for instance \cite{mandel1993balancing,le1994domain,rixen1999simple,farhat2001feti,gosselet2006non}.

This article focuses on the nonlinear substructuring and condensation method, which has been investigated in previous studies \cite{cresta2007nonlinear,pebrel2008nonlinear,bordeu2009balancing,negrello2016substructured}. The substructured formulation involves a choice of interface transmission conditions, which can be either primal, dual or mixed, referring either to interface displacements, nodal interface reactions, or a linear combination of the two previous types -- i.e. Robin interface conditions. In this context, the mixed formulation has shown good efficiency \cite{cresta2008decomposition,pebrel2009etude,hinojosa2014domain,negrello2016substructured}, mostly due to a sound choice of the parameter introduced in the linear combination of interface conditions. Being homogeneous to a stiffness, and often referred to as an \textit{interface impedance}, this parameter can indeed be optimized, depending on the mechanical problem \cite{lions1990schwarz}. However, computing the optimal value generally involves the storage and manipulation of global matrices, and is consequently not affordable in the framework of parallel computations.

The interface impedance, in DDM approaches to structural mechanics, should model, from the point of view of one substructure, its interactions with the complement of the whole structure. In order to achieve good convergence rates without degrading computational speed, the interface impedance can generally be approximated either by \textit{short scale} or \textit{long scale} formulations, depending on the predominant phenomena which must be accounted for. In the mechanical context, for instance, a common \textit{short scale} approximation can be built by assembling the interface stiffnesses of the neighbors \cite{cresta2008decomposition, pebrel2009etude, hinojosa2014domain, negrello2016substructured}.

However, filtering long range interactions gives quite a coarse approximation of the interface impedance, and does not give an accurate representation of the environment of each substructure. A good evaluation of the remainder of the structure should indeed couple these two strategies. Starting from this consideration, we propose here a new construction process for the interface impedance, based on a ``springs in series'' modeling of the structure, which couples the \textit{long} and \textit{short} range interactions with the structure. The heuristic we develop is strongly influenced by the availability of the various terms involved in our approximation.

The first section of this paper introduces the reference (mechanical or thermal) problem and the notations used in the following. A succinct presentation of the nonlinear substructuring and condensation method then recalls the principles of its mixed formulation: how the interface nonlinear condensed problem is built from nonlinear local equilibria, and the basics of the whole solving process, involving a global Newton algorithm combined with two internal solvers (parallel local Newton algorithms and a multi-scale linear preconditioned Krylov solver for the tangent interface system).
In Section~\ref{sec:new_heuristic}, the question of finding a relevant Robin parameter for mixed interface conditions is developed, mainly based on the observation that for each substructure, the optimal interface impedance is the nonlinear discretized \textit{Dirichlet-to-Neumann} operator of its complementary part. The new heuristic is then introduced, starting from the model of two springs in series, and a possible nonlinear multi-scale interpretation is given. The details of the two-scale approximation can be found in subsection~\ref{ssec:two_scale}, and its efficiency is evaluated in the last section on several academic numerical examples.

\section{Reference problem, notations}

\subsection{Global nonlinear problem}

We consider here a nonlinear partial differential equation on a domain $\Omega$, representative of a structural mechanical or thermal problem, with Dirichlet conditions on the part $\partial \Omega_u \neq \varnothing$ of its boundary, and Neumann conditions on the complementary part $\partial \Omega_F$. After discretization with the Finite Element method, the problem to be solved reads:
\begin{equation}\label{eq:nl_problem}
f_{int}(u) + f_{ext} = 0
\end{equation}
Vector $f_{ext}$ takes into account boundary conditions (Dirichlet or Neumann) and dead loads; operator $f_{int}$ refers to the discretization of the homogeneous partial differential equation.

\begin{remark} In linear elasticity, under the small perturbations hypothesis, one has:
\begin{equation*}
f_{int}(u) = -Ku
\end{equation*}
with $K$ the stiffness matrix of the structure.
\end{remark}

\subsection{Substructuring}

Classical DDM notations will be used -- see figure \ref{fig:notations}: the global domain $\Omega$ is partitioned into $N_s$ subdomains $\Omega\ensuremath{^{(s)}}$. For each subdomain, a trace operator $t\ensuremath{^{(s)}}$ restricts local quantities $x\ensuremath{^{(s)}}$ defined on $\Omega\ensuremath{^{(s)}}$ to boundary quantities $x\ensuremath{^{(s)}}_b$ defined on $\Gamma\ensuremath{^{(s)}} \equiv \partial \Omega\ensuremath{^{(s)}} \backslash \partial \Omega$:
\begin{equation*}
x\ensuremath{^{(s)}}_b = t\ensuremath{^{(s)}} x\ensuremath{^{(s)}} = x\ensuremath{^{(s)}}_{\vert \Gamma\ensuremath{^{(s)}}}
\end{equation*}
Quantities defined on internal nodes (belonging to $\Omega\ensuremath{^{(s)}} \backslash \Gamma\ensuremath{^{(s)}}$) are written with subscript $i$: $x\ensuremath{^{(s)}}_i$.

The global primal (resp. dual) interface is denoted $\Gamma_A$ (resp. $\Gamma_B$).
Primal assembly operators $A\ensuremath{^{(s)}}$ are defined as canonical prolongation operators from $\Gamma\ensuremath{^{(s)}}$ to $\Gamma_A$: $A\ensuremath{^{(s)}}$ is a full-rank boolean matrix of size $n_A \times n_b\ensuremath{^{(s)}}$ -- where $n_A$ is the size of the global primal interface $\Gamma_A$ and $n_b\ensuremath{^{(s)}}$ the number of boundary degrees of freedom belonging to subdomain $\Omega\ensuremath{^{(s)}}$.
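To make these notations concrete, the following minimal NumPy sketch builds the trace operator $t\ensuremath{^{(s)}}$ and the primal assembly operators $A\ensuremath{^{(s)}}$ for a one-dimensional bar of five nodes split into two subdomains sharing one node; sizes, numberings and values are purely illustrative and do not come from our actual implementation.
\begin{verbatim}
import numpy as np

# Toy 1D bar with 5 nodes split into two subdomains sharing node 2.
# Subdomain 1 owns nodes [0, 1, 2]; subdomain 2 owns nodes [2, 3, 4].
# The global primal interface Gamma_A reduces here to the single node 2.

def trace_operator(n_local, boundary_dofs):
    """Boolean matrix t^(s) extracting boundary dofs from local dofs."""
    t = np.zeros((len(boundary_dofs), n_local))
    for row, dof in enumerate(boundary_dofs):
        t[row, dof] = 1.0
    return t

# Node 2 is local dof 2 in subdomain 1 and local dof 0 in subdomain 2.
t1 = trace_operator(3, [2])
t2 = trace_operator(3, [0])

# Primal assembly operators A^(s): full-rank boolean, n_A x n_b^(s) = 1 x 1.
A1 = np.array([[1.0]])
A2 = np.array([[1.0]])

x1 = np.array([10.0, 11.0, 12.0])   # a local field on subdomain 1
x2 = np.array([12.0, 13.0, 14.0])   # a local field on subdomain 2

x1_b, x2_b = t1 @ x1, t2 @ x2       # boundary restrictions
print(A1 @ x1_b + A2 @ x2_b)        # assembled interface sum: [24.]
\end{verbatim}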
\begin{figure}[ht]
\begin{center}
\hspace{-0.5cm}\subfloat[Subdomains]{\includegraphics[width=4cm]{graph_2a.pdf}}
\subfloat[Local interface]{\includegraphics[width=4cm]{graph_2b.pdf}}
\subfloat[Interface nodes]{\includegraphics[width=3.5cm]{graph_2c.pdf}}
\subfloat[Interface connections]{\includegraphics[width=3.5cm]{graph_2d.pdf}}

\hspace{-0.75cm}\subfloat{\includegraphics[width=14.5cm]{matrices.pdf}}
\end{center}
\caption{Local numberings, interface numberings, trace and assembly operators}
\label{fig:notations}
\end{figure}

Diamond notations are used in the following: for a domain $\Omega$ substructured into $N_s$ subdomains $\left( \Omega\ensuremath{^{(s)}} \right)$, concatenated local variables are superscripted $\ensuremath{^{\diamondvert}}$, $\ensuremath{^{\diamondminus}}$ or $\ensuremath{^{\diamondbackslash}}$, depending on the alignment.
\begin{equation*}
\begin{aligned}
x\ensuremath{^{\diamondvert}} = & \begin{pmatrix}
x^{(1)} \\
\vdots \\
 x^{(N_{s})} \end{pmatrix},\qquad x\ensuremath{^{\diamondminus}} = \begin{pmatrix}
{x^{(1)}} \, \ldots \, {x^{(N_{s})}} \end{pmatrix},\qquad
 M\ensuremath{^{\diamondbackslash}} = \begin{pmatrix}
M^{(1)} & 0 & 0 \\
0 & \ddots & 0 \\
0 & 0 & M^{(N_{s})} \\
\end{pmatrix} \\
\end{aligned}
\end{equation*}
Any matrix $B\ensuremath{^{(s)}}$ satisfying $\text{Range}(B\ensuremath{^{\diamondminus^T}}) = \text{Ker}(A\ensuremath{^{\diamondminus}})$ can serve as the dual assembly operator -- see figure~\ref{fig:notations} for the most classical choice.


\section{Nonlinear substructuring and condensation: mixed formulation}

This section recalls the principle of nonlinear substructuring and condensation, which is explained in detail in \cite{negrello2016substructured}.

\subsection{Formulation of the condensed problem}

The nonlinear problem \eqref{eq:nl_problem} is decomposed into $N_s$ nonlinear subproblems:
\begin{equation*}
f_{int}\ensuremath{^{\diamondvert}}(u\ensuremath{^{\diamondvert}}) + f_{ext}\ensuremath{^{\diamondvert}} + t\ensuremath{^{\diamondbackslash^T}} \lambda\ensuremath{^{\diamondvert}}_b = 0\ensuremath{^{\diamondvert}}
\end{equation*}
where $\lambda\ensuremath{^{(s)}}_b$ is the unknown local interface nodal reaction, introduced to represent the interactions of subdomain $\Omega\ensuremath{^{(s)}}$ with neighboring subdomains.

Transmission conditions hold:
\begin{equation*}
\left\lbrace \begin{aligned}
B\ensuremath{^{\diamondminus}} u\ensuremath{^{\diamondvert}}_b = 0 \\
A\ensuremath{^{\diamondminus}} \lambda\ensuremath{^{\diamondvert}}_b = 0
\end{aligned} \right.
\end{equation*}

The mixed formulation consists in introducing a new interface unknown:
\begin{equation*}
\mu\ensuremath{^{\diamondvert}}_b = \lambda\ensuremath{^{\diamondvert}}_b + Q\ensuremath{^{\diamondbackslash}}_b u\ensuremath{^{\diamondvert}}_b
\end{equation*}
where the matrix $Q\ensuremath{^{\diamondbackslash}}_b$ is a parameter of the method. It has to be symmetric positive definite, and can be interpreted as a stiffness added to the interface, per subdomain: $Q\ensuremath{^{\diamondbackslash}}_b$ is called the \textit{interface impedance}.
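As a small consistency check of the diamond notation and of the compatibility condition $\text{Range}(B\ensuremath{^{\diamondminus^T}}) = \text{Ker}(A\ensuremath{^{\diamondminus}})$, the following toy sketch (two subdomains sharing one interface node; all values are arbitrary illustrations of ours) also evaluates the mixed unknown $\mu\ensuremath{^{\diamondvert}}_b$:
\begin{verbatim}
import numpy as np

# Two subdomains sharing one interface node: n_b = 1 on each side.
A = np.array([[1.0, 1.0]])    # A^{<>-} = [A^(1) A^(2)], primal assembly
B = np.array([[1.0, -1.0]])   # B^{<>-}, signed dual assembly

# Range(B^T) = Ker(A): A @ B.T must vanish
assert np.allclose(A @ B.T, 0.0)

# Diamond (block diagonal) concatenation of local impedances Q_b^(s)
Q1, Q2 = np.array([[2.0]]), np.array([[3.0]])
Q_diamond = np.block([[Q1, np.zeros((1, 1))],
                      [np.zeros((1, 1)), Q2]])

u_b = np.array([0.5, 0.5])      # continuous field: B @ u_b = 0
lam_b = np.array([1.0, -1.0])   # balanced field:   A @ lam_b = 0
mu_b = lam_b + Q_diamond @ u_b  # mixed interface unknown
print(mu_b)                     # [2.0, 0.5]
\end{verbatim}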
\medskip

Local equilibriums can then be reformulated as:
\begin{equation}\label{eq:local_eq}
f_{int}\ensuremath{^{\diamondvert}}(u\ensuremath{^{\diamondvert}}) + f_{ext}\ensuremath{^{\diamondvert}} + t\ensuremath{^{\diamondbackslash^T}} \left( \mu_b\ensuremath{^{\diamondvert}} - Q\ensuremath{^{\diamondbackslash}}_b u\ensuremath{^{\diamondvert}}_b \right) = 0
\end{equation}
We assume the existence, at least locally, of a nonlinear mixed analogue $H\ensuremath{^{\diamondvert}}_{nl}$ of the Schur complement (i.e.\ a discrete Robin-to-Dirichlet operator):
\begin{equation}\label{eq:local_eq_H}
u\ensuremath{^{\diamondvert}}_b = H\ensuremath{^{\diamondvert}}_{nl}\left( \mu\ensuremath{^{\diamondvert}}_b; Q\ensuremath{^{\diamondbackslash}}_b, f\ensuremath{^{\diamondvert}}_{ext} \right)
\end{equation}
\begin{prop} The tangent operator~$H\ensuremath{^{\diamondbackslash}}_{t}$ to~$H\ensuremath{^{\diamondvert}}_{nl}$ can be explicitly computed as a function of the tangent stiffness~$K\ensuremath{^{\diamondbackslash}}_t$:
\begin{equation*}
H\ensuremath{^{\diamondbackslash}}_t = \frac{\partial H\ensuremath{^{\diamondvert}}_{nl}}{\partial \mu\ensuremath{^{\diamondvert}}_b} = t\ensuremath{^{\diamondbackslash}} \left( K_t\ensuremath{^{\diamondbackslash}} + t\ensuremath{^{\diamondbackslash^T}} Q\ensuremath{^{\diamondbackslash}}_b t\ensuremath{^{\diamondbackslash}} \right)^{-1} t\ensuremath{^{\diamondbackslash^T}}
\end{equation*}
Moreover, in the linear case, the Robin-to-Dirichlet operator written $H\ensuremath{^{\diamondvert}}_{l}$ is affine, with the constant term associated with the external forces:
\begin{equation*}
\begin{aligned}
& H\ensuremath{^{\diamondvert}}_l \left( \mu\ensuremath{^{\diamondvert}}_b; Q\ensuremath{^{\diamondbackslash}}_b, f\ensuremath{^{\diamondvert}}_{ext} \right) = H\ensuremath{^{\diamondbackslash}}_t \mu\ensuremath{^{\diamondvert}}_b + b\ensuremath{^{\diamondvert}}_m \\
\text{with }& b\ensuremath{^{\diamondvert}}_m = t\ensuremath{^{\diamondbackslash}} \left( K\ensuremath{^{\diamondbackslash}} + t\ensuremath{^{\diamondbackslash^T}} Q\ensuremath{^{\diamondbackslash}}_b t\ensuremath{^{\diamondbackslash}} \right)^{-1} f\ensuremath{^{\diamondvert}}_{ext}
\end{aligned}
\end{equation*}
\end{prop}

\begin{remark}
For the upcoming discussion, we will make use of the nonlinear primal Schur complement (Dirichlet-to-Neumann, noted $S_{nl}\ensuremath{^{(s)}}$), which is such that $\lambda_b\ensuremath{^{(s)}}=S_{nl}\ensuremath{^{(s)}}(u_b\ensuremath{^{(s)}};f_{ext}\ensuremath{^{(s)}})$. The tangent primal Schur complement can be computed from the tangent stiffness matrix:
\begin{equation*}
S_t\ensuremath{^{(s)}}=K\ensuremath{^{(s)}}_{t_{bb}}-K\ensuremath{^{(s)}}_{t_{bi}}{K\ensuremath{^{(s)}}_{t_{ii}}}^{-1}K\ensuremath{^{(s)}}_{t_{ib}}
\end{equation*}
and we have $H_t\ensuremath{^{\diamondbackslash}} = (S\ensuremath{^{\diamondbackslash}}_t + Q\ensuremath{^{\diamondbackslash}}_b)^{-1}$. Note that the tangent dual Schur complement (Neumann-to-Dirichlet) can be written as ${S_t\ensuremath{^{(s)}}}^{\dagger} = t\ensuremath{^{(s)}}{K_t\ensuremath{^{(s)}}}^{\dagger}{t\ensuremath{^{(s)}}}^T$.
In the linear case, the primal Schur complement is an affine operator with the constant term due to the external load:
\begin{equation*}
\begin{aligned}
& S\ensuremath{^{\diamondvert}}_l \left( u\ensuremath{^{\diamondvert}}_b; f\ensuremath{^{\diamondvert}}_{ext} \right) = S\ensuremath{^{\diamondbackslash}}_t u\ensuremath{^{\diamondvert}}_b + b\ensuremath{^{\diamondvert}}_p \\
\text{with }& b\ensuremath{^{\diamondvert}}_p = f_{ext_b}\ensuremath{^{\diamondvert}} - K\ensuremath{^{\diamondbackslash}}_{t_{bi}}{K\ensuremath{^{\diamondbackslash}}_{t_{ii}}}^{-1}f_{ext_i}\ensuremath{^{\diamondvert}} = (S_t\ensuremath{^{\diamondbackslash}}+Q_b\ensuremath{^{\diamondbackslash}})b_m\ensuremath{^{\diamondvert}}
\end{aligned}
\end{equation*}
\end{remark}

Thanks to the complementarity between balanced and continuous quantities, and to the symmetric positive definiteness of $Q\ensuremath{^{\diamondbackslash}}_b$, any boundary displacement (defined independently on neighboring subdomains) can be split in a unique way into a continuous field belonging to $\text{Ker}\left( B\ensuremath{^{\diamondminus}} \right)$ and a balanced field belonging to $\text{Ker}\left( A\ensuremath{^{\diamondminus}} Q\ensuremath{^{\diamondbackslash}}_b \right)$. Thus, the transmission conditions can be written in terms of $\mu\ensuremath{^{\diamondvert}}_b$ and $u\ensuremath{^{\diamondvert}}_b$, and gathered in a single equation:
\begin{equation*}
A\ensuremath{^{\diamondminus^T}} \left( A\ensuremath{^{\diamondminus}} Q\ensuremath{^{\diamondbackslash}}_b A\ensuremath{^{\diamondminus^T}} \right)^{-1} A\ensuremath{^{\diamondminus}} \mu\ensuremath{^{\diamondvert}}_b - u\ensuremath{^{\diamondvert}}_b = 0
\end{equation*}

Finally, the interface condensed problem reads:
\begin{equation}\label{eq:interf_pb}
R\ensuremath{^{\diamondvert}}_b(\mu_b\ensuremath{^{\diamondvert}}) \equiv A\ensuremath{^{\diamondminus^T}} \left( A\ensuremath{^{\diamondminus}} Q\ensuremath{^{\diamondbackslash}}_b A\ensuremath{^{\diamondminus^T}} \right)^{-1} A\ensuremath{^{\diamondminus}} \mu\ensuremath{^{\diamondvert}}_b - H\ensuremath{^{\diamondvert}}_{nl} \left( \mu\ensuremath{^{\diamondvert}}_b; Q\ensuremath{^{\diamondbackslash}}_b, f\ensuremath{^{\diamondvert}}_{ext} \right) = 0
\end{equation}

\subsection{Solving strategy}

\subsubsection{Newton-Krylov algorithm}

Nonlinear substructuring and condensation results in applying a global Newton algorithm to the interface problem \eqref{eq:interf_pb} instead of problem \eqref{eq:nl_problem}.
Three steps are then involved in the solving process:
\begin{enumerate}[label=(\roman*)]
\item Local solutions of the nonlinear equilibriums \eqref{eq:local_eq} are computed by applying local Newton algorithms.
\item The interface mixed residual is assembled.
\item The interface tangent problem is solved by a DDM solver.
\end{enumerate}
The global Newton algorithm can be written, with the previous notations:
\begin{equation*}
\left\lbrace \begin{aligned}
&\frac{\partial R\ensuremath{^{\diamondvert}}_b}{\partial \mu_b\ensuremath{^{\diamondvert}}} d\mu\ensuremath{^{\diamondvert}}_b + R\ensuremath{^{\diamondvert}}_b = 0 \\
&\mu\ensuremath{^{\diamondvert}}_b += d\mu\ensuremath{^{\diamondvert}}_b
\end{aligned} \right.
\end{equation*}
The tangent problem then reads:
\begin{equation}\label{eq:tg_pb}
\left( A\ensuremath{^{\diamondminus^T}} \left( A\ensuremath{^{\diamondminus}} Q\ensuremath{^{\diamondbackslash}}_b A\ensuremath{^{\diamondminus^T}} \right)^{-1} A\ensuremath{^{\diamondminus}} - H\ensuremath{^{\diamondbackslash}}_t \right) d\mu\ensuremath{^{\diamondvert}}_b = H\ensuremath{^{\diamondvert}}_{nl} \left( \mu\ensuremath{^{\diamondvert}}_b, Q\ensuremath{^{\diamondbackslash}}_b, f\ensuremath{^{\diamondvert}}_{ext} \right) - A\ensuremath{^{\diamondminus^T}} \left( A\ensuremath{^{\diamondminus}} Q\ensuremath{^{\diamondbackslash}}_b A\ensuremath{^{\diamondminus^T}} \right)^{-1} A\ensuremath{^{\diamondminus}} \mu\ensuremath{^{\diamondvert}}_b
\end{equation}

\subsubsection{Alternative formulation}\label{sec:altern_formul}
The tangent problem \eqref{eq:tg_pb} could be treated by a FETI-2LM solver \cite{roux2009feti}. An equivalent formulation of problem \eqref{eq:interf_pb} is also possible, where the boundary interface unknown $\mu\ensuremath{^{\diamondvert}}_b$ is replaced by a couple of interface unknowns $\left( f_B, v_A \right)$, $f_B$ being a nodal reaction and $v_A$ an interface displacement. The couple $\left( f_B, v_A \right)$ is made unique by imposing the three following conditions:
\begin{itemize}[label=$\circ$]
\item $f_B$ is balanced
\item $v_A$ is continuous
\item $\mu\ensuremath{^{\diamondvert}}_b = B\ensuremath{^{\diamondminus^T}} f_B + Q\ensuremath{^{\diamondbackslash}}_b A\ensuremath{^{\diamondminus^T}} v_A$
\end{itemize}
With this formulation, the tangent problem is expressed by:
\begin{equation}\label{eq:pb_tg2}
\begin{aligned}
&\left(A\ensuremath{^{\diamondminus}} S\ensuremath{^{\diamondbackslash}}_t A\ensuremath{^{\diamondminus^T}}\right) dv_A = A\ensuremath{^{\diamondminus}} \left( Q\ensuremath{^{\diamondbackslash}}_b + S\ensuremath{^{\diamondbackslash}}_t \right) b\ensuremath{^{\diamondvert}}_m \\
\text{with }&b\ensuremath{^{\diamondvert}}_m = H\ensuremath{^{\diamondvert}}_{nl} \left( \mu\ensuremath{^{\diamondvert}}_b; Q\ensuremath{^{\diamondbackslash}}_b, f\ensuremath{^{\diamondvert}}_{ext} \right) - A\ensuremath{^{\diamondminus^T}} v_A
\end{aligned}
\end{equation}
Equation \eqref{eq:pb_tg2} has the exact form of a BDD \cite{mandel1993balancing} problem. It can thus conveniently be solved with the usual preconditioner and coarse problem.
The following quantities can then be deduced:
\begin{equation}\label{eq:recup_mu_u_lam}
\begin{aligned}
& d\mu_b\ensuremath{^{\diamondvert}} = S_t\ensuremath{^{\diamondbackslash}} A\ensuremath{^{\diamondminus^T}} dv_A - A\ensuremath{^{\diamondminus}} \left( Q\ensuremath{^{\diamondbackslash}}_b + S_t\ensuremath{^{\diamondbackslash}} \right) b_m\ensuremath{^{\diamondvert}} \\
& du\ensuremath{^{\diamondvert}} = \left( K_t\ensuremath{^{\diamondbackslash}} + t\ensuremath{^{\diamondbackslash^T}} Q_b\ensuremath{^{\diamondbackslash}} t\ensuremath{^{\diamondbackslash}} \right)^{-1} t\ensuremath{^{\diamondbackslash^T}} \left( A\ensuremath{^{\diamondminus}} \left[ Q\ensuremath{^{\diamondbackslash}}_b + S_t\ensuremath{^{\diamondbackslash}} \right] b_m\ensuremath{^{\diamondvert}} + d\mu_b\ensuremath{^{\diamondvert}} \right) \\
& du_b\ensuremath{^{\diamondvert}} = t\ensuremath{^{\diamondbackslash}} du\ensuremath{^{\diamondvert}} \\
& d\lambda_b\ensuremath{^{\diamondvert}} = S_t\ensuremath{^{\diamondbackslash}} du_b\ensuremath{^{\diamondvert}} - A\ensuremath{^{\diamondminus}} \left( Q\ensuremath{^{\diamondbackslash}}_b + S_t\ensuremath{^{\diamondbackslash}} \right) b_m\ensuremath{^{\diamondvert}} = d\mu_b\ensuremath{^{\diamondvert}} - Q\ensuremath{^{\diamondbackslash}}_b du_b\ensuremath{^{\diamondvert}}
\end{aligned}
\end{equation}


\subsubsection{Typical algorithm}

Algorithm~\ref{alg:robin-bdd} sums up the main steps of the method with the mixed nonlinear local problems and the primal tangent solver. For simplicity, only one load increment is considered.

As can be seen in this algorithm, several convergence thresholds are needed:
\begin{itemize}
\item The global convergence criterion $\varepsilon_{NG}$: since our approach is mixed, the criterion not only controls the quality of the subdomains' balance (as in a standard Newton approach) but also the continuity of the interface displacement, which is measured by an appropriate norm written $\|\cdot\|_B$.
\item The local nonlinear thresholds $\varepsilon_{NL}\ensuremath{^{\diamondvert}}$, which are associated with the Newton processes carried out independently on the subdomains.
\item The global linear threshold of the domain decomposition (Krylov) solver $\varepsilon_{K}$ (here BDD).
\end{itemize}
The other parameters of the method are the initializations of the various iterative solvers and the choice of the impedance matrices $Q_b\ensuremath{^{\diamondbackslash}}$.
\begin{algorithm2e}[!ht]
\DontPrintSemicolon
\KwSty{Define:}\;
$r_{nl}^{m\diamondvert}(u\ensuremath{^{\diamondvert}},\mu_b\ensuremath{^{\diamondvert}}) = f_{int}\ensuremath{^{\diamondvert}}(u\ensuremath{^{\diamondvert}})-t\ensuremath{^{\diamondbackslash^T}} Q_b\ensuremath{^{\diamondbackslash}} t\ensuremath{^{\diamondbackslash}} u\ensuremath{^{\diamondvert}} + t\ensuremath{^{\diamondbackslash^T}} \mu_b\ensuremath{^{\diamondvert}}+ f\ensuremath{^{\diamondvert}}_{ext}$\;
\BlankLine
\KwSty{Initialization:}\;
$(u_0\ensuremath{^{\diamondvert}},\lambda_{b_0}\ensuremath{^{\diamondvert}})$ such that $B\ensuremath{^{\diamondminus}} t\ensuremath{^{\diamondbackslash}} u_0\ensuremath{^{\diamondvert}}=0$ and $A\ensuremath{^{\diamondminus}} \lambda_{b_0}\ensuremath{^{\diamondvert}}=0$\;
\KwSty{Set} $k=0$\;
\KwSty{Define} $\mu_{b_k}\ensuremath{^{\diamondvert}}= \lambda_{b_k}\ensuremath{^{\diamondvert}} + Q_b\ensuremath{^{\diamondbackslash}} t\ensuremath{^{\diamondbackslash}} u_k\ensuremath{^{\diamondvert}}$\;
\While{$\| r_{nl}^{m\diamondvert}(u_k\ensuremath{^{\diamondvert}},\mu_{b_k}\ensuremath{^{\diamondvert}})\|+ \| B\ensuremath{^{\diamondminus}} t\ensuremath{^{\diamondbackslash}} u_k\ensuremath{^{\diamondvert}}\|_{B}>\varepsilon_{NG}$ }{%
  \KwSty{Local nonlinear step}:\;
  \KwSty{Set} $u_{k,0}\ensuremath{^{\diamondvert}}=u_{k}\ensuremath{^{\diamondvert}}$ and $j=0$\;
  \While{$\| r_{nl}^{m\diamondvert}(u_{k,j}\ensuremath{^{\diamondvert}},\mu_{b_k}\ensuremath{^{\diamondvert}})\|>\varepsilon\ensuremath{^{\diamondvert}}_{NL}$}{
    $u_{k,j+1}\ensuremath{^{\diamondvert}}=u_{k,j}\ensuremath{^{\diamondvert}}+\left(K_{t_{k,j}}\ensuremath{^{\diamondbackslash}} + t\ensuremath{^{\diamondbackslash^T}} Q_b\ensuremath{^{\diamondbackslash}} t\ensuremath{^{\diamondbackslash}} \right)^{-1} r_{nl}^{m\diamondvert}( u_{k,j}\ensuremath{^{\diamondvert}}, \mu_{b_k}\ensuremath{^{\diamondvert}})$ \;
    \KwSty{Set} $j=j+1$
  }
  \KwSty{Linear right-hand side}:\;
  $b_{m_k}\ensuremath{^{\diamondvert}} = A\ensuremath{^{\diamondminus^T}} \left( A\ensuremath{^{\diamondminus}} Q_b\ensuremath{^{\diamondbackslash}} A\ensuremath{^{\diamondminus^T}} \right)^{-1} A\ensuremath{^{\diamondminus}} \mu_{b_k}\ensuremath{^{\diamondvert}} -t\ensuremath{^{\diamondbackslash}} u_{k,j}\ensuremath{^{\diamondvert}} $\;
  $b_{p_k}\ensuremath{^{\diamondvert}} = (S_{t_{k,j}}\ensuremath{^{\diamondbackslash}} + Q_b\ensuremath{^{\diamondbackslash}})b_{m_k}\ensuremath{^{\diamondvert}} $\;
  \KwSty{Global linear step}:\;
  \KwSty{Set} $dv_A^{0}=0$ and $i=0$\;
  \While{$\|b\ensuremath{^{\diamondvert}}_{p_k}-\left( A\ensuremath{^{\diamondminus}}\; S_{t_{k,j}}\ensuremath{^{\diamondbackslash}} A\ensuremath{^{\diamondminus^T}} \right) dv_A^i\|>\varepsilon_{K}$}{
    Make BDD iterations (index $i$)
  }
  \KwSty{Set} $u_{k+1}\ensuremath{^{\diamondvert}} = u_k\ensuremath{^{\diamondvert}} + du_k^{i\diamondvert}$ and $ \lambda_{b_{k+1}}\ensuremath{^{\diamondvert}} = \lambda_{b_k}\ensuremath{^{\diamondvert}} + d\lambda_{b_k}^{i\diamondvert}$ using \eqref{eq:recup_mu_u_lam}\;
  \KwSty{Set} $k=k+1$\;
}
\caption{Mixed nonlinear approach with BDD tangent solver}
\label{alg:robin-bdd}
\end{algorithm2e}
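For illustration, the sketch below isolates the local nonlinear step of Algorithm~\ref{alg:robin-bdd} on a toy subdomain with two degrees of freedom and a cubic internal force. The operators ($f_{int}$, the trace, the impedance) and all numerical values are stand-ins of our own choosing, not the actual finite element objects.
\begin{verbatim}
import numpy as np

# Toy subdomain: 2 dofs (1 interior, 1 boundary), nonlinear internal force
# f_int(u) = -K u - c * u**3 (componentwise cubic term, purely illustrative).
K = np.array([[ 2.0, -1.0],
              [-1.0,  1.0]])
c = 0.1
t = np.array([[0.0, 1.0]])        # trace: dof 1 is the boundary dof
Qb = np.array([[1.5]])            # given Robin impedance
f_ext = np.array([0.0, 0.0])
mu_b = np.array([0.3])            # current mixed interface unknown

def f_int(u):
    return -K @ u - c * u**3

def K_t(u):                       # tangent stiffness -d f_int / du
    return K + np.diag(3.0 * c * u**2)

def residual(u):                  # r_nl^m of Algorithm 1
    return f_int(u) - t.T @ Qb @ (t @ u) + t.T @ mu_b + f_ext

u = np.zeros(2)
while np.linalg.norm(residual(u)) > 1e-10:   # local Newton loop
    du = np.linalg.solve(K_t(u) + t.T @ Qb @ t, residual(u))
    u = u + du
print(u, residual(u))
\end{verbatim}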
\section{New heuristic for the interface impedance}\label{sec:new_heuristic}

\subsection{Motivation}\label{ssec:motivation}


The parameter $Q\ensuremath{^{\diamondbackslash}}_b$ is involved throughout the solving process, and special care should be paid to its computation.

\medskip

In order to fix ideas, let us consider the Robin--Robin algorithm with stationary iterations, in the nonlinear case, with a nonlinear impedance. Starting from the initial guess $\mu_b\ensuremath{^{\diamondvert}}=0$, we have the iterations of Algorithm~\ref{alg:robin-robin}.

\SetNlSty{texttt}{(}{)}
\begin{algorithm2e}[ht]
	\DontPrintSemicolon
\nl Parallel solve: $S_{nl}\ensuremath{^{\diamondvert}}(u_b\ensuremath{^{\diamondvert}};f\ensuremath{^{\diamondvert}}_{ext})+Q_{nl}\ensuremath{^{\diamondvert}}(u_b\ensuremath{^{\diamondvert}})= \mu_b\ensuremath{^{\diamondvert}}$\;
\nl Parallel post-processing: $\lambda_b\ensuremath{^{\diamondvert}} = S_{nl}\ensuremath{^{\diamondvert}}(u_b\ensuremath{^{\diamondvert}};f\ensuremath{^{\diamondvert}}_{ext}) = \mu_b\ensuremath{^{\diamondvert}}-Q_{nl}\ensuremath{^{\diamondvert}}(u_b\ensuremath{^{\diamondvert}})$\;
\nl Assembly: $\bar{u}_b\ensuremath{^{\diamondvert}} = A\ensuremath{^{\diamondminus^T}} \tilde{A}\ensuremath{^{\diamondminus}} u_b\ensuremath{^{\diamondvert}}$, and $\bar{\lambda}_b\ensuremath{^{\diamondvert}} = \left( I - \tilde{A}\ensuremath{^{\diamondminus^T}} A\ensuremath{^{\diamondminus}} \right) \lambda_b\ensuremath{^{\diamondvert}}$\;
\nl Parallel update of the interface unknown: $\mu_b\ensuremath{^{\diamondvert}} = Q_{nl}\ensuremath{^{\diamondvert}}( \bar{u}_b\ensuremath{^{\diamondvert}}) + \bar{\lambda}_b\ensuremath{^{\diamondvert}}$\;
\caption{Robin--Robin stationary iteration}
\label{alg:robin-robin}
\end{algorithm2e}

\medskip

The assembled quantities $\bar{u}_b\ensuremath{^{\diamondvert}}$ and $\bar{\lambda}_b\ensuremath{^{\diamondvert}}$ are defined such that the interface conditions can be written as:
\begin{equation}\label{eq:new_interf_cond}
u_b\ensuremath{^{\diamondvert}} = \bar{u}_b\ensuremath{^{\diamondvert}} \text{ and }
\lambda_b\ensuremath{^{\diamondvert}} = \bar{\lambda}_b\ensuremath{^{\diamondvert}}
\end{equation}
and we assume the nonlinear local operators $Q_{nl}\ensuremath{^{\diamondvert}}$ to be such that the equivalence between \eqref{eq:new_interf_cond} and the following equation is ensured:
\begin{equation}\label{eq:intQnl}
\left(Q_{nl}\ensuremath{^{\diamondvert}}(u_b\ensuremath{^{\diamondvert}}) - Q_{nl}\ensuremath{^{\diamondvert}}(\bar{u}_b\ensuremath{^{\diamondvert}})\right) + \left(\lambda_b\ensuremath{^{\diamondvert}}-\bar{\lambda}_b\ensuremath{^{\diamondvert}}\right)=0
\end{equation}


Considering a given subdomain $\Omega\ensuremath{^{(j)}}$, and writing $\Omega\ensuremath{^{(\overline{j})}}$ its complement, we can condense the whole problem on its interface; the boundary displacement $u_b\ensuremath{^{(j)}}$ must then be the solution to:
\begin{equation*}
S_{nl}\ensuremath{^{(j)}}(u_b\ensuremath{^{(j)}};f_{ext}\ensuremath{^{(j)}}) + S_{nl}\ensuremath{^{(\overline{j})}}(u_b\ensuremath{^{(j)}};f_{ext}\ensuremath{^{(\overline{j})}}) =0
\end{equation*}
Comparing this equation with line~(1) of Algorithm~\ref{alg:robin-robin}, one can see that, starting from the zero initial guess $\mu_b\ensuremath{^{(j)}}=0$, the method converges in only one iteration with $Q_{nl}\ensuremath{^{(j)}} = S_{nl}\ensuremath{^{(\overline{j})}}$: the ideal impedance is the Dirichlet-to-Neumann operator of the complement.
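This one-iteration property is easy to check numerically. The toy computation below (a 1D linear bar of unit springs, two subdomains, NumPy) verifies that the Robin solve on one subdomain, with the affine impedance taken as the condensed operator of its complement and a zero initial interface unknown, directly returns the exact interface displacement. All sizes, values and sign conventions are chosen locally for readability and may differ from the conventions of the text.
\begin{verbatim}
import numpy as np

def chain_K(n_nodes):
    """Assembled stiffness of n_nodes-1 unit springs in a row."""
    K = np.zeros((n_nodes, n_nodes))
    for e in range(n_nodes - 1):
        K[e:e+2, e:e+2] += [[1.0, -1.0], [-1.0, 1.0]]
    return K

def condense(K, f, i, b):
    """Schur complement S and condensed load g on boundary dofs b."""
    Kii = K[np.ix_(i, i)]
    S = K[np.ix_(b, b)] - K[np.ix_(b, i)] @ np.linalg.solve(Kii,
                                                            K[np.ix_(i, b)])
    g = f[b] - K[np.ix_(b, i)] @ np.linalg.solve(Kii, f[i])
    return S, g

# Global problem: 8 nodes, node 0 clamped, unit load at node 7.
N, m, F = 8, 3, 1.0                  # m = interface node
K, f = chain_K(N), np.zeros(N); f[-1] = F
u_exact = np.zeros(N)
u_exact[1:] = np.linalg.solve(K[1:, 1:], f[1:])

# Subdomain 1: nodes 0..m (node 0 clamped) -> boundary is local dof m-1.
K1, f1 = chain_K(m + 1)[1:, 1:], np.zeros(m)
S1, g1 = condense(K1, f1, list(range(m - 1)), [m - 1])

# Subdomain 2: nodes m..N-1, boundary is local dof 0, load at the free end.
K2, f2 = chain_K(N - m), np.zeros(N - m); f2[-1] = F
S2, g2 = condense(K2, f2, list(range(1, N - m)), [0])

# Condensed interface equilibrium: (S1 + S2) u_b = g1 + g2
u_b = np.linalg.solve(S1 + S2, g1 + g2)
assert np.isclose(u_b, u_exact[m])

# Robin solve on subdomain 1 with the optimal *affine* impedance of its
# complement, Q1(u) = S2 u - g2, and zero interface unknown mu = 0:
#   S1 u - g1 + Q1(u) = mu   <=>   (S1 + S2) u = g1 + g2 + mu
mu = 0.0
u_one_shot = np.linalg.solve(S1 + S2, g1 + g2 + mu)
print(np.isclose(u_one_shot, u_exact[m]))   # True: exact after one solve
\end{verbatim}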
\medskip

In order to further discuss the problem, we now consider the linear case, and we recall that we have: $S_{l}\ensuremath{^{(\overline{j})}}(u_b\ensuremath{^{(j)}}) = S_{t}\ensuremath{^{(\overline{j})}} u_b\ensuremath{^{(j)}} + b_p\ensuremath{^{(\overline{j})}}$. In that case, the optimal impedance is thus an affine operator whose linear part (which we will write $Q_b\ensuremath{^{(j)}}$ in agreement with the developments of the previous section) accounts for the stiffness of the complement domain, whereas the constant part accounts for the external load on the complement part. Note that another point of view is to use a strictly linear impedance together with a good (non-zero) initialization for $\mu_b\ensuremath{^{\diamondvert}}$ which should account for the external load on the complement domain.

The construction of a good constant part for the impedance is usually realized, in the linear case, by the introduction of a well-chosen coarse problem; this is discussed in subsection~\ref{ssec:multi}.
In the nonlinear case, building a coarse problem which would connect all subdomains during their inner Newton loop seems complex; moreover, it would break the independence of the local computations. It looks simpler to rely on a good initialization in order to propagate the right-hand side: this can be done at low cost by easing accuracy constraints in the first inner Newton loop (adapting $\varepsilon_{NL}$), and then using the multiscale solver of the global linear step. Note that in \cite{klawonn2017new} a coarse problem is built for nonlinear versions of FETI-DP and BDDC but, again, it mainly serves to find a good initialization before independent parallel nonlinear solves.

We now focus on the construction of $Q_b\ensuremath{^{\diamondbackslash}}$, i.e. the linear part of the impedance. In the linear case, one can show \cite{magoules2004optimal,gander2011optimal} that for a slab-wise decomposition of the structure (or a tree-like decomposition, i.e. one whose connectivity graph has no cycles), the setting $Q_b\ensuremath{^{(j)}} = S_t\ensuremath{^{(\overline{j})}}$ is optimal, in the sense that convergence is reached in a maximum number of iterations equal to the number of subdomains (iterations are only needed to propagate the right-hand side). If a suitable coarse grid is added, convergence can be extremely fast. For an arbitrary decomposition, the optimality of such a setting can theoretically be lost, because of the unclear propagation rate of the right-hand side \cite{nier1998remarques,gander2011optimal}. However, this value still appears pertinent, especially given the difficulty of defining a more relevant setting for a matrix operator $Q_t\ensuremath{^{\diamondbackslash}}$.

\medskip


Starting from these considerations, let us further analyze the terms of the following expression of the interface impedance for a given subdomain $\Omega\ensuremath{^{(j)}}$:
\begin{equation}\label{eq:opti_qb_lin}
Q_t\ensuremath{^{(j)}} = S_t\ensuremath{^{(\overline{j})}} = K_{bb}\ensuremath{^{(\overline{j})}} - K_{bi}\ensuremath{^{(\overline{j})}} K_{ii}\ensuremath{^{(\overline{j})^{-1}}} K_{ib}\ensuremath{^{(\overline{j})}}
\end{equation}


The first term, $ K\ensuremath{^{(\overline{j})}}_{bb}$, accounts for very local interactions. It is sparse, and has exactly the fill-in of the matrix $K_{bb}\ensuremath{^{(j)}}$.
The second term, $K\ensuremath{^{(\overline{j})}}_{bi} {K\ensuremath{^{(\overline{j})}}_{ii}}^{-1} K\ensuremath{^{(\overline{j})}}_{ib}$, accounts for long range interactions: it depends on the whole structure (geometry and material) and couples all degrees of freedom together via in-depth interactions. It is thus a full matrix; this property can be seen as a consequence of the pseudo-differential nature of the underlying Steklov--Poincar\'e operator, of which the Schur complement is the discretization. It is important to note the minus sign: the short range part is very stiff and the global effects mitigate it.

Obviously, formula~\eqref{eq:opti_qb_lin} is intractable in a distributed environment. However, different strategies have been investigated to compute approximations at low cost -- see the next subsection for a quick review.

\medskip
In the nonlinear context, the use of a linear impedance is of course non-optimal. Moreover, the best linear impedance probably resembles the Schur complement of the remainder of the structure in the final configuration, which is of course unknown a priori. Our aim is then to find a heuristic which gives an easy-to-compute approximation of formula~\eqref{eq:opti_qb_lin}, to be applied to the initial tangent stiffness.



\subsection{Quick review}\label{ssec:review}

The question of finding a good approximation of the Schur complement of a domain is at the core of mixed domain decomposition methods like optimized Schwarz methods \cite{gander2006osm} or the Latin method \cite{ladeveze2000micro}. Studies have proved that they need to reproduce short-range effects (like local heterogeneity) but also structural effects (like the anisotropy induced by the slenderness of plate structures \cite{SAAVEDRA.2012.1}). When one wishes to choose an invariant scalar (or tensor in case of anisotropy) for each interface, it can be beneficial to use a coarse model for its estimation \cite{SAAVEDRA.2016.hal.1}. A possibility in order to better model the short-range interaction between interface nodes is to use Ventcell conditions instead of simple Robin conditions \cite{Hoang2014}; this makes it possible to recover the same sparsity for the impedance as for the stiffness of the subdomain. An extreme strategy is to use (scalar) Robin conditions on the Riesz image of the normal flux, leading to a fully populated impedance matrix \cite{DESMEURE.2011.3.1}. A more reasonable strategy is to use a strip approximation of the Schur complement \cite{magoules_algebraic_2006}, which can also be computed by adding elements to the subdomains \cite{Oumaziz2017}, in the spirit of restricted additive Schwarz methods \cite{cai1999restricted}.


From an algebraic point of view, the short range approximation $ K\ensuremath{^{(\overline{j})}}_{t_{bb}}$ (or even $ \operatorname{diag}(K\ensuremath{^{(\overline{j})}}_{t_{bb}})$) is sometimes used in FETI's preconditioner \cite{Far94bis}, where it is called the lumped approximation.
Let $\text{neigh}(j)$ be the set of the neighbors of subdomain $j$; we have
\begin{equation}\label{eq:ktbb_neigh}
\text{lumped: }K_{t_{bb},l}\ensuremath{^{\text{neigh}(j)}} \equiv K_{t_{bb}}\ensuremath{^{(\overline{j})}} = A\ensuremath{^{(j)^T}} \left( \sum_{s \in \text{neigh}(j)} A\ensuremath{^{(s)}} K\ensuremath{^{(s)}}_{t_{bb}} A\ensuremath{^{(s)^T}} \right) A\ensuremath{^{(j)}}
\end{equation}
or even:
\begin{equation}\label{eq:ktbb_neigh_diag}
\text{superlumped: }K_{t_{bb},sl}\ensuremath{^{\text{neigh}(j)}} \equiv \operatorname{diag}\left(K_{t_{bb}}\ensuremath{^{(\overline{j})}}\right) = A\ensuremath{^{(j)^T}} \left( \sum_{s \in \text{neigh}(j)} A\ensuremath{^{(s)}} \operatorname{diag}\left(K\ensuremath{^{(s)}}_{t_{bb}}\right) A\ensuremath{^{(s)^T}} \right) A\ensuremath{^{(j)}}
\end{equation}
Being an assembly, among a few subdomains, of sparse block-diagonal matrices, this term is quite cheap to compute, and does not require any extra computation, since local tangent stiffnesses are calculated anyway at each iteration of the solving process. The efficiency of the simple approximation \eqref{eq:ktbb_neigh} has been studied, in the context of nonlinear substructuring and condensation, in several research works \cite{cresta2007nonlinear, hinojosa2014domain, negrello2016substructured}, and has given good results when tested on rather homogeneous structures of standard shape. \medskip


In the domain decomposition framework for linear problems, long range interactions are taken into account thanks to coarse grid problems \cite{Far94bis,mandel1993balancing,nouy03b}, which enable the method to comply with Saint-Venant's principle. These are closely related to projection techniques inspired by homogenization \cite{ibrahimbegovic2003strong, feyel2000fe, ladeveze2000micro, guidault2007two, guidault2008multiscale} in order to get low rank approximations. Let $U$ be an orthonormal basis of a well-chosen subspace of displacements; the approximation can be written as:
\begin{equation}\label{eq:long_scale}
S\ensuremath{^{(\overline{j})}} \simeq U (U^T S\ensuremath{^{(\overline{j})}} U) U^T
\end{equation}
Saint-Venant's principle requires $U$ to contain at least the rigid body motions of $\Omega\ensuremath{^{(j)}}$; for computational efficiency it can be complemented by affine deformation modes or by displacements defined interface by interface.\medskip


However, if \textit{short range} approximations do not provide enough information to give a good representation of the influence of the faraway structure on a substructure $\Omega\ensuremath{^{(j)}}$, neither do \textit{long range} approximations give a good estimation of the near-field response to a load. Besides, in the context of small displacements, a lack of precision on the close structure is more problematic than the filtering of long range interactions: predominant mechanical reactions usually come from nearby elements of the mesh.

The best strategy for $Q\ensuremath{^{(j)}}_b$ would combine both \textit{short} and \textit{long} range formulations; however, such a combination has not been much investigated yet. In particular, it is not easy to ensure the positivity of the impedance if the two approximations are computed independently. In \cite{gendre2011two} an expensive scale separation was introduced in the context of non intrusive global/local computations, where $\Omega\ensuremath{^{(\overline{j})}}$ was somehow available (which is not the case in our distributed framework). We propose here a new expression for the parameter $Q\ensuremath{^{(j)}}_b$, in the context of nonlinear substructuring and condensation with mixed interface conditions, which combines short and long scale formulations at low computational cost.
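Before detailing the new expression, the following toy sketch illustrates the low-rank long range approximation \eqref{eq:long_scale} on a small condensed operator; the matrices are our own illustrative stand-ins (in 1D the trace of the single rigid body motion is the constant vector).
\begin{verbatim}
import numpy as np

# Toy condensed operator: Schur complement of a clamped chain of unit
# springs, observed on 4 'interface' dofs (purely illustrative).
K = np.zeros((8, 8))
for e in range(7):
    K[e:e+2, e:e+2] += [[1.0, -1.0], [-1.0, 1.0]]
K = K[1:, 1:]                              # clamp node 0
b = [3, 4, 5, 6]; i = [0, 1, 2]            # interface / interior split
S = K[np.ix_(b, b)] - K[np.ix_(b, i)] @ np.linalg.solve(K[np.ix_(i, i)],
                                                        K[np.ix_(i, b)])

# Long range approximation: keep only the rigid-body-like content.
U, _ = np.linalg.qr(np.ones((4, 1)))       # orthonormal coarse basis
S_long = U @ (U.T @ S @ U) @ U.T

print(np.linalg.matrix_rank(S_long))                 # 1: low rank
print(np.allclose(U.T @ S_long @ U, U.T @ S @ U))    # coarse energy kept
\end{verbatim}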
\subsection{Spring in series model}\label{ssec:springs}

Our heuristic for the impedance relies on the simple observation that finding a two-scale approximation of the flexibility of $\Omega\ensuremath{^{(\overline{j})}}$ may be more natural than for the stiffness. It is inspired by the simple model of two springs assembled in series: one spring models the stiffness of the neighboring subdomains whereas the second models the stiffness of the faraway subdomains (see figure \ref{fig:springs_series}). The resulting equivalent flexibility is the sum of the two flexibilities.
\begin{figure}[ht]
\begin{center}
\includegraphics{spring_series.pdf}
\end{center}
\caption{Springs in series model}
\label{fig:springs_series}
\end{figure}
In practice, in order to recover the structure of \eqref{eq:opti_qb_lin} while remaining tractable, we propose the local flexibility ${S_t\ensuremath{^{\text{neigh}(j)}}}^{-1}$ to be the inverse of a sparse matrix, and the long-range flexibility ${S_t^{\text{far}(j)}}^{-1}$ to be low-rank. The latter condition is also motivated by \cite{bebendorf2003existence,amestoy2016complexity}, where it is shown that low-rank approximants of fully populated inverse operators, arising from the FE discretization of elliptic problems, can be derived from hierarchical-matrix theory. Typically we have:
\begin{equation}\label{eq:flex2terms}
\begin{aligned}
{Q_b\ensuremath{^{(j)}}}^{-1} = {K_{t_{bb}}\ensuremath{^{\text{neigh}(j)}}}^{-1} + A\ensuremath{^{(j)^T}} V F V^T A\ensuremath{^{(j)}}
\end{aligned}
\end{equation}
where $K_{t_{bb}}\ensuremath{^{\text{neigh}(j)}}$ can refer for instance to expressions \eqref{eq:ktbb_neigh} or \eqref{eq:ktbb_neigh_diag}, $F$ is a small-sized $m \times m$ square matrix, and $V$ a basis of interface vectors of size $n_A \times m$.
Writing $V\ensuremath{^{(j)}} = A\ensuremath{^{(j)^T}} V$ for the local contribution of the basis $V$, expression \eqref{eq:flex2terms} can be inverted using the Sherman--Morrison formula:
\begin{equation}
\begin{aligned}
Q_b\ensuremath{^{(j)}} = K_{t_{bb}}\ensuremath{^{\text{neigh}(j)}} - \underset{W_b\ensuremath{^{(j)}}}{\underbrace{ K_{t_{bb}}\ensuremath{^{\text{neigh}(j)}} V\ensuremath{^{(j)}}}} \, \, \underset{ {M\ensuremath{^{(j)}}}^{-1} }{\underbrace{ \left( F^{-1} + V\ensuremath{^{(j)^T}} K_{t_{bb}}\ensuremath{^{\text{neigh}(j)}} V\ensuremath{^{(j)}} \right)^{-1}}} \, \, \underset{W_b\ensuremath{^{(j)^T}}}{\underbrace{ V\ensuremath{^{(j)^T}} K_{t_{bb}}\ensuremath{^{\text{neigh}(j)}} }}
\end{aligned}
\end{equation}
This stiffness is a sparse matrix corrected by a low-rank term; then, when solving the (generalized) Robin problems, the Sherman--Morrison formula can be used again:
\begin{equation}
\begin{aligned}
\text{let } \tilde{K}_t\ensuremath{^{(j)}} & \equiv (K_t\ensuremath{^{(j)}}+ t\ensuremath{^{(j)^T}} {K_{t_{bb}}^{\text{neigh}(j)} } t\ensuremath{^{(j)}})
\text{ and } W\ensuremath{^{(j)}} \equiv t\ensuremath{^{(j)^T}} W_b\ensuremath{^{(j)}}:\\
(K_t\ensuremath{^{(j)}}+ t\ensuremath{^{(j)^T}} Q_b\ensuremath{^{(j)}} t\ensuremath{^{(j)}})^{-1} & = \tilde{K}_t^{(j)^{-1}} + \tilde{K}_t^{(j)^{-1}} W\ensuremath{^{(j)}} \left(M\ensuremath{^{(j)}} - W\ensuremath{^{(j)^T}} \tilde{K}_t^{(j)^{-1}} W\ensuremath{^{(j)}} \right)^{-1} W\ensuremath{^{(j)^T}} \tilde{K}_t^{(j)^{-1}}
\end{aligned}
\end{equation}
The short-range term makes it possible to regularize the problem without impairing the sparsity of the stiffness matrix.
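The first of these two identities is easy to verify numerically. The sketch below checks, on arbitrary small dense matrices of our own choosing (the short-range term would be sparse in practice), that the Woodbury form of the stiffness is indeed the inverse of the additive flexibility \eqref{eq:flex2terms}:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
nb, m = 6, 2

# SPD stand-in for the short-range term K_tbb^neigh(j)
Kn = np.eye(nb) + 0.1 * np.diag(np.ones(nb - 1), 1)
Kn = Kn + Kn.T + nb * np.eye(nb)
V = rng.standard_normal((nb, m))          # local interface basis V^(j)
F = np.diag([0.5, 0.2])                   # small SPD long-range flexibility

# Impedance defined through its flexibility: Q^-1 = Kn^-1 + V F V^T
Q_inv = np.linalg.inv(Kn) + V @ F @ V.T

# Sherman-Morrison(-Woodbury) form used in the text: Q = Kn - W M^-1 W^T
W = Kn @ V
M = np.linalg.inv(F) + V.T @ Kn @ V
Q = Kn - W @ np.linalg.solve(M, W.T)

print(np.allclose(Q @ Q_inv, np.eye(nb)))   # True: the two forms agree
\end{verbatim}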
\subsection{A multi-scale interpretation}\label{ssec:multi}


In the spirit of \cite{oumaziz2}, we can derive a multi-scale interpretation of the additive form \eqref{eq:flex2terms} adopted for the interface impedance.

Starting from Algorithm~\ref{alg:robin-robin}, a macroscopic condition, inspired by the Latin method, can be imposed on the nodal reactions: they should satisfy a weak form of the interface balance, defined by a macroscopic basis $C_A$:
\begin{equation}\label{eq:macro_const}
C_A^T A\ensuremath{^{\diamondminus}} \lambda_b\ensuremath{^{\diamondvert}} = 0
\end{equation}
In the linear case, this condition can be enforced by the introduction of a Lagrange multiplier $\alpha$ (details can be found in \cite{oumaziz2}) in the interface condition \eqref{eq:intQnl}:
\begin{equation*}
\lambda_b\ensuremath{^{\diamondvert}} - \bar{\lambda}_b\ensuremath{^{\diamondvert}} + Q_b\ensuremath{^{\diamondbackslash}} \left( u_b\ensuremath{^{\diamondvert}} - \bar{u}_b\ensuremath{^{\diamondvert}} \right) + Q_b\ensuremath{^{\diamondbackslash}} A\ensuremath{^{\diamondminus^T}} C_A \alpha = 0
\end{equation*}
After algebraic calculations, writing the local equilibriums with this new condition leads to:
\begin{equation*}
\left[ K\ensuremath{^{\diamondbackslash}} + t\ensuremath{^{\diamondbackslash^T}} Q_b\ensuremath{^{\diamondbackslash}} \left( I\ensuremath{^{\diamondbackslash}} - P_{C_A}\ensuremath{^{\diamondbackslash}} \right) t\ensuremath{^{\diamondbackslash}} \right] u\ensuremath{^{\diamondvert}} = f_{ext}\ensuremath{^{\diamondvert}} + t\ensuremath{^{\diamondbackslash^T}} \left[ \bar{\lambda}_b\ensuremath{^{\diamondvert}} + Q_b\ensuremath{^{\diamondbackslash}} \left( I\ensuremath{^{\diamondbackslash}} - P_{C_A}\ensuremath{^{\diamondbackslash}} \right) \bar{u}_b\ensuremath{^{\diamondvert}} \right]
\end{equation*}
where $P_{C_A}\ensuremath{^{\diamondbackslash}} = A\ensuremath{^{\diamondminus^T}} C_A \left( C_A^T A\ensuremath{^{\diamondminus}} Q_b\ensuremath{^{\diamondbackslash}} A\ensuremath{^{\diamondminus^T}} C_A \right)^{-1} C_A^T A\ensuremath{^{\diamondminus}} Q_b\ensuremath{^{\diamondbackslash}}$ is a projector onto the low-dimensional subspace $\operatorname{Range}(A\ensuremath{^{\diamondminus^T}} C_A)$.

The coarse space associated with the macroscopic constraint \eqref{eq:macro_const} results not only in the propagation of the right-hand side over the whole structure ($P_{C_A}\ensuremath{^{\diamondbackslash}}$ is not sparse) but also in the modification of the impedance by the symmetric negative low-rank term $-Q_b\ensuremath{^{\diamondbackslash}} P_{C_A}\ensuremath{^{\diamondbackslash}}$.

\medskip


In our nonlinear context, considering the basic setting $Q_b\ensuremath{^{(j)}} = K_{t_{bb}}\ensuremath{^{\text{neigh}(j)}}$ \eqref{eq:ktbb_neigh} or \eqref{eq:ktbb_neigh_diag}, the modification $\bar{Q}_b\ensuremath{^{(j)}} = K_{t_{bb}}\ensuremath{^{\text{neigh}(j)}} - W_b\ensuremath{^{(j)}} M\ensuremath{^{(j)^{-1}}} W_b\ensuremath{^{(j)^T}}$ proposed in \eqref{eq:flex2terms} can be seen as the introduction of a multi-scale computation inside the mixed nonlinear substructuring and condensation method.
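The algebraic properties claimed for $P_{C_A}\ensuremath{^{\diamondbackslash}}$ (idempotency, low rank) are easy to check on a toy configuration; in the sketch below the assembly operator, impedance and coarse basis are arbitrary illustrative objects of ours, not those of an actual decomposition.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# Toy primal assembly A (4 global interface dofs, 6 local boundary dofs),
# boolean and full-rank.
A = np.array([[1, 0, 0, 0, 0, 0],
              [0, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 0],
              [0, 0, 0, 0, 0, 1]], dtype=float)

# SPD stand-in for the block-diagonal impedance and a coarse basis C_A
X = rng.standard_normal((6, 6))
Q = X @ X.T + 6 * np.eye(6)
C = rng.standard_normal((4, 2))

# P_CA = A^T C_A (C_A^T A Q A^T C_A)^-1 C_A^T A Q
G = A.T @ C
P = G @ np.linalg.solve(C.T @ A @ Q @ G, C.T @ A @ Q)

print(np.allclose(P @ P, P))          # idempotent: indeed a projector
print(np.linalg.matrix_rank(P))       # 2 = dim Range(A^T C_A): low rank
\end{verbatim}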
As said earlier, the propagation of the right-hand side is ensured by a well-built initialization, which can be realized by adapting the inner Newton criterion $\varepsilon_{NL}$ at each global iteration.




\subsection{Two-scale approximation of the flexibility}\label{ssec:two_scale}

\subsubsection{General idea}

From the previous analysis, we try to derive an approximation of the (linear) optimal flexibility \eqref{eq:opti_qb_lin} which takes the additive form of \eqref{eq:flex2terms}. Given a substructure $\Omega\ensuremath{^{(j)}}$, we write $S_A\ensuremath{^{(\overline{j})}} = \sum_{s \neq j} A\ensuremath{^{(s)}} S_t\ensuremath{^{(s)}} A\ensuremath{^{(s)^T}}$ the assembly of the local tangent Schur complements on the remainder $\Omega\ensuremath{^{(\overline{j})}}$.

Using the quotient and the inverse formulas for the Schur complement, we have:
\begin{equation}\label{eq:flexSt}
{S_t\ensuremath{^{(\overline{j})}}}^{-1} = \left( {S_A\ensuremath{^{(\overline{j})}}}^{-1} \right)_{bb} = A\ensuremath{^{(j)^T}} {S_A\ensuremath{^{(\overline{j})}}}^{-1} A\ensuremath{^{(j)}}
\end{equation}

\begin{remark}
We here assume a substructuring ensuring the invertibility of $S_A\ensuremath{^{(\overline{j})}}$ and $S_t\ensuremath{^{(\overline{j})}}$, i.e. Dirichlet conditions are not concentrated on only one subdomain, and the complementary part of each subdomain is connected. In practice, this is almost always the case; if not, a simple subdivision can overcome the problem.
\end{remark}

\medskip

Classical preconditioners of the BDD algorithm can then be used as approximations of the inverse of $S_A\ensuremath{^{(\overline{j})}}$.
We hence introduce $\hat{G}_A\ensuremath{^{(\overline{j})}} = \left[\, \ldots, \, \hat{A}\ensuremath{^{(s)}}_j R_b\ensuremath{^{(s)}}, \, \ldots \, \right]_{s \neq j}$ the concatenation of the scaled local traces of the rigid body motions ($R_b\ensuremath{^{(s)}}$) of the subdomains belonging to $\Omega\ensuremath{^{(\overline{j})}}$, with $\hat{A}\ensuremath{^{(s)}}_j$ scaled assembly operators taking into account the absence of matter inside subdomain $\Omega\ensuremath{^{(j)}}$. Considering the classical definition of the scaled assembly operators $\tilde{A}\ensuremath{^{(s)}}$ \cite{klawonn2001feti}, the modified operators $\hat{A}\ensuremath{^{(s)}}_j$ can be defined as:
\begin{equation*}
\begin{aligned}
 \hat{A}\ensuremath{^{(s)}}_j = & \left\lbrace \,\,\, \begin{aligned} \left( A\ensuremath{^{\diamondminus}} \Delta\ensuremath{^{\diamondbackslash}} A\ensuremath{^{\diamondminus^T}} - A\ensuremath{^{(j)}} \Delta\ensuremath{^{(j)}} A\ensuremath{^{(j)^T}} \right)^{-1} A\ensuremath{^{(s)}} \Delta\ensuremath{^{(s)}} \quad \text{ if } s \neq j \\
0 \quad \text{ if } s = j \end{aligned} \right. \\
& \hspace{0.5cm} \text{ with } \,\, \Delta\ensuremath{^{(s)}} \equiv \operatorname{diag} \left( K_{t_{bb}}\ensuremath{^{(s)}} \right)
\end{aligned}
\end{equation*}
Let $P_A\ensuremath{^{(\overline{j})}}$ be the $S_A\ensuremath{^{(\overline{j})}}$-orthogonal projector on $\operatorname{Ker}\left( \hat{G}_A\ensuremath{^{(\overline{j})^T}} S_A\ensuremath{^{(\overline{j})}} \right)$:
\begin{align*}
 P_A\ensuremath{^{(\overline{j})}} = I - \hat{G}_A\ensuremath{^{(\overline{j})}} \left( \hat{G}_A\ensuremath{^{(\overline{j})^T}} S_A\ensuremath{^{(\overline{j})}} \hat{G}_A\ensuremath{^{(\overline{j})}} \right)^{-1} \hat{G}_A\ensuremath{^{(\overline{j})^T}} S_A\ensuremath{^{(\overline{j})}}
\end{align*}
we have:
\begin{equation}\label{eq:sep_inv_proj}
S_A\ensuremath{^{(\overline{j})^{-1}}} = P_A\ensuremath{^{(\overline{j})}} S_A\ensuremath{^{(\overline{j})^{-1}}} P_A\ensuremath{^{(\overline{j})^T}} + \left( I - P_A\ensuremath{^{(\overline{j})}} \right) S_A\ensuremath{^{(\overline{j})^{-1}}} \left( I - P_A\ensuremath{^{(\overline{j})}} \right)^T
\end{equation}
BDD theory states that, in the first term, $S_A\ensuremath{^{(\overline{j})^{-1}}}$ can be conveniently approximated by a scaled sum of local inverses\footnote{The GENEO theory \cite{SPILLANE:2013:FETI_GenEO_IJNME} states that, if needed, computable extra modes shall be inserted in $\hat{G}_A\ensuremath{^{(\overline{j})}}$ in order to maintain the quality of the approximation.}.
After developing and factorizing, we have a first approximation of the flexibility:
\begin{equation}\label{eq:expr_inv_st}
\begin{aligned}
Q_{BDD}\ensuremath{^{(j)^{-1}}} \equiv A\ensuremath{^{(j)^T}} \left( P_A\ensuremath{^{(\overline{j})}} \sum_{s \neq j} \hat{A}\ensuremath{^{(s)}}_j {S_t\ensuremath{^{(s)}}}^{\dagger} \hat{A}\ensuremath{^{(s)^T}}_j P_A\ensuremath{^{(\overline{j})^T}} + \hat{G}_A\ensuremath{^{(\overline{j})}} \left( \hat{G}_A\ensuremath{^{(\overline{j})^T}} S_A\ensuremath{^{(\overline{j})}} \hat{G}_A\ensuremath{^{(\overline{j})}} \right)^{-1} \hat{G}_A\ensuremath{^{(\overline{j})^T}} \right) A\ensuremath{^{(j)}} \\
\end{aligned}
\end{equation}



\subsubsection{Long range interactions term}

The second term of expression \eqref{eq:expr_inv_st}, written $\hat{F}_{A,2}\ensuremath{^{(j)}}$, is a matrix of low rank $m^{(j)}$, where $m^{(j)}$ is the number of rigid body motions of the neighbors. It could be used as is; however, its computation involves the inversion of the quantity $\hat{G}_A\ensuremath{^{(\overline{j})^T}} S_A\ensuremath{^{(\overline{j})}} \hat{G}_A\ensuremath{^{(\overline{j})}}$, an interface matrix of size $m\ensuremath{^{(\overline{j})}} \times m\ensuremath{^{(\overline{j})}}$, where $m\ensuremath{^{(\overline{j})}}$ is the number of local rigid body modes of the whole remainder $\Omega\ensuremath{^{(\overline{j})}}$. In the context of large structures with a high number of subdomains, $m^{(\overline{j})}$ can increase drastically; avoiding the computation and factorization of such a matrix then becomes quite attractive.
Moreover, during the computation of the structure coarse problem, a closely related quantity is already assembled and factorized: the matrix $\tilde{G}_A^T S_A \tilde{G}_A$ -- with $S_A \equiv \sum_{s=1}^{N_s} A\ensuremath{^{(s)}} S\ensuremath{^{(s)}} A\ensuremath{^{(s)^T}}$ and $\tilde{G}_A \equiv \left[ \ldots , \, \tilde{A}\ensuremath{^{(s)}} R_b\ensuremath{^{(s)}}, \, \ldots \right]$. Compared to $\hat{G}_A\ensuremath{^{(\overline{j})^T}} S_A\ensuremath{^{(\overline{j})}} \hat{G}_A\ensuremath{^{(\overline{j})}}$, the addition of the local term linked to $\Omega\ensuremath{^{(j)}}$ in $\tilde{G}_A^T S_A \tilde{G}_A$ somewhat balances the classical scaling on its boundary (which takes into account the absence of matter inside $\Omega\ensuremath{^{(j)}}$); we thus propose:
\begin{equation*}
\hat{F}_{A,2}\ensuremath{^{(j)}} \simeq A\ensuremath{^{(j)^T}} \hat{G}_A\ensuremath{^{(\overline{j})}} \left( \tilde{G}_A^T S_A \tilde{G}_A \right)^{-1} \hat{G}_A\ensuremath{^{(\overline{j})^T}} A\ensuremath{^{(j)}} \equiv \tilde{F}_{A,2}\ensuremath{^{(j)}}
\end{equation*}

\subsubsection{Short range interactions term}

The first term of expression \eqref{eq:expr_inv_st}, written $\hat{F}_{A,1}\ensuremath{^{(j)}}$, can also be simplified. First, for numerical efficiency, a diagonal lumping technique is used to approximate the local Schur complements (as explained in section \ref{ssec:review}). Then, in order to preserve sparsity, the projectors are removed. Assuming stiffness scaling is used, we then directly recover the inverse of the superlumped stiffness of the neighbors:
\begin{equation}\label{eq:tild_ktbb}
\begin{aligned}
\hat{F}_{A,1}\ensuremath{^{(j)}} & \simeq A\ensuremath{^{(j)^T}} \sum_{s \in \text{neigh}(j)} \hat{A}\ensuremath{^{(s)}}_j \operatorname{diag}\left( K_{t_{bb}}\ensuremath{^{(s)}} \right)^{-1} \hat{A}\ensuremath{^{(s)^T}}_j A\ensuremath{^{(j)}} \\
& = A\ensuremath{^{(j)^T}} \left( \sum_{s \in \text{neigh}(j)} A\ensuremath{^{(s)}} \operatorname{diag} \left( K_{t_{bb}}\ensuremath{^{(s)}} \right) A\ensuremath{^{(s)^T}} \right)^{-1} A\ensuremath{^{(j)}} = K_{t_{bb},\, sl}\ensuremath{^{\text{neigh}(j)^{-1}}}
\end{aligned}
\end{equation}

\subsubsection{Scaling issue}

A way to avoid building the modified scaled assembly operators $\hat{A}\ensuremath{^{(s)}}_j$ is to notice that for $s \neq j$, the following relation holds between the modified and classical scaling operators $\tilde{A}\ensuremath{^{(s)}}$ \cite{klawonn2001feti}:
\begin{equation*}
\begin{aligned}
A\ensuremath{^{(j)^T}} \hat{A}_j\ensuremath{^{(s)}} & = \tilde{D}\ensuremath{^{(j)}} A\ensuremath{^{(j)^T}} \tilde{A}\ensuremath{^{(s)}} \\
\text{with }\tilde{D}\ensuremath{^{(j)}} \equiv A\ensuremath{^{(j)^T}} \left( A\ensuremath{^{\diamondminus}} \Delta\ensuremath{^{\diamondbackslash}} A\ensuremath{^{\diamondminus^T}} \right) & \left( A\ensuremath{^{\diamondminus}} \Delta\ensuremath{^{\diamondbackslash}} A\ensuremath{^{\diamondminus^T}} - A\ensuremath{^{(j)}} \Delta\ensuremath{^{(j)}} A\ensuremath{^{(j)^T}} \right)^{-1} A\ensuremath{^{(j)}}
\end{aligned}
\end{equation*}
and we observe that the local diagonal matrix $\tilde{D}\ensuremath{^{(j)}}$ can be extracted without cost from $\tilde{A}\ensuremath{^{(j)}}$:
\begin{equation*}
\tilde{D}\ensuremath{^{(j)}} = A\ensuremath{^{(j)^T}} \left( I - A\ensuremath{^{(j)}} \tilde{A}\ensuremath{^{(j)^T}} \right)^{-1} A\ensuremath{^{(j)}}
\end{equation*}
\begin{remark}
With evident notations, for a scaling based on the material stiffness, the diagonal coefficient of $\tilde{D}\ensuremath{^{(j)}}$ associated with degree of freedom $x$ is equal to:
\begin{equation*}
\begin{aligned}
\tilde{D}\ensuremath{^{(j)}}_{xx} = \dfrac{ \sum_s K_{t_{xx}}\ensuremath{^{(s)}} }{ \sum_{s \neq j } K_{t_{xx}}\ensuremath{^{(s)}} } = \left( 1 - \dfrac{K_{t_{xx}}\ensuremath{^{(j)}} }{ \sum_s K_{t_{xx}}\ensuremath{^{(s)}}} \right)^{-1} = \left( 1 - \tilde{A}\ensuremath{^{(j)}}_x \right)^{-1}
\end{aligned}
\end{equation*}\qed
\end{remark}
\medskip


\noindent \textit{Final expression.} To conclude, we propose the following two-scale impedance:
\begin{equation}\label{eq:final_expr_qb}
\left( Q\ensuremath{^{(j)}}_{b, \,2s} \right)^{-1} = K_{t_{bb},\,sl}\ensuremath{^{\text{neigh}(j)^{-1}}} + \tilde{D}\ensuremath{^{(j)}} A\ensuremath{^{(j)^T}} \tilde{G}_A\ensuremath{^{(\overline{j})}} \left( \tilde{G}_A^T S_A \tilde{G}_A \right)^{-1} \tilde{G}_A\ensuremath{^{(\overline{j})^T}} A\ensuremath{^{(j)}} \tilde{D}\ensuremath{^{(j)}}
\end{equation}


\subsection{Attempt to enrich the short-range approximation}\label{ssec:Qritz}

The short range part of the impedance, corresponding to the sparse approximation of $\hat{F}_{A,1}\ensuremath{^{(j)}}$ by $K_{t_{bb},\,sl}\ensuremath{^{\text{neigh}(j)^{-1}}}$, seems very crude. In particular, we most probably underestimate the flexibility of the neighbors by using a diagonal operator.

We believe it is worth mentioning the tentative improvement which consisted in adding another low rank term:
\begin{equation*}
\hat{F}_{A,1}\ensuremath{^{(j)}} \simeq K_{t_{bb},\,sl}\ensuremath{^{\text{neigh}(j)^{-1}}} + \tilde{D}\ensuremath{^{(j)}} A\ensuremath{^{(j)^T}} V_k \Theta_k V_k^T A\ensuremath{^{(j)}} \tilde{D}\ensuremath{^{(j)}}
\end{equation*}
where $\Theta_k$ is a diagonal matrix and $V_k$ an orthonormal basis, approximations of the eigen-elements of $S_A\ensuremath{^{(\overline{j})^{-1}}}$ associated with the higher part of the spectrum. They could be obtained at a moderate cost by post-processing the tangent BDD iterations in the spirit of \cite{gosselet2013total} (but considering the classical eigenvalues instead of the generalized ones).

This low rank term could be concatenated with the one associated with the rigid body motions $\tilde{F}_{A,2}\ensuremath{^{(j)}}$, and thus does not modify the usability of the approximation.
We observed that it led to a stiffness which was closer to our reference $S_t\ensuremath{^{(\overline{j})^{-1}}}$ (measured with the Frobenius norm). But in practice, when using it as the impedance in our numerical experiments, the reduction achieved in iteration counts was not worth the additional cost of the enrichment term -- this is why we do not present it in detail. This ``improvement'' may be more useful on other classes of nonlinear problems for which it would be important not to overestimate the stiffness of the remainder of the structure.
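Before turning to the numerical assessment, the sketch below summarizes how the flexibility \eqref{eq:final_expr_qb} can be assembled from quantities readily available in a BDD implementation. The function name, toy sizes and values are ours and purely illustrative; the coarse matrix is handled as a dense array for clarity, and in practice the resulting flexibility is inverted via the Sherman--Morrison formula of subsection~\ref{ssec:springs}.
\begin{verbatim}
import numpy as np

def two_scale_impedance(Kbb_sl_neigh, D_j, G_bar, coarse_K, A_j):
    """Assemble (Q_b,2s^(j))^-1 = superlumped flexibility + coarse term.

    Kbb_sl_neigh : (nb, nb) diagonal superlumped neighbor stiffness
    D_j          : (nb, nb) diagonal rescaling matrix D~^(j)
    G_bar        : (nA, mbar) scaled traces of the remainder's rigid modes
    coarse_K     : (mbar, mbar) coarse matrix G~^T S_A G~ (already
                   factorized in the BDD solver; dense copy used here)
    A_j          : (nA, nb) boolean primal assembly operator of subdomain j
    """
    short = np.linalg.inv(Kbb_sl_neigh)        # sparse/diagonal in practice
    long_ = (D_j @ A_j.T @ G_bar
             @ np.linalg.solve(coarse_K, G_bar.T) @ A_j @ D_j)
    return short + long_                       # a flexibility, inverted
                                               # later via Sherman-Morrison

# Toy usage with arbitrary illustrative values
nb, nA, mbar = 2, 3, 2
A_j = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
Kbb = np.diag([4.0, 5.0]); D = np.diag([2.0, 1.5])
G = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
CK = np.array([[3.0, 1.0], [1.0, 2.0]])
Q_inv = two_scale_impedance(Kbb, D, G, CK, A_j)
print(np.all(np.linalg.eigvalsh(Q_inv) > 0))   # SPD flexibility
\end{verbatim}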
\section{Results}

\subsection{Two test cases}\label{sec:two_test_cases}


The efficiency of the expression \eqref{eq:final_expr_qb} is evaluated on two numerical test cases. The first test case is a bi-material beam under a bending load, represented in figure~\ref{fig:test_case_bimat}. Material and geometrical parameters are given in table \ref{tab:params_tests_case}: one of the two materials is chosen to be elastoplastic with linear hardening, the other one is chosen to remain elastic. The load is applied through an imposed displacement on the edge defined by $x=L$.

The second test case is a homogeneous multiperforated beam under a bending load, represented in figure~\ref{fig:test_case_multiperf}. Material and geometrical parameters are given in table \ref{tab:params_tests_case}: the material is chosen to be elastoplastic with linear hardening. The load is applied through an imposed displacement $u_D$ on the edge defined by $x=L$.


\begin{figure}[!ht]
\includegraphics[width=\textwidth]{test_case_betarme2.pdf}
\caption{Bi-material beam: partition and loading}
\label{fig:test_case_bimat}
\end{figure}

\begin{figure}[!ht]
\begin{center}
\includegraphics[width=15cm]{test_case_multiperf.pdf}
\end{center}
\caption{Multiperforated beam: partition and loading}
\label{fig:test_case_multiperf}
\end{figure}


\begin{table}[ht]
\begin{center}
\begin{minipage}{0.56\linewidth}
\begin{tabular}{|c:c|c|}
\hline \hline
\multicolumn{3}{|c|}{Bi-material beam} \\
\hline \hline
\multicolumn{3}{|c|}{ Material parameters} \\
\hline \hline
 & Material 1 & Material 2 \\
Young's modulus & $E_1 = 420e2$ & $E_2 = 210e6$ \\
Poisson coefficient & $\nu_1 = 0.3$ & $\nu_2 = 0.3$ \\
Elastic limit & & $\sigma_{0_2} = 420e3$ \\
Hardening coefficient & & $h_2 = 1e3$ \\
\hline \hline
\multicolumn{3}{|c|}{Geometrical parameters} \\
\hline \hline
Total length & \multicolumn{2}{:c|}{L $ = 13$} \\
Total height & \multicolumn{2}{:c|}{H $ = 2$} \\
Height of an armature & \multicolumn{2}{:c|}{H$_\text{a} = 0.25$} \\
\hline \hline
\end{tabular}
\end{minipage}
\begin{minipage}{0.43\linewidth}
\begin{tabular}{|c:c|}
\hline \hline
\multicolumn{2}{|c|}{Multiperforated beam} \\
\hline \hline
\multicolumn{2}{|c|}{ Material parameters} \\
\hline \hline
 & \\
Young's modulus & $E = 210e6$ \\
Poisson coefficient & $\nu = 0.3$ \\
Elastic limit & $\sigma_0 = 420e3$ \\
Hardening coefficient & $h = 1e6$ \\
\hline \hline
\multicolumn{2}{|c|}{Geometrical parameters} \\
\hline \hline
Length & L $ = 10$ \\
Height & H $ = 1$ \\
Hole radius & r $ = 2/30$ \\
\hline \hline
\end{tabular}
\end{minipage}
\caption{Material and geometrical parameters}
\label{tab:params_tests_case}
\end{center}
\end{table}





\subsection{Elastic analysis}

The ultimate goal of this paper is to assess the performance of the new impedance \eqref{eq:final_expr_qb} in the nonlinear multi-scale distributed context. Before we reach that point, a preliminary mono-scale elastic study is performed in order to verify that the heuristic developed in the previous sections is actually able to capture both short and long range interactions within the structure.

Loads are kept low enough here to remain within the elastic domain of every material: the bi-material beam and the multiperforated beam are both submitted to a bending load of intensity $u_D = 1.5 \times 10^{-3}$. Moreover, the decomposition is for now only performed along the $x$-axis (multiple points will be involved in the next section, where the nonlinear multi-scale context is considered).
One interest of the linear elastic case with slab-wise decomposition lies in the ability to express the optimal interface impedance: $Q_b\ensuremath{^{(j)}} = S_t\ensuremath{^{(\overline{j})}}$ (see \ref{ssec:motivation}). Even though the computational cost of this parameter would be absolutely unaffordable in a real parallel resolution, it was computed here for the purpose of our analysis. A comparison with an optimal reference can thus be made for the two following expressions:
\begin{itemize}[label=$\circ$]
\item the classical choice $K_{bb,l}\ensuremath{^{\text{neigh}(j)}}$: see \eqref{eq:ktbb_neigh},
\item the new expression $Q_{b,\,2s}\ensuremath{^{(j)}}$: see \eqref{eq:final_expr_qb}.
\end{itemize}
Given the alternative formulation we chose for the mixed nonlinear substructuring and condensation method (see section \ref{sec:altern_formul}), an elastic resolution would be strictly equivalent to a primal BDD resolution. Therefore, no comparison of different interface impedances is possible with Algorithm~\ref{alg:robin-bdd}. A mono-scale FETI-2LM solver \cite{roux2009feti} was hence implemented, corresponding to the first formulation of the mixed interface problem with the $\mu_b\ensuremath{^{\diamondvert}}$ unknown \eqref{eq:tg_pb}. This algorithm makes it possible to solve linear problems with Robin interface transmission conditions.

Note that an optimal coarse problem could be added in order to recover an efficient multi-scale solver \cite{dubois2012optimized,haferssas2015robust,loisel2015optimized}. However, this augmentation strategy would make it impossible to discern the efficiency of the long-range interaction term of our two-scale impedance. Again, our aim is not to compete with augmented Krylov solvers for linear problems but to find an alternative way, compatible with nonlinear problems, to introduce long-range effects. The mono-scale formulation is thus preserved in order to evaluate the ability of \eqref{eq:final_expr_qb} to introduce, into the local equilibriums, information related to the interactions with the far structure, in a linear context where the optimal parameter is known.


\begin{table}[!ht]
\begin{center}
\subfloat[Bi-material beam]{\includegraphics[width=10cm]{results_lin_bimat.pdf}\label{subtab:results_lin_bimat}} \qquad
\subfloat[Multiperforated beam]{\includegraphics[width=10cm]{results_lin_multiperf.pdf}\label{subtab:results_lin_multiperf}}
\end{center}
\caption{Comparison of the three interface impedances: linear behavior}
\label{tab:results_lin}
\end{table}

Results are given in table \ref{tab:results_lin} for the two previously introduced test cases.


As expected, for both test cases, the optimal interface impedance $S_t^{(\overline{j})}$ completes the resolution after a number of iterations equal to the number of subdomains minus one. Given the partition into subdomains (no multiple points) and the absence of a coarse problem, this is the best convergence rate that can be achieved: mixed transmission conditions with the interface impedance $S_t^{(\overline{j})}$ are optimal.

The classical choice $K_{bb,l}^{\text{neigh}(j)}$ does not inject any information on the long-range interactions of a subdomain with the faraway structure into the local equilibriums: the number of iterations drastically increases with the number of substructures.
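The influence of the impedance on such Robin-based iterations can be reproduced on a very small example. The following Python sketch is our own toy setting (a 1D Poisson problem split into two subdomains sharing one interface dof, unrelated to the test cases above); it implements a FETI-2LM-like two-sided Robin exchange and compares a neighbor-stiffness impedance with the exact Schur complement of the other subdomain.
\begin{verbatim}
import numpy as np

n = 16                              # elements per subdomain
h = 0.5 / n

def neumann_stiffness():
    """Subdomain stiffness of -u''=f: Dirichlet outer end eliminated,
    free (Neumann) interface node as the last dof."""
    K = (2.0 * np.eye(n) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h
    K[-1, -1] = 1.0 / h             # interface (boundary) node
    return K

A = neumann_stiffness()             # identical for both subdomains here
e = np.zeros(n); e[-1] = 1.0
b1 = np.full(n, h); b1[-1] = 0.5 * h   # f = 1 on the left subdomain
b2 = np.zeros(n)                       # f = 0 on the right subdomain

# exact interface Schur complement of one subdomain
S = A[-1, -1] - A[-1, :-1] @ np.linalg.solve(A[:-1, :-1], A[:-1, -1])

def solve_2lm(q, tol=1e-10, itmax=500):
    lam1 = lam2 = 0.0
    Aq = A + q * np.outer(e, e)     # local Robin (impedance) operator
    for it in range(1, itmax + 1):
        u1 = np.linalg.solve(Aq, b1 + lam1 * e)
        u2 = np.linalg.solve(Aq, b2 + lam2 * e)
        new1 = -lam2 + 2.0 * q * u2[-1]   # two-sided Robin exchange
        new2 = -lam1 + 2.0 * q * u1[-1]
        if max(abs(new1 - lam1), abs(new2 - lam2)) < tol:
            break
        lam1, lam2 = new1, new2
    return it, u1[-1]               # iterations, interface displacement

print(solve_2lm(q=A[-1, -1]))       # 'neighbor stiffness' impedance
print(solve_2lm(q=S))               # optimal impedance: other side's Schur
\end{verbatim}
On this toy problem, the Schur-complement impedance converges in two exchanges, while the stiffness-based impedance needs on the order of a hundred, mirroring the trend observed in table \ref{tab:results_lin}.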
The new expression $Q_{b,2s}^{(j)}$ introduced in this paper greatly reduces the number of FETI-2LM iterations compared to the classical choice $K_{bb,l}^{\text{neigh}(j)}$: gains are between 77 and 98\%. This should mostly be due to the additive form of expression \eqref{eq:final_expr_qb}, with the introduction of a long-range interaction term in the flexibility; obviously, the absence of a coarse problem in the resolution reinforces the benefits of this term. The forthcoming nonlinear study, based on Algorithm~\ref{alg:robin-bdd}, will put this expression to work in a multiscale computation context.

The performance of expression $Q_{b,2s}^{(j)}$ is evidently not as good as that of the optimal expression $S_t^{(\overline{j})}$, but the iteration count only grows to about ten times the optimal number (while it reaches about a hundred times the optimal number for $K_{bb,l}^{\text{neigh}(j)}$). We also recall that the interface impedance $S_t^{(\overline{j})}$ cannot be computed in parallel resolutions; expression $Q_{b,2s}^{(j)}$, on the contrary, is fully and easily tractable.


The expression introduced here to evaluate the interface impedance thus seems, at least in the linear case, to achieve great performance at very low cost.

\begin{remark} As said earlier, the second effect of multiscale approaches (beside modifying the Robin condition) lies in the instantaneous propagation of the right-hand side. In our approach, the absence of a coarse problem is somehow compensated by the presence of tangent interface systems (solved with a state-of-the-art multi-scale BDD method). As an example, we initialized our linear FETI-2LM solver with the fields resulting from one BDD iteration. For the multiperforated beam split into 15 subdomains, the number of FETI-2LM iterations goes from 113 to 89, which is significant (for fewer subdomains, the coarse problem is too small to bring any valuable piece of information). In the spirit of \cite{negrello2016substructured}, a tuned setting of the solvers' thresholds (synchronized with the evolution of the global residual, i.e.\ the precision of the global solution) could achieve a good compromise between a global spread of the information and independent computations.
\end{remark}

\subsection{Plastic analysis}\label{sec:it_numbers}

The evaluation of the performance of expression \eqref{eq:final_expr_qb} is continued with a plastic evolution study. The two test cases are submitted to bending loads, applied incrementally. The bi-material beam loading is decomposed as follows:
\begin{align}
u_D & = \left[ 0.05, \, \, 0.1, \, \, 0.15, \, \, 0.2, \, \, 0.25, \, \, 0.3, \, \, 0.35, \, \, 0.375, \, \, 0.4, \, \, 0.425, \, \, 0.45 \right] u_{max} \label{eq:load_incre_bimat}\\
u_{max} & = 7.1 \nonumber
\end{align}
For the multiperforated beam loading, the incremental decomposition is set to:
\begin{align}
u_D & = \left[ 0.4, \, \, 0.6, \, \, 0.8, \, \, 1, \, \, 1.15, \, \, 1.3, \, \, 1.45, \, \, 1.5 \right] u_{max} \label{eq:load_incre_multiperf} \\
u_{max} & = 0.275 \nonumber
\end{align}
\begin{remark}
For the sake of clarity, only every other load increment of \eqref{eq:load_incre_bimat} and \eqref{eq:load_incre_multiperf} is reported in the forthcoming results tables.
\end{remark}

The substructuring of the bi-material beam involves 13 subdomains along the $x$-axis, while the multiperforated beam is decomposed into 30 subdomains with multiple points (see figures~\ref{fig:test_case_bimat} and~\ref{fig:test_case_multiperf}).


The numbers of Krylov iterations, cumulated over global Newton loops and load increments, are reported for the three interface impedances $S_t^{(\overline{j})}$, $K_{bb,l}^{\text{neigh}(j)}$ and $Q_{b,2s}^{(j)}$ and the two test cases in tables \ref{tab:iter_bimat} and \ref{tab:iter_multiperf}. Indeed, the performance of the solver is in particular linked to the number of processor communications, which is directly proportional to the number of Krylov iterations.

The computation of local tangent operators, at each global iteration, is also a costly operation. The numbers of global Newton iterations, cumulated over load increments, are thus also reported for each expression of the interface impedance and the two test cases. Note that, in these cases, the number of Krylov iterations is almost constant per linear system, so the cumulated numbers of Krylov iterations are nearly proportional to the numbers of global Newton iterations; the latter are therefore only reported for the last load increment.

A fourth approach has been added to the study, denoted NKS in both tables, corresponding to the ``classical'' resolution process used in nonlinear structural mechanics problems: a global Newton algorithm, combined with a linear DD solver for the tangent systems. The main difference between the nonlinear substructuring and condensation method and this classical technique resides in the nonlinear/linear algorithms used for local resolutions. The resulting comparisons with approaches $S_t^{(\overline{j})}$, $K_{bb,l}^{\text{neigh}(j)}$ and $Q_{b,2s}^{(j)}$ hence represent the gains that can be achieved with the mixed nonlinear substructuring and condensation method, in the more general framework of nonlinear solvers. \medskip



\begin{table}[!ht]
\begin{center}
\begin{minipage}{\linewidth}
\begin{center}
\begin{tabular}{|c|cccccc|c|}
\hline
\multicolumn{7}{|c|}{Krylov} & Global Newton \\
\hline
load inc. & 0.05 & 0.15 & 0.25 & 0.35 & 0.4 & 0.45 & 0.45\\
\hline
$S_t^{(\overline{j})}$ & 37 & 229 & x & & & & x \\
$K_{bb,l}^{\text{neigh}(j)}$ & 36 & 259 & 598 & 978 & 1357 & 1772 & 47 \\
$Q_{b,2s}^{(j)}$ & 37 & 266 & 580 & 891 & 1204 & 1514 & 39 \\
\hline
\hline
NKS & 74 & 296 & 633 & 970 & 1344 & 1795 & 48 \\
\hline
\multicolumn{8}{c}{ } \\
\hline
\multicolumn{8}{|c|}{Gains (\%) } \\
\hline
$Q_{b,2s}^{(j)}$ vs. $K_{bb,l}^{\text{neigh}(j)}$ & -3 & -3 & 3 & 9 & 11 & 15 & 17 \\
\hline
\hline
$Q_{b,2s}^{(j)}$ vs. NKS & 50 & 10 & 8 & 8 & 10 & 16 & 19 \\
\hline
\end{tabular}
\end{center}
\end{minipage}
\end{center}
\caption{Bi-material beam: Krylov cumulated iterations over load increments, global Newton cumulated iterations. An `x' indicates that the resolution diverged.}
\label{tab:iter_bimat}
\end{table}
\begin{table}[!ht]
\begin{center}
\begin{minipage}{\linewidth}
\begin{center}
\begin{tabular}{|c|cccc|c|}
\hline
\multicolumn{5}{|c|}{Krylov} & Global Newton\\
\hline
load inc. & 0.6 & 1 & 1.3 & 1.5 & 1.5\\
\hline
$S_t^{(\overline{j})}$ & 48 & 172 & 322 & 481 & 30 \\
$K_{bb,l}^{\text{neigh}(j)}$ & 61 & 212 & 373 & 548 & 35 \\
$Q_{b,2s}^{(j)}$ & 48 & 170 & 300 & 438 & 28 \\
\hline
\hline
NKS & 73 & 222 & 385 & 561 & 36 \\
\hline
\multicolumn{6}{c}{ } \\
\hline
\multicolumn{6}{|c|}{Gains (\%)} \\
\hline
$Q_{b,2s}^{(j)}$ vs. $K_{bb,l}^{\text{neigh}(j)}$ & 21 & 20 & 20 & 20 & 20 \\
$Q_{b,2s}^{(j)}$ vs. $S_t^{(\overline{j})}$ & 0 & 1 & 7 & 9 & 7 \\
\hline
\hline
$Q_{b,2s}^{(j)}$ vs. NKS & 34 & 23 & 22 & 22 & 22 \\
\hline
\end{tabular}
\end{center}
\end{minipage}
\end{center}
\caption{Multiperforated beam: Krylov cumulated iterations over load increments, global Newton cumulated iterations}
\label{tab:iter_multiperf}
\end{table}


A first preliminary observation compares the results for the interface impedance $S_t^{(\overline{j})}$ in the linear and the nonlinear cases: our initial guess, namely that $S_t^{(\overline{j})}$ was the best approximation of the interface impedance we could define analytically, proves mistaken in the nonlinear formulation. For the bi-material beam, for instance, the resolution ended with a divergence of the local Newton solvers, caused by spurious high levels of plasticity inside the subdomains, artifacts of the resolution; this may be due to an excessively soft interface impedance, which lets the material deform more than necessary.

Secondly, although our first guess was apparently misguided, the additive expression we derived from it behaves very satisfactorily: the best performance is now achieved, in the nonlinear process, with the new expression of the interface impedance $Q_{b,2s}^{(j)}$. Gains in terms of cumulated Krylov iterations, compared to the classical interface impedance $K_{bb,l}^{\text{neigh}(j)}$, vary from 15\% to 20\% at the end of the resolution: a benefit which should represent a non-negligible decrease in CPU time for large structure problems (where each communication operation can be highly time-consuming). Compared to the interface impedance $S_t^{(\overline{j})}$ (which is not computationally affordable in practice), only the multiperforated beam can be effectively studied, since convergence was not reached for the bi-material beam: the gains for approach $Q_{b,2s}^{(j)}$ reach up to 9\% at the end of the resolution, in terms of cumulated numbers of Krylov iterations.

\begin{remark}
The bi-material beam was meshed with 25~789 degrees of freedom, and its substructuring into 13 subdomains involved 984 interface degrees of freedom. The multiperforated beam was meshed with 30~515 degrees of freedom, and its substructuring into 30 subdomains involved 1641 interface degrees of freedom. Despite the relative smallness of these test cases, we expect them to be representative of computations on larger structures. Unfortunately, our Octave-based code did not allow meaningful time measurements or large scale computations. Moreover, limiting communications, as we try to do, would be even more valuable on computations involving many processors. The number of Krylov iterations thus seems to be the fairest and most reliable performance measurement.
\end{remark}

Comparison with the classical method shows similar results for both test cases: at the end of the resolution, gains vary from 16 to 22\% for cumulated Krylov iterations, and from 19 to 22\% for cumulated global Newton iterations. This gain corresponds to the overall performance that the mixed nonlinear substructuring and condensation method can achieve, compared to classical procedures.
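For reference, the gains quoted in the tables are, in our reading of their convention, relative reductions of cumulated iteration counts. For instance, for the multiperforated beam at the last load increment of table \ref{tab:iter_multiperf}:
\begin{equation*}
\left( 1 - \frac{438}{548} \right) \times 100 \simeq 20\,\% \quad \text{vs. } K_{bb,l}^{\text{neigh}(j)}, \qquad
\left( 1 - \frac{438}{561} \right) \times 100 \simeq 22\,\% \quad \text{vs. NKS},
\end{equation*}
which matches the last columns of the table.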
\begin{remark}
The rather limited performance of the mixed nonlinear substructuring and condensation method with the classical interface impedance $K_{bb,l}^{\text{neigh}(j)}$, compared to the classical resolution method, can be noticed in the above two examples. This lack of efficiency can probably be imputed to the difficulty of giving a full account of long-range phenomena with a short-scale interface impedance, whereas such phenomena prevail in the case of local heterogeneity (bi-material beam) and of slender, plate-like structures (multiperforated beam).
\end{remark}

\subsection{Coupling with the SRKS method}


Krylov subspaces can be augmented at each global nonlinear iteration by extracting Ritz vectors and values at the end of each Krylov solve and reusing them to construct an augmentation basis for the following Krylov iterations. The so-called TRKS method \cite{gosselet2013total} reuses all of the produced Ritz vectors, while the SRKS method \cite{gosselet2013total} consists in selecting the Ritz values which are good enough approximations of eigenvalues of the tangent operator, together with the corresponding Ritz vectors; the selection step is sketched below. The SRKS method was implemented and its coupling with the nonlinear substructuring and condensation method was studied for both test cases defined in section \ref{sec:two_test_cases}.
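As an illustration of this selection step (not our actual implementation: the operator below is a random SPD stand-in for the tangent operator, and the sizes and threshold are arbitrary assumptions), the following Python sketch builds a Lanczos basis, extracts Ritz pairs, and keeps only those whose residual indicates a converged eigenpair.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n, m = 200, 30                       # problem size, Krylov subspace size
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # stand-in SPD tangent operator

# plain Lanczos recurrence: orthonormal basis V, tridiagonal T
V = np.zeros((n, m)); alpha = np.zeros(m); beta = np.zeros(m - 1)
v = rng.standard_normal(n); V[:, 0] = v / np.linalg.norm(v)
for k in range(m):
    w = A @ V[:, k] - (beta[k - 1] * V[:, k - 1] if k > 0 else 0.0)
    alpha[k] = V[:, k] @ w
    w = w - alpha[k] * V[:, k]
    w = w - V[:, :k + 1] @ (V[:, :k + 1].T @ w)  # full reorthogonalization
    if k < m - 1:
        beta[k] = np.linalg.norm(w)
        V[:, k + 1] = w / beta[k]

T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
theta, Y = np.linalg.eigh(T)         # Ritz values and vectors (T coords)
ritz = V @ Y

# selection: keep Ritz pairs with small residual ||A v - theta v||
res = np.linalg.norm(A @ ritz - ritz * theta, axis=0)
keep = res < 1e-6 * theta
W = ritz[:, keep]                    # augmentation basis for the next solve
print(f"kept {W.shape[1]} of {m} Ritz vectors")
\end{verbatim}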
Results are given in tables \ref{tab:iter_bimat_SRKS} and \ref{tab:iter_multiperf_SRKS}.




\begin{table}[!ht]
\begin{center}
\begin{tabular}{|c|cccccc:c|}
\hline
\multicolumn{8}{|c|}{Krylov}\\
\hline
\multicolumn{7}{|c:}{ with SRKS } & w/o SRKS \\
\hline
load inc. & 0.05 & 0.15 & 0.25 & 0.35 & 0.4 & 0.45 & 0.45 \\
\hline
$S_t^{(\overline{j})}$ & 37 & 97 & x & & & & x \\
$K_{bb,l}^{\text{neigh}(j)}$ & 36 & 113 & 218 & 339 & 452 & 578 & 1772 \\
$Q_{b,2s}^{(j)}$ & 37 & 98 & 197 & 304 & 398 & 492 & 1514 \\
\hline
\hline
NKS & 53 & 130 & 245 & 370 & 492 & 648 & 1795 \\
\hline
\multicolumn{8}{c}{ } \\
\hline
\multicolumn{8}{|c|}{Gains (\%)} \\
\hline
$Q_{b,2s}^{(j)}$ vs. $K_{bb,l}^{\text{neigh}(j)}$ & -3 & 13 & 10 & 10 & 12 & 15 & 15 \\
\hline
\hline
$Q_{b,2s}^{(j)}$ vs. NKS & 30 & 25 & 20 & 18 & 19 & 24 & 16 \\
\hline
\end{tabular}
\end{center}
\caption{Bi-material beam, coupling with SRKS: Krylov cumulated iterations over load increments. An `x' indicates that the resolution diverged.}
\label{tab:iter_bimat_SRKS}
\end{table}

\begin{table}[!ht]
\begin{center}
\begin{tabular}{|c|cccc:c|}
\hline
\multicolumn{6}{|c|}{Krylov} \\
\hline
\multicolumn{5}{|c:}{with SRKS} & w/o SRKS \\
\hline
load inc. & 0.6 & 1 & 1.3 & 1.5 & 1.5 \\
\hline
$S_t^{(\overline{j})}$ & 48 & 164 & 304 & 445 & 481 \\
$K_{bb,l}^{\text{neigh}(j)}$ & 61 & 201 & 351 & 506 & 548 \\
$Q_{b,2s}^{(j)}$ & 47 & 159 & 280 & 410 & 438 \\
\hline
\hline
NKS & 72 & 212 & 362 & 517 & 561 \\
\hline
\multicolumn{6}{c}{ } \\
\hline
\multicolumn{6}{|c|}{Gains (\%) } \\
\hline
$Q_{b,2s}^{(j)}$ vs. $K_{bb,l}^{\text{neigh}(j)}$ & 20 & 20 & 20 & 20 & 20 \\
$Q_{b,2s}^{(j)}$ vs. $S_t^{(\overline{j})}$ & 2 & 3 & 8 & 8 & 9 \\
\hline
\hline
$Q_{b,2s}^{(j)}$ vs. NKS & 35 & 25 & 23 & 21 & 22 \\
\hline
\end{tabular}
\end{center}
\caption{Multiperforated beam, coupling with SRKS: Krylov cumulated iterations over load increments}
\label{tab:iter_multiperf_SRKS}
\end{table}





As expected, SRKS leads to a global decrease of the number of Krylov iterations, observable by comparing the columns ``with'' and ``without'' SRKS of the results tables. For the bi-material beam, Krylov iterations are reduced on average by 67\% at the last load increment; for the multiperforated beam the average reduction is only close to 8\% (a small number of Krylov iterations implies a small number of post-processed Ritz vectors, which could partly explain the less impressive efficiency of the SRKS method on this test case).

Concerning the global Newton solver, the cumulated numbers of iterations remained constant with and without SRKS, as expected; they are thus not presented again in this section.

Tables \ref{tab:iter_bimat_SRKS} and \ref{tab:iter_multiperf_SRKS} confirm the observations of the previous section. Even though the cumulated numbers of Krylov iterations are decreased thanks to SRKS, the overall gains generated by the new expression $Q_{b,2s}^{(j)}$ remain rather constant, and are even better for the bi-material beam (indeed, the classical method NKS suffered a slight degradation of its overall performance, so that the gain of impedance $Q_{b,2s}^{(j)}$ compared to NKS reaches 24\% at the end of the resolution, in terms of cumulated Krylov iterations).

\section{Conclusion}

A new approximation of the interface impedance has been developed, in the context of nonlinear substructuring and condensation methods with a mixed approach. The expression of the interface impedance introduced here couples both short and long range interaction terms.

The procedure for building such a parameter consists in evaluating, for a given subdomain, the tangent Schur operator of the remainder of the structure (i.e.\ the optimal value in a linear context), which was originally the best analytic expression we could produce to approximate the optimal interface impedance in the nonlinear context. This evaluation involves a short-scale term, basically consisting of the stiffness of the considered subdomain's neighbors, and a long-scale low-rank term, composed of the projection of the tangent Schur operator onto the space generated by rigid body modes, thereby capturing long-range interactions with the faraway structure.

The performance of a FETI-2LM solver was studied on a linear case, where the tangent Schur operator of the remainder is exactly the optimal value for the interface impedance, despite its intractability in practical parallel resolution processes. Although, as expected, the new additive expression of the impedance did not produce as good results as this optimal value, it achieved quite impressive gains, in particular compared to the classical choice made in this framework, i.e.\ the stiffness assembled over the neighbors of a subdomain.

The performance of the mixed nonlinear substructuring and condensation method was also studied, on a plasticity case. Not only is the exact computation of the tangent Schur operator unaffordable in the framework of parallel distributed computations (unlike the new expression we build, which was chosen to be inexpensively computable in parallel), but it was also found to achieve worse results than this new expression.
This suggests that the additive expression of the interface impedance introduced here represents the environment of a substructure more accurately.

Eventually, a study of the coupling of the resolution process with a selective reuse procedure of the Krylov solver's Ritz vectors (SRKS) tends to show that the performance of this new expression is maintained while the numbers of Krylov iterations are decreased.
All these considerations are rather promising for implementations at larger scales.


\bibliographystyle{ieeetr}


\section{Introduction}

The BESIII\xspace experiment is located at the Institute of High Energy Physics in Beijing. It analyzes symmetric \ensuremath{e^+e^-}\xspace collisions from the Beijing Electron-Positron Collider (BEPCII) in an energy range between \SI{2.0}{\si{\giga\electronvolt}\xspace} and \SI{4.6}{\si{\giga\electronvolt}\xspace}. The design luminosity of BEPCII, \SI{1e33}{\Lumi} at $\sqrt{s}=\SI{3.773}{\si{\giga\electronvolt}\xspace}$, was surpassed in April 2016.

The detector measures charged track momenta with a relative precision of \SI{0.5}{\percent} (at \SI{1.0}{\si{\giga\electronvolt}\xspace/\ensuremath{c}\xspace}) using a multi-wire drift chamber in a \SI{1}{\tesla} magnetic field. Electromagnetic showers are measured in a caesium iodide calorimeter with a relative precision of \SI{2.5}{\percent} (at \SI{1.0}{\si{\giga\electronvolt}\xspace}), and good particle identification is achieved by combining information from the energy loss in the drift chamber, from the time-of-flight system and from the calorimeter. Muons can be identified using 9 layers of resistive plate chambers integrated in the magnet return yoke. Details are provided elsewhere \cite{Ablikim:2009aa}.
BESIII\xspace has collected large data samples in the tau-charm region. The samples of interest for the study of charmed hadrons are usually recorded at a center-of-mass energy close to a threshold. The samples used for the analyses described in the following were recorded at the \ensuremath{\Dz {\kern -0.16em \Dzb}}\xspace/\ensuremath{\Dp {\kern -0.16em \Dm}}\xspace threshold $(\sqrt{s}=\SI{3.773}{\si{\giga\electronvolt}\xspace})$ and at the \ensuremath{\Dsp{\kern -0.16em \Dsm}}\xspace threshold $(\sqrt{s}=\SI{4.009}{\si{\giga\electronvolt}\xspace})$.
Integrated luminosities of \SI{2.81}{\invfb} and \SI{0.482}{\invfb} were recorded, respectively.

\begin{figure}[tbp]
  \centering
  \begin{tikzpicture}
	 \draw[draw=none, use as bounding box](0,1.5) rectangle (10.2cm,6cm);
	 \begin{scope}[scale=0.5,color=black!70!white]
		 \coordinate (origin) at (9,9);
		 \draw[->] (origin) -- +(0:1) node[above] (n1) {z};
		 \draw[->] (origin) -- +(90:1) node[left] (n2) {y};
	 \end{scope}
	 \begin{scope}[line width=2pt]
		 \node[draw,black,rectangle,rounded corners=3pt,minimum size=3pt] (pv) at (5,3) {\ensuremath{\psi(3770)}\xspace};
		 \draw[blue,->] (1,3) node[below] {\ensuremath{e^+}\xspace} -- (pv);
		 \draw[blue,->] (9,3) node[above] {\en} -- (pv);
		 \node[draw,red,circle,minimum size=3pt, label=below:$\ensuremath{\kern 0.2em\overline{\kern -0.2em D}\rule{0pt}{1.5ex}^0}\xspace_{tag}$] (Dbarvtx) at (6.5,2.5) {};
		 \draw[black,dashed,-] (pv) -- (Dbarvtx);
		 \draw[->,line width=1.5pt,black!60!white] (Dbarvtx) -- +(5:2cm);
		 \draw[->,line width=1.5pt,black!60!white] (Dbarvtx) -- +(-5:2cm) node[right] {hadrons};
		 \draw[->,line width=1.5pt,black!60!white] (Dbarvtx) -- +(-15:2cm);
		 \draw[->,line width=1.5pt,black!60!white] (Dbarvtx) -- +(-25:2cm);
		 \node[draw,red,circle,minimum size=3pt, label=above right:$\ensuremath{D^0}\xspace$] (Dvtx) at (3.5,3.5) {};
		 \node[draw,black!60!white,fill,circle,minimum size=0pt] (KSvtx) at ($ (Dvtx) + (182:1.5) $) {};
		 \draw[black!60!white,dashed,-] (Dvtx) -- (KSvtx);
		 \draw[black,dashed,-] (pv) -- (Dvtx);
		 \draw[black!60!white] (KSvtx) -- +(190:1) node[left] {$l^+$};
		 \draw[black!60!white,dashed] (KSvtx) -- +(170:1) node[left] {$\nu_l$};
		 \draw[black!60!white] (Dvtx) -- +(150:2) node[left] {};
		 \draw[black!60!white,-] (Dvtx) -- +(120:2) node[left] {hadrons};
	 \end{scope}
  \end{tikzpicture}
  \caption{\ensuremath{\psi(3770)}\xspace decay topology in the \ensuremath{\psi(3770)}\xspace rest frame. An undetected particle track can be reconstructed using the constrained kinematics of the decay. Typical tag modes for \ensuremath{C\!P}\xspace and flavour eigenstates are listed.}
  \label{fig:psiprprdecay}
\end{figure}
The at-threshold decay topology at a center-of-mass energy of \SI{3.773}{\si{\giga\electronvolt}\xspace} is illustrated in \cref{fig:psiprprdecay}. A pair of mesons is produced, and it is possible to infer properties of one decay from the decay of the other meson (the so-called tag meson). For instance, in the case of neutral \ensuremath{D}\xspace decays, the flavour or the \ensuremath{C\!P}\xspace quantum numbers of the signal decay can be measured even if the signal final state does not provide this information. In the case of charged \ensuremath{D}\xspace decays, the reconstruction of both decays is used to reduce the background; furthermore, if undetected particles are involved in the signal decay, their four-momenta can be reconstructed. In particular, the study of leptonic and semi-leptonic decays benefits from this.
The reconstruction of both decays in each event is referred to as the double tag technique.

In the following we present the measurement of the \ensuremath{D^+_s}\xspace decay constant (\cref{sec:dsmunu}), the first evidence of the decay $\ensuremath{D^+}\xspace\ensuremath{\rightarrow}\xspace\tau\ensuremath{\nu_\tau}\xspace$ (\cref{sec:dptaunu}) and the analysis of the decay $\ensuremath{D^0}\xspace\ensuremath{\rightarrow}\xspace\KSL\ensuremath{\pi^0}\xspace(\ensuremath{\pi^0}\xspace)$ (\cref{sec:dzkspizpiz}).
\section{Pure leptonic $\ensuremath{D_{s(d)}}\xspace^+$ decays}
\label{sec:pureLeptonicD}
The pure leptonic decay of charged \ensuremath{D_{s(d)}}\xspace mesons proceeds via the annihilation of \ensuremath{c}\xspace and \ensuremath{\overline s}\xspace(\ensuremath{\overline d}\xspace) into a virtual \ensuremath{W^\pm}\xspace boson and its decay to $l^+\nu_l$. The decay rate can be parametrized as:
\begin{align}
	\Gamma(\ensuremath{D_{s(d)}}\xspace\ensuremath{\rightarrow}\xspace l^+\nu_l) = \frac{G_F^2}{8\pi} f_{\ensuremath{D_{s(d)}}\xspace}^2 m_l^2 m_{\ensuremath{D_{s(d)}}\xspace} \left( 1-\frac{m_l^2}{m^2_{\ensuremath{D_{s(d)}}\xspace}}\right)^2 \abs{V_{cs(d)}}^2,
 \label{eqn:ds:decayRate}
\end{align}
with the Fermi constant $G_F$, the lepton mass $m_l$, the corresponding CKM matrix element $\abs{V_{cs(d)}}^2$, the \ensuremath{D_{s(d)}}\xspace mass $m_{\ensuremath{D_{s(d)}}\xspace}$ and the decay constant $f_{\ensuremath{D_{s(d)}}\xspace}$. The decay constant parametrizes the \ensuremath{\mathrm{QCD}}\xspace effects on the decay. From a measurement of the decay width $\Gamma(\ensuremath{D_{s(d)}}\xspace\ensuremath{\rightarrow}\xspace l^+\nu_l)$, the decay constant $f_{\ensuremath{D_{s(d)}}\xspace}$ can thus be extracted.

The branching fraction can be measured via the previously described double tag technique. In each event the tag decay is reconstructed via numerous decay channels. The number of events that contain a tag candidate is denoted by $N_{\text{tag}}$. Among those events the signal decay is reconstructed, and the number of events that contain both a tag decay and a signal decay is denoted by $N_{\text{sig,tag}}$. The branching fraction is given by:
\begin{align}
 {\ensuremath{\mathcal B}}\xspace(\ensuremath{D_{s(d)}}\xspace\ensuremath{\rightarrow}\xspace l^+\nu_l) = \frac{N_{\text{sig,tag}}}{\epsilon_{\text{sig,tag}}}\times\frac{\epsilon_{\text{tag}}}{N_{\text{tag}}}.
 \label{eqn:ds:bf}
\end{align}
The efficiencies for reconstruction and selection $\epsilon_i$ are obtained from simulation.
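As an illustration of \cref{eqn:ds:bf}, the short Python sketch below evaluates a double tag branching fraction. The yields correspond to the $\ensuremath{D^+_s}\xspace\ensuremath{\rightarrow}\xspace\ensuremath{\mu^+}\xspace\ensuremath{\nu_\mu}\xspace$ analysis of the next subsection, while the two efficiencies are hypothetical placeholders, since the simulation-derived values are not quoted in this summary.
\begin{verbatim}
# B = (N_sig,tag / eps_sig,tag) * (eps_tag / N_tag); only the ratio of
# the two efficiencies enters, which is the key cancellation exploited
# by the double tag technique. Efficiencies below are hypothetical.
def branching_fraction(n_sig_tag, n_tag, eff_sig_tag, eff_tag):
    return (n_sig_tag / eff_sig_tag) * (eff_tag / n_tag)

b = branching_fraction(n_sig_tag=69.3, n_tag=15127,
                       eff_sig_tag=0.50, eff_tag=0.55)  # placeholders
print(f"B = {100 * b:.3f} %")
\end{verbatim}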
Since the final state contains a neutrino, which is not detected, the signal yield is determined using the missing mass:
\begin{align}
 MM^2 = \frac{\left(E_{\text{beam}}-E_\mu\right)^2}{c^4} - \frac{\left(-\vec{p}_{\ensuremath{D_{s(d)}}\xspace}-\vec{p}_{\ensuremath{\mu^+}\xspace}\right)^2}{\ensuremath{c}\xspace^2}.
\end{align}
The beam energy is denoted by $E_{\text{beam}}$ and the reconstructed momentum of the tag \ensuremath{D_{s(d)}}\xspace decay candidate by $\vec{p}_{\ensuremath{D_{s(d)}}\xspace}$.

\pagebreak
\subsection{$\ensuremath{D^+_s}\xspace\ensuremath{\rightarrow}\xspace\mu^+\ensuremath{\nu_\mu}\xspace$ and $\ensuremath{D^+_s}\xspace\ensuremath{\rightarrow}\xspace\tau^+\ensuremath{\nu_\tau}\xspace$}
\label{sec:dsmunu}
The distribution of $MM^2$ for $\ensuremath{D^+_s}\xspace\ensuremath{\rightarrow}\xspace\mu^+\ensuremath{\nu_\mu}\xspace$ and $\ensuremath{D^+_s}\xspace\ensuremath{\rightarrow}\xspace\tau^+\ensuremath{\nu_\tau}\xspace$ is shown in \cref{fig:ds:missingMass}. The \ensuremath{\tau^+}\xspace is reconstructed via its decay to $\ensuremath{\pi^+}\xspace\ensuremath{\nub_\tau}\xspace$. The yield is determined via a simultaneous fit to signal and sideband regions, where the sideband regions are defined in the \ensuremath{\Dbar^+_s}\xspace mass spectrum of the tag candidate. The $\ensuremath{\mu^+}\xspace\ensuremath{\nu_\mu}\xspace$ signal is shown as a red dotted curve and the $\ensuremath{\tau^+}\xspace\ensuremath{\nu_\tau}\xspace$ signal as a black dot-dashed curve. Background from misreconstructed tag \ensuremath{D^+_s}\xspace decays and background from non-\ensuremath{\Dsp{\kern -0.16em \Dsm}}\xspace events are shown as green short-dashed and violet long-dashed curves, respectively.
Within a sample of \num{15127(321)} events which contain a tag candidate, we find \num{69.3(93)} $\ensuremath{D^+_s}\xspace\ensuremath{\rightarrow}\xspace\ensuremath{\mu^+}\xspace\ensuremath{\nu_\mu}\xspace$ decays and \num{32.5(43)} $\ensuremath{D^+_s}\xspace\ensuremath{\rightarrow}\xspace\ensuremath{\tau^+}\xspace\ensuremath{\nu_\tau}\xspace$ decays. In the fitting procedure the ratio of $\ensuremath{D^+_s}\xspace\ensuremath{\rightarrow}\xspace\ensuremath{\mu^+}\xspace\ensuremath{\nu_\mu}\xspace$ to $\ensuremath{D^+_s}\xspace\ensuremath{\rightarrow}\xspace\ensuremath{\tau^+}\xspace\ensuremath{\nu_\tau}\xspace$ was constrained to its Standard Model prediction.
The yields are corrected for radiative effects and we obtain:
\begin{align}
 {\ensuremath{\mathcal B}}\xspace(\ensuremath{D^+_s}\xspace\ensuremath{\rightarrow}\xspace\ensuremath{\mu^+}\xspace\ensuremath{\nu_\mu}\xspace) &= \SIerrs{0.495}{0.067}{0.026}{\percent} \nonumber\\
 {\ensuremath{\mathcal B}}\xspace(\ensuremath{D^+_s}\xspace\ensuremath{\rightarrow}\xspace\ensuremath{\tau^+}\xspace\ensuremath{\nu_\tau}\xspace) &= \SIerrs{4.83}{0.65}{0.26}{\percent}.
\end{align}
\begin{wrapfigure}[21]{r}{0.35\textwidth}
 \centering
 \includegraphics[width=0.35\textwidth]{Dslnu}
 \caption{$MM^2$ distribution of $\ensuremath{D^+_s}\xspace\ensuremath{\rightarrow}\xspace\mu^+\ensuremath{\nu_\mu}\xspace$ and $\ensuremath{D^+_s}\xspace\ensuremath{\rightarrow}\xspace\tau^+\ensuremath{\nu_\tau}\xspace$. Signal (a) and sideband (b) regions are shown.}
 \label{fig:ds:missingMass}
\end{wrapfigure}
The branching fractions {\ensuremath{\mathcal B}}\xspace($\ensuremath{D^+_s}\xspace\ensuremath{\rightarrow}\xspace\ensuremath{\mu^+}\xspace\ensuremath{\nu_\mu}\xspace$) and {\ensuremath{\mathcal B}}\xspace($\ensuremath{D^+_s}\xspace\ensuremath{\rightarrow}\xspace\ensuremath{\tau^+}\xspace\ensuremath{\nu_\tau}\xspace$) are consistent with the world averages within \num{1} and \num{1.5} standard deviations, respectively.
Furthermore, the branching fractions are determined consistently using a fitting method which does not rely on the ratio of $\ensuremath{D^+_s}\xspace\ensuremath{\rightarrow}\xspace\ensuremath{\mu^+}\xspace\ensuremath{\nu_\mu}\xspace$ to $\ensuremath{D^+_s}\xspace\ensuremath{\rightarrow}\xspace\ensuremath{\tau^+}\xspace\ensuremath{\nu_\tau}\xspace$. For further details we refer to \cite{Ablikim:2016duz}.

Using ${\ensuremath{\mathcal B}}\xspace(\ensuremath{D^+_s}\xspace\ensuremath{\rightarrow}\xspace\ensuremath{\mu^+}\xspace\ensuremath{\nu_\mu}\xspace)$, the decay constant $f_{\ensuremath{D^+_s}\xspace}$ is determined via \cref{eqn:ds:decayRate}:
\begin{align}
 f_{\ensuremath{D^+_s}\xspace} = \SIerrs{241.0}{16.3}{6.5}{\si{\mega\electronvolt}\xspace}.
\end{align}
The CKM matrix element $\abs{V_{cs}}=\num{0.97425(22)}$ \cite{Agashe:2014kda} and the \ensuremath{D^+_s}\xspace lifetime \cite{Agashe:2014kda} are used. Good agreement with LQCD calculations is found. The results are published in \cite{Ablikim:2016duz}.
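The quoted value can be reproduced from \cref{eqn:ds:decayRate} in a few lines of Python; the masses, lifetime and $\abs{V_{cs}}$ below are assumed PDG-style central values, and uncertainties are ignored:
\begin{verbatim}
import numpy as np

GF   = 1.1663787e-5     # Fermi constant [GeV^-2]
m_mu = 0.1056584        # muon mass [GeV]
m_Ds = 1.96828          # D_s+ mass [GeV]
tau  = 5.00e-13         # D_s+ lifetime [s] (assumed central value)
hbar = 6.582120e-25     # [GeV s]
Vcs  = 0.97425
B    = 0.00495          # B(D_s+ -> mu+ nu) from this measurement

Gamma = B * hbar / tau  # partial width [GeV]
f2 = 8 * np.pi * Gamma / (GF**2 * m_mu**2 * m_Ds
                          * (1 - m_mu**2 / m_Ds**2)**2 * Vcs**2)
print(f"f_Ds = {1e3 * np.sqrt(f2):.0f} MeV")   # ~241 MeV
\end{verbatim}
This reproduces $f_{\ensuremath{D^+_s}\xspace} \approx \SI{241}{\si{\mega\electronvolt}\xspace}$ within rounding.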
\subsection{$\ensuremath{D^+}\xspace\ensuremath{\rightarrow}\xspace\tau^+\ensuremath{\nu_\tau}\xspace$}
\label{sec:dptaunu}
The $MM^2$ distribution of $\ensuremath{D^+}\xspace\ensuremath{\rightarrow}\xspace\tau^+\ensuremath{\nu_\tau}\xspace$ is shown in \cref{fig:MMsqDptaunu}. The most severe background to the signal channel is $\mu^+\ensuremath{\nu_\mu}\xspace$. To distinguish signal and background in the fitting procedure, we use the difference in energy deposit of pions and muons in the electromagnetic calorimeter (EMC). We split the sample into events with an energy deposit larger and smaller than $\SI{300}{\si{\mega\electronvolt}\xspace}$. As shown in \cref{fig:MMsqDptaunu}(b), above $\SI{300}{\si{\mega\electronvolt}\xspace}$ the number of $\mu^+\ensuremath{\nu_\mu}\xspace$ events is reduced compared to the number of $\tau^+\ensuremath{\nu_\tau}\xspace$ events.
\begin{figure}[tb]
	\centering
	\subfloat[$E_{EMC} \leq \SI{300}{\si{\mega\electronvolt}\xspace}$]{
	\includegraphics[width=0.45\textwidth]{dptaunuLow}}
	\subfloat[$E_{EMC} > \SI{300}{\si{\mega\electronvolt}\xspace}$]{
	\includegraphics[width=0.45\textwidth]{dptaunuHigh}}
	\caption{$MM^2$ distribution for the decay $\ensuremath{D^+}\xspace\ensuremath{\rightarrow}\xspace\tau^+\ensuremath{\nu_\tau}\xspace$. The signal is shown as a solid orange line. Background comes mainly from \ensuremath{D^+}\xspace decays to $\mu^+\ensuremath{\nu_\mu}\xspace$ (solid black) and to $\ensuremath{\pi^+}\xspace\KL$ (dashed blue).}
	\label{fig:MMsqDptaunu}
\end{figure}

We obtain a preliminary signal yield of \num{137(27)} events. The significance of the signal is larger than \SI{4}{\stdDev}. The preliminary branching fraction is:
\begin{align}
	{\ensuremath{\mathcal B}}\xspace(\ensuremath{D^+}\xspace\ensuremath{\rightarrow}\xspace\tau^+\ensuremath{\nu_\tau}\xspace) = \SI{1.20(24)}{\timesten{-3}}.
\end{align}
Furthermore, we extract the ratio of $\ensuremath{\tau^+}\xspace\ensuremath{\nu_\tau}\xspace$ to $\ensuremath{\mu^+}\xspace\ensuremath{\nu_\mu}\xspace$ decays:
\begin{align}
	R := \frac{\Gamma(\ensuremath{D^+}\xspace\ensuremath{\rightarrow}\xspace\ensuremath{\tau^+}\xspace\ensuremath{\nu_\tau}\xspace)}{\Gamma(\ensuremath{D^+}\xspace\ensuremath{\rightarrow}\xspace\ensuremath{\mu^+}\xspace\ensuremath{\nu_\mu}\xspace)} = \num{3.21(64)}.
\end{align}
The result is consistent with the Standard Model prediction.

\section{Analysis of the decay $\ensuremath{D^0}\xspace\ensuremath{\rightarrow}\xspace\KSL\ensuremath{\pi^0}\xspace(\ensuremath{\pi^0}\xspace)$}
\label{sec:dzkspizpiz}
We present preliminary results of the branching fraction measurement of the decays $\ensuremath{D^0}\xspace\ensuremath{\rightarrow}\xspace\KSL\ensuremath{\pi^0}\xspace$ and $\ensuremath{D^0}\xspace\ensuremath{\rightarrow}\xspace\KSL\ensuremath{\pi^0}\xspace\ensuremath{\pi^0}\xspace$. Furthermore, we determine the \ensuremath{D^0}\xspace mixing parameter $y_{\ensuremath{C\!P}\xspace}$ using the \ensuremath{C\!P}\xspace eigenstates $\KS\ensuremath{\pi^0}\xspace$ and $\KL\ensuremath{\pi^0}\xspace$. The challenge in this channel is the reconstruction of the \KL meson: due to its long lifetime, signals of its decay products in the drift chamber are very unlikely. We use the constrained kinematics at the \ensuremath{\Dz {\kern -0.16em \Dzb}}\xspace threshold to predict the \KL four-momentum and require in addition a certain energy deposit in the electromagnetic calorimeter.

The branching fraction of a \ensuremath{C\!P}\xspace eigenstate can be measured in a self-normalizing way using Cabibbo-favoured (CF) tag channels.
We define:
\begin{align}
	M^\pm = \frac{N_{CF,CP\pm}}{\epsilon_{CF,CP\pm}}\frac{\epsilon_{CF}}{N_{CF}}.
\end{align}
The yields of double and single tag events are denoted by $N_{CF,CP\pm}$ and $N_{CF}$, and the corresponding reconstruction efficiencies by $\epsilon_{CF,CP\pm}$ and $\epsilon_{CF}$.
The branching fraction is given by:
\begin{align}
	{\ensuremath{\mathcal B}}\xspace_{\ensuremath{C\!P}\xspace\pm} = \frac{1}{1\mp C_{f}} M^\pm, \qquad C_f = \frac{M^- - M^+}{M^- + M^+}.
\end{align}

We use the flavour tag channels $\ensuremath{K^-}\xspace\ensuremath{\pi^+}\xspace$, $\ensuremath{K^-}\xspace\ensuremath{\pi^+}\xspace\ensuremath{\pi^-}\xspace\ensuremath{\pi^+}\xspace$ and $\ensuremath{K^-}\xspace\ensuremath{\pi^+}\xspace\ensuremath{\pi^0}\xspace$. The double tag yields and the preliminary branching fractions are listed in \cref{tab:dzkspizpizBF}. The branching fractions of the final states $\KSL\ensuremath{\pi^0}\xspace$ and $\KS\ensuremath{\pi^0}\xspace\piz$ are consistent with the PDG averages \cite{Agashe:2014kda}, and the branching fraction to $\KL\ensuremath{\pi^0}\xspace\piz$ is the first accurate measurement.

From the branching fractions we can calculate the asymmetry between the \ensuremath{C\!P}\xspace eigenstates:
\begin{align}
	R_{\ensuremath{K^0}\xspace\ensuremath{\pi^0}\xspace(\ensuremath{\pi^0}\xspace)} = \frac{{\ensuremath{\mathcal B}}\xspace_{\KS\ensuremath{\pi^0}\xspace(\ensuremath{\pi^0}\xspace)}-{\ensuremath{\mathcal B}}\xspace_{\KL\ensuremath{\pi^0}\xspace(\ensuremath{\pi^0}\xspace)}}{{\ensuremath{\mathcal B}}\xspace_{\KS\ensuremath{\pi^0}\xspace(\ensuremath{\pi^0}\xspace)}+{\ensuremath{\mathcal B}}\xspace_{\KL\ensuremath{\pi^0}\xspace(\ensuremath{\pi^0}\xspace)}}.
\end{align}
The results are also listed in \cref{tab:dzkspizpizBF}.
\begin{table}[tbp]
	\centering
	\caption{Double tag yields and branching fractions (in \si{\percent}) of the \ensuremath{C\!P}\xspace eigenstates $\KSL\ensuremath{\pi^0}\xspace(\ensuremath{\pi^0}\xspace)$. Uncertainties are statistical only.}
	\label{tab:dzkspizpizBF}
	\begin{tabular}{|c|c|c|c|c|}
		\hline
		Channel & \ensuremath{C\!P}\xspace & $N_{CF,CP\pm}$ & ${\ensuremath{\mathcal B}}\xspace_{\ensuremath{C\!P}\xspace\pm}$ & $R$\\
		\hline
		\KS\ensuremath{\pi^0}\xspace & $+$ & \num{7141(91)} & \num{1.230(020)} & \multirow{2}{*}{\num{0.1077(125)}} \\
		\KL\ensuremath{\pi^0}\xspace & $-$ & \num{6678(118)} & \num{0.991(019)} & \\
		\hline
		\KS\ensuremath{\pi^0}\xspace\piz & $+$ & \num{2623(60)} & \num{0.975(24)} & \multirow{2}{*}{\num{-0.0929(209)}} \\
		\KL\ensuremath{\pi^0}\xspace\piz & $-$ & \num{2136(69)} & \num{1.18(04)} & \\
		\hline
	\end{tabular}
\end{table}
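As a quick consistency check of \cref{tab:dzkspizpizBF}, the asymmetry $R$ follows directly from the two branching fractions of each pair; for the $\ensuremath{K^0}\xspace\ensuremath{\pi^0}\xspace$ pair:
\begin{align*}
	R_{\ensuremath{K^0}\xspace\ensuremath{\pi^0}\xspace} = \frac{1.230 - 0.991}{1.230 + 0.991} \approx 0.108,
\end{align*}
in agreement with the value \num{0.1077(125)} quoted in the table.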
\pagebreak
\subsection{Measurement of $y_{CP}$}
Using the final states $\KS\ensuremath{\pi^0}\xspace$ and $\KL\ensuremath{\pi^0}\xspace$ we determine the \ensuremath{D^0}\xspace mixing parameter $y_{\ensuremath{C\!P}\xspace}$. The branching ratio of a \ensuremath{C\!P}\xspace eigenstate is connected to the branching ratio of a pure flavour eigenstate via:
\begin{align}
	{\ensuremath{\mathcal B}}\xspace_{\ensuremath{C\!P}\xspace} \approx {\ensuremath{\mathcal B}}\xspace_{\text{flavour}} (1\mp y_{\ensuremath{C\!P}\xspace}).
\end{align}
The parameter $y_{\ensuremath{C\!P}\xspace}$ is then given by the asymmetry of the branching ratios of \ensuremath{C\!P}\xspace-even and \ensuremath{C\!P}\xspace-odd states to a pure flavour state $f$:
\begin{align}
	y_{\ensuremath{C\!P}\xspace} = \frac{{\ensuremath{\mathcal B}}\xspace_{-;f} - {\ensuremath{\mathcal B}}\xspace_{+;f}}{{\ensuremath{\mathcal B}}\xspace_{-;f} + {\ensuremath{\mathcal B}}\xspace_{+;f}}.
\end{align}
Indeed, inserting ${\ensuremath{\mathcal B}}\xspace_{\pm;f} \approx {\ensuremath{\mathcal B}}\xspace_{f}(1 \mp y_{\ensuremath{C\!P}\xspace})$ into this asymmetry directly recovers $y_{\ensuremath{C\!P}\xspace}$ at first order.
The previously mentioned Cabibbo-favoured final states are not pure flavour eigenstates. Therefore, we use the semi-leptonic decay to $\ensuremath{K^-}\xspace e^+\ensuremath{\nu_e}\xspace$.
We obtain the preliminary value:
\begin{align}
	y_{\ensuremath{C\!P}\xspace} = \SI{0.98(243)}{\percent},
\end{align}
where the uncertainty is statistical only. The result is in agreement with a previous BESIII\xspace measurement \cite{Ablikim:2015hih} as well as with the HFAG average \cite{arXiv:1612.07233}.

\section{Summary}
The BESIII\xspace experiment has collected large data samples at charm-related thresholds. The constrained kinematics at these energies allow the reconstruction of (semi-)leptonic decays with low background. Furthermore, the quantum entanglement of \ensuremath{\Dz {\kern -0.16em \Dzb}}\xspace at threshold provides a unique laboratory for the analysis of \ensuremath{C\!P}\xspace eigenstates.
We presented the analysis of the leptonic decays of the \ensuremath{D^+_s}\xspace to $\ensuremath{\mu^+}\xspace\ensuremath{\nu_\mu}\xspace$ and $\ensuremath{\tau^+}\xspace\ensuremath{\nu_\tau}\xspace$, with the measurement of the branching fractions and the derived \ensuremath{D^+_s}\xspace decay constant. Recently, BESIII\xspace has found preliminary evidence for the decay $\ensuremath{D^+}\xspace\ensuremath{\rightarrow}\xspace\ensuremath{\tau^+}\xspace\ensuremath{\nu_\tau}\xspace$ with a statistical significance above \SI{4}{\stdDev}.
The analysis of $\ensuremath{D^0}\xspace\ensuremath{\rightarrow}\xspace\KSL\ensuremath{\pi^0}\xspace(\ensuremath{\pi^0}\xspace)$ includes the measurement of the branching fractions and, using the decays to $\KSL\ensuremath{\pi^0}\xspace$, the measurement of the \ensuremath{D^0}\xspace mixing parameter $y_{\ensuremath{C\!P}\xspace}$.
\section*{References}