diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzisfq" "b/data_all_eng_slimpj/shuffled/split2/finalzzisfq" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzisfq" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\label{intro}\nEvery atom heavier than lithium has been processed by stars. \nElements of the $\\alpha$ process, iron peak, and \n{\\em r} process have different formation sites, \nand therefore, understanding the distribution of \nthese elements in nearby stars can lead to a \nbetter understanding of the Galaxy's \nchemical enrichment history.\nThe {\\em r}-process\\ site is the least well understood.\nEuropium is our choice for an {\\em r}-process\\ investigation\nfor two reasons:\\ 96\\% of Galactic europium is \nformed through the {\\em r}-process\\ \\citep{burris_2000}, \nand it has several strong lines in the \nvisible portion of the electromagnetic spectrum.\nIn order to provide insight into the Galactic\nenrichment history, however, europium measurements\nin very large stellar samples will be needed.\n\nThe study of {\\em r}-process\\ elements is well established in metal-poor \nstars \\citep[e.g.,][]{sneden_2008,lai_2008,frebel_2007,\nsimmerer_2004,johnson_2001},\nwhere {\\em r}-process\\ abundances probe early Galactic history. \nA handful of studies have measured europium in \nsolar-metallicity stars \n\\citep[e.g.,][]{reddy_2006,bensby_2005,koch_2002,woolf_1995}, \nbut have done so in samples of 50--200 stars.\nWe aim to extend such work to a substantial sample of \n1000 stars at solar metallicity, using spectra\ncollected by the California and Caltech Planet Search\n(CCPS).\nSuch a large sample will require automated analysis.\n\nOur goal in this paper is to establish a new, \nautomated abundance fitting method based on Spectroscopy Made Easy\n\\citep[SME;][]{valenti_1996}, an LTE spectral synthesis code \ndiscussed in further detail in \\S\\,\\ref{abund}. \nThe analysis builds on the framework of the \n\\citealt{valenti_2005} (hereafter VF05) Spectroscopic \nProperties of Cool Stars (SPOCS) catalog.\nOur method will yield consistent results across large stellar \nsamples, generating smaller errors than previous analyses.\n\nWe begin here with 41\\ stars that\nhave been examined in the literature, and we include \nthree europium lines:\\ 4129\\,\\AA, 4205\\,\\AA, and 6645\\,\\AA.\nOur stellar observations are detailed in \\S\\,\\ref{obs}.\nWe describe the details of our europium \nabundance measurement algorithm in \\S\\,\\ref{abund}, \ncompare our values with existing europium literature\nin \\S\\,\\ref{comp},\nand summarize the results of the study in \n\\S\\,\\ref{summary}.\n\n\\section{Observations}\n\\label{obs}\n\nThe spectra of the 41\\ stars included in this study were taken with the HIRES echelle spectrograph \\citep{vogt_1994} on the 10-m Keck I telescope in Hawaii. \nThe spectra date from January 2004 to September 2008 and have resolution $R \\sim 50\\,000$ and signal-to-noise ratio (S\/N) of $\\sim$\\,$160$ near \\mbox{4200\\,\\AA}, where the two strongest Eu~{\\sc ii}\\ lines included in this study are located. \nThe spectra have the same resolution but S\/N~$\\sim 340$ at \\mbox{6645\\,\\AA}, where a third, weaker Eu~{\\sc ii}\\ line is located. \n\nThe spectra were originally obtained by the CCPS with the intention of detecting exoplanets. 
\nThe CCPS uses the same spectrometer alignment each night and employs the HIRES exposure meter \\citep{kibrick_2006}, so the stellar observations are extremely consistent, even across years of data collection. \nFor a more complete description of the CCPS and its goals, see \\citealt{marcy_2008}.\nIt should be noted that the iodine cell Doppler technique \\citep{marcy_1992} imprints molecular iodine lines only between the wavelengths of 5000 and 6400\\,\\AA, leaving the regions of interest for this study iodine free.\n\n\\subsection{Stellar Sample}\n\\label{stellsamp}\nFor this study we choose to focus on a subset of \nstars that have been analyzed in other abundance studies. \nWe compare our measurements with those from \n\\citealt{woolf_1995,simmerer_2004,bensby_2005}; and \n\\citealt{delpeloso_2005a}. We select CCPS target stars that \nhave temperature, gravity, mass, and metallicity measurements \nin the SPOCS catalog (VF05). \n\nOur initial stellar sample consisted of 44 objects that were \nmembers both of the aforementioned studies and the SPOCS catalog, \nand that were observed by the CCPS on the Keck I telescope.\nThree stars that otherwise fit our criteria, but for which we\ncould only measure the europium abundance in one line, were \nremoved from the sample.\nThey were HD\\,64090, HD\\,188510, and HIP\\,89215. \nWhile stellar europium abundances may be successfully \nmeasured from a single line, \nthe goal of this study is to establish the robustness \nof our method, \nand so we deem it necessary to determine the \neuropium abundance in at least two lines to \ninclude a star in our analysis here.\nWe describe our criteria for rejecting a fit in \n\\S\\,\\ref{stellar_abund}.\n\nThe 41\\ stars included in this study have \n$3.4 < V < 10.0$,\n$0.50 < B-V < 0.92$, and\n$5 < d < 75\\unit{pc}$ ({\\em Hipparcos}; \\citealt{hipparcos}).\nVF05 determine stellar properties using SME,\nso it is reasonable to adopt their values for our SME europium\nanalysis.\nBased on the VF05 analysis, our stars have\nmetallicity $-1.4 < \\textrm{[M\/H]} < 0.4$,\neffective temperature $4940 < T_{\\rm eff} < 6230$, and\ngravity $3.8 < \\log{g} < 4.8$.\nIt should be noted that throughout this study we use the \n[M\/H]\\ parameter for a star's metallicity, rather than \nthe iron abundance [Fe\/H]. 
\nOur [M\/H]\\ value is taken from VF05, where it \nis an independent model parameter that adjusts the\nrelative abundance of all metals together.\nIt is not an average of individual metal \nabundances.\nThe full list of stellar properties, from the {\\em Hipparcos} and\nSPOCS catalogs, appears in Table \\ref{starsinfo_table}.\n\n\\begin{deluxetable*}{rrrrrrrrrc}\n\\tablecaption{Stellar Data.\\label{starsinfo_table}}\n\\tablewidth{0pt}\n\\tablehead{\n \\multicolumn{3}{c}{{Identification}}\n & \\colhead{$V$\\tablenotemark{a}}\n & \\colhead{$B-V$\\tablenotemark{a}}\n & \\colhead{$d$\\tablenotemark{a}}\n & \\colhead{$T_{\\rm eff}$\\tablenotemark{b}}\n & \\multirow{2}{*}{[M\/H]\\tablenotemark{b}}\n & \\colhead{$\\log{g}$\\tablenotemark{b}}\n & \\multirow{2}{*}{Ref.\\tablenotemark{c}} \\\\\n \\colhead{HD}\n & \\colhead{HR}\n & \\colhead{HIP}\n & \\colhead{(mag)}\n & \\colhead{(mag)}\n & \\colhead{(pc)}\n & \\colhead{(K)}\n & \n & \\colhead{(cgs)}\n & \n}\n\\startdata \n 3795 & 173 & 3185 & 6.14 & 0.72 & 28.6 & 5369 & $ -0.41$ & 4.16 & 1 \\\\\n 4614 & 219 & 3821 & 3.46 & 0.59 & 6.0 & 5941 & $ -0.17$ & 4.44 & 4 \\\\\n 6734 & \\ldots & 5315 & 6.44 & 0.85 & 46.4 & 5067 & $ -0.28$ & 3.81 & 1 \\\\\n 9562 & 448 & 7276 & 5.75 & 0.64 & 29.7 & 5939 & $ 0.19$ & 4.13 & 1, 2, 4 \\\\\n 9826 & 458 & 7513 & 4.10 & 0.54 & 13.5 & 6213 & $ 0.12$ & 4.25 & 4 \\\\\n 14412 & 683 & 10798 & 6.33 & 0.72 & 12.7 & 5374 & $ -0.45$ & 4.69 & 1 \\\\\n 15335 & 720 & 11548 & 5.89 & 0.59 & 30.8 & 5891 & $ -0.20$ & 4.07 & 4 \\\\\n 16397 & \\ldots & 12306 & 7.36 & 0.58 & 35.9 & 5788 & $ -0.35$ & 4.50 & 1 \\\\\n 22879 & \\ldots & 17147 & 6.68 & 0.55 & 24.3 & 5688 & $ -0.76$ & 4.41 & 1, 2, 4 \\\\\n 23249 & 1136 & 17378 & 3.52 & 0.92 & 9.0 & 5095 & $ 0.03$ & 3.98 & 1 \\\\\n 23439 & \\ldots & 17666 & 7.67 & 0.80 & 24.5 & 5070 & $ -0.73$ & 4.71 & 3 \\\\\n 30649 & \\ldots & 22596 & 6.94 & 0.59 & 29.9 & 5778 & $ -0.33$ & 4.44 & 4 \\\\\n 34411 & 1729 & 24813 & 4.69 & 0.63 & 12.6 & 5911 & $ 0.09$ & 4.37 & 4 \\\\\n 43947 & \\ldots & 30067 & 6.61 & 0.56 & 27.5 & 5933 & $ -0.28$ & 4.37 & 2 \\\\\n 45184 & 2318 & 30503 & 6.37 & 0.63 & 22.0 & 5810 & $ 0.03$ & 4.37 & 1 \\\\\n 48938 & 2493 & 32322 & 6.43 & 0.55 & 26.6 & 5937 & $ -0.39$ & 4.31 & 4 \\\\\n 84737 & 3881 & 48113 & 5.08 & 0.62 & 18.4 & 5960 & $ 0.14$ & 4.24 & 4 \\\\\n 86728 & 3951 & 49081 & 5.37 & 0.68 & 14.9 & 5700 & $ 0.11$ & 4.29 & 4 \\\\\n 102365 & 4523 & 57443 & 4.89 & 0.66 & 9.2 & 5630 & $ -0.26$ & 4.57 & 2 \\\\\n \\ldots & \\ldots & 57450 & 9.91 & 0.58 & 73.5 & 5272 & $ -1.42$ & 4.30 & 3 \\\\\n 103095 & 4550 & 57939 & 6.42 & 0.75 & 9.2 & 4950 & $ -1.16$ & 4.65 & 3 \\\\\n 109358 & 4785 & 61317 & 4.24 & 0.59 & 8.4 & 5930 & $ -0.10$ & 4.44 & 4 \\\\\n 115617 & 5019 & 64924 & 4.74 & 0.71 & 8.5 & 5571 & $ 0.09$ & 4.47 & 4 \\\\\n 131117 & 5542 & 72772 & 6.30 & 0.61 & 40.0 & 5973 & $ 0.10$ & 4.06 & 2 \\\\\n 144585 & \\ldots & 78955 & 6.32 & 0.66 & 28.9 & 5854 & $ 0.25$ & 4.33 & 1 \\\\\n 156365 & \\ldots & 84636 & 6.59 & 0.65 & 47.2 & 5856 & $ 0.24$ & 4.09 & 1 \\\\\n 157214 & 6458 & 84862 & 5.38 & 0.62 & 14.4 & 5697 & $ -0.15$ & 4.50 & 4 \\\\\n 157347 & 6465 & 85042 & 6.28 & 0.68 & 19.5 & 5714 & $ 0.03$ & 4.50 & 1 \\\\\n 166435 & \\ldots & 88945 & 6.84 & 0.63 & 25.2 & 5843 & $ 0.01$ & 4.44 & 1 \\\\\n 169830 & 6907 & 90485 & 5.90 & 0.52 & 36.3 & 6221 & $ 0.08$ & 4.06 & 1 \\\\\n 172051 & 6998 & 91438 & 5.85 & 0.67 & 13.0 & 5564 & $ -0.24$ & 4.50 & 1 \\\\\n 176377 & \\ldots & 93185 & 6.80 & 0.61 & 23.4 & 5788 & $ -0.23$ & 4.40 & 1 \\\\\n 179949 & 7291 & 94645 & 6.25 & 0.55 & 27.0 & 6168 & $ 
0.11$ & 4.34 & 1 \\\\\n 182572 & 7373 & 95447 & 5.17 & 0.76 & 15.1 & 5656 & $ 0.36$ & 4.32 & 1 \\\\\n 190360 & 7670 & 98767 & 5.73 & 0.75 & 15.9 & 5552 & $ 0.19$ & 4.38 & 1 \\\\\n 193901 & \\ldots & 100568 & 8.65 & 0.55 & 43.7 & 5408 & $ -1.19$ & 4.14 & 3 \\\\\n 199960 & 8041 & 103682 & 6.21 & 0.64 & 26.5 & 5962 & $ 0.24$ & 4.31 & 1 \\\\\n 210277 & \\ldots & 109378 & 6.54 & 0.77 & 21.3 & 5555 & $ 0.20$ & 4.49 & 1 \\\\\n 217107 & 8734 & 113421 & 6.17 & 0.74 & 19.7 & 5704 & $ 0.27$ & 4.54 & 1 \\\\\n 222368 & 8969 & 116771 & 4.13 & 0.51 & 13.8 & 6204 & $ -0.08$ & 4.18 & 4 \\\\\n 224383 & \\ldots & 118115 & 7.89 & 0.64 & 47.7 & 5754 & $ -0.06$ & 4.31 & 1 \\\\\n\\enddata\n\\tablenotetext{a}{$V$-magnitude, color index, and parallax-based distance \nfrom the {\\em Hipparcos} catalogue \\citep{hipparcos}.}\n\\tablenotetext{b}{Stellar parameters previously published in VF05.}\n\\tablenotetext{c}{Star included for comparison to the following \nworks:~1---\\citealt{bensby_2005};\n2---\\citealt{delpeloso_2005a};\n3---\\citealt{simmerer_2004};\n4---\\citealt{woolf_1995}}\n\\end{deluxetable*}\n\n\n\\subsection{Co-adding Data}\n\\label{data}\nThe nature of the radial velocity planet search dictates that most stars \nwill have multiple observations, and in some cases dozens. \nTo take advantage of the multiple exposures, we carefully \nco-add the reduced echelle spectra where possible. \n\nThe co-adding procedure is as follows:\\ a 2000-pixel region \n(approximately half the full order width) near \nthe middle of each order is cross-correlated, order by order, \nwith one observation arbitrarily designated as standard. \nThe pixel shifts are then examined and a linear trend as a function of \norder number is fit to the pixel shifts. \nFor any order whose pixel shift falls more than 0.4 pixels from the linear \ntrend, the value predicted by the linear trend is substituted. \nThis step repairs outlying pixel shifts, which typically arose in a \nhandful of problematic orders where the echelle blaze \nfunction shape created a false cross-correlation peak.\n\nEach spectral order is adjusted by its appropriate fractional pixel\nshift before all the newly aligned spectra are added together.\nIn order to accurately add spectra that have been shifted by \nnon-integer pixel amounts, we use a linear interpolation between \npixel values to artificially (and temporarily) increase \nthe sampling density by a factor of 20.\nAfter the co-adding, the resultant high-S\/N \nmulti-observation spectrum is sampled back down to its \noriginal spacing. \n\nThe number of observations used per star is recorded in the $N_{obs}$ \ncolumn of Table \\ref{starsvals_table}. \nThe co-adding proved particularly beneficial when fitting the relatively \nweak Eu~{\\sc ii}\\ line at 6645\\,\\AA. 
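\nTo make the procedure concrete, the sketch below shows one way the per-order alignment and summation could be implemented (a minimal Python illustration of our own, not the actual pipeline; the function names are placeholders, and the cross-correlation is reduced to an integer-pixel peak search for brevity):\n\\begin{verbatim}\nimport numpy as np\n\ndef xcorr_shift(order, ref, half=1000):\n    # Shift of `order` relative to `ref`, from the peak of the\n    # cross-correlation of a 2000-pixel central window.\n    mid = order.size \/\/ 2\n    a = order[mid-half:mid+half] - order.mean()\n    b = ref[mid-half:mid+half] - ref.mean()\n    cc = np.correlate(a, b, mode='full')\n    return float(cc.argmax() - (a.size - 1))\n\ndef coadd(observations, std=0, max_dev=0.4, up=20):\n    # Align each observation to an arbitrary standard, then sum.\n    ref = observations[std]          # shape (n_orders, n_pixels)\n    total = np.zeros_like(ref, dtype=float)\n    for obs in observations:\n        shifts = np.array([xcorr_shift(o, r)\n                           for o, r in zip(obs, ref)])\n        n = np.arange(shifts.size)\n        trend = np.polyval(np.polyfit(n, shifts, 1), n)\n        bad = np.abs(shifts - trend) > max_dev\n        shifts[bad] = trend[bad]     # repair false xcorr peaks\n        for i, (order, s) in enumerate(zip(obs, shifts)):\n            x = np.arange(order.size)\n            fine = np.linspace(0.0, x[-1], order.size * up)\n            dense = np.interp(fine, x, order)   # 20x sampling\n            # evaluate the shifted order on the original grid\n            total[i] += np.interp(x, fine - s, dense)\n    return total\n\\end{verbatim}\nIn this rendering, the final interpolation onto the original pixel grid plays the role of the down-sampling step described above.\n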
\nTwo sample stellar spectra, from a star with 1 observation\nand from a star with 17 observations, are given in Figure \n\\ref{coadding_works} as evidence of the S\/N advantages \nof this procedure.\n\n\\begin{deluxetable}{rrrrrrr}\n\\tablecaption{Stellar abundance values.\\label{starsvals_table}}\n\\normalsize\n\\tablehead{\n \\multirow{2}{*}{Name\\tablenotemark{a}} \n & \\multirow{2}{*}{$N_{obs}$}\n & \\colhead{4129 \\AA}\n & \\colhead{4205 \\AA}\n & \\colhead{6645 \\AA}\n & \\colhead{Weighted} \\\\\n & \n & \\colhead{[Eu\/H]\\tablenotemark{b}}\n & \\colhead{[Eu\/H]\\tablenotemark{b}}\n & \\colhead{[Eu\/H]\\tablenotemark{b}}\n & \\colhead{Average\\tablenotemark{c}}\n}\n \n\\startdata\n 3795 & 2 & $0.07$ & $0.06$ & $0.13$ &$0.07 \\pm 0.03$ \\\\\n 4614 & 2 & $-0.17$ & $-0.19$ & $-0.24$ &$-0.18 \\pm 0.02$ \\\\\n 6734 & 6 & $-0.02$ & $-0.06$ & $0.01$ &$-0.03 \\pm 0.03$ \\\\\n 9562 & 3 & $0.11$ & $0.14$ & $0.08$ &$0.12 \\pm 0.02$ \\\\\n 9826 & 1 & $0.10$ & $0.13$ & $0.09$ &$0.12 \\pm 0.02$ \\\\\n 14412 & 70 & $-0.28$ & $-0.15$ & $-0.16$ &$-0.24 \\pm 0.06$ \\\\\n 15335 & 2 & $-0.13$ & $-0.10$ & $-0.08$ &$-0.12 \\pm 0.03$ \\\\\n 16397 & 1 & $-0.21$ & $-0.20$ & $-0.22$ &$-0.21 \\pm 0.02$ \\\\\n 22879 & 27 & $-0.55$ & $-0.50$ & $-0.57$ &$-0.54 \\pm 0.03$ \\\\\n 23249 & 34 & $0.05$ & $0.26$ & $0.24$ &$0.17 \\pm 0.11$ \\\\\n 23439 & 24 & $-0.43$ & $-0.36$ & $-0.47$ &$-0.38 \\pm 0.03$ \\\\\n 30649 & 3 & $-0.13$ & $-0.11$ & $-0.13$ &$-0.12 \\pm 0.02$ \\\\\n 34411 & 70 & $0.12$ & $0.13$ & $0.11$ &$0.13 \\pm 0.02$ \\\\\n 43947 & 6 & $-0.21$ & $-0.19$ & $-0.21$ &$-0.20 \\pm 0.02$ \\\\\n 45184 & 95 & $0.01$ & $-0.03$ & $0.02$ &$0.00 \\pm 0.02$ \\\\\n 48938 & 2 & $-0.30$ & $-0.28$ & $-0.41$ &$-0.30 \\pm 0.03$ \\\\\n 84737 & 7 & $0.14$ & $0.14$ & $0.14$ &$0.14 \\pm 0.02$ \\\\\n 86728 & 46 & $0.06$ & $0.11$ & $0.10$ &$0.09 \\pm 0.03$ \\\\\n 102365 & 12 & $-0.13$ & $-0.08$ & $-0.14$ &$-0.11 \\pm 0.03$ \\\\\n HIP57450 & 1 & $-1.10$ & $-1.03$ &$\\leq -0.95$ &$-1.06 \\pm 0.04$ \\\\\n 103095 & 9 & \\ldots & $-0.37$ & $-0.38$ &$-0.37 \\pm 0.02$ \\\\\n 109358 & 47 & $-0.12$ & $-0.16$ & $-0.12$ &$-0.14 \\pm 0.03$ \\\\\n 115617 & 165 & $0.05$ & $0.02$ & $0.09$ &$0.04 \\pm 0.03$ \\\\\n 131117 & 2 & $0.10$ & $0.09$ & $0.12$ &$0.10 \\pm 0.02$ \\\\\n 144585 & 17 & $0.22$ & $0.25$ & $0.22$ &$0.23 \\pm 0.03$ \\\\\n 156365 & 1 & $0.11$ & $0.20$ & $0.14$ &$0.13 \\pm 0.04$ \\\\\n 157214 & 24 & $0.05$ & $0.06$ & $0.07$ &$0.06 \\pm 0.02$ \\\\\n 157347 & 77 & $0.09$ & $0.12$ & $0.13$ &$0.10 \\pm 0.02$ \\\\\n 166435 & 1 & $-0.05$ & $-0.17$ & $0.04$ &$-0.07 \\pm 0.06$ \\\\\n 169830 & 8 & $0.02$ & $0.05$ & $0.05$ &$0.03 \\pm 0.02$ \\\\\n 172051 & 37 & $-0.19$ & $-0.15$ & $-0.12$ &$-0.17 \\pm 0.03$ \\\\\n 176377 & 56 & $-0.20$ & $-0.21$ & $-0.23$ &$-0.20 \\pm 0.02$ \\\\\n 179949 & 1 & $0.05$ & $0.02$ & $0.05$ &$0.04 \\pm 0.03$ \\\\\n 182572 & 56 & $0.25$ & $0.41$ & $0.36$ &$0.31 \\pm 0.08$ \\\\\n 190360 & 73 & $0.17$ & $0.29$ & $0.30$ &$0.24 \\pm 0.07$ \\\\\n 193901 & 2 & $-0.92$ & $-0.91$ & $-0.98$ &$-0.91 \\pm 0.02$ \\\\\n 199960 & 3 & $0.14$ & $0.19$ & $0.19$ &$0.16 \\pm 0.03$ \\\\\n 210277 & 74 & $0.20$ & $0.34$ & $0.32$ &$0.27 \\pm 0.07$ \\\\\n 217107 & 37 & $0.20$ & $0.43$ & $0.33$ &$0.29 \\pm 0.11$ \\\\\n 222368 & 2 & $0.08$ & $0.09$ & $0.08$ &$0.08 \\pm 0.02$ \\\\\n 224383 & 2 & $0.00$ & $-0.01$ & $0.04$ &$0.00 \\pm 0.02$ \\\\\n Vesta & 3 & $-0.04$ & $0.00$ & $0.00$ &$-0.03 \\pm 0.03$ \\\\\n\\enddata\n\\tablenotetext{a}{All names are HD numbers unless otherwise indicated.}\n\\tablenotetext{b}{For a given star, [Eu\/H] = 
$\\log{\\epsilon\\left(\\textrm{Eu}\\right)} - \\log{\\epsilon\\left(\\textrm{Eu}\\right)_{\\odot}}$, where \\mbox{$\\log{\\epsilon\\left(X\\right)} = \\log_{10}{\\left(N_{X}\/N_{H}\\right)}$.}}\n\\tablenotetext{c}{The weight of each line in the average is based on 50 Monte Carlo trials in each Eu~{\\sc ii}~line. We have adopted an error floor of $0.02\\unit{dex}$, added in quadrature to the errors determined by our Monte Carlo procedure. See \\S\\ref{errors} for a more complete description of the weighted average and associated uncertainty.}\n\\end{deluxetable}\n\n\\begin{figure} \n\\includegraphics[width=\\columnwidth]{f1.eps}\n\\figcaption[Co-added Stellar Spectra.]{The three Eu~{\\sc ii}\\ lines considered in this paper, \n plotted for two different stars to demonstrate the advantage to be gained\n from co-adding multiple observations of the same star. \n Note that the rightmost plots have a different ordinate axis scaling than \n the other four panels.\n The top three panels, showing HD\\,156365, include 1 spectrum, while the bottom \n three panels, showing HD\\,144585, include 17. \n The two stars have approximately the same metallicity and\n effective temperature. \n The advantage from co-adding is most profound\n in the weak 6645-\\AA\\ line.\n \\label{coadding_works}}\n\\end{figure}\n\n\\section{Abundance Measurements}\n\\label{abund}\n\nWe use the SME suite of routines for our spectral synthesis, both for fine-tuning line lists based on the solar spectrum (\\S\\,\\ref{linelists}) and for measuring europium in each star (\\S\\,\\ref{stellar_abund}).\nSME is an LTE spectral synthesis code based on the \\citealt{kurucz_1992} grid of stellar atmosphere models. \nIn brief, to produce a synthetic spectrum, SME interpolates between the atmosphere models, calculates the continuous opacity, computes the radiative transfer, and then applies line broadening, which is governed by macroturbulence, stellar rotation, and instrumental profile. \nConsult \\citealt{valenti_1996} and VF05 for a more in-depth description of SME's inner workings.\n\nThroughout this study we use SME only to compute synthetic spectra.\nAll fitting is done in specialized routines of our own design, external to SME.\n\nIn this section we outline our technique for measuring europium abundances in the set of 41\\ stars included in this work. \nIn broad strokes, we first determine the atomic parameters of our spectral lines by fitting the solar spectrum (\\S\\,\\ref{linelists}). \nThen, we use that line list to measure the europium abundance in three transitions (4129\\,\\AA, 4205\\,\\AA, and 6645\\,\\AA); a weighted average of the three transitions determines a star's final europium value (\\S\\,\\ref{stellar_abund}). Finally, we estimate our uncertainties by adding artificial noise to our data in a series of Monte Carlo trials (\\S\\,\\ref{errors}).\nWe also here include notes on individual lines (\\S\\,\\ref{individual_lines}).\n\n\\subsection{Line Lists}\n\\label{linelists}\nWe use relatively broad spectral segments in our europium \nanalysis.\nThe regions centered on the Eu~{\\sc ii}\\ 4129\\,\\AA\\ and \n6645\\,\\AA\\ lines are 5\\,\\AA\\ wide, and the region centered\non the Eu~{\\sc ii}\\ 4205\\,\\AA\\ line is 8\\,\\AA\\ wide.\nWe find it necessary to use such broad spectral segments\nin order to fit a robust and consistent continuum in \nthe crowded blue regions.\n\nLine lists are initially drawn from the Vienna Astrophysics Line \nDatabase (VALD; \\citealt{piskunov_1995,kupka_1999}). 
\nThe VALD line lists, in the regions surrounding all three \nEu~{\\sc ii}\\ transitions, make extensive use of Kurucz line lists. \n\nWe apply the original VALD line list to an observed solar spectrum in\norder to determine the list's completeness and to adjust line parameters\nas needed.\nIn the blue, we use the disk-center solar spectrum from \n\\citealt{wallace_1998} with the following global parameters: \n$T_{\\rm eff}$~$ = 5770\\unit{K}$, \n$\\log{g} = 4.44$, \n[M\/H]~$ = 0$, \nmicroturbulence $v_{mic} = 1.0\\unit{km s$^{-1}$}$, \nmacroturbulence $v_{mac} = 3.6\\unit{km s$^{-1}$}$,\nrotational velocity $v\\sin{i} = 0\\unit{km s$^{-1}$}$, \nand radial velocity $v_{rad} = 0\\unit{km s$^{-1}$}$. \nThese are the same solar parameters adopted in VF05.\nIn the red, we find the \\citealt{wallace_1998} solar atlas \nto have insufficient S\/N to accurately determine\nthe atomic parameters.\nAt 6645\\,\\AA, therefore, we instead compare our \nline list to the disk-integrated NSO solar spectrum \n\\citep{kurucz_1984}, \nadjusting $v\\sin{i}$ to $1.6\\unit{km s$^{-1}$}$ because\nthe full solar disk has more substantial rotational \nbroadening.\n\nWe find that adjustments to the oscillator strength \n($\\log{\\textrm{\\em gf}}$) and van der Waals broadening ($\\Gamma_{6}$) parameters \nare required for the strongest lines in a given \nwavelength segment, even far from the Eu~{\\sc ii}\\ line of \ninterest.\nFor example, the Fe~{\\sc i}\\ line at 4202\\,\\AA\\ has an equivalent \nwidth $W=326\\unit{m\\AA}$ in the Sun \nand significantly affects \nthe continuum of the 4205-\\AA\\ region.\nThe $\\log{\\textrm{\\em gf}}$\\ parameter controls the line depth while the\n$\\Gamma_{6}$\\ parameter controls the line shape, so in general\nthe two parameters are orthogonal.\n\nWe used the Kurucz $\\log{\\textrm{\\em gf}}$\\ values provided \nby VALD where possible, but adjustments\nwere necessary where line depths were poorly fit.\nFor $\\Gamma_{6}$, VALD returns the \\citealt{barklem_2000} \nparameters for beryllium through barium ($Z$ of 4--56),\nbut has no $\\Gamma_{6}$\\ values above $Z=56$.\nWe therefore find it necessary to fit $\\Gamma_{6}$\\ in \nspecies heavier than barium and in deep features\nnot fit well by VALD values.\nWe take particular care to determine the appropriate \n$\\log{\\textrm{\\em gf}}$\\ and $\\Gamma_{6}$\\ parameters for all lines \nadjacent to the Eu~{\\sc ii}\\ line of interest. 
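\nOne way to organize this per-line tuning is sketched below (our own minimal Python rendering; the {\\tt synth} argument stands in for the SME synthesizer and is not its real interface, and the dictionary keys are placeholders):\n\\begin{verbatim}\nimport numpy as np\n\ndef tune(linelist, idx, key, grid, wave, solar_flux, synth):\n    # One-dimensional chi^2 scan of a single atomic parameter\n    # (`key` is 'loggf' or 'gamma6') of the line `linelist[idx]`,\n    # fit against the observed solar flux.\n    chi2 = []\n    for value in grid:\n        trial = [dict(line) for line in linelist]\n        trial[idx][key] = value\n        model = synth(trial, wave)\n        chi2.append(np.sum((solar_flux - model)**2))\n    best = grid[int(np.argmin(chi2))]\n    linelist[idx][key] = best\n    return best\n\\end{verbatim}\n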
\nWe find the best value for these parameters with the SME \nsynthesizer by performing a $\\chi^2$\\ minimization against \nthe solar spectrum on each parameter.\n\nWhere the VALD line list is insufficient, we add line data \nfrom \\citealt{moore_1966}, \nNIST (whose lists are based on a variety of sources) and \nC.~Sneden (private communication).\nWe also add CH and CN molecular lines based on values\nobtained from the Kurucz molecular line list web \nsite.\\footnote{\\tt http:\/\/kurucz.harvard.edu\/LINELISTS\/LINESMOL\/}\nIn the 4129-\\AA\\ region, we find it necessary to add \nthree artificial iron lines in order to match the solar spectrum.\nWe follow here the precedent of \\citealt{delpeloso_2005a},\nthough we find our fit requires the lines to have \nslightly different wavelengths and $\\log{\\textrm{\\em gf}}$\\ values.\nThe complete line list for Eu~{\\sc ii}\\ at 4129\\,\\AA\\ appears in\nTable \\ref{4129_nso_table}, \nfor 4205\\,\\AA\\ in Table \\ref{4205_nso_table}, and\nfor 6645\\,\\AA\\ in Table \\ref{6645_nso_table}.\nThe corresponding plots of the regions in these tables appear in\nFigures \\ref{eu4129_nso}, \\ref{eu4205_nso}, and \\ref{eu6645_nso}.\n\n\\begin{figure} \n\\includegraphics[width=\\columnwidth]{f2.eps}\n\\figcaption[Eu 4129\\,\\AA\\ in NSO.]{The 4129-\\AA\\ Eu~{\\sc ii}\\ line in the solar spectrum. \n Individual lines are annotated and are listed in Table \\ref{4129_nso_table}.\n The hyperfine components (see \\S\\,\\ref{linelists}) appear in the inset plot, \n which is aligned with the wavelength scale of the main plot.\n The relative strengths of the 32 hyperfine components \\citep{ivans_2006}\n are plotted on a linear scale in the inset; the top half of the inset \n contains the components from the $^{151}$Eu isotope while the\n $^{153}$Eu isotope components appear on the bottom.\n The gray box indicates the portion of the spectrum used to calculate \n $\\chi^2$\\ during the abundance fitting step (see \\S\\,\\ref{stellar_abund}).\n The cross-hatched region indicates a portion of the spectrum \n used to fit a continuum. \n This plot represents a subset of the spectral region used in our \n analysis; the full region is 5\\,\\AA\\ wide and contains three additional \n continuum fitting regions.\n \\label{eu4129_nso}}\n\\end{figure}\n\n\\begin{figure} \n\\includegraphics[width=\\columnwidth]{f3.eps}\n\\figcaption[Eu 4205\\,\\AA\\ in solar atlas.]{The 4205-\\AA\\ Eu~{\\sc ii}\\ line in the solar spectrum. \n Individual lines are annotated and are listed in Table \\ref{4205_nso_table}. \n The hyperfine components (see \\S\\,\\ref{linelists}) appear in the inset plot, \n which is aligned with the wavelength scale of the main plot.\n The relative strengths of the 30 hyperfine components \\citep{ivans_2006}\n are plotted on a linear scale in the inset; the top half of the inset \n contains the components from the $^{151}$Eu isotope while the\n $^{153}$Eu isotope components appear on the bottom.\n The gray box indicates the portion of the spectrum used to calculate \n $\\chi^2$\\ during the abundance fitting step (see \\S\\,\\ref{stellar_abund}). \n The cross-hatched region indicates a portion of the spectrum \n used to fit a continuum. 
This plot represents a subset of the \n spectral region used in our analysis; the full region is 8\\,\\AA\\ wide \n and contains six additional continuum fitting regions.\n \\label{eu4205_nso}}\n\\end{figure}\n \n\\begin{deluxetable*}{ccccccc}[!htp]\n\\tablecaption{Line list for the region near Eu~{\\sc ii}\\ at 4129\\,\\AA.\\label{4129_nso_table}}\n\\tablehead{\n \\colhead{$\\lambda$} \n & \\multirow{2}{*}{Element}\n & \\colhead{Lower Level}\n & \\multicolumn{2}{c}{{$\\log{\\textrm{\\em gf}}$}} \n & \\multicolumn{2}{c}{{$\\Gamma_{6}$}}\\\\\n \\colhead{(\\AA)}\n & \n & \\colhead{(eV)}\n & \\colhead{solar fit} \n & \\colhead{VALD\\tablenotemark{a}}\n & \\colhead{solar fit}\n & \\colhead{VALD\\tablenotemark{b}}\n}\n\n\\startdata\n 4129.147 & Pr {\\sc ii} & 1.039 & $-0.100$ & $-0.100$ & $-7.454$ & \\ldots \\\\\n 4129.159 & Cr {\\sc i} & 3.013 & $-1.948$ & $-1.948$ & $-6.964$ & $-7.362$ \\\\\n 4129.159 & Ti {\\sc ii} & 1.893 & $-2.300$ & $-1.730$ & $-6.900$ & $-7.908$ \\\\\n 4129.166 & Ti {\\sc i} & 2.318 & $-0.200$ & $-0.231$ & $-6.900$ & $-7.572$ \\\\\n 4129.174 & Ce {\\sc ii} & 0.740 & $-3.000$ & $-0.901$ & $-7.493$ & \\ldots \\\\\n 4129.182\\tablenotemark{c} &\n Cr {\\sc i} & 2.914 & $-0.100$ & \\ldots & $-8.300$ & \\ldots \\\\\n 4129.220 & Fe {\\sc i} & 3.417 & $-3.500$ & $-2.030$ & $-6.857$ & $-7.255$ \\\\\n 4129.220 & Sm {\\sc ii} & 0.248 & $-1.123$ & $-1.123$ & $-7.536$ & \\ldots \\\\\n 4129.425 & Dy {\\sc ii} & 0.538 & $-0.522$ & $-0.522$ & $-7.554$ & \\ldots \\\\\n 4129.426 & Nb {\\sc i} & 0.086 & $-0.780$ & $-0.780$ & $-7.462$ & \\ldots \\\\\n 4129.458 & Fe {\\sc i} & 3.396 & $-1.950$ & $-1.970$ & $-6.863$ & $-7.206$ \\\\\n 4129.522\\tablenotemark{d} &\n Fe {\\sc i} & 3.140 & $-3.497$ & \\ldots & $-6.873$ & \\ldots \\\\\n 4129.643 & Ti {\\sc i} & 2.239 & $-1.987$ & $-1.987$ & $-7.529$ & $-7.529$ \\\\\n 4129.705 & Eu {\\sc ii} & 0.000 & $+0.260$ & $+0.173$ & $-7.174$ & \\ldots \\\\\n 4129.817 & Co {\\sc i} & 3.812 & $-1.808$ & $-1.808$ & $-6.099$ & $-7.782$ \\\\\n 4129.837 & Nd {\\sc ii} & 2.024 & $-0.543$ & $-0.543$ & $-6.237$ & \\ldots \\\\\n 4129.959\\tablenotemark{d} &\n Fe {\\sc i} & 2.670 & $-3.139$ & \\ldots & $-7.322$ & \\ldots \\\\\n 4130.035 & Fe {\\sc i} & 1.557 & $-3.900$ & $-4.345$ & $-7.885$ & $-7.826$ \\\\\n 4130.036 & Fe {\\sc i} & 3.111 & $-2.350$ & $-2.636$ & $-8.026$ & $-7.857$ \\\\\n 4130.073 & Cr {\\sc i} & 2.914 & $-1.971$ & $-1.971$ & $-6.929$ & $-7.349$ \\\\\n 4130.122\\tablenotemark{e} &\n V {\\sc i} & 1.218 & $-1.000$ & $-3.142$ & $-7.060$ & $-7.800$ \\\\\n 4130.233 & Mn {\\sc i} & 2.920 & $-2.400$ & $-3.309$ & $-7.900$ & $-7.784$ \\\\\n 4130.315 & V {\\sc i} & 2.269 & $-0.300$ & $-0.607$ & $-7.187$ & $-7.585$ \\\\\n 4130.364 & Gd {\\sc ii} & 0.731 & $+0.177$ & $-0.090$ & $-6.608$ & \\ldots \\\\\n 4130.452 & Cr {\\sc i} & 2.914 & $-1.099$ & $-2.751$ & $-6.805$ & $-7.348$ \\\\\n\\enddata\n\\tablenotetext{a}{All line data except \\gamsix\\ from Kurucz databases via VALD (unless otherwise noted).}\n\\tablenotetext{b}{Van der Waals parameters where available from \\citealt{barklem_2000} via VALD (unless otherwise noted).}\n\\tablenotetext{c}{Identification from \\citealt{moore_1966}, \\loggf\\ and \\gamsix\\ from \\chisq\\ minimization.}\n\\tablenotetext{d}{Artificial iron lines included after \\citealt{delpeloso_2005a}.}\n\\tablenotetext{e}{Identification and parameters from Kurucz line lists hosted by the University of Hannover:\\newline \\texttt{http:\/\/www.pmp.uni-hannover.de\/cgi-bin\/ssi\/test\/kurucz\/sekur.html}.}\n\\end{deluxetable*}\n\n\\begin{figure} 
\n\\includegraphics[width=\\columnwidth]{f4.eps}\n\\figcaption[Eu 6645\\,\\AA\\ in solar atlas.]{The 6645-\\AA\\ Eu~{\\sc ii}\\ line in the solar \n spectrum. Note that the ordinate axis is scaled differently than in Figures \n \\ref{eu4129_nso} and \\ref{eu4205_nso}.\n Individual lines are annotated and are listed in Table \\ref{6645_nso_table}.\n The hyperfine components (see \\S\\,\\ref{linelists}) appear in the inset plot, \n which is aligned with the wavelength scale of the main plot.\n The relative strengths of the 30 hyperfine components \\citep{ivans_2006}\n are plotted on a linear scale in the inset; the top half of the inset \n contains the components from the $^{151}$Eu isotope while the\n $^{153}$Eu isotope components appear on the bottom.\n The gray box indicates the portion of the spectrum used to calculate \n $\\chi^2$\\ during the abundance fitting step (see \\S\\,\\ref{stellar_abund}). \n The two cross-hatched regions indicate the portion of the spectrum used \n to fit a continuum.\n This plot represents a subset of the spectral region used in our analysis; \n the full region is 5\\,\\AA\\ wide and contains four additional continuum fitting regions.\n \\label{eu6645_nso}}\n\\end{figure}\n\nHyperfine splitting is the dominant broadening mechanism for the\neuropium spectral lines. \nThe interaction between the nuclear spin and the atom's \nangular momentum vector causes energy level splitting\nin atoms with odd atomic numbers (europium is $Z=63$). \nThe effect is particularly pronounced in rare earth \nelements.\nThe 4129-\\AA\\ and 4205-\\AA\\ lines, for example, have FWHMs \nof 1.5\\,\\AA, due largely to hyperfine structure\n(but not isotope splitting---see insets of Figures \n\\ref{eu4129_nso}--\\ref{eu6645_nso}).\nSince the relative strengths of the hyperfine\ncomponents are constant without regard to\ntemperature, pressure, or magnetic field \\citep{abt_1952},\nthe components measured in laboratory settings\ncan be applied to stellar spectra.\n\nLike other spectral fitting packages,\nSME has no built-in treatment of hyperfine \nstructure. \nWe therefore convert a single europium line into \nits constituent hyperfine components and include them \nas separate entries in the star's line list. \nThe relative strengths and wavelength offsets of the \nhyperfine components come from \\citealt{ivans_2006}, \nwhich bases the values on an FTS laboratory analysis. \nFollowing the procedure of \\citealt{ivans_2006}, \nwe assume a solar system \ncomposition for the relative abundance of the two \neuropium isotopes ($^{151}$Eu at 47.8\\% and \n$^{153}$Eu at 52.2\\%, from \\citealt{rosman_1998}).\nWe divide the Eu~{\\sc ii}\\ $\\log{\\textrm{\\em gf}}$\\ values listed in \nTables \\ref{4129_nso_table}, \\ref{4205_nso_table}, \nand \\ref{6645_nso_table} amongst the \nhyperfine components according to their relative strengths. \nAll other attributes of the Eu~{\\sc ii}\\ lines remain the same in \nthe creation of the hyperfine lines. 
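\nA minimal sketch of this bookkeeping follows (our own illustration; we assume the relative strengths are normalized within each isotope, and the dictionary fields are placeholders rather than any real line-list format):\n\\begin{verbatim}\nimport numpy as np\n\n# Solar system isotope fractions (Rosman & Taylor 1998)\nISO_FRAC = {'151Eu': 0.478, '153Eu': 0.522}\n\ndef split_hyperfine(line, components):\n    # `line` carries the total log(gf) plus the line's other\n    # atomic data; each component carries an isotope tag, a\n    # wavelength offset in Angstroms and a relative strength.\n    out = []\n    for iso, frac in ISO_FRAC.items():\n        comps = [c for c in components if c['isotope'] == iso]\n        norm = sum(c['strength'] for c in comps)\n        for c in comps:\n            gf = 10.0**line['loggf'] * frac * c['strength'] \/ norm\n            entry = dict(line)        # all other attributes kept\n            entry['wavelength'] = line['wavelength'] + c['dlam']\n            entry['loggf'] = np.log10(gf)\n            out.append(entry)\n    return out\n\\end{verbatim}\n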
\nThe relative strengths of the hyperfine components are displayed \nin the inset plots in the solar spectrum Figures\n\\ref{eu4129_nso}, \\ref{eu4205_nso}, and \\ref{eu6645_nso}.\n\n\\begin{deluxetable*}{ccccccc}\n\\tablecolumns{7}\n\\tablecaption{Line list for the region near Eu~{\\sc ii}\\ at 4205\\,\\AA.\\label{4205_nso_table}}\n\\tablewidth{0pt}\n\\scriptsize\n\\tablehead{\n \\colhead{$\\lambda$} \n & \\multirow{2}{*}{Element}\n & \\colhead{Lower Level}\n & \\multicolumn{2}{c}{$\\log{\\textrm{\\em gf}}$}\n & \\multicolumn{2}{c}{$\\Gamma_{6}$} \\\\\n \\colhead{(\\AA)}\n & \n & \\colhead{(eV)}\n & \\colhead{solar fit} \n & \\colhead{VALD\\tablenotemark{a}} \n & \\colhead{solar fit}\n & \\colhead{VALD\\tablenotemark{b}} \n}\n\n\\startdata\n 4204.695 & Y {\\sc ii} & 0.000 & $-1.800$ & $-1.760$ & $-7.700$ & \\ldots \\\\\n 4204.717 & Ce {\\sc ii} & 0.792 & $-0.963$ & $-0.963$ & $-7.000$ & \\ldots \\\\\n 4204.759\\tablenotemark{c} &\n CH & 0.519 & $-1.140$ & $-1.158$ & $-8.900$ & \\ldots \\\\\n 4204.771\\tablenotemark{c} &\n CH & 0.520 & $-1.900$ & $-1.135$ & $-8.500$ & \\ldots \\\\\n 4204.801 & Sm {\\sc ii} & 0.378 & $-1.771$ & $-1.771$ & $-6.738$ & \\ldots \\\\\n 4204.831\\tablenotemark{c} &\n CH & 0.520 & $-3.360$ & $-3.360$ & $-7.699$ & \\ldots \\\\\n 4204.858 & Gd {\\sc ii} & 0.522 & $-0.668$ & $-0.668$ & $-6.787$ & \\ldots \\\\\n 4204.990 & Cr {\\sc i} & 4.616 & $-1.457$ & $-1.457$ & $-6.658$ & $-7.852$ \\\\\n 4205.000 & Fe {\\sc i} & 4.220 & $-1.900$ & $-2.150$ & $-7.000$ & $-7.548$ \\\\\n 4205.038 & V {\\sc ii} & 1.686 & $-1.850$ & $-1.875$ & $-6.800$ & $-7.913$ \\\\\n 4205.042 & Eu {\\sc ii} & 0.000 & $+0.250$ & $+0.120$ & $-6.800$ & \\ldots \\\\\n 4205.084 & V {\\sc ii} & 2.036 & $-1.100$ & $-1.300$ & $-6.900$ & $-7.956$ \\\\\n 4205.098 & Fe {\\sc i} & 2.559 & $-4.900$ & $-4.900$ & $-6.671$ & $-7.865$ \\\\\n 4205.107 & Cr {\\sc i} & 4.532 & $-1.160$ & $-1.160$ & $-6.582$ & $-7.776$ \\\\\n 4205.163 & Ce {\\sc ii} & 1.212 & $-0.653$ & $-0.653$ & $-6.672$ & \\ldots \\\\\n 4205.253 & Nd {\\sc ii} & 0.680 & $-0.992$ & $-0.992$ & $-6.699$ & \\ldots \\\\\n 4205.303 & Nb {\\sc i} & 0.049 & $-0.850$ & $-0.850$ & $-6.677$ & \\ldots \\\\\n 4205.381 & Mn {\\sc ii} & 1.809 & $-3.300$ & $-3.376$ & $-6.800$ & $-8.001$ \\\\\n 4205.402\\tablenotemark{c} &\n CH & 0.488 & $-2.300$ & $-3.960$ & $-8.000$ & \\ldots \\\\\n 4205.427\\tablenotemark{c} &\n CH & 1.019 & $-2.300$ & $-1.130$ & $-8.000$ & \\ldots \\\\\n 4205.491\\tablenotemark{c} &\n CH & 1.019 & $-3.463$ & $-3.463$ & $-8.000$ & \\ldots \\\\\n 4205.533\\tablenotemark{c} &\n CH & 1.019 & $-1.800$ & $-1.149$ & $-8.000$ & \\ldots \\\\\n 4205.538 & Fe {\\sc i} & 3.417 & $-1.100$ & $-1.435$ & $-7.800$ & $-7.224$ \\\\\n\\enddata\n\\tablenotetext{a}{All line data except \\gamsix\\ from Kurucz databases via VALD (unless otherwise noted).}\n\\tablenotetext{b}{Van der Waals parameters where available from \\citealt{barklem_2000} via VALD (unless otherwise noted).}\n\\tablenotetext{c}{All molecular line data (except \\gamsix, from \\chisq\\ minimization) from Kurucz web site:\\ \\texttt{http:\/\/kurucz.harvard.edu\/LINELISTS\/LINESMOL\/}.}\n\\end{deluxetable*}\n\n\\begin{deluxetable*}{ccccccc}\n\\tablecolumns{7}\n\\tablecaption{Line list for the region near Eu~{\\sc ii}\\ at 6645\\,\\AA.\\label{6645_nso_table}}\n\\tablewidth{0pt}\n\\scriptsize\n\\tablehead{\n \\colhead{$\\lambda$} \n & \\multirow{2}{*}{Element}\n & \\colhead{Lower Level}\n & \\multicolumn{2}{c}{$\\log{\\textrm{\\em gf}}$}\n & \\multicolumn{2}{c}{$\\Gamma_{6}$} \\\\\n \\colhead{(\\AA)}\n & 
\n & \\colhead{(eV)}\n & \\colhead{solar fit} \n & \\colhead{VALD\\tablenotemark{a}} \n & \\colhead{solar fit}\n & \\colhead{VALD\\tablenotemark{b}} \n}\n\n\\startdata\n 6644.320\\tablenotemark{c} &\n CN & 0.805 & $-1.456$ & $-2.258$ & $-7.695$ & \\ldots \\\\\n 6644.415\\tablenotemark{d} &\n La {\\sc i} & 0.131 & $-1.330$ & $-2.070$ & $-8.000$ & \\ldots \\\\\n 6645.111 & Eu {\\sc ii} & 1.380 & $+0.219$ & $+0.205$ & $-7.218$ & \\ldots \\\\\n 6645.210 & Si {\\sc i} & 6.083 & $-2.510$ & $-2.120$ & $-7.118$ & \\ldots \\\\\n 6645.372 & Fe {\\sc i} & 4.386 & $-2.759$ & $-3.536$ & $-6.780$ & $-7.808$ \\\\\n\\enddata\n\\tablenotetext{a}{All line data except \\gamsix\\ from Kurucz databases via VALD (unless otherwise noted).}\n\\tablenotetext{b}{Van der Waals parameters where available from \\citealt{barklem_2000} via VALD (unless otherwise noted).}\n\\tablenotetext{c}{All molecular line data (except \\gamsix, from \\chisq\\ minimization) from Kurucz web site:\\ \\texttt{http:\/\/kurucz.harvard.edu\/LINELISTS\/LINESMOL\/}.}\n\\tablenotetext{d}{Identification and parameters from Kurucz line lists hosted by the University of Hannover:\\newline \\texttt{http:\/\/www.pmp.uni-hannover.de\/cgi-bin\/ssi\/test\/kurucz\/sekur.html}.}\n\\end{deluxetable*}\n\n\\subsection{Europium Abundances}\n\\label{stellar_abund}\nIn order to measure the europium abundances in our selected stars, \nwe begin with the co-added spectra described in \\S\\,\\ref{data}. \nWe do a preliminary continuum fit using the SME routines,\nwhich follow the VF05 procedure:\\ deep features are filled \nin with a median value from neighboring spectral orders, \nthen a sixth-order polynomial is fit to the region of interest.\nThe built-in procedure creates a flat, normalized\ncontinuum, though we find it necessary to fine-tune the \ncontinuum normalization in the course of our europium \nfitting.\n\nFrom the VF05 SPOCS catalog we take \n$T_{\\rm eff}$, $\\log{g}$, [M\/H], $v\\sin{i}$, $v_{mac}$, and $v_{mic}$\n(fixed at 0.85\\unit{km s$^{-1}$}) \nfor each star we consider. \nIn general, the global parameters from VF05 agree very well with\nthe values adopted in the studies we compare to here. \n(Most of the literature values fall within the 2-$\\sigma$ errors\nquoted in VF05.) The VF05 catalog is one of the largest and most \nreliable sources of stellar properties determined to date; \n\\citealt{haywood_2006}, for example, finds the VF05 $T_{\\rm eff}$\\ and [M\/H]\\\nto be in good agreement with other reliable measurements.\n\nFor most element abundances we use a scaled solar\nsystem composition, shifting the \n\\citealt{grevesse_1998} solar abundances by \nthe star's [M\/H].\nThe exceptions to this rule are \nsodium, silicon, titanium, iron, and nickel,\nwhich VF05 measured individually. \nThose $\\alpha$ and iron-peak elements are\ntherefore treated independently of the na\\\"ive scaled-solar\nadjustment.\nIt is possible that a more explicit treatment of iron-peak\nand $\\alpha$ elements would improve our europium measurement\naccuracy. \nFor the present study, however, we deem individual \nabundance analysis (apart from europium) unnecessary.\nWe hold fixed the abundances of all elements other than \neuropium in the subsequent analysis.\n\nWe fit for the europium abundance by iterating three \n$\\chi^2$-minimization routines that solve for the \nwavelength alignment, spectrum continuum, and \neuropium abundance.\nA summary of each routine follows:\n\\begin{enumerate}\n\\item Wavelength. 
\nThe pixel scale is pre-determined from\nthe thorium lamp calibration taken each night.\nA first estimate of the rest frame wavelengths comes from\na cross-correlation of the full spectral segment with the \nsolar spectrum, a built-in functionality of SME.\nWe then use a 2-\\AA\\ region immediately surrounding \nthe Eu~{\\sc ii}\\ line of interest to perform a \n$\\chi^2$\\ minimization between the modeled stellar \natmosphere and the spectral data, thus solving for\nthe wavelength scale alignment as precisely as \npossible in the Eu~{\\sc ii}\\ region.\n\\item Continuum.\nWe fit a quadratic function across the points designated \nin the solar spectrum as continuum-fitting points \n(the cross-hatch regions in \nFigures \\ref{eu4129_nso}, \\ref{eu4205_nso},\nand \\ref{eu6645_nso}).\nWe adjust the quadratic continuum function vertically\nto require that 1--2\\% of spectral points \nin the full spectral segment are above unity, \nthereby ensuring that all spectra are scaled identically.\n\\item Abundance.\nWe perform a $\\chi^2$\\ minimization \nadjusting only the abundance of europium.\nWe begin with the solar abundance value scaled\nby the star's metallicity, then search \n$1.0\\unit{dex}$\nof europium abundance space to find the \nbest-fit value.\nIn minimizing the $\\chi^2$\\ statistic, \nwe calculate the residuals between the data \nand the fit in a limited region around the \nEu~{\\sc ii}\\ line (the gray regions in Figures \n\\ref{eu4129_nso}, \\ref{eu4205_nso}, and \n\\ref{eu6645_nso}).\n\\end{enumerate}\n\\noindent The wavelength alignment, spectrum continuum, and europium abundance \nfitting routines are run in that order and iterated \nuntil a stable solution is reached.\nIn most cases a stable solution requires only one or \ntwo iterations.\nThe abundances for each line in each star as \ndetermined by this process are listed in Table \n\\ref{starsvals_table}.\n\nAfter running our automatic europium fitting algorithm \non all 41\\ stars,\nwe examine each line in each star by eye to \nconfirm that the fit is successful. \nIn a few of the metal-poor 4129-\\AA\\ fits, the blended lines\nthat encroach on Eu~{\\sc ii}\\ were fit so poorly with the\nVF05 SPOCS values that the europium value was \nunconvincing. \nIn those cases, the 4129-\\AA\\ value is\nomitted from Table \\ref{starsvals_table} and does not\ncontribute to the average.\nSee Figure \\ref{keep} for a comparison of a rejected\n4129-\\AA\\ feature and a robust 4129-\\AA\\ fit.\nSimilarly, in a few metal-poor stars the\n6645-\\AA\\ line is too weak to be seen in the noise, \nand hence the output of the fitting routine serves only\nas an upper limit on the europium abundance.\nSee Figure \\ref{keep} for a comparison of a measurable \n6645-\\AA\\ feature and a feature that provides an \nupper limit.\nIn the cases where the 6645-\\AA\\ line is an upper limit\nonly, it is listed as such in Table \\ref{starsvals_table}\nand does not contribute to the average. \n\n\\begin{figure} \n\\includegraphics[width=\\columnwidth]{f5.eps}\n\\figcaption[Bad fits vs.\\ good fits in comparable stars.]{Bad fits (top) \n versus good fits (bottom) in comparable stars. \n In the upper left panel, the SPOCS stellar properties fit the lines adjacent to\n Eu~{\\sc ii}\\ poorly enough that [Eu\/H]\\ is unreliable. That line is removed from further \n analysis.\n In the upper right panel, the 6645-\\AA\\ line is buried in the noise, meaning\n the fit represents an upper limit to the europium abundance. 
The abundance\nupper limit is noted in Table \\ref{starsvals_table}, but does not contribute\nto the overall [Eu\/H]\\ measurement in the star.\n The lower two panels include fits to stars with similar characteristics \n to the stars in the upper panels, but where the 4129-\\AA\\ and 6645-\\AA\\\n fits were more successful.\n \\label{keep}}\n\\end{figure}\n\nIf both 4129\\,\\AA\\ and 6645\\,\\AA\\ proved problematic, we removed the\nstar from our analysis entirely. \nThe three stars for which this was the case (listed in\n\\S\\,\\ref{stellsamp}) are omitted from \nTables \\ref{starsinfo_table} and \\ref{starsvals_table}.\nSince all of the rejected stars were\nmetal poor, we conclude that our \nfitting routine is most robust at solar metallicity, \nand becomes less reliable at [M\/H]~$< -1$. \nTemperature may also play a role, as one of the\nrejected stars, HD\\,64090, has $T_{\\rm eff} = 7300\\unit{K}$; \nVF05 determined SME to be reliable between 4800 and \n$6500\\unit{K}$.\n\nFor each star we calculate a weighted average europium \nabundance value based on the three (or, in some \ncases, two) spectral lines.\nWeighting the average is important because for stars with\nrelatively few observations (e.g., HD\\,156365 in Figure \n\\ref{coadding_works}), the weak line at 6645\\,\\AA\\ should \nbe weighted significantly less than the more robust blue \nlines. \nStars with a larger number of observations \nand higher S\/N spectra\n(e.g., HD\\,144585 in Figure \\ref{coadding_works})\nshould have the 6645-\\AA\\ line weighted more strongly.\nIn order to determine the relative weights of the three \nspectral lines, we tested the robustness of our fit by \nadding artificial noise to the spectra. \nWe describe that process in \\S\\,\\ref{errors}.\n\n\\subsection{Error Analysis}\n\\label{errors}\nWe begin our error analysis by adding to the data \nGaussian-distributed random noise with a standard \ndeviation set by the photon noise at each pixel.\nWe then fit the europium line again, using the \nsame iterative $\\chi^2$-minimization process described \nin \\S\\,\\ref{stellar_abund}, and repeat the process \n50 times.\nThe standard deviation of the 50 Monte Carlo \ntrials determines the relative weights of the \nlines in the average listed in \nTable \\ref{starsvals_table}, with lower \nstandard deviation lines (corresponding \nto a more robust fit) weighted more \nstrongly.\n\nAs expected, the results of the Monte Carlo trials show \nthat the larger the photon noise in an observation, the \nless weight that line receives. \nWe derive a linear relation between \nthe Poisson uncertainty of an observation\nand its relative weight, based on the Monte Carlo results.\nBecause the Monte Carlo trials are CPU intensive, we plan\nto use that relation to determine \nrelative weights in the future.\n\nWe estimate the uncertainty of our europium\nabundances to be the square root of the weighted \nsum of the squares of the \nresiduals for all three lines, where \nwe assign the residuals the same relative\nweights we applied when calculating the average.\nWe also impose an error floor of $0.02\\unit{dex}$,\nadded to the uncertainty in quadrature.\n\nThe error floor comes from a comparison of\nabundance values from the individual lines\n(Figure \\ref{line_compare}, discussed in more \ndetail in \\S\\,\\ref{compare_lines});\nerror bars of $0.03\\unit{dex}$ on each measurement\nwould make the reduced $\\sqrt{\\chi^2}$~$=1$. 
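\nFor concreteness, the per-star combination can be sketched as follows (our own illustration, assuming inverse-variance weights from the Monte Carlo scatter; the weighted dispersion below is one simple reading of the weighted residual sum described above):\n\\begin{verbatim}\nimport numpy as np\n\nFLOOR = 0.02   # dex, added in quadrature\n\ndef combine(eu, sigma_mc):\n    # `eu`: per-line [Eu\/H] values; `sigma_mc`: standard\n    # deviations of the 50 Monte Carlo refits of each line.\n    eu, sig = np.asarray(eu), np.asarray(sigma_mc)\n    w = 1.0 \/ sig**2\n    mean = np.sum(w * eu) \/ np.sum(w)\n    # weighted scatter of the per-line residuals\n    scatter = np.sqrt(np.sum(w * (eu - mean)**2) \/ np.sum(w))\n    return mean, np.hypot(scatter, FLOOR)\n\n# e.g. combine([0.07, 0.06, 0.13], [0.03, 0.03, 0.06])\n\\end{verbatim}\n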
\nA minimum error of $0.03\\unit{dex}$ on each \neuropium measurement\ntranslates into an uncertainty of $0.02\\unit{dex}$ \nfor the final averaged value.\nThe $0.02\\unit{dex}$ error floor is included in the \nuncertainty quoted in Table \\ref{starsvals_table}.\n\n\\begin{figure} \n\\includegraphics[width=\\columnwidth]{f6.eps}\n\\figcaption[Comparing three Eu~{\\sc ii}\\ lines.]{A comparison of the Eu~{\\sc ii}\\ values \n measured in the 41\\ stars of this study, as listed in Table \n \\ref{starsvals_table}. \n Vesta is marked with a plus symbol.\n The solid line represents a 1:1 correlation; the dashed line is a \n best fit to the data.\n Points are omitted where the fit was poor in one of the Eu~{\\sc ii}\\ lines. \n Upper limits for the 6645-\\AA\\ line are marked with arrows.\n See \\S\\,\\ref{compare_lines} for a discussion of the quality of the fit.\n \\label{line_compare}}\n\\end{figure}\n\nAs an initial test of the robustness of our fitting routine,\nwe measure the europium abundance in a spectrum of Vesta,\nwhich serves as a solar proxy.\nThe values are listed in the last row of\nTable \\ref{starsvals_table}.\nThe three Vesta europium abundances have a standard\ndeviation of $0.022\\unit{dex}$, indicating that \nsystematic errors ($\\sim$\\,$0.03\\unit{dex}$ for Vesta,\nits offset from the solar value) may be a significant\nportion of the error budget.\n\nFor the moment we absorb any systematic error with \nour random error estimates.\nThe full sample of 1000 stars, the analysis \nof which will follow this work, will\nallow a far more thorough investigation of \nthe dependence of our results on various model \nparameters ($T_{\\rm eff}$, $\\log{g}$, etc.)\\ than can be \naccomplished here.\nTherefore, we delay a substantive discussion of \nsystematics until \nwe have more europium abundance measurements in hand,\nthough we touch on it again in \\S\\,\\ref{compare_lines}\nand \\S\\,\\ref{compare_lit}.\nIt is important to note that since most of the CCPS stars \nare similar to the Sun, and we are treating each star\nidentically, our results will be internally consistent.\n\n\\subsection{Notes on Individual Lines}\n\\label{individual_lines}\n\n\\subsubsection{Europium 4129\\,\\AA}\n\\label{abund:eu4129}\nThe europium line at 4129\\,\\AA\\ is the result of a \nresonance transition of Eu~{\\sc ii}, and is a strong, \nrelatively clean line. \nIt provides the most reliable measurement of \neuropium abundance in a star. \nOur fit to the Eu~{\\sc ii}\\ line at 4129\\,\\AA\\ in the \nsolar spectrum (described in \\S\\,\\ref{linelists}) \nappears as Figure \\ref{eu4129_nso}, with the 32\nhyperfine components (16 each from $^{151}$Eu \nand $^{153}$Eu; \\citealt{ivans_2006}) \nrepresented in the Figure \n\\ref{eu4129_nso} inset.\n\nIn the course of stellar fitting, the europium \nabundance is determined from the 4129\\,\\AA\\ line \nin all 41\\ stars except HD\\,103095. 
\nThat star is quite metal poor ([M\/H]~$= -1.16$) and cool \n($T_{\\rm eff} = 4950\\unit{K}$), factors that likely contribute to the \npoor fit in the lines adjacent to the Eu~{\\sc ii}\\ line of \ninterest (see Figure \\ref{keep}).\n\n\\subsubsection{Europium 4205\\,\\AA}\n\\label{abund:eu4205}\nThe Eu~{\\sc ii}\\ line at 4205\\,\\AA\\ is the other fine-structure\ncomponent of the resonance transition responsible\nfor the line at 4129\\,\\AA.\nThough it is as strong as the Eu~{\\sc ii}\\ line at 4129\\,\\AA, \ncontamination from embedded lines (see Figure \\ref{eu4205_nso})\nhas the potential to make the 4205-\\AA\\ line less reliable. \nHowever, it is useful as a comparison line for the results\nfrom the 4129-\\AA\\ Eu~{\\sc ii}\\ line.\nOur solar spectrum fit at 4205\\,\\AA\\ appears in \nFigure \\ref{eu4205_nso}, with the \n30 hyperfine components (15 each from $^{151}$Eu and $^{153}$Eu;\n\\citealt{ivans_2006}) \nin the inset.\nDespite its blended nature, the fit appears sound\nin all 41\\ stars.\n\n\\subsubsection{Europium 6645\\,\\AA}\n\\label{abund:eu6645}\nThe Eu~{\\sc ii}\\ line at 6645\\,\\AA\\ is weaker than the lines in the blue, but\nit is also relatively unblended, making it worthwhile to fit wherever \npossible. \nOur solar spectrum fit at 6645\\,\\AA\\ appears in \nFigure \\ref{eu6645_nso},\nwith the inset plot showing the 30 hyperfine components (15\neach from $^{151}$Eu and $^{153}$Eu; \\citealt{ivans_2006}). \nIn HIP\\,57450, which is metal poor ([M\/H]~$= -1.42$) and has only\none observation, the 6645-\\AA\\ line was lost in the noise\n(see Figure \\ref{keep}), and\nonly the 4129-\\AA\\ and 4205-\\AA\\ lines contribute to the final\neuropium abundance; the 6645-\\AA\\ line provides an upper limit only.\n\n\n\\section{Results}\n\\label{comp}\n\\subsection{Comparison of Individual Lines}\n\\label{compare_lines}\nWe compare our results from the three Eu~{\\sc ii}\\ lines in\nFigure \\ref{line_compare}.\nThe Vesta abundances, included as a solar \nproxy, are consistent with the stellar \nabundances to within $0.03\\unit{dex}$.\nBy assigning a measurement uncertainty\nof $0.03\\unit{dex}$ to each line, the \nreduced $\\sqrt{\\chi^2}$\\ for each of the three plots is unity.\nThis measurement uncertainty is the source of the\nerror floor discussed in \\S\\,\\ref{errors}.\n\n\\begin{figure*} \n\\centering\n\\includegraphics[width=0.80\\textwidth]{f7.eps}\n\\figcaption[Comparing our Eu~{\\sc ii}\\ values to others'.]{A comparison of the \n final Eu~{\\sc ii}\\ values from this work with literature measurements. \n The solid line represents a 1:1 correlation. \n The horizontal error bars represent the uncertainties as quoted in \n the source literature. \n The vertical error bars come from our analysis as described in \\S\\,\\ref{errors}. \n Two stars, HD\\,9562 and HD\\,22879 (with [Eu\/H]\\ of $+0.12$ and $-0.54$, \n respectively), were included in the \\citealt{bensby_2005}, \n \\citealt{delpeloso_2005a}, and \\citealt{woolf_1995} analyses, so those \n two stars are represented by three points along the abscissa at the \n same ordinate value. \n See Table \\ref{starsvals_table} for the full list of the europium \n abundances plotted here, and see \\S\\,\\ref{compare_lit} for a discussion\n of the quality of our fit based on this plot. 
\n \\label{me_vs_them}}\n\\end{figure*}\n\nCalculating the best-fit line for each \ncomparison plot (the dashed lines in\nFigure \\ref{line_compare}) lends insight\ninto possible systematic trends in our analysis.\nIn the blue, the two Eu~{\\sc ii}\\ lines (4129\\,\\AA\\ and\n4205\\,\\AA) have no apparent linear systematic \ntrend:\\ the slope of the best-fit line is 1.03.\nThe red Eu~{\\sc ii}\\ line (6645\\,\\AA), however, exhibits\na minor systematic trend:\\ the best-fit line\nin the red has a slope of 1.10 relative to the blue,\ni.e., a 10\\% stretching of the blue abundance values \nabout [Eu\/H]~$= 0$ roughly reproduces the red \nabundance values.\nSince this systematic trend is absorbed by the\n$0.03\\unit{dex}$ error bars assigned to each point \n(necessary to make reduced $\\sqrt{\\chi^2}$\\ unity even when\ncomparing the non-systematic blue lines),\nwe make no attempt to correct this systematic\ntrend here.\n\nBetween the relatively low measurement uncertainty\nneeded to achieve reduced $\\sqrt{\\chi^2}$\\ of unity and the minor\nsystematic trend that only appears in the red, \nwe conclude that the europium abundance values \nderived from the Eu~{\\sc ii}\\ lines at 4129\\,\\AA, \n4205\\,\\AA, and 6645\\,\\AA\\ are consistent with one \nanother.\n\n\\subsection{Comparison with Literature Europium Measurements}\n\\label{compare_lit}\nIn Figure \\ref{me_vs_them}, we compare our \nfinal europium abundance measurements to the\nliterature values.\nThe agreement is quite good.\nThe literature values, plotted with the error\nbars quoted in the original studies, appear\nas the abscissa.\nOur europium values, plotted with the error\nbars calculated in \\S\\,\\ref{errors} and listed\nin Table \\ref{starsvals_table}, appear\nas the ordinate.\nThe solid line represents a 1:1 correlation.\nComparing the points to the 1:1 line,\nthe reduced $\\sqrt{\\chi^2}$~$=\\rchisqtot$.\nThat value is dominated by\nHD\\,103095, the \\citealt{simmerer_2004} data \npoint with the smallest error bars, and \nomitting it makes\nthe reduced $\\sqrt{\\chi^2}$~$=\\rchisqsm$. \n\nAdopting the global values ($T_{\\rm eff}$, [M\/H], $\\log{g}$)\nused by the comparison studies (instead of the VF05 \nvalues) drops the reduced $\\sqrt{\\chi^2}$\\ to \\rchisqlit, indicating\nthat some of the scatter in Figure \\ref{me_vs_them}\nis from the choice of stellar parameters.\nWe emphasize that the VF05 stellar parameters are \nreliable; we calculate reduced $\\sqrt{\\chi^2}$\\ using\nthe comparison studies' values in an attempt to \nseparate how much of the disagreement in Figure \n\\ref{me_vs_them} is from the europium abundance\ntechnique and how much is from the parameters adopted.\n\nOmitting the outlier mentioned at the beginning\nof this section, we examine \nthe data in Figure \\ref{me_vs_them} to \nsearch for systematics in our results relative \nto the literature values. 
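\nFor reference, the comparison statistic can be computed as in the following sketch (our own shorthand, taking reduced $\\sqrt{\\chi^2}$\\ to mean $\\sqrt{\\chi^{2}\/\\nu}$; the optional offset and slope arguments implement the perturbed 1:1 relations considered next):\n\\begin{verbatim}\nimport numpy as np\n\ndef red_sqrt_chi2(ours, lit, sig_ours, sig_lit,\n                  offset=0.0, slope=1.0):\n    # Reduced sqrt(chi^2) of our [Eu\/H] values against the\n    # literature, allowing a trial offset and linear trend.\n    ours, lit = np.asarray(ours), np.asarray(lit)\n    var = np.asarray(sig_ours)**2 + np.asarray(sig_lit)**2\n    chi2 = np.sum((ours - (slope * lit + offset))**2 \/ var)\n    return np.sqrt(chi2 \/ (ours.size - 1))\n\\end{verbatim}\n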
\nWe consider the effects of a global offset in\nour europium values and a linear trend\nwith europium abundance, finding that systematic\noffsets of $\\sim$\\,$0.1\\unit{dex}$ and linear trends\nof $\\sim$\\,$40\\%$ are needed\nto make reduced $\\sqrt{\\chi^2}$\\ climb to 2.\nWe conclude that our error bars have sufficiently\ncharacterized our uncertainties.\n\nOverall, we find reduced $\\sqrt{\\chi^2}$\\ to be dominated by the \npoints at [Eu\/H]~$< -0.5$, consistent with\nour conclusion in \\S\\,\\ref{stellar_abund} that\nour results are most reliable near solar \nmetallicities, although\nthe larger error bars on the \\citealt{woolf_1995}\npoints make the correlation very forgiving near\n[Eu\/H]~$=0$. \nOur spectra have very high S\/N:\\ $\\sim$\\,160 \nat 4200\\,\\AA\\ \nin a single observation, enhanced substantially\nby our co-adding procedure (\\S\\,\\ref{data}).\nThe automated\nSME synthesis treats all stars consistently,\nwhich is especially important for line blends in the \ncrowded blue regions.\nFor these reasons we believe that our europium abundance\ntechnique is accurate and robust and\nthat our smaller error bars are warranted.\n\nWe conclude that\nour abundance measurements are consistent\nwith previous studies, which is not surprising\nsince most abundance techniques rely on the same\nKurucz stellar atmosphere models.\nBecause the majority of the points in\nFigure \\ref{me_vs_them} fall near the\n1:1 correlation, we also conclude that\nnear [Eu\/H]~$=0$ our errors are \n$\\sim$\\,$0.03\\unit{dex}$, though at\n[Eu\/H]~$< -0.5$, the errors may be as high as \n$0.1\\unit{dex}$.\n\n\n\\section{Summary}\n\\label{summary}\nWe have established that our method for measuring europium in \nsolar-metallicity stars using SME is sound.\nThe resolution and S\/N of the Keck HIRES spectra are \nsufficiently high to fit the Eu~{\\sc ii}\\ lines in question. \nThe values obtained from the \nthree europium lines are self-consistent, and our final \naveraged europium values for each of the 41\\ stars in this study\nare consistent with the literature values for those stars.\n\nBy employing SME to calculate our synthetic spectra,\nwe are self-consistently modeling all the lines in the regions\nof interest. 
\nAny blending from neighboring lines is treated consistently\nfrom star to star, adding robustness to our europium determination.\nUsing SME has the added benefit of allowing us to \nadopt the stellar parameters from the SPOCS catalog.\nOur automated procedure ensures all stars are treated consistently.\n\nHaving established a new method for measuring stellar \neuropium abundances, we intend to apply our technique to \n1000 F, G, and K stars from the Keck CCPS survey.\nOur analysis of europium in these stars will represent the \nlargest and most consistent set of europium measurements in\nsolar-metallicity stars to date, and will provide\ninsight into the question of the {\\em r}-process\\ formation\nsite and the enrichment history of the Galaxy.\n\n\n\\acknowledgements\nThe author is indebted to\nGeoffrey W.~Marcy, \nChristopher Sneden,\nDebra A.~Fischer, \nJeff A.~Valenti, \nAnna~Frebel,\nJames W.~Truran,\nand Taft E.~Armandroff\nfor productive and enlightening conversations about the\nprogress of this work.\nParticular thanks are extended to\nChristopher Sneden,\nGeoffrey W.~Marcy, and\nLouis-Benoit Desroches\nfor their thoughtful comments on this paper.\nThe author is also grateful to her fellow observers\nwho collected the Keck HIRES data used\nhere:\\ Geoffrey W.~Marcy, \nDebra A.~Fischer,\nJason T.~Wright, \nJohn Asher Johnson,\nAndrew W.~Howard, \nChris McCarthy,\nSuneet Upadhyay,\nR.~Paul Butler,\nSteven S.~Vogt,\nEugenio Rivera,\nand Joshua Winn.\nWe gratefully acknowledge the dedication of the\nstaff at Keck Observatory, particularly Grant Hill\nand Scott Dahm for their HIRES support.\nThis research has made use of \nthe SIMBAD database, operated at CDS, Strasbourg, France; \nthe Vienna Atomic Line Database; \nthe Kurucz Atomic and Molecular Line Databases; \nthe NIST Atomic Spectra Database;\nand NASA's Astrophysics Data System Bibliographic Services.\nThe author extends thanks to those of Hawaiian \nancestry on whose sacred mountain of Mauna Kea we\nare privileged to be guests. 
\nWithout their generous hospitality, the Keck observations\npresented here would not have been possible.\n\n\\bibliographystyle{apj}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{0pt}{8pt plus 2pt minus 1pt}{6pt plus 2pt minus 1pt}\n\n\\usepackage{balance}\n\n\\usepackage[scaled]{beramono}\n\\usepackage{listings}\n\n\\lstset{\n language=Python,\n showstringspaces=false,\n formfeed=\\newpage,\n tabsize=4,\n commentstyle=\\itshape,\n basicstyle=\\ttfamily,\n breaklines=true,\n morekeywords={models, lambda, forms}\n}\n\n\\usepackage{color}\n\\definecolor{Orange}{rgb}{0.9,0.5,0}\n\\definecolor{NavyBlue}{rgb}{0.1, 0.4, 0.8}\n\\definecolor{Magenta}{rgb}{0.8, 0.1, 0.6}\n\\definecolor{Red}{rgb}{1, 0, 0}\n\\newcommand{\\marco}[1]{\\textcolor{Magenta}{\\textbf{[By Marco: #1]}}}\n\\newcommand{\\gloria}[1]{\\textcolor{blue}{\\textbf{[By Gloria: #1]}}}\n\\newcommand{\\bruno}[1]{\\textcolor{Red}{\\textbf{[By Bruno: #1]}}}\n\\newcommand{\\yahui}[1]{\\textcolor{Orange}{\\textbf{[By Yahui: #1]}}}\n\n\\newcommand{${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}}{${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}}\n\\newcommand{$\\pmb{\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}}{$\\pmb{\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}}\n\n\n\n\n\n\\usepackage[belowskip=-0.3em,aboveskip=2pt]{caption}\n\\begin{document}\n\\fancyhead{}\n\\title{Gesture-to-Gesture Translation in the Wild via Category-Independent Conditional Maps}\n\n\\author{Yahui Liu}\n\\email{yahui.liu@unitn.it}\n\\affiliation[obeypunctuation=true]{\\institution{University of Trento}, \\city{Trento}, \\country{Italy}}\n\n\\author{Marco De Nadai}\n\\email{denadai@fbk.eu}\n\\orcid{0000-0001-8466-3933}\n\\affiliation[obeypunctuation=true]{\\institution{FBK}, \\city{Trento}, \\country{Italy}}\n\n\\author{Gloria Zen}\n\\email{gloria.zen@unitn.it}\n\\affiliation[obeypunctuation=true]{\\institution{University of Trento}, \\city{Trento}, \\country{Italy}}\n\n\\author{Nicu Sebe}\n\\email{niculae.sebe@unitn.it}\n\\affiliation[obeypunctuation=true]{\\institution{University of Trento}, \\city{Trento}, \\country{Italy}}\n\n\\author{Bruno Lepri}\n\\email{lepri@fbk.eu}\n\\affiliation[obeypunctuation=true]{\\institution{FBK}, \\city{Trento}, \\country{Italy}}\n\n\n\n\n\\renewcommand{\\shortauthors}{Liu, et al.}\n\\renewcommand{\\shorttitle}{Gesture-to-Gesture Translation in the Wild}\n\n\n\\begin{abstract}\n\\begin{sloppypar}\nRecent works have shown Generative Adversarial Networks (GANs) to be particularly effective in image-to-image translations.\nHowever, in tasks such as body pose and hand gesture translation, existing methods usually require precise annotations, e.g. key-points or skeletons, which are time-consuming to draw. 
\nIn this work, we propose a novel GAN architecture that decouples the required annotations into a category label, which specifies the gesture type, and a simple-to-draw, category-independent conditional map, which expresses the location, rotation and size of the hand gesture.\nOur architecture synthesizes the target gesture while preserving the background context, thus effectively dealing with gesture translation \\textit{in the wild}.\nTo this aim, we use an attention module and a rolling guidance approach, which loops the generated images back into the network and produces higher quality images than competing works.\nThus, our GAN learns to generate new images from simple annotations without requiring key-points or skeleton labels.\nResults on two public datasets show that our method outperforms state-of-the-art approaches both quantitatively and qualitatively.\nTo the best of our knowledge, no work so far has addressed gesture-to-gesture translation \\emph{in the wild} while requiring only user-friendly annotations.\n\n\\end{sloppypar}\n\n\\end{abstract}\n\n\\begin{CCSXML}\n<ccs2012>\n<concept>\n<concept_id>10010147.10010178.10010224<\/concept_id>\n<concept_desc>Computing methodologies~Computer vision<\/concept_desc>\n<concept_significance>500<\/concept_significance>\n<\/concept>\n<concept>\n<concept_id>10010147.10010257<\/concept_id>\n<concept_desc>Computing methodologies~Machine learning<\/concept_desc>\n<concept_significance>300<\/concept_significance>\n<\/concept>\n<\/ccs2012>\n\\end{CCSXML}\n\n\\ccsdesc[500]{Computing methodologies~Computer vision}\n\\ccsdesc[300]{Computing methodologies~Machine learning}\n\n\\keywords{GANs, image translation, hand gesture}\n\n\\maketitle\n\n\\begin{figure}[t]\n  \\includegraphics[width=1\\columnwidth]{figures\/teaser2.pdf}\n  \\caption{\n  Our proposal decouples the \\emph{category label} that specifies the gesture type (e.g., gesture \"5\" or \"7\") from the \\emph{conditional map} (i.e., a triangle) that controls the location, orientation and size of the target gesture.\n  Existing works require a detailed conditional map (e.g., a skeleton) that is gesture-dependent.\n  In this example, we show that our method significantly lowers the drawing effort and expertise required of users. Our method can generate multiple output images with the same map for multiple gesture categories.\n  }\n  \\label{fig:teaser}\n  \\vspace{-0.3em}\n\\end{figure}\n\n\\section{Introduction}\n\\begin{sloppypar}\nPhoto-editing software, fashion, and retail markets would enormously benefit from the possibility of modifying an image through a simple user input describing the changes to make (e.g., change the hand gesture of the person in the picture from ``open hand'' to ``ok'').\nHowever, despite the significant advances of Generative Adversarial Networks (GANs)~\\cite{goodfellow2014generative,radford2015unsupervised,arjovsky2017wasserstein,gulrajani2017improved}, the generation of images \\emph{in the wild} without precise annotations (e.g., a hand skeleton) is still an open problem.\nPrevious literature on image-to-image translation has relied either on pixel-to-pixel mappings~\\cite{isola2017image,zhu2017unpaired,yang2018crossing} or on precise annotations to localize the instance to be manipulated, such as segmentation masks~\\cite{mo2018instagan}, key-points~\\cite{siarohin2018deformable}, skeletons~\\cite{tang2018gesturegan} and facial landmarks~\\cite{sanchez2018triple}. 
\nHowever, obtaining such annotations is not trivial.\nOn the one hand, automatic methods for key-point extraction~\\cite{cao2017realtime,simon2017hand} may fail, or the reference gesture image\/video~\\cite{siarohin2018animating} may not be available. On the other hand, drawing such annotations is complicated and time-consuming, and their quality directly affects the performance of the network.\n\nMoreover, existing methods often focus on the foreground content, e.g., the target gesture or facial expression, generating blurred and imprecise backgrounds~\\cite{siarohin2018deformable,ma2017pose,reed2016learning}.\nThese methods are well suited to cases where images share fixed or similar spatial structures, as in facial expression datasets~\\cite{liu2015deep,fabian2016emotionet}.\nInstead, in image-to-image translation \\textit{in the wild}, both the foreground and the background can vary considerably between the source and target images~\\cite{tang2018gesturegan}.\n\nIn this paper, we propose a novel method, named ${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}, that requires a simple-to-draw annotation, such as a triangle, and focuses on the challenging task of hand gesture-to-gesture translation \\emph{in the wild}.\nIn general, annotations such as key-points or skeletons are category-dependent since they provide four types of information at the same time, namely the \\textit{category} (e.g., gesture ``5\"), \\textit{location}, \\textit{scale} and \\textit{orientation} of the hand gesture.\nInstead, our ${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}~decouples the \\textit{category} from the \\textit{location}-\\textit{scale}-\\textit{orientation} information.\nUsing a category-independent conditional map significantly lowers the annotation cost, allowing multiple target images to be generated while requiring users to draw only a single map.\nIn this work, we use ``annotations'' to refer to the user effort needed to draw the desired target gesture at deploy time, besides the effort needed to generate the training data for our model.\nThe intuition of our approach is depicted in Figure~\\ref{fig:teaser}.\nFurthermore, we propose a novel architecture that uses an attention module and a rolling guidance approach to perform gesture-to-gesture translation \\textit{in the wild}.\nOur research yields three main contributions:\n\\begin{itemize}\n    \\item \\emph{Decoupled conditional map and category.} We design a general architecture for gesture-to-gesture translation that separately encodes the category label (e.g., gesture ``5\") and the category-independent conditional map. \n    This allows several image translations to be performed with the same conditional map.\n    \\item \\emph{Rolling guidance and attention mask.} We propose a novel rolling guidance approach that generates higher quality output images by feeding the generated image back to the input as an additional condition. \n    Also, ${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}~learns, in an unsupervised manner, an attention mask that preserves the details shared between the input and target image.\n    \\item \\emph{Simple-to-draw conditional maps.} \n    We propose the triangle conditional map as the simplest user-provided condition necessary for gesture-to-gesture translation.\n    To the best of our knowledge, no work so far has addressed the gesture-to-gesture translation task \\emph{in the wild} while requiring only user-friendly annotations. 
\n    Furthermore, we assess the performance of our method with different shapes, such as boundary and skeleton maps.\n    Finally, we enrich two public datasets with different conditional maps for each gesture image, specifically based on triangles and boundaries.\n\\end{itemize}\n\\end{sloppypar}\n\n\\begin{figure*}[ht!]\n    \\centering\n    \\includegraphics[width=0.95\\linewidth]{figures\/diagram.pdf}\n    \\vspace{-1em}\n    \\caption{Overview of $\\pmb{\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}, which translates hand gestures \\emph{in the wild} by separately encoding the input image and the target gesture attributes, namely the category label (e.g., gesture 3) and the category-independent conditional map. We include an unsupervised attention mask to preserve the details shared between the input image and the target image. Specifically, we feed the first reconstructed image back to the condition encoding module to improve the quality of the output image.}\n    \\label{fig:ourGAN}\n\\end{figure*}\n\n\\section{Related work}\nRecently, there has been a large body of work on Generative Adversarial Networks (GANs) and, particularly, on conditional GANs for the task of image-to-image translation~\\cite{isola2017image,zhu2017unpaired,zhu2017toward,choi2018stargan,pumarola2018ganimation}. \nA significant line of these works has focused on translation tasks where the input and target images are spatially aligned, as in the case of style transfer~\\cite{zhu2017unpaired,johnson2016perceptual,isola2017image}, emotional content transfer~\\cite{emotionGAN18} or image inpainting~\\cite{zhang2018semantic}.\nIn general, these works aim to preserve the main image content while presenting it in various styles.\nAnother line of works has tackled the more challenging task of image translation where the target object is spatially unaligned with respect to the location, shape or size of the original input object.\nThis is the case of image-to-image translation tasks like facial expression~\\cite{sanchez2018triple,geng20193d}, human pose~\\cite{ma2017pose,siarohin2018deformable} or hand gesture translation~\\cite{tang2018gesturegan}.\nTo this aim, methods usually require geometry information as guidance to specify where and how to edit the visual patterns corresponding to the image attributes, such as key-points~\\cite{ma2017pose,siarohin2018animating,ma2018disentangled}, skeletons~\\cite{tang2018gesturegan}, object segmentation masks~\\cite{mo2018instagan}, facial landmarks~\\cite{sanchez2018triple}, action units~\\cite{pumarola2018ganimation} or 3D models~\\cite{geng20193d}. \n\nGANimation~\\cite{pumarola2018ganimation} learns an attention mask to preserve the background content of the source image for the spatially aligned translation of facial expressions. \nInstaGAN~\\cite{mo2018instagan} performs multi-instance domain-to-domain image translation by requiring precise segmentation masks for the target objects as input.\nThe Pose Guided Person Generation Network (PG$^2$)~\\cite{ma2017pose} proposes a two-stage generation framework that refines the output image given a reference image and a target pose.\nMonkeyNet~\\cite{siarohin2018animating} generates a conditioned video by transferring body movements from a target driving video to a reference appearance image. 
The target key-points are obtained automatically using state-of-the-art detectors.\n\nRelatively few works have considered the challenging task of image translation \\textit{in the wild}, where both the foreground content and the background context undergo significant variation~\\cite{tang2018gesturegan}. \nIn these cases, the networks must not only learn to synthesize the target object or attribute, but also correctly locate it in the image, while preserving the remaining content of the original image.\nGestureGAN~\\cite{tang2018gesturegan} proposes a novel color loss to improve the output image quality and to produce sharper results. However, this approach does not aim to separately encode the foreground content and the background context.\n\nFurthermore, the large majority of image-to-image translation works focus on one-to-one domain mappings. \nRecently, efficient solutions have been proposed to address multi-domain translation~\\cite{choi2018stargan,geng20193d}. In particular, StarGAN~\\cite{choi2018stargan} proposes the use of multiple mask vectors to generalize across different datasets and domain labels. 3DMM~\\cite{geng20193d} decomposes an image into shape and texture spaces, and relies on identity and target coefficients to generate images in multiple domains. These works, however, focus on multi-domain face attributes, where neither the background quality nor the misalignment is considered.\n\nOur work falls within the category of multi-domain image-to-image translation \\emph{in the wild}. Still, rather than asking the user for expensive annotations, we propose a novel user-friendly annotation strategy.\nTo the best of our knowledge, no works so far have investigated other approaches for gesture translation that also reduce the annotation effort.\n\n\\section{Our approach}\nThe goal of this work is to perform gesture translation \\emph{in the wild}, conditioned on user-provided gesture attributes, such as the desired category, location, scale and orientation. \nSpecifically, we seek to estimate the mapping $\\mathcal{M}$: $(\\mathbf{I}_X, \\mathbf{S}_Y, C_Y) \\rightarrow \\mathbf{I}_Y$ that translates an input image $\\mathbf{I}_X$ into an output image $\\mathbf{I}_Y$, conditioned on a hand gesture category $C_Y$ and a simple-to-draw conditional map $\\mathbf{S}_Y$, which encodes the desired location, scale and orientation of the gesture.\nThe generated image is discriminated through a Conditional Critic discriminator~\\cite{pumarola2018ganimation} that judges the photo-realism of the generated image and assesses the category of the generated gesture (e.g., gesture \"5\"). \nFurthermore, in order to deal efficiently with gesture translation \\emph{in the wild}, we propose a rolling guidance approach and an attention mask module. \nFigure~\\ref{fig:ourGAN} depicts the architecture of our approach. In this section, we further explain the details of the architecture and the loss functions used to train our framework.\n\n\\subsection{Network Architecture}\n\\label{sec:network}\n\n\\noindent \\textbf{Generator.} \nOur Generator $G$ takes as input the conditional image $\\textbf{I}_X$, the category label $C_Y$ and the conditional map $\\textbf{S}_Y$, and outputs the translated image $\\widehat{\\textbf{I}}_Y$.\nThe original image and the target gesture conditions are encoded through the encoders $E_1$ and $E_2$, respectively, and then concatenated and provided to $F_{res}$. 
Then, the output features of $E_2$ and $F_{res}$ are concatenated and provided to $F_{dec}$. \nSimilarly to~\\cite{pumarola2018ganimation}, in the decoder we learn to predict, in an unsupervised manner, the approximated image $\\widetilde{\\textbf{I}}_Y$ and an \\textit{Attention Mask} $\\mathbf{A} \\in [0,1]^{{H\\times W}}$. \nSince both the background and the foreground can vary considerably between the input and target images, the learned attention mask $\\mathbf{A}$ tends to preserve the parts shared between $\\mathbf{I}_X$ and $\\mathbf{I}_Y$. \nPixels of $\\mathbf{A}$ with higher values indicate that the corresponding pixels should be taken from $\\mathbf{I}_X$, while lower values favor $\\widetilde{\\mathbf{I}}_Y$.\nThe final generated image is thus obtained as:\n\\begin{equation}\n\\widehat{\\mathbf{I}}_Y = \\mathbf{A}*\\mathbf{I}_X + (1-\\mathbf{A})*\\widetilde{\\mathbf{I}}_Y\n\\label{eq:attention-mask}\n\\end{equation}\nFurthermore, the generated image $\\widehat{\\mathbf{I}}_Y$ is rolled back as an input condition and concatenated with the category label $C_Y$ and the conditional map $\\textbf{S}_Y$. More details are provided in Section~\\ref{Sec:rolling-guidance}.\n
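\nAs a concrete illustration, the following minimal PyTorch-style sketch shows one way the decoder heads and the blending of Eq.~\\ref{eq:attention-mask} can be realized. The layer names and shapes (\\texttt{conv\\_img}, \\texttt{conv\\_att}, a decoder feature map at image resolution) are illustrative assumptions, not our exact implementation.\n\\begin{lstlisting}\nimport torch\nimport torch.nn as nn\n\nclass BlendingHead(nn.Module):\n    # Illustrative decoder head: predicts the approximated image\n    # and the attention mask A, then blends them with the input\n    # image following the attention equation above.\n    def __init__(self, in_ch=64):\n        super().__init__()\n        self.conv_img = nn.Conv2d(in_ch, 3, kernel_size=7, padding=3)\n        self.conv_att = nn.Conv2d(in_ch, 1, kernel_size=7, padding=3)\n\n    def forward(self, feat, img_x):\n        img_tilde = torch.tanh(self.conv_img(feat))  # approximated image\n        att = torch.sigmoid(self.conv_att(feat))     # A in [0, 1]\n        # High attention values keep pixels of the input image,\n        # low values take them from the approximated image.\n        return att * img_x + (1.0 - att) * img_tilde\n\\end{lstlisting}\n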
\n\\medskip\n\\noindent \\textbf{Discriminator.} Our Discriminator $D$ takes as input the generated image $\\widehat{\\textbf{I}}_Y$ and its conditional map $\\textbf{S}_Y$. \nThe outputs of $D$ consist of two parts: $D_{cat}$ predicts the category label, and $D_{prob}$ classifies whether local image patches are real or fake. \nAs in~\\cite{pumarola2018ganimation}, $D_{prob}$ is based on PatchGANs~\\cite{isola2017image}.\nThe conditional map $\\textbf{S}_Y$ is needed by $D$ to verify that the generated gesture also has the right location, scale and orientation.\n\nWe refer to the Supplementary Material for additional details on the architecture.\n\n\\subsection{Objective Formulation}\n\\label{sec:loss}\n\nThe loss function of ${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}\\ is composed of four main components, namely the \\emph{GAN loss}, which pushes the generated image distribution towards the distribution of the source images; the \\emph{Reconstruction loss}, which forces the generator to reconstruct the source and target images; the \\emph{Category Label loss}, which allows the generated image to be properly classified into hand gesture classes; and the \\emph{Total Variation loss}, which indirectly allows the attention mask to be learned in an unsupervised fashion.\n\n\\medskip\n\\noindent \\textbf{GAN Loss.} To generate images with the same distribution as the source images, we adopt an adversarial loss: \n\\begin{equation}\n\\begin{aligned}\n\\mathcal{L}_{GAN} = & \\mathbb{E}_{\\mathbf{I}_Y, \\mathbf{S}_Y\\sim \\mathbb{P}}[\\log D_{prob}(\\mathbf{I}_Y, \\mathbf{S}_Y)] + \\\\ & \\mathbb{E}_{\\mathbf{I}_X,\\mathbf{S}_Y, C_Y\\sim \\mathbb{P}}[\\log (1- D_{prob}(G(\\mathbf{I}_X, \\mathbf{S}_Y, C_Y), \\mathbf{S}_Y))] \\\\\n\\end{aligned}\n\\end{equation}\nwhere $\\mathbb{P}$ is the data distribution of the hand gesture images in the dataset, $G$ generates an image $G(\\mathbf{I}_X, \\mathbf{S}_Y, C_Y)$ conditioned on the input image $\\mathbf{I}_X$, the conditional map $\\mathbf{S}_Y$ and the target category $C_Y$, and $D$ tries to distinguish between real and fake images. We refer to the term $D_{prob}(\\mathbf{I}, \\mathbf{S})$ as a probability distribution over sources given by $D$.\n\n\\medskip\n\\noindent \\textbf{Reconstruction Loss.} The adversarial loss does not guarantee that the generated image is consistent with both the target conditional map $\\mathbf{S}$ and category $C$.\nThus, we first apply a \\emph{forward reconstruction loss} that ties together the target image $\\mathbf{I}_Y$ with its target conditional map $\\mathbf{S}_Y$ and category $C_Y$:\n\\begin{equation}\n    \\mathcal{L}_{rec} = \\|G(\\mathbf{I}_X, \\mathbf{S}_Y, C_Y) - \\mathbf{I}_Y\\|_1\n\\end{equation}\nThen, instead of using perceptual features (e.g., extracted from VGG~\\cite{very2015simonyan} networks) to force the model to reconstruct the source image, we propose a simplified \\emph{self-reconstruction (identity) loss}:\n\\begin{equation}\n    \\mathcal{L}_{idt} = \\|G(\\mathbf{I}_X, \\mathbf{S}_X, C_X) - \\mathbf{I}_X \\|_1\n\\end{equation}\nwhere $\\mathbf{S}_X$ is the conditional map of the source image and $C_X$ the category label of the source image.\nFinally, we apply the \\emph{cycle consistency loss}~\\cite{zhu2017unpaired,kim2017learning} to reconstruct the original image from the generated one:\n\\begin{equation}\n    \\mathcal{L}_{cyc}=\\|G(G(\\mathbf{I}_X, \\mathbf{S}_Y, C_Y), \\mathbf{S}_X, C_X) - \\mathbf{I}_X\\|_1\n\\end{equation}\nNote that we apply the cycle reconstruction loss only in one direction, i.e., A-to-B-to-A, to reduce computation, since a translation pair based on two images A and B may be sampled either as A-to-B or as B-to-A during training.\n\n\\medskip\n\\noindent \\textbf{Category Label loss.} We enforce the generator to render realistic samples that are correctly classified as the hand gesture expressed by the input category label.\nSimilarly to StarGAN~\\cite{choi2018stargan}, we split the \\emph{Category Label loss} into two terms: a gesture classification loss of the real image $\\mathbf{I}_Y$, used to optimize $D$, and a gesture classification loss of the generated image $\\hat{\\mathbf{I}}_Y$, used to optimize $G$.\nSpecifically:\n\\begin{equation}\n    \\mathcal{L}_{cls} = \\mathbb{E}_{\\mathbf{I}_Y, C_Y}[- \\log D_{cat}(C_Y | \\mathbf{I}_Y, \\mathbf{S}_Y)]\n\\end{equation}\n\\begin{equation}\n\\mathcal{L}_{\\hat{cls}} = \\mathbb{E}_{\\hat{\\mathbf{I}}_Y, C_Y}[- \\log D_{cat}(C_Y | \\hat{\\mathbf{I}}_Y, \\mathbf{S}_Y)]\n\\end{equation}\nwhere $D_{cat}(C_Y | \\mathbf{I}_Y, \\mathbf{S}_Y)$ and $D_{cat}(C_Y | \\hat{\\mathbf{I}}_Y,\\mathbf{S}_Y)$ represent a probability distribution over the hand gesture categories for the real and generated images, respectively.\nIn other words, these losses allow the network to generate images that can be correctly classified as the target hand gesture category.\n\n\\medskip\n\\noindent \\textbf{Total Variation loss}. \nTo prevent the final generated image from having artifacts, we use a Total Variation Regularization, $f_{tv}$, as in GANimation~\\cite{pumarola2018ganimation}.\nHowever, differently from them, we calculate $f_{tv}$ over the approximated image $\\widetilde{\\mathbf{I}}_Y$ instead of the attention mask $\\mathbf{A}$, thus allowing the network to freely explore the shared pixels between the source and target images. 
\nThe total variation loss is applied to both the \\textit{forward reconstruction} and the \\textit{self-reconstruction} and is formulated as:\n\\begin{equation}\n    \\mathcal{L}_{tv} = f_{tv}(G_C(\\mathbf{I}_X, \\mathbf{S}_X, C_X)) + f_{tv}(G_C(\\mathbf{I}_X, \\mathbf{S}_Y, C_Y))\n\\end{equation}\nwhere $G_C(\\cdot)$ denotes the approximated image $\\widetilde{\\mathbf{I}}$ produced by the decoder. The total variation regularization $f_{tv}$ is defined as:\n\\begin{dmath}\n    f_{tv}(\\mathbf{I})=\n    \\mathbb{E}_{\\mathbf{I}} \\left[\\sum_{i,j}^{H-1,W-1}[(\\mathbf{I}_{i+1, j} - \\mathbf{I}_{i,j})^2 + (\\mathbf{I}_{i, j+1} - \\mathbf{I}_{i,j})^2]\\right]\n\\end{dmath}\nwhere $\\mathbf{I}_{i,j}$ is the entry $i,j$ of the image matrix $\\mathbf{I}$.\n\n\\medskip\n\\noindent \\textbf{Total loss}. The final objective functions used to optimize $G$ and $D$ are formulated as follows:\n\\begin{dmath}\n\\mathcal{L}_{D} = \\lambda_{D}\\mathcal{L}_{GAN} + \\lambda_{cls}\\mathcal{L}_{cls}\n\\label{eq:loss_D}\n\\end{dmath}\n\\begin{dmath}\n\\mathcal{L}_{G} = \\lambda_{G}\\mathcal{L}_{GAN} + \\lambda_{rec}\\mathcal{L}_{rec} + \\lambda_{idt}\\mathcal{L}_{idt} + \\lambda_{cyc}\\mathcal{L}_{cyc} + \\lambda_{cls}\\mathcal{L}_{\\hat{cls}} + \\lambda_{tv}\\mathcal{L}_{tv}\n\\label{eq:loss_G}\n\\end{dmath}\nwhere $\\lambda_{D}$, $\\lambda_{G}$, $\\lambda_{rec}$, $\\lambda_{idt}$, $\\lambda_{cyc}$, $\\lambda_{cls}$, and $\\lambda_{tv}$ are hyper-parameters that control the relative importance of each loss term.\n\n\\subsection{Rolling Guidance}\n\\label{Sec:rolling-guidance}\n\\begin{sloppypar}\nWhile the total variation loss $\\mathcal{L}_{tv}$ also enforces the approximated images $\\widetilde{\\mathbf{I}}_Y$ to be smooth, the source and target images might contain edges and details that have to be preserved. \nMoreover, the $E_1$ and $E_2$ encoders mostly focus on the gesture, failing to learn important details of the context, which might result in blurred images.\nInspired by previous works~\\cite{mosinska2018beyond,ma2017pose,zhang2014rolling}, we propose a Rolling Guidance approach that refines the generated image in a two-stage process, as sketched below. \nFirst, the network generates an initial version $\\widehat{\\mathbf{I}}_Y$ from the input ($\\mathbf{I}_X$, $\\mathbf{S}_Y$, $C_Y$). \nThen, $\\widehat{\\mathbf{I}}_Y$ is fed back to $E_2$, and the network generates a refined version of $\\widehat{\\mathbf{I}}_Y$ from the input ($\\mathbf{I}_X$, $\\mathbf{S}_Y$, $C_Y$, $\\widehat{\\mathbf{I}}_Y$).\nNote that some approaches, such as PG$^2$~\\cite{ma2017pose}, feed the initial generated image back, concatenated to the source input, to learn a difference map that refines the results. However, a considerable variation in both the foreground and background between the source and target images might make this an ill-posed problem, and gesture-to-gesture translation in the wild exhibits exactly such variation. For this reason, in ${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}, we feed the generated image back to $E_2$, refining the generated image and learning the condition-related features of the target gesture at the same time. This results in better generalization and significantly improved results.\n\\end{sloppypar}\n
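\nThe following minimal sketch illustrates this two-stage inference. The generator interface (a \\texttt{rolled} keyword argument) and the use of a blank image as the first-pass feedback are illustrative assumptions about how the extra condition can be wired in.\n\\begin{lstlisting}\nimport torch\n\ndef rolling_forward(G, img_x, map_y, label_y):\n    # First pass: no feedback is available yet, so a blank\n    # (zero) image is used as the rolled-in condition.\n    blank = torch.zeros_like(img_x)\n    first = G(img_x, map_y, label_y, rolled=blank)\n    # Second pass: the first result is fed back to the condition\n    # encoder E2 to refine the generated image.\n    return G(img_x, map_y, label_y, rolled=first)\n\\end{lstlisting}\n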
\n\n\\section{Experiments}\nWe compare our model with state-of-the-art techniques on two hand gesture datasets.\nFirst, we evaluate our model quantitatively through various widely used metrics that compare the generated image with the ground truth.\nThen, we also evaluate our results qualitatively through a perceptual user study.\nWe release the resulting dataset and annotations; source code and trained models are available at: \\url{https:\/\/github.com\/yhlleo\/TriangleGAN}.\n\n\\subsection{Datasets}\nThe NTU Hand Gesture dataset~\\cite{ren2013robust} is a collection of 1,000 RGB-D images recorded with a Kinect sensor. It includes 10 gestures repeated 10 times by 10 subjects against a cluttered background. Image resolution is 640x480. The Creative Senz3D dataset~\\cite{memo2018head} is a collection of 1,320 images, where 11 gestures are repeated 30 times by 4 subjects. Image resolution is 640x480.\n\n\\medskip\n\\noindent \\textbf{Experimental setting.}\nWe consider two dataset setups in our experimental evaluation.\n\n\\medskip\n\\noindent \\textit{Normal} uses the same setup as GestureGAN~\\cite{tang2018gesturegan}, in order to compare directly with the state of the art.\nThe GestureGAN authors used only a subset of the datasets (acquired by OpenPose~\\cite{cao2018openpose}): 647 images out of 1,000 for NTU Hand Gesture and 494 out of 1,320 for Creative Senz3D. This shows that the state-of-the-art detector OpenPose~\\cite{cao2018openpose} fails at detecting key-points for about 50\\% of the images.\nThe resulting numbers of training and test data pairs are, respectively: 21,153 and 8,087 for NTU Hand Gesture; 30,704 and 11,234 for Creative Senz3D.\nThese numbers differ from those reported in~\\cite{tang2018gesturegan} since we report here only the unique pairs, without considering flipping and A-to-B reverse ordering.\n\n\\medskip\n\\noindent \\textit{Challenging} pushes the limits of our model by ensuring that all the translation pairs \"A-to-B\" to a specific image \"B\" are included either in the training or in the test set, a condition not ensured by the \\emph{normal} setting and, thus, by the state of the art.\nAs a consequence, the model here generates multi-domain images without prior knowledge of them.\nWe randomly select the following numbers of training and test pairs: 22,050 and 13,500 for NTU Hand Gesture; 138,864 and 16,500 for Creative Senz3D.\n\n\\medskip\n\\noindent \\textbf{Conditional Maps.}\nWe consider three possible shapes of the conditional map to prove the effectiveness and generality of our method. Sample images are reported in Figure~\\ref{fig:maps}.\n\n\\medskip\n\\noindent \\textit{Triangle Map.}\nIn this type of annotation, the user has to provide an oriented triangle which outlines the size, base and orientation of the hand palm.\nThis conditional map is easy to draw, as it is possible to provide a simple interface where users draw a triangle by simply specifying its three delimiting points, plus its base.\nMoreover, the triangle conditional map is category-independent, as it does not contain any information about the gesture.\nWe annotated all images of both datasets with the corresponding triangle conditional maps.\n
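\nAs an illustration, such a triangle map can be rasterized from the three user-provided points with a few lines of OpenCV. The channel layout and the convention of marking the base edge with a brighter value are our own assumptions for this sketch; the actual annotation interface may encode the orientation differently.\n\\begin{lstlisting}\nimport numpy as np\nimport cv2\n\ndef triangle_map(vertices, base, height=480, width=640):\n    # vertices: three (x, y) points outlining the hand palm.\n    # base: the two indices of the vertices forming the base\n    # edge, which encodes the orientation of the gesture.\n    canvas = np.zeros((height, width), dtype=np.uint8)\n    pts = np.array(vertices, dtype=np.int32).reshape(-1, 1, 2)\n    cv2.fillPoly(canvas, [pts], 128)  # triangle body\n    p1, p2 = vertices[base[0]], vertices[base[1]]\n    cv2.line(canvas, tuple(p1), tuple(p2), 255, 3)  # base edge\n    return canvas\n\\end{lstlisting}\n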
\n\\medskip\n\\noindent \\textit{Boundary Map.}\nIn the boundary map annotation, the user has to draw the contour of the desired target gesture. This type of annotation is weakly category-dependent, since from the conditional map (see Figure~\\ref{fig:maps}, center) it may be possible to infer the target gesture category. However, as this shape is easy to draw, it may be a valid alternative to the skeleton and triangle maps.\nWe annotated all 1,320 images of the Creative Senz3D dataset with the corresponding boundary maps.\n\n\\medskip\n\\noindent \\textit{Skeleton Map.}\nIn the skeleton map, the user is required to draw either a complicated skeleton of the hand gesture or the exact positions of the hand gesture key-points.\nHowever, when the target gesture image is available, it is sometimes possible to obtain them automatically. \nAs in~\\cite{tang2018gesturegan}, we obtain the skeleton conditional maps by connecting the key-points detected by OpenPose, a state-of-the-art hand key-point detector~\\cite{simon2017hand, cao2018openpose}.\nIn the \\textit{normal} experimental setting, the hand pose is detected for all the 647 and 494 images of the two datasets. Instead, in the \\textit{challenging} experimental setting, the key-points could not be obtained for over half of the image set. \nFor this reason, the skeleton map is considered only in the \\textit{normal} experimental setting.\nThis conditional map is hard to draw, and strongly dependent on the category of the hand gesture.\n\n\\begin{figure}[ht]\n\t\\centering\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.16\\columnwidth]{figures\/maps\/S2.jpg}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.16\\columnwidth]{figures\/maps\/S.png}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.16\\columnwidth]{figures\/maps\/B2.jpg}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.16\\columnwidth]{figures\/maps\/B.png}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.16\\columnwidth]{figures\/maps\/T2.jpg}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.16\\columnwidth]{figures\/maps\/T.png}}\n\t\\caption{The three considered shapes of the conditional map, sorted by user drawing effort: from (left) the most difficult to (right) the easiest to draw.}\n\t\\label{fig:maps}\n\\end{figure}\n\n\\subsection{Evaluation}\n\\label{Sec:evaluation}\n\n\\textbf{Baseline models.}\nAs our baseline model we select GestureGAN~\\cite{tang2018gesturegan}, the state of the art for hand gesture translation \\emph{in the wild}. \nWe also adopt StarGAN~\\cite{choi2018stargan}, GANimation~\\cite{pumarola2018ganimation}, and PG$^2$~\\cite{ma2017pose}, as they showed impressive results on multi-domain image-to-image translation. \nBoth StarGAN and GANimation learn to use attribute vectors to transfer facial images from one expression to another. GestureGAN learns to transfer hand gestures via category-dependent skeleton maps.\n\n\\medskip\n\\noindent \\textbf{Evaluation metrics.}\nWe quantitatively evaluate the performance of our method using two metrics that measure the quality of generated images, namely the Peak Signal-to-Noise Ratio (PSNR) and the Fr\u00e9chet Inception Distance (FID)~\\cite{NIPS2017_7240}, and the F1-score, which measures whether the generated images depict a consistent category label. \nMoreover, to be comparable with GestureGAN~\\cite{tang2018gesturegan}, we employ the Mean Squared Error (MSE) between pixels and the Inception Score (IS)~\\cite{gulrajani2017improved}.\nHowever, these two metrics have been indicated as highly unstable~\\cite{barratt2018note,borji2019pros} and they are not strictly related to the quality of the generated images. For this reason, we report their results in a separate table and do not discuss them further. 
\n\n\\medskip\n\\noindent \\textit{PSNR.} It compares two images through their MSE and the maximal pixel intensity ($MAX_I = 255$). It is defined as: $PSNR = 20 \\log_{10}(\\frac{MAX_I}{\\sqrt{MSE}})$.\n\n\\medskip\n\\noindent \\textit{FID.} It is defined as the distance between two Gaussians with means and covariances $(\\mu_x, \\Sigma_x)$ and $(\\mu_y, \\Sigma_y)$: $FID(x,y) = || \\mu_x- \\mu_y ||^2_2 + Tr(\\Sigma_x + \\Sigma_y - 2(\\Sigma_x\\Sigma_y)^{1\/2})$, where the two Gaussians are defined in the feature space of the Inception model.\n\n\\medskip\n\\noindent \\textit{F1.} The F1-score for binary classifiers is defined as $F_1 = (2 p r)\/(p+r)$, where $p$ and $r$ are the precision and recall. For multi-class classifiers, it can be defined as the sum of the F1-scores of each class, weighted by the percentage of labels belonging to the class.\nThe resulting measure ranges from 0 to 1, where 1 means that all the classes are correctly classified and 0 the opposite. Thus, we use the F1-score to evaluate the category consistency of our generated images. To compute the F1-score, we train a network to recognize hand gestures. Further details are provided in the Implementation Details.\n\n\\medskip\n\\noindent \\textit{Perceptual User Study.} \nWe run a ``fake'' vs. ``real'' perceptual user study following the same protocol as~\\cite{tang2018gesturegan,yang2018crossing}. Users are shown a pair of images, the original and the generated one, and are asked to select the fake one. The images are displayed for only 1 second before the user can provide his or her choice. The image pairs are randomly shuffled in order to avoid introducing bias in the results. \nOverall, we collected perceptual annotations from 12 users, and each user was asked to vote on 98 image comparisons. Specifically, 12 image translation pairs were selected for each dataset and experimental setting. 
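\n\nFor reference, the PSNR and the weighted F1-score described above can be computed as in the following sketch, where the predicted labels are assumed to come from the fine-tuned gesture classifier described in the Implementation Details.\n\\begin{lstlisting}\nimport numpy as np\nfrom sklearn.metrics import f1_score\n\ndef psnr(img_a, img_b, max_i=255.0):\n    # PSNR = 20 * log10(MAX_I \/ sqrt(MSE))\n    diff = img_a.astype(np.float64) - img_b.astype(np.float64)\n    mse = np.mean(diff ** 2)\n    return 20.0 * np.log10(max_i \/ np.sqrt(mse))\n\ndef weighted_f1(true_labels, predicted_labels):\n    # Multi-class F1, weighted by the share of labels per class.\n    return f1_score(true_labels, predicted_labels, average='weighted')\n\\end{lstlisting}\n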
\n\n\n\n\\begin{table}[t]\n\t\\centering\n\t\\caption{Quantitative comparison for the gesture-to-gesture translation task in \\textit{normal} experimental setting, using the same evaluation metrics as GestureGAN~\\cite{tang2018gesturegan}.}\n\n\t\\resizebox{0.99\\linewidth}{!}{ \n\t\\begin{tabular}{@{}l rrr rrr@{}}\n\t\\toprule\n\t\t\\textbf{Model} & \n\t\t\\multicolumn{3}{c}{\\textbf{NTU Hand Gesture~\\cite{ren2013robust}}} & \\multicolumn{3}{c}{\\textbf{Creative Senz3D~\\cite{memo2018head}}} \\\\\n\t\t\\cmidrule(r{4pt}){2-4} \\cmidrule(l{4pt}){5-7}\n\t\t& MSE & PSNR & IS & MSE & PSNR & IS\n\t\t\\\\\n\t\t\\midrule\n\t\tPG$^2$~\\cite{ma2017pose} & 116.10 & 28.24 & 2.42 & 199.44 & 26.51 & 3.37\n\t\t\\\\\n\t\tYan \\emph{et al.}~\\cite{yan2017skeleton} & 118.12 & 28.02 & 2.49 & 175.86 & 26.95 & 3.33\n\t\t\\\\\n\t\tMa \\emph{et al.}~\\cite{ma2018disentangled} & 113.78 & 30.65 & 2.45 & 183.65 & 26.95 & 3.38\n\t\t\\\\\n\t\tPoseGAN \\emph{et al.}~\\cite{siarohin2018deformable} & 113.65 & 29.55 & 2.40 & 176.35 & 27.30 & 3.21\n\t\t\\\\\n\t\tGestureGAN~\\cite{tang2018gesturegan} & 105.73 & 32.61 & \\textbf{2.55} & 169.92 & 27.97 & \\textbf{3.41}\n\t\t\\\\\n\t\t${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}~- no rolling &\n\t\t31.80 & 33.57 & 1.92 &\n\t\t46.79 & 31.98 & 2.31\n\t\t\\\\\n\t\t${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN} &\n\t\t\\textbf{15.76} & \\textbf{36.51} & 2.00 &\n\t\t\\textbf{21.73} & \\textbf{35.39} & 2.34\n\t\t\\\\\n\t\t\\bottomrule\n\t\\end{tabular}\n\t}\n\t\\label{tab:QuantitativeResults_normal}\n\\end{table}\n\n\n\n\n\n\n\\begin{table*}[t]\n\t\\centering\n\t\\caption{Quantitative results for the gesture-to-gesture translation task for the two experimental settings.}\n\n\t\\begin{tabularx}{\\textwidth}{@{}Xccrrrrrrrrrrrrrrrrrrr@{}}\n\t\\toprule\n\t\t\\textbf{Model} & \\textbf{Experimental setting} & \\textbf{Easy to draw map} & \\phantom{a} &\n\t\t\\multicolumn{3}{c}{\\textbf{NTU Hand Gesture~\\cite{ren2013robust}}} & \\phantom{a} & \\multicolumn{3}{c}{\\textbf{Creative Senz3D~\\cite{memo2018head}}} \\\\\n\t\t\\cmidrule{5-7} \\cmidrule{9-11}\n\t\t& & && PSNR & FID & F1 && PSNR & FID & F1\n\t\t\\\\\n\t\t\\midrule\n\t\tGANimation~\\cite{pumarola2018ganimation} & \\multirow{6}{*}{\\textit{Normal}} & \\multirow{6}{*}{$\\times$} && 10.03 & 440.45 & 0.08 && 11.11 & 402.99 & 0.23 \n\t\t\\\\\n\t\tStarGAN~\\cite{choi2018stargan} &&&& 17.79 & 98.84 & 0.09 && 11.44 & 137.88 & 0.07 \n\t\t\\\\\n\t\tPG$^2$~\\cite{siarohin2018deformable} &&&& 21.71 & 66.83 & 0.15 && 21.78 & 122.08 & 0.68\n\t\t\\\\\n\t\tGestureGAN~\\cite{tang2018gesturegan} &&&& 34.24 & 15.06 & \\textbf{0.93} & & 28.65 & 54.00 & 0.96\n\t\t\\\\\n\t\t${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}\\ - no rolling &&&&\n\t\t34.07 & 20.29 & 0.90 &&\n\t\t31.98 & 58.28 & 0.88\n\t\t\\\\\n\t\t${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN} &&&&\n\t\t\\textbf{36.51} & \\textbf{13.73} & \\textbf{0.93} &&\n\t\t\\textbf{35.39} & \\textbf{32.04} & \\textbf{0.99}\n\t\t\\\\\n\t\t\\midrule\n\t\t\\midrule\n\t\tPG$^2$~\\cite{ma2017pose} & \\multirow{5}{*}{\\textit{Challenging}} & \\multirow{5}{*}{\\checkmark} && 21.94 & 135.32 & 0.10 && 18.30 & 265.73 & 0.10\n\t\t\\\\\n\t\tGestureGAN~\\cite{tang2018gesturegan} && && 25.60 & 94.10 & 0.38 & & 18.69 & 254.37 & 0.34\n\t\t\\\\\n\t\tGestureGAN$^\\dag$ && && 27.32 & 59.79 & 0.43 & & 22.24 & 192.77 & 0.38\n\t\t\\\\\n\t\t${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}\\ - no rolling &&& & 27.14 & 61.14 & 0.43 && 22.77 & 134.54 & 
0.33\n\t\t\\\\\n\t\t${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN} &&&& \\textbf{28.11} & \\textbf{47.88} & \\textbf{0.61} && \\textbf{23.11} & \\textbf{101.22} & \\textbf{0.58}\n\t\t\\\\\n\t\t\\bottomrule\n\t\\end{tabularx}\n\t\\label{tab:QuantitativeResults_both}\n\\end{table*}\n\\begin{sloppypar}\n\n\n\n\\medskip\n\\noindent \\textbf{Implementation Details.}\nInspired by previous methods~\\cite{zhu2017unpaired,choi2018stargan}, both $E_1$ and $E_2$ are composed of two convolutional layers with the stride size of two for downsampling, $F_{res}$ refers to six residual blocks~\\cite{he2016deep}, and $F_{dec}$ is composed of two transposed convolutional layers with the stride size of two for upsampling. \nWe train our model using Adam~\\cite{kingma2014adam} with $\\beta_1=0.5$ and $\\beta_2=0.999$ and batch size 4. We use an $n$-dimensional one-hot vector to represent the category label ($n=10$ for NTU dataset and $n=11$ for Senz3D Dataset). For data augmentation we flip the images horizontally with a probability of 0.5 and we reverse the ``A-to-B\" direction with a probability of 0.5. The initial learning rate is set to $0.0002$. We train for 20 epochs and linearly decay the rate to zero over the last 10 epochs. To reduce model oscillation~\\cite{goodfellow2016nips}, we follow previous works~\\cite{shrivastava2017learning,zhu2017unpaired} and update the discriminators using a history of generated images rather than the ones produced by the latest generators. \nWe use instance normalization~\\cite{ulyanov2016instance} for the generator $G$.\nFor all the experiments, the weight coefficients for the loss term in Eq.~\\ref{eq:loss_D} and Eq.~\\ref{eq:loss_G} are set to $\\lambda_{D} = 1$, $\\lambda_{G} = 2$,\n$\\lambda_{cls} = 1$, $\\lambda_{rec}=100$, $\\lambda_{idt}=10$, $\\lambda_{cyc}=10$ and $\\lambda_{tv}=1e-5$.\nBaseline models are optimized using the same settings described in the respective articles.\nWe used the source code released by the authors for all competing works, except for GestureGAN, which was implemented from scratch following the description of the original article~\\cite{tang2018gesturegan}.\n${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}~is implemented using the deep learning framework PyTorch.\n\nTo compute the F1-score, we train a network on hand gesture recognition using Inception v3~\\cite{szegedy2016rethinking} network fine tuned on the NTU Hand Gesture and Creative Senz3D datasets. \nThe network achieves F1-score 0.93 and 0.99 on Creative Senz3D and NTU Hand Gesture test sets, respectively. 
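\n\nAs a minimal sketch of the optimization setup described above (function and variable names are ours):\n\\begin{lstlisting}\nimport torch\n\ndef make_optimizers(G, D, epochs=20, decay_start=10):\n    # Adam with beta1=0.5, beta2=0.999, initial learning rate 2e-4.\n    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4,\n                             betas=(0.5, 0.999))\n    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4,\n                             betas=(0.5, 0.999))\n    # Constant rate for the first 10 epochs, then linear decay\n    # to zero over the last 10 epochs.\n    def lr_lambda(epoch):\n        k = max(0, epoch - decay_start)\n        return 1.0 - k \/ float(epochs - decay_start)\n    sched_g = torch.optim.lr_scheduler.LambdaLR(opt_g, lr_lambda)\n    sched_d = torch.optim.lr_scheduler.LambdaLR(opt_d, lr_lambda)\n    return opt_g, opt_d, sched_g, sched_d\n\\end{lstlisting}\n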
Additional details on the training can be found in the supplementary materials.\n\n\n\\end{sloppypar}\n\n\n\n\n\n\n\n\n\n\\begin{figure*}[ht]\n\t\\centering\n\t\\subfloat[Source]{%\n\t\t\\includegraphics[width=0.12\\textwidth]{figures\/qualitative\/P1-G1-3-AB-P1-G5-6_real_A.jpeg}}\\hfill\n\t\\subfloat[Target skeleton]{%\n\t\t\\includegraphics[width=0.12\\textwidth]{figures\/qualitative\/P1-G1-3-AB-P1-G5-6_cond_B.jpeg}}\\hfill\n\t\\subfloat[Ground truth]{%\n\t\t\\includegraphics[width=0.12\\textwidth]{figures\/qualitative\/P1-G1-3-AB-P1-G5-6_real_B.jpeg}}\\hfill\n\t\\subfloat[GANimation~\\cite{pumarola2018ganimation}]{%\n\t\t\\includegraphics[width=0.12\\textwidth]{figures\/qualitative\/P1-G1-3-AB-P1-G5-6_fake_B_masked.jpeg}}\\hfill\n\t\\subfloat[StarGAN~\\cite{choi2018stargan}]{%\n\t\t\\includegraphics[width=0.12\\textwidth]{figures\/qualitative\/P1-G1-3-AB-P1-G5-6_fake_B.jpeg}}\\hfill\n\t\\subfloat[PG$^2$~\\cite{ma2017pose}]{%\n\t\t\\includegraphics[width=0.12\\textwidth]{figures\/qualitative\/P1-G1-3-AB-P1-G5-6_fake_B2.jpeg}}\\hfill\n\t\\subfloat[GestureGAN~\\cite{tang2018gesturegan}]{%\n\t\t\\includegraphics[width=0.12\\textwidth]{figures\/qualitative\/P1-G1-3-AB-P1-G5-6_fake_B_gest.jpeg}}\\hfill\n\t\\subfloat[$\\pmb{\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN} ]{%\n\t\t\\includegraphics[width=0.12\\textwidth]{figures\/qualitative\/P1-G1-3-AB-P1-G5-6_fake_B2_masked_roll.jpeg}}\n\n\t%\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \\\\\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.12\\textwidth]{figures\/qualitative\/S1-G10-6-color-AB-S1-G7-18-color_real_A.jpeg}\n\t\t}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.12\\textwidth]{figures\/qualitative\/S1-G10-6-color-AB-S1-G7-18-color_cond_B.jpeg}}\\hfill\n\t\\subfloat{%\n\t\\includegraphics[width=0.12\\textwidth]{figures\/qualitative\/S1-G10-6-color-AB-S1-G7-18-color_real_B.jpeg}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.12\\textwidth]{figures\/qualitative\/S1-G10-6-color-AB-S1-G7-18-color_fake_B_masked.jpeg}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.12\\textwidth]{figures\/qualitative\/S1-G10-6-color-AB-S1-G7-18-color_fake_B.jpeg}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.12\\textwidth]{figures\/qualitative\/S1-G10-6-color-AB-S1-G7-18-color_fake_B2_pg2.jpeg}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.12\\textwidth]{figures\/qualitative\/S1-G10-6-color-AB-S1-G7-18-color_fake_B_gesture.jpeg}}\\hfill\n\t\\subfloat{\n\t\t\\includegraphics[width=0.12\\textwidth]{figures\/qualitative\/S1-G10-6-color-AB-S1-G7-18-color_fake_B2_masked_rolling.jpeg}}\n\t\\\\\n\t\\subfloat\n\t\t\\includegraphics[width=0.12\\textwidth]{figures\/qualitative\/S2-G4-9-color-AB-S2-G8-4-color_real_A.jpeg}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.12\\textwidth]{figures\/qualitative\/S2-G4-9-color-AB-S2-G8-4-color_cond_B.jpeg}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.12\\textwidth]{figures\/qualitative\/S2-G4-9-color-AB-S2-G8-4-color_real_B.jpeg}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.12\\textwidth]{figures\/qualitative\/S2-G4-9-color-AB-S2-G8-4-color_fake_B_masked.jpeg}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.12\\textwidth]{figures\/qualitative\/S2-G4-9-color-AB-S2-G8-4-color_fake_B.jpeg}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.12\\textwidth]{figures\/qualitative\/S2-G4-9-color-AB-S2-G8-4-color_fake_B2.jpeg}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.12\\textwidth]{figures\/qualitative\/S2-G4-9-color-AB-S2-G8-4-color_fake_B_gesture.jpe
g}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.12\\textwidth]{figures\/qualitative\/S2-G4-9-color-AB-S2-G8-4-color_fake_B2_masked_roll.jpeg}}\n\n\t\\caption{Qualitative comparison between ${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}~and competing works in the \\textit{normal} experimental setting. NTU Hand Gesture dataset (top two rows) and Creative Senz3D (bottom two rows).}\n\t\\label{fig:res_qual_normal}\n\\end{figure*}\n\n\\begin{figure}[t]\n\t\\centering\n\t\\subfloat[Source]{%\n\t\t\\includegraphics[width=0.192\\columnwidth]{figures\/qualitative\/P1-G2-6-AB-P1-G9-8_real_A.jpeg}}\\hfill\n\t\\subfloat[Target]{%\n\t\t\\includegraphics[width=0.192\\columnwidth]{figures\/qualitative\/P1-G2-6-AB-P1-G9-8_real_B.jpeg}}\\hfill\n\t\\subfloat[PG$^2$]{%\n\t\t\\includegraphics[width=0.192\\columnwidth]{figures\/qualitative\/P1-G2-6-AB-P1-G9-8_diff_map.jpeg}}\\hfill\n\t\\subfloat[GANimation]{%\n\t\t\\includegraphics[width=0.192\\columnwidth]{figures\/qualitative\/S1-G1-16-color-AB-S1-G4-26-color_fake_B_mask.png}}\\hfill\n\t\\subfloat[$\\pmb{\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}]{%\n\t\t\\includegraphics[width=0.192\\columnwidth]{figures\/qualitative\/P1-G2-6-AB-P1-G9-8_fake_B2_mask.jpeg}}\n\t\\\\\n\t%\n\n\n\n\n\n\n\n\n\n\n\n\t%\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.192\\columnwidth]{figures\/qualitative\/S1-G1-16-color-AB-S1-G4-26-color_real_A.jpeg}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.192\\columnwidth]{figures\/qualitative\/S1-G1-16-color-AB-S1-G4-26-color_real_B.jpeg}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.192\\columnwidth]{figures\/qualitative\/S1-G1-16-color-AB-S1-G4-26-color_diff_map.jpeg}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.192\\columnwidth]{figures\/qualitative\/S1-G1-16-color-AB-S1-G4-26-color_fake_B_mask.png}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.192\\columnwidth]{figures\/qualitative\/S1-G1-16-color-AB-S1-G4-26-color_fake_B2_mask.jpeg}}\n\n\t%\n\n\n\n\n\n\n\n\n\n\n\n\t\\caption{Masks computed by the various state of the art methods. 
Specifically, PG$^2$ computes a difference map that is noisy, GANimation fails at computing the attention mask, while $\\pmb{\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}~computes the attention of the pixels that stay constant from source to target images.}\n\t\\label{fig:attention}\n\\end{figure}\n\n\n\n\n\n\n\n\\begin{figure*}[ht]\n\t\\centering\n\t\\begin{minipage}{.88\\textwidth}\n\t\\centering\n\t\\subfloat[Source]{%\n\t\t\\includegraphics[width=0.137\\textwidth]{figures\/qualitative\/P1-G2-6-AB-P1-G9-9_real_A.jpeg}}\\hfill\n\t\\subfloat[Target triangle]{%\n\t\t\\includegraphics[width=0.137\\textwidth]{figures\/qualitative\/P1-G2-6-AB-P1-G9-9_cond_B.jpeg}}\\hfill\n\t\\subfloat[Category label]{%\n\t\t\\squarecat{\"9\"}}\\hfill\n\t\\subfloat[Ground truth]{%\n\t\t\\includegraphics[width=0.137\\textwidth]{figures\/qualitative\/P1-G2-6-AB-P1-G9-9_real_B.jpeg}}\\hfill\n\t\\subfloat[PG$^2$]{%\n\t\t\\includegraphics[width=0.137\\textwidth]{figures\/qualitative\/P1-G2-6-AB-P1-G9-9_fake_B1.jpeg}}\\hfill\n\t\\subfloat[GestureGAN]{%\n\t\t\\includegraphics[width=0.137\\textwidth]{figures\/qualitative\/P1-G2-6-AB-P1-G9-9_fake_B.jpeg}}\\hfill\n\t\\subfloat[$\\pmb{\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}]{%\n\t\t\\includegraphics[width=0.137\\textwidth]{figures\/qualitative\/P1-G2-6-AB-P1-G9-9_fake_B2_masked.jpeg}} \n\t\t\\\\\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.137\\textwidth]{figures\/qualitative\/tria\/S1-G1-16-color-AB-S1-G4-26-color_real_A.jpeg}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.137\\textwidth]{figures\/qualitative\/tria\/S1-G1-16-color-AB-S1-G4-26-color_cond_B.jpeg}}\\hfill\n\t\\subfloat{%\n\t\t\\squarecat{\"4\"}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.137\\textwidth]{figures\/qualitative\/tria\/S1-G1-16-color-AB-S1-G4-26-color_real_B.jpeg}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.137\\textwidth]{figures\/qualitative\/tria\/S1-G1-16-color-AB-S1-G4-26-color_fake_B2.jpeg}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.137\\textwidth]{figures\/qualitative\/tria\/S1-G1-16-color-AB-S1-G4-26-color_fake_B.jpeg}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.137\\textwidth]{figures\/qualitative\/tria\/S1-G1-16-color-AB-S1-G4-26-color_fake_B2_masked.jpeg}}\n\t\\end{minipage}\n\n\t\\caption{Qualitative comparison between ${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}~and competing works in the \\textit{challenging} experimental setting. NTU Hand Gesture dataset (top row) and Creative Senz3D (bottom row).\n\t}\n\t\\label{fig:res_qual_challenging}\n\t\n\\end{figure*}\n\n\n\n\n\n\\section{Results}\n\\noindent \\textbf{Quantitative Results.} \nWe begin by directly comparing ${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}\\ with the same experimental protocol and metrics used by our most similar competitor, GestureGAN~\\cite{tang2018gesturegan}.\nTable~\\ref{tab:QuantitativeResults_normal} shows that ${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}\\ performs better than all competing works in terms of MSE and PSNR, especially when the rolling guidance is employed. In terms of IS GestureGAN performs better.\nHowever, the MSE and the IS are not directly related to the quality of the generated images. 
The MSE is indeed a metric of pixel difference, while the low reliability of the Inception Score is well known~\\cite{barratt2018note}.\n\nFor this reason, we compare our method with competing works using the PSNR, the F1-score and the FID, in both the \\textit{normal} and \\textit{challenging} experimental settings. These metrics compare the diversity, quality and hand-gesture consistency of the generated images.\nTable~\\ref{tab:QuantitativeResults_both} shows that ${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}\\ outperforms all competing works, in both experimental settings and for all the metrics.\nIn the \\emph{normal} setting, compared to GestureGAN, we achieve a higher PSNR (36.51 vs 34.24 and 35.39 vs 28.65), a higher F1-score (0.99 vs 0.96) and a lower FID (13.73 vs 15.06 and 32.04 vs 54.00) on both datasets. \nOther methods perform particularly poorly in terms of F1-score. \nFor example, GANimation and StarGAN perform near randomness in the F1-score (random baseline $\\sim 0.09$), while ${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}~achieves near-perfect performance on Creative Senz3D ($0.99$) and NTU Hand Gesture ($0.93$).\nThese outcomes might be related to the fact that GANimation, StarGAN and PG$^2$ are not designed for image translation \\emph{in the wild}.\n\n${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}\\ significantly outperforms the competing methods in the \\emph{challenging} setting, where we enforce a stricter division of training and test gestures, and we use 100\\% of the data, differently from GestureGAN's settings.\nIn particular, the FID score is reduced by 49\\% and 60\\% with respect to GestureGAN on NTU Hand Gesture and Creative Senz3D, respectively. \nIn terms of F1-score, the result improves by 61\\% and 71\\% on NTU Hand Gesture and Creative Senz3D, respectively. \nThe PSNR improves by 10\\% and 24\\% on NTU Hand Gesture and Creative Senz3D, respectively. We note that the rolling guidance applied to GestureGAN (denoted in Table~\\ref{tab:QuantitativeResults_both} as GestureGAN$^\\dag$) improves the original FID results by 36\\% and 24\\% on NTU Hand Gesture and Creative Senz3D, respectively. \n\nAltogether, the quantitative results show that ${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}\\ outperforms the state of the art, both in the \\emph{normal} and in the \\emph{challenging} setting. \n\n\\begin{sloppypar}\n\\medskip\n\\noindent \\textbf{Qualitative Results.} Figure~\\ref{fig:res_qual_normal} shows some randomly selected gesture-to-gesture translations in the \\textit{normal} experimental setting. \nBoth GestureGAN~\\cite{tang2018gesturegan} and ${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}\\ produce sharp output results, while the output gestures from PG$^2$~\\cite{ma2017pose} are very blurry. ${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}\\ also produces a better-defined, sharper background than GestureGAN. \nStarGAN and GANimation, however, fail to produce gestures from the provided conditional map.\n\nWe further inspect the reason behind the poor results of PG$^2$ and GANimation. \nThese methods focus on multi-domain translation and are specifically tailored to the task of facial expression translation and to cases where the input and target objects are aligned. \nFigure~\\ref{fig:attention} depicts two sample cases of the difference and attention masks generated by these methods. It can be seen that PG$^2$ fails at finding the difference mask, especially in the NTU Hand Gesture dataset (top row). 
Similarly, GANimation generates an empty attention mask, which might also be the cause of its poor results in both Figure~\\ref{fig:res_qual_normal} and Table~\\ref{tab:QuantitativeResults_both}. \n${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}, instead, learns to focus on the differences between the source and target images.\n\nFigure~\\ref{fig:res_qual_challenging} shows the results in the \\textit{challenging} setting. \n${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}\\ generates sharper images with recognizable hand gestures, while other methods, such as GestureGAN, fail to do so. \nThis result is in line with the F1-scores reported in Table~\\ref{tab:QuantitativeResults_both} (bottom), which range between 0.10 and 0.38 for the competing works, and between 0.58 and 0.61 in the case of ${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}. \nBoth qualitative and quantitative results confirm that state-of-the-art methods are not suited to performing gesture translation in the challenging setting, i.e., where a user-friendly, category-independent conditional map is provided and the network is asked to translate to an unseen gesture for a given user.\nWe refer to the Supplementary Material for additional qualitative figures.\n\\end{sloppypar}\n\n\\begin{figure}[!htp]\n\t\\centering\n\t\\subfloat[Source]{%\n\t\t\\includegraphics[width=0.192\\columnwidth]{figures\/qualitative\/diversity\/S2-G6-4-color-S2-G9-5-color-1_real_A.jpeg}}\\hfill\n\t\\subfloat[Target map]{%\n\t\t\\includegraphics[width=0.192\\columnwidth]{figures\/qualitative\/diversity\/S2-G6-4-color-S2-G9-5-color-1_cond_B.jpeg}}\\hfill\n\t\\subfloat[\"7\"]{%\n\t\t\\includegraphics[width=0.192\\columnwidth]{figures\/qualitative\/diversity\/S2-G6-4-color-S2-G9-5-color-7_fake_B2_masked.jpeg}}\\hfill\n\t\\subfloat[\"9\"]{%\n\t\t\\includegraphics[width=0.192\\columnwidth]{figures\/qualitative\/diversity\/S2-G6-4-color-S2-G9-5-color-9_fake_B2_masked.jpeg}}\\hfill\n\t\\subfloat[\"11\"]{%\n\t\t\\includegraphics[width=0.192\\columnwidth]{figures\/qualitative\/diversity\/S2-G6-4-color-S2-G9-5-color-11_fake_B2_masked.jpeg}}\n\t\\\\ \n\t\\subfloat{%\n\t\t\\includegraphics[width=0.192\\columnwidth]{figures\/qualitative\/diversity\/BOS2-G6-4-color-S2-G9-5-color-1_real_A.jpeg}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.192\\columnwidth]{figures\/qualitative\/diversity\/BOS2-G6-4-color-S2-G9-5-color-1_cond_B.jpeg}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.192\\columnwidth]{figures\/qualitative\/diversity\/BOS2-G6-4-color-S2-G9-5-color-7_fake_B2_masked.jpeg}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.192\\columnwidth]{figures\/qualitative\/diversity\/BOS2-G6-4-color-S2-G9-5-color-9_fake_B2_masked.jpeg}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.192\\columnwidth]{figures\/qualitative\/diversity\/BOS2-G6-4-color-S2-G9-5-color-7_fake_B2_masked.jpeg}}\n\t\\\\ 
\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.192\\columnwidth]{figures\/qualitative\/diversity\/SKS2-G6-4-color-S2-G9-5-color-1_real_A.jpeg}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.192\\columnwidth]{figures\/qualitative\/diversity\/SKS2-G6-4-color-S2-G9-5-color-1_cond_B.jpeg}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.192\\columnwidth]{figures\/qualitative\/diversity\/SKS2-G6-4-color-S2-G9-5-color-7_fake_B2_masked.jpeg}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.192\\columnwidth]{figures\/qualitative\/diversity\/SKS2-G6-4-color-S2-G9-5-color-9_fake_B2_masked.jpeg}}\\hfill\n\t\\subfloat{%\n\t\t\\includegraphics[width=0.192\\columnwidth]{figures\/qualitative\/diversity\/SKS2-G6-4-color-S2-G9-5-color-11_fake_B2_masked.jpeg}}\n\n\t\\caption{${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}~decouples the conditional map from the category label of the hand gesture. The same conditional map can be used with different category labels to generate multiple images. \n\t}\n\t\\label{fig:diversity}\n\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\renewcommand{\\tabcolsep}{1pt}\n\t\\subfloat{\\footnotesize\\begin{tabular}{>{\\centering\\arraybackslash}p{0.39\\columnwidth}>{\\centering\\arraybackslash}p{0.58\\columnwidth}}\n        \\textbf{Source} & \\textbf{Conditional maps and generated images} \\\\ \n    \\end{tabular}}\\\\\n    \\subfloat{%\n\t\t\\includegraphics[width=0.39\\columnwidth,height=0.396\\columnwidth]{figures\/qualitative\/shift\/S1-G1-21-color-S1-G7-8-color-7-a_real_A.jpeg}}\\hfill\n\t\\subfloat{\\begin{minipage}{0.192\\columnwidth}%\n\t\t\t\\setlength{\\lineskip}{3pt}%\n\t\t\t\\includegraphics[width=\\columnwidth]{figures\/qualitative\/shift\/S1-G1-21-color-S1-G7-8-color-7-13_cond_B.jpeg}\\\\\n\t\t\t\\includegraphics[width=\\columnwidth]{figures\/qualitative\/shift\/S1-G1-21-color-S1-G7-8-color-7-13_fake_B2_masked.jpeg}\n\t\\end{minipage}}\\hfill\n\t\\subfloat{\\begin{minipage}{0.192\\columnwidth}%\n\t\t\t\\setlength{\\lineskip}{3pt}%\n\t\t\t\\includegraphics[width=\\columnwidth]{figures\/qualitative\/shift\/SKS1-G1-21-color-S1-G7-8-color-7-6_cond_B.jpeg}\\\\\n\t\t\t\\includegraphics[width=\\columnwidth]{figures\/qualitative\/shift\/SKS1-G1-21-color-S1-G7-8-color-7-6_fake_B2_masked.jpeg}\n\t\\end{minipage}}\\hfill\n\t\\subfloat{\\begin{minipage}{0.192\\columnwidth}%\n\t\t\t\\setlength{\\lineskip}{3pt}%\n\t\t\t\\includegraphics[width=\\columnwidth]{figures\/qualitative\/shift\/SKS1-G1-21-color-S1-G7-8-color-7-7_cond_B.jpeg}\\\\\n\t\t\t\\includegraphics[width=\\columnwidth]{figures\/qualitative\/shift\/SKS1-G1-21-color-S1-G7-8-color-7-7_fake_B2_masked.jpeg}\n\t\\end{minipage}}\n\n\t\\caption{\n\t${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}~generates images when the same conditional map is rotated, shifted and resized.\n\t}\n\t\\label{fig:diversity2}\n\n\\end{figure}\n\n\\medskip\n\\noindent \\textbf{Diversity Study.} ${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}\\ decouples the desired hand gesture category, specified through a class number, from its location, scale and orientation, which are specified by a conditional map.\nThis means that users can use the same conditional map with different hand gesture categories to generate multiple images.\nFigure~\\ref{fig:diversity} shows that ${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}\\ can generate three different, distinguishable hand gestures by using the same \\emph{triangle} conditional map and different 
category numbers.\nInstead, when non-category-independent maps are used (e.g., boundary or skeleton), ${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}\\ fails to synthesize several hand gesture categories from the same conditional map. \nAs mentioned before, \\emph{Boundary} maps are weakly category dependent, as their shape might suggest the type of gesture, while \\emph{Skeleton} and \\emph{Key-points} maps are category dependent. \n\nFurthermore, we test the performance of ${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}\\ with the same source image and category, but with different conditional maps. \nFor this test, we manually draw three triangle conditional maps with arbitrary size, location and orientation.\nFigure~\\ref{fig:diversity2} shows that our method faithfully synthesizes the target gestures in all cases.\nAltogether, we show that users can generate hand gestures \\emph{in the wild} with much less effort than with state-of-the-art models, which require complex annotations that depend on the specific hand gesture users want to generate. \n\n\n\n\n\\begin{table}[t]\n \\caption{Perceptual user study. Percentage of times, on average, when the translated images are selected as ``real'' by users, in the ``fake'' vs. ``real'' comparison.}\n \n \\centering\n \\small\n \\begin{tabularx}{\\columnwidth}{@{}X rr rrr@{}}\n \\toprule \n \\textbf{Model} & \\multicolumn{2}{c}{\\textbf{NTU Hand Gesture~\\cite{ren2013robust}}} & \n \\multicolumn{2}{c}{\\textbf{Creative Senz3D~\\cite{memo2018head}}} \\\\\n \\cmidrule(r{4pt}){2-3} \\cmidrule(l{4pt}){4-5}\n & \\textit{Normal} & \\textit{Challenging} & \\textit{Normal} & \\textit{Challenging} \\\\\n \\midrule\n GestureGAN & 36.54\\% & 3.21\\% & 3.85\\% & 0.64\\% \\\\\n ${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN} & \\textbf{44.32\\%} & \\textbf{7.05\\%} & \\textbf{16.03\\%} & \\textbf{3.85\\%} \\\\\n \\bottomrule\n \\end{tabularx}\n \\label{tab:userstudy}\n\\end{table}\n\n\\medskip\n\\noindent \\textbf{Perceptual User Study.}\nTable~\\ref{tab:userstudy} shows the outcome of the perceptual user study, where we report the percentage of times when the translated image wins against the original target image in the real vs. fake comparison, i.e., when the original image is selected as fake. It can be seen that ${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}\\ outperforms GestureGAN~\\cite{tang2018gesturegan} in both experimental settings. 
\nThis means that ${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}\\ generates higher quality images that can be mistaken for the real ones at higher rates than those of competitors.\n\n\n\\begin{table}[t]\n\t\\centering\n\t\\small\n\t\\caption{Performance degradation of $\\pmb{\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}\\ by removing the rolling guidance approach.\n\t}\n\n\t\\begin{tabularx}{\\columnwidth}{@{}X c rrr@{}}\n\t\\toprule\n\t\t\\textbf{Conditional map} & \\textbf{Category} & \n\t\t\\multicolumn{3}{c}{\\textbf{Creative Senz3D~\\cite{memo2018head}}} \\\\ \n\t\t\\cmidrule{3-5} \n\t\t& \\textbf{independent} & PSNR & FID & F1 \n\t\t\\\\\n\t\t\\midrule\n\t\t\\emph{Triangle} & \\checkmark & 1.53\\% \\textcolor{red}{\\textdownarrow} & 24.77\\% \\textcolor{red}{\\textuparrow} & 75.76\\% \\textcolor{red}{\\textdownarrow}\n\t\t\\\\\n\t\t\\emph{Boundary} & $\\sim$ &\n\t\t 1.84\\% \\textcolor{red}{\\textdownarrow} & 35.37\\% \\textcolor{red}{\\textuparrow} & 58.06\\% \\textcolor{red}{\\textdownarrow}\n\t\t\\\\\n\t\t\\emph{Skeleton} & $\\times$ &\n\t\t 10.66\\% \\textcolor{red}{\\textdownarrow} & 81.90\\% \\textcolor{red}{\\textuparrow} & 12.50\\% \\textcolor{red}{\\textdownarrow}\n\t\t\\\\\n\t\t\\bottomrule\n\t\\end{tabularx}\n\t\\label{tab:ablation}\n\t\\vspace{-0.3cm}\n\\end{table}\n\n\n\\medskip\n\\noindent \\textbf{Ablation study.} We further investigate the effect of the rolling guidance approach by removing this component from ${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}. \nFrom the quantitative results in Table~\\ref{tab:QuantitativeResults_both}, it can be seen that the rolling guidance significantly improves the quality of the generated images.\nIn Table~\\ref{tab:ablation} we report the degradation of our method's performance without rolling guidance on the Creative Senz3D dataset, for the three types of annotation maps. \nSpecifically, we observe 24.77\\%, 35.37\\% and 81.90\\% worse (increased) FID scores for the \\emph{Triangle}, \\emph{Boundary} and \\emph{Skeleton} maps, respectively. \nWhile the F1-score decreases by 75.76\\% and 58.06\\% on the \\emph{Triangle} and \\emph{Boundary} maps respectively, for \\emph{Skeleton} maps it decreases by 12.50\\%. In terms of PSNR, the degradation is less significant for the \\emph{Triangle} and \\emph{Boundary} maps, but higher (10.66\\%) for \\emph{Skeleton} maps.\nFinally, we observed that a single rolling iteration is sufficient to improve the generated results, while ${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}\\ does not benefit from additional rolling iterations.\n\n\n\n\n\n\n\n\\section{Conclusion}\nWe have presented a novel GAN architecture for gesture-to-gesture translation \\textit{in the wild}. Our model decouples the conditional input into a category label and an easy-to-draw conditional map.\nThe proposed attention module and rolling guidance approach allow generating sharper images \\textit{in the wild}, faithfully rendering the target gesture while preserving the background context. \nExperiments on two public datasets have shown that ${\\mathlarger{\\mathlarger{\\bigtriangleup}}}$-{GAN}\\ outperforms the state of the art both quantitatively and qualitatively.\nMoreover, it allows the use of simple-to-draw and category-independent conditional maps, such as triangles.\nThis significantly reduces the annotation effort, both by simplifying drawing and by allowing a single map to be used for multiple gesture translations. 
The proposed framework is not limited to category labels but can also be used with embeddings, learned from the data, that express the gesture characteristics. In future work, users could easily provide a reference image instead of a label, and translate the original image to the target gesture expressed by the reference image.\n\nOur approach is especially important when the target image, and thus the target conditional map, is not available, which is the typical scenario in photo-editing software. \n\n\n\n\n\n\\begin{acks}\nWe gratefully acknowledge NVIDIA Corporation for the donation of the Titan X GPUs and Fondazione Caritro for supporting the SMARTourism project.\n\\end{acks}\n\n\n\\bibliographystyle{ACM-Reference-Format}\n\\balance\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nStudying regions close to the Galactic plane in the optical is difficult due both to dust obscuration and source confusion. It was only recently that Feast et al. (2014) reported the discovery of five classical Cepheid variables at distances of 13 - 22 kpc from the Galactic center, towards the Galactic bulge, that may be associated with the flared atomic hydrogen disk of our Galaxy. Two classical Cepheid variables at 11 kpc close to the plane of the Milky Way have been recently uncovered from VVV data (Dekany et al. 2015), indicating an underlying young star cluster. Searches for dwarf galaxies in the optical have primarily targeted high latitudes (McConnachie 2012). The Sagittarius (Sgr) dwarf galaxy is the closest known dwarf galaxy to the plane, at a latitude of $b = -14^\\circ$ (Ibata et al. 1994). The dearth of Milky Way satellites at low latitudes (Mateo 1998; McConnachie 2012) is underscored by simulations that suggest that there may be massive, nearly dark satellites that have not yet been discovered (Boylan-Kolchin et al. 2011). Not only dwarf galaxies, but even bright spiral galaxies are not easily seen if they are hidden behind the obscuring column of dust and gas of the Galactic disk (Kraan-Korteweg et al. 1994).\n\nMining data from deep infrared surveys of the Galactic plane may well uncover new dwarf galaxies and halo sub-structure. This would alleviate several outstanding problems in near-field cosmology. The \"missing satellites problem\", or the overabundance of dwarf galaxies in cosmological simulations relative to the number of observed dwarf galaxies in and around the Local Group (Klypin et al. 1999), and the \"too big to fail problem\", wherein there are too few massive satellites in the Milky Way relative to cosmological simulations (Boylan-Kolchin et al. 2011), are two such outstanding problems. Yet another is the ostensibly anisotropic distribution of the Milky Way satellites (Kroupa et al. 2005). These discrepancies may be resolved by a more complete inventory of the structure of our Galaxy at low latitudes. \n\nWe have searched for distant stars close to the Galactic plane using near-infrared data from the ESO Public survey VISTA Variables in the Via Lactea (VVV) (Minniti et al. 2011; S12), targeting the VVV disk area, which covers Galactic longitudes $-65.3^\\circ < l <\n-10^\\circ$ within Galactic latitudes $-2.25^\\circ < b < +2.25^\\circ$. 
The VVV survey is an ongoing 5-band photometric survey in the Z (0.87$~\\mu\\rm m$), Y (1.02$~\\mu\\rm m$), \nJ (1.25 $~\\mu\\rm m$), H (1.64 $~\\mu\\rm m$) and $K_{s}$ (2.14 $~\\mu\\rm m$) bands (S12),\nand is multi-epoch in the $K_{s}$ band, with approximately 30-40 epochs per star across the VVV disk area at the time of writing. \nIn \\S \\ref{sec:results}, we review the methods we used to identify Cepheid variables, and present the distance and extinction values. We discuss possible interpretations and conclude in \\S \\ref{sec:conclusion}.\n\n\\section{Results \\& Analysis}\n\\label{sec:results}\n\nThe infrared photometry is from the VVV survey, and is based on aperture photometry\ncomputed on the individual tile images (S12).\nEach of the sources was observed with a median exposure time of $16$ s\nper pixel, depending on the position in the tile (each exposure is $8$ s\nlong, and most of the area in a tile is a combination of two\npointings). The limiting magnitude of the VVV data using aperture photometry\nis $K_{s} \\sim 18.0$ mag in most fields (S12). \nA particular pointing is called a ``tile''; it covers $\\sim1.64$ square\ndegrees and contains approximately $10^{6}$ stars. \nAs a preliminary search, we examined the disk area of the VVV survey by applying color cuts that correspond to distant ($D > 60~\\rm kpc$) red-clump\nstars. Red-clump stars have been shown to be good distance indicators (Alves 2000; Paczynski \\& Stanek 1998).\nGiven the mean values of intrinsic near-infrared colors for red-clump stars in the Milky Way disk and the Cardelli et al. (1989) extinction law, we used the distance modulus noted in Minniti et al. (2011), which gives a color cut of $1.5<(J-K_{\\rm s})<1.8$ and\n$K_{s} > 17.6$ (which corresponds to distances in excess of $\\sim$ 60 kpc). Using this color cut, we saw an excess of distant red-clump stars \nat $l \\sim -27^{\\circ}$. We defer a detailed analysis of the red-clump stars and other stellar populations to a future paper.\n\nWe carried out a search for variable stars, restricting our search to faint variables, with mean $K_{s} > 15$ mag, and periods greater than three days. We examined the variability data in five tiles close to Galactic longitude $l \\sim -27^\\circ$ and searched six comparison tiles at other locations in the VVV disk area. We found four Cepheid variables at $l \\sim -27^\\circ$ at an average distance of $\\sim$ 90 kpc,\nand none in the other tiles. The survey strategy ensures that the tiles in the VVV disk area have a similar number of observations and a similar limiting magnitude (S12). While the control on the cadence is limited (Saito et al. 2013), we have checked that there is no significant difference in the cadence for the $l \\sim -27^\\circ$ tiles relative to the rest of the disk area, i.e., the region at $l \\sim -27^\\circ$ is not unique in terms of the way it was observed.\n\nIn identifying Cepheids, we employed\nseveral successive tests. The first two tests are based on the statistical significance of the highest peak in the Lomb-Scargle periodogram, and the uncertainty of the period, respectively. These two tests ensure that we have identified sources of a given pulsation period, and that there is a small uncertainty in the period that we derive. For the final cut, we quantitatively assess the shapes of the light curves by calculating the Fourier parameters of the sources, as well as the skewness and acuteness parameters, and visually inspect the light curves. 
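\n\nAs a concrete illustration of the red-clump color cut quoted above, the short Python sketch below applies the selection $1.5<(J-K_{s})<1.8$, $K_{s} > 17.6$ to a catalog of $J$ and $K_{s}$ magnitudes. The arrays here are synthetic stand-ins for a VVV tile catalog; only the cut itself is taken from the text.\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\n# Synthetic stand-in for a ~10^6-star tile catalog (illustrative values).\nJ = rng.uniform(12.0, 20.0, size=1_000_000)\nKs = J - rng.uniform(0.0, 2.5, size=J.size)\n\n# Color cut for distant (D > ~60 kpc) red-clump candidates.\njk = J - Ks\ndistant_rc = (jk > 1.5) & (jk < 1.8) & (Ks > 17.6)\nprint(f\"red-clump candidates: {distant_rc.sum()} of {J.size}\")\n\\end{verbatim}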
\n\nA given tile is searched using the Lomb-Scargle algorithm (Lomb 1976; Scargle 1982), and periodograms\nare constructed for every source. The statistical significance of the amplitude of the largest peak in the periodogram (Scargle 1982)\ncorresponds to a false alarm probability $p_{0}$. If one claims a detection of the signal whenever the amplitude\nexceeds the threshold value, one can expect to be wrong a fraction $p_{0}$ of the time. Alternately, the statistical significance\nlevel of the detection is $1-p_{0}$, and this quantity is listed in Table 1.\nFor the first test, we require $\\bar{K_{s}} >$ 15 mag\nto search for faint variables, and set the minimum and maximum period range in our variability search\nbetween 3 and 50 days. To pass the first test, sources have to satisfy the following conditions:\n1) the period corresponding to the maximum in the Lomb-Scargle periodogram is greater\nthan three days, 2) the maximum in the Lomb-Scargle periodogram exceeds the 90th percentile \nfor the significance level, 3) if there are other maxima in the periodogram that are at the 90th percentile\nor higher, the periods corresponding to these maxima must differ by a factor of two or less. \nThe last condition amounts to requiring a clean periodogram without spuriously large multiple peaks.\n\nIn the second test, we assess the quality of the light curves of the variables that pass the tests above with a\nparametric bootstrap. Assuming a Gaussian distribution of errors, we sample the distribution one thousand times\nto derive the distribution of periods for each source, which is similar to prior work (Klein et al. 2012; Klein et al. 2014)\non RR Lyrae stars.\nIf the mean of the period distribution agrees to within 20\\% with the period calculated from the raw data, and \nif the mean of the period distribution $\\pm$ the standard deviation still exceeds three days, we consider the\nperiod distribution to be sufficiently well constrained. The goal of this second cut is to select sources that have a small uncertainty in the derived period, given the photometric errors. \nThe Lomb-Scargle algorithm allows us to derive the period and its statistical significance, but not the uncertainty in the derived period. \nIf the width of the histogram that gives the distribution of periods from the bootstrap calculation is narrow, the uncertainty in the derived period is low. For the sources that pass the above tests, we fit the light curves with a Fourier series (Kovacs \\& Kupi 2007).\n The Fourier parameters are similar to those of classical Cepheid light curves for $P \\sim 3 - 15$ days observed in the K-band (Persson et al. 2004; Bhardwaj et al. 2015). The light curves of Cepheid variables in the optical are different in shape and amplitude from the light curves of Cepheid variables in the K-band (Matsunaga et al. 2013; Bhardwaj et al. 2015), and our comparison here is to the observed light curves of classical Cepheid variables in the K-band. The Cepheids we list here pass all of the automated and visual checks. Because of this multi-tiered, conservative approach, the light curve analysis is time consuming, but allows us to derive accurate distances. It is worth noting that in addition to lower extinctions in the infrared relative to the optical, another advantage of infrared photometry of Cepheids is that it is minimally affected by metallicity variations (Bono et al. 2010; Freedman et al. 
2010).\n\nThe tiles close to longitude $l \\sim -27^\\circ$ produce a significantly larger number of variables\nthat pass the first of our tests than the other six tiles we examined (at $l = -15^{\\circ},-29^{\\circ}, -35^{\\circ}, -40^{\\circ}, -50^{\\circ}, -65^{\\circ}$). Figure 2 of S12 depicts the VVV survey area. Tiles d027, d065, d103, and d141, \nwhich are centered at $l \\sim -27 ^\\circ$ and extend upwards in latitude (S12), each produce $\\sim 100-200$ sources\nthat pass our first cut. In contrast, the average number that pass the first cut from the comparison tiles is $\\sim$ 60. If we consider \nthis background number to be the mean of a Poisson distribution, and randomly sample a Poisson distribution with this mean value, values in excess of 100 are above 5-$\\sigma$, i.e., they are statistically extremely unlikely to occur by chance. Figure \\ref{f:comb} shows the number of sources that pass the first of our tests as a function of longitude (the value at $l \\sim -27 ^\\circ$ is an average over latitude), as well as a function of the total number of variable stars in the tile. While the number of sources that pass the first of our tests has some correlation with the number of variable stars (which is not unexpected), the region at $l \\sim -27 ^\\circ$ is a clear outlier. In some tiles centered at $l \\sim -27 ^\\circ$, there were a significant number of sources that passed our automated and visual analysis of the light curves, but did not pass our visual inspection of the images, due to the possibility of spikes or blending. \nThe number of Cepheid variables that we report here from the final cut is very likely an underestimate. \n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.5]{comb.png}\n\\caption{The number of sources that pass the first of our tests (requiring statistical significance greater than the 90th percentile, $P >$ 3 days, $K_{s} > 15~\\rm mag$) is shown as a function of longitude in the top panel, and as a function of the total number of variable stars in the tile (normalized to one million) in the bottom panel. The error bars in the top panel are from Poisson noise. \n\\label{f:comb}}\n\\end{center}\n\\end{figure}\n\nThe phase-folded light curves of the Cepheid variables, which show a clear resemblance to each other, and corresponding images are shown in Figure 1. Table 1 summarizes the derived distances and other parameters for the Cepheids. To estimate the dust extinction from the excess color, we use the quasi-simultaneous\nsingle epoch VVV measurements in the $J$, $H$ and $K_{s}$ bands ($\\sim$ 190 s between each band). The near-infrared amplitudes of \nclassical Cepheids are relatively small (Persson et al. 2004) and as such an estimate of the\ndust extinction from the single epoch measurements of the colour should be sufficient.\nThe average extinction-corrected $(J-K_{s})$ colour of the Cepheids is $\\sim$ 0.4, which is\nconsistent with the colors of short-period classical Cepheids (Persson et al. 
2004) in the LMC.\n\n\\begin{table*}\n\\centering\n \\caption{Data For Individual Cepheids and Derived Parameters}\n\n \\begin{tabular}{@{}lccccccc@{}}\n \\hline\n\nVVV ID & $l~\\rm (deg)$ & $b~\\rm (deg)$ & D (kpc) & P (day) & $\\bar{K_{s}}$ & Significance Level \\\\\n\\hline\n\nVVVJ162559.36-522234.0 & -27.5971 & -2.23686 & 92 & 3.42 & 16.04 & 91 \\% \\\\\nVVV J162328.18-513230.4 & -27.2729 & -1.37557 & 100 & 4.19 & 16.12 & 93 \\% \\\\\nVVVJ162119.39-520233.3 & -27.8621 & -1.49382 & 71 & 5.69 & 15.1 & 97 \\% \\\\\nVVV J161542.47-494439.0 & -26.8882 & 0.768427 & 93 & 13.9 & 15.6 & 98 \\% \\\\\n\n\n\n\\hline\n\n\\end{tabular} \n\n\\small {Columns: VVV ID, Galactic longitude ($l$) and latitude ($b$); $D$ is the distance from the Sun; $P$ is the pulsation period; $\\bar{K_{s}}$ is the mean $K_{s}$-band magnitude; the last column is the significance level of the highest peak in the Lomb-Scargle periodogram. }\n\n\n\n\\end{table*} \n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.65]{Figure1Cep.png}\n\\caption{$JHK_{\\rm s}$ false color\nimage of the Cepheid variables, with phase-folded $K_{\\rm s}$-band\nlight curves. All fields are $30\"\\times30\"$, oriented in Galactic\ncoordinates. The VVV ID and period are also listed in Table 1, along\nwith the Galactic latitude, longitude, distances and average $K_{\\rm\ns}$-band magnitude. The four light curves have a clear resemblance to each other, and a quantitative assessment of their shapes shows they are similar to those of $K_{s}$-band light curves of classical Cepheids.\n\\label{f:cep}}\n\\end{center}\n\\end{figure}\n\nWe adopt the period-luminosity relations of classical Cepheids in the LMC (Matsunaga et al. 2011), with an LMC distance modulus\nof 18.5 mag and interstellar extinction value of $A_{K_{s}} = 0.02$ mag for the LMC\ndirection. This gives the distance modulus $\\mu$ for a Cepheid with pulsation\nperiod $P$ (Feast et al. 2014):\n\n\\begin{equation}\n\\mu= K_{s} - A_{K_{s}} + 3.284~\\log(P) + 2.383 \\; ,\n\\end{equation}\nwhere $A_{K_{s}}$ is the extinction in the $K_{s}$ band, which we can express in terms\nof the colour excess:\n\n\\begin{equation}\nA_{K_{s}} = 0.6822 E(J - K_{s}) \\; ,\n\\end{equation}\nwhere $E(J-K_{s}) = (M_{J} - M_{K_{s}})_{\\rm obs} - (M_{J} - M_{K_{s}})_{\\rm int} $ is the difference between the observed and intrinsic colors,\nand we adopt the period-luminosity relations of classical Cepheids in the LMC (Matsunaga et al. 2011) and the Cardelli et al. (1989) extinction law. The single-epoch colors and extinctions in the $K_{s}$ and $J$ bands,\nalong with the extinction corrected colors, are listed in Table 2. Extinction values from dust maps derived\nfrom far-IR colors (Schlegel et al. 1998), as well as from recent work based on spectra from the Sloan Digital Sky Survey (Schlafly \\& Finkbeiner 2011), are slightly larger close to the plane of the Galaxy along these lines of sight. If we consider the standard deviation of these three values to be the uncertainty in the dust extinction, and include the photometric errors and the uncertainty in the period distribution to derive the uncertainties in the distance, on average this gives a distance uncertainty of $\\sim$ 20\\%, where the dominant term is the uncertainty in the extinction. The dust extinction in this area (Schlegel et al. 1998) is not unusual, i.e., this area is a region of neither unusually high nor unusually low dust extinction. 
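\n\nTo make Eqs. (1) and (2) concrete, the following Python snippet reproduces, to within the rounding of the tabulated values, the distance of the first Cepheid in Tables 1 and 2. The mean intrinsic colour of $\\sim$ 0.4 quoted above is used as an assumption when forming the colour excess.\n\\begin{verbatim}\nimport numpy as np\n\n# Color excess -> extinction (Eq. 2), from the single-epoch J and Ks\n# in Table 2 and an assumed intrinsic (J - Ks) of 0.4:\nJ, Ks_single = 17.078, 16.175\nA_Ks = 0.6822 * ((J - Ks_single) - 0.4)   # ~0.34 mag (Table 2: 0.348)\n\n# Distance modulus (Eq. 1), using the mean Ks and period from Table 1:\nKs_mean, P = 16.04, 3.42\nmu = Ks_mean - A_Ks + 3.284 * np.log10(P) + 2.383\nD_kpc = 10 ** (mu / 5.0 + 1.0) / 1e3      # from mu = 5 log10(D/pc) - 5\nprint(f\"mu = {mu:.2f} mag, D = {D_kpc:.0f} kpc\")  # ~92-93 kpc (Table 1: 92)\n\\end{verbatim}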
\n\n\\begin{table*}\n\\centering\n \\caption{Photometry Data And Extinction}\n\n\n \\begin{tabular}{@{}lccccccc@{}}\n \\hline\n\nVVV ID & J & H & $K_{s}$ & $A_{K_{s}} $ & $A_{J}$ & $(J-K_{s})_{\\rm corr}$ \\\\ \n\\hline\n\n\nVVVJ162559.36-522234.0 & 17.078 & 16.429 & 16.175 & 0.348 & 0.83 & 0.42 \\\\\nVVV J162328.18-513230.4 & 17.88 & 17.09 & 16.7 & 0.53 & 1.25 & 0.44 \\\\\nVVVJ162119.39-520233.3 & 16.416 & 15.414 & 14.96 & 0.7 & 1.68 & 0.48 \\\\\nVVV J161542.47-494439.0 & 18.71 & 16.69 & 15.45 & 1.89 & 4.52 & 0.6 \\\\\n\n\\hline\n\n\n\n\\end{tabular}\n\\small {\\center{Single epoch VVV photometry in the $J, H$ and $K_{s}$ bands, with extinction values derived from the color excess assuming the Cardelli et al. (1989) extinction law, along with the extinction corrected $(J-K_{s})$ color.}\n}\n\n\\end{table*}\n\nShort-period ($\\sim$ day) close contact eclipsing binaries like W Ursae Majoris stars can mimic the sinusoidal light curves of RR Lyrae stars, which have periods of $\\sim$ a day (Rucinski 1993), but this is less of a concern for long-period variables. To ensure that the shapes of the light curves are quantitatively similar to those of classical Cepheids, we compute their Fourier parameters, as well as the skewness and acuteness parameters, and visually inspect all the light curves.\nWe have computed the Fourier parameters of our sources (Figure \\ref{f:fourier}), following Kovacs \\& Kupi (2007). Out to fourth order, the Fourier series is expressed as:\n\n\\begin{equation}\nm(t) = A_{0} + \\sum_{i=1}^{4} A_{i} \\cos\\left(2\\pi i t\/P + \\phi_{i} \\right)\\; ,\n\\end{equation}\nwhere $m(t)$ is the light curve, $P$ is the period, and $A_{i}$ and $\\phi_{i}$ are the amplitudes and phases, respectively.\nThe top panel shows $R_{21} = A_{2}\/A_{1}$ and $\\phi_{21} = \\phi_{2} - 2 \\phi_{1}$, and the bottom panel shows $R_{31} = A_{3}\/A_{1}$ and $\\phi_{31} = \\phi_{3} - 3 \\phi_{1}$. Eclipsing binaries have $\\phi_{21}$ and $\\phi_{31}$ values close to 2$\\pi$ or zero, which reflects their symmetric variations (Matsunaga et al. 2013). Thus, the Fourier parameters of our sources indicate that they are not eclipsing binaries. Bhardwaj et al. (2015) provides a compilation of the Fourier parameters of a large number of Galactic and LMC Type I Cepheids across a range of wavelengths. This work shows the differences in the shape of the light curve in the K-band relative to the I-band, as well as differences between the K-band light curves of Cepheids in the Galaxy and in the LMC. We have overplotted in Figure \\ref{f:fourier} the Fourier parameters of Type II Cepheids in the Milky Way (Matsunaga et al. 2013). These Type II Cepheids tend to have lower $\\phi_{31}$ values than classical, Type I Cepheids, but there is scatter in the Fourier parameters derived from K-band light curves.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.35]{fourierall.png}\n\\caption{The Fourier parameters (defined by Eqn 3 and corresponding text) are plotted vs period. We have plotted here the Fourier parameters derived from K-band light curves of Type I, classical Cepheids in the Milky Way (marked \"MW\"), Type I classical Cepheids from the LMC (marked \"LMC\") (Bhardwaj et al. 2015), the Cepheid variables discovered at $\\sim$ 90 kpc (marked \"this paper\"), eclipsing binaries (Matsunaga et al. 2013) (marked \"EB\"), and Type II Cepheids (Matsunaga et al. 
2013) (marked \"Type II\").\n\\label{f:fourier}}\n\\end{center}\n\\end{figure}\n\nThe shape of the light curve can be further quantified by the skewness ($S_{k}$) and acuteness ($A_{c}$) parameters:\n\\begin{equation}\nS_{k}=\\frac{1}{\\phi_{\\rm rb}} -1, ~~~~ \\phi_{\\rm rb} = \\phi_{\\rm max}-\\phi_{\\rm min},~~~~ A_{c} = \\frac{1} {\\phi_{\\rm fw}} -1 \\; ,\n\\end{equation}\nwhere $\\phi_{\\rm max}$ and $\\phi_{\\rm min}$ are the phases corresponding to the maximum and minimum of the rising branch, respectively, $\\phi_{\\rm rb}$ is therefore the phase duration of the rising branch, and $\\phi_{\\rm fw}$ is the full width at half maximum of the light curve. Bhardwaj et al. (2015) demonstrated that the skewness parameter derived from I-band light curves is significantly higher than that derived from K-band light curves. The average skewness parameter of our sources is $\\sim$ 0.63 and the average acuteness parameter is $\\sim$ 0.8, which is comparable to classical Cepheids observed in the K-band (Bhardwaj et al. 2015). \n\n\\section{Discussion \\& Conclusion}\n\\label{sec:conclusion}\n\nBy employing a series of successive tests to determine the periods of variable stars and the uncertainty in their periods, together with a quantitative assessment of the light curve shape, we have found four Cepheid variables within an angular extent of one degree centered at Galactic longitude $l = -27.4^\\circ$ and Galactic latitude $b = -1.08 ^\\circ$, at an average distance of 90 kpc. These successive tests are not satisfied at any of the other locations where we searched for Cepheid variables. Spectroscopic observations would be useful to confirm the spectral type and determine a radial velocity. Type II Cepheids that are part of the Galactic halo are not expected to be clustered within a degree, which is what we see here. \nType II Cepheids that are part of a dwarf galaxy can be clustered. There are many more Type I, classical Cepheids than Type II Cepheids; the OGLE survey has detected 3361 Type I, classical Cepheids in the LMC, and 197 Type II Cepheids (Soszynski et al. 2008; Soszynski et al. 2008a). Unless this object is as massive and extended as the LMC, one would expect that these sources are more likely to be Type I rather than Type II Cepheids. If they are Type II Cepheids, they would be at an average distance of $\\sim$ 50 kpc (Matsunaga et al. 2013), and such a concentration of Type II Cepheids (which are very rare) is unexpected beyond the edge of the Galactic disk. Therefore, on the basis of the Fourier parameters, the skewness and acuteness parameters, and their angular concentration, we conclude that these sources are Type I Cepheids.\n\nEarlier work (Chakrabarti \\& Blitz 2009) predicted that the observed perturbations in the atomic hydrogen disk of our Galaxy (Levine, Blitz \\& Heiles 2006)\nare due to a recent (300 Myr ago) interaction with a dwarf satellite galaxy that is one-hundredth the mass of our Galaxy,\ncurrently at a distance of 90 kpc from the Galactic center, close to the plane, and within\nGalactic longitudes of $-50^{\\circ} < l < -10^{\\circ}$ (Chakrabarti \\& Blitz 2011). \nThis methodology was applied to spiral galaxies with known, tidally dominant optical companions to provide a proof of principle of the method (Chakrabarti et al. 2011). \n\nThere are no known dwarf galaxies that have\ntidal debris at this location. \nThe tidal debris of the Sgr dwarf does not extend to within $\\sim$\ntwenty-five degrees of Galactic longitude of $l=-27^\\circ$ (Carlin et al. 
2012), and the Magellanic stream does not extend to within $\\sim$ 40 degrees of this region (Putman et al. 2003). \nThe Canis Major overdensity was identified as an excess of M-giant stars from the Two Micron All Sky Survey (2MASS)\nat $(l,b) = (-120^\\circ,-8^\\circ)$, at a distance of $\\sim$ 7 kpc from the Sun (Martin et al. 2004).\nIts proximity to the Milky Way indicates that this overdensity is also unlikely to be associated with the\nCepheids we report here.\n\nThese are the most distant Cepheid variables close to the plane of our Galaxy discovered to date. The fact that the Cepheids that we detect are at an average distance of 90 kpc, highly clustered in angle (within one degree) and in distance (within 20\\% of the mean value of 90 kpc), is difficult to explain without invoking the hypothesis that these stars are associated with a dwarf galaxy, which may be more extended in latitude than can be determined from the VVV survey alone. Constraining the structure of this object should be possible with future deeper observations.\n\n\\bigskip\n\\bigskip\n\n\\acknowledgements\nWe gratefully acknowledge use of data from the ESO Public Survey\nprogramme ID 179.B-2002 taken with the VISTA Telescope, and data\nproducts from the Cambridge Astronomical Survey Unit. R.K.S.\nacknowledges support from CNPq\/Brazil through projects 310636\/2013-2\nand 481468\/2013-7. F.G. acknowledges support from CONICYT-PCHA Mag\\'{i}ster National 2014-22141509.\nS.C. thanks M. Feast, B. Madore, C.F. McKee, J. Scargle, and G. Kovacs for helpful discussions.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n{The presence of renewable generation in power systems, especially solar generation, has increased rapidly in recent decades. Reference \\cite{SolarMart} reports that the global installed photovoltaic (PV) capacity reached 634 GW in 2019, having grown nearly 400-fold since 2000. The California Independent System Operator (ISO) estimates that renewable energy will contribute 50\\% of the power supply in California by 2030 \\cite{CAISO}. \n\nThe wide deployment of renewable generation\ndecreases greenhouse gas emissions but also brings great challenges to the reliability and resiliency of existing power systems. For example, at the substation level, the measurements of power consumption are net loads that mix different types of loads. The solar generation is behind-the-meter (BTM) and thus invisible to the power system operator. {Because of the stochastic nature and high volatility of renewable generation, the accurate estimation of generated energy is challenging.}\nEnergy disaggregation at the substation level (EDS)\naims to disaggregate each individual load\\footnote{{Generation is considered as a negative load in this paper.}} from the aggregate measurement. Accurate information about load consumption is crucial for power system planning and operations, such as hosting capacity evaluation \\cite{AMZ18,WDW19}, demand response and load dispatching \\cite{MB20,XGL20}, and load forecasting \\cite{WZCK18,SZWS19}.\n \n \n {The energy disaggregation problem at the household level (EDH) has been extensively studied, see, e.g., \n\\cite{KBNY10,KDM16,CZWHFJ19,HSLS16},\nalso under the terminology \\textit{non-intrusive load monitoring (NILM)} \\cite{KBNY10,KDM16,CZWHFJ19,HSLS16,H92,GAM15,MSW16}}. 
Electric appliances are typically single-state or multi-state devices, and the patterns of their power consumption are usually repeatable. The general procedure for EDH methods is to first collect historical power consumption for each individual appliance and learn patterns from these well-annotated data. Then EDH methods disaggregate the power consumption of each appliance from the aggregate data based on these patterns. In comparison, obtaining historical power consumption for each individual load at the substation level is more difficult, as the measurements at the substation level are highly aggregated from different types of loads. Even though the operator has information about the load types attached to a substation, whether a certain load is consuming\/generating energy in a certain time interval is often unclear. One example is the BTM solar generation.\n{Thus, the measurements at the substation usually contain multiple loads and are partially labeled. \n It is more challenging to learn distinctive load profiles\nin this situation than from measurements of individual loads. Moreover, the volatility of load and renewable generation often leads to significant estimation errors. \nHowever, to the best of our knowledge, there is no work that provides a confidence measure of the energy disaggregation results.\n\n{ This paper summarizes our recent results\nfor solving these two challenges. Given partially labeled training data, our work~\\cite{LYWW20} proposes a deterministic dictionary learning-based method to learn load patterns and disaggregate the aggregate measurements into individual loads in real-time.\n\nNote that \\cite{LYWW20} is a deterministic approach and therefore is unable to provide a confidence measure of the estimation results. To estimate the reliability of the disaggregation results,\nin \\cite{YW21}, we propose a probabilistic energy disaggregation approach based on Bayesian dictionary learning. }\n\n\n{The contributions of this paper are threefold: (1) We summarize our works~\\cite{LYWW20} and \\cite{YW21} for solving the ``partial label'' problem and modeling the uncertainty. (2) We compare these two methods with two other existing works in the experiments. (3) We provide more testing cases for these two methods in this paper. }\n\n\n{The remainder of this paper is organized as follows. Section II explains our partial label formulation. Section\nIII discusses our proposed deterministic approach to solve the issue of partial labels and introduces our proposed Bayesian method for modeling the uncertainty of disaggregation results. Section IV summarizes this paper.}\n\n\n\n\n\n\n\n\n\n\n\\section{Problem Formulation}\n\n{A substation is connected to $C$ ($C>1$) types of loads in total.\nLet $\\bm{x}\\in \\mathbb{R}^P$ denote the aggregate measurement with window length $P$. Let a binary vector $\\bm{y}= [y^1, y^2,..., y^C] \\in \\{0,1\\}^C$ denote the load existence in $\\bm{x}$. For example,\nwhen $C=3$, $\\bm{y}=[0,0,1]^T$ means that only load 3 exists in $\\bm{x}$.}\n\n{In our paper \\cite{LYWW20}, we propose a ``partial label formulation'' where the operator only knows partial entries in $\\bm{y}$. The partial labels can be obtained by designing a load detector for each load separately \\cite{HCQ19,ZG16} or from engineering experience. As described in \\cite{LYWW20}, annotating partial labels has a lower cost in manpower and communication burden than annotating all the labels. 
Moreover, if a detector fails to identify some loads \\cite{HD18}, we can only obtain partial labels. \n\n\nLet $\\bar{\\bm{X}}=[\\bar{\\bm{x}}_1,\\bar{\\bm{x}}_2,...,\\bar{\\bm{x}}_N ] \\in \\mathbb{R}^{P \\times N}$ denote $N$ measurements. $\\bar{\\bm{x}}_i$ denotes the data at the $i$th time window. $\\bm{y}_i \\in \\{0,1\\}^C$ denotes the labels in $\\bar{\\bm{x}}_i$. Let the label matrix $\\bm{Y}=[\\bm{y}_1, \\bm{y}_2,...,\\bm{y}_N]$ denote all the labels in $\\bar{\\bm{X}}$. Let $\\Omega$ denote the indices of known entries in $\\bm{Y}$. $\\bm{Y}_\\Omega$ denotes all the known partial labels. In the above example, if one only knows that $\\bar{\\bm{x}}$ contains load 3 and does not know whether the other two loads exist or not, then the corresponding $\\bm{Y}_\\Omega$ is $[?, ?, 1]^T$, where $?$ denotes that one does not know whether the corresponding load exists or not.\n\nFig. 1 illustrates our partial label formulation. The aggregate data are aggregated from two industrial loads and one solar generation. Each subfigure shows patterns of the aggregate data and the corresponding individual loads at the same time interval. {In all these four cases, the label is $[?, ?, 1]^T$, indicating that load 3 always exists, while the existence of loads 1 and 2 is unknown. \n\n Given the training dataset $\\bar{\\bm{X}}$, the corresponding partial label matrix $\\bm{Y}_\\Omega$ and an aggregate measurement $\\hat{\\bm{x}}\\in \\mathbb{R}^P$, the objective of this paper is to: (1) learn distinctive patterns of individual loads from $\\bar{\\bm{X}}$ and disaggregate\n $\\hat{\\bm{x}}$, and (2) characterize the uncertainty of the disaggregation results.\n\n}\n\n\n \n \n \n \n\n \\begin{figure}[!ht] \n \\centering\n \t\\includegraphics[width=0.45\\textwidth]{.\/Figures\/PartialLabel.pdf} \n \t\\caption{{An example of the partial label formulation. There are three load types in the aggregate data. All four aggregate data have the same partial label $[?,?,1]^T$. The aggregate data contain (a) all three loads; (b) loads 1 and 3; (c) loads 2 and 3; (d) only load 3.} \\cite{LYWW20}} \\label{fig1}\n \\end{figure}\n \n \n \n \n\\section{Methodology}\n\n{In this section, we present our two model-free approaches based on deterministic dictionary learning \\cite{LYWW20} and Bayesian dictionary learning \\cite{YW21}, respectively.\n\\subsection{Deterministic Energy Disaggregation}\n\nTo learn patterns of each individual load from the given training data $\\bar{X}$, we formulate\na deterministic dictionary learning problem,\n \\begin{align} \n\\min_{ A, D } f(A,D) &=\\lVert \\bar{X}- \\Sigma_{i = 1}^C D_i A_i \\rVert_F^2+\\Sigma_{i = 1}^C \\lambda_i \\Sigma_{j: i \\notin y_j } \\lVert A_i^{j} \\rVert \\nonumber\\\\\n & + \\lambda_D \\text{Tr}(D\\Theta D^\\intercal) \\label{eqn:dictionary} \\\\\n\\text{s.t. }& \\lVert d^{m } \\rVert_2 \\leq 1, d^{m} \\geq 0, m=1, \\cdots, K \\label{cons} \\\\\n& c_i A_i \\geq 0, \\forall i \\label{eqn:positive} \n\\end{align} \nwhere $D_i\\in \\mathbb{R}^{P\\times K_i}$ denotes the dictionary for load $i$, and $A_i \\in \\mathbb{R}^{K_i \\times N}$ denotes the corresponding coefficients of load $i$. $A_i^j$ is the $j$th column in $A_i$. $A=[A_1; A_2; \\cdots; A_C]\\in \\mathbb{R}^{K \\times N}$ is the matrix that contains all coefficients. $D = \\begin{bmatrix}\nD_1, D_2, \\cdots, D_C\n\\end{bmatrix} \\in \\mathbb{R}^{P \\times K }$ is the matrix that contains all dictionaries. $d^m$ is the $m$th column in $D$. $K = \\Sigma_{i=1}^C K_i $. 
$\\text{Tr}(\\cdot)$ is the trace operator, and $D^\\intercal$ represents the transpose of $D$. $\\lambda_i$ and $\\lambda_D$ are pre-defined hyper-parameters. }\n\n \\begin{figure}[!ht] \n \\centering\n \t\\includegraphics[width=0.48\\textwidth]{.\/Figures\/Dictionary1.pdf} \n \t\\caption{ {The dictionary representation in Fig. \\ref{fig1}.\n The coefficients\t$A_1$ and $A_2$ are column-sparse. \\cite{LYWW20}}}\n \t\\label{diction}\n \\end{figure}\n\n \\begin{figure*}[!ht] \n \\centering\n \t\\includegraphics[width=0.8\\textwidth]{.\/Figures\/figure1.pdf} \n \n \\caption{{An illustrative framework of the proposed Bayesian method. In the training stage, our method learns the posterior distribution of atom labels, coefficients and load labels from the training data. In the testing stage, the method samples learned distributions of the dictionary and learns the posterior distribution for atom labels, coefficients and load labels, respectively, for test data. \\cite{YW21}}}\n \t\\label{fig3}\n \n \\label{fig_example}\n \\end{figure*}\n \n{The first term $\\lVert \\bar{X}- \\Sigma_{i = 1}^C D_i A_i \\rVert_F^2$ is the standard reconstruction error in dictionary learning. It measures the mismatch between the original data and their reconstruction from the learned dictionaries and coefficients. $\\sum_{j: i \\notin y_j} \\lVert A_i^{j} \\rVert$ is the column sparsity regularizer. The motivation of using this regularization is illustrated in Fig. \\ref{diction}, which shows the dictionary representation of Fig. \\ref{fig1}.\nBecause the training data only carry the partial label for load 3, loads 1 and 2 may not exist in a given training sample. Therefore, we impose column sparsity on $A_1$ and $A_2$ to promote their group sparsity. \n \nThe incoherence term $\\text{Tr} (D\\Theta D^\\intercal) $ is defined as\n\n\\begin{align}\\label{eqn:trace}\n\\text{Tr} (D\\Theta D^\\intercal) & = \\Sigma_{m = 1}^{K} \\Sigma_{p =1}^{K } \\theta_{mp} (d^m)^\\intercal d^{p}.\n\\end{align} \nThe $(m,p)$th entry $\\theta_{m p}$ in the weight matrix $\\Theta \\in R^{K \\times K}$ is $0$ if $d^m$ and $d^p$ are in the same dictionary and $1$ otherwise. The incoherence term promotes a discriminative dictionary such that $D_i$ and $D_j$ are as different as possible. Discriminative dictionaries enhance the disaggregation performance.}\n\n\n\n\n \n {Given aggregate test data $\\hat{x}$, we aim to disaggregate the measurement into individual loads, where the estimate of load $c$ is denoted by $\\hat{x}^c$. The objective function in the testing stage can be written as\n\\begin{align} \\label{w} \n \\min_{w \\in \\mathbb{R}^q } & \\lVert \\hat{x} - \\hat{D} \\tilde{A} w \\rVert_2 + \\mu \\lVert w \\rVert_1, \n \\end{align} \nwhere we select a submatrix $\\tilde{A}=[\\tilde{A}_1;\\cdots; \\tilde{A}_C] \\in \\mathbb{R}^{K \\times q}$ from $\\hat{A}$. $\\hat{D}$ and $\\hat{A}$ denote the solution of the training problem \\eqref{eqn:dictionary}. $\\mu$ is a pre-defined hyper-parameter. The intuition is that some load combinations are repetitive in the training data. We can select some representative combinations and disaggregate the aggregate measurement with respect to these combinations to improve the disaggregation accuracy. Let $\\hat{w}$ be the solution to \\eqref{w}, then the estimated load for load $c$ is $\\hat{x}^c= \\hat{D}_c \\tilde{A}_c \\hat{w}$. }
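\n\nThe testing-stage problem \\eqref{w} is a lasso-type program and can be prototyped with off-the-shelf solvers. The sketch below is illustrative only: the dictionary, the representative coefficient combinations, and the penalty weight are random stand-ins rather than quantities learned in our experiments, and scikit-learn's \\texttt{Lasso} uses a squared, sample-averaged data-fit term rather than the unsquared norm in \\eqref{w}.\n\\begin{verbatim}\nimport numpy as np\nfrom sklearn.linear_model import Lasso\n\nrng = np.random.default_rng(0)\nP, C, Kc, q = 96, 3, 10, 12      # window length, loads, atoms/load, combos\nK = C * Kc\n\nD_hat = np.abs(rng.normal(size=(P, K)))    # stand-in learned dictionary\nA_tilde = np.abs(rng.normal(size=(K, q)))  # stand-in combinations from A_hat\nx_hat = np.abs(rng.normal(size=P))         # stand-in aggregate test window\n\n# Solve min_w ||x - D A w|| + mu ||w||_1 over the selected combinations.\nlasso = Lasso(alpha=0.1, fit_intercept=False, positive=True, max_iter=10000)\nlasso.fit(D_hat @ A_tilde, x_hat)\nw_hat = lasso.coef_\n\n# Reconstruct load c from its dictionary block: x^c = D_c A~_c w.\nc = 0\nx_c = D_hat[:, c*Kc:(c+1)*Kc] @ (A_tilde[c*Kc:(c+1)*Kc, :] @ w_hat)\nprint(x_c.shape, np.flatnonzero(w_hat))\n\\end{verbatim}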
\n\n\n\n \n \\subsection{Bayesian Energy Disaggregation}\n \n{ In \\cite{YW21}, we propose a Bayesian method to deal with partially labeled data and to provide a confidence measure of our disaggregation results. An overall framework is shown in Fig. \\ref{fig3}. Given the training data $\\bar{X}$ and partial labels $\\bm{Y}_\\Omega$, the proposed Bayesian method learns the posterior distribution of dictionaries and coefficients in the training stage. At the testing stage, the method learns the distributions of coefficients based on the learned distributions of dictionaries. The distribution of $\\hat{x}^c$ is then computed, where $\\hat{x}^c$ is the estimated power consumption of load $c$. The mean of the distribution of $\\hat{x}^c$ is used as the estimation of load $c$, and the covariance is computed to measure the uncertainty. \n\n\nThe proposed method is based on a hierarchical probabilistic model. The generative model for the aggregate data $\\bar{x}_i$ can be written as \n\n \\begin{equation} \\label{equation1}\n\\bar{x}_i = \\sum_{c=1}^{C}{D}_{c} \\bm{\\omega}_i^c+\\bm{\\epsilon}_i \n\\end{equation}\n\\begin{equation}\\label{equation2}\n\\bm{\\omega}_i^c = (\\bm{z}_i^c \\odot \\bm{s}_i^c)y_i^c\n\\end{equation}\nfor all $i=1,2,3,...,N$, $c=1,2,3,...,C$, \nwhere $\\bm{\\omega}_i^c \\in \\mathbb{R}^{K_c}$ is the coefficient vector for $D_c$, and $\\bm{\\epsilon}_i$ is the measurement noise. {In \\eqref{equation2}, $\\odot$ represents the element-wise product.} Let $\\bm{d}^c_k$ denote the $k$th column in the dictionary ${D}_{c}$. $\\bm{d}^c_k$ is sampled from a multivariate Gaussian distribution $\\mathcal{N}(\\bold{0},\\frac{1}{\\lambda_d} \\bm{I}_P)$, where $\\lambda_d$ is a pre-defined scalar, and $\\bm{I}_P$ is an identity matrix of size $P\\times P$. The noise $\\bm{\\epsilon}_i$ is sampled from the Gaussian $\\mathcal{N}(\\bold{0},\\frac{1}{\\gamma_\\epsilon} \\bm{I}_P)$. One can see from (\\ref{equation2}) that $\\bm{\\omega}_i^c$ is the element-wise product of $\\bm{z}_i^c$ and $\\bm{s}_i^c$, multiplied by $y_i^c$. $y_i^c$ is a binary variable sampled from a Bernoulli distribution, and $y_i^c=1$ indicates that load $c$ exists in $\\bar{x}_i$. $\\bm{z}_i^c$ is a binary vector. Let $\\bm{z}_{ik}^c$ denote the $k$th entry of $\\bm{z}_i^c$. $\\bm{z}_{ik}^c=1$ indicates that $\\bm{d}_k^c$ is used to represent $\\bar{x}_i$, and 0 otherwise. $\\bm{z}_{ik}^c$ is sampled from a Bernoulli distribution. Note that the Bayesian method is able to infer the actual dictionary size $K_c$ by gradually pruning the dictionary based on $\\bm{z}_i^c$ in the training stage. Therefore, the Bayesian method is not sensitive to the selection of the initial dictionary size.\n{$\\bm{s}_i^c$ is sampled from $\\mathcal{N}(\\bold{0},\\frac{1}{\\gamma_s^c} \\bm{I}_{K_c})$. We put Gamma priors on $\\gamma_s^c$ and $\\gamma_\\epsilon$, respectively. { The Gamma priors are conjugate priors for the precision of a Gaussian distribution. With conjugate priors, the conditional posterior distributions have analytical forms, which simplifies the updating process.}} \n\n Let $\\bm{\\Theta}$ denote all the latent variables. Given $\\bar{X}$ and partial labels $\\bm{Y}_\\Omega$, the objective is to obtain the posterior $P(\\bm{\\Theta}, \\bm{Y}_{\\bar{\\Omega}} |\\bar{X}, \\bm{Y}_\\Omega)$. 
From Bayes' theorem, \n\n\\begin{equation}\\label{eqn:bayes}\n P(\\bm{\\Theta}, \\bm{Y}_{\\bar{\\Omega}} |\\bar{X}, \\bm{Y}_\\Omega)=\\frac{P(\\bm{\\Theta}, \\bar{X}, \\bm{Y})}{P(\\bar{X},\\bm{Y}_{\\Omega})}\n \\end{equation}\nBecause computing \\eqref{eqn:bayes} directly is intractable, we use Gibbs sampling \\cite{I12} to compute the posterior distribution. Gibbs sampling sequentially samples from the conditional probability of one variable in $\\bm{\\Theta}$ and $\\bm{Y}_{\\bar{\\Omega}}$ while keeping all other variables fixed. These conditional distributions have closed-form expressions because of the conjugate priors, which leads to an efficient updating process. \n\n\nIn the testing stage, given the aggregate test data $\\hat{\\bm{x}}$, the goal of our approach is to estimate $\\hat{{x}}^c$. A similar probabilistic model for $\\hat{\\bm{x}}$ and $\\hat{{x}}^c$ is described as: \n\n\n\\begin{equation} \\label{xhat}\n \\hat{{x}} = \\sum_{c=1}^{C}{D}_{c} (\\bm{\\hat{z}^c} \\odot \\bm{\\hat{s}^c})\\hat{y}^c+\\hat{\\bm{\\epsilon}}\\\\\n\\end{equation}\n \n\\begin{equation}\\label{eqn:hatxc}\n {\\hat{{x}}^c} = {D}_c (\\bm{\\hat{z}^c} \\odot \\bm{\\hat{s}^c})\\hat{y}^c+\\frac{\\hat{\\bm{\\epsilon}}}{C}.\n \\end{equation}\nfor all $c=1,...,C$, $k=1,...,K_c$.\n\nThe dictionary atom $d_k^c$ is sampled from the learned distribution $p({d_k^c}|\\bar{X},\\bm{Y}_\\Omega)$ from the training stage. We also assume that $\\hat{y}^c$ and $\\hat{\\bm{z}}^c$ are sampled from Bernoulli distributions. $\\hat{\\bm{s}}^c$ is sampled from $\\mathcal{N}(\\bold{0},\\frac{1}{\\gamma_s^c} \\bm{I}_{K_c})$ and $\\hat{\\bm{\\epsilon}}$ is sampled from $\\mathcal{N}(\\bold{0},\\frac{1}{\\hat{\\gamma}_{\\epsilon}} \\bm{I}_P)$. Gibbs sampling is also employed for computing the probabilistic distributions of $\\hat{y}^c$, $\\hat{\\bm{z}}^c$, $\\hat{\\bm{s}}^c$, and $\\hat{\\gamma}_\\epsilon$. \n\n\n{The per-iteration computational complexity of the Bayesian offline training is $\\mathcal{O}(CK_cPN)$. The per-iteration computational complexity of the online testing is $\\mathcal{O}(CK_cP)$. Thus, the computational complexity scales linearly with respect to the number of loads. }\n\n \\subsection{Uncertainty Modeling}\n Equipped with all learned posterior distributions, we then estimate the distribution of $\\hat{x}^c$. However, it is intractable to obtain an explicit expression for the distribution of $\\hat{x}^c$. Monte-Carlo integration \\cite{PBJ12} is employed to approximately compute the predictive mean and predictive variance.\n\n \n \nDefine\n\\begin{equation}\nf (\\bm{\\Psi}) = {D}_c (\\hat{\\bm{z}}^c \\odot \\hat{\\bm{s}}^c)\\hat{y}^c\n\\end{equation}\nwhere $\\bm{\\Psi}=\\{{D}_c, \\hat{\\bm{z}}^c, \\hat{\\bm{s}}^c, {\\hat{y}^c}, \\hat{\\gamma}_\\epsilon\\}$.\nThe predictive mean of ${\\hat{{x}}^c}$ is computed by\n\\begin{equation} \\label{predictmean}\n\\begin{split}\nE[{\\hat{{x}}^c}]\n &\\approx\\frac{1}{L}\\sum_{l=1}^{l=L} f (\\bm{\\Psi}^l)\n\\end{split}\n\\end{equation}\nwhere $L$ is the number of Monte-Carlo samples. {More Monte-Carlo samples increase the estimation accuracy, at the cost of a higher computational burden. Our experiments show that $50$ Monte-Carlo samples suffice to provide accurate estimations of the predictive mean and the predictive variance.} $\\bm{\\Psi}^l$ is sampled from the learned distributions of the variables in $\\bm{\\Psi}$. $E[{\\hat{{x}}^c}]$ is then used as the estimation of the power consumption of load $c$. 
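\n\nAs a minimal illustration of the Monte-Carlo estimate in \\eqref{predictmean}, and of the predictive covariance and uncertainty index introduced below, the following Python sketch draws $L$ samples of $f(\\bm{\\Psi})$. The distributions and dimensions here are placeholders, not the posteriors learned by B-EDS, and the noise term of the covariance is omitted.\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\nP, Kc, L = 96, 10, 50        # window length, atoms for load c, MC samples\n\ndef sample_f():\n    # Stand-ins for the learned posteriors of D_c, z^c, s^c and y^c.\n    D_c = np.abs(rng.normal(size=(P, Kc)))\n    z = rng.binomial(1, 0.3, size=Kc)\n    s = rng.normal(size=Kc)\n    y = rng.binomial(1, 0.9)\n    return D_c @ (z * s) * y              # f(Psi) = D_c (z . s) y\n\nsamples = np.stack([sample_f() for _ in range(L)])\nmean_xc = samples.mean(axis=0)            # predictive mean E[x^c]\ncov_xc = np.cov(samples, rowvar=False)    # sample part of Var[x^c]\nU_c = np.linalg.svd(cov_xc, compute_uv=False).sum()  # uncertainty index U_c\nprint(mean_xc.shape, round(U_c, 2))\n\\end{verbatim}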
\n\nThe predictive covariance is approximated by\n\n\\begin{equation} \\label{predictvariance}\n\\begin{split}\n \\textrm{Var}[{\\hat{{x}}^c}] \n =&E[{\\hat{x}^c}{\\hat{x}^c}{}^T]-{E[{\\hat{x}^c}]}{E[{\\hat{x}^c}]}^T \\\\\n \\approx &\\frac{\\bm{I}_P}{LC}\\sum_{l=1}^{l=L} \\frac{1}{\\hat{\\gamma}_{\\epsilon}^{l}} +\\frac{1}{L}\\sum_{l=1}^{l=L} f (\\bm{\\Psi}^l){ f (\\bm{\\Psi}^l)}^T \\\\ -&(\\frac{1}{L}\\sum_{l=1}^{l=L} f (\\bm{\\Psi}^l))(\\frac{1}{L}\\sum_{l=1}^{l=L} {f (\\bm{\\Psi}^l)}^T) \n\\end{split}\n\\end{equation}\n\n\n\\noindent Let $\\sigma_i$ ($i=1,..., P$) denote all the singular values of $\\textrm{Var}[{\\hat{{x}}^c}]$.\n\nThe uncertainty index $U_c$ for individual load $c$ and the uncertainty index $U_{\\textrm{all}}$ for the total estimated loads are computed as\n\\begin{equation}\\label{Uncertaintyindex1}\nU_c = \\Sigma_{i = 1}^P \\sigma_i\n\\end{equation}\n\\begin{equation}\\label{Uncertaintyindex2}\nU_{\\textrm{all}} = \\Sigma_{c = 1}^C U_c\n\\end{equation}\nThe intuition is that a large variance indicates higher uncertainty in the estimation. The uncertainty index is able to characterize the confidence level of the disaggregation results. \n\n }\n\n \n\\section{Numerical Experiment}\n{ The performance of the proposed methods is evaluated on a partially labeled dataset. The dataset contains two industrial loads and one solar generation. $N=360$ training samples and $M=300$ testing samples are generated. Even though the generated training samples contain up to three loads, each sample is annotated with only one label. The testing samples also contain up to three loads and have no label. In the following experiments, $\\gamma$ represents the percentage of the training data that measure individual loads. For example, $\\gamma=50\\%$ denotes that $50\\%$ of the training data labeled as load $c$ contain pure load $c$, while the remaining $50\\%$ also contain other loads. \n\n} \n\n\n{ \\subsubsection{Error Metrics} Several metrics are employed to compute the disaggregation error. The standard Root Mean Square Error (RMSE) \\cite{RQFKT17, PGWRA12} is defined as \n\\begin{align}\n\\text{RMSE}_c & = \\sqrt{\\frac{\\Sigma_{i=1}^{M} \\lVert \\hat{x}^c_i -x^c_i \\rVert_2^2}{P \\times M} } \n\\end{align}\nwhere $\\hat{x}^c_i, x^c_i \\in \\mathbb{R}^P$ are the estimated and the ground-truth load $c$ in the $i$th testing sample, respectively.\n \nA new Total Error Rate (TER) is proposed to compute the disaggregation error of all the loads as follows,\n\\begin{align}\n\\text{TER} = & \\frac{ \\Sigma_{i = 1}^{M} \\Sigma_{c =1}^C \\min( \\lVert \\hat{x}^c_i - x_i^c \\rVert_1 , \\lVert {x}^c_i \\rVert_1 ) }{ \\Sigma_{i= 1}^{M} \\Sigma_{c = 1}^C \\lVert x^c_i \\rVert_1 } \n\\end{align} \n\n \n \n \n\nThe Weighted Root Mean Square Error (WRMSE) is proposed to take the uncertainty index into account. The weighted average disaggregation error is computed as \n\n\n\\begin{equation}\\label{eqn:WRMSE}\n{\\text{WRMSE}_c = \\sqrt{\\frac{\\Sigma_{i=1}^{M} \\frac{\\lVert \\hat{x}^c_i -{x}^c_i \\rVert_2^2}{U_c(\\hat{x}^c_i)} }{P\\Sigma_{i=1}^{M} \\frac{1}{U_c(\\hat{x}^c_i)} } } }\n\\end{equation}\n\n\\noindent where $U_c(\\hat{x}^c_i)$ denotes the uncertainty index of $\\hat{x}^c_i$. A larger $U_c(\\hat{x}^c_i)$ represents a less reliable estimation. If the estimated loads with higher disaggregation errors are accompanied by larger uncertainty indices, the RMSE$_c$ could be much larger than WRMSE$_c$. 
The scenario that RMSE$_c$ is much larger than WRMSE$_c$ indicates that unreliable estimation results are correctly flagged by higher uncertainty indices.\n \n \n\\subsubsection{Methods} Our deterministic EDS method in \\cite{LYWW20} is abbreviated as ``D-EDS.'' Our Bayesian EDS method in \\cite{YW21} is abbreviated as ``B-EDS.'' Two other existing methods are employed for comparison. The work in \\cite{KBNY10} that is based on discriminative sparse coding is abbreviated as ``DDSC,'' and the work \\cite{RQFKT17} based on sum-to-k matrix factorization is abbreviated as ``sum-to-k''. Because we set the number of Monte-Carlo samples to $L=50$ in our method B-EDS, the results of D-EDS, DDSC and sum-to-k are averaged over 50 runs for\na fair comparison. The comparisons of the disaggregation performance of B-EDS, D-EDS, DDSC and sum-to-k are shown in Table \\ref{table1}, with $\\gamma=70\\%$. Note that existing works such as the DDSC and sum-to-k methods require fully labeled data to obtain accurate estimation.\nDirectly applying the existing methods\nto partially labeled data leads to a low disaggregation accuracy. The two proposed approaches, B-EDS and D-EDS, are designed for partially labeled data and can achieve state-of-the-art disaggregation performance.\nBetween these two methods, the disaggregation accuracy of B-EDS is slightly better. Moreover, one can see from Table I that the WRMSE$_c$ is much smaller than the corresponding RMSE$_c$. As we discussed above, this means that those estimations with larger disaggregation errors also have large uncertainty indices. This validates the effectiveness of applying the proposed uncertainty index to measure the reliability of the disaggregation results. \n\n\nThe major advantage of B-EDS over D-EDS is that B-EDS is able to measure the confidence level of the disaggregation results through the uncertainty index. We provide five case studies to verify the performance of the uncertainty modeling of B-EDS.\n\n\n\\begin{itemize}\n\\item Case 1: we select test data from the testing dataset used in Table I; this test data contains all three types of loads.\n\n\\item Case 2: the test data is the same as in Case 1, with additional Gaussian noise $\\mathcal{N}(0,4^2)$ added to each entry. \n\n\\item Case 3: the test data is the same as in Case 1, with additional Gaussian noise $\\mathcal{N}(0,6^2)$ added to each entry. \n\n\\item Case 4: the test data only contains one solar generation, but the pattern of the solar generation is different from the solar patterns in the training data.\n\n\\item Case 5: the test data contains the same loads 1 and 2 as in Case 1, as well as a solar generation with a pattern different from the solar patterns in the training data. \n\\end{itemize}\n\n\nFigs.~\\ref{fig_4} and ~\\ref{fig_5} show the disaggregation performance of D-EDS on Cases 1 and 4. \nThe aggregate measurement is shown in (a), and the disaggregation results are shown in (b)-(d) in both figures. In Case 1, the disaggregation results of D-EDS follow the actual load patterns. In Case 4, the disaggregation result of the solar generation does not follow the actual solar pattern, because it differs from the patterns learned from the training data. In both cases, the disaggregation results contain some errors. That motivates using the Bayesian approach to compute a probabilistic distribution of load consumption rather than computing one deterministic estimation. 
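\n\nFor completeness, the error metrics reported in Table \\ref{table1} can be prototyped in a few lines of Python. The arrays below are random stand-ins for the estimated and ground-truth loads, and the uniform uncertainty indices are placeholders; only the metric definitions follow the equations above.\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\nM, C, P = 300, 3, 96\nx_true = np.abs(rng.normal(size=(M, C, P)))        # ground truth (stand-in)\nx_est = x_true + 0.1 * rng.normal(size=(M, C, P))  # estimates (stand-in)\nU = np.ones((M, C))                                # uncertainty indices\n\n# RMSE_c over all M test windows:\nrmse = np.sqrt(((x_est - x_true) ** 2).sum(axis=(0, 2)) / (P * M))\n\n# Total Error Rate (TER), capping each window's error at the load's energy:\nnum = np.minimum(np.abs(x_est - x_true).sum(axis=2),\n                 np.abs(x_true).sum(axis=2))\nter = num.sum() / np.abs(x_true).sum()\n\n# WRMSE_c, down-weighting windows with large uncertainty index:\nw = 1.0 / U\nwrmse = np.sqrt((w * ((x_est - x_true) ** 2).sum(axis=2)).sum(axis=0)\n                / (P * w.sum(axis=0)))\nprint(rmse.round(3), round(float(ter), 3), wrmse.round(3))\n\\end{verbatim}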

Fig.~\ref{fig_uncertainty} shows the disaggregation performance of B-EDS on these five cases, and Table~\ref{table_uncertainty} shows the corresponding uncertainty indices. Each subfigure in Fig.~\ref{fig_uncertainty} plots the ground-truth load, the estimated load, and the $99.7\%$ confidence interval of the estimated load. One can see that in Cases 1-3, although there are some errors in the disaggregation results, the ground-truth loads lie within the confidence interval. Moreover, the estimation errors increase slightly as the noise level increases. Correspondingly, Table~\ref{table_uncertainty} shows that the total uncertainty indices in Cases 1-3 also increase with the noise level, which indicates the effectiveness of using the uncertainty index to characterize the uncertainty in the estimation. In Case 4 and Case 5, because the patterns of the solar generation are far from those in the training data, the ground-truth load consumption may not fall inside the confidence interval (especially in Case 5); the uncertainty indices in Table~\ref{table_uncertainty} also increase significantly, indicating that the estimated results are less reliable in these cases. Users can therefore rely on the uncertainty index to evaluate the accuracy of the disaggregation results. Table~\ref{table_uncertainty} also compares the TER of B-EDS and D-EDS, and B-EDS has a smaller disaggregation error than D-EDS.

\begin{figure*}[!ht]
 \centering
 \subfigure[]{\includegraphics[width=0.24\textwidth]{./Figures/netFig1.png}}
 \subfigure[]{\includegraphics[width=0.24\textwidth]{./Figures/Case21Fig1.png}}
 \subfigure[]{\includegraphics[width=0.24\textwidth]{./Figures/Case21Fig2.png}}
 \subfigure[]{\includegraphics[width=0.24\textwidth]{./Figures/Case21Fig3.png}}
 \caption{Disaggregation performance of D-EDS on Case 1. (a) Net load. (b) The ground-truth and disaggregated load 1. (c) The ground-truth and disaggregated load 2. (d) The ground-truth and disaggregated solar.}\label{fig_4}
\end{figure*}

\begin{figure*}[!ht]
 \centering
 \subfigure[]{\includegraphics[width=0.24\textwidth]{./Figures/nettFig1.png}}
 \subfigure[]{\includegraphics[width=0.24\textwidth]{./Figures/Case23Fig1.png}}
 \subfigure[]{\includegraphics[width=0.24\textwidth]{./Figures/Case23Fig2.png}}
 \subfigure[]{\includegraphics[width=0.24\textwidth]{./Figures/Case23Fig3.png}}
 \caption{Disaggregation performance of D-EDS on Case 4. (a) Net load. (b) The ground-truth and disaggregated load 1. (c) The ground-truth and disaggregated load 2.
(d) The ground-truth and disaggregated solar.}\label{fig_5}
\end{figure*}

\begin{table}[]
\centering
\caption{Comparison of the disaggregation accuracy of the B-EDS, D-EDS, sum-to-k, and DDSC methods}
\label{table1}
\begin{tabular}{lllll}
 \hline & B-EDS & D-EDS & sum-to-k & DDSC \\
 \hline RMSE$_1$ & 6.20 & 6.62 & 13.17 & 22.77 \\
 \hline RMSE$_2$ & 5.19 & 6.34 & 11.35 & 23.86 \\
 \hline RMSE$_3$ & 5.82 & 4.65 & 10.70 & 13.49 \\
 \hline WRMSE$_1$ & 0.16 & - & - & - \\
 \hline WRMSE$_2$ & 0.13 & - & - & - \\
 \hline WRMSE$_3$ & 0.13 & - & - & - \\
 \hline TER & 8.97\% & 9.95\% & 20.61\% & 37.12\% \\
 \hline
\end{tabular}
\end{table}

\begin{figure*}[h!]
 \centering
 \subfigure[]{\includegraphics[width=0.18\textwidth]{./Figures/Case4Fig1.png}}
 \addtocounter{subfigure}{+2}
 \subfigure[]{\includegraphics[width=0.18\textwidth]{./Figures/Case5Fig1.png}}
 \addtocounter{subfigure}{+2}
 \subfigure[]{\includegraphics[width=0.18\textwidth]{./Figures/Case3Fig1.png}}
 \addtocounter{subfigure}{+2}
 \subfigure[]{\includegraphics[width=0.18\textwidth]{./Figures/Case10Fig1.png}}
 \addtocounter{subfigure}{+2}
 \subfigure[]{\includegraphics[width=0.184\textwidth]{./Figures/Case13Fig1.png}}

 \addtocounter{subfigure}{-12}
 \subfigure[]{\includegraphics[width=0.18\textwidth]{./Figures/Case4Fig2.png}}
 \addtocounter{subfigure}{+2}
 \subfigure[]{\includegraphics[width=0.18\textwidth]{./Figures/Case5Fig2.png}}
 \addtocounter{subfigure}{+2}
 \subfigure[]{\includegraphics[width=0.18\textwidth]{./Figures/Case3Fig2.png}}
 \addtocounter{subfigure}{+2}
 \subfigure[]{\includegraphics[width=0.18\textwidth]{./Figures/Case10Fig2.png}}
 \addtocounter{subfigure}{+2}
 \subfigure[]{\includegraphics[width=0.184\textwidth]{./Figures/Case13Fig2.png}}
 \addtocounter{subfigure}{-12}
 \subfigure[]{\includegraphics[width=0.18\textwidth]{./Figures/Case4Fig3.png}}
 \addtocounter{subfigure}{+2}
 \subfigure[]{\includegraphics[width=0.18\textwidth]{./Figures/Case5Fig3.png}}
 \addtocounter{subfigure}{+2}
 \subfigure[]{\includegraphics[width=0.18\textwidth]{./Figures/Case3Fig3.png}}
 \addtocounter{subfigure}{+2}
 \subfigure[]{\includegraphics[width=0.18\textwidth]{./Figures/Case10Fig3.png}}
 \addtocounter{subfigure}{+2}
 \subfigure[]{\includegraphics[width=0.184\textwidth]{./Figures/Case13Fig3.png}}
 \caption{Disaggregation performance of B-EDS on Cases 1-5. Each subfigure shows the ground-truth load, the disaggregated load, and the corresponding confidence interval. (a)-(c): the disaggregation results for Case 1. (d)-(f): the disaggregation results for Case 2, where the test data contain Gaussian noise $\mathcal{N}(0,4^2)$. (g)-(i): the disaggregation results for Case 3, where the test data contain Gaussian noise $\mathcal{N}(0,6^2)$. (j)-(l): the disaggregation results for Case 4, where the test data contain only solar generation whose pattern differs from the training data. (m)-(o): the disaggregation results for Case 5, where the test data contain three loads and the pattern of the solar generation differs from the training data.
}\label{fig_uncertainty}
\end{figure*}

\begin{table}[ht!]
\centering
\caption{The uncertainty indices and disaggregation accuracy on the five testing cases}
\label{table_uncertainty}
\begin{tabular*}{\hsize}{@{}@{\extracolsep{\fill}}c|ccccc@{}}
\hline & Case 1 & Case 2 & Case 3 & Case 4 & Case 5 \\
\hline U$_1$ & 243.72 & 280.35 & 201.52 & 0.058 & 160.89 \\
\hline U$_2$ & 116.07 & 101.24 & 215.23 & 0.060 & 394.41 \\
\hline U$_3$ & 249.89 & 257.78 & 287.23 & 703.48 & 440.26 \\
\hline U$_{\textrm{all}}$ & 609.69 & 639.37 & 703.98 & 703.60 & 788.44 \\
\hline B-EDS TER & 4.77\% & 5.10\% & 7.00\% & 6.77\% & 12.97\% \\
\hline D-EDS TER & 7.19\% & 8.86\% & 11.60\% & 11.01\% & 16.45\% \\
\hline
\end{tabular*}
\end{table}

The Bayesian method B-EDS has slightly better disaggregation performance than the deterministic approach D-EDS, and its major advantage is that it measures the confidence level of the disaggregation results. The deterministic approach, however, is much more computationally efficient. For the experiments in Table~\ref{table1}, B-EDS requires around 50 seconds of offline training and 4 seconds per testing sample, whereas D-EDS requires around 15 seconds of offline training and 0.9 seconds per testing sample. If users want to know the reliability of the estimation results, the Bayesian method should be selected; in contrast, if users need to disaggregate the aggregate measurement in real time with limited computational resources, the deterministic approach is the better option.

\section{Conclusion}
Energy disaggregation at substations with BTM solar generation has drawn increasing attention. Accurate energy disaggregation results are crucial for power system planning and operations, but collecting training data with full labels at the substation level is challenging. We therefore propose the concept of partially labeled data, which is applicable in practice and significantly reduces the burden of annotating data. This paper summarizes two new load disaggregation approaches: both the deterministic approach and the Bayesian approach achieve accurate disaggregation results on partially labeled data. Moreover, an uncertainty index is proposed to measure the reliability of the disaggregation results. To the best of our knowledge, this is the first work to provide an uncertainty measure for the energy disaggregation problem.

\section*{Acknowledgment}

This work was supported in part by NSF grant \#1932196, AFOSR FA9550-20-1-0122, and ARO W911NF-21-1-0255.

\bibliographystyle{IEEEtran}

","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\section{Introduction}
\label{sec:intro}

In quantum computers, quantum bits (qubits) \cite{nielsen2011quantum, scherer2019mathematics} are the elementary information carriers. In such a computer, quantum gates \cite{nielsen2011quantum, scherer2019mathematics, deutsch1989quantum} can manipulate arbitrary multipartite quantum states \cite{raussendorf2001one}, including arbitrary superpositions of the computational basis states, which are frequently also entangled. The logic gates of quantum computation are therefore considerably more varied than the logic gates of classical computation.
In addition, a quantum computer can solve certain problems exponentially faster than any classical computer \cite{deutsch1992rapid}, because exploiting the superposition principle and entanglement allows the computer to manipulate and store more bits of information than a classical computer.

In this paper we present a theoretical approach to realizing Hadamard and controlled-NOT (C-NOT) quantum logic gates, which form a universal set for quantum computation \cite{barenco1995elementary, shi2002both, boykin2000new}.
The important discovery and proof of a conserved excitation number operator of the AJC Hamiltonian \cite{omolo2017conserved} means that the dynamics generated by the AJC Hamiltonian is exactly solvable, as demonstrated in the polariton and anti-polariton qubit (photospin qubit) models in \cite{omolo2017polariton, omolo2019photospins}. The reformulation developed in \cite{omolo2017conserved, omolo2017polariton, omolo2019photospins} drastically simplifies exact solutions of the AJC model, and we apply it in the present work.

We define the quantum C-NOT gate as the gate that effects the unitary operation on two qubits which, in a chosen orthonormal basis in $\mathbb{C}^2$, acts as

\begin{equation}
|a\rangle|b\rangle\rightarrow|a\rangle|a\oplus b\rangle
\label{eq:theqcnot}
\end{equation}
where $|a\rangle$ is the control qubit, $|b\rangle$ is the target qubit, and $\oplus$ denotes addition modulo 2 \cite{barenco1995conditional, scherer2019mathematics, nielsen2011quantum}. The C-NOT gate transforms superposition into entanglement and thus acts as a measurement gate \cite{nielsen2011quantum, scherer2019mathematics, barenco1995conditional}, fundamental in performing algorithms in quantum computers \cite{knill2002introduction}. Transformation back to a separable state (product state) is realized by applying the C-NOT gate again; in this case, it is used to implement a Bell measurement on the two qubits \cite{braunstein1992maximal}.
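
As a compact numerical restatement of eq.~\eqref{eq:theqcnot}, the short Python sketch below (purely illustrative and independent of the cavity realisation developed later; NumPy is assumed) builds the C-NOT matrix from the truth table and verifies its action on the computational basis, including the creation of entanglement from a superposed control.

\begin{verbatim}
import numpy as np

ket = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 1.0])}

# C-NOT in the ordered basis {|00>,|01>,|10>,|11>}: |a>|b> -> |a>|a XOR b>
CNOT = np.zeros((4, 4))
for a in (0, 1):
    for b in (0, 1):
        CNOT[2 * a + (a ^ b), 2 * a + b] = 1.0

for a in (0, 1):
    for b in (0, 1):
        out = CNOT @ np.kron(ket[a], ket[b])
        assert np.allclose(out, np.kron(ket[a], ket[a ^ b]))

# A superposed control turns into a Bell state, and applying C-NOT
# again returns the separable input (C-NOT is its own inverse).
plus = (ket[0] + ket[1]) / np.sqrt(2)
bell = CNOT @ np.kron(plus, ket[0])
assert np.allclose(CNOT @ bell, np.kron(plus, ket[0]))
\end{verbatim}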

We note here that the JC model has been applied extensively in implementing C-NOT and Hadamard gate operations. Domokos \textit{et al.} (1995) \cite{domokos1995simple} showed that, using induced transitions between dressed states, it is possible to implement a C-NOT gate in which a cavity containing at most one photon is the control qubit and the atom is the target qubit. Later, Vitali \textit{et al.} (2001) \cite{vitali2001quantum} proposed a scheme for implementing a C-NOT gate between two distinct but identical cavities, acting as control and target qubits, respectively: by passing an atom prepared initially in the ground state consecutively through the two cavities, a C-NOT ($cavity\rightarrow{atom}$) and a C-NOT ($atom\rightarrow{cavity}$) are realised with the respective classical fields. Saif \textit{et al.} (2007) \cite{saif2007engineering} presented a study of quantum computing by engineering non-local universal quantum gates based on the interaction of a two-level atom with two modes of the electromagnetic field in a high-Q superconducting cavity; the two-level atom acted as the control qubit and the two-mode electromagnetic field served as the target qubit. In this letter, we apply an approach similar to that in \cite{saif2007engineering}, implementing a quantum C-NOT gate operation in which the target qubit is defined in a two-dimensional Hilbert space spanned by the state vectors $|\mu_1\rangle=|1_A,0_B\rangle$ and $|\mu_2\rangle=|0_A,1_B\rangle$. Here $|\mu_1\rangle$ expresses the presence of one photon in mode A when there is no photon in mode B, and $|\mu_2\rangle$ indicates that mode A is in the vacuum state and one photon is present in mode B. The control qubit in this scheme is a two-level atom. The important difference from the approach used in \cite{saif2007engineering} is the model: while the initial absolute atom-field ground state $|g,0\rangle$ in the AJC interaction is affected by the atom-cavity coupling, the ground state $|g,0\rangle$ in the JC model \cite{saif2007engineering} is not. A similar result was determined independently in \cite{omolo2019photospins}. Further, with a precise choice of interaction times in the AJC qubit state transition operations, defined in the AJC qubit subspace spanned by normalised but non-orthogonal basic qubit state vectors \cite{omolo2017polariton, omolo2019photospins}, C-NOT gate operations are realized between the two cavity modes.

The Hadamard gate, also known as the Walsh-Hadamard gate, is a single-qubit gate \cite{scherer2019mathematics, nielsen2011quantum}. The Hadamard transformation is defined as

\begin{equation}
\hat{H}=\frac{\hat{\sigma}_\textit{x}+\hat{\sigma}_\textit{z}}{\sqrt{2}}
\label{eq:mathhadamard}
\end{equation}
and it transforms the atomic computational basis states $|e\rangle(|0\rangle)$, $|g\rangle(|1\rangle)$ into diagonal basis states according to
\begin{eqnarray}
\hat{H}|e\rangle\rightarrow\frac{|e\rangle+|g\rangle}{\sqrt{2}}\quad;\quad\hat{H}|g\rangle\rightarrow\frac{|e\rangle-|g\rangle}{\sqrt{2}}\nonumber\\
\hat{H}|0\rangle\rightarrow\frac{|0\rangle+|1\rangle}{\sqrt{2}}\quad;\quad\hat{H}|1\rangle\rightarrow\frac{|0\rangle-|1\rangle}{\sqrt{2}}
\label{eq:mathhadamard1}
\end{eqnarray}
Vitali \textit{et al.} (2001) \cite{vitali2001quantum} showed that one-qubit operations can be implemented on qubits represented by two internal atomic states, since this amounts to applying suitable Rabi pulses. They demonstrated that the most practical way of implementing one-qubit operations on two Fock states is to send the atoms through the cavity; if the atom inside the cavity undergoes a $\frac{\pi}{2}$ pulse, one realizes a Hadamard-phase gate. Saif \textit{et al.} (2007) \cite{saif2007engineering} also showed that it is possible to realise a Hadamard operation through a controlled interaction between a two-mode high-Q electromagnetic cavity field and a two-level atom. In their approach, the two-level atom is the control qubit, whereas the target qubit is made up of the two modes of the cavity field, and precision of the gate operations is achieved by precise selection of the interaction times of the two-level atom with the cavity modes. In this paper, we show that a Hadamard operation in the AJC interaction is possible for a specified initial atomic state by setting a specific sum frequency and photon number in the anti-Jaynes-Cummings qubit state transition operation \cite{omolo2017polariton, omolo2019photospins}, noting that the interaction components of the anti-Jaynes-Cummings Hamiltonian generate state transitions.

The content of this paper is summarised as follows. Section \ref{sec:model} presents an overview of the theoretical model. Sections \ref{sec:qcgate} and \ref{sec:hadgatelogic} present the implementation of the quantum C-NOT and Hadamard gates in the AJC interaction, respectively. Finally, section \ref{sec:conclusion} contains the conclusion.
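
Eqs.~\eqref{eq:mathhadamard} and \eqref{eq:mathhadamard1} can be checked the same way. In the sketch below (again an illustrative NumPy check, with $|e\rangle\equiv|0\rangle$ and $|g\rangle\equiv|1\rangle$), the Hadamard matrix is built directly from the Pauli matrices.

\begin{verbatim}
import numpy as np

sx = np.array([[0.0, 1.0], [1.0, 0.0]])   # sigma_x
sz = np.array([[1.0, 0.0], [0.0, -1.0]])  # sigma_z
H = (sx + sz) / np.sqrt(2)                # Hadamard transformation

e = np.array([1.0, 0.0])                  # |e> (|0>)
g = np.array([0.0, 1.0])                  # |g> (|1>)

assert np.allclose(H @ e, (e + g) / np.sqrt(2))  # H|e> = (|e>+|g>)/sqrt(2)
assert np.allclose(H @ g, (e - g) / np.sqrt(2))  # H|g> = (|e>-|g>)/sqrt(2)
assert np.allclose(H @ H, np.eye(2))             # H is self-inverse
\end{verbatim}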

\section{The model}
\label{sec:model}
The quantum Rabi model of a quantized electromagnetic field mode interacting with a two-level atom is generated by the Hamiltonian \cite{omolo2017conserved}

\begin{equation}
\hat{H}_R=\frac{1}{2}\hbar\omega(\hat{a}^{\dagger}\hat{a}+\hat{a}\hat{a}^{\dagger})+\hbar\omega_0\hat{s}_z + \hbar\lambda(\hat{a}+\hat{a}^{\dagger})(\hat{s}_++\hat{s}_-)
\label{eq:rabi1}
\end{equation}
noting that the free field mode Hamiltonian is expressed in the symmetrized normal and anti-normal order form $\frac{1}{2}\hbar\omega(\hat{a}^{\dagger}\hat{a}+\hat{a}\hat{a}^{\dagger})$.
Here, $\omega$, $\hat{a}$, $\hat{a}^{\dagger}$ are the quantized field mode angular frequency, annihilation, and creation operators, while $\omega_0$, $\hat{s}_z$, $\hat{s}_+$, $\hat{s}_-$ are the atomic state transition angular frequency and operators. The Rabi Hamiltonian in eq.~\eqref{eq:rabi1} is expressed in a symmetrized two-component form \cite{omolo2017conserved, omolo2017polariton, omolo2019photospins}

\begin{equation}
\hat{H}_R=\frac{1}{2}(\hat{H}+\hat{\overline{H}})
\label{eq:rabi2}
\end{equation}
where $\hat{H}$ is the standard JC Hamiltonian, interpreted as a polariton qubit Hamiltonian of the form \cite{omolo2017conserved}

\begin{eqnarray}
\hat{H}&=&\hbar\omega\hat{N}+2\hbar\lambda\hat{A}-\frac{1}{2}\hbar\omega\quad;\quad\hat{N}=\hat{a}^{\dagger}\hat{a}+\hat{s}_+\hat{s}_- \nonumber\\
\hat{A}&=&\alpha\hat{s}_z+\hat{a}\hat{s}_++\hat{a}^{\dagger}\hat{s}_-\quad;\quad\alpha=\frac{\omega_0-\omega}{2\lambda}
\label{eq:pham1}
\end{eqnarray}
while $\hat{\overline{H}}$ is the AJC Hamiltonian, interpreted as an anti-polariton qubit Hamiltonian of the form \cite{omolo2017conserved}

\begin{eqnarray}
\hat{\overline{H}}&=&\hbar\omega\hat{\overline{N}}+2\hbar\lambda\hat{\overline{A}}-\frac{1}{2}\hbar\omega\quad;\quad\hat{\overline{N}}=\hat{a}\hat{a}^{\dagger}+\hat{s}_-\hat{s}_+\nonumber\\
\hat{\overline{A}}&=&\overline{\alpha}\hat{s}_z+\hat{a}\hat{s}_-+\hat{a}^{\dagger}\hat{s}_+\quad;\quad\overline{\alpha}=\frac{\omega_0+\omega}{2\lambda}
\label{eq:antpham1}
\end{eqnarray}
In eqs.~\eqref{eq:pham1} and \eqref{eq:antpham1}, $\hat{N}$, $\hat{\overline{N}}$ and $\hat{A}$, $\hat{\overline{A}}$ are the respective polariton and anti-polariton qubit conserved excitation number and state transition operators.

Following the physical property established in \cite{omolo2019photospins} that, for the field mode in an initial vacuum state, only an atom entering the cavity in the initial excited state $|e\rangle$ couples to the rotating positive frequency field component in the JC interaction mechanism, while only an atom entering the cavity in the initial ground state $|g\rangle$ couples to the anti-rotating negative frequency field component in the AJC interaction mechanism, we generally take the atom to be in the initial excited state $|e\rangle$ in the JC model and in the initial ground state $|g\rangle$ in the AJC model.
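
The symmetrized decomposition in eq.~\eqref{eq:rabi2} is easy to verify numerically. The sketch below (assuming NumPy, with $\hbar=1$; the Fock-space truncation dimension and parameter values are arbitrary illustrative choices) builds the operators of eqs.~\eqref{eq:rabi1}-\eqref{eq:antpham1} in a truncated Fock space. The identity holds exactly even under truncation, since both sides are built from the same matrices.

\begin{verbatim}
import numpy as np

D = 20                      # Fock-space truncation (illustrative)
w, w0, lam = 1.0, 1.2, 0.1  # field/atom frequencies and coupling

a = np.diag(np.sqrt(np.arange(1, D)), 1)   # annihilation operator
ad = a.conj().T
I_f, I_a = np.eye(D), np.eye(2)
sp = np.array([[0.0, 1.0], [0.0, 0.0]])    # s_+ in the basis (|e>, |g>)
sm = sp.T
sz = np.diag([0.5, -0.5])

kron = np.kron
N  = kron(ad @ a, I_a) + kron(I_f, sp @ sm)
Nb = kron(a @ ad, I_a) + kron(I_f, sm @ sp)
alpha, alphab = (w0 - w) / (2 * lam), (w0 + w) / (2 * lam)
A  = alpha  * kron(I_f, sz) + kron(a, sp) + kron(ad, sm)
Ab = alphab * kron(I_f, sz) + kron(a, sm) + kron(ad, sp)

H  = w * N  + 2 * lam * A  - 0.5 * w * np.eye(2 * D)   # JC
Hb = w * Nb + 2 * lam * Ab - 0.5 * w * np.eye(2 * D)   # AJC
HR = (0.5 * w * kron(ad @ a + a @ ad, I_a)
      + w0 * kron(I_f, sz) + lam * kron(a + ad, sp + sm))

assert np.allclose(HR, 0.5 * (H + Hb))     # H_R = (H + Hbar)/2
\end{verbatim}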

Considering the AJC dynamics, applying the state transition operator $\hat{\overline{A}}$ from eq.~\eqref{eq:antpham1} to the initial atom-field $n$-photon ground state vector $|g,n\rangle$, the basic qubit state vectors $|\psi_{gn}\rangle$ and $|\overline{\phi}_{gn}\rangle$ are determined in the form ($n=0,1,2,\ldots$) \cite{omolo2019photospins}

\begin{equation}
|\psi_{gn}\rangle=|g,n\rangle\quad;\quad|\overline{\phi}_{gn}\rangle=-\overline{c}_{gn}|g,n\rangle+\overline{s}_{gn}|e,n+1\rangle
\label{eq:entsuptate}
\end{equation}
with dimensionless interaction parameters $\overline{c}_{gn}$, $\overline{s}_{gn}$ and Rabi frequency $\overline{R}_{gn}$ defined as

\begin{eqnarray}
\overline{c}_{gn}&=&\frac{\overline{\delta}}{2\overline{R}_{gn}}\quad;\quad\overline{s}_{gn}=\frac{2\lambda\sqrt{n+1}}{\overline{R}_{gn}}\quad;\quad\overline{R}_{gn}=2\lambda{\overline{A}_{gn}}\nonumber\\
\overline{A}_{gn}&=&\sqrt{(n+1)+\frac{\overline{\delta}^2}{16\lambda^2}}\quad;\quad\overline{\delta}=\omega_0+\omega
\label{eq:parameters}
\end{eqnarray}
where we have introduced the sum frequency $\overline{\delta}=\omega_0+\omega$ to redefine $\overline{\alpha}$ in eq.~\eqref{eq:antpham1}.

The qubit state vectors in eq.~\eqref{eq:entsuptate} satisfy the qubit state transition algebraic operations

\begin{equation}
\hat{\overline{A}}|\psi_{gn}\rangle=\overline{A}_{gn}|\overline{\phi}_{gn}\rangle\quad;\quad\hat{\overline{A}}|\overline{\phi}_{gn}\rangle=\overline{A}_{gn}|\psi_{gn}\rangle
\label{eq:traans}
\end{equation}
In the AJC qubit subspace spanned by the normalized but non-orthogonal basic qubit state vectors $|\psi_{gn}\rangle$, $|\overline{\phi}_{gn}\rangle$, the basic qubit state transition operator $\hat{\overline{\varepsilon}}_g$ and identity operator $\hat{\overline{I}}_g$ are introduced according to the definitions \cite{omolo2019photospins}

\begin{equation}
\hat{\overline{\varepsilon}}_g=\frac{\hat{\overline{A}}}{\overline{A}_{gn}}\quad;\quad\hat{\overline{I}}_g=\frac{\hat{\overline{A}}^2}{\overline{A}_{gn}^2}\quad\Rightarrow\quad\hat{\overline{I}}_g=\hat{\overline{\varepsilon}}_g^2
\label{eq:anttransop1}
\end{equation}
which on substitution into eq.~\eqref{eq:traans} generate the basic qubit state transition algebraic operations

\begin{eqnarray}
\hat{\overline{\varepsilon}}_g|\psi_{gn}\rangle&=&|\overline{\phi}_{gn}\rangle\quad;\quad\hat{\overline{\varepsilon}}_g|\overline{\phi}_{gn}\rangle=|\psi_{gn}\rangle\nonumber\\
\hat{\overline{I}}_g|\psi_{gn}\rangle&=&|\psi_{gn}\rangle\quad;\quad\hat{\overline{I}}_g|\overline{\phi}_{gn}\rangle=|\overline{\phi}_{gn}\rangle
\label{eq:algop11}
\end{eqnarray}
The algebraic properties $\hat{\overline{\varepsilon}}_g^{2k}=\hat{\overline{I}}_g$ and $\hat{\overline{\varepsilon}}_g^{2k+1}=\hat{\overline{\varepsilon}}_g$ readily give the final property \cite{omolo2019photospins}

\begin{equation}
e^{-i\theta\hat{\overline{\varepsilon}}_g}=\cos(\theta)\hat{\overline{I}}_g-i\sin(\theta)\hat{\overline{\varepsilon}}_g
\label{eq:antialgprop}
\end{equation}
which is useful in evaluating time-evolution operators.
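
The rotation property in eq.~\eqref{eq:antialgprop} is also straightforward to verify numerically, continuing the sketch above (same truncated operators and illustrative parameters, here with $n=0$; SciPy is assumed for the matrix exponential).

\begin{verbatim}
from scipy.linalg import expm

n = 0                                   # photon number of |g, n>
A_gn = np.sqrt((n + 1) + (w0 + w) ** 2 / (16 * lam ** 2))
eps = Ab / A_gn                         # basic qubit transition operator

# |psi_gn> = |g, n>: kron ordering (field, atom), atom basis (|e>, |g>),
# so the index of |g, n> is 2*n + 1.
psi = np.zeros(2 * D, dtype=complex)
psi[2 * n + 1] = 1.0
phi = eps @ psi                         # |phi_gn>, unit norm

theta = 0.7                             # arbitrary rotation angle
lhs = expm(-1j * theta * eps) @ psi
rhs = np.cos(theta) * psi - 1j * np.sin(theta) * phi
assert np.allclose(lhs, rhs)            # the rotation property above
\end{verbatim}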

The AJC qubit Hamiltonian defined within the qubit subspace spanned by the basic qubit state vectors $|\psi_{gn}\rangle$, $|\overline{\phi}_{gn}\rangle$ is then expressed in terms of the basic qubit state transition operators $\hat{\overline{\varepsilon}}_g$, $\hat{\overline{I}}_g$ in the form \cite{omolo2019photospins}

\begin{equation}
\hat{\overline{H}}_g=\hbar\omega\left(n+\frac{3}{2}\right)\hat{\overline{I}}_g+\hbar\overline{R}_{gn}\hat{\overline{\varepsilon}}_g
\label{eq:antijch2}
\end{equation}
\section{Quantum C-NOT gate operations}
\label{sec:qcgate}

In order to realise a C-NOT quantum gate operation in this context, we take a two-level atom as the control qubit, defined in a two-dimensional Hilbert space with $|e\rangle$ and $|g\rangle$ as basis vectors, where $|e\rangle$ denotes the excited state of the two-level atom and $|g\rangle$ the ground state. Two non-degenerate, orthogonally polarized cavity modes $C_A$ and $C_B$ make up the target qubit. The target qubit is defined in the two-dimensional Hilbert space spanned by the state vector $|\mu_1\rangle=|1_A,0_B\rangle$, which expresses the presence of one photon in mode A when there is no photon in mode B, and the state vector $|\mu_2\rangle=|0_A,1_B\rangle$, which indicates that mode A is in the vacuum state and one photon is present in mode B.

With reference to the AJC qubit state transition operation in eq.~\eqref{eq:antialgprop}, let us first consider an atom in the ground state $|g\rangle$ entering an electromagnetic cavity with mode A in the vacuum state and a single photon in mode B. The atom couples to the anti-rotating negative frequency component of the field mode, undergoing an AJC qubit state transition. The atom interacts with mode A for a time $t=\frac{\pi}{\overline{R}_{g0}}$, equal to half a Rabi oscillation period, so that the pulse area is $\overline{R}_{g0}t=2\lambda\overline{A}_{g0}t=\pi$. Redefining \cite{omolo2019photospins}

\begin{equation}
\overline{\alpha}=\frac{\overline{\delta}}{2\lambda}=\frac{\omega_0-\omega+2\omega}{2\lambda}=\frac{\delta}{2\lambda}+\frac{\omega}{\lambda}=\alpha+\frac{\omega}{\lambda}
\end{equation}
and considering a resonance case where $\delta=\omega_0-\omega=0$ with $\lambda\gg\omega$, $\overline{\alpha}$ becomes very small and $\overline{A}_{g0}=1$, as determined from eq.~\eqref{eq:parameters}, so that the rotation angle is $\theta=\lambda{t}=\frac{\pi}{2}$. The evolution of this interaction, determined by applying the AJC qubit state transition operation in eq.~\eqref{eq:antialgprop} and noting the definitions of $\hat{\overline{I}}_g$ and $\hat{\overline{\varepsilon}}_g$ \cite{omolo2019photospins} in eq.~\eqref{eq:anttransop1}, is of the form

\begin{equation}
e^{-i\theta\hat{\overline{\varepsilon}}_g}|g,0_A\rangle=\cos(\theta)|g,0_A\rangle-i\sin(\theta)|e,1_A\rangle
\label{eq:modeA}
\end{equation}
which reduces to

\begin{equation}
|g,0_A\rangle\rightarrow-i|e,1_A\rangle
\label{eq:flip1}
\end{equation}
We observe that the atom interacted with mode A and completed half of the Rabi oscillation; as a result, it contributed a photon to mode A and evolved to the excited state $|e\rangle$.
Now, after this interaction time, the atom enters mode B containing a single photon and interacts with the cavity mode as follows

\begin{equation}
-ie^{i\theta\hat{\overline{\varepsilon}}_e}|e,1_B\rangle=-i\cos(\theta)|e,1_B\rangle+\sin(\theta)|g,0_B\rangle
\label{eq:modeB}
\end{equation}
After an interaction with mode B for a time $t_1=2t$ such that $t_1=\frac{\pi(\overline{R}_{g0}+\overline{R}_{e1})}{\overline{R}_{g0}\overline{R}_{e1}}$, the driving field is modulated such that $\theta=\left(\frac{\overline{R}_{g0}\overline{R}_{e1}}{\overline{R}_{g0}+\overline{R}_{e1}}\right)t=\frac{\pi}{2}$, with $\overline{R}_{g0}=2\lambda{\overline{A}_{g0}}=2\lambda$ since $\overline{A}_{g0}=1$, and $\overline{R}_{e1}=2\lambda{\overline{A}_{e1}}=2\lambda$ since $\overline{A}_{e1}=1$; therefore $\theta=\lambda{t}=\frac{\pi}{2}$. The form of eq.~\eqref{eq:modeB} then results in the evolution

\begin{equation}
-i|e,1_B\rangle\rightarrow|g,0_B\rangle
\label{eq:flip2}
\end{equation}
The result in eq.~\eqref{eq:flip2} shows that the atom evolves to the ground state and absorbs the photon initially in mode B. The atom therefore performs a swapping of the electromagnetic field between the two field modes through the controlled interaction.

When the atom in the ground state $|g\rangle$ enters the electromagnetic cavity containing a single photon in mode A and mode B in the vacuum state, the atom and the field interact as follows

\begin{equation}
e^{-i\theta\hat{\overline{\varepsilon}}_g}|g,0_B\rangle=\cos(\theta)|g,0_B\rangle-i\sin(\theta)|e,1_B\rangle
\label{eq:modeB2}
\end{equation}
After an interaction with field mode B for a time $t=\frac{\pi}{\overline{R}_{g0}}$, equal to half a Rabi oscillation period, the pulse area is $\overline{R}_{g0}t=\pi$, with $\overline{R}_{g0}=2\lambda{\overline{A}}_{g0}=2\lambda$ since $\overline{A}_{g0}=1$, so that the rotation angle is $\theta=\lambda{t}=\frac{\pi}{2}$. The form of eq.~\eqref{eq:modeB2} results in the evolution

\begin{equation}
|g,0_B\rangle\rightarrow-i|e,1_B\rangle
\label{eq:flip3}
\end{equation}
The atom then enters mode A containing one photon and interacts as follows

\begin{equation}
-ie^{i\theta\hat{\overline{\varepsilon}}_e}|e,1_A\rangle=-i\cos(\theta)|e,1_A\rangle+\sin(\theta)|g,0_A\rangle
\label{eq:modeA2}
\end{equation}
After an interaction with the cavity mode for a time $t_1=2t$ such that $t_1=\frac{\pi(\overline{R}_{e1}+\overline{R}_{g0})}{\overline{R}_{e1}\overline{R}_{g0}}$, we obtain a driving field modulation $\theta=\left(\frac{\overline{R}_{e1}\overline{R}_{g0}}{\overline{R}_{e1}+\overline{R}_{g0}}\right)t=\frac{\pi}{2}$, with $\overline{R}_{e1}=2\lambda{\overline{A}_{e1}}=2\lambda$ since $\overline{A}_{e1}=1$ and $\overline{R}_{g0}=2\lambda{\overline{A}_{g0}}=2\lambda$ since $\overline{A}_{g0}=1$; therefore $\theta=\lambda{t}=\frac{\pi}{2}$. The form of eq.~\eqref{eq:modeA2} results in the evolution

\begin{equation}
-i|e,1_A\rangle\rightarrow|g,0_A\rangle
\label{eq:flip4}
\end{equation}
This shows that the atom evolves to the ground state and performs a field swapping by absorbing the photon in mode A.
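
The phase bookkeeping in eqs.~\eqref{eq:flip1}-\eqref{eq:flip4} is compact enough to check mechanically. In the standalone sketch below (assuming NumPy; each cavity passage is represented by the corresponding $2\times2$ rotation on the ordered pair $\{|g,n\rangle,|e,n+1\rangle\}$ of one mode, following eq.~\eqref{eq:antialgprop}), the two passages compose to a net amplitude of $+1$ for the target-qubit swap.

\begin{verbatim}
import numpy as np

theta = np.pi / 2          # rotation angle lambda*t for each passage

# Rm = exp(-i theta eps) for the first passage (initially empty mode),
# Rp = exp(+i theta eps) for the second passage (occupied mode),
# both on the ordered pair {|g,n>, |e,n+1>} of a single mode.
c, s = np.cos(theta), np.sin(theta)
Rm = np.array([[c, -1j * s], [-1j * s, c]])
Rp = np.array([[c, +1j * s], [+1j * s, c]])

# Control |g>, target |mu2> = |0_A,1_B>: mode A starts in |g,0_A>
# (amplitude picked up on |e,1_A>), then mode B starts in |e,1_B>
# (amplitude picked up on |g,0_B>).
amp = Rm[1, 0] * Rp[0, 1]
assert np.isclose(amp, 1.0)   # |g>|mu2> -> |g>|mu1> with unit phase

# Control |g>, target |mu1> = |1_A,0_B>: the atom meets the empty
# mode B first, then the occupied mode A; the amplitudes repeat.
assert np.isclose(Rm[1, 0] * Rp[0, 1], 1.0)

# Control |e>: the excited atom does not couple to a vacuum mode,
# so the target qubit is left unchanged.
\end{verbatim}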

When the atom in the excited state $|e\rangle$ enters mode A in the vacuum state, that is, for target qubit $|\mu_2\rangle$, the atom propagates as a free wave without coupling to the field mode in the vacuum state $|0\rangle$ \cite{omolo2019photospins}, leaving the cavity without altering the state of the cavity-field mode. A similar observation is made when the atom enters cavity B in the vacuum state in the case of target qubit $|\mu_1\rangle$.

From the results obtained, we conclude that the target qubit made up of the electromagnetic field remains unchanged if the control qubit, that is, the two-level atom, is initially in the excited state $|e\rangle$, while when the atom is in the ground state $|g\rangle$, the cavity states $|0\rangle$ and $|1\rangle$ flip. We shall refer to this gate as the AJC C-NOT $(atom\rightarrow{cavity})$ gate.

\subsection{Probability of success of the C-NOT gate}
\label{sec:cnotgate}
The success probability for the C-NOT gate is given by

\begin{equation}
P_s=1-(\sin^2(\theta_A)+\cos^2(\theta_A)\sin^2(\theta_B))
\label{eq:success1}
\end{equation}
In terms of the Rabi frequencies we write eq.~\eqref{eq:success1} as

\begin{equation}
P_s=1-(\sin^2(\overline{R}_A\Delta{t_A})+\cos^2(\overline{R}_A\Delta{t_A})\sin^2(\overline{R}_B\Delta{t_B}))
\label{eq:success2}
\end{equation}
For the case in which the atom is in the ground state $|g\rangle$ and enters an electromagnetic cavity with mode A in the vacuum state and a single photon in field mode B,

\begin{equation*}
\overline{R}_A=\overline{R}_{{g}{0}}=2\lambda{\overline{A}_{{g}{0}}}=2\lambda
\end{equation*}

\begin{equation*}
\Delta{t_A}=\frac{\pi}{\overline{R}_A}=\frac{\pi}{\overline{R}_{{g}{0}}}=\frac{\pi}{2\lambda}
\end{equation*}

\begin{equation*}
\overline{R}_B=\overline{R}_{{e}{1}}=2\lambda{\overline{A}_{{e}{1}}}=2\lambda
\end{equation*}

\begin{equation}
\Delta{t_B}=\frac{\pi}{2}\frac{(\overline{R}_A+\overline{R}_B)}{\overline{R}_A\overline{R}_B}=\frac{\pi}{2}\frac{(\overline{R}_{{g}{0}}+\overline{R}_{{e}{1}})}{\overline{R}_{{g}{0}}\overline{R}_{{e}{1}}}=\frac{\pi}{2\lambda}
\label{eq:first}
\end{equation}
Substituting eq.~\eqref{eq:first} into eq.~\eqref{eq:success2} we obtain

\begin{equation}
P_s=1-(\sin^2(\pi)+\cos^2(\pi)\sin^2(\pi))=1
\label{eq:success3}
\end{equation}
a unit probability of success.
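
This unit success probability hinges on hitting the interaction times exactly, and eq.~\eqref{eq:success2} is easily evaluated for small timing errors. A minimal sketch (assuming NumPy; the coupling value and the $1\%$ perturbation are arbitrary illustrative choices, and the symmetric case treated next yields the same timing values):

\begin{verbatim}
import numpy as np

lam = 1.0                    # coupling strength (arbitrary units)
R_A = R_B = 2 * lam          # Rabi frequencies, A_g0 = A_e1 = 1 at resonance

def success_probability(dt_A, dt_B):
    # P_s for interaction times dt_A, dt_B with the two modes.
    return 1 - (np.sin(R_A * dt_A) ** 2
                + np.cos(R_A * dt_A) ** 2 * np.sin(R_B * dt_B) ** 2)

t_exact = np.pi / (2 * lam)  # ideal passage time for both modes
print(success_probability(t_exact, t_exact))         # 1.0
print(success_probability(1.01 * t_exact, t_exact))  # slightly below 1
\end{verbatim}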

When the atom in the ground state $|g\rangle$ enters an electromagnetic cavity containing a single photon in mode A, and mode B in the vacuum state,

\begin{equation*}
\overline{R}_A=\overline{R}_{{e}{1}}=2\lambda{\overline{A}_{{e}{1}}}=2\lambda
\end{equation*}

\begin{equation*}
\Delta{t_A}=\frac{\pi}{2}\frac{(\overline{R}_A+\overline{R}_B)}{\overline{R}_A\overline{R}_B}=\frac{\pi}{2}\frac{(\overline{R}_{{e}{1}}+\overline{R}_{{g}{0}})}{\overline{R}_{{e}{1}}\overline{R}_{{g}{0}}}=\frac{\pi}{2\lambda}
\end{equation*}

\begin{equation*}
\overline{R}_B=\overline{R}_{{g}{0}}=2\lambda{\overline{A}_{{g}{0}}}=2\lambda
\end{equation*}

\begin{equation}
\Delta{t_B}=\frac{\pi}{\overline{R}_B}=\frac{\pi}{\overline{R}_{{g}{0}}}=\frac{\pi}{2\lambda}
\label{eq:second}
\end{equation}
Substituting eq.~\eqref{eq:second} into eq.~\eqref{eq:success2} we obtain

\begin{equation}
P_s=1-(\sin^2(\pi)+\cos^2(\pi)\sin^2(\pi))=1
\label{eq:success4}
\end{equation}
again a unit probability of success.

We observe that the success probabilities depend mainly upon the precise selection of the interaction times of the two-level atom with the successive cavity modes.

\section{Hadamard logic gate}
\label{sec:hadgatelogic}

To realise a Hadamard operation in the AJC interaction, we apply the qubit state transition operation in eq.~\eqref{eq:anttransop1} and its general form in \cite{omolo2017polariton, omolo2019photospins}. In this respect, we define the Hadamard operation at sum frequency $\overline{\delta}=4\lambda$ and $n=0$, specified for an initial atomic state $|g\rangle$, as

\begin{equation}
\hat{\overline{\varepsilon}}_g=\frac{1}{\overline{A}_{g0}}\left(2\hat{s}_z+\hat{a}\hat{s}_-+\hat{a}^\dagger\hat{s}_+\right)\hspace{5mm};\hspace{5mm}\overline{A}_{g0}=\sqrt{2}
\end{equation}
The initial atomic state $|g\rangle$ is rotated to

\begin{equation}
|g\rangle\rightarrow\frac{1}{\sqrt{2}}(|e\rangle-|g\rangle)
\label{eq:hadop1}
\end{equation}
Similarly, the Hadamard operation at sum frequency $\overline{\delta}=4\lambda$ and $n=1$, specified for an initial atomic state $|e\rangle$, is defined as \cite{omolo2019photospins}
\begin{equation}
\hat{\overline{\varepsilon}}_e=\frac{1}{\overline{A}_{e1}}\left(2\hat{s}_z+\hat{a}\hat{s}_-+\hat{a}^\dagger\hat{s}_+\right)\hspace{5mm};\hspace{5mm}\overline{A}_{e1}=\sqrt{2}
\end{equation}
The initial atomic state $|e\rangle$ is rotated to

\begin{equation}
|e\rangle\rightarrow\frac{1}{\sqrt{2}}(|e\rangle+|g\rangle)
\label{eq:hadop2}
\end{equation}
The Hadamard transformations in eqs.~\eqref{eq:hadop1} and \eqref{eq:hadop2}, realised in the AJC interaction process (AJC model), agree precisely with the standard definition in eq.~\eqref{eq:mathhadamard1}.
\section{Conclusion}
\label{sec:conclusion}
In this paper we have shown how to implement quantum C-NOT and Hadamard gates in the anti-Jaynes-Cummings interaction mechanism. We obtained ideal unit probabilities of success owing to the precise selection of interaction times during the C-NOT gate operations, and we realised efficient Hadamard operations through application of the respective AJC qubit state transitions.
\section*{Acknowledgment}

We thank the Department of Physics and Materials Science, Maseno University, for providing a conducive environment for this work.

\bibliographystyle{apsrev}

","meta":{"redpajama_set_name":"RedPajamaArXiv"}}