diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzcjgx" "b/data_all_eng_slimpj/shuffled/split2/finalzzcjgx" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzcjgx" @@ -0,0 +1,5 @@ +{"text":"\n\\section{Introduction} \\label{sec:intro}\n\nBrown dwarfs (BDs) are sub-stellar objects below the hydrogen burning limit ($\\lesssim$80\\;$\\mbox{$M_{\\rm Jup}$}$) but massive enough to fuse deuterium ($\\gtrsim$13\\;$\\mbox{$M_{\\rm Jup}$}$) \\citep{Spiegel_2011,Dieterich_2014}. After their formation, BDs cool radiatively and follow mass-luminosity-age relationships. The degeneracy in these parameters, especially in mass and age, plus assumptions about the initial conditions of BD formation, have long been a major difficulty in calibrating the evolutionary and atmospheric models for substellar objects \\citep{Burrows_1989,Baraffe_2003,Joergens_2006,Gomes_2012,Helling_2014,Caballero_2018}. BDs for which we can measure these parameters independently can benchmark the evolutionary models. As a result, BDs in multiple systems are especially important. Their ages can be determined from the characteristics of their host stars or associated groups, assuming coevality \\citep[e.g.][]{Seifahrt_2010,Leggett_2017}, and some of these BDs can have their masses measured dynamically \\citep[e.g.][]{Crepp+Johnson+Fischer+etal_2012,Crepp+Gonzales+Bechter+etal_2016,Dupuy+Liu_2017,Dieterich_2018,Brandt_2019,Brandt_2020}. \n\n$\\varepsilon$~Indi~B\\xspace, discovered by \\cite{Scholz_2003_EpsIndiB_discovery}, is a distant companion to the high proper motion ($\\sim$4.7 arcsec\/yr) star $\\varepsilon$~Indi\\xspace. It was later resolved to be a binary brown dwarf system by \\cite{McCaughrean_2004}, who estimated the two components of the binary, $\\varepsilon$~Indi~Ba\\xspace and Bb\\xspace, to be T dwarfs with spectral types T1 and T6, respectively. It was the first binary T dwarf to be discovered and remains one of the closest binary brown dwarf systems to our solar system; {\\sl Gaia } EDR3 measured a distance of $3.638 \\pm 0.001$\\,pc to $\\varepsilon$~Indi\\xspace~A \\citep{Lindegren+Klioner+Hernandez+etal_2020}. Their proximity makes $\\varepsilon$~Indi~Ba\\xspace and Bb\\xspace bright enough and their projected separation wide enough to obtain high quality, spatially resolved images and spectra. And their relatively short orbital period of $\\approx$10\\,yr allows the entire orbit to be traced in a long-term monitoring campaign. Being near the boundary of the L-T transition, $\\varepsilon$~Indi~Ba\\xspace is especially valuable for understanding the atmospheres of these ultra-cool brown dwarfs \\citep{Apai_2010,Goldman_2008,Rajan_2015}.\n\n\\cite{King_2010} carried out a detailed photometric and spectroscopic study of the system, and derived luminosities of $\\log_{10} L\/L_{\\odot} = -4.699 \\pm 0.017$ and $-5.232 \\pm 0.020$ for Ba\\xspace and Bb\\xspace, respectively. They found that neither a cloud-free nor a dusty atmospheric model can sufficiently explain the brown dwarf spectra, and that a model allowing partially settled clouds produced the best match. 
The relative orbit monitoring was still ongoing at the time, so a preliminary dynamical system mass of $121 \pm 1\,\mbox{$M_{\rm Jup}$}$ measured by \citet{Cardoso_2012} was adopted by the authors to derive mass ranges of 60-73 $\mbox{$M_{\rm Jup}$}$ and 47-60 $\mbox{$M_{\rm Jup}$}$ for Ba\xspace and Bb\xspace based on their photometric and spectroscopic observations.

\cite{Cardoso_2012} and \cite{Dieterich_2018} both used a combination of absolute and relative astrometry to obtain individual dynamical masses of $\varepsilon$~Indi~Ba\xspace and Bb\xspace. \cite{Cardoso_2012} used NACO \citep{NACO_ins_paper_1,NACO_ins_paper_2} and FORS2 \citep{Appenzeller+Fricke+Furtig+etal_1998,FORS2_ADC_1997} imaging to measure $77.8\pm0.3$\,$M_{\rm Jup}$ and $61.9\pm0.3$\,$M_{\rm Jup}$ for $\varepsilon$~Indi~Ba\xspace and Bb\xspace, respectively, with a parallax of $263.3\pm 0.3$\,mas. This parallax disagreed strongly with the {\sl Hipparcos } parallax { of $\varepsilon$~Indi\xspace~A} \citep{ESA_1997,vanLeeuwen_2007}. Fixing the parallax to the {\sl Hipparcos } 2007 value of $276.1 \pm 0.3$\,mas, \cite{Cardoso_2012} instead obtained masses of $68.0\pm0.9$\,$M_{\rm Jup}$ and $53.1\pm0.3$\,$M_{\rm Jup}$. \cite{Dieterich_2018} used a different data set to measure individual masses of $75.0\pm0.8$\,$M_{\rm Jup}$ and $70.1\pm0.7$\,$M_{\rm Jup}$ with a parallax of $276.9\pm0.8$\,mas, consistent with the {\sl Hipparcos } distance. The three dynamical mass measurements---two from \cite{Cardoso_2012} and one from \cite{Dieterich_2018}---disagree strongly with one another. The highest masses of $\gtrsim$75\,$M_{\rm Jup}$ are in tension with the predictions of substellar cooling models even at very old ages \citep{Dieterich_2014}.

In this paper, we use relative orbit and absolute astrometry monitoring of $\varepsilon$~Indi~B\xspace from 2005 to 2016 acquired with the VLT to measure the individual dynamical masses of $\varepsilon$~Indi~Ba\xspace and Bb\xspace. Much of this data set overlaps with that used by \cite{Cardoso_2012}, but we have the advantage of a few more epochs of data, {\sl Gaia } astrometric references \citep{Lindegren+Klioner+Hernandez+etal_2020}, and a better understanding of the direct imaging system thanks to years of work on the Galactic center \citep{Gillessen+Eisenhauer+Trippe+etal_2009,Plewa+Gillessen+Eisenhauer+etal_2015,Gillessen+Plewa+Eisenhauer+etal_2017}. We structure the paper as follows. We review the stellar properties and age of the system in Section \ref{sec:stellarprop}. Section \ref{sec:data} presents the VLT data that we use, and Section \ref{sec:positions} describes our methods for calibrating the data and measuring the positions of the two BDs. Section \ref{sec:photvar} presents our search for periodic photometric variations, while in Section \ref{sec: orbit fit} we fit for the orbit and masses of the pair. In Section \ref{sec:BDtests} we discuss the implications of our results for models of substellar evolution. We conclude with Section \ref{sec:conclusions}.

\section{Stellar Properties} \label{sec:stellarprop}

The $\varepsilon$~Indi~B\xspace system is bound to $\varepsilon$~Indi\xspace A (=HIP 108870, HD 209100, HR 8387), a bright K4V or K5V star \citep{Adams_1935,Evans_1957,Gray_2006}.
$\varepsilon$~Indi\xspace A has a $2.7^{+2.2}_{-0.4}\,\mbox{$M_{\rm Jup}$}$ planet on a wide, low-eccentricity orbit \citep{Endl+Kurster+Els+etal_2002,Zechmeister+Kurster+Endl+etal_2013,Feng_2019}. The star appears to be slightly metal-poor. Apart from a measurement of ${\rm [Fe/H]} = -0.6$\,dex \citep{Soto+Jenkins_2018}, literature spectroscopic measurements range from ${\rm [Fe/H]} = -0.23$\,dex \citep{Abia+Rebolo+Beckman+etal_1988} to $+0.04$\,dex \citep{Kollatschny_1980}, with a median of $-0.17$\,dex \citep{Soubiran_2016}.

Several studies have constrained the age of the $\varepsilon$~Indi\xspace system via various methods such as evolutionary models, Ca\,{\sc ii}~HK age dating techniques, and kinematics. Using a dynamical system mass of $121 \pm 1\,\mbox{$M_{\rm Jup}$}$ and evolutionary models, \citet{Cardoso_2012} predicted a system age of $3.7$-$4.3$\,Gyr. This age is older than the age of $0.8$-$2.0$\,Gyr derived from stellar rotation {of $\varepsilon$~Indi\xspace A} and the age of $1$-$2.7$\,Gyr from the Ca\,{\sc ii} activity {of $\varepsilon$~Indi\xspace A}, reported in \cite{Lachaume_1999} { assuming a stellar rotation period of $\sim$20 days}, but is younger than the kinematic estimate of $>$7.4\,Gyr quoted in the same study. { \citet{Feng_2019} inferred a longer rotation period of $\sim$35 days from a relatively large data set of high precision RVs and multiple activity indicators for $\varepsilon$~Indi\xspace A, and found an age of $\sim$4 Gyr}. To date, the age of the star remains a major source of uncertainty in the evolutionary and atmospheric modeling of the system.

We perform our own analysis of the age of $\varepsilon$~Indi\xspace using the Bayesian activity-based age dating tool devised by \citet{Brandt_2014} and applied in \cite{Li+Brandt+Brandt+etal_2021}. To do this, we adopt a Ca\,{\sc ii} chromospheric index of $\log R'_{\rm HK} = -4.72$ from \citet{Pace_2013}, an X-ray activity index of $R_{X} = -5.62$ from the ROSAT all-sky survey bright source catalog \citep{Voges_1999}, and Tycho $B_T V_T$ photometry ($B_T = 6.048\pm0.014$\,mag, $V_T = 4.826\pm0.009$\,mag) from the Tycho-2 catalog \citep{Hog+Fabricius+Makarov+etal_2000}. The star lacks a published photometric rotation period. Figure \ref{fig::age} shows our resulting posterior probability distribution, with an age of $3.48_{-1.03}^{+0.78}$ Gyr. This age is somewhat older than most literature estimates, but is similar to the system age of $3.7$-$4.3$\,Gyr used by \citet{Cardoso_2012} for their analysis { based on the preliminary system mass for $\varepsilon$~Indi\xspace{} Ba+Bb compared to evolutionary models}{ and to the $\sim$4\,Gyr age more recently inferred by \cite{Feng_2019}}. We use our Bayesian age posterior when analyzing the consistency of our dynamical masses with brown dwarf models (Section \ref{sec:BDtests}).

\begin{figure}
 \centering
 \includegraphics[width=0.45\textwidth]{Figures/age_epsiIndiA.pdf}
 \caption{Age posterior of $\varepsilon$~Indi\xspace A based on the Bayesian activity-age method of \citet{Brandt_2014}. Our analysis does not use a directly measured rotation period for $\varepsilon$~Indi\xspace.
The median and 1$\sigma$ uncertainties are shown by the grey dotted lines; they correspond to $3.48_{-1.03}^{+0.78}$ Gyr.}
 \label{fig::age}
\end{figure}

\section{Data} \label{sec:data}

\subsection{Relative Astrometry}

We measure the relative positions of $\varepsilon$~Indi~Ba\xspace and Bb\xspace using nine years of monitoring by the Nasmyth Adaptive Optics System (NAOS) + Near-Infrared Imager and Spectrograph (CONICA), NACO for short \citep{NACO_ins_paper_1, NACO_ins_paper_2}. We use images taken by the S13 camera on NACO in the $J$, $H$ and $K_s$ passbands. Our images come from Program IDs 072.C-0689(F), 073.C-0582(A), 074.C-0088(A), 075.C-0376(A), 076.C-0472(A), 077.C-0798(A), 078.C-0308(A), 079.C-0461(A), 380.C-0449(A), 381.C-0417(A), 382.C-0483(A), 383.C-0895(A), 384.C-0657(A), 385.C-0994(A), 386.C-0376(A), 087.C-0532(A), 088.C-0525(A), 089.C-0807(A), and 091.C-0899(A), all PI McCaughrean, and 381.C-0860(A), PI Kasper.

The S13 camera on NACO has a field of view (FOV) of $14'' \times 14''$ and a plate scale of $\approx$13.2$\,{\rm mas\,pix^{-1}}$. Most observing sequences consisted of $\approx$5 dithered images in each filter. The binary system HD~208371/2 was usually observed on the same nights and in the same mode to serve as an astrometric calibrator. We use a total of 939 images of $\varepsilon$~Indi~Ba\xspace and Bb\xspace, taken over 56 nights of observations from 2004 to 2013 for which we have contemporaneous imaging of HD~208371/2.

We perform basic calibrations on all of these images. For each night, we use contemporaneous dark images to identify bad pixels and to remove static backgrounds. We construct and use a single master flat field for all images. We mask pixels for which the flatfield correction deviates by more than 20\% from its median or for which the standard deviation of the dark frames is more than five times its median standard deviation. We then subtract the median dark image and divide by the flatfield image.

The data quality varies depending on the observing conditions and the performance of the adaptive optics (AO) system. Therefore, we apply selection criteria to exclude poor quality data. We first extract the sources in the images using the DAOPHOT program as implemented in the {\tt photutils} python package \citep{Stetson_1987, photutils110}. We obtain estimates of the following parameters for $\varepsilon$~Indi~Ba\xspace and Bb\xspace: centroid, sharpness (a DAOPHOT parameter that characterizes the width of the source), roundness (a DAOPHOT parameter that characterizes the symmetry of the source), and signal-to-noise ratio (SNR). We discard images where one or both of the two targets fall outside the field of view, and for the remaining images we apply the following cut-offs in the DAOPHOT detection parameters to exclude highly extended, highly elongated, and low signal-to-noise images: sharpness $\geqslant 0.3$, $- 0.5 \leqslant $ roundness $\leqslant 0.5$ and SNR $\geqslant 25$. We then visually inspect the remaining images to remove those with bad pixels (cosmic rays or optical defects) landing on or near the target objects, and those with AO correction artifacts that survived our DAOPHOT cut.
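
For concreteness, a minimal sketch of these quality cuts with {\tt photutils} follows; the detection threshold and the peak-over-background-noise SNR proxy are our assumptions rather than exact pipeline settings.
\begin{verbatim}
# Minimal sketch of the DAOPHOT-based quality cuts (photutils >= 1.1).
import numpy as np
from astropy.stats import sigma_clipped_stats
from photutils.detection import DAOStarFinder

def passes_quality_cuts(image, fwhm_pix=4.0, n_targets=2):
    """True if both targets are detected and pass the DAOPHOT cuts."""
    mean, median, std = sigma_clipped_stats(image, sigma=3.0)
    finder = DAOStarFinder(fwhm=fwhm_pix, threshold=5.0 * std)
    sources = finder(image - median)
    if sources is None or len(sources) < n_targets:
        return False            # one or both targets missing
    sources.sort('flux')        # keep the two brightest detections
    targets = sources[-n_targets:]
    snr = targets['peak'] / std  # simple SNR proxy (assumption)
    good = ((targets['sharpness'] >= 0.3)
            & (np.abs(targets['roundness1']) <= 0.5)
            & (snr >= 25))
    return bool(np.all(good))
\end{verbatim}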
Table \ref{table:relastrodata} summarizes the final data set selected for the relative astrometry measurements.

\begin{deluxetable}{cccc}
\tablewidth{0pt}
\tablecaption{Relative astrometry data summary \label{table:relastrodata} }
\tablehead{
Date & 
Filter(s) &
\# Frames &
Total integration (s)
}
\startdata
 2004-09-24 & $J$, $H$, $K_s$ & 15 & 150 \\
 2004-11-14 & $J$, $H$, $K_s$ & 14 & 840 \\
 2004-11-15 & $J$ & 5 & 270 \\
 2004-12-15 & $J$, $H$ & 11 & 220 \\
 2005-06-04 & $J$, $H$, $K_s$ & 13 & 780 \\
 2005-07-06 & $K_s$ & 6 & 310 \\
 2005-08-06 & $J$, $H$, $K_s$ & 13 & 780 \\
 2005-12-17 & $J$, $K_s$ & 7 & 210 \\
 2005-12-30 & $J$, $H$, $K_s$ & 14 & 840 \\
 2005-12-31 & $J$, $H$, $K_s$ & 13 & 780 \\
 2006-07-19 & $H$, $K_s$ & 8 & 80 \\
 2006-08-06 & $J$, $H$ & 10 & 100 \\
 2006-09-22 & $J$, $H$, $K_s$ & 15 & 150 \\
 2006-10-03 & $J$, $H$ & 7 & 420 \\
 2006-10-20 & $J$, $H$ & 5 & 300 \\
 2006-11-12 & $J$ & 5 & 300 \\
 2007-06-18 & $J$, $H$, $K_s$ & 12 & 720 \\
 2007-09-09 & $J$, $H$ & 10 & 450 \\
 2007-09-29 & $J$, $H$ & 15 & 900 \\
 2007-11-07 & $J$, $H$ & 10 & 600 \\
 2008-06-05 & $J$, $H$ & 10 & 600 \\
 2008-06-10 & $J$, $H$ & 7 & 70 \\
 2008-06-21 & $J$, $H$ & 10 & 100 \\
 2008-08-25 & $J$, $H$ & 9 & 540 \\
 2008-12-01 & $J$, $H$ & 12 & 720 \\
 2009-06-17 & $J$, $H$, $K_s$ & 12 & 720 \\
 2010-08-01 & $J$, $H$ & 7 & 105 \\
 2010-11-07 & $J$, $H$ & 10 & 300 \\
 2011-07-18 & $J$, $H$, $K_s$ & 13 & 390 \\
 2012-07-18 & $J$, $H$ & 9 & 540 \\
 2012-09-14 & $J$, $H$ & 9 & 540 \\
 2013-06-07 & $J$, $H$ & 10 & 600
\enddata
\end{deluxetable}

\subsection{Absolute Astrometry} \label{subsec:AbsAstData}

The long term absolute position of $\varepsilon$~Indi~B\xspace was monitored with the FOcal Reducer and low dispersion Spectrograph \citep[FORS,][]{Appenzeller+Fricke+Furtig+etal_1998} installed on ESO's UT1 telescope at the Very Large Telescope (VLT). The FORS system consists of the twin imagers and spectrographs FORS1 and FORS2, collectively covering visual and near-UV wavelengths. The absolute astrometry monitoring was done with the FORS2 imager coupled with a mosaic of two MIT CCDs; the camera has a pixel scale of 0$.\!\!''$126/pixel in its unbinned mode and a field of view (FOV) of $\approx$8$.\!'6\times8.\!'6$.

The FORS2 monitoring of $\varepsilon$~Indi~B\xspace covers a long temporal baseline beginning in 2005 and ending in 2016. Our images come from Program IDs 072.C-0689(D), 075.C-0376(B), 076.C-0472(B), 077.C-0798(B), 078.C-0308(B), 079.C-0461(B), 380.C-0449(B), 381.C-0417(B), 382.C-0483(B), 383.C-0895(B), 384.C-0657(B), 385.C-0994(B), 386.C-0376(B), 087.C-0532(B), 088.C-0525(B), 089.C-0807(B), and 091.C-0899(B), all PI McCaughrean.
The FORS2 focal plane consists of two CCDs, chip1 and chip2. We only consider the data taken with the chip1 CCD. Over the 12 years of absolute position monitoring, 940 images were taken with chip1 over 88 epochs. For the majority of the epochs, 10 dithered images in the $I_{\rm BESSEL}$ filter were obtained, with a 20 second exposure time for each image. We exclude 36 blank image frames over 4 epochs between 2009-08-21 and 2009-11-03, resulting in a final total of 904 image frames for our analysis.
A summary of the FORS2 data is given in Table \ref{tab:absast_obs_log}. These 904 science frames are bias-corrected and flat-fielded using normalized master calibration frames generated from median combinations of the flat and bias frames obtained in the same set of observing programs.

\begin{deluxetable}{cccc}
\tablewidth{0pt}
\tablecaption{Absolute astrometry data from FORS2\tablenotemark{a} \label{tab:absast_obs_log}}
\tablehead{Date & \# Frames & Band &
Total integration (s)}
\tablewidth{0pt}
\startdata
2005-05-06 & 10 & $I_{\rm Bess}$ & 200 \\
2005-05-12 & 10 & $I_{\rm Bess}$ & 200 \\
2005-06-08 & 10 & $I_{\rm Bess}$ & 200 \\
2005-07-06 & 10 & $I_{\rm Bess}$ & 200
\enddata
\tablenotetext{a}{The full observing log is available as an online table; only the first four rows are shown here for reference.}
\end{deluxetable}

\section{Relative and Absolute Positions} \label{sec:positions}

\subsection{Point Spread Function (PSF) Fitting} \label{subsec:joint psf fit}

To measure the relative separations of the two brown dwarfs in the NACO data, we need to fit their PSFs. \cite{Cardoso_2012} demonstrated that the Moffat function is the best analytical profile for the NACO data, outperforming both Lorentzian and Gaussian profiles. During the epochs when the projected separations of the two brown dwarfs are small, the two PSFs are separated by only one or two full widths at half maximum (FWHM). As a result, the flux near the center of one source has non-negligible contributions from the wings of the other source. This could introduce significant biases in the measured positions if we were to fit a PSF profile to each source separately. Therefore, we implement a joint fit of the two PSFs using a sum of two elliptical Moffat profiles:
\begin{equation}
 \label{eqn:joint moffat model}
{\rm Counts}(x, y) = f_1\psi_{1}(x, y) + f_2\psi_{2}(x, y)
\end{equation}
with
\begin{multline}
\label{eqn:elliptical moffat}
 \psi_{i}(x, y) = (1 + c_1(x - x_i)^2 + 2c_2(x - x_i)(y - y_i) \\ + c_3(y - y_i)^2 ) ^{-\beta}
\end{multline}
where $\psi_i$ is a general elliptical 2D Moffat profile centered at \{$x_i, y_i$\} and $f_i$ is its peak intensity. Our model is the sum of two such profiles with different fluxes at different locations, sharing the same morphology, i.e., the same \{$c_1, c_2, c_3$\}. Instead of fitting for \{$c_1, c_2, c_3$\} directly, we fit for three equivalent parameters: \{${\rm fwhm}_x, {\rm fwhm}_y, \phi$\}, which are the FWHMs of the elliptical Moffat profile along the x and y axes, and the counter-clockwise rotation angle of the PSF, respectively. These physical parameters are related to \{$c_1, c_2, c_3, \beta$\} through the following equations:
\begin{align}
 c_1 &= \frac{\cos^2\phi}{\sigma_x^2} + \frac{\sin^2\phi}{\sigma_y^2}\\
 c_2 &= \frac{\sin 2\phi}{2\sigma_x^2} - \frac{\sin 2\phi}{2\sigma_y^2}\\
 c_3 &= \frac{\sin^2\phi}{\sigma_x^2} + \frac{\cos^2\phi}{\sigma_y^2}\\
 {\rm fwhm}_{x, y} &= 2\sigma_{x, y} \sqrt{(2^{1/\beta} - 1)}
\end{align} 

For each background-subtracted image, we fit for the sum of two PSFs by minimizing $\chi^2$ over 10 parameters: \{$x_1, y_1, x_2, y_2, f_1, f_2, {\rm fwhm}_x, {\rm fwhm}_y, \phi, \beta$\}.
In this case, $\chi^2$ is defined by:
\begin{align}
\label{eqn: moffat chisqr}
\chi^2 = \sum_{i}^{n_{pix}} \frac{(D_{i} - f_1\; \psi_{1, i} - f_2\; \psi_{2, i})^2}{\sigma_{i}^2}
\end{align}
We use scipy's non-linear optimization routines \citep{2020SciPy-NMeth} to minimize $\chi^2$ over the 8 non-linear parameters \{$x_1, y_1, x_2, y_2, {\rm fwhm}_x, {\rm fwhm}_y, \phi, \beta$\}; for each trial set of non-linear parameters, we solve for the best-fit linear parameters \{$f_1, f_2$\} analytically and marginalize over them.

\subsection{Calibrations for Relative Astrometry} \label{subsec: relast calibrations}

In order to measure precise relative astrometry, we must measure and correct for various instrumental properties and atmospheric effects that can alter the apparent separation and position angle (PA) of $\varepsilon$~Indi~Ba\xspace and Bb\xspace. In this section we describe our calibrations for the instrument plate scale and orientation, the distortion correction, and differential atmospheric refraction.

\subsubsection{Plate Scale, Orientation, and Distortion Correction} \label{subsubsec: platecal}

We calibrate the plate scale and the north pointing of the NACO S13 camera using NACO's observations of a nearby wide-separation binary, HD 208371/2, observed concurrently with the science data over the $\sim$10-year relative orbit monitoring period. We calibrate the separation and PA of the binary in the NACO data against the high precision measurements from Gaia EDR3 for HD 208371/2:
\begin{align}
 \label{eq:EDR3 AB sep}
 \frac{\rm sep}{\rm arcsec} &= 8.90612 + 0.00011 \left({\rm Jyear} - 2016.0 \right) \\
 \label{eq:EDR3 AB PA}
 \frac{\rm PA}{\rm degree} &= 348.10345 - 0.00040 \left({\rm Jyear} - 2016.0 \right)
\end{align}
The uncertainties on these predictions depend on the epoch, but with proper motion uncertainties $\lesssim$40\,$\mu$as\,yr$^{-1}$, positional uncertainties are only $\approx$0.5\,mas even when extrapolated ten years before Gaia. This represents a fractional uncertainty in separation below 10$^{-4}$ and contributes negligibly to our error budget.

\begin{figure}
 \centering
 \includegraphics[width=\linewidth]{Figures/NaCoS13platecal.pdf}
 \caption{Pixel scale and PA zero point calibrations for the NACO S13 camera, derived using the binary HD~208371/2 as measured by Gaia EDR3.}
 \label{fig:naco calibrations}
\end{figure}

To measure the separation and PA of the calibration binary, we use the Moffat PSF fitting algorithm described in Section \ref{subsec:joint psf fit}. Since the binary is widely separated, a joint PSF fit in this case is effectively equivalent to fitting a single 2D Moffat profile to each star separately {(albeit with the same structure parameters for each star's Moffat function)}. The calibration results are shown in Figure \ref{fig:naco calibrations}. We measure an overall average plate scale of $13.260 \pm 0.001\,{\rm mas\,pix^{-1}}$, but we also note that the plate scale seems to increase slightly with time from 2004 to 2010. Both the plate scale and the increasing trend agree with other measurements in the literature \citep{Chauvin_2010,Cardoso_2012}. \cite{Cardoso_2012} used the same calibration binary to derive their plate scales, but with a different reference measurement for the binary. Adjusting their results to the more precise Gaia measurement of the binary brings their plate scale into agreement with ours.
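
As an illustration of this calibration, the sketch below converts a measured detector-frame separation and PA of HD~208371/2 into a plate scale and PA zero point using Equations \eqref{eq:EDR3 AB sep} and \eqref{eq:EDR3 AB PA}; the input numbers in the example are illustrative only.
\begin{verbatim}
def calibrator_sep_pa(jyear):
    """Gaia EDR3 ephemeris of HD 208371/2 (Equations above)."""
    dt = jyear - 2016.0
    sep_arcsec = 8.90612 + 0.00011 * dt
    pa_deg = 348.10345 - 0.00040 * dt
    return sep_arcsec, pa_deg

def plate_scale_and_pa_zero(sep_pix, pa_det_deg, jyear):
    """Measured detector-frame separation (pixels) and PA (deg) of
    the calibrator -> plate scale (mas/pix) and PA zero point (deg)."""
    sep_arcsec, pa_deg = calibrator_sep_pa(jyear)
    scale = 1e3 * sep_arcsec / sep_pix
    pa_zero = (pa_deg - pa_det_deg) % 360.0
    return scale, pa_zero

# Illustrative numbers: a 671.6-pixel separation measured in 2008.5
# corresponds to ~13.26 mas per pixel.
print(plate_scale_and_pa_zero(671.6, 200.0, 2008.5))
\end{verbatim}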
The PA zero point of the instrument varies from observation to observation, and has a long-term trend as well. This is in agreement with the analysis of \cite{Plewa_2018}.

The distortion correction was shown to be of little significance for the NACO S13 camera \citep{Trippe_2008} due to the small field of view. For completeness, we still apply the distortion correction derived by \citet{Plewa_2018_distortion_correction}.

\subsubsection{Differential Atmospheric Refraction and Annual Aberration}
The dominant atmospheric effect that needs to be corrected for is differential atmospheric refraction \citep{Gubler_1998}. When a light ray travels from vacuum into Earth's atmosphere, it is refracted along the zenith direction, making the apparent zenith angle, $z$, deviate from the true zenith angle in the absence of an atmosphere, $z_0$:
\begin{equation}
 z = z_0 + R
\end{equation}
where $R$ is the total refraction angle experienced by the light ray. The amount of this refraction depends on the atmospheric conditions, the wavelength of the incoming light, and the zenith angle of the object. Therefore, for two objects at different positions in the sky and with different spectral types, the total refraction angles are different and can alter the apparent separation and PA of the objects. We can write this differential refraction, $\Delta R$, in terms of two components, one due to the difference in color, and one due to the difference of the true zenith angles \citep{Gubler_1998}:
\begin{equation}
 \Delta R = \Delta R_{\rm color} + \Delta R_{\Delta z_0}
\end{equation}
For $\varepsilon$~Indi~Ba\xspace and Bb\xspace, the second term is {much smaller} as they are separated by only $< 1''$, {and produces negligible effects on the final results compared to the first term. We include both effects for completeness. The total differential refraction can be calculated with:}
\begin{equation}
 \label{eq: ADR}
 \Delta R = R_2(n_2, z_2) - R_1(n_1, z_1)
\end{equation}
where the $n_i$'s are the effective refractive indices of the Earth's atmosphere for the target sources. $n_i$ depends on the effective central wavelength ($\lambda_i$) of the target in the observed passband, and on the observing conditions, most commonly pressure ($P$), temperature ($T$), humidity ($H$) and altitude ($z$). \citet{Cardoso_2012} calculated the effective central wavelengths for $\varepsilon$~Indi~Ba\xspace and Bb\xspace in the $J$, $H$, and $K_s$ bands by integrating high resolution spectra of the two brown dwarfs. To calculate the refractive index, $n_i(\lambda_i, P, T, H, z)$, we use the models of \citet{Mathar_2007} covering a wavelength range of $1.3\;\mu$m to $24\;\mu$m. The total refraction can then be approximately expressed as \citep{smart_1977}:
\begin{equation}
 R(n, z) \approx (n - 1) \tan(z)
\end{equation}
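
For illustration, the sketch below evaluates the differential refraction of the expressions above, taking the refractivities $n_i - 1$ as inputs (e.g., from the \citet{Mathar_2007} model evaluated at each object's effective wavelength and the ambient conditions); the example values are illustrative only.
\begin{verbatim}
import numpy as np

def refraction_mas(refractivity, zenith_deg):
    """R(n, z) ~ (n - 1) tan(z), in mas; refractivity = n - 1."""
    rad_to_mas = 180.0 / np.pi * 3.6e6
    return refractivity * np.tan(np.radians(zenith_deg)) * rad_to_mas

def delta_refraction_mas(refr_1, z1_deg, refr_2, z2_deg):
    """Differential refraction Delta R = R_2 - R_1 along the zenith
    direction; for Ba and Bb the Delta z_0 term is tiny but retained."""
    return refraction_mas(refr_2, z2_deg) - refraction_mas(refr_1, z1_deg)

# Illustrative values: refractivities differing by 1e-8 (a color term)
# at 45 deg zenith angle give a ~2 mas differential shift.
print(delta_refraction_mas(2.65e-4, 45.0, 2.65e-4 + 1.0e-8, 45.0))
\end{verbatim}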
\begin{figure}
 \centering
 \includegraphics[width=\linewidth]{Figures/S13_refrac.pdf}
 \caption{Residual altitude separation of $\varepsilon$~Indi~Ba\xspace and Bb\xspace in each band compared to the mean of all bands, before (top panel) and after (bottom panel) applying a correction for differential atmospheric refraction.}
 \label{fig:refrac consistency}
\end{figure}

A comparison of the separations of $\varepsilon$~Indi~Ba\xspace and Bb\xspace along the zenith direction is shown in Figure \ref{fig:refrac consistency}. The systematic differences between the $J$, $H$, and $K_s$ bands due to differential refraction are clearly visible before the correction. After applying the correction, the three bands are brought into much better agreement, with a smaller total scatter around the mean.

{Annual aberration is a change in the apparent position of a light source caused by the observer's changing reference frame due to the orbital motion of the Earth \citep{Bradley_aberration_discovery, Phipps_relativity_and_aberration}. We correct for the differential annual aberration, the difference in aberration between $\varepsilon$~Indi~Ba\xspace and Bb\xspace, in the relative astrometry by transforming the measured positions of $\varepsilon$~Indi~Ba\xspace and Bb\xspace to a geocentric reference frame using {\tt astropy}. The effect is generally a small fraction of the relative astrometry error bars and has a negligible impact on the relative orbit fit. For absolute astrometry, the aberration is absorbed by the linear component of the distortion correction.}

\subsubsection{{PSF Fitting Performance and Systematics}} \label{subsubsec: error inflation}
In order to understand how well our PSF fitting algorithm described in Section \ref{subsec:joint psf fit} performs, we investigate the systematic errors and potential biases of the algorithm in this section, and adjust the errors of our results accordingly.

To do this, we {crop out boxes around} the stars in the calibration binary, HD~208371/2, {and use them as} empirical PSFs. {We build a collection of such PSF stamps from the images of the calibration binary, selected on AO quality and SNR. We use these PSF stamps and empty background regions of the NACO data to generate mock data sets containing overlapping PSFs}. For each such mock image, we randomly select one empirical PSF from the collection and place two copies of this PSF onto the background of an $\varepsilon$~Indi~B\xspace image. We scale the fluxes of the two PSF copies to be similar to those of $\varepsilon$~Indi~Ba\xspace and Bb\xspace in a typical image. We then generate a large sample of these mock images at various separations and PAs. Since the calibration binary stars are widely separated, these empirical PSFs are effectively free of nearby star contamination. We then perform the PSF fitting described in Section \ref{subsec:joint psf fit} on the mock images and compare the measurements to the true, known separations and PAs.

\begin{figure}
 \centering
 \includegraphics[width=\linewidth]{Figures/PSF_systematics.pdf}
 \caption{Root mean square residuals of the measured separations from the true separations of the PSFs in simulated data. The top panel shows the residuals in the radial direction. The bottom panel shows the residuals in the tangential direction in terms of arclength. Arclength is a better indicator of the fitting algorithm's performance than PA, because we expect arclength residuals to be independent of radial separation, while the PA residuals grow at smaller separations simply due to geometry.}
 \label{fig:relast error inflation}
\end{figure}

The results of this test are shown in Figure \ref{fig:relast error inflation}. Each data point is the root mean square residual from fitting 400 mock images at the same separation but with various PAs. The errors we find from these mock data sets are slightly larger than, but of the same order of magnitude as, the scatter in our $\varepsilon$~Indi~B\xspace measurements.
We also find that the residuals of these mock data measurements increase as the PSF overlap becomes significant, but they remain at the milliarcsecond level even at the closest separation in the $\varepsilon$~Indi~B\xspace data set. The performance in the $K_s$ band is slightly worse due to the large flux ratio of the system in $K_s$. Overall, {our joint PSF fitting algorithm has sub-milliarcsecond errors across all three bands for widely separated sources, and errors within a few milliarcseconds for overlapping sources}. For our final relative astrometry results for $\varepsilon$~Indi~B\xspace, we add the systematic errors shown in Figure \ref{fig:relast error inflation} in quadrature to the measurement errors of the relative astrometry.
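
A minimal sketch of this injection-recovery machinery is given below; the integer-pixel stamp placement is a simplification of the actual sub-pixel injection, and {\tt fit\_positions} stands in for the joint Moffat fit of Section \ref{subsec:joint psf fit}.
\begin{verbatim}
import numpy as np

def inject_pair(background, stamp, x1, y1, x2, y2, f1, f2):
    """Add two scaled copies of an empirical PSF stamp to a star-free
    background patch (integer-pixel placement for brevity)."""
    mock = background.copy()
    h, w = stamp.shape
    norm = stamp / stamp.max()
    for x, y, f in ((x1, y1, f1), (x2, y2, f2)):
        y0, x0 = int(round(y)) - h // 2, int(round(x)) - w // 2
        mock[y0:y0 + h, x0:x0 + w] += f * norm
    return mock

def separation_rms(background, stamp, fit_positions, sep_pix,
                   n_trials=400, seed=0):
    """RMS of recovered minus true separation at fixed separation,
    averaged over random PAs. `fit_positions` should be the joint
    Moffat fit, returning (x1, y1, x2, y2)."""
    rng = np.random.default_rng(seed)
    resid = []
    for _ in range(n_trials):
        pa = rng.uniform(0.0, 2.0 * np.pi)
        x1, y1 = 64.0, 64.0
        x2 = x1 + sep_pix * np.sin(pa)
        y2 = y1 + sep_pix * np.cos(pa)
        mock = inject_pair(background, stamp, x1, y1, x2, y2, 1.0, 0.4)
        xf1, yf1, xf2, yf2 = fit_positions(mock)
        resid.append(np.hypot(xf2 - xf1, yf2 - yf1) - sep_pix)
    return float(np.sqrt(np.mean(np.square(resid))))
\end{verbatim}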
\subsection{Relative Astrometry Results}

\begin{figure}
 \centering
 \includegraphics[width=\linewidth]{Figures/relast_PSF_residual.pdf}
 \caption{Examples of the joint Moffat PSF fits to NACO data. The top panel shows an example of negligible PSF overlap. The bottom panel shows an example from the epoch with maximum overlap in the NACO data.}
 \label{fig:relast psf residual}
\end{figure}

The final relative astrometry results are shown in Table \ref{table:relastroresults}. These are measured by applying the calibrations described in Section \ref{subsec: relast calibrations} and jointly fitting for the positions of $\varepsilon$~Indi~Ba\xspace and Bb\xspace in every selected image, using the PSF fitting method described in Section \ref{subsec:joint psf fit}. We take the mean and the error on the mean for every epoch, and {add the systematic errors in quadrature to the measurement errors} as described in Section \ref{subsubsec: error inflation}. In Figure \ref{fig:relast psf residual}, we show a PSF fit for the simple case where the two PSFs are effectively isolated, as well as a PSF fit from the epoch with the closest projected separation and hence maximum PSF overlap. For each case, we take the fit with the median squared residual from that epoch to demonstrate the typical residual level.

\begin{deluxetable}{ccccc}
\tablecaption{Relative astrometry results \label{table:relastroresults} } 
\tablehead{
\text{Epoch} & \text{$\rho$ (arcsec)} & \text{$\sigma_{\rho}$ (arcsec)} & \text{$\theta$ (deg)} & \text{$\sigma_{\theta}$ (deg)}
}
\startdata
2004.730 & 0.88310 & 0.00108 & 140.317 & 0.047 \\
2004.869 & 0.89461 & 0.00110 & 140.853 & 0.047 \\
2004.872 & 0.89560 & 0.00126 & 140.814 & 0.067 \\
2004.954 & 0.90200 & 0.00107 & 141.115 & 0.051 \\
2005.423 & 0.93141 & 0.00112 & 142.648 & 0.045 \\
2005.511 & 0.93351 & 0.00126 & 142.888 & 0.054 \\
2005.595 & 0.93654 & 0.00112 & 143.169 & 0.044 \\
2005.959 & 0.94067 & 0.00118 & 144.222 & 0.050 \\
2005.995 & 0.94079 & 0.00117 & 144.352 & 0.049 \\
2005.997 & 0.93987 & 0.00115 & 144.356 & 0.050 \\
2006.546 & 0.92015 & 0.00111 & 146.044 & 0.053 \\
2006.595 & 0.91721 & 0.00107 & 146.230 & 0.049 \\
2006.724 & 0.90802 & 0.00106 & 146.657 & 0.047 \\
2006.754 & 0.90502 & 0.00110 & 146.745 & 0.047 \\
2006.800 & 0.90222 & 0.00138 & 146.984 & 0.053 \\
2006.863 & 0.89489 & 0.00110 & 147.111 & 0.054 \\
2007.461 & 0.81432 & 0.00109 & 149.295 & 0.050 \\
2007.688 & 0.77183 & 0.00104 & 150.250 & 0.053 \\
2007.743 & 0.76014 & 0.00110 & 150.515 & 0.055 \\
2007.849 & 0.73666 & 0.00109 & 151.017 & 0.063 \\
2008.427 & 0.57619 & 0.00104 & 154.647 & 0.078 \\
2008.441 & 0.57103 & 0.00112 & 154.665 & 0.106 \\
2008.471 & 0.56145 & 0.00110 & 154.978 & 0.086 \\
2008.648 & 0.49830 & 0.00103 & 156.664 & 0.082 \\
2008.915 & 0.39126 & 0.00163 & 160.569 & 0.148 \\
2009.458 & 0.14626 & 0.00273 & 186.175 & 0.562 \\
2010.582 & 0.32838 & 0.00120 & 332.295 & 0.157 \\
2010.849 & 0.30942 & 0.00103 & 339.059 & 0.134 \\
2011.543 & 0.17352 & 0.00243 & 12.950 & 0.433 \\
2012.545 & 0.25518 & 0.00110 & 107.165 & 0.186 \\
2012.703 & 0.29394 & 0.00112 & 112.857 & 0.164 \\
2013.431 & 0.47861 & 0.00104 & 126.845 & 0.088
\enddata
\end{deluxetable}

\subsection{Calibrations for Absolute Astrometry}
\label{sec: absast calibrations}

We now seek to measure the position of $\varepsilon$~Indi~Ba\xspace relative to a set of reference stars in the FORS2 images with known absolute astrometry. We approach the problem in stages. First, we fit for the pixel positions of all stars in the frame. We then use Gaia EDR3 astrometry of a subsample of these stars to construct a distortion map. Next, we use our fit to the NACO data (Section \ref{sec: orbit fit}) to fix the relative positions of $\varepsilon$~Indi~Ba\xspace and Bb\xspace. Finally, we use the PSFs of nearby reference stars to model the combined PSF of $\varepsilon$~Indi~Ba\xspace and Bb\xspace and measure their position in a frame anchored by Gaia EDR3.

We begin by measuring stellar positions in pixel coordinates and using them to derive a conversion between pixel coordinates ($x,y$) and sky coordinates ($\alpha, \delta$), i.e., a distortion correction. We identify 46 Gaia sources in the field of view of the FORS2 images; these serve as reference stars for deriving the distortion corrections.

We fit elliptical Moffat profiles to retrieve each individual reference star's pixel location $(x, y)$ on the detector. These are Gaia sources with known $\alpha$ and $\delta$ measurements { propagated backwards} from Gaia EDR3's single-star astrometry at epoch 2016.0.
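
The epoch propagation itself is straightforward; the sketch below shows the proper-motion-only form, and is a simplification in that a full treatment would also include each reference star's parallactic displacement at the observation epoch.
\begin{verbatim}
import numpy as np

def propagate_gaia(ra_deg, dec_deg, pmra, pmdec, jyear):
    """Positions at `jyear` from the EDR3 epoch-2016.0 catalog values.
    pmra is mu_alpha* in mas/yr (cos(dec) factor included, as in the
    Gaia catalog); proper motion only."""
    dt = jyear - 2016.0
    mas2deg = 1.0 / 3.6e6
    ra = ra_deg + pmra * dt * mas2deg / np.cos(np.radians(dec_deg))
    dec = dec_deg + pmdec * dt * mas2deg
    return ra, dec
\end{verbatim}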
We adopt the same module used for the relative astrometry, described in Section~\ref{subsec:joint psf fit}, to fit for the reference stars' positions. For each star, we fit for three additional parameters: the FWHMs along two directions and a rotation angle between them.

\begin{figure*}
 \centering
 \includegraphics[width=0.8\textwidth]{Figures/fig6_withscale.pdf}
 \caption{Distortion correction to one of the 904 image frames from the FORS2 long term monitoring of $\varepsilon$~Indi~B\xspace's absolute positions. The position of $\varepsilon$~Indi~B\xspace is indicated by the yellow star while the blue dots are field stars. The red points are reference stars in the Gaia EDR3 catalog. The green lines indicate the residuals of the measured centroids from their distortion-corrected predictions based on EDR3 astrometry. Red open circles are Gaia EDR3 stars that we discard as outliers. 
 }
 \label{fig::distortion}
\end{figure*}

We assume a polynomial distortion solution of order N for FORS2: 
\begin{align}
 \alpha*_{\rm model} \equiv \alpha \cos \delta &= \sum_{i = 0}^{N} \sum_{j = 0}^{N - i} a_{ij} x^i y^j \\
 \delta_{\rm model} &= \sum_{i = 0}^{N} \sum_{j = 0}^{N - i} b_{ij} x^i y^j ,
\end{align}
and fit for the coefficients by minimizing
\begin{equation}
 \chi^2 = \sum_{k=1}^{n_{\rm ref}} \left[\left( \frac{{\alpha*_{k}} - {\alpha*_{{\rm model},k}}}{\sigma_{\alpha*,k}} \right)^2 + \left(\frac{{\delta_{k}} - {\delta_{{\rm model},k}}}{\sigma_{\delta_{k}}} \right)^2 \right].
\end{equation}
This defines a linear least-squares problem because each of the $a_{ij}$ and $b_{ij}$ appears linearly in the data model. To avoid numerical problems, we define $x=y=0$ at the center of the image and subtract $\alpha_{\rm ref} = 181.\!\!^\circ327$, $\delta_{\rm ref} = -56.\!\!^\circ789$ from all Gaia coordinates. { To determine the best model for the distortion correction, we compare 2nd, 3rd, and 4th-order polynomial models. We derive distortion corrections excluding one Gaia reference star at a time. We then measure the excluded star's positions and use the distortion correction built without this star to derive its absolute astrometry. The consistency of the best-fit astrometric parameters with the Gaia measurements, and the scatter of the individual astrometric measurements about this best-fit sky path, both act as a cross-validation test of the distortion correction. For most stars, a second-order correction outperforms a third-order correction on both metrics. This holds even more dramatically for $\varepsilon$~Indi~B\xspace itself, with a second-order distortion correction providing substantially smaller scatter about the best-fit sky path.}

Once we have a list of pixel coordinates $(x, y)$ and sky coordinates ($\alpha_{*}$, $\delta$) for all of our reference stars, we derive { second-}order distortion corrections for each image. To avoid having poorly fit stars drive the results, we clip reference stars that are $\geq$10$\sigma$ outliers. Figure~\ref{fig::distortion} shows an example of an image frame indicating the displacement of the distortion-corrected centroids according to Gaia with respect to their original ``uncorrected'' centroid locations on the detector. The empty red circles are Gaia stars that were discarded as outliers.
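
A minimal sketch of this weighted linear least-squares solve follows; the coordinate arrays are assumed to already be in consistent angular units, with the reference point subtracted as described above.
\begin{verbatim}
import numpy as np

def design_matrix(x, y, order=2):
    """Columns x**i * y**j for all i + j <= order (pixel coordinates
    relative to the image center, as in the text)."""
    cols = [x**i * y**j
            for i in range(order + 1) for j in range(order + 1 - i)]
    return np.column_stack(cols)

def fit_distortion(x, y, ra_star, dec, sig_ra_star, sig_dec, order=2):
    """ra_star = (alpha - alpha_ref) * cos(delta), dec = delta -
    delta_ref for the Gaia reference stars. Returns (a_ij, b_ij)."""
    A = design_matrix(x, y, order)
    a = np.linalg.lstsq(A / sig_ra_star[:, None],
                        ra_star / sig_ra_star, rcond=None)[0]
    b = np.linalg.lstsq(A / sig_dec[:, None],
                        dec / sig_dec, rcond=None)[0]
    return a, b

# Applying the correction to a measured centroid (x, y):
# ra_star_model = design_matrix(x, y) @ a
# dec_model     = design_matrix(x, y) @ b
\end{verbatim}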
We now seek to measure the position of $\varepsilon$~Indi~Ba\xspace on the distortion-corrected frame defined by the astrometric reference stars. We cannot fit the brown dwarfs' positions in the same way as the reference stars: their light is blended in most images. Instead, we first fix their relative position using an orbital fit to the relative astrometry (Sections \ref{subsec: relast calibrations} and \ref{sec: orbit fit}). We then model the two-dimensional image around $\varepsilon$~Indi~Ba\xspace and Bb\xspace as a linear combination of the interpolated PSFs of the five nearest field stars.

With the relative astrometry fixed, our fit to the image around $\varepsilon$~Indi~Ba\xspace and Bb\xspace has nine free parameters: five for the normalizations of the reference PSFs, one for the background intensity, one for the flux ratio between Ba\xspace and Bb\xspace, and two for the position of $\varepsilon$~Indi~Ba\xspace. The fit is linear in the first six of these parameters. We solve this linear system for each set of positions and flux ratios, and use nonlinear optimization to find the best-fit values of the latter in each image. We then fix the flux ratio to its median best-fit value of 0.195 and perform the fits again, optimizing the position of $\varepsilon$~Indi~Ba\xspace in each FORS2 image.

Figure~\ref{fig:selfcal_epsiIndib} shows two examples of the residuals of this fit. The residual intensity exhibits little structure, whether the two components are strongly blended (bottom panel) or clearly resolved (top panel). 

Our fit produces pixel coordinates of $\varepsilon$~Indi~Ba\xspace in each frame. Our use of self-calibration ensures that these pixel coordinates are in the same reference system as the astrometric standard stars. We then apply the distortion correction derived from these reference stars to convert from pixel coordinates to absolute positions in right ascension and declination. { Another important calibration for absolute astrometry is the correction for atmospheric dispersion. However, our data were taken with an atmospheric dispersion corrector (ADC) in place, and the ADC has not been sufficiently well characterized to model and remove the residual dispersion \citep{FORS2_ADC_1997}. We therefore use only the azimuthal projection of the absolute astrometry in the orbital fit. The effects and implications of the ADC and residual atmospheric dispersion are discussed in Section~\ref{sec:adc}. } 

\begin{figure}
 \vskip 0.1 truein
 \includegraphics[width=1\linewidth]{Figures/selfcal_separate.pdf} \quad
 \includegraphics[width=1.018\linewidth]{Figures/selfcal_blended.pdf}
 \caption{Example fits and residuals to FORS2 images when the two components ($\varepsilon$~Indi~Ba\xspace and $\varepsilon$~Indi~Bb\xspace) are completely resolved (top panel) or strongly blended (bottom panel).
In each panel, all values are normalized to the peak intensity of the model fits.}
 \label{fig:selfcal_epsiIndib}
\end{figure}

\section{Photometric Variability} \label{sec:photvar}

{\cite{Koen_2005}, \cite{Koen_2005_JHK}, and \cite{Koen_2013} found potential evidence of variability of the system in the near-infrared ($I$, $J$, $H$ and $K_s$) but also stated that the results were inconclusive due to the correlation between seeing and the measured variability. With the long-term monitoring data acquired by NACO ($J$, $H$, and $K_s$ bands) and FORS2 ($I$ band), we further investigate the photometric variability of $\varepsilon$~Indi~Ba\xspace and Bb\xspace in this section.}

We apply the generalized Lomb-Scargle method \citep{generalized_Lomb-Scargle_method_2009} as implemented in the {\tt astropy} Python package \citep{astropy:2013, astropy:2018}. For the NACO data, there are no other field stars within the FOV to calibrate the photometry. Therefore, we take the best-fit flux ratios of Ba\xspace to Bb\xspace from the PSF fitting and apply the periodogram to the time series of this flux ratio. For the FORS2 data, we use the {\tt photutils} python package to perform differential aperture photometry on the sky-subtracted, flat-fielded, and dark-corrected FORS2 images. We first measure the total flux of $\varepsilon$~Indi~Ba\xspace and Bb\xspace, and the fluxes of field stars in the field of view. We then normalize the flux of $\varepsilon$~Indi~Ba\xspace and Bb\xspace using the median flux of all the non-variable field stars to obtain the relative flux of the $\varepsilon$~Indi\xspace system. We apply the periodogram to this relative flux. We choose a minimum frequency of 0 and a maximum frequency of $1 \; {\rm hour}^{-1}$, which is roughly the upper frequency limit associated with rotational modulation if either object were rotating at break-up velocity. We choose a frequency grid size of $\Delta f = 1 / n_0T$, where $n_0 = 10$ and $T=10 \; {\rm yr}$, to sufficiently sample the peaks \citep{VanderPlas_2018}. 

\begin{figure}
 \centering
 \includegraphics[width=1\linewidth]{Figures/EpsIndi_lightcurves_and_scatter.pdf}
 \caption{{The top panel shows the flux ratios of $\varepsilon$~Indi~Ba\xspace over Bb\xspace in the $J$, $H$ and $K_s$ bands measured using the joint PSF fitting method, and the flux ratios of Ba\xspace + Bb\xspace over the average of the field stars measured using aperture photometry. The bottom panel shows the flux scatter of $\varepsilon$~Indi\xspace compared to the field stars in the $I$-band FORS2 data.}}
 \label{fig:light curves}
\end{figure}

{The top panel of Figure \ref{fig:light curves} shows the measured flux ratios in both the NACO and FORS2 data over all epochs. The bottom panel shows that the $\varepsilon$~Indi\xspace system has a typical flux scatter for its brightness in the FORS2 data. From our simple analysis, we do not see any significant evidence of photometric variability of the system in our periodograms. However, since the observations were not designed for the purpose of investigating variability, the non-uniform and sparsely sampled window function of the observations results in very noisy periodograms. Therefore, we also cannot reach any definitive conclusions regarding whether the system shows variability with a physical origin.}
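
For reference, a minimal sketch of the periodogram computation with {\tt astropy} is given below; the conversion of all times to days is our choice, and the flux-ratio inputs stand in for the per-epoch measurements described above.
\begin{verbatim}
import numpy as np
from astropy.timeseries import LombScargle

def flux_ratio_periodogram(t_days, ratio, ratio_err):
    """GLS periodogram of the per-epoch flux ratios (times in days).
    astropy's LombScargle with dy and fit_mean=True (the default) is
    the floating-mean, generalized form of the periodogram."""
    n0 = 10                      # oversampling factor
    T = 10.0 * 365.25            # ~10 yr baseline, in days
    df = 1.0 / (n0 * T)          # grid spacing Delta f = 1 / (n0 T)
    f_max = 24.0                 # 1 / hour in cycles per day
    freq = np.arange(df, f_max, df)
    power = LombScargle(t_days, ratio, ratio_err).power(freq)
    return freq, power
\end{verbatim}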
\section{Orbital Fit} \label{sec: orbit fit}

\subsection{Relative Astrometry}

\begin{figure*}
 \centering
 \includegraphics[width=\linewidth]{Figures/rel_orbit_corner_plot.pdf}
 \caption{Corner plot for the relative orbit fit MCMC chain. The parameters are the angular semi-major axis ($a_{\rm ang}$) in {mas}, period ($P$) in years, eccentricity ($e$), longitude of ascending node ($\Omega$) in degrees, and inclination ($i$) in degrees. {The posterior mean is used as the estimator for each parameter, and the errors are one standard deviation from the mean.}}
 \label{fig:rel orbit corner plot}
\end{figure*}

\begin{figure}
 \centering
 \includegraphics[width=\linewidth]{Figures/rel_orbit.pdf}
 \caption{Relative orbit fit of $\varepsilon$~Indi~Ba\xspace and Bb\xspace. The orbit is plotted as the relative separation of Bb\xspace from Ba\xspace, where Ba\xspace is fixed at the origin. The black dots are the measured relative astrometry, the hollow dots show the beginning of each year, and the solid line is the best fit orbit. The bottom panel shows the residuals of the separation in blue and PA in green.}
 \label{fig:rel orbit}
\end{figure}

We use the relative astrometry measurements summarized in Table \ref{table:relastroresults} to fit for a relative orbit and obtain the orbital parameters. For this, we use an adaptation of the open-source orbital fitting python package \texttt{orvara} \citep{orvara_2021}, and fit for 7 orbital parameters: the period, the angular extent of the semi-major axis ($a_{\rm ang}$, { hereafter referred to as the angular semi-major axis}), eccentricity ($e$), argument of periastron ($\omega$), time of periastron ($T_0$), longitude of ascending node ($\Omega$), and inclination ($i$). The corner plot for the MCMC chain is shown in Figure \ref{fig:rel orbit corner plot}. The best-fit relative orbit is shown in Figure \ref{fig:rel orbit}, and the best-fit orbital parameters are summarized in Table \ref{table:orbitfit}. The reduced $\chi^2$ is 0.77, which suggests that we may be slightly overestimating the errors, especially for the earlier epochs. This is possibly because the earlier epochs in general have higher quality data, while we used empirical PSFs spanning a wider range of qualities in order to generate a large enough sample for the error inflation estimate described in Section \ref{subsubsec: error inflation}. Nevertheless, we are able to produce an excellent fit and obtain very tight constraints on the orbital parameters thanks to high quality direct imaging data and a long monitoring baseline that covers almost an entire period.

\subsection{Absolute Astrometry}

We have derived optical geometric distortion corrections for all the FORS2 images in Section~\ref{sec: absast calibrations}. We describe here our approach to fitting astrometric models to the reference stars, field stars, and, most importantly, the $\varepsilon$~Indi~B\xspace system. We fit standard five-parameter astrometric models, with position, proper motion, and parallax, to the reference stars and field stars in the field of view of the FORS2 images.
The proper motions and parallaxes from the fits for the reference stars match the Gaia values to within 20\%. For the binary system $\varepsilon$~Indi~B\xspace, we fit a six-parameter astrometric solution, adding an extra parameter: the ratio between the semi-major axes of the orbits of the two components. { We also review, and ultimately project out, the effects of atmospheric dispersion. The wavelength-dependent index of refraction of air causes an apparent, airmass-dependent displacement between the redder brown dwarfs $\varepsilon$~Indi~Ba\xspace and Bb\xspace and the bluer field stars along the zenith direction.}

The results from absolute astrometry give proper motions, a parallax, and a ratio between the semi-major axes, which can then be converted into a mass ratio and individual masses. In conjunction with our previous relative astrometry results, full Keplerian solutions can be derived that completely characterize the orbits of both $\varepsilon$~Indi~Ba\xspace and $\varepsilon$~Indi~Bb\xspace. 

\subsubsection{Astrometric Solution}

The astrometric solution for a single, isolated reference star or background star is a five-parameter linear model: a reference position in RA and Declination, proper motions in RA and Declination, and the parallax. A star's instantaneous position $(\alpha*, \delta)$ is its position $(\alpha*_{\rm ref}, \delta_{\rm ref})$ at a reference epoch, plus proper motion $(\mu_{\alpha*}, \mu_\delta)$ multiplied by the time since { the reference epoch $ t_{\rm ref}$}, plus parallax $\varpi$ times the so-called parallax factors $\Delta \pi_{\alpha*}$ and $\Delta \pi_\delta$:

\begin{equation}
\begin{bmatrix}
1 & 0 & t - t_{\rm ref} & 0 & \rm \Delta \pi_{\alpha*} \\
0 & 1 & 0 & t - t_{\rm ref} & \rm \Delta \pi_{\delta} \\
\end{bmatrix} 
\begin{bmatrix}
{\alpha*_{\rm ref}} \\ \delta_{\rm ref} \\
\mu_{\alpha*} \\
\mu_{\delta} \\
\varpi
\end{bmatrix}
=
\begin{bmatrix}
\alpha* \\
\delta \\
\end{bmatrix}.
\end{equation}

To test the robustness of the distortion corrections in RA and Declination that we have derived for each image, we `reverse engineer' them: we exclude a particular reference star from the fit and solve for the astrometric solution of that star, as described above, for comparison to the Gaia parameters. In particular, we focus on the reference stars close to $\varepsilon$~Indi~B\xspace. 

For the binary system $\varepsilon$~Indi~B\xspace, the astrometric solution demands an additional parameter $r_{\rm Ba}$: the ratio of the semi-major axis of $\varepsilon$~Indi~Ba\xspace's orbit about the barycenter to the total semi-major axis $a$.
The parameter $r_{\rm Ba}$ is related to the binary mass ratio by 
\begin{equation}
 r_{\rm Ba} = \frac{M_{\rm Bb}}{M_{\rm Ba} + M_{\rm Bb}}.
\end{equation}
The model becomes
\begin{equation}
\begin{split}
\begin{bmatrix}
1 & 0 & t - t_{\rm ref} & 0 & \rm \Delta \pi_{\alpha*} & a_{\alpha*}\\
0 & 1 & 0 & t - t_{\rm ref} & \rm \Delta \pi_{\delta} & a_{\delta}\\
\end{bmatrix} 
\begin{bmatrix}
\alpha*_{\rm ref} \\
\delta_{\rm ref} \\
\mu_{\alpha*} \\
\mu_{\delta} \\
\varpi \\
r_{\rm Ba}
\end{bmatrix}
\\
=
\begin{bmatrix}
\alpha \!*_{\rm Ba}-0.5\dot{\mu}_{\alpha*} (t-t_{\rm ref})^2 \\
\delta_{\rm Ba}- 0.5 \dot{\mu}_{\delta}(t-t_{\rm ref})^2 \\
\end{bmatrix}.
\label{eq:absast_fit}
\end{split}
\end{equation}

{ We also take into account the perspective acceleration that arises as a star passes by the observer and its proper motion is exchanged for radial velocity. This effect is more significant for $\varepsilon$~Indi\xspace than for remote stars. We employ constant perspective accelerations of $\dot{\mu}_{\alpha*}$ = 0.165 mas\,yr$^{-2}$ in RA and $\dot{\mu}_{\delta}$ = 0.078 mas\,yr$^{-2}$ in Dec for the $\varepsilon$~Indi~B\xspace system based on Gaia EDR3 measurements and assuming the radial velocity measured for $\varepsilon$~Indi\xspace\,A. We adopt a reference epoch $t_{\rm ref}$=2010. With an astrometric baseline of $\sim$10 years, this gives a displacement of $0.5\dot{\mu}(t-t_{\rm ref})^2\approx 2$\,mas at the edges of the observing window, where $\dot{\mu}$ is the acceleration. The perspective acceleration, because it is known, is included in the right-hand side of Equation \eqref{eq:absast_fit}.}

\begin{figure*}
 \centering
 \includegraphics[width=0.8\textwidth]{Figures/res_2nd_order.pdf}
 \caption{Residuals to the best-fit astrometric model for $\varepsilon$~Indi~Ba\xspace. The top panel shows the residuals in altitude, and the bottom panel shows the residuals in azimuth, both plotted as a function of altitude. Empty red circles show the rejected epochs from 3$\sigma$ clipping of the azimuthal residuals. A histogram of the residuals is shown to the right of each scatter plot. Strong systematics are seen in the altitude residuals but not the azimuth ones, evident from the altitude-dependent nonzero mean for the former and the symmetric, roughly Gaussian distribution for the latter. These systematics are consistent with the magnitude expected for uncorrected ADC residual dispersion \citep{FORS2_ADC_1997}. \label{fig:residuals}}
\end{figure*}

\subsubsection{ Residual Atmospheric Dispersion}
\label{sec:adc}

The FORS2 imaging covers twelve years, with data taken over a wide range of airmasses. This makes it essential to correct for atmospheric dispersion, caused by the differential refraction of light of different colors as it passes through the atmosphere. The degree of dispersion depends on the wavelength of the light, the filter used, and the airmass, but the displacement is always along the zenith direction. Many of the FORS2 images were taken at very high airmass. The typical airmass varies over the course of the year, because the time of observation depends on during what part of the night the target is up.

All of the FORS2 images were taken with an atmospheric dispersion corrector, or ADC. The residual dispersion depends on the filter, airmass, and position in the FOV, but is typically tens of mas \citep{FORS2_ADC_1997}.
This is smaller than the system's parallax and { angular} semi-major axis, but only by a factor of $\approx$10. { Further, the ADC is only designed to provide a full correction down to a zenith angle of $50^\circ$ \citep{FORS2_ADC_1997}. At lower elevations it is parked at its maximum extent; \cite{Cardoso_2012} applied an additional correction to these data.} Because the { residual dispersion is} only in the zenith direction, we perform two fits to the absolute astrometry of $\varepsilon$~Indi~Ba\xspace. First, we use our measurements in RA and Decl.~directly. Second, we use the parallactic angle $\theta$ to take only the component of our measurements along the azimuth direction, which is immune to the effects of differential atmospheric refraction.

We project the data into the altitude-azimuth frame by left-multiplying both sides of Equation \eqref{eq:absast_fit} by the rotation matrix
\begin{equation}
 { R} = 
 \begin{bmatrix}
 \cos \theta & -\sin \theta \\
 \sin \theta & \cos \theta
 \end{bmatrix}.
 \label{eq:rot_altaz}
\end{equation}
The top row of Equation \eqref{eq:rot_altaz} corresponds to the azimuth direction, while the bottom row corresponds to the altitude direction.

Fitting in both the altitude and azimuth directions produces a parallax of 263~mas, in agreement with \cite{Cardoso_2012} but much lower than both the Hipparcos and Gaia values for $\varepsilon$~Indi\xspace~A. We then perform a fit only in the azimuth direction: we multiply both sides of Equation \eqref{eq:absast_fit} by the top row of Equation \eqref{eq:rot_altaz}. We exclude eight $3\sigma$ outliers and assume a uniform per-epoch uncertainty of 8.01\,mas, the normalization that gives a reduced $\chi^2$ value of 1.00. This procedure results in a parallax of $274.99 \pm 0.43$\,mas, in good agreement with both the Hipparcos and Gaia measurements. { We note that the 25-year time baseline between Hipparcos and Gaia causes a small parallax difference: $\varepsilon$~Indi\xspace~A has a radial velocity of 40.5 km/s \citep{GaiaDR2_2018}, which translates to a fractional distance change of $3\times 10^{-4}$, or a decrease in parallax of about 0.08 mas, over 25 years. This difference is much smaller than the uncertainties of any of these parallax measurements.}

Figure~\ref{fig:residuals} shows the residuals of the best-fit model using only azimuthal measurements: the top panel shows the residuals in altitude, while the bottom panel shows the residuals in azimuth. An upward trend and nonzero mean are seen in the altitude residuals as a function of altitude, but no dependence on altitude is seen in the azimuth residuals. This confirms that the altitude component of the position measurements is corrupted by residual atmospheric dispersion of a magnitude consistent with expectations \citep{FORS2_ADC_1997}. 

The six-parameter, azimuth-component-only astrometric solution gives a mass ratio $r_{\rm Ba} = 0.4431 \pm 0.0008$ for the binary brown dwarfs $\varepsilon$~Indi~Ba\xspace and Bb\xspace. This mass ratio is consistent with \cite{Cardoso_2012}. The only differences in our approaches arise from our use of Gaia EDR3 to anchor the distortion correction and our handling of atmospheric dispersion by taking only the azimuthal projection of the motion of the system. Our parallax of $274.99 \pm 0.43$ mas agrees with the parallax value from Hipparcos and that of \citet{Dieterich_2018}.
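
A minimal sketch of the azimuth-only solve follows; it assumes the parallax factors, the orbital displacement terms ($a_{\alpha*}$, $a_\delta$), the parallactic angles, and the perspective-acceleration-corrected positions have been precomputed, as described above.
\begin{verbatim}
import numpy as np

def azimuth_row(t, t_ref, ppar_ra, ppar_dec, a_ra, a_dec, theta):
    """Azimuth (top-row) projection of the 2 x 6 model at one epoch;
    theta is the parallactic angle."""
    m_ra = np.array([1.0, 0.0, t - t_ref, 0.0, ppar_ra, a_ra])
    m_dec = np.array([0.0, 1.0, 0.0, t - t_ref, ppar_dec, a_dec])
    return np.cos(theta) * m_ra - np.sin(theta) * m_dec

def solve_azimuth_only(rows, ra_obs, dec_obs, thetas, sigma):
    """Weighted linear solve using only the azimuth projection.
    ra_obs and dec_obs must already have the (known) perspective-
    acceleration term subtracted. Returns
    [ra_ref, dec_ref, pm_ra*, pm_dec, parallax, r_Ba]."""
    A = np.vstack(rows) / sigma[:, None]
    d = (np.cos(thetas) * ra_obs - np.sin(thetas) * dec_obs) / sigma
    return np.linalg.lstsq(A, d, rcond=None)[0]
\end{verbatim}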
\n\n\\subsection{Individual Dynamical Masses} \n\nThe orbital fit to the relative astrometry provides a precise period and angular semi-major axis. With a parallax from absolute astrometry, we convert the angular semi-major axis to distance units: $2.4058 \\pm 0.0040$\\,au. We then use Kepler's third law to calculate a total system mass of \\hbox{$120.17\\pm 0.62$}\\,$M_{\\rm Jup}$. Finally, the mass ratio derived from absolute astrometry provides individual dynamical masses of \\hbox{$66.92\\pm 0.36$}\\ and \\hbox{$53.25\\pm 0.29$}\\,$M_{\\rm Jup}$ for $\\varepsilon$~Indi~Ba\\xspace and Bb\\xspace, respectively. Table \\ref{table:orbitfit} shows the results of each component of the orbital fit, with the final individual mass measurements in the bottom panel. The uncertainty on these masses is dominated by the uncertainty in the parallax: the mass ratio is constrained significantly better than the total mass. In the following section, we use both our individual mass constraints and our measurement of the mass ratio to test models of substellar evolution.\n\n\\begin{deluxetable}{cc}\n\\tablewidth{0pt}\n \\tablecaption{Orbital Fit of the $\\varepsilon$~Indi~B\\xspace system \\label{table:orbitfit}}\n \\tablehead{\n \\colhead{{ Fitted Parameters}} & \\colhead{Posterior {mean} $\\pm$1$\\sigma$}}\n \\startdata\n Period (yr) & $11.0197 \\pm 0.0076\\phn$ \\\\\n { Angular} semi-major axis (mas) & $661.58 \\pm 0.37\\phn\\phn$ \\\\\n Eccentricity & $0.54042 \\pm 0.00063$ \\\\\n $\\omega$ (deg) & $328.27 \\pm 0.12\\phn\\phn$ \\\\\n $\\Omega$ (deg) & $147.959 \\pm 0.023\\phn\\phn$ \\\\\n Inclination (deg) & $77.082 \\pm 0.032\\phn$ \\\\\n $\\mu_{\\alpha*}$ (\\hbox{mas\\,yr$^{-1}$}) & $3987.41 \\pm 0.12 \\phn\\phn\\phn$ \\\\\n $\\mu_{\\delta}$ (\\hbox{mas\\,yr$^{-1}$}) & $-2505.35 \\pm 0.10\\phn\\phn\\phn$\\phs \\\\\n $\\Big(\\frac{M_{\\rm Bb}}{M_{\\rm Ba}+M_{\\rm Bb}}\\Big)$ & $0.4431 \\pm 0.0008$ \\\\\n $\\varpi$ (mas) & $274.99 \\pm 0.43\\phn\\phn$\\\\\n reduced $\\chi^2$ & 1.00\\\\\n \\hline\n { Derived Parameters} & Posterior {mean} $\\pm$1$\\sigma$\\\\\n \\hline\n a (au) & $2.4058 \\pm 0.0040$ \\\\\n System mass ($\\mbox{$M_{\\rm Jup}$}$) & \\hbox{$120.17\\pm 0.62$}$\\phn\\phn$ \\\\\n ${\\rm Mass}_{\\rm Ba}$ ($\\mbox{$M_{\\rm Jup}$}$) & \\hbox{$66.92\\pm 0.36$}$\\phn$\\\\\n ${\\rm Mass}_{\\rm Bb}$ ($\\mbox{$M_{\\rm Jup}$}$) & \\hbox{$53.25\\pm 0.29$}$\\phn$\n \\enddata\n\\end{deluxetable}\n\n\n\n\\section{Testing Models of Substellar Evolution} \\label{sec:BDtests}\n\nThe evolution of substellar objects is characterized by continuously-changing observable properties over their entire lifetimes. Therefore, the most powerful tests to benchmark evolutionary models utilize dynamical mass measurements of brown dwarfs of known age (usually from an age-dated stellar companion) or of binary brown dwarfs that can conservatively be presumed to be coeval. \nA single brown dwarf of known age and mass can test evolutionary models in an absolute sense, and the strength of the test is limited by the accuracies of both the age and the mass. Pairs of brown dwarfs of known masses can test the slopes of evolutionary model isochrones, even without absolute ages, because their age difference is known very precisely to be near zero unless they are very young. \n\nThe $\\varepsilon$~Indi~B\\xspace system is an especially rare case where both of these types of tests are possible. 
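\n\nBoth types of test rest on the masses and mass ratio in Table \\ref{table:orbitfit}. As a quick consistency check, the sketch below reproduces the derived quantities in the bottom panel of that table from the fitted period, angular semi-major axis, parallax, and mass ratio, under the simplifying assumptions of independent Gaussian errors and $M_{\\sun}\/M_{\\rm Jup} \\approx 1047.6$:\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(2)\nN = 100000\n\n# Fitted parameters (mean, sigma) from the top panel of the table\nP = rng.normal(11.0197, 0.0076, N)   # period (yr)\na_mas = rng.normal(661.58, 0.37, N)  # angular semi-major axis (mas)\nplx = rng.normal(274.99, 0.43, N)    # parallax (mas)\nq = rng.normal(0.4431, 0.0008, N)    # M_Bb \/ (M_Ba + M_Bb)\n\nMSUN_IN_MJUP = 1047.6                # approximate conversion factor\n\na_au = a_mas \/ plx                   # semi-major axis (au)\nMtot = a_au ** 3 \/ P ** 2 * MSUN_IN_MJUP   # Kepler's third law\nMBb = q * Mtot\nMBa = Mtot - MBb\n\nfor name, x in [('a (au)', a_au), ('M_tot', Mtot),\n                ('M_Ba', MBa), ('M_Bb', MBb)]:\n    print('%7s = %9.4f +\/- %7.4f' % (name, x.mean(), x.std()))\n\\end{verbatim}\nThe recovered means and standard deviations match the table closely (exact agreement is not expected, since the real posteriors are neither perfectly Gaussian nor uncorrelated), and the exercise confirms that the parallax term dominates the total-mass error budget.\n\n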
Indeed, $\\varepsilon$~Indi~B\\xspace is the only such system containing T~dwarfs where both the absolute test of substellar cooling with time and the coevality test of model isochrones are possible.\n\nIn the following, we consider a collection of evolutionary models applicable to $\\varepsilon$~Indi~Ba\\xspace and Bb\\xspace covering a range of input physics. The ATMO-2020 grid \\citep{2020A&A...637A..38P} represents the most up-to-date cloudless evolutionary models from the ``Lyon'' lineage that includes DUSTY \\citep{Chabrier:2000sh}, COND \\citep{2003A&A...402..701B}, and BHAC15 \\citep{2015A&A...577A..42B}. For models that include the effect of clouds, we use the hybrid tracks of \\citet[][hereinafter SM08]{2008ApJ...689.1327S}, which are cloudy at $\\mbox{$T_{\\rm eff}$}>1400$\\,K, cloudless at $\\mbox{$T_{\\rm eff}$}<1200$\\,K, and a hybrid of the two between 1200\\,K and 1400\\,K. These are the most recent models that include cloud opacity from the ``Tucson'' lineage. We also compare to the earlier cloudless Tucson models \\citep{Burrows1997}, given their ubiquity in the literature.\n\nIn order to test these models, we chose pairs of observable parameters from among the fundamental properties of mass, age, and luminosity. Using any two parameters, we computed the third from evolutionary models. When the first two parameters were mass and age, we bilinearly interpolated the evolutionary model grid to compute luminosity. When the first two parameters were luminosity and either mass or age, we used a Monte Carlo rejection sampling approach as in our past work \\citep{Dupuy+Liu_2017,2018AJ....156...57D,Brandt2021_Six_Masses}. Briefly, we randomly drew values for the observed independent variable according to the measured mass or age posterior distribution, and then drew values for the other from an uninformed prior distribution (either log-flat in mass or linear-flat in age). We then bilinearly interpolated luminosities from the randomly drawn mass and age distributions. For each interpolated luminosity $L_{\\rm bol}^{\\prime}$, we computed $\\chi^2 = (\\mbox{$L_{\\rm bol}$}-L_{\\rm bol}^{\\prime})^2\/\\sigma_{L_{\\rm bol}}^2$. For each trial we drew a random number between zero and one, and we only retained trial sets of mass, age, and luminosity in our output posterior if $e^{-(\\chi^2-\\chi^2_{\\rm min})\/2}$ was greater than the random number.\n\nWe used the luminosities of $\\varepsilon$~Indi~Ba\\xspace and Bb\\xspace from \\citet{2010A&A...510A..99K}, accounting for the small difference between the {\\sl Hipparcos }\\ parallax of 276.06\\,mas that they used and our value of 274.99\\,mas, which resulted in $\\log(\\mbox{$L_{\\rm bol}$}\/\\mbox{$L_{\\sun}$}) = -4.691\\pm0.017$\\,dex and $-5.224\\pm0.020$\\,dex. Their luminosity errors were dominated by their measured photometry of $\\varepsilon$~Indi~Ba\\xspace and $\\varepsilon$~Indi~Bb\\xspace and the absolute flux calibration of Vega's spectrum, so our errors are identical to theirs.\n\nOur Monte Carlo approach naturally accounts for the relevant covariances between measured parameters. 
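\n\nTo make the procedure concrete, a schematic implementation of the rejection-sampling step is sketched below. The analytic power law stands in for bilinear interpolation of a real model grid, and all numbers are illustrative; only the acceptance rule mirrors our actual procedure:\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(3)\n\n# Toy cooling relation in place of an interpolated model grid\ndef logL_model(m_mjup, age_gyr):\n    return -4.0 + 2.8 * np.log10(m_mjup \/ 70.0) - 1.2 * np.log10(age_gyr)\n\nm_meas, m_err = 66.92, 0.36          # measured mass (M_Jup)\nlogL_meas, logL_err = -4.691, 0.017  # measured log10(L\/Lsun)\n\nN = 200000\nmass = rng.normal(m_meas, m_err, N)  # draw from the mass posterior\nage = rng.uniform(0.5, 10.0, N)      # linear-flat age prior (Gyr)\nchi2 = ((logL_model(mass, age) - logL_meas) \/ logL_err) ** 2\nkeep = rng.uniform(0.0, 1.0, N) < np.exp(-0.5 * (chi2 - chi2.min()))\n\nlo, med, hi = np.percentile(age[keep], [16, 50, 84])\nprint('cooling age: %.2f +%.2f -%.2f Gyr' % (med, hi - med, med - lo))\n\\end{verbatim}\nThe retained samples of mass, age, and interpolated luminosity form the output posterior used in the comparisons that follow.\n\n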
There are six independently-measured parameters for which we randomly drew Gaussian-distributed values: the orbital period ($P$), the semi-major axis in angular units ($a^{\\prime\\prime}$), the ratio of the mass of $\\varepsilon$~Indi~Bb\\xspace to the total mass of $\\varepsilon$~Indi~B\\xspace ($M_{\\rm Bb}\/M_{\\rm tot}$), the parallax in the same angular units as the semi-major axis ($\\varpi$), and the two bolometric fluxes computed from the luminosities and distance in \\citet{2010A&A...510A..99K}. From these, we computed the total mass, $M_{\\rm tot} = (a^{\\prime\\prime}\/\\varpi)^3 (P\/1{\\rm yr})^{-2}$, and the individual masses and luminosities.\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=0.32\\textwidth]{Figures\/epsIndBC_logl-agyr_TUCSON.pdf}\n \\includegraphics[width=0.32\\textwidth]{Figures\/epsIndBC_logl-agyr_ATMO_CEQ.pdf}\n \\includegraphics[width=0.32\\textwidth]{Figures\/epsIndBC_logl-agyr_SM08_hybrid_solar.pdf}\n \\caption{Substellar cooling curves derived from three independent evolutionary models given our measured masses. The top curve in each panel corresponds to $\\varepsilon$~Indi~Ba\\xspace, and the bottom curve corresponds to $\\varepsilon$~Indi~Bb\\xspace. The darker shaded region of each curve shows the 1$\\sigma$ range in our measured mass, and the lighter shading is the 2$\\sigma$ range. On each curve, the ages corresponding to $\\mbox{$T_{\\rm eff}$} = 1400$\\,K and 1200\\,K are marked, indicating the approximate beginning and ending of the L\/T transition. Over-plotted on each panel are the 1$\\sigma$ and 2$\\sigma$ joint uncertainty contours for the age and luminosities of $\\varepsilon$~Indi~Ba\\xspace and Bb\\xspace.}\n \\label{fig:lbol-age}\n\\end{figure*}\n\n\\subsection{Absolute test of $\\mbox{$L_{\\rm bol}$}(t)$}\n\nIn general, tests of substellar luminosity as a function of time are either dominated by the uncertainty in the age or in the mass. In the case of the $\\varepsilon$~Indi~B\\xspace system, with highly precise masses having 0.5\\% errors, the uncertainty in the system age ($t = 3.5^{+0.8}_{-1.0}$\\,Gyr) is by far the dominant source of uncertainty. \n\nFigure~\\ref{fig:lbol-age} shows the measured joint confidence intervals on luminosities and age of $\\varepsilon$~Indi~Ba\\xspace and Bb\\xspace compared to evolutionary model predictions given their measured masses. The measured luminosity-age contours overlap all model predictions to within $\\approx$1$\\sigma$ or less for both components. To quantitatively test models and observations, we compared our model-derived substellar cooling ages for $\\varepsilon$~Indi~Ba\\xspace and Bb\\xspace to $\\varepsilon$~Indi\\xspace~A's age posterior, finding that they are all statistically consistent with the stellar age.\n\nOur results for $\\varepsilon$~Indi~Ba\\xspace and Bb\\xspace are comparable to other relatively massive (50--75\\,\\mbox{$M_{\\rm Jup}$}) brown dwarfs of intermediate age (1--5\\,Gyr) that also broadly agree with evolutionary model predictions of luminosity as a function of age \\citep{Brandt2021_Six_Masses}. 
These include objects such as HR~7672~B \\citep{Brandt_2019}, HD~4747~B \\citep{Crepp+Principe+Wolff+etal_2018}, HD~72946~B \\citep{Maire+Baudino+Desidera_2020}, and HD~33632~Ab \\citep{2020ApJ...904L..25C}.\n\nHowever, despite agreeing with models in an absolute sense, it is evident in Figure~\\ref{fig:lbol-age} that the ATMO-2020 and \\citet{Burrows1997} models prefer a younger age for $\\varepsilon$~Indi~Ba\\xspace than for $\\varepsilon$~Indi~Bb\\xspace. To examine the statistical significance of this difference in model-derived ages between the two components, we now consider only their measured masses and luminosities, excluding the rather uncertain stellar age.\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=0.32\\textwidth]{Figures\/epsIndBC_logl-mass_TUCSON.pdf}\n \\includegraphics[width=0.32\\textwidth]{Figures\/epsIndBC_logl-mass_ATMO_CEQ.pdf}\n \\includegraphics[width=0.32\\textwidth]{Figures\/epsIndBC_logl-mass_SM08_hybrid_solar.pdf}\n \\caption{Isochrones from three different evolutionary models, ranging from 2\\,Gyr to 6\\,Gyr. Black and gray contours show the joint 1$\\sigma$ and 2$\\sigma$ confidence intervals of the masses and luminosities of $\\varepsilon$~Indi~Ba\\xspace and Bb\\xspace. Because these two brown dwarfs must be coeval, they should lie along a single model isochrone. The only models that pass this test are the \\citet{2008ApJ...689.1327S} hybrid models, which predict a distinctly different mass--luminosity relation for brown dwarfs. These models have a much shallower dependence of luminosity on mass as objects cool through the L\/T transition over $\\mbox{$T_{\\rm eff}$} = 1400$\\,K to 1200\\,K, changing from cloudy to cloud-free atmosphere boundary conditions.}\n \\label{fig:lbol-mass}\n\\end{figure*}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.45\\textwidth]{Figures\/epsIndBC-coeval.pdf}\n \\caption{Probability distributions of the difference between the model-derived substellar cooling ages ($t_{\\rm cool}$) of $\\varepsilon$~Indi~Ba\\xspace and Bb\\xspace. The dashed line shows the expectation that $t_{\\rm cool, Bb} = t_{\\rm cool, Ba}$. Only the \\citet{2008ApJ...689.1327S} hybrid models predict consistent, coeval ages. This is the highest-precision coevality test of brown dwarf binaries to date, and it supports previous results from brown dwarf binaries with mass errors of $\\approx$5\\% \\citep{2015ApJ...805...56D,Dupuy+Liu_2017}.}\n \\label{fig:benchmark-coeval}\n\\end{figure}\n\n\\subsection{Isochrone test of $M$--$\\mbox{$L_{\\rm bol}$}$ relation for T~dwarfs}\n\nEvolutionary models of brown dwarfs, from some of the earliest theoretical calculations up to modern work \\citep[e.g.,][]{1993RvMP...65..301B,2020A&A...637A..38P}, typically predict a power-law relationship between mass and luminosity with a slope of $\\Delta\\log{L} \/ \\Delta\\log{M} = 2.5$--3.0. This general agreement between models with very different assumptions---models that vary greatly in other predictions, such as the mass of the hydrogen-fusion boundary---can be seen in the slopes of isochrones for 40--60\\,\\mbox{$M_{\\rm Jup}$}\\ brown dwarfs in Figure~\\ref{fig:lbol-mass}.\n\nOne set of models, from \\citet{2008ApJ...689.1327S}, that substantially alters the atmospheric boundary condition as objects cool from $\\mbox{$T_{\\rm eff}$} = 1400$\\,K to 1200\\,K predicts a much shallower slope for the $M$--$L$ relation during that phase of evolution (Figure~\\ref{fig:lbol-mass}). 
These so-called hybrid models provide the best match to the $M$--$L$ relation as measured in binaries composed of late-L dwarf primaries ($\\mbox{$T_{\\rm eff}$} \\approx 1400$\\,K) and early-T dwarf secondaries ($\\mbox{$T_{\\rm eff}$} \\approx 1200$\\,K), objects that straddle this evolutionary phase \\citep{2015ApJ...805...56D,Dupuy+Liu_2017}. A fundamental prediction of these models is that during the L\/T transition, objects of similar luminosity can have wider-ranging masses than in other models. The other chief prediction is that luminosity fades more slowly during the L\/T transition, so that the brown dwarfs emerging from this phase are more luminous than in other models.\n\n$\\varepsilon$~Indi~B\\xspace is the only example of a binary with precise individual masses where one component is an L\/T transition brown dwarf and the other is a cooler T~dwarf. This provides a unique test of the $M$--$L$ relation, where the cooler brown dwarf is well past the L\/T transition and the other is in the middle of it. According to hybrid models, the brown dwarf within the L\/T transition will be experiencing slower cooling, so it would be more luminous than in other models. On the other hand, with the immediate removal of cloud opacity in hybrid models below 1200\\,K, a brown dwarf will cool even faster than predicted by other, non-hybrid models. These two effects predict that a system like $\\varepsilon$~Indi~B\\xspace will, in fact, have an especially steep $M$--$L$ relation.\n\nOur measured masses give a particularly steep slope for the $M$--$L$ relation of $\\Delta\\log{L} \/ \\Delta\\log{M} = 5.37\\pm0.08$ between the L\/T-transition primary $\\varepsilon$~Indi~Ba\\xspace and the cooler secondary $\\varepsilon$~Indi~Bb\\xspace. The only evolutionary models that predict such a steep slope are the hybrid models of \\citet{2008ApJ...689.1327S}.\n\nTo quantitatively test models, we compared the model-derived cooling ages of $\\varepsilon$~Indi~Ba\\xspace and Bb\\xspace, given their measured masses and luminosities (Figure~\\ref{fig:benchmark-coeval}). Models like ATMO-2020 that assume a single, cloud-free atmospheric boundary condition are inconsistent with our measurements at 3.9$\\sigma$. At 6.9$\\sigma$, models from \\citet{Burrows1997} are even more inconsistent: the bunching up of isochrones around the end of the main sequence, which has an effect similar to that of the bunching up of isochrones due to slowed cooling in the hybrid models, occurs at higher masses than in ATMO-2020.\n\nThe $\\varepsilon$~Indi~B\\xspace system therefore provides further validation of hybrid evolutionary models, where the atmosphere boundary condition is changed drastically over the narrow range of \\mbox{$T_{\\rm eff}$}\\ corresponding to late-L and early-T dwarfs. This validation is no longer confined to objects within the L\/T transition itself: it affirms the consequences of slowed cooling during the L\/T transition for cooler brown dwarfs ($\\mbox{$T_{\\rm eff}$} < 1000$\\,K) that have evolved through it.\n\n\\subsection{Testing Model Atmospheres: \\mbox{$T_{\\rm eff}$}\\ and \\mbox{$\\log(g)$}}\n\nBrown dwarfs that have both directly measured masses and individually measured spectra have long been used in another type of benchmark, one that tests for consistency between evolutionary models and the atmosphere models that they use as their surface boundary condition. Comparison of model atmospheres to observed spectra allows for determinations of \\mbox{$T_{\\rm eff}$}, \\mbox{$\\log(g)$}, and metallicity. Evolutionary models, in turn, predict brown dwarf radii as a function of mass, age, and metallicity. Combining these radii with empirically determined luminosities produces largely independent estimates of $\\mbox{$T_{\\rm eff}$} = (\\mbox{$L_{\\rm bol}$}\/4\\pi R^2 \\sigma_{\\rm SB})^{1\/4}$, and combining them with masses gives estimates of $\\mbox{$\\log(g)$} = \\log(GM\/R^2)$. (Evolutionary model radii have a small dependence on the model atmospheres, and thus estimates of \\mbox{$T_{\\rm eff}$}\\ and \\mbox{$\\log(g)$}\\ from their radii are not strictly independent.)
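\n\nThe conversions themselves are elementary; the minimal sketch below (SI constants; the radius is a placeholder rather than a value from any particular grid) applies them, and also verifies the central value of the $M$--$L$ slope quoted in the previous subsection:\n\\begin{verbatim}\nimport numpy as np\n\nsigma_sb = 5.670e-8    # W m^-2 K^-4\nG = 6.674e-11          # m^3 kg^-1 s^-2\nL_sun = 3.828e26       # W\nM_jup = 1.898e27       # kg\nR_jup = 7.1492e7       # m\n\nM = 66.92 * M_jup      # measured mass of Ba\nlogL = -4.691          # measured log10(L\/Lsun) of Ba\nR = 0.90 * R_jup       # hypothetical evolutionary-model radius\n\nL = 10.0 ** logL * L_sun\nteff = (L \/ (4.0 * np.pi * R ** 2 * sigma_sb)) ** 0.25\nlogg = np.log10(G * M \/ R ** 2 * 100.0)  # factor 100: m\/s^2 to cgs\nprint('Teff = %.0f K, log g = %.3f (cgs)' % (teff, logg))\n\n# Central value of the M-L slope between Ba and Bb; reproducing the\n# quoted +\/-0.08 requires the full correlated posterior\nslope = (-4.691 - -5.224) \/ (np.log10(66.92) - np.log10(53.25))\nprint('dlogL\/dlogM = %.2f' % slope)\n\\end{verbatim}\nWith the placeholder radius this returns $\\mbox{$T_{\\rm eff}$} \\approx 1280$\\,K and $\\mbox{$\\log(g)$} \\approx 5.31$\\,dex, in the neighborhood of the tabulated values below, and an $M$--$L$ slope of 5.37.\n\n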
There are many examples of such benchmark tests, ranging from late-M dwarfs \\citep[e.g.,][]{2001ApJ...554L..67K,2004ApJ...615..958Z,2010ApJ...721.1725D} and L~dwarfs \\citep[e.g.,][]{2004A&A...423..341B,2009ApJ...692..729D,2010ApJ...711.1087K} to T~dwarfs \\citep[e.g.,][]{2008ApJ...689..436L,Dupuy+Liu_2017}.\n\nFrom \\citet{2010A&A...510A..99K}, $\\varepsilon$~Indi~Ba\\xspace and Bb\\xspace have perhaps the most extensive and detailed spectroscopic observations (0.6--5.1\\,$\\mu$m at up to $R\\sim5000$) of any brown dwarfs with dynamical mass measurements. They found that BT-Settl atmosphere models \\citep{2012RSPTA.370.2765A} with parameters of $\\mbox{$T_{\\rm eff}$} = 1300$--1340\\,K and $\\mbox{$\\log(g)$} = 5.50$\\,dex best matched $\\varepsilon$~Indi~Ba\\xspace. For $\\varepsilon$~Indi~Bb\\xspace, they found $\\mbox{$T_{\\rm eff}$} = 880$--940\\,K and $\\mbox{$\\log(g)$} = 5.25$\\,dex.\n\nWe computed evolutionary model-derived values for \\mbox{$T_{\\rm eff}$}\\ and \\mbox{$\\log(g)$}\\ to compare to the model atmosphere results of \\citet{2010A&A...510A..99K}. The most precise estimates result from using the mass and luminosity to derive a substellar cooling age and then interpolating \\mbox{$T_{\\rm eff}$}\\ and \\mbox{$\\log(g)$}\\ from the same evolutionary model grid using the measured mass and the cooling age. The SM08 hybrid models gave $\\mbox{$T_{\\rm eff}$} = 1312\\pm13$\\,K and $972\\pm13$\\,K for $\\varepsilon$~Indi~Ba\\xspace and Bb\\xspace, respectively, and $\\mbox{$\\log(g)$} = 5.365\\pm0.006$\\,dex and $5.288\\pm0.003$\\,dex. These evolutionary model-derived values agree remarkably well with the model atmosphere results, which were based on an atmosphere grid with discrete steps of 20\\,K in \\mbox{$T_{\\rm eff}$}\\ and 0.25\\,dex in \\mbox{$\\log(g)$}. \n\nATMO-2020 models are only strictly appropriate for $\\varepsilon$~Indi~Bb\\xspace, and they give $\\mbox{$T_{\\rm eff}$} = 992\\pm13$\\,K and $\\mbox{$\\log(g)$} = 5.311\\pm0.003$\\,dex. This effective temperature is $\\approx$4$\\sigma$ higher than the BT-Settl model atmosphere temperature. ATMO-2020 models are actually based on this family of model atmospheres (BT-Cond and BT-Settl should be effectively equivalent at this \\mbox{$T_{\\rm eff}$}), so this suggests a genuine $\\approx$50\\,K discrepancy between atmosphere model-derived \\mbox{$T_{\\rm eff}$}\\ (too low) and evolutionary model-derived \\mbox{$T_{\\rm eff}$}\\ (too high). If so, this could be due to a combination of systematics in the atmosphere models (e.g., non-equilibrium chemistry, inaccurate opacities) and\/or a 10--20\\% error in the ATMO-2020 evolutionary model radii.\n\n\\section{Conclusions} \\label{sec:conclusions}\n\nIn this paper we use $\\sim$12 years of VLT data to infer dynamical masses of \\hbox{$66.92\\pm 0.36$}\\,$\\mbox{$M_{\\rm Jup}$}$ and \\hbox{$53.25\\pm 0.29$}\\,$\\mbox{$M_{\\rm Jup}$}$ for the brown dwarfs $\\varepsilon$~Indi~Ba\\xspace and Bb\\xspace, respectively. 
These masses put the two objects firmly below the hydrogen burning limit. Our system mass agrees with that of \\cite{Cardoso_2009}, who estimated a system mass of $121 \\pm 1 \\; M_{\\rm Jup}$. With extra data from the completed relative and absolute astrometry monitoring campaign, we are able to derive precise individual masses and improve upon their previous analysis on several fronts. Using Gaia EDR3, we provide a much more precise calibration of both relative and absolute astrometry. In addition, we have shown that our joint PSF fitting method accounts for the effect of overlapping halos reasonably well, and we have adjusted our final errors for the relative astrometry according to our systematics analysis. Lastly, we have investigated and corrected for the systematics due to differential atmospheric refraction and residual atmospheric dispersion. As a result, we are able to obtain very tight constraints on the orbital parameters and final masses, and measure a parallax consistent with both the Hipparcos and Gaia values.\n\nOur results disagree with \\cite{Dieterich_2018}, who used the photocenter's orbit together with three NACO epochs to derive a mass of $75.0 \\pm 0.82 \\; M_{\\rm Jup}$ for Ba\\xspace, and a mass of $70.1 \\pm 0.68 \\; M_{\\rm Jup}$ for Bb\\xspace. These masses are at the boundary of the hydrogen burning limit, challenging theories of substellar structure and evolution. We cannot conclusively say why \\cite{Dieterich_2018} derive much higher masses. However, we are able to reproduce their results, and find that rotating their measurements into an azimuth-only frame produces a mass closer to ours. We speculate that highly asymmetric uncertainties in RA\/Decl.~for a few of their measurements had a disproportionate effect on the results. \n\nWe also provide a Fourier analysis of $\\varepsilon$~Indi~B\\xspace's fluxes to investigate its potential variability. We find no definitive evidence of variability with a frequency less than 1 hr$^{-1}$.\n\nOur newly precise masses and mass ratio enable new tests of substellar evolutionary models. We find that $\\varepsilon$~Indi~Ba\\xspace and Bb\\xspace are generally consistent with cooling models at the activity age of $3.5^{+0.8}_{-1.0}$\\,Gyr we derive for $\\varepsilon$~Indi\\xspace~A. However, the two brown dwarfs are consistent with coevality only under hybrid models like those of \\cite{2008ApJ...689.1327S}, with a transition to cloud-free atmospheres near the L\/T transition. \n\nOur masses for $\\varepsilon$~Indi~Ba\\xspace and Bb\\xspace, precise to $\\approx$0.5\\%, and our mass ratio, precise to $\\approx$0.2\\%, establish the $\\varepsilon$~Indi~B\\xspace binary brown dwarf as a definitive benchmark for substellar evolutionary models. As one of the nearest brown dwarf binaries, it is also exceptionally well-suited to detailed characterization with future telescopes and instruments, including JWST. $\\varepsilon$~Indi~B\\xspace, with its two components straddling the L\/T transition, now provides some of the most definitive evidence for cloud clearing and slowed cooling in these brown dwarfs. \\\\\n\n\\acknowledgements{T.D.B.~gratefully acknowledges support from the National Aeronautics and Space Administration (NASA) under grant 80NSSC18K0439 and from the Alfred P.~Sloan Foundation. 
MJM and CVC would like to thank their collaborators on the original ESO VLT NACO and FORS2 programmes which provided the great majority of the $\\varepsilon$~Indi~B\\xspace astrometry data re-reduced and analysed in this paper: Laird Close, Ralf-Dieter Scholz, Rainer Lenzen, Wolfgang Brandner, Nicolas Lodieu, Hans Zinnecker, Rainer K\u00f6hler, and Quinn Konopacky. We would also like to recognise the tremendous efforts made by the many ESO service mode astronomers in carrying out these observations over many runs between 2004 and 2016, more than a full period of the binary orbit, and we thank the ESO TAC for continuing to support the programme throughout that time. Based on observations collected at the European Southern Observatory under ESO programs 072.C-0689(F), 073.C-0582(A), 074.C-0088(A), 075.C-0376(A), 076.C-0472(A), 077.C-0798(A), 078.C-0308(A), 079.C-0461(A), 380.C-0449(A), 381.C-0417(A), 382.C-0483(A), 383.C-0895(A), 384.C-0657(A), 385.C-0994(A), 386.C-0376(A), 087.C-0532(A), 088.C-0525(A), 089.C-0807(A), 091.C-0899(A), 381.C-0860(A), 072.C-0689(D), 075.C-0376(B), 076.C-0472(B), 077.C-0798(B), 078.C-0308(B), 079.C-0461(B), 380.C-0449(B), 381.C-0417(B), 382.C-0483(B), 383.C-0895(B), 384.C-0657(B), 385.C-0994(B), 386.C-0376(B), 087.C-0532(B), 088.C-0525(B), 089.C-0807(B), and 091.C-0899(B).}\n\n\\software{photutils (\\citealt{Stetson_1987, photutils110}), astropy (\\citealt{Astropycollab2013aaAstropy}), orvara (\\citealt{orvara_2021}), Scipy (\\citealt{2020SciPy-NMeth}), matplotlib (\\citealt{Hunter:2007}), numpy (\\citealt{harris2020array})}\\\\\n\n\\input{main.bbl}\n\n\\end{document}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nA new race of insidious threats called Advanced Persistent Threats (APTs) has joined the league of eCrime activities on the Internet, and caught a lot of organizations off guard in fairly recent times. Critical infrastructures and the governments, corporations, and individuals supporting them are under attack by these increasingly sophisticated cyber threats. The goal of the attackers is to gain access to intellectual property, personally identifiable information, financial data, and targeted strategic information. This is not simple fraud or hacking, but intellectual property theft and infrastructure corruption on a grand scale~\\cite{Daly2009}. APTs use multiple attack techniques and vectors that are conducted by stealth to avoid detection, so that hackers can retain control over target systems unnoticed for long periods of time. Interestingly, no matter how sophisticated these attack vectors may be, the most common ways of getting them inside an organization's network are social engineering attacks like phishing, and targeted spear phishing emails. There have been numerous reports of spear phishing attacks causing losses of millions of dollars in the recent past.~\\footnote{\\url{http:\/\/businesstech.co.za\/news\/internet\/56731\/south-africas-3-billion-phishing-bill\/}}~\\footnote{\\url{http:\/\/www.scmagazine.com\/stolen-certificates-used-to-deliver-trojans-in-spear-phishing-campaign\/article\/345626\/}} Although antivirus and other similar protection software exist to mitigate such attacks, it is always better to stop such vectors at the entry level itself~\\cite{kumaraguru2010teaching}. 
This requires sophisticated techniques to deter spear phishing attacks and identify malicious emails at a very early stage.\n\n\nIn this research paper, we focus on identifying such spear phishing emails, wherein the attacker targets an individual or company, instead of anyone in general. Spear phishing emails usually contain victim-specific context instead of general content. Since it is targeted, a spear phishing attack looks much more realistic, and is thus harder to detect~\\cite{Jakobsson2006}. A typical spear phishing attack can broadly be broken down into two phases. In the first phase, the attacker tries to gather as much information about the victim as possible, in order to craft a scenario which looks realistic, is believable for the victim, and makes it very easy for the attacker to gain the victim's trust. In the second phase, the attacker makes use of this gained trust, and lures the victim into giving out sensitive \/ confidential information like a user name, password, bank account details, credit card details, etc. The attacker can also exploit the victim's trust by infecting the victim's system, luring them into downloading and opening malicious attachments~\\cite{Jakobsson2006}. While spear phishing may be a timeworn technique, it continues to be effective even in today's Web 2.0 landscape. A very recent example of such a spear phishing attack was reported by FireEye. Here, attackers exploited the news of the disappearance of Malaysian Airlines Flight MH370 to lure government officials across the world into opening malicious attachments (Figure~\\ref{fig:mh370attach}) sent to them over email~\\cite{fireeye:2014}. In 2011, security firm RSA suffered a breach via a targeted attack; analysis revealed that the compromise began with the opening of a spear phishing email.~\\footnote{\\url{http:\/\/blogs.rsa.com\/rivner\/anatomy-of-an-attack\/}} That same year, email service provider Epsilon also fell prey to a spear phishing attack that caused the organization to lose an estimated US\\$4 billion.~\\footnote{\\url{http:\/\/www.net-security.org\/secworld.php?id=10966}} These examples indicate that spear phishing has been, and continues to be, one of the biggest forms of eCrime over the past few years, especially in terms of the monetary losses incurred. \n\n\n\\begin{figure}[!h]\n \\begin{center}\n\\fbox{\\includegraphics[scale=0.61]{mh3703.png}}\n \\end{center}\n \\caption{Example of a malicious PDF attachment sent via a spear phishing email. The PDF attachment was said to contain information about the missing Malaysian Airlines Flight 370.\n }\n \\label{fig:mh370attach}\n\n\\end{figure}\n\n\n\nSpear phishing was first studied as context-aware phishing by Jakobsson et al. in 2005~\\cite{Jakobsson2005}. A couple of years later, Jagatic et al. performed a controlled experiment and found that the number of victims who fell for context-aware phishing \/ spear phishing was 4.5 times the number of victims who fell for general phishing~\\cite{Jagatic2007}. This work was preliminary proof that spear phishing attacks have a much higher success rate than normal phishing attacks. It also highlighted that what separates a regular phishing attack from a spear phishing attack is the additional information \/ context. Online social media services like LinkedIn, which provide rich professional information about individuals, can be one such source for extracting contextual information, which can be used against a victim. 
Recently, the FBI also warned that spear phishing emails typically contain accurate information about victims, often obtained from data posted on social networking sites, blogs or other websites.~\\footnote{\\url{http:\/\/www.computerweekly.com\/news\/2240187487\/FBI-warns-of-increased-spear-phishing-attacks}} In this work, we investigate if publicly available information from LinkedIn can help in differentiating a spear phishing from a non spear phishing email received by an individual.\nWe obtained a dataset of true positive targeted spear phishing emails, and a dataset containing a mixture of non targeted spam and phishing attack emails, from Symantec's enterprise email scanning service, which is deployed at multiple international organizations around the world. To conduct the analysis at the organizational level, we extracted the most frequently occurring domains from the \\emph{``to\"} fields of these emails, and filtered out the 14 most targeted organizations where the first name and last name could be derived from the email address.\nOur final dataset consisted of 4,742 spear phishing emails sent to 2,434 unique employees, and 9,353 non targeted spam \/ phishing emails sent to 5,914 unique employees. For a more exhaustive analysis, we also used a random sample of 6,601 benign emails from the Enron email dataset~\\cite{Cohen2009} sent to 1,240 unique employees with LinkedIn profiles.\n\nWe applied 4 classification algorithms, and were able to achieve a maximum accuracy of 97.04\\% for classifying spear phishing and non spear phishing emails using a combination of \\emph{email} features and \\emph{social} features. However, without the \\emph{social} features, we were able to achieve a slightly higher accuracy of 98.28\\% for classifying these emails. We then looked at the most informative features, and found that \\emph{email} features performed better than \\emph{social} features at differentiating targeted spear phishing emails from non targeted spam \/ phishing emails, and benign Enron emails. To the best of our knowledge, this is the first attempt at making use of a user's social media profile to distinguish targeted spear phishing emails from non targeted attack emails received by her.\nHaving found that \\emph{social} features extracted from LinkedIn profiles do not help in distinguishing spear phishing and non spear phishing emails, our results encourage exploring other social media services like Facebook and Twitter. Such studies can be particularly helpful in mitigating APTs, and reducing the chances of attacks on an organization at the entry level itself. \n\nThe rest of the paper is arranged as follows. Section~\\ref{sec:bg} discusses the related work; Section~\\ref{sec:dcm} describes our email and LinkedIn profile datasets, and the data collection methodology. The analysis and results are described in Section~\\ref{sec:ar}. We conclude our findings, and discuss the limitations, contributions, and scope for future work in Section~\\ref{sec:conclusion}.\n\n\n\n\\section{Background and Related work} \\label{sec:bg}\n\nThe concept of targeted phishing was first introduced in 2005 as \\emph{social phishing} or \\emph{context-aware phishing}~\\cite{Jakobsson2005}. The authors of this work argued that if the attacker can infer or manipulate the context of the victim before the attack, this context can be used to make the victim volunteer the target information. This theory was followed up with an experiment where Jagatic et al. 
harvested freely available acquaintance data of a group of Indiana University students by crawling social networking websites like Facebook, LinkedIn, MySpace, Orkut, etc.~\\cite{Jagatic2007}. This contextual information was used to launch an actual (but harmless) phishing attack targeting students in the 18-24 year age group. Their results indicated that about 4.5 times as many students fell for the attack that made use of contextual information as for the generic phishing attack. However, the authors of this work did not provide details of the kind and amount of information they were able to gather from social media websites about the victims.\n\n\\paragraph{Who falls for phish}\n\nDhamija et al. provided the first empirical evidence about which malicious strategies are successful at deceiving general users~\\cite{Dhamija2006}. Kumaraguru et al. conducted a series of studies and experiments on creating and evaluating techniques for teaching people not to fall for phish~\\cite{kumaraguru2009school,kumaraguru2007protecting,kumaraguru2010teaching}. Lee studied data from Symantec's enterprise email scanning service, and calculated the odds ratio of being attacked for these users, based on their area of work. The results of this work indicated that users with the subjects \\emph{``Social studies\"} and \\emph{``Eastern, Asiatic, African, American and Australasian Languages, Literature and Related Subjects\"} were both positively correlated with targeted attacks at more than 95\\% confidence~\\cite{Lee2012}. Sheng et al. conducted an online survey with 1,001 participants to study who is more susceptible to phishing based on demographics. Their results indicated that women are more susceptible than men to phishing, and participants between the ages of 18 and 25 are more susceptible to phishing than other age groups~\\cite{Sheng2010}. In similar work, Halevi et al. found a strong correlation between gender and response to a prize phishing email. They also found that neuroticism is the factor most correlated with responding to the email. Interestingly, the authors detected no correlation between the participants' estimate of being vulnerable to phishing attacks and actually being phished. This suggests that susceptibility to phishing is not due to a lack of awareness of phishing risks, and that users' real-time responses to phishing are hard to predict in advance~\\cite{Halevi2013}. \n\n\\paragraph{Phishing email detection techniques}\n\nTo keep this work focused, we concentrate only on techniques proposed for detecting phishing emails; we do not cover all the techniques used for detecting phishing URLs or phishing websites in general. Abu-Nimeh et al.~\\cite{Abu-Nimeh2007} studied the performance of different classifiers used in text mining, such as Logistic regression, classification and regression trees, Bayesian additive regression trees, Support Vector Machines, Random forests, and Neural networks. Their dataset consisted of a public collection of about 1,700 phishing mails, and 1,700 legitimate mails from private mailboxes. They focused on word richness to classify phishing emails based on 43 keywords. The features represent the frequencies of ``bag-of-words\" features that appear in phishing and legitimate emails. However, the ever-evolving techniques and language used in phishing emails might make it hard for this approach to be effective over a long period of time.\n\nVarious feature selection approaches have also been recently introduced to assist phishing detection. 
A lot of previous work~\\cite{Abu-Nimeh2007,Chandrasekaran2006,Fette2007} has focused on email content in order to classify emails as either benign or malicious. Chandrasekaran et al.~\\cite{Chandrasekaran2006} presented an approach based on natural structural characteristics in emails. The features included the number of words in the email, the vocabulary, the structure of the subject line, and the presence of 18 keywords. They tested on 400 data points, which were divided into five sets with different types of feature selection. Their results were best when more features were used to classify phishing emails using a Support Vector Machine. The authors of this work proposed a rich set of stylometric features, but the dataset they used was very small compared to a lot of other similar work. Fette et al.~\\cite{Fette2007}, on the other hand, considered 10 features, mostly examining URLs and the presence of JavaScript, to flag emails as phishing. Nine features were extracted from the email, and the last feature was obtained from a WHOIS query. They followed a similar approach to Chandrasekaran et al., but used larger datasets: about 7,000 normal emails and 860 phishing emails. Their filter scored an F-measure of 97.6\\%, with a false positive rate of 0.13\\% and a false negative rate of 3.6\\%. The heavy dependence on URL based features, however, makes this approach ineffective for detecting phishing emails which do not contain a URL, or are attachment based attacks, or ask the user to reply to the phishing email with potentially sensitive information. URL based features were also used by Chhabra et al. to detect phishing using short URLs~\\cite{Chhabra2011}. Their work, however, was limited to only URLs, and did not cover phishing through emails. Islam and Abawajy~\\cite{Islam2013} proposed a multi-tier phishing detection and filtering approach. They also proposed a method for extracting the features of phishing emails based on the weights of the message content and message header. The results of their experiments showed that the proposed algorithm substantially reduces false positives with lower complexity.\n\nBehavior-based approaches have also been proposed by various researchers to detect phishing messages~\\cite{Toolan2010,Zhang2007a}. Zhang et al.~\\cite{Zhang2007a} worked on detecting abnormal mass mailing hosts in the network layer by mining traffic in the session layer. Toolan et al.~\\cite{Toolan2010} investigated 40 features that have been used in recent literature, and proposed behavioral features such as the number of words in the \\emph{sender} field, the total number of characters in the \\emph{sender} field, the difference between the sender's domain and the reply-to domain, and the difference between the sender's domain and the email's modal domain, to classify ham, spam, and phishing emails. Ma et al.~\\cite{Ma2009} attempted to identify phishing emails based on hybrid features. They derived 7 features categorized into three classes, i.e., content features, orthographic features, and derived features, and applied 5 machine learning algorithms. Their results stated that Decision Trees worked best in identifying phishing emails. Hamid et al.~\\cite{Hamid2013} proposed a hybrid feature selection approach based on a combination of content and behaviour. 
Their approach mined attacker behavior from email headers, and achieved an accuracy of 94\\% on a publicly available test corpus.\n\nAll of the aforementioned work concentrates on distinguishing phishing emails from legitimate ones, using various types of features extracted from email content, URLs, header information, etc. To the best of our knowledge, there exists little work which focuses specifically on targeted spear phishing emails. Further, there exists no work which utilizes features from the social media profiles of the victim in order to distinguish an attack email from a legitimate one. In this work, we focus on the very specific problem of distinguishing targeted spear phishing emails from general phishing, spam, and benign emails. Further, we apply \\emph{social} features extracted from the LinkedIn profiles of the recipients of such emails to judge whether an email is a spear phishing email or not, which has never been attempted before to the best of our knowledge. We performed our entire analysis on a real-world dataset derived from Symantec's enterprise email scanning service.\n\n\n\n\\section{Data collection methodology} \\label{sec:dcm}\n\nThe dataset we used for the entire analysis is a combination of two separate datasets, viz. a dataset of emails (a combination of targeted attack and non targeted attack emails), and a dataset of LinkedIn profiles. We now explain both these datasets in detail. \n\n\\subsection{Email dataset} \\label{sec:emd}\n\n\n\\begin{table*}[!ht]\n\\begin{center}\n \\begin{tabular}{l|l||l|l}\n \\hline\n Spear phishing Attachment Name & \\% & Spam \/ phishing Attachment Name & \\% \\\\ \\hline\n work.doc & 3.46 & 100A\\_0.txt & 20.74 \\\\\nMore detail Chen Guangcheng.rar & 3.01 & 100\\_5X\\_AB\\_PA1\\_MA-OCTET-STREAM\\_\\_form.html & 9.02 \\\\\n ARMY\\_600\\_8\\_105.zip & 2.54 & .\/attach\/100\\_4X\\_AZ-D\\_PA2\\_\\_FedEx=5FInvoice=5FN56=2D141.exe & 4.19 \\\\\n Strategy\\_Meeting.zip & 1.58 & 100\\_2X\\_PM3\\_EMS\\_MA-OCTET=2DSTREAM\\_\\_apply.html & 2.66 \\\\\n 20120404 H 24 year annual business plan 1 quarterly.zip & 1.33 & 100\\_4X\\_AZ-D\\_PA2\\_\\_My=5Fsummer=5Fphotos=5Fin=5FEgypt=5F2011.exe & 1.87 \\\\\n The extension of the measures against North Korea.zip & 1.30 & .\/attach\/100\\_2X\\_PM2\\_EMS\\_MA-OCTET=2DSTREAM\\_\\_ACC01291731.rtf & 1.40 \\\\\n Strategy\\_Meeting\\_120628.zip & 1.28 & 100\\_5X\\_AB\\_PA1\\_MH\\_\\_NothernrockUpdate.html & 1.28 \\\\\n image.scr & 1.24 & .\/attach\/100\\_2X\\_PM2\\_EMS\\_MA-OCTET=2DSTREAM\\_\\_invoice.rtf & 1.15 \\\\\n Consolidation Schedule.doc & 0.98 & 100\\_6X\\_AZ-D\\_PA4\\_\\_US=2DCERT=20Operations=20Center=20Report=2DJan2012.exe & 1.12 \\\\\n DARPA-BAA-11-65.zip & 0.93 & 100\\_4X\\_AZ-D\\_PA2\\_\\_I=27m=5Fwith=5Fmy=5Ffriends=5Fin=5FEgypt.exe & 1.11 \\\\\n Head Office-Y drive.zip & 0.93 & 100\\_4X\\_AZ-D\\_PA2\\_\\_I=27m=5Fon=5Fthe=5FTurkish=5Fbeach=5F2012.exe & 0.80 \\\\\n page 1-2.doc & 0.90 & 100\\_5X\\_AB\\_PA1\\_MA-OCTET-STREAM\\_\\_Lloyds=R01TSB=R01-=R01Login=R01Form.html & 0.69 \\\\\n Aircraft Procurement Plan.zip & 0.90 & 100\\_6X\\_AZ-D\\_PA4\\_\\_Fidelity=20Investments=20Review=2Dfrom=2DJan2012.exe & 0.68 \\\\\n Overview of Health Reform.doc & 0.74 & 100\\_4X\\_AZ-D\\_PA2\\_\\_FedEx=5FInvoice=20=5FCopy=5FIDN12=2D374.exe & 0.64 \\\\\n page 1-2.pdf & 0.64 & 100\\_4X\\_AZ-D\\_PA2\\_\\_my=5Fphoto=5Fin=5Fthe=5Fdominican=5Frepublic.exe & 0.63 \\\\\n fisa.pdf & 0.58 & 100\\_2X\\_PM4\\_EMQ\\_MH\\_\\_message.htm & 0.60 \\\\\n urs.doc & 0.52 & 
\/var\/work0\/attach\/100\\_4X\\_AZ-D\\_PA2\\_\\_document.exe & 0.58 \\\\\n script.au3 & 0.50 & 100\\_6X\\_AZ-D\\_PA4\\_\\_Information.exe & 0.58 \\\\\n install\\_reader10\\_en\\_air\\_gtbd\\_aih.zip & 0.48 & \/var\/work0\/attach\/100\\_4X\\_AZ-D\\_PA2\\_\\_Ticket.exe & 0.57 \\\\\n dodi-3100-08.pdf & 0.43 & 100\\_4X\\_AZ-D\\_PA2\\_\\_Ticket.exe & 0.57 \\\\ \\hline\n \\end{tabular}\n\\vspace{10pt}\n\\caption{Top 20 most frequently occurring attachment names, and their corresponding percentage share in our spear phishing and spam \/ phishing datasets. Attachment names in the spear phishing dataset look much more realistic and genuine than those in the spam \/ phishing dataset.}\n\\label{tab:attach_names}\n\\end{center}\n\\end{table*}\n\n\nOur email dataset consisted of a combination of targeted spear phishing emails, non targeted spam and phishing emails, and benign emails. We obtained the targeted spear phishing emails from Symantec's enterprise email scanning service. \\emph{Symantec} collects data regarding targeted attacks that consist of emails with malicious attachments. These emails are identified from the vast majority of non-targeted malware by evidence of there being prior research and selection of the recipient, with the malware being of high sophistication and low copy number. The process by which Symantec's enterprise mail scanning service collects such malware has already been described elsewhere~\\cite{Thonnard2012,Lee2013}. The corpus almost certainly omits some attacks, and most likely also includes some non-targeted attacks, but nevertheless it represents a large number of sophisticated targeted attacks compiled according to a consistent set of criteria, which renders it a very useful dataset to study. \n\n\n\n\nThe non targeted attack emails were also obtained from Symantec's email scanning service. These emails were marked as \\emph{malicious}, and were a combination of malware, spam, and phishing. Both these datasets contained an enormously large number of emails received at hundreds of organizations around the world where Symantec's email scanning services are deployed. Before selecting a suitable sample for organization level analysis, we present an overview of the entire data. Table~\\ref{tab:attach_names} shows the top 20 most frequently occurring attachment names in the complete spear phishing and spam \/ phishing datasets. We found distinct differences in the types of attachment names in these two datasets. While names in spear phishing emails looked fairly genuine and personalized, attachment names in spam \/ phishing emails were irrelevant and long. It was also interesting to see that the attachment names associated with spear phishing emails were less repetitive than those associated with spam \/ phishing emails. As visible in Table~\\ref{tab:attach_names}, the most commonly occurring attachment name in spear phishing emails was found in less than 3.5\\% of all spear phishing emails, while in the case of spam \/ phishing emails, the most common attachment name was present in over 20\\% of all spam \/ phishing emails. This behavior reflects that attachments in spear phishing emails are named more carefully, and with more effort to make them look genuine.\n\n\n\nWe also looked at the most frequently spread file types in spear phishing, spam, and phishing emails. Table~\\ref{tab:attach_types} shows the top 15 most frequently occurring file types in both the spear phishing and spam \/ phishing email datasets. 
Not surprisingly, both these datasets had a notable presence of executable (.exe, .bat, .com) and compressed (.rar, .zip, .7z) file types. In fact, most of the file types spread through such emails were among the most frequently used file types in general, too. Microsoft Word, Excel, PowerPoint, and PDF files were also amongst the most frequently spread files. It was, however, interesting to note that a smaller percentage of targeted spear phishing emails contained executables than spam \/ phishing emails. This suggests that attackers prepare targeted attacks more carefully than spammers \/ phishers do, avoiding executables, which are more prone to suspicion.\n\n\n\\begin{table}[!h]\n\\begin{center}\n \\begin{tabular}{p{2.7cm}|p{0.6cm}||p{3cm}|p{0.8cm}}\n \\hline\n Spear phishing Attachment Type & \\% & Spam \/ phishing Attachment Type & \\% \\\\ \\hline\n Zip archive data (zip) & 19.59 & Windows Executable (exe) & 38.39 \\\\\n PDF document (pdf) & 13.73 & ASCII text (txt) & 21.73 \\\\\n Composite Document File & 13.63 & Hypertext (html) & 18.08 \\\\\n Windows Executable (exe) & 11.20 & Hypertext (htm) & 7.06 \\\\\n Rich Text Format data (rtf) & 10.40 & Rich Text Format data (rtf) & 3.04 \\\\\n RAR archive data (rar) & 9.47 & PDF document (pdf) & 2.04 \\\\\n Screensaver (scr) & 5.06 & Zip archive data (zip) & 1.75 \\\\\n data (dat) & 3.00 & Microsoft Word & 1.27 \\\\\n JPEG image data (jpg) & 1.64 & Screensaver (scr) & 1.14 \\\\\n CLASS & 1.56 & Microsoft Excel (xls) & 0.81 \\\\\n Microsoft Word 2007+ & 1.15 & Program Info. file (pif) & 0.80 \\\\\n 7-zip archive data (7z) & 1.12 & Dynamic-link Library (dll) & 0.30 \\\\\n Shortcut (lnk) & 1.08 & Windows Batch file (.bat) & 0.24 \\\\\n ASCII text (txt) & 0.80 & JavaScript (js) & 0.17 \\\\\n Dynamic-link Library (dll) & 0.54 & Microsoft HTML Help (chm) & 0.16 \\\\ \\hline\n \\end{tabular}\n\\vspace{5pt}\n\\caption{Top 15 most frequently occurring attachment types, and their corresponding percentage share in our spear phishing and spam \/ phishing datasets. Only 5 file types were common in the top 15 in these datasets.}\n\\label{tab:attach_types}\n\\end{center}\n\\end{table}\n\n\n\nAll the emails present in our full dataset were collected over a period of slightly under five years, from March 2009 to December 2013. Figure~\\ref{fig:timeline} presents a timeline of the ``received time\" of all these emails. The spam \/ phishing emails were collected over a period of 3 years, from March 2009 to March 2012. The targeted spear phishing emails were also collected during a period of about 3 years, but from January 2011 to December 2013. The two datasets, thus, had data for a common time period of about 15 months, from January 2011 to March 2012. It was interesting to observe that during this period, while the number of spam and phishing emails saw a tremendous rise, the number of spear phishing emails did not vary too much. This characteristic was observed for the entire 3-year period for spear phishing emails. The number of spear phishing emails rose by 238\\% between the beginning and end of our three-year observation period, as compared to a rise of 35,422\\% in the number of spam \/ phishing emails.\n\n\\begin{figure*}[!ht]\n \\begin{center}\n\\fbox{\\includegraphics[scale=0.45]{timeline_2.pdf}}\n \\end{center}\n \\caption{Timeline of the number of spear phishing and spam \/ phishing emails in our dataset. The X axis represents time, and the Y axis represents the number of emails on a logarithmic scale. 
The period of May 2011 - September 2011 saw an exponential increase in the number of spam \/ phishing emails in our dataset.\n }\n \\label{fig:timeline}\n\\end{figure*}\n\n\n\nIn addition to the attachment information and timeline, we also looked at the ``subject\" fields of all the emails present in our datasets. Table~\\ref{tab:subjects} shows the top 20 most frequently occurring ``subject lines\" in our datasets. Evidently, subjects in these two datasets were very different in terms of context. Targeted spear phishing email subjects seemed to be very professional, talking about jobs, meetings, \\emph{unclassified} information, etc. Spam \/ phishing email subjects, however, were observed to follow a completely different genre. These emails' subjects were found to follow varied themes, out of which three were fairly prominent: a) fake email delivery failure error messages, which lure victims into opening these emails to see which of their emails failed, and why; b) arrival of packages or couriers by famous courier delivery services -- victims tend to open such messages out of curiosity, even if they are not expecting a package; and c) personalized messages via third party websites and social networks (Hallmark E-Card, hi5 friend request, and Facebook message in this case). Most such spam \/ phishing emails have generic subjects to which most victims can relate easily, in contrast to spear phishing email subjects, which would seem irrelevant to most common users.\n\n\nIt is important to note that these statistics are for the complete datasets we obtained from Symantec. The total number of emails present in the complete dataset was of the order of hundreds of thousands. However, we performed our entire analysis on a sample picked from this dataset. The analysis in the rest of the paper concerns only this sample. To make our analysis more exhaustive, we also used a sample of benign emails from the Enron email dataset for our analysis~\\cite{Cohen2009}. All the three email datasets had duplicates, which we identified and removed by using a combination of 5 fields, viz. \\emph{from ID}, \\emph{to ID}, \\emph{subject}, \\emph{body}, and \\emph{timestamp} (a minimal sketch of this deduplication step appears in the next subsection). On further investigation, we found that these duplicate email messages were different instances of the same email. This happens when an email is sent to multiple recipients at the same time. A globally unique \\emph{message-id} is generated for each recipient, which results in duplication of the message. Elimination of duplicates reduced our email sample dataset by about 50\\%. Our final sample email dataset that we used for all our analysis was, therefore, a mixture of targeted attack emails, non targeted attack emails, and benign emails. We now describe this sample.\n\n\n\n\\begin{table*}[!ht]\n\\begin{center}\n \\begin{tabular}{l|l||l|l}\n \\hline\n Spear phishing subjects & \\% & Spam \/ phishing subjects & \\% \\\\ \\hline\n Job Opportunity & 3.45 & Mail delivery failed: returning message to sender & 10.95 \\\\\n Strategy Meeting & 3.09 & Delivery Status Notification (Failure) & 6.71 \\\\\n What is Chen Guangcheng fighting for? & 3.00 & Re: & 2.59 \\\\\n FW: $[$2$]$ for the extension of the measures against North Korea & 1.70 & Re & 2.56 \\\\\n $[$UNCLASSIFIED$]$ 2012 U.S.Army orders for weapons & 1.27 & Become A Paid Mystery Shopper Today! Join and Shop For Free! 
& 1.28 \\\\\n FW:$[$UNCLASSIFIED$]$2012 U.S.Army orders for weapons & 1.27 & failure notice & 1.09 \\\\\n $<$blank subject line$>$ & 1.17 & Delivery Status Notification (Delay) & 1.06 \\\\\n FW: results of homemaking 2007 annual business plan (min quarter 1 included) & 1.02 & Returned mail: see transcript for details & 0.95 \\\\\n $[$UNCLASSIFIED$]$DSO-DARPA-BAA-11-65 & 0.93 & Get a job as Paid Mystery Shopper! Shop for free and get Paid! & 0.85 \\\\\n Wage Data 2012 & 0.90 & Application number: AA700003125331 & 0.82 \\\\\n U.S.Air Force Procurement Plan 2012 & 0.90 & Your package is available for pickup & 0.78 \\\\\n About seconded expatriate management in overseas offices & 0.80 & Your statement is ready for your review & 0.75 \\\\\n FW:[CLASSIFIED] 2012 USA Government of the the Health Reform & 0.74 & Unpaid invoice 2913. & 0.71 \\\\\n T.T COPY & 0.62 & Track your parcel & 0.70 \\\\\n USA to Provide Declassified FISA Documents & 0.58 & You have received A Hallmark E-Card! & 0.59 \\\\\n FY2011-12 Annual Merit Compensation Guidelines for Staff & 0.55 & Your Account Opening is completed. & 0.57 \\\\\n Contact List Update & 0.45 & Delivery failure & 0.57 \\\\\n DOD Technical Cooperation Program & 0.43 & Undelivered Mail Returned to Sender & 0.56 \\\\\n DoD Protection of Whistleblowing Spies & 0.43 & Laura would like to be your friend on hi5! & 0.56 \\\\\n FW:UK Non Paper on arrangements for the Arms Trade Treaty (ATT) Secretariat & 0.42 & You have got a new message on Facebook! & 0.55 \\\\ \\hline\n \\end{tabular}\n\\vspace{5pt}\n\\caption{Top 20 most frequently occurring subjects, and their corresponding percentage share in our spear phishing and spam \/ phishing email datasets. Spear phishing email subjects appear to depict that these emails contain highly confidential data. Spam \/ phishing emails, on the other hand, are mainly themed around email delivery error messages, and courier or package receipts.}\n\\label{tab:subjects}\n\\vspace{-15pt}\n\\end{center}\n\\end{table*}\n\n\n\n\\subsection{Email Sample Dataset Description}\n\nTo focus our analysis at the organization level, we identified and extracted the most attacked organizations (excluding free email providing services like Gmail, Yahoo, Hotmail, etc.) from the domain names of the victims' email addresses, and picked the 14 most frequently attacked organizations. We were, however, restricted to those organizations where the first names and last names were easily extractable from the email addresses. The first name and last name were required to obtain the corresponding LinkedIn profiles of these victims (this process is discussed in detail in Section~\\ref{sec:linkedin_data}). This restriction, in addition to the removal of duplicates, left us with a total of 4,742 targeted spear phishing emails sent to 2,434 unique victims (referred to as \\emph{SPEAR} in the rest of the paper); 9,353 non targeted attack emails sent to 5,914 unique non victims (referred to as \\emph{SPAM} in the rest of the paper), and 6,601 benign emails from the Enron dataset, sent to 1,240 unique Enron employees (referred to as \\emph{BENIGN} in the rest of the paper). Further details of this dataset can be found in Table~\\ref{tab:stats}, which contains the number of victims and non victims in each of the 15 organizations (including Enron), and the number of emails sent to them. The victim and non victim employee sets are mutually exclusive; each employee in these datasets received at least one email, and had at least one LinkedIn profile. 
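\n\nAs noted in the previous subsection, duplicate emails were identified by matching on five fields; a minimal sketch of this deduplication step (Python, with hypothetical field names standing in for our actual schema) is:\n\\begin{verbatim}\n# Hypothetical names for the five matching fields described in the\n# text: from ID, to ID, subject, body, and timestamp\nFIELDS = ('from_id', 'to_id', 'subject', 'body', 'timestamp')\n\ndef deduplicate(rows):\n    seen = set()\n    unique = []\n    for row in rows:\n        key = tuple(row[f] for f in FIELDS)\n        if key not in seen:          # keep the first instance only\n            seen.add(key)\n            unique.append(row)\n    return unique\n\nemails = [{'from_id': 'a@x.com', 'to_id': 'b@y.com', 'subject': 'Hi',\n           'body': '...', 'timestamp': '2011-12-08T08:27:00'}] * 2\nprint(len(deduplicate(emails)))      # prints 1\n\\end{verbatim}\nApplied to our data, this is the step that removed the different instances of the same email generated for multiple recipients.\n\n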
To maintain anonymity, we do not include the names of the organizations we picked; we only mention the operating sector of these companies.\n\n\\begin{table}[!ht]\n\\begin{center}\n \\begin{tabular}{p{2.1cm}|p{0.8cm}|p{0.7cm}|p{0.8cm}|p{0.7cm}|p{1.3cm}}\n \\hline\n Sector & \\#Victims & \\#Emails & \\#Non Victims & \\#Emails & No. of Employees \\\\ \n\\hline\n Govt. \\& Diplomatic & 206 & 511 & 572 & 1,103 & 10,001+ \\\\\n Info. \\& Broadcasting & 150 & 326 & 240 & 418 & 10,001+ \\\\\n NGO & 131 & 502 & 218 & 472 & 1001-5000 \\\\\n IT\/Telecom\/Defense & 158 & 406 & 68 & 157 & 1001-5000 \\\\\n Pharmaceuticals & 120 & 216 & 589 & 862 & 10,001+ \\\\\n Engineering & 396 & 553 & 1000 & 1,625 & 10,001+ \\\\\n Automotive & 153 & 601 & 891 & 1,204 & 10,001+ \\\\\n Aviation\/Aerospace & 281 & 355 & 161 & 187 & 1001-5000 \\\\\n Agriculture & 94 & 138 & 173 & 264 & 10,001+ \\\\\n IT \\& Telecom & 11 & 12 & 543 & 943 & 5001-10,000 \\\\\n Defense & 388 & 651 & 123 & 147 & 10,001+ \\\\\n Oil \\& energy & 201 & 212 & 680 & 1,017 & 10,001+ \\\\\n Finance & 89 & 129 & 408 & 608 & 10,001+ \\\\\n Chemicals & 56 & 130 & 248 & 346 & 10,001+ \\\\\n Enron & NA & NA & 1,240 & 6,601 & 10,001+ \\\\ \\hline\n Total & 2,434 & 4,742 & 7,154 & 15,954 & ~ \\\\ \\hline\n \\end{tabular}\n\\vspace{5pt}\n\\caption{Detailed description of our dataset of LinkedIn profiles and emails across 15 organizations including Enron. The two \\#Emails columns give the number of emails sent to the victims and to the non victims, respectively.}\n\\label{tab:stats}\n\\vspace{-20pt}\n\\end{center}\n\\end{table}\n\n\n\n\nFigures~\\ref{fig:spearphish_subject},~\\ref{fig:mixed_subject}, and~\\ref{fig:enron_subject} represent the tag clouds of the 100 most frequently occurring words in the ``subject\" fields of our SPEAR, SPAM, and BENIGN datasets respectively. We noticed considerable differences between the subjects of all three datasets. While all three datasets were observed to contain a lot of \\emph{forwarded} emails (represented by ``fw\" and ``fwd'' in the tag clouds), the SPAM and BENIGN datasets were found to have many more \\emph{reply} emails (signified by ``re\" in the tag clouds) as compared to SPEAR emails. Whether an email is forwarded or a reply has previously been used as a boolean feature by researchers to distinguish between phishing and benign emails~\\cite{Toolan2010}. The difference in the vocabularies used across the three email datasets is also notable. The SPEAR dataset (Figure~\\ref{fig:spearphish_subject}) was found to be full of attention-grabbing words like \\emph{strategy, unclassified, warning, weapons, defense, US Army} etc. Artifact~\\ref{tab:egspear} shows an example of the attachment name, subject and body of such an email.
We removed the recipient's address and other identifying details to maintain anonymity.\n\n\\renewcommand{\\tablename}{ARTIFACT} \n\n\\begin{table}\n\\begin{center}\n \\begin{tabular}{|p{8.5cm}|}\n\\hline\n {\\bf Attachment}: All information about mobile phone.rar \\\\ \\hline\n {\\bf Subject}: RE: Issues with Phone for help \\\\ \\hline\n {\\bf Body}: $<$name$>$,\\\\Thanks for your replying.I contacted my supplier,but he could not resolved it.Now I was worried, so I take the liberty of writing to you.I collect all information including sim card details,number,order record and letters in the txt file.I hope you can deal with the issues as your promised.\\\\Best,\\\\$<$name$>$\\\\\\\\-----Original Message-----\\\\From: Customer Care [mailto:Customer\\_Care@$<$companyDomain$>$]\\\\Sent: 2011-12-8 0:35\\\\To: $<$name$>$\\\\Cc:\\\\Subject: RE: Issues with Phone for help\\\\\\\\Dear $<$name$>$,\\\\\\\\Thank you for your E-mail. I am sorry to hear of your issues. Please can you send your SIM card details or Mobile number so that we can identify your supplier who can assist you further?\\\\\\\\Thank you\\\\\\\\Kind regards,\\\\\\\\$<$name$>$\\\\Customer Service Executive\\\\\\\\$<$Company Name$>$,\\\\$<$Company Address$>$\\\\United Kingdom\\\\\\\\Tel: $<$telephone number$>$\\\\Fax : $<$Fax number$>$ \\\\$<$company website$>$\\\\\\\\-----Original Message-----\\\\From: $<$name$>$ [mailto:$<$email address$>$]\\\\Sent: 08 December 2011 08:27\\\\To: support@$<$companyDomain$>$\\\\Subject: Issues with Phone for help\\\\\\\\Hello,\\\\I purchased order for your IsatPhone Pro six months ago.Now I have trouble that it can't work normally.It often automatic shuts down.Sometimes it tells some information that i can't understand.How to do?Can you help me?\\\\Best,\\\\$<$name$>$\\\\\\\\\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\\\This e-mail has been scanned for viruses by Verizon Business Internet Managed Scanning Services - powered by MessageLabs. For further information visit http:\/\/www.verizonbusiness.com\/uk \\\\ \\hline\n \\end{tabular}\n\\vspace{5pt}\n\\caption{A spear phishing email from our SPEAR dataset. The email shows a seemingly genuine conversation, where the attacker sent a malicious compressed (.rar) attachment to the victim in the middle of the conversation.}\n\\label{tab:egspear}\n\\end{center}\n\\end{table}\n\n\nSPAM emails in our dataset (Figure~\\ref{fig:mixed_subject}) followed a completely different genre, dominated by words like \\emph{parcel, order, delivery, tracking, notification, shipment} etc. We also found mentions of famous courier service brand names like FedEx and DHL, which seem to have been used to target victims. Such attacks have been widely talked about in the recent past; users have also been warned about scams and infected payloads (like spyware or malware) that accompany such emails.~\\footnote{\\url{http:\/\/nakedsecurity.sophos.com\/2013\/03\/20\/dhl-delivery-malware\/}}~\\footnote{\\url{http:\/\/www.spamfighter.com\/News-13360-FedEx-and-DHL-Spam-Attack-with-Greater-Ferocity.htm}} Some examples of attachment names and subjects of such non targeted SPAM emails are shown in Artifact~\\ref{tab:egspam}. BENIGN subjects comprised diverse keywords like \\emph{report, program, meeting, migration, energy}, which did not seem specific to a particular theme (Figure~\\ref{fig:enron_subject}).
These keywords were fairly representative of typical internal corporate communication.\n\n\n\n\n\\begin{table}[!h]\n\\begin{center}\n \\begin{tabular}{|p{8.5cm}|}\n \\hline\n {\\bf Attachment}: 100A\\_0.txt \\\\\n {\\bf Subject}: DHL Express Notification for shipment 15238305825550113 \\\\ \\hline\n {\\bf Attachment}: .\/attach\/100\\_4X\\_AZ-D\\_PA2\\_\\_FedEx=5FInvoice=5FN 56=2D141.exe \\\\\n {\\bf Subject}: FEDEX Shipment Status NR-6804 \\\\ \\hline\n \\end{tabular}\n\\vspace{5pt}\n\\caption{Examples of \\emph{subject} and \\emph{attachment} names of two spam emails from our SPAM dataset. The \\emph{body} field of the emails was not available in this dataset.}\n\\label{tab:egspam}\n\\end{center}\n\\end{table}\n\n\\renewcommand{\\tablename}{TABLE} \n\n\n\\begin{figure*}[!ht]\n \\begin{center}\n \\subfigure[SPEAR subjects]{\n \\label{fig:spearphish_subject}\n \\includegraphics[scale=0.17]{spearphish_subject.png}\n }\n \t\\subfigure[SPEAR bodies]{\n \\label{fig:spearphish_body}\n \\includegraphics[scale=0.16]{spearphish_body.png}\n }\n \t\\subfigure[SPAM subjects]{\n \\label{fig:mixed_subject}\n \\includegraphics[scale=0.16]{mixed_subject.png}\n }\\\\\n \t\\subfigure[BENIGN subjects]{\n \\label{fig:enron_subject}\n \\includegraphics[scale=0.2]{enron_subject.png}\n }\n \t\\subfigure[BENIGN bodies]{\n \\label{fig:enron_body}\n \\includegraphics[scale=0.2]{enron_body.png}\n }\n \\end{center}\n \\caption{%\nTag clouds of the 100 most frequently occurring words in the subjects and bodies of our SPEAR, SPAM, and BENIGN datasets. Bodies of SPAM emails were not available in our dataset.\n }\n \\label{fig:tags}\n\\end{figure*}\n\n\nWe also compared the body content of SPEAR and BENIGN emails. Figures~\\ref{fig:spearphish_body} and~\\ref{fig:enron_body} represent the tag clouds of the 100 most frequently occurring words in the body fields of the SPEAR and BENIGN datasets respectively. Contrary to our observations from the subject content in the SPEAR dataset (Figure~\\ref{fig:spearphish_subject}), the body content of the SPEAR emails (Figure~\\ref{fig:spearphish_body}) did not look very attention-grabbing or themed. SPEAR bodies contained words like \\emph{attached, please, email, dear, materials, phone} etc., which commonly occur in routine email communication too. The BENIGN body content did not contain anything peculiar or alarming either (Figure~\\ref{fig:enron_body}). Since Symantec's email dataset of spear phishing, spam and phishing emails is not publicly available, we believe that this characterization of our dataset can give researchers a better idea of the state-of-the-art, real world malicious email data that circulates in the corporate environment.\n\n\n\n\n\n\n\n\n\n\\subsection{LinkedIn profile dataset} \\label{sec:linkedin_data}\n\nOur second dataset consisted of the LinkedIn profiles of the recipients of all the emails present in our email dataset. In fact, we restricted our email dataset to only those emails which were sent to employees having at least one LinkedIn profile. This was done to have a complete dataset in terms of the availability of social and stylometric features.
There were two major challenges with data collection from LinkedIn: a) strict input requirements, and b) a rate-limited API.\n\nFirstly, to fetch the profiles of LinkedIn users who are outside a user's network (3$^{rd}$ degree connections and beyond), the LinkedIn People Search API requires first name, last name, and company name as mandatory inputs.~\\footnote{\\url{developer.linkedin.com\/documents\/people-search-api}} Understandably, none of the users we were looking for were in our network; thus, as specified in the previous subsection, we were restricted to emails from only those companies which followed the format \\emph{firstName}.\\emph{lastName}$@$\\emph{companyDomain} or \\emph{firstName}\\_\\emph{lastName}$@$\\emph{companyDomain}. Restricting our dataset to such email addresses was the only way we could satisfy the API's input requirements.\n\nSecondly, the rate limit of the People Search API posed a major hindrance. Towards the end of 2013, LinkedIn imposed a tight limit of 250 calls per day, per application, on the People Search API for existing developers, and completely restricted access for new applications and developers, under their Vetted API access program.~\\footnote{\\url{https:\/\/developer.linkedin.com\/blog\/vetted-api-access}} We were able to get access to the Vetted API for two of our applications. Although the new rate limit allowed 100,000 API calls per day, per application, this was still restricted to 100 calls per user, per day, per application. We then created multiple LinkedIn user accounts to make calls to the API. Even with multiple applications and user accounts, this data collection process took about 4 months, largely because many of our search queries to the API returned no results. On average, we were able to find a LinkedIn profile for only 1 in 10 users in our dataset. This resulted in about 90\\% of the API calls returning no results, and hence being wasted. Eventually, we were able to collect a total of 2,434 LinkedIn profiles of victims and 5,914 LinkedIn profiles of non victims across the 14 organizations, and 1,240 LinkedIn profiles of employees from Enron (Table~\\ref{tab:stats}). To obtain these profiles for the 9,588 employees (2,434 victims, 5,914 non victims, and 1,240 Enron employees), we had to make approximately 100,000 API calls (approx. 10 times the number of profiles obtained). Figure~\\ref{fig:arch} shows the flow diagram describing our data collection process.\n\n\n\\begin{figure}[!h]\n \\begin{center}\n\\includegraphics[scale=0.36]{arch2.pdf}\n \\end{center}\n \\caption{%\nFlow diagram describing the data collection process we used to collect LinkedIn data and create our final feature vector containing stylometric features from emails and social features from LinkedIn profiles.\n }\n \\label{fig:arch}\n\\end{figure}\n\n\n\n\nOur first choices for extracting \\emph{social} features about employees were Facebook and Twitter. However, we found that identifying an individual on Facebook or Twitter using only the first name, last name, and employer company was a hard task. Unlike LinkedIn, the Facebook and Twitter APIs do not provide endpoints to search for people by the name of their employer. This left us with the option of searching for people using first name and last name only. However, such searches returned too many results on both Facebook and Twitter, and we had no way to identify the correct user that we were looking for.
We then manually visited the profile pages of some users returned by the API and discovered that the \\emph{work} field for most users on Facebook was private. Twitter profiles do not contain a \\emph{work} field at all. It was thus very hard to find Facebook or Twitter profiles using the email addresses in our dataset. \n\n\n\n\n\\section{Analysis and results} \\label{sec:ar}\n\nTo distinguish spear phishing emails from non spear phishing emails using \\emph{social} features of the recipients, we used four machine learning algorithms and a total of 27 features: 18 stylometric and 9 \\emph{social}. The entire analysis and classification tasks were performed using the Weka data mining software~\\cite{Hall2009}. We applied 10-fold cross validation to validate our classification results. We now describe our feature sets, analysis, and results of the classification.\n\n\n\n\\subsection{Feature set description} \\label{sec:fsd}\n\nWe extracted a set of 18 stylometric features from each email in our email dataset, and a set of 9 \\emph{social} features from each LinkedIn profile present in our LinkedIn profile dataset; these features are described in Table~\\ref{tab:feats}. Features extracted from our email dataset are further categorized into three categories, viz. \\emph{subject} features, \\emph{attachment} features, and \\emph{body} features. It is important to note that we did not have all three types of features available for all our datasets. While the SPAM dataset did not have \\emph{body} features, the BENIGN dataset did not have the \\emph{attachment} features. Features marked with $^*$ (in Table~\\ref{tab:feats}) have been previously used by researchers to classify spam and phishing emails~\\cite{Toolan2010}. The \\emph{richness} feature is calculated as the ratio of the number of words to the number of characters present in the text content under consideration. We calculate the richness value for the email \\emph{subject}, email \\emph{body}, and LinkedIn profile \\emph{summary}. The \\emph{Body\\_hasAttach} feature is a boolean variable which is set to true if the body of the email contains the word ``attached'' or ``attachment'', indicating that an attachment is enclosed with the email. This feature helped us to capture the presence of attachments for the BENIGN dataset, which did not have attachment information. The \\emph{Body\\_numFunctionWords} feature captures the total number of occurrences of function words in the email body, from a list of function words which includes: \\emph{account, access, bank, credit, click, identity, inconvenience, information, limited, log, minutes, password, recently, risk, social, security, service,} and \\emph{suspended}.
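\n\nA minimal sketch of how the \\emph{richness} and \\emph{Body\\_numFunctionWords} features can be computed is given below; the whitespace tokenization and punctuation stripping are our own assumptions, since the exact preprocessing is not specified.\n\n\\begin{verbatim}\n# Richness = number of words \/ number of characters;\n# function-word count = occurrences of words from the fixed list.\nFUNCTION_WORDS = {'account', 'access', 'bank', 'credit', 'click',\n    'identity', 'inconvenience', 'information', 'limited', 'log',\n    'minutes', 'password', 'recently', 'risk', 'social',\n    'security', 'service', 'suspended'}\n\ndef richness(text):\n    return len(text.split()) \/ len(text) if text else 0.0\n\ndef num_function_words(body):\n    return sum(1 for w in body.lower().split()\n               if w.strip('.,!?') in FUNCTION_WORDS)\n\nbody = 'Please click the link to verify your account password.'\nprint(round(richness(body), 3), num_function_words(body))  # 0.167 3\n\\end{verbatim}\n\n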
Function-word features of this kind have previously been used by Chandrasekaran~\\cite{Chandrasekaran2006}.\n\n\n\\begin{table}[!ht]\n\\begin{center}\n \\begin{tabular}{l|l|l}\n\\hline\n Feature & Data Type & Source \\\\ \\hline\n Subject\\_IsReply$^*$ & Boolean & Email \\\\\n Subject\\_hasBank$^*$ & Boolean & Email \\\\\n Subject\\_numWords$^*$ & Numeric & Email \\\\\n Subject\\_numChars$^*$ & Numeric & Email \\\\\n Subject\\_richness$^*$ & Decimal (0-1) & Email \\\\\n Subject\\_isForwarded$^*$ & Boolean & Email \\\\\n Subject\\_hasVerify$^*$ & Boolean & Email \\\\ \\hline\n Length of attachment name & Numeric & Email \\\\\n Attachment size (bytes) & Numeric & Email \\\\ \\hline\n Body\\_numUniqueWords$^*$ & Numeric & Email \\\\\n Body\\_numNewlines & Numeric & Email \\\\\n Body\\_numWords$^*$ & Numeric & Email \\\\\n Body\\_numChars$^*$ & Numeric & Email \\\\\n Body\\_richness$^*$ & Decimal (0-1) & Email \\\\\n Body\\_hasAttach & Boolean & Email \\\\\n Body\\_numFunctionWords$^*$ & Numeric & Email \\\\\n Body\\_verifyYourAccount$^*$ & Boolean & Email \\\\\n Body\\_hasSuspension$^*$ & Boolean & Email \\\\ \\hline \n Location & Text (country) & LinkedIn \\\\\n numConnections & Numeric (0-500) & LinkedIn \\\\\n SummaryLength & Numeric & LinkedIn \\\\\n SummaryNumChars & Numeric & LinkedIn \\\\\n SummaryUniqueWords & Numeric & LinkedIn \\\\\n SummaryNumWords & Numeric & LinkedIn \\\\\n SummaryRichness & Decimal (0-1) & LinkedIn \\\\\n jobLevel & Numeric (0-7) & LinkedIn \\\\\n jobType & Numeric (0-9) & LinkedIn \\\\\n \\hline\n \\end{tabular}\n\\vspace{5pt}\n\\caption{List of features used in our analysis. We used a combination of stylometric features extracted from emails and \\emph{social} features extracted from LinkedIn profiles. Features marked with $^*$ have been previously used for detecting spam and phishing emails.}\n\\label{tab:feats}\n\n\\end{center}\n\\end{table}\n\n\n\nThe \\emph{social} features we extracted from the LinkedIn profiles captured three distinct types of information about an employee, viz. location, connectivity, and profession. The \\emph{Location} was a text field containing the state \/ country level location of an employee, as specified by her on her LinkedIn profile. We extracted and used the country for our analysis. The \\emph{numConnections} was a numeric field capturing the number of connections that a user has on LinkedIn. If the number of connections for a user is more than 500, the value returned is ``500+\" instead of the actual number of connections. These two features captured the location and connectivity respectively. In addition to these, we extracted 5 features from the \\emph{Summary} field, and 2 features from the \\emph{headline} field returned by LinkedIn's People Search API. The \\emph{Summary} field is a long, free-text field containing a summary about a user, as specified by her, and is optional. The features we extracted from this field were similar to the ones we extracted from the subject and body fields in our email dataset. These features were the \\emph{summary length, number of characters, number of unique words, total number of words}, and \\emph{richness}. We introduced two new features, \\emph{job\\_level} and \\emph{job\\_type}, which are numeric values ranging from 0 to 7 and 0 to 9 respectively, describing the position and area of work of an individual. We looked for the presence of certain level- and designation-specific keywords in the ``headline'' field of a user, as returned by the LinkedIn API.
The job levels and job types, and their numeric equivalents, are as follows:\n\n\\begin{itemize}\n\\item Job\\_level; maximum of the following:\n\n1 - Support\\\\\n2 - Intern\\\\\n3 - Temporary\\\\\n4 - IC\\\\\n5 - Manager\\\\\n6 - Director\\\\\n7 - Executive\\\\\n0 - Other; if none of the above are found.\n\n\\item Job\\_type; minimum of the following:\n\n1 - Engineering\\\\\n2 - Research\\\\\n3 - QA\\\\\n4 - Information Technology\\\\\n5 - Operations\\\\\n6 - Human Resources\\\\\n7 - Legal\\\\\n8 - Finance\\\\\n9 - Sales \/ Marketing\\\\\n0 - Other; if none of the above are found.\n\\end{itemize}\n\nTo see if information extracted about a victim from online social media helps in identifying a spear phishing email sent to her, we performed classification using a) \\emph{email} features~\\footnote{We further split email features into \\emph{subject}, \\emph{body}, and \\emph{attachment} features for analysis, wherever available.}; b) \\emph{social} features; and c) a combination of these features. We compared these three accuracy scores across three dataset combinations, viz. SPEAR versus SPAM emails from Symantec's email scanning service, SPEAR versus benign emails from the BENIGN dataset, and SPEAR versus a mixture of BENIGN emails and SPAM emails from the Symantec dataset. As mentioned earlier, not all \\emph{email} features mentioned in Table~\\ref{tab:feats} were available for all three email datasets. The BENIGN dataset did not have attachment related features, and the \\emph{body} field was missing in the SPAM email dataset. We thus used for classification only those features which were available in both the targeted and the non targeted emails.\n\n\\subsection{SPEAR versus SPAM emails from Symantec} \\label{sec:sp_vs_spam}\n\nTable~\\ref{tab:sp_spam} presents the results of our first analysis, where we subjected SPEAR and SPAM emails from Symantec to four machine learning classification algorithms, viz. Random Forest~\\cite{Breiman2001}, J48 Decision Tree~\\cite{Quinlan1993}, Naive Bayesian~\\cite{John1995}, and Decision Table~\\cite{Kohavi1995}. Feature vectors for this analysis were prepared from 4,742 SPEAR emails and 9,353 SPAM emails, combined with \\emph{social} features extracted from the LinkedIn profiles of the recipients of these emails. Using a combination of all \\emph{email} and \\emph{social} features, we were able to achieve a maximum accuracy of 96.47\\% with the Random Forest classifier for classifying SPEAR and SPAM emails. However, it was interesting to note that two out of the four classifiers performed better \\emph{without} the social features. Although the Decision Table classifier performed equally well with and without the social features, it performed much better using only \\emph{email} features than using only \\emph{social} features.~\\footnote{This happened because the Decision Table classifier terminates search after scanning a certain (fixed) number of non-improving nodes \/ features.} In fact, the Decision Table classifier achieved its maximum accuracy using \\emph{attachment} features, which highlights that the attachments associated with SPEAR and SPAM emails were also substantially different in terms of name and size. We achieved an overall maximum accuracy of 98.28\\% using the Random Forest classifier trained on only email features.
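\n\nOur experiments were run in Weka; the following scikit-learn sketch is only an illustrative analogue of this 10-fold cross-validated comparison, assuming a numeric feature matrix built from the features of Table~\\ref{tab:feats} (the random placeholder data stands in for the real feature vectors).\n\n\\begin{verbatim}\n# Illustrative analogue of the Weka experiments: 10-fold\n# cross-validated accuracy of a random forest classifier.\nimport numpy as np\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.model_selection import cross_val_score\n\nrng = np.random.default_rng(0)\nX = rng.normal(size=(200, 9))     # placeholder: 9 email features\ny = rng.integers(0, 2, size=200)  # placeholder: 1 = SPEAR, 0 = SPAM\n\nclf = RandomForestClassifier(n_estimators=100, random_state=0)\nscores = cross_val_score(clf, X, y, cv=10, scoring='accuracy')\nprint('mean accuracy: %.2f%%' % (100 * scores.mean()))\n\\end{verbatim}\n\n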
That email features alone outperformed the combined feature set indicates that the public information available on the LinkedIn profile of an employee in our dataset does not help in determining whether she will be targeted by a spear phishing attack.\n\n\n\\begin{table}[!h]\n\\begin{center}\n \\begin{tabular}{l|l|p{0.7cm}|p{1cm}|p{0.9cm}|p{0.7cm}}\n \\hline\n Feature set & Metric & Random Forest & J48 Decision Tree & Naive Bayesian & Decision Table \\\\ \\hline\n Subject (7) & Accuracy (\\%) & 83.91 & 83.10 & 58.87 & 82.04 \\\\\n ~ & FP rate & 0.208 & 0.227 & 0.371 & 0.227 \\\\ \\hline\n Attachment (2) & Accuracy (\\%) & 97.86 & 96.69 & 69.15 & {\\bf 95.05} \\\\\n ~ & FP rate & 0.035 & 0.046 & 0.218 & 0.056 \\\\ \\hline\n All email (9) & Accuracy (\\%) & {\\bf 98.28} & {\\bf 97.32} & 68.69 & {\\bf 95.05} \\\\\n ~ & FP rate & 0.024 & 0.035 & 0.221 & 0.056 \\\\ \\hline\n Social (9) & Accuracy (\\%) & 81.73 & 76.63 & 65.85 & 70.90 \\\\\n ~ & FP rate & 0.229 & 0.356 & 0.445 & 0.41 \\\\ \\hline\n Email + & Accuracy (\\%) & 96.47 & 95.90 & {\\bf 69.35} & {\\bf 95.05} \\\\\n Social (18) & FP rate & 0.052 & 0.054 & 0.232 & 0.056 \\\\ \\hline\n \\end{tabular}\n\\vspace{5pt}\n\\caption{Accuracy and weighted false positive rates for SPEAR versus SPAM emails. Social features reduce the accuracy when combined with email features.}\n\\label{tab:sp_spam}\n\\end{center}\n\\end{table}\n\n\nTo get a better understanding of the results, we looked at the information gain associated with each feature using the InfoGainAttributeEval attribute evaluator package.~\\footnote{\\url{http:\/\/weka.sourceforge.net\/doc.dev\/weka\/attributeSelection\/InfoGainAttributeEval.html}} This package calculates the \\emph{information gain}~\\footnote{This value ranges between 0 and 1, where a higher value represents a more discriminating feature.} associated with each feature and ranks the features in descending order of this value. The ranking revealed that the attachment related features were the most distinguishing features between SPEAR and SPAM emails. This phenomenon was also highlighted by the Decision Table classifier (Table~\\ref{tab:sp_spam}). The attachment size was the most distinguishing feature with an information gain score of 0.631, followed by the length of the attachment name with an information gain score of 0.485. As evident from Table~\\ref{tab:rankedfeats1}, attachment sizes associated with SPAM emails have very high standard deviation values, even though the average attachment sizes of SPAM and SPEAR emails are fairly similar. It is also evident that attachments associated with SPAM emails tend to have longer names; on average, twice as long as those associated with SPEAR emails. Among subject features, we found no major difference in the length (number of characters and number of words) of the subject fields across the two email datasets.\n\n\n\\begin{table}[!h]\n\\begin{center}\n \\begin{tabular}{l|l|c|c|c|c}\n \\hline\n \\multirow{2}{*}{Feature} & \\multirow{2}{*}{Info. Gain} & \\multicolumn{2}{c|}{SPEAR} & \\multicolumn{2}{c}{SPAM} \\\\ \\cline{3-6}\n ~ & ~ & Mean & Std Dev. & Mean & Std. Dev. \\\\ \\hline\n Attachment size (Kb) & 0.6312 & 285 & 531 & 262 & 1,419 \\\\\n Len.
attachment name & 0.4859 & 25.48 & 16.03 & 51.08 & 23.29 \\\\\n Subject\\_richness & 0.2787 & 0.159 & 0.05 & 0.177 & 0.099 \\\\\n Subject\\_numChars & 0.1650 & 29.61 & 17.77 & 31.82 & 23.85 \\\\\n Location & 0.0728 & - & - & - & - \\\\\n Subject\\_numWords & 0.0645 & 4.74 & 3.28 & 4.59 & 3.97 \\\\\n numConnections & 0.0219 & 158.68 & 164.31 & 183.82 & 171.45 \\\\\n Subject\\_isForwarded & 0.0219 & - & - & - & - \\\\ \n Subject\\_isReply & 0.0154 & - & - & - & - \\\\\n SummaryRichness & 0.0060 & 0.045 & 0.074 & 0.053 & 0.078 \\\\ \\hline\n \\end{tabular}\n\\vspace{5pt}\n\\caption{Information gain, mean and standard deviation of the 10 most informative features from SPEAR and SPAM emails.}\n\\label{tab:rankedfeats1}\n\\end{center}\n\\end{table}\n\n\nIt was interesting to see that apart from the Location, numConnections, and SummaryRichness features, none of the other social features were ranked amongst the top 10 informative features. Figure~\\ref{fig:countries_spam_sp} shows the top 25 \\emph{Locations} extracted from the LinkedIn profiles of employees of the 14 companies who received SPAM and SPEAR emails. We found a fairly high correlation of 0.88 between the number of SPAM and SPEAR emails received at these locations, indicating that there is not much difference between the number of SPAM and SPEAR emails received at most locations. This falls in line with the low information gain associated with this feature. Among the top 25, only 3 locations, viz. France, Australia, and Afghanistan, received more SPEAR emails than SPAM emails.\n\n\\begin{figure}[!h]\n \\begin{center}\n\\includegraphics[scale=0.42]{countries_spam_spearphish.png}\n \\end{center}\n \\caption{%\nNumber of SPEAR and SPAM emails received by employees in the top 25 locations extracted from their LinkedIn profiles. Employees working in France, Australia, and Afghanistan received more SPEAR emails than SPAM emails.\n }\n \\label{fig:countries_spam_sp}\n\\end{figure}\n\n\nThe numbers of LinkedIn connections of the recipients of SPEAR and SPAM emails in our dataset are presented in Figure~\\ref{fig:linkedin1}. There was not much difference between the number of LinkedIn connections of recipients of SPEAR emails and that of recipients of SPAM emails. We grouped the number of LinkedIn connections into 11 buckets, as represented by the X axis in Figure~\\ref{fig:linkedin1}, and found a strong correlation value of 0.97 across the two classes (SPEAR and SPAM). This confirmed that the number of LinkedIn connections did not vary much between recipients of SPEAR and SPAM emails, and is thus not an informative feature for distinguishing between SPEAR and SPAM emails.\n\n\n\\begin{figure}[!h]\n \\begin{center}\n\\includegraphics[scale=0.33]{linkedin_conn_1_labeled.png}\n \\end{center}\n \\caption{%\nNumber of LinkedIn connections of the recipients of SPEAR and SPAM emails. The number of connections is plotted on the X axis, and the number of employee profiles on the Y axis. Most employee profiles had fewer than 50 LinkedIn connections.\n }\n \\label{fig:linkedin1}\n\\end{figure}\n\n\n\n\n\\subsection{SPEAR emails versus BENIGN emails}\n\nSimilar to the analysis performed in Section~\\ref{sec:sp_vs_spam}, we applied machine learning algorithms to a different dataset containing SPEAR emails and BENIGN emails. This dataset contained 4,742 SPEAR emails and 6,601 benign emails from BENIGN.
Since BENIGN mostly contains internal email communication between Enron's employees, we believe it is safe to assume that none of these emails are targeted spear phishing emails and that they can be marked as benign. Similar to our observations in Section~\\ref{sec:sp_vs_spam}, we found that, in this case too, \\emph{email} features alone performed slightly better than a combination of \\emph{email} and \\emph{social} features at distinguishing spear phishing emails from non spear phishing emails. We were able to achieve a maximum accuracy of 97.04\\% using the Random Forest classifier trained on a set of 25 features: 16 \\emph{email} and 9 \\emph{social} features. However, the overall maximum accuracy that we were able to achieve for this dataset was 97.39\\%, using only \\emph{email} features. Table~\\ref{tab:sp_enron} shows the results of our analysis in detail. Three out of the four classifiers performed best with \\emph{email} features: two classifiers performed best using the combination of \\emph{subject} and \\emph{body} features, while one classifier performed best using only \\emph{body} features. The Naive Bayes classifier worked best using \\emph{social} features.\n\n\n\\begin{table}[!h]\n\\begin{center}\n \\begin{tabular}{l|l|p{0.8cm}|p{1cm}|p{0.9cm}|p{0.7cm}}\n \\hline\n Feature set & Metric & Random Forest & J48 Decision Tree & Naive Bayesian & Decision Table \\\\ \\hline\n Subject (7) & Accuracy (\\%) & 81.19 & 81.11 & 61.75 & 79.55 \\\\\n ~ & FP rate & 0.210 & 0.217 & 0.489 & 0.228 \\\\ \\hline\n Body (9) & Accuracy (\\%) & 97.17 & 95.62 & 53.81 & {\\bf 90.85} \\\\\n ~ & FP rate & 0.031 & 0.048 & 0.338 & 0.082 \\\\ \\hline\n All email (16) & Accuracy (\\%) & {\\bf 97.39} & {\\bf 95.84} & 54.14 & 89.80 \\\\\n ~ & FP rate & 0.029 & 0.044 & 0.334 & 0.090 \\\\ \\hline\n Social (9) & Accuracy (\\%) & 94.48 & 91.79 & {\\bf 69.76} & 83.80 \\\\\n ~ & FP rate & 0.067 & 0.103 & 0.278 & 0.198 \\\\ \\hline\n Email + & Accuracy (\\%) & 97.04 & 95.28 & 57.27 & 89.80 \\\\\n Social (25) & FP rate & 0.032 & 0.052 & 0.316 & 0.090 \\\\ \\hline\n \\end{tabular}\n\\vspace{5pt}\n\\caption{Accuracy and weighted false positive rates for SPEAR emails versus BENIGN emails. As in the SPEAR versus SPAM case, social features decrease the accuracy when combined with email features.}\n\\label{tab:sp_enron}\n\\end{center}\n\\end{table}\n\n\nTable~\\ref{tab:rankedfeats2} presents the 10 most informative features, along with their information gain, mean and standard deviation values, from the SPEAR and BENIGN datasets. The \\emph{body} features were found to be the most informative in this analysis, with only 2 \\emph{social} features among the top 10. Emails in the BENIGN dataset were found to be much longer than SPEAR emails in our Symantec dataset in terms of the number of words and number of characters in their ``body\". The ``subject\" lengths, however, were found to be very similar across SPEAR and BENIGN. \n\n\n\\begin{table}[!h]\n\\begin{center}\n \\begin{tabular}{p{2.5cm}|p{0.6cm}|c|c|c|c}\n \\hline\n \\multirow{2}{*}{Feature} & Info. & \\multicolumn{2}{c|}{SPEAR} & \\multicolumn{2}{c}{BENIGN} \\\\ \\cline{3-6}\n ~ & Gain & Mean & Std Dev. & Mean & Std. Dev.
\\\\ \\hline\n Body\\_richness & 0.6506 & 0.134 & 0.085 & 0.185 & 0.027 \\\\\n Body\\_numChars & 0.5816 & 313.60 & 650.48 & 1735.5 & 8692.6 \\\\\n Body\\_numWords & 0.4954 & 53.12 & 107.53 & 312.81 & 1572.1 \\\\\n Body\\_numUniqueWords & 0.4766 & 38.08 & 49.70 & 149.93 & 416.40 \\\\\n Location & 0.3013 & - & - & - & - \\\\\n Body\\_numNewlines & 0.2660 & 11.29 & 32.70 & 43.58 & 215.77 \\\\\n Subject\\_richness & 0.2230 & 0.159 & 0.051 & 0.174 & 0.056 \\\\\n numConnections & 0.1537 & 158.68 & 164.31 & 259.89 & 167.14 \\\\\n Subj\\_numChars & 0.1286 & 29.61 & 17.77 & 28.54 & 15.23 \\\\ \n Body\\_numFunctionWords & 0.0673 & 0.375 & 1.034 & 1.536 & 5.773 \\\\\n \\hline\n \\end{tabular}\n\\vspace{5pt}\n\\caption{Information gain, mean and standard deviation of the 10 most informative features from SPEAR and BENIGN emails. The \\emph{body} features performed best at distinguishing SPEAR emails from BENIGN emails.}\n\\label{tab:rankedfeats2}\n\\end{center}\n\\end{table}\n\n\n\nThe Random Forest classifier was also able to achieve an accuracy of 94.48\\% using only \\emph{social} features, signifying that there exist distinct differences between the LinkedIn profiles of Enron employees and those of the employees of the 14 companies in our dataset. The \\emph{location} attribute was found to be the most distinguishing feature among the \\emph{social} features. This was understandable since most of the Enron employees were found to be based in the US (as Enron was an American services company). However, we also found a considerable difference in the average number of LinkedIn connections of Enron employees and employees of the 14 organizations in our dataset (mean values of the \\emph{numConnections} feature in Table~\\ref{tab:rankedfeats2}). \n\n\n\\subsection{SPEAR versus a mixture of BENIGN and SPAM}\n\nWhen analyzing SPEAR against SPAM and against BENIGN emails separately, we found similar results: \\emph{social} features were not very useful in either case. We therefore used a mixture of SPAM and BENIGN emails against SPEAR emails and performed the classification tasks again. We found that in this case, two out of the four classifiers performed better with a combination of email and social features, while two classifiers performed better with only \\emph{email} features. However, the overall maximum accuracy was achieved using a combination of \\emph{email} and \\emph{social} features (89.86\\% using the Random Forest classifier). This contradicts our separate analyses of SPEAR versus SPAM and SPEAR versus BENIGN, where \\emph{email} features on their own always performed better than a combination of \\emph{email} and \\emph{social} features. Our overall maximum accuracy, however, dropped to 89.86\\% (from 98.28\\% in the SPEAR versus SPAM email classification) because of the absence of \\emph{attachment} features in this dataset. Although the \\emph{attachment} features were available in the SPAM dataset, their unavailability in BENIGN forced us to remove these features for the current classification task. Eventually, merging the SPAM email dataset with BENIGN reduced our email feature set to only 7 features, all based on the email ``subject\".
Table~\\ref{tab:sp_spamenron} presents the detailed results of this analysis.\n\n\n\\begin{table}[!h]\n\\begin{center}\n \\begin{tabular}{l|l|p{0.8cm}|p{1cm}|p{0.9cm}|p{0.7cm}}\n \\hline\n Feature set & Metric & Random Forest & J48 Decision Tree & Naive Bayesian & Decision Table \\\\ \\hline\n Subject (7) & Accuracy (\\%) & 86.48 & 86.35 & {\\bf 77.99} & {\\bf 85.46} \\\\\n ~ & FP rate & 0.333 & 0.352 & 0.681 & 0.341 \\\\ \\hline\n Social (9) & Accuracy (\\%) & 88.04 & 84.69 & 74.46 & 80.61 \\\\\n ~ & FP rate & 0.241 & 0.371 & 0.454 & 0.432 \\\\ \\hline\n Email + & Accuracy (\\%) & {\\bf 89.86} & {\\bf 88.38} & 73.97 & 84.14 \\\\\n Social (16) & FP rate & 0.202 & 0.248 & 0.381 & 0.250 \\\\ \\hline\n \\end{tabular}\n\\vspace{5pt}\n\n\\caption{Accuracy and weighted false positive rates for SPEAR emails versus a mix of SPAM and BENIGN emails. Unlike SPEAR versus SPAM, or SPEAR versus BENIGN, \\emph{social} features increased the accuracy when combined with email features in this case.}\n\\label{tab:sp_spamenron}\n\\end{center}\n\\end{table}\n\n\nAs mentioned earlier, combining the SPAM email dataset with BENIGN largely reduced our \\emph{email} feature set. We were left with 7 out of the total of 18 email features described in Table~\\ref{tab:feats}. Understandably, due to this depleted \\emph{email} feature set, we found that the email features did not perform as well as the \\emph{social} features in this classification task. Despite being fewer in number, the \\emph{subject} features \\emph{Subject\\_richness} and \\emph{Subject\\_numChars} were found to be two of the most informative features (Table~\\ref{tab:rankedfeats3}). However, the information gain value associated with both these features was fairly low. This shows that, even though they ranked highest, \\emph{Subject\\_richness} and \\emph{Subject\\_numChars} did not discriminate strongly between spear phishing and non spear phishing emails. The similar mean and standard deviation values of both features in Table~\\ref{tab:rankedfeats3} confirm this.\n\n\n\\begin{table}[!h]\n\\begin{center}\n \\begin{tabular}{l|l|c|c|c|c}\n \\hline\n \\multirow{2}{*}{Feature} & \\multirow{2}{*}{Info. Gain} & \\multicolumn{2}{c|}{SPEAR} & \\multicolumn{2}{c}{SPAM + BENIGN} \\\\ \\cline{3-6}\n ~ & ~ & Mean & Std Dev. & Mean & Std. Dev. \\\\ \\hline\n Subject\\_richness & 0.1829 & 0.159 & 0.051 & 0.176 & 0.084 \\\\\n Subject\\_numChars & 0.1050 & 29.61 & 17.77 & 30.46 & 20.79 \\\\\n Location & 0.0933 & - & - & - & - \\\\\n numConnections & 0.0388 & 158.68 & 164.31 & 215.30 & 173.76 \\\\\n Subject\\_numWords & 0.0311 & 4.74 & 3.28 & 4.75 & 3.57 \\\\\n Subject\\_isForwarded & 0.0188 & - & - & - & - \\\\\n Subject\\_isReply & 0.0116 & - & - & - & - \\\\ \n SummaryNumChars & 0.0108 & 140.98 & 308.17 & 198.62 & 367.81 \\\\ \n SummaryRichness & 0.0090 & 0.045 & 0.074 & 0.057 & 0.080 \\\\ \n jobLevel & 0.0088 & 3.41 & 2.40 & 3.71 & 2.49 \\\\ \\hline\n \\end{tabular}\n\\vspace{5pt}\n\\caption{Information gain, mean and standard deviation of the 10 most informative features from SPEAR and a combination of BENIGN and SPAM emails. The \\emph{subject} features performed best at distinguishing SPEAR emails from non SPEAR emails.}\n\\label{tab:rankedfeats3}\n\\end{center}\n\\end{table}\n\n\nContrary to our observations in SPEAR versus SPAM and SPEAR versus BENIGN emails, we found five \\emph{social} features among the top 10 features in this analysis.
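\n\nAll the feature rankings above (Tables~\\ref{tab:rankedfeats1}--\\ref{tab:rankedfeats3}) are based on information gain, i.e., the reduction in class-label entropy obtained by splitting on a feature. A minimal sketch of this computation, assuming a discrete-valued feature (numeric features would first need to be discretized), is given below.\n\n\\begin{verbatim}\n# Information gain of a discrete feature:\n# H(labels) - H(labels | feature).\nfrom collections import Counter\nfrom math import log2\n\ndef entropy(labels):\n    n = len(labels)\n    return -sum((c \/ n) * log2(c \/ n)\n                for c in Counter(labels).values())\n\ndef info_gain(feature_values, labels):\n    n = len(labels)\n    cond = 0.0\n    for v in set(feature_values):\n        subset = [l for f, l in zip(feature_values, labels) if f == v]\n        cond += len(subset) \/ n * entropy(subset)\n    return entropy(labels) - cond\n\n# Toy example: a feature that separates the two classes perfectly.\nfeature = ['long', 'long', 'short', 'short', 'long', 'short']\nlabels  = ['SPAM', 'SPAM', 'SPEAR', 'SPEAR', 'SPAM', 'SPEAR']\nprint(info_gain(feature, labels))  # -> 1.0\n\\end{verbatim}\n\n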
The five \\emph{social} features in question were \\emph{Location, numConnections, SummaryNumChars, SummaryRichness}, and \\emph{jobLevel}. Although there was a significant difference between the average number of LinkedIn connections in the two datasets, this feature did not have much information gain associated with it due to the very large standard deviation.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Discussion} \\label{sec:conclusion}\n\nIn this paper, we attempted to utilize \\emph{social} features from the LinkedIn profiles of employees of 14 organizations to distinguish between spear phishing and non spear phishing emails. We extracted the LinkedIn profiles of 2,434 employees who received 4,742 targeted spear phishing emails; 5,914 employees who received 9,353 spam or phishing emails; and 1,240 Enron employees who received 6,601 benign emails.\nWe performed our analysis on a real world dataset from Symantec's enterprise email scanning service, which is one of the biggest email scanning services used at the corporate level. Furthermore, we focused our analysis entirely on corporate employees from 14 multinational organizations instead of random real-world users. The importance of studying spear phishing emails in particular, instead of general phishing emails, has been clearly highlighted by Jagatic et al.~\\cite{Jagatic2007}.\nWe performed three classification tasks, viz. spear phishing emails versus spam \/ phishing emails, spear phishing emails versus benign emails from Enron, and spear phishing emails versus a mixture of spam \/ phishing emails and benign Enron emails. We found that in two out of the three cases, social features extracted from the LinkedIn profiles of employees did not help in determining whether an email received by them was a spear phishing email or not. Classification results on the combination of spam \/ phishing and benign emails showed some promise, with \\emph{social} features found to be slightly helpful; the depleted \\emph{email} feature set in this case, however, partly explains the improvement in classifier performance.\nWe believe it is safe to conclude that publicly available content on an employee's LinkedIn profile was not used to send her targeted spear phishing emails in our dataset. However, we cannot rule out the possibility of such an attack outside our dataset, or in the future. Such attacks may be better detected with access to richer \\emph{social} features. This methodology of detecting spear phishing can be helpful for safeguarding soft targets for phishers, i.e., those who have a strong social media footprint. Existing phishing email filters and products can also exploit this technique to improve their performance and provide personalized phishing filters to individuals.\n\nThere can be multiple reasons for our results being non-intuitive. Firstly, the amount of social information we were able to gather from LinkedIn was very limited. These limitations have been discussed in Section~\\ref{sec:linkedin_data}. It is likely that in a real-world scenario, an attacker may be able to gain much more information about a victim prior to the attack. This could include looking for the victim's profile on other social networks like Facebook, Twitter etc., looking for the victim's presence on the Internet in general using search engines (Google, Bing etc.), and using profiling websites like Pipl~\\footnote{\\url{https:\/\/pipl.com\/}}, Yasni~\\footnote{\\url{http:\/\/www.yasni.com\/}} etc.
Automating this kind of data collection would have been very time consuming, and we were not able to take this approach due to time constraints. Secondly, it was not clear which aspects of a user's social profile were most likely to be used by attackers against them. We tried to use all the features, viz. textual information (summary and headline), connectivity (number of connections), work information (job level and job type) and location information, which were made available by the LinkedIn API, to perform our classification tasks. However, it is possible that none of these features were used by attackers to target their victims. In fact, we have no way to verify that the spear phishing emails in our dataset were even crafted using features from the social profiles of the victims. These reasons, however, help us better understand the scope of using social features to detect spear phishing emails.\n\n\nIn terms of research contributions, this work is based on a rich, true positive, real world dataset of spear phishing, spam, and phishing emails, which is not publicly available. We believe that this characterization of the data can be very useful for the entire research community to better understand the state-of-the-art spear phishing emails that have been circulating on the Internet over the past two years. To maintain anonymity and confidentiality, we could not characterize this data further, and had to anonymize the names of the 14 organizations we studied. Also, despite multiple reports highlighting and warning about social media features being used in spear phishing, there does not exist much work in the research community that studies this phenomenon. \n\n\nWe would like to emphasize that the aim of this work is not to try to improve the existing state-of-the-art phishing email detection techniques based on header and content features, but to see if the introduction of social media profile features can help existing techniques better detect spear phishing emails. We believe that this work can be a first step towards exploring the threats posed by the enormous amount of contextual information about individuals that is present on online social media. In the future, we would like to carry out a similar analysis using the same email dataset with more social features, which we were not able to collect in this attempt due to time constraints. We would also like to apply more machine learning and classification techniques, such as Support Vector Machines and stochastic gradient boosting, to this dataset to get more insight into why social features did not perform well.\n\n\n\n\n\\section{Acknowledgement}\nWe would like to thank the Symantec team for providing us with the email data that we used for this work. We would also like to thank the members of the Precog Research Group, and the Cybersecurity Education and Research Center at IIIT-D for their support.\n\n\\bibliographystyle{abbrv}\n\n\\section{Introduction}\n\nAssume that $H$ is a separable Hilbert space with inner product $\\langle \\, \\cdot \\,,\\, \\cdot \\, \\rangle$ and consider a self-adjoint operator~$A$ with simple discrete spectrum acting in~$H$.
Our aim is to study the spectral properties of rank-one perturbations of~$A$, i.e., of the operators $B$ of the form\n\\[\n B=A + \\langle \\cdot, \\varphi \\rangle \\psi,\n\\]\nwhere $\\varphi$ and $\\psi$ are nonzero elements of $H$.\n\nRank-one perturbations of operators and matrices have been actively studied in both the mathematical and the physical literature for the reason that, on the one hand, they are simple enough to allow description of the spectral properties of perturbed operators via closed-form formulae which can then be analysed using various techniques; on the other hand, such perturbations turn out to be general enough to produce various non-trivial effects. \n\nOne of the most general results in a finite-dimensional setting is given by Krupnik~\\cite{Kru92} and states that a suitable rank-one perturbation of an $n\\times n$ matrix~$A$ can possess an arbitrary prescribed spectrum, counting multiplicity. In other words, given any natural number $k$, any pairwise distinct complex numbers~$z_1, z_2, \\dots, z_k$, and any natural numbers $m_1, m_2, \\dots, m_k$ satisfying $m_1+ m_2 + \\dots + m_k =n$, there is a rank-one perturbation~$B$ of~$A$ whose spectrum consists of the points $z_1, z_2, \\dots, z_k$ of the corresponding algebraic multiplicities $m_1, m_2, \\dots, m_k$. This statement is also specialized to the cases when both~$A$ and the perturbed matrix~$B$ belong to the Hermitian, unitary, or normal classes. Savchenko~\\cite{Sav03} studies the effect a generic rank-one perturbation has on the Jordan structure of a matrix~$A$; an interesting observation is that, typically, in each root subspace, only the Jordan chain of the largest length splits; in~\\cite{Sav04}, this is further generalized to low-rank perturbations, cf.\\ also~\\cite{MorDop03}. Similar results in infinite-dimensional Banach spaces were earlier derived by H\\\"ormander and Melin in~\\cite{HorMel94}. Bounds on the number of distinct eigenvalues of~$B$ in terms of some spectral characteristics of~$A$ are established in~\\cite{Far16}. \n\n\nStructured perturbations of matrices and matrix pencils have recently been thoroughly studied in a series of papers by Mehl a.o.~\\cite{MehMehRanRod11, MehMehRanRod12, MehMehRanRod13, MehMehRanRod14, MehMehRanRod16, MehMehWoj17, SosMorMeh20}. Changes in the Jordan structure under perturbations within the classes of complex $J$-Hamiltonian and $H$-symmetric matrices and applications in control theory\nare discussed in~\\cite{MehMehRanRod11}; see~\\cite{MehMehRanRod13, MehMehRanRod16} for further treatment in both the real and complex case. The class of $H$-Hermitian matrices, with (skew-)Hermitian $H$, is studied in~\\cite{MehMehRanRod12, MehMehRanRod14} via the canonical form of the pair $(B, H)$. Rank-one perturbations of matrix pencils are discussed e.g.\\ in~\\cite{MehMehWoj17, GerTru17, BarRoc20}. A general perturbation theory for structured matrices is developed in the recent paper~\\cite{SosMorMeh20}.\n\nThe above results essentially exploit matrix methods and thus are not directly applicable to the infinite-dimensional case (see, however, \\cite{HorMel94}). Rank-one perturbations of bounded or unbounded operators in infinite-dimensional Hilbert spaces have been studied within the general operator theory.
For instance, a comprehensive spectral theory of rank-one perturbations of unbounded operators in the self-adjoint case is developed in~\\cite{Sim95}, where a detailed characterization of the discrete, absolutely continuous, and singularly continuous components of the spectrum of the perturbed operator is given. A thorough overview of the theory of singular point perturbations of Schr\\\"odinger operators (formally corresponding to additive Dirac delta-functions and their derivatives) is given in the monographs by Albeverio a.o.~\\cite{AGHH, AlbKur00}. There has been much work devoted to the so-called singular and super-singular rank-one perturbations of self-adjoint operators, where the functions $\\varphi$ and $\\psi$ belong to the scales of Hilbert spaces $\\operatorname{dom}(A^\\alpha)$ with negative~$\\alpha$, see e.g.~\\cite{AlbKosKurNiz03, AlbKonKos05, AlbKos99, AlbKuzNiz08, Gol18, Kur04, KurLugNeu19, KuzNiz06, DudVdo16}; in this case, a typical approach is through the Krein extension theory of self-adjoint operators. Rank-one and finite-rank perturbations of self-adjoint operators in Krein spaces have been recently discussed in e.g.~\\cite{BehMoeTru14, BehLebPerMoeTru16}.\n\nDespite the extensive research in the area, there seems to be no complete infinite-dimensional generalization of the results by Krupnik~\\cite{Kru92}. The most pertinent works we are aware of include the papers by H\\\"ormander and Melin~\\cite{HorMel94} and by Behrndt a.o.~\\cite{BehLebPerMoeTru15}, which characterize possible changes in the Jordan structure of root subspaces of linear mappings in infinite-di\\-men\\-si\\-o\\-nal linear vector spaces under general finite-rank perturbations. \n\nOur motivation in this work was to understand how the spectrum of an operator in an infinite-dimensional Hilbert space can change under a rank-one perturbation, both locally, i.e., on the level of root subspaces, and globally, i.e., on the level of eigenvalue asymptotics. This task is quite non-trivial even in the case when the unperturbed operator~$A$ is self-adjoint but has generic spectrum, cf.~\\cite{Sim95}. Therefore, we decided to start by deriving a complete spectral picture in the simplest case where the unperturbed operator~$A$ is self-adjoint and has simple discrete spectrum. Under this assumption, our main result (Theorem~\\ref{thm:main}) shows that the rank-one perturbation~$B$ of~$A$ may acquire eigenvalues of arbitrary algebraic multiplicity at an arbitrary finite set of points; however, all sufficiently large eigenvalues remain simple and asymptotically close to the eigenvalues of~$A$. In the finite-dimensional case, our analysis leads to an extension of the result by Krupnik~\\cite{Kru92}; Theorem~\\ref{thm:finite-dim} states that one of the vectors~$\\varphi$ or $\\psi$ can be fixed arbitrarily in a ``generic'' set, and then one can find the other vector such that the perturbed matrix $B$ possesses the prescribed spectrum; moreover, such a choice is unique. We also specify this result in Theorem~\\ref{thm:phi-arbitrary} to the case when $\\varphi$ or $\\psi$ is fixed arbitrarily. We also note that a complete characterization of the possible spectra of rank-one perturbations of self-adjoint operators in Hilbert space, including the precise asymptotic distribution and a constructive algorithm for finding~$\\varphi$ and $\\psi$, is presented in the subsequent paper~\\cite{DobHry20}.\n\nThe structure of the paper is as follows.
In the next section, we introduce the characteristic function of the perturbed operator~$B$ and discuss how it is related to its spectrum. In Section~\\ref{sec:mult}, the algebraic multiplicities of eigenvalues are discussed, and in Section~\\ref{sec:asympt}, the asymptotic distribution of eigenvalues is established. In Section~\\ref{sec:finite-dim}, we specialize the obtained results to the finite-dimensional case, and in the final section we discuss possible generalizations of the main results to wider classes of the operators~$A$.\n\n\n\n\\section{General spectral properties of $B$}\\label{sec:general}\n\n\n\nThroughout the paper, we make the following standing assumption on the operator~$A$:\n\\begin{itemize}\n \\item[(A1)] the operator~$A$ is self-adjoint and has simple discrete spectrum.\n\\end{itemize}\nThe operator~$A$ is necessarily unbounded above or\/and below; clearly, by considering $-A$ in place of~$A$, we reduce the case when $A$ is bounded above to the case when it is bounded below. Therefore, under assumption~(A1), the spectrum of~$A$ consists of real simple eigenvalues that can be listed in increasing order as $\\lambda_n$, $n\\in I$, with the index set~$I$ equal to~$\\mathbb{N}$ in the case where $A$ is bounded below and to~$\\mathbb{Z}$ otherwise. \n\nThe operator~$B$ is a rank one perturbation of the operator~$A$, i.e.,\n\\begin{equation}\\label{eq:B}\n B=A + \\langle \\cdot, \\varphi \\rangle \\psi\n\\end{equation}\nwith fixed nonzero vectors~$\\varphi$ and $\\psi$ in~$H$. Clearly, the operator~$B$\nis well defined and closed on its natural domain $\\dom (B)$ equal to $\\dom (A)$.\nNext, for $\\lambda$ in the resolvent set $\\rho(A)$ of~$A$, we introduce the \\emph{characteristic function}\n\\begin{equation}\\label{eq:F}\n F(\\lambda) := \\langle (A-\\lambda)^{-1}\\psi, \\varphi \\rangle + 1\n\\end{equation}\nand denote by $\\mathcal{N}_F$ the set of zeros of $F$.\nMany spectral properties of the operator~$B$ of~\\eqref{eq:B} will be derived from the explicit formula for its resolvent known as the Krein formula~(see, e.g., \\cite[Sec. 1.1.1]{AlbKur00}); we include its proof for the sake of completeness and to derive some explicit relations to be used later on.\n\n\\begin{lemma}[The Krein formula]\\label{lm:krein}\nThe set $\\rho(A)\\setminus \\cN_F$ consists of resolvent points of the operator~$B$ and, for every~$\\lambda\\in\\rho(A)\\setminus \\cN_F$,\n\\begin{equation}\\label{eq:Krein}\n (B - \\lambda)^{-1}\n = (A - \\lambda)^{-1} - \\frac{\\langle \\, \\cdot \\,, (A - \\overline{\\lambda})^{-1} \\varphi \\rangle}{F(\\lambda)} \\:\n (A-\\lambda)^{-1} \\psi.\n\\end{equation}\n\\end{lemma}\n\n\\begin{proof}\nTo prove that a fixed $\\lambda\\in\\rho(A)\\setminus \\cN_F$ is a resolvent point of~$B$, we need to show that for every~$g\\in H$ the equation\n\\begin{equation}\\label{eq:krein-1}\n g = (B - \\lambda) f\n\\end{equation}\ncan be uniquely solved for $f \\in H$. 
Assuming such an~$f$ exists, writing the equality \\eqref{eq:krein-1} as\n\\begin{equation}\\label{eq:krein-2}\ng = (A - \\lambda) f + \\langle f, \\varphi \\rangle \\psi,\n\\end{equation}\nand applying the resolvent of the operator $A$ to both sides, we obtain\n\\begin{equation}\\label{eq:krein-3}\n (A - \\lambda)^{-1} g\n = f + \\langle f, \\varphi \\rangle (A - \\lambda)^{-1} \\psi.\n\\end{equation}\nTaking the inner product with $\\varphi$ results in the equality\n\\begin{equation}\\label{eq:krein-4}\n \\langle (A - \\lambda)^{-1} g, \\varphi \\rangle\n = \\langle f, \\varphi \\rangle\n + \\langle f, \\varphi \\rangle \\langle (A - \\lambda)^{-1} \\psi, \\varphi \\rangle\n = \\langle f, \\varphi \\rangle F(\\lambda),\n\\end{equation}\nwhich on account of $F(\\lambda)\\ne0$ leads to\n\\begin{equation}\\label{eq:krein-5}\n \\langle f, \\varphi \\rangle =\n \\frac{\\langle (A - \\lambda)^{-1} g, \\varphi \\rangle}{F(\\lambda)}.\n\\end{equation}\nSubstituting now \\eqref{eq:krein-5} in \\eqref{eq:krein-3}, we derive the following formula for~$f$:\n\\begin{equation}\\label{eq:krein-6}\n f = (A - \\lambda)^{-1} g\n - \\frac{\\langle (A - \\lambda)^{-1} g, \\varphi \\rangle}{F(\\lambda)} (A-\\lambda)^{-1}\\psi.\n\\end{equation}\nA direct verification shows that $f$ of~\\eqref{eq:krein-6} belongs to~$\\dom(B)=\\dom(A)$ and is indeed a solution of equation~\\eqref{eq:krein-1}.\n\nTherefore the operator $B - \\lambda$ is surjective. It is also injective since if an $f\\in \\dom(B)$ satisfies~\\eqref{eq:krein-1} with $g = 0$, then~\\eqref{eq:krein-3} on account of~\\eqref{eq:krein-5} gives~$f=0$. Thus the operator $B - \\lambda$ is invertible and its inverse is equal to\n$$\n(B - \\lambda)^{-1} = (A - \\lambda)^{-1} - \\frac{\\langle \\, \\cdot \\,, (A - \\overline {\\lambda})^{-1} \\varphi \\rangle} {F(\\lambda)} \\, (A - \\lambda)^{-1} \\psi\n$$\nas claimed. The proof is complete.\n\\end{proof}\n\nThe Krein formula shows that, for every $\\lambda \\in \\rho(A) \\setminus \\cN_F$, the resolvent~$(B-\\lambda)^{-1}$ is a rank one perturbation of the compact operator~$(A-\\lambda)^{-1}$. Therefore, we get the following\n\n\\begin{corollary}\\label{cr:krein}\nThe resolvent of the operator $B$ is compact, i.e., $B$ is an operator with discrete spectrum.\n\\end{corollary}\n\n\nNext we denote by~$v_n$ a normalized eigenvector of~$A$ corresponding to its eigenvalue~$\\lambda_n$; then the set~$\\{v_n\\}_{n\\in I}$ is an orthonormal basis of~$H$. We also denote by $a_n$ and $b_n$ the Fourier coefficients of the vectors~$\\varphi$ and $\\psi$ with respect to this basis, so that%\n\\begin{footnote}\n{In the case $I=\\mathbb{Z}$, the summation will always be understood in the principal value sense.}\n\\end{footnote}\n\\[\n \\varphi = \\sum_{n\\in I} a_n v_n, \\qquad \\psi = \\sum_{n\\in I} b_n v_n.\n\\]\n\n\n\n\n\\begin{lemma}\\label{lem:eig-B}\nThe following relations hold between the spectra of the operators $A$ and $B$:\n\\begin{itemize}\n\\item[a)]\nfor $\\lambda\\in\\rho(A)$, $\\lambda$ belongs to the spectrum of~$B$ if and only if~$\\lambda \\in \\cN_F$;\n\n\\item[b)]\nthe eigenvalue $\\lambda = \\lambda_n$ of the operator $A$ belongs to the spectrum of the operator $B$ if and only if $a_nb_n = 0$.\n\\end{itemize}\n\\end{lemma}\n\n\\begin{proof}\na) Let a point $\\lambda \\in \\rho(A)$ belong to the spectrum of the operator~$B$. By Corollary \\ref{cr:krein}, $\\lambda$ is an eigenvalue of the operator~$B$, and we denote by~$y$ a corresponding eigenvector.
Then \\eqref{eq:krein-1} holds with~$g=0$ and with $y$ in place of~$f$, so that equations~\\eqref{eq:krein-3} and~\\eqref{eq:krein-4} can be recast as\n\\[\n y = - \\langle y, \\varphi \\rangle (A - \\lambda)^{-1} \\psi\n\\]\nand\n\\[\n \\langle y ,\\varphi \\rangle F (\\lambda) = 0,\n\\]\nrespectively. Since $y$ is a nonzero vector, we see from the former equality that $\\langle y, \\varphi \\rangle\\ne0$, and then the latter one yields $F(\\lambda)=0$.\n\nConversely, if $F(\\lambda)=0$ for some $\\lambda \\in \\rho(A) $, then $y: = (A - \\lambda)^{-1} \\psi $ is an eigenvector of the operator $B$ for the eigenvalue $\\lambda$, as is seen from the equalities\n\\[\n (A - \\lambda) y + \\langle y, \\varphi \\rangle \\psi\n = [1 + \\langle (A - \\lambda)^{-1} \\psi, \\varphi \\rangle]\\psi\n = F(\\lambda)\\psi= 0.\n\\]\nThis completes the proof of part a).\n\nb) Let $\\lambda = \\lambda_n $ belong to the spectrum of the operator $B$; then there is a vector $y \\in \\dom(B)$ such that $B y = \\lambda_n y$, i.e.,\n\\begin{equation}\\label{eq:1-2}\n (B-\\lambda_n) y\n = (A - \\lambda_n) y + \\langle y, \\varphi \\rangle \\psi = 0.\n\\end{equation}\nTaking the inner product with $v_n$ results in\n\\begin{align*}\n\\langle (A - \\lambda_n) y, v_n \\rangle + \\langle y, \\varphi \\rangle \\langle \\psi, v_n \\rangle\n & = \\langle y, (A - \\lambda_n) v_n \\rangle + \\langle y, \\varphi \\rangle \\langle \\psi, v_n \\rangle \\\\\n & = \\langle y, \\varphi \\rangle \\langle \\psi, v_n \\rangle = 0.\n\\end{align*}\nThus $\\langle y, \\varphi \\rangle = 0$ or $\\langle \\psi, v_n \\rangle = 0$. If $\\langle y, \\varphi \\rangle = 0$, then $y = c v_n$ for some constant $c$ on account of~\\eqref{eq:1-2}, so that $a_n = 0 $. If $\\langle \\psi, v_n \\rangle = 0 $, then $b_n = 0$. Therefore the point $\\lambda = \\lambda_n $ belongs to the spectrum of $B$ only if $a_nb_n = 0$.\n\nConversely, let $a_nb_n = 0$; we need to prove that the point $\\lambda = \\lambda_n$ belongs to the spectrum of $B$. If $a_n = 0$, then\n\\[\n (B-\\lambda_n) v_n = (A-\\lambda_n)v_n + a_n \\psi = 0\n\\]\nso that $y = v_n$ is an eigenvector of~$B$ for the eigenvalue $\\lambda_n$. If $b_n = 0$, then for all $y \\in \\dom(B)$\n$$\n \\langle (B - \\lambda_n) y, v_n \\rangle\n = \\langle (A - \\lambda_n) y, v_n \\rangle\n = \\langle y, (A - \\lambda_n)v_n \\rangle =0,\n$$\nso that $B - \\lambda_n$ is not surjective on $\\dom(B)$ and the point $\\lambda = \\lambda_n$ belongs to the spectrum of the operator $B$. The proof is complete.\n\\end{proof}\n\nWe introduce the sets of indices\n$$\nI_0 \\overset{\\text{def}}{=} \\{n \\in I \\, | \\, a_nb_n = 0 \\}, \\quad\nI_1 \\overset{\\text{def}}{=} \\{n \\in I \\, | \\, a_nb_n \\neq 0 \\}\n$$\nof cardinalities (possibly infinite) $N_0$ and $N_1$ respectively, and split the eigenvalues of~$A$ into the respective subsets\n$$\n \\sigma_0 (A) \\overset {\\text{def}}{=} \\{\\lambda_n \\, | \\, n \\in I_0 \\}\n \\text {\\qquad and \\qquad}\n \\sigma_1 (A) \\overset {\\text{def}}{=} \\{\\lambda_n \\, | \\, n \\in I_1 \\}.\n$$\nAccording to Lemma~\\ref{lem:eig-B}, the spectrum of the operator $B$ consists of two parts: $\\sigma_0(A) = \\sigma(A) \\cap \\sigma(B)$, the common eigenvalues of $A$ and $B$, and the set $\\cN_F$ of zeros of the function~$F$ in $\\rho(A)$. Certainly, the latter part of $\\sigma(B)$ is more interesting.\n\n\n\\section{Eigenvalue multiplicity}\\label{sec:mult}\n\n\nIn this section we discuss multiplicity of eigenvalues of the operator~$B$. 
\n\nFirst we recall that the \\textit{geometric multiplicity} of an eigenvalue $\\lambda$ of an operator~$T$ is the dimension of the corresponding eigenspace, i.e., the number $\\dim \\ker (T - \\lambda)$ \\cite[Ch.~5.1]{Kat95}, and its \\textit{algebraic multiplicity} is the dimension of the corresponding root subspace, i.e., the rank of the corresponding spectral projector \\cite[Ch.~5.4]{Kat95}. Note that for a selfadjoint operator geometric and algebraic multiplicities of every eigenvalue are equal.\n\nBefore proceeding, we recall that the function~$F$ was initially defined only on the resolvent set of the operator~$A$. However, using the spectral theorem for the operator~$A$, we can write the function $F$ as\n\\begin{equation}\\label{eq:F-new}\n F(\\lambda)\n = \\sum_{n \\in I_1} \\frac {\\overline{a_n} b_n}{\\lambda_n - \\lambda} + 1,\n\\end{equation}\nand this formula gives an analytic continuation of~$F$ onto the set $\\sigma_0(A)$. We shall denote this continuation by the same letter~$F$ but will write $\\cN_F^0$ for the set of zeros of $F$ continued onto $\\mathbb{C} \\setminus \\sigma_1(A)$.\n\n\\begin{lemma}\\label{lem:geom-mult}\nAn eigenvalue $\\lambda$ of~$B$ has geometric multiplicity larger than~$1$ if and only if there exists an integer $n$ such that $\\lambda = \\lambda_n$, $a_n = b_n = 0$, and $F(\\lambda_n) = 0$. In that case, the geometric multiplicity of~$\\lambda$ is equal to~$2$.\n\\end{lemma}\n\n\\begin{proof}\nAssume that $\\lambda\\in\\sigma(B)$ has geometric multiplicity larger than~$1$, and denote by $y$ any of the corresponding eigenvectors. Then\n\\[\n (B-\\lambda) y = (A-\\lambda) y + \\langle y, \\varphi \\rangle \\psi =0,\n\\]\nand if $\\lambda$ is a resolvent point of $A$, then $y$ must be collinear to the vector $(A-\\lambda)^{-1}\\psi$ and thus the geometric multiplicity of~$\\lambda$ is one. Therefore $\\lambda\\in\\sigma(A)$, so that $\\lambda=\\lambda_n$ for some $n\\in I_0$.\nNow, as in the proof of part b) of Lemma~\\ref{lem:eig-B}, we find that\n\\[\n 0=\\langle (B-\\lambda_n)y, v_n \\rangle\n = \\langle y, \\varphi \\rangle \\langle \\psi, v_n \\rangle\n = \\langle y, \\varphi \\rangle b_n,\n\\]\nso that $\\langle y, \\varphi \\rangle =0$ or $b_n=0$.\n\nAssume that $b_n\\ne0$; then $\\langle y, \\varphi \\rangle =0$ and\n$(B-\\lambda_n)y =(A-\\lambda_n) y =0$. Thus $y$ in that case must be collinear to $v_n$, and the geometric multiplicity of~$\\lambda_n$ is then~$1$.\nTherefore, $b_n=0$ and the vector $\\psi$ belongs to the subspace $H_n:=H \\ominus \\langle v_n \\rangle$. Since the nullspace of $B-\\lambda_n$ is of dimension at least~$2$, there is an eigenvector $w$ in $H_n$. We denote by $A_n$ the restriction $A|_{H_n}$ of~$A$ onto its invariant subspace~$H_n$ and see that\n\\[\n (A_n - \\lambda_n) w + \\langle w, \\varphi \\rangle \\psi =0.\n\\]\nNote that $\\lambda_n$ is a resolvent point of the operator~$A_n$, so that\nthe above equality implies that $w= c(A_n-\\lambda_n)^{-1}\\psi$ and that\n\\[\n \\langle (A_n-\\lambda_n)^{-1}\\psi, \\phi\\rangle +1 = 0,\n\\]\ni.e., that $F(\\lambda_n)=0$. Therefore, there is at most one (up to a factor) eigenvector of~$B$ in the space~$H_n$, and thus its second eigenvector must be of the form $v_n + w_n$ with some $w_n \\in H_n$. However, then\n\\[\n (B-\\lambda_n)(v_n + w_n) = (A-\\lambda_n)w_n\n + \\langle v_n + w_n, \\varphi \\rangle \\psi = 0\n\\]\nso that $w_n$ is collinear to the eigenvector $(A_n-\\lambda_n)^{-1}\\psi$ found earlier, and thus $v_n$ must also be an eigenvector of~$B$. 
As $(B-\\lambda_n)v_n = \\langle v_n , \\varphi \\rangle \\psi$, this requires that $a_n=0$.\n\nSumming up, we see that the assumption that $\\dim \\ker (B-\\lambda) > 1$ implies that $\\lambda=\\lambda_n$ for some~$n\\in I$ and $b_n=0$; moreover, there is an eigenvector $w$ in the subspace~$H_n$ if and only if $F(\\lambda_n)=0$, and then $w$ is collinear to~$(A_n-\\lambda_n)^{-1}\\psi$. The second eigenvector must be $v_n$, which is possible if and only if $a_n=0$. Therefore all the conditions are necessary, and the geometric multiplicity is then equal to~$2$.\n\n\nTo prove that these conditions are also sufficient, we assume that $\\lambda=\\lambda_n$ is such that $a_n=b_n=0$ and $F(\\lambda_n)=0$. Then, as shown above, $v_n$ and $w:= (A_n-\\lambda_n)^{-1}\\psi \\in H_n$ are linearly independent eigenvectors of~$B$ for the eigenvalue~$\\lambda_n$. The proof is complete.\n\\end{proof}\n\n\\begin{example}\\rm \nLet $\\lambda$ and $\\mu$ be distinct eigenvalues of an operator $A$ with corresponding normalized eigenvectors~$v$ and $w$; then for the operator\n\\(\n\tB := A + (\\lambda - \\mu)\\langle \\cdot, w\\rangle w\n\\) \nthe number~$\\lambda$ is an eigenvalue of geometric multiplicity two, $v$ and $w$ being the corresponding eigenvectors. As the above lemma shows, geometric multiplicity cannot be made larger by a rank-one perturbation of~$A$.\t\n\\end{example}\n\n\\begin{remark}\\label{rem:multiplicity}\nAssume that $a_n=b_n=0$, so that $\\lambda_n$ is an eigenvalue of~$B$ with eigenvector~$v_n$. Then $v_n$ is also an eigenvector of the adjoint operator~$B^*$, so that the subspaces $\\langle v_n \\rangle$ and $H_n$ are reducing for $B$. Moreover, the restrictions of $A$ and $B$ onto $\\langle v_n\\rangle$ coincide.\n\nMore generally, we denote by $H^{0}$ the closed linear space of all eigenvectors~$v_k$ of~$A$ for which $a_k=b_k=0$. Then the subspace $H^0$ is reducing for~$B$ and the restrictions of~$A$ and of $B$ onto $H^0$ coincide. Therefore, we can concentrate on the study of the restriction of the operator~$B$ onto its invariant subspace $H^1:=H \\ominus H^0$. Without loss of generality, we shall assume that $H^0 = \\{0\\}$, so that $H=H^1$. Under this assumption, all eigenvalues of the operator~$B$ are geometrically simple.\n\\end{remark}\n\nNext we discuss algebraic multiplicity of the eigenvalues of~$B$ in the resolvent set of~$A$. As every such an eigenvalue $\\lambda$ is geometrically simple by Lemma~\\ref{lem:geom-mult}, its algebraic multiplicity coincides with the largest length of chains of eigen- and associated vectors (also called Jordan chains). We recall that a sequence of vectors~$y_0, y_1, \\dots, y_m$ forms a chain of eigen- and associated vectors of~$B$ for an eigenvalue~$\\lambda$ if every $y_k$ is in the domain of~$B$, $(B-\\lambda)y_0=0$, and $(B-\\lambda)y_k = y_{k-1}$ for $k=1,\\dots,m$.\nChains of eigen- and associated vectors are not defined uniquely; however, for geometrically simple eigenvalues all such chains are closely related, as the next lemma demonstrates.\n\n\n\\begin{lemma}\\label{lem:jordan-chains}\nAssume that $\\lambda$ is a (geometrically simple) eigenvalue of the operator~$B$ and $y_0, y_1,\\dots, y_m$ is a chain of eigen- and associated vectors corresponding to~$\\lambda$.\n\\begin{itemize}\n \\item[(i)] For every sequence of complex numbers $c_1, \\dots, c_m$ introduce the vectors $\\tilde y_0 = y_0$ and\n \\begin{equation}\\label{eq:CEAV}\n \\tilde y_k = y_k + c_1 y_{k-1} + \\cdots + c_k y_0\n \\end{equation}\n for $k=1,\\dots,m$. 
Then $\\tilde y_0, \\tilde y_1,\\dots,\\tilde y_m$ is a chain of eigen- and associated vectors of~$B$ corresponding to $\\lambda$.\n \\item[(ii)] Vice versa, assume that $\\tilde y_0, \\tilde y_1,\\dots,\\tilde y_m$ is another chain of eigen- and associated vectors of~$B$ corresponding to the eigenvalue $\\lambda$ such that $\\tilde y_0 = y_0$. Then there are constants $c_1, \\dots, c_m$ such that for all $k=1,2,\\dots,m$ relations \\eqref{eq:CEAV} hold.\n\\end{itemize}\n\\end{lemma}\n\n\\begin{proof}\nBy definition of~$\\tilde y_k$ and $y_k$, we find that\n\\[\n (B-\\lambda)\\tilde y_k =y_{k-1}+ c_1 y_{k-1} + \\cdots + c_{k-1} y_0\n = \\tilde y_{k-1}\n\\]\nfor $k\\ge1$ thus establishing Part~(i).\n\n\\\nFor part (ii), the proof is by induction. Since $(B-\\lambda)(\\tilde y_1 - y_1) = \\tilde y_0 - y_0 =0$, it follows that there is $c_1 \\in \\mathbb{C}$ such that $\\tilde y_1- y_1 = c_1 y_0$ thus establishing the base of induction. Assume that the claim has already been proved for $k =1, \\dots, l-10$). In particular, $l=0$ is equivalent to $m=1$ (recall that the case $a_n = b_n = F(\\lambda_n)=0$ was excluded), and the equality $m=l+1$ is then satisfied.\n\n Assume therefore that $l>0$ and introduce the vectors\n \\[\n y_k:= - \\frac1{b_n}(A_n - \\lambda_n)^{-k}P_n \\psi, \\qquad k\\ge1.\n \\]\n Then one sees that\n \\[\n (B-\\lambda_n) y_k = (A-\\lambda_n) y_k + \\langle y_k ,\\varphi \\rangle \\psi\n\t= y_{k-1} + \\langle y_k ,\\varphi \\rangle \\psi\n \\]\n and\n \\[\n \\langle y_k ,\\varphi \\rangle = - \\frac1{b_n} \\langle (A_n - \\lambda_n)^{-k}P_n \\psi ,\\varphi \\rangle\n\t= -\\frac1{b_n(k-1)!}F^{(k-1)}(\\lambda_n).\n \\]\n It follows that the vectors $y_1, y_2,\\dots,y_l$ form a chain of vectors associated to the eigenvector~$y_0$, so that the algebraic multiplicity $m$ of the eigenvalue~$\\lambda_n$ is at least $l+1$.\n\n Conversely, as in the proof of Lemma~\\ref{lem:alg-mult} one can show that in any chain $\\tilde y_0, \\tilde y_1, \\dots, \\tilde y_{m-1}$ of eigen- and associated vectors for $B$ the vectors $\\tilde y_1, \\dots, \\tilde y_{m-1}$ are related to the above-constructed vectors $y_1, \\dots, y_{m-1}$ via~\\eqref{eq:tilde-y} and that $F(\\lambda_n) = F'(\\lambda_n) = \\dots = F^{(m-2)}(\\lambda_n)=0$. This shows that $l \\ge m-1$ and completes the proof in the case (a).\n\n \\textbf{Case (b)}: $b_n = 0$. Then $\\psi$ belongs to $H_n = H \\ominus v_n$ and thus the range\n $\\ran(B - \\lambda_n)$ of $B-\\lambda_n$ is contained in $H_n$.\n We look for an eigenvector $y_0$ of~$B$ of the form $\\alpha_0 v_n + z_0$ with $z_0 \\in H_n$. Then $(A-\\lambda_n)y_0 = (A_n - \\lambda_n)z_0$, and $(B-\\lambda_n)y_0=0$ can be written as\n \\[\n (A_n - \\lambda_n) z_0 + \\langle y_0 , \\varphi \\rangle \\psi =0,\n \\]\n so that $z_0 = c(A_n - \\lambda_n)^{-1}\\psi$ with an appropriate constant~$c$. Substituting this $z_0$ into the above equation results in the relation\n \\[\n c \\psi + \\bigl[\\alpha_0 \\overline{a_n} + c \\langle (A_n - \\lambda_n)^{-1}\\psi , \\varphi \\rangle \\bigr] \\psi =0,\n \\]\n yielding the equality\n \\begin{equation}\\label{eq:alpha0}\n c F(\\lambda_n) + \\alpha_0 \\overline{a_n} =0.\n \\end{equation}\n\n In order that for the eigenvector~$y_0$ there could exist an associated vector~$y_1$, it is necessary that $y_0 = (B-\\lambda_n)y_1$ belong to $H_n$ and thus that $\\alpha_0 = 0$ and $y_0=z_0$. 
Equation~\\eqref{eq:alpha0} then yields $cF(\\lambda_n)=0$, and since $c=0$ would lead to the contradiction that $y_0=z_0=0$, we conclude that necessarily $F(\\lambda_n)=0$. In particular, $l=0$ gives $m=1$ as stated.\n\n Assume therefore that $l>0$, so that $F(\\lambda_n)=0$. As the case $a_n=b_n =0$ was excluded earlier, we have $a_n\\ne0$ and thus $\\alpha_0=0$ by~\\eqref{eq:alpha0} and $y_0= z_0 := (A_n - \\lambda_n)^{-1}\\psi$.\n\n We first show that $m\\ge l+1$ by constructing a chain $y_1, \\dots, y_l$ of vectors associated to this~$y_0$. Namely, take\n $y_k := (A_n - \\lambda_n)^{-(1+k)}\\psi$\n for $k=1,\\dots, l-1$ and\n $y_l := \\alpha_l v_n + (A_n - \\lambda_n)^{-(1+l)}\\psi$\n with an~$\\alpha_l$ to be determined later. As in the proof of Case~(a) we find that\n \\[\n (B-\\lambda_n) y_k\n = (A_n - \\lambda_n) y_k + \\langle y_k,\\varphi\\rangle \\psi\n = y_{k-1} + \\frac{1}{k!} F^{(k)}(\\lambda_n) \\psi\n = y_{k-1}\n \\]\n for $k =1, 2, \\dots, l-1$. For $k=l$ we get\n \\[\n (B-\\lambda_n) y_l\n = (A_n - \\lambda_n) y_l + \\langle y_l,\\varphi\\rangle \\psi\n = y_{l-1}\n + \\bigl[\\alpha_l \\overline{a_n}\n +\\frac{1}{l!} F^{(l)}(\\lambda_n)\\bigr] \\psi,\n \\]\n and the equality $(B-\\lambda_n) y_l = y_{l-1}$ is guaranteed by taking (recall that $a_n \\ne0$)\n \\[\n \\alpha_l := - \\frac{1}{\\overline{a_n}l!} F^{(l)}(\\lambda_n).\n \\]\n\n It remains to show that $l \\ge m-1$. We take a chain of eigen- and associated vectors $\\tilde y_0, \\dots, \\tilde y_{m-1}$ of the maximal possible length~$m>1$. The equalities $(B-\\lambda_n) \\tilde y_k = \\tilde y_{k-1}$ for $k=1, \\dots, m-1$ show that the vectors $\\tilde y_0, \\dots, \\tilde y_{m-2}$ belong to $H_n$. Without loss of generality we may assume that $\\tilde y_0 = y_0$ and then prove by induction that with $c_k:= - \\langle \\tilde y_k, \\varphi \\rangle$ for $k=0,1,\\dots, m-2$ we have\n \\\n \\tilde y_k = y_k + c_1 y_{k-1} + \\dots + c_k y_0\n \\\n with $y_k$ defined above and that $F(\\lambda_n) = F'(\\lambda_n)=\\dots = F^{(k)}(\\lambda_n)=0$.\n\n The base of induction was already set up: $\\tilde y_0 = y_0$ and $F(\\lambda_n) = 0$. Assume therefore that the claim holds for all indices $k$ less than $j$ with $0m$. More precisely, we take\n\\[\n\t\\phi(x) = \\sum_{k=-m}^m e^{ikx} = \\frac{\\sin(m+\\tfrac12)x}{\\sin(\\tfrac12x)}\n\\]\nand \n\\[\n\t\\psi(x) = \\sum_{k=1}^m d_k \\sin(kx)\n\\]\nwith coefficients $d_k$ to be determined. Since $\\langle \\psi, v_0 \\rangle = 0$, the corresponding chain of eigen- and associated vectors can be formed as in Case (b) of the above theorem. Namely, with $A_0$ standing for the restriction of~$A$ onto the space $H_0:= H \\ominus v_0$, we take\n\\[\n\ty_k := A_0^{-(k+1)}\\psi, \\qquad k=0,\\dots, 2m-1,\n\\]\nand \n\\[\n\ty_{2m}:= d_0v_0 + A_0^{-(2m+1)}\\psi \n\\]\nfor a suitable $d_0$. We next show that there is a unique set of $d_0,\\dots,d_m$ for which the above $y_0,\\dots, y_{2m}$ form a chain of eigen- and associated vectors of~$B$ and that there is no longer chains of eigen- and associated vectors corresponding to~$\\lambda_0$. \n\nNotice that \n\\[\n\tA_0^{-2l}\\psi(x) = \\sum_{k=1}^m \\frac{d_k}{k^{2l}}\\sin(kx)\n\\]\nand \n\\[\n\tA_0^{-2l+1} = -i \\sum_{k=1}^m \\frac{d_k}{k^{2l-1}}\\cos(kx).\n\\]\nIt then follows that $y_{2l+1}$ are odd functions for all $l=0,\\dots,m-1$, and as~$\\phi$ is an even function, we find that $By_{2l+1} = A y_{2l+1} = y_{2l}$. 
On the other hand, the equalities\n$By_{2l} = y_{2l-1}$ for $l=0,\\dots,m$ amount to a non-singular system of $m+1$ linear equations in $m+1$ variables $d_0,d_1,\\dots, d_m$,\n\\begin{equation}\\label{eq:example-system}\n\t\\sum_{k=1}^m \\frac{d_k}{k^{2l+1}} = f_l, \\quad l=0,1,\\dots,m,\n\\end{equation}\nwith $f_0 = -i\/(2\\pi)$, $f_1 = \\dots = f_{m-1} = 0$, and $f_m = -id_0\/\\sqrt{2\\pi}$. \n\nNote that $d_0 \\ne0$ as otherwise the system would be inconsistent, so that $y_{2m}$ does not belong to $H_0$ and thus the chain cannot be extended further. In view of Lemma~\\ref{lem:jordan-chains}, this is true of any other chain of EAV's for the eigenvalue~$\\lambda_0$. As $a_0\\ne0$, geometric multiplicity of $\\lambda_0=0$ is equal to one by Lemma~\\ref{lem:geom-mult}; therefore, $\\lambda_0$ is a geometrically simple eigenvalue of~$B$ of algebraic multiplicity~$2m+1$. \n\nThe explicit form of $\\phi$ and $\\psi$ yields their Fourier coefficients: $a_n = b_n = 0$ if $|n|>m$, $a_n = \\sqrt{2\\pi}$ for $|n| \\le m$, and, finally, $b_n = \\sqrt{2\\pi}d_n\/2i$ for $n=1,\\dots,m$, $b_n = -b_{-n}$ for $n=-m,\\dots,-1$, and $b_0=0$. Then the characteristic function,\n\\[\n\tF(z) = \\sum_{n=-m}^m\\frac{\\overline{a_n}b_n}{n-z} + 1 \n\t\t = \\sqrt{2\\pi}\\sum_{n=1}^m\\frac{2nb_n}{n^2-z^2} + 1 \n\t\t = \\frac{2\\pi}i\\sum_{n=1}^m\\frac{nd_n}{n^2-z^2} + 1 \n\\] \nis a rational function of the form $P(z)\/Q(z)$ with $P$ and $Q$ polynomials of degree at most $2m$. Therefore, $F$ has at most $2m$ zeros counting with multiplicity. On the other hand, it is straightforward to verify that equations~\\eqref{eq:example-system} amount to the relations \n\\[\n\tF(0) = F'(0) = \\dots = F^{(2m-1)}(0) = 0,\n\\]\nso that $z = 0$ is a zero of~$F$ of multiplicity~$2m$. This implies that $F$ has no other zeros. In particular, $F(n)\\ne0$ if $n\\ne0$, and thus $\\lambda_n = n$ is an algebraically simple eigenvalue of the operator~$B$ whenever $|n|>m$. \n\nTo sum up, the operator~$B$ has an eigenvalue~$\\lambda_0 = 0$ of algebraic multiplicity~$2m+1$ and simple eigenvalues $\\lambda_n$ for $|n|>m$. Loosely speaking, the rank one perturbation shifts the eigenvalues $\\lambda_{-m}, \\dots, \\lambda_{-1}$, $\\lambda_1, \\dots, \\lambda_m$ towards $\\lambda_0$ respectively enlarging the multiplicity of the latter. \n\\end{example}\n\n\n\n\n\\section{Spectral localization of the operator~$B$}\\label{sec:asympt}\n\nWe next turn to the question, what spectra the rank-one perturbations~$B$ of a given self-adjoint operator~$A$ can have. Keeping in mind the most important and interesting applications to the differential operators, in addition to~$(A1)$ we assume that\n\\begin{itemize}\n\t\\item[(A2)] the eigenvalues of~$A$ are separated, i.e.,\n\t\\begin{equation}\\label{eq:dist}\n\t\\inf_{n \\in I} |\\lambda_{n+1} - \\lambda_n| =: d > 0.\n\t\\end{equation}\n\\end{itemize}\n\nWe next localize the spectrum of~$B$ by studying its characteristic function\n\\begin{equation*}\\label{eq:F1}\n\tF(z) = \\sum_{k \\in I_1} \\frac {\\overline{a_k} b_k}{\\lambda_k - z} + 1.\n\\end{equation*}\nAs the Fourier coefficients $a_k$ and $b_k$ of the functions~$\\phi$ and $\\psi$ are in $\\ell_2(I)$, the sequence $\\overline{a_k}b_k$ is summable and, due to the Cauchy--Bunyakowsky--Schwarz inequality, its $\\ell_1$-norm is bounded by $\\|\\varphi\\|\\|\\psi\\|$. 
\n\n\\begin{lemma}\n\tThe spectrum of $B$ lies in the strip \n\t\\[\n\t\t\\Pi := \\{z \\in \\bC \\mid |\\myIm z| \\le \\|\\varphi\\|\\|\\psi\\|\\}.\n\t\\]\n\\end{lemma}\n\n\\begin{proof}\n\tIf $z\\not\\in \\Pi$, then $|\\lambda_k - z| \\ge |\\myIm z| > \\|\\varphi\\|\\|\\psi\\|$, so that \n\t\\[\n\t\t\\sum_{k \\in I_1} \\biggl|\\frac {\\overline{a_k} b_k}{\\lambda_k - z}\\biggr| \n\t\t\t< \\sum_{k \\in I_1} |\\overline{a_k} b_k|\/(\\|\\varphi\\|\\|\\psi\\|) \n\t\t\t< 1\n\t\\]\n\tso that $F(z) \\ne 0$. \n\\end{proof}\n\n\nNext, for an $\\varepsilon>0$ we denote by $C_n(\\varepsilon)$ the open circle\n\\[\n\tC_n(\\varepsilon) := \\{ z \\in \\bC \\mid |z - \\lambda_n| < \\varepsilon\\}\n\\]\nand set \n\\[\n\tR_{N, \\varepsilon}:= \\Bigl\\{z \\in \\bC \\mid |\\myRe z| \\ge N\\} \n\t\t\\setminus \\Bigl(\\bigcup\\nolimits_{n\\in I} C_n(\\varepsilon) \\Bigr)\\Bigr\\}\n\\]\n\n\\begin{lemma}\\label{lem:RN}\nFor every $\\varepsilon>0$ there is $N>0$ such that $R_{N,\\varepsilon}$ belongs to the resolvent set of the operator~$B$.\n\\end{lemma}\t\n\n\\begin{proof}\nFor an $\\varepsilon >0$, we choose $N'\\in\\mathbb{N}$ so that%\n\\begin{footnote}\n\t{Throughout this section, the symbol $\\sum{\\hspace*{-2pt}\\vphantom{\\sum}}^{(1)}$ denotes summation over the index set $I_1$}\n\\end{footnote} \n\t\\[\n\t\t\\sumI_{|k|\\ge N'} |\\overline{a_k} b_k| \\le \\frac{\\varepsilon}4; \n\t\\]\n\tthen, for $z$ outside every circle $C_n(\\varepsilon)$,\n\t\\[\n\t\t\\Bigl|\\sumI_{|k|\\ge N'} \\frac{\\overline{a_k} b_k}{\\lambda_k - z}\\Bigr|\n\t\t\t\\le \\frac1\\varepsilon \\sum_{k\\in I_1, |k|\\ge N'} |a_k b_k| \\le \\frac14.\n\t\\]\n\tWe now take $N''\\in\\mathbb{N}$ such that $N''\\ge N' + 4 \\|\\varphi\\|\\|\\psi\\|\/d$ and choose $N\\in \\mathbb{N}$ such that $N\\ge |\\lambda_{N''}|$ and $N \\ge |\\lambda_{-N''}|$ if $-N'' \\in I$. Due to Assumption~$(A2)$ it holds that $|\\lambda_k - \\lambda_m| \\ge d|k-m|$; therefore, \n\t$|\\lambda_k - z| \\ge d (N'' - N') \\ge 4\\|\\varphi\\|\\|\\psi\\|$ whenever $z \\in R_{N,\\varepsilon}$ and $|k| \\le N'$, so that \n\t\\[\n\t \t\\Bigl|\\sumI_{|k| < N'} \\frac{\\overline{a_k} b_k}{\\lambda_k - z}\\Bigr| \n\t \t\t\\le \\frac14\n\t\\]\n\tfor such $z$. \n\tAs a result, for all $z \\in R_{N,\\varepsilon}$ it holds\n\t\\[\n\t\t\t|F(z)| \\ge 1 - \t\\Bigl|\\sum_{k\\in I_1} \\frac{\\overline{a_k} b_k}{\\lambda_k - z}\\Bigr| \n\t\t\t\t\\ge \\frac12;\n\t\\]\n\tby Lemma~\\ref{lem:eig-B} the set $R_{N,\\varepsilon}$ is in the resolvent set of~$B$, and the proof is complete.\n\\end{proof}\n\nCombining the above two lemmata, we conclude that the spectrum of $B$ is localized in the circles $C_n(\\varepsilon)$ and in the rectangular domain\n\\[\n\t\\{z \\in \\bC \\mid |\\myRe|\\le N, \\ |\\myIm z| \\le \\|\\varphi\\|\\|\\psi\\|\\},\n\\]\nwith $N=N(\\varepsilon)$ from Lemma~\\ref{lem:RN}. \n\n\\begin{lemma}\\label{lem:EVinCn}\n\tFor every $\\varepsilon>0$ there is $K=K(\\varepsilon)$ such that for each $n\\in I$ with $|n| > K(\\varepsilon)$ the circle $C_n(\\varepsilon)$ contains precisely one eigenvalue of~$B$.\n\\end{lemma}\n\n\\begin{proof}\n\tBy Lemma~\\ref{lem:RN}, for all $n$ with large enough $|n|$, the boundary $\\partial C_n(\\varepsilon)$ of~$C_n(\\varepsilon)$ is in the resolvent set of~$B$. 
We next show that the Riesz spectral projections for $A$ and $B$ corresponding to $C_n(\\varepsilon)$ are of the same rank (and thus of rank~$1$) for large enough~$|n|$.\n\t\n\t For every $n$ with $\\partial C_n(\\varepsilon) \\subset \\rho(B)$, we denote by $P_n$ and $P'_n$ the Riesz spectral projectors for $A$ and $B$ respectively on the root subspaces corresponding to the eigenvalues inside $C_n(\\varepsilon)$,\n\t\\[\n\t\tP_n = \\frac1{2\\pi i} \\int_{C_n(\\varepsilon)} (A - z)^{-1}\\,dz, \\qquad \n\t\tP'_n = \\frac1{2\\pi i} \\int_{C_n(\\varepsilon)} (B - z)^{-1}\\,dz.\n\t\\]\n\tBy the Krein resolvent formula~\\eqref{eq:Krein}, we get \n\t\\[\n\t\tP_n - P'_n = \\frac1{2\\pi i} \\int_{C_n(\\varepsilon)}\\frac{dz}{F(z)} \\langle \\, \\cdot \\,, (A - \\overline{z})^{-1} \\varphi \\rangle\n\t\t(A-z)^{-1} \\psi.\n\t\\]\n\tAs the norm of a rank-one operator $\\langle \\, \\cdot \\, u \\rangle v$ is equal to $\\|u\\|\\|v\\|$ and, as proved in Lemma~\\ref{lem:RN}, $|F(z)|\\ge 1\/2$ on $C_n(\\varepsilon)$ for large enough $|n|$, we conclude that \n\t\\[\n\t\t\\|P_n - P'_n\\| \\le d \\max_{z\\in C_n(\\varepsilon)} \\|(A - \\overline{z})^{-1} \\varphi\\| \\|(A - {z})^{-1} \\psi\\|\n\t\\]\n\tfor such $n$. Observe now that for every vector $u = \\sum c_k v_k$ we have \n\t\\[\n\t\t\\|(A - z)^{-1} u \\|^2 = \\sum_{k\\in I} \\frac{|c_k|^2}{|\\lambda_k - z|^2};\n\t\\]\n\tapplying the Lebesgue dominated convergence theorem, we conclude that\n\t\\[\n\t\t\\max_{z\\in C_n(\\varepsilon)} \\|(A - z)^{-1} u \\|^2 \\to 0\n\t\\]\n\tas $|n| \\to \\infty$. Therefore, $\\|P_n - P'_n\\| \\to 0$ as $|n|\\to\\infty$; as a result~\\cite[\\S IV.2]{Kat95}, the ranks of the Riesz projectors $P_n$ and $P'_n$ coincide for all $n$ with large enough $|n|$, and the proof is complete.\n\\end{proof}\n\nTherefore, the operator~$B$ has at most finitely many nonsimple eigenvalues; we next prove that there are no other restrictions on them.\n\n\\begin{lemma}\\label{lem:nonrealEV}\n\tFix an arbitrary $n\\in\\mathbb{N}$, an arbitrary sequence $z_1, z_2, \\dots, z_n$ of pairwise distinct complex numbers, and an arbitrary sequence $m_1$, $m_2$, $\\dots$, $m_n$ of natural numbers. Then there is a rank-one perturbation~$B$ of the operator~$A$ such that, for every $j=1,2,\\dots, n$, the number $z_j$ is an eigenvalue of~$B$ of algebraic multiplicity~$m_j$. \n\\end{lemma}\n\n\\begin{proof}\n\tFor simplicity, we assume that none of $z_j$ is in the spectrum of~$A$; the changes to be made otherwise are not very significant, cf.~Lemma~\\ref{lem:alg-mult0} and Example~\\ref{ex:multiple-EV}.\n\t\n\tSet $N:= m_1 + m_2 + \\dots + m_n$; we will construct a rank-one perturbation~$B$ of $A$ with \n\t\\[\n\t\t\\varphi = \\sum_{k=1}^N a_k v_k, \\qquad \t\t\n\t\t\\psi = \\sum_{k=1}^N b_k v_k.\n\t\\]\n\tAccording to Lemma~\\ref{lem:eig-B}, it suffices to choose $a_k$ and $b_k$ in such a way that the characteristic function~$F$ of~\\eqref{eq:F-new} has zeros $z_1, z_2, \\dots, z_n$ of multiplicity $m_1, m_2, \\dots, m_n$ respectively. Set $c_k := \\overline{a_k}b_k$, $k=1,2,\\dots,n$; then\n\t\\[\n\t\tF(z) = \\sum_{k=1}^n \\frac{c_k}{\\lambda_k - z} + 1,\n\t\\]\n\tand the equalities $F(z_k) = F'(z_k) = \\dots = F^{(m_k-1)}(z_k) = 0$ lead to an inhomogeneous system of $N$ equations in the variables $c_1, c_2, \\dots, c_N$:\n\t\\begin{equation}\\label{eq:system}\n\t\t\\sum_{k=1}^N \\frac{c_k}{(\\lambda_k - z_j)^m} + \\delta_{m1}= 0, \\quad j = 1,2, \\dots, n, \\quad m = 1, 2, \\dots, m_j,\n\t\\end{equation}\n\twith $\\delta_{m1}$ being the Kronecker delta. 
By Lemma~\\ref{lem:Cauchy} below, the coefficient matrix of the above system is non-singular; therefore, the system possesses a unique solution~$c_1,c_2, \\dots, c_N$. It remains to take $a_k =1$ and $b_k = c_k$ for $k=1,2,\\dots, N$, and the proof is complete.\n\\end{proof}\n\n\n\n\\begin{lemma}\\label{lem:Cauchy}\n\tThe coefficient matrix of system~\\eqref{eq:system} is non-singular.\n\\end{lemma}\n\n\\begin{proof}\n\tFor pairwise distinct numbers $\\omega_1, \\omega_2, \\dots, \\omega_N$ from the resolvent set of~$A$, we introduce the Cauchy matrix $M$ with entries \n\t\\[\n\t(M)_{jk} = \\frac{1}{\\lambda_k - \\omega_j}.\n\t\\]\n\tIt is non-singular and has determinant equal to \n\t\\begin{equation}\\label{eq:Cauchy}\n\tD(\\omega_1,\\omega_2,\\dots,\\omega_{N}) = \\frac{\\prod\\prod_{j>k}(\\lambda_j - \\lambda_k)(\\omega_j-\\omega_k)} \t\t{\\prod_j\\prod_k(\\lambda_j-\\omega_k)}.\n\t\\end{equation}\n\tWe set $C:= \\prod\\prod_{j>k}(\\lambda_j - \\lambda_k)$ for brevity. \n\t\n\tTaking the derivative of that determinant in $\\omega_2$ and setting $\\omega_2 = \\omega_1 = z_1$, we get the determinant of the matrix~$M_2$, whose first and second rows have entries \n\t\\[\n\t\t\t\\frac1{\\lambda_k - z_1 } \\quad \\text{and} \\quad \\frac1{(\\lambda_k - z_1)^2}, \\qquad k = 1, 2, \\dots, N,\n\t\\]\n\trespectively, and the other rows are as in the matrix~$M$. By~\\eqref{eq:Cauchy}, we have \n\t\\[\n\t\tD(\\omega_1,\\omega_2,\\dots,\\omega_{N}) = (\\omega_2 - \\omega_1) D_2(\\omega_1,\\omega_2,\\dots,\\omega_{N}),\n\t\\] \n\tso that \n\t\\[\n\t\t\\frac{\\partial}{\\partial \\omega_2}D(z_1,\\omega_2,\\dots,\\omega_{N})\\Bigr|_{\\omega_2 = z_1}\n\t\t\t= D_2(z_1, z_1, \\omega_3, \\dots, \\omega_N).\n\t\\]\n\tExplicit calculations give \n\t\\begin{multline*}\n\t\t\\det M_2 = D_2(z_1, z_1, \\omega_3, \\dots, \\omega_N) \\\\\n\t\t\t= C \\prod_{j>2}(\\omega_j-z_1)^2 \\frac{\\prod\\prod_{j>k>2}(\\omega_j-\\omega_k)}\n\t\t\t{\\prod_j (\\lambda_j-z_1)^2\\prod_{k>2}(\\lambda_j-\\omega_k)}\n\t\t\t\\ne 0.\n\t\\end{multline*}\n\t\n\tNext, we take the second derivative of $D_2(z_1, z_1, \\omega_3, \\dots, \\omega_N)$ in $\\omega_3$ and set $\\omega_3 = z_1$; this becomes the determinant $D_3(z_1, z_1, z_1, \\omega_4, \\dots, \\omega_N)$ of the matrix $M_3$ that is $M_2$ with its third row replaced by\n\t\\[\n\t\t\\frac2{(\\lambda_k - z_1)^3}, \\qquad k = 1, 2, \\dots, N.\n\t\\]\n\tOn the other hand, \n\t\\begin{multline*}\n\t\t\\det M_3 = D_3(z_1, z_1, z_1, \\omega_4, \\dots, \\omega_N) \n\t\t\t= \\frac{\\partial^2}{\\partial \\omega^2_3}D(z_1, z_1, \\omega_3, \\dots,\\omega_{N})\\Bigr|_{\\omega_3 = z_1} \\\\\n\t\t\t= 2C \\prod_{j>3}(\\omega_j-z_1)^3 \\frac{\\prod\\prod_{j>k>3}(\\omega_j-\\omega_k)}\n\t\t\t\t{\\prod_j (\\lambda_j-z_1)^3\\prod_{k>3}(\\lambda_j-\\omega_k)}\n\t\t\t\\ne 0.\n\t\\end{multline*}\n\tOn each next step, we repeat a similar procedure with the next row and variable until we reach row number $m_1$. \n\t\n\tAfter that, we set $\\omega_{m_1+1} = z_2$, take the derivative in $\\omega_{m_1+2}$ at $\\omega_{m_1 + 2} = z_2$, and repeat with the subsequent rows until we reach row number $m_1 + m_2$. Clearly, the operations described above can be performed on separate groups of variables $\\omega_l$ with $l=m_1 + \\dots + m_j + 1, m_1 + \\dots + m_j + 2, \\dots, m_1 + m_2 + \\dots + m_{j+1}$ independently. 
At the end, the determinant of the coefficient matrix of the system~\\eqref{eq:system} is found explicitly to be\n\t\\[\n\t\t\\frac{\\prod_{j=k+1}^N\\prod_{k=1}^N(\\lambda_j - \\lambda_k)\t\\prod_{j=k+1}^n\\prod_{k=1}^n(z_j-z_k)^{m_j + m_k}}\n\t\t{\\prod_{j=1}^N\\prod_{k=1}^n(\\lambda_j-z_k)^{m_j}} \\ne 0,\n\t\\]\n\tand the proof is complete.\n\\end{proof}\n\n\\begin{remark}\n\tIn the paper~\\cite{DobHry20}, it is proved that the operators $A$ and $B$ have the same number of eigenvalues in special increasing rectangles exhausting the whole complex plane~$\\bC$. Combined with the results of Lemmata~\\ref{lem:RN} and \\ref{lem:EVinCn}, this allows an enumeration of the eigenvalues of~$B$ as $\\mu_n$, $n\\in I$, such that each value $\\mu_n$ is repeated according to its multiplicity and $\\mu_n- \\lambda_n \\to0$ as $|n|\\to\\infty$. \n\\end{remark}\n\n\nWe summarize the above results in the following theorem.\n\n\\begin{theorem}\\label{thm:main}\n\tAssume that $A$ is an operator in a Hilbert space~$H$ satisfying assumptions~$(A1)$ and $(A2)$ and $B$ is its rank-one perturbation~\\eqref{eq:B}. Then \n\t\\begin{itemize}\n\t\t\\item[(i)] all eigenvalues of $B$ of sufficiently large absolute value are localized within $\\varepsilon$-neighbourhood of the eigenvalues of~$A$ and thus are simple;\n\t\t\\item[(ii)] the eigenvalues of~$B$ can be enumerated as $\\mu_n$, $n\\in I$, so that $\\mu_n-\\lambda_n \\to 0$ as $|n|\\to\\infty$;\n\t\t\\item[(iii)] geometric multiplicity of every eigenvalue of~$B$ is at most~$2$, and multiplicity~$2$ is only possible when the corresponding eigenspace of~$A$ is reducing for $B$.\n\t\\end{itemize}\n\tMoreover, for every prescribed finite set $z_1, z_2, \\dots, z_n$ of pairwise distinct complex numbers, and an arbitrary sequence $m_1$, $m_2$, $\\dots$, $m_n$ of natural numbers there exists a~$B$ such that each $z_j$, $j=1,2,\\dots, n$, is an eigenvalue of~$B$ of algebraic multiplicity~$m_j$.\n\\end{theorem}\n\n\\section{Finite-dimensional case}\\label{sec:finite-dim}\n\nThe analysis of Section~\\ref{sec:mult} allows to essentially complement the results in the fi\\-ni\\-te-di\\-men\\-si\\-o\\-nal case. Namely, assume that $A$ is a Hermitian matrix in $\\bC^n$ with pairwise distinct eigenvalues $\\lambda_1, \\lambda_2, \\dots, \\lambda_n$ and normalized (column) eigenvectors~$\\mathbf{v}_1, \\mathbf{v}_2, \\dots, \\mathbf{v}_n$ and define the \\emph{generic set} $\\mathcal{G}(A)$ of $A$ as\n\\[\n\t\\mathcal{G}(A) = \\{ \\mathbf{x} \\in \\mathbb{C}^n \\mid \\langle \\mathbf{x}, \\mathbf{v}_k\\rangle_{\\bC^n} \\ne 0, \\quad k = 1, 2, \\dots,n \\}. \n\\]\nThen we have the following generalization of the result of~\\cite{Kru92}.\n\n\\begin{theorem}\\label{thm:finite-dim}\n\tUnder the above assumptions, let $\\bm{\\varphi}$ be a vector from the generic set~$\\mathcal{G}(A)$. Then for any natural number $k$, any pairwise distinct complex numbers~$z_1, z_2, \\dots, z_k$, and any natural numbers $m_1, m_2, \\dots, m_k$ satisfying $m_1+ m_2 + \\dots + m_k =n$, there is a unique vector $\\bm{\\psi} \\in \\bC^n$ such that the rank-one perturbation $B = A + \\bm{\\psi}\\bm{\\varphi}^\\top$ of the matrix~$A$ has eigenvalues $z_1, z_2, \\dots, z_k$ of corresponding multiplicities $m_1, m_2, \\dots, m_k$. \n\t\n\tSimilarly, for every fixed $\\bm{\\psi} \\in \\mathcal{G}(A)$ there is a unique $\\bm{\\varphi}\\in \\bC^n$ such that $B$ has the eigenvalues $z_j$ of prescribed multiplicities $m_j$, $j = 1,2, \\dots, k$. 
\n\\end{theorem}\n\n\\begin{proof}\nDenote by $\\sigma_0(A)$ the common part of the spectrum $\\sigma(A)$ of $A$ and the set $\\{z_1, z_2, \\dots, z_k\\}$, by $\\sigma_1(A):=\\sigma(A) \\setminus\\sigma_0(A)$ the remaining part of $\\sigma(A)$, and let $I_\\ell := \\{ j \\mid \\lambda_j \\in \\sigma_p(A)\\}$, $\\ell=0,1$, be the corresponding index sets. We update the multiplicities $m_j$ to \n\\begin{equation}\\label{eq:reduce-mult}\n\tm'_j := \\begin{cases}\n\t\tm_j - 1, & \\qquad z_j \\in \\sigma(A); \\\\\n\t\tm_j, & \\qquad z_j \\not\\in \\sigma(A);\t\n\t\\end{cases}\n\\end{equation}\nand set\n\\begin{equation}\\label{eq:F-prod}\n\tF(z):= \\frac{\\prod_{j=1}^k (z - z_j)^{m'_j}}{\\prod_{j \\in I_1} (z - \\lambda_j)}.\n\\end{equation}\nDenoting by $-c_j$ the residue of the function~$F$ at the point $z=\\lambda_j$, $j\\in I_1$, we conclude that $F$ can be written in the form\n\\begin{equation}\\label{eq:F-sum}\n\tF(z) = \\sum_{j \\in I_1} \\frac{c_j}{\\lambda_j-z} + 1. \n\\end{equation}\nDenote by $a_j = \\langle \\bm{\\varphi}, \\mathbf{v}_k\\rangle_{\\bC^n}$, $j= 1,2, \\dots, n$, the coefficients of the vector $\\bm{\\varphi}$ in the basis $\\mathbf{v}_1, \\mathbf{v}_2, \\dots, \\mathbf{v}_n$. By assumption, no $a_j$ vanishes, and we set $b_j:= c_j \/ \\overline{a_j}$ for $j \\in I_1$ and $b_j = 0$ for $j \\in I_0$, and define the vector $\\psi$ via\n\\[\n\t\\bm{\\psi} = \\sum_{j=1}^n b_j \\mathbf{v}_j = \\sum_{j \\in I_1} b_j \\mathbf{v}_j. \n\\] \nIt follows from the results of Section~\\ref{sec:mult} that the characteristic function of the matrix $B = A + \\bm{\\psi}\\bm{\\varphi}^\\top$ coincides with the above function~$F$; therefore, the matrix $B$ has eigenvalues $z_1, z_2, \\dots, z_k$ and the multiplicity of the eigenvalue $z_j$ is $m_j'$ if $z_j \\not \\in \\sigma(A)$ or $m_j'+1$ otherwise. \n\nThe second part is proved in a similar manner, by interchanging the roles of $a_n$ and $b_n$.\n\\end{proof}\n\nIf the vector $\\bm{\\varphi}$ is not in the generic set~$\\mathcal{G}(A)$ of $A$, the above theorem has the following analogue. \n\n\\begin{theorem}\\label{thm:phi-arbitrary}\n\tUnder the above assumptions on the matrix~$A$, take a nonzero vector $\\bm{\\varphi} = \\sum _{j=1}^n a_j \\mathbf{v}_j \\in \\bC^n$ and set $I_0 := \\{j \\mid a_j = 0\\}$ and $\\sigma_0(A):=\\{\\lambda_j \\mid j \\in I_0\\}$. Then for every natural $k$, every set $S=\\{z_1, z_2,\\dots, z_k\\}$ of $k$ pairwise distinct complex numbers obeying $S \\cap \\sigma(A) = \\sigma_0(A)$, and every sequence $m_1, m_2, \\dots, m_k$ of natural numbers with $m_1 + m_2 + \\dots + m_k = n$ there is a vector $\\bm{\\psi}\\in\\bC^n$ such that the matrix $B = A + \\bm{\\psi}\\bm{\\varphi}^\\top$ has eigenvalues $z_1, z_2, \\dots, z_k$ of multiplicities $m_1, m_2 , \\dots, m_k$ respectively. \n\t\n\tA similar statement holds with the r\\^oles of $\\bm{\\varphi}$ and $\\bm{\\psi}$ interchanged.\n\\end{theorem}\n\n\\begin{proof}\n\tThe fact that the set $S$ is in the spectrum of~$B$ is proved in Lemma~\\ref{lem:eig-B}. We denote by~$\\sigma_1(A)$ the spectrum of~$A$ not in $\\sigma_0(A)$ and set $I_1$ to be the corresponding set of indices. Reducing by $1$ the multiplicity of each $z_j$ from $S$ and denoting the resulting multiplicities by $m'_j$ as in~\\eqref{eq:reduce-mult}, we construct the function~$F$ of~\\eqref{eq:F-prod} and observe that it assumes the form~\\eqref{eq:F-sum}, with uniquely determined residues~$-c_j$, $j\\in I_1$. 
Then we define $b_j$ for such $j$ from the relation~$\\overline{a}_jb_j = c_j$, and fix arbitrarily $b_j$ for $j \\in I_0$. \n\t\n\tBy Lemmata~\\ref{lem:alg-mult} and \\ref{lem:alg-mult0}, the numbers $z_j$ not in $\\sigma_0(A)$ are eigenvalues of the matrix~$B$ of multiplicity~$m_j'$, while those in~$\\sigma_0(A)$ have multiplicity $m_j'+1$. The proof is complete.\n\\end{proof}\t\n\n\\begin{remark}\n\tWe can conclude from the above proof that the coordinates of the vector $\\bm{\\psi}$ in the basis $\\mathbf{v}_1, \\mathbf{v}_2, \\dots, \\mathbf{v}_n$ for $j\\in I_0$ are not fixed; therefore, there is an $|I_0|$-dimensional affine set of such vectors producing the required spectrum. \n\\end{remark}\n\n\\section{Concluding remarks}\n\nIt should be noted that some restrictions imposed on~$A$ can be relaxed. For instance, self-adjointness of~$A$ is not essential; the proof with minor amendments will work for rank-one perturbations of every normal operator with simple discrete spectrum, or even in the case when the eigenvectors of~$A$ can be chosen to form a Riesz basis of~$H$. Simplicity of the eigenvalues of~$A$ can also be dropped; however, this will result in a more complicated Jordan structure of the root subspaces of~$B$, cf.~\\cite{BehMoeTru14}. Also, the operator $A$ may possess, in addition to an infinite discrete spectrum, a non-trivial essential component; the results we proved have natural generalization to this case as well. \n\nFinally, this study has found its continuation in~\\cite{DobHry20}, in which a complete characterization of all possible spectra of rank-one perturbations~\\eqref{eq:B} of self-adjoint operators~$A$ with simple discrete spectrum is given. \n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Acknowledgements}\nThis work was supported in part by the Intel Neuromorphic Research Community Grant to Lule\\aa~ University of Technology, the Swedish Foundation for international Cooperation in Research and Higher Education (STINT) under Mobility Grant for Internationalisation MG2020-8842, and the Russian Science Foundation during the period of 2020-2021 under grant 20-71-10116.\nSK and DH are recipients of the Centre for Data Analytics and Cognition (CDAC) Ph.D. research scholarships. \nDK has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sk\u0142odowska-Curie grant agreement No 839179.\n}\n\\thanks{E. Osipov is with the Department of Computer Science, Electrical and Space Engineering at Lule\u00e5 University of Technology, 97187 Lule\u00e5, Sweden. \\mbox{E-mail}: \\mbox{Evgeny.Osipov@ltu.se}\n}\n\\thanks{S. Kahawala, D. Haputhanthri, T.~Kempitiya, D.~De~Silva and D.~Alahakoon are with the Centre for Data Analytics and Cognition (CDAC) at La Trobe University, Melbourne, Australia. \\mbox{E-mail}: \\{S.Kahawala, D.Haputhanthri, T.Kempitiya,\n D.DeSilva, D.Alahakoon\\} @latrobe.edu.au\n }\n\n\\thanks{D. Kleyko is with the Redwood Center for Theoretical Neuroscience at the University of California, Berkeley, CA 94720, USA and also with the Intelligent Systems Lab at Research Institutes of Sweden, 16440 Kista, Sweden. 
\\mbox{E-mail}: \\mbox{denis.kleyko@ri.se}\n}\n\n}\n\\newcommand{\\marginpar{FIX}}{\\marginpar{FIX}}\n\\newcommand{\\marginpar{NEW}}{\\marginpar{NEW}}\n\n\n\\maketitle\n\n\\begin{abstract} \n\nMotivated by recent innovations in biologically-inspired neuromorphic hardware, this article presents a novel unsupervised machine learning algorithm named Hyperseed that draws on the principles of Vector Symbolic Architectures (VSA) for fast learning of a topology preserving feature map of unlabelled data. It relies on two major operations of VSA, binding and bundling. The algorithmic part of Hyperseed is expressed within Fourier Holographic Reduced Representations model, which is specifically suited for implementation on spiking neuromorphic hardware. The two primary contributions of the Hyperseed algorithm are, few-shot learning and a learning rule based on single vector operation. These properties are empirically evaluated on synthetic datasets as well as on illustrative benchmark use-cases, IRIS classification, and a language identification task using $n$-gram statistics. The results of these experiments confirm the capabilities of Hyperseed and its applications in neuromorphic hardware. \n\\end{abstract}\n \n \n\\begin{IEEEkeywords}\nself-organizing maps, vector symbolic architectures, hyperseed, neuromorphic hardware \n\\end{IEEEkeywords}\n\n\n\\input{tex\/introduction} \n\\input{tex\/Related} \n\\input{tex\/Method} \n\\input{tex\/Hyperseed} \n\\input{tex\/Experiments} \n\\input{tex\/Discussion} \n\\input{tex\/Conclusion} \n\n\n\n\\bibliographystyle{IEEEtran}\n\n\n\n\n\\section{Method: Holographic Reduced Representations (HRR) model}\n\\label{sect:method}\n\nThe Hyperseed algorithm is designed using the Fourier Holographic Reduced Representations (FHRR) model \\cite{PlateNested1994}. \nFHRR facilitates the mathematical treatment of Hyperseed operations. The potential argument that complex numbers used in FHRR add to the memory requirements of Hyperseed is intuitively true in the case of CPU realization. However, in Section \\ref{sect:discussion}, we rationalise this is not an issue for the neuromorphic hardware. Also, due to the equivalence of FHRR and HRR models, the operations of Hyperseed can be implemented with hypervectors from $\\mathbb{R}^d$. \nIn fact, when evaluating the performance of the bottleneck functionality of Hyperseed on Intel's Loihi, we use HRR model\\footnote{\nThe supplementary code base also contains the HRR implementation of the algorithm.\n}. \nThe atomic FHRR hypervectors are randomly sampled from $\\mathbb{C}^d$. Dimensionality $d$ is a hyperparameter of Hyperseed. In high-dimensional random spaces, all random hypervectors are dissimilar to each other (quasi-orthogonal) with an extremely high probability. VSA defines operations and a similarity measure on hypervectors. In this article, we use the cosine similarity of real parts of hypervectors for characterizing the similarity.\nThe three primary operations for computing with hypervectors are superposition, binding, and permutation. \n\n\\subsection{Binding operation} \nThe binding operation is used to bind two hypervectors together. The result of the binding is another hypervector. 
For example, for two hypervectors $\\textbf{v}_1$ and $\\textbf{v}_2$ the result of binding of their hypervectors (denoted as $\\textbf{b}$) is calculated as follows: \n\\begin{equation}\n\\label{eq:bind} \n\\textbf{b} = \\textbf{v}_1 \\circ \\textbf{v}_2, \n\\end{equation}\nwhere the notation $\\circ$ is used to denote the binding operation. \nIn HRR, the binding operation is implemented as circular convolution of $\\textbf{v}_1$ and $\\textbf{v}_2$, which can be implemented as the component-wise multiplication in the Fourier domain. This observation inspired FHRR where the representations are already in the Fourier domain in a form of phasors so that the component-wise multiplication, which is equivalent to the addition of phase angles modulo $2\\pi$, plays the role of the binding operation. \nBinding is, essentially, a randomizing operation that moves hypervectors to another (random) part of the high-dimensional space. \nThe role played by the binding operation depends on the algorithmic context. \nIn data structures with roll-filler pairs, the binding operation corresponds to the assignment of a value (filler) to a variable (role). There are two important properties of the binding operation. First, the resultant hypervector $\\textbf{b}$ is dissimilar to the hypervectors being bound, i.e., the similarity between $\\textbf{b}$ and $\\textbf{v}_1$ or $\\textbf{v}_2$ is approximately $0$.\n\nSecond, the binding operation preserves similarity. That is the distribution of the similarity measure between hypervectors from some set $\\mathcal{S}$ is preserved after binding of all hypervectors in $\\mathcal{S}$ with the same random hypervector $\\textbf{v}$. \n\nThe binding operation is reversible. The unbinding, denoted as $\\oslash$, is implemented by the circular correlation in HRR. In the case of FHHR this is equivalent to component-wise multiplication with the complex conjugate. Being the inverse of the binding operation, the unbinding obviously has the same similarity preservation property when performed on all hypervectors in $\\mathcal{S}$ with the same hypervector $\\textbf{v}$: \n\n \\begin{equation}\n\\label{eq:unbind} \n\\textbf{v}_2 \\oslash \\textbf{b} = \\textbf{v}_1. \n\\end{equation}\nThe interpretation of the unbinding operation is a retrieval of a value from the hypervector encoding the assignment. When unbinding is performed from the superposition of bindings (see Section~\\ref{sec:vsa:superposition}), the retrieved hypervector contains noise. In VSA, the noisy vector can be cleaned-up by performing a search for the closest atomic hypervector stored in an associative memory.\n\n\\subsection{Permutation operation}\nThe permutation (rotation) operation $\\textbf{b} = \\rho(\\textbf{v})$ is a unitary operation that is commonly used to represent an order of the symbol in a sequence. As with the binding operation, the resultant hypervector $\\textbf{b}$ is dissimilar to $\\textbf{v}$. In this article, this operation is used for encoding a certain type of input data as further described in Section \\ref{sect:perf}.\n\n\\subsection{Superposition operation}\n\\label{sec:vsa:superposition}\nSuperposition is denoted with $+$ and implemented via component-wise addition. \nThe superposition operation combines several hypervectors into a single hypervector. 
\nFor example, for hypervectors $\\textbf{v}_1$ and $\\textbf{v}_2$ the result of superposition (denoted as $\\textbf{a}$) is simply: \n\\begin{equation}\n\\label{eq:bindle} \n\\textbf{a} = \\textbf{v}_1 + \\textbf{v}_2.\n\\end{equation}\nIn contrast to the binding operation, the resultant hypervector $\\textbf{a}$ is similar to all superimposed hypervectors, i.e., the cosine similarity between $\\textbf{b}$ and $\\textbf{v}_1$ or $\\textbf{v}_2$ is larger than $0$.\nIf several copies of any hypervector are included (e.g., $\\textbf{a} = 3\\textbf{v}_1 + \\textbf{v}_2$), the resultant hypervector is more similar to the dominating hypervector than to other components. \n\nIf superposition is applied to several bindings it is possible to unbind any hypervector from any binding. In this case, the result of the unbinding operation is a noisy version of the second operand of the particular binding. For example, if $\\textbf{a}=\\textbf{v}_1\\circ \\textbf{v}_2 + \\textbf{u}_1 \\circ \\textbf{u}_2$, then $\\textbf{u}_2 \\oslash \\textbf{a}=\\textbf{u}_1+\\mathrm{noise}= \\textbf{u}_1^*$. Given that noiseless atomic hypervectors ($\\textbf{v}_1,\\textbf{v}_2, \\textbf{u}_1, \\textbf{u}_2$) are kept in the associative memory and so vector $\\textbf{u}_1^*$ is expected to have the highest similarity to $\\textbf{u}_1$. The same property holds for the binding of any atomic hypervector with the superposition of unbindings (which we use below in the description of our approach). That is if $\\textbf{a}=\\textbf{v}_1\\oslash \\textbf{v}_2 + \\textbf{u}_1 \\oslash \\textbf{u}_2$, then $\\textbf{u}_2 \\circ \\textbf{a}=\\textbf{u}_1+\\mathrm{noise}= \\textbf{u}_1^*$.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Discussion}\n\\label{sect:discussion}\n\nHaving presented the Hyperseed algorithm and its empirical evaluation, it is now pertinent to discuss the following aspects that require further investigation, Hyperseed on neuromorphic hardware, performance comparison of embeddings and limitations of Hyperseed. \n\n\\subsection{Hyperseed on neuromorphic hardware}\n\nThe Hyperseed algorithm from the start was designed targeting an implementation on the neuromorphic hardware. This target departs from recent developments within the Intel neuromorphic research community, where VSA is promoted as an algebraic framework for the development of algorithms on Intel's Loihi \\cite{Loihi18, TPAM, Frady20_KNN}. \n\nSince the main focus of this article is on the algorithmic aspects of Hyperseed, we resort to making rather high level links to a neuromorphic realization of algorithm's operations and present an evaluation of its computational bottleneck using the existing neuromorphic implementation.\n\nBoth HRR and FHRR representations could be mapped onto activities of spiking neurons. \nIn the case of FHRR, the phase of components is used for phase-to-spike-timing mapping.\nIn this way, FHRR representation does not result in higher memory footprint as in the case of the CPU implementation. \nVSA operations are realized in either Resonate-and-Fire neurons~\\cite{TPAM} or in Leaky Integrate and Fire neurons~\\cite{RennerBinding2022, RennerVisualScene}. \n\n\nIn the case of HRR, real-valued components are used in spike-time latency code, where\nearlier spikes represent larger magnitudes. \nVSA operations can realized by Leaky Integrate and Fire neurons. 
\nIn this article, we use the realization of the dot product calculation presented in \\cite{ Frady20_KNN} (also based on Leaky Integrate and Fire neurons) in order to demonstrate the feasibility of neuromorphic implementation of computational bottleneck of Hyperseed -- the search for the BMV in HD-map for an unbound noisy hypervector $\\textbf{p}^*$ resulting from~(\\ref{eq:seedbind}).\n\nTo measure the performance of the Hyperseed's search procedure in the language identification task HD-maps of various sizes (starting from $30\\times30$ and incrementing the grid size by $10$ along each axis until $90\\times90$) were generated as in Section~\\ref{sect:hdMap}. The hypervectors of each HD-map were then used for mapping their values onto spiking activity of the $k$NN reference base on Intel's Loihi-based Nahuku-32 neuromorphic system. This operation was performed only once as part of the initialization since HD-map remains unchanged for the life-time of Hyperseed. Therefore, the time to construct the $k$NN reference base was not taken into account in the run-time performance evaluation.\n\nFor the experiment we chose the case of identifying five randomly selected languages with a single seed hypervector $\\mathbf{s}$ as the reference scenario. The original dimensionality of the hypervectors used for the encoding of the input data as well as for all other hypervectors of the Hyperseed algorithm was $d=10000$. \nThe reference scenario accuracy of Hyperseed obtained on a CPU was $0.84$. \n\n$k$NN on Loihi was used to model the search operation during the labeling and testing process. To do this, seed hypervector $\\mathbf{s}$ was pre-trained offline on the CPU as described in Section~\\ref{sect:update}. For the labeling process, the binding of all training and test data with the trained $\\mathbf{s}$ was performed and the results (the noisy versions of the BMVs) were used as queries to the $k$NN reference base storing HD-map on Loihi.\nThe dimensionality of the existing Loihi implementation of $k$NN is $d_{kNN}<512$, which is a platform specific limit. \nNote, that in VSA the dimensionality of hypervectors is connected to the information capacity of the superposition. In Hyperseed, every update of seed hypervector $\\mathbf{s}$ increases the cross-talk noise to previous bindings $\\mathbf{s}\\circ \\mathbf{p}_{\\texttt{target}}$. Fig.~\\ref{fig:single} demonstrated the accuracy degradation for dimensionality $d=5000$ with the increase of the number of updates of $\\mathbf{s}$. For smaller dimensionalities, the number of updates for maintaining the acceptable accuracy is even lower. Therefore, in this experiment the training of Hyperseed was done on higher dimensionalities and then the dimensionality was reduced using principal component analysis to $d=400$ to meet the current limitations of the existing implementation. \n\nTo label BMVs, the training data (after binding with $\\mathbf{s}$) were used as queries to the $k$NN reference base and the top-1 index for each query (the earliest fired output neuron) was recorded. After that, the labeling was performed on the CPU using the procedure described above. To compute the accuracy, the test data hypervectors (also after binding with $\\mathbf{s}$) were used as queries to the $k$NN reference base. The top-1 index was recorded and used to compute the accuracy against the list of the labeled indices. \n\nThe average measured accuracy on Loihi was $0.84$, which matched the reference accuracy on the CPU implementation of Hyperseed. 
This means that the neuromorphic implementation of HD-map and the calculation of similarity did not introduce sensible errors.\n\nNext, in addition to the accuracy we also measured the query time to the $k$NN reference base for different sizes of HD-map. The main outcome of this experiment is illustrated by Fig. \\ref{fig:qTimeVsNumberOfRefVecs}. It shows the computational benefit of implementing the bottleneck operation of Hyperseed on in the neuromorphic hardware: due to parallel, power-efficient computation of the dot product in Loihi, Hyperseed, as expected, was empowered with constant-time search for different sizes of HD-map. \n\n\n\n\n \n\n \n\n\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[width=8cm]{img\/Loihi\/qTimeVsNumberOfRefVecs.png}\n \\caption{Time of querying a noisy BMV in HD-map implemented as the $k$NN reference base against the size of HD-map in the number of reference hypervectors.}\n \\label{fig:qTimeVsNumberOfRefVecs}\n\\end{figure}\n\n\\subsection{Performance comparison of embeddings and distributed representations}\n While a large body of knowledge on methods for encoding data into hypervectors is accumulated throughout the years, the problem is still considered as of the primary importance in the area of VSA. Hyperseed in this respect offers a playground for comparing different embeddings as the embeddings of high quality lead to higher accuracy in classification tasks. It is particularly important to develop the operations of Hyperseed on sparse representations. \n \n \\subsection{Limitations of Hyperseed and future developments}\nHyperseed when using 2D HD-map as evaluated in this article is limited in its capability to describe complex manifold structures. In this sense the algorithm does not show advantages over Self-Organizing Maps, which have similar limitations. This case was chosen to demonstrate the feasibility of implementing non-trivial learning functionality with straightforward VSA operations, which in itself is an original research contribution. However, Hyperseed is in fact scalable in terms of two other aspects: 1) topologies of higher dimensionalities than two as it does not require updates of the hypervectors in HD-map and 2) topologies of other structures than regular grid due to the generality of FPE encoding. The investigation of this capability is part of an ongoing work on Hyperseed extension. \n\n \n\n\n\n\n\n\n\n \n\n\n\n\n\n\n\n\n\n\n\\section{Conclusions}\n\\label{sect:conclusions}\n\nThe increasing accumulation of unstructured and unlabelled big data initiated a renewed interest in unsupervised machine learning algorithms that are able to capitalize on computational efficiencies of biologically-inspired neuromorphic hardware. \nIn this article, we presented the Hyperseed algorithm that addresses these challenges through the manifestation of a novel unsupervised learning approach that leverages Vector Symbolic Architectures for fast learning from only few input vectors and single vector operation learning rule implementation. A further novelty is that it implements the entire learning pipeline purely in terms of operations of Vector Symbolic Architectures. Hyperseed has been empirically evaluated across diverse scenarios: synthetic datasets from Fundamental Clustering Problems Suite, benchmark classification using the Iris dataset, and the more practical classification of $21$ European languages using their $n$-gram statistics. 
\nAs future work, we will work on the adaptation of the Hyperseed algorithm for neuromorphic hardware. \n\n\\section{Hyperseed: Unsupervised Learning with Vector Symbolic Architectures}\n\\label{sect:vsaseed}\n\nThis section presents the main contribution of this article -- Hyperseed, a method for unsupervised learning. Denote the set of FHRR-represented input data as $\\mathcal{D}\\subset\\mathbb{C}^d$. \nData hypervectors for training are generated during the encoding phase (see Section \\ref{sect:encoding} for the details of the encoding). They are kept in a working memory. The input query for testing is also a data hypervector obtained using the same encoding procedure as for the training data. \n\nDenote as $\\mathcal{P}\\subset\\mathbb{C}^d$ the vector space with known similarity properties. Set $\\mathcal{P}$ is created by encoding points (also referred to as nodes further in the text) of a 2D grid using the FPE method~\\cite{PlateNested1994,frady2021computing, FradyFunctionsNICE2022, Komer2019ANR, komer2020biologically}. The cardinality of HD-map is $|\\mathcal{P}|=n\\times m$, where $n$ and $m$ are the sizes of the grid along the vertical and the horizontal axes, respectively. For the sake of brevity, further in the text we refer to set $\\mathcal{P}$ as HD-map. HD-map is computed once, as described below, and is stored in the associative memory. This memory is fixed throughout the lifetime of the system. \n\nThe Hyperseed algorithm relies on the similarity preservation property of the (un)binding operation. The goal of Hyperseed is to translate the original data hypervectors $\\mathcal{D}$ (with an unknown internal similarity layout) to HD-map $\\mathcal{P}$ by unbinding all of its members from hypervector $\\mathbf{s}$, i.e.:\n\n\\begin{equation}\n \\mathcal{D}\\oslash \\mathbf{s}\\Rightarrow\\mathcal{P}.\n\\label{eq:transf}\n\\end{equation}\n\\noindent\nHypervector $\\mathbf{s}$ is obtained as the result of applying an unsupervised learning rule. In essence, during the learning, some selected hypervectors from $\\mathcal{D}$ will be bound to selected vectors from $\\mathcal{P}$, as described below\\footnote{Due to the analogy of ``seeding'' data hypervectors onto HD-map, hypervector $\\mathbf{s}$ is referred to as the seed vector throughout the article.}. \n\n\\subsection{Initialization Phase: Generation of HD-map $\\mathcal{P}$ and hypervector $\\mathbf{s}$}\n\\label{sect:hdMap}\nThe hypervectors, which are the members of $\\mathcal{P}$, are generated such that the similarity between them relates to the topological proximity of the grid nodes. Note, however, that the topological arrangement of $\\mathcal{P}$ is virtual in the sense that the associative memory in which the hypervectors of $\\mathcal{P}$ are stored does not have any structure. The topology information is kept separately and used for visualization purposes only.\n\nThe generation of HD-map starts with two randomly generated unit hypervectors $\\mathbf{x}_0, \\mathbf{y}_0 \\in \\mathbb{C}^d$ as $\\mathbf{x}_0 \\sim e^{j\\cdot 2\\pi\\cdot U(0,1)}$ and $\\mathbf{y}_0 \\sim e^{j\\cdot 2\\pi\\cdot U(0,1)}$.\nLet us denote the bandwidth parameter regulating the similarity between the adjacent coordinates on the grid by $\\epsilon$.
The $i$-th $x$ and $y$ coordinates of the grid are created using the FPE method as:\n\\noindent\n\\begin{equation}\n \\mathbf{x}_i=\\mathbf{x}_0^{\\epsilon \\cdot i}, \n \\mathbf{y}_i=\\mathbf{y}_0^{\\epsilon \\cdot i}.\n \\label{eq:x_y}\n\\end{equation}\n\\noindent\nThe hypervector $\\mathbf{p}_{(i,j)}$ representing a node with coordinates $(i,j)$ on the grid is computed as $\\mathbf{p}_{(i,j)}=\\mathbf{x}_i \\circ \\mathbf{y}_j$. \nFig.~\\ref{fig:hdplane} illustrates the landscape of similarity between all hypervectors of HD-map stored in the associative memory and one selected hypervector of the same HD-map encoding coordinates $(15,15)$ on a $50\\times50$ 2D grid, as a function of coordinates $i$ and $j$.\n\n\\begin{figure}[t!]\n\\centerline{\\includegraphics[width=0.7\\columnwidth]{img\/HDplane}}\n\\caption{Similarity distribution on an HD-map. The target node is $(15,15)$, the size of the grid is $50\\times50$, bandwidth $\\epsilon=0.05$.\n}\n\\label{fig:hdplane}\n\\end{figure}\n\nHypervector $\\mathbf{s}$ is also initialized randomly: $\\mathbf{s} \\sim e^{j\\cdot 2\\pi\\cdot U(0,1)}$. It is updated over several iterations during the learning phase as described below. The number of update iterations is a hyperparameter of Hyperseed. \n\n\\subsection{Search procedure in Hyperseed: Finding Best Matching Vector on HD-map}\n\\label{sect:search}\nIn the Hyperseed algorithm, HD-map $\\mathcal{P}$ acts as an auto-associative memory~\\cite{FrolovWillshaw2002,FrolovTime2006, GritsenkoAMSurvey2017}. That is, the only operation performed on HD-map is the search for the Best Matching Vector (BMV) given some input hypervector. \nThe BMV is found by computing the cosine similarity between the input hypervector and all hypervectors in $\\mathcal{P}$. The output of this procedure is a valid hypervector in $\\mathcal{P}$ with the highest similarity to the input hypervector. \n\nThe mapping ($\\mathbf{d}_i \\rightarrow \\mathbf{p}_i$) of data hypervectors in $\\mathcal{D}$ to hypervectors of HD-map $\\mathcal{P}$ is done by unbinding $\\mathbf{d}_i$ from the trained hypervector $\\mathbf{s}$: \n\n \\begin{equation}\n \\mathbf{p}_i^*=\\mathbf{d}_i \\oslash \\mathbf{s}.\n \\label{eq:seedbind}\n \\end{equation}\n In (\\ref{eq:seedbind}), $\\mathbf{p}_i^*$ is a noisy version of a hypervector in $\\mathcal{P}$.\n\n\\subsection{Update phase: Unsupervised learning of hypervector $\\mathbf{s}$}\n\\label{sect:update}\n The goal of the update procedure at each iteration is to map input hypervector $\\mathbf{d}_i$ as near as possible to some target hypervector in $\\mathcal{P}$ with respect to the cosine similarity.\n \n Therefore, a single learning iteration consists of three steps: \n \\begin{enumerate}\n \\item Choose a target hypervector $\\mathbf{p}_{\\texttt{target}}$ (see the next subsection); \n \\item Compute a hypervector for the perfect mapping \n $\\mathbf{d}_i \\rightarrow \\mathbf{p}_{\\texttt{target}}$ by binding $\\mathbf{d}_i$ with $\\mathbf{p}_{\\texttt{target}}$; \n \\item Update hypervector $\\mathbf{s}$ by adding this perfect mapping hypervector to hypervector $\\mathbf{s}$:\n \\begin{equation}\n \\mathbf{s}=\\mathbf{s} + \\mathbf{d}_i \\circ \\mathbf{p}_{\\texttt{target}}.\n \\label{eq:seedupdate}\n \\end{equation}\n \\end{enumerate}\n \\noindent\n Note that after the update, $\\mathbf{s}$ is not a phasor vector anymore, so it may be renormalized if necessary. Thus, by the end of the learning phase, hypervector $\\mathbf{s}$ is the superposition of bindings $\\mathbf{d}_i \\circ \\mathbf{p}_j$. As such, the result of unbinding hypervectors similar to $\\mathbf{d}_i$ from hypervector $\\mathbf{s}$ (\\ref{eq:seedbind}) will resemble hypervectors in $\\mathcal{P}$ (i.e., the hypervectors used in the update phase).
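\n\nFor concreteness, the initialization, search, and update phases described above can be sketched in a few lines of Python with NumPy (a minimal illustration; the function names are ours, and the optional renormalization of $\\mathbf{s}$ is omitted):\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\n\ndef random_phasor(d):\n    # Random FHRR unit hypervector: i.i.d. phases on the unit circle.\n    return np.exp(1j * 2 * np.pi * rng.uniform(size=d))\n\ndef make_hd_map(n, m, d, eps):\n    # Node (i,j) of the grid is x_0^(eps*i) o y_0^(eps*j), where the\n    # binding 'o' is the element-wise (Hadamard) product of phasors.\n    x0, y0 = random_phasor(d), random_phasor(d)\n    return np.array([x0 ** (eps * i) * y0 ** (eps * j)\n                     for i in range(n) for j in range(m)])\n\ndef unbind(s, d_i):\n    # FHRR unbinding: element-wise multiplication with the complex\n    # conjugate; if s = d_i o p_target, this recovers p_target exactly.\n    return s * np.conj(d_i)\n\ndef find_bmv(p_star, hd_map):\n    # Best Matching Vector: the node with the highest cosine similarity.\n    sims = np.real(hd_map @ np.conj(p_star))\n    sims = np.divide(sims,\n                     np.linalg.norm(p_star) * np.sqrt(hd_map.shape[1]))\n    k = int(np.argmax(sims))\n    return k, sims[k]\n\ndef update_seed(s, d_i, p_target):\n    # Single-vector-operation learning rule: s <- s + d_i o p_target.\n    return s + d_i * p_target\n\\end{verbatim}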
\n\\subsection{Weakest match search (WMS) phase: Finding a data hypervector for the update in a single iteration}\n\\label{sect:observe}\nTo find a data hypervector for the update of hypervector $\\mathbf{s}$ (\\ref{eq:seedupdate}), Hyperseed uses a heuristic based on the farthest-first traversal rule (FFTR). This principle is widely used for defining heuristics in many important computing applications, ranging from the approximation of the Traveling Salesman Problem \\cite{TravelingSalesman} to $k$-center clustering \\cite{k-center} and fast similarity search \\cite{Rachkovskij2017Cybern}. FFTR has also been used as a weight update rule in SOMs \\cite{FFTSOM}, which resulted in a better representation of outliers as well as lower topographic and quantization errors. In FFTR, the first point is selected arbitrarily, and each successive point is as far as possible from the set of previously selected points. In the case of Hyperseed, FFTR is straightforwardly implemented by checking the cosine similarity between the noisy vector $\\mathbf{p}^*$ (\\ref{eq:seedbind}) and all noiseless hypervectors of HD-map $\\mathcal{P}$. In order to demonstrate this, we shall consider the properties of the transformation from $\\mathcal{D}$ to $\\mathcal{P}$ with the unbinding operation (\\ref{eq:transf}).\n\n\\begin{figure}[t!]\n\\centering\n\\includegraphics[width=6.5cm]{img\/gridmapping_hyperseed}\n\\caption{Transformation of space $\\mathcal{D}$ into $\\mathcal{P}$ with the unbinding operation.\n}\n\\label{fig:gridmapping}\n\\end{figure}\n\nConsider an example of transforming a vector space $\\mathcal{D}$ with an FPE-encoded two-dimensional grid structure to HD-map $\\mathcal{P}$. Graphically, the scenario is illustrated in Fig.~\\ref{fig:gridmapping}. The data is transformed via FPE as described in Section~\\ref{sect:hdMap} with initial random bases $\\mathbf{x}^\\mathcal{D}_0$ and $\\mathbf{y}^\\mathcal{D}_0$. Importantly, while the transformed hypervectors in $\\mathcal{D}$ are available for observation, the base hypervectors used in the transformation are unknown. The bandwidth $\\epsilon_{\\mathcal{D}}$ used during the encoding is also unknown. \n\nVector space $\\mathcal{P}$ is represented via FPE, now with initial random bases $\\mathbf{x}^P_0$ and $\\mathbf{y}^P_0$. Importantly, the base hypervectors of $\\mathcal{P}$ as well as the FPE bandwidth $\\epsilon_{\\mathcal{P}}$ are known. For brevity of calculations, it is assumed that the FPE bandwidth of vector space $\\mathcal{D}$ is the same as that of vector space $\\mathcal{P}$, that is, $\\epsilon_{\\mathcal{P}}=\\epsilon_{\\mathcal{D}}=\\epsilon$.\n\nWe now pick an arbitrary vector from $\\mathcal{D}$ encoding a pair of values $(k,l)$ and bind it to a hypervector from $\\mathcal{P}$ encoding some predefined pair of values $(p,q)$:\\footnote{\nStrictly speaking, the equations below should include modulo $2\\pi$ operations, but they are omitted for the sake of readability.
\n} \n\\begin{equation}\n\\begin{split}\n \\mathbf{s} &=e^{j\\cdot 2\\pi\\epsilon_{\\mathcal{P}}\\cdot x^P_0\\cdot p+j\\cdot 2\\pi\\epsilon_{\\mathcal{P}}\\cdot y^P_0\\cdot q +j\\cdot 2\\pi\\epsilon_{\\mathcal{D}}\\cdot x^D_0\\cdot k+j\\cdot 2\\pi\\epsilon_{\\mathcal{D}} \\cdot y^D_0\\cdot l}\\\\\n &=e^{j\\cdot 2\\pi \\epsilon_{\\mathcal{P}} \\cdot (x^P_0\\cdot p+y^P_0\\cdot q+\\frac{\\epsilon_{\\mathcal{D}}}{\\epsilon_{\\mathcal{P}}}x^D_0\\cdot k+\\frac{\\epsilon_{\\mathcal{D}}}{\\epsilon_{\\mathcal{P}}}y^D_0\\cdot l)}. \n\\end{split}\n\\label{eq:seedvector}\n\\end{equation}\n\nObviously, as the result of unbinding the hypervector representing coordinate $(k,l)$ of $\\mathcal{D}$ from hypervector $\\mathbf{s}$, it is translated to the hypervector for $(p,q)$ of $\\mathcal{P}$, as illustrated in Fig.~\\ref{fig:gridmapping}. \n\nLet us now unbind a hypervector from $\\mathcal{D}$ encoding a point at a certain offset $(a,b)$ from $(k,l)$, that is, the hypervector $e^{j\\cdot 2\\pi\\epsilon_{\\mathcal{D}} \\cdot (x^D_0\\cdot (k+a) + y^D_0\\cdot (l+b))}$, which is the farthest from point $(k,l)$ in this scenario. Recall that unbinding in FHRR is implemented as a component-wise multiplication with the complex conjugate:\n\\noindent\n\\begin{equation}\n\\begin{split}\n \\mathbf{v}^* &=e^{j\\cdot 2\\pi \\epsilon_{\\mathcal{P}} \\cdot (x^P_0\\cdot p+y^P_0\\cdot q+\\frac{\\epsilon_{\\mathcal{D}}}{\\epsilon_{\\mathcal{P}}}x^D_0\\cdot k+\\frac{\\epsilon_{\\mathcal{D}}}{\\epsilon_{\\mathcal{P}}}y^D_0\\cdot l)} \\cdot \\\\\n &\\cdot e^{j\\cdot 2\\pi\\epsilon_{\\mathcal{D}} \\cdot (-x^D_0\\cdot (k+a) - y^D_0\\cdot (l+b))}\\\\\n &=e^{j\\cdot 2\\pi \\epsilon_{\\mathcal{P}} \\cdot(x^P_0\\cdot (p+\\alpha_1\\cdot a)+y^P_0\\cdot (q + \\alpha_2 \\cdot b))}. \n\\end{split}\n\\label{eq:unbindoffset}\n\\end{equation}\n\\noindent\nIn (\\ref{eq:unbindoffset}), $\\alpha_1=-\\frac{\\epsilon_{\\mathcal{D}} \\cdot x^D_0}{\\epsilon_{\\mathcal{P}} \\cdot x^P_0}$ and $\\alpha_2=-\\frac{\\epsilon_{\\mathcal{D}} \\cdot y^D_0}{\\epsilon_{\\mathcal{P}} \\cdot y^P_0}$ are coefficients introduced in order to align the result of unbinding with HD-map $\\mathcal{P}$ for ease of interpretation. \n\nIn $\\alpha_1$ and $\\alpha_2$, the parameter of interest is $\\epsilon_{\\mathcal{P}}$, which is the bandwidth of HD-map $\\mathcal{P}$ and is a hyperparameter of the algorithm. Consider the case where $\\epsilon_{\\mathcal{P}}>\\epsilon_{\\mathcal{D}}$. This means that the similarity between hypervectors in HD-map $\\mathcal{P}$ decays much faster with distance than the inter-hypervector similarity in the original vector space $\\mathcal{D}$. In this case, all hypervectors from $\\mathcal{D}$ after unbinding will be similar to the hypervector in $\\mathcal{P}$ which was chosen for the update of seed hypervector $\\mathbf{s}$ (e.g., $\\mathbf{v}^P_{(p,q)}$ in Fig.~\\ref{fig:gridmapping}). Essentially, we will observe an effect of collapsing of all hypervectors in space $\\mathcal{D}$ onto a single hypervector from space $\\mathcal{P}$. This is demonstrated by a simulation in which hypervector $\\mathbf{s}$ was created as $\\mathbf{s}=\\mathbf{v}^D_{(1,2)} \\circ \\mathbf{v}^P_{(2,2)}$. The size of HD-map $\\mathcal{P}$ is $5\\times 5$. Two simulations were performed: 1) with the FPE bandwidth for encoding the input data equal to $0.2$, while the FPE bandwidth of HD-map was set to $0.03$; and 2) with the FPE bandwidth for encoding the input data equal to $0.2$, while the FPE bandwidth of HD-map was set to $0.8$.
\nFigs.~\\ref{fig:fft1} and~\\ref{fig:fft2} show the distribution of cosine similarities to all hypervectors of HD-map for every data hypervector after unbinding with hypervector $\\mathbf{s}$ (\\ref{eq:seedbind}) for the first and the second simulation, respectively. Fig.~\\ref{fig:fft2} demonstrates the effect of collapsing all points of $\\mathcal{D}$ onto hypervector $\\mathbf{v}^P_{(2,2)}$. Fig.~\\ref{fig:fft3} shows the cosine similarities of every data hypervector, after unbinding with hypervector $\\mathbf{s}$, to the BMV in the second simulation (i.e., $\\mathbf{v}^P_{(2,2)}$). \nObserve that the lowest similarities are for hypervectors $\\mathbf{v}^D_{(1,3)}$ and $\\mathbf{v}^D_{(3,3)}$, which are the farthest away from the hypervector $\\mathbf{v}^D_{(1,2)}$ used to compute hypervector $\\mathbf{s}$. Therefore, following the FFTR heuristic, one of these hypervectors should be used in (\\ref{eq:seedupdate}) to update hypervector $\\mathbf{s}$, thus creating a new point of attraction in $\\mathcal{P}$. To summarize, the WMS procedure is as follows:\n\n\\begin{enumerate}\n\\item Initialize the lowest similarity variable $D_{min}=1$;\n\\item For each hypervector in $\\mathcal{D}$, compute $\\mathbf{p}^*$ (\\ref{eq:seedbind}) and search for the BMV in HD-map. Store the similarity to the BMV, $D_{BMV}$;\n\\item If $D_{BMV}<D_{min}$, set $D_{min}=D_{BMV}$ and remember the current data hypervector; after the pass over $\\mathcal{D}$, the remembered hypervector is the weakest match used for the update (\\ref{eq:seedupdate}).\n\\end{enumerate}\n\n\\begin{figure}[t!]\n \\centering\n \\begin{subfigure}[b]{0.3\\columnwidth}\n \\centering\n % graphics file lost in the source\n \\caption[]%\n {{\\small Similarities for $\\epsilon_{\\mathcal{P}}<\\epsilon_{\\mathcal{D}}$.}} \n \\label{fig:fft1}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.3\\columnwidth}\n \\centering\n % graphics file lost in the source\n \\caption[]%\n {{\\small Similarities for $\\epsilon_{\\mathcal{P}}>\\epsilon_{\\mathcal{D}}$.}} \n \\label{fig:fft2}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.3\\columnwidth} \n \\centering \n \\includegraphics[width=\\textwidth]{img\/hits_bmv.png}\n \\caption[]%\n {{\\small Similarities to $\\mathbf{v}^P_{(2,2)}$, $\\epsilon_{\\mathcal{P}}>\\epsilon_{\\mathcal{D}}$.}} \n \\label{fig:fft3}\n \\end{subfigure}\n\n \\caption[ ]\n {\\small Distribution of cosine similarities to hypervectors in $\\mathcal{P}$ for every data hypervector after unbinding with hypervector $\\mathbf{s}$.} \n \\label{fig:FFT}\n \\end{figure}\n\n\\subsection{Finding the target node on HD-map}\n\\label{sect:find}\nThe hypervector found during the WMS procedure must be anchored to a hypervector $\\mathbf{p}_{\\texttt{target}}$ encoding a node on HD-map. Since the WMS procedure finds the data hypervector whose unbinding result matches HD-map with the lowest similarity, intuitively, it should be bound to a different hypervector. Following the FFTR heuristic, this new hypervector should be located farther away from the current BMV in order to create a new point of attraction for the hypervector found by the WMS procedure and other hypervectors similar to it. \n\nWith the FPE encoding of HD-map's hypervectors, the number of hypervectors maximally dissimilar to a selected one differs depending on the choice of the bandwidth parameter for the same size of the map. For large values of $\\epsilon_{\\mathcal{P}}$, the similarity between hypervectors encoding neighboring nodes decays faster than for small values of $\\epsilon_{\\mathcal{P}}$; therefore, the number of dissimilar hypervectors is larger in the former case. This is demonstrated in Figs.~\\ref{fig:fpeplace1} and \\ref{fig:fpeplace2} for $\\epsilon_{\\mathcal{P}}=0.03$ and Figs.~\\ref{fig:fpeplace3} and \\ref{fig:fpeplace4} for $\\epsilon_{\\mathcal{P}}=0.008$ on a $200\\times200$ HD-map.
\nThe simplest heuristic for finding the hypervector farthest from the given BMV for HD-maps with large $\\epsilon_{\\mathcal{P}}$ is, therefore, a random selection of $\\mathbf{p}_{\\texttt{target}}$.\nAs we demonstrate in the next section, this heuristic leads to an adequate accuracy of Hyperseed in classification tasks. At the same time, obviously, the visualization of such projections is not very informative, since it does not adequately display the internal disposition of the classes. \n\nIn the case of small values of $\\epsilon_{\\mathcal{P}}$, the number of maximally dissimilar hypervectors on HD-map is limited, and $\\mathbf{p}_{\\texttt{target}}$ has to be selected according to a certain heuristic. When Hyperseed is used in visualization tasks, $\\epsilon_{\\mathcal{P}}$ is chosen such that the most dissimilar hypervectors are located in the corners of HD-map. These corner nodes are then chosen as $\\mathbf{p}_{\\texttt{target}}$ during the update phase.\n\nFig.~\\ref{fig:lang_proj} demonstrates an instance of projecting a dataset containing a collection of $n$-gram statistics from texts in seven European languages. The dataset is projected onto a $20\\times20$ HD-map with $\\epsilon_{\\mathcal{P}}=0.008$. The complete experiment description follows in the next section. In the figure, crosses in the corners of HD-map show the choice of target nodes during the update procedure. We observe a semantically meaningful projection of the languages. The color of the crosses corresponds to the class of the hypervector selected by the WMS procedure. The most important observation at this point is that the classes that were not used for the update of hypervector $\\mathbf{s}$ (e.g., the Swedish, French, and Bulgarian languages) emerged automatically and were adequately projected. \n\n \\begin{figure}[t!]\n \\centering\n \\begin{subfigure}[b]{0.475\\columnwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{img\/FPE_largeband.png}\n \\caption[]%\n {{\\small $\\epsilon_{\\mathcal{P}}=0.03$.}} \n \\label{fig:fpeplace1}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.475\\columnwidth} \n \\centering \n \\includegraphics[width=\\textwidth]{img\/FPE_largeband_placement.png}\n \\caption[]%\n {{\\small $\\epsilon_{\\mathcal{P}}=0.03$.}} \n \\label{fig:fpeplace2}\n \\end{subfigure}\n \\vskip\\baselineskip\n \\begin{subfigure}[b]{0.475\\columnwidth} \n \\centering \n \\includegraphics[width=\\textwidth]{img\/FPE_lowband.png}\n \\caption[]%\n {{\\small $\\epsilon_{\\mathcal{P}}=0.008$.}} \n \\label{fig:fpeplace3}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.475\\columnwidth} \n \\centering \n \\includegraphics[width=\\textwidth]{img\/FPE_lowband_placement.png}\n \\caption[]%\n {{\\small $\\epsilon_{\\mathcal{P}}=0.008$.}} \n \\label{fig:fpeplace4}\n \\end{subfigure}\n \\caption[ ]\n {\\small Distribution of cosine similarities on HD-map for different values of FPE bandwidth $\\epsilon_{\\mathcal{P}}$.} \n \\label{fig:fpeplace}\n \\end{figure}\n\n \\subsection{The iterative Hyperseed algorithm}\n \\label{sect:iterative}\n Fig.~\\ref{fig:flowchart} displays all phases of the Hyperseed algorithm in a flowchart. The hyperparameters of the algorithm are the dimensionality of hypervectors $d$ and the number of iterations $I$ (i.e., the number of updates of hypervector $\\mathbf{s}$).
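\nComplementing the flowchart, one possible rendering of a full training loop is sketched below (a minimal Python illustration; the helper names, the loop structure, and the cycling over pre-selected target nodes are our choices):\n\\begin{verbatim}\nimport numpy as np\n\ndef cos_sim(a, b):\n    # Cosine similarity of two complex hypervectors.\n    return np.divide(np.real(np.vdot(b, a)),\n                     np.linalg.norm(a) * np.linalg.norm(b))\n\ndef hyperseed_train(data, hd_map, target_ids, s, iters):\n    # data: training hypervectors; hd_map: rows of P; s: random initial\n    # seed; target_ids: pre-selected target nodes, e.g., the corners.\n    for it in range(iters):\n        d_min, weakest = 1.0, None\n        for d_i in data:                  # WMS phase (farthest-first)\n            p_star = s * np.conj(d_i)     # unbinding, Eq. (seedbind)\n            d_bmv = max(cos_sim(p_star, p) for p in hd_map)\n            if d_bmv < d_min:\n                d_min, weakest = d_bmv, d_i\n        p_target = hd_map[target_ids[it % len(target_ids)]]\n        s = s + weakest * p_target        # update, Eq. (seedupdate)\n    return s\n\\end{verbatim}\nThe nested loops make the $\\mathcal{O}(INd|\\mathcal{P}|)$ complexity discussed below explicit.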
In the flowchart, function \\texttt{SelectD()} returns an arbitrary hypervector from $\\mathcal{D}$ if its argument is ``any'', or the $j$-th hypervector from $\\mathcal{D}$ when it is called with argument $j$. Function \\texttt{SelectP()} returns a hypervector from $\\mathcal{P}$ as described in the previous subsection. Function \\texttt{FindBMV}($\\mathbf{p}^*$,$\\mathcal{P}$) performs a search in the associative memory storing the hypervectors of $\\mathcal{P}$ and returns the hypervector with the highest cosine similarity to $\\mathbf{p}^*$. Function \\texttt{Sim($\\mathbf{p}^*$,$\\mathbf{BMV}$)} computes the cosine similarity between the two hypervectors.\n\n \\begin{figure}[t!]\n \\centering\n \\includegraphics[width=6cm]{img\/lang_proj.png}\n \\caption{Projection of seven European languages after four updates. The chosen $\\mathbf{p}_{\\texttt{target}}$ hypervectors are the four nodes $(0,0)$, $(20,0)$, $(20,20)$, and $(0,20)$. }\n \\label{fig:lang_proj}\n\\end{figure}\n\nThe computational complexity of Hyperseed is $\\mathcal{O}(INd|\\mathcal{P}|)$, where $I$ is the number of iterations (updates) of Hyperseed, $N$ is the number of training data hypervectors, $d$ is the dimensionality of hypervectors, and $|\\mathcal{P}|$ is the size of HD-map. When implemented on neuromorphic hardware, the search for the BMV happens in a constant time $\\tau$; therefore, the time complexity of Hyperseed in this case is $\\mathcal{O}(IN)\\cdot \\tau$. The memory complexity, of course, depends on the size of HD-map, which is $d \\times |\\mathcal{P}|$. \n\nThe complexity of the SOM algorithm in the Winner-Takes-All phase and in the weight matrix update procedures is $\\mathcal{O}(INd|SOM|)$. Here, $I$ is the number of iterations of the SOM algorithm, $N$ is the number of data samples, $|SOM|$ is the number of nodes in the SOM map (corresponding to $|\\mathcal{P}|$ in Hyperseed), and $d$ is the number of neurons per node (corresponding to the dimensionality of hypervectors in Hyperseed). While the $\\mathcal{O}$ complexities of the two algorithms match, Hyperseed uses a single vector operation in the update phase and a substantially smaller number of iterations. \n\n \\begin{figure}[t!]\n \\centering\n \\includegraphics[width=\\columnwidth]{img\/HyperSeedFlowchart.png}\n \\caption{Flowchart of Hyperseed. }\n \\label{fig:flowchart}\n\\end{figure}\n\n\\section{Introduction}\n\\label{sect:intro}\nVector Symbolic Architectures (VSA) are increasingly leveraged and adapted in machine learning and robotics algorithms and applications \\cite{HDGestureIEEE, Superposition_Olshausen, NeubertRobotics2019, HerscheCompressingBCI2020, Kleyko_RVFL, NeubertAggregation2021}. In classification tasks, the use of VSA leads to an order-of-magnitude increase in the energy efficiency of computations on the one hand and natively enables one-shot and multi-task learning on the other~\\cite{ChangHDTaskProjected2020, ChangHDInformationPreserved2020, KarunaratneInMemory2020,KarunaratneHDAugmented2021, KleykoAugmented2022}. It is anticipated that VSA will play a key role in the development of novel neuromorphic computer architectures~\\cite{Loihi18} as an algorithmic abstraction \\cite{RahimiNanoscalable2017, KleykoComputingParadigm2021}.
The main contribution of this article is a novel algorithm for unsupervised learning called Hyperseed, which relies on the mathematical properties of random high-dimensional representation spaces, described through the phenomenon of concentration of measure \\cite{Gorban2018Blessing}, and on the main VSA operations of binding and superposition~\\cite{Kanerva09}. The method's name suggests that data samples are encoded as high-dimensional vectors (also called \\textit{hypervectors}, HVs, or ``seeds''), which are then mapped (``sown'') onto a specially prepared, topologically arranged set of hypervectors for revealing the internal cluster structure in the unlabeled data.\n\nThe Hyperseed algorithm bears conceptual similarities to Kohonen's Self-Organizing Maps (SOM) algorithm \\cite{SOMBook, intSOM}; therefore, selected SOM terminology is adopted for the description of our approach. \nHowever, it is designed using radically different computing principles from those of the SOM algorithm. \nHyperseed implements the entire learning pipeline in terms of VSA operations. To the best of our knowledge, this has not been attempted before and is reported here for the first time. \n\nThe Hyperseed algorithm is presented using the Frequency Holographic Reduced Representations (FHRR)~\\cite{PlateNested1994} model of VSA and the concept of Fractional Power Encoding (FPE) \\cite{PlateNested1994, frady2021computing, FradyFunctionsNICE2022, Komer2019ANR, komer2020biologically}. The usage of the FHRR model makes the proposed solution particularly well suited for implementation on spiking neural network architectures, including Intel's Loihi \\cite{Loihi18}.\n\nTo this end, we introduce the Hyperseed algorithm and demonstrate its performance on three illustrative non-linear classification problems of varying complexity: synthetic datasets from the Fundamental Clustering Problems Suite, Iris classification, and language identification using $n$-gram statistics. Across all experiments, Hyperseed convincingly demonstrates its key novelties: learning from a few input vectors and a learning rule based on a single vector operation, both of which contribute to reduced time and computational complexity. \n\nThe article is structured as follows. \nSection~\\ref{sect:related} describes work related to Hyperseed operations. The VSA methods leveraged in Hyperseed are presented in Section \\ref{sect:method}.\nSection~\\ref{sect:vsaseed} presents the main contribution -- the method for unsupervised learning -- Hyperseed. \nSection~\\ref{sect:perf} reports the results of the experimental performance evaluation. \nSection~\\ref{sect:discussion} discusses the suitability of Hyperseed for realization on neuromorphic hardware. \nThe conclusions follow in Section~\\ref{sect:conclusions}.\n\n\\section{Related Work}\n\\label{sect:related}\nVSA \\cite{PlateNested1994, Rachkovskij2001, KleykoSDR2016, FradySDR2020} is a computing framework providing methods of representing and manipulating concepts and their meanings in a high-dimensional space.
VSA finds applications in, for example, \ncognitive architectures~\\cite{BuildBrain,RachkovskijAnalogical2004,RachkovskijAnalogy2012}, \nnatural language processing~\\cite{BICA16CT,JonesMeaning2007, RPRSKJ2015, RachkovskijRecursiveBinding2022, RachkovskijEquivariant2021}, communications~\\cite{JakimovskiCollective2012, KleykoMACOM2012, KimHDM2018},\nbiomedical signal processing~\\cite{ACCESS_HRV, HDGestureIEEE}, approximation of conventional data structures~\\cite{HD_FSA, ABF},\nand classification tasks such as gesture recognition~\\cite{TNNLS18, HDGestureIEEE}, cybersecurity threat detection \\cite{christopher2021minority, moraliyage2022evaluating}, physical activity recognition~\\cite{Rasanen14}, character recognition~\\cite{goltsev2005combination, Rachkovskij2022NCA}, speaker identification~\\cite{HuangSpeaker2022}, and fault isolation and diagnostics~\\cite{KussulDiagnostics1998, ACCESS_BIOFAULT, EggimannConfigurableHD2021}. Examples of using VSA for learning tasks other than classification include clustering~\\cite{ImaniHDCluster2019, BandaragodaTrajectoryTraffic2019, HernandezClustering2021}, semi-supervised learning~\\cite{ImaniSemiHD2019}, collaborative privacy-preserving learning~\\cite{ImaniHDColLearn2019, KhaleghiPriveHD2020}, multi-task learning~\\cite{ChangHDTaskProjected2020, ChangHDInformationPreserved2020}, and distributed learning~\\cite{RosatoHDDistributed2021, HsiehFL2021}.\nA comprehensive two-part survey of VSA is available in~\\cite{KleykoSurveyVSA2021Part1,KleykoSurveyVSA2021Part2}. \n\nHypervectors of high (but fixed) dimensionality (denoted as $d$) are the basis for representing information in VSA. \nThe information is distributed across hypervector positions; therefore, hypervectors use distributed representations \\cite{Hinton1986}. There are different VSA models that all offer the same operation primitives but differ slightly in terms of the implementation of these primitives. For example, there are VSA models that compute with binary, bipolar \\cite{Kanerva:Hyper_dym13, MAP}, continuous real, and continuous complex vectors \\cite{PlateNested1994}. Thus, the VSA concept has the flexibility to connect to a multitude of different hardware types, such as binary-valued VSAs for analog in-memory computing architectures~\\cite{KarunaratneInMemory2020} or complex-valued VSAs for spiking neuron architectures~\\cite{TPAM, RennerBinding2022, BentSpike2022}.\n\nThe sub-domain of related work most relevant to the proposed Hyperseed algorithm is the application of VSA to solving machine learning tasks. In this context, VSA has been used for: 1) representing input data and interfacing such representations with conventional machine learning algorithms and 2) implementing the functionality of neural networks with VSA operations. \n\nThe most illustrative use cases of encoding input data into hypervectors and interfacing conventional machine learning algorithms are \\cite{BandaragodaTrajectoryTraffic2019, RachkovskijClassifiers2007, RIJHK2015, AlonsoHyperEmbed2020, MirusBehavior2019, KleykoBoostingSOM2019, MirusBalanced2020, ShridharEnd2End2020, Kussul1999IJCNN, Rachkovskij2015Cybern}. For example, the works~\\cite{AlonsoHyperEmbed2020, KleykoBoostingSOM2019} proposed encoding $n$-gram statistics into hypervectors and subsequently solving typical natural language processing tasks with either supervised or unsupervised learning using standard artificial neural network architectures.
The main distinctive property of VSA-represented data is the substantially reduced memory footprint and learning time. In \\cite{BandaragodaTrajectoryTraffic2019}, hypervectors were used to encode sequences of variable lengths in the context of unsupervised learning of traffic patterns in an intelligent transportation system application.\nIn the context of visual navigation, hypervectors were used as input to Simultaneous Localization and Mapping (SLAM) algorithms \\cite{NeubertRobotics2019} as well as for ego-motion estimation~\\cite{MitrokhinSensorimotor2019, KleykoCommentariesSR2020, HerscheDVSCDT2020}.\n\nThe great potential of VSA was demonstrated when it was used to implement the entire functionality of some classical neural network architectures. In~\\cite{Kleyko_RVFL, KleykointESN2020, DiaoGLVQHD2021}, the functionality of an entire class of randomly connected neural networks (random vector functional link networks~\\cite{IgelnikRVFL1995} and echo state networks~\\cite{RC09}) was implemented purely in terms of VSA operations.\nIt was demonstrated that implementing the algorithm functionality with bipolar VSAs allows reducing the energy consumption on specialized digital hardware by an order of magnitude, while substantially decreasing the operation times. \nMoreover, further flexibility can be achieved~\\cite{KleykoCA2020, EggimannConfigurableHD2021} when considering the ways of generating the random connections used in the networks.\n\nThe main contribution of this article in the context of VSA is a novel approach to learning, since the dominant learning approach in the area is based on creating a single hypervector for a specific class.\nThis is achieved by encoding input data and then forming an associative memory storing the prototypical representations of the individual classes. \nOur approach to learning is radically different -- it utilizes the similarity preservation property of the binding operation in combination with the FPE encoding method~\\cite{PlateNested1994}. \nFPE was recently used to simulate and predict dynamical systems~\\cite{VoelkerFPEDynamical2021}, perform integer factorization~\\cite{KleykoPrimes2022}, and represent order in time series~\\cite{SchlegelHDC-MiniROCKET2022}.\nThe associative memory in the proposed approach is created once during the initialization phase and remains fixed during the lifetime of the system. The update requires a single vector operation. To the best of our knowledge, this is the first research article to present the usage of VSA in unsupervised learning tasks.\n\n\\section{Experiments and Results}\n\\label{sect:perf}\n\nThis section describes the results of the experimental evaluation of the proposed Hyperseed algorithm. \nBefore elaborating on the details of the experimental evaluation, it is important to set realistic expectations in order to correctly interpret the results. \nIn particular, in the classification tasks, it would not be reasonable to expect a performance comparable to, e.g., deep learning-based models.\nThis is because classification is not the primary task for unsupervised learning approaches. Like any other unsupervised learning algorithm, Hyperseed does not modify the input representations to make them more separable.
We chose to evaluate the performance of Hyperseed on classification tasks mainly because visualization (as one of the main applications of unsupervised learning) is subjective and its quality is hard to quantify.\n\nWe report three illustrative cases: 1) unsupervised learning from one-shot demonstrations on six synthetic datasets; 2) classification of the Iris dataset; and 3) identification of $21$ languages using their $n$-gram statistics. In all the experiments, we used dense complex FHRR representations \\cite{plate1995holographic} of varying dimensionality. The Python implementation of the Hyperseed algorithm and the code necessary to reproduce all the experiments reported in this study are publicly available\\footnote{Implementation of Hyperseed and the experiments, 2022. [Online.] Available: \\url{https:\/\/github.com\/eaoltu\/hyperseed}}. \n\\begin{table*}[tbh!]\n \\caption{Comparison of projections and classification accuracy of Hyperseed vs. SOMs on FCPS datasets} \\label{tab:hyperseed_fcps}\n \\centering\n \\begingroup\n \\setlength{\\tabcolsep}{6pt}\n \\renewcommand{\\arraystretch}{2}\n \\begin{tabular}{ |c|c|c|c|c|c|c| } \n \\hline\n & Atom & Chain link & Engy time & Hepta & Two diamonds & Lsun 3D \\\\\n \\cline{2-7}\n \\raisebox{20pt}{\\rotatebox{90}{Dataset}}\n &\n {\\includegraphics[scale=0.25]{img\/Atom_data.PNG}}\n &\n {\\includegraphics[scale=0.26]{img\/Chain_link_data.PNG}}\n &\n \\raisebox{-3pt}{\\includegraphics[scale=0.20]{img\/EngyTime_data.PNG}}\n &\n \\raisebox{-3pt}{\\includegraphics[scale=0.26]{img\/Hepta_data.PNG}}\n &\n {\\includegraphics[scale=0.19]{img\/Two_diamonds_data.PNG}}\n &\n \\raisebox{-3pt}{\\includegraphics[scale=0.26]{img\/Lsun3D_data.PNG}} \\\\\n \\hline \n \\raisebox{10pt}{\\rotatebox{90}{SOM}}\n &\n {\\includegraphics[scale=0.185]{img\/Atom_trad.PNG}}\n &\n {\\includegraphics[scale=0.185]{img\/Chainlink_trad.PNG}}\n &\n {\\includegraphics[scale=0.185]{img\/EngyTime_trad.PNG}}\n &\n {\\includegraphics[scale=0.185]{img\/Hepta_trad.PNG}} \n &\n {\\includegraphics[scale=0.185]{img\/TwoDiamonds_trad.PNG}} \n &\n {\\includegraphics[scale=0.185]{img\/Lsun3D_trad.PNG}} \\\\ \\cline{2-7}\n & Accuracy: 0.8878 & Accuracy: 0.9265 & Accuracy: 0.9528 & Accuracy: 0.9777 & Accuracy: 0.9774 & Accuracy: 0.9715 \\\\\n \\hline\n \\raisebox{5pt}{\\rotatebox{90}{Hyperseed}}\n &\n {\\includegraphics[scale=0.15]{img\/Atom_hrr.png}}\n &\n {\\includegraphics[scale=0.15]{img\/Chainlink_hrr.PNG}}\n &\n {\\includegraphics[scale=0.15]{img\/EngyTime_hrr.png}} \n &\n {\\includegraphics[scale=0.15]{img\/Hepta_hrr.png}} \n &\n {\\includegraphics[scale=0.15]{img\/TwoDiamonds_hrr.png}}\n &\n {\\includegraphics[scale=0.15]{img\/Lsun3D_hrr.PNG}} \\\\ \\cline{2-7}\n & Accuracy: 0.9821 & Accuracy: 0.9780 & Accuracy: 0.9071 & Accuracy: 0.9552 & Accuracy: 0.9902 & Accuracy: 0.9005 \\\\\n \\hline\n \\end{tabular}\n \\endgroup\n\\end{table*}\n\n\\subsection{Data transformation to high-dimensional space}\n\\label{sect:encoding}\nIn the first two experiments, where the input data are in the form of feature vectors of dimensionality $K$, the values of each feature $f_k, k \\in [1,K]$, were normalized to the range $[0,1]$. The interval $[0,1]$ was then split into $q$ quantization levels. For each feature, a base hypervector of unit length $\\mathbf{b}_k$ was randomly generated. Then the levels of a particular feature, $\\textbf{l}_i^k$, were encoded in the FHRR representation using FPE~\\cite{PlateNested1994, komer2020biologically, frady2021computing, FradyFunctionsNICE2022, Komer2019ANR}: $\\textbf{l}_i^k=\\mathbf{b}_k^{\\epsilon i}$, where $\\epsilon \\in \\mathbb{R}$ is the bandwidth parameter. The feature vector of a data sample was then represented as a single hypervector with the superposition operation: $\\textbf{v}=\\sum_{k=1}^{K} \\textbf{l}_i^k$.
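\nA minimal sketch of this feature encoding in Python (the rounding-based quantization and the function names are our assumptions):\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\n\ndef random_phasor(d):\n    # Random unit FHRR base hypervector b_k.\n    return np.exp(1j * 2 * np.pi * rng.uniform(size=d))\n\ndef encode_sample(x, bases, eps, q):\n    # x: K feature values already normalized to [0, 1];\n    # bases: list of K base phasors b_k; eps: FPE bandwidth.\n    levels = np.rint(x * (q - 1)).astype(int)  # plausible quantization\n    # l_i^k = b_k^(eps * i); the sample hypervector is the superposition\n    # of the level hypervectors over all K features.\n    return sum(bases[k] ** (eps * levels[k]) for k in range(len(x)))\n\\end{verbatim}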
\nIn the language identification experiments, $n$-gram statistics were represented as hypervectors following the procedure in \\cite{RIJHK2015, KleykoBoostingSOM2019}. First, a bijection between the alphabet letters $i \\in \\mathcal{A}$ and random unitary atomic hypervectors $\\mathbf{b}_i$ was created. To encode the position of character $i$ in the $n$-gram, the permutation operation was used on the corresponding atomic hypervector $\\mathbf{b}_i$. For example, the hypervector for character ``b'' in the third position of a tri-gram is the corresponding atomic hypervector rotated three times: $\\rho^3(\\mathbf{b}_{b})$. An $n$-gram of size $n$ was encoded as the binding of the position-encoded hypervectors of the corresponding characters. For example, the tri-gram ``bdf'' was encoded as $\\rho(\\mathbf{b}_{b}) \\circ \\rho^2(\\mathbf{b}_{d})\\circ \\rho^3(\\mathbf{b}_{f})$. \nThe $n$-gram statistics of a given text sample were then encoded into a single hypervector through the superposition of the hypervectors of all observed $n$-grams.\n\n\\subsection{Hyperseed for classification tasks}\n\nThe proposed Hyperseed algorithm is by definition an unsupervised learning algorithm; therefore, an extra mechanism is needed to use it in supervised tasks such as the considered Iris classification and language identification tasks. Once Hyperseed is trained, labels need to be assigned to the best matching hypervectors in HD-map. \n\nRecall that Hyperseed is trained on a small subset of the available training data, as described in Section \\ref{sect:iterative}. \nFor the labeling process in the experiments presented below, the training data were presented to the trained Hyperseed HD-map for one full epoch that did not update the seed hypervector $\\mathbf{s}$. The labels of the training data were used to collect statistics of the BMVs in HD-map. The nodes were assigned the labels of the input samples that were prominent in the collected statistics. \n\nThe hypervectors of the nodes with the assigned labels were stored in the memory. During the classification phase, samples of the test data were used to assess the trained Hyperseed. For each sample in the test data, the BMV in HD-map was determined using the search procedure (Section~\\ref{sect:search}). The test sample was then assigned the label of the closest labeled hypervector stored in the memory. \n\nAccuracy was used as the main performance metric for the evaluation and comparison of Hyperseed runs with different parameters. It should be re-emphasised that the focus of the experiments was not on achieving the highest possible accuracy but on a comparative analysis of the Hyperseed algorithm. \n\n\\subsection{Experiment 1: The performance of Hyperseed on synthetic datasets with one-shot demonstration}\n\nThis experiment serves the purpose of highlighting the major property of the Hyperseed algorithm -- the capability of one-shot learning.
When talking about one-shot learning, one has to be careful with its definition. In many practical cases, a single example is obviously not enough for accurate inference; instead, it is reasonable to talk about learning from a limited number of data samples, i.e., few-shot learning. This is what we intend to gradually demonstrate with all the experiments in this section. \n\nIn the first experiment, we fix the number of updates of seed hypervector $\\mathbf{s}$ to one. This, essentially, boils down to randomly picking a sample from the particular training data, running Hyperseed's update phase, and right after that performing the labeling, the classification on the corresponding test data, and the visualization. \n\nFor this purpose, we used synthetic datasets from the Fundamental Clustering Problems Suite (FCPS)~\\cite{Ultsch05}. FCPS provides several non-linear but simple datasets that can be visualized in two or three dimensions, for elementary benchmarking of clustering and non-linear classification algorithms. \n\nWe selected the six FCPS datasets that are most representative of the non-linearity we aim to learn using Hyperseed: ``Atom'', ``Chain Link'', ``Engy Time'', ``Hepta'', ``Two Diamonds'', and ``Lsun 3D''. We repeated the experiment eight times. In each run, all hypervectors used for the data encoding as well as for the operation of the Hyperseed algorithm were generated with a new seed used to initialize the pseudorandom generator. \n\nIn Table~\\ref{tab:hyperseed_fcps}, we present the results of this experiment in the form of a comparative evaluation of the projections by the conventional SOM and the projections produced by the Hyperseed algorithm, both visually and as the classification accuracy for each dataset. The sizes of the SOM grid and of the HD-map of the Hyperseed algorithm were the same: $100\\times 100$.\nThe first row in the table presents the visualization of the six selected datasets in their original two- or three-dimensional data space. The second row presents the HD-map projections and classification accuracy of the conventional SOM, while the third row shows the HD-map projections and classification accuracy of the Hyperseed algorithm. \n\nThe primary observation in relation to the classification is that the Hyperseed algorithm provided an average accuracy on a par with the conventional SOM ($0.948$ versus $0.943$). \n\nThe visualization of the HD-map projections of Hyperseed is more representative of the topology of the original data space than that of the conventional SOM. For three out of six datasets (Engy Time, Hepta, and Lsun 3D), the topology preservation was directly comparable to the original dataset: data samples of the same class were tightly clustered, in contrast to the conventional SOM, where these data samples were more scattered. For the remaining three datasets (Atom, Chain Link, and Two Diamonds), although the topology preservation was not representative, the classification accuracy was high. This can be rationalized by the fixed 2D structure, which inhibits the complete visualization of the data space. \n\n\\subsection{Experiment 2: Iris classification with Hyperseed}\n\nWe continue the evaluation by exploring the details of Hyperseed's operations during several updates. We used the Iris dataset with 150 samples of labeled data. The dataset contains three classes of Iris flowers described by four real-valued features. Data were encoded into hypervectors as described in Section~\\ref{sect:encoding}.
\n\nTo further highlight the capabilities of Hyperseed to learn from few shots, we decided to split the Iris dataset such that the size of the test data was larger than the size of the training data. The results for Hyperseed in this section were obtained for the $20\\% \/ 80\\%$ split for training and test data, respectively. In order to provide a benchmark performance, we trained the conventional SOM with a $30 \\times 30$ grid on different splits of the Iris dataset and counted the number of iterations the SOM required to reach $95\\%$ accuracy. The experiments with the conventional SOM were repeated 10 times, and the maximum accuracy across runs was recorded. Table~\\ref{tab:num_iter} reports the results. \nOne can see that for the 20\/80 split, the conventional SOM failed to achieve the target accuracy. Increasing the size of the training data allowed the SOM to reach the target accuracy. The number of required iterations, however, was significant (the lowest being 200 for the 80\/20 split).\nThe maximum number of iterations of Hyperseed was set to 3 and 6. \nThis means that the algorithm performed 90 (3 times 30 samples of the training data) and 180 (6 times 30 samples of the training data) searches for the BMV and only 3 (correspondingly, 6) updates of seed hypervector $\\mathbf{s}$. This is to be compared to the $200 \\times 120$ searches and updates of the conventional SOM in the 80\/20 split case. Each experiment (with three and six updates) was run ten times with different seeds.\n\nThe target nodes at each update were pre-selected to be $(15,15)$ for the first update, $(20,20)$ for the second update, $(10,10)$ for the third update, $(5,5)$ for the fourth update, $(25,25)$ for the fifth update, and $(5,25)$ for the sixth update. These nodes are marked by red crosses in the visualizations. This highlights again the importance of the target node selection rule for visually adequate projections. In this experiment, however, an adequate visualization was not important, since our focus was on the performance characteristics of the Hyperseed algorithm in the classification task. \nTable~\\ref{tab:hyperseed_comp1} shows the results of the experiment. For a fair comparison with the conventional SOM, here we also select the best performance across runs.
\n\n\\begin{table}[t!]\n \\caption{Number of iterations of the conventional SOM required to reach $95\\%$ accuracy for different splits of Iris.} \\label{tab:num_iter}\n \\centering\n \\renewcommand{\\arraystretch}{2}\n \\begin{tabular}{ |p{4cm}|c|c|c|c| } \n \\hline\n Data set split (training\/testing), \\% & 20\/80 & 40\/60 & 60\/40 & 80\/20 \\\\ \\hline\n Number of updates \n & - & 2500 & 600 & 200 \\\\\n \\hline\n \\end{tabular}\n\\end{table}\n\n\\begin{table}[t]\n \\caption{Comparison of the Hyperseed projections with different random seeds for the Iris dataset.} \\label{tab:hyperseed_comp1}\n \\centering\n \\begingroup\n \\setlength{\\tabcolsep}{6pt}\n \\renewcommand{\\arraystretch}{2}\n \\begin{tabular}{ |c|c|c| } \n \\hline\n &\n \\# updates: 3, Acc: 0.92 & \\# updates: 6, Acc: 0.95 \\\\\n \\hline\n {\\rotatebox[origin=c]{90}{Train Projection}}\n &\n \\raisebox{-45pt}{\\includegraphics[scale=0.2]{img\/Iris_data\/1_train_3itr.png}}\n &\n \\raisebox{-45pt}{\\includegraphics[scale=0.2]{img\/Iris_data\/1_train.png}}\n \\\\\n \\hline\n \\rotatebox[origin=c]{90}{Test Projection}\n &\n \\raisebox{-45pt}{\\includegraphics[scale=0.2]{img\/Iris_data\/1_test_3itr.png}} \n &\n \\raisebox{-45pt}{\\includegraphics[scale=0.2]{img\/Iris_data\/1_test.png}}\\\\\n \\hline\n \\end{tabular}\n \\endgroup\n\\end{table} \n\nThe first interesting observation comes in the case of three updates. \nWe selected the best runs, which resulted in the highest accuracy on the test data ($0.93$). The projection of the training data shows that, out of three updates, in total one update was done for the first class (blue circles) and two updates were done for the second class. \nThus, the cluster for the third class emerged automatically. \n\nAnother important observation comes from the relative placement of the data samples (both from the training and the test data) on HD-map. For the Iris dataset, it is known that the second and third classes are very similar to each other, which manifests itself in the misclassification of some of their samples. Here, we see that this was, indeed, the case (orange triangles are very close to green triangles in several nodes of HD-map). \nHowever, in the case of Hyperseed this proximity did not lead to a large degradation of the classification accuracy. This is because each point in HD-map attracted similar samples.\nNext, Fig.~\\ref{fig:accVsIterIRIS} shows the accuracy of Hyperseed when increasing the number of updates. On average, the classification accuracy increased with more updates of seed hypervector $\\mathbf{s}$.\nAlso, in Fig.~\\ref{fig:accVSqVSbw} we demonstrate the tradeoff between the choice of hyperparameters for the FPE encoding of the input data (bandwidth $\\epsilon$ and the number of quantization levels $q$) and the classification accuracy. \nWe observed that the best accuracy was achieved when $\\epsilon$ and $q$ were in an inverse relationship, that is, when $\\epsilon q = 1$.\n\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[width=8cm]{img\/Iris_data\/accVSIterations_iris.png}\n \\caption{Accuracy of Hyperseed on the Iris dataset against the number of iterations ($d=500$). }\n \\label{fig:accVsIterIRIS}\n\\end{figure}\n\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[width=8cm]{img\/Iris_data\/q_vs_bW_vs_acc.png}\n \\caption{Classification accuracy against the choice of FPE hyperparameters for data encoding: bandwidth and the number of quantization levels.
The dimensionality of hypervectors was set to $d=500$.}\n \\label{fig:accVSqVSbw}\n\\end{figure}\n\n \\begin{figure*}[t!]\n \\centering\n \\begin{subfigure}[b]{0.475\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{img\/textDataPlots\/accVSnumOfClasses-5000D-1SOM-3gram.png}\n \\caption[]%\n {{\\small Accuracy with one seed hypervector $\\mathbf{s}$ using the original update rule from Section~\\ref{sect:update}, $d=5000$.}} \n \\label{fig:single}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.475\\textwidth} \n \\centering \n \\includegraphics[width=\\textwidth]{img\/textDataPlots\/accVSnumOfClasses-5000D-10SOM-3gram.png}\n \\caption[]%\n {{\\small Number of classes vs. accuracy with the original and modified learning phases ($d=5000$).}} \n \\label{fig:ten}\n \\end{subfigure}\n \\vskip\\baselineskip\n \\begin{subfigure}[b]{0.475\\textwidth} \n \\centering \n \\includegraphics[width=\\textwidth]{img\/textDataPlots\/1-numOfSomsVsAccuracy-5000-3grams.png}\n \\caption[]%\n {{\\small Number of $\\mathbf{s}$ hypervectors vs. accuracy ($d=5000$).}} \n \\label{fig:num_seed}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.475\\textwidth} \n \\centering \n \\includegraphics[width=\\textwidth]{img\/textDataPlots\/5-accVsDIM_numOfSoms-10-3grams.png}\n \\caption[]%\n {{\\small Dimensionality of hypervectors vs. accuracy (the number of $\\mathbf{s}$ hypervectors is 10).}} \n \\label{fig:dimensionality}\n \\end{subfigure}\n \\caption[Results of Experiment 3: performance of Hyperseed in the 21-language classification task.]\n {\\small The performance of Hyperseed in the task of language identification.} \n \\label{fig:Experiment3}\n \\end{figure*}\n\n\\subsection{Experiment 3: Training Hyperseed on $n$-grams in the language identification task}\nWhile the previous two experiments were used to get first insights into the major properties of Hyperseed, with this experiment we intend to demonstrate its performance on a larger-scale problem. \nWe use it for the task of classifying $21$ European languages using their $n$-gram statistics. The list of languages is as follows: Bulgarian, Czech, Danish, German, Greek, English,\nEstonian, Finnish, French, Hungarian, Italian, Latvian, Lithuanian, Dutch, Polish, Portuguese, Romanian, Slovak, Slovene, Spanish, and Swedish. \nThe training data are based on the Wortschatz Corpora \\cite{LANGDATA}. \nThe average size of each language corpus in the training data was $1085637.3 \\pm 121904.1$ symbols. \nIn this study, we use the method for encoding $n$-gram statistics into hypervectors from~\\cite{KleykoBoostingSOM2019}, where it was used as an input to the conventional SOM. Each original language corpus was divided into samples, where the length of each sample was set to $1000$ symbols. \nThe total number of samples in the training data was $22791$. \n\nThe test data also contain samples from the same languages and are based on the Europarl Parallel Corpus\\footnote{Available online at \\url{http:\/\/www.statmt.org\/europarl\/}. \n}.\nThe total number of samples in the test data was $21000$, where each language was represented with $1000$ samples. \nEach sample in the test data corresponded to a single sentence. \nThe average size of a sample in the test data was $150.3 \\pm 89.5$ symbols.\n\nThe data for each language were pre-processed such that the text included only lower-case letters and spaces. \nAll punctuation was removed. Lastly, all text used the 26-letter ISO basic Latin alphabet plus the space symbol, i.e., the alphabet for both the training and test data was the same and included $27$ symbols. \nFor each text sample, the $n$-gram statistics were transformed into a hypervector, which was then used as input when training or testing Hyperseed.
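\nA minimal sketch of this tri-gram encoding in Python (using the cyclic shift \\texttt{np.roll} as the permutation $\\rho$ is our choice, as are the names and the illustrative dimensionality):\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\nALPHABET = 'abcdefghijklmnopqrstuvwxyz '   # 26 letters plus space\nD = 1000                                   # illustrative dimensionality\natoms = {c: np.exp(1j * 2 * np.pi * rng.uniform(size=D))\n         for c in ALPHABET}\n\ndef encode_text(text, n=3):\n    # The i-th character of an n-gram is position-encoded by permuting\n    # its atomic hypervector i times; an n-gram is the binding\n    # (element-wise product) of these permuted atoms, and the text\n    # hypervector is the superposition over all observed n-grams.\n    v = np.zeros(D, dtype=complex)\n    for t in range(len(text) - n + 1):\n        gram = np.ones(D, dtype=complex)\n        for i, c in enumerate(text[t:t + n], start=1):\n            gram = gram * np.roll(atoms[c], i)\n        v = v + gram\n    return v\n\\end{verbatim}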
Since each sample was pre-processed to use the alphabet of only $a=27$ symbols, the conventional $n$-gram statistics input was $27^{n}$-dimensional. In the experiment, we used tri-grams; therefore, the conventional representation was $19,683$-dimensional. The dimensionality of the $n$-gram statistics mapped into hypervectors as described in Section~\\ref{sect:encoding} depends on the dimensionality of the hypervectors $d$. The results reported below were obtained for different dimensionalities in the range $[400, 10000]$, which corresponds to a dimensionality reduction of the original representation space from $49$-fold (for $d=400$) to $2$-fold (for $d=10,000$). In this experiment, we used a $100\\times100$ HD-map.\n\nThe first investigation was performed with the dimensionality of hypervectors $d=5000$ ($4$-fold dimensionality reduction). \nIn this experiment, we exposed Hyperseed to different numbers of classes from the original dataset. The number of Hyperseed updates in each case was set relative to the number of classes at hand. Importantly, this heuristic for choosing the number of updates was adopted for automating the experiments only. It merely reflects the desire to keep the number of iterations low. In the unsupervised learning context, the information about the number of classes is, obviously, unavailable. \n\nThe experiment was repeated eight times for each number of classes, each time selecting a different subset of languages for Hyperseed training. Fig.~\\ref{fig:single} shows the results. The reference accuracy of 0.97 (the red line) is the accuracy obtained with the conventional tri-gram statistics representation reported in \\cite{HDenergy, RIJHK2015} using the nearest neighbor classifier. \nThe main observation to make from this investigation is that the performance of Hyperseed dropped as it was exposed to a larger number of classes. The explanation is connected to the finite capacity of hypervectors in terms of the number of hypervectors one can superimpose while keeping sufficient accuracy of retrieving them back~\\cite{FradyCapacity2018,KleykoPerceptron2020}. As we discussed above, each update to seed hypervector $\\mathbf{s}$ introduces noise to the previous updates. That is why the number of updates in Hyperseed needs to be kept low. \n\n\\subsubsection{Modified Hyperseed learning phase}\nTo mitigate the problem of the reduced accuracy on large problems (in terms of the number of distinct classes), the learning phase of Hyperseed was modified. \n\nInstead of having a single seed hypervector, it is proposed to use $N$ vectors $\\mathbf{s}_i$, $i=1,\\ldots,N$, where $N$ becomes another hyperparameter of Hyperseed. During the update procedure, these hypervectors are updated in a round-robin manner in order to keep the number of updates per seed hypervector balanced.\n\nDuring the search for the BMV in either the WMS or the test procedures, the unbinding in (\\ref{eq:seedbind}) is now computed for the current input hypervector and all seed hypervectors $\\mathbf{s}_i$. The hypervector $\\mathbf{p}$ with the highest cosine similarity across the results of unbinding with all $\\mathbf{s}_i$ is selected as the BMV.
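\nThis modified search can be sketched as follows (a minimal Python illustration; the names are ours):\n\\begin{verbatim}\nimport numpy as np\n\ndef multi_seed_bmv(d_i, seeds, hd_map):\n    # Unbind the input from every seed s_i and keep the HD-map node\n    # with the highest cosine similarity over all seeds.\n    best_sim, best_node, best_seed = -1.0, None, None\n    for idx, s in enumerate(seeds):\n        p_star = s * np.conj(d_i)         # unbinding, Eq. (seedbind)\n        sims = np.real(hd_map @ np.conj(p_star))\n        sims = np.divide(sims,\n                         np.linalg.norm(p_star)\n                         * np.sqrt(hd_map.shape[1]))\n        k = int(np.argmax(sims))\n        if sims[k] > best_sim:\n            best_sim, best_node, best_seed = sims[k], k, idx\n    return best_node, best_seed, best_sim\n\\end{verbatim}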
\nThe number of iterations (and, as a result, the number of updates) with the new update phase should be scaled such that the number of updates per $\\mathbf{s}_i$ is approximately the same. For example, in the experiments with $N=10$, the number of iterations was configured to $30$ to allow for three updates per $\\mathbf{s}_i$. \nFig.~\\ref{fig:ten} demonstrates a significant performance improvement with the modified learning phase. The maximum performance of Hyperseed in the $21$-class case is $0.91$ after $30$ iterations.\n\nNext, we are interested in the effect of the number of seed hypervectors on the classification performance of Hyperseed. \nFig.~\\ref{fig:num_seed} shows the classification accuracy in the case of 21 classes for different numbers of seed hypervectors used in the modified learning phase. The main observation here is that the performance stabilized beyond a certain value of this parameter. \nIn our case, there was no significant increase in the accuracy after using ten seed hypervectors.\n\n\\subsubsection{Hyperseed performance for different dimensionalities of hypervectors}\n\nFinally, we are interested in the effect of the dimensionality of hypervectors on the performance of Hyperseed. We performed an experiment with ten seed hypervectors and varied the dimensionality of the hypervectors used by the algorithm from $400$ to $10000$. The results are depicted in Fig.~\\ref{fig:dimensionality}. The main observation is that the classification accuracy at small dimensionalities was low, as expected. On the positive side, it was substantially higher than random choice, which yields $0.04$ in this case. The reason for this is rather clear, and it is connected to the dimensionality of the non-distributed representations, which is high ($19,683$). With only $400$ dimensions and the adopted encoding procedure of the input data, we operated well above the capacity of the hypervectors, which led to high inter-class similarity. \nIncreasing the dimensionality addressed this issue and resulted in better accuracy. It turned out that in the case of language identification, $d=5000$ was the optimal dimensionality for obtaining high-quality performance. It is worth noting that the accuracy of Hyperseed was still lower by $0.07$ than the baseline. It was, however, not expected that it would necessarily achieve higher accuracy than the supervised methods.\n\n\\section{Introduction}\n\nThe Authors stress that for a continuous beam, ``the effect of the detuning impedance is to add an additional tune shift to the bare machine working point''. In other words, the detuning impedance is indistinguishable from the external quadrupole focusing. We agree with the Authors on that; our disagreement with them is about the relationship between beam optics and the coupling of transverse collective modes. \n\nFirst, let us clarify the general coupling conditions and the terminology for the transverse modes of a coasting beam, since the Article~\\cite{BiancacciPRAB2020} creates confusion in these respects.
It is well known that for a coasting beam, the transverse spectrum consists of two complex-conjugate series, which may be called {\it positive-based} and {\it negative-based}, $\Omega \approx (Q_\beta + n)\,\omega_0$ and $\Omega \approx (-Q_\beta + n)\,\omega_0$, respectively; $n=0,\pm1,\pm2,\dots$ is an integer harmonic number. Driving impedance terms, which are relatively small, are dropped here for the moment, and the notations are the same as in the Article. Each series, in turn, consists of four types of modes, distinguished by the signs and values of the angular phase velocity $\Omega/n$; see, e.g., Ref.~\cite{lee2018accelerator}. For the positive-based modes, these types are as follows:

\begin{itemize}
\item{Zero mode, $n=0$, $\Omega = Q_\beta\,\omega_0 \equiv \omega_\beta$;}
\item{Fast mode, $n>0$, hence $\Omega/n > \omega_0$;}
\item{Backward mode, $-Q_\beta < n < 0$, hence $\Omega/n < 0$;}
\item{Slow mode, $n < -Q_\beta$, hence $0 < \Omega/n < \omega_0$.}
\end{itemize}

Notation for the negative-based modes follows by complex conjugation, $n \rightarrow -n$, $Q_\beta \rightarrow -Q_\beta$:

\begin{itemize}
\item{Zero mode, $n=0$, $\Omega = -Q_\beta\,\omega_0 \equiv -\omega_\beta$;}
\item{Fast mode, $n<0$, hence $\Omega/n > \omega_0$;}
\item{Backward mode, $0 < n < Q_\beta$, hence $\Omega/n < 0$;}
\item{Slow mode, $n > Q_\beta$, hence $0 < \Omega/n < \omega_0$.}
\end{itemize}

Due to the driving impedance properties, only the slow modes can be unstable. An illustrative sketch of the spectrum is presented in Fig.~\ref{PlotFastSlowModes}, assuming the smooth approximation and a focusing detuning impedance, when the modes can cross but not couple.

For the Article's PS example with lattice tune $Q_\beta = \omega_\beta/\omega_0 = 6.4$, a slow mode with frequency $\omega_\beta - 7\omega_0 = -0.6\,\omega_0$ has as its nearest mode $-\omega_\beta + 6\omega_0 = -0.4\,\omega_0$, a backward one. The Article, including its title, calls the latter mode ``fast'', which is a terminological mistake: the value of the phase velocity of that allegedly ``fast'' mode is actually the smallest among all the modes.

\begin{figure*}[tbh!]
 \centering
 \includegraphics*[width=0.7\textwidth]{PlotFastSlowModes2.pdf}
 \caption{\label{PlotFastSlowModes} Sketch of a coasting beam spectrum, assuming the smooth approximation and a focusing detuning impedance. For graphical reasons, the lattice tune $Q_\beta=1.4$ is assumed, for this figure only. The positive-based part is in red; the negative-based part is in blue. Zero modes are shown by thick solid lines, backward modes by dashed lines, slow modes by dotted lines, and fast modes by normal solid lines. All crossing lines necessarily have the same difference of harmonic numbers, which is the nearest integer above the doubled tune, $n_1-n_2= \pm \mathrm{ceil}(2Q_\beta)=\pm 3$ in this example.}
\end{figure*}

In linear systems with time-independent coefficients, modes can couple only when their frequencies coincide. Clearly, positive-based modes can couple only with negative-based ones, and vice versa; one of the modes must be stable (fast, zero, or backward), and the other unstable (slow). For the PS example, the positive-based slow mode with $n=n_1=-7$ might couple with the negative-based backward mode with $n=n_2=+6$. The coupling can happen if the lattice tune difference between the two modes, $0.6-0.4=0.2$, is compensated by the detuning impedance, presumed able to shift the betatron tune up by $0.1$.
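For completeness, the arithmetic of this example can be spelled out. Denoting the detuning tune shift by $\Delta Q^\mathrm{det}$ (our notation, introduced here for illustration only), with $\Delta Q^\mathrm{det} = 0.1$ the two frequencies become
\[
\bigl(Q_\beta + \Delta Q^\mathrm{det} - 7\bigr)\,\omega_0 = -0.5\,\omega_0 = \bigl(-(Q_\beta + \Delta Q^\mathrm{det}) + 6\bigr)\,\omega_0\,,
\]
so the slow and the backward modes indeed become degenerate and may couple.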
Note that the difference between the harmonic numbers of the coupled modes, $n_2 - n_1 = 13 = \mathrm{ceil}(2Q_\beta)$, is just above the doubled betatron tune; the same is true for any pair of coupled modes.

We beg pardon for this pedantic textbook explanation, but we feel obliged to make it, since in the Article the terms are confused and the harmonic numbers are given without signs, creating the false impression that modes of the neighboring harmonics, $6$ and $7$, can sometimes be coupled.

Let us now come back to Eqs.~(23). They are derived from Eq.~(22) by an ansatz that the collective oscillation $y(s,t)$ is a linear combination of two harmonics, $n_1$ and $n_2$. In that derivation, the dependence on $s$ was dropped without any explanation. For equations with constant coefficients, such as Eqs.~(23), this omission is a mathematical mistake, since the cross terms in Eqs.~(23) depend on the coordinate $s$ differently from the direct terms: $y_1 \propto \exp(i n_1 s/R)$, while $y_2 \propto \exp(i n_2 s/R)$, and $n_1 \neq n_2$. To provide the mode coupling, the cross terms can only be built by the harmonic $n_1 - n_2 = \pm\,\mathrm{ceil}(2Q_\beta)$ of the driving impedance weighted with the beta-function, $Z^\mathrm{driv}(\Omega,s)\,\beta(s)$. In the smooth approximation, where $Z^\mathrm{driv}(\Omega,s)\,\beta(s) = \mathrm{const}$, these cross terms are equal to zero. Thus, instead of Eqs.~(23) of the Article, the mode coupling problem should be described by the following equations:
\begin{equation}
\begin{split}
& \ddot{y}_1 + \omega_\beta^2 y_1 = -2\omega_\beta \Delta \Omega^\mathrm{tot}\, y_1 - 2\omega_\beta \Delta \Omega^\mathrm{driv}_{n_1 - n_2}\, y_2 \,;\\
& \ddot{y}_2 + \omega_\beta^2 y_2 = -2\omega_\beta \Delta \Omega^\mathrm{driv}_{n_2 - n_1}\, y_1 - 2\omega_\beta \Delta \Omega^\mathrm{tot}\, y_2 \,.
\end{split}
\label{TrueEq}
\end{equation}
Here $\Delta \Omega^\mathrm{tot} \propto i \int \dd s\, [Z^\mathrm{det}(0,s) + Z^\mathrm{driv}(\Omega,s)]\, \beta(s)$ is the conventional uncoupled coherent tune shift at the sought-for frequency $\Omega \approx \pm\,\omega_\beta + n\,\omega_0$, and the cross-coefficients can be expressed as
\begin{equation}
\Delta \Omega^\mathrm{driv}_{n_1 - n_2} = \Delta \Omega^\mathrm{driv}_0
\frac{\int \dd s\, Z^\mathrm{driv}(\Omega,s)\, \beta(s) \exp\bigl(i (n_1 - n_2)s/R\bigr)}{\int \dd s\, Z^\mathrm{driv}(\Omega,s)\, \beta(s)}\,,
\label{CrossCoeff}
\end{equation}
with $\Delta \Omega^\mathrm{driv}_0$ the contribution of the driving impedance to the conventional coherent tune shift, $\Delta \Omega^\mathrm{driv}_0 \propto i \int \dd s\, Z^\mathrm{driv}(\Omega,s)\, \beta(s)$. Let us stress again that, within the smooth approximation apparently presumed by the Article, but contrary to its Eqs.~(23), the cross-coefficients can only be zero, since the integral in the numerator of Eq.~(\ref{CrossCoeff}) vanishes for $n_1 \neq n_2$. It is also worth noting that the mode coupling described by Eq.~(\ref{TrueEq}) can hardly be of practical importance: the coupling may show itself only near the half-integer resonance, from which the tunes should be kept away anyway.
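To make the vanishing of the cross-coefficients explicit, recall the textbook orthogonality of azimuthal harmonics over the ring circumference,
\[
\int_0^{2\pi R} \exp\bigl(i (n_1 - n_2)\, s/R\bigr)\, \dd s = 2\pi R\, \delta_{n_1 n_2}\,,
\]
so that, for $Z^\mathrm{driv}(\Omega,s)\,\beta(s)=\mathrm{const}$, the numerator of Eq.~(\ref{CrossCoeff}) is proportional to this integral and vanishes whenever $n_1 \neq n_2$.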
The Authors claim that the proposed instability mechanism is conceptually analogous to the transverse mode coupling instability (TMCI) of bunched beams. Here they overlook an important difference between the azimuthal harmonics of a coasting beam and the synchrotron harmonics of a bunch.
The former are exact eigenfunctions of the coasting beam for arbitrary impedance, provided that the smooth approximation is justified, which is apparently assumed in the Article. This fact is guaranteed by the translational invariance of the dynamic equations. Contrary to that, the synchrotron harmonics of a bunch can, at best, be only approximations to the eigenfunctions, whose accuracy deteriorates when the coherent tune shift becomes comparable with the synchrotron tune. For a bunch, the dynamic equations are not invariant under synchrotron phase shifts, and thus, strictly speaking, the synchrotron harmonics constitute the eigenfunctions only at zero wake field. That is why coupling of the azimuthal harmonics is forbidden for homogeneous coasting beams in the linear approximation, while coupling of the synchrotron harmonics of a bunch is possible. Note also that the eigenvectors of equidistant multiple bunches are the same harmonic exponents, for any impedance.

Now we come to the last point of this comment: the claimed excellent agreement between the theory of Eqs.~(23) and the pyHEADTAIL simulations for the PS presented in Fig.~6. Were this agreement really there, it could only mean that a mistake occurs in the code as well. However, we think that Fig.~6 demonstrates something different; rather, it demonstrates an illusion of agreement. To show this, let us first look at the upper plot of this figure, which represents the growth time $\tau$ [turns]. Here we clearly see agreement for all the points with $\tau \gg 1$, where the cross terms of Eqs.~(23) do not make any visible difference. For these points, we indeed see agreement between the textbooks and the pyHEADTAIL simulations, while nothing at all can be said about the innovative aspect of the Article. Apart from these conventional points, we also see in this plot a sequence of points and a line at growth time $\tau \approx 0$. These results of the code and of Eqs.~(23) might really demonstrate agreement or disagreement between the theory and the simulations, but for that we would have to see the data; instead, we see something indistinguishable from zero, or from infinity if one speaks in terms of growth rates. Thus, the data of the upper plot of Fig.~6 consist of two parts: one is irrelevant to the suggested model of mode coupling, and the other is obscured from any judgement about the model's validity by the way the data are presented.

As to the bottom plot of Fig.~6, we see there that the mode tunes are locked at the half-integer resonance. For the simulations, something like that is to be expected simply on the grounds of a sufficiently strong detuning quadrupole in a lattice with a nonzero harmonic $13$. For a perfectly smooth lattice, however, the half-integer tune $6.5$ would be as good as any other tune; thus, the result of the simulations must be sensitive to the lattice smoothness. Since Eqs.~(23) are fully insensitive to the phase advance per cell or to other smoothness parameters, the agreement between the pyHEADTAIL simulations and the theory in the bottom plot of Fig.~6 can only be accidental.

We would also like to note that, although the half-integer resonance is not present in the smooth approximation, it plays a significant role in any real machine.
Approaching this resonance results in a large variation of the beta-functions and, consequently, a fast increase of the effective impedance $\left< Z^\mathrm{driv}(\Omega,s)\,\beta(s) \right>$ and of its coupling-related harmonic mentioned above.

We hope that our disagreement with the key issues of the Article is clearly expressed, and we would appreciate a response from the Authors.

This manuscript has been authored by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the U.S. Department of Energy, Office of Science, Office of High Energy Physics.