diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzaedu" "b/data_all_eng_slimpj/shuffled/split2/finalzzaedu" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzaedu" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\\label{Introduction}\n\nIn the past years measurements of the absolute sky emission at\nfrequency $\\nu \\sim 1$ GHz have been carried out to evaluate the\nbrightness temperature ($T_{cmb}$) of the Cosmic Microwave\nBackground (CMB). Besides the non-trivial problem of assuring an\naccurate absolute calibration of the measured signal, we need to\nremember that the sky emission is a superposition of different\ncontributions. After subtracting the local emissions (mainly due\nto atmosphere inside the main beam, ground and radio frequency\ninterferences in the far side-lobes) the sky brightness\ntemperature ($T_{sky}$) can be written as:\n\n\\begin{equation}\nT_{sky}(\\nu,\\alpha,\\delta) = T_{cmb}(\\nu) + T_{gal}(\\nu,\\alpha,\n\\delta) + T_{UERS}(\\nu)\n\\end{equation}\n\n\\noindent where $T_{gal}$ is the emission of our galaxy and\n$T_{UERS}(\\nu)$ the temperature of the unresolved extragalactic\nradio sources (UERS). In the present paper we evaluate the UERS\nbrightness temperature and its frequency dependence.\n\nThis paper follows a series of others describing the measurements\nof the sky brightness temperature at frequencies close to 1 GHz\ngathered by the TRIS experiment with an angular resolution\n$FWHM_{TRIS} \\sim 20$ deg (\\cite[]{TRIS-I}, \\cite[]{TRIS-II},\n\\cite[]{TRIS-III}). The results obtained in the present paper were\nused to disentangle the components of the sky brightness and to\nevaluate the CMB temperature at the frequencies $\\nu =$ 600, 820\nand 2500 MHz (\\cite[]{TRIS-II}). The aim of this work is to\nprovide a new estimate of the integrated contribution of UERS to\nthe diffuse brightness of the sky. An accurate estimate of\n$T_{UERS}(\\nu)$ is necessary for the TRIS experiment, but also for\nall the experiments aimed at the study of the spectral distortions\nin the Rayleigh-Jeans tail of the CMB spectrum. Deviations from\nthe black-body distribution can be present at low frequency, but\nthe amplitude of the distortions at frequencies around 1 GHz is\nnowadays constrained by past experiments at the level of few tens\nof mK \\cite[]{Fixen_96}.\n\nExperiments like TRIS \\cite[]{TRIS-I} can reach a control of\nsystematics at the level of $\\sim$50 mK, a remarkable improvement\nif compared to previous measurements at the same frequencies. On\nthe other hand, relying on the current knowledge of both amplitude\nand spectrum of the UERS signal \\cite[]{Longair_66}, we can\nestimate that at 600, 820, 1400 and 2500 MHz (where CMB\nobservations have been carried out in the past) the extra-galactic\ncontribution is respectively $810\\pm 180$ mK, $340\\pm 80$ mK,\n$79\\pm 19$ mK and $16\\pm 4$ mK (see for example \\cite{Sironi_90}).\nUsing the current 178 MHz normalization \\cite[]{Longair_66}, for\nstate-of-the-art experiments, this means that the uncertainty\nassociated with the UERS at the lowest frequencies (which are the\nmost interesting when looking for CMB spectral distortions), is\npotentially higher than instrumental systematics. In this paper we\nshow that by exploiting all the data available in literature we\ncan significantly improve the present status of our knowledge\nabout the UERS contribution, and that TRIS-like experiments are\nessentially limited by the current technology. 
New and updated\nestimates of the brightness temperature of UERS will be useful\nalso for feasibility studies of future experiments in this field.\n\nThis paper is organized as follows: Section \\ref{Sources}\ndiscusses the general properties of the UERS and the data in\nliterature. In Section \\ref{Fit} we describe the procedure to fit\nthe available number counts; in Section \\ref{Brightness} we\ncalculate the UERS sky brightness and its frequency dependence.\nFinally in Section \\ref{Discussion} we discuss the implications of\nthe results obtained for astrophysics and cosmology.\n\n\n\\section{The extragalactic radio sources}\\label{Sources}\n\n\\subsection{The population of sources}\n\nThe unresolved extragalactic radio sources contribute as a blend\nof point sources to the measurements of diffuse emission,\nespecially with poor angular resolution. Actually UERS are an\ninhomogeneous collection of quasars, radio galaxies, and other\nobjects. These can be both compact and extended sources with\ndifferent local radio luminosity function, lifetimes and cosmic\nevolution. An extensive discussion can be found in\n\\cite{Longair_78}.\n\nUsually we can distinguish two populations of radio sources: {\\it\nsteep spectrum} sources if $\\alpha > 0.5$, and {\\it flat spectrum}\nsources if $\\alpha < 0.5$, where $\\alpha$ is the spectral index of\nthe source spectrum ($S(\\nu)\\propto \\nu^{-\\alpha}$). Compact radio\nsources, like quasars, have mostly a flat spectrum ($\\alpha \\simeq\n0$) and are most commonly detected at higher frequencies. On the\nother hand, extended sources like radio-galaxies, have a steep\nspectrum ($\\alpha \\simeq 0.7-0.8$) and dominate low frequency\ncounts (see \\cite[]{Peacock_81a} and \\cite[]{Longair_78}).\n\n{\\it Steep spectrum} sources and {\\it flat spectrum} sources\ncontribute in different ways to the number counts. {\\it Flat\nspectrum} sources are important only at high frequency ($\\nu\n\\gtrsim 2$ GHz). In the same range, {\\it flat spectrum} source\ncounts seem to be comparable to {\\it steep spectrum} source counts\nfor high fluxes, but at low fluxes {\\it steep spectrum} sources\nare still dominating, as shown for example by \\cite{Condon_84a}\nand \\cite{Kellermann_87} The total number counts have been\nsuccessfully fitted using a luminosity evolution model by\n\\cite{Peacock_81a} and \\cite{Danese_87}.\n\n\\subsection{Isotropy}\n\nThe large scale isotropy of the extragalactic radio sources has\nbeen studied by \\cite{Webster_77}. He analyzed several samples\nof sources measured in a cube of 1 Gpc-side, getting an upper\nlimit $\\Delta N \/ N < 3 \\%$ on the fluctuation of the source\nnumber. This limit is set by the finite statistics of the sources\ncontained in the survey: $N \\sim 10^{4}$.\n\n\\cite{Franceschini_89} have evaluated the fluctuation of the\nsource counts assuming a poissonian distribution: at $\\nu = 5$ GHz\nthey found fluctuations of the antenna temperature $\\Delta T_A \/\nT_A < 10^{-4}$ over an angular scale $\\theta \\sim 5$ deg. This\nfluctuation rapidly decreases at larger scales. 
At lower\nfrequencies the fluctuations increase, but we have at most $\\Delta\nT_A \/ T_A < 10^{-2}$ for $\\nu=408$ MHz and $\\Delta T_A \/ T_A <\n10^{-4}$ for $\\nu=2.5$ GHz at an angular scale $\\theta \\sim 20$\ndeg.\n\nDue to these considerations, the\ncontribution of UERS to the sky brightness at large angular scale\nis assumed isotropic in the following discussion.\nMoreover the radio sources are supposed\nto be randomly distributed and the fluctuation in the number\ncounts is assumed to be poissonian. A possible anisotropic\ncontribution of UERS is in most cases negligible and limited to\nthe region of the super-galactic plane (see \\cite[]{Shaver_89}).\n\n\\subsection{The data set}\n\nMany source-number vs flux distributions have been produced in the\npast years: radio source counts at low frequency have been\npublished even since the Sixties, while deep surveys at higher\nfrequencies have been performed recently. Compilation of source\ncounts can be found in several papers (see for example\n\\cite[]{Condon_84a}, \\cite[]{Franceschini_89},\n\\cite[]{Toffolatti_98}, \\cite[]{Windhorst_93}). Most of the data\nwe used can be extracted from the references included in these\npapers.\n\nWe used the counts distributions at the frequencies between 150\nand 8000 MHz, spanning a frequency range larger than the one\ncovered by the TRIS experiment. In literature we found data at\neight frequencies: $\\nu=$ 151 MHz, 178 MHz, 408 MHz, 610 MHz, 1.4\nGHz, 2.7 GHz, 5.0 GHz, 8.44 GHz. The complete list of the papers\nwe examined is reported in Table \\ref{tab1}. Some of those\nmeasurements were performed at a frequency slightly different from\nthe nominal one. In this case, the original measurements were\nscaled to the nominal frequency by assuming a dependence of the\nsource flux $S(\\nu) \\sim \\nu^{-0.7}$. Usually the correction is\nnegligible. The number counts extracted from Table \\ref{tab1} is\nshown in Figure \\ref{fig1}.\n\n\n\\section{Number counts distribution fit}\\label{Fit}\n\nThe fit was performed on the differential number counts normalized\nto the Euclidean distribution of sources $E(S) = S^{5\/2}(dN\/dS)$,\nusing the following analytical expression:\n\n\\begin{equation}\nQ(S) = Q_1(S) + Q_2(S) = \\frac{1}{A_1 S^{\\varepsilon_1} + B_1\nS^{\\beta_1}} + \\frac{1}{A_2 S^{\\varepsilon_2} + B_2 S^{\\beta_2}}\n\\label{Fitequation}\n\\end{equation}\n\nThe best values of the fit parameters are summarized in Tables\n\\ref{tab2} and \\ref{tab3}. This analytical fit function is\nempirical. It reproduces the distribution of the experimental data\nwith some advantages: 1) it is a simple analytical function; 2) it\nallows to extend the extrapolation beyond the available data,\nbecause both the amplitude and slope of the tails are well\ndefined, both at low and high fluxes; 3) it is built as the sum of\ntwo \\textit{\"populations\"} of sources with different emission, but\nsimilar behavior and shape; 4) the fitting procedure is\nindependent on the source type and evolution but it is applied to\nthe source counts as currently observed. In addition, this\nanalytical form is simply a power law, with various indices at\ndifferent values of flux, exactly as the observations suggest.\n\n\\subsection{High flux distribution fit}\n\nTo get the fit we first considered the source distribution at high\nfluxes (i.e. we evaluated the parameters of the component\n$Q_1(S)$). For this high flux distribution we have data at all the\nconsidered frequencies. The best fit parameters are listed in\nTable \\ref{tab2}. 
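To make the use of Equation \\ref{Fitequation} concrete, a minimal sketch of the fitting function is given below; the parameter values are placeholders to be replaced by the best-fit values of Tables \\ref{tab2} and \\ref{tab3}, and the code only illustrates how the Euclidean-normalized counts and the differential counts $dN\/dS$ are evaluated.

\\begin{verbatim}
# Minimal sketch of the empirical fitting function Q(S) of Equation (2).
# Parameter values are placeholders; the best-fit values are those
# listed in Tables 2 and 3 of the paper.
def Q(S, A1, eps1, B1, beta1, A2, eps2, B2, beta2):
    """Euclidean-normalized counts Q(S) = S^(5/2) dN/dS."""
    Q1 = 1.0 / (A1 * S**eps1 + B1 * S**beta1)
    Q2 = 1.0 / (A2 * S**eps2 + B2 * S**beta2)
    return Q1 + Q2

def dNdS(S, **p):
    """Differential number counts dN/dS recovered from Q(S)."""
    return Q(S, **p) * S**(-2.5)
\\end{verbatim}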
We found that the values of the spectral indices\n$\\varepsilon_1$ and $\\beta_1$ obtained at all the frequencies are\nvery similar. We then decided to take a unique weighted average\nvalue for $\\varepsilon_1$ and $\\beta_1$. The parameter\n$\\varepsilon_1$ is particularly well constrained by the data at\n1400 MHz, where a large set of data with low scatter is available\n(see \\cite[]{White_97} and Figure \\ref{fig1}). Conversely\navailable data at 178 MHz do not span a flux range wide enough to\nconstrain the two slopes.\n\nSource counts at 2700 and 8440 MHz are the less accurate among the\nfull data set we used. At both frequencies there are sets of data\nnot completely overlapping and the statistics is poor. Therefore,\nto take into account these uncertainties, we assumed a uniform\ndistribution of the parameter $A_1$. At $\\nu=2700$ MHz we took\n$A_1 = (6 - 12) \\times 10^{-4}$, while at $\\nu=8440$ MHz we took\n$A_1 = (16 - 34) \\times 10^{-4}$ (see Table \\ref{tab2}). In this\nfitting procedure we used all the data collected from the papers\nlisted in the Table \\ref{tab1}. We excluded from the fit only the\nseven data points at lowest flux published by \\cite{White_97} at\n1400 MHz because these data present a roll-off, exactly in the\nregion where a changing of the slope is expected, but in the\nopposite direction. According to the authors, this roll-off is an\nindication of the incompleteness of the survey at the faint-flux\nlimit.\n\n\\subsection{Low flux distribution fit}\n\nWe extended the fitting procedure to the counts at low flux, in\norder to constrain the distribution $Q_2(S)$. The two parameters\n$A_2$ and $\\varepsilon_2$ fixed amplitude and slope of the\nlow-flux tail of the distribution, while $B_2$ and $\\beta_2$ are\nneeded to fit the distribution in the region showing a change in\nthe slope. Deep counts are available only at 0.61, 1.4, 5 and 8.44\nGHz, but at 8.44 GHz the number of experimental points and their\nscatter do not allow to fit any parameter. In addition we\nconsidered the model of the low-flux tail published by\n\\cite{Franceschini_89} at 1.4 and 5 GHz. This source evolution\nmodel fits the data also after the addition of the most recent\nmeasurements both at 1.4 and 5 GHz. The low-flux tail of the model\ncompared with the experimental data and our fit $Q(S)$ are shown\nin Figure \\ref{fig2}. This evolution model is able to predict\naccurately the slope of the low-flux tail and we used it to get\nthe value of $\\varepsilon_2$, which is independent on the\nfrequency \\cite[]{Franceschini_07}. The values of $\\varepsilon_2$,\nobtained at 1.4 and 5 GHz, are fully compatible with the average\nvalue of $\\varepsilon_1$ previously evaluated. Combining the\nestimates at 1.4 and 5 GHz we get $\\varepsilon_2 = -0.856 \\pm\n0.021$.\n\nSince the model by \\cite{Franceschini_89} is not able to fix the\namplitude of the number counts, i.e. the parameter $A_2$, we\nevaluated this parameter by using the experimental data at 1.4 and\n5 GHz. We get: $A_2\/A_1 = 0.24 \\pm 0.02$ at 1.4 GHz; $A_2\/A_1 =\n0.30 \\pm 0.04$ at 5 GHz. The ratio $A_2\/A_1$ is almost independent\non the frequency. 
The small change is due to the different\ncontribution of the \\textit{flat-spectrum} sources in the total\ncounts at the two frequencies in the low flux region (below\n$10^{-5}$ Jy) and in the high flux region (10 mJy - 1 Jy).\n\nUsing the model of \\cite{Franceschini_89} and \\cite{Toffolatti_98}\nat 1.4, 5 and 8.4 GHz, we extrapolated the value of the ratio\n$A_2\/A_1$ also to the other frequencies. In order to evaluate\n$A_2\/A_1$ at lower frequencies we estimated the variation of\n$A_2\/A_1$ at 1.4 GHz excluding from the counts the\n\\textit{flat-spectrum} sources. This is the extreme condition,\nwhich holds at very low frequencies. In fact the other sources\ncontributing to the number counts do not change significantly with\nfrequency. We obtain in this situation that $0.23 \\leq A_2\/A_1\n\\leq 0.24$. The same result was obtained starting from data and\nmodel at 5 GHz. Therefore for the frequencies 151, 178, 408 and\n610 MHz we take the value obtained at 1.4 GHz, but associating a\nlarger error bar, due to the uncertainty in the extrapolation\nprocedure: $A_2\/A_1 = 0.24 \\pm 0.04$.\n\nWe then evaluated the contribution of the {\\it flat spectrum}\nsources in the total counts at 8.44 GHz (see \\cite{Toffolatti_98})\nin comparison with the counts at 5 GHz. In this way we estimate at\n8.44 GHz the value $A_2\/A_1 = 0.31 \\pm 0.04$. At 2.7 GHz we took a\nvalue constrained by the results obtained at 1.4 and 5 GHz:\n$A_2\/A_1 = 0.24 - 0.30$.\n\nWe finally estimated the $B_2$ and $\\beta_2$. They are important\njust to define the shape of the distribution in the region showing\na change in the slope. At 1.4 GHz data are accurate enough to\nconstrain both parameters, but $\\beta_2$ can not be constrained at\nthe other frequencies. Since the accuracy of these two parameters\nis not important for the calculation of the integrated brightness\ntemperature, we assumed for them the average value for all the\nfrequencies.\n\nThe summary of the best values of all the parameters of $Q_2(S)$\nis shown in Table \\ref{tab3}. The number counts and the function\nwhich fit them are shown in Figure \\ref{fig1}. In conclusion we\ncan note that: 1) $A_1$ and $B_1$, the two frequency dependent\nparameters of the fit, take different values at each frequency.\nThe same is true for the ratio $A_2\/A_1$. 2) The power law indices\n($\\varepsilon_1$, $\\beta_1$, $\\varepsilon_2$ and $\\beta_2$) are\nfrequency independent and we take a common value at all the\nfrequencies.\n\n\n\\section{The UERS contribution to the sky diffuse emission}\\label{Brightness}\n\n\\subsection{Evaluation of the diffuse emission}\n\nThe contribution of the UERS ($B_{UERS}(\\nu)$) to the sky\nbrightness is evaluated by integrating the function $S(dN\/dS)$\nfrom the largest flux ($S_{max}$) of the measured sources down to\nthe lowest fluxes ($S_{min}$) corresponding to the faintest\nsources:\n\n\\begin{equation}\nB_{UERS}(\\nu) = \\int^{S_{max}}_{S_{min}} \\frac{dN}{dS}(\\nu) \\cdot\nS \\ dS\n\\end{equation}\n\nThe brightness temperature $T_{UERS}(\\nu)$ is by definition:\n\n\\begin{equation}\nT_{UERS}(\\nu) = B_{UERS}(\\nu) \\frac{\\lambda^2}{2 \\ k_B},\n\\end{equation}\n\n\\noindent $k_B$ being the Boltzmann constant. The values of\n$T_{UERS}$ at the eight frequencies we considered are\nsummarized in Table \\ref{tab4}.\n\nFrom the observations we have $S_{max} \\sim 10^2$ Jy (as measured\nat 151, 408 and 1400 MHz) and $S_{min} \\sim 10^{-6}$ Jy (as\nmeasured in the deepest counts at 5 GHz). 
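A minimal numerical sketch of this integration is given below. It assumes a callable returning the differential counts $dN\/dS$ in units of Jy$^{-1}$ sr$^{-1}$ (for instance the one sketched in Section \\ref{Fit}) and uses a trapezoidal quadrature on a logarithmic flux grid; any other quadrature rule would serve equally well.

\\begin{verbatim}
# Sketch of the integration of S * dN/dS between S_min and S_max and of
# the conversion to a Rayleigh-Jeans brightness temperature
# T = B * lambda^2 / (2 k_B).  dnds is a callable returning dN/dS in
# Jy^-1 sr^-1; the trapezoidal rule on a log-spaced grid is one
# possible quadrature choice.
import numpy as np

K_B = 1.380649e-23      # J / K
JY  = 1.0e-26           # W m^-2 Hz^-1
C   = 299792458.0       # m / s

def T_uers(nu_Hz, dnds, S_min=1.0e-12, S_max=1.0e2, npts=2000):
    S = np.logspace(np.log10(S_min), np.log10(S_max), npts)   # Jy
    B = np.trapz(S * dnds(S), S) * JY        # W m^-2 Hz^-1 sr^-1
    lam = C / nu_Hz                          # m
    return B * lam**2 / (2.0 * K_B)          # K
\\end{verbatim}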
While sources at higher\nfluxes if present in the surveyed sky region can be easily\nmeasured, the limit at low flux is set by the confusion limit or\nby the observation completeness. In other words there is no sharp\nlimit at low flux in the population of the sources. We extended\nthe integration down to very faint limits, several orders of\nmagnitude below the faintest detected sources ($S_{min} \\sim\n10^{-6}$ Jy). When the integration is extended down to $S_{min}\n\\sim 10^{-12}$ Jy \\ the brightness increase by $3-4$ \\% and then\nthe value converges. This increment is comparable with the total\nuncertainty we get on the value of the brightness, as shown in\nFigure \\ref{fig3}.\n\nWe extended the integration also to higher values of the flux, in\norder to test also this integration limit. Increasing $S_{max}$\nwell beyond the flux of the strongest sources observed, the\nintegral change by less than $0.5\\%$ and quickly converges. This\nis a consequence of the very low statistics of sources at the\nhighest fluxes. This is a confirmation that the large scale\nbrightness is actually not sensitive to the upper limit of\nintegration.\n\n\\subsection{Evaluation of the uncertainty}\n\nThe error budget takes into account both the fit uncertainties and\nthe number counts fluctuations over the observed sky region. The\nfit uncertainties were evaluated by means of Monte Carlo\nsimulations. For each parameter we considered a gaussian\ndistribution with standard deviation as reported in Tables\n\\ref{tab2} and \\ref{tab3}. Only for the values of $A_1$ at 2.7 and\n8.44 GHz and for $A_2\/A_1$ at 2.7 GHz we assumed a uniform\ndistribution inside the interval reported in Tables \\ref{tab2} and\n\\ref{tab3}. The error bars of the parameters of the fit (in\nparticular $A_2\/A_1$ and $\\varepsilon_2$) include the uncertainty\non the extrapolation at the lowest fluxes for the various\nfrequencies. We underline that, as shown in Figure \\ref{fig3}, the\ncontribution to the brightness temperature of the low-flux tail\n(below $\\sim 10^{-6}$ Jy) is lower than the overall uncertainty\nreported in Table \\ref{tab4}.\n\nThe statistical fluctuations of the sources' number counts have no\neffect in a large part of the distribution because the number of\nsources is quite large. We concentrated on the effect of the\nfluctuation of the few sources with the highest flux. We\nconsidered these sources randomly distributed and therefore their\nfluctuation is Poissonian. We evaluated the fluctuation of the\nbrightness in a patch of the sky corresponding to the beam of\nTRIS: $\\Omega_{TRIS} \\sim 0.1$ sr. The upper limit of the\ncontribution to the temperature uncertainty due to the fluctuation\nin the number of sources is directly measured by the maximum of\nthe function\n\n\\begin{equation}\\label{}\nC(S_{min})=\\frac{\\lambda^2}{2k_B\\Omega_{TRIS}}\n\\frac{\\int_{S_{min}}^{100Jy}\\frac{dN}{dS}SdS}\n{\\int_{S_{min}}^{100Jy}\\frac{dN}{dS}dS}\n\\end{equation}\n\n\\noindent plotted in Figure \\ref{fig4} for the specific case of\nthe 1400 MHz data. For all the frequencies this maximum falls\naround $\\sim 5$ Jy, and its value is from 2 to 6 times smaller\nthan the corresponding values reported in Table \\ref{tab4}.\nTherefore for every frequency, and over the full flux range, the\nerror of the brightness temperature is dominated by the\nstatistical uncertainties of the fit parameters.\n\nThe relative error bar of the brightness temperature is $6-7\\%$ at\n151, 408, 610 and 1400 MHz increasing up to $9\\%$ at 5000 MHz. 
At\n178 MHz the available measurements are few and old and the\nobtained error bar is $13\\%$. At 2700 and 8440 MHz both the\nquantity and the quality of the data do not allow accurate\nestimates of the parameters of the fit and we get an uncertainty\nof $25-30\\%$.\n\n\\subsection{Frequency dependence}\n\nThe integrated contribution of the UERS to the brightness of the\nsky at the various frequencies is shown in Figure \\ref{fig5}. The\ndistribution can be fitted by a power law:\n\n\\begin{equation}\nT_{UERS}(\\nu) = T_0 \\Bigl( \\frac{\\nu}{\\nu_0}\\Bigr)^{\\gamma_0}\n\\end{equation}\n\n\\noindent Setting $\\nu_0 = 610$ MHz (chosen because it is\nclose to one of the channels of the TRIS experiment), we obtain\nthe best fit of $T_0$ and $\\gamma_0$ shown in Table \\ref{tab5}\n(\\textit{FIT1}). As shown in Figure \\ref{fig5}, in spite of the\nlarge error bars at 2700 and 8440 MHz, scatter of the data points is\nlimited. The fit of a single power law, done excluding the data with\nthe largest uncertainty (at $\\nu=178$, 2700 and 8440 MHz), gives\nthe values of $T_0$ and $\\gamma_0$ shown in Table \\ref{tab5}\n(\\textit{FIT2}). Now the scatter of the experimental data is much\nsmaller, as shown by the value of the reduced $\\chi^2$.\n\nIn both cases the results obtained are fully compatible with the\nslope of the {\\it steep spectrum} sources, which are therefore the\nmain contributors to the source counts. However it is interesting\nto check the contribution of the {\\it flat spectrum} sources\nespecially at high frequencies. We assumed a {\\it flat spectrum}\ncomponent with fixed slope $\\gamma_1 = -2.00$ and a dominant {\\it\nsteep spectrum} component with slope $\\gamma_0 = -2.70$:\n\n\\begin{equation}\nT_{UERS}(\\nu) = T_0 \\Bigl( \\frac{\\nu}{\\nu_0}\\Bigr)^{\\gamma_0} +\nT_1 \\Bigl( \\frac{\\nu}{\\nu_0}\\Bigr)^{\\gamma_1}\n\\end{equation}\n\n\\noindent The results of the fit of $T_0$ and $T_1$ are shown in Table\n\\ref{tab5} (\\textit{FIT3}). Doing so we can get an estimate of\nthe contribution of the {\\it flat-spectrum} component in the\nnumber counts at the various frequencies. The value of\n$T_{UERS}(\\nu) (\\nu \/\\nu_0)^{2.70}$ \\ at the frequencies analyzed\nis shown in Figure \\ref{fig6}, together with the power law best\nfits. Figure \\ref{fig6} shows that the two last fits\n(\\textit{FIT2} and \\textit{FIT3}) are equivalent within the error\nbars.\n\n\\section{Discussion}\\label{Discussion}\n\n\\subsection{Analytical form of the fit function}\n\nAs discussed in Section \\ref{Fit}, the fit function $Q(S)$ is a\npower law distribution taking different slopes and amplitudes in\nthree different regions of the sources' flux. We prefer to deal\nwith a power law (like $Q(S)$) instead of a polynomial fit, as\nperformed by \\cite{Katgert_88} and \\cite{Hopkins_03}. A polynomial\nfit can be better adapted to the experimental points, but its\nvalidity range is restricted to the region covered by experimental\npoints. Therefore it is not possible to use a polynomial fit to\nextrapolate the distribution outside this range. Conversely our\nfit function can be extrapolated because amplitude and slope of\nthe tails are well defined and take into account the shapes\nexpected from the counts evolution models\n\\cite[]{Franceschini_89}.\n\nAccording to \\cite{Longair_78} we should expect a broadened\nmaximum in the distribution of the source counts with increasing\nfrequency. 
This indication seems to be confirmed looking at the\ndifferential counts shown in (\\cite[]{Kellermann_87} and\n\\cite[]{Condon_84a}). In spite of these expectations we have\nobtained a good fit of the counts at the various frequencies,\nusing the same function with the same slopes, as shown Figure\n\\ref{fig1}. Part of this effect is probably hidden by the larger\nscatter of the high frequency differential counts. In any case the\nbroadening of the maximum could be marginally observed above 1400\nMHz and the effect is not relevant for the calculation of the\ncontribution to the sky brightness.\n\n\\subsection{Frequency dependence}\n\nThe spectral dependence of the brightness temperature $T_{UERS}$\nfollows the expectations at low frequency where the number counts\nare dominated by the sources with steep spectrum: $\\alpha \\sim\n0.7$. At high frequency the situation is more complex. We could\nexpect a flattening, because at these frequencies the number\ncounts of {\\it flat-spectrum} sources begin to be important. It\nwill probably appear at frequencies higher than 1400 MHz.\nUnfortunately the available data are not accurate enough to\nconstrain the fit parameters (in particular $A_1$) at 2700 and\n8440 MHz. The values obtained at these two frequencies, lower than\nthe values expected looking at the other frequencies, could also\nbe an indication of incompleteness of the surveys (see Figure\n\\ref{fig6}). Conversely at 5000 MHz data are better and numerous.\nWe obtain a value for $T_{UERS}$ fully consistent with the slope\nof the data at low frequency, where {\\it steep-spectrum} sources\nare dominating, even if there is a marginal indication of a\nspectral flattening (see Figure \\ref{fig6}). In fact when we fit\nthe data adding the contribution of the {\\it flat-spectrum}\nsources we get an estimate of this contribution which is $T_1\/T_0\n\\simeq 2\\%$ at $\\nu=610$ MHz and $T_1\/T_0 \\simeq 9\\%$ at $\\nu=5$\nGHz (see \\textit{FIT3} in the Table \\ref{tab5}).\n\n\n\\subsection{Previous estimates of $T_{UERS}$}\n\nThe values of brightness temperature shown in Table \\ref{tab4}\nhave error bars of roughly 7\\% at most frequencies. The\nuncertainty is a bit larger at 178 MHz and even worse at 2700 and\n8440 MHz, because of the quality of the number counts data. So far\nvery few estimates of $T_{UERS}$ have been published (see\n\\cite[]{Longair_66}, \\cite[]{Wall_90} and \\cite[]{Burigana_04}),\nand the frequencies covered were limited and sometimes the\nuncertainty was not quoted. Our results are in agreement with the\nvalues previously estimated by \\cite{Longair_66}, $T_{UERS}(178 \\\nMHz)=23\\pm5$ K, and by \\cite{Wall_90}, $T_{UERS}(408 \\\nMHz)\\simeq2.6$ K, $T_{UERS}(1.4 \\ GHz)\\simeq0.09$ K, and\n$T_{UERS}(2.5 \\ GHz)\\simeq0.02$ K, but our error bars are\ndefinitely smaller. The accuracy of the estimated values of\n$T_{UERS}$ is particularly important if this contribution is to be\nsubtracted to calculate the value of the CMB temperature at low\nfrequency. Table \\ref{tab4} suggests that the value of $T_{UERS}$\nneeds to be accurately evaluated up to a frequency of several GHz,\nbecause its value is not negligible.\n\n\\section{Conclusions}\n\nWe used the source number - flux measurements in literature to\nevaluate the contribution of the Unresolved Extragalactic Radio\nSources to the diffuse brightness of the sky. 
We analyzed the\ncount distributions at eight frequencies between 150 and 8000 MHz,\nspanning over the frequency range partially covered by the TRIS\nexperiment (see \\cite[]{TRIS-I}): $\\nu=$ 151 MHz, 178 MHz, 408\nMHz, 610 MHz, 1.4 GHz, 2.7 GHz, 5.0 GHz, 8.44 GHz.\n\nWe optimized the fitting function of the experimental number\ncounts distribution. The differential number counts ($dN\/dS$) at\nthe various frequencies are well described by a multi power law\nempirical distribution $Q(S)$ (see Equation \\ref{Fitequation}).\nThe amplitudes ($A_1$ and $B_1$) are frequency dependent\nparameters of the fit and have different values at each frequency.\nConversely the power law indices ($\\varepsilon_1$, $\\beta_1$,\n$\\varepsilon_2$ and $\\beta_2$) have a common value at all the\nfrequencies.\n\nThe contribution of the UERS to the sky brightness was\nevaluated by integrating the function $S(dN\/dS)$ from the largest\nflux ($S_{max} = 10^{2}$) of the measured sources down to the\nlowest fluxes ($S_{min} = 10^{-12}$) corresponding to the expected\nfaintest sources. We got the brightness temperature with a\nrelative error bar of $\\delta T_{UERS}\/T_{UERS} \\simeq 6-7\\%$ at\n$\\nu =$ 151, 408, 610 and 1400 MHz, $\\delta T_{UERS}\/T_{UERS}\n\\simeq 9\\%$ at $\\nu =$ 5000 MHz, $\\delta T_{UERS}\/T_{UERS} \\simeq\n13\\%$ at $\\nu =$ 178 MHz and $\\delta T_{UERS}\/T_{UERS} \\simeq\n25-30\\%$ at $\\nu =$ 2700 and 8440 MHz.\n\nWe finally evaluated the spectral dependence of the point source\nintegrated brightness. As expected this dependence can be\ndescribed using a power law with a spectral index $\\gamma_0 \\simeq\n-2.7$, in agreement with the frequency dependence of the flux\nemitted by the {\\it steep-spectrum} sources. We have also tested\nthe contribution of the {\\it flat-spectrum} sources, adding a\nsecond component with the slope $\\gamma_1 = -2.0$. The\ncontribution of these sources starts to be relevant only at\nfrequencies above several GHz. In fact we estimated a contribution\nby {\\it flat-spectrum} sources $\\sim 2 \\%$ at 610 MHz and $\\sim 9\n\\%$ at 5 GHz.\n\nThe above results were used to evaluate the CMB temperature\nat frequencies close to 1 GHz from absolute measurements of the\nsky temperature made by our group (see \\cite[]{TRIS-I},\n\\cite[]{TRIS-II}, \\cite[]{TRIS-III}).\n\n\\acknowledgments {\\bf Acknowledgements}: This work is part of the\nTRIS activity, which has been supported by MIUR (Italian Ministry\nof University and Research), CNR (Italian National Council of\nResearch) and the Universities of Milano and of Milano-Bicocca.\nThe authors acknowledge A. Franceschini for useful discussions and\nthe anonymous referee for helpfull comments to the draft.\n\n\\vfill \\eject\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nOptimization problems are wide-spread in several domains of science and engineering. \nThe usual goal is to minimize or maximize some pre-defined objective(s). \nMost of the real-world scenarios place certain restrictions \non the variables of the problem i.e. 
the variables need to satisfy \ncertain pre-defined constraints to realize an acceptable solution.\n \nThe most general form of a constrained optimization problem (with inequality constraints,\nequality constraints and variable bounds) can be written as a nonlinear programming\n(NLP) problem: \n\n{\\small\\begin{eqnarray}\nMinimize\t &f(\\vec{x})\t\t&\t\\nonumber\\\\\nSubject to\t &g_j(\\vec{x}) \\ge 0, & j= 1,...,J \\nonumber\\\\\n\t\t &h_k(\\vec{x}) = 0,\t& k= 1,...,K \\nonumber\\\\\n\t \t & x_i^{(L)} \\le x_i \\le x_i^{(U)}, & i=1,...,n. \\label{eq:NLPprob}\n\\end{eqnarray}}\n\nThe NLP problem defined above contains $n$ decision variables (i.e. $\\vec{x}$ is a vector of size $n$),\n$J$ greater-than-equal-to type inequality constraints (less-than-equal-to can be expressed in this form by \nmultiplying both sides by $-1$), and $K$ equality-type constraints.\nThe problem variables $x_is$ are bounded by the lower ($x_i^{(L)}$) and upper ($x_i^{(U)}$)\nlimits. When only the variable bounds are specified then the \nconstraint-handling strategies are often termed as the boundary-handling methods.\n\\footnote{For the rest of the paper, by \\textit{constraint-handling} we imply\ntackling all of the following: variable bounds, inequality constraints and equality constraints. \nAnd, by a \\textit{feasible solution} it is implied that the solution satisfies all the variable bounds, inequality constraints,\nand equality constraints. The main contribution of the paper is to propose an efficient constraint-handling method\nthat operates and generates only feasible solutions during optimization.}\n\nIn classical optimization, the task of constraint-handling has been addressed\nin a variety of ways: (i) \\textit{using penalty approach} developed by Fiacoo and McCormick \\cite{jensen2003operations},\nwhich degrades the function value\nin the regions outside the feasible domain, (ii) \\textit{using barrier methods} which operate in a similar fashion \nbut strongly degrade the function values as the solution approaches a constraint boundary from \ninside the feasible space, (iii) \\textit{performing search in the feasible directions} using methods\nsuch gradient projection, reduced gradient and Zoutendijk's approach \\cite{zoutendyk1960methods}\n(iv) \\textit{using the augmented Lagrangian formulation} of the problem, as commonly \ndone in linear programming and sequential quadratic programming (SQP).\nFor a detailed account on these methods along with their implementation and \nconvergence characteristics the reader is referred to \\cite{rekl,debOptiBOOK,MichalewiczConstraint}.\nThe classical optimization methods reliably and effectively solve convex constrained optimization problems while ensuring convergence\nand therefore widely used in such scenarios. However, same is not true in the presence of non-convexity.\nThe goal of this paper is to address the issue of constraint-handling for evolutionary algorithms in real-parameter optimization, \nwithout any limitations to convexity or a special form of constraints or objective functions. \n\nIn context to the evolutionary algorithms the constraint-handling has been addressed\nby a variety of methods; including borrowing of the ideas from the classical techniques. These include\n(i) \\textit{use of penalty functions} to degrade the fitness\nvalues of infeasible solutions such that the degraded solutions are given less emphasis during the evolutionary search. 
A\ncommon challenge in employing such penalty methods arises from\nchoosing an appropriate penalty parameter ($R$) that strikes the right balance between\nthe objective function value, the amount of constraint violation and the associated penalty. \nUsually, in EA studies, a trial-and-error method is employed to estimate $R$.\nA study \\cite{debpenalty} in 2000 suggested a parameter-less\napproach of implementing the penalty function concepts for population-based optimization method. \nA recent bi-objective method\n\\cite{deb-dutta} was reported to find the appropriate $R$ values adaptively during the optimization process.\nOther studies \\cite{wang2012dynamic,wang2012combining} have employed the concepts of multi-objective\noptimization by simultaneously considering the minimization of the constraint violation and optimization of the\nobjective function, (ii) \\textit{use of feasibility preserving operators},\nfor example, in \\cite{michalewicz1996genocop} specialized operators in the presence of linear constraints \nwere proposed to create new and feasible-only individuals from the feasible parents. In another example, \ngeneration of feasible child solutions within the variable bounds was achieved \nthrough Simulated Binary Crossover (SBX) \\cite{debSBX} and polynomial mutation\noperators \\cite{debBookMO}. The explicit feasibility of child solutions was ensured by redistributing the probability distribution \nfunction in such a manner that the infeasible regions were assigned a zero probability \nfor child-creation \\cite{debpenalty}. Although explicit creation of feasible-only solutions during an EA search is an\nattractive proposition, but it may not be possible always since generic crossover or mutation operators\nor other standard EAs do not gaurantee creation of feasible-only solutions, (iii) \\textit{deployment of repair strategies}\nthat bring an infeasible solution back into the feasible domain.\nRecent studies \\cite{PadhyeBIC2012,Helwid-Constraing-handling-2013,amir,chu} investigated the \nissue of constraint-handling through repair techniques in context to PSO and DE, and \nshowed that the repair mechanisms can introduce a bias in the search and \nhinder exploration. Several repair methods \nproposed in context PSO \\cite{padhyeCEC2009,Sabina,JonathanPareto} \nexploit the information about location of the optimum and fail to perform when the location of \noptimum changes \\cite{padhye2010}. These issues are universal and\noften encountered with all EAs (as shown in later sections).\nFurthermore, the choice of the evolutionary optimizer, the constraint-handling strategy, and \nthe location of the optima with respect to the search space, all play an important\nrole in the optimization task. To this end, authors have realized a need\nfor a reliable and effective repair-strategy that explicitly preserves feasibility.\nAn ideal evolutionary optimizer (evolutionary algorithm\nand its constrained-handling strategy) should be robust in terms of finding the \noptimum, irrespective of the location of the optimal location in the search space. \nIn rest of the paper, the term constraint-handling strategy refers to explicit feasibility\npreserving repair techniques. \n\nFirst we review the existing constraint-handling strategies \nand then propose two new constraint-handling schemes, namely, Inverse Parabolic Methods (IPMs). 
\nSeveral existing and newly proposed constrained-handling strategies are first tested on a class \nof benchmark unimodal problems with variable bound constraints.\nStudying the performance of constraint-handling strategies on problems with variable bounds\nallows us to gain better understanding into the operating principles in a simplistic manner. \nParticle Swarm Optimization, Differential Evolution and real-coded Genetic Algorithms are chosen \nas evolutionary optimizers to study the performance of different constraint-handling strategies. \nBy choosing different evolutionary optimizers, better understanding on the functioning\nof constraint-handlers embedded in the evolutionary frame-work can be gained. \nBoth, the search algorithm and constraint-handling strategy must operate efficiently and synergistically in\norder to successfully carry out the optimization task. It is shown that the constraint-handling\nmethods possessing inherent pre-disposition; in terms of bringing infeasible solutions back into the\nspecific regions of the feasible domain, perform poorly. Deterministic constraint-handling strategies such as \nthose setting the solutions on the constraint boundaries result in the loss of population diversity. \nOn the other hand, random methods of bringing the solutions back into the search space\narbitrarily; lead to complete loss of all useful information carried by the solutions. \nA balanced approach that utilizes the useful information from the solutions\nand brings them back into the search space in a meaningful way is desired. The newly proposed IPMs are motivated\nby these considerations.\nThe stochastic and adaptive components of IPMs (utilizing the information of the solution's\nfeasible and infeasible locations), and a user-defined parameter ($\\alpha$) render\nthem quite effective. \n \nThe rest of the paper is organized as follows:\nSection~\\ref{sec:feasibility-preserving-existing} reviews existing constraint-handling techniques\ncommonly employed for problems with variable bounds.\nSection~\\ref{sec:IP} provides a detailed description on two newly proposed IPMs.\nSection~\\ref{sec:ResultsDiscussion} provides a description on the benchmark test problems and \nseveral simulations performed on PSO, GAs and DE with different constraint-handling techniques. \nSection~\\ref{sec:scale-up} considers \noptimization problems with larger number of variables. \nSection~\\ref{sec:Constraint-Programming} shows the extension and applicability of proposed\nIPMs for generic constrained problems.\nFinally, conclusions and scope for future work are discussed in Section~\\ref{sec:Conclusion}.\n \n\n\n\n\n\\section{Feasibility Preserving Constraint-Handling Approaches for Optimization Problems with Variable Bounds}\n\\label{sec:feasibility-preserving-existing}\nSeveral constraint-handling strategies have been proposed to bring solutions back into the feasible region\nwhen constraints manifest as variable bounds. Some of these strategies can also be extended in \npresence of general constraints. An exhaustive recollection and \ncomparison of all the constraint-handling techniques is beyond the scope of this study. \nRather, we focus our discussions on the popular and representative constraint-handling techniques. \n\nThe existing constraint-handling methods for problems with variable bounds can be broadly categorized into two groups: \nGroup~$A$ techniques that perform feasibility check variable wise, and Group~$B$ techniques that perform feasibility\ncheck vector-wise. 
According to Group~$A$ techniques, for every solution, each variable is tested for its feasibility \nwith respect to its supplied bounds and made feasible if the corresponding bound is violated.\nHere, only the variables violating their corresponding bounds are altered, independently, and\nother variables are kept unchanged. \nAccording to Group~$B$ techniques, if a solution (represented as a vector) \nis found to violate any of the variable bounds, it is brought back into\nthe search space along a vector direction into the feasible space. In\nsuch cases, the variables that explicitly do not violate their own\nbounds may also get modified.\n \nIt is speculated that for variable-wise separable problems, that is,\nproblems where variables are not linked to one another, \ntechniques belonging to Group~$A$ are likely to perform well. However, for the problems \nwith high correlation amongst the variables (usually referred to as {\\em linked}-problems), Group~$B$ techniques are likely to be more useful. \nNext, we provide description of these constraint-handling methods in detail \\footnote{The implementation of several \nstrategies as C codes can be obtained by emailing npdhye@gmail.com or pulkitm.iitk@gmail.com}.\n\n\\subsection{Random Approach}\n\n\\begin{figure}[hbt]\n\\begin{center}\n\\includegraphics[scale=.35]{Random.eps} \n\\end{center}\n\\caption{Variable-wise random approach for handling bounds.}\n\\label{fig:RandomBH}\n\\end{figure}\nThis is one of the simplest and commonly used approaches for handling boundary\nviolations in EAs \\cite{chu}. This approach belongs to Group~$A$. \nEach variable is checked for a boundary violation and if the variable bound is violated\nby the current position, say $x_i^c$, then $x_i^c$ is replaced with a randomly chosen value\n$y_i$ in the range $[x_i^{(L)}, x_i^{(U)}]$, as follows:\n\\begin{equation}\ny_i = \\mbox{random} [x_i^{(L)}, x_i^{(U)}].\n\\end{equation}\nFigure~\\ref{fig:RandomBH} illustrates this\napproach. Due to the random choice of the feasible location, this approach explicitly maintains\ndiversity in the EA population. \n\n\\subsection{Periodic Approach}\n\\begin{figure}[hbt]\n\\begin{center}\n\\includegraphics[scale=.35]{Periodic.eps} \n\\end{center}\n\\caption{Variable-wise periodic approach for handling bounds.}\n\\label{fig:PeriodicBH}\n\\end{figure}\nThis strategy assumes a periodic repetition of the objective function\nand constraints with a period $p=x_i^{(U)}-x_i^{(L)}$. This is carried out by\nmapping a violated variable $x_i^c$ in the range $[x_i^{(L)},\nx_i^{(U)}]$ to $y_i$, as follows:\n\\begin{equation}\ny_i = \\left\\{ \n\\begin{array}{ll}\n x_i^{(U)} - (x_i^{(L)}-x_i^c)\\%p, & \\quad \\text{if $x_i^cx_i^{(U)}$}, \\\\\n\\end{array} \n\\right.\n\\end{equation}\nIn the above equation, \\% refers to the modulo operator.\nFigure~\\ref{fig:PeriodicBH} describes the periodic approach. \nThe above operation brings back an infeasible solution in a structured\nmanner to the feasible region. \nIn contrast to the random method, the periodic approach is too methodical and it is unclear \nwhether such a repair mechanism is supportive of\npreserving any meaningful information of the solutions that have created the\ninfeasible solution. This approach belongs to Group~$A$. 
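For reference, a short sketch of these two Group~$A$ repair rules is given below; it follows the equations above, is applied variable-wise to a violated component, and is meant only as an illustration rather than the original implementation.

\\begin{verbatim}
# Illustrative sketch of the variable-wise Random and Periodic repair
# rules for a violated component x_c with bounds [x_L, x_U].
import random

def repair_random(x_c, x_L, x_U):
    # Random approach: resample uniformly inside the allowed range.
    if x_c < x_L or x_c > x_U:
        return random.uniform(x_L, x_U)
    return x_c

def repair_periodic(x_c, x_L, x_U):
    # Periodic approach: wrap around with period p = x_U - x_L.
    p = x_U - x_L
    if x_c < x_L:
        return x_U - (x_L - x_c) % p
    if x_c > x_U:
        return x_L + (x_c - x_U) % p
    return x_c
\\end{verbatim}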
\n\n\\subsection{SetOnBoundary Approach}\nAs the name suggests, according to this strategy a violated variable\nis reset on the \nbound of the variable which it violates.\n\\begin{equation}\ny_i = \\left\\{ \n\\begin{array}{ll}\n x_i^{(L)}, & \\quad \\text{if $x_i^cx_i^{(U)}$}.\\\\\n\\end{array} \\right.\n\\end{equation}\nClearly this approach forces all violated solutions to lie on the\nlower or on the upper boundaries, as the case may be. Intuitively, this approach will work\nwell on the problems when the optimum of the problem lies exactly on one of the variable\nboundaries. This approach belongs to Group~$A$.\n\n\\subsection{Exponentially Confined (Exp-C) Approach}\n\\begin{figure}[hbt]\n\\begin{minipage}{0.47\\linewidth}\n\\begin{center}\n\\includegraphics[scale=.35]{FieldSend.eps} \n\\end{center}\n\\caption{Variable-wise exponentially approach (Exp-C) for handling\n bounds.}\n\\label{fig:FieldSendDistBH}\n\\end{minipage}\\hfill\n\\begin{minipage}{0.47\\linewidth}\n\\begin{center}\n\\includegraphics[scale=.35]{expS.eps} \n\\end{center}\n\\caption{Variable-wise exponentially approach (Exp-S) for handling bounds.}\n\\label{fig:expS}\n\\end{minipage}\n\\end{figure}\nThis method was proposed in \\cite{JonathanPareto}. According to this \napproach, a particle is brought back inside the feasible search space variable-wise in the region between \nits old position and the violated bound. The new location is created\nin such a manner that higher sampling probabilities are assigned to the regions\nnear the violated boundary. The developers suggested the use of an\nexponential probability distribution, shown in\nFigure~\\ref{fig:FieldSendDistBH}. \nThe motivation of this approach is based on the hypothesis \nthat a newly created infeasible point violates a particular variable\nboundary because the optimum solution lies closer to that variable\nboundary. Thus, this method will probabilistically \ncreate more solutions closer to the boundaries, unless the optimum lies well\ninside the restricted search space. This approach belongs to Group~$A$.\n\nAssuming that the exponential distribution is $p(x_i) =\nA\\exp(|x_i-x_i^p|)$, the value of $A$ can be obtained by integrating\nthe probability from $x_i=x_i^p$ to $x_i=x_i^{(B)}$ (where $B=L$ or\n$U$, as the case may be). Thus, the probability distribution is given\nas $p(x) = \\exp(|x_i-x_i^p|)\/(\\exp(|x_i^{(B)} - x_i^p|)-1)$. For any\nrandom number $r$ within $[0,1]$, the feasible solution is calculated as follows:\n\\begin{equation}\ny_i= \\left\\{\\begin{array}{ll}\nx_i^p - \\ln (1+r(\\exp (x_i^p-x_i^{(L)})-1)) &\\mbox{if $x_ix_i^{(U)}$}.\n\\end{array}\\right.\n\\label{eq:exp}\n\\end{equation}\n\n\n\\subsection{Exponential Spread (Exp-S) Approach}\nThis is a variation of the above approach, in which, instead of \nconfining the probability to lie between $x_i^p$ and the violated\nboundary, the exponential probability is spread over the entire\nfeasible region, that is, the probability is distributed from lower\nboundary to the upper boundary with an increasing probability towards\nthe violated boundary. 
This requires replacing $x_i^p$ with\n$x_i^{(U)}$ (when the lower boundary is violated) or $x_i^{(L)}$ \n(when the upper boundary is violated) in the Equation~\\ref{eq:exp} as follows: \n\\begin{equation}\ny_i= \\left\\{\\begin{array}{ll}\nx_i^{(U)} - \\ln (1+r(\\exp(x_i^{(U)}-x_i^{(L)})-1)) &\\mbox{if $x_ix_i^{(U)}$}.\n\\end{array}\\right.\n\\end{equation}\nThe probability distribution is shown in Figure~\\ref{fig:expS}.\nThis approach also belongs to Group~$A$.\n\n\\subsection{Shrink Approach}\n\\begin{figure}[hbt]\n\\begin{center}\n\\includegraphics[scale=.35]{SHR.eps} \n\\end{center}\n\\caption{Vector based {SHR.} strategy for handling bounds.}\n\\label{fig:SHRBH}\n\\end{figure}\nThis is a vector-wise approach and belongs to Group~$B$ in which the violated solution is\nset on the intersection point of the line joining the parent point\n($\\vec{x}_{not}$), child point ($\\vec{x}^c)$, and the violated boundary. Mathematically,\nthe mapped vector $\\vec{y}$ is created as follows:\n\\begin{equation}\n\\vec{y} = \\vec{x}_{not} +\\beta (\\vec{x}^c-\\vec{x}_{not}), \n\\end{equation}\nwhere $\\beta$ is computed as the minimum of all positive values of intercept\n$(x_i^{(L)}-x_{i,not})\/(x_i^c-x_{i,not})$ for a violated boundary\n$x_i^{(L)}$ and $(x_i^{(U)}-x_{i,not})\/(x_i^c-x_{i,not})$ for a violated boundary\n$x_i^{(U)}$.\nThis operation is shown in Figure~\\ref{fig:SHRBH}. In the case shown,\n$\\beta$ needs to be\ncomputed for variable bound $x_2^{(U)}$ only. \\\\ \\\\\n\n\n\\section{Proposed Inverse Parabolic (IP) Constraint-Handling Methods}\\label{sec:IP}\n\\begin{figure}[hbt]\n\\begin{center}\n\\includegraphics[scale=.75]{ProposedDistDeb.eps} \n\\end{center}\n\\caption{Vector based {Inverse Parabolic Methods}.}\n\\label{fig:ProbDistBH}\n\\end{figure}\nThe exponential probability distribution function described in the previous section brings \nviolated solutions back into the allowed range variable-wise, but \nignores the distance of the violated solution $x_i^c$ with respect to\nthe violated boundary. The distance from the violated boundary \ncarries useful information for remapping the violated solution into\nthe feasible region. One way to utilize this distance information is\nto bring solutions back into the allowed range with a higher\nprobability closer to the boundary, when the \\textit{fallen-out}\ndistance ($d_v$, as shown in Figure~\\ref{fig:ProbDistBH}) is small. \nIn situations, when points are too far outside the allowable range,\nthat is, the fallen-out\ndistance $d_v$ is large, particles are brought back more uniformly\ninside the feasible range. Importantly, when the fallen-out distance\n$d_v$ is small (meaning that the violated child solution is close to the\nvariable boundary), the repaired point is also close to the violated\nboundary but in the feasible side. Therefore, the nature of the exponential\ndistribution should become more and more like a uniform distribution\nas the fallen-out distance $d_v$ becomes large.\n\nLet us consider Figure~\\ref{fig:ProbDistBH} which shows a\nviolated solution $\\vec{x}^c$ and its parent solution $\\vec{x}^p$. Let\n$d_p=\\|\\vec{x}^c-\\vec{x}^p\\|$ denote the distance between the violated solution\nand the parent solution. Let $\\vec{v}$ and $\\vec{u}$ be the intersection points\nof the line joining $\\vec{x}^c$ and $\\vec{x}^p$ with the violated\nboundary and the non-violated boundary, respectively. The\ncorresponding distances of these two points from $\\vec{x}^c$ are $d_v$\nand $d_u$, respectively. 
Clearly, the violated distance is $d_v =\n\\|\\vec{x}^c-\\vec{v}\\|$. We now define an inverse\nparabolic probability distribution function from $\\vec{x}^c$ along \nthe direction $(\\vec{x}^p-\\vec{x}^c)$ as:\n\\begin{equation}\np(d) = \\frac{A}{(d-d_v)^2+\\alpha^2d_v^2}, \\quad d_v \\leq d \\leq a,\n\\end{equation}\nwhere $a$ is the upper bound of $d$ allowed by the constraint-handling\nscheme (we define $a$ later) and $\\alpha$ is a pre-defined parameter. By calculating and\nequating the cumulative probability equal to one, we find:\n\\[A = \\frac{\\alpha d_v}{\\tan^{-1} \\frac{a-d_v}{\\alpha d_v}}.\\]\nThe probability is maximum at $d=d_v$ (at the violated boundary) and\nreduces as the solution enters the allowable range. Although this \ncharacteristic was also present in the exponential distribution, the\nabove probability distribution is also a function of violated distance\n$d_v$, which acts\nlike a variance to the probability distribution. If $d_v$ is\nsmall, then the variance of the distribution is small, thereby resulting in a\nlocalized effect of creating a mapped solution. \nFor a random number $r\\in [0,1]$, the distance of the mapped solution\nfrom $\\vec{x}^c$ in the allowable\nrange $[d_v,d_u]$ is given as follows:\n\\begin{equation}\nd' = d_v + \\alpha d_v \\tan \\left(r \\tan^{-1} \\frac{a-d_v}{\\alpha d_v}\\right).\n\\label{eq:s}\n\\end{equation}\nThe corresponding mapped solution is as follows:\n\\begin{equation}\n\\vec{y} = \\vec{x}^c + d' (\\vec{x}^p-\\vec{x}^c).\n\\label{eq:map}\n\\end{equation}\nNote that the IP method makes a vector-wise operation and is sensitive\nto the relative locations of the infeasible solution, the parent solution, and\nthe violated boundary. \n\nThe parameter $\\alpha$ has a direct {\\em external\\\/} effect of inducing small or large\nvariance to the above probability distribution. If $\\alpha$ is large,\nthe variance is large, thereby having uniform-like distribution. Later \nwe shall study the effect of the parameter $\\alpha$. A value $\\alpha$ $\\approx$ 1.2 is\nfound to work well in most of the problems and is recommended. \nNext, we describe two particular constraint-handling schemes employing this \nprobability distribution. \n\n\\subsection{Inverse Parabolic Confined (IP-C) Method}\nIn this approach, the probability distribution is confined\nbetween $d \\in [d_v,d_p]$, thereby making $a=d_p$. Here, a mapped\nsolution $\\vec{y}$ lies strictly between violated boundary location\n($\\vec{v}$) and the parent ($\\vec{x}^p$). \n\n\\subsection{Inverse Parabolic Spread (IP-S) Method} \nHere, the mapped solution is allowed to lie in the entire\nfeasible range between $\\vec{v}$ and\n$\\vec{u}$ along the vector $(\\vec{x}^p-\\vec{x}^c)$, but more emphasis is given on relocating the \nchild near the violated boundary. The solution can be found by using Equations~\\ref{eq:s} and\n\\ref{eq:map}, and by setting $a=d_u$. 
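A minimal sketch of the two IP variants is given below. It assumes that the crossing distances $d_v$ and $d_u$ along the line towards the parent have already been computed, and it places the repaired point at a distance $d'$ from $\\vec{x}^c$ along the unit vector towards $\\vec{x}^p$, which is our reading of Equation \\ref{eq:map}; the recommended value $\\alpha=1.2$ is used as default.

\\begin{verbatim}
# Sketch of the Inverse Parabolic repair (Equations 9-10).  d_v and d_u
# are the distances from the infeasible child x_c to the violated and to
# the opposite boundary along the direction of the parent x_p, assumed
# precomputed.  confined=True gives IP-C (a = d_p), otherwise IP-S
# (a = d_u).  The repaired point is placed at distance d_prime from x_c
# along the unit vector towards the parent.
import math, random

def ip_repair(x_c, x_p, d_v, d_u, alpha=1.2, confined=True):
    d_p = math.dist(x_c, x_p)                 # child-parent distance
    a = d_p if confined else d_u
    r = random.random()
    d_prime = d_v + alpha * d_v * math.tan(
        r * math.atan((a - d_v) / (alpha * d_v)))
    unit = [(p - c) / d_p for c, p in zip(x_c, x_p)]
    return [c + d_prime * u for c, u in zip(x_c, unit)]
\\end{verbatim}

As intended, a small violated distance $d_v$ keeps the repaired point close to the violated boundary, whereas a large $d_v$ spreads it more uniformly over the allowed range.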
\n\n\\section{Results and Discussions}\\label{sec:ResultsDiscussion}\nIn this study, first we choose four standard scalable unimodal test functions (in presence of variable bounds): \nEllipsoidal ($F_{\\rm elp}$), Schwefel ($F_{\\rm sch}$), Ackley ($F_{\\rm sch}$),\nand Rosenbrock ($F_{\\rm ros}$), described as follows:\n\\begin{eqnarray}\nF_{\\rm elp} &=& \\sum_{i=1}^n ix_i^2 \\\\\nF_{\\rm sch} &=& \\sum_{i=1}^n\\left(\\sum_{j=1}^i x_j\\right)^2 \\\\\nF_{\\rm ack} &=& -20 exp \\left(-0.2 \\sqrt{ \\frac{1}{n} \\sum_{i=1}^{i=n} x_i^2}\\right) -exp\\left(\\frac{1}{n} \\sum_{i=1}^{n}cos(2\\pi x_i)\\right) +20 + e\\\\\nF_{\\rm ros} &=& \\sum_{i=1}^{n-1} (100(x_i^2-x_{i+1})^2 + (x_i-1)^2) \n\\end{eqnarray}\n\nIn the unconstrained space, $F_{\\rm elp}$, $F_{\\rm sch}$ and $F_{\\rm ack}$ have a minimum\nat $x_i^{\\ast}=0$, whereas $F_{ros}$ has a minimum at $x_i^{\\ast}=1$.\nAll functions have minimum value $F^{\\ast}=0$. \n$F_{\\rm elp}$ is the only variable separable problem.\n$F_{\\rm ros}$ is a challenging test problem that has a ridge which poses\ndifficulty for several optimizers. \nIn all the cases the number of variables is chosen to be $n=20$. \n\nFor each test problem three different scenarios corresponding to the\nrelative location of the \noptimum with respect to the allowable search range are considered. This is done \nby selecting different variable bounds, as follows: \n \n\\begin{description}\n\\item[On the Boundary:] Optimum is exactly on one of the variable boundaries (for $F_{\\rm elp}$, $F_{\\rm sch}$ and $F_{\\rm ack}$ \n$x_i\\in [0, 10]$, and for $F_{\\rm ros}$, $x_i\\in [1, 10]$),\n\\item[At the Center:] Optimum is at the center of the allowable range (for $F_{\\rm elp}$, $F_{\\rm sch}$ and $F_{\\rm ack}$ \n$x_i\\in [-10, 10]$, and for $F_{\\rm ros}$, $x_i\\in [-8, 10]$), and \n\\item[Close to Boundary:] Optimum is near the variable boundary, \nbut not exactly on the boundary (for $F_{\\rm elp}$, $F_{\\rm sch}$ and $F_{\\rm ack}$ \n$x_i\\in [-1, 10]$, and for $F_{\\rm ros}$, $x_i\\in [0, 10]$).\n\\end{description}\nThese three scenarios are shown in the Figure~\\ref{fig:3scenario} for a\ntwo-variable problem having variable bounds: $x_i^{(L)}=0$ and $x_i^{(U)}=10$.\nAlthough in practice, the optimum can lie anywhere in the allowable range,\nthe above three scenarios pose adequate representation of different\npossibilities that may exist in practice. \n\n\\begin{figure}\n\\centering\n\\begin{subfigure}{}\n \\centering\n \\includegraphics[width=.65\\linewidth]{boundary-ellp-optima}\n\n \\label{fig:boundary-ellp-optima}\n\\end{subfigure\n\n\\begin{subfigure}{}\n \\centering\n \\includegraphics[width=.65\\linewidth]{center-ellp-optima}\n \n \\label{fig:center-ellp-optima}\n\\end{subfigure}\n\n\\begin{subfigure}{}\n \\centering\n \\includegraphics[width=.65\\linewidth]{close-to-boundary-ellp-optim}\n \n \\label{fig:close-to-boundary-ellp-optim}\n\\end{subfigure}\n\n\\caption{Location of optimum for $F_{elp}$: (a) on the boundary (b) in the center and (c) close to the \nedge of the boundary by selecting different search domains.}\n\\label{fig:3scenario}\n\\end{figure}\n \nFor each test problem, the population is initialized uniformly in the\nallowable range. We count the number of function\nevaluations needed for the algorithm to find a solution close to the\nknown optimum solution and we call this our \nevaluation criterion $S$. \nChoosing a high accuracy (i.e. 
small value of $S$)\nas the termination criteria minimizes the chances of locating the optimum due to \nrandom effects, and provides a better insight into the behavior of a constraint-handling mechanism. \n\nTo eliminate the random effects and gather results of statistical importance, each algorithm \nis tested on a problem $50$ times (each run starting with a different\ninitial population). A particular run is terminated if the evaluation\ncriterion $S$ is met (noted as a successful run), \nor the number of function evaluations exceeds one million (noted as an\nunsuccessful run). If only a few out of the $50$ runs are\nsuccessful, then we report the number of successful runs in the\nbracket. In this case, the best, median and worst number of function\nevaluations are computed from the successful runs only. If none of the runs\nare successful, we denote this by marking \\textit{(DNC)} (Did Not\nConverge). In such cases, we report the best, median and worst\nattained function values of the best solution at the end of each\nrun. To distinguish the unsuccessful results from successful\nones, we present the fitness value information of the unsuccessful\nruns in italics.\n\nAn in-depth study on the constraint-handling techniques is carried out\nin this paper. \nDifferent locations of the optimum are selected and systematic comparisons are \ncarried out for PSO, DE and GAs in Sections ~\\ref{subsec:psoresults}, \n~\\ref{subsec:DEresults} and ~\\ref{subsec:GAresults} , respectively. \n \n\\subsection{Results with Particle Swarm Optimization (PSO)}\\label{subsec:psoresults}\nIn PSO, decision variable and the velocity terms are updated\nindependently. Let us say, that the initial position is $\\vec{x}_{t}$, the newly created \nposition is infeasible and represented by $\\vec{x}_{t+1}$, and the repaired solution\nis denoted by $\\vec{y}_{t}$. \n\nIf the velocity update is based on the infeasible solution as:\n\n\\begin{equation}\n\\label{eqn:velocity-standard}\n\\vec{v}_{t+1}=\\vec{x}_{t+1} - \\vec{x}_{t} \n\\end{equation}\n\nthen, we refer to this as ``Velocity Unchanged''. However, if the velocity update \nis based on the repaired location as: \n\n\\begin{equation}\n\\label{eqn:velocity-recomputed}\n\\vec{v}_{t+1}=\\vec{y}_{t} - \\vec{x}_{t}, \n\\end{equation}\n\nthen, we refer to this as ``Velocity Recomputed''. This terminology is used for rest of the paper. \nFor inverse parabolic (IP) and exponential (Exp) approaches, we use\n``Velocity Recomputed'' strategy only. We have performed ``Velocity\nUnchanged'' strategy with IP and exponential approaches, but the\nresults were not as good as compared to ``Velocity Recomputed'' strategy. \nFor the \\textit{SetOnBoundary} approach, we use the ``Velocity Recomputed''\nstrategy and two other strategies discussed as follows. \n\nAnother strategy named \n``Velocity Reflection'' is used, which simply implies \nthat if a particle is set on the $i$-th boundary, then $v_i^{t+1}$ \nis changed to $-v_i^{t+1}$. The goal of the velocity\nreflection is to explicitly allow particles to move back into the search space.\nIn the ``Velocity Set to Zero'' strategy, if a particle is set\non the $i$-th boundary, then the corresponding velocity component is set to zero i.e. $v_i^{t+1}=0$. \nFor the shrink approach, both ``Velocity Recomputed'' and ``Velocity\nSet to Zero'' strategies are used. \n\nFor PSO, a recently proposed {\\em hyperbolic} \\cite{Helwid-Constraing-handling-2013} constraint-handling approach is also\nincluded in this study. 
This strategy operates by first calculating velocity according to the standard mechanism ~\\ref{eqn:velocity-standard}, and \nin the case of violation a linear normalization is performed on the velocity to restrict the solution from jumping out of the constrained boundary as follows:\n\\begin{equation}\n\\label{eq:hyperbolic}\nv_{i,t+1} = \\frac{v_{i,t+1}}{1+\\frac{|v_{i,t+1}|}{\\min(x_i^{(U)}-x_{i},x_{i,t}-x_{i}^{(L)})}}. \n\\end{equation}\nEssentially, the closer the particle gets to the boundary (e.g.,\n$x_{i,t}$ only slightly smaller than $x_{i}^{(U)}$), the more difficult it becomes to reach the boundary. In fact, the particle is never completely\nallowed to reach the boundary as the velocity tends to zero. We emphasize again that this strategy is only applicable to\nPSO. A standard PSO is employed in this study with a population size of 100. \nThe results for all the above scenarios with PSO are presented in\nTables~\\ref{tab:PSOEllp} to ~\\ref{tab:PSORos}.\n\n\\begin{table*}[ht]\n\\begin{footnotesize}\n\\caption{Results on $F_{\\rm elp}$ with PSO for $10^{-10}$ termination criterion.}\n\\begin{minipage}[b]{1.0\\linewidth}\n\\begin{center}\n\\begin{tabular}{|lrrrr|} \\hline \n{Strategy} & Velocity update & {Best} &\n Median & Worst \\\\ \\hline \\hline \n\\multicolumn{5}{|c|}{\t{$F_{\\rm elp}$ in [0,10]: On the Boundary}} \\\\ \\hline\nIP Spread & Recomputed & 39,900 & 47,000 &\n67,000 \\\\ IP Confined & Recomputed & 47,900 (49) & 88,600 & 140,800 \\\\ \nExp. Spread & Recomputed &\\textit{3.25e-01} & \\textit{5.02e-01} & \\textit{1.08e+00} \\\\\nExp. Confined & Recomputed & {\\bf 4,600} & {\\bf 5,900} &\n{\\bf 7,500} \\\\ \nPeriodic & Recomputed &\\textit{3.94e+02} \\textit{(DNC)}& \\textit{6.63e+02} & \\textit{1.17e+03} \\\\ \nPeriodic & Unchanged & \\textit{8.91e+02} \\textit{(DNC)} &\n\\textit{1.03e+03} &\\textit{1.34e+03} \\\\ \nRandom & Recomputed & \\textit{1.97e+01} \\textit{(DNC)}& \\textit{3.37e+01} & \\textit{8.10e+01} \\\\ \nRandom & Unchanged & \\textit{5.48e+02} \\textit{(DNC)}&\\textit{6.69e+02} & \\textit{9.65e+02} \\\\ \nSetOnBoundary & Recomputed & 900 (44) & 1,300 & 5,100 \\\\\nSetOnBoundary & Reflected & 242,100 & 387,100 & 811,400 \\\\ \nSetOnBoundary & Set to Zero & 1,300 (48) & 1,900 & 4,100 \\\\ \nShrink & Recomputed & 8,200 (49) & 10,900 & 14,300 \\\\ \nShrink & Set to Zero & 33,000 & 40,700 & 53,900 \\\\ \nHyperbolic & Modified (Eq.~(\\ref{eq:hyperbolic}))\t\t & 14,100 &15,100 & 16,500\t\t\t\\\\\t\\hline \\hline\n\\multicolumn{5}{|c|}{\t{$F_{\\rm elp}$ in [-10,10]: At the Center}}\t\\\\ \\hline\nIP Spread & Recomputed & 31,600 & 34,000 &37,900 \\\\\nIP Confined & Recomputed & 30,900 & 33,800 & 38,500 \\\\\nExp. Spread & Recomputed & 30,500 & 34,700 & 38,300 \\\\\nExp. 
Confined & Recomputed & 31,900 & 35,100 & 38,200 \\\\\nPeriodic & Recomputed & 32,200 & 35,100 & 37,900 \\\\\nPeriodic & Unchanged & 33,800 & 36,600 & 41,200 \\\\%\\hline\nRandom & Recomputed & 31,900 & 34,800 & 37,400 \\\\\nRandom & Unchanged & 31,600 & 34,900 & 38,100 \\\\\nSetOnBoundary &Recomputed & 31,900 & 35,500 & 40,500 \\\\ \nSetOnBoundary & Reflected & 50,800 (38) & 83,200 & 484,100 \\\\\nSetOnBoundary & Set to Zero & 31,600 & 35,000 & 37,200 \\\\\nShrink & Recomputed & 32,000 & 34,400 & 48,200 \\\\% \\hline \nShrink & Set to Zero & 31,400 & 34,000 & 37,700 \\\\ \nHyperbolic & Modified (Eq.~(\\ref{eq:hyperbolic}))\n& {\\bf 29,400} & {\\bf 31,200} & {\\bf 34,700}\t\t\t\\\\ \\hline \\hline\n\\multicolumn{5}{|c|}{\t {$F_{\\rm elp}$ in [-1,10]: Close to Boundary}} \\\\ \\hline\nIP Spread & Recomputed & 28,200 & 31,900 & 35,300 \\\\\nIP Confined & Recomputed & 28,300 & 32,900 & 44,600 \\\\\nExp. Spread & Recomputed & 28,300 & 30,700 & 33,200 \\\\% \\hline\nExp. Confined & Recomputed & 29,500 & 33,000 & 44,700 \\\\\nPeriodic & Recomputed & \\textit{4.86e+01} \\textit{(DNC)} & \\textit{1.41e+02} & \n\\textit{4.28e+02} \\\\\nPeriodic & Unchanged & \\textit{2.84e+02} \\textit{(DNC)} & \\textit{5.46e+02} & \\textit{8.28e+02} \\\\\nRandom & Recomputed & 36,900 & 41,900 & 45,600 \\\\\nRandom & Unchanged & \\textit{1.13e+02} \\textit{(DNC)} & \\textit{2.26e+02} & \\textit{4.35e+02} \\\\ \nSetOnBoundary & Recomputed & \\textit{1.80e+01} \\textit{(DNC)} & \\textit{7.60e+01} & \n\\textit{3.00e+02} \\\\ \nSetOnBoundary & Reflected & \\textit{2.13e-01} \\textit{(DNC)}& \\textit{2.17e+01} & \n\t\\textit{1.06e+02} \\\\\nSetOnBoundary & Set to Zero & 31,700 (2) & 31,700 & 32,600 \\\\\nShrink & Recomputed & 29,500 (6) & 36,100 & 42,300 \\\\\nShrink & Set to Zero & 28,400 (36) & 32,700 & 65,600 \\\\ \nHyperbolic & Modified (Eq.~(\\ref{eq:hyperbolic}))\n& {\\bf 25,900} & {\\bf 29,200} & {\\bf 31,000}\t\\\\ \\hline\n\\end{tabular}\n\\end{center}\n\\label{tab:PSOEllp}\n\\end{minipage}\n\\end{footnotesize}\n\\end{table*}\n\n\\begin{table*}\n\\begin{footnotesize}\n\\begin{minipage}[b]{1.0\\linewidth}\n\\begin{center}\n\\caption{Results on $F_{\\rm sch}$ with PSO for $10^{-10}$ termination criterion.}\n\\begin{tabular}{|lrrrr|} \\hline \\hline\n{{Strategy}} & Velocity & {{Best}}& {{Median}}& {{Worst}} \\\\ \\hline \\hline\n\\multicolumn{5}{|c|}{{$F_{\\rm sch}$ in [0,10]: On the Boundary}} \\\\ \\hline\nIP Spread & Recomputed & 67,200 & 257,800 & 970,400 \\\\% \\hline\nIP Confined & Recomputed & 112,400 (6) & 126,500 & 145,900 \\\\% \\hline\nExp. Spread & Recomputed & \\textit{3.79e+00} \\textit{(DNC)}& \\textit{8.37e+00} & \\textit{1.49e+01} \\\\\nExp. 
Confined & Recomputed & {\\bf 4,900} & {\\bf 6,100} & {\\bf 13,500} \\\\% \\hline\nPeriodic & Recomputed & \\textit{4.85e+03} \\textit{(DNC)}& \\textit{7.82e+03}&\\textit{1.34e+04} \\\\\nPeriodic &Unchanged & \\textit{7.69e+03} \\textit{(DNC)}& \\textit{1.11e+04}& \\textit{1.51e+04} \\\\% \\hline\nRandom &Recomputed & \\textit{2.61e+02} \\textit{(DNC)}& \\textit{5.44e+02} & \\textit{1.05e+03} \\\\\n\nRandom &Unchanged & \\textit{5.30e+03} \\textit{(DNC)} & \\textit{7.60e+03} & \\textit{1.22e+04} \\\\% \\hline\n \nSetOnBoundary &Recomputed & 800 (30) & 1,100 & 3,900 \\\\ \nSetOnBoundary &Reflected & 171,500 & 241,700 & 434,200 \\\\\nSetOnBoundary &Set to Zero & 1,000 (40) & 1,600 & 5,300 \\\\\nShrink &Recomputed\t & 6,900 & 9,100 & 11,600 \\\\% \\hline\nShrink &Set to Zero & 17,900 & 31,900 & 49,800 \\\\ \nHyperbolic & Modified (Eq.~(\\ref{eq:hyperbolic}))\t\t\t& 36,400 & 41,700 & 48,700\t\t\t\\\\ \\hline \\hline\n\\multicolumn{5}{|c|}{{$F_{\\rm sch}$ in [-10,10]: At the Center}} \\\\ \\hline\nIP Spread & Recomputed & 106,700 & 127,500 & 144,300 \\\\\nIP Confined & Recomputed & 111,500 & 130,100 & 149,900 \\\\% \\hline\nExp. Spread & Recomputed & 112,300 & 131,400 & 149,000 \\\\\nExp. Confined & Recomputed & 116,400 & 131,300 & 148,200 \\\\\nPeriodic &Recomputed & 113,400 & 130,900 & 150,600 \\\\%\t\\hline\nPeriodic &Unchanged & 121,200 & 137,800 & 159,100 \\\\\nRandom &Recomputed & 112,900 & 129,800 & 151,100 \\\\\nRandom &Unchanged & 117,000 & 130,600 & 148,100 \\\\\nSetOnBoundary &Recomputed & 118,500 (49) & 132,300 & 161,100 \\\\\nSetOnBoundary &Reflected & \\textit{3.30e-06} \\textit{(DNC)}& \\textit{8.32e+01}\\textit{(DNC)}& \n\\textit{2.95e+02} \\textit{(DNC)} \\\\% \\hline \nSetOnBoundary & Set to Zero & 111,900 & 132,200 & 149,700 \\\\\nShrink.&Recomputed & 111,800 (49)\t& 131,800\t& 183,500 \\\\\nShrink.&Set to Zero & 108,400\t& 125,100\t& 143,600 \\\\ \nHyperbolic & Modified (Eq.~(\\ref{eq:hyperbolic}))\t\t\t& {\\bf 101,300}\t& \t{\\bf 117,700} & {\\bf 129,700} \\\\\\hline \\hline\n\\multicolumn{5}{|c|}{{$F_{\\rm sch}$ in [-1,10]: Close to\n Boundary}} \\\\ \\hline\nIP Spread & Recomputed & 107,200 & 130,400 & 272,400 \\\\\nIP Confined & Recomputed & 120,100 (44) & 171,200 & 301,200 \\\\\nExp. Spread & Recomputed & 92,800 & 109,200 & 126,400 \\\\\nExp. 
Confined & Recomputed & 110,200 & 127,400 & 256,100 \\\\\nPeriodic&Recomputed & \\textit{8.09e+02} \\textit{(DNC)}& \\textit{2.01e+03} \\textit{(DNC)}& \n\\textit{5.53e+03}\\textit{(DNC)} \\\\\nPeriodic&Unchanged & \\textit{2.16e+03} \\textit{(DNC)} & \\textit{4.36e+03} \\textit{(DNC)} & \\textit{6.87e+03} \\textit{(DNC)} \\\\\nRandom&Recomputed & 123,300 & 165,600\t& 280,000 \\\\\nRandom&Unchanged & \\textit{8.17e+02} \\textit{(DNC)} & \\textit{1.96e+03} \\textit{(DNC)} & \n\\textit{2.68e+03} \\textit{(DNC)} \\\\\n\t\t\nSetOnBoundary&Recomputed & \\textit{2.50e+00} \\textit{(DNC)} & \\textit{1.25e+01} \\textit{(DNC)} & \\textit{5.75e+02} \\textit{(DNC)} \\\\\nSetOnBoundary&Reflected & \\textit{1.86e+00} \\textit{(DNC)} & \\textit{7.76e+00} \\textit{(DNC)} & \\textit{5.18e+01} \\textit{(DNC)} \\\\\nSetOnBoundary&Set to Zero & \\textit{1.00e+00} \\textit{(DNC)} & \\textit{5.00e+00} \\textit{(DNC)} & \n\\textit{4.21e+02} \\textit{(DNC)} \\\\\nShrink & Recomputed & \\textit{5.00e-01} \\textit{(DNC)} & \\textit{3.00e+00} \\textit{(DNC)} & \\textit{1.60e+01} \\textit{(DNC)} \\\\\nShrink &Set to Zero & 108,300 (8) & 130,300 & 143,000 \\\\ \nHyperbolic \t& Modified (Eq.~(\\ref{eq:hyperbolic}))\t\t& {\\bf 93,100} \t\t&{\\bf \t108,300} & {\\bf 119,000}\t\\\\ \\hline\n\\end{tabular}\n\\label{tab:PSOSch}\n\\end{center}\n\\end{minipage}\n\\end{footnotesize}\n\\end{table*}\n\n\\begin{table*}[ht]\n\\begin{footnotesize}\n\\begin{minipage}[b]{1.0\\linewidth}\n\\caption{Results on $F_{\\rm ack}$ with PSO for $10^{-10}$ termination criterion. }\n\\begin{center}\n\\begin{tabular}{|lrrrr|} \\hline\n{{Strategy}} & {{Best}}& {{Median}}& {{Worst}} \\\\ \\hline \\hline\n\\multicolumn{5}{|c|}{{$F_{\\rm ack}$ in [0,10]: On the Boundary}} \\\\ \\hline\nIP Spread & Recomputed\t\t&\t150,600 (49)\t& 220,900\t & 328,000 \\\\\nIP Confined & Recomputed\t& \\textit{4.17e+00} \\textit{(DNC)} & \\textit{6.53e+00} & \\textit{8.79e+00}\\\\ \nExp. Spread & Recomputed\t&\\textit{2.76e-01} \\textit{(DNC)} & \\textit{9.62e-01} & \n\\textit{2.50e+00} \\\\ \nExp. Confined & Recomputed\t& 7,800\t& 9,600\t& 11,100 \\\\% \\hline \nPeriodic&Recomputed \t&\\textit{6.17e+00} \\textit{(DNC)} & \\textit{6.89e+00} & \n\\textit{9.22e+00} \\\\\n\t\nPeriodic&Unchanged \t & \\textit{8.23e+00} \\textit{(DNC)} & \\textit{9.10e+00} & \n\\textit{9.68e+00} \\\\\nRandom&Recomputed \t& \\textit{3.29e+00} \\textit{(DNC)} & \\textit{3.40e+00} & \\textit{4.19e+00} \\\\\nRandom&Unchanged &\\textit{6.70e+00} \\textit{(DNC)}& \\textit{7.46e+00}& \\textit{8.57e+00} \n\\\\ \nSetOnBoundary&Recomputed \t&{\\bf 800}\t& {\\bf 1,100}\t& {\\bf 2,100} \\\\% \\hline \nSetOnBoundary&Reflected \t & 420,600\t& 598,600\t& 917,400 \\\\\nSetOnBoundary&Set to Zero\t & 1,100\t& 1,800\t& 3,100\t \\\\% \\hline \nShrink.&Recomputed \t\t& 33,800 (5) & 263,100 & 690,400 \\\\% \\hline \nShrink.&Set Zero \t\t& \\textit{3.65e+00} \\textit{(DNC)} & \\textit{6.28e+00} & \\textit{8.35e+00} \\\\ \nHyperbolic & Modified (Eq.~(\\ref{eq:hyperbolic}))\t\t& 24,600 (25) &26,100 &28,000 \\\\\t\\hline \\hline\n\\multicolumn{5}{|c|}{ {$F_{\\rm ack}$ in [-10,10]: At the Center}} \\\\ \\hline\nIP Spread & Recomputed\t\t\t &\t53,900 (46)\t& 58,600\t& 66,500 \\\\% \\hline\nIP Confined & Recomputed\t\t\t& 54,800 (49)\t& 59,200\t& 64,700 \\\\\nExp. Spread & Recomputed\t\t& {\\bf 55,100}\t& {\\bf 59,300}\t& {\\bf 63,600} \\\\\nExp. 
Confined\t& Recomputed\t &\t 56,800 & 59,600\t& 65,000 \\\\\nPeriodic&Recomputed \t\t & 55,700 (48)\t& 59,900\t& 64,700 \\\\\nPeriodic&Unchanged \t\t &\t57,900 (49)\t& 62,100\t& 66,700 \\\\% \\hline\nRandom&Recomputed \t\t& 55,100 (47)\t& 59,400\t& 65,100 \\\\\nRandom&Unchanged \t\t&\t56,300\t & 59,700\t& 65,500 \\\\% \\hline\nSetOnBoundary&Recomputed \t & 55,100 (49)\t& 58,900\t& 65,400 \\\\\nSetOnBoundary&Reflected \t& 86,900 (4)\t& 136,400\t& 927,600 \\\\% \\hline\nSetOnBoundary&Set to Zero \t& 53,900 (49)\t& 59,600\t& 67,700 \\\\\nShrink &Recomputed \t\t& 55,800 (47)\t\t& 58,700\t& 65,800 \\\\\nShrink &Set to Zero \t\t\t& 55,700 (49)\t\t& 58,900\t& 62,000 \\\\ \nHyperbolic & Modified (Eq.~(\\ref{eq:hyperbolic}))\t\t\t& 52,900 (49) & 56,200 & 64,400\t\t\\\\\\hline \\hline\n\\multicolumn{5}{|c|}{ {$F_{\\rm ack}$ in [-1,10]: Close to\n Boundary}} \\\\ \\hline\nIP Spread & Recomputed\t\t\t &\t54,600 (5)\t& 55,100\t& 56,600 \\\\\nIP Confined & Recomputed\t\t\t& 63,200 (1)\t& 63,200\t& 63,200 \\\\\nExp. Spread & Recomputed\t\t& {\\bf 51,300}\t& {\\bf 55,200}\t& {\\bf 58,600} \\\\\nExp. Confined & Recomputed & \\textit{1.42e+00} \\textit{(DNC)} & \\textit{2.17e+00} \n& \\textit{2.92e+00} \\\\\nPeriodic&Recomputed & \\textit{2.88e+00} \\textit{(DNC)} & \\textit{4.03e+00} & \\textit{5.40e+00} \\\\\nPeriodic&Unchanged & \\textit{6.61e+00} \\textit{(DNC)} & \\textit{7.46e+00} & \\textit{8.37e+00} \\\\ \nRandom&Recomputed & 60,300 (45) & 66,200 & 72,200 \\\\% \\hline\nRandom&Unchanged \t& \\textit{4.21e+00} \\textit{(DNC)} &\n\\textit{4.93e+00} & \\textit{6.11e+00} \\\\ \nSetOnBoundary&Recomputed & \\textit{2.74e+00} \\textit{(DNC)} & \\textit{3.16e+00} & \\textit{3.36e+00} \\\\\nSetOnBoundary&Reflected \t & 824,700 (1)\t& 824,700\t& 824,700 \\\\% \\hline\nSetOnBoundary&Set to Zero \t& \\textit{1.70e+00} \\textit{(DNC)} & \\textit{2.63e+00} \n& \\textit{3.26e+00} \\\\ \nShrink&Recomputed & \\textit{1.45e+00} \\textit{(DNC)} & \\textit{2.34e+00} & \\textit{2.73e+00} \\\\ \nShrink&Set to Zero & \\textit{2.01e+00} \\textit{(DNC)} & \\textit{3.96e+00} & \\textit{6.76e+00} \\\\ \n\tHyperbolic & Modified (Eq.~(\\ref{eq:hyperbolic})) &\n 50,000 (39) & 53,500 & 58,100 \\\\ \\hline\n\\end{tabular}\n\\label{tab:PSOAck}\n\\end{center}\n\\end{minipage}\n\\end{footnotesize}\n\\end{table*}\n\n\\begin{table*}[ht]\n\\begin{footnotesize}\n\\caption{Results on $F_{\\rm ros}$ with PSO for $10^{-10}$ termination criterion.}\n\\begin{minipage}[b]{1.0\\linewidth}\n\\begin{center}\n\\begin{tabular}{|lrrrr|} \\hline \n{{Strategy}} & {{Best}}& {{Median}}& {{Worst}} \\\\ \\hline \\hline\n\\multicolumn{5}{|c|}{ {$F_{\\rm ros}$ in [1,10]: On the Boundary}} \\\\ \\hline\nIP Spread & Recomputed & 89,800 & 195,900 & 243,300\t \\\\\nIP Confined & Recomputed & 23,800 &164,300 & 209,300\t\\\\\nExp. Spread & Recomputed & \\textit{9.55e-01} \\textit{(DNC)} & \\textit{2.58e+00}\n& \\textit{7.64e+00} \\\\ \nExp. 
Confined & Recomputed & {\\bf 3,700} &\t{\\bf 128,100}\t& {\\bf 344,400}\t\\\\\nPeriodic&Recomputed & \\textit{1.24e+04} \\textit{(DNC)} & \\textit{2.35e+04}\n\t&\\textit{4.24e+04} \\\\\nPeriodic&Unchanged & \\textit{6.99e+04} \\textit{(DNC)} & \\textit{1.01e+05} \n& \\textit{1.45e+05} \t\t\\\\\nRandom&Recomputed & \\textit{6.00e+01} \\textit{(DNC)} & \\textit{1.37e+02} & \n\t\\textit{4.42e+02} \t\t \\\\\nRandom&Unchanged & \\textit{2.32e+04} \\textit{(DNC)} & \\textit{3.90e+04} \n& \\textit{8.22e+04} \\\\ \nSetOnBoundary&Recomputed \t& 900 (45) &\t1,600 \t&89,800\t\t \\\\\nSetOnBoundary&Reflected \t& \\textit{2.14e-03} \\textit{(DNC)} & \\textit{6.01e+02} \n& \\textit{5.10e+04} \t\t \\\\ \nSetOnBoundary&Set to Zero \t& 1,400 (48) &\t3,000\t& 303,700 \t\t \\\\\nShrink.&Recomputed \t& 3,900 (44) &\t5,100 \t& 406,000\t\t \\\\% \\hline\nShrink.&Set to Zero & 15,500 &136,200 & 193400\t\t \\\\\nHyperbolic & & 177,400 (45) & 714,300 & 987,500\\\\\t\t \\hline \\hline\n\\multicolumn{5}{|c|}{ {$F_{\\rm ros}$ in [-8,10]: Near the\n Center}} \\\\ \\hline\nIP Spread & Recomputed & 302,300 (28) & 774,900 & 995,000 \t \\\\% \\hline\nIP Confined & Recomputed & 296,600 (32) &729,000 &955,000 \t \\\\\nExp. Spread & Recomputed & 208,800 (24) & 754,700 & 985,200 \t \\\\\nExp. Confined & Recomputed & 301,100 (33) & 801,400 & 961,800 \\\\\nPeriodic&Recomputed & 26,200 (27) & 705,100 & 986,200 \\\\% \\hline\nPeriodic&Unchanged & 247,300 (32) & 776,800 & 994,900 \t \\\\% \\hline\nRandom&Recomputed & 311,200 (30) &809,300 & 990,800 \\\\\nRandom&Unchanged & 380,100 (29) & 793,300 & 968,300\t \\\\%\t\\hline\nSetOnBoundary&Recomputed & {\\bf 248,700} (35) & {\\bf 795,600} & {\\bf 973,900} \\\\\nSetOnBoundary&Reflected & 661,900 (01) & 661,900 & 661,900 \\\\% \\hline\nSetOnBoundary&Set to Zero & 117,400 (25) &\t858,400 \t& 995,400 \\\\% \\hline\nShrink.&Recomputed & 347,900 (33) & 790,500& 996,300 \\\\% \\hline\nShrink.&Set to Zero & 353,300 (26) & 788,700 & 986,800 \\\\ \nHyperbolic & Modified (Eq.~(\\ref{eq:hyperbolic}))\t & \\textit{6.47e-08 (DNC)} & \\textit{1.27e-04 (DNC)}& \\textit{6.78e+00 (DNC)} \t\\\\\t\\hline \\hline\n\\multicolumn{5}{|c|}{ {$F_{\\rm ros}$ in [1,10]: Close to Boundary}}\n\\\\ \\hline\nIP Spread & Recomputed & 184,600 (47) & 442,200 & 767,500 \\\\% \\hline\nIP Confined & Recomputed & 229,900 (40) & 457,600 & 899,200 \\\\% \\hline\nExp. Spread & Recomputed & 19,400 (47) & 378,200 & 537,300 \\\\\nExp. 
Confined & Recomputed & \\textit{6.79e-03} \\textit{(DNC)} & \\textit{4.23e+00} & \n\\textit{6.73e+01} \\\\ \nPeriodic&Recomputed & \\textit{1.51e-02} \\textit{(DNC)} \t& \\textit{3.73e+00} \n& \\textit{5.17e+02} \\\\\nPeriodic&Unchanged & \\textit{1.92e+04} \\textit{(DNC)} & \\textit{2.86e+04} & \n\\textit{6.71e+04} \\\\ \nRandom&Recomputed & {\\bf 103,800} \t& {\\bf 432,200}\t& {\\bf 527,200} \\\\\nRandom&Unchanged & \\textit{2.33e+02} \\textit{(DNC)} & \\textit{1.47e+03} \n& \\textit{4.23e+03} \\\\ \nSetOnBoundary&Recomputed & \\textit{1.71e+01} \\textit{(DNC)} & \\textit{1.87e+01}& \n\\textit{3.13e+02} \t\\\\\nSetOnBoundary&Reflected & \\textit{6.88e+00} \\textit{(DNC)} \t& \\textit{5.52e+02} & \n\t\\textit{2.14e+04} \\\\\nSetOnBoundary&Set to Zero & \\textit{6.23e+00} \\textit{(DNC)} & \\textit{1.80e+01}\n& \\textit{3.12e+02} \\\\ \nShrink &Recomputed & 350,300 (3) \t& 350,900\t& 458,400 \\\\ \nShrink &Set to Zero & 163,700 (26) & 418,000 &531,900 \\\\ \nHyperbolic & Modified (Eq.~(\\ref{eq:hyperbolic}))\t & 920,900 (1) & 920,900 & 920,900 \t\\\\ \t\\hline\n\\end{tabular}\n\\label{tab:PSORos}\n\\end{center}\n\\end{minipage}\n\\end{footnotesize}\n\\end{table*}\n\nThe extensive simulation results are summarized using the following method. For each (say $j$) of the 14\napproaches, the corresponding number of the successful applications ($\\rho_j$) are recorded. Here,\nan application is considered to be successful if more than 45 runs out of 50\nruns are able to find the optimum within the specified accuracy. It is\nobserved that IP-S is successful in 10 out of 12 problem instances. Exponential confined approach (Exp-C) is successful in 9\ncases. To investigate the required number of function evaluations (FE) needed\nto find the optimum, by an approach (say $j$), we compute the average number of $\\bar{\\rm FE}_k^j$ \nneeded to solve a particular problem ($k$) and construct the following objective for\n$j$-th approach:\n\\begin{equation}\n\\mbox{FE-ratio}_j = \\frac{1}{\\rho_j}\\sum_{k=1}^{12} \\frac{\\mbox{FE}_k^j}{\\bar{\\rm FE}_k^j}, \n\\end{equation}\nwhere FE$_k^j$ is the FEs needed by the $j$-th approach to solve the\n$k$-th problem. Figure~\\ref{fig:pso_rank} shows the performance of\neach ($j$-th) of the 14 approaches on the two-axes plot ($\\rho_j$ and\n$\\mbox{FE-ratio}_j$). \n\\begin{figure}[hbt]\n\\begin{center}\n\\includegraphics[scale=1.0]{pso_rank.eps} \n\\end{center}\n\\caption{Performance comparison of 14 approaches for two metrics --\n number of problems solved successfully and function evaluation ratio\n -- with the PSO algorithm.}\n\\label{fig:pso_rank}\n\\end{figure}\n\nThe best approaches should have large $\\rho_j$ values and small $\\mbox{FE-ratio}_j$ values. \nThis results in a trade-off between three best approaches which are marked in filled circles. All other 11\napproaches are {\\em dominated\\\/} by these three approaches. The\n{\\it SetBound} (\\textit{SetOnBoundary}) with velocity set to zero performs in only six out of 12\nproblem instances. Thus, we ignore this approach. There is a clear\ntrade-off between IP-S and Exp-C approaches. IP-S solves one\nproblem more, but requires more FEs in comparison to Exp-C. Hence, we\nrecommend the use of both of these methods vis-a-vis all other methods used in this study. 
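The two summary metrics are straightforward to compute. The short Python sketch below is an illustration only (it is not the analysis script used for this study); it assumes that $\bar{\rm FE}_k$ is the average number of function evaluations over all approaches that solved problem $k$, and that problem instances an approach could not solve are marked with \texttt{nan}.
\begin{verbatim}
import numpy as np

def summarise(fe):
    """fe[j, k]: average FEs of approach j on problem k over its successful
    runs; np.nan where fewer than 46 of the 50 runs succeeded.
    Returns (rho, fe_ratio) for every approach (illustrative sketch)."""
    fe = np.asarray(fe, dtype=float)
    solved = ~np.isnan(fe)
    rho = solved.sum(axis=1)                 # problems solved per approach
    fe_bar = np.nanmean(fe, axis=0)          # per-problem average over approaches
    ratio = np.nan_to_num(fe / fe_bar, nan=0.0)
    return rho, ratio.sum(axis=1) / rho      # FE-ratio_j
\end{verbatim}
Under this reading, one approach dominates another if it solves at least as many problems with a smaller FE-ratio; the filled circles in Figure~\ref{fig:pso_rank} mark the approaches that are non-dominated in this sense.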

Other conclusions of this extensive study of PSO with different
constraint-handling methods are summarized as follows:
\begin{enumerate}
\item The constraint-handling methods show a large variation in performance
depending on the choice of test problem and the location of the optimum in
the allowable variable range.

\item When the optimum is on the variable boundary, the periodic and
  random allocation methods perform poorly. This
  is expected intuitively.

\item When the optimum is on the variable boundary, methods that set infeasible
  solutions on the violated boundary (\textit{SetOnBoundary} methods) work very
  well for obvious reasons, but these methods do not perform well in the other cases.

\item When the optimum lies near the center of the allowable range,
  most constraint-handling approaches work almost equally well. This can be understood intuitively from the fact that the
tendency of particles to fly out of the search space is small when the optimum is
at the center of the allowable range. For example, the
periodic approach, which fails when the optimum is on or close to the boundary,
shows convergence on all test problems when the optimum is at the center. When the optimum is on or close
to the boundary, the effect of the chosen
constraint-handling method becomes critical.

\item The shrink method (with ``Velocity Recomputed'' and ``Velocity Set
  to Zero'' strategies) succeeded in 10 of the 12 cases.
\end{enumerate}

\subsection{Parametric Study of $\alpha$}\label{sec:alpha-effect}
The proposed IP approaches involve a parameter $\alpha$ affecting the
variance of the probability distribution for the mapped variable. In
this section, we perform a parametric study of $\alpha$ to determine
its effect on the performance of the IP-S approach.

The following $\alpha$ values are chosen: $0.1$, $1$, $10$, and
$1,000$. To give an idea of the effect of $\alpha$, we plot the
probability distribution of mapped values in the allowable range
$[1,10]$ (with the violated boundary at $1$) for a violated distance $d_v=1.0$ in Figure~\ref{fig:Alpha-effect-figure}. It can
be seen that for $\alpha=10$ and $1,000$, the distribution is almost
uniform.
\begin{figure}[hbt]
\begin{center}
\includegraphics[scale=.75]{alpha-effect-final.eps}
\caption{Probability distribution function for different $\alpha$
  values. The $x$-axis denotes the distance from the infeasible child, which is
  created at $x=0$; the violated boundary is at $x=1$.}
\label{fig:Alpha-effect-figure}
\end{center}
\end{figure}

Figure~\ref{fig:Alpha-effect-result} shows the effect
of $\alpha$ on the $F_{\rm elp}$ problem. For the same termination criterion,
we find that $\alpha=0.1$ and $1.0$ perform better than the other
values. With
larger values of $\alpha$, the IP-S method no longer finds the
desired solution in all 50 runs.
\begin{figure}[hbt]
\begin{center}
\includegraphics[scale=1.0]{Alpha-effect.eps}
\caption{Performance of the PSO algorithm with the IP-S approach for
  different $\alpha$ values on the $F_{\rm elp}$ problem.}
\label{fig:Alpha-effect-result}
\end{center}
\end{figure}

\subsection{Results with Differential Evolution (DE)}\label{subsec:DEresults}
Differential evolution, originally proposed in \cite{storn}, has gained
popularity as an efficient evolutionary optimization algorithm.
The developers of DE proposed a total of ten different strategies
\cite{StornPriceBook}.
In \\cite{DE-Boundary-Handling} it was shown that \nperformance of DE largely depended upon the choice of constraint-handling mechanism.\nWe use Strategy~1 (where the offspring is created around the population-best solution), which is\nmost suited for solving unimodal problems \\cite{padhyeJOGO2012}. \nA population size of $50$ was chosen with parameter values of \n$CR=0.5$ and $F=0.7$. Other parameters are set the same as before. \nWe use $S=10^{-10}$ as our termination criterion. Results are tabulated \nin Tables~\\ref{tab:DEellp} to ~\\ref{tab:DEros}. \nFollowing two observations can be drawn:\n\\begin{enumerate}\n\\item For problems having optimum at one of the boundaries,\n {\\it SetOnBoundary} approach performs the best. This is not a surprising\n result.\n\\item However, for problems having the optimum near the center of the\n allowable range, almost all eight algorithms perform in a similar\n manner.\n\\item For problems having their optimum close to one of the boundaries,\nthe proposed IP and existing exponential approaches perform better than\nthe rest of the approaches with DE.\n\\end{enumerate}\nDespite the differences, somewhat similar performances of different constraint-handling\napproaches with DE indicates\nthat the DE is an efficient optimization algorithm and its performance\nis somewhat less dependent on the choice of constraint-handling scheme\ncompared to the PSO algorithm.\n\\begin{table*}[ht]\n\\begin{footnotesize}\n\\begin{center}\n\\caption{Results on $F_{\\rm elp}$ with DE for $10^{-10}$ termination criterion.}\n\\begin{tabular}{|lrrr|} \\hline \n{{Strategy}} & {{Best}}& {{Median}}& {{Worst}} \\\\ \\hline \\hline\n\\multicolumn{4}{|c|}{$F_{\\rm elp}$ in [0,10]: On the Boundary} \\\\ \\hline\nIP Spread & 25,600 & 26,850 &27,650 \\\\ \nIP Confined & 22,400 &23,550 &24,200 \\\\ \nExp. Spread & 38,350 & 39,800 &41,500 \\\\ \nExp. Confined & 19,200 & 20,700 & 21,900 \\\\ \nPeriodic & 42,400 & 43,700 & 45,050 \\\\\nRandom & 40,650 & 43,050 & 44,250 \\\\ \nSetOnBoundary & {\\bf 2,850} & {\\bf 3,350} & {\\bf 3,900} \\\\ \nShrink & 4,050 & 4,900 & 5,850 \\\\ \\hline \\hline\n\\multicolumn{4}{|c|}{$F_{\\rm elp}$ in [-10,10]: At the Center} \\\\ \\hline \nIP Spread & 29,950 & 31,200 & 32,500 \\\\ \nIP Confined & {\\bf 29,600} & {\\bf 31,200} & {\\bf 32,400} \\\\\nExp. Spread & 29,950 &31,300 & 32,400 \\\\ \nExp. Confined & 30,500& 31,400 & 32,250 \\\\% \\hline\nPeriodic & 29,650 & 31,300 & 32,400 \\\\\nRandom & 30,000 & 31,200 & 31,250 \\\\ \nSetOnBoundary & 29,850 & 31,200 & 32,700 \\\\\nShrink & 30,300 & 31,250 &32,750 \\\\ \\hline \\hline\n\\multicolumn{4}{|c|}{$F_{\\rm elp}$ in [-1,10]: Close to Boundary} \\\\ \\hline\nIP Spread & 28,550 & 29,600 & 30,550 \\\\\nIP Confined & 28,500 & 29,500 & 30,650 \\\\% \\hline\nExp. Spread & {\\bf 28,050} & {\\bf 28,900} & {\\bf 29,850} \\\\\nExp. 
Confined& 28,150 & 29,050 & 29,850 \\\\\nPeriodic & 29,850 & 30,850 & 32,100 \\\\% \\hline\nRandom & 28,900 & 30,200 & 31,000 \\\\\nSetOnBoundary & 28,650 & 29,600 & 30,500 \\\\ \nShrink & 28,800 & 29,900 & 31,200 \\\\ \\hline \\hline\n\\end{tabular}\n\\label{tab:DEellp}\n\\end{center}\n\\end{footnotesize}\n\\end{table*}\n\\begin{table*}[ht]\n\\begin{center}\n\\begin{footnotesize}\n\\caption{Results on $F_{\\rm sch}$ with DE for $10^{-10}$ termination criterion.}\n\\begin{tabular}{|lrrr|} \\hline \n{{Strategy}} & {{Best}}& {{Median}}& {{Worst}} \\\\ \\hline \\hline\n\\multicolumn{4}{|c|}{ {$F_{\\rm sch}$ in [0,10]: On the Boundary}} \\\\ \\hline\nIP Spread & 26,600 & 27,400 & 28,000 \\\\\nIP Confined & 22,450 & 23,350 & 24,300 \\\\ \nExp. Spread & 40,500 & 42,050 & 43,200 \\\\\nExp. Confined& 19,650 & 20,350 & 22,050 \\\\ \nPeriodic & 44,700 & 46,300 & 48,250 \\\\ \nRandom & 43,850 & 45,150 & 47,000 \\\\% \\hline\nSetOnBoundary & {\\bf 2,100 } &{\\bf 3,100 } & {\\bf 3,750 } \\\\\nShrink & 3,450 & 4,400 & 5,100 \\\\ \\hline \\hline\n\\multicolumn{4}{|c|}{ {$F_{\\rm sch}$ in [-10,10]: At the Center}} \\\\ \\hline\nIP Spread & {\\bf 258,750} & {\\bf 281,650} & {\\bf 296,300} \\\\\nIP Confined & 268,150 & 283,050 & 300,450 \\\\\nExp. Spread & 266,850 & 283,950 & 304,500 \\\\\nExp. Confined & 266,450 & 283,700 & 305,550 \\\\\nPeriodic & 269,700 & 284,100 & 310,100 \\\\\nRandom & 263,300 & 282,600 & 306,250 \\\\% \\hline\nSetOnBoundary & 267,750 & 284,550 & 298,850 \\\\\nShrink & 263,600 & 282,750 & 304,350 \\\\ \\hline \\hline\n\\multicolumn{4}{|c|}{ {$F_{\\rm sch}$ in [-1,10]: Close to Boundary}} \\\\ \\hline\nIP Spread & {\\bf 228,950} & {\\bf 242,300} & {\\bf 255,700} \\\\% \\hline\nIP Confined & 232,200 & 243,900 & 263,400 \\\\\nExp. Spread & 227,550 & 243,000 & 261,950 \\\\\nExp. Confined & 228,750 & 243,800 & 262,500 \\\\\nPeriodic & 231,950 & 247,150 & 260,700 \\\\ \nRandom & 228,550 & 244,850 & 261,900 \\\\ \nSetOnBoundary & 237,100 & 255,750 & 266,400 \\\\ \nShrink & 234,000 & 253,250 & 275,550 \\\\ \\hline\n\\end{tabular}\n\\label{tab:DEsch}\n\\end{footnotesize}\n\\end{center}\n\\end{table*}\n\\begin{table*}[ht]\n\\begin{center}\n\\begin{footnotesize}\n\\caption{Results on $F_{\\rm ack}$ with DE for $10^{-10}$ termination criterion.}\n\\begin{tabular}{|lrrr|} \\hlin\n{{Strategy}} & {{Best}}& {{Median}}& {{Worst}} \\\\ \\hline \\hline\n\\multicolumn{4}{|c|}{ {$F_{\\rm ack}$ in [0,10]: On the Boundary}} \\\\ \\hline\nIP Spread & 43,400 & 44,950 & 45,950 \\\\ \nIP Confined & 37,300 & 38,700 & 40,350 \\\\\nExp. Spread Dist. & 66,300 & 69,250 & 71,300 \\\\\nExp. Confined Dist& 32,750 & 34,600 & 36,200 \\\\\nPeriodic & 72,500 & 74,250 & 75,900 \\\\\nRandom & 70,650 & 73,000 & 74,750 \\\\\nSetOnBoundary & {\\bf 2,550} & {\\bf 3,250} &{\\bf 3,950 } \\\\ \nShrink & 3,500 & 4,700 & 5,300 \\\\ \\hline \\hline\n\\multicolumn{4}{|c|}{ {$F_{\\rm ack}$ in [-10,10]: At the Center}} \\\\ \\hline\nIP Spread & {\\bf 50,650} &{\\bf 52,050} & {\\bf 53,450} \\\\\nIP Confined & 51,050 & 52,200 & 53,800 \\\\ \nExp. Spread & 51,200 &52,150 & 53,400 \\\\% \\hline\nExp. Confined& 51,100 &52,300 & 53,850 \\\\ \nPeriodic & 51,250 & 52,250 &53,500 \\\\ \nRandom & 50,950 & 52,200 & 53,450 \\\\\nSetOnBoundary & 50,950 & 52,300 & 53,450 \\\\\nShrink & 50,450 & 52,300 & 53,550 \\\\ \\hline \\hline\n\\multicolumn{4}{|c|}{ {$F_{\\rm ack}$ in [-1,10]: Close to Boundary}} \\\\ \\hline\nIP Spread & 49,100 & 50,650 & 51,650 \\\\ \nIP Confined & 48,650 & 50,400 & 52,100 \\\\% \\hline\nExp. 
Spread & {\\bf 48,300} & {\\bf 49,900} & {\\bf 51,750} \\\\\nExp. Confined& 48,900 & 50,000 & 51,250 \\\\ \nPeriodic & 50,400 & 51,950 & 53,300 \\\\ \nRandom & 50,250 &51,200 &52,150 \\\\\nSetOnBoundary & 49,900 (33) & 51,100 & 53,150 \\\\ \nShrink & 50,200 & 51,400 & 52,750 \\\\ \\hline\n\\end{tabular}\n\\label{tab:DEack}\n\\end{footnotesize}\n\\end{center}\n\\end{table*}\n\\begin{table*}[ht]\n\\begin{center}\n\\begin{footnotesize}\n\\caption{Results on $F_{\\rm ros}$ with DE for $10^{-10}$ termination criterion.}\n\\begin{tabular}{|lrrr|} \\hline \n{{Strategy}} & {{Best}}& {{Median}}& {{Worst}} \\\\ \\hline \\hline\n\\multicolumn{4}{|c|}{ {$F_{\\rm ros}$ in [1,10]: On the Boundary}} \\\\ \\hline\nIP Spread & 38,850 & 62,000 & 89,700 \\\\ \nIP Confined & 24,850 & 45,700 & 73,400 \\\\ \nExp. Spread & 57,100 & 86,800 & 118,600 \\\\\nExp. Confined & 16,600 & 21,400 &79,550 \\\\ \nPeriodic & 69,550 & 93,500 & 18,1150 \\\\\nRandom & 65,850 &92,950 &157,600 \\\\% \\hline\nSetOnBoundary & {\\bf 2,950} & {\\bf 4,700} & {\\bf 30,450} \\\\ \nShrink & 5,450 & 8,150 &55,550 \\\\ \\hline \\hline\n\\multicolumn{4}{|c|}{ {$F_{\\rm ros}$ in [-8,10]: At the Center}} \\\\ \\hline\nIP Spread & 133,350 (41) & 887,250& 995,700 \\\\\nIP Confined & 712,500 (44) & 854,800 & 991,400 \\\\\nExp. Spread & {390,700} (48) & {866,150} & {998,950} \\\\\nExp. Confined & 138,550 (40) &883,500 & 994,350 \\\\ \nPeriodic & 764,650 (39) & 874,700 & 999,650 \\\\% \\hline\nRandom & {\\bf 699,400} (49) & {\\bf 885,450} & {\\bf 999,600} \\\\ \nSetOnBoundary & 743,600 (38) & 865,450& 995,500 \\\\\nShrink & 509,900 (40) & 873,450 & 998,450 \\\\ \\hline \\hline\n\\multicolumn{4}{|c|}{ {$F_{\\rm ros}$ in [0,10]: Close to Boundary}} \\\\ \\hline\nIP Spread & 36,850 & 78,700 & 949,700 \\\\\nIP Confined & 46,400 (46) & 95,900 & 891,450 \\\\ \nExp. Spread & 49,550 (49) & 85,900 & 968,200 \\\\% \\hline\nExp. Confined & 87,300 (43) & 829,200 & 973,350 \\\\ \nPeriodic & {\\bf 38,750} &{\\bf 62,200} & {\\bf 94,750} \\\\\nRandom & 41,200 & 61,300 & 461,500 \\\\\nSetOnBoundary & 8.23E+00 (DNC) & 1.62E+01 & 1.89E+01 \\\\% \\hline\nShrink & 252,650 (9) & 837,700 & 985,750 \\\\ \\hline\n\\end{tabular}\n\\label{tab:DEros}\n\\end{footnotesize}\n\\end{center}\n\\end{table*}\n\\subsection{Results with Real-Parameter Genetic Algorithms (RGAs)}\n\\label{subsec:GAresults}\nWe have used two real-parameter GAs in our study here:\n\\begin{enumerate}\n\\item {\\em Standard-RGA\\\/} using the simulated binary crossover (SBX) \\cite{debramb} operator\nand the polynomial mutation operator\n \\cite{debBookMO}. In this approach, variables are expressed as\n real numbers initialized within the allowable range of each\n variable. The SBX and polynomial mutation operators can create\ninfeasible solutions. Violated boundary, if any, is handled using one\nof the approaches studied in this paper. Later we shall investigate a\nrigid boundary implementation of these operators which ensures\ncreation of feasible solutions in every recombination and mutation operations.\n\n\\item {\\em Elitist-RGA} in which two newly created offsprings are compared against the two\nparents, and the best two out of these four \nsolutions are retained as parents (thereby introducing elitism). Here, the \noffspring solutions are created using non-rigid\n versions of SBX and polynomial mutation operators. \nAs before, we test eight different constraint-handling approaches and, later\nexplore a rigid boundary implementation of the operators in presence of \nelite preservation. 
\end{enumerate}

Parameters for the RGAs are chosen as follows: a population size of 100,
crossover probability $p_{c}=0.9$, mutation probability $p_{m}=0.05$, distribution index for crossover $n_{dist.,c}=2$,
and distribution index for mutation $n_{dist.,m}=100$.
The results for the Standard-RGA are shown in Tables~\ref{tab:RGAellp} to~\ref{tab:RGAros}
for the four test problems. Tables~\ref{tab:GA-elitist-elp}
to~\ref{tab:GA-elitist-ros} show results using the Elitist-RGA.
The following key observations can be made:
\begin{enumerate}
\item For all four test problems, Standard-RGA shows convergence
  \textit{only} when the optimum is on the boundary.

\item Elitist-RGA shows good convergence on $F_{\rm elp}$ when the optimum is on the boundary, and
only partial convergence when the optimum is at the other locations.
For the other three problems, convergence is obtained only when the optimum is on the boundary.

\item Overall, the performance of Elitist-RGA is comparable to or slightly better than that of Standard-RGA.
\end{enumerate}

The \textit{Did Not Converge} cases can be explained by the fact that
the SBX operator creates solutions around the parents, with a spread that
shrinks as the parents come closer to each other. This property is a likely cause of premature
convergence as the population approaches the optimum.
Furthermore, the results suggest that the elitism implemented in this study
(parent-child comparison) is not very effective.

Although the RGAs are able to locate the region of the optimum,
they are unable to fine-tune the solution due to undesired
properties of the generation scheme. This emphasizes the fact that the
generation scheme is primarily
responsible for creating good solutions, and
the constraint-handling methods cannot act as a surrogate
for generating efficient solutions.
Each step of the evolutionary
search should be designed effectively in order to achieve overall success.
One could argue that strategies such as increasing the mutation rate (in order to promote diversity and avoid
premature convergence) should be tried; however, the creation of good and meaningful solutions
by the generation scheme remains an important and desired fundamental feature.

As expected, when the optimum is on the boundary, \textit{SetOnBoundary} finds it most efficiently, within a minimum number
of function evaluations. As in PSO, the performance of
\textit{Exp. Confined} is better than that of
\textit{Exp. Spread}.
\textit{Periodic} and \textit{Random} show comparable
or slightly worse performances (these mechanisms have no preference
for creating solutions close to the boundary and instead promote spread of
the population).

\begin{table*}[ht]
\begin{footnotesize}
\begin{minipage}[b]{1.0\linewidth}
\begin{center}
\caption{Results on $F_{\rm elp}$ with Standard-RGA for termination criterion of $10^{-10}$.}
\begin{tabular}{|lrrr|} \hline
{{Strategy}} & {{Best}}& {{Median}}& {{Worst}} \\ \hline \hline
\multicolumn{4}{|c|}{ {$F_{\rm elp}$ in [0,10]}} \\
IP Spread & 9,200 & 10,500 & 12,900 \\
IP Confined & 7,900 & 9,400 & 10,900 \\
Exp. Spread & 103,100 (6) & 718,900 & 931,200 \\
Exp.
Confined & 4,500 & 5,700 & 7,000 \\\\% \\hline\nPeriodic & 15,200 (1) & 15,200 & 15,200 \\\\\nRandom & 68,300 (12) & 314,700 & 939,800 \\\\\nSetOnBoundary & {\\bf 1,800} & {\\bf 2,400} & {\\bf 2,800} \\\\\nShrink & 3,700 & 5,100 & 6,600 \\\\ \\hline \\hline\n\\multicolumn{4}{|c|}{ {$F_{\\rm elp}$ in [-10,10]}} \\\\% \\hline\n\\multicolumn{4}{|c|}{ \t{2.60e-02 \\textit{(DNC)}}} \\\\ \\hline \\hline\n\\multicolumn{4}{|c|}{ {$F_{\\rm elp}$ in [-1,10]}} \\\\ \n\\multicolumn{4}{|c|}{\t{\\textit{1.02e-02 (DNC)}}}\t\t\\\\ \\hline\n\\end{tabular}\n\\label{tab:RGAellp}\n\\end{center}\n\\end{minipage}\n\\end{footnotesize}\n\\end{table*}\n\n\\begin{table*}[ht]\n\\begin{footnotesize}\n\\begin{minipage}[b]{1.0\\linewidth}\n\\begin{center}\n\\caption{Results on $F_{\\rm sch}$ with Standard-RGA for termination criterion of $10^{-10}$}\n\\begin{tabular}{|lrrr|} \\hline\n{{Strategy}} & {{Best}}& {{Median}}& {{Worst}} \\\\ \\hline \\hline\n\\multicolumn{4}{|c|}{ {$F_{\\rm sch}$ in [0,10]}} \\\\% \\hline\nIP Spread & 6,800 &9,800 & 11,800 \\\\% \\hline\nIP Confined & 6,400 & 8,200 & 10,300 \\\\\nExp. Spread & 21,200 (47) & 180,000 & 772,200 \\\\% \\hline\nExp. Confined & 4,300 & 5,500 & 6,300 \\\\% \\hline\nPeriodic & 14,800 (26) & 143,500 & 499,400 \\\\\nRandom & 8,700 (43) & 195,200 & 979,300 \\\\% \\hline\nSetOnBoundary & {\\bf 1,800} & {\\bf 2,300} & {\\bf 2,900} \\\\% \\hline\nShrink. & 3,600 & 4,600 & 5,500 \\\\ \\hline \\hline\n\\multicolumn{4}{|c|}{ {$F_{\\rm sch}$ in [-10,10]}} \\\\\n\\multicolumn{4}{|c|}{ {\t\\textit{1.20e-01 (DNC)}}} \\\\ \\hline \\hline\n\\multicolumn{4}{|c|}{ {$F_{\\rm sch}$ in [-1,10]}} \\\\% \\hline\n\\multicolumn{4}{|c|}{ {\\textit{8.54e-02 (DNC)}}} \\\\ \\hline\n\\end{tabular}\n\\label{tab:RGAsch}\n\\end{center}\n\\end{minipage}\n\\end{footnotesize}\n\\end{table*}\n\n\\begin{table*}[ht]\n\\begin{footnotesize}\n\\begin{minipage}[b]{1.0\\linewidth}\n\\begin{center}\n\\caption{Results on $F_{\\rm ack}$ with Standard-RGA for termination criterion of $10^{-10}$}\n\\begin{tabular}{|lrrr|} \\hline\n{{Strategy}} & {{Best}}& {{Median}}& {{Worst}} \\\\ \\hline \\hline\n\\multicolumn{4}{|c|}{ {$F_{\\rm ack}$ in [0,10]}} \\\\\nIP Spread & 12,100 & 22,600 & 43,400 \\\\\nIP Confined & 9,800 & 13,200 & 16,400 \\\\ \nExp. Spread & 58,100 (29) & 355,900 & 994,000 \\\\ \nExp. Confined& 6,300 & 9,100 & 11,900 \\\\ \nPeriodic & 19,600 (46) & 122,300 &870,200 \\\\ \nRandom &35,700 (38) & 229,200 & 989,500 \\\\ \nSetOnBoundary & {\\bf 1,800} & {\\bf 2,500} & {\\bf 3,100} \\\\% \\hline\nShrink &4,200 & 5,700 & 8,600 \\\\ \\hline \\hline\n\\multicolumn{4}{|c|}{ {$F_{\\rm ack}$ in [-10,10]}} \\\\\n\\multicolumn{4}{|c|}{ { \\textit{7.76e-02(DNC)}}} \\\\ \\hline \\hline\n \n\\multicolumn{4}{|c|}{ {$F_{\\rm ack}$ in [-1,10]}} \\\\% \\hline \\hline\n\\multicolumn{4}{|c|}{ {\\textit{4.00e-02 (DNC)}}} \\\\ \\hline\n \n\\end{tabular}\n\\label{tab:RGAack}\n\\end{center}\n\\end{minipage}\n\\end{footnotesize}\n\\end{table*}\n\\begin{table*}\n\\begin{footnotesize}\n\\begin{minipage}[b]{1.0\\linewidth}\n\\begin{center}\n\\caption{Results on $F_{\\rm ros}$ with Standard-RGA for termination criterion of $10^{-10}$}\n\\begin{tabular}{|lrrr|} \\hline\n{{Strategy}} & {{Best}}& {{Median}}& {{Worst}} \\\\ \\hline \\hline\n\\multicolumn{4}{|c|}{ {$F_{\\rm ros}$ in [1,10]: On the Boundary}} \\\\\nIP Spread & 12,400 (39)& 15,800& 20,000 \\\\\nIP Confined & 9,400 (39)& 11,800& 13,600 \\\\% \\hline\nExp. Spread &\\textit{9.73e+00} \\textit{(DNC)} & \\textit{1.83e+00}& \\textit{2.43e+01} \\\\% \\hline\nExp. 
Confined & 6,000 & 6,900& 8,200 \\\\\nPeriodic & \\textit{6.30E+01} \\textit{(DNC)}& \\textit{4.92e+02}& \\textit{5.27e+04} \\\\\nRandom & \\textit{3.97e+02} \\textit{(DNC)} & \\textit{9.28e+02}& \\textit{1.50e+03} \\\\% \\hline\nSetOnBoundary & {\\bf 1,900} & {\\bf 2,700} & {\\bf 3,400} \\\\% \\hline\nShrink & 4,100 & 5,200 & 6,500 \\\\ \\hline \\hline\n\\multicolumn{4}{|c|}{ {$F_{\\rm ros}$ in [-8,10]}} \\\\% \\hline \n\\multicolumn{4}{|c|}{ { \\textit{3.64e+00 (DNC)}}} \\\\ \\hline \\hline\n\\multicolumn{4}{|c|}{ {$F_{\\rm ros}$ in [1,10]: On the Boundary}} \\\\\n\\multicolumn{4}{|c|}{ {\\textit{1.04e+01(DNC)}}} \\\\ \\hline\n\\end{tabular}\n\\label{tab:RGAros}\n\\end{center}\n\\end{minipage}\n\\end{footnotesize}\n\\end{table*}\n\n\n\n\n\\clearpage\n\\begin{table*}[ht]\n\\begin{footnotesize}\n\\begin{center}\n\\caption{Results on $F_{\\rm elp}$ with Elite-RGA for $10^{-10}$ termination criterion.}\n\\begin{tabular}{|lrrr|} \\hline\n{{Strategy}} & {{Best}}& {{Median}}& {{Worst}} \\\\ \\hline \\hline\n\\multicolumn{4}{|c|}{ {$F_{\\rm elp}$ in [0,10]}} \\\\\nIP Spread & 6,600 & 8,000 & 9,600 \\\\% \\hline\nIP Confined & 6,300 & 8,100 & 9,800 \\\\\nExp. Spread & 4,800 & 6,900 & 8,300 \\\\\nExp. Confined & 4,600 & 5,800 & 6,700 \\\\% \\hline\nPeriodic & 6,500 & 8,800 & 11,500 \\\\\nRandom & 6,400 & 7,900 & 10,300 \\\\\nSetOnBoundary & {\\bf 2,200} & {\\bf 2,600} & {\\bf 3,500} \\\\\nShrink & 4,000 & 5,200 & 6,700 \\\\ \\hline \\hline\n\\multicolumn{4}{|c|}{ {$F_{\\rm elp}$ in [-10,10]}} \\\\\nIP Spread & 980,200 (1) & 980,200 & 980,200 \\\\% \\hline\nIP Confined & 479,000 (1) & 479,000 & 479,000 \\\\% \\hline\nExp. Spread & \\textit{2.06e-01} \\textit{(DNC)} & \\textit{4.53e-01} \n& \\textit{4.86e-01} \\\\\nExp. Confined & 954,400 (1)\t & 954,400 & 954,400 \\\\% \\hline\nPeriodic & \\textit{1.55E-01} \\textit{(DNC)}& \\textit{2.48E-01} & \\textit{2.36E-01} \\\\\nRandom & \\textit{1.92E-01} \\textit{(DNC)}& \\textit{2.00E-01} & \\textit{2.46E-01} \\\\\nSetOnBoundary & \\textit{2.11E-01} \\textit{(DNC)}&2.95E-01 & 1.94E-01 \\\\% \\hline\nShrink & {\\bf 530,900} (3) & {\\bf 654,000} & {\\bf 779,000} \\\\ \\hline \\hline\n\\multicolumn{4}{|c|}{ {$F_{\\rm elp}$ in [-1,10]}} \\\\\nIP Spread & 803,400 (5) & 886,100 & 947,600 \\\\\nIP Confined & 643,300 (2) & 643,300 &963,000 \\\\% \\hline\nExp. Spread & 593,300 (3) & 628,900 & 863,500 \\\\\nExp. Confined & 655,400 (3) & 940,500 & 946,700 \\\\% \\hline\nPeriodic & 653,800 (3) & 842,900 &843,100 \\\\\nRandom & 498,500 (2) & 498,500 &815,500 \\\\\nSetOnBoundary & {\\bf 593,800} (5) & {\\bf 870,500} & {\\bf 993,500} \\\\\nShrink & 781,000 (2) & 781,000 &928,300 \\\\ \\hline\n\\end{tabular}\n\\label{tab:GA-elitist-elp}\n\\end{center}\n\\end{footnotesize}\n\\end{table*}\n\\begin{table*}[ht]\n\\begin{footnotesize}\n\\begin{center}\n\\caption{Results on $F_{\\rm sch}$ with Elite-RGA for termination criterion of $10^{-10}$}\n\\begin{tabular}{|lrrr|} \\hline \\hline\n{{Strategy}} & {{Best}}& {{Median}}& {{Worst}} \\\\ \\hline \\hline\n\\multicolumn{4}{|c|}{ {$F_{\\rm sch}$ in [0,10]}} \\\\ \\hline\nIP Spread & 5,000 & 6,500 & 7,900 \\\\% \\hline\nIP Confined & 4,900 & 6,500 & 7,900 \\\\% \\hline\nExp. Spread & 4,300 & 5,800 & 7,800 \\\\\nExp. 
Confined & 4,300 & 4,900 & 5,600 \\\\\nPeriodic & 5,400 & 7,000 & 11,300 \\\\\nRandom & 5,300 & 6,600 & 8,500 \\\\\nSetOnBoundary & {\\bf 1,600} & {\\bf 2,200} & {\\bf 2,600} \\\\% \\hline\nShrink & 3,100 & 4,200 & 5,400 \\\\ \\hline \\hline\n\\multicolumn{4}{|c|}{ {$F_{\\rm sch}$ in [-10,10]}} \\\\\n\\multicolumn{4}{|c|}{ {\\textit{8.12e-05(DNC)}}} \\\\ \\hline \\hline\n\\multicolumn{4}{|c|}{ {$F_{\\rm sch}$ in [-1,10]}} \\\\% \\hline\n\\multicolumn{4}{|c|}{ { \\textit{1.61e-01(DNC)}}} \\\\ \\hline \n\\end{tabular}\n\\label{tab:GA-elitist-sch}\n\\end{center}\n\\end{footnotesize}\n\\end{table*}\n\\begin{table*}[ht]\n\\begin{footnotesize}\n\\begin{center}\n\\caption{Results on $F_{\\rm ack}$ with Elite-RGA for termination criterion of $10^-{10}$}\n\\begin{tabular}{|lrrr|} \\hline \\hline\n{{Strategy}} & {{Best}}& {{Median}}& {{Worst}} \\\\ \\hline \\hline\n\\multicolumn{4}{|c|}{ {$F_{\\rm ack}$ in [0,10]}} \\\\ \\hline\nIP Spread & 6,300 &8,700 &46,500 \\\\% \\hline\nIP Confined & 6,800 &9,200 &32,000 \\\\\nExp. Spread & 5,600 &6,800 &8,700 \\\\\nExp. Confined & 5,200 &7,800 &9,900 \\\\\nPeriodic & 6,300 &9,300 &12,200 \\\\\nRandom & 6,200 &8,300 &53,700 \\\\\nSetOnBoundary & {\\bf 1,900} &{\\bf 2,500} &{\\bf 4,000} \\\\% \\hline\nShrink & 3,900 &5,100 &7,700 \\\\ \\hline \\hline\n\\multicolumn{4}{|c|}{ {$F_{\\rm ack}$ in [-10,10]}} \\\\\n\\multicolumn{4}{|c|}{ {\\textit{1.03e-01(DNC)}}} \\\\ \\hline\n\\multicolumn{4}{|c|}{ {$F_{\\rm ack}$ in [-1,10]}} \\\\\n\\multicolumn{4}{|c|}{ {\\textit{1.15e-00 (DNC)}}} \\\\ \\hline\n\\end{tabular}\n\\label{tab:GA-elitist-ack}\n\\end{center}\n\\end{footnotesize}\n\\end{table*}\n \n\\begin{table*}[ht]\n\\begin{footnotesize}\n\\begin{center}\n\\caption{Results on $F_{\\rm ros}$ with Elite-RGA for termination criterion of $10^-{10}$}\n\\begin{tabular}{|lrrr|} \\hline \\hline\n{{Strategy}} & {{Best}}& {{Median}}& {{Worst}} \\\\ \\hline \\hline\n\\multicolumn{4}{|c|}{ {$F_{\\rm ros}$ in [0,10]}} \\\\ \\hline\nIP Spread & 9,900 (13) &12,500 &14,000 \\\\ \nIP Confined & 10,100 (12)& 12,100 &14,400 \\\\ \nExp. Spread & 8,500 (10) &11,000 &15,400 \\\\ \nExp. 
Confined & 6,600 (30) &7,800 &8,900 \\\\ \nPeriodic & 9,500 (10) &13,300 &16,800 \\\\ \nRandom & 14,000 (3)& 15,300 &16,100 \\\\ \nSetOnBoundary & {\\bf 2,300} (44) & {\\bf 3,200} & {\\bf 4,500} \\\\ \nShrink & 4,500 (32) &6,100 &8,100 \\\\ \\hline \\hline\n\\multicolumn{4}{|c|}{ {$F_{\\rm ros}$ in [-8,10]}} \\\\ \n\\multicolumn{4}{|c|}{ {\\textit{1.27e-00 (DNC)}}} \\\\ \\hline \\hline\n \n\\multicolumn{4}{|c|}{ {$F_{\\rm ros}$ in [1,10]: On the Boundary}} \\\\\n\\multicolumn{4}{|c|}{ {\\textit{1.49e-00 (DNC)}}} \\\\ \\hline \n\\end{tabular}\n\\label{tab:GA-elitist-ros}\n\\end{center}\n\\end{footnotesize}\n\\end{table*}\n\n\\subsubsection{RGAs with Rigid Boundary}\\label{subsec:NonelitistGAresults}\n\n\\begin{table*}[htb]\n\\begin{footnotesize}\n\\caption{RGA with rigid boundary with termination criterion $10^{-10}$}\n{\\footnotesize\\begin{center}\n\\begin{tabular}{|lrrr|} \\hline \n\t\t\\multicolumn{4}{|c|}{ Optimum on the boundary }\t\t\t\\\\\t\\hline\t\\hline\t\n{{Strategy}} & {{Best}}& {{Median}}& {{Worst}} \\\\ \\hline \\hline\n$F_{elp}$\t\t&\t8,100 & 8,500 & 8,800\t\t\\\\\n$F_{sch}$\t\t&\t7,800 & 8,100 & 8,300\t\t\\\\\n$F_{ack}$\t\t&\t9,500 & 10,100 & 10,800\t\t\\\\\n$F_{ros}$\t\t&\t10,100 (39) & 10,900 & 143,600\t\t\\\\\t\\hline\t\\hline \n\\multicolumn{4}{|c|}{ Optimum in the center}\t\t\\\\\n\\multicolumn{4}{|c|}{ {\\textit{3.88e-02 (DNC)}}} \\\\\n\\hline \\hline \n\\multicolumn{4}{|c|}{ Optimum close to the edge of the boundary } \\\\\t\n\\multicolumn{4}{|c|}{ {\\textit{9.44e-03 (DNC)}}} \\\\\n\\hline \\hline \n\\end{tabular}\n\\label{tab:rga-standard-rigid-boundary}\n\\end{center}}\n\\end{footnotesize}\n\\end{table*}\n\nWe also tested RGA (standard and its elite version) \nwith a rigid bound consideration in its operators. In this case, the\nprobability distributions of both\n{SBX} and polynomial mutation operator are changed in a way so as to\nalways create a feasible solution. \nIt is found that the Standard-RGA with rigid bounds shows \nconvergence only when optimum is on the boundary (Table~\\ref{tab:rga-standard-rigid-boundary}). \nThe performance of Elite-RGA with rigid bounds is slightly better \n(Table~\\ref{tab:rga-elite-rigid-boundary}).\nOverall, SBX operating within the rigid bounds is found to perform slightly better\ncompared to the RGAs employing boundary-handling mechanisms. \nHowever, as mentioned earlier, in the scenarios where the generation scheme cannot \nguarantee creation of feasible only solutions there is a necessary\nneed for constraint-handling strategies. 
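For completeness, the following Python sketch shows one common bounded (rigid-boundary) formulation of the polynomial mutation operator; since the exact modification of the operators is not reproduced in the text, this should be read as an illustration of the idea rather than the precise implementation used in the experiments (SBX is bounded analogously).
\begin{verbatim}
import numpy as np

def polynomial_mutation_rigid(y, yl, yu, eta_m=100.0, rng=None):
    """Mutate a single variable value y in [yl, yu] so that the result
    always stays within the rigid bounds (illustrative sketch only)."""
    rng = rng or np.random.default_rng()
    r = rng.random()
    delta1 = (y - yl) / (yu - yl)
    delta2 = (yu - y) / (yu - yl)
    mut_pow = 1.0 / (eta_m + 1.0)
    if r <= 0.5:
        val = 2.0 * r + (1.0 - 2.0 * r) * (1.0 - delta1) ** (eta_m + 1.0)
        deltaq = val ** mut_pow - 1.0
    else:
        val = 2.0 * (1.0 - r) + 2.0 * (r - 0.5) * (1.0 - delta2) ** (eta_m + 1.0)
        deltaq = 1.0 - val ** mut_pow
    y = y + deltaq * (yu - yl)
    return min(max(y, yl), yu)   # numerical safety; y already lies in [yl, yu]
\end{verbatim}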
\begin{table*}[htb]
\begin{footnotesize}
\begin{minipage}[b]{1.0\linewidth}
\caption{Elitist-RGA with rigid boundary for termination criterion of $10^{-10}$.}
\begin{center}
\begin{tabular}{|lrrr|} \hline
{{Problem}} & {{Best}}& {{Median}}& {{Worst}} \\ \hline \hline
\multicolumn{4}{|c|}{ Optimum on the boundary} \\
$F_{\rm elp}$ & 7,300 & 7,900 & 8,400 \\
$F_{\rm sch}$ & 6,500 & 6,900 & 7,500 \\
$F_{\rm ack}$ & 9,400 & 10,400 & 12,200 \\
$F_{\rm ros}$ & 11,000 (10) & 12,700 & 16,400 \\ \hline \hline
\multicolumn{4}{|c|}{ Optimum in the center } \\
\multicolumn{4}{|c|}{ {\textit{1.24e-01 (DNC)}}} \\
\hline \hline
\multicolumn{4}{|c|}{ Optimum close to the boundary edge} \\
$F_{\rm elp}$ & 579,800 (3) & 885,900 & 908,600 \\
$F_{\rm sch}$ & \textit{2.73e+00} \textit{(DNC)} & \textit{6.18e+00} & \textit{1.34e+00} \\
$F_{\rm ack}$ & \textit{1.75e-01} \textit{(DNC)} & \textit{8.38e-01} & \textit{2.93e+00} \\
$F_{\rm ros}$ & \textit{3.29e+00} \textit{(DNC)} & \textit{4.91e+00} & \textit{5.44e+00} \\ \hline
\end{tabular}
\label{tab:rga-elite-rigid-boundary}
\end{center}
\end{minipage}
\end{footnotesize}
\end{table*}
\section{Higher-Dimensional Problems}\label{sec:scale-up}
As the dimensionality of the search space increases, it becomes more
difficult for a search algorithm to locate the optimum.
Constraint-handling methods play an even more critical role
in such cases.
So far in this study, DE has been found to be the best algorithm. Next, we consider all
four unimodal test problems with an increasing problem size: $n=20$, $50$, $100$, $200$, $300$,
and $500$. For all problems the variable bounds were chosen such that the optimum occurred
near the center of the search space. No population scaling is used for DE. The DE parameters are chosen as $F=0.8$ and $CR=0.9$.
For $\alpha = 1.2$ we were able to achieve a high degree of convergence, and the
results are shown in Figure~\ref{fig:DE-scale-up}.
As seen from the figure, although it is expected that the required
number of function evaluations would increase with the
number of variables, the increase is sub-quadratic.
Each case is run $20$ times and the termination criterion
is set to $10^{-10}$. All 20 runs are found to be successful in each case,
demonstrating the robustness of the method in terms of finding the
optimum with a high precision.
For problems with a large number of variables, complex search spaces and
highly nonlinear constraints in particular, such a methodology should be useful
for real-world applications.
\begin{figure}[hbt]
\begin{center}
\includegraphics[scale=1.0]{scale.eps}
\end{center}
\caption{Scale-up study on the four test problems for the DE algorithm with
  the proposed IP-S approach, showing sub-quadratic growth in all
  problems. The slope of a fitted polynomial line is marked within
  brackets for each problem. Linear and quadratic reference lines are shown with dashed lines.}
\label{fig:DE-scale-up}
\end{figure}

It is worth mentioning that the authors also tested other scenarios for
large-scale problems with the optimum on the boundary; in order to achieve convergence
with the IP methods, the values of $\alpha$ had to be reduced significantly.
Without lowering $\alpha$, PSO in particular did not show any convergence.
As expected, in higher dimensions the probability of sampling a point close to the boundary
decreases, and hence a steeper distribution is needed.
However, this highlights the usefulness
of the parameter $\alpha$ in modifying the behavior of the IP methods so as to yield the desired performance.

\section{General Purpose Constraint-Handling}
\label{sec:Constraint-Programming}

So far we have carried out simulations on problems where the constraints
manifest themselves as variable bounds. The IP methods proposed in this paper can easily be
extended to solve nonlinear constrained optimization problems (inclusive of
variable bounds).\footnote{By \textit{General Purpose Constraint-Handling} we mean
the handling of all variable bounds, inequality constraints and equality constraints.}

As an example, let us consider a generic inequality constraint function $g_j(\vec{x})
\geq 0$ -- the $j$-th constraint in a set of $J$ inequality
constraints. In an optimization algorithm, every created (offspring) solution
$\vec{x}^c$ at an iteration must be
checked for its feasibility. If
$\vec{x}^c$ satisfies all $J$ inequality constraints, the solution is
feasible and the algorithm can proceed with the created solution.
But if $\vec{x}^c$ does not
satisfy one or more of the $J$ constraints, the solution is
infeasible and should be repaired before proceeding further.

Let us
illustrate the procedure using the inverse parabolic (IP) approach described in
Section~\ref{sec:IP}, though other constraint-handling
methods discussed earlier may also be used. The IP approaches require us to locate the
intersection points $\vec{v}$ and $\vec{u}$: the two bounds along the
direction ($\vec{x}^p-\vec{x}^c$), where $\vec{x}^p$ is one of the
parent solutions that created the offspring solution (see Figure~\ref{fig:constr}). These critical intersection
points can be found by computing the roots of each constraint
$g_j(\vec{x})$ along this direction and then selecting the
roots closest to the parent location on either side, as described below.
\begin{figure}[hbt]
\begin{center}
\includegraphics[scale=1.0]{constr.eps}
\caption{Extension of constraint-handling approaches to constrained optimization problems.}
\label{fig:constr}
\end{center}
\end{figure}
We define a parameter
$\alpha$ as the extent of a point from $\vec{x}^c$ along this direction, as
follows:
\begin{equation}
\vec{x}(\alpha) = \vec{x}^c + \alpha
\frac{\vec{x}^p-\vec{x}^c}{\|\vec{x}^p-\vec{x}^c\|}.
\label{eq:mapping}
\end{equation}

Substituting the above expression for $\vec{x}(\alpha)$\footnote{Note that the $\alpha$
used here for parameterizing points should not be confused with the parameter $\alpha$ introduced in
the proposed constraint-handling methods.} in the $j$-th constraint function, we obtain the following
root-finding problem for the $j$-th constraint:
\begin{equation}
g_j(\vec{x}(\alpha)) = 0.
\end{equation}
Let us say the roots of the above equation are $\alpha_k^j$ for
$k=1,2,\ldots,K_j$. The above procedure can now be repeated for all $J$
inequality constraints and the corresponding roots can be found one constraint at a time.
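Since the constraints are in general nonlinear, the roots $\alpha_k^j$ typically have to be found numerically. The Python sketch below illustrates one simple way of doing this (the grid resolution and the bisection tolerance are our own choices and not part of the original procedure); the roots collected for all constraints can then be pooled to obtain the bounds on $\alpha$ described next.
\begin{verbatim}
import numpy as np

def line_roots(g, x_c, x_p, alpha_max, samples=200, tol=1e-10):
    """Roots of g(x(alpha)) = 0 with x(alpha) = x_c + alpha * e and
    e = (x_p - x_c)/||x_p - x_c||, searched on [0, alpha_max] by grid
    sampling followed by bisection (illustrative sketch only)."""
    e = (x_p - x_c) / np.linalg.norm(x_p - x_c)
    h = lambda a: g(x_c + a * e)
    grid = np.linspace(0.0, alpha_max, samples)
    vals = np.array([h(a) for a in grid])
    roots = []
    for lo, hi, flo, fhi in zip(grid[:-1], grid[1:], vals[:-1], vals[1:]):
        if flo == 0.0:
            roots.append(lo)
        elif flo * fhi < 0.0:              # sign change brackets a root
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                if h(lo) * h(mid) <= 0.0:
                    hi = mid
                else:
                    lo = mid
            roots.append(0.5 * (lo + hi))
    return roots
\end{verbatim}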
\nSince the extent of $\\alpha$ needed to reach $\\vec{x}^p$ from $\\vec{x}^c$\nis given as \n\\[\\alpha^p = \\|\\vec{x}^p-\\vec{x}^c\\|,\\]\nwe are now ready to compute the two closest bounds (lower and upper bounds) on\n$\\alpha$ for our consideration, as\nfollows:\n\\begin{eqnarray}\n\\alpha^v &=& \\max \\{\\alpha_k^j | 0 \\leq \\alpha_k^j \\leq \\alpha^p,\n \\forall\nk, \\forall j\\}, \\\\ \n\\alpha^u &=& \\min \\{\\alpha_k^j | \\alpha_k^j \\geq \\alpha^p, \\forall\nk, \\forall j\\}.\n\\end{eqnarray}\nThe IP-S and IP-C approaches presented in Section~\\ref{sec:IP} can now be used to map the violated variable\nvalue $x_i^c$ into the feasible region using $d=\\alpha^v$ (the lower bound),\n$d_u=\\alpha^u$ (the upper bound) and $d^p=\\alpha^p$ (location of the parent). \nIt is clear that the only difficult aspect of this method is to find\nthe multiple intersection points in the presence of nonlinear constraints. \n\nFor the sake of completeness, we show here that the two bounds for each variable, $x_i\\geq x_i^{(L)}$ and\n$x_i\\leq x_i^{(U)}$, used in previous sections can also be treated\nuniformly using the above described approach. The two bounds can be written together as follows:\n\\begin{equation}\n\\left(x_i-x_i^{(L)}\\right)\\left(x_i^{(U)}-x_i\\right) \\geq 0.\n\\end{equation}\nNote that a simultaneous non-positive value of both bracketed terms\nis not possible, thus the only way to satisfy the above inequality is to\nmake each bracketed term non-negative. The above inequality can therefore be\nconsidered as a single quadratic constraint function instead of two\nindependent variable bounds, and treated as a combined nonlinear\nconstraint by finding both roots of the resulting quadratic root-finding equation. \n\nFinally, the above procedure can also be extended to handle equality constraints ($h_k(\\vec{x})=0$) with a\nrelaxation as follows: $-\\epsilon_k \\leq h_k(\\vec{x}) \\leq\n\\epsilon_k$. Again, they can be combined together as follows:\n\\begin{equation}\n\\epsilon_k^2-(h_k(\\vec{x}))^2 \\geq 0.\n\\end{equation}\nAlternatively, the above can also be written as $\\epsilon_k -\n|h_k(\\vec{x})| \\geq 0$, which may be useful for non-gradient based\noptimization methods, such as evolutionary algorithms. We now show the\nworking of the above constraint-handling procedure on a number of\nconstrained optimization problems. \n \n\\subsection{Illustrations on Nonlinear Constrained Optimization}\n\nFirst, we consider the three unconstrained problems used in previous\nsections as $f(\\vec{x})$, but now add an inequality constraint by imposing a\nquadratic constraint that makes the solutions fall within a radius of one unit from\na chosen point $\\vec{o}$:\n\\begin{equation}\n\\begin{array}{rl}\n\\mbox{Minimize} & f(\\vec{x}), \\\\\n\\mbox{subject to} & \\sum_{i=1}^{n} (x_i-o_i)^2 \\leq 1.\t\n\\end{array}\n\\label{eq:convex-opti-problem}\n\\end{equation}\nThere are no explicit variable bounds in the above problem. \nBy choosing different locations of the center\nof the hyper-sphere ($\\vec{o}$), we can have different scenarios for\nthe resulting constrained optimum. \nIf the minimum of the objective function (without constraints)\nlies at the origin, then for $o_i=0$ the unconstrained minimum is also the solution to the\nconstrained problem, and this case is similar to the ``Optimum at the\nCenter'' (but in the context\nof constrained optimization now).\nThe optimal solution is at $x_i=0$ with $f^*=0$. 
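\nTo make the above steps concrete, the following sketch (our own illustration, with hypothetical function names) collects the roots for the hyper-sphere constraint of Equation~(\\ref{eq:convex-opti-problem}), for which they are available in closed form, and assembles $\\alpha^v$, $\\alpha^u$ and $\\alpha^p$. The IP-S (or IP-C) mapping of Section~\\ref{sec:IP} that consumes these quantities is not repeated here, and the fall-back values used when a root set is empty are choices made only for this sketch.\n\\begin{verbatim}\nimport numpy as np\n\ndef sphere_roots(x_c, x_p, o, radius=1.0):\n    """Roots of ||x(alpha) - o||^2 = radius^2 along the unit direction."""\n    d = (x_p - x_c) / np.linalg.norm(x_p - x_c)\n    w = x_c - o\n    # alpha^2 + b*alpha + c = 0, leading coefficient 1 since d is a unit vector\n    b = 2.0 * np.dot(d, w)\n    c = np.dot(w, w) - radius**2\n    disc = b * b - 4.0 * c\n    if disc < 0.0:\n        return []                        # the ray never meets the sphere\n    s = np.sqrt(disc)\n    return [(-b - s) / 2.0, (-b + s) / 2.0]\n\ndef alpha_bounds(all_roots, x_c, x_p):\n    """Assemble alpha^v, alpha^u and alpha^p from the roots of all J constraints."""\n    alpha_p = np.linalg.norm(x_p - x_c)\n    below = [a for a in all_roots if 0.0 <= a <= alpha_p]\n    above = [a for a in all_roots if a >= alpha_p]\n    alpha_v = max(below) if below else 0.0        # sketch fall-back\n    alpha_u = min(above) if above else alpha_p    # sketch fall-back\n    return alpha_v, alpha_u, alpha_p\n\\end{verbatim}\nNote that for a feasible parent and an infeasible offspring, continuity guarantees at least one root of the violated constraint in $[0,\\alpha^p]$, so the first fall-back is essentially never exercised.\n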
DE with IP-S and previous parameter \nsettings is applied to this new constrained problem, and the results from 50 different runs for this case are shown in\nTable~\\ref{tab:convex-constraints-de-1}.\n\\begin{table*}[ht]\n\\begin{footnotesize}\n\\begin{center}\n\\caption{Results for test functions with DE for $o_i=0$ with\n $S=10^{-10}$.}\n\\label{tab:convex-constraints-de-1} \n\\begin{tabular}{|lrrr|} \\hline\n{{Strategy}} & {{Best}} & {{Median}} & {{Worst}} \\\\\\hline \\hline\n$F_{\\rm elp}$ & 22,800 &\t23,750 &\t24,950 \t\t \\\\ \n$F_{\\rm sch}$ & 183,750\t& 206,000 &\t229,150 \\\\\n$F_{\\rm ack}$ & 42,800 &\t44,250\t& 45,500 \t \\\\ \\hline\n\\end{tabular}\n\\end{center}\n\\end{footnotesize}\n\\end{table*}\n\nAs a next case, we consider $o_i = 2$. The constrained\nminimum is now different from that of the unconstrained problem, as\nthe original unconstrained minimum is no more feasible. This case is equivalent to ``Optima on \nthe Constraint Boundary''. Since the optimum value is not zero as before, instead of choosing\na termination criterion based on $S$ value, we allocate a maximum of\none million function evaluations for a run\nand record the obtained optimized solution. The {best fitness} values for \n$f(\\vec{x})$ as $f_{elp}$, $f_{sch}$ and $f_{ack}$ are shown in Table\n~\\ref{tab:convex-constraints-de-2}. For each function, we verified\nthat the obtained optimized solution satisfies the KKT optimality\nconditions \\cite{rekl,debOptiBOOK} suggesting that a truly optimum solution has\nbeen found by this procedure. \n\\begin{table*}[ht]\n\\begin{footnotesize}\n\\begin{center}\n\\caption{Best fitness results for test functions with DE for $o_i=2$.}\n\\begin{tabular}{|ccc|} \\hline\n$F_{\\rm elp}$ & $F_{\\rm sch}$& $F_{\\rm ack}$\t \t\t \\\\ \\hline \\hline\n 640.93 $\\pm$ 0.00 & 8871.06 $\\pm$ 0.39\t& 6.56 $\\pm$ 0.00 \t \\\\ \\hline\n\\end{tabular}\n\\label{tab:convex-constraints-de-2} \n\\end{center}\n\\end{footnotesize}\n\\end{table*}\n \nNext, we consider two additional nonlinear constrained optimization\nproblems (TP5 and TP8) from \\cite{debpenalty} and a well-studied structural design and mechanics problem (`Weld') taken from \\cite{ragsdell1976optimal}.\nThe details on the mechanics of the welded structure and the beam deformation can be found in \\cite{shigley1963engineering,timoshenko1962theory}.\nThese problems have \nmultiple nonlinear inequality constraints and our goal is to demonstrate the performance of our proposed constraint-handling\nmethods. We used DE with\nIP-S method with the following \nparameter settings: $NP=50$, $F=0.7$, $CR=0.5$, and $\\alpha=1.2$ for all three problems.\nA maximum of 200,000 function evaluations were allowed and a termination criteria of $10^{-3}$\nfrom the known optima is chosen. \nThe problem definitions of TP5, TP8 and `Weld' are as follows:\\\\\n\n\\noindent \\textbf{TP5:}\n\\begin{equation}\n\\label{eq:TP5}\n\\begin{array}{rl}\n\\mbox{Minimize} & f(\\vec{x}) = (x_1-10)^2+5(x_2-12)^2+x_3^4+3(x_4-11)^2 \\\\\t\n& \\qquad\t+ 10x_5^6+7x_6^2+x^4_7-4x_6x_7-10x_6-8x_7, \\\\\n\\mbox{subject to} & g_1(\\vec{x}) \\equiv 2x_1^2+3x_2^4+x_3+4x_4^2+5x_5 \\leq 127,\t\\\\\n& g_2(\\vec{x}) \\equiv 7x_1+3x_2+10x_3^2+x_4-x_5 \\leq 282, \\\\\t\n& g_3(\\vec{x}) \\equiv 23x_1+x_2^2+6x_6^2-8x_7 \\leq 196, \\\\\n& g_4(\\vec{x}) \\equiv 4x_1^2+x_2^2-3x_1x_2+2x_3^2 +5x_6-11x_7 \\leq 0,\\\\\t\t\n& -10 \\leq x_i \\leq 10, \\quad i = 1,\\ldots,7. 
\n\\end{array} \n\\end{equation} \n\n\\noindent\\textbf{TP8:}\n\\begin{equation}\n\\label{eq:TP8}\n\\begin{array}{rl}\n\\mbox{Minimize} & f(\\vec{x}) = x_1^2+x_2^2+x_1x_2-14x_1-16x_2+2(x_9-10)^2\\\\\n & \\qquad + 2(x_6-1)^2 + 5x_7^2+7(x_8-11)^2+45+(x_{10}-7)^2 \\\\\n & \\qquad + (x_3-10)^2 + 4(x_4-5)^2 + (x_5-3)^2\\\\\n\\mbox{subject to} & g_1(\\vec{x}) \\equiv 4x_1+5x_2-3x_7+9x_8 \\leq 105,\\\\\n& g_2(\\vec{x}) \\equiv\t10x_1-8x_2-17x_7+2x_8 \\leq 0,\\\\\n& g_3(\\vec{x}) \\equiv -8x_1+2x_2+5x_9-2x_{10} \\leq 12,\\\\\n& g_4(\\vec{x})\\equiv 3(x_1-2)^2 + 4(x_2-3)^2+2x_3^2-7x_4 \\leq 120,\\\\\n& g_5(\\vec{x})\t\\equiv 5x_1^2+8x_2+(x_3-6)^2-2x_4 \\leq 40,\t\\\\\n& g_6(\\vec{x})\t\\equiv\t x_1^2+2(x_2-2)^2-2x_1x_2+14x_5-6x_6 \\leq 0,\\\\\n& g_7(\\vec{x})\t \\equiv\t 0.5(x_1-8)^2+2(x_2-4)^2+3x_5^2-x_6 \\leq 30,\\\\\n& g_8(\\vec{x})\t \\equiv\t -3x_1+6x_2+12(x_9-8)^2-7x_{10} \\leq 0,\\\\\n& -10 \\leq x_i\\leq 10, \\quad i = 1,\\ldots,10. \n\\end{array} \n\\end{equation}\t \n\n\\noindent\\textbf{`Weld':}\n\\begin{equation}\n\\label{eq:weld-problem}\n\\begin{array}{rl}\n\\mbox{Minimize} & f(\\vec{x}) = 1.10471h^2l+0.04811tb(14.0+l),\t \\\\\t\n\\mbox{subject to} & g_1(\\vec{x}) \\equiv \\tau(\\vec{x}) \\leq 13,600, \\\\\n& g_2(\\vec{x}) \\equiv \\sigma(\\vec{x}) \\leq 30,000,\t\\\\\n& g_3(\\vec{x}) \\equiv h-b \\leq 0, \\\\\t \n& g_4(\\vec{x}) \\equiv P_c(\\vec{x}) \\geq 6,000,\\\\\n& g_5(\\vec{x}) \\equiv \\delta(\\vec{x}) \\leq 0.25,\t\t\\\\\n& 0.125\\leq (h,b) \\leq 5, \\mbox{ and } 0.1 \\leq (l,t) \\leq 10, \n\\end{array}\n\\end{equation}\nwhere,\t\t\t \n\\begin{eqnarray*}\n\\tau(\\vec{x})\t&=& \\sqrt{ (\\tau')^2 + (\\tau'')^2 + (l\\tau'\\tau'')\/ \\sqrt{0.25(l^2+(h+t)^2)}}, \\\\\t\t\\\\\t\n\\tau' &=& \\frac{6,000}{\\sqrt{2}hl}, \\\\\n\\tau'' &=& \\frac{6,000(14+0.5l)\\sqrt{0.25*(l^2+(h+t)^2)}}{2[0.707hl(l^2\/12+0.25(h+t)^2)] }, \\\\\n\\sigma(\\vec{x}) &=& \\frac{504,000}{t^2b}, \\\\\n\\delta(\\vec{x}) &=& \\frac{2.1952}{t^3b}, \\\\\nP_c(\\vec{x}) &=& 64,746.022(1-0.0282346t)tb^3. \n\\end{eqnarray*} \n \nThe feasible region in the above problems is quite complex, unlike hypercubes in case\nof problems with variable bounds, and since our methods require feasible initial population,\nwe first identified a single feasible-seed solution (known as the seed solution), and generated other several other \nrandom solutions. Amongst the several randomly generated solutions, those infeasible,\nwere brought into the feasible region using IP-S method and feasible-seed solution as the reference.\nThe optimum results for TP5, TP8 and `Weld', thus found, are shown in the\nTable~\\ref{tab:Non-linear-opti}. \n\n\\begin{table*}[htb]\n\\caption{Results from TP5, TP8 and `Weld' problem. 
For each problem, the\n obtained solution also satisfies the KKT conditions.}\n\\label{tab:Non-linear-opti}\n\\begin{center}\n\\begin{footnotesize}\n\\begin{tabular}{|l|l|l|l|} \\hline\n & \\multirow{2}{1.5cm}{Optimum Value ($f^*$)}& {Corresponding Solution Vector ($\\vec{x}^{\\ast}$)}\n & \\multirow{2}{1.2cm}{Active Constraints} \\\\ \n & & & \\\\ \\hline\n\\textbf{TP5} & $680.63$\n&$(2.330,1.953,-0.473,4.362,-0.628,1.035,1.591)^T$ & $g_1$, $g_4$ \\\\ \\hline\n\\textbf{TP8} & $24.33$ &\n$(2.160,2.393,8.777,5.088,0.999,1.437,1.298,9.810,8.209,8.277)^T$&\n$g_1$ to $g_6$\\\\ \\hline \n\\textbf{`Weld'} &\n$2.38$& $(0.244,6.219,8.291,0.244)^T$ & $g_1$ to $g_4$ \\\\ \\hline \n\\end{tabular}\n\\end{footnotesize}\n\\end{center}\n\\end{table*}\n\nTo verify the optimality of our solutions, we employed the MATLAB sequential\nquadratic programming (SQP) toolbox with a\ntermination criterion of $10^{-6}$ to solve each of the three problems. The solutions obtained from the SQP\nmethod match our reported solutions, indicating that our proposed constraint-handling\nprocedure is successful\nin solving the above problems.\n\nFinally, the convergence plots of a sample run on these test\nproblems are shown in Figures~\\ref{fig:TP5}, \\ref{fig:TP8}, and \\ref{fig:WELD}, respectively.\nFrom the graphs it is clear that our proposed method is able to\nconverge to the final solution quite effectively. \n\n\n\\begin{figure}[hbt]\n\\begin{center}\n\\includegraphics[scale=.35,angle=-90]{TP5-sep-2014.eps} \n\\end{center}\n\\caption{Convergence plot for TP5.}\n\\label{fig:TP5}\n\\end{figure}\n\n\\begin{figure}[hbt]\n\\begin{center}\n\\includegraphics[scale=.35,angle=-90]{TP8-sep-2014.eps} \n\\end{center}\n\\caption{Convergence plot for TP8.}\n\\label{fig:TP8}\n\\end{figure}\n\n\n\n\\begin{figure}[hbt]\n\\begin{center}\n\\includegraphics[scale=.35,angle=-90]{Weld-sep-2014.eps} \n\\end{center}\n\\caption{Convergence plot for the `Weld' problem.}\n\\label{fig:WELD}\n\\end{figure}\n\n\\section{Conclusions}\n\\label{sec:Conclusion}\nThe existing constraint-handling strategies that repair solutions by bringing\nthem back into the search space exhibit several\ninadequacies. This paper has studied existing approaches and proposed two new explicit feasibility-preserving\nconstraint-handling methods for real-parameter optimization using evolutionary algorithms. \nOur proposed single-parameter Inverse Parabolic Methods, with stochastic and adaptive components, \novercome limitations of existing methods and perform effectively\non several test problems.\nEmpirical comparisons on four different\nbenchmark test problems ($F_{elp}$, $F_{sch}$, $F_{ack}$, and $F_{ros}$) \nwith three different settings of the optimum relative to the\nvariable boundaries revealed key insights into the performance of PSO, GAs and DE\nin conjunction with different constraint-handling strategies.\nIt was noted that in addition to the critical role of the constraint-handling strategy (which\nis primarily responsible for bringing the infeasible solutions back into the search space),\nthe generation scheme (child-creation step) in an evolutionary algorithm must create efficient solutions \nin order to proceed effectively.\nThe Exponential and Inverse Parabolic Methods were the most robust methods and never\nfailed to solve any problem. The other constraint-handling strategies\nwere either too deterministic and\/or operated without utilizing sufficient useful information from \nthe solutions. 
The probabilistic methods\nwere able to bring the solutions back into the\nfeasible part of the search space and showed a consistent\nperformance. In particular, scale-up studies on four problems, up to 500 variables, demonstrated\nsub-quadratic empirical run-time complexity with the proposed IP-S method.\n\nFinally, the success of the proposed IP-S scheme is demonstrated \non generalized nonlinear constrained optimization problems. \nFor such problems, the IP-S method requires finding the lower and upper bounds\nfor feasible region along the direction of search by solving a series\nof root-finding problems. \nTo best of our knowledge, the\nproposed constraint-handling approach is a first explicit feasibility preserving method that has demonstrated\nsuccess on optimization problems with variable bounds and nonlinear constraints.\nThe approach is arguably general, and can be \napplied with complex real-parameter search spaces such as non-convex, discontinuous, etc., \nin addition to problems dealing with multiple conflicting objectives, multi-modalities, dynamic\nobjective functions, and other complexities.\nWe expect that proposed constraint-handling scheme will find its utility in \nsolving complex real-world constrained optimization problems using evolutionary algorithms. \nAn interesting direction for the future would be to carry out one-to-one comparison between\nevolutionary methods employing constrained handling strategies and the classical\nconstrained optimization algorithms. Such studies shall benefit the practitioners in optimization \nto compare and evaluate different algorithms in a unified frame-work.\nIn particular, further development of a robust, parameter-less\nand explicit feasibility preserving constraint-handling procedure can be attempted.\nOther probability distribution functions and utilizing of information \nfrom other solutions of the population can also be attempted. \n\\section*{Acknowledgments}\nNikhil Padhye acknowledges past discussions with Dr. C.K. Mohan on swarm intelligence. \nProfessor Kalyanmoy Deb acknowledges \nthe support provided by the J. C. Bose National\nfellowship generously provided by the Department of Science and\nTechnology (DST), Government of India. \n{\\footnotesize\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nBootstrap percolation on a hypergraph with infection threshold $r\\geq 1$ is a deterministic infection process which evolves in rounds. In each round every vertex has exactly one of two possible states: it is either infected or uninfected. We denote the set of initially infected vertices by $\\mathcal{A}_r(0)$. We say that a vertex $u$ is a neighbour of $v$ if there exists an edge containing both $u$ and $v$. In each round of the process every uninfected vertex $v$ becomes infected if it has at least $r$ infected neighbours, otherwise it remains uninfected. Once a vertex has become infected it remains infected forever. The process stops once no more vertices become infected and we denote this time step by $T$. The final infected set is denoted by $\\mathcal{A}_r(T)$.\n\nBootstrap percolation was introduced by Chalupa, Leath, and Reich \\cite{bootstrapintr} in the context of\nmagnetic disordered systems. 
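\nFor concreteness, the round-based update just defined can be summarized by the following short sketch; it is only our illustration (the data structures and function names are not taken from any of the works cited here), for a hypergraph given as a list of edges, i.e.\\ $k$-tuples of vertices.\n\\begin{verbatim}\ndef bootstrap_percolation(n, edges, initially_infected, r):\n    """Return the final infected set A_r(T) on vertices 0,...,n-1."""\n    # neighbours[v] = set of vertices sharing at least one edge with v\n    neighbours = {v: set() for v in range(n)}\n    for e in edges:\n        for v in e:\n            neighbours[v].update(u for u in e if u != v)\n    infected = set(initially_infected)            # A_r(0)\n    while True:\n        newly = {v for v in range(n)\n                 if v not in infected\n                 and len(neighbours[v] & infected) >= r}\n        if not newly:                             # no new infections: time T reached\n            return infected                       # A_r(T)\n        infected |= newly                         # synchronous round update\n\\end{verbatim}\n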
Since then bootstrap percolation processes (and extensions) have been used to describe several complex phenomena: from neuronal activity \\cite{MR2728841,inhbootstrap} to the dynamics of the Ising model at zero temperature \\cite{Fontes02stretchedexponential}.\n\nIn the context of social networks, bootstrap percolation provides a prototype model for the spread of ideas. In this setting infected vertices represent individuals who have already adopted a new belief and a person adopts a new belief if at least $r$ of his acquaintances have already adopted it.\n\nOn the $d$-dimensional grid $[n]^d$ bootstrap percolation has been studied by\nBalogh, Bollob{\\'a}s, Duminil-Copin, and Morris \\cite{MR2888224}, when the initial infected set contains every vertex independently with probability $p$. \nFor the size of the final infection set they showed the existence of a sharp threshold. More precisely, they established the threshold probability $p_\\mathrm{c}$, such that if $p\\leq (1-\\varepsilon )p_\\mathrm{c}$, then the probability that every vertex in $[n]^d$ becomes infected tends to 0, as $n\\rightarrow\\infty$, while if $p\\geq (1+\\varepsilon )p_\\mathrm{c}$, then the probability that every vertex in $[n]^d$ becomes infected tends to one, as $n\\rightarrow\\infty$.\n\nBootstrap percolation has also been studied for several random graph models. For instance Amini and Fountoulakis \\cite{bootpower} considered the Chung-Lu model \\cite{MR1955514} where the vertex weights follow a power law degree distribution and the presence of an edge $\\{u,v\\}$ is proportional to the product of the weights of $u$ and $v$. Taking into account that in this model a linear fraction of the vertices have degree less than $r$ and thus at most a linear fraction of the vertices can become infected the authors proved the size of the final infected set $\\mathcal{A}_r(T)$ exhibits a phase transition. \n\n\nJanson, \\L uczak, Turova, and Vallier \\cite{MR3025687} analysed bootstrap percolation on the binomial random graph $G(n,p)$ where every edge appears independently with probability $p$. For $r\\geq 1$ and $n^{-1}\\ll p \\ll n^{-1\/r}$ they showed that there is a threshold such that if the initial number of infected vertices is below the threshold, then the process infects only a few additional vertices and if the initial number of infected vertices exceeds the threshold, then almost every vertex becomes infected.\n\nIn this paper we investigate the binomial random hypergraph $H_k(n,p)$, where every edge ($k$-tuple of vertices) is present independently with probability $p$. We choose the initial infected set uniformly at random and consider bootstrap percolation with infection threshold $r> 1$ in the regime $n^{-1}\\ll n^{k-2}p \\ll n^{-1\/r}$.\nThe main contribution of this paper are:\n\\begin{itemize}\n\\item strengthening of the result in \\cite{MR3025687}, by showing that the failure probability decreases {\\em exponentially} (Theorem~\\ref{thm:graph});\n\\item extension of the original results from graphs to hypergraphs (Theorem~\\ref{thm:hypergraph}).\n\\end{itemize}\n\n\n\\section{Main Results}\n\nWe extend the following result, which was originally proved in \\cite{MR3025687}, to $H_k(n,p)$: Consider bootstrap percolation with infection threshold $r$ on $G(n,p)$, where $n^{-1}\\ll p \\ll n^{-1\/r}$. 
There is a threshold $b_r=b_r(n,p)$ such that if $|\\mathcal{A}_r(0)|\\le(1-\\varepsilon)b_r$, then with probability tending to one as $n\\rightarrow \\infty$ (whp for short) only a few additional vertices become infected, while if $|\\mathcal{A}_r(0)|\\ge(1+\\varepsilon)b_r$, then whp almost every vertex in the process becomes infected.\nFor integers $k\\geq 2$ and $r> 1$ set\n\\[\nb_{k,r}:=b_{k,r}(n,p)=\\left\\{ \\begin{array}{ll}\n\\left(1-\\frac{1}{r}\\right)\\left(\\frac{(r-1)!}{n\\left(\\binom{n}{k-2}p\\right)^r}\\right)^{1\/(r-1)} & \\mbox{if } r > 2 \\\\\n\\frac{1}{2(2k-3)}\\frac{1}{n\\left(\\binom{n}{k-2}p\\right)^2} & \\mbox{if } r = 2,\n\\end{array}\n\\right.\n\\]\nand note that the only difference for the $r=2$ case is a $1\/(2k-3)$ multiplier. Since $2k-3=1$ when $k=2$ this is consistent with the threshold in the graph case i.e.\\ $b_{2,r}=b_r$. \n\n\\begin{theorem}\\label{thm:hypergraph}\nFor $k\\geq 2$ consider bootstrap percolation with infection threshold $r> 1$ on $H_{k}(n,p)$ when $n^{-1}\\ll n^{k-2} p \\ll n^{-1\/r}$. Assume the initial infection set is chosen uniformly at random from all sets of vertices of size $a=a(n)$. Then for any fixed $\\varepsilon>0$ we have that\n\\begin{itemize}\n\\item if $a\\leq(1-\\varepsilon)b_{k,r}$ then whp $|\\mathcal{A}_r(T)|= O(b_{k,r})$;\n\\item if $a\\geq (1+\\varepsilon)b_{k,r}$ then whp $|\\mathcal{A}_r(T)|=(1+o(1))n$.\n\\end{itemize}\n\\end{theorem}\n\nUsing the methods developed for this result we also obtain a strengthened form of the result for $G(n,p)$ establishing exponentially small bounds on the failure probability.\n\n\\begin{theorem}\\label{thm:graph}\nConsider bootstrap percolation with infection threshold $r > 1$ on $G(n,p)$ when $n^{-1}\\ll p \\ll n^{-1\/r}$. Assume the initial infection set is chosen uniformly at random from the set of vertices of size $a=a(n)$. Then for any fixed $\\varepsilon>0$ the following holds with probability $1-\\exp(-\\Omega(b_{2,r}))$:\n\\begin{itemize}\n\\item if $a\\leq(1-\\varepsilon)b_{2,r}$, then $|\\mathcal{A}_r(T)|=O(b_{2,r})$;\n\\item if $a\\geq (1+\\varepsilon)b_{2,r}$, then $|\\mathcal{A}_r(T)|=(1+o(1))n$.\n\\end{itemize}\n\\end{theorem}\n\nThe proofs rely on surprisingly simple methods. When the number of vertices infected in the individual rounds is large, we apply Chebyshev's or Chernoff's inequality. However when the process dies out, these changes can become arbitrarily small. In this case we couple the infection process with a subcritical branching process which dies out very quickly.\n\n\n\n\n\\section{Proof outlines}\n\nWe first show the outline for the proof of Theorem~\\ref{thm:hypergraph}. For brevity we will only describe the $r>2$ case in detail and comment on the differences for $r=2$ at the end. \n\nStart with a given set of initially infected vertices $\\mathcal{A}_r(0)$ and consider the infection process round by round. At the end of round $t\\geq 1$ we partition the set of vertices into $\\mathcal{A}_0(t),\\mathcal{A}_1(t),...,\\mathcal{A}_r(t)$ where the set $\\mathcal{A}_i(t)$ consists of all the vertices which have exactly $i$ infected neighbours (these are vertices in $\\mathcal{A}_r(t-1)$), for $i0$, there exists a $\\tau$, which does not depend on $n$, such that $\\Delta(\\tau)\\leq \\eta b_{k,r}$.\nThe fact that $|\\mathcal{A}_i(t)|$ is concentrated around $a_i(t)$ for $t<\\tau$ follows from Chebyshev's inequality.\n\nSince we are in the subcritical regime the size of the individual generations will become small and the concentration will fail. 
In order to avoid this we attempt to analyse the remaining steps together. Consider the forest where every vertex in $\\mathcal{A}_r(\\tau+1)\\backslash \\mathcal{A}_r(\\tau)$ is a root.\nRecall that in order for a vertex to become infected in round $t+1$ it must have a neighbour that got infected in round $t$. The children of a vertex $v \\in \\mathcal{A}_r(t+1)\\backslash \\mathcal{A}_r(t)$ will be the vertices $u \\in \\mathcal{A}_r(t+2)\\backslash \\mathcal{A}_r(t+1)$ which lie in an edge containing $v$ and should this relation not be unique for some vertex $u$, $u$ is assigned arbitrarily to one of the candidates. Clearly every vertex of $A_r(T)\\setminus A_r(\\tau)$ is contained in the forest and thus the size of this forest matches the number vertices which got infected after round $\\tau$.\n\nNote that for every $\\delta>0$ there exists a $t_0$ such that $|\\mathcal{A}_i(t)|\\leq (1+\\delta)a_i(\\tau)$, for every $0\\leq i\\leq r$ and $\\tau(1+\\delta)a_r(\\tau)$ is dominated by the probability that the total size of the branching process exceeds $\\delta a_r(\\tau)$. However for properly chosen $\\eta,\\delta>0$ the probability that the total size of the branching process exceeds $\\delta a_r(\\tau)$ is sufficiently small. Therefore we have that there are at most $(1+\\delta)a_{r}(\\tau)$ infected vertices in total.\n\n\nNow for the supercritical case. Recall that \\eqref{eq:seqchange} and \\eqref{eq:seq} hold when \\linebreak[4] $a_r=o\\left(\\left(n^{k-2}p\\right)^{-1}\\right)$. Again we consider the differences $\\Delta(t)=a_{r}(t+1)-a_r(t)$. Although at the beginning of the process the values of $\\Delta(t)$ decrease there exists a value $t_1$ not depending on $n$ such that for every $t>t_1$ we have that $\\Delta(t+1)>\\Delta(t)$. In fact there exists a $t_2$ not depending on $n$ such that for $t\\geq t_2$ we have that $\\Delta(t+1)>2\\Delta(t)$.\nTherefore the probability of non-concentration is dominated by a geometric sequence and applying the union bound gives us concentration as long as $a_r(t)=o\\left(\\left({n}^{k-2}p\\right)^{-1}\\right)$.\nWhen $a_r(t)=\\Omega\\left(\\left({n}^{k-2}p\\right)^{-1}\\right)$ the expected number of neighbours is $\\Omega(1)$ and thus our approximation in \\eqref{eq:seqchange} does not hold any more. Refining these approximations shows that at most 2 rounds are required for almost every vertex to become infected, with $\\Theta(n)$ vertices becoming infected in every required step.\n\nRecall that for $r>2$ the typical vertex became infected when it was contained in $r$ different edges each containing a different infected vertex. When $r=2$ this is equivalent to finding two intersecting edges each containing a different infected vertex. However unlike the $r>2$ case finding two such edges in step $t$ implies that every vertex in these edges is infected by step $t+1$. Two intersecting edges typically overlap in exactly one vertex and thus finding such an edge pair implies that $2k-3$ vertices will become infected, not just one. Taking this into account gives us the modified bound on the threshold.\n\nThe proof of Theorem~\\ref{thm:graph} is analogous. In the random graph case, in round $t$ of the process only those edges are examined which contain exactly one vertex from $\\mathcal{A}(t)\\backslash \\mathcal{A}(t-1)$ and no vertices from $\\mathcal{A}(t-1)$. Since each of these edges can contain at most one uninfected vertex the behaviour of the individual vertices is independent. 
Thus we can replace Chebyshev's inequality with Chernoff's inequality and achieve a stronger bound on the failure probability.\n\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\nThe large-scale distribution of dark matter halos is one of the key\ningredients of the theoretical description of large-scale structure (LSS). \nSince most observed tracers of LSS, such as galaxies, reside in halos,\nthe statistics of halos determine those of galaxies on large scales. \nSimilarly, the halo model description of the nonlinear matter density\nfield \\cite{cooray\/sheth} crucially relies on halo statistics. In the context of perturbation\ntheory, the statistics of halos are written in terms of bias parameters \nmultiplying operators constructed out of the matter density field. \nIn general, these operators consist of powers of the matter density\nand tidal field \\cite{fry\/gaztanaga,mcdonald\/roy}, as well as convective time derivatives of these quantities \\cite{MSZ,senatore:14}. \nHowever, the most well-studied and phenomenologically most important\nbias parameters on large scales are those multiplying powers of\nthe matter density field, i.e. \n\\begin{equation}\n\\d_h(\\v{x},\\tau) \\supset b_1(\\tau) \\delta_\\rho(\\v{x},\\tau) + \\frac12 b_2(\\tau) \\delta_\\rho^2(\\v{x},\\tau)\n+ \\frac16 b_3(\\tau) \\delta_\\rho^3(\\v{x},\\tau) + \\cdots\\,,\n\\label{eq:localbias}\n\\end{equation}\nwhere $\\d_h$ is the fractional number density perturbation of a given\nhalo sample,\nwhile $\\delta_\\rho$ is the matter density perturbation. More precisely,\nthe powers of $\\delta_\\rho$ should be understood as renormalized operators\n\\cite{mcdonald,assassi\/etal,PBSpaper}. \nThe $b_n$ are commonly called (nonlinear) \\emph{local bias parameters}. \nThe goal of this paper is to present precision measurements of\n$b_1,\\,b_2,\\,b_3$ using a novel technique, \\emph{separate universe simulations}.\n\nIn the separate universe approach \n\\citep{lemaitre:1933,sirko:2005,baldauf\/etal:2011,li\/hu\/takada:2014,Wagner:2014}, a long-wavelength density\nperturbation is included in an N-body simulation by changing the \ncosmological parameters, in particular $\\Omega_m,\\,\\Omega_\\Lambda,\\,\\Omega_K$ and $H_0$, \nfrom their fiducial values, and running the simulation to a different\nscale factor. As argued in \\cite{baldauf\/etal:11,jeong\/etal,PBSpaper}, \nthe (renormalized) local bias parameters defined in \\refeq{localbias}\ncorrespond to the response of the halo abundance, $\\bar n_h$, to a long-wavelength\ndensity perturbation, equivalent to a change in the background density, $\\bar\\rho$,\n\\begin{equation}\nb_{n} = \\frac{\\bar\\rho^{\\hskip 1pt n}}{\\bar n_h} \\frac{\\partial^n \\bar n_h}{\\partial\\bar\\rho^{\\hskip 1pt n}}\\, .\n\\end{equation}\nThis can be understood as an exact formulation of the peak-background split (PBS) \\cite{kaiser:1984,mo\/white:1996}. Thus, the $b_n$ can be measured through the mass function of halos in a suite\nof separate universe simulations. This technique has several advantages: \nfirst, it is guaranteed to recover the large-scale limit of the $b_n$, without\nscale-dependent or nonlinear corrections which affect measurements of the\nbias parameters from the halo power spectrum and bispectrum, or\nfrom the cross-correlation with smoothed fields. 
Note that, starting \nat second order, ``nonlocal'' bias parameters such as those with respect\nto powers of the tidal field will enter in these latter measurements at\nthe same level as the $b_n$. Second, \nwe can straightforwardly obtain measurements of higher order bias parameters\nsuch as $b_3$, which become cumbersome to measure using correlations. Finally,\nby using the same initial phases for simulations with different density,\nwe can cancel to a large extent the cosmic variance contribution to the measurement error. \n\nSeparate universe simulations are expected to estimate the same set of bias parameters as those obtained from matter-halo cross-correlations. We will thus compare the biases obtained from the separate universe simulations to\nthose determined by fitting to halo two- and three-point statistics. \nWe also compare the results to biases derived from universal mass functions\nusing the classic peak-background split argument, and \nrecent theoretical predictions from\nthe excursion set-peaks (ESP) approach \\cite{Paranjape:2012,Paranjape:2013}, which incorporates some aspects\nof the Gaussian peaks model into the excursion set framework. \n\nHigher order bias parameters have previously been measured in simulations\nby correlating the halo number with powers of the smoothed density field \nat the final time (Eulerian frame) \n\\cite{angulo\/baugh\/lacey:2008,manera\/gaztanaga:2011}\nor in the initial conditions \\cite{paranjape\/etal:2013}. \nHowever, the bias parameters measured in this way depend on the smoothing\nscale adopted, while the local bias parameters that are relevant for perturbation theory predictions, and that we are interested in here, correspond to\na smoothing scale of infinity. Further, all these references\nneglect the nonlocal bias terms mentioned above, \nwhich will affect the inferred values of $b_2$ and higher. \nFor these reasons, it is difficult to directly compare our measurements of\nnonlinear bias parameters with these previous results (although we find\nbroad agreement). \nWe stress again that in the separate universe approach we are guaranteed\nto obtain the local bias in the large-scale limit, without nonlinear\nor tidal corrections. Moreover, we simultaneously obtain both\nthe Eulerian ($b_n$) and Lagrangian ($b_n^L$) bias parameters. \n\nTwo related papers appeared on the preprint archive simultaneously\nto this paper. Ref.~\\cite{li\/etal:15} measured the linear bias using\nseparate universe simulations through an abundance matching technique\nwhich yields the integrated halo bias above a mass threshold. This\ntechnique reduces the shot noise in the bias measurement. \nRef.~\\cite{baldauf\/etal:15} also measured the linear bias via\nthe mass function. In addition, they present measurements of $b_2$\nthrough the response of the halo power spectrum to a long-wavelength mode\n(as done in \\cite{li\/hu\/takada:2014,Wagner:2015} for the matter power spectrum). \nOur results are consistent with the findings of both of these references. \nHowever, unlike these and any other previous published results, we\nuse the fully nonlinear separate universe approach to obtain accurate\nmeasurements of the \\emph{linear and nonlinear} local biases.\n\n\nIn this paper we adopt a flat $\\Lambda$CDM fiducial cosmology with $\\Omega_m=0.27$, $h=0.7$, $\\Omega_b h^2=0.023$ and $\\mathcal{A}_s=2.2\\cdot 10^{-9}$. \nThe outline of the paper is as follows. In \\refsec{theory}, we present\nthe theoretical predictions that we will compare our measurements with. 
\\refSec{bsep}\ndescribes the technique of measuring bias parameters from separate universe\nsimulations, while \\refsec{bcorr} presents the estimators for $b_1$ and $b_2$\nusing the conventional approach of measuring halo correlations. \nWe discuss the results in \\refsec{res}. We conclude in \\refsec{concl}. \nThe appendices contain more details on the ESP predictions as well as\nour bias measurements. \n\n\\section{Theory predictions}\n\\label{sec:theory}\n\nIn this section we present several theoretical predictions for the large-scale bias from the literature. We first recap the PBS argument in \\refsec{PBS} and briefly present the ESP formalism in \\refsec{ESP}.\n\nBefore jumping into details, we briefly explain the definitions of \\textit{Lagrangian} and \\textit{Eulerian} halo bias. The Lagrangian bias links the abundance of dark matter halos to the density perturbations in Lagrangian space, i.e. it describes the relation of proto-halos in the initial conditions that correspond to halos identified at redshift $z$ to the initial linear density perturbation field. On the other hand, the Eulerian bias relates the halos identified at redshift $z$ to the nonlinear density field, $\\delta_\\rho$, at redshift $z$. \nIn the case of the local bias parameters considered here, there is\nan exact nonlinear mapping between the Lagrangian bias parameters $b_n^L$\nand their Eulerian counterparts $b_m$, see \\refapp{compbsep}. We will\nmake use of this mapping both for the theory predictions and measurements. \n\nIn the following, the top-hat filtered variance on a scale $R_{\\rm TH}$ \n(the Lagrangian radius of halos) is denoted as\n\\begin{equation}\n\\s{0}^2 \\equiv \\int {\\rm dln} k\\, \\Delta^2 (k) [W_{\\rm TH}(kR_{\\rm TH})]^2,\n\\label{eq:sigma0}\n\\end{equation}\nwhere $\\Delta^2(k) = k^3 P(k)\/2\\pi^2$ is the dimensionless linearly extrapolated matter power spectrum and \nthe top-hat filter in Fourier space $W_{\\rm TH}(k R_{\\rm TH})$ is given in \\refeq{WTH}. \n\n\\subsection{Peak-background split bias}\n\\label{sec:PBS}\n\nWe briefly recap how the bias parameters can be derived from the differential halo mass function using the PBS argument,\nas initially proposed in \\cite{kaiser:1984,cole\/kaiser:1989,mo\/white:1996}. \nFollowing the PBS argument, the effect of a long wavelength mode $\\d_0$ on the small scale formation can be seen as locally modulating the density threshold for halo formation, or barrier $B$, sending it to $B-\\d_0$ (here we denote the barrier as $B$ to emphasize that this argument is not restricted to the constant spherical collapse threshold $\\d_c$ and can be extended to barriers depending e.g. on the halo mass $M$ through $\\s{0}$). Note that, in the case where stochasticity should be introduced in the barrier, this shift does not modify the stochastic contribution to the barrier, which is supposed to capture the effect of small-scale modes. We define the differential mass function as \n\\begin{equation}\nn_{\\rm}(\\nu_{\\rm B}) = \\frac{\\bar{\\rho}_m}{M}f_{\\rm}(\\nu_{\\rm B}) \\left|\\frac{{\\rm d ln}\\,\\s{0}}{{\\rm d ln }\\,M}\\right|,\n\\label{eq:nf}\n\\end{equation}\nwith $\\nu_{\\rm B} \\equiv B(\\s{0})\/\\s{0}$ (we reserve the notation $\\nu$ for $\\nu\\equiv\\d_c\/\\s{0}$), $M$ the corresponding mass and $f(\\nu_{\\rm B})$ the mass fraction contained in halos of mass $M$. 
The scale-independent large-scale Lagrangian bias parameters are then defined by the well known relation \n\\begin{equation}\nb^L_n(\\nu_{\\rm B}) = \\frac{1}{n(\\nu_{\\rm B})}\\frac{\\partial ^n n([B(\\s{0})-\\d_0]\/\\sigma_0)}{\\partial \\d_0^n}\\Bigg|_{\\d_0=0}.\n\\label{eq:biasPBS}\n\\end{equation}\nAs we have indicated, this also applies if the deterministic part of the barrier is mass-dependent. We will use \\refeq{biasPBS} both to derive the bias in the ESP model and from the fits to the mass function proposed in \\cite{Sheth:1999} and \\cite{Tinker:2008} (hereafter ST99 and T08 respectively). \n\n\\subsection{Excursion set peaks}\n\\label{sec:ESP}\n\nIn this section, we review the ESP formalism proposed in \\citep{Paranjape:2012} and \\citep{Paranjape:2013}. The details of the calculation are relegated to \\refapp{ESP}. All the results that we present here and in \\refapp{ESP} were already derived in these two references, but in a different way; here, we use the PBS argument to derive the bias parameters directly. Further, the ESP predictions\nfor $b_3$ and $b_4$ are computed here for the first time. \n\nThe ESP aims at unifying the peak model of Bardeen et al. in 1986 (hereafter BBKS) \\citep{Bardeen:1986} and the excursion set formalism of Bond et al. in 1991 \\citep{Bond:1991}. It can be seen either as addressing the cloud-in-cloud problem within the peak model, or as applying the excursion set formalism to a special subset of all possible positions (the peaks). \nWe follow \\citep{Paranjape:2013}, who chose a top-hat filter for the excursion\nset part, and a Gaussian filter to identify peaks (in order to ensure finite moments\nof derivatives of the smoothed density field). \n\nMore importantly, \\citep{Paranjape:2013} improved the model by adding a mass-dependent stochastic scatter to the threshold. Specifically, the barrier is\ndefined as \\citep{Paranjape:2012}\n\\begin{equation} \nB(\\s{0}) = \\d_c + \\beta \\s{0}\\,.\n\\label{eq:barrier}\n\\end{equation}\nHere, $\\beta$ is a stochastic variable and \\cite{Paranjape:2013} chose its PDF $p(\\beta)$ to be lognormal with mean and variance corresponding to $\\<\\beta\\> = 0.5$ and ${\\rm Var}(\\beta)=0.25$. This choice was made to match the peak height measured in simulations by \\cite{robertson\/etal}. Hence $\\beta$ takes only positive values. Note that \\refeq{barrier} then corresponds to a mass-dependent mean barrier $\\d_c + 0.5 \\s{0}$. \n\nAs we show in \\refapp{ESP}, the Lagrangian bias parameters in the ESP\ncan be directly derived from \\refeq{biasPBS} by inserting the multiplicity\nfunction $f_{\\rm ESP}(\\nu)$ into \\refeq{nf}, and sending \n$\\nu = \\d_c\/\\s{0}$ to $\\nu_1 = \\nu\\left(1-\\d_0\/\\d_c\\right)$.\\footnote{Here one needs to take care not to shift one instance of $\\nu$ in the expression for $f_{\\rm ESP}(\\nu)$ that is actually unrelated to the barrier. See \\refapp{ESP}.} \nOur results for the bias, \\refeq{btheo}, are identical to the large-scale bias parameters derived using\na different approach in \\citep{Paranjape:2012,Paranjape:2013}. \nWe will see that the choice of barrier \\refeq{barrier} leads to significant differences from the standard PBS biases derived using $B = \\d_c$\nfrom the T08 and ST99 mass functions. \n\n\n\\section{Bias parameters from separate universe simulations}\n\\label{sec:bsep} \n\nOur results are based on the suite of separate universe simulations described in \\cite{Wagner:2014,Wagner:2015}, performed using the cosmological code GADGET-2 \\citep{Springel:2005}. 
The idea of the separate universe simulations is that a uniform matter overdensity $\\delta_\\rho$ of a scale larger than the simulation box can be absorbed in the background density $\\tilde{\\rho}_m$ of a modified cosmology simulation (throughout the whole paper, quantities in modified cosmologies will be denoted with a tilde), where\n\\begin{equation} \n\\tilde{\\rho}_m(t) = \\rho_m(t)\\left[1+\\delta_\\rho(t)\\right], \n\\label{eq:dr}\n\\end{equation}\nwith $\\rho_m$ the mean matter density in a simulation with no overdensity (which we call the fiducial cosmology). Indeed, a uniform density can only be included in this way, since the Poisson equation for the potential enforces a vanishing mean density perturbation over the entire box. Thus one can see a simulation with a constant overdensity $\\delta_\\rho$ as a separate universe simulation with a properly modified cosmology. Qualitatively, a positive overdensity causes slower expansion and enhances the growth of structure, i.e. more halos, whereas a negative one will have the opposite effect. The precise mapping of $\\delta_\\rho$ to modified cosmological parameters is described in \\cite{Wagner:2014}. Crucially, we work to fully nonlinear order in $\\delta_\\rho(t)$. \n\nWe use two sets of simulations denoted by ``lowres'' and ``highres'' throughout the paper. Both have a comoving box size of $500\\,h^{-1}{\\rm Mpc}$ in the fiducial cosmology. The ``lowres'' set uses $256^3$ particles in each simulation, while ``highres'' employs $512^3$ particles. For both sets, we run the fiducial cosmology, i.e. $\\delta_\\rho=0$, and simulations with values of $\\delta_\\rho$ corresponding to $\\d_L$ = \\{$\\pm$0.5, $\\pm$0.4, $\\pm$0.3, $\\pm$0.2, $\\pm$0.1, $\\pm$0.07, $\\pm$0.05, $\\pm$0.02, $\\pm$0.01\\}, where $\\d_L$ is the present-day linearly extrapolated matter density contrast. \nIn addition, we simulate separate universe cosmologies corresponding to $\\d_L$ = 0.15, 0.25, and 0.35 for both resolutions. \nThis makes the sampling in the final, nonlinear $\\delta_\\rho$ more symmetric around 0 which should help diminish the covariance between the bias parameters.\\footnote{We have not performed a systematic study on the number of $\\d_L$ values that are necessary to derive accurate measurements of the $b_n$ up to a given order. \nGiven the significant degeneracies between $b_n$ and $b_{n+2}$ we have found\n(\\refapp{cov}), this is a nontrivial question.} \nThe comoving box size in the modified cosmology simulations is adjusted to match that in the fiducial cosmology, $L=500\\,h^{-1}{\\rm Mpc}$. \nHence, in the high redshift limit ($z\\rightarrow \\infty$ for which $\\delta_\\rho\\rightarrow 0$) the physical size of the box is the same for all simulations whereas at the present time ($z=0$ in the fiducial cosmology) the physical size of the simulation box varies with $\\delta_\\rho$. However, this choice of the box size has the advantage that the physical mass resolution is the same within each set of simulations regardless of the simulated overdensity $\\delta_\\rho$ (i.e. $\\tilde{m}_p = m_p$ where $m_p$ is the particle mass in the fiducial cosmology). \nSince the biases are determined by comparing halo abundances between different overdensities, this eliminates any possible systematic effects in the biases due to varying mass resolution. \nThe mass resolution is $m_p = 5.6\\cdot 10^{11}h^{-1} M_\\odot$ in the ``lowres'' set of simulations and $m_p=7\\cdot 10^{10}h^{-1} M_\\odot$ in the ``highres'' one. 
Furthermore, for the ``lowres'' set of simulation, we ran 64 realizations of the entire set of $\\d_L$ values. For the ``highres'' one we ran only 16 realizations of each $\\d_L$ value as they are more costly in terms of computation time. \nEach simulation was initialized using 2LPT at $z_i$ = 49. \nFor further details about the simulations, see \\citep{Wagner:2015}.\n\n\\subsection{Halo catalogs}\n\\label{sec:HC}\n\nThe halos were identified using the Amiga Halo Finder (hereafter AHF) \\cite{Gill:2004,Knollmann:2009}, which identifies halos with a spherical overdensity (SO) algorithm. We identify halos at a fixed proper time corresponding to $z=0$ in the fiducial cosmology. \nIn this paper, we only use the number of distinct halos and do not consider their sub-halos. \n\nThe key point in identifying halos with the spherical overdensity criterion is the setting of the density threshold. We choose here a value of $\\Delta_{\\rm SO}=200$ times the background matter density in the \\emph{fiducial} cosmology. \nThus, our measured bias parameters are valid for this specific halo definition. \nFor the simulations with a different background density, the threshold must be rescaled in order to compare halos identified using the same physical density in each simulation. Specifically, we need to use\n\\begin{equation}\n\\Delta_{\\rm SO} = \\frac{200}{1+\\delta_\\rho}\\,.\n\\label{eq:DSO}\n\\end{equation}\nAnother point is the treatment of the particle unbinding in a halo. AHF has the ability to remove unbound particles, i.e particles which are not gravitationally bound to the halo they are located in. However, in order to avoid having to implement the complicated matching of the unbinding criterion between the modified and fiducial cosmologies, we have turned unbinding off in all halo catalogs. \nNote that the effect of unbinding is very small (of order 1\\% on the mass function), and\nthat we consistently use the same halo catalogs for all measurements,\nso that this choice does not affect our comparison between different methods\nfor measuring bias. \n\nWe count halos in top-hat bins given by \n\\begin{equation}\nW_n(M,M_{{\\rm center}})=\\begin{cases} 1 &\\mbox{if } \\left|{\\rm log}_{10}(M)-{\\rm log}_{10}(M_{{\\rm center}})\\right| \\leq 0.1 \\\\\n0 & \\mbox{otherwise, } \\end{cases}\n\\label{eq:bins}\n\\end{equation}\nwhere $M$ is the mass ($M_{{\\rm center}}$ corresponding to center of the bin). \nFor the high resolution simulations, we count halos in 12 bins centered from ${\\rm log}_{10}\\left(M_{{\\rm center}}\\right) = 12.55$ to ${\\rm log}_{10}\\left(M_{{\\rm center}}\\right) = 14.75$, to ensure that we have enough halos in each bin. For the low resolution simulations, we have 7 bins from ${\\rm log}_{10}\\left(M_{{\\rm center}}\\right) = 13.55$ to ${\\rm log}_{10}\\left(M_{{\\rm center}}\\right) = 14.75$. With this binning choice, the lowest bin is centered around halos with 63 particles for the ``lowres'' set of simulations, with a lower limit at halos containing around 50 particles. For the ``highres'' set of simulations, the lowest mass bin is centered on halos with around 51 particles, with a lower limit around 40 particles. These numbers are quite low compared to more conservative values (e.g. 400 particles in T08). However $\\d_h$ is the \\emph{relative difference} of the number of halos between the fiducial and modified cosmology simulations (see \\refeq{DN_N} hereafter) and therefore that quantity should be less affected by resolution effects. 
For halos with a minimum number of 40 particles, we did not find any systematic difference between the bias parameters measured from the ``lowres'' and ``highres'' simulations. Thus, we present results for halos that are resolved by at least 40 particles. \n\n\\subsection{Eulerian biases}\n\\label{sec:BE}\n\nInstead of fitting the Eulerian bias parameters directly to the simulation results, we derive them from the measured Lagrangian biases for which the fitting is more robust, using the exact nonlinear evolution of $\\delta_\\rho$ (see \\refapp{compbsep} for the details of the mapping). \nIn order to obtain the Lagrangian bias parameters, we compute $\\d_h(M,\\d_L)$ versus $\\d_L$ where $\\d_h(M,\\d_L)$ is the overdensity of halos in a bin of mass $M$ compared to the fiducial case $\\d_L=0$,\n\\begin{equation}\n\\delta_h (M,\\d_L) = \\frac{\\tilde{N}(M,\\d_L) - N(M)}{N(M)},\n\\label{eq:DN_N}\n\\end{equation}\nwith $\\tilde N(M,\\d_L)$ the number of halos in a bin centered around mass $M$ in the presence of the linear overdensity $\\d_L$ and $N(M)=\\tilde N(M,\\d_L=0)$. \nNote that $\\d_h(M,\\d_L)$ is the overdensity of halos in Lagrangian space as the physical volumes of the separate universe simulations only coincide at high redshift.\n\nIn order to obtain the Lagrangian bias parameters $b_n^L$, we then fit \\refeq{DN_N} by \n\\begin{equation}\n\\d_h = \\sum_{n=1}^5 \\frac{1}{n!} b_n^L (\\d_L)^n\\,.\n\\label{eq:BiasExp}\n\\end{equation}\nAs indicated in \\refeq{BiasExp} we use a $5^{\\rm th}$ order polynomial in $\\d_L$ by default. In \\refapp{deg} we study the effect of the degree of the polynomial on the results; as a rough rule, if one is interested in $b_n^L$, then one should fit a polynomial up to order $n+2$. \n\nIn order to estimate the overall best-fit of and error bars on the bias parameters, we use a bootstrap technique. For each non zero $\\d_L$ value, we randomly produce $p$ resamples of the mass function. Each resample is composed of the same number of realizations as the original sample (i.e. 16 or 64) and we choose $p=100\\cdot 64$ ($100\\cdot 16$) for the low (high) resolution simulations. We then compute the average number of halos per mass bin for each resample. This gives us $p$ numbers $\\tilde{N}^i(M,\\d_L)$. \nFor a given $\\d_L$, we also create the same set of resamples for the fiducial cosmology \nand again compute the average number of halos, i.e. $N^i(M)$. We then compute $p$ times $\\d^i_h$ according to \\refeq{DN_N} for every $\\d_L$ value. \nSince we use the same resamples for the separate universe results, $\\tilde{N}^i(M,\\d_L)$, and the fiducial case, $N^i(M)$, the cosmic variance is canceled to leading order. The error on $\\d_h$ at fixed mass and $\\d_L$ is given by the sample variance and we use it as a weight for the fit. \nWe neglect, however, the covariance between $\\tilde{N}^i(M,\\d_L)$ for different $\\d_L$ values.\nWe then produce $p$ fits with a weighted least squares method. For every bias parameter, the value we report is the mean of the results of the $p$ fits while the corresponding error bar is given by the square root of the variance of the distribution. 
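\nAs an illustration of this fitting step, a minimal sketch (ours, not the actual analysis pipeline) is given below: a weighted least-squares fit of \\refeq{BiasExp}, a polynomial in $\\d_L$ with no constant term, to the measured $\\d_h$ values, returning $b_n^L = n!\\,c_n$ from the fitted coefficients $c_n$. The bootstrap resampling over realizations and the exact Lagrangian-to-Eulerian mapping of \\refapp{compbsep} are not reproduced here.\n\\begin{verbatim}\nimport numpy as np\nfrom math import factorial\n\ndef fit_lagrangian_biases(delta_L, delta_h, sigma_h, order=5):\n    """delta_L, delta_h, sigma_h: arrays over the simulated overdensities."""\n    delta_L = np.asarray(delta_L, dtype=float)\n    # design matrix with columns delta_L^1 ... delta_L^order (no constant term,\n    # since delta_h vanishes for delta_L = 0 by construction)\n    A = np.vstack([delta_L**m for m in range(1, order + 1)]).T\n    w = 1.0 / np.asarray(sigma_h, dtype=float)    # inverse-error weights\n    coeffs, *_ = np.linalg.lstsq(A * w[:, None],\n                                 np.asarray(delta_h, dtype=float) * w,\n                                 rcond=None)\n    return {m: factorial(m) * c for m, c in enumerate(coeffs, start=1)}\n\\end{verbatim}\nThe Eulerian values discussed below then follow from the mapping of \\refapp{compbsep}; at leading order this reduces to the familiar relations $b_1 = 1 + b_1^L$ and $b_2 = b_2^L + \\tfrac{8}{21}\\,b_1^L$.\n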
Within the mass range common to both sets of simulations ``lowres'' and ``highres'', the measurements are consistent with each other and hence we perform a volume-weighted average of the biases from the two sets of simulations.\n\n\\section{Bias parameters from correlations}\n\\label{sec:bcorr}\n\nTraditionally bias parameters are used for and measured from $n$-point correlation functions or $n$-spectra. The $n$-th order bias parameters enter the tree-level calculation of the $n+1$-point functions. For instance, $b_1$ appears at the leading order in the large-scale behavior of the halo power spectrum, $b_2$ in the large-scale limit of the bispectrum and $b_3$ in the large-scale limit of the trispectrum. For the comparison to $n$-point functions, we will restrict ourselves to the power spectrum and bispectrum at tree level here. The bispectrum also contains nonlocal bias parameters, i.e. biases with respect to the tidal field, that arise from triaxial collapse and gravitational evolution. The estimation of the first and second order bias parameters closely follows the steps outlined in \\cite{Baldauf:2012} (see also \\cite{Saito:2014}), with the difference that we are performing a joint fit for all the bias parameters, instead of first fitting $b_1$ to the halo power spectrum and then using its value in the bispectrum analysis.\n\nLet us start by discussing the power spectrum. We measure the halo-matter cross power spectrum $P_\\text{hm}$, which at tree level (on large scales) is given by\n\\begin{equation}\nP_\\text{hm}(k)=b_1 P_\\text{mm}(k).\n\\end{equation}\nWe refrain from explicitly including the loop corrections, since they contain third order biases not present in the bispectrum as well as scale-dependent biases $\\propto k^2$ \\cite{assassi\/etal}. \nThe advantage of the halo-matter cross power spectrum over the halo-halo power spectrum is that it is free of shot noise. To ensure that our measurements are not contaminated by higher order contributions or scale dependent bias, we will \nin fact fit $P_\\text{hm}(k)=(b_1 + b_{P,k^2} k^2 ) P_\\text{mm}(k)$ to the simulation\nresults, where $b_{P,k^2}$ is a free nuisance parameter. This term absorbs the\nloop corrections in the large-scale limit. \nWe measure the matter and halo power spectra in the same wavenumber bins in the simulation and take their ratio to cancel the leading cosmic variance, i.e. we define a quantity $q(k)=P_\\text{hm}(k)\/P_\\text{mm}(k)$ and the $\\chi^2$\n\\begin{equation}\n\\chi^2_P=\\sum_{k}^{k_\\text{max}} \\left(\\frac{q(k)-b_1-b_{P,k^2}k^2}{\\sigma[q(k)]}\\right)^2,\n\\end{equation}\nwhere the variance $\\sigma^2(q)$ is estimated from the box-to-box scatter between the simulation realizations. \n\nLet us now turn to the bispectrum. One can form three different bispectra containing the halo field, the halo-halo-halo, the halo-halo-matter and the halo-matter-matter bispectrum. We are using the latter, since it is the only bispectrum free of shot noise. Furthermore, we will employ the unsymmetrized bispectrum, where the halo mode is the one associated with the wavevector $\\vec k_3$. This unsymmetrized bispectrum measurement allows for a clear distinction of the second order local bias $b_2$ and tidal tensor bias $b_{s^2}$, once the matter bispectrum is subtracted out. 
The unsymmetrized tree-level bispectrum reads\n\\begin{equation}\nB_\\text{mmh}(k_1,k_2,k_3)=b_1 B_\\text{mmm}(k_1,k_2,k_3) + b_2 P(k_1)P(k_2)+2 b_{s^2} S_2(\\vec k_1,\\vec k_2) P(k_1)P(k_2)\\; ,\n\\end{equation}\nwhere $B_\\text{mmm}$ is the tree-level matter bispectrum (e.g., \\cite{Baldauf:2012}), and we employed the tidal operator $S_2$ defined as\n\\begin{equation}\nS_2(\\vec k_1,\\vec k_2)=\\left(\\frac{\\vec k_1\\cdot \\vec k_2}{k_1^2 k_2^2}-\\frac{1}{3}\\right).\n\\end{equation}\nSimilarly to the power spectrum defined above, this bispectrum does not include loop corrections or scale dependent biases. Thus, we again add a term of the form\n$b_{B,k^2} (k_1^2+k_2^2)P(k_1) P(k_2)$ with a free coefficient $b_{B,k^2}$,\ndesigned to absorb the loop corrections. \nTo cancel cosmic variance, we define the ratio of bispectrum and power spectrum measurements\n\\begin{equation}\nQ(k_1,k_2,k_3;b_1)=\\frac{B_\\text{mmh}(k_1,k_2,k_3)-b_1 B_\\text{mmm}(k_1,k_2,k_3)}{P_\\text{mm}(k_1) P_\\text{mm}(k_2)},\n\\end{equation}\nand using this we define the corresponding $\\chi^2$\n\\begin{equation}\n\\chi^2_B=\\sum_{k_1,k_2,k_3}^{k_\\text{max}} \\left(\\frac{Q(k_1,k_2,k_3;b_1)-b_2 -2b_{s^2}S_2-b_{B,k^2}(k_1^2+k_2^2)}{\\sigma[Q(k_1,k_2,k_3;b_{1,\\text{fid}})]}\\right)^2\\; ,\n\\end{equation}\nwhere the variance of $Q$ is estimated from the box-to-box scatter between the simulation realizations for a fiducial $b_{1,\\text{fid}}$. Equivalent results could have been obtained using the estimator presented in \\cite{Schmittfull:2014tca}. We decided to stick with the more traditional bispectrum estimation for the following reasons: for their method the smoothing scale of the fields needs to be chosen before the simulation data is reduced, complicating convergence tests. Furthermore, \\cite{Schmittfull:2014tca} ignored two-loop corrections to their estimator and higher derivative terms, while we marginalize over an effective shape accounting for the onset of scale dependence. A detailed comparison of the two methods is however beyond the scope of this work.\n\nAll measurements are done on the ``lowres'' and ``highres'' sets of the fiducial cosmology. \nWe find the best fit biases $b_1$ and $b_2$ by sampling the log-likelihood $\\ln \\mathcal{L}=-\\chi^2_\\text{tot}\/2$, where $\\chi^2_\\text{tot}=\\chi^2_P+\\chi^2_B$ using the Markov Chain code EMCEE \\cite{emcee}. \nThe errors on the bias parameters are estimated from the posterior distribution of sampling points after marginalizing over the (for our purposes) nuisance parameters $b_{P,k^2}$, $b_{B,k^2}$ and $b_s^2$. \nWe have varied that maximum wavenumber $k_\\text{max}$ to ensure that we remain in the regime where the tree level bias parameters remain consistent with increasing $k_\\text{max}$. Further, we demand that the total $\\chi^2$ per degree of freedom is approximately unity. The results shown below use a conservative value of $k_\\text{max} = 0.06 \\; h \\, {\\rm Mpc}^{-1}$. This limits the number of modes to $\\mathcal{O}(100)$ and thus also the number of power and bispectrum configurations. Due to the cancellation of the leading order cosmic variance this is not of major concern. We have compared the clustering constraints with a larger $2400\\; h^{-1}\\text{Mpc}$ box providing a factor of 100 more modes to the same cutoff and found consistent results.\n\n\\section{Results}\n\\label{sec:res}\n\nThis section presents the results for the Eulerian bias parameters $b_1$ to $b_3$. 
\n\\section{Results}\n\\label{sec:res}\n\nThis section presents the results for the Eulerian bias parameters $b_1$ to $b_3$. For completeness, we also present results for $b_4$, which is poorly constrained, in \refapp{b4}.\n\nIn order to obtain a precise comparison between any theoretical prediction for the bias $b_n(M)$ (such as the ESP, \refeq{btheo}) and our data points, we convolve the theoretical prediction with the mass bins used in the simulation (see \refsec{bsep}). That is, the theory predictions we will show in the following are given by\n\\begin{equation}\nb_n^{\rm conv}(M) = \frac{\int{W_n(M',M) n(M') b_n(M') {\rm d} M'}}{\int{ W_n(M',M) n(M'){\rm d}M'}},\n\\label{eq:b1TbinConv}\n\\end{equation} \nwhere $W_n(M',M)$ is the window function of the mass bin given by \refeq{bins}, and $n(M')$ is the differential halo mass function, parametrized by the fitting formula of Eq. (2) in T08. \nIn this way, we obtain smooth curves for the theory prediction whose\nvalue at the center of a given mass bin can be compared directly to the simulation results.\n\n\\subsection{Linear bias}\n\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.55]{b1_comp_deg5.pdf} \n\\caption{\\textbf{Top panel:} comparison between the linear halo bias from separate universe simulations (green dots), and from clustering (red crosses; displaced slightly horizontally for clarity). Error bars that are not visible are smaller than the marker size. The solid black curve is the Tinker et al. (2010) best-fit curve for $b_1$, while the dot-dashed green curve is the ESP prediction \refeq{btheo}. We also show the result obtained by applying the PBS argument [\refeq{biasPBS}] to the T08 \nand ST99 mass functions (blue dashed curves). \\textbf{Bottom panel:} relative difference between the measurements and the Tinker et al. (2010) best fit.} \n\\label{fig:b1}\n\\end{figure}\n\n\refFig{b1} presents the results for $b_1$. The green points show the results obtained from the separate universe simulations, while the red crosses show those from fitting $P_{\text{hm}}$ and $B_{\text{mmh}}$. The mutual agreement of the two measurements is very good (the only point with a relative difference greater than the $1\sigma$ uncertainty is at $\log M=13.15$). The error bars of the separate universe measurements are significantly smaller. Note however that the effective volume used by these measurements is also larger, since the halo-matter power spectrum was only measured in the fiducial boxes. This is a first validation of the separate universe method and also demonstrates its efficiency.\n\nThese results are consistent with the ones presented in \cite{li\/etal:15}, who derived the linear bias from abundance matching. Since Ref.~\cite{li\/etal:15} used a linearized implementation of separate universe simulations, they are restricted to small overdensities (they take $\delta_\rho= \pm 0.01$), resulting in very small changes in the halo abundance. For such small changes, abundance matching is much more efficient than binning halos into finite mass intervals. We circumvent this issue by using fully nonlinear separate universe simulations, which allow us to simulate arbitrary values of $\delta_\rho$. \n\nWe also compare our data with several results from the literature. The solid black curve is the fit to $P_{\text{hm}}$ measurements from Tinker et al. (2010) \cite{Tinker:2010} [their Eq. (6)]. As shown in the lower panel of\n\refFig{b1}, the agreement is better than 5\%, the quoted accuracy of the\nfitting formula.
Note that we do not remove unbound particles from our\nhalos, which we expect to lead to a slight underestimate of the bias at the few percent level at low masses. \nNext, we turn to the ``standard'' peak-background split \nargument \\refeq{biasPBS} applied to the universal mass functions of ST99 \nand T08 (blue dashed curves). At low masses, the T08 curve is at 1\\% level agreement but the ST99 prediction overestimates the bias by around 8\\%. The agreement is worse at high mass where these two curves underestimate the bias by around 8\\% and 11\\% respectively.\n\nThe green dot-dashed line finally shows the prediction from excursion set\npeaks \\refeq{btheo}. The agreement at high masses is excellent, where\nthe ESP matches the measured $b_1$ to better than 2\\%. The agreement is far less good at low masses where the ESP prediction overestimates the bias by roughly 10\\%. Note that the assumption that halos correspond to peaks in the\ninitial density field is not expected to be accurate at low masses\n\\cite{ludlow\/porciani}. Part of the discrepancy might also come from the up-crossing criterion applied to derive the ESP prediction, which is only expected to be accurate at high masses \\cite{Musso:2013}. It is worth emphasizing that \\refeq{biasPBS} still \napplies in the case of the ESP. That is, the large-scale bias can still be\nderived directly from the mass function. The key difference to the\nPBS curves discussed previously is that, following \\cite{Paranjape:2013},\nwe employ a stochastic moving barrier, which changes the relation between\nmass function and bias. This more realistic barrier leads to the\nsignificant improvement in the prediction of the bias for high-mass halos. \n\n\\subsection{Higher order biases}\n\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.5]{b2_comp_deg5.pdf} \n\\caption{\\textbf{Top panel:} same as \\refFig{b1}, but for the quadratic bias $b_2$. The color code is as in \\refFig{b1}. \\textbf{Bottom panel:} relative difference between measurements and the theoretical prediction of the ESP. In each panel, the clustering points have been horizontally displaced as in \\refFig{b1}.} \n\\label{fig:b2}\n\\end{figure}\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.55]{b3_comp_deg5.pdf} \n\\caption{As \\refFig{b2} but for $b_3$.} \n\\label{fig:b3}\n\\end{figure}\n\n\\refFigs{b2}{b3} present the analogous results of \\refFig{b1} for $b_2$ and $b_3$, respectively. For $b_2$ at masses below $10^{13.5} h^{-1} M_\\odot$, there is some scatter in the separate universe results that is apparently larger than what is expected given the error bars (a hint of a similar effect can be seen in $b_1$ as well). \nNote however that there is significant residual degeneracy\nbetween the $b_n$ for a given mass bin, so that a ``$\\chi$-by-eye''\ncan be misleading. As an example, we show projections of the likelihood for one mass bin in \\refFig{contours}. \nThe covariance between the bias parameters is further explored in \\refapp{cov}. Covariance in the halo shot noise between different mass bins, which we do not take into account in the likelihood, could also contribute to the fluctuations in the bias parameters.\n\nIn the case of $b_2$, we can compare the separate universe results to the results of fitting to $P_{\\text{hm}}$ and $B_{\\text{mmh}}$. Again, we find good agreement, with all points being within $2\\sigma$ from each other. Note that $b_2$ is most difficult to constrain from correlations around its zero-crossing. 
The difference in constraining power between the two methods is now even larger than in the case of $b_1$. This is because, when using correlations, $b_2$ has to be measured from a higher order statistic which has lower signal-to-noise. \nIn the case of $b_3$, a measurement from correlations would have to\nrely on the trispectrum and accurate subtraction of 1-loop contributions in \nperturbation theory. We defer this significantly more involved measurement\nto future work. \nAs discussed in the introduction, it is difficult to rigorously compare these\nmeasurements to previously published results, since those were measured\nat a fixed smoothing scale and did not take into account nonlocal bias\nterms. Nevertheless, our results for $b_2$ and $b_3$ appear broadly consistent with those of \n\\cite{angulo\/baugh\/lacey:2008,paranjape\/etal:2013}\nand \\cite{angulo\/baugh\/lacey:2008}, respectively.\n\nWe again compare with the peak-background split results, now derived at\nsecond and third order from the ST99 and T08\nmass functions. For $b_2$, at low mass, both predictions deviate from our measurements by about 50\\%. At high mass, the deviation is at most 25\\% for T08 and 40\\% for ST99. In the low mass range, this apparently big discrepancy is also due to the smallness of the absolute value of $b_2$. \nIn the case of $b_3$, the PBS predictions using either the T08 or ST99\nmass functions are in fact completely consistent with the measurements \nat masses $\\gtrsim 10^{12.7} h^{-1} M_\\odot$ and $10^{13.5} h^{-1} M_\\odot$, respectively.\n\nTurning to the ESP prediction, we again find very good agreement at\nhigh masses, although for $b_2$ and $b_3$ the performance is not significantly better than the\nPBS-derived biases from the T08 mass function. At low masses,\nwe again find larger discrepancies, with the ESP now underpredicting\nthe magnitude of $b_2$ and $b_3$. The same caveats regarding the relation of low-mass halos to peaks and the efficiency of the up-crossing condition apply here, i.e. we do not expect the ESP\nprediction to work well for those masses.\\\\ \n\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.37]{b2overb1_of_b1_zdep.pdf} \n\\includegraphics[scale=0.37]{b3overb1_of_b1_zdep.pdf}\n\\caption{$b_2$ and $b_3$ as a function of $b_1$ obtained from separate universe simulations and for different redshifts. The dashed curves present the third order best fit polynomial for each bias. See text for details about the fit.} \n\\label{fig:b2b3(b1)}\n\\end{figure}\n\nSo far, we have only shown results at redshift $0$. \\refFig{b2b3(b1)}\nshows results from various redshifts by plotting $b_2,\\,b_3$ as functions of\n$b_1$. If the bias parameters are uniquely determined by $\\sigma_0 = \\sigma(M)$, then this relation will be redshift-independent. Indeed, we find no\nevidence for a redshift dependence over the range $z = 0 \\dots 2$ and\n$b_1 = 1 \\dots 10$. Note that we have kept the overdensity criterion\n$\\Delta_{\\rm SO}=200$ fixed. \nSince the separate universe simulation measurements of $b_2$ and $b_3$ \nare very accurate, we provide fitting formulas in \nthe form of $b_n(b_1)$ for convenience. \nGiven the consistency with a universal behavior, we perform a joint fit of results from all redshifts. We use a $3^{\\rm rd}$ order polynomial form for both $b_2$ and $b_3$. Again, we use a weighted least squares method for the fit but do not take into account the error on $b_1$ since it is much smaller than those in $b_2,\\,b_3$. 
We obtain\n\\begin{equation}\nb_2(b_1) = 0.412-2.143\,b_1+0.929\,b_1^2+0.008\,b_1^3, \n\\label{eq:b2(b1)} \n\\end{equation}\nand\n\\begin{equation}\nb_3(b_1) = -1.028+7.646\,b_1-6.227\,b_1^2+0.912\,b_1^3. \n\\label{eq:b3(b1)} \n\\end{equation}\nThe fits are shown as dashed lines in the two panels of \refFig{b2b3(b1)}. Notice that we restricted ourselves to $b_1 < 9.6$ in these figures for clarity, but we used the full range of results to produce the fits.\nNote that one should be careful when using these formulas outside the fitting range $1\lesssim b_1\lesssim 10$.\n\refeqs{b2(b1)}{b3(b1)} are similar to the fitting formulas provided in \cite{Hoffmann:2015}, who fitted $2^{\rm nd}$ and $3^{\rm rd}$ order polynomials for\n$b_2(b_1)$ and $b_3(b_1)$, respectively, to PBS predictions, and found no redshift dependence of their results. Such universal relations were already apparent in \cite{Saito:2014} (their figure 9).\n
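Since \refeqs{b2(b1)}{b3(b1)} are intended for direct use in forecasts, a minimal implementation is given below; the coefficients are those quoted above, and the validity range $1\lesssim b_1\lesssim 10$ should be kept in mind.
\\begin{verbatim}
def b2_of_b1(b1):
    # Eq. (b2(b1)); joint fit to all redshifts, roughly valid for 1 < b1 < 10
    return 0.412 - 2.143*b1 + 0.929*b1**2 + 0.008*b1**3

def b3_of_b1(b1):
    # Eq. (b3(b1)); same fitting range as above
    return -1.028 + 7.646*b1 - 6.227*b1**2 + 0.912*b1**3
\\end{verbatim}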
\n\\section{Conclusions}\n\\label{sec:concl}\n\nWe have presented a new method to measure the large-scale, renormalized\nlocal density bias parameters $b_n$ of dark matter halos, with $n=1,2,3$,\nby running simulations that include an infinite-wavelength density\nperturbation of arbitrary amplitude. This method can be seen as an \nexact implementation of the peak-background split. \nIt has several advantages, including a simple implementation applicable,\nin principle, to arbitrarily high $n$. The most important advantage,\nhowever, is that the measured biases are not affected\nby the modeling of scale-dependent or nonlinear corrections, and there\nis no ambiguous choice of $k_{\rm max}$, with the associated risk of \noverfitting, as when\nfitting halo $N$-point functions. The most significant disadvantage\nof the method is that it needs a set of dedicated simulations with\nvarying cosmological parameters to generate a range of $\d_L$ (note\nhowever that once the simulations are done, they can be used for\nvarious studies, such as for example the nonlinear power spectrum\nresponse \cite{Wagner:2015}). \n\nWe have compared our results for $b_1$ and $b_2$ to those measured\nfrom the halo-matter power spectrum and halo-matter-matter bispectrum,\nand find excellent agreement overall. One necessary condition for this\nagreement is a careful fitting procedure of the halo statistics and choice of $k_{\rm max}$. \n\nWe also compared our results to predictions based on the analytical peak-background\nsplit. Once a specific barrier $B$ is assumed, the PBS allows for\na derivation of all local bias parameters $b_n$ from a given halo mass\nfunction. The simplest and most common choice is $B=\d_c$, which we have\napplied to the ST99 and T08 mass function prescriptions. \nWe found that even though the latter provides a very accurate mass function, the linear bias derived via the PBS and simple collapse threshold is only accurate at the $\sim 10$\% level, in agreement\nwith previous results \cite{manera\/etal:2010}. The situation is even worse for $b_2$, with up to 50\% discrepancy at low mass, although the absolute difference between the PBS predictions and the measurements is similar to that in $b_1$. For $b_3$, the simple PBS predictions are consistent with the measurements (at least at high masses), but this is not a very strong statement given the large error bars on $b_3$. \n\nWe also derived the biases predicted in the excursion set-peaks\napproach, which includes a stochastic moving barrier motivated by \nsimulation results. At high mass, this performs much better, at least for $b_1$, showing\nthat the choice of barrier is a key ingredient in deriving accurate\nbias parameters. In this context, it is important to note that previous\nresults on the inaccuracy of PBS bias parameters \cite{manera\/etal:2010}\nrelied on the simple constant threshold $B=\d_c$. This shows that the cause of \nthese inaccuracies is not the peak-background split itself. The\ninaccuracy of the peak-background split thus depends on what one\ndefines PBS to mean, and can be summarized as follows:\n\\begin{itemize}\n\\item The PBS implemented via the separate universe approach is exact.\n\\item The PBS using a simulation-derived stochastic moving barrier \cite{robertson\/etal,Paranjape:2013}, as in the ESP, is accurate to a few percent, at least at high masses. The discrepancy found at low mass can be explained by the failure of the peak assumption at such masses, an issue unrelated to the choice of the barrier.\n\\item The PBS using the constant spherical collapse barrier is accurate to no better than 10\%.\n\\end{itemize} \n\nWe also provide fitting formulas for $b_2,\,b_3$ as functions of $b_1$,\nwhich are valid over a range of redshifts and \ncan be useful for predictions and forecasts based on halo statistics,\nsuch as for example the halo model. \n\nIn the future, we plan to extend our analysis to accurately measure\nassembly bias, i.e. the dependence of bias on halo properties beyond\nthe mass (e.g., \cite{gao\/etal,wechsler\/etal,dalal\/etal}). \nFurther, it will be interesting to extend this technique beyond the \ninfinite-wavelength, spherically symmetric ``separate universe'' to allow for precision measurements of the \ntidal and scale-dependent biases. \n\n\\acknowledgments{We thank Aseem Paranjape, Marcello Musso and Vincent~Desjacques for useful discussions about the ESP. F.S.~acknowledges support from the Marie Curie Career Integration Grant (FP7-PEOPLE-2013-CIG) ``FundPhysicsAndLSS''. T.B.~gratefully acknowledges support from the Institute for Advanced Study through a Corning Glass Works Foundation grant.}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:introduction}\nThe promise of automating tedious tasks and keeping humans away from hazardous environments has driven the development of robotic systems to reach tremendous capabilities.\nWith the increased maturity of single-agent systems, the interest in teams of robots collaborating with each other has been steadily rising.\nThe use of such a team of robots collaborating towards a common goal promises to increase the efficiency and robustness of a mission by distributing the tasks among the participating agents and has been proposed for a variety of different tasks, such as search-and-rescue missions \cite{srr}, archaeological mapping \cite{col_arch_map}, precision agriculture \cite{agri-robots}, and surveillance \cite{surveillance}.\nSharing information across the agents not only enables the robots to perform a task faster but also enables the individual agents to make better-informed decisions, as they can profit from information beyond their own gathered experience, as shown in \cite{bartolomei2020multi}.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.48\\textwidth]{images\/expt_eurocv3.png}\n \\caption{Collaborative SLAM estimate for 5 agents (EuRoC MH Dataset) running different VIO front-ends. 
The COVINS-G back-end does not require map points (shown here only for visualization) and thus, is compatible with any arbitrary VIO front-end.\n }\n \\label{fig:irchel_5ag} \n \\vspace{-20pt}\n\\end{figure}\n\nHowever, in order for the robots to work towards any higher-level goal, they need to be aware of their surroundings and their pose in their workspace.\nMoreover, for the robots to be able to collaborate, the knowledge of the pose of all other robots within the team is crucial.\nThe use of external sensors, such as GPS or motion capture systems can provide such a shared reference frame enabling the coordination of the robotic team, however, for many practical applications, such data is not reliable or simply not available in the first place.\nFor example, the uncertainty of GPS measurements can be in the order of tens of meters close to larger structures (e.g. within a city) or not available at all inside buildings or underground.\nIn order to remove dependencies on external sensors, research into Simultaneous Localization And Mapping (SLAM) has made significant progress.\nIn particular, the use of cameras and Inertial Measurement Units (IMUs) for visual-inertial SLAM has proven to provide robustness and accuracy, which led to their deployment onboard products in the market already.\nWith the increasing maturity of single-agent vision-based SLAM techniques, the extension towards multi-agent SLAM as a core enabler for robotic collaboration in real-world scenarios has been increasingly gaining interest, sparking a variety of works addressing multi-agent SLAM \\cite{multi-uav-slam, ccm-slam, cvi-slam, door-slam, kimera-multi, covins}. \nWhile the capabilities and robustness of the developed systems have been steadily increasing, due to their tailored architectures, their modularity is often sacrificed. \nAs a result, any exchange and modification of the front-end onboard such systems most often requires significant effort to adapt the back-end to the structure.\nIn this spirit, this work addresses this issue by proposing a generic back-end solution built on top of the architecture of \\cite{covins}.\nAs the proposed approach requires only 2D features, it is agnostic to the front-end running onboard each agent and even allows mixing different front-end algorithms on the different agents during the same mission as illustrated in Figure \\ref{fig:irchel_5ag}.\nIn summary, the contributions of this work are the following:\n\\begin{itemize}[leftmargin=10pt]\n \\item a generalized collaborative SLAM back-end, which requires only 2D keypoints and a pose estimate to fuse the estimates from multiple agents, enabling the use of any arbitrary VIO and stereo front-ends onboard each agent, \n \\item a publicly available codebase\\footnote{\\href{https:\/\/github.com\/VIS4ROB-lab\/covins}{https:\/\/github.com\/VIS4ROB-lab\/covins}}, which is integrated with the framework of \\cite{covins}. Furthermore, a front-end wrapper is provided, to support any off-the-shelf front-end, and\n \\item an extensive evaluation of the proposed back-end on both the EuRoC dataset \\cite{euroc} as well as newly collected datasets. 
Our evaluation reveals the flexibility of the proposed approach, using and combining different types of front-ends onboard collaborating agents within the same mission.\n\\end{itemize}\n\n\\vspace{0pt}\n\\section{Related Work}\n\\label{sec:relatedwork}\nThe capability to process multiple trajectories sequentially ({\\it{aka}} the multi-session capability) can be seen as a special case of collaborative SLAM.\nRecent SLAM systems, such as ORB-SLAM3\\cite{ORBSLAM3_TRO} and VINS-mono \\cite{vins-mono} have such multi-session capabilities, which enables them to achieve joint pose and scene estimates similar to collaborative SLAM estimates.\nWhile these approaches achieve greater accuracy and robustness when compared to single-agent SLAM, they are not designed to be used in real-time applications where multiple agents are operating at the same time. \n\n\nIn multi-agent SLAM literature, the classification of the systems is generally made into decentralized and centralized architectures.\nOne of the first decentralized approaches to multi-agent SLAM is DDF-SAM \\cite{ddfsam}, which communicates and propagates condensed local graphs between the robots to distribute the information.\nCombining efficient decentralized place recognition \\cite{Cieslewski:Scaramuzza:MRS2017} and with a Gauss-Seidel based distributed pose graph optimization \\cite{Coudhary:etal:ICRA2016}, in \\cite{Cieslewski:etal:ICRA2018} a data-efficient and complete decentralized visual SLAM approach was proposed. \nIn \\cite{dist-coslam}, a monocular vision-only distributed SLAM for mapping large-scale environments is presented. \nThe recent works \\cite{door-slam, kimera-multi} both make use of distributed pose graph optimization schemes along with a robust mechanism for identifying and rejecting incorrect loop closures. 
\nWhile these distributed SLAM approaches have advantages in terms of scalability, in general, as the information is also distributed the extent of collaboration is limited in order to keep the communication requirements feasible.\n\nOn the other hand, in centralized systems, all relevant information passes through a central node.\nIn \\cite{PG-slam}, the authors present a back-end for real-time multi-robot collaborative SLAM where the server combines the local pose graphs obtained from different agents into a global pose graph.\nIn MOARSLAM \\cite{moarslam}, each agent runs a full SLAM pipeline onboard, while the server is used to store the agents' maps and perform map merges across them.\nAs the agents perform all the computationally expensive tasks, in particular global optimization, this approach is not well-suited for resource-constrained platforms.\nOn the other end of the spectrum is C${}^2$TAM \\cite{riazuelo2014c}, which offloads all tasks onto a server platform, except pose tracking.\nWhile this allows for very limited computational load onboard the agents, it limits the autonomy of the agents, as a loss of connection to the server eventually causes the onboard tracking to fail.\nA middle ground between MOARSLAM and C${}^2$TAM was proposed in \\cite{multi-uav-slam}, which introduces a system architecture enabling the agents to function autonomously, but is still able to offload heavy computations to a server and crucially, enable two-way information flow between the agents and the server.\nThis work was extended in CCM-SLAM \\cite{ccm-slam}, shown to perform in real-time on real data, proposing redundancy detection that enabled scalability, which is key in large-scale missions.\nPushing for the incorporation of inertial cues onboard each agent, aside from the monocular cues, CVI-SLAM \\cite{cvi-slam} was demonstrated to achieve higher accuracy and metrically scaled SLAM estimates aligned with the gravity direction -- which are core to robotic autonomy in reality.\nMost recently, COVINS \\cite{covins} was proposed, revisiting the most important components for centralized collaborative SLAM, shown to achieve significant gains in accuracy, while pushing the scalability of the architecture to up to 12 participating agents.\n\nDespite the improved performance, however, one of the major limitations of COVINS is that this performance is highly dependent on the choice of the VIO front-end employed onboard the agents.\nFor example, using an ORB-SLAM3\\cite{ORBSLAM3_TRO} front-end with the COVINS back-end gives an exceptional performance, but the performance drops significantly when using an alternative state-of-the-art VIO front-end, such as VINS-mono \\cite{vins-mono}.\nThis is mainly caused due to the reliance of COVINS on large numbers of highly accurate map points for closing loops, which holds for ORB-SLAM3, for example, but not for VINS-mono. 
\nBy utilizing a generic multi-camera relative pose solver for map fusion and loop-closures, we break the dependency on highly accurate map points and instead perform these operations using only 2D image observations.\nThis not only enables COVINS-G to perform well with virtually any VIO front-end, but also permits the use of more powerful image features, which, in turn, allows handling drastic viewpoint changes where previous systems failed.\n\n\n\\section{Methodology}\n\\label{sec:methodology}\nIn this section, we first provide an overview of the overall architecture in Section \ref{sec:architecture}.\nSince our system is closely inspired by the COVINS framework, we then focus on the individual modules of the whole architecture which are directly impacted by the contributions of this work.\n\n\\subsection{System Overview}\n\\label{sec:architecture}\nA summary of our system architecture can be seen in Figure \ref{fig:architecture}.\nThe system consists of $n$ individual agents, which all run their own local VIO independently and are able to communicate their Keyframe (KF) information to a central server.\nThe communication module is based on the approach of \cite{covins}, and is shown to be resilient to potential message losses as well as network delays and occasional bottlenecks in the bandwidth.\nHowever, owing to the fact that each agent is running an independent VIO, even a complete loss of connection does not compromise the autonomy of an agent.\n\nThe central server, on the other hand, is responsible for maintaining the data of the individual agents, fusing the information from the agents, and carrying out computationally expensive tasks, such as global optimization.\nAfter decoding the received KF information, for every KF a visual descriptor based on the Bag-Of-Words (BoW) approach gets computed and added to the KF Database containing the visual descriptors of all collected KFs.\nFor every incoming KF, a place-recognition query is performed to find the closest visual matches (candidate KFs).\nBased on the visual similarity, we attempt to compute the transformation between the newly received KF and the candidate KFs.\nUpon successful computation of the transformation between the KFs, it is used either to perform a loop closure within a map or, if the two KFs belong to different maps, to fuse these maps based on the computed transformation.\n\nAt the start of a mission, each of the agents is initialized separately and has a dedicated map on the server.\nIn our system, the map closely resembles a Pose Graph, where nodes correspond to KFs and edges are based on the relative poses from the odometry and loop closures.\nAs soon as a loop is found across two agents, their maps get merged together by transforming the map of one agent into the coordinate frame of the other based on the estimated loop transformation.\nUpon detection of any loop, a Pose Graph Optimization (PGO) is carried out over all connected maps.\n\n\nAs opposed to COVINS, which strongly relies on having a well-estimated and consistent set of map points, in our approach we only operate on 2D keypoint information.\nThis opens up the possibility to use and combine all sorts of front-ends, even if no map points are accessible at all (e.g. 
in the case of an off-the-shelf tracking sensor such as the T265).\nIn order to be able to compute the loop constraints without access to map points, we make use of a multi-camera relative pose estimation algorithm \\cite{17PT} by treating neighboring KFs as a multi-camera system.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.47\\textwidth]{images\/covins_arch_v3.pdf}\n \\caption{Overview of the COVINS-G system architecture.\n }\n \\label{fig:architecture} \n \\vspace{-19pt}\n\\end{figure}\n\n\n\\subsection{Loop Closure and Map Fusion}\n\\label{sec:loop_closure}\nThe process of closing loops consists of two main processes; first, suitable candidates for the current query KF are detected, and second, the candidates get geometrically verified by estimating the relative pose between the candidate and the query KF.\nThe first step is handled by the Place Recognition module, which queries the KF Database for similar KFs based on the BoW \\cite{dbow2} image descriptor.\n\nThe second step, the geometrical verification, is used as an additional check beside the visual appearance and in order to obtain a transformation between the query KF $\\text{KF}_q$ and the candidate KF $\\text{KF}_c$.\nHence, with $q$ being the sensor coordinate frame of the $\\text{KF}_q$ and $c$ being the coordinate frame of the sensor for $\\text{KF}_c$, the goal is to find the transformation $T_{cq}$, describing the transformation from frame $q$ into frame $c$.\nThe standard way of computing this transformation is to establish 3D-2D correspondences between the query KF and the map points associated with the candidate KF. \nHowever, in COVINS-G the system does not have access to 3D map points, but only 2D keypoints.\nEstimating the transformation between $\\text{KF}_q$ and $\\text{KF}_c$ using standard 2D-2D relative pose estimation algorithms like the 5-point algorithm \\cite{5PT} would result in a scale ambiguity.\nInstead, we propose to not only use the query and candidate KFs but also use some of their neighboring KFs and the relative odometry transformations to form two sets of cameras which can be treated as a multi-camera system as illustrated in Figure \\ref{fig:17_PT}.\nUsing the 17-point algorithm \\cite{17PT}, the transformation between two such camera systems can be estimated at metric scale.\n\n\\subsubsection{The 17-point Algorithm}\n\\label{sec:seventeen_pt}\nThe 17-point (or 17-ray) algorithm \\cite{17PT} can be used to solve for the relative motion between two multi-camera systems. \nA multi-camera system consists of more than one camera where the relative transformations between the cameras are known. \nUsing 17 2D-2D point correspondences, the relative transformation between two viewpoints of a multi-camera system can be estimated. 
\nTo do so, the generalized epipolar constraints \cite{generalized_epipolar} are used, which can be seen as an extension of the epipolar constraints to systems without a central point of projection.\nNote that in our setup we only have a monocular camera; however, we can leverage the fact that we have a good estimate of the relative pose between adjacent KFs provided by the VIO front-end.\nHence, for a $\text{KF}_a$, we take the $m$ neighbors $\text{KF}_{a_{n_i}}, i \in [1, m]$ of $\text{KF}_a$ and use the relative transformations $T_{aa_{n_{i}}}$ to build up a conceptual multi-camera system (Figure \ref{fig:17_PT}).\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.45\\textwidth]{images\/17PT_v5.png}\n \\caption{The 17-point algorithm requires 17 2D point correspondences to estimate the relative transformation $T_{cq}$ between the multi-camera systems $S_{c}$ and $S_{q}$ (represented by blue dashed lines). }\n \\label{fig:17_PT}\n \\vspace{-15pt}\n\\end{figure}\n\n\n\\subsubsection{Implementation}\n\\label{sec:seventeen_pt_approach}\nTo estimate the transformation between the candidate KF ($\text{KF}_c$) and the query KF ($\text{KF}_q$), we extend the neighborhood to form two multi-camera systems $S_{q}$ and $S_{c}$ for $\text{KF}_q$ and $\text{KF}_c$, respectively.\nIn our setup, the two sets comprise one additional neighbor for $\text{KF}_q$ and two additional neighbors for $\text{KF}_c$, i.e. $S_{q} = \{ \text{KF}_q, \text{KF}_{q_{n_1}} \}$ and $S_{c} = \{ \text{KF}_c, \text{KF}_{c_{n_1}}, \text{KF}_{c_{n_2}}\}$.\nIn order to establish the 2D-2D candidate correspondences, we first perform a brute-force descriptor matching across all pairs of the two sets $S_{q}$ and $S_{c}$, leading to 6 sets of correspondences.\n\nInstead of using these correspondences directly with the 17-point algorithm inside a RANSAC framework, we first perform a pre-filtering, as a high outlier ratio would render a RANSAC with 17 correspondences infeasible.\nThe pre-filtering consists of performing a standard 2D-2D RANSAC on each of the 6 sets of correspondences.\nTo reject bad candidates early on, we require a minimum number of inliers (30) in each of the 6 sets.\nOn the remaining inlier correspondences, we carry out a 17-point RANSAC, where during the sampling we ensure that candidates from all sets are present in order to prevent degenerate configurations (e.g. all 17 correspondences from a single set).\nFor both the pre-filtering as well as the 17-point RANSAC, we utilize the OpenGV library \cite{opengv}.\n\nFor accepting a transformation, we require at least 100 inliers after RANSAC.\nTo quantify the uncertainty of the computed transformation, we use a sampling-based approach to compute a covariance matrix.\nThe samples are obtained by repeatedly selecting 17 inliers and computing the transformation using the 17-point algorithm.\nWe favor the sampling approach over an analytical one as it results in a more consistent covariance estimate, since unmodelled effects (e.g. the relative pose uncertainty of the odometry) are better reflected in the samples.\n
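The verification step can be summarized by the following schematic Python sketch. The helper callables (descriptor matcher, 2D-2D RANSAC, 17-point RANSAC, and 17-point solver returning a 6D pose vector) are placeholders standing in for the corresponding OpenGV-based routines and do not reflect the actual library API; only the overall control flow and the sampling-based covariance estimate are meant to be illustrated.
\\begin{verbatim}
import numpy as np

def verify_candidate(S_q, S_c, match, ransac_2d2d, ransac_17pt, solve_17pt,
                     min_inliers_set=30, min_inliers_total=100, n_cov=200):
    # 1) Brute-force matching across all pairs of the two KF sets (6 sets).
    corr_sets = [match(kf_q, kf_c) for kf_q in S_q for kf_c in S_c]

    # 2) Pre-filtering: standard 2D-2D RANSAC per set; reject weak candidates.
    inlier_sets = [ransac_2d2d(c) for c in corr_sets]
    if any(len(s) < min_inliers_set for s in inlier_sets):
        return None

    # 3) 17-point RANSAC; the sampler draws correspondences from all sets
    #    to avoid degenerate configurations (e.g. all 17 from a single set).
    T_cq, inliers = ransac_17pt(inlier_sets)
    if len(inliers) < min_inliers_total:
        return None

    # 4) Sampling-based covariance: repeatedly re-solve the 17-point problem
    #    on random subsets of 17 inliers and take the sample covariance.
    rng = np.random.default_rng()
    samples = []
    for _ in range(n_cov):
        idx = rng.choice(len(inliers), size=17, replace=False)
        samples.append(solve_17pt([inliers[i] for i in idx]))  # 6D pose vector
    cov = np.cov(np.asarray(samples), rowvar=False)

    return T_cq, cov
\\end{verbatim}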
\n\n\\subsection{Pose Graph Optimization}\n\\label{sec:pgo}\nThe Pose Graph Optimization (PGO) in COVINS-G is where the information from the different agents gets fused and the drift in the corresponding trajectories can be corrected.\nThe PGO step is triggered every time a new constraint is added to the graph as a result of the loop detection.\nThe state that is optimized in the PGO consists of all KF poses that are in the corresponding graph. \nIn the following, we denote the pose of a KF $k$ by a rotation $q_{ws_k}$ and a translation $p_{ws_k}$, where $w$ represents the coordinate frame in which the poses are expressed and $s_k$ denotes the device coordinate frame at time-point $k$.\nHence, the state in PGO can be defined as $\mathcal{S} = \{ q_{ws_1}, p_{ws_1}, \cdots, q_{ws_n}, p_{ws_n} \}$, with $n$ being the number of keyframes in the graph.\n\nIn the actual PGO, we optimize the following objective:\n\\begin{equation}\n\\label{eq:pg}\n \\mathcal{S}^{*} = \\underset{\\mathcal{S}}{\\text{arg min}} \\{ \\sum_{k=1}^{n} \\sum_{l=1}^{q} \\norm{e_{kk+l}}^2_{W_{kk+l}} + \\sum_{i,j \\in \\mathcal{L}} \\varrho (\\norm{e_{ij}}^2_{W_{ij}}) \\},\n\\end{equation}\nwhere $\norm{e}^2_{W} = e^TWe$ is the squared Mahalanobis distance with the information matrix $W$, $\mathcal{L}$ denotes the set of all loop-closure edges and $\varrho(\cdot)$ denotes a robust loss function, here the Cauchy loss.\nThe parameter $q$ represents the number of neighbor KFs added as odometry constraints (e.g. $q=1$ would correspond to having an edge only with the subsequent KF). \nThis is used to approximate correlations between poses, which are generally computed in a sliding-window fashion; in our implementation we set $q=4$.\nThe error terms $e_{ij}$ are of the form\n\\begin{equation}\ne_{ij} = \\begin{bmatrix}\n\\left(p_{ij} - \\hat{p_{ij}} \\right)^T & 2 \\cdot \\text{vec}(q_{ij}^{-1} \\cdot \\hat{q_{ij}})^T \n\\end{bmatrix}^{T},\n\\end{equation}\nwhere $\\text{vec}(q)$ extracts the vector part of a quaternion. \nThe variables denoted with $\\hat{\\cdot}$ correspond to measurements, e.g. 
from a loop closure transformation or the delta pose estimated by the odometry front-end.\nWe use the simplified notation $p_{ij}, q_{ij}$ to denote the translation and rotation of the delta pose between the poses of KFs $i$ and $j$, given by $T_{ij} = T_{ws_i}^{-1} T_{ws_j}$.\nFor the optimization, Ceres' implementation of the Levenberg-Marquardt algorithm is used.\nThe information matrices $W$ for the loop constraints are obtained via the estimated covariance outlined in Section \ref{sec:seventeen_pt_approach}.\nThe ones corresponding to the odometry edges are obtained using the expected accuracy of the corresponding front-end.\n
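To make the structure of these error terms concrete, a small quaternion-based Python sketch is given below; it is purely illustrative and stands in for the actual C++\/Ceres implementation, with quaternions in $[w,x,y,z]$ convention.
\\begin{verbatim}
import numpy as np

def quat_mul(a, b):
    # Hamilton product of quaternions [w, x, y, z]
    w1, x1, y1, z1 = a; w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def quat_conj(q):
    return np.array([q[0], -q[1], -q[2], -q[3]])

def rotate(q, v):
    # rotate vector v by unit quaternion q
    return quat_mul(quat_mul(q, np.concatenate(([0.0], v))), quat_conj(q))[1:]

def edge_error(p_i, q_i, p_j, q_j, p_hat, q_hat):
    # relative pose T_ij = T_wsi^{-1} * T_wsj, expressed in frame i
    p_ij = rotate(quat_conj(q_i), p_j - p_i)
    q_ij = quat_mul(quat_conj(q_i), q_j)
    e_rot = 2.0 * quat_mul(quat_conj(q_ij), q_hat)[1:]   # vector part
    return np.concatenate((p_ij - p_hat, e_rot))

def weighted_cost(e, W):
    # squared Mahalanobis distance e^T W e entering the PGO objective
    return float(e @ W @ e)
\\end{verbatim}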
\n\n\\section{Experiments and Discussions}\n\\label{sec:experiments}\nIn our evaluation, we demonstrate the generic nature of our collaborative back-end and its capability to use and combine all sorts of VIO front-ends.\nIn Sections \ref{sec:tracking_cam} and \ref{sec:feature_support} we show two applications where our back-end enables a collaborative estimation which previous approaches cannot support.\nAll our experiments are performed using pre-recorded datasets which we play back in real-time.\nThe agents' front-ends are run on Intel NUC 7i7BNH @3.5 GHz hardware, while the server is run on a laptop with a Core i7-6700HQ @2.6 GHz.\nNote that in our setup the agents and the server are connected via a wireless network, in order for real communication to take place.\nThis setup allows us to obtain more comparable results over multiple runs while still making use of a real wireless network, as would be the case during real-world deployment.\n\n\\subsection{Collaborative SLAM Estimation Accuracy}\n\\label{sec:euroc_eval}\nWe evaluate the accuracy of the collaborative SLAM estimate on the EuRoC Dataset\cite{euroc} using the Machine Hall (MH) and the Vicon Room 1 (V1) sequences to establish a collaborative estimation scenario with three to five participating agents. \nWe use various combinations of front-ends with our back-end and compare the results against COVINS \cite{covins} as well as the state-of-the-art multi-session capabilities of VINS-mono\cite{vins-mono} and ORB-SLAM3 \cite{ORBSLAM3_TRO}.\nAs multi-session methods only support one agent at a time, we process the datasets sequentially, in contrast to the collaborative SLAM approaches, where all agents' processes are run in parallel.\n\nTo showcase the flexibility of our approach, we perform the evaluation of our back-end with different front-end combinations.\nIn the first experimental setup, all agents are operated using the same front-end, which in our experiments is either the VINS-mono or the ORB-SLAM3 front-end.\nIn subsequent experiments, we test COVINS-G using different front-ends for the different agents, namely OpenVINS\cite{Openvins}, VINS-mono, ORB-SLAM3 and SVO-pro\cite{Forster17troSVO}.\nSince COVINS performs a Global Bundle Adjustment (GBA) as a post-processing step at the end of every run, we do not directly compare against it but rather include its performance here as a reference.\nFor more appropriate comparisons with COVINS-G, we include the COVINS result without the additional GBA step.\nThe averaged results over 5 runs are summarized in Table \ref{tab:multi_ag}.\n\nUsing the ORB-SLAM3 front-end, we achieve performance on par with the ORB-SLAM3 multi-session back-end, even though in our approach we do not perform map re-use as in ORB-SLAM3.\nAs ORB-SLAM3 performs a GBA upon every detected loop, it can potentially reach very high accuracy (as COVINS demonstrates with GBA enabled); however, as long as it is able to find sufficient correspondences in the re-localization, no GBA gets triggered.\nTherefore, the overall accuracy is also coupled to the time at which the last GBA step was performed.\nCompared to COVINS without GBA, our approach reduces the error almost by a factor of two.\nThis can be explained by the fact that in COVINS-G we are able to close more loops, as also shown in Table \ref{tab:computation}, and also because the covariance of the loop transformations gets explicitly estimated instead of using fixed heuristics.\n\nThe comparison with the VINS-mono front-end shows a similar outcome: COVINS-G performs similarly to the VINS-mono multi-session back-end.\nThe small gap in performance can be explained by the fact that VINS-mono incorporates a larger number of loops, whereas in COVINS-G we set a minimum number of KFs between consecutive loops in order to reduce the computational burden.\nCompared to COVINS with the VINS-mono front-end, COVINS-G shows a significant improvement, by a factor of around 3.\nThis is because COVINS detects and closes only a few loops, since its loop-closure pipeline is tailored to a high-quality map with a large number of map points.\nDue to this weak connection within and across the trajectories, even the GBA is unable to reduce the error.\nWith COVINS-G, even with a mix of different front-ends, we can see that the performance is in a similar range as when using the VINS-mono front-end, demonstrating the effectiveness of our generic approach.\n\n\\begin{table*}[]\n\\caption{Evaluation of joint-trajectory estimates for different methods on the EuRoC Dataset\cite{euroc} (lowest error in bold) reported as the average trajectory error over 5 runs each. 
$^\\#$For the heterogenous front-end, agents 1-5 utilize OpenVINS\\cite{Openvins}, VINS-mono\\cite{vins-mono}, ORB-SLAM3\\cite{ORBSLAM3_TRO}, SVO-Pro\\cite{Forster17troSVO} and VINS-mono\\cite{vins-mono} front-ends, respectively. As *COVINS\\cite{covins} performs a Global Bundle Adjustment (GBA) step at the end of the run it has been included here for reference only and is excluded from our comparison.\n}\n\\label{tab:multi_ag}\n\\centering\n\\def1.2{1.2}\n\n\\begin{tabular}{|cc|ccc|}\n\\hline\n\\multicolumn{2}{|c|}{\\textbf{Method}} & \\multicolumn{3}{c|}{\\textbf{Translational RMSE (m)}} \\\\ \\hline\n\\multicolumn{1}{|c|}{} & & \\multicolumn{1}{c|}{MH01-MH03} & \\multicolumn{1}{c|}{MH01-MH05} & V101-V103 \\\\ \\cline{3-5} \n\\multicolumn{1}{|c|}{\\multirow{-2}{*}{Front-end}} & \\multirow{-2}{*}{Back-end} & \\multicolumn{1}{c|}{(3 Agents)} & \\multicolumn{1}{c|}{(5 Agents)} & (3 Agents) \\\\ \\hline\n\n\\multicolumn{1}{|c|}{ORB-SLAM3\\cite{ORBSLAM3_TRO}} & ORB-SLAM3 MS\\cite{ORBSLAM3_TRO} & \\multicolumn{1}{c|}{0.041} & \\multicolumn{1}{c|}{0.082} & \\textbf{0.048} \\\\ \\hline\n\\multicolumn{1}{|c|}{ORB-SLAM3\\cite{ORBSLAM3_TRO}} & COVINS (No GBA) \\cite{covins} & \\multicolumn{1}{c|}{0.075} & \\multicolumn{1}{c|}{0.119} & 0.130 \\\\ \\hline\n\\multicolumn{1}{|c|}{{\\color[HTML]{666666} ORB-SLAM3\\cite{ORBSLAM3_TRO}}} & {\\color[HTML]{666666} COVINS (GBA) \\cite{covins}*} & \\multicolumn{1}{c|}{{\\color[HTML]{666666} 0.024}} & \\multicolumn{1}{c|}{{\\color[HTML]{666666} 0.036}} & {\\color[HTML]{666666} 0.042} \\\\ \\hline\n\\multicolumn{1}{|c|}{ORB-SLAM3\\cite{ORBSLAM3_TRO}} & Ours & \\multicolumn{1}{c|}{\\textbf{0.040}} & \\multicolumn{1}{c|}{\\textbf{0.064}} & 0.067 \\\\ \\hline \\hline\n\n\\multicolumn{1}{|c|}{VINS-mono \\cite{vins-mono}} & VINS MS\\cite{vins-mono} & \\multicolumn{1}{c|}{\\textbf{0.062}} & \\multicolumn{1}{c|}{0.100} & \\textbf{0.076} \\\\ \\hline\n\\multicolumn{1}{|c|}{VINS-mono \\cite{vins-mono}} & COVINS (No GBA) \\cite{covins} & \\multicolumn{1}{c|}{0.259} & \\multicolumn{1}{c|}{0.305} & 0.183 \\\\ \\hline\n\\multicolumn{1}{|c|}{{\\color[HTML]{666666} VINS-mono \\cite{vins-mono}}} & { \\color[HTML]{666666} COVINS (GBA) \\cite{covins}*} & \\multicolumn{1}{c|}{ {\\color[HTML]{666666} 0.261}} & \\multicolumn{1}{c|}{{\\color[HTML]{666666} 0.321}} & {\\color[HTML]{666666} 0.183} \\\\ \\hline\n\\multicolumn{1}{|c|}{VINS-mono\\cite{vins-mono}} & Ours & \\multicolumn{1}{c|}{0.081} & \\multicolumn{1}{c|}{\\textbf{0.095}} & 0.090 \\\\ \\hline \\hline\n\n\\multicolumn{1}{|c|}{Heterogenous$^\\#$} & Ours & \\multicolumn{1}{c|}{0.081} & \\multicolumn{1}{c|}{0.090} & 0.088 \\\\ \\hline\n\\end{tabular}\n\\vspace{-10pt}\n\\end{table*}\n\n\\subsection{Large-Scale Outdoor Experiments}\n\\label{sec:outdoor_expt}\nTo demonstrate the applicability of our back-end to large-scale scenarios, we captured a dataset consisting of 4 sequences using a hand-held setup with an Intel Realsense D455.\nThe dataset was recorded by walking on the sidewalks around the campus of ETH Zurich in the center of Zurich and with a combined trajectory length of around 2400$m$ covered an area of approximately 67500$m^2$.\nFor this experiment, the VINS-mono front-end was used on all sequences.\nAs we do not have a ground truth trajectory for the dataset, we superimposed the estimated combined trajectory on a satellite image of the area as illustrated in Figure \\ref{fig:outdoor_expt}. \nAs can be seen, the different trajectories are well aligned with the road structure. 
\nA complete visualization of the whole experiment can be found in the supplementary video.\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=0.48\\textwidth]{images\/expt_outdoor_largescalev3.png}\n \\caption{Joint trajectory estimates for 4 agents superimposed on the satellite image of our large-scale outdoor experiment with a combined trajectory length of 2400 $m$.}\n \\label{fig:outdoor_expt} \n \\vspace{-12pt}\n\\end{figure}\n\n\\subsection{Tracking Camera Support}\n\\label{sec:tracking_cam}\nThis experiment aims at highlighting the generalization capabilities of our system with respect to the front-end that is used.\nTo do so, we captured a dataset with two different sensors, one Intel Realsense D455 camera and one Intel Realsense T265 tracking camera, inside an office floor.\nFor the D455, we used VINS-mono as the front-end, whereas for the T265 we used the odometry estimate that is provided directly by the sensor.\nAs this sensor only provides its images and the corresponding poses but does not offer access to its internal state, we use our ROS-based front-end wrapper, which performs a motion-based keyframe selection, detects feature points, and creates and communicates the KF messages to the back-end.\n\nThe outcome of the experiment is illustrated in Figure \ref{fig:tracking}, where one can see an example image for each of the sensors and the combined trajectory overlaid with the floor plan of the office (the complete experiment is visualized in the supplementary video).\nAs can be seen, the estimated trajectory fits the floor plan, indicating its consistency.\nIn addition to the different odometry sources, the two sensors also have two rather different lenses, rendering matching between the two systems more difficult; nonetheless, our proposed back-end is able to merge the estimates.\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=0.48\\textwidth]{images\/expt_trackingv3.png}\n \\caption{Joint trajectory estimate for a heterogeneous system of 2 agents superimposed on the floor plan of the \n building. Map points are shown for visualization purposes only.}\n \\label{fig:tracking} \n \\vspace{-9pt}\n\\end{figure}\n\n\\subsection{Alternative Feature Descriptor Support}\n\\label{sec:feature_support}\nVisual features are designed to be tolerant to illumination changes, perspective distortions, as well as viewpoint changes.\nFor runtime efficiency, real-time SLAM systems are mainly restricted to highly computationally efficient binary features such as ORB \cite{ORB} or BRISK \cite{Leutenegger:etal:ICCV2011}.\nHowever, it is accepted that, to date, binary features are not able to match the robustness of more expensive descriptors such as, e.g., 
SIFT \\cite{SIFT}.\nAs our approach is largely decoupled from the front-end, we are able to make use of more expensive features, also because we do not require them to be computed at frame rate, thus allowing us to potentially match trajectories with much larger viewpoint differences.\n\nIn this experiment, we recorded another outdoor dataset with two hand-held trajectories walking in parallel on two sides of a street.\nWe modified our front-end wrapper to detect SIFT instead of ORB features and run the back-end using these SIFT features as well.\nWhile using the framework with ORB features no overlap can be detected to merge the trajectories, using the powerful SIFT features allowed us to successfully detect loops across the agents and align the trajectories.\nThe comparison of this experiment can be found in the accompanying video.\nSuch an improved tolerance to large view-point changes allows the system to be used in scenarios where larger view-point changes are inherent to the use case, for example, the collaboration between aerial and ground robots.\n\n\\subsection{Computational and Communication Requirements}\n\\label{sec:computation}\n\\begin{table}[]\n\\caption{Comparison of the runtime for the loop computation and the average communication traffic.}\n\\label{tab:computation}\n\\centering\n\\def1.2{1.2}\n\\begin{tabular}{|c|c|c|c|}\n\\hline\n\\multicolumn{1}{|l|}{\\textbf{}} & \\textbf{\\begin{tabular}[c]{@{}c@{}}\\# Loops\\\\ detected\\end{tabular}} & \\textbf{\\begin{tabular}[c]{@{}c@{}}Computation Time \\\\ per loop\\end{tabular}} & \\textbf{\\begin{tabular}[c]{@{}c@{}}Network Traffic\\\\ per Agent\\end{tabular}} \\\\ \\hline\nOurs & 75 & 213 $\\pm$ 171 ms & 179.74 kB\/s \\\\ \\hline\nCOVINS \\cite{covins} & 57 & 35 $\\pm$ 23 ms & 486.41 kB\/s \\\\ \\hline\n\\end{tabular}\n\\vspace{-15pt}\n\\end{table}\n\nThe statistics for communication and loop transformation computation are generated and compared with COVINS for the experiment performed on EuRoC MH dataset with 5 agents, each running an ORB-SLAM3 front-end. \nThe computation time for estimating the loop transformation using our approach (including feature matching, 2D-2D pre-filtering, 17-point RANSAC and covariance matrix computation) is compared against the standard approach used in COVINS (feature matching + PnP RANSAC) and summarized in Table \\ref{tab:computation}. \nThough the computation time for our method is around an order of magnitude higher than for COVINS, the average computation time per loop is around $213 ms$ which still makes it possible to operate in real-time for up to 5 loop computations per second. \nFor this experiment, a total of 75 loops were detected for a mission time of around 135 seconds, resulting in 0.55 loops per second over all agents. 
\nThe network traffic for our approach is around three times lower than that of COVINS.\nThis is due to the fact that in our back-end we require only 2D keypoints and their descriptors for each KF, unlike COVINS, which also requires sending the map points as well as the data required to perform the IMU pre-integration.\n\n\n\\section{Conclusion}\n\\label{sec:conclusion}\nIn this work, we present a front-end-agnostic back-end for collaborative visual-inertial SLAM.\nBy making use of a multi-camera relative pose estimator for estimating loop-closure transformations, our system is able to work using only 2D keypoint information.\nThis allows our collaborative back-end to be compatible with virtually any VIO front-end with minimal to no modifications to it.\nIn our experimental evaluation, we achieve at least on-par accuracy with state-of-the-art multi-session and collaborative SLAM systems, which use back-ends specifically designed for the respective front-end.\nOwing to the decoupled nature of the data used in our back-end, we can make use of more powerful keypoint descriptors like SIFT, allowing us to close loops and merge trajectories with drastic viewpoint changes, which cannot be handled by state-of-the-art systems.\nOur open-source implementation of the system enables users to unlock the capabilities of a collaborative SLAM system without the need to change their state estimation pipeline of choice.\n\nIn follow-up work, we would like to make more explicit use of the information coming from our VIO, in particular the gravity alignment, by replacing the 17-point algorithm with a minimal 4-point solver which leverages the gravity information \cite{4PT_gravity}.\nThis allows the relative pose estimation step to be sped up significantly, as it requires drawing fewer samples within the RANSAC computation.\nIn the future, we would also like to deploy this system on a team of robots for multi-robot applications, such as exploration and 3D reconstruction of large areas.\n\n\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}