{"text":"\\section{Introduction}\n\n\\footnotetext[1]\npheligenius@yahoo.com}\\footnotetext[2]\nfarida\\_tahir@comsats.edu.pk}One of the most flummoxing problems in modern\nphysics that kept scientists at alert and has been hotly debated since $1929\n, is the realization of the expansion of the universe, established when\nEdwin Hubble published his revolutionary paper. Astronomical observations\nand study of universe, in the past few decades, strongly invalidated\nastronomers' view point that the universe was entirely composed of \\emph\n\\textquotedblleft baryonic matter\\textquotedblright }. The latest\nconformation of the accelerating universe \\cite{DN,AG,SP,SW,SE} endorsed the\nfact that the universe is infused with an unknown form of energy density $(\ndubbed as dark energy $\\left( \\rho _{\\Lambda }\\right) )$ which makes up for\nabout $75\\%$ of the total energy density of the universe. It is this $75\\%$\nmysterious $\\rho _{\\Lambda }$, which conditions our three-dimensional\nspatial curvature to be zero, that is responsible for the acceleration of\nthe universe. This discovery provided the first direct evidence that $\\rho\n_{\\Lambda }$ is non-zero, with $\\rho _{\\Lambda }\\approx \\left( 2.3\\times\n10^{-3}eV\\right) ^{4}$\\cite{FR,JZ}.\n\nHowever, the theoretical expectations for the $\\rho _{\\Lambda }$ exceed\nobservational limits by some $120$ orders of magnitude \\cite{JC}. This huge\ndiscrepancy between theory and observation, hitherto, constitutes a serious\nproblem for theoretical physics community. In fact, Steven Weinberg puts it\nmore succinctly by saying that the small non-zero value of $\\rho _{\\Lambda }$\nis \\emph{\\textquotedblleft a bone in the throat of theoretical\nphysics\\textquotedblright .} \\ Considering this huge discrepancy may\nun-shroud something fundamental, yet to be unveiled, about the hidden nature\nof the universe. This paper is one of such attempts.\n\nThe most elegant and comprehensible endeavour in order to solve this\nproblem, in our view, was put forward by F. R. Urban and A. R. Zhitnitsky \n\\cite{FR}. These authors approached the problem from the angle of the\neffective theory of gravity interacting with standard model fields by using\nthe solution of the $U(1)$ problem as put forward by G. Veneziano and E.\nWitten \\cite{GV,EW}.\\ In this framework, the basic problem of why the dark\nenergy is $120$ orders of magnitude smaller than its Planck scale \nM_{planck}^{4}$, is replaced by fundamentally different questions:\n\\textquotedblleft $(i)$ What is the relevant scale which enters the\neffective theory of gravitation? $(ii)$ How does this scale appear in the\neffective quantum field theory for gravity?\\textquotedblright\\ In their\nview, this effective scale has nothing to do with the cutoff ultraviolet \n(UV)$ scale $M_{planck}$: the appropriate effective scale must emerge as a\nresult of a subtraction at which some infrared (IR) scale enters the\nphysics. They completely turned the problem on its head!\n\nThough their attempt being cognizant, yet it fails to reproduce, exactly,\nthe measured value of $\\rho _{\\Lambda }$ \\cite{MT}. 
We observe here that\ntheir assumption $g\\equiv c=C_{QCD}\\times C_{grav}=1$ is debatable, since it\nis valid only for $C_{QCD}$ but not for $C_{grav}$, as proved in this paper.\nHere $g$ is the Minkowski metric in vacuum, $C_{QCD}$ is the Quantum\nChromodynamic $(QCD)$ coupling constant, and $C_{grav}$ is the gravitational\ncoupling constant.\n\nFrom Ref.[7], the value of $C_{grav}$ was wrongly computed to be \nC_{grav}=0.0588$\\ (which is approximately one-third of the value we proved\nin our calculation) but for obvious reason the authors neglected this value\nand used a position dependent Minkowski metric distance $g(x^{2})$\\ instead.\nThey computed $g(x^{2})$ to be $g(x^{2})=1\/6.25.$ For no clear reason, they\napproximated the value of $g(x^{2})$ to $g(x^{2})\\approx 1\/6$ by truncating \n0.25$ from their original \\ value of $g(x^{2}).$ This approach is totally\nunacceptable in the field of computational cosmology where every minuscule\nvalue counts.\n\nIn this paper, we have proved the value of $C_{grav}$ to an order of\nmagnitude less than one i.e. $1.797\\times 10^{-1}$; this leads towards the\nexact measured \\cite{MT} value of $\\rho _{\\Lambda }$. In order to get this\nvalue, we have used finite temperature and density $(FTD)$ correction\ntechnique. Here, the $FTD$ background acts as highly energetic medium \n\\left( M_{planck}^{4}\\right) $ controlling the particle propagation. Our\nbasic guiding idea is that the finite temperature field theory $(FTFT)$,\nsimilar to the physics of superconductivity (quantum field theory at $T=0$),\nis linked to the infrared sector of the effective theory of gravity\ninteracting with standard model fields, specifically with $QCD$ fields \\cit\n{FR}. In this case, the statistical background effects are incorporated in\npropagators through the Bose-Einstein distribution function \\cite{KR,FE}: it\nis worth noting that the Bose-Einstein distribution function is the\nmathematical tool for understanding the essential feature of the theory of\nsuperconductivity \\cite{FE}. The general attribute of a successful theory of\nsuperconductivity is the existence of degenerate vacuum\/broken symmetry\nmechanism. A characteristic feature of such a theory is the possible\nexistence of \\textquotedblleft unphysical\\textquotedblright\\ zero-mass\nbosons which tend to preserve the underlying symmetry of the theory.\\ The\nmasslessness of these singularities is protected in the limit \nq\\longrightarrow 0$. This means that it should cost no energy to create a\nYang-Mills quantum at $q=0$ and thus the mass is zero \\cite{GS}. In the\npreceding Ref. the Goldstone-Salam-Weinberg theorem is valid for a zero-mass\npole, which is protected. That pole is not physical and is purely gauge,\nhence unphysical. This is precisely the highly celebrated Veneziano ghost \n\\cite{FR}, which is analogous to the Kogut-Susskind \\emph{(KS)} ghost in the\nSchwinger model \\emph{(distinctive unphysical degree of freedom which is\nmassless and can propagate to arbitrary large distances)}.\n\n\\qquad It is imperative to note that this set of unphysical massless bosons\ntends to transform as a basis for a representation of a compact Lie group \n\\cite{FE} thereby, forming a compact manifold. 
We do not make any specific\nassumptions on the topological nature of the manifold; we only assume that\nthere is at least one Minkowski metric distance that defines a general\ncovariance of comoving coordinates \\cite{SW} with size $L_{M}=2\\times\nEuclidean$ metric distance.\n\nIn the next section, we derive the finite temperature and density relation\nfor the Veneziano ghosts by using Bose-Einstein distribution function. It\nshould be noted here that Veneziano ghosts are treated as unphysical\nmassless bosons due to the fact that they both have the same propagator \n(+ig_{\\mu \\nu }\/q^{2})$\\cite{FR}: the propagator for unphysical massless\nboson is obtained from $(+\\left( 2\\pi \\right) ^{4}ie^{2}g_{\\mu \\nu\n}\/q^{2})\\langle \\phi \\rangle ^{2}$ \\cite{FE}.\n\n\\section{ Veneziano-Ghost Density}\n\nFrom Fermi-Dirac and Bose-Einstein distribution functions, we hav\n\\begin{equation}\nn_{r}=\\frac{g_{r}}{e^{\\alpha +\\beta \\varepsilon _{r}}\\pm 1} \\tag{1}\n\\end{equation}\n\nThe positive sign applies to fermions and the negative to bosons. $g_{r}$ is\nthe degenerate parameter, $\\alpha $ is the coefficient of expansion of the\nboson gas inside the volume $(V)$, $\\beta $ is the Lagrange undetermined\nmultiplier, $n_{r}$ and $\\varepsilon _{r}$ are the numbers of particles and\nthe energy of the $r-th$ state respectively. The value of $\\alpha $ for\nboson gas at a given temperature is determined by the normalization\ncondition \\cite{PC\n\\begin{equation}\nN=\\underset{r}{\\sum }\\frac{g_{r}}{e^{\\alpha +\\beta \\varepsilon _{r}}-1} \n\\tag{2}\n\\end{equation}\n\nThis sum can be converted into an integral, because for a particle in a box,\nthe states of the system have been found to be very close together i.e. \n\\left( \\Delta \\varepsilon _{vac}\\equiv d\\varepsilon \\rightarrow 0\\right) $.\nUsing the density of single-particle states function, $Eq.(2)$ reduces to\n\n\\begin{equation}\nN=\\overset{\\infty }{\\underset{0}{\\int }}\\frac{D\\left( \\varepsilon \\right)\nd\\varepsilon }{e^{\\alpha +\\beta \\varepsilon }-1} \\tag{3}\n\\end{equation}\n\nWhere $D\\left( \\varepsilon \\right) d\\varepsilon $ is the number of allowed\nstates in the energy range $\\varepsilon $ to $\\varepsilon +d\\varepsilon $\nand $\\varepsilon $ is the energy of the single-particle states. Using the\ndensity of states as a function of energy, we have \\cite{PC}\n\n\\begin{equation*}\nD\\left( \\varepsilon \\right) d\\varepsilon =\\frac{4\\pi V}{h^{3}}2m\\varepsilon\n\\left( \\frac{m}{p}\\right) d\\varepsilon\n\\end{equation*}\n\nwith $p=\\sqrt{2m\\varepsilon }$\n\n\\begin{equation}\nD\\left( \\varepsilon \\right) d\\varepsilon =2\\pi V\\left( \\frac{2m}{h^{2}\n\\right) ^{3\/2}\\varepsilon ^{1\/2}d\\varepsilon \\tag{4}\n\\end{equation}\n\nPutting $Eq.(4)$ into $Eq.(3),$ we get\n\n\\begin{equation}\nN=2\\pi V\\left( \\frac{2m}{h^{2}}\\right) ^{3\/2}\\overset{\\infty }{\\underset{0}\n\\int }}\\frac{\\varepsilon ^{1\/2}d\\varepsilon }{e^{\\alpha +\\beta \\varepsilon\n}-1} \\tag{5}\n\\end{equation}\n\nWhere $m$ is the mass of boson and $h$ is the Planck constant. $\\alpha\n=\\beta \\mu $ and $\\beta =1\/kT$. $\\mu $ is the chemical potential, $k$ is the\nBoltzmann constant and $T$ is temperature. 
Since there is no restriction on\nthe total number of bosons, the chemical potential is always equal to zero.\nThus $Eq.(5)$ reads as:\n\n\\begin{equation}\nN=2\\pi V\\left( \\frac{2m}{h^{2}}\\right) ^{3\/2}\\overset{\\infty }{\\underset{0}{\\int }}\\frac{\\varepsilon ^{1\/2}d\\varepsilon }{e^{\\varepsilon \/kT}-1} \\tag{6}\n\\end{equation}\n\nBy using the standard integral\n\n\\begin{equation*}\n\\overset{\\infty }{\\underset{0}{\\int }}\\frac{x^{z-1}dx}{e^{x}-1}=\\varsigma\n\\left( z\\right) \\Gamma \\left( z\\right)\n\\end{equation*}\n\nwhere $\\varsigma \\left( z\\right) $ is the Riemann zeta function and $\\Gamma\n\\left( z\\right) $ is the gamma function, $Eq.(6)$ takes the form\n\n\\begin{equation*}\nN=2.61V\\left( \\frac{2\\pi mkT}{h^{2}}\\right) ^{3\/2}\n\\end{equation*}\n\nLet $n_{gv}=N\/V$, so that\n\n\\begin{equation}\nn_{gv}=2.61\\left( \\frac{2\\pi mkT}{h^{2}}\\right) ^{3\/2} \\tag{7}\n\\end{equation}\n\n\\bigskip Recall that $m=\\Delta \\varepsilon _{vac}\/c^{2}$ and the average\nkinetic energy of a gas in three-dimensional space is given by $\\Delta\n\\varepsilon _{vac}=\\frac{3kT}{2}$. Thus $Eq.(7)$ becomes\n\n\\begin{equation*}\nn_{gv}=\\left( \\frac{\\left( 2.61\\right) \\left( 3\\pi \\right) ^{3\/2}k^{3}}{\\left( hc\\right) ^{3}}\\right) T^{3}\n\\end{equation*}\n\nDefine\n\n\\begin{eqnarray*}\n\\xi &\\equiv &\\left( \\frac{\\left( 2.61\\right) \\left( 3\\pi \\right) ^{3\/2}k^{3}}{\\left( hc\\right) ^{3}}\\right) \\\\\n&=&2.522\\times 10^{7}\\left( m\\cdot K\\right) ^{-3}\n\\end{eqnarray*}\n\nHence, the Veneziano-ghost density $\\left( n_{gv}\\right) $ can be\nre-expressed in a more elegant form as:\n\n\\begin{equation}\nn_{gv}=\\xi T^{3} \\tag{8}\n\\end{equation}\n\n\\bigskip $Eq.(8)$ is the required finite temperature and\ndensity relation for the Veneziano ghost(s).\n\n\\section{Gravitational Coupling Constant From Veneziano-Ghost Density}\n\nThe principle of general covariance tells us that the energy-momentum tensor\nin the vacuum must take the form\n\n\\begin{equation}\n\\left\\langle 0\\left\\vert \\widehat{T}_{\\mu \\nu }\\right\\vert 0\\right\\rangle\n=T_{\\mu \\nu }^{vac}=g\\left\\langle \\rho \\right\\rangle \\tag{9}\n\\end{equation}\n\n\\bigskip Here $\\left\\langle \\rho \\right\\rangle $ has the dimension of energy\ndensity and $g$ describes a real gravitational field \\cite{SE}. 
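Before proceeding with $Eq.(9)$, the numerical value of $\\xi $ quoted after $Eq.(8)$ can be cross-checked directly. The short script below is only a sketch of that check: SI values of $k$, $h$ and $c$ are assumed, and \\texttt{scipy} is used solely to confirm that the coefficient $2.61$ in $Eq.(7)$ is $\\varsigma \\left( 3\/2\\right) \\approx 2.612$, coming from the standard integral $\\varsigma \\left( 3\/2\\right) \\Gamma \\left( 3\/2\\right) \\approx 2.315$.

\\begin{verbatim}
# Numerical cross-check of the constant xi defined after Eq.(7)-(8).
# SI values are assumed for k, h and c; scipy only confirms that
# int_0^inf sqrt(x)/(e^x - 1) dx = Gamma(3/2)*zeta(3/2) ~ 2.315,
# i.e. that the 2.61 coefficient in Eq.(7) is zeta(3/2) ~ 2.612.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, zeta

k = 1.380649e-23      # Boltzmann constant [J/K]
h = 6.62607015e-34    # Planck constant    [J s]
c = 2.99792458e8      # speed of light     [m/s]

integral, _ = quad(lambda x: np.sqrt(x) / np.expm1(x), 0, np.inf)
print(integral, gamma(1.5) * zeta(1.5))   # both ~ 2.315

xi = 2.61 * (3 * np.pi) ** 1.5 * k ** 3 / (h * c) ** 3
print(xi)                                 # ~ 2.5e7 per (m K)^3, cf. Eq.(8)
\\end{verbatim}
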
Thus $Eq.(9)$\ncan be written as\n\n\\begin{equation}\n\\left\\langle 0\\left\\vert \\widehat{T}_{\\mu \\nu }\\right\\vert 0\\right\\rangle\n=g\\left( \\Delta \\varepsilon _{vac}\\right) ^{4} \\tag{10}\n\\end{equation}\n\nwhere \\textquotedblleft $g$\\textquotedblright\\ in Refs.\\cite{FR,JZ} is\ndefined as $g\\equiv c=C_{QCD}\\times C_{grav}.$ Therefore, $Eq.(10)$ can be\nwritten as\n\n\\begin{equation*}\n\\left\\langle 0\\left\\vert \\widehat{T}_{\\mu \\nu }\\right\\vert 0\\right\\rangle\n=C_{QCD}\\times C_{grav}\\times \\left( \\Delta \\varepsilon _{vac}\\right) ^{4}\n\\end{equation*}\n\nwhere $C_{QCD}=1$, as quoted in \\cite{JZ} and references therein; thus\n\n\\begin{equation}\n\\left\\langle 0\\left\\vert \\widehat{T}_{\\mu \\nu }\\right\\vert 0\\right\\rangle\n=C_{grav}\\times \\left( \\Delta \\varepsilon _{vac}\\right) ^{4} \\tag{11}\n\\end{equation}\n\nNow, the energy density can be written as\n\n\\begin{equation}\n\\rho _{vac}=\\frac{\\Delta \\varepsilon _{vac}}{V}=V^{-1}\\times \\Delta\n\\varepsilon _{vac} \\tag{12}\n\\end{equation}\n\n$Eq.(12)$ is justified by the standard box-quantization procedure \\cite{SE}.\nBy comparing $Eq.(12)$ with $Eq.(8)$, we get\n\n\\begin{equation}\n\\rho _{vac}=n_{gv}\\times \\Delta \\varepsilon _{vac} \\tag{13}\n\\end{equation}\n\nwith $n_{gv}\\equiv V^{-1}.$ From the average kinetic energy of a gas in\nthree-dimensional space, we have $T=2\\Delta \\varepsilon _{vac}\/3k.$ Hence $Eq.(8)$ becomes\n\n\\begin{equation}\nn_{gv}=\\frac{8\\xi \\left( \\Delta \\varepsilon _{vac}\\right) ^{3}}{27k^{3}} \n\\tag{14}\n\\end{equation}\n\nPutting the value of $n_{gv}$ into $Eq.(13)$, we get\n\n\\begin{equation}\n\\rho _{vac}=\\frac{8\\xi \\left( \\Delta \\varepsilon _{vac}\\right) ^{4}}{27k^{3}}\n\\tag{15}\n\\end{equation}\n\n$Eq.(15)$ represents the energy density of a vacuum state.\n\nThe natural demand of Lorentz invariance of the vacuum state is embedded\nin the structure of (effective) quantum field theory in Minkowski space-time\ngeometry \\cite{SE,RM}. Hence, if $\\left\\vert 0\\right\\rangle $\\ is a vacuum\nstate in a reference frame $S$ and $\\left\\vert 0^{\\prime }\\right\\rangle $ refers to the same vacuum state observed from a reference\nframe $S^{\\prime },$\\ which moves with uniform velocity relative to $S$, then the quantum\nexpression for Lorentz invariance of the vacuum state reads\n\\begin{equation}\n\\left\\vert 0^{\\prime }\\right\\rangle =u\\left( L\\right) \\left\\vert 0\\right\\rangle =\\left\\vert\n0\\right\\rangle \\tag{16}\n\\end{equation}\n\nwhere $u\\left( L\\right) $ is the unitary transformation (acting on the\nquantum state $\\left\\vert 0\\right\\rangle $) corresponding to a Lorentz\ntransformation $L$. All the physical properties that can be extracted from\nthis vacuum state, such as the value of the energy density, should also remain\ninvariant under Lorentz transformations \\cite{SE}. If the Lorentz\ntransformation is initiated by $\\rho _{vac}$, then $2\\times \\rho _{vac}$ is\nneeded for a unitary transformation to take place. The logic behind this\nassumption is simple: if $\\rho _{vac}$ defines the Lorentz invariant length $(L)$ (Euclidean metric distance) of $\\left\\vert 0\\right\\rangle $, then the\nLorentz transformation from $\\left\\vert 0\\right\\rangle $ to $\\left\\vert 0^{\\prime }\\right\\rangle $ (with continuous excitation) requires $2\\times \\rho _{vac}$: $\\left\\vert 0\\right\\rangle \\overset{2\\times \\rho _{vac}}{\\longrightarrow }\\left\\vert 0^{\\prime }\\right\\rangle $. 
This leads to the principle of general covariance as\nstated a priori in the introduction \\cite{SE}. Thus,\n\n\\begin{equation}\n\\left\\langle 0\\left\\vert \\widehat{T}_{\\mu \\nu }\\right\\vert 0\\right\\rangle\n=2\\times \\rho _{vac}=\\frac{16\\xi \\left( \\Delta \\varepsilon _{vac}\\right) ^{4}}{27k^{3}} \\tag{17}\n\\end{equation}\n\n\\bigskip $Eq.(17)$ is also justified by the standard box-quantization\nprocedure \\cite{SE}. Now, by combining $Eq.(11)$ and $Eq.(17)$, we have\n\n\\begin{equation*}\nC_{grav}=\\frac{16\\xi }{27k^{3}}=2.336\\times 10^{19}\\left( m\\cdot eV\\right) ^{-3}\n\\end{equation*}\n\n\\bigskip As $1\\,m=5.07\\times 10^{15}\\,GeV^{-1}$, this leads to\n\n\\begin{equation}\nC_{grav}=1.797\\times 10^{-1} \\tag{18}\n\\end{equation}\n\nwhich is the required gravitational coupling constant.\n\n\\section{Dark Energy From The Veneziano-Ghost: A Review}\n\nThe major ingredient of the standard Witten-Veneziano resolution of the $U(1)$\nproblem is the existence of the topological susceptibility $\\chi $. In Ref.\\cite{FR}, it has been proved that the deviation in $\\chi $, i.e. $\\Delta \\chi $,\nrepresents the vacuum energy density \\emph{(dark energy)}. We review this\nresult by making use of $Eq.(9)$ and resolve the inherent hitch in this\napproach with the help of $Eq.(18)$. Thus, from $Eq.(9)$ we have\n\n\\begin{equation}\ni\\int dx\\left\\langle 0\\left\\vert \\widehat{T}_{\\mu \\nu }\\right\\vert\n0\\right\\rangle =i\\int dxT_{\\mu \\nu }^{vac} \\tag{19}\n\\end{equation}\n\nBy using the standard Witten-Veneziano relations\n\n\\begin{equation*}\n\\widehat{T}_{\\mu \\nu }\\equiv T\\left\\{ Q\\left( x\\right) ,Q\\left( 0\\right)\n\\right\\}\n\\end{equation*}\n\n\\bigskip where\n\\begin{equation*}\nQ\\equiv \\frac{\\alpha _{s}}{16\\pi }\\epsilon ^{\\mu \\nu \\rho \\sigma }G_{\\mu \\nu\n}^{a}G_{\\rho \\sigma }^{a}\\equiv \\frac{\\alpha _{s}}{8\\pi }G_{\\mu \\nu }^{a}\\widetilde{G}^{\\mu \\nu a}\\equiv \\partial _{\\mu }K^{\\mu }\n\\end{equation*}\n\nand\n\n\\begin{equation}\nK^{\\mu }\\equiv \\frac{\\Gamma ^{2}}{16\\pi ^{2}}\\epsilon ^{\\mu \\nu \\lambda\n\\sigma }A_{\\nu }^{a}\\left( \\partial _{\\lambda }A_{\\sigma }^{a}+\\frac{\\Gamma \n}{3}f^{abc}A_{\\lambda }^{b}A_{\\sigma }^{c}\\right) \\tag{20}\n\\end{equation}\n\n\\bigskip where $A_{\\mu }^{a}$\\ are the conventional $QCD$ color gluon fields, $Q$ is the topological charge density, and $\\alpha _{s}=\\frac{\\Gamma ^{2}}{4\\pi }$. 
Thus we have\n\n\\begin{equation*}\ni\\int dx\\left\\langle 0\\left\\vert T\\left\\{ Q\\left( x\\right) ,Q\\left( 0\\right)\n\\right\\} \\right\\vert 0\\right\\rangle =i\\int dxT_{\\mu \\nu }^{vac}\n\\end{equation*}\n\n\\begin{equation}\n\\underset{q\\longrightarrow 0}{\\lim }i\\int dxe^{iqx}\\left\\langle 0\\left\\vert\nT\\left\\{ Q\\left( x\\right) ,Q\\left( 0\\right) \\right\\} \\right\\vert\n0\\right\\rangle =\\underset{q\\longrightarrow 0}{\\lim }i\\int dxe^{iqx}T_{\\mu\n\\nu }^{vac} \\tag{21}\n\\end{equation}\n\n\\bigskip Let\n\\begin{equation*}\n\\underset{q\\longrightarrow 0}{\\lim }i\\int dxe^{iqx}T_{\\mu \\nu }^{vac}=\\chi\n\\end{equation*}\n\nHence $Eq.(21)$ becomes\n\n\\begin{equation*}\n\\chi =\\underset{q\\longrightarrow 0}{\\lim }i\\int dxe^{iqx}\\left\\langle\n0\\left\\vert T\\left\\{ Q\\left( x\\right) ,Q\\left( 0\\right) \\right\\} \\right\\vert\n0\\right\\rangle\n\\end{equation*}\n\nand\n\n\\begin{equation}\n\\Delta \\chi =\\Delta \\left[ \\underset{q\\longrightarrow 0}{\\lim }i\\int\ndxe^{iqx}\\left\\langle 0\\left\\vert T\\left\\{ Q\\left( x\\right) ,Q\\left(\n0\\right) \\right\\} \\right\\vert 0\\right\\rangle \\right] \\tag{22}\n\\end{equation}\n\nUsing $\\Delta =c\\left( H\/m_{\\eta }\\right) $ and $\\left[ \\underset{q\\longrightarrow 0}{\\lim }i\\int dxe^{iqx}\\left\\langle 0\\left\\vert T\\left\\{\nQ\\left( x\\right) ,Q\\left( 0\\right) \\right\\} \\right\\vert 0\\right\\rangle\n\\right] =-\\left[ \\lambda _{YM}^{2}\\left( q^{2}-m_{0}^{2}\\right) \/\\left(\nq^{2}-m_{0}^{2}-\\frac{\\lambda _{\\eta }^{2}}{N_{c}}\\right) \\right] $ from\nRef.\\cite{FR}, $Eq.(22)$ can be written as\n\n\\begin{equation}\n\\Delta \\chi =-c\\left( \\frac{2H}{m_{\\eta }}\\right) \\cdot \\frac{\\lambda\n_{YM}^{2}\\left( q^{2}-m_{0}^{2}\\right) }{\\left( q^{2}-m_{0}^{2}-\\frac{\\lambda _{\\eta }^{2}}{N_{c}}\\right) } \\tag{23}\n\\end{equation}\n\nThe standard Witten-Veneziano solution of the $U(1)$ problem is based on the\nwell-established assumption (confirmed by various lattice computations) that\n$\\chi $ does not vanish, despite the fact that $Q$ is a total derivative, $Q\\equiv \\partial _{\\mu }K^{\\mu }$. This suggests that there is an unphysical\npole at $q=0$ in the correlation function of $K^{\\mu }$, similar to the \\emph{KS}\nghost in the Schwinger model \\cite{FR}. Thus $Eq.(23)$ becomes\n\n\\begin{equation}\n\\Delta \\chi =-c\\left( \\frac{2H}{m_{\\eta }}\\right) \\cdot \\frac{\\lambda\n_{YM}^{2}m_{0}^{2}}{m_{\\eta }^{2}} \\tag{24}\n\\end{equation}\n\nwhere $m_{\\eta }^{2}=m_{0}^{2}+\\frac{\\lambda _{\\eta }^{2}}{N_{c}}$ defines the\nmass of the physical $\\eta $ field; the factor of $2$ in $Eq.(24)$ follows from the principle of general covariance as we have\nalready established. Using the Witten-Veneziano relation $4\\lambda _{YM}^{2}=f_{\\pi }^{2}m_{\\eta }^{2}$ and the chiral condensate $m_{0}^{2}f_{\\pi\n}^{2}=-4m_{q}\\left\\langle \\overline{q}q\\right\\rangle ,$ $Eq.(24)$ can be\nwritten as\n\n\\begin{equation}\n\\Delta \\chi =c\\left( \\frac{2H}{m_{\\eta }}\\right) \\left\\vert\nm_{q}\\left\\langle \\overline{q}q\\right\\rangle \\right\\vert \\tag{25}\n\\end{equation}\n\nwhere $H$ is the Hubble constant and $m_{q}$\\ is the mass of a single light\nquark. 
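As a rough order-of-magnitude check of $Eq.(25)$ (with the overall factor $c$ set aside, exactly as in $Eq.(26)$ below), the sketch underneath uses typical literature values for $H$, $m_{\\eta }$, $m_{q}$ and $\\left\\langle \\overline{q}q\\right\\rangle $. These inputs are assumptions made only for illustration; they are not the values used in Ref.\\cite{FR}.

\\begin{verbatim}
# Rough order-of-magnitude check of Eq.(25), natural units (eV).
# All inputs below are assumed typical values, not those of Ref.[FR]:
H     = 1.5e-33          # Hubble constant for H0 ~ 70 km/s/Mpc  [eV]
m_eta = 0.958e9          # eta' mass                             [eV]
m_q   = 4.0e6            # light-quark mass                      [eV]
qq    = (0.24e9) ** 3    # |<qbar q>| ~ (240 MeV)^3              [eV^3]

delta_chi_over_c = (2 * H / m_eta) * m_q * qq   # in eV^4
print(delta_chi_over_c ** 0.25)   # ~ 3.6e-3 eV, cf. (3.6e-3 eV)^4 below
\\end{verbatim}
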
From Ref.\\cite{FR} $c\\left( \\frac{2H}{m_{\\eta }}\\right) \\left\\vert\nm_{q}\\left\\langle \\overline{q}q\\right\\rangle \\right\\vert \\approx c\\left(\n3.6\\times 10^{-3}eV\\right) ^{4}$ leads to\n\n\\begin{equation}\n\\Delta \\chi \\approx c\\left( 3.6\\times 10^{-3}eV\\right) ^{4} \\tag{26}\n\\end{equation}\n\nBy using $c=C_{QCD}\\times C_{grav}\\approx C_{grav}$ from \\cite{JZ} and\nreference within, $Eq.(26)$ can be written as\n\n\\begin{equation}\n\\Delta \\chi \\approx C_{grav}\\left( 3.6\\times 10^{-3}eV\\right) ^{4} \\tag{27}\n\\end{equation}\n\n\\bigskip Comparision of $Eq.(18)$ with $Eq.(27)$ gives \n\\begin{equation}\n\\rho _{\\Lambda }\\equiv \\Delta \\chi \\approx \\left( 2.3\\times 10^{-3}eV\\right)\n^{4} \\tag{28}\n\\end{equation}\n\n$Eq.(28)$ is the measured value of $\\rho _{\\Lambda }$ that is responsible\nfor the acceleration of the universe.\n\n\\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ Using Planck scale $M_{Pl}^{4}$ as the cutoff\ncorrection, $Eq.(8)$ becomes\n\n\\begin{equation}\nn_{gv}^{planck}=7\\times 10^{103}m^{-3} \\tag{29}\n\\end{equation}\n\nFrom the standard box-quantization procedure \\cite{SE}, we have\n\n\\begin{equation}\n2\\times \\rho _{\\Lambda }^{total}=\\frac{1}{V}\\underset{k}{\\sum \\text\nh{\\hskip-.2em}\\llap{\\protect\\rule[1.1ex]{.325em}{.1ex}}{\\hskip.2em\n}\\omega _{k}} \\tag{30}\n\\end{equation}\n\nBy imposing Lorentz invariance of vacuum state formalism on $Eq.(30)$, we\nhave\n\n\\begin{equation}\n2\\times \\rho _{\\Lambda }^{total}=\\frac{1}{V}n\\text\nh{\\hskip-.2em}\\llap{\\protect\\rule[1.1ex]{.325em}{.1ex}}{\\hskip.2em\n}\\omega =n\\left[ n_{gv}\\times \\Delta \\varepsilon _{vac}\\right] \\tag{31}\n\\end{equation}\n\nWhere $n_{gv}\\equiv \\frac{1}{V}$ and $\\Delta \\varepsilon _{vac}\\equiv \nh{\\hskip-.2em}\\llap{\\protect\\rule[1.1ex]{.325em}{.1ex}}{\\hskip.2em\n$\\omega .$ Note that $Eq.(31)$ reduces to $Eq.(17)$ for $n=1,$ therefore \nEq.(31)$ can be rewritten for Planck scale cutoff correction (where Planck\nseries of energy $(n\nh{\\hskip-.2em}\\llap{\\protect\\rule[1.1ex]{.325em}{.1ex}}{\\hskip.2em\n$\\omega )$ is taken to be the Planck energy $(E_{Pl})$):\n\n\\begin{equation}\nn\\left[ n_{gv}\\times \\Delta \\varepsilon _{vac}\\right] =n_{gv}^{Planck}\\times\nE_{Pl} \\tag{32}\n\\end{equation}\n\nFrom $Eqs.(13),(14),(15),(17)$ and $(28),$ we have\n\n\\begin{equation*}\nn\\left[ \\frac{8\\xi \\left( \\Delta \\varepsilon _{vac}\\right) ^{4}}{27k^{3}\n\\right] =n_{gv}^{Planck}\\times E_{Pl}\n\\end{equation*}\n\n\\begin{equation}\nn\\left[ \\frac{\\rho _{\\Lambda }}{2}\\right] =n_{gv}^{Planck}\\times\nE_{Pl}=M_{Pl}^{4} \\tag{33}\n\\end{equation}\n\nWhere $\\rho _{vac}=\\rho _{\\Lambda }\/2$ is the energy density of each\ninfrared sector. $Eq.(33)$ shows how cutoff $UV$ scale $M_{Pl}^{4}$\\\nmanifests itself as linearly independent infrared sectors of the effective\ntheory of gravity interacting with $QCD$ fields.\n\nBy combining $Eqs.(28),(29)$ and $(33)$ we have\n\n\\begin{equation}\nn=4\\times 10^{122}\\approx 10^{122} \\tag{34}\n\\end{equation}\n\nWhere $n_{gv}^{Planck}\\times E_{Pl}=M_{Pl}^{4}=1.4\\times 10^{113}J\/m^{3}.$\nThus $Eq.(34)$ suggests that there are $\\approx 10^{122}$(degenerate) vacuum\nstates. These vacuum states ($n-$torus) are called \\emph{\\textquotedblleft\nsubuniverses or multiverse\"\\ }\\cite{EB,SWH,SC,AV,SW97}. 
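The numerical chain from $Eq.(18)$ to $Eq.(28)$ and $Eq.(34)$ can be retraced in a few lines. The sketch below reuses only numbers already quoted in the text: the intermediate value $2.336\\times 10^{19}\\left( m\\cdot eV\\right) ^{-3}$, the conversion $1\\,m=5.07\\times 10^{15}\\,GeV^{-1}$, the scale $\\left( 3.6\\times 10^{-3}eV\\right) ^{4}$ and $M_{Pl}^{4}=1.4\\times 10^{113}J\/m^{3}$.

\\begin{verbatim}
# Retrace the chain Eq.(18) -> Eq.(28) -> Eq.(34) using only numbers
# already quoted in the text.
m_in_inv_eV = 5.07e15 * 1e-9        # 1 m = 5.07e15 GeV^-1 = 5.07e6 eV^-1

C_grav = 2.336e19 / m_in_inv_eV ** 3   # (m.eV)^-3 value made dimensionless
print(C_grav)                          # ~ 1.79e-1, i.e. Eq.(18)

rho_lambda = C_grav * (3.6e-3) ** 4    # Eq.(27), in eV^4
print(rho_lambda ** 0.25)              # ~ 2.3e-3 eV, i.e. Eq.(28)

eV_to_J = 1.602176634e-19
rho_SI = rho_lambda * eV_to_J * m_in_inv_eV ** 3   # ~ 6e-10 J/m^3
n = 1.4e113 / (rho_SI / 2.0)           # M_Pl^4 divided by rho_vac
print(n)                               # ~ 4e122, cf. Eq.(34)
\\end{verbatim}
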
An $n-$torus is an\nexample of $n-$dimensional compact manifold or a compact Abelian Lie group \nU(1).$ In this sense, it is a product of $n$ circles i.e $T^{n}=S^{1}\\times\nS^{1}\\times ..........\\times S^{1}=T^{1}\\times T^{1}\\times ........\\times\nT^{1}$ \\cite{BR,DL,RP}. In this paper, $n$ circles, which are the elements\nof $U(1)$ group, represent $n$ linearly independent infrared sectors or the\nunphysical massless gauge bosons dubbed as Veneziano ghosts.\n\nIt is important to notice that the existence of non-vanishing \\ and linearly\nindependent infrared sectors of the effective theory of gravity interacting\nwith $QCD$ fields is parametrically proportional to the Planck cutoff\nenergy. Therefore, our simple extension of Veneziano ghost theory of $QCD$\nto accommodate FTFT has striking consequences: it predicts, accurately, the\nvalue of \\ $C_{grav},$ which leads towards the $100\\%$ consistency between\ntheory and experimental value of $\\rho _{\\Lambda }.$ As an offshoot, it\nfotifies the idea of multiverse and paints a new picture of quantum\ncosmological paradigm.\n\n\\section{Summary and Conclusion}\n\nThe computational analysis of the dark energy problem from the combined\nframeworks of finite temperature-density correction technique and the\nVeneziano ghost theory of $QCD$ conditions $FTD$ background to behave like a\nreservoir for the infrared sectors of the effective theory of gravity\ninteracting with $QCD$ fields. These infrared sectors (unphysical massless\nbosons) transform as a basis for a representation of a compact manifold.\nThis is analogous to the process of quantizing on manifold $M$ (such as a\ntorus group $T^{n}=T^{1}\\times ..........\\times T^{1}$ $=T^{10^{122}}$), in\nwhich all the submanifolds (tori) are linearly independent of each other.\nThis means that an \\emph{\\textquotedblleft observer\\textquotedblright }\\\ntrapped in one of such tori would think his torus is the whole Universe. An\nimportant prediction of this is that the vacuum energy $\\Delta \\varepsilon\n_{vac}$ owes its existence to the degenerate nature of vacuum (or to the\nasymmetric nature of the universe). The effect of this is a direct\nconsequence of the embedding of our subuniverse on a non-trivial manifold $M$\nwith (minuscule) different linear sizes.\n\nThe main result of the present study is that the effective scales obviously\nhave something to do with the cutoff Ultraviolet $(UV)$ scale $M_{Pl}$.\nBased on the standard box-quantization procedure, the $UV$ scale $M_{Pl}$ is\na collection of infrared $(IR)$ scales. Undoubtedly, the relevant effective\nscales appear as a result of energy differences (subtractions) at which the \nIR$ scales enter the physics of $UV$ scale $M_{Pl}$. It is therefore\nimpossible to compute the value of $\\rho _{\\Lambda }$ without putting into\nconsideration the statistical effect of the $UV$ scale $M_{Pl}$ which\nmanifests itself through the existence of the linearly independent $IR$\nsectors of the effective theory of quantum field theory $(QFT)$: this is the \n\\emph{\\textquotedblleft stone\\textquotedblright } that confirms the\ninterrelationship between $FTFT$ and the theory of superconductivity $(QFT$\nat $T=0)$.\n\nThus, if you buy the idea of Lorentz invariance of vacuum state formalism or\nthe degenerate vacuum mechanism, then $\\sim 10^{122}$ subuniverses come as\nfree gifts!\n\n\\subsubsection{\\textbf{Acknowledgement}}\n\nMr. O. F. 
Akinto is indebted to the Department of Physics, CIIT, Islamabad \\\nand the National Mathematical Center Abuja, Nigeria \\ for their finacial\nsupport.\n\n\\bigskip\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} {"text":"\\section{Introduction}\nStereo matching\\cite{DBLP:journals\/pami\/SunZS03,Ke2009Cross} is a core technique in many 3D computer vision applications, such as autonomous driving, robot navigation, object detection and recognition\\cite{DBLP:conf\/nips\/ChenKZBMFU15,DBLP:conf\/iccv\/ZhangLCCCR15}. It aims to get the depth information for the reference image by calculating the disparity map of the two images (\\emph{i.e.} left image and right image, sized $H\\times W$, respectively) captured by the stereo camera. The reference image can be either left image or right image. In the rest of this manuscript, we assume the left image to be the reference image, and the right image is regarded as the source image accordingly. \n\nThe disparity of a target point in the reference image is the horizontal displacement between the target point and its most similar point in the source image\\cite{DBLP:journals\/ijcv\/ScharsteinS02,DBLP:conf\/cvpr\/LuoSU16,DBLP:conf\/cvpr\/SekiP17} . In order to find the most similar point for a target point, the similarity scores between this point and the candidate points in the source image are calculated(\\emph{i.e.} similarity distribution)\\cite{DBLP:conf\/cvpr\/Hirschmuller05,DBLP:journals\/pami\/BrownBH03}. When there are $D_{max}$ candidate points(\\emph{i.e.} matching range), a 3D cost volume of size $D_{max}\\times H \\times W$ containing all similarity scores for $H \\times W$ target points is calculated. \n\nTo obtain such 3D cost volume, recent cost aggregation network based learning methods\\cite{DBLP:conf\/iccv\/KendallMDH17,DBLP:conf\/cvpr\/ChangC18,DBLP:conf\/cvpr\/GuoYYWL19,DBLP:conf\/cvpr\/ZhangPYT19} first form a 4D volume of size $F\\times \\frac{1}{n}D_{max}\\times \\frac{1}{n}H \\times \\frac{1}{n}W$(where $F$ is the dimension of the correlation feature and $n$ is the ratio of downsampling) by associating each unary feature with their corresponding unary from the opposite source image across $ \\frac{1}{n}D_{max}$ disparity levels. They obtain a high quality low-resolution 3D cost volume through focusing on optimizing the low-resolution 4D volume by cost aggregation network, and then get the high precision performance on final high-resolution disparity map ($H\\times W$). The process of most cost aggregation networks contains multiple identical 4D volume aggregation stages to refine the correlation features multiple times. These methods only output low-resolution 3D cost volume containing similarity scores of partial candidates. To obtain high-resolution disparity, the widely accepted method is to use the linear interpolation to get the complete 3D cost volume firstly. However, the similarity score is not a linear function of the disparity, which causes inaccurate estimation of the final disparity. Although some methods add additional refinement modules to refine the disparity, they still cannot get satisfactory results due to the lack of correlation features between matching pairs.\n \nActually, due to the nature of CNN, each point in the low resolution features contains the information of all pixels in the patch of the original resolution images where its located. Therefore, all $D_{max}$ correlation features between target points and candidates are included in the low-resolution 4D volume. 
Leveraging convolutional layers for decoupling all $D_{max}$ similarity scores from the 4D volume is naturally a better solution for complete 3D cost volume. Early methods, such as GC-Net\\cite{DBLP:conf\/iccv\/KendallMDH17}, decouple all $D_{max}$ similarity scores by \\emph{Transposed convolution}. However, the implementation of \\emph{Transposed convolution} introduces additional computation. More notably, as the network deepens, details in 4D Volume will be lost. In addition, learning $D_{max}$ similarity scores from the optimized low scale 4D volume, containing $\\frac{1}{n}D_{max}$ correlation features for each target point, means that one correlation feature outputs $n$ similarity scores. This is an internal competitive task, because each feature essentially represents the degree of the correlation between the target point and $n$ different candidates. It is too difficult for the network to compute a universal correlation features to predict $n$ optimal similarity scores of $n$ different candidates simultaneously. \n\nBased on the above analysis, we design a new Multistage Full Matching scheme (MFM) in this work through simply decomposing the full matching task into multiple stages. Each stage estimate a different $\\frac{1}{n}D_{max}$ similarity scores. In this decomposing way, we can not only learning all similarity scores directly from the low-resolution 4D volume, but also keep one similarity score learning from one correlation feature. \n\nWhile it is noteworthy that we share the similar insight with the existing multistage matching methods\\cite{DBLP:conf\/cvpr\/TonioniTPMS19,DBLP:conf\/iccv\/DuggalWMHU19,DBLP:conf\/cvpr\/YangMAL20,DBLP:conf\/cvpr\/YinDY19}, as decomposing the matching task into multiple stages. Such methods first obtain a coarse disparity and then perform residual disparity search from the neighbor of the current disparity by constructing a partial cost volume. The later stage strongly depend on the previous stage. In contrast, the previous stage only provide a reference for the later stage in MFM. \n\nMultiple tasks in the proposed MFM are equally important. However, serial multistage framework, which is designed for more sufficient cost aggregation, results in unbalanced prediction of multiple stages. Aiming at this problem, we propose the strategy of \\emph{Stages Mutual Aid}. Specifically, we take advantage of the close distribution of the similarities predicted at each stage, and merge the output of other $(n-1)$ stages to obtain a voting result of the current similarity distribution for reference in the current stage. In this way, not only can the shallower network exploit the output of the deeper network, but also the voting result provides a correction message for the current prediction. \n\nThe contributions of our work are summarized as follows:\n\\begin{itemize} \n\\item A Multistage Full Matching disparity estimation scheme (MFM) is proposed to decompose the full matching learning task into multiple stages and decouple all $D_{max}$ similarity scores from the low-resolution 4D volume step by step, which improves the stereo matching precision accordingly. \n\\item A \\emph{Stages Mutual Aid} strategy is proposed to solve the unbalance between the predictions of each stage in the serial multistage framework.\n\\end{itemize}\nWe evaluate the proposed method on three challenging datasets, \\emph{i.e.} SceneFlow\\cite{DBLP:conf\/cvpr\/MayerIHFCDB16}, KITTI 2012\\cite{DBLP:conf\/cvpr\/GeigerLU12} and KITTI 2015 datasets\\cite{DBLP:conf\/cvpr\/MenzeG15}. 
The results demonstrate that our MFM scheme achieves state-of-the-art.\n\n\\section{Related Work}\nThis section reviews recent end-to-end supervised deep learning stereo matching methods.\n\\subsection{2D CNN Regression Module Based Methods}\nDispNetC\\cite{DBLP:conf\/cvpr\/MayerIHFCDB16} is the first end-to-end trainable disparity estimation network. It forms a low-resolution 3D cost volume by calculating cosine distance of each unary feature with their corresponding unary from the opposite stereo image across each disparity level. Then the 3D cost volume is input to 2D CNN with left features for disparity regeression. Following DispNetC, CRL\\cite{DBLP:conf\/iccvw\/PangSRYY17} and iRes-Net\\cite{DBLP:conf\/cvpr\/LiangFGLCQZZ18} introduce stack refinement sub-networks to further improve the performance. SegStereo\\cite{DBLP:conf\/eccv\/YangZSDJ18} and EdgeStereo\\cite{DBLP:journals\/corr\/abs-1903-01700} both design multiple tasks frameworks for the disparity regression task. The former introduces semantic information in the refinement stage and the latter applies edge information in guiding disparity optimization. \n\nThe 2D CNN regression network fails to make good use of geometric principles in stereo matching to regress accurate disparity. More recent works focus on directly optimize and compute 3D cost volume by cost aggregation networks.\n\n\\subsection{Cost Aggregation Network Based Methods}\nCost aggregation network based methods study how to optimize the low-resolution 4D volume to obtain more accurate similarity scores in the low-resolution 3D cost volume and output a better disparity map accordingly. Yu \\emph{et al.}\\cite{DBLP:conf\/aaai\/YuWWJ18} propose an explicit cost aggregation sub-network to provide better contextual information. PSMNet\\cite{DBLP:conf\/cvpr\/ChangC18} introduces a pyramid pooling module for incorporating global context information into image features, and stacked 3D CNN hourglasses to extend the regional support of context information in cost volume. In order to make full use of the features, GwcNet\\cite{DBLP:conf\/cvpr\/GuoYYWL19} builds the cost volume by concatenating the cost volume constructed in different ways. GANet\\cite{DBLP:conf\/cvpr\/ZhangPYT19} proposes two new neural net layers to capture the local and the whole-image cost dependencies and to replace the 3D convolutional layer. AANet\\cite{DBLP:journals\/corr\/abs-2004-09548} proposes a sparse points based intra-scale cost aggregation method to achieve fast inference speed while maintaining comparable accuracy. \n\nThese methods only output low-resolution 3D cost volume containing similarity scores of partial candidates from the iterative cost aggregation network. However, the low-resolution 3D cost volume is inadequate for calculating high-resolution disparity without correlation features. Although the high-resolution 3D cost volume in GC-Net\\cite{DBLP:conf\/iccv\/KendallMDH17} is obtained by \\emph{Transposed convolution}, additional calculations are introduced because of the way the \\emph{Transposed convolution} is implemented. In addition, it is an internal competitive task to calculate $D_{max}$ similarity features for the high-resolution 3D cost volume from only one 4D volume with $\\frac{1}{n}D_{max}$ correlation features. 
The proposed MFM in this work decomposes the full matching task into these multiple cost aggregation stages and decouple the high-resolution 3D cost volume directly from the correlation features, which solved the aforementioned problem.\n\n\\subsection{Multistage Matching Methods}\nMultistage matching methods\\cite{DBLP:conf\/cvpr\/TonioniTPMS19,DBLP:conf\/cvpr\/YinDY19} first obtain a coarse disparity and then perform residual disparity search from the neighbor of the current disparity by constructing a partial cost volume. DeepPruner\\cite{DBLP:conf\/iccv\/DuggalWMHU19} develops a differentiable PatchMatch module that allows to discard most disparities without requiring full cost volume evaluation in the second stage. CVP-MVSNet\\cite{DBLP:conf\/cvpr\/YangMAL20} proposes a cost volume pyramid based Multi-View Stereo Network for depth inference. \n\nThese methods improve the stereo matching speed and avoid the process of obtaining high-resolution disparity from low-resolution 3D cost volume. However, if the cues obtained in the first stage is wrong, the subsequent fine matching will also be wrong. In contrast, the previous stage in our multistage methods has only guidance for the latter stage and does not limit the estimation range of the latter stage, which guarantees the freedom of estimation in the latter stage.\n\n\\section{Multistage Full Matching Network}\n\n\\begin{figure*}[t]\n\t\\centering\n\t\\includegraphics[width=0.8\\textwidth]{architecture}\n\t\\caption{The architecture of the Multistage Full Matching network. LR denotes low-resolution and HR represents high-resolution. The Multistage estimation module is composed of $n$ stages. Here takes $n=3$ as an example. Each stage estimates the similarity scores of different candidate points($\\frac{1}{n}D_{max}$ points) in the matching range($D_{max}$ points). The high-resolution similarity distribution($D_{max}\\times H\\times W$) is obtained by combining the low-resolution similarity distribution output from each stage.}\n\t\\label{fig1}\n\\end{figure*}\n\nAs shown in Figure 1, the proposed framework has the following main structure: \n\nFirst, the MFM network extracts low-resolution feature maps of left and right images with shared 2D convolution. For feature extraction network, we adopt the ResNet-like network. We control the scale of the output features by controlling the stride of each convolution layer. The scale of the features is $\\frac{1}{n}$ the size of the input image. \n\nSecond, low-resolution 4D volume is calculated by \\emph{Group-wise correlation}\\cite{DBLP:conf\/cvpr\/GuoYYWL19}. The left features and the right features are divided into groups along the channel dimension, and correlation maps are computed among each group to obtain multiple cost volumes, which are then packed into one cost volume called \\emph{g-cost}. Another cost volume called \\emph{cat-cost} is obtained by concatenating the features of each target point and candidate points in the matching range. The final cost volume is obtained by concatenating the corresponding correlation features of \\emph{g-cost} and \\emph{cat-cost}.\n\nThen, the low-resolution 4D volume from the second step is fed to the cost aggregation process for the high-resolution 3D cost volume by multistage 3D cost volume estimation module. 
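As a shape-level illustration of the volume construction described above, the following PyTorch-style sketch builds the final cost volume (the concatenation of \\emph{g-cost} and \\emph{cat-cost}) at $\\frac{1}{n}$ resolution. The concrete numbers in it ($D_{max}=192$, $n=4$, $64$ unary channels, $8$ correlation groups and a $128\\times 256$ input) are assumed purely for illustration and are not necessarily the settings of our implementation.

\\begin{verbatim}
# Shape-level sketch of the cost-volume construction described above.
# Channel counts, group count and image size are assumed for illustration.
import torch

def gwc_volume(f_left, f_right, max_disp, num_groups):
    """Group-wise correlation ("g-cost"): (B, G, max_disp, H, W)."""
    B, C, H, W = f_left.shape
    vol = f_left.new_zeros(B, num_groups, max_disp, H, W)
    for d in range(max_disp):
        l = f_left[:, :, :, d:]
        r = f_right[:, :, :, :W - d]
        prod = (l * r).view(B, num_groups, C // num_groups, H, W - d)
        vol[:, :, d, :, d:] = prod.mean(dim=2)
    return vol

def cat_volume(f_left, f_right, max_disp):
    """Concatenation volume ("cat-cost"): (B, 2C, max_disp, H, W)."""
    B, C, H, W = f_left.shape
    vol = f_left.new_zeros(B, 2 * C, max_disp, H, W)
    for d in range(max_disp):
        vol[:, :C, d, :, d:] = f_left[:, :, :, d:]
        vol[:, C:, d, :, d:] = f_right[:, :, :, :W - d]
    return vol

B, C, H, W, D_max, n = 1, 64, 128, 256, 192, 4
f_l = torch.randn(B, C, H // n, W // n)   # left unary features, 1/n scale
f_r = torch.randn(B, C, H // n, W // n)   # right unary features, 1/n scale
vol = torch.cat([gwc_volume(f_l, f_r, D_max // n, 8),
                 cat_volume(f_l, f_r, D_max // n)], dim=1)
print(vol.shape)   # (1, 8 + 128, 48, 32, 64): the low-resolution 4D volume
\\end{verbatim}

The multistage module then has to decouple all $D_{max}$ similarity scores from a volume of this shape, rather than only $\\frac{1}{n}D_{max}$ of them.
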
\n\nAt last, the final disparity is obtained by the modified parabolic fitting\\cite{Raffel1998Particle,DBLP:journals\/scjapan\/ShimizuO02,nishiguti} sub-pixel estimation method on the high-resolution 3D cost volume.\n\n\\subsection{Multistage 3D Cost Volume Estimation Module} \nThe proposed method divides the matching range into $k$ cells, and each cell contains $n$ points, where $k=D_{max}\/n$. $D_{max}$ is the length of the matching range. $n$ is the same as the ratio of downsampling. In this way, the candidate set can be represented as $\\{c_{0}^{0}, c_{0}^{1},c_{0}^{2}, \\cdots, c_{0}^{n-1}, c_{1}^{0}, c_{1}^{1}, \\cdots, c_{m}^s, \\cdots, c_{k-1}^{n-1}\\}$. Each stage of the multistage full matching module learns similarity scores of specific $k$ candidates, \\emph{i.e.} $\\{c_{0}^{s}, c_{1}^{s}, \\cdots, c_{k-1}^{s}\\}$($s$ is the stage number of current stage), from $k$ correlation features of the low-resolution 4D volume. The candidates learned in the $s$-th stage is adjacent to $(s-1)$ stage and $(s+1)$ stage. $D_{max}$ high-resolution similarity scores are obtained after $n$ stages.\n\nIn order to obtain $D_{max}$ high-resolution similarity scores, we decouple $D_{max}$ similarity features from different low-resolution 4D volumes. Let $f(x)$-$C$ be the coordinate system set up with $f(x)$ as the origin in the C dimensional feature space, and $f(x)$ is the feature corresponding to the point $I_r(x)$ of reference image. The similarity features between $I_r(x)$ and its candidates, \\emph{i.e.} $\\{fc(x+0), fc(x+1), \\cdots, fc(x+d), \\cdots, fc(x+D_{max}-1)\\}$, is a series points in $f(x)$-$C$. \n\nIn camera coordinate system, the position difference between $c_m^s(x)$ can be represented as $\\Delta x$. Similarly, in $f(x)$-$C$, the position difference between $fc(x+d)$ can be represented as $\\Delta fc$, where $\\Delta fc \\in \\mathbb{R}^C$. Therefore, when the position of $fc(x+d)$ is known, the position of $fc(x+d+\\Delta x)$ can be obtained by:\n\\begin{equation}\nfc(x+d+\\Delta x) = fc(x+d) + \\Delta fc\n\\end{equation}\nwhere $\\Delta x$ represents the position difference between $c_m^s(x)$ and $c_m^s(x+\\Delta x)$.\n\n$fc(x+d)$ in high-dimensional feature space not only contains the information of $c_m^s(x)$, but also include the features of points surround it. Consequently, $\\Delta fc$ from $fc(x+d)$ to $fc(x+d+\\Delta x)$ can be decoupled from $fc(x+d)$ when $\\Delta x$ is small. The success of our experiment verifies the correctness of this inference.\n\nBased on the above analysis, we design the follow framework:\n\\paragraph{Step 1} we first initialize a similarity feature $F^{ori}=\\{fc^{ori}_m | m \\in N, m < k\\}$ for all stage in the space. After that, we will estimate $\\Delta F^{s}=\\{\\Delta fc^{s}_m | m \\in N, m < k\\}$ from $F^{ori}$ to $F^{s}=\\{fc^{s}_m | m \\in N, m < k\\}$ for each stage in the second step, where $s$ is the stage number.\n\nDue to the nature of CNN, each point in the low scale features contains the information of all pixels in the patch of the original resolution images where its located. Such features will first be associated across $\\frac{1}{n}D_{max}$ disparity levels as follows:\n\\begin{equation}\nF_{raw}(f_{r},f_{so}^{d}) = Corr(f_{r},f_{so}^{d})\n\\end{equation}\nwhere $Corr(\\cdot, \\cdot)$ represent \\emph{Group-wise correlation}. $f_{r}$ is the feature of a target point on reference feature map, and $f_{so}^{d}$ is the feature of the $d$-th candidate point on source feature map. 
$F_{raw}(f_{r},f_{so}^{d})$ is the raw correlation feature of $f_{r}$ and $f_{so}^{d}$.\n\nTherefore, all $D_{max}$ correlation features between target points and candidates are included in the raw 4D volume composed of $F_{raw}(f_{r},f_{so}^{d})$. Then, $F_{raw}(f_{r},f_{so}^{d})$ in such raw 4D volume is converted to $F^{ori}$ by two 3D convolutions. In this way, the $F^{ori}$ can provide an excellent initial value for better $\\Delta F^{s} $.\n\n\\paragraph{Step 2} The multistage similarity features estimation process is divided into $n$ stages. Except the first stage which takes $F_{ori}$ as input, each stage takes $F^{s-1}$ as the input to get the shift $\\Delta F^{s} $ between $F^{s}$ and $F^{ori}$.\n\\begin{equation}\n \\Delta F^{s} = De(F^{s-1} )\n\\end{equation}\nwhere $De(\\cdot)$ is the decouple function, which is implemented by an hourglass network composed of 3D CNN. The structure of $De(\\cdot)$ is the same as the basic block in most cost aggregation network\\cite{DBLP:conf\/cvpr\/ChangC18,DBLP:conf\/cvpr\/GuoYYWL19}. Then, the simlairty features of candidates $\\{c_{0}^{s}, c_{1}^{s}, \\cdots, c_{k-1}^{s}\\}$ in the $s$-th stage can be obtained by\n\\begin{equation}\nF^{s} =F^{ori} + \\Delta F^{s}\n\\end{equation}\n\n\\paragraph{Step 3} Another role of $De(\\cdot)$ is to update each similarity feature by referring to the information in a larger receptive field. Therefore, a serial network composed of the $n$ $De(\\cdot)$ is necessary to obtain more sufficient aggregation. However, a serial network will lead to unbalance between the predictions of each stage, for each $De(\\cdot)$ is responsible for a different task. In this step, as shown in Fig.1, a \\emph{Stages Mutual Aid} operation is conducted on $F^{s}$.\n \n We design the formula(5) to construct similarity scores for supervising each stage:\n\\begin{equation}\nS\\left(m,i\\right)=e^{-(m \\times n+i-d_{gt})^2}\n\\end{equation}\nwhere $m$ stands for the $m$-th cell, and $m \\in \\{0,1,2,\\ldots, k-1\\}$. $n$ is the length of each cell. $i$ is the order of the candidate point in each cell, and it also represents the stage order. $d_{gt}$ is the ground-truth. $S(m,i)$ represents the similarity score that the $i$-th point in the $m$-th cell of the $i$-th stage should have. The similarity distribution ground-truth of different stages can be obtained by changing the value of $i$. If the similarity peak falls in the $i$-th position of the $m$-th bin, the similarity peak of each stage will fall in the $m$-th bin or the ajacent bin of $m$. Therefore, the similarity distributions output of each stage is close, the similarity peak difference of which is 1 or 0. Accordingly, a voting result can be obtained from the similarity features output of the other $(n-1)$ stages for the estimation of the $s$-th stage.\n\nFirst, we obtain the voting results $V$ for the $s$-th stage by \n\\begin{equation}\nV= \\sum_{i=0, i\\ne s}^{n-1}(F^i)\n\\end{equation}\nThen, the $V$ is optimized by one 3D convolution layer, obtaining $V^s$.\nBecause it's hard to decide artificially the percentage of $V^s$ and $F^s$ when fusion the two features, we directly let the network learn how to merge the two features. The two features are concatenated along $d$ dimension and fed to the distance network $D(\\cdot)$ to calculate the similarity scores. 
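As a small illustration of the per-stage supervision targets defined in $Eq.(5)$, the toy sketch below assumes $D_{max}=12$, $n=4$ (hence $k=3$ cells) and a ground-truth disparity of $7$ px; these values are chosen only to keep the printout short.

\\begin{verbatim}
# Per-stage supervision targets from Eq.(5), for a toy setting.
# Assumed for illustration only: D_max = 12, n = 4 (so k = 3), d_gt = 7.
import numpy as np

D_max, n, d_gt = 12, 4, 7
k = D_max // n

S = np.array([[np.exp(-((m * n + i - d_gt) ** 2)) for m in range(k)]
              for i in range(n)])        # shape (n stages, k cells)

for i in range(n):
    print(f"stage {i}: peak at cell {S[i].argmax()}", np.round(S[i], 3))
\\end{verbatim}

Stage $3$ peaks exactly in cell $1$ (since $1\\times 4+3=d_{gt}$), while the other stages peak in cell $1$ or its neighbour; it is this closeness of the $n$ per-stage distributions that the voting step in $Eq.(6)$ exploits.
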
The $D(\\cdot)$ network is consist of two 3D convolution layers.\n\\begin{equation}\nP(I_r(x), c^s(x))) = D(Concat(V^s, F^s)), \n\\end{equation}\nwhere $P(I_r(x), c^s(x)))$ is the 3D cost volume output from the $s$-th stage, which represents the predicted similarity scores between $I_r(x)$ and $c^s(x)$ of the $s$-th stage. $Concat(\\cdot)$ is the concatenation operation. \n\n \\paragraph{Step 4}\n Latitude $H$ and latitude $W$ of the $n$ 3D cost volumes are restored to the original scale by linear interpolation. The full resolution cost volume is obtained by combining the $n$ low-resolution 3D cost volume along the similarity dimension. Finally, high-resolution cost volume is normalized by \\emph{softmax} operation along the similarity dimension and rearranged in the similarity dimension to obtain the final high-resolution cost volume.\n\n\\subsection{Supervision Strategy and Loss Function}\n\nWe design the formula(5) to construct similarity scores for supervising each stage. By using formula(5), our supervision strategy can guarantee the relationship between the output results of multiple stages. \n\nThe full loss of the proposed MFM network can be represented as the following:\n\\begin{equation}\nLoss= L_{stage}+L_{1}\n\\end{equation}\n\n$L_{1}$ loss is utilized to optimize the final disparity calculated through the high-resolution similarity distribution. We denote the predicted disparity as $d$ and the disparity ground-truth as $d_{gt}$, $L_{1}$ loss can be represented as the following:\n\\begin{equation}\nL_{1} = \\sum|d-d_{gt}|\n\\end{equation}\n\n$L_{stage}$ is designed to guide each stage to learn specific $k$ similarity scores from $k$ correlation features of the low-resolution 4D volume. We need to align the predicted similarity distributions of $n$ stages with the aforementioned supervision similarity distributions of the $n$ stages, respectively, therefore the Cross Entropy Error Function is selected for supervision. $L_{stage}$ loss is designed as follows:\n\\begin{equation}\nL_{stage}=\\sum_{i=0}^n\\sum_{m=0}^{k-1}S(m,i)\\cdot logP(m,i)\n\\end{equation}\nwhere $P(m,i)$ represents the similarity score between the target point and the $i$-th candidate point in the $m$-th cell output from the $i$-th stage, and $k=D_{max}\/n$. \n\n\\begin{figure*}[htb]\n \\centering\n \\includegraphics[scale=0.43]{kitti12.pdf}\n \\caption{Error map visualization of AcfNet\\cite{DBLP:conf\/aaai\/0005C0YYLY20}, AANet+\\cite{DBLP:journals\/corr\/abs-2004-09548} and our method on KITTI 2012. Darker represents lower error.}\n \\end{figure*}\n\n\\begin{figure*}[htb]\n \\centering\n \\includegraphics[scale=0.43]{kitti15.pdf}\n \\caption{ Error map visualization of AcfNet\\cite{DBLP:conf\/aaai\/0005C0YYLY20}, AANet+\\cite{DBLP:journals\/corr\/abs-2004-09548} and our method on KITTI 2015. Darker blue represents lower error.}\n\\end{figure*}\n\\section{Experiments}\n\\subsection{Implementation Details}\n\\paragraph{Datasets.} Scene Flow datasets\\cite{DBLP:conf\/cvpr\/MayerIHFCDB16} provide 35,454 training and 4,370 testing images of size $960\\times540$ with accurate ground-truth. We use the Finalpass of the Scene Flow datasets, since it contains more motion blur and defocus, and is more real than the Cleanpass. KITTI 2012\\cite{DBLP:conf\/cvpr\/GeigerLU12} and KITTI 2015\\cite{DBLP:conf\/cvpr\/MenzeG15} are driving scene datasets. 
KITTI 2012 contains 194 training image pairs with sparse ground truth and 195 testing image pairs with ground truth disparities held by evaluation server for submission evaluation only. KITTI 2015 contains 200 training stereo image pairs with sparse ground-truth and 200 testing image pairs with ground truth disparities held by evaluation server for submission evaluation only. \n\n\\paragraph{Evaluation Indicators.} For Scene Flow datasets, the evaluation metrics are the end-point error (EPE), which is the mean average disparity error in pixels, and the error rates $>1px$ and $>3px$ are the percentage of pixels whose error are greater than 1 pixel and 3 pixels, respectively. For KITTI 2015, we use the percentage of disparity outliers D1 as evaluation indicators. The outliers are defined as the pixels whose disparity errors are larger than $max(3px, 0.05\\cdot d_{gt})$, where $d_{gt}$ denotes the ground-truth disparity. For KITTI 2012, we use the error rates $ >2px, >3px, >4px$ and $>5px$ as evaluation indicators.\n\n\\paragraph{Training.} Our network is implemented with PyTorch. We use Adam optimizer, with $\\beta_{1} = 0.9, \\beta_{2} = 0.999$. The batch size is fixed to 8. For Scene Flow datasets, we train the network for 16 epochs in total. The initial learning rate is set as 0.001 and it is down-scaled by 2 after epoch 10, 12, 14. We set the $D_{max}$ as 192 and $n$ as 4, respectively. For KITTI 2015 and KITTI 2012, we fine-tune the network pre-trained on Scene Flow datasets for another 300 epochs. The learning rate is 0.001 and it is down-scaled by 10 after 210 epochs.\n\n\\subsection{Performance Comparison}\n\n\\paragraph{Quantitative Comparison.}\n\\begin{table}[t]\n\t\\centering\n\t\\caption{Performance comparison on Scene Flow datasets.}\n\t\\begin{tabular}{lccc}\n\t\t\\hline\n\t\tModel & EPE & \\textgreater{}1px & \\textgreater{}3px \\\\ \n\t\t\\hline\n\t\tiResNet-i3\\shortcite{DBLP:conf\/cvpr\/LiangFGLCQZZ18}&2.45&9.28\\%&4.57\\% \\\\\n\t\tCRL\\shortcite{DBLP:conf\/iccvw\/PangSRYY17}&1.32&-&6.20\\% \\\\\n\t\t\\hline\n\t\tStereoNet\\shortcite{DBLP:conf\/eccv\/KhamisFRKVI18}&1.10&21.33\\%&8.80\\% \\\\\n\t\tPSMNet\\shortcite{DBLP:conf\/cvpr\/ChangC18}&1.03& 10.32\\%&4.12\\% \\\\\n\t\tGANet\\shortcite{DBLP:conf\/cvpr\/ZhangPYT19}&0.81& 9.00\\%&3.49\\% \\\\\n\t\tGwcNet\\shortcite{DBLP:conf\/cvpr\/GuoYYWL19}&0.77&8.03\\% &3.30\\% \\\\\n\t\tAANet\\shortcite{DBLP:journals\/corr\/abs-2004-09548}&0.87&9.30\\%&- \\\\\n\t\tAcfNet\\shortcite{DBLP:conf\/aaai\/0005C0YYLY20}&0.87&-&4.31\\% \\\\ \n\t\t\\hline\n\t\tDeepPruner-Best\\shortcite{DBLP:conf\/iccv\/DuggalWMHU19}&0.86&-&- \\\\\n\t\t\\hline\n\t\tOurs&\\textbf{0.66}&\\textbf{4.95\\%}&\\textbf{2.50\\%} \\\\ \n\t\t\\hline\n\t\\end{tabular}\n \\label{table1}\n\\end{table}\n\n\n\\begin{table}[t]\n\t\\centering\n\t\\caption{Performance comparison on KITTI 2015 datasets.}\n\t\\begin{tabular}{llll}\n\t\t\\hline \n\t\tModel&D1-bg&D1-fg&D1-all \\\\ \n\t\t\\hline\n\t\tCRL\\shortcite{DBLP:conf\/iccvw\/PangSRYY17}&2.48\\%&3.59\\%&2.67\\% \\\\\n\t\tEdgeStereo-v2\\shortcite{DBLP:journals\/corr\/abs-1903-01700}&1.84\\%&3.30\\%&2.08\\% \\\\\n\t\tSegStereo\\shortcite{DBLP:conf\/eccv\/YangZSDJ18}&1.88\\%&4.07\\%&2.25\\% \\\\\n\t\t\\hline\n\t\tGCNet\\shortcite{DBLP:conf\/iccv\/KendallMDH17}&2.21\\%&6.16\\%&2.87\\% \\\\\n\t\tGwcNet-g\\shortcite{DBLP:conf\/cvpr\/GuoYYWL19}&1.74\\%&3.93\\%&2.11\\% \\\\\n\t\tAcfNet\\shortcite{DBLP:conf\/aaai\/0005C0YYLY20}&1.51\\%&3.80\\%&1.89\\% 
\\\\\n\t\tBi3D\\shortcite{DBLP:conf\/cvpr\/BadkiTKKSG20}&1.95\\%&\\textbf{3.48}\\%&2.21\\% \\\\\n\t\tAANet+\\shortcite{DBLP:journals\/corr\/abs-2004-09548}&1.65\\%&3.96\\%&2.03\\% \\\\\n\t\t\\hline\n\t\tHD\\^ \\ 3\\shortcite{DBLP:conf\/cvpr\/YinDY19}&1.70\\%&3.63\\%&2.02\\% \\\\\n\t\tCSN\\shortcite{DBLP:conf\/cvpr\/GuFZDTT20}&1.59\\%&4.03\\%&2.00\\% \\\\\n\t\t\\hline\n\t\tOurs&\\textbf{1.51\\%}&3.67\\%&\\textbf{1.87}\\% \\\\\n\t\t\\hline\n\t\\end{tabular}\n \\label{table2}\n\\end{table}\n\\begin{table*}[t]\n\t\\centering\n\t\\caption{Performance comparison on KITTI 2012 datasets.}\n\t\\begin{tabular}{l|cc|cc|cc|cc}\n\t\t\\hline\n\t\t\\multirow{2}{*}{Model} & \\multicolumn{2}{c|}{ \\textgreater{}2px}&\\multicolumn{2}{c|}{ \\textgreater{}3px}&\\multicolumn{2}{c|}{ \\textgreater{}4px}& \\multicolumn{2}{c}{ \\textgreater{}5px} \\\\\n\t\t\\cline{2-9} \n\t\t&Noc&All&Noc&All&Noc&All&Noc&All \\\\\n\t\t\\hlin\n\t\tEdgeStereo-v2\\shortcite{DBLP:journals\/corr\/abs-1903-01700}&2.32\\%&2.88\\%&1.46\\%&1.83\\%&1.07\\%&1.34\\%&0.83\\%&1.04\\%\\\\\n\t\tSegStereo\\shortcite{DBLP:conf\/eccv\/YangZSDJ18}&2.66\\%&3.19\\%&1.68\\%&2.03\\%&1.25\\%&1.52\\%&1.00\\%&1.21\\%\\\\%&0.5\\%\\\\\n\t\t\\hline\n\t\tGwcNet-gc\\shortcite{DBLP:conf\/cvpr\/GuoYYWL19}&2.16\\%&2.71\\%&1.32\\%&1.70\\%&0.99\\%&1.27\\%&0.80\\%&1.03\\%\\\\\n\t\tGANet-deep\\shortcite{DBLP:conf\/cvpr\/ZhangPYT19}&1.89\\%&2.50\\%&1.19\\%&1.60\\%&0.91\\%&1.23\\%&0.76\\%&1.02\\%\\\\\n\t\tAMNet\\shortcite{DBLP:journals\/corr\/abs-1904-09099}&2.12\\%&2.71\\%&1.32\\%&1.73\\%&0.99\\%&1.31\\%&0.80\\%&1.06\\%\\\\\n\t\tAcfNet\\shortcite{DBLP:conf\/aaai\/0005C0YYLY20}&1.83\\%&2.35\\%&1.17\\%&1.54\\%&0.92\\%&1.21\\%&0.77\\%&1.01\\%\\\\\n\t\tAANet+\\shortcite{DBLP:journals\/corr\/abs-2004-09548}&2.30\\%&2.96\\%&1.55\\%&2.04\\%&1.20\\%&1.58\\%&0.98\\%&1.30\\%\\\\\n\t\t\\hline\n\t HD\\^ \\ 3\\shortcite{DBLP:conf\/cvpr\/YinDY19}&2.00\\%&2.56\\%&1.40\\%&1.80\\%&1.12\\%&1.43\\%&0.94\\%&1.19\\%\\\\\n\t\t\\hline\n\t\tOurs&\\textbf{1.68\\%}&\\textbf{2.16\\%}&\\textbf{1.15\\%}&\\textbf{1.47\\%}&\\textbf{0.91\\%}&\\textbf{1.16\\%}&\\textbf{0.76\\%}&\\textbf{0.97\\%}\\\\%&\\textbf{0.4}&\\textbf{0.5} \n\t\t\\hline \n\t\\end{tabular}\n \\label{table3}\n\\end{table*}\n\nThe quantitative comparison focuses on deep learning stereo matching methods. Table 1, Table 2 and Table 3 show the performance of different methods on Scene Flow datasets, KITTI 2012 dataset, and KITTI 2015 dataset, respectively. In each table from top to bottom, the methods are separated into four groups: (1)2D CNN regression based methods, (2)cost aggregation network based methods, (3)multistage matching methods, (4)our MFM method.\n\n(1) methods refine the low-resolution coarse disparity with much error and noise without referring to the correlation features. Thus, such methods have lower precision. (2) methods obtain the high quality 3D cost volume iteratively from the 4D volume, which provides satisfied geometry context. Therefore, compared with (1) methods, the EPE value of (2) methods is significantly reduced to below 1.0. However, the $>1px$ error rates is still high. Because the multistage cost aggregation module only outputs a low-resolution 3D cost volume, and the high-resolution disparity is obtained from the low-resolution 3D cost volume without the geometry correlation features. (3) methods match within the narrow matching range obtained from the previous stage other than obtaining all similarity scores directly. 
The miscalculated narrow matching range obtained in the previous stage results in misestimation in the later stage, which causes low accuracy. As shown in Table 1, Table 2, and Table 3, the proposed MFM method performs better, which demonstrates the effectiveness of the multistage 3D cost volume estimation module.\n\\begin{figure*}[t]\n\t\\centering\n\t\\includegraphics[width=0.8\\textwidth]{scene}\n\t\\caption{Error map visualization of GwcNet (top row)\\cite{DBLP:conf\/cvpr\/GuoYYWL19} and our method (bottom row) on Scene Flow datasets. Darker blue represents lower error.}\n\t\\label{figure2}\n\\end{figure*}\n\n\\paragraph{Qualitative Comparison.}\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[width=0.4\\textwidth]{similarity}\n\t\\caption{Peak position estimation comparison with existing classic cost aggregation network based methods.}\n\t\\label{figure3}\n\\end{figure}\n
Figure 4 visualizes the error maps of our method and GwcNet\\cite{DBLP:conf\/cvpr\/GuoYYWL19} on Scene Flow datasets. As shown in Figure 4, light blue regions occupy less area in the error map of our MFM method, which illustrates that our method achieves better prediction accuracy by decomposing the matching task into multiple stages to estimate the high-resolution 3D cost volume directly.\n\n
Figure 2 and Figure 3 visualize the error maps of our method and other methods on KITTI 2012 and KITTI 2015, respectively. The erroneous regions occupy less area in the error maps of our MFM method, which also demonstrates the effectiveness of the proposed MFM mechanism.\n\n
The prediction of the peak position in the similarity distribution determines the accuracy of the disparity obtained through 3D cost volume refinement, which is the first and the most important phase of the disparity prediction task. We count the proportion of points whose peak position deviates from the ground truth by more than 1 pixel and 3 pixels, respectively. As shown in Figure 5, PSMNet, GANet, and GwcNet all have a larger number of error points than our MFM method, especially in the more-than-1-pixel case. Well begun is half done: the remarkable advantage of our method shown in Figure 5 largely determines our final state-of-the-art performance.\n\n\\subsection{Detailed Analysis of Proposed Method}\n\\paragraph{Ablation Study.}\n\\begin{table}[t]\n\t\\centering\n\t\\caption{The ablative prediction results of different variants of MFM on Scene Flow datasets. B: Baseline, de: decouple, ms: \\emph{Multistage Matching}, sma: \\emph{Stages Mutual Aid}.}\n\n\t\\begin{tabular}{lccc}\n\t\t\\hline\n\t\tModel&EPE& \\textgreater{}1px & \\textgreater{}3px \\\\\n\t\t\\hline\n\t\tBaseline &0.77&8.03\\% &3.30\\%\\\\\n\t\t\\hline\n\t\tB+de&0.86&6.30\\%&3.10\\%\\\\\n\t\t\\hline\n\t\tB+de+ms&0.74&5.18\\%&2.66\\%\\\\\n\t\tB+de+ms+sma&\\textbf{0.66}&\\textbf{4.95\\%}& \\textbf{2.50\\%} \\\\\n\t\t\\hline\n\t\\end{tabular}\n \\label{table4}\n\\end{table}\n
We conduct ablation studies to understand the influence of different designs in our proposed method. We design different runs on Scene Flow datasets and report the results in Table 4. First, GwcNet\\cite{DBLP:conf\/cvpr\/GuoYYWL19} is adopted as our baseline. The \"Baseline\" refines the correlation features multiple times to obtain a low-resolution 3D cost volume, and the high-resolution 3D cost volume is obtained by linear interpolation.
When we introduce the learning mechanism (Baseline+decouple) to learn all similarity scores from the last stage of the cost volume aggregation module, the $> 1px$ and $> 3px$ prediction errors on Scene Flow datasets improve by $1.73\\%$ and $0.20\\%$, respectively. However, the EPE becomes slightly worse due to the competition of simultaneously learning multiple similarity scores from one correlation feature, which may influence the prediction results over a large pixel error range. Then, we take into account the multistage decomposition of the learning task (Baseline+decouple+\\emph{Multistage Matching}), and achieve the state-of-the-art result on all accuracy metrics. This demonstrates that decoupling all similarity scores step by step from different 4D volumes makes the task easier to learn. Finally, the \\emph{Stages Mutual Aid} operation is added to 'Baseline+decouple+\\emph{Multistage Matching}'. The performance is significantly improved, which demonstrates the simplicity and effectiveness of the \\emph{Stages Mutual Aid} module. Ablation experiments have verified that the proposed MFM scheme indeed learns more accurate similarity scores by decomposing the task into multiple stages, thus effectively improving the prediction performance of the high-resolution disparity map. \n\n\\section{Conclusion}\n
In this paper, we propose the Multistage Full Matching framework, which directly estimates the high-resolution 3D cost volume from the low-resolution 4D volume by decomposing the matching task into multiple stages. First, a serial network is designed for sufficient cost aggregation and multistage high-resolution 3D cost volume estimation. Then, the \\emph{Stages Mutual Aid} operation is proposed to solve the unbalanced prediction of the multiple stages caused by the serial network. Last but most important, our MFM scheme achieves state-of-the-art performance on three popular datasets.\n\n\\bibliographystyle{aaai21}\n\n\\section{Introduction}\n
Current advances in robotics and autonomous systems have expanded the use of robots in a wide range of robotic tasks including assembly, advanced manufacturing, and human-robot or robot-robot collaboration. In order for robots to efficiently perform these tasks, they need to have the ability to adapt to the changing environment while interacting with their surroundings, and a key component of this interaction is the reliable grasping of arbitrary objects. Consequently, a recent trend in robotics research has focused on object detection and pose estimation for the purpose of dynamic robotic grasping.\n\n
However, identifying objects and recovering their poses are particularly challenging tasks as objects in the real world are extremely varied in shape and appearance. Moreover, cluttered scenes, occlusion between objects, and variance in lighting conditions make it even more difficult. Additionally, the system needs to be sufficiently fast to facilitate real-time robotic tasks. As a result, a generic solution that can address all these problems remains an open challenge.\n\n
While classification~\\cite{resnet, vgg, inception, flexnet, overfeat, spatial}, detection~\\cite{fast-rcnn, faster-rcnn, ssd, yolo, yolo-9000, fldnet}, and segmentation~\\cite{segnet, maskrcnn, unet} of objects from images have taken a significant step forward thanks to deep learning, the same has not yet happened for 3D localization and pose estimation.
One primary reason was the lack of labeled data, as it is not practical to produce such annotations manually. As a result, the recent research trend in the deep learning community for such applications has shifted towards synthetic datasets~\\cite{butler,mayer,qiu,zhang,mccormac}. Several pose estimation methods leveraging deep learning techniques~\\cite{posecnn,dope,brachmann,wang,hu} use these synthetic datasets for training and have shown satisfactory accuracy. \n\n
Although synthetic data is a promising alternative, capable of generating large amounts of labeled data, it requires photorealistic 3D models of the objects to mirror the real-world scenario. Hence, generating synthetic data for each newly introduced object requires photorealistic 3D models and thus significant effort from skilled 3D artists. Furthermore, training and running deep learning models is not feasible without high computing resources. As a result, real-time object detection and pose estimation on computationally moderate machines remain challenging. To address these issues, we have devised a simpler pipeline that does not rely on high computing resources and focuses on planar objects, requiring only an RGB image and the depth information in order to infer real-time object detection and pose estimation.\n\n
In this work, we present a feature-detector-descriptor based method for detection and a homography based pose estimation technique where, by utilizing the depth information, we estimate the pose of an object in terms of a 2D planar representation in 3D space. The robot is pre-trained to perform a set of canonical grasps; a canonical grasp describes how a robotic end-effector should be placed relative to an object in a fixed pose so that it can securely grasp it. Afterward, the robot is able to detect objects, estimate their poses in real time, and adapt the pre-trained canonical grasp to the new pose of the object of interest. We demonstrate that the proposed method can detect a well-textured planar object and estimate its accurate pose within a tolerable amount of out-of-plane rotation. We also conducted experiments with the humanoid PR2 robot to show the applicability of the framework, where the robot grasped objects by adapting to a range of different poses.\n\n\\section{Related Work}\n
Our work consists of three modules: object detection, planar pose estimation, and adaptive grasping. In the following sub-sections, several fields of research that are closely related to our work are reviewed.\n\\subsection{Object Detection}\n
Object detection has been one of the fundamental challenges in the field of computer vision, and in that aspect the introduction of feature detectors and descriptors represents a great achievement. Over the past decades, many detectors, descriptors, and their numerous variants have been presented in the literature. The applications of these methods have widely extended to numerous other vision applications such as panorama stitching, tracking, visual navigation, etc.\n\n
One of the first feature detectors was proposed by Harris et al.~\\cite{harris} (widely known as the Harris corner detector). Later Tomasi et al.~\\cite{tomasi} developed the KLT (Kanade-Lucas-Tomasi) tracker based on the Harris corner detector. Shi and Tomasi introduced a new detection metric GFTT~\\cite{shi} (Good Features To Track) and argued that it offered superior performance. Hall et al.
introduced the concept of saliency~\\cite{hall} in terms of the change in scale and evaluated the Harris method proposed in~\\cite{lindeberg} and the Harris Laplacian corner detector~\\cite{mikolajczyk} where a Harris detector and a Laplacian function are combined.\n\nMotivated by the need for a scale-invariant feature detector, in 2004 Lowe~\\cite{SIFT} published one of the most influential papers in computer vision, SIFT (Scale Invariant Feature Transform). SIFT is both a feature point detector and descriptor. H. Bay et al.~\\cite{SURF} proposed SURF (Speeded Up Robust Features) in 2008. But both of these methods are computationally expensive as SIFT detector leverages the difference of Gaussians (DoG) in different scales while SURF detector uses a Haar wavelet approximation of the determinant of the Hessian matrix to speed up the detection process. Many variants of SIFT~\\cite{sift_v_1, sift_v_2, sift_v_3, sift_v_4} and SURF~\\cite{surf_v_1, surf_v_2, surf_v_3} were proposed, either targeting a different problem or reporting improvements in matching, however, the execution time remained a persisting problem for several vision applications.\n\nTo improve execution time, several other detectors such as FAST~\\cite{fast} and AGAST~\\cite{agast} have been introduced. Calonder et al. developed the BRIEF~\\cite{brief} (Binary Robust Independent Elementary Features) descriptor of binary strings that has a fast execution time and is very useful for matching images. E. Rublee et al. presented ORB~\\cite{ORB} (Oriented FAST and Rotated Brief) which is a combination of modified FAST (Features from Accelerated Segment Test) for feature detection and BRIEF for description. S. Leutnegger et al. designed BRISK~\\cite{BRISK} (Binary Robust Invariant Scale Keypoint) that detects corners using AGAST and filters them using FAST. On the other hand, FREAK (Fast Retina Key-point), introduced by Alahi et al.~\\cite{FREAK} generates retinal sampling patterns using a circular sampling grid and uses a binary descriptor, formed by a one bit difference of Gaussians (DoG). Alcantarilla et al. introduced KAZE~\\cite{KAZE} features that exploit non-linear scale-space using non-linear diffusion filtering and later extended it to AKAZE~\\cite{AKAZE} where they replaced it with a more computationally efficient method called FED (Fast Explicit Diffusion)~\\cite{fed1, fed2}.\n\nIn our work, we have selected four methods to investigate: SIFT, SURF, FAST+BRISK, AKAZE. \n\n\\subsection{Planar Pose Estimation}\n\nAmong the many techniques in literature on pose estimation, we focus our review on those related to planar pose estimation. In recent years, planar pose estimation has been increasingly becoming popular in many fields, such as robotics and augmented reality. \n\nSimon et. al~\\cite{simon} proposed a pose estimation technique for planar structures using homography projection and by computing camera pose from consecutive images. Changhai et. al~\\cite{changhai} presented a method to robustly estimate 3D poses of planes by applying a weighted incremental normal estimation method that uses Bayesian inference. Donoser et al.~\\cite{donoser} utilized the properties of Maximally Stable Extremal Regions (MSERs~\\cite{LMSER}) to construct a perspectively invariant frame on the closed contour to estimate the planar pose. 
In our approach, we applied perspective transformation to approximate a set of corresponding points on the test image for estimating the basis vectors of the object surface and used the depth information to estimate the 3D pose by computing the normal to the planar object.\n\n\\subsection{Adaptive Grasping}\n\n
Designing an adaptive grasping system is challenging due to the complex nature of the shapes of objects. In early times, analytical methods were used where the system would analyze the geometric structure of the object and would try to predict suitable grasping points. Sahbani et al.~\\cite{sahbani} provided an in-depth review of the existing analytical approaches for 3D object grasping. However, with the analytical approach it is difficult to compute the required forces, and it is not suitable for autonomous manipulation. Later, as the number of 3D models increased, numerous data-driven methods were introduced that would analyze grasps in the 3D model database and then transfer them to the target object. Bohg et al.~\\cite{bohg} reviewed data-driven grasping methods, dividing the approaches into three groups based on the familiarity of the object.\n\n
Kehoe et al. \\cite{Kehoe2013} used a candidate grasp from the candidate grasp set based on the feasibility score determined by the grasp planner. The grasps were not very accurate in situations where the objects had stable horizontal poses and were close to the width of the robot's gripper. Huebner et al. \\cite{Huebner2008} took a similar approach as they perform grasp candidate simulation. They created a sequence of grasps by approximating the shape of the objects and then computed a random grasp evaluation for each model of objects. In both works, a grasp has been chosen from a list of candidate grasps.\n\n
The recent advances in deep learning also made it possible to regress grasp configuration through deep convolutional networks. A number of deep learning-based methods were reviewed in~\\cite{caldera} where the authors also discussed how each element in deep learning-based methods enhances robotic grasp detection. \\cite{Yu2013} presented a system where deep neural networks were used to learn hierarchical features to detect and estimate the pose of an object, and then use the centers of the defined pose classes to grasp the objects. Kroemer et al.~\\cite{Kroemer2009} introduced an active learning approach where the robot observes a few good grasps by demonstration and learns a value function for these grasps using Gaussian process regression. Aleotti et al.~\\cite{Aleotti2011} proposed a grasping model that is capable of grasping objects by their parts, which learns new tasks from human demonstration with automatic 3D shape segmentation for object recognition and semantic modeling. \\cite{Saxena2008} and \\cite{Montesano2012} used supervised learning to predict grasp locations from RGB images. In~\\cite{Nogueira2016}, as an alternative to a trial-and-error exploration strategy, the authors proposed a Bayesian optimization technique to address the robot grasp optimization problem of unknown objects. These methods emphasized developing and using learning models for obtaining accurate grasps. \n\n
In our work, we focus on pre-defining a suitable grasp relative to an object that can adapt to a new grasp based on the change of position and orientation of the object.\n\n\\section{Method}\nThe proposed method is divided into two parts.
The first part outlines the process of simultaneous object detection and pose estimation of multiple objects and the second part describes the process of generating an adaptive grasp using the pre-trained canonical grasp and the object pose.\nThe following sections describe the architecture of the proposed framework (figure~\\ref{fig:sysarch}) in detail.\n\n\n\n\\subsection{Object Detection and Pose Estimation}\n\nWe present a planar pose estimation algorithm (algorithm \\ref{algorithm}) for adaptive grasping that consists of four phases: (i) feature extraction and matching, (ii) homography estimation and perspective transformation, (iii) directional vectors estimation on the object surface, (iv) planar pose estimation using the depth data. In the following sections, we will focus on the detailed description of the aforementioned steps.\n\n\\subsubsection{Feature extraction and matching}\n\nOur object detection starts with extracting features from the images of the planar objects and then matching them with the features found in the images acquired from the camera. Image features are patterns in images based on which we can describe the image. A feature detecting algorithm takes an image and returns the locations of these patterns - they can be edges, corners or interest points, blobs or regions of interest points, ridges, etc. This feature information then needs to be transformed into a vector space using a feature descriptor, so that it gives us the possibility to execute numerical operations on them. A feature descriptor encodes these patterns into a series of numerical values that can be used to match, compare, and differentiate one feature to another; for example, we can use these feature vectors to find the similarities in different images which can lead us to detect objects in the image. In theory, this information would be invariant to image transformations. In our work, we have investigated SIFT~\\cite{SIFT}, SURF~\\cite{SURF}, AKAZE~\\cite{AKAZE}, and BRISK~\\cite{BRISK} descriptors. SIFT, SURF, AKAZE are both feature detectors and descriptors, but BRISK uses FAST~\\cite{fast} algorithm for feature detection. These descriptors were selected after carefully reviewing the comparisons done in the recent literature~\\cite{andersson2016comparison, karami2017image, tareen2018comparative}.\n\nOnce the features are extracted and transformed into vectors, we compare the features to determine the presence of an object in the scene. For non-binary feature descriptors (SIFT, SURF) we find matches using the Nearest Neighbor algorithm. However, finding the nearest neighbor matches within high dimensional data is computationally expensive, and with more objects introduced it can affect the process of updating the pose in real-time. To counter this issue to some extent, we used the FLANN~\\cite{muja_flann_2009} implementation of K-d Nearest Neighbor Search, which is an approximation of the K-Nearest Neighbor algorithm that is optimized for high dimensional features. For binary features (AKAZE, BRISK), we used the Hamming distance ratio method to find the matches. Finally, if we have more than ten matches, we presume the object is present in the scene. 
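To make the matching test concrete, the following is a minimal illustrative sketch of the detection step described above, assuming the OpenCV Python bindings; the choice of SIFT, the FLANN index parameters, and the 0.7 ratio-test threshold are illustrative assumptions rather than the exact settings of our implementation.

\\begin{verbatim}
# Illustrative sketch: detect an object by matching descriptors of a
# reference image against a camera frame (OpenCV Python bindings assumed).
# reference_image and camera_frame are assumed to be preloaded grayscale images.
import cv2

MIN_MATCHES = 10               # object presumed present if >= 10 good matches

detector = cv2.SIFT_create()   # SIFT keypoints and descriptors
kp_ref, des_ref = detector.detectAndCompute(reference_image, None)
kp_frame, des_frame = detector.detectAndCompute(camera_frame, None)

# FLANN-based approximate K-d tree nearest neighbor search (non-binary descriptors)
flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
knn_matches = flann.knnMatch(des_ref, des_frame, k=2)

# Keep only matches that pass the distance ratio test
good = [m for m, n in knn_matches if m.distance < 0.7 * n.distance]

object_present = len(good) >= MIN_MATCHES
\\end{verbatim}

For binary descriptors such as AKAZE and BRISK, the FLANN matcher in this sketch would be swapped for a Hamming-distance matcher, as described above.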
\n\n\\RestyleAlgo{boxruled}\n\\begin{algorithm}[ht]\n\\fontsize{8}{8}\\selectfont\n\\DontPrintSemicolon\n \n \\KwIn{Training images of planar objects, $\\mathcal{I}$}\n $Detector \\gets \\text{Define feature detector}$\\;\n $Descriptor \\gets \\text{Define feature descriptor}$\\;\n \\tcc{\\fontsize{8}{8}\\selectfont retrieve feature descriptor}\n \\tcc{\\fontsize{8}{8}\\selectfont for each image in $\\mathcal{I}$}\n \\For{i in $\\mathcal{I}$}{\n \\tcc{\\fontsize{7}{7}\\selectfont $\\mathcal{K}$ is set of detected keypoints for image i}\n \\fontsize{8}{8}\\selectfont $\\mathcal{K} \\gets \\texttt{DetectKeypoints($i, Detector$)}$\\;\n \\tcc{\\fontsize{7}{7}\\selectfont $\\mathcal{D}[i]$ is the corresponding descriptor set for image i }\n $\\mathcal{D}[i] \\gets \\texttt{GetDescriptors( $\\mathcal{K}, Descriptor$)}$\\;\n }\n\n \\While{$\\text{camera is on}$}\n { \\fontsize{8}{8}\\selectfont\n $f \\gets \\text{RGB image frame}$\\;\n $PC \\gets \\text{Point cloud data}$\\;\n \\tcc{\\fontsize{7}{7}\\selectfont $K_F$ is set of detected keypoints for image frame $f$}\n $K_F \\gets \\texttt{DetectKeypoints($f, Detector$)}$\\;\n \\tcc{\\fontsize{7}{7}\\selectfont $D_F$ is the corresponding descriptor set for rgb image $f$}\n $D_F \\gets \\texttt{GetDescriptors( $K_F, Descriptor$)}$\\;\n \\For{i in $\\mathcal{I}$}\n {\n $matches \\gets \\texttt{FindMatches( $\\mathcal{D}[i]$, $D_F$)}$\\;\n \\tcc{\\fontsize{7}{7}\\selectfont If there is at least 10 matches then we have the object (described in image $i$) in the scene}\n \\uIf{\\text{Total number of }$matches \\geq 10$}\n {\n \n \\tcc{\\fontsize{7}{7}\\selectfont extract matched keypoints pair $(kp_{i},kp_{f})$ from the corresponding descriptors matches.}\n $kp_{i}, kp_{f} \\gets \\texttt{ExtractKeypoints($matches$)}$\\;\n $\\mathbf{H} \\gets \\texttt{EstimateHomography($kp_{i}, kp_{f}$)}$\\;\n $p_c, p_x, p_y \\gets \\text{points on the planar object }\\newline \\text{~~~~~~~~~~~~~ obtained using equation (\\ref{eqn:axis})}$\\;\n $p_c^{'}, p_x^{'}, p_y^{'} \\gets \\text{corresponding projected points}\\newline \\text{~~~~~~~~~~~~~ of $p_c, p_x, p_y$ on image frame $f$}\\newline \\text{~~~~~~~~~~~~~ estimated using equations}\\newline \\text{~~~~~~~~~~~~~ (\\ref{eqn:homography}) and (\\ref{eqn:projection})}$\\;\n \\tcc{\\fontsize{7}{7}\\selectfont $\\vec{c}$ denotes the origin of the object frame with respect to the base\/world frame}\n $\\Vec{c}, \\Vec{x}, \\Vec{y} \\gets \\text{corresponding 3d locations }\\newline \\text{~~~~~~~~~ of $p_c^{'}, p_x^{'}, p_y^{'}$ from point cloud $PC$}$\\;\n \\tcc{\\fontsize{7}{7}\\selectfont shift $\\vec{x}, \\vec{y}$ to the origin of the base or the world frame}\n $\\vec{x} \\gets \\vec{x}-\\vec{c}$\\; \n $\\vec{y} \\gets \\vec{y}-\\vec{c}$\\;\n \\tcc{\\fontsize{7}{7}\\selectfont estimate the object frame in terms of three orthonormal vectors \n $\\hat{i}, \\hat{j}$, and $\\hat{k}$.}\n $\\hat{i}, \\hat{j}, \\hat{k} \\gets \\text{from equation (\\ref{eqn:unitv})}$\\;\n \\tcc{\\fontsize{7}{7}\\selectfont compute the rotation $\\phi_i,\\theta_i,\\psi_i$ of the object frame $\\hat{i}, \\hat{j}$, $\\hat{k}$ with respect to the base or the world frame $\\vec{X}, \\vec{Y}, \\vec{Z}$.}\n $\\phi_i,\\theta_i,\\psi_i \\gets \\text{from equation (\\ref{eqn:eulerangles})}$\\;\n \n \n \\tcc{\\fontsize{7}{7}\\selectfont finally, publish the position and orientation of the object.}\n \\texttt{publish$(\\vec{c},\\phi_i,\\theta_i,\\psi_i)$}\\;\n }\n }\n }\n \\caption{Planar Pose Estimation}\n 
\\label{algorithm}\n\\end{algorithm}\n\n\\subsubsection{Homography Estimation and Perspective Transformation}\nA homography is an invertible mapping of points and lines on the projective plane that describes a 2D planar projective transformation~(figure~\\ref{fig:homography}) that can be estimated from a given pair of images. In simple terms, a homography is a matrix that maps a set of points in one image to the corresponding set of points in another image. We can use a homography matrix $\\mathbf{H}$ to find the corresponding points using equation \\ref{eqn:homography} and~\\ref{eqn:projection}, which defines the relation of projected point $(x^{'}, y^{'})$ (figure \\ref{fig:homography}) on the rotated plane to the reference point $(x,y)$. \n\nA 2D point $(x,y)$ in an image can be represented as a 3D vector $(x, y, 1)$ which is called the homogeneous representation of a point that lies on the reference plane or image of the planar object. In equation (\\ref{eqn:homography}), $\\mathbf{H}$ represents the homography matrix and $[x~y~1]^{T}$ is the homogeneous representation of the reference point $(x,y)$ and we can use the values of $a,b,c$ to estimate the projected point $(x^{'},y^{'})$ in equation (\\ref{eqn:projection}). \n\n\\begin{align}\n \\left [ \\begin{matrix} a \\\\ b \\\\ c \\end{matrix} \\right ] = \\mathbf{H}\\begin{bmatrix} x\\\\ y\\\\ 1\\\\ \\end{bmatrix} = \\begin{bmatrix} h_{11}&h_{12}&h_{13}\\\\ h_{21}&h_{22}&h_{23}\\\\ h_{31}&h_{32}&h_{33}\\\\ \\end{bmatrix} \\begin{bmatrix} x\\\\ y\\\\ 1\\\\ \\end{bmatrix}\n\\label{eqn:homography} \n\\end{align}\n\n\\begin{equation}\n \\begin{aligned}\n\\left \\lbrace \\begin{aligned}\n x^{'} = \\frac{a}{c} \\\\\n y^{'} = \\frac{b}{c} \n\\end{aligned} \\right . \n\\end{aligned}\n\\label{eqn:projection}\n\\end{equation}\n\nWe estimate the homography using the matches found from the nearest neighbor search as input; often these matches can have completely false correspondences, meaning they don't correspond to the same real-world feature at all which can be a problem in estimating the homography. So, we chose RANSAC~\\cite{ransac} to robustly estimate the homography by considering only inlier matches as it tries to estimate the underlying model parameters and detect outliers by generating candidate solutions through random sampling using a minimum number of observations.\n\nWhile the other techniques use as much data as possible to find the model parameters and then pruning the outliers, RANSAC uses the smallest set of data point possible to estimate the model, thus making it faster and more efficient than the conventional solutions. \n\n\\begin{figure}[h]\n\\begin{center}\n\\graphicspath{ {.\/images\/} }\n\\includegraphics[height=6cm]{images\/homography.png}\n\\end{center}\n \\caption{Object in different orientation from the camera}\n\\label{fig:homography}\n\\end{figure}\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[width=0.85\\linewidth, height=0.50\\linewidth]{images\/sys_arch_2.png}\n\\end{center}\n \\caption{System architecture.}\n\\label{fig:sysarch}\n\n\\end{figure*}\n\n\\subsubsection{Finding directional vectors on the object}\n\nIn order to find the pose of a planar object, we need to find the three orthonormal vectors on the planar object that describe the object coordinate frame and consequently, the orientation of the object relative to the world coordinate system. We start by estimating the vectors on the planar object that form the basis of the plane, illustrated in figure~\\ref{eqn:axis}. 
Then, we take the cross product of these two vectors to find the third directional vector which is the normal to the object surface. Let's denote the world coordinate system as $XYZ$, and the object coordinate system as $xyz$. We define the axes of the orientation in relation to a body as: \n\\\\\n\n\\qquad \\qquad \\qquad \\qquad $x \\to \\text{right}$\n\n\\qquad \\qquad \\qquad \\qquad $y \\to \\text{up}$ \n\n\\qquad \\qquad \\qquad \\qquad $z \\to \\text{towards the camera}$ \n\nFirst, we retrieve the locations of the three points $p_c, p_x, p_y$ on the planar object from the reference image using equation (\\ref{eqn:axis}) and then locate the corresponding points $p_{c}^{'}, p_{x}^{'}, p_{y}^{'}$ on the image acquired from the Microsoft Kinect sensor. We estimate the locations of these points using the homography matrix $\\mathbf{H}$ as shown in equation~\\ref{eqn:homography}, \\ref{eqn:projection}. Then we find the corresponding 3D locations of $p_{c}^{'}, p_{x}^{'}, p_{y}^{'}$ from the point cloud data also obtained from the Microsoft Kinect sensor. We denote them as vectors $\\vec{c}$,$\\vec{x}$, and $\\vec{y}$. Here, $\\vec{c}$ represents the translation vector from the object frame to the world frame and also the position of the object in the world frame. Next, we subtract $\\vec{c}$ from $\\vec{x}$, $\\vec{y}$ which essentially gives us two vectors $\\vec{x}$ and $\\vec{y}$ centered at the origin of the world frame. We take the cross product of these two vectors $\\vec{x}, \\vec{y}$ to find the third axis $\\vec{z}$. But, depending on the homography matrix the estimated axes $\\vec{x}$ and $\\vec{y}$ might not be exactly orthogonal, so we take the cross product of $\\vec{y}$ and $\\vec{z}$ to recalculate the vector $\\vec{x}$. Now that we have three orthogonal vectors, we compute the three unit vectors $\\hat{i}$, $\\hat{j}$, and $\\hat{k}$ along the $\\vec{x}$, $\\vec{y}$, and $\\vec{z}$ vectors respectively using equation~\\ref{eqn:unitv}. These three orthonormal vectors describe the object frame. These vectors were projected onto the image plane to give a visual confirmation of the methods applied; figure~\\ref{fig:posevizcam} shows the orthogonal axes projected onto the object plane.\n\n \n\n\\begin{equation}\n\\vcenter{\\hbox{\\begin{minipage}{5cm}\n\\centering\n\\includegraphics[width=4cm,height=4cm]{images\/box_axis.png}\n\\captionof{figure}{Axis on the reference plane}\n\\end{minipage}}}\n\\begin{aligned}\n\\left \\lbrace \\begin{aligned}\np_c &= (w\/2, h\/2)\n\\\\\np_x &= (w, h\/2)\n\\\\\np_y &= (w\/2, 0)\n\\end{aligned} \\right . 
\n\\end{aligned}\n\\label{eqn:axis}\n\\end{equation}\n\n\\begin{figure}[h] \n\\centering\n {\\includegraphics[width=78pt, height=78pt]{images\/pose1.png}} \n \\hspace{1px}\n {\\includegraphics[width=78pt, height=78pt]{images\/pose2.png}}\n \\hspace{1px}\n {\\includegraphics[width=78pt, height=78pt]{images\/pose3.png}}\n \\caption{Computed third directional axis projected onto image plane}\n \\label{fig:posevizcam}\n\\end{figure}\n\n\\begin{align}\n\\begin{split}\n\\hat{j} = \\frac{\\Vec{y}}{|\\Vec{y}|} = [j_X \\hspace{0.15cm} j_Y \\hspace{0.15cm} j_Z]\n\\\\\n\\hat{k} = \\frac{\\Vec{x} \\times \\Vec{y}}{|\\Vec{x} \\times \\Vec{y}|} = [k_X \\hspace{0.15cm} k_Y \\hspace{0.15cm} k_Z]\n\\\\\n\\hat{i} = \\frac{\\Vec{y} \\times \\Vec{z}}{|\\Vec{y} \\times \\Vec{z}|} = [i_X \\hspace{0.15cm} i_Y \\hspace{0.15cm} i_Z]\n\\end{split}\n\\label{eqn:unitv}\n\\end{align}\n\n\\subsubsection{Planar pose computation}\nWe compute the pose of the object in terms of the Euler angles. Euler angles are three angles that describe the orientation of a rigid body with respect to a fixed coordinate system. The rotation matrix $\\mathbf{R}$ in equation (\\ref{eqn:rotR}) rotates X axis to $\\hat{i}$, Y axis to $\\hat{j}$, and Z axis to $\\hat{k}$. \n\n\\begin{align}\n \\mathbf{R} = \\left [ \\begin{matrix} i_X & j_X & k_X \\\\ i_Y & j_Y & k_Y \\\\ i_Z & j_Z & k_Z \\end{matrix} \\right ]\n\\label{eqn:rotR} \n\\end{align}\n\nEuler angles are combinations of the three axis rotations (equation~\\ref{eqn:euler-axis}), where $\\phi$, $\\theta$, and $\\psi$ specify the intrinsic rotations around the X, Y, and Z axis respectively. The combined rotation matrix is a product of three matrices: $\\mathbf{R} = \\mathbf{R}_z \\mathbf{R}_y \\mathbf{R}_x$ (equation~\\ref{eqn:rotcomb}); the first intrinsic rotation rightmost, last leftmost.\n\n\\begin{align}\\medmath{\n \\left\\lbrace \\begin{aligned}\n\\mathbf{R}_x &= \\colvec {1 & 0 & 0 \\\\ 0 & \\cos\\phi & -\\sin\\phi \\\\ 0 & \\sin\\phi & \\cos\\phi } \\\\\n\\mathbf{R}_y &= \\colvec {\\cos\\theta & 0 & \\sin\\theta \\\\ 0 & 1 & 0 \\\\ -\\sin\\theta & 0 & \\cos\\theta } \\\\\n\\mathbf{R}_z &= \\colvec {\\cos\\psi & -\\sin\\psi & 0 \\\\ \\sin\\psi & \\cos\\psi & 0 \\\\ 0 & 0 & 1 }\n \\end{aligned} \\right .}\n \\label{eqn:euler-axis}\n\\end{align}\n\n\\begin{align}\n\\mathbf{R} = \n \\begin{bmatrix*}\n c\\theta c\\psi\n & s\\phi s\\theta c\\psi - c\\phi s\\psi\n & c\\phi s\\theta c\\psi + s\\phi s\\psi\n \\\\ c\\theta s\\psi\n & s\\phi s\\theta s\\psi + c\\phi c\\psi\n & c\\phi s\\theta s\\psi - s\\phi c\\psi\n \\\\ -s\\theta\n & s\\phi c\\theta\n & c\\phi c\\theta\n \\end{bmatrix*}\n\\label{eqn:rotcomb}\n\\end{align}\n\nIn equation~\\ref{eqn:rotcomb}, $c$ and $s$ represents $\\cos$ and $\\sin$ respectively.\n\nSolving for $\\phi, \\theta$, and $\\psi$ from (\\ref{eqn:rotR}) and (\\ref{eqn:rotcomb}), we get,\n\n\\begin{align}\\medmath{\n \\left\\lbrace \\begin{aligned}\n \\phi &= \\tan^{-1}\\left(\\frac{j_Z}{k_Z}\\right) \\\\\n \\theta &= \\tan^{-1}\\left(\\frac{-i_Z}{\\sqrt{1-i_Z^2}}\\right) = \\sin^{-1}\\left(-i_Z\\right) \\\\\n \\psi &= \\tan^{-1}\\left(\\frac{i_Y}{i_X}\\right)\n \\end{aligned} \\right .}\n \\label{eqn:eulerangles}\n\\end{align}\n\n\\begin{figure*}\n\\centering\n\\subfloat[]{\\includegraphics[width = 135pt, height=80pt]{{images\/box_robot-head_1.jpg}}}\\hspace{10px}\n\\subfloat[]{\\includegraphics[width = 135pt, height=80pt]{{images\/box_robot-head_2.jpg}} }\\hspace{10px}\n\\subfloat[]{\\includegraphics[width = 135pt, 
height=80pt]{{images\/box_robot-head_3.jpg}}}\n\n\\vspace{0pt}\n\\subfloat[]{\\includegraphics[width = 135pt, height=80pt]{{images\/box_rviz_1.jpg}}} \\hspace{10px}\n\\subfloat[]{\\includegraphics[width = 135pt, height=80pt]{{images\/box_rviz_2.jpg}}} \\hspace{10px}\n\\subfloat[]{\\includegraphics[width = 135pt, height=80pt]{{images\/box_rviz_3.jpg}}}\n\n\\vspace{-5pt}\n \\caption{(a),(b),(c) are recovered poses from robot's camera and (d),(e),(f) are corresponding poses visualized in RViz}\n\\label{fig:poseviz}\n\\end{figure*}\n\n\n\n\n\\subsection{Training Grasps for Humanoid Robots}\nTo ensure that the robot can grasp objects in an adaptive manner, we pre-train the robot to perform a set of canonical grasps. We place the object and the robot's gripper close to each other and record the relative pose. This essentially gives us the pose of the gripper with respect to the object. Figure~\\ref{fig:can_grasp} illustrates the training process in which the robot's gripper and a cracker box have been placed in close proximity and the relative poses have been recorded for grasping the objects from the side. \n\n\\begin{equation}\n \\textbf{T}_{s}^{d}\n = \\begin{bmatrix} \\textbf{R}_{s}^{d} & P_{s}^{d} \\\\ 0 & 1 \\end{bmatrix}\n =\\begin{bmatrix} r_{11} & r_{12} & r_{13} & X_t \\\\\n r_{21} & r_{22} & r_{23} & Y_t \\\\\n r_{31} & r_{32} & r_{33} & Z_t \\\\\n 0 & 0 & 0 & 1 \\end{bmatrix} \n\\label{eqn:transmat}\n\\end{equation}\n\nEquation~\\ref{eqn:transmat} outlines the structure of a transformation matrix $\\textbf{T}_{s}^{d}$ that describes the rotation and translation of frame $d$ with respect to frame $s$; $\\textbf{R}_{s}^{d}$ represents the rotation matrix similar to equation~\\ref{eqn:rotcomb} and $P_{s}^{d}=[X_{t},Y_{t},Z_{t}]^{T}$ is the translation matrix which is the 3D location of the origin of frame $d$ in frame $s$.\n\nDuring the training phase, we first formulate the transformation matrix $\\textbf{T}_{b}^{o}$ using the rotation matrix and the object location. We take the inverse of $\\textbf{T}_{b}^{o}$ which gives us the transformation matrix $\\textbf{T}_{o}^{b}$. We then use the equation~\\ref{eqn:graspmat} to record the transformation $\\mathbf{T}_{o}^{g}$ of the robot's wrist relative to the object.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.65\\linewidth]{images\/canonical_grasp.png}\n \\caption{Pre-training canonical grasp}\n \\label{fig:can_grasp}\n\\end{figure}\n\n\\begin{equation} \\label{eqn:graspmat}\nT_{o}^{g} = T_{o}^{b} \\times T_{b}^{g} \\ \\text{where} \\ T_{o}^{b} = (T_{b}^{o})^{-1}\n\\end{equation}\n\\par\n\nIn the equation~\\ref{eqn:graspmat}, $b$ refers to the robot's base, $o$ refers to the object, and $g$ refers to the wrist of the robot to which the gripper is attached. 
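As a concrete illustration of this transform bookkeeping, the following is a minimal NumPy sketch (an illustrative assumption, not the exact code of our system) of recording the canonical grasp transform $\\mathbf{T}_{o}^{g}$ in equation~\\ref{eqn:graspmat} from a measured object pose and wrist pose:

\\begin{verbatim}
# Illustrative sketch of recording the canonical grasp T_o^g given the
# object pose T_b^o and the wrist pose T_b^g (eqn:graspmat).
# R_object, p_object, R_wrist, p_wrist are assumed to be available from
# the vision module and the robot kinematics, respectively.
import numpy as np

def make_T(R, p):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = p
    return T

T_b_o = make_T(R_object, p_object)   # object pose in the base frame
T_b_g = make_T(R_wrist, p_wrist)     # wrist pose in the base frame

# T_o^g = (T_b^o)^{-1} x T_b^g: the wrist pose relative to the object,
# stored once during training and reused for adaptive grasping later.
T_o_g = np.linalg.inv(T_b_o) @ T_b_g
\\end{verbatim}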
Once we record the matrix, we get a new pose of the object from the vision in the testing phase and generate the final matrix using the equation~\\ref{eqn:fingrasp} that has the new position and orientation of the robot's wrist in matrix form .\n\\begin{equation} \\label{eqn:fingrasp}\nT_{b}^{g} = T_{b}^{o} \\times T_{o}^{g}\n\\end{equation}\n\\par\n\nWe then extract the rotational angles $\\gamma$, $\\beta$, $\\alpha$~(roll, pitch, yaw) of the grasp pose from matrix $\\mathbf{T}_{b}^{g}$ using equation~\\ref{eqn:grippereulerangles}\n\n\\begin{align}\\medmath{\n \\left\\lbrace \\begin{aligned}\n \\gamma=tan^{-1}(r_{32}\/r_{33}) \\\\\n \\beta=tan^{-1}\\frac{-r_{31}}{\\sqrt {{r_{32}}^2+{r_{33}}^2}}\\\\\n \\alpha=tan^{-1}(r_{21}\/r_{11})\n \\end{aligned} \\right .}\n \\label{eqn:grippereulerangles}\n\\end{align}\n\n\\section{Evaluation}\nThe proposed object recognition and pose estimation algorithm was implemented on an Ubuntu 14.04 platform equipped with 3.0 GHz Intel R Core(TM) i5-7400 CPU and 8GB system memory. The RGB-D camera used in the experiments was a Microsoft Kinect sensor v1. We evaluated the proposed algorithm by comparing the accuracy of object recognition, pose estimation, and execution time of four different feature descriptors. We also validated the effectiveness of our approach for adaptive grasping by conducting experiments with the PR2 robot.\n\n\\subsection{Object detection and pose estimation}\n\nWithout enough observable features, the system would fail to find good matches that are required for accurate homography estimation. Consequently, our object detection and pose estimation approach has a constraint on the out-of-plane rotation $\\theta$, illustrated in figure~\\ref{fig:theta}. In other words, if the out-of-plane rotation of the object is more than $\\theta$, the system would not be able to recognize the object. Fast execution is also a crucial aspect to facilitate multiple object detection and pose estimation for real-time applications. We experimented with four different descriptors on several planar objects and the comparative result is shown in table~\\ref{tbl:comparisondescriptor}. The execution time was measured for the object detection and pose estimation step. 
AKAZE and BRISK had much lower processing time for detection and pose estimation, thus would have a better frame rate, but SIFT and SURF had larger out-of-plane rotational freedom.\n\n\\begin{figure}[h]\n\\begin{center}\n\\graphicspath{ {.\/images\/} }\n\\includegraphics[width=5cm]{images\/rot_all.png}\n\\end{center}\n \\caption{Out of plane rotation}\n\\label{fig:theta}\n\\end{figure}\n\n\\begin{table}[h!]\n\\centering\n \\caption{Comparison of feature descriptors}\n \n \\begin{tabular}{l|c|s}\n \\toprule\n \n \\textbf{Descriptor} & \\begin{tabular}[x]{@{}c@{}}\\textbf{Maximum out of}\\\\ \\textbf{plane rotation} (degree)\\end{tabular} & \\begin{tabular}[x]{@{}c@{}}\\textbf{Execution time}\\\\ (second)\\end{tabular}\\\\\n \n \\midrule\n \n SIFT & $48^{\\circ}\\pm2^{\\circ}$ & \\text{~~~~~~~0.21s}\\\\ \\hline\n SURF & $37^{\\circ}\\pm2^{\\circ}$ & \\text{~~~~~~~0.27s}\\\\ \\hline\n AKAZE & $18^{\\circ}\\pm1^{\\circ}$ & \\text{~~~~~~~0.05s}\\\\ \\hline\n BRISK & $22^{\\circ}\\pm2^{\\circ}$ & \\text{~~~~~~~0.06s}\\\\\n \\bottomrule\n \\end{tabular}\n \\label{tbl:comparisondescriptor}\n\\end{table}\n\nWe also compared the \\textit{RMS} difference $\\epsilon$~(equation~\\ref{eqn:epsilon}) of re-calculated $\\vec{x}$ to original $\\vec{x}$ ($\\vec{x}^{'}$ in the equation) for increasing out-of-plane rotation of the planar objects to assess the homography estimation. Ideally, the two estimated vectors $\\vec{x}$ and $\\vec{y}$, which describe the basis of the plane of the planar object, should be orthogonal to each other, but often they are not. So, the values of $\\epsilon$ in figure~\\ref{fig:epsilon} give us an indication of the average error in homography estimation for different out-of-plane rotations. In figure~\\ref{fig:epsilon}, we can see AKAZE has much higher $\\epsilon$ values while the rest remained within a close range. This tells us AKAZE results in a much larger error in estimating the homography than the other methods. \n\n\\begin{figure}[h]\n\\begin{center}\n\\graphicspath{ {.\/images\/} }\n\\includegraphics[width=\\linewidth]{images\/chart.png}\n\\end{center}\n \\caption{Out of plane rotation vs $\\epsilon$}\n\\label{fig:epsilon}\n\\end{figure}\n\nWe chose SIFT and SURF to evaluate how the execution time for detection scales up while increasing the number of objects. From table~\\ref{tbl:multobjcompdesc}, which shows the mean processing time for object detection, we can see that SURF had a detection time around 50\\% more than SIFT in all the cases. This outcome coupled with the previous results prompted us to select SIFT for the subsequent experiments.\n\nThe system was capable of detecting multiple objects in real-time and at the same time could estimate their corresponding poses. Figure~\\ref{fig:multobjdet} shows detected objects with estimated directional planar vectors. 
We can also observe that the system was robust to in-plane rotation and partial occlusion.\n\n\\begin{table}[h]\n\\centering\n\\caption{\\centering Execution time of SIFT and SURF for multiple object detection}\n\\begin{tabular}{|c|c|c|}\n\\hline\n\\multirow{2}{*}{\\begin{tabular}[c]{@{}c@{}}Number of\\\\ Objects\\end{tabular}} & \\multicolumn{2}{c|}{\\begin{tabular}[c]{@{}c@{}}Detection time\\\\ (second)\\end{tabular}} \\\\ \\cline{2-3} \n & SIFT & SURF \\\\ \\hline\n1 & 0.06s & 0.09s \\\\ \\hline\n2 & 0.11s & 0.17s \\\\ \\hline\n3 & 0.17s & 0.26s \\\\ \\hline\n4 & 0.22s & 0.35s \\\\ \\hline\n5 & 0.28s & 0.45s \\\\ \\hline\n6 & 0.34s & 0.54s \\\\ \\hline\n\\end{tabular}\n\\label{tbl:multobjcompdesc}\n\\end{table}\n\n\\begin{figure}[h] \n\\centering\n {\\includegraphics[width=78pt, height=78pt]{images\/col1.png}} \n \\hspace{1px}\n {\\includegraphics[width=78pt, height=78pt]{images\/col2.png}}\n \\hspace{1px}\n {\\includegraphics[width=78pt, height=78pt]{images\/col3.png}}\n \\caption{\\centering Multiple object detection with estimated planar vectors}\n \\label{fig:multobjdet}\n\\end{figure}\n\n
We used RViz~\\cite{RViz}, a 3D visualizer for the Robot Operating System (ROS)~\\cite{ros}, to validate the pose estimation. The calculated directional axes were projected onto the image and the estimated poses were visualized in RViz. As shown in figure~\\ref{fig:poseviz}, we qualitatively verified the accuracy of the detection and the estimated pose by comparing the two outputs. We can see that both outputs render similar results. We conducted experiments with multiple objects and human-held objects as well. Figure~\\ref{fig:pose_check} illustrates the simultaneous detection and pose estimation of two different boxes, and the estimated pose of an object held by a human.\n\n\\begin{figure}[h] \n\\centering\n {\\includegraphics[width=100pt, height=100pt]{images\/Pose_Detection.png}} \n \\hspace{1px}\n {\\includegraphics[width=100pt, height=100pt]{images\/pose_detection_human.png}}\n \\caption{(a) Pose estimation of multiple objects (b) Estimated pose of an object held by a human}\n \\label{fig:pose_check}\n\\end{figure}\n\n\\begin{equation}\n \\epsilon = \\frac{1}{N}\\sum_{i=1}^{N}||\\vec{x_i}^{'}-\\vec{x_i}||, \\text{\\fontsize{8}{8}\\selectfont where N is the number of frames}\n \\label{eqn:epsilon}\n\\end{equation}\n\n\\subsection{Adaptive grasping}\n
We assessed our approach for adaptive grasping keeping two different aspects of robotic applications in mind: robotic tasks that require 1) interacting with a static environment, and 2) interacting with humans.\n\n
We first tested our system for static objects where the object was attached to a tripod. Next, we set up experiments where the object was held by a human. We used a sticker book and a cartoon book and evaluated our system on a comprehensive set of poses. In almost all the experiments, the robot successfully grasped the object in a manner consistent with its training. There were some poses that were not reachable by the robot: for instance, when the object was pointing inward along the X-axis in the robot reference frame, it was not possible for the end-effector to make a top grasp.
Figure~\\ref{fig:tripod_results} and \\ref{fig:human_results} show the successful grasping of the robot for both types of experiments.\n\n\\begin{figure}\n \\centering\n \\captionsetup[subfigure]{labelformat=empty}\n \n \\subfloat[]{\\includegraphics[width=0.3\\linewidth, height=0.3\\linewidth]{images\/exp_3.1.jpg}}\n \\vspace{5px}\n \\subfloat[]{\\includegraphics[width=0.3\\linewidth, height=0.3\\linewidth]{images\/exp_3.2.jpg}}\n \\vspace{5px}\n \\subfloat[]{\\includegraphics[width=0.3\\linewidth, height=0.3\\linewidth]{images\/exp_3.3.jpg}}\\\\[-5ex]\n \n \\subfloat[]{\\includegraphics[width=0.3\\linewidth, height=0.3\\linewidth]{images\/exp_4.1.jpg}}\n \\vspace{5px}\n \\subfloat[]{\\includegraphics[width=0.3\\linewidth, height=0.3\\linewidth]{images\/exp_4.2.jpg}}\n \\vspace{5px}\n \\subfloat[]{\\includegraphics[width=0.3\\linewidth, height=0.3\\linewidth]{images\/exp_4.3.jpg}}\\\\[-5ex]\n \n \\subfloat[]{\\includegraphics[width=0.3\\linewidth, height=0.3\\linewidth]{images\/exp_5.1.jpg}}\n \\vspace{5px}\n \\subfloat[]{\\includegraphics[width=0.3\\linewidth, height=0.3\\linewidth]{images\/exp_5.2.jpg}}\n \\vspace{5px}\n \\subfloat[]{\\includegraphics[width=0.3\\linewidth, height=0.3\\linewidth]{images\/exp_5.3.jpg}}\n \n \\caption{Robot grasping an object from a tripod. Left: initial position of the robot's gripper, middle: gripper adapting to the object's pose, right: grasping of the object.}\n \\label{fig:tripod_results}\n\\end{figure}\n\n\n\\begin{figure}\n \\centering\n \\captionsetup[subfigure]{labelformat=empty}\n \n \\subfloat[]{\\includegraphics[width=0.30\\linewidth, height=0.35\\linewidth]{images\/exp_6.1.jpg}}\n \\vspace{5px}\n \\subfloat[]{\\includegraphics[width=0.30\\linewidth, height=0.35\\linewidth]{images\/exp_6.2.jpg}}\n \\vspace{5px}\n \\subfloat[]{\\includegraphics[width=0.30\\linewidth, height=0.35\\linewidth]{images\/exp_6.3.jpg}}\\\\[-5ex]\n \n \\subfloat[]{\\includegraphics[width=0.30\\linewidth, height=0.35\\linewidth]{images\/exp_7.1.jpg}}\n \\vspace{5px}\n \\subfloat[]{\\includegraphics[width=0.30\\linewidth, height=0.35\\linewidth]{images\/exp_7.2.jpg}}\n \\vspace{5px}\n \\subfloat[]{\\includegraphics[width=0.30\\linewidth, height=0.35\\linewidth]{images\/exp_7.3.jpg}}\\\\[-5ex]\n \n \\subfloat[]{\\includegraphics[width=0.30\\linewidth, height=0.35\\linewidth]{images\/exp_8.1.jpg}}\n \\vspace{5px}\n \\subfloat[]{\\includegraphics[width=0.30\\linewidth, height=0.35\\linewidth]{images\/exp_8.2.jpg}}\n \\vspace{5px}\n \\subfloat[]{\\includegraphics[width=0.30\\linewidth, height=0.35\\linewidth]{images\/exp_8.3.jpg}}\n \n \\caption{Robot grasping an object held by a human. Left: initial position of the robot's gripper, middle: gripper adapting to the object's pose, right: grasping of the object.}\n \\label{fig:human_results}\n\\end{figure}\n\n\n\n\n\\section{Conclusion and Future Work}\n\nThis work presents an approach that enables humanoid robots to grasp objects using planar pose estimation based on RGB image and depth data. We examined the performance of four feature-detector-descriptors for object recognition and found SIFT to be the best solution. We used FLANN's K-d Tree Nearest Neighbor implementation, and Bruteforce Hamming to find the keypoint matches and employed RANSAC to estimate the homography. The homography matrix was used to approximate the three orthonormal directional vectors on the planar object using perspective transformation. The pose of the planar object was estimated from the three directional vectors. 
The system was able to detect multiple objects and estimate their poses in real time. We also conducted experiments with the humanoid PR2 robot to show the practical applicability of the framework, where the robot grasped objects by adapting to a range of different poses.\n\n
In the future, we plan to add GPU acceleration for the proposed algorithm that would further improve the overall computational efficiency of the system. We would like to extend the algorithm to automatically prioritize certain objects and limit the number of objects needed for detection based on different scheduled tasks. Finally, we would like to incorporate transferring grasp configurations for familiar objects and explore other feature matching techniques, e.g., multi-probe LSH, hierarchical k-means trees, etc.\n\n \\bibliographystyle{unsrt2}\n\n\\section{Introduction}\n\n
Support Vector Data Description (SVDD) is a machine learning technique used for single-class classification and outlier detection. The SVDD technique is similar to Support Vector Machines and was first introduced by Tax and Duin \\cite{tax2004support}. It can be used to build a flexible boundary around single-class data. The data boundary is characterized by observations designated as support vectors. SVDD is used in domains where the majority of the data belongs to a single class. Several researchers have proposed the use of SVDD for multivariate process control \\cite{sukchotrat2009one}. Other applications of SVDD involve machine condition monitoring \\cite{widodo2007support, ypma1999robust} and image classification \\cite{sanchez2007one}.\n\n\\subsection{\\bf Mathematical Formulation of SVDD}\n\\label{mfsvdd}\n\\paragraph*{\\bf Normal Data Description}\\mbox{}\\\\\nThe SVDD model for normal data description builds a minimum-radius hypersphere around the data.\n\\paragraph*{\\bf Primal Form}\\mbox{}\\\\\nObjective function:\n\\begin{equation}\n\\min R^{2} + C\\sum_{i=1}^{n}\\xi _{i},\n\\end{equation}\nsubject to:\n\\begin{align}\n\\|x_{i}-a\\|^2 \\leq R^{2} + \\xi_{i}, \\forall i=1,\\dots,n,\\\\\n\\xi _{i}\\geq 0, \\forall i=1,\\dots,n.\n\\end{align}\nwhere:\\\\\n$x_{i} \\in {\\mathbb{R}}^{m}, i=1,\\dots,n$ represents the training data,\\\\\n$R$: the radius, a decision variable,\\\\\n$\\xi_{i}$: the slack variable for observation $i$,\\\\\n$a$: the center, a decision variable, \\\\\n$C=\\frac{1}{nf}$: the penalty constant that controls the trade-off between the volume and the errors, and\\\\\n$f$: the expected outlier fraction.\n\\paragraph*{\\bf Dual Form}\\mbox{}\\\\\nThe dual formulation is obtained using the Lagrange multipliers.\\\\\nObjective function:\n\\begin{equation}\n\\max\\ \\sum_{i=1}^{n}\\alpha _{i}(x_{i}.x_{i}) - \\sum_{i,j}^{ }\\alpha _{i}\\alpha _{j}(x_{i}.x_{j}),\n\\end{equation}\nsubject to:\n\\begin{align}\n& & \\sum_{i=1}^{n}\\alpha _{i} = 1,\\label{sv:s}\\\\\n& & 0 \\leq \\alpha_{i}\\leq C,\\forall i=1,\\dots,n.\n\\end{align}\nwhere:\\\\\n$\\alpha_{i}\\in \\mathbb{R}$: the Lagrange multipliers,\\\\\n$C=\\frac{1}{nf}$: the penalty constant.\n\\paragraph*{\\bf Duality Information}\\mbox{}\\\\\nDepending upon the position of the observation, the following results hold:\nCenter Position: \\begin{equation} \\sum_{i=1}^{n}\\alpha _{i}x_{i}=a.
\\label{sv:0} \\end{equation}\nInside Position: \\begin{equation} \\left \\| x_{i}-a \\right \\| < R \\rightarrow \\alpha _{i}=0.\\end{equation}\nBoundary Position: \\begin{equation} \\left \\| x_{i}-a \\right \\| = R \\rightarrow 0< \\alpha _{i}< C.\\end{equation}\nOutside Position: \\begin{equation}\\left \\| x_{i}-a \\right \\| > R \\rightarrow \\alpha _{i}= C. \\label{sv:1} \\end{equation}\nThe radius of the hypersphere is calculated as follows:\\\\\n\\begin{equation}\nR^{2}=(x_{k}.x_{k})-2\\sum_{i}^{ }\\alpha _{i}(x_{i}.x_{k})+\\sum_{i,j}^{ }\\alpha _{i}\\alpha _{j}(x_{i}.x_{j}),\n\\label{eq:a}\n\\end{equation}\nusing any $x_{k} \\in SV_{<C}$, the set of support vectors with $0 < \\alpha_{k} < C$. Observations whose squared distance from the center exceeds $R^{2}$ are designated as outliers.\n\n
The spherical data boundary can include a significant amount of space with a very sparse distribution of training observations, which leads to a large number of false positives. The use of kernel functions leads to a more compact representation of the training data.\n\\paragraph*{\\bf Flexible Data Description}\\mbox{}\\\\\nThe Support Vector Data Description is made flexible by replacing the inner product $(x_{i}.x_{j})$ in equation \\eqref{eq:a} with a suitable kernel function $K(x_{i},x_{j})$. The Gaussian kernel function used in this paper is defined as:\n\\begin{equation}\nK(x_{i}, x_{j})= \\exp\\left(\\dfrac{-\\|x_i - x_j\\|^2}{2s^2}\\right),\n\\label{eq:b}\n\\end{equation}\nwhere $s$ is the Gaussian bandwidth parameter.\n\n
The modified mathematical formulation of SVDD with the kernel function is:\n\nObjective function:\n\\begin{equation} \\label{eq:1}\n\\max\\ \\sum_{i=1}^{n}\\alpha _{i}K(x_{i},x_{i}) - \\sum_{i,j}^{ }\\alpha _{i}\\alpha _{j}K(x_{i},x_{j}),\n\\end{equation}\nsubject to:\n\\begin{align}\n& &\\sum_{i=1}^{n}\\alpha _{i} = 1, \\label{eq:2} \\\\\n& & 0 \\leq \\alpha_{i}\\leq C = \\frac{1}{nf} , \\forall i=1,\\dots,n. \\label{eq:3}\n\\end{align}\nConditions similar to \\eqref{sv:0} to \\eqref{sv:1} continue to hold even when the kernel function is used.\\\\\nThe threshold $R^{2}$ is calculated as:\n\\begin{multline}\nR^{2} = K(x_{k},x_{k})-2\\sum_{i}^{ }\\alpha _{i}K(x_{i},x_{k})+\\sum_{i,j}^{ }\\alpha _{i}\\alpha _{j}K(x_{i},x_{j}),\n\\end{multline}\nusing any $x_{k} \\in SV_{<C}$, the set of support vectors with $0 < \\alpha_{k} < C$. Observations whose kernel distance from the center exceeds this threshold are designated as outliers.\n\n\\section{Need for a Sampling-based Approach}\n
As outlined in Section \\ref{mfsvdd}, the SVDD of a data set is obtained by solving a quadratic programming problem. The time required to solve the quadratic programming problem is directly related to the number of observations in the training data set. The actual time complexity depends upon the implementation of the underlying quadratic programming solver. We used LIBSVM~\\cite{chang2011libsvm} to evaluate SVDD training time as a function of the training data set size. We have used C++ code that uses the LIBSVM implementation of SVDD for the examples in this paper; we have also provided a Python implementation which uses Scikit-learn~\\cite{scikit-learn} at \\cite{smsvddg}.\nFigure~\\ref{fig:image_0} shows processing time as a function of training data set size for the two donut data set (see Figure \\ref{fig:image_3} for a scatterplot of the two donut data). In Figure~\\ref{fig:image_0} the x-axis indicates the training data set size and the y-axis indicates processing time in minutes.
As indicated in Figure~\\ref{fig:image_0}, the SVDD training time is low for small or moderately sized training data but gets prohibitively high for large data sets.\n\n\\begin{figure}[h]\n\t\\centering\n\t\\includegraphics[scale=0.25]{image0}\n\t\\caption{SVDD Training Time: Two Donut data}\n\t\\label{fig:image_0}\n\\end{figure}\n\n
There are applications of SVDD in areas such as process control and equipment health monitoring where the size of the training data set can be very large, consisting of a few million observations. The training data set consists of sensor readings measuring multiple key health or process parameters at a very high frequency. For example, a typical airplane currently has $\\approx$7,000 sensors measuring critical health parameters and creates 2.5 terabytes of data per day. By 2020, this number is expected to triple or quadruple to over 7.5 terabytes \\cite{ege2015IoT}. In such applications, multiple SVDD training models are developed, each representing a separate operating mode of the equipment or process settings. The success of SVDD in these applications requires algorithms which can train using huge amounts of training data in an efficient manner.\n\n
To improve the performance of SVDD training on large data sets, we propose a new sampling-based method. Instead of using all observations from the training data set, the algorithm computes the training data SVDD by iteratively computing SVDD on independent random samples obtained from the training data set and combining them. The method works well even when the random samples have few observations. We also provide a criterion for detecting convergence. At convergence, our method provides a data description that compares favorably with the result obtained by using all the training data set observations.\n\n
The rest of this document is organized as follows: Section~\\ref{sbm} provides details of the proposed sampling-based iterative method; results of training with the proposed method are provided in Section~\\ref{res}; the analysis of high dimensional data is provided in Section~\\ref{pd}; the results of a simulation study on random polygons are provided in Section~\\ref{ss}; and we provide our conclusions in Section~\\ref{cn}.\n\n\\textbf{Note:} \\textit{In the remainder of this paper, we refer to the training method using all observations in one iteration as the full SVDD method.}\n\n\\section{Sampling-based Method}\n\\label{sbm}\n
The Decomposition and Combination method of Luo et al.~\\cite{luo2010fast} and the K-means Clustering method of Kim et al.~\\cite{kim2007fast} both use sampling for fast SVDD training, but are computationally expensive. The first method, by Luo et al., uses an iterative approach and requires one scoring action on the entire training data set per iteration. The second method, by Kim et al., is a classic divide-and-conquer algorithm. It uses each observation from the training data set to arrive at the final solution.\n\n
In this section we describe our sampling-based method for fast SVDD training. The method iteratively samples from the training data set with the objective of updating a set of support vectors called the master set of support vectors ($SV^{*}$). During each iteration, the method updates $SV^{*}$ and the corresponding threshold value $R^{2}$ and center $a$. As the threshold value $R^{2}$ increases, the volume enclosed by $SV^{*}$ increases.
The method
stops iterating and provides a solution when the threshold value $R^{2}$ and the
center $a$ converge. At convergence,
the members of the master set of support vectors $SV^{*}$ characterize
the description of the training data set. For all test cases, our method
provided a good approximation to the solution that can be obtained by using all
observations in the training data set.

Our method addresses the drawbacks of the existing sampling-based methods proposed
by Luo et al.~\\cite{luo2010fast} and Kim et al.~\\cite{kim2007fast}. In each
iteration, our method learns using a very small sample from the training data
set, and overall it typically uses only a small subset of the training data set. The method
does not require any scoring actions while it trains.

The sampling method works well for a range of sample sizes for the random draws in the iterations.
It also provides a better alternative to training SVDD on one large random sample from the training data
set, since establishing the right sample size, especially with high-dimensional data, is
a challenge.

The important steps in this algorithm are outlined below:\\mbox{}\\\\
\\textbf{Step 1:}
The algorithm is initialized by selecting a random sample $S_{0}$ of size $n$
from the training data set of $M$ observations ($n \\ll M$). The SVDD of $S_{0}$ is
computed to obtain the corresponding set of support vectors $SV_{0}$. The set
$SV_{0}$ initializes the master set of support vectors $SV^{*}$. The iteration
number $i$ is set to 1.\\\\
\\textbf{Step 2:}
During this step, the algorithm updates the master set of support vectors
$SV^{*}$ until the convergence criterion is satisfied. In each iteration $i$,
the following steps are executed:
\\begin{adjustwidth}{2mm}{0pt} \\textbf{Step 2.1:} A random sample $S_{i}$ of size $n$ is selected and its SVDD is computed. The corresponding support vectors are designated as $SV_{i}$.\\\\
\\textbf{Step 2.2:} A union of $SV_{i}$ with the current master set of support vectors $SV^{*}$ is taken to obtain a set $S_{i}^{'}$ ($S_{i}^{'}=SV_{i} \\bigcup SV^{*}$).\\\\
\\textbf{Step 2.3:} The SVDD of $S_{i}^{'}$ is computed to obtain the corresponding support vectors $SV_{i}^{'}$, threshold value
$R_{i}^{2}$ and ``center'' $a_{i}$ (which we define as $\\sum_{i}\\alpha_i x_i$ even when a kernel is used). The set $SV_{i}^{'}$ is designated as the new master set of support vectors $SV^{*}$.
\\end{adjustwidth}
\\textbf{Convergence Criteria:}
At the end of each iteration $i$, the following conditions are checked to determine convergence.
\\begin{adjustwidth}{2mm}{0pt}
\\begin{enumerate}
\\item $i = maxiter$, where $maxiter$ is the maximum number of iterations; or
\\item $ \\| a_{i} - a_{i-1} \\| \\le \\epsilon_1 \\|a_{i-1}\\|$, and
 $\\left | R_{i}^{2}-R_{i-1}^{2} \\right | \\le \\epsilon_2 R_{i-1}^{2}$, where $\\epsilon_1,\\epsilon_2$ are appropriately
chosen tolerance parameters.
\\end{enumerate}
\\end{adjustwidth}
If the maximum number of iterations is reached, or the second condition is satisfied for $t$ consecutive iterations,
convergence is declared. In many cases checking the convergence of just $R_i^2$ suffices.

The pseudo-code for this method is provided in Algorithm~\\ref{alg:the_alg1}.
The pseudo-code uses the following notation:
\\begin{enumerate}
\\item $S_{i} \\leftarrow SAMPLE (T, n)$ denotes the data set $S_{i}$ obtained by selecting a random sample of size $n$ from data set $T$.
\\item $\\delta S_{i}$ denotes SVDD computation on data set $S_{i}$.
\\item $\\langle SV_{i}, R_{i}^{2}, a_{i} \\rangle \\leftarrow \\delta S_{i} $ denotes the set of support vectors $SV_{i}$, threshold value $R_{i}^{2}$ and center $a_{i}$ obtained by performing SVDD computations on data set $S_{i}$.
\\end{enumerate}
\\begin{algorithm}
	\\caption{Sampling-based iterative method}
	\\label{alg:the_alg1}
	\\begin{algorithmic}[1]
		\\State $T$ (training data set), $n$ (sample size), convergence criteria, $s$ (Gaussian bandwidth parameter),
$f$ (fraction of outliers) and $t$ (required number of consecutive iterations satisfying the convergence criterion).
		\\State $S_{0} \\leftarrow SAMPLE(T, n)$
		\\State $ \\langle SV_{0}, R_{0}^{2}, a_{0} \\rangle \\leftarrow \\delta S_{0}$
		\\State $SV^{*} \\leftarrow SV_{0}$
		\\State $i=1$
		\\While {(Convergence criterion not satisfied for $t$ consecutive iterations)}
		\\State $S_{i} \\leftarrow SAMPLE(T, n)$
		\\State $\\langle SV_{i}, R_{i}^{2}, a_{i} \\rangle \\leftarrow \\delta S_{i}$
		\\State $S_{i}^{'} \\leftarrow SV_{i} \\bigcup SV^{*}$
		\\State $\\langle SV_{i}^{'}, R_{i}^{2'}, a_{i}^{'}\\rangle \\leftarrow \\delta S_{i}^{'}$
		\\State Test for convergence
		\\State $SV^{*} \\leftarrow SV_{i}^{'}$
		\\State $i=i+1$
		\\EndWhile
		\\State \\Return $SV^{*}$
	\\end{algorithmic}
\\end{algorithm}

As outlined in steps 1 and 2, the algorithm obtains the final training data
description by incrementally updating the master set of support vectors $SV^{*}$.
During each iteration, the algorithm first selects a small random
sample $S_{i}$, computes its SVDD and obtains the corresponding set of support
vectors $SV_{i}$. The support vectors of set $SV_{i}$ are included in the
master set of support vectors $SV^{*}$ to obtain $S_{i}^{'}$ ($S_{i}^{'}=SV_{i}
\\bigcup SV^{*}$). The set $S_{i}^{'}$ thus represents an incremental expansion
of the current master set of support vectors $SV^{*}$. Some members of $SV_{i}$
can potentially lie ``inside'' the data boundary characterized by $SV^{*}$; the
next SVDD computation, on $S_{i}^{'}$, eliminates such ``inside'' points.
During the initial iterations, as $SV^{*}$ gets updated, its threshold value $R_{i}^{2'}$ typically increases and
the master set of support vectors expands to describe the entire data set.

Each iteration of our algorithm involves two small SVDD computations and one union operation. The first SVDD
computation is fast since it is performed on a small sample of the training data set. For the remaining two operations, our
method exploits the fact that for most data sets the support vectors obtained from SVDD are a tiny fraction
of the input data set, so both the union operation and the second SVDD computation are fast. Our method therefore consists
of three fast operations per iteration. For most large data sets we have experimented on, the time to convergence is short,
and we achieve a reasonable approximation to the full SVDD in a fraction of the time needed to compute the SVDD on the full data set.
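
To make the iteration structure concrete, the following sketch shows one way Algorithm~\\ref{alg:the_alg1} might be written in Python. It is an illustration under stated assumptions rather than the implementation used for the results in this paper: the helper \\texttt{svdd\\_train} is a stand-in SVDD solver built on Scikit-learn's \\texttt{OneClassSVM} (equivalent to SVDD for a Gaussian kernel), and the parameter defaults are arbitrary.
\\begin{verbatim}
# Sketch of the sampling-based iterative method (Algorithm 1).
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.metrics.pairwise import rbf_kernel

def svdd_train(X, s, f):
    """Stand-in SVDD solver (an assumption, not the paper's LIBSVM code).
    Returns support vectors, threshold R^2 and center a = sum_i alpha_i x_i."""
    gamma = 1.0 / (2.0 * s * s)
    m = OneClassSVM(kernel="rbf", gamma=gamma, nu=f).fit(X)
    sv = m.support_vectors_
    alpha = np.abs(m.dual_coef_.ravel())
    alpha = alpha / alpha.sum()          # enforce sum_i alpha_i = 1
    K = rbf_kernel(sv, sv, gamma=gamma)
    # R^2 = K(x_k,x_k) - 2 sum_i alpha_i K(x_i,x_k)
    #       + sum_ij alpha_i alpha_j K(x_i,x_j), evaluated at one SV.
    r2 = 1.0 - 2.0 * float(alpha @ K[:, 0]) + float(alpha @ K @ alpha)
    return sv, r2, alpha @ sv

def sample_svdd(T, n, s, f, maxiter=1000, t=5, eps1=1e-4, eps2=1e-4, seed=0):
    rng = np.random.default_rng(seed)

    def draw():
        return T[rng.choice(len(T), size=n, replace=False)]

    SV_star, r2_prev, a_prev = svdd_train(draw(), s, f)        # Step 1
    streak = 0
    for i in range(1, maxiter + 1):
        SV_i, _, _ = svdd_train(draw(), s, f)                  # Step 2.1
        union = np.unique(np.vstack([SV_i, SV_star]), axis=0)  # Step 2.2
        SV_star, r2, a = svdd_train(union, s, f)               # Step 2.3
        # Convergence: relative change of center a and threshold R^2
        # stays within tolerance for t consecutive iterations.
        close = (np.linalg.norm(a - a_prev) <= eps1 * np.linalg.norm(a_prev)
                 and abs(r2 - r2_prev) <= eps2 * r2_prev)
        streak = streak + 1 if close else 0
        if streak >= t:
            break
        r2_prev, a_prev = r2, a
    return SV_star, r2, a
\\end{verbatim}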
\\subsubsection{Distributed Implementation}
For extremely large training data sets, efficiency gains using a distributed
implementation are possible. Figure \\ref{fig:image_511} describes the SVDD solution
using the sampling method outlined in Section \\ref{sbm}, utilizing a distributed
architecture. The training data set with $M$ observations is first distributed
over $p$ worker nodes. Each worker node computes the SVDD of its $\\dfrac{M}{p}$
observations using the sampling method to obtain its own master set of support
vectors $SV_{i}^*$. Once the SVDD computations are completed, each worker node
promotes its own master set of support vectors $SV_{i}^*$ to the controller
node. The controller node takes a union of all worker-node master sets of
support vectors $SV_{i}^*$ to create a data set $S^{'}$. Finally, the solution is
obtained by performing an SVDD computation on $S^{'}$. The corresponding set of
support vectors $SV^{*}$ is used to approximate the original training data set
description.

\\begin{figure}[h]
	\\centering
	\\includegraphics[scale=0.25]{image1}
	\\caption{Distributed Implementation}
	\\label{fig:image_511}
\\end{figure}

\\section{Results}
\\label{res}
To test our method, we experimented with three data sets of known geometry, which we call the Banana-shaped, Star-shaped,
and Two-Donut-shaped
data. Figures \\ref{fig:image_1}-\\ref{fig:image_3} illustrate these three data sets.
\\begin{figure}
	\\centering
	\\subfloat[Banana-shaped data]{\\label{fig:image_1}\\includegraphics[scale=0.11]{image2}}
	\\subfloat[Star-shaped data]{\\label{fig:image_2}\\includegraphics[scale=0.11]{image3}}
	\\subfloat[Two-donut-shaped data]{\\label{fig:image_3}\\includegraphics[scale=0.11]{image4}}
	\\caption{Scatter plots}\\label{fig:image_4}
\\end{figure}
For each data set, we first obtained the SVDD using all observations.
Table~\\ref{table:t2} summarizes the results.\\\\
For each data set, we then varied the value of the sample size $n$ from 3 to 20
and obtained multiple SVDD solutions using the sampling method. For each sample size
value, the total processing time and the number of iterations until convergence were
noted. Figures \\ref{fig:image_5} to \\ref{fig:image_7} illustrate the results.
The vertical reference line indicates the sample size corresponding to the
minimum processing time. Table~\\ref{table:t3} provides the minimum processing
time, the corresponding sample size and other details for all three data sets.
Figure \\ref{fig:image_106} shows the convergence of the threshold $R^2$ for the
Banana-shaped data trained using the sampling method.

\\begin{figure}
\\centering
\\subfloat[Run time vs. sample size]{\\label{fig:image_51}\\includegraphics[width=3.00in]{image5_1}}\\mbox{}\\\\
\\subfloat[\\# iterations vs. sample size]{\\label{fig:image_52}\\includegraphics[width=3.00in]{image5_2}}
\\caption{Banana-shaped data}\\label{fig:image_5}
\\end{figure}

\\begin{figure}
\\centering
\\subfloat[Run time vs. sample size]{\\label{fig:image_61}\\includegraphics[width=3.00in]{image6_1}}\\mbox{}\\\\
\\subfloat[\\# iterations vs. sample size]{\\label{fig:image_62}\\includegraphics[width=3.00in]{image6_2}}
\\caption{Star-shaped data}\\label{fig:image_6}
\\end{figure}

\\begin{figure}
\\centering
\\subfloat[Run time vs. sample size]{\\label{fig:image_71}\\includegraphics[width=3.00in]{image7_1}}\\\\
\\subfloat[\\# iterations vs. sample size]{\\label{fig:image_72}\\includegraphics[width=3.00in]{image7_2}}
\\caption{Two Donut data}\\label{fig:image_7}
\\end{figure}

\\begin{figure}[h]
\\centering
\\includegraphics[width=3.00in]{image106}
\\caption{Plot of threshold $R^2$ - Banana-shaped data (sample size = 6)}
\\label{fig:image_106}
\\end{figure}

\\begin{table}[h!]
\\begin{minipage}{.5\\textwidth}
 \\begin{tabular}{||c c c c c||}
 \\hline
 Data & \\#Obs & $R^{2}$ & \\#SV & Time \\\\ [0.5ex]
 \\hline\\hline
 Banana & 11,016 & 0.8789 & 21 & 1.98 sec \\\\
 TwoDonut & 1,333,334 & 0.8982 & 178 & 32 min \\\\
 Star & 64,000 & 0.9362 & 76 & 11.55 sec \\\\ [1ex]
 \\hline
 \\end{tabular}
 \\caption{SVDD training using the full SVDD method}\\label{table:t2}
\\end{minipage}\\\\
\\mbox{}\\\\
\\begin{minipage}{.4\\textwidth}
	\\begin{tabular}{||ccccc||}
		\\hline
		Data & Iterations & $R^{2}$ & \\#SV & Time\\\\ [0.5ex]
		\\hline\\hline
		Banana(6) & 119 & 0.872 & 19 & 0.32 sec\\\\
		TwoDonut(11) & 157 & 0.897 & 37 & 0.29 sec\\\\
		Star(11) & 141 & 0.932 & 44 & 0.28 sec\\\\[1ex]
		\\hline
	\\end{tabular}
	\\caption{SVDD results using the sampling method (sample size in parentheses)}\\label{table:t3}
\\end{minipage}
\\end{table}
The results provided in Table \\ref{table:t2} and Table \\ref{table:t3} indicate
that our method provides an order-of-magnitude performance improvement
compared to training using all observations in a single iteration. The threshold
$R^{2}$ values obtained using the sampling-based method are approximately equal
to the values that can be obtained by training using all observations in a
single iteration. Although the radius values are the same, to confirm whether the data
boundary defined using the support vectors is also similar, we performed scoring on a
$200\\times200$ data grid. Figure \\ref{fig:image_8} provides the scoring results
for all data sets. The scoring results for the Banana-shaped and the Two-Donut-shaped data are very similar
for both methods; the scoring results for the Star-shaped data are also similar for the two methods,
except for a region near the center.


\\begin{figure}
\\begin{tabular}{cc}
\\includegraphics[width=1.5in]{image100_1} & \\includegraphics[width=1.5in]{image100_2} \\\\
(a) & (b) \\\\[6pt]
\\includegraphics[width=1.5in]{image101_1} & \\includegraphics[width=1.5in]{image101_2} \\\\
(a) & (b) \\\\[6pt]
\\includegraphics[width=1.5in]{image102_1} & \\includegraphics[width=1.5in]{image102_2} \\\\
(a) & (b) \\\\[6pt]
Full SVDD Method & Sampling method \\\\[6pt]
\\end{tabular}
\\caption{Scoring results. The figures show results of scoring on a $200\\times200$ data grid. Light gray indicates outside points and black indicates inside points. Figures in column (a) used the full SVDD method for training; figures in column (b) used the sampling method for training.}\\label{fig:image_8}
\\end{figure}


\\section{Analysis of High Dimensional Data}
\\label{pd}
Section \\ref{res} provided a comparison of our sampling method with the full SVDD
method. For two-dimensional data sets, the performance of the sampling method can
be judged visually using the scoring results. We tested the sampling method
with high-dimensional data sets, where such visual feedback about the classification
accuracy of the sampling method is not available. We compared the classification
accuracy of the sampling method with the accuracy of training with the full SVDD
method. We use the $F_{1}$-measure to quantify the classification accuracy
\\cite{zhuang2006parameter}.
The $F_{1}$-measure is defined as follows:
\\begin{equation}
F_{1}=\\dfrac{2\\times \\text{Precision}\\times \\text{Recall}}{\\text{Precision}+\\text{Recall}},
\\end{equation}
where:
\\begin{align}
\\text{Precision}=\\dfrac{\\text{true positives}}{\\text{true positives} + \\text{false positives}},\\\\
\\text{Recall}=\\dfrac{\\text{true positives}}{\\text{true positives} + \\text{false negatives}}.
\\end{align}
Thus high precision relates to a low false positive rate, and high recall
relates to a low false negative rate. We chose the $F_{1}$-measure because it is
a composite measure that takes into account both the Precision and the Recall.
Models with higher values of the $F_{1}$-measure provide a better fit.
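
As a concrete illustration, the $F_1$-measure can be computed directly from the scored labels. The short function below is a straightforward transcription of the formulas above; the 1\/0 label convention for target and outlier observations is an assumption made for the example.
\\begin{verbatim}
# Direct transcription of the Precision, Recall and F1 formulas.
# Labels: 1 = target class ("inside"), 0 = outlier ("outside").
def f1_measure(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# The F1-measure ratio reported below is then
# f1_measure(y, scored_sampling) / f1_measure(y, scored_full_svdd).
\\end{verbatim}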
\\subsection{Analysis of Shuttle Data}
In this section we provide results of our experiments with the Statlog (Shuttle)
data set \\cite{Lichman:2013}. This is a high-dimensional data set consisting of nine
numeric attributes and one class attribute. Out of 58,000 total observations,
80\\% of the observations belong to class one.
We created a training data set of 2,000 randomly selected observations belonging
to class one. The remaining 56,000 observations were used to create a scoring
data set. An SVDD model was first trained using all observations in the training
data set. The training results were used to score the observations in the
scoring data set to determine if the model could accurately classify an
observation as belonging to class one, and the accuracy of scoring was measured
using the $F_{1}$-measure. We then trained using the sampling-based method,
followed by scoring to compute the $F_{1}$-measure again. The sample size for
the sampling-based method was set to 10 (number of variables + 1). We measured
the performance of the sampling method using the $F_{1}$-measure ratio defined
as $F_{\\text{Sampling}}\/F_{\\text{Allobs}}$, where
$F_{\\text{Sampling}}$ is the $F_1$-measure value obtained when the sampling method was used for training, and
$F_{\\text{Allobs}}$ is the $F_1$-measure value computed when all observations were used for training. A value close to 1 indicates that the sampling method is competitive with the full SVDD method.
We repeated the above steps, varying the training data set size from 3,000
to 40,000 in increments of 1,000. The corresponding scoring data set size
changed from 55,000 to 18,000. Figure \\ref{fig:image_9_1_1} provides the plot of
the $F_{1}$-measure ratio. The $F_1$-measure ratio is nearly constant and very close
to 1 for all training data set sizes, which provides evidence that our sampling
method provides near-identical classification accuracy to the full SVDD
method. Figure \\ref{fig:image_9_1_2} provides the plot of the processing time
for the sampling method and for training using all observations. As the training
data set size increased, the processing time for the full SVDD method increased
almost linearly to a value of about 5 seconds for a training data set of 40,000
observations. In comparison, the processing time of the sampling-based method
was in the range of 0.24 to 0.35 sec. The results show that the sampling-based
method is efficient and provides near-identical results to the full SVDD method.

\\begin{figure}[h]
	\\centering
	\\includegraphics[width=3.45in]{image9_1_b}
	\\caption{$F_1$-measure plot: Shuttle data. \\textit{Sample size for sampling method = 10}}
	\\label{fig:image_9_1_1}
\\end{figure}
\\begin{figure}[h]
\\centering
\\includegraphics[width=3.45in]{image9_2_b}
\\caption{Processing time plot: Shuttle data. \\textit{Sample size for sampling method = 10}}
\\label{fig:image_9_1_2}
\\end{figure}


\\subsection{Analysis of Tennessee Eastman Data}
In this section we provide results of our experiments with the high-dimensional
Tennessee Eastman data. The data was generated using the MATLAB simulation code
\\cite{Ricker:2002}, which provides a model of an industrial chemical process
\\cite{downs1993plant}. The data was generated for normal operations of the
process and for twenty faulty processes. Each observation consists of 41 variables,
out of which 22 are measured continuously, on average every 6 seconds,
and the remaining 19 are sampled at a specified interval of either 0.1 or 0.25
hours. We interpolated the 22 continuously measured variables
using the SAS\\textsuperscript{\\textregistered} \\textit{EXPAND} procedure. The
interpolation increased the observation frequency and generated 20 observations
per second. The interpolation ensured that we had an adequate data volume to
compare the performance of our sampling method with the full SVDD method.


We created a training data set of 5,000 randomly selected observations belonging
to the normal operations of the process. From the remaining observations, we
created a scoring data set of 228,000 observations by randomly selecting 108,000
observations belonging to the normal operations and 120,000 observations
belonging to the faulty processes. An SVDD model was first trained using all
observations in the training data set. The training results were used to score
the observations in the scoring data set to determine if the model could
accurately classify an observation as belonging to the normal operations. The
accuracy of scoring was measured using the $F_{1}$-measure. We then trained
using the sampling method, followed by scoring to compute the $F_{1}$-measure
again. The sample size for the sampling-based method was set to 42 (number
of variables + 1). Similar to the Shuttle data analysis, we measured the
performance of the sampling method using the $F_{1}$-measure ratio defined as
$F_{\\text{Sampling}}\/F_{\\text{Allobs}}$, where $F_{\\text{Sampling}}$ is the
$F_1$-measure value obtained when the sampling method was used for
training, and $F_{\\text{Allobs}}$ is the $F_1$-measure value computed when
all observations were used for training. A value close to 1 indicates that
the sampling method is competitive with the full SVDD method.

We repeated the above steps, varying the training data set size from 10,000
to 100,000 in increments of 5,000. The scoring data set was kept unchanged
during each iteration. Figure \\ref{fig:image_9_1} provides the plot of
the $F_{1}$-measure ratio. The $F_{1}$-measure ratio was nearly constant and very
close to 1 for all training data set sizes, which provides evidence that the
sampling method provides near-identical classification accuracy
to the full SVDD method. Figure \\ref{fig:image_9_2} provides the plot of the
processing time for the sampling-based method and the full SVDD method. As
the training data set size increased, the processing time for the full SVDD method
increased almost linearly to a value of about one minute for a training data set
of 100,000 observations.
In comparison, the processing time of the sampling-based
method was in the range of 0.5 to 2.0 sec. The results show that the
sampling-based method is efficient and closely approximates
the results obtained from the full SVDD method.

\\begin{figure}
	\\centering
	\\includegraphics[width=3.45in]{image12_1}
	\\caption{$F_1$-measure ratio plot: Tennessee Eastman data. Sample size for sampling method = 42}
	\\label{fig:image_9_1}
\\end{figure}

\\begin{figure}
\\centering
\\includegraphics[width=3.45in]{image12_2}
\\caption{Processing time plot: Tennessee Eastman data. Sample size for sampling method = 42}
\\label{fig:image_9_2}
\\end{figure}

\\section{Simulation Study}
\\label{ss}
In this section we measure the accuracy of the sampling method when it is
applied to randomly generated polygons. Given the number of vertices $k$, we
generate the vertices of a random polygon in the anticlockwise
sense as $r_1\\exp i \\theta_{(1)}, \\dots, r_k \\exp i \\theta_{(k)}.$ Here
the $\\theta_{(i)}$'s are the order statistics of an i.i.d.\\ sample uniformly
drawn from $(0,2\\pi)$ and the $r_i$'s are uniformly drawn from an interval
$[\\text{r}_{\\text{min}},\\text{r}_{\\text{max}}].$ For this simulation we chose
$\\text{r}_{\\text{min}}=3$ and $\\text{r}_{\\text{max}}=5$ and varied the number
of vertices from $5$ to $30$. We generated $20$ random polygons for each vertex
count. Figure \\ref{fig:image_10} shows two random polygons. Having determined a
polygon, we randomly selected $600$ points uniformly from the interior of the
polygon to construct a training data set (a short code sketch of this construction is given below).

To create the scoring data set, we divided the bounding rectangle of each
polygon into a $200 \\times 200$ grid. We labeled each point on this grid
as an ``inside'' or an ``outside'' point. We then fit an SVDD on the training
data set, scored the corresponding scoring data set and calculated the
$F_1$-measure. The process of training and scoring was first performed using the
full SVDD method, followed by the sampling method. For the sampling method we
used a sample size of 5. We trained and scored each instance of a polygon 10 times
by changing the value of the Gaussian bandwidth parameter, $s$. We used $s$
values from the following set:\\linebreak $s=[1, 1.44, 1.88, 2.33, 2.77, 3.22, 3.66,
4.11, 4.55, 5].$

As in previous examples, we used the $F_1$-measure ratio to judge the accuracy of the sampling method.

\\begin{figure}
	\\centering
	\\subfloat[Number of Vertices = 5]{\\label{fig:image_10}\\includegraphics[width=3.45in]{p5}}\\\\
	\\subfloat[Number of Vertices = 25]{\\label{fig:image_11}\\includegraphics[width=3.45in]{p25}}
	\\caption{Random Polygons}\\label{fig:image_10}
\\end{figure}

The box-whisker plots in Figures \\ref{fig:image_11_a} to \\ref{fig:image_11_c}
summarize the simulation study results. The x-axis shows the number of
vertices of the polygon and the y-axis shows the $F_1$-measure ratio. The bottom and
the top of the box show the first and the third quartile values. The ends of
the whiskers represent the minimum and the maximum value of the $F_1$-measure
ratio. The diamond shape indicates the mean value and the horizontal line in the
box indicates the second quartile.
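
For completeness, the random-polygon construction described above can be sketched as follows. The rejection sampling of interior points from the bounding box, and the use of \\texttt{matplotlib}'s \\texttt{Path} for the point-in-polygon test, are implementation assumptions for this illustration rather than details of the simulation code used to produce the results.
\\begin{verbatim}
# Sketch of the simulation-study geometry: a random k-gon with vertices
# r_i * exp(i * theta_(i)) and 600 training points from its interior.
import numpy as np
from matplotlib.path import Path

def random_polygon(k, r_min=3.0, r_max=5.0, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    theta = np.sort(rng.uniform(0.0, 2.0 * np.pi, k))   # order statistics
    r = rng.uniform(r_min, r_max, k)
    # anticlockwise vertices r_i (cos theta_(i), sin theta_(i))
    return np.column_stack([r * np.cos(theta), r * np.sin(theta)])

def sample_interior(vertices, n=600, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    boundary = Path(vertices)
    lo, hi = vertices.min(axis=0), vertices.max(axis=0)
    points = []
    while len(points) < n:                # rejection sampling (assumption)
        candidate = rng.uniform(lo, hi)
        if boundary.contains_point(candidate):
            points.append(candidate)
    return np.asarray(points)

training_data = sample_interior(random_polygon(k=5))
\\end{verbatim}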
\\subsection{Comparison of the best fit across $s$}
For each instance of a polygon, we looked at the $s$ value that provides the best
fit in terms of the $F_1$-measure ratio for each of the methods. Figure
\\ref{fig:image_11_a} shows the $F_{1}$-measure ratio computed using the
maximum values of the $F_{1}$ measures. The plot shows that the $F_1$-measure ratio
is greater than $\\approx 0.92$ across all values of the number of vertices. The
$F_1$-measure ratio in the top three quartiles is greater than $\\approx$ 0.97
across all values of the number of vertices. Using the best possible value of $s$, the
sampling method provides results comparable to the full SVDD method.
\\begin{figure}
\\centering
\\includegraphics[width=3.45in]{image104}
\\caption{Box-whisker plot: Number of vertices vs. ratio of max $F_1$ measures \\label{fig:image_11_a}}
\\end{figure}

\\subsection{Results Using Same Value of $s$}
We evaluated the sampling method against the full SVDD method for the same value
of $s$. The plots in Figure \\ref{fig:image_11_b} illustrate the results for
six different values of $s$. The plots show that, except for one outlier result in Figure \\ref{fig:image_11_b} (d), the $F_1$-measure
ratio is greater than 0.9 across all numbers of vertices and values of $s$. In Figures
\\ref{fig:image_11_b} (c) to (f), the top three quartiles of the $F_{1}$-measure
ratio were consistently greater than $\\approx 0.95$. Training using the sampling method
and the full SVDD method with the same $s$ value provides similar results.\\\\

\\begin{figure}
 \\centering
	\\subfloat[$s$=1]{\\includegraphics[width=3.00in]{image105_1}}\\\\
	\\subfloat[$s$=1.4]{\\includegraphics[width=3.00in]{image105_2}}\\\\
	\\subfloat[$s$=2.3]{\\includegraphics[width=3.00in]{image105_3}}\\\\
	\\subfloat[$s$=3.4]{\\includegraphics[width=3.00in]{image105_4}}
 \\phantomcaption
\\end{figure}
\\begin{figure}
 \\ContinuedFloat
 \\centering
	\\subfloat[$s$=4.1]{\\includegraphics[width=3.00in]{image105_5}}\\\\
	\\subfloat[$s$=5.0]{\\includegraphics[width=3.00in]{image105_6}}
 \\caption[]{Box-whisker plot: Number of vertices vs. $F_1$-measure ratio for different $s$ values \\label{fig:image_11_b}}
\\end{figure}

\\subsection{Overall Results}
Figure \\ref{fig:image_11_c} provides a summary of all simulations performed for
different polygon instances and varying values of $s$. The plot shows that,
except for one outlier result, the $F_1$-measure ratio is greater than 0.9 across all numbers of vertices. The $F_1$-measure
ratio in the top three quartiles is greater than $\\approx 0.98$ across
all values of the number of vertices. The accuracy of the sampling method is
comparable to that of the full SVDD method.


\\begin{figure}
\\centering
\\includegraphics[width=3.45in]{image103}
\\caption{Box-whisker plot: Number of vertices vs. $F_1$-measure ratio}
\\label{fig:image_11_c}
\\end{figure}


\\section{Conclusion}
\\label{cn}
We propose a simple sampling-based iterative method for training SVDD. The
method incrementally learns during each iteration by utilizing information
contained in the current master set of support vectors and new information
provided by the random sample. After a certain number of iterations, the
threshold $R^2$ value and the center $a$ start to converge. At this point, the
SVDD of the master set of support vectors is close to the SVDD of the training data
set. We provide a mechanism to detect convergence and establish a stopping
criterion.
The simplicity of the proposed method ensures ease of implementation.
The implementation involves writing additional code for calling the SVDD training
code iteratively, maintaining a master set of support vectors and implementing
the convergence criterion based on the threshold $R^2$ and the center $a$. We do not
propose any changes to the core SVDD training algorithm as outlined in Section
\\ref{mfsvdd}. The method is fast. The number of observations used for
finding the SVDD in each iteration can be a very small fraction of the number
of observations in the training data set. The algorithm provides good results
in many cases with a sample size as small as $m+1$, where $m$ is the number of variables in
the training data set. The small sample size ensures that each iteration of the
algorithm is extremely fast. The proposed method provides a fast alternative
to the traditional SVDD training method, which uses information from all observations
in one iteration. Even though the sampling-based method provides only an approximation
of the data description, in applications where the training data set is large, a fast
approximation is often preferred to an exact description that takes more time to determine.
Within the broader realm of the Internet of Things (IoT), we
expect to see multiple applications of SVDD, especially for monitoring industrial
processes and equipment health; many of these applications will require fast periodic training
using large data sets. This can be done very efficiently with our
method.
\\bibliographystyle{plain}

\\section{Introduction}

The two gas giants of the \\replaced{solar}{Solar} system, Jupiter and Saturn, are host to a large number of satellites and rings. The satellites of both planets follow a similar progression pattern. The inner region of each system consists of small icy satellites, with an accompanying ring system \\citep{Thomas1998InnerSatJup, Throop2004JupiterRings, Porco2005CassiniRingSat,Thomas2013InnerSatSatu}. Further out, there are larger icy\/silicate satellites \\citep{Thomas2010SaturnSatsCassiniProps, Deienno2014OrbitGalileanSat}. In the outer system, both planets have a series of irregular satellites, small satellites with high eccentricities and inclinations \\citep{Nesvorny2003IrrSatEvol, Sheppard2003IrrSatNature, Jewitt2007IrregularSats}. It is thought that these satellites were captured from other populations of small \\replaced{solar}{Solar} system bodies \\citep{Colombo1971JupSatsForm, Heppenheimer1977Capture, Pollack1979SatGasDrag, Sheppard2003IrrSatNature, Nesvorny2004IrrSatFamilyOrigin, Johnson2005PhoebeKuiper, Nesvorny2007IrrSatCap, Nesvorny2014IrrCapture}. This is in contrast to the inner satellites, which are thought to have accreted in a circumplanetary disk \\citep[e.g.][]{Canup2002GalSatAcc, Canup2010SaturnSatOrigin}. Such a formation mechanism is thought to resemble the accretion of planets in a protoplanetary disk around a young star \\citep{Lissauer1987PlanetAccretion}, a conclusion that is supported by the recent discovery of the TRAPPIST-1 planetary system \\citep{Gillon2016Trapist1}. That system features at least seven Earth-mass planets orbiting a very low mass star. The star itself, TRAPPIST-1, is within two orders of magnitude more massive than Jupiter, and similar in size. The seven planets span an area comparable to that of Jupiter's regular satellite system.
Studying and understanding the gas giant systems in our own \\replaced{solar}{Solar} system, can therefore provide context for future exploration of low-mass exoplanetary systems. \n\n\\subsection{The Jovian System}\nHistorically, \\citet{Galileo1610SidereusNuncius} discovered the first satellites in the Jovian system, the large Galileans, Io, Europa, Ganymede and Callisto. Our knowledge of these satellites has increased greatly, as a result of both improved ground-based instrumentation \\citep[e.g.][]{Sparks2016EuropaHST,Vasundhara2017GalileansGround} and spacecraft visitations \\citep[e.g.][]{Smith1979Jupiter, Grundy2007NewHorizonsJupiterSats, Greenberg2010IcyJovian}\\deleted{have given us a more detailed understanding of these objects}. \n\nAmalthea, one of the inner set of Jovian satellites, was discovered by \\citet{Barnard1892Amalthea}. A few years later, the first two small irregular satellites, Himalia \\citep{Perrine1905Himalia} and Elara \\citep{Perrine1905Elara}, were discovered in inclined, prograde orbits. The discovery of Pasiphae three years later by \\citet{Melotte1908Pasiphae} is significant as this was only the second satellite in the Solar system to be found on a retrograde orbit, and the first such object found in the Jovian system. Several other irregular satellites were discovered in the first half of the 20th century, Sinope \\citep{Nicholson1914Sinope}, Lysithea \\citep{Nicholson1938LysitheaCarme}, Carme \\citep{Nicholson1938LysitheaCarme} and Ananke \\citep{Nicholson1951Ananke}. Leda, another small prograde irregular, was discovered 20 years later by \\citet{Kowal1975Leda}. Themisto, the first Jovian satellite smaller than 10km to be discovered, was found that same year \\citep{Kowal1975Themisto} and subsequently lost. Themisto was rediscovered by \\citet{Sheppard2000ThemistoReDis} nearly 20 years later. The Voyager visitations of Jupiter discovered the remaining three inner satellites, Metis \\citep{Synnott1981MetisDiscov}, Adrastea \\citep{Jewitt1979AdrasteaDiscov} and Thebe \\citep{Synnott1980ThebeDiscov}, along with a ring system \\citep{Smith1979Jupiter}. These three satellites, Amalthea and the ring system, would be imaged again by the Galileo \\citep{OckertBell1999JupiterRing} and Cassini \\citep{Porco2005CassiniRingSat} spacecraft during their missions. \n\nThe irregular Jovian satellites orbit the planet with semi-major axes an order of magnitude greater than the Galilean moons, and have large eccentricities and inclinations. In the early years of the 21st century, extensive surveys were carried out to search for the Jovian irregular satellites \\citep{Scotti2000Callirrhoe, Sheppard2001SatsJupDiscovery,\nSheppard2002JupSatDisc, Gladman2003JupIAU1, Gladman2003JupIAU2,\nSheppard2003JupIAU1, Sheppard2003JupIAU2, Sheppard2003JupIAU3,\nSheppard2003IrrSatNature,\nSheppard2004JupIAU1,Sheppard2004JupIAU2, Beauge2007IrregSatRsonance,\nJacobson2011NewJupSats, Sheppard2012JupIAU}. These surveys increased the number of known Jovian satellites from 14 after Voyager, to the 67 known today. The inner five irregular satellites, Leda, Himalia, Lystea, Elara and Dia, have prograde orbits and have previously been classified into the Himalia group \\citep{Nesvorny2003IrrSatEvol, Sheppard2003IrrSatNature}. Themisto and Carpo were proposed as single members of their own groups by \\citet{Sheppard2003IrrSatNature}. The remainder of the irregular satellites have retrograde orbits. 
Based on similarities in semi-major axis, inclination and eccentricity, these satellites have been grouped into families by \\citet{Sheppard2003IrrSatNature} and \\citet{Nesvorny2003IrrSatEvol}. These dynamical families are typified by their largest member, Himalia representing the inner prograde satellites, with the retrograde ones being broken down into the Ananke, Pasiphae and Carme families. Recently, several additional small irregular satellites have been discovered \\citep{Jacobson2011NewJupSats, Sheppard2012JupIAU} which are yet to be named or classified. With the discovery of new satellites \\citep{Scotti2000Callirrhoe, Sheppard2001SatsJupDiscovery, Beauge2007IrregSatRsonance,\nJacobson2011NewJupSats, Sheppard2012JupIAU} and additional information from the Cassini spacecraft \\citep{Porco2005CassiniRingSat}, a revisitation of the classification of the Jovian irregular satellites \\citep{Nesvorny2003IrrSatEvol, Sheppard2003IrrSatNature, Jewitt2007IrregularSats} is warranted. \n\n\\subsection{The Saturnian System}\nThe Saturnian system is broadly similar to that of Jupiter, but exhibits greater complexity. One of the most striking features, visible to even the most modest telescope, is Saturn's ring system. First observed by Galileo in 1610, it was \\citet{Huygens1659systema} that observed that the objects surrounding Saturn were in fact rings. The rings themselves are composed of individual particles, from micrometer to meter size \\citep{Zebker1985SaturnRingParticle}. Embedded within several of the main rings are a series of small moonlets \\citep{Tiscareno2006SaturnAringMoonlets} and several shepherd satellites \\citep{Showalter1991Pan, Porco2007SaturnSmallSats, Cuzzi2014FringPromethius}. The co-orbitals Janus and Epimetheus \\citep{Yoder1983SaturnCoorobiting, Yoder1989JanusEpiMassOrbit, Nicholson1992CoorbitalSaturn, Treffenstdt2015JanusEpiFormation, ElMoutamid2016JansSwapAring}, and their associated faint ring system \\citep{Winter2016JanusEpiRing} are unique to the Saturn system. Just beyond the Janus\/Epimetheus orbit, there is a diffuse G-ring, the source of which is the satellite Aegaeon \\citep{Hedman2007Gring}.\n\n\\citet{Huygens1659systema} also discovered Saturn's largest satellite, Titan. Earth-based observations highlighted the methane based atmosphere of Titan \\citep{Kuiper1944TitanAtmos, Karkoschka1994TitanESO}, with further characterization by the Cassini spacecraft \\citep{Niemann2005TitanAtmos} and Huygens lander \\citep{Lebreton2005HuygensTiten}. The bulk composition of Titan is analogous to that of the other icy satellites, with an icy shell, subsurface water ocean and silicate core \\citep{Hemingway2013TitanIceshell}. There are seven other mid-sized icy satellites, with semi-major axes on a similar order of magnitude to \\added{that of} Titan. The five largest, Mimas, Enceladus, Tethys, Dione and Rhea are large enough to be in hydrostatic equilibrium. All of the mid-sized satellites are thought to be predominantly composed of water ice, with some contribution from silicate rock, and may contain subsurface liquid oceans \\citep{Matson2009SaturnSat, Filacchione2012VIMS3}. Those satellites closer to Saturn than Titan, Mimas, Enceladus, Tethys, Dione and Rhea, are embedded in the E-ring \\citep{Feibelman1967SatEring, Baum1981SatEring, Hillier2007EringComp, Hedman2012EringStruc}. The Cassini mission identified the source of this ring as the southern cryo-plumes of Enceladus \\citep{Sphan2006EnceladusEring}. 
\n\nIn addition to the larger icy satellites, there are four small Trojan satellites \\citep{Porco2005CassiniRingSat}, situated at the leading and trailing Lagrange points, 60\\degree \\ ahead or behind the parent satellites in their orbit. Tethys has Telesto and Calypso as Trojan satellites, while Helene and Polydeuces are Trojan satellites of Dione. So far, these Trojan satellites are unique to the Saturnian system. Between the orbits of Mimas and Enceladus, there are the Alkyonides, Methone, Anthe and Pallene, recently discovered by the Cassini spacecraft \\citep{Porco2005CassiniRingSat}. Each of the Alkyonides have their own faint ring arcs \\citep{Hedman2009SatRingArcs} comprised of similar material to the satellite. Dynamical modeling by \\citet{Sun2017MethoneDust} supports the theory of \\citet{Hedman2009SatRingArcs}, that the parent satellite is the source of the rings.\n\nIn the outer Saturnian system there are a large number of smaller irregular satellites, with 38 known to date. The first of these irregular satellites to be discovered was Phoebe, which was the first planetary satellite to be discovered photographically \\citep{Pickering1899Phoebe}. Phoebe was also the first satellite to be discovered moving on a retrograde orbit \\citep{Pickering1905Phoebe, Ross1905Phoebe}. Phoebe is the best studied irregular satellite and the only one for which in-situ observations have been obtained \\citep{Clark2005Phoebe}. Recently, a large outer ring associated with Phoebe and the other irregular satellites has been discovered \\citep{Verbiscer2009SatLarRing}. It has been suggested that Phoebe may have originated in the Edgeworth-Kuiper Belt and captured into orbit around Saturn \\citep{Johnson2005PhoebeKuiper}. The other Saturnian irregular satellites were discovered in extensive surveys during the early 21st century \\citep{Gladman200112Sat, Sheppard2003IAUJupSat, Jewitt2005IAUCSat,\nSheppard2006SatIAUC, Sheppard2007SatIAUC}. \\replaced{With}{Due to} the small size of the majority of these satellites, only their orbital information is available. There are nine prograde and 29 retrograde outer satellites, of which attempts have been made to place into families based on dynamical \\citep{Gladman200112Sat, Jewitt2007IrregularSats, Turrini2008IrregularSatsSaturn} and photometric \\citep{Grav2003IrregSatPhoto, Grav2007IrregSatCol} information. In the traditional naming convention \\citep{Grav2003IrregSatPhoto}, the Inuit family, Ijiraq, Kiviuq, Paaliaq, Siarnaq and Tarqeq, are small prograde satellites, whose inclination is between 45\\degree and 50\\degree . The Gallic family, Albiorix, Bebhionn, Erriapus and Tarvos, is a similar, prograde group, but with inclinations between 35\\degree and 40\\degree . The retrograde satellites are all grouped into the Norse family, including Phoebe. There is a possibility that the Norse family could be further split into subfamilies, based on photometric studies \\citep{Grav2003IrregSatPhoto, Grav2007IrregSatCol}. The convention of using names from respective mythologies for the satellite clusters \\citep{Jewitt2007IrregularSats}, has become the default standard for the irregular satellite families of Saturn. \n\n\\subsection{Formation Theories}\n\nThe purpose of taxonomy and classification, beyond simple grouping, is to investigate the origin of objects. The origin of the irregular satellites is a major topic of ongoing study \\citep{Nesvorny2012JumpingJupiter, Nesvorny2014IrrCapture}. Here we present an overview for context. 
There are three main theories for the formation of the Jovian satellites: formation via disk accretion \\citep{Canup2002GalSatAcc}; via nebula drag \\citep{Pollack1979SatGasDrag}; or via dynamic capture \\citep{Nesvorny2003IrrSatEvol, Nesvorny2007IrrSatCap}. The satellites that are captured either by nebula drag or through dynamical means are thought to be from \\replaced{solar}{Solar} system debris, such as asteroids and comets.

The disk accretion theory has generally been accepted as the mechanism for the formation of the inner prograde satellites of Jupiter \\citep{Canup2002GalSatAcc}. The satellites form from dust surrounding proto-Jupiter in a process analogous to the formation of planets around a star \\citep{Lissauer1987PlanetAccretion}. This surrounding disk would have lain in the equatorial plane of Jupiter, with material being accreted to the planet itself through the disk. This would explain both the prograde, coplanar orbits of the regular satellites and their near-circular orbits.

The second theory requires satellites to be captured in the original Jovian nebula \\citep{Pollack1979SatGasDrag, Cuk2004HimaliaGasDrag}. Before it coalesced into a planet, Jupiter is proposed to have had a greater radius and a lower density than now. There was a `nebula' surrounding this proto-Jupiter. As other pieces of \\replaced{solar}{Solar} system debris crossed into the Hill sphere of this nebula, they would be slowed down by friction and captured as satellites. Related to this is the concept of a pull-down mechanism \\citep{Heppenheimer1977Capture}. As a gas giant increases in mass from accretion \\citep{Pollack1996GiantPlanetAccretion}, its Hill sphere increases. As a subsequent effect, small \\replaced{solar}{Solar} system bodies can possibly be captured as irregular satellites.

Dynamical capture can explain the retrograde orbits of the Jovian satellites \\citep{Nesvorny2003IrrSatEvol}. The Hill sphere of a planet dictates the limit of its gravitational influence over other bodies. The theory \\citep{Nesvorny2003IrrSatEvol, Nesvorny2007IrrSatCap} states that it is impossible for a satellite to be captured in a three body system (Sun, planet and satellite). The Nice model of the \\replaced{solar}{Solar} system \\citep{Tsiganis2005NICEplanets, Nesvorny2007IrrSatCap, Nesvorny2014IrrCapture} has a fourth body interaction placing the satellite into a stable orbit inside the Hill sphere of the gas giant. Recently the Nice model was updated to include a fifth giant planet \\citep{Nesvorny2012JumpingJupiter}. This updated theory has the new planet interacting with Jupiter and allowing for the capture of the satellites, before the fifth giant planet is ejected from the \\replaced{solar}{Solar} system. Collisions between objects could also play a part in the dynamical capture of the irregular satellites \\citep{Colombo1971JupSatsForm}.

The formation of the Saturnian satellite system is thought to be similarly complex. The inner satellites are possibly formed from accretion within the ring system \\citep{Charnoz2010SaturnMooletsfromMainRings} or from the breakup of a large, lost satellite \\citep{Canup2010SaturnSatOrigin}. Modeling of the Saturnian system by \\citet{Salmon2017SaturnMidAccretion} has \\replaced{indicated}{shown} that the mid-sized satellites could have formed from a large ice-dominated ring, with contamination of \\replaced{asteroids}{cometary material} during the Late Heavy Bombardment, delivering the requisite silicate rock.
Being the largest satellite in the Saturnian system, Titan is thought to have formed from accretion of proto-satellites \\citep{Asphaug2013SatMerger}. The Saturnian irregular satellites are predicted to be captured objects \\citep{Jewitt2007IrregularSats}, though their origins are still in dispute. Collisions are thought to have played a part in the capture of the irregular satellites of Saturn \\citep{Turrini2009IrregularSatsSaturn}. The cratering data provided by the Cassini spacecraft \\citep{Giese2006PhoebeTopo} supports this hypothesis.\n\n\\subsection{This Project}\nWith the discovery of several new irregular satellites \\citep{Scotti2000Callirrhoe, Gladman200112Sat, Sheppard2001SatsJupDiscovery,\nSheppard2002JupSatDisc, Gladman2003JupIAU1,Gladman2003JupIAU2,\nSheppard2003IAUJupSat, Sheppard2003JupIAU1,Sheppard2003JupIAU2,Sheppard2003JupIAU3,\nSheppard2003IrrSatNature,\nSheppard2004JupIAU1,Sheppard2004JupIAU2, Jewitt2005IAUCSat, Sheppard2006SatIAUC,Sheppard2007SatIAUC, \nJacobson2011NewJupSats, Sheppard2012JupIAU}, along with the detailed examination of the Jovian and Saturnian system by the Cassini spacecraft \\citep{Brown2003CassiniJupiter, \nPorco2005CassiniRingSat, Cooper2006CassiniAmaltheaThebe, Giese2006PhoebeTopo, Porco2006EnceladusPlume, Sphan2006EnceladusEring, Filacchione2007VIMS1, Nicholson2008VIMSRings, Matson2009SaturnSat, Buratti2010SatInnerSat, Filacchione2010VIMS2, Thomas2010SaturnSatsCassiniProps, \nClark2012VIMSIapetus, Filacchione2012VIMS3, Spitale2012s2009s1, Tosi2010IapetusDark, Hirtzig2013VIMSTitan, Brown2014Rayleigh, Filacchione2014VIMSrings, Filacchione2016VIMS4}, there is an opportunity to revisit the classification of the satellite systems of the gas giants. We apply a technique called \\textit{cladistics} to characteristics of the Jovian and Saturnian satellites, in order to examine the relationships between objects in the systems. The purpose of this is two fold. First, due to their \\replaced{well established}{well-established} classification systems, the Jovian and Saturnian satellite systems offer an opportunity to test the cladistical technique in a planetary science context. This project is an extension of \\citet{Holt2016JovSatCald} and together they form the first use of cladistics for planetary bodies. The second aim of the project is to classify recently discovered satellites, as well as providing context for future work.\n\nIn Section \\ref{Methods}, we introduce the cladistical technique, and how it is used in this paper. The resulting taxonomic trees for the Jovian and Saturnian systems, along with their implications for the taxonomy of the satellites, are presented in Sections \\ref{JupiterTax} and \\ref{SaturnTax} respectively. Section \\ref{Discussion} discusses the implications of cladistics in a planetary science context, along with some remarks on origins of the gas giant satellites and possible future work.\n\n\\section{Methods}\n\\label{Methods}\nIn this section, we present an overview of the cladistical method and how it is applied to the Jovian and Saturnian satellite systems. Following a general overview of cladistics, the section progresses into the specifics of this study, including characteristics used in the paper. The section concludes with an explanation on the specific matrices of the Jovian and Saturnian satellites and how they are applied to the cladistical method. 
\\subsection{Cladistics}
\\label{cladistics}

Cladistics is an analytical technique, originally developed to examine the relationships between living organisms \\citep{Hennig1965PhylogeneticSystem}. A \\textit{clade} is the term used for a cluster of objects, or \\textit{taxa}, that are related to each other at some level. In astronomy\/astrophysics, the technique has been used to look at the relationships between stars \\citep{FraixBurnet2015StarClads,Jofre2017StarsClads}, gamma-ray bursts \\citep{Cardone2013GRBClads}, globular clusters \\citep{FraixBurnet2009GlobularClusters} and galaxies \\citep{FraixBurnet2006DwarfGalaxies, FraixBurnet2010EarlyGalx, FraixBurnet2012SixPermGal, FraixBurnet2015GalClad}. These works, along with this study, form a body of work in the new field of `Astrocladistics' \\citep{FraixBurnet2015GalClad}. There are good reasons to believe that cladistics can provide sensible groupings in a planetary science context. Objects that have similar formation mechanisms should have comparable characteristics. Daughter objects that are formed by breaking pieces off a larger object should also have similar characteristics. The advantage of this method over other multivariate analysis systems is the inclusion of a larger number of characteristics, enabling us to infer more detailed relationships.

The vast majority of work in cladistics and phylogenetics has been undertaken in the Biological and Paleontological sciences. Biologists and Paleontologists use cladistics as a method to investigate the common origins\\replaced{ or `tree', of life}{, or `tree' of life} \\citep{Darwin1859Origin, Hennig1965PhylogeneticSystem, Hug2016TreeLife}, and how different species are related to one another \\citep[e.g.][]{Van1993new, Salisbury2006originCrocs, Vrivcan2011two, Smith2017new, Aria2017burgess}. Historically, the investigation into relationships between different organisms reaches back to \\citet{Darwin1859Origin}. Early attempts at using tree analysis techniques occurred in the early 20th century \\citep{Mitchell1901BridsClads, Tillyard1926insects, Zimmermann1931arbeitsweise}. \\citet{Hennig1965PhylogeneticSystem} is regarded as one of the first to propose `phylogenetic systematics', the technique that would become modern cladistical\/phylogenetic analysis. The technique was quickly adopted by the biological community and used to analyze every form of life, from Bacteria \\citep[e.g.][]{Olsen1994winds} to Dinosauria \\citep[e.g.][]{Bakker1974dinosaur} and our own ancestors \\citep[e.g.][]{Chamberlain1987EarlyHominid}. Recently, the use of DNA led to the expansion of the technique to become molecular phylogenetics \\citep{Suarez2008HistoryPhylo}. As computing power improves, larger datasets can be examined, and our understanding of the `Tree of Life' improves \\citep{Hug2016TreeLife}. For a detailed examination of the history of cladistics and phylogenetics, we refer the interested reader to \\citet{Hamilton2014EvolSystem}.


The cladistical methodology begins with the creation of a taxon-character matrix. Each matrix is a 2-d array, with the taxa, the objects of interest, in the rows, and each characteristic in the columns. The taxa used in this study are the rings and satellites of the Jovian and Saturnian systems. The orbital, physical and compositional properties of the rings and satellites are used as characteristics, see Section \\ref{Characteristics}.
For a given taxon, each corresponding characteristic is defined as a numerical state, usually a 0 or 1, though multiple, discrete states may be used. A 0 numerical state is used to indicate the original or `base' state. An \\textit{outgroup}, or a taxon outside the area of interest, is used to dictate the 0 base state of a characteristic. For this study, we use the Sun as an outgroup. An unknown character state can be accounted for with a question mark (?). This taxon-character matrix is created using the Mesquite software package \\citep{Mesquite}.

A set of phylogenetic trees is subsequently created from the Mesquite taxon-character matrix, using Tree analysis using New Technology (TNT) 1.5 \\citep{Goloboff2008TNT, Golboff2016TNT15}, via the Zephyr Mesquite package \\citep{MesquiteZephyr}. The trees are created on the concept of maximum parsimony \\citep{Maddison1984outgroup}: the tree with the shortest length, that is, the smallest number of changes, is most likely to show the true relationships. TNT uses a method of indirect tree length estimation \\citep{Goloboff1994Treelengths, Goloboff1996FastPasrimony} in its heuristic search for trees with the smallest length. TNT starts the drift algorithm \\citep{Goloboff1996FastPasrimony} search by generating 100 Wagner trees \\citep{Farris1970MethodsComp}, with 10 drifting trees per replicate. These starting trees are then checked using a Tree bisection and reconnection (TBR) algorithm \\citep{Goloboff1996FastPasrimony} to generate a block of \\replaced{equality}{equally} parsimonious trees. Closely related taxa are grouped together in the tree. Ideally, all equally parsimonious trees would be stored, but this is computationally prohibitive. For this analysis, 10000 equally parsimonious trees are requested from TNT to create the tree block. Once a tree block has been generated and imported into Mesquite \\citep{Mesquite} for analysis, a 0.5 majority-rules consensus tree can be constructed using \\replaced{the a well established}{a well-established} algorithm \\citep{Margush1981MajorityRules}. This tree is generated as a consensus of the block, with a tree branch being preserved if it is present in the majority of the trees. The resulting branching taxonomic tree is then a hypothesis for the relations between taxa, the satellites and rings of the gas giants.

We can assess how accurately a tree represents the true relationships between taxa. The number of steps it takes to create a tree is called the \\textit{tree length}. A smaller tree length \\replaced{indicates}{implies} a more likely tree, as it is more parsimonious. Tree length estimation algorithms \\citep{Farris1970MethodsComp} continue to be improved, and are fully explored in a modern context by \\cite{Goloboff2015Parsimony}. Two other tree metrics, the consistency and retention indices, are a measure of \\textit{homoplasy}, or the independent loss or gain of a characteristic \\citep{Givnish1997consistency}. High amounts of homoplasy in a tree are \\replaced{indicative}{suggestive} of random events, rather than the desired relationships between taxa \\citep{Brandley2009Homoplasy}. Mathematically, homoplasy can be represented by the consistency index ($CI$) of a tree (equation (\\ref{ConIndexEq})) \\citep{Kluge1969Cladistics}, which is related to the minimum number of changes ($M$) and the number of changes actually observed on the tree ($S$).

\\begin{equation}
CI = M\/S
\\label{ConIndexEq}
\\end{equation}

A tree with no \\textit{homoplasy} would have a consistency index of 1.
\\added{One of the criticisms of the consistency index is that it shows a negative correlation with the number of taxa and characteristics \\citep{Archie1989homoplasy, Naylor1995RetentionIndex}. In order to combat the issues with the consistency index, a new measure of homoplasy, the retention index, was created \\citep{Farris1989retention}.} The retention index ($RI$) \\citep{Farris1989retention} \\deleted{, a second measure of homoplasy} introduces the maximum number of changes ($G$) required into equation (\\ref{RetentionIndexEq}).

\\begin{equation}
RI = \\frac{G - M}{G - S}
\\label{RetentionIndexEq}
\\end{equation}

As with the consistency index, a tree with a retention index of 1 indicates a perfectly reliable tree. Both of these metrics \\replaced{give an indication of}{show} how confidently the tree represents the \\replaced{true}{most plausible} relationships between taxa. Values closer to 1 of both the consistency and retention indices indicate that the tree represents the true relationships between taxa \\citep{Sanderson1989VaiationHomoplasy}. For a detailed examination of the mathematics behind the algorithms and statistics used in cladistical analysis, we direct the interested reader to \\cite{Gascuel2005MathEvolPhylogeny}.

A traditional form of multivariate hierarchical clustering is used in the detection of asteroid collisional families \\citep{Zappala1990HierarchicalClustering1, Zappala1994HierarchicalClustering2}.
This method of clustering uses Gauss equations to find clusters in $n$-parameter space, typically using semi-major axis, eccentricity and inclination \\citep{Zappala1990HierarchicalClustering1}. Work has also been undertaken incorporating the known colors \\citep{Parker2008AsteroidFamSDSS} and albedo \\citep{Carruba2013AsteroidFamilies} of the asteroids \\citep{Milani2014AsteroidFamilies} into the classical method, though this reduces the dataset significantly. The classical method of multivariate hierarchical clustering was used by \\citet{Nesvorny2003IrrSatEvol} to identify the Jovian irregular satellite families. \\citet{Turrini2008IrregularSatsSaturn} expanded the classical method to the Saturnian irregular satellites, and utilized the Gauss equations, solved for velocities, in a similar way to \\cite{Nesvorny2003IrrSatEvol} to verify the families found, using the semi-major axis ($a$), eccentricity ($e$) and inclination ($i$) of the satellites. The rationale behind these calculations is that the dispersal velocities of the clusters would be similar to the escape velocities of the parent body. In this work we use the inverse Gauss equations, equations \\ref{InvGauss1}, \\ref{InvGauss2} and \\ref{InvGauss3}, substituted into equation \\ref{VelocityEq}, to test the dispersal velocities of the clusters found through cladistics. $\\delta a$, $\\delta e$ and $\\delta i$ are the differences between the individual satellites and the reference object. $a_r$, $e_r$, $i_r$ and the orbital frequency ($n_r$) are parameters of the reference object. In this case, the reference object is taken as the largest member of the cluster. The true anomaly ($f$) and perihelion argument ($w + f$) at the time of disruption are unknown. Only in special cases, for example for young asteroid families \\citep[e.g.][]{Nesvorny2002AsteroidBreakup}, can the values of $f$ and $(w + f)$ be inferred from observations. In this work we adopt $f = 90 \\degree $ and $(f+w) = 45 \\degree $ respectively as reasonable assumptions.
Previous works by \\cite{Nesvorny2003IrrSatEvol} and \\cite{Turrini2008IrregularSatsSaturn} using this method, do not \\replaced{indicate}{specify} the true anomaly ($f$) and perihelion argument ($w + f$) used, nor the central reference point, making any comparisons between them and this work relative rather than absolute. The final $\\delta V_d$ for the cluster is composed of the velocities in the direction of orbital motion ($\\delta V_T$), the radial direction ($\\delta V_R$) and perpendicular to the orbital plane ($\\delta V_W$). \n\n\\begin{equation}\n\\delta V_T = \\frac{n_r a_r (1+e_r \\cos f)}{\\sqrt{1-e_r^2}} \\cdot \\left [ \\frac{\\delta a}{2 a_r} - \\frac{e_r \\delta e}{1-e_r^2} \\right ]\n\\label{InvGauss1}\n\\end{equation}\n\n\\begin{equation}\n\\delta V_R = \\frac{n_r a_r}{(\\sqrt{1-e_r^2})\\sin f} \\cdot \\left [ \\frac{\\delta e_r (1 + e_r \\cos f)^2}{1-e_r^2} - \\frac{\\delta a (e_r + e_r \\cos^2 f + 2\\cos f)}{2a_r} \\right ]\n\\label{InvGauss2}\n\\end{equation}\n\n\\begin{equation}\n\\delta V_W = \\frac{\\delta i \\cdot n_r a_r }{\\sqrt{1-e_r^2}} \\cdot \\frac{1+e_r \\cos f}{\\cos (w+f)}\n\\label{InvGauss3}\n\\end{equation}\n\n\\begin{equation}\n\\delta V_d = \\sqrt{\\delta V_T^2 + \\delta V_R^2 + \\delta V_W^2}\n\\label{VelocityEq}\n\\end{equation}\n\nCladistics offers a fundamental advantage over this primarily dynamics based clustering, and that is the incorporation of unknown values. Classical multivariate hierarchical clustering \\citep{Zappala1990HierarchicalClustering1} requires the use of a complete dataset, and as such a choice is required. The parameters are either restricted to only known dynamical elements, or the dataset is reduced to well studied objects. Cladistical analysis can incorporate objects with large amounts of unknown information, originally fossil organisms \\citep{Cobbett2007FossilsCladistics}, without a reduction in the number of parameters. \n\n\n\\subsection{Characteristics}\n\\label{Characteristics}\nWe define 38 characteristics that can be broken into three broad categories: orbital, physical and compositional parameters. All numerical states are considered having equal weight. The discrete character sets are unordered. Any continuous characteristics are broken into bins, as cladistical analysis requires discrete characteristics. We developed a Python program to establish the binning of continuous characteristics. \\added{The pandas Cut module \\citep{Mckinney2010Pandas} is used to create the bins.} Each characteristic is binned independently of each other and for each of the Jovian and Saturnian systems. The aforementioned \\replaced{python}{Python} program iterates the number of bins until \\replaced{an $r^2$ score of $>0.99$ is reached for that characteristic set}{a linear regression model between binned and unbinned sets achieves a coefficient of determination ($r^2$) score of $>0.99$. This is calculated using the stats package in SciPy \\citep{Jones2010SciPy}}. Thus each character set will have a different number of bins, $r^2$ score and delimiters. All characteristics are binned in a linear fashion, with the majority increasing in progression. The exception to the linear increase is the density character set, with a reversed profile. All of the continuous, binned characteristic sets are ordered, as used by \\cite{FraixBurnet2006DwarfGalaxies}. 
A full list of the characteristics used, the $r^2$ score for each of the binned characteristics, and the bin delimiters are listed in Appendix \ref{ListCharacters}.

The first broad category includes the five orbital characteristics (Appendix \ref{listOrbitChar}). This category comprises two discrete characteristics, presence in orbit around the gas giant and prograde or retrograde orbit. The three remaining characteristics, semi-major axis ($a$), orbital inclination ($i$) and eccentricity ($e$), are continuous and require binning using the aforementioned \replaced{python}{Python} program. 

The second category used to construct the matrix consists of two continuous physical characteristics, density and visual geometric albedo (Appendix \ref{listPhysChar}). We chose not to include mass, or any properties related to mass, as characters in the analysis. The inclusion of these characteristics could hide any relationships between a massive object and any daughter objects resulting from collisions. 

The third category describes the discrete compositional characteristics and details the presence or absence of 31 different chemical species (Appendix \ref{listCompChar}). In order to account for any positional bias, the fundamental state (solid, liquid, gas or plasma) was not considered. In this analysis, we make no distinction between surface, bulk and trace compositions. This is to account for the potential of daughter objects to have their bulk composition made up of surface material from the parent. The majority of compounds have absence as the base state (0), and presence as the derived state (1). The exceptions are the first three species, elemental hydrogen (eH), molecular hydrogen (H$_2$) and helium (He), all of which are found in the Sun. As the Sun is the designated outgroup, the base state (0) indicates the presence of these species. With the exception of elemental hydrogen, the remaining single element species are those found in compounds. The spectroscopy of an object often only reports on the presence of an ion, as opposed to a full chemical analysis. \deleted{As the full chemical composition of a body.} As more detailed analysis becomes available, characters may be added to the matrix. Several chemical species used in this particular matrix are either not present in any of the satellites or unknown. These are included for future comparisons with other orbital bodies.

\subsection{Matrices}
The Jovian taxon-character matrix holds 68 taxa, consisting of the Sun (outgroup), four inner satellites, the main ring, four Galilean satellites and 59 irregular satellites. Appendix \ref{JupiterMatrix} contains the matrix, along with the references used in its construction.

The Saturnian matrix, presented in Appendix \ref{SaturnMatrix}, is created with 76 taxa. These taxa are the Sun (outgroup), six main rings, nine inner small satellites, four minor rings, eight large icy satellites, four Trojan satellites, three Alkyonides and their associated rings, and the 38 irregular satellites. The references used in the construction of the Saturnian matrix are located in Appendix \ref{SaturnMatrix}. Both matrices use the same characteristics, as discussed in Section \ref{Characteristics}, and are available in machine-readable format. 


\section{Results}

In this section we present the resulting taxonomic trees from the analysis of the Jovian and Saturnian satellites. 
The taxonomic trees are used to form the systematic classification of the Jovian (Table \ref{JupiterClassTable}) and Saturnian (Table \ref{SaturnClassTable}) satellite systems. Using the inverse Gauss equations \citep{Zappala1990HierarchicalClustering1}, in a similar manner to \cite{Nesvorny2003IrrSatEvol} and \cite{Turrini2008IrregularSatsSaturn}, we show in Tables \ref{JupiterClassTable} and \ref{SaturnClassTable} the dispersal velocities ($\delta V$) for each of the taxonomic groups where a single origin object is hypothesized, namely the irregular satellites. For these calculations we assume the largest representative of the cluster as the origin point. See Section \ref{cladistics} for further discussion.

\subsection{Jovian Taxonomy}
\label{JupiterTax}
The results of the cladistical analysis of the individual Jovian satellites are shown in Figure \ref{JupiterTree}. This 0.5 majority-rules consensus tree has a tree length score of 128, with a consistency index of 0.46 and a retention index of 0.85. \added{The low value of the consistency index is possibly due to the mixed use of ordered, multi-state, continuous characteristics and bi-modal compositional characteristics \citep{Farris1990phenetics}.} \replaced{These values indicate}{The high retention index suggests} that the consensus tree is robust and \replaced{indicative of the true}{demonstrates the most likely} relationships between the satellites. 

\begin{figure*}
\includegraphics[height=0.65\paperheight]{Fig1JupiterCladColourResub.pdf} 
\caption{Majority consensus taxonomic tree of objects in the Jovian system. This tree has a tree length score of 128, with a consistency index of 0.46 and a retention index of 0.85. Numbers indicate the frequency of each node in the 10000 most parsimonious tree block. \replaced{Colors are indicative of}{Colors represent terminology used in} traditional classification: \textcolor{Amalthea}{Amalthea inner regular family};
 \textcolor{Galilians}{Galilean family};
 \textcolor{Themisto}{Themisto prograde irregular};
 \textcolor{Himalia}{Himalia prograde irregular family};
 \textcolor{Carpo}{Carpo prograde irregular};
 \textcolor{Ananke}{Ananke irregular family};
 \textcolor{Carme}{Carme irregular family};
 \textcolor{Pasiphae}{Pasiphae irregular group};
 Unnamed and unclassified. 
Proposed groups and families are \replaced{indicated}{shown} on the right.} 
\label{JupiterTree}
\end{figure*}

\begin{deluxetable}{p{3cm}p{3.8cm}ccccccp{2cm}cc}
\label{JupiterClassTable}

\rotate

\tabletypesize{\scriptsize}


\tablecaption{Jovian Satellite Systematic Classification}


\tablenum{1}


\tablehead{\colhead{Taxonomy} & \colhead{Members} & \colhead{Orbit} & \colhead{Semi-major Axis} & \colhead{Inclination} & \colhead{Eccentricity} & \colhead{Density} & \colhead{Albedo} & \colhead{Composition} & \colhead{Velocity ($\delta V$)} & \colhead{Ref.} \\ 
\colhead{} & \colhead{} & \colhead{} & \colhead{(km)} & \colhead{} & \colhead{} & \colhead{($kg~m^{-3}$)} & \colhead{} & \colhead{} & \colhead{($m~s^{-1}$)} & \colhead{} } 

\startdata
Amalthea family & Thebe, Amalthea, Metis and Adrastea & Prograde & $< 3.0 \times 10^5$ & $< 2\degree$ & $< 0.02$ & $< 900$ & $< 0.1$ & predominantly water ice and silicates & $3570.4 \pm 491.8 $ & 1 \\
Galilean family & Io, Ganymede, Europa, Callisto & Prograde & $4.0\times 10^5$ – $2.0\times 10^6$ & $< 0.5\degree$ & $< 0.01$ & $> 1800$ & $> 0.18$ & water ice and silicates dominate; presence of SO$_2$; other chemical species present & \nodata & 2 \\
Jovian Irregular Satellite group & & & & & & & & & & \\
Himalia family & Leda, Elara, Lysithea, Himalia and Themisto. & Prograde & $7.5 \times 10^6$ - $1.8 \times 10^7$ & $25\degree$ - $55\degree$ & $0.1$ - $0.3$ & \nodata & $< 0.1$ & silicate based & $623.8 \pm 750.3$ & 3,4 \\
Ananke/Carme Family & S/2003 J3, S/2003 J9, Ananke subfamily, Carme subfamily and Sinope subfamily. & Retrograde & $1.88 \times 10^7$ - $2.5\times 10^7$ & $143\degree$ - $166\degree$ & $0.2$ - $0.4$ & \nodata & $< 0.07$ & \nodata & $457.2 \pm 445.7$ & 3,4 \\
Ananke Subfamily & Euanthe, Thyone, Mneme, Harpalyke, Praxidike, Thelxinoe and Ananke. & Retrograde & $2.0 \times 10^7$ - $2.15 \times 10^7$ & $145\degree$ - $152\degree$ & $0.2$ - $0.25$ & \nodata & $< 0.07$ & \nodata & $61.0 \pm 45.6$ & 3,4 \\
Carme Subfamily & Arche, Pasithee, Chaldene, Isonoe, Kale, Aitne, Erinome, Taygete, Carme, Kalyke, Eukelade and Kallichore. & Retrograde & $2.2 \times 10^7$ - $2.4 \times 10^7$ & $164\degree$ - $166\degree$ & $0.24$ - $0.27$ & \nodata & $< 0.07$ & \nodata & $36.1 \pm 13.1$ & 3,4 \\
Sinope Subfamily & Eurydome, Autonoe, Sinope and Callirrhoe. & Retrograde & $2.2 \times 10^7$ - $2.42 \times 10^7$ & $147\degree$ - $159\degree$ & $0.27$ - $0.35$ & \nodata & $< 0.06$ & \nodata & $323.9 \pm 97.3$ & \\
Iocaste Family & Euporie, S/2003 J18, Hermippe, Helike, Iocaste, S/2003 J15, Herse, S/2003 J4, Aoede, S/2003 J5 and S/2003 J10 & Retrograde & $1.9 \times 10^7$ - $2.5 \times 10^7$ & $140\degree$ - $165\degree$ & $0.1$ - $0.45$ & \nodata & $< 0.05$ & \nodata & $510.2 \pm 303.3$ & \\
Pasiphae Family & S/2003 J12, S/2011 J1, S/2010 J2, S/2003 J19, S/2010 J1, S/2011 J2, Sponde, Pasiphae, Megaclite, Hegemone, S/2003 J23, Cyllene, Kore and S/2003 J2. 
& Retrograde & $1.9 \times 10^7$ - $2.9 \times 10^7$ & $145\degree$ - $164\degree$ & $0.30$ - $0.421$ & \nodata & $< 0.1$ & \nodata & $412.3 \pm 224.5$ & 3,4 \\
\enddata


\tablerefs{(1) \citet{Barnard1892Amalthea};
(2) \citet{Galileo1610SidereusNuncius};
(3) \citet{Nesvorny2003IrrSatEvol};
(4) \citet{Sheppard2003IrrSatNature}.}

\end{deluxetable}


As can be seen in the Jovian taxonomic tree in Figure \ref{JupiterTree}, the satellites cluster into clades resembling the taxonomy proposed by \citet{Nesvorny2003IrrSatEvol} and \citet{Sheppard2003IrrSatNature}. The irregular satellites form a separate cluster from the prograde regular satellites. 

We maintain the closest family to Jupiter, the Amalthea family, as a valid taxonomic cluster. The dispersal velocity is very large and may \replaced{indicative}{suggest} that the Amalthea family did not form from a single object. This family, along with Jupiter's main ring, \replaced{are}{is} associated with the well known Galilean family. 

In the analysis, we maintain the `irregular' satellite group. The Himalia family clusters with the retrograde satellites, separate from the other prograde satellites. The Himalia family has relatively low inclinations in comparison with the Jovian retrograde satellites, and their high eccentricities could be explained by disruptions \citep{Christou2005HimaliaScattering}. The small satellites Themisto and Carpo cluster together with the other prograde satellites in the Himalia family. We propose that Themisto and Carpo be included in the Himalia family, as they are the sole members of the groups proposed by \citet{Sheppard2003IrrSatNature}, and show similar orbital characteristics. The large mean dispersal velocity calculated for the Himalia family (see Table \ref{JupiterClassTable}) was also noticed by \citet{Nesvorny2003IrrSatEvol} for the prograde satellites. The large mean dispersal velocity is due to the dispersal velocities of Themisto and Carpo. Without including these members, the mean dispersal velocity for the classical Himalia family is $154.6 \pm 72.5 m/s$, close to the escape velocity of Himalia, $121.14 m/s$. This dispersal velocity of the classical Himalia family was explained via gravitational scattering from Himalia by \citet{Christou2005HimaliaScattering}. Disruption and scattering could also be used to explain the large dispersal velocities of Themisto and Carpo, though further modeling is required. 

The term `irregular' is maintained for the retrograde family for consistency with the literature \citep{Nesvorny2003IrrSatEvol, Sheppard2003IrrSatNature, Nesvorny2004IrrSatFamilyOrigin, Beauge2007IrregSatRsonance, Jewitt2007IrregularSats}. The retrograde irregular satellites form a separate, but related, cluster to the prograde Himalia irregulars. The broad classifications introduced by \citet{Sheppard2003IrrSatNature} and \citet{Nesvorny2003IrrSatEvol} are preserved, though the Ananke/Carme family is unresolved and may be split into subfamilies. Separating out the traditional families \citep{Nesvorny2003IrrSatEvol, Sheppard2003IrrSatNature}, see colors in Figure \ref{JupiterTree}, gives smaller dispersal velocities. The traditional Ananke (escape velocity (eV) $23.10 m/s$) family has a $\delta V$ of $61.0 \pm 45.6 m/s$, traditional Carme (eV $29.83 m/s$) has $36.2 \pm 13.1 m/s$, and a newly created Sinope (eV $27.62 m/s$) family has $323.9 \pm 97.3 m/s$. 
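The escape velocities quoted for comparison here and below can be estimated from an assumed radius and bulk density via the standard relation $v_{esc} = \sqrt{2GM/R}$; the short sketch below uses placeholder values rather than the measured properties of any particular satellite.

\begin{verbatim}
import numpy as np

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

def escape_velocity(radius_m, density_kg_m3):
    """Escape velocity of a homogeneous sphere: v_esc = sqrt(2 G M / R)."""
    mass = (4.0 / 3.0) * np.pi * radius_m**3 * density_kg_m3
    return np.sqrt(2.0 * G * mass / radius_m)

# Placeholder example: a body of 15 km radius and 2000 kg m^-3 bulk density.
print(escape_velocity(15.0e3, 2000.0))   # ~16 m/s for these assumed values
\end{verbatim}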
These family dispersal velocities are smaller than the $\delta V$ of our unresolved Ananke/Carme family ($457.2 \pm 445.7 m/s$, see Table \ref{JupiterClassTable}). \citet{Nesvorny2003IrrSatEvol} used similarly small $\delta V$ values to establish the Ananke and Carme dynamical families. The dynamical situation could be explained through a more recent capture and breakup event for Ananke, Carme and Sinope that disrupted the ancestral irregular satellites. The identified Iocaste and Pasiphae families also have large dispersal velocities, \replaced{indicative}{suggestive} of disruptions. Following the nomenclature of \citet{Sheppard2003IrrSatNature}, each of the families and subfamilies is represented by the name of the largest contained satellite. Satellites within families are related by their retrograde orbits, high inclinations and eccentricities. In addition to their linked orbital characteristics, the satellites of the retrograde irregular group all show a low albedo \citep{Beauge2007IrregSatRsonance}. 

The Ananke subfamily is tightly constrained in its orbital characteristics, with a small dispersal velocity. While the characteristics listed in Table \ref{JupiterClassTable} would preclude them from being included in the Pasiphae family, their clustering around a common semi-major axis, inclination and eccentricity \replaced{indicated}{suggests} that they are a distinct young dynamical family. The members we include in the Ananke family for this analysis are all historical members of the family \citep{Jewitt2007IrregularSats}. Some of the satellites that have been historically included in the Ananke family \citep{Jewitt2007IrregularSats} are moved to other families. We do not add any new satellites to this family. 

The orbital characteristics of the Carme subfamily are tightly constrained. Satellites in this family orbit further from Jupiter, with higher orbital inclinations, but similar eccentricities to the Ananke family. As with the Ananke family, it is the highly constrained orbital characteristics and low mean dispersal velocity that justify the classification of this traditional family \citep{Jewitt2007IrregularSats}. According to the tree presented in Figure \ref{JupiterTree}, there is a continuum between the Ananke and Carme families. However, differences in orbital characteristics, broken down in Table \ref{JupiterClassTable}, distinguish both of these families from each other.

A new cluster, the Iocaste family, is defined as shown in Figure \ref{JupiterTree} and Table \ref{JupiterClassTable}. The semi-major axis of this family spans most of the orbital space where irregular satellites have been discovered. The lower eccentricities and albedo are used to separate this family from the Pasiphae family. As with the Pasiphae family, the Iocaste family has a high mean dispersal velocity ($510.2 \pm 303.3 m/s$ compared with an escape velocity of $3.16 m/s$), \replaced{indicative}{suggestive} of disruptions taking place at some point since the break-up of the original object \citep{Christou2005HimaliaScattering}. Iocaste, being the largest member of this family, is proposed as the representative object. Also included are several members that have previously been included in other families \citep{Jewitt2007IrregularSats}, along with new unnamed satellites. 
For full details on included satellites and the descriptive properties of the family, see Table \ref{JupiterClassTable}.

The Pasiphae family shows a broad range of orbital characteristics that, along with the large dispersal velocity ($412.3 \pm 224.5 m/s$ compared with an escape velocity of $47.16 m/s$), are \replaced{indicative}{suggestive} of disruptions during the family's life-time \citep{Christou2005HimaliaScattering}. The Pasiphae family has a broad range of semi-major axes and inclinations, with its members orbiting further from Jupiter and having larger eccentricities on average than the new Iocaste family, see Table \ref{JupiterClassTable}. A Pasiphae subfamily (see Figure \ref{JupiterTree}), with a $\delta V$ of $230.1 \pm 174.3 m/s$, can be identified. This may \replaced{indicate}{imply} a secondary, more recent break-up from Pasiphae. In addition, many of the unnamed satellites from recent observations \citep{Gladman2003JupIAU1, Gladman2003JupIAU2, Sheppard2003JupIAU1, Sheppard2003JupIAU2, Sheppard2003JupIAU3, Sheppard2003IAUJupSat,
Sheppard2004JupIAU1,Sheppard2004JupIAU2,
Jacobson2011NewJupSats, Sheppard2012JupIAU} are associated with this family; see Table \ref{JupiterClassTable} and Figure \ref{JupiterTree} for a complete list. 


\subsection{Saturnian Taxonomy}
\label{SaturnTax}
Cladistical analysis of the Saturnian \replaced{System}{system} yields the 0.5 majority-rules consensus tree, Figure \ref{SaturnTree}, constructed from the 10000 parsimonious trees, with a tree length score of 186. The tree has a consistency index of 0.30 and a retention index of 0.81. The consistency index of the Saturnian tree is lower than that of the Jovian tree, though this could be due to the number of taxa used \citep{Sanderson1989VaiationHomoplasy}. \added{As with the Jovian tree, this low consistency index could be due to the mixed character states. This effect is to be explored further in a future paper.} The high retention index indicates that the tree \replaced{is indicative}{is suggestive} of the true relationships \citep{Farris1989retention}. 


\begin{figure*}

\includegraphics[height=0.65\paperheight]{Fig2SaturnCladColourReSub.pdf} 
\caption{Majority consensus taxonomic tree of objects in the Saturnian system. The tree has a consistency index of 0.30 and a retention index of 0.81. Numbers indicate the frequency of each node in the 10000 most parsimonious tree block. \replaced{Colors are indicative of}{Colors represent terminology used in} classical classification: \textcolor{MainRing}{Main ring group, with associated shepherd satellites};
 \textcolor{IcySats}{Mid-sized Icy satellites and Titan};
 \textcolor{Trojans}{Trojan satellites};
 \textcolor{Alkanoids}{Alkyonides and associated rings};
 \textcolor{Inuit}{`Inuit' prograde irregular family};
 \textcolor{Gallic}{`Gallic' prograde irregular family};
 \textcolor{Norse}{`Norse' retrograde irregular family};
 Unnamed and unclassified. 
 Proposed groups and families are \replaced{indicated}{shown} to the right. 
}\n\\label{SaturnTree}\n\\end{figure*}\n\n\n\\begin{deluxetable}{p{3cm}p{3.8cm}ccccccp{2cm}cc}\n\n\\rotate\n\n\\tabletypesize{\\scriptsize}\n\n\\tablecaption{Saturnian Satellite Systematic Classification}\n\\label{SaturnClassTable}\n\n\\tablenum{2}\n\n\n\\tablehead{\\colhead{Taxonomy} & \\colhead{Members} & \\colhead{Orbit} & \\colhead{Semi-major Axis} & \\colhead{Inclination} & \\colhead{Eccentricity} & \\colhead{Density} & \\colhead{Albedo} & \\colhead{Composition} & \\colhead{Velocity ($\\delta V$)} & \\colhead{Ref.} \\\\ \n\\colhead{} & \\colhead{} & \\colhead{} & \\colhead{(km)} & \\colhead{} & \\colhead{} & \\colhead{($kg~m^{-3}$)} & \\colhead{} & \\colhead{} & \\colhead{($m~s^{-1}$)} & \\colhead{} } \n\n\\startdata\nSaturnian Inner system Group, Main ring and Icy satellites & Atlas, Janus, Epimetheus, Prometheus, Janus\/Epimetheus ring, G ring, D ring, Pan, Aegaeon, S\/2009 S1, F ring, B ring, Cassini Division, C ring, Daphnis and A ring. Possible members: Telesto, Calypso, Methone ring arc, Anthe ring arc, Pallene ring arc, Methone, Anthe, Pallene, Polydeuces Mimas, Tethys, Enceladus family, Hyperion, Titan and Iapetus ; see Section \\ref{SaturnTax} for discussion. & Prograde & $< 4.0 \\times 10^6$ & $< 15\\degree$ & $< 0.03$ & $550$ \u2013 $1900$ & $0.1$ - $1$ & Composition of water ice with silicates and presence of CO$_2$. Other chemical species may be present & \\nodata & 1, 2 \\\\\nEnceladus Family & E ring, Enceladus, Rhea, Dione and Helene. & Prograde & $1.8 \\times 10^5$ - $5.3 \\times 10^5$ & $< 0.5\\degree$ & $0$ & $1200$ \u2013 $1700$ & $> 0.7$ & Complex composition, predominately water ice and silicates, with Hydrocarbons and CO$_2$ present & \\nodata & \\\\\nSaturnian Irregular Satellite group & & & & & & & & & & \\\\\nAlbiorix family & Bebhionn, Erriapus, Albiorix and Tarvos & Prograde & $1.6 \\times 10^7$ - $1.8 \\times 10^7$ & $30\\degree$ - $40\\degree$ & $0.4$ - $0.6$ & \\nodata & $< 0.1$ & \\nodata & $80.9 \\pm 1.6 $ & 3,4,5 \\\\\nSiarnaq family & Tarqeq, Kiviuq, Ijiraq, Paaliaq and Siarnaq & Prograde & $1.1 \\times 10^7$ - $1.9 \\times 10^7$ & $40\\degree$ - $50\\degree$ & $0.1$ \u2013 $0.4$ & \\nodata & $< 0.1$ & \\nodata & $266.8 \\pm 60.0$ & 3,4,5 \\\\\nPhoebe family & Phoebe Ring, Phoebe, Fenrir, Loge, Aegir subfamily, and Ymir subfamily. & Retrograde & $1.1 \\times 10^7$ - $2.51 \\times 10^7$ & $> 145\\degree$ & $> 0.1$ & \\nodata & $< 0.1$ & \\nodata & $763.3 \\pm 259.0 $ & 3,4,5 \\\\\nAegir subfamily & S\/2007 S2, Mundilfari, Jarnsaxa, S\/2006 S1, Bergelmir, Suttungr, Farbauti, S\/2007 S3, Aegir and Fornjot. & Retrograde & $1.6 \\times 10^7$ - $2.51 \\times 10^7$ & $> 150\\degree$ & $0.1$ \u2013 $0.25$ & \\nodata & \\nodata & \\nodata & $295.1 \\pm 125.0$ & 5 \\\\\nYmir subfamily & Skathi, Skoll, Greip, Hyrrokkin, S\/2004 S13, S\/2004 S17, Narvi, S\/2004 S12, S\/2004 S07, Hati, Bestla, Thrymr, S\/2006 S3, Kari, Surtur and Ymir & Retrograde & $1.55 \\times 10^7$ - $2.30 \\times 10^7$ & $> 145\\degree$ & $0.25$ - $0.6$ & \\nodata & $< 0.1$ & \\nodata & $497.5 \\pm 247.7$ & 5 \\\\\n\\enddata\n\n\n\\tablerefs{(1) \\citet{Huygens1659systema};\n(2) \\citet{Cassini1673Sat2Sats,Cassini1686Sat2Sats}\n(3) \\citet{Nesvorny2003IrrSatEvol};\n(4) \\citet{Sheppard2003IrrSatNature};\n(5) \\citet{Turrini2008IrregularSatsSaturn}.}\n\n\\end{deluxetable}\n\n\n\n\nThe tree shown in Figure \\ref{SaturnTree} highlights the diversity of structures found in the orbit of Saturn. 
Satellites cluster into two main groupings around Saturn, the Inner group, composed of rings and icy satellites, and the Irregular satellite group; see Table \ref{SaturnClassTable} for members and diagnostic properties of each clade. While the traditional classification nomenclature \citep{Nesvorny2003IrrSatEvol, Sheppard2003IrrSatNature, Jewitt2007IrregularSats} is broadly conserved, several \replaced{anomalies}{discrepancies} require discussion. Table \ref{SaturnClassTable} shows our new taxonomy, along with included members of the families and their descriptive properties.

The Main ring and Icy satellite group form an unresolved, inner system group. This group includes the Saturnian ring system, the Alkyonides and their associated ring arcs, as well as the larger Icy satellites and their Trojans. We have confirmed that the recently discovered S/2009 S1 \citep{Spitale2012s2009s1} is part of this group due to its orbital characteristics. Within this large group, there is the resolved Enceladus family.

Our results suggest that the traditionally classified Alkyonides, Methone, Anthe and Pallene, along with their associated rings, are not clustered with the Enceladus family, as would be expected by their orbital location, between Mimas and Enceladus, within the E-ring. Due to their bulk water ice composition, the Alkyonides associate with the Main ring objects, see Figure \ref{SaturnTree}. The low density and mid-range albedo of Pallene and Methone \citep{Hedman2009SatRingArcs} \replaced{indicates}{suggest} that the association with the Main ring group is genuine. The dynamic resonances of both Methone and Anthe \citep{Callegari2010SmallSaturnSatsDynamics} \replaced{are also indicative of these objects being}{imply that these objects were} captured, rather than forming in-situ. As there is very little known about the composition of these objects, beyond their bulk water ice composition \citep{Hedman2009SatRingArcs}, further study and dynamical modeling of the capture process is required to resolve their true origins.

Like the Alkyonides, the Trojan satellites of Tethys, Calypso and Telesto, also form an association with the main rings. The reason for this could be that Calypso and Telesto, like the Alkyonides, are also possibly captured main ring objects. The capture dynamics could be similar to those of the Jovian Trojan asteroids \citep{Morbidelli2005TrojanCapture, Lykawka2010TrojanCaputer, Nesvorny2013TrojanCaptureJJ}. Both the Tethys Trojans \citep{Buratti2010SatInnerSat} and the main ring objects are chiefly composed of water ice, \replaced{indicative of}{implying} a common origin. The bulk composition of Tethys is also predominantly water ice \citep{Buratti2010SatInnerSat}, with a very small fraction of silicates. The Trojans may instead have formed from the same material as Tethys itself, either during accretion \citep{Charnoz2011SaturnSatAccretion} or in the same orbit from a large debris disk \citep{Canup2010SaturnSatOrigin}. As Tethys is also in the unresolved Main ring and Satellite group, we cannot differentiate between the two scenarios. Further compositional information about the Tethys Trojans could shed light on this issue. Polydeuces, a Trojan of Dione, also forms an association with the Main ring group in our analysis. This could be due to overemphasis on orbital and physical characteristics, since the bulk composition of Polydeuces is unknown \citep{Thomas2013InnerSatSatu}. 
Helene, the better studied Trojan of Dione \citep{Thomas2013InnerSatSatu}, is well within the Enceladus Family. Helene and Dione are closely associated in our analysis, \replaced{indicating}{implying} that Helene is a daughter object of Dione.

The outer icy satellites, Titan, Hyperion and Iapetus, do not form a single cluster, and are therefore not considered a valid taxonomic group. They are associated with the Main ring and Icy Satellite group. The Enceladus family is formed by the known association between the E-ring, Enceladus and the icy satellites \citep{Verbiscer2007Enceladus}, which is mainly due to the detection of volatile chemicals, such as NH$_3$, CH$_4$ and other hydrocarbons. Plumes from Enceladus containing these chemicals \citep{Porco2006EnceladusPlume}, thought to be representative of the subsurface ocean \citep{Porco2006EnceladusPlume}, \deleted{and} are the source of the E ring \citep{Sphan2006EnceladusEring}. Titan itself also has an abundance of these volatiles \citep{Hirtzig2013VIMSTitan}, \replaced{indicating}{implying} a possible association between the icy satellites of Saturn that remains unresolved in our analysis.
Material from the outer satellites, particularly Phoebe and its associated ring \citep{Tosi2010IapetusDark, Tamayo2011IapetusDust}, is thought to play a role in the observed hemispherical dichotomy on Iapetus \citep{Tosi2010IapetusDark}. In Figure \ref{SaturnTree}, Iapetus is unresolved in the Main ring and Icy Satellite group.

The irregular satellites form a major cluster with each other, separate from the inner Saturnian system, and are therefore collected under the Irregular satellite group. Along with their high inclinations, eccentricities and semi-major axes, the Irregular satellite group is characterized by a dark albedo, compared with the other objects in the Saturnian system. We follow the naming convention introduced with the Jovian satellites, Section \ref{JupiterTax}, where each irregular satellite family is represented by the largest member \citep{Jewitt2007IrregularSats}. We therefore rename the classical Inuit group \citep{Blunck2010SaturnSats} to the Siarnaq family and the Gallic group \citep{Blunck2010SaturnSats} to the Albiorix family. Though this does change the formal name of the clusters, we encourage the discoverers of the unnamed satellites \citep{Gladman200112Sat,Sheppard2003IAUJupSat,Jewitt2005IAUCSat,Sheppard2006SatIAUC,Sheppard2007SatIAUC} and any future discoveries that are placed in these groups, to follow IAU convention and use names from Inuit and Gallic mythology for satellites in the Siarnaq and Albiorix families respectively. 
As in \cite{Turrini2008IrregularSatsSaturn}, the Albiorix family is distinct and has a low mean dispersal velocity ($\delta V$). The Siarnaq family has a higher $\delta V$, again \replaced{indicative}{suggestive} of disruptions \citep{Christou2005HimaliaScattering}. The mean $\delta V$ of all prograde satellites is $364.8 \pm 114.9 m/s$, only slightly higher than that of the Siarnaq family \citep{Turrini2008IrregularSatsSaturn}. This could \replaced{be indicative of}{imply} a disruption scenario, with a more recent capture of the Albiorix family parent body disrupting the older Siarnaq family. Our cladistical analysis supports this scenario, as the Siarnaq family shows a more branching structure than the Albiorix family. 
Further compositional information about these bodies, as well as dynamical modeling, could resolve this complex situation.

\replaced{Our study shows that the classically named retrograde Norse group \citep{Blunck2010SaturnSats} is determined to be the Phoebe family, which can be split into at least two subfamilies.}{In our analysis, we separate out the retrograde irregular satellites, including Phoebe, from the prograde irregular satellites. In previous taxonomy, this group has been classified as the `Norse' group \citep{Blunck2010SaturnSats}. In our revised nomenclature, this group should be termed the Phoebe family. We further separate out two clades, distinct from Phoebe and its associated ring.} The first clade, the unresolved Aegir subfamily\added{ (previously identified as the S/2004 S10 group in \cite{Turrini2008IrregularSatsSaturn})}, is characterized as having, on average, orbits further from Saturn, with low eccentricities and higher inclinations. The second clade is the Ymir subfamily and is characterized, on average, by being closer to Saturn, but with high eccentricities. This subfamily shows a branching structure and may be further split \citep{Grav2007IrregSatCol}. \added{This family was also identified by \cite{Turrini2008IrregularSatsSaturn}.} We identify an association between Fenrir and Loge, with a low dispersal velocity ($\delta V = 114.4 m/s$), \replaced{indicative}{suggestive} of a recent breakup. The high dispersal velocity ($\delta V$) of the Phoebe family is due to the selection of Phoebe as a reference point. If Phoebe and the \replaced{associate}{associated} ring are removed from the \replaced{subfamily}{family}, \added{and Ymir (with an escape velocity of $8.56 m/s$) is selected as the reference object,} the $\delta V$ is halved from $763.3 \pm 259.0 m/s$ to $439.9 \pm 215.1 m/s$. The satellite with the lowest $\delta V$ relative to Phoebe is S/2007 S2, with $\delta V = 248.0 m/s$, still \added{significantly }larger than the escape velocity of Phoebe ($100.8 m/s$). \deleted{If Phoebe is removed from the family, and Ymir (escape velocity $8.56 m/s$) selected as the reference object, the mean $\delta V$ of the cluster is $439.9 \pm 215.1 m/s$, lower than the $\delta V$ of the Ymir subfamily.} \added{\cite{Turrini2008IrregularSatsSaturn} also found a dynamical separation between Phoebe and the other retrograde satellites.} This is supportive of the narrative \replaced{for a different origin of Phoebe and}{that Phoebe has a different origin to} the other retrograde irregular satellites of Saturn \citep{Turrini2008IrregularSatsSaturn}. The high $\delta V$ among all the subfamilies shows \added{that} a complex dynamical situation is present in the Saturnian irregular satellites. Phoebe has been shown to clear its orbital parameter space \citep{Turrini2008IrregularSatsSaturn}, which could have had a major disruptive effect on those remaining satellites \citep{Turrini2008IrregularSatsSaturn}. \added{The similarities between our analysis and that of \cite{Turrini2008IrregularSatsSaturn} further validate cladistics as a method suitable for applications in Solar system astronomy.} The addition of \replaced{detail}{detailed} compositional information from the other irregular satellites to an updated cladistical analysis could resolve some of the \added{minor} discrepancies found between this analysis and that of \cite{Turrini2008IrregularSatsSaturn}.

We assign the currently unnamed irregular satellites to each of the subfamilies. 
S/2006 S1, S/2007 S2 and S/2007 S3 are part of the Aegir subfamily. We include S/2004 S13, S/2004 S17, S/2004 S12, S/2006 S3 and S/2004 S07 in the Ymir subfamily. See Table \ref{SaturnClassTable} for a full list of members in each subfamily. As with the Albiorix and Siarnaq families, we encourage discoverers of new satellites that fall within the Phoebe family to follow the Norse mythological naming convention as set by the IAU. 



\section{Discussion}
\label{Discussion}

In this study we have shown, using the Jovian and Saturnian satellite systems, that cladistics can be used in a planetary science context. We have ensured that the technique is objective by statistically creating bins for characteristics that are continuous in nature, see Section \ref{Characteristics}. By thus ensuring the objectivity of our analysis, we increase the confidence that cladistics is a valid technique that can be applied in the planetary sciences. Our results largely support the traditional classifications used in both the Jovian and Saturnian systems. However, the power of cladistics is shown in the ease of classifying new satellites, as well as in identifying substructures within larger clusters. Cladistics also offers a method of analysis where limited information is available. In our study we have examined well studied satellites, as well as those for which only dynamical information is available. In traditional methods of analysis, either only dynamical information is considered, or the dataset is truncated in favor of more well studied bodies. Cladistics offers a method that can incorporate as much information about an object as is available, while accounting for any unknown characteristics. As more detailed information becomes available, either of known or newly discovered satellites, cladistics offers a systematic method for inclusion or revision of the classification system. 

The relationships that we noted between the satellites suggest common formation scenarios within the clusters. The prograde, inner families of Jupiter are the products of accretion from a circumplanetary disk \citep{Canup2002GalSatAcc}. The association of the Amalthea and Galilean families, along with the Main ring of Jupiter, in our analysis supports this hypothesis. Clustering of the Himalia family with other `irregular' satellites \replaced{indicates}{implies} a capture scenario. The prograde nature of the Himalia family is possibly explained via a nebula drag capture mechanism \citep{Cuk2004HimaliaGasDrag}. Further modeling of the Himalia family is required to ascertain their true origins, particularly in light of the Jovian pebble formation hypothesis that may not include an extended nebula \citep{Levison2015GasGiantsPebbles}. 

With the proposal that Sinope forms its own subfamily, each of the Jovian irregular satellite subfamilies contains only a single large satellite. This strengthens the hypothesis that each of the families represents a capture event and subsequent breakup \citep{Nesvorny2007IrrSatCap} of an object external to the Jovian system. Two of the subfamilies, the Pasiphae and Sinope subfamilies, show a broad range of orbital characteristics and larger dispersal velocities. The other two, the Ananke and Carme subfamilies, show much more constrained characteristics and smaller dispersal velocities. 
This dichotomy between the two types of subfamilies, broad versus constrained, could \replaced{indicate}{imply} at least two capture events, with the earlier Pasiphae and Sinope families being disrupted by the later Ananke and Carme captures. The Iocaste family does not contain a large progenitor satellite, but has high dispersal velocities. This is \replaced{indicative}{suggestive} of a possible ejection scenario. An alternative hypothesis is that the capture events happened simultaneously, but there were multiple disruption events. Both scenarios are supported by the dichotomy in dispersal velocities. Future analysis and simulations of the origins of the irregular satellites could help determine which theory is more likely to be correct. 

As with the Jovian satellites, there are multiple origin scenarios for the Saturnian rings and satellites. The results from our analysis support a growing body of work showing the complexity of formation scenarios in the Saturnian system. The rings themselves possibly formed after the breakup of an inner icy satellite \citep{Canup2010SaturnSatOrigin}. 

The unresolved nature of the inner Saturnian system reflects this complexity of formation scenarios. The main ring satellites, along with the Alkyonides and Tethys Trojans, possibly formed via accretion from the current ring system \citep{Charnoz2010SaturnMooletsfromMainRings}. The Alkyonides and Tethys Trojans were then secondarily captured in their current orbits. The major icy satellites, those in the E-ring and the outer satellites, probably formed in an accretion scenario, with delivery of the silicate material from the outer system \citep{Salmon2017SaturnMidAccretion}. Titan could be secondarily derived from multiple subsatellites that formed in the same disk \citep{Asphaug2013SatMerger}. The volatiles were delivered by comets, with at least one, Phoebe, being captured in orbit. The size of Phoebe is not traditionally associated with comet nuclei, but at least one comet, C/2002 VQ94, with a similar $\sim 100$ km diameter, has been observed \citep{Korsun2014c2002vq94100kmComet}. The irregular satellite families and subfamilies formed from collisional breakup events \citep{Nesvorny2004IrrSatFamilyOrigin} of the captured comet nuclei. The large dispersal velocities of the subfamilies \replaced{indicate}{imply} that this capture and disruption process is complex and requires detailed modeling.

We have shown that cladistics can be used in the classification of the Jovian and Saturnian satellite systems. Consequently, several related studies may be attempted in the future. Uranus and Neptune have similarly complex satellite systems to those of Jupiter and Saturn \citep{Jewitt2007IrregularSats}. These satellite systems could also be classified using cladistics, particularly the irregular satellites. Such a study is hampered by a lack of completeness in the irregular satellite dataset \citep{Sheppard2005UransIrr, Sheppard2006NeptuneIrr}, but may become practical as observational technology improves and the hypothesized small irregular satellites are discovered. Cladistics could be used to further investigate the origins of the irregular satellites of Saturn and Jupiter. As the irregular satellites are thought to be captured bodies \replaced{\citep{Nesvorny2007IrrSatCap}}{\citep[e.g.][]{Nesvorny2007IrrSatCap}}, the question becomes from \replaced{what}{which} small body population they originated. 
Comparisons between the well studied irregular satellites and other \replaced{solar}{Solar} system bodies could help constrain the origins of these satellites. 


\section{Conclusions}
\label{Conclusion}

We have shown that the new application of cladistics to the Jovian and Saturnian satellite systems is valid for investigating the relationships between orbital bodies. In the Jovian system, the traditional classification categories \citep{Nesvorny2003IrrSatEvol,Sheppard2003IrrSatNature,Jewitt2007IrregularSats} are preserved. We support the hypothesis put forward by \cite{Nesvorny2007IrrSatCap} that each Jovian irregular satellite family can be represented by its largest member, and that each family is the remnant of a dynamical capture event and subsequent breakup. We can also assign recently discovered, as yet unnamed, satellites to each of their respective Jovian families. Cladistical analysis of the Saturnian system broadly preserves the traditional classifications \citep{Nesvorny2003IrrSatEvol, Sheppard2003IrrSatNature, Jewitt2007IrregularSats,Turrini2008IrregularSatsSaturn}, strengthening the validity of the cladistical method. In the Phoebe family of retrograde, irregular satellites, we assign two subfamilies\added{, similar to those found by \cite{Turrini2008IrregularSatsSaturn}}. We rename the classical mythological designations for the Saturnian irregular satellites to represent the largest member of the subfamily, in order to be consistent with the Jovian naming convention. Newly discovered, unnamed Saturnian satellites are easily assigned to the various subfamilies. Through the application of the technique to the Jovian and Saturnian systems, we show that cladistics can be used as a valuable tool in a planetary science context, providing a systematic method for future classification.


\acknowledgments
This research was in part supported by the University of Southern Queensland's Strategic Research Initiative program. We wish to thank an anonymous reviewer for \replaced{their}{his/her} comments, particularly on Multivariate Hierarchical Clustering. The AAS Statistics Reviewer provided valuable feedback on the methodology. Dr. Guido Grimm assisted with the cladistical methodology and terminology used in this paper. Dr. Pablo Goloboff provided assistance with TNT, which is subsidized by the Willi Hennig Society, as well as additional comments on the methodology. We would like to thank \added{Dr.} Henry Throop for discussions regarding the ring systems.

\software{
Mesquite 3.10 \citep{Mesquite}, 
Python 3.5, 
Spyder 2.3.8 \citep{Spyder238}, 
Anaconda Python distribution package 2.40 \citep{Anaconda240},
\added{pandas Python package \citep{Mckinney2010Pandas},
SciPy Python package \citep{Jones2010SciPy},}
TexMaker 4.1.1, 
Tree analysis using New Technology (TNT) 1.5 \citep{Goloboff2008TNT, Golboff2016TNT15},
Zephyr 1.1: Mesquite package \citep{MesquiteZephyr}.
}


\bibliographystyle{aasjournal} 