diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzdglx" "b/data_all_eng_slimpj/shuffled/split2/finalzzdglx" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzdglx" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nBlack holes are the most fascinating objects in our Universe. Due to the strong gravity, there are many interesting astronomical phenomena around them, such as the black hole lensing, shadow, and accelerator. In particular, with the help of the geodesics of a test particle, these phenomena can be well studied. Therefore, investigating different orbital trajectories for photons and massive particles is of great interest.\n\nRecently, motivated by the detection of gravitational waves \\cite{Abbott}, the study of the geodesics has received extensive attention. For example, in the ringdown stage of the black hole merger, the frequency of the gravitational waves is characterized by the linearized vibrational modes, the quasinormal modes. In the eikonal limit, such mode is associated with the spacetime's light ring \\cite{Mashhoona,Schutz,IyerWill,Cardoso}, which can be obtained by the analysis of the null geodesics. On the other hand, the final spin of the black hole can also be well estimated by the Buonanno-Kidder-Lehner receipt \\cite{Buonanno} through solving the innermost stable circular orbits (ISCOs) from the time-like geodesics. During the inspiral stage, accompanied by the gravitational wave radiation, the two initial black holes of extreme mass ratio approach to each other. In this process, periodic orbits act as successive orbit transition states and play an important role in the study of the gravitational wave radiation \\cite{Glampedakis} . Inspired by this, Levin \\emph{et. al.} proposed a classification for the periodic orbits of massive particles, which is very useful for understanding the dynamics of the black hole merger. In their classification, each periodic orbit is characterized by three integers, $z$, $w$, and $v$, which, respectively, denote the zoom, whirl, and vertex behaviors. This study has been carried out for the Schwarzschild black holes, Kerr black holes, charged black holes, Kerr-Sen black holes, and naked singularities \\cite{Levin,Grossman,Levin2,Misra,Babar,LiuDing}.\n\nOn the other hand, supermassive black hole surrounded by a test particle is reminiscent of a hydrogen atom while in a macroscopic level. Then we can make an analogy between these periodic orbits and the ones of the electron around an atom. Thus, we can obtain the discrete energy levels for the black hole system \\cite{Levin2}. By measuring these energy levels, we can extract the information of the black hole. Especially, combining with the black hole thermodynamics, we can explore the area and entropy spectra for the black hole \\cite{Bekenstein1}. Therefore, with this energy levels, the fundamental black hole physics will be revealed.\n\nFrom another side, modified gravity continues to be an important and fascinating subject in gravitational physics. Among them, Ho\\u{r}ava-Lifshitz (HL) theory \\cite{Horava} proposed by Ho\\u{r}ava at a Lifshitz point is an interesting one. It keeps spatial general covariance and time reparametrization invariance. Such theory also performs as a good candidate for studying the quantum field theory of gravity. In the infrared divergence limit, it can reduce to the Einstein's gravity. A progress report on the recent developments of HL gravity can be found in Ref. \\cite{anzhong}. 
In this paper, we aim to study the geodesics and periodic orbits around the black hole in deformed HL gravity, and to show how they differ from those in general relativity. This will also bring us insight into detecting the gravitational waves from black hole mergers in deformed HL gravity.\n\nThe present paper is organized as follows. In Sec. \\ref{bhg}, we first introduce the black hole solution and its geodesics. In Sec. \\ref{epbo}, by employing the effective potential, we investigate the marginally bound orbits and ISCOs. Based on these results, the periodic orbits are then studied in detail. Finally, we summarize and discuss our results in Sec. \\ref{Conclusion}.\n\n\n\n\\section{Black hole and geodesics}\n\\label{bhg}\n\nIn the limit of cosmological constant $\\Lambda_W \\to 0$, the action of the deformed HL gravity is given by \\cite{KS}\n\\begin{eqnarray}\nS_{HL}&=&\\int dtd^3x \\Big({\\cal L}_0 + {\\cal L}_1\\Big),\n\\end{eqnarray}\nwhere ${\\cal L}_0$ and ${\\cal L}_1$ read\n\\begin{eqnarray}\n {\\cal L}_0 &=& \\sqrt{g}N\\left\\{\\frac{2}{\\kappa^2}(K_{ij}K^{ij}\n -\\lambda K^2)+\\frac{\\kappa^2\\mu^2(\\Lambda_W R\n -3\\Lambda_W^2)}{8(1-3\\lambda)}\\right\\}\\,,\\\\\n {\\cal L}_1&=&\n\\sqrt{g}N\\left\\{\\frac{\\kappa^2\\mu^2 (1-4\\lambda)}{32(1-3\\lambda)}R^2\n-\\frac{\\kappa^2}{2\\tilde{\\omega}^4} \\left(C_{ij} -\\frac{\\mu \\tilde{\\omega}^2}{2}R_{ij}\\right)\n\\left(C^{ij} -\\frac{\\mu \\tilde{\\omega}^2}{2}R^{ij}\\right) +\\mu^4R\n\\right\\}.\n\\end{eqnarray}\nHere $\\tilde{\\omega}$, $\\lambda$, $\\mu$, and $\\kappa$ are parameters in HL gravity, and the extrinsic curvature $K_{ij}$ and Cotton tensor $C_{ij}$ are\n\\begin{eqnarray}\n K_{ij} &=& \\frac{1}{2N} \\Bigg(\\partial_t g_{ij}-\\nabla_i N_j -\n\\nabla_j N_i\\Bigg),\\\\\n C^{ij}&=&\\epsilon^{ik\\ell}\\nabla_k\\left(R^j{}_\\ell\n-\\frac14R\\delta_\\ell^j\\right)\\nonumber\\\\\n &=&\\epsilon^{ik\\ell}\\nabla_k R^j{}_\\ell\n-\\frac14\\epsilon^{ikj}\\partial_kR.\n\\end{eqnarray}\nFor $\\lambda=1$ and $\\tilde{\\omega}=16\\mu^{2}\/\\kappa^{2}$, there is a static and\nasymptotically flat black hole solution given by Kehagias and Sfetsos (KS) \\cite{KS}\n\\begin{eqnarray}\n ds^{2}=-N^{2}(r)dt^{2}+\\frac{1}{f(r)}dr^{2}+r^{2}(d\\theta^{2}+\\sin^{2}\\theta d\\phi^{2}),\n\\end{eqnarray}\nwhere the metric functions are given by\n\\begin{eqnarray}\n N^{2}(r)=f(r)=1+\\tilde{\\omega}r^{2}-\\sqrt{\\tilde{\\omega}^{2}r^{4}+4\\tilde{\\omega}Mr}.\n\\end{eqnarray}\nWhen $\\tilde{\\omega}\\rightarrow\\infty$, this black hole reduces to the Schwarzschild black hole. In order to clearly show their difference, we make a parameter transformation,\n\\begin{eqnarray}\n \\tilde{\\omega}=\\frac{1}{2\\omega^{2}}.\n\\end{eqnarray}\nThus the parameter $\\omega$ has a positive value, and the Schwarzschild black hole is recovered at $\\omega$=0. The metric functions then take the following form\n\\begin{eqnarray}\n N^{2}(r)=f(r)=1+\\frac{1}{2\\omega^{2}}(r^{2}-\\sqrt{r^{4}+8Mr\\omega^{2}}).\n\\end{eqnarray}\nSolving $f(r)=0$, we can obtain the horizon radii for the KS black hole: \n\\begin{eqnarray}\n r_{\\pm}=M\\pm\\sqrt{M^{2}-\\omega^{2}}.\n\\end{eqnarray}\nA KS black hole corresponds to $\\omega\/M\\leq1$, while the extremal black hole occurs at $\\omega\/M=1$, where the two horizons coincide with each other.
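\n\nAs a quick numerical check of this horizon structure (our sketch, not part of the original derivation; it assumes NumPy and SciPy), one can verify that the roots of $f(r)=0$ agree with the analytic radii $r_{\\pm}$:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.optimize import brentq\n\nM, w = 1.0, 0.5    # mass and deformation parameter omega (units of M)\n\ndef f(r):\n    # KS metric function written in terms of omega\n    return 1.0 + (r**2 - np.sqrt(r**4 + 8.0*M*r*w**2)) \/ (2.0*w**2)\n\nr_p = M + np.sqrt(M**2 - w**2)    # analytic outer horizon\nr_m = M - np.sqrt(M**2 - w**2)    # analytic inner horizon\n\n# numerical roots, bracketed around the analytic values\nprint(r_p, brentq(f, 1.2, 3.0))   # 1.8660... twice\nprint(r_m, brentq(f, 0.01, 0.5))  # 0.1339... twice\n\\end{verbatim}\n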
\nNow let us turn to the geodesics. For a particle freely moving in a spherically symmetric black hole background, its motion is always in the equatorial plane under an appropriate coordinate system. Thus without loss of generality, we set $\\theta=\\pi\/2$. Then the Lagrangian for a particle is\n\\begin{eqnarray}\n 2\\mathcal{L}=g_{\\mu\\nu}\\dot{x}^{\\mu}\\dot{x}^{\\nu}\n =g_{tt}\\dot{t}^{2}+g_{rr}\\dot{r}^{2}+g_{\\phi\\phi}\\dot{\\phi}^{2}\n =\\delta,\n\\end{eqnarray}\nwith $\\delta$=-1 and $0$ for massive and massless particles, respectively. Here we consider a massive particle, so we take $\\delta$=-1. Following the Lagrangian, the generalized momentum is $p_{\\mu}=\\frac{\\partial\\mathcal{L}}{\\partial\\dot{x}^{\\mu}}=g_{\\mu\\nu}\\dot{x}^{\\nu}$. On the other hand, this spacetime admits two Killing fields $\\partial_{t}$ and $\\partial_{\\phi}$. So there are two corresponding constants $E$ and $l$ for each geodesic curve, which are the conserved energy and orbital angular momentum per unit mass of the motion. Thus we have\n\\begin{eqnarray}\n p_{t}&=&g_{tt}\\dot{t}=-E,\\\\\n p_{\\phi}&=&g_{\\phi\\phi}\\dot{\\phi}=l,\\\\\n p_{r}&=&g_{rr}\\dot{r}.\n\\end{eqnarray}\nSolving the above three equations, we have\n\\begin{eqnarray}\n \\dot{t}&=&\\frac{E}{N^{2}(r)},\\\\\n \\dot{\\phi}&=&\\frac{l}{r^{2}},\\\\\n \\dot{r}^{2}&=&f(r)\\bigg(\\frac{E^{2}}{N^{2}(r)}-\\frac{l^{2}}{r^{2}}-1\\bigg).\\label{rr}\n\\end{eqnarray}\nObviously, such geodesic motion for the KS black hole depends on the parameter $\\omega$ of the deformed HL gravity. Note that the study of the geodesics was carried out with different interests in Refs. \\cite{ChenJing,ChenWang,Setare,Abdujabbarov,Enolskii,Vieira}.\n\n\n\n\n\\section{Effective potential and bound orbits}\n\\label{epbo}\n\nIn this section, we mainly study the $r$-motion. For simplicity, we express Eq. (\\ref{rr}) in the following form\n\\begin{eqnarray}\n \\dot{r}^{2}=E^{2}-V_\\text{eff},\n\\end{eqnarray}\nwith the effective potential given by\n\\begin{eqnarray}\n V_{\\text{eff}}=f(r)\\bigg(1+\\frac{l^{2}}{r^{2}}\\bigg).\\label{effective}\n\\end{eqnarray}\nAs $r\\rightarrow\\infty$, we can expand the metric function $f(r)$\n\\begin{eqnarray}\n f(r\\rightarrow\\infty)=1-\\frac{2M}{r}+\\frac{4M^{2}\\omega^{2}}{r^{4}}\n +\\mathcal{O}\\left(\\frac{1}{r^{7}}\\right).\n\\end{eqnarray}\nTherefore, one has $V_\\text{eff}|_{r\\rightarrow\\infty}$=1. In general, a bound orbit is bounded by two turning points. One is near the black hole, while the other one is far from it. With the increase of the energy $E$, the farther turning point moves farther out, and it approaches infinity when $E$=1. Above this value, the particle will have positive velocity at infinity, i.e., $\\dot{r}^{2}=E^{2}-V_\\text{eff}|_{r\\rightarrow\\infty}>0$, which leads to no bound orbit. Therefore, $E=1$ is the maximum of the energy for the bound orbits.\n\n\n\\subsection{Marginally bound orbits}\n\nHere, we consider the marginally bound orbits, which are defined by\n\\begin{eqnarray}\n V_\\text{eff}=1,\\quad \\partial_{r}V_\\text{eff}=0.\n\\end{eqnarray}\nPlugging (\\ref{effective}) into the above equations, we get\n\\begin{eqnarray}\n r^{4}+l^{2}r^{2}+2l^{2}\\omega^{2}-(l^{2}+r^{2})\\sqrt{r^{4}+8Mr\\omega^{2}}=0, \\label{MarginallyBoundOrbits1}\\\\\n r^{6}+2Mr^{3}\\omega^{2}-6l^{2}Mr\\omega^{2}-(r^{4}-2l^{2}\\omega^{2})\\sqrt{r^{4}+8Mr\\omega^{2}}=0.\n \\label{MarginallyBoundOrbits2}\n\\end{eqnarray}\nFor a fixed $\\omega$, we can obtain the radius $r_{\\text{mb}}$ and angular momentum $l_{\\text{mb}}$ by solving Eqs.~\\eqref{MarginallyBoundOrbits1} and \\eqref{MarginallyBoundOrbits2}, and the numerical result is given in Fig. \\ref{Rlw}.
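\n\nOne possible numerical implementation of this computation (our sketch, not the authors' code; it assumes SciPy and uses the Schwarzschild values as the initial guess) solves the two conditions with a root finder:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.optimize import fsolve\n\nM, w = 1.0, 1.0   # extremal KS case\n\ndef f(r):\n    return 1.0 + (r**2 - np.sqrt(r**4 + 8.0*M*r*w**2)) \/ (2.0*w**2)\n\ndef veff(r, l):\n    return f(r) * (1.0 + l**2 \/ r**2)\n\ndef conditions(x):\n    r, l = x\n    h = 1.0e-6\n    dv = (veff(r + h, l) - veff(r - h, l)) \/ (2.0*h)  # dV_eff\/dr\n    return [veff(r, l) - 1.0, dv]\n\n# Schwarzschild values (r, l) = (4M, 4M) as the starting point\nr_mb, l_mb = fsolve(conditions, x0=[4.0, 4.0])\nprint(r_mb, l_mb)   # about 3.34 and 3.84 for w = M\n\\end{verbatim}\n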
\nFrom Fig.~\\ref{Rlw}, we clearly see that both the radius and the angular momentum decrease with $\\omega$. For the extremal black hole ($\\omega=M$), we have $r_{\\text{mb}}=3.3422M$ and $l_{\\text{mb}}=3.8432M$.\n\n\\begin{figure}\n\\center{\n\\includegraphics[width=8cm]{Rlw_1.eps}}\n\\caption{The radius (red solid line) and angular momentum (blue dashed line) for the marginally bound orbits.}\\label{Rlw}\n\\end{figure}\n\n\\subsection{Innermost stable circular orbits}\n\nThe ISCOs satisfy the following conditions\n\\begin{eqnarray}\n V_\\text{eff}=E^{2},\\quad \\partial_{r}V_\\text{eff}=0,\\quad \\partial_{r,r}V_\\text{eff}=0,\n\\end{eqnarray}\nfrom which one can obtain the corresponding energy and angular momentum:\n\\begin{eqnarray}\n E_{\\text{isc}}&=&\\frac{f(r_{\\text{isc}})}{\\sqrt{f(r_{\\text{isc}})-r_{\\text{isc}}f'(r_{\\text{isc}})\/2}},\\\\\n l_{\\text{isc}}&=&r_{\\text{isc}}^{3\/2}\\sqrt{\\frac{f'(r_{\\text{isc}})}{2f(r_{\\text{isc}})-r_{\\text{isc}}f'(r_{\\text{isc}})}},\n\\end{eqnarray}\nwhere the radius of the ISCOs can be calculated from the following relation:\n\\begin{eqnarray}\n r_{\\text{isc}}=\\frac{3f(r_{\\text{isc}})f'(r_{\\text{isc}})}{2f'^{2}(r_{\\text{isc}})-f(r_{\\text{isc}})f''(r_{\\text{isc}})}.\n\\end{eqnarray}\nWe plot the result in Fig. \\ref{pEisco}. It is clear that both the energy and angular momentum for the ISCOs decrease with $\\omega$. For the extremal KS black hole, one has $E_{\\text{isc}}=0.9370$ and $l_{\\text{isc}}=3.3477M$, and the radius of the corresponding orbit is $r_{\\text{isc}}=5.2366M$.\n\n\\begin{figure}\n\\center{\\subfigure[]{\\label{Lisco}\n\\includegraphics[width=8cm]{Lisco_2a.eps}}\n\\subfigure[]{\\label{Eisco}\n\\includegraphics[width=8cm]{Eisco_2b.eps}}}\n\\caption{The energy $E_{\\text{isc}}$ and angular momentum $L_{\\text{isc}}$ for the ISCOs. (a) $E_{\\text{isc}}$ vs $\\omega$. (b) $L_{\\text{isc}}$ vs $\\omega$.}\\label{pEisco}\n\\end{figure}\n\n\\begin{figure}\n\\center{\\subfigure[]{\\label{Veff}\n\\includegraphics[width=8cm]{Veff_3a.eps}}\n\\subfigure[]{\\label{rr2}\n\\includegraphics[width=8cm]{rr2_3b.eps}}}\n\\caption{(a) The effective potential $V_\\text{eff}$ as a function of $r\/M$ with $\\omega$=0.5. The angular momentum $l\/M$ varies from $l_{\\text{isc}}$ to $l_{\\text{mb}}$ from bottom to top. The dashed line represents the extremal points of the effective potential. For this case, the outer black hole horizon is located at $r_{+}=1.87M$. (b) $\\dot{r}^{2}$ as a function of $r\/M$ with $\\omega$=0.5 and $l=\\frac{1}{2}(l_{\\text{isc}}+l_{\\text{mb}})$. The energy $E$ varies from 0.948 to 0.973 from bottom to top.}\\label{prr2}\n\\end{figure}\n\nThe behavior of the effective potential is given in Fig. \\ref{Veff} with $\\omega$=0.5. The angular momentum $l\/M$ varies from $l_{\\text{isc}}$ to $l_{\\text{mb}}$ from bottom to top. The top curve describes the case of the marginally bound orbit, which has two extremal points. Decreasing $l\/M$, the two points get closer to each other, and they finally merge at $l=l_{\\text{isc}}$ for the ISCO (bottom curve) with the radius $r_{\\text{isc}}\\approx5.8M$.\n\nMoreover, for a fixed angular momentum $l=\\frac{1}{2}(l_{\\text{isc}}+l_{\\text{mb}})$, we plot the behavior of $\\dot{r}^{2}$ as a function of $r\/M$ with $\\omega$=0.5 in Fig. \\ref{rr2}. From bottom to top, the energy $E$ varies from 0.948 to 0.973. From the figure, we can see that each curve admits two extremal points. If the values of $\\dot{r}^{2}$ at the two extremal points are negative, these trajectories are forbidden for a massive particle coming from infinity. On the other hand, if both extremal values are positive, the particle can start at a finite distance and then fall into the black hole. A detailed analysis shows that bound orbits are only allowed when one extremal value is positive while the other is negative. For example, the energy bound is (0.9537, 0.9678) for $l\/M=3.7032$. Further, for each fixed energy $E$, the turning points can be obtained by solving $\\dot{r}^{2}=0$; for example, the turning points are $r\/M$=6.0095, 15.8782 for $E=0.96$. In Fig. \\ref{pElmb}, we list the regions for the bound orbits in the $E$-$l\/M$ diagram for $\\omega$=0.3 and 0.5, respectively. When the parameters fall in the shaded regions, bound orbits exist; otherwise, the orbits are not bound. Moreover, it is clear that, for a fixed $l\/M$, the energy for the bound orbits has a width, and this width increases with $l\/M$.
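\n\nAs an illustration of this turning-point analysis (our sketch, assuming SciPy; the example numbers follow the text), the turning points for given $(E, l, \\omega)$ are the roots of $\\dot{r}^{2}=0$:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.optimize import brentq\n\nM, w = 1.0, 0.5\nl, E = 3.7032, 0.96   # example quoted in the text\n\ndef f(r):\n    return 1.0 + (r**2 - np.sqrt(r**4 + 8.0*M*r*w**2)) \/ (2.0*w**2)\n\ndef rdot2(r):\n    # radial equation: rdot^2 = E^2 - V_eff(r)\n    return E**2 - f(r) * (1.0 + l**2 \/ r**2)\n\nr1 = brentq(rdot2, 4.0, 10.0)    # inner turning point\nr2 = brentq(rdot2, 10.0, 40.0)   # outer turning point\nprint(r1, r2)   # about 6.01 and 15.88, as quoted above\n\\end{verbatim}\nScanning $E$ at fixed $l$ until the two roots appear or disappear reproduces the energy window of the bound orbits.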
\n\n\\begin{figure}\n\\center{\\subfigure[$\\omega$=0.3]{\\label{Elma}\n\\includegraphics[width=8cm]{Elma_4a.eps}}\n\\subfigure[$\\omega$=0.5]{\\label{Elmb}\n\\includegraphics[width=8cm]{Elmb_4b.eps}}}\n\\caption{Parameter regions for the bound orbits (shaded). (a) $\\omega$=0.3. (b) $\\omega$=0.5.}\\label{pElmb}\n\\end{figure}\n\n\n\n\\section{Periodic orbits}\n\nIn this section, we would like to examine the periodic orbits around the KS black hole in the HL gravity. According to the viewpoint of Ref. \\cite{Levin}, a generic orbit can be treated as a perturbation of periodic orbits. Thus the study of the periodic orbits is very helpful for understanding the nature of generic orbits and gravitational radiation.\n\nA periodic orbit is a special kind of bound orbit. In this paper, we mainly focus on a spherically symmetric black hole background, so a periodic orbit is completely characterized by the frequencies of oscillations in the $r$-motion and $\\phi$-motion. In particular, a periodic orbit requires that the ratio of these two frequencies must be a rational number such that the particle can return to its initial location after a finite time.\n\nFor a bound orbit with two turning points $r_{1}$ and $r_{2}$, the particle is reflected in the range of $(r_{1}, r_{2})$. For each period in $r$, the apsidal angle $\\Delta\\phi$ passed by the particle is\n\\begin{eqnarray}\n \\Delta\\phi=\\oint d\\phi.\n\\end{eqnarray}\nBy making use of the geodesics, the above quantity can be further expressed as\n\\begin{eqnarray}\n \\Delta\\phi&=&2{\\int_{\\phi_{1}}^{\\phi_{2}}d\\phi} \\nonumber\\\\\n &=&2\\int_{r_{1}}^{r_{2}}\\frac{\\dot{\\phi}}{\\dot{r}}dr\\nonumber\\\\\n &=&2\\int_{r_{1}}^{r_{2}}\n \\frac{l}{r^{2}\\sqrt{E^{2}-f(r)(1+\\frac{l^{2}}{r^{2}})}}dr.\\label{knk}\n\\end{eqnarray}\nThe factor `2' comes from the symmetry of the geodesics about the turning points. From this expression, we can clearly see that $\\Delta\\phi$ closely depends on the energy $E$ and angular momentum $l$, as well as the metric function $f(r)$. Thus, for black holes of different $\\omega$, the apsidal angle $\\Delta\\phi$ will be different. Here we would like to mention that, by the inversion of general hyperelliptic integrals of the first, second, and third kind developed in Ref. \\cite{Sirimachan}, many properties of the integral of the apsidal angle $\\Delta\\phi$ can be analytically obtained in some black hole backgrounds, such as a particular black hole in HL gravity and the Myers\u2013Perry black hole. However, it is difficult to apply this method here, so we will solve the integral numerically.
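\n\nA minimal quadrature of Eq. (\\ref{knk}) could look as follows (our sketch; the integrable inverse-square-root endpoint singularity is handled crudely by clipping, and a change of variables would improve the accuracy):\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.optimize import brentq\nfrom scipy.integrate import quad\n\nM, w, l, E = 1.0, 0.5, 3.7032, 0.96\n\ndef f(r):\n    return 1.0 + (r**2 - np.sqrt(r**4 + 8.0*M*r*w**2)) \/ (2.0*w**2)\n\ndef rdot2(r):\n    return E**2 - f(r) * (1.0 + l**2 \/ r**2)\n\nr1 = brentq(rdot2, 4.0, 10.0)\nr2 = brentq(rdot2, 10.0, 40.0)\n\n# clip tiny negative values near the turning points\nintegrand = lambda r: l \/ (r**2 * np.sqrt(max(rdot2(r), 1.0e-14)))\ndphi = 2.0 * quad(integrand, r1, r2, limit=200)[0]\nq = dphi \/ (2.0 * np.pi) - 1.0   # the parameter q defined below\nprint(dphi, q)\n\\end{verbatim}\n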
Following Ref. \\cite{Levin}, we introduce a new parameter $q$, which is defined by\n\\begin{eqnarray}\n q=\\frac{\\Delta\\phi}{2\\pi}-1.\n\\end{eqnarray}\nFor a bound orbit, one can calculate $q$ for given $\\omega$, $E$, and $l$. In order to show the behavior of $q$, we plot $q$ vs. $E$ and $q$ vs. $l$ in Figs. \\ref{pqe09} and \\ref{pql98}, respectively. For a bound orbit, the value of the angular momentum $l$ can vary from $l_{\\text{isc}}$ to $l_{\\text{mb}}$. In order to parameterize it, we express the angular momentum in the following form\n\\begin{eqnarray}\n l=l_{\\text{isc}}+\\epsilon(l_{\\text{mb}}-l_{\\text{isc}}).\n\\end{eqnarray}\nThen the parameter $\\epsilon$ is limited to the range of (0, 1) for a bound orbit. For example, $\\epsilon$=0 and 1 always, respectively, correspond to $l=l_{\\text{isc}}$ and $l=l_{\\text{mb}}$ for different values of $\\omega$. Thus, no bound orbit exists when the parameter $\\epsilon$ is above one. Conservation of angular momentum also implies that $\\epsilon$ remains constant. The parameter $q$ is displayed as a function of energy $E$ in Fig. \\ref{pqe09} with $\\epsilon$=0.3, 0.5, 0.7, and 0.9. We find from Fig. \\ref{pqe09} that the parameter $q$ slowly increases with the energy $E$ at first; then, when the maximal energy is approached, $q$ suddenly blows up. Note that the maximal energy decreases with $\\omega$. With a detailed comparison, one can also find that the maximal energy increases with $\\epsilon$.\n\nOn the other hand, we show the value of $q$ in Fig. \\ref{pql98} for fixed $E$=0.95, 0.96, 0.97, and 0.98. All the results reveal that $q$ decreases with the angular momentum $l$. Interestingly, $q$ goes to positive infinity at the minimum of $l$. Moreover, the minimum value increases with $E$, while it decreases with $\\omega$.\n\n\\begin{figure}\n\\center{\n\\subfigure[$\\epsilon=0.3$]{\\label{qe03}\n\\includegraphics[width=6cm]{qe03_5a.eps}}\n\\subfigure[$\\epsilon=0.5$]{\\label{qe05}\n\\includegraphics[width=6cm]{qe05_5b.eps}}}\\\\\n\\center{\\subfigure[$\\epsilon=0.7$]{\\label{qe07}\n\\includegraphics[width=6cm]{qe07_5c.eps}}\n\\subfigure[$\\epsilon=0.9$]{\\label{qe09}\n\\includegraphics[width=6cm]{qe09_5d}}}\n\\caption{$q$ vs $E$. The parameter $\\omega$=0, 0.4, 0.6, 0.8, 0.9, and 1 from right to left. (a) $\\epsilon=0.3$. (b) $\\epsilon=0.5$. (c) $\\epsilon=0.7$. (d) $\\epsilon=0.9$.}\\label{pqe09}\n\\end{figure}\n\n\n\n\\begin{figure}\n\\center{\\subfigure[$E=0.95$]{\\label{ql95}\n\\includegraphics[width=6cm]{ql95_6a.eps}}\n\\subfigure[$E=0.96$]{\\label{ql96}\n\\includegraphics[width=6cm]{ql96_6b.eps}}}\\\\\n\\center{\\subfigure[$E=0.97$]{\\label{ql97}\n\\includegraphics[width=6cm]{ql97_6c.eps}}\n\\subfigure[$E=0.98$]{\\label{ql98}\n\\includegraphics[width=6cm]{ql98_6d}}}\n\\caption{$q$ vs $l$. The parameter $\\omega$=0, 0.4, 0.6, 0.8, 0.9, and 1 from right to left. (a) $E=0.95$. (b) $E=0.96$. (c) $E=0.97$. (d) $E=0.98$.}\\label{pql98}\n\\end{figure}\n\nFor a periodic orbit, the value of $q$ is a rational number, and it can be decomposed in terms of three integers $(z, w, v)$:\n\\begin{eqnarray}\n q=w+\\frac{v}{z}.\n\\end{eqnarray}\nIn previous work, it was found that periodic orbits around the Schwarzschild black holes, Kerr black holes, and naked singularities are characterized by these three integers \\cite{Levin,Grossman,Levin2,Misra,Babar}. As pointed out in Ref. \\cite{Levin}, each integer has its own geometric interpretation. For example, the integers $z$, $w$, and $v$ are, respectively, the zoom number, the whirl number, and the number of vertices formed by joining the successive apastra of the periodic orbit.
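\n\nAs a toy illustration (ours; we assume the convention $0\\leq v<z$ with $v$ and $z$ coprime), a rational $q$ can be decomposed into $(z, w, v)$ with exact fractions:\n\\begin{verbatim}\nfrom fractions import Fraction\n\ndef zwv(q, max_den=50):\n    # decompose q = w + v\/z with 0 <= v < z and v, z coprime\n    fq = Fraction(q).limit_denominator(max_den)\n    w = fq.numerator \/\/ fq.denominator\n    rest = fq - w                 # equals v\/z in lowest terms\n    return rest.denominator, w, rest.numerator   # (z, w, v)\n\nprint(zwv(1.5))            # (2, 1, 1)\nprint(zwv(1.0 + 2.0\/3.0))  # (3, 1, 2)\n\\end{verbatim}\n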
\n\n\\begin{figure}\n\\center{\n\\subfigure[$E$=0.964346]{\\label{Orbitsa}\n\\includegraphics[width=4.5cm]{Orbitsa_7a.eps}}\n\\subfigure[$E$=0.967744]{\\label{Orbitsb}\n\\includegraphics[width=4.5cm]{Orbitsb_7b.eps}}\n\\subfigure[$E$=0.967822]{\\label{Orbitsc}\n\\includegraphics[width=4.5cm]{Orbitsc_7c.eps}}}\\\\\n\\center{\\subfigure[$E$=0.967314]{\\label{Orbitsd}\n\\includegraphics[width=4.5cm]{Orbitsd_7d.eps}}\n\\subfigure[$E$=0.967811]{\\label{Orbitse}\n\\includegraphics[width=4.5cm]{Orbitse_7e.eps}}\n\\subfigure[$E$=0.967823]{\\label{Orbitsf}\n\\includegraphics[width=4.5cm]{Orbitsf_7f.eps}}}\\\\\n\\center{\\subfigure[$E$=0.954275]{\\label{Orbitsg}\n\\includegraphics[width=4.5cm]{Orbitsg_7g.eps}}\n\\subfigure[$E$=0.967551]{\\label{Orbitsh}\n\\includegraphics[width=4.5cm]{Orbitsh_7h.eps}}\n\\subfigure[$E$=0.967817]{\\label{Orbitsi}\n\\includegraphics[width=4.5cm]{Orbitsi_7i.eps}}}\\\\\n\\center{\\subfigure[$E$=0.958277]{\\label{Orbitsj}\n\\includegraphics[width=4.5cm]{Orbitsj_7j.eps}}\n\\subfigure[$E$=0.967624]{\\label{Orbitsk}\n\\includegraphics[width=4.5cm]{Orbitsk_7k.eps}}\n\\subfigure[$E$=0.967819]{\\label{Orbitsl}\n\\includegraphics[width=4.5cm]{Orbitsl_7l.eps}}}\n\\caption{Periodic orbits of different $(z, w, v)$ around a KS black hole with $\\omega=0.5$ and $\\epsilon=0.5$. (a) $E$=0.964346. (b) $E$=0.967744. (c) $E$=0.967822. (d) $E$=0.967314. (e) $E$=0.967811. (f) $E$=0.967823. (g) $E$=0.954275. (h) $E$=0.967551. (i) $E$=0.967817. (j) $E$=0.958277. (k) $E$=0.967624. (l) $E$=0.967819.}\\label{pOrbitsa}\n\\end{figure}\n\nIn Fig. \\ref{pOrbitsa}, we visualize the periodic orbits with different $(z, w, v)$ for fixed $\\omega=0.5$ and $\\epsilon=0.5$. Obviously, $z$ gives the number of leaves in the orbital pattern. With the increase of $z$, the number of leaves grows, and the orbit becomes more complicated.\n\n\n\\begin{table}[h]\n\\begin{center}\n\\begin{tabular}{ccccccccc}\n \\hline\\hline\n $\\omega$ & $E_{(1,1,0)}$ & $E_{(1,2,0)}$ & $E_{(2,1,1)}$ & $E_{(2,2,1)}$\n &$E_{(3,1,2)}$ & $E_{(3,2,2)}$& $E_{(4,1,3)}$ & $E_{(4,2,3)}$\\\\\\hline\n 0 & 0.953628 & 0.957086 & 0.956607 & 0.957170\n & 0.956864 & 0.957178 & 0.956946 & 0.957181\\\\\\hline\n 0.2& 0.953430 & 0.956953 & 0.956462 & 0.957040\n & 0.956725 & 0.957048 & 0.956809 & 0.957051\\\\\\hline\n 0.4& 0.952802 & 0.956537 & 0.956007 & 0.956633\n & 0.956290 & 0.956643 & 0.956380 & 0.956646\\\\\\hline\n 0.6& 0.951635 & 0.955784 & 0.955173 & 0.955899\n & 0.955496 & 0.955910 & 0.955600 & 0.955914\\\\\\hline\n 0.8& 0.949666 & 0.954565 & 0.953800 & 0.954716\n & 0.954198 & 0.954736 & 0.954329 & 0.954741\\\\\\hline\n 1.0& 0.946222 & 0.952564 & 0.951467 & 0.952815\n & 0.952023 & 0.952836 & 0.952212 & 0.952854\\\\\\hline\\hline\n\\end{tabular}\n\\caption{The energy $E$ for the orbits with different $(z, w, v)$ and different black hole parameter $\\omega$. The angular momentum parameter $\\epsilon=0.3$. 
Note that $\\omega$=0 denotes the Schwarzschild black hole case.}\\label{tab1}\n\\end{center}\n\\end{table}\n\\begin{table}[h]\n\\begin{center}\n\\begin{tabular}{ccccccccc}\n \\hline\\hline\n $\\omega$ & $E_{(1,1,0)}$ & $E_{(1,2,0)}$ & $E_{(2,1,1)}$ & $E_{(2,2,1)}$\n &$E_{(3,1,2)}$ & $E_{(3,2,2)}$& $E_{(4,1,3)}$ & $E_{(4,2,3)}$\\\\\\hline\n 0 & 0.965425 & 0.968383 & 0.968026 & 0.968434\n & 0.968225 & 0.968438 & 0.968285 & 0.96844\\\\\\hline\n 0.2& 0.965265 & 0.968286 & 0.967919 & 0.96834\n & 0.968123 & 0.968344 & 0.968185 & 0.968345\\\\\\hline\n 0.4& 0.964757 & 0.967984 & 0.967583 & 0.968045\n & 0.967804 & 0.968050 & 0.967872 & 0.968051\\\\\\hline\n 0.6& 0.963802 & 0.967434 & 0.966963 & 0.967511\n & 0.967222 & 0.967518 & 0.967302 & 0.96752\\\\\\hline\n 0.8& 0.962156 & 0.966538 & 0.965930 & 0.966654\n & 0.966261 & 0.966664 & 0.966366 & 0.966667\\\\\\hline\n 1.0& 0.959159 & 0.965072 & 0.964126 & 0.965264\n & 0.964617 & 0.965284 & 0.964780 & 0.965291\\\\\\hline\\hline\n\\end{tabular}\n\\caption{The energy $E$ for the orbits with different $(z, w, v)$ and different black hole parameter $\\omega$. The angular momentum parameter $\\epsilon=0.5$.}\\label{tab2}\n\\end{center}\n\\end{table}\n\nTaking $\\epsilon=$0.3 and 0.5, we also list the corresponding energy for each periodic orbit in Tables \\ref{tab1} and \\ref{tab2}. Since the Schwarzschild black hole corresponds to $\\omega$=0, we can find that, for each periodic orbit with fixed $(z, w, v)$, the particle orbiting a Schwarzschild black hole always has the highest energy. The energy decreases with the parameter $\\omega$. When the extremal black hole $\\omega$=1 is approached, the energy reaches its minimum. Moreover, the energy of the particle grows with the parameter $\\epsilon$. For example, for a black hole with $\\omega$=0.6, $E_{(2,1,1)}$=0.955173 and 0.966963 for $\\epsilon$=0.3 and 0.5, respectively.\n\nIn general, accompanied by the emission of gravitational waves, the energy and angular momentum of the orbiting particle decrease. So there will be a transition for the orbit from one energy level diagram with given $l$ to another one with a lower $l$. As shown above, the parameter $q$ closely depends on both the energy $E$ and the angular momentum $l$, so the rate of change of q is\n\\begin{eqnarray}\n \\frac{dq}{dt}=\\frac{\\partial q}{\\partial E}\\frac{dE}{dt}\n +\\frac{\\partial q}{\\partial l}\\frac{dl}{dt}.\n\\end{eqnarray}\nDuring the successive orbit transition in the inspiral stage, there may exist the resonances satisfying $dq\/dt\\approx0$. From Figs. \\ref{pqe09} and \\ref{pql98}, one can find that $\\partial q\/\\partial E>$0 and $\\partial q\/\\partial l<$0. Thus, it is possible to satisfy condition $dq\/dt\\approx0$.\n\nOn the other hand, the energy of the orbiting particle gradually decreases with emitting gravitational waves. Supposing the angular momentum $l$ keeps constant, we plot the energy $E$ as a function of $z$ for the periodic orbits of $(z, 1, 1)$ in Fig. \\ref{Eezz}. As mentioned above, the value $z$ denotes the number of the leaf for the orbits. Interestingly, with fixed $l$, the energy decreases with $z$, which implies that the orbit will approach to an orbit with high $z$ by decreasing particle energy. As expected, the angular momentum $l$ will also be radiated away in the form of gravitational waves. From Fig. \\ref{Eezz}, we can also find that these particles have lower energy for small $l$ or $\\epsilon$ by comparing the cases of $\\epsilon$=0.5 and 0.3. 
Thus, the orbit of a particle emitting gravitational waves will finally tend to one with lower energy and angular momentum but higher $z$. It is also worthwhile noting that with $z\\rightarrow\\infty$, $q$ will approach an integer, and thus a circular orbit, specifically the ISCO, is formed.\n\n\n\\begin{figure}\n\\center{\n\\includegraphics[width=8cm]{Eezz_8.eps}}\n\\caption{The energy $E$ as a function of $z$ for the orbits $(z, 1, 1)$ with the black hole parameter $\\omega$=0.5. The angular momentum is fixed with $\\epsilon$=0.3 (bottom) and 0.5 (top).}\\label{Eezz}\n\\end{figure}\n\n\\section{Conclusions}\n\\label{Conclusion}\n\nIn the present work, we studied the properties of the periodic orbits around a KS black hole in the deformed HL gravity. First, we obtained the geodesics for a particle moving in the KS black hole background, which are quite different from those of the Schwarzschild black hole; the result for the Schwarzschild case is recovered with $\\omega$=0.\n\nThen we reexpressed the $r$-motion with an effective potential. Employing the effective potential, we numerically calculated the marginally bound orbits and ISCOs. The results show that for the marginally bound orbits, both the radius and angular momentum decrease with $\\omega$. The energy and angular momentum for the ISCOs also decrease with $\\omega$.\n\nBased on the properties of the marginally bound orbits and ISCOs, we considered the periodic orbits in the KS black holes. The quantity $q$ describing the apsidal angle increases with the particle energy, while it decreases with the angular momentum. In particular, for fixed $E$, $q$ increases with $\\omega$. However, for fixed $l$, $q$ decreases with $\\omega$.\n\nAccording to Ref. \\cite{Levin}, each periodic orbit is characterized by a set of parameters $(z, w, v)$. For the same orbit of constant $(z, w, v)$, the energy gets lower and lower with the increase of $\\omega$. Thus, these periodic orbits in a KS black hole always have a lower energy than those in a Schwarzschild black hole. Moreover, the minimum energy is approached for the extremal KS black hole. We also extended the study to orbits with the same $w$ and $v$. We found that the energy decreases with $z$, which means that, with fixed angular momentum, orbits with high $z$ generally have lower energy. When $z\\rightarrow\\infty$, the orbits tend to circular ones. So these circular orbits, especially the ISCOs, have the lowest energy. These results may provide a possible observational signature: by testing periodic orbits around a central source, one could distinguish a KS black hole from a Schwarzschild one. Furthermore, since an inspiraling orbit passes through a series of periodic orbits, which can be taken as transient states, these periodic orbits with non-vanishing $dq\/dt$ are very important for the detection of gravitational waves.\n\n\n\\section*{Acknowledgements}\nThis work was supported by the National Natural Science Foundation of China (Grants No. 11675064, No. 11875151, and No. 11522541) and the Fundamental Research Funds for the Central Universities (Grant No. lzujbky-2018-k11).\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n``Beables'' is a word coined by John Bell \\cite{Bell76b} for variables that have values in nature even when nobody looks. The ``ontic state'' of a quantum system means (e.g., \\cite{Lei14}) the actual, factual, physical state in reality. 
The ontic state could be described through the values of all beables. An ``ontological model'' of quantum mechanics is (e.g., \\cite{Lei14}) a proposal for what the ontic states might be and which laws might govern them. We denote an ontological model by $M$ (definition below), the ontic state by $\\lambda$, and the set of ontic states by $\\Lambda$.\n\nWe prove a theorem asserting that, given any ontological model $M$ that is empirically adequate (i.e., whose observable predictions agree with the rules of quantum mechanics), there is no experiment that would determine $\\lambda$. Here, we take the ontological model $M$ as known and only ask whether an experiment would reveal $\\lambda$ in a (hypothetical) world governed by $M$.\n\nOur result goes against some kinds of positivism, as it poses a \\emph{limitation to knowledge}: It shows that not everything that exists in reality is observable, and not every variable that has a value can be measured. \nVarious theories of quantum mechanics have been criticized for postulating the real existence of objects (or variables) that are not fully observable (respectively, measurable); in view of our result, this criticism appears inappropriate because every theory of quantum mechanics will have this property.\n\nOur proof is based on the \\emph{main theorem about POVMs} (see \\cite{DGZ04} or \\cite[Sec.\\ 5.1]{book}), which asserts that {\\it for every experiment $\\mathscr{E}$ on a quantum system $S$ whose possible outcomes lie in a space $\\Values$, there exists a POVM $E$ on $\\Values$ such that, whenever $S$ has wave function $\\psi$ at the beginning of $\\mathscr{E}$, the random outcome $Z$ has probability distribution given by}\n\\begin{equation}\\label{PPPPOVM}\n\\mathbb{P}(Z \\in B) = \\scp{\\psi}{E(B)|\\psi}~~~~\\forall B \\subseteq \\Values\\,.\n\\end{equation}\n(We simply write $\\forall B \\subseteq \\Values$ when meaning for all $B$ in the $\\sigma$-algebra of the measurable space $\\Values.$)\n\n\n\n\\section{Result}\n\nLet $\\mathbb{S}(\\mathscr{H})=\\{\\psi\\in\\mathscr{H}: \\|\\psi\\|=1\\}$ denote the unit sphere of the Hilbert space $\\mathscr{H}$ and $\\mathcal{POVM}$ the set of all POVMs acting on $\\mathscr{H}$. 
An \\emph{ontological model} $M$ of a quantum system with Hilbert space $\\mathscr{H}$ consists of \n\\begin{itemize}\n\\item[(i)] a measurable space $(\\Lambda,\\mathscr{F})$ called the ontic space; \n\\item[(ii)] for every $\\psi\\in\\mathbb{S}(\\mathscr{H})$, a probability measure $\\varrho^\\psi$ over $\\Lambda$;\\footnote{One could imagine that different procedures for preparing $\\psi$ lead to different distributions over $\\Lambda$; for our purposes, we can simply choose one such distribution for each $\\psi$.}\n\\item[(iii)] an index set $\\mathcal{EXP}$ representing the set of possible experiments; \n\\item[(iv)] a surjective mapping $E:\\mathcal{EXP}\\to\\mathcal{POVM}$ associating with every experiment the POVM according to the main theorem about POVMs; \n\\item[(v)] for every ontic state $\\lambda\\in\\Lambda$ and every experiment $\\mathscr{E}\\in\\mathcal{EXP}$, a probability distribution $P_{\\lambda,\\mathscr{E}}$ over the value space $\\Values$ of $\\mathscr{E}$ (thought of as the distribution of the outcome when $\\mathscr{E}$ is applied to a system in state $\\lambda$).\\footnote{For mathematicians: $P_{\\lambda,\\mathscr{E}}(B)$ is required to be a measurable function of $\\lambda$ for every $\\mathscr{E}$ and $B$.}\n\\end{itemize}\n\nAn ontological model $M$ is said to be \\emph{empirically adequate} if and only if for every $\\psi\\in\\mathbb{S}(\\mathscr{H})$ there is a probability measure $\\varrho^\\psi$ on $(\\Lambda,\\mathscr{F})$ such that for every $\\mathscr{E}\\in\\mathcal{EXP}$\n\\begin{equation}\\label{adequate}\n\\int_\\Lambda \\varrho^\\psi(d\\lambda) \\, P_{\\lambda,\\mathscr{E}}(B) = \\scp{\\psi}{E_{\\mathscr{E}}(B)|\\psi}\n\\end{equation}\nfor all $B\\subseteq\\Values$.\n\nOur claim is that it is impossible to measure $\\lambda$. If there were an experiment $\\mathscr{G}\\in\\mathcal{POVM}$ that, when applied to a system in state $\\lambda$, yields $\\lambda$ as the outcome, it would be associated with a POVM $G=E_\\mathscr{G}$ on $\\Lambda$ acting on $\\mathscr{H}$. In particular, the distribution of the outcome would be, by \\eqref{PPPPOVM}, $\\scp{\\psi}{G(d\\lambda)|\\psi}$, and that would have to agree with the distribution $\\varrho^\\psi(d\\lambda)$ of $\\lambda$ before the experiment. But even that is impossible:\n\n\\begin{thm}\nGiven a Hilbert space $\\mathscr{H}$ with $\\dim\\mathscr{H}\\geq 2$ and an empirically adequate ontological model $M$,\nthere is no POVM $G$ on $(\\Lambda,\\mathscr{F})$ acting on $\\mathscr{H}$ such that\n\\begin{equation}\\label{psiG}\n\\varrho^\\psi(A) = \\scp{\\psi}{G(A)|\\psi}~~~~\\forall A\\in\\mathscr{F}\\,.\n\\end{equation}\n\\end{thm}\n\n\n\\section{Proof}\n\nWe assume that such a $G$ exists and will derive a contradiction. Putting \\eqref{adequate} and \\eqref{psiG} together, we obtain that\n\\begin{equation}\n\\int_\\Lambda \\scp{\\psi}{G(d\\lambda)|\\psi} \\, P_{\\lambda,\\mathscr{E}}(B) = \\scp{\\psi}{E_{\\mathscr{E}}(B)|\\psi}\n\\end{equation}\nfor all $B\\subseteq\\Values$. The left-hand side can be re-written as\n\\begin{equation}\n\\Bigl\\langle \\psi \\Big| \\int_\\Lambda G(d\\lambda) \\, P_{\\lambda,\\mathscr{E}}(B) \\Big| \\psi \\Bigr\\rangle\\,.\n\\end{equation}\nSince $\\scp{\\psi}{R|\\psi}=\\scp{\\psi}{S|\\psi}$ for all $\\psi\\in\\mathbb{S}(\\mathscr{H})$ only if $R=S$ (by the polarization identity), we have that\n\\begin{equation}\n\\int_\\Lambda G(d\\lambda) \\, P_{\\lambda,\\mathscr{E}}(B) = E_{\\mathscr{E}}(B)\n\\end{equation}\nfor all $B\\subseteq\\Values$. 
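\n\nAs a concrete two-dimensional illustration of the contradiction that this identity leads to (a numerical toy check of the argument derived below, not part of the proof; it assumes NumPy), one can verify that the three projectors used there sum to an operator with eigenvalue 2:\n\\begin{verbatim}\nimport numpy as np\n\npsi1 = np.array([1.0, 0.0])\npsi2 = np.array([0.0, 1.0])\npsi3 = (psi1 + psi2) \/ np.sqrt(2.0)\n\nproj = lambda v: np.outer(v, v)   # rank-1 projector for a real unit vector\n\nS = proj(psi1) + proj(psi2) + proj(psi3)\nprint(np.linalg.eigvalsh(S))   # eigenvalues 1 and 2, so S is not <= identity\nprint(S @ psi3)                # equals 2*psi3\n\\end{verbatim}\n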
Let $g$ be a 1d subspace of $\\mathscr{H}$ and consider the POVM on $\\{0,1\\}$ consisting of $I-P_g$ (where $I$ is the identity operator and $P_g$ the projection to $g$) and $P_g$. Since $E$ is surjective, it is possible to choose, for every $g$, an experiment $\\mathscr{E}(g)\\in \\mathcal{EXP}$ so that $E_{\\mathscr{E}(g)}$ is this POVM. For $B=\\{1\\}$, we obtain that\n\\begin{equation}\\label{intg}\n\\int_\\Lambda G(d\\lambda) \\, P_{\\lambda,\\mathscr{E}(g)}(\\{1\\}) = P_g \\,.\n\\end{equation}\nNote that for every $g$, $P_{\\lambda,\\mathscr{E}(g)}(\\{1\\})$ is a function of $\\lambda$ with values in $[0,1]$, and define\n\\begin{equation}\n\\Lambda_g:=\\{\\lambda\\in\\Lambda:P_{\\lambda,\\mathscr{E}(g)}(\\{1\\}) >0\\}\\,.\n\\end{equation}\nIt follows that $P_{\\lambda,\\mathscr{E}(g)}(\\{1\\})\\leq 1_{\\Lambda_g}(\\lambda)$ and thus\n\\begin{equation}\\label{GPg}\nG(\\Lambda_g) = \\int_\\Lambda G(d\\lambda) \\, 1_{\\Lambda_g}(\\lambda) \\geq\n\\int_\\Lambda G(d\\lambda) \\, P_{\\lambda,\\mathscr{E}(g)}(\\{1\\}) = P_g \\,.\n\\end{equation}\n\nOn the other hand, for any $A\\subseteq \\Lambda_g$, $G(A)$ must be a non-negative multiple of $P_g$: indeed, $G(A)$ is a positive operator, and if $G(A)$ had any eigenvector $\\notin g$ with nonzero eigenvalue, then there would exist $0\\neq \\chi\\in g^\\perp$ with $\\scp{\\chi}{G(A)|\\chi}>0$ and so \n\\begin{equation}\n\\Bigl\\langle \\chi \\Big| \\int_\\Lambda G(d\\lambda) \\, P_{\\lambda,\\mathscr{E}(g)}(\\{1\\}) \\Big| \\chi \\Bigr\\rangle >0,\n\\end{equation}\nin contradiction to \\eqref{intg} together with $\\scp{\\chi}{P_g|\\chi}=0$.\n\nSince $G(A)$ is a multiple of $P_g$,\n\\begin{equation}\n0\\leq G(A)\\leq P_g \\,.\n\\end{equation}\nTwo consequences: First, by \\eqref{GPg},\n\\begin{equation}\nG(\\Lambda_g)=P_g\\,.\n\\end{equation}\nSecond, for 1d subspaces $g\\neq h$, $\\Lambda_g$ and $\\Lambda_h$ must be disjoint up to $G$-null sets,\n\\begin{equation}\nG(\\Lambda_g \\cap \\Lambda_h)=0\n\\end{equation}\n(because this operator must be $\\leq P_g$ and $\\leq P_h$, and the only positive operator achieving that is 0).\n\nNow let $\\psi_1$ and $\\psi_2$ be mutually orthogonal unit vectors, set $\\psi_3=\\frac{1}{\\sqrt{2}}(\\psi_1+\\psi_2)$ and $g_i=\\mathbb{C}\\psi_i$ for $i=1,2,3$. Then\n\\begin{equation}\nG\\bigl(\\Lambda_{g_1}\\cup \\Lambda_{g_2} \\cup \\Lambda_{g_3}\\bigr)= \nG(\\Lambda_{g_1})+ G(\\Lambda_{g_2}) + G(\\Lambda_{g_3})= \nP_{g_1}+P_{g_2}+P_{g_3}\\,,\n\\end{equation}\nwhich has eigenvalue 2 in the direction of $\\psi_3$, in contradiction to $G(S)\\leq G(\\Lambda)=I$ for every $S\\in\\mathscr{F}$.\n\n\n\\section{Concluding Remarks}\n\nThe result is in line with a sentiment expressed by\nBell \\cite[Sec.~1]{Bell87a}:\n\\begin{quote}\nTo admit things not visible to the gross creatures that we are is, in my opinion, to show a decent humility, and not just a lamentable addiction to metaphysics.\n\\end{quote}\nThe question of limitations to knowledge has been addressed for various specific interpretations (e.g., \\cite{Bohm52a,CT16,book}). It is also known that \\emph{every} interpretation of quantum mechanics must entail limitations to knowledge \\cite{Tum98,EE,book}. 
One way of arriving at this conclusion is to note that wave functions cannot be measured \\cite{DGZ04} and that the Pusey--Barrett--Rudolph theorem \\cite{PBR,Lei14} shows that, under reasonable assumptions on the ontological model, the wave function is a beable.\n\nCompared to these previous results, however, our present result addresses the question in a particularly direct way.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \\label{sec:intro}\nGalaxy formation in the Universe is closely related to the neutral hydrogen ({\\sc Hi}) gas in the intergalactic medium (IGM).\nWithin the modern paradigm of galaxy formation, galaxies form and evolve in the filamentary structure of {\\sc Hi} gas (e.g., \\citealt{meiksin+09}; \\citealt{mo+10}).\nCosmological hydrodynamic simulations suggest that galaxy formation and evolution are associated with large-scale baryonic gas exchange between galaxies and the IGM (\\citealt{fox+17}; \\citealt{van+17}).\nEnormous rivers of cold gas ($\\sim 10^4$ K) flow into the galaxy and trigger star formation \\citep[e.g.,][]{Dekel+09,Kere+05}.\nThe cold gas is heated by star formation and then ejected by powerful galactic-scale outflows due to feedback caused by stellar winds and supernovae.\n\nThe circulation of gas is one of the keys to understanding galaxy formation and evolution.\nThe interplay of gravitational and feedback-driven processes can have surprisingly large effects on the large scale behavior of the IGM.\nSome of the radiation produced by massive stars and black hole accretion disks can escape from the dense gaseous environments, propagate out of galaxies, and photoionize the {\\sc Hi} gas in the circumgalactic medium (CGM) and even in the IGM \\citep{astro2021,mukae+20}.\n\nGreat progress has been achieved in exploring the {\\sc Hi} distribution around galaxies and active galactic nuclei (AGN).\nThe cross-correlation of the {\\sc Hi} in the IGM and galaxies has been detected by Ly$\\alpha$ absorption features in the spectra of background quasars (e.g., \\citealt{rauch+98}; \\citealt{Faucher-Giguere+08}; \\citealt{Prochaska+13}) and bright star-forming galaxies\n\\citep{Steidel+10,Mawatari+16,Thomas+17}.\nThe Keck Baryon Structure Survey \\citep[KBSS:][]{Rudie+12,Rakic+12,Turner+14}, the Very Large Telescope LBG Redshift Survey \\citep[VLRS:][]{Crighton+11,Tummuangpak+14}, and other spectroscopic programs \\citep[e.g.,][]{Adelberger+03,Adelberger+05} have investigated the detailed properties of the {\\sc Hi} distribution around galaxies.\nThese observations target {\\sc Hi} gas around galaxies on the scale of the CGM.\nRecently, 3-dimensional (3D) {\\sc Hi} tomography mapping, a powerful technique to reconstruct the large scale structure of {\\sc Hi} gas, has been developed by \\cite{Lee+14a,Lee+16,Lee+18}.\n{\\sc Hi} tomography mapping was originally proposed by \\cite{Pichon+01} and \\cite{Caucci+08} with the aim of reconstructing the 3D matter distribution from the {\\sc Hi} absorption of multiple sightlines. By this technique, the COSMOS Ly$\\alpha$ Mapping and Tomography Observations (CLAMATO) survey \\citep{Lee+14a,Lee+18} has revealed {\\sc Hi} large scale structures with spatial resolutions of 2.5 $h^{-1}$ comoving Megaparsec (cMpc). 
This survey demonstrates the power of 3D {\\sc Hi} tomography mapping in a number of applications, including the study of a protocluster at $z = 2.44$ \\citep{Lee+16} and the identification of cosmic voids \\citep{Krolewski+18}.\nDue to an interpolation algorithm (Section \\ref{subsec:reconstruction}) used in the reconstruction of the 3D {\\sc Hi} tomography map, we are able to estimate the {\\sc Hi} distribution along lines-of-sight where no background sources are available.\nBased on the 3D {\\sc Hi} tomography map of the CLAMATO survey, \\cite{momose+21} have reported measurements of the IGM {\\sc Hi}\u2013galaxy cross-correlation function (CCF) for several galaxy populations.\nDue to the limited volume of the CLAMATO 3D IGM tomography data, \\cite{momose+21} cannot construct the CCFs at scales over 24 $h^{-1}$cMpc in the direction transverse to the line-of-sight.\n\\cite{mukae+20} have investigated a larger field than that of \\cite{momose+21} using 3D {\\sc Hi} tomography mapping and report a huge ionized structure of {\\sc Hi} gas associated with an extreme QSO overdensity region in the EGS field.\n\\cite{mukae+20} interpret the large ionized structure as the overlap of multiple proximity zones, which are photoionized regions created by the enhanced ultraviolet background (UVB) of quasars.\nHowever, \\cite{mukae+20} found only one example of a huge ionized bubble, and no others have been reported in the literature.\n\nDespite the great effort made by previous studies, the limited volume of previous work prevents us from understanding how ubiquitous or rare these large ionized structures are. In order to answer this question, we must investigate the statistical {\\sc Hi} distributions around galaxies and AGN at much larger spatial scales ($\\gtrsim10~h^{-1}$cMpc). 
Although \\cite{momose+21} derived CCFs for different populations: Ly$\\alpha$ emitters (LAEs), H$\\alpha$ emitters (HAEs), [{\\sc Oiii}] emitters (O3Es), active galactic nuclei (AGN), and submillimeter galaxies (SMGs), on a scale of more than $20$ $h^{-1}$cMpc, the limited sample size results in large uncertainties in the CCF at large scales and prevents definitive conclusions from being made regarding the statistical {\\sc Hi} distributions around galaxies and AGN.\n\nAnother open question is the luminosity and AGN type dependence of the large scale {\\sc Hi} distribution around AGN.\n\\cite{FR+13} have estimated the {\\sc Hi} distribution around AGN using the Sloan Digital Sky Survey (SDSS; \\citealt{York+00}) data release 9 quasar catalog (DR9Q; \\citealt{paris+11}) and found no dependence of the {\\sc Hi} distribution on AGN luminosity.\nIn this study, we investigate the luminosity dependence using the SDSS data release 14 quasar (DR14Q; \\citealt{Paris+18}) catalog, which includes sources $\\sim 2$ magnitudes fainter than those used by \\cite{FR+13}.\nIn the AGN unification model (\\citealt{Antonucci+85}; see also \\citealt{Spinoglio+21}), which provides the physical picture that the hot accretion disk of a supermassive black hole is obscured by a dusty torus, the type-1 and type-2 classes are produced by different accretion disk viewing angles.\nIn this picture, type-1 (type-2) AGN are biased toward AGN with wide (narrow) opening angles.\nIn the case of type-1 AGN, one can directly observe the accretion disk and the broad line region, while for type-2 AGN, only the narrow line region is observable.\nPrevious studies have identified the proximity effect, i.e., that the IGM of type-1 AGN is statistically more ionized due to the local enhancement of the UV background along lines-of-sight passing near the AGN \\citep{FaucherGigu+08}.\nBased on the unification model, a type-2 AGN that is obscured along the line of sight statistically radiates in the transverse direction.\nInvestigating the AGN type dependence of the surrounding {\\sc Hi} can therefore reveal how the large scale {\\sc Hi} distribution is influenced by the direction of radiation from the AGN. \n\n\n\nTo investigate the {\\sc Hi} distributions around galaxies and AGN on large scales, over tens of $h^{-1}$cMpc, we need to conduct a new study in a field whose sides are each longer than 100 $h^{-1}$cMpc.\nWe reconstruct 3D {\\sc Hi} tomography maps of the {\\sc Hi} distribution at $z \\sim$ $2-3$ in a total area of $837$ deg$^2$.\nWe use $\\gtrsim 15,000$ background sightlines from SDSS quasars \\citep{Paris+18,Lyke+20} for the {\\sc Hi} tomography map reconstruction, and have a large number of unbiased galaxies and AGN from the Hobby Eberly Telescope Dark Energy eXperiment (HETDEX; \\citealt{Gebhardt+21}) and SDSS surveys for the investigation of the large scale {\\sc Hi} distributions around galaxies and AGN.\n\nThis paper is organized as follows. 
\nSection \\ref{sec:data} describes the details of the HETDEX survey and our spectroscopic data.\nOur foreground and background samples of galaxies and AGN are presented in Section \\ref{sec:sample}.\nThe technique for creating the {\\sc Hi} tomography map and the reconstructed map are described in Section \\ref{sec:hi_tomography}, and the observational results for the {\\sc Hi} distributions around galaxies and AGN are given in Section \\ref{sec:result}. There, we also interpret our results in the context of previous studies, and investigate the dependence of our tomography maps on AGN type and luminosity.\nWe adopt a cosmological parameter set of ($\\Omega_m$, $\\Omega_{\\rm \\Lambda}$, $h$) = (0.29, 0.71, 0.7) in this study.\n\n\n\\section{Data} \\label{sec:data}\n\\subsection{HETDEX Spectra} \\label{subsec:hetdex}\nHETDEX provides an untargeted, wide-area, integral field spectroscopic survey, and aims to determine the evolution of dark energy in the redshift range $1.88-3.52$ using $\\sim1$ million Lyman-$\\alpha$ emitters (LAEs) over 540 deg$^2$ in the northern and equatorial fields that are referred to as ``Spring'' and ``Fall'' fields, respectively. The total survey volume is $\\sim 10.9$ comoving Gpc$^3$.\n\n\nThe HETDEX spectroscopic data are gathered using the 10 m Hobby-Eberly Telescope \\citep[HET;][]{lwr94,Hill+21} to collect light for the Visible Integral-field Replicable Unit Spectrograph \\citep[VIRUS;][]{hil18a,Hill+21} with 78 integral field unit \\citep[IFUs;][]{kelz14} fiber arrays. VIRUS covers a wavelength range of $3500-5500$ \\AA, with resolving power ranging from $750-950$. \nEach IFU has 448 fibers with a $1''.5$ diameter. The $78$ IFUs are spread over the $22$ arcmin field of view, with a $1\/4.6$ fill factor.\nHere we make use of data release 2 of HETDEX \\citep[HDR2;][]{Cooper+23} over the Fall and Spring fields. In this study, we investigate the fields where HETDEX survey data were taken between 2017 January and 2020 June.\nThe effective area is 11542 arcmin$^2$.\nThe estimated depth of an emission line at S\/N$=5$ reaches $3-4 \\times 10^{-17}$ erg cm$^{-2}$ s$^{-1}$.\n\n\n\n\\subsection{Subaru HSC Imaging} \\label{subsec:hsc}\nThe HETDEX-HSC imaging survey was carried out in a total time allocation of 3 nights in $2015-2018$ (semesters S15A, S17A, and S18A; PI: A. Schulze) and $2019-2020$ (semester S19B; PI: S. Mukae) over a $\\sim$250 deg$^2$ area in the Spring field, accomplishing a 5$\\sigma$ limiting magnitude of $r = 25.1$ mag.\nThe SSP-HSC program has obtained deep multi-color imaging data on the 300 deg$^2$ sky, half of which overlaps with the HETDEX footprints. In this study, we use the $r$-band imaging data from the public data release 2 (PDR2) of SSP-HSC. The 5$\\sigma$ depth of the SSP-HSC PDR2 $r$-band imaging data is typically $27.7$ mag for the $3''.0$ diameter aperture. The data of the HETDEX-HSC survey and the SSP-HSC program are processed with the HSC pipeline software, \\texttt{hscPipe} \\citep{bosch18} version $6.7$.\n\nBecause the spectral coverage width of the HETDEX survey is narrow, only 2000 \\AA, most sources appear as single-line emitters. Furthermore, since the {\\sc [Oii]} doublet is not resolved, we rely on the equivalent width (EW) to distinguish Ly$\\alpha$ from {\\sc [Oii]}. The high-$z$ Ly$\\alpha$ emission is typically stronger than low-$z$ {\\sc [Oii]} lines, due to the intrinsic line strengths and the cosmological effects. The continuum estimates from the HETDEX spectra reach about $g=25.5$ \\citep{Davis+21,Cooper+23}, and we improve on this using the deep HSC imaging. We estimate EWs using continua measured from two sets of images taken by the HSC $r$-band imaging survey for HETDEX (HETDEX-HSC survey) and the Subaru Strategic Program \\citep[SSP-HSC;][]{aihara18b}.\n\\citeauthor{Davis+21} and \\citeauthor{Cooper+23} find the contamination of {\\sc [Oii]} emitters in our LAE sample to be below 2\\%.
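\n\nThe following minimal sketch (ours, not the survey pipeline; the fluxes are toy numbers, and the thresholds follow the text) illustrates the rest-frame EW test:\n\\begin{verbatim}\nLYA, OII = 1215.67, 3727.0   # rest wavelengths in Angstrom\n\ndef rest_frame_ew(line_flux, cont_flam, lam_obs, lam_rest):\n    # EW_obs = line flux \/ continuum flux density; EW_0 = EW_obs \/ (1+z)\n    z = lam_obs \/ lam_rest - 1.0\n    return line_flux \/ cont_flam \/ (1.0 + z)\n\n# toy example: a line at 4500 A with an HSC r-band continuum estimate\nline_flux = 6.0e-17    # erg\/s\/cm^2 (assumed)\ncont_flam = 1.0e-19    # erg\/s\/cm^2\/A (assumed)\new0 = rest_frame_ew(line_flux, cont_flam, 4500.0, LYA)\nprint(ew0, ew0 > 20.0)   # classified as an LAE if EW_0 > 20 A\n\\end{verbatim}\n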
\n\n\\subsection{SDSS-IV eBOSS Spectra} \\label{subsec:sdss}\nWe use quasar data from eBOSS \\citep{Dawson+16}, which are publicly available in the SDSS Data Release 14 and 16 quasar catalogs \\citep[DR14Q, DR16Q;][]{Paris+18,Lyke+20}.\nThe cosmology survey, eBOSS, is part of SDSS-IV. The eBOSS quasar targets are selected by the XDQSOz method \\citep{Bovy+12} and the color cut\n\\begin{equation} \\label{eq:quasar_select}\n m_{opt}-m_{WISE}\\geq(g-i)+3,\n\\end{equation}\nwhere $m_{opt}$ is a weighted stacked magnitude in the $g, r$ and $i$ bands and $m_{WISE}$ is a weighted stacked magnitude in the W1 and W2 bands of the Wide-field Infrared Survey Explorer (WISE; \\citealt{Wright+10}). \nThe aim of eBOSS is to accomplish precision angular-diameter distance measurements and the Hubble parameter determination at $z \\sim 0.6-3.5$ using different tracers of the underlying density fields over 7500 deg$^2$. Its final goal is to obtain spectra of $\\sim 2.5$ million luminous red galaxies, $\\sim 1.95$ million emission line galaxies, $\\sim$ 450,000 QSOs at $0.9 \\leq z \\leq 2.2$, and the Lyman-$\\alpha$ forest of 60,000 QSOs at $z>2$ over four years of operation.\n\nThe eBOSS program is conducted with the twin SDSS spectrographs \\citep{Smee13}, which are fed by 1,000 fibers connected from the focal plane of the 2.5m Sloan telescope \\citep{Gunn06} at Apache Point Observatory. The SDSS spectrographs have a fixed spectral bandpass of $3600-10000$ \\AA \\ over the 7 deg$^2$ field of view. The spectral resolution varies from 1300 at the blue end to 2600 at the red end, where one pixel corresponds to $1.8-5.2$ \\AA.\n\n\n\n\\section{Samples} \\label{sec:sample}\n\n\nOur study aims to map the statistical distribution of {\\sc Hi} gas on a cosmological scale around foreground galaxies and AGN by the 3D {\\sc Hi} tomography mapping technique with background sources at $z=2-3$. \nWe use the foreground galaxies, foreground AGN, and background sources presented in Sections \\ref{subsec:fg_galaxy}, \\ref{subsec:fg_agn}, and \\ref{subsec:bkagn}, respectively.\n\nTwo of the goals of this study are to explore the dependence of the {\\sc Hi} distribution on luminosity and AGN type. To obtain statistical results, we need large numbers of bright AGN and type-2 AGN. 
Compared to moderately bright AGN and type-1 AGN, bright AGN and type-2 AGN are relatively rare.\nTo obtain sufficiently large samples of bright AGN and type-2 AGN, we expand the Spring and Fall fields of the HETDEX survey, from which we are able to investigate the statistical luminosity and AGN type dependence of the {\\sc Hi} distribution around AGN (Section \\ref{subsec:fg_agn}).\nThe northern extended Spring field flanking the HETDEX survey fields, referred to as the ``ExSpring field\", covers over 738 deg$^2$, while the equatorial extended Fall field flanking the HETDEX survey fields, hereafter the ``ExFall field\", covers 99 deg$^2$.\nThe total area of our 3D {\\sc Hi} tomography mapping fields, the ExSpring and ExFall fields, is 837 deg$^2$; together they are referred to as ``our study field\".\nOur analysis is conducted in our study field, where the foreground galaxies+AGN and the background sources overlap on the sky. As an example, we present the foreground galaxies+AGN in the ExFall field at ${\\rm z}=2.0-2.2$ in Figure \\ref{fig:our_study_field}. We also present the sky distribution of the background sources within the ExFall field in Figure \\ref{fig:exfall_bk}. The rest of the foreground and background sources are shown in the Appendix.\n\n\\begin{figure*}[ht!]\n\\begin{center}\n\\includegraphics[scale=0.55]{exfall_1_rm.png}\n\\end{center}\n\\caption{\nSky distribution of the foreground AGN and galaxies at ${\\rm z}=2.0-2.2$ in the ExFall field. The squares present the positions of All-AGN sample sources. Pink (magenta) squares represent the sources of the T1-AGN (T2-AGN) sample. The cyan and blue dots show the positions of the Galaxy and T1-AGN(H) sample sources, respectively. The black dashed line indicates the border of the {\\sc Hi} tomography map in the ExFall field.}\n\\label{fig:our_study_field}\n\\end{figure*}\n\n\\begin{figure*}[ht!]\n\\begin{center}\n\\includegraphics[scale=0.54]{exfall_bk.png}\n\\end{center}\n\\caption{Sky distribution of background AGN in the ExFall field. The gray crosses indicate background AGN that are used to reconstruct our {\\sc Hi} tomography map. 
The black dashed line has the same meaning as that in Figure \\ref{fig:our_study_field}.}\n\\label{fig:exfall_bk}\n\\end{figure*}\n\n\n\n\\begin{deluxetable*}{cccccccc}[ht!]\n\\tablecaption{Sample size of foreground samples at $z = 2-3$ \\label{tab:fg}} \n\\tablecolumns{6}\n\\tablewidth{0pt}\n\\tablehead{\n\\colhead{Name of sample} &\n\\colhead{ExFall} &\n\\colhead{ExSpring} &\n\\colhead{Total} &\n\\colhead{Survey} &\n\\colhead{Criteria} \n}\n\\startdata\nGalaxy & 3431 & 11436\n & 14867 & HETDEX & EW$_0>20$ \\AA, FWHM$_{\\rm Ly\\alpha}$ $<1000$ km\/s, $M_{\\rm UV}>-22$ mag \\\\ \nT1-AGN(H) & 438 & 1349 & 1787 & HETDEX & EW$_0>20$ \\AA, FWHM$_{\\rm Ly\\alpha}$ $>1000$ km\/s \\\\\nT1-AGN & 2393 & 12300\n & 14693 & SDSS & FWHM$_{\\rm Ly\\alpha}$ $>1000$ km\/s \\\\ \nT2-AGN & 436 & 1633\n & 2069 & SDSS & FWHM$_{\\rm Ly\\alpha}$ $<1000$ km\/s \\\\ \n\\enddata\n\\end{deluxetable*}\n\n\\begin{deluxetable*}{cccccccc}[ht!]\n\\tablecaption{Sample size of background sample at $z = 2.08-3.67$ \\label{tab:bg}} \n\\tablecolumns{6}\n\\tablewidth{0pt}\n\\tablehead{\n\\colhead{Name of sample} &\n\\colhead{ExFall} &\n\\colhead{ExSpring} &\n\\colhead{Total} &\n\\colhead{Survey} &\n\\colhead{Criteria} \n}\n\\startdata\nbackground AGN & 2181 & 12555\n & 14736 & SDSS & $\\langle S\/N \\rangle_{\\rm Ly\\alpha forest}>1.4$\n\\\\ \n\\enddata\n\\end{deluxetable*}\n\n\\subsection{Foreground Galaxy Sample} \\label{subsec:fg_galaxy}\nWe make a sample of foreground galaxies from the data of the HETDEX spectra (Section \\ref{subsec:hetdex}) and the Subaru HSC images (Section \\ref{subsec:hsc}). \nWith these data, \\cite{Zhang21} have built a catalog of LAEs that have rest-frame equivalent widths (${\\rm EW}_0$) of ${\\rm EW}_0>20$ \\AA \\ and HETDEX Emission Line eXplorer (ELiXer) probabilities \\citep{Davis+21,Davis+23} larger than 1. This ${\\rm EW}_0$ cut is similar to those of previous LAE studies (e.g., \\citealt{gronwall07,konno16}).\nThis catalog of LAEs is composed of 15959 objects.\nBecause the LAE catalog of \\cite{Zhang21} consists of galaxies, type-1 AGN, and type-2 AGN, we isolate galaxies from the sources of the LAE catalog using the limited observational quantities, Ly$\\alpha$ and UV magnitude ($M_{\\rm UV}$), that can be obtained from the HETDEX and Subaru\/HSC data.\nBecause type-1 AGN have broad-line Ly$\\alpha$ emission, we remove sources whose Ly$\\alpha$ emission lines have a full width at half maximum (FWHM) greater than 1000 km s$^{-1}$. To remove clear type-2 AGN from the LAE catalog, we apply a UV magnitude cut of $M_{\\rm UV}>-22$ mag, removing sources brighter than the bright end of the galaxy-dominated UV luminosity function \\citep{Zhang21}.\nWe then select sources in our study field, and apply the redshift cut of $z=2.0-3.0$ to match the redshift range over which we construct the {\\sc Hi} tomography map. These redshifts are measured with Ly$\\alpha$ emission \\citep{Zhang21}, because Ly$\\alpha$ is the only emission line available for all of the sources.\n\nBy these selections, we obtain 14130 galaxies from the LAE catalog. These 14130 galaxies are referred to as the ``Galaxy\" sample.\n\n\\subsection{Foreground AGN Samples} \\label{subsec:fg_agn}\nIn this subsection, we describe how we select foreground AGN from two sources: (a) the combination of the HETDEX spectra and the HSC imaging data, and (b) the SDSS DR14Q catalog. 
The type-$1$ AGN are identified in both sources (a) and (b), while the type-$2$ AGN are drawn from source (b) alone.\n\nFrom source (a), which is the same data set as that described in Section \\ref{subsec:fg_galaxy}, \\cite{Zhang21} have constructed the LAE catalog.\nWe use the catalog of \\cite{Zhang21} to select LAEs at $z\\sim2-3$ that fall in our study field.\nApplying a Ly$\\alpha$ line width criterion of FWHM $>1000$ km s$^{-1}$ with the HETDEX spectra, we identify broad-line AGN, i.e. type-1 AGN, from the LAEs. \nWe thus obtain 1829 type-1 AGN that are referred to as T1-AGN(H).\n\nWe use the width of the Ly$\\alpha$ emission line for the selection of type-1 AGN. This is because no other emission lines characterising AGN, e.g. {\\sc Civ}, are available for all of the LAEs due to the limited wavelength coverage and the sensitivity of HETDEX. Similarly, the redshifts of T1-AGN(H) objects are measured with Ly$\\alpha$ emission, whose redshifts may be shifted from the systemic redshifts by up to a few hundred km s$^{-1}$ (see Section \\ref{subsec:fg_galaxy}). We do not select type-2 AGN from source (a), because we cannot identify type-2 AGN easily with the given data set of source (a).\n\nFrom source (b), we obtain the other samples of foreground AGN. We first choose objects classified as QSOs in the SDSS DR14Q catalog, and remove objects outside the redshift range of $z=2.0-3.0$ in our study field. \nWe obtain 23721 AGN.\nFor 16762 out of 23721 AGN, Ly$\\alpha$ FWHM measurements are available from \\cite{Rakshit20}.\nThe other AGN without FWHM measurements are removed due to the poor quality of the Ly$\\alpha$ line.\nWe thus use these 16762 AGN with good quality of the Ly$\\alpha$ line to compose our AGN sample, referred to as the All-AGN sample. \n\nTo investigate the type dependence, we classify these 16762 AGN into type-$1$ and type-$2$ AGN. In the same manner as the T1-AGN(H) sample construction, we use the Ly$\\alpha$ line width measurements of \\cite{Rakshit20} for the type-1 and type-2 AGN classification.\nFor the 16762 AGN, we apply the criterion of Ly$\\alpha$ FWHM $> 1000$ km s$^{-1}$ \\citep{Villarroel+14,Panessa+02} to select type-1 AGN, and obtain 14693 type-1 AGN. Following \\cite{Villarroel+14,Panessa+02}, we classify type-2 AGN by the criterion of Ly$\\alpha$ FWHM $< 1000$ km s$^{-1}$ and obtain 2069 type-2 AGN \\citep[cf.][]{Alexandroff+13,Zakamska+03}.\nThese type-1 and type-2 AGN are referred to as T1-AGN and T2-AGN, respectively.\n\nTable \\ref{tab:fg} presents the summary of the foreground samples. We obtain 14693 and 1829 type-1 AGN, which are referred to as T1-AGN and T1-AGN(H), from the SDSS and HETDEX surveys, respectively. We select 2069 type-2 AGN that are referred to as T2-AGN from the SDSS survey. \n\n\\subsection{Background Source Sample} \\label{subsec:bkagn}\n\nIn this subsection, we describe how the background sources are selected. We select the background sources with the SDSS DR16Q catalog, following the three steps below.\n\nIn the first step, we extract QSOs in our study field from the SDSS DR16Q catalog.
We then select QSOs falling in the redshift range from 2.08 to 3.67.\nThe lower and upper limits of the redshift range are determined by the Ly$\\alpha$ forest.\nOur goal is to probe {\\sc Hi} absorbers at $z=2.0-3.0$ with the Ly$\\alpha$ forest.\nBecause the Ly$\\alpha$ forest is observed in the rest-frame $1040-1185$ \\AA\\ range of the background sources, we obtain the lower and upper limits of the redshifts, 2.08 and 3.67, by $1216\\times (1+2.0)\/1185-1=2.08$ and $1216\\times(1+3.0)\/1040-1=3.67$, respectively.\nBy this step, we have selected 26899 background source candidates.
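\nFor illustration, this redshift window can be written as a short Python sketch; it is ours, not part of any survey pipeline, and the function name is illustrative:\n\\begin{verbatim}\n# Redshift window for background QSOs whose Ly-alpha forest\n# (rest frame 1040-1185 A) overlaps absorbers at z = 2.0-3.0.\nLYA = 1216.0                              # rest-frame Ly-alpha [A]\nFOREST_BLUE, FOREST_RED = 1040.0, 1185.0  # rest-frame forest window [A]\nZ_ABS_MIN, Z_ABS_MAX = 2.0, 3.0           # target absorber redshifts\n\n# The red end of the forest must reach Ly-alpha at z = 2.0 ...\nz_q_min = LYA * (1.0 + Z_ABS_MIN) \/ FOREST_RED - 1.0   # = 2.08\n# ... and the blue end must still probe Ly-alpha at z = 3.0.\nz_q_max = LYA * (1.0 + Z_ABS_MAX) \/ FOREST_BLUE - 1.0  # = 3.677 (quoted 3.67)\n\ndef forest_overlaps_target(z_qso):\n    '''True if the QSO forest probes some absorber at z = 2.0-3.0.'''\n    return z_q_min <= z_qso <= z_q_max\n\\end{verbatim}\n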
In the second step, we choose background source candidates with good quality.\nWe calculate the average signal-to-noise ratio, $\\langle$S\/N$\\rangle$, in the wavelength range of the Ly$\\alpha$ forest for the 26899 background source candidates, and select 15573 candidates with $\\langle$S\/N$\\rangle$ greater than 1.4. \nTo maximize the spatial resolution of the tomography map, we set the threshold, $\\langle$S\/N$\\rangle$ $>1.4$, smaller than the value used by \\cite{mukae+20}, while it is more conservative than the value, 1.2, used in \\cite{Lee+18}.\nIn the third step, we remove damped Ly$\\alpha$ absorbers (DLAs) and broad absorption lines (BALs) from the Ly$\\alpha$ forest of the 15573 candidates, because the DLAs and BALs cause an overestimation of the absorption of the Ly$\\alpha$ forest. \nWe identify and remove DLAs using the catalog of \\cite{Chabanier+22}, which is based on the SDSS DR16Q \\citep{Lyke+20}.\nWe mask out the wavelength ranges contaminated by the DLAs of the \\cite{Chabanier+22} catalog (see Section \\ref{subsec:masking} for the procedures). We conduct visual inspection for the 15573 candidates to remove 115 BALs. \nIn Figure \\ref{fig:BAL}, we show an example spectrum with BALs identified by visual inspection.\nIn this way, we obtain 15458 ($=15573-115$) sources whose spectra are free from DLAs and BALs, which we refer to as the background source sample. Table \\ref{tab:bg} lists the number of background sources in each field.\n\n\\begin{figure}[ht!]\n\\begin{center}\n\\includegraphics[scale=0.23]{BAL_fitting.png}\n\\end{center}\n\\caption{Spectrum of a background AGN with BALs. The black line represents the spectrum of a background source. The vertical dashed lines present the central wavelengths of the metal absorptions. The yellow hatches show the wavelength ranges of the BALs. The gray hatches indicate the wavelength ranges not used for the reconstruction of {\\sc Hi} tomography maps. The SDSS ID of this spectrum is 106584616, whose redshift is 3.067837.}\n\\label{fig:BAL}\n\\end{figure}\n\n\\section{HI Tomography and Mapping} \\label{sec:hi_tomography}\n\nIn this section we describe the process to construct {\\sc Hi} tomography maps with the spectra of the background sources. For {\\sc Hi} tomography, we need to obtain the intrinsic continua of the background sources. Section \\ref{subsec:masking} explains the masking of the biasing absorption features in the background source spectra, Section \\ref{subsec:fitting} describes the determination of the intrinsic continua, and Section \\ref{subsec:reconstruction} presents the construction of the {\\sc Hi} tomography maps with the intrinsic continuum spectra.\n\n\\subsection{DLA and Intrinsic Absorption Masking} \\label{subsec:masking}\n\nBecause a DLA is an absorption system with a high neutral-hydrogen column density, $N_{\\rm HI} > 2 \\times 10^{20}$ cm$^{-2}$, an intervening DLA completely absorbs a large portion of the Ly$\\alpha$ forest over $\\Delta v \\sim 10^{3}$ km s$^{-1}$, which biases the estimates of the intrinsic continua of the background sources.\nFor the spectra of the background sources, we mask out the DLAs identified in Section \\ref{subsec:bkagn}. \nWe determine the range of wavelengths for masking with the IDL code of \\cite{Lee+12}.\nThe wavelength range corresponds to the equivalent width of each DLA \\citep{Draine+11}:\n\\begin{equation} \\label{eq:dla}\n W \\sim \\lambda_\\mathrm{\\alpha}\\left[\\frac{e^2}{m_\\mathrm{e}c^2}N_\\mathrm{HI} f_\\mathrm{\\alpha}\\lambda_\\mathrm{\\alpha}\\left(\\frac{\\gamma_\\mathrm{\\alpha} \\lambda_\\mathrm{\\alpha}}{c}\\right)\\right]^{1\/2}.\n\\end{equation}\nIn the formula, $\\lambda_{\\alpha}$ is the rest-frame wavelength of the hydrogen Ly$\\alpha$ line (i.e. 1216 \\AA), while $c$, $e$, $m_e$, $f_\\alpha$, $N_{\\rm HI}$, and $\\gamma_\\alpha$ are the speed of light, the electron charge, the electron mass, the Ly$\\alpha$ oscillator strength, the {\\sc Hi} column density of the DLA, and the sum of the Einstein A coefficients, respectively.\nWe mask out these wavelength ranges of the background source spectra.\nIn Figure \\ref{fig:MFPCA}, the masked DLA is indicated by the yellow hatches.\n\nWe also mask out the intrinsic metal absorption lines of the background sources, which are another source of bias.\nWe mask {\\sc Siv} $\\lambda$1062, {\\sc Nii} $\\lambda$1084, {\\sc Ni} $\\lambda$1134, and {\\sc Ciii} $\\lambda$1176 \\citep{Lee+12}, which are shown by the dashed lines in Figure \\ref{fig:MFPCA}.\nBecause the spectral resolutions of the SDSS spectra are $\\Delta\\lambda=1.8-5.2$ \\AA, we adopt a masking size of $10$ \\AA\\ in the observed frame.
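\nFor concreteness, Equation (\\ref{eq:dla}) can be evaluated with standard atomic data; the cgs constants in the sketch below are textbook values rather than numbers taken from this work:\n\\begin{verbatim}\nimport math\n\nr_e     = 2.818e-13    # classical electron radius e^2\/(m_e c^2) [cm]\nc       = 2.998e10     # speed of light [cm\/s]\nlam_a   = 1215.67e-8   # rest-frame Ly-alpha wavelength [cm]\nf_a     = 0.4164       # Ly-alpha oscillator strength\ngamma_a = 6.265e8      # sum of the Einstein A coefficients [1\/s]\n\ndef dla_equivalent_width(N_HI):\n    '''Rest-frame equivalent width [A] of Eq. (eq:dla); N_HI in cm^-2.'''\n    w_over_lam = math.sqrt(r_e * N_HI * f_a * lam_a * (gamma_a * lam_a \/ c))\n    return w_over_lam * lam_a * 1e8   # cm -> Angstrom\n\n# At the DLA threshold N_HI = 2e20 cm^-2 this gives ~10 A in the rest\n# frame, i.e. a few tens of Angstroms in the observed frame at z = 2-3.\n\\end{verbatim}\n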
\\subsection{Intrinsic Continuum Determination} \\label{subsec:fitting}\n\nIn order to obtain the intrinsic continua of the background sources (Section \\ref{subsec:bkagn}) in the Ly$\\alpha$ forest wavelength range (LAF-WR; $1040-1185$ \\AA), we conduct mean-flux regulated principal component analysis (MF-PCA) fitting with the IDL code of \\cite{Lee+12} for the background sources after the masking (Section \\ref{subsec:masking}).\n\nThere are two steps in the MF-PCA fitting process.\nThe first step is to predict the shape of the intrinsic continuum of the background sources in the LAF-WR. We conduct least-squares principal component analysis (PCA) fitting \\citep{suzuki+05,Lee+12} to the background source spectrum in the rest frame $1216 - 1600$ \\AA:\n\\begin{equation} \\label{eq:pca}\n f_{\\rm PCA} (\\lambda) = \\mu(\\lambda)+\\sum_{j=1}^{8} c_j\\xi_j (\\lambda),\n\\end{equation}\nwhere $\\lambda$ is the rest-frame wavelength.\nThe values of $c_j$ are the free parameters for the weights.\nThe function $\\mu(\\lambda)$ is the average spectrum calculated from the 50 local QSO spectra of \\cite{suzuki+05}.\nThe function $\\xi_j(\\lambda)$ represents the $j$th principal component (or ``eigenspectrum\") out of the 8 principal components taken from the PCA template shown in Figure \\ref{fig:principle_component}.\n\nIn the second step, we predict the intrinsic continuum of the background source in the LAF-WR.\nBecause the PCA template is obtained with the local QSO spectra, the best-fit $f_{\\rm PCA}$ in the LAF-WR does not include the cosmic evolution of the average transmission of the Ly$\\alpha$ forest.\nOn average, the best-fit $f_{\\rm PCA}$ in the LAF-WR should agree with the cosmic mean-flux evolution \\citep{FG+08}:\n\\begin{equation} \\label{eq:mf}\n \\langle F(z) \\rangle = \\exp[-0.001845(1 + z)^{3.924}],\n\\end{equation}\nwhere $z$ is the redshift of the absorber.\nWe use $f_{\\rm PCA}$ and a correction function $a+b\\lambda$, which absorbs large-scale power along the line of sight, to estimate the intrinsic continuum $f_{\\rm intrinsic}(\\lambda)$ with the equation:\n\\begin{equation} \\label{eq:mf1}\n f_{\\rm intrinsic}(\\lambda) = f_{\\rm PCA }(\\lambda)\\times(a+b\\lambda),\n\\end{equation}\nwhere $a$ and $b$ are the free parameters.\nBecause the ratio $f_{\\rm obs} (\\lambda)\/f_{\\rm intrinsic}(\\lambda)$ should agree with the cosmic average $\\langle F(z) \\rangle$ for the absorber redshift $z = (\\lambda_{\\rm obs}\/1216) - 1$ in the LAF-WR, we conduct least-squares fitting to find the values of $a$ and $b$ providing the best fit between the mean ratio and the cosmic average.\nThe red line shown in the bottom panel of Figure \\ref{fig:MFPCA} presents an MF-PCA fitted continuum derived from the spectrum of one of our background sources.\n\nBy the MF-PCA fitting, we have obtained the estimates of $f_{\\rm intrinsic}(\\lambda)$ for 14736 out of the 15458 background sources. The remaining background sources show poor fitting results, as identified by visual inspection.\nWe do not use these background sources in the following analyses.\nFigure \\ref{fig:Poor_fitting} shows an example of a poor fitting result caused by unidentified absorption.\nWe adopt continuum fitting errors of $\\sim7\\%, \\sim6\\%$, and $\\sim4\\%$ for Ly$\\alpha$ forests with mean S\/N values of $<4$, $4-10$, and $>10$, respectively \\citep{Lee+12}.
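\nThe two steps can be summarized by the following schematic NumPy sketch; this is not the \\cite{Lee+12} IDL code, and the inputs (the mean spectrum mu, the eight eigenspectra xi, the grid wave_rest, the observed flux, and the source redshift z_q) are assumed to be given on a common rest-frame wavelength grid:\n\\begin{verbatim}\nimport numpy as np\n\ndef mfpca_continuum(wave_rest, flux, mu, xi, z_q):\n    # Step 1: least-squares PCA fit redward of Ly-alpha (Eq. eq:pca).\n    red = (wave_rest > 1216.0) & (wave_rest < 1600.0)\n    A = xi[:, red].T                            # (n_pix_red, 8)\n    c, *_ = np.linalg.lstsq(A, flux[red] - mu[red], rcond=None)\n    f_pca = mu + xi.T @ c\n\n    # Step 2: mean-flux regulation in the forest (Eqs. eq:mf, eq:mf1).\n    laf = (wave_rest > 1040.0) & (wave_rest < 1185.0)\n    z_abs = wave_rest[laf] * (1.0 + z_q) \/ 1216.0 - 1.0\n    mean_F = np.exp(-0.001845 * (1.0 + z_abs) ** 3.924)\n    # Fit f_obs ~ f_PCA * (a + b*lambda) * <F(z)> by least squares.\n    M = np.vstack([f_pca[laf] * mean_F,\n                   f_pca[laf] * mean_F * wave_rest[laf]]).T\n    (a, b), *_ = np.linalg.lstsq(M, flux[laf], rcond=None)\n    return f_pca * (a + b * wave_rest)          # f_intrinsic(lambda)\n\\end{verbatim}\n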
\\begin{figure}[ht!]\n\\begin{center}\n\\includegraphics[scale=0.43]{principal_components.png}\n\\end{center}\n\\caption{Principal components and mean flux taken from \\cite{suzuki+05}. The top panel shows the normalized mean flux of 50 local QSOs in rest-frame wavelength. The bottom 8 panels show the $1{\\rm st}-8{\\rm th}$ principal components that are used in the PCA fitting in our study. Each principal component is normalized to the mean flux.}\n\\label{fig:principle_component}\n\\end{figure}\n\n\\begin{figure}[ht!]\n\\begin{center}\n\\includegraphics[scale=0.22]{MFPCA.png}\n\\end{center}\n\\caption{Example of a background source spectrum that was used for the reconstruction of the {\\sc Hi} tomography map. Bottom panel: estimation of the intrinsic continuum. The thin black line is the spectrum of a background source taken from the SDSS survey. The red and magenta lines are the results of MF-PCA and PCA fitting, respectively.\nThe vertical dashed lines present the central wavelengths of the metal absorptions.\nThe gray hatches represent the wavelength ranges that are not used for the {\\sc Hi} tomography map reconstructions.\nThe yellow hatch indicates the wavelength range of the DLA. Top panel: spectrum of $\\delta_\\mathrm{F}$ extracted from the bottom panel in the LAF-WR. The vertical yellow and gray hatches are the same as those in the bottom panel. The black and pink lines show the spectrum of $\\delta_\\mathrm{F}$ and the error of $\\delta_\\mathrm{F}$ at the corresponding wavelength extracted from the bottom panel. The horizontal line indicates the cosmic average of the Ly$\\alpha$ forest transmission.}\n\\label{fig:MFPCA}\n\\end{figure}\n\n\\begin{figure}[ht!]\n\\begin{center}\n\\includegraphics[scale=0.22]{unknown_absorption.png}\n\\end{center}\n\\caption{Same as the bottom panel of Figure \\ref{fig:MFPCA}, but for a background spectrum with a poor fitting result. The red and magenta lines are the results of MF-PCA and PCA continuum fitting, respectively. The yellow hatch indicates the wavelength range of the unidentified absorption.}\n\\label{fig:Poor_fitting}\n\\end{figure}\n\n\\subsection{HI Tomography Map Reconstruction} \\label{subsec:reconstruction}\nWe reconstruct our {\\sc Hi} tomography maps by a procedure similar to that of \\cite{Lee+18}.\nWe define the Ly$\\alpha$ forest fluctuation $\\delta_{\\rm F}$ at each pixel of the spectrum by\n\\begin{equation} \\label{eq:df}\n \\delta_{\\rm F} = \\frac{f_{\\rm obs} \/ f_{\\rm intrinsic}} {\\langle F(z) \\rangle } - 1,\n\\end{equation}\nwhere $f_{\\rm obs}$ and $f_{\\rm intrinsic}$ are the observed spectrum and the estimated intrinsic continuum, respectively, and ${\\langle F(z) \\rangle}$ is the cosmic average transmission.\nWe calculate $\\delta_{\\rm F}$ with our background source spectra.\nThe top panel of Figure \\ref{fig:MFPCA} shows the ``spectrum\" of $\\delta_{\\rm F}$ derived from the $f_{\\rm obs}$ and $f_{\\rm intrinsic}$ in the bottom panel.\nFor the pixels in the masked wavelength ranges (Section \\ref{subsec:masking}), we do not use $\\delta_{\\rm F}$ in our further analyses. We thus obtain $\\delta_{\\rm F}$ in 876,560 pixels.
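\nIn code, Equation (\\ref{eq:df}) is a direct transcription (array names are illustrative):\n\\begin{verbatim}\nimport numpy as np\n\ndef delta_F(f_obs, f_intrinsic, z_abs, masked):\n    '''Per-pixel forest fluctuation of Eq. (eq:df); masked pixels -> NaN.'''\n    mean_F = np.exp(-0.001845 * (1.0 + z_abs) ** 3.924)   # Eq. (eq:mf)\n    d = f_obs \/ f_intrinsic \/ mean_F - 1.0\n    return np.where(masked, np.nan, d)\n\\end{verbatim}\n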
For the {\\sc Hi} tomography map of the Extended Fall field, we define the cells of the {\\sc Hi} tomography map in the three-dimensional comoving space.\nWe choose a volume of $30^{\\circ} \\times 3.3^{\\circ}$ in the longitudinal and latitudinal dimensions, respectively, in the redshift range of $2.0 < z < 3.0$. \nThe comoving size of our {\\sc Hi} tomography map is 2257 $h^{-1} {\\rm cMpc}$ $\\times$ 233 $h^{-1} {\\rm cMpc}$ $\\times$ 811 $h^{-1} {\\rm cMpc}$ in the right ascension (R.A.), declination (Dec), and $z$ directions, respectively, in the same manner as \\cite{mukae+20}.\nOur {\\sc Hi} tomography map has $451 \\times 46 \\times 162$ cells, and one cell is a cube of 5.0 $h^{-1} {\\rm cMpc}$ on a side, where the line-of-sight distance is estimated under the assumption of the Hubble flow.\n\nWe conduct a Wiener filtering scheme for reconstructing the sightlines that do not have background sources.\nWe use the calculation code developed by \\cite{stark+15a}.\nThe solution for each cell of the reconstructed sightline is obtained by\n\\begin{equation} \\label{eq:wf}\n \\delta_{\\rm F}^{\\rm rec} = \\mathrm{C}_{\\mathrm{MD}}\\cdot(\\mathrm{C}_{\\mathrm{DD}}+\\mathrm{N})^{-1}\\cdot\\delta_{\\rm F},\n\\end{equation}\nwhere $\\mathrm{C}_{\\mathrm{MD}}$, $\\mathrm{C}_{\\mathrm{DD}}$, and $\\mathrm{N}$ are the map-data, data-data, and noise covariances, respectively.\nWe assume Gaussian covariances between two points $\\mathrm{r}_1$ and $\\mathrm{r}_2$:\n\\begin{equation} \\label{eq:covariance}\n \\mathrm{C}_{\\mathrm{MD}} = \\mathrm{C}_{\\mathrm{DD}} = \\mathrm{C}(\\mathrm{r}_\\mathrm{1},\\mathrm{r}_\\mathrm{2}),\n\\end{equation}\n\\begin{equation} \\label{eq:gaussian covariance}\n \\mathrm{C}(\\mathrm{r}_\\mathrm{1},\\mathrm{r}_\\mathrm{2}) = \\sigma^{2}_{F}\\exp \\left[ -\\frac{(\\Delta r_{\\|})^2}{2L^2_{\\|}} \\right] \\exp \\left[ -\\frac{(\\Delta r_{\\bot})^2}{2L^2_{\\bot}} \\right],\n\\end{equation}\nwhere $\\Delta r_{\\|}$ and $\\Delta r_{\\bot}$ are the distances between $\\mathrm{r}_\\mathrm{1}$ and $\\mathrm{r}_\\mathrm{2}$ in the directions parallel and transverse to the line of sight, respectively. \nThe values of $L_{\\|}$ and $L_{\\bot}$ are the correlation lengths parallel and transverse to the line-of-sight (LoS) direction, respectively, and are set to $L_{\\bot}$ = $L_{\\|}$ = 15 $h^{-1} {\\rm cMpc}$.\nThe value of $\\sigma^2_F$ is the normalization factor, set to $\\sigma^2_F = 0.05$.\n\\cite{stark+15a} developed this Gaussian form to obtain a reasonable estimate of the true correlation function of the Ly$\\alpha$ forest. \nWe perform the Wiener filtering reconstruction with the values of $\\delta_{\\rm F}$ at the 898,390 pixels, using the aforementioned parameters of the \\cite{stark+15a} algorithm with a stopping tolerance of $10^{-3}$ for the preconditioned conjugate gradient solver.
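\nA toy dense-matrix version of Equations (\\ref{eq:wf})--(\\ref{eq:gaussian covariance}) is given below for illustration; the actual maps are computed with the iterative solver of \\cite{stark+15a}, and the diagonal noise covariance assumed here is a simplification:\n\\begin{verbatim}\nimport numpy as np\n\nSIGMA_F2, L_PAR, L_PERP = 0.05, 15.0, 15.0  # sigma_F^2, corr. lengths\n\ndef gaussian_cov(x1, x2):\n    '''Eq. (eq:gaussian covariance) for point sets x1 (n,3), x2 (m,3);\n    axis 2 of the coordinates is the line of sight [h^-1 cMpc].'''\n    d = x1[:, None, :] - x2[None, :, :]\n    d_par2 = d[..., 2]**2\n    d_perp2 = d[..., 0]**2 + d[..., 1]**2\n    return SIGMA_F2 * np.exp(-d_par2\/(2*L_PAR**2) - d_perp2\/(2*L_PERP**2))\n\ndef wiener_map(map_xyz, data_xyz, delta_f, noise_var):\n    '''Eq. (eq:wf): delta_F_rec = C_MD (C_DD + N)^-1 delta_F.'''\n    C_DD = gaussian_cov(data_xyz, data_xyz) + np.diag(noise_var)\n    C_MD = gaussian_cov(map_xyz, data_xyz)\n    return C_MD @ np.linalg.solve(C_DD, delta_f)\n\\end{verbatim}\n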
As noted by \\cite{Lee+16}, a boundary effect that leads to an additional error on $\\delta_{\\rm F}$ occurs at positions near the boundaries of an {\\sc Hi} tomography map.\nThe boundary effect is caused by the background sightlines not covering the regions that contribute to the calculation of the $\\delta_{\\rm F}$ values for cells near the {\\sc Hi} tomography map boundaries.\nTo avoid the boundary effect, we extend each side of the {\\sc Hi} tomography map of the ExFall field by a distance of 40 h$^{-1}$cMpc.\nThe resulting map is shown in Figure \\ref{fig:tomography_map_fall}.\n\nFor the {\\sc Hi} tomography map reconstruction of the ExSpring field, we perform almost the same procedure as that for the ExFall field.\nThe area of the ExSpring field is more than 6 times larger than that of the ExFall field.\nWe separate the ExSpring field into $8\\times3=24$ footprints to save calculation time.\nEach footprint covers an area of $10^\\circ \\times 5^\\circ$ in the R.A. and Dec directions, respectively.\nWe reconstruct the {\\sc Hi} tomography maps footprint by footprint for the ExSpring field.\n\nTo weaken the boundary effect, we extend each side of the footprints by a distance of 40 h$^{-1}$cMpc.\nThe extensions mean that every two adjacent footprints have an overlapping region of 80 h$^{-1}$cMpc width.\nThe width of the overlapping regions is a conservative value for weakening the boundary effect, since it is much larger than the resolution, 15 h$^{-1}$cMpc, of our {\\sc Hi} tomography maps.\nBy the 40 h$^{-1}$cMpc extension, we reduce the uncertainty in the $\\delta_{\\rm F}$ value at the edge of each footprint caused by the boundary effect to $\\pm$0.01.\nThis value corresponds to $1\/10$ of the typical error for each cell of the {\\sc Hi} tomography map \\citep{mukae+20}.\nThe remaining additional error caused by the boundary effect is negligible compared to the statistical uncertainties in the HI distributions obtained in Section \\ref{sec:result}.\nWe then follow the reconstruction procedure for the ExFall field to reconstruct the HI tomography maps of the footprints and cut off all the cells within 40 h$^{-1}$cMpc of the borders that are affected by the boundary effect.\nFinally we obtain the {\\sc Hi} tomography map of the ExSpring field with a spatial volume of 3475 $h^{-1} {\\rm cMpc}$ $\\times$ 1058 $h^{-1} {\\rm cMpc}$ $\\times$ 811 $h^{-1} {\\rm cMpc}$ in the R.A., Dec, and $z$ directions, respectively (Figure \\ref{fig:tomography_map_spring}).\n\n\\begin{figure*}[ht!]\n\\begin{center}\n\\includegraphics[scale=0.2]{3dtomo_fall.png}\n\\end{center}\n\\caption{3D {\\sc Hi} tomography map of the ExFall field. The color contours represent the values of $\\delta_{\\rm F}$ from negative (red) to positive (blue).\nThe spatial volume of the {\\sc Hi} tomography map is $2257 \\times 233 \\times 811$ $h^{\\rm -3}$cMpc$^3$.\nThe redshift range is $z = 2.0 - 3.0$.}\n\\label{fig:tomography_map_fall}\n\\end{figure*}\n\n\\begin{figure*}[ht!]\n\\begin{center}\n\\includegraphics[scale=0.2]{3dtomo_spring.png}\n\\end{center}\n\\caption{Same as Figure \\ref{fig:tomography_map_fall}, but for the ExSpring field.\nThe spatial volume of the {\\sc Hi} tomography map is $3475 \\times 1058 \\times 811$ $h^{\\rm -3}$cMpc$^3$.\n}\n\\label{fig:tomography_map_spring}\n\\end{figure*}\n\n\\section{Results and Discussions} \\label{sec:result}\n\n\\subsection{Average HI Profiles around AGN: Validations of our AGN Samples} \n\\label{subsec:average_profile}\n\nIn this section we present the {\\sc Hi} profile, $\\delta_\\mathrm{F}$ as a function of distance, for the All-AGN sample sources, using the reconstructed {\\sc Hi} tomography maps.\nWe compare the {\\sc Hi} profile of the All-AGN sample to that of a previous study \\citep{FR+13}.\nWe also present the comparison of the {\\sc Hi} profiles between the T1-AGN(H) and T1-AGN samples that are made with the HETDEX and SDSS data.\nIn this study, we only discuss structures with sizes $\\gtrsim 15$ $h^{-1}$cMpc, corresponding to the resolution of our 3D {\\sc Hi} tomography maps.\n\nFor the {\\sc Hi} profiles of the All-AGN sample, we extract $\\delta_\\mathrm{F}$ values around the 16978 All-AGN sample sources in the {\\sc Hi} tomography map.\nWe cut the {\\sc Hi} tomography map centered at the positions of the All-AGN sample sources, and stack the $\\delta_\\mathrm{F}$ values to make a two-dimensional (2D) map of the average $\\delta_\\mathrm{F}$ distribution around the sources that is referred to as a 2D {\\sc Hi} profile of the All-AGN sample sources.
The two dimensions of the 2D {\\sc Hi} profile correspond to the transverse distance $D_\\mathrm{Trans}$ and the LoS Hubble distance. The velocity corresponding to the LoS Hubble distance is referred to as the LoS velocity.\nHere we define the Ly$\\alpha$ forest absorption fluctuation\n\\begin{equation}\nA_{\\rm F} \\equiv -\\delta_{\\rm F},\n\\end{equation}\nwhich is an indicator of the amount of the {\\sc Hi} absorption.\n\nFigure \\ref{fig:all_agn_2d} shows the 2D {\\sc Hi} profile with values of $A_{\\rm F}$ ($\\delta_{\\rm F}$) for the All-AGN sample.\nThe solid black lines denote the contours of $A_{\\rm F}$. \nIn each cell of the 2D {\\sc Hi} profile, we define the $1\\sigma$ error with the standard deviation of the $A_{\\rm F}$ values of 100 mock 2D {\\sc Hi} profiles.\nEach mock 2D {\\sc Hi} profile is obtained in the same manner as the real 2D {\\sc Hi} profile, but with random positions of sources whose number is the same as that of the All-AGN sample sources.\nIn Figure \\ref{fig:all_agn_2d}, the dotted black lines indicate the contours of the 6$\\sigma$, 9$\\sigma$, and $12\\sigma$ confidence levels.\nWe find a $19.5\\sigma$ detection of $A_{\\rm F}$ at the source position (0,0).\nThe $A_{\\rm F}$ value at the source position is the average over the ranges of ($-7.5$ $h^{-1}\\mathrm{cMpc}$, $+7.5$ $h^{-1}\\mathrm{cMpc}$) in both the LoS and transverse directions.\nThe $19.5\\sigma$ detection at the source position suggests that rich {\\sc Hi} gas exists near the All-AGN sources on average.\nThe 2D {\\sc Hi} profile is more extended in the transverse direction than along the line of sight.\nWe discuss this difference in Section \\ref{subsec:AGN_LoSandTrans_profile}.\n\nWe then define a 3D distance, $D$, under the assumption of the Hubble flow in the LoS direction. We derive $A_{\\rm F}$ as a function of $D$, which is referred to as the ``{\\sc Hi} radial profile\", by averaging the $A_{\\rm F}$ values of the 2D {\\sc Hi} profile over the 3D distance.\nFigure \\ref{fig:ravoux+20} shows the {\\sc Hi} radial profile of the All-AGN sample.\nWe find that the $A_{\\rm F}$ values decrease towards large distances.\nThis trend is consistent with the one found by \\cite{ravoux+20} with the SDSS quasars.\n\n\\begin{figure*}[ht!]\n\\begin{center}\n\\includegraphics[scale=0.8]{all_agn_sf.png}\n\\end{center}\n\\caption{\n2D {\\sc Hi} profile of the All-AGN sample sources. The color map indicates the $A_{\\rm F}$ ($\\delta_\\mathrm{F}$) values of each cell of the 2D {\\sc Hi} profile.\nThe solid lines denote constant $A_{\\rm F}$ ($\\delta_{\\rm F}$) values in steps of $0.01 \\ (-0.01)$ starting at $0.01 \\ (-0.01)$. The dotted lines correspond to multiples of $3\\sigma$ starting at $6\\sigma$. }\n\\label{fig:all_agn_2d}\n\\end{figure*}
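\nThe stacking and mock-based error estimate described above can be sketched as follows; this is a simplified single-transverse-axis version with uniform random mock positions, and the names and cell geometry are illustrative rather than the exact implementation:\n\\begin{verbatim}\nimport numpy as np\n\ndef stack_at(hi_map, cells, dlos, dtrans):\n    '''Mean delta_F at integer cell offsets (dlos, dtrans) from sources.'''\n    nx, ny, nz = hi_map.shape\n    vals = []\n    for (i, j, k) in cells:\n        ii, kk = i + dtrans, k + dlos  # offset along one transverse axis\n        if 0 <= ii < nx and 0 <= kk < nz:\n            vals.append(hi_map[ii, j, kk])\n    return np.nanmean(vals)\n\ndef mock_sigma(hi_map, n_src, dlos, dtrans, n_mock=100, seed=0):\n    '''Per-cell 1-sigma error from 100 stacks at random positions.'''\n    rng = np.random.default_rng(seed)\n    highs = np.array(hi_map.shape)\n    mocks = [stack_at(hi_map, rng.integers(0, highs, size=(n_src, 3)),\n                      dlos, dtrans)\n             for _ in range(n_mock)]\n    return np.std(mocks)\n\\end{verbatim}\n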
\\cite{ravoux+20} have obtained the average {\\sc Hi} absorption distribution around the AGN taken from the SDSS data release 16 quasar (SDSS DR16Q) catalog in the Stripe 82 field. The criteria of the target selection for the SDSS DR16Q and SDSS DR14Q sources are the same.\nThe luminosity distribution of the AGN of \\cite{ravoux+20} is almost the same as that of our All-AGN sample sources, which are taken from the SDSS DR14Q catalog.\nWe derive the average radial {\\sc Hi} profile of the \\cite{ravoux+20} AGN sources by the same method as for our All-AGN sample, using the 3D {\\sc Hi} tomography map reconstructed by \\cite{ravoux+20}.\nWe compare the radial {\\sc Hi} profile of the All-AGN sample with the one derived from the 3D {\\sc Hi} tomography map of \\cite{ravoux+20}.\nThe comparison is shown in Figure \\ref{fig:ravoux+20}.\nOur result agrees with that of \\cite{ravoux+20} within the error range at scales $D> 10$ $h^{-1}$ cMpc.\nThe peak values of $A_\\mathrm{F}$ are comparable, $A_{\\rm F}\\simeq 0.02$. \nThe slight difference between the peak values of our and \\citeauthor{ravoux+20}'s results can be explained by the different approaches to estimating the intrinsic continuum.\n\\citeauthor{ravoux+20} conduct power-law fitting, which is different from the MF-PCA fitting that we use, for the intrinsic continuum in the wavelength range of the Ly$\\alpha$ forest.\nGiven the coarse ($\\sim 15$ $h^{-1}$ cMpc) spatial resolution of both our {\\sc Hi} tomography map and that of \\cite{ravoux+20}, neither study is able to search for the proximity effect, which creates a photoionized region around AGN \\citep{D'Odorico+08}.\nFrom the comparison shown in Figure \\ref{fig:ravoux+20}, we conclude that the {\\sc Hi} distribution derived from our {\\sc Hi} tomography map is reliable.\n\n\\begin{figure}[ht!]\n\\begin{center}\n\\includegraphics[scale=0.39]{ravoux20_compare.png}\n\\end{center}\n\\caption{\n{\\sc Hi} radial profiles of the All-AGN and \\cite{ravoux+20} AGN samples.\nThe black and gray data points and error bars show the {\\sc Hi} radial profiles of our All-AGN sample sources and the AGN of \\citealt{ravoux+20}, respectively. The horizontal dashed line shows the cosmic average {\\sc Hi} absorption, $A_\\mathrm{F}=0$ ($\\delta_\\mathrm{F}=0$).}\n\\label{fig:ravoux+20}\n\\end{figure}\n\nTo check the reliability of the HETDEX survey results, we compare the result derived from the HETDEX AGN with the validated result of the SDSS AGN.\n\nWe select type-1 AGN from the HETDEX T1-AGN(H) and SDSS T1-AGN samples to make sub-samples of T1-AGN(H) and T1-AGN whose rest-frame 1350 \\AA \\ luminosity ($L_\\mathrm{1350}$) distributions are the same.\nFor T1-AGN, the measurements directly from the SDSS spectra ($L^\\mathrm{spec}_\\mathrm{1350}$) are available \\citep{Rakshit20}. For T1-AGN(H), we do not have $L^\\mathrm{spec}_\\mathrm{1350}$ measurements from the HETDEX spectra, so we estimate them using the HSC r-band imaging. Since the central wavelength of the r-band imaging corresponds to rest-frame $\\sim1700 {\\rm \\AA}$, we calibrate the conversion between the r-band luminosity, $L^\\mathrm{phot}_\\mathrm{UV}$, and $L^\\mathrm{spec}_\\mathrm{1350}$. We examine the 283 type-1 AGN sources that appear in both the SDSS and HETDEX surveys (and, thus, have both $L^\\mathrm{spec}_\\mathrm{1350}$ measurements from SDSS and r-band luminosities from HSC) to calibrate the relationship.\nThe results are displayed in Figure \\ref{fig:luv_vs_l1350}.\nThe $L^\\mathrm{phot}_\\mathrm{UV}$ values are always smaller than the $L^\\mathrm{spec}_\\mathrm{1350}$ values \\citep{Rakshit20}.\nBecause the UV slopes of the spectra of the AGN categorized in both the T1-AGN(H) and T1-AGN samples are blue, the luminosity at rest-frame 1350 \\AA\\ always shows a larger value than that at rest-frame 1700 \\AA.\nWe conduct linear fitting to the data points of Figure \\ref{fig:luv_vs_l1350}, and obtain the best-fit linear function.\nWith the best-fit linear function, we estimate $L^\\mathrm{spec}_\\mathrm{1350}$ values for the HETDEX T1-AGN(H) sample sources.
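\nThis calibration amounts to a first-order polynomial fit; a minimal sketch is given below (we assume the fit is performed in log luminosity, which is our reading rather than an explicit statement in the text):\n\\begin{verbatim}\nimport numpy as np\n\ndef calibrate_l1350(log_luv_283, log_l1350_283):\n    '''Fit log L1350_spec = b * log LUV_phot + a on the 283 overlap AGN.'''\n    b, a = np.polyfit(log_luv_283, log_l1350_283, 1)\n    return lambda log_luv: b * log_luv + a\n\n# Usage: estimate L1350 for the T1-AGN(H) sources from HSC r-band data.\n# to_l1350 = calibrate_l1350(log_luv_overlap, log_l1350_overlap)\n# log_l1350_hetdex = to_l1350(log_luv_hetdex)\n\\end{verbatim}\n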
We show the $L^\\mathrm{spec}_\\mathrm{1350}$ distributions of all the T1-AGN(H) and T1-AGN sample sources in the upper panel of Figure \\ref{fig:pdf_sdsst1agn_hetdext1agn}.\nWe make the sub-samples of T1-AGN and T1-AGN(H) that consist of the sources in the overlapping range of the $L^\\mathrm{spec}_\\mathrm{1350}$ distributions, obtaining 4338 and 540 type-1 AGN for the sub-samples of T1-AGN and T1-AGN(H), respectively.\nWe present the $L^\\mathrm{spec}_\\mathrm{1350}$ distributions of the T1-AGN and T1-AGN(H) sub-samples in the bottom panel of Figure \\ref{fig:pdf_sdsst1agn_hetdext1agn}.\n\nWe derive the {\\sc Hi} radial profiles for the sub-samples of the T1-AGN(H) and T1-AGN sample sources, as shown in Figure \\ref{fig:hetdext1agn_sdsst1agn}.\nThe {\\sc Hi} radial profiles of the T1-AGN(H) and T1-AGN sub-sample sources are in good agreement.\n\n\\begin{figure}[ht!]\n\\begin{center}\n\\includegraphics[scale=0.6]{luv_vs_l1350_new.png}\n\\end{center}\n\\caption{Relations of $L^\\mathrm{phot}_\\mathrm{UV}$ against $L^\\mathrm{spec}_\\mathrm{1350}$ for the sources categorized in both the T1-AGN(H) and T1-AGN samples. The $L^\\mathrm{phot}_\\mathrm{UV}$ and $L^\\mathrm{spec}_\\mathrm{1350}$ are measured from the HSC r-band imaging and SDSS spectra \\citep{Rakshit20}, respectively. The gray points show the distribution of the $L^\\mathrm{spec}_\\mathrm{1350}$ $-$ $L^\\mathrm{phot}_\\mathrm{UV}$ relations for the sources categorized in both the T1-AGN(H) and T1-AGN samples. The black dashed line indicates the relation where $L^\\mathrm{spec}_\\mathrm{1350}$ $=$ $L^\\mathrm{phot}_\\mathrm{UV}$. The red dashed line represents the linear best fit to the gray points.}\n\\label{fig:luv_vs_l1350}\n\\end{figure}\n\n\\begin{figure}[ht!]\n\\begin{center}\n\\includegraphics[scale=0.6]{pdf_sdsst1agn_hetdext1agn_sf.png}\n\\end{center}\n\\caption{Top panel: $L^\\mathrm{spec}_\\mathrm{1350}$ distributions of the T1-AGN and T1-AGN(H) samples with blue and red histograms, respectively. \nBottom panel: Same as the top panel, but for the T1-AGN and T1-AGN(H) sub-sample sources.}\n\\label{fig:pdf_sdsst1agn_hetdext1agn}\n\\end{figure}\n\n\\begin{figure}[ht!]\n\\begin{center}\n\\includegraphics[scale=0.39]{hetdext1agn_sdsst1agn_1dprofile_sf.png}\n\\end{center}\n\\caption{{\\sc Hi} radial profiles of the T1-AGN and T1-AGN(H) sub-samples. \nThe blue and red triangles show the values of $A_\\mathrm{F}$ as a function of distance, $D$, for the T1-AGN and T1-AGN(H) sub-sample sources, respectively. \nThe horizontal dashed line shows the cosmic average {\\sc Hi} absorption, $A_{\\rm F}=0$.\nThe right $y$-axis shows the corresponding $\\delta_{\\rm F}$ values.
}\n\\label{fig:hetdext1agn_sdsst1agn}\n\\end{figure}\n\n\\subsection{AGN Average Line-of-Sight and Transverse {\\sc Hi} Profiles} \\label{subsec:AGN_LoSandTrans_profile}\n\nBased on the 2D {\\sc Hi} profile of the All-AGN sample (Figure \\ref{fig:all_agn_2d}), we find that the {\\sc Hi} distributions of the All-AGN sample sources are more extended in the transverse direction.\nIn this section, we present the {\\sc Hi} radial profiles of the All-AGN sample in the LoS and transverse directions and compare these two {\\sc Hi} radial profiles.\n\nTo derive the {\\sc Hi} radial profile of the All-AGN sample as a function of the absolute LoS distance, which is referred to as the LoS {\\sc Hi} radial profile (Figure \\ref{fig:all_agn_LT}), we average the $A_{\\rm F}$ values of the 2D {\\sc Hi} profile of All-AGN over $D_\\mathrm{Trans}$ $<7.5$ $h^{-1}\\mathrm{cMpc}$ (from $-7.5$ $h^{-1}\\mathrm{cMpc}$ to $+7.5$ $h^{-1}\\mathrm{cMpc}$ in the transverse direction), which corresponds to the spatial resolution of the 2D {\\sc Hi} profile map, $15$ $h^{-1}\\mathrm{cMpc}$.\nAmong the 16,978 All-AGN sample sources, 10,884 sources are used as both background and foreground sources. In this case, the {\\sc Hi} absorption ($A_{\\rm F}$) of these 10,884 sources at LoS velocities $\\lesssim-5250$ km s$^{-1}$ is estimated mainly from their own spectra. As discussed in \\citet{Youles+22}, the redshift uncertainty of the SDSS AGN causes an overestimation of the intrinsic continuum, and thus of $A_{\\rm F}$, around the metal emission lines such as {\\sc Ciii} $\\lambda$1176. This leads to a systematic bias toward positive $A_{\\rm F}$ in the {\\sc Hi} radial profile of LoS velocity (LoS distance) at LoS velocities $\\lesssim-5250$ km s$^{-1}$ (Figure \\ref{fig:allagn_con}).\nThe {\\sc Hi} radial profile of LoS velocity (LoS distance) is derived by averaging the $A_{\\rm F}$ values over $D_\\mathrm{Trans}$ $<7.5$ $h^{-1}\\mathrm{cMpc}$ as a function of the negative and positive LoS velocity (LoS distance).\nIn this study, we only use the values of $A_{\\rm F}$ at LoS distances $>-52.5\\,h^{-1}\\mathrm{cMpc}$ (LoS velocities $>-5250$ km s$^{-1}$) to derive the LoS {\\sc Hi} radial profile of the All-AGN sample (Figure \\ref{fig:all_agn_LT}).\nThis scale, LoS distance $>-52.5\\,h^{-1}\\mathrm{cMpc}$ (LoS velocity $>-5250$ km s$^{-1}$), is determined by the maximum wavelength of the Ly$\\alpha$ forest we use, the smoothing scale of the Wiener filtering scheme, and the AGN redshift uncertainty assumed by \\cite{Youles+22}.\nAfter removing the $A_{\\rm F}$ values affected by this systematic bias in the 2D {\\sc Hi} profile, we present the LoS {\\sc Hi} radial profile of the All-AGN sample in Figure \\ref{fig:all_agn_LT}.\n\nWe estimate the {\\sc Hi} radial profile as a function of $D_\\mathrm{Trans}$, which is referred to as the Transverse {\\sc Hi} radial profile, by averaging the $A_{\\rm F}$ values over the LoS velocity range of $(-750,+750)$ km s$^{-1}$, whose velocity width corresponds to $15$ $h^{-1}$ cMpc in the Hubble-flow distance. The {\\sc Hi} radial profile of $D_\\mathrm{Trans}$ is also shown in Figure \\ref{fig:all_agn_LT}.
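\nThese two cuts through the 2D profile can be written compactly; in the sketch below, prof2d is the stacked 2D {\\sc Hi} profile with axes (LoS, transverse), and d_los and d_trans are its cell-center coordinates in $h^{-1}$cMpc (names are ours):\n\\begin{verbatim}\nimport numpy as np\n\ndef los_profile(prof2d, d_los, d_trans):\n    '''Average over |D_Trans| < 7.5 h^-1 cMpc; drop the contaminated side.'''\n    core = np.abs(d_trans) < 7.5\n    prof = np.nanmean(prof2d[:, core], axis=1)\n    keep = d_los > -52.5            # LoS velocity > -5250 km\/s\n    return d_los[keep], prof[keep]\n\ndef transverse_profile(prof2d, d_los, d_trans):\n    '''Average over |v_LoS| < 750 km\/s, i.e. |D_LoS| < 7.5 h^-1 cMpc.'''\n    core = np.abs(d_los) < 7.5\n    return d_trans, np.nanmean(prof2d[core, :], axis=0)\n\\end{verbatim}\n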
We compare the LoS and Transverse {\\sc Hi} radial profiles.\nThe $A_{\\rm F}$ values decrease more rapidly in the LoS direction than in the Transverse direction (Figure \\ref{fig:all_agn_LT}).\nThis difference may be explained by an effect similar to the Kaiser effect \\citep{Kaiser+87}, i.e. Doppler shifts caused by the large-scale coherent motions of the gas towards the AGN.\nThe LoS {\\sc Hi} radial profile is negative, $A_{\\rm F}\\sim -0.002\\pm0.0008$, on large scales, $\\gtrsim 30$ $h^{-1}$cMpc.\nIn Section \\ref{sec:fr+13}, we discuss the negative $A_{\\rm F}$ values of the LoS {\\sc Hi} radial profiles on large scales and compare our observational result to the models of a previous study, \\citet{FR+13}.\n\n\\begin{figure}[ht!]\n\\begin{center}\n\\includegraphics[scale=0.4]{allagn_los_error.png}\n\\end{center}\n\\caption{{\\sc Hi} radial profile of LoS velocity (LoS distance) for the All-AGN sample. The black solid line shows the $A_\\mathrm{F}$ values as a function of LoS velocity (LoS distance) for the All-AGN sample. The vertical dashed line presents the position of LoS velocity $=0$ km s$^{-1}$ (LoS distance $=0$ $h^{-1}$cMpc). The horizontal dashed line indicates the cosmic average {\\sc Hi} absorption, $A_\\mathrm{F}=0$. The gray shaded area shows the range of the $A_\\mathrm{F}$ values not used to derive the LoS {\\sc Hi} radial profile.}\n\\label{fig:allagn_con}\n\\end{figure}\n\n\\begin{figure}[ht!]\n\\begin{center}\n\\includegraphics[scale=0.45]{all_agn_sf_LT.png}\n\\end{center}\n\\caption{LoS and Transverse {\\sc Hi} radial profiles of the All-AGN sample. The black and gray lines show the $A_{\\rm F}$ ($\\delta_\\mathrm{F}$) values as a function of LoS distance and $D_\\mathrm{Trans}$, respectively. The horizontal dashed line indicates $A_{\\rm F}=0$ ($\\delta_\\mathrm{F}=0$).}\n\\label{fig:all_agn_LT}\n\\end{figure}\n\n\\subsection{Source Dependences of the AGN Average HI Profiles} \\label{subsec:source_dependence}\nIn this section, we present the 2D {\\sc Hi} and {\\sc Hi} radial profiles of the AGN sub-samples to investigate how the average {\\sc Hi} density depends on luminosity and AGN type.\n\n\\subsubsection{AGN Luminosity Dependence} \\label{subsubsec:luminosity_dependence}\n\nWe study the AGN-luminosity dependence of the average {\\sc Hi} profiles. Figure \\ref{fig:bright_faint_t1agn_pdf} presents the $L_{1350}^{\\rm spec}$ distribution of the All-AGN sample. We split the All-AGN sample into 3 sub-samples: All-AGN-L3, All-AGN-L2, and All-AGN-L1.\nThe luminosity ranges of the sub-samples are $43.70<\\log (L_{1350}^{\\rm spec}\/{\\rm [erg\\ s^{-1}]})<45.41$, $45.41<\\log (L_{1350}^{\\rm spec}\/{\\rm [erg\\ s^{-1}]})<45.75$, and $45.75<\\log (L_{1350}^{\\rm spec}\/{\\rm [erg\\ s^{-1}]})<47.35$, respectively.\nThe luminosity ranges of the 3 sub-samples are defined such that the sub-samples contain the same number of AGN, 5695 each.
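\nSuch an equal-number split corresponds to luminosity terciles, schematically:\n\\begin{verbatim}\nimport numpy as np\n\ndef luminosity_terciles(log_l1350):\n    '''Three equal-size bins (All-AGN-L3\/L2\/L1, faint to bright).'''\n    q1, q2 = np.quantile(log_l1350, [1.0 \/ 3.0, 2.0 \/ 3.0])  # ~45.41, ~45.75\n    faint  = log_l1350 < q1\n    middle = (q1 <= log_l1350) & (log_l1350 < q2)\n    bright = log_l1350 >= q2\n    return faint, middle, bright\n\\end{verbatim}\n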
We derive the 2D {\\sc Hi} profiles of the sub-samples in the same manner as in Section \\ref{subsec:average_profile}, and present the profiles in Figure \\ref{fig:brightandfaint_t1agn_2d}.\nIn these 2D {\\sc Hi} profiles, the brightest sub-sample of All-AGN-L1 (the faintest sub-sample of All-AGN-L3) shows the weakest (the strongest) {\\sc Hi} absorption around the source position, $D=0$.\n\nWe then extract the {\\sc Hi} radial profiles from the 2D {\\sc Hi} profiles of the All-AGN sub-samples, and present the {\\sc Hi} radial profiles in Figure \\ref{fig:bright_faint_t1agn_1d}.\nIn this figure, we find that the peak values of $A_\\mathrm{F}$ for the All-AGN sub-samples anti-correlate with the AGN luminosities.\nThe peak $A_\\mathrm{F}$ values near the source position drop from the faintest All-AGN-L3 sub-sample to the brightest All-AGN-L1 sub-sample. \nIf the gas densities around bright AGN are higher than (or comparable to) those around faint AGN, this result would suggest that the ionization fraction of the hydrogen gas around bright AGN is higher than that around faint AGN on average.\n\nWe also present the LoS and Transverse {\\sc Hi} radial profiles of the All-AGN sub-samples, derived by the same method as that for the All-AGN sample, in Figure \\ref{fig:allagn_L321_lostrans}.\nSimilar to what we found in the comparison of the {\\sc Hi} radial profiles for the All-AGN sub-samples, the peak values of the LoS and Transverse {\\sc Hi} profiles also decrease from the faintest sub-sample, All-AGN-L3, to the brightest sub-sample, All-AGN-L1.\nFor the LoS (Transverse) {\\sc Hi} radial profiles on scales beyond 25 $h^{-1}$ cMpc, we do not find any significant differences among the All-AGN sub-samples.\n\n\\begin{figure}[ht!]\n\\begin{center}\n\\includegraphics[scale=0.5]{allagn_lu_distribution_sf.png}\n\\end{center}\n\\caption{$\\log L^\\mathrm{spec}_\\mathrm{1350}$ distribution of the All-AGN sample sources. The vertical dashed lines indicate the borders of $L^\\mathrm{spec}_\\mathrm{1350}$ where $\\log(L^\\mathrm{spec}_\\mathrm{1350}\/[{\\rm erg \\ s ^{-1}}])$ $=45.41$ and $45.75$, respectively.\nThese borders separate the All-AGN sample into the 3 sub-samples of All-AGN-L3, All-AGN-L2, and All-AGN-L1.}\n\\label{fig:bright_faint_t1agn_pdf}\n\\end{figure}\n\n\\begin{figure}[ht!]\n\\begin{center}\n\\includegraphics[scale=0.39]{allagn_L3_sf.png}\n\\includegraphics[scale=0.39]{allagn_L2_sf.png}\n\\includegraphics[scale=0.39]{allagn_L1_sf.png}\n\\end{center}\n\\caption{Same as Figure \\ref{fig:all_agn_2d}, but for the All-AGN-L3 (top), All-AGN-L2 (middle) and All-AGN-L1 (bottom) samples.}\n\\label{fig:brightandfaint_t1agn_2d}\n\\end{figure}\n\n\\begin{figure}[ht!]\n\\begin{center}\n\\includegraphics[scale=0.39]{allagn_L321_1dprofile_sf.png}\n\\end{center}\n\\caption{Same as Figure \\ref{fig:hetdext1agn_sdsst1agn}, but for the All-AGN-L3 (red), All-AGN-L2 (gray) and All-AGN-L1 (black) samples.}\n\\label{fig:bright_faint_t1agn_1d}\n\\end{figure}\n\n\\begin{figure}[ht!]\n\\begin{center}\n\\includegraphics[scale=0.42]{allagn_L321_LoS_sf.png}\n\\includegraphics[scale=0.42]{allagn_L321_Trans_sf.png}\n\\end{center}\n\\caption{LoS and Transverse {\\sc Hi} radial profiles of the All-AGN-L3, All-AGN-L2, and All-AGN-L1 sub-samples.
The top (bottom) figure presents the LoS (Transverse) {\\sc Hi} radial profiles of the All-AGN-L3, All-AGN-L2, and All-AGN-L1 sub-samples, shown by the red, gray, and black lines, respectively.\nThe meaning of the horizontal dashed lines in both the top and bottom figures is the same as in Figure \\ref{fig:ravoux+20}.}\n\\label{fig:allagn_L321_lostrans}\n\\end{figure}\n\n\\subsubsection{AGN Type Dependence} \\label{subsubsec:type_dependence}\n\nWe investigate the dependence of the {\\sc Hi} profiles on AGN type.\nTo remove the effects of the AGN luminosity dependence (Section \\ref{subsubsec:luminosity_dependence}), we make sub-samples of T1-AGN and T2-AGN with the same $L^\\mathrm{spec}_\\mathrm{1350}$ distribution, in the same manner as for the selection of the T1-AGN and T1-AGN(H) sub-samples in Section \\ref{subsec:average_profile}.\nThe top panel of Figure \\ref{fig:t1agnt2agn_pdf} presents the $L^\\mathrm{spec}_\\mathrm{1350}$ distributions of the T1-AGN and T2-AGN samples, while the bottom panel of Figure \\ref{fig:t1agnt2agn_pdf} shows those of the T1-AGN and T2-AGN sub-samples.\nThe sub-samples of T1-AGN and T2-AGN are composed of 10329 type-1 AGN and 1462 type-2 AGN, respectively.\nWe derive the 2D {\\sc Hi} profiles from the T1-AGN and T2-AGN sub-samples. The profiles are presented in Figure \\ref{fig:t1agnt2agn_2d}.\nWe find $17.7\\sigma$ and $7.9\\sigma$ detections at the source center position (0,0) for the T1-AGN and T2-AGN sub-samples, respectively.\nWe calculate the {\\sc Hi} radial profiles from the 2D {\\sc Hi} profiles of the T1-AGN and T2-AGN sub-samples. In Figure \\ref{fig:t1agnt2agn_1d}, we compare the {\\sc Hi} radial profiles of the T1-AGN and T2-AGN sub-samples.\nNo notable difference is found within the 1$\\sigma$ errors.\nThe peak value of $A_\\mathrm{F}$ of the T2-AGN sub-sample is within the $1\\sigma$ error of the peak value of the T1-AGN sub-sample near the source position.\n\nTo compare the {\\sc Hi} distributions of type-1 and type-2 AGN in the LoS and transverse directions, we derive the LoS and Transverse {\\sc Hi} radial profiles of the T1-AGN and T2-AGN sub-samples and present the profiles in Figure \\ref{fig:t1agnt2agn_1d_LosTrans}.\nSimilar to the trend of the {\\sc Hi} radial profiles, the peak values of the LoS and Transverse {\\sc Hi} radial profiles for the T1-AGN and T2-AGN sub-samples are not significantly different.\nThe comparable peak values of the LoS and Transverse {\\sc Hi} radial profiles suggest that the systematically different orientations and opening angles of the dusty tori of type-1 and type-2 AGN do not significantly affect the {\\sc Hi} distribution on scales $\\lesssim15$ $h^{-1}\\mathrm{cMpc}$.\n\nFor the {\\sc Hi} radial profiles on scales $>15$ $h^{-1}\\mathrm{cMpc}$, we find that the $A_\\mathrm{F}$ value of the LoS {\\sc Hi} radial profile of the T1-AGN sub-sample is greater than that of the T2-AGN sub-sample, beyond the 1$\\sigma$ error bars, on scales around $25$ $h^{-1}\\mathrm{cMpc}$.\nThis result may hint that the type-2 AGN have a stronger ionizing effect at $25$ $h^{-1}\\mathrm{cMpc}$ than the type-1 AGN.\nThe interpretation of the ionization on large scales is presented in Section \\ref{sec:fr+13}.\n\n\\begin{figure}[ht!]\n\\begin{center}\n\\includegraphics[scale=0.6]{t1agn_t2agn_pdf_sf.png}\n\\end{center}\n\\caption{Same as Figure \\ref{fig:pdf_sdsst1agn_hetdext1agn}, but for the T1-AGN (blue) and T2-AGN (red) samples.}
\\label{fig:t1agnt2agn_pdf}\n\\end{figure}\n\n\\begin{figure}[ht!]\n\\begin{center}\n\\includegraphics[scale=0.4]{t1agn_sf.png}\n\\includegraphics[scale=0.4]{t2agn_sf.png}\n\\end{center}\n\\caption{Same as Figure \\ref{fig:all_agn_2d}, but for the T1-AGN (top figure) and T2-AGN (bottom figure) sub-samples.}\n\\label{fig:t1agnt2agn_2d}\n\\end{figure}\n\n\\begin{figure}[ht!]\n\\begin{center}\n\\includegraphics[scale=0.4]{t1agn_t2agn_1d_sf.png}\n\\end{center}\n\\caption{Same as Figure \\ref{fig:hetdext1agn_sdsst1agn}, but for the T1-AGN (blue) and T2-AGN (red) sub-samples and the Galaxy (gray) sample.}\n\\label{fig:t1agnt2agn_1d}\n\\end{figure}\n\n\\begin{figure}[ht!]\n\\begin{center}\n\\includegraphics[scale=0.45]{T1T2AGN_LoS.png}\n\\includegraphics[scale=0.45]{T1T2AGN_Trans.png}\n\\end{center}\n\\caption{Same as Figure \\ref{fig:allagn_L321_lostrans}, but for the T1-AGN and T2-AGN sub-samples.}\n\\label{fig:t1agnt2agn_1d_LosTrans}\n\\end{figure}\n\n\\subsection{Average HI Profiles around Galaxies} \n\\label{subsec:average_profile_gy}\n\nWe derive the 2D {\\sc Hi} profile at the positions of the Galaxy sample sources in the same manner as for the All-AGN sample sources. Figure \\ref{fig:all_hetdex_2d} presents the 2D {\\sc Hi} profile of the Galaxy sample sources. There is a clear $10.5\\sigma$ detection at the source position of (0,0).\nSimilarly, we calculate the {\\sc Hi} radial profile from the 2D {\\sc Hi} profile of the Galaxy sample (Figure \\ref{fig:galaxy_t1agnh_1dprofile}).\nThe {\\sc Hi} radial profile of the Galaxy sample shows a trend similar to that of the All-AGN sample. For both the Galaxy and All-AGN samples, the {\\sc Hi} radial profile decreases towards large scales, reaching $A_{\\rm F}\\sim 0$.\n\nIn Figure \\ref{fig:all_hetdex_2d}, we find that the {\\sc Hi} distributions in the LoS and transverse directions are different.\nA similar difference between the values of $A_{\\rm F}$ in the LoS and transverse directions of 2D {\\sc Hi} profiles is claimed by \\cite{mukae+20}. \nTo investigate the difference between the {\\sc Hi} distributions in the LoS and transverse directions for the Galaxy sample, we present the LoS and Transverse {\\sc Hi} radial profiles of the Galaxy sample in Figure \\ref{fig:galaxy_t1agnh_LT}.\nWe find that the LoS and Transverse {\\sc Hi} radial profiles of the Galaxy sample show different gradients of decreasing $A_\\mathrm{F}$ at scales of $D \\sim 3.75-50$ $h^{-1}$cMpc.\nThis difference can be explained by the gas version of the Kaiser effect that we discussed in Section \\ref{subsec:AGN_LoSandTrans_profile}.\nIn the LoS {\\sc Hi} radial profile of the Galaxy sample, we find that the $A_\\mathrm{F}$ values are negative on scales of $D = 25-70$ $h^{-1}$cMpc, which is similar to the negative $A_\\mathrm{F}$ values we found on large scales in the LoS {\\sc Hi} radial profile of the All-AGN sample.\nWe discuss these negative $A_\\mathrm{F}$ values of the LoS {\\sc Hi} radial profile of the Galaxy sample in Section \\ref{sec:fr+13}.\n\n\\begin{figure}[ht!]\n\\begin{center}\n\\includegraphics[scale=0.39]{galaxy_sf.png}\n\\end{center}\n\\caption{2D {\\sc Hi} profile of the Galaxy sample sources. The color map indicates the $A_{\\rm F}$ ($\\delta_\\mathrm{F}$) values of each cell of the 2D {\\sc Hi} profile. The dotted lines show confidence level contours of $3\\sigma$ and $6\\sigma$.
The solid line presents the contour where $A_{\\rm F}$ $=0.01$ ($\\delta_\\mathrm{F}$ $=-0.01$).}\n\\label{fig:all_hetdex_2d}\n\\end{figure}\n\n\\subsubsection{Galaxy-AGN Dependence} \\label{subsec:galaxies_agn_dependence}\n\nWe derive the 2D {\\sc Hi} profile for the T1-AGN(H) sample constructed from the HETDEX data.\nFigures \\ref{fig:all_hetdex_2d} and \\ref{fig:galaxyandt1agnh} show the 2D {\\sc Hi} profiles of the Galaxy and T1-AGN(H) samples. \nWe find a $7.6\\sigma$ detection around the source position for the T1-AGN(H) sample.\nFigure \\ref{fig:galaxy_t1agnh_1dprofile} presents the {\\sc Hi} radial profiles of the Galaxy and T1-AGN(H) samples derived from the 2D {\\sc Hi} profiles. We also compare the {\\sc Hi} radial profiles of the Galaxy sample with those of T1-AGN and T2-AGN in Figure \\ref{fig:t1agnt2agn_1d}.\nIn the {\\sc Hi} radial profiles of the Galaxy and T1-AGN(H) samples, the $A_\\mathrm{F}$ values increase toward the source position $D=0$.\nIn Figure \\ref{fig:galaxy_t1agnh_1dprofile} (\\ref{fig:t1agnt2agn_1d}), we find that the $A_\\mathrm{F}$ values of T1-AGN(H) (T1-AGN and T2-AGN) are larger than those of the galaxies at $\\lesssim 20$ $h^{-1}$ cMpc. These $A_\\mathrm{F}$ excesses of the AGN may be explained by the host dark matter halos of the AGN being more massive than those of the galaxies.\n\\citet{momose+21} also investigate the {\\sc Hi} radial profile around AGN, and find an {\\sc Hi} absorption decrement at the source center ($\\lesssim5$ $h^{-1}$cMpc). They argue that this trend can be explained by the proximity effect. On the other hand, their result is different from ours, in which the $A_{\\rm F}$ values monotonically increase with decreasing distance.\nThis difference between our and \\citeauthor{momose+21}'s results is produced by the fact that our results at $\\lesssim 10$ $h^{-1}$ cMpc are largely affected by the {\\sc Hi} absorption at $\\sim 10$ $h^{-1}$ cMpc due to the coarse resolution of our {\\sc Hi} tomography map, 15 $h^{-1}$ cMpc, in contrast with the $2.5$ $h^{-1}$ cMpc resolution of \\cite{momose+21}.\n\nWe then derive the LoS and Transverse {\\sc Hi} radial profiles of the T1-AGN(H) sample.\nThe results are shown in Figure \\ref{fig:galaxy_t1agnh_LT}.\nSimilar to the LoS and Transverse {\\sc Hi} radial profiles of the All-AGN and Galaxy samples, the gas version of the Kaiser effect and the negative $A_\\mathrm{F}$ values in the LoS direction on scales beyond $D=25$ h$^{-1}$cMpc are also found for the T1-AGN(H) sample.\n\n\\begin{figure}[ht!]\n\\begin{center}\n\\includegraphics[scale=0.39]{t1agnh_2dprofile.png}\n\\end{center}\n\\caption{Same as Figure \\ref{fig:brightandfaint_t1agn_2d}, but for the T1-AGN(H) sample. }\n\\label{fig:galaxyandt1agnh}\n\\end{figure}\n\n\\begin{figure}[ht!]\n\\begin{center}\n\\includegraphics[scale=0.39]{galaxy_t1agnh_1dprofile_sf.png}\n\\end{center}\n\\caption{Same as Figure \\ref{fig:bright_faint_t1agn_1d}, but for the Galaxy (gray) and T1-AGN(H) (black) samples.}\n\\label{fig:galaxy_t1agnh_1dprofile}\n\\end{figure}\n\n\\begin{figure}[ht!]\n\\begin{center}\n\\includegraphics[scale=0.45]{galaxy_t1agnh_LT.png}\n\\end{center}\n\\caption{Same as Figure \\ref{fig:all_agn_LT}, but for the Galaxy and T1-AGN(H) samples.}\n\\label{fig:galaxy_t1agnh_LT}\n\\end{figure}\n\n\\subsection{Comparison with Theoretical Models}\\label{sec:fr+13}\n\nTheoretical models of the {\\sc Hi} radial profiles around AGN are provided by \\citet{FR+13}.
They present their {\\sc Hi} radial profiles as a function of the LoS distance in the form of a cross-correlation function (CCF).\n\nWe first calculate the theoretical CCF for our All-AGN sample, following the definition of the CCF presented in \\citet{FR+13}.\n\\citet{FR+13} assume the linear cross-power spectrum of the QSOs and the Ly$\\alpha$ forest,\n\\begin{equation} \\label{eq:ccf0}\n P_{\\rm qF}(\\mathbf{k},z)=b_{\\rm q}(z)[1+\\beta_{\\rm q}(z) \\mu^2_{\\rm k}]b_{\\rm F}(z)[1+\\beta_{\\rm F}(z)\\mu^2_{\\rm k}]P_{\\rm L}(k,z),\n\\end{equation}\nwhere $P_{\\rm L}(k,z)$ is the linear matter power spectrum.\nHere $\\mu_{\\rm k}$ is the cosine of the angle between the Fourier mode and the LoS \\citep{Kaiser+87}. \nThe values of $b_{\\rm q}$ and $b_{\\rm F}$ ($\\beta_{\\rm q}$ and $\\beta_{\\rm F}$) are the bias factors (redshift-space distortion parameters) of the QSO and Ly$\\alpha$ densities, respectively.\n\nThe redshift distortion parameter of the QSOs obeys the relation $\\beta_{\\rm q}=f(\\Omega)\/b_{\\rm q}$, where $f(\\Omega)$ is the logarithmic derivative of the linear growth factor \\citep{Kaiser+87} and $b_{\\rm q}=3.8\\pm0.3$ \\citep{White+12}.\nWe use the constraint on the Ly$\\alpha$ forest, $b_{\\rm F}(1+\\beta_{\\rm F})=-0.336$ for $b_F \\propto (1+z)^{2.9}$, which is determined by observations of the Ly$\\alpha$ forest at $z\\simeq 2.25$ \\citep{Slosar+11}.\n\\citet{FR+13} estimate the CCF of QSOs by the Fourier transform of $P_{\\rm qF}$ \\citep{Hamilton+92}:\n\\begin{equation} \\label{eq:ccf1}\n \\xi({\\bf r})=\\xi_0(r)P_0(\\mu)+\\xi_2(r)P_2(\\mu)+\\xi_4(r)P_4(\\mu),\n\\end{equation}\nwhere $\\mu$ is the cosine of the angle between the position ${\\bf r}$ and the LoS in redshift space. The functions $P_0$, $P_2$, and $P_4$ are the Legendre polynomials, $P_0=1$, $P_2=(3\\mu^2-1)\/2$, and $P_4=(35\\mu^4-30\\mu^2+3)\/8$, respectively.\nThe functions $\\xi_0$, $\\xi_2$, and $\\xi_4$ are:\n\\begin{equation} \\label{eq:ccf2}\n \\xi_0(r)=b_{\\rm q} b_{\\rm F} [1+(\\beta_{\\rm q}+\\beta_{\\rm F})\/3+\\beta_{\\rm q}\\beta_{\\rm F}\/5]\\zeta(r),\n\\end{equation}\n\\begin{equation} \\label{eq:ccf3}\n \\xi_2(r)=b_{\\rm q} b_{\\rm F} [2\/3(\\beta_{\\rm q}+\\beta_{\\rm F})+4\/7\\beta_{\\rm q}\\beta_{\\rm F}][\\zeta(r)-\\bar{\\zeta}(r)],\n\\end{equation}\n\\begin{equation} \\label{eq:ccf4}\n \\xi_4(r)=8\/35\\, b_{\\rm q} b_{\\rm F} \\beta_{\\rm q}\\beta_{\\rm F} [\\zeta(r)+5\/2\\,\\bar{\\zeta}(r)-7\/2\\,\\bar{\\bar{\\zeta}}(r)].\n\\end{equation}\nThe function $\\zeta(r)$ is the standard CDM linear correlation function in real space \\citep{Bardeen+86,Hamilton+91}.\nThe functions $\\bar{\\zeta}(r)$ and $\\bar{\\bar{\\zeta}}(r)$ are given by:\n\\begin{equation} \\label{eq:ccf5}\n \\bar{\\zeta}(r) \\equiv 3r^{-3} \\int^r_0 \\zeta(s)s^2ds ,\n\\end{equation}\n\\begin{equation} \\label{eq:ccf6}\n \\bar{\\bar{\\zeta}}(r) \\equiv 5r^{-5} \\int^r_0 \\zeta(s)s^4ds.\n\\end{equation}\nHere we define\n\\begin{equation} \\label{eq:xi_prime}\n \\xi'({\\bf r}) \\equiv - \\xi({\\bf r}).\n\\end{equation}\nIn Figure \\ref{fig:allagn_model}, we present $D\\xi'$ as a function of the LoS distance for the model of \\citet{FR+13}, calculated for the mean overdensity within $15$ $h^{-1}$cMpc, corresponding to the spatial resolution of our observational results.
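\nFor a tabulated $\\zeta(r)$, Equations (\\ref{eq:ccf1})--(\\ref{eq:ccf6}) can be evaluated numerically as sketched below; the growth rate $f\\simeq0.96$ at $z\\sim2.3$ and the split of $b_{\\rm F}(1+\\beta_{\\rm F})=-0.336$ into $b_{\\rm F}=-0.18$ and $\\beta_{\\rm F}=0.87$ are our illustrative assumptions:\n\\begin{verbatim}\nimport numpy as np\n\ndef zeta_bar(zeta, r, power):\n    '''zeta_bar (power=2) or zeta_barbar (power=4); r grid starts near 0.'''\n    seg = 0.5 * (zeta[1:] * r[1:]**power\n                 + zeta[:-1] * r[:-1]**power) * np.diff(r)\n    integral = np.concatenate([[0.0], np.cumsum(seg)])\n    return (power + 1) * integral \/ r**(power + 1)\n\ndef ccf_model(zeta, r, mu, b_q=3.8, f=0.96, b_F=-0.18, beta_F=0.87):\n    beta_q = f \/ b_q                     # Kaiser relation beta_q = f\/b_q\n    zb, zbb = zeta_bar(zeta, r, 2), zeta_bar(zeta, r, 4)\n    xi0 = b_q * b_F * (1 + (beta_q + beta_F)\/3 + beta_q*beta_F\/5) * zeta\n    xi2 = b_q * b_F * (2\/3*(beta_q + beta_F) + 4\/7*beta_q*beta_F) * (zeta - zb)\n    xi4 = 8\/35 * b_q * b_F * beta_q * beta_F * (zeta + 2.5*zb - 3.5*zbb)\n    P2, P4 = (3*mu**2 - 1)\/2, (35*mu**4 - 30*mu**2 + 3)\/8\n    return xi0 + xi2*P2 + xi4*P4         # Eq. (eq:ccf1); xi' = -xi\n\\end{verbatim}\n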
To compare our observational measurements with the model CCF of \\citet{FR+13}, we calculate the value of $\\xi'$ for our All-AGN sample.\nThe value of $\\xi'$ in each cell, $\\xi'_{\\rm cell}$, is calculated by\n\\begin{equation} \\label{eq:cross-correlation}\n \\xi'_{\\rm cell} = \\frac{\\sum_{i\\in {\\rm cell}} \\omega_{i} A_{{\\rm F}i}}{\\sum_{i\\in {\\rm cell}} \\omega_{i}},\n\\end{equation}\nwhere $\\omega_{i}$ is the weight determined by the observational errors and the intrinsic variance of the Ly$\\alpha$ forest. The value of $\\omega_{i}$ is obtained by\n\\begin{equation} \\label{eq:weight}\n \\omega_{i} = \\left\\lbrack \\sigma^2_{\\rm F}(z_i)+\\frac{1}{\\langle S\/N \\rangle^2\\times \\langle F(z_i) \\rangle ^2 } \\right\\rbrack^{-1},\n\\end{equation}\nwhere $\\sigma^2_{\\rm F}(z_i)$ is the intrinsic variance of the Ly$\\alpha$ forest and $\\langle F(z_i) \\rangle$ is the cosmic average Ly$\\alpha$ transmission (Eq. \\ref{eq:mf}).\nWe adopt $\\langle S\/N \\rangle = 1.4$, which is the criterion of the background source selection (Section \\ref{subsec:bkagn}).\nThe intrinsic variance, $\\sigma^2_{\\rm F}(z_i)$, of the Ly$\\alpha$ forest taken from \\cite{FR+13} is:\n\\begin{equation} \\label{eq:intrinsic variance}\n \\sigma^2_{\\rm F}(z_i) = 0.065[(1+z_i)\/3.25]^{3.8}.\n\\end{equation}\n\nWe calculate $\\xi'$ with our All-AGN sample via Equations \\ref{eq:cross-correlation}, \\ref{eq:weight}, and \\ref{eq:intrinsic variance}, using the same binning sizes as those in \\cite{FR+13}.\nWe present $\\xi'$ multiplied by $D$ with the black squares in Figure \\ref{fig:allagn_model}.\nFor reference, we also derive $\\xi'$ for our Galaxy sample, shown by the blue triangles.
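\nPer separation bin, the estimator of Equations (\\ref{eq:cross-correlation})--(\\ref{eq:intrinsic variance}) reads, in schematic form (array names are illustrative):\n\\begin{verbatim}\nimport numpy as np\n\nMEAN_SN = 1.4   # <S\/N> threshold of the background source selection\n\ndef xi_prime_cell(A_F, z_pix):\n    '''Weighted mean of A_F for the pixels falling in one separation bin.'''\n    sigma_F2 = 0.065 * ((1.0 + z_pix) \/ 3.25)**3.8        # intrinsic variance\n    mean_F = np.exp(-0.001845 * (1.0 + z_pix)**3.924)     # Eq. (eq:mf)\n    w = 1.0 \/ (sigma_F2 + 1.0 \/ (MEAN_SN**2 * mean_F**2)) # Eq. (eq:weight)\n    return np.sum(w * A_F) \/ np.sum(w)                    # Eq. (eq:cross-correlation)\n\\end{verbatim}\n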
We calculate $\\xi{'}$ with our All-AGN sample via Equations \\ref{eq:cross-correlation}, \\ref{eq:weight}, and \\ref{eq:intrinsic variance}, using the same binning sizes as those in \\cite{FR+13}.\nWe present $\\xi{'}$ multiplied by $D$ with the black squares in Figure \\ref{fig:allagn_model}.\nFor reference, we also derive the $\\xi{'}$ for our Galaxy sample, shown by the blue triangles.\n\nIn Figure \\ref{fig:allagn_model}, we find that the $D\\xi{'}$ profile of our All-AGN sample shows a trend similar to that of the model predicted by \\citet{FR+13}.\nThe observational $D\\xi{'}$ profile of our All-AGN sample shows good agreement with the model $D\\xi{'}$ profile of \\cite{FR+13} at the scale of $D>30$ $h^{-1}$cMpc.\nAlthough the model $D\\xi{'}$ profile of \\cite{FR+13} is slightly higher than the $D\\xi{'}$ profiles of the observations at $\\gtrsim 60$ $h^{-1}$cMpc,\nthe general trend of the negative $D\\xi{'}$ profiles at $\\gtrsim 30$ $h^{-1}$cMpc is the same.\n\\cite{FR+13} suggest that the negative $D\\xi^{'}$ values at the large scale of $\\gtrsim 30$ $h^{-1}$cMpc are explained by ionization.\nIn the ionization model, \\cite{FR+13} assume the spectrum of the AGN at $D = 0$ with $L_\\nu \\propto \\nu^{- \\alpha}$, where $\\alpha = 1.5$ (1.0) for the frequency $\\nu$ over (below) the Lyman limit. The luminosity at $\\lambda=1420$ \\AA\\ is normalized to $L_\\nu= 3.1\\times10^{30}$ erg\\/s\\/Hz, which is taken from the mean luminosity of the SDSS data release 9 quasars.\nNo assumptions about AGN type are made in the models of \\cite{FR+13}.\nBased on the ionization model, \\cite{FR+13} calculate $\\xi$ for homogeneous gas irradiated by the AGN, and obtain the function\n\\begin{equation} \\label{eq:ionization}\n \\xi= 0.0065(20\\; h^{-1}{\\rm cMpc}\\/D)^2.\n\\end{equation}\nWith this $\\xi$ function, we calculate $D\\xi^{'}$, which is presented with the cyan dashed curve in Figure \\ref{fig:allagn_model}. The cyan dashed curve shows a plateau at $D\\geq40$ $h^{-1}{\\rm cMpc}$ with negative $D\\xi^{'}$ values that are comparable with the model $D\\xi{'}$ profile of \\cite{FR+13}. This indicates that the negative $D\\xi^{'}$ values originate from ionization by radiation, including hard radiation.\nSimilarly, the negative $D\\xi^{'}$ values of our All-AGN sample at the large scale of $\\gtrsim 40$ $h^{-1}$cMpc may be explained by such ionizing radiation.\nTo distinguish the large-scale negative $D\\xi^{'}$ values, which are referred to as the `ionized outskirts', from the proximity zone created by the proximity effect, we plot the observational CCF of AGN obtained by \\cite{momose+21} in Figure \\ref{fig:allagn_model}. The AGN CCF obtained by \\citeauthor{momose+21} shows a decreasing {\\sc Hi} absorption toward the source position ($D=0$ $h^{-1}$cMpc) caused by the proximity effect. Our findings indicate that the {\\sc Hi} radial profile of AGN has transitions from proximity zones ($\\lesssim$ a few $h^{-1}$cMpc) to the {\\sc Hi} structures ($\\sim 1-30$ $h^{-1}$cMpc) and the ionized outskirts ($\\gtrsim 30$\n$h^{-1}$cMpc). The hard radiation may pass through the {\\sc Hi} structures due to its small cross-section and ionize the {\\sc Hi} gas in the ionized outskirts. Because of the low recombination rate, the {\\sc Hi} gas remains ionized in the ionized outskirts.\n\nInterestingly, the $D\\xi{'}$ profile of our Galaxy sample also shows negative $D\\xi^{'}$ values at $\\gtrsim 30$ $h^{-1}$cMpc, which are similar to those of the model and of our All-AGN sample.\nThis result may suggest that the {\\sc Hi} gas at large scales ($\\gtrsim 20$ $h^{-1}$cMpc) around galaxies has been ionized.\nThe ionizing source causing the structure of negative $D\\xi^{'}$ values at the large scale may not be a single galaxy, but a group of galaxies within a radius of a few cMpc.\nRegions around galaxies are special, as galaxies are clustered together.\nGalaxies in this work are bright, with $M_{\\rm UV}<-22$ mag. Such galaxies can be hosted by massive haloes, and are likely to be distributed in overdense regions.\nIn such overdense regions, each galaxy can be surrounded by several satellite galaxies.\nAlthough it is difficult for a single galaxy to ionize the {\\sc Hi} gas on a scale of $\\gtrsim 20$ $h^{-1}$cMpc, a group of galaxies may have enough ionizing photons to ionize the {\\sc Hi} on this scale.\n\n\n\\begin{figure}[ht!]\n\\begin{center}\n\\includegraphics[scale=0.46]{allagn_model_mms.png}\n\\end{center}\n\\caption{Comparison between our All-AGN and Galaxy results and the models of \\cite{FR+13} in the LoS CCF ($\\xi^{'}$) multiplied by distance ($D$). The black and blue points are the results derived from the All-AGN and Galaxy samples, respectively. The orange curve is the LoS CCF of QSOs with the Ly$\\alpha$ forest derived by \\cite{FR+13}. The cyan dashed curve shows the radiative ionization effect taken from \\cite{FR+13}. The pink line presents the CCF of AGN obtained by \\cite{momose+21}. The gray shade presents the range of the {\\sc Hi} structure. Two white areas show the regions of the proximity zone and the ionized outskirts. 
The horizontal gray line indicates the cosmic average, where $D\\xi^{'}=0$.}\n\\label{fig:allagn_model}\n\\end{figure}\n\n\n\n\\section{Summary}\n\nWe reconstruct two 3D {\\sc Hi} tomography maps based on the Ly$\\alpha$ forests in the spectra of 14763 background QSOs from the SDSS survey with no signatures of damped Ly$\\alpha$ systems or broad absorption lines.\nThe maps cover the extended Fall and Spring fields defined by the HETDEX survey.\nThe spatial volumes of the reconstructed 3D {\\sc Hi} tomography maps are $2257 \\times 233 \\times 811$ $h^{\\rm -3}$cMpc$^3$ and $3475 \\times 1058 \\times 811$ $h^{\\rm -3}$cMpc$^3$.\nWe investigate the {\\sc Hi} distribution around galaxies and AGN with samples made from the HETDEX and SDSS survey results in our study field.\nOur results are summarized below.\n\\begin{itemize}\n \\item We derive the 2D {\\sc Hi} and {\\sc Hi} radial profiles of the All-AGN sample consisting of SDSS AGN. We find that the 2D {\\sc Hi} profile is more extended in the transverse direction than along the line of sight. In the {\\sc Hi} radial profile of the All-AGN sample, the values of {\\sc Hi} absorption, $A_{\\rm F}$, decrease toward large scales, approaching $A_{\\rm F} \\sim 0$.\n \n \\item We compare the {\\sc Hi} radial profiles derived from the T1-AGN and T1-AGN(H) sub-samples, whose $L^{\\rm spec}_{\\rm 1350}$ distributions are the same.\n \n We find that the {\\sc Hi} radial profile of the T1-AGN sub-sample agrees with that of the T1-AGN(H) sub-sample. \n This agreement suggests that the systematic uncertainty between the SDSS and the HETDEX survey results is negligible.\n \n \\item We examine the dependence of the {\\sc Hi} profile on AGN luminosity by deriving the 2D {\\sc Hi}, {\\sc Hi} radial, LoS {\\sc Hi} radial, and Transverse {\\sc Hi} radial profiles of the All-AGN-L3 (the faintest), All-AGN-L2, and All-AGN-L1 (the brightest) sub-samples.\n We find that the {\\sc Hi} absorption is the greatest in the lowest-luminosity AGN sub-sample, and that the {\\sc Hi} absorption becomes weaker with increasing AGN luminosity.\n This result suggests that, on average, if the density of {\\sc Hi} gas around the bright AGN is greater than (or comparable to) that of the faint AGN, the ionization fraction of {\\sc Hi} gas around bright AGN is higher than that around faint AGN.\n \n \\item We investigate the AGN type dependence of the {\\sc Hi} distribution around type-1 and type-2 AGN by the 2D {\\sc Hi}, {\\sc Hi} radial, LoS {\\sc Hi} radial, and Transverse {\\sc Hi} radial profiles extracted from the T1-AGN and T2-AGN sub-samples with the same $L^{\\rm spec}_{\\rm 1350}$ distributions.\n The comparison between the {\\sc Hi} radial profiles of the T1-AGN and T2-AGN sub-samples indicates that the {\\sc Hi} absorption around the T2-AGN sub-sample is comparable to that of the T1-AGN sub-sample on average.\n This trend suggests that the different opening angles and orientations of the dusty torus for type-1 and type-2 AGN do not have a significant impact on the Mpc-scale {\\sc Hi} distribution.\n\n \\item We compare the {\\sc Hi} distributions around galaxies and type-1 AGN with the 2D {\\sc Hi}, {\\sc Hi} radial, LoS {\\sc Hi} radial, and Transverse {\\sc Hi} radial profiles derived from the Galaxy and T1-AGN(H) samples.\n The {\\sc Hi} absorption values, $A_{\\rm F}$, around the T1-AGN(H) sample are larger than those of the Galaxy sample on average.\n This result may be caused by the dark matter halos of type-1 AGN having larger masses than those of galaxies on
average.\n \n \\item We find that the LoS {\\sc Hi} radial profiles of the Galaxy and All-AGN samples show negative $A_{\\rm F}$ values, which indicate weak {\\sc Hi} absorption, at scales over $\\sim 30$ $h^{-1}$cMpc. We extract the $D\\xi{'}$ profiles of our Galaxy and All-AGN samples to compare with the model CCF of AGN from \\cite{FR+13}. The general trend of the negative $D\\xi{'}$ at $\\gtrsim 30$ $h^{-1}$cMpc is the same as that of the model CCF.\n These results suggest that the {\\sc Hi} radial profile of AGN has transitions from proximity zones ($\\lesssim$ a few $h^{-1}$cMpc) to the {\\sc Hi}-rich structures ($\\sim 1-30$ $h^{-1}$cMpc) and the ionized outskirts ($\\gtrsim 30$ $h^{-1}$cMpc).\n \n \n\n\\end{itemize}\n\n\n\\section*{Acknowledgements}\nWe thank Nobunari Kashikawa, Khee-Gan Lee, Akio Inoue, Rikako Ishimoto, Shengli Tang, Yongming Liang, Rieko Momose, and Koki Kakiichi for giving us helpful comments.\n\nHETDEX is led by the University of Texas at Austin McDonald Observatory and Department of Astronomy with participation from the Ludwig-Maximilians-Universit\u00e4t M\u00fcnchen, Max-Planck-Institut f\u00fcr Extraterrestrische Physik (MPE), Leibniz-Institut f\u00fcr Astrophysik Potsdam (AIP), Texas A\\&M University, Pennsylvania State University, Institut f\u00fcr Astrophysik G\u00f6ttingen, The University of Oxford, Max-Planck-Institut f\u00fcr Astrophysik (MPA), The University of Tokyo and Missouri University of Science and Technology. In addition to Institutional support, HETDEX is funded by the National Science Foundation (grant AST-0926815), the State of Texas, the US Air Force (AFRL FA9451-04-2- 0355), and generous support from private individuals and foundations.\nThe observations were obtained with the Hobby-Eberly Telescope (HET), which is a joint project of the University of Texas at Austin, the Pennsylvania State University, Ludwig-Maximilians-Universit\u00e4t M\u00fcnchen, and Georg-August-Universit\u00e4t G\u00f6ttingen. The HET is named in honor of its principal benefactors, William P. Hobby and Robert E. Eberly.\nThe authors acknowledge the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for providing high performance computing, visualization, and storage resources that have contributed to the research results reported within this paper. URL: http:\/\/www.tacc.utexas.edu\n\nVIRUS is a joint project of the University of Texas at Austin,\nLeibniz-Institut f\\\"ur Astrophysik Potsdam (AIP), Texas A\\&M University\n(TAMU), Max-Planck-Institut f\\\"ur Extraterrestrische Physik (MPE),\nLudwig-Maximilians-Universit\\\"at Muenchen, Pennsylvania State\nUniversity, Institut f\\\"ur Astrophysik G\\\"ottingen, University of Oxford,\nand the Max-Planck-Institut f\\\"ur Astrophysik (MPA). In addition to\nInstitutional support, VIRUS was partially funded by the National\nScience Foundation, the State of Texas, and generous support from\nprivate individuals and foundations.\n\nThis work is supported in part by MEXT\/JSPS KAKENHI Grant Number 21H04489 (HY), JST FOREST Program, Grant Number JP-MJFR202Z (HY).\n\nK. M. acknowledges financial support from the Japan Society for the Promotion of Science (JSPS) through KAKENHI grant No. 
20K14516.\n\nThis paper is supported by World Premier International\nResearch Center Initiative (WPI Initiative), MEXT, Japan, the\njoint research program of the Institute of Cosmic Ray Research (ICRR), the University of Tokyo, and KAKENHI (19H00697,\n20H00180, and 21H04467) Grant-in-Aid for Scientific\nResearch (A) through the Japan Society for the Promotion of\nScience.\n\n\n\\clearpage\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nAttributed networks are ubiquitous in a myriad of high-impact domains, ranging from social media networks and academic networks to protein-protein interaction networks. In contrast to conventional plain networks where only pairwise node dependencies are observed, nodes in attributed networks are often affiliated with a rich set of attributes. For example, in scientific collaboration networks, researchers collaborate and are distinguished from others by their unique research interests; in social networks, users interact and communicate with others and also post personalized content. It has been widely observed that there exists a strong correlation among the attributes of linked nodes~\\cite{shalizi2011homophily,pfeiffer2014attributed}. The root cause of these correlations can be attributed to social influence and the homophily effect in social science theories~\\cite{marsden1993network,mcpherson2001birds}. Also, many real-world applications, such as node classification, community detection, topic modeling and anomaly detection~\\cite{jian2017toward,yang2009combining,li2017toward,li2017residual,he2017modeling}, have shown significant improvements by modeling such correlations.\n\nNetwork embedding~\\cite{chen2007directed,perozzi2014deepwalk,tang2015line,cao2015grarep,chang2015heterogeneous,grover2016node2vec,huang2017label,huang2017accelerated,qu2017attention} has attracted a surge of research attention in recent years. The basic idea is to preserve the node proximity in the embedded Euclidean space, based on which the performance of various network mining tasks such as node classification~\\cite{aggarwal2011node,bhagat2011node}, community detection~\\cite{tang2008community,yang2009combining}, and link prediction~\\cite{liben2007link,wang2011human,barbieri2014follow} can be enhanced. However, the vast majority of existing work is predominantly designed for plain networks. Such methods inevitably ignore the node attributes that could be potentially complementary in learning better embedding representations, especially when the network suffers from high sparsity. In addition, a fundamental assumption behind existing network embedding methods is that networks are static and given a priori. Nonetheless, most real-world networks are intrinsically dynamic with addition\/deletion of edges and nodes; examples include co-author relations between scholars in an academic network and friendships among users in a social network. Meanwhile, similar to the network structure, node attributes also change naturally such that new content patterns may emerge and outdated content patterns will fade. For example, humanitarian and disaster relief related topics become popular on social media sites after earthquakes as users continuously post related content. Consequently, other topics may receive less public interest. 
In this paper, we refer to such networks, with both topology and node attribute value changes, as \\emph{dynamic attributed networks}.\n\nDespite the prevalence of dynamic attributed networks in real-world applications, studies analyzing and mining these networks are rather limited. One natural question to ask is: when attributed networks evolve, how can we correct and adjust the staleness of the embedding results for network analysis? Answering this question will shed light on the understanding of their evolving nature. However, dynamic attributed network embedding remains a daunting task, mainly for the following reasons: (1) Even though network topology and node attributes are two distinct data representations, they are inherently correlated. In addition, the raw data representations could be noisy and even incomplete, individually. Hence, it is of paramount importance to seek a noise-resilient consensus embedding to capture their individual properties and correlations; (2) Applying offline embedding methods from scratch at each time step is time-consuming and cannot capture the emerging patterns in a timely manner. It necessitates the design of an efficient online algorithm that can give embedding representations promptly.\n\n\\begin{figure*}[!htbp]\n\\centering\n\\includegraphics[width=0.9\\textwidth]{DANE.eps}\n\\caption{An illustration of the proposed dynamic attributed network embedding framework - DANE. At time step $t$, DANE performs spectral embedding on the network structure $\\mat{A}$ and node attributes $\\mat{X}$, and obtains two embeddings $\\mat{Y}_{\\mat{A}}$ and $\\mat{Y}_{\\mat{X}}$. Afterwards, DANE maximizes their correlation for a consensus embedding representation $\\mat{Y}$. At the following time step $t+1$, the network is characterized by both topology and attribute value changes $\\mat{\\Delta}\\mat{A}$ and $\\mat{\\Delta}\\mat{X}$ (the changes are highlighted in orange). DANE leverages matrix perturbation theory to update $\\mat{Y}_{\\mat{A}}$ and $\\mat{Y}_{\\mat{X}}$, and gives the new consensus embedding $\\mat{Y}$.}\n\\label{fig:framework}\n\\end{figure*}\n\nTo tackle the aforementioned challenges, we propose a novel embedding framework for dynamic attributed networks. The main contributions can be summarized as follows:\n\\begin{itemize}\n\\item \\textbf{\\emph{Problem Formulations}}: we formally define the problem of dynamic attributed network embedding. The key idea is to initiate an offline model at the very beginning, based on which an online model is presented to maintain the freshness of the end attributed network embedding results.\n\\item \\textbf{\\emph{Algorithms and Analysis}}: we propose a novel framework - DANE for dynamic attributed network embedding. Specifically, we introduce an offline embedding method as a base model to preserve node proximity in terms of both network structure and node attributes for a consensus embedding representation in a robust way. Then, to obtain an updated embedding representation in a timely manner when both the network structure and attributes drift, we present an online model to update the consensus embedding with matrix perturbation theory. We also theoretically analyze its time complexity and show its superiority over offline methods.\n\\item \\textbf{\\emph{Evaluations}}: we perform extensive experiments on both synthetic and real-world attributed networks to corroborate the efficacy of the proposed framework on two network mining tasks (one unsupervised and one supervised). 
Also, we show its efficiency by comparing it with other baseline methods and its offline counterpart. In particular, our experimental results show that the proposed method outperforms the best competitors in terms of both clustering and classification performance. Most importantly, it is much faster than competitive offline embedding methods.\n\\end{itemize}\n\nThe rest of this paper is organized as follows. The problem statement of dynamic attributed network embedding is introduced in Section 2. Section 3 presents the proposed framework DANE with analysis. Experiments on synthetic and real datasets are presented in Section 4 with discussions. Section 5 briefly reviews related work. Finally, Section 6 concludes the paper and envisions future work.\n\n\n\n\\section{Problem Definition}\nWe first summarize some notations used in this paper. Following commonly used notations, we use bold uppercase characters for matrices (e.g., $\\mat{A}$), bold lowercase characters for vectors (e.g., $\\mat{a}$), and normal lowercase characters for scalars (e.g., $a$). Also, we represent the $i$-th row of matrix $\\mat{A}$ as $\\mat{A}(i,:)$, the $j$-th column as $\\mat{A}(:,j)$, the ($i,j$)-th entry as $\\mat{A}(i,j)$, the transpose of $\\mat{A}$ as $\\mat{A}'$, and the trace of $\\mat{A}$ as $tr(\\mat{A})$ if it is a square matrix. $\\mat{1}$ denotes a vector whose elements are all 1 and $\\mat{I}$ denotes the identity matrix. The main symbols used throughout this paper are listed in Table~\\ref{table:symbols}.\n\\begin{table}[!htbp]\n\\small\n\\begin{tabular}{|c|c|} \\hline\nNotations& Definitions or Descriptions \\\\ \\hline \\hline\n$\\mathcal{G}^{(t)}$ & attributed network at time step $t$ \\\\ \\hline\n$\\mathcal{G}^{(t+1)}$ & attributed network at time step $t+1$ \\\\ \\hline\n$\\mat{A}^{(t)}$ & adjacency matrix for the network structure in $\\mathcal{G}^{(t)}$\\\\ \\hline\n$\\mat{X}^{(t)}$ & attribute information in $\\mathcal{G}^{(t)}$\\\\ \\hline\n$\\mat{A}^{(t+1)}$ & adjacency matrix for the network structure in $\\mathcal{G}^{(t+1)}$\\\\ \\hline\n$\\mat{X}^{(t+1)}$ & attribute information in $\\mathcal{G}^{(t+1)}$\\\\ \\hline\n$\\Delta\\mat{A}$ & change of adjacency matrix between time steps $t$ and $t+1$ \\\\ \\hline\n$\\Delta\\mat{X}$ & change of attribute values between time steps $t$ and $t+1$ \\\\ \\hline\n$n$ & number of instances (nodes) in $\\mathcal{G}^{(t)}$ \\\\ \\hline\n$d$ & number of attributes in $\\mathcal{G}^{(t)}$\\\\ \\hline\n$k$ & embedding dimension for network structure or attributes \\\\ \\hline\n$l$ & final consensus embedding dimension \\\\ \\hline\n\\end{tabular}\n\\caption{Symbols.}\n\\vspace{-1\\baselineskip}\n\\label{table:symbols}\n\\end{table}\n\nLet $\\mathcal{U}^{(t)}=\\{u_{1},u_{2},...,u_{n}\\}$ denote a set of $n$ nodes in the attributed network $\\mathcal{G}^{(t)}$ at time step $t$. We use the adjacency matrix $\\mat{A}^{(t)}\\in \\mathbb{R}^{n\\times n}$ to represent the network structure of $\\mathcal{U}^{(t)}$. In addition, we assume that nodes are affiliated with $d$-dimensional attributes $\\mathcal{F}=\\{f_{1},f_{2},...,f_{d}\\}$ and $\\mat{X}^{(t)}\\in \\mathbb{R}^{n\\times d}$ denotes the node attributes. At the following time step, the attributed network is characterized by both topology and content drift such that new\/old edges and nodes may be included\/deleted, and node attribute values could also change. 
We use $\\Delta\\mat{A}$ and $\\Delta\\mat{X}$ to denote the network and attribute value changes between two consecutive time steps $t$ and $t+1$, respectively. Following the settings of~\\cite{tong2008colibri}, and for the ease of presentation, we assume the number of nodes to be constant over time, but our method can be naturally extended to deal with node addition\/deletion scenarios. As mentioned earlier, node attributes are complementary in mitigating the network sparsity for better embedding representations. Nonetheless, employing offline embedding methods repeatedly in a dynamic environment is time-consuming and cannot capture the emerging\/fading patterns promptly, especially when the networks are of large scale. Therefore, developing an efficient online embedding algorithm upon an offline model is fundamentally important for dynamic network analysis, and could also benefit many real-world applications. Formally, we define the dynamic attributed network embedding problem as two sub-problems as follows. The workflow of the proposed framework DANE is shown in Figure~\\ref{fig:framework}.\n\n\\begin{problem}{The offline model of DANE at time step $t$: given network topology $\\mat{A}^{(t)}$ and node attributes $\\mat{X}^{(t)}$; output the attributed network embedding $\\mat{Y}^{(t)}$ for all nodes.}\n\\label{problem:problem1}\n\\end{problem}\n\n\\begin{problem}{The online model of DANE at time step $t+1$: given network topology $\\mat{A}^{(t+1)}$ and node attributes $\\mat{X}^{(t+1)}$, and the intermediate embedding results at time step $t$; output the attributed network embedding $\\mat{Y}^{(t+1)}$ for all nodes.}\n\\label{problem:problem2}\n\\end{problem}\n\n\\section{The Proposed Framework - DANE}\nIn this section, we first present an offline model that works in a static setting to tackle Problem~\\ref{problem:problem1} by finding a consensus embedding representation. Then, to tackle Problem~\\ref{problem:problem2}, we introduce an online model that provides a fast solution to update the consensus embedding on the fly. Finally, we analyze the computational complexity of the online model and show its superiority over the offline model.\n\\subsection{DANE: Offline Model}\nNetwork topology and node attributes in attributed networks are presented in different representations. Typically, either of these two representations could be \\emph{incomplete} and \\emph{noisy}, presenting great challenges to embedding representation learning. For example, social networks are very sparse, as a large number of users only have a limited number of links~\\cite{adamic2000power}. Thus, network embedding could be jeopardized, as links are inadequate to provide enough node proximity information. Fortunately, rich node attributes are readily available and could be potentially helpful in mitigating the network sparsity to find better embeddings. Hence, it is desirable to make these two representations compensate each other for a consensus embedding. However, as mentioned earlier, both representations could be noisy, and the existence of noise could degrade the learning of the consensus embedding. 
This motivates us to reduce the noise of these two raw data representations before learning the consensus embedding.\n\nLet $\\mat{A}^{(t)}\\in\\mathbb{R}^{n\\times n}$ be the adjacency matrix of the attributed network at time step $t$ and $\\mat{D}_{\\mat{A}}^{(t)}$ be the diagonal degree matrix with $\\mat{D}_{\\mat{A}}^{(t)}(i,i)=\\sum_{j=1}^{n}\\mat{A}^{(t)}(i,j)$; then $\\mat{L}_{\\mat{A}}^{(t)}=\\mat{D}_{\\mat{A}}^{(t)}-\\mat{A}^{(t)}$ is a Laplacian matrix. According to spectral theory~\\cite{belkin2001laplacian,von2007tutorial}, by mapping each node in the network to a $k$-dimensional embedded space, i.e., $\\mat{y}_{i}\\in \\mathbb{R}^{k}$ ($k\\ll n$), the noise in the network can be substantially reduced. A rational choice of the embedding $\\mat{Y}_{\\mat{A}}^{(t)}=[\\mat{y}_{1},\\mat{y}_{2},...,\\mat{y}_{n}]'\\in\\mathbb{R}^{n\\times k}$ is to minimize the loss $\\frac{1}{2}\\sum_{i,j}\\mat{A}^{(t)}(i,j)||\\mat{y}_{i}-\\mat{y}_{j}||_{2}^{2}$, which ensures that connected nodes are close to each other in the embedded space. In this case, the problem boils down to solving the generalized eigen-problem $\\mat{L}_{\\mat{A}}^{(t)}\\mat{a}=\\lambda\\mat{D}_{\\mat{A}}^{(t)}\\mat{a}$. Let $\\mat{a}_{1},\\mat{a}_{2},...,\\mat{a}_{n}$ be the eigenvectors corresponding to the eigenvalues $0=\\lambda_{1}\\leq\\lambda_{2}\\leq...\\leq\\lambda_{n}$. It is easy to verify that $\\mat{1}$ is the only eigenvector for the eigenvalue $\\lambda_{1}=0$. Then the $k$-dimensional embedding $\\mat{Y}_{\\mat{A}}^{(t)}\\in\\mathbb{R}^{n\\times k}$ of the network structure is given by the top-$k$ eigenvectors starting from $\\mat{a}_{2}$, i.e., $\\mat{Y}_{\\mat{A}}^{(t)}=[\\mat{a}_{2},...,\\mat{a}_{k},\\mat{a}_{k+1}]$. For ease of presentation, in the remainder of the paper, we refer to these $k$ eigenvectors and their eigenvalues as the top-$k$ eigenvectors and eigenvalues, respectively. Akin to the network structure, noise in the node attributes can be reduced in a similar fashion. Specifically, we first normalize the attributes of each node and obtain the cosine similarity matrix $\\mat{W}^{(t)}$. Afterwards, we obtain the top-$k$ eigenvectors $\\mat{Y}_{\\mat{X}}^{(t)}=[\\mat{b}_{2},...,\\mat{b}_{k+1}]$ of the generalized eigen-problem corresponding to $\\mat{W}^{(t)}$.
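For illustration, the spectral step above can be sketched in a few lines of Python. This is only a minimal sketch, assuming a sparse adjacency matrix with no isolated nodes; it is not the exact implementation used in our experiments:\n\\begin{verbatim}\nimport numpy as np\nimport scipy.sparse as sp\nfrom scipy.sparse.linalg import eigsh\n\ndef spectral_embedding(A, k):\n    # Generalized eigen-problem L a = lambda D a with L = D - A;\n    # assumes all degrees are positive so that D is positive-definite.\n    deg = np.asarray(A.sum(axis=1)).ravel()\n    D = sp.diags(deg).tocsc()\n    L = sp.csc_matrix(D - A)\n    vals, vecs = eigsh(L, k=k + 1, M=D, which='SM')\n    order = np.argsort(vals)\n    # Drop the trivial eigenvector of lambda_1 = 0; keep a_2, ..., a_{k+1}.\n    return vals[order[1:]], vecs[:, order[1:]]\n\\end{verbatim}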
With the two intermediate embeddings $\\mat{Y}_{\\mat{A}}^{(t)}$ and $\\mat{Y}_{\\mat{X}}^{(t)}$ at hand, the noisy data problem is alleviated, and we now take advantage of them to seek a consensus embedding. However, since they are obtained individually, these two embeddings may not be compatible, and in the worst case, they may be independent of each other. To capture their interdependency and to make them compensate each other, we propose to maximize their correlations (or equivalently, minimize their disagreements)~\\cite{hardoon2004canonical}. In particular, we seek two projection vectors $\\mat{p}_{\\mat{A}}^{(t)}$ and $\\mat{p}_{\\mat{X}}^{(t)}$ such that the correlation of $\\mat{Y}_{\\mat{A}}^{(t)}$ and $\\mat{Y}_{\\mat{X}}^{(t)}$ is maximized after projection. It is equivalent to solving the following optimization problem:\n\\begin{equation}\n\\begin{split}\n&\\max_{\\mat{p}_{\\mat{A}}^{(t)},\\mat{p}_{\\mat{X}}^{(t)}}\\mat{p}_{\\mat{A}}^{(t)'}\\mat{Y}_{\\mat{A}}^{(t)'}\\mat{Y}_{\\mat{A}}^{(t)}\\mat{p}_{\\mat{A}}^{(t)}+\\mat{p}_{\\mat{A}}^{(t)'}\\mat{Y}_{\\mat{A}}^{(t)'}\\mat{Y}_{\\mat{X}}^{(t)}\\mat{p}_{\\mat{X}}^{(t)}\\\\\n&+\\mat{p}_{\\mat{X}}^{(t)'}\\mat{Y}_{\\mat{X}}^{(t)'}\\mat{Y}_{\\mat{A}}^{(t)}\\mat{p}_{\\mat{A}}^{(t)}+\\mat{p}_{\\mat{X}}^{(t)'}\\mat{Y}_{\\mat{X}}^{(t)'}\\mat{Y}_{\\mat{X}}^{(t)}\\mat{p}_{\\mat{X}}^{(t)}.\\\\\n&\\mbox{s.t.}\\quad \\mat{p}_{\\mat{A}}^{(t)'}\\mat{Y}_{\\mat{A}}^{(t)'}\\mat{Y}_{\\mat{A}}^{(t)}\\mat{p}_{\\mat{A}}^{(t)}+\\mat{p}_{\\mat{X}}^{(t)'}\\mat{Y}_{\\mat{X}}^{(t)'}\\mat{Y}_{\\mat{X}}^{(t)}\\mat{p}_{\\mat{X}}^{(t)}=1.\n\\end{split}\n\\end{equation}\n\nLet $\\gamma$ be the Lagrange multiplier for the constraint. By setting the derivative of the Lagrange function w.r.t. $\\mat{p}_{\\mat{A}}^{(t)}$ and $\\mat{p}_{\\mat{X}}^{(t)}$ to zero, we obtain the optimal solution for $[\\mat{p}_{\\mat{A}}^{(t)};\\mat{p}_{\\mat{X}}^{(t)}]$, which corresponds to the eigenvector of the following generalized eigen-problem:\n\\begin{equation}\n\\begin{split}\n\\begin{bmatrix}\n \\mat{Y}_{\\mat{A}}^{(t)'}\\mat{Y}_{\\mat{A}}^{(t)} & \\mat{Y}_{\\mat{A}}^{(t)'}\\mat{Y}_{\\mat{X}}^{(t)} \\\\\n \\mat{Y}_{\\mat{X}}^{(t)'}\\mat{Y}_{\\mat{A}}^{(t)} & \\mat{Y}_{\\mat{X}}^{(t)'}\\mat{Y}_{\\mat{X}}^{(t)} \\\\\n\\end{bmatrix}\n\\begin{bmatrix}\n \\mat{p}_{\\mat{A}}^{(t)} \\\\\n \\mat{p}_{\\mat{X}}^{(t)} \\\\\n\\end{bmatrix}\n=\n\\gamma&\n\\begin{bmatrix}\n \\mat{Y}_{\\mat{A}}^{(t)'}\\mat{Y}_{\\mat{A}}^{(t)} & \\mat{0} \\\\\n \\mat{0} & \\mat{Y}_{\\mat{X}}^{(t)'}\\mat{Y}_{\\mat{X}}^{(t)} \\\\\n\\end{bmatrix}\n\\begin{bmatrix}\n \\mat{p}_{\\mat{A}}^{(t)} \\\\\n \\mat{p}_{\\mat{X}}^{(t)} \\\\\n\\end{bmatrix}.\n\\end{split}\n\\label{eq:embeddingfushion}\n\\end{equation}As a result, to obtain a consensus embedding representation from $\\mat{Y}_{\\mat{A}}$ and $\\mat{Y}_{\\mat{X}}$, we take the top-$l$ eigenvectors of the above generalized eigen-problem and stack them together. Suppose the projection matrix $\\mat{P}^{(t)}\\in\\mathbb{R}^{2k\\times l}$ is formed by the concatenated top-$l$ eigenvectors; the final consensus embedding representation can then be computed as $\\mat{Y}^{(t)}=[\\mat{Y}_{\\mat{A}}^{(t)},\\mat{Y}_{\\mat{X}}^{(t)}]\\times\\mat{P}^{(t)}$.
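Since Eq.~(\\ref{eq:embeddingfushion}) only involves $2k\\times 2k$ matrices, this step is inexpensive once $\\mat{Y}_{\\mat{A}}^{(t)}$ and $\\mat{Y}_{\\mat{X}}^{(t)}$ are available. A minimal Python sketch is given below; the small ridge term \\texttt{reg} is our own addition to keep the right-hand side well conditioned and is not part of the formulation above:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.linalg import eigh\n\ndef consensus_embedding(Ya, Yx, l, reg=1e-8):\n    k = Ya.shape[1]\n    Caa, Cax, Cxx = Ya.T @ Ya, Ya.T @ Yx, Yx.T @ Yx\n    lhs = np.block([[Caa, Cax], [Cax.T, Cxx]])\n    rhs = np.block([[Caa, np.zeros((k, k))], [np.zeros((k, k)), Cxx]])\n    rhs += reg * np.eye(2 * k)   # small ridge keeps rhs positive-definite\n    gam, vecs = eigh(lhs, rhs)   # generalized eigen-problem, ascending order\n    P = vecs[:, ::-1][:, :l]     # top-l eigenvectors of Eq. (2)\n    return np.hstack([Ya, Yx]) @ P\n\\end{verbatim}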
\\subsection{Online Model of DANE}\nMore often than not, attributed networks exhibit high dynamics. For example, in social media sites, social relations are continuously evolving, and user posting behaviors may also evolve accordingly. This raises challenges to existing offline embedding methods, as they have to be rerun at each time step, which is time-consuming and not scalable to large networks. Therefore, it is important to build an efficient online embedding algorithm which gives an informative embedding representation on the fly.\n\nThe proposed online embedding model is motivated by the observation that most real-world networks, attributed networks included, often evolve smoothly in the temporal dimension between two consecutive time steps~\\cite{chi2007evolutionary,aggarwal2014evolutionary,wang2016recommending,li2016toward}. Hence, we use $\\Delta\\mat{A}$ and $\\Delta\\mat{X}$ to denote the perturbation of the network structure and node attributes between two consecutive time steps $t$ and $t+1$, respectively. With these, the diagonal matrix and Laplacian matrix of $\\mat{A}$ and $\\mat{X}$ also evolve smoothly such that:\n\\begin{equation}\n\\begin{split}\n\\mat{D}_{\\mat{A}}^{(t+1)} &= \\mat{D}_{\\mat{A}}^{(t)}+\\Delta\\mat{D}_{\\mat{A}}, \\quad \\mat{L}_{\\mat{A}}^{(t+1)} = \\mat{L}_{\\mat{A}}^{(t)}+\\Delta\\mat{L}_{\\mat{A}},\\\\\n\\mat{D}_{\\mat{X}}^{(t+1)} &= \\mat{D}_{\\mat{X}}^{(t)}+\\Delta\\mat{D}_{\\mat{X}}, \\quad \\mat{L}_{\\mat{X}}^{(t+1)} = \\mat{L}_{\\mat{X}}^{(t)}+\\Delta\\mat{L}_{\\mat{X}}.\n\\end{split}\n\\end{equation}\n\nAs discussed in the previous subsection, the problem of attributed network embedding in an offline setting boils down to solving generalized eigen-problems. In particular, the offline model focuses on finding the top eigenvectors corresponding to the smallest eigenvalues of the generalized eigen-problems. Therefore, the core idea behind the online update of the embeddings is to develop an efficient way to update the top eigenvectors and eigenvalues. Otherwise, we would have to perform a generalized eigen-decomposition at each time step, which is not practical due to its high time complexity.\n\nWithout loss of generality, we use the network topology as an example to illustrate the proposed algorithm for online embedding. By matrix perturbation theory~\\cite{stewart1990matrix}, we have the following equation in embedding the network structure at the new time step:\n\\begin{equation}\n(\\mat{L}_{\\mat{A}}^{(t)}+\\Delta\\mat{L}_{\\mat{A}})(\\mat{a}+\\Delta\\mat{a})=(\\lambda+\\Delta\\lambda)(\\mat{D}_{\\mat{A}}^{(t)}+\\Delta\\mat{D}_{\\mat{A}})(\\mat{a}+\\Delta\\mat{a}).\n\\end{equation}For a specific eigen-pair $(\\lambda_{i},\\mat{a}_{i})$, we have the following equation:\n\\begin{equation}\n(\\mat{L}_{\\mat{A}}^{(t)}+\\Delta\\mat{L}_{\\mat{A}})(\\mat{a}_{i}+\\Delta\\mat{a}_{i})=(\\lambda_{i}+\\Delta\\lambda_{i})(\\mat{D}_{\\mat{A}}^{(t)}+\\Delta\\mat{D}_{\\mat{A}})(\\mat{a}_{i}+\\Delta\\mat{a}_{i}).\n\\end{equation}The problem now is how to compute the change of the $i$-th eigen-pair $(\\Delta\\mat{a}_{i}, \\Delta\\lambda_{i})$ by taking advantage of the small perturbation matrices $\\Delta\\mat{D}_{\\mat{A}}$ and $\\Delta\\mat{L}_{\\mat{A}}$.\n\\paragraph{\\textbf{A - Computing the change of eigenvalue $\\mat{\\Delta}\\mat{\\lambda_{i}}$}}~\\\\\nBy expanding the above equation, we have:\n\\begin{equation}\n\\begin{split}\n&\\mat{L}_{\\mat{A}}^{(t)}\\mat{a}_{i}+\\Delta\\mat{L}_{\\mat{A}}\\mat{a}_{i}+\\mat{L}_{\\mat{A}}^{(t)}\\Delta\\mat{a}_{i}+\\Delta\\mat{L}_{\\mat{A}}\\Delta\\mat{a}_{i}\\\\\n=&\\lambda_{i}\\mat{D}_{\\mat{A}}^{(t)}\\mat{a}_{i}+\\lambda_{i}\\Delta\\mat{D}_{\\mat{A}}\\mat{a}_{i}+\\Delta\\lambda_{i}\\mat{D}_{\\mat{A}}^{(t)}\\mat{a}_{i}+\\Delta\\lambda_{i}\\Delta\\mat{D}_{\\mat{A}}\\mat{a}_{i}\\\\\n+&(\\lambda_{i}\\mat{D}_{\\mat{A}}^{(t)}+\\lambda_{i}\\Delta\\mat{D}_{\\mat{A}}+\\Delta\\lambda_{i}\\mat{D}_{\\mat{A}}^{(t)}+\\Delta\\lambda_{i}\\Delta\\mat{D}_{\\mat{A}})\\Delta\\mat{a}_{i}.\n\\end{split}\n\\label{eq:expasion}\n\\end{equation}The higher-order terms, i.e., $\\Delta\\lambda_{i}\\Delta\\mat{D}_{\\mat{A}}\\mat{a}_{i}$, $\\Delta\\mat{L}_{\\mat{A}}\\Delta\\mat{a}_{i}$, $\\lambda_{i}\\Delta\\mat{D}_{\\mat{A}}\\Delta\\mat{a}_{i}$, $\\Delta\\lambda_{i}\\mat{D}_{\\mat{A}}^{(t)}\\Delta\\mat{a}_{i}$ and $\\Delta\\lambda_{i}\\Delta\\mat{D}_{\\mat{A}}\\Delta\\mat{a}_{i}$, can be removed as they have limited effects on the accuracy of the generalized eigen-systems~\\cite{golub2012matrix}. 
By using the fact that $\\mat{L}_{\\mat{A}}^{(t)}\\mat{a}_{i}=\\lambda_{i}\\mat{D}_{\\mat{A}}^{(t)}\\mat{a}_{i}$, we have the following formulation:\n\\begin{equation}\n\\Delta\\mat{L}_{\\mat{A}}\\mat{a}_{i}+\\mat{L}_{\\mat{A}}^{(t)}\\Delta\\mat{a}_{i}=\\lambda_{i}\\Delta\\mat{D}_{\\mat{A}}\\mat{a}_{i}+\\Delta\\lambda_{i}\\mat{D}_{\\mat{A}}^{(t)}\\mat{a}_{i}+\\lambda_{i}\\mat{D}_{\\mat{A}}^{(t)}\\Delta\\mat{a}_{i}.\n\\label{eq:firstorder}\n\\end{equation}Multiplying both sides by $\\mat{a}_{i}'$, we now have:\n\\begin{equation}\n\\mat{a}_{i}'\\Delta\\mat{L}_{\\mat{A}}\\mat{a}_{i}+\\mat{a}_{i}'\\mat{L}_{\\mat{A}}^{(t)}\\Delta\\mat{a}_{i}=\\lambda_{i}\\mat{a}_{i}'\\Delta\\mat{D}_{\\mat{A}}\\mat{a}_{i}+\\Delta\\lambda_{i}\\mat{a}_{i}'\\mat{D}_{\\mat{A}}^{(t)}\\mat{a}_{i}+\n\\lambda_{i}\\mat{a}_{i}'\\mat{D}_{\\mat{A}}^{(t)}\\Delta\\mat{a}_{i}.\n\\label{eq:transpose}\n\\end{equation}Since both the Laplacian matrix $\\mat{L}_{\\mat{A}}^{(t)}$ and the diagonal matrix $\\mat{D}_{\\mat{A}}^{(t)}$ are symmetric, we have:\n\\begin{equation}\n\\mat{a}_{i}'\\mat{L}_{\\mat{A}}^{(t)}\\Delta\\mat{a}_{i}=\\lambda_{i}\\mat{a}_{i}'\\mat{D}_{\\mat{A}}^{(t)}\\Delta\\mat{a}_{i}.\n\\end{equation}Therefore, Eq.~(\\ref{eq:transpose}) can be reformulated as follows:\n\\begin{equation}\n\\mat{a}_{i}'\\Delta\\mat{L}_{\\mat{A}}\\mat{a}_{i}=\\lambda_{i}\\mat{a}_{i}'\\Delta\\mat{D}_{\\mat{A}}\\mat{a}_{i}+\\Delta\\lambda_{i}\\mat{a}_{i}'\\mat{D}_{\\mat{A}}^{(t)}\\mat{a}_{i}.\n\\end{equation}Through this, the variation of the eigenvalue, i.e., $\\Delta\\lambda_{i}$, is:\n\\begin{equation}\n\\Delta\\lambda_{i}=\\frac{\\mat{a}_{i}'\\Delta\\mat{L}_{\\mat{A}}\\mat{a}_{i}-\\lambda_{i}\\mat{a}_{i}'\\Delta\\mat{D}_{\\mat{A}}\\mat{a}_{i}}{\\mat{a}_{i}'\\mat{D}_{\\mat{A}}^{(t)}\\mat{a}_{i}}.\n\\label{eq:eigenvaluesolution}\n\\end{equation}\n\\begin{theorem} In the generalized eigen-problem $\\mat{A}\\mat{v}=\\lambda\\mat{B}\\mat{v}$, if $\\mat{A}$ and $\\mat{B}$ are both Hermitian matrices and $\\mat{B}$ is a positive-semidefinite matrix, the eigenvalues $\\lambda$ are real, and the eigenvectors are $\\mat{B}$-orthogonal such that $\\mat{v}_{i}'\\mat{B}\\mat{v}_{j}=0$ for $i\\neq j$ and $\\mat{v}_{i}'\\mat{B}\\mat{v}_{i}=1$~\\cite{parlett1980symmetric}.\n\\label{theorem:theorem1}\n\\end{theorem}\n\n\\begin{corollary}\n$\\mat{a}_{i}'\\mat{D}_{\\mat{A}}^{(t)}\\mat{a}_{i}=1$ and $\\mat{a}_{i}'\\mat{D}_{\\mat{A}}^{(t)}\\mat{a}_{j}=0$ ($i\\neq j$).\n\\label{corollary:corollary1}\n\\label{corollary:orthonormal}\n\\end{corollary}\n\\begin{proof}\nBoth $\\mat{D}_{\\mat{A}}^{(t)}$ and $\\mat{L}_{\\mat{A}}^{(t)}$ are real symmetric and hence Hermitian matrices. 
Meanwhile, the diagonal matrix $\\mat{D}_{\\mat{A}}^{(t)}$ is positive-semidefinite, which completes the proof.\n\\end{proof}Therefore, the variation of the eigenvalue $\\lambda_{i}$ is as follows:\n\\begin{equation}\n\\Delta\\lambda_{i}=\\mat{a}_{i}'\\Delta\\mat{L}_{\\mat{A}}\\mat{a}_{i}-\\lambda_{i}\\mat{a}_{i}'\\Delta\\mat{D}_{\\mat{A}}\\mat{a}_{i}.\n\\label{eq:eigenvaluefinalupdate}\n\\end{equation}\n\n\\paragraph{\\textbf{B - Computing the change of eigenvector $\\mat{\\Delta}\\mat{a}_{i}$}}~\\\\\nAs the network structure often evolves smoothly between two consecutive time steps, we assume that the perturbation $\\Delta\\mat{a}_{i}$ of the eigenvectors lies in the column space spanned by the top-$k$ eigenvectors at time step $t$ such that $\\Delta\\mat{a}_{i}=\\sum_{j=2}^{k+1}\\alpha_{ij}\\mat{a}_{j}$, where $\\alpha_{ij}$ is a weight indicating the contribution of the $j$-th eigenvector $\\mat{a}_{j}$ in approximating the new $i$-th eigenvector. Next, we show how to determine these weights such that the perturbation $\\Delta\\mat{a}_{i}$ can be estimated.\n\nBy plugging $\\Delta\\mat{a}_{i}=\\sum_{j=2}^{k+1}\\alpha_{ij}\\mat{a}_{j}$ into Eq.~(\\ref{eq:firstorder}) and using the fact that $\\mat{L}_{\\mat{A}}^{(t)}\\sum_{j=2}^{k+1}\\alpha_{ij}\\mat{a}_{j}=\\mat{D}_{\\mat{A}}^{(t)}\\sum_{j=2}^{k+1}\\alpha_{ij}\\lambda_{j}\\mat{a}_{j}$, we obtain the following:\n\\begin{equation}\n\\small\n\\Delta\\mat{L}_{\\mat{A}}\\mat{a}_{i}+\\mat{D}_{\\mat{A}}^{(t)}\\sum_{j=2}^{k+1}\\alpha_{ij}\\lambda_{j}\\mat{a}_{j}=\\lambda_{i}\\Delta\\mat{D}_{\\mat{A}}\\mat{a}_{i}+\\Delta\\lambda_{i}\\mat{D}_{\\mat{A}}^{(t)}\\mat{a}_{i}+\\lambda_{i}\\mat{D}_{\\mat{A}}^{(t)}\\sum_{j=2}^{k+1}\\alpha_{ij}\\mat{a}_{j}.\n\\label{eq:eigenvectornew2}\n\\end{equation}By multiplying both sides of Eq.~(\\ref{eq:eigenvectornew2}) by the eigenvector $\\mat{a}_{p}'\\,(2\\leq p\\leq k+1, p\\neq i)$ and taking advantage of the orthonormal property from Corollary~\\ref{corollary:orthonormal}, we obtain the following:\n\\begin{equation}\n\\begin{split}\n&\\mat{a}_{p}'\\Delta\\mat{L}_{\\mat{A}}\\mat{a}_{i}+\\mat{a}_{p}'\\mat{D}_{\\mat{A}}^{(t)}\\sum_{j=2}^{k+1}\\alpha_{ij}\\lambda_{j}\\mat{a}_{j}\\\\\n=\\,&\\lambda_{i}\\mat{a}_{p}'\\Delta\\mat{D}_{\\mat{A}}\\mat{a}_{i}+\\Delta\\lambda_{i}\\mat{a}_{p}'\\mat{D}_{\\mat{A}}^{(t)}\\mat{a}_{i}+\\lambda_{i}\\mat{a}_{p}'\\mat{D}_{\\mat{A}}^{(t)}\\sum_{j=2}^{k+1}\\alpha_{ij}\\mat{a}_{j}\\\\\n\\Rightarrow \\quad &\\mat{a}_{p}'\\Delta\\mat{L}_{\\mat{A}}\\mat{a}_{i}+\\alpha_{ip}\\lambda_{p}=\\lambda_{i}\\mat{a}_{p}'\\Delta\\mat{D}_{\\mat{A}}\\mat{a}_{i}+\\alpha_{ip}\\lambda_{i}.\n\\end{split}\n\\end{equation}Hence, the weight $\\alpha_{ip}$ can be determined by:\n\\begin{equation}\n\\alpha_{ip}=\\frac{\\mat{a}_{p}'\\Delta\\mat{L}_{\\mat{A}}\\mat{a}_{i}-\\lambda_{i}\\mat{a}_{p}'\\Delta\\mat{D}_{\\mat{A}}\\mat{a}_{i}}{\\lambda_{i}-\\lambda_{p}}.\n\\label{eq:alphaip}\n\\end{equation}After the eigenvector perturbation, we still need the orthonormal condition to hold for the new eigenvectors; thus, we have $(\\mat{a}_{i}+\\Delta\\mat{a}_{i})'(\\mat{D}_{\\mat{A}}^{(t)}+\\Delta\\mat{D}_{\\mat{A}})(\\mat{a}_{i}+\\Delta\\mat{a}_{i})=1$. 
By expanding it and removing the second-order and third-order terms, we obtain the following equation:\n\\begin{equation}\n2\\mat{a}_{i}'\\mat{D}_{\\mat{A}}^{(t)}\\Delta\\mat{a}_{i}+\\mat{a}_{i}'\\Delta\\mat{D}_{\\mat{A}}\\mat{a}_{i}=0.\n\\label{eq:alphaiitmp}\n\\end{equation}Then the solution of $\\alpha_{ii}$ is as follows:\n\\begin{equation}\n\\alpha_{ii}=-\\frac{1}{2}\\mat{a}_{i}'\\Delta\\mat{D}_{\\mat{A}}\\mat{a}_{i}.\n\\label{eq:alphaii}\n\\end{equation}\n\nWith the solutions of $\\alpha_{ip}$ ($p\\neq i$) and $\\alpha_{ii}$, the perturbation of the eigenvector $\\mat{a}_{i}$ is given as follows:\n\\begin{equation}\n\\Delta\\mat{a}_{i}=-\\frac{1}{2}\\mat{a}_{i}'\\Delta\\mat{D}_{\\mat{A}}\\mat{a}_{i}\\mat{a}_{i}+\\sum_{j=2,j\\neq i}^{k+1}(\\frac{\\mat{a}_{j}'\\Delta\\mat{L}_{\\mat{A}}\\mat{a}_{i}-\\lambda_{i}\\mat{a}_{j}'\\Delta\\mat{D}_{\\mat{A}}\\mat{a}_{i}}{\\lambda_{i}-\\lambda_{j}})\\mat{a}_{j}.\n\\label{eq:eigenvectorfinalupdate}\n\\end{equation}\n\nOverall, the $i$-th eigen-pair ($\\Delta\\lambda_{i}, \\Delta\\mat{a}_{i}$) can be updated on the fly by Eq.~(\\ref{eq:eigenvaluefinalupdate}) and Eq.~(\\ref{eq:eigenvectorfinalupdate}); the pseudocode of the updating process is illustrated in Algorithm~\\ref{alg:generalizedeigenupdate}. The first input is the top-$k$ eigen-pairs of the generalized eigen-problem, which can be computed by standard methods such as power iteration or the Lanczos method~\\cite{golub2012matrix}. Another input is the variation of the diagonal matrix and the Laplacian matrix. For the top-$k$ eigen-pairs, we update the eigenvalues in line 2 and the eigenvectors in line 3.\n\nLikewise, the embedding of the node attributes can also be updated in an online manner by Algorithm~\\ref{alg:generalizedeigenupdate}. Specifically, let $\\mat{Y}_{\\mat{A}}^{(t)}$ and $\\mat{Y}_{\\mat{X}}^{(t)}$ denote the embeddings of the network structure and node attributes at time step $t$; at the following time step $t+1$, we first employ the proposed online model to update their embedding representations, and then a final consensus embedding representation $\\mat{Y}^{(t+1)}$ is derived by the correlation maximization method mentioned previously.\n\n\\begin{algorithm}[!htbp]\n\\begin{algorithmic}[1]\n \\Require Top-$k$ eigen-pairs of the generalized eigen-problem $\\{$($\\lambda_{2},\\mat{a}_{2}$),($\\lambda_{3},\\mat{a}_{3}$),...,($\\lambda_{k+1},\\mat{a}_{k+1}$)$\\}$ at time $t$, variation of the diagonal matrix $\\Delta\\mat{D}_{\\mat{A}}$ and the Laplacian matrix $\\Delta\\mat{L}_{\\mat{A}}$.\n \\Ensure Top-$k$ eigen-pairs $\\{$($\\lambda_{2}^{(t+1)},\\mat{a}_{2}^{(t+1)}$),...,($\\lambda_{k+1}^{(t+1)},\\mat{a}_{k+1}^{(t+1)}$)$\\}$ at time step $t+1$.\n \\For {$i=2$ to $k+1$}\n \\State {Calculate the variation $\\Delta\\lambda_{i}$ by Eq.~(\\ref{eq:eigenvaluefinalupdate});}\n \\State {Calculate the variation $\\Delta\\mat{a}_{i}$ by Eq.~(\\ref{eq:eigenvectorfinalupdate});}\n \\State {$\\lambda_{i}^{(t+1)}=\\lambda_{i}+\\Delta\\lambda_{i}$; $\\mat{a}_{i}^{(t+1)}=\\mat{a}_{i}+\\Delta\\mat{a}_{i}$;}\n \\EndFor\n\\end{algorithmic}\n\\caption{Updating of the embedding results for the network}\n\\label{alg:generalizedeigenupdate}\n\\end{algorithm}
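For illustration, Algorithm~\\ref{alg:generalizedeigenupdate} can be sketched in Python as follows. This minimal sketch assumes the tracked eigenvalues remain distinct (otherwise the denominator $\\lambda_{i}-\\lambda_{j}$ in Eq.~(\\ref{eq:alphaip}) degenerates), and all variable names are our own:\n\\begin{verbatim}\nimport numpy as np\n\ndef update_eigen_pairs(lams, vecs, dL, dD):\n    # lams: (k,) eigenvalues; vecs: (n, k) D-orthonormal eigenvectors;\n    # dL, dD: sparse perturbations of the Laplacian and diagonal matrices.\n    H_L = vecs.T @ (dL @ vecs)   # entries a_p' dL a_i\n    H_D = vecs.T @ (dD @ vecs)   # entries a_p' dD a_i\n    k = lams.shape[0]\n    # Eigenvalue update, Eq. (eigenvaluefinalupdate).\n    new_lams = lams + np.diag(H_L) - lams * np.diag(H_D)\n    new_vecs = vecs.copy()\n    for i in range(k):\n        alpha = np.empty(k)\n        for j in range(k):\n            if j == i:\n                alpha[i] = -0.5 * H_D[i, i]   # Eq. (alphaii)\n            else:                             # Eq. (alphaip)\n                alpha[j] = (H_L[j, i] - lams[i] * H_D[j, i]) \/ (lams[i] - lams[j])\n        # Eigenvector update, Eq. (eigenvectorfinalupdate).\n        new_vecs[:, i] = vecs[:, i] + vecs @ alpha\n    return new_lams, new_vecs\n\\end{verbatim}\nThe attribute-side eigen-pairs are updated identically with $\\Delta\\mat{L}_{\\mat{X}}$ and $\\Delta\\mat{D}_{\\mat{X}}$, after which the consensus step is re-applied to obtain $\\mat{Y}^{(t+1)}$.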
\\paragraph{\\textbf{C - Computational Complexity Analysis}}\nWe theoretically analyze the computational complexity of the proposed online algorithm and show its superiority over the offline embedding methods.\n\\begin{lemma}\nThe time complexity of the proposed online embedding algorithm over $T$ time steps is $\\mathcal{O}(Tk^{2}(n+l+l_{a}+l_{x}+d_{a}+d_{x}))$, where $k$ is the intermediate embedding dimension for the network (or attributes), $l$ is the final consensus embedding dimension, $n$ is the number of nodes, and $l_{a}$, $l_{x}$, $d_{a}$, $d_{x}$ are the numbers of non-zero entries in the sparse matrices $\\mat{\\Delta}\\mat{L}_{\\mat{A}}$, $\\mat{\\Delta}\\mat{L}_{\\mat{X}}$, $\\mat{\\Delta}\\mat{D}_{\\mat{A}}$, and $\\mat{\\Delta}\\mat{D}_{\\mat{X}}$, respectively.\n\\begin{proof}\nIn each time step, updating the top-$k$ eigenvalues of the network and node attributes in an online fashion requires $\\mathcal{O}(k(d_{a}+l_{a}))$ and $\\mathcal{O}(k(d_{x}+l_{x}))$, respectively. Also, the online updating of the top-$k$ eigenvectors for the network and attributes requires $\\mathcal{O}(k^{2}(d_{a}+l_{a}+n))$ and $\\mathcal{O}(k^{2}(d_{x}+l_{x}+n))$, respectively. After that, the complexity of the consensus embedding is $\\mathcal{O}(k^{2}l)$. Therefore, the computational complexity of the proposed online model over $T$ time steps is $\\mathcal{O}(Tk^{2}(n+l+l_{a}+l_{x}+d_{a}+d_{x}))$.\n\\end{proof}\n\\end{lemma}\n\n\\begin{lemma}\nThe time complexity of the proposed offline embedding algorithm over $T$ time steps is $\\mathcal{O}(Tn^{2}(k+l))$, where $k$ is the intermediate embedding dimension for the network (or attributes) and $l$ is the final consensus embedding dimension.\n\\begin{proof}\nOmitted for brevity.\n\\end{proof}\n\\end{lemma}\n\nSince $\\mat{\\Delta}\\mat{L}_{\\mat{A}}$, $\\mat{\\Delta}\\mat{L}_{\\mat{X}}$, $\\mat{\\Delta}\\mat{D}_{\\mat{A}}$, and $\\mat{\\Delta}\\mat{D}_{\\mat{X}}$ are often very sparse, $l_{a}$, $l_{x}$, $d_{a}$, and $d_{x}$ are usually very small; meanwhile, we have $k\\ll n$ and $l \\ll n$. Based on the above analysis, the proposed online embedding algorithm for dynamic attributed networks is much more efficient than rerunning the offline method repeatedly.\n\n\n\\section{Experiments}\nIn this section, we conduct experiments to evaluate the effectiveness and efficiency of the proposed DANE framework for dynamic attributed network embedding. In particular, we attempt to answer the following two questions: (1) \\emph{Effectiveness}: how effective are the embeddings obtained by DANE on different learning tasks? (2) \\emph{Efficiency}: how fast is the proposed framework DANE compared with other offline embedding methods? We first introduce the datasets and experimental settings before presenting the details of the experimental results.\n\\subsection{Datasets}\nWe use four datasets, BlogCatalog, Flickr, Epinions, and DBLP, for experimental evaluation. Among them, BlogCatalog and Flickr are synthesized from static attributed networks that have been used in previous research~\\cite{li2015unsupervised,li2016robust}. We randomly add 0.1\\% new edges and change 0.1\\% of the attribute values at each time step to simulate their evolving nature. The other two datasets, Epinions and DBLP, are real-world dynamic attributed networks. Epinions is a product review site in which users share their reviews and opinions about products. Users themselves can also build trust networks to seek advice from others. Node attributes are formed by the bag-of-words model on the reviews, while the major categories of the reviews by users are taken as the ground truth class labels. The data has 16 different time steps. 
In the last dataset, DBLP, we extracted a co-author network for the authors who published at least two papers between the years of 2001 and 2016 in seven different areas. The bag-of-words model is applied to the paper titles to obtain the attribute information, and the major area in which each author publishes is considered as the ground truth. It should be noted that in all these four datasets, the evolution of the network structure and node attributes is very smooth. The detailed statistics of these datasets are listed in Table~\\ref{table:datasets}.\n\\begin{table}\n\\centering\n\\begin{tabular}{c|c|c|c|c} \\hline\n& BlogCatalog & Flickr & Epinions & DBLP\\\\ \\hline \\hline\n$\\#$ Nodes & 5,196 & 7,575 & 14,180 & 23,393\\\\ \\hline\n$\\#$ Attributes & 8,189 & 12,047 & 9,936 & 8,945 \\\\ \\hline\n$\\#$ Edges & 173,468 & 242,146 & 227,642 & 289,478 \\\\ \\hline\n$\\#$ Classes & 6 & 9 & 20 & 7\\\\ \\hline\n$\\#$ Time Steps & 10 & 10 & 16 & 16 \\\\ \\hline\n\\end{tabular}\n\\caption{Detailed information of the datasets.}\n\\label{table:datasets}\n\\end{table}\n\n\\subsection{Experimental Settings}\nOne commonly adopted way to evaluate the quality of embedding representations~\\cite{chang2015heterogeneous,jacob2014learning,perozzi2014deepwalk,tang2015line} is by the following two unsupervised and supervised tasks: network clustering and node classification. First, we validate the effectiveness of the embedding representations by DANE on the network clustering task. Two standard clustering performance metrics, i.e., \\emph{clustering accuracy} (ACC) and \\emph{normalized mutual information} (NMI), are used. In particular, after obtaining the embedding representation of each node in the attributed network, we perform K-means clustering based on the embedding representations. The K-means algorithm is repeated 10 times and the average results are reported, since K-means may converge to a local minimum due to different initializations. Another way to assess the embeddings is by the node classification task. Specifically, we split the embedding representations of all nodes via 10-fold cross-validation, using 90\\% of the nodes to train a logistic regression classification model and the remaining 10\\% of the nodes for testing. The whole process is repeated 10 times and the average performance is reported. Three evaluation metrics, \\emph{classification accuracy}, \\emph{F1-Macro}, and \\emph{F1-Micro}, are used. 
How to determine the optimal number of embedding dimensions is still an open research problem; thus, we vary the embedding dimension in $\\{10,20,...,100\\}$ and report the best results.\n\n\\subsubsection{Baseline Methods}\nDANE is measured against the following baseline methods on the two aforementioned tasks:\n\\begin{itemize}\n\\item \\textbf{Deepwalk}: learns network embeddings by word2vec and truncated random walk techniques~\\cite{perozzi2014deepwalk}.\n\\item \\textbf{LINE}: learns embeddings by preserving the first-order and second-order proximity structures of the network~\\cite{tang2015line}.\n\\item \\textbf{DANE-N}: a variation of the proposed DANE with only network information.\n\\item \\textbf{DANE-A}: a variation of the proposed DANE with only attribute information.\n\\item \\textbf{CCA}: directly uses the original network structure and attributes for a joint low-dimensional representation~\\cite{hardoon2004canonical}.\n\\item \\textbf{LCMF}: maps network and attributes to a shared latent space by collective matrix factorization~\\cite{zhu2007combining}.\n\\item \\textbf{LANE}: a label-informed attributed network embedding method; we use its variant LANE w\\/o Label~\\cite{huang2017label}.\n\\item \\textbf{DANE-O}: a variation of DANE that reruns the offline model at each time step.\n\\end{itemize}\n\nIt is important to note that Deepwalk, LINE, CCA, LCMF, LANE, and DANE-O can only handle static networks. To have a fair comparison with the proposed DANE framework, we rerun these baseline methods at each time step and report the average performance over all time steps\\footnote{For baseline methods that cannot finish in 24hrs, we only run them once. As the networks evolve smoothly, there is not much difference in terms of average performance.}. We follow the suggestions of the original papers to set the parameters of all these baselines.\n\n\\subsection{Unsupervised Task - Network Clustering}\nTo evaluate the effectiveness of the embedding representations, we first compare DANE with the baseline methods on network clustering, which is naturally an unsupervised learning task. Since attributed networks are constantly evolving, we compare the average clustering performance over all time steps. The average clustering performance comparison w.r.t. ACC and NMI is presented in Table~\\ref{table:clustering}. 
We make the following observations:\n\n\\begin{table*}[!t]\n\\centering\n\\caption{Clustering results ($\\%$) comparison of different embedding methods.}\n\\newcommand{\\minitab}[2][l]{\\begin{tabular}{#1}#2\\end{tabular}}\n\\begin{tabular}{|c|c||c|c||c|c||c|c||c|c|}\\hline\n\\multicolumn{2}{|c||}{Datasets} & \\multicolumn{2}{|c||}{BlogCatalog} & \\multicolumn{2}{|c||}{Flickr} & \\multicolumn{2}{|c||}{Epinions} & \\multicolumn{2}{|c|}{DBLP} \\\\ \\hline\\hline\n\\multicolumn{2}{|c||}{Methods} & ACC & NMI & ACC & NMI & ACC & NMI & ACC & NMI \\\\ \\hline\n\\multirow{3}{*}{Network} & Deepwalk & 49.85 & 30.51 & 40.70 & 24.29 & 13.31 & 12.72 & 53.61 & 32.54 \\\\ \\cline{2-10}\n& LINE & 50.20 & 29.53 & 42.93 & 26.01 & 14.34 & 12.65 & 51.61 & 30.74 \\\\ \\cline{2-10}\n & DANE-N\t & 37.05 & 21.84 & 31.89 & 18.91 & 12.01 & 11.95 & 56.61 & 31.54 \\\\ \\hline \\hline\nAttributes & DANE-A\t & 62.32 & 45.95 & 63.80 & 48.29 & 16.12 & 11.62 & 47.37 & 20.64 \\\\ \\hline \\hline\n\\multirow{5}{*}{Network+Attributes} & CCA\t & 33.42 & 11.86 & 24.39 & 10.89 & 10.85 & 8.61 & 26.42 & 18.60 \\\\ \\cline{2-10}\n&LCMF\t & 55.72 & 40.38 & 27.03 & 13.06 & 12.86 & 10.73 & 42.27 & 26.48 \\\\ \\cline{2-10}\n&LANE\t & 65.06 & 48.89 & 65.45 & 52.58 & 32.18 & 22.09 & 55.80 & 31.84 \\\\ \\cline{2-10}\n&DANE-O\t & 80.31 & 59.46 & 67.33 & 53.04 & 34.11 & 23.07 & 59.14 & 35.31 \\\\ \\cline{2-10}\n&DANE\t & 79.69 & 59.32 & 67.24 & 52.19 & 34.52 & 22.36 & 57.68 & 34.87 \\\\ \\hline\n\\end{tabular}\n\\label{table:clustering}\n\\end{table*}\n\n\\begin{itemize}\n\\item DANE and its offline version DANE-O consistently outperform all baseline methods on the four dynamic attributed networks by achieving better clustering performance. We also perform pairwise Wilcoxon signed-rank tests~\\cite{demvsar2006statistical} between DANE, DANE-O, and these baseline methods; the test results show that DANE and DANE-O are significantly better (at both the 0.01 and 0.05 significance levels).\n\\item DANE, DANE-O, and LANE achieve better clustering performance than the pure network embedding methods (Deepwalk, LINE, and DANE-N) and the pure attribute embedding method DANE-A. The improvements indicate that attribute information is complementary to pure network topology and can help learn more informative embedding representations. Meanwhile, DANE also outperforms CCA and LCMF, which also leverage node attributes. The reason is that although these methods learn a low-dimensional representation by using both sources, they are not explicitly designed to preserve node proximity. Also, their performance degrades when the data is very noisy.\n\\item Even though DANE leverages matrix perturbation theory to update the embedding representations, its performance is very close to that of DANE-O, which reruns the offline model at each time step. 
It implies that the online embedding model does not sacrifice much information in the learned embeddings.\n\\end{itemize}\n\n\\subsection{Supervised Task - Node Classification}\n\\begin{table*}[!t]\n\\centering\n\\caption{Classification results ($\\%$) comparison of different embedding methods.}\n\\newcommand{\\minitab}[2][l]{\\begin{tabular}{#1}#2\\end{tabular}}\n\\begin{tabular}{|c|c||c|c|c||c|c|c||c|c|c||c|c|c|}\\hline\n\\multicolumn{2}{|c||}{Datasets} & \\multicolumn{3}{|c||}{BlogCatalog} & \\multicolumn{3}{|c||}{Flickr} & \\multicolumn{3}{|c||}{Epinions} & \\multicolumn{3}{|c|}{DBLP} \\\\ \\hline\\hline\n\\multicolumn{2}{|c||}{Methods} & AC & Micro & Macro & AC & Micro & Macro & AC & Micro & Macro & AC & Micro & Macro \\\\ \\hline\n\\multirow{3}{*}{Network} & Deepwalk & 68.05 & 67.15 & 68.18 & 60.08 & 58.93 & 59.08 & 22.12 & 17.43 & 20.10 & 74.38 & 69.65 & 72.37 \\\\ \\cline{2-14}\n&LINE & 70.20 & 69.88 & 70.91 & 61.03 & 60.90 & 60.01 & 23.54 & 17.17 & 21.05 & 72.97 & 67.56 & 70.97 \\\\ \\cline{2-14}\n&DANE-N & 66.97 & 66.06 & 67.78 & 49.37 & 47.82 & 49.34 & 21.25 & 20.57 & 21.88 & 71.99 & 65.33 & 71.94 \\\\ \\hline \\hline\nAttributes&DANE-A & 80.23 & 79.86 & 80.23 & 76.66 & 75.59 & 76.60 & 23.76 & 21.57 & 22.00 & 63.92 & 54.80 & 62.97 \\\\ \\hline \\hline\n\\multirow{5}{*}{Network+Attributes} &CCA & 48.63 & 49.96 & 49.63 & 27.09 & 26.54 & 26.09 & 11.53 & 9.43 & 10.56 & 45.67 & 42.08 & 43.83 \\\\ \\cline{2-14}\n&LCMF\t & 84.41 & 89.01 & 89.26 & 66.27 & 66.75 & 65.71 & 19.14 & 9.22 & 10.14 & 69.71 & 68.01 & 68.42 \\\\ \\cline{2-14}\n&LANE & 87.52 & 87.52 & 87.93 & 77.54 & 77.81 & 77.26 & 27.74 & 28.45 & 28.87 & 72.15 & 71.09 & 73.48 \\\\ \\cline{2-14}\n&DANE-O & 89.34 & 89.15 & 89.23 & 79.68 & 79.52 & 79.95 & 31.23 & 31.28 & 31.35 & 77.21 & 74.96 & 75.48 \\\\ \\cline{2-14}\n&DANE & 89.09 & 88.78 & 88.94 & 79.56 & 78.94 & 79.56 & 30.87 & 30.93 & 30.81 & 76.64 & 74.53 & 75.69 \\\\ \\hline\n\\end{tabular}\n\\label{table:classification}\n\\end{table*}\nNext, we assess the effectiveness of the embedding representations on a supervised learning task - node classification. Similar to the settings of network clustering, we report the average classification performance over all time steps. The classification results in terms of three different measures are shown in Table~\\ref{table:classification}. The following findings can be inferred from the table:\n\\begin{itemize}\n\\item Generally, we have similar observations as in the clustering task. The methods which only use link information or node attributes (e.g., Deepwalk, LINE, DANE-N, DANE-A) and the methods which do not explicitly model node proximity (e.g., CCA, LCMF) give poor classification results.\n\\item The embeddings learned by DANE and DANE-O help train more discriminative classification models, yielding higher classification performance. In addition, pairwise Wilcoxon signed-rank tests~\\cite{demvsar2006statistical} show that DANE and DANE-O are significantly better.\n\\item For the node classification task, the attribute embedding method DANE-A works better than the network embedding methods on the BlogCatalog, Flickr, and Epinions datasets. The reason is that in these datasets, the class labels are more closely related to the attribute information than to the network structure. 
However, it is a different case for the DBLP dataset, in which the labels of authors are more closely related to the coauthor relationships.\n\\end{itemize}\n\n\\subsection{Efficiency of Online Embedding}\nTo evaluate the efficiency of the proposed DANE framework, we compare DANE with the baseline methods CCA, LCMF, and LANE, which also use both data representations. Also, we include the offline version of DANE, i.e., DANE-O. As all these methods are not designed to handle network dynamics, we compare their cumulative running time over all time steps and plot it on a log scale. As can be observed from Figure~\\ref{fig:runtime}, the proposed DANE is much faster than all these comparison methods. On all these datasets, it terminates within one hour, while some offline methods need several hours or even days to run. It can also be seen that both DANE and DANE-O are much faster than all other offline methods. For example, DANE is 84$\\times$, 21$\\times$, and 14$\\times$ faster than LCMF, CCA, and LANE, respectively, on the Flickr dataset.\n\\begin{figure*}[!t]\n\\centering\n\\begin{minipage}{0.48\\textwidth}\n\\centering\n\\subfigure[BlogCatalog\\label{fig:blogcatalogcumulative}]\n{\\includegraphics[width=\\textwidth]{cumulative-blogcatalog.eps}}\n\\end{minipage}\n\\begin{minipage}{0.48\\textwidth}\n\\centering\n\\subfigure[Flickr\\label{fig:flickrcumulative}]\n{\\includegraphics[width=\\textwidth]{cumulative-flickr.eps}}\n\\end{minipage}\n\\begin{minipage}{0.48\\textwidth}\n\\centering\n\\subfigure[Epinions\\label{fig:epinionscumulative}]\n{\\includegraphics[width=\\textwidth]{cumulative-epinions.eps}}\n\\end{minipage}\n\\begin{minipage}{0.48\\textwidth}\n\\centering\n\\subfigure[DBLP\\label{fig:dblpcumulative}]\n{\\includegraphics[width=\\textwidth]{cumulative-dblp.eps}}\n\\end{minipage}\n\\vspace{-1\\baselineskip}\n\\caption{Cumulative running time comparison.}\n\\label{fig:runtime}\n\\end{figure*}\nTo further investigate the superiority of DANE over its offline version DANE-O, we compare the speedup rate of DANE against DANE-O w.r.t. different embedding dimensions in Figure~\\ref{fig:speedup}. As can be observed, when the embedding dimension is small (around 10), DANE achieves around 8$\\times$, 10$\\times$, 8$\\times$, and 12$\\times$ speedups on BlogCatalog, Flickr, Epinions, and DBLP, respectively. When the embedding dimensionality gradually increases, the speedup of DANE decreases, but it is still significantly faster than DANE-O. 
Taken together, the above observations show that the proposed DANE framework is able to learn informative embeddings for attributed networks efficiently, without jeopardizing the classification and clustering performance.\n\n\\begin{figure*}[!t]\n\\centering\n\\begin{minipage}{0.475\\textwidth}\n\\centering\n\\subfigure[BlogCatalog\\label{fig:blogcatalogspeedup}]\n{\\includegraphics[width=\\textwidth]{speedup-blogcatalog.eps}}\n\\end{minipage}\n\\begin{minipage}{0.475\\textwidth}\n\\centering\n\\subfigure[Flickr\\label{fig:flickrspeedup}]\n{\\includegraphics[width=\\textwidth]{speedup-flickr.eps}}\n\\end{minipage}\n\\begin{minipage}{0.475\\textwidth}\n\\centering\n\\subfigure[Epinions\\label{fig:epinionsspeedup}]\n{\\includegraphics[width=\\textwidth]{speedup-epinions.eps}}\n\\end{minipage}\n\\begin{minipage}{0.475\\textwidth}\n\\centering\n\\subfigure[DBLP\\label{fig:dblpspeedup}]\n{\\includegraphics[width=\\textwidth]{speedup-dblp.eps}}\n\\end{minipage}\n\\vspace{-1\\baselineskip}\n\\caption{Running time speedup of DANE against its offline version DANE-O.}\n\\label{fig:speedup}\n\\end{figure*}\n\\section{Related Work}\nWe briefly review related work on (1) network embedding; (2) attributed network mining; and (3) dynamic network analysis.\n\nPioneering work on network embedding dates back to the early 2000s, when many graph embedding algorithms~\\cite{belkin2001laplacian,roweis2000nonlinear,tenenbaum2000global} were proposed. These methods build an affinity matrix that preserves the local geometric structure of the data manifold and then embed the data into a low-dimensional representation. Motivated by these graph embedding techniques, Chen et al.~\\cite{chen2007directed} proposed one of the first network embedding algorithms for directed networks. They used random walks to measure the proximity structure of the directed network. Recently, network embedding techniques have received a surge of research interest in network science. Among them, Deepwalk~\\cite{perozzi2014deepwalk} generalizes word embedding techniques and employs truncated random walks to learn latent representations of a network. Node2vec~\\cite{grover2016node2vec} further extends Deepwalk by adding flexibility in exploring node neighborhoods. LINE~\\cite{tang2015line} carefully designs an optimized objective function that preserves first-order and second-order proximities to learn network representations. GraRep~\\cite{cao2015grarep} can be regarded as an extension of LINE that considers high-order information. Most recently, some deep learning based approaches have been proposed to enhance the learned embeddings~\\cite{wang2016structural,yang2016revisiting}.\n\nAll the above-mentioned approaches, however, are limited to plain networks. In many cases, we are instead faced with attributed networks. Many efforts have been devoted to gaining insights from attributed networks. For example, Zhu et al.~\\cite{zhu2007combining} proposed a collective matrix factorization model that learns a low-dimensional latent space from both the links and node attributes. Similar matrix factorization based methods were proposed in~\\cite{yang2015network,zhang2016collective}. Chang et al.~\\cite{chang2015heterogeneous} used deep learning techniques to learn a joint feature representation for heterogeneous networks. Huang et al.~\\cite{huang2017label} studied whether label information can help learn better feature representations in attributed networks. 
Instead of directly learning embeddings, another way is to perform unsupervised feature selection~\\cite{tang2012unsupervised,li2016robust,cheng2017feature}. Nevertheless, all these methods can only handle static networks; it is still not clear how to learn embedding representations efficiently when attributed networks are constantly evolving over time. The problem of attributed network embedding is also related to but distinct from multi-modality or multi-view embedding~\\cite{xu2013survey,kumar2011co,zhang2017react}. In attributed networks, the network structure is more than a single view of data as it encodes other types of rich information, such as connectivity, transitivity, and reciprocity.\n\nAs mentioned above, many real-world networks, especially social networks, are not static but are continuously evolving. Hence, the results of many network mining tasks will become stale and need to be updated to stay fresh. For example, Tong et al.~\\cite{tong2008colibri} proposed an efficient way to sample columns and\/or rows from the network adjacency matrix to achieve low-rank approximation. In~\\cite{tang2008community}, the authors employed temporal information to analyze multi-mode networks whose interactions are evolving. Ning et al.~\\cite{ning2007incremental} proposed an incremental approach to perform spectral clustering on networks dynamically. Aggarwal and Li~\\cite{aggarwal2011node} proposed a random-walk based method to perform dynamic classification in content-based networks. In~\\cite{chen2015fast,chen2017eigen}, a fast eigen-tracking algorithm is proposed, which is essential for many graph mining algorithms involving the adjacency matrix. Li et al.~\\cite{li2016toward} studied how to perform unsupervised feature selection in a dynamic and connected environment. Zhou et al.~\\cite{zhou2015rare} investigated the rare category detection problem on time-evolving graphs. A more detailed review of dynamic network analysis can be found in~\\cite{aggarwal2014evolutionary}. However, all these methods are distinct from our proposed framework as we are the first to tackle the problem of attributed network embedding in a dynamic environment.\n\n\\section{Conclusions and Future Work}\nThe prevalence of attributed networks in many real-world applications presents new challenges for many learning problems because of their natural heterogeneity. In such networks, interactions among networked instances tend to evolve gradually, and the associated attributes also change accordingly. In this paper, we study a novel problem: how to learn embedding representations for nodes in dynamic attributed networks to enable further learning tasks. In particular, we first build an offline model for a consensus embedding representation which could capture node proximity in terms of both network topology and node attributes. Then, in order to capture the evolving nature of attributed networks, we present an efficient online method to update the embeddings on the fly. Experimental results on synthetic and real dynamic attributed networks demonstrate the efficacy and efficiency of the proposed framework.\n\nThere are many future research directions. First, in this paper, we employ first-order matrix perturbation theory to update the embedding representations in an online fashion (a minimal sketch of such an update is given below). We would like to investigate how higher-order approximations can be applied to the online embedding learning problem. 
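As a concrete illustration of this first direction, the following is a minimal sketch of the standard first-order eigen-update for a symmetric matrix (e.g., a graph Laplacian) under a perturbation; the function name and the NumPy formulation are our own illustration, not the exact DANE implementation.\n\\begin{verbatim}\nimport numpy as np\n\ndef first_order_eig_update(vals, vecs, dA):\n    # vals: (k,) eigenvalues; vecs: (n, k) eigenvectors\n    # as columns; dA: (n, n) symmetric perturbation of A.\n    proj = vecs.T @ dA @ vecs        # proj[j, i] = u_j' dA u_i\n    new_vals = vals + np.diag(proj)  # d(lam_i) = u_i' dA u_i\n    gap = vals[None, :] - vals[:, None]  # lam_i - lam_j\n    np.fill_diagonal(gap, np.inf)    # drop the j = i term\n    new_vecs = vecs + vecs @ (proj / gap)\n    return new_vals, new_vecs\n\\end{verbatim}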
Second, this paper focuses on online embedding for two different data representations; we also plan to extend the current framework to multi-mode and multi-dimensional dynamic networks.\n\\section*{Acknowledgements}\nThis material is based upon work supported by, or in part by, the National Science Foundation (NSF) grant 1614576, and the Office of Naval Research (ONR) grant N00014-16-1-2257.\n\n\\balance\n\n\\subsection{Experiments with ConvNeXt, a Stronger Backbone}\n\nIn Table \\ref{tab:RebuttalConvNextTable}, we evaluate our novel segment-level augmentation strategy using ConvNeXt \\cite{liu2022convnet}, a strong, recent, and fast backbone that can compete with transformer-based architectures. Although ConvNeXt provides significantly better performance than the ResNet baseline, Table \\ref{tab:RebuttalConvNextTable} confirms our experiments with ResNet: segment-level augmentation can improve logo retrieval performance. \n\n\\input{table_convnet}\n\n\\subsection{Experiments with a Different Dataset}\n\nWithout tuning, we repeat our experiments using a different logo retrieval dataset, namely the Large Logo Dataset (LLD) \\cite{sage2017logodataset}. LLD has 61380 logos for training and 61540 logos for testing. The LLD dataset does not have a query set providing known similarities; therefore, for the experiments on LLD, we use the query set of the METU dataset to find similarities in the LLD dataset. \n\nThe results in Table \\ref{tab:LLDtable} show that segment-level augmentation performs better than image-level augmentation in terms of the R@1 measure and on par in terms of the NAR and R@8 measures. Considering that LLD is a smaller dataset than METU, we believe that segment-level augmentation can provide a larger margin in performance if tuned.\n\n\\input{table_LLD}\n\n\\subsection{Comparisons with Different Image-level Augmentations}\n\nWe now compare segment-level augmentation with more image-level augmentation methods. We selected the most commonly used image-level augmentation methods ({Cutout, Elastic Transform and Channel Shuffle}) from \\cite{albumentations_paper}. For this experiment, we use the same selection probability and setup as in Section IV.C. of the main paper. The results in Table \\ref{tab:albumentations} show that, \\textbf{without any tuning}, segment-level augmentation performs better than this new set of image-level augmentations in terms of the NAR and R@8 measures, whereas it is inferior in terms of the R@1 measure. With tuning, segment-level augmentation has the potential to perform better.\n\n\\input{table_more_augmentation}\n\n\\subsection{An Analysis of Segmentation Maps}\n\nLogos are simplistic images composed of regions that are generally homogeneous in color. Therefore, a simple off-the-shelf segmentation algorithm generally works well for most logos (see Figure \\ref{seg_fig_failure_cases} for some examples). Logo segmentation can produce spurious segments, especially in regions with strong color gradients. However, this is not a problem for us because segments resulting from over-segmentation or under-segmentation still function as a form of segment-level augmentation, which is useful for training.\n\nFigure \\ref{seg_fig_failure_cases} displays edge examples where our segment-level augmentation produces drastically different logos for which computing similarity with the original logos is highly challenging. 
These cases happen if a logo has only a few segments of comparable size and one of them is removed, or if the segmentation method under-segments the image and a segment that groups multiple regions is removed, or if the background segment is selected for augmentation and rotated. We did not try to address these edge cases as they are not frequent. We believe that they are useful stochastic perturbations and helpful to the training dynamics. We leave an analysis of this and improving the quality of segmentation \\& augmentation as future work.\n\n\\begin{figure}[!h]\n    \\centering\n    \\includegraphics[width=0.9\\columnwidth]{augmentation_failures.pdf}\n    \\caption{Edge cases in our segment-level augmentation. The first is a result of under-segmentation, where connected petals with similar colors are grouped into a single segment; therefore, changing its color produces a distinct logo. The second case happens because the logo has two main segments and removing one results in a drastically different logo. The third is a case where a background segment is rotated and the outcome is an overlay of two segments.}\n    \\label{seg_fig_failure_cases}\n\\end{figure}\n\n\\subsection{The Effect of Text in Logos}\n\nIn this experiment, we analyze the effect of text on logo similarity. The visual results in Table \\ref{tab:RebuttalLogoAnalysis} and the quantitative analysis in Table \\ref{tab:LogoTypeAnalysis} show that, when used as queries, logos with text are more difficult to match to the logos in the dataset, and the best and average ranks of similar logos are very high. This is not surprising since representations in deep networks capture text as well, and unless additional mechanisms are used to ignore text regions, they are taken into account while measuring similarity. The effect of text on logo retrieval is already known in the literature, and soft or hard mechanisms can be used to remove text from logos -- see e.g. \\cite{tursun2020learning,Kalkan}.\n\n\\input{logo_analysis}\n\n\n\\subsection{Visual Retrieval Results}\n\nFigure \\ref{fig:visual_results} displays sample retrieval results for different queries. We see in Figure \\ref{fig:visual_results}(a) and (b) that segment-level augmentation is able to provide better retrieval than its competitors. Figure \\ref{fig:visual_results}(c) displays a failure case where a query with an unusual color distribution is used. We see that all methods are adversely affected by this; however, segment-level augmentation is able to retrieve logos with similar color distributions.\n\n\\begin{figure*}[!h]\n\\centerline{\n    \\subfigure[]{\n    \\includegraphics[width=0.65\\textwidth]{final_visual_results_1.pdf}\n}}\n\\centerline{\n    \\subfigure[]{\n    \\includegraphics[width=0.65\\textwidth]{final_visual_results_2.pdf}\n}}\n\\centerline{\n    \\subfigure[]{\n    \\includegraphics[width=0.65\\textwidth]{final_visual_results_3_woConvNext.pdf}\n}} \n\\caption{Visual results on the METU dataset for the methods at their best settings. \\textbf{(a-b)} Example cases where segment-level augmentation produces better retrieval than image-level augmentation. \\textbf{(c)} A negative result for a query with an unusual intensity distribution, which affects all methods adversely. 
Even so, segment-level augmentation is able to retrieve logos with similar color distributions.\n\\label{fig:visual_results}}\n\\end{figure*}\n\n\\subsection{Running-Time Analysis}\n\nTable \\ref{tab:RebuttalTimeComplexity} provides a running-time analysis of segment-level and image-level augmentation strategies. We observe that the time spent on segmentation (0.4ms) and Segment Color Change (2.4ms) is comparable to that of the image-level transformation Horizontal Flip (2.8ms). However, Segment Removal and Segment Rotation take significantly more time. It is important to note that our implementation is not optimized for efficiency.\n\n\\input{time_complexity}\n\n\n\\subsection{Discussion}\n\n\\subsubsection{Comparing Triplet Loss and Smooth-AP Loss}\n\nOur results on the METU and LLD datasets lead to two interesting findings, which we discuss below: \n\n\\begin{itemize}\n    \\item \\textit{Finding 1: Triplet Loss is better in terms of the R@1 and R@8 measures whereas Smooth-AP is better in terms of the NAR measure.}\n    \\item \\textit{Finding 2: Smooth-AP provides its best NAR performance with Color Change whereas Triplet Loss provides its best with Color Change \\& Segment Removal.}\n\\end{itemize} \n \nTriplet Loss, by definition, optimizes a distance metric and is considered a surrogate ranking objective. On the other hand, Smooth-AP directly aims to optimize Average Precision, a ranking measure, and therefore pertains to a more global ranking objective than Triplet Loss, which works at the level of triplets only. See also Brown et al. \\cite{Brown2020eccv}, who discussed and contrasted Triplet Loss with Smooth-AP Loss.\n\nThe two objectives provide different inductive biases to the learning mechanism and, therefore, we see differences in their performances with respect to different performance measures and augmentation strategies. For example, Finding 1 is likely because NAR considers all logos, whereas R@1 and R@8 only consider logos at the top of the ranking, which can easily be learned to be ranked at the top using local similarity arrangements as in Triplet Loss. Moreover, Finding 2 occurs because different augmentation strategies incur different rankings among logos, and the inductive biases of Triplet Loss and Smooth-AP handle them differently.\n\nWe believe that this is an interesting research question that we leave as future work.\n\n\\subsubsection{Different Effects and Selection Probabilities for Segment-Augmentations}\n\nThe results in Table IV of the main paper show that different augmentations contribute to performance differently and with different selection probabilities. For example, we see that Color Change (with $p=0.5$) gives the best performance in terms of NAR and R@8, whereas Rotation (with $p=0.75$) provides the best performance for R@1, and Removal improves over the baseline only in terms of the R@8 measure. \n\nWe attribute these differences to the fact that the different segment-level augmentations incur different biases: Color Change enforces invariance to perturbations in color differences at the segment level, whereas Segment Rotation and Removal encourage invariance to changes in the spatial layout of the shape. \n\n\\subsubsection{Applicability to Other Problems}\n\nWe agree that our analysis is limited to logo retrieval. However, the idea of segment-level augmentation is a viable approach for problems that reason about similarities at the object-part level and require transfer of knowledge at the level of object parts. 
One good example is reasoning about affordances of objects \\cite{zhu2014reasoning,myers2015affordance,myers2014affordance}, where supported functions of object parts can be transferred across objects having similar parts. Another example is reasoning about similarity between shapes that have partial overlap \\cite{leonard20162d,latecki2000shape}, where correspondences between parts of shapes need to be calculated. In either example, the specific segment-level augmentation methods may have to be adjusted to the specific problem. For example, performing affine transformations on the segments may be helpful for problems with real-world objects.\n\n\\clearpage\n\n\n\n\n\n\n\n\n\\end{document}\n\n\n\n\\section{Conclusion}\n\nWe introduced a novel data augmentation method based on image segments for training neural networks for logo retrieval. We performed segment-level augmentation by identifying segments in a logo and applying transformations to selected segments. Experiments were conducted on the METU \\cite{Kalkan} and LLD \\cite{sage2017logodataset} datasets with ResNet \\cite{ResNet} and ConvNeXt \\cite{liu2022convnet} backbones, and the results suggest significant improvements on two evaluation measures of ranking performance. Moreover, we used metric learning and differentiable ranking with the proposed segment-augmentation method to demonstrate that our method can lead to a further boost in ranking performance.\n\nWe note that our segment-level augmentation strategy generates similarities between logos that are rather simplistic: It is based on the assumption that two similar logos differ from each other in terms of certain segments having differences in color, orientation and presence. An important research direction is exploring more sophisticated augmentation strategies for introducing artificial similarities. However, our results suggest that even such a simplistic strategy can improve the retrieval performance significantly; therefore, our study can be considered a first step towards developing better segment\/part-level augmentation strategies.\n\\section{Experiments and Results}\n\nWe now evaluate the performance of the proposed segment-augmentation strategy and its use with Triplet Loss and Smooth-AP Loss. \n\n\\subsection{Experimental and Implementation Details}\n\n\\subsubsection{Dataset}\n\nWe use the METU Dataset \\cite{Kalkan}, which is one of the largest publicly available logo retrieval datasets. The dataset is composed of more than 900K authentic logos belonging to actual companies worldwide. Moreover, it includes query sets, i.e. similar logos, of varying difficulties, allowing logo retrieval researchers to benchmark their methods against others. 
We have used 411K training images, 413K test images, and 418 query images.\n\n\\begin{comment}\n\\begin{table}[htb]\n\\centering\n\\begin{tabular}{@{}ll@{}}\n\\toprule\n\\textbf{Aspect} & \\textbf{Value} \\\\ \\midrule\nTrademarks & 923,343 \\\\\nUnique Register Firms & 409,675 \\\\\nUnique Trademarks & 687,842 \\\\\nTrademarks Containing Text Only & 583,715 \\\\\nTrademarks Containing Figure Only & 19,214 \\\\\nTrademarks Containing Figure and Text & 310,804 \\\\\nOther Trademarks & 9,610 \\\\\nImage Format & JPEG \\\\\nMax Resolution & 1800 $\\times$ 1800 pixels \\\\\nMin Resolution & 30 $\\times$ 30 pixels \\\\ \\bottomrule\n\\addlinespace\n\\end{tabular}\n\\caption{\\label{tab:table-name} Details of METU Dataset \\cite{Kalkan}.}\n\\end{table}\n\\end{comment}\n\n\\subsubsection{Training and Implementation Details}\n\nFor every experiment discussed below, we use an ImageNet \\cite{ImageNet} pre-trained ResNet50 \\cite{ResNet} as our backbone architecture, with a final linear layer of 512 dimensions instead of a Softmax layer. We use the Adam optimizer with tuned hyper-parameters: $10^{-7}$ for the learning rate and 256 for the batch size.\n\n\n\\subsubsection{Evaluation Measures}\n\nFollowing the earlier studies \\cite{Kalkan,Tursun2019}, we use {Normalized Average Rank} (NAR) and {Recall@K} for quantifying the performance of the methods. NAR is calculated as:\n\\begin{equation}\n    {NAR = \\frac{1}{N\\times N_{rel}} \\left(\\sum_{i=1}^{N_{rel}} R_i - \\frac{N_{rel}(N_{rel}+ 1)}{2} \\right)},\n\\end{equation}\nwhere $N_{rel}$ is the number of similar images for a particular query image; $N$ is the size of the image set; and $R_i$ is the rank of the $i^{th}$ similar image. NAR lies in the range $[0,1]$, where $0$ denotes the perfect score, and $1$ the worst. Recall@K (R@K) is recall for the top-K similar logos.
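For concreteness, the NAR computation for a single query can be sketched as follows; this is our own minimal NumPy illustration of the formula above, not the benchmark's evaluation code.\n\\begin{verbatim}\nimport numpy as np\n\ndef normalized_average_rank(ranks, n_total):\n    # ranks: 1-based retrieval ranks of the known similar\n    # logos; n_total: size of the image set (N above).\n    ranks = np.asarray(ranks, dtype=float)\n    n_rel = len(ranks)\n    best = n_rel * (n_rel + 1) / 2   # sum of ideal ranks\n    return (ranks.sum() - best) / (n_total * n_rel)\n\n# Three similar logos ranked 1, 2, 3 among 1000 images\n# yield the perfect score NAR = 0.\nassert normalized_average_rank([1, 2, 3], 1000) == 0.0\n\\end{verbatim}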
\\begin{table}[hbt!]\n\\centering\n\\caption{\\label{tab:RankingLossesTable} The effect of using Triplet Loss and Smooth-AP Loss for logo retrieval. Neither image-level nor segment-level augmentation is used for any method in this table.}\n\\begin{tabular}{@{}llll@{}}\n\\toprule\n\\textbf{Method} & \\textbf{NAR} $\\downarrow$ & \\textbf{Recall@1} $\\uparrow$ & \\textbf{Recall@8} $\\uparrow$ \\\\ \\midrule\nBaseline & 0.102 & 0.310 & 0.536 \\\\ \\midrule\nTriplet Loss & 0.053 & \\textbf{0.344} & \\textbf{0.586} \\\\ \\midrule\nSmooth-AP Loss & \\textbf{0.046} & 0.339 & 0.581 \\\\ \\bottomrule\n\\addlinespace\n\\end{tabular}\n\\end{table}\n\n\\subsection{Experiment 1: Effect of Ranking Losses}\n\nBefore analyzing the effect of segment-level augmentation, in this section we first provide a stand-alone analysis to illustrate the effect of the ranking losses. We compare Triplet Loss and Smooth-AP Loss with a baseline that compares features extracted with the pre-trained ResNet50 backbone using Cosine Similarity. For this analysis, no image-level or segment-level augmentations are used, except for the Random Resized Crop to fit the images to the expected resolution of the network, i.e. $224 \\times 224$. \n\nThe results in Table \\ref{tab:RankingLossesTable} suggest that both loss adaptations provide a significant performance improvement in both NAR and Recall measures, with the Smooth-AP adaptation achieving the best performance. Applying Cosine Similarity on off-the-shelf ResNet50 features shows adequate results on logo instances without text; however, it performs worse on logos with text (see Appendix E). \n\nIt is evident that the improvement in Recall is not as visible as in NAR. This difference indicates that the adapted loss functions strongly affect the overall rankings of the known similar instances. However, these effects are not completely reflected by Recall because of the selected K$=8$ value. \n\n\\begin{table}[ht]\n\\centering\n\\caption{\\label{tab:BestAugmentationResults} The effect of image-level (H. Flip, V. Flip) and segment-level augmentation. Only the best augmentation strategies are reported. See Section \\ref{sect:ablation} for an ablation analysis.}\n\\begin{tabular}{@{}llll@{}}\n\\toprule\n\\textbf{Method} & \\textbf{NAR} $\\downarrow$ & \\textbf{Recall@1} $\\uparrow$ & \\textbf{Recall@8} $\\uparrow$ \\\\ \\midrule\n\nBaseline\\\\\n(No augmentation) & 0.102 & 0.310 & 0.536 \\\\ \\midrule\\midrule\nTriplet Loss\\\\\n(No augmentation)& 0.053 & 0.344 & 0.586 \\\\ \\midrule\nTriplet Loss\\\\\n(Image-level aug.) & 0.051 & 0.354 & 0.596 \\\\ \\midrule\nTriplet Loss\\\\\n(S. Color, S. Removal) & 0.046 & \\textbf{0.374} & \\textbf{0.640} \\\\ \\midrule\\midrule\nSmooth-AP Loss\\\\\n(No augmentation) & 0.046 & 0.339 & 0.581 \\\\ \\midrule\nSmooth-AP Loss\\\\\n(Image-level aug.) & 0.044 & 0.339 & 0.596 \\\\ \\midrule\nSmooth-AP Loss \\\\\n(S. Color) & \\textbf{0.040} & 0.354 & 0.610 \\\\ \\bottomrule\n\\addlinespace\n\\end{tabular}\n\\end{table}\n\n\\begin{table}[hbt!]\n\\centering\n\\caption{\\label{tab:SOTATable} Normalized Average Rank (NAR) values for previous state-of-the-art results on the METU dataset. The results are not comparable as the methods differ in their backbones, training datasets, or training regime. For some methods, these details are not even reported. See the text for details.}\n\\begin{tabular}{@{}ll@{}}\n\\toprule\n\\textbf{Method} & \\textbf{NAR} $\\downarrow$\\\\ \\midrule\nHand-crafted Features (Feng \\emph{et al.} \\cite{Feng}) & 0.083 \\\\ \\midrule\nHand-crafted Features (Tursun \\emph{et al.} \\cite{Kalkan}) & 0.062 \\\\ \\midrule\nOff-the-shelf Deep Features (Tursun \\emph{et al.} \\cite{TursunandKalkan}) & 0.086 \\\\ \\midrule\nTransfer Learning (Perez \\emph{et al.} \\cite{Perez}) & 0.047 \\\\ \\midrule\nComponent-based attention (SPoC \\cite{SPOC}, \\cite{Tursun2019}) & 0.120 \\\\ \\midrule\nComponent-based attention (CRoW \\cite{CRoW}, \\cite{Tursun2019}) & 0.140 \\\\ \\midrule\nComponent-based attention (R-MAC \\cite{MAC}, \\cite{Tursun2019}) & 0.072 \\\\ \\midrule\nComponent-based attention (MAC \\cite{MAC}, \\cite{Tursun2019}) & 0.120 \\\\ \\midrule\nComponent-based attention (Jimenez \\cite{Jimenez}, \\cite{Tursun2019}) & 0.093 \\\\ \\midrule\nComponent-based attention (CAM MAC \\cite{Tursun2019}) & 0.064 \\\\ \\midrule\nComponent-based attention (ATR MAC \\cite{Tursun2019}) & 0.056 \\\\ \\midrule\nComponent-based attention (ATR R-MAC \\cite{Tursun2019}) & 0.063 \\\\ \\midrule\nComponent-based attention (ATR CAM MAC \\cite{Tursun2019}) & 0.040 \\\\ \\midrule\nMR-R-MAC w\/UAR (Tursun \\emph{et al.} \\cite{Tursun2022}) & {0.028} \\\\ \\midrule\nSegment-Augm. (Color Change) w\/ Smooth-AP (Ours) & {0.040} \\\\\n\\bottomrule\n\\addlinespace\n\\end{tabular}\n\\end{table}\n\n\n\\subsection{Experiment 2: Effect of Segment Augmentation}\n\nWe now compare our segment-based augmentation methods with the conventional image-level augmentation techniques on the METU dataset \\cite{Kalkan}. In every experiment, we resize the images with Random Resized Crop to fit them to the expected resolution of the network, i.e. 224$\\times$224. 
For both segment-based and image-level augmentation, the same number of images is augmented, and the probability $p=0.5$ is used for selecting a certain transformation.\n \nWe provide a comparison between the best-performing image-level and segment-level augmentation methods in Table \\ref{tab:BestAugmentationResults}. We see that image-level augmentation can improve ranking performance. However, the results suggest that segment-level augmentation provides a significantly better gain in terms of both the NAR and R@K measures. A detailed comparison between image-level and segment-level methods is provided in the ablation study.\n\n\n\n\n\n\\subsection{Experiment 3: Comparison with State of the Art}\n\nWe compare our method and the state-of-the-art methods on the METU dataset \\cite{Kalkan}. It is important to note that a fair comparison between the methods is not possible because they differ in their backbones, training datasets or dataset splits, or training time. For some papers, even those details are missing; e.g. for \\cite{Tursun2022}, which reports the best NAR performance. Therefore, we list the results in Table \\ref{tab:SOTATable}, and refrain from drawing conclusions.\n\n\\begin{table}[ht]\n    \\centering\n    \\caption{\\label{tab:ProbabilityTable} The effect of probability $p$ for augmenting a selected segment, with Smooth-AP Loss.}\n    \\begin{tabular}{@{}ccccccc@{}}\n    \\toprule\n    Color C. & Rotation & Removal & \\textbf{$p$} & \\textbf{NAR $\\downarrow$} & \\textbf{R@1 $\\uparrow$} & \\textbf{R@8 $\\uparrow$} \\\\ \\midrule \\midrule\n    \\multicolumn{3}{c}{\\textit{Baseline}} & 0 & 0.046 & 0.339 & 0.581 \\\\ \\midrule \\midrule\n    \\checkmark & & & 0.5 & \\textbf{0.040} & 0.354 & \\textbf{0.610} \\\\ \\midrule\n    \\checkmark & & & 0.75 & 0.043 & 0.354 & 0.601 \\\\ \\midrule\n    \\checkmark & & & 1.0 & 0.049 & 0.325 & 0.566 \\\\ \\midrule \\midrule\n    & \\checkmark & & 0.5 & 0.048 & 0.344 & 0.601 \\\\ \\midrule\n    & \\checkmark & & 0.75 & 0.049 & \\textbf{0.369} & 0.591 \\\\ \\midrule\n    & \\checkmark & & 1.0 & 0.060 & 0.330 & 0.556 \\\\ \\midrule \\midrule\n    & & \\checkmark & 0.5 & 0.048 & 0.339 & 0.596 \\\\ \\midrule\n    & & \\checkmark & 0.75 & 0.047 & 0.344 & 0.571 \\\\ \\midrule\n    & & \\checkmark & 1.0 & 0.047 & 0.344 & 0.591 \\\\\n    \\bottomrule\n    \\addlinespace\n    \\end{tabular}\n    \\end{table}
\\input{buyuk_tablo}\n\n\\subsection{Experiment 4: Ablation Study}\n\\label{sect:ablation}\n\n\\subsubsection{Choice of Hyper-Parameters}\n\nOur segment-level augmentation has two hyper-parameters: the number of segments, $n$, selected for augmentation, and the probability, $p$, of applying a selected augmentation. Table \\ref{tab:ProbabilityTable} shows that the best performance is obtained with $p$ as 0.5. A similar analysis for $n$ (with values 1, 2, $L\/3$ and $L\/2$, where $L$ is the number of segments in a logo) provided the best performance for $n$ as $L\/3$. \n\n\\subsubsection{Effects of Individual Augmentation Methods}\n\nTables \\ref{tab:SmoothAP-DataAugmentation} (Smooth-AP Loss) and \\ref{tab:TripletLoss-DataAugmentation} (Triplet Loss) list the effects of both image-level and segment-level augmentation. The tables show that, among the segment-level augmentation methods, (Segment) `Color Change' outperforms the others for both loss functions. With the Triplet Loss adaptation, (Segment) `Removal' and `Rotation' provide slightly better NAR values than the baseline. Another point worth mentioning is that combining (Segment) `Rotation' or `Removal' with other augmentations degrades the NAR measure, whereas the combination of (Segment) `Removal' and `Color Change' yields the best result at $Recall@8$.\n\n\\subsubsection{Experiments with a Different Backbone}\n\nSection A in the Appendix provides an analysis using ConvNeXt \\cite{liu2022convnet}, a recent, fast and strong backbone competing with transformer-based architectures. Our results without any hyper-parameter tuning are comparable to the baseline, or better than the baseline with the R@8 measure.\n\n\\subsubsection{Experiments with a Different Dataset}\n\nSection B in the Appendix reports results on the LLD dataset \\cite{sage2017logodataset} that confirm our analysis on the METU dataset: We observe that segment-level augmentation provides significant gains for all measures.\n\n\n\n\\subsection{Experiment 5: Visual Results}\n\nSection F in the Appendix provides sample retrieval results for several query logos for the baseline as well as the adaptations of Triplet Loss and Smooth-AP Loss with our segment-level augmentation methods. The visual results also confirm that segment augmentation with our Smooth-AP adaptation performs best.\n\n\n\\section{Introduction}\n\n\\begin{figure}[hbt!]\n\\centering\n\\includegraphics[width=0.99\\columnwidth]{Figures\/teaser.pdf}\n\\caption{(a) Conventional data augmentation approaches apply transformations at the image level. (b) We propose segment-level augmentation as a more suitable approach for problems like logo retrieval.\n\\label{fig_teaser}}\n\\end{figure}\n\n\nWith the rapid increase in companies founded worldwide and the fierce competition among them globally, the identification of companies with their logos has become more pronounced, and it has become increasingly important to check similarities between logos to prevent trademark infringements. 
Checking trademark infringements is generally performed manually by experts, which can be sub-optimal due to human error, and time-consuming, as it can take days to make a decision. Therefore, automatically identifying similar logos using content-based image processing techniques is crucial.\n\nFor a query logo, identifying the similar ones in a database of logos is a content-based image retrieval problem. With the rise of deep learning, there have been many studies that have used deep learning for the logo retrieval problem \\cite{Tursun2019, Feng, unsupervisedattention, Tursun2022, Perez}. Existing approaches generally rely on extracting features of logos and ranking them according to a suitable distance metric \\cite{TursunandKalkan, Tursun2019, Perez}.\n\nLogo retrieval is a challenging problem especially for two main reasons: (i) Similarity between logos is highly subjective, and similarity can occur at different levels, e.g., texture, color, segments, or their combination. (ii) The amount of known similar logos is limited. We hypothesize that this has limited the use of more modern deep learning solutions, e.g. metric learning, contrastive learning, differentiable ranking, as they require a tremendous amount of positive pairs (similar logos) as well as negative pairs (dissimilar logos) for training deep networks.\n\nIn this paper, we address these challenges by (i) proposing a segment-level augmentation to produce artificially similar logos and (ii) using metric learning (Triplet Loss \\cite{weinberger2009distance}) and differentiable ranking (Smooth Average Precision (AP) Loss \\cite{Brown2020eccv}) as a proof of concept that, with our novel segment-augmentation method, such data-hungry techniques can be trained better. \n\n\\textit{Main Contributions.} Our contributions are as follows:\n\\begin{itemize}\n    \\item We propose a segment-level augmentation for producing artificial similarities between logos. To the best of our knowledge, ours is the first to introduce segment-level augmentation into deep learning. Unlike image-level augmentation methods that transform the overall image, we identify segments in a logo and make transformations at the segment level. Our results suggest that this is more suitable than image-level augmentation for logo retrieval.\n    \n    \\item To showcase the use of such a tool to generate artificially similar logos, we use data-hungry deep learning methods, namely, Triplet Loss \\cite{weinberger2009distance} and Smooth-AP Loss \\cite{Brown2020eccv}, to show that our novel segment-augmentation method can indeed yield better retrieval performance. To the best of our knowledge, ours is the first to use such methods for logo retrieval.\n\\end{itemize}\n\n\n\n\n\n\n\n\n\n\\section{Method}\n\nIn this section, after providing a definition for logo retrieval, we present our novel segment-based augmentation method and how we use it with deep metric learning and differentiable ranking approaches.\n\n\\begin{wrapfigure}{r}{0.45\\columnwidth}\n\\centering\n\\includegraphics[width=0.45\\columnwidth]{Figures\/sample_augmentations.pdf}\n\\caption{Examples for our segment-level augmentation.}\n\\label{fig_augmentation}\n\\end{wrapfigure}\n\n\\subsection{Problem Definition}\n\nGiven an input query logo $I_q$, logo retrieval aims to rank all logos in a retrieval set $\\Omega = \\{I_i\\}$, $i \\in \\{0, 1, \\ldots, N\\}$, based on their similarity to the query $I_q$. 
To be able to evaluate retrieval performance and to train a deep network that relies on known similarities, we require each $I_q$ to have a set of positive (similar) logos, $\\Omega^+(I_q)$, and a set of negative (dissimilar) logos, $\\Omega^-(I_q)$. Note that logo retrieval defined as such does not have the notion of classes as in a classification setting.\n\n\n\\subsection{Segment-level Augmentation for Logo Retrieval}\n\nWe perform segment-level augmentation by following these steps: (i) Logo segmentation, (ii) segment selection, and (iii) segment transformation. See Figure \\ref{fig_augmentation} for some samples.\n\\begin{wrapfigure}{r}{0.45\\columnwidth}\n    \\centering\n    \\vspace*{-4cm}\n    \\includegraphics[width=0.45\\columnwidth]{Figures\/segmentation.pdf}\n    \\caption{Sample segmentation results. Each segment is represented with a different color.}\n    \\label{fig_segmentation}\n\\end{wrapfigure}\n\n\n\\subsubsection{Logo Segmentation}\n\nThere are many sophisticated segmentation approaches available in the literature. Since logo images have relatively simpler regions compared to natural images, we observed that a simple and computationally-cheap approach using standard connected-component labeling is sufficient for extracting logo segments. See Figure \\ref{fig_segmentation} for some samples, and Supp. Mat. Section S4 for more samples and a discussion on the effect of segmentation quality.\n\n\n\n\\subsubsection{Segment Selection}\n\nThe next step is to select $n$ random segments on which to apply transformations. Segment selection should be handled carefully since the number of segments and the area of each segment differ from logo to logo. The simplicity of logo instances also limits the number of available components, and many logos have fewer than five components. Therefore, the choice of $n$ can have drastic effects when the number of components in a logo is small, especially for the `segment removal' transformation. For this reason, `segment removal' is not applied to the segment with the largest area, and $n$ is chosen to be small. We present an ablation study to evaluate the effect of $n$ on model performance for the introduced augmentation strategies. For the same reason, the background component is excluded from the segments available for augmentation.\n\n\\subsubsection{Segment Transformation}\n\nFor each selected segment $S$, the following are performed with probability $p$:\n\\begin{itemize}\n    \\item `(Segment) Color change': Every pixel in $S$ is assigned a randomly selected color. \n    \\item `Segment removal': Pixel values in $S$ are set to the value of the background component.\n    \\item `Segment rotation': We first create a mask for the segment. The mask image and the corresponding segment pixels are rotated by a random angle in $[-90, 90]$ degrees. Then, the rotated segment is combined with the other segments. See also Figure \\ref{fig_segment_rotation} for an example.\n\\end{itemize}
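To make the three steps above concrete, we provide a minimal sketch below; the scipy-based connected-component labeling, the most-frequent-color background heuristic, and the helper names are our own illustrative choices and may differ from the exact implementation.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy import ndimage\n\ndef segment_level_augment(img, n_segments=2, p=0.5, rng=None):\n    # img: (H, W, 3) uint8 logo image\n    rng = rng or np.random.default_rng()\n    out = img.copy()\n    # (i) Segmentation: connected components of the\n    # non-background mask (background = most frequent color).\n    colors, counts = np.unique(img.reshape(-1, 3), axis=0,\n                               return_counts=True)\n    bg = colors[counts.argmax()]\n    labels, n = ndimage.label(np.any(img != bg, axis=-1))\n    if n == 0:\n        return out\n    largest = int(np.argmax(np.bincount(labels.ravel())[1:])) + 1\n    # (ii) Selection: n random segments (background excluded).\n    chosen = rng.choice(np.arange(1, n + 1),\n                        size=min(n_segments, n), replace=False)\n    # (iii) Transformation, each applied with probability p.\n    for s in chosen:\n        mask = labels == s\n        if rng.random() < p:                   # color change\n            out[mask] = rng.integers(0, 256, size=3)\n        if rng.random() < p and s != largest:  # removal\n            out[mask] = bg\n        if rng.random() < p:                   # rotation\n            angle = rng.uniform(-90, 90)\n            patch = out.copy()\n            patch[~mask] = 0\n            rot = ndimage.rotate(patch, angle,\n                                 reshape=False, order=0)\n            rmask = ndimage.rotate(mask.astype(np.uint8), angle,\n                                   reshape=False, order=0) > 0\n            out[mask] = bg                     # clear old position\n            out[rmask] = rot[rmask]            # overlay rotation\n    return out\n\\end{verbatim}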
\\begin{figure}[!h]\n    \\centering\n    \\includegraphics[width=0.9\\columnwidth]{Figures\/rotation_steps.pdf}\n    \\caption{The steps of rotating a segment.}\n    \\label{fig_segment_rotation}\n\\end{figure}\n\nSee Figure \\ref{fig_augmentation} for some sample augmentations.\n\n\n\n\n\\subsection{Adapting Ranking Losses for Logo Retrieval}\n\n\\subsubsection{Mini-batch Sampling for Training}\n\\label{sect:minibatch}\nFor training the deep networks, we construct the batches as follows, similar to but different from \\cite{Brown2020eccv}, as we do not have classes: Each mini-batch $B$ with size $|B|$ is constructed from two subsets: the similar set $B^{+}$ and the dissimilar set $B^{-}$. The similar logo set $B^{+}$ consists of logos that are known to be similar to each other (this information is available in the dataset \\cite{Kalkan}; logos with known similarities are provided as the ``query set''), and $B^{-}$ contains logos that are dissimilar to the logos in $B^{+}$ (to be specific, logos other than the query set of the dataset are randomly sampled for $B^{-}$). The size of $B^{+}$ is set to $4$, and that of $B^{-}$ is $|B|-4$. For training the network, every $I \\in B^{+}$ has label ``1'' and every $I \\in B^{-}$ has label ``0''.\n\n\n\n\n\\subsubsection{Smooth-AP Adaptation}\n\nSmooth-AP \\cite{Brown2020eccv} is a ranking-based differentiable loss function approximating AP. The main aspect of this approximation is to replace the discrete counting operation (the indicator function) in the non-differentiable AP with a Sigmoid function. Brown et al. \\cite{Brown2020eccv} applied their method to standard retrieval benchmarks such as Stanford Online Products \\cite{song2016deep}, VGGFace2 \\cite{DBLP:journals\/corr\/abs-1710-08092} and VehicleID \\cite{liu2016deep}. However, the logo retrieval problem requires a dataset with a different structure, as there is no notion of classes as in the Stanford Online Products \\cite{song2016deep}, VGGFace2 \\cite{DBLP:journals\/corr\/abs-1710-08092} and VehicleID \\cite{liu2016deep} datasets. Hence, Smooth-AP cannot be applied directly to our problem.\n\nThe first adaptation concerns the structure of the mini-batch sampling. In Smooth-AP, Brown et al. explain their sampling as they have \\textit{``formed each mini-batch by randomly sampling classes such that each represented class has P samples per class''} \\cite{Brown2020eccv}. Standard retrieval benchmarks have a notion of classes and are assumed to have sufficient instances per class to distribute among the mini-batches; however, there are not enough instances for known similarity ``classes'' in logo retrieval. This difference requires an adaptation in both the sampling and the calculation of the loss. Smooth-AP Loss is calculated as follows \\cite{Brown2020eccv}:\n\\begin{equation}\n\\mathcal{L}_{AP} = \\frac{1}{C} \\sum_{k=1}^{C} \\left(1-\\tilde{AP}_{k}\\right),\n\\end{equation}\nwhere $C$ is the number of \\textit{classes} and $\\tilde{AP}_k$ is the smoothed AP calculated for each class in the mini-batch with their Sigmoid-based smoothing method.\n\nOur mini-batch sampling (Section \\ref{sect:minibatch}) causes a natural contradiction because our batches only contain two classes, ``similar'' and ``dissimilar'', and the ``dissimilar'' class should not be included in the calculation of the loss. Dissimilar class instances have the same label (``0''), but that does not mean they belong to the same class; they are just not similar to the similar logo set $B^+$ in the mini-batch. Hence, the ranking among $B^-$ does not matter in our case. This difference in the batch construction and the notion of classes leads to our second adaptation. In this adaptation, the AP approximation is calculated only for the known ``similar'' class (logos in $B^+$). Therefore, the loss calculation becomes:\n\\begin{equation}\n    \\mathcal{L}^+_{AP} = 1-\\tilde{AP}_{+},\n\\end{equation}\nwhere $\\tilde{AP}_{+}$ is calculated (approximated) in the same way as in the original paper \\cite{Brown2020eccv}. 
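A minimal PyTorch sketch of this adapted loss is given below; the temperature value, function names, and vectorization are our own assumptions, with the smoothing following the Sigmoid-based approximation of \\cite{Brown2020eccv}.\n\\begin{verbatim}\nimport torch\n\ndef adapted_smooth_ap_loss(scores, labels, tau=0.01):\n    # scores: similarity of each batch item to the anchor;\n    # labels: 1 for B+ (known similar), 0 for B-.\n    pos = scores[labels == 1]\n\n    def smooth_rank(s, pool):\n        # 1 + soft count of pool items scoring above each s_i;\n        # the sigmoid replaces the indicator function.\n        diff = (pool[None, :] - s[:, None]) / tau\n        return 1.0 + torch.sigmoid(diff).sum(dim=1)\n\n    # Subtracting 0.5 cancels each item's self-comparison.\n    rank_in_pos = smooth_rank(pos, pos) - 0.5\n    rank_in_all = smooth_rank(pos, scores) - 0.5\n    ap_plus = (rank_in_pos / rank_in_all).mean()\n    return 1.0 - ap_plus   # i.e., 1 - smoothed AP of B+\n\\end{verbatim}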
\\subsubsection{Triplet Loss Adaptation}\n\nTriplet Loss \\cite{weinberger2009distance} is a well-known loss function used in many computer vision problems. Triplet Loss is differentiable but, unlike Smooth-AP Loss \\cite{Brown2020eccv}, rather than optimizing ranking, it optimizes the distances between positive pairs and negative pairs of instances. In this paper, for each mini-batch, triplets consist of one ``anchor'' instance, one positive instance, and one negative instance. For the same reasons discussed for the Smooth-AP adaptation, only the instances of known similarity classes can be used as the anchor instance. Optimizing the distances between dissimilar logo instances is not sensible because, as discussed in the previous section, instances of the dissimilar logos do not have any known similarity between them. Thus, the triplet loss calculation is limited to the triplets that contain known similar instances as the ``anchor'' instance.
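The following is a minimal sketch of this adapted triplet construction; the cosine-distance choice and the margin value are our illustrative assumptions.\n\\begin{verbatim}\nimport torch\nimport torch.nn.functional as F\n\ndef adapted_triplet_loss(emb, labels, margin=0.2):\n    # emb: (|B|, d) embeddings; labels: 1 for B+, 0 for B-.\n    pos = emb[labels == 1]   # anchors and positives (B+)\n    neg = emb[labels == 0]   # negatives only (B-)\n    losses = []\n    for a in range(pos.size(0)):        # anchor from B+\n        for q in range(pos.size(0)):    # positive from B+\n            if a == q:\n                continue\n            d_ap = 1 - F.cosine_similarity(pos[a], pos[q], dim=0)\n            d_an = 1 - F.cosine_similarity(pos[a].unsqueeze(0),\n                                           neg, dim=1)\n            losses.append(F.relu(d_ap - d_an + margin).mean())\n    if not losses:\n        return emb.new_zeros(())\n    return torch.stack(losses).mean()\n\\end{verbatim}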
\\section*{Authors' Response}\n\nWe thank all the reviewers for their valuable feedback. To summarize: The reviewers found our paper \\textit{``well-motivated''} (\\textcolor{red}{\\textbf{R4}}, \\textcolor{cadmiumgreen}{\\textbf{R7}}), \\textit{``well-organized''} (\\textcolor{cadmiumgreen}{\\textbf{R7}}), \\textit{``well-written''} (\\textcolor{auburn}{\\textbf{R8}}), containing \\textit{``valuable ideas''} (\\textcolor{red}{\\textbf{R4}}), and \\textit{``interesting''} (\\textcolor{auburn}{\\textbf{R8}}), and noted that \\textit{``experimental results demonstrate the effectiveness of the proposed strategy''} (\\textcolor{cadmiumgreen}{\\textbf{R7}}). The reviewers had concerns about the novelty of the ideas as well as the significance of the results. Moreover, they requested more analysis and a discussion of the limitations. \n\nWe've addressed all reviewer comments and the quality of our paper has improved significantly. To be specific, in addition to updating the text to clarify ambiguous points, we've conducted new experiments and we report positive results on another dataset ({LLD - Large Logo Dataset} \\cite{sage2017logodataset} -- see item A3), with a more recent, stronger and faster backbone ({ConvNeXt} architecture \\cite{liu2022convnet} -- see item {C1}), and provide a running-time analysis (C5), an analysis of the effect of text (C2), visual results (A2\\&C6) and more discussion. In the following, we only summarize our changes and respond to the comments provided by the reviewers. Due to space limitations, the new results are provided in the supplementary material, with pointers placed in the main text.\n\n\\subsection{Common Concerns}\n\\noindent\\hline\n\\vspace*{0.2cm}\n\n\\noindent\\textbf{A1. Simplicity of the Contribution.} \n\\textcolor{blue}{\\textbf{R6}}: \\textit{``[t]he contribution and novelty of this paper are limited since the authors propose a standard application of well-known methods according to a classic algorithmic scheme (the proposed techniques are simple and already proposed in the literature)''}. \\textcolor{auburn}{\\textbf{R8}}: \\textit{``idea of the proposed \"segment level\" augmentation appears more as an engineering, rather than a scientific contribution''}.\n\n\\textbf{Authors}: We respectfully disagree with the reviewers on the requirement that a paper's methodology should be sophisticated or non-engineered: (i) [Computer] Science makes progress by putting together existing tools and methods in novel ways for solving problems, generally improving some performance measures while doing so. In our case, we do use existing tools (segmentation, augmentation) but propose a novel idea for combining them for solving an existing problem with a significantly better performance. (ii) An `engineering' approach can be scientific if the engineered solution is novel. (iii) There exist similar papers at top conferences (``Copy-paste networks...'', ICCV2019 \\cite{lee2019copy}; ``Cut, paste and learn...'', ICCV2017 \\cite{dwibedi2017cut}; ``Simple copy-paste ...'', CVPR2021 \\cite{ghiasi2021simple}) that propose ``simple'' and ``engineered'' augmentation methods or analyze them. Such papers are scientific contributions as they contribute to the progress of [Computer] Science by bringing together existing tools in new ways or by analyzing what was not known before. We strongly believe that our paper is similar, its contributions are scientific, and our novel idea deserves to be seen by the community. Moreover, our novel way of performing \\& analyzing segment-level augmentation has strong potential for inspiring more granular augmentation strategies than those generally used in the literature (see also E2 \\& Supp. Mat. Section S8-C).\n\n\\noindent\\textbf{A2. Quality and Explanation of Segmentation.}\n\n\\textbf{\\textcolor{red}{\\textbf{R4}}}: \\textit{``...quantitative results and performance of segmentation quality are not given''}. \\textbf{\\textcolor{cadmiumgreen}{\\textbf{R7}}}: \\textit{... segmentation errors, limitations and errors should be discussed.}\n\n\\textbf{Authors}: We thank the reviewers for highlighting this important point. Logos are simplistic images composed of regions that are generally homogeneous in color. Therefore, a simple off-the-shelf segmentation algorithm works well for most logos (see Supp. Mat. Section S4 for some examples). Logo segmentation can produce spurious segments, especially in regions with strong color gradients. However, such infrequent cases are not a problem for us because segments corresponding to over-segmentation or under-segmentation do function as segment-level augmentation and this is still useful for training. We provide a discussion about this in Supp. Mat. Section S4.\n\n\n\\noindent\\textbf{A3. Other Datasets}\n\n\\textbf{\\textcolor{red}{\\textbf{R4}}}: \\textit{``I encourage the authors to point out the performance of their method on more benchmark datasets.''}\n\n\\textbf{\\textcolor{blue}{\\textbf{R6}}}: \\textit{``... Please conduct more experiments on other recent datasets.''}\n\n\\textbf{Authors}: With this revision, we've performed experiments with the {LLD - Large Logo Dataset} {\\cite{sage2017logodataset}}. Our results (Supp. Mat. 
Section S2) confirm that performing segment-level augmentation is a working strategy for learning better representations for trademark similarity. These results are obtained without any hyper-parameter tuning, and we anticipate the gap to widen if the hyper-parameters are tuned.\n\n\\input{LLD_table}\n\\subsection{Specific Concerns of \\textcolor{red}{\\textbf{R4}}}\n\\noindent\\hline\n\\vspace*{0.2cm}\n\n\\noindent\\textbf{B1 - \\textcolor{red}{\\textbf{R4}}}: \\textit{``contributions of the paper are not clearly stated, so the novelty remains somewhat questionable.''}\n\n\\textbf{Authors}: The last two paragraphs of the Introduction state our contributions as follows (truncated to save space): \n\n``\\textit{Main Contributions.} Our contributions are as follows:\n\\begin{itemize}\n    \\item We propose a segment-level augmentation for producing artificial similarities between logos. To the best of our knowledge, ours is the first to introduce segment-level augmentation in deep learning. [...]\n    \n    \\item To showcase the use of such a tool to generate artificially similar logos, we use data-hungry deep learning methods, namely, Triplet Loss \\cite{weinberger2009distance} and Smooth AP \\cite{Brown2020eccv}, [...]''\n\\end{itemize}\n\n\\noindent\\textbf{B2 - \\textcolor{red}{\\textbf{R4}}}: \\textit{``...table I and table II are not evident in order to demonstrate the performance. What are the chosen hyperparameters?''}\n\n\\textbf{Authors}: In Table I, we see the effect of applying ranking\/comparison-based losses for logo retrieval. We see that both Triplet Loss and Smooth-AP Loss significantly improve over the baseline. Although these results are new for logo retrieval, the methods are not novel to our paper. In Table II, we show that applying image-level augmentation improves logo retrieval performance, though we obtain a better gain with our proposed way of performing segment-level augmentation. The hyper-parameters are provided in detail in Section IV.A.2.\n\n\\subsection{Specific Concerns of \\textcolor{blue}{\\textbf{R6}}}\n\\noindent\\hline\n\\vspace*{0.2cm}\n\n\\noindent\\textbf{C1 - \\textcolor{blue}{\\textbf{R6}}}: \\textit{``...the authors should compare their contribution with recent deep networks.''}\n\\input{ConvNext_table}\n\\textbf{Authors}: We now compare our method with a very recent CNN architecture, namely, ConvNeXt \\cite{liu2022convnet}. The new results in Supp. Mat. Section S1 confirm our hypothesis that segment-level augmentation performs on par with or improves logo retrieval performance. Our ongoing experiments with ResNext, which could not be completed because ResNext is slower than ResNet or ConvNeXt, also promise a significant gain in favor of using segment-level augmentation (results will be included in the camera-ready version).\n\n\\noindent\\textbf{C2 - \\textcolor{blue}{\\textbf{R6}}}: \\textit{``...justify why ResNet performs worse on logos having text content in IV-B Experiment 1. ... give qualitative and quantitative performance analysis of this observation''}\n\n\\textbf{Authors}: We now provide both qualitative \\& quantitative results on this in Supp. Mat. Section S5, and provide a discussion.\n\n\\noindent\\textbf{C3 - \\textcolor{blue}{\\textbf{R6}}}: \\textit{``...why certain transformations in the data augmentation step are more performant when applying Triplet Loss, while other are better when using Smooth AP.''}\n\n\\textbf{Authors}: This is an important point and we now provide a discussion on this in Supp. Mat. Section S8. 
To provide a summary here: Smooth-AP and Triplet Loss incur different inductive ranking biases, and this leads to different overall behaviors for the methods.\n\n\\noindent\\textbf{C4 - \\textcolor{blue}{\\textbf{R6}}}: \\textit{``...best value of p depends on the type of used transformation and the computed performance evaluation metric.''}\n\n\\textbf{Authors}: The reviewer is right about this point. However, different augmentations incur different biases, and it is not surprising that they contribute to performance differently and with different selection probabilities. We now provide a discussion about this in Supp. Mat. Section S8.\n\n\\noindent\\textbf{C5 - \\textcolor{blue}{\\textbf{R6}}}: \\textit{``...discussion on the complexity (e.g. training time and retrieval response) is missing.''}\n\n\\textbf{Authors}: We now provide a running-time analysis in Supp. Mat. Section S7.\n\n\\noindent\\textbf{C6 - \\textcolor{blue}{\\textbf{R6}}}: \\textit{``The authors should present an error analysis to find the reasons behind the incurred errors by analyzing correlations between the different scenarios.''}\n\n\\textbf{Authors}: We now provide more qualitative analysis on failure cases in Supp. Mat. Sections S4 and S6, and discuss potential reasons.\n\n\\noindent\\textbf{C7 - \\textcolor{blue}{\\textbf{R6}}:} \\textit{``The Triplet Loss function requires preparing a database done by triplets of images, limiting its application to the existing publicly available datasets.''}\n\n\\textbf{Authors}: By taking samples from a class as anchors \\& positive samples, and samples from other classes as negative samples, Triplet Loss can be applied to any classification or retrieval problem in a straightforward manner. This can easily be done on the fly while sampling a batch, without preparing a separate dataset.\n\n\\subsection{Specific Concerns of \\textcolor{cadmiumgreen}{\\textbf{R7}}}\n\\noindent\\hline\n\\vspace*{0.2cm}\n\n\\noindent\\textbf{D1 - \\textcolor{cadmiumgreen}{\\textbf{R7}}}: \\textit{``...Please further discuss the performance of triplet loss and smooth AP loss in Table \\ref{tab:RankingLossesTable}''}\n\n\\textbf{Authors}: We now provide a discussion on this in Supp. Mat. Section S6. To provide a summary here: Smooth-AP and Triplet Loss incur different inductive ranking biases, and this leads to different overall behaviors for the methods.\n\n\\subsection{Specific Concerns of \\textcolor{auburn}{\\textbf{R8}}}\n\\noindent\\hline\n\\vspace*{0.2cm}\n\n\\noindent\\textbf{E1 - \\textcolor{auburn}{\\textbf{R8}}:} ``\\textit{The variety of image-level augmentation techniques, which were used for comparison, is insufficient and the selected techniques themselves are trivial. To improve the impact of that contribution a better comparison with other techniques should be performed. For instance, the are public implementations [1,2].''}\n\n\\textbf{Authors}: We now compare with three more commonly-used image-level augmentation methods (namely, {Cutout, Elastic Transform and Channel Shuffle}) from the library {\\cite{albumentations_paper}} proposed by the reviewer. The new results in Supp. Mat. 
Section S3 confirm our previous results that segment-level augmentation is a better strategy than image-level augmentation for trademark retrieval.\n\n\input{Albumentations_table}\n\n\n\noindent\textbf{E2 - \textcolor{auburn}{\textbf{R8}}}: ``\textit{...limited to a particular logo recognition task}\"\n\n\textbf{Authors}: We agree that our analysis is limited to logo retrieval. However, our method is a viable approach for problems that involve reasoning about similarities at the part level and require transferring knowledge or finding correspondences (e.g. affordances, partial shape similarity) at the level of object parts. This is an interesting research direction and we provide a discussion about this in Supp. Mat. Section S8.\n\n\noindent\textbf{E3 - \textcolor{auburn}{\textbf{R8}}}: \textit{``... III.C.1, III.C.2 is not sufficiently clear.\"} \textbf{Authors}: Sorry about the ambiguity. This was partially due to a mistake in notation (as noted by \textcolor{red}{\textbf{R4}}). We've revised the notation and the text to make them clearer.\n\n\n\noindent\textbf{E4 - \textcolor{auburn}{\textbf{R8}}}: \textit{I can only guess (without confidence) that you just simplify the Smooth AP by removing the ranking of inter-class samples. The Triplet Loss Adaptation seem to be straightforward for the problem. However, some sentences confuse the reader, for instance: \"Optimizing the distances between dissimilar logo instances [...].\"} \textbf{Authors}: We've updated the text and improved the notation. We hope that it is clearer now.\n\n\n\subsection{Minor Comments}\n\noindent\hline\n\vspace*{0.2cm}\n\n\textbf{\textcolor{red}{\textbf{R4}}}: \textit{``Omega notation has been used in two different variables.(logo and minibatch)\"}. \textbf{\textcolor{blue}{\textbf{R6}}}: \textit{``There are many linguistic glitches here\"}. \textbf{\textcolor{auburn}{\textbf{R8}}}: \textit{``The introduction of the data augmentation strategy at the beginning of the work creates the false first impression that it is primarily used to increase the \"samples per class\" value of the dataset. ... this should be better clarified in the III.A or elsewhere before introducing the methodology.\"} \n\n\textbf{Authors}: Thank you for the suggestions and corrections. We've integrated them into the text and highlighted them in red.\n\n\textbf{\textcolor{auburn}{\textbf{R8}}}: \textit{``...the location of Figures 2 and 3 (namely together with the text in double-column format) is not standard\"} \n\n\textbf{Authors}: We agree that such a layout is not ideal or pleasing to the eye; sorry for that. However, it is allowed by the conference template and is common when space is scarce.\n\n\n\n\n\section{Related Work}\n\n\subsection{Logo Retrieval}\nEarlier studies in trademark retrieval \cite{TursunandKalkan} used hand-crafted features as well as deep features extracted using pre-trained networks, and showed that deep features obtain considerably better results. Perez \emph{et al.} \cite{Perez} improved the results by combining two CNNs trained on two different datasets. 
Later, Tursun \\emph{et al.} \\cite{Tursun2019} achieved impressive results by introducing different attention methods to reduce the effect of text regions, and in their most recent work \\cite{Tursun2022}, they introduced different modifications and achieved state-of-the-art results.\n\n\\subsection{Data Augmentation}\nData augmentation \\cite{augmentation2, AugmentationSurvey} is an essential and well-known technique in deep learning to make networks more robust to variations in data. Conventional augmentation methods perform geometric transformations such as zooming, flipping or cropping the entire image. Alternatively, adding noise, random erasing or synthesizing training data \\cite{ImageNet} are key approaches to improve overall model performance. Random Erasing \\cite{RandomErasing} is a recently introduced method that obtains significant improvement on various recognition tasks. Although augmentation methods that focus on cutting and mixing windows \\cite{CutMix,MixUp,imagemix} rather than the whole image are not widely used, they have shown significant gains in performance. \n\nIn logo retrieval, studies generally use conventional augmentation methods. For example, Tursun \\emph{et al.} \\cite{tursun2021learning} applied a reinforcement learning approach to learn an ensemble of test-time data augmentations for trademark retrieval. An exception to such an approach is the study by Tursun \\emph{et al.} \\cite{Tursun2019}, who proposed a method to remove text regions from logos while evaluating similarity. \n\n\\subsection{Differentiable Ranking}\n\nImage or logo retrieval are by definition ranking problems, though ranking is not differentiable. To address this limitation, many solutions have been proposed recently \\cite{FastAP,BlackboxAP,Brown2020eccv}. These approaches mainly optimize Average Precision (AP) with different approximations: For example, Cakir \\emph{et al.} \\cite{FastAP} quantize distances between pairs of instances and use differentiable relaxations for these quantized distances. Rolinek \\emph{et al.} \\cite{BlackboxAP} consider non-differentiable ranking as a black box and use smoothing to estimate suitable gradients for training a network to rank. Finally, Brown \\emph{et al.} \\cite{Brown2020eccv} propose smoothing AP itself to use differentiable operations to train a deep network to rank.\n\n\nThese approximations have been mainly applied to standard retrieval benchmarks. In this paper, we show that differentiable ranking-based loss functions can lead to a performance improvement for logo retrieval as well.\n\n\\subsection{Summary}\n\nLooking at the studies in the literature, we observe that \\textbf{(1)} No study has performed segment-level augmentation either for logo retrieval or for general recognition or retrieval problems. The closest study for this research direction is the study by Tursun \\emph{et al.} \\cite{Tursun2019}, which just removed text regions in logos while evaluating similarity. \\textbf{(2)} Promising deep learning approaches such as metric learning using e.g. Triplet Loss and differentiable ranking have not been employed for logo retrieval.\n\\section{Experiments with ConvNeXt, a Stronger Backbone}\n\nIn Table \\ref{tab:RebuttalConvNextTable}, we evaluate our novel segment-level augmentation strategy using ConvNeXt \\cite{liu2022convnet}, a stronger, recent, fast backbone that can compete with transformer-based architectures. 
Although ConvNeXt provides significantly better performance than the ResNet baseline, Table \ref{tab:RebuttalConvNextTable} confirms the finding of our ResNet experiments that segment-level augmentation can improve logo retrieval performance. \n\n\input{table_convnet}\n\n\section{Experiments with a Different Dataset}\n\nWithout tuning, we repeat our experiments using a different logo retrieval dataset, namely the Large Logo Dataset (LLD) \cite{sage2017logodataset}. LLD has {61380} logos for training and {61540} logos for testing. The LLD dataset does not provide a query set with known similarities; therefore, for the experiments on LLD, we use the query set of the METU dataset to find similarities in the LLD dataset. \n\nThe results in Table \ref{tab:LLDtable} show that segment-level augmentation performs better than image-level augmentation in the R@1 measure and on par in terms of the NAR and R@8 measures. Considering that LLD is a smaller dataset than METU, we believe that segment-level augmentation can provide a larger margin in performance if tuned.\n\n\input{table_LLD}\n\n\section{Comparisons with Different Image-level Augmentations}\n\nWe now compare segment-level augmentation with more image-level augmentation methods. We have selected the most commonly used image-level augmentation methods ({Cutout, Elastic Transform and Channel Shuffle}) from \cite{albumentations_paper}. For this experiment, we use the same selection probability and setup as in Section IV.C. of the main paper. The results in Table \ref{tab:albumentations} show that, \textbf{without any tuning}, segment-level augmentation performs better than this new set of image-level augmentations in terms of the NAR and R@8 measures, whereas it is inferior in terms of the R@1 measure. With tuning, segment-level augmentation has the potential to perform better.\n\n\input{table_more_augmentation}\n\n\section{An Analysis of Segmentation Maps}\n\nLogos are simple images composed of regions that are generally homogeneous in color. Therefore, a simple off-the-shelf segmentation algorithm generally works well for most logos (see Figure \ref{seg_fig_failure_cases} for some examples). Logo segmentation can produce spurious segments, especially in regions with strong color gradients. However, this is not a problem for us because segments resulting from over-segmentation or under-segmentation still function as a form of segment-level augmentation, which remains useful for training.\n\nFigure \ref{seg_fig_failure_cases} displays edge cases where our segment-level augmentation produces drastically different logos for which computing similarity with the original logos is highly challenging. These cases happen if there are only a few segments of comparable size in a logo and one of them is removed, or if the segmentation method under-segments the image and a segment that groups multiple regions is removed, or if the background segment is selected for augmentation and rotated. We did not try to address these edge cases as they are not frequent. We believe that they are useful stochastic perturbations and helpful to the training dynamics. We leave an analysis of this and improving the quality of segmentation \& augmentation as future work.\n\n\begin{figure}[!h]\n    \centering\n    \includegraphics[width=0.9\columnwidth]{augmentation_failures.pdf}\n    \caption{Edge cases in our segment-level augmentation. 
The first is a result of under-segmentation, where connected petals with similar colors are grouped into a single segment; therefore, changing its color produces a distinct logo. The second case happens because the logo has two main segments and removing one results in a drastically different logo. The third is a case where a background segment is rotated and the outcome is an overlay of two segments.}\n    \label{seg_fig_failure_cases}\n\end{figure}\n\n\section{The Effect of Text in Logos}\n\nIn this experiment, we analyze the effect of text on logo similarity. The visual results in Table \ref{tab:RebuttalLogoAnalysis} and the quantitative analysis in Table \ref{tab:LogoTypeAnalysis} show that, when used as queries, logos with text are more difficult to match to the logos in the dataset, and the best and average ranks of similar logos are very high. This is not surprising since representations in deep networks capture text as well, and unless additional mechanisms are used to ignore text regions, they are taken into account while measuring similarity. The effect of text on logo retrieval is already known in the literature and soft or hard mechanisms can be used to remove text from logos -- see e.g. \cite{tursun2020learning,Kalkan}.\n\n\input{logo_analysis}\n\n\n\section{Visual Retrieval Results}\n\nFigure \ref{fig:visual_results} displays sample retrieval results for different queries. We see in Figure \ref{fig:visual_results}(a) and (b) that segment-level augmentation is able to provide better retrieval than its competitors. Figure \ref{fig:visual_results}(c) displays a failure case where a query with an unusual color distribution is used. We see that all methods are adversely affected by this; however, segment-level augmentation is able to retrieve logos with similar color distributions.\n\n\begin{figure*}[!h]\n\centerline{\n    \subfigure[]{\n    \includegraphics[width=0.65\textwidth]{final_visual_results_1.pdf}\n}}\n\centerline{\n    \subfigure[]{\n    \includegraphics[width=0.65\textwidth]{final_visual_results_2.pdf}\n}}\n\centerline{\n    \subfigure[]{\n    \includegraphics[width=0.65\textwidth]{final_visual_results_3_woConvNext.pdf}\n}} \n\caption{Visual results on the METU dataset for the methods at their best settings. \textbf{(a-b)} Example cases where segment-level augmentation produces better retrieval than image-level augmentation. \textbf{(c)} A negative result for a query with an unusual intensity distribution, which affects all methods adversely. Even so, we observe that segment-level augmentation is able to retrieve logos with similar color distributions.\n\label{fig:visual_results}}\n\end{figure*}\n\n\section{Running-Time Analysis}\n\nTable \ref{tab:RebuttalTimeComplexity} provides a running-time analysis of segment-level and image-level augmentation strategies. We observe that the time spent on segmentation (0.4ms) and Segment Color Change (2.4ms) is comparable to that of the image-level transformation Horizontal Flip (2.8ms). However, Segment Removal and Segment Rotation take significantly more time. 
It is important to note that our implementation is not optimized for efficiency.\n\n\input{time_complexity}\n\n\n\section{Discussion}\n\n\subsection{Comparing Triplet Loss and Smooth-AP Loss}\n\nOur results on the METU and LLD datasets lead to two interesting findings, which we discuss below: \n\n\begin{itemize}\n \item \textit{Finding 1: Triplet Loss is better in terms of R@1 and R@8 measures whereas Smooth-AP is better in terms of NAR measure.}\n \item \textit{Finding 2: Smooth-AP provides its best NAR performance with Color Change whereas Triplet Loss provides its best with Color Change \& Segment Removal.}\n\end{itemize}\n\nTriplet Loss by definition optimizes (learns) a distance metric and is considered a surrogate ranking objective. On the other hand, Smooth-AP directly aims to optimize Average Precision, a ranking measure; therefore, it pursues a more global ranking objective than Triplet Loss, which works at the level of triplets only. See also Brown et al. \cite{Brown2020eccv}, who discussed and contrasted Triplet Loss with Smooth-AP Loss.\n\nThe two objectives provide different inductive biases to the learning mechanism and therefore, we see differences in their performance with respect to different measures and augmentation strategies. For example, Finding 1 is likely to arise because NAR considers all logos, whereas R@1 and R@8 only consider logos at the top of the ranking, which can easily be learned to be ranked at the top using local similarity arrangements as in Triplet Loss. Moreover, Finding 2 occurs because different augmentation strategies incur different rankings among logos and the inductive biases of Triplet Loss and Smooth-AP handle them differently.\n\nWe believe that this is an interesting research question that we leave as future work.\n\n\subsection{Different Effects and Selection Probabilities for Segment-Augmentations}\n\nThe results in Table IV of the main paper show that different augmentations contribute to performance differently and with different selection probabilities. For example, we see that Color Change (with $p=0.5$) gives the best performance in terms of NAR and R@8, whereas Rotation (with $p=0.75$) provides the best performance for R@1, and Removal improves over the baseline only in terms of the R@8 measure. \n\nWe attribute these differences to the fact that the different segment-level augmentations incur different biases: Color Change enforces invariance to perturbations in color at the segment level, whereas Segment Rotation and Removal encourage invariance to changes in the spatial layout of the shape. \n\n\subsection{Applicability to Other Problems}\n\nWe agree that our analysis is limited to logo retrieval. However, the idea of segment-level augmentation is a viable approach for problems that involve reasoning about similarities at the level of object parts and require transferring knowledge across parts. One good example is reasoning about affordances of objects \cite{zhu2014reasoning,myers2015affordance,myers2014affordance}, where supported functions of object parts can be transferred across objects having similar parts. Another example is reasoning about similarity between shapes that have partial overlap \cite{leonard20162d,latecki2000shape}, where correspondences between parts of shapes need to be calculated. In either example, the specific segment-level augmentation methods may have to be adjusted to the specific problem. 
For example, performing affine transformations on the segments may be helpful for problems with real-world objects or images.\n\n\n\n\bibliographystyle{.\/IEEEtran}
\n\n\section{Conclusion}\n\nWe introduced a novel data augmentation method based on image segments for training neural networks for logo retrieval. We performed segment-level augmentation by identifying segments in a logo and applying transformations to selected segments. Experiments conducted on the METU \cite{Kalkan} and LLD \cite{sage2017logodataset} datasets with ResNet \cite{ResNet} and ConvNeXt \cite{liu2022convnet} backbones suggest significant improvements on two evaluation measures of ranking performance. Moreover, we used metric learning and differentiable ranking with the proposed segment-augmentation method to demonstrate that it can lead to a further boost in ranking performance.\n\nWe note that our segment-level augmentation strategy generates similarities between logos that are rather simplistic: It is based on the assumption that two similar logos differ from each other in terms of certain segments having differences in color, orientation and presence. An important research direction is exploring more sophisticated augmentation strategies for introducing artificial similarities. However, our results suggest that even such a simplistic strategy can improve the retrieval performance significantly and therefore, our study can be considered as a first step towards developing better segment\/part-level augmentation strategies.\n\section{Experiments and Results}\n\nWe now evaluate the performance of the proposed segment-augmentation strategy and its use with Triplet Loss and Smooth-AP Loss. \n\n\subsection{Experimental and Implementation Details}\n\n\subsubsection{Dataset}\n\nWe use the METU Dataset \cite{Kalkan}, which is one of the largest publicly available logo retrieval datasets. The dataset is composed of more than 900K authentic logos belonging to actual companies worldwide. Moreover, it includes query sets, i.e. sets of similar logos, of varying difficulty, allowing logo retrieval researchers to benchmark their methods against others. We have used 411K training images, 413K test images, and 418 query images.\n\n\begin{comment}\n\begin{table}[htb]\n\centering\n\begin{tabular}{@{}ll@{}}\n\toprule\n\textbf{Aspect} & \textbf{Value} \\ \midrule\nTrademarks & 923,343 \\\nUnique Register Firms & 409,675 \\\nUnique Trademarks & 687,842 \\\nTrademarks Containing Text Only & 583,715 \\\nTrademarks Containing Figure Only & 19,214 \\\nTrademarks Containing Figure and Text & 310,804 \\\nOther Trademarks & 9,610 \\\nImage Format & JPEG \\\nMax Resolution & 1800 $\times$ 1800 pixels \\\nMin Resolution & 30 $\times$ 30 pixels \\ \bottomrule\n\addlinespace\n\end{tabular}\n\caption{\label{tab:table-name} Details of METU Dataset \cite{Kalkan}.}\n\end{table}\n\end{comment}\n\n\subsubsection{Training and Implementation Details}\n\nFor every experiment discussed below, we use an ImageNet \cite{ImageNet} pre-trained ResNet50 \cite{ResNet} as our backbone architecture, with a final 512-dimensional linear layer instead of a Softmax layer. We use the Adam optimizer with tuned hyper-parameters: $10^{-7}$ for the learning rate and 256 for the batch size.\n\n\n\subsubsection{Evaluation Measures}\n\nFollowing the earlier studies \cite{Kalkan,Tursun2019}, we use {Normalized Average Rank} (NAR) and {Recall@K} for quantifying the performance of the methods. 
NAR is calculated as:\n\begin{equation}\n    {NAR = \frac{1}{N\times N_{rel}} \left(\sum_{i=1}^{N_{rel}} R_i - \frac{N_{rel}(N_{rel}+ 1)}{2} \right)},\n\end{equation}\nwhere $N_{rel}$ is the number of similar images for a particular query image; $N$ is the size of the image set; and $R_i$ is the rank of the $i^{th}$ similar image. NAR lies in the range $[0,1]$, where $0$ denotes the perfect score, and $1$ the worst. Recall@K (R@K) is the recall computed over the top-K retrieved logos.\n\n\begin{table}[hbt!]\n\centering\n\caption{\label{tab:RankingLossesTable} The effect of using Triplet Loss and Smooth-AP Loss for logo retrieval. Neither image-level nor segment-level augmentation is used for any method in this table.}\n\begin{tabular}{@{}llll@{}}\n\toprule\n\textbf{Method} & \textbf{NAR} $\downarrow$ & \textbf{Recall@1} $\uparrow$ & \textbf{Recall@8} $\uparrow$ \\ \midrule\nBaseline & 0.102 & 0.310 & 0.536 \\ \midrule\nTriplet Loss & 0.053 & \textbf{0.344} & \textbf{0.586} \\ \midrule\nSmooth-AP Loss & \textbf{0.046} & 0.339 & 0.581 \\ \bottomrule\n\addlinespace\n\end{tabular}\n\end{table}\n\n\subsection{Experiment 1: Effect of Ranking Losses}\n\nBefore analyzing the effect of segment-level augmentation, in this section, we first provide a stand-alone analysis to illustrate the effect of the ranking losses. We compare Triplet Loss and Smooth-AP Loss with a baseline that compares features extracted with the pre-trained ResNet50 backbone using Cosine Similarity. For this analysis, no image-level or segment-level augmentations are used, except for the Random Resized Crop to fit the images to the expected resolution of the network, i.e. $224 \times 224$. \n\nThe results in Table \ref{tab:RankingLossesTable} suggest that both loss adaptations provide a significant performance improvement in both NAR and Recall measures, with the Smooth-AP adaptation achieving the best NAR. Applying Cosine Similarity on off-the-shelf ResNet50 features shows adequate results on logo instances without text; however, it performs worse on logos with text (see Appendix E). \n\nIt is evident that the improvement in Recall is not as pronounced as in NAR. This difference indicates that the adapted loss functions strongly affect the overall rankings of the known similar instances. However, these effects are not completely reflected in Recall because of the selected K$=8$ value. \n\n\begin{table}[ht]\n\centering\n\caption{\label{tab:BestAugmentationResults} The effect of image-level (H. Flip, V. Flip) and segment-level augmentation. Only the best augmentation strategies are reported. See Section \ref{sect:ablation} for an ablation analysis.}\n\begin{tabular}{@{}llll@{}}\n\toprule\n\textbf{Method} & \textbf{NAR} $\downarrow$ & \textbf{Recall@1} $\uparrow$ & \textbf{Recall@8} $\uparrow$ \\ \midrule\n\nBaseline\\\n(No augmentation) & 0.102 & 0.310 & 0.536 \\ \midrule\midrule\nTriplet Loss\\\n(No augmentation) & 0.053 & 0.344 & 0.586 \\ \midrule\nTriplet Loss\\\n(Image-level aug.) & 0.051 & 0.354 & 0.596 \\ \midrule\nTriplet Loss\\\n(S. Color, S. Removal) & 0.046 & \textbf{0.374} & \textbf{0.640} \\ \midrule\midrule\nSmooth-AP Loss\\\n(No augmentation) & 0.046 & 0.339 & 0.581 \\ \midrule\nSmooth-AP Loss\\\n(Image-level aug.) & 0.044 & 0.339 & 0.596 \\ \midrule\nSmooth-AP Loss \\\n(S. Color) & \textbf{0.040} & 0.354 & 0.610 \\ \bottomrule\n\addlinespace\n\end{tabular}\n\end{table}\n\n\begin{table}[hbt!]\n\centering\n\caption{\label{tab:SOTATable} Normalized Average Rank (NAR) values for previous state-of-the-art results on the METU dataset. The results are not comparable as the methods differ in their backbones, training datasets, or training regime. For some methods, these details are not even reported. See the text for details.}\n\begin{tabular}{@{}ll@{}}\n\toprule\n\textbf{Method} & \textbf{NAR} $\downarrow$\\ \midrule\nHand-crafted Features (Feng \emph{et al.} \cite{Feng}) & 0.083 \\ \midrule\nHand-crafted Features (Tursun \emph{et al.} \cite{Kalkan}) & 0.062 \\ \midrule\nOff-the-shelf Deep Features (Tursun \emph{et al.} \cite{TursunandKalkan}) & 0.086 \\ \midrule\nTransfer Learning (Perez \emph{et al.} \cite{Perez}) & 0.047 \\ \midrule\nComponent-based attention (SPoC \cite{SPOC}, \cite{Tursun2019}) & 0.120 \\ \midrule\nComponent-based attention (CRoW \cite{CRoW}, \cite{Tursun2019}) & 0.140 \\ \midrule\nComponent-based attention (R-MAC \cite{MAC}, \cite{Tursun2019}) & 0.072 \\ \midrule\nComponent-based attention (MAC \cite{MAC}, \cite{Tursun2019}) & 0.120 \\ \midrule\nComponent-based attention (Jimenez \cite{Jimenez}, \cite{Tursun2019}) & 0.093 \\ \midrule\nComponent-based attention (CAM MAC \cite{Tursun2019}) & 0.064 \\ \midrule\nComponent-based attention (ATR MAC \cite{Tursun2019}) & 0.056 \\ \midrule\nComponent-based attention (ATR R-MAC \cite{Tursun2019}) & 0.063 \\ \midrule\nComponent-based attention (ATR CAM MAC \cite{Tursun2019}) & 0.040 \\ \midrule\nMR-R-MAC w\/UAR (Tursun \emph{et al.} \cite{Tursun2022}) & {0.028} \\ \midrule\nSegment-Augm. (Color Change) with Smooth-AP (Ours) & {0.040} \\\n\bottomrule\n\addlinespace\n\end{tabular}\n\end{table}\n\n\n\subsection{Experiment 2: Effect of Segment Augmentation}\n\nWe now compare our segment-based augmentation methods with the conventional image-level augmentation techniques on the METU dataset \cite{Kalkan}. In every experiment, we resize the images with Random Resized Crop to fit them to the expected resolution of the network, i.e. 224$\times$224. For both segment-based and image-level augmentation, the same number of images is augmented, and a probability of $p=0.5$ is used for selecting a certain transformation.\n \nWe provide a comparison between the best-performing image-level and segment-level augmentation methods in Table \ref{tab:BestAugmentationResults}. We see that image-level augmentation can improve ranking performance. However, the results suggest that segment-level augmentation provides a significantly better gain both in terms of NAR and R@K measures. A detailed comparison between image-level and segment-level methods is provided in the ablation study (Section \ref{sect:ablation}).\n\n\n\n\n\subsection{Experiment 3: Comparison with State of the Art}\n\nWe compare our method with the state-of-the-art methods on the METU dataset \cite{Kalkan}. It is important to note that a fair comparison between the methods is not possible because they differ in their backbones, training datasets or dataset splits, or training time. For some papers, even these details are missing; e.g. for \cite{Tursun2022}, which reports the best NAR performance. 
Therefore, we list the results in Table \ref{tab:SOTATable}, and refrain from drawing conclusions.\n\n\begin{table}[ht]\n    \centering\n    \caption{\label{tab:ProbabilityTable} The effect of the probability $p$ of augmenting a selected segment, with Smooth-AP Loss.}\n    \begin{tabular}{@{}ccccccc@{}}\n    \toprule\n    Color C. & Rotation & Removal & \textbf{$p$} & \textbf{NAR $\downarrow$} & \textbf{R@1 $\uparrow$} & \textbf{R@8 $\uparrow$} \\ \midrule \midrule\n    \multicolumn{3}{c}{\textit{Baseline}} & 0 & 0.046 & 0.339 & 0.581 \\ \midrule \midrule\n    \checkmark & & & 0.5 & \textbf{0.040} & 0.354 & \textbf{0.610} \\ \midrule\n    \checkmark & & & 0.75 & 0.043 & 0.354 & 0.601 \\ \midrule\n    \checkmark & & & 1.0 & 0.049 & 0.325 & 0.566 \\ \midrule \midrule\n    & \checkmark & & 0.5 & 0.048 & 0.344 & 0.601 \\ \midrule\n    & \checkmark & & 0.75 & 0.049 & \textbf{0.369} & 0.591 \\ \midrule\n    & \checkmark & & 1.0 & 0.060 & 0.330 & 0.556 \\ \midrule \midrule\n    & & \checkmark & 0.5 & 0.048 & 0.339 & 0.596 \\ \midrule\n    & & \checkmark & 0.75 & 0.047 & 0.344 & 0.571 \\ \midrule\n    & & \checkmark & 1.0 & 0.047 & 0.344 & 0.591 \\\n    \bottomrule\n    \addlinespace\n    \end{tabular}\n    \end{table}\n\n\n\input{buyuk_tablo}\n\n\subsection{Experiment 4: Ablation Study}\n\label{sect:ablation}\n\n\subsubsection{Choice of Hyper-Parameters}\n\nOur segment-level augmentation has two hyper-parameters: the number of segments, $n$, selected for augmentation, and the probability, $p$, of applying a selected augmentation. Table \ref{tab:ProbabilityTable} shows that the best performance is obtained with $p=0.5$. A similar analysis for $n$ (with values 1, 2, $L\/3$ and $L\/2$, where $L$ is the number of segments in a logo) yielded the best performance at $n=L\/3$. 
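\n\nTo make the roles of these two hyper-parameters concrete, the following is a minimal sketch of how $n$ and $p$ enter the augmentation loop. It is illustrative only: the helper names (\texttt{augment\_logo}, \texttt{transforms}) are hypothetical and this is not our exact implementation.\n\begin{verbatim}\nimport random\n\ndef augment_logo(image, segments, transforms, n=2, p=0.5):\n    # Sketch: segment-level augmentation of one logo.\n    #   segments   -- list of segment masks (background excluded)\n    #   transforms -- callables such as color_change, removal, rotation\n    #   n          -- number of segments selected for augmentation\n    #   p          -- probability of applying a selected augmentation\n    chosen = random.sample(segments, k=min(n, len(segments)))\n    for seg in chosen:\n        for t in transforms:\n            if random.random() < p:  # each transform fires with prob. p\n                image = t(image, seg)\n    return image\n\end{verbatim}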
\n\n\\subsubsection{Effects of Individual Augmentation Methods}\n\nTables \\ref{tab:SmoothAP-DataAugmentation} (Smooth-AP Loss) and \\ref{tab:TripletLoss-DataAugmentation} (Triplet Loss) list the effects of both image-level and segment-level augmentation. The tables show that, among the segment-level augmentation methods, (Segment) `Color Change' outperforms the others for both loss functions. With Triplet Loss adaptation, (Segment) `Removal' and `Rotation' provide slightly better NAR values than the baseline. Another point worth mentioning is that combining (Segment) `Rotation' or `Removal' degrades the NAR performance measure whereas the combination of (Segment) `Removal' and `Color Change' yields the best result at $Recall@8$.\n\n\\subsubsection{Experiments with a Different Backbone}\n\nSection A in the Appendix provides an analysis using ConvNeXt \\cite{liu2022convnet}, a recent, fast and strong backbone competing with transformer-based architectures. Our results without any hyper-parameter tuning are comparable to the baseline or better than the baseline with the R@8 measure.\n\n\\subsubsection{Experiments with a Different Dataset}\n\nSection B in the Appendix reports results on the LLD dataset \\cite{sage2017logodataset} that confirm our analysis on the METU dataset: We observe that segment-level augmentation provides significant gains for all measures.\n\n\n\n\\subsection{Experiment 5: Visual Results}\n\nSection F in the Appendix provides sample retrieval results for several query logos for the baseline as well the adaptations of Triplet Loss and Smooth-AP Loss with our segment-level augmentation methods. The visual results also confirm that segment augmentation with our Smooth-AP adaptation performs best.\n\n\n\\section{Introduction}\n\n\\begin{figure}[hbt!]\n\\centering\n\\includegraphics[width=0.99\\columnwidth]{Figures\/teaser.pdf}\n\\caption{(a) Conventional data augmentation approaches apply transformations at the image level. (b) We propose segment-level augmentation as a more suitable approach for problems like logo retrieval.\n\\label{fig_teaser}}\n\\end{figure}\n\n\nWith the rapid increase in companies founded worldwide and the fierce competition among them globally, the identification of companies with their logos has become more pronounced, and it has become more paramount to check similarities between logos to prevent trademark infringements. Checking trademark infringements is generally performed manually by experts, which can be sub-optimal due to human-caused errors and time-consuming as it takes days to make a decision. Therefore, automatically identifying similar logos using content-based image processing techniques is crucial.\n\nFor a query logo, identifying the similar ones in a database of logos is a content-based image retrieval problem. With the rise in deep learning, there have been many studies that have used deep learning for logo retrieval problem \\cite{Tursun2019, Feng, unsupervisedattention, Tursun2022, Perez}. Existing approaches generally rely on extracting features of logos and ranking them according to a suitable distance metric \\cite{TursunandKalkan, Tursun2019, Perez}.\n\nLogo retrieval is a challenging problem especially for two main reasons: (i) Similarity between logos is highly subjective, and similarity can occur at different levels, e.g., texture, color, segments and their combination etc. (ii) The amount of known similar logos is limited. We hypothesize that this has limited the use of more modern deep learning solutions, e.g. 
metric learning, contrastive learning, differentiable ranking, as they require a tremendous amount of positive pairs (similar logos) as well as negative pairs (dissimilar logos) for training deep networks.\n\nIn this paper, we address these challenges by (i) proposing a segment-level augmentation to produce artificially similar logos and (ii) using metric learning (Triplet Loss \cite{weinberger2009distance}) and differentiable ranking (Smooth Average Precision (AP) Loss \cite{Brown2020eccv}) as a proof of concept that, with our novel segment-augmentation method, such data-hungry techniques can be trained better. \n\n\textit{Main Contributions.} Our contributions are as follows:\n\begin{itemize}\n \item We propose a segment-level augmentation for producing artificial similarities between logos. To the best of our knowledge, ours is the first to introduce segment-level augmentation into deep learning. Unlike image-level augmentation methods that transform the overall image, we identify segments in a logo and make transformations at the segment level. Our results suggest that this is more suitable than image-level augmentation for logo retrieval.\n \n \item To showcase the use of such a tool to generate artificially similar logos, we use data-hungry deep learning methods, namely, Triplet Loss \cite{weinberger2009distance} and Smooth-AP Loss \cite{Brown2020eccv}, to show that our novel segment-augmentation method can indeed yield better retrieval performance. To the best of our knowledge, ours is the first to use such methods for logo retrieval.\n\end{itemize}\n\n\n\n\n\n\n\n\n\n\section{Method}\n\nIn this section, after providing a definition for logo retrieval, we present our novel segment-based augmentation method and how we use it with deep metric learning and differentiable ranking approaches.\n\n\begin{wrapfigure}{r}{0.45\columnwidth}\n\centering\n\includegraphics[width=0.45\columnwidth]{Figures\/sample_augmentations.pdf}\n\caption{Examples for our segment-level augmentation.}\n\label{fig_augmentation}\n\end{wrapfigure}\n\n\subsection{Problem Definition}\n\nGiven an input query logo $I_q$, logo retrieval aims to rank all logos in a retrieval set $\Omega = \{I_i\}$, $i \in \{0, 1, \ldots, N\}$, based on their similarity to the query $I_q$. To be able to evaluate retrieval performance and to train a deep network that relies on known similarities, we require each $I_q$ to have a set of positive (similar) logos, $\Omega^+(I_q)$, and a set of negative (dissimilar) logos, $\Omega^-(I_q)$. Note that logo retrieval, defined as such, does not have the notion of classes found in a classification setting.\n\n\n\subsection{Segment-level Augmentation for Logo Retrieval}\n\nWe perform segment-level augmentation by following these steps: (i) Logo segmentation, (ii) segment selection, and (iii) segment transformation. See Figure \ref{fig_augmentation} for some samples.\n\begin{wrapfigure}{r}{0.45\columnwidth}\n    \centering\n    \vspace*{-4cm}\n    \includegraphics[width=0.45\columnwidth]{Figures\/segmentation.pdf}\n    \caption{Sample segmentation results. Each segment is represented with a different color.}\n    \label{fig_segmentation}\n\end{wrapfigure}\n\n\n\subsubsection{Logo Segmentation}\n\nThere are many sophisticated segmentation approaches available in the literature. Since logo images have relatively simpler regions compared to natural images, we observed that a simple and computationally-cheap approach using standard connected-component labeling is sufficient for extracting logo segments. 
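\n\nAs an illustration, the following is a minimal sketch of such a segmentation step. It is a sketch under our own assumptions (connected components over coarsely quantized colors, via \texttt{scipy.ndimage.label}), not a specification of the exact pipeline.\n\begin{verbatim}\nimport numpy as np\nfrom scipy import ndimage\n\ndef segment_logo(img, bits=3):\n    # Sketch: extract logo segments as connected components of\n    # coarsely quantized colors (img: HxWx3 uint8 array).\n    levels = 1 << bits                       # quantization levels per channel\n    q = (img >> (8 - bits)).astype(np.int32) # keep the top bits of each channel\n    # Collapse the three channels into one per-pixel color id.\n    ids = (q[..., 0] * levels + q[..., 1]) * levels + q[..., 2]\n    masks = []\n    for value in np.unique(ids):\n        # Label the connected components of each color separately.\n        labeled, num = ndimage.label(ids == value)\n        masks.extend(labeled == i for i in range(1, num + 1))\n    return masks  # one boolean mask per segment\n\end{verbatim}\n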
See Figure \\ref{fig_segmentation} for some samples, and Supp. Mat. Section S4 for more samples and a discussion on the effect of segmentation quality.\n\n\n\n\\subsubsection{Segment Selection}\n\nThe next step is to select $n$ random segments to apply transformations on them. Segment selection is a process that should be evaluated carefully since the number of segments or the area for each segment is not the same for each logo. Simplicity of logo instances also affects the number of available components and many logo instances have less than five components. Therefore, the choice of $n$ can have drastic effects especially when the number of components in a logo is small, especially for the `segment removal' transformation. For this reason, `segment removal' is not applied to a segment with the largest area, and $n$ is chosen to be small values. We present an ablation study to evaluate the effect of $n$ on model performance for the introduced augmentation strategies. For the same reason, the background component is removed from available segments for augmentation.\n\n\\subsubsection{Segment Transformation}\n\nFor each selected segment $S$, the following are performed with probability $p$:\n\\begin{itemize}\n \\item `(Segment) Color change': Every pixel in $S$ is assigned to a randomly selected color. \n \\item `Segment removal': Pixel values in $S$ are set to the same value of the background component.\n \\item `Segment rotation': We first select a segment and create a mask for the segment. The mask image and the corresponding segment pixels are rotated with a random angle in [-90, 90]. Then, the rotated segment is combined with the other segments. See also Figure \\ref{fig_segment_rotation} for an example.\n\\end{itemize}\n\n\\begin{figure}[!h]\n \\centering\n \\includegraphics[width=0.9\\columnwidth]{Figures\/rotation_steps.pdf}\n \\caption{The steps of rotating a segment.}\n \\label{fig_segment_rotation}\n\\end{figure}\n\nSee Figure \\ref{fig_augmentation} for some sample augmentations.\n\n\n\n\n\\subsection{Adapting Ranking Losses for Logo Retrieval}\n\n\\subsubsection{Mini-batch Sampling for Training}\n\\label{sect:minibatch}\nFor training the deep networks, we construct the batches as follows, similar to but different from \\cite{Brown2020eccv} as we do not have classes: Each mini-batch $B$ with size $|B|$ is constructed with two sub-sets: the similar set $B^{+}$ and the dissimilar set $B^{-}$. The similar logo set $B^{+}$ consists of logos that are known to be similar to each other (this information is available in the dataset \\cite{Kalkan}; logos with known similarities are provided as the ``query set''), and $B^{-}$ contains logos that are dissimilar to the logos in $B^{+}$ (to be specific, logos other than the query set of the dataset are randomly sampled for $B^{-}$). The size of $B^{+}$ is set to $4$, and that of $B^{-}$ is $|B|-4$. For training the network, every $I \\in B^{+}$ has label as ``1'' and $I \\in B^{-}$ has label ``0''.\n\n\n\n\n\\subsubsection{Smooth-AP Adaptation}\n\nSmooth-AP \\cite{Brown2020eccv} is a ranking-based differentiable loss function, approximating AP. The main aspect of this approximation is to replace discrete counting operation (the indicator function) in the non-differentiable AP with a Sigmoid function. Brown et al. \\cite{Brown2020eccv} applied their study to standard retrieval benchmarks such as Stanford Online Products \\cite{song2016deep}, VGGFace2 \\cite{DBLP:journals\/corr\/abs-1710-08092} and VehicleID \\cite{liu2016deep}. 
However, the logo retrieval problem requires a dataset with a different structure, as there is no notion of classes as in the Stanford Online Products \cite{song2016deep}, VGGFace2 \cite{DBLP:journals\/corr\/abs-1710-08092} and VehicleID \cite{liu2016deep} datasets. Hence, Smooth-AP cannot be applied directly to our problem.\n\nThe first adaptation concerns the structure of the mini-batch sampling. In Smooth-AP, Brown et al. explain that they \textit{``formed each mini-batch by randomly sampling classes such that each represented class has P samples per class''} \cite{Brown2020eccv}. Standard retrieval benchmarks have a notion of classes and are assumed to have sufficient instances per class to distribute among the mini-batches; however, there are not enough instances for known similarity ``classes'' in logo retrieval. This difference requires an adaptation in both the sampling and the calculation of the loss. Smooth-AP Loss is calculated as follows \cite{Brown2020eccv}:\n\begin{equation}\n{\mathcal{L}_{AP}} = {\frac{1}{C}} {\sum_{k=1}^{C} (1-\tilde{AP}_{k})} ,\n\end{equation}\nwhere $C$ is the number of \textit{classes} and $\tilde{AP}_k$ is the smoothed AP calculated for each class in the mini-batch with their Sigmoid-based smoothing method.\n\nOur mini-batch sampling (Section \ref{sect:minibatch}) causes a natural contradiction because our batches only contain two ``classes'': ``similar'' and ``dissimilar''; moreover, the ``dissimilar'' class should not be included in the calculation of the loss. Dissimilar instances have the same label (``0''), but that does not mean they belong to the same class; they are just not similar to the similar logo set $B^+$ in the mini-batch.\nHence, the ranking among $B^-$ does not matter in our case. This difference in the batch construction and in the notion of classes leads to our second adaptation: the AP approximation is calculated only for the known ``similar'' class (logos in $B^+$). Therefore, the loss calculation becomes:\n\begin{equation}\n    \mathcal{L}^+_{AP} = 1-{\tilde{AP}_+},\n\end{equation}\nwhere $\tilde{AP}_+$ is calculated (approximated) in the same way as in the original paper \cite{Brown2020eccv}. \n\n\subsubsection{Triplet Loss Adaptation}\n\nTriplet Loss \cite{weinberger2009distance} is a well-known loss function used in many computer vision problems. Triplet Loss is differentiable, but, unlike Smooth-AP Loss \cite{Brown2020eccv}, rather than optimizing ranking, it optimizes the distances between positive pairs and negative pairs of instances. In this paper, for each mini-batch, triplets consist of one ``anchor'' instance, one positive instance, and one negative instance. For the same reasons as discussed for the Smooth-AP adaptation, only the instances of known similarity classes can be used as the anchor instance. Optimizing the distances between dissimilar logo instances is not sensible because, as discussed in the previous section, instances of the dissimilar logos do not have any known similarity between them. Thus, the triplet loss calculation is limited to the triplets that contain known similar instances as the ``anchor'' instance.
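\n\nAs a minimal illustrative sketch of this restriction (assuming a mini-batch constructed as in Section \ref{sect:minibatch}, with label 1 for $B^{+}$ and label 0 for $B^{-}$; the helper below is hypothetical and not our exact implementation):\n\begin{verbatim}\nimport torch\nimport torch.nn.functional as F\n\ndef adapted_triplet_loss(emb, labels, margin=0.2):\n    # Triplet loss restricted to anchors and positives from B+.\n    #   emb    -- (|B|, d) embeddings of the mini-batch\n    #   labels -- (|B|,) tensor; 1 for B+ (known-similar), 0 for B-\n    pos = torch.nonzero(labels == 1).flatten()\n    neg = torch.nonzero(labels == 0).flatten()\n    losses = []\n    for a in pos:                     # anchors only from B+\n        for p in pos:\n            if p == a:\n                continue              # positive must differ from anchor\n            for n in neg:             # negatives come from B-\n                d_ap = 1 - F.cosine_similarity(emb[a], emb[p], dim=0)\n                d_an = 1 - F.cosine_similarity(emb[a], emb[n], dim=0)\n                losses.append(F.relu(d_ap - d_an + margin))\n    return torch.stack(losses).mean()\n\end{verbatim}\nNote that no triplet uses a $B^{-}$ instance as the anchor or the positive, mirroring the restriction described above; the exhaustive loops are written for clarity, not efficiency.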
To summarize: The reviewers found our paper \\textit{``well-motivated''} (\\textcolor{red}{\\textbf{R4}}, \\textcolor{cadmiumgreen}{\\textbf{R7}}), \\textit{``well-organized''} (\\textcolor{cadmiumgreen}{\\textbf{R7}}), \\textit{``well-written''} (\\textcolor{auburn}{\\textbf{R8}}), containing \\textit{``valuable ideas''} (\\textcolor{red}{\\textbf{R4}}), and \\textit{``interesting''} (\\textcolor{auburn}{\\textbf{R8}}) and noted that \\textit{``experimental results demonstrate the effectiveness of the proposed strategy\"} (\\textcolor{cadmiumgreen}{\\textbf{R7}}). The reviewers had concerns about the novelty of the ideas as well as the significance of the results. Moreover, they requested more analysis and a discussion of the limitations. \n\nWe've addressed all reviewer comments and the quality of our paper improved significantly. To be specific, in addition to updating the text to clarify ambiguous points, we've conducted new experiments and we report positive results on another dataset ({LLD - Large Logo Dataset} \\cite{sage2017logodataset} -- see item A3), with a more recent, stronger and fast backbone ({ConvNeXt} architecture \\cite{liu2022convnet} -- see item {C1}), and provide a running-time analysis (C5), an analysis of the effect of text (C2), visual results (A2\\&C6) and more discussion. In the following, we only summarize our changes and respond to the comments provided by the reviewers. Due to space limitations, the new results are provided in the supplementary material, with pointers placed in the main text.\n\n\\subsection{Common Concerns}\n\\noindent\\hline\n\\vspace*{0.2cm}\n\n\\noindent\\textbf{A1. Simplicity of the Contribution.} \n\\textcolor{blue}{\\textbf{R6}}: \\textit{``[t]he contribution and novelty of this paper are limited since the authors propose a standard application of well-known methods according to a classic algorithmic scheme (the proposed techniques are simple and already proposed in the literature)\"}. \\textcolor{auburn}{\\textbf{R8}}: \\textit{``idea of the proposed \"segment level\" augmentation appears more as an engineering, rather than a scientific contribution\"}.\n\n\\textbf{Authors}: We respectfully disagree with the reviewers on the requirement that a paper's methodology should be sophisticated or non-engineered: (i) [Computer] Science makes progress by putting together existing tools and methods in novel ways for solving problems, generally improving some performance measures while doing so. In our case, we do use existing tools (segmentation, augmentation) but propose a novel idea for combining them for solving an existing problem with a significantly better performance. (ii) An `engineering' approach can be scientific if the engineered solution is novel. (iii) There exist similar papers at top conferences (``Copy-paste networks...\", ICCV2017 \\cite{lee2019copy} ; ``Cut, paste and learn...\", ICCV2019 \\cite{dwibedi2017cut}; ``Simple copy-paste ...\", CVPR2021 \\cite{ghiasi2021simple}) that propose ``simple'' and ``engineered'' augmentation methods or analyze them. Such papers are scientific contributions as they contribute to the progress of [Computer] Science by bringing together existing tools in new ways or by analyzing what's not been known before. We strongly believe that our paper is similar, its contributions are scientific and our novel idea deserves to be seen by the community. 
Moreover, our novel way of performing \& analyzing segment-level augmentation has strong potential to inspire more granular augmentation strategies than those generally used in the literature (see also E2 \& Supp. Mat. Section S8-C).

\noindent\textbf{A2. Quality and Explanation of Segmentation.}

\textcolor{red}{\textbf{R4}}: \textit{``...quantitative results and performance of segmentation quality are not given''}. \textcolor{cadmiumgreen}{\textbf{R7}}: \textit{``... segmentation errors, limitations and errors should be discussed.''}

\textbf{Authors}: We thank the reviewers for highlighting this important point. Logos are typically simple images composed of regions that are largely homogeneous in color. Therefore, a simple off-the-shelf segmentation algorithm works well for most logos (see Supp. Mat. Section S4 for some examples). Logo segmentation can produce spurious segments, especially in regions with strong color gradients. However, such infrequent cases are not a problem for us because segments resulting from over-segmentation or under-segmentation still function as segment-level augmentation, which remains useful for training. We provide a discussion about this in Supp. Mat. Section S4.


\noindent\textbf{A3. Other Datasets.}

\textcolor{red}{\textbf{R4}}: \textit{``I encourage the authors to point out the performance of their method on more benchmark datasets.''}

\textcolor{blue}{\textbf{R6}}: \textit{``... Please conduct more experiments on other recent datasets.''}

\textbf{Authors}: With this revision, we've performed experiments with the {LLD - Large Logo Dataset} {\cite{sage2017logodataset}}. Our results (Supp. Mat. Section S2) confirm that performing segment-level augmentation is an effective strategy for learning better representations for trademark similarity. These results were obtained without any hyper-parameter tuning, and we anticipate the gap to widen with tuning.

\input{LLD_table}
\subsection{Specific Concerns of \textcolor{red}{\textbf{R4}}}
\noindent\hline
\vspace*{0.2cm}

\noindent\textbf{B1 - \textcolor{red}{\textbf{R4}}}: \textit{``contributions of the paper are not clearly stated, so the novelty remains somewhat questionable.''}

\textbf{Authors}: The last two paragraphs of the Introduction state our contributions as follows (truncated to save space):

``\textit{Main Contributions.} Our contributions are as follows:
\begin{itemize}
    \item We propose a segment-level augmentation for producing artificial similarities between logos. To the best of our knowledge, ours is the first to introduce segment-level augmentation in deep learning. [...]

    \item To showcase the use of such a tool to generate artificially similar logos, we use data-hungry deep learning methods, namely, Triplet Loss \cite{weinberger2009distance} and Smooth AP \cite{Brown2020eccv}, [...]''
\end{itemize}

\noindent\textbf{B2 - \textcolor{red}{\textbf{R4}}}: \textit{``...table I and table II are not evident in order to demonstrate the performance. What are the chosen hyperparameters?''}

\textbf{Authors}: Table I shows the effect of applying ranking/comparison-based losses to logo retrieval. We see that both Triplet Loss and Smooth-AP Loss significantly improve over the baseline. Although these results are new for logo retrieval, we do not claim the loss functions themselves as novel contributions of our paper.
In Table II, we show that applying image-level augmentation improves logo retrieval performance, though we obtain a larger gain with our proposed segment-level augmentation. The hyper-parameters are provided in detail in Section IV.A.2.

\subsection{Specific Concerns of \textcolor{blue}{\textbf{R6}}}
\noindent\hline
\vspace*{0.2cm}

\noindent\textbf{C1 - \textcolor{blue}{\textbf{R6}}}: \textit{``...the authors should compare their contribution with recent deep networks.''}
\input{ConvNext_table}
\textbf{Authors}: We now compare our method with a very recent CNN architecture, namely ConvNeXt \cite{liu2022convnet}. The new results in Supp. Mat. Section S1 confirm our hypothesis that segment-level augmentation performs on par with or improves logo retrieval performance. Our ongoing experiments with ResNeXt, which could not be completed in time because ResNeXt trains more slowly than ResNet or ConvNeXt, also promise a significant gain in favor of segment-level augmentation (results will be included in the camera-ready version).

\noindent\textbf{C2 - \textcolor{blue}{\textbf{R6}}}: \textit{``...justify why ResNet performs worse on logos having text content in IV-B Experiment 1. ... give qualitative and quantitative performance analysis of this observation''}

\textbf{Authors}: We now provide both qualitative \& quantitative results on this in Supp. Mat. Section S5, together with a discussion.

\noindent\textbf{C3 - \textcolor{blue}{\textbf{R6}}}: \textit{``...why certain transformations in the data augmentation step are more performant when applying Triplet Loss, while other are better when using Smooth AP.''}

\textbf{Authors}: This is an important point, and we now provide a discussion on it in Supp. Mat. Section S8. To summarize here: Smooth AP and Triplet Loss incur different inductive ranking biases, and this leads to different overall behaviors for the methods.

\noindent\textbf{C4 - \textcolor{blue}{\textbf{R6}}}: \textit{``...best value of p depends on the type of used transformation and the computed performance evaluation metric.''}

\textbf{Authors}: The reviewer is right about this point: different augmentations incur different biases, so it is not surprising that they contribute to performance differently and with different selection probabilities. We now provide a discussion about this in Supp. Mat. Section S8.

\noindent\textbf{C5 - \textcolor{blue}{\textbf{R6}}}: \textit{``...discussion on the complexity (e.g. training time and retrieval response) is missing.''}

\textbf{Authors}: We now provide a running-time analysis in Supp. Mat. Section S7.

\noindent\textbf{C6 - \textcolor{blue}{\textbf{R6}}}: \textit{``The authors should present an error analysis to find the reasons behind the incurred errors by analyzing correlations between the different scenarios.''}

\textbf{Authors}: We now provide more qualitative analysis, including failure cases, in Supp. Mat. Sections S4 and S6, and discuss potential reasons.
\noindent\textbf{C7 - \textcolor{blue}{\textbf{R6}}:} \textit{``The Triplet Loss function requires preparing a database done by triplets of images, limiting its application to the existing publicly available datasets.''}

\textbf{Authors}: Applying Triplet Loss is actually very straightforward: by taking samples from a class as anchors \& positive samples, and samples from other classes as negative samples, it can be applied to any classification or retrieval problem. Triplets can easily be formed on the fly while sampling a batch, without preparing a separate dataset.

\subsection{Specific Concerns of \textcolor{cadmiumgreen}{\textbf{R7}}}
\noindent\hline
\vspace*{0.2cm}

\noindent\textbf{D1 - \textcolor{cadmiumgreen}{\textbf{R7}}}: \textit{``...Please further discuss the performance of triplet loss and smooth AP loss in Table \ref{tab:RankingLossesTable}''}

\textbf{Authors}: We now provide a discussion on this in Supp. Mat. Section S8. To summarize here: Smooth AP and Triplet Loss incur different inductive ranking biases, and this leads to different overall behaviors for the methods.

\subsection{Specific Concerns of \textcolor{auburn}{\textbf{R8}}}
\noindent\hline
\vspace*{0.2cm}

\noindent\textbf{E1 - \textcolor{auburn}{\textbf{R8}}:} \textit{``The variety of image-level augmentation techniques, which were used for comparison, is insufficient and the selected techniques themselves are trivial. To improve the impact of that contribution a better comparison with other techniques should be performed. For instance, the[re] are public implementations [1,2].''}

\textbf{Authors}: We now compare with three more commonly used image-level augmentation methods (namely, {Cutout, Elastic Transform and Channel Shuffle}) from the library {\cite{albumentations_paper}} suggested by the reviewer. The new results in Supp. Mat. Section S3 confirm our previous finding that segment-level augmentation is a better strategy than image-level augmentation for trademark retrieval.

\input{Albumentations_table}


\noindent\textbf{E2 - \textcolor{auburn}{\textbf{R8}}}: \textit{``...limited to a particular logo recognition task''}

\textbf{Authors}: We agree that our analysis is limited to logo retrieval. However, our method is a viable approach for problems that reason about similarities at the part level and require transferring knowledge or finding correspondences (e.g., affordances, partial shape similarity) at the level of object parts. This is an interesting research direction, and we provide a discussion about it in Supp. Mat. Section S8.

\noindent\textbf{E3 - \textcolor{auburn}{\textbf{R8}}}: \textit{``... III.C.1, III.C.2 is not sufficiently clear.''} \textbf{Authors}: Sorry about the ambiguity. This was partially due to a mistake in notation (as noted by \textcolor{red}{\textbf{R4}}). We've revised the notation and the text to make them clearer.


\noindent\textbf{E4 - \textcolor{auburn}{\textbf{R8}}}: \textit{``I can only guess (without confidence) that you just simplify the Smooth AP by removing the ranking of inter-class samples. The Triplet Loss Adaptation seem[s] to be straightforward for the problem. However, some sentences confuse the reader, for instance: `Optimizing the distances between dissimilar logo instances [...].'{}''} \textbf{Authors}: We've updated the text and improved the notation.
We hope that it is clearer now.


\subsection{Minor Comments}
\noindent\hline
\vspace*{0.2cm}

\textcolor{red}{\textbf{R4}}: \textit{``Omega notation has been used in two different variables (logo and minibatch).''} \textcolor{blue}{\textbf{R6}}: \textit{``There are many linguistic glitches here''}. \textcolor{auburn}{\textbf{R8}}: \textit{``The introduction of the data augmentation strategy at the beginning of the work creates the false first impression that it is primarily used to increase the `samples per class' value of the dataset. ... this should be better clarified in the III.A or elsewhere before introducing the methodology.''}

\textbf{Authors}: Thank you for the suggestions and corrections. We've integrated them into the text and highlighted them in red.

\textcolor{auburn}{\textbf{R8}}: \textit{``...the location of Figures 2 and 3 (namely together with the text in double-column format) is not standard''}

\textbf{Authors}: We agree that such a layout is not ideal or pleasing to the eye; sorry for that. However, it is allowed by the conference template and is common when space is scarce.




\section{Related Work}

\subsection{Logo Retrieval}
Earlier studies in trademark retrieval \cite{TursunandKalkan} compared hand-crafted features with deep features extracted using pre-trained networks and revealed that deep features obtain considerably better results. Perez \emph{et al.} \cite{Perez} improved the results by combining two CNNs trained on two different datasets. Later, Tursun \emph{et al.} \cite{Tursun2019} achieved impressive results by introducing different attention methods to reduce the effect of text regions, and in their most recent work \cite{Tursun2022}, they introduced further modifications and achieved state-of-the-art results.

\subsection{Data Augmentation}
Data augmentation \cite{augmentation2, AugmentationSurvey} is an essential and well-known technique in deep learning for making networks more robust to variations in data. Conventional augmentation methods perform geometric transformations such as zooming, flipping or cropping the entire image. Alternatively, adding noise, random erasing or synthesizing training data \cite{ImageNet} are key approaches to improving overall model performance. Random Erasing \cite{RandomErasing} is a recently introduced method that obtains significant improvements on various recognition tasks. Although augmentation methods that focus on cutting and mixing windows \cite{CutMix,MixUp,imagemix} rather than the whole image are not widely used, they have shown significant gains in performance.

In logo retrieval, studies generally use conventional augmentation methods. For example, Tursun \emph{et al.} \cite{tursun2021learning} applied a reinforcement learning approach to learn an ensemble of test-time data augmentations for trademark retrieval. An exception to such an approach is the study by Tursun \emph{et al.} \cite{Tursun2019}, who proposed a method to remove text regions from logos while evaluating similarity.

\subsection{Differentiable Ranking}

Image and logo retrieval are by definition ranking problems; however, ranking is not differentiable. To address this limitation, many solutions have been proposed recently \cite{FastAP,BlackboxAP,Brown2020eccv}. These approaches mainly optimize Average Precision (AP) with different approximations: for example, Cakir \emph{et al.} \cite{FastAP} quantize distances between pairs of instances and use differentiable relaxations for these quantized distances. Rolinek \emph{et al.} \cite{BlackboxAP} consider non-differentiable ranking as a black box and use smoothing to estimate suitable gradients for training a network to rank. Finally, Brown \emph{et al.} \cite{Brown2020eccv} propose smoothing AP itself with differentiable operations to train a deep network to rank.
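The sigmoid-based smoothing that underlies these approaches can be summarized as follows (a simplified form in our own notation; details such as the temperature schedule are omitted): the non-differentiable rank of an instance $i$ within a set $S$, given similarity scores $s_j$, is replaced by the smoothed rank
\begin{equation}
\mathcal{R}(i,S) = 1 + \sum_{j \in S,\, j \neq i} \sigma\!\left(\frac{s_j - s_i}{\tau}\right),
\end{equation}
where $\sigma$ is the sigmoid function and $\tau$ is a temperature parameter; plugging such smoothed ranks into the definition of AP yields a differentiable training objective.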
These approximations have mainly been applied to standard retrieval benchmarks. In this paper, we show that differentiable ranking-based loss functions can lead to a performance improvement for logo retrieval as well.

\subsection{Summary}

Looking at the studies in the literature, we observe that \textbf{(1)} no study has performed segment-level augmentation, either for logo retrieval or for general recognition and retrieval problems. The closest work in this direction is the study by Tursun \emph{et al.} \cite{Tursun2019}, which only removed text regions from logos while evaluating similarity. \textbf{(2)} Promising deep learning approaches, such as metric learning with, e.g., Triplet Loss and differentiable ranking, have not been employed for logo retrieval.
\section{Experiments with ConvNeXt, a Stronger Backbone}

In Table \ref{tab:RebuttalConvNextTable}, we evaluate our novel segment-level augmentation strategy using ConvNeXt \cite{liu2022convnet}, a recent, strong and fast backbone that can compete with transformer-based architectures. ConvNeXt provides significantly better performance than the ResNet baseline, and Table \ref{tab:RebuttalConvNextTable} confirms our finding with ResNet that segment-level augmentation can improve logo retrieval performance.

\input{table_convnet}

\section{Experiments with a Different Dataset}

Without any tuning, we repeat our experiments on a different logo retrieval dataset, namely the Large Logo Dataset (LLD) \cite{sage2017logodataset}. LLD has {61380} logos for training and {61540} logos for testing. The LLD dataset does not provide a query set with known similarities; therefore, for the experiments on LLD, we use the query set of the METU dataset to find similarities in the LLD dataset.

The results in Table \ref{tab:LLDtable} show that segment-level augmentation performs better than image-level augmentation in terms of the R@1 measure and on par in terms of the NAR and R@8 measures. Considering that LLD is a smaller dataset than METU, we believe that segment-level augmentation can provide a larger performance margin if tuned.

\input{table_LLD}

\section{Comparisons with Different Image-level Augmentations}

We now compare segment-level augmentation with more image-level augmentation methods. We selected three commonly used image-level augmentation methods ({Cutout, Elastic Transform and Channel Shuffle}) from \cite{albumentations_paper}. For this experiment, we use the same selection probability and setup as in Section IV.C. of the main paper. The results in Table \ref{tab:albumentations} show that, \textbf{without any tuning}, segment-level augmentation performs better than this new set of image-level augmentations in terms of the NAR and R@8 measures, whereas it is inferior in terms of the R@1 measure. With tuning, segment-level augmentation has the potential to perform better.

\input{table_more_augmentation}
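These comparisons are reported in terms of the NAR and R@K measures. For reference, a small sketch of how these measures can be computed per query is given below; it assumes the standard Normalized Average Rank definition from the retrieval literature (0 is ideal, roughly 0.5 corresponds to random ranking) and treats R@K as whether any relevant logo appears in the top $K$, which we believe matches our usage; the helper names are ours.

\begin{verbatim}
import numpy as np

def normalized_average_rank(ranks, n_db):
    # ranks: 1-based ranks of the N_rel relevant logos for one query,
    # within a database of n_db items.  0 = perfect, ~0.5 = random.
    r = np.asarray(ranks, dtype=float)
    n_rel = len(r)
    return (r.sum() - n_rel * (n_rel + 1) / 2) / (n_db * n_rel)

def recall_at_k(ranks, k):
    # 1.0 if any relevant logo is retrieved within the top-k results.
    return float(min(ranks) <= k)
\end{verbatim}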
\section{An Analysis of Segmentation Maps}

Logos are typically simple images composed of regions that are largely homogeneous in color. Therefore, a simple off-the-shelf segmentation algorithm generally works well for most logos (see Figure \ref{seg_fig_failure_cases} for some examples). Logo segmentation can produce spurious segments, especially in regions with strong color gradients. However, this is not a problem for us: segments resulting from over-segmentation or under-segmentation still function as a form of segment-level augmentation, which remains useful for training.

Figure \ref{seg_fig_failure_cases} displays edge cases where our segment-level augmentation produces drastically different logos for which computing similarity with the original logos is highly challenging. These cases arise when a logo has only a few segments of comparable size and one of them is removed; when the segmentation method under-segments the image and a segment grouping multiple regions is removed; or when the background segment is selected for augmentation and rotated. We did not try to address these edge cases as they are infrequent. We believe they act as useful stochastic perturbations that are helpful to the training dynamics. We leave an analysis of this, and improving the quality of segmentation \& augmentation, as future work.

\begin{figure}[!h]
    \centering
    \includegraphics[width=0.9\columnwidth]{augmentation_failures.pdf}
    \caption{Edge cases in our segment-level augmentation. The first is a result of under-segmentation, where connected petals with similar colors are grouped into a single segment; therefore, changing its color produces a distinct logo. The second case happens because the logo has two main segments and removing one results in a drastically different logo. The third is a case where a background segment is rotated and the outcome is an overlay of two segments.}
    \label{seg_fig_failure_cases}
\end{figure}
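To make the augmentation pipeline concrete, a minimal sketch is given below. It uses Felzenszwalb segmentation from scikit-image purely for illustration (the exact off-the-shelf algorithm, its parameters and the selection probabilities in our implementation may differ) and shows the Segment Color Change and Segment Removal operations; Segment Rotation would additionally rotate the masked pixels about the segment centroid before pasting them back.

\begin{verbatim}
import numpy as np
from skimage.segmentation import felzenszwalb

def segment_augment(img, p_color=0.5, p_remove=0.25, rng=None):
    # img: HxWx3 uint8 logo image.  Segment it, pick one segment at
    # random, and apply a segment-level augmentation to that segment.
    rng = rng or np.random.default_rng()
    seg = felzenszwalb(img, scale=100, sigma=0.5, min_size=50)
    out = img.copy()
    mask = seg == rng.choice(np.unique(seg))
    r = rng.random()
    if r < p_color:
        # Segment Color Change: repaint the segment with a random color.
        out[mask] = rng.integers(0, 256, size=3, dtype=np.uint8)
    elif r < p_color + p_remove:
        # Segment Removal: fill the segment with the (assumed white)
        # background color.
        out[mask] = 255
    return out
\end{verbatim}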
\section{The Effect of Text in Logos}

In this experiment, we analyze the effect of text on logo similarity. The visual results in Table \ref{tab:RebuttalLogoAnalysis} and the quantitative analysis in Table \ref{tab:LogoTypeAnalysis} show that, when used as queries, logos with text are more difficult to match to the logos in the dataset, and the best and average ranks of similar logos are very high. This is not surprising since representations in deep networks capture text as well, and unless additional mechanisms are used to ignore text regions, they are taken into account while measuring similarity. The effect of text on logo retrieval is already known in the literature, and soft or hard mechanisms can be used to remove text from logos -- see, e.g., \cite{tursun2020learning,Kalkan}.

\input{logo_analysis}


\section{Visual Retrieval Results}

Figure \ref{fig:visual_results} displays sample retrieval results for different queries. We see in Figure \ref{fig:visual_results}(a) and (b) that segment-level augmentation is able to provide better retrieval than its competitors. Figure \ref{fig:visual_results}(c) displays a failure case where a query with an unusual color distribution is used. We see that all methods are adversely affected by this; however, segment-level augmentation is still able to retrieve logos with similar color distributions.

\begin{figure*}[!h]
\centerline{
    \subfigure[]{
        \includegraphics[width=0.65\textwidth]{final_visual_results_1.pdf}
}}
\centerline{
    \subfigure[]{
        \includegraphics[width=0.65\textwidth]{final_visual_results_2.pdf}
}}
\centerline{
    \subfigure[]{
        \includegraphics[width=0.65\textwidth]{final_visual_results_3_woConvNext.pdf}
}}
\caption{Visual results on the METU dataset for the methods at their best settings. \textbf{(a-b)} Example cases where segment-level augmentation produces better retrieval than image-level augmentation. \textbf{(c)} A negative result for a query with an unusual intensity distribution, which affects all methods adversely. We nevertheless observe that segment-level augmentation is able to retrieve logos with similar color distributions.
\label{fig:visual_results}}
\end{figure*}

\section{Running-Time Analysis}

Table \ref{tab:RebuttalTimeComplexity} provides a running-time analysis of the segment-level and image-level augmentation strategies. We observe that the time spent on segmentation (0.4ms) and Segment Color Change (2.4ms) is comparable to that of the image-level transformation Horizontal Flip (2.8ms). However, Segment Removal and Segment Rotation take significantly more time. It is important to note that our implementation is not optimized for efficiency.

\input{time_complexity}


\section{Discussion}

\subsection{Comparing Triplet Loss and Smooth-AP Loss}

Our results on the METU and LLD datasets lead to two interesting findings, which we discuss below:

\begin{itemize}
    \item \textit{Finding 1: Triplet Loss is better in terms of the R@1 and R@8 measures whereas Smooth-AP is better in terms of the NAR measure.}
    \item \textit{Finding 2: Smooth-AP provides its best NAR performance with Color Change whereas Triplet Loss provides its best with Color Change \& Segment Removal.}
\end{itemize}

Triplet Loss by definition optimizes (learns) a distance metric and is considered a surrogate ranking objective. On the other hand, Smooth-AP directly aims to optimize Average Precision, a ranking measure, and therefore pertains to a more global ranking objective than Triplet Loss, which works at the level of triplets only. See also Brown et al. \cite{Brown2020eccv}, who discuss and contrast Triplet Loss with Smooth-AP Loss.

The two objectives provide different inductive biases to the learning mechanism, and therefore we see differences in their performance with respect to different measures and augmentation strategies. For example, Finding 1 is likely because NAR considers all logos, whereas R@1 and R@8 only consider logos at the top of the ranking, which can more easily be pushed to the top using local similarity arrangements as in Triplet Loss. Moreover, Finding 2 occurs because different augmentation strategies incur different rankings among logos, and the inductive biases of Triplet Loss and Smooth-AP handle them differently.

We believe that this is an interesting research question, which we leave as future work.

\subsection{Different Effects and Selection Probabilities for Segment-Augmentations}

The results in Table IV of the main paper show that different augmentations contribute to performance differently and with different selection probabilities.
For example, we see that Color Change (with $p=0.5$) gives the best performance in terms of NAR and R@8, whereas Rotation (with $p=0.75$) provides the best performance for R@1, and Removal improves over the baseline only in terms of the R@8 measure.

We attribute these differences to the fact that the different segment-level augmentations incur different biases: Color Change enforces invariance to color perturbations at the segment level, whereas Segment Rotation and Removal encourage invariance to changes in the spatial layout of the shape.

\subsection{Applicability to Other Problems}

We agree that our analysis is limited to logo retrieval. However, the idea of segment-level augmentation is a viable approach for problems that reason about similarities at the level of object parts and require transferring knowledge across such parts. One good example is reasoning about affordances of objects \cite{zhu2014reasoning,myers2015affordance,myers2014affordance}, where the supported functions of object parts can be transferred across objects having similar parts. Another example is reasoning about similarity between shapes that have partial overlap \cite{leonard20162d,latecki2000shape}, where correspondences between parts of shapes need to be calculated. In either example, the specific segment-level augmentation methods may have to be adjusted to the specific problem. For example, performing affine transformations on the segments may be helpful for problems with real-world objects or images.



\bibliographystyle{./IEEEtran}