diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzgkcr" "b/data_all_eng_slimpj/shuffled/split2/finalzzgkcr" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzgkcr" @@ -0,0 +1,5 @@ +{"text":"\\section*{Supplementary Material}\nIn this Supplementary Material, we first discuss our flat band model in more detail. We furthermore provide the ground state degeneracy (GSD), i.e.\\ the number of zero modes,\nobtained by numerical diagonalization of the interaction Hamiltonian for various\nsamples and interaction types.\n\n\\section{flat band model}\nIn the main text, we consider a square lattice with an effective magnetic flux $\\phi=1\/q$ piercing each elementary\nplaquette. We assign $M$ internal orbitals to each lattice site $i$ with the real-space coordinate\n$(x_i, y_i)$, where $M$ must be a factor of $q$. We then label each orbital by a site-dependent index\n\\begin{equation}\\label{eq:si}\ns_i=x_i\\bmod\\Big(\\frac{q}{M}\\Big)+m\\frac{q}{M}, \n\\end{equation}\nwith \\(m=0,1,\\dotsc,M-1\\), such that orbitals with indices in the same residue class \\(s_i\n\\equiv s_k \\bmod (\\frac{q}{M})\\) share the same lattice site.\nThe single-particle tight-binding Hamiltonian is\n\\begin{equation} \\label{eq:hoppingham}\n H_0=\\sum_{j,s_j}\\sum_{k,s_k}t_{j,k}^{s_j,s_k} a^\\dagger_{j,s_j}a_{k,s_k},\n\\end{equation}\nwhere \\(a^\\dagger_{j,s_j}\\) (\\(a_{j,s_j}\\)) creates (annihilates) a particle on\nthe orbital \\(s_j\\) at site \\(j\\). To achieve an exactly flat lowest Chern band with Chern number $\\mathcal{C}=M$,\nwe choose the hopping amplitudes as \n\\begin{equation} \\label{eq:hopping}\nt_{j,k}^{s_j,s_k} =\\delta_{s_j-x_j,s_k-x_k}^{\\bmod q} (-1)^{x+y+xy} e^{-\\frac{\\pi}{2}(1-\\phi)|z|^2} e^{-i\\pi\\phi(\\tilde{x}_j+\\tilde{x}_k)y},\n\\end{equation}\nwhere \\(z_j = x_j + i y_j\\), \\(z = z_j - z_k = x + i y\\), and $\\tilde{x}_j=x_j+(s_j-x_j)\\bmod q$. \n\nIn Fig.~\\ref{fig:lattice}, we show typical examples of our model for $\\phi=1\/2,M=2$; $\\phi=1\/3,M=3$; and $\\phi=1\/4,M=2$. Once $\\phi$ and $M$ are fixed, the orbital index $s_i$ at each lattice site can be easily computed from Eq.~(\\ref{eq:si}). For example, for $\\phi=1\/4,M=2$, Eq.~(\\ref{eq:si}) gives $s_i=(x_i\\bmod2)+2m,m=0,1$. So we have $s_i=0,2$ for even $x_i$ and $s_i=1,3$ for odd $x_i$ (lower panel in Fig.~\\ref{fig:lattice}). The unit cell, which contains $q\/M$ sites, can be determined by the period of the orbital indices. Hopping only occurs between orbitals satisfying $(s_j-x_j)\\bmod q=(s_k-x_k)\\bmod q$ due to the $\\delta_{s_j-x_j,s_k-x_k}^{\\bmod q}$ factor in Eq.~(\\ref{eq:hopping}). If we imagine all orbitals that can be connected by hopping as an effective ``layer'', it is easy to see that our model on an infinite lattice is equivalent to the stacking of $M$ ``layers'' of the $M=1$ model (Fig.~\\ref{fig:lattice}). However, the phase of the hopping between two specific lattice sites is layer-dependent. This can be seen from the Aharonov-Bohm factor $e^{-i\\pi\\phi(\\tilde{x}_j+\\tilde{x}_k)y}$ in Eq.~(\\ref{eq:hopping}), where $\\tilde{x}_j$ is shifted from the site coordinate $x_j$ by an orbital (layer)-dependent term $(s_j-x_j)\\bmod q$. For example, if we consider the hopping from site $(x_k,y_k)=(0,0)$ to $(x_j,y_j)=(0,1)$ in the upper panel of Fig.~\\ref{fig:lattice}, a phase $e^{i 0}$ ($e^{i\\pi}$) is picked up because $\\tilde{x}_j=\\tilde{x}_k=0$ ($\\tilde{x}_j=\\tilde{x}_k=1$) in the blue (red) layer. 
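For concreteness, the following minimal Python sketch evaluates the hopping amplitude of Eq.~(\\ref{eq:hopping}); it assumes, as in the Kapit-Mueller construction, that $x$ and $y$ denote the components of the displacement $z=z_j-z_k$ (also in the Aharonov-Bohm factor), and the function name and interface are purely illustrative:

```python
import numpy as np

def hopping_amplitude(xj, yj, sj, xk, yk, sk, q, phi):
    """Hopping amplitude between orbital sk at site (xk, yk) and orbital sj
    at site (xj, yj) for flux phi = 1/q, following the equation above."""
    # delta factor: hopping connects orbitals within the same effective layer only
    if (sj - xj) % q != (sk - xk) % q:
        return 0.0
    x, y = xj - xk, yj - yk                  # displacement z = x + iy
    xt_j = xj + (sj - xj) % q                # shifted coordinate (tilde-x_j)
    xt_k = xk + (sk - xk) % q
    sign = (-1) ** (x + y + x * y)           # Kapit-Mueller sign factor
    envelope = np.exp(-0.5 * np.pi * (1 - phi) * (x ** 2 + y ** 2))
    ab_phase = np.exp(-1j * np.pi * phi * (xt_j + xt_k) * y)  # layer-dependent
    return sign * envelope * ab_phase
```

Under these assumptions, for the example above with $\\phi=1\/2$, `hopping_amplitude(0, 1, 0, 0, 0, 0, 2, 0.5)` and `hopping_amplitude(0, 1, 1, 0, 0, 1, 2, 0.5)` indeed differ by the relative phase $e^{i\\pi}$ between the two layers.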
The factor $\\delta_{s_j-x_j,s_k-x_k}^{\\bmod q}$ in Eq.~(\\ref{eq:hopping}) guarantees that the shift term $(s_j-x_j)\\bmod q$ is constant in each layer. When $M=1$, we have $s_i=x_i\\bmod q$, which leads to $\\delta_{s_j-x_j,s_k-x_k}^{\\bmod q}=1$ and $\\tilde{x}_i=x_i$. Thus our model reduces to the Kapit-Mueller model [30] in the Landau gauge. If we keep only the \nnearest-neighbor hopping, the phase shift differentiates our model from the usual multi-orbital Hofstadter model, where the hopping phase depends only on the lattice site coordinate and thus no phase shift exists. Instead, our model can be thought of as a multi-orbital \nHofstadter model with orbital-dependent hopping phases and color-entangled boundary conditions.\n\\begin{figure}\n\\centerline{\\includegraphics[width=0.6\\linewidth]{lattice_v2.pdf}}\n\\caption{Typical examples of our flat band model for different $\\phi=1\/q$ and $M$. Each ellipse represents a lattice site with the real-space coordinate $(x,y)$. At each lattice site $i$, there are circles representing orbitals, whose indices $s_i$ are given by the numbers.\nThe orbitals connected by allowed hopping (only nearest-neighbor hopping is shown for simplicity) in Eq.~(\\ref{eq:hopping}) have the same color and can be thought of as an effective layer. The unit cell containing $q\/M$ sites is indicated by the dashed rectangle. In the infinite lattice case, our model is equivalent to the stacking of $M$ layers of the $M=1$ model, with layer-dependent Aharonov-Bohm hopping phases.\n}\n\\label{fig:lattice}\n\\end{figure}\n\nAlthough it is straightforward on an {\\em infinite} lattice to map our model to the shifted stacking of $M$ layers of the $M=1$ model, a {\\em finite} lattice of $N_x\\times N_y$ unit cells with simple periodic boundary conditions can lead to complicated boundary conditions in the layer stacking picture. Let us consider an example with $\\phi=1\/2,M=2$ (Fig.~\\ref{fig:latticeNx}). If $N_x$ is even, by tracking the nearest-neighbor hopping in the $x$ direction, we find that all hopping still occurs between orbitals with the same color (i.e., the same effective layer). In that case, our model is again equivalent to the shifted stacking of two complete $M=1$ layers, each of which has an integer number of unit cells and periodic boundary conditions (left panel of Fig.~\\ref{fig:latticeNx}). However, if $N_x$ is odd, Eq.~(\\ref{eq:hopping}) leads to \\(x\\)-direction hopping between orbitals with different colors across the boundary (right panel of Fig.~\\ref{fig:latticeNx}), which implements color-entangled boundary conditions [7,14,18] in the $x$-direction for the two effective layers. Crucially, each layer is no longer a complete \\(M=1\\) model with an integer number of unit cells and periodic boundary conditions. Instead, by unfolding the two layers, our model is now equivalent to a single copy of the $M=1$ model with usual periodic boundary conditions. In general, one can find that our model on a periodic $N_x\\times N_y$ lattice can be mapped to \\(\\gcd(N_x,M)\\) copies of the complete \\(M=1\\) model with usual periodic boundary conditions, each copy having \\([N_x\/\\gcd(N_x,M)]\\times N_y\\) unit cells and copy-dependent Aharonov-Bohm hopping phases.\n\n\\begin{figure}\n\\centerline{\\includegraphics[width=0.8\\linewidth]{lattice_Nx.pdf}}\n\\caption{Our model on periodic finite lattices with $\\phi=1\/2$ and $M=2$. The unit cell is indicated by the dashed rectangle. The hopping across the boundary is highlighted by the dashed lines. 
(a) When $N_x=4$, our model is equivalent to two (blue and red) complete $M=1$ layers with periodic boundary conditions. (b) When $N_x=3$, hopping across the boundary may occur between orbitals (layers) with different colors. Thus we have color-entangled boundary conditions in the $x$-direction for the blue and red layers, and each layer is no longer a complete \\(M=1\\) model with an integer number of unit cells and periodic boundary conditions.\n}\n\\label{fig:latticeNx}\n\\end{figure}\n\n\\section{ground state degeneracy}\nHere we summarize the ground state degeneracy, i.e.\\ the number of zero modes,\nobtained by numerical diagonalization of the interaction Hamiltonian for various\nsamples and interaction types (Tab.~\\ref{t1}).\n\\begin{table}\n\\caption{The ground state degeneracy (GSD), i.e.\\ the number of zero modes,\nobtained by numerical diagonalization of the interaction Hamiltonian for various\nsamples and interaction types. As discussed in the main paper, here we consider\ncolor-entangled FCIs, nematic FCIs, and other states (separated by the\nhorizontal lines). The notation is the same as in the main\npaper. ``N\/A'' means that the corresponding interaction type is not included.}\n\\label{t1}\n\n\\begin{ruledtabular}\n\\begin{tabular}{ccccccccc}\n\\(N\\) & \\(N_x\\times N_y\\) & \\(M\\) ($\\mathcal{C}=M$) & \\(\\gcd(N_x,M)\\) & \\(\\phi\\) & \\(\\nu\\) & intra-orbital interaction & inter-orbital interaction & GSD \\\\\n\\hline\n\\(6\\) & \\(6\\times3\\) & \\(2\\) & \\(2\\) & \\(1\/2\\) & \\(1\/3\\) & \\(k=1\\) & \\(k=1\\) & \\(3\\) \\\\\n\\(7\\) & \\(3\\times7\\) & \\(2\\) & \\(1\\) & \\(1\/2\\) & \\(1\/3\\) & \\(k=1\\) & \\(k=1\\) & \\(3\\) \\\\\n\\(6\\) & \\(3\\times3\\) & \\(2\\) & \\(1\\) & \\(1\/2\\) & \\(2\/3\\) & \\(k=2\\) & \\(k=2\\) & \\(6\\) \\\\\n\\(8\\) & \\(4\\times3\\) & \\(2\\) & \\(2\\) & \\(1\/2\\) & \\(2\/3\\) & \\(k=2\\) & \\(k=2\\) & \\(6\\) \\\\\n\\(9\\) & \\(3\\times3\\) & \\(2\\) & \\(1\\) & \\(1\/2\\) & \\(1\\) & \\(k=3\\) & \\(k=3\\) & \\(10\\) \\\\\n\\(6\\) & \\(6\\times4\\) & \\(3\\) & \\(3\\) & \\(1\/3\\) & \\(1\/4\\) & \\(k=1\\) & \\(k=1\\) & \\(4\\) \\\\\n\\(7\\) & \\(4\\times7\\) & \\(3\\) & \\(1\\) & \\(1\/3\\) & \\(1\/4\\) & \\(k=1\\) & \\(k=1\\) & \\(4\\) \\\\\n\\(6\\) & \\(3\\times4\\) & \\(3\\) & \\(3\\) & \\(1\/3\\) & \\(1\/2\\) & \\(k=2\\) & \\(k=2\\) & \\(10\\) \\\\\n\\(8\\) & \\(4\\times4\\) & \\(3\\) & \\(1\\) & \\(1\/3\\) & \\(1\/2\\) & \\(k=2\\) & \\(k=2\\) & \\(10\\) \\\\\n\\(9\\) & \\(3\\times4\\) & \\(3\\) & \\(3\\) & \\(1\/3\\) & \\(3\/4\\) & \\(k=3\\) & \\(k=3\\) & \\(20\\) \\\\\n\\hline\n\\(6\\) & \\(3\\times4\\) & \\(2\\) & \\(1\\) & \\(1\/2\\) & \\(1\/2\\) & \\(k=1\\) & N\/A & \\(2\\) \\\\\n\\(8\\) & \\(4\\times4\\) & \\(2\\) & \\(2\\) & \\(1\/2\\) & \\(1\/2\\) & \\(k=1\\) & N\/A & \\(4\\) \\\\\n\\(6\\) & \\(6\\times2\\) & \\(4\\) & \\(2\\) & \\(1\/4\\) & \\(1\/2\\) & \\(k=1\\) & N\/A & \\(4\\) \\\\\n\\(7\\) & \\(2\\times7\\) & \\(2\\) & \\(2\\) & \\(1\/2\\) & \\(1\/2\\) & \\(k=1\\) & N\/A & \\(0\\) \\\\\n\\(6\\) & \\(3\\times2\\) & \\(2\\) & \\(1\\) & \\(1\/2\\) & \\(1\\) & \\(k=2\\) & N\/A & \\(3\\) \\\\\n\\(8\\) & \\(2\\times4\\) & \\(2\\) & \\(2\\) & \\(1\/4\\) & \\(1\\) & \\(k=2\\) & N\/A & \\(9\\) \\\\\n\\(10\\) & \\(2\\times5\\) & \\(2\\) & \\(2\\) & \\(1\/2\\) & \\(1\\) & \\(k=2\\) & N\/A & \\(1\\) \\\\\n\\(9\\) & \\(2\\times3\\) & \\(2\\) & \\(2\\) & \\(1\/4\\) & \\(3\/2\\) & \\(k=3\\) & N\/A & \\(0\\) \\\\\n\\(12\\) & \\(2\\times4\\) & \\(2\\) & \\(2\\) & \\(1\/4\\) & \\(3\/2\\) & \\(k=3\\) & N\/A & 
\\(16\\) \\\\\n\\hline\n\\(6\\) & \\(2\\times6\\) & \\(2\\) & \\(2\\) & \\(1\/2\\) & \\(1\/2\\) & \\(k=2\\) & \\(k=1\\) & \\(15\\) \\\\\n\\(8\\) & \\(4\\times4\\) & \\(2\\) & \\(2\\) & \\(1\/2\\) & \\(1\/2\\) & \\(k=2\\) & \\(k=1\\) & \\(19\\)\n\\end{tabular}\n\\end{ruledtabular}\n\\end{table}\n\\end{document}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Algorithms for TOPS\\xspace}\n\\label{sec:algo}\n\n\\begin{table}[t]\\scriptsize\n\n\t\\centering\n\t\t\\begin{tabular}{|c|c|c|}\n\t\t\t\\hline\n\t\t\tAlgorithm & Selected sites $\\mathcal{Q}$ & Utility $U$ \\\\\n\t\t\t\\hline\n\t\t\t\\textsc{Optimal} & $\\{s_1,s_3\\}$ & $1.0$ \\\\\n\t\t\n\t\t\t\\textsc{Inc-Greedy}\\xspace & $\\{s_1,s_2\\}$ & $0.9$ \\\\\n\t\t\n\t\t\t\\hline\n\n\t\t\\end{tabular}\n\t\t\\tabcaption{Utilities of different algorithms for\n\t\tExample~\\ref{ex:one} with $k = 2$.}\n\t\t\\label{tab:utilities}\n\t\t\\vspace*{-2mm}\n\n\\end{table}\n\nWe first state an example, using which we will explain the different algorithms\nfor the TOPS\\xspace problem.\n\n\\begin{example}\n\t%\n\t\\label{ex:one}\n\t%\n\t\\emph{Consider Table~\\ref{tab:example}, which lists $2$ trajectories\n\t$T_1,T_2$ with their preference scores for $3$ sites $s_1$, $s_2$ and $s_3$.\n\tWe set $k=2$ sites to be selected for the TOPS\\xspace query. The utilities\n\tachieved by different algorithms (discussed next) are shown in\n\tTable~\\ref{tab:utilities}. The optimal solution consists of the sites $s_1$ and\n$s_3$ with a total utility of $1.0$.}\n\t%\n\\end{example}\n\n\n\\subsection{Optimal Algorithm}\n\n\nWe first discuss the \\emph{optimal} solution to the TOPS\\xspace problem in\nthe form of an \\emph{integer program} (IP):\n\\begin{align}\n\t%\n\t\\text{Maximize } U &= \\sum_{j=1}^m U_j \\label{eq:obj} \\\\\n\t\\text{such that } \\sum_{i=1}^n x_i &\\leq k, \\label{eq:cardinality} \\\\\n\t\\forall j=1,\\dots,m, \\ \n\tU_j & \\leq \\max_{1\\le i \\le n} \\{\\ensuremath{\\psi}\\xspace(T_j,s_i)\\times x_i\\}\n\t\\label{eq:MaxUtilityConstraint} \\\\\n\t\\forall i = 1, \\dots, n, \\ \n\tx_i &\\in \\{0,1\\}\n\t\\label{eq:binary}\t\n\t%\n\\end{align}\n\n\n\nThe above IP is, however, not in the form of a standard integer linear program\n(ILP). To convert it into one, the constraints in Ineq.~\\eqref{eq:MaxUtilityConstraint} can\nbe linearized, as discussed in Appendix~\\ref{app:ip}. Since TOPS\\xspace is NP-hard, the cost of this optimal algorithm is \\emph{exponential} with respect to the\nnumber of trajectories $m$ and the number of sites $n$, and it is, therefore,\n\\emph{impractical} except for very small $m$ and $n$. This is demonstrated through experiments in Sec.~\\ref{sec:comparison with optimal}. \n\n\n\n\n\\subsection{Approximation Algorithms}\n\\label{sec:approx}\n\n\nWe next present a greedy approximation\nalgorithm. Before that, let us define a few\nterms and describe how they are computed.\n\n\n\nGiven a site $s_i \\in \\mathcal{S}$, $\\ensuremath{TC}\\xspace(s_i)$ denotes the set of trajectories\ncovered by $s_i$, i.e., $\\ensuremath{TC}\\xspace(s_i)=\\{T_j|\\ensuremath{d_r}\\xspace(T_j,s_i) \\leq \\tau\\}$. Similarly,\ngiven a trajectory $T_j$, $\\ensuremath{SC}\\xspace(T_j)$ denotes the set of sites that cover $T_j$,\ni.e., $\\ensuremath{SC}\\xspace(T_j)=\\{ s_i|\\ensuremath{d_r}\\xspace(T_j,s_i) \\leq \\tau\\}$. 
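These covering sets follow directly from the definitions once the trajectory-to-site distances are available. A minimal Python sketch (with illustrative names only; the distance matrix is assumed to be precomputed, as described below):

```python
from collections import defaultdict

def covering_sets(dist, tau):
    """Compute TC and SC from a matrix dist[j][i] = d_r(T_j, s_i)."""
    TC = defaultdict(list)  # TC[i]: indices of trajectories covered by site s_i
    SC = defaultdict(list)  # SC[j]: indices of sites that cover trajectory T_j
    for j, row in enumerate(dist):
        for i, d in enumerate(row):
            if d <= tau:    # coverage test against the threshold tau
                TC[i].append(j)
                SC[j].append(i)
    return TC, SC
```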
For each site $s_i$, the\nsite weight $w_i = \\sum_{j=1}^m \\{\\ensuremath{\\psi}\\xspace(T_j,s_i)|T_j \\in \\ensuremath{TC}\\xspace(s_i)\\}$ denotes the\nsum of preference scores of all the trajectories covered by it.\nTable~\\ref{tab:netclus} tabulates some of the important notations.\n\nSince $\\tau$ is available only at query time, the computation of these covering\nsets needs to be efficient. Hence, for each site $s_i \\in \\mathcal{S}$, the\nshortest path distances to all other nodes in $V$ are pre-computed. This\nrequires $O(|\\mathcal{S}|.|E|\\log |V|)$ time \\cite{algorithmsbook}. As the road\nnetwork graphs are almost planar, i.e., $|E|=O(|V|)$, the above cost simplifies\nto $O(n.N \\log N)$ where $n = |\\mathcal{S}|$ and $N = |V|$.\n\nUsing these distance values, the distance between each site-trajectory pair,\n$\\ensuremath{d_r}\\xspace(T_j,s_i)$, is computed. Suppose there are at most $l$ nodes in\na trajectory $T_j$, i.e., $\\max_{T_j \\in \\mathcal{T}}|T_j| = l$. Referring to\nthe definition of $\\ensuremath{d_r}\\xspace(T_j,s_i)$ in Sec.~\\ref{sec:formulation}, it follows that\n$\\ensuremath{d_r}\\xspace(T_j,s_i)$ can be computed in $O(l^2)$ time for a given site-trajectory\npair using the above site-node distances. Therefore, the total cost across all\n$n$ sites and $m$ trajectories is $O(mnl^2)$.\n\nEach site $s_i$ maintains the set of trajectories in ascending order of\n$\\ensuremath{d_r}\\xspace(T_j,s_i)$, while each trajectory $T_j$ similarly maintains the sites in\nascending order of $\\ensuremath{d_r}\\xspace(T_j,s_i)$. Sorting increases the time to\n$O(mn(\\log n + \\log m))$. The space complexity for storing all site-to-trajectory distances is $O(mn)$.\n\nAt query time, when $\\tau$ is available, the sets $\\ensuremath{TC}\\xspace$, $\\ensuremath{SC}\\xspace$, and the site\nweights can thus be computed from the stored distance matrix in $O(mn)$ time.\n\nAlternatively, the sets \\ensuremath{TC}\\xspace and \\ensuremath{SC}\\xspace can be computed at query time when $\\tau$\nis made available. However, for a city-sized road network, even a small $\\tau$\nincludes a lot of neighbors (e.g., for the Beijing dataset described in\nSec.~\\ref{sec:exp}, there are more than $40$ sites on average within a distance\nof $\\tau = 0.8$ Km from a site). Since this needs to be computed for all the\ncandidate sites, this approach is infeasible at query time.\n\n\\subsection{Inc-Greedy}\n\n \nWe adapt the general greedy heuristic for maximizing non-decreasing\nsub-modular functions \\cite{nemhauser1978analysis} to design \\textsc{Inc-Greedy}\\xspace. The main\nidea is based on the principle\nof \\emph{maximizing marginal gain}.\n\nThe algorithm starts with an empty set of sites $\\mathcal{Q}_0=\\varnothing$, and\nincrementally adds the sites (one in each of the $k$ iterations) such that each\nsuccessive addition of a site produces the maximal marginal gain in the utility\n$U$. 
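Before detailing the incremental bookkeeping, the selection loop itself can be summarized by the following Python sketch. This is a naive variant that recomputes marginal gains from scratch in every iteration and breaks ties arbitrarily, unlike the $\\alpha_{ji}$-based updates described next; it assumes $k$ does not exceed the number of covering sites:

```python
from collections import defaultdict

def greedy_select(psi, TC, k):
    """Greedy selection sketch: psi[j][i] is the preference score of
    trajectory T_j for site s_i (zero beyond the coverage threshold);
    TC maps each site index to the trajectories it covers."""
    chosen, U = [], defaultdict(float)    # U[j]: current utility of T_j
    candidates = set(TC)                  # sites covering no trajectory add nothing
    for _ in range(k):
        # marginal gain of each remaining site over the current selection
        gain = {i: sum(max(0.0, psi[j][i] - U[j]) for j in TC[i])
                for i in candidates - set(chosen)}
        best = max(gain, key=gain.get)
        chosen.append(best)
        for j in TC[best]:                # raise utilities of covered trajectories
            U[j] = max(U[j], psi[j][best])
    return chosen, sum(U.values())
```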
More specifically, if the set of selected sites after $\\theta-1$\niterations $(\\theta=1,\\dots,k)$ is\n$\\mathcal{Q}_{\\theta-1}=\\{s_1,\\dots,s_{\\theta-1}\\}$, in the $\\theta^{th}$\niteration, \\textsc{Inc-Greedy}\\xspace chooses the site $s_\\theta \\notin \\mathcal{Q}_{\\theta-1}$ such\nthat $U(\\mathcal{Q}_{\\theta} = \\mathcal{Q}_{\\theta-1}\\cup \\{s_{\\theta}\\}) -\nU(\\mathcal{Q}_{\\theta-1})$ is \\emph{maximal}.\n\n\nThe marginal utility gained due to the addition of $s_i$ to the set\n$\\mathcal{Q}_{\\theta-1}$ in iteration $\\theta=1,\\dots,k$ is $U_{\\theta}(s_i) =\nU(\\mathcal{Q}_{\\theta-1} \\cup \\{s_i\\}) - U(\\mathcal{Q}_{\\theta-1})$. Thus,\n$U(\\mathcal{Q}_\\theta)=\\sum_{i=1}^\\theta U_i(s_i)$. \nIf $\\alpha_{ji}$ denotes\nthe marginal gain in the utility of trajectory $T_j$ by the addition of site $s_i$\nto the existing set of chosen sites, then, at the end of each iteration $\\theta\n= 1, \\dots, k$, $U_\\theta(s_i) = \\sum_{\\forall T_j \\in \\ensuremath{TC}\\xspace(s_i)} \\alpha_{ji}$.\n\n\n\n\nThe \\textsc{Inc-Greedy}\\xspace algorithm begins by computing the sets $\\ensuremath{TC}\\xspace,\\ensuremath{SC}\\xspace$, and the\nsite-weights as explained above. It then initializes the\nmarginal utilities of each site $s_i \\in \\mathcal{S}$ to its site weight, i.e.,\n$U_1(s_i) = w_i$. Further, for each trajectory $T_j \\in \\ensuremath{TC}\\xspace(s_i)$, it\ninitializes $\\alpha_{ji} = \\ensuremath{\\psi}\\xspace(T_j,s_i)$.\n\nIn iteration $\\theta=1,\\dots,k$, it selects the site $s_{\\theta}$ with\n\\emph{maximal} marginal utility, i.e., $\\forall s_i \\notin\n\\mathcal{Q}_{\\theta-1}, U_\\theta(s_\\theta) \\geq U_\\theta(s_i)$. If multiple\ncandidate sites yield the same maximal marginal gain, it chooses the one with\nmaximal weight. Still, if there are ties, then without loss of generality, the\nsite with the highest index is selected.\n\nFor each trajectory $T_j \\in \\ensuremath{TC}\\xspace(s_\\theta)$, first the utility $U_j$ is updated\nas follows: $U_j \\leftarrow \\max(U_j,\\ensuremath{\\psi}\\xspace(T_j,s_\\theta))$. In case there is a\ngain in $U_j$, the marginal utility of each site $s_i \\in \\ensuremath{SC}\\xspace(T_j)$ is updated. \n\nThe pseudocode of \\textsc{Inc-Greedy}\\xspace is presented in Appendix~\\ref{app:inc-greedy}.\n\n\nIn Example~\\ref{ex:one}, at iteration $1$, the site $s_2$ has the largest\nmarginal utility of $0.61$ and is, therefore, selected. With\n$\\mathcal{Q}_1=\\{s_2\\}$, the marginal utility of adding the site $s_1$ is\n$0.4-0.11=0.29$, and that of site $s_3$ is $0.6-0.5=0.1$. Hence, \\textsc{Inc-Greedy}\\xspace selects\nthe sites $s_1$ and $s_2$, and yields a utility $U=0.9$, as indicated in\nTable~\\ref{tab:utilities}.\n\n\nWe next analyze the quality of \\textsc{Inc-Greedy}\\xspace.\n\n\\begin{lem}\n\t%\n\t\\label{lem:inc_topso2}\n\t%\n\n\tThe approximation bound of \\textsc{Inc-Greedy}\\xspace is $(1-1\/e)$.\n\t%\n\\end{lem}\n\n\n\n\\begin{proof}\n\t\n\tSince \\textsc{Inc-Greedy}\\xspace is a direct adaptation of the generic greedy algorithm for\n\tmaximizing non-decreasing sub-modular functions under cardinality\n\tconstraints \\cite{nemhauser1978analysis} with $U(\\varnothing)=0$, it offers the approximation bound of\n\t$\\left( 1-(1-\\frac{1}{k})^k \\right) \\geq \\left(1-\\frac{1}{e}\\right)$.\n\t%\n\t\\hfill{}\n\t%\n\\end{proof}\n\n\n\n\\begin{lem}\n\t%\n\t\\label{lem:inc_topso3}\n\t%\n\t$U(\\mathcal{Q}_k) \\geq (k\/n). 
U(\\mathcal{S})$.\n\t%\n\\end{lem}\n\n\n\\begin{proof}\n\t%\n\tFrom Theorem~\\ref{thm:submodular} it follows that successive marginal\n\tutilities are non-increasing, i.e., $U_\\theta(s_\\theta) \\geq\n\tU_{\\theta+1}(s_{\\theta+1})$. Thus,\n\t\\begin{small}\n\t\\begin{align}\n\tU_\\theta(s_\\theta) \\geq \\frac{\\sum_{i=\\theta}^n U_i(s_i)}{n-\\theta+1}\n\t\t\\geq \\frac{U(\\mathcal{Q}_n)-U(\\mathcal{Q}_{\\theta-1})}{n-\\theta+1}\n\t\t\\label{eq:sum of marginal utilities}\n\t\\end{align}\n\t \\end{small}\n\t \n\tNext, we claim that $\\forall \\theta=1,2,\\dots,n$,\n\t\\begin{small}\n\t\\begin{align*}\n\tU(\\mathcal{Q}_{\\theta}) \\geq (\\theta \/ n). U(\\mathcal{Q}_n)\n\t\\end{align*}\n\\end{small}\n\tWe will prove this by induction on $\\theta$.\n\t\n\tConsider the base case $\\theta=1$. Using Ineq.~\\eqref{eq:sum of marginal\n\tutilities}, $U(\\mathcal{Q}_1) = U_1(s_1) \\geq U(\\mathcal{Q}_n)\/n$ since\n\t$U(\\mathcal{Q}_0)=0$. Thus, the induction hypothesis is true for the base\n\tcase.\n\t\n\tNext, we assume it to be true for iteration $\\theta$.\n\tFor iteration $\\theta+1$,\n\t\\begin{small}\n\t\\begin{align*}\n\tU(\\mathcal{Q}_{\\theta+1})&= U(\\mathcal{Q}_\\theta) + U_{\\theta+1}(s_{\\theta+1})\\\\\n\t&\\ge U(\\mathcal{Q}_\\theta) + \\frac{U(\\mathcal{Q}_n)-U(\\mathcal{Q}_\\theta)}{n-\\theta}\\\\\n\t&\\ge \\left(\\frac{n-\\theta-1}{n-\\theta}\\right)U(\\mathcal{Q}_\\theta) + \\left(\\frac{1}{n-\\theta}\\right)U(\\mathcal{Q}_n)\n\t\\end{align*}\n\t\\end{small}\n\n\tUsing the induction hypothesis for iteration $\\theta$,\n\t\\begin{small}\n\t\\begin{align*}\n\tU(\\mathcal{Q}_{\\theta+1})&\\ge \\left(\\frac{n-\\theta-1}{n-\\theta}\\right)\\left(\\frac{\\theta}{n}\\right) U(\\mathcal{Q}_n) + \\left(\\frac{1}{n-\\theta}\\right)U(\\mathcal{Q}_n)\\\\\n\t&\\ge \\left(\\frac{\\theta+1}{n}\\right). U(\\mathcal{Q}_n) \n\t\\end{align*}\n\t\\end{small}\n\t\n\tThus, the induction hypothesis holds true for any $\\theta=1,\\dots,n$.\n\t\n\tSince $\\mathcal{Q}_n=\\mathcal{S}$, after $k$ iterations, $U(\\mathcal{Q}_k)\n\t\\geq (k\/n). U(\\mathcal{S})$.\t \n\t\\hfill{}\n\\end{proof}\n\n\n\\begin{lem}\n\t\\label{lem:inc-greedy topso}\n\tThe approximation bound of \\textsc{Inc-Greedy}\\xspace is $(k\/n)$.\n\\end{lem}\n\n\\begin{proof}\n\tSince $U$ is non-decreasing (from Th.~\\ref{thm:submodular}) and $OPT\n\t\\subseteq \\mathcal{S}$, therefore $U(OPT) \\leq U(\\mathcal{S})$. The rest follows from Lem.~\\ref{lem:inc_topso3}. \n\t\\hfill{}\n\\end{proof}\n\n\\begin{thm}\n\t\\label{thm:inc_topso}\n\tThe approximation bound of \\textsc{Inc-Greedy}\\xspace for TOPS\\xspace is $\\max\\{1-1\/e, k\/n\\}$.\n\\end{thm}\n\\begin{proof}\n\tThe result follows from Lem.~\\ref{lem:inc_topso2} and\n\tLem.~\\ref{lem:inc-greedy topso}.\n\t\\hfill{}\n\\end{proof}\n\nThe next result establishes the complexity of \\textsc{Inc-Greedy}\\xspace.\n\n\\begin{thm}\n\t%\n\t\\label{thm:complexity of incg}\n\t%\n\tThe time and space complexity bounds for \\textsc{Inc-Greedy}\\xspace are $O(k.mn)$ and $O(mn)$\n\trespectively.\n\t%\n\\end{thm}\n\\begin{proof}\n\t%\n\tAs discussed earlier in Sec.~\\ref{sec:approx}, the computation of the sets\n\t$\\ensuremath{TC}\\xspace$, $\\ensuremath{SC}\\xspace$ and the weights require $O(mn)$ time and storage.\n\t\n\tInitializing the marginal utilities $U_\\theta(s_i)$ of all the sites and\n\t$\\alpha_{ji}$ values for all trajectory site pairs require $O(mn)$ time.\n\t\n\tIn any iteration $\\theta=1,\\dots,k$, selecting the site with maximal\n\tmarginal utility requires at most $O(n)$ time. 
The largest size of any \\ensuremath{TC}\\xspace\n\tset is at most $m$ and that of any \\ensuremath{SC}\\xspace set is at most $n$. Hence, scanning\n\tthe set of trajectories $\\ensuremath{TC}\\xspace(s_\\theta)$ \n\n\tand updating the utilities requires $O(m)$ time. If $U_j$ gets updated,\n\tthen scanning the set of sites in $\\ensuremath{SC}\\xspace(T_j)$\n\n\tand updating the values of $U_\\theta(s_i)$ and $\\alpha_{ji}$ requires $O(n)$\n\ttime. Thus, the time complexity of any iteration $\\theta$ is $O(mn)$.\n\tHence, the total time complexity of $k$ iterations is $O(kmn)$.\n\n\tSince no space is required other than that for storing \\ensuremath{TC}\\xspace, \\ensuremath{SC}\\xspace and the site\n\tweights, the total space complexity is $O(mn)$.\n\t%\n\t\\hfill{}\n\t%\n\\end{proof}\n\n\n\\subsection{Limitations of Inc-Greedy}\n\\label{sec:limitations}\n\nAlthough \\textsc{Inc-Greedy}\\xspace provides a constant-factor approximation, it is not scalable to\nlarge real-life datasets. The reasons are:\n\n\\noindent\n\\textbf{High query cost}: \n\t%\n\tThe input parameters for a TOPS\\xspace query, $(k,\\tau, \\ensuremath{\\psi}\\xspace)$, are available only at query time. Hence, the\n\tcovering sets $\\ensuremath{TC}\\xspace$, $\\ensuremath{SC}\\xspace$, and the site weights (that depend on the value of $\\tau$) can be generated \\emph{only}\n\tat query time. Even if all pairwise site-to-trajectory distances are pre-computed, this step requires a high computation cost of $O(mn)$ (both in terms of time\n\tand memory) where $m,n$ denote the number of trajectories and candidate sites, respectively. For this reason, for real city-scale datasets\n\t(such as the Beijing dataset \\cite{cab1} used in our experiments, which has\n\tover 120,000 trajectories and 250,000 sites), \\textsc{Inc-Greedy}\\xspace is not scalable. Fig.~\\ref{subfig:tautime} shows that \\textsc{Inc-Greedy}\\xspace takes about 2000 sec. to complete for $\\tau=1.2$ Km. and $k=5$, and goes out of memory for $\\tau > 1.2$ Km.\n\t\n\\noindent\n\\textbf{High storage cost:}\n\tAs discussed above, to facilitate faster computation of the covering sets \\ensuremath{TC}\\xspace, we need to pre-compute all pairwise site-to-trajectory distances. \t\t\n\tHowever, for any city-scale dataset, this storage requirement is\n\tprohibitively large. For example, the Beijing dataset \\cite{cab1} \n\t requires close to 250 GB of\n\tstorage. This is unlikely to fit in the main memory and, therefore,\n\tmultiple expensive random disk seeks are required at run-time. Even with pre-computed distances up to 10 Km., \\textsc{Inc-Greedy}\\xspace crashes beyond $\\tau > 1.2$ Km. (shown in Tab.~\\ref{tab:memory}).\n\n\\noindent\n\\textbf{High update cost:}\n\t%\n\t%\n\t\\textsc{Inc-Greedy}\\xspace is also not amenable to updates in trajectories and sites. If a\n\tnew trajectory is added, its distance to all the sites needs to be computed\n\tand sorted. In addition, the sorted \\ensuremath{TC}\\xspace sets of all the sites need to be\n\tupdated as well. Similarly, if a new candidate site is added, the distances\n\tof all the trajectories to this site will need to be computed and sorted.\nSuch\n\tcostly update operations are impractical, especially at run-time.\n\n\nA careful analysis of \\textsc{Inc-Greedy}\\xspace reveals that there are two main stages of the\nalgorithm. 
In the first, the sets \\ensuremath{TC}\\xspace, \\ensuremath{SC}\\xspace and the site weights are computed.\nIn the second stage, some of these sets are updated in an iterative manner.\n\nThe first stage is heavier in terms of time and space requirements. Thus, to\nmake it efficient, we use an index structure, \\textsc{NetClus}\\xspace, which is described in\nSec.~\\ref{sec:offline} and Sec.~\\ref{sec:online}. The use of indexing reduces\nthe computational burden of the update (i.e., the second) stage as well.\nFurther, Section~\\ref{sec:updates} shows how \\textsc{NetClus}\\xspace allows easier handling of\nadditions and deletions of candidate sites and trajectories.\n\nIn addition, if the preference function $\\ensuremath{\\psi}\\xspace$ is binary as defined in\nDef.~\\ref{def:binary}, the update steps for \\textsc{Inc-Greedy}\\xspace can be performed quite\nefficiently using FM sketches \\cite{fm,krep}. We next describe the details.\n\n\\subsection{Using FM Sketch to speed up Inc-Greedy}\n\\label{sec:fm}\n\n\n\nThe main use of FM sketches is in counting the number of \\emph{distinct}\nelements in a set or union of sets \\cite{fm}. Suppose the maximum number of\ndistinct elements is $N$. An FM sketch is a bit vector, which is initially\nempty, and is of size at least $O(\\log_2 N)$. The probability of an element\nfrom the domain hashing into the $i^\\text{th}$ bit of the FM sketch is $2^{-i}$.\nThus, if a set has $\\Gamma$ elements, the position of the last marked bit in\nthe FM sketch is expected to be $O(\\log_2 \\Gamma)$. Hence, after the elements of a set are\nhashed to the FM sketch, the last set bit can be used to estimate the number of\ndistinct elements in the set. (The details of how the hashing function is\nchosen and the exact estimates are in \\cite{fm}.) Although the FM sketch does\nnot count the number of distinct elements exactly, it provides a multiplicative\nguarantee on the error in counting. When more copies of the FM sketch are used,\nthe error decreases.\n\nThe FM sketch can be used to speed up the update stage of \\textsc{Inc-Greedy}\\xspace, since selecting\na site with the maximal marginal utility is the same as selecting a site that\ncovers the largest number of \\emph{distinct} trajectories not yet covered.\n\nFor each site $s_i \\in \\ensuremath{\\mathcal{S}}\\xspace$, the set of trajectories that it covers, i.e.,\n$\\ensuremath{TC}\\xspace(s_i)$ is maintained as an FM sketch. Thus, instead of maintaining\n$O(m)$-sized lists for each site where $m$ is the total number of trajectories,\nwe only need to maintain $O(\\log_2 m)$-sized bit vectors per site.\n\nSuppose the count of distinct trajectories covered by a site $s_i$ is $\\chi_i$.\nThe \\emph{marginal} utility of site $s_j$ when site $s_i$ has been chosen is the\nnumber of distinct trajectories that the two sites \\emph{together} cover minus\nthe number of trajectories that site $s_i$ \\emph{alone} covers. The estimate of\nthe number of trajectories covered by the union of $s_j$ and $s_i$ can be\nobtained by the bitwise OR of the FM sketches corresponding to $s_j$ and $s_i$.\nIf this estimate is $\\chi_{ij}$, the marginal utility of site $s_j$ over site\n$s_i$ is $\\chi_{ij} - \\chi_i$.\n\nTherefore, when there are $n$ candidate sites, to determine the site that\nprovides the best marginal utility over site $s_i$, $n-1$ such bitwise OR\noperations are performed, and the maximum is chosen. 
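To make the mechanics concrete, the following is a toy single-copy FM sketch in Python. It is illustrative only: the exact hash construction and error guarantees are those of \\cite{fm}, and the paper's implementation (with $f$ independent copies stored as $32$-bit words) may differ in detail:

```python
import hashlib

class FMSketch:
    """Toy Flajolet-Martin sketch for counting distinct elements."""
    PHI = 0.77351                          # standard FM correction constant

    def __init__(self, bits=32):
        self.bits, self.bitmap = bits, 0

    def add(self, item):
        h = int(hashlib.md5(str(item).encode()).hexdigest(), 16)
        r = (h & -h).bit_length() - 1      # lowest set bit: reached with prob. 2^-(r+1)
        self.bitmap |= 1 << min(r, self.bits - 1)

    def union(self, other):
        """Bitwise OR of two sketches summarizes the union of the two sets."""
        merged = FMSketch(self.bits)
        merged.bitmap = self.bitmap | other.bitmap
        return merged

    def estimate(self):
        r = 0                              # position of the lowest unset bit
        while (self.bitmap >> r) & 1:
            r += 1
        return (2 ** r) / self.PHI
```

The marginal utility of a site over an already chosen one can then be estimated as `a.union(b).estimate() - a.estimate()`, mirroring $\\chi_{ij}-\\chi_i$ above.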
At the end of the\n$\\theta^\\text{th}$ iteration, the combined number of trajectories covered by the\nsites in $\\mathcal{Q}_{\\theta}$ is maintained as the \\emph{union} of the FM sketches\nobtained successively in the $\\theta$ iterations. The $(\\theta+1)^\\text{th}$\nsite is chosen by using this combined FM sketch as the base.\n\nThe above brute-force algorithm can be improved in the following way. The upper\nbound of the marginal utility for any site $s_j$ is its own utility. Thus, if\nthe current best marginal utility of another site $s_k$ is already greater than\nthat, the union operation with $s_j$ is not required. If the sites are\n\\emph{sorted} according to their utilities, the scan can stop as soon as the\nfirst such site $s_j$ is encountered. All sites having a lower utility are\nguaranteed to be useless as well.\n\nIn our implementation, the FM sketches are stored as $32$-bit words. This\nallows handling of roughly $2^{32}$ (more than $4$ billion)\ntrajectories. The length $32$ is chosen since the bitwise OR operation of two\nsuch regular-sized words is extremely fast on modern processors. \n\n\n\\section{Algorithms for TOPS\\xspace}\n\n\\subsection{Linearization for Optimal Algorithm}\n\\label{app:ip}\n\nEach inequality in Ineq.~\\eqref{eq:MaxUtilityConstraint} can be expressed as\n$$U_j \\leq \\max\\{U_{j1}, U_{j2}\\}$$\nwhere\n$$U_{j1} \\leq \\max\\{\\ensuremath{\\psi}\\xspace(T_j,s_i).x_i|1 \\leq i \\leq \\lfloor n\/2 \\rfloor\\}$$\nand\n$$U_{j2} \\leq \\max\\{\\ensuremath{\\psi}\\xspace(T_j,s_i).x_i|\\lfloor n\/2 \\rfloor+1 \\leq i \\leq n\\}$$\n\nThe terms $U_{j1}$ and $U_{j2}$ can be recursively expressed in the same manner\nas in the case of $U_j$.\n\nFinally, the constraint $U_j \\leq \\max\\{U_{j1},U_{j2}\\}$ can be linearized as\nfollows:\n\\begin{align*}\nU_{j1} &\\leq U_{j2} + M.y_j \\\\\nU_{j2} &\\leq U_{j1} + M.(1-y_j) \\\\\nU_j &\\leq U_{j2} + M.y_j \\\\\nU_{j} &\\leq U_{j1} + M.(1-y_j) \\\\\ny_j &\\in \\{0,1\\}\n\\end{align*}\nwhere $M$ is a sufficiently large number.\n\n\\subsection{Inc-Greedy Algorithm}\n\\label{app:inc-greedy}\n\n\\begin{algorithm}[t]\n\t\\caption{\\textsc{Inc-Greedy}\\xspace}\n\t\\label{algo:incg}\n\\begin{algorithmic}[1]\n{\n\\scriptsize\n\\Procedure{\\textsc{Inc-Greedy}\\xspace}{$k,\\tau,\\ensuremath{\\psi}\\xspace$}\n\\State Compute the sets $\\ensuremath{TC}\\xspace,\\ensuremath{SC}\\xspace$ and the site-weights.\n\\State $\\mathcal{Q}_0 \\leftarrow \\varnothing$\n\\ForAll{$ s_i \\in \\mathcal{S}$} \n\\State $U_0(s_i) \\leftarrow w_i$\n\\State $\\forall T_j \\in \\ensuremath{TC}\\xspace(s_i), \\alpha_{ji} \\leftarrow \\ensuremath{\\psi}\\xspace(T_j,s_i)$\n\\EndFor\n\\State $\\forall T_j \\in \\mathcal{T}, U_j \\leftarrow 0$\n\\ForAll{$\\theta=1,\\dots,k$} \\label{line:for loop of iterations}\n\\State $s_\\theta\\leftarrow\\arg\\max_{\\forall s_i\\in\\mathcal{S} \\setminus \\mathcal{Q}_{\\theta-1} }\\{ U(\\mathcal{Q}_{\\theta-1}\\cup\\{s_i\\} )-U(\\mathcal{Q}_{\\theta-1} )\\}$\n\\State $\\mathcal{Q}_\\theta \\leftarrow \\mathcal{Q}_{\\theta-1} \\cup \\{s_\\theta\\}$\n\\ForAll{trajectory $T_j \\in \\ensuremath{TC}\\xspace(s_\\theta)$} \\label{line:for loop of trajectories}\n\\State $U_j \\leftarrow \\max(U_j,\\ensuremath{\\psi}\\xspace(T_j,s_\\theta))$ \\label{line:new Utility of trajectory j}\n\\If{$U_j$ changes}\n\\ForAll{site $s_i \\in \\ensuremath{SC}\\xspace(T_j), s_i \\notin \\mathcal{Q}_{\\theta}$} \\label{line:for loop of sites}\n\\State $\\alpha_{ji}^\\prime \\leftarrow 
\\max(0,\\ensuremath{\\psi}\\xspace(T_j,s_i)-U_j)$ \\label{line:delta utility}\n\\State $U_\\theta(s_i)\\leftarrow U_{\\theta-1}(s_i)-(\\alpha_{ji}-\\alpha_{ji}^\\prime)$\n\\State $\\alpha_{ji}\\leftarrow \\alpha_{ji}^\\prime$\n\\EndFor\n\\EndIf\n\\EndFor\n\\EndFor\n\\EndProcedure\n}\n\\end{algorithmic} \n\\end{algorithm}\n\n\nThe \\textsc{Inc-Greedy}\\xspace procedure is outlined in Algorithm~\\ref{algo:incg}. The algorithm\nbegins by computing the sets $\\ensuremath{TC}\\xspace,\\ensuremath{SC}\\xspace$, and the site-weights. It then\ninitializes the marginal utilities of each site $s_i \\in \\mathcal{S}$ to its\nsite weight, i.e., $U_0(s_i) = w_i$. Further, for each trajectory $T_j \\in\n\\ensuremath{TC}\\xspace(s_i)$, it initializes $\\alpha_{ji} = \\ensuremath{\\psi}\\xspace(T_j,s_i)$.\n\nIn iteration $\\theta=1,\\dots,k$, it selects the site $s_{\\theta}$ with\n\\emph{maximal} marginal utility, i.e., $\\forall s_i \\notin\n\\mathcal{Q}_{\\theta-1}, U_\\theta(s_\\theta) \\geq U_\\theta(s_i)$. If multiple\ncandidate sites yield the same maximal marginal gain, we choose the one with\nmaximal weight. Still, if there are ties, then without loss of generality, the\nsite with the highest index is selected.\n\nFor each trajectory $T_j \\in \\ensuremath{TC}\\xspace(s_\\theta)$ (line~\\ref{line:for loop of\ntrajectories}), first the utility $U_j$ is updated (line~\\ref{line:new Utility\nof trajectory j}). In case there is a gain in $U_j$, each site $s_i \\in\n\\ensuremath{SC}\\xspace(T_j)$ is processed as indicated in line~\\ref{line:for loop of sites}. In\nline~\\ref{line:delta utility}, the new marginal utility of trajectory $T_j$\nw.r.t. site $s_i$ is computed. Using this value, the marginal utility of site\n$s_i$ and $\\alpha_{ji}$ are updated.\n\n\\section{NetClus}\n\\label{app:nc}\n\n\\subsection{Clustering based on Jaccard Similarity}\n\\label{app:Jaccard}\n\n\\begin{table}[t]\n\\scriptsize\n\t\\begin{center}\n\t\\begin{tabular}{c|c|c}\n\t\\hline\n\t$\\tau$ (Km) & Running Time (s) & Memory (in GB)\\\\\n\t\\hline\n0\t&\t2890\t&\t3.88\t\\\\\n0.2\t&\t12473\t&\t8.93\t\\\\\n0.4\t&\t26693\t&\t9.98\t\\\\\n0.8\t&\t28391\t&\t14.23\t\\\\\n1.2\t&\t29058\t&\t16.57\t\\\\\n1.6\t&\t32647\t&\t20.85\t\\\\\n2.0\t&\t34618\t&\t26.74\t\\\\\n2.4 & \\multicolumn{2}{c}{Out of memory}\\\\\n\\hline \n\t\\end{tabular}\n\t\\tabcaption{Performance of Jaccard similarity based clustering for the Beijing road network using a Jaccard distance threshold $\\ensuremath{\\alpha}\\xspace=0.8$.}\n\t\\label{tab:jaccard}\n\\end{center}\n\\end{table}\n\nGiven two nodes $s_i,s_j \\in \\ensuremath{\\mathcal{S}}\\xspace$, the Jaccard similarity between their trajectory covers is\n\\begin{align*}\nJ_s(s_i,s_j) &= \\frac{|\\ensuremath{TC}\\xspace(s_i) \\cap \\ensuremath{TC}\\xspace(s_j)|}{|\\ensuremath{TC}\\xspace(s_i) \\cup \\ensuremath{TC}\\xspace(s_j)|}\n\\end{align*}\nThe Jaccard distance is\n\\begin{align*}\n\tJ_d(s_i,s_j)=1-J_s(s_i,s_j)\n\\end{align*}\n\nThe nodes in the road network $V$ are clustered using the above measure as follows.\n\nThe node $v_1$ with the highest weight, i.e., the highest sum of preference\nscores over the trajectories it covers (as defined in Sec.~\\ref{sec:approx}), is\nchosen as the first cluster center of the cluster $g_1$. 
All nodes that are\nwithin a threshold of $\\ensuremath{\\alpha}\\xspace$ Jaccard distance from $v_1$ are added to $g_1$.\nFrom the remaining set of unclustered nodes, the node with the highest weight\nis chosen as a cluster center, and the process is repeated until each node is\nclustered.\n\n\n\nThe results of clustering on the Beijing\\xspace dataset are shown in\nTable~\\ref{tab:jaccard}.\nThe limitations of Jaccard similarity based clustering are explained in Sec.~\\ref{sec:offline}.\n\n\n\n\\subsection{Complexity Analysis of Greedy-GDSP}\n\\label{app:gdsp}\n\n\\begin{thm}\n\t%\n\tGreedy-GDSP runs in $O(|V| . (\\nu \\log \\nu + \\eta))$ time where $\\nu$ is the\n\tmaximum number of vertices that are reachable within the largest round-trip\n\tdistance $R_{max}$ from any vertex $v$, and $\\eta$ is the number of clusters\n\treturned by the algorithm.\n\t%\n\\end{thm}\n\n\\begin{proof} \nAs the underlying graph\nmodels a road network which is roughly planar, we assume that the number of\nedges (i.e., road segments) incident on any set of $\\nu$ vertices is $O(\\nu)$.\nSince the cost of running the shortest path algorithm from a given node in a graph $G = (V,E)$ is\n$O(|E| \\log |V|)$ \\cite{algorithmsbook}, the initial distance\ncomputation for a given source vertex requires $O(\\nu \\log \\nu)$ time. Thus, the distance computation across all the vertices in $V$ takes $O(|V|.\\nu \\log \\nu)$. \nThe neighbors of each node $v$ can be maintained as a list sorted by the round-trip\ndistance from $v$. Therefore, computing the dominating set for a particular $R$\nrequires at most $O(\\nu)$ time for each vertex. The total time for the construction of dominating sets, hence, is $O(|V|.\\nu)$.\n\nChoosing the vertex with the largest \\emph{incremental} dominating set requires\nbitwise OR operations of FM sketches. For $|V|$ vertices, there are at most $|V|-1$\nsuch operations, each requiring $O(f)$ time (since there are $f$ copies of FM\nsketches, each with a constant size of $32$ bits). As the number of clusters\nproduced is $\\eta$, the running time is $O(|V|. \\eta)$.\n\nThe total running time is, therefore, $O(|V| . (\\nu \\log \\nu + \\eta))$.\n\\hfill{}\n\\end{proof}\n\n\n\\subsection{Complexity of NetClus}\n\\label{app:netclus-complexity}\n\n\\begin{thm}\n\t%\n\t\\label{appthm:complexity}\n\t%\n\tThe time and space complexities of \\textsc{NetClus}\\xspace are $O(k.\\eta_p.\\xi_p)$ and\n\t$O(\\sum_{p=1}^t (\\eta_p(\\xi_p + \\lambda_p)))$, respectively.\n\t%\n\\end{thm}\n\n\n\\begin{proof}\n\t%\n\tThe number of cluster representatives, $|\\widehat{\\mathcal{S}}|$, is at most\n\tthe number of clusters, $\\eta_p$. For any cluster $g_i \\in \\ensuremath{\\mathcal{I}}\\xspace_p$,\n\t$|\\ensuremath{\\mathcal{TL}}\\xspace(g_i)| = O(\\xi_p)$. We assume that the number of neighboring\n\tclusters for any cluster is bounded by a constant, i.e., $|\\ensuremath{\\mathcal{CL}}\\xspace(g_i)| =\n\tO(1)$.\n\t%\n\t(Table~\\ref{tab:cluster}, which lists typical mean values of\n\t$|\\ensuremath{\\mathcal{CL}}\\xspace(g_i)|$, empirically supports the assumption.) \n\n\n\tHence, computing the set $\\widehat{\\ensuremath{TC}\\xspace}(r_i)$\n\trequires at most $O(\\xi_p)$ time. The total time across all the $\\eta_p$\n\tclusters, therefore, is $O(\\eta_p. \\xi_p)$. The inverse maps\n\t$\\widehat{\\ensuremath{CC}\\xspace}(T_j)$ for the trajectories can be computed along with\n\t$\\widehat{\\ensuremath{TC}\\xspace}(r_i)$. Hence, the total time remains $O(\\eta_p. 
\\xi_p)$.\n\tUsing Th.~\\ref{thm:complexity of incg}, the subsequent \\textsc{Inc-Greedy}\\xspace phase requires\n\t$O(k.\\eta_p.\\xi_p)$ time. Therefore, the overall time complexity of the\n\talgorithm is $O(k.\\eta_p.\\xi_p)$.\n \n\tWe next analyze the space complexity for a particular index instance\n\t$\\ensuremath{\\mathcal{I}}\\xspace_p$. Storing the sets $\\ensuremath{\\mathcal{TL}}\\xspace(g_i)$, each of size at most $\\xi_p$,\n\tacross all the $\\eta_p$ clusters requires $O(\\eta_p.\\xi_p)$ space. Storing\n\tthe inverse maps $\\ensuremath{CC}\\xspace(T_j)$ for all the trajectories, thus, also requires\n\t$O(\\eta_p.\\xi_p)$ space. As there are at most $\\lambda_p$ vertices in any\n\tcluster, storing their ids and distances from the cluster center requires\n\t$O(\\lambda_p)$ space for each cluster. Assuming $|\\ensuremath{\\mathcal{CL}}\\xspace(g_i)| = O(1)$, the\n\ttotal storage cost for a cluster is $O(\\lambda_p)$. Therefore, the total\n\tstorage cost is $O(\\eta_p.(\\xi_p + \\lambda_p))$. Summing across all the\n\tindex instances, the entire space complexity is $O(\\sum_{p=1}^t\n\t(\\eta_p(\\xi_p + \\lambda_p)))$.\n\t%\n\t\\hfill{}\n\t%\n\\end{proof}\n\n\n\\section{Conclusions}\n\\label{sec:concl}\n\nIn this paper, we have proposed a generic TOPS\\xspace framework to solve the problem\nof finding facility locations for trajectory-aware services. We showed that the problem is NP-hard, and proposed a greedy heuristic with a constant-factor approximation bound. However, this heuristic does not scale to large datasets. Thus, we developed\nan index structure, \\textsc{NetClus}\\xspace, to make it practical for city-scale road networks.\nExtensive experiments over real datasets showed that \\textsc{NetClus}\\xspace yields solutions that\nare comparable with those of the greedy heuristic, while being significantly faster and incurring low memory overhead. The proposed framework can handle a wide class of objectives and additional constraints, thus making it highly generic and practical.\n\n\n\\section{The TOPS\\xspace Problem}\n\\label{sec:formulation}\n\n\\begin{table}[t]\n\\scriptsize\n\\centering\n\\begin{tabular}{cl}\n\\hline\n\\bf Symbol & \\bf Description \\\\\n\\hline\n\\hline\n$G=(V,E)$, $|V|=N$ & Road network $G$ with node set $V$ and edge set $E$\\\\\n$\\mathcal{T}$, $|\\mathcal{T}|= m$ & Set of trajectories\\\\\n$\\mathcal{S}$, $|\\mathcal{S}|=n$ & Set of candidate sites \\\\\n\\hline\n$\\ensuremath{d}\\xspace(u,v)$ & Distance of shortest path from node $u$ to $v$ \\\\\n$\\ensuremath{d_r}\\xspace(u,v)$ & Round-trip distance between nodes $u$ and $v$ \\\\\n$\\ensuremath{d_r}\\xspace(T_j,s_i)$ & Round-trip distance from trajectory $T_j$ to site $s_i$\\\\\n\\hline\n$k$ & Desired number of service locations\\\\\n$\\tau$ & Coverage threshold \\\\\n$\\ensuremath{\\psi}\\xspace(T_j,s_i)$ & Preference function for trajectory $T_j$ and site $s_i$\\\\\n\\hline\n$\\mathcal{Q} \\subseteq \\mathcal{S}$, $|\\mathcal{Q}| = k$ & Set of locations to set up service\\\\\n$U_j$ & Utility of trajectory $T_j$ over the set of sites $\\mathcal{Q}$ \\\\\n$U(\\mathcal{Q})=\\sum_{j=1}^m U_j$ & Total utility offered by $\\mathcal{Q}$ \\\\\n\\hline\n\\end{tabular}\n\\tabcaption{Symbols used in the paper.}\n\\label{tab:symbol}\n\\vspace*{-1mm}\n\\end{table}\n\nConsider a road network $G = (V,E)$ over a geographical area where $V$\ndenotes the set of road intersections, and $E$ denotes the set of road segments\nbetween two adjacent road intersections. 
The direction of the\ntraffic that passes over a road segment is modeled by directed edges.\n\nAssume a set of candidate sites $\\mathcal{S} = \\{s_1, \\cdots, s_n\\} \\subseteq\nV$ where a certain service or facility can be set up. The choice of\n$\\mathcal{S}$ is generally provided by the application itself by taking into\naccount various factors such as availability, reputation of the neighborhood, price\nof land, etc. Most of these factors are outside the main focus\nof this paper and are, therefore, not studied. We simply assume that the set\n$\\mathcal{S}$ is provided as input to our problem. However, as described\nlater, if all the latent factors of choosing a site can be encapsulated as its\ncost, we can handle it quite robustly.\n\nA candidate site can be located anywhere on the road network. If it is\nalready at a road intersection, then it is part of the set of vertices $V$. If\nnot, i.e., if it is in the middle of a road connecting vertices $u$ and $v$,\nwithout loss of generality, we augment $V$ to include this site as a new vertex\n$w$. We augment the edge set $E$ by two new edges $(u,w)$ and $(v,w)$ (with\nappropriate directions) and remove the old edge $(u,v)$. Thus, ultimately,\n$\\mathcal{S} \\subseteq V$.\n\nThe set of candidate sites $\\mathcal{S}$ can be in addition to the set of\nexisting service locations $\\mathcal{E_S}$.\n\nThe set of trajectories over the road network is denoted by $\\mathcal{T} =\n\\{T_1, \\cdots, T_m\\}$ where each trajectory is a sequence of nodes, $T_j =\n\\{v_{j_1}, \\cdots, v_{j_l}\\}$, $v_{j_i}\\in V$. The trajectories are usually\nrecorded as GPS traces and may contain arbitrary spatial points on the road\nnetwork. For our purpose, each trajectory is map-matched \\cite{map1} to form a\nsequence of road intersections through which it passes. We assume that each\ntrajectory belongs to a separate user. However, the framework can easily\ngeneralize to multiple trajectories belonging to a single user by taking the union of these trajectories. \n\nSuppose $\\ensuremath{d}\\xspace(v_i,v_j)$ denotes the shortest network distance along a directed\npath from node $v_i$ to $v_j$, and $\\ensuremath{d_r}\\xspace(v_i,v_j)$ denotes the shortest distance\nof a round-trip starting at node $v_i$, visiting $v_j$, and finally returning\nto $v_i$, i.e., $\\ensuremath{d_r}\\xspace(v_i,v_j)=\\ensuremath{d}\\xspace(v_i,v_j)+\\ensuremath{d}\\xspace(v_j,v_i)$. In general,\n$\\ensuremath{d}\\xspace(v_i,v_j) \\neq \\ensuremath{d}\\xspace(v_j,v_i)$, but $\\ensuremath{d_r}\\xspace(v_i,v_j) = \\ensuremath{d_r}\\xspace(v_j,v_i)$. With a\nslight abuse of notation, assume that $\\ensuremath{d_r}\\xspace(T_j,s_i)$ denotes the \\emph{extra}\ndistance traveled by the user on trajectory $T_j$ to access a service at site\n$s_i$. Formally, $\\ensuremath{d_r}\\xspace(T_j,s_i) = \\min_{\\forall v_k, v_l \\in T_j} \\{\n\\ensuremath{d}\\xspace(v_k,s_i) + \\ensuremath{d}\\xspace(s_i,v_l) - \\ensuremath{d}\\xspace(v_k,v_l) \\}$.\n\n\nIt is convenient for a user to access a service only if its location is not too\nfar from her trajectory. Thus, beyond a distance $\\tau$, we assume that the\nutility offered by a site $s_i$ to a trajectory $T_j$ is $0$. 
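This detour distance translates directly into code. A minimal sketch, assuming a shortest-path distance oracle `d` (e.g., backed by precomputed distances; the function name is illustrative):

```python
def detour_distance(traj, site, d):
    """d_r(T_j, s_i): extra distance traveled on trajectory traj (a sequence
    of nodes) to visit site, where d(u, v) is the directed shortest-path
    distance; examines O(l^2) node pairs for a trajectory of l nodes."""
    return min(d(vk, site) + d(site, vl) - d(vk, vl)
               for vk in traj for vl in traj)
```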
We refer to this\nuser-specified distance $\\tau$ as the \\emph{coverage threshold}.\n\n\\begin{defn}[Coverage] \n\t%\n\tA candidate site $s_i$ \\emph{covers} a trajectory $T_j$ if the distance\n\t$\\ensuremath{d_r}\\xspace(T_j,s_i)$ is at most $\\tau$, where $\\tau \\geq 0$ is the \\emph{coverage\n\tthreshold}.\n\t%\n\\end{defn}\n\nFor all sites within the coverage threshold $\\tau$, the user also specifies a\npreference function $\\ensuremath{\\psi}\\xspace$. The preference function $\\ensuremath{\\psi}\\xspace(T_j,s_i)$ assigns a\nscore (normalized to $[0,1]$) for a trajectory $T_j$ and a site $s_i$ that\nindicates how much $s_i$ is preferred by the user on trajectory $T_j$. Higher\nvalues indicate higher preferences, with $0$ indicating no preference. In\ngeneral, sites that are \\emph{closer} to the trajectory have \\emph{higher}\npreferences than those farther away. The usage of such preference functions is common in the location analysis literature \\cite{zeng2009pickup}.\n\n\\begin{defn}[Preference Function $\\ensuremath{\\psi}\\xspace$]\n\t\\label{def:pref}\n\t%\n$\\ensuremath{\\psi}\\xspace:(\\mathcal{T},\\mathcal{S})\\rightarrow[0,1]$ is a real-valued preference\nfunction defined as follows:\n\\begin{align}\n\t\\ensuremath{\\psi}\\xspace(T_j,s_i)=\n\t\t\\begin{cases}\n\t\t\tf(\\ensuremath{d_r}\\xspace(T_j,s_i)) & \\text{ if } \\ensuremath{d_r}\\xspace(T_j,s_i) \\le \\tau \\\\\n\t\t\t0 & \\text{otherwise}\n\t\t\\end{cases}\n\\end{align}\nwhere $f$ is a non-increasing function of $\\ensuremath{d_r}\\xspace(T_j,s_i)$. \n\\end{defn}\n \n\\begin{table}[t]\\scriptsize\n\t\\centering\n\t\\begin{tabular}{|c|c|c|c|}\n\t\\hline\n\t\\multirow{2}{*}{Trajectories} & \\multicolumn{3}{|c|}{Preference scores for different sites} \\\\\n\t\\cline{2-4}\n\t\t& $s_1$ & $s_2$ & $s_3$\\\\\n\t\t\t\\hline \n $T_1$ & $\\ensuremath{\\psi}\\xspace(T_1,s_1)=0.4$ & $\\ensuremath{\\psi}\\xspace(T_1,s_2)=0.11$ & $\\ensuremath{\\psi}\\xspace(T_1,s_3)=0$\\\\\n \\hline\n $T_2$ & $\\ensuremath{\\psi}\\xspace(T_2,s_1)=0$ & $\\ensuremath{\\psi}\\xspace(T_2,s_2)=0.5$ & $\\ensuremath{\\psi}\\xspace(T_2,s_3)=0.6$ \\\\\n \\hline \n\t\t\\end{tabular}\n\t\t\\tabcaption{Examples of trajectories with site preferences.}\n\t\t\\label{tab:example}\n\t\t\\vspace*{-2mm}\n\\end{table}\n \n \nThe goal of the TOPS\\xspace query is to report a set of $k$ sites $\\mathcal{Q}\n\\subseteq \\mathcal{S},\\ |\\mathcal{Q}| = k$, that maximizes the preference score\nover the set of trajectories. The preference score of a trajectory $T_j$ over a\nset of sites $\\mathcal{Q}$ is defined as the \\emph{utility function} $U_j$ for\n$T_j$, which is simply the \\emph{maximum} score corresponding to the sites in\n$\\mathcal{Q}$, i.e., $U_j = \\max_{s_i \\in \\mathcal{Q}}\\{\\ensuremath{\\psi}\\xspace(T_j,s_i)\\}$. 
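For instance, with the scores of Table~\\ref{tab:example} and $\\mathcal{Q}=\\{s_1,s_3\\}$, we get $U_1=\\max(0.4,0)=0.4$ and $U_2=\\max(0,0.6)=0.6$, i.e., a total utility of $1.0$, which is the optimal value reported in Table~\\ref{tab:utilities}. In code, one admissible preference function (a linear decay, just one of the many non-increasing choices permitted by Def.~\\ref{def:pref}) and the resulting trajectory utility might look as follows:

```python
def psi_linear(d_r, tau):
    """One admissible preference function: linear decay within tau."""
    return 1.0 - d_r / tau if d_r <= tau else 0.0

def trajectory_utility(traj, Q, d_r, tau):
    """U_j = max over chosen sites of psi(T_j, s_i); d_r(traj, s) is the
    detour distance (see the sketch above)."""
    return max((psi_linear(d_r(traj, s), tau) for s in Q), default=0.0)
```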
The various symbols used in the TOPS\\xspace formulation are listed in\nTable~\\ref{tab:symbol}.\n\nThe generic TOPS\\xspace query formulation is stated next.\n\n\\begin{prob}[TOPS\\xspace]\n\t%\n\tGiven a set of trajectories $\\mathcal{T}$ and a set of candidate sites\n\t$\\mathcal{S}$ that can host the services, the TOPS\\xspace problem with query\n\tparameters $(k,\\tau, \\ensuremath{\\psi}\\xspace)$ seeks to report the best $k$ sites,\n\t$\\mathcal{Q} \\subseteq \\mathcal{S}, \\ |\\mathcal{Q}| = k$, that maximize the\n\tsum of trajectory utilities, i.e., $\\mathcal{Q} = \\arg\\max \\sum_{j=1}^m U_j$\n\twhere $U_j = \\max_{s_i \\in Q}\\{\\ensuremath{\\psi}\\xspace(T_j,s_i)\\}$.\n\t%\n\\end{prob}\n\n\\textbf{Extensions and variants of TOPS\\xspace:} The TOPS\\xspace framework as defined above\nis \\emph{generic}. It can have various extensions as described in Sec.~\\ref{sec:cost} and Sec.~\\ref{sec:capacity} and can work with existing facilities as well (Sec.~\\ref{sec:existing}). Finally, the preference function in Def.~\\ref{def:pref} subsumes various existing models as exemplified in Sec.~\\ref{sec:other}.\n\n\n\n\n\\subsection{Properties of TOPS\\xspace}\n\nThe TOPS\\xspace problem is NP-hard. To show this, we first define a specific instance\nof TOPS\\xspace where the preference function $\\ensuremath{\\psi}\\xspace(T_j,s_i)$ is a binary function. A\nbinary function is a natural choice in situations where a service provider wants\nto intercept the maximum number of trajectories that pass within its vicinity \\cite{berman1995locatingMain}.\n\n\\begin{defn}[Binary Instance of TOPS\\xspace]\n\t%\n\t\\label{def:binary}\n\t%\n\tTOPS\\xspace with parameters $(k,\\tau,\\ensuremath{\\psi}\\xspace)$ where the preference score is given by:\n\n\t\\begin{align}\n\t\t\\ensuremath{\\psi}\\xspace(T_j,s_i)=\n\t\t\t\\begin{cases}\n\t\t\t\t1 & \\text{ if } \\ensuremath{d_r}\\xspace(T_j,s_i) \\le \\tau \\\\\n\t\t\t\t0 & \\text{otherwise}\n\t\t\t\\end{cases}\n\t\\end{align}\n\t%\n\\end{defn}\n\n\n\\begin{thm}\n\t%\n\t\\label{thm:nphard}\n\t%\n\tTOPS\\xspace is NP-hard.\n\t%\n\\end{thm}\n\n\\begin{proof}\n\t%\n\tThis follows from the fact that the binary instance of TOPS\\xspace is NP-hard, as the set cover problem is reducible to it \\cite{berman1995locatingMain}.\n\n\n\t\\hfill{}\n\t%\n\\end{proof}\n\nNext, we show that the sum of utilities $U = \\sum_{j=1}^m U_j$ is a\n\\emph{non-decreasing sub-modular} function.\nA function $f$ defined on any\nsubset of a set $\\mathcal{S}$ is \\emph{sub-modular} if for any pair of subsets\n$\\mathcal{Q,R} \\subseteq \\mathcal{S}$, $f(\\mathcal{Q}) + f(\\mathcal{R}) \\geq\nf(\\mathcal{Q} \\cup \\mathcal{R}) + f(\\mathcal{Q} \\cap \\mathcal{R})$\n\\cite{nemhauser1978analysis}.\n\n\\begin{thm}\n\t%\n\t\\label{thm:submodular}\n\t%\n\n\t$U = \\sum_{j=1}^m U_j$ is non-decreasing and sub-modular. \n\t%\n\\end{thm}\n\n\n\n\\begin{proof}\n\t%\n\tConsider any pair of subsets $\\mathcal{Q,R} \\subseteq \\mathcal{S}$ such\n\tthat $\\mathcal{Q} \\subseteq \\mathcal{R}$. Since $U_j = \\max_{s_i \\in\n\t\\mathcal{Q}}\\{\\ensuremath{\\psi}\\xspace(T_j,s_i)\\}$, $U_j(\\mathcal{R})\\geq U_j(\\mathcal{Q})$.\n\tTherefore, $\\sum_{j=1}^m U_j(\\mathcal{R})=U(\\mathcal{R})\\geq \\sum_{j=1}^m\n\tU_j(\\mathcal{Q})=U(\\mathcal{Q})$. 
Hence, $U$ is a non-decreasing function.\n\n\tTo show that $U$ is sub-modular, following \\cite{nemhauser1978analysis}, it\n\tis sufficient to show that for any two subsets $\\mathcal{Q,R} \\subseteq\n\t\\mathcal{S}, \\ \\mathcal{Q} \\subset \\mathcal{R}$, for any site $s \\in\n\t\\mathcal{S} \\setminus \\mathcal{R}$, $U(\\mathcal{Q}\\cup \\{s\\})-U(\\mathcal{Q})\n\t\\geq U(\\mathcal{R}\\cup \\{s\\})-U(\\mathcal{R})$. For our formulation, it is,\n\tthus, enough if for any trajectory $T_j \\in \\mathcal{T}$,\n\t%\n\t\\begin{align}\n\t\t%\n\t\t\\label{eq:submodular}\n\t\t%\n\t\tU_j(\\mathcal{Q}\\cup \\{s\\})-U_j(\\mathcal{Q})\n\t\t\t\\geq U_j(\\mathcal{R}\\cup \\{s\\})-U_j(\\mathcal{R})\n\t\t%\n\t\\end{align}\n\n\tSuppose the site $s^* \\in \\mathcal{R}\\cup \\{s\\}$ offers the maximal utility\n\tto trajectory $T_j$ in the set of sites $\\mathcal{R}\\cup \\{s\\}$. There can\n\tbe two cases: \\\\\n\t%\n\t(a) $s^* = s$: $U_j(\\mathcal{Q} \\cup \\{s\\})=U_j(\\{s\\})=U_j(\\mathcal{R} \\cup\n\t\\{s\\})$. Further, $U_j(\\mathcal{Q})\\leq U_j(\\mathcal{R})$ since\n\t$U_j=\\{\\max\\{\\ensuremath{\\psi}\\xspace(T_j,s_i)\\}|s_i \\in Q\\}$. These two inequalities together lead to\n\tIneq.~\\eqref{eq:submodular}. \\\\\n\t%\n\t(b) $s^* \\neq s$: Here, $U_j(\\mathcal{R} \\cup\n\t\\{s\\})-U_j(\\mathcal{R}) = 0$. As $U_j()$ is non-decreasing,\n\t$U_j(\\mathcal{Q} \\cup \\{s\\}) \\geq U_j(\\mathcal{Q})$. Thus,\n\tIneq.~\\eqref{eq:submodular} follows.\n\t%\n\n\n\n\t%\n\t\\hfill{}\n\t%\n\\end{proof}\n\n\n\n\n \n\n\\section{Introduction and Motivation}\n\\label{sec:Intro} \n\n\n\n\n\\emph{Facility Location queries} \n(or \\emph{Optimal location (OL) queries}) in a road network aim to identify the best\nlocations to set up new facilities with respect to a given service\n\\cite{tops_icde,du2005optimal, ghaemi2010optimal, xiao2011optimal, chen2014efficient,li2016mining}.\nExamples include setting up new retail stores, gas stations, or cellphone base\nstations. OL queries also find applications in various spatial decision support\nsystems, resource planning and infrastructure management \\cite{myinfocom,\nPeopleInMotionCDR}. \n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.60\\columnwidth]{.\/Figures\/motivation.pdf}\n \\figcaption{Illustration of the need for trajectory-aware querying for optimal\n locations. Each line represents a separate user trajectory.}\n \\label{fig:motivation}\n \\vspace*{-2mm}\n\\end{figure}\n\n\n\nWith growing applications of data-driven location-based systems, the importance of OL queries is well-recognized in the database community \\cite{zhang2006progressive,xiao2011optimal,chen2014efficient}.\nHowever, most of the existing works assume the users to be \\emph{fixed} or \\emph{static}. Such an\nassumption is often too restrictive. For example, services such as gas\nstations, ATMs, billboards, traffic monitoring systems, etc. are widely\naccessed by users while commuting. Further, it is common for many users to make their daily purchases while returning from their offices. Consequently, the placement of facilities\nfor these services requires taking into consideration the mobility patterns (or\ntrajectories) of the users rather than their static locations. We refer to such\nservices as \\emph{trajectory-aware services}. Formally, a \\emph{trajectory} is\na sequence of location-time coordinates that lie on the path of a moving user. 
Note that trajectories strictly generalize the static\nusers' scenario because static users can always be modeled as trajectories with\na single user location. Thus, trajectories capture user patterns more\neffectively.\nSuch trajectory data are commonly available from GPS traces \\cite{RideSharing}\nor CDR (Call Detail Records) data \\cite{PeopleInMotionCDR} recorded through\ncellphones, social network check-ins, etc.\nRecently, there have been many works\nin the area of trajectory data analytics \\cite{edwp,DTW,LCSS, DISSIM, ERP,\nlee2007trajectory, kalnis2005discovering, jeung2008discovery, li2010swarm}.\n\nTo illustrate the need for trajectory-aware optimal location queries, consider\nFig.~\\ref{fig:motivation}. There are 5 locations that are either homes or\noffices and 5 trajectories of users commuting across them. A company wants to\nopen 2 new gas stations. For simplicity, we assume that a trajectory of a user\nis \\emph{satisfied} if it passes through at least one gas station. If\nonly the static locations are considered, i.e., any two out of the five office\nand residential areas are to be selected, no combination of gas stations would\nsatisfy all the users. In contrast, if we factor in the mobility of the users,\nchoosing $S_1$ and $S_3$ as the installation locations satisfies all the\ntrajectories of the users.\n\nNote that it is not enough to simply look at trajectory counts in each possible\ninstallation location and then choose the two most frequent ones ($S_1$ and\n$S_2$). Such a combination may not be effective since the two sites can share a\nlarge number of common trajectories, thereby reducing each other's\nutilities.\n\nIn this work, we formalize the problem of OL queries over trajectories of users\nin road networks. We refer to this as the \\emph{TOPS\\xspace (Trajectory-aware Optimal\nPlacement of Services)} query. Given a set of user trajectories $\\mathcal{T}$,\nand a set of candidate sites $\\mathcal{S}$ over a road network that can host\nthe services, the TOPS\\xspace query with input parameters $k$ and preference function $\\ensuremath{\\psi}\\xspace$ seeks to report the\nbest $k$ sites $\\mathcal{Q} \\subseteq \\mathcal{S}$ maximizing a \\emph{utility function} defined over the preference function $\\ensuremath{\\psi}\\xspace$, which captures how much a particular candidate site is \\emph{preferred} by a given trajectory.\n\nFacility location queries with respect to trajectories have been studied by a number of previous\nworks \\cite{berman1992optimal, berman1995locatingDiscretionary,\nberman1995locatingMain, berman2002generalized,berman1998flow, li2013trajectory}.\nAlthough the formulations are not identical, the common eventual goal is to\nidentify the best sites. However, all these works remain limited to theoretical\nexercises and cannot be applied in real-life scenarios due to a\nnumber of issues as explained next.\n\n$\\bullet$ \\textbf{Data-based mobility model:} Existing techniques are neither\nbased on real trajectories nor on real road networks \\cite{berman1992optimal,\nberman1995locatingDiscretionary, berman1995locatingMain,\nberman2002generalized,berman1998flow}. They base their\nsolutions on simplistic assumptions such as travel along shortest paths and\nsynthetic road networks. It is well known that the shortest path assumption does\nnot hold in real life \\cite{utraj}. In many works, the distances are \\emph{not} computed over the road network but approximated using some spatial distance measure such as the $L_2$ norm.
Our framework is the first to study TOPS\\xspace on\nreal trajectories over real road networks.\n\n$\\bullet$ \\textbf{Generic framework:} We develop the first generic framework to\nanswer TOPS\\xspace queries across a wide family of preference functions $\\ensuremath{\\psi}\\xspace$ that are non-increasing w.r.t. the distance between any pair of trajectory and candidate site. The proposed framework\nencompasses many of the existing formulations and also considers other practical factors such as\ncapacity constraints, site-costs, dynamic updates, etc.\n\n$\\bullet$ \\textbf{Scalability:} The state-of-the-art technique for a basic version of the TOPS\\xspace query\n\\cite{berman1995locatingMain} requires prohibitively large memory.\nConsequently, it fails to scale to urban-scale datasets (further details in Sec.~\\ref{sec:exp}). Hence, a scalable\nframework for TOPS\\xspace queries is a basic necessity. In addition, all OL queries,\nincluding TOPS\\xspace, are typically\nused in an interactive fashion by varying the various parameters such as $k$ and\nthe coverage threshold $\\tau$ \\cite{chen2014efficient}. Moreover, in certain ventures, such as the deployment of mobile ATM vans (\\url{https:\/\/goo.gl\/WjSPvx}), real-time answers are needed based on current trajectory patterns. Hence, practical\nresponse time with the ability to absorb data updates is critical. This factor has been completely ignored in the\nexisting works.\n\n$\\bullet$ \\textbf{Extensive benchmarking:} Since TOPS\\xspace queries and their\nvariants are NP-hard, heuristics have been proposed. How does their effectiveness\nvary across road-network topologies? Are these heuristics biased towards certain\nspecific parameter settings? The existing techniques are generally silent on\nthese questions. We, on the other hand, perform benchmarking that is grounded\nin reality by extensively studying the performance of TOPS\\xspace across multiple\nmajor city topologies and other important parameters.\n\nTo summarize, the proposed framework is the first practical and generic solution\nto address TOPS\\xspace queries. Fig.~\\ref{fig:flowchart} depicts the top-level flow\ndiagram of our solution. The raw GPS traces of user movements are first\nmap-matched \\cite{map1} to the corresponding road network. Using the\nmap-matched trajectories, a multi-resolution clustering of the road network is\nbuilt to construct the index structure \\emph{\\textsc{NetClus}\\xspace}. Indexed views of both the\ncandidate sites and the trajectories are maintained in a compressed format at\nvarious granularities. This completes the offline phase. In the online phase,\ngiven the query parameters, the optimal clustering resolution to answer the\nquery is identified, and the corresponding views of the trajectories and road\nnetwork are analyzed to retrieve the best $k$ sites for facility locations.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.80\\columnwidth]{.\/Figures\/flowchart.pdf}\n\n \\figcaption{Flowchart of the TOPS\\xspace querying framework.}\n \\label{fig:flowchart}\n \\vspace*{-2mm}\n\n\\end{figure}\n\nThe major contributions of our work are as follows:\n\n\\begin{enumerate}[nosep,leftmargin=*]\n\n\t\\item We propose a highly generic trajectory-aware optimal placement of\n\t\tservices problem, TOPS\\xspace, that can handle the wide class of\n preference functions $\\ensuremath{\\psi}\\xspace$ that are non-increasing w.r.t.
the distance between any pair of trajectory and candidate site (Sec.~\\ref{sec:formulation}).\n\n\t\\item TOPS\\xspace is NP-hard, and even the greedy approach does not scale to city-scale datasets (Sec.~\\ref{sec:algo}). To overcome this bottleneck, we design a\n\t\t\\emph{multi-resolution clustering-based} index structure called\n\t\t\\emph{\\textsc{NetClus}\\xspace} that generates a \\emph{compressed} representation of the road\n\t\tnetwork and the set of trajectories (Sec.~\\ref{sec:offline} and\n\t\tSec.~\\ref{sec:online}).\nThe solutions returned by the \\textsc{NetClus}\\xspace framework have bounded quality guarantees.\n\n\t\\item The proposed TOPS\\xspace formulation is capable of absorbing dynamic updates (Sec.~\\ref{sec:updates}) and managing realistic constraints such as cost, capacity, and existing services (Sec.~\\ref{sec:variants}).\n\n\t\\item Extensive experiments on real datasets show that \\textsc{NetClus}\\xspace offers solutions that are comparable in terms of quality with those of the greedy heuristic, while having practical response times and fairly low memory footprints\n\t\t (Sec.~\\ref{sec:exp}).\n\\end{enumerate}\n\n\\section{Offline Construction of NetClus}\n\\label{sec:offline}\n\nAs discussed in Sec.~\\ref{sec:limitations}, \\textsc{Inc-Greedy}\\xspace has two computationally\nexpensive components. While the FM sketch expedites the information update\ncomponent, the computation of \\ensuremath{TC}\\xspace, \\ensuremath{SC}\\xspace, etc. still remains a bottleneck with\n$O(mn)$ time and storage complexity. To overcome this scalability issue, we\ndevelop an index structure.\n\nOne of the most natural ways to achieve the above objectives is to cluster the\nsites in the road network to reduce their number, and then apply \\textsc{Inc-Greedy}\\xspace on the\ncluster representatives. The clustering of sites can be done according to two\nbroad strategies. The first is to cluster the sites that are highly similar in terms of their trajectory covering sets \\ensuremath{TC}\\xspace. The similarity between these covering sets can be quantified using the \\emph{Jaccard similarity} measure (Appendix~\\ref{app:Jaccard}).\nHowever, this\napproach is not practical due to two major limitations: (1)~Since the coverage threshold $\\tau$ is available \\emph{only} at query time, the covering sets \\ensuremath{TC}\\xspace can be computed only at query time. Hence, the clustering can be performed only at query time, which leads to impractical query times.\n(2)~Alternatively, multi-resolution clustering may be performed based on a few\nfixed values of $\\tau$, and at query time, a clustering instance of a\nparticular resolution is chosen based on the value of the query parameter\n$\\tau$. However, this still requires computing the similarity between each pair of sites. Owing to the large number of sites and the large size of the covering sets, such computation demands impractical memory overheads.\nHence, we adopt the second clustering option, that of\ndistance-based clustering.\n\nWe first state one basic observation. If two sites are close, the sets of\ntrajectories they cover are likely to have a high overlap. Hence, when $k \\ll\nn$, which is typically the case, the sites chosen in the answer set are likely\nto be distant from each other. The index structure, \\textsc{NetClus}\\xspace, is designed based on\nthe above observation.\n\nOur method follows two main phases: offline and online.
In the offline phase,\nclusters are built at multiple resolutions. This forms the different index\ninstances. A particular index instance is useful for a particular range of\nquery coverage thresholds. In the online phase, when the query parameters are\nknown, first the appropriate index instance is chosen. Then the \\textsc{Inc-Greedy}\\xspace algorithm is\nrun with the cluster representatives of that instance.\n\nWe explain the\noffline phase in this section and the online phase in the next. The important\nnotations used in the \\textsc{NetClus}\\xspace scheme are listed in Table~\\ref{tab:netclus}.\n\n\\subsection{Distance-Based Clustering}\n\\label{sec:clustering}\n\nThe clustering method is parameterized by a distance threshold $R$, which is\nthe \\emph{maximum cluster radius}. The round-trip distance from any node\nwithin the cluster to the cluster-center is constrained to be at most $2R$.\nThe radius is varied to obtain clusters at multiple resolutions. We describe\nthe significance of the choices of $R$ later.\n\nThe objective of the clustering algorithm is to partition the set of nodes in\nthe road network, $V$, into disjoint clusters such that the number of such\nclusters is \\emph{minimal}. This leads not only to savings in index storage,\nbut more importantly, it results in faster query time, as \\textsc{Inc-Greedy}\\xspace is run on a\nsmaller number of cluster representatives. We next describe how to achieve this\nobjective.\n\n\\subsubsection{Generalized Dominating Set Problem (GDSP)}\n\nGiven an undirected graph $G=(V,E)$, the \\emph{dominating set problem (DSP)}\n\\cite{haynes1998fundamentals} computes a set $D \\subseteq V$ of \\emph{minimal}\ncardinality, such that for each vertex $v \\in V - D$, there exists a vertex $u\n\\in D$, such that $(u,v) \\in E$. DSP is NP-hard \\cite{haynes1998fundamentals}.\nIn \\cite{chen2012approximation}, it was generalized to the \\emph{measured\ndominating set} problem for weighted graphs. In this work, we propose a\n\\emph{generalized dominating set problem (GDSP)} that uses a different notion\nof dominance from \\cite{chen2012approximation}.\n\n\\begin{prob}[GDSP]\n\t%\n\tGiven a weighted directed graph $G=(V,E,W)$ where $W: E \\rightarrow\n\t\\mathbb{N}$ assigns a positive weight for each edge in $E$, and a constant\n\t$R>0$, a vertex $u \\in V$ is said to \\emph{dominate} another vertex $v \\in\n\tV$ if $d(u,v) + d(v,u) \\leq 2R$, where $d$ denotes the directed path weight.\n\tThe \\emph{GDSP} problem computes a set $D \\subseteq V$ of \\emph{minimal}\n\tcardinality such that for any $v \\in V - D$, there exists a vertex $u \\in D$\n\tsuch that $u$ \\emph{dominates} $v$.\n\t%\n\\end{prob}\n\n\\emph{GDSP} is NP-hard due to a direct reduction from DSP where each undirected edge is replaced by two directed edges of weight $1$, and $R = 1$, so that a vertex dominates exactly its neighbors.\n\n\\subsubsection{Greedy Algorithm for GDSP}\n\\label{sec:greedy algo for GDSP}\n\nTo solve GDSP, we adapt the greedy algorithm proposed in\n\\cite{chen2012approximation}. We refer to our algorithm as \\emph{Greedy-GDSP}.\nThe only input parameter to the clustering process is the cluster radius $R$.\n\nFirst, the \\emph{dominating set} of every vertex $v$, denoted by $\\Lambda(v)$,\nis computed. This is achieved by running the shortest path algorithm from a\nsource vertex up to distance $2R$.\nThe dominance relationship is \\emph{symmetric}, i.e., $u \\in \\Lambda(v)\n\\Leftrightarrow v \\in \\Lambda(u)$.\n\nThe main part of the Greedy-GDSP algorithm is iterative.
In the first\niteration, the vertex $v$ that dominates the largest number of vertices is\nchosen. The set of dominated vertices, $\\Lambda(v)$, forms a new \\emph{cluster}\nwith $v$ as the cluster center. The vertices $v$ and $\\Lambda(v)$ are not\nconsidered for further comparisons. In addition, the vertices in $\\Lambda(v)$\nare removed from the dominating sets of other non-clustered vertices. In other\nwords, for each $u \\in \\Lambda(v)$, if $u \\in \\Lambda(w)$ for some non-clustered\nvertex $w$, then $\\Lambda(w) = \\Lambda(w) - \\{u\\}$. In the subsequent\niterations, the vertex with the largest \\emph{incremental} dominating set as\nproduced from previous iterations is chosen. The dominated vertices that are\nnot part of other clusters form a new cluster. The algorithm terminates when\nall the vertices are clustered.\n\nSimilar to the \\textsc{Inc-Greedy}\\xspace algorithm, we use FM sketches to efficiently update the\ndominating sets and choose the vertex with the largest incremental dominating\nset in each iteration. The details are the same as those described in\nSec.~\\ref{sec:fm}, with the trajectory covering sets for each candidate site in \\textsc{Inc-Greedy}\\xspace\nreplaced by the dominating sets for each vertex in Greedy-GDSP.
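The following Python sketch summarizes Greedy-GDSP with exact sets (our implementation uses FM sketches instead of exact sets; the adjacency-list representation and all names below are illustrative). Truncated Dijkstra runs on the forward and the reversed graph together yield the round-trip distances needed for dominance:\n\n\\begin{verbatim}\nimport heapq\n\ndef truncated_dijkstra(adj, src, limit):\n    # shortest-path distances from src, explored only up to 'limit'\n    dist, pq = {src: 0.0}, [(0.0, src)]\n    while pq:\n        d, u = heapq.heappop(pq)\n        if d > dist[u]:\n            continue\n        for v, w in adj.get(u, ()):\n            nd = d + w\n            if nd <= limit and nd < dist.get(v, float('inf')):\n                dist[v] = nd\n                heapq.heappush(pq, (nd, v))\n    return dist\n\ndef greedy_gdsp(adj, radj, vertices, R):\n    # exact-set Greedy-GDSP; 'radj' is the reversed adjacency list\n    lam = {}\n    for v in vertices:                      # dominating set Lambda(v)\n        fwd = truncated_dijkstra(adj, v, 2 * R)\n        bwd = truncated_dijkstra(radj, v, 2 * R)\n        lam[v] = {u for u in fwd\n                  if u in bwd and fwd[u] + bwd[u] <= 2 * R}\n    unclustered, clusters = set(vertices), {}\n    while unclustered:\n        # vertex with the largest incremental dominating set\n        c = max(unclustered, key=lambda v: len(lam[v] & unclustered))\n        members = (lam[c] & unclustered) | {c}\n        clusters[c] = members               # c is the cluster center\n        unclustered -= members\n    return clusters\n\\end{verbatim}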
\\subsubsection{Analysis of Greedy-GDSP}\n\nThe next two theorems characterize the\napproximation bound and time complexity of Greedy-GDSP.\n\n\\begin{thm}\n\t%\n\tThe cardinality of the dominating set computed using the proposed algorithm\n\tis within an approximation bound of $(1 + \\epsilon').(1 + \\ln n)$ of the\n\toptimal, where $\\epsilon'$ is the approximation error of FM sketches.\n\t%\n\\end{thm}\n\\begin{proof}\n\t%\n\tFollowing the analysis in \\cite{chen2012approximation}, the Greedy-GDSP\n\talgorithm offers an approximation bound of $H_n = 1 + \\frac{1}{2} + \\dots +\n\t\\frac{1}{n} \\leq 1 + \\ln n$, which was shown to be tight unless $P=NP$.\n\tThis, however, does not consider the approximation due to the use of FM\n\tsketches. Incorporating that, the bound becomes $(1 + \\epsilon').(1 + \\ln\n\tn)$ where $\\epsilon'$ is the approximation error of FM sketches.\n\t%\n\t\\hfill{}\n\t%\n\\end{proof}\n\n\\begin{thm}\n\t%\n\t\\label{thm:gdsp-complexity}\n\tGreedy-GDSP runs in $O(|V| . (\\nu \\log \\nu + \\eta))$ time where $\\nu$ is the\n\tmaximum number of vertices that are reachable within the largest round-trip\n\tdistance $R_{max}$ from any vertex $v$, and $\\eta$ is the number of clusters\n\treturned by the algorithm.\n\t%\n\\end{thm}\nThe proof is given in Appendix~\\ref{app:gdsp}.\n\n\\begin{table}[t]\n\\scriptsize\n\\centering\n\\begin{tabular}{cl}\n\\hline\n\\textbf{Symbol} & \\multicolumn{1}{l}{\\textbf{Description}} \\\\\n\\hline\n\\hline\n$\\ensuremath{TC}\\xspace(s)$ & Set of trajectories covered by site $s$ \\\\\n$\\ensuremath{\\mathcal{CL}}\\xspace(g)$ & Neighbors of cluster $g$ \\\\\n$\\ensuremath{\\mathcal{TL}}\\xspace(g)$ & Set of trajectories passing through cluster $g$ \\\\\n$\\ensuremath{TC}\\xspace(g)$ & Set of trajectories passing through $g$ and its neighbors $\\ensuremath{\\mathcal{CL}}\\xspace(g)$\\\\\n$\\widehat{\\ensuremath{d_r}\\xspace}(T,s)$ & Estimate of $\\ensuremath{d_r}\\xspace(T,s)$ in the clustered space\\\\\n$\\widehat{\\ensuremath{TC}\\xspace}(s)$ & Estimate of $\\ensuremath{TC}\\xspace(s)$ in the clustered space\\\\\n$\\ensuremath{\\gamma}\\xspace$ & Resolution at which index instances change \\\\\n$t$ & Number of index instances \\\\\n$\\ensuremath{\\mathcal{I}}\\xspace_p$ & Index instance \\\\\n$R_p$ & Cluster radius for $\\ensuremath{\\mathcal{I}}\\xspace_p$ \\\\\n$\\eta_p$ & Number of clusters for $\\ensuremath{\\mathcal{I}}\\xspace_p$ \\\\\n$\\Lambda(v)$ & Dominating set for node $v$ \\\\\n$f$ & Number of bit vectors for FM sketch \\\\\n$\\epsilon$ & Error parameter for FM sketch \\\\\n\\hline\n\\end{tabular}\n\\tabcaption{Important notations used in the algorithms.}\n\\label{tab:netclus}\n\\vspace*{2mm}\n\\end{table}\n\\subsection{Selection of Cluster Representatives}\n\\label{sec:representative}\n\nIn order to run \\textsc{Inc-Greedy}\\xspace on the clusters, each cluster needs to choose a\n\\emph{representative} candidate site. This may be different from the cluster\ncenter that was used to construct the cluster. The flexibility is needed since\nthe cluster representative should \\emph{necessarily} be a candidate site,\nalthough the cluster center may be any vertex in $V$. Taking\ninto account the fact that the cluster representative should summarize the\ninformation about the cluster and the trajectories that pass through it, and\nuse this information to compete against the other cluster representatives in\nthe online phase, we study two alternatives for choosing the cluster\nrepresentative:\n\n\\begin{enumerate}\n\n\t\\item The \\emph{most frequently} accessed candidate site, i.e., the one\n\t\tthrough which the largest number of trajectories pass.\n\n\t\\item The candidate site that is \\emph{closest} to the cluster center.\n\n\\end{enumerate}\n\nWhile the first option guarantees that the utility of the cluster is at least\nthat of its best site, the second summarizes the distribution of trajectories\nbetter.\nEmpirical studies show that the utilities returned by the two alternatives are\nquite similar, but the second alternative is marginally better.\nConsequently,\nwe adopt the second option.\n\n\\subsection{Cluster Information}\n\\label{sec:information}\n\nSuppose the above clustering algorithm produces clusters of radius $R$, i.e., the maximum round-trip distance of any node within a cluster to its cluster center is at most $2R$.
Then, two clusters are considered \\emph{neighbors} of each other if their\ncenters are within a round-trip distance of $4.R(1+\\ensuremath{\\gamma}\\xspace)$, where $\\ensuremath{\\gamma}\\xspace \\in\n(0,1]$ is the index resolution parameter to be described in\nSec.~\\ref{sec:index}. This choice of neighborhood is explained in Sec.~\\ref{sec:topsc problem}.\n\nAs part of the index structure, every cluster $g_i$ stores the following\ninformation:\n\n\\begin{enumerate}\n\n\t\\item Cluster \\emph{center}, $c_i$.\n\n\t\\item Cluster \\emph{representative}, $r_i$.\n\n\t\\item Trajectory set, i.e., list of \\emph{trajectories} passing through at\n\t\tleast one site in $g_i$, along with their round-trip distance to $c_i$,\n\t\t$\\ensuremath{\\mathcal{TL}}\\xspace(g_i) = \\{\\langle T_j, \\ensuremath{d_r}\\xspace(T_j,c_i) \\rangle\\}$.\n\n\t\\item Cluster \\emph{neighbors} along with the round-trip distance between\n\t\ttheir centers and $c_i$, $\\ensuremath{\\mathcal{CL}}\\xspace(g_i) = \\{\\langle g_j, \\ensuremath{d_r}\\xspace(c_i,c_j)\n\t\t\\rangle\\}$, sorted by $\\ensuremath{d_r}\\xspace(c_i,c_j)$.\n\n\t\\item Set of \\emph{nodes} in the cluster and their round-trip distance to\n\t\t$c_i$, $\\{\\langle v_i, \\ensuremath{d_r}\\xspace(v_i,c_i) \\rangle\\}$.\n\n\\end{enumerate}\n\nThe set $\\ensuremath{\\mathcal{TL}}\\xspace(g_i)$ is computed by scanning the sequence of nodes in each\ntrajectory $T_j$. If $v_i \\in T_j$, then $T_j$ is added to $\\ensuremath{\\mathcal{TL}}\\xspace(g(v_i))$,\nwhere $g(v_i)$ denotes the cluster in which $v_i$ resides. Thus, a trajectory\nis represented as a \\emph{sequence of clusters}. As neighboring nodes in any\ntrajectory are likely to fall into the same cluster, this allows a\n\\emph{compressed} representation of the trajectory by collapsing consecutive\ncopies of the same cluster into one, as sketched below. This compression contributes towards the\nefficiency of \\textsc{NetClus}\\xspace in terms of both space and time.
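A minimal Python sketch of this compression (the node-to-cluster map and all names are illustrative):\n\n\\begin{verbatim}\nfrom itertools import groupby\n\ndef compress(trajectory, cluster_of):\n    # trajectory as a node sequence -> cluster sequence with\n    # consecutive duplicates collapsed\n    return [g for g, _ in groupby(cluster_of[v] for v in trajectory)]\n\ncluster_of = {1: 'g_i', 2: 'g_i', 3: 'g_j', 4: 'g_j', 5: 'g_i'}\nassert compress([1, 2, 3, 4, 5], cluster_of) == ['g_i', 'g_j', 'g_i']\n\\end{verbatim}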
\\begin{figure}[t]\n\\centering\n\\subfloat[Example]{\n \\includegraphics[width=0.34\\columnwidth]{.\/Figures\/clusters.pdf}\n\\label{fig:clusters}\n }\n \\subfloat[Covering Sets]{\n \\includegraphics[width=0.58\\columnwidth]{.\/Figures\/coverage.pdf}\n\\label{fig:coverage}\n }\n\\figcaption{\\textsc{NetClus}\\xspace example using $\\ensuremath{\\gamma}\\xspace=0.5$.}\n\\vspace*{-2mm}\n\\end{figure}\n\n\\begin{example}\n\t%\n\t\\emph{\n\tFig.~\\ref{fig:clusters} illustrates an example of \\textsc{NetClus}\\xspace clustering with\n\tcluster radius $R$ and $\\ensuremath{\\gamma}\\xspace=0.5$.\n\tThe clusters $g_i$, $g_j$, $g_k$ and $g_l$ have centers $c_i$, $c_j$, $c_k$ and\n\t$c_l$ respectively. The cluster $g_i$ has two candidate sites $r_i$ and $s_i$.\n\tSince $r_i$ is located at $c_i$, it is chosen as the cluster-representative.\n\tWhile cluster $g_j$ has no candidate site, each of the clusters $g_k$ and $g_l$\n\thas one candidate site, namely, $r_k$ and $r_l$, respectively, each of\n\twhich is a\n\tcluster-representative. The distances between the cluster centers are as\n\tfollows:\n\t$\\ensuremath{d_r}\\xspace(c_i,c_j)=5.5R$, $\\ensuremath{d_r}\\xspace(c_i,c_k) \\gtrsim 2R$, and $\\ensuremath{d_r}\\xspace(c_i,c_l) = 4R$ where\n\t$\\gtrsim$ means just greater\n\tthan. The distance between any other pair of cluster centers is greater\n\tthan or equal to $6R$. Given that $\\ensuremath{\\gamma}\\xspace =0.5$, the distance between any two\n\tneighboring cluster centers lies in the range $[2R,6R)$. Based on this,\n\t\tthe cluster neighbors, \\ensuremath{\\mathcal{CL}}\\xspace, are shown in the figure. The trajectories\n\t\t$T_1$, $T_2$ and $T_3$ pass through nodes $c_j$, $r_i$ and $s_i$,\n\t\trespectively. The figure lists the trajectory sets $\\ensuremath{\\mathcal{TL}}\\xspace$ for each\n\t\tcluster. Note that when\n\t\t$\\tau \\ge 4R$, it is guaranteed that any site covers any trajectory\n\t\tthat passes through the same cluster. For example, $r_k$ covers $T_2$\n\t\tas it passes through the cluster $g_k$ that contains $r_k$.\n\t}\n\t%\n\t\\hfill{}\n\t%\n\t$\\square$\n\t%\n\\end{example}\n\n\\subsection{Multi-Resolution Index Structure}\n\\label{sec:index}\n\nWe next explain how the multi-resolution index structure, \\textsc{NetClus}\\xspace, is built by using\nthe clustering algorithm outlined above. Assume that the normal range of the query\ncoverage threshold $\\tau$ is $[\\tau_{min}$, $\\tau_{max})$. (We discuss the two\nextreme cases later.) \\textsc{NetClus}\\xspace maintains $t$ instances of index structures\n$\\ensuremath{\\mathcal{I}}\\xspace_0,\\dots,\\ensuremath{\\mathcal{I}}\\xspace_{t-1}$ of varying cluster radii. From one instance to the\nnext, the radius increases by a factor of $\\ensuremath{(1 + \\gamma)}\\xspace$ for some $\\ensuremath{\\gamma}\\xspace > 0$. Thus,\nthe total number of index instances is $t = \\lfloor \\log_{\\ensuremath{(1 + \\gamma)}\\xspace} (\\tau_{max} \/\n\\tau_{min}) \\rfloor +1$. For each instance, all the clusters and their\nassociated information are stored.\n\nConsider a particular index instance $\\ensuremath{\\mathcal{I}}\\xspace_p$ with cluster radius $R_p$. As\ndiscussed above, the maximum round-trip distance from a site $s_i$ belonging to\nthe cluster $g_i$ to a trajectory $T_j$ that passes through $g_i$ is at most\n$4.R_p$, i.e., $\\ensuremath{d_r}\\xspace(T_j,s_i) \\leq 4.R_p$. Thus, if the coverage threshold $\\tau < 4.R_p$,\nthen it is not guaranteed that $s_i$ covers $T_j$. Hence, the index\ninstance $\\ensuremath{\\mathcal{I}}\\xspace_p$ is not useful for any $\\tau < 4.R_p$, and a finer instance\nwith a smaller cluster radius should be used.\n\nOn the other hand, if $\\tau$ is too large, too many neighboring clusters may\ncover a trajectory. Therefore, intuitively, it makes sense to switch to a\nhigher index instance with a larger cluster radius so that fewer\nclusters need to be processed. The parameter \\ensuremath{(1 + \\gamma)}\\xspace captures the ratio of\n$\\tau$ to $4.R_p$ beyond which the switch is made. Thus, if $\\tau > 4.R_p .\n\\ensuremath{(1 + \\gamma)}\\xspace$, a higher index instance is used.\n\nTherefore, the range of useful $\\tau$ for the index instance $\\ensuremath{\\mathcal{I}}\\xspace_p$ is\n$[4.R_p, 4.R_p.\\ensuremath{(1 + \\gamma)}\\xspace)$. Hence, the lowest cluster radius is $R_0 = (\\tau_{min} \/ 4)$,\nand the successive cluster radii for instances $\\ensuremath{\\mathcal{I}}\\xspace_p, \\ p = 1, \\dots, t-1$\nare $R_p = \\ensuremath{(1 + \\gamma)}\\xspace^{p} R_0$. From one index instance to the next, as the\ncluster radius $R_p$ grows, the number of clusters, $|\\ensuremath{\\mathcal{I}}\\xspace_p|$, falls\nexponentially.
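This bookkeeping is summarized by the following Python sketch (the parameter values are illustrative; the instance-selection formula is restated in Sec.~\\ref{sec:online}):\n\n\\begin{verbatim}\nimport math\n\ndef build_radii(tau_min, tau_max, gamma):\n    # cluster radii R_0, ..., R_{t-1} of the index instances\n    t = math.floor(math.log(tau_max \/ tau_min, 1 + gamma)) + 1\n    R0 = tau_min \/ 4.0\n    return [R0 * (1 + gamma) ** p for p in range(t)]\n\ndef choose_instance(tau, tau_min, gamma):\n    # index p of the instance with 4*R_p <= tau < 4*R_p*(1+gamma)\n    return math.floor(math.log(tau \/ tau_min, 1 + gamma))\n\nradii = build_radii(tau_min=0.1, tau_max=3.2, gamma=0.75)\np = choose_instance(0.8, 0.1, 0.75)\nassert 4 * radii[p] <= 0.8 < 4 * radii[p] * 1.75\n\\end{verbatim}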
\\noindent\n\\textbf{Choice of $\\ensuremath{\\gamma}\\xspace$:} The number of index instances $t$ depends on $\\ensuremath{\\gamma}\\xspace$.\nA smaller value of $\\ensuremath{\\gamma}\\xspace$ creates more instances, thereby requiring\nlarger storage and offline running time. The approximation error is also\naffected by $\\ensuremath{\\gamma}\\xspace$. When $\\ensuremath{\\gamma}\\xspace$ is smaller, the range of $\\tau$ handled by a\nparticular index instance is tighter. Therefore, the distance approximations\nare better.\nExperimental results showing the empirical impact of \\ensuremath{\\gamma}\\xspace are discussed in\nSec.~\\ref{sec:parameters}.\n\n\\noindent\n\\textbf{Extreme cases of $\\tau$:} The extreme values of the range of $\\tau$,\nnamely, $\\tau_{min}$ and $\\tau_{max}$, are assigned respectively as the\n\\emph{minimum} and \\emph{maximum} round-trip distance between any two sites in\n$\\mathcal{S}$. This particular choice is guided by the following analysis. If\nthere is a query with $\\tau < \\tau_{min}$, then the method degenerates to\nnormal \\textsc{Inc-Greedy}\\xspace as each site becomes a cluster by itself. If, on the other hand,\n$\\tau \\geq \\tau_{max}$, then \\textsc{NetClus}\\xspace reports any $k$ sites, as each site covers\nevery other site, and consequently, all the trajectories. Hence, the\nmulti-resolution \\textsc{NetClus}\\xspace is applicable to \\emph{all} query coverage thresholds.\n\n\\section{Querying using NetClus}\n\\label{sec:online}\n\nWe next explain the online phase of querying that starts after the query\nparameters $(k,\\tau)$ are available.\n\nThe first important consideration is choosing the index instance $\\ensuremath{\\mathcal{I}}\\xspace_p$\nthat supports the given query threshold $\\tau$. The index $p$ is computed as $p\n= \\lfloor \\log_{\\ensuremath{(1 + \\gamma)}\\xspace} (\\tau \/ \\tau_{min}) \\rfloor$. This ensures that $4.R_p\n\\leq \\tau < 4.R_p. \\ensuremath{(1 + \\gamma)}\\xspace$ where $R_p$ is the cluster radius for\n$\\ensuremath{\\mathcal{I}}\\xspace_p$.\n\nWe next discuss how to apply TOPS\\xspace on the clustered space.\n\n\\subsection{TOPS-Cluster\\xspace Problem}\n\\label{sec:topsc problem}\n\nConsider a cluster $g_i$ with its representative $r_i$, and a trajectory $T_j$ passing through a cluster $g_j$, where $g_j$ may or may not be equal to $g_i$. Then $T_j \\in\n\\ensuremath{TC}\\xspace(r_i)$ if and only if $\\ensuremath{d_r}\\xspace(T_j,r_i) \\leq \\tau$. In the clustered space,\nhowever, we only store the distances of each trajectory from the centers of the clusters that it passes through. Hence, it is not possible to compute $\\ensuremath{d_r}\\xspace(T_j,r_i)$ without extensive online computation.\nTherefore, an\n\\emph{approximate} distance $\\widehat{\\ensuremath{d_r}\\xspace}(T_j,r_i)$ is computed and used instead. The\nround-trip distance estimate from $T_j$ to $r_i$ is\n\\begin{align}\n\t\\label{eq:approxcover}\n\t\\widehat{\\ensuremath{d_r}\\xspace}(T_j,r_i) = \\ensuremath{d_r}\\xspace(T_j,c_j) + \\ensuremath{d_r}\\xspace(c_j,c_i) + \\ensuremath{d_r}\\xspace(c_i,r_i)\n\\end{align}\nIt is important to note that this distance can be estimated using \\emph{only} the\ninformation computed in the offline phase.\nSince the distances are approximate, the \\emph{approximate trajectory cover} of\n$r_i$ is\n\\begin{align}\n\t\\label{eq:approx tc}\n\t\\widehat{\\ensuremath{TC}\\xspace}(r_i)= \\{T_j \\in \\ensuremath{TC}\\xspace(g_i) | \\widehat{\\ensuremath{d_r}\\xspace}(T_j,r_i) \\leq \\tau\\}\n\\end{align}\nwhere $\\ensuremath{TC}\\xspace(g_i) = \\ensuremath{\\mathcal{TL}}\\xspace(g_i) \\cup \\{T_j \\in \\ensuremath{\\mathcal{TL}}\\xspace(g_j)|g_j \\in \\ensuremath{\\mathcal{CL}}\\xspace(g_i)\\}$ consists of the\ntrajectories passing through $g_i$ and its neighbors $\\ensuremath{\\mathcal{CL}}\\xspace(g_i)$.
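A small Python sketch of Eqs.~\\eqref{eq:approxcover} and \\eqref{eq:approx tc} follows; every quantity used below is available from the offline phase, and all names are illustrative:\n\n\\begin{verbatim}\ndef approx_cover(g_i, tau, TL, CL, d_center_to_rep):\n    # Approximate trajectory cover TC_hat of the representative of\n    # g_i. TL[g] holds (T_j, d_r(T_j, c_g)) pairs; CL[g_i] holds\n    # (g_j, d_r(c_j, c_i)) pairs for the neighbors of g_i.\n    covered = set()\n    # g_i itself (center distance 0) plus its neighbors CL(g_i)\n    for g_j, d_centers in [(g_i, 0.0)] + CL[g_i]:\n        for T_j, d_traj in TL[g_j]:\n            d_hat = d_traj + d_centers + d_center_to_rep[g_i]\n            if d_hat <= tau:\n                covered.add(T_j)\n    return covered\n\\end{verbatim}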
Any trajectory $T_j \\in \\widehat{\\ensuremath{TC}\\xspace}(r_i)$ satisfies $\\widehat{\\ensuremath{d_r}\\xspace}(T_j,r_i) \\leq \\tau$, which implies that there exists a cluster $g_j$ such that $T_j \\in \\ensuremath{\\mathcal{TL}}\\xspace(g_j)$ and $\\ensuremath{d_r}\\xspace(c_i,c_j) \\leq\n\\tau$. For the index instance $\\ensuremath{\\mathcal{I}}\\xspace_p$, since\n$\\tau < 4R_p\\ensuremath{(1 + \\gamma)}\\xspace$, this condition implies $\\ensuremath{d_r}\\xspace(c_i,c_j)\n\\leq 4R_p\\ensuremath{(1 + \\gamma)}\\xspace$. This is the reason why the neighborhood of a cluster is\ndefined as those clusters whose centers are within a round-trip distance of\n$4R_p\\ensuremath{(1 + \\gamma)}\\xspace$ in Sec.~\\ref{sec:information}.\n\nConsequently, to compute the set $\\widehat{\\ensuremath{TC}\\xspace}(r_i)$, it is sufficient to\nexamine only the trajectory sets of $g_i$ and its neighbors. For\neach trajectory $T_j \\in \\ensuremath{\\mathcal{TL}}\\xspace(g_j)$ where $g_j$ is a neighbor of $g_i$, the\napproximate distance $\\widehat{\\ensuremath{d_r}\\xspace}(T_j,r_i)$ is computed. The trajectory $T_j$\nis included in $\\widehat{\\ensuremath{TC}\\xspace}(r_i)$ if $\\widehat{\\ensuremath{d_r}\\xspace}(T_j,r_i) \\le \\tau$.\n\n\\begin{table}[t]\n\t\\begin{center}\n\t\\begin{tabular}{c|ccc}\n\t$\\widehat{\\ensuremath{d_r}\\xspace}$ & $r_i$ & $r_k$ & $r_l$\\\\\n\t\\hline\n\t$T_1$ & $5.5R$ & $\\gtrsim 9.5R$ & $10.5R$\\\\\n\t$T_2$ & $0R$ & $\\le 4R$ & $5R$ \\\\\n\t$T_3$\t& $2R$ & $\\gtrsim 6R$ & $7R$\\\\\n\t\t\\end{tabular}\n\t\t\\tabcaption{Distance estimates $\\widehat{\\ensuremath{d_r}\\xspace}(\\cdot)$ for the example in Fig.~\\ref{fig:clusters}.}\n\t\t\\label{tab:distance-estimates}\n\\end{center}\n\\end{table}\n\n\\begin{example}\n\t%\n\t\\emph{\n\tThe $\\ensuremath{TC}\\xspace$ sets for the three clusters in Fig.~\\ref{fig:clusters} are shown\n\tin Fig.~\\ref{fig:coverage}. Using Eq.~\\eqref{eq:approxcover}, the distance estimates between each pair of cluster representative and trajectory are shown in Table~\\ref{tab:distance-estimates}. Since $\\ensuremath{\\gamma}\\xspace=0.5$, the supported range of $\\tau$ is\n\t$[4R, 6R)$. Thus, if $\\tau=4R$, $\\widehat{\\ensuremath{TC}\\xspace}(r_i)=\\{T_2,T_3\\}$,\n\t$\\widehat{\\ensuremath{TC}\\xspace}(r_k)=\\{T_2\\}$, and $\\widehat{\\ensuremath{TC}\\xspace}(r_l)=\\varnothing$. Similarly, if $\\tau=5.75R$, $\\widehat{\\ensuremath{TC}\\xspace}(r_i)=\\{T_1,T_2,T_3\\}$,\n\t$\\widehat{\\ensuremath{TC}\\xspace}(r_k)=\\{T_2\\}$, and $\\widehat{\\ensuremath{TC}\\xspace}(r_l)=\\{T_2\\}$.\n}\t\n\t%\n\t\\hfill{}\n\t%\n\t$\\square$\n\t%\n\\end{example}\n\nIf a trajectory $T_j \\in \\widehat{\\ensuremath{TC}\\xspace}(r_i)$, then it also lies in the set\n$\\ensuremath{TC}\\xspace(r_i)$ since $\\ensuremath{d_r}\\xspace(T_j,r_i) \\leq \\widehat{\\ensuremath{d_r}\\xspace}(T_j,r_i) \\le \\tau$. However, the\nreverse is not true, since there may be a trajectory $T_j$ such that\n$\\ensuremath{d_r}\\xspace(T_j,r_i) \\leq \\tau$, but the estimate $\\widehat{\\ensuremath{d_r}\\xspace}(T_j,r_i) > \\tau$. Therefore, $\\widehat{\\ensuremath{TC}\\xspace}(r_i) \\subseteq \\ensuremath{TC}\\xspace(r_i)$.\nFor example, in Fig.~\\ref{fig:clusters}, $\\ensuremath{d_r}\\xspace(T_3,r_k) \\le 4R$, but $\\widehat{\\ensuremath{d_r}\\xspace}(T_3,r_k) \\gtrsim 6R$ which exceeds any supported value of $\\tau$.
Thus, $T_3 \\notin \\widehat{\\ensuremath{TC}\\xspace}(r_k)$.\n\nFinally, based on the query preference function $\\ensuremath{\\psi}\\xspace$, the preference score\n$\\ensuremath{\\psi}\\xspace(T_j,r_i)$ is computed for every cluster representative $r_i$ and every\ntrajectory $T_j \\in \\widehat{\\ensuremath{TC}\\xspace}(r_i)$ in its cover.\n\nUsing these approximate covering sets, we run the following instance of the TOPS\\xspace\nproblem, called \\textsc{TOPS-Cluster\\xspace}.\n\n\\begin{prob}[TOPS-Cluster\\xspace]\n\t%\n\tGiven an index instance $\\ensuremath{\\mathcal{I}}\\xspace_p$ defined over the road network $G =\n\t(V,E)$, let $\\widehat{\\mathcal{S}} \\subseteq \\mathcal{S}$ denote the set of cluster\n\trepresentatives in $\\ensuremath{\\mathcal{I}}\\xspace_p$. The TOPS-Cluster\\xspace problem seeks to report a\n\tset of $k$ cluster representatives, $\\mathcal{Q} \\subseteq\n\t\\widehat{\\mathcal{S}}$, $|\\mathcal{Q}| = k$, such that\n\t$U(\\mathcal{Q})$ is \\emph{maximal}.\n\t%\n\\end{prob}\n\nTo\nsolve TOPS-Cluster\\xspace, we employ \\textsc{Inc-Greedy}\\xspace on the set of cluster representatives\n$\\widehat{\\mathcal{S}}$ using the above covering sets $\\widehat{\\ensuremath{TC}\\xspace}(r_i)$, as sketched below.\n\nWhen the preference function $\\ensuremath{\\psi}\\xspace$ is binary, FM sketches can be employed for faster updating of marginal utilities during the execution of \\textsc{Inc-Greedy}\\xspace on the cluster representatives, in the same manner as described in Sec.~\\ref{sec:fm}.
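For the binary instance, the greedy selection over the representatives reduces to the following minimal Python sketch (exact sets shown for clarity; our implementation uses FM sketches, and all names are illustrative):\n\n\\begin{verbatim}\ndef greedy_tops_cluster(tc_hat, k):\n    # greedily pick k representatives; tc_hat maps a cluster\n    # representative to its approximate trajectory cover TC_hat\n    chosen, covered = [], set()\n    candidates = set(tc_hat)\n    for _ in range(min(k, len(candidates))):\n        # representative with the largest marginal utility\n        best = max(candidates, key=lambda r: len(tc_hat[r] - covered))\n        chosen.append(best)\n        covered |= tc_hat[best]\n        candidates.remove(best)\n    return chosen, len(covered)\n\\end{verbatim}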
\\subsection{Analysis of NetClus}\n\\label{sec:analysis}\n\\noindent\n\\textbf{Quality Analysis:}\nThe first result is due to a direct application of Lem.~\\ref{lem:inc_topso3}.\n\n\\begin{cor}\n\t%\n\t\\label{cor:approx}\n\t%\n\tThe utility of the set $\\widehat{\\mathcal{Q}}$ returned by the \\textsc{NetClus}\\xspace framework is\n\tbounded as follows: $U(\\widehat{\\mathcal{Q}}) \\geq\n\t(k \/ |\\widehat{\\mathcal{S}}|) U(\\widehat{\\mathcal{S}})$.\n\t%\n\\end{cor}\n\nIf the index instance $\\ensuremath{\\mathcal{I}}\\xspace_p$ is used for a particular query threshold\n$\\tau$, then $|\\widehat{\\mathcal{S}}|$ is at most the number of clusters, $\\eta_p$.\nThe next result states the approximation guarantees offered by the \\textsc{NetClus}\\xspace\nframework.\n\n\\begin{thm}\n\t\\label{thm:topsbound}\n\tThe approximation bound offered by \\textsc{NetClus}\\xspace for the binary instance of TOPS\\xspace is\n\t$(k \/ \\eta_p)$. For a general preference function\n\t$\\ensuremath{\\psi}\\xspace(T_j,s_i)=f(\\cdot)$, the approximation bound is $f(\\tau).(k \/ \\eta_p)$.\n\\end{thm}\n\n\\begin{proof}\n\t%\n\tAssuming all nodes are candidate sites, we observe that each trajectory $T_j \\in \\ensuremath{\\mathcal{T}}\\xspace$ is covered by the cluster representative $r_i$ of a cluster $g_i$ that it passes through. This is because the maximum round-trip distance between $T_j$ and $r_i$ is at most $4R_p$ and $\\tau$ is at least\n\t$4R_p$.\n\tThis ensures that $U(\\widehat{\\mathcal{S}})=\\sum_{T_j \\in \\ensuremath{\\mathcal{T}}\\xspace} U_j = |\\ensuremath{\\mathcal{T}}\\xspace|=m$ because $\\forall T_j \\in \\ensuremath{\\mathcal{T}}\\xspace,\\ U_j=1$.\n\tSince\n\tthe maximum utility of the optimal algorithm for TOPS\\xspace can be at most $m$,\n\tfollowing the result in Cor.~\\ref{cor:approx}, the approximation bound is at\n\tleast $k \/ \\eta_p$ where $\\eta_p=|\\widehat{\\ensuremath{\\mathcal{S}}\\xspace}|$.\n\n\tNext, consider a general preference function $\\ensuremath{\\psi}\\xspace(T_j,s_i)=f(\\cdot)$ where\n\t$f$ is a positive non-increasing function of $\\ensuremath{d_r}\\xspace(T_j,s_i)$ such that\n\t$f(0)=1$. If $T_j$ passes through the cluster $g_i$, then $\\ensuremath{d_r}\\xspace(T_j,r_i)\n\t\\leq \\tau$. Hence, $U_j = \\max\\{\\ensuremath{\\psi}\\xspace(T_j,r_i)| r_i \\in \\widehat{\\mathcal{S}}\\}\n\t\\geq f(\\tau)$ since $f$ is non-increasing. Therefore,\n\t$U(\\widehat{\\mathcal{S}})=\\sum_{T_j \\in \\ensuremath{\\mathcal{T}}\\xspace} U_j \\geq f(\\tau).m$. Since the\n\tpreference scores lie in the range $[0,1]$, the utility offered by any\n\toptimal algorithm for TOPS\\xspace is at most $m$. Therefore, following\n\tCor.~\\ref{cor:approx}, the approximation bound is at least $f(\\tau). (k \/\n\t\\eta_p)$.\n\t%\n\t\\hfill{}\n\t%\n\\end{proof}\n\nTo solve the binary instance of the TOPS\\xspace problem, FM sketches may be employed\nwhile running the \\textsc{Inc-Greedy}\\xspace algorithm on the cluster representatives. The resulting\nscheme is referred to as \\textsc{FM-NetClus}\\xspace. In that case, the bound is updated as follows.\n\n\\begin{thm}\n\t%\n\tThe approximation bound of \\textsc{FM-NetClus}\\xspace for the binary instance of TOPS\\xspace is $(k \/\n\t\\eta_p).(1 + \\epsilon)^k$, where $\\epsilon$ is the error parameter provided\n\tby the FM sketch.\n\t%\n\\end{thm}\n\n\\begin{proof}\n\t%\n\tIf the error parameter for the FM sketch is $\\epsilon$, running it for $k$\n\titerations produces an error bound of at most $(1 + \\epsilon)^k$. In\n\tconjunction with Th.~\\ref{thm:topsbound}, the required bound\n\tis obtained.\n\t%\n\t\\hfill{}\n\t%\n\\end{proof}\n\n\\noindent\n\\textbf{Complexity Analysis:}\nFor a given value of $\\tau$, suppose the index instance $\\ensuremath{\\mathcal{I}}\\xspace_p$ with\n$|\\ensuremath{\\mathcal{I}}\\xspace_p| = \\eta_p$ clusters is used. Assume that the largest number of\ntrajectories passing through a cluster is $\\xi_p = \\max\\{|\\ensuremath{\\mathcal{TL}}\\xspace(g_i)|\\}$, and\n$\\lambda_p$ is the largest number of vertices in any cluster in $\\ensuremath{\\mathcal{I}}\\xspace_p$.\n\n\\begin{thm}\n\t%\n\t\\label{thm:complexity}\n\t%\n\tThe time and space complexities of \\textsc{NetClus}\\xspace are $O(k.\\eta_p.\\xi_p)$ and\n\t$O(\\sum_{p=1}^t (\\eta_p(\\xi_p + \\lambda_p)))$ respectively.\n\t%\n\\end{thm}\n\nThe proof is provided in Appendix~\\ref{app:netclus-complexity}.\n\nThe average values of $\\eta_p,\\lambda_p$ and $|\\ensuremath{\\mathcal{TL}}\\xspace|$ are shown in\nTable~\\ref{tab:cluster} for different values\nof the cluster radius $R_p$.\n\n\\section{Related Work}\n\\label{sec:related}\n\nThe related work falls into two main classes, \\emph{optimal location} queries\n\\cite{du2005optimal, ghaemi2010optimal, xiao2011optimal,\nchen2014efficient,li2013trajectory}, and \\emph{flow based facility location}\nproblems \\cite{berman1992optimal, berman1995locatingDiscretionary,\nberman1995locatingMain,berman1995locating,\nberman2002generalized,berman1998flow}.\n\nAn optimal location (OL) query has three inputs: a set of facilities, a set of\nusers, and a set of candidate sites. The objective is to determine a candidate\nsite to set up a new facility that optimizes an objective function based on the\ndistances between the facilities and the users.
Comparing OL queries with TOPS\\xspace\nqueries, we note that: (a)~While \\emph{fixed} user locations are considered for OL\nqueries, TOPS\\xspace uses \\emph{trajectories} of users. (b)~OL queries report only a\nsingle optimal location, while TOPS\\xspace reports $k$ locations. (c)~Unlike OL\nqueries that are solvable in polynomial time, TOPS\\xspace is NP-hard (it is\npolynomially solvable only for $k=1$).\n\nRecently, \\cite{li2013trajectory} studied OL queries on trajectories over a road\nnetwork. Two algorithms were proposed to compute the optimal road segment to\nhost a new service. Their work has quite a few limitations and differences when\ncompared with our work: (a)~Since a single optimal road segment is reported,\ntheir problem is polynomially solvable. (b)~Their work identifies the optimal\nroad segment, rather than the optimal site. (c)~There is no analysis of the\nquality of the reported optimal road segment, either theoretically or\nempirically. (d)~It is not shown how the reported road segment performs\non other established metrics, such as the number of new users covered, the distance\ntraveled by the users to avail the service, etc.\n\nFacility location problems \\cite{drezner1995facility, FacilityLocation}\ntypically consider a set of users, and a set of candidate sites. The goal is\nto identify a set of $k$ candidate sites that optimize certain metrics such as\ncovering the maximum number of users, or minimizing the average distance between a\nuser and its nearest facility. Almost all of these problems are NP-hard.\nWhile early works assumed that the users are static, mobile users have also been\nconsidered more recently. The \\emph{flow based} facility location works\n\\cite{berman1992optimal, berman1995locatingDiscretionary,\nberman1995locatingMain,berman1995locating, berman2002generalized,berman1998flow}\nassume a flow model to characterize human mobility, instead of using real\ntrajectories.\nA fairly comprehensive literature survey is available in \\cite{FIFLPSurvey}. We\nbriefly outline the major works related to the different versions of TOPS\\xspace\nqueries that have been discussed in Sec.~\\ref{sec:variants}.\nIn \\cite{berman1992optimal}, a few exact and approximate algorithms for \\textsc{Tops1}\\xspace\nand \\textsc{Tops4}\\xspace were presented, under the restriction that a customer would stay\non the path, i.e., $\\tau=0$. Later, in \\cite{berman1995locatingMain}, a few\ngeneralizations of the model were proposed, where the customers were allowed to\ndeviate. These include the \\textsc{Tops1}\\xspace, \\textsc{Tops2}\\xspace and \\textsc{Tops3}\\xspace problems. Further\ngeneralizations of \\textsc{Tops1}\\xspace were examined in \\cite{berman1995locating}, with\nprobabilistic customer flows, single and multiple flow interceptions, and fixed\nand variable installation costs of the services. Several existing flow-based facility location models were generalized in \\cite{zeng2010generalized}.\nA few flow-refueling location models have been proposed in \\cite{lim2010heuristic,kuby2009optimization,mirhassani2012flexible}\nfor siting alternative-fuel stations. We have already discussed how our work differs from these works in\nSec.~\\ref{sec:Intro}.\n\nRecently, a greedy heuristic was proposed for the \\textsc{Tops1}\\xspace problem in \\cite{li2016mining} under the assumption that $\\tau=0$, i.e., no customer deviations are allowed. We present a detailed study of the \\textsc{Tops1}\\xspace problem in \\cite{tops_icde}, with arbitrary values of $\\tau$.
\n\n\n\n\n\n\\section{Experimental Evaluation}\n\\label{sec:exp}\n\n\\begin{table}[t]\\small\n\\centering\n\\begin{tabular}{ccrr}\n\\hline\n\\bf Dataset & \\bf Type & \\bf \\#Trajectories & \\bf \\#Sites \\\\\n\\hline\nBeijing-Small\\xspace & Real & 1,000 & 50 \\\\\nBeijing\\xspace & Real & 123,179 & 269,686 \\\\\n\\hline\nBangalore & Synthetic & 9,950 & 61,563 \\\\\nNew York & Synthetic & 9,950 & 355,930 \\\\\nAtlanta & Synthetic & 9,950 & 389,680 \\\\\n\\hline\n\\end{tabular}\n\\tabcaption{Summary of datasets.}\n\\vspace{-0.05in}\n\\label{tab:datasets}\n\\vspace*{-2mm}\n\\end{table}\n\nIn this section, we perform extensive experiments to establish\n(1)~\\emph{Efficiency:} that \\textsc{NetClus}\\xspace is efficient, practical and scales well to\nreal-life datasets, and (2)~\\emph{Quality:} that \\textsc{NetClus}\\xspace produces solutions that are\nclose to those of \\textsc{Inc-Greedy}\\xspace, which serves as the \\emph{baseline} technique.\n\nMost of the results are shown for the binary version of the problem since it is\neasier to comprehend. Sec.~\\ref{sec:results-tops-variants} shows the results\nfor various extensions and variants of TOPS\\xspace.\n\nThe experiments were conducted using Java (version 1.7.0) platform on an\nIntel(R) Core i7-4770 CPU @3.40GHz machine with 32 GB RAM running Ubuntu 14.04.2\nLTS OS.\n\n\\subsection{Evaluation Methodology} \n\\label{sec:methodology}\n \n\n\\noindent\n\\textbf{Algorithms:} We evaluate the performance of three different algorithms to\naddress TOPS\\xspace: \\textsc{OPT}\\xspace, \\textsc{Inc-Greedy}\\xspace, and \\textsc{NetClus}\\xspace, which are henceforth referred to\nas \\textsc{Opt}\\xspace, \\textsc{IncG}\\xspace, and \\textsc{NetClus}\\xspace respectively in text and in the figures. The\nvariants of \\textsc{IncG}\\xspace and \\textsc{NetClus}\\xspace, based on the FM sketches, are henceforth referred to\nas \\textsc{FMG}\\xspace and \\textsc{FMNetClus}\\xspace respectively. \n\nTo the best of our knowledge, there is no existing algorithm for TOPS\\xspace that\nworks on real trajectories on city-scale road networks. The state-of-the-art\nalgorithm for \\textsc{Tops1}\\xspace problem, proposed in \\cite{berman1995locatingMain}, when\nadapted for our case, can be reduced to \\textsc{Inc-Greedy}\\xspace. Hence, \\textsc{Inc-Greedy}\\xspace acts as the baseline\nalgorithm for evaluation on \\textsc{Tops1}\\xspace. \n\n\\noindent\n\\textbf{Variants:} The specific variant of TOPS\\xspace on which we evaluated most of\nthe experiments was the binary instance defined in Def.~\\ref{def:binary} or\n\\textsc{Tops1}\\xspace. We choose to evaluate this particular variant of TOPS\\xspace due to three\nreasons: (1)~Usually, such binary optimization problems are the worst case\ninstances of the general integer optimization problems and are, therefore,\nhardest to approximate. (2)~The site preference function is simple. (3)~It has\nseveral applications in transportation science and operations research.\nHowever, we also evaluate a few other TOPS\\xspace variants and extensions in\nSec.~\\ref{sec:results-tops-variants}.\n \n\\noindent\n\\textbf{Metrics of evaluation:} The main metrics of evaluation were (a)~total\n\\emph{utility} measured as a percentage of the total number of trajectories $m$,\nand (b)~query \\emph{running time}. The two basic parameters studied were\n(i)~number of service locations $k$, and (ii)~coverage threshold $\\tau$,\nwhose default values were $5$ and $0.8$\\,Km. 
respectively.\n\nWe conducted experiments on both real and synthetic datasets, whose details are\nshown in Table~\\ref{tab:datasets}. For simplicity, we assume that the number of\ncandidate sites is the same as the number of nodes in the graph, unless\notherwise stated.\n\n\\noindent\n\\textbf{Real datasets:} We used GPS traces of taxis from Beijing consisting\nof user trajectories generated by tracking taxis for a week \\cite{cab1,cab2}.\nThis is the most widely used and one of the largest publicly available\ntrajectory datasets. To generate trajectories as sequences of road\nintersections, the raw GPS-traces were map-matched~\\cite{map1} to the Beijing\nroad network extracted from OpenStreetMap (\\url{http:\/\/www.openstreetmap.org\/}).\nThe road network contains 269,686 nodes and 293,142 edges, with an underlying\narea of $\\sim$1720 sq.\\,Km.\n\nSince TOPS\\xspace is NP-hard, the optimal algorithm requires exponential time and,\ntherefore, can be run only on a very small dataset. Hence, we evaluate all the\nalgorithms against the optimal on Beijing-Small\\xspace, which is generated by randomly sampling\n1000 trajectories from a fixed area, and then randomly selecting the set\n$\\mathcal{S}$, consisting of 50 candidate sites, from the same area. The\nsampling was conducted 10 times to increase the robustness of the results. All\nthe other experiments were done on the full Beijing\\xspace dataset.\n\n\\noindent\n\\textbf{Synthetic datasets:} To study the impact of city geographies, we\ngenerated three synthetic datasets that emulate the trajectory patterns of\nNew York, Atlanta and Bangalore. We used an online traffic generator\ntool, MNTG (\\url{http:\/\/mntg.cs.umn.edu\/tg\/index.php}), to generate the traffic\ntraces, which were later map-matched to generate the trajectories in the desired\nformat.\n\n\\subsection{Choice of Parameters}\n\\label{sec:parameters}\n\nWe first run experiments to determine the choice of two important parameters:\n(a)~the resolution of the index instances, $\\ensuremath{\\gamma}\\xspace$, and (b)~the number of FM\nbit vectors, $f$.\n\nAs discussed earlier in Sec.~\\ref{sec:index}, the choice of \\ensuremath{\\gamma}\\xspace affects the\nstorage and offline run-time costs as well as the quality. Table~\\ref{tab:eps}\nlists the values for the Beijing\\xspace dataset when \\ensuremath{\\gamma}\\xspace is varied. The error is\nmeasured as the relative loss in utility of \\textsc{NetClus}\\xspace w.r.t. that of \\textsc{IncG}\\xspace. When \\ensuremath{\\gamma}\\xspace is too\nsmall, there is almost no compression of the trajectories. As a result, the\nindex structure size is large. On the other hand, with a very large \\ensuremath{\\gamma}\\xspace, the\nerror may be unacceptable. We fix $\\ensuremath{\\gamma}\\xspace=0.75$ for our experiments since it\noffers a nice balance of a medium sized index structure that can fit in most\nmodern systems with an error within 5\\%.\n\n\\begin{table}[t]\n\\scriptsize\n\t\\centering\n\t\t\\begin{tabular}{c|c|c|c}\n\t\t\t\\hline\n\t\t\t\\ensuremath{\\gamma}\\xspace\t& Time (s)\t& Space (GB)\t& Rel. Error \\% w.r.t.
\\textsc{IncG}\\xspace \\\\\n\t\t\t\\hline\n\t\t\t0.25\t& 108427\t& 14.095\t& 3.54 \\\\\n\t\t\t0.50\t& 3216\t\t& 4.215\t\t& 3.97 \\\\\n\t\t\t0.75\t& 1652\t\t& 2.374\t\t& 4.53 \\\\\n\t\t\t1.00\t& 520\t\t& 1.053\t\t& 5.21 \\\\\n\t\t\t\\hline \n\t\t\\end{tabular}\n\t\t\\tabcaption{Variation across the resolution of index instances, \\ensuremath{\\gamma}\\xspace.}\n\t\t\\label{tab:eps}\n\\end{table}\n\nTable~\\ref{tab:f} shows how the utility and running time vary when $f$ FM\nsketches are used as compared to the original \\textsc{NetClus}\\xspace. The error is measured as the relative loss in utility of \\textsc{FMNetClus}\\xspace w.r.t. that of \\textsc{NetClus}\\xspace.\nThe values of $\\tau$\nand $k$ were their default values ($0.8$\\,Km. and $5$ respectively). When\n$f$ is very small, the error is too large. As $f$ increases, the error\nexpectedly decreases, but the speed-up decreases as well. When $f$ is extremely\nhigh, the number of operations may overshadow the gains, and using FM sketches\nmay actually be slower. We fixed $f = 30$ since it produced less than $5\\%$\nerror with a speed-up factor of more than $5$.\n\n\\begin{table}[t]\n\\scriptsize\n\t\\centering\n\t\\begin{tabular}{r|rr|r|rr|r}\n\t\\hline\n\t\\multirow{2}{*}{$f$}\t& \\multicolumn{2}{c|}{Utility}\t&\n\t\\multirow{2}{*}{Rel. Error \\%}\t& \\multicolumn{2}{c|}{Time (ms)}\t& \\multirow{2}{*}{Speed-up} \\\\\n\t\\cline{2-3}\n\t\\cline{5-6}\n\t& \\textsc{NetClus}\\xspace & FM &\t& \\textsc{NetClus}\\xspace & FM & \\\\\t\n\t\\hline\n\t1\t& 47.23\t& 26.60\t& 43.67\t& 846.65\t& 16.32\t\t& 51.88 \\\\\n\t2\t& 47.23\t& 34.27\t& 27.45\t& 846.65\t& 21.66\t\t& 39.09 \\\\\n\t4\t& 47.23\t& 39.33\t& 16.73\t& 846.65\t& 32.65\t\t& 25.93 \\\\\n\t10\t& 47.23\t& 41.27\t& 12.62\t& 846.65\t& 65.32\t\t& 12.96 \\\\\n\t20\t& 47.23\t& 43.29\t& 8.34\t& 846.65\t& 116.32\t& 7.28 \\\\\n\t30\t& 47.23\t& 44.96\t& 4.81\t& 846.65\t& 161.53\t& 5.24 \\\\\n\t40\t& 47.23\t& 45.52\t& 3.63\t& 846.65\t& 216.62\t& 3.91 \\\\\n\t50\t& 47.23\t& 45.89\t& 2.84\t& 846.65\t& 272.18\t& 3.11 \\\\\n\t100\t& 47.23\t& 46.43\t& 1.69\t& 846.65\t& 984.17\t& 0.86 \\\\\n\t\\hline \n\t\\end{tabular}\n\t\\tabcaption{Variation across the number of FM sketches, $f$.}\n\t\\label{tab:f}\n\t\\vspace*{-2mm}\n\\end{table}\n\n\\subsection{Comparison with Optimal}\n\\label{sec:comparison with optimal}\n\\begin{figure}[t]\n\t\\centering\n\t\\vspace*{-2mm}\n\t\\vspace*{-2mm}\n\t\\subfloat[Utility.]\n\t{\n\t\t\\includegraphics[width=0.48\\columnwidth]{beijing_small\/util\/output}\n\t\t\\label{subfig:optutil}\n\t}\n\t\\subfloat[Running time.]\n\t{\n\t\t\\includegraphics[width=0.48\\columnwidth]{beijing_small\/time\/output}\n\t\t\\label{subfig:opttime}\n\t}\n\t\\figcaption{Comparison with optimal at $\\tau=0.8$ Km.}\n\t\\label{fig:optimal}\n\\end{figure}\n\nSince the optimal algorithm is based on an integer linear program and requires impractical running\ntimes, we ran it only on the Beijing-Small\\xspace dataset, mainly to assess the quality of the\nother algorithms. Fig.~\\ref{fig:optimal} shows that the average utilities of all\nthe algorithms are quite close to \\textsc{Opt}\\xspace although the running times are much\nbetter. (Note that the utilities in these and all subsequent figures are\nplotted as a percentage of the total number of trajectories.) \\textsc{Opt}\\xspace requires\nhours to complete even for this small dataset and, therefore, is not practical\nat all.
Consequently, we did not experiment with \\textsc{Opt}\\xspace any further.\n\n\\subsection{Quality Results}\n\n\\begin{figure}[t]\n\t\\centering\n\t\\vspace*{-2mm}\n\t\\vspace*{-2mm}\n\n\t\\subfloat[Varying $k$]\n\t{\n\t\t\\includegraphics[width=0.48\\columnwidth]{beijing_large\/k_util\/output}\n\t\t\\label{subfig:simkutil}\n\t}\n\n\t\\subfloat[Varying $\\tau$]\n\t{\n\t\t\\includegraphics[width=0.48\\columnwidth]{beijing_large\/tau_util\/output}\n\t\t\\label{subfig:simtauutil}\n\t}\n\t\\figcaption{Quality results.}\n\t\\label{fig:sim}\n\t\\vspace*{-2mm}\n\\end{figure}\n\nFig.~\\ref{fig:sim} shows the\nutility yields for different values of $k$ and $\\tau$. The\nutilities of \\textsc{NetClus}\\xspace are close to those of \\textsc{IncG}\\xspace and are within 93\\% of them on average. Owing to\nhigh memory requirements (for reasons to be discussed in the next section), \\textsc{IncG}\\xspace and \\textsc{FMG}\\xspace could not run beyond $\\tau=1.2$ Km. The\nutilities of \\textsc{FMG}\\xspace and \\textsc{FMNetClus}\\xspace are very close to those of \\textsc{IncG}\\xspace and \\textsc{NetClus}\\xspace,\nrespectively.\n\n\\subsection{Memory Footprint}\n\n\\begin{table}[t]\n\\scriptsize\n\t\\centering\n\t\\begin{tabular}{|c|rr|cc|}\n\t\\hline\n\t$\\tau$ (in Km.)\t& \\textsc{IncG}\\xspace\t& \\textsc{FMG}\\xspace & \\textsc{NetClus}\\xspace & \\textsc{FMNetClus}\\xspace \\\\\n\t\t\\hline \n0.1\t&\t7.04\t&\t7.90\t&\t6.43\t&\t7.09\t\\\\\n0.2\t&\t9.14\t&\t10.00\t&\t4.17\t&\t4.81\t\\\\\n0.4\t&\t13.47\t&\t14.34\t&\t3.67\t&\t3.94\t\\\\\n0.8\t&\t19.58\t&\t20.44\t&\t3.22\t&\t3.64\t\\\\\n1.2\t&\t23.98\t&\t24.85\t&\t3.52\t&\t3.88\t\\\\\n1.6\t&\t\\multicolumn{2}{c|}{Out of memory}\t&\t3.41\t&\t3.98\t\\\\\n\t\t\\hline\n\t\\end{tabular}\n\t\\tabcaption{Memory footprint of different algorithms (in GB).}\n\t\\label{tab:memory}\n\\end{table}\n\nTable~\\ref{tab:memory} shows that the memory footprints of \\textsc{NetClus}\\xspace and \\textsc{FMNetClus}\\xspace are\nsignificantly less than those of \\textsc{IncG}\\xspace and \\textsc{FMG}\\xspace. As the coverage threshold\n$\\tau$ increases, the sizes of the covering sets, $\\ensuremath{TC}\\xspace$ and $\\ensuremath{SC}\\xspace$, used in \\textsc{IncG}\\xspace\nand \\textsc{FMG}\\xspace, increase sharply. Consequently, these algorithms could not scale\nbeyond $\\tau=1.2$ Km. On the other hand, with higher $\\tau$, \\textsc{NetClus}\\xspace and \\textsc{FMNetClus}\\xspace use\nlower-resolution clustering instances, leading to higher data compression and,\nthereby, lower memory footprints. The FM-sketch-based schemes\nrequire slightly more memory than their counterparts due to the storage of multiple\nbit vectors for each site or cluster, as applicable.\n\n\\subsection{Performance Results}\n\n\\begin{figure}[t]\n\t\\centering\n\t\\vspace*{-2mm}\n\t\\vspace*{-2mm}\n\n\t\\subfloat[Varying $k$]\n\t{\n\t\t\\includegraphics[width=0.48\\columnwidth]{beijing_large\/k_time\/output}\n\t\t\\label{subfig:ktime}\n\t}\n\n\t\\subfloat[Varying $\\tau$]\n\t{\n\t\t\\includegraphics[width=0.48\\columnwidth]{beijing_large\/tau_time\/output}\n\t\t\\label{subfig:tautime}\n\t}\n\t\\figcaption{Running time performance.}\n\t\\label{fig:time}\n\\vspace{-0.05in}\n\\vspace*{-1mm}\n\\end{figure}\n\nWe next measure the performance of the algorithms for different values of $k$\nand $\\tau$.
Fig.~\\ref{fig:time} shows that for $\\tau \\le 1.2$ km., \\textsc{NetClus}\\xspace and\n\\textsc{FMNetClus}\\xspace are up to $36$ times faster than \\textsc{IncG}\\xspace and \\textsc{FMG}\\xspace, respectively. For $\\tau\n>1.2$ km., as stated in the previous section, \\textsc{IncG}\\xspace and \\textsc{FMG}\\xspace fail to run due to\nhigh memory overheads. When $\\tau$ increases, \\textsc{NetClus}\\xspace (and \\textsc{FMNetClus}\\xspace) uses a higher\nindex instance having lesser number of clusters, leading to its efficiency. On\nthe other hand, \\textsc{IncG}\\xspace and \\textsc{FMG}\\xspace use \\ensuremath{TC}\\xspace and \\ensuremath{SC}\\xspace covering sets of increasingly\nlarger size, resulting in poor performance.\n\nNote that the plots in Fig.~\\ref{subfig:ktime} appear to be linear w.r.t. $k$. This is because (i)~the initial cost of computing the covering sets significantly dominates the iterative phase of the algorithms, (ii)~the running times are plotted in log-scale. \n \nAlthough \\textsc{FMNetClus}\\xspace (\\textsc{FMG}\\xspace) offers a speed up of about $5$ times in the algorithm\nrunning time when compared to \\textsc{NetClus}\\xspace (\\textsc{IncG}\\xspace respectively), its effect is negated by\na relatively large initial pre-processing time required for computing the\ncovering sets. Due to this fact, we only compare the results of \\textsc{NetClus}\\xspace with that of\n\\textsc{IncG}\\xspace in the subsequent sections. \n\n\n\\subsection{Extensions and Variants of TOPS\\xspace}\n\\label{sec:results-tops-variants}\n\n\\begin{figure}[bt]\n\t\\centering\n\t\\vspace*{-2mm}\n\t\\subfloat[Cost.]\n\t{\n\t\t\\includegraphics[width=0.48\\columnwidth]{cost\/sdv_util\/output}\n\t\t\\label{subfig:cost_util}\n\t}\n\t\\subfloat[Capacity.]\n\t{\n\t\t\\includegraphics[width=0.48\\columnwidth]{capacity\/mean_util\/output}\n\t\t\\label{subfig:cap_util}\n\t}\n\t\\figcaption{Utilities for TOPS\\xspace extensions.}\n\t\\label{fig:cost-cap-util}\n\\end{figure}\n\n\\begin{figure}\n\t\\centering\n\t\\vspace*{-1mm}\n\t\\subfloat[Utility.]\n\t{\n\t\t\\includegraphics[width=0.48\\columnwidth]{tops2\/util\/output}\n\t\t\\label{subfig:variantutil}\n\t}\n\t\\subfloat[Running time.]\n\t{\n\t\t\\includegraphics[width=0.48\\columnwidth]{tops2\/time\/output}\n\t\t\\label{subfig:varianttime}\n\t}\n\t\\figcaption{\\textsc{Tops2}\\xspace: A variant of TOPS\\xspace.}\n\t\\label{fig:variant}\n\\end{figure}\n\n\nWe next show results of \\textsc{NetClus}\\xspace on different TOPS\\xspace extensions and variants\n(discussed in Sec.~\\ref{sec:variants}) over the Beijing\\xspace dataset.\n\n\\textbf{\\textsc{Tops-Cost}\\xspace:}\nWe consider a budget of $B=5.0$ and $\\tau=0.8$ Km. The cost of each site was\nassigned using a normal distribution with mean $\\mu = 1.0$ and standard\ndeviation varied between $\\sigma \\in [0,1]$ (the least cost of a site was\nconstrained to be $0.1$). Fig.~\\ref{subfig:cost_util} shows that the utility\nincreases with standard deviation (note that $\\sigma = 0$ degenerates to basic\nTOPS\\xspace). This is due to the fact that with higher standard deviation, more\nnumber of sites can be chosen with lower costs which ultimately leads to larger\nnumber of trajectories being covered. 
The increased number of iterations does\nnot increase the running time much since it is a small overhead on the initial\ncosts (Fig.~\\ref{fig:cost2}).\n\n\\begin{figure}[t]\n\t\\centering\n\t\\vspace*{-1mm}\n\t\\subfloat[Number of sites.]\n\t{\n\t\t\\includegraphics[width=0.48\\columnwidth]{cost\/sdv_k\/output}\n\t\t\\label{subfig:cost_k}\n\t}\n\t\\subfloat[Running time.]\n\t{\n\t\t\\includegraphics[width=0.48\\columnwidth]{cost\/sdv_time\/output}\n\t\t\\label{subfig:cost_time}\n\t}\n\t\\figcaption{TOPS\\xspace under cost constraint.}\n\t\\vspace*{1mm}\n\t\\label{fig:cost2}\n\\end{figure}\n\n\n\\textbf{\\textsc{Tops-Capacity}\\xspace:}\nWe consider $k=5$ and $\\tau=0.8$ Km. The sites were assigned varying capacities\ndrawn from a normal distribution where the mean was varied in the range\n$[0.1\\%,100\\%]$ of the total number of trajectories, and the standard deviation\nfixed at $10\\%$ of the mean. (note that mean capacity of $100\\%$ corresponds to\nbasic unconstrained TOPS\\xspace). Fig.~\\ref{subfig:cap_util} shows that, as expected,\nutility increases with mean capacity. \\textsc{NetClus}\\xspace has almost the same utility as that\nof \\textsc{IncG}\\xspace. We do not show the running time plots, as the algorithms for \\textsc{Tops-Capacity}\\xspace\nare almost the same as those for TOPS\\xspace and, hence, exhibit similar performance.\n\n\\textbf{\\textsc{Tops2}\\xspace:}\nFinally, we study the \\textsc{Tops2}\\xspace variant where the preference function \\ensuremath{\\psi}\\xspace was a\nconvex function of the distance between the site and trajectory. The results,\nshown in Fig.~\\ref{fig:variant}, portray that \\textsc{NetClus}\\xspace has utility close to that of\n\\textsc{IncG}\\xspace, while being about an order of magnitude faster.\n\n\\subsection{Updates of Sites and Trajectories}\n\\label{sec:updateexp}\n\n\\begin{table}[t]\n\\scriptsize\n\\centering\n\\begin{tabular}{|c|r||c|r|}\n\\hline\n\\# Trajectories added & Update time & \\# Candidate sites added & Update time \\\\\n\\hline\n10000\t&\t 22.83 s\t&\t10000\t&\t1.26 s\t\\\\\n20000\t&\t 44.58 s\t&\t20000\t&\t1.56 s\t\\\\\n30000\t&\t 74.07 s\t&\t30000\t&\t1.76 s\t\\\\\n40000\t&\t 92.06 s\t&\t40000\t&\t1.85 s\t\\\\\n50000\t&\t122.69 s\t&\t50000\t&\t2.10 s\t\\\\\n\\hline\n\\end{tabular}\n\\tabcaption{Index update cost.}\n\\label{tab:update}\n\\vspace*{-3mm}\n\\end{table}\n\n\nTable~\\ref{tab:update} shows that \\textsc{NetClus}\\xspace efficiently processes additions of trajectories and candidate sites over the index structure \n(Sec.~\\ref{sec:updates}). Adding a trajectory requires more\ntime than that for a candidate site since a trajectory passes through multiple\nclusters in general and the covering sets, etc. of all those clusters need to be\nupdated. 
Adding a site, on the other hand, requires simply finding the cluster\nit is in and updating the cluster representative, if applicable.\n\n\\subsection{Robustness with Parameters}\n\n\n\\begin{figure}[t]\n\t\\centering\n\t\\vspace*{-1mm}\n\t\\subfloat[Number of sites.]\n\t{\n\t\t\\includegraphics[width=0.48\\columnwidth]{site\/time\/output}\n\t\t\\label{subfig:site}\n\t}\n\t\\subfloat[Number of trajectories.]\n\t{\n\t\t\\includegraphics[width=0.48\\columnwidth]{traj\/time\/output}\n\t\t\\label{subfig:traj}\n\t}\n\t\\figcaption{Scalability results ($k=5$ and $\\tau=0.8$ Km).}\n\t\\label{fig:scale}\n\\end{figure}\n\n\\begin{figure}[tb]\n\t\\centering\n\t\\vspace*{-1mm}\n\t\\subfloat[Utility.]\n\t{\n\t\t\\includegraphics[width=0.48\\columnwidth]{synthetic\/util\/output}\n\t\t\\label{subfig:synutil}\n\t}\n\t\\subfloat[Running time.]\n\t{\n\t\t\\includegraphics[width=0.48\\columnwidth]{synthetic\/time\/output}\n\t\t\\label{subfig:syntime}\n\t}\n\t\\figcaption{Effect of city geometries ($k=5$ and $\\tau=0.8$ Km).}\n\t\\label{fig:synthetic}\n\\end{figure}\n\n\\begin{figure}[tb]\n\t\\centering\n\t\\vspace*{-1mm}\n\t\\subfloat[Utility.]\n\t{\n\t\t\\includegraphics[width=0.48\\columnwidth]{length\/util\/output}\n\t\t\\label{subfig:lengthutil}\n\t}\t\n\t\\subfloat[Running time.]\n\t{\n\t\t\\includegraphics[width=0.48\\columnwidth]{length\/time\/output}\n\t\t\\label{subfig:lengthtime}\n\t}\n\t\\figcaption{Effect of trajectory length ($k=5$ and $\\tau=0.8$ Km).}\n\t\\label{fig:length}\n\\end{figure}\n\n\n\n\\textbf{Number of Sites and Trajectories:}\nFig.~\\ref{fig:scale} shows scalability results with varying number of candidate sites and trajectories on the Beijing\\xspace dataset. \\textsc{NetClus}\\xspace is about an order of magnitude faster than \\textsc{IncG}\\xspace.\n\n\n\\textbf{City Geometries:}\nWe experimented with three typical city geometries, Atlanta, New York, and\nBangalore (Fig.~\\ref{fig:synthetic}). New York has a star\ntopology while Bangalore is poly-centric. Consequently, Bangalore has a larger\nutility percentage. Since Atlanta has a mesh structure with trajectories\ndistributed all over the city, its utility is lowest. There is not much\ndifference in the running times, though. Bangalore has least running time due to its smaller road network. \n\n\n\n\\textbf{Length of Trajectories:}\nTo determine the effect of length of trajectories, the trajectories were divided\ninto four classes based on their lengths and from each of the classes, 5,000\ntrajectories were sampled (Fig.~\\ref{fig:length}). Longer\ntrajectories are easier to cover since they pass through more number of\ncandidate sites over a larger area and, therefore, exhibit higher utility than\nthe shorter ones. The running time also increases with trajectory length due to\nmore number of update operations of the marginal utilities. 
\n\n\\subsection{Index Construction}\n\n\n\\begin{table}[hbt]\n\\scriptsize\n\t\\begin{center}\n\t\\begin{tabular}{rrrrrr}\n\n\t$R_p$ (Km) & $\\kappa$ & $\\bar{|\\Lambda|}$ & $\\bar{|\\ensuremath{\\mathcal{TL}}\\xspace|}$ & $\\bar{|\\ensuremath{\\mathcal{CL}}\\xspace|}$ & Run-time (s)\t\\\\\n\t\\hline\n0.0093\t&\t258340\t&\t1.04\t&\t561.88\t&\t4.29\t&\t269.35\t\\\\\n0.0286\t&\t195910\t&\t1.15\t&\t571.41\t&\t6.43\t&\t239.58\t\\\\\n0.0163\t&\t233729\t&\t1.38\t&\t592.45\t&\t12.62\t&\t255.60\t\\\\\n0.0500\t&\t153210\t&\t1.76\t&\t626.18\t&\t19.44\t&\t221.79\t\\\\\n0.0875\t&\t112223\t&\t2.40\t&\t675.60\t&\t26.04\t&\t204.38\t\\\\\n0.1531\t&\t76836\t&\t3.51\t&\t757.03\t&\t45.38\t&\t192.12\t\\\\\n0.2680\t&\t48288\t&\t5.58\t&\t895.19\t&\t64.27\t&\t188.25\t\\\\\n0.4689\t&\t28510\t&\t9.46\t&\t1162.93\t&\t53.74\t&\t205.74\t\\\\\n0.8207\t&\t15775\t&\t17.10\t&\t1525.09\t&\t42.73\t&\t281.62\t\\\\\n1.4361\t&\t8258\t&\t32.66\t&\t2162.01\t&\t35.62\t&\t537.27\t\\\\\n2.5133\t&\t4202\t&\t64.18\t&\t3092.64\t&\t21.73\t&\t1300.12\t\\\\\n4.3982\t&\t2024\t&\t133.24\t&\t4148.03\t&\t12.49\t&\t3231.92\t\\\\\n7.6968\t&\t938\t&\t287.51\t&\t7537.33\t&\t3.51\t&\t7333.60\t\\\\\n\t\\hline \n\t\\end{tabular}\n\t\\tabcaption{Details of indexing for Beijing road network comprising of 269,686 nodes with $\\ensuremath{\\gamma}\\xspace=0.75$.}\n\t\\label{tab:cluster}\n\\end{center}\n\\end{table}\n\nReferring to Table~\\ref{tab:cluster}, we observe that as the cluster radius\n$R_p$ increases, the number of clusters $\\eta_p$ decreases as the average\ndominating set sizes $\\bar{|\\Lambda|}$ increases. Therefore, the average\nnumber of trajectories passing through a cluster $\\bar{|\\ensuremath{\\mathcal{TL}}\\xspace|}$ also increases.\nThe average number of neighbors of a cluster $\\bar{|\\ensuremath{\\mathcal{CL}}\\xspace|}$ initially increases\nbut finally decreases. \nImportantly, we observe that the\n\\emph{offline} index construction times across different cluster radii are\npractical. \n\n\n\n\\section{Handling Dynamic Updates}\n\\label{sec:updates}\n\nIn this section, we discuss how the \\textsc{NetClus}\\xspace framework efficiently handles dynamic\nupdates of trajectories and candidate sites. We assume that the underlying road\nnetwork does not change. In each of the following cases, the updates are\nprocessed for \\emph{all} the index instances (of varying cluster radii).\n\n\\noindent\n\\textbf{Addition of a site:} Suppose a location $s_{add}$ is identified as a new\ncandidate site, i.e., $s_{add}$ gets added to $\\mathcal{S}$. If $s_{add}$ is\nalready in $V$, its cluster $g_{add}$ is identified. Otherwise, it is added to\nthe cluster $g_{add}$ whose cluster center $c_{add}$ is the closest. To\ndetermine the closest cluster center, the neighbors $N(s_{add})$ of $s_{add}$ in\n$G$ are used. The round-trip distance to a cluster center $c_i$ is estimated\nusing $\\min_{s_l \\in N(s_{add})} \\{ \\ensuremath{d_r}\\xspace(s_{add}, s_l) + \\ensuremath{d_r}\\xspace(s_l, c_i)\\}$ if\n$\\ensuremath{d_r}\\xspace(s_l, c_i)$ is available. Suppose $c_{near}$ is the nearest cluster\ncenter to $s_{add}$. If the distance $\\ensuremath{d_r}\\xspace(s_{ad},g_{nearest})>2R_p$, then we\ncreate a new cluster $g_{add}$ with $s_{add}$ as its center. If the identified\ncluster $g_{add}$ does not have a cluster-representative, then $s_{add}$ is\nmarked as its new cluster representative. Else, it is determined if $s_{add}$\ncan be a better representative for $g_{add}$ as discussed in\nSec.~\\ref{sec:representative}. 
Finally, the exact round-trip distance to the\ncluster center, $\\ensuremath{d_r}\\xspace(s_{add}, c_{add})$, is computed.\n\n\n\n\\noindent\n\\textbf{Deletion of a site:} Suppose a particular site $s_{del}$ is no longer\nviable for a service and, therefore, needs to be deleted from $S$. Suppose,\n$s_{del}$ lies in the cluster $g_{del}$. First, it is untagged as a candidate\nsite in $g_{del}$.\nIf $s_{del}$ is not the cluster representative of $g_{del}$, nothing more needs\nto be done. Otherwise, another candidate site, if available, is chosen as the\nnew cluster representative using the methodology described in\nSec.~\\ref{sec:representative}.\n\n\n \n\\noindent\n\\textbf{Addition of a trajectory:} Suppose a new trajectory $T_{add}$ is added.\nIt is first mapped into a sequence of clusters, $g_1, \\dots, g_l$. For each\nsuch cluster $g_i$, $T_{add}$ is added to the set $\\ensuremath{\\mathcal{TL}}\\xspace(g_i)$ and its round-trip\ndistance to the cluster center $c_i$ of $g_i$, $\\ensuremath{d_r}\\xspace(T,c_i)$, is computed and\nstored. In addition, $g_i$ is added to the set $\\ensuremath{CC}\\xspace(T_{add})$. The procedure\nis essentially the same one discussed in Sec.~\\ref{sec:online}.\n \n\\noindent\n\\textbf{Deletion of a trajectory:} Suppose a trajectory $T_{del}$ is deleted.\nAssume that the coverage set of $T_{del}$ is $\\ensuremath{CC}\\xspace(T_{del}) = \\{g_1,\\dots,g_l\\}$.\nFor each such cluster $g_i$, $T_{del}$ is removed from its coverage set\n$\\ensuremath{\\mathcal{TL}}\\xspace(g_i)$. Finally, the set $\\ensuremath{CC}\\xspace(T_{del})$ is deleted.\n\nWhile multiple updates can be applied one after another, batch processing is\nmore efficient. Sec.~\\ref{sec:updateexp} shows that the updates are handled\nquite efficiently.\n\n\n\\section{Extensions and Variants of TOPS\\xspace}\n\\label{sec:variants}\n\nIn this section, we present a few extensions and variants within the TOPS\\xspace\nframework and discuss how the \\textsc{Inc-Greedy}\\xspace algorithm for TOPS\\xspace can be adapted to solve\nthese problems. As \\textsc{NetClus}\\xspace essentially runs \\textsc{Inc-Greedy}\\xspace on the cluster representatives,\nit can also be adapted in a similar manner.\n\n\\subsection{Cost Constrained TOPS\\xspace}\n\\label{sec:cost}\n\nIn this problem, referred to as \\textsc{Tops-Cost}\\xspace, each site $s_i \\in \\mathcal{S}$ has\na cost $cost(s_i)$ associated with it, and the goal is to select a set of sites\nwithin a fixed budget $B$ such that the sum of trajectory utilities is\nmaximized. Formally, the problem is stated as follows. \n\n\\begin{prob}[\\textsc{Tops-Cost}\\xspace]\n\t%\n\tGiven a set of trajectories $\\mathcal{T}$, a set of candidate sites\n\t$\\mathcal{S}$ where each site $s_i \\in \\mathcal{S}$ has a fixed cost\n\t$cost(s_i)$, \\textsc{Tops-Cost}\\xspace problem with query parameters $(B,\\tau, \\ensuremath{\\psi}\\xspace)$\n\tseeks to report a set $\\mathcal{Q} \\subseteq \\mathcal{S}$, that maximize\n\tthe sum of trajectory utilities, i.e., $\\max_{\\ensuremath{\\mathcal{Q}}\\xspace \\subseteq \\ensuremath{\\mathcal{S}}\\xspace}\n\tU(\\mathcal{Q}) = \\sum_{T_j \\in \\ensuremath{\\mathcal{T}}\\xspace} U_j$ such that $U_j = \\max_{s_i \\in\n\t\\ensuremath{\\mathcal{Q}}\\xspace}\\{\\ensuremath{\\psi}\\xspace(T_j,s_i)\\}$, $cost(\\mathcal{Q})=\\sum_{s_i \\in \\mathcal{Q}}\n\tcost(s_i) \\le B$.\n\t%\n\\end{prob}\n\nThe TOPS\\xspace problem reduces to \\textsc{Tops-Cost}\\xspace by assigning unit cost to each site and\n$B=k$. 
In contrast to TOPS\\xspace, \\textsc{Tops-Cost}\\xspace does not restrict the number of sites\nselected in the answer set. Since TOPS\\xspace is NP-hard, so is \\textsc{Tops-Cost}\\xspace.\n\nThe \\textsc{Inc-Greedy}\\xspace algorithm can be adapted based on the greedy heuristic for the\nbudgeted maximum coverage problem \\cite{khuller1999budgeted} to solve\n\\textsc{Tops-Cost}\\xspace. The algorithm starts with an empty set of sites $\\mathcal{Q} =\n\\varnothing$ and proceeds in iterations. In each iteration, it selects a site\n$s_i \\in \\mathcal{S} - \\mathcal{Q}$ such that $\\left(U(\\mathcal{Q} \\cup\n\\{s_i\\}) - U(\\mathcal{Q})\\right)\/cost(s_i)$ is \\emph{maximal}. If $cost(s_i)$\nis within the remaining budget $B - cost(\\mathcal{Q})$, it is added to\n$\\mathcal{Q}$; otherwise, it is pruned from $\\mathcal{S}$. This process\ncontinues until $\\mathcal{S} = \\varnothing$.\n\nIt was shown in \\cite{khuller1999budgeted} that the above approach can perform\narbitrarily bad. Thus, in order to bound the approximation guarantee, the\nalgorithm is augmented with the following step. Assume $s_{max}$ to be a\ncandidate site such that $cost(s_{max}) \\leq B$ and $U(\\{s_{max}\\})$ is\nmaximal. The algorithm returns either the site $s_{max}$ or the set\n$\\mathcal{Q}$ whichever offers the maximum utility. Following the analysis in\n\\cite{khuller1999budgeted}, this scheme is guaranteed to produce a solution\nwith an approximation bound of $(1-1\/e)\/2$.\n\n\\subsection{Capacity Constrained TOPS\\xspace}\n\\label{sec:capacity}\n\nIn this problem, referred to as \\textsc{Tops-Capacity}\\xspace, each site $s_i \\in \\mathcal{S}$ has a\nfixed capacity $cap(s_i)$ that denotes the maximum number of trajectories it\ncan serve. The goal is to select a set $\\ensuremath{\\mathcal{Q}}\\xspace \\subseteq \\ensuremath{\\mathcal{S}}\\xspace$ of size $k$ such that\nthe sum of trajectory utilities is maximized. Formally, the problem is stated\nas follows. \n\n\\begin{prob}[\\textsc{Tops-Capacity}\\xspace]\n\t%\n\tConsider a set of trajectories $\\mathcal{T}$, a set of candidate sites\n\t$\\mathcal{S}$ where each site $s_i \\in \\mathcal{S}$ can serve at most\n\t$cap(s_i)$ trajectories. For any set $\\ensuremath{\\mathcal{Q}}\\xspace \\subseteq \\ensuremath{\\mathcal{S}}\\xspace$, let $x_{ji}$ be a\n\tboolean indicator variable such that $x_{ji}=1$ if and only if the\n\ttrajectory $T_j$ can be served by the site $s_i \\in \\ensuremath{\\mathcal{Q}}\\xspace$. \\textsc{Tops-Capacity}\\xspace problem\n\twith query parameters $(k,\\tau,\\ensuremath{\\psi}\\xspace)$ seeks to report a set $\\mathcal{Q}\n\t\\subseteq \\mathcal{S}, |\\mathcal{Q}|=k$, that maximizes the sum of\n\ttrajectory utilities, i.e., $\\max_{\\ensuremath{\\mathcal{Q}}\\xspace \\subseteq \\ensuremath{\\mathcal{S}}\\xspace} U(\\ensuremath{\\mathcal{Q}}\\xspace) = \\sum_{T_j \\in\n\t\\ensuremath{\\mathcal{T}}\\xspace} U_j$ such that $U_j = \\max_{s_i \\in \\ensuremath{\\mathcal{Q}}\\xspace} (\\ensuremath{\\psi}\\xspace(T_j,s_i) . x_{ji})$,\n\tand $\\forall s_i \\in \\ensuremath{\\mathcal{Q}}\\xspace, \\ \\sum_{T_j \\in \\ensuremath{\\mathcal{T}}\\xspace} x_{ji} \\leq cap(s_i)$. \n\t%\n\\end{prob}\n\nTOPS\\xspace reduces to \\textsc{Tops-Capacity}\\xspace by assigning the capacity of each site to be infinite\nor more than the total number of trajectories in the dataset. Hence, \\textsc{Tops-Capacity}\\xspace\nis also NP-hard.\n\n\\textsc{Inc-Greedy}\\xspace can be adapted to solve \\textsc{Tops-Capacity}\\xspace in the following manner. 
The algorithm\nstarts with an empty set $\\ensuremath{\\mathcal{Q}}\\xspace_0=\\varnothing$ and sets $\\forall T_j \\in \\ensuremath{\\mathcal{T}}\\xspace,\nU_j=0$. Suppose the set of sites selected after iteration $\\theta=1,\\dots,k$, is\ndenoted by $\\ensuremath{\\mathcal{Q}}\\xspace_\\theta$. In each iteration $\\theta$, it augments $\\ensuremath{\\mathcal{Q}}\\xspace_{\\theta-1}$\nby selecting a site $s_\\theta \\in \\ensuremath{\\mathcal{S}}\\xspace - \\ensuremath{\\mathcal{Q}}\\xspace_{\\theta-1}$ that offers the maximal\nmarginal gain in utility.\n\nIt then updates the trajectory utilities $U_j$. The utility of $T_j$ due to\n$\\ensuremath{\\mathcal{Q}}\\xspace_{\\theta}$ is $U_j(\\ensuremath{\\mathcal{Q}}\\xspace_\\theta) = \\max_{s_i \\in \\ensuremath{\\mathcal{Q}}\\xspace_{\\theta}}\\ensuremath{\\psi}\\xspace(T_j,s_i)$.\nThe marginal gain in the utility of $T_j$ due to addition of $s_\\theta$ is\n$U_j(s_\\theta) = U_j(\\ensuremath{\\mathcal{Q}}\\xspace_{\\theta-1}\\cup \\{s_\\theta\\})-U_j(\\ensuremath{\\mathcal{Q}}\\xspace_{\\theta-1})$.\nSince any site $s_i \\in \\ensuremath{\\mathcal{S}}\\xspace -\\ensuremath{\\mathcal{Q}}\\xspace_{\\theta-1}$ can serve at most\n$\\alpha_i=\\min\\{|TC(s_i)|, cap(s_i)\\}$ trajectories, its marginal utility is\ndefined as the sum of the largest $\\alpha_i$ trajectory marginal utilities\n$U_j(s_i)$.\n\nSince the objective function of the \\textsc{Tops-Capacity}\\xspace problem is identical to that of\nTOPS\\xspace, it follows the non-decreasing sub-modular property. Thus, \\textsc{Inc-Greedy}\\xspace offers\nthe same approximation bound of $1-1\/e$, as stated in\nLem.~\\ref{lem:inc_topso2}.\n\n\\subsection{TOPS\\xspace with Existing Services}\n\\label{sec:existing}\n\nOptimal location queries usually factor in existing service locations before\nidentifying new ones. The problem is NP-hard. The \\textsc{Inc-Greedy}\\xspace algorithm can take\ninto account the existing service locations as follows.\n\nSuppose $\\mathcal{E_S}$ is the set of existing service locations. On receiving\nthe query parameter $\\tau$, the covering sets \\ensuremath{TC}\\xspace and \\ensuremath{SC}\\xspace over the set of\nsites $\\mathcal{S} \\cup \\mathcal{E_S}$ are computed. \\textsc{Inc-Greedy}\\xspace starts with\n$\\mathcal{Q}_0 = \\mathcal{E_S}$ and updates the marginal utilities of the sites\nin $\\mathcal{S}$. The remaining algorithm stays unaltered. The algorithm\nterminates after selecting $k$ sites from the set $\\mathcal{S}$, in the same\nmanner as TOPS\\xspace. \n\nAn advantage and important feature of \\textsc{Inc-Greedy}\\xspace is that the site chosen in a given\niteration depends solely on what the existing service locations are, and not on\nhow they were chosen.\n\nSince the initial utility $U(\\ensuremath{\\mathcal{Q}}\\xspace_0)=U(\\mathcal{E_S}) \\neq 0$, the approximation\nbound of $1-1\/e$ is not directly applicable. However, we next show that the\nsame bound holds.\n\nFor any set $\\ensuremath{\\mathcal{Q}}\\xspace \\subseteq \\ensuremath{\\mathcal{S}}\\xspace$, the extra utility, defined as $U^\\prime(\\ensuremath{\\mathcal{Q}}\\xspace) =\nU(\\ensuremath{\\mathcal{Q}}\\xspace)-U(\\mathcal{E_S})$ where $U(\\mathcal{E_S})$ is the utility offered by the\nexisting services, is non-negative. Now we have $U^\\prime(\\ensuremath{\\mathcal{Q}}\\xspace)$ as a\nnon-decreasing sub-modular function with $U^\\prime(\\ensuremath{\\mathcal{Q}}\\xspace_0) = 0$. 
Let $\\textsc{Opt}\\xspace$ and\n\\ensuremath{\\mathcal{Q}}\\xspace denote the set of sites returned by an optimal algorithm and \\textsc{Inc-Greedy}\\xspace\nrespectively. Then, following Lem.~\\ref{lem:inc_topso2}, $U^\\prime(\\ensuremath{\\mathcal{Q}}\\xspace) \\geq\n(1-1\/e) U^\\prime(\\textsc{Opt}\\xspace)$. This leads to $U(\\ensuremath{\\mathcal{Q}}\\xspace) \\geq (1-1\/e) U(\\textsc{Opt}\\xspace) +\nU(\\mathcal{E_S})\/e$. Since $U(\\mathcal{E_S}) \\geq 0$, hence, $U(\\ensuremath{\\mathcal{Q}}\\xspace) \\geq\n(1-1\/e) U(\\textsc{Opt}\\xspace)$.\n\n\n\n\\subsection{Other TOPS\\xspace Variants}\n\\label{sec:other}\n\nThere are certain variants of trajectory-aware service location problems that\nalready exist in the literature. The proposed TOPS\\xspace formulation generalizes\nthese variants and, thus, the \\textsc{Inc-Greedy}\\xspace algorithm can be used to solve them. We\nnext discuss some of the important variations.\n\n\\noindent\n\\textbf{\\textsc{Tops1}\\xspace: Binary world}: This is the simple binary instance of\nTOPS\\xspace query that is defined in Def.~\\ref{def:binary}. \nThe problem is still NP-hard \\cite{berman1995locatingMain}. \\textsc{Inc-Greedy}\\xspace offers the\nsame approximation bound for \\textsc{Tops1}\\xspace, as in the case of TOPS\\xspace\n(Th.~\\ref{thm:inc_topso}). \n\n\\noindent\n\\textbf{\\textsc{Tops2}\\xspace: Maximize market size:} Instead of operating in a\nbinary world where a trajectory is either covered or not covered, in this\nformulation, the aim is to maximize the probability of a trajectory being\ncovered \\cite{berman1995locatingMain}. The probability $p(T_j,s_i)$ of $T_j$\nbeing covered by site $s_i$ is modeled as a convex function of the distance\n$\\ensuremath{d_r}\\xspace(T_j,s_i)$ between them. This problem aims to set up $k$ services that\nmaximizes the expected number of total trajectories served. This is a special\ncase of our proposed TOPS\\xspace formulation with $\\ensuremath{\\psi}\\xspace(T_j,s_i) = p(T_j,s_i)$ if\n$\\ensuremath{d_r}\\xspace(T_j,s_i) \\leq \\tau$ and $0$ otherwise. It is again NP-hard\n\\cite{berman1995locatingMain}. \\textsc{Inc-Greedy}\\xspace offers the same approximation bound for\n\\textsc{Tops2}\\xspace, as in the case of TOPS\\xspace (Th.~\\ref{thm:inc_topso}).\n\n\\noindent\n\\textbf{\\textsc{Tops3}\\xspace: Minimize user inconvenience:} Assuming that each user on\nits trajectory would necessarily avail a service, this problem aims to minimize\nthe expected deviation incurred by a user \\cite{berman1995locatingMain}. This\ncan be handled through the TOPS\\xspace framework by setting the preference score to\n$\\ensuremath{\\psi}\\xspace(T_j,s_i) = - \\ensuremath{d_r}\\xspace(T_j,s_i)$, and $\\tau$ to $\\infty$. Since the\ntrajectory utility is defined to be the maximum of the preference scores of the\nselected sites, maximizing the sum of trajectory utilities would minimize the\ntotal user deviation. \\textsc{Tops3}\\xspace is NP-hard owing to reduction from the\n$k$-medians problem \\cite{berman1995locatingMain}. The approximation bound of\n\\textsc{Inc-Greedy}\\xspace for this problem is not yet known.\n\n\n\\noindent\n\\textbf{\\textsc{Tops4}\\xspace: Best service locations under fixed market share:}\nThe aim here is to place the minimum number of services that can capture a fixed\nshare of the market comprising of the user trajectories\n\\cite{berman1992optimal}. 
The problem is complementary to \\textsc{Tops1}\\xspace and asks the\nfollowing: What is the smallest set of sites $\\mathcal{Q} \\subseteq \\mathcal{S}$\nsuch that at least $\\beta$ fraction of $|\\mathcal{T}|$ is covered, where $0 <\n\\beta \\leq 1$? This problem is NP-hard \\cite{berman1992optimal}. Since \\textsc{Inc-Greedy}\\xspace\nalgorithm is iterative, it selects as many sites that are necessary to cover the\ndesired fraction of users.\nNote that the set cover problem directly reduces to this problem, and so does\nthe greedy heuristic for the set cover problem to \\textsc{Inc-Greedy}\\xspace. Hence, \\textsc{Inc-Greedy}\\xspace algorithm\noffers the same approximation bound of $1 + \\ln n$ for \\textsc{Tops4}\\xspace.\n\n\n\\subsection{Generic Framework}\n\nFrom the above discussions, it is clear that the TOPS\\xspace framework is highly\ngeneric and can absorb many extensions and variations with little or no\nmodification in the prposed algorithms \\textsc{Inc-Greedy}\\xspace and \\textsc{NetClus}\\xspace. Also, importantly, the\nframework also enables \\emph{combining} multiple extensions and variants. For\nexample, \\textsc{Tops-Cost}\\xspace and \\textsc{Tops-Capacity}\\xspace extensions can be merged to create a new version\nof TOPS\\xspace. In Sec.~\\ref{sec:results-tops-variants}, we discuss experimental\nevaluation of some of the above extensions and variants.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\setcounter{equation}{0}\n\nIt was Dirac \\cite{D} who first explored the electromagnetic duality in the Maxwell equations and came up with a mathematical\nformalism of magnetic monopoles, which was initially conceptualized by P. Curie \\cite{Curie}. Motivated by the search of a quark model,\nSchwinger \\cite{S} extended the study of Dirac \\cite{D} to obtain a new class of particle-like solutions of the Maxwell equations carrying both electric and\nmagnetic charges, called dyons, and derived an elegant charge-quantization formula for dyons, generalizing that of Dirac for monopoles. However, both the Dirac monopoles and Schwinger dyons\nare of infinite energy and deemed unphysical. In the seminal works of Polyakov \\cite{Po} and 't Hooft \\cite{'t}, finite-energy smooth monopole solutions were obtained\nin non-Abelian gauge field theory. Later, Julia and Zee \\cite{JZ} extended the works of Polyakov and 't Hooft and obtained finite-energy smooth dyon solutions in the same\nnon-Abelian gauge field theory framework. See Manton and Sutcliffe \\cite{MS} for a review of monopoles and dyons in the context of a research monograph on topological solitons.\nSee also \\cite{A,Go,R} for some earlier reviews on the subject.\nIn contemporary physics, monopoles and dyons are relevant theoretical constructs for an interpretation of quark confinement \\cite{Gr,Man,SY}.\n\nMathematically, the existence of monopole and dyons is a sophisticated and highly challenging problem. In fact, the construction of monopoles and dyons was first made possible \nin the critical Bogomol'nyi \\cite{B} and Prasad--Sommerfeld \\cite{PS} (BPS) limit, although an analytic proof of existence of spherically symmetric unit-charge monopoles was also obtained\nroughly at the same time \\cite{BPST}. A few years later, the BPS monopoles of multiple charges were obtained by Taubes \\cite{JT,Taubes} using a gluing technique to patch a\ndistribution of widely separated unit-charge BPS monopoles together. 
Technically, the existence of dyons is a more difficult problem even for spherically symmetric solutions of unit charges.\nThe reason is that the presence of electricity requires a non-vanishing temporal component of gauge field as a consequence of the 't Hooft construction \\cite{tH} of electromagnetism so that\nthe action functional governing the equations of motion becomes indefinite due to the Minkowski spacetime signature. In fact, the original derivation of the BPS dyons is based on\nan internal-space rotation of the BPS monopoles, also called the Julia--Zee correspondence \\cite{A}. An analytic proof for the existence of the Julia--Zee dyons \\cite{JZ},\naway from the BPS limit, was obtained by\nSchechter and Weder \\cite{SW} using a constrained minimization method. Developing this method, existence theorems have been established for dyons in the Weinberg--Salam electroweak\ntheory \\cite{CM,Yw,Ybook}, \nand in the Georgi--Glashow--Skyrme model \\cite{BKT,LY}, as well as for the Chern--Simons vortex equations \\cite{CGSY,LPY}.\n\nIt is well known that the Skyrme model \\cite{S1,S2} is important for baryon physics \\cite{GP,GM,MRS,ZB} and soliton-like solutions in the Skyrme model, called Skyrmions, are used to model\nelementary particles. Thus, in order to investigate inter-particle forces among Skyrmions, gauge fields have been introduced into the formalism\n \\cite{AR,BHT,BKT,CW,DF,Eilam,PT,W1}. Here, we are interested in the minimally gauged Skyrme model studied by Brihaye, Hartmann, and Tchrakian \\cite{BHT}, where the Skyrme (baryon) charge may be\nprescribed explicitly in a continuous interval. The Skyrme map is hedgehog and the presence of gauge fields makes the static solutions carry both electric and magnetic charges. In other words,\nthese gauged Skyrmions are dyons. In \\cite{BHT}, numerical solutions are obtained which convincingly support the existence of such solutions. The purpose of this paper is to give an analytic\nproof for the existence of these solutions, extending the methods developed in the earlier studies \\cite{LY,SW,Yw,Ybook} for the dyon solutions in other models in field theory described above.\nSee also \\cite{BR,FR}.\nNote that, since here we are interested in the minimally gauged Skyrme model where no Higgs field is present, we lose the control over the negative terms\nin the indefinite action functional which can otherwise be controlled if a Higgs field is present \\cite{BKT,LY,SW,Yw,Ybook}. In order to overcome this difficulty, we need to obtain suitable\nuniform estimates for a minimizing sequence at singular boundary points and to achieve strong convergence results, for the sequences of the negative terms.\n\nThe contents of the rest of the paper are outlined as follows. In Section 2, we review the minimally gauged Skyrme model of Brihaye--Hartmann--Tchrakian \\cite{BHT} and then state our main\nexistence theorem for dyon solutions. 
It is interesting that the solutions obtained are of unit monopole and magnetic charges but continuous Skyrme charge and non-quantized electric charge.\nIn the subsequent three sections, we establish this existence theorem.\nIn Section 3, we prove the existence of a finite-energy critical point of the indefinite action functional by formulating and solving a constrained minimization problem.\nIn Section 4, we show that the critical point obtained in the previous section for the constrained minimization problem solves the original equations of motion by proving that\nthe constraint does not give rise to a Lagrange multiplier problem. In Section 5, we study the properties of the solutions. In particular, we obtain some uniform decay\nestimates which allow us to describe the dependence of the ('t Hooft) electric charge on the asymptotic value of the electric potential function at infinity.\n\\section{Dyons in the minimally gauged Skyrme model}\n\\setcounter{equation}{0}\n\nAs in the classical Skyrme model \\cite{S1,S2}, the minimally gauged Skyrme model of Brihaye--Hartmann--Tchrakian \\cite{BHT} is built around a wave map, $\\phi=(\\phi^a)$ ($a=1,2,3,4$), from\nthe Minkowski spacetime $\\bfR^{3,1}$ of signature $(+---)$ into the unit sphere, $S^3$, in $\\bfR^4$, so that $\\phi$ is subject to the constraint $|\\phi|^2=(\\phi^a)^2=1$, where and in the sequel,\nsummation convention is implemented over repeated indices. Finite-energy condition implies that $\\phi$ approaches a fixed vector in $S^3$, at spatial infinity. Thus,\nat any time $t=x^0$, $\\phi$ may be viewed as a map from $S^3$, which is a one-point compactification of $\\bfR^3$, into $S^3$. Hence, $\\phi$ is naturally characterized by an integral class,\nsay $[\\phi]$, in\nthe homotopy group $\\pi_3(S^3)=\\bfZ$. The integer $[\\phi]$, also identified as the Brouwer degree of $\\phi$, may be represented as a volume integral of the form\n\\be \\label{B}\nB_\\phi=[\\phi]=\\frac1{12\\pi^2}\\int_{\\bfR^3}\\vep_{ijk}\\vep^{abcd}\\pa_i\\phi^a\\pa_j\\phi^b\\pa_k\\phi^c\\phi^d\\,\\dd x,\n\\ee\nwhere $i,j,k=1,2,3$ denote the spatial coordinate indices and $\\vep$ is the Kronecker skewsymmetric tensor. \nThis topological invariant is also referred to as the Skyrme charge or baryon charge. Let $(\\eta_{\\mu\\nu})=\\mbox{diag}\\{1,-1,-1,-1\\}$ ($\\mu,\\nu=0,1,2,3$) be\nthe Minkowski metric tensor and $(\\eta^{\\mu\\nu})$ its inverse.\nWe use $|A_\\mu|^2=\\eta^{\\mu\\nu}A_\\mu A_\\nu$ to denote the squared Minkowski norm of a 4-vector $A_\\mu$ and $A_{[\\mu}B_{\\nu ]}=A_\\mu B_\\nu-A_{\\nu}B_\\mu$ to denote the skewsymmetric tensor product\nof $A_\\mu$ and $B_\\mu$. The Lagrangian action density of the Skyrme model \\cite{S1,S2} is of the form\n\\be \n{\\cal L}=\\frac12\\kappa_1^2|\\pa_\\mu \\phi^a|^2-\\frac12 k_2^4|\\pa_{[\\mu}\\phi^a \\pa_{\\nu ]}\\phi^b|^2,\n\\ee \nwhere $\\kappa_1,\\kappa_2>0$ are coupling constants. The model is invariant under any internal space rotation. That is, the model enjoys a global $O(4)$ symmetry. Such a symmetry is broken \ndown to $SO(3)$ by\nsuppressing the vacuum manifold to a fixed point, say ${\\bf n}=(0,0,0,1)$, which may be specified by inserting a potential term of the form\n\\be \nV=\\lm(1-\\phi^4)=\\lm (1-{\\bf n}\\cdot\\phi)^4,\\quad\\lm>0,\n\\ee\ninto the Skyrme Lagrangian density.\n\nThe `residual' $SO(3)$ symmetry is now to be gauged. 
In order to do so, we follow \\cite{BHT} to set $\\phi=(\\phi^a)=(\\phi^\\alpha,\\phi^4)$ and replace the common derivative\nby the $SO(3)$ gauge-covariant derivative\n\\be \nD_\\mu\\phi^\\alpha=\\pa_\\mu\\phi^\\alpha+\\vep^{\\alpha\\beta\\gamma}A^\\beta_\\mu\\phi^\\gamma,\\quad\\alpha,\\beta,\\gamma=1,2,3,\\quad D_\\mu\\phi^4=\\pa_\\mu\\phi^4,\n\\ee\nwhere $A^\\alpha_\\mu$ is the $\\alpha$-component of the $SO(3)$-gauge field ${\\bf A}_\\mu$ in the standard isovector representation ${\\bf A}_\\mu=(A_\\mu^\\alpha)$, which induces the gauge field\nstrength tensor\n\\be \n{\\bf F}_{\\mu\\nu}=\\pa_\\mu {\\bf A}_\\nu-\\pa_\\nu{\\bf A}_\\mu+{\\bf A}_\\mu\\times {\\bf A}_\\nu=(F^\\alpha_{\\mu\\nu}).\n\\ee\nAs a result, the $SO(3)$ gauged Skyrme model is then defined by the Lagrangian density \\cite{BHT}\n\\begin{eqnarray}\n\\mathcal{L}\n&=&-\\kappa^4_0|F^\\alpha_{\\mu\\nu}|^2+\\frac{1}{2}\\kappa^2_1|D_\\mu\\phi^a|^2-\\frac{1}{2}\\kappa^4_2|D_{[\\mu}\\phi^aD_{\\nu]}\\phi^b|^2-V_\\omega(\\phi),\n\\label{L}\\end{eqnarray} \nwhere $\\kappa_0>0$ and the potential function $V_\\omega$ is taken to be\n\\be \\label{Vo}\nV_\\omega(\\phi)=\\lm(\\cos\\omega-\\phi^4)^2,\\quad 0\\leq\\omega\\leq\\pi,\n\\ee\nwith $\\omega$ an additional free parameter which is used to generate a rich vacuum manifold defined by\n\\be \\label{V}\n|\\phi^\\alpha|=\\sin\\omega,\\quad \\phi^4=\\cos\\omega.\n\\ee\n\nIn order to stay within the context of minimal coupling, we shall follow \\cite{BHT} to set $\\lm=0$ to suppress the potential term (\\ref{Vo}) but maintain the vacuum manifold (\\ref{V}) by imposing\nappropriate boundary condition at spatial infinity.\n\nBesides, since the topological integral (\\ref{B}) is not gauge-invariant, we need to replace it by the quantity \\cite{AT,BHT}\n\\be \nQ_S=B_{\\phi,A}=\\frac1{12\\pi^2}\\int_{\\bfR^3}\\left(\\vep_{ijk}\\vep^{abcd}D_i\\phi^a D_j\\phi^b D_k\\phi^c\\phi^d-3\\vep_{ijk}\\phi^4 F_{ij}^\\alpha D_k\\phi^\\alpha\\right)\\,\\dd x,\n\\ee\nas the Skyrme charge or baryon charge. On the other hand, following \\cite{Go,Ryder}, the monopole charge $Q_M$ is given by\n\\be \nQ_M=\\frac1{16\\pi}\\int_{\\bfR^3}\\vep_{ijk} F^\\alpha_{ij}D_k\\phi^\\alpha\\,\\dd x,\n\\ee\nwhich defines the homotopy class of $\\phi$ viewed as a map from a 2-sphere near the infinity of $\\bfR^3$ into the vacuum manifold described in (\\ref{V}) which happens to be a \n2-sphere as well when $\\omega\\in (0,\\pi)$.\n\nFollowing \\cite{BHT}, we will look for solutions under the spherically symmetric ansatz\n\\bea\nA^\\alpha_0&=&g(r)\\left(\\frac{x^\\alpha}r\\right),\\quad\nA^\\alpha_i=\\frac{a(r)-1}{r}\\varepsilon_{i\\alpha\\beta}\\left(\\frac{x^\\beta}r\\right),\\label{a1}\\\\\n\\phi^\\alpha&=&\\sin f(r)\\left(\\frac{x^\\alpha}r\\right),\\ \\ \\ \\phi^4=\\cos\nf(r),\\label{a2}\\eea\nwhere $r=|x|$ ($x\\in\\bfR^3$). Since the presence of the function $g$ gives rise to a nonvanishing temporal component of the gauge field,\n$g$ may be regarded as an electric potential. 
With (\\ref{a2}), the Skyrme charge $Q_S$ can be shown to be given by \\cite{AT,BHT,LY}\n\\be \\label{QS}\nQ_S=-\\frac2\\pi\\int_0^\\infty \\sin^2 f(r) f'(r)\\,\\dd r.\n\\ee\n\nRecall also that, with the notation $\\vec{\\phi}=(\\phi^\\alpha)$ and the updated gauge-covariant derivative \n\\be \nD_\\mu\\vec{\\phi}=\\pa_\\mu{\\vec\\phi}+{\\bf A}_\\mu\\times{\\vec\\phi},\n\\ee\nwe may express the 't Hooft electromagnetic field $F_{\\mu\\nu}$ by the formula \\cite{JZ,tH0,tH} \n\\be\\label{EM}\nF_{\\mu\\nu}=\\frac1{|\\vec{\\phi}|}{\\vec\\phi}\\cdot {\\bf F}_{\\mu\\nu}-\\frac1{|\\vec{\\phi}|^3}{\\vec{\\phi}}\\cdot(D_\\mu{\\vec\\phi}\\times\nD_\\nu{\\vec\\phi}).\n\\ee\n\nInserting (\\ref{a1}) and (\\ref{a2}) into (\\ref{EM}), we see that the electric and magnetic fields, \n${\\bf E}=\n(E^i)$ and ${\\bf B}=(B^i)$, are given by \\cite{Go,JZ,PS}\n\\bea\nE^i&=&-F^{0i}=\\frac{x^i}{r}\\frac{\\dd g}{\\dd r},\\label{g60}\\\\\nB^i&=&-\\frac12\\epsilon_{ijk}F^{jk}=\\frac{x^i}{r^3}.\\label{g61}\n\\eea\n Therefore the magnetic charge $Q_m$ may be calculated immediately to give us\n\\be\\label{g62}\nQ_m=\\frac1{4\\pi}\\,\\lim_{r\\to\\infty}\\oint_{S^2_r}{\\bf B}\\cdot\\dd{\\bf S}=1,\n\\ee\nwhere $S^2_r$ denotes the 2-sphere of radius $r$, centered at the origin in the 3-space. Similarly, the monopole charge $Q_M$ may be shown to be\n1 as well \\cite{LY}.\n\n\nWithin the ansatz (\\ref{a1})--(\\ref{a2}), using suitable rescaling, and denoting\n$\\kappa^4_2\\equiv\\kappa$, it is shown \\cite{BHT} that the Lagrangian density (\\ref{L}) may be reduced into the following one-dimensional one, after suppressing\nthe potential term,\n\\be \\label{L1}\n{\\cal L}={\\cal E}_1-{\\cal E}_2,\n\\ee\nwhere\n\\bea\n{\\cal E}_1&=&2\\left(2(a')^2+\\frac{(a^2-1)^2}{r^2}\\right)+{1\\over\n2}\\left(r^2(f')^2+2a^2\\sin^2f\\right)\\nn\\\\\n&&+2\\kappa a^2\\sin^2f\\bigg(2f'^2+\\frac{a^2\\sin^2f}{r^2}\\bigg),\\label{E1}\\\\\n{\\cal E}_2&=&r^2(g')^2+2a^2g^2,\\label{E2}\\end{eqnarray}\nand $'$ denotes the differentiation $\\frac{\\small\\dd}{{\\small\\dd }r}$, such that the associated Hamiltonian (energy) density is given by\n\\be \n{\\cal E}={\\cal E}_1+{\\cal E}_2.\n\\ee\nThe equations of motion of the original Lagrangian density (\\ref{L}) now become the variational equation\n\\be \\label{dA}\n\\delta {L}=0,\n\\ee\n of the static action \n\\be \\label{action}\n{L}(a,f,g)=\\int_0^\\infty {\\cal L}\\,\\dd r=\\int_0^\\infty({\\cal E}_1-{\\cal E}_2)\\,\\dd r,\n\\ee\n which is indefinite. Explicitly, the equation (\\ref{dA}) may be expressed in terms of\nthe unknowns $a,f,g$ as\n\\begin{eqnarray}\n a''&=&\\frac{1}{r^2}a(a^2-1)+{1\\over\n 4}a\\sin^2f+\\kappa a\\sin^2f(f')^2\\nn\\\\\n&&\\quad +{1\\over{r^2}}\\kappa a^3\\sin^4f\n -\\frac{ag^2}{2},\\label{4}\\\\\n 8\\kappa(a^2\\sin^2ff')'+(r^2f')'&=&2a^2\\sin f \\cos f+8\\kappa a^2\\sin f\\cos f(f')^2\\nn\\\\\n&&\\quad+\\frac{8\\kappa a^4\\sin^3f\\cos f}{r^2},\n\\label{5}\\\\\n(r^2g')'&=&2a^2g.\\label{6}\\end{eqnarray}\n\nWe are to solve these equations under suitable boundary conditions. 
First we observe in view of the ansatz (\\ref{a1})--(\\ref{a2}) that the regularity of the fields $\\phi$ and $A_\\mu$ \nimposes at $r=0$ the boundary condition\n\\be \\label{bc1}\na(0)=1,\\quad f(0)=\\pi,\\quad g(0)=0.\n\\ee \nFurthermore, the finite-energy condition\n\\be \nE(a,f,g)=\\int_0^\\infty{\\cal E}\\,\\dd r=\\int_0^\\infty({\\cal E}_1+{\\cal E}_2)\\,\\dd r<\\infty,\n\\ee\nthe definition of the vacuum manifold (\\ref{V}), and the non-triviality of the $g$-sector lead us to the boundary condition at $r=\\infty$, given as\n\\be \\label{bc2}\na(\\infty)=0,\\quad f(\\infty)=\\omega,\\quad g(\\infty)=q,\n\\ee\nwhere $q>0$ (say) is a parameter, to be specified later, which defined the asymptotic value of the electric potential at infinity.\n\nApplying the boundary conditions (\\ref{bc1}) and (\\ref{bc2}) in (\\ref{QS}), we obtain\n\\be \\label{Qo}\nQ_S=Q_S(\\omega)=1+\\frac1\\pi \\left(\\frac12\\sin(2\\omega) -\\omega\\right),\n\\ee\nwhich is strictly decreasing for $\\omega\\in [0,\\pi]$ with $Q_S(0)=1, Q_S(\\frac\\pi2)=\\frac12,Q_S(\\pi)=0$, and the range of $Q_S(\\omega)$ over $[0,\\pi]$ is the entire interval $[0,1]$.\n\nWe now evaluate the electric charge.\nUsing (\\ref{g60}) and the equation (\\ref{6}), we see that the electric charge $Q_e$ is given by\n\\bea\nQ_e&=&\\frac1{4\\pi}\\lim_{r\\to\\infty}\\oint_{S^2_r} {\\bf E}\\cdot\\,\\dd{\\bf S}\n=\\frac1{4\\pi}\\,\\lim_{r\\to\\infty}\\int_{|x|0, \\omega0$, and $a,f,g$ are strictly monotone functions of $r$. Moreover,\n$a(r)$ vanishes at infinity exponentially fast and $f(r), g(r)$ approach their limiting values at the rate {\\rm O}$(r^{-1})$ as $r\\to\\infty$. The solution\ncarries a unit monopole charge, $Q_M=1$, a continuous Skyrme charge $Q_S$ given as function of $\\omega$ by\n\\be \nQ_S(\\omega)=1+\\frac1\\pi\\left(\\frac12\\sin(2\\omega)-\\omega\\right),\\quad \\frac\\pi2<\\omega<\\pi,\n\\ee \nwhich may assume any value in the interval\n$(0,\\frac12)$, a unit magnetic charge $Q_m=1$, and an electric charge $Q_e$ given by the integral\n\\be \\label{2.34}\nQ_e=2\\int_0^\\infty a^2(r)g(r)\\,{\\rm\\dd} r>0,\n\\ee\nwhich depends on $q$ and approaches zero as $q\\to0$.\n\\end{theorem}\n\nIt is interesting that the 't Hooft electric charge $Q_e$ cannot be quantized as stated in the Dirac quantization formula, which reads in normalized units \\cite{Ryder},\n\\be \\label{2.35}\nq_e q_m =\\frac n2,\\quad n\\in\\bfZ,\n\\ee\nwhere $q_e$ and $q_e$ are electric and magnetic charges, respectively. Indeed, according to\nTheorem \\ref{Main}, $Q_e=0$ is an accumulation point of the set of electric charges of the model. On the other hand,\nthe formula (\\ref{2.35}) says that, for $q_m>0$, the smallest positive value of $q_e$ is $(2q_m)^{-1}$.\n\\medskip \n\n We note that\nthe expression (\\ref{2.34}) suggests that $Q_e$ should depend on $q$ continuously, although a proof of this statement is yet to be worked out.\n\\medskip \n\nThe above theorem will be established in the subsequent sections.\n\n\\section{Constrained minimization problem}\n\\setcounter{equation}{0}\n\nWe first observe that the action density (\\ref{L1}) is invariant under the transformation $f\\mapsto\\pi-f$. 
Hence we may `normalize' the boundary conditions (\\ref{bc1}) and (\\ref{bc2}) into\n\\bea \na(0)&=&1,\\quad f(0)=0,\\quad g(0)=0,\\label{bc3}\\\\\na(\\infty)&=&0,\\quad f(\\infty)=\\pi-\\omega,\\quad g(\\infty)=q,\\label{bc4}\n\\eea\n\nThe proof of our main existence theorem, Theorem \\ref{Main}, for the dyon solutions in the minimally gauged Skyrme model amounts to establishing the following.\n\n\\begin{theorem}\\label{theorem 01}\nGiven $\\omega$ satisfying\n\\be \\label{om}\n\\frac\\pi2<\\omega<\\pi,\n\\ee\nset\n\\be \\label{qom}\nq_\\omega=\\frac{\\sqrt\n2}{\\pi}(\\pi-\\omega).\n\\ee\nFor any constant $q$ satisfying $00, 00$. \n\\end{theorem}\n\nThe proof of the theorem will be carried out through establishing a series of lemmas. In this section, we concentrate on formulating and solving\na constrained minimization problem put forth to overcome the difficulty arising from the negative terms in the action functional (\\ref{action}).\nIn the next section, we show that the solution obtained in this section is indeed a critical point of (\\ref{action}) so that the constraint does not\ngive rise to a Lagrangian multiplier problem.\n\nTo proceed, we begin by defining the admissible space of our one-dimensional variational problem to be\n\\bea\n{\\cal A}&=&\\left\\{(a,f,g)| a,f,g\\mbox{ are continuous functions over $[0,\\infty)$ which are\nabsolutely}\\right.\\nn\\\\\n&&\\,\\left.\\mbox{continuous on any compact subinterval of $(0,\\infty)$, satisfy}\\right.\\nn\\\\\n&&\\,\\mbox{the boundary conditions $a(0)=1,a(\\infty)=0,f(0)=0,f(\\infty)=\\pi-\\omega,$}\\nn\\\\\n&&g(\\infty)=q,\n\\left.\\mbox{ and of finite-energy $E(a,f,g)<\\infty$}\\right\\}.\\nn\n\\eea\n\nNote that in the admissible space $\\cal A$ we only implement partially the boundary conditions (\\ref{bc3})--(\\ref{bc4}) to ensure the compatibility with the\nminimization process. The full set of the boundary conditions will eventually be recovered in the solution process.\n\nIn order to tackle the problem arising from the negative terms involving the function $g$ in the action (\\ref{action}), we use the methods developed in \\cite{LY,SW,Yw} by imposing the\nconstraint\n\\begin{equation}\\int_0^\\infty(r^2g'G'+2a^2gG)\\,\\dd r=0, \\label{8}\\end{equation}\nto `freeze' the troublesome $g$-sector, where $G$ is an arbitrary test function satisfying $G(\\infty)=0$ and\n\\begin{equation}\nE_2(a,G)=\\int_0^\\infty(r^2[G']^2+2a^2G^2)\\,\\dd r<\\infty.\\label{9}\n\\end{equation}\nThat is, for given $a$, the function $g$ is taken to be a critical point of the energy functional $E_2(a,\\cdot)$ subject to the boundary condition $g(\\infty)=q$.\n\nWe now define our constrained class $\\mathcal{C}$ to be\n\\be \\label{C}\n{\\cal C}=\\{(a,f,g)\\in {\\cal A}|\\, (a,f,g) \\mbox{ satisfies (\\ref{8})}\\}.\n\\ee\n\nIn the rest of this section, we shall study the following\nconstrained minimization problem\n\\begin{equation}\n\\min\n\\left\\{L(a,f,g)|(a,f,g)\\in\n\\mathcal{C}\\right\\}.\\label{12}\\end{equation}\n\n\n\\begin{lemma}\\label{lemma0} Assume (\\ref{om}). For the problem (\\ref{12}), we may always restrict our attention to functions $f$ satisfying $0\\leq f\\leq \\pi\/2$.\n\\end{lemma}\n\n\\begin{proof} Since the action (\\ref{action}) is even in $f$, it is clear that $L(a,f,g)=L(a,|f|,g)$. Hence we may assume $f\\geq0$ in the minimization problem. 
Besides, since\n$f(\\infty)=\\pi-\\omega<\\frac\\pi2$, if there is some $r_0>0$ such that $f(r_0)>\\frac\\pi2$, then there is an interval $(r_1,r_2)$ with $0\\leq r_1\\frac\\pi2$ ($r\\in (r_1,r_2)$) and $f(r_1)=f(r_2)=\\frac\\pi2$. We now modify $f$ by reflecting $f$ over the interval $[r_1,r_2]$ with respect to the level $\\frac\\pi2$ to get a new function $\\tilde{f}$ satisfying\n$\\tilde{f}(r)=\\pi-f(r)$ ($r\\in [r_1,r_2]$) and $\\tilde{f}(r)=f(r)$ ($r\\not\\in[r_1,r_2]$). We have $L(a,f,g)\\geq L(a,\\tilde{f},g)$ again.\n\\end{proof}\n\n\\begin{lemma}{\\label{lemma 1}}\nThe constrained admissible class $\\cal C$ defined in (\\ref{C}) is non-empty. Furthermore, if $q>0$ and $(a,f,g)\\in{\\cal C}$, we have $00$ and that $g$ is the unique solution\nto the minimization problem\n\\be \\min \\left\\{E_2(a,G)\\, \\bigg|\\, G(\\infty)=q\n\\right\\}.\\label{01}\\ee\n \\end{lemma}\n \\begin{proof}\nConsider the problem (\\ref{01}).\n Then the Schwartz inequality gives us the asymptotic estimate\n\\be\n|G(r)-q|\\leq \\int_r^\\infty\\left|G'(\\rho)\\right|\\,\\dd\\rho\n\\leq r^{-\\frac12}\\left(\\int_r^\\infty\n\\rho^2(G'(\\rho))^2\\,\\dd\\rho\\right)^{1\\over 2}\n\\leq r^{-{1\\over 2}}E_2^{\\frac12}(a,G),\n\\label{10}\\ee \nwhich indicates that the limiting behavior $G(\\infty)=q$ can be preserved for any minimizing sequence of the problem (\\ref{01}). Hence (\\ref{01}) is solvable. In fact, it has\na unique solution, say $g$, for any given function $a$, since the functional $E_2(a,\\cdot)$ is strictly convex. Since $E_2(a,\\cdot)$ is even, we have $g\\geq0$.\nApplying the maximum principle in (\\ref{6}), we conclude with $00$. The uniqueness of the solution to (\\ref{01}), for given $a$, is obvious.\n\\end{proof}\n\n\\begin{lemma}{\\label{lemma 2}} For any $(a,f,g)\\in\n\\mathcal{C}$, $g(r)$ is nondecreasing for $r>0$ and\n$g(0)=0.$\\end{lemma}\n\\begin{proof} \nTo proceed, we first claim that \n\\be\n\\liminf\\limits_{r\\rightarrow 0}r^2|g'(r)|=0.\\label{02}\n\\ee\nIndeed, if (\\ref{02}) is false, then there are\n$\\epsilon_0,\\delta>0$, such that $r^2|g'(r)|\\geq \\epsilon_0$ for\n$00.\\label{3.12}\n\\eea\nHence $g'(r)\\geq 0$ and\n$g(r)$ is nondecreasing.\nIn particular, we conclude that there is number $g_0\\geq0$ such that \n\\be\\lim\\limits_{r\\rightarrow\n0}g(r)=g_0.\\label{04}\\ee\nWe will need to show $g_0=0$.\nOtherwise, if $g_0>0$, we can use $a(0)=1,r^2g'(r)\\rightarrow\n0$ ($r\\rightarrow 0$) (this latter result follows from (\\ref{3.12})), and L'Hopital's rule to get\n$$2g_0=2\\lim\\limits_{r\\rightarrow\n0}a^2(r)g(r)=\\lim\\limits_{r\\rightarrow\n0}(r^2g')'=\\lim\\limits_{r\\rightarrow\n0}\\frac{r^2g'(r)}{r}=\\lim\\limits_{r\\rightarrow 0}rg'(r).$$ Hence,\nthere is a $\\delta>0$, such that \\be g'(r)\\geq \\frac{g_0}r,\\quad\n00$ are constants depending on $\\omega$ and $q$ only.\n\\end{lemma}\n\\begin{proof}\n For any $(a,f,g)\\in\n\\mathcal{C}$, set $g_1={q}{(\\pi-\\omega)^{-1}}f$. Then $g_1$ satisfies\n$g_1(\\infty)=q$. 
As a consequence, we have\n\\be \nE_2(a,g_1)\\geq E_2(a,g),\n\\ee\nand thus,\n\\begin{eqnarray}\\label{3.17}\n&&L(a,f,g)=E_1(a,f)-E_2(a,g)\n\\geq E_1(a,f)-E_2(a,g_1)\\nonumber\\\\\n&&=\\int_0^\\infty\n\\dd r\\left\\{2\\left(2(a')^2+\\frac{(a^2-1)^2}{r^2}\\right)+\\left({1\\over\n2}-\\frac{q^2}{(\\pi-\\omega)^2}\\right)r^2(f')^2\\right.\\nonumber\\\\\n&&+2\\kappa\na^2\\sin^2f\\bigg(2(f')^2+\\frac{a^2\\sin^2f}{r^2}\\bigg)\\left.+\\left(\\frac{\\sin^2f}{f^2}-\\frac{2q^2}{(\\pi-\\omega)^2}\\right)a^2f^2\\right\\}.\\end{eqnarray}\n\nUsing the elementary inequality $\\frac{\\sin t}t\\geq\\frac2\\pi$ ($00,\n\\ee \nin view of (\\ref{qq}), we see that the lower estimate (\\ref{11}) is established.\n\\end{proof}\n\n\\begin{lemma}{\\label{lemma 4}}\nUnder the conditions stated in Theorem \\ref{theorem 01}, the constrained minimization problem (\\ref{12}) has a solution.\n\\end{lemma}\n\\begin{proof} We start by observing that the condition (\\ref{qq}) is implied by the condition (\\ref{qsin}). So Lemma \\ref{lemma 3} is valid.\nHence, applying Lemma \\ref{lemma 3}, we see that\n\\be\n\\eta=\\inf\\{L(a,f,g)\\,|\\,(a,f,g)\\in {\\cal C}\\}\n\\ee\nis well defined.\nLet $\\{(a_n,f_n,g_n)\\}$ denote any minimizing\nsequence of (\\ref{12}). That is, $(a_n,f_n,g_n)\\in {\\cal C}$ and $L(a_n,f_n,g_n)\\to\\eta$ as $n\\to\\infty$. Without loss of generality, we may assume\n$L(a_n,f_n,g_n)\\leq \\eta+1$ (say) for all $n$.\nIn view of (\\ref{11}) and the Schwartz inequality, we have\n\\be\\label{3.21}\n|a_n(r)-1|\\leq\\int_0^r\\left|a'_n(\\rho)\\right|\\,\\dd \\rho\\leq r^{\\frac12}\\left(\\int_0^r (a_n'(\\rho))^2\\,\\dd\\rho\\right)^{\\frac12}\\leq C r^{\\frac12}(\\eta+1)^{\\frac12},\n\\ee\n\\be\\label{3.22}\n\\left|f_n(r)-(\\pi-\\omega)\\right|\\leq\\int_r^\\infty\\left|f'_n(\\rho)\\right|\\,\\dd\\rho\n\\leq r^{-\\frac12}\\left(\\int_r^\\infty \\rho^2 (f'_n(\\rho))^2\\,\\dd\\rho\\right)^{\\frac12}\\leq Cr^{-\\frac12}(\\eta+1)^{\\frac12},\n\\ee\nwhere $C>0$ is a constant independent of $n$.\nIn particular,\n$a_n(r)\\rightarrow 1$ and $f_n(r)\\rightarrow (\\pi-\\omega)$\nuniformly as $r\\rightarrow 0$ and $r\\rightarrow \\infty$,\nrespectively.\n\nFor any $(a_n,f_n,g_n)$, the function\n$G_n=\\frac{{q}}{{(\\pi-\\omega)}}f_n$ satisfies $G_n(\\infty)=q$.\nThus, by virtue of the definition of $g_n$ and (\\ref{11}), we have\n \\be \\label{3.23}\nE_2(a_n,g_n)\\leq\nE_2(a_n,G_n)\n=\\frac{q^2}{(\\pi-\\omega)^2}\\int_0^\\infty(r^2(f'_n)^2+a_n^2f_n^2)\\,\\dd r\\leq CL(a_n,f_n,g_n),\n\\ee\nwhere $C>0$ is a constant, which shows that\n$E_2(a_n,f_n)$ is bounded as well.\n\n\nWith the above preparation, we are now ready to investigate the limit of the sequence $\\{(a_n,f_n,g_n)\\}$.\n\nConsider the Hilbert space $(X,(\\cdot,\\cdot))$, where the\n functions in $X$ are all continuously defined in $r\\geq0$ and\n vanish at $r=0$ and the inner product $(\\cdot,\\cdot)$ is defined\n by $$(h_1,h_2)=\\int_0^\\infty h'_1(r)h'_2(r)\\,\\dd r, \\ \\ h_1,h_2\\in X.$$\n\nSince $\\{a_n-1\\}$ is bounded in $(X,(\\cdot,\\cdot))$, we may assume\nwithout loss of generality that $\\{a_n\\}$ has a weak limit, say,\n$a$, in the same space,\n\\begin{equation}\n\\int_0^\\infty a'_n h'\\,\\dd r\\rightarrow\\int_0^\\infty\na' h'\\,\\dd r,\\ \\ \\ \\forall h\\in\nX,\\nonumber\\end{equation} as $n\\rightarrow \\infty$.\n\nSimilarly, for the Hilbert space $(Y,(\\cdot,\\cdot))$ where the\nfunctions in $Y$ are all continuously defined in $r>0$ and vanish\nat infinity and the inner product $(\\cdot,\\cdot)$ is defined by\n$$(h_1,h_2)=\\int_0^\\infty 
r^2h_1'h_2'\\,\\dd r, \\ \\ h_1,h_2\\in Y.$$\nSince $\\{f_n-(\\pi-\\omega)\\}$, $\\{g_n-q\\}$ are bounded in\n$(Y,(\\cdot,\\cdot))$, we may assume without loss of generality that there are functions $f,g$ with\n$f(\\infty)=\\pi-\\omega, g(\\infty)=q$, and $f-(\\pi-\\omega), g-q\\in\n(Y,(\\cdot,\\cdot))$, such that\n\\begin{equation}\n\\int_0^\\infty r^2H_n'h'\\,\\dd r\\rightarrow\n\\int_0^\\infty r^2H'h'\\,\\dd r,\\quad\n\\forall h\\in Y,\\label{15}\\end{equation}\nas $n\\to\\infty$, for\n$H_n=f_n-(\\pi-\\omega),\\ H=f-(\\pi-\\omega)$, and $H_n=g_n-q,\\ H=g-q$, respectively.\n\nNext, we need to show that the weak limit $(a,f,g)$ of the minimizing\nsequence $\\{(a_n,f_n,g_n)\\}$ obtained above actually lies in\n$\\mathcal{C}$. There are two things to be verified for $(a,f,g)$: the boundary conditions and the constraint (\\ref{8}).\nFrom the uniform estimates (\\ref{10}), (\\ref{3.21}), and (\\ref{3.22}), we easily deduce that $a(0)=1,f(\\infty)=\\pi-\\omega,g(\\infty)=q$.\nMoreover,\napplying Lemma \\ref{lemma 3}, we get $a\\in W^{1,2}(0,\\infty)$. Hence $a(\\infty)=0$. To verify $f(0)=0$, we use (\\ref{3.21}) to get a $\\delta>0$ such that \n\\be \\label{an}\n|a_n(r)|\\geq\\frac12,\\quad r\\in[0,\\delta].\n\\ee\nThen, using (\\ref{an}), we have\n\\bea \\label{fn}\n\\sin^2 f_n(r)&\\leq&2\\int_0^r|\\sin f_n(\\rho) f'_n(\\rho)|\\,\\dd\\rho\\nn\\\\\n&\\leq&4 r^{\\frac12}\\left(\\int_0^r a_n^2(\\rho)\\sin^2 f_n(\\rho) (f'_n(\\rho))^2\\,\\dd\\rho\\right)^{\\frac12}\\nn\\\\\n&\\leq& 2\\kappa^{-\\frac12}r^{\\frac12} L^{\\frac12}(a_n,f_n,g_n),\\quad r\\in[0,\\delta].\n\\eea\nSince $0\\leq f_n\\leq\\frac\\pi2$, we can invert (\\ref{fn}) to obtain the uniform estimate\n\\be \\label{fnn}\n0\\leq f_n(r)\\leq Cr^{\\frac14},\\quad r\\in[0,\\delta],\n\\ee\nwhere $C>0$ is independent of $n$. Letting $n\\to\\infty$ in (\\ref{fnn}), we see that $f(0)=0$ as anticipated.\n\nThus, it remains to verify (\\ref{8}). For this purpose, it suffices to establish the following\nresults,\n\\bea \n\\int_0^\\infty(a^2_ng_n-a^2g)G\\,\n\\dd r &\\rightarrow& 0,\\label{*}\\\\\n\\ \\int_0^\\infty\n(r^2g'_n-r^2g')G' \\,\\dd r&\\rightarrow& 0,\\label{**}\n\\eea\nfor any test function $G$ satisfying (\\ref{9}) and $G(\\infty)=0$, as $n\\to\\infty$.\n\nFrom the fact $G\\in Y$ and (\\ref{15}), we immediately see that (\\ref{**}) is valid.\n\nTo establish (\\ref{*}), we\nrewrite \n\\be \n \\int_0^\\infty(a^2_ng_n-a^2g)G\\,\n\\dd r=\\int_0^{\\delta_1}+\\int_{\\delta_1}^{\\delta_2}+\\int_{\\delta_2}^\\infty\n\\equiv I_1+I_2+I_3,\n\\ee\nfor some positive constants $0<\\delta_1<\\delta_2<\\infty$, and we begin with\n\\be \nI_1\n=\\int_0^{\\delta_1}(a_n^2-a^2)g_n G\\,\n\\dd r+\\int_0^{\\delta_1}a^2(g_n-g)G\\, \\dd r\n\\equiv I_{11}+I_{12}.\n\\ee\nIn view of (\\ref{3.21}) and (\\ref{3.23}), we see that there is a small $\\delta>0$ such that $g_n\\in L^2(0,\\delta)$ and there holds the uniform bound\n\\be \n\\|g_n\\|_{L^2(0,\\delta)}\\leq K,\n\\ee\nfor some constant $K>0$. Thus, we may assume $g_n\\to g$ weakly in $L^2(0,\\delta)$ as $n\\to\\infty$. In particular, $g\\in L^2(0,\\delta)$ and $\\|g\\|_{L^2(0,\\delta)}\\leq K$.\nBesides, since in (\\ref{9}), the function $a$ satisfies $a(0)=1$, we have $G\\in L^2(0,\\delta)$ when $\\delta>0$ is chosen small enough. 
Thus, using (\\ref{3.21}) and taking $\\delta_1\\leq\\delta$, we get\n\\begin{eqnarray}\n|I_{11}|&\\leq&\\int_0^{\\delta_1}|a^2_n-a^2|\n|g_n G|\\,\\dd r\n\\leq\\int_0^{\\delta_1}\\left(|a^2_n-1|+|a^2-1|\\right)|g_n G|\\,\\dd r\\nonumber\\\\\n&\\leq& CK \\delta^{\\frac12}\\|G\\|_{L^2(0,\\delta)},\n\\end{eqnarray}\nwhere $C>0$ is a constant independent of $n$. \nThus, for any $\\vep>0$, we can choose $\\delta_1>0$ sufficiently small to get $|I_{11}|<\\vep$.\nOn the other hand,\nsince $g_n\\to g$ weakly in $L^2(0,\\delta)$ and $G\\in L^2(0,\\delta)$, we have $I_{12}\\to0$ as $n\\to\\infty$.\n\nSince $\\{a_n\\}$ and $\\{g_n\\}$ are bounded sequences in $W^{1,2}(\\delta_1,\\delta_2)$, using the compact embedding\n$W^{1,2}(\\delta_1,\\delta_2)\\mapsto C[\\delta_1,\\delta_2]$, we see that $a_n\\to a$ and $ g_n\\to g$ uniformly over $[\\delta_1,\\delta_2]$ as $n\\to\\infty$.\nThus $I_2\\to0$ as $n\\to\\infty$.\n\nTo estimate $I_3$, we recall that $\\{E_2(a_n,g_n)\\}$ is bounded by (\\ref{3.23}), \n$g_n(r)\\rightarrow q$ uniformly as $n\\to\\infty$ by (\\ref{10}), and $G(r)=\\mbox{O}(r^{-\\frac12})$ as\n$r\\to \\infty$ by (\\ref{9}). In particular, since $q>0$, we may choose $r_0>0$ sufficiently large so that\n\\be \\label{3.32}\n|g(r)|\\geq\\frac q2,\\quad \\inf_{n}|g_n(r)|\\geq \\frac q2,\\quad r\\geq r_0.\n\\ee\nCombining the above facts, we arrive at\n\\be \\label{3.33}\n|I_3|\\leq\\int_r^\\infty \\left(|a^2_n g_n |+|a^2 g|\\right)|G|\\,\\dd \\rho\\leq Cr^{-\\frac12}\\int^\\infty_r\\frac2q (a^2_n g_n^2+a^2 g^2)\\,\\dd\\rho,\n\\ee\nwhere $r\\geq r_0$ (cf. (\\ref{3.32})) and $C>0$ is a constant. Using (\\ref{3.23}) in (\\ref{3.33}), we see that for any $\\vep>0$ we may choose $\\delta_2$ large enough\nto get $|I_3|<\\vep$.\n\nSummarizing the above discussion, we obtain\n\\be \n\\limsup_{n\\to\\infty}\\left|\\int_0^\\infty(a^2_ng_n-a^2g)G\\,\n\\dd r\\right|\\leq 2\\vep,\n\\ee\nwhich proves the desired conclusion (\\ref{*}). Thus, the claim $(a,f,g)\\in{\\cal C}$ follows.\n\nTo show that $(a,f,g)$ solves (\\ref{12}), we need to establish\n\\be\n\\eta=\\liminf_{n\\to\\infty} L(a_n,f_n,g_n)\\geq L(a,f,g).\n\\label{00}\\ee\nThis fact is not automatically valid and extra caution is to be exerted because the functional $L$ contains negative terms.\n\nWith\n\\be \n\\Om=\\pi-\\omega,\\quad\n0<\\Om<\\frac\\pi2,\n\\ee \nwe may rewrite the Lagrange density (\\ref{L1}) as\n\\be\n{\\mathcal{L}}(a,f,g)\n= {\\cal L}_0(a,f)-{\\cal E}_0(a,g),\\ee\nwhere\n\\bea \n{\\cal L}_0(a,f)&=&2\\bigg(2(a')^2+\\frac{(a^2-1)^2}{r^2}\\bigg)+{1\\over\n2}r^2(f')^2+2\\kappa a^2\\sin^2f\\bigg(2(f')^2+\\frac{a^2\\sin^2f}{r^2}\\bigg)\\nonumber\\\\\n&&+a^2(\\sin^2\\Om-2q^2)+a^2\\sin^2\\Om\\left(\\cos^2(f-\\Om)-1\\right)\\nonumber\\\\\n&&+2a^2\\sin\\Om\\cos\\Om\\sin(f-\\Om)\\cos(f-\\Om)\n+a^2\\cos^2\\Om\\sin^2(f-\\Om),\\\\\n{\\cal E}_0(a,g)&=&r^2(g')^2+2a^2(g-q)^2+4a^2(g-q)q,\n\\eea \n Thus, in order to establish (\\ref{00}), it suffices to show that\n\\be \\label{L0}\n\\liminf_{n\\to\\infty}\\int_0^\\infty {\\cal L}_0(a_n,f_n)\\,\\dd r\\geq \\int_0^\\infty{\\cal L}_0(a,f)\\,\\dd r,\n\\ee\n\\be \\label{E0}\n\\lim_{n\\to \\infty}\\int_0^\\infty {\\cal E}_0(a_n,g_n)\\,\\dd r=\\int_0^\\infty{\\cal E}_0(a,f)\\,\\dd r.\n\\ee\n\nWe first show (\\ref{E0}). 
To this end, we observe that, \nsince both $(a_n,g_n)$ and $(a,g)$ satisfy (\\ref{8}), i.e.,\n\\be\n\\int_0^\\infty(r^2g'_n G'+2a^2_ng_n G)\\,\\dd r =0,\\quad\n\\int_0^\\infty(r^2g' G'+2a^2g G)\\,\\dd r=0,\n\\ee\nwe can set $G=g-g_n$ in the above equations and\nsubtract them to get\n\\bea\\label{3I}\n \\int_0^\\infty\nr^2(g'_n-g')^2\\dd r&=&2\\int_0^\\infty(a^2_ng_n-a^2g)(g-g_n)\\dd r\\nn\\\\\n&=&\\int_0^{\\delta_1}+\\int_{\\delta_1}^{\\delta_2}+\\int_{\\delta_2}^\\infty\\equiv I_1+I_2+I_3,\n\\eea\nwhere $0<\\delta_1<\\delta_2<\\infty$.\n\nTo study $I_1$, we need to get some uniform estimate for the sequence $\\{g_n\\}$ near $r=0$. From (\\ref{3.21}), we see that for any $0<\\gamma<\\frac12$ (say) there is a $\\delta>0$ such that\n\\be \n2a^2_n(r)\\geq (2-\\gamma),\\quad r\\in[0,\\delta].\n\\ee\nConsider the comparison function\n\\be \\label{sig}\n\\sigma(r)=C r^{1-\\gamma},\\quad r\\in[0,\\delta], \\quad C>0.\n\\ee\nThen\n\\be \n(r^2 \\sigma')'=(1-\\gamma)(2-\\gamma)\\sigma<2a^2_n(r)\\sigma,\\quad r\\in[0,\\delta].\n\\ee\nConsequently, we have\n\\be \\label{3.47}\n(r^2(g_n-\\sigma)')'>2a^2_n(r)(g_n-\\sigma),\\quad r\\in[0,\\delta].\n\\ee \nChoose $C>0$ in (\\ref{sig}) large enough so that $C\\delta^{1-\\gamma}\\geq q$. Since $g_n0$ there is some $\\delta_1>0$ ($\\delta_1<\\delta$)\nsuch that $|I_1|<\\vep$.\n\nMoreover, in view of the uniform estimate (\\ref{10}) and (\\ref{3.32}), we have\n\\bea \n|I_3|&\\leq&2\\int_{\\delta_2}^\\infty (a_n^2g_n+a^2 g)(|g_n-q|+|g-q|)\\,\\dd r\\nn\\\\\n&\\leq&\\frac4q(|g_n(\\delta_2)-q|+|g(\\delta_2)-q|)\\int_0^\\infty (a_n^2 g_n^2+a^2 g^2)\\,\\dd r\\nn\\\\\n&\\leq&\\frac2q \\delta_2^{-\\frac12} \\left(E_2^{\\frac12}(a_n,g_n)+E_2^{\\frac12}(a,g)\\right)\\left(E_2(a_n,g_n)+E_2(a,g)\\right),\n\\eea\nwhich may be made small than $\\vep$ when $\\delta_2>0$ is large enough due to (\\ref{3.23}).\n\nFurthermore, since $a_n\\to a$ and $g_n\\to g$ in $C[\\delta_1,\\delta_2]$, we see that $I_2\\to0$ as $n\\to\\infty$.\n\nIn view of the above results regarding $I_1, I_2, I_3$ in (\\ref{3I}), we obtain the strong convergence\n\\begin{equation}\\lim\\limits_{n\\rightarrow\\infty}\\int_0^\\infty\nr^2(g'_n-g')^2\\,\\dd r=0.\\end{equation} In particular, we have\n\\be \\label{3.51}\n\\lim_{n\\to\\infty}\\int_0^\\infty r^2 (g'_n)^2\\,\\dd r=\\int_0^\\infty r^2 (g')^2\\,\\dd r.\n\\ee\n\nWe can also show that\n\\be \\label{3.52}\n\\lim_{n\\to\\infty}\\int_0^\\infty\\left(a_n^2 (g_n-q)^2+2a_n^2(g_n-q)q\\right)\\,\\dd r=\\int_0^\\infty\\left(a^2 (g-q)^2+2a^2(g-q)q\\right)\\,\\dd r.\n\\ee \n\nIn fact, we have seen that $\\{(a_n,f_n,g_n)\\}$ is bounded in $W^{1,2}_{\\mbox{loc}}(0,\\infty)$. Thus, the sequence is convergent in $C[\\alpha,\\beta]$ for any\npair of numbers, $0<\\alpha<\\beta<\\infty$. Since we have shown that $a_n(r)\\to 1$ and $g_n(r)\\to 0$ as $r\\to0$ uniformly, with respect to $n=1,2,\\cdots$, we conclude that\n$a_n\\to a$ and $g_n\\to g$ uniformly over any interval $[0,\\beta]$ ($0<\\beta<\\infty$). 
Thus, combining this result with the uniform estimate (\\ref{10}), we see that\n(\\ref{3.52}) is proved.\n\nIn view of (\\ref{3.51}) and (\\ref{3.52}), we see that (\\ref{E0}) follows.\n\nOn the other hand, applying the uniform estimate (\\ref{3.22}), we also have\n\\bea \n\\lim_{n\\to\\infty}\\int_0^\\infty a_n^2 (\\cos^2(f_n-\\Om)-1)\\,\\dd r &=& \\int_0^\\infty a^2 (\\cos^2(f-\\Om)-1)\\,\\dd r,\\label{lim1}\\\\\n\\lim_{n\\to\\infty}\\int_0^\\infty a_n^2\\sin(f_n-\\Om)\\cos(f_n-\\Om)\\,\\dd r&=&\\int_0^\\infty a^2\\sin(f-\\Om)\\cos(f-\\Om)\\,\\dd r.\\nn\\\\ \\label{lim2}\n\\eea\n\nFinally, using (\\ref{lim1}), (\\ref{lim2}), and the condition (\\ref{qsin}), i.e.,\n\\be \n\\sin^2\\Om-2q^2> 0,\n\\ee\nwe see that (\\ref{L0}) is established and the proof of the lemma is complete.\n\\end{proof}\n\n\\section{Fulfillment of the governing equations}\n\\setcounter{equation}{0}\n\nLet $(a,f,g)$ be the solution of (\\ref{12}) obtained in the previous section. We need to show that it satisfies the governing equations (\\ref{4})--(\\ref{6}) for dyons.\nSince we have solved a constrained minimization problem, we need to prove that the Lagrange multiplier problem does not arise\nas a result of the constraint, which would otherwise alter the original\nequations of motion. In fact, since the constraint (\\ref{8}) involves $a$ and $g$ only and (\\ref{8}) immediately gives rise to (\\ref{6}), we see that all we have to do is\nto verify the validity of (\\ref{4}) because (\\ref{5}) is the $f$-equation and (\\ref{8}) does not involve $f$ explicitly.\n\nTo proceed, we take $\\widetilde{a}\\in C^1_0$. For any $t\\in \\bfR$,\nthere is a unique corresponding function $g_t$ such that\n$(a+t\\widetilde{a},f,g_t)\\in \\mathcal{C}$ and that $g_t$ smoothly\ndepends on $t$. Set\n\\be\ng_t=g+\\widetilde{g}_t, \\quad \\widetilde{G}=\\left.\\left(\\frac{\\dd}{\\dd t}\\widetilde{g}_t\\right)\\right|_{t=0}.\n\\ee\nSince $(a+t\\widetilde{a},f,g_t)|_{t=0}=(a,f,g)$ is a minimizing\nsolution of (\\ref{12}), \nwe have\n\\bea \\label{4.2}\n0&=&\\left.\\frac{\\dd}{\\dd t}L(a+t\\widetilde{a},f,g_t)\\right|_{t=0}\\nn\\\\\n&=&8\\int_0^\\infty \\dd r\\bigg\\{a'\\widetilde{a}'+\\frac{(a^2-1)a\\widetilde{a}}{r^2}+\\frac14\\sin^2f \\, a\\widetilde{a}\\nn\\\\\n&&+\\kappa\\sin^2f\\bigg((f')^2a\\widetilde{a}+\\frac{\\sin^2f}{r^2}a^3\\widetilde{a}\\bigg)\n-\\frac12g^2a\\widetilde{a}\\bigg\\}\n-2\\int_0^\\infty\\dd r\\bigg\\{r^2g'\\widetilde{G}'+2a^2g\\widetilde{G}\\bigg\\}\\nn\\\\\n&\\equiv& 8I_1-2I_2.\n\\eea\n\nIt is clear that the vanishing of $I_1$ implies (\\ref{4}) so that it suffices to prove that $I_2$ vanishes. To this end and in view of (\\ref{8}), we only need to show that $\\widetilde{G}$\nsatisfies the same conditions required of $G$ in (\\ref{8}).\n\nIn (\\ref{8}), when we make the replacements $a\\mapsto a+t\\widetilde{a},g\\mapsto\ng_t,G\\mapsto \\widetilde{g}_t$, we have\n\\begin{equation}\\int_0^\\infty\\left(r^2g'_t\\widetilde{g}'_t+2(a+t\\widetilde{a})^2g_t\\widetilde{g}_t\\right)\\,\\dd r=0.\\end{equation}\nOr, with $g_t=g+\\widetilde{g}_t$, we have\n\\begin{equation}\\label{4.4}\n\\int_0^\\infty\n\\left(r^2(g'+\\widetilde{g}'_t)\\widetilde{g}'_t+2a^2(g+\\widetilde{g}_t)\\widetilde{g}_t+2t^2\\widetilde{a}^2g_t\\widetilde{g}_t+4ta\\widetilde{a}g_t\\widetilde{g}_t\\right)\\,\\dd r=0.\n\\end{equation} \nRecall that $\\int_0^\\infty\\left(\nr^2g'\\widetilde{g}'_t+2a^2g\\widetilde{g}_t\\right)\\,\\dd r=0$. 
Thus (\\ref{4.4}) and the Schwartz inequality give us\n\\begin{eqnarray}\n&&\\int_0^\\infty(r^2(\\widetilde{g}'_t)^2+2a^2\\widetilde{g}^2_t)\\,\\dd r\n=\\left|2t\\int_0^\\infty(t\\widetilde{a}^2+2a\\widetilde{a})g_t\\widetilde{g}_t\\,\\dd r\\right|\\nonumber\\\\\n&&\\leq|2t|\\left(|2t|\\int_0^\\infty \\widetilde{a}^2g_t^2\n\\dd r+\\frac{1}{|2t|}\\int_0^\\infty\na^2\\widetilde{g}^2_t\\,\\dd r\\right)+2t^2\\int_0^\\infty\n\\widetilde{a}^2|g_t|\\,|\\widetilde{g}_t|\\,\\dd r\\nonumber\\\\\n&=&4t^2\\int_0^\\infty \\widetilde{a}^2g^2_t \\,\\dd r+\\int_0^\\infty\na^2\\widetilde{g}^2_t\\,\\dd r+2t^2\\int_0^\\infty\n\\widetilde{a}^2|g_t|\\,|\\widetilde{g}_t|\\,\\dd r.\\label{4.5}\n\\end{eqnarray}\nApplying the bounds $0\\leq g, g_t\\leq q$ and the relation $\\widetilde{g}_t=g_t-g$ in (\\ref{4.5}), we have\n \\begin{eqnarray}\n\\int_0^\\infty\n(r^2(\\widetilde{g}'_t)^2+a^2\\widetilde{g}^2_t)\\,\\dd r&\\leq&4t^2\\int_0^\\infty\n\\widetilde{a}^2g^2_t\\,\\dd r+2t^2\\int_0^\\infty\n\\widetilde{a}^2|g_t|\\,|\\widetilde{g}_t|\\,\\dd r \\nonumber\\\\\n&\\leq& 8q^2 t^2\\int_0^\\infty\n\\widetilde{a}^2\\,\\dd r.\\label{4.6}\n\\end{eqnarray}\nAs a consequence,\nwe have \n \\be\n\\int_0^\\infty\\left(r^2\\left(\\frac{\\widetilde{g}'_t}{t}\\right)^2+a^2\\left(\\frac{\\widetilde{g}_t}{t}\\right)^2\\right)\\,\\dd r\n\\leq 8q^2\\int_0^\\infty \\widetilde{a}^2\\,\\dd r,\\quad t\\neq 0.\\label{17}\\ee\nUsing\n$\\widetilde{g}_t(\\infty)=0$, the Schwartz inequality and (\\ref{17}), we have for $t\\neq0$ the estimate\n\\be\\label{4.8}\n\\left|\\frac{\\widetilde{g}_t}{t}(r)\\right|\\leq \\int_r^\\infty\\left|\n\\frac{\\widetilde{g}_t'(\\rho)}{t}\\right|\\,\\dd\\rho\\nonumber\\\\\n\\leq r^{-\\frac{1}{2}}\\left(\\int_r^\\infty\n\\rho^2\\left(\\frac{\\widetilde{g}'_t}{t}\\right)^2\\,\\dd\\rho\\right)^{\\frac{1}{2}}\\leq 2\\sqrt{2} q\\|\\widetilde{a}\\|_{L^2(0,\\infty)}.\n\\ee\n\nLetting $t\\to0$ in (\\ref{17}) and (\\ref{4.8}), we obtain $E_2(a,\\widetilde{G})<\\infty$ and $\\widetilde{G}(r)=\\mbox{O}(r^{-\\frac12})$ (for $r$ large). In particular,\n$\\widetilde{G}(\\infty)=0$ and $\\widetilde{G}$ indeed satisfies all conditions required in (\\ref{8}) for $G$. Hence $I_2$ vanishes in (\\ref{4.2}).\nConsequently, the equation (\\ref{4}) has been verified.\n\n\\section{Properties of the solution obtained}\n\\setcounter{equation}{0}\n\nIn this section, we study the properties of the solution, say $(a,f,g)$, of the equations (\\ref{4})--(\\ref{6}) obtained as a solution of the constrained minimization\nproblem (\\ref{12}). We split the investigation over a few steps.\n\n\\begin{lemma}{\\label{lemma 6}}\nThe solution $(a,f,g)$ enjoys the properties $a(r)>0$, $00$.\n\\end{lemma}\n\\begin{proof}\nWe have $0\\leq g\\leq q$ and $0\\leq f\\leq \\frac\\pi2$ from Lemmas \\ref{lemma0} and \\ref{lemma 1}. Besides, it is clear that $a\\geq0$ since\nboth (\\ref{E1}) and (\\ref{E2}) are even in $a$.\n\nIf $a(r_0)=0$ for some $r_0>0$, then $r_0$ is a\nminimizing point and $a'(r_0)=0$. Using the uniqueness of the solution to the initial value problem consisting of\n(\\ref{4}) and $a(r_0)=a'(r_0)=0$, we get $a\\equiv0$ which contradicts $a(0)=1$. Thus, $a(r)>0$ for all $r>0$.\nThe same argument shows that $f(r)>0, g(r)>0$ for all $r>0$. Since (\\ref{3.12}) is valid, we see that $g(r)$ is strictly increasing. In particular, $g(r)0$.\n\nLemma \\ref{lemma0} already gives us $f\\leq\\frac\\pi2$. We now strengthen it to $f<\\pi-\\omega$. First it is easy to see that $f\\leq\\pi-\\omega$. 
Otherwise there is\na point $r_0>0$ such that $f(r_0)>\\pi-\\omega$. Thus, we can find two points $r_1, r_2$, with $0\\leq r_10$, then $r_0$ is a \nmaximum point of $f$ such that $f'(r_0)=0$ and $f''(r_0)\\leq 0$. Inserting these results\ninto (\\ref{5}), we arrive at a contradiction since $0<\\pi-\\omega<\\frac\\pi2$. \n\nTo see that $f$ is non-decreasing, we assume otherwise that there are $0f(r_2)$. Since $f(0)=0$, we see that $f$ has a local maximum point $r_0$ below\n$r_2$, which is known to be false. Thus $f$ is non-decreasing. To see that $f$ is strictly increasing, we assume otherwise that there are $0R_\\delta,\n\\ee\nwhere $R_\\delta>0$ is sufficiently large but independent of $q\\in[0,q_0]$ and $\\delta>0$ is arbitrarily small. Write\n\\be \n\\frac14\\sin^2\\omega (1-\\delta) -\\frac12q^2=\\frac14\\left(\\sin^2\\omega -2q^2\\right)(1-\\vep)^2,\n\\ee\nand $r_\\vep=R_\\delta$. Then (\\ref{5.3}) gives us $a''\\geq\\gamma^2(1-\\vep)^2 a$, $r>r_\\vep$.\nUsing the comparison function $\\sigma (r)=C_\\vep\\e^{-\\gamma(1-\\vep)r}$, we have \nthat $(a-\\sigma)''\\geq \\gamma^2(1-\\vep)^2 (a-\\sigma)$, $r>r_\\vep$. Thus, by virtue of the maximum principle, we have $(a-\\sigma)(r)<0$\nfor all $r>r_\\vep$ when the constant $C_\\vep$ is chosen large enough so that $a(r_\\vep)\\leq \\sigma(r_\\vep)$. This establishes the uniform exponential decay\nestimate for $a(r)$ as $r\\to\\infty$ with respect to $q\\in[0,q_0]$.\n\n\nTo get the estimate for $g$, we note from (\\ref{3.12}) that\n\\be \ng'(r)=\\frac1{r^2}\\int_0^r 2a^2(\\rho) g(\\rho)\\,\\dd\\rho,\\quad r>0,\n\\ee\nwhich leads to\n\\be \nq-g(r)=\\int_r^\\infty \\frac1{\\rho^2}\\int_0^\\rho 2a^2(\\rho') g(\\rho')\\,\\dd\\rho'\\,\\dd\\rho=\\mbox{O}(r^{-1}),\n\\ee\nfor $r>0$ large, since $a(r)$ vanishes exponentially fast at $r=\\infty$.\n\nTo study the asymptotic behavior of $f$, we integrate (\\ref{5}) over the interval $(r_0,r)$ ($00$ large enough so that $\\frac12(\\pi-\\omega)\\leq f(r)<\\pi-\\omega$ for $r\\geq r_0$. Thus\n\\be \\label{5.9}\n0<\\sin\\frac12(\\pi-\\omega)\\leq \\sin f(r),\\quad r\\geq r_0.\n\\ee\nUsing (\\ref{5.9}) and recalling the definition of ${\\cal E}_1$, we see that the integral\n\\be \n\\int^\\infty_{r_0}a^2\\sin f\\cos f(f')^2\\,\\dd r\n\\ee\nis convergent. Applying this result and the exponential decay estimate of $a(r)$ as $r\\to\\infty$, we obtain\n\\be \\label{5.11}\nf'(r)=\\frac{C_0+\\int_{r_0}^r F(\\rho)\\,\\dd\\rho}{8\\kappa \\,a^2(r)\\sin^2f(r)+r^2}=\\mbox{O}(r^{-2}),\\quad r>r_0.\n\\ee\nIntegrating (\\ref{5.11}) over $(r,\\infty)$ ($r>r_0$), we arrive at\n\\be \n(\\pi-\\omega)-f(r)=\\mbox{O}(r^{-1}).\n\\ee\n\nThe proof of the lemma is complete.\n\\end{proof}\n\n\\begin{lemma} For the solution $(a,f,g)$ with fixed $\\omega\\in (\\frac\\pi2,\\pi)$, the electric charge \n\\be \\label{Qq}\nQ_e(q)=2\\int_0^\\infty a^2(r) g(r)\\,\\dd r\n\\ee\nenjoys the property $Q_e(q)\\to0$ as $q\\to0$.\n\\end{lemma}\n\\begin{proof} For fixed $\\omega$, let $q_0$ satisfy (\\ref{q0}). Since $a$ vanishes exponentially fast at infinity uniformly with respect to $q\\in (0,q_0]$ and\n$00$, we see that we can apply the dominated convergence theorem to (\\ref{Qq}) to conclude that $Q_e(q)\\to0$ as $q\\to0$.\n\\end{proof}\n\n\\medskip \n\nIt should be noted that the special case $\\kappa=0$ is to be treated separately since the study above relies on the condition $\\kappa>0$ (see (\\ref{fn})).\nIn fact, when $\\kappa=0$, the Skyrme term in (\\ref{L}) is absent and the model becomes the gauged sigma model which is easier. 
However, technically, the boundary condition\n$f(0)=0$ has to be removed from the definition of the admissible space $\\cal A$ but recovered later for the obtained solution to the constrained minimization problem as is done for\nthe function $g$ in the proof of Lemma \\ref{lemma 2}. The details are omitted here.\n\n\\small{\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nTransition-metal-oxides (TMOs) heterostructures have been used to build solar-cells \\cite{Chang2016} and have been proposed as materials for achieving unprecedented light-conversion efficiency. As paradigmatic example of polar TMOs, LaVO$_3$ \\cite{Assmann2013} exhibits a correlation-driven direct Mott gap in the optimal range for solar light absorption and an internal electric potential gradient that was argued to help separating the photo-generated electron-hole pairs. Recently it was discussed that with proper engineering of the collector properties, the system can be tuned to the superradiant transition which gives rise to fast quantum coherent transport in thin films made of few TMO monolayers \\cite{Kropf2019}. This effect was shown to be robust against realistic estimates of electron-electron correlations \\cite{Kropf2019} and to environment induced dephasing up to ambient temperatures. The effect of the intrinsic electric potential gradient was taken into account only insofar as a shift of the Fermi-level of the collector as is common in the Landauer-B\u00fcttiker \\cite{Ryndyk2016,Landauer1957,Pastawski1991} and the master equation \\cite{Gurvitz1996} approaches to transport. However, an intrinsic potential slope also leads to a constant shift of the energy levels. While in the classical diffusive-like regime such a shift has only little influence, this is no longer the case in the quantum coherent regime where one obtains for instance, Bloch-oscillations in the infinite size limit \\cite{Zener1934}. Furthermore, one cannot directly transfer the intuition from the semi-classical regime that an electric field favors transport by accelerating the charge carriers towards the collector. Hence, we expect potentially large changes in the transport properties. Here we study the influence of the electric potential gradient inside the system on the transport properties of few-monolayers TMO heterostructures with a particular emphasis on the quantum-coherent regime. Our primary aim is to clarify whether, and for which values of the electric field the previously described superradiant quantum coherent transport (in absence of electric field, see \\cite{Kropf2019}) prevails that gives rise to coherence-driven enhancement of the photo-transport efficiency. In addition, we seek to understand if the electric field can be tuned to further optimize photo-transport.\n\nAs a realistic, though simplified, platform we investigate the average transfer time to the sink of an excitation in a one-dimensional chain with an intrinsic constant potential gradient and nearest-neighbour interactions as illustrated in Fig.~\\ref{fig:Fig1}. The influence of further environmental degrees of freedom, such as phonons or dynamical noise, are modelled by the inclusion of an effective pure dephasing channel on each site. We show that for electric potentials smaller than the mean-level spacing the transport is quantum coherent and optimized at a coupling to the sink correspondent to the superradiant transition (ST). 
Hence, superradiance emerges as an extremely valid transport optimization principle \\cite{Celardo2012}, which is robust against disorder \\cite{Zhang2017a,Zhang2017b}, decay, dephasing, electron-electron interactions \\cite{Kropf2019} and a moderate electric potential.\n\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[width = 0.5\\textwidth]{fig_model.png}\n\t\\caption{Transport model of a single excitation hopping through a chain with nearest-neighbour coupling $\\Omega$ with a decay rate $\\gamma_{out}$ to the sink. The chain is subject to a constant potential $N E_0$ and pure dephasing on each site with rate $\\gamma_\\phi$.}\n\t\\label{fig:Fig1}\n\\end{figure}\n\nIn our study we consider mainly the average transfer time for an initial Gaussian wavepacket. The case of an initial state localized on a single site is also briefly discussed to mark the differences with the former case. We show that in the quantum coherent regime the electric field mainly affects transport through the modification of the shape of the eigenstates in the site basis. Indeed, for a given initial state, the electric field can be tuned to maximize (or minimize) the overlap with the faster decaying eigenstates and favor (suppress) resonant excitation transport to the sink. While for small electric fields (compared with the average level spacing) one can achieve an enhancement of the quantum coherent transport by choosing an optimal field strength, large electric fields induce the localization of all eigenstates that strongly suppresses any transport; efficient transport can thus be restored only on increasing dephasing (incoherent transport). Simultaneously, a sufficiently strong electric field increases the gap between the largest decay width and the average decay width of all other (subradiant) states induced by the opening to the continuum of states \\cite{Chavez2019}. However, in contrast to the superradiance from the coupling to the sink (i.e. ST in the opening) \\cite{Kropf2019}, this does not lead to fast quantum coherent transport as we will demonstrate below.\n\nWe provide analytical formulae in the limits of low and large electric field and\/or dephasing. For large dephasing, transport is diffusive-like. For large electric field, using Leegwater theory \\cite{Leegwater1996}, we have found that the average transfer time is proportional to the square of the potential gradient. Thus, in this region transport efficiency decreases with an increasing electric field. In the end we provide a mean to describe the charge current using average transfer time and compute the conductance of the system.\n\nOverall, we show that while a moderate intrinsic electric field can indeed help transport even in the quantum coherent regime, if it is too strong, transport can only occur through classical diffusion which is a much slower and inefficient process.\n\n\n\n\\section{The Model and the relevant observables}\n\n \n\nThe starting point of our approach, already introduced in Ref.~\\cite{Kropf2019}, is to model the interlayer transport by the single-excitation manifold of a one dimensional N-site chain of two-level systems, which represent the photo-excitation across the gap of the TMO. The many-body interactions, which lead to the effective broadening of the electronic levels, are accounted for by a dephasing term that tends to suppress quantum transport. 
In order to obtain a photo-current through the multi-layered TMO along the perpendicular axis which we model as a one-dimensional chain, we need to break the symmetry so that the photo-generated charges migrate to one end of the chain. We achieve this by attaching a sink made of a metallic material with a lower Fermi-level to one end of the chain, and assuming hard boundary on the other end from a material with a larger Fermi-level. The addition of a constant electric field potential is in itself not symmetry-breaking in the quantum regime, and thus cannot be the sole drive of the current. However, as we shall see below, a ladder pointing towards the sink can help transport by giving a finite momentum to an initial Gaussian state which then experiences less interference. \n\nMore precisely, we model the transport of an initial photo-excitation $\\ket{\\psi_0}$ in the single-excitation manifold by a one-dimensional chain of $N$ sites $\\ket{j}$ with nearest-neighbour-coupling $\\Omega$ attached to a sink $\\ket{N}$ (tight-binding model) as illustrated in Fig.~\\ref{fig:Fig1}. In addition, we consider a constant (potential) energy shift $E_0 = V \/ N$ induced by an external or internal constant electric potential $V$ over the chain length $N$ that for simplicity we define always positive $E_0>0$ (negative potentials are denoted by $-E_0$). The Hamiltonian of the system reads\n\\begin{align}\n\t\\hat H = E_0 \\sum_{j=1}^N j \\ketbra{j}{j} - \\Omega\\sum_{j=1}^{N-1} \\left(\\ketbra{j}{j+1} + \\ketbra{j+1}{j} \\right) .\t\n\\end{align}\nIn all the manuscript we will use $\\Omega$ as the reference energy scale. The coupling to the sink is described within the Lindblad master equation formalism \\cite{Gurvitz1996} and is described by the rate $ \\gamma_{out}$ and the corresponding Lindblad operator, \n\\begin{align}\n\t \\hat L_{out} = \\ketbra{0}{N}.\n\\end{align}\nThe decay rate to the sink is characterized by the tunneling rate $\\Omega_{N}$ from site $N$ to the sink and by the density of states $\\sigma$ in the sink. In the wide-band limit, the density of states is constant and $\\gamma_{out} \\sim \\Omega_{N}^2 \\sigma$ \\cite{Gurvitz1996,Giusteri2015}. \n\nAny additional decoherence from the coupling to other environmental degrees of freedom such as phonons or other excitations is effectively modelled\\cite{Haken1973,Breuer2002} by dephasing operators $\\hat L_j^\\phi = \\ketbra{j}{j},\\,j=1,\\ldots,N$ with a constant, homogeneous rate $\\gamma_\\phi$. The time-evolution is obtained by solving the corresponding Lindblad master equation \n\\begin{align} \n\t\\frac{d}{dt}\\hat\\rho =& -\\frac{i}{\\hbar} \\left[\\hat H,\\hat\\rho\\right] + \\gamma_\\phi\\sum_{j=1}^N {\\hat L_j^\\phi} \\hat\\rho {\\hat L_j^\\phi}^\\dagger -\\frac{1}{2}\\left\\{ {\\hat L_j^\\phi}^\\dagger {\\hat L_j^\\phi}, \\hat\\rho\\right\\} \\nonumber \\\\\n\t& +\\gamma_{out} \\left(\\hat L_{out} \\hat\\rho \\hat L_{out} -\\frac{1}{2}\\left\\{\\hat L_{out}^\\dagger \\hat L_{out}, \\hat\\rho\\right\\}\\right).\\label{eq:Lindblad}\n\\end{align}\nFor a finite-size chain, any initial excitation asymptotically reaches the sink. Hence, as a figure of merit for the transport efficiency we consider the average transfer time to the sink \\cite{Kropf2019} defined as\n\\begin{align}\\label{eq:taudef}\n\t\\tau := \\frac{\\gamma_{out}}{\\hbar \\eta}\\int_0^\\infty t \\matelem{N}{\\hat \\rho (t)}{N} dt,\n\\end{align}\nwhere $\\eta =1$ is the asymptotic probability to be on the sink. 
Our goal is to find parameters to optimize the transport to the sink, which means to minimize the average transfer time $\\tau$. \n\nIn the absence of dephasing ($\\gamma_\\phi = 0$) we can also use the non-Hermitian Hamiltonian formalism \\cite{Celardo2009} which will be useful for later analysis. In this case, one considers only the Hilbert space of the sites inside of the chain. The decay to the sink is described by an imaginary term $W = -\\frac{i\\gamma_{out}}{2}\\ketbra{N}{N}$. The total (non-Hermitian) Hamiltonian then reads\n\\begin{align}\\label{eq:HnH}\n\t\\hat {\\cal H} = \\hat H -\\frac{i\\gamma_{out}}{2}\\ketbra{N}{N}.\n\\end{align}\nThen, denoting as $\\ket{E_\\alpha}$ and $\\bbra{E_\\alpha}$ the right and left eigenvectors of $\\hat {\\cal H} = \\sum_{\\alpha} E_\\alpha \\ketbbra{E_\\alpha}{E_\\alpha}$ , we can compute the time-evolution of an initial state $\\ket{\\psi_0}$ as\n\\begin{align*}\n\t\\ket{\\psi(t)} = \\sum_{\\alpha= 1}^N e^{-\\frac{it}{\\hbar}E_\\alpha} \\ket{E_\\alpha}\\bbra{E_\\alpha}\\psi_0\\rangle.\n\\end{align*}\nand from that the average transfer time to the sink\n\\begin{align}\n\t\\tau \n\t=& \\gamma_{out} \\int_0^\\infty t |\\langle N | \\psi(t)\\rangle |^2 dt \\\\\n =& \\sum_{\\alpha,\\beta = 1}^N \\langle N\\ket{E_\\alpha} \\left(\\langle N\\ket{E_\\beta}\\right)^*\n\\bbra{E_\\alpha}\\psi_0\\rangle \\left( \\bbra{E_\\beta}\\psi_0\\rangle\\right)^* I_{\\alpha,\\beta}. \\nonumber\n\\end{align}\nwhere the integral $I_{\\alpha,\\beta}$ can be solved by integration by part and yields\n\\begin{align}\n\tI_{\\alpha,\\beta} = \\int_0^\\infty t e^{-\\frac{it}{\\hbar}(E_\\alpha-E_\\beta^*)} dt = \\frac{-\\hbar}{(E_\\alpha-E_\\beta^*)^2}.\n\\label{eq:tauNH}\n\\end{align}\nThus, as expected, the average transfer time $\\tau$ is fully characterized by the eigenvalues and eigenvectors of $\\hat{\\cal H}$.\n\n\n\\section{The Non-hermitian approach : widths, eigenstates and the Superradiant Transition}\n\nBefore investigating the dynamics, let us analyze the behavior of eigenvalues and eigenfunction of the non-Hermitian Hamiltonian Eq.~(\\ref{eq:HnH}). Exact diagonalization produces complex eigenvalues $E_\\alpha = {\\cal E}_\\alpha -i\\Gamma_\\alpha\/2$ where $\\Gamma_\\alpha$ are the decay widths, responsible for the probability decay to the sink. \n\nIn Fig.~\\ref{fig:Fig22} the largest decay width and the average decay width of all other states are shown as a function of the coupling to the sink $\\gamma_{out}$ for two different values of the electric field (left and right panels). In both cases the picture is very close to that considered in \\cite{Celardo2009,Celardo2012} : at some critical value of the opening strength $\\gamma_{out} \\simeq \\gamma^{ST}$, one width increases with $\\gamma_{out}$ while the average value of the others decreases. This has been called Superradiant Transition (ST), meaning that the state with the largest width (i.e. fastest probability to decay) is \"Superradiant\", while the others, whose average width decreases for large opening strength become subradiant \\cite{Chavez2019}. As one can see from a comparison between the two panels, an increase of almost three orders of magnitude in the electric field increases the width separation between the superradiant and the average for small values of $\\gamma_{out}$, but does not change qualitatively the overall picture. 
From this point of view, we may say that the ST is robust to the application of an electric field as far as the sharp opening of a gap between the largest and the average width as a function of the coupling to the sink $\\gamma_{out}$ is concerned.\n\nOne may wonder if an analogous transition occurs at fixed opening (below and above the ST) on changing the electric field. The answer to this question is in Fig.~\\ref{fig:Fig222}. As one can see both below and above the ST, on increasing the value of the electric field $E_0$ a further width separation occurs with the sharp increase of the gap between the largest width $\\Gamma_{max}$ (superradiant) and the average width over all others states $\\langle\\Gamma\\rangle$ (subradiant). Quite naturally, since $\\gamma_{out}$ is now fixed, at variance with the previous case in which we fixed the value of $E_0$, the total probability to decay (sum over all widths) is now a constant. \n\nIn order to make the two cases more \"symmetric\", let us consider the normalized gap, defined as \n\\begin{align}\\label{eq:dg}\n\t\\delta \\gamma = \\frac{\\Gamma_{max}-\\langle\\Gamma\\rangle}{\\gamma_{out}}\n\\end{align}\nthat we plot in Fig.~\\ref{fig:Fig22a} as a function of both $\\gamma_{out}$ and $E_0$.\n\n\\begin{figure}\n\\centering\n\t\\includegraphics[width =0.5\\textwidth]{Fig22.png}\n\t\t\\caption{[Color online] Decay width ($\\Gamma_\\alpha$ of $\\hat{\\cal H}$, Eq.~\\eqref{eq:HnH})\nfor the Superradiant state (full symbols) and for the average of all other subradiant states (crosses)\n as a function of $\\gamma_{out}$ for a chain of $N=10$ sites. A) Below the critical field $|E_0| = 0.2\\Omega < \\tilde E_0 \\approx 0.56\\Omega$, Eq.~\\eqref{eq:CriticalE0}, B) Above the critical value, $|E_0| = 10\\Omega>\\tilde E_0\\approx 0.56 \\Omega $.\n}\n\t\t\\label{fig:Fig22}\n\\end{figure}\n\n\n\n\\begin{figure}\n\\centering\n\t\\includegraphics[width =0.5\\textwidth]{width_E0.png}\n\t\t\\caption{[Color online] Decay width ($\\Gamma_\\alpha$ of $\\hat{\\cal H}$, Eq.~\\eqref{eq:HnH}) \nfor the Superradiant state (full symbols) and for the average of all other subradiant states (crosses)\n as a function of $E_0$ for a chain of $N=10$ sites.\n A) The opening $\\gamma_{out}=0.1 \\gamma^{ST}$ is below the ST, B) The opening $\\gamma_{out}=10 \\gamma^{ST}$ is above the ST.\n}\n\t\\label{fig:Fig222}\n\\end{figure}\n\n\n\\begin{figure}\n\\centering\n\t\\includegraphics[width =0.5\\textwidth]{normalize_gap.png}\n\t\t\\caption{[Color online] Normalized gap $\\delta\\gamma$, see Eq.~(\\ref{eq:dg}) as a function of both \n$\\gamma_{out}$ and $E_0$ for a chain of $N=10$ sites which shows the Superradiant border (white area).\n}\n\t\\label{fig:Fig22a}\n\\end{figure}\n\n\nAs one can see, the opening of a gap appears now to be symmetric in both variables $\\gamma_{out}$ and $E_0$ and we may speak about a Superradiant border (in the figure, roughly speaking the separation between the blue rectangle and the red border). Even if, from the point of view of the gap opening in the widths, $\\gamma_{out}$ and $E_0$ seem to behave in the same they will affect transport properties in a very different way.\n\nIn order to better address this point, let us consider another important quantity that affects transport properties : the degree of localization of the eigenstates. 
One standard measure of their localization length is the so-called participation ratio $$PR_\\alpha = 1\/\\sum_k |\\langle k | E_\\alpha \\rangle|^4,$$ where $\\ket{k}$ is the site basis and $1 \\leq PR_\\alpha \\leq N$ measures approximatively the \"number of significantly occupied sites\" by the eigenstate $\\ket{E_\\alpha}$ in a chain of length $N$. In similar way to what we have done for the widths, let us consider the participation ratio of the Superradiant state and the average participation ratio of all other subradiant states as a function of both $E_0$ and $\\gamma_{out}$. Results are shown in Fig.~\\ref{fig:Fig22b}. As one can see the difference between the transition in $E_0$ and $\\gamma_{out}$ is very different -- while the former induces a localization of all eigenstates, the latter produces the localization of only the superradiant eigenstate (at the sink $N$).\n\n\\begin{figure}[t]\n\\centering\n\t\\includegraphics[width =0.5\\textwidth]{PR_A.png}\n \\includegraphics[width =0.5\\textwidth]{PR_B.png}\n\t\t\\caption{[Color online] Participation ratio for the Superradiant state (panel A) and average participation ratio of all subradiant states (panel B) , as a function of both $E_0$ and $\\gamma_{out}$ for a chain of $N=10$ sites. Vertical lines represent the average level spacing $4\\Omega\/N$, horizontal ones the ST transition. \n}\n\t\\label{fig:Fig22b}\n\\end{figure} \n\nThis of course will have dramatic consequences on the transport properties, specially regarding the dependence on the initial state as will be studied in the next section.\n\n\n\\section{Results on transport : the average transfer time}\n\nWe start with a numerical study of the average transfer time followed by analytical calculations for different limiting behaviour. The average transfer time is computed by explicit integration of $\\tau$ and numerical diagonalization of the Liouvillian associated with the Lindblad equation \\eqref{eq:Lindblad} (c.f. Appendix \\ref{app:Liouvillian}). In order to compute $\\tau$, we use the Python packages \\textit{Qutip} \\cite{Johansson2013} and \\textit{numpy} for the numerical diagonalization of the Liouvillian.\n\nWe consider a chain of $N=10$ sites, which was shown to be short enough to expect quantum coherent transport in TMOs heterostructures~\\cite{Kropf2019}. The initial photo-excitation is modelled by a generic Gaussian initial state that we choose to be localized on site $n_0=3$, which is far enough from the sink to obtain meaningful conclusion about transport, to have width $\\Delta_0 = 1$ (in lattice size units) and no initial momentum $k_0 =0$,\n\\begin{align}\\label{eq:gauss3}\n\t\\ket{\\psi_0} = \\sum_{n=1}^N e^{-ik_0 n}e^{-\\frac{(n-n_0)^2}{4 \\Delta_0^2}}\\ket{n}\n\\end{align}\nAfter the general discussion we shall comment on the impact on the average transfer time of longer chains and different initial states.\n\n\\subsection{Negligible dephasing $\\gamma_\\phi \\approx 0$.}\n\\begin{figure}\n\\centering\n\t\\includegraphics[width =0.5\\textwidth]{Fig21-double.png}\n\t\t\\caption{[Color online] Average transfer time in function of the electric field gradient $E_0$ and the coupling rate $\\gamma_{out}$ to the sink at negligible dephasing $\\gamma_{\\phi} = 0.001$. Optimal transport occurs in the black region. This region is characterized by an optimal coupling to the sink $\\gamma_{out} = gamma^{ST}$ (horizontal line), and by an optimal electric field. 
The regime where quantum coherent transport occurs is delimited by the two dashed vertical lines (see Eq.~\\eqref{eq:CriticalE0} and discussion below). Please note the axes jump at $|E_0|=0.01$. The initial state is a Gaussian state Eq.~\\ref{eq:gauss3} with $\\Delta_0=0, n_0 = 3, k_0 = 0$. \n}\n\t\t\\label{fig:Fig2}\n\\end{figure}\nThe average transfer time as a function of the electric field and the coupling to the sink at vanishing dephasing $\\gamma_\\phi \\approx 0$ is shown in Fig.~\\ref{fig:Fig2} for a Gaussian initial state (c.f. Eq.~\\eqref{eq:gauss3}). As one can see, there is both an optimal value of the opening $\\gamma_{out}$ and of the electric field $E_0$ at which the average transfer time is minimized. As we will show later, the optimum in $E_0$ is strongly dependent on the initial state. For instance, for any localized initial state the optimal field is $E_0= 0$, see Section \\ref{sec:initialstate}. \n\nFor an electric field gradient smaller than some critical value $\\tilde{E}_0$ (indicated by vertical dashed lines) the average transfer time is always minimized at the ST\\cite{Celardo2012,Kropf2019}, $\\gamma_{out} = \\gamma^\\textrm{ST} \\approx 2\\Omega$ . In this figure, as vertical dashed line, we indicate the ``critical'' electric potential $\\tilde E_0 = 4\\sqrt{2}\\Omega\/N$ derived below, see Eq.~\\eqref{eq:CriticalE0}, which turns out to be on the same order of the average level spacing.\nWith ``critical'' we mean that any electric field exceeding such value is detrimental for transport. \n\nLet us interpret physically this result. For small electric fields, $|E_0| < \\tilde E_0$, the transport is coherent, and is optimized at the ST and and at a particular value of the electric field. For a generic initial state which is a uniform superposition over all eigenstates the probability to reach the sink in a shorter time increases on increasing the coupling to the sink. Nevertheless, on increasing $\\gamma_{out}$ above the ST one state, the superradiant one, decays faster than any other and, at the same time starts to be localized at the sink. This means that all other subradiant states will have a negligible overlap with the sink (due to the eigenfunction normalization) thus producing an effective barrier to transport. In other words the transport will be optimized just in the middle of these two situations (a large enough shared width and a low enough localization to the sink) which is exactly the ST. On the other hand, a further increase of the electric field $E_0$, larger than the average level spacing will produce a localization of all eigenstates thus producing a complete blocking of transport. In this regime, changing the value of $\\gamma_{out}$ has practically no effect on the average transfer time for initial states that do not strongly overlap with the superradiant state as can be seen in Fig.~\\ref{fig:Fig2}. \n\nNote also that the average transfer time at a given $|E_0|<\\tilde E_0$ is minimized at the optimal coupling to the sink $\\gamma^{ST}$, but the average transport time $\\tau$ is lower for $E_0 <0$ than for $E_0>0$. This is because a potential gradient towards the sink favors a shorter transfer time as will be discussed in details below. \n\nThis is quite important since it shows that it makes no sense to look for an optimal electric field considering only the widths, but not the initial conditions. 
Indeed from Fig.~\\ref{fig:Fig2} it is clear that an \"optimal\" electric field also exists but this cannot be related to the ST transition in the widths, which occurs also for negative $E_0$ (since the widths do not depend on the sign of $E_0$) while for our initial Gaussian state transport is optimized only for negative $E_0$ and not for positive. This suggests that the choice of the initial state is crucial for understanding the optimal transport conditions.\n\n\\subsection{Adding dephasing : optimal electric field gradient}\n\\begin{figure}[t]\n\\centering\n\t\\includegraphics[width =0.5\\textwidth]{tau_gout_E0_deph01.png}\n \\includegraphics[width =0.5\\textwidth]{tau_gout_E0_deph1.png}\n \\includegraphics[width =0.5\\textwidth]{tau_gout_E0_deph10.png}\n \\caption{[Color online] Average transfer time as a function of the electric field gradient $E_0$ and the coupling rate $\\gamma_{out}$ to the sink for small dephasing $\\gamma_{\\phi} = 0.1$ (top panel), \nmoderate dephasing $\\gamma_\\phi=1$ (central panel) and large dephasing $\\gamma_\\phi = 10$ (bottom panel). Vertical and horizontal lines and the initial state are the same as in Fig.~\\ref{fig:Fig2}.\n}\n\t\t\\label{fig:deph}\n\\end{figure}\n\\begin{figure}\n\\centering\n\t\\includegraphics[width =0.5\\textwidth]{Fig3.png}\n\t\t\\caption{[Color online] Average transfer time in function of the electric potential $E_0$ and the dephasing rate $\\gamma_\\phi$ at the superradiant transition $\\gamma_{out} = \\gamma^{ST}$ for a chain of $N=10$ sites. At large electric fields, (i.e., $>\\tilde E_0$) dephasing is required for transport and is slow. For low electric fields (i.e., $<\\tilde E_0$), just below the critical dephasing $\\tilde \\gamma_\\phi$ signalling the transition to the diffusive-like transport, there is an optimal dephasing value. At low dephasing ($\\gamma_\\phi < \\tilde \\gamma_\\phi$), transport is optimal at a finite value of the electric field gradient smaller than the critical value $\\tilde E_0$, Eq.~\\eqref{eq:CriticalE0}. Note the axes jump at $|E_0|=0.01$. The initial state is a Gaussian state Eq.~\\ref{eq:gauss3} with $\\Delta_0=0, n_0 = 3, k_0 = 0$. \n}\n\t\t\\label{fig:Fig3}\n\\end{figure}\nSimilar plots, as that presented in Fig.~\\ref{fig:Fig2} but for increasing values of the dephasing $\\gamma_\\phi$\nare shown in Fig.~\\ref{fig:deph}. While the top panel indicates that the picture for zero dephasing persists even for larger dephasing strength $\\gamma_\\phi = 0.1$, the other panels show that a further increase of the dephasing blurs everything. In a sense transport is again \"slightly optimized\" but this happens for a wide region of the parameters $E_0$ and $\\gamma_{out}$. We can see that transport is, in any case, always optimized when the opening strength is set at the critical value $\\gamma^{ST}$.\n\nIn order to estimate more precisely the effect of the dephasing, let us thus consider the transport exactly at the ST (i.e. $\\gamma_{out} = \\gamma^{ST}$) and study the transport time as a function of both $E_0$ and $\\gamma_\\phi$. Results are shown in Fig.~\\ref{fig:Fig3}. The black region in this figure shows that transport is optimized for suitable values of the dephasing and electric field strengths. On the other hand, for large electric fields $|E_0|>\\tilde E_0$ transport is slow and increasing $E_0$ is always detrimental to transport. This is due to the localization of the eigenstates and so, the diffusive-like transport can only happen in the presence of dephasing. 
\n\nFor large dephasing, the situation is analogous and transport is diffusive-like (at least in an infinite chain). The combination of both large dephasing and large electric field gives rise to dephasing-assisted transport~\\cite{Plenio2008}, i.e., when dephasing is of the same order of the energy separation between levels, energy fluctuations can support transport by inducing resonances between levels. As shown in Fig.~\\ref{fig:Fig3}, for any fixed value of the electric field gradient\n$|E_0| > \\tilde E_0$ (indicated by vertical dashed lines) an optimal value of $\\gamma_\\phi$ exists minimizing the transport time by the dephasing-assisted transport mechanism. \n\nLet us now turn our attention to the most interesting regime in which the electric field gradient is below the critical value $|E_0|\\lesssim\\tilde E_0$ and the dephasing is small enough as compared to the average level spacing to allow for quantum coherent transport. Then the most important parameter to optimize transport is to tune the coupling to the sink at the ST as shown for $\\gamma_\\phi = 0, E_0\\lesssim \\tilde E_0$ in Fig.~\\ref{fig:Fig2}, and as was established in \\cite{Kropf2019} for $E_0 =0, \\gamma_\\phi \\lesssim \\tilde \\gamma_\\phi \\approx 4\\Omega\/N$. In addition we find two additional optimal parameters as shown in Fig.~\\ref{fig:Fig3}~: a wide interval of optimal dephasing rates which is independent on the electric field values, i.e., it is not related to dephasing-assisted transport, and an optimal electric field. While the former has a rather small impact on the average transfer time and is essentially due to a small broadening of the energy levels, the latter can shorten the average transfer time by up to an order of magnitude. Since we are in the coherent transport regime, transport is optimized when the overlap between the initial state and the radiant states is maximized. We recall that at the ST the average decay of every eigenstates is maximized. This guarantees an overall efficient transport for any initial state, even when not localized on the superradiant state. In addition, the electric field can fine-tune the best overlap. The main effect of a non-vanishing $E_0$ is to modify the eigenstates, while conserving the overall structure of the eigenvalues. While the eigenvalues are insensitive to the direction of the electric gradient this is not the case for the eigenstates and explains the asymmetry in the average transfer time shown in Fig.~\\ref{fig:Fig3}. Indeed, $\\tau$ is smaller for an electric gradient towards the sink, i.e., $E_0 < 0$.\n\n\n\\begin{figure}\n\\centering\n\t\\includegraphics[width =0.5\\textwidth]{Fig81-1.png}\n\t\t\\caption{Electric field gradient at which the average transfer time is minimized as a function of the chains length, numerical grid search (triangles) and analytic estimate Eq.~\\eqref{eq:Eopt} (dots), at the superradiant transition $\\gamma_{out} = \\gamma^{ST}$ and small dephasing $\\gamma_\\phi = 10^{-6}$. The errorbar indicates the electric field gradient with $10\\%$ variations in $\\tau$. The initial state is a Gaussian state Eq.~\\eqref{eq:gauss3} with $\\Delta_0 = 1$, $n_0 =3$ and $k_0=0$ for all $N$. 
As a visual guide the critical field $\\tilde E_0$ (blue dashed) above which localized eigenstates suppress coherent transport, and the field (orange crosses) at which the normalized gap between the super- and sub-radiant states ($\\delta \\gamma =0.5$, see Eq.~\\eqref{eq:dg}) opens are shown.\n}\n\t\t\\label{fig:Eopt}\n\\end{figure}\nHaving in mind the discussion above, we propose here below a phenomenological expression able to capture the electric field at which transport time is minimized (given that $\\gamma_{out} = \\gamma^{ST}$).\nThe idea is that transport will be optimal when the initial state is overlapped with many eigenstates whose widths are significantly different from zero. Therefore we will expect an optimal electric field gradient by minimizing (and not maximizing since $\\textrm{Im}[E_k]<0$ $\\forall k$)\n\n\\begin{align}\\label{eq:Eopt}\n\t E_0^{opt} = \\underset{E_0}{\\min}\\sum_{k=1}^N \\textrm{Im}[E_k] |\\braket{\\psi_0}{E_k}|,\n\\end{align} \n\nSince from Figs.\\ref{fig:Fig2},\\ref{fig:Fig3} the optimal transport time occuring at small dephasing and at the ST is weakly dependent on both values $\\gamma_\\phi$ and $\\gamma_{out}$, we check the validity\nof Eq.~\\ref{eq:Eopt}, by changing the length of the chain $N$ keeping the initial state Eq.~\\eqref{eq:gauss3}. Results are shown in Fig.~\\ref{fig:Eopt}. We plot the results from a numerical grid search for the $E_0$ that minimizes $\\tau$ with an error bar given by the $\\pm E_0$ values corresponding to 10\\% variation in $\\tau$ around the optimal value. This we do since the minimal times are usually characterized by large plateaus that can mask artificially the predictions, see for instance Fig.~\\ref{fig:deph}. As one can see, the analytical results fit well the numerical ones for small chain lengths, while some deviations occur at larger $N$ values, but fit within the $10\\%$ plateau. The case of long chain actually is quite off from our main considerations concerning quantum coherent transport, being even in the presence of small dephasing typically characterized by ``classical'' diffusion.\n\n\\begin{figure}\n\\centering\n\t\\includegraphics[width =0.5\\textwidth]{Fig82.png}\n\t\t\\caption{Minimal $\\tau$ as a function of the number of sites $N$ for an initially localized state \n $\\ket{3}$ (green triangles), the Gaussian state Eq.~\\eqref{eq:gauss3} (blue circles) and a flat state $1\/(N-1)\\sum_{j=1}^{N-1} \\ket{j}$ (red squares) at the optimal electric field for the given initial state and chain length ($E_{opt}$ from Fig.~\\ref{fig:Eopt} (green) and $E_0 = 0$ (red, blue)). Dephasing is negligeable $\\gamma_\\phi = 10^{-6}$ and the opening is set at the ST : $\\gamma_{out}=\\gamma^{ST}$. The fastest transport is achieved for the Gaussian wavepacket at the electric field shown in Fig.~\\ref{fig:Eopt}. The heuristic equation Eq.~\\eqref{eq:heuristicfinal} reproduces the numerics for the localized state. Inset : the same in normal scale on both axis and for small lengths.\n}\n\t\t\\label{fig:Eopt2}\n\\end{figure}\n\n\\begin{figure*}\n\t\\includegraphics[width = \\textwidth]{Fig7.png}\n\t\\caption{Time-evolution of the probabilities $|\\langle j | \\psi(t) \\rangle |^2 $ for different values of the electric field gradients (from top to bottom $E_0 = [-1, -0.2, -0.001, 0.001,0.2,1]$) and of the dephasing (From left to right $\\gamma_\\phi= [0.0001, 0.1, 1]$) for a chain of $N=10$ sites. Vertical cyan line indicates the average transfer time $\\tau$. 
At the optimal value of the electric field (i.e, $E_0 \\approx -0.2$, second line from the top), self-interference is minimized. Moderate dephasing (central column) also helps transport by destroying self-interferences without making the transport diffusive-like. The initial state is a Gaussian state Eq.~\\ref{eq:gauss3} with $\\Delta_0=0, n_0 = 3, k_0 = 0$. \n}\n\\label{fig:Fig7}\n\\end{figure*}\n\n\\subsection{Changing the initial state}\\label{sec:initialstate}\nUnder optimal conditions, the Gaussian state is transferred to the sink faster than a completely localized ($\\ket{\\psi_0} = \\ket{3}$) or delocalized state ($\\ket{\\psi_0} = 1\/\\sqrt{N-1}\\sum_{j=1}^{N-1} \\ket{j}$), as shown in Fig.~\\ref{fig:Eopt2}. It is also important to observe that for an initial state localized on a single site a non-vanishing electric field is always detrimental, see for details \\ref{app:Localized}. Indeed, in this case one cannot avoid self-interference, and thus the electric field cannot enhance transport. Localized states are completely delocalized in momentum space and thus always have a component which will be driven away from the sink. However, we remark that $\\tau$ can be efficiently minimized by the introduction of dephasing. It was shown that this is related to the diffusion of the state from low momentum component to larger momentum components\\cite{Li2015} (also termed \"momentum rejuvenation\").\n\n\\subsection{A dynamical point of view}\n\n\nAnother point-of-view on the optimization of the average transfer time by non-vanishing electric field and dephasing in the coherent transport regime is obtained in the temporal regime. In Fig.~\\ref{fig:Fig7} we show the time-evolution of the site populations for different values of $\\gamma_\\phi$ and $E_0$ at the superradiant transition. In the absence of dephasing and electric field, the initial Gaussian wave-packet will first spread ballistically. Then, since the wavepacket starts close to the edge of the chain which is described by a hard wall, part of the wavepacket will be reflected and will begin to interfere with itself. These interferences then prevent a smooth movement of the particle towards the sink and leads to a large transport time. Introducing a negative electric field then allows to give an initial momentum away from the wall, thus minimizing the self-interference and optimizing the transport. Consequently, positive electric field never lower $\\tau$ (c.f. Fig.~\\ref{fig:Fig3}), and particles with initial momentum towards the sink do not benefit as much from the tuning of $E_0$. On the other hand, introducing dephasing does not provide a directionality, but suppresses any coherence, and thus also unwanted interferences. In both cases there is an optimal value favoring transport. Introducing a too large dephasing leads to slow diffusive-like transport, and a too large electric field leads to a localization of all eigenstates, both of which eventually kill the quantum coherent transport. \n\n\n\n\\subsection{Comparison with disorder}\n\nIt is clear from both the spectral as well as the time analysis that the electric field does not act as a static (diagonal) disorder in the quantum coherent regime. On the other hand, once $E_0$ is large enough to localize the eigenstates, it is similar to Anderson-type disorder as shown in Fig.~\\ref{fig:Disorder} (see also the r.h.s. of Fig.~\\ref{fig:Fig3}). 
Indeed, if we set $E_0 = 0$ and introduce independent uniform disorder $\\epsilon_j$ sampled from $[-W\/2,W\/2]$ on each site $j$, the ensemble-averaged $\\overline \\tau$ never gets smaller with increasing disorder $W$. In other words, there is no optimal value of disorder, as opposed to the existence of an optimal value of the electric field. For a disorder $W \\gtrsim 10\\Omega\/\\sqrt{N}$ such that the localization length is expected to be shorter than the chain's length \\cite{Izrailev1998}, transport is strongly suppressed and requires non-vanishing dephasing. Therefore, in this case, the disorder has a similar impact on the average transfer time as electric field gradients $|E_0| \\gtrsim \\tilde E_0$, as can be seen by comparing Figs.~\\ref{fig:Disorder} and \\ref{fig:Fig3}. For values $W \\lesssim 10\\Omega\/\\sqrt{N}$, the effect of the disorder appears similar to $0 \\tilde E_0$ and\/or $\\gamma_\\phi > \\tilde \\gamma_\\phi$).\n\nFor an initial Gaussian state centered around site $n$, the heuristic formula Eq.~\\eqref{eq:heuristicfinal} works well in the limits of large dephasing and\/or large electric field, but fails to capture the quantum transport at the optimal electric field and optimal dephasing as shown in Fig.~\\ref{fig:TauEfieldSuper_Leegwaters}. This indicates that the optimization is a genuine quantum effect that cannot be captured by effective rate equations as obtained from Leegwaters and F\u00f6rster theory.\n\n\n\\begin{figure}[]\n\t\\includegraphics[width = 0.5\\textwidth]{Fig6.png}\n\t\\caption{Average transfer time in function of the electric field gradient $E_0$ at the ST $\\gamma_{out} = \\gamma^{ST}$ for a chain of $N=10$ sites. Our heuristic model $\\tau = \\tilde \\tau + \\tau_L^*$, Eq.~\\eqref{eq:heuristicfinal}, (solid lines) describes the average transfer time when self-interference effects inside of the chain are suppressed by large dephasing, $\\gamma_\\phi > \\tilde \\gamma_\\phi \\approx 0.4$, Eq.~\\eqref{eq:dephcrit} (blue crosses), or when the eigenstates are localized by the large electric field, i.e., $|E_0|>\\tilde E_0 \\approx 0.56$, Eq.\\eqref{eq:CriticalE0} (indicated with the vertical dashed line). The initial state is a Gaussian state Eq.~\\ref{eq:gauss3} with $\\Delta_0=0, n_0 = 3, k_0 = 0$. \n}\\label{fig:TauEfieldSuper_Leegwaters}\n\\end{figure}\nWe can use this to derive the approximate value $\\tilde E_0$ of the electric field gradient at which the transition to incoherent transport occurs by the following reasoning. In Fig.~\\ref{fig:TauEfieldSuper_Leegwaters}, we compare numerical results with Eq.~\\eqref{eq:heuristicfinal}. As one can see the analytics accurately describes the average transfer time for any electric field at large dephasing. On the other hand, at lower dephasing, the formula is not able to describe the data. We can estimate the value of dephasing $\\tilde\\gamma_\\phi$ at which this happens directly from Eq.~\\eqref{eq:heuristicfinal}. In order to do that, let us consider the time $\\tau_L^*$ in the absence of electric field and compare the leading order in $N$ contribution of the $\\gamma_\\phi$ and $\\gamma_{out}$ terms. 
At the ST, where $\\gamma_{out} \\approx 2\\Omega$, we find \n\\begin{align}\\label{eq:dephcrit}\n\t\\tilde \\gamma_\\phi \\approx \\frac{4\\Omega n}{N+n}.\n\\end{align}\nNext, we can find the critical electric field gradient by finding the value of $E_0$ at which the low- and large-field limit results coincide, i.e., $\\tau_L^* = \\tilde \\tau$, at the ST and at the critical value of the dephasing given in Eq.~(\\ref{eq:dephcrit}). We remark that $\\tau$ describes the average transfer time with high accuracy when starting from site $1$ above the dephasing threshold. When starting from another site, we do not have a closed formula for $\\tau$ at finite values of the dephasing. Hence, we estimate $\\tilde E_0$ for $n=1$ and find\n\\begin{align}\\label{eq:CriticalE0}\n\t\\tilde E_0 \\approx 4\\frac{\\sqrt{2}\\Omega}{N},\n\\end{align}\nwhich is in good agreement with our numerical results shown in Figs.~\\ref{fig:Fig2},~\\ref{fig:Fig3},~\\ref{fig:Fig7},~and~\\ref{fig:TauEfieldSuper_Leegwaters}. For the chain of $N=10$ sites considered here, Eq.~(\\ref{eq:CriticalE0}) gives $\\tilde E_0 \\approx 4\\sqrt{2}\\,\\Omega\/10 \\approx 0.57\\,\\Omega$, consistent with the value $\\tilde E_0 \\approx 0.56$ quoted in Fig.~\\ref{fig:TauEfieldSuper_Leegwaters}. Moreover, it appears intuitive to relate these quantities to the mean level spacing $\\sim 4\\Omega\/N$. Then it becomes clear that the transition from coherent transport to diffusive-like, dephasing-driven transport occurs roughly when the eigenstates localize under the effect of $E_0$ (cf. Fig.~\\ref{fig:Fig22b}).\n\n\n\\section{Charge current}\nFor a comparison with TMO materials of technological interest, we derive an appropriate quantity to describe the electric current photo-generated by light excitation across the insulating gap, e.g., the Mott gap in the case of LaVO$_3$. Since the electric current is defined as the charge per unit time escaping from the collectors, i.e., $\\pm e\/\\tau$, both the electrons and holes injected by photon absorption contribute to the total current $I$. In order to obtain a non-vanishing charge current, an asymmetry between the electron and hole transfer times is required. In the presence of intrinsic (or extrinsic) electric fields, it is reasonable to assume that the two species will be collected on opposite sides of the chain.\n\n\\begin{figure}\n\t\\includegraphics[width = 0.5 \\textwidth]{cartoon_conductivity.pdf}\n\t\\caption{[Color online] Illustration of the decomposition of the system for electron-hole pairs. A realistic system consists of two collectors and describes the transport of electron-hole pairs. Since under the action of the electric field electrons and holes migrate to opposite collectors, the effective charge current can be modelled as the sum of the contributions from two chains with one collector. \n}\\label{fig:CurrentCartoon}\n\\end{figure}\n\nWe can model the charge collection process by assuming that both ends of the chain are made of metallic contacts and thus act as collectors with rate $\\gamma_{out}$. For electrons this amounts to assuming a Fermi level larger than the system energy on the left (hard wall), and lower on the right (sink). For holes, one should take the inverse system, with a collector on the left and a hard wall on the right. Therefore, assuming complete symmetry between electrons and holes, the escape time for electrons should be exactly the same as for holes, but with a reversed electric field gradient. 
The system with two collectors and an electron-hole pair can thus be decomposed into two chains, one with electrons and a collector on the right, and one with holes and a collector on the left, but with an electric field in the same direction in both cases, as illustrated in Fig.~\\ref{fig:CurrentCartoon}. We can hence define the charge current as\n\\begin{align}\\label{eq:Current}\n\tI := e\\left(\\frac{1}{\\tau(-E_0)} - \\frac{1}{\\tau(E_0)}\\right).\n\\end{align}\nWe now compute the current $I$ for an initial excitation starting in the middle of the chain in order to account for the symmetry between holes and electrons, i.e., we consider an initial Gaussian state, see Eq.~\\eqref{eq:gauss3}, with $n_0= (N+1)\/2 = 5.5$ ($k_0 =0$ and $\\sigma=1$ as previously). As shown in Fig.~\\ref{fig:Current}, for $|E_0| \\lesssim \\tilde E_0$ the current is linear in the electric field gradient, in accordance with standard descriptions of current in solids \\cite{Landauer1957,Gurvitz1996}. We can thus define a conductance as the proportionality constant in $I = g V$, where $V =N |E_0| \/e$ is the total potential drop. As is shown in Fig.~\\ref{fig:Conductance}, the conductance $g\\sim 0.25 e^2\/\\hbar$ is constant in the coherent regime ($\\gamma_\\phi < \\tilde\\gamma_\\phi$ and $E_0 < \\tilde E_0$), while it vanishes in the diffusive regime (according to our numerics we have found $g \\sim \\gamma_\\phi^{-3}$ for large dephasing strength). \nIt is important to note that in the coherent regime it is of the same order of magnitude as the quantum of conductance in one-dimensional systems, $g_0 = 2e^2\/h \\approx 0.32\\, e^2\/\\hbar$, which characterizes the current in the Landauer formalism. \n\nWe remark that the actual photo-current in TMOs must depend on the photo-excitation generation rate, i.e., on the incoming light. Our model does not include any excitation rate, and the current $I$ in Eq.~\\eqref{eq:Current} must be understood as the maximal possible current in the single-excitation limit.\n\n\n\n\\begin{figure}\n\t\\includegraphics[width = 0.5 \\textwidth]{Current_v2.png}\n\t\\caption{[Color online] Maximal charge current Eq.~\\eqref{eq:Current} at the ST $\\gamma_{out} = \\gamma^{ST}$ as a function of the electric field gradient $E_0$ for a dephasing value above ($\\gamma_\\phi =1.0$) and below ($\\gamma_\\phi = 0.001$) the critical value $\\tilde\\gamma_\\phi \\sim 4\\Omega\/N$. For not too large electric fields, the current is linear in the gradient (dotted line as visual guide). The chain has length $N=10$. The initial state is a Gaussian state Eq.~\\eqref{eq:gauss3} centered in the middle of the chain, of width one, and with zero momentum ($\\Delta_0 = 1, n_0 = N\/2, k_0 = 0$).\n\t}\\label{fig:Current}\n\\end{figure}\n\n\\begin{figure}\n\t\\includegraphics[width = 0.5 \\textwidth]{Conductance_v2.png}\n\t\\caption{[Color online] Conductance at the superradiant transition $\\gamma_{out} = \\gamma^{ST}$ as a function of the dephasing $\\gamma_\\phi$ for an electric field gradient below the critical value, $E_0 < \\tilde E_0$. Below the critical dephasing $\\tilde\\gamma_\\phi \\approx 4\\Omega\/N$ (dashed vertical line), the current is independent of the dephasing, and above it the current is $\\propto \\gamma_\\phi^{-3}$ (dotted line). The chain has length $N=10$.}\\label{fig:Conductance}\n\\end{figure}\n\n\n\n\\section{Conclusions} \n\nIt was shown that the superradiant enhancement of transport in one-dimensional chains is robust to the presence of moderate electric fields. 
This complements previous results that have shown superradiance to be robust against moderate levels of disorder \\cite{Zhang2017b}, complex network geometries \\cite{Zhang2017b}, and electron-electron interactions \\cite{Kropf2019}. Hence, superradiance emerges as a very robust quantum coherent effect to maximize transport in small-scale systems. Moreover, we have shown that for dephasing and electric fields smaller than the mean level spacing, the transport is quantum coherent. The transport is optimal at a finite value of the electric potential because the particle acquires a momentum towards the sink, and thereby self-interference from reflection off the chain boundary is reduced. When the electric potential becomes too large, transport is suppressed due to the (partial) localization of the wavefunction.\n\nThe present results have a strong impact on the possibility of exploiting coherence-enhanced transport mechanisms to improve the photo-conversion efficiency of polar TMO-based devices. As a paradigmatic case we consider LaVO$_3$\/SrVO$_3$ heterostructures, which have recently been suggested as a potential candidate for quantum-coherent photo-conversion at ambient temperatures \\cite{Kropf2019}. The electric potential along the transport axis was estimated to be around 0.08 eV{\\AA}$^{-1}$ \\cite{Assmann2013} and the distance between two sites is 7.849 {\\AA} \\cite{DeRaychaudhury2007}. Given an estimate of $\\Omega \\approx 200$\\,meV \\cite{DeRaychaudhury2007}, the potential drop per site is $0.08$\\,eV{\\AA}$^{-1} \\times 7.849$\\,{\\AA} $\\approx 0.63$\\,eV, so we obtain a value of $E_0\\sim 3.13\\, \\Omega \\gg \\tilde E_0 \\sim 0.56\\, \\Omega$. This would mean that we are deep in the localized regime, and quantum coherent transport is not possible. However, the overall electric potential can be controlled by the application of an external field. Our work suggests the possibility of enhancing the current by applying a negative external bias to partially compensate the intrinsic potential slope and drive the system back to the quantum-coherent regime. \n\nMore generally, the intrinsic electric field of polar TMO heterostructures, such as LaAlO$_3$\/SrTiO$_3$, depends on the way the charges redistribute within the heterostructure itself. Typical values of electric fields can range between 0.01--0.1 eV{\\AA}$^{-1}$ \\cite{Song2018}, thus offering the opportunity of engineering the local field in order to enhance the transport of photo-generated carriers. We must, however, emphasize that the existence and the magnitude of the optimal electric field are strongly dependent on the initial state configuration, the value of the dephasing, and the coupling to the sink. All of these variables, including the magnitude of the intrinsic electric field, are very difficult to estimate precisely, and thus caution is required in the comparison with our theory.\n\n\n\n\n\n\\section*{Acknowledgements}\nF.B. and C.M.K. acknowledge support by the Iniziativa Specifica INFN-DynSysMath. C.G. and F.B. acknowledge support from Universit\\'a Cattolica del Sacro Cuore through D1, D.2.2 and D.3.1 grants. C.G. and F.B. acknowledge financial support from MIUR through the PRIN 2017 program (Prot. 20172H2SC4\\_005). C.G. acknowledges financial support from MIUR through the PRIN 2015 program (Prot. 2015C5SEJJ001). G.L.C. 
acknowledges the Conacyt project A1-S-22706.\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Abstract}\nWith an increasing number of replication studies performed in psychological science, the question of how to evaluate the outcome of a replication attempt deserves careful consideration. By their very design, Bayesian approaches allow uncertainty and prior information to be incorporated into the analysis of the replication attempt. The Replication Bayes factor, introduced by \\cite{Verhagen2014}, provides quantitative, relative evidence in favor of or against a successful replication. In previous work by \\citet{Verhagen2014} it was limited to the case of $t$-tests. In this paper, the Replication Bayes factor is extended to $F$-tests in multi-group, fixed-effect ANOVA designs. Simulations and examples are presented to facilitate understanding and to demonstrate the usefulness of this approach. Finally, the Replication Bayes factor is compared to other Bayesian and frequentist approaches and discussed in the context of replication attempts. R code to calculate Replication Bayes factors and to reproduce the examples in the paper is available at \\url{https:\/\/osf.io\/jv39h\/}.\n\n\\section{Introduction}\nThe ''replication crisis'' \\citep{Pashler2012,Asendorpf2013} has been a focus of discussion in psychological science in recent years. There is ongoing debate about how to improve methodological rigor and statistical analysis in order to increase the reliability and replicability of published findings. When considering a given replication study, a central question is the final evaluation, i.e., the question of whether a previous finding has successfully been replicated.\n\nAs with any scientific question in empirical disciplines, researchers use statistical tools in order to find an answer to this question. An intuitive way to evaluate replication results is to compare statistical significance in both an original study and a replication study (''vote counting''): If both studies are significant and show effects in the same direction, the replication is deemed ''successful''. If, on the other hand, the original study reported a statistically significant test which is non-significant in the replication study, the replication might be considered ''failed''. This interpretation, while intuitive and common practice in past research \\citep{Maxwell2015}, is flawed in general and wrong in the case of non-significant replications. A non-significant result cannot be interpreted as evidence for the absence of an effect -- especially since the difference between a significant result in an original study and a non-significant result in a replication study might not be significant in itself \\citep{Gelman2006}. Moreover, statistical significance and $p$-values do not contain information about uncertainty in estimates, and their misinterpretations have repeatedly been covered over the past decades \\citep{Wasserstein2016,Ionides2017,Nickerson2000,Bakan1966}. 
A related question is whether the strict dichotomy between ''successful'' and ''failed'' replications is sensible in practice.\n\nTo overcome problems with simply comparing $p$-values based on their significance, other methods for evaluating replication studies have been proposed, partly also alleviating the strict dichotomy: confidence intervals (CI) for effect size estimates have been advocated by some authors for both the reporting of statistical summaries \\citep{Cumming2001,Cumming2012} and for the evaluation of replications (\\citealp{Cumming2008}; \\citealp{Gilbert2016}, but see also \\citealp{Anderson2016b}). A researcher could, for example, check if the effect size estimate from a replication study falls within the 95\\% confidence interval of the effect size estimate from the original study -- or, \\emph{vice versa}, check if the effect size estimate from the original study falls within the 95\\% confidence interval of the replication study. While this approach takes uncertainty into account (in contrast to $p$-values), confidence intervals can be difficult to interpret and give rise to misinterpretations \\citep{Belia2005,Anderson2016b}.\n\nThe width of a confidence interval is directly related to sample size and thus to the power curve of a statistical test. Since power is notoriously low in psychological science \\citep{Button2013,Sedlmeier1989,Szucs2017}, confidence intervals tend to be generally wide for original studies from the published literature \\citep{morey2016most}. This can make a sensible comparison of confidence intervals difficult and lead to inconclusive or misleading results (depending on the particular decision rule). \\citet{Simonsohn2015} therefore proposed a method to take the power of the original study into account when evaluating a replication study. In his ''small telescopes'' approach, the confidence interval of a replication study is compared not to the effect size estimate from the original study, but to an effect the original study had 33\\% power to detect. The benchmark of 33\\% is arbitrary, but \\cite{Simonsohn2015} considers it a ''small effect''. The goal is to determine if a replication study, yielding a smaller effect size than in the original study, is still in line with a small effect that the original study had little power to detect. This allows for a more nuanced interpretation of a replication outcome than simply comparing $p$-values or confidence intervals.\n\nThe approaches outlined so far rely on a frequentist interpretation of probability and data analysis. Frequentist statistics is primarily concerned with repeated sampling and long-run rates of events. It does not allow a researcher to answer questions such as ''How much more should I believe in an effect?'' or ''Given the data, what are the most credible effect size estimates?''. Bayesian statistics allows one to address such questions in a coherent framework and treats uncertainty in estimates as an integral part of the analysis.\n\nRelated to the analysis of a replication study, various Bayesian approaches are possible. \\cite{Marsman2017}, for example, have evaluated the results from the ''Reproducibility Project: Psychology'' \\citep{OpenScienceCollaboration2015} using several methods, including Bayesian parameter estimation on the individual study level, Bayesian meta-analysis, and Bayes factors. 
For psychology in particular, Bayes factors \\citep{Kass1995,Jeffreys1961} have repeatedly been proposed as an alternative or addition to significance testing in common study scenarios (see \\citealp{Morey2016}, or \\citealp{Bayarri2015}, for introductions to Bayesian hypothesis testing).\n\n\\cite{Verhagen2014} compared several Bayes factors to be used in the context of replication studies and introduced a Bayes factor to specifically investigate the outcome of a replication study, which they termed \\emph{Replication Bayes factor}. It is constructed to be used only with reported test-statistics from an original study and a replication study. As some researchers disagree with the notion of ''belief'' in data-analysis, it is also a favorable property of the Replication Bayes factor that it makes only minimal assumptions about prior beliefs. The Replication Bayes factor allows one to test the hypothesis that the effect in a replication study is in line with an original study against the hypothesis that there is no effect. The result is a continuous quantification of relative evidence \\citep{Morey2016}, showing by how much the data are more in line with one hypothesis compared to the other.\n\nThe present article focuses on the Bayesian perspective and extends the Replication Bayes factor. \\citet{Verhagen2014} introduced the Replication Bayes factor solely for the case of one- and two-sample $t$-tests. In this paper, it will be extended to the case of $F$-tests in fixed-effect ANOVA designs, a common study design in cognitive, social, and other sub-fields of psychology. In order to do so, an outline of hypothesis testing using Bayes factors and how it can be applied to replication studies is presented. In the second section, it is shown how the Replication Bayes factor can be used in studies investigating differences between several groups, when fixed-effect ANOVAs are carried out. Simulations and example studies are then followed by a general discussion of the method.\n\nIn general, it has been recommended \\citep{Brandt2014,Anderson2016} that replicators should include different methods to evaluate the outcome of a replication study to account for the advantages, limitations, and varying statistical questions the different approaches present. This allows for a more nuanced interpretation of replication studies than a simple comparison of statistical significance. The Replication Bayes factor is hence presented as an addition to existing methods of analyzing the outcome of a replication study. The article aims to provide an accessible introduction to the setup of the Replication Bayes factors, but assumes a general understanding of Bayesian statistics -- in particular the concepts of priors, marginal likelihoods, and Bayesian belief updating. Readers unfamiliar with the Bayesian approach might find the excellent textbooks by \\citet{McElreath2016} and \\citet{Kruschke2015} helpful. A reading list of relevant introductory papers is given by \\citet{Etz2015b}. \\cite{Rouder2012a} and \\cite{Morey2016} have elaborated specifically on the use of Bayes factors.\n\nTo reproduce the examples in this article, all scripts are available in an OSF repository at \\url{https:\/\/osf.io\/jv39h\/} \\citep{Harms_2018_OSF}. 
To calculate the Replication Bayes factor for $t$-tests \\citep{Verhagen2014} and $F$-tests (the present article), an R-Package is available at \\url{https:\/\/github.com\/neurotroph\/ReplicationBF}.\n\n\\section{Bayesian Hypothesis Testing}\nWhile Bayes' theorem allows several interpretations, Bayes factors are best understood in the context of \\emph{Bayesian belief updating}. Bayes factors indicate how to rationally shift beliefs in two competing hypotheses (formalized as models $M_0$ and $M_1$) based on the observed data $Y$:\n\\begin{equation}\n\t\\label{eq:bayesian-updating}\n\t\\underbrace{\\frac{P(M_0 | Y)}{P(M_1 | Y)}}_{\\text{Posterior Odds}} =\n\t \\underbrace{\\frac{\\pi({M_0})}{\\pi(M_1)}}_{\\text{Prior Odds}} \\times\n\t \\underbrace{\\frac{P(Y | M_0)}{P(Y | M_1)}}_{\\text{Bayes factor}}\n\\end{equation}\n\nThe Bayes factor $BF_{01} = \\frac{P(Y | M_0)}{P(Y | M_1)}$ is the factor by which the prior odds are multiplied in order to get updated posterior odds \\citep{Jeffreys1961,Lindley1993,Kass1995}. It is a ratio of the \\emph{marginal likelihoods} or \\emph{model evidence} of the two models, which is given by\n\\begin{equation}\n\t\\label{eq:marginal-likelihood}\n\tP(Y | M_i) = \\int p(Y | \\theta, M_i) \\pi(\\theta | M_i) \\ \\mathrm{d}\\theta\n\\end{equation}\nwhere $\\theta$ is the vector of model parameters, $\\pi(\\theta | M_i)$ is the prior distribution for the parameters in Model $M_i$ and $p(Y | \\theta, M_i)$ is the likelihood function of $M_i$. The marginal likelihood is the normalizing constant in Bayes' rule in order for the posterior to be a proper probability distribution. Hence, the integrand consists of the same parts as the numerator in Bayes' rule, namely likelihood and prior.\n\nThe challenge of computing Bayes factors lies in the possible complexity of the integral. As will be explained in more detail later, in many practical cases -- such as the case of the Replication Bayes factor -- the marginal likelihood needs to be approximated, for example by using Monte Carlo methods.\n\nBayes factors provide relative evidence for one model when compared to another model and they can -- in contrast to $p$-values in the Neyman-Pearson framework of null-hypothesis significance testing -- be interpreted directly in a continuous, quantitative manner. To facilitate verbal interpretation, \\cite{Jeffreys1961} and \\cite{Kass1995} provided guidelines for the description of Bayes factors: A $BF_{10}$ is ''not worth more than a bare mention'' if it is between 1 (the data provide evidence for both models equally) and about 3; the evidence against model $M_0$ is ''substantial'' if $3 < BF_{10} \\leq 10$, ''strong'' if $10 < BF_{10} \\leq 100$ and ''decisive'' if $BF_{10} > 100$.\n\nIn recent years it has become increasingly popular to report Bayes factors as an addition or alternative to traditional null-hypothesis significance testing using $p$-values. Calculations and tools for common scenarios in psychological research exist \\citep[e.g.][]{Rouder2012a,Rouder2012b,Rouder2009,Dienes2016}. In the context of replications, \\cite{Etz2016} have used Bayes factors to show that the original studies in the ''Reproducibility Project: Psychology'' \\citep{OpenScienceCollaboration2015} have provided little relative evidence against the null hypothesis (when taking into account publication bias). Replication studies yielding substantial evidence in favor of or against the null hypothesis were mostly studies with larger sample sizes. The analysis is an example of how Bayes factors can be used to provide more insights into the results of empirical studies. To make the marginal likelihood in Equation~\\ref{eq:marginal-likelihood} more concrete, a small toy example is given below. 
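Consider a binomial likelihood with a uniform prior, for which the integral in Equation~\\ref{eq:marginal-likelihood} has a known closed form (the beta-binomial); this toy example is added here for exposition and is not part of the analyses discussed above. The following R snippet, with made-up data, approximates the marginal likelihood by numerical integration:

\\begin{verbatim}
k <- 7; n <- 10                               # toy data: 7 successes in 10 trials
lik   <- function(theta) dbinom(k, n, theta)  # likelihood p(Y | theta)
prior <- function(theta) dbeta(theta, 1, 1)   # uniform prior pi(theta)
p_Y   <- integrate(function(t) lik(t) * prior(t), 0, 1)$value
p_Y                                           # closed-form value: 0.0909...
\\end{verbatim}

The numerical value agrees with the analytical result $\\binom{10}{7} B(8, 4) = 1\/11$, illustrating that the marginal likelihood is simply the prior-weighted average of the likelihood.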
To summarize, the Bayesian framework allows a researcher to incorporate existing, previous knowledge into the statistical analysis. This is particularly useful in the context of replication studies: An original experiment has provided information about an effect, and if one wants to evaluate a replication study, the information from the original should be considered. The Replication Bayes factor formalizes this and tests the hypothesis of a successful replication against the hypothesis of a null effect.\n\n\\subsection{Bayes factors for Replications}\nThe \\emph{Replication Bayes factor} introduced by \\cite{Verhagen2014} is a way to use the Bayesian hypothesis testing framework in the context of replications and to quantify the outcome of a replication attempt given the information from the original study. The general idea is to use the posterior from the original study as a prior for the analysis of the replication attempt -- in line with the idea of updating beliefs as outlined above. Furthermore, it is desirable to only use data that is easily available from published studies. Thus, ideally, only reported test-statistics with degrees of freedom and sample size are used.\n\nFor a Bayesian hypothesis test, two models need to be set up and compared in the Bayes factor. For the Replication Bayes factor these two models or hypotheses about an effect size measure $\\delta$ are:\n\\begin{enumerate}\n\t\\item $H_0: \\delta = 0$, that is the hypothesis that the true effect size is zero (the ''skeptic's position'' in the terms of \\citealp{Verhagen2014}).\n\t\\item $H_r: \\delta \\approx \\delta_{\\mathrm{orig}}$, i.e. the hypothesis that the effect size estimate from the original study is a good estimate of the true effect size (the ''proponent's position'').\n\\end{enumerate}\n\nThe Replication Bayes factor is then the ratio of the marginal likelihoods of the two models considering the data from the replication study, denoted by $Y_{\\mathrm{rep}}$ \\citep[p. 1461]{Verhagen2014}:\n\\begin{equation}\n\t\\label{eq:bf-rep-general}\n\t\\text{B}_{\\text{r}0} = \\frac{p( Y_{\\mathrm{rep}} | H_r )}{p( Y_{\\mathrm{rep}} | H_0 )}\n\\end{equation}\n\nThat is, the Replication Bayes factor represents the relative evidence of the data in favor of a successful replication when compared to a null hypothesis of no effect. ''Successful'' here means that the replication yields a similar effect size as the original study. This implies an assumption about the validity of the effect size estimate of the original study, which is also discussed later in this paper.\n\nIn order to calculate $\\text{B}_{\\text{r}0}$ one needs to mathematically define the two models and find a useful representation of the data from the original study ($Y_{\\mathrm{orig}}$) and the replication study ($Y_{\\mathrm{rep}}$). While ideally the raw data would be used to fit a probabilistic model describing the relationship between independent and dependent variables, the Replication Bayes factor is designed to rely on more accessible data. In the case of (published) replication studies, often only summary and test statistics are reported. 
Thus, for the evaluation of replication studies it is desirable to be able to calculate the Replication Bayes factor based on the reported test statistics (with degrees of freedom and sample sizes) from the original study and the replication study alone.\n\n\\subsection{Replication Bayes factor for $t$-tests}\nIn their paper, \\citet{Verhagen2014} provided models and formulas for the computation of the Replication Bayes factor for $t$-tests. In the following section, the rationale and the setup of the model will be reiterated as a basis for the extension to $F$-tests.\n\nThe general model is based on the $t$-distribution used for the $t$-test and derives from the distribution of $t$-values under the alternative hypothesis. The marginal likelihood for the two models that are compared is given by\n\\begin{equation} \\label{eq:model-evidence-t}\n\tp(Y_{\\mathrm{rep}} | H_i) = \\int t_{df_{\\mathrm{rep}}, \\delta \\sqrt{N_{\\mathrm{rep}}}}(t_{\\mathrm{rep}}) \\pi(\\delta | H_i)\\ \\mathrm{d}\\delta\n\\end{equation}\nwith $t_{df, \\Delta}(x)$ being the non-central $t$-distribution with $df$ degrees of freedom and non-centrality parameter $\\Delta = \\delta \\sqrt{N_{\\mathrm{rep}}}$.\n\nThe two models under consideration differ in their prior distributions $\\pi(\\delta | H_i)$: For the skeptic ($H_0$), the prior distribution is 1 at $\\delta = 0$ and 0 everywhere else. Thus, the marginal likelihood simplifies to\n\\begin{equation}\n\t\\label{eq:marginal-likelihood-skeptic-t}\n\tp(Y_{\\mathrm{rep}} | H_0) = t_{df_{\\mathrm{rep}}}(t_{\\mathrm{rep}}),\n\\end{equation}\nwhich is the central $t$-distribution (i.e. non-central $t$-distribution with non-centrality parameter $\\Delta = 0$) evaluated at the point of the $t$-value observed in the replication study, $t_{\\mathrm{rep}}$.\n\nFor the proponent on the other hand, the prior distribution for the replication, $\\pi(\\delta | H_r)$, is the posterior distribution of the original study. If one starts out with a flat, uninformative prior before the original experiment, the resulting posterior distribution was described as $\\Lambda^\\prime$-distribution by \\cite{Lecoutre1999}. While in Bayesian statistics flat priors are often disregarded in favor of at least minimally regularizing priors \\citep{Gelman2013}, in the present case the prior for the original study plays a minor role: It is quickly overruled by the data even when considering only a single original study. For the Replication Bayes factor, only the posterior of the original study is relevant. The $\\Lambda^\\prime$-distribution can be approximated closely through a normal distribution as \\cite{Verhagen2014} showed in their appendix. If the prior for the replication, $\\pi(\\delta | H_r)$, is expressed through the posterior of the original study, so that $\\pi(\\delta | H_r) = p(\\delta | \\delta_{\\mathrm{orig}}, H_r)$, the marginal likelihood for the proponent's model is finally given as\n\\begin{equation}\n\t\\label{eq:marginal-likelihood-proponent-t}\n\tp(Y_{\\mathrm{rep}} | H_r) = \\int t_{df_{\\mathrm{rep}}, \\delta \\sqrt{N_{\\mathrm{rep}}}}(t_{\\mathrm{rep}}) p(\\delta | \\delta_{\\mathrm{orig}}, H_r) \\ \\mathrm{d}\\delta\n\\end{equation}\n\nFor the integral no closed form is yet known. Hence, it needs to be approximated, and different methods exist. \\cite{Verhagen2014} used the Monte Carlo estimate \\citep[chap.~7.2.1]{Gamerman2006}: random samples from the posterior-turned-prior distribution, $p(\\delta | \\delta_{\\mathrm{orig}}, H_r)$, are repeatedly drawn and the average of the likelihood term is calculated; a minimal sketch of this estimator is given below. 
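The following R sketch (added for illustration; it is not the original authors' implementation) applies the estimator to a one-sample $t$-test. The $\\Lambda^\\prime$ posterior of the original study is replaced by a rough normal approximation with mean $t_{\\mathrm{orig}}\/\\sqrt{N_{\\mathrm{orig}}}$ and standard deviation $1\/\\sqrt{N_{\\mathrm{orig}}}$, and all numerical inputs are made up:

\\begin{verbatim}
M      <- 1e5                        # number of Monte Carlo samples
t_orig <- 2.8; N_orig <- 30          # original study (illustrative values)
t_rep  <- 2.1; N_rep  <- 60          # replication study (illustrative values)

# Rough normal approximation to the original posterior of delta;
# Verhagen and Wagenmakers (2014) use the exact Lambda'-distribution:
delta <- rnorm(M, t_orig / sqrt(N_orig), 1 / sqrt(N_orig))

# Average the noncentral t density of t_rep over the posterior samples:
ml_r <- mean(dt(t_rep, df = N_rep - 1, ncp = delta * sqrt(N_rep)))
ml_0 <- dt(t_rep, df = N_rep - 1)    # skeptic's model: delta = 0
B_r0 <- ml_r / ml_0
\\end{verbatim}

Values of $\\text{B}_{\\text{r}0}$ above 1 favor the proponent's model $H_r$; the accuracy of the estimate increases with the number of samples $M$.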
Other approaches to approximating the marginal likelihood for the Bayes factor are available and will be discussed below.\n\nPutting Equations~\\ref{eq:marginal-likelihood-skeptic-t} and \\ref{eq:marginal-likelihood-proponent-t} into Equation~\\ref{eq:bf-rep-general} yields the formula for the Replication Bayes factor for $t$-tests:\n\\begin{equation}\n\t\\label{eq:bf-rep-t}\n\t\\text{B}_{\\text{r}0} = \\frac{\\int t_{df_{\\mathrm{rep}}, \\delta \\sqrt{N_{\\mathrm{rep}}}}(t_{\\mathrm{rep}}) p(\\delta | \\delta_{\\mathrm{orig}}, H_r) \\ \\mathrm{d}\\delta}{t_{df_{\\mathrm{rep}}}(t_{\\mathrm{rep}})}\n\\end{equation}\n\nIn their paper, \\cite{Verhagen2014} used simulation studies and examples to demonstrate the usefulness of the Replication Bayes factor, and to show how it compares to other Bayes factors that might also be used for replication studies.\n\n\\section{Replication Bayes factor for $F$-tests}\nMany studies in psychological research do not compare two independent groups through $t$-tests, but investigate differences across multiple groups and interactions between factors. Thus, there seems to be a need to extend this approach to other tests such as the $F$-test in ANOVAs. In this section the steps necessary to apply the Replication Bayes factor to other tests are explained, and the Bayes factor is derived for the $F$-test in fixed-effect ANOVA designs.\n\nIn contrast to the $t$-test, $F$-tests do not convey information about the direction or location of an effect; the hypothesis under investigation is an omnibus hypothesis if the effect degrees of freedom are $df_{\\mathrm{effect}} > 1$ \\citep{rosenthal2000contrasts,Steiger2004}. Therefore incorporating the $F$-statistic in the Replication Bayes factor does not allow researchers to take the direction of the effect into account. As will be shown in Example 3 and discussed later, researchers need to consider additional information when evaluating the outcome of replication studies with ANOVA designs. Nevertheless, researchers use and report $F$-tests both for omnibus and interaction hypotheses. Moreover, the $F$-statistic does contain information about the effect size, so it can be used in the Replication Bayes factor if a statement about the size of an effect is desired.\n\nIn order to maintain the general nature of the Replication Bayes factor from Equation~\\ref{eq:bf-rep-general}, an effect size measure needs to be chosen to parametrise the model. Cohen's $f^2$ \\citep{Cohen1988} has a simple relationship to the non-centrality parameter $\\lambda$ of the non-central $F$-distribution \\citep{Steiger2004}:\n\n\\begin{equation*}\n\t\\lambda = f^2 \\cdot N\n\\end{equation*}\nSince this relationship holds only in the case of fixed-effects models, the Replication Bayes factor for $F$-tests can also only be used validly in these cases -- a limitation that will also be discussed later.\n\nSetting up the model based on the $F$-distribution in a way similar to the $t$-test case (cf. 
Equation~\\ref{eq:model-evidence-t}) leads to the marginal likelihoods\n\\begin{equation}\n\t\\label{eq:model-evidence-F}\n\tp(Y_{\\mathrm{rep}} | H_i) = \\int F_{df_{\\mathrm{effect}},df_{\\mathrm{error}},\\lambda}(F_{\\mathrm{rep}}) \\pi(f^2 | H_i) \\ \\mathrm{d}f^2\n\\end{equation}\nwhere $F_{df_{\\mathrm{effect}},df_{\\mathrm{error}},\\lambda}(x)$ is the noncentral $F$-distribution with degrees of freedom $df_{\\mathrm{effect}}$ and $df_{\\mathrm{error}}$ and noncentrality parameter $\\lambda$.\n\nFor the skeptic's position, $H_0$, the marginal likelihood simplifies -- analogous to the $t$-test case -- to the central $F$-distribution evaluated at the observed $F$-value from the replication study, since the prior $\\pi(f^2 | H_0)$ is chosen so that it is 1 at $f^2 = 0$ and 0 everywhere else:\n\\begin{equation}\n\t\\label{eq:marginal-likelihood-skeptic-F}\n\tp(Y_{\\mathrm{rep}} | H_0) = F_{df_{\\mathrm{effect}},df_{\\mathrm{error}}}(F_{\\mathrm{rep}})\n\\end{equation}\n\nFor the $H_r$ model on the other hand, the prior $\\pi(f^2 | H_r)$ should again be the posterior of the original study. Starting out with a uniform prior before the original study results in the following posterior distribution for the original study:\n\\begin{equation}\n\t\\label{eq:posterior-original-F}\n\tp(f^2 | Y_{\\mathrm{orig}}) = \\frac{F_{df_{\\mathrm{effect, orig}},df_{\\mathrm{error, orig}},f^2\\cdot N_{\\mathrm{orig}}}(F_{\\mathrm{orig}})}{\\int F_{df_{\\mathrm{effect, orig}},df_{\\mathrm{error, orig}},f^2\\cdot N_{\\mathrm{orig}}}(F_{\\mathrm{orig}})\\ \\mathrm{d}f^2}\n\\end{equation}\n\\cite{Lecoutre1999} called this distribution the $\\Lambda^2$-distribution, similar to the $\\Lambda^\\prime$-distribution for $t$-tests. In contrast, this distribution cannot be easily approximated by a normal distribution (see the shape of the distribution in Figure~\\ref{fig:normal-approx}). Despite the use of an improper prior, the posterior distribution is valid as all parameters are actually observed in the original study, i.e., $df_{\\mathrm{effect, orig}}, df_{\\mathrm{error, orig}}, F_{\\mathrm{orig}} > 0$ and $N_{\\mathrm{orig}} \\gg 1$.\n\nThe Replication Bayes factor for $F$-tests is then given by\n\\begin{equation}\n\t\\label{eq:bf-rep-F}\n\t\\text{B}_{\\text{r}0} = \\frac{\\int \\! F_{df_{\\mathrm{effect}},df_{\\mathrm{error}},f^2\\cdot N}(F_{\\mathrm{rep}}) p(f^2 | Y_{\\mathrm{orig}}) \\ \\mathrm{d}f^2}{F_{df_{\\mathrm{effect}},df_{\\mathrm{error}}}(F_{\\mathrm{rep}})}\n\\end{equation}\n\nThe challenge in computing the Replication Bayes factor lies primarily in the calculation of the integral in the numerator. Marginal likelihoods in general are often difficult to compute or intractable, and analytical solutions are rarely available in applied settings. \\citet{Verhagen2014} chose the Monte Carlo estimator \\citep[p. 239]{Gamerman2006}. For the Monte Carlo estimate samples are randomly drawn from the prior distribution of the $H_r$ model, i.e. 
from the original study's posterior distribution $p(f^2 | Y_\\mathrm{orig})$; the integrand is calculated and the marginal likelihood is approximated by taking the average.\n\\begin{equation}\n\t\\mathrm{B}_{\\mathrm{r}0} \\approx \\frac{1}{M} \\sum_{i}^{M} \\frac{F_{df_\\mathrm{effect}, df_\\mathrm{error}, f^2_{(i)}\\cdot N_{\\mathrm{rep}}}(F_\\mathrm{rep})}{F_{df_\\mathrm{effect}, df_\\mathrm{error}}(F_\\mathrm{rep})}, \\qquad f^2_{(i)} \\sim p(f^2 | Y_\\mathrm{orig})\n\\end{equation}\n\nThe Monte Carlo estimate, however, is inefficient and unstable, especially when prior and likelihood disagree (i.e. when the original and the replication study yield very different effect size estimates). There are other estimators for the marginal likelihood available. \\citet[chap. 7]{Gamerman2006} provide an overview of twelve different estimators. \\citet{Bos2002} compared seven estimators in simulations and showed that the Monte Carlo estimate is highly unstable and often yields values very different from an analytically derived solution. \\emph{Numerical integration} can work well in low-dimensional settings, but does not scale to problems involving multiple parameters or wide ranges of possible parameter values.\n\nFor complex models, \\emph{bridge sampling} provides a stable and efficient way to estimate the marginal likelihood \\citep{Meng1996,Gronau2017}. Since the present case involves only a single parameter and a relatively simple likelihood function and posterior distribution, \\emph{importance sampling} is a faster, yet still efficient and stable, way to estimate the marginal likelihood.\n\nThe importance sampling estimate for the marginal likelihood \\citep[chap. 7]{Gamerman2006} is calculated by drawing $M$ random samples from an importance density $g(f^2)$ and averaging an adjusted likelihood term:\n\\begin{align}\n\tp(Y_{\\mathrm{rep}} | H_r) &\\approx \\frac{1}{M} \\sum_{i}^{M} \\frac{p(Y_{\\mathrm{rep}} | \\tilde{f^2_i}) \\pi(\\tilde{f^2_i})}{g(\\tilde{f^2_i})} \\nonumber \\\\\n\t\\label{eq:importance-sampling-estimate}\n\t&\\approx \\frac{1}{M} \\sum_{i}^{M} \\frac{p(Y_{\\mathrm{rep}} | \\tilde{f^2_i}) p(\\tilde{f^2_i} | Y_{\\mathrm{orig}})}{g(\\tilde{f^2_i})}, \\\\\n\t\\tilde{f^2_i} &\\sim g(f^2) \\nonumber\n\\end{align}\n\nSince the prior for $H_r$ is the posterior distribution given in Equation~\\ref{eq:posterior-original-F}, the marginal likelihood $p(Y_{\\mathrm{orig}} | H_r)$ for the original study also needs to be computed in order for $p(f^2 | Y_{\\mathrm{orig}})$ to be a proper probability density and the estimator to yield correct results.\n\nHow should the importance density $g(f^2)$ be chosen? A simple and straightforward way is to use a half-normal distribution with mean and standard deviation determined by samples from the posterior. It is easy to draw random samples from a half-normal distribution, and its density can be evaluated at any point. A minimal implementation of the resulting estimator is sketched below. 
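In the following R sketch (an illustration under simplifying assumptions, not the implementation of the accompanying R-package), the normalizing constant and the posterior moments are obtained by one-dimensional numerical integration instead of the Metropolis-Hastings sampling described below, and all numerical inputs are made up:

\\begin{verbatim}
F_orig <- 5.0; df1 <- 1; df2_orig <- 8;  N_orig <- 10   # original study
F_rep  <- 4.0;           df2_rep  <- 28; N_rep  <- 30   # replication
M <- 1e5                                                # importance samples

# Unnormalized posterior of f^2 after the original study (flat prior):
post_u <- function(f2) df(F_orig, df1, df2_orig, ncp = f2 * N_orig)
K      <- integrate(post_u, 0, Inf)$value   # marginal likelihood p(Y_orig|H_r)
post   <- function(f2) post_u(f2) / K       # proper posterior density

# Posterior mean and variance by numerical integration:
m <- integrate(function(f2) f2   * post(f2), 0, Inf)$value
v <- integrate(function(f2) f2^2 * post(f2), 0, Inf)$value - m^2

# Half-normal (folded normal) importance density g, slightly widened
# so that its tails dominate those of the posterior:
s      <- 1.5 * sqrt(v)
g_dens <- function(x) dnorm(x, m, s) + dnorm(-x, m, s)
f2_s   <- abs(rnorm(M, m, s))               # samples from g

# Importance-sampling estimate of p(Y_rep | H_r) and the Bayes factor:
ml_r <- mean(df(F_rep, df1, df2_rep, ncp = f2_s * N_rep) *
             post(f2_s) / g_dens(f2_s))
B_r0 <- ml_r / df(F_rep, df1, df2_rep)
\\end{verbatim}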
While the half-normal density is not as close to the true posterior (see Figure~\\ref{fig:normal-approx}) as the normal approximation in the $t$-test case, it is sufficient for the purpose of importance sampling since its tails are fatter than those of the non-normalized posterior density (see the requirements for importance densities listed in \\citealp{Gronau2017}).\n\n\\begin{figure}[tb!]\n\\centering\n\\includegraphics[width=1.0\\textwidth]{BFrep_Posterior_NormApprox-eps-converted-to.pdf}\n\\caption{\\label{fig:normal-approx}Plot of the posterior distribution for a study with an observed $f^2 = 0.625$ in a one-way ANOVA with two groups and 5 participants each, i.e. 10 participants in total, yielding $F(1, 8) = 5.0$. The grey line represents the half-normal importance density $g(f^2)$, which is constructed based on samples from the posterior distribution generated by Metropolis-Hastings, used for estimating the marginal likelihood.} \n\\end{figure}\n\nTo sample from the non-normalized posterior distribution, Markov chain Monte Carlo (MCMC) techniques can be used, e.g., the Metropolis-Hastings algorithm \\citep{Chib1995}. In Bayesian statistics approximating the posterior distribution is one of the core challenges, and thus several implementations of different algorithms are available. Metropolis-Hastings yields samples from a target distribution by taking a random walk through the parameter space. An accessible introduction is given in \\citet[chap. 8]{McElreath2016} and a more mathematical description can be found in \\citet[chap. 11.2]{Gelman2013}. Software packages such as JAGS or Stan \\citep{Gelman2015} are commonly used to draw samples from the posterior distribution of Bayesian models.\n\nThe mean and standard deviation of the posterior samples are then used to construct a half-normal distribution as importance density $g(f^2)$. Subsequently, random samples from the importance density are used to estimate the marginal likelihood according to Equation~\\ref{eq:importance-sampling-estimate}. Because these samples concentrate in regions where the posterior distribution has high probability mass, the resulting estimate is better than the Monte Carlo estimate.\n\nFor the case of the Replication Bayes factor it will be shown below in simulations that the differences between the estimates are small except in the most extreme cases. Since it is nonetheless desirable to have accurate and robust estimates, in the remainder of the article the Importance Sampling estimate is used.\n\nDividing the resulting estimate for the marginal likelihood of $H_r$ by the $H_0$ model evidence (Equation~\\ref{eq:bf-rep-F}) allows the calculation of the Replication Bayes factor:\n\\begin{equation}\n\t\\text{B}_{\\text{r}0} = \\frac{\\displaystyle\\int F_{df_\\text{effect}, df_\\text{error}, f^2\\times N}(F_\\text{rep}) p(f^2 | Y_\\text{orig}) \\,\\mathrm{d}f^2}{F_{df_\\text{effect}, df_\\text{error}}(F_\\text{rep})}\n\\end{equation}\n\nThe resulting Bayes factor can then be interpreted based on its quantitative value: $\\text{B}_{\\text{r}0} > 1$ is evidence in favor of the proponent's hypothesis, i.e. evidence in favor of a true effect of a similar size, while a $\\text{B}_{\\text{r}0} < 1$ is evidence against a true effect of similar size. The more the Bayes factor deviates from 1, the stronger the evidence. It might be helpful to use the commonly used boundaries of 3 and $\\frac{1}{3}$ for sufficient evidence: $\\frac{1}{3} < \\text{B}_{\\text{r}0} < 3$ is weak evidence for either hypothesis \\citep[p. 
432]{Jeffreys1961} and should lead the researcher to collect further data to strengthen the evidence \\citep{Schonbrodt2015,Edwards1963}.\n\nThe provided R-package contains functions to calculate Replication Bayes factors for both $t$- and $F$-tests using the formulas provided here, relying on importance sampling to estimate marginal likelihoods.\n\n\\section{Simulation Studies}\nIn order to show properties of the Replication Bayes factor, three simulation studies are presented. First, the numerical results in different scenarios are shown to allow comparisons across different study designs and effect sizes. Second, the Monte Carlo estimate and the Importance Sampling estimate for the marginal likelihood are compared. While the differences are small except for the most extreme cases, it is argued that the more robust method is to be preferred. Last, the relationship between $t$- and $F$-values in studies of two independent groups is used to compare the Replication Bayes factor for $t$-tests with the proposed adaptation to the $F$-test.\n\n\\subsection{Simulation 1: Behavior in Different Scenarios}\nTo better understand the Replication Bayes factor, it is useful to explore different scenarios in which a researcher might calculate it. For Figure~\\ref{fig:bfrep-scenarios}, different combinations of original and replication studies were considered. In particular, one can see that the Replication Bayes factor increases towards support for the proponent's position when the replication has a large sample size and yields a large effect size estimate.\n\n\\begin{figure}[tb!]\n\\centering\n\\includegraphics[width=1.0\\columnwidth]{BFrep_Ftest_Scenarios-eps-converted-to.pdf}\n\\caption{\\label{fig:bfrep-scenarios}Value of the Replication Bayes factor for $F$-tests in various scenarios. Columns show sample sizes per group in original and replication study, rows are $f^2$ effect sizes in the original study. Horizontal axes in each plot show $f^2$ effect size in replication study and vertical axes are $\\log_{10}$-scaled showing $B_{\\text{r}0}$. Shades of grey denote number of groups. Each point is one calculated Replication Bayes factor in a given scenario. Vertical black lines indicate the effect size in the original study, i.e. points on this line are replication studies where the effect size equals that of the original study. Horizontal dashed line is $B_{\\text{r}0} = 1$.}\n\\end{figure}\n\nFor situations in which the effect size estimate is very small in both the original and the replication study, the Replication Bayes factor is close to 1. In these cases, the proponent's and the skeptic's position are very similar and even 100 participants per group are not enough to properly distinguish between the two models (see leftmost points in the first-row plots of Figure~\\ref{fig:bfrep-scenarios}). If, in contrast, the original study reports a relatively large effect and the replication study yields a small effect, the Replication Bayes factor quantifies this correctly as evidence in favor of the skeptic's position. Moreover, the considered setup shows that -- holding group sizes equal -- more groups allow stronger conclusions since total sample size is higher.\n\n\\subsection{Simulation 2: Monte Carlo vs Importance Sampling Estimate}\nThe aforementioned difference between the Monte Carlo estimate for the marginal likelihood and an importance sampling estimate is especially relevant if prior and likelihood of a model disagree substantially. 
For the Replication Bayes factor this is the case when original study and replication study yield very different effect size estimates.\n\nAs referenced above, \\citet{Bos2002} has shown in simulations that the Monte Carlo estimate is generally unstable and leads to biased estimates of the marginal likelihood. He compared the results both against the analytical solution and against other estimators, which performed substantially better.\n\nIn order to evaluate how this affects the Replication Bayes factor, pairs of studies are investigated. For illustrative purposes, original studies with $n_{\\mathrm{orig}} = 15$ per group with an observed effect size of $d_{\\mathrm{orig}} \\in \\{1; 2; 5\\}$ and subsequent replications with sample sizes $n_{\\mathrm{rep}} \\in \\{50; 100\\}$ per group and observed effect sizes $d_{\\mathrm{rep}} \\in \\{0; 0.3; 0.5\\}$ are entered in the Replication Bayes factor for $t$-tests. The resulting Bayes factor is estimated using the Monte Carlo estimate and the Importance Sampling estimate.\n\nAs can be seen from the results in Figure~\\ref{fig:bfrep-sim-estimators}, the Monte Carlo estimate is generally close to identical to the importance sampling estimate. Differences emerge only in extreme cases, for example, when an original study yields $d = 5$ in a small sample of $n = 15$ per group and a replication shows only a tiny or even a null effect in a much larger sample. The effect size of $d = 5$ is, however, implausibly large for most empirical research fields. Even when considering the larger sampling error in small samples, effect sizes around $d = 1$ are already considered a ''large effect'' in social sciences \\citep{Cohen1988} -- effects of $d > 3$ do not realistically appear in the literature in psychology.\n\nWhile the differences can become large in orders of magnitude, in these cases the Replication Bayes factor is also large in general because of the disagreement between original and replication study. Thus, the conclusions are not changed by the difference between estimates. Furthermore, in the practical examples reported below the differences between the two estimates are numerically very small and would not yield different conclusions.\n\nNevertheless, it is desirable to provide accurate estimates of a statistical indicator to make an informed judgement, e.g. about the relative evidence a replication study can provide. The Importance Sampling estimate has been shown to be more stable and more accurate and is thus preferable to the Monte Carlo estimate \\citep{Bos2002,Gronau2017,Gamerman2006}.\n\n\\begin{figure}[tbhp]\n\t\\centering\n\t\\includegraphics[width=1.0\\columnwidth]{BFrep_Simulation_Importancesampling-eps-converted-to.pdf}\n\t\\caption{\\label{fig:bfrep-sim-estimators}Comparison of Replication Bayes factors ($t$-test) with marginal likelihoods estimated by either Monte Carlo estimation (x-axis) or Importance Sampling estimation (y-axis). Points are individual Bayes factors. Grey dashed line indicates equality between both estimation methods. Only for extreme cases (i.e. $d_{\\mathrm{rep}} \\leq 0.5$ and $d_{\\mathrm{orig}} > 2$) do the estimations yield substantially different results, but conclusions would remain the same. 
Original studies always had $n_{\\mathrm{orig}} = 15$.} \n\\end{figure}\n\n\\subsection{Simulation 3: $F$-test for two groups}\nIn the context of significance testing, it is a well-known relationship that the $F$-tests from a one-way ANOVA with two groups yield the same $p$-values as a two-sided, two-sample $t$-test, when $F = t^2$ is used. Accordingly, it might be an interesting question how the Replication Bayes factor for $F$-tests relates to the Replication Bayes factor for $t$-tests when used on the same data from two independent groups.\n\nFor pairs of original and replication studies with two groups of different sizes ($n_{\\mathrm{orig}} \\in \\{15; 50\\}$, $n_{\\mathrm{rep}} \\in \\{15; 30; 50; 100\\}$) and different effect sizes ($d_{\\mathrm{orig}} \\in \\{0.2; 0.4; 0.6; 0.8; 1; 2\\}$, $d_{\\mathrm{rep}} \\in \\{^1\/_{10^{5}}; 0.2; 0.4; 0.6; 0.8; 1; 2\\}$), the Replication Bayes factor for $t$-tests was calculated using the observed $t$-value and for the $F$-test using $F = t^2$.\n\nThe results are shown in Figure~\\ref{fig:bfrep-sim-tf}. As can be expected, the resulting Bayes factors are very close ($r = 0.999$ across all scenarios). What cannot easily be seen in the figure, however, is that the Replication Bayes factor for the $F$-test is about half the size of the Replication Bayes factor for the $t$-test ($\\frac{B_{r0,t}}{B_{r0,F}} = 2.211$).\n\nThis result makes intuitive sense: The $F$-statistic does not contain information about the direction of the effect and thus cannot provide the same amount of relative evidence as a Bayes factor based on the $t$-test.\n\n\\begin{figure}[tbhp]\n\t\\centering\n\t\\includegraphics[width=1.0\\columnwidth]{BFrep_sim_tF-eps-converted-to.pdf}\n\t\\caption{\\label{fig:bfrep-sim-tf}Comparison of Replication Bayes factors for $t$- and $F$-test on the same data-set with two groups. Study pairs with replication studies smaller than original studies were not simulated. Dashed grey line is equality.} \n\\end{figure}\n\n\\section{Examples}\nIn this section, the Replication Bayes factor for $F$-tests is applied in two example cases of replication studies from the ''Reproducibility Project: Psychology'' \\citep{OpenScienceCollaboration2015}. The third example presented in this section aims to show how to investigate the pattern of effects to ensure valid conclusions based on a high value of the Replication Bayes factor.\n\n\\subsection{Example 1}\nThe first example is an original study conducted by \\citet{Albarracin2008}, which was replicated as part of the ''Reproducibility Project: Psychology'' \\citep{OpenScienceCollaboration2015} by \\citet{Voracek_Sonnleitner_2016} and is available at \\url{https:\/\/osf.io\/tarp4\/}.\n\n\\citet{Albarracin2008} investigated the effect of ''action'' and ''inaction'' goals on subsequent ''motor or cognitive output''. In study 7 specifically, participants were primed with either words relating to ''action'', ''inaction'' or neutral words (control condition). Participants subsequently engaged in either an active or inactive task before they were instructed to read a text and write down their thoughts about the text. The number of listed thoughts was used as a measure for ''cognitive activity''. The experiment thus had a 3 (\\emph{Prime}: action, inaction, control) $\\times$ 2 (\\emph{Task:} active, inactive) between-subjects design.\n\nParticipants were predicted to write down more thoughts about the text when they were primed with an ''action'' goal compared to participants primed with an ''inaction'' goal. 
Furthermore, the possibility to exert activity in an active task should moderate the effect: ''satisfied action goals should yield less activity than would unsatisfied action goals''. They found the two-way interaction \\emph{Prime} $\\times$ \\emph{Task} to be significant ($F(2, 92) = 4.36$, $p = .02$, $\\eta_p^2 = 0.087$ corresponding to $f^2 = 0.095$) in a sample of 98 student participants (group sizes were not reported).\n\nThe replication by Sonnleitner and Voracek did not find the same interaction to be significant in their sample of 105 participants, $F(2, 99) = 2.532$, $p = .085$, $\\eta_p^2 = 0.049$, $f^2 = 0.051$.\n\nComparing the replication attempt to the original study using the Replication Bayes factor, assuming equally sized groups, yields $B_{\\mathrm{r}0} = 1.153$, which can be considered inconclusive as the data are nearly equally likely under both models. Based on the $p$-values, it was concluded that the study was not successfully replicated. Judging from the Bayes factor, however, it should be concluded that the replication study did not have enough participants to indicate stronger evidence in either direction. Considering the power of a frequentist test, recent recommendations suggest at least 2.5 times as many participants as in the original study one seeks to replicate \\citep{Simonsohn2015}. The Replication Bayes factor thus shows absence of evidence rather than evidence for either a failed or a successful replication.\n\nFor the main effect of \\emph{Task}, in contrast, which also was significant in the original study ($F(1, 92) = 7.57$, $p = .007$, $\\eta_p^2 = 0.076$, $f^2 = 0.082$) but not in the replication ($F(1, 99) = .107$, $p = .745$, $\\eta_p^2 \\approx f^2 < .001$), the Replication Bayes factor yields strong evidence in favor of the skeptic's model ($B_{\\mathrm{r}0} = 0.057$). This is evidence in favor of an unsuccessful replication of the main effect, since the Bayes factor shows that the data are 17 times more likely under the model implying an effect size $f^2 = 0$ than under the model informed by the original study. It should be noted, however, that this does not answer the question of whether the replication is in line with an existing, albeit smaller, effect (which the replication also did not have sufficient power to detect, see \\citealp{Simonsohn2015}).\n\n\\subsection{Example 2}\nFor the second example another replication from the ''Reproducibility Project'' is considered: The original study was conducted by \\cite{Williams2008} and investigated cues of ''spatial distance on affect and evaluation''. The replication was performed by \\citet{Joy-Gaba_Clay_Cleary_2016}. The replication data, materials and final report are available at \\url{https:\/\/osf.io\/vnsqg\/}.\n\nIn study 4 of the original paper, \\cite{Williams2008} have primed 84 participants in three different conditions. The number of participants per group was not reported. The authors hypothesized that different primes for spatial distance would affect evaluations of perceived ''closeness'' to siblings, parents and hometown. The dependent variable was an index of the ratings of those three evaluations. Hence, the study was a between-subjects design with one factor (\\emph{Prime}: Closeness, Intermediate, Distance). 
They found a significant main effect of priming on the ''index of emotional attachment to one's nuclear family and hometown'' in a one-way ANOVA ($F(2, 81) = 4.97$, $p = .009$, $\\eta_p^2 = .11$, $f^2 = .124$).\n\nThe replication by Joy-Gaba, Clay and Cleary did not find the same main effect in a sample of 125 participants ($F(2, 122) = .24$, $p = .79$, $\\eta_p^2 = .003919$, $f^2 = .00393$). Based on the $p$-values they concluded that the replication was not successful.\n\nBut how much more are the data in line with a null model when compared to the proponent's alternative? This is the answer the Replication Bayes factor can give, assuming equal group sizes: $\\text{B}_{\\text{r}0} = 0.031$. This means the data are about 32 times more likely under the model stating that the true effect size is 0 than under the model using the original study's posterior.\n\n\\subsection{Example 3}\nThe final example aims to show a caveat when using the Replication Bayes factor for $F$-tests yielding strong support for the proponent's model. As has been addressed above and could be seen in the last simulation, the $F$-statistic (and, consequently, the $f^2$ effect size measure) does not convey information about the location or direction of an effect. This is a general problem when evaluating the outcomes from ANOVA-design studies based on the test-statistic alone. Researchers need additional judgments based on post-hoc $t$-tests or qualitative consideration of interaction plots.\n\nIn order to show how to inspect the results thoroughly, an imaginary study is conducted: In an original study, 15 participants each are randomly assigned to three different conditions (45 participants in total). The true population means of the three groups are $\\mu_1 = 1.5$, $\\mu_2 = 2.2$ and $\\mu_3 = 2.9$ and standard deviation is 1 for all groups. Running a one-way ANOVA on a generated data-set yields a significant result, $F(2, 42) = 7.91$, $p = .001$, $f^2 = 0.377$. For the replication study, 30 participants are randomly sampled to the same three conditions each (thus 90 participants in total). For the generated replication data-set the ANOVA yields a significant result as well, $F(2, 87) = 7.60$, $p < .001$, $f^2 = 0.175$.\n\n\\begin{figure}[tb!]\n\t\\centering\n\t\\includegraphics[width=1.0\\columnwidth]{BFrep_Ex3_Interaction-eps-converted-to.pdf}\n\t\\caption{\\label{fig:bfrep-ex3-interaction}Summary plot for the imaginary study in Example 3. Original study has 15 participants per group with population means $\\mu = (1.5; 2.2; 2.9)$. Replication has 30 participants per group with population means in reverse order.} \n\\end{figure}\n\nThe test-statistics and effect size estimates show a significant effect in both studies. Did the replication successfully reproduce the original finding? Before calculating the Replication Bayes factor, one should inspect the data in more detail. Considering the plot in Figure~\\ref{fig:bfrep-ex3-interaction}, it is obvious that the pattern of effects is strikingly different. In fact, the replication shows the reverse pattern. Since the $F$-statistic is insensitive to this difference, the Replication Bayes factor does not provide us with this information. Yet, it does correctly indicate that the data are more in favor of an effect of the size -- though not the direction -- of the original study ($\\text{B}_{\\text{r}0} = 38.261$). Data of this kind can be generated with the short R sketch below.\n\n
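The following sketch recreates the setup of this example (the seed is arbitrary, so the simulated $F$-values will differ from those quoted above):

\\begin{verbatim}
set.seed(1)
means_orig <- c(1.5, 2.2, 2.9)          # population means, original study
y_orig <- rnorm(45, rep(means_orig, each = 15), sd = 1)
g_orig <- factor(rep(1:3, each = 15))
summary(aov(y_orig ~ g_orig))           # one-way fixed-effect ANOVA

# Replication: same means in reverse order, 30 participants per group
y_rep <- rnorm(90, rep(rev(means_orig), each = 30), sd = 1)
g_rep <- factor(rep(1:3, each = 30))
summary(aov(y_rep ~ g_rep))
\\end{verbatim}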
In the present example, the difference between groups 1 and 3 is significant in both the original study ($t(20.809) = -3.953$, $p = .001$) and the replication study ($t(56.765) = 3.6412$, $p < .001$). The difference in direction is visible in the sign of the $t$-statistic, and thus the Replication Bayes factor for $t$-tests can take this information into account ($B_{\mathrm{r}0} = 0.0015$, or the reciprocal $B_{0\mathrm{r}} = 668.15$).

Consequently, researchers investigating $F$-tests should always pay attention to the particular nature of the effect under investigation. This recommendation is not limited to the evaluation of replication studies; attending to the pattern of effects is generally a crucial step when analyzing data from ANOVA studies.

\section{Discussion}
The Replication Bayes factor for $F$-tests in fixed-effects ANOVA designs outlined in this paper is an extension of the work by \cite{Verhagen2014}. It utilizes a Bayesian perspective on replications, namely by using the results and their uncertainty from the original study in the analysis of a replication attempt. The approach outlined in this paper adapts the Replication Bayes factor from $t$-tests to $F$-tests.

In evaluating replication studies it is reasonable to use the information available from the original study. When the $p$-values of two studies are directly compared based on their significance, or when a confidence interval is used to examine whether the effect size of the original study can be rejected, this is essentially already done. The Bayesian framework allows one to do this more formally and to incorporate the uncertainty of the estimates; the latter is ignored when only $p$-values are compared. This is one of the reasons why ``vote counting'' is not recommended when evaluating a replication study.

A general criticism of Bayesian hypothesis testing with Bayes factors is the role of the prior. While proponents of Bayesian statistics generally consider incorporating previous knowledge or expectations a strength of Bayesian statistics, the perceived subjectivity of the prior's selection is troublesome for some non-Bayesians. In contrast to other ways of including the results from the original study in a Bayesian model (see below), the Replication Bayes factor introduces very little subjectivity to the analysis. The model for the Replication Bayes factor is derived directly from the assumptions of null-hypothesis significance testing.

While Bayes factors do not provide control of error rates over many repeated samplings, they do allow for a quantification of relative model evidence. That is, a Bayes factor allows statements such as ``the data are 5 times more likely under the alternative model than under the null model''. Researchers hesitant about the Bayesian approach in general might nevertheless find the Replication Bayes factor in particular a useful addition that introduces little subjectivity. After all, the evaluation of replication studies should not rely on a single method \citep{Marsman2017,Brandt2014,Anderson2016,Gilbert2016}.

The steps taken and explained in this paper can also be used to apply the logic of the Replication Bayes factor to other tests such as $\chi^2$-tests, $z$-tests, or correlations \citep[see also][Appendix]{Boekel2015}. One needs to use the appropriate test statistic and its sampling distribution under the alternative hypothesis and derive the resulting marginal likelihoods accordingly.
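For the $F$-test treated here, a minimal sketch of such a marginal likelihood computation might look as follows. It assumes the balanced fixed-effects relation $\lambda = f^2 N$, a flat prior on $f^2$, and an ad-hoc half-normal proposal distribution; all three are simplifying choices for this illustration and may differ from the implementation underlying the values reported above.
\begin{verbatim}
import numpy as np
from scipy import stats

def rep_bf_f(f_orig, df1, df2_orig, n_orig, f_rep, df2_rep, n_rep,
             draws=200_000, seed=1):
    """Self-normalized importance sampling estimate of B_r0 for F-tests.

    Simplifying assumptions: noncentrality lambda = f^2 * N (balanced
    fixed-effects ANOVA), a flat prior on f^2, and an ad-hoc half-normal
    proposal centered near the original point estimate of f^2.
    """
    rng = np.random.default_rng(seed)
    f2_hat = f_orig * df1 / df2_orig        # point estimate from original F
    proposal = stats.halfnorm(scale=2.0 * f2_hat + 0.1)
    f2 = proposal.rvs(size=draws, random_state=rng)

    # Weights approximate the original study's posterior of f^2
    # (likelihood / proposal density; the flat prior cancels).
    w = stats.ncf.pdf(f_orig, df1, df2_orig, f2 * n_orig) / proposal.pdf(f2)
    w /= w.sum()

    # Marginal likelihood of the replication F under that posterior ...
    m_rep = np.sum(w * stats.ncf.pdf(f_rep, df1, df2_rep, f2 * n_rep))
    # ... versus under the skeptic's null model (f^2 = 0, central F).
    m_null = stats.f.pdf(f_rep, df1, df2_rep)
    return m_rep / m_null

# Example 2 numbers: original F(2, 81) = 4.97 with N = 84;
# replication F(2, 122) = 0.24 with N = 125.
print(rep_bf_f(4.97, 2, 81, 84, 0.24, 122, 125))
\end{verbatim}
With the numbers from Example 2, this sketch yields a value of the same order of magnitude as the reported $B_{\mathrm{r}0} = 0.031$; exact agreement is not to be expected, since the proposal, the prior, and the noncentrality relation are simplifications.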
The use of importance sampling for estimating the marginal likelihoods allows for more general cases than the simple Monte Carlo estimate: on the one hand, it is more robust to large disagreements between the original and the replication study (see Simulation 3); on the other hand, it scales better if more parameters are introduced into the model.

\subsection{Limitations}
As explained above, Cohen's $f^2$ was chosen as the parameter of interest. This limits the application of the proposed Replication Bayes factor for $F$-tests to fixed-effects ANOVAs with approximately equal cell sizes, as the relationship between the non-centrality parameter $\lambda$ and $f^2$ is only valid in these cases.

In general, effect size measures in ANOVAs take into account the specific effects (i.e.\ each cell mean's deviation from the grand mean; \citealp{Steiger2004}) and the cell sizes. In many reported ANOVAs, however, only the total sample size along with the omnibus or interaction test is reported. Even if specific contrasts are of interest, they are not reported with the sample sizes or descriptive statistics required to calculate the specific effects.

What happens when $f^2$ is used in unbalanced designs? If a specific effect is present in a group with a larger sample size, $f^2$ will overestimate the overall effect. For studies with randomized allocation to the groups, the differences in cell sizes should be small, so the assumption of balanced cells seems warranted. In non-randomized studies, where unbalanced groups are to be expected, the Replication Bayes factor as outlined here should not be used; more elaborate alternatives, as suggested below, may be more appropriate.

The Replication Bayes factor is also limited by the information provided in the $F$-statistic: Since the $F$-statistic (and consequently $f^2$) does not convey information about the direction and location of the effect, the Replication Bayes factor cannot take the pattern of effects into account. It still gives the relative evidence regarding the size of the effect. As shown in Example 3, one way to proceed is to follow up evidence in favor of a replicated effect by investigating the pattern in post-hoc $t$-tests or contrasts. If no such tests are reported, one could use the Replication Bayes factor for the omnibus test as a quantitative indicator and rely on visual inspection to check whether the effect is indeed in the same direction. This parallels the recommended practice for analyzing data with ANOVAs.
Another way of dealing with this limitation could be to extend the model used in the Replication Bayes factor to include the descriptive statistics of all groups instead of the test statistic. This, however, would require additional assumptions and does not follow as directly from the significance tests as the method proposed here. The Bayesian alternatives mentioned below can be a starting point for such extensions of the model.

What is true for $p$-values \citep{Wasserstein2016} also holds true for Bayes factors: No single statistical parameter relieves researchers of the responsibility to carefully evaluate the different statistical results at hand. Even if all statistical parameters are in agreement, differences in methodology and design might render the statistical comparison \emph{ad absurdum}.

Last, effect size estimates are more difficult to compute for random- and mixed-effects ANOVAs.
While effect size measures such as $\omega^2$ can be calculated in these scenarios, their relationship to the non-central $F$-distribution is not as obvious. Further developments of the Replication Bayes factor might be useful in this direction.

\subsection{Alternatives to the Replication Bayes factor}
The Replication Bayes factor is only one way to evaluate a replication study in the Bayesian framework. As outlined above, its strength lies in making few assumptions and in requiring only the reported test statistics for the evaluation.

One alternative would be to model the information from the original study differently. Where the posterior of the original study is used here as a prior in the replication model, one could also use the effect size estimate and a reported confidence interval to construct a normal distribution as a prior. Statistical software such as JASP \citep{JASP2018} allows the use of different priors, including normal distributions, to calculate Bayes factors from test statistics.

Another way to use the Bayesian framework would be not to rely on Bayes factors but to use Bayesian estimation in hierarchical models, incorporating both the original and the replication study to estimate the effect size across both studies. \citet{Etz2016} have used such models to evaluate the outcomes of the ``Reproducibility Project: Psychology''; some of the methods outlined there can also be applied to single study pairs.

Recently, \citet{Ly2017} have proposed a reconceptualization of the Replication Bayes factor. The goal is to avoid the computations for the posterior-turned-prior distribution and to instead rely on ``evidence updating'': different Bayes factors can be multiplied, yielding the Replication Bayes factor $\mathrm{BF}_{10}(d_{\mathrm{rep}} | d_{\mathrm{orig}})$ \citep[p.~7]{Ly2017}:

\begin{equation*}
	\mathrm{BF}_{10}(d_{\mathrm{rep}} | d_{\mathrm{orig}}) = \frac{\mathrm{BF}_{10}(d_{\mathrm{orig}}, d_{\mathrm{rep}})}{\mathrm{BF}_{10}(d_{\mathrm{orig}})}.
\end{equation*}

This requires the computation of a Bayes factor for the original study alone, $\mathrm{BF}_{10}(d_{\mathrm{orig}})$, and a Bayes factor for the combined data set from both studies, $\mathrm{BF}_{10}(d_{\mathrm{orig}}, d_{\mathrm{rep}})$. In their pre-print, \citet{Ly2017} outline how to calculate this Bayes factor from ``evidence updating'' for $t$-tests and contingency tables. The calculation of an $F$-value representing a combined data set, however, requires the means, standard deviations, and group sizes of both the original and the replication study. These are not always reported, and the Replication Bayes factor outlined here is designed not to require them. The extension of the original concept \citep{Verhagen2014} is therefore useful when only limited information is available from a published article.

Last, the Replication Bayes factor takes the effect size estimate from the original study at face value. This is in many cases a questionable assumption, since publication bias and $p$-hacking are known to inflate effect size estimates in the published literature. For sample size planning it is relevant to account for this, e.g.\ by planning a study at least about 2.5 times as large as the original study \citep{Simonsohn2015}.
For the analysis of a new replication study of an effect that might not yet have been estimated through meta-analyses, the comparison with the originally reported effect size is useful and often the first step of an analysis. The Replication Bayes factor can then answer the question: How much more evidence in favor of an effect of this size do the data provide when compared to a null effect? A Bayes factor using a manually (i.e.\ more subjectively) chosen prior, as mentioned above, would be able to incorporate a corrected effect size. By what factor or procedure one should correct a reported effect size is an open question in meta-analytic research; \cite{GelmanEdlinFactor} has used the term ``Edlin factor'' for the factor by which published estimates should be corrected.

\subsection{Conclusion}
The Replication Bayes factor introduced by \cite{Verhagen2014} and extended herein is one index to evaluate the results of a replication attempt. It is, obviously, not able to cover all questions and pitfalls in the analysis of a replication. Instead, it is a way to formally and transparently integrate previously available information into the analysis within the Bayesian framework, and it allows a quantitative assessment of the evidential value gained. It is, furthermore, easy to apply to frequentist results, as it uses only the reported test statistics from the original and the replication study. To cover replications comprehensively, however, researchers have to use different tools depending on the question asked. No single statistical index is sufficient to globally assess the quality of a study or a theory. This is true not only for $p$-values \citep{Wasserstein2016} but also for Bayes factors.

\pagebreak 
\section*{Acknowledgements}
I thank Farid Anvari, Andr\'e Beauducel, Nicholas Coles, Peder Isager, Anne Scheel, Dani\"el Lakens, Eric-Jan Wagenmakers, and one anonymous reviewer for their constructive remarks and comments, which helped to improve the clarity and structure of the manuscript.

\section*{Conflicts of Interest}
None.

\bibliographystyle{apa}